\begin{document}
\title{{\Large {\bf Localization of an inhomogeneous discrete-time \\
quantum walk on the line}}}
\par\noindent \begin{small}
\par\noindent {\bf Abstract}. We investigate a space-inhomogeneous discrete-time quantum walk in one dimension. We show, by a path counting method, that the walk exhibits localization.
\footnote[0]{ {\it Abbr. title:} Localization of an inhomogeneous quantum walk } \footnote[0]{ {\it AMS 2000 subject classifications: } 60F05, 60G50, 82B41, 81Q99 } \footnote[0]{ {\it PACS: } 03.67.Lx, 05.40.Fb, 02.50.Cw } \footnote[0]{ {\it Keywords: } Quantum walk, localization, Hadamard walk } \end{small}
\setcounter{equation}{0} \section{Introduction} As a quantum counterpart of the classical random walk, the quantum walk (QW) has recently attracted much attention in various fields. There are two types of QWs: the discrete-time walk and the continuous-time walk. The discrete-time QW in one dimension (1D) was intensively studied by Ambainis et al. \cite{AmbainisEtAl2001}. One of the most striking properties of the 1D QW is the spreading property of the walker: the standard deviation of the position grows linearly in time, quadratically faster than that of the classical random walk. For reviews and books on QWs, see Kempe \cite{Kempe2003}, Kendon \cite{Kendon2007}, Venegas-Andraca \cite{VAndraca2008}, and Konno \cite{Konno2008b}, for example.
In the present paper we focus on the discrete-time case. The model considered here is a space-inhomogeneous two-state 1D QW. The two states correspond to the left and right chiralities defined in the next section. Let $p_{n} (0)$ denote the probability that the walker returns to the origin at time $n$. The model is said to exhibit localization if $\lim_{n \to \infty} \> p_{2n} (0) >0$. Except in a trivial case, the homogeneous two-state 1D QW does not exhibit localization; see \cite{AmbainisEtAl2001}, for example. The decay order of $p_{2n} (0)$ is closely related to recurrence. As for the recurrence properties of QWs, see \v{S}tefa\v{n}\'ak et al. \cite{StefanakEtAl2008a,StefanakEtAl2008b,StefanakEtAl2009}. Localization of the homogeneous model was shown for a three-state 1D QW in \cite{InuiEtAl2005}, a four-state 1D QW in \cite{InuiKonno2005}, and a multi-state QW on trees in \cite{ChisakiEtAl2009}. Mackay et al. \cite{MackayEtAl2002} and Tregenna et al. \cite{TregennaEtAl2003} found numerically that a homogeneous 2D QW exhibits localization. Inui et al. \cite{InuiEtAl2004} and Watabe et al. \cite{WatabeEtAl2008} confirmed the phenomenon analytically. In higher dimensions, a $d$-dimensional homogeneous tensor-product coin model does not exhibit localization \cite{StefanakEtAl2008b}. Oka et al. \cite{OkaEtAl2005} analyzed localization of a two-state QW on a semi-infinite 1D lattice, which is closely related to Landau-Zener transition dynamics. Through numerical simulations, Buerschaper and Burnett \cite{Buerschaper2004} and W\'ojcik et al. \cite{WojcikEtAl2004} reported that, as the period of the perturbation is varied, the dynamics of two-state 1D QWs ranges from dynamical localization, with spreading slower than in the classical case, to linear diffusion as in the homogeneous two-state 1D QW. Linden and Sharam \cite{LindenSharam2009} investigated a similar inhomogeneous two-state 1D QW in which the inhomogeneity is periodic in position.
They showed that, depending on the period $2k$, the QW is bounded in time for even $k$ and unbounded for odd $k$. The former case corresponds to localization and the latter to delocalization. An interesting question is whether localization emerges even for an inhomogeneous two-state 1D QW that is simpler than the previous models. The present paper gives an affirmative answer to this question. Our result could be useful for quantum information processing by controlling the spreading of the walker.
The rest of the paper is organized as follows. Section 2 gives the definition of our model. In Sect. 3, we present the main result (Theorem {\rmfamily \ref{thm1}}) of this paper. Section 4 is devoted to the proof of Proposition {\rmfamily \ref{pro0}}. In Sect. 5, we prove Theorem {\rmfamily \ref{thm1}}. Finally, we consider some related models in Sect. 6. In contrast to our model, we show that a similar simple inhomogeneous two-state 1D QW, whose limit theorem was presented in \cite{Konno2009}, does not exhibit localization. Moreover, localization does not emerge for the corresponding classical random walk either.
\section{Definition of the walk} In this section, we give the definition of the inhomogeneous two-state QW on $\mathbb{Z}$ considered here, where $\mathbb{Z}$ is the set of integers. The discrete-time QW is a quantum version of the classical random walk with an additional degree of freedom called chirality. The chirality takes values left and right, and it determines the direction of the motion of the walker. At each time step, if the walker has the left chirality, it moves one step to the left, and if it has the right chirality, it moves one step to the right. Let us define \begin{eqnarray*} \ket{L} = \left[ \begin{array}{cc} 1 \\ 0 \end{array} \right], \qquad \ket{R} = \left[ \begin{array}{cc} 0 \\ 1 \end{array} \right], \end{eqnarray*} where $L$ and $R$ refer to the left and right chirality states, respectively.
For the general setting, the time evolution of the walk is determined by a sequence of $2 \times 2$ unitary matrices, $\{ U_x : x \in \mathbb{Z} \}$, where \begin{align*} U_x = \left[ \begin{array}{cc} a_x & b_x \\ c_x & d_x \end{array} \right], \end{align*} with $a_x,b_x,c_x,d_x \in \mathbb C$, where $\mathbb{C}$ is the set of complex numbers. The subscript $x$ indicates the location. The matrices $U_x$ rotate the chirality before the displacement, which defines the dynamics of the walk. To describe the evolution of our model, we divide $U_x$ into two matrices: \begin{eqnarray*} P_x = \left[ \begin{array}{cc} a_x & b_x \\ 0 & 0 \end{array} \right], \quad Q_x= \left[ \begin{array}{cc} 0 & 0 \\ c_x & d_x \end{array} \right], \end{eqnarray*} with $U_x=P_x+Q_x$. The important point is that $P_x$ (resp. $Q_x$) represents the walker moving to the left (resp. right) from position $x$ at each time step.
For a given sequence $\{ \omega_x : x \in \mathbb{Z} \}$ with $\omega_x \in [0, 2 \pi)$, our previous paper \cite{Konno2009} treated the following $U_x$: \begin{align} U_x = U_x (\omega_x) = \frac{1}{\sqrt{2}} \left[ \begin{array}{cc} e^{i \omega_x} & 1 \\ 1 & -e^{-i \omega_x} \end{array} \right]. \label{akina} \end{align} In the present paper, for a given sequence $\{ \omega_x : x \in \mathbb{Z} \}$, we consider \begin{align} U_x = U_x (\omega_x) = \frac{1}{\sqrt{2}} \left[ \begin{array}{cc} 1 & e^{i \omega_x} \\ e^{-i \omega_x} & -1 \end{array} \right]. \label{seiko} \end{align} In particular, here we concentrate on a simple inhomogeneous model depending only on a single parameter $\omega \in [0,2\pi)$ as follows: \begin{align} U_0 = U_0 (\omega), \quad U_x = U_x (0) \quad \hbox{if} \> x \not= 0. \label{seikou} \end{align} So when $\omega \not=0$, our model is homogeneous except at the origin. If $\omega = 0$, then the model becomes homogeneous and is equivalent to the {\it Hadamard walk} determined by the Hadamard gate $U_x = U_x (0) \equiv H$: \begin{eqnarray*} H=\frac{1}{\sqrt2} \left[ \begin{array}{cc} 1 & 1 \\ 1 &-1 \end{array} \right]. \end{eqnarray*} In this paper, we take $\varphi_{\ast} = {}^T [1/\sqrt{2},i/\sqrt{2}]$ as the initial qubit state, where ${}^T$ denotes the transpose. Then the probability distribution of the Hadamard walk starting from $\varphi_{\ast}$ at the origin is symmetric.
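As a quick numerical sanity check of the definitions above (an illustration added for this version, not part of the original paper; the helper names `coin` and `split` are ours), the coin of Eq. \ref{seiko} and its $P_x$/$Q_x$ decomposition can be constructed directly:

```python
import numpy as np

def coin(omega):
    """The coin U_x(omega) = [[1, e^{i w}], [e^{-i w}, -1]] / sqrt(2)."""
    return np.array([[1, np.exp(1j * omega)],
                     [np.exp(-1j * omega), -1]]) / np.sqrt(2)

def split(U):
    """Split U into P (top row: move left) and Q (bottom row: move right)."""
    P = np.vstack([U[0], [0, 0]])
    Q = np.vstack([[0, 0], U[1]])
    return P, Q

# omega = 0 reduces to the Hadamard gate H
assert np.allclose(coin(0.0), np.array([[1, 1], [1, -1]]) / np.sqrt(2))

# the perturbed coin at the origin is unitary, and U_x = P_x + Q_x
U0 = coin(np.pi)
P0, Q0 = split(U0)
assert np.allclose(U0.conj().T @ U0, np.eye(2))
assert np.allclose(P0 + Q0, U0)
```

The check confirms unitarity of every $U_x(\omega)$ and the decomposition $U_x = P_x + Q_x$ used throughout the paper.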
Let $\Xi_{n} (l,m)$ denote the sum over all paths of length $n=l+m$ starting from the origin that consist of $l$ steps to the left and $m$ steps to the right. For example, \begin{align*} \Xi_2 (1,1) &= Q P_0 + P Q_0, \\ \Xi_4 (2,2) &= Q^2 P P_0 + P^2 Q Q_0 + Q P_0 Q P_0 + P Q_0 P Q_0 + P Q_0 Q P_0 + Q P_0 P Q_0.
\end{align*} The probability that our quantum walker is in position $x$ at time $n$ starting from the origin with $\varphi_{\ast} (={}^T [1/\sqrt{2},i/\sqrt{2}])$ is defined by \begin{align*}
P (X_{n} =x) = || \Xi_{n}(l, m) \varphi_{\ast} ||^2, \end{align*} where $n=l+m$ and $x=-l+m$. The following quantity plays a central role in this paper: \begin{align*} p_n (0) = P (X_{n} =0). \end{align*} This is the return probability at time $n$. Remark that $p_{2n+1} (0) = 0$ for $n \ge 0$. For our model with $\omega = \pi$, a direct computation gives \begin{align*} p_{2} (0) &= \frac{2}{2^2} = 0.5, \quad p_{4} (0) = \frac{10}{2^4} = 0.625, \quad p_{6} (0) = \frac{40}{2^6} = 0.625, \\ p_{8} (0) &= \frac{170}{2^8} = 0.66406 \ldots, \quad p_{10} (0) = \frac{680}{2^{10}} = 0.66406 \ldots, \quad p_{12} (0) = \frac{2600}{2^{12}} = 0.63476 \ldots.
\end{align*} In fact, as a consequence of our main result (Theorem {\rmfamily \ref{thm1}}), we have $\lim_{n \to \infty} p_{2n} (0) = (4/5)^2 = 0.64.$ Therefore the QW with $\omega = \pi$ exhibits localization. On the other hand, for the Hadamard walk case (i.e., $\omega =0$), \begin{align*} p_{2}^{(H)} (0) &= \frac{2}{2^2} = 0.5, \quad p_{4}^{(H)} (0) = \frac{2}{2^4} = 0.125, \quad p_{6}^{(H)} (0) = \frac{8}{2^6} = 0.125, \\ p_{8}^{(H)} (0) &= \frac{18}{2^8} = 0.07031 \ldots, \quad p_{10}^{(H)} (0) = \frac{72}{2^{10}} = 0.07031 \ldots, \quad p_{12}^{(H)} (0) = \frac{200}{2^{12}} = 0.04882 \ldots.
\end{align*} The superscript $(H)$ denotes the Hadamard walk. In this case, it is known that $\lim_{n \to \infty} p_{2n}^{(H)} (0) = 0$ (see Sect. 6). So the QW with $\omega = 0$ does not exhibit localization.
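The return probabilities listed above can be reproduced by a direct simulation of the walk. The sketch below (an illustration added for this version; the function names `step` and `p0` are ours, not the paper's) evolves the amplitudes position by position, applying the perturbed coin only at the origin:

```python
from math import sqrt, pi
from cmath import exp

def step(psi, omega):
    """One step: the coin at x rotates the chirality, then the left (top)
    component moves to x-1 and the right (bottom) component to x+1."""
    new = {}
    for x, (l, r) in psi.items():
        w = omega if x == 0 else 0.0            # inhomogeneity only at x = 0
        a = (l + exp(1j * w) * r) / sqrt(2)     # top row of U_x(w)
        b = (exp(-1j * w) * l - r) / sqrt(2)    # bottom row of U_x(w)
        pl = new.get(x - 1, (0j, 0j)); new[x - 1] = (pl[0] + a, pl[1])
        pr = new.get(x + 1, (0j, 0j)); new[x + 1] = (pr[0], pr[1] + b)
    return new

def p0(omega, n):
    """Return probability p_n(0) for the initial state (1, i)/sqrt(2) at 0."""
    psi = {0: (1 / sqrt(2) + 0j, 1j / sqrt(2))}
    for _ in range(n):
        psi = step(psi, omega)
    l, r = psi.get(0, (0j, 0j))
    return abs(l) ** 2 + abs(r) ** 2

# values from the tables above
assert abs(p0(pi, 2) - 0.5) < 1e-12 and abs(p0(pi, 4) - 0.625) < 1e-12
assert abs(p0(pi, 6) - 0.625) < 1e-12
assert abs(p0(0.0, 2) - 0.5) < 1e-12 and abs(p0(0.0, 4) - 0.125) < 1e-12
```

The assertions match the values $p_2(0), p_4(0), p_6(0)$ for $\omega=\pi$ and $p_2^{(H)}(0), p_4^{(H)}(0)$ for the Hadamard walk.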
\section{Our result} In this section, we present our main result on the inhomogeneous two-state 1D QW. Let \begin{align} \Psi_{2n} (0) = \left[ \begin{array}{cc} \Psi_{2n}^{(L)} (0) \\ \Psi_{2n}^{(R)} (0) \end{array} \right] = \Xi_{2n}(n, n) \varphi_{\ast} \end{align} for $n \ge 0$. This is the probability amplitude at the origin at time $2n$, where the upper (lower) component corresponds to the left (right) chirality. Remark that $\Psi_{2n+1} (0) = {}^T [\Psi_{2n+1}^{(L)} (0), \Psi_{2n+1}^{(R)} (0)] = {}^T [0,0]$ for $n \ge 0$. Let $\mathbb{P} = \{1,2, \ldots \}$. Then we have \begin{pro} \label{pro0} For $n \ge 1$, \begin{align*} \Psi_{2n} (0) &= \frac{1}{\sqrt{2}} \> \sum_{k=1}^n \sum_{\scriptstyle (a_1, \ldots , a_k) \in \mathbb{P}^k : \atop \scriptstyle a_1 + \cdots + a_k =n} \left( \prod_{j=1}^k r^{\ast}_{2 a_j-1} \right) \times \left[ \begin{array}{cc} \frac{1- \mu_{+}}{C_{+}^2} \left( \frac{\gamma_{+}}{2} \right)^k + \frac{1- \mu_{-}}{C_{-}^2} \left( \frac{\gamma_{-}}{2} \right)^k \\ - \left\{ \frac{\mu_{+}(1- \mu_{+})}{C_{+}^2} \left( \frac{\gamma_{+}}{2} \right)^k + \frac{\mu_{-}(1- \mu_{-})}{C_{-}^2} \left( \frac{\gamma_{-}}{2} \right)^k \right\} i \end{array} \right], \end{align*} where \begin{align*} \gamma_{\pm} &= - \cos \omega \pm i \sqrt{1 + \sin^2 \omega}, \quad \mu_{\pm} = \sin \omega \mp \sqrt{1 + \sin^2 \omega}, \\ C_{\pm} &= \sqrt{2 \left\{ (1+\sin^2 \omega) \mp \sin \omega \sqrt{1 + \sin^2 \omega} \right\} }, \\ \sum_{n=1}^{\infty} \> r_{n}^{\ast} z^n &= \frac{-1 - z^2 + \sqrt{1 + z^4}}{z}. \end{align*} \end{pro} The proof is given in Sect. 4. By using this proposition, the following main result of this paper can be obtained. As for the proof of the theorem, see Sect. 5. \begin{thm} \label{thm1} For our inhomogeneous two-state 1D QW with the parameter $\omega \in [0, 2 \pi)$ defined by Eqs. 
\ref{seiko} and \ref{seikou}, we have \begin{align*} \lim_{n \to \infty} \> p_{2n} (0) = \left( \frac{2(1-\cos \omega)}{3 - 2 \cos \omega} \right)^2 =: c (\omega), \end{align*} where $p_{2n} (0)$ is the return probability at the origin at time $2n$. \end{thm} We present some properties of the above limit $c (\omega)$ (see Fig. 1): (i) $c (\omega) = c (2 \pi - \omega)$; (ii) $c (\omega)$ is strictly increasing in $\omega \in [0, \pi]$; (iii) $c (0) = 0 \le c (\omega) \le c(\pi)=(4/5)^2$ for any $\omega \in [0, \pi]$. Therefore if the model is inhomogeneous, i.e., $\omega \in (0, 2 \pi)$, then it exhibits localization, i.e., $ c (\omega) > 0.$ When $\omega$ is uniformly distributed on $[0, 2 \pi)$, we see that $E[c (\cdot)] = (25 - 7 \sqrt{5})/25 = 0.3739 \ldots$, where $E[c (\cdot)]$ denotes the expectation of $c (\omega)$. As we will discuss in the last section, for another inhomogeneous two-state 1D QW, defined by Eq. \ref{akina}, we have $\lim_{n \to \infty} \> p_{2n} (0) =0$. That is, that QW does not exhibit localization.
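Properties (i)--(iii) and the stated expectation can be checked numerically (a verification sketch added for this version, not from the paper; the helper name `c` mirrors the theorem's notation):

```python
import math

def c(omega):
    """The limit of p_{2n}(0) from Theorem 1."""
    return (2 * (1 - math.cos(omega)) / (3 - 2 * math.cos(omega))) ** 2

assert c(0.0) == 0.0                          # Hadamard case: no localization
assert abs(c(math.pi) - 0.64) < 1e-14         # maximal value (4/5)^2

# (i) symmetry c(w) = c(2*pi - w) and (ii) monotonicity on [0, pi]
ws = [k * math.pi / 1000 for k in range(1001)]
assert all(abs(c(w) - c(2 * math.pi - w)) < 1e-12 for w in ws)
assert all(c(a) < c(b) for a, b in zip(ws, ws[1:]))

# average over uniform omega matches (25 - 7*sqrt(5))/25 = 0.3739...
n = 20000
avg = sum(c(2 * math.pi * k / n) for k in range(n)) / n
assert abs(avg - (25 - 7 * math.sqrt(5)) / 25) < 1e-9
```

The equally spaced Riemann sum converges very fast here because the integrand is smooth and periodic, so the agreement with the closed form $(25-7\sqrt{5})/25$ is essentially to machine precision.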
\par \ \par \begin{figure}
\caption{The plot of $c(\omega)$}
\label{Figure1}
\end{figure}
\section{Proof of Proposition {\rmfamily \ref{pro0}}} In this section, we prove Proposition {\rmfamily \ref{pro0}} by using a path counting approach. To do so, we first consider the Hadamard walk starting from location $m \> (\ge 1)$ on $\mathbb{Z}_{+} = \{ 0, 1, 2, \ldots \}$ with an absorbing boundary at $0$ (see Ambainis et al. \cite{AmbainisEtAl2001} for more details, for example). In this model, $P_0$ and $Q_0$ do not appear, since we consider only $\{ U_x \equiv H : x \ge 1\}$. Therefore the definition of the walk yields \begin{align*} U_x = H = \frac{1}{\sqrt{2}} \left[ \begin{array}{cc} 1 & 1 \\ 1 & -1 \end{array} \right], \end{align*} for $x \ge 1$. Then \begin{align*} P_x = P = \frac{1}{\sqrt{2}} \left[ \begin{array}{cc} 1 & 1 \\ 0 & 0 \end{array} \right], \quad Q_x = Q = \frac{1}{\sqrt{2}} \left[ \begin{array}{cc} 0 & 0 \\ 1 & -1 \end{array} \right] \qquad (x \ge 1). \end{align*} Let $\Xi^{(\infty,m)} _n$ be the sum over possible paths for which the particle first hits 0 at time $n$ starting from $m$. For example, \begin{align*} \Xi^{(\infty,1)} _5 = P^2 Q P Q + P^3 Q^2. \end{align*} We introduce $R$ and $S$ as follows: \begin{align*} R = \frac{1}{\sqrt{2}} \left[ \begin{array}{cc} 1 & -1 \\ 0 & 0 \end{array} \right], \quad S= \frac{1}{\sqrt{2}} \left[ \begin{array}{cc} 0 & 0 \\ 1 & 1 \end{array} \right]. \end{align*}
We should remark that $P, Q, R$ and $S$ form an orthonormal basis of the vector space of complex $2 \times 2$ matrices with respect to the trace inner product $\langle A | B \rangle = $ tr$(A^{\ast}B)$, where $\ast$ means the adjoint operator. Therefore $\Xi^{(\infty,m)} _n$ can be written as \begin{align*} \Xi^{(\infty,m)} _n = p^{(\infty,m)} _n P + q^{(\infty,m)} _n Q + r^{(\infty,m)} _n R + s^{(\infty,m)} _n S . \end{align*} Noting the definition of $\Xi^{(\infty,m)} _n$, we see that for $m \ge 1$, \begin{align*} \Xi^{(\infty,m)} _n = \Xi^{(\infty,m-1)} _{n-1} P + \Xi^{(\infty,m+1)} _{n-1} Q.\end{align*} Then we have \begin{align*} p^{(\infty,m)} _n &= \frac{1}{\sqrt{2}} \> p^{(\infty,m-1)} _{n-1} + \frac{1}{\sqrt{2}} \> r^{(\infty,m-1)} _{n-1}, \quad q^{(\infty,m)} _n = - \frac{1}{\sqrt{2}} \> q^{(\infty,m+1)} _{n-1} + \frac{1}{\sqrt{2}} \> s^{(\infty,m+1)} _{n-1}, \\ r^{(\infty,m)} _n &= \frac{1}{\sqrt{2}} \> p^{(\infty,m+1)} _{n-1} - \frac{1}{\sqrt{2}} \> r^{(\infty,m+1)} _{n-1}, \quad s^{(\infty,m)} _n = \frac{1}{\sqrt{2}} \> q^{(\infty,m-1)} _{n-1} + \frac{1}{\sqrt{2}} \> s^{(\infty,m-1)} _{n-1}. \end{align*} From the definition of $\Xi^{(\infty,m)} _n$, it is easily shown that there exist only two types of paths, that is, $P \ldots P$ and $P \ldots Q$. Therefore we see that $q^{(\infty,m)} _n=s^{(\infty,m)} _n=0$ for $n \ge 1$. We introduce generating functions of $p^{(\infty,m)} _n$ and $r^{(\infty,m)} _n$ as follows: \begin{align*} p^{(\infty,m)} (z) = \sum_{n=1} ^{\infty} p^{(\infty,m)} _n z^n, \quad r^{(\infty,m)} (z) = \sum_{n=1} ^{\infty} r^{(\infty,m)} _n z^n. \end{align*} Then we get \begin{align*} p^{(\infty,m)} (z) &= \frac{z}{\sqrt{2}} \> p^{(\infty,m-1)} (z) + \frac{z}{\sqrt{2}} \> r^{(\infty,m-1)} (z), \\ r^{(\infty,m)} (z) &= \frac{z}{\sqrt{2}} \> p^{(\infty,m+1)} (z) - \frac{z}{\sqrt{2}} \> r^{(\infty,m+1)} (z). 
\end{align*} Solving these, we see that both $p^{(\infty,m)} (z)$ and $r^{(\infty,m)} (z)$ satisfy the same recurrence: \begin{align*} p^{(\infty,m+2)} (z) + \sqrt{2} \> \left( {1 \over z} - z \right) p^{(\infty,m+1)} (z) - p^{(\infty,m)} (z) &= 0, \\ r^{(\infty,m+2)} (z) + \sqrt{2} \> \left( {1 \over z} - z \right) r^{(\infty,m+1)} (z) - r^{(\infty,m)} (z) &= 0. \end{align*} From the characteristic equations with respect to the above recurrences, we have the same roots: \begin{align*} \lambda_\pm=\frac{-1 + z^2 \pm\sqrt{1+z^4}}{\sqrt{2}z}. \end{align*} The definition of $\Xi^{(\infty,1)} _n$ gives $p^{(\infty,1)}_n = 0 \> (n \ge 2)$ and $p_1 ^{(\infty,1)} =1$. So we have $p^{(\infty,1)} (z) = z$. Moreover noting $\lim_{m \to \infty} p^{(\infty,m)} (z) < \infty$, the following explicit form can be obtained: \begin{align*} p^{(\infty,m)} (z) = z \lambda_ +^{m-1}, \quad r^{(\infty,m)} (z) = \frac{-1+\sqrt{1+z^4}}{z} \lambda_+^{m-1}. \end{align*} Therefore for $m=1$, \begin{align*} r^{(\infty,1)} (z) = \frac{-1+\sqrt{1+ z^4}}{z}. \end{align*} Next we consider the Hadamard walk starting from location $m (\le -1)$ on $\mathbb{Z}_{-} = \{ 0, -1, -2, \ldots \}$ with an absorbing boundary at $0$. Let $\Xi^{(-\infty,m)} _n$ be the sum over possible paths for which the particle first hits 0 at time $n$ starting from $m \> (\le -1)$. Similarly we see that \begin{align*} q^{(-\infty,m)} (z) = z \lambda_- ^{m+1}, \quad s^{(-\infty,m)} (z) = \frac{1-\sqrt{1+ z^4}}{z} \lambda_-^{m+1}. \end{align*} So for $m=-1$, \begin{align*} s^{(- \infty, -1)} (z) = \frac{1 - \sqrt{1 + z^4}}{z}. \end{align*} Remark that $r_{n}^{(\infty,1)} + s_{n}^{(-\infty,-1)} = 0$ for $n \ge 1$. Let $\Xi_n^{+} = \Xi_n^{(\infty, 1)} Q_0$ and $\Xi_n^{-} = \Xi_n^{(-\infty, -1)} P_0$, where \begin{align*} P_0 = \frac{1}{\sqrt{2}} \left[ \begin{array}{cc} 1 & e^{i \omega} \\ 0 & 0 \end{array} \right], \quad Q_0 = \frac{1}{\sqrt{2}} \left[ \begin{array}{cc} 0 & 0 \\ e^{-i \omega} & -1 \end{array} \right]. 
\end{align*} That is, $\Xi_n^{+}$ (resp. $\Xi_n^{-}$) is the sum of all paths for which the particle first hits 0 at time $n$ starting from the origin restricted in the region $\mathbb{Z}_{+}$ (resp. $\mathbb{Z}_{-}$). Therefore we obtain \begin{lem} \label{lem1} (i) If $n \ge 4$ and $n$ is even, then \begin{align*} \Xi_n^{+}
= r^{(\infty,1)}_{n-1} \> R Q_0 = \frac{r^{(\infty,1)}_{n-1}}{2} \left[ \begin{array}{cc} -e^{-i \omega} & 1 \\ 0 & 0 \end{array} \right],
\qquad \Xi_n^{-}
= s^{(-\infty,-1)}_{n-1} \> S P_0 = \frac{s^{(-\infty,-1)}_{n-1}}{2} \left[ \begin{array}{cc} 0 & 0 \\ 1 & e^{i \omega} \end{array} \right], \end{align*} where \begin{align*} \sum_{n=1}^{\infty} \> r_{n}^{(\infty,1)} z^n = \frac{-1 + \sqrt{1 + z^4}}{z}, \quad \sum_{n=1}^{\infty} \> s_{n}^{(-\infty,-1)} z^n = \frac{1 - \sqrt{1 + z^4}}{z}. \end{align*} (ii) \begin{align*} \Xi_2^{+} = P Q_0 = \frac{-1}{2} \left[ \begin{array}{cc} -e^{-i \omega} & 1 \\ 0 & 0 \end{array} \right], \qquad \Xi_2^{-} = Q P_0 = \frac{1}{2} \left[ \begin{array}{cc} 0 & 0 \\ 1 & e^{i \omega} \end{array} \right]. \end{align*} (iii) If $n$ is odd, then \begin{align*} \Xi_n^{+} = \Xi_n^{-} = \left[ \begin{array}{cc} 0 & 0 \\ 0 & 0 \end{array} \right]. \end{align*} \end{lem} Put $\Xi_n^{\ast} = \Xi_n^{+} + \Xi_n^{-}$. From this lemma and $s_{n}^{(-\infty,-1)} = - r_{n}^{(\infty,1)}$ for $n \ge 1$, we have \begin{align*} \Xi^{\ast}_n = \frac{r^{\ast}_{n-1}}{2} \> \left[ \begin{array}{cc} - e^{-i \omega} & 1 \\ -1 & - e^{i \omega} \end{array} \right], \end{align*} where \[ r_n^{\ast} = \left\{ \begin{array}{cl} \displaystyle{(-1)^{m-1} \> \frac{(2m-2)!}{2^{2m-1} (m-1)! m!}} & \mbox{if $n=4m-1$ and $m \ge 1$} \\ 0 & \mbox{if $n \ge 2$ and $n \not= 4m-1$ for all $m \ge 1$} \\ -1 & \mbox{if $n =1$} \end{array} \right. \] In fact, \begin{align*} r_1^{\ast} = -1, \> r_2^{\ast} = 0, \> r_3^{\ast} = 1/2, \> r_4^{\ast} = r_5^{\ast} = r_6^{\ast} = 0, \> r_7^{\ast} = -1/8, \> r_8^{\ast} = r_9^{\ast} = r_{10}^{\ast} = 0, \ldots. \end{align*} Then the generating function of $r_{n}^{\ast}$ is as follows: \begin{align*} \sum_{n=1}^{\infty} \> r_{n}^{\ast} z^n = \frac{-1 - z^2 + \sqrt{1 + z^4}}{z}.
\end{align*} The definition of $\Xi^{\ast}_{n}$ yields \begin{align*} \Psi_{2n} (0) = \sum_{k=1}^n \sum_{\scriptstyle (a_1, \ldots , a_k) \in \mathbb{P}^k : \atop \scriptstyle a_1 + \cdots + a_k =n} \left( \prod_{j=1}^k \Xi^{\ast}_{2 a_j} \right) \> \varphi_{\ast}, \end{align*} where $\mathbb{P} = \{1,2, \ldots \}.$ Then a little algebra gives \begin{align*} \left[ \begin{array}{cc} - e^{-i \omega} & 1 \\ -1 & - e^{i \omega} \end{array} \right]^k \> \frac{1}{\sqrt{2}} \left[ \begin{array}{cc} 1 \\ i \end{array} \right] = \frac{1}{\sqrt{2}} \left[ \begin{array}{cc} \frac{1- \mu_{+}}{C_{+}^2} \gamma_{+}^k + \frac{1- \mu_{-}}{C_{-}^2} \gamma_{-}^k \\ - \left\{ \frac{\mu_{+}(1- \mu_{+})}{C_{+}^2} \gamma_{+}^k + \frac{\mu_{-}(1- \mu_{-})}{C_{-}^2} \gamma_{-}^k \right\} i \end{array} \right], \end{align*} where \begin{align*} \gamma_{\pm} &= - \cos \omega \pm i \sqrt{1 + \sin^2 \omega}, \quad \mu_{\pm} = \sin \omega \mp \sqrt{1 + \sin^2 \omega}, \\ C_{\pm} &= \sqrt{2 \left\{ (1+\sin^2 \omega) \mp \sin \omega \sqrt{1 + \sin^2 \omega} \right\} }. \end{align*} Noting that \begin{align*} \left( \prod_{j=1}^k \Xi^{\ast}_{2 a_j} \right) \> \varphi_{\ast} = \left( \prod_{j=1}^k r^{\ast}_{2 a_j-1} \right) \> \frac{1}{2^k} \> \left[ \begin{array}{cc} - e^{-i \omega} & 1 \\ -1 & - e^{i \omega} \end{array} \right]^k \> \frac{1}{\sqrt{2}} \left[ \begin{array}{cc} 1 \\ i \end{array} \right], \end{align*} we have the desired conclusion.
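The coefficients $r_n^{\ast}$ can be checked with exact rational arithmetic by expanding the generating function $(-1 - z^2 + \sqrt{1+z^4})/z$ through the binomial series (a verification sketch added for this version, not from the paper; the helper `r_star` is ours):

```python
from fractions import Fraction
from math import factorial

def r_star(n):
    """Coefficient of z^n in (-1 - z^2 + sqrt(1 + z^4))/z, computed from the
    binomial series sqrt(1 + w) = sum_m C(1/2, m) w^m with w = z^4."""
    if n == 1:
        return Fraction(-1)   # the -z^2/z term (the +1 of the series cancels the -1)
    if n < 1 or n % 4 != 3:
        return Fraction(0)
    m = (n + 1) // 4          # n = 4m - 1 with m >= 1
    b = Fraction(1)           # running value of the binomial C(1/2, m)
    for j in range(m):
        b = b * (Fraction(1, 2) - j) / (j + 1)
    return b

# the first coefficients listed in the text: -1, 0, 1/2, ..., -1/8, ...
assert [r_star(n) for n in (1, 2, 3, 7)] == [-1, 0, Fraction(1, 2), Fraction(-1, 8)]

# the nonzero tail agrees with (-1)^{m-1} (2m-2)! / (2^{2m-1} (m-1)! m!)
for m in range(1, 8):
    closed = Fraction((-1) ** (m - 1) * factorial(2 * m - 2),
                      2 ** (2 * m - 1) * factorial(m - 1) * factorial(m))
    assert r_star(4 * m - 1) == closed
```

The expansion reproduces the listed values $r_1^{\ast}=-1$, $r_3^{\ast}=1/2$, $r_7^{\ast}=-1/8$ exactly.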
\section{Proof of Theorem {\rmfamily \ref{thm1}}} In this section, we prove our main result, i.e., Theorem {\rmfamily \ref{thm1}}. By Proposition {\rmfamily \ref{pro0}}, we will compute generating function of $\Psi_{n} ^{(L)} (0)$. Put $x_n = r^{\ast}_{2n-1}$ and $u_{\pm}=\gamma_{\pm}/2.$ Then we see that \begin{align*} &\sum_{n=1}^{\infty} \Psi_{2n} ^{(L)} (0) z^{2n} \\ & = \frac{1}{\sqrt{2}} \> \left[ \frac{1- \mu_{+}}{C_{+}^2} \> \sum_{n=1}^{\infty} \Biggl\{ \sum_{k=1}^n \sum_{\scriptstyle (a_1, \ldots , a_k) \in \mathbb{P}^k : \atop \scriptstyle a_1 + \cdots + a_k =n} \left( \prod_{j=1}^k x_{a_j} \right) u_+^k \Biggr\} z^{2n} \right. \\ &\left. \qquad \qquad \qquad + \frac{1- \mu_{-}}{C_{-}^2} \> \sum_{n=1}^{\infty} \Biggl\{ \sum_{k=1}^n \sum_{\scriptstyle (a_1, \ldots , a_k) \in \mathbb{P}^k : \atop \scriptstyle a_1 + \cdots + a_k =n} \left( \prod_{j=1}^k x_{a_j} \right) u_-^k \Biggr\} z^{2n} \right] \\ &= \frac{1}{\sqrt{2}} \> \left[ \frac{1- \mu_{+}}{C_{+}^2} \> \sum_{k=1}^{\infty} \Biggl\{ \sum_{n=k}^{\infty} \sum_{\scriptstyle (a_1, \ldots , a_k) \in \mathbb{P}^k : \atop \scriptstyle a_1 + \cdots + a_k =n} \left( \prod_{j=1}^k x_{a_j} \right) z^{2n} \Biggr\} u_+^k \right. \\ &\left. \qquad \qquad \qquad + \frac{1- \mu_{-}}{C_{-}^2} \> \sum_{k=1}^{\infty} \Biggl\{ \sum_{n=k}^{\infty} \sum_{\scriptstyle (a_1, \ldots , a_k) \in \mathbb{P}^k : \atop \scriptstyle a_1 + \cdots + a_k =n} \left( \prod_{j=1}^k x_{a_j} \right) z^{2n} \Biggr\} u_-^k \right] \\ &= \frac{1}{\sqrt{2}} \> \left[ \frac{1- \mu_{+}}{C_{+}^2} \> \sum_{k=1}^{\infty} \left\{ (-1 - z^2 + \sqrt{1 + z^4}) u_+ \right\}^k + \frac{1- \mu_{-}}{C_{-}^2} \> \sum_{k=1}^{\infty} \left\{ (-1 - z^2 + \sqrt{1 + z^4}) u_- \right\}^k \right] \\ &= \frac{1}{\sqrt{2}} \> \left\{ \frac{1- \mu_{+}}{C_{+}^2} \> \frac{ (-1 - z^2 + \sqrt{1 + z^4}) u_+}{1 - (-1 - z^2 + \sqrt{1 + z^4}) u_+} + \frac{1- \mu_{-}}{C_{-}^2} \> \frac{ (-1 - z^2 + \sqrt{1 + z^4}) u_-}{1 - (-1 - z^2 + \sqrt{1 + z^4}) u_-} \right\}. 
\end{align*} The first equality comes from Proposition {\rmfamily \ref{pro0}}. As for the third equality, we should remark that for $k=2$, \begin{align*} \sum_{n=2}^{\infty} \sum_{\scriptstyle (a_1, a_2) \in \mathbb{P}^2 : \atop \scriptstyle a_1 + a_2 =n} x_{a_1} x_{a_2} z^{2n} &= \left( \sum_{n=1}^{\infty} r_{2n-1}^{\ast} z^{2n-1} \right)^2 z^2 = \left( \sum_{n=1}^{\infty} r_{n}^{\ast} z^{n} \right)^2 z^2 \\ &= \left( \frac{-1 - z^2 + \sqrt{1 + z^4}}{z} \right)^2 \> z^2 = (-1 - z^2 + \sqrt{1 + z^4})^2. \end{align*} In the similar fashion, for general $k \ge 1$, we have \begin{align*} \sum_{n=k}^{\infty} \sum_{\scriptstyle (a_1, \ldots , a_k) \in \mathbb{P}^k : \atop \scriptstyle a_1 + \cdots + a_k =n} \left( \prod_{j=1}^k x_{a_j} \right) z^{2n} = (-1 - z^2 + \sqrt{1 + z^4})^k. \end{align*} Noting that the initial state $\Psi_{0} ^{(L)} (0) =1/\sqrt{2}$ and \begin{align*} \frac{1- \mu_{+}}{C_{+}^2} + \frac{1- \mu_{-}}{C_{-}^2} = 1, \end{align*} we obtain \begin{align*} \sum_{n=0}^{\infty} \Psi_{n} ^{(L)} (0) z^{n} = \frac{1}{\sqrt{2}} \> \left( \frac{1- \mu_{+}}{C_{+}^2} \> \frac{1}{1 - Z u_+} + \frac{1- \mu_{-}}{C_{-}^2} \> \frac{1}{1 - Z u_-} \right), \end{align*} where $Z = -1 - z^2 + \sqrt{1 + z^4}.$ Next we consider generating function of $\Psi_{n} ^{(R)} (0)$. From the initial state $\Psi_{0} ^{(R)} (0) =i/\sqrt{2}$ and \begin{align*} \frac{\mu_{+}(1- \mu_{+})}{C_{+}^2} + \frac{\mu_{-}(1- \mu_{-})}{C_{-}^2} = -1, \end{align*} we similarly get \begin{align*} \sum_{n=0}^{\infty} \Psi_{n} ^{(R)} (0) z^{n} = \frac{-i}{\sqrt{2}} \> \left\{ \frac{\mu_+(1- \mu_{+})}{C_{+}^2} \> \frac{1}{1 - Z u_+} + \frac{\mu_-(1- \mu_{-})}{C_{-}^2} \> \frac{1}{1 - Z u_-} \right\}. 
\end{align*} Therefore we have \begin{align*} \sum_{n=0}^{\infty} \Psi_n ^{(L,\Re)} (0) z^n &= \sum_{n=0}^{\infty} \Psi_n ^{(R,\Im)} (0) z^n = \frac{2+ Z \cos \omega}{\sqrt{2} (2+ 2 Z \cos \omega + Z^2)}, \\ \sum_{n=0}^{\infty} \Psi_n ^{(L,\Im)} (0) z^n &= \frac{(1 + \sin \omega) Z}{\sqrt{2} (2+ 2 Z \cos \omega + Z^2)}, \quad \sum_{n=0}^{\infty} \Psi_n ^{(R,\Re)} (0) z^n = - \frac{(1 - \sin \omega) Z}{\sqrt{2} (2+ 2 Z \cos \omega + Z^2)}, \end{align*} where $\Psi_n ^{(A,\Re)} (0)$ (resp. $\Psi_n ^{(A,\Im)} (0)$) is the real (resp. imaginary) part of $\Psi_n ^{(A)} (0)$ for $A= L, R$. A direct computation gives \begin{align*} \sum_{n=0}^{\infty} \Psi_n ^{(L,\Re)} (0) z^n &= \sum_{n=0}^{\infty} \Psi_n ^{(R,\Im)} (0) z^n \\ &= \frac{4 - 3 \cos \omega + 2(1- \cos \omega)^2 z^2 + (2 - \cos \omega) z^4 + (2- \cos \omega) (1 + z^2) \sqrt{1+ z^4}}{2 \sqrt{2} \> \left\{ 3 - 2 \cos \omega + 2(1- \cos \omega)^2 z^2 + (3 - 2 \cos \omega) z^4 \right\}}, \\ \sum_{n=0}^{\infty} \Psi_n ^{(L,\Im)} (0) z^n &= - \frac{(1 + \sin \omega) \left\{ 1 + 2(1- \cos \omega) z^2 + z^4 + (-1 + z^2) \sqrt{1+ z^4} \right\}}{2 \sqrt{2} \> \left\{ 3 - 2 \cos \omega + 2(1- \cos \omega)^2 z^2 + (3 - 2 \cos \omega) z^4 \right\}}, \\ \sum_{n=0}^{\infty} \Psi_n ^{(R,\Re)} (0) z^n &= \frac{(1 - \sin \omega) \left\{ 1 + 2(1- \cos \omega) z^2 + z^4 + (-1 + z^2) \sqrt{1+ z^4} \right\}}{2 \sqrt{2} \> \left\{ 3 - 2 \cos \omega + 2(1- \cos \omega)^2 z^2 + (3 - 2 \cos \omega) z^4 \right\}}. 
\end{align*} Then we obtain \begin{align*} \Psi_{2n} ^{(L,\Re)} (0) &= \Psi_{2n} ^{(R,\Im)} (0) \sim \frac{\sqrt{2} (1 - \cos \omega)}{3 - 2 \cos \omega} \> \cos (n \theta_0), \\ \Psi_{2n} ^{(L,\Im)} (0) &\sim - \frac{\sqrt{2} (1 - \cos \omega)(1 + \sin \omega)}{(3 - 2 \cos \omega) \sqrt{1 + \sin^2 \omega}} \> \sin (n \theta_0), \\ \Psi_{2n} ^{(R,\Re)} (0) &\sim \frac{\sqrt{2} (1 - \cos \omega)(1 - \sin \omega)}{(3 - 2 \cos \omega) \sqrt{1 + \sin^2 \omega}} \> \sin (n \theta_0), \end{align*} where $\sin \theta_0 = (2 - \cos \omega) \sqrt{1 + \sin^2 \omega}/(3 \cos \omega -2), \> \cos \theta_0 = -(1- \cos \omega)^2/(3 \cos \omega -2)$ and $f(n) \sim g(n)$ means $f(n)/g(n) \to 1$ as $n \to \infty$. Concerning the above derivation, see pp.264-265 of \cite{Flajolet2009}, for example. The definition of $p_{2n} (0)$ gives \begin{align*}
p_{2n} (0) = |\Psi_{2n} ^{(L,\Re)} (0)|^2 + |\Psi_{2n} ^{(L,\Im)} (0)|^2 + |\Psi_{2n} ^{(R,\Re)} (0)|^2 + |\Psi_{2n} ^{(R,\Im)} (0)|^2, \end{align*} so the proof of Theorem {\rmfamily \ref{thm1}} is complete.
\section{Discussion} In this final section, we consider some relations between our model and other related ones. For the Hadamard walk (homogeneous model), a similar argument yields \begin{align*} \sum_{n=0}^{\infty} \Psi_n ^{(L,\Re)} (0) z^n &= \sum_{n=0}^{\infty} \Psi_n ^{(R,\Im)} (0) z^n = \frac{1}{2 \sqrt{2}} \> \left( 1 + \frac{1+z^2}{\sqrt{1+z^4}} \right), \\ \sum_{n=0}^{\infty} \Psi_n ^{(L,\Im)} (0) z^n &= - \sum_{n=0}^{\infty} \Psi_n ^{(R,\Re)} (0) z^n = - \frac{1}{2 \sqrt{2}} \> \left( 1 + \frac{-1+z^2}{\sqrt{1+z^4}} \right). \end{align*} Therefore we have \begin{align*} p_{2n} ^{(H)} (0) \sim \frac{1}{\pi n}.
\end{align*} As for this result, see \cite{AmbainisEtAl2001}, for example. Then $\lim_{n \to \infty} p_{2n}^{(H)} (0) = 0.$ Moreover, using Proposition 4.3 of \cite{Konno2009}, we obtain $p_{2n}^{\ast} (0) = p_{2n}^{(H)} (0)$. Here $p_{2n}^{\ast} (0)$ is the return probability at the origin at time $2n$ for another inhomogeneous model, defined by Eq. \ref{akina}, which was studied in \cite{Konno2009}. Therefore this QW also has the same decay order as the homogeneous walk, i.e., the Hadamard walk: \begin{align*} p_{2n}^{\ast} (0) \sim \frac{1}{\pi n}. \end{align*} So $\lim_{n \to \infty} p_{2n}^{\ast} (0) = 0$, and this QW does not exhibit localization. This is in great contrast to our model.
For the inhomogeneous classical random walk starting from the origin on $\mathbb{Z}$, we similarly get \begin{align} f^{(c)} (z) = \sum_{n=0}^{\infty} p_n ^{(c)} (0) z^n = \left\{ 1 - \left( \frac{p_0}{p} + \frac{q_0}{q} \right) \> \frac{1 - \sqrt{1-4 p q z^2}}{2} \right\}^{-1}, \label{semisemi} \end{align} where $p_{n}^{(c)} (0)$ is the return probability at time $n$ for the classical walk. In this model, a walker at location $x$ moves one step to the left with probability $p_x$ and one step to the right with probability $q_x$, where $p_x + q_x =1$ for any $x \in \mathbb{Z}$ and $p_x = p, \> q_x = q$ for $x \in \mathbb{Z} \setminus \{0\}.$ From Eq. \ref{semisemi}, we have \begin{align*} p_{2n} ^{(c)} (0) \sim \frac{2}{\frac{p_0}{p} + \frac{q_0}{q}} \> \frac{(4pq)^n}{\sqrt{\pi n}}. \end{align*} Therefore $\lim_{n \to \infty} p_{2n}^{(c)} (0) = 0$ and localization does not occur. If $p \not= q$, then $p_{2n}^{(c)} (0)$ decays exponentially. If $p=q=1/2$, then \begin{align*} p_{2n} ^{(c)} (0) \sim \frac{1}{\sqrt{\pi n}}. \end{align*} The result for the classical walk is thus also in contrast to that for our model. \par \ \par\noindent {\bf Acknowledgment.} The author thanks T. Machida and E. Segawa for helpful discussions and comments. This work was partially supported by the Grant-in-Aid for Scientific Research (C) of the Japan Society for the Promotion of Science (Grant No. 21540118). \par \ \par
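As a numerical aside to the classical comparison above (an illustration added for this version, not part of the paper; the helper names are ours), the generating function $f^{(c)}(z)$ can be cross-checked against direct evolution of the inhomogeneous classical walk with exact rational arithmetic:

```python
from fractions import Fraction

def dp_return(p0, q0, p, q, nmax):
    """p_n^{(c)}(0), n = 0..nmax, by direct evolution of the classical walk:
    move left with prob p_0 (resp. p) at the origin (resp. elsewhere)."""
    dist = {0: Fraction(1)}
    probs = [Fraction(1)]
    for _ in range(nmax):
        new = {}
        for x, w in dist.items():
            left, right = (p0, q0) if x == 0 else (p, q)
            new[x - 1] = new.get(x - 1, Fraction(0)) + w * left
            new[x + 1] = new.get(x + 1, Fraction(0)) + w * right
        dist = new
        probs.append(dist.get(0, Fraction(0)))
    return probs

def gf_return(p0, q0, p, q, nmax):
    """Taylor coefficients of f^{(c)}(z) up to z^nmax."""
    # g(z) = (1 - sqrt(1 - 4pq z^2))/2 as a truncated binomial series
    g = [Fraction(0)] * (nmax + 1)
    b = Fraction(1)                               # C(1/2, k)
    for k in range(nmax // 2 + 1):
        if k >= 1:
            g[2 * k] = -b * (-4 * p * q) ** k / 2
        b = b * (Fraction(1, 2) - k) / (k + 1)
    alpha = p0 / p + q0 / q
    # f = 1/(1 - alpha g) = sum_j (alpha g)^j; g starts at z^2, so this truncates
    f = [Fraction(0)] * (nmax + 1)
    f[0] = Fraction(1)
    term = list(f)
    for _ in range(nmax // 2):
        new = [Fraction(0)] * (nmax + 1)
        for i, t in enumerate(term):
            if t:
                for j in range(2, nmax - i + 1, 2):
                    new[i + j] += t * alpha * g[j]
        term = new
        for i in range(nmax + 1):
            f[i] += term[i]
    return f

half = Fraction(1, 2)
# an inhomogeneous example: p = 2/5, q = 3/5 away from 0, p_0 = q_0 = 1/2
assert dp_return(half, half, Fraction(2, 5), Fraction(3, 5), 12) == \
       gf_return(half, half, Fraction(2, 5), Fraction(3, 5), 12)
# homogeneous symmetric case: p_{2n}^{(c)}(0) = binom(2n, n)/4^n
sym = dp_return(half, half, half, half, 8)
assert sym[2] == half and sym[4] == Fraction(3, 8) and sym[6] == Fraction(5, 16)
```

The exact agreement of the two expansions up to $z^{12}$ illustrates Eq. \ref{semisemi} for a concrete choice of $p, q, p_0, q_0$.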
\end{document}
\begin{document}
\author{Ehsan Azmoodeh \thanks{Department of Mathematical Sciences, University of Liverpool, Liverpool L69 7ZL, United Kingdom. E-mail: \texttt{ehsan.azmoodeh@liverpool.ac.uk}} \qquad Yuliya Mishura \thanks{Department of Probability Theory, Statistics and Actuarial Mathematics, Taras Shevchenko National University of Kyiv,
Volodymyrska, 64, Kyiv, 01601, Ukraine. E-mail: \texttt{myus@univ.kiev.ua}} \qquad Farzad Sabzikar \thanks{Department of Statistics,
Iowa State University, Ames, IA 50011, USA. E-mail: \texttt{sabzikar@iastate.edu}} }
\title{How does tempering affect the local and global properties of fractional Brownian motion?}
\begin{abstract} The present paper investigates the effects of tempering the power-law kernel in the moving average representation of fractional Brownian motion (fBm) on some local and global properties of this Gaussian stochastic process. Tempered fractional Brownian motion (TFBM) and tempered fractional Brownian motion of the second kind (TFBMII) are the processes considered in order to investigate the role of tempering. Tempering does not change the local properties of fBm, including the sample paths and the $p$-variation, but it has a strong impact on the Breuer-Major theorem, on the asymptotic behavior of the third and fourth cumulants of fBm, and on the optimal fourth moment theorem. \end{abstract}
\vskip0.3cm \noindent \textbf{Keywords}: Fractional Brownian motion; Tempered fractional processes; Semi--long memory; Breuer--Major theorem; Limit Theorems; Malliavin calculus \vskip0.1cm
\noindent \textbf{MSC 2010}: 60F17; 60H07; 60G22; 60G15 \tableofcontents
\section{Introduction}\label{sec-intr}
Fractional Brownian motion (fBm) is a Gaussian stochastic process whose increments, termed fractional Gaussian noise (fGn), can exhibit long range dependence in the sense that the power law spectral density of fGn blows up near the origin \cite{Beranbook, EmbrechtsMaejima, Pipirasbook, Samorodnitskybook}. fBm has become popular in applications to science and engineering, since it yields a simple tractable model that captures the correlation structure seen in many natural systems \cite{Ascione, Harms, KolmogorovFBM, Mandelbrot1982, Meerschaertbook, Molz}.
Recently, two wide classes of continuous Gaussian stochastic processes, called tempered fractional Brownian motion (TFBM) and tempered fractional Brownian motion of the second kind (TFBMII), were introduced in \cite{Meerschaertsabzikar} and \cite{SurgailisFarzadTFSMII}, respectively. Unlike fBm, TFBM and TFBMII can be defined for any value of the Hurst parameter $H>0$. TFBM and TFBMII have attracted the attention of researchers in various fields. It is known that bifurcation theory is very useful for surveying the qualitative or topological changes in the orbit structure of parameterized dynamical systems. A new stochastic phenomenological bifurcation of the Langevin equation perturbed by TFBM was constructed in \cite{YangZengChen}. As a result, it was shown that the tempered fractional Ornstein-Uhlenbeck process, which is the solution of the Langevin equation driven by a tempered fractional Brownian motion, exhibits very diverse and interesting bifurcation phenomena. The paper \cite{DengChenWang} established further properties of tempered fractional Brownian motion, such as its ergodicity, and derived the corresponding Fokker-Planck equation. Furthermore, it argued carefully that the asymptotic form of the mean squared displacement of the tempered fractional Langevin equation transits from $t^2$ (ballistic diffusion for short times) to $t^{2-2H}$, and then to $t^2$ (again ballistic diffusion for long times). The arbitrage opportunities for the Black-Scholes model driven by TFBM were investigated in \cite{ZhangXiao}. In \cite{SWP}, the authors developed the asymptotic theory for the ordinary least squares estimator of the unknown coefficient of an autoregressive model of order one when the additive error follows a discrete tempered linear process. Consequently, they showed that the limiting result involves TFBMII under some conditions. 
Finally, \cite{Lim} introduced some extensions of TFBM including Mixed TFBM and tempered multifractional Brownian motion and studied essential properties of these stochastic processes.
TFBM and TFBMII can also be useful stochastic models for situations where the data follows fBm at some intermediate scale, but then deviates from fBm at longer scales. For example, wind speed measurements typically resemble long range dependent fBm over a range of frequencies, but deviate significantly at very low frequencies (corresponding to very long spatial scales). Since the spectral density of the increments of tempered fractional processes follows the same pattern, they can provide a useful model for such data. Recently, \cite{BDS} showed that TFBM can display the Davenport spectrum, which is a modification of the Kolmogorov $-5/3$ power law spectrum for the inertial range in turbulence. The same paper also used wavelets to study statistical inference, including parameter estimation for TFBM and a hypothesis test of fBm versus TFBM.
The aim of this paper is to study the properties of tempered fractional processes more deeply. In fact, our task is to investigate the effects of tempering on some local and global properties of fBm.
In Section \ref{local}, we study the consequences of tempering for the behavior of the variance, sample paths, and $p$-variation of a fBm. Proposition \ref{lem:usefulrelations 1} gives the asymptotic properties of the variance of the tempered fractional processes for large time $t$. Unlike for fBm, the variance of TFBM converges to a finite constant as time tends to infinity, whereas the variance of TFBMII grows linearly in $t$. Therefore, tempering makes TFBM stochastically bounded and TFBMII stochastically unbounded for large $t$. On the other hand, tempering does not change the sample path behavior of fBm, see Theorem \ref{lem:TFBM and TFBMII bounds}. Consequently, we prove the existence of local times for TFBM and TFBMII. We will also show that these tempered processes are locally nondeterministic on every compact interval, see Proposition \ref{prop:tfbmIIlocaltime} and Theorem \ref{prop:TFBMIILND}. In Subsection \ref{sec3}, we show that tempering preserves the $\frac{1}{H}$-variation of TFBM and TFBMII. Tempered fractional processes demonstrate so-called semi-long range dependence, or semi-long memory: the covariance function of their increments resembles long range dependence for a number of lags, depending on the tempering parameter, but eventually decays exponentially fast. The spectral density of tempered fractional Gaussian noise, TFGN, is zero at the origin, so that TFBM is an anti-persistent process, while the spectral density of TFGNII remains bounded at very low frequencies, see \cite{Meerschaertsabzikar,SurgailisFarzadTFSMII}. The semi-long range dependence property and the behavior of the spectral density of tempered fractional processes motivate us to study the Breuer--Major theorem for TFGNs in Section \ref{sec:BM}. 
The section begins with Lemma \ref{lem:Positive-Negative-Correlation}, revealing an interesting switching feature in the correlation structure of the tempered fractional Gaussian noise of the first kind. Then we continue studying the effect of tempering on the popular Breuer--Major theorem and its modern ramifications in the realm of the Malliavin--Stein method. Furthermore, we investigate the asymptotic behavior of the third and fourth cumulants of tempered fractional Gaussian processes. The tempering parameter $\lambda$ manifests its role in the optimal fourth moment theorem as well, see Remark \ref{rem:optimalrate}.\\
In what follows, $C, C_i$ for $i=1,2, \ldots$ denote generic constants which may be different at different locations. We write $ \limd $, $ \limfdd $ and $ \eqfdd $
for weak convergence in distribution, weak convergence and equality
of finite-dimensional distributions, respectively. Also, denote $\mathbb R_+ := (0, \infty), (x)_\pm := \max (\pm x, 0), x \in \mathbb R, \, \int := \int_\mathbb R$. For two non-negative sequences $a_n$ and $b_n$, we write $a_n\asymp b_n$ to indicate that $0<\liminf_{n\to\infty}\frac{a_n}{b_n}\leq \limsup_{n\to\infty}\frac{a_n}{b_n}<\infty$. The relation $f\sim g$ means that $\lim_{z\to\infty}\frac{f(z)}{g(z)}=1$. Some other notations are given in Appendices A and B.
\section{The effects of tempering on the local properties of fBm}\label{local}
\subsection{Asymptotic behavior of the variations of tempered fractional processes}\label{sec2}
Let $\{B(t)\}_{ t\in \mathbb R}$ be a real-valued Brownian motion on the real line, i.e., a zero mean Gaussian process with stationary independent increments and variance $|t|$ for all $t\in\mathbb R$. Define an independently scattered Gaussian random measure $B(dx)$ with control measure $m(dx)= dx$ by setting $B((a,b])=B(b)-B(a)$ for any real numbers $a<b$, and then extending to all Borel sets with finite Lebesgue measure. Then the Wiener integral $I(f):=\int f(x)B(dx)$ is defined for all functions $f:\mathbb R\to\mathbb R$ such that $\int f(x)^2dx<\infty$, as a Gaussian random variable with zero mean and covariance $\mathbb E[I(f)I(g)]= \int f(x)g(x)dx$. Moreover, the well-known Mandelbrot--van Ness representation of two-sided normalized fractional Brownian motion (fBm) with Hurst index $H\in (0,1)\setminus\left\{1/2\right\}$ has the form
\begin{equation*} B_H(t)=C_H\int\left((t-x)_{+}^{H-\frac{1}{2}}-(-x)_{+}^{H-\frac{1}{2}}\right)B(dx), \end{equation*} where $C_H=\frac{(\Gamma(2H+1)\sin(\pi H))^{1/2}}{\Gamma(H+1/2)}$, see \cite{Mishura}. Now our goal is to modify this representation as follows. On the one hand, it is possible simply to temper the integrand by an exponential factor, ignore the normalizing constant, and give the following definition. \begin{defn}\label{defTFBM} Given an independently scattered Gaussian random measure $B(dx)$ on $\mathbb R$ with control measure $dx$, for any $H>0$ and $\lambda>0$, the stochastic process $B^I_{H,\lambda}= \{ B^I_{H,\lambda}(t) \}_{t\in \mathbb R}$ defined by the Wiener integral \begin{equation}\label{eq:TFBMdef} B^{I}_{H,\lambda}(t):={\int \left[e^{-\lambda(t-x)_{+}}(t-x)_{+}^{H-\frac{1}{2}}-e^{-\lambda(-x)_{+}}(-x)_{+}^{H-\frac{1}{2}}\right]\ B(dx)}, \end{equation} where $0^0=0$, is called the tempered fractional Brownian motion (TFBM). \end{defn}
It is easy to check that the function \begin{equation*} g^{I}_{H,\lambda,t}(x):=e^{-\lambda(t-x)_{+}}(t-x)_{+}^{H-\frac{1}{2}}-e^{-\lambda(-x)_{+}}(-x)_{+}^{H-\frac{1}{2}} \end{equation*} is square integrable over the entire real line for any $H>0$, so that TFBM is well-defined. Note that it is defined for $H=1/2$ as well, in contrast to the Mandelbrot-van-Ness representation, and equals $$B^{I}_{1/2,\lambda}(t)=e^{-\lambda t}\int_{-\infty}^t e^{ \lambda x}\ B(dx)-\int_{-\infty}^0e^{ \lambda x}\ B(dx).$$ However, in what follows, we shall consider mostly $H\neq 1/2$.
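For readers who want to experiment with the process, here is a minimal simulation sketch (not from the paper; the function name, truncation point, and grid sizes are our choices) that approximates the Wiener integral \eqref{eq:TFBMdef} by a Riemann sum against white-noise increments, truncating the integration domain at a finite left endpoint. We take $H>1/2$ so the kernel stays bounded on the grid.

```python
import numpy as np

def simulate_tfbm(H=0.7, lam=1.0, T=1.0, n=500, x_min=-20.0, seed=None):
    """Riemann-sum approximation of the Wiener integral defining TFBM.

    The kernel g(t, x) = e^{-lam (t-x)_+} (t-x)_+^{H-1/2}
                       - e^{-lam (-x)_+} (-x)_+^{H-1/2}
    is integrated against i.i.d. Gaussian increments sqrt(dx) * Z_i,
    with the x-integral truncated at x_min (the kernel decays
    exponentially to the left, so the truncation error is small).
    """
    rng = np.random.default_rng(seed)
    x = np.arange(x_min, T, (T - x_min) / 20000)    # integration grid
    dx = x[1] - x[0]
    dB = np.sqrt(dx) * rng.standard_normal(x.size)  # white-noise increments

    def kernel(t):
        a = np.clip(t - x, 0.0, None)
        b = np.clip(-x, 0.0, None)
        return np.exp(-lam * a) * a ** (H - 0.5) - np.exp(-lam * b) * b ** (H - 0.5)

    t_grid = np.linspace(0.0, T, n + 1)
    return t_grid, np.array([np.sum(kernel(t) * dB) for t in t_grid])

t, path = simulate_tfbm(seed=1)
```

Since the kernel vanishes identically at $t=0$, the simulated path starts at the origin exactly, as the definition requires.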
On the other hand, for $H\neq 1/2$, the kernel $(t-x)_{+}^{H-\frac{1}{2}}-(-x)_{+}^{H-\frac{1}{2}}$ can be represented as $$(t-x)_{+}^{H-\frac{1}{2}}-(-x)_{+}^{H-\frac{1}{2}}=(H-1/2) \int_0^t(s-x)_{+}^{H-\frac{3}{2}}ds.$$ Tempering the integrand by the same exponential factor, integrating by parts, and ignoring the normalizing constant,
we get another tempered stochastic process. \begin{defn}\label{defTFBMII} Given an independently scattered Gaussian random measure $B(dx)$ on $\mathbb R$ with control measure $dx$, for any $H>0$ and $\lambda>0$, the stochastic process $B^{I\!I}_{H,\lambda}= \{ B^{I\!I}_{H,\lambda}(t) \}_{t\in \mathbb R}$ defined by the Wiener integral \begin{equation}\label{eq:TFBMIIdef} B^{I\!I}_{H,\lambda}(t):= \int g^{I\!I}_{H,\lambda,t}(x) B(dx), \end{equation} where \begin{equation}\label{hdef0} \begin{split} g^{I\!I}_{H,\lambda,t}(x)&:=(t-x)_+^{H - \frac{1}{2}}e^{-\lambda (t-x)_+} - (-x)_+^{H - \frac{1}{2}} e^{-\lambda (-x)_+}\\ & \quad + \lambda \int_{0}^{t} (s-x)_{+}^{H-\frac{1}{2}}e^{-\lambda(s-x)_{+}}\ ds, \quad x \in \mathbb R, \end{split} \end{equation} is called the tempered fractional Brownian motion of the second kind (TFBMII). \end{defn}
We also note that TFBM \eqref{eq:TFBMdef} and TFBMII \eqref{eq:TFBMIIdef} are Gaussian stochastic processes with stationary increments, having the following scaling property: for any scaling factor $c>0$ \begin{equation}\label{eq:scalingproperty} \left\{X_{H,\lambda}(ct)\right\}_{t \in \mathbb R}{\triangleq}\left\{c^{H}X_{H,c\lambda}(t)\right\}_{t \in \mathbb R} \end{equation} where $X_{H,\lambda}$ could be $B^{I}_{H,\lambda}$ or $B^{I\!I}_{H,\lambda}$ (see \cite[Proposition 2.2]{Meerschaertsabzikar} and \cite[Proposition 2.9]{SurgailisFarzadTFSMII}). Using the scaling property \eqref{eq:scalingproperty} and the fact that
$X_{H,\lambda}(|t|)$ has the same distribution as $|t|^{H} X_{H,\lambda|t|}(1)$, it is easy to see that $\mathbb{E}[(X_{H,\lambda}(|t|))^{2}]=|t|^{2H}\mathbb{E}[(X_{H,\lambda | t|}(1))^{2}]=: |t|^{2H}C^{2}_{t}$. Next, we recall an explicit representation for $C^{2}_{t}$. We refer the reader to \cite{Meerschaertsabzikar,SurgailisFarzadTFSMII} for the details.
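The scaling property \eqref{eq:scalingproperty} can be checked at the level of second moments by numerically integrating the squared kernel of \eqref{eq:TFBMdef}. The sketch below (helper name and parameter values are ours) compares $\mathbb E[(B^{I}_{H,\lambda}(ct))^{2}]$ with $c^{2H}\,\mathbb E[(B^{I}_{H,c\lambda}(t))^{2}]$.

```python
import numpy as np
from scipy.integrate import quad

def tfbm_variance(H, lam, t):
    """E[B^I_{H,lam}(t)^2] by direct integration of the squared kernel."""
    # contribution of x in (0, t), substituting u = t - x
    a, _ = quad(lambda u: u ** (2 * H - 1) * np.exp(-2 * lam * u), 0, t)
    # contribution of x < 0, substituting y = -x
    b, _ = quad(lambda y: (np.exp(-lam * (t + y)) * (t + y) ** (H - 0.5)
                           - np.exp(-lam * y) * y ** (H - 0.5)) ** 2, 0, np.inf)
    return a + b

H, lam, t, c = 0.7, 1.0, 0.8, 2.5
lhs = tfbm_variance(H, lam, c * t)                 # Var B^I_{H,lam}(ct)
rhs = c ** (2 * H) * tfbm_variance(H, c * lam, t)  # c^{2H} Var B^I_{H,c*lam}(t)
print(lhs, rhs)   # the two values should agree
```

Note how the Hurst parameter alone controls the scaling exponent, while the tempering parameter is rescaled from $\lambda$ to $c\lambda$, exactly as \eqref{eq:scalingproperty} states.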
\begin{lem}\label{lem:usefulrelations}
\noindent(a) Let $X_{H,\lambda} = B^{I}_{H,\lambda}$. Then the function $C_{t}^{2}= (C^{I}_{t})^{2}=\mathbb{E}[(B^{I}_{H,\lambda |t|}(1))^{2}]$ has the expression \begin{equation}\label{eq:CtTFBMDef}
(C^{I}_{t})^{2}=\frac{2\Gamma(2H)}{(2\lambda |t|)^{2H}}-\frac{2\Gamma(H+\frac 12)}{\sqrt{\pi}}\frac{1}{(2\lambda |t|)^{H}}K_{H}(\lambda |t|), \end{equation} where $t \neq 0$ and $K_{\nu}(z)$ is the modified Bessel function of the second kind (see Appendix A for the definition of $K_{\nu}(z)$).
\noindent(b) Let $X_{H,\lambda} = B^{I\!I}_{H,\lambda}$. Then the function $C_{t}^{2}= (C^{I\!I}_{t})^{2}=\mathbb{E}[(B^{I\!I}_{H,\lambda |t|}(1))^{2}]$ has the expression \begin{equation}\label{eq:CtTFBIIMDef} \begin{split} (C^{I\!I}_{t})^{2}&=\frac{(1-2H) \Gamma(H+\frac{1}{2}) \Gamma(H)(\lambda t)^{-2H} }{\sqrt{\pi}} \Big[1-{_2F_3}{ \Big(\{1,-1/2\}, \{1-H,1/2,1\}, \lambda^2 t^2/4\Big)}\Big]\\ &\quad + \frac{\Gamma (1-H) {\Gamma(H+\frac{1}{2})}}{ \sqrt{\pi} H 2^{2H}}\,{_2F_3} \Big(\{1,H- 1/2\}, \{1,H+1,H+ 1/2\}, \lambda^2 t^2/4\Big), \end{split} \end{equation} where ${_2F_3}$ is the generalized hypergeometric function (see Appendix A for the definition of ${_2F_3}$). \end{lem}
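The closed form \eqref{eq:CtTFBMDef} in part (a) can be verified numerically against the defining integral $\int_{\mathbb R} \big(g^{I}_{H,\lambda,t}(x)\big)^{2}\,dx$. A sketch (parameter values are ours) using the modified Bessel function from SciPy:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, kv

H, lam, t = 0.7, 1.0, 1.3

# E[B^I(t)^2] by integrating the squared moving-average kernel:
# x in (0, t) via u = t - x, and x < 0 via y = -x
a, _ = quad(lambda u: u ** (2 * H - 1) * np.exp(-2 * lam * u), 0, t)
b, _ = quad(lambda y: (np.exp(-lam * (t + y)) * (t + y) ** (H - 0.5)
                       - np.exp(-lam * y) * y ** (H - 0.5)) ** 2, 0, np.inf)
numeric = a + b

# E[B^I(t)^2] = t^{2H} (C_t^I)^2 with the closed form of the lemma,
# where kv is the modified Bessel function of the second kind K_H
Ct2 = (2 * gamma(2 * H) / (2 * lam * t) ** (2 * H)
       - 2 * gamma(H + 0.5) / np.sqrt(np.pi) / (2 * lam * t) ** H * kv(H, lam * t))
closed = t ** (2 * H) * Ct2
print(numeric, closed)   # the two values should agree
```

For $H=1/2$ the closed form collapses, via $K_{1/2}(z)=\sqrt{\pi/(2z)}\,e^{-z}$, to $(1-e^{-\lambda t})/\lambda$, which matches the explicit representation of $B^{I}_{1/2,\lambda}$ given above.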
\begin{prop}\label{lem:usefulrelations 1} \begin{itemize} \item[$(a)$] The {\rm TFBM} \eqref{eq:TFBMdef} with parameters $H>0$ and $\lambda>0$ satisfies \begin{equation}\label{eq:varasy}
\lim _{t\to +\infty}\mathbb E\left[\left(B^I_{H,\lambda}(t)\right)^2\right]=\frac{2\Gamma(2H)}{(2\lambda)^{2H}}.
\end{equation} \item[$(b)$] The {\rm TFBMII} \eqref{eq:TFBMIIdef} with parameters $H>0$ and $\lambda>0$ satisfies \begin{equation}\label{eq:varasyII}
\lim _{t\to +\infty}\mathbb E\Big[\Big(\frac{B^{I\!I}_{H,\lambda}(t)}{\sqrt{t}}\Big)^2\Big]=\lambda^{1-2H}\Gamma^{2}\left(H+\frac{1}{2}\right).
\end{equation}
\end{itemize}
\end{prop}
\begin{proof} The proof of part $(a)$ follows from the fact that \begin{equation*} K_{\nu}(z)\sim\sqrt{\frac{\pi}{2}}\frac{e^{-z}}{\sqrt{z}} \end{equation*} as $z\to\infty$. Part $(b)$: Obviously, for any $t\ge 0$, \begin{equation*}\begin{gathered}\mathbb E\left( B^{I\!I}_{H,\lambda}(t) \right)^2=\int_{-\infty}^0\left((t-x)^{H-\frac{1}{2}}e^{-\lambda (t-x)} - (-x)^{H - \frac{1}{2}} e^{-\lambda (-x)}
+ \lambda \int_{0}^{t} (s-x)^{H-\frac{1}{2}}e^{-\lambda(s-x)}\ ds\right)^2dx\\+
\int_{0}^t\left((t-x)^{H-\frac{1}{2}}e^{-\lambda (t-x)}
+ \lambda \int_{x}^{t} (s-x)^{H-\frac{1}{2}}e^{-\lambda(s-x)}\ ds\right)^2dx\\=
\int_{0}^\infty\left((t+x)^{H-\frac{1}{2}}e^{-\lambda (t+x)} - x^{H - \frac{1}{2}} e^{-\lambda x}
+ \lambda \int_{0}^{t} (s+x)^{H-\frac{1}{2}}e^{-\lambda(s+x)}\ ds\right)^2dx\\+
\int_{0}^t\left(u^{H-\frac{1}{2}}e^{-\lambda u}
+ \lambda \int_{0}^{u} v^{H-\frac{1}{2}}e^{-\lambda v}\ dv\right)^2du
=\left(H-1/2\right)^2\int_{0}^{\infty}\left(\int_{0}^{t}(s+x)^{H-3/2}e^{-\lambda(s+x)}ds\right)^2dx\\
+\int_{0}^{t}u^{2H-1}e^{-2\lambda u}du+2\lambda \int_{0}^{t}u^{H-1/2}e^{-\lambda u}\int_{0}^{u}v^{H-1/2}e^{-\lambda v}dvdu\\
+\lambda^2\int_{0}^{t}\left(\int_{0}^{u}v^{H-1/2}e^{-\lambda v}dv\right)^2du=:\sum_{k=1}^4 I_k(t). \end{gathered}\end{equation*} It is easy to see that $$\lim_{t\rightarrow\infty}I_1(t)=I_1(\infty) =\left(H-1/2\right)^2\int_{0}^{\infty}\left(\int_{0}^{\infty}(s+x)^{H-3/2}e^{-\lambda(s+x)}ds\right)^2dx,$$ and this integral is finite, see Lemma \ref{lem:propo24} in Appendix A. Further, $$\lim_{t\rightarrow\infty}I_2(t)=I_2(\infty)=(2\lambda)^{-2H}\Gamma(2H),$$ and, according to L'H\^{o}pital's rule, $$\lim_{t\rightarrow\infty}t^{-1}I_3(t)= 2\lambda \lim_{t\rightarrow\infty}t^{H-1/2}e^{-\lambda t}\int_{0}^{t}v^{H-1/2}e^{-\lambda v}dv=0.$$ Finally, again according to L'H\^{o}pital's rule, $$\lim_{t\rightarrow\infty}t^{-1}I_4(t)=\lambda^2 \left(\int_{0}^{\infty}v^{H-1/2}e^{-\lambda v}dv\right)^2 =\lambda^{1-2H}\Gamma^2(H+1/2),$$ and the proof follows. \end{proof}
\begin{rem} Since TFBM is a Gaussian stochastic process with zero mean, it follows from \eqref{eq:varasy} that $B^{I}_{H,\lambda}(t)$ converges in law to a normal random variable with zero mean and variance $2\Gamma(2H)(2\lambda)^{-2H}$ as $t\to\infty$, unlike fBm, whose variance diverges to infinity. In contrast, relation \eqref{eq:varasyII} shows that TFBMII is stochastically unbounded as $t\to\infty$. \end{rem}
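Both limits of Proposition \ref{lem:usefulrelations 1} can be probed numerically. The sketch below (helper names and parameter values are ours) checks part (a) via the closed form \eqref{eq:CtTFBMDef} at a large time, and part (b) via the dominant term $I_4(t)/t$ from the proof, with the inner integral $\int_0^u v^{H-1/2}e^{-\lambda v}\,dv$ expressed through the regularized lower incomplete gamma function.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, gammainc, kv

H, lam = 0.7, 1.0

# (a) Var B^I(t) = t^{2H} (C_t^I)^2 tends to 2 Gamma(2H) / (2 lam)^{2H}
def var_tfbm(t):
    return (2 * gamma(2 * H) / (2 * lam) ** (2 * H)
            - 2 * gamma(H + 0.5) / np.sqrt(np.pi)
            * (t / (2 * lam)) ** H * kv(H, lam * t))

limit_a = 2 * gamma(2 * H) / (2 * lam) ** (2 * H)
print(var_tfbm(40.0), limit_a)   # essentially equal: K_H decays like e^{-lam t}

# (b) the dominant term I_4(t)/t of Var B^{II}(t)/t tends to
#     lam^{1-2H} Gamma(H+1/2)^2; here F(u) = int_0^u v^{H-1/2} e^{-lam v} dv
F = lambda u: lam ** (-(H + 0.5)) * gamma(H + 0.5) * gammainc(H + 0.5, lam * u)
t = 200.0
I4_over_t, _ = quad(lambda u: lam ** 2 * F(u) ** 2 / t, 0, t)
limit_b = lam ** (1 - 2 * H) * gamma(H + 0.5) ** 2
print(I4_over_t, limit_b)   # agree up to an O(1/t) correction
```

The exponential decay of $K_H$ makes the TFBM variance saturate almost immediately, while the TFBMII ratio converges only at the slower rate $O(1/t)$, mirroring the L'H\^{o}pital argument in the proof.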
The following proposition gives the covariance structure of TFBM and TFBMII (see \cite{Meerschaertsabzikar, SurgailisFarzadTFSMII} for more details).
\begin{prop}\label{covTFBMTFBMII} \begin{itemize} \item[$(a)$] {\rm TFBM} \eqref{eq:TFBMdef} has the covariance function \begin{equation*}
{\rm Cov}\left[B^{I}_{H,\lambda}(t),B^{I}_{H,\lambda}(s)\right]=\frac {1}{2}\left[C_t^{2}\left|t\right|^{2H}
+C_s^{2}\left|s\right|^{2H}
-C_{t-s}^{2}\left|t-s\right|^{2H}\right] \end{equation*} for any $s,t\in\mathbb R$, where $C^{2}_{t}= (C^{I}_{t})^{2}$ is given by \eqref{eq:CtTFBMDef}.
\item[$(b)$] {\rm TFBMII} \eqref{eq:TFBMIIdef} has the covariance function \begin{equation*}
{\rm Cov}\left[B^{I\!I}_{H,\lambda}(t),B^{I\!I}_{H,\lambda}(s)\right]=\frac {1}{2}\left[C_t^{2}\left|t\right|^{2H}
+C_s^{2}\left|s\right|^{2H}
-C_{t-s}^{2}\left|t-s\right|^{2H}\right] \end{equation*} for any $s,t\in\mathbb R$, where $C^{2}_{t}= (C^{I\!I}_{t})^{2}$ is given by \eqref{eq:CtTFBIIMDef}. \end{itemize}
\end{prop}
\subsection{Sample paths properties and local times of tempered fractional processes} Now we prove the existence of local times for tempered fractional processes. To start with, we prove the following result that will be used in this section. \begin{thm}\label{lem:TFBM and TFBMII bounds}
Let $X$ stand for a TFBM $B^I_{H,\lambda}$ from \eqref{eq:TFBMdef} or a TFBMII $B^{I\!I}_{H,\lambda}$ from \eqref{eq:TFBMIIdef}, both with $0<H<1$ and $\lambda>0$. Then there exist positive constants $C_1$ and $C_2$ such that \begin{equation}\label{two-sided1}
C_1 \left|t-s\right|^{2H}\leq \mathbb{E}[|X(t)-X(s)|^{2}]\leq C_2 \left|t-s\right|^{2H} \end{equation}
for any $s,t\in [0,1]$.
\end{thm} \begin{rem} \label{rem2} (i) Inequalities \eqref{two-sided1} mean that both processes, TFBM and TFBMII, are quasi-h\'{e}lices, according to the geometric terminology of J.-P. Kahane, see \cite{Kahane}.\\ (ii) Theorem \ref{lem:TFBM and TFBMII bounds} holds for any fixed interval $[0,T]$ with constants $C_i$ depending on $T$. \end{rem} \begin{proof} The proof for TFBM is similar to that of Lemma 4.2 in \cite{Meerschaertsabzikar2} and hence is omitted. To prove \eqref{two-sided1} for TFBMII, we use its moving average representation \eqref{eq:TFBMIIdef} to write \begin{equation}\label{eq:incrementsTFBMII2} \begin{split}
\mathbb{E}\Big| B^{I\!I}_{H,\lambda}(t) - B^{I\!I}_{H,\lambda}(s) \Big|^2 &= \int_{-\infty}^{s} \Big[(t-x)^{H-\frac{1}{2}}e^{-\lambda(t-x)} - (s-x)^{H-\frac{1}{2}}e^{-\lambda(s-x)}\\ & \quad + \lambda \int_s^t (w-x)^{H-\frac{1}{2}}e^{-\lambda(w-x)}\ dw \Big]^2 dx \\ &\quad + \int_s^t \Big[ (t-x)^{H-\frac{1}{2}}e^{-\lambda(t-x)} + \lambda\int_x^t (w-x)^{H-\frac{1}{2}}e^{-\lambda(w-x)}\ dw\Big]^2 dx \\ &= I_1 + I_2. \end{split} \end{equation} Now, let $\frac{1}{2}< H <1$. Then $I_{1}= I_{1, H>\frac{1}{2}}$ can be written as \begin{equation}\label{I1TFBMIIHGRQ12} I_{1, H>\frac{1}{2}} = {\left(H-\frac{1}{2}\right)^2 }\int_{-\infty}^s \Big[ \int_{s}^t (w-x)^{H-\frac{3}{2}} e^{-\lambda(w-x)} \ dw \Big]^2 dx. \end{equation} Obviously, \begin{equation}\label{I11TFBMIIHGRQ12} \begin{split} I_{ 1, H>\frac{1}{2} } &\leq \left(H-\frac{1}{2}\right)^2\int_{-\infty}^{s} \Big( (t-x)^{H-\frac{1}{2}} - (s-x)^{H-\frac{1}{2}} \Big)^{2}\ dx\\ & = \left(H-\frac{1}{2}\right)^2\int_{0}^{\infty} \Big( ( h+u)^{H-\frac{1}{2}} - u^{H-\frac{1}{2}} \Big)^{2}\ du \quad (h:=t-s)\\
&=\left(H-\frac{1}{2}\right)^2h^{2H} \int_{0}^{\infty} \Big( (1+ u)^{H-\frac{1}{2}} - u^{H-\frac{1}{2}} \Big)^{2}\ du\\ &=C h^{2H} = C(t-s)^{2H}, \end{split} \end{equation} where we used the fact that $\int_{0}^{\infty} \Big( (1+ u)^{H-\frac{1}{2}} - u^{H-\frac{1}{2}} \Big)^{2}\ du$ is finite, see, e.g., \cite[Theorem 1.3.1]{Mishura}. Now, let's move on to consider $I_2=I_{ 2, H>\frac{1}{2} }$, and get that \begin{equation}\label{I2TFBMIIHGRQ12} \begin{split} I_{ 2, H>\frac{1}{2} } &= \int_s^t \Big[ (t-x)^{H-\frac{1}{2}}e^{-\lambda(t-x)} + \lambda\int_x^t (w-x)^{H-\frac{1}{2}}e^{-\lambda(w-x)}\ dw\Big]^2 dx\\ &= \left(H-\frac{1}{2}\right)^2\int_s^t \Big[ \int_x^t (w-x)^{H-\frac{3}{2}}e^{-\lambda(w-x)}\ dw\Big]^{2}\ dx \leq C (t-s)^{2H}. \end{split} \end{equation} We conclude from \eqref{eq:incrementsTFBMII2}--\eqref{I2TFBMIIHGRQ12} that \begin{equation*}
\mathbb{E}\left| B^{I\!I}_{H,\lambda}(t) - B^{I\!I}_{H,\lambda}(s) \right|^2 \leq C|t-s|^{2H} \end{equation*} provided $\frac{1}{2}< H < 1$. As the next step, we find an upper bound for the second moments of the increments of TFBMII for $0<H<\frac{1}{2}$. Recall from \eqref{eq:incrementsTFBMII2} that in this case too we have \begin{equation*}\label{eq:incrementsTFBMIIsecond}
\mathbb{E}\left| B^{I\!I}_{H,\lambda}(t) - B^{I\!I}_{H,\lambda}(s) \right|^2 = I_1 + I_2. \end{equation*} For $0<H<\frac{1}{2}$, we start with $I_2$ and write \begin{equation*} \begin{split} I_{2}&= I_{2,H<\frac{1}{2}} = \int_s^t \left[ (t-x)^{H-\frac{1}{2}}e^{-\lambda(t-x)} + \lambda\int_x^t (w-x)^{H-\frac{1}{2}}e^{-\lambda(w-x)}\ dw\right]^2 dx\\ & = \int_s^t (t-x)^{2H-1} e^{-2\lambda(t-x)}\ dx
+ \lambda^2 \int_s^t \Big[ \int_{x}^{t} (w-x)^{H-\frac{1}{2}}e^{-\lambda(w-x)} \ dw \Big]^{2}\ dx\\ & \quad + 2\lambda \int_s^t (t-x)^{H-\frac{1}{2}}e^{-\lambda(t-x)} \int_x^t (w-x)^{H-\frac{1}{2}}e^{-\lambda(w-x)}\ dw dx\\ &\le \int_s^t (t-x)^{2H-1} dx
+ \lambda^2 \int_s^t \Big[ \int_{x}^{t} (w-x)^{H-\frac{1}{2}} \ dw \Big]^{2}\ dx\\
&\quad + 2\lambda \int_s^t (t-x)^{H-\frac{1}{2}} \int_x^t (w-x)^{H-\frac{1}{2}} \ dw dx\leq C|t-s|^{2H}. \end{split} \end{equation*} Next, we consider $I_{1}=I_{1, H<\frac{1}{2}}$ and decompose it into three terms as follows: \begin{equation*} \begin{split} I_1&= I_{1, H<\frac{1}{2}} = \int_{-\infty}^{s} \left( (t-x)^{H-\frac{1}{2}}e^{-\lambda(t-x)} - (s-x)^{H-\frac{1}{2}}e^{-\lambda(s-x)}\right)^2\ dx\\ & \quad + \lambda^2 \int_{-\infty}^{s}\left( \int_s^t (w-x)^{H-\frac{1}{2}}e^{-\lambda(w-x)}\ dw \right)^2 dx \\ & \quad + 2\lambda \int_{-\infty}^{s} \Bigg( (t-x)^{H-\frac{1}{2}}e^{-\lambda(t-x)} - (s-x)^{H-\frac{1}{2}}e^{-\lambda(s-x)}\Bigg) \Bigg( \int_s^t (w-x)^{H-\frac{1}{2}}e^{-\lambda(w-x)}\ dw \Bigg) \ dx\\ &=: I_{11, H<\frac{1}{2}} + I_{12, H<\frac{1}{2}} + I_{13, H<\frac{1}{2}}. \end{split} \end{equation*} According to \cite[Lemma 4.2]{Meerschaertsabzikar2}, \begin{equation}\label{equat:11}
I_{11, H<\frac{1}{2}} \leq C |t-s|^{2H} \end{equation} provided $s,t\in (0,1)$. Let as before, $h=t-s$. Taking into account that for any $0\le y\le z$ and $H<1/2$ we have $z^{H+\frac{1}{2}}-y^{H+\frac{1}{2}}\le (z-y)^{H+\frac{1}{2}} $, the term $I_{12, H<\frac{1}{2}}$ can be rewritten as \begin{equation}\label{equat:12} \begin{split} I_{12, H<\frac{1}{2}} &= \lambda^2 \int_{-\infty}^{s}\Bigg( \int_s^t (w-x)^{H-\frac{1}{2}}e^{-\lambda(w-x)}\ dw \Bigg)^2 dx \\ &\leq \lambda^2 \Big(H+\frac{1}{2}\Big)^{-2} \int_{-\infty}^{s} e^{-2\lambda(s-x)} \Bigg( (t-x)^{H+\frac{1}{2}} - (s-x)^{H+\frac{1}{2}} \Bigg)^2 dx \\ &= \lambda^2 \Big(H+\frac{1}{2}\Big)^{-2} \int_{0}^{\infty} e^{-2\lambda u} \Bigg( (u+h)^{H+\frac{1}{2}} - u^{H+\frac{1}{2}} \Bigg)^2 du \\
&\le \lambda^2 \Big(H+\frac{1}{2}\Big)^{-2} h^{2H+1} \int_{0}^{\infty} e^{-2\lambda u} du
\leq Ch^{2H+1}\leq Ch^{2H}, \end{split} \end{equation} where the last inequality is due to $0<h=t-s<1$. Next, for $I_{13, H<\frac{1}{2}}$ we have \begin{equation}\label{I13forHleq12} \begin{split} I_{13, H<\frac{1}{2}}&= 2\lambda \int_{-\infty}^{s} \Bigg( (t-x)^{H-\frac{1}{2}}e^{-\lambda(t-x)} - (s-x)^{H-\frac{1}{2}}e^{-\lambda(s-x)}\Bigg)\\ & \quad \times\Bigg( \int_s^t (w-x)^{H-\frac{1}{2}}e^{-\lambda(w-x)}\ dw \Bigg) \ dx \\ &\leq 2\lambda \Big(H+\frac{1}{2}\Big)^{-1} \int_{-\infty}^{s} (t-x)^{H-\frac{1}{2}}e^{-\lambda(t-x)}\left[(t-x)^{H+\frac{1}{2}} - (s-x)^{H+\frac{1}{2}} \right]\ dx\\ & \quad +2\lambda \Big(H+\frac{1}{2}\Big)^{-1} \int_{-\infty}^{s} (s-x)^{H-\frac{1}{2}}e^{-\lambda(s-x)}\left[(t-x)^{H+\frac{1}{2}} - (s-x)^{H+\frac{1}{2}} \right]\ dx\\ &=: I_{131, H<\frac{1}{2}} + I_{132, H<\frac{1}{2}}. \end{split} \end{equation} Note that the function $(1+ u)^{H-\frac{1}{2}}\left[(1+u)^{H+\frac{1}{2}} - u^{H+\frac{1}{2}} \right]$ is bounded on $[0,\infty)$, and continue with $I_{131, H<\frac{1}{2}}$ changing a variable $s-x=hu$: \begin{equation}\label{I131} \begin{split} I_{131, H<\frac{1}{2}} &= 2\lambda \Big(H+\frac{1}{2}\Big)^{-1} \int_{-\infty}^{s} (s+h-x)^{H-\frac{1}{2}}e^{-\lambda(s+h-x)}\left[(s+h-x)^{H+\frac{1}{2}} - (s-x)^{H+\frac{1}{2}} \right]\ dx\\
&=2\lambda \Big(H+\frac{1}{2}\Big)^{-1} h^{2H+1} e^{-\lambda h} \int_{0}^{\infty} (1+ u)^{H-\frac{1}{2}} e^{-\lambda hu} \left[(1+u)^{H+\frac{1}{2}} - u^{H+\frac{1}{2}} \right]\ du\\ &\leq Ch^{2H+1}\int_{0}^{\infty}e^{-\lambda hu}du \leq Ch^{2H}. \end{split} \end{equation}
For $I_{132, H<\frac{1}{2}}$, the corresponding function $u^{H-\frac{1}{2}}\Big[(1+u)^{H+\frac{1}{2}} - u^{H+\frac{1}{2}} \Big]$ is unbounded at zero, so we modify the transformation slightly: \begin{equation}\label{I132} \begin{split} I_{132} &= 2\lambda \Big(H+\frac{1}{2}\Big)^{-1} \int_{-\infty}^{s} (s-x)^{H-\frac{1}{2}}e^{-\lambda(s-x)}\Bigg[(s+h-x)^{H+\frac{1}{2}} - (s-x)^{H+\frac{1}{2}} \Bigg]\ dx\\ &= 2\lambda h^{2H+1} \Big(H+\frac{1}{2}\Big)^{-1} \int_{0}^{\infty} u^{H-\frac{1}{2}} e^{-\lambda hu} \Bigg[ (1+ u)^{H+\frac{1}{2}} - u^{H+\frac{1}{2}} \Bigg]\ du \\ &\leq C h^{2H+1}\left(\int_{0}^{1}+\int_{1}^{\infty} \right)\leq C \left(h^{2H+1}+h^{2H}\right)\leq Ch^{2H}, \end{split} \end{equation} where the last inequality is due to $0<h=t-s<1$. From \eqref{I13forHleq12}--\eqref{I132} we have \begin{equation*}
I_{13, H<\frac{1}{2}}\leq C|t-s|^{2H}, \end{equation*} and together with \eqref{equat:11} and \eqref{equat:12} it gives us the upper bound \begin{equation*}
I_{1}= I_{11, H<\frac{1}{2}}+ I_{12, H<\frac{1}{2}} + I_{13, H<\frac{1}{2}} \leq C|t-s|^{2H}. \end{equation*} Therefore, \begin{equation*}
\mathbb{E}\Big| B^{I\!I}_{H,\lambda}(t) - B^{I\!I}_{H,\lambda}(s) \Big|^2 \leq C|t-s|^{2H} \end{equation*} provided $0< H < \frac{1}{2}$. So, we have established that the right-hand side of \eqref{two-sided1} holds for any $0<H<1$. Next, for $0<H<1$, let us prove that \begin{equation*}
\mathbb{E}\Big| B^{I\!I}_{H,\lambda}(t) - B^{I\!I}_{H,\lambda}(s) \Big|^2 \geq C|t-s|^{2H}. \end{equation*}
In order to obtain the required lower bound, it suffices to note that formula \eqref{eq:incrementsTFBMII2} allows us to write for $s,t\in[0,1]$: \begin{equation*} \begin{split}
\mathbb{E}\Big| B^{I\!I}_{H,\lambda}(t) - B^{I\!I}_{H,\lambda}(s) \Big|^2 &\geq \int_s^t \Big[ (t-x)^{H-\frac{1}{2}}e^{-\lambda(t-x)}\Big]^2 dx\\ &\geq e^{-2\lambda(t-s)}\int_s^t (t-x)^{2H-1} dx
\geq C (t-s)^{2H}, \end{split} \end{equation*} and the proof is complete. \end{proof}
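Since TFBM has stationary increments, $\mathbb E|B^{I}_{H,\lambda}(t)-B^{I}_{H,\lambda}(s)|^{2}=|t-s|^{2H}(C^{I}_{t-s})^{2}$, so for TFBM the two-sided bound \eqref{two-sided1} amounts to $(C^{I}_{h})^{2}$ staying bounded away from $0$ and $\infty$ for lags $h\in(0,1]$. This can be inspected numerically with the closed form \eqref{eq:CtTFBMDef}; the grid and parameter choices below are ours.

```python
import numpy as np
from scipy.special import gamma, kv

def C_I_squared(H, lam, h):
    """(C_h^I)^2 from the closed-form variance; by stationarity of the
    increments this equals E|B^I(t) - B^I(s)|^2 / |t-s|^{2H} with h = t - s."""
    return (2 * gamma(2 * H) / (2 * lam * h) ** (2 * H)
            - 2 * gamma(H + 0.5) / np.sqrt(np.pi)
            / (2 * lam * h) ** H * kv(H, lam * h))

h = np.logspace(-3, 0, 200)           # lags in (0, 1]
for H in (0.3, 0.5, 0.7):
    vals = C_I_squared(H, 1.0, h)
    print(H, vals.min(), vals.max())  # both ends stay positive and finite
```

The leading $h^{-2H}$ singularities of the two terms cancel as $h\downarrow 0$ (by the duplication formula for the gamma function together with the small-argument expansion of $K_H$), which is why the ratio tends to a positive constant rather than blowing up.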
Next, we prove the existence of local times for TFBM and TFBMII. We will also show that these tempered fractional processes are locally nondeterministic on any open interval. Suppose $X=\{X(t)\}_{t\geq 0}$ is a real-valued separable random process with Borel sample functions. The random Borel measure \begin{equation*} \mu_{B}(A)=\int_{ B}I\{X(s)\in A\}\ ds \end{equation*} defined for Borel sets $A\subseteq {\mathbb R},\;B\subseteq {\mathbb R}^{+}$ is called the occupation measure of $X$ on $B$. If $\mu_{B}$ is absolutely continuous with respect to Lebesgue measure on $\mathbb{R}^{+}$, then the Radon-Nikodym derivative of $\mu_{B}$ with respect to Lebesgue measure is called the local time of $X$ on $B$, denoted by $L(B,x)$. See Boufoussi et al.\ \cite{BoufoussiDozziGuerbaz} for more detail. For brevity, we denote $L(t,x):=L([0,t],x)$.
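As an illustration of the occupation-measure definition (using a discretized standard Brownian motion as a stand-in process whose local time is classical; the discretization and names are ours, not from the paper), the time a simulated path spends in spatial bins approximates the occupation measure, and the binned density in $x$ approximates the local time $x\mapsto L(T,x)$; summing the occupation times over a partition recovers the total time $T$.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 1.0, 100_000
dt = T / n
# a discretized Brownian path as a stand-in for the process X
X = np.concatenate([[0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(n))])

def occupation(a, b):
    """Occupation measure of [a, b) over [0, T]: time the path spends there."""
    return dt * np.count_nonzero((X[:-1] >= a) & (X[:-1] < b))

# a histogram of occupation times, normalized by bin width,
# approximates the local time density x -> L(T, x)
edges = np.linspace(X.min(), X.max() + 1e-9, 101)
occ = np.array([occupation(a, b) for a, b in zip(edges[:-1], edges[1:])])
local_time = occ / np.diff(edges)
print(occ.sum())   # total occupation time over the partition equals T
```

Refining both the time step and the spatial bins would sharpen this histogram toward the continuous local time whose existence Proposition \ref{prop:tfbmIIlocaltime} establishes for the tempered processes.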
\begin{prop}\label{prop:tfbmIIlocaltime} Let $X$ be either TFBM \eqref{eq:TFBMdef} or TFBMII \eqref{eq:TFBMIIdef}. Then for $0< H < 1$ and $\lambda>0$, $X$ has a local time $L(t,x)$ that is continuous in $t$ for a.e.\ $x\in\mathbb{R}$, and square integrable with respect to $x$. \end{prop}
\begin{proof} It follows from Boufoussi et al.\ \cite[Theorem 3.1]{BoufoussiDozziGuerbaz} that a stochastic process $X=\{X(t)\}_{t\in[0,T]}$ has a local time $L(t,x)$ that is continuous in $t$ for a.e.\ $x\in\mathbb{R}$, and square integrable with respect to $x$, if $X$ satisfies the following condition
$(\mathcal{H})$: There exist positive numbers $(\rho_{0},H)\in (0,\infty)\times (0,1)$ and a positive function $\psi\in L^{1}(\mathbb{R})$ such that for all $\kappa\in\mathbb{R}, T>0, t,s\in[0,T], 0<|t-s|<\rho_{0}$ we have \begin{equation*}
\Bigg|\mathbb{E}\left[\exp\Big(i\kappa\frac{X(t)-X(s)}{|t-s|^{H}}\Big)\right]\Bigg|\leq \psi(\kappa). \end{equation*} \vskip10pt \noindent Apply Theorem \ref{lem:TFBM and TFBMII bounds}, more precisely, the left-hand side of \eqref{two-sided1} and Remark \ref{rem2}, to get \begin{equation*} \begin{split}
\mathbb{E}\left[\exp\Big(i\kappa\frac{B^{I}_{H,\lambda}(t)-B^{I}_{H,\lambda}(s)}{|t-s|^{H}}\Big)\right]&=
\exp\Big(-\frac{\kappa^{2}}{2}\,\frac{\mathbb{E}[|{B}^{I}_{H,\lambda}(t)-{B}^{I}_{H,\lambda}(s)|^{2}]}{|t-s|^{2H}}\Big)\\
&\leq \exp\Big(-\frac{C_{1}\kappa^{2}}{2}\Big)=:\psi(\kappa), \end{split} \end{equation*} where the function $\psi\in L^{1}(\mathbb{R},d\kappa)$. Hence TFBM satisfies condition $(\mathcal{H})$. Along the same lines, using Theorem \ref{lem:TFBM and TFBMII bounds}, namely, the left-hand side of \eqref{two-sided1} and Remark \ref{rem2}, we can see that TFBMII satisfies condition $(\mathcal{H})$. Therefore, both $X=B^{I}_{H,\lambda}$ and $X=B^{I\!I}_{H,\lambda}$ have a local time $L(t,x)$ that is continuous in $t$ for a.e.\ $x\in\mathbb{R}$. The proof is complete. \end{proof} In the next step, we prove that tempered fractional processes are locally nondeterministic on any open interval $(0,T)$, $T>0$. Recall that a zero mean Gaussian process $\{X(t)\}_{t\in\mathbb{R}}$ is \textit{locally nondeterministic} (LND) on some interval $\mathbb{T}=(a,b)$ if $X$ satisfies condition $(A)$, consisting of the following three assumptions:
\begin{itemize}\label{condition A} \item[$(A)$ $(i)$] $\mathbb{E}[ X^2(t) ] >0$ for all $t\in \mathbb{T}$; \item[$(ii)$] $\mathbb{E}[(X(t)-X(s))^2]>0$ for all $t,s\in \mathbb{T}$ sufficiently close; \item[$(iii)$] for any $m\geq 2$, \begin{equation}\label{BermanVM}
\liminf_{\epsilon\downarrow 0}V_{m}>0, \qquad V_{m}:=\frac{{\text{Var}}\{X(t_m)-X(t_{m-1})\,|\,X(t_1), \ldots, X(t_{m-1})\}}{{\text{Var}}\{X(t_m)-X(t_{m-1})\}}, \end{equation}
where the $\liminf$ is taken over distinct, ordered $t_1<t_2<\ldots<t_m\in \mathbb{T}$ with $|t_1-t_m|<\epsilon$. \end{itemize}
\begin{rem}
According to Berman \cite{Berman}, the ratio $V_{m}$ in assumption $(iii)$ is called a relative linear prediction error and is always between $0$ and $1$. If the ratio is bounded away from zero as $|t_{1}-t_{m}|\to 0$, then we can approximate $X(t_m)$ in the ${\mathbb{L}}^{2}$ norm by the most recent value $X(t_{m-1})$ with the same order of error as by the set of values $X(t_{1}),\ldots,X(t_{m-1})$. We refer the reader to \cite{Berman} for more details. \end{rem}
\begin{thm}\label{prop:TFBMIILND} Let $X$ be either TFBM \eqref{eq:TFBMdef} or TFBMII \eqref{eq:TFBMIIdef}. Then for any $0<H<1$ and $\lambda>0$, $X$ is LND on every interval $(0,T)$ for $0<T<\infty$. \end{thm}
\begin{proof} By letting the index of stability $\alpha=2$ in the proof of Proposition 5.4 in \cite{Meerschaertsabzikar2}, one can prove that TFBM is LND on every interval $(0,T)$ for $0<T<\infty$ (it is proved on any interval $(\delta, T)$ for $0<\delta<T<\infty$ but the proof does not refer to $\delta$ and can be extended to $(0,T)$). To prove that TFBMII is LND on every interval $(0,T)$, we need to verify assumptions $(i)$--$(iii)$ of condition $(A)$. The first and second assumptions follow immediately from the left-hand side of inequality \eqref{two-sided1}, Theorem \ref{lem:TFBM and TFBMII bounds}. It remains to show that the TFBMII $\{B^{I\!I}_{H,\lambda}(t)\}$ satisfies assumption $(iii)$.
From \eqref{eq:TFBMIIdef} one can see that $\{B(u):u\leq s\}$ determines $\{B^{I\!I}_{H,\lambda}(u), u\leq s\}$ in the sense that \begin{equation}\label{incl} \sigma\left(B^{I\!I}_{H,\lambda}(u), u\leq s \right)\subset \sigma\left(B(u), u\leq s \right) \end{equation} for all $s>0$. Fix any $s<t$ and consider the value
$$ {\rm Var}\Big(B^{I\!I}_{H,\lambda}(t) - B^{I\!I}_{H,\lambda}(s)\,\Big|\, B(u), u\leq s \Big);$$
by \eqref{incl}, $B^{I\!I}_{H,\lambda}(s)$ is measurable with respect to $\sigma\left(B(u), u\leq s \right)$, so this conditional variance equals ${\rm Var}\big(B^{I\!I}_{H,\lambda}(t)\,\big|\, B(u), u\leq s \big)$.
Next, write the moving average representation in \eqref{eq:TFBMIIdef} as follows: \begin{equation*} \begin{split} B^{I\!I}_{H,\lambda}(t)&= \frac{1}{\Gamma(H+\frac{1}{2})}\left( \int_{-\infty}^{s}g^{I\!I}_{H,\lambda,t}(x) dB(x)
+\int_{s}^{t}g^{I\!I}_{H,\lambda,t}(x)\ dB(x)\right), \end{split} \end{equation*} and observe that $\int_{-\infty}^{s}g^{I\!I}_{H,\lambda,t}(x) dB(x)$ is measurable with respect to $\sigma\left(B(u), u\leq s \right)$. Therefore \begin{equation}\label{var3} \begin{split}
{\rm Var}\Big(B^{I\!I}_{H,\lambda}(t)|B(u), u\leq s \Big) &={\rm Var}\Bigg( \frac{1}{\Gamma(H+\frac{1}{2})}\Big[\int_{s}^{t} \big((t-x)_{+}^{H - \frac{1}{2}}e^{-\lambda (t-x)_{+}}\\
&+\lambda \int_{0}^{t} (w-x)_{+}^{H-\frac{1}{2}}e^{-\lambda(w-x)_{+}}\ dw\big)\ dB(x)\Big]\Big| B(u), u\leq s \Bigg)\\ &\geq \frac{1}{\Gamma^{2}(H+\frac{1}{2})} \int_{s}^{t}
(t-x)^{2H-1}e^{-2\lambda(t-x)} dx. \end{split} \end{equation} Now, taking into account the form of the numerator in \eqref{BermanVM}, relation \eqref{incl} and the fact that the left-hand side of \eqref{var3} is bounded from below by a non-random value, we conclude that the relative prediction error $V_{m}$ is bounded from below as follows: \begin{equation}\label{bermanbound} \frac{ \int_{s}^{t} (t-x)^{2H-1}e^{-2\lambda(t-x)} \ dx}{\Gamma^{2}(H+\frac{1}{2})\text{Var}\Big(B^{I\!I}_{H,\lambda}(t)-B^{I\!I}_{H,\lambda}(s)\Big)}\ge \frac{ (t-s)^{2H}e^{-2\lambda(t-s)}}{2H\,\Gamma^{2}(H+\frac{1}{2})\text{Var}\Big(B^{I\!I}_{H,\lambda}(t)-B^{I\!I}_{H,\lambda}(s)\Big)}, \end{equation} where $s=t_{m-1}$ and $t=t_{m}$; the second inequality holds since $e^{-2\lambda(t-x)}\ge e^{-2\lambda(t-s)}$ for $x\in(s,t)$ and $\int_{s}^{t}(t-x)^{2H-1}dx=(t-s)^{2H}/(2H)$. Applying Lemma \ref{lem:TFBM and TFBMII bounds} and Remark \ref{rem2}, we get that \begin{equation}\label{eq:condition3TFBMII secondtpart}
{\text{Var}}\{B^{I\!I}_{H,\lambda}(t_m)-B^{I\!I}_{H,\lambda}(t_{m-1})\}\leq C_T\left|t_m-t_{m-1}\right|^{2H} \end{equation}
for $|t_m-t_{m-1}|<\epsilon$ and all points lying in the interval $(0,T)$. Here $C_T$ is a constant depending only on $T$ but not on $m$ and the points $t_1, \ldots, t_m$. With the help of \eqref{eq:condition3TFBMII secondtpart}, we get that the ratio in \eqref{bermanbound} is bounded below by \begin{equation*} \frac{e^{-2\lambda(t_m-t_{m-1})}(t_m-t_{m-1})^{2H}}{2H \Gamma^{2}(H+\frac{1}{2})C_{T}(t_m-t_{m-1})^{2H}}= \frac{e^{-2\lambda(t_m-t_{m-1})} }{2H \Gamma^{2}(H+\frac{1}{2})C_{T} } \end{equation*}
for $|t_m-t_{m-1}|<\epsilon$ and all points lying in the interval $(0,T)$. This value tends to $ \frac{1 }{2H \Gamma^{2}(H+\frac{1}{2})C_{T} }$ as $\epsilon\downarrow 0$,
and hence condition $(A)$ holds. This means that $\{B^{I\!I}_{H,\lambda}\}$ is LND on $(0,T)$, which completes the proof. \end{proof}
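The elementary integral bound used in the last step, $\int_0^u y^{2H-1}e^{-2\lambda y}\,dy \ge e^{-2\lambda u}u^{2H}/(2H)$, is easy to sanity-check numerically. The following sketch (the parameter grid is an arbitrary illustration, not part of the proof) verifies it by standard quadrature:

```python
import numpy as np
from scipy.integrate import quad

# Sketch: check the elementary bound
#   int_0^u y^(2H-1) e^(-2*lam*y) dy >= e^(-2*lam*u) * u^(2H) / (2H),
# obtained by bounding e^(-2*lam*y) from below by e^(-2*lam*u) on (0, u).
# The parameter grid below is an arbitrary illustration.
for H in (0.3, 0.5, 0.8):
    for lam, u in ((0.5, 0.2), (2.0, 1.0)):
        lhs, _ = quad(lambda y: y**(2*H - 1) * np.exp(-2*lam*y), 0, u)
        rhs = np.exp(-2*lam*u) * u**(2*H) / (2*H)
        assert lhs >= rhs
        print(H, lam, u, round(lhs, 6), round(rhs, 6))
```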
\subsection{$p$-variation of tempered fractional processes}\label{sec3}
In this section, we show that TFBM and TFBMII have the same $\frac 1H$-variation as fBm when $0<H<1$. First, we introduce a ``uniform'' definition of the $\beta$-variation of a stochastic process, for which we need some notation. Fix a time interval $[a,b]\subset \mathbb R$, and consider the uniform partition \begin{equation*} \pi^{n}=\{a=t^{n}_{0}<t^{n}_{1}<\ldots<t^{n}_{n}=b\}, \end{equation*} where $t^{n}_{i}=a+\frac{i}{n}(b-a)$ for $i = 0,\ldots,n$. Let $\beta\geq 1$ and let $X=\{X_t,t \in \mathbb R\}$ be a continuous stochastic process. Finally, set $\Delta^{n}_{i} X=X(t^{n}_{i})-X(t^{n}_{i-1})$.
\begin{defn} For any $\beta\ge 1$, the $\beta$-variation of $X$ on the interval $[a,b]$, denoted by $\langle X\rangle _{\beta,[a,b]}$, is the limit in probability of \begin{equation*}
S^{[a,b]}_{\beta,n}(X):=\sum_{i=1}^{n}|\Delta^{n}_{i} X|^{\beta}, \end{equation*} if the limit exists. We say that the $\beta$-variation of $X$ on $[a,b]$ exists in $L^{p}$ if the above limit exists in $L^{p}$ for some $p\ge 1$. \end{defn} It is also easy to see that the following triangle inequality holds: \begin{equation*} S^{[a,b]}_{\beta,n}(X+Y)^{\frac{1}{\beta}}\leq S^{[a,b]}_{\beta,n}(X)^{\frac{1}{\beta}}+S^{[a,b]}_{\beta,n}(Y)^{\frac{1}{\beta}}. \end{equation*} This inequality implies that if $X$ and $Y$ are two continuous stochastic processes such that $\langle X\rangle _{\beta,[a,b]}$ exists and $\langle Y\rangle _{\beta,[a,b]}=0$, then \begin{equation}\label{eq:key formula for variation} \langle X+Y\rangle _{\beta,[a,b]}=\langle X\rangle _{\beta,[a,b]}. \end{equation} Indeed, obviously $\langle X+Y\rangle_{\beta,[a,b]}\le\langle X\rangle_{\beta,[a,b]}$, and this inequality, complemented by the following one: $$\langle X\rangle_{\beta,[a,b]}^{\frac{1}{\beta}}\le\langle X+Y\rangle_{\beta,[a,b]}^{\frac{1}{\beta}}+\langle -Y\rangle_{\beta,[a,b]}^{\frac{1}{\beta}},$$
immediately implies \eqref{eq:key formula for variation}. Now we are ready to state and prove the main result of this section. The key ingredients of the proof are \eqref{eq:key formula for variation} and the well-known fact that a normalized fBm has finite $\frac{1}{H}$-variation on any interval $[a,b]$, equal to $(b-a)\mathbb{E} [ |Z|^{\frac 1H}]$, where $Z$ is a $\mathcal{N}(0,1)$-random variable; see, e.g., \cite{Mishura}, Section 1.18.
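This known fBm fact can be illustrated numerically by simulating a fBm path through a Cholesky factorization of its covariance matrix. The sketch below is purely illustrative (the Hurst index, mesh size $n$, jitter, and random seed are arbitrary choices):

```python
import numpy as np
from math import gamma, sqrt, pi

# Sketch: for a standard fBm B_H on [0,1], the discrete 1/H-variation
#   S_n = sum_i |B_H(t_i) - B_H(t_{i-1})|^(1/H)
# over the uniform mesh t_i = i/n approaches E|Z|^(1/H), Z ~ N(0,1).
H, n = 0.7, 1500
t = np.arange(1, n + 1) / n
# fBm covariance R(s,t) = (s^(2H) + t^(2H) - |t - s|^(2H)) / 2
R = 0.5 * (t[:, None]**(2*H) + t[None, :]**(2*H)
           - np.abs(t[:, None] - t[None, :])**(2*H))
L = np.linalg.cholesky(R + 1e-10 * np.eye(n))   # small jitter for stability
rng = np.random.default_rng(1)
X = L @ rng.standard_normal(n)                  # one fBm path on the mesh
S = np.sum(np.abs(np.diff(np.concatenate(([0.0], X))))**(1.0 / H))
p = 1.0 / H
target = 2**(p/2) * gamma((p + 1) / 2) / sqrt(pi)   # E|Z|^p for Z ~ N(0,1)
print(S, target)
```

On a single path the agreement is only approximate; refining the mesh or averaging over independent paths brings $S_n$ closer to the limit $(b-a)\,\mathbb E|Z|^{1/H}$ (here $b-a=1$).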
\begin{thm}\label{theo:variation} Let $X$ be either a TFBM $B_{H,\lambda}$ with parameters $H\in (0,1)$ and $\lambda>0$, defined by \eqref{eq:TFBMdef}, or a TFBMII $B^{I\!I}_{H, \lambda}$ given by \eqref{eq:TFBMIIdef}. Then $$\langle X\rangle _{\frac 1H,[a,b]}=c_{H}(b-a)$$ in probability, where $c_H= C(H)^{\frac 1H}\mathbb{E} [ |Z|^{\frac 1H}]$ and $Z$ is a $\mathcal{N}(0,1)$-random variable.
\end{thm}
\begin{proof} The proofs for TFBM and TFBMII are similar, so we only consider TFBM. First, apply the moving average representation of the TFBM to get the decomposition \[ B_{H,\lambda}(t)= B_{H}(t)+ Y(t), \] where \[ B_{H}(t)=\int_{-\infty}^{+\infty}\big[(t-x)_{+}^{H-\frac{1}{2}} -(-x)_{+}^{H-\frac{1}{2}} \big]B(dx) \] and \begin{equation}\label{eq:Y representation} Y(t)=\int_{-\infty}^{+\infty}\big[(t-x)_{+}^{H-\frac{1}{2}}(e^{-\lambda(t-x)_{+}}-1)-(-x)_{+}^{H-\frac{1}{2}}(e^{-\lambda(-x)_{+}}-1)\big]B(dx) \end{equation} for $t\in \mathbb R$. Notice that $C(H)^{-1} B_H$ is a fBm. Therefore, taking into account \eqref{eq:key formula for variation}, in order to prove the theorem, one needs to show that \begin{equation*}
S^{[a,b]}_{\frac 1H,n}(Y):=\sum_{i=1}^{n}|\Delta^{n}_{i} Y|^{\frac 1H}, \end{equation*} converges to zero in probability, where $\Delta^{n}_{i} Y=Y(t^{n}_{i})-Y(t^{n}_{i-1})$. In other words, we need to establish that $\langle Y\rangle _{\frac 1H,[a,b]}=0$, where $Y$ is given by \eqref{eq:Y representation}. The increments of $Y$ equal \begin{equation*} Y(t^{n}_{i+1})-Y(t^{n}_{i}) =\int_{-\infty}^{+\infty}\big[(t^{n}_{i+1}-x)_{+}^{H-\frac{1}{2}}(e^{-\lambda(t^{n}_{i+1}-x)_{+}}-1)-(t^{n}_{i}-x)_{+}^{H-\frac{1}{2}}(e^{-\lambda(t^{n}_{i}-x)_{+}}-1)\big]B(dx) \end{equation*} and then \begin{equation*} \begin{split}
&\sum_{i=1}^{n}\mathbb{E}\Big(|Y(t^{n}_{i+1})-Y(t^{n}_{i})|^{\frac 1H}\Big)\\ &=C\sum_{i=1}^{n}\Big(\int_{-\infty}^{\infty}\Big[(t^{n}_{i+1}-x)_{+}^{H-\frac{1}{2}}(e^{-\lambda(t^{n}_{i+1}-x)_{+}}-1)-(t^{n}_{i}-x)_{+}^{H-\frac{1}{2}}(e^{-\lambda(t^{n}_{i}-x)_{+}}-1) \Big]^{2}\ dx\Big)^{\frac{1}{2H}}\\ &\leq C\sum_{i=1}^{n}\Big(\int_{-\infty}^{t^{n}_{i}}\Big[(t^{n}_{i+1}-x)^{H-\frac{1}{2}}(e^{-\lambda(t^{n}_{i+1}-x)}-1)-(t^{n}_{i}-x)^{H-\frac{1}{2}}(e^{-\lambda(t^{n}_{i}-x)}-1)\Big]^{2}\ dx\Big)^{\frac{1}{2H}}\\ & \quad +C\sum_{i=1}^{n}\Big(\int_{t^{n}_{i}}^{t^{n}_{i+1}}\Big[(t^{n}_{i+1}-x)^{H-\frac{1}{2}}(e^{-\lambda(t^{n}_{i+1}-x)}-1)\Big]^{2}\ dx\Big)^{\frac{1}{2H}}\\ &=: C(I_{1,n}+I_{2,n}), \end{split} \end{equation*}
where $C$ is a generic constant that depends on $H$. Let us first show that $I_{2,n}\to 0$ as $n\to\infty$. Using the change of variable $t^{n}_{i+1}-x=y$ in $I_{2,n}$ and the inequality $|e^{-a}-e^{-b}|\leq |a-b|$ for $a, b\geq 0$, we can write \begin{equation*} \begin{split} I_{2,n}&= n\Big(\int_{0}^{\frac{b-a}{n}}y^{2H-1}(e^{-\lambda y}-1)^{2}\ dy\Big)^{\frac{1}{2H}}
\leq n\lambda^{\frac 1H }\Big(\int_{0}^{\frac{b-a}{n}}y^{2H+1}\ dy\Big)^{\frac{1}{2H}}= C n^{-\frac 1H }\to 0 \end{split} \end{equation*} as $n\to\infty$. Next, we show that $I_{1,n}\to 0$ as $n\to\infty$. First we use the change of variable $t^{n}_{i}-x=y$ to see that \begin{equation*}
I_{1,n} =n\left(\int_{0}^{\infty}\left[\left(y+\frac{b-a}{n}\right)^{H-\frac{1}{2}}\left(1-e^{-\lambda(y+\frac{b-a}{n})}\right)-y^{H-\frac{1}{2}}\left(1-e^{-\lambda y}\right)\right]^{2}\ dy\right)^{\frac{1}{2H}}, \end{equation*} and it is sufficient to prove that \begin{equation*} I^{2H}_{1,n}=n^{2H}\int_{0}^{\infty}\left[\left(y+\frac{b-a}{n}\right)^{H-\frac{1}{2}}\left(1-e^{-\lambda(y+\frac{b-a}{n})}\right)-y^{H-\frac{1}{2}}\left(1-e^{-\lambda y}\right)\right]^{2}\ dy\rightarrow 0 \end{equation*} as $n\rightarrow \infty$. Further, \begin{equation*} \begin{split}
I^{2H}_{1,n}&\le 2n^{2H}\left(1-e^{-\lambda(\frac{b-a}{n})}\right)^2\int_{0}^{\infty} \left(y+\frac{b-a}{n}\right)^{2H-1}e^{-2\lambda y}\ dy\\
&\quad +2(b-a)^{2H}\int_{0}^{\infty}\left((z+1)^{H-1/2}-z^{H-1/2}\right)^2(1-e^{-\lambda\left(\frac{b-a}{n}\right)z})^2dz\\ &=I^{2H}_{11,n}+2(b-a)^{2H}I^{2H}_{12,n}, \end{split} \end{equation*} where in the second integral we changed the variable $y=\frac{b-a}{n}z.$ It is easy to see that for $n>b-a$ \begin{equation*} \begin{split} I^{2H}_{11,n}&\le 2n^{2H-2}(\lambda(b-a))^2\int_{0}^{\infty} \left(\left(y+1\right)^{2H-1}\vee y^{2H-1}\right)e^{-2\lambda y}\ dy\rightarrow 0 \end{split} \end{equation*} as $n\rightarrow \infty$. Concerning $I^{2H}_{12,n}$, we observe that $(1-e^{-\lambda\left(\frac{b-a}{n}\right)z})^2\rightarrow 0$ as $n\rightarrow \infty$, and, splitting $I^{2H}_{12,n}=\int_{0}^{1}+\int_{1}^{\infty}$, we immediately get that $\int_{0}^{1}\rightarrow 0$ as $n\rightarrow \infty$, while the integrand of $\int_{1}^{\infty}$ can be bounded as follows: \begin{equation*} \begin{split}\left((z+1)^{H-1/2}-z^{H-1/2}\right)^2(1-e^{-\lambda\left(\frac{b-a}{n}\right)z})^2&\le (H-1/2)^2\left((z+1)^{2H-3}\vee z^{2H-3}\right), \end{split} \end{equation*} and $\int_{1}^{\infty}$ converges to zero due to the Lebesgue dominated convergence theorem. Now the proof is complete.
\end{proof} \begin{rem} Theorem \ref{theo:variation} immediately implies that the $p$-variation of a TFBM and a TFBMII equals zero or infinity, depending on whether $p$ is greater or less than $1/H$.\end{rem}
\section{Breuer--Major theorem in application to tempered fractional Gaussian noises}\label{sec:BM}
In this section, we consider tempered fractional Gaussian noises in the context of the celebrated Breuer--Major theorem (see \cite{Breuer} or \cite[Theorem 7.2.4]{Nourdin-Peccati-Bible} for a modern exposition) in Gaussian analysis. \subsection{Covariance structures of tempered fractional Gaussian noises} First, we study the increments of tempered fractional Gaussian processes and investigate the asymptotic behavior of their covariance functions for large lags. These results then provide a useful tool for developing limit theorems. For simplicity, denote $\alpha=H-\frac{1}{2}$. Given a TFBM \eqref{eq:TFBMdef}, we define tempered fractional Gaussian noise (TFGN) \begin{equation*}\label{eq:TFGNdef} X^{I}_{\alpha,\lambda}(j)=B^{I}_{H,\lambda}(j+1)-B^{I}_{H,\lambda}(j)\quad\text{for}\quad j\in \mathbb{Z}. \end{equation*} It follows easily from \eqref{eq:TFBMdef} that TFGN has the moving average representation: \begin{equation}\label{eq:TFGNmoving} X^{I}_{\alpha,\lambda}(j)= \int_{\mathbb{R}}g^{I}_{\lambda,\alpha,j}(x)B(dx)= {\int_{\mathbb{R}}\left[e^{-\lambda(j+1-x)_{+}}(j+1-x)_{+}^{\alpha}-e^{-\lambda(j-x)_{+}}(j-x)_{+}^{\alpha} \right]B(dx)}. \end{equation} Along the same lines, a tempered fractional Gaussian noise of the second kind (TFGNII) can be defined as follows: \begin{equation*} X^{I\!I}_{\alpha,\lambda}(j)=B^{I\!I}_{H,\lambda}(j+1)-B^{I\!I}_{H,\lambda}(j)\quad\text{for $j\in \mathbb{Z}$.} \end{equation*} It follows from \eqref{hdef0} that a TFGNII has the moving average representation \begin{equation}\label{eq:TFGNIImoving} \begin{split} X^{I\!I}_{\alpha,\lambda}(j)&= \int_{\mathbb{R}}g^{I\!I}_{\lambda,\alpha,j}(x)B(dx)=\int_\mathbb R \Big[ e^{-\lambda(j+1-x)_{+}}(j+1-x)_{+}^{\alpha}-e^{-\lambda(j-x)_{+}}(j-x)_{+}^{\alpha}\\ &\qquad\qquad\qquad\quad+\lambda \int_{j}^{j+1} e^{-\lambda(s-x)_{+}}(s-x)_{+}^{\alpha} ds\Big] B(dx). \end{split} \end{equation}
So, let $X^{I}_{\alpha,\lambda}(j)$ and $X^{I\!I}_{\alpha,\lambda}(j)$ be the stationary sequences given by \eqref{eq:TFGNmoving} and \eqref{eq:TFGNIImoving}, respectively. Denote \begin{equation}\label{eq:covar}\gamma^J (k):=\mathbb{E}[X^{J}_{\alpha,\lambda}(0) X^{J}_{\alpha,\lambda}(k)]=\frac{1}{2}\left(\abs{k+1}^{2H}(C^{J}_{\vert k \vert +1})^2-2\abs{k}^{2H} (C^{J}_{\vert k\vert})^2+\abs{k-1}^{2H}(C^{J}_{\vert k-1 \vert })^2\right),\; J=I,I\!I,\end{equation}
where the normalizing constants $C^J_t$ are presented in Lemma \ref{lem:usefulrelations}. To analyze the behavior of $\gamma^J (k)$, we shall combine its direct representation via the kernels $g^{J}_{\lambda,\alpha}$, $J=I,I\!I$, and its representation from \eqref{eq:covar}. The following lemma specifies the sign of the noise covariance and will be used in the proofs of the main theorems. \begin{lem}\label{lem:Positive-Negative-Correlation}
\begin{itemize}
\item[(a)] Let $\lambda >0$. Consider the function $\psi (t) = (C^I_t)^2 \, t^{2H}$ for $t>0$, where the normalizing constant $C^I_t$ is given in Lemma \ref{lem:usefulrelations}. Then $\psi''(t)<0$ for all $t>0$ provided that $H \in (0,\frac{1}{2}]$. Hence, TFGN is negatively correlated when $H \in (0,\frac{1}{2}]$, meaning that for every $0 \neq k \in \mathbb Z$
\begin{equation}\label{eq:negative-correlation-I}
\gamma^I (k) < 0.
\end{equation}
\item[(b)] Let $ \lambda >0$. Then for every $ k \in \mathbb Z$ and $H>1/2$,
\begin{equation}\label{eq:positive-correlation-II}
\gamma^{I\!I} (k) >0.
\end{equation}
Moreover, when $H=1/2$, it holds $\gamma^{I\!I} (0) >0$, and $\gamma^{I\!I} (k) =0$ for every $0 \neq k \in \mathbb Z$.
\end{itemize}
\end{lem}
\begin{proof} (a) Note that using Lemma \ref{lem:usefulrelations} we can write \[ \psi (t) = \frac{2\Gamma(2H)}{(2\lambda)^{2H}}-\frac{2\Gamma(H+\frac 12)}{\sqrt{\pi} (2\lambda)^{H}}t^H K_{H}(\lambda t)= \frac{2\Gamma(2H)}{(2\lambda)^{2H}} - \frac{2\Gamma(H+\frac 12)}{\sqrt{\pi} 2^{H}\lambda^{2H}} (\lambda t)^{H} K_{H}(\lambda t). \] Hence, using relation $\frac{d}{dx} (x^\nu K_\nu (x))= - x^\nu K_{\nu -1}(x)$ for all $\nu \in \mathbb R$ (see e.g., Appendix in \cite{Robert-Bessel-functions}), we can immediately deduce that \begin{align} \psi'(t) & = \frac{2\Gamma(H+\frac 12)}{\sqrt{\pi} 2^{H}\lambda^{2H}} \lambda (\lambda t)^{H} K_{H-1} (\lambda t) = \frac{2\Gamma(H+\frac 12)}{\sqrt{\pi} 2^{H}\lambda^{2H-2}} \big\{ t (\lambda t)^{H-1}K_{H-1}(\lambda t) \big\},\\ \psi''(t) & = \frac{2\Gamma(H+\frac 12)}{\sqrt{\pi} 2^{H}\lambda^{2H-2}} \Big\{ (\lambda t)^{H-1}K_{H-1}(\lambda t) - \lambda t (\lambda t)^{H-1} K_{H-2}(\lambda t) \Big\} \nonumber\\ & = \frac{2\Gamma(H+\frac 12)}{\sqrt{\pi} 2^{H}\lambda^{2H-2}} (\lambda t)^{H-1} \Big\{ K_{H-1}(\lambda t) - (\lambda t) K_{H-2} (\lambda t) \Big\}\\ & = -\frac{2\Gamma(H+\frac 12)}{\sqrt{\pi} 2^{H}\lambda^{2H-2}} (\lambda t)^{H-1} \Big( (\lambda t) K_{H-1}(\lambda t) \Big) \times \Big( \frac{K_{H-2}(\lambda t)}{K_{H-1}(\lambda t)} - \frac{1}{\lambda t} \Big). \end{align} It is well known that $K_\nu (x) >0$ for every $x>0$ and real $\nu \in \mathbb R$. Let $\nu=H-1$. Therefore, it is enough to understand the sign of the quantity, \begin{equation}\label{eq:T1-H<1/2-sign} f(x):= \frac{K_{\nu-1}(x)}{K_\nu (x)} - \frac{1}{x}. \end{equation}
Let $\nu< - 1/2$, or equivalently $H<1/2$. In this case \cite[Proposition 4.5]{Bessel-Ratio-PAMS} contains a sharp estimate which yields \[ f(x)= \frac{K_{\nu-1}(x)}{K_\nu (x)} -\frac{1}{x} = \frac{K_{\mu+1}(x)}{K_\mu (x)} - \frac{1}{x} > 1, \quad \forall \, x >0, \] where $\mu=-\nu > 1/2$. Hence, $f$ stays positive over the whole interval $(0,\infty)$. This means that TFGN is globally negatively correlated. For $H=1/2$ the situation is very simple: the covariance function equals
$$\mathbb E B^I_{1/2,\lambda}(t)B^I_{1/2,\lambda}(s)=\frac{1}{2\lambda}\left(e^{-\lambda|t-s|}-e^{-\lambda t}-e^{-\lambda s}+1\right),$$
whence $$\gamma^I (k)=\frac{1}{\lambda} e^{-\lambda|k|}\left(1-\cosh \lambda\right)<0. $$ Again, TFGN is globally negatively correlated.
(b) Let $H>1/2$. Integrating by parts, we can rewrite the kernel $g^{I\!I}_{\lambda,\alpha,j}$ as $$g^{I\!I}_{\lambda,\alpha,j}(x)=\alpha\int_{j}^{j+1} e^{-\lambda(s-x)_{+}}(s-x)_{+}^{\alpha-1} ds>0,$$ and the proof immediately follows. When $H=1/2$, the tempered fractional Brownian motion of the second kind coincides with a Brownian motion, and hence the claim follows at once.
\end{proof} \begin{rem}
Item (a) of Lemma \ref{lem:Positive-Negative-Correlation} reveals that TFGN is globally negatively correlated provided that $H \in (0,1/2]$, regardless of the tempering parameter $\lambda$. However, when the Hurst parameter $H > 1/2$, a switching regime takes place that can be useful in modeling. More precisely, there exist time points $t^* = t^* (H,\lambda) \le t^{**}=t^{**}(H,\lambda)$ such that TFGN is positively correlated for all (continuous) lags $t< t^*$ and negatively correlated for all lags $t>t^{**}$. We were unable to prove that one can take $t^* = t^{**}$; however, our numerical MATLAB experiments indicate that this is in fact the case. The main obstacle to verifying the uniqueness of the time point at which TFGN switches from positive to negative correlation is to show that the function $$\frac{K_{\mu+1}(x)}{K_{\mu} (x)} - \frac{1}{x}$$ is strictly increasing over the interval $(0,1)$ for $\mu \in [0,1/2)$.
Since in the Breuer--Major theorem we are interested in the behavior of the noise at discrete times, the positions of the critical times $t^*$ and $t^{**}$ are very significant, and they depend heavily on the Hurst parameter $H>0$ as well as on the tempering parameter $\lambda>0$. For example, when $H = 3/2$, it can be shown that $t^* = t^{**} = \frac{1}{\lambda}$, and that
\begin{align}
\psi''(t) & > 0, \quad \text{ for } \quad t \in (0,\frac{1}{\lambda}),\\
\psi''(t) & <0, \quad \text{ for } \quad t > \frac{1}{\lambda}.
\end{align}
Hence, TFGN can exhibit both positive and negative correlations at discrete lags, depending on the range of the tempering parameter $\lambda$. We expect that a similar switching regime phenomenon takes place for TFGNII when $H <1/2$.
\end{rem} Now we are in a position to investigate the asymptotic behavior of the covariances of TFGN and TFGNII at large lags.
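Before turning to the asymptotics, we note that the sign statements of Lemma \ref{lem:Positive-Negative-Correlation} can be probed numerically from the moving average representation \eqref{eq:TFGNmoving}, via $\gamma^{I}(k)=\int_{-\infty}^{1}g^{I}_{\lambda,\alpha,k}(x)\,g^{I}_{\lambda,\alpha,0}(x)\,dx$. A minimal sketch (the truncation point $-40$ and all parameter values are illustrative):

```python
import numpy as np
from scipy.integrate import quad

def g(j, x, alpha, lam):
    # moving-average kernel of TFGN (first kind) at lag j
    a, b = j + 1 - x, j - x
    ta = np.exp(-lam * a) * a**alpha if a > 0 else 0.0
    tb = np.exp(-lam * b) * b**alpha if b > 0 else 0.0
    return ta - tb

def gamma_I(k, alpha, lam):
    # covariance gamma^I(k); the integrand is supported on (-inf, 1]
    val, _ = quad(lambda x: g(k, x, alpha, lam) * g(0, x, alpha, lam),
                  -40, 1, points=[0], limit=400)
    return val

lam = 1.0
# part (a): for H in (0, 1/2], gamma^I(k) < 0 for every k != 0
for k in (1, 2, 3):
    assert gamma_I(k, 0.3 - 0.5, lam) < 0          # H = 0.3
# H = 1/2: compare with the closed form lambda^{-1} e^{-lam*k}(1 - cosh lam)
print(gamma_I(1, 0.0, lam), np.exp(-lam) * (1 - np.cosh(lam)) / lam)
```

For $H=1/2$ the quadrature value matches the closed form $\gamma^I(k)=\lambda^{-1}e^{-\lambda|k|}(1-\cosh\lambda)$ derived above.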
\begin{prop}\label{prop:TFLN}
The covariance functions of TFGN and TFGNII have the following asymptotic behavior.
\begin{itemize} \item[$(a)$] For any $\alpha > -\frac{1}{2}$, \begin{equation}\label{x-2}\gamma^{I}(j)\sim -\frac{2\Gamma(\alpha+1)( \cosh \lambda-1 )}{ (2\lambda)^{\alpha+1}} e^{-\lambda j} j^{\alpha} \end{equation} as $j\to\infty$. This means that TFGN is asymptotically negatively correlated for any $\alpha > -\frac{1}{2}$ (compare to Lemma \ref{lem:Positive-Negative-Correlation}). In particular, $\gamma^I \in \ell^q(\mathbb Z) $ for every $q \ge 1$.
\item[$(b)$] For any $\alpha > -\frac{1}{2}$, $$\gamma^{I\!I}(j) \sim \alpha\lambda^{-1}(1-e^{-\lambda})(2e^\lambda-1) (2\lambda)^{-\alpha-1} \Gamma(\alpha+1) e^{-\lambda j } j^{ \alpha -1 }$$ as $j\to\infty$. In particular, for $\alpha>0$ TFGNII is asymptotically positively correlated (compare to Lemma \ref{lem:Positive-Negative-Correlation}), and $\gamma^{I\!I} \in \ell^q(\mathbb Z) $ for every $q \ge 1$.
\end{itemize} \end{prop}
\begin{proof}
\begin{itemize} \item[$(a)$] The following transformations are immediate: \begin{align*} \gamma^{I}(j)&=\mathbb{E}\bigg(\int_{-\infty}^{j+1}\left(e^{-\lambda(j+1-x)}(j+1-x)^{\alpha} -e^{-\lambda(j-x)_+}(j-x)^{\alpha}_+\right)dB(x)\\ & \quad \times\int_{-\infty}^{1}\left(e^{-\lambda(1-x)}( 1-x)^{\alpha} -e^{-\lambda( -x)_+}( -x)^{\alpha}_+\right)dB(x)\bigg) \\ &=\int_{-\infty}^{1}\left(e^{-\lambda(j+1-x)}(j+1-x)^{\alpha} -e^{-\lambda(j-x)}(j-x)^{\alpha}\right)\\ &\quad \times \left(e^{-\lambda(1-x)}( 1-x)^{\alpha} -e^{-\lambda( -x)_+}( -x)^{\alpha}_+\right)d x \\ &=e^{-\lambda j}\bigg(\int_{0}^{\infty} e^{-2\lambda z}z^{\alpha}\left( (j+z)^{\alpha} -e^\lambda(j-1+z)^{\alpha}\right)dz\\ & \quad - \int_{0}^{\infty} e^{-2\lambda z}z^{\alpha}\left( e^{-\lambda}(j+1+z)^{\alpha} -(j+z)^{\alpha}\right)dz\bigg) \\ &=e^{-\lambda j}j^{\alpha}\int_{0}^{\infty}e^{-2\lambda z}z^{\alpha}\bigg(2\left(1+\frac{ z}{j}\right)^{\alpha}- e^{-\lambda}\left(1+\frac{z+1}{j}\right)^{\alpha}-e^{\lambda}\left(1+\frac{z-1}{j}\right)^{\alpha}\bigg)dz. \end{align*}
Consider the value in the bracket $$2\left(1+\frac{ z}{j}\right)^{\alpha}- e^{-\lambda}\left(1+\frac{z+1}{j}\right)^{\alpha}-e^{\lambda}\left(1+\frac{z-1}{j}\right)^{\alpha}.$$
It tends to $2-e^\lambda-e^{-\lambda}$ as $j\rightarrow\infty$, and for $j\geq 2$ it is bounded by
$$(2\left(1+ { z} \right)^{\alpha} + e^{-\lambda}\left(2+ z \right)^{\alpha}+e^{\lambda}z^{\alpha})\vee(2+2e^\lambda+e^{-\lambda}).$$
Hence we can apply the Lebesgue dominated convergence theorem and obtain $(a)$. \item[$(b)$] Denote $$g_j(x):=e^{-\lambda(j+1-x)_{+}}(j+1-x)_{+}^{\alpha}-e^{-\lambda(j-x)_{+}}(j-x)_{+}^{\alpha}
+\lambda \int_{j}^{j+1} e^{-\lambda(s-x)_{+}}(s-x)_{+}^{\alpha} ds,$$
then, by calculations similar to those in part (a), $\gamma^{I\!I}(j)=\int_{-\infty}^{1}g_j(x)g_0(x)dx$.
So, our goal is to study the asymptotic behavior of $g_j(x)$. Note that on the interval $(-\infty,1]$
\begin{equation*} \begin{split} g_j(x)&=e^{-\lambda(j+1-x)}(j+1-x)^{\alpha}-e^{-\lambda(j-x)}(j-x)^{\alpha}
+\lambda \int_{j}^{j+1} e^{-\lambda(s-x)}(s-x)^{\alpha} ds\\&=e^{-\lambda j}j^\alpha\left(e^{-\lambda(1-x)}\left(1+\frac{1-x}{j}\right)^{\alpha}-e^{\lambda x }\left(1-\frac{x}{j}\right)^{\alpha}
+\lambda \int_{0}^{1} e^{-\lambda(z-x)}\left(1+\frac{z-x}{j}\right)^{\alpha} dz\right). \end{split} \end{equation*} Applying Taylor expansion to the terms $\left(1+\frac{1-x}{j}\right)^{\alpha}, \left(1-\frac{x}{j}\right)^{\alpha}$ and $\left(1+\frac{z-x}{j}\right)^{\alpha}$, and integrating the last integral by parts, we get that
\begin{equation*} \begin{split}
g_j(x)&=e^{-\lambda j}j^\alpha\bigg(e^{-\lambda(1-x)}\left(1+\alpha\frac{1-x}{j}\right) -e^{\lambda x }\left(1-\alpha\frac{x}{j}\right)
\\&+\lambda \int_{0}^{1} e^{-\lambda(z-x)}\left(1+\alpha\frac{z-x}{j}\right) dz\bigg)+h_j(x)=
\alpha\lambda^{-1}(1-e^{-\lambda}) e^{-\lambda j}j^{\alpha-1}e^{\lambda x}+h_j(x),
\end{split}
\end{equation*} where $h_j(x)=j^{-2}h(x)$, and $h(x)$ is, up to a constant multiplier, of order $e^{\lambda x}x^2$. Applying again the Lebesgue dominated convergence theorem, we get that $$\gamma^{I\!I}(j)\sim \alpha\lambda^{-1}(1-e^{-\lambda}) e^{-\lambda j}j^{\alpha-1}\int_{-\infty}^{1} e^{\lambda x}g_0(x)dx.$$ As regards the last integral, it equals \begin{equation*} \begin{split}\int_{-\infty}^{1} e^{\lambda x}g_0(x)dx&=\int_{-\infty}^{1} e^{\lambda x}\left( e^{-\lambda(1-x)}(1-x)^{\alpha}-e^{-\lambda(-x)_{+}}(-x)_{+}^{\alpha}
+\lambda \int_{x}^{1} e^{-\lambda(s-x)}(s-x)^{\alpha} ds\right)dx\\&=(e^\lambda-1)\int_{0}^{\infty}e^{-2\lambda z}z^\alpha dz+\lambda e^\lambda\int_{0}^{\infty}e^{-\lambda u}\int_0^ue^{-\lambda z}z^\alpha dzdu\\&
=(2e^\lambda-1) (2\lambda)^{-\alpha-1} \Gamma(\alpha+1),
\end{split}
\end{equation*}
and the proof follows.
\end{itemize}
\end{proof}
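The asymptotics \eqref{x-2} can be probed numerically by comparing quadrature values of $\gamma^{I}(j)$ with the claimed leading term; the ratios approach $1$ as $j$ grows. A sketch (the parameters and lags are illustrative, and the agreement is only up to $O(1/j)$ corrections):

```python
import numpy as np
from scipy.integrate import quad
from math import gamma as Gamma

H, lam = 0.3, 1.0
alpha = H - 0.5

def g(j, x):
    # moving-average kernel of TFGN (first kind)
    a, b = j + 1 - x, j - x
    ta = np.exp(-lam * a) * a**alpha if a > 0 else 0.0
    tb = np.exp(-lam * b) * b**alpha if b > 0 else 0.0
    return ta - tb

def gamma_I(j):
    val, _ = quad(lambda x: g(j, x) * g(0, x), -40, 1, points=[0], limit=400)
    return val

def leading(j):
    # claimed leading term:
    #   -2 Gamma(a+1)(cosh(lam)-1)/(2 lam)^(a+1) * e^(-lam*j) * j^a
    return (-2 * Gamma(alpha + 1) * (np.cosh(lam) - 1)
            / (2*lam)**(alpha + 1) * np.exp(-lam * j) * j**alpha)

for j in (8, 12):
    print(j, gamma_I(j) / leading(j))   # ratios approach 1 as j grows
```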
\begin{lem}\label{Hermite} Let $Y^{I}(j)=(C^I_1)^{-1}X^{I}_{\alpha,\lambda}(j)$ and $Y^{I\!I}(j)=(C_1^{I\!I})^{-1}X^{I\!I}_{\alpha,\lambda}(j)$ be normalized tempered fractional Gaussian noises with associated normalizing constants $C^I_1$ and $C^{I\!I}_1$, appearing in Lemma \ref{lem:usefulrelations}, and covariance functions $\gamma^I$ and $\gamma^{I\!I}$, respectively. Let $V^J_{n,q}=\frac{1}{\sqrt{n} }\sum_{k=1}^{n} H_q(Y^J(k)), J=I,I\!I $, where $H_q$ stands for the $q$th Hermite polynomial. Then \begin{equation}\begin{gathered}\label{orieq} \sigma^2_{n,J,q} : = {\operatorname{Var}} \left( V^J_{n,q}\right) = \frac{q!}{n} \, (C_1^J)^{-2q} \sum_{k,l=1}^{n} \left(\gamma^J (k-l)\right)^q\longrightarrow \sigma^2_{J,q,H,\lambda}:= q! \, (C_1^J)^{-2q} \sum_{k \in \mathbb Z} \left(\gamma^J (k)\right)^q < + \infty. \end{gathered}\end{equation}
Furthermore, this value is strictly positive provided that one of the following conditions holds: \begin{itemize}
\item[(a)] $J=I,I\!I $ and $q$ is even.
\item[(b)] $J=I$, $H\in (0,1/2]$ and $q >1$.
\item[(c)] $J=I\!I$, $H \ge 1/2$ and $q >1$.
\end{itemize}
\end{lem}
\begin{proof} Finiteness of the sum $\sum_{k \in \mathbb Z}\left|\gamma^J (k)\right|^q$ follows from Proposition \ref{prop:TFLN}. The first equality in relation \eqref{orieq} is mentioned, e.g., in the proof of Theorem 7.2.4 in \cite{n-p-AOP-Exact-Asymptotic}. Obviously, $$\sigma^2_{n,J,q} = q! \, (C^J_1)^{-2q} \sum_{\vert k \vert < n} \left(1 - \frac{\vert k \vert }{n}\right) \left(\gamma^J (k)\right)^q,$$
and this sum is nonnegative. For even $q$ all summands are nonnegative and the $k=0$ term is strictly positive, so the limit is strictly positive. For odd $q$ the limit exists due to the dominated convergence theorem and the finiteness of $\sum_{k \in \mathbb Z}\left|\gamma^J (k)\right|^q$, and this limit is obviously nonnegative.
For the strict positivity of the limiting variance, part (a) is clear. (b) First note that Proposition \ref{prop:TFLN}, part (a), yields that $\gamma^I \in \ell^1 (\mathbb Z)$, and hence, by a telescoping argument, we can write
\begin{equation}\label{eq:>0-1}
\sum_{k \in \mathbb Z} \frac{\gamma^I (k)}{(C^I_1)^2} = 1+ 2 \sum_{k \ge 1} \frac{\gamma^I (k)}{(C^I_1)^2} =0, \quad \Longrightarrow \quad \sum_{k \ge 1} \frac{\gamma^I (k)}{(C^I_1)^2} = -1/2.
\end{equation}
Now Lemma \ref{lem:Positive-Negative-Correlation} item (a) implies that $ \frac{\gamma^I (k)}{(C^I_1)^2} \in (-1,0)$ for every $0 \neq k \in \mathbb Z$. Let $q>1$ be an arbitrary integer. Then $$ \left( \frac{\gamma^I (k)}{(C^I_1)^2} \right)^q > \frac{\gamma^I (k)}{(C^I_1)^2}, \qquad k \ge 1.$$ Therefore,
\begin{align*}
\sum_{k\in\mathbb Z} \frac{(\gamma^I (k))^q}{(C^I_1)^{2q}} = 1+ 2 \sum_{k \ge 1} \frac{(\gamma^I (k))^q}{(C^I_1)^{2q}} > 1 + 2 \sum_{k \ge 1} \frac{\gamma^I (k)}{(C^I_1)^{2}}=0.
\end{align*}
(c) It is clear due to Lemma \ref{lem:Positive-Negative-Correlation}, part (b).
\end{proof}
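The strict positivity established above can also be observed numerically: approximating $\gamma^I$ by quadrature, the truncated sums $1+2\sum_{k\ge1}(\gamma^I(k)/\gamma^I(0))^q$, which are proportional to $\sigma^2_{I,q,H,\lambda}$, are strictly positive for $q>1$, while the truncated sum of $\gamma^I(k)/\gamma^I(0)$ itself is close to $0$, in line with \eqref{eq:>0-1}. A sketch with illustrative parameters ($H=0.3$, $\lambda=1$; truncation at $|k|\le 8$ is reasonable here due to the exponential decay of $\gamma^I$):

```python
import numpy as np
from scipy.integrate import quad

H, lam = 0.3, 1.0
alpha = H - 0.5

def g(j, x):
    # moving-average kernel of TFGN (first kind)
    a, b = j + 1 - x, j - x
    ta = np.exp(-lam * a) * a**alpha if a > 0 else 0.0
    tb = np.exp(-lam * b) * b**alpha if b > 0 else 0.0
    return ta - tb

def gamma_I(k):
    val, _ = quad(lambda x: g(k, x) * g(0, x), -40, 1, points=[0], limit=400)
    return val

rho = {k: gamma_I(k) / gamma_I(0) for k in range(0, 9)}
for q in (2, 3, 4):
    s_q = 1.0 + 2.0 * sum(rho[k]**q for k in range(1, 9))
    assert s_q > 0          # proportional to sigma^2_{I,q,H,lambda} > 0
    print(q, s_q)
# telescoping identity: the series of rho(k) over Z sums to 0
print(1.0 + 2.0 * sum(rho[k] for k in range(1, 9)))
```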
\begin{rem} Meerschaert and Sabzikar \cite[Remark 4.1]{Meerschaertsabzikar} pointed out that the covariance function $\gamma^{I}$ of TFGN of the first kind behaves asymptotically as $\frac{2H(2H-1)\Gamma(2H)}{(2\lambda)^{2H}}j^{-2}$ for large lags $j$. However, part (a) of Proposition \ref{prop:TFLN} shows that the correct asymptotics of $\gamma^I$ is $-\frac{2\Gamma(H+1/2)( \cosh \lambda-1 )}{ (2\lambda)^{H+1/2}} e^{-\lambda j} j^{H-1/2}$ for large lags $j$. \end{rem}
\subsection{CLTs for the tempered fractional Gaussian noise processes} As an application of the analysis of the noise covariance, we can derive CLTs for the tempered fractional Gaussian noise processes. Our first result treats the Gaussian fluctuations of the tempered fractional Gaussian noises in the setup of the Breuer--Major theorem. \begin{thm}[Breuer--Major theorem for tempered fractional Gaussian noises]\label{thm:Breuer-Major_tempered} Let $\gamma(dx) = \frac{1}{\sqrt{2\pi}}e^{-x^2/2} dx$ denote the standard Gaussian measure on the real line. Let $f \in L^2(\mathbb R,\gamma)$ be a centered function, i.e. $\mathbb E_\gamma[f]=0$, with Hermite rank $d \ge 1$, meaning that $f$ admits the Hermite expansion $f(x) = \sum_{j=d}^{\infty} a_j H_j(x)$ with $a_d \neq 0$.
We have that \begin{equation*} V^J_{n,d,H,\lambda} := \frac{1}{\sqrt{n}}\sum_{k=1}^{n}f(Y^{J}_k) \limd \mathcal{N}(0, {\sigma^{2}_{J,H,\lambda,d}}), \end{equation*} with \begin{equation}\label{eq:Target-Variance-TemperedI} \sigma^{2}_{J,H,\lambda,d}=\sum_{q=d} ^\infty a_q^2\sigma^2_{J,q,H,\lambda} \in [0,\infty), \end{equation} where $\sigma^2_{J,q,H,\lambda}$ is introduced in \eqref{orieq}.
In any of the following cases: (a) $J=I,I\!I$ and $a_q\neq0$ for at least one even $q$; (b) $J=I $ and $H\leq 1/2$; (c) $J=I\!I $ and {$H\ge1/2$}, we have $\sigma^{2}_{J,H,\lambda,d}>0$. \end{thm}
\begin{proof} Consider $J=I$. First note that, by applying part $(a)$ of Proposition \ref{prop:TFLN}, for every fixed $H,\lambda>0$ we have $\gamma^I \in \ell^{p}(\mathbb Z)$ for all $p>0$, and in particular $\gamma^I \in \ell^{d}(\mathbb Z)$, where $d$ denotes the Hermite rank. As a direct consequence, the classical Breuer--Major Theorem \ref{thm:Breuer-Major} can be applied, and in order to obtain the desired result, it only remains to compute the limiting variance. Next, by a standard argument (see, e.g., \cite[page 131]{Nourdin-Peccati-Bible}), the dominated convergence theorem yields, as $n$ tends to infinity,
\begin{eqnarray*}
\sigma^2_n &: = &{\operatorname{Var}}\left(V^J_{n,d,H,\lambda} \right) = \sum_{j=d}^{\infty} j! a^2_j C^{-2j}_1 \frac{1}{n} \, \sum_{k,l=1}^{n} \gamma^I(k-l)^j
= \sum_{j=d}^{\infty} j! a^2_j C^{-2j}_1 \sum_{\vert k \vert < n} \left( 1 - \frac{\vert k \vert }{n} \right) \gamma^I(k)^j\\ & \longrightarrow & \sum_{j=d}^{\infty} j! a^2_j \sum_{k\in \mathbb Z} \left( C^{-2}_1 \gamma^I(k) \right)^j=: \sigma^2_{I,H,\lambda,d}.
\end{eqnarray*}
Recall that $\vert C^{-2}_1 \gamma^I(k)\vert \le 1$ for all $k \in \mathbb Z$, and therefore one can readily infer that \[ \sigma^2_{I,H,\lambda,d} = \sum_{j=d}^{\infty} j! a^2_j C_1^{-2j} \sum_{k\in\mathbb Z} \gamma^I(k)^j \le C_1^{-2d} \, \sum_{j=d}^{\infty} j! a^2_j \sum_{k\in\mathbb Z} \vert \gamma^I(k)\vert^d \le C_1^{-2d} \, \Vert f \Vert^2_{L^2(\mathbb R,\gamma)} \sum_{k\in\mathbb Z} \vert \gamma^I(k)\vert^d < +\infty. \] Now the proof immediately follows from Lemma \ref{Hermite}.
\end{proof}
\begin{rem} \begin{itemize} \item[(i)] The message of Theorem \ref{thm:Breuer-Major_tempered} is that tempering always fulfills the sufficient condition in the Breuer--Major theorem, without any extra condition on the Hurst parameter $H$ and/or the tempering parameter $\lambda$. This is in contrast to the classical setup of fractional Gaussian noise, where there is often a phase transition for the validity of the CLT; see \cite[Theorem 7.4.1]{Nourdin-Peccati-Bible}. \item[(ii)] In fact, according to the second Dini theorem \cite[Proposition C.3.2]{Nourdin-Peccati-Bible}, the convergence in Theorem \ref{thm:Breuer-Major_tempered} holds in the Kolmogorov topology too. Furthermore, one can show that the convergence holds in a stronger topology under some mild assumptions on the function $f$. This is the topic of the forthcoming result.
\end{itemize}
\end{rem}
The next result provides a quantitative version of the aforementioned CLT. For given random elements $F$ and $G$, the \textit{total variation} distance between the laws of $F$ and $G$, denoted by $d_{TV}$, is defined as \[ d_{TV} (F,G):= \sup_{A} \Big \vert \mathbb P (F \in A) - \mathbb P(G \in A) \Big \vert,\] where the supremum is taken over all Borel subsets $A \in \mathcal{B}(\mathbb R)$ of the real line. Also, we introduce the Sobolev space $\mathbb{D}^{p,k}(\mathbb R,\gamma)$, where $p \ge 1$ and $k \in \mathbb N$, as the closure of the set of all polynomial mappings $f:\mathbb R \to \mathbb R$ with respect to the norm \[ \Vert f \Vert_{p,k} := \Bigg[ \sum_{i=0}^{k} \int_{\mathbb R}\vert f^{(i)} (x) \vert^p \gamma (dx) \Bigg]^{\frac{1}{p}}. \] Here $f^{(0)}=f$, and $f^{(i)}$ stands for the $i$th derivative of $f$, $i=1,\ldots, k$.
\begin{thm}\label{thm:quantitative-TV-BM-Tempered} Let $N \sim \mathcal{N}(0,1)$, let the assumptions and notation of Theorem \ref{thm:Breuer-Major_tempered} hold, and assume that $\sigma^{2}_{J,H,\lambda,d}>0$.
If $f \in L^{2}(\mathbb R,\gamma)$ with $\mathbb E_\gamma[f]=0$ belongs to the Sobolev space $\mathbb{D}^{1,4}(\mathbb R,\gamma)$, then \begin{equation}\label{eq:TV-rate-1} d_{TV} \left( \frac{V^{J}_n}{\sqrt{{\operatorname{Var}}\left( V^J_n \right)}}, N \right) = \mathcal{O} \left( n^{-\frac{1}{2}} \left( \sum_{\vert \nu \vert \le n} \Big \vert \gamma^J(\nu) \Big \vert \right)^{\frac{3}{2}}\right),\; n\rightarrow \infty. \end{equation}
In particular, $d_{TV} \left( \frac{V^J_n}{\sqrt{{\operatorname{Var}}\left( V^J_n \right)}}, N \right) \le C \, n^{-\frac{1}{2}}$ for some constant $C$ and all $n \ge 1$. Here $J=I, I\!I$. \end{thm}
\begin{proof}
Both estimates \eqref{eq:TV-rate-1} for $J=I, I\!I$ are direct consequences of \cite[Theorem 1.2]{n-p-y19} (recalled as Theorem \ref{thm:BM-TV-Rate} in Appendix B), together with the fact that the limiting variances given by relation \eqref{eq:Target-Variance-TemperedI} are nonzero by our assumption and bounded. Moreover, these estimates can be further bounded from above using the fact that $\gamma^I, \gamma^{I\!I} \in l^1(\mathbb Z)$ by Proposition \ref{prop:TFLN}. \end{proof}
\begin{rem} \begin{itemize} \item[$(a)$] Clearly, Theorem \ref{thm:quantitative-TV-BM-Tempered} implies Theorem \ref{thm:Breuer-Major_tempered} under the extra assumption that $f \in \mathbb{D}^{1,4}(\mathbb R,\gamma)$. To the best of our knowledge, this is the minimal assumption required to obtain a rate of convergence in the total variation metric; without such a regularity assumption, any reasonable rate of convergence in the total variation distance seems implausible.
\item[$(b)$] In general, it is an open problem in the field to provide a matching lower bound on the rate of convergence, namely, to establish that for some constant $C>0$
\begin{equation*}
d_{TV} \left( \frac{V^{J}_n}{\sqrt{{\operatorname{Var}}\left( V^{J}_n \right)}}, N \right) \ge C \, n^{-\frac{1}{2}}, \quad J=I,I\!I.
\end{equation*} A partial answer is given by the so-called optimal fourth moment theorem \cite{n-p-optimal}, recalled as Theorem \ref{thm:4MT} item $(b)$, when the function $f=H_q$ is a Hermite polynomial of degree $q\ge2$.
\end{itemize} \end{rem}
Denote $F^I_n:= \frac{V^I_n}{\sqrt{{\operatorname{Var}}\left( V^I_n \right) }}$ and let $p^{I,(m)}_n$ and $p^{(m)}_N$ be the $m$th derivatives of the densities of the random variables $F^I_n$ and $N$, respectively, where, as before, $N \sim \mathcal{N}(0,1)$.
\begin{thm}[Density Convergence in the Breuer--Major Theorem]\label{thm:BM-Tempered-Densities} Let all the assumptions and notations of Theorem \ref{thm:Breuer-Major_tempered} hold, and let the function $f$ be given by \[ f (x) = \sum_{j=d}^{q} a_j H_j (x), \] where $2 \le d \le q$ and $(a_j : j=d,\ldots,q)$ are real numbers. Then \begin{description} \item[$(a)$] For all $m\ge 0$ and every $p \in [1,\infty]$ (including $p=\infty$, corresponding to the uniform norm), \begin{equation}\label{eq:Density-Convergence} \Big \Vert p^{I,(m)}_n - p^{(m)}_N \Big \Vert_{L^{p}(\mathbb R)} \longrightarrow 0 \end{equation}
as $n$ tends to infinity.
\item[$(b)$] In particular, if $q=d$ (in other words the sequence $F_n$ belongs to the fixed Wiener chaos of order $d$), then for all $m \ge 0$ there exist $n_0 \in \mathbb N$ and a constant $C$ (depending only on $m$ and $q$) such that for all $n \ge n_0$ we have
\begin{equation}\label{eq:Quantitative-Density-Convergence} \Big \Vert p^{I,(m)}_n - p^{(m)}_N \Big \Vert_{L^{\infty}(\mathbb R)} \le C \, \sqrt{ \mathbb E \left[F^4_n \right] - 3 }. \end{equation}
\end{description} Similar statements are also valid with $V^I_n$ replaced by $V^{I\!I}_n$. \end{thm} \begin{rem} In the particular case $p=1$ and $m=0$ in $(a)$, the above estimate implies that $d_{TV}(F_n,N) \to 0$; however, it does not provide any rate of convergence. Moreover, the assumption on the function $f$ here is stronger than in the previous theorem. This is natural, since a stronger mode of convergence is required. \end{rem} \begin{proof} Let $h^I$ denote the spectral density function of TFGNI. Note that $h^I \in L^1([-\pi,\pi]) $ by virtue of \cite[Proposition 7.3.3]{Nourdin-Peccati-Bible}. In fact, $h^I \in L^\infty ([-\pi,\pi])$, and hence $h^I \in L^r ([-\pi,\pi])$ for every $r \ge 1$, because $\gamma^I \in l^1 (\mathbb Z)$. Moreover, \[ h^I (\omega ) \approx \frac{\omega^2}{(\lambda^2 + \omega^2)}, \quad \text{ as } \quad \vert \omega \vert \to 0. \] Hence, $\log(h^I) \in L^1([-\pi,\pi])$. Now part $(a)$ follows directly from \cite[Theorem 1.5]{HU2} and \cite[Corollary 1.6]{HU2}, recalled as Theorem \ref{thm:BM-Density-Rate}. The proof for the case TFGNII is similar. \end{proof} \begin{rem}
The condition $\log(h^I) \in L^1([-\pi,\pi])$ is referred to as the purely nondeterministic property in the literature; in particular, it implies the following useful representation: $$ X^I(k) = \sum_{j\ge 0} a_j \varepsilon_{k-j},$$ where $(\varepsilon_k)$ stands for a standard white noise. Roughly speaking, Malliavin calculus links the density and its derivatives to the existence of negative moments of the norm of the Malliavin derivative. The latter condition can be verified only in some special cases, and the assumption $\log(h^I) \in L^1([-\pi,\pi])$ is needed for its justification.
\end{rem}
{Fix $q \ge 2$}. Let $V^J_n=\frac{1}{\sqrt{n} }\sum_{k=1}^{n} H_q(Y^J(k))$, $J=I,I\!I$. Consider the sequence $(F^J_n : n \ge 1)$ defined via \begin{equation}\label{eq:hermite-variation} F^J_n = \frac{V^J_n}{\sqrt{{\operatorname{Var}}\left( V^J_n \right)}}= \frac{1}{\sqrt{n {\operatorname{Var}}\left( V^J_n\right)}} \sum_{k=1}^{n} H_q(Y^J(k)). \end{equation}
\begin{thm}[Exact asymptotics in the Breuer--Major CLT]\label{thm:Exact-BM-General-Chaos} Let $N \sim \mathcal{N}(0,1)$. Consider the sequence $(F^J_n : n\ge 1)$ given by relation \eqref{eq:hermite-variation}. Then, for every $z \in \mathbb R$, as $n\rightarrow\infty$, {the following exact asymptotic statement holds:}
\begin{equation}\label{eq:Calim}
\sqrt{n} \Big( \mathbb{P}\left( F^J_n \le z \right) - \mathbb{P} \left( N \le z \right) \Big) \longrightarrow \frac{ \rho}{3\sqrt{2\pi} } (z^2 -1 ) e^{- \frac{z^2}{2}},
\end{equation} with \begin{align} \rho &= C^{-3q}_1 \, qq! (q/2)! { q-1 \choose q/2 -1 }^2 \frac{1}{\sigma^3} \sum_{k,l \in \mathbb Z} \gamma^J(k)^{q/2} \gamma^J(l)^{q/2} \gamma^J(l-k)^{q/2}\nonumber \end{align} and \begin{align} \sigma^2 &= q! C^{-2q}_1 \sum_{k \in \mathbb Z} \left(\gamma^J(k)\right)^q >0\nonumber \end{align}
{provided that either $J=I,I\!I$, $q$ even, or $J=I$, $H \in (0,1/2]$, or $J=I\!I$, $H \ge 1/2$.} \end{thm}
\begin{proof} We only consider the case $J=I$; the other case is similar. We are going to apply \cite[Theorem 3.1]{n-p-AOP-Exact-Asymptotic}. In order to fit into that framework, we can assume, without loss of generality, that $X^I (k)= X (\varepsilon_k)$, where $\{ X(h) : h \in \mathfrak{H} \}$ is an adequate isonormal Gaussian process over a separable Hilbert space $\mathfrak{H}$ (see \cite[Proposition 7.2.3]{Nourdin-Peccati-Bible}) with $\langle \varepsilon_k, \varepsilon_l \rangle_{\mathfrak{H}} = C^{-2}_1\gamma^I (k-l)$ for every $k,l \in \mathbb Z$. So, we can write \[ F_n = I_q (f_n), \, \text{ where } \, f_n := \frac{1}{\sqrt{n {\operatorname{Var}} \left( V^I_n \right)}} \sum_{k=1}^{n} \varepsilon_k^{\otimes q}, \quad n \ge 1. \]
The notation $ \varepsilon_k^{\otimes q}$ stands for the $q$-fold tensor product. First, note that, as in Lemma \ref{Hermite}, the dominated convergence theorem yields that \begin{equation*}\begin{gathered} \sigma^2_n : = {\operatorname{Var}} \left( V^I_n\right) = \frac{q!}{n} \, C^{-2q}_1 \sum_{k,l=1}^{n} \left(\gamma^I (k-l)\right)^q = q! \, C^{-2q}_1 \sum_{\vert k \vert < n} \left(1 - \frac{\vert k \vert }{n}\right) \left(\gamma^I (k)\right)^q \\\longrightarrow \sigma^2:= q! \, C^{-2q}_1 \sum_{k \in \mathbb Z} \left(\gamma^I (k)\right)^q < + \infty. \end{gathered}\end{equation*} Therefore, according to the Breuer--Major theorem, we can conclude that $F_n \stackrel{d}{\longrightarrow} \mathcal{ N }(0,\sigma^2)$ as $n$ tends to infinity, and that $\sigma^2 >0$ according to Lemma \ref{Hermite}. Furthermore, $$DF_n = \frac{q}{\sqrt{n \sigma^2_n}}\sum_{k=1}^{n} \varepsilon_k I_{q-1} \left( \varepsilon^{\otimes (q-1)}_k \right).$$ Hence, using the product formula for multiple integrals \eqref{multiplication}, we can write
\begin{multline*}
\Vert DF_n \Vert^2_{\mathfrak{H}}= \frac{q^2}{n \sigma^2_n} \sum_{k,l=1}^{n} C^{-2}_1 \gamma^I (k-l) I_{q-1} \left( \varepsilon^{\otimes (q-1)}_k \right) I_{q-1} \left( \varepsilon^{\otimes (q-1)}_l \right)\\
= \frac{q^2 }{n \sigma^2_n} \sum_{k,l=1}^{n} C^{-2}_1 \gamma^I (k-l) \sum_{r=0}^{q-1} r! { q-1 \choose r}^2 I_{2q-2r-2} \left( \varepsilon^{\otimes (q-1)}_k \otimes_{r} \varepsilon^{\otimes (q-1)}_l \right)\\
= \frac{q^2 }{n \sigma^2_n} \sum_{k,l=1}^{n} \sum_{r=0}^{q-1} r! { q-1 \choose r}^2 C^{-2(r+1)}_1 \gamma^I (k-l)^{r+1} I_{2q-2r-2} \left( \varepsilon^{\otimes (q-r-1)}_k \otimes \varepsilon^{\otimes (q-r-1)}_l \right)\\
=\frac{q^2}{n \sigma^2_n} \sum_{r=1}^{q} (r-1)! { q-1 \choose r-1}^2 \sum_{k,l=1}^{n} I_{2q-2r} \left( \varepsilon^{\otimes (q-r)}_k \otimes \varepsilon^{\otimes (q-r)}_l\right) C^{-2r}_1 \gamma^I (k-l)^{r}.
\end{multline*}
Therefore,
\begin{multline*} G_n := \frac{1}{q} \Vert DF_n \Vert^2_{\mathfrak{H}} -1= \frac{q}{n \sigma^2_n} \sum_{r=1}^{q} (r-1)! { q-1 \choose r-1}^2 \sum_{k,l=1}^{n} I_{2q-2r} \left( \varepsilon^{\otimes (q-r)}_k \otimes \varepsilon^{\otimes (q-r)}_l\right) C^{-2r}_1 \gamma^I (k-l)^{r}-1\\ =\frac{q}{n \sigma^2_n} \sum_{r=1}^{q-1} (r-1)! { q-1 \choose r-1}^2 \sum_{k,l=1}^{n} I_{2q-2r} \left( \varepsilon^{\otimes (q-r)}_k \otimes \varepsilon^{\otimes (q-r)}_l\right) C^{-2r}_1 \gamma^I (k-l)^{r}.
\end{multline*}
Note that for each $n\ge 1$, the random variable $G_n$ belongs to a finite sum of Wiener chaoses up to order $2q-2$. Our next aim is to show that $\sqrt{n}G_n \to \mathcal{N}(0,\widehat{\sigma}^2)$ as $n\rightarrow\infty$, for some variance $\widehat{\sigma}^2$ whose value will be determined later on. To do this, we apply the fourth moment Theorem \ref{thm:4MT}. For each $r \in \{ 1,\ldots,q-1 \}$, set $$\varrho=\varrho(q,r,H,\lambda):= q (r-1)!{q-1 \choose r-1}^2$$ and define
\begin{equation}\label{eq:DF_n-r}
G_{n,r} : = \frac{\varrho}{\sigma^2_n \sqrt{n}} \sum_{k,l=1}^{n} I_{2q-2r} \left( \varepsilon^{\otimes (q-r)}_k \otimes \varepsilon^{\otimes (q-r)}_l\right) C^{-2r}_1 \gamma^I (k-l)^{r}.
\end{equation}
First, using Proposition \ref{prop:TFLN} part $(a)$, we obtain that
\begin{multline}\label{eq:r-Variance}
\sigma^2_{n,r} : = {\operatorname{Var}} (G_{n,r}) = (2q-2r)! \frac{\varrho^2}{n \, \sigma^4_n} \sum_{k_1,l_1,k_2,l_2=1}^{n} C^{-2r}_1 \gamma^I (k_1 -l_1)^r C^{-2r}_1 \gamma^I (k_2 - l_2)^r \\ \hskip6cm \times C^{-2(q-r)}_1 \gamma^I(k_1 - k_2)^{q-r} C^{-2(q-r)}_1 \gamma^{I}(l_1 -l_2 )^{q-r}\\
\to \frac{(2q-2r)!\varrho^2}{\sigma^4} C^{-4q}_1 \sum_{ k_1,k_2,k_3\in \mathbb Z} \gamma^I (k_1)^r \gamma^I(k_2)^{q-r}\gamma^I (k_3)^{r} \gamma^I (k_2 + k_3 - k_1)^{q-r}=: \sigma^2_r < +\infty
\end{multline}
as $n\rightarrow\infty$.
Next, we will show that for each $r \in \{ 1,\ldots,q-1 \}$, we have that
\begin{equation}\label{eq:G^r-CLT}
\widetilde{G}_{n,r} : = \frac{G_{n,r}}{\sqrt{\sigma^2_{n,r}}} \limd \mathcal{N}(0,1)
\end{equation}
as $n\rightarrow\infty$. To start with, note that
\begin{equation*}
D\widetilde{G}_{n,r} = \frac{(2q-2r)\varrho}{\sigma_{n,r} \sigma^2_n} \times \frac{1}{\sqrt{n}} \sum_{k,l=1}^{n} \varepsilon_k I_{2q-2r-1} \left( \varepsilon^{\otimes (q-r-1)}_k \otimes \varepsilon^{\otimes (q-r)}_l\right) C^{-2r}_1 \gamma^I (k-l)^{r}.
\end{equation*}
Therefore,
\begin{multline*}
\Vert D\widetilde{G}_{n,r} \Vert^2_{\mathfrak{H}} = \left( \frac{(2q-2r) \varrho}{\sigma_{n,r} \sigma^2_n} \right)^2\\
\times \Bigg[ \frac{1}{n} \sum_{k_1,l_1,k_2,l_2=1}^{n} C^{-4r}_1 \gamma^I (k_1 - l_1)^r \gamma^I (k_2 - l_2)^r C^{-2}_1 \gamma^I (k_1 - k_2) \\
\times I_{2q-2r-1} \left( \varepsilon^{\otimes (q-r-1)}_{k_1} \otimes \varepsilon^{\otimes (q-r)}_{l_1}\right) I_{2q-2r-1} \left( \varepsilon^{\otimes (q-r-1)}_{k_2}
\otimes \varepsilon^{\otimes (q-r)}_{l_2}\right) \Bigg] \\
= \left( \frac{(2q-2r)\varrho}{\sigma_{n,r} \sigma^2_n} \right)^2\\
\times \Bigg[ \frac{1}{n} \sum_{k_1,l_1,k_2,l_2=1}^{n} C^{-4r}_1 \gamma^I (k_1 - l_1)^r \gamma^I (k_2 - l_2)^r C^{-2}_1 \gamma^I (k_1 - k_2) \\
\sum_{s=0}^{2q-2r-1} s! { 2q-2r-1 \choose s}^2 I_{4q-4r-2-2s}
\left( \left( \varepsilon^{\otimes (q-r-1)}_{k_1} \otimes \varepsilon^{\otimes (q-r)}_{l_1}\right) \otimes_s \left( \varepsilon^{\otimes (q-r-1)}_{k_2} \otimes \varepsilon^{\otimes (q-r)}_{l_2}\right) \right) \Bigg],
\end{multline*}
and consequently,
\begin{multline*}
\frac{1}{(2q-2r)} \Vert D \widetilde{G}_{n,r} \Vert^2_{\mathfrak{H}} -1 = \frac{(2q-2r)\varrho^2}{\sigma^2_{n,r} \sigma^4_n} \times \Bigg[ \frac{1}{n} \sum_{k_1,l_1,k_2,l_2=1}^{n} C^{-4r}_1 \gamma^I (k_1 - l_1)^r \gamma^I (k_2 - l_2)^r C^{-2}_1 \gamma^I (k_1 - k_2) \\
\sum_{s=0}^{2q-2r-2} s! { 2q-2r-1 \choose s }^2 I_{4q-4r-2-2s}
\left( \left( \varepsilon^{\otimes (q-r-1)}_{k_1} \otimes \varepsilon^{\otimes (q-r)}_{l_1}\right) \otimes_s \left( \varepsilon^{\otimes (q-r-1)}_{k_2} \otimes \varepsilon^{\otimes (q-r)}_{l_2}\right) \right) \Bigg] \\
= \frac{(2q-2r)\varrho^2}{\sigma^2_{n,r} \sigma^4_n} \times \sum_{s=0}^{2q-2r-2} s! { 2q-2r-1 \choose s}^2\\
\times \Bigg[ \frac{1}{n} \sum_{k_1,l_1,k_2,l_2=1}^{n} C^{-4r}_1 \gamma^I (k_1 - l_1)^r \gamma^I (k_2 - l_2)^r C^{-2}_1 \gamma^I (k_1 - k_2) \\ \times I_{4q-4r-2-2s}
\left( \left( \varepsilon^{\otimes (q-r-1)}_{k_1} \otimes \varepsilon^{\otimes (q-r)}_{l_1}\right) \otimes_s \left( \varepsilon^{\otimes (q-r-1)}_{k_2} \otimes \varepsilon^{\otimes (q-r)}_{l_2}\right) \right) \Bigg]\\
= \frac{(2q-2r)\varrho^2}{\sigma^2_{n,r} \sigma^4_n}\times \sum_{s=0}^{q-r} s! { 2q-2r-1 \choose s}^2\\
\times \Bigg[ \frac{1}{n} \sum_{k_1,l_1,k_2,l_2=1}^{n} C^{-4r}_1 \gamma^I (k_1 - l_1)^r \gamma^I (k_2 - l_2)^r C^{-2}_1 \gamma^I (k_1 - k_2) \\
\times I_{4q-4r-2-2s} \left( \left( \varepsilon^{\otimes (q-r-1)}_{k_1} \otimes \varepsilon^{\otimes (q-r-1)}_{k_2} \right) \otimes \left( \varepsilon^{\otimes (q-r-s)}_{l_1} \otimes \varepsilon^{\otimes (q-r-s)}_{l_2} \right) \right) C^{-2s}_1 \gamma^I (l_1 -l_2)^{s} \Bigg]\\
+ \frac{(2q-2r)\varrho^2}{\sigma^2_{n,r} \sigma^4_n} \times \sum_{s=q-r+1}^{2q-2r-2} s! { 2q-2r-1 \choose s}^2\\
\times \Bigg[ \frac{1}{n} \sum_{k_1,l_1,k_2,l_2=1}^{n} C^{-4r}_1 \gamma^I (k_1 - l_1)^r \gamma^I (k_2 - l_2)^r C^{-2}_1 \gamma^I (k_1 - k_2) \\
\times I_{4q-4r-2-2s} \left( \varepsilon^{\otimes (2q-2r-1-s)}_{k_1} \otimes \varepsilon^{\otimes (2q-2r-1-s)}_{k_2} \right) C^{-2(s+1)}_1 \gamma^I (l_1 -l_2)^{q-r} \gamma^I(k_1 -k_2)^{s+1-q+r} \Bigg]\\
= \frac{(2q-2r)\varrho^2}{\sigma^2_{n,r} \sigma^4_n}\times
\sum_{s=1}^{q-r+1} (s-1)! { 2q-2r-1 \choose s-1}^2\\
\times \Bigg[ \frac{1}{n} \sum_{k_1,l_1,k_2,l_2=1}^{n} C^{-4r}_1 \gamma^I (k_1 - l_1)^r \gamma^I (k_2 - l_2)^r C^{-2}_1 \gamma^I (k_1 - k_2) \\
\times I_{4q-4r-2s} \left( \left( \varepsilon^{\otimes (q-r-1)}_{k_1} \otimes \varepsilon^{\otimes (q-r-1)}_{k_2} \right) \otimes \left( \varepsilon^{\otimes (q-r-s+1)}_{l_1} \otimes \varepsilon^{\otimes (q-r-s+1)}_{l_2} \right) \right) C^{-2(s-1)}_1 \gamma^I (l_1 -l_2)^{s-1} \Bigg]\\
+ \frac{(2q-2r)\varrho^2}{\sigma^2_{n,r} \sigma^4_n} \times \sum_{s=q-r+2}^{2q-2r-1} (s-1)! { 2q-2r-1 \choose s-1}^2\\
\times \Bigg[ \frac{1}{n} \sum_{k_1,l_1,k_2,l_2=1}^{n} C^{-4r}_1 \gamma^I (k_1 - l_1)^r \gamma^I (k_2 - l_2)^r C^{-2}_1 \gamma^I (k_1 - k_2) \\
\times I_{4q-4r-2s} \left( \varepsilon^{\otimes (2q-2r-s)}_{k_1} \otimes \varepsilon^{\otimes (2q-2r-s)}_{k_2} \right) C^{-2s}_1 \gamma^I (l_1 -l_2)^{q-r} \gamma^I(k_1 -k_2)^{s-q+r} \Bigg].
\end{multline*}
Now, for every $1 \le s \le q-r+1$, we have
\begin{multline*}
\mathbb E \Bigg \vert \frac{1}{n} \sum_{k_1,l_1,k_2,l_2=1}^{n} C^{-4r}_1 \gamma^I (k_1 - l_1)^r \gamma^I (k_2 - l_2)^r C^{-2}_1 \gamma^I (k_1 - k_2) \\
\times I_{4q-4r-2s} \left( \left( \varepsilon^{\otimes (q-r-1)}_{k_1} \otimes \varepsilon^{\otimes (q-r-1)}_{k_2} \right) \otimes \left( \varepsilon^{\otimes (q-r-s+1)}_{l_1} \otimes \varepsilon^{\otimes (q-r-s+1)}_{l_2} \right) \right) C^{-2(s-1)}_1 \gamma^I (l_1 -l_2)^{s-1} \Bigg \vert^2\\
=\frac{1}{n^2} \sum_{k_1,l_1,k_2,l_2,k_3,l_3,k_4,l_4=1}^{n} \Bigg[ C^{-8r-4s}_1 \gamma^I(k_1 -l_1)^{r} \gamma^I(k_2 -l_2)^r \gamma^I(k_1 - k_2) \gamma^I(l_1 - l_2)^{s-1}\\
\times \gamma^I (k_3 - l_3)^r \gamma^I(k_4 - l_4)^r \gamma^I(k_3 - k_4) \gamma^I(l_3 -l_4)^{s-1}\\
\times C^{-2(4q-4r-2s)}_1 \gamma^I(k_1 -k_3)^{q-r-1} \gamma^I (k_2 - k_4)^{q-r-1}\gamma^I(l_1 - l_3)^{q-r-s+1} \gamma^I(l_2 - l_4)^{q-r-s+1} \Bigg] \\
\sim_{n \to +\infty} C^{-8q}_1 \frac{1}{n} \sum_{x_1,...,x_7 \in \mathbb Z} \Bigg[ \gamma^I(x_1)^r \gamma^I(x_2)^r \gamma^I(x_3)\gamma^I(x_2+x_3 - x_1)^{s-1}\\
\times \gamma^I(x_4)^r \gamma^I(x_5)^r \gamma^I(x_6)\gamma^I(x_5+x_6 -x_4)^{s-1} \gamma^I(x_7)^{q-r-1}\\
\times \gamma^I(x_6+x_7-x_3)^{q-r-1} \gamma^I(x_4 +x_7 -x_1)^{q-r-s+1} \gamma^I(x_5+x_6+x_7-x_2-x_3)^{q-r-s+1}\Bigg] \\
\to 0, \, \text{ as } \, n \to \infty.
\end{multline*}
Similarly, for each $s \in \{ q-r+2, \ldots, 2q-2r-1 \}$, one can show that
\begin{multline*}
\mathbb E \Bigg \vert \frac{1}{n} \sum_{k_1,l_1,k_2,l_2=1}^{n} \gamma^I (k_1 - l_1)^r \gamma^I (k_2 - l_2)^r \gamma^I (k_1 - k_2) \\
\times I_{4q-4r-2-2s} \left( \varepsilon^{\otimes (2q-2r-1-s)}_{k_1} \otimes \varepsilon^{\otimes (2q-2r-1-s)}_{k_2} \right) \gamma^I (l_1 -l_2)^{q-r} \gamma^I(k_1 -k_2)^{s-q+r} \Bigg \vert^2\\
\to 0\, \text{ as } \, n \to \infty.
\end{multline*}
Hence, using the orthogonality property of multiple stochastic integrals, one can infer that
\[ {\operatorname{Var}} \left( \frac{1}{2q-2r} \Vert D \widetilde{G}_{n,r} \Vert^2_{\mathfrak{H}} -1 \right) \to 0, \] and the latter immediately implies \eqref{eq:G^r-CLT}. Furthermore, taking into account
\[ \sqrt{n} \left( \frac{1}{q} \Vert DF_n \Vert^2_{\mathfrak{H}} -1 \right) = \sum_{r=1}^{q-1} G_{n,r} \]
and, using Peccati--Tudor multidimensional fourth moment Theorem \ref{thm:4MT-MultiDim}, we can infer that, as $n$ tends to infinity, we have
\[ \sqrt{n} \left( \frac{1}{q} \Vert DF_n \Vert^2_{\mathfrak{H}} -1 \right) \limd \mathcal{N}(0,\widehat{\sigma}^2), \]
with
\begin{equation}\label{eq:Final-Variance}
\widehat{\sigma}^2 = \sum_{r=1}^{q-1} \sigma^2_r,
\end{equation}
where $\sigma^2_r$ is given by relation \eqref{eq:r-Variance}. Therefore, \cite[Theorem 2.6]{n-p-AOP-Exact-Asymptotic} part (B) (recalled as Theorem \ref{thm:4MT}, part {\bf (c)}) yields that
\[ \left( F_n, \sqrt{n} \left( \frac{1}{q} \Vert DF_n \Vert^2_{\mathfrak{H}} -1 \right) \right) \limd (N_1,N_2), \]
where $(N_1,N_2)$ is a centered two-dimensional Gaussian vector with $\mathbb E[N^2_1] =1$, $\mathbb E[N^2_2] = \widehat{\sigma}^2$, and $\mathbb E[N_1 N_2]= \rho$, where, by the orthogonality of multiple stochastic integrals,
\begin{align*}
\rho & = \lim_{n\to \infty} \mathbb E \left[ F_n \times \sqrt{n} \left( \frac{1}{q} \Vert DF_n \Vert^2_{\mathfrak{H}} -1 \right) \right] =
\lim_{n \to \infty} \mathbb E \left[ F_n \times G_{n,q/2} \right]\\
&= q! \, q \, (q/2)! { q-1 \choose q/2 -1 }^2 \lim_{n\to \infty} \frac{1}{\sigma^3_n} \times \frac{1}{n} \sum_{k,l,t=1}^{n} C^{-q}_1 \gamma^I(l-t)^{q/2} C^{-q}_1 \gamma^I(l-k)^{q/2} C^{-q}_1 \gamma^I(t-k)^{q/2}\\
&= C^{-3q}_1 \, q! \, q \, (q/2)! { q-1 \choose q/2 -1 }^2 \frac{1}{\sigma^3} \sum_{k,l\in \mathbb Z} \gamma^I(k)^{q/2} \gamma^I(l)^{q/2} \gamma^I(l-k)^{q/2}.
\end{align*} Finally, we deduce the claim \eqref{eq:Calim}. \end{proof}
\begin{rem}[Exact asymptotics in the Breuer--Major CLT on the second Wiener chaos]\label{rem:Exact-BM-2Chaos} Let $N \sim \mathcal{N}(0,1)$, and let $Y^J(k)= C^{-1}_1 \left(X^J(k+1)-X^J(k) \right)$, where $X^J(k)$ is either TFBM ($J=I$) or TFBMII ($J=I\!I$), with the associated normalizing constant $C_1$ appearing in Lemma \ref{lem:usefulrelations}, covariance function $\gamma^J$ and spectral density function $h^J$. Let ${q = 2}$. Consider the sequence $(F^J_n : n\ge 2)$ given by relation \eqref{eq:hermite-variation}, belonging to the second Wiener chaos. In this case, thanks to the fact that $\gamma^J \in l^{4/3} (\mathbb Z)$ (by Proposition \ref{prop:TFLN}), one can apply \cite[Proposition 3.8]{n-p-AOP-Exact-Asymptotic} (or \cite[Theorem 9.5.1]{Nourdin-Peccati-Bible}) to readily obtain, for every $z \in \mathbb R$, as $n$ tends to infinity, that \[ \sqrt{n} \Big( \mathbb{P} \left( F^J_n \le z \right) - \mathbb{P} \left( N \le z\right) \Big) \longrightarrow \frac{\Vert h^J \Vert^3_{L^3([-\pi,\pi])}}{6 \pi \sqrt{\pi} \Vert \gamma^J \Vert^3_{l^2(\mathbb Z)}} (z^2 -1)e^{-\frac{z^2}{2}}. \] \end{rem}
\begin{thm}[Almost Sure Convergence in the Breuer--Major CLT]\label{thm:Almost-Sure-CLT}
Let $N \sim \mathcal{N}(0,1)$, and let $f(x) = \sum_{q=1}^{\infty} a_q H_q (x) \in L^2(\mathbb R,\gamma)$, where, as before, $\gamma$ stands for the standard Gaussian measure on the real line. For every $n\ge 1$ define $V^J_n := \frac{1}{\sqrt{n}} \sum_{k=1}^{n} f(X^J_k)$. Consider the sequence $F^J_n:= \frac{V^J_n}{\sqrt{{\operatorname{Var}}(V^J_n)}}$.
If, in addition, the function $f$ is of class $C^2(\mathbb R)$ with $\mathbb E[f''(N)^4] < \infty$, then the sequence $(F^J_n : n \ge 1)$ satisfies an ASCLT, meaning that, almost surely, for every bounded continuous function $\varphi : \mathbb R \to \mathbb R$,
\[ \frac{1}{\log n} \sum_{k=1}^{n} \frac{1}{k} \varphi (F^J_k) \longrightarrow \mathbb E \left[ \varphi(N) \right], \quad \text{ as } \quad n \to \infty, \]
provided that either $J=I,I\!I$ and there is at least one even $q$ so that $a_q \neq 0$, or $J=I$, $H \in (0,1/2]$, or $J=I\!I$, $H \ge 1/2$ and there is at least one coefficient $a_q \neq 0$ for $q>1$ in both latter cases.
\end{thm}
\begin{proof} Note that $\gamma^J \in l^1(\mathbb Z)$ for $J=I,I\!I$ by Proposition \ref{prop:TFLN}. The claim then follows directly from Theorem 3.4 and Remark 3.5 in \cite{b-n-t-almost-sure}. \end{proof}
\begin{rem}\label{rem:ASCLT:Hermite-Variation}
The tempering parameter $\lambda$ removes completely the presence of any extra restriction on the Hurst parameter $H$ in the ASCLT for the $q$th Hermite variation of tempered fractional Gaussian noises, at least when $q$ is even. For the classical fractional Gaussian noise, we refer the reader to \cite[Theorem 6.2]{b-n-t-almost-sure}.
\end{rem}
We next investigate the asymptotic behavior of the third and fourth cumulants of tempered fractional Gaussian processes. First, we define the cumulants of a random variable. Let $F$ be a real-valued random variable with $\mathbb{E}|F|^n <\infty$ for $n\geq 1$. Let $\phi_{F}(t)=\mathbb{E}[e^{itF}]$ be the characteristic function of $F$. Then $$
\kappa_{j}(F) = (-i)^j \frac{d^j}{dt^j}\log \phi_{F}(t)\Big|_{t=0} $$ is called the $j$th cumulant of $F$. For every $n \ge 1$, recall that \begin{equation}\label{eq:Hermite-Variation} F^J_n = \frac{V^J_n}{\sqrt{{\operatorname{Var}}\left( V^J_n \right)}}= \frac{1}{\sqrt{n {\operatorname{Var}}\left( V^J_n\right)}} \sum_{k=1}^{n} H_q(Y^J(k)), \quad J=I,I\!I. \end{equation}
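For a centered random variable $F$ with $\mathbb E[F^2]=1$, the definition above reduces to $\kappa_3(F)=\mathbb E[F^3]$ and $\kappa_4(F)=\mathbb E[F^4]-3$. The following numerical sketch (illustrative, not part of the paper) checks this on the second-chaos element $F=H_2(N)/\sqrt{2}=(N^2-1)/\sqrt{2}$, whose exact cumulants are $\kappa_3=2\sqrt{2}$ and $\kappa_4=12$.

```python
import numpy as np

# Illustrative sketch: third and fourth cumulants of the standardized
# second-chaos variable F = (N^2 - 1)/sqrt(2), N ~ N(0,1), computed from
# moments via Gauss-Hermite quadrature.  Since E[F] = 0 and E[F^2] = 1,
# kappa_3(F) = E[F^3] and kappa_4(F) = E[F^4] - 3; the exact values are
# kappa_3 = 2*sqrt(2) and kappa_4 = 12.

x, w = np.polynomial.hermite.hermgauss(80)   # Gauss-Hermite nodes/weights
z = np.sqrt(2.0) * x                         # N(0,1) nodes
weights = w / np.sqrt(np.pi)                 # N(0,1) weights

F = (z**2 - 1.0) / np.sqrt(2.0)
m = lambda p: np.sum(weights * F**p)         # E[F^p]

kappa3 = m(3)
kappa4 = m(4) - 3.0
print(kappa3, kappa4)   # ~ 2.828..., 12.0
```

With 80 quadrature nodes the polynomial moments are exact up to machine precision.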
\begin{prop}[Optimal 3rd moment theorem]\label{optimal_fourth_moment}
Let $q \ge 2$ be an integer. Consider the sequence $(F^J_n \, : \, n \ge 1)$ given by relation \eqref{eq:Hermite-Variation}, where $H_q$ denotes the Hermite polynomial of degree $q$. Then, as $n$ tends to infinity:
\begin{itemize}
\item[(a)] For any even integer $q\geq 2$, it holds that $\kappa_{3}(F^J_n)\asymp n^{-\frac{1}{2}}$.
\item[(b)] For any integer $q\geq 2$, it holds that $\kappa_{4}(F^J_n)\asymp n^{-1}$ provided that either $q$ is even, or $J=I$, $H \in (0,1/2]$, or $J=I\!I$, $H \ge 1/2$.
\end{itemize}
Therefore, if $q\ge 2$ is an even integer, then there exist two constants $C_1, C_2 >0$ (independent of $n$) so that for every $n \ge1$, the following optimal third moment estimate holds:
\begin{equation}\label{eq:Optimal-Third-Moment}
C_2 \, \big \vert \mathbb E [(F^J_n)^3] \big\vert \le d_{TV} (F^J_n, N) \le C_1 \, \big \vert \mathbb E [(F^J_n)^3] \big\vert .
\end{equation}
\end{prop}
\begin{proof}
(a) First, Proposition \ref{prop:TFLN} implies that $\gamma^J \in \ell^{\frac{3q}{4}} (\mathbb Z)$. Now, from \cite[Proposition 6.3]{Bierme} we obtain that $0<\liminf \sqrt{n}\kappa_{3}(F^J_n)= \limsup \sqrt{n}\kappa_{3}(F^J_n)<\infty$, which gives the desired result in part $(a)$. For part $(b)$, using Proposition \ref{prop:TFLN} we infer that $\gamma^J \in l^{2}(\mathbb{Z})$, and hence \cite[Proposition 6.4]{Bierme} completes the proof by virtue of Lemma \ref{Hermite}. Finally, relation \eqref{eq:Optimal-Third-Moment} is a direct application of \cite[Theorem 2.1]{n-p-optimal} (recalled as Theorem \ref{thm:4MT} part {\bf (b)} in the appendix). \end{proof}
\begin{rem} \label{rem:optimalrate}
\begin{itemize}
\item[(i)] When $q$ is odd, see \cite[Remark 8.4.5]{Nourdin-Peccati-Bible}, item $1$. In fact, for a random variable $F$ belonging to a fixed Wiener chaos of odd order, all the odd cumulants vanish. On the other hand, for a general random variable $F$ with $\mathbb E[F]=0$ and $\mathbb E[F^2]=1$, we have $\kappa_3 (F)=\mathbb E[F^3]$ and $\kappa_4(F)=\mathbb E[F^4]-3$.
Hence, when $q$ is odd, then, as explained, $\mathbb E[(F^J_n)^3]=0$, and therefore the optimal rate of convergence in the total variation metric is given by $\kappa_{4}(F^J_n)$, which is of order $n^{-1}$ under the extra assumption that either $J=I$, $H \in (0,1/2]$ or $J=I\!I$, $H\ge 1/2$. When $q$ is even, there is a competition between the third and fourth cumulants in the optimal rate. Proposition \ref{optimal_fourth_moment} states that in this situation the third cumulant $\kappa_3(F^J_n)$ dominates, and the rate of convergence is $n^{-1/2}$.
\item[(ii)] The tempering parameter $\lambda$ manifests its role in the optimal fourth moment theorem. In fact, the optimal rates of convergence of the third and fourth cumulants of $F^J_n$ given by Proposition \ref{optimal_fourth_moment} are valid for any $H>0$ and $\lambda>0$ when $q$ is even. This is in contrast with the case of fractional Brownian motion, where $\kappa_{3}(F_n)\asymp n^{-\frac{1}{2}}$ provided $H\in (0, 1-\frac{2}{3q})$ with an even integer $q\geq 2$, and $\kappa_{4}(F_n)\asymp n^{-1}$ provided $H\in (0, 1-\frac{3}{4q})$ with $q\in \{2,3\}$; see Propositions 6.6 and 6.7 in \cite{Bierme}. It is also worth mentioning that for even $q$ the sequence $(F^J_n : n\ge 1)$ given by \eqref{eq:Hermite-Variation} exhibits the interesting scenario that $\kappa_{3}(F^J_n) \approx \left( \kappa_{4}(F^J_n) \right)^{\frac{1}{2}}$, and hence the third cumulant $\kappa_{3}(F^J_n)$ asymptotically dominates the fourth cumulant as $n$ tends to infinity. A similar phenomenon appears in \cite{v-3Cumulant}, in which convergence of the third cumulants to zero implies convergence of the fourth cumulants to zero.
\end{itemize} \end{rem}
\section{Acknowledgments} Farzad Sabzikar would like to thank David Nualart for stimulating discussion on the proof of Theorem \ref{theo:variation} as well as suggesting to investigate the role of tempering in the optimal fourth moment theorem \cite{n-p-optimal}. Yu. Mishura was partially supported by the ToppForsk project nr. 274410 of the Research Council of Norway with title STORM: Stochastics for Time-Space Risk Models.
\section{Appendix A }\label{sec:appendix} This appendix contains some notation, definitions, and well-known results that are used in the main text of this paper.
\subsection{Special functions $K_{\nu}$ and ${_2F_3}$}
In this subsection we present the definitions of the two special functions $K_{\nu}$ and ${_2F_3}$ that we have used in Section \ref{sec2}. We also provide the proof of Lemma \ref{lem:propo24} below, which we used in the proof of Proposition \ref{lem:usefulrelations 1}. First, we start with the definition of the modified Bessel function of the second kind, which appears in the variance and covariance function of TFBM; see part $(a)$ of Lemma \ref{lem:usefulrelations}. The modified Bessel function of the second kind $K_{\nu}(x)$ has the integral representation \begin{equation*} K_{\nu}(x)=\int_0^\infty e^{-x \cosh t} \cosh (\nu t)\, dt, \end{equation*} where $\nu>0$, $x>0$. The function $K_{\nu}(x)$ also has the representation \begin{equation*} K_{\nu}(x) = \frac{1}{2}\pi \frac{ I_{-\nu}(x) - I_{\nu}(x) }{\sin(\pi \nu)}, \end{equation*}
where $I_{\nu}(x)=(\frac{1}{2}x)^{\nu} \sum_{n=0}^{\infty} \frac{ ( \frac{1}{2}x)^{2n} }{n! \Gamma(n+1+\nu)}$ is the modified Bessel function of the first kind. We refer the reader to \cite[Section 8.43]{Gradshteyn} for more information about the modified Bessel function of the second kind.
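The integral representation above can be checked numerically against the closed form available for half-integer order, $K_{1/2}(x)=\sqrt{\pi/(2x)}\,e^{-x}$. The following sketch (illustrative, not part of the paper; the truncation point $t_{\max}=25$ and the grid size are arbitrary numerical choices) evaluates the integral by the trapezoidal rule at $x=1$.

```python
import numpy as np

# Illustrative sketch: evaluate K_nu(x) = \int_0^infty exp(-x cosh t) cosh(nu t) dt
# by the trapezoidal rule, and compare with the closed form
# K_{1/2}(x) = sqrt(pi/(2x)) * exp(-x) at x = 1.

def K_integral(nu, x, t_max=25.0, n=200000):
    t = np.linspace(0.0, t_max, n)
    integrand = np.exp(-x * np.cosh(t)) * np.cosh(nu * t)
    dt = t[1] - t[0]
    # trapezoidal rule; the integrand decays double-exponentially in t
    return np.sum(integrand[:-1] + integrand[1:]) * dt / 2.0

x = 1.0
approx = K_integral(0.5, x)
exact = np.sqrt(np.pi / (2.0 * x)) * np.exp(-x)
print(approx, exact)   # both ~ 0.46107
```

The double-exponential decay of the integrand makes the truncated trapezoidal approximation accurate to many digits.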
Next, we define the generalized hypergeometric function ${_2F_3}$ that we used to obtain the variance and covariance of TFBMII; see part $(b)$ of Lemma \ref{lem:usefulrelations}. In general, a generalized hypergeometric function ${_pF_q}$ is defined by \begin{equation*} {_pF_q}(a_1,\cdots, a_p; b_1, \cdots, b_q; z)= \sum_{k=0}^{\infty} \frac{ (a_1)_{k} (a_2)_{k} \cdots (a_p)_{k} }{ (b_1)_{k} (b_2)_{k} \cdots (b_q)_{k} }\frac{z^k}{k!}, \end{equation*} where $(c)_{k}=\frac{\Gamma(c + k)}{\Gamma(c)}$ is the Pochhammer symbol. Therefore \begin{equation*} {_2F_3}( \{a_1, a_2\}, \{b_1, b_2, b_3\}, z)= {_2F_3}(a_1, a_2; b_1, b_2, b_3; z)= \frac{\Gamma(b_1)\Gamma(b_2)\Gamma(b_3)}{\Gamma(a_1)\Gamma(a_2)} \sum_{k=0}^{\infty} \frac{ \Gamma(a_1 + k)\Gamma(a_2 + k) }{ \Gamma(b_1 + k)\Gamma(b_2 + k) \Gamma(b_3 + k) } \frac{z^k}{k!}. \end{equation*} \begin{lem}\label{lem:propo24} The integral $I=\int_{0}^{\infty}\left(\int_{0}^{\infty}(s+x)^{H-3/2}e^{-\lambda(s+x)}ds\right)^2dx$ is finite for any $H>0$. \end{lem} \begin{proof} Let $H<1/2$. Then \begin{equation*}\begin{gathered}I=\int_{0}^{\infty}\left(\int_{x}^{\infty}s^{H-3/2}e^{-\lambda s}ds\right)^2dx\leq \int_{0}^{\infty}e^{-2\lambda x}\left(\int_{x}^{\infty}s^{H-3/2}ds\right)^2dx\\=\left(H-1/2\right)^{-2}\int_{0}^{\infty}e^{-2\lambda x}x^{2H-1}dx =\left(H-1/2\right)^{-2}(2\lambda)^{-2H}\Gamma(2H)<\infty.
\end{gathered}\end{equation*}
Let $H>1/2$. Then
\begin{equation*}\begin{gathered}I=\int_{0}^{\infty}\left(\int_{x}^{\infty}s^{H-3/2}e^{-\lambda s}ds\right)^2dx\leq
\int_{0}^{\infty}e^{-\lambda x}\left(\int_{x}^{\infty}s^{H-3/2}e^{-\frac{ \lambda s}{2}}ds\right)^2dx\\\leq
\int_{0}^{\infty}e^{-\lambda x}\left(\int_{0}^{\infty}s^{H-3/2}e^{-\frac{ \lambda s}{2}}ds\right)^2dx
=2^{2H-1}\lambda^{-2H}\Gamma^2(H-1/2)<\infty.
\end{gathered}\end{equation*}
Finally, let $H=1/2$. Then
\begin{equation*}\begin{gathered}I=\int_{0}^{\infty}\left(\int_{x}^{\infty}s^{-1}e^{-\lambda s}ds\right)^2dx\leq
\int_{0}^{1}x^{-1/2}\left(\int_{0}^{\infty}s^{ -3/4}e^{-\frac{ \lambda s}{2}}ds\right)^2dx\\+\int_{1}^{\infty}x^{-2}\left(\int_{0}^{\infty} e^{-\lambda s}ds\right)^2dx<\infty,
\end{gathered}\end{equation*}
and the proof follows.
\end{proof}
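A step in the $H>1/2$ case of the proof is the Gamma-integral evaluation $\int_{0}^{\infty}s^{H-3/2}e^{-\lambda s/2}\,ds=(2/\lambda)^{H-1/2}\Gamma(H-1/2)$, which produces the bound $2^{2H-1}\lambda^{-2H}\Gamma^2(H-1/2)$. The following sketch (illustrative, not part of the paper; the values $H=2.5$ and $\lambda=1.5$ are arbitrary sample choices with $H>1/2$) verifies this identity numerically.

```python
import math
import numpy as np

# Illustrative sketch: check the Gamma-integral identity used in the
# H > 1/2 case of the proof of Lemma lem:propo24,
#   \int_0^infty s^{H-3/2} e^{-lambda s / 2} ds = (2/lambda)^{H-1/2} Gamma(H-1/2),
# for the sample values H = 2.5, lambda = 1.5 (the integrand is then smooth).

H, lam = 2.5, 1.5
s = np.linspace(0.0, 80.0, 1000001)
integrand = s**(H - 1.5) * np.exp(-lam * s / 2.0)
ds = s[1] - s[0]
numeric = np.sum(integrand[:-1] + integrand[1:]) * ds / 2.0   # trapezoidal rule
exact = (2.0 / lam)**(H - 0.5) * math.gamma(H - 0.5)
print(numeric, exact)
```

Multiplying `exact**2` by $\int_0^\infty e^{-\lambda x}dx = 1/\lambda$ reproduces the bound $2^{2H-1}\lambda^{-2H}\Gamma^2(H-1/2)$ stated in the proof.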
\section{Appendix B }\label{sec:appendix-GA} This appendix section is devoted to the essential elements of Gaussian analysis and Malliavin calculus. For the sake of completeness, we also present some known results in Malliavin--Stein method that are used in this paper. For the first part, the reader can consult \cite{Nourdin-Peccati-Bible,nualart,DavidEulaliaBook} for further details. A comprehensive reference on the Malliavin--Stein method is the excellent monograph \cite{Nourdin-Peccati-Bible}.
\subsection{Elements of Gaussian Analysis}\label{ss:isonormal}
Let $ \mathfrak{H}$ be a real separable Hilbert space. For any $q\geq 1$, we write $ \mathfrak{H}^{\otimes q}$ and $ \mathfrak{H}^{\odot q}$ to indicate, respectively, the $q$th tensor power and the $q$th symmetric tensor power of $ \mathfrak{H}$; we also set by convention $ \mathfrak{H}^{\otimes 0} = \mathfrak{H}^{\odot 0} =\mathbb R$. When $\mathfrak{H} = L^2(A,\mathcal{A}, \mu) =:L^2(\mu)$, where $\mu$ is a $\sigma$-finite and non-atomic measure on the measurable space $(A,\mathcal{A})$, then $ \mathfrak{H}^{\otimes q} = L^2(A^q,\mathcal{A}^q,\mu^q)=:L^2(\mu^q)$, and $ \mathfrak{H}^{\odot q} = L_s^2(A^q,\mathcal{A}^q,\mu^q) := L_s^2(\mu^q)$, where $L_s^2(\mu^q)$ stands for the subspace of $L^2(\mu^q)$ composed of those functions that are $\mu^q$-almost everywhere symmetric. We denote by $W=\{W(h) : h\in \mathfrak{H}\}$ an {\it isonormal Gaussian process} over $ \mathfrak{H}$. This means that $W$ is a centered Gaussian family, defined on some probability space $(\Omega ,\mathcal{F},P)$, with a covariance structure given by the relation $\mathbb E\left[ W(h)W(g)\right] =\langle h,g\rangle _{ \mathfrak{H}}$. We also assume that $\mathcal{F}=\sigma(W)$, that is, $\mathcal{F}$ is generated by $W$, and use the shorthand notation $L^2(\Omega) := L^2(\Omega, \mathcal{F}, P)$.
For every $q\geq 1$, the symbol $C_{q}$ stands for the $q$th {\it Wiener chaos} of $W$, defined as the closed linear subspace of $L^2(\Omega)$
generated by the family $\{H_{q}(W(h)) : h\in \mathfrak{H},\left\| h\right\| _{ \mathfrak{H}}=1\}$, where $H_{q}$ is the $q$th Hermite polynomial, defined as follows: \begin{equation}\label{hq} H_q(x) = (-1)^q e^{\frac{x^2}{2}} \frac{d^q}{dx^q} \big( e^{-\frac{x^2}{2}} \big). \end{equation} We write by convention $C_{0} = \mathbb{R}$. For any $q\geq 1$, the mapping $I_{q}(h^{\otimes q})=H_{q}(W(h))$ can be extended to a linear isometry between the symmetric tensor product $ \mathfrak{H}^{\odot q}$
(equipped with the modified norm $\sqrt{q!}\left\| \cdot \right\| _{ \mathfrak{H}^{\otimes q}}$) and the $q$th Wiener chaos $C_{q}$. For $q=0$, we write by convention $I_{0}(c)=c$, $c\in\mathbb{R}$.
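For concreteness, a direct computation from (\ref{hq}) gives the first few Hermite polynomials,
\begin{equation*}
H_1(x)=x, \qquad H_2(x)=x^2-1, \qquad H_3(x)=x^3-3x,
\end{equation*}
so that, for $h\in\mathfrak{H}$ with $\left\| h\right\| _{ \mathfrak{H}}=1$, one has for instance $I_1(h)=W(h)$, $I_2(h^{\otimes 2})=W(h)^2-1$ and $I_3(h^{\otimes 3})=W(h)^3-3W(h)$.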
It is well-known that $L^2(\Omega)$ can be decomposed into the infinite orthogonal sum of the spaces $C_{q}$: this means that any square-integrable random variable $F\in L^2(\Omega)$ admits the following {\it Wiener-It\^{o} chaotic expansion} \begin{equation} F=\sum_{q=0}^{\infty }I_{q}(f_{q}), \label{E} \end{equation} where the series converges in $L^2(\Omega)$, $f_{0}=E[F]$, and the kernels $f_{q}\in \mathfrak{H}^{\odot q}$, $q\geq 1$, are uniquely determined by $F$. For every $q\geq 0$, we denote by $J_{q}$ the orthogonal projection operator on the $q$th Wiener chaos. In particular, if $F\in L^2(\Omega)$ has the form (\ref{E}), then $J_{q}F=I_{q}(f_{q})$ for every $q\geq 0$.
Let $\{e_{k},\,k\geq 1\}$ be a complete orthonormal system in $\mathfrak{H}$. Given $f\in \mathfrak{H}^{\odot p}$ and $g\in \mathfrak{H}^{\odot q}$, for every $r=0,\ldots ,p\wedge q$, the \textit{contraction} of $f$ and $g$ of order $r$ is the element of $ \mathfrak{H}^{\otimes (p+q-2r)}$ defined by \begin{equation} f\otimes _{r}g=\sum_{i_{1},\ldots ,i_{r}=1}^{\infty }\langle f,e_{i_{1}}\otimes \ldots \otimes e_{i_{r}}\rangle _{ \mathfrak{H}^{\otimes
r}}\otimes \langle g,e_{i_{1}}\otimes \ldots \otimes e_{i_{r}} \rangle_{ \mathfrak{H}^{\otimes r}}. \label{v2} \end{equation} Notice that the definition of $f\otimes_r g$ does not depend on the particular choice of $\{e_k,\,k\geq 1\}$, and that $f\otimes _{r}g$ is not necessarily symmetric; we denote its symmetrization by $f\widetilde{\otimes }_{r}g\in \mathfrak{H}^{\odot (p+q-2r)}$. Moreover, $f\otimes _{0}g=f\otimes g$ equals the tensor product of $f$ and $g$ while, for $p=q$, $f\otimes _{q}g=\langle f,g\rangle _{ \mathfrak{H}^{\otimes q}}$. When $\mathfrak{H} = L^2(A,\mathcal{A},\mu)$ and $r=1,...,p\wedge q$, the contraction $f\otimes _{r}g$ is the element of $L^2(\mu^{p+q-2r})$ given by \begin{eqnarray}\label{e:contraction}
f\otimes _{r}g (x_1,...,x_{p+q-2r}) &=& \int_{A^r} f(x_1,...,x_{p-r},a_1,...,a_r) \notag\\ && \quad\quad\quad\quad \times g(x_{p-r+1},...,x_{p+q-2r},a_1,...,a_r)d\mu(a_1)...d\mu(a_r). \notag \end{eqnarray}
It is a standard fact of Gaussian analysis that the following {\it multiplication formula} holds: if $f\in \mathfrak{H}^{\odot p}$ and $g\in \mathfrak{H}^{\odot q}$, then \begin{eqnarray}\label{multiplication} I_p(f) I_q(g) = \sum_{r=0}^{p \wedge q} r! {p \choose r}{ q \choose r} I_{p+q-2r} (f\widetilde{\otimes}_{r}g). \end{eqnarray}
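To illustrate (\ref{multiplication}), take $p=1$ and $q=2$: for $f\in\mathfrak{H}$ and $g\in\mathfrak{H}^{\odot 2}$,
\begin{equation*}
I_1(f)\, I_2(g) \;=\; I_3(f\widetilde{\otimes} g) + 2\, I_1(f\otimes_1 g),
\end{equation*}
since the only admissible contraction orders are $r=0,1$ and $1!\binom{1}{1}\binom{2}{1}=2$. In particular, the product of two elements of fixed Wiener chaoses always has a finite chaotic expansion.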
We now introduce some basic elements of the Malliavin calculus with respect to the isonormal Gaussian process $W$. Let $\mathcal{S}$ be the set of all cylindrical random variables of the form \begin{equation} F=g\left( W(\phi _{1}),\ldots ,W(\phi _{n})\right) , \label{v3} \end{equation} where $n\geq 1$, $g:\mathbb{R}^{n}\rightarrow \mathbb{R}$ is an infinitely differentiable function such that its partial derivatives have polynomial growth, and $\phi _{i}\in \mathfrak{H}$, $i=1,\ldots,n$. The {\it Malliavin derivative} of $F$ with respect to $W$ is the element of $L^2(\Omega , \mathfrak{H})$ defined as \begin{equation*} DF\;=\;\sum_{i=1}^{n}\frac{\partial g}{\partial x_{i}}\left( W(\phi_{1}),\ldots ,W(\phi _{n})\right) \phi _{i}. \end{equation*} In particular, $DW(h)=h$ for every $h\in \mathfrak{H}$. By iteration, one can define the $m$th derivative $D^{m}F$, which is an element of $L^2(\Omega , \mathfrak{H}^{\odot m})$, for every $m\geq 2$. For $m\geq 1$ and $p\geq 1$, ${\mathbb{D}}^{m,p}$ denotes the closure of $\mathcal{S}$ with respect to the norm $\Vert \cdot \Vert _{m,p}$, defined by the relation \begin{equation*}
\Vert F\Vert _{m,p}^{p}\;=\;\mathbb E\left[ |F|^{p}\right] +\sum_{i=1}^{m}\mathbb E\left[ \Vert D^{i}F\Vert _{ \mathfrak{H}^{\otimes i}}^{p}\right]. \end{equation*} We often use the (canonical) notation $\mathbb{D}^{\infty} := \bigcap_{m\geq 1} \bigcap_{p\geq 1}\mathbb{D}^{m,p}$.
The Malliavin derivative $D$ obeys the following \textsl{chain rule}. If $\varphi :\mathbb{R}^{n}\rightarrow \mathbb{R}$ is continuously differentiable with bounded partial derivatives and if $F=(F_{1},\ldots ,F_{n})$ is a vector of elements of ${\mathbb{D}}^{1,2}$, then $\varphi (F)\in {\mathbb{D}}^{1,2}$ and \begin{equation}\label{e:chainrule} D\,\varphi (F)=\sum_{i=1}^{n}\frac{\partial \varphi }{\partial x_{i}}(F)DF_{i}. \end{equation}
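For instance, the random variable $F=W(h)^{2}$ is of the form (\ref{v3}) with $n=1$ and $g(x)=x^{2}$, whose derivative has polynomial growth; the definition of $D$ then yields
\begin{equation*}
DF = 2\,W(h)\,h, \qquad \Vert DF\Vert_{\mathfrak H}^{2} = 4\,W(h)^{2}\,\Vert h\Vert_{\mathfrak H}^{2},
\end{equation*}
and the chain rule (\ref{e:chainrule}) extends exactly this kind of computation to maps $\varphi$ that are merely continuously differentiable with bounded partial derivatives.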
Note also that a random variable $F$ as in (\ref{E}) is in ${\mathbb{D}}^{1,2}$ if and only if
$\sum_{q=1}^{\infty }q\|J_qF\|^2_{L^2(\Omega)}<\infty$ and in this case one has the following explicit relation: $$\mathbb E\left[ \Vert DF\Vert _{ \mathfrak{H}}^{2}\right]
=\sum_{q=1}^{\infty }q\|J_qF\|^2_{L^2(\Omega)}.$$ If $ \mathfrak{H}= L^{2}(A,\mathcal{A},\mu )$ (with $\mu $ non-atomic), then the derivative of a random variable $F$ as in (\ref{E}) can be identified with the element of $L^2(A \times \Omega )$ given by \begin{equation} D_{t}F=\sum_{q=1}^{\infty }qI_{q-1}\left( f_{q}(\cdot ,t)\right) ,\quad t \in A. \label{dtf} \end{equation}
The operator $L$, defined as $L=-\sum_{q=0}^{\infty }qJ_{q}$, is the {\it infinitesimal generator of the Ornstein-Uhlenbeck semigroup}. The domain of $L$ is \begin{equation*}
\mathrm{Dom}L=\{F\in L^2(\Omega ):\sum_{q=1}^{\infty }q^{2}\left\|
J_{q}F\right\| _{L^2(\Omega )}^{2}<\infty \}=\mathbb{D}^{2,2}\text{.} \end{equation*}
For any $F \in L^2(\Omega )$, we define $L^{-1}F =-\sum_{q=1}^{\infty }\frac{1}{q} J_{q}(F)$. The operator $L^{-1}$ is called the \textit{pseudo-inverse} of $L$. Indeed, for any $F \in L^2(\Omega )$, we have that $L^{-1} F \in \mathrm{Dom}L = \mathbb{D}^{2,2}$, and \begin{equation}\label{Lmoins1} LL^{-1} F = F - \mathbb E(F). \end{equation}
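As an illustration of (\ref{Lmoins1}), consider $F=c+I_{q}(f)$ with $c\in\mathbb R$, $q\ge 1$ and $f\in\mathfrak H^{\odot q}$. Then $L^{-1}F=-\frac{1}{q}I_{q}(f)$ and
\begin{equation*}
LL^{-1}F = -q\Big(-\frac{1}{q}\Big) I_{q}(f) = I_{q}(f) = F-\mathbb E(F).
\end{equation*}
Moreover, $-DL^{-1}F = \frac{1}{q}DF$, so that $\langle DF,-DL^{-1}F\rangle_{\mathfrak H} = \frac{1}{q}\Vert DF\Vert^{2}_{\mathfrak H}$; this inner product is precisely the quantity controlling the Gaussian approximation in Theorem \ref{thm:BM-Exact-Asymptotic} below.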
\subsection{Malliavin--Stein method: selective results}\label{App:B-MS} Next, we collect some known findings in the realm of Malliavin--Stein method that we have used in Section \ref{sec:BM}. We begin with the celebrated {\it fourth moment theorem}.
\begin{thm}[Fourth Moment Theorem and Ramifications, see \cite{n-p-05,Nualart-Latorre,NP-PTRF,n-p-optimal,n-p-AOP-Exact-Asymptotic}] \label{thm:4MT} Fix $q \ge 2$. Let $F_n = I_q (f_n), n\ge 1$ be a sequence of elements belonging to the $q$th Wiener chaos of some isonormal Gaussian process $W = \{ W(h) : h \in \mathfrak{H} \}$ such that $\mathbb E[F^2_n]= q! \Vert f_n \Vert^2_{\mathfrak{H}^{\otimes q}} =1$ for every $n \ge 1$. \begin{description} \item[(a)] Then the following asymptotic statements are equivalent as $n \to \infty$: \begin{itemize}
\item[(i)] $F_n$ converges in distribution towards $\mathcal{ N }(0,1)$.
\item[(ii)] $\mathbb E[F^4_n] \to 3$.
\item[(iii)] $\Vert f_n \otimes_r f_n \Vert_{\mathfrak{H}^{\otimes (2q-2r)}} \to 0$ for $r=1,...,q-1$.
\item[(iv)] $\Vert DF_n \Vert^2_{\mathfrak{H}} \to q $ in $L^2$.
\end{itemize} \item[(b)] Furthermore, whenever one of the equivalent statements at item {\bf (a)} takes place, then there exist two constants $C_1$ and $C_2$ (independent of $n$) such that the following optimal rate of convergence in total variation distance holds: \[ C_1 \, \max\{ \abs{\kappa_3(F_n)}, \kappa_4(F_n) \} \leq d_{TV}(F_n,N) \leq C_2 \, \max\{ \abs{\kappa_3(F_n)}, \kappa_4(F_n) \}.\] \item[(c)] Assume that one of the equivalent statements at item {\bf (a)} takes place. Let $G_n$, $n\ge 1$, be a sequence of the form \[ G_n = \sum_{p=1}^{M} I_p (g^{(p)}_n) \] for some $M\ge 1$ (independent of $n$) and some kernels $g^{(p)}_n \in \mathfrak{H}^{ \odot p}, p=1,...,M$. Suppose that, as $n$ tends to infinity, \[ \mathbb E[G^2_n] = \sum_{p=1}^{M} p! \Vert g^{(p)}_n \Vert^2_{\mathfrak{H}^{\otimes p}} \to c^2 >0, \quad \Vert g^{(p)}_n \otimes_r g^{(p)}_n \Vert_{\mathfrak{H}^{\otimes (2p-2r)}} \to 0, \quad \forall \, r=1,...,p-1 \] and every $p=1,...,M$. If, furthermore, $\mathbb E[F_nG_n] \to \rho$, then the sequence $(F_n,G_n)$ converges in distribution towards a two-dimensional centered Gaussian vector $(N_1,N_2)$ with $\mathbb E[N^2_1]=1$, $\mathbb E[N^2_2] =c^2$, and $\mathbb E[N_1 N_2] =\rho$. \end{description} \end{thm}
\begin{thm}[Peccati-Tudor Multidimensional Fourth Moment Theorem \cite{Nourdin-Peccati-Bible}, Theorem 6.2.3] \label{thm:4MT-MultiDim} Fix $d \ge 2$, and $q_1,...,q_d \ge 1$. Let $F_n = ( F_{1,n},...,F_{d,n} ) = ( I_{q_1}(f_{1,n}),...,I_{q_d} (f_{d,n}) ), n\ge 1$, with kernels $f_{j,n} \in \mathfrak{H}^{\odot q_j}$ for $j=1,...,d$ and every $n$. Let $N \sim \mathcal{ N }_d (0,C)$ denote a $d$-dimensional centered Gaussian vector with a symmetric, non-negative definite covariance matrix $C$. Assume that $\mathbb E[F_{i,n} F_{j,n}] \to C_{i,j}$ as $n \to \infty$. Then the following asymptotic statements are equivalent. \begin{itemize}
\item[(a)] $F_n \to N$ in distribution.
\item[(b)] for every $j=1,...,d$, the sequence $F_{j,n} \to \mathcal{ N }(0,C_{j,j})$ in distribution.
\end{itemize}
\end{thm} Now we recall the Breuer-Major theorem, see \cite{Breuer} or \cite[Theorem 7.2.4]{Nourdin-Peccati-Bible} for a modern treatment, which is a cornerstone of Section \ref{sec:BM}. \begin{thm}\label{thm:Breuer-Major}
Let $X =\{X_k, k\in\mathbb{Z}\}$ be a centered Gaussian stationary sequence with unit variance and set $r(k)=\mathbb{E}[X_{0}X_{k}]$ for every $k \in \mathbb{Z}$. Let $\gamma$ be the standard normal $\mathcal{N}(0,1)$ distribution and $f\in L^{2}(\mathbb{R}, \gamma)$ be a fixed deterministic function such that $\mathbb{E}[f(X_1)]=0$ and $f$ has Hermite rank $d\geq 1$, meaning that $f$ admits the Hermite expansion
\begin{equation*}
f(x)=\sum_{j=d}^{\infty}a_j {H_j}(x),
\end{equation*}
where $H_{j}$ is the $j$th Hermite polynomial, and $a_d \not =0$. Define $V_{n}=\frac{1}{\sqrt{n}}\sum_{k=1}^{n}f(X_k)$.
Suppose that $\sum_{\nu\in\mathbb{Z}}|r(\nu)|^{d}<\infty$. Then
\begin{equation*}
\sigma^2 : =\sum_{j=d}^{\infty}j! a^{2}_{j}\sum_{\nu\in\mathbb{Z}}r(\nu)^{j}\in [0,\infty),
\end{equation*}
and the convergence
\begin{equation*}
V_{n}\limd \mathcal{N}(0,\sigma^2)
\end{equation*}
holds as $n\to\infty$. \end{thm}
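Two instructive special cases: for $f=H_{1}$, i.e.\ $f(x)=x$, the Hermite rank is $d=1$, the assumption becomes $\sum_{\nu\in\mathbb Z}|r(\nu)|<\infty$, and the theorem reduces to the classical central limit theorem for stationary Gaussian sequences, with limiting variance $\sigma^{2}=\sum_{\nu\in\mathbb Z}r(\nu)$. For $f=H_{2}$ the rank is $d=2$ and only $\sum_{\nu\in\mathbb Z}r(\nu)^{2}<\infty$ is required, a strictly weaker condition than summability of $r$.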
\begin{thm}[See \cite{n-p-y19}]\label{thm:BM-TV-Rate}
Let $N \sim \mathcal{ N }(0,1)$, and $X =\{X_k, k\in\mathbb{Z}\}$ be a centered Gaussian stationary sequence with unit variance and covariance function $r(k)=\mathbb{E}[X_{0}X_{k}]$. Let $\gamma$ be the standard normal $\mathcal{N}(0,1)$ distribution and $f\in \mathbb{D}^{1,4} \subseteq L^2(\mathbb{R}, \gamma)$ be a fixed deterministic function such that $\mathbb{E}[f(X_1)]=0$. Let $V_{n}=\frac{1}{\sqrt{n}}\sum_{k=1}^{n}f(X_k)$, and $\sigma^2_n = {\operatorname{Var}} \left( V_n \right)$. Define $F_n : = \frac{V_n}{\sigma_n}$. Then there exists an explicit constant $C=C(f)$ such that for every $n \in \mathbb N$,
\begin{equation}\label{eq:BM-TV-Rate}
d_{TV} (F_n,N) \le \frac{C(f)}{\sigma^2_n} n^{- \frac{1}{2}} \, \left( \sum_{\vert k \vert < n} \vert r (k) \vert \right)^{\frac{3}{2}}.
\end{equation}
\end{thm}
\begin{thm}[See \cite{HU2}] \label{thm:BM-Density-Rate} Let $N \sim \mathcal{ N }(0,1)$. Assume that $X =\{X_k, k\in\mathbb{Z}\}$ is a centered Gaussian stationary sequence with unit variance and covariance function $r(k)=\mathbb{E}[X_{0}X_{k}]$ whose spectral density function $f_r$ satisfies $ \log(f_r )\in L^1([-\pi,\pi])$. Fix $2 \le d \le q$. Let $V_n = \frac{1}{\sqrt{n}} \sum_{k=1}^{n} \sum_{j=d}^{q} a_j H_j (X_k)$, where $a_j \in \mathbb R$ for $d \le j \le q$, and set $\sigma^2_n : = {\operatorname{Var}} \left( V_n \right)$. Define $F_n := \frac{V_n}{\sigma_n}$. \begin{description}
\item[(a)] Assume further that $\sigma^2 := \sum_{j=d}^{q} j! a^2_j \sum_{\nu \in \mathbb Z} r(\nu) ^j \in (0,\infty)$. Then, for every $m \ge 0$, as $n$ tends to infinity,
\begin{equation*} \Big \Vert p^{(m)}_n - p^{(m)}_N \Big \Vert_{L^{\infty}(\mathbb R)}:=
\sup_{x \in \mathbb R} \Big \vert p^{(m)}_n (x) - p^{(m)}_N (x) \Big \vert \longrightarrow 0
\end{equation*}
where $p^{(m)}_n$ and $p^{(m)}_N$ denote the $m$th derivatives of the density functions of the random variables $F_n$ and $N$, respectively.
\item[(b)] In particular, if $q=d$ (in other words, the sequence $F_n$ belongs to a fixed Wiener chaos of order $d$), then for all $m \ge 0$ there exist $n_0 \in \mathbb N$ and a constant $C$ (depending only on $m$ and $q$) such that for all $n \ge n_0$ we have
\begin{equation}\label{eq:Quantitative-Density-Convergence}
\Big \Vert p^{(m)}_n - p^{(m)}_N \Big \Vert_{L^{\infty}(\mathbb R)} \le C \, \sqrt{ \mathbb E \left[F^4_n \right] - 3 }.
\end{equation}
\end{description}
\end{thm}
\begin{thm}[See \cite{n-p-AOP-Exact-Asymptotic}] \label{thm:BM-Exact-Asymptotic} Let $(F_n : n\ge 1)$ be a sequence of centered square-integrable functionals of some isonormal Gaussian process $W =\{ W(h) : h \in \mathfrak{H} \}$ such that $\mathbb E [F^2_n] \to 1$ as $n $ tends to infinity. Assume further that the following assumptions hold: \begin{itemize}
\item[(a)] for every $n$, the random variable $F_n \in \mathbb{D}^{1,2}$, and that the law of $F_n$ is absolutely continuous with respect to the Lebesgue measure.
\item[(b)] the quantity $\varphi(n):= \sqrt{ \mathbb E \left[ (1 - \langle DF_n , -DL^{-1}F_n \rangle_{\mathfrak{H}} )^2 \right] }$ is such that: (i) $\varphi(n) < \infty$ for every $n$, (ii) $\varphi(n) \to 0$ as $n $ tends to infinity, and (iii) there exists $m \in \mathbb N$ such that $\varphi(n) > 0$ for all $n \ge m$.
\item[(c)] as $n$ tends to infinity,
\[ \left( F_n ,\frac{1 - \langle DF_n , -DL^{-1}F_n \rangle_{\mathfrak{H}} }{\varphi(n)} \right) \stackrel{d}{\longrightarrow} (N_1,N_2) \]
where $(N_1,N_2)$ is a two dimensional centered Gaussian vector with $\mathbb E[N^2_1]= \mathbb E[N^2_2]=1$, and $\mathbb E[N_1 N_2] = \rho$.
\end{itemize} Then, we have $d_{Kol} (F_n, N) \le \varphi(n)$, and moreover, for every $z \in \mathbb R$ as $n \to \infty$: \[ \frac{ \mathbb{P} \left( F_n
\le z \right) - \mathbb{P} (N \le z)}{\varphi(n)} \longrightarrow \frac{\rho}{3 \sqrt{2\pi}} (z^2 -1) e ^{- \frac{z^2}{2}}. \]
\end{thm}
\begin{thm}[See \cite{b-n-t-almost-sure}] \label{thm:BM-AlmostSure}
Let $N \sim \mathcal{ N }(0,1)$. Assume that $X =\{X_k, k\in\mathbb{Z}\}$ is a centered Gaussian stationary sequence with unit variance and covariance function $r(k)=\mathbb{E}[X_{0}X_{k}]$ such that $\sum_{\nu \in \mathbb Z} \vert r(\nu) \vert < \infty$. Assume that $f \in L^2(\mathbb R,\gamma)$ is a non-constant function of class $C^2(\mathbb R)$ such that $\mathbb E[f''(N)^4]< \infty$ and $\mathbb E_\gamma[f]=0$. Let $V_n = \frac{1}{\sqrt{n}} \sum_{k=1}^{n} f (X_k)$, and set $\sigma^2_n : = {\operatorname{Var}} \left( V_n \right)$. Define $F_n := \frac{V_n}{\sigma_n}$. If, as $n$ tends to infinity, $\sigma^2_n \to \sigma^2 > 0$, then the sequence $(F_n : n \ge 1)$ converges in distribution towards $N$, and moreover it satisfies an ASCLT, meaning that, almost surely, for every bounded continuous function $\varphi : \mathbb R \to \mathbb R$ it holds that
\[ \frac{1}{\log n} \sum_{k=1}^{n} \frac{1}{k} \varphi (F_k) \longrightarrow \mathbb E \left[ \varphi(N) \right], \quad \text{ as } \quad n \to \infty. \] \end{thm}
\end{document}
"id": "2005.11602.tex",
"language_detection_score": 0.5456860065460205,
"char_num_after_normalized": null,
"contain_at_least_two_stop_words": null,
"ellipsis_line_ratio": null,
"idx": null,
"lines_start_with_bullet_point_ratio": null,
"mean_length_of_alpha_words": null,
"non_alphabetical_char_ratio": null,
"path": null,
"symbols_to_words_ratio": null,
"uppercase_word_ratio": null,
"type": null,
"book_name": null,
"mimetype": null,
"page_index": null,
"page_path": null,
"page_title": null
} | arXiv/math_arXiv_v0.2.jsonl | null | null |
\begin{document}
\title{Projectivity in (bounded) commutative integral\\ residuated lattices} \author{Paolo Aglian\`{o}\\ DIISM\\ Universit\`a di Siena, Italy\\ agliano@live.com \and
Sara Ugolini\\
Artificial Intelligence Research Institute (IIIA), CSIC\\
Bellaterra, Barcelona, Spain\\
sara@iiia.csic.es} \date{} \maketitle
\begin{abstract} In this paper we study projective algebras in varieties of (bounded) commutative integral residuated lattices. We make use of a well-established construction in residuated lattices, the ordinal sum, and the order property of divisibility. Via the connection between projective and splitting algebras, we show that the only finite projective algebra in $\mathsf{{FL}_{ew}}$ is the two-element Boolean algebra. Moreover, we show that several interesting varieties have the property that every finitely presented algebra is projective, such as locally finite varieties of hoops. Furthermore, we show characterization results for finite projective Heyting algebras, and finitely generated projective algebras in locally finite varieties of bounded hoops and BL-algebras. Finally, we connect our results with the algebraic theory of unification. \end{abstract}
\section{Introduction}\label{sec:intro} In this paper we approach the study of projective algebras in varieties of (bounded) commutative integral residuated lattices. Our point of view is going to be algebraic; however, the reader may keep in mind that projectivity is a categorical concept, and therefore our findings pertain to the corresponding algebraic categories as well. Being projective in a variety of algebras, or in any class containing all of its free objects, corresponds to being a retract of a free algebra, and projective algebras contain relevant information both on their variety and on its lattice of subvarieties.
In particular, as first noticed by McKenzie \cite{McKenzie1972}, there is a close connection between projective algebras in a variety and {\em splitting algebras} in its lattice of subvarieties. The notion of splitting comes from lattice theory, and studying splitting algebras is particularly useful to understand lattices of subvarieties. Essentially, a splitting algebra generates a variety that divides the subvariety lattice into the disjoint union of the principal filter generated by that variety and a principal ideal generated by a variety axiomatizable by a single identity (called the {\em splitting equation}). In particular, we shall see that subdirectly irreducible projective algebras are splitting for the variety they belong to.
The varieties of algebras which are the object of our study are relevant both in the realm of algebraic logic and from a purely algebraic point of view. In fact, residuated structures arise naturally in the study of many interesting algebraic systems, such as ideals of rings \cite{WardDilworth39} or lattice-ordered groups \cite{BCGJT}, besides encompassing the equivalent algebraic semantics (in the sense of Blok-Pigozzi \cite{BlokPigozzi1989}) of so-called substructural logics. We refer the reader to \cite{GJKO} for detailed information on this topic. The Blok-Pigozzi notion of algebraizability entails that the logical deducibility relation is fully and faithfully represented by the algebraic equational consequence of the corresponding algebraic semantics, and therefore logical properties can be studied algebraically, and vice versa. Substructural logics are a large framework and include many of the interesting non-classical logics: intuitionistic logics, relevance logics, and fuzzy logics to name a few, besides including classical logic as a special case. Therefore, substructural logics on one side, and residuated lattices on the other, constitute a wide unifying framework in which very different structures can be studied uniformly.
The investigation of projective structures in particular varieties of residuated lattices has been approached by several authors (see for instance \cite{BalbesHorn1970,CabrerMundici2009,DantonaMarra2006,DiNolaGrigoliaLettieri2008,Ghilardi1997,Ugolini2022}). However, to the best of our knowledge, no effort has yet been made to provide a uniform approach in a wider framework, which is what we attempt to start in this manuscript.
In the general setting of $\mathsf{FL_{ew}}$-algebras, which are bounded commutative integral residuated lattices, via the connection with splitting algebras, we show that the only finite projective algebra in $\mathsf{FL_{ew}}$ is the two-element Boolean algebra $\alg 2$. Besides some general findings on $\mathsf{FL_{ew}}$-algebras, our other main results will concern varieties where the lattice order is the inverse divisibility ordering, that is, where $x \leq y$ if and only if there exists $z$ such that $x = z \cdot y$. We show that several interesting varieties in the realm of algebraic logic have the property that every finitely presented algebra is projective, such as, in particular, all locally finite varieties of hoops and their implicative reducts. Moreover, as a consequence of some general results about projectivity in varieties closed under the construction of ordinal sums, we give an alternative proof of the characterization of finite projective Heyting algebras obtained in \cite{BalbesHorn1970}.
Interestingly, the study of projective algebras in this realm has a relevant logical application. Indeed, following the work of Ghilardi \cite{Ghilardi1997}, the study of projective algebras in a variety is strictly related to unification problems for the corresponding logic. More precisely, the author shows that a unification problem can be interpreted as a finitely presented algebra, and solving such a problem means finding a homomorphism from it to a projective algebra. We will explore some consequences of our study in this context in Section \ref{section:unif}.
We believe that our results, despite touching several relevant subvarieties of $\mathsf{FL_{ew}}$, have only scratched the surface of the study of projectivity in this large class of algebras, and shall serve as ground and inspiration for future work.
\section{Preliminaries} Given a class $\vv K$ of algebras, an algebra $\alg A \in \vv K$ is {\em projective} in $\vv K$ if for all $\alg B,\alg C \in \vv K$, any homomorphism $h\colon \alg A \longmapsto \alg C$, and any surjective homomorphism $g\colon \alg B\longmapsto \alg C$, there is a homomorphism $f\colon \alg A \longmapsto \alg B$ such that $h=gf$.
Determining the projective algebras in a class is usually a challenging problem, especially in a general setting. If however $\vv K$ contains all the free algebras on $\vv K$ (in particular, if $\vv K$ is a variety of algebras), projectivity admits a simpler formulation. We call an algebra $\alg B$ a {\em retract} of an algebra $\alg A$ if there is a homomorphism $g\colon \alg A \longmapsto \alg B$ and a homomorphism $f\colon\alg B \longmapsto \alg A$ with $gf= \op{id}_\alg B$ (and thus, necessarily, $f$ is injective and $g$ is surjective). The following theorem was proved first by Whitman for lattices \cite{Whitman1941} but it is well-known to hold for any class of algebras.
\begin{theorem}\label{whitman} Let $\vv K$ be a class of algebras containing all the free algebras in $\vv K$ and let $\alg A \in \vv K$. Then the following are equivalent: \begin{enumerate} \item $\alg A$ is projective in $\vv K$. \item $\alg A$ is a retract of a free algebra in $\vv K$. \item $\alg A$ is a retract of a projective algebra in $\vv K$. \end{enumerate} In particular every free algebra in $\vv K$ is projective in $\vv K$. \end{theorem}
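For example, in the variety of Boolean algebras, the two-element algebra $\alg 2$ is a retract of the free Boolean algebra on one free generator $x$: the inclusion of the subalgebra $\{0,1\}$ and the surjective homomorphism evaluating $x$ at $1$ compose to the identity on $\alg 2$, so by Theorem \ref{whitman} the algebra $\alg 2$ is projective in the variety of Boolean algebras.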
It follows immediately that if $\vv V$ is a variety and $\alg A$ is projective in $\vv V$, then it is projective for any subvariety $\vv W$ of $\vv V$ to which it belongs; equivalently, if $\alg A \in \vv W$ is not projective in $\vv W$ then it cannot be projective in any supervariety of $\vv W$.
Let us also observe that the algebraic definition of projectivity we have given corresponds to a categorical notion for the corresponding algebraic category (where the objects are the algebras in $\vv K$ and whose morphisms are the homomorphisms between algebras in $\vv K$), where surjective homomorphisms are regular epimorphisms. It therefore follows that projectivity is preserved by categorical equivalence: the projective objects in one category are exactly the images, through the functor yielding the equivalence, of the projective objects in the other category.
In general, determining all projective algebras in a variety is a complicated task. As examples of success in characterizing projective algebras in interesting classes we can quote: \begin{enumerate} \item[$\bullet$] A finite lattice is projective in the variety of all lattices if and only if it is semidistributive and satisfies Whitman's condition (W) (see \cite{Nation1982} for details). \item[$\bullet$] A finite distributive lattice is projective in the variety of distributive lattices if and only if the meet of any two meet irreducible elements is again meet irreducible \cite{Balbes1967}. \item[$\bullet$] A Boolean algebra is projective in the variety of distributive lattices if and only if it is finite \cite{Balbes1967}; hence every finite Boolean algebra is projective in the variety of Boolean algebras. \item[$\bullet$] An abelian group is projective in the variety of abelian groups if and only if it is free. \item[$\bullet$] If $\vv M_\alg D$ is the variety of left modules on a principal ideal domain $\alg D$, then a module is projective in $\vv M_\alg D$ if and only if it is free (this is a consequence of the so called {\em Quillen-Suslin Theorem}, see \cite{Quillen1976}). \item[$\bullet$] Even more generally \cite{Quackenbush1971}, if $\alg A$ is a finite quasi-primal algebra, then if $\alg A$ has no minimal nontrivial subalgebra, each finite algebra in ${\mathbf V}(\alg A)$ is projective; alternatively, a finite algebra in ${\mathbf V}(\alg A)$ is projective if and only if it admits each minimal subalgebra of $\alg A$ as a direct decomposition factor. \end{enumerate} In what follows, we will be mainly interested in projective algebras that are finitely generated. We will see that in many interesting cases, these will correspond to \emph{finitely presented} algebras in the variety. Informally, an algebra is finitely presented if it can be defined by finitely many generators satisfying finitely many relations.
For any set $X$ and any variety $\vv V$ we will denote by $\alg F_\vv V (X)$ the free algebra in $\vv V$ over $X$. Thus, we say that an algebra $\alg A\in \vv V$ is {\em finitely presented} in $\vv V$ if there is a finite set $X$ and a compact congruence $\theta \in \op{Con}(\alg F_\vv V (X))$ such that $\alg F_\vv V(X)/\theta \cong \alg A$.
Note that if $\vv V$ has finite type, every finite algebra in $\vv V$ is finitely presented, and if $\vv V$ is locally finite every finitely presented algebra in $\vv V$ is finite. We remark that the notion of finitely presented algebra is a categorical notion \cite{GabrielUllmer1971}, and thus it is preserved under categorical equivalence. The proof of the following theorem is standard (the reader can see \cite{Ghilardi1997}).
\begin{theorem}\label{finitelypresented} For a finitely presented algebra $\alg A \in \vv V$ the following are equivalent: \begin{enumerate} \item $\alg A$ is projective in $\vv V$; \item $\alg A$ is projective in the class of all finitely presented algebras in $\vv V$; \item $\alg A$ is a retract of a finitely generated free algebra in $\vv V$. \end{enumerate} \end{theorem}
\subsection{(Bounded) commutative integral residuated lattices} In the sequel we will deal mainly with (bounded) commutative and integral residuated lattices.
A commutative integral residuated lattice (or CIRL) is an algebra\\ $\langle A,\vee,\wedge, \cdot, \rightarrow,1\rangle$ such that \begin{enumerate} \item $\langle A,\vee,\wedge, 1 \rangle$ is a lattice with largest element $1$; \item $\langle A,\cdot,1\rangle$ is a commutative monoid; \item $(\cdot,\rightarrow)$ form a residuated pair w.r.t. the lattice ordering, i.e. for all $a,b,c \in A$ $$ a \cdot b \le c\qquad\text{if and only if}\qquad a \le b \rightarrow c. $$ \end{enumerate} In what follows, we will often write $xy$ for $x \cdot y$. We call {\em bounded commutative integral residuated lattice} a CIRL with an extra constant $0$ in the signature that is the least element in the lattice order.
(Bounded) commutative and integral residuated lattices form varieties called ($\mathsf{FL_{ew}}$) $\mathsf{CIRL}$. Some of the results of this paper will concern commutative and integral residuated (meet-){\em semilattices}, that is, the $\lor$-free subreducts of commutative and integral residuated lattices.
Residuated lattices are rich structures, and many equations hold in them; for an equational axiomatization and a list of valid identities we refer the reader to \cite{BlountTsinakis2003}. We focus on two equations that bear interesting consequences, i.e., prelinearity and divisibility: \begin{align} &(x \rightarrow y) \vee (y \rightarrow x) \approx 1,\tag{prel}\\ &x(x \rightarrow y) \approx y(y \rightarrow x). \tag{div} \end{align} It can be shown (see \cite{BlountTsinakis2003} and \cite{JipsenTsinakis2002}) that a subvariety of $\mathsf{FL_{ew}}$ satisfies the prelinearity equation (prel) if and only if any algebra therein is a subdirect product of totally ordered algebras, and this implies via Birkhoff's Theorem that all the subdirectly irreducible algebras are totally ordered. Such varieties are called {\em representable} (or {\em semilinear}) and the subvariety axiomatized by (prel) is the largest subvariety of $\mathsf{FL_{ew}}$ that is representable; such variety is usually denoted by $\vv M\vv T\vv L$, since it is the equivalent algebraic semantics of Esteva-Godo's {\em Monoidal t-norm based logic} \cite{EstevaGodo2001}.
If a variety satisfies the divisibility condition (div) then the lattice ordering becomes the inverse divisibility ordering: for any algebra $\alg A$ therein and for all $a,b \in A$: $$ a \le b \qquad\text{if and only if}\qquad \text{there is $ c \in A$ with $a =bc$}. $$ In this case, it can be shown that the meet is definable as $a\wedge b = a(a \rightarrow b)$. We will call $\mathsf{FL_{ew}}$-algebras satisfying (div) {\em hoop algebras} or {\em $\mathsf{HL}$-algebras}, and we denote the variety they form by $\mathsf{HL}$.
If an algebra in $\mathsf{FL_{ew}}$ satisfies both (prel) and (div) then it is called a $\vv B\vv L$-algebra and the variety of all $\vv B\vv L$-algebras is denoted by $\vv B\vv L$. Again the name comes from logic: the variety of $\vv B\vv L$-algebras is the equivalent algebraic semantics of {\em H\'ajek's Basic Logic} \cite{Ha98}. A systematic investigation of varieties of $\vv B\vv L$-algebras started with \cite{AglianoMontagna2003} and it is still ongoing (see \cite{Agliano2017c} and the bibliography therein).
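A concrete example to keep in mind: the standard \L ukasiewicz algebra on $[0,1]$, with $x\cdot y=\max(0,x+y-1)$ and $x\rightarrow y=\min(1,1-x+y)$, is a $\vv B\vv L$-algebra. Prelinearity holds because the order is total, so at least one of $x\rightarrow y$, $y\rightarrow x$ equals $1$; for divisibility, if $x\ge y$ then $x\rightarrow y = 1-x+y$ and
$$
x(x\rightarrow y)=\max(0,\,x+(1-x+y)-1)=y=\min(x,y),
$$
while $y\rightarrow x=1$ gives $y(y\rightarrow x)=y$ as well.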
It follows from the definition that given a variety of bounded commutative integral residuated lattices, the class of its {\em $0$-free subreducts} is a class of residuated lattices; we have a very general result.
\begin{lemma} Let $\vv V$ be any subvariety of $\mathsf{FL_{ew}}$; then the class ${\mathbf S}^0(\vv V)$ of the zero-free subreducts of algebras in $\vv V$ is a variety. Moreover if $\alg A \in {\mathbf S}^0(\vv V)$ is bounded (i.e. there is a minimum in the ordering), then it is polynomially equivalent to an algebra in $\vv V$. \end{lemma} \begin{proof}The proof of the first claim is in Proposition 1.10 of \cite{AFM}; it is stated for varieties of $\vv B\vv L$-algebras but it uses only the description of the congruence filters, which applies in any subvariety of $\mathsf{FL_{ew}}$ (as the reader can easily check). The second claim is almost trivial and the proof is left to the reader. \end{proof} In particular, we point out the following relevant examples of varieties of zero-free subreducts: \begin{enumerate} \item[$\bullet$] ${\mathbf S}^0(\mathsf{FL_{ew}}) = \mathsf{CIRL}$. \item[$\bullet$] ${\mathbf S}^0(\vv M\vv T\vv L) = \mathsf{CIRRL}$, the variety of commutative integral representable residuated lattices, sometimes also called $\vv G\vv M\vv T\vv L$. \item[$\bullet$] ${\mathbf S}^0(\mathsf{HL})$ is the variety of commutative and integral residuated lattices satisfying divisibility (corresponding to commutative and integral $\mathsf{GBL}$-algebras). Notice that these algebras have been called {\em hoops} in several papers, but the original definition of hoop is different (see \cite{BlokFerr2000}): hoops are the $\{\wedge,\cdot,1\}$-subreducts of $\mathsf{HL}$-algebras (no join!), and they form a variety. It is not true that all hoops have a lattice reduct; the ones that do are exactly the $\lor$-free reducts of commutative and integral $\mathsf{GBL}$-algebras, and we will refer to them as \emph{full hoops}. \item[$\bullet$] ${\mathbf S}^0(\vv B\vv L) = \vv B\vv H$, the variety of {\em basic hoops} \cite{AFM}. 
Basic hoops are full hoops, since the prelinearity equation makes the join definable using $\wedge$ and $\rightarrow$ (see for instance \cite{Agliano2018c}): \begin{equation} ((x \rightarrow y) \rightarrow y) \wedge ((y \rightarrow x) \rightarrow x) \approx x \vee y. \tag{PJ} \end{equation} \end{enumerate}
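As a quick sanity check (ours, not part of the original text), (PJ) can be verified directly in any totally ordered member of $\mathsf{CIRL}$, where we may assume without loss of generality that $x \le y$:

```latex
% Verifying (PJ) in a chain, assuming x <= y.
\begin{align*}
x \le y \ &\Longrightarrow\ x \rightarrow y = 1
  \ \Longrightarrow\ (x \rightarrow y) \rightarrow y = 1 \rightarrow y = y;\\
y \cdot (y \rightarrow x) \le x \ &\Longrightarrow\ y \le (y \rightarrow x) \rightarrow x
  \qquad\text{(residuation)};\\
\text{hence}\quad
((x \rightarrow y) \rightarrow y) \wedge ((y \rightarrow x) \rightarrow x)
  &= y \wedge ((y \rightarrow x) \rightarrow x) = y = x \vee y.
\end{align*}
```

Since prelinearity makes every algebra a subdirect product of chains, this computation is the essential content of (PJ).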
With respect to the structure theory, congruences of algebras in $\mathsf{FL_{ew}}$ are very well behaved; as a matter of fact, any subvariety of $\mathsf{FL_{ew}}$ is {\em ideal determined} w.r.t.\ $1$ in the sense of \cite{GummUrsini1984} (but see also \cite{OSV2}). In particular, this implies that congruences are totally determined by their $1$-blocks. If $\alg A \in \mathsf{FL_{ew}}$, the $1$-block of a congruence of $\alg A$ is called a {\em deductive filter} (or {\em filter} for short); it can be shown that the filters of $\alg A$ are exactly the order filters containing $1$ and closed under multiplication. Filters form an algebraic lattice isomorphic with the congruence lattice of $\alg A$, and if $X \subseteq A$ then the filter generated by $X$ is $$ \op{Fil}_\alg A(X) = \{a \in A: x_1\cdot \ldots \cdot x_n \le a, \text{ for some $n \in \mathbb N$ and $\vuc xn \in X$}\}. $$ The isomorphism between the filter lattice and the congruence lattice is given by the maps: \begin{align*} & \theta \longmapsto 1/\theta\\ & F \longmapsto \theta_F = \{(a,b): a \rightarrow b,\ b \rightarrow a \in F\}, \end{align*} where $\theta$ is a congruence and $F$ a filter.
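To make the correspondence concrete, here is a small example of ours: the filters of the three-element G\"odel chain $\{0 < a < 1\}$ (where $x \cdot y = x \wedge y$) and the congruences they determine.

```latex
% Filters of the three-element G\"odel chain and the associated congruences.
\begin{align*}
F = \{1\} \ &\longmapsto\ \theta_F = \Delta && \text{(identity congruence)};\\
F = \{a,1\} \ &\longmapsto\ \theta_F \text{ with blocks } \{0\},\ \{a,1\}
  && \text{(since } 1 \rightarrow a = a \in F \text{ but } a \rightarrow 0 = 0 \notin F\text{)};\\
F = \{0,a,1\} \ &\longmapsto\ \theta_F = \nabla && \text{(total congruence)}.
\end{align*}
```

In the middle case the quotient is isomorphic to $\mathbf 2$, as predicted by the fact that $\{a,1\}$ is the only proper nontrivial filter.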
\subsection{Ordinal sums}\label{section:ordinalsums} A powerful tool for investigating (bounded) integral residuated lattices in general, and $\vv M\vv T\vv L$-algebras in particular, is the {\em ordinal sum} construction. Let $\alg A_0,\alg A_1 \in \mathsf{CIRL}$ such that $\alg A_0 \cap \alg A_1 = \{1\}$, and consider $A_0 \cup A_1$. The ordering intuitively stacks $A_{1}$ on top of $A_{0} \setminus \{1\}$ and more precisely it is given by $$ a \le b \quad\text{if and only if} \quad \left\{
\begin{array}{l}
\hbox{$b=1$, or} \\
\hbox{$a \in A_0\setminus\{1\}$ and $b \in A_1\setminus\{1\}$, or} \\
\hbox{$a,b \in A_i\setminus\{1\}$ and $a \le_{A_i} b$, $i=0,1$.}
\end{array}
\right. $$ Moreover, we define the product within each of the two components to be the original one, and between the two different components to be the meet; consequently we have the following: \begin{align*} &a\cdot b = \left\{
\begin{array}{ll}
a, & \hbox{if $a \in A_0\setminus\{1\}$ and $b \in A_1$;} \\
b, & \hbox{if $a \in A_1$ and $b\in A_0\setminus\{1\}$;}\\
a \cdot_{A_i} b, &\hbox{if $a,b \in A_i$, $i=0,1$.}
\end{array}
\right.\\ &a \rightarrow b = \left\{
\begin{array}{ll}
b, &\hbox{if $a \in A_1$ and $b\in A_0\setminus\{1\}$;} \\
1, &\hbox{if $a \in A_0\setminus\{1\}$ and $b \in A_1$;} \\
a \rightarrow_{A_i} b, &\hbox{if $a,b \in A_i$, $i=0,1$.}
\end{array}
\right. \end{align*} If we call $\alg A_0 \oplus \alg A_1$ the resulting structure, it is easily checked that $\alg A_0 \oplus \alg A_1$ is a semilattice-ordered integral and commutative residuated monoid. However, it might fail to be a residuated lattice: trouble arises when $1_{A_0}$ is not join irreducible and $\alg A_1$ is not bounded. In fact, if $a,b \in A_0\setminus\{1\}$ and $a \vee_{A_0} b =1_{A_0}$, then the upper bounds of $\{a,b\}$ all lie in $A_1$; since $\alg A_1$ is not bounded, there is no least upper bound of $\{a,b\}$ in $\alg A_0 \oplus \alg A_1$, and thus the ordering is not a lattice ordering. However, if $1_{A_0}$ is join irreducible, then the problem disappears and we can define $\alg A_0 \oplus \alg A_1$ as before. If instead $1_{A_0}$ is not join irreducible but $\alg A_1$ is bounded, say by $u$, then we can define: $$ a \vee b = \left\{
\begin{array}{ll}
a, & \hbox{if $a \in A_1$ and $b \in A_0$;} \\
b, & \hbox{if $a \in A_0$ and $b \in A_1$;} \\
a \vee_{A_1} b, & \hbox{if $a,b \in A_1$;} \\
a \vee_{A_0} b, & \hbox{if $a,b \in A_0$ and $a \vee_{A_0} b < 1$;}\\
u, & \hbox{if $a,b \in A_0$ and $a \vee_{A_0} b = 1$.}\\
\end{array}
\right. $$ We will therefore call $\alg A_0 \oplus \alg A_1$ the {\em ordinal sum} of $\alg A_0$ and $\alg A_1$, and we will say that the ordinal sum {\em exists} if $\alg A_0 \oplus \alg A_1 \in \mathsf{CIRL}$. If $\alg A_0 \in \mathsf{FL_{ew}}$ and the ordinal sum of $\alg A_0$ and $\alg A_1$ exists, then $\alg A_0 \oplus \alg A_1 \in \mathsf{FL_{ew}}$.
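For instance (a worked example of ours), the smallest nontrivial ordinal sum in $\mathsf{FL_{ew}}$ is $\alg 2 \oplus \alg 2$, which the definitions above turn into the three-element G\"odel chain:

```latex
% 2 + 2 on the universe {0 < a < 1}, where a is the bottom of the upper copy of 2.
\begin{align*}
&0 \cdot a = 0, \qquad a \cdot a = a, \qquad\text{so } x \cdot y = x \wedge y;\\
&0 \rightarrow a = 1, \qquad a \rightarrow 0 = 0, \qquad\text{so } \neg a = 0.
\end{align*}
```

This is the algebra $\mathbf 3 \cong \mathbf 2 \oplus \mathbf 2$ that reappears below in the discussion of Heyting algebras.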
Every time we deal with a class of algebras in $\mathsf{CIRL}$ for which we know that the ordinal sum always exists, we can define the ordinal sum of a (possibly infinite) family of algebras in that class; in that case the family is indexed by a totally ordered set $\langle I,\le\rangle$, which may or may not have a minimum (but it must have one in case we are in $\mathsf{FL_{ew}}$).
For an extensive treatment of ordinal sums, even in more general cases, we direct the reader to \cite{Agliano2018a,Galatos2005,GJKO}. Here we would like to point out some facts that will be useful in the following sections.
We call an algebra in $\mathsf{CIRL}$ {\em sum irreducible} if it cannot be written as the ordinal sum of at least two nontrivial algebras in $\mathsf{CIRL}$. Then, by a straightforward application of Zorn's lemma (see for instance Theorem 3.2 in \cite{Agliano2018b}), we obtain the following. \begin{proposition}
Every algebra in $\mathsf{CIRL}$ is the ordinal sum of sum irreducible algebras in $\mathsf{CIRL}$. \end{proposition} In general, we do not know what the sum irreducible algebras in a given subvariety of $\mathsf{CIRL}$ may be. A notable result is the classification of all totally ordered sum irreducible $\vv B\vv L$-algebras and basic hoops in terms of Wajsberg hoops \cite{AglianoMontagna2003}.
It is possible to describe the behavior of the classical operators ${\mathbf H},{\mathbf P},{\mathbf I},{\mathbf P}_u$ (denoting respectively homomorphic images, direct products, isomorphic images and ultraproducts) on ordinal sums; the reader can consult Section 3 of \cite{AglianoMontagna2003} and Lemma 3.1 in \cite{Agliano2018a}. Subalgebras in the bounded case are slightly more delicate, since there is the constant $0$ which must be treated carefully. If we have a family $(\alg A_i)_{i \in I}$ of algebras in $\mathsf{CIRL}$, then we can describe subalgebras in a clear way: $$ {\mathbf S} (\bigoplus_{i \in I} \alg A_i) = \{\bigoplus_{i \in I} \alg B_i: \alg B_i \in {\mathbf S}(\alg A_i)\}. $$ However, if we consider sums in $\mathsf{FL_{ew}}$, the constant $0$ is represented by the $0$ of the lowermost component, and so the above expression is ambiguous. Therefore, when we write $\bigoplus_{i \in I} \alg A_i$ we will always regard the algebras $\alg A_i$, $i >0$, as algebras in $\mathsf{CIRL}$, i.e. their zero-free reducts.
At present, we do not have a characterization of which subvarieties of $\mathsf{FL_{ew}}$ are closed under ordinal sums, nor which equations are preserved by ordinal sums; however, the following can be easily shown. \begin{lemma}\label{lemma:ordsumpres}
The following are preserved by the ordinal sum construction: \begin{enumerate} \item all join-free equations in one variable, \item the divisibility equation (div). \end{enumerate} \end{lemma} We have the following. \begin{proposition}\label{prop:ordsumexists}
Ordinal sums always exist in:
\begin{enumerate}
\item $\mathsf{FL_{ew}}$,
\item $\mathsf{HL}$,
\item the class of finite algebras in $\mathsf{CIRL}$, \item the class of totally ordered algebras in $\mathsf{CIRL}$. \end{enumerate} \end{proposition} \begin{proof}
The claim follows from Lemma \ref{lemma:ordsumpres} together with the following observations: $\mathsf{FL_{ew}}$-algebras and finite $\mathsf{CIRL}$s are always bounded, so the ordinal sum is always defined; divisibility is preserved, which takes care of $\mathsf{HL}$; and $1$ is join irreducible in any totally ordered algebra. \end{proof}
Hence, $\mathsf{FL_{ew}}$ is closed under ordinal sums and so is $\mathsf{HL}$. An algebra in $\mathsf{FL_{ew}}$ is called {\em $n$-potent} (for $n \in \mathbb N$) if it satisfies the equation $x^n \approx x^{n-1}$; if $n=2$ we use the term {\em idempotent}. The variety of $n$-potent $\mathsf{FL_{ew}}$-algebras is called $\mathsf{P_{n}FL_{ew}}$ (in the literature it is sometimes shortened to $\mathsf{E_{n}}$), and we shall call $\mathsf{P_{n}HL}$ the largest $n$-potent subvariety of $\mathsf{HL}$; it follows from Proposition \ref{prop:ordsumexists} that both are closed under ordinal sums.
However, in general a proper subvariety of $\mathsf{CIRL}$ or $\mathsf{FL_{ew}}$ is not closed under ordinal sums. For instance, we can easily construct an ordinal sum of two $\mathsf{MTL}$-algebras that is not in $\mathsf{MTL}$.
\begin{example}\label{ex:notsum}
Consider the ordinal sum $\alg 4 \oplus \alg 2$ of the four-element Boolean algebra $\alg 4$ and the two-element Boolean algebra $\alg 2$ (in Figure \ref{figure:failprel}). Prelinearity fails: if $a,b$ are the atoms of $\alg 4$, then $a \rightarrow b = b$ and $b \rightarrow a = a$, so in the ordinal sum the join $(a \to b) \lor (b \to a) = a \vee b$ is redefined to be the least element $0_{\alg 2}$ of the upper summand $\alg 2$, and thus it is not $1$. \begin{figure}
\caption{$\alg 4 \oplus \alg 2$}
\label{figure:failprel}
\end{figure} \end{example} The previous example actually shows the following.
\begin{proposition} No nontrivial subvariety of $\vv M\vv T\vv L$ is closed under ordinal sums. \end{proposition} \begin{proof} The claim follows from Example \ref{ex:notsum} and the fact that the variety of Boolean algebras is the atom in the lattice of subvarieties of $\vv M\vv T\vv L$: every nontrivial subvariety contains $\alg 4$ and $\alg 2$, but $\alg 4 \oplus \alg 2$ is not even in $\vv M\vv T\vv L$. \end{proof}
However it can be easily checked that any (possibly infinite) ordinal sum of totally ordered $\vv M\vv T\vv L$-algebras is a totally ordered $\vv M\vv T\vv L$-algebra, and any (possibly infinite) ordinal sum of totally ordered $\vv B\vv L$-algebras is a totally ordered $\vv B\vv L$-algebra (since it is an ordinal sum of totally ordered Wajsberg hoops \cite{AglianoMontagna2003}). We conclude the preliminaries with the following observation. \begin{lemma}\label{closed} For a subvariety $\vv V$ of $\mathsf{FL_{ew}}$ the following are equivalent: \begin{enumerate} \item $\vv V$ is closed under finite ordinal sums; \item $\vv V$ is closed under ordinal sums. \end{enumerate} \end{lemma}
\begin{proof} Clearly 2 implies 1. Suppose now that $\vv V$ is not closed under ordinal sums. Thus, there is a family $(\alg A_i)_{i \in I}$ such that $\alg A_i \in \vv V$ for all $i \in I$ but $\alg A = \bigoplus_{i \in I} \alg A_i \notin \vv V$. Then there must exist an equation $p(\vuc xn) \approx 1$ holding in $\vv V$ and elements $\vuc an \in \bigoplus_{i \in I} \alg A_i$ such that $p(\vuc an) \ne 1$. Let $\alg A_{i_0},\dots,\alg A_{i_k}$ be an enumeration of all the components of the ordinal sum that contain at least one of the $\vuc an$. If $i_0 = 0$, then $\bigoplus_{j=0}^k \alg A_{i_j}$ is a finite ordinal sum of algebras in $\vv V$ in which the equation fails (it is a subalgebra of $\alg A$ containing $\vuc an$); if $i_0\ne 0$, then $\alg A_0 \oplus \bigoplus_{j=0}^k \alg A_{i_j}$ is a finite ordinal sum with the same property. In either case $\vv V$ is not closed under finite ordinal sums, and thus 1. implies 2. \end{proof}
\section{Projectivity in $\mathsf{FL_{ew}}$} Given Proposition \ref{prop:ordsumexists}, we will now make use of the ordinal sum construction to obtain some general results about projective algebras in $\mathsf{FL_{ew}}$. \begin{lemma}\label{finiteprojective} Let $\vv K$ be a class of $\mathsf{FL_{ew}}$-algebras with the following properties: \begin{enumerate} \item There is a non-trivial algebra in $\vv K$; \item $\vv K$ is closed under ordinal sums. \end{enumerate} If $\alg A \in \vv K$ is projective in $\vv K$ then $1$ is join irreducible in $\alg A$. If $\alg A$ is also finite, then it is subdirectly irreducible. \end{lemma} \begin{proof} Let $\alg A$ be projective in $\vv K$ and $\alg B$ a non-trivial algebra in $\vv K$. Then $f\colon \alg A \oplus \alg B \longrightarrow \alg A$ defined by $f(a) =a$ if $a \in A \setminus \{1\}$ and $f(d)=1$ if $d \in B$ is a surjective homomorphism. Since $\alg A$ is projective and $\alg A \oplus \alg B \in \vv K$, there is a homomorphism $g\colon \alg A \longrightarrow \alg A \oplus \alg B$ such that $fg=id_A$.
In order to prove that $1$ is join irreducible in $\alg A$, we show that if $a, b \in A \setminus \{1\}$, then $a \lor b \neq 1$. Let then $a, b \in A \setminus \{1\}$. Since $fg(a) = a, fg(b) = b$, by definition of $f$ we have that $g(a), g(b) \in A \setminus \{1\}$. Since $\alg B$ is non-trivial, there is $c \in B \setminus \{1\}$. Thus by definition of the order on the ordinal sum $\alg A \oplus \alg B$, $g(a) < c$ and $g(b) < c$, hence $g(a) \lor g(b) = g(a \lor b) \leq c < 1$. Since $g$ is a homomorphism, necessarily $a \lor b < 1$. Thus, if $1$ is a binary join of elements, at least one of the elements is $1$.
If $\alg A$ is finite then $1$ is completely join irreducible in $\alg A$; but this is well known to be equivalent to subdirect irreducibility for finite members of $\mathsf{FL_{ew}}$ (see for instance \cite{GJKO}, Lemma 3.59). \end{proof}
Since $\mathbf 2$ belongs to every non-trivial subvariety of $\mathsf{FL_{ew}}$ (indeed, it is the free algebra on the empty set of generators), Lemma \ref{finiteprojective} applies to every non-trivial subvariety $\vv V$ of $\mathsf{FL_{ew}}$ closed under ordinal sums: the finite projective algebras in such a $\vv V$ must be subdirectly irreducible.
We will now highlight the connection between projective algebras and splitting algebras. Given a lattice $(L, \land, \lor)$, a {\em splitting pair} $(a,b)$ of elements of $L$ is a pair such that $a \not\leq b$ and, for any $c \in L$, either $a \leq c $ or $c \leq b$. This notion can be considered on lattices of subvarieties of a variety $\vv V$, where a splitting pair is then a pair of subvarieties of $\vv V$, $(\vv V_1, \vv V_2)$, where $\vv V_1$ is relatively axiomatized by a single equation and $\vv V_2$ is generated by a single finitely generated subdirectly irreducible algebra, called a {\em splitting algebra} (see \cite[Section 10]{GJKO} for more details).
\begin{lemma}\label{splitting} Let $\vv V$ be any variety and let $\alg A$ be an algebra that is subdirectly irreducible and projective in $\vv V$. Then \begin{enumerate} \item\label{splitting1} $\vv U = \{ \alg B \in \vv V: \alg A \notin {\mathbf S}(\alg B) \}$ is a subvariety of $\vv V$; \item\label{splitting2} for any subvariety $\vv W$ of $\vv V$ either $\vv W \subseteq \vv U$ or $\vv V(\alg A) \subseteq\vv W$; \item\label{splitting3} there is an equation $\sigma$ in the language of $\vv V$ such that $\alg B \in \vv U$ if and only if $\alg B \vDash \sigma$. \end{enumerate} In other words, $(\vv U, \vv V(\alg A))$ constitutes a splitting pair in the lattice of subvarieties of $\vv V$. \end{lemma} \begin{proof} For \ref{splitting1}., $\vv U$ is closed under subalgebras by definition, under direct products since $\alg A$ is subdirectly irreducible, and under homomorphic images since $\alg A$ is projective.
For \ref{splitting2}., let $\vv W$ be a subvariety of $\vv V$. If $\alg A \in \vv W$, then $\vv V(\alg A) \subseteq\vv W$. Otherwise, if $\alg A \notin \vv W$ we get that $\vv W \subseteq \vv U$, since every algebra $\alg C$ in $\vv W$ is such that $\alg C \in \vv V$ and $\alg A \notin {\mathbf S}(\alg C)$.
Notice that \ref{splitting2}. implies that the two varieties $\vv U$ and $\vv V(\alg A)$ constitute a splitting pair in the lattice of subvarieties of $\vv V$, which implies that $\vv U$ is axiomatized by a single identity, thus \ref{splitting3}. holds. \end{proof}
Combining Lemma \ref{finiteprojective} and Lemma \ref{splitting}, we obtain the following.
\begin{theorem}\label{cor:projectivesplitting} If $\vv V$ is a subvariety of $\mathsf{FL_{ew}}$ closed under ordinal sums, then any finite projective algebra in $\vv V$ is splitting in $\vv V$. \end{theorem}
On the other hand, the converse holds only in very special cases. Splitting algebras have been thoroughly investigated in many subvarieties of $\mathsf{FL_{ew}}$ (\cite{Agliano2017c}, \cite{Agliano2018a}, \cite{Agliano2018b}, \cite{AglianoUgolini2019a}), but the seminal paper on the subject is \cite{KowalskiOno2000}.
We recall that a variety has the {\em finite model property} (or FMP) for its equational theory if and only if it is generated by its finite algebras. As a consequence of the general theory of splitting algebras (see again \cite[Section 10]{GJKO}), if a variety $\vv V$ has the FMP, every splitting algebra in $\vv V$ is a finite subdirectly irreducible algebra. Thus, in subvarieties of $\mathsf{FL_{ew}}$ closed under ordinal sums and with the FMP, the study of finite projective algebras is particularly relevant for the study of splitting algebras. The finiteness hypothesis cannot be removed, as we will see in Section \ref{heytingsection} below.
The problem of finding splitting algebras in $\mathsf{FL_{ew}}$ was solved by Kowalski and Ono. \begin{theorem} \cite{KowalskiOno2000} The only splitting algebra in $\mathsf{FL_{ew}}$ is $\mathbf 2$. \end{theorem} Since $\mathbf 2$ is the free algebra over the empty set of generators in any nontrivial subvariety of $\mathsf{FL_{ew}}$, and free algebras are always projective, we get the following fact. \begin{lemma}\label{lemma:2proj} $\alg 2$ is projective in every nontrivial subvariety of $\mathsf{FL_{ew}}$. \end{lemma}
Combining the two previous results with Theorem \ref{cor:projectivesplitting}, we get the following interesting fact. \begin{theorem} The only finite projective algebra in $\mathsf{FL_{ew}}$ is $\mathbf 2$. \end{theorem}
Interestingly, (the $0$-free reduct of) $\alg 2$ is instead not projective in $\mathsf{CIRL}$, since in \cite{Ugolini2022} it is shown that it is not projective in the variety of Wajsberg hoops, and as we previously noticed, projectivity is preserved in subvarieties.
By Proposition \ref{prop:ordsumexists}, $\mathsf{HL}$ is another variety closed under ordinal sums. $\mathsf{HL}$ has the finite model property \cite{BlokFerr1993} and moreover: \begin{theorem}\cite{Agliano2018a} An algebra is splitting in $\mathsf{HL}$ if and only if it is isomorphic with $\alg A \oplus \mathbf 2$ for some finite $\alg A \in \mathsf{HL}$. \end{theorem} Thus we can combine the previous theorem with Theorem \ref{cor:projectivesplitting}: \begin{corollary} All the finite projective algebras in $\mathsf{HL}$ are isomorphic with $\alg A \oplus \mathbf 2$ for some finite $\alg A \in \mathsf{HL}$. \end{corollary}
We close this section with some other general considerations about projectivity and ordinal sums in $\mathsf{FL_{ew}}$ and in its $n$-potent subvarieties that will be useful in the rest of the paper.
\begin{lemma}\label{techlemma1} Let $\vv V$ be a subvariety of $\mathsf{FL_{ew}}$ with the finite model property and let $\alg A$ be projective in $\vv V$; if $C$ is an infinite subset of $A$ closed under $\rightarrow$, then the least upper bound of $C \setminus\{1\}$ exists and it is equal to $1$. \end{lemma} \begin{proof} Assume, by way of contradiction, that the least upper bound either does not exist or is not $1$. In either case $C \setminus \{1\}$ has (at least) an upper bound $a < 1$. Since $\alg A$ is projective, it is a retract of a suitable free algebra in $\vv V$, say $\alg F_\vv V(X)$; so there is a surjective homomorphism $f\colon \alg F_\vv V(X) \longrightarrow \alg A$ and a monomorphism $g\colon \alg A \longrightarrow \alg F_\vv V(X)$ such that $fg=id_\alg A$. Let $t(\vuc xn) = g(a)$; if $t(\vuc xn) =1$, then $1=f(t(\vuc xn)) = fg(a) =a$, a contradiction. Thus $t(\vuc xn) \ne 1$ and so the equation $t(\vuc xn) \approx 1$ must fail in $\vv V$; since $\vv V$ has the finite model property, there must be a finite $\alg B \in \vv V$ in which $t(\vuc xn) \approx 1$ fails. This is equivalent to saying that there is a surjective homomorphism $h\colon\alg F_\vv V(X) \longrightarrow \alg B$ with $h(t(\vuc xn)) \ne 1$. Now $g(C)$ is infinite, since $g$ is a monomorphism, while $\alg B$ is finite; so there are $s, r\in g(C)$ such that $s \ne r$ and $h(s) = h(r)$. Without loss of generality we may assume that $r \not\le s$; then $r \rightarrow s \ne 1$, so $r \rightarrow s \in g(C \setminus\{1\})$ and thus $t(\vuc xn) \ge r \rightarrow s$. In conclusion $$ h(t(\vuc xn)) \ge h(r \rightarrow s) = h(r) \rightarrow h(s) = 1, $$ so $h(t(\vuc xn)) = 1$, which is a contradiction. Hence the claim follows. \end{proof}
As a consequence we get the following.
\begin{proposition}\label{sumprojective} Let $\vv V$ be a subvariety of $\mathsf{FL_{ew}}$ with the finite model property and let $\alg A, \alg B \in \vv V$, with $\alg B$ nontrivial, be such that $\alg A \oplus \alg B$ is projective in $\vv V$ (and so $\alg A \oplus \alg B \in \vv V$). Then $\alg A$ is finite. \end{proposition} \begin{proof} $A$ is a subset of $\alg A \oplus \alg B$ closed under $\rightarrow$; moreover, since $\alg B$ is nontrivial, any $c \in B \setminus \{1\}$ is an upper bound of $A \setminus \{1\}$ strictly below $1$, so the least upper bound of $A \setminus \{1\}$ (if it exists) is not $1$. By Lemma \ref{techlemma1}, $\alg A$ must be finite. \end{proof}
In general if $\alg A, \alg B, \alg A \oplus \alg B \in \vv V$ and $\alg A \oplus \alg B$ is projective, $\alg B$ is not necessarily projective; however there is a weakening of the notion that is very useful in subvarieties of $\mathsf{FL_{ew}}$. We call an algebra $\alg A \in \vv V$ {\em zero-projective} in $\vv V$ if for any $\alg B,\alg C \in \vv V$, any homomorphism $h\colon\alg A \longrightarrow \alg C$ and any surjective homomorphism $g\colon \alg B \longrightarrow \alg C$ such that $g^{-1}(0) = \{0\}$, there is a homomorphism $f\colon \alg A \longrightarrow \alg B$ with $h=gf$.
\begin{lemma}\label{weaklyprojective} Let $\vv V$ be a subvariety of $\mathsf{FL_{ew}}$ closed under ordinal sums. If $\alg A_i\in \vv V$ for all $i=0,\dots,n$ and $\bigoplus_{i=0}^n \alg A_i$ is projective, then $\alg A_i$ is zero-projective for all $i= 0,\dots, n$. \end{lemma} \begin{proof} Fix $i \le n$ and let $\alg U = \bigoplus_{j=0}^{i-1} \alg A_j$ and $\alg V = \bigoplus_{j=i+1}^n \alg A_j$; note that $\alg U$, $\alg V$ may be trivial if $i=0$ or $i=n$, but in any case $\bigoplus_{j=0}^n \alg A_j = \alg U \oplus \alg A_i \oplus \alg V$. Suppose that there are $\alg B,\alg C \in \vv V$, a homomorphism $h\colon\alg A_i \longrightarrow \alg C$, and an onto homomorphism $g\colon\alg B \longrightarrow \alg C$ such that $g^{-1}(0) = \{0\}$. We will show that there is a homomorphism $f\colon \alg A_{i} \longrightarrow \alg B$ with $h=gf$. Consider the map $g'\colon \alg U \oplus \alg B \longrightarrow \alg U \oplus \alg C $ defined by $g'(x) = x$ if $x \in U \setminus \{1\}$ and $g'(x)=g(x)$ otherwise; it follows from Proposition 3.2 in \cite{AglianoMontagna2003} that $g'$ is an onto homomorphism. Similarly we can define a homomorphism $h'\colon \alg U \oplus \alg A_i \oplus \alg V \longrightarrow \alg U \oplus \alg C$ by $h'(x) = x$ if $x \in U \setminus \{1\}$, $h'(x) = h(x)$ if $x \in A_{i}$, and $h'(x) = 1$ otherwise. Since $\alg U \oplus \alg A_i \oplus \alg V = \bigoplus_{j=0}^n \alg A_j $ is projective, there is an $f'\colon \alg U \oplus \alg A_i \oplus \alg V \longrightarrow \alg U \oplus \alg B $ with $h'=g'f'$. Now $g'f'(0_{\alg A_i}) = h'(0_{\alg A_i})= h(0_{\alg A_i}) = 0_\alg C$. Given that $g'^{-1}(0_{\alg C}) = g^{-1}(0_{\alg C}) = \{0_{\alg B}\}$, we get that $f'(0_{\alg A_{i}}) = 0 _\alg B$; since every $x \in A_i$ lies above $0_{\alg A_i}$, it follows that $f'(A_i) \subseteq B$, and hence the restriction $f$ of $f'$ to $\alg A_{i}$ is a homomorphism $f\colon \alg A_i \longrightarrow \alg B$. Then $gf=h$, and $\alg A_i$ is zero-projective. \end{proof}
Moreover, we have the following. \begin{lemma}\label{weaklyprojective2} Let $\vv V$ be a subvariety of $\mathsf{CIRL}$; if $\alg A \in \vv V$ and $\mathbf 2 \oplus \alg A$ is zero-projective in the class $\vv K =\{\mathbf 2 \oplus \alg B: \alg B \in \vv V\}$, then $\alg A$ is projective in $\vv V$. \end{lemma} \begin{proof} Let $\alg B,\alg C \in \vv V$, $h\colon \alg A \longrightarrow \alg C$ and $g\colon \alg B \longrightarrow \alg C$ a surjective homomorphism. Define $h'\colon \mathbf 2 \oplus \alg A \longrightarrow \mathbf 2 \oplus \alg C$ and $g'\colon \mathbf 2 \oplus \alg B \longrightarrow \mathbf 2 \oplus \alg C$ as in the proof of Lemma \ref{weaklyprojective}, i.e. to coincide with, respectively, $h$ on $\alg A$ and $g$ on $\alg B$ and be the identity on $\alg 2$; by definition $g'^{-1}(0) = \{0\}$ and so, since $\mathbf 2 \oplus \alg A$ is zero-projective in $\vv K$, there is a homomorphism $f'\colon \mathbf 2 \oplus \alg A \longrightarrow \mathbf 2 \oplus \alg B$ with $g'f'=h'$. Notice that $f'(0_{\alg A}) \ge 0_{\alg B}$: indeed, if $f'(0_{\alg A}) = 0$, then $g'f'(0_{\alg A}) = g'(0) = 0 \ne h'(0_\alg A)$, a contradiction; thus $f'(0_{\alg A}) \in B$, i.e. $f'(0_{\alg A}) \ge 0_{\alg B}$, which implies that the restriction $f$ of $f'$ to $\alg A$ maps $\alg A$ into $\alg B$ and is a homomorphism from $\alg A$ to $\alg B$. Since $gf=h$, $\alg A$ is projective in $\vv V$. \end{proof}
\section{Heyting algebras}\label{heytingsection} In this section we apply our general result on ordinal sums to study projectivity in the variety of {\em Heyting algebras}. Heyting algebras are the idempotent algebras in $\mathsf{FL_{ew}}$, and their variety $\mathsf{HA}$ is the equivalent algebraic semantics of {\em Brouwer's Intuitionistic Logic}. Heyting algebras are divisible by Lemma \ref{Heyting}(5) below, and can alternatively be characterized as the class of $\mathsf{HL}$-algebras satisfying the further equation $$ xy \approx x \wedge y, $$ i.e. $\mathsf{HA} = \mathsf{P_{2}FL_{ew}} = \mathsf{P_{2}HL}$.
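The identification $\mathsf{HA} = \mathsf{P_{2}FL_{ew}}$ deserves a short justification (ours): integrality always gives $x \cdot y \le x \wedge y$, and idempotency gives the converse inequality.

```latex
% In any integral residuated lattice x.y <= x.1 = x and x.y <= 1.y = y,
% hence x.y <= x /\ y. If moreover x^2 = x holds, monotonicity of the
% product yields the reverse inequality:
\begin{align*}
x \wedge y = (x \wedge y) \cdot (x \wedge y) \le x \cdot y,
\end{align*}
% so x.y = x /\ y, i.e. the algebra is a Heyting algebra.
```

Thus the idempotent $\mathsf{FL_{ew}}$-algebras are exactly the algebras in which the product coincides with the meet.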
The fact that the product and the meet coincide forces many equations to hold in $\mathsf{HA}$; we collect some of them (without proofs) in the following.
\begin{lemma}\label{Heyting} Let $\alg A$ be a Heyting algebra and $a,b,c \in A$; then \begin{enumerate} \item if $a \le c$ and $a \rightarrow b =b$, then $c \rightarrow b=b$; \item $a \le (a \rightarrow b) \rightarrow b$; \item $((a \rightarrow b) \rightarrow b) \rightarrow (a \rightarrow b) = a \rightarrow b$; \item $(a \rightarrow b) \rightarrow ((a \rightarrow b) \rightarrow b) = ((a \rightarrow b) \rightarrow b)$; \item if $a \le b$, then $a \wedge (b \rightarrow c) = a \wedge c$. \end{enumerate} \end{lemma}
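As an illustration of how these identities interact, here is a short derivation (ours) of item (3) from item (2). Write $c = a \rightarrow b$ and $d = (a \rightarrow b) \rightarrow b$; by (2), $a \le d$, and hence:

```latex
% Using a <= d and d /\ (d -> c) <= c:
\begin{align*}
a \wedge (d \rightarrow c) \ =\ a \wedge d \wedge (d \rightarrow c)
  \ \le\ a \wedge c \ =\ a \wedge (a \rightarrow b) \ \le\ b.
\end{align*}
```

By residuation this gives $d \rightarrow c \le a \rightarrow b = c$; since $c \le d \rightarrow c$ always holds, we conclude $d \rightarrow c = c$, which is exactly item (3).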
Now, every Heyting algebra $\alg A$ is the ordinal sum of sum irreducible Heyting algebras. If $\alg A$ is also finite and projective, then the last component must be $\mathbf 2$: by Lemma \ref{finiteprojective} $\alg A$ is also subdirectly irreducible, and the subdirectly irreducible Heyting algebras are exactly those having $\alg 2$ as last component (folklore, see \cite{BuSa}).
We first show that characterizing the finite sum irreducible Heyting algebras is easy. We call an element $a$ of a poset $\langle P, \le\rangle$ a {\em node} if it is comparable with every element, i.e. for all $b \in P$ either $b \le a$ or $a \le b$; for any $a \in P$ the {\em upset} of $a$ is the set $\mathop{\uparrow} a = \{b: a \le b\}$. The proof of the following lemma is straightforward and we leave it as an exercise.
\begin{lemma}\label{node} Let $\alg A$ be a Heyting algebra and let $a$ be a node. Then: \begin{enumerate} \item $\alg A_a = \langle (A \setminus \mathop{\uparrow} a) \cup \{1\}, \vee,\wedge,\rightarrow,0,1\rangle$ is a Heyting algebra, where $$ u \vee v = \left\{
\begin{array}{ll}
1, & \hbox{if $u \vee_\alg A v = a$;} \\
u \vee_\alg A v, & \hbox{otherwise;}
\end{array}
\right. $$ \item $\alg A^a = \langle \mathop{\uparrow} a, \vee,\wedge,\cdot,\rightarrow,1\rangle$ is a bounded Brouwerian algebra; \item $\alg A = \alg A_a \oplus \alg A^a$. \end{enumerate} \end{lemma}
\begin{proposition}
A finite Heyting algebra is sum irreducible if and only if it has no node different from $0,1$. \end{proposition} \begin{proof} If there is a node different from $0,1$, then the algebra is sum reducible by Lemma \ref{node}. Conversely, if $\alg A$ is finite and sum reducible, i.e. $\alg A = \alg B \oplus \alg C$ with $\alg B$ and $\alg C$ nontrivial, then the minimum of $C$ is a node of $\alg A$ different from $0,1$. \end{proof}
Since in a totally ordered Heyting algebra every element is a node, from Lemma \ref{node} we also obtain the following.
\begin{proposition}
A totally ordered Heyting algebra is an ordinal sum of copies of $\mathbf 2$. \end{proposition}
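Concretely (the notation $\alg G_n$ is ours), the $n$-element chain $\alg G_n = \mathbf 2 \oplus \dots \oplus \mathbf 2$ ($n-1$ summands) carries the familiar G\"odel operations:

```latex
% Operations on the n-element chain obtained as an ordinal sum of copies of 2.
\begin{align*}
x \cdot y = x \wedge y = \min(x,y), \qquad
x \rightarrow y =
\begin{cases}
1 & \text{if } x \le y,\\
y & \text{otherwise.}
\end{cases}
\end{align*}
```

Indeed, when $x > y$ the element $y$ lies in a lower summand than $x$, so the ordinal sum definition of $\rightarrow$ returns $y$.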
Every finite Boolean algebra is a sum irreducible Heyting algebra; but since every finite distributive lattice can be given the structure of a Heyting algebra, there are many non-Boolean sum irreducible Heyting algebras. However, the list becomes very short if we consider sum irreducible Heyting algebras that are irreducible components of a finite projective Heyting algebra. We are now going to show that, indeed, the only possible components are $\alg 2$ and $\alg 4$. In order to use Lemma \ref{weaklyprojective}, we start by studying finite zero-projective Heyting algebras.
In the following theorem we will make use of the free Heyting algebra on one generator, $\alg F_\mathsf{HA}(x)$, also called the {\em Nishimura lattice}: an infinite Heyting algebra completely described in \cite{Nishimura1960}. In Figure \ref{Nishimura} we see a picture of the bottom of the lattice. Moreover, we will denote by $\mathbf 3$ and $\mathbf 4$ the three- and four-element Heyting algebras respectively, where $\mathbf 3 \cong \mathbf 2 \oplus \mathbf 2$.
\begin{figure}
\caption{The bottom of the Nishimura lattice}
\label{Nishimura}
\end{figure}
\begin{theorem}\label{twoatoms} Let $\alg A$ be a finite zero-projective Heyting algebra; if $\alg A$ is sum irreducible and has exactly two atoms then $\alg A \cong \mathbf 4$. \end{theorem} \begin{proof} If $a,b$ are the only atoms of $\alg A$, it is enough to show that $a \vee b =1$: if so, there can be no further elements below or incomparable with $a, b$, since they are the only atoms, and no element other than $1$ can lie strictly above either of them, since otherwise the lattice would not be distributive (it would contain a sublattice isomorphic to $\alg N_{5}$); hence $\alg A \cong \mathbf 4$. Suppose then, by way of contradiction, that $a \vee b < 1$ in $\alg A$; since $\alg A$ is sum irreducible, it cannot contain any node different from $0$ or $1$, so there has to be an element $d \in A$ covering $a$ and incomparable with $a \lor b$ (or the symmetric case). Let $F$ be the filter generated by $d$, $F = \mathop{\uparrow} d$,
then it can be checked (see Lemma 4.7 in \cite{BalbesHorn1970}) that, if $\theta$ is the congruence associated with $F$, then $\alg A /\theta \cong \mathbf 3$. In particular, if we denote by $\{0,u,1\}$ the universe of $\mathbf 3$, there is an onto homomorphism $h_0\colon \alg A \longrightarrow\mathbf 3$ with $h_0(a) = u$. Similarly, since $b$ is an atom (and hence the principal filter generated by $b$ is maximal) there is an onto homomorphism $h_1 \colon\alg A \longrightarrow \mathbf 2$ with $h_1(a) =0$. It follows that if we define $h(y) = (h_0(y),h_1(y))$ then $h\colon \alg A \longrightarrow \mathbf 3 \times \mathbf 2$ is an onto homomorphism and moreover $(u,0)$ generates $\mathbf 3 \times \mathbf 2$ (see Figure \ref{3times2}).
\begin{figure}
\caption{The algebra $\mathbf 3 \times \mathbf 2$}
\label{3times2}
\end{figure}
Let now $f$ be the onto homomorphism from the Nishimura lattice $\alg F_\mathsf{HA}(x)$ to $\mathbf 3 \times \mathbf 2$ defined by $f(x) = (u,0)$. Observe that \begin{align*} &f(\neg x) = \neg(u,0) = (0,1)\\ &f(\neg\neg x) = \neg\neg(u,0) = (1,0)\\ & f(x \vee \neg x) = (u,0) \vee \neg(u,0) = (u,0) \vee (0,1) = (u,1). \end{align*} Thus, looking at Figure \ref{Nishimura}, order preservation gives $f^{-1}(0,0) = \{0\}$ and $f^{-1}(u,0) = \{x\}$. Since $\alg A$ is zero-projective, there is a homomorphism $g\colon \alg A \longrightarrow \alg F_\mathsf{HA}(x)$ with $fg = h$. Now, $fg( a) = h( a) = (h_0(a),h_1( a)) = (u,0)$; since $f^{-1}(u,0) = \{x\}$, it follows that $g(a) = x$. Since $x$ generates $\alg F_\mathsf{HA}(x)$, the map $g$ is onto; but the Nishimura lattice is infinite and $\alg A$ is finite, a contradiction. This proves the claim. \end{proof}
It follows by Lemma \ref{Heyting}(5) that Heyting algebras are {\em pseudocomplemented}, that is, they satisfy the identity $x \land \neg x \approx 0$. Therefore, we can use the following fact. Given an algebra $\alg A$, let $R(\alg A)$ be the set of its {\em regular} elements, that is, elements $x \in A$ such that $\neg\neg x = x$. \begin{theorem}\cite{Castanoetal2011}\label{finitefree} Let $\vv V$ be a subvariety of $\mathsf{FL_{ew}}$ and let $\alg F_\vv V(X)$ be the free algebra in $\vv V$ on $X$. If $\vv V$ is pseudocomplemented, i.e., each algebra in $\vv V$ is pseudocomplemented, then $R(\alg F_\vv V(X)) \cong \alg F_\vv B(\neg\neg X)$, the free Boolean algebra on the set $\neg\neg X = \{\neg\neg x: x \in X\}$. \end{theorem} \begin{theorem}\label{nothree} Let $\alg A$ be a finite zero-projective Heyting algebra; then $\alg A$ has at most two atoms. \end{theorem} \begin{proof} Let $\alg F = \alg F_\mathsf{HA}(x,y)$ be the free Heyting algebra on two generators; by Theorem \ref{finitefree}, $R(\alg F)$ is the free Boolean algebra on $\{\neg\neg x, \neg\neg y\}$, whose atoms are $$ b_1 =\neg\neg x \wedge \neg\neg y\quad b_2= \neg\neg x \wedge \neg y \quad b_3 = \neg x \wedge \neg\neg y\quad b_4= \neg x \wedge \neg y. $$ It is a nice exercise to show that the subalgebra $\alg G$ of $\alg F$ generated by $b_1,b_2$ is infinite (see \cite[Theorem 4.4]{BalbesHorn1970}). We will show that if $\alg A$ has at least three atoms, then $\alg G$ is contained in a homomorphic image of $\alg A$; since $\alg A$ is finite, this is a clear contradiction.
Let $a_1, \ldots, a_{n}$ be the atoms of $\alg A$, with $n \geq 3$. Set $h(a_{1}) = b_{1}$, $h(a_{2}) = b_{2}$, and $h(a_{i}) = b_{3}$ for $3 \leq i \leq n$; then the map $h\colon \alg A \longrightarrow R(\alg F)$ defined by $h(x) = \bigvee\{h(a_{i}): a_i \le x\}$ is a homomorphism. Now $\neg\neg\colon \alg F \longrightarrow R(\alg F)$ is an onto homomorphism; moreover, if $u \in F$, then $u \le (u \rightarrow 0) \rightarrow 0 = \neg \neg u$ (Lemma \ref{Heyting}(2)), so $\neg\neg u = 0$ if and only if $u=0$. Since $\alg A$ is zero-projective, there is a homomorphism $g\colon\alg A \longrightarrow \alg F$ with $\neg\neg g = h$. But then $g(\neg\neg a_1) = \neg\neg g(a_1) =h(a_1) = b_1$ and $g(\neg\neg a_2) = \neg\neg g(a_2) = h(a_2) = b_2$, so the image of $g$ contains $b_1$ and $b_2$ and hence includes the infinite subalgebra $\alg G$; but the image of the finite algebra $\alg A$ is finite. Thus we have reached the desired contradiction. \end{proof}
\begin{corollary}\label{twoandfour} Let $\alg A$ be a finite projective Heyting algebra and suppose that $\bigoplus_{i=0}^n \alg A_i$ is a decomposition of $\alg A$ into its sum irreducible components. Then $\alg A_n \cong\mathbf 2$ and for each $i <n$, $\alg A_i$ is isomorphic with either $\mathbf 2$ or $\mathbf 4$. \end{corollary} \begin{proof} We have already observed that $\alg A_n$ must be isomorphic with $\mathbf 2$. If $i<n$, by Lemma \ref{weaklyprojective} $\alg A_i$ is zero-projective, hence it can have at most two atoms by Theorem \ref{nothree}. If it has only one atom, then $\alg A_i \cong \mathbf 2$; if it has two atoms, since it is also sum irreducible, Theorem \ref{twoatoms} applies and thus $\alg A_i \cong \mathbf 4$. \end{proof}
We will now show that the necessary condition in Corollary \ref{twoandfour} is also sufficient. First we need a key lemma.
\begin{lemma} \label{inductionstep} Let $\alg A$ be a finite Heyting algebra, and let $\alg B$ be either $\alg 2$ or $\alg 4$. If $\alg A \oplus \mathbf 2$ is projective, then also $\alg A \oplus \alg B \oplus \mathbf 2$ is projective.\end{lemma} \begin{proof} We shall label the elements as in Figure \ref{figure:lemmainduction}. If $\alg A \oplus \mathbf 2$ is (finite and) projective, then it is a retract of a free algebra generated by a (finite) set $X$, $\alg F_{\sf HA}(X)$. Thus, there exist homomorphisms $i\colon \alg A \oplus \alg 2 \to \alg F_{\sf HA}(X)$ and $j\colon \alg F_{\sf HA}(X) \to \alg A \oplus \alg 2$ such that $j \circ i = id_{\alg A \oplus \alg 2}$. We consider first the case $\alg B \cong \alg 2$. Given a new variable $y$, we will define homomorphisms $\bar i\colon \alg A \oplus \alg 2 \oplus \alg 2 \to \alg F_{\sf HA}(X \cup \{y\})$, and $\bar j\colon \alg F_{\sf HA}(X \cup \{y\}) \to \alg A \oplus \alg 2 \oplus \alg 2$ such that $\bar j \circ \bar i = id_{\alg A \oplus \alg 2 \oplus \alg 2}$. We define: \begin{align*} &\bar{i}(x) = i(x), \quad \mbox{ for } x \in A \oplus 2\\ &\bar{i}(c_1) = y \vee (y \rightarrow i(c_0)). \end{align*}
\begin{figure}
\caption{Labeling $\alg B$ and $\mathbf 2$}
\label{figure:lemmainduction}
\end{figure}
Since $i(c_0) \le \bar{i}(c_1)$ it is easy to check that $\bar{i}$ is a lattice homomorphism; for instance $$ \bar{i}(c_0) \vee \bar{i}(c_1) = i(c_0) \vee \bar{i}(c_1) = \bar{i}(c_1) = \bar{i}(c_0 \vee c_1). $$ For the implication there is only one nontrivial case and we proceed to prove it: \begin{align*} \bar{i}(c_1) \rightarrow \bar{i}(c_0) &= (y \vee (y \rightarrow i(c_0))) \rightarrow i(c_0) \\ &= (y \rightarrow i(c_0)) \wedge ((y \rightarrow i(c_0)) \rightarrow i(c_0)) \quad\text{by \cite[Theorem 3.10(2)]{GJKO}}\\ &= (y \rightarrow i(c_0)) \wedge i(c_0) \quad\text{by Lemma \ref{Heyting}(5)}\\ &= i(c_0) \quad\text{by the properties of residuated lattices}\\ &=\bar{i}(c_0) = \bar{i}(c_1 \rightarrow c_0) \quad\text{by definition of ordinal sum}. \end{align*} We now let $\bar j$ be the homomorphism coinciding with $j$ on $\alg F_{\sf HA}(X)$, and such that $\bar j(y) = c_1$. We prove that $\bar j \circ \bar i = id_{\alg A \oplus \alg 2 \oplus \alg 2}$. If $x \in \alg A \oplus \alg 2$, then $\bar j (\bar i(x)) = \bar j(i(x)) = j(i(x)) = x$. Moreover $\bar j (\bar i(c_1)) = \bar j(y \lor (y \to i(c_0))) = \bar j(y) \lor (\bar j(y) \to \bar j(i(c_0))) = c_1 \lor (c_1 \to c_0) = c_1 \lor c_0 = c_1$. Thus $\alg A \oplus \alg 2 \oplus \alg 2$ is a retract of $\alg F_{\sf HA}(X \cup \{y\})$ and is therefore projective.
Let us now consider the case where $\alg B \cong \alg 4$. We will define $\widehat i\colon \alg A \oplus \mathbf 4 \oplus \mathbf 2 \to \alg F_{\sf HA}(X \cup \{y\})$ and $\widehat j\colon \alg F_{\sf HA}(X \cup \{y\}) \to \alg A \oplus \mathbf 4 \oplus \mathbf 2$ such that their composition is the identity on $\alg A \oplus \mathbf 4 \oplus \mathbf 2$. First we define: \begin{align*} &\widehat{i}(x) = i(x) \quad \mbox{ for } x \in A \oplus 2\\ &\widehat{i}(a_0) = (y \rightarrow i(c_0)) \rightarrow i(c_0)\\ &\widehat{i}(b_0) = y \rightarrow i(c_0)\\ &\widehat{i}(c_1) = \widehat{i}(a_0) \vee \widehat{i}(b_0). \end{align*} It can be shown that $\widehat{i}$ is a lattice homomorphism: indeed, by definition $\widehat{i}(c_1) = \widehat{i}(a_0) \vee\, \widehat{i}(b_0)$, and moreover $\widehat{i}(a_0) \land\, \widehat{i}(b_0) = ((y \rightarrow i(c_0)) \rightarrow i(c_0)) \land (y \rightarrow i(c_0)) = i(c_0) = \widehat{i}(a_0 \land b_0)$.
Next, observe that for $v \in A\setminus\{1\}$ and $x \in \{c_0,a_0,b_0,c_1\}$, $i(v) \leq i(c_0) \leq \widehat i(x)$, and $i(c_0) \to i(v) = i(c_0 \to v) = i(v)$. Thus by Lemma \ref{Heyting}(1) we get: $$ \widehat{i}(x) \rightarrow \widehat{i}(v) = \widehat{i}(x) \rightarrow i(v) = i(v) = \widehat{i}(v)= \widehat{i}(x \rightarrow v). $$ Moreover: \begin{align*} \widehat{i}(a_0) \rightarrow \widehat{i}(c_0) &= ((y \rightarrow i(c_0)) \rightarrow i(c_0)) \rightarrow i(c_0) \\ &= y \rightarrow i(c_0) \quad \text{by the properties of residuated lattices}\\ &= \widehat{i}(b_0)= \widehat{i}(a_0 \rightarrow c_0) \quad\text{by residuation}. \end{align*} Similarly, $\widehat{i}(b_0) \rightarrow \widehat{i}(c_0)= \widehat{i}(b_0 \rightarrow c_0)$. The arguments for proving $\widehat{i}(a_0) \rightarrow \widehat{i}(b_0)= \widehat{i}(a_0 \rightarrow b_0)$ and $\widehat{i}(b_0) \rightarrow \widehat{i}(a_0)= \widehat{i}(b_0 \rightarrow a_0)$ are analogous, using Lemma \ref{Heyting}(3) and (4). Finally, the proof that $\widehat{i}(c_1) \rightarrow \widehat{i}(x)= \widehat{i}(c_1 \rightarrow x)$ whenever $x < c_1$ follows by direct computation and is left to the reader. Thus, $\widehat i$ is a homomorphism. We let $\widehat j$ be the homomorphism coinciding with $j$ on $\alg F_{\sf HA}(X)$ and such that $\widehat j(y) = a_0$. We prove that $\widehat j \circ \widehat i$ is the identity on $\alg A \oplus \alg 4 \oplus \alg 2$. For $x \in \alg A \oplus \alg 2$, $\widehat j \circ \widehat i(x) = j \circ i (x) = x$. \begin{align*} \widehat j \circ \widehat i(a_0)&= \widehat j((y \to i(c_0)) \to i(c_0))\\ &= (\widehat j(y) \to j \circ i(c_0)) \to j \circ i(c_0) = (a_0 \to c_0) \to c_0 = a_0.\\ \widehat j \circ \widehat i(b_0)&= \widehat j(y \to i(c_0)) = \widehat j(y) \to j \circ i(c_0) = a_0 \to c_0 = b_0\\ \widehat j \circ \widehat i(c_1)&= \widehat j(\widehat i(a_0) \lor \widehat i(b_0)) = a_0 \lor b_0 = c_1. 
\end{align*} Thus, $\alg A \oplus \alg 4 \oplus \alg 2$ is a retract of $\alg F_{\mathsf HA}(X \cup \{y\})$ and therefore is projective. \end{proof}
Thus we get the main result of this section.
\begin{theorem}\label{projectiveHA} Let $\kappa \leq \omega$ be an ordinal and let $\alg A = \bigoplus_{n < \kappa} \alg A_n$, where $\alg A_n$ is either $\mathbf 2$ or $\mathbf 4$ for all $n < \kappa$. Then $\alg A \oplus \mathbf 2$ is projective in $\mathsf{HA}$. \end{theorem} \begin{proof} Since $\alg 2$ is projective in $\mathsf{HA}$, the claim follows by induction on $\kappa$ using Lemma \ref{inductionstep}. \end{proof}
Finite projective Heyting algebras have been characterized in \cite{BalbesHorn1970}; in this section we have given an alternative proof using the concepts of ordinal sum and sum irreducibility to get (we hope) a more streamlined and clear sequence of arguments.
\section{Divisible residuated lattices}\label{section:hoops} In the previous sections we have exploited the ordinal sum construction, while in this section we will make use of a strong order property that characterizes a large and interesting class of residuated lattices: divisibility. We observe that the results in this section are more conveniently stated for commutative and integral residuated {\em semilattices}, which we recall are the subreducts of commutative and integral residuated lattices and share a good deal of their theory with them (see \cite{Agliano2018a} for an extensive treatment, even for the noncommutative case). First we make a general observation.
\begin{proposition}\label{prop:retractproj1} Let $\vv K$ be a class of algebras. If every algebra $\alg B \in \vv K$ is a retract of every algebra $\alg A \in \vv K$ of which it is a homomorphic image, then every algebra in $\vv K$ is projective in $\vv K$. \end{proposition} \begin{proof} Let $\vv K$ be a class of algebras satisfying the hypothesis of the proposition. Consider any algebra $\alg C \in \vv K$, and suppose there are a homomorphism $h\colon \alg C \to \alg B$ and a surjective homomorphism $g\colon \alg A \to \alg B$. Since $\alg B$ is a homomorphic image of $\alg A$, it is also a retract of $\alg A$. Thus there exists an injective homomorphism $f'\colon\alg B \to \alg A$ such that $gf' = id_{\alg B}$. Then we can consider the homomorphism $f\colon \alg C \to \alg A$, $f = f'h$. We get $gf = gf'h = h$, and thus $\alg C$ is projective in $\vv K$. \end{proof}
A usual application of this fact is taking $\vv K$ as the class $\vv V_{fin}$ of finite algebras of a variety $\vv V$; thus if every finite $\alg A \in \vv V$ is a retract of any finite $\alg B \in \vv V$ of which it is a homomorphic image, then every finite member of $\vv V$ is projective in $\vv V_{fin}$. In this case it is customary to say that every finite member of $\vv V$ is {\em finitely projective}. In case $\vv V$ is also locally finite, the finite algebras are exactly the finitely presented algebras, and thus every finitely presented algebra in $\vv V$ is projective in $\vv V$.
The class of $\{\rightarrow,\cdot\}$-subreducts of commutative and integral residuated semilattices is the quasivariety $\mathsf{PO}$ of commutative and integral residuated partially ordered monoids, commonly known as {\em pocrims} \cite{BlokRaf1997}; the fact that they are indeed all subreducts is a consequence of a very general embedding theorem stated in \cite{OnoKomori1985}. Since the divisibility equation makes $\wedge$ a definable operation via $ a \,\wedge\, b = (a \rightarrow b)a, $
every divisible variety of residuated semilattices is a variety of pocrims. In particular, the variety of hoops is a variety of pocrims, and for hoops we have the following result.
\begin{lemma}\label{idempo} Let $\alg A$ be a hoop and let $u \in A$ be idempotent; then for all $a,b \in A$ \begin{align} &u \rightarrow (a \rightarrow b) = (u \rightarrow a) \rightarrow (u \rightarrow b)\\ &u \rightarrow ab = (u \rightarrow a)(u \rightarrow b). \end{align} \end{lemma}
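Both the divisibility identity $a \wedge b = (a \rightarrow b)a$ and the two identities of Lemma \ref{idempo} can be checked mechanically on small chains. The sketch below is only an illustration: the integer encodings of the G\"odel and \L ukasiewicz chains are our own choice, not part of the paper.

```python
# Brute-force sanity check on two finite hoop chains encoded on {0,...,TOP}:
#   divisibility:  a /\ b = (a -> b)*a
#   Lemma idempo:  u -> (a -> b) = (u -> a) -> (u -> b)
#                  u -> a*b     = (u -> a)*(u -> b)   for idempotent u.
N, TOP = 5, 4

chains = {
    # Godel chain: product is min, every element is idempotent
    "Godel": (lambda x, y: min(x, y),
              lambda x, y: TOP if x <= y else y),
    # Lukasiewicz chain: only 0 and TOP are idempotent
    "Lukasiewicz": (lambda x, y: max(0, x + y - TOP),
                    lambda x, y: min(TOP, TOP - x + y)),
}

for name, (mult, imp) in chains.items():
    idem = [u for u in range(N) if mult(u, u) == u]
    for a in range(N):
        for b in range(N):
            assert mult(imp(a, b), a) == min(a, b)   # divisibility
            for u in idem:
                assert imp(u, imp(a, b)) == imp(imp(u, a), imp(u, b))
                assert imp(u, mult(a, b)) == mult(imp(u, a), imp(u, b))
    print(name, "chain: divisibility and Lemma idempo verified")
```

On the G\"odel chain every element is idempotent, so the lemma is exercised at every $u$; on the \L ukasiewicz chain only the endpoints are.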
The two properties above, albeit similar, are quite different in nature; the first one can be proved through a standard computation \cite{EDPC3} while the second involves very complex calculations: it was first proved in \cite{SpinksVeroff2004} via a computer-assisted proof. We can now prove the following.
\begin{theorem}\label{mth} Every finite hoop is finitely projective in the variety of hoops, hence in any locally finite variety of hoops every finitely presented (equivalently, finite) algebra is projective. \end{theorem} \begin{proof} Let $\alg A, \alg B$ be finite hoops and let $g\colon\alg B \longrightarrow \alg A$ be a surjective homomorphism. Let $\theta = \op{ker}(g)$ and $F=1/\theta$; then $F$ is a filter of $\alg B$ and, since $\alg B$ is finite, it has a minimum $u$, which is idempotent. Define $f(x) = u \rightarrow x$; then by Lemma \ref{idempo} $f$ is an endomorphism of $\alg B$. It is also idempotent, since for $b \in B$ $$ f(f(b)) = u \rightarrow (u \rightarrow b) = u^2 \rightarrow b = u \rightarrow b = f(b). $$ Now if $b \in F$, then $f(b) = u \rightarrow b =1$; conversely, if $f(b) =1$ then $u \rightarrow b =1$, i.e. $u \le b$ and so $b \in F$. In conclusion $F = 1/\op{ker}(f)$ and thus $\op{ker}(f) = \theta$; if we set $\alg C = f(\alg B)$, which is a subalgebra of $\alg B$, then $$ \alg A \cong \alg B /\theta = \alg B/\op{ker}(f) \cong \alg C. $$ Let now $h\colon \alg A \longrightarrow \alg C$ be the resulting isomorphism; then $h$ is an injective homomorphism from $\alg A$ to $\alg B$. Moreover, by definition $h(a) = f(b)$ where $b$ is such that $g(b) =a$; observe also that the idempotency of $f$ implies that $(b,f(b)) \in \op{ker}(f)=\theta$, thus $g(b) = g(f(b))$. Hence, if $a \in A$ and $b \in B$ with $g(b)= a$, we have $$ g(h(a))= g(f(b)) = g(b) = a. $$ We have just proved that $\alg A$ is a retract of $\alg B$; by Proposition \ref{prop:retractproj1} the conclusion follows. \end{proof}
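The retraction built in the proof can be watched concretely on a finite G\"odel chain, where every upset is a filter; the encoding below is a minimal sketch of ours, not the paper's construction.

```python
# Theorem mth's construction on a Godel chain: for the minimum u of a
# filter F, f(x) = u -> x is an idempotent endomorphism with 1/ker(f) = F.
N, TOP = 6, 5
mult = lambda x, y: min(x, y)
imp  = lambda x, y: TOP if x <= y else y

u = 3                        # minimum of the filter F = {3, 4, 5}
f = lambda x: imp(u, x)

for a in range(N):
    assert f(f(a)) == f(a)                        # f is idempotent
    assert (f(a) == TOP) == (a >= u)              # 1/ker(f) = F
    for b in range(N):
        assert f(mult(a, b)) == mult(f(a), f(b))  # preserves the product
        assert f(imp(a, b)) == imp(f(a), f(b))    # preserves the residuum

print("f(x) = u -> x is an idempotent endomorphism of the Godel chain")
```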
In particular any finite Brouwerian semilattice (i.e., an idempotent hoop) is finitely projective in the variety of Brouwerian semilattices. Since the latter is locally finite \cite{Kohler1981} we get at once that all the finitely presented Brouwerian semilattices are projective (a fact already noted in \cite{Ghilardi1997}).
The class of $\rightarrow$-subreducts of the variety of commutative and integral residuated lattices is the (proper) quasivariety $\mathsf{BCK}$ of BCK-algebras; these algebras were introduced by K. Iseki \cite{Iseki1962} and have been widely studied since; the fact that they are indeed subreducts of commutative and integral residuated lattices is again a consequence of the quoted result in \cite{OnoKomori1985}. In \cite{Ferr2000} the class of BCK-algebras that are implicative subreducts of hoops has been described: it is a variety called $\mathsf{HBCK}$ and the algebras therein are called HBCK-algebras. An analogue of Theorem \ref{mth} then follows with the same proof in the reduced language.
\begin{theorem}\label{thm:hbck} Every finite HBCK-algebra is finitely projective in the variety of HBCK-algebras, hence in any locally finite variety of HBCK-algebras every finitely presented (equivalently, finite) algebra is projective.
\end{theorem}
In order to transfer our argument to full hoops or to $\mathsf{HL}$-algebras we have to take care of the join. The easiest case is the one in which the join is definable, which is equivalent to saying that the hoop satisfies the equation (PJ) (see \cite{Agliano2018c}). Thus, prelinear hoops are full hoops (i.e., they are residuated lattices) and the conclusion applies.
\begin{corollary}\label{cor:projlfbasichoops}
Every finite basic hoop is finitely projective in the variety of basic hoops, hence in any locally finite variety of basic hoops every finitely presented (equivalently, finite) algebra is projective. \end{corollary}
If we remove the hypothesis of local finiteness, the previous result does not hold: for instance, not all finitely presented Wajsberg hoops are projective, as shown in \cite{Ugolini2022}. However, there is another interesting variety of hoops that is not locally finite but for which the same property holds: the variety of cancellative hoops. {\em Cancellative hoops} are basic hoops satisfying the cancellativity law: $$ x \to (x y) = y . \qquad (canc)$$ Cancellative hoops can be seen as negative cones of abelian lattice-ordered groups (or {\em $\ell$-groups} for short), and actually, they are categorically equivalent to $\ell$-groups \cite{DiNolaLettieri1994}. Now, the finitely generated projective $\ell$-groups coincide with the finitely presented $\ell$-groups, as shown in \cite{Beynon1977}. Like projectivity and finite presentability, in every variety the concept of an algebra being finitely generated is categorical, i.e., it can be described in the abstract categorical setting by properties of morphisms (see Theorems 3.11 and 3.12 in [1]), and thus all these notions are preserved by categorical equivalences. Therefore, we have the following result.
\begin{proposition}\label{prop:cancproj} The finitely presented cancellative hoops are exactly the finitely generated projective algebras in their variety. \end{proposition>
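A minimal sketch of the cancellativity law at work: the negative cone of the $\ell$-group $(\mathbb Z,+,\le)$, with $x \cdot y = x + y$ and $x \rightarrow y = \min(0, y - x)$, is a cancellative hoop. The encoding is our own illustration.

```python
# The negative cone of the l-group (Z, +, <=): elements are the
# non-positive integers, the product is addition, the residuum is
# truncated subtraction.
def mult(x, y):
    return x + y

def imp(x, y):
    return min(0, y - x)

cone = range(-8, 1)          # a finite window into the infinite cone
for x in cone:
    for y in cone:
        assert imp(x, mult(x, y)) == y          # cancellativity (canc)
        assert mult(imp(x, y), x) == min(x, y)  # divisibility gives the meet

print("(canc) holds on the sampled window of the negative cone of Z")
```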
Are there non-prelinear varieties of commutative and integral residuated lattices for which an analogue of Theorem \ref{mth} holds? We do not know; there are results in the literature (for instance, Lemma 8.2 in \cite{OlsonRafVanAlten2008}) suggesting that one would need proof techniques different from the ones considered here.
If we add the constant $0$ to the signature to represent the least element of the structures under consideration, the argument used above does not work: in Theorem \ref{mth}, given a surjective homomorphism onto a finite algebra, we defined an embedding witnessing the retraction, and this embedding does not necessarily preserve the lower bound.
However, we can prove the following (weaker) result.
\begin{theorem}\label{cor:projlfBL}
Every finite bounded hoop is finitely zero-projective in the variety of bounded hoops; hence in any locally finite variety of bounded hoops every finitely presented algebra is zero-projective. \end{theorem} \begin{proof}
Let $\alg A,\alg B$ be finite bounded hoops and let $g\colon \alg B \longrightarrow \alg A$ be a surjective homomorphism such that $g^{-1}(0) = \{0\}$. Then, by the same argument as in Theorem \ref{mth}, we can find an endomorphism $f$ of the $0$-free reduct of $\alg B$ such that $\alg A$ and $f(\alg B)$ are isomorphic through $h$ as $0$-free structures. It only remains to check that the map $h$ is a homomorphism in the full signature. Since we showed that $gh$ is the identity, $g(h(0)) = 0$, which implies that $h(0) = 0$ since $g^{-1}(0) = \{0\}$. Hence the conclusion follows. \end{proof}
Prelinear bounded hoops are (termwise equivalent to) BL-algebras so we get:
\begin{corollary} Every finite BL-algebra is finitely zero-projective in the variety of BL-algebras; hence in any locally finite variety of BL-algebras every finitely presented algebra is zero-projective. \end{corollary}
Since any projective algebra is zero-projective, it is sensible to ask whether it is possible to characterize the projective algebras in a different way. We have seen that $\alg 2$ is projective in every subvariety of $\mathsf{FL_{ew}}$ (Lemma \ref{lemma:2proj}), since it is the free algebra over the empty set of generators. Moreover, the following holds.
\begin{lemma}\label{lemma:2retract}
$\alg 2$ is a retract of every free algebra in every subvariety of $\mathsf{FL_{ew}}$. \end{lemma} \begin{proof} $\alg 2$ is (isomorphic to) a subalgebra of every free algebra $\alg F$ in every subvariety of $\mathsf{FL_{ew}}$.
The retraction is then witnessed by the inclusion map and any (necessarily surjective) homomorphism that maps the generators of $\alg F$ to $\{0,1\}$. \end{proof} From this fact we derive another property of projective $\mathsf{FL_{ew}}$-algebras.
\begin{proposition}\label{lemma:2homimage} Let $\vv V$ be a variety of $\mathsf{FL_{ew}}$-algebras. If $\alg A$ is projective in $\vv V$, then $\alg A$ has $\alg 2$ as a homomorphic image. \end{proposition}
Now, any algebra that has a negation fixpoint, e.g., the totally ordered three-element Wajsberg algebra $\alg{\L}_3$, cannot have $\mathbf 2$ as a homomorphic image: a homomorphism onto $\mathbf 2$ would send the fixpoint $a = \neg a$ to an element of $\{0,1\}$ equal to its own negation, which is impossible. Thus, there are finite bounded residuated (semi)lattices that are zero-projective but not projective. However, the necessary condition turns out to be also sufficient for bounded hoops, and hence for $\mathsf{BL}$-algebras. We start with a technical lemma.
\begin{lemma}\label{lemma:semantical} The variety of bounded hoops satisfies the quasiequation \begin{equation} x^2 \approx x \quad\Longrightarrow\quad (x \rightarrow y) \rightarrow \neg\neg x \approx \neg\neg x.\tag{Q} \end{equation} \end{lemma} \begin{proof} It is well-known that the variety of bounded hoops has the FEP \cite{BlokFerr2000} and therefore it is generated as a quasivariety by its finite algebras; since any finite algebra is a subdirect product of finite subdirectly irreducible algebras, it is enough to prove (Q) for all the finite subdirectly irreducible bounded hoops. Now, by \cite{BlokFerr2000}, any finite subdirectly irreducible bounded hoop $\alg A$ is an ordinal sum $\alg F \oplus \alg S$ where $\alg S$ is a finite totally ordered Wajsberg hoop and $\alg F$ is a (possibly trivial) bounded hoop.
We observe that in a finite totally ordered Wajsberg hoop the only idempotents are $1$ and the bottom element, so (Q) is satisfied. We will prove the thesis by induction on the size of $\alg A$: first, the only two-element bounded hoop is $\mathbf 2$, and the only three-element subdirectly irreducible bounded hoops are $\mathbf 2 \oplus \mathbf 2$ and the three-element Wajsberg chain.
(Q) holds in both these structures by inspection. Now suppose we have proved the statement for all subdirectly irreducible algebras of size $\le n$, and let $\alg A = \alg F \oplus \alg S$ with $|A|=n+1$. If $\alg F$ is trivial, then $\alg A \cong \alg S$ and (Q) holds; otherwise $|F|\le n$ and $\alg F$ is a subdirect product of subdirectly irreducible algebras of size $\le n$, for which (Q) holds by the induction hypothesis. Hence (Q) holds in $\alg F$ as well. Finally, if $x \in S$, $x^2 = x$, and $y \in F$, then $\neg\neg x =1$ and $$ (x \rightarrow y) \rightarrow \neg\neg x = y \rightarrow 1 =1 = \neg\neg x; $$ if $x \in F$, $x^2 = x$, and $y \in S$, then $$ (x \rightarrow y) \rightarrow \neg\neg x = 1 \rightarrow \neg\neg x = \neg\neg x. $$ This concludes the induction and proves the thesis. \end{proof}
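As a sanity check, (Q) can be verified by brute force on the two three-element subdirectly irreducible bounded hoops that form the base of the induction; the integer encodings below are our own.

```python
# Verify the quasiequation (Q):  x*x = x  implies  (x -> y) -> ~~x = ~~x,
# on the two three-element subdirectly irreducible bounded hoops:
# the Wajsberg chain L3 and 2 (+) 2 (the three-element Godel chain).
TOP = 2

luk_mult   = lambda x, y: max(0, x + y - TOP)
luk_imp    = lambda x, y: min(TOP, TOP - x + y)
godel_mult = lambda x, y: min(x, y)
godel_imp  = lambda x, y: TOP if x <= y else y

for mult, imp in [(luk_mult, luk_imp), (godel_mult, godel_imp)]:
    for x in range(TOP + 1):
        if mult(x, x) != x:
            continue                      # (Q) only constrains idempotents
        nnx = imp(imp(x, 0), 0)           # double negation of x
        for y in range(TOP + 1):
            assert imp(imp(x, y), nnx) == nnx

print("(Q) holds in L3 and in 2 (+) 2")
```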
If $\alg A$ is any bounded hoop, then the double negation is a $\cdot$-homomorphism, that is, for all $a,b \in A$ $$ \neg\neg(a\cdot b) = \neg\neg a \cdot \neg\neg b. $$ This has been shown by Cignoli and Torrens in \cite[Theorem 4.8]{CT04}. It follows that if $u \in A$ is idempotent, so is $\neg\neg u$.
The following lemma was stated in \cite{Dzik2008} for $\mathsf{BL}$-algebras without proof. While we were not able to derive it syntactically, using Lemma \ref{lemma:semantical} we show that the conclusions hold for bounded hoops (i.e., prelinearity and the join are not needed).
\begin{lemma}\label{lemma:dzik} If $\alg A$ is a bounded hoop, $a,b,u \in \alg A$ and $u$ is idempotent, then \begin{align*} ua &= u \wedge a \tag{U1}\\ u \rightarrow ua &= u\rightarrow a \tag{U2}\\ \neg\neg u((u \rightarrow a) \rightarrow (u \rightarrow b))&= (u \rightarrow a) \rightarrow \neg\neg u(u \rightarrow b)\tag{U3}\\ (u \rightarrow a) \rightarrow (u \rightarrow b) &= \neg\neg u(u \rightarrow a) \rightarrow \neg\neg u (u \rightarrow b) \\ &= \neg\neg u(u \rightarrow a) \rightarrow (u \rightarrow b).\tag{U4} \end{align*} \end{lemma} \begin{proof} Observe that $ua \le u \wedge a$ by integrality; conversely $$ u \wedge a = u (u \rightarrow a) = u^2(u \rightarrow a) = u (u\wedge a) \le ua. $$ Thus (U1) holds, and (U2) is an easy consequence: $u (u \to a) = u \land a = ua$ implies $u \to a \leq u \to ua$, and the other inequality is a consequence of residuation.
Now by Lemma \ref{lemma:semantical}, if $u$ is idempotent and $a \in A$, then $$ (u \rightarrow a) \rightarrow \neg\neg u = \neg \neg u. $$ Using (U1) and the fact that in residuated structures the implication distributes over meet, we get the following: \begin{align*} (u \rightarrow a) \rightarrow \neg\neg u(u \rightarrow b)&= (u \rightarrow a) \rightarrow[\neg\neg u \wedge (u \rightarrow b)]\\ &= [(u \rightarrow a) \rightarrow \neg\neg u] \wedge [(u \rightarrow a) \rightarrow (u\rightarrow b)]\\ &= \neg \neg u \wedge [(u \rightarrow a) \rightarrow (u\rightarrow b)] \\ &= \neg\neg u[(u \rightarrow a) \rightarrow (u\rightarrow b)] \end{align*} and (U3) holds. For (U4), the first equality is shown as follows: \begin{align*} \neg\neg u(u \rightarrow a) \rightarrow (u \rightarrow b) &\le u(u \rightarrow a) \rightarrow (u \rightarrow b) \\ &=u \rightarrow [(u \rightarrow a) \rightarrow (u \rightarrow b)] \\ & = u \rightarrow (u \rightarrow (a \rightarrow b)) \\ &= u \rightarrow (a \rightarrow b) \\ &= (u \rightarrow a) \rightarrow (u \rightarrow b)\\ &\le \neg\neg u( u \rightarrow a) \rightarrow (u \rightarrow b). \end{align*} Finally, \begin{align*} \neg\neg u(u \rightarrow a) \rightarrow \neg\neg u(u \rightarrow b) &= \neg\neg u(u \rightarrow a) \rightarrow [\neg\neg u \wedge(u \rightarrow b)]\\ &= [\neg\neg u(u \rightarrow a) \rightarrow \neg\neg u] \wedge [\neg\neg u(u \rightarrow a) \rightarrow (u \rightarrow b)] \\ &=\neg\neg u (u \rightarrow a) \rightarrow (u \rightarrow b). \end{align*} Hence (U4) holds as well. \end{proof}
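The identities (U1)--(U4) admit the same kind of finite sanity check; the sketch below (with our own chain encoding) verifies them for every idempotent $u$ of a five-element \L ukasiewicz chain.

```python
# Check (U1)-(U4) of Lemma lemma:dzik by brute force on the 5-element
# Lukasiewicz chain {0,...,4}; its only idempotents are 0 and 4.
N, TOP = 5, 4
mult = lambda x, y: max(0, x + y - TOP)   # product
imp  = lambda x, y: min(TOP, TOP - x + y) # residuum
neg  = lambda x: imp(x, 0)                # negation

for u in range(N):
    if mult(u, u) != u:
        continue                          # the lemma assumes u idempotent
    nnu = neg(neg(u))
    for a in range(N):
        for b in range(N):
            ua, ub = imp(u, a), imp(u, b)
            assert mult(u, a) == min(u, a)                           # (U1)
            assert imp(u, mult(u, a)) == ua                          # (U2)
            assert mult(nnu, imp(ua, ub)) == imp(ua, mult(nnu, ub))  # (U3)
            assert imp(ua, ub) == imp(mult(nnu, ua), mult(nnu, ub))  # (U4)
            assert imp(ua, ub) == imp(mult(nnu, ua), ub)             # (U4)

print("(U1)-(U4) verified on the 5-element Lukasiewicz chain")
```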
\begin{lemma} \label{dzik} Let $\alg A$ be a bounded hoop, and let $\alg 2$ denote its subalgebra with domain $\{0,1\}$. If $\varphi\colon \alg A \longrightarrow \mathbf 2$ is a homomorphism and $u \in A$ is idempotent, then the map $$ f(x) = (u \rightarrow x) (\neg u \rightarrow \varphi(x)) $$ is an endomorphism of $\alg A$. \end{lemma} \begin{proof} We first prove that $f$ is a $\cdot$-homomorphism. Let $x,y \in A$; if $\varphi(xy) = 1$, then $\varphi(x) =\varphi(y) =1$ and \begin{align*} f(xy) &= (u \rightarrow xy)(\neg u \rightarrow 1) = (u \rightarrow x)(u \rightarrow y)\\ &= (u \rightarrow x)(\neg u \rightarrow 1)(u \rightarrow y) (\neg u \rightarrow 1) = f(x)f(y). \end{align*} Suppose that $\varphi(xy) = 0$, $\varphi(x)= 1$ and $\varphi(y)=0$ (the case $\varphi(x)=0$ and $\varphi(y)=1$ is symmetric); then \begin{align*} f(xy) &= (u \rightarrow xy)\neg\neg u = (u \rightarrow x)(u \rightarrow y)\neg\neg u\\ &= (u \rightarrow x)(\neg u \rightarrow 1)(u \rightarrow y) \neg\neg u = f(x)f(y). \end{align*} Finally, suppose that $\varphi(xy) = \varphi(x) = \varphi(y)=0$; then \begin{align*} f(xy) &= (u \rightarrow xy)\neg\neg u = (u \rightarrow x)(u \rightarrow y)\neg\neg u\\ &= (u \rightarrow x)\neg\neg u(u \rightarrow y) \neg\neg u = f(x)f(y). \end{align*}
Thus, $f$ is a $\cdot$-homomorphism.
We now show that $f$ preserves $\rightarrow$ using Lemma \ref{lemma:dzik}; consider: \begin{align*} f(a \rightarrow b) &= (u \rightarrow (a \rightarrow b))(\neg u \rightarrow \varphi(a \rightarrow b))\\ &= [(u \rightarrow a) \rightarrow (u\rightarrow b)][\neg u \rightarrow (\varphi(a) \rightarrow \varphi(b))]. \end{align*} Suppose now that $\varphi(a)=1$ and $\varphi(b)=0$; then $\varphi(a) \rightarrow \varphi(b) =0$ and the conclusion follows from (U3). In any other case $\varphi(a) \rightarrow \varphi(b) =1$ and the conclusion is either obvious or follows from (U4). Finally, $f(1) =1$ and $$ f(0) = \neg u \cdot (\neg u \rightarrow 0) = \neg u \wedge 0 = 0. $$ This completes the proof. \end{proof} We are now ready to prove the following characterization result. \begin{theorem}\label{mth0} A finite bounded hoop $\alg A$ is finitely projective in the variety of bounded hoops if and only if there exists a homomorphism $\varphi\colon \alg A \longrightarrow \mathbf 2$. \end{theorem}
\begin{proof} We have already observed that the condition is necessary; we now show that it is sufficient. Let $\alg A$ be a finite bounded hoop, let $\varphi\colon \alg A \longrightarrow \mathbf 2$ be a homomorphism and let $\alg B$ be a finite bounded hoop such that $g\colon\alg B \longrightarrow \alg A$ is a surjective homomorphism. Then $\varphi g$ is a homomorphism from $\alg B $ to $\mathbf 2$. Let $\varphi'$ be the corresponding homomorphism from $\alg B$ to $\mathbf 2 \subseteq \alg B$. As in the proof of Theorem \ref{mth}, if $\theta = \op{ker}(g)$, then $F = 1/\theta$ is a filter of $\alg B$; since $\alg B$ is finite, $F$ must have a minimum $u$, which is necessarily idempotent. Observe also that $\varphi'(u)=1$; in fact $u \in F$, which means that $g(u) =g(1)$ and thus
$$
\varphi'(u) = \varphi g(u) = \varphi g(1) = \varphi(1) = 1.
$$
Let $f$ be the endomorphism of $\alg B$ (relative to $u$ and $\varphi'$) resulting from Lemma \ref{dzik}; then $f(b) = 1$ implies $u \rightarrow b =1$ and thus $b \in F$. Conversely if $b \in F$ then $$ f(b) = (u \rightarrow b)(\neg u \rightarrow \varphi'(b))\mathrel{\theta} (u \rightarrow 1)(\neg u \rightarrow \varphi'(1)) = 1. $$ Moreover, we can show that $f$ is idempotent. Indeed, if $\varphi'(x) = 0$, \begin{align*} f(f(x)) &= (u \to \neg\neg u(u \to x)) \neg\neg u = [(u \to \neg\neg u) \land (u \to (u \to x))]\neg\neg u \\ &= (u \to x) \neg\neg u = f(x), \end{align*} while if $\varphi'(x) = 1$, \begin{align*} f(f(x)) &= (u \to (u \to x)) = u \to x = f(x). \end{align*} Now we can argue as in Theorem \ref{mth} to conclude that $\alg A$ is a retract of $\alg B$ and the conclusion follows again from Proposition \ref{prop:retractproj1}. \end{proof}
\begin{corollary}\label{cor:boundedhoops}Let $\vv V$ be a locally finite variety of bounded hoops; then a finite $\alg A \in \vv V$ is projective in $\vv V$ if and only if there is a homomorphism $\varphi\colon \alg A \longrightarrow \mathbf 2$. \end{corollary}
The case of BL-algebras is managed in the usual way; since the join is definable by prelinearity all the arguments go through without change.
\begin{corollary}\label{corollary:projectiveBL}Let $\vv V$ be a locally finite variety of BL-algebras; then a finite $\alg A \in \vv V$ is projective in $\vv V$ if and only if there is a homomorphism $\varphi\colon \alg A \longrightarrow \mathbf 2$. \end{corollary}
Since every finite algebra $\alg A$ in a variety is a subdirect product of finite subdirectly irreducible algebras, and all the subdirect factors are homomorphic images of $\alg A$, we can sharpen our results further.
\begin{theorem}\label{theorem:projsubirr} Let $\vv V$ be a locally finite variety of bounded hoops or $\mathsf{BL}$-algebras such that every finite subdirectly irreducible in $\vv V$ has $\mathbf 2$ as homomorphic image. Then every finitely presented algebra in $\vv V$ is projective. \end{theorem}
The theorem above is (yet another) reason why every finitely presented (i.e., finite) Boolean algebra is projective: the variety of Boolean algebras is locally finite and the only subdirectly irreducible algebra is $\mathbf 2$. A more intriguing example is the following: a variety of $\mathsf{FL_{ew}}$-algebras is {\em Stonean} if it satisfies the equation $$ \neg x \vee \neg\neg x \approx 1. $$ It is a straightforward consequence of the characterization of the subdirectly irreducible $\mathsf{BL}$-algebras in \cite{AglianoMontagna2003} that a finite subdirectly irreducible algebra in a Stonean variety is of the form $\alg A \cong \mathbf 2 \oplus \alg B$, where $\alg B$ is a totally ordered hoop. Since $\alg B$ is a filter of $\alg A$, we can collapse it and get $\mathbf 2$ as a homomorphic image of $\alg A$. Hence, locally finite Stonean varieties of $\mathsf {BL}$-algebras fall under the scope of Theorem \ref{theorem:projsubirr}. Stonean $\mathsf{BL}$-algebras are a particular instance of varieties of residuated lattices with a so-called {\em Boolean retraction term} \cite{CT12}; these varieties will be studied in depth in a forthcoming paper \cite{AU}.
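For a concrete instance of a Stonean variety: every finite G\"odel chain satisfies the Stonean equation, and it is of the form $\mathbf 2 \oplus \alg B$ with $\alg B$ the filter of nonzero elements. A brute-force sketch, with our own encoding:

```python
# Check the Stonean equation  ~x \/ ~~x = 1  on a finite Godel chain
# {0, ..., TOP}: such a chain is 2 (+) B, with B the filter of nonzero
# elements.
TOP = 5
imp = lambda x, y: TOP if x <= y else y
neg = lambda x: imp(x, 0)

for x in range(TOP + 1):
    assert max(neg(x), neg(neg(x))) == TOP   # join is max on a chain

print("the", TOP + 1, "element Godel chain is Stonean")
```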
\section{Algebraic unification theory: an application}\label{section:unif}
The origin of unification theory is usually attributed to Julia Robinson \cite{Robinson1965}. The classical syntactic unification problem is as follows: given two terms $s,t$ (built from function symbols and variables), find a {\em unifier} for them, that is, a uniform replacement of the variables occurring in $s$ and $t$ by other terms that makes $s$ and $t$ identical. When the latter syntactical identity is replaced by equality modulo a given equational theory $E$, one speaks of $E$-unification. Unsurprisingly, $E$-unification can be considerably harder than syntactic unification, even when the theory $E$ is fairly well understood. To ease this problem, S. Ghilardi \cite{Ghilardi1997} proposed a new (equivalent) approach, which is the one we follow here. A unification problem for a variety $\vv V$ is a finitely presented algebra $\alg A$ in $\vv V$; a {\em solution} is a homomorphism $u\colon\alg A \longrightarrow \alg P$, where $\alg P$ is a projective algebra in $\vv V$. In this case $u$ is called a {\em unifier} for $\alg A$ and we say that $\alg A$ is {\em unifiable}. If $u_1, u_2$ are two different unifiers for an algebra $\alg A$ (with projective targets $\alg P_1$ and $\alg P_2$), we say that $u_1$ is {\em more general} than $u_2$ if there exists a homomorphism $m\colon \alg P_1 \longrightarrow \alg P_2$ such that $mu_1 = u_2$. The relation ``being less general than'' is a preordering on the unifiers of $\alg A$, so we can consider the associated equivalence relation; the equivalence classes (i.e., the unifiers that are ``equally general'') form a partially ordered set $U_\alg A$. It is customary to assign a type (unitary, finitary, infinitary, or nullary) to the algebra according to how many maximal elements $U_\alg A$ has; we are particularly interested in the case of {\em unitary type}, which happens if $U_\alg A$ has a maximum, i.e. 
there is only one maximal element which plays the role of a ``best solution'' to the unification problem, since all other ones can be obtained starting from it, by further substitutions. The type of a variety $\vv V$ is unitary if every unifiable algebra $\alg A \in \vv V$ has unitary type. In this case where the type of $\alg A$ is unitary then the maximum in $U_\alg A$ is called the {\em most general unifier} ({\em mgu} for short) of $\alg A$. We separate the case in which the {\em mgu} is of a special kind: we say that $\alg A$ has {\em strong unitary type} if its {\em mgu} is the identity. A variety $\vv V$ has strong unitary type if every unifiable algebra in $\vv V$ has strong unitary type.
The connection with the present paper is given by the following observation.
\begin{proposition}\label{prop:stunitary} \label{unitary} Let $\vv V$ be any variety; then the following are equivalent.
\begin{enumerate}
\item $\vv V$ has strong unitary unification type;
\item for any finitely presented algebra $\alg A \in \vv V$, $\alg A$ is unifiable if and only if it is projective.
\end{enumerate} \end{proposition}
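The proof is a routine verification; we sketch it for the reader's convenience. Assume (2) and let $\alg A$ be unifiable; then $\alg A$ is projective, so $id_{\alg A}\colon \alg A \longrightarrow \alg A$ is a unifier, and for every unifier $u\colon \alg A \longrightarrow \alg P$ we have $u = u\, id_{\alg A}$, so that $id_{\alg A}$ is the {\em mgu}; hence $\vv V$ has strong unitary type. Conversely, assume (1) and let $\alg A$ be unifiable; then its {\em mgu} is $id_{\alg A}$, whose target must be projective, and so $\alg A$ is projective; finally, every projective finitely presented algebra is unifiable, witnessed by $id_{\alg A}$ itself.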
In \cite{Ghilardi1997} S. Ghilardi implicitly used this fact to prove that any variety of Brouwerian semilattices, and any variety generated by a quasi-primal algebra, have strong unitary type. Any finitely presented (i.e., finite) Brouwerian semilattice is unifiable, since it has a homomorphism into $\{1\}$, which is the free algebra on the empty set; on the other hand, any finite Brouwerian semilattice is projective by Theorem \ref{mth}. The same holds (see the first section of this paper) for varieties generated by a quasi-primal algebra with no nontrivial minimal subalgebras (e.g., the variety of Boolean algebras). In case the variety is generated by a quasi-primal algebra with nontrivial minimal subalgebras, it is not true that every finite algebra is projective, but still every unifiable algebra is projective \cite{Ghilardi1997}.
In this paper we have shown that several interesting varieties are such that all their finitely presented algebras are projective, thereby showing that point (2) of Proposition \ref{unitary} holds. Moreover, in the case of bounded commutative residuated lattices (i.e., $\mathsf{FL_{ew}}$-algebras) we can rephrase Proposition \ref{prop:stunitary} in a more transparent way. In varieties of $\mathsf{FL_{ew}}$-algebras any unifiable algebra must have a surjective homomorphism onto the two-element algebra $\mathbf 2$ (Proposition \ref{lemma:2homimage}), and since $\mathbf 2$ is projective in every variety of $\mathsf{FL_{ew}}$-algebras (Lemma \ref{lemma:2retract}) we get:
\begin{proposition}\label{prop:zerostunitary}
\label{ground} For a variety $\vv V$ of $\mathsf{FL_{ew}}$-algebras the following are equivalent: \begin{enumerate} \item $\vv V$ has strong unitary type; \item for any finitely presented $\alg A \in \vv V$, $\alg A$ has $\mathbf 2$ as a homomorphic image if and only if $\alg A$ is projective. \end{enumerate} \end{proposition}
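A small brute-force computation (ours, purely illustrative and not part of the original development) shows how the criterion above can fail: the three-element {\L}ukasiewicz chain has no homomorphism onto $\mathbf 2$, hence it is neither unifiable nor projective in any variety of MV-algebras containing it. Elements are scaled to integers, so the chain $\{0,\frac{1}{2},1\}$ becomes $\{0,1,2\}$ with $x\oplus y=\min(\mathrm{top},x+y)$ and $\neg x=\mathrm{top}-x$.

```python
# Brute-force search for MV-homomorphisms between two finite Lukasiewicz
# chains, each represented as {0, 1, ..., top} with truncated addition
# and involutive negation.
from itertools import product

def homomorphisms(src, s_top, dst, d_top):
    """All maps h: src -> dst preserving 0, top, (+) and negation."""
    homs = []
    for values in product(dst, repeat=len(src)):
        h = dict(zip(src, values))
        if h[0] != 0 or h[s_top] != d_top:
            continue
        ok = all(h[min(s_top, x + y)] == min(d_top, h[x] + h[y])
                 for x in src for y in src)
        ok = ok and all(h[s_top - x] == d_top - h[x] for x in src)
        if ok:
            homs.append(h)
    return homs

L3, TWO = [0, 1, 2], [0, 1]
print(len(homomorphisms(L3, 2, TWO, 1)))   # no homomorphism L3 -> 2
print(len(homomorphisms(TWO, 1, TWO, 1)))  # just the identity on 2
```

The obstruction is the fixed point of negation: the middle element satisfies $\neg \frac{1}{2} = \frac{1}{2}$, and $\mathbf 2$ has no such element.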
Property (2) in Proposition \ref{ground} has also been called {\em groundedness}. D. Valota et al. (private communication) classified several subvarieties of $\mathsf{MTL}$ enjoying groundedness; we remark that in some cases their results overlap with ours. We get the following conclusions. \begin{theorem} The following varieties, and their corresponding logics, have strong unitary unification type: \begin{enumerate} \item all locally finite subvarieties of hoops and of HBCK-algebras; \item all locally finite subvarieties of bounded hoops and BL-algebras; \item cancellative hoops.
\end{enumerate} \end{theorem} \begin{proof}
The proof follows from Propositions \ref{prop:stunitary} and \ref{prop:zerostunitary} together with, respectively: Theorems \ref{mth} and \ref{thm:hbck}; Corollaries \ref{cor:boundedhoops} and \ref{corollary:projectiveBL}; and Proposition \ref{prop:cancproj}. \end{proof} In particular, all locally finite varieties of MV-algebras have strong unitary unification type, while the variety of all MV-algebras does not; in fact its unification type is nullary, the worst possible case \cite{MarraSpada2013}. Projective algebras in locally finite varieties of MV-algebras have been characterized in \cite{DiNolaGrigoliaLettieri2008}. \begin{theorem}\label{dinola}\cite{DiNolaGrigoliaLettieri2008} Let $\vv V$ be a locally finite variety of MV-algebras; then a finite algebra $\alg A \in \vv V$ is projective if and only if it is isomorphic to $\mathbf 2 \times \alg A'$ for some finite $\alg A' \in \vv V$. \end{theorem}
Using this characterization, we can get a simple alternative proof of the fact that locally finite varieties of MV-algebras have strong unitary unification type. \begin{proposition}
Every locally finite variety $\vv V$ of MV-algebras has strong unitary type. \end{proposition} \begin{proof} If $\alg A$ is finite and unifiable then there is a homomorphism $f\colon\alg A \longrightarrow \mathbf 2 \times \alg B$ for some finite algebra $\alg B \in \vv V$. Let $p_1$ be the first projection on the direct product and let $g=p_1f$; then the function $$ h(a) = \langle g(a), a\rangle $$ is a monomorphism of $\alg A$ into $\mathbf 2 \times \alg A$, which is projective by Theorem \ref{dinola}. Let us consider the second projection $p_2$ on $\mathbf 2 \times \alg A$. Then $p_2h = id_{\alg A}$, and thus
$\alg A$ is a retract of a projective algebra and therefore projective as well. Notice that if we project this monomorphism on the first coordinate we get a homomorphism from $\alg A$ to $\mathbf 2$ as predicted by Proposition \ref{prop:zerostunitary}. \end{proof}
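The retraction in the proof above can be checked mechanically on a small example. The sketch below (our own illustration) takes $\alg A = \mathbf 2 \times \text{\L}_3$, the product of the two-element and three-element {\L}ukasiewicz chains scaled to integers, with $g$ the first projection standing in for the homomorphism $p_1 f$; it verifies that $h(a)=\langle g(a),a\rangle$ preserves the MV-operations and that $p_2 h = id_{\alg A}$.

```python
# Check that h(a) = (g(a), a) is a homomorphic section of the second
# projection p2, for A = 2 x L3 with componentwise MV-operations.
# Product elements are flattened to integer tuples.
from itertools import product

TWO, L3 = [0, 1], [0, 1, 2]
A = list(product(TWO, L3))               # elements of A = 2 x L3
TOP = (1, 2)                             # top element of A

def oplus(x, y, top):                    # truncated addition, componentwise
    return tuple(min(t, u + v) for t, u, v in zip(top, x, y))

def neg(x, top):                         # negation, componentwise
    return tuple(t - u for t, u in zip(top, x))

def g(a):                                # homomorphism A -> 2 (first projection)
    return a[0]

def h(a):                                # a |-> (g(a), a), flattened to a tuple
    return (g(a),) + a

def p2(b):                               # second projection of 2 x A
    return b[1:]

HTOP = (1,) + TOP                        # top of 2 x A (flattened)
assert all(p2(h(a)) == a for a in A)                       # p2 . h = id_A
assert all(h(oplus(x, y, TOP)) == oplus(h(x), h(y), HTOP)  # h preserves oplus
           for x in A for y in A)
assert all(h(neg(x, TOP)) == neg(h(x), HTOP) for x in A)   # h preserves neg
print("h is a homomorphic section of p2 on", len(A), "elements")
```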
\section{Conclusions}
In this paper we have investigated commutative integral residuated lattices mainly using some well established constructions, such as the ordinal sum, obtaining results on varieties with some strong structural properties, such as being divisible. In particular, we have shown interesting varieties whose finitely presented algebras are all projective. This by no means exhausts the investigation of projectivity in commutative residuated lattices and $\mathsf{FL_{ew}}$-algebras. Here we will simply give a preview of what we plan to do next.
The topic of commutative residuated lattices with a Boolean retraction term is promising; there are some general results, which will appear in \cite{AU}. Moreover, in \cite{Ghilardi1999} S. Ghilardi investigated unification in Heyting algebras, proving that they have finitary (but not unitary) type; in particular he proved that the variety of Stonean Heyting algebras has unitary type (and it can be deduced that its type is not strong), and that it is the largest subvariety of Heyting algebras whose type is unitary. The reader should keep in mind that the results in \cite{Ghilardi1999} are obtained via syntactical methods; though the symbolic and algebraic settings are equivalent, the translation of logical proofs into algebraic ones is anything but straightforward. As a matter of fact, we believe that a careful algebraic analysis of those proofs could shed some light on unification in Stonean residuated lattices, and possibly even on commutative residuated lattices with a Boolean retraction term.
We intend to address these problems (and more) in the immediate future. \section*{Funding} This work has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 890616 awarded to S. Ugolini.
\end{document}
\begin{document}
\title{Two-Photon Algebra Eigenstates: A Unified Approach to Squeezing} \author{{\large\sc C. Brif} \thanks{Email: phr65bc@phys1.technion.ac.il}} \address{Department of Physics, Technion -- Israel Institute of Technology, Haifa 32000, Israel}
\maketitle \vspace*{0.3cm} \centerline{to appear in {\sc Annals of Physics}} \vspace*{-0.7cm}
\begin{abstract} We use the concept of the algebra eigenstates that provides a unified description of the generalized coherent states (belonging to different sets) and of the intelligent states associated with a dynamical symmetry group. The formalism is applied to the two-photon algebra and the corresponding algebra eigenstates are studied by using the Fock-Bargmann analytic representation. This formalism yields a unified analytic approach to various types of single-mode photon states generated by squeezing and displacing transformations.
\end{abstract}
\section{Introduction}
Coherent states (CS) associated with various dynamical symmetry groups are important in many problems of quantum physics \cite{KlSk,Per86,Gil:rev}. Actually, there are three distinct ways in which CS for a Lie group can be defined \cite{Gil:rev}.
In the general group-theoretic approach developed by Perelomov \cite{Per} and Gilmore \cite{Gil}, the CS are generated by the action of group elements on a reference state of a group representation Hilbert space. These states (called the generalized CS) have a number of remarkable properties that make them very useful in the description of many quantum phenomena \cite{KlSk,Per86,Gil:rev}. The most important features of the coherent-state systems are their overcompleteness and their invariance under the action of group representation operators. The latter property means that the CS transform among themselves during the evolution governed by Hamiltonians for which the corresponding Lie group is the dynamical symmetry group.
The second approach deals with states defined as eigenstates of a lowering group generator. Attention was mainly paid to eigenstates of the lowering generator $K_{-}$ for different realizations of SU(1,1) \cite{BG,EOCS,Agar88,Buz90,BBA:qo,I:qso}.
The third way in which CS can be defined is associated with the optimization of uncertainty relations for Hermitian generators of a group \cite{Schro,Arag,RBA,NiTr,BHY_YH,PrAg,Trif,Puri,BBA:jpa,GeGr}. States that minimize uncertainty relations are called intelligent states (IS) or minimum-uncertainty states. Ordinary IS \cite{Arag} provide an equality in the Heisenberg uncertainty relation while generalized IS \cite{Trif,Puri} do so in the Robertson uncertainty relation \cite{Robert}. The IS are determined by an eigenvalue equation of a certain type \cite{Jackiw,Trif,Puri}, and the lowering-generator eigenstates are in fact a particular case of the IS, corresponding to equal uncertainties of two Hermitian generators.
In the special case of the Heisenberg-Weyl group $H_{3}$ \cite{Weyl} whose generators are the boson annihilation and creation operators $a$ and $a^{\dagger}$ and the identity operator $I$, the first and second definitions coincide. The Glauber CS
$|\alpha\rangle$ \cite{Gla} can be defined as eigenstates of the lowering generator, $a|\alpha\rangle = \alpha|\alpha\rangle$, and also as states generated by the displacement operator $D(\alpha)$
(representing group elements) acting on the vacuum state $|0\rangle$,
\begin{equation}
|\alpha\rangle = D(\alpha) |0\rangle = \exp(\alpha a^{\dagger} -
\alpha^{\ast} a) |0\rangle . \label{1.1}
\end{equation}
At the same time, the Glauber CS $|\alpha\rangle$ are the IS for the field quadratures $X_{1}=(a^{\dagger}+a)/2$ and $X_{2}=i(a^{\dagger}-a)/2$, i.e., they minimize the Heisenberg uncertainty relation $\Delta X_{1} \Delta X_{2} \geq 1/4$. The
uncertainties are equal, $\Delta X_{1} = \Delta X_{2} = 1/2$, when the expectation values are calculated for the $|\alpha\rangle$
states. In this sense, the CS $|\alpha\rangle$ are a special case of the canonical squeezed states \cite{SS}. For the squeezed states, the fluctuations in one quadrature are reduced at the expense of increased fluctuations in the other (conjugate) quadrature. The canonical squeezed states can be considered as the generalized IS for the Heisenberg-Weyl group \cite{NiTr,Trif}.
For more complicated groups, e.g., for SU(1,1), the different definitions lead to distinct states. The Perelomov CS for the SU(1,1) Lie group, obtained by the action of the group elements on the reference state \cite{Per86}, and the Barut-Girardello states, defined as the eigenstates of the SU(1,1) lowering generator $K_{-}$ \cite{BG}, are quite different. However, the concept of squeezing can be naturally extended to the SU(1,1) group, and the squeezing properties of the SU(1,1) ordinary and generalized IS have been widely discussed \cite{WodEb,Ger85_88,Hil87_89,Agar88,Buz90, NiTr,BHY_YH,PrAg,Trif,Puri,BBA:jpa,GeGr}.
In Perelomov's definition, different sets of the CS are obtained for different choices of the reference state. The most commonly used sets of the CS (the standard sets, as we refer to them) correspond to the cases when an extreme state of the representation Hilbert space (e.g., the vacuum state of the quantized field mode) is chosen as the reference state \cite{Gil:rev}. In general, this choice of the reference state leads to sets consisting of states with properties closest to those of classical states \cite{Per86}. On the other hand, the IS show a variety of nonclassical properties, such as squeezing and sub-Poissonian photon statistics. In the case of the SU(1,1) Lie group, the standard set of Perelomov's CS and the set of the ordinary IS have an intersection \cite{WodEb,BBA:jpa}. Both these types of states form subsets of the generalized IS \cite{Trif}.
In this paper we develop a formalism that provides a unified description of different types of coherent and intelligent states. We introduce the concept of algebra eigenstates (AES) which are defined for an arbitrary Lie group as eigenstates of elements of the corresponding complex Lie algebra. We show that different sets of the generalized CS (both standard and nonstandard) can be equivalently defined as the AES. Moreover, the ordinary and generalized IS for Hermitian generators of a Lie group form a subset of the AES associated with this group. On the basis of the algebra-eigenstate formalism, we use analytic methods that enable us to treat different types of states (including the standard and nonstandard CS and the IS) in a unified way. This unified description is also applicable for investigating more complicated states obtained by the action of unitary group transformations on the IS. Such states can be considered as (nonstandard) generalized CS with the reference state being an intelligent state.
In the present work we apply the general formalism to the two-photon group $H_{6}$ \cite{Gil:rev} that enables us to obtain the unified description of single-mode photon states generated by displacing and squeezing transformations. We use the Fock-Bargmann analytic representation \cite{FB} based on the standard set of the Glauber CS. In this analytic representation the eigenvalue equation that determines the two-photon AES becomes a linear homogeneous differential equation. Then the powerful theory of analytic functions is applied for studying various types of photon states and relations between them.
In Sec.\ 2 we develop the group-theoretic formalism of the AES for an arbitrary Lie group. The Fock-Bargmann representation of the two-photon AES is derived in Sec.\ 3. By using this representation, we find entire analytic functions representing different types of photon states. In Sec.\ 4 we consider displaced and squeezed Fock states. The superpositions of the Glauber CS (the Schr\"{o}dinger-cat states) and their squeezed and displaced versions are discussed in Sec.\ 5. The two-photon IS for the SU(1,1) subgroup of $H_{6}$ are considered in Sec.\ 6. We introduce the states which are generated by squeezing and displacement of the IS. We also touch on the question of the production of various two-photon AES.
\section{The general theory of the algebra eigenstates}
Let $G$ be an arbitrary Lie group and $T$ its unitary irreducible representation acting on the Hilbert space ${\cal H}$. By choosing
a fixed normalized reference state $|\Psi_{0}\rangle \in {\cal H}$,
one can define the system of states $\{ |\Psi_{g}\rangle \}$,
\begin{equation}
|\Psi_{g}\rangle = T(g) |\Psi_{0}\rangle , \mbox{\hspace{0.8cm}} g \in G, \label{2.1}
\end{equation} which is called the coherent-state system.
The isotropy (or maximum-stability) subgroup $H \subset G$ consists of all the group elements $h$ that leave the reference state invariant up to a phase factor,
\begin{equation}
T(h) |\Psi_{0}\rangle = e^{i\phi(h)} |\Psi_{0}\rangle ,
\mbox{\hspace{0.8cm}} | e^{i\phi(h)} | =1 , \mbox{\hspace{0.3cm}} h \in H . \label{2.2}
\end{equation} For every element $g \in G$, there is a unique decomposition of $g$ into a product of two group elements, one in $H$ and the other in the quotient (or coset) space $G/H$,
\begin{equation} g = \Omega h , \mbox{\hspace{0.8cm}} g \in G, \;\; h \in H, \;\; \Omega \in G/H . \label{2.3}
\end{equation} It is clear that group elements $g$ and $g'$ with different $h$ and $h'$ but with the same $\Omega$ produce the coherent states which
differ only by a phase factor: $|\Psi_{g}\rangle = e^{i\delta}
|\Psi_{g'}\rangle$, where $\delta =\phi(h) -\phi(h')$. Therefore a
coherent state $|\Psi_{\Omega}\rangle$ is determined by a point $\Omega = \Omega(g)$ in the quotient space $G/H$.
One can see from this group-theoretic procedure for the construction of the generalized CS that the choice of the reference state
$|\Psi_{0}\rangle$ firmly determines the structure of the coherent-state set. An important class of coherent-state sets corresponds to the quotient spaces $G/H$ which are homogeneous K\"{a}hlerian manifolds. Then $G/H$ can be considered as the phase space of a classical dynamical system, and the mapping $\Omega
\rightarrow |\Psi_{\Omega}\rangle \langle\Psi_{\Omega}|$ is the quantization for this system \cite{Ber}. It means that the quantization is performed via the CS \cite{Per86}.
Let us consider the Lie algebra $\frak{G}$ of the group $G$ (here and in what follows we use the term algebra for the complex extension of the real algebra, i.e., the set of all linear combinations of elements of the real algebra with complex coefficients). The isotropy subalgebra ${\frak B}$ is defined as the set of elements $\{ {\frak b} \}$, ${\frak b} \in {\frak G}$, such that
\begin{equation}
{\frak b} |\Psi_{0}\rangle = \lambda |\Psi_{0}\rangle . \label{2.4}
\end{equation} Here $\lambda$ is a complex eigenvalue. If the isotropy subgroup $H$ is nontrivial, then the isotropy subalgebra ${\frak B}$ will be nontrivial too. By acting with $T(g)$ on both sides of Eq.\ (\ref{2.4}), we obtain
\begin{equation}
T(g) {\frak b} T^{-1}(g) T(g)|\Psi_{0}\rangle =
\lambda T(g)|\Psi_{0}\rangle . \label{2.5}
\end{equation} This leads to the eigenvalue equation
\begin{equation}
{\frak g} |\Psi_{g}\rangle = \lambda |\Psi_{g}\rangle , \label{2.6}
\end{equation}
where $|\Psi_{g}\rangle = T(g) |\Psi_{0}\rangle$ is a coherent state, and the operator ${\frak g} = T(g) {\frak b} T^{-1}(g)$ is an element of the algebra $\frak{G}$. We see that the generalized CS are the eigenstates of the elements of the complex algebra.
Now, let us choose a basis $\{ {\frak K}_{1},{\frak K}_{2},\ldots,{\frak K}_{p} \}$ for a $p$-dimensional Lie algebra $\frak{G}$. Then an element of the complex algebra can be written as the Euclidean scalar product in the $p$-dimensional vector space,
\begin{equation} {\frak g} = \bbox{\beta} \cdot\bbox{{\frak K}} = \beta_{1}{\frak K}_{1} + \beta_{2}{\frak K}_{2} + \cdots + \beta_{p}{\frak K}_{p} , \label{2.7}
\end{equation} where $\beta_{1},\beta_{2},\ldots,\beta_{p}$ are arbitrary complex coefficients. Then the AES are defined by the eigenvalue equation:
\begin{equation} \bbox{\beta}\cdot\bbox{{\frak K}}
|\Psi(\lambda,\bbox{\beta})\rangle =
\lambda |\Psi(\lambda,\bbox{\beta})\rangle . \label{2.8}
\end{equation} The comparison of Eqs.\ (\ref{2.6}) and (\ref{2.8}) shows that the generalized CS can be defined as the AES, and a specific set of the CS is obtained for the appropriate choice of the parameters
$\beta$'s. More precisely, let a state $|\Psi(\lambda,\bbox{\beta}) \rangle$ belong to a specific set of the CS corresponding to the
reference state $|\Psi_{0}\rangle$ that satisfies Eq.\ (\ref{2.4}). Then the parameters $\beta$'s must satisfy the condition $\bbox{\beta}\cdot\bbox{{\frak K}} = T(g) {\frak b} T^{-1}(g)$ for some $g \in G$. Note that the definition (\ref{2.8}) of the AES does not depend
explicitly on the choice of the reference state $|\Psi_{0}\rangle$. Hence it is possible to treat the CS defined as the AES in a quite general way, regardless of the set to which they belong.
An important property of the generalized CS is the identity resolution:
\begin{equation}
\int d\mu(\Omega) |\Psi_{\Omega}\rangle \langle\Psi_{\Omega}| = I , \label{2.9}
\end{equation} where $I$ is the identity operator in the Hilbert space ${\cal H}$, and $d\mu(\Omega)$ is the invariant measure in the homogeneous
quotient space $G/H$. Then any state $|\Psi\rangle \in {\cal H}$ can
be expanded in the coherent-state basis $|\Psi_{\Omega}\rangle$,
\begin{equation}
|\Psi\rangle = \int d\mu(\Omega) f(\Omega) |\Psi_{\Omega}\rangle , \label{2.10}
\end{equation}
where $f(\Omega) = \langle\Psi_{\Omega}|\Psi\rangle$, and
\begin{equation}
\langle\Psi|\Psi\rangle = \int d\mu(\Omega) |f(\Omega)|^{2} . \label{2.11}
\end{equation} If we restrict the consideration to the square-integrable Hilbert space then the integral in (\ref{2.11}) must be convergent. Since the CS are not orthogonal to each other, the CS themselves can be expanded in their own basis.
Now, let us represent all the AES in the standard coherent-state basis. In what follows we will consider only the simplest cases in which the quotient space $G/H$ corresponding to the standard set is a homogeneous K\"{a}hlerian manifold that can be parametrized by a single complex number
$z$, so we write the standard generalized CS $|\Psi_{\Omega}\rangle$
in the form $|z\rangle$. Then Eq.\ (\ref{2.10}) reads
\begin{equation}
|\Psi(\lambda,\bbox{\beta})\rangle = \int d\mu(z)
f(\lambda,\bbox{\beta};z^{\ast}) |z\rangle . \label{2.12}
\end{equation}
The function $f(\lambda,\bbox{\beta};z) = \langle z^{\ast}| \Psi(\lambda,\bbox{\beta})\rangle$ can be decomposed into two factors: $f(\lambda,\bbox{\beta};z) = {\cal N}(z) \Lambda(\lambda,\bbox{\beta};z)$. Here ${\cal N}(z)$ is a normalization factor such that $\Lambda(\lambda,\bbox{\beta};z)$ is an entire analytic function of $z$ defined on the whole complex plane or on part of it. Such analytic representations are well studied \cite{FB,Per86} for the standard coherent-state bases of the simplest Lie groups. In these simplest cases the elements of the Lie algebra act in the Hilbert space of entire analytic functions as linear differential operators. Then the eigenvalue equation (\ref{2.8}) is converted into a linear homogeneous differential equation. By solving this equation, we obtain the entire analytic functions $\Lambda(\lambda,\bbox{\beta};z)$ representing the AES
$|\Psi(\lambda,\bbox{\beta})\rangle$ in the standard coherent-state
basis $|z\rangle$.
The standard set of the CS is a particular case of the wide system of the AES. Other particular cases of the AES are the sets of the ordinary and generalized IS. Any two quantum observables (Hermitian operators in the Hilbert space) $A$ and $B$ obey the Robertson uncertainty relation \cite{Robert}
\begin{equation} (\Delta A)^{2} (\Delta B)^{2} \geq \frac{1}{4} ( \langle C \rangle^{2} + 4 \sigma_{AB}^{2} ) , \mbox{\hspace{0.7cm}} C=-i[A,B] , \label{2.13}
\end{equation} where the variance of $A$ is $(\Delta A)^{2} = \langle A^{2} \rangle - \langle A \rangle^{2}$, $(\Delta B)^{2}$ is defined similarly, the covariance of $A$ and $B$ is $\sigma_{AB} = \frac{1}{2} \langle AB+BA \rangle - \langle A \rangle\langle B \rangle$, and the expectation values are taken over an arbitrary state in the Hilbert space. When the covariance of $A$ and $B$ vanishes, $\sigma_{AB} = 0$, the Robertson uncertainty relation reduces to the Heisenberg uncertainty relation,
\begin{equation} (\Delta A)^{2} (\Delta B)^{2} \geq \frac{1}{4} \langle C \rangle^{2} . \label{2.14}
\end{equation} The states which provide an equality in the Heisenberg uncertainty relation (\ref{2.14}) are called the ordinary IS \cite{Arag} and the states which minimize the Robertson uncertainty relation (\ref{2.13}) are called the generalized IS \cite{Trif}. It is clear that the ordinary IS form a subset of the generalized IS. The generalized IS for operators $A$ and $B$ are determined from the eigenvalue equation \cite{Trif,Puri}
\begin{equation}
(\eta A + i B) |\lambda,\eta\rangle
= \lambda |\lambda,\eta\rangle , \label{2.15}
\end{equation} where the parameter $\eta$ is an arbitrary complex number, and $\lambda$ is a complex eigenvalue. For the particular case of real $\eta$, the eigenvalue equation (\ref{2.15}) determines the ordinary IS for operators $A$ and $B$. Then the equation can be written in the form \cite{Jackiw}
\begin{equation}
(A + i\gamma B) |\lambda,\gamma\rangle =
\lambda |\lambda,\gamma\rangle , \label{2.16}
\end{equation} where $\gamma$ is a real parameter. By comparing Eqs.\ (\ref{2.15}) and (\ref{2.16}) with Eq.\ (\ref{2.8}), we see that the IS for any two Hermitian group generators form a subset of the AES of the group.
The generalized IS for the quadratures $X_{1}$ and $X_{2}$ coincide with the canonical squeezed states \cite{Trif}. The concept of squeezing is naturally related also to the IS associated with the SU(2) and SU(1,1) Lie groups \cite{WodEb,Ger85_88,Hil87_89, Agar88,Buz90,NiTr,BHY_YH,PrAg,Trif,Puri,BBA:jpa,GeGr}. In recent years there has been great interest in the IS. The SU(2) and SU(1,1) IS have been shown recently to be useful for improving the accuracy of interferometric measurements \cite{HiMl}. The investigation of the AES yields the most complete information on the IS for generators of the corresponding Lie group. It is also possible to consider the states generated by the action of unitary group transformations on the IS. The most convenient way to examine different subsets of the AES and relations between them is via the analytic representation of the AES in the standard coherent-state basis. In the present work the algebra-eigenstate method is applied to the two-photon group $H_{6}$ whose unitary transformations squeeze and displace single-mode photon states.
\section{The Fock-Bargmann representation of the two-photon algebra eigenstates}
The theoretical analysis \cite{SS,SS1} and experimental realization \cite{SSe1,SSe2,SSe3} of squeezed states continue to attract considerable attention \cite{SS2}. Much of the work so far has been concerned with the single-mode case, whose group-theoretic basis lies in the two-photon Lie group $H_{6}$ \cite{Gil:rev}. The corresponding Lie algebra is spanned by the six operators $\{N, a^{2}, a^{\dagger 2}, a, a^{\dagger}, I \}$,
\begin{equation} \begin{array}{c} {[a^{2},a^{\dagger 2}] = 4N + 2I} , \;\;\;\;\;\;\;\; {[a,a^{\dagger}] = I} , \\ {[a^{\dagger 2},a] = -2 a^{\dagger}} , \;\;\;\;\;\;\;\; {[a^{2},a^{\dagger}] = 2 a} , \\ {[N,a^{\dagger 2}] = 2 a^{\dagger 2}} , \;\;\;\;\;\;\;\; {[N,a^{2}] = -2 a^{2}} , \\ {[N,a^{\dagger}] = a^{\dagger}} , \;\;\;\;\;\;\;\; {[N,a] = - a} , \end{array} \label{3.1}
\end{equation} where $N = a^{\dagger} a$ is the number operator. All the other commutation relations are zero. The unified group-theoretic description of various types of states associated with the $H_{6}$ transformations can be obtained by means of the algebra-eigenstate method. This provides the analytic representation of single-mode photon states generated by squeezing and displacement group operators.
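The commutation relations (\ref{3.1}) can be spot-checked numerically. The sketch below (our own illustration, with an assumed truncation dimension) represents the operators as matrices in a truncated Fock basis and compares both sides of each relation away from the truncation boundary, where cutoff artifacts appear.

```python
# Numerical check of the H6 commutation relations using operators
# truncated to a D-dimensional Fock space; truncation spoils the
# relations only in the last rows/columns, so we compare the
# upper-left K x K block.
import numpy as np

D, K = 30, 25                                   # truncation and safe block
a = np.diag(np.sqrt(np.arange(1, D)), k=1)      # annihilation operator
ad = a.conj().T                                 # creation operator
N, I = ad @ a, np.eye(D)                        # number and identity

def comm(A, B):
    return A @ B - B @ A

relations = [
    (comm(a @ a, ad @ ad), 4 * N + 2 * I),
    (comm(a, ad),          I),
    (comm(ad @ ad, a),     -2 * ad),
    (comm(a @ a, ad),      2 * a),
    (comm(N, ad @ ad),     2 * ad @ ad),
    (comm(N, a @ a),       -2 * a @ a),
    (comm(N, ad),          ad),
    (comm(N, a),           -a),
]
print(all(np.allclose(lhs[:K, :K], rhs[:K, :K]) for lhs, rhs in relations))
```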
The representation Hilbert space of $H_{6}$ is the Fock space of the quantum harmonic oscillator. The orthonormal basis in this
space is the Fock basis of the number eigenstates $|n\rangle$
$(n=0,1,\ldots,\infty)$. For any Fock state $|n\rangle$, the isotropy subgroup is U(1)$\otimes$U(1) with the algebra spanned by $\{ N,I \}$. The isotropy subgroup consists of all group elements
$h$ of the form $h = \exp(i\delta N + i\varphi I)$. Thus $h|n\rangle
= \exp(i\delta n + i\varphi ) |n\rangle$. The oscillator group $H_{4}$ is a subgroup of $H_{6}$. The corresponding solvable Lie algebra is spanned by the four operators $\{N,a,a^{\dagger},I \}$. The quotient space $H_{4}/$U(1)$\otimes$U(1) can be parametrized by an arbitrary complex number $\alpha$. Then an element $\Omega\in H_{4}/$U(1)$\otimes$U(1) can be written as the displacement operator, $\Omega \equiv D(\alpha) = \exp(\alpha a^{\dagger} - \alpha^{\ast} a)$. Note that the same quotient space and hence the same set of the CS is obtained also for the Heisenberg-Weyl group $H_{3}$. This is a subgroup of $H_{4}$ ($H_{3} \subset H_{4} \subset H_{6}$), and the nilpotent Lie algebra corresponding to $H_{3}$ is spanned by the three operators $\{ a, a^{\dagger}, I \}$. The quotient space $H_{3}/$U(1) is the same as the space $H_{4}/$U(1)$\otimes$U(1).
The standard Glauber set of the CS is obtained when the vacuum
state $|0\rangle$ is chosen as the reference state,
\begin{equation}
|\alpha\rangle = D(\alpha) |0\rangle = e^{-|\alpha|^{2}/2}
\sum_{n=0}^{\infty} \frac{\alpha^{n}}{\sqrt{n!}} |n\rangle . \label{3.2}
\end{equation}
For any state $|\Psi\rangle = \sum_{n=0}^{\infty} c_{n} |n\rangle$ in the Hilbert space, one can construct the entire analytic function \cite{FB}
\begin{equation}
f(\alpha) = e^{|\alpha|^{2}/2} \langle \alpha^{\ast} | \Psi \rangle = \sum_{n=0}^{\infty} c_{n} \frac{\alpha^{n}}{\sqrt{n!}} . \label{3.3}
\end{equation} Then the identity resolution, $(1/\pi) \int d^{2}\! \alpha \,
|\alpha\rangle \langle\alpha| = I$, can be used to expand the state $|\Psi\rangle$ in the coherent-state basis:
\begin{equation}
|\Psi\rangle = \frac{1}{\pi} \int d^{2}\! \alpha \,
e^{-|\alpha|^{2}/2} f(\alpha^{\ast}) |\alpha\rangle . \label{3.5}
\end{equation} It is customary in quantum mechanics to restrict the Hilbert space to consist of normalizable states that satisfy the condition
\begin{equation}
\langle \Psi | \Psi \rangle = \frac{1}{\pi} \int d^{2}\! \alpha \,
e^{-|\alpha|^{2}} |f(\alpha^{\ast})|^{2} < \infty . \label{3.6}
\end{equation} The analytic representation (\ref{3.3}) is known as the Fock-Bargmann representation \cite{FB}. The Glauber coherent state
$|\upsilon\rangle$ is represented by the function
\begin{equation}
{\cal F}(\upsilon;\alpha) = e^{|\alpha|^{2}/2}
\langle \alpha^{\ast} | \upsilon \rangle
= e^{-|\upsilon|^{2}/2} e^{\upsilon\alpha} . \label{3.7}
\end{equation} The generators of $H_{6}$ act in the Hilbert space of entire analytic functions $f(\alpha)$ as linear differential operators:
\begin{equation} \begin{array}{c}
a = \displaystyle{ \frac{d}{d\alpha} } , \;\;\;\;\;\;\;\; a^{\dagger} = \alpha , \;\;\;\;\;\;\;\; I = 1 , \\ N = \alpha \displaystyle{ \frac{d}{d\alpha} } , \;\;\;\;\;\;\;\; a^{2} = \displaystyle{ \frac{d^{2}}{d\alpha^{2}} } , \;\;\;\;\;\;\;\; a^{\dagger 2} = \alpha^{2} . \end{array} \label{3.8}
\end{equation}
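These differential-operator actions are easy to verify symbolically. The following sketch (ours, purely illustrative) checks that the coherent-state function (\ref{3.7}), stripped of its constant normalization, is an eigenfunction of $a = d/d\alpha$ with eigenvalue $\upsilon$, and that $[a, a^{\dagger}] = I$ when both sides act on an arbitrary function.

```python
# Symbolic check of the Fock-Bargmann actions (3.8): a acts as
# d/d(alpha) and a^dagger as multiplication by alpha.
import sympy as sp

alpha, upsilon = sp.symbols('alpha upsilon')
f = sp.Function('f')(alpha)

# The coherent-state function (3.7), up to its constant normalization,
# is an eigenfunction of a = d/d(alpha) with eigenvalue upsilon.
F = sp.exp(upsilon * alpha)
assert sp.simplify(sp.diff(F, alpha) - upsilon * F) == 0

# [a, a^dagger] f = (alpha f)' - alpha f' = f, i.e. [a, a^dagger] = I.
commutator = sp.diff(alpha * f, alpha) - alpha * sp.diff(f, alpha)
assert sp.simplify(commutator - f) == 0
print("Fock-Bargmann checks passed")
```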
The two-photon AES are determined by the eigenvalue equation
\begin{equation} ( \beta_{1} N + \beta_{2} a^{2} + \beta_{3} a^{\dagger 2} +
\beta_{4} a + \beta_{5} a^{\dagger} ) |\lambda,\bbox{\beta}\rangle
= \lambda |\lambda,\bbox{\beta}\rangle . \label{3.9}
\end{equation}
The AES $|\lambda,\bbox{\beta}\rangle$ are represented by the function
\begin{equation}
\Lambda(\lambda,\bbox{\beta};\alpha) = e^{|\alpha|^{2}/2}
\langle \alpha^{\ast} | \lambda,\bbox{\beta} \rangle , \label{3.10}
\end{equation} and in the Fock-Bargmann representation the eigenvalue equation (\ref{3.9}) becomes the second-order linear homogeneous differential equation
\begin{equation} \beta_{2} \frac{d^{2} \Lambda}{d \alpha^{2}} + ( \beta_{1} \alpha + \beta_{4} ) \frac{d\Lambda}{d\alpha} + ( \beta_{3} \alpha^{2} + \beta_{5} \alpha - \lambda ) \Lambda = 0 . \label{3.11}
\end{equation} By using the transformation
\begin{equation} \Lambda(\lambda,\bbox{\beta};\alpha) = \exp \left( \frac{ \Delta -\beta_{1} }{ 4\beta_{2} } \alpha^{2} \right) T(\lambda,\bbox{\beta};\alpha) , \label{3.12}
\end{equation} we get the equation with coefficients that are linear in $\alpha$,
\begin{equation} \beta_{2} \frac{d^{2} T}{d \alpha^{2}} + ( \Delta \alpha + \beta_{4} ) \frac{d T}{d\alpha} + \left[ \sigma \alpha + \mbox{\small{$\frac{1}{2}$}} (\Delta-\beta_{1}) - \lambda \right] T = 0 , \label{3.13}
\end{equation} where
\begin{eqnarray} & & \Delta^{2} \equiv \beta_{1}^{2}-4\beta_{2}\beta_{3} , \label{3.14} \\ & & \sigma \equiv \beta_{4}\frac{\Delta-\beta_{1}}{2\beta_{2}} + \beta_{5} . \label{3.15}
\end{eqnarray} Note the double-valuedness of $\Delta$. Equation (\ref{3.13}) can be transformed into the Kummer equation for the confluent hypergeometric function or into the Bessel equation, depending on the values of the parameters \cite{Erd}.
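The reduction from Eq.\ (\ref{3.11}) to Eq.\ (\ref{3.13}) can be checked symbolically. The sketch below (illustrative only; $\beta_{3}$ is eliminated via $\Delta^{2}=\beta_{1}^{2}-4\beta_{2}\beta_{3}$) confirms that the Gaussian substitution (\ref{3.12}) removes the quadratic coefficients:

```python
import sympy as sp

alpha, b1, b2, b4, b5, lam, Delta = sp.symbols('alpha b1 b2 b4 b5 lam Delta')
T = sp.Function('T')(alpha)

b3 = (b1**2 - Delta**2) / (4 * b2)             # eliminate beta_3 via Eq. (3.14)
sigma = b4 * (Delta - b1) / (2 * b2) + b5      # Eq. (3.15)
g = sp.exp((Delta - b1) / (4 * b2) * alpha**2) # Gaussian factor of Eq. (3.12)
Lam = g * T

# Eq. (3.11) applied to Lambda, divided by the Gaussian factor
lhs = (b2 * sp.diff(Lam, alpha, 2) + (b1 * alpha + b4) * sp.diff(Lam, alpha)
       + (b3 * alpha**2 + b5 * alpha - lam) * Lam) / g

# Eq. (3.13) applied to T
rhs = (b2 * sp.diff(T, alpha, 2) + (Delta * alpha + b4) * sp.diff(T, alpha)
       + (sigma * alpha + (Delta - b1) / 2 - lam) * T)

assert sp.simplify(lhs - rhs) == 0
```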
In the most general case, $\beta_{2} \neq 0$, $\Delta \neq 0$, two independent solutions of Eq.\ (\ref{3.13}) are given by \cite{Erd} \begin{mathletters} \label{3.16}
\begin{eqnarray} & & T_{1}(\lambda,\bbox{\beta};\alpha) = \exp\left( -\frac{\sigma}{\Delta} \alpha \right) {}_{1}\! F_{1} \left( d
\left| \frac{1}{2} \left| -\frac{\Delta}{2\beta_{2}} (\alpha - \mu_{\Delta})^{2} \right. \right. \right) , \label{3.16a} \\ & & T_{2}(\lambda,\bbox{\beta};\alpha) = \sqrt{ -\frac{\Delta}{2\beta_{2}}}\, (\alpha - \mu_{\Delta}) \exp\left( -\frac{\sigma}{\Delta} \alpha \right)
{}_{1}\! F_{1} \left( d + \frac{1}{2} \left| \frac{3}{2}
\left| -\frac{\Delta}{2\beta_{2}} (\alpha - \mu_{\Delta})^{2} \right. \right. \right) , \label{3.16b}
\end{eqnarray} \end{mathletters} where
\begin{equation} \mu_{\Delta} \equiv (2\beta_{2}\beta_{5} - \beta_{1}\beta_{4})/ \Delta^{2}, \label{3.17}
\end{equation}
\begin{equation} d \equiv \frac{1}{2\Delta} \left( \beta_{2} \frac{\sigma^{2} }{\Delta^{2}} - \beta_{4} \frac{\sigma}{\Delta} + \frac{\Delta - \beta_{1}}{2} - \lambda \right) , \label{3.18}
\end{equation}
and ${}_{1}\! F_{1} (d|c|x)$ is the confluent hypergeometric function (the Kummer function). Note that the function
${}_{1}\! F_{1} (d|c|x)$ with $c=1/2$ or $c=3/2$ can be expressed in terms of the parabolic cylinder functions $D_{\nu}(\pm x)$ by using the relation \cite{AS}
\begin{equation} D_{\nu}(\pm x) = \sqrt{\pi}\, 2^{\nu/2} e^{-x^{2}/4} \left[ \frac{1}{ \Gamma\left(\frac{1-\nu}{2}\right) }\, {}_{1}\! F_{1}
\left(-\frac{\nu}{2} \left| \frac{1}{2} \left| \frac{x^{2}}{2} \right. \right. \right) \mp \frac{\sqrt{2}\,x}{\Gamma\left(-\frac{\nu}{2}
\right)}\, {}_{1}\! F_{1} \left(\frac{1-\nu}{2} \left| \frac{3}{2}
\left|\frac{x^{2}}{2}\right. \right. \right) \right] . \label{pcf}
\end{equation} The function $\Lambda(\lambda,\bbox{\beta};\alpha)$ is manifestly analytic, and the normalization condition (\ref{3.6}) requires
\begin{equation}
\left| \frac{\Delta \pm \beta_{1}}{2\beta_{2}} \right| < 1 . \label{3.19}
\end{equation} The sign `$-$' in Eq.\ (\ref{3.19}) must be taken when $d$ ($d+\frac{1}{2}$) is a nonpositive integer and the sign `$+$' otherwise.
The physical meaning of the two solutions can be understood by considering the particular case $\beta_{4} = \beta_{5} =0$ when one-photon processes are excluded. Then $\sigma = 0$, and $\mu_{\Delta} = 0$, so $T_{1}(\lambda,\bbox{\beta};\alpha)$ contains only even powers of $\alpha$ and $T_{2}(\lambda,\bbox{\beta};\alpha)$ contains only odd powers of $\alpha$. If we recall that the operators $\{ N, a^{2}, a^{\dagger 2} \}$ form a realization of the SU(1,1) Lie algebra, it will be clear that the two solutions represent the AES in the two irreducible sectors of SU(1,1). One-photon processes represented by $a$ and $a^{\dagger}$ mix these irreducible sectors, and then the total solution is given by a superposition of $T_{1}$ and $T_{2}$.
In the degenerate case $\Delta = 0$, provided that $\beta_{2} \neq 0$ and $\sigma \neq 0$, a solution of Eq.\ (\ref{3.13}) is given by \cite{Erd}
\begin{equation} T(\lambda,\bbox{\beta};\alpha) = \exp\left( -\frac{\beta_{4} }{2\beta_{2}} \alpha \right) \sqrt{\alpha-\mu_{0}} \,\, J_{1/3} \left( \frac{2}{3} \sqrt{\frac{\sigma}{ \beta_{2}}} (\alpha-\mu_{0})^{3/2} \right) , \label{3.20}
\end{equation} where
\begin{equation} \mu_{0} \equiv \frac{ 4\beta_{2}\lambda + 2\beta_{1}\beta_{2} + \beta_{4}^{2} }{ 4\beta_{2}\sigma } , \label{3.21}
\end{equation} and $J_{\nu}(x)$ is the Bessel function of the first kind. Another independent solution includes $J_{-\nu}(x)$ instead of $J_{\nu}(x)$ (for a noninteger $\nu$). The solution can also be expressed in terms of the Airy functions \cite{AS}: \begin{mathletters} \label{airy}
\begin{eqnarray} & & {\rm Ai}(-x) = \frac{\sqrt{x}}{3} \left[J_{1/3} \left( \frac{2}{3} x^{3/2}\right) + J_{-1/3} \left(\frac{2}{3} x^{3/2} \right) \right] , \label{airy_a} \\ & & {\rm Bi}(-x) = \sqrt{ \frac{x}{3} } \left[ J_{-1/3} \left(\frac{2}{3} x^{3/2}\right) - J_{1/3} \left(\frac{2}{3} x^{3/2}\right) \right] . \label{airy_b}
\end{eqnarray} \end{mathletters} The function $\Lambda(\lambda,\bbox{\beta};\alpha)$ is manifestly analytic, and the normalization condition (\ref{3.6}) requires
$|\beta_{1}/\beta_{2}| < 2$.
When $\sigma = 0$ in addition to $\Delta = 0$, Eq.\ (\ref{3.13}) becomes an equation with constant coefficients, whose solution can be written in terms of elementary functions:
\begin{equation} T(\alpha) = C_{+} \exp(\omega_{+}\alpha) + C_{-} \exp(\omega_{-}\alpha) , \label{lin1}
\end{equation} where $C_{\pm}$ are the integration constants, and
\begin{equation} \omega_{\pm} = \frac{1}{2\beta_{2}} \left( -\beta_{4} \pm \sqrt{ \beta_{4}^{2} + 4\beta_{2}\lambda +2\beta_{1}\beta_{2} } \right) . \label{lin2}
\end{equation}
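Indeed, substituting $T = e^{\omega\alpha}$ into the constant-coefficient equation obtained from Eq.\ (\ref{3.13}) with $\Delta = \sigma = 0$, namely $\beta_{2}T'' + \beta_{4}T' - (\beta_{1}/2 + \lambda)T = 0$, one can confirm symbolically that the exponents (\ref{lin2}), with discriminant $\beta_{4}^{2}+4\beta_{2}\lambda+2\beta_{1}\beta_{2}$, are the characteristic roots (an illustrative check):

```python
import sympy as sp

b1, b2, b4, lam = sp.symbols('b1 b2 b4 lam')

disc = sp.sqrt(b4**2 + 4 * b2 * lam + 2 * b1 * b2)
for sign in (+1, -1):
    omega = (-b4 + sign * disc) / (2 * b2)
    # characteristic polynomial of  b2 T'' + b4 T' - (b1/2 + lam) T = 0
    assert sp.simplify(b2 * omega**2 + b4 * omega - (b1 / 2 + lam)) == 0
```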
In the case $\beta_{2} = 0$, $\beta_{1} \neq 0$, the eigenvalue equation (\ref{3.11}) is a first-order differential equation whose solution is easily found to be
\begin{equation} \Lambda(\lambda,\bbox{\beta};\alpha) = \Lambda_{0} \left( \alpha + \frac{\beta_{4}}{\beta_{1}} \right)^{p} \exp\left( -\frac{\beta_{3}}{2\beta_{1}} \alpha^{2} + \frac{ \beta_{3}\beta_{4} - \beta_{1}\beta_{5} }{ \beta_{1}^{2} } \alpha \right) , \label{3.23}
\end{equation} where
\begin{equation} p = [\beta_{1}^{2}\lambda - \beta_{4}(\beta_{3}\beta_{4} - \beta_{1}\beta_{5})]/\beta_{1}^{3}
\end{equation} must be a non-negative integer in order to satisfy the analyticity condition. $\Lambda_{0}$ is a normalization factor, and the normalization condition (\ref{3.6}) requires
$|\beta_{3}/\beta_{1}| < 1$.
When $\beta_{2} = 0$ and $\beta_{3} = 0$, the resulting AES are associated with the oscillator group $H_{4}$. The corresponding analytic function is
\begin{equation} \Lambda(\lambda,\bbox{\beta};\alpha) = \Lambda_{0} \left( \alpha + \frac{\beta_{4}}{\beta_{1}} \right)^{p} \exp\left( -\frac{\beta_{5}}{\beta_{1}} \alpha \right) , \label{3.25}
\end{equation} where
\begin{equation} p = (\beta_{1}\lambda + \beta_{4}\beta_{5})/\beta_{1}^{2} \label{3.26}
\end{equation} is once again a non-negative integer. The function $\Lambda(\lambda,\bbox{\beta};\alpha)$ of Eq.\ (\ref{3.25}) represents displaced Fock states \cite{DFS}. In order to derive the corresponding eigenvalue equation, we start from the equation
$N|n\rangle = n|n\rangle$. By applying the unitary displacement operator $D(\upsilon) = \exp(\upsilon a^{\dagger} - \upsilon^{\ast} a)$ to this equation, we obtain
\begin{equation}
(N - \upsilon^{\ast} a - \upsilon a^{\dagger}) |n,\upsilon\rangle
= (n - |\upsilon|^{2}) |n,\upsilon\rangle , \label{3.27}
\end{equation}
where $|n,\upsilon\rangle = D(\upsilon) |n\rangle$ is the displaced Fock state that reduces to the standard Glauber state for $n=0$. The corresponding analytic function is given by Eq.\ (\ref{3.25}). By substituting
\begin{equation} \beta_{1} = 1, \;\;\;\;\;\;\; \beta_{5} = \beta_{4}^{\ast} = -\upsilon, \;\;\;\;\;\;\;
\lambda = n - |\upsilon|^{2} , \label{3.28}
\end{equation} we find $p=n$, and
\begin{equation} \Lambda(n,\upsilon;\alpha) = \Lambda_{0} ( \alpha - \upsilon^{\ast} )^{n} e^{\upsilon\alpha} . \label{3.29}
\end{equation} For $n=0$, this function reduces to the function ${\cal F}(\upsilon;\alpha)$ representing the Glauber CS
$|\upsilon\rangle$ [cf. Eq.\ (\ref{3.7})]. The normalization
factor in this case is $\Lambda_{0} = \exp(-|\upsilon|^{2}/2)$. A consequence of Eq.\ (\ref{3.27}) is the following equation satisfied by the Glauber CS
\begin{equation}
(N - \upsilon^{\ast} a - \upsilon a^{\dagger} + |\upsilon|^{2})
|\upsilon\rangle = 0 . \label{3.30}
\end{equation}
By using the Glauber definition $a |\upsilon\rangle = \upsilon
|\upsilon\rangle$, we see that Eq.\ (\ref{3.30}) is an identity.
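Equations (\ref{3.27}) and (\ref{3.30}) can also be checked numerically in a truncated Fock basis (an illustrative sketch; the dimension and amplitudes are arbitrary, and truncation errors are negligible at this size):

```python
import numpy as np
from scipy.linalg import expm

dim = 120
a = np.diag(np.sqrt(np.arange(1, dim)), 1)     # truncated annihilation operator
ad = a.conj().T
N = ad @ a

v = 0.7 + 0.3j                                 # displacement amplitude (arbitrary)
Dv = expm(v * ad - np.conj(v) * a)             # displacement operator D(v)

n = 3
fock = np.zeros(dim, dtype=complex); fock[n] = 1.0
dfs = Dv @ fock                                # displaced Fock state |n, v>

# Eq. (3.27): (N - v* a - v a^dagger) |n,v> = (n - |v|^2) |n,v>
assert np.allclose((N - np.conj(v) * a - v * ad) @ dfs,
                   (n - abs(v)**2) * dfs, atol=1e-8)

# Eq. (3.30) for the Glauber CS (the n = 0 case)
vac = np.zeros(dim, dtype=complex); vac[0] = 1.0
cs = Dv @ vac
assert np.allclose((N - np.conj(v) * a - v * ad
                    + abs(v)**2 * np.eye(dim)) @ cs, 0.0, atol=1e-8)
```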
We also consider another example of displaced states. In the case $\beta_{1} = \beta_{2} = \beta_{3} = 0$, $\beta_{4} \neq 0$, the resulting AES are associated with the $H_{3}$ group. Then the solution of the eigenvalue equation (\ref{3.11}) is
\begin{equation} \Lambda(\lambda,\bbox{\beta};\alpha) = \Lambda_{0} \exp\left( - \frac{\beta_{5}}{2\beta_{4}} \alpha^{2} + \frac{\lambda}{\beta_{4}} \alpha \right) . \label{3.31}
\end{equation} We see that the analyticity condition is fulfilled. Moreover, the
normalization condition (\ref{3.6}) requires $|\beta_{5}/\beta_{4}| < 1$. By comparing the function $\Lambda(\lambda,\bbox{\beta};\alpha)$ of Eq.\ (\ref{3.31}) with the function ${\cal F}(\upsilon;\alpha)$ of Eq.\ (\ref{3.7}), we
find that the algebra eigenstate $|\lambda,\bbox{\beta}\rangle$
coincides with the Glauber coherent state $|\upsilon\rangle$ for $\beta_{i} = 0$, $i=1,2,3,5$. Then $\upsilon = \lambda/\beta_{4}$, and Eq.\ (\ref{3.9}) reduces to the famous Glauber equation
$a |\upsilon\rangle = \upsilon|\upsilon\rangle$. We see that the eigenvalue equation for a state (e.g., for the standard coherent state) can be written in a number of ways, i.e., there are a number of equivalent definitions of the state. Note also that in the case $\beta_{i} = 0$, $i=1,2,3,4$, $\beta_{5} \neq 0$, Eq.\ (\ref{3.9}) has no nontrivial solution. The reason is that the creation operator $a^{\dagger}$ has no eigenstates.
The Gaussian form of the function $\Lambda(\lambda,\bbox{\beta}; \alpha)$ of Eq.\ (\ref{3.31}) means that this function represents displaced (canonical) squeezed states of Stoler and Yuen \cite{SS}. These states are generated by the action of the squeezing and displacement operators on the vacuum \cite{SS},
\begin{equation}
|\xi,\upsilon\rangle = D(\upsilon) S(\xi) |0\rangle , \label{3.32}
\end{equation} where the squeezing operator is
\begin{equation} S(\xi) = \exp(\mbox{\small{$\frac{1}{2}$}}\xi a^{\dagger 2} - \mbox{\small{$\frac{1}{2}$}}\xi^{\ast} a^{2}) . \label{3.33}
\end{equation} By applying the squeezing operator $S(\xi = s\, e^{i\theta})$ to the
equation $a|0\rangle = 0$, one derives the equation satisfied by the
squeezed vacuum $|\xi\rangle = S(\xi) |0\rangle$,
\begin{equation}
[ (\cosh s)a - (\sinh s \, e^{i\theta}) a^{\dagger}] |\xi\rangle = 0 . \label{3.34}
\end{equation} By applying the displacement operator $D(\upsilon)$ to this equation, one finds the eigenvalue equation satisfied by the
displaced squeezed state $|\xi,\upsilon\rangle$,
\begin{equation}
( a - \zeta a^{\dagger}) |\xi,\upsilon\rangle = (\upsilon -
\zeta \upsilon^{\ast}) |\xi,\upsilon\rangle , \label{3.35}
\end{equation} where
\begin{equation}
\zeta \equiv \frac{\xi}{|\xi|} \tanh |\xi| = \tanh s \, e^{i\theta} . \label{3.zeta}
\end{equation} By substituting
\begin{equation} \beta_{4} = 1 , \;\;\;\;\;\;\;\; \beta_{5} = - \zeta , \;\;\;\;\;\;\;\; \lambda = \upsilon - \zeta \upsilon^{\ast} \label{3.36}
\end{equation} into Eq.\ (\ref{3.31}), one obtains the analytic function representing the displaced squeezed states,
\begin{equation} \Lambda(\xi,\upsilon;\alpha) = \Lambda_{0} \exp\left[ \mbox{\small{$\frac{1}{2}$}} \zeta \alpha^{2} + (\upsilon - \zeta \upsilon^{\ast}) \alpha \right] . \label{3.37}
\end{equation} The normalization factor in this case is \cite{SS}
\begin{equation}
\Lambda_{0} = \frac{ \exp( -\frac{1}{2}|u|^{2} - \zeta^{\ast} u^{2} ) }{\sqrt{\cosh s}},
\end{equation} where
\begin{equation} u \equiv (\cosh s)\, \upsilon - \left(\sinh s\, e^{i\theta} \right) \upsilon^{\ast} . \label{3.u}
\end{equation} It is interesting to note that the displaced squeezed states
$|\xi,\upsilon\rangle$ are the standard CS of the group $H_{6}$ but simultaneously they are nonstandard CS of its subgroup $H_{3}$. The reference state of this nonstandard set is the squeezed vacuum
$|\xi\rangle$. The displaced squeezed states $|\xi,\upsilon\rangle$ are also the generalized IS for the quadratures $X_{1}$ and $X_{2}$ that are the Hermitian generators of $H_{3}$. By putting $a = X_{1} +iX_{2}$ and $a^{\dagger} = X_{1} -iX_{2}$ in the eigenvalue equation (\ref{3.35}), one obtains the equation of type (\ref{2.15}):
\begin{equation} \left[\left(\frac{1-\zeta}{1+\zeta} \right) X_{1} +iX_{2} \right]
|\xi,\upsilon\rangle = \left(\frac{ \upsilon -\zeta\upsilon^{\ast}
}{1+\zeta} \right) |\xi,\upsilon\rangle . \label{3.38}
\end{equation} (The $X_{1}$-$X_{2}$ generalized IS are also known as ``correlated coherent states'' \cite{corst}.) For $\theta = 0$ and $\theta = \pi$, $\zeta$ is real and the
$|\xi,\upsilon\rangle$ states are the ordinary IS, i.e., they provide an equality in the uncertainty relation $\Delta X_{1} \Delta X_{2}\geq 1/4$.
The Glauber CS $|\upsilon\rangle$ form the zero-squeezing subset of the $X_{1}$-$X_{2}$ IS.
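The eigenvalue equations (\ref{3.34}) and (\ref{3.35}) admit a direct numerical check in a truncated Fock basis (an illustrative sketch with arbitrary moderate parameters, chosen so that truncation effects stay negligible):

```python
import numpy as np
from scipy.linalg import expm

dim = 150
a = np.diag(np.sqrt(np.arange(1, dim)), 1)     # truncated annihilation operator
ad = a.conj().T

s, theta = 0.3, 0.8
xi = s * np.exp(1j * theta)
S = expm(0.5 * xi * ad @ ad - 0.5 * np.conj(xi) * a @ a)  # squeezing operator S(xi)

vac = np.zeros(dim, dtype=complex); vac[0] = 1.0
sq_vac = S @ vac                                          # squeezed vacuum |xi>

# Eq. (3.34): [cosh(s) a - sinh(s) e^{i theta} a^dagger] |xi> = 0
assert np.allclose((np.cosh(s) * a
                    - np.sinh(s) * np.exp(1j * theta) * ad) @ sq_vac,
                   0.0, atol=1e-8)

# Eq. (3.35): (a - zeta a^dagger) |xi,v> = (v - zeta v*) |xi,v>
v = 0.5 - 0.2j
Dv = expm(v * ad - np.conj(v) * a)
dss = Dv @ sq_vac                                         # displaced squeezed state
zeta = np.tanh(s) * np.exp(1j * theta)
assert np.allclose((a - zeta * ad) @ dss,
                   (v - zeta * np.conj(v)) * dss, atol=1e-8)
```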
\section{Displaced and squeezed Fock states}
The differential equation (\ref{3.11}) determines analytic functions representing various photon states that can be produced by squeezing and displacement of an initial state. The first candidate for the initial state is the vacuum. Recently, there has been considerable interest in attempts to produce Fock states
(photon number eigenstates) $|n\rangle$ with nonzero occupation number \cite{FSP:mm,FSP_SCP,FSP:pa,FSP:qj,FSP:rl,FSP:sai}. Given that Fock states can be generated, it is natural to consider their displacement (by driving the light field by a classical current) and squeezing (by degenerate parametric amplification). Properties of displaced Fock states \cite{DFS,KOK:jmo,DFS:jcm,DFS:gen,DFS:pp}, squeezed Fock states \cite{SFS,KOK:jmo,SFS:pp}, and displaced and squeezed Fock states (DSFS) \cite{Kral,Lo} have been widely discussed. In this section we consider the DSFS as a characteristic example of the two-photon AES. The general results of the preceding section are used to obtain the Fock-Bargmann analytic representation of the DSFS.
We start from the equation $N|n\rangle = n|n\rangle$. By acting on both sides of this equation with the squeezing operator $S(\xi=se^{i\theta})$, we derive the eigenvalue equation
satisfied by the squeezed Fock states $|n,\xi\rangle =
S(\xi)|n\rangle$,
\begin{equation} ( \beta_{1} N + \beta_{2} a^{2} + \beta_{3} a^{\dagger 2} )
|n,\xi\rangle = (n-\sinh^{2}\! s) |n,\xi\rangle , \label{4.1}
\end{equation} where
\begin{equation} \beta_{1} = \cosh 2s , \;\;\;\;\;\;\; \beta_{2} = \beta_{3}^{\ast} = -\frac{1}{2} \sinh 2s\, e^{-i\theta} . \label{4.2}
\end{equation} Then we apply the displacement operator $D(\upsilon=re^{i\phi})$. The resulting eigenvalue equation reads
\begin{equation} ( \beta_{1} N + \beta_{2} a^{2} + \beta_{3} a^{\dagger 2}
+ \beta_{4} a + \beta_{5} a^{\dagger} ) |n,\xi,\upsilon\rangle =
\lambda |n,\xi,\upsilon\rangle , \label{4.3}
\end{equation} where
\begin{equation}
|n,\xi,\upsilon\rangle = D(\upsilon) S(\xi) |n\rangle \label{4.4}
\end{equation} are the DSFS. The parameters $\beta_{1}$, $\beta_{2}$ and $\beta_{3}$ remain as given above, and
\begin{eqnarray} & & \beta_{4} = \beta_{5}^{\ast} = \upsilon^{\ast}\left( \sinh 2s\, e^{-i(\theta-2\phi)} - \cosh 2s \right) , \label{4.5} \\ & & \lambda = n-\sinh^{2}\! s + r^{2}[ \sinh 2s\, \cos (\theta-2\phi) - \cosh 2s] . \label{4.6}
\end{eqnarray} These results can be easily derived by using the general recipe
\begin{equation} D(\upsilon) F(a,a^{\dagger}) D^{-1}(\upsilon) = F(a-\upsilon,a^{\dagger}-\upsilon^{\ast}) , \label{4.7}
\end{equation} where $F(a,a^{\dagger})$ is a power series.
Equation (\ref{4.3}) is of the general form (\ref{3.9}) and the corresponding differential equation is of the form (\ref{3.11}) with solutions given by Eqs.\ (\ref{3.12}) and (\ref{3.16}). A simple calculation yields
\begin{equation} \Delta^{2} = \beta_{1}^{2} -4\beta_{2}\beta_{3} = 1 , \label{4.8}
\end{equation} which is a direct consequence of the unitarity of the squeezing operator $S(\xi)$. Then $\Delta = \pm 1$, and we find, respectively,
\begin{equation} \sigma = \pm \upsilon \left[ \left( \frac{\sinh s}{\cosh s} \right)^{\pm 1} e^{i(\theta-2\phi)} - 1 \right] , \;\;\;\;\;\; \mu_{\Delta} = \upsilon^{\ast} , \;\;\;\;\;\; d = \mp \frac{1}{2} \left( n +\frac{1}{2} \mp\frac{1}{2} \right) . \label{4.11}
\end{equation}
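The parameter assignments (\ref{4.2}) and (\ref{4.5})--(\ref{4.6}) can be verified numerically in a truncated Fock basis (an illustrative sketch; the parameters are arbitrary and moderate):

```python
import numpy as np
from scipy.linalg import expm

dim = 150
a = np.diag(np.sqrt(np.arange(1, dim)), 1)  # truncated annihilation operator
ad = a.conj().T
N = ad @ a

s, theta, r, phi, n = 0.25, 0.6, 0.5, 1.1, 2
v = r * np.exp(1j * phi)

S = expm(0.5 * s * np.exp(1j * theta) * ad @ ad
         - 0.5 * s * np.exp(-1j * theta) * a @ a)
D = expm(v * ad - np.conj(v) * a)
fock = np.zeros(dim, dtype=complex); fock[n] = 1.0
dsfs = D @ S @ fock                         # displaced and squeezed Fock state

b1 = np.cosh(2 * s)                         # Eq. (4.2)
b2 = -0.5 * np.sinh(2 * s) * np.exp(-1j * theta)
b3 = np.conj(b2)
b4 = np.conj(v) * (np.sinh(2 * s) * np.exp(-1j * (theta - 2 * phi))
                   - np.cosh(2 * s))        # Eq. (4.5)
b5 = np.conj(b4)
lam = n - np.sinh(s)**2 + r**2 * (np.sinh(2 * s) * np.cos(theta - 2 * phi)
                                  - np.cosh(2 * s))  # Eq. (4.6)

H = b1 * N + b2 * a @ a + b3 * ad @ ad + b4 * a + b5 * ad
assert np.allclose(H @ dsfs, lam * dsfs, atol=1e-7)  # eigenvalue equation (4.3)
```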
Let us start from $\Delta =+1$. Then $d=-n/2$, and the normalization condition (\ref{3.19}) is satisfied by taking $T_{1}(\lambda, \bbox{\beta};\alpha)$ for even values of $n$ and $T_{2}(\lambda,
\bbox{\beta};\alpha)$ for odd values of $n$. This result is dictated by the fact that the analytic function representing the squeezed Fock states $|n,\xi\rangle$ contains only even powers of $\alpha$ for even $n$ and only odd powers of $\alpha$ for odd $n$. By using the relations between the confluent hypergeometric functions and the Hermite polynomials, \begin{mathletters} \label{4.12}
\begin{eqnarray}
& & {}_{1}\! F_{1} \left(-m \left| \frac{1}{2} \right| x^{2} \right) = \frac{(-1)^{m} m! H_{2m}(x)}{(2m)!} , \label{4.12a} \\
& & x\, {}_{1}\! F_{1} \left(-m \left| \frac{1}{2} \right| x^{2} \right) = \frac{(-1)^{m} m! H_{2m+1}(x)}{2 (2m+1)!} , \label{4.12b}
\end{eqnarray} \end{mathletters} we find the solution:
\begin{equation}
\Lambda(n,\xi,\upsilon;\alpha) = e^{|\alpha|^{2}/2}
\langle\alpha^{\ast}|n,\xi,\upsilon\rangle = \Lambda_{0}(n,\xi,\upsilon) \exp\left[ \frac{\zeta}{2} \alpha^{2} + (\upsilon-\zeta\upsilon^{\ast})\alpha \right] H_{n}\left(\frac{\alpha-\upsilon^{\ast}}{\sqrt{\sinh 2s\, e^{-i\theta}}} \right) . \label{4.13}
\end{equation}
As usual, $\Lambda_{0}$ is a normalization factor, and $\zeta$ is defined by Eq.\ (\ref{3.zeta}). This result is in accordance with the expression for $\langle\alpha|S(\xi)D(\upsilon)|n\rangle$ derived in a different way by Kr\'{a}l \cite{Kral}. The normalization factor is identified to be
\begin{equation} \Lambda_{0}(n,\xi,\upsilon) = \frac{ (\zeta^{\ast}/2)^{n/2}
}{ \sqrt{n!\cosh s} } \exp\left( -\frac{1}{2}|u|^{2} - \zeta^{\ast} u^{2} \right) , \label{4.14}
\end{equation} where $u$ is defined by Eq.\ (\ref{3.u}).
It is well known \cite{Erd} that the confluent hypergeometric function can be written in two equivalent forms which are related by Kummer's transformation
\begin{equation}
{}_{1}\! F_{1} \left(d \left| c \left| x \right. \right. \right)
= e^{x} {}_{1}\! F_{1} \left(c-d \left| c \left| -x \right. \right. \right) . \label{4.15}
\end{equation} It is not difficult to see that the choice $\Delta = -1$ leads to the solution which is related to the function $\Lambda(n,\xi,\upsilon;\alpha)$ of Eq.\ (\ref{4.13}) by Kummer's transformation (\ref{4.15}). Then the solution can be written in the form \begin{mathletters} \label{4.16}
\begin{equation} \Lambda_{1}(n,\xi,\upsilon;\alpha) = \Lambda_{0}^{(1)} \exp\left[ \frac{\alpha^{2}}{2\zeta^{\ast}} + (\upsilon- \upsilon^{\ast}/\zeta^{\ast})\alpha \right]
{}_{1}\! F_{1} \left(\frac{n+1}{2} \left| \frac{1}{2}
\left| \frac{ -(\alpha-\upsilon^{\ast})^{2} }{ \sinh 2s\, e^{-i\theta} } \right. \right. \right) \label{4.16a}
\end{equation} for even values of $n$ and
\begin{equation} \Lambda_{2}(n,\xi,\upsilon;\alpha) = \Lambda_{0}^{(2)} \exp\left[ \frac{\alpha^{2}}{2\zeta^{\ast}} + (\upsilon- \upsilon^{\ast}/\zeta^{\ast})\alpha \right] (\alpha-\upsilon^{\ast}) \,
{}_{1}\! F_{1} \left(\frac{n+2}{2} \left| \frac{3}{2}
\left| \frac{ -(\alpha-\upsilon^{\ast})^{2} }{ \sinh 2s\, e^{-i\theta} } \right. \right. \right) \label{4.16b}
\end{equation} \end{mathletters} for odd values of $n$. As usual, $\Lambda_{0}^{(1)}$ and $\Lambda_{0}^{(2)}$ are appropriate normalization factors, and the normalization condition (\ref{3.19}) is obviously satisfied.
In the particular case $n=0$, the function $\Lambda(n,\xi,\upsilon;\alpha)$ given by Eq.\ (\ref{4.13}) reduces to the function (\ref{3.37}) representing the displaced squeezed states. The analytic function representing the squeezed Fock states
$|n,\xi\rangle$ is obtained by putting $\upsilon = 0$ in Eq.\ (\ref{4.13}) or in Eqs.\ (\ref{4.16}). The displaced Fock states
$|n,\upsilon\rangle$ were discussed in the preceding section and the corresponding analytic function is given by Eq.\ (\ref{3.29}).
We finish this section by a short review of basic methods for producing the DSFS. Displacement can be implemented by linear amplification of the light field. A usual method for doing that is by driving the field by a classical current. The use of a linear directional coupler as a displacing device was also discussed \cite{LBK}. The most frequently used squeezing device in the single-mode case is a degenerate parametric amplifier. These methods of displacement and squeezing are well developed and the main problem remaining is the production of a stable Fock state that will serve as the input state of displacing and squeezing devices. It was demonstrated that it is possible to generate a Fock state of the single-mode electromagnetic field in a micromaser operated under the appropriate conditions \cite{FSP:mm,FSP_SCP}. Another interesting method for producing Fock states is based on the process of parametric down-conversion in which one pump photon is destroyed and two correlated photons are simultaneously created, one in each of two distinct modes. The state of one mode is then conditioned on the detection of photons in the other mode \cite{FSP:pa}. It was also shown that a Fock state can be generated by observation of quantum jumps in an ion trap \cite{FSP:qj}, by coupling a cavity to a single three-level atom in a Raman lambda configuration \cite{FSP:rl}, and by using the single-atom interference \cite{FSP:sai}.
\section{Squeezing and displacement of coherent superposition states}
In this section we will consider squeezed and displaced
superpositions of the Glauber CS $|\upsilon\rangle$ and
$|-\upsilon\rangle$, which provide an interesting example of the two-photon AES. These states belong to the wide class of macroscopic quantum superpositions which are frequently referred to as the Schr\"{o}dinger-cat states \cite{Sch_cat}. Properties of various types of the Schr\"{o}dinger-cat states have recently been studied in a number of works \cite{EOCS,Hil87_89,YuSt86,XiGu,BVBK,KiBu, HaGe:qo,Bu:psa,HaGe:jmo,XWHM,DomJa}. The problem of generating optical superposition states has recently drawn much attention \cite{YuSt86,MiHo,MeTo,WoCar,GeHa,GaKn,LyGN,GeaB,SKK, Mey,FSP_SCP,DMBRH,BGK,qndm}. It was shown that the Schr\"{o}dinger-cat states can be produced in various nonlinear processes \cite{YuSt86,MiHo,MeTo,WoCar,GeHa,GaKn,LyGN}, in field-atom interactions \cite{GeaB,SKK,Mey,FSP_SCP,DMBRH,BGK}, and in quantum nondemolition measurements \cite{qndm}.
We start from the coherent superposition state of the form
\begin{equation}
|\upsilon,\tau,\varphi\rangle = {\cal N} \left( |\upsilon\rangle
+\tau e^{i\varphi} |-\upsilon\rangle \right) , \label{5.1}
\end{equation}
where $|\upsilon\rangle$ and $|-\upsilon\rangle$ are the standard Glauber CS, $\tau$ and $\varphi$ are real parameters, and
\begin{equation}
{\cal N} = \left( 1+\tau^{2}+2\tau e^{-2|\upsilon|^{2}} \cos\varphi \right)^{-1/2} \label{5.2}
\end{equation} is the normalization factor. The analytic function
\begin{equation}
\Lambda(\upsilon,\tau,\varphi;\alpha) = e^{|\alpha|^{2}/2}
\langle\alpha^{\ast}|\upsilon,\tau,\varphi\rangle \label{5.3}
\end{equation} can be straightforwardly calculated:
\begin{equation} \Lambda(\upsilon,\tau,\varphi;\alpha) = {\cal N}
e^{-|\upsilon|^{2}/2} \left( e^{\upsilon\alpha} + \tau e^{i\varphi} e^{-\upsilon\alpha} \right) . \label{5.4}
\end{equation}
The superposition $|\upsilon,\tau,\varphi\rangle$ is a special kind of the two-photon AES since it is the eigenstate of the operator $a^{2}$:
\begin{equation}
a^{2} |\upsilon,\tau,\varphi\rangle =
\upsilon^{2} |\upsilon,\tau,\varphi\rangle . \label{5.5}
\end{equation} In the case $\tau=0$, this state reduces to the Glauber coherent
state $|\upsilon\rangle$.
Interesting superpositions are even and odd CS
$|\upsilon\rangle_{e}$ and $|\upsilon\rangle_{o}$ \cite{EOCS}: \begin{mathletters} \label{5.6}
\begin{eqnarray}
|\upsilon\rangle_{e} & = & |\upsilon,\tau=1,\varphi=0\rangle =
\frac{ |\upsilon\rangle + |-\upsilon\rangle }{ \sqrt{ 2\left(
1 + e^{-2|\upsilon|^{2}} \right) } } , \label{5.6a} \\
|\upsilon\rangle_{o} & = & |\upsilon,\tau=1,\varphi=\pi\rangle =
\frac{ |\upsilon\rangle - |-\upsilon\rangle }{ \sqrt{ 2\left(
1 - e^{-2|\upsilon|^{2}} \right) } } . \label{5.6b}
\end{eqnarray} \end{mathletters} The even and odd CS have a number of interesting nonclassical properties. The even CS are highly squeezed in the field quadrature $X_{2}$, while the odd CS have sub-Poissonian photon statistics \cite{XiGu,BVBK}. Multimode versions of the even and odd CS have been recently studied \cite{AM_DMN}.
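That the even and odd CS (\ref{5.6}) are eigenstates of $a^{2}$ with eigenvalue $\upsilon^{2}$, cf.\ Eq.\ (\ref{5.5}), is easily confirmed numerically (an illustrative sketch in a truncated Fock basis):

```python
import numpy as np
from scipy.special import gammaln

dim = 80
a = np.diag(np.sqrt(np.arange(1, dim)), 1)  # truncated annihilation operator
ns = np.arange(dim)

def coherent(w):
    """Truncated Glauber coherent state |w>."""
    return np.exp(-abs(w)**2 / 2) * w**ns / np.sqrt(np.exp(gammaln(ns + 1)))

v = 0.9
even = coherent(v) + coherent(-v)           # even CS (unnormalized)
odd = coherent(v) - coherent(-v)            # odd CS (unnormalized)
even /= np.linalg.norm(even)
odd /= np.linalg.norm(odd)

# Eq. (5.5): a^2 |v>_{e,o} = v^2 |v>_{e,o}
for state in (even, odd):
    assert np.allclose(a @ a @ state, v**2 * state, atol=1e-10)
```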
In the case $\tau=1$, $\varphi=\pi/2$, one obtains the so-called Yurke-Stoler state
\begin{equation}
|\upsilon\rangle_{\text{YS}} = \frac{1}{\sqrt{2}} \left(
|\upsilon\rangle + i|-\upsilon\rangle \right) , \label{5.7}
\end{equation}
that can be generated when the Glauber state $|\upsilon\rangle$ propagates through a nonlinear Kerr medium \cite{YuSt86}. The
$|\upsilon\rangle_{\text{YS}}$ states are squeezed in the $X_{2}$ field quadrature \cite{BVBK}.
It follows from the eigenvalue equation (\ref{5.5}) that the
superpositions $|\upsilon,\tau,\varphi\rangle$ are a special case of the two-photon IS. More precisely, let us consider the two-photon realization of the SU(1,1) Lie algebra:
\begin{equation} K_{+} = \frac{1}{2}a^{\dagger 2}, \;\;\;\;\;\; K_{-} = \frac{1}{2}a^{2}, \;\;\;\;\;\; K_{0} = \frac{1}{2}N + \frac{1}{4} , \label{5.8}
\end{equation}
\begin{equation} [K_{-},K_{+}] = 2K_{0}, \;\;\;\;\;\; [K_{0},K_{\pm}] = \pm K_{\pm} . \label{5.9}
\end{equation} It is clear that SU(1,1)$\,\subset H_{6}$. One can use the Hermitian combinations
\begin{equation} \begin{array}{l}
K_{1} = \displaystyle{ \frac{1}{2} } (K_{+} + K_{-}) = \displaystyle{ \frac{1}{4} } (a^{\dagger 2} + a^{2}) , \\ K_{2} = \displaystyle{ \frac{1}{2i} } (K_{+} - K_{-}) = \displaystyle{ \frac{1}{4i} } (a^{\dagger 2} - a^{2}) , \label{5.10} \end{array}
\end{equation} which satisfy the commutation relation $[K_{1},K_{2}] = -iK_{0}$. According to the general formalism of section II D, the
$|\upsilon,\tau,\varphi\rangle$ states are the $K_{1}$-$K_{2}$ IS, i.e., they provide an equality in the uncertainty relation
\begin{equation} (\Delta K_{1})^{2} (\Delta K_{2})^{2} \geq \frac{1}{4} \langle K_{0} \rangle^{2} . \label{5.12}
\end{equation} Indeed, a simple calculation yields
\begin{equation} (\Delta K_{1})^{2} = (\Delta K_{2})^{2} = \frac{1}{2} \langle K_{0} \rangle =
\frac{|\upsilon|^{2}}{4} \frac{ 1 + \tau^{2} - 2\tau
e^{-2|\upsilon|^{2}} \cos\varphi }{ 1 + \tau^{2} + 2\tau
e^{-2|\upsilon|^{2}} \cos\varphi } + \frac{1}{8} , \label{5.13}
\end{equation} when the expectation values are calculated for the superpositions
$|\upsilon,\tau,\varphi\rangle$.
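The intelligent-state equality (\ref{5.13}) can be checked numerically for the superpositions (\ref{5.1}) (an illustrative sketch with arbitrary parameters):

```python
import numpy as np
from scipy.special import gammaln

dim = 100
a = np.diag(np.sqrt(np.arange(1, dim)), 1)  # truncated annihilation operator
ad = a.conj().T
K1 = 0.25 * (ad @ ad + a @ a)               # Eq. (5.10)
K2 = (ad @ ad - a @ a) / 4j
K0 = 0.5 * ad @ a + 0.25 * np.eye(dim)      # Eq. (5.8)

v, tau, phi = 0.8, 0.6, 1.2
ns = np.arange(dim)
cs = lambda w: np.exp(-abs(w)**2 / 2) * w**ns / np.sqrt(np.exp(gammaln(ns + 1)))
psi = cs(v) + tau * np.exp(1j * phi) * cs(-v)   # superposition (5.1)
psi = psi / np.linalg.norm(psi)

mean = lambda O: np.vdot(psi, O @ psi)
var = lambda O: (mean(O @ O) - mean(O)**2).real

# closed form of Eq. (5.13)
rhs = (abs(v)**2 / 4) \
    * (1 + tau**2 - 2 * tau * np.exp(-2 * abs(v)**2) * np.cos(phi)) \
    / (1 + tau**2 + 2 * tau * np.exp(-2 * abs(v)**2) * np.cos(phi)) + 0.125

# Delta K1^2 = Delta K2^2 = <K0>/2, all equal to the closed form
assert np.allclose([var(K1), var(K2), 0.5 * mean(K0).real], rhs, atol=1e-10)
```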
Now, let us recall that the Barut-Girardello states are defined as the eigenstates of the SU(1,1) lowering generator $K_{-}$ \cite{BG}. For each unitary irreducible representation of SU(1,1), there is a set of the Barut-Girardello states. In the case of the two-photon realization (\ref{5.8}), there are two irreducible representations and the two irreducible sectors are spanned by the Fock states
$|n\rangle$ with even and odd values of $n$, respectively. The two sets of the Barut-Girardello states are the even and odd CS
$|\upsilon\rangle_{e}$ and $|\upsilon\rangle_{o}$. Their intelligent properties were first recognized by Hillery \cite{Hil87_89}.
Nonclassical properties of displaced even and odd CS were briefly discussed by Xia and Guo \cite{XiGu}. Squeezed coherent superpositions were considered recently by Hach and Gerry \cite{HaGe:jmo} and by Xin {\em et al.} \cite{XWHM}. We will use the algebra-eigenstate method developed above in order to obtain the Fock-Bargmann analytic representation of the squeezed and displaced superpositions. By applying the squeezing operator $S(\xi=s e^{i\theta})$ to Eq.\ (\ref{5.5}), we find that the squeezed superpositions
\begin{equation}
|\upsilon,\tau,\varphi,\xi\rangle = S(\xi)
|\upsilon,\tau,\varphi\rangle \label{5.14}
\end{equation} satisfy the following eigenvalue equation
\begin{equation}
a_{\xi}^{2} |\upsilon,\tau,\varphi,\xi\rangle
= \upsilon^{2} |\upsilon,\tau,\varphi,\xi\rangle , \label{5.15}
\end{equation} where
\begin{equation} a_{\xi} = S(\xi) a S^{-1}(\xi) = (\cosh s) a - (\sinh s\, e^{i\theta}) a^{\dagger} . \label{5.16}
\end{equation} Equation (\ref{5.15}) can be written in the standard form (\ref{3.9}):
\begin{equation} (-2\zeta N + a^{2} + \zeta^{2} a^{\dagger 2})
|\upsilon,\tau,\varphi,\xi\rangle
= [\upsilon^{2} (1-|\zeta|^2)
+\zeta] |\upsilon,\tau,\varphi,\xi\rangle , \label{5.17}
\end{equation} where $\zeta = \tanh s\, e^{i\theta}$ is defined by Eq.\ (\ref{3.zeta}). Now, we apply the displacement operator $D(z)$. The resulting eigenvalue equation is
\begin{equation}
a_{\xi,z}^{2} |\upsilon,\tau,\varphi,\xi,z\rangle
= \upsilon^{2} |\upsilon,\tau,\varphi,\xi,z\rangle , \label{5.18}
\end{equation} where
\begin{equation}
|\upsilon,\tau,\varphi,\xi,z\rangle = D(z) S(\xi)
|\upsilon,\tau,\varphi\rangle \label{5.19}
\end{equation} is the displaced and squeezed superposition, and
\begin{equation} a_{\xi,z} = D(z) S(\xi) a S^{-1}(\xi) D^{-1}(z) = (\cosh s) (a-z) - (\sinh s\, e^{i\theta}) (a^{\dagger}-z^{\ast}) . \label{5.20}
\end{equation} The standard form (\ref{3.9}) of the eigenvalue equation is
\begin{equation} (-2\zeta N + a^{2} + \zeta^{2} a^{\dagger 2} -2\rho a
+2\zeta\rho a^{\dagger}) |\upsilon,\tau,\varphi,\xi,z\rangle
+2\xi\rho a^{\dagger}) |\upsilon,\tau,\varphi,\xi,z\rangle
= [\upsilon^{2} (1-|\zeta|^2) +\zeta -\rho^{2}]
|\upsilon,\tau,\varphi,\xi,z\rangle , \label{5.21}
\end{equation} where we have defined
\begin{equation} \rho \equiv z - \zeta z^{\ast} . \label{5.22}
\end{equation}
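Indeed, expanding $(a-\zeta a^{\dagger}-\rho)^{2} = a^{2}+\zeta^{2}a^{\dagger 2} -2\zeta N-\zeta-2\rho a+2\zeta\rho a^{\dagger}+\rho^{2}$ yields Eq.\ (\ref{5.21}). The following numerical sketch (illustrative parameters, truncated Fock basis) confirms the eigenvalue equation for the displaced and squeezed superposition:

```python
import numpy as np
from scipy.linalg import expm
from scipy.special import gammaln

dim = 150
a = np.diag(np.sqrt(np.arange(1, dim)), 1)  # truncated annihilation operator
ad = a.conj().T
N = ad @ a

v, tau, phi = 0.7, 0.5, 0.9
s, theta, z = 0.25, 0.4, 0.3 + 0.2j
zeta = np.tanh(s) * np.exp(1j * theta)      # Eq. (3.zeta)
rho = z - zeta * np.conj(z)                 # Eq. (5.22)

ns = np.arange(dim)
cs = lambda w: np.exp(-abs(w)**2 / 2) * w**ns / np.sqrt(np.exp(gammaln(ns + 1)))
sup = cs(v) + tau * np.exp(1j * phi) * cs(-v)   # superposition (5.1), unnormalized
S = expm(0.5 * s * np.exp(1j * theta) * ad @ ad
         - 0.5 * s * np.exp(-1j * theta) * a @ a)
D = expm(z * ad - np.conj(z) * a)
psi = D @ S @ sup                           # displaced and squeezed superposition
psi = psi / np.linalg.norm(psi)

H = -2 * zeta * N + a @ a + zeta**2 * ad @ ad - 2 * rho * a + 2 * zeta * rho * ad
lam = v**2 * (1 - abs(zeta)**2) + zeta - rho**2
assert np.allclose(H @ psi, lam * psi, atol=1e-7)   # Eq. (5.21)
```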
A simple calculation yields
\begin{equation} \Delta^{2} = \beta_{1}^{2} - 4\beta_{2}\beta_{3} = 0 , \;\;\;\;\;\;\;\; \sigma = \beta_{4} \displaystyle{ \frac{ \Delta-\beta_{1} }{ 2\beta_{2} } } + \beta_{5} = 0 . \label{5.23}
\end{equation} The solution in this case is given by Eq.\ (\ref{lin1}). The analytic function
\begin{equation}
\Lambda(\upsilon,\tau,\varphi,\xi,z;\alpha) = e^{|\alpha|^{2}/2}
\langle\alpha^{\ast}|\upsilon,\tau,\varphi,\xi,z\rangle \label{5.24}
\end{equation} is then given by
\begin{equation} \Lambda(\upsilon,\tau,\varphi,\xi,z;\alpha) = \exp\left( \frac{\zeta}{2} \alpha^{2} +\rho\alpha \right) \left[ C_{+} \exp\left( \frac{\upsilon\alpha}{\cosh s} \right) + C_{-} \exp\left(- \frac{\upsilon\alpha}{\cosh s} \right) \right] . \label{5.25}
\end{equation}
This function is manifestly analytic and normalizable due to the condition $|\zeta| < 1$. By putting $\rho=0$ in Eq.\ (\ref{5.25}), we obtain the function that represents squeezed superpositions
$|\upsilon,\tau,\varphi,\xi\rangle$. The case of zero squeezing is also included in Eq.\ (\ref{5.25}). By putting there $\zeta=0$, we find the function that represents displaced superpositions
$|\upsilon,\tau,\varphi,z\rangle$.
By comparing the function $\Lambda(\upsilon,\tau,\varphi,\xi,z;\alpha)$ of Eq.\ (\ref{5.25}) with the function $\Lambda(\upsilon,\tau,\varphi;\alpha)$ of Eq.\ (\ref{5.4}), we deduce that $C_{-} = \tau e^{i\varphi} C_{+}$, and
$C_{+} = {\cal N} e^{-|\upsilon|^{2}/2}$ for $\zeta = z = 0$. By using the generating function for the Hermite polynomials \cite{Erd}
\begin{equation} e^{2tx-x^{2}} = \sum_{n=0}^{\infty} H_{n}(t) \frac{ x^{n} }{n!} , \label{5.28}
\end{equation} we expand the function $\Lambda(\upsilon,\tau,\varphi,\xi,z;\alpha)$ of Eq.\ (\ref{5.25}) into the power series in $\alpha$ and obtain the Fock-state expansion of the displaced and squeezed superpositions:
\begin{equation}
|\upsilon,\tau,\varphi,\xi,z\rangle = C_{+} \sum_{n=0}^{\infty} \frac{ (-\zeta/2)^{n/2} }{ \sqrt{n!} } \left[ H_{n}\left( \frac{u+\upsilon}{\kappa} \right) + \tau e^{i\varphi} H_{n}\left(
\frac{u-\upsilon}{\kappa} \right) \right] |n\rangle , \label{5.29}
\end{equation} where we have defined
\begin{eqnarray} & & u \equiv \rho \cosh s =
\frac{z-\zeta z^{\ast}}{\sqrt{1-|\zeta|^{2}}} , \label{5.30} \\ & & \kappa \equiv i \sqrt{2\zeta} \, \cosh s = i \sqrt{ \sinh 2s\, e^{i\theta} } . \label{5.31}
\end{eqnarray} By using the summation theorem for Hermite polynomials \cite{Erd}, we readily find the normalization factor:
\begin{equation}
C_{+}^{-2} = \frac{ \exp\left\{ |u|^{2} + |\upsilon|^{2} + \text{Re}\, [\zeta^{\ast}(u^{2} + \upsilon^{2})] \right\} }{
\sqrt{1-|\zeta|^{2}} } \left[ e^{ 2\text{Re}\, y }
+ \tau^{2} e^{ -2\text{Re}\, y } + 2\tau e^{-2|\upsilon|^{2}} \cos\left( \varphi -2\,\text{Im}\, y \right) \right] , \label{5.32}
\end{equation} where
\begin{equation} y \equiv u^{\ast}\upsilon + \zeta^{\ast}u\upsilon = \frac{ \upsilon z^{\ast} }{ \cosh s } . \label{5.33}
\end{equation} All the properties of the displaced and squeezed superpositions
$|\upsilon,\tau,\varphi,\xi,z\rangle$ can be calculated by using the analytic function $\Lambda(\upsilon,\tau,\varphi,\xi,z;\alpha)$ of Eq.\ (\ref{5.25}) or the Fock-state expansion of Eq.\ (\ref{5.29}).
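The expansion of Eq.\ (\ref{5.29}) can also be checked numerically against the closed form of Eq.\ (\ref{5.25}): the partial sums of the Fock series must converge to the analytic function. A Python sketch with arbitrary parameters follows; note that the branch chosen for $\sqrt{-\zeta/2}$ is immaterial, since flipping it flips the sign of the Hermite arguments and the parity of $H_{n}$ compensates.

```python
import math
import numpy as np

# Arbitrary illustrative parameters
s, theta = 0.3, 0.5
z = 0.2 + 0.1j
ups = 0.4 + 0.2j
tau, phi = 0.8, 0.7

zeta = np.tanh(s) * np.exp(1j * theta)
ch = np.cosh(s)
rho = z - zeta * np.conj(z)
u = rho * ch                      # Eq. (5.30)
w = np.sqrt(-zeta / 2)            # any branch; H_n parity cancels the choice
kappa = 2 * ch * w                # equals i sqrt(2 zeta) cosh(s), Eq. (5.31)

def hermite_vals(t, nmax):
    """H_0..H_nmax at complex t via H_{n+1} = 2 t H_n - 2 n H_{n-1}."""
    H = [1.0 + 0j, 2 * t]
    for n in range(1, nmax):
        H.append(2 * t * H[n] - 2 * n * H[n - 1])
    return H

def closed_form(a):
    """Unnormalized Eq. (5.25) with C_+ = 1, C_- = tau e^{i phi}."""
    return np.exp(0.5 * zeta * a**2 + rho * a) * (
        np.exp(ups * a / ch) + tau * np.exp(1j * phi) * np.exp(-ups * a / ch))

def fock_sum(a, nmax=60):
    """Partial sum of the Fock expansion, Eq. (5.29), evaluated at alpha."""
    Hp = hermite_vals((u + ups) / kappa, nmax)
    Hm = hermite_vals((u - ups) / kappa, nmax)
    return sum((w * a)**n / math.factorial(n)
               * (Hp[n] + tau * np.exp(1j * phi) * Hm[n])
               for n in range(nmax + 1))

alpha = 0.6 + 0.3j
print(abs(fock_sum(alpha) - closed_form(alpha)) / abs(closed_form(alpha)))
```

With $|\zeta|<1$ the series converges superexponentially, so sixty terms already reproduce the closed form to machine precision.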
\section{Squeezing and displacement of the SU(1,1) intelligent states}
Recently, Nieto and Truax \cite{NiTr} proposed a generalization of squeezed states for an arbitrary dynamical symmetry group. They found that the generalized squeezed states are eigenstates of a linear combination of the lowering and raising generators of a group. Actually, these states are the IS for the group Hermitian generators. Connections between the concepts of squeezing and intelligence were further investigated by Trifonov \cite{Trif}. It turns out that the IS for two Hermitian generators can provide an arbitrarily strong squeezing in either of these observables \cite{Trif}. In the simplest case of the Heisenberg-Weyl group $H_{3}$, the quadrature IS determined by the eigenvalue equation (\ref{3.38}) are the canonical squeezed states
$|\xi,\upsilon\rangle$ of Stoler and Yuen \cite{SS}. By considering the $K_{1}$-$K_{2}$ IS, one can generalize the concept of squeezing to the SU(1,1) group \cite{NiTr,PrAg,Trif}. On the other hand, the usual squeezed vacuum states $|\xi\rangle$ are the generalized CS of SU(1,1). The algebra-eigenstate method enables one to treat both the generalized CS and the generalized squeezed states (i.e., the IS) for an arbitrary Lie group in a unified way. Since the SU(1,1) Lie group in the two-photon realization (\ref{5.8}) is a subgroup of $H_{6}$, the SU(1,1) IS are a particular case of the two-photon AES. Furthermore, we can consider the states generated by the squeezing transformations $S(\xi)$ and displacement transformations $D(z)$ of the SU(1,1) IS. Such states form a nonstandard set of the generalized two-photon CS.
According to Eq.\ (\ref{2.15}), the SU(1,1) IS are determined by the eigenvalue equation
\begin{equation}
(\eta K_{1} - iK_{2}) |\lambda,\eta\rangle =
\lambda |\lambda,\eta\rangle . \label{6.1}
\end{equation} Here $\lambda$ is a complex eigenvalue and the parameter $\eta$ is complex in the general case of the Robertson intelligence [an equality is achieved in Eq.\ (\ref{2.13})] and real in the particular case of the Heisenberg intelligence [an equality is
achieved in Eq.\ (\ref{2.14})]. By evaluating the expectation values over the state $|\lambda,\eta\rangle$, one gets \cite{Trif} (for $\text{Re}\,\eta \neq 0$)
\begin{equation} \begin{array}{c}
(\Delta K_{1})^{2} = \displaystyle{ \frac{\langle K_{0} \rangle}{ 2\text{Re}\,\eta} } , \;\;\;\;\;\;\;
(\Delta K_{2})^{2} = |\eta|^{2} \displaystyle{ \frac{\langle K_{0} \rangle}{2\text{Re}\,\eta} } , \\ \sigma_{12} = \frac{1}{2} \langle K_{1}K_{2} + K_{2}K_{1} \rangle - \langle K_{1} \rangle \langle K_{2} \rangle = \displaystyle{ \frac{\text{Im}\,\eta}{2\text{Re}\,\eta} } \langle K_{0} \rangle . \end{array} \label{6.2}
\end{equation} In the two-photon realization, the SU(1,1) Hermitian generators $K_{1}$ and $K_{2}$ are given by Eq.\ (\ref{5.10}). Then the eigenvalue equation (\ref{6.1}) can be written in the form
\begin{equation} \left( \frac{\eta+1}{4} a^{2} + \frac{\eta-1}{4} a^{\dagger 2}
\right) |\lambda,\eta\rangle = \lambda |\lambda,\eta\rangle . \label{6.3}
\end{equation}
In the particular case $\eta=1$, the states $|\lambda,\eta\rangle$
are the eigenstates of the operator $a^{2}$, i.e., they reduce to the coherent superpositions $|\upsilon,\tau,\varphi\rangle$ considered in the preceding section. Then, according to Eq.\ (\ref{6.2}), the uncertainties of $K_{1}$ and $K_{2}$ are equal [cf. Eq.\ (\ref{5.13})]. In the more general case of ordinary intelligent states ($\eta$ is real), the states
$|\lambda,\eta\rangle$ are squeezed in $K_{1}$ for $\eta>1$ and squeezed in $K_{2}$ for $\eta<1$.
As usual, we define the entire analytic function
\begin{equation}
\Lambda(\lambda,\eta;\alpha) = e^{|\alpha|^{2}/2} \langle
\alpha^{\ast}|\lambda,\eta\rangle \label{6.4}
\end{equation}
that describes the IS $|\lambda,\eta\rangle$ in the Fock-Bargmann representation. Then Eq.\ (\ref{6.3}) becomes a differential equation of the type (\ref{3.11}). The function $\Lambda(\lambda,\eta;\alpha)$ in this case is given by Eqs.\ (\ref{3.12}) and (\ref{3.16}) with the parameters
\begin{equation} \Delta^{2} = \frac{1}{4} (1-\eta^{2}) , \;\;\;\;\;\;\; \sigma = 0 , \;\;\;\;\;\;\; \mu_{\Delta} = 0 , \;\;\;\;\;\;\; d = \displaystyle{ \frac{1}{4} - \frac{\lambda}{2\Delta} } . \label{6.5}
\end{equation} Therefore we obtain \begin{mathletters} \label{6.6}
\begin{eqnarray} & & \Lambda_{e}(\lambda,\eta;\alpha) = {\cal A}(\alpha)\, {}_{1}\! F_{1} \left( \frac{1}{4}
- \frac{\lambda}{2\Delta} \left| \frac{1}{2} \right| - \Omega_{\eta} \alpha^{2} \right) , \label{6.6a} \\ & & \Lambda_{o}(\lambda,\eta;\alpha) = \alpha {\cal A}(\alpha)\, {}_{1}\! F_{1} \left( \frac{3}{4}
- \frac{\lambda}{2\Delta} \left| \frac{3}{2} \right| - \Omega_{\eta} \alpha^{2} \right) , \label{6.6b}
\end{eqnarray} \end{mathletters} where
\begin{eqnarray} & & {\cal A}(\alpha) \equiv \exp\left( \mbox{$\frac{1}{2}$} \Omega_{\eta}\alpha^{2} \right) , \label{6.7A} \\ & & \Omega_{\eta}^{2} \equiv \frac{\Delta^{2}}{4\beta_{2}^{2}} = \frac{1-\eta}{1+\eta} . \label{6.7}
\end{eqnarray} The solutions $\Lambda_{e}$ and $\Lambda_{o}$ represent the states
belonging to the SU(1,1) irreducible sectors spanned by the Fock states $|n\rangle$ with even and odd values of $n$, respectively. The total solution is given by a superposition of $\Lambda_{e}$ and $\Lambda_{o}$. Note that the double-valuedness of $\Delta$ and $\Omega_{\eta}$ reflects the invariance of the solution under Kummer's transformation (\ref{4.15}). The normalization condition
(\ref{3.6}) requires $|\Omega_{\eta}|<1$, which is satisfied for $\text{Re}\,\eta>0$. This is the only restriction on the values of $\eta$. If we express the Kummer functions ${}_{1}\! F_{1}\left(
d\left|\frac{1}{2}\right|x\right)$ and ${}_{1}\! F_{1}\left(
d+\frac{1}{2}\left|\frac{3}{2}\right|x\right)$ in terms of the parabolic cylinder functions $D_{-2d}(\pm x)$ by means of Eq.\ (\ref{pcf}), we will recover the results of Prakash and Agarwal \cite{PrAg}.
An important feature of the algebra-eigenstate method is the possibility to find relations between various types of states. For the SU(1,1) group, the standard set of the generalized CS has an intersection with the set of the ordinary IS \cite{WodEb,BBA:jpa} and is a subset of the wider set of the generalized IS
\cite{Trif}. Let us demonstrate these relations by using the Fock-Bargmann representation of the two-photon AES. The squeezed vacuum states $|\xi\rangle$ which are the standard CS of SU(1,1) are represented by the function $\Lambda(\xi,\upsilon;\alpha)$ of Eq. (\ref{3.37}) with $\upsilon=0$, i.e.,
\begin{equation}
\Lambda(\xi;\alpha) = (1-|\zeta|^{2})^{1/4} \exp\left( \mbox{$\frac{1}{2}$} \zeta\alpha^{2} \right) . \label{6.8}
\end{equation} On the other hand, when
\begin{equation} \frac{1}{4} - \frac{\lambda}{2\Delta} = \frac{1}{2} , \label{6.9}
\end{equation}
the formula ${}_{1}\! F_{1}(d|d|x)=e^{x}$ enables us to write Eq.\ (\ref{6.6a}) in the (normalized) form
\begin{equation}
\Lambda_{e}(\lambda,\eta;\alpha) = (1-|\Omega_{\eta}|^{2})^{1/4} \exp\left( -\mbox{$\frac{1}{2}$} \Omega_{\eta}\alpha^{2} \right) . \label{6.10}
\end{equation}
Therefore, the intelligent state $|\lambda,\eta\rangle$ is the standard coherent state $|\xi\rangle$ under the condition
\begin{equation} \lambda = -\Delta/2 = \pm \frac{1}{4} \sqrt{1-\eta^{2}} . \label{6.11}
\end{equation} The corresponding coherent-state amplitude is
\begin{equation} \zeta = -\Omega_{\eta} = \pm \sqrt{ \frac{1-\eta}{1+\eta} } . \label{6.12}
\end{equation}
The condition $|\zeta|<1$ is guaranteed by virtue of the
normalization requirement $|\Omega_{\eta}|<1$ ($\text{Re}\,\eta>0$). When $\eta$ is complex (the case of the generalized IS), $\zeta$ can acquire any value in the unit disk. It means that the standard CS form a subset of the generalized IS. However, when $\eta$ is real (the case of the ordinary IS), $\zeta$ is real for $\eta<1$ and pure imaginary for $\eta>1$. It means that the standard set of the generalized CS has an intersection with the set of the ordinary IS. The standard CS are the ordinary IS squeezed in $K_{2}$ for real $\zeta$ and squeezed in $K_{1}$ for pure imaginary $\zeta$.
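The relations (\ref{6.11}) and (\ref{6.12}) admit a direct numerical check: substituting $\Lambda(\alpha)=\exp(\frac{1}{2}\zeta\alpha^{2})$ into the Fock-Bargmann image of Eq.\ (\ref{6.3}) (where $a \to d/d\alpha$ and $a^{\dagger} \to \alpha$), the coefficient of $\alpha^{2}$ must vanish and the constant term gives the eigenvalue. A minimal Python sketch with an arbitrary $\zeta$ in the unit disk:

```python
import numpy as np

zeta = 0.35 * np.exp(0.8j)           # arbitrary point in the unit disk
eta = (1 - zeta**2) / (1 + zeta**2)  # makes the alpha^2 terms cancel below
lam = zeta * (eta + 1) / 4

# With Lambda' = zeta*alpha*Lambda and Lambda'' = (zeta + zeta^2 alpha^2)*Lambda,
# LHS/Lambda = (eta+1)/4 (zeta + zeta^2 alpha^2) + (eta-1)/4 alpha^2.
coeff_alpha2 = (eta + 1) / 4 * zeta**2 + (eta - 1) / 4
coeff_const = (eta + 1) / 4 * zeta

print(abs(coeff_alpha2))                      # vanishes: eigenfunction of Eq. (6.3)
print(abs(coeff_const - lam))                 # constant term is the eigenvalue
print(abs(lam**2 - (1 - eta**2) / 16))        # consistent with Eq. (6.11)
print(abs(zeta**2 - (1 - eta) / (1 + eta)))   # consistent with Eq. (6.12)
```

All four residuals vanish to machine precision, confirming that the squeezed vacuum $|\xi\rangle$ is an SU(1,1) intelligent state with the eigenvalue data of Eqs.\ (\ref{6.11})-(\ref{6.12}).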
Now, let us consider the action of the squeezing operator $S(\xi = se^{i\theta})$. By applying $S(\xi)$ to Eq.\ (\ref{6.3}), we obtain the eigenvalue equation
\begin{equation} \left( \frac{\eta+1}{4} a_{\xi}^{2} + \frac{\eta-1}{4}
a_{\xi}^{\dagger 2} \right) |\lambda,\eta,\xi\rangle =
\lambda |\lambda,\eta,\xi\rangle \label{6.13}
\end{equation} satisfied by the squeezed IS
\begin{equation}
|\lambda,\eta,\xi\rangle = S(\xi) |\lambda,\eta\rangle . \label{6.14}
\end{equation} The operator $a_{\xi}$ is given by Eq.\ (\ref{5.16}) and $a_{\xi}^{\dagger}$ is its Hermitian conjugate. Equation (\ref{6.13}) can be written in the standard form:
\begin{equation} (\beta_{1}N + \beta_{2}a^{2} + \beta_{3}a^{\dagger 2})
|\lambda,\eta,\xi\rangle = \lambda_{\xi} |\lambda,\eta,\xi\rangle , \label{6.15}
\end{equation} where
\begin{equation} \begin{array}{c}
\beta_{1} = -2\zeta(\eta+1)-2\zeta^{\ast}(\eta-1) , \;\;\;\;\;\;\; \beta_{2} = (\eta+1)+\zeta^{\ast 2}(\eta-1) , \\
\beta_{3} = \zeta^{2}(\eta+1)+(\eta-1) , \end{array} \label{6.16}
\end{equation}\vspace*{-1.0cm}
\begin{equation}
\lambda_{\xi} = 4(1-|\zeta|^{2})\lambda + \zeta(\eta+1) + \zeta^{\ast}(\eta-1) , \label{6.17}
\end{equation} and $\zeta=\tanh s\, e^{i\theta}$ is defined by Eq.\ (\ref{3.zeta}). The analytic function $\Lambda(\lambda,\eta,\xi;\alpha)$ representing the squeezed IS is given by Eqs.\ (\ref{3.12}) and (\ref{3.16}) with the parameters
\begin{equation}
\Delta^{2} = 4(1-\eta^{2})(1-|\zeta|^{2})^{2} , \;\;\;\;\;\;\;
\sigma = 0 , \;\;\;\;\;\;\; \mu_{\Delta} = 0 , \;\;\;\;\;\;\; d = \frac{1}{4} - 2(1-|\zeta|^{2})\lambda/\Delta . \label{6.18}
\end{equation} The unitary squeezing operator $S(\xi)$ is an element of the SU(1,1) group and, therefore, Eq.\ (\ref{6.15}) does not include the first-order operators $a$ and $a^{\dagger}$ which represent one-photon processes. Then the total solution $\Lambda(\lambda,\eta,\xi;\alpha)$ can be written as a superposition of two solutions $\Lambda_{e}$ and $\Lambda_{o}$ which represent the two irreducible sectors of SU(1,1).
At the next step, we apply the displacement operator $D(z)$. The resulting eigenvalue equation is
\begin{equation} \left( \frac{\eta+1}{4} a_{\xi,z}^{2} + \frac{\eta-1}{4}
a_{\xi,z}^{\dagger 2} \right) |\lambda,\eta,\xi,z\rangle =
\lambda |\lambda,\eta,\xi,z\rangle , \label{6.19}
\end{equation} where
\begin{equation}
|\lambda,\eta,\xi,z\rangle = D(z) S(\xi) |\lambda,\eta\rangle \label{6.20}
\end{equation} are the displaced and squeezed IS. The operator $a_{\xi,z}$ is given by Eq.\ (\ref{5.20}) and $a_{\xi,z}^{\dagger}$ is its Hermitian conjugate. Equation (\ref{6.19}) can be written in the standard form:
\begin{equation} (\beta_{1}N + \beta_{2}a^{2} + \beta_{3}a^{\dagger 2} + \beta_{4}a
+ \beta_{5}a^{\dagger}) |\lambda,\eta,\xi,z\rangle
= \lambda_{\xi,z} |\lambda,\eta,\xi,z\rangle , \label{6.21}
\end{equation} where $\beta_{1}$, $\beta_{2}$ and $\beta_{3}$ remain as given by Eq.\ (\ref{6.16}), and
\begin{equation} \beta_{4} = -2\rho(\eta+1)+2\zeta^{\ast}\rho^{\ast}(\eta-1) , \;\;\;\;\;\;\;\; \beta_{5} = 2\zeta\rho(\eta+1)-2\rho^{\ast}(\eta-1) , \label{6.22}
\end{equation}\vspace*{-1.2cm}
\begin{equation}
\lambda_{\xi,z} = 4(1-|\zeta|^{2})\lambda + (\zeta-\rho^{2})(\eta+1) + (\zeta^{\ast}-\rho^{\ast 2})(\eta-1) . \label{6.23}
\end{equation} Here $\rho = z-\zeta z^{\ast}$ is defined by Eq.\ (\ref{5.22}). The analytic function $\Lambda(\lambda,\eta,\xi,z;\alpha)$ representing the displaced and squeezed IS is given by general equations (\ref{3.12}) and (\ref{3.16}). The corresponding parameters are
\begin{equation}
\Delta^{2} = 4(1-\eta^{2})(1-|\zeta|^{2})^{2} , \label{6.24}
\end{equation}\vspace*{-1.0cm}
\begin{equation}
\sigma = \frac{ \frac{1}{2}z^{\ast}\Delta^{2} +[-\rho(\eta+1)+\zeta^{\ast}\rho^{\ast}(\eta-1)]\Delta }{ (\eta+1) + \zeta^{\ast 2}(\eta-1) } , \label{6.25}
\end{equation}\vspace*{-1.0cm}
\begin{equation} \mu_{\Delta} = \rho^{\ast}+\zeta^{\ast}\rho =
z^{\ast}(1-|\zeta|^{2}) , \label{6.26}
\end{equation} and $d$ can be found from the general expression (\ref{3.18}).
The analytic function $\Lambda(\lambda,\eta,z;\alpha)$ that represents the displaced IS
\begin{equation}
|\lambda,\eta,z\rangle = D(z) |\lambda,\eta\rangle \label{6.27}
\end{equation} can be found by taking $\zeta=0$ in the expressions for the displaced and squeezed IS. Then we obtain
\begin{equation} \begin{array}{c} \beta_{1} = 0 , \;\;\;\;\;\;\; \beta_{2} = (\eta+1) , \;\;\;\;\;\;\; \beta_{3} = (\eta-1) , \\ \beta_{4} = -2z(\eta+1), \;\;\;\;\;\;\; \beta_{5} = -2z^{\ast}(\eta-1) , \end{array} \label{6.28}
\end{equation}\vspace*{-0.8cm}
\begin{equation} \lambda_{z} = 4\lambda - z^{2}(\eta+1)-z^{\ast 2}(\eta-1) . \label{6.29}
\end{equation} By using these results, we find the usual set of the parameters:
\begin{equation} \begin{array}{c} \Delta^{2} = 4(1-\eta^{2}) , \;\;\;\;\;\;\; \sigma = 2z^{\ast}(1-\eta) - z\Delta , \;\;\;\;\;\;\; \mu_{\Delta} = z^{\ast} , \\ d = \frac{1}{4} + [z^{2}(\eta+1)-2\lambda]/\Delta . \end{array} \label{6.30}
\end{equation}
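As a consistency check, the displaced-IS data of Eqs.\ (\ref{6.28})-(\ref{6.30}) should be the $\zeta=0$ specialization of Eqs.\ (\ref{6.16}), (\ref{6.22}) and (\ref{6.23}), with $\sigma$ reproduced by the generic formula $\sigma=\beta_{4}(\Delta-\beta_{1})/(2\beta_{2})+\beta_{5}$ of Eq.\ (\ref{5.23}). A short Python sketch (arbitrary parameter values):

```python
import numpy as np

eta = 0.6          # ordinary (real-eta) IS, for illustration
z = 0.3 + 0.2j
lam = 0.25 - 0.1j

# General displaced-and-squeezed coefficients, Eqs. (6.16), (6.22), (6.23),
# evaluated at zeta = 0 (so rho = z):
zeta, rho = 0.0, z
b1 = -2*zeta*(eta+1) - 2*np.conj(zeta)*(eta-1)
b2 = (eta+1) + np.conj(zeta)**2*(eta-1)
b3 = zeta**2*(eta+1) + (eta-1)
b4 = -2*rho*(eta+1) + 2*np.conj(zeta)*np.conj(rho)*(eta-1)
b5 = 2*zeta*rho*(eta+1) - 2*np.conj(rho)*(eta-1)
lam_z = (4*(1-abs(zeta)**2)*lam + (zeta-rho**2)*(eta+1)
         + (np.conj(zeta)-np.conj(rho)**2)*(eta-1))

# Compare with Eqs. (6.28)-(6.29):
print(abs(b4 - (-2*z*(eta+1))), abs(b5 - (-2*np.conj(z)*(eta-1))))
print(abs(lam_z - (4*lam - z**2*(eta+1) - np.conj(z)**2*(eta-1))))

# sigma of Eq. (6.30) from the generic formula of Eq. (5.23):
Delta = 2*np.sqrt(1-eta**2+0j)     # one branch of Delta^2 = 4(1-eta^2)
sigma = b4*(Delta-b1)/(2*b2) + b5
print(abs(sigma - (2*np.conj(z)*(1-eta) - z*Delta)))
```

All residuals vanish identically (up to rounding), since the specialization is purely algebraic; either branch of $\Delta$ may be used as long as it is used consistently on both sides.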
Let us finish this discussion by a brief review of possibilities for the generation of the SU(1,1) IS. Gerry and Hach \cite{GeHa} demonstrated a possibility to generate coherent superposition states for the long-time evolution of the competition between two-photon absorption and two-photon parametric processes for a special initial state. This method can also be applied to the production of the SU(1,1) IS more general than the coherent superposition states. Prakash and Agarwal \cite{PrAg} proposed to use the degenerate down-conversion of coherent light in the presence of a broadband squeezed field in the cavity (see also Ref. \cite{GeGr} where the same idea was applied to the generation of the two-mode SU(1,1) IS). This method is based on an earlier proposal \cite{AgPu} introduced in the context of the SU(2) group.
\section{Conclusions}
We have shown that almost all photon states known in the context of the two-photon algebra can be considered as the AES. Therefore, the algebra-eigenstate formalism unifies the description of various types of states within a common frame. This helps in understanding the relations between different kinds of states and the physical basis of their mathematical properties. The theory of the AES is in general applicable to an arbitrary Lie group and will be useful for a unified description of generalized coherence and squeezing in a wide class of quantum systems. In the present work we have concentrated on the basic quantum optical phenomena, such as the usual displacement and squeezing of the quantized single-mode light field. The mathematical formulation of these physical processes is based on the two-photon group $H_{6}$. The corresponding two-photon AES form an extremely wide set that includes as particular cases various types of photon states whose properties have recently drawn considerable attention. The standard CS of Glauber, two-photon CS (canonical squeezed states) of Stoler and Yuen, displaced and squeezed Fock states, displaced and squeezed coherent superpositions, and displaced and squeezed SU(1,1) IS are incorporated into the set of the two-photon AES. The Fock-Bargmann analytic representation of all the particular subsets is obtained from the general differential equation (\ref{3.11}) that is common for all the kinds of the AES. Then the powerful theory of analytic functions can be used for investigating properties of various types of states and relations between them.
\acknowledgments
The author thanks Profs. A. Mann, M. S. Marinov, and Y. Ben-Aryeh for interesting and stimulating discussions. The financial help from the Technion is gratefully acknowledged.
\begin{references}
\bibitem{KlSk} {\sc J. R. Klauder and B. S. Skagerstam}, {``Coherent states,''} World Scientific, Singapore, 1985.
\bibitem{Per86} {\sc A. M. Perelomov}, {``Generalized Coherent States and Their Applications,''} Springer, Berlin, 1986.
\bibitem{Gil:rev} {\sc W.-M. Zhang, D. H. Feng, and R. Gilmore}, {\em Rev. Mod. Phys.} {\bf 62} (1990), 867.
\bibitem{Per} {\sc A. M. Perelomov}, {\em Commun. Math. Phys.} {\bf 26} (1972), 222; {\em Sov. Phys. Usp.} {\bf 20} (1977), 703.
\bibitem{Gil} {\sc R. Gilmore}, {\em Ann. Phys. (N.Y.)} {\bf 74} (1972), 391; {\em Rev. Mex. de Fisica} {\bf 23} (1974), 142; {\em J.~Math. Phys.} {\bf 15} (1974), 2090.
\bibitem{BG} {\sc A. O. Barut and L. Girardello}, {\em Commun. Math. Phys.} {\bf 21} (1971), 41.
\bibitem{EOCS} {\sc V. V. Dodonov, I. A. Malkin, and V. I. Man'ko}, {\em Physica} {\bf 72} (1974), 597; {\sc I. A. Malkin and V. I. Man'ko}, ``Dynamical Symmetries and Coherent States of Quantum Systems,'' Nauka, Moscow, 1979.
\bibitem{Agar88} {\sc G. S. Agarwal}, {\em J. Opt. Soc. Am. B} {\bf 5} (1988), 1940.
\bibitem{Buz90} {\sc V. Bu\v{z}ek}, {\em J. Mod. Opt.} {\bf 37} (1990), 303.
\bibitem{BBA:qo} {\sc C. Brif and Y. Ben-Aryeh}, {\em Quantum Opt.} {\bf 6} (1994), 391.
\bibitem{I:qso} {\sc C. Brif}, {\em Quant. Semicl. Opt.} {\bf 7} (1995), 803.
\bibitem{Schro} {\sc E. Schr\"{o}dinger}, {\em Naturwissenschaften} {\bf 14} (1926), 664.
\bibitem{Arag} {\sc C. Aragone, G. Guerri, S. Salamo, and J. L. Tani}, {\em J. Phys. A: Math. Gen.} {\bf 7} (1974), L149; {\sc C. Aragone, E. Chalbaud and S. Salamo}, {\em J.~Math. Phys.} {\bf 17} (1976), 1963.
\bibitem{RBA} {\sc S. Ruschin and Y. Ben-Aryeh}, {\em Phys. Lett. A} {\bf 58} (1976), 207.
\bibitem{NiTr} {\sc M. M. Nieto and D. R. Truax}, {\em Phys. Rev. Lett.} {\bf 71} (1993), 2843.
\bibitem{BHY_YH} {\sc J. A. Bergou, M. Hillery, and D. Yu}, {\em Phys. Rev. A} {\bf 43} (1991), 515; {\sc D. Yu and M. Hillery}, {\em Quantum Opt.} {\bf 6} (1994), 37.
\bibitem{PrAg} {\sc G. S. Prakash and G. S. Agarwal}, {\em Phys. Rev. A} {\bf 50} (1994), 4258.
\bibitem{Trif} {\sc D. A. Trifonov}, {\em J. Math. Phys.} {\bf 35} (1994), 2297.
\bibitem{Puri} {\sc R. R. Puri}, {\em Phys. Rev. A} {\bf 49} (1994), 2178.
\bibitem{BBA:jpa} {\sc C. Brif and Y. Ben-Aryeh}, {\em J. Phys. A: Math. Gen.} {\bf 27} (1994), 8185.
\bibitem{GeGr} {\sc C. C. Gerry and R. Grobe}, {\em Phys. Rev. A} {\bf 51} (1995), 4123.
\bibitem{Robert} {\sc H. Robertson}, {\em Phys. Rev.} {\bf 35} (1930), 667.
\bibitem{Jackiw} {\sc R. Jackiw}, {\em J. Math. Phys.} {\bf 9} (1968), 339.
\bibitem{Weyl} {\sc H. Weyl}, {``The Theory of Groups and Quantum Mechanics,''} Dover, New York, 1950.
\bibitem{Gla} {\sc R. J. Glauber}, {\em Phys. Rev.} {\bf 130} (1963), 2529; {\bf 131} (1963), 2766.
\bibitem{SS} {\sc D. Stoler}, {\em Phys. Rev. D} {\bf 1} (1970), 3217; {\bf 4} (1971), 2308; {\sc H. P. Yuen}, {\em Phys. Rev. A} {\bf 13} (1976), 2226; {\sc J. N. Hollenhorst}, {\em Phys. Rev. D} {\bf 19} (1979), 1669; {\sc D. F. Walls}, {\em Nature} {\bf 306} (1983), 141.
\bibitem{WodEb} {\sc K. Wodkiewicz and J. H. Eberly}, {\em J. Opt. Soc. Am. B} {\bf 2} (1985), 458.
\bibitem{Ger85_88} {\sc C. C. Gerry}, {\em Phys. Rev. A} {\bf 31} (1985), 2721; {\bf 37} (1988), 2683.
\bibitem{Hil87_89} {\sc M. Hillery}, {\em Phys. Rev. A} {\bf 36} (1987), 3796; {\bf 40} (1989), 3147.
\bibitem{FB} {\sc V. A. Fock}, {\em Z. Phys.} {\bf 49} (1928), 339; {\sc V. Bargmann}, {\em Commun. Pure Appl. Math.} {\bf 14} (1961), 187.
\bibitem{Ber} {\sc F. A. Berezin}, {\em Sov. Math. Izv.} {\bf 38} (1974), 1116; {\bf 39} (1975), 363; {\em Commun. Math. Phys.} {\bf 40} (1975), 153.
\bibitem{HiMl} {\sc M. Hillery and L. Mlodinow}, {\em Phys. Rev. A} {\bf 48} (1993), 1548; {\sc C. Brif and Y. Ben-Aryeh}, {\em Quant. Semicl. Opt.} {\bf 8} (1996), 1.
\bibitem{SS1} {\sc C. M. Caves and B. L. Schumaker}, {\em Phys. Rev. A} {\bf 31} (1985), 3068; {\sc B. L. Schumaker and C. M. Caves}, {\em ibid.} {\bf 31} (1985), 3093; {\sc B. L. Schumaker}, {\em Phys. Rep.} {\bf 135} (1986), 317.
\bibitem{SSe1} {\sc R. E. Slusher, L. W. Hollberg, B. Yurke, J. C. Mertz, and J. F. Valley}, {\em Phys. Rev. Lett.} {\bf 55} (1985), 2409; {\sc R. M. Shelby, M. D. Levenson, S. H. Perlmutter, R. G. DeVoe, and D. F. Walls}, {\em ibid.} {\bf 57} (1986), 691; {\sc L.-A. Wu, H. J. Kimble, J. L. Hall, and H. Wu}, {\em ibid.} {\bf 57} (1986), 2520; {\sc M. W. Maeda, P. Kumar, and J. H. Shapiro}, {\em Opt. Lett.} {\bf 12} (1987), 161.
\bibitem{SSe2} {\sc B. L. Schumaker, S. H. Perlmutter, R. M. Shelby, and M. D. Levenson}, {\em Phys. Rev. Lett.} {\bf 58} (1987), 357; {\sc M. G. Raizen, L. A. Orosco, M. Xiao, T. L. Boyd, and H. J. Kimble}, {\em ibid.} {\bf 59} (1987), 198; {\sc A. Heidmann, R. J. Horowicz, S. Reynaud, E. Giacobino, C. Fabre, and G. Camy}, {\em ibid.} {\bf 59} (1987), 2555.
\bibitem{SSe3} {\sc P. Kumar, O. Aytur, and J. Huang}, {\em Phys. Rev. Lett.} {\bf 64} (1990), 1015; {\sc A. Sizmann, R. J. Horowicz, E. Wagner, and G. Leuchs}, {\em Opt. Commun.} {\bf 80} (1990), 138; {\sc M. Rosenbluh and R. M. Shelby}, {\em Phys. Rev. Lett.} {\bf 66} (1991), 153; {\sc K. Bergman and H. A. Haus}, {\em Opt. Lett.} {\bf 16} (1991), 663; {\sc E. S. Polzik, R. C. Carri, and H. J. Kimble}, {\em Phys. Rev. Lett.} {\bf 68} (1992), 3020; {\sc Z. Y. Ou, S. F. Pereira, H. J. Kimble, and K. C. Peng}, {\em ibid.} {\bf 68} (1992), 3663.
\bibitem{SS2} {\sc R. Loudon and P. L. Knight}, {\em J. Mod. Opt.} {\bf 34} (1987), 709; {\sc M. C. Teich and B. E. A. Saleh}, {\em Quantum Opt.} {\bf 1} (1990), 153; {\sc S. Reynaud, A. Heidmann, E. Giacobino, and C. Fabre}, {\em in} {``Progress in Optics,''} (E. Wolf, Ed.), Vol. 30, North-Holland, Amsterdam, 1992; {\sc C. Fabre}, {\em Phys. Rep.} {\bf 219} (1992), 215; {\sc H. J. Kimble}, {\em ibid.} {\bf 227} (1992), 215.
\bibitem{Erd} ``Higher Transcendental Functions, Bateman project,'' (A. Erd\'{e}lyi, Ed.), McGraw-Hill, New York, 1953.
\bibitem{AS} ``Handbook of Mathematical Functions, Natl. Bur. Stand. Appl. Math. Ser. No. 55,'' (M. Abramowitz and I. A. Stegun, Eds.), U.S. GPO, Washington, DC, 1964.
\bibitem{DFS} {\sc F. A. M. de Oliveira, M. S. Kim, P. L. Knight, and V. Bu\v{z}ek}, {\em Phys. Rev. A} {\bf 41} (1990), 2645.
\bibitem{corst} {\sc V. V. Dodonov, E. V. Kurmyshev, and V. I. Man'ko}, {\em Phys. Lett. A} {\bf 79} (1980), 150.
\bibitem{FSP:mm} {\sc P. Filipowicz, J. Javanainen, and P. Meystre}, {\em J. Opt. Soc. Am. B} {\bf 3} (1986), 906; {\sc T. Krause, M. O. Scully, T. Walther, and H. Walther}, {\em Phys. Rev. A} {\bf 39} (1989), 1915; {\sc F. W. Cummings and A. K. Rajagopal}, {\em ibid.} {\bf 39} (1989), 3414; {\sc J. R. Kuklinski}, {\em Phys. Rev. Lett.} {\bf 64} (1990), 2507; {\sc M. Brune, S. Haroche, V. Lefevre, J. M. Raimond, and N. Zagury}, {\em ibid.} {\bf 65} (1990), 976; {\sc N. Nayak, B. V. Thompson, and R. K. Bullough}, {\em in} ``Proceedings of the European Conference on Optics, Optical Systems and Applications,'' (M. Bertolotti and E. R. Pike, Eds.), p. 81, IOP, Bristol, 1991; {\sc D. L. Lin, C.-Q. Cao, and Z. D. Liu}, {\em Chin. J. Phys.} {\bf 30} (1992), 819; {\sc R. Schak, A. Breitenbach, and A. Schenzle}, {\em Phys. Rev. A} {\bf 45} (1992), 3260; {\sc D. Ivanov and T. A. B. Kennedy}, {\em ibid.} {\bf 47} (1993), 566; {\sc V. Bu\v{z}ek}, {\em Acta Phys. Slov.} {\bf 44} (1994), 1.
\bibitem{FSP_SCP} {\sc M. Brune, S. Haroche, J. M. Raimond, L. Davidovich, and N. Zagury}, {\em Phys. Rev. A} {\bf 45} (1992), 5193; {\sc B. M. Garraway, B. Sherman, H. Moya-Cessa, P. L. Knight, and G. Kurizki}, {\em ibid.} {\bf 49} (1994), 535.
\bibitem{FSP:pa} {\sc C. K. Hong and L. Mandel}, {\em Phys. Rev. Lett.} {\bf 56} (1986), 58; {\sc K. Watanabe and Y. Yamamoto}, {\em Phys. Rev. A} {\bf 38} (1988), 3556; {\sc C. A. Holmes, G. J. Milburn, and D. F. Walls}, {\em ibid.} {\bf 39} (1989), 2493.
\bibitem{FSP:qj} {\sc J. I. Cirac, R. Blatt, A. S. Parkins, and P. Zoller}, {\em Phys. Rev. Lett.} {\bf 70} (1993), 762.
\bibitem{FSP:rl} {\sc T. Pellizzari and H. Ritsch}, {\em Phys. Rev. Lett.} {\bf 72} (1994), 3973.
\bibitem{FSP:sai} {\sc P. Domokos, J. Janszky, and P. Adam}, {\em Phys. Rev. A} {\bf 50} (1994), 3340.
\bibitem{KOK:jmo} {\sc M. S. Kim, F. A. M. de Oliveira, and P. L. Knight}, {\em J. Mod. Opt.} {\bf 37} (1990), 659.
\bibitem{DFS:jcm} {\sc V. Bu\v{z}ek, I. Jex, and M. Brisudova}, {\em Int. J. Mod. Phys. B} {\bf 5} (1991), 797.
\bibitem{DFS:gen} {\sc A. W\"{u}nsche}, {\em Quantum Opt.} {\bf 3} (1991), 359; {\sc M. Brisudova}, {\em J. Mod. Opt.} {\bf 38} (1991), 2505; {\sc H.-Y. Fan and H.-G. Weng}, {\em Quantum Opt.} {\bf 4} (1992), 265.
\bibitem{DFS:pp} {\sc R. Tana\'{s}, B. K. Murzakhmetov, Ts. Gantsog, and A. V. Chizhov}, {\em Quantum Opt.} {\bf 4} (1992), 1; {\sc H. Zheng-Feng}, {\em J. Mod. Opt.} {\bf 39} (1992), 1381; {\sc I. Menda\v{s} and D. B. Popovi\'{c}}, {\em J. Phys. A: Math. Gen.} {\bf 26} (1993), 3313; {\em Phys. Rev. A} {\bf 50} (1994), 947.
\bibitem{SFS} {\sc M. S. Kim, F. A. M. de Oliveira, and P. L. Knight}, {\em Phys. Rev. A} {\bf 40} (1989), 2494; {\em Opt. Commun.} {\bf 72} (1989), 99; {\sc A. S. Shumovsky}, {\em Sov. Phys. Dokl.} {\bf 36} (1991), 135; {\sc P. Marian}, {\em Phys. Rev. A} {\bf 44} (1991), 3325.
\bibitem{SFS:pp} {\sc R. Nath and P. Kumar}, {\em J. Mod. Opt.} {\bf 38} (1991), 1665; {\sc H.-T. Tu and C.-D. Gong}, {\em ibid.} {\bf 40} (1993), 57; {\sc A. V. Chizhov, Ts. Gantsog, and B. K. Murzakhmetov}, {\em Quant. Opt.} {\bf 5} (1993), 85.
\bibitem{Kral} {\sc P. Kr\'{a}l}, {\em J. Mod. Opt.} {\bf 37} (1990), 889; {\em Phys. Rev. A} {\bf 42} (1990), 4177.
\bibitem{Lo} {\sc C. F. Lo}, {\em Phys. Rev. A} {\bf 43} (1991), 404; {\em Quantum Opt.} {\bf 3} (1991), 333.
\bibitem{LBK} {\sc W. K. Lai, V. Bu\v{z}ek, and P. L. Knight}, {\em Phys. Rev. A} {\bf 43} (1991), 6323.
\bibitem{Sch_cat} {\sc E. Schr\"{o}dinger}, {\em Naturwissenschaften} {\bf 23} (1935), 844.
\bibitem{YuSt86} {\sc B. Yurke and D. Stoler}, {\em Phys. Rev. Lett.} {\bf 57} (1986), 13.
\bibitem{XiGu} {\sc Y. Xia and G. Guo}, {\em Phys. Lett. A} {\bf 136} (1989), 281.
\bibitem{BVBK} {\sc V. Bu\v{z}ek, A. Vidiela-Barranco, and P. L. Knight}, {\em Phys. Rev. A} {\bf 45} (1992), 6570.
\bibitem{KiBu} {\sc M. S. Kim and V. Bu\v{z}ek}, {\em J. Mod. Opt.} {\bf 39} (1992), 1609; {\em Phys. Rev. A} {\bf 46} (1992), 4239.
\bibitem{HaGe:qo} {\sc E. E. Hach III and C. C. Gerry}, {\em Quantum. Opt.} {\bf 5} (1993), 327.
\bibitem{Bu:psa} {\sc M. S. Kim, K. S. Lee and V. Bu\v{z}ek}, {\em Phys. Rev. A} {\bf 47} (1993), 4302; {\sc V. Bu\v{z}ek, Ts. Gantsog, and M. S. Kim}, {\em Phys. Scripta} {\bf T48} (1993), 131; {\sc V. Bu\v{z}ek, M. S. Kim, and Ts. Gantsog}, {\em Phys. Rev. A} {\bf 48} (1993), 3394.
\bibitem{HaGe:jmo} {\sc E. E. Hach III and C. C. Gerry}, {\em J. Mod. Opt.} {\bf 40} (1993), 2351.
\bibitem{XWHM} {\sc Z. Z. Xin, D. B. Wang, M. Hirayama, and K. Matumoto}, {\em Phys. Rev. A} {\bf 50} (1994), 2865.
\bibitem{DomJa} {\sc P. Domokos and J. Janszky}, {\em Phys. Lett. A} {\bf 186} (1994), 289.
\bibitem{MiHo} {\sc G. J. Milburn and C. A. Holmes}, {\em Phys. Rev. Lett.} {\bf 56} (1986), 2237; {\sc G. J. Milburn}, {\em Phys. Rev. A} {\bf 33} (1986), 674.
\bibitem{MeTo} {\sc A. Mecozzi and P. Tombesi}, {\em Phys. Rev. Lett.} {\bf 58} (1987), 1055; {\sc P. Tombesi and A. Mecozzi}, {\em J. Opt. Soc. Am. B} {\bf 4} (1987), 1700.
\bibitem{WoCar} {\sc M. Wolinsky and H. J. Carmichael}, {\em Phys. Rev. Lett.} {\bf 60} (1988), 1836; {\sc L. Krippner, W. J. Munro, and M. D. Reid}, {\em Phys. Rev. A} {\bf 50} (1994), 4330.
\bibitem{GeHa} {\sc C. C. Gerry and E. E. Hach III}, {\em Phys. Lett A} {\bf 174} (1993), 185.
\bibitem{GaKn} {\sc B. M. Garraway and P. L. Knight}, {\em Phys. Rev. A} {\bf 49} (1994), 1266.
\bibitem{LyGN} {\sc M. L. Lyra and A. S. Gouveia-Neto}, {\em J. Mod. Opt.} {\bf 41} (1994), 1361.
\bibitem{GeaB} {\sc J. Gea-Banacloche}, {\em Phys. Rev. Lett.} {\bf 65} (1990), 3385; {\em Phys. Rev. A} {\bf 44} (1991), 5913; {\sc V. Bu\v{z}ek, H. Moya-Cessa, P. L. Knight, and S. J. D. Phoenix}, {\em ibid.} {\bf 45} (1992), 8190; {\sc P. F. Gora and C. Jedrzejek}, {\em ibid.} {\bf 48} (1993), 3291; {\sc K. Zaheer}, {\em J. Mod. Opt.} {\bf 41} (1994), 151.
\bibitem{SKK} {\sc B. Sherman, G. Kurizki, and A. Kadyshevitch}, {\em Phys. Rev. Lett.} {\bf 69} (1992), 1927.
\bibitem{Mey} {\sc J. J. Slosser, P. Meystre, and E. M. Wright}, {\em Opt. Lett.} {\bf 15} (1990), 233; {\sc J. J. Slosser and P. Meystre}, {\em Phys. Rev. A} {\bf 41} (1990), 3867; {\sc M. Wilkens and P. Meystre}, {\em ibid.} {\bf 43} (1991), 3832; {\sc P. Meystre, J. J. Slosser, and M. Wilkens}, {\em ibid.} {\bf 43} (1991), 4959.
\bibitem{DMBRH} {\sc L. Davidovich, A. Maali, M. Brune, J. M. Raimond, S. Haroche}, {\em Phys. Rev. Lett.} {\bf 71} (1993), 2360.
\bibitem{BGK} {\sc V. Bu\v{z}ek, Ts. Gantsog, and M. S. Kim}, {\em J. Mod. Opt.} {\bf 41} (1994), 1625.
\bibitem{qndm} {\sc B. Yurke}, {\em J. Opt. Soc. Am. B} {\bf 2} (1986), 732; {\sc R. M. Shelby and M. D. Levenson}, {\em Opt. Commun.} {\bf 64} (1987), 553; {\sc A. La Porta, R. E. Slusher, and B. Yurke}, {\em Phys. Rev. Lett.} {\bf 62} (1989), 26; {\sc S. Song, C. M. Caves, and B. Yurke}, {\em Phys. Rev. A} {\bf 41} (1990), 5261; {\sc B. Yurke, W. Schleich, and D. F. Walls}, {\em ibid.} {\bf 42} (1990), 1703.
\bibitem{AM_DMN} {\sc N. A. Ansari and V. I. Man'ko}, {\em Phys. Rev. A} {\bf 50} (1994), 1942; {\sc V. V. Dodonov, V. I. Man'ko, and D. E. Nikonov}, {\em ibid.} {\bf 51} (1995), 3328.
\bibitem{AgPu} {\sc G. S. Agarwal and R. R. Puri}, {\em Phys. Rev. A} {\bf 41} (1990), 3782.
\end{references}
\end{document}
\begin{document}
\begin{abstract} In this paper we establish necessary and sufficient conditions for the limit set of a projective Anosov representation to be a $C^{\alpha}$-submanifold of projective space for some $\alpha\in(1,2)$. We also calculate the optimal value of $\alpha$ in terms of the eigenvalue data of the Anosov representation. \end{abstract}
\title{Regularity of limit sets of Anosov representations}
\tableofcontents
\section{Introduction} Suppose that $\Hb_{\Rb}^d$ is real hyperbolic $d$-space. Let $\partial_\infty\Hb_{\Rb}^d$ denote the geodesic boundary of $\Hb_{\Rb}^d$ and let ${ \rm Isom}(\Hb_{\Rb}^d)$ denote the isometry group of $\Hb_{\Rb}^d$. Given a representation $\rho: \Gamma \rightarrow { \rm Isom}(\Hb_{\Rb}^d)$, the \emph{limit set of $\rho$} is defined to be \begin{align*} \Lc_\rho = \overline{\rho(\Gamma) \cdot x_0} \cap \partial_\infty \Hb_{\Rb}^d \end{align*} where $x_0 \in \Hb_{\Rb}^d$ is any point. If we further assume that $\Gamma$ is a hyperbolic group and $\rho$ is a convex co-compact representation, then there is a $\rho$-equivariant, continuous map from $\partial_\infty \Gamma$, the Gromov boundary of $\Gamma$, to the limit set $\Lc_\rho$. The limit set in this setting is generically very irregular: for instance, when $\partial_\infty\Gamma$ is a topological manifold, Yue~\cite{Y1996} proved that unless $\rho$ is a co-compact action on a totally geodesic subspace of $\Hb_{\Rb}^d$, its limit set is fractal-like and, in particular, has Hausdorff dimension strictly greater than its topological dimension.
The group ${ \rm Isom}(\Hb_{\Rb}^d)$ is a semisimple Lie group. For a general semisimple Lie group $G$, there is a rich class of representations from a hyperbolic group $\Gamma$ to $G$ called \emph{Anosov representations}, which generalize the convex co-compact representations from $\Gamma$ to ${ \rm Isom}(\Hb_{\Rb}^d)$. Anosov representations were introduced by Labourie~\cite{L2006} and extended by Guichard-Wienhard~\cite{GW2012}. Since then, they have been heavily studied; see, for instance, \cite{KLP2013, KLP2014,KLP2014b,GGKW2015,BPS2016}. One reason for their popularity is that they are rigid enough to retain many of the good geometric properties that convex co-compact representations have, while at the same time being flexible enough to admit many new and interesting examples.
In this paper, we investigate the regularity of the limit sets of Anosov representations from $\Gamma$ into $\PGL_d(\Rb)$. We will give precise definitions in Section~\ref{sec:Anosov_repn} but informally: if $\Gamma$ is a word hyperbolic group with Gromov boundary $\partial_\infty \Gamma$, a representation $\rho:\Gamma \rightarrow \PGL_d(\Rb)$ is said to be $k$-Anosov if there exist continuous $\rho$-equivariant maps \begin{align*} \xi^{(k)} : \partial_\infty \Gamma \rightarrow \Gr_k(\Rb^d) \text{ and } \xi^{(d-k)} : \partial_\infty \Gamma \rightarrow \Gr_{d-k}(\Rb^d) \end{align*} which satisfy certain dynamical properties. For a $k$-Anosov representation, it is reasonable to call the image of $\xi^{(k)}$ in $\Gr_k(\Rb^d)$ the ``$k$-limit set of $\rho$ in $\Gr_k(\Rb^d)$.''
We will largely focus our attention on $1$-Anosov representations; by a result of Guichard-Wienhard \cite[Proposition 4.3]{GW2012}, for any Anosov representation $\rho : \Gamma \rightarrow G$ into a semisimple Lie group $G$, there exist $d > 0$ and an irreducible representation $\phi : G \rightarrow \PGL_d(\Rb)$ such that $\phi \circ \rho$ is $1$-Anosov. Thus, up to post-composition with irreducible representations, the class of $1$-Anosov representations contains all other types of Anosov representations. Further, the flag maps induced by $\phi$ are smooth. Thus, another result of Guichard-Wienhard \cite[Proposition 4.4]{GW2012} implies that all regularity properties of the limit set can be investigated by reducing to the case of $1$-Anosov representations.
Our first main result gives a sufficient condition for the $1$-limit set of a $1$-Anosov representation to be a $C^\alpha$-submanifold of $\Pb(\Rb^d)$ for some $\alpha>1$.
\begin{theorem}\label{thm:main} (Theorem \ref{thm:main_body}) Suppose $\Gamma$ is a hyperbolic group, $\partial_\infty \Gamma$ is a topological $(m-1)$-manifold, and $\rho: \Gamma \rightarrow \PGL_{d}(\Rb)$ is a $1$-Anosov representation. If \begin{enumerate} \item[($\dagger$)] $\rho$ is $m$-Anosov, and $\xi^{(1)}(x) + \xi^{(1)}(z) + \xi^{(d-m)}(y)$ is a direct sum for all pairwise distinct $x,y,z \in \partial_\infty \Gamma$, \end{enumerate} then \begin{enumerate} \item[($\ddagger$)] $M:=\xi^{(1)}(\partial_\infty \Gamma)$ is a $C^{\alpha}$-submanifold of $\Pb(\Rb^d)$ for some $\alpha > 1$. \end{enumerate} Moreover, $T_{\xi^{(1)}(x)} M = \xi^{(m)}(x)$ for any $x \in \partial_\infty \Gamma$. \end{theorem}
\begin{remark}\label{rmk:open} \ \begin{enumerate} \item A weaker version of this result, only deducing $C^1$ regularity was independently proven by Pozzetti-Sambarino-Wienhard \cite{PSW18}. \item Property ($\dagger$) and $k$-Anosovness in Theorem~\ref{thm:main} are open conditions in $\Hom(\Gamma, \PGL_d(\Rb))$, see Section \ref{sec:stability}.
\end{enumerate}
\end{remark}
Theorem \ref{thm:main} is a generalization of the following theorem due to Benoist in the setting of divisible, properly convex domains in $\Pb(\Rb^d)$. A group of projective transformations $\Gamma\subset\PGL_d(\Rb)$ \emph{divides} a properly convex domain $\Omega\subset\Pb(\Rb^d)$ if $\Gamma$ acts properly discontinuously and co-compactly on $\Omega$.
\begin{theorem}[\cite{Ben04}]\label{thm:convex_divisible} Let $\Gamma\subset\PGL_d(\Rb)$ be a hyperbolic group that divides a properly convex domain $\Omega\subset\Pb(\Rb^d)$. Then $\id:\Gamma\to\PGL_d(\Rb)$ is a $1$-Anosov representation whose $1$-limit set is $\partial\Omega$. Furthermore, $\partial\Omega\subset\Pb(\Rb^d)$ is a $C^\alpha$-submanifold for some $\alpha>1$. \end{theorem}
Theorem \ref{thm:main} also generalizes a result due to Labourie in the setting of Hitchin representations. Let $S$ be a closed orientable hyperbolizable surface and fix a Fuchsian representation $\rho_0: \pi_1(S) \rightarrow \PGL_2(\Rb)$. Then let $\tau_d : \PGL_2(\Rb) \rightarrow \PGL_d(\Rb)$ be the standard irreducible representation (see Section \ref{sec:rhoirred}). A representation $\rho : \pi_1(S) \rightarrow \PGL_d(\Rb)$ is \emph{Hitchin} if it is conjugate to a representation in the connected component of $\Hom(\pi_1(S), \PGL_d(\Rb))$ that contains $\tau_d \circ \rho_0$.
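For orientation, we note (as an illustrative aside; the precise construction appears in Section \ref{sec:rhoirred}) that in one standard model $\tau_d$ is the action of $\PGL_2(\Rb)$ on the $d$-dimensional space of homogeneous polynomials of degree $d-1$ in two variables. In this model, if $\overline{g}\in\SL_2(\Rb)$ is a lift of $g$ with real eigenvalues $\lambda, \lambda^{-1}$ where $\lambda \geq 1$, then a lift of $\tau_d(g)$ has eigenvalues \begin{align*} \lambda^{d-1} \geq \lambda^{d-3} \geq \dots \geq \lambda^{3-d} \geq \lambda^{1-d}, \end{align*} so when $\lambda > 1$ the eigenvalue moduli of $\tau_d(g)$ are pairwise distinct.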
\begin{theorem}[\cite{L2006}] If $\rho:\pi_1(S)\to\PGL_d(\Rb)$ is a Hitchin representation, then $\rho$ is $k$-Anosov for every $k \in \{1,\dots, d-1\}$, and the $1$-limit set of $\rho$ is a $C^{\alpha}$-submanifold in $\Pb(\Rb^d)$ for some $\alpha>1$. \end{theorem}
Using Theorem~\ref{thm:main}, we can find more examples of representations that preserve $C^\alpha$-submanifolds in $\Pb(\Rb^d)$.
\begin{example}\label{cor:hyperbolic_lattices}(See Section~\ref{sec:real_hyp_lattices}) Suppose $\tau: \PO(m,1) \rightarrow \PGL_d(\Rb)$ is an irreducible representation, $\Gamma \leq \PO(m,1)$ is a co-compact lattice, and $\rho := \tau|_{\Gamma} : \Gamma \rightarrow \PGL_d(\Rb)$. If $\rho$ is $1$-Anosov, then there exists a neighborhood $\Oc$ of $\rho$ in $\Hom(\Gamma, \PGL_d(\Rb))$ such that every representation in $\Oc$ is a $1$-Anosov representation whose $1$-limit set is a $C^{\alpha}$ submanifold of $\Pb(\Rb^d)$ for some $\alpha > 1$. \end{example}
\begin{example}\label{cor:hitchin} (See Section \ref{sec:Hitchin}) If $\rho : \pi_1(S) \rightarrow \PGL_d(\Rb)$ is a Hitchin representation, then for all $k=1,\dots,d-1$, there is an open neighborhood $\Oc$ of $\bigwedge^k\rho$ in $\Hom\left(\Gamma,\PGL\left(\bigwedge^k\Rb^d\right)\right)$ so that every representation in $\Oc$ is a $1$-Anosov representation whose $1$-limit set is a $C^{\alpha}$-submanifold of $\Pb\left(\bigwedge^k\Rb^d\right)$ for some $\alpha>1$. See Section \ref{sec:wedge} for the definition of $\bigwedge^k\rho$. In particular, by applying \cite[Proposition 4.4]{GW2012}, the $k$-limit set of $\rho$ is a $C^{\alpha}$-submanifold of $\Gr_k(\Rb^d)$ for some $\alpha>1$. \end{example}
\begin{remark} Example~\ref{cor:hitchin} was independently observed by Pozzetti-Sambarino-Wienhard \cite{PSW18}.\end{remark}
In fact, Theorem \ref{thm:main} is a consequence of a more general theorem, see Theorem \ref{thm:main_body}, that is stated using \emph{$\rho$-controlled subsets} $M\subset\Pb(\Rb^d)$, of which the $1$-limit set of $\rho$ is an example, see Definition \ref{def:controlled}. In the main body of our paper, all our results will be stated for $\rho$-controlled subsets. These statements are stronger than the results we mention in this introduction, but are more technical to state.
We also investigate the extent to which the converse of Theorem \ref{thm:main} holds. In general, there are $1$-Anosov representations $\rho$ whose $1$-limit sets are $C^\infty$-submanifolds of $\Pb(\Rb^d)$, but for which ($\dagger$) in Theorem \ref{thm:main} does not hold, see Example~\ref{ex:surface_bad_example}. However, we prove that when $\Gamma$ is a surface group and $\rho$ is irreducible, the conditions in Theorem~\ref{thm:main} are both necessary and sufficient.
\begin{theorem}\label{thm:nec_surface}(Theorem \ref{thm:nec_surface_body}) Suppose $\Gamma$ is a hyperbolic group, $\rho: \Gamma \rightarrow \PGL_{d}(\Rb)$ is an irreducible $1$-Anosov representation, and $\partial_\infty \Gamma$ is homeomorphic to a circle. Then the following are equivalent: \begin{enumerate} \item[($\dagger$)] $\rho$ is a $2$-Anosov representation and $\xi^{(1)}(x) + \xi^{(1)}(y) + \xi^{(d-2)}(z)$ is a direct sum for all pairwise distinct $x,y,z \in \partial_\infty \Gamma$, \item[($\ddagger$)] $\xi^{(1)}(\partial_\infty \Gamma)$ is a $C^{\alpha}$-submanifold of $\Pb(\Rb^d)$ for some $\alpha > 1$. \end{enumerate} \end{theorem}
From Theorem \ref{thm:nec_surface} and part (2) of Remark~\ref{rmk:open}, we have the following corollary.
\begin{corollary} Suppose $\Gamma$ is a hyperbolic group with $\partial_\infty \Gamma$ homeomorphic to a circle. Let $\Oc \subset \Hom(\Gamma, \PGL_d(\Rb))$ denote the set of representations that are irreducible, $1$-Anosov, and whose $1$-limit set is a $C^{\alpha}$-submanifold of $\Pb(\Rb^d)$ for some $\alpha> 1$ (which may depend on $\rho$). Then $\Oc$ is an open set in $\Hom(\Gamma, \PGL_d(\Rb))$. \end{corollary}
For non-surface groups the situation is more complicated; there exist irreducible $1$-Anosov representations $\rho: \Gamma \rightarrow \PGL_d(\Rb)$ whose $1$-limit set is a $C^{\infty}$-submanifold of $\Pb(\Rb^d)$, but $\rho$ does not satisfy the condition ($\dagger$) in Theorem~\ref{thm:main}, see Example~\ref{ex:irred_bad_example}. However, if one assumes a stronger irreducibility condition on $\rho$, then the conditions in Theorem~\ref{thm:main} are both necessary and sufficient.
\begin{theorem}\label{thm:nec_general} (Theorem \ref{thm:nec_general_body}) Suppose $\Gamma$ is a hyperbolic group, $\partial_\infty \Gamma$ is an $(m-1)$-dimensional topological manifold, and $\rho: \Gamma \rightarrow \PGL_{d}(\Rb)$ is an irreducible $1$-Anosov representation such that $\bigwedge^m \rho: \Gamma \rightarrow \PGL(\bigwedge^m \Rb^d)$ is also irreducible. Then the following are equivalent: \begin{enumerate} \item[($\dagger$)] $\rho$ is an $m$-Anosov representation and $\xi^{(1)}(x) + \xi^{(1)}(y) + \xi^{(d-m)}(z)$ is a direct sum for all pairwise distinct $x,y,z \in \partial_\infty \Gamma$, \item[($\ddagger$)] $\xi^{(1)}(\partial_\infty \Gamma)$ is a $C^{\alpha}$-submanifold of $\Pb(\Rb^d)$ for some $\alpha > 1$. \end{enumerate} \end{theorem}
Recall that if $\rho:\Gamma\to H$ is a Zariski-dense representation and $\tau: H\to \PGL_d(\Rb)$ is an irreducible representation, then $\tau\circ\rho:\Gamma\to\PGL_d(\Rb)$ is irreducible. Thus, part (2) of Remark~\ref{rmk:open} and Theorem \ref{thm:nec_general} give the following corollary.
\begin{corollary} Suppose $\Gamma$ is a hyperbolic group. Let $\Oc \subset \Hom(\Gamma, \PGL_d(\Rb))$ denote the set of representations $\rho: \Gamma \rightarrow \PGL_d(\Rb)$ where $\rho$ is $1$-Anosov, has Zariski dense image, and whose $1$-limit set is a $C^{\alpha}$-submanifold of $\Pb(\Rb^d)$ for some $\alpha> 1$ (which may depend on $\rho$). Then $\Oc$ is an open set in $\Hom(\Gamma, \PGL_d(\Rb))$. \end{corollary}
Finally, for representations satisfying certain irreducibility conditions, we also determine the optimal regularity of the $1$-limit set in terms of the spectral data of $\rho(\Gamma)$. More precisely, given $g\in\PGL_d(\Rb)$, let $\overline{g}\in\GL_d(\Rb)$ be a lift of $g$, and let $\lambda_1(\overline{g}) \geq \lambda_2(\overline{g}) \geq \dots \geq \lambda_d(\overline{g})$ denote the absolute values of the eigenvalues of $\overline{g}$. Note that for all $i,j$, the ratio \[\frac{\lambda_i}{\lambda_j}(g):=\frac{\lambda_i(\overline{g})}{\lambda_j(\overline{g})}\] does not depend on the choice of lift $\overline{g}$ of $g$. Then given a representation $\rho : \Gamma \rightarrow \PGL_d(\Rb)$ and $2 \leq m \leq d-1$ define \begin{align*} \alpha_m(\rho) = \inf_{\gamma\in\Gamma }\left\{\log\frac{\lambda_1}{\lambda_{m+1}}(\rho(\gamma))\Bigg/\log\frac{\lambda_1}{\lambda_{m}}(\rho(\gamma)) : \frac{\lambda_1}{\lambda_{m}}(\rho(\gamma)) \neq 1 \right\}. \end{align*} If $\rho$ is $(1,m)$-Anosov, it follows from the definition that $\alpha_m(\rho) > 1$ (see Section~\ref{sec:Anosov_repn}).
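As a sanity check of this definition, consider (assuming the standard irreducible representation $\tau_3 : \PGL_2(\Rb) \rightarrow \PGL_3(\Rb)$ of Section~\ref{sec:rhoirred}) a Fuchsian representation $\rho_0 : \pi_1(S) \rightarrow \PGL_2(\Rb)$ and $\rho = \tau_3 \circ \rho_0$. For any infinite order $\gamma \in \pi_1(S)$, a lift of $\rho(\gamma)$ has eigenvalue moduli $\lambda^2 \geq 1 \geq \lambda^{-2}$ for some $\lambda > 1$, so \begin{align*} \log\frac{\lambda_1}{\lambda_{3}}(\rho(\gamma))\Bigg/\log\frac{\lambda_1}{\lambda_{2}}(\rho(\gamma)) = \frac{\log \lambda^{4}}{\log \lambda^{2}} = 2 \end{align*} for every such $\gamma$, and hence $\alpha_2(\rho) = 2$, consistent with the fact that the $1$-limit set of $\rho$ is a smooth conic in $\Pb(\Rb^3)$.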
\begin{theorem}\label{thm:regularity} (Theorem \ref{thm:regularity_body}) Suppose $\Gamma$ is a hyperbolic group and $\rho: \Gamma \rightarrow \PGL_{d}(\Rb)$ is an irreducible $(1,m)$-Anosov representation so that $\xi^{(1)}(x) + \xi^{(1)}(y) + \xi^{(d-m)}(z)$ is a direct sum for all $x,y,z \in \partial_\infty \Gamma$ distinct. Then \begin{align*} \alpha_m(\rho) \leq \sup\left\{ \alpha \in (1,2) : \xi^{(1)}(\partial_\infty \Gamma) \text{ is a }C^{\alpha}\text{-submanifold} \right\} \end{align*} with equality if \begin{align*} \xi^{(1)}(\partial_\infty \Gamma) \cap \left(\xi^{(1)}(x) + \xi^{(1)}(y) + \xi^{(d-m)}(z)\right) \end{align*} spans $\xi^{(1)}(x) + \xi^{(1)}(y) + \xi^{(d-m)}(z)$ for all $x,y,z \in \partial_\infty \Gamma$ distinct. \end{theorem}
\begin{remark} \label{rem:stablility} \ \begin{enumerate} \item In Theorem \ref{thm:regularity}, when $\xi^{(1)}(\partial_\infty \Gamma)$ has either dimension one or co-dimension one, the extra hypothesis for equality is automatically satisfied. If the dimension is one (i.e. $m=2$), then \begin{align*} \xi^{(1)}(x) + \xi^{(1)}(y) + \xi^{(d-m)}(z)= \xi^{(1)}(x) + \xi^{(1)}(y) + \xi^{(d-2)}(z) = \Rb^d. \end{align*} So the extra hypothesis follows from the irreducibility of $\rho$. If the co-dimension is one (i.e. $m=d-1$), then \begin{align*} \xi^{(1)}(x) + \xi^{(1)}(y) + \xi^{(d-m)}(z)= \xi^{(1)}(x) + \xi^{(1)}(y) + \xi^{(1)}(z) \end{align*} is spanned by $\xi^{(1)}(x), \xi^{(1)}(y), \xi^{(1)}(z)$. Hence the extra hypothesis always holds in this case. \item In general, the extra hypothesis for equality is an open condition, see Section \ref{sec:stability}.
\item The irreducibility of $\rho$ is necessary in Theorem~\ref{thm:regularity}. For instance, if $\tau_d : \PGL_2(\Rb) \rightarrow \PGL_d(\Rb)$ is the standard irreducible representation, see Section~\ref{sec:rhoirred}, and $\Gamma \leq \PGL_2(\Rb)$ is a co-compact lattice, then $\rho = (\tau_5 \oplus \tau_2) |_{\Gamma}$ is $1$-Anosov and $\xi^{(1)}(\partial_\infty \Gamma)$ is a $1$-dimensional $C^\infty$-submanifold of $\Pb(\Rb^7)$. At the same time, for any infinite order $\gamma\in\Gamma$, \[\log\frac{\lambda_1}{\lambda_{3}}(\rho(\gamma))\Bigg/\log\frac{\lambda_1}{\lambda_{2}}(\rho(\gamma))= 3/2.\] \item Notice that the quantity $\alpha_m(\rho)$ is invariant under passing to finite index subgroups. Indeed, if $\Gamma_0 \leq \Gamma$ is a finite index subgroup and $\gamma \in \Gamma$, then there exists some $k \in \Nb$ such that $\gamma^k \in \Gamma_0$. Further, \begin{align*} \log\frac{\lambda_1}{\lambda_{m+1}}(\rho(\gamma^k))\Bigg/\log\frac{\lambda_1}{\lambda_{m}}(\rho(\gamma^k)) = \log\frac{\lambda_1}{\lambda_{m+1}}(\rho(\gamma))\Bigg/\log\frac{\lambda_1}{\lambda_{m}}(\rho(\gamma)). \end{align*}
Hence $\alpha_m(\rho|_{\Gamma_0}) = \alpha_m(\rho)$.
\end{enumerate} \end{remark}
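To illustrate the computation in part (3) of the above remark: if a lift of $\gamma$ under the inclusion $\Gamma \leq \PGL_2(\Rb)$ has eigenvalues $\lambda > 1 > \lambda^{-1}$, then (using the standard description of $\tau_d$ recalled in Section~\ref{sec:rhoirred}) a lift of $\rho(\gamma) = (\tau_5 \oplus \tau_2)(\gamma)$ has eigenvalue moduli \begin{align*} \lambda^{4} \geq \lambda^{2} \geq \lambda \geq 1 \geq \lambda^{-1} \geq \lambda^{-2} \geq \lambda^{-4}, \end{align*} so that $\frac{\lambda_1}{\lambda_3}(\rho(\gamma)) = \lambda^{3}$ while $\frac{\lambda_1}{\lambda_2}(\rho(\gamma)) = \lambda^{2}$, and the ratio of logarithms is indeed $3/2$.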
In Section~\ref{sec:regularity}, we establish a generalization of Theorem~\ref{thm:regularity} which holds for $\rho$-controlled subsets. One example of such a subset is the boundary of a properly convex domain $\Omega\subset\Pb(\Rb^d)$ that admits a $\Gamma$-action induced by a $1$-Anosov representation $\rho$. In this case, Theorem~\ref{thm:regularity_body} implies the following.
\begin{theorem}\label{thm:regularity2} Suppose $\Gamma$ is a hyperbolic group and $\rho: \Gamma \rightarrow \PGL_{d}(\Rb)$ is an irreducible $1$-Anosov representation. Also, suppose $\Omega\subset\Pb(\Rb^d)$ is a $\rho(\Gamma)$-invariant properly convex domain so that $\xi^{(d-1)}(x)\cap \partial\Omega=\xi^{(1)}(x)$ for all $x\in \partial \Gamma$. If \begin{enumerate} \item [($\star$)]$p_1+p_2+\xi^{(1)}(y)$ is a direct sum for all pairwise distinct $p_1,p_2,\xi^{(1)}(y)\in\partial\Omega$, \end{enumerate} then \begin{enumerate} \item [($\star\star$)] $\partial\Omega$ is $C^\alpha$ along $\xi^{(1)}(\partial_\infty\Gamma)$ for some $\alpha>1$. \end{enumerate} Moreover, $T_{\xi^{(1)}(x)} \partial\Omega = \xi^{(d-1)}(x)$ for any $x \in \partial_\infty \Gamma$, and \begin{align*} \alpha_{d-1}(\rho) = \sup\left\{ \alpha \in (1,2) : \partial\Omega \text{ is } C^{\alpha} \text{ along }\xi^{(1)}(\partial_\infty \Gamma) \right\}. \end{align*} \end{theorem}
In the case when the $\Gamma$-action on $\Omega$ is co-compact, Theorem \ref{thm:regularity2} was previously proven by Guichard~\cite{G2005} using different techniques. Also, a weaker version of Theorem~\ref{thm:regularity2} (without the optimal bound for $\alpha$) was previously proven independently by Danciger-Gueritaud-Kassel \cite{DCG17} and the second author \cite{Zimmer17}.
\subsection{Terminology}\label{sec:terminology} Throughout the paper we will use the following terminology: \begin{enumerate} \item $\norm{\cdot}_{2}$ will always denote the standard $\ell^2$-norm on $\Rb^d$, \item an $(m-1)$-dimensional topological manifold $M \subset \Pb(\Rb^d)$ is $C^{\alpha}$ for some $\alpha \in (1,2)$ if for every $p \in M$ there exist local coordinates around $p$ and a differentiable map $f:\Rb^{m-1}\to\Rb^{d-m}$ such that $M$ coincides with the graph of $f$ near $p$ and \[f(u+h)=f(u)+df_u(h)+{\rm O}(\norm{h}_{2}^\alpha)\] for all $u,h\in\Rb^{m-1}$. \item an $(m-1)$-dimensional topological manifold $M \subset \Pb(\Rb^d)$ is \emph{$C^{\alpha}$ along a subset $N\subset M$} for some $\alpha \in (1,2)$ if for every $p \in N$ there exist local coordinates around $p$ and a continuous map $f:\Rb^{m-1}\to\Rb^{d-m}$ such that $M$ coincides with the graph of $f$ near $p$ and if $(u,f(u)) \in N$, then $f$ is differentiable at $u$ and satisfies \[f(u+h)=f(u)+df_u(h)+{\rm O}(\norm{h}_{2}^\alpha)\] for all $h\in\Rb^{m-1}$. \end{enumerate}
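A model example for (2), included as an illustrative aside: fix $\alpha \in (1,2)$ and let $f : \Rb^{m-1} \rightarrow \Rb^{d-m}$ have first coordinate $u \mapsto \norm{u}_{2}^{\alpha}$ and all other coordinates zero. Then $f$ is differentiable everywhere with $df_0 = 0$, and \begin{align*} f(h) = f(0) + df_0(h) + {\rm O}(\norm{h}_{2}^{\alpha}), \end{align*} while away from the origin $f$ is smooth, so the graph of $f$ is a $C^{\alpha}$-submanifold. On the other hand, $\norm{h}_{2}^{\alpha}$ is not ${\rm O}(\norm{h}_{2}^{\alpha'})$ for any $\alpha' > \alpha$, so near the origin this graph is not $C^{\alpha'}$.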
\section{Anosov representations}\label{sec:Anosov_repn}
For the rest of this article, $\Gamma$ will denote a hyperbolic group, and $\partial_\infty\Gamma$ will be its Gromov boundary. In this section, we define Anosov representations from $\Gamma$ to $\PGL_d(\Rb)$, and mention some of their properties.
\subsection{A definition of Anosov representations}
Since they were introduced, several other characterizations of Anosov representations have been given by Kapovich et al.~\cite{KLP2013, KLP2014,KLP2014b}, Gu{\'e}ritaud et al.~\cite{GGKW2015}, and Bochi et al.~\cite{BPS2016}. The definition we give below comes from~\cite[Theorem 1.7]{GGKW2015}.
First, let $S$ be a finite symmetric generating set of $\Gamma$, and $d_S$ the induced word metric on $\Gamma$. For $\gamma \in \Gamma$, let $\ell_S(\gamma)$ denote the minimal translation distance of $\gamma$ acting on $\Gamma$, that is \begin{align*} \ell_S(\gamma) := \inf_{x \in \Gamma} d_S(\gamma\cdot x, x). \end{align*} Also, recall that for any $g\in\PGL_d(\Rb)$ and any $i,j\in\{1,\dots,d\}$, we have defined \[\frac{\lambda_i}{\lambda_j}(g):=\frac{\lambda_i(\overline{g})}{\lambda_j(\overline{g})},\] where $\lambda_1(\overline{g})\geq\dots\geq\lambda_d(\overline{g})$ are the absolute values of the (generalized) eigenvalues of a representative $\overline{g}\in\GL_d(\Rb)$ of $g$.
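As a simple example of the translation distance (a standard computation, included for illustration): in the free group $F_2 = \langle a, b \rangle$ with $S = \{a^{\pm 1}, b^{\pm 1}\}$, left-invariance of $d_S$ implies that $\ell_S$ is invariant under conjugation, so \begin{align*} \ell_S(aba^{-1}) = \ell_S(b) = 1, \end{align*} even though $d_S(aba^{-1}, \id) = 3$. More generally, in a free group $\ell_S(\gamma)$ is the word length of the cyclic reduction of $\gamma$.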
A third ingredient we need to define Anosov representations is an appropriate notion of ``well-behaved'' flag maps. More precisely, we have the following.
\begin{definition} Let $\rho: \Gamma \rightarrow \PGL_{d}(\Rb)$ be a representation. If $1 \leq k \leq d-1$, then a pair of maps $\xi^{(k)}: \partial_\infty \Gamma \rightarrow \Gr_k(\Rb^d)$ and $\xi^{(d-k)}: \partial_\infty \Gamma \rightarrow \Gr_{d-k}(\Rb^d)$ are called: \begin{itemize} \item \emph{$\rho$-equivariant} if $\xi^{(k)}(\gamma\cdot x) = \rho(\gamma)\cdot\xi^{(k)}(x)$ and $\xi^{(d-k)}(\gamma\cdot x) = \rho(\gamma)\cdot\xi^{(d-k)}(x)$ for all $x \in \partial_\infty \Gamma$ and $\gamma \in \Gamma$, \item \emph{dynamics-preserving} if for every $\gamma \in \Gamma$ of infinite order with attracting fixed point $\gamma^+ \in \partial_\infty \Gamma$, the points $\xi^{(k)}(\gamma^+) \in \Gr_k(\Rb^d)$ and $\xi^{(d-k)}(\gamma^+) \in \Gr_{d-k}(\Rb^d)$ are attracting fixed points of the action of $\rho(\gamma)$ on $\Gr_k(\Rb^d)$ and $\Gr_{d-k}(\Rb^d)$, and \item \emph{transverse} if for every distinct pair $x, y \in \partial_\infty \Gamma$ we have \begin{align*} \xi^{(k)}(x) + \xi^{(d-k)}(y) = \Rb^{d}. \end{align*} \end{itemize} \end{definition}
With these definitions, we can now define Anosov representations.
\begin{definition} A representation $\rho: \Gamma \rightarrow \PGL_d(\Rb)$ is \emph{$k$-Anosov} if \begin{itemize} \item there exist continuous, $\rho$-equivariant, dynamics-preserving, and transverse maps $\xi^{(k)}: \partial_\infty \Gamma \rightarrow \Gr_k(\Rb^d)$, $\xi^{(d-k)}: \partial_\infty \Gamma \rightarrow \Gr_{d-k}(\Rb^d)$, and \item for any sequence $\{\gamma_i\}_{i=1}^\infty\subset\Gamma$ so that $\lim_{i\to\infty}\ell_S(\gamma_i)=\infty$, we have \[\lim_{i\to\infty}\log\frac{\lambda_k}{\lambda_{k+1}}(\rho(\gamma_i))=\infty.\] \end{itemize} If $\rho$ is $k$-Anosov for all $k \in \{ k_1,\dots, k_j\}$ we say that $\rho$ is $(k_1,\dots, k_j)$-Anosov. \end{definition}
If $S'$ is another finite symmetric generating set of $\Gamma$, then $\id:(\Gamma,d_S)\to(\Gamma,d_{S'})$ is a quasi-isometry. In particular, the notion of an Anosov representation does not depend on the choice of $S$. Also, it follows from the definition that a representation $\rho:\Gamma\to\PGL_d(\Rb)$ is $k$-Anosov if and only if it is $(d-k)$-Anosov. We refer to $\xi^{(k)}$ as the \emph{$k$-flag map} of $\rho$, and $\xi^{(k)}(\partial_\infty\Gamma)\subset\Gr_k(\Rb^d)$ as the \emph{$k$-limit set} of $\rho$.
Given a subspace $V \subset \Rb^N$ define \begin{align}\label{eqn:projectivization} [V]=\{ [v] \in \Pb(\Rb^N) : v \in V\}. \end{align} Often, we will view $\xi^{(k)}(x)$ as the projective subspace $[\xi^{(k)}(x)]\subset\Pb(\Rb^d)$. However, to simplify notation, we will denote $[\xi^{(k)}(x)]$ simply by $\xi^{(k)}(x)$ in those settings.
\begin{remark} In many other places in the literature, what we call a $k$-Anosov representation is usually known as a \emph{$P_k$-Anosov representation}, where $P_k$ is the stabilizer in $\PGL_d(\Rb)$ of a point in $\Gr_k(\Rb^d)$. This notation is an artifact of a more general definition of Anosov representations into an arbitrary non-compact semisimple Lie group. Since we do not use that generality here, we will use $k$ in place of $P_k$ to simplify the notation. \end{remark}
\subsection{Singular values and Anosov representations}
We will now briefly discuss singular values, which we use to give an alternate description of Anosov representations. This description was initially due to Kapovich et al.~\cite{KLP2014,KLP2014b}, but was also later proven by Bochi et al. \cite{BPS2016} using different techniques.
\begin{definition} Let $|\cdot|$ and $\norm{\cdot}$ be norms on $\Rb^d$, and let $L:(\Rb^d,|\cdot|)\to(\Rb^d,\norm{\cdot})$ be a linear map. \begin{itemize}
\item For any nonzero $X\in(\Rb^d,|\cdot|)$, the \emph{stretch factor} of $X$ under $L$ is the quantity
\[\sigma_X(L):=\frac{\norm{L(X)}}{|X|}.\] \item For $i=1,\dots,d$, the $i$-th \emph{singular value} of $L$ is the quantity \begin{align*} \sigma_i(L) :=\max_{W\subset\Rb^d,\dim W=i} \min_{0 \neq X\in W}\sigma_X(L) = \min_{W \subset \Rb^d, \dim W=d-i+1} \max_{0 \neq X \in W} \sigma_X(L). \end{align*} \end{itemize} \end{definition}
Observe that for all $i=1,\dots,d-1$, $\sigma_i(L)\geq\sigma_{i+1}(L)$, and if $L$ is invertible, then $\sigma_i(L)=\frac{1}{\sigma_{d-i+1}(L^{-1})}$.
When $L=\overline{g}\in\GL_d(\Rb)$ and $\norm{\cdot}=|\cdot|$ is the standard norm $\norm{\cdot}_{2}$ on $\Rb^d$, we denote $\sigma_i(L)$ by $\mu_i(\overline{g})$. In that case, if $A$ is a $d\times d$ real-valued matrix representing $\overline{g}$ in an orthonormal basis for the standard inner product on $\Rb^d$, then the singular values $\mu_1(\overline{g})\geq \dots\geq\mu_d(\overline{g})>0$ are the square roots of the eigenvalues of $A^TA$. Using this, we may define, for any $g\in\PGL_d(\Rb)$ and all $i,j\in\{1,\dots,d\}$, the quantity \[\frac{\mu_i}{\mu_j}(g):=\frac{\mu_i(\overline{g})}{\mu_j(\overline{g})},\] where $\overline{g}\in\GL_d(\Rb)$ is a lift of $g$.
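As a concrete check of these definitions, take $\norm{\cdot} = |\cdot| = \norm{\cdot}_{2}$ and the unipotent matrix $\overline{g} = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \in \SL_2(\Rb)$. Then \begin{align*} A^TA = \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix} \end{align*} has eigenvalues $(3\pm\sqrt{5})/2$, so $\mu_1(\overline{g}) = \frac{1+\sqrt{5}}{2}$ and $\mu_2(\overline{g}) = \frac{2}{1+\sqrt{5}} = \mu_1(\overline{g})^{-1}$, even though both eigenvalues of $\overline{g}$ equal one.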
We can now state the following theorem due to Kapovich et al.~\cite{KLP2014,KLP2014b} (see Bochi et al. \cite[Proposition 4.9]{BPS2016}).
\begin{theorem}\label{thm:SV_char_of_Anosov} Suppose $\Lambda$ is a finitely generated group and $S$ is a finite symmetric generating set. A representation $\rho:\Lambda\to\PGL_d(\Rb)$ is $k$-Anosov if and only if there are constants $C,c>0$ such that \begin{align*} \log \frac{\mu_k}{\mu_{k+1}}(\rho(\gamma)) \geq C d_S(\gamma,\id)-c
\end{align*} for all $\gamma \in \Lambda$. \end{theorem}
\begin{remark} In Theorem \ref{thm:SV_char_of_Anosov}, it is implied, not assumed, that $\Lambda$ is a hyperbolic group. \end{remark}
\subsection{Properties of Anosov representations} \label{sec:properties}
Next, we recall some important properties of Anosov representations.
Define respectively the \emph{Cartan} and \emph{Jordan projection} $\mu,\lambda:\GL_d(\Rb)\to\Rb^d$ by \[\mu(\overline{g}):= \left ( \log \mu_1(\overline{g}), \dots, \log \mu_d(\overline{g}) \right)\,\,\,\text{ and }\,\,\,\lambda(\overline{g}) = \left ( \log \lambda_1(\overline{g}), \dots, \log \lambda_d(\overline{g}) \right).\] Observe that while the Jordan projection is invariant under conjugation in $\GL_d(\Rb)$, the Cartan projection is not. These two projections can be interpreted geometrically in the following way.
Associated to the Lie group $\PGL_d(\Rb)$ is the Riemannian symmetric space $X$, on which $\PGL_d(\Rb)$ acts transitively and by isometries. As a $\PGL_d(\Rb)$-space, $X=\PGL_d(\Rb)/\PO(d)$. Furthermore, the distance $d_X$ on $X$ induced by its Riemannian metric can be computed from the Cartan projection by the formula \[d_X(g_1\cdot\PO(d),g_2\cdot\PO(d))=\norm{\mu\left(\overline{g_1^{-1}g_2}\right)}_2,\] where $\overline{g_1^{-1}g_2}\in\SL^\pm_d(\Rb):=\{g \in \GL_d(\Rb) : \det(g) = \pm 1 \}$ is a lift of $g_1^{-1}g_2$, and $\norm{\cdot}_2$ is the $l^2$-norm. On the other hand, if $g\in\PGL_d(\Rb)$ and $\overline{g}\in\SL^\pm_d(\Rb)$ is a representative of $g$, then \[\inf_{p\in X}d_X(p,g\cdot p)=\norm{\lambda(\overline{g})}_2.\] As such, if $g\in\PGL_d(\Rb)$ and $\overline{g}\in\SL^\pm_d(\Rb)$ is a lift of $g$, then $\mu(\overline{g})$ is a refinement of the distance by which $g$ translates the identity coset in $X$, and $\lambda(\overline{g})$ is a refinement of the minimal translation distance of $g$ in $X$.
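To illustrate the difference between the two projections (a standard example, stated here for $d=2$): consider the unipotent $\overline{g} = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \in \SL_2(\Rb)$, so that $\lambda(\overline{g}) = (0,0)$ and hence $\inf_{p \in X} d_X(p, g\cdot p) = 0$. On the other hand, $\overline{g}^n = \begin{pmatrix} 1 & n \\ 0 & 1 \end{pmatrix}$ satisfies $\mu_1(\overline{g}^n) \geq \norm{\overline{g}^n e_2}_2 \geq n$, so \begin{align*} d_X(\PO(2), g^n\cdot\PO(2)) = \norm{\mu(\overline{g}^n)}_2 = \sqrt{2}\,\log\mu_1(\overline{g}^n) \geq \sqrt{2}\log n. \end{align*} Thus a parabolic element translates the identity coset an unbounded distance even though its minimal translation distance in $X$ vanishes (and is not attained).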
As an immediate consequence of Theorem~\ref{thm:SV_char_of_Anosov}, Anosov representations coarsely preserve the metric $d_S$ on $\Gamma$.
\begin{corollary}\label{thm:QI_Anosov} Let $\rho:\Gamma\to\PGL_d(\Rb)$ be $k$-Anosov for any $k$. Then the map $\Gamma\to X$ defined by $\gamma\mapsto \rho(\gamma)\cdot\PO(d)$ is a quasi-isometric embedding. In other words, there are constants $C\geq 1$ and $c\geq 0$ such that for all $\gamma_1,\gamma_2\in\Gamma$, \[\frac{1}{C}\norm{\mu\left(\overline{\rho(\gamma_1^{-1}\gamma_2)}\right)}_2-c\leq d_S(\gamma_1,\gamma_2)\leq C\norm{\mu\left(\overline{\rho(\gamma_1^{-1}\gamma_2)}\right)}_2+c,\] where $\overline{\rho(\gamma_1^{-1}\gamma_2)}\in\SL^\pm_d(\Rb)$ is a lift of $\rho(\gamma_1^{-1}\gamma_2)$.
\end{corollary}
We also have the following proposition due to Quint (see \cite[Lemma 2.19]{BCLS2015} for a proof), which restricts the possible Zariski closures of Anosov representations into $\PGL_d(\Rb)$.
\begin{proposition}\label{prop:Zclosure} Let $\rho: \Gamma \rightarrow \PGL_{d}(\Rb)$ be a $1$-Anosov representation. If $\rho$ is irreducible, then the Zariski closure of $\rho(\Gamma)$ is a semisimple Lie group without compact factors. \end{proposition}
We will also use the following observation of Guichard-Wienhard.
\begin{proposition}\label{prop:strongly_irreducible}\cite[Lemma 5.12]{GW2012} Let $\rho: \Gamma \rightarrow \PGL_{d}(\Rb)$ be an irreducible $1$-Anosov representation. If $\Gamma_0 \leq \Gamma$ is a finite index subgroup, then $\rho|_{\Gamma_0}$ is also irreducible. \end{proposition}
In many places in our argument, it will be more convenient to work with representations into $\SL_d(\Rb)$ instead of $\PGL_d(\Rb)$. The next observation allows us to make this reduction. Let $\pi:\GL_d(\Rb)\to\PGL_d(\Rb)$ denote the obvious projection.
\begin{observation}\label{obs:lift} For any representation $\rho: \Gamma \rightarrow \PGL_{d}(\Rb)$, there exists a subgroup $\Lambda_\rho \leq \SL_d(\Rb)$ so that $\pi|_{\Lambda_\rho}:\Lambda_\rho\to\PGL_d(\Rb)$ is a representation whose kernel is a subgroup of $\Zb_2$, and whose image is a subgroup of $\rho(\Gamma)$ with index at most two. \end{observation}
\begin{proof} Define $\Lambda_0 := \{ g \in \SL^{\pm}_d(\Rb) : [g] \in \rho(\Gamma) \}$, and let $\Lambda_\rho := \Lambda_0 \cap \SL_d(\Rb)$. Then $\pi(\Lambda_0)\subset\PGL_d(\Rb)$ coincides with $\rho(\Gamma)$, and $\Lambda_\rho$ has index at most two in $\Lambda_0$. Hence $\pi(\Lambda_\rho)$ has index at most two in $\rho(\Gamma)$, and the kernel of $\pi|_{\Lambda_\rho}$ is contained in $\{\pm\id\}\cong\Zb_2$. \end{proof}
In particular, if $\Gamma$ is a hyperbolic group, then so is $\Lambda_\rho$, and there are canonical identifications $\partial_\infty\Gamma=\partial_\infty\rho(\Gamma)=\partial_\infty\pi(\Lambda_\rho)=\partial_\infty\Lambda_\rho$. Furthermore, the following proposition is an immediate consequence of \cite[Corollary 1.3]{GW2012}.
\begin{proposition} Let $\rho:\Gamma\to\PGL_d(\Rb)$ be a representation. The representation
\[\rho':=\pi|_{\Lambda_\rho}:\Lambda_\rho\to\PGL_d(\Rb)\] is $k$-Anosov if and only if $\rho$ is $k$-Anosov. If so, the $k$-flag maps of $\rho$ and $\rho'$ agree. \end{proposition}
\begin{remark}\label{rem:lift} To prove any properties about the $k$-limit sets of $\rho$, it is now sufficient to show those properties hold for the $k$-limit sets of $\rho'$. The advantage of working with $\rho'$ in place of $\rho$ is that $\rho':\Lambda_\rho\to\PGL(d,\Rb)$ admits a lift to a representation from $\Lambda_\rho$ to $\SL_d(\Rb)$. {\bf With this, we can henceforth assume that $\rho:\Gamma\to\PGL(d,\Rb)$ admits a lift to a representation $\overline{\rho}:\Gamma\to\SL(d,\Rb)$.} \end{remark}
\subsection{Gromov geodesic flow space}\label{sec:flowspace}
In their proof of Theorem \ref{thm:SV_char_of_Anosov}, Bochi et al. \cite{BPS2016} gave a description of Anosov representations using dominated splittings. Our next goal is to give this description. To do so, we recall the definition of the flow space of a hyperbolic group, and state some of its well-known properties. For more details, see for instance~\cite{Gromov1987},~\cite{C1994}, or~\cite{M1991}.
As a topological space, the \emph{flow space} for $\Gamma$, denoted $\widetilde{U\Gamma}$, is homeomorphic to $\partial_\infty\Gamma^{(2)}\times\Rb$, where $\partial_\infty\Gamma^{(2)}:=\{(x,y)\in\partial_\infty\Gamma^{2}:x\neq y\}$. This flow space admits a natural $\Rb$-action by translation in the $\Rb$-factor called the \emph{geodesic flow on $\widetilde{U\Gamma}$}. We will use the notation $v=(v^+,v^-,v_0)\in \widetilde{U\Gamma}$, and denote the geodesic flow on $\widetilde{U\Gamma}$ by $\phi_t$, i.e. \[\phi_t(v)=(v^+,v^-,v_0+t)=(\phi_t(v)^+,\phi_t(v)^-,\phi_t(v)_0). \]
There is a proper, co-compact $\Gamma$-action on $\wt{U\Gamma}$ that commutes with $\phi_t$, and satisfies $\gamma\cdot(v^+,v^-,\Rb)=(\gamma\cdot v^+,\gamma\cdot v^-,\Rb)$. There is also a natural $\mathbb{Z}/2\mathbb{Z}$ action on $\wt{U\Gamma}$ which satisfies \begin{align*} (1+2\Zb) \cdot (x,y,\Rb) = (y,x,\Rb). \end{align*} This action commutes with the $\Gamma$ action, but not the $\phi_t$ action. Instead: \begin{align*} \alpha \phi_t \alpha = \phi_{-t} \end{align*} where $\alpha = (1+2\Zb)$. So the actions of $\Gamma$, $\phi_t$, and $\mathbb{Z}/2\mathbb{Z}$ combine to yield an action of $\Gamma \times (\Rb \rtimes_{\psi}\mathbb{Z}/2\mathbb{Z})$ on $\wt{U\Gamma}$ where $\psi : \mathbb{Z}/2\mathbb{Z} \rightarrow \Aut(\Rb)$ is given by $\psi(\alpha)(t) = -t$.
Since the $\Gamma$ action commutes with $\phi_t$, the geodesic flow on $\widetilde{U\Gamma}$ descends to a flow on the compact space $U\Gamma:=\widetilde{U\Gamma}/\Gamma$, which we refer to as the \emph{geodesic flow on $U\Gamma$}, and denote by $\widehat{\phi}_t$. This also implies that if $v^+=\gamma^+$ and $v^-=\gamma^-$ are the attracting and repelling fixed points of some infinite order $\gamma\in\Gamma$, then the orbit $(\gamma^+,\gamma^-,\Rb)\subset\widetilde{U\Gamma}$ of $\phi_t$ descends to a closed orbit of $\widehat{\phi}_t$ in $U\Gamma$. We will denote the period of this closed orbit by $T_\gamma\in\Rb$, and refer to $T_\gamma$ as the \emph{period of $\gamma$}. In other words, for all $v_0\in\Rb$, $\gamma\cdot (\gamma^+,\gamma^-,v_0)=(\gamma^+,\gamma^-,v_0+T_\gamma)$.
Furthermore, $\widetilde{U\Gamma}$ admits a $\Gamma \times \mathbb{Z}/2\mathbb{Z}$-invariant metric so that every orbit $(v^+,v^-,\Rb)$ of $\phi_t$ is a continuous quasi-geodesic. Since the $\Gamma$-action on $\widetilde{U\Gamma}$ is also co-compact, every orbit map $\gamma\in\Gamma\mapsto\gamma\cdot v\in\widetilde{U\Gamma}$ is a quasi-isometry. As a consequence, there is a canonical $\Gamma$-invariant homeomorphism $\partial_\infty\widetilde{U\Gamma}\simeq\partial_\infty\Gamma$ between the Gromov boundaries of $\wt{U\Gamma}$ and $\Gamma$, and $v^+$ and $v^-$ in $\partial_\infty\wt{U\Gamma}$ are the forward and backward endpoints of $(v^+,v^-,\Rb)\subset\wt{U\Gamma}$ respectively.
\begin{remark} In the case when $\Gamma$ is the fundamental group of a compact Riemannian manifold $X$ with negative sectional curvature, this geodesic flow space is what one would expect. In particular, let $T^1 X$ denote the unit tangent bundle of $X$, let $\wt{X}$ denote the universal cover of $X$, and let $T^1\wt{X}$ denote the unit tangent bundle of $\wt{X}$. Then we may take $\widetilde{U\Gamma}$ to be $T^1\widetilde X$ and $U\Gamma$ to be $T^1X$. The geodesic flow on both $T^1X$ and $T^1\widetilde{X}$ is the usual geodesic flow associated to the Riemannian metrics on $X$ and $\widetilde{X}$ respectively, and the $\Gamma$-invariant metric $d_{\widetilde{U\Gamma}}$ is the lift of the Riemannian metric on $T^1X$ that is locally given by the product of the Riemannian metric on $X$ and the spherical metric on the fibers. Further, the $\mathbb{Z}/2\mathbb{Z}$ action is given by $v \mapsto -v$. \end{remark}
Gromov proved that the geodesic flow space $\wt{U\Gamma}$ is unique up to homeomorphism.
\begin{theorem}\cite[Theorem 8.3.C]{Gromov1987}\label{thm:Gromov_uniqueness} Suppose that $\Gc$ is a proper Gromov hyperbolic metric space such that \begin{enumerate} \item $\Gamma \times (\Rb \rtimes_{\psi} \mathbb{Z}/2\mathbb{Z})$ acts on $\Gc$, \item the actions of $\Gamma$ and $\mathbb{Z}/2\mathbb{Z}$ are isometric, \item for every $v \in \Gc$, the map $\gamma \in \Gamma \rightarrow \gamma \cdot v \in \Gc$ is a quasi-isometry, \item the $\Rb$ action is free and every $\Rb$-orbit is a quasi-geodesic in $\Gc$. Further, the induced map $\Gc / \Rb \rightarrow \partial_\infty \Gc^{(2)}$ is a homeomorphism. \end{enumerate} Then there exists a $\Gamma \times \mathbb{Z}/2\mathbb{Z}$-equivariant homeomorphism $T : \Gc \rightarrow \wt{U\Gamma}$ that maps $\Rb$-orbits to $\Rb$-orbits. \end{theorem}
\subsection{Dominated Splittings} Next, we describe an alternate characterization of Anosov representations in $\GL_d(\Rb)$ using dominated splittings due to Bochi et al.~\cite{BPS2016}.
Let $\rho:\Gamma\to\GL_d(\Rb)$ be a representation. Let $E:=\widetilde{U\Gamma}\times\Rb^d$ be the trivial bundle over $\widetilde{U\Gamma}$, and define the vector bundle $E_{\rho}:=E/\Gamma$ over $U\Gamma$, where the $\Gamma$ action on $E$ is given by $\gamma\cdot(v,X)=(\gamma\cdot v,\rho(\gamma)\cdot X)$. Since $E_{\rho}$ is naturally a flat vector bundle over $U\Gamma$, it admits a continuous norm, and the compactness of $U\Gamma$ ensures that any two such norms are bi-Lipschitz equivalent. For any continuous norm on $E_{\rho}$, choose a lift of this norm to a $\Gamma$-invariant, continuous norm $\norm{\cdot}$ on $E$. With this, we can state the following theorem due to Bochi et al.\ (see Theorem 2.2, Proposition 4.5 and Proposition 4.9 in~\cite{BPS2016}).
\begin{theorem}\label{prop:dom_split} A representation $\rho:\Gamma\to\GL_d(\Rb)$ is $k$-Anosov if and only if there exist \begin{itemize} \item continuous, $\phi_t$-invariant, $\rho$-equivariant maps \[F_1:\wt{U\Gamma}\to \Gr_k(\Rb^d)\,\,\,\text{ and }\,\,\,F_2:\wt{U\Gamma}\to\Gr_{d-k}(\Rb^d)\] so that $F_1(v)+F_2(v)=\Rb^d$ for all $v\in\wt{U\Gamma}$, and \item constants $C > 0$, $\beta > 0$ such that \begin{align*} \frac{\norm{X_1}_{\phi_tv}}{\norm{X_2}_{\phi_tv}} \leq C e^{-\beta t} \frac{\norm{X_1}_{v}}{\norm{X_2}_{v}} \end{align*} for all $v \in \widetilde{U\Gamma}$, $X_i \in F_i(v)$ non-zero, and $t \geq 0$. \end{itemize} \end{theorem}
Here, we may think of $F_1$ and $F_2$ as $\Gamma$-invariant sub-bundles of $E$. The maps $F_1$ and $F_2$ are related to the flag maps $\xi^{(k)}$ and $\xi^{(d-k)}$ by \[F_1(v) = \xi^{(k)}(v^+)\,\,\,\text{ and }\,\,\,F_2(v) = \xi^{(d-k)}(v^-)\] for all $v = (v^+,v^-,v_0)\in\widetilde{U\Gamma}$.
\section{$\rho$-controlled sets}\label{sec:rho_controlled_sets}
In this section we introduce $\rho$-controlled sets and construct a useful family of projections.
\begin{definition}\label{def:controlled}Suppose that $\rho : \Gamma \rightarrow \PGL_d(\Rb)$ is a $1$-Anosov representation. A closed $\rho(\Gamma)$-invariant subset $M\subset\Pb(\Rb^d)$ is \emph{$\rho$-controlled} if \begin{enumerate} \item[(i)] $\xi^{(1)}(\partial_\infty\Gamma)\subset M$ and \item[(ii)] $M\cap\xi^{(d-1)}(x)=\xi^{(1)}(x)$ for every $x\in\partial_\infty\Gamma$. \end{enumerate} If $\rho$ also happens to be $m$-Anosov for some $m=2,\dots,d-1$, then a $\rho$-controlled subset $M\subset\Pb(\Rb^d)$ is \emph{$m$-hyperconvex} if \begin{align*} p_1+p_2+\xi^{(d-m)}(y) \end{align*} is a direct sum for all $p_1,p_2 \in M$ and $y \in \partial_\infty \Gamma$ with $p_1, p_2, \xi^{(1)}(y)$ pairwise distinct. \end{definition}
\begin{remark} We will typically consider the case when $M$ is a topological $(m-1)$-dimensional manifold and then require that $M$ is $m$-hyperconvex. \end{remark}
The three main examples of $\rho$-controlled subsets $M\subset\Pb(\Rb^d)$ that we will be concerned with are the following.
\begin{example}\label{eg:limitset} When $\rho : \Gamma \rightarrow \PGL_d(\Rb)$ is a $1$-Anosov representation, then the $1$-limit set $\xi^{(1)}(\partial_\infty\Gamma)$ for $\rho$ is obviously $\rho$-controlled. Furthermore, if $\rho$ is $m$-Anosov for some $m=1,\dots,d-1$, then $\xi^{(1)}(\partial_\infty\Gamma)$ is $m$-hyperconvex if and only if \begin{align*} \xi^{(1)}(x)+\xi^{(1)}(z)+\xi^{(d-m)}(y) \end{align*}
is a direct sum for all pairwise distinct $x,y,z\in\partial_\infty\Gamma$. \end{example}
\begin{example}\label{eg:convex} Suppose that $\rho:\Gamma\to\PGL_d(\Rb)$ is a 1-Anosov representation and $\rho(\Gamma)$ preserves a properly convex domain $\Omega\subset\Pb(\Rb^d)$ so that $\xi^{(d-1)}(x)\cap\partial\Omega=\xi^{(1)}(x)$ for all $x\in\partial_\infty\Gamma$, see Section \ref{sec:properly_convex}. Then $M:=\partial\Omega$ is obviously $\rho$-controlled. Notice that in this case, the requirement that $M$ is $(d-1)$-hyperconvex is simply that \begin{align*} p_1+p_2+\xi^{(1)}(y) \end{align*} is a direct sum for all pairwise distinct $p_1,p_2, \xi^{(1)}(y)\in\partial\Omega$. This is satisfied if and only if $\xi^{(1)}(\partial_\infty \Gamma)$ does not intersect any proper line segments in $\partial \Omega$. \end{example}
\begin{example}\label{eg:limitset_subgroup} Suppose $\rho : \Gamma \rightarrow \PGL_d(\Rb)$ is a $1$-Anosov representation and $\Gamma_1 \leq \Gamma$ is a quasi-convex subgroup. Then $\rho_1:=\rho|_{\Gamma_1}: \Gamma_1 \rightarrow \PGL_d(\Rb)$ is also $1$-Anosov, and the $1$-limit set $\xi^{(1)}(\partial_\infty\Gamma)$ of $\rho$ is obviously $\rho_1$-controlled. Furthermore, if $\rho_1$ is $m$-Anosov for some $m=1,\dots,d-1$, then $\xi^{(1)}(\partial_\infty\Gamma)$ is $m$-hyperconvex if and only if \begin{align*} \xi^{(1)}(x)+\xi^{(1)}(z)+\xi^{(d-m)}(y) \end{align*}
is a direct sum for all pairwise distinct $x,y,z \in\partial_\infty\Gamma$ with $y \in \partial_\infty \Gamma_1$. \end{example}
Recall that $\partial_\infty \Gamma^{(2)}$ is the set of all pairs $(x,y) \in \partial_\infty \Gamma^2$ with $x \neq y$. Then for any $(x,y)\in\partial_\infty\Gamma^{(2)}$, let $L_{x,y}$ denote the orbit $(x,y,\Rb)\subset\widetilde{U\Gamma}$ of $\phi_t$. The following proposition is one of the key tools we use to investigate regularity properties of $\rho$-controlled subsets.
\begin{proposition}\label{prop:rho_controlled_sets_projections} Suppose that $\rho : \Gamma \rightarrow \PGL_d(\Rb)$ is a $1$-Anosov representation and $M \subset \Pb(\Rb^d)$ is $\rho$-controlled. Then there exists a continuous family of continuous maps \begin{align*} \pi_{x,y} : M \setminus \left\{ \xi^{(1)}(x), \xi^{(1)}(y) \right\} \rightarrow L_{x,y} \end{align*} indexed by $(x,y) \in \partial_\infty \Gamma^{(2)}$ such that \begin{align*} \pi_{x,y} & =\rho(\gamma)^{-1} \circ \pi_{\gamma\cdot x,\gamma\cdot y}\circ \rho(\gamma), \\ x & = \lim_{p\to \xi^{(1)}(x)} \pi_{x,y}(p), \text{ and} \\ y & = \lim_{p\to \xi^{(1)}(y)} \pi_{x,y}(p) \end{align*} for all $(x,y) \in \partial_\infty \Gamma^{(2)}$ and $\gamma\in\Gamma$. \end{proposition}
Delaying the proof of Proposition~\ref{prop:rho_controlled_sets_projections} until Section \ref{sec:controlled_proof}, we describe the main application. Suppose that $\rho : \Gamma \rightarrow \PGL_d(\Rb)$ is a $1$-Anosov representation and $M \subset \Pb(\Rb^d)$ is $\rho$-controlled. Let $\{ \pi_{x,y} : (x,y)\in \partial_\infty \Gamma^{(2)}\}$ be a family of maps satisfying Proposition~\ref{prop:rho_controlled_sets_projections}. Then define the following space \begin{align}\label{eqn:P(M)} P(M) : = \left\{ (v,p) \in \wt{U \Gamma} \times M : p \neq \xi^{(1)}(v^\pm) \text{ and } v = \pi_{v^+,v^-}(p) \right\}. \end{align} Notice that there is a natural $\Gamma$ action on $P(M)$ given by \begin{align*} \gamma \cdot (v,p) = (\gamma \cdot v, \rho(\gamma)p). \end{align*} This space has the following properties.
\begin{observation}\label{obs:compact} With the notation above, \begin{enumerate} \item $\Gamma$ acts co-compactly on $P(M)$, \item for any $v \in \wt{U\Gamma}$ and $z \in M \setminus \{ \xi^{(1)}(v^+), \xi^{(1)}(v^-)\}$, there exists $t \in \Rb$ such that $(\phi_t(v), z) \in P(M)$, \item for any compact set $K \subset \wt{U\Gamma}$ there exists $\delta > 0$ such that: If $v \in K$ and $p \in M\setminus \{ \xi^{(1)}(v^+)\}$ satisfies $d_{\Pb}\left(\xi^{(1)}(v^+), p\right) \leq \delta$, then $(\phi_t (v), p) \in P(M)$ for some $t > 0$. \end{enumerate} \end{observation}
\begin{proof} (1): Since the $\Gamma$-action on $\wt{U\Gamma}$ is co-compact, there exists a compact set $K \subset \wt{U\Gamma}$ such that $\Gamma \cdot K = \wt{U\Gamma}$. Since $\{ \pi_{x,y} : (x,y)\in \partial_\infty \Gamma^{(2)}\}$ is a family of maps satisfying Proposition~\ref{prop:rho_controlled_sets_projections}, the set \begin{align*} \wh{K} : = \left\{(v,z) \in K \times M: z \neq \xi^{(1)}(v^\pm) \text{ and } v=\pi_{v^+,v^-}(z) \right\} \end{align*} is compact. Further, by definition, $\Gamma\cdot \wh{K} = P(M)$.
(2): Follows directly from the definition.
(3): Fix a compact set $K \subset \wt{U\Gamma}$. If such a $\delta > 0$ does not exist, then there exist $v_n \in K$, $p_n \in M\setminus \{ \xi^{(1)}(v_n^+)\}$, and $t_n \leq 0$ such that \begin{align*} d_{\Pb}\left(\xi^{(1)}(v_n^+), p_n\right) \leq 1/n \end{align*} and $\phi_{t_n}(v_n)=\pi_{v_n^+,v_n^-}(p_n)$. By passing to a subsequence we can suppose that $v_n \rightarrow v \in K$. But then $p_n \rightarrow \xi^{(1)}(v^+)$ as $n\to\infty$, so \begin{align*} v^+ = \lim_{n \rightarrow \infty} \pi_{v_n^+,v_n^-}(p_n) = \lim_{n \rightarrow \infty} \phi_{t_n}(v_n) \in L_{v^+, v^-} \cup \{ v^-\} \end{align*} which is a contradiction. \end{proof}
\begin{remark} The set $P(M)$ is designed to be a generalization of the following construction: Suppose $\Gamma$ is the fundamental group of a compact negatively curved Riemannian manifold $X$, $\wt{X}$ is the universal cover of $X$, $T^1 \wt{X}$ is the unit tangent bundle of $\wt{X}$, and $\phi_t$ is the geodesic flow on $T^1 \wt{X}$. Then we define \begin{align*} {\rm Perp} \subset T^1 \wt{X} \times \partial_\infty \Gamma \end{align*} to be the set of pairs $(v,z)$ such that there exists $w \in T^1_{\pi(v)} \wt{X}$ with $w \bot v$ and $\lim_{t \rightarrow \infty} \pi(\phi_tw) = z$. \end{remark}
\subsection{Properly convex domains}\label{sec:properly_convex} We now describe properly convex domains and some of their relevant properties. These will be used to prove Proposition~\ref{prop:rho_controlled_sets_projections}.
\begin{definition} \ \begin{enumerate} \item An open set $\Omega\subset\Pb(\Rb^d)$ is a \emph{properly convex domain} if its closure lies in an affine chart in $\Pb(\Rb^d)$, and it is convex, i.e. for every pair of distinct points $x,y\in\Omega$, there is a projective line segment in $\Omega$ whose endpoints are $x$ and $y$. \item Given a subset $X \subset \Pb(\Rb^d)$ the \emph{projective automorphism group of $X$} is defined to be \begin{align*} \Aut(X) = \{ g \in \PGL_d(\Rb) : g X = X\}. \end{align*} \end{enumerate} \end{definition}
Given a properly convex domain $\Omega\subset\Pb(\Rb^d)$, there is a canonical distance on $\Omega$ which is defined as follows. For any pair of points $x,y\in\Omega$, let $l$ be a projective line through $x$ and $y$, and let $a$ and $b$ be the two points of intersection of $l$ with $\partial\Omega$, ordered so that $a<x\leq y<b$ lie along $l$. Then define \[H_\Omega(x,y):=\log C(a,x,y,b).\] Here, $C$ is the cross ratio along the projective line $l$, i.e. \[C(a,x,y,b):=\frac{\norm{a-y}_2\norm{b-x}_2}{\norm{a-x}_2\norm{b-y}_2},\] where $\norm{\cdot}_2$ is the standard norm on some (equivalently, any) affine chart of $\Pb(\Rb^d)$ containing the closure of $\Omega$. One can verify from properties of the cross ratio that the map $H_\Omega:\Omega\times\Omega\to\Rb^+\cup\{0\}$ is a well-defined, continuous distance function. This is commonly known as the \emph{Hilbert metric} on $\Omega$.
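To illustrate the definition, we record a standard example (it is not used in the sequel).

\begin{example} Let $\Omega=(-1,1)$, viewed as a properly convex domain in $\Pb(\Rb^2)$ via an affine chart. For $-1<x\leq y<1$ we have $a=-1$ and $b=1$, so \begin{align*} H_\Omega(x,y)=\log\frac{(1+y)(1-x)}{(1+x)(1-y)}=2\left(\tanh^{-1}(y)-\tanh^{-1}(x)\right). \end{align*} More generally, when $\Omega$ is a round ball, $(\Omega,H_\Omega)$ is the Cayley--Klein projective model of real hyperbolic space (with this normalization, the distance is twice the usual hyperbolic distance). \end{example}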
Let $T\Omega$ denote the tangent bundle of $\Omega$ and $\pi : T\Omega \rightarrow \Omega$ the natural projection. Also, for any $v\in T\Omega$, let $l_v$ denote the oriented projective line segment in $\Omega$ through $\pi(v)$ in the direction given by $v$, and with endpoints in $\partial\Omega$. Then let $v^+$ and $v^-$ be the forward and backward endpoints of $l_v$ respectively. The Hilbert metric $H_\Omega$ is infinitesimally given by the norm \begin{eqnarray*} h_\Omega:&T\Omega&\to\Rb^+\cup\{0\},\\ &v&\mapsto \norm{v}_2\left(\frac{1}{\norm{\pi(v)-v^+}_2}+\frac{1}{\norm{\pi(v)-v^-}_2}\right), \end{eqnarray*} where $\norm{\cdot}_2$ is the standard norm on any affine chart of $\Pb(\Rb^d)$ containing the closure of $\Omega$. With this, define the \emph{unit tangent bundle} of $\Omega$ to be \begin{align*} T^1 \Omega = \{ v \in T\Omega : h_\Omega(v) = 1\}. \end{align*}
We recall the definition of a convex co-compact action on $\Omega$.
\begin{definition}\label{defn:cc} A discrete subgroup $\Lambda \leq \PGL_d(\Rb)$ \emph{acts convex co-compactly} on a properly convex domain $\Omega$ if $\Lambda \leq \Aut(\Omega)$ and there exists a closed non-empty convex subset $\Cc \subset \Omega$ such that $\Lambda \leq \Aut(\Cc)$ and the quotient $\Lambda \backslash \Cc$ is compact.
\end{definition}
\begin{remark} This is not the definition of convex co-compactness used in~\cite{DCG17}; there, groups satisfying Definition~\ref{defn:cc} are instead said to act \emph{naive convex co-compactly}. \end{remark}
We now use work of Danciger-Gueritaud-Kassel~\cite{DCG17} and the second author~\cite{Zimmer17} to construct a convex co-compact action.
\begin{theorem}\label{thm:cc_action}\cite[Theorem 1.4]{DCG17}, \cite[Theorem 1.27]{Zimmer17} Suppose that $\rho : \Gamma \rightarrow \PGL_d(\Rb)$ is a $1$-Anosov representation and there exists a properly convex domain $\Omega_0 \subset \Pb(\Rb^d)$ with $\rho(\Gamma) \leq \Aut(\Omega_0)$. Then $\rho(\Gamma)$ acts convex co-compactly on a properly convex domain $\Omega \subset \Pb(\Rb^d)$. Moreover,
\begin{enumerate}
\item $\xi^{(1)}(\partial_\infty \Gamma) \subset \partial \Omega$,
\item for every $x,y \in \partial_\infty \Gamma$ distinct, $\Omega_0$ and $\Omega$ are contained in the same connected component of
\begin{align*}
\Pb(\Rb^d) \setminus \left( \xi^{(d-1)}(x) \cup \xi^{(d-1)}(y) \right),
\end{align*}
\item we can assume
\begin{align*}
\Cc = \Omega \cap {\rm ConvHull} \left\{ \xi^{(1)}(\partial_\infty \Gamma) \right\}
\end{align*}
and
\begin{align*} \overline{\Cc} \cap \partial \Omega = \xi^{(1)}(\partial_\infty \Gamma).
\end{align*}
\end{enumerate}
\end{theorem}
\begin{remark} In~\cite[Theorem 1.27]{Zimmer17} it is assumed that $\rho$ is irreducible. \end{remark}
\subsection{The proof of Proposition~\ref{prop:rho_controlled_sets_projections}}\label{sec:controlled_proof}
We will prove Proposition~\ref{prop:rho_controlled_sets_projections} by constructing a projective model of the geodesic flow space $\wt{U \Gamma}$. This construction has several steps: first we post-compose $\rho$ with another representation to obtain a new 1-Anosov representation that preserves a properly convex domain. Theorem \ref{thm:cc_action} then gives us a convex co-compact action, which we then use to construct a projective model of the geodesic flow space. Finally, we use this projective model to construct the maps $\pi_{x,y}$.
\subsubsection{Constructing an invariant properly convex domain}\label{subsec:constructing_properly_convex_domain}
In general, a 1-Anosov representation will not preserve a properly convex domain.
\begin{example}\cite[Proposition 1.7]{DCG17} If $d$ is even and $\rho : \pi_1(S) \rightarrow \PGL_d(\Rb)$ is Hitchin (see Definition~\ref{defn:hitchin_reps}), then $\rho(\pi_1(S))$ does not preserve a properly convex domain. \end{example}
However, we will show that after post-composing with another representation, we can always find an invariant properly convex domain. In Section~\ref{sec:suff_cond_diff} we will study the regularity of these sets.
Denote the vector space of symmetric 2-tensors by $\Sym_2(\Rb^d)$ and let $D :=\dim \,\Sym_2(\Rb^d)$. Then let $S : \GL_d(\Rb) \rightarrow \GL(\Sym_2(\Rb^d))$ be the representation \begin{align*} S(g)(v \otimes v) = gv \otimes gv. \end{align*} Given a representation $\rho : \Gamma \rightarrow \PGL_d(\Rb)$, let $S(\rho) : \Gamma \rightarrow \PGL(\Sym_2(\Rb^d))$ be the representation $S(\rho) = S \circ \rho$.
Associated to $S$ are smooth embeddings $\Phi : \Pb(\Rb^d) \rightarrow \Pb(\Sym_2(\Rb^d))$ and $\Phi^* : \Gr_{d-1}(\Rb^d) \rightarrow \Gr_{D-1}(\Sym_2(\Rb^d))$ defined by \begin{align*} \Phi(v) = [v \otimes v] \end{align*} and \begin{align*} \Phi^*(W) = \Span\left\{ v \otimes w + w \otimes v : w \in W, v \in \Rb^d \right\}. \end{align*} Notice that $\Phi$ and $\Phi^*$ are both $S$-equivariant.
\begin{proposition}\label{prop:S_composition} If $\rho : \Gamma \rightarrow \PGL_d(\Rb)$ is $1$-Anosov with boundary maps $\xi^{(1)}$ and $\xi^{(d-1)}$, then $S(\rho)$ is 1-Anosov with boundary maps $\Phi \circ \xi^{(1)}$ and $\Phi^* \circ \xi^{(d-1)}$. \end{proposition}
\begin{proof} The maps $\Phi \circ \xi^{(1)}$ and $\Phi^* \circ \xi^{(d-1)}$ are clearly $S(\rho)$-equivariant, dynamics-preserving, and transverse.
Suppose that $\gamma \in \Gamma$ and $\overline{g}$ is a lift of $\rho(\gamma)$. If $\lambda_1(\overline{g}) \geq \dots \geq \lambda_d(\overline{g})$ are the absolute values of the eigenvalues of $\overline{g}$ and $\lambda_1(S(\overline{g})) \geq \lambda_2(S(\overline{g})) \geq \dots$ are the absolute values of the eigenvalues of $S(\overline{g})$, then \begin{align*} \lambda_1(S(\overline{g})) = \lambda_1(\overline{g})^2 \end{align*} and \begin{align*} \lambda_2(S(\overline{g})) = \lambda_1(\overline{g}) \lambda_2(\overline{g}). \end{align*} So \begin{align*} \frac{\lambda_1}{\lambda_2}(S(\rho)(\gamma)) =\frac{\lambda_1(\overline{g})}{\lambda_2(\overline{g})}= \frac{\lambda_1}{\lambda_2}(\rho(\gamma)). \end{align*} Then since $\rho$ is $1$-Anosov, we see that $S(\rho)$ is also $1$-Anosov. \end{proof}
Now we construct a properly convex domain in $\Pb(\Sym_2(\Rb^d))$ which is invariant under the action of $S(\PGL_d(\Rb))$. Given $X \in \Sym_2(\Rb^d)$ we say that $X$ is \emph{positive definite}, and write $X > 0$, if $(f\otimes f)(X) > 0$ for every $f \in \Rb^{d*}\setminus\{0\}$. Also, we say that $X$ is \emph{positive semidefinite}, and write $X \geq 0$, if $(f\otimes f)(X) \geq 0$ for every $f \in \Rb^{d*}$. Then define \begin{align*} \Pc^+ := \left\{ [X] : X \in \Sym_2(\Rb^d), X > 0\right\}. \end{align*}
\begin{observation}\label{obs:PD_matrices} \ \begin{enumerate} \item $\Pc^+$ is a properly convex domain in $\Pb(\Sym_2(\Rb^d))$, \item $S(\PGL_d(\Rb)) \leq \Aut(\Pc^+)$, \item $\Phi(\Pb(\Rb^d)) \subset \overline{\Pc^+}$. \end{enumerate} \end{observation}
\begin{proof} (1): Clearly $C := \{ X : X\in \Sym_2(\Rb^d), X > 0\}$ is a convex open cone in $\Sym_2(\Rb^d)$. Since \begin{align*} (f\otimes f)(X+tY) = (f\otimes f)(X)+t(f\otimes f)(Y) \end{align*} it is clear that $C$ does not contain any real affine lines. Thus $C$ is properly convex. Since $C$ projects to $\Pc^+$ we see that $\Pc^+$ is a properly convex domain.
(2): Notice that \begin{align*} (f\otimes f)(S(g)X) = \Big((f \circ g) \otimes (f \circ g) \Big)(X) \end{align*} when $f \in \Rb^{d*}$, $g \in \GL_d(\Rb)$, and $X \in\Sym_2(\Rb^d)$. So $S(\PGL_d(\Rb)) \leq \Aut(\Pc^+)$.
(3): Suppose that $[v] \in \Pb(\Rb^d)$. Then $\Phi([v]) = [v \otimes v]$ and \begin{align*} (f\otimes f)(v \otimes v) = f(v)f(v) \geq 0 \end{align*} when $f \in \Rb^{d*}$. Thus $\Phi([v]) \in \overline{\Pc^+}$. \end{proof}
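The following low-dimensional picture of $\Pc^+$ may be helpful for intuition; it is not used later.

\begin{example} When $d=2$, identify $X\in\Sym_2(\Rb^2)$ with the symmetric matrix $\begin{pmatrix} x & y\\ y & z\end{pmatrix}$, so that $X>0$ if and only if $x+z>0$ and $xz-y^2>0$. In the affine chart $\{x+z=2\}$ of $\Pb(\Sym_2(\Rb^2))$ we have $z=2-x$, and the condition $xz-y^2>0$ becomes $(x-1)^2+y^2<1$. So in this chart $\Pc^+$ is an open disc centered at the identity matrix, and $\Phi(\Pb(\Rb^2))$, the projectivized rank-one symmetric tensors, is exactly its boundary circle. \end{example}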
\subsubsection{Constructing a projective geodesic flow}\label{sec:proj_geod_flow}
In this step we construct a ``projective'' geodesic flow for any $1$-Anosov representation that acts convex co-compactly on a properly convex domain. For the rest of this subsection, suppose that $\rho : \Gamma \rightarrow \PGL_d(\Rb)$ is a $1$-Anosov representation and $\rho(\Gamma)$ acts convex co-compactly on a properly convex domain $\Omega$. We can further assume that \begin{align*}
\Cc = \Omega \cap {\rm ConvHull} \left\{ \xi^{(1)}(\partial_\infty \Gamma) \right\}
\end{align*}
and
\begin{align*} \overline{\Cc} \cap \partial \Omega = \xi^{(1)}(\partial_\infty \Gamma).
\end{align*}
Every projective line segment in $\Omega$ can be parametrized to be a geodesic in $H_\Omega$ and thus $T^1 \Omega$ has a natural geodesic flow, which we denote by $\psi_t$, obtained by flowing along the projective line segments. Using this flow we can construct a model of the flow space $\wt{U\Gamma}$.
For $x,y \in \partial_\infty \Gamma$ distinct, let $\ell_{x,y} \subset T^1 \Omega$ be the set of unit tangent vectors whose base points are contained in the line segment joining $\xi^{(1)}(x)$ to $\xi^{(1)}(y)$ and which point in the direction of $\xi^{(1)}(y)$. Then the set \begin{align*} \Gc : = \bigcup_{(x,y) \in \partial_\infty \Gamma^{(2)}}\ell_{x,y} \end{align*} is invariant under the action of $\rho(\Gamma)$, the flow $\psi_t$, and the natural $\mathbb{Z}/2\mathbb{Z}$ action on $T^1 \Omega$ given by $v \mapsto -v$. Using Theorem~\ref{thm:Gromov_uniqueness} we will construct a homeomorphism $\Gc \rightarrow \wt{U\Gamma}$.
\begin{corollary}\label{cor:Gromov_model} With the notation above, there exists a homeomorphism $T : \Gc \rightarrow \wt{U\Gamma}$ with the following properties: \begin{enumerate} \item $T$ is equivariant relative to the $\Gamma$ and $\mathbb{Z}/2\mathbb{Z}$ actions \item for every $(x,y) \in \partial_\infty \Gamma^{(2)}$, $T$ maps the flow line $\ell_{x,y}$ to the flow line $L_{x,y}$. \end{enumerate} \end{corollary}
\begin{proof} By construction $\Gamma \times (\Rb \rtimes_{\psi} \mathbb{Z}/2\mathbb{Z})$ acts on $\Gc$ and the $\Rb$ action is free. Further, $\Gc$ is homeomorphic to $\partial_\infty \Gamma^{(2)} \times \Rb$ and so we have a homeomorphism \begin{align*} \Gc / \Rb \rightarrow \partial_\infty \Gamma^{(2)}. \end{align*} Hence to apply Gromov's theorem we just have to verify that $\Gc$ has a complete metric $d$ with the following properties: \begin{enumerate} \item the actions of $\Gamma$ and $\mathbb{Z}/2\mathbb{Z}$ are isometric, \item for every $v \in \Gc$, the map $\gamma \in \Gamma \rightarrow \gamma \cdot v \in \Gc$ is a quasi-isometry, \item every $\Rb$-orbit is a quasi-geodesic in $\Gc$. \end{enumerate} To do this, define $d:\Gc\times\Gc\to\Rb$ by \begin{align*} d(v,w) =\frac{1}{\sqrt{\pi}} \int_{\Rb} H_\Omega(\gamma_v(t), \gamma_w(t)) e^{-t^2} dt \end{align*} where $\gamma_v$ and $\gamma_w$ are the unit speed geodesics with $\gamma_v^\prime(0)=v$ and $\gamma_w^\prime(0)=w$. Then conditions (1) and (3) are easy to check. To verify condition (2), notice that \begin{align*} \abs{H_\Omega(\gamma_v(t), \gamma_w(t))-H_\Omega(\pi(v), \pi(w))} \leq 2\abs{t} \end{align*} and \begin{align*} \frac{1}{\sqrt{\pi}} \int_{\Rb}2 \abs{t}e^{-t^2} dt = \frac{2}{\sqrt{\pi}}. \end{align*} Hence \begin{align*} H_\Omega(\pi(v), \pi(w)) -\frac{2}{\sqrt{\pi}} \leq d(v,w) \leq H_\Omega(\pi(v), \pi(w)) +\frac{2}{\sqrt{\pi}}. \end{align*} And so $\pi : (\Gc, d) \rightarrow (\Cc, H_\Omega)$ is a quasi-isometry. Since $(\Cc, H_\Omega)$ is a geodesic metric space and $\Gamma$ acts co-compactly on $\Cc$, the fundamental lemma of geometric group theory states that for every $c \in \Cc$, the map $\gamma \in \Gamma \rightarrow \gamma \cdot c \in \Cc$ is a quasi-isometry. So the $\Gamma$-orbit maps in $\Gc$ are also quasi-isometries. \end{proof}
\subsubsection{Finishing the proof of Proposition~\ref{prop:rho_controlled_sets_projections}}
Suppose that $\rho : \Gamma \rightarrow \PGL_d(\Rb)$ is a $1$-Anosov representation and $M \subset \Pb(\Rb^d)$ is $\rho$-controlled.
By Proposition~\ref{prop:S_composition}, the representation $S(\rho) : \Gamma \rightarrow \PGL(\Sym_2(\Rb^d))$ is 1-Anosov with boundary maps $\xi^{(1)}_S : = \Phi \circ \xi^{(1)}$ and $\xi^{(d-1)}_S : = \Phi^* \circ \xi^{(d-1)}$. By Observation~\ref{obs:PD_matrices}, \begin{align*} S(\rho)(\Gamma) \leq \Aut(\Pc^+). \end{align*} Then by Theorem~\ref{thm:cc_action} there exists a properly convex domain $\Omega \subset \Pb(\Sym_2(\Rb^d))$ where $S(\rho)(\Gamma)$ acts convex co-compactly on $\Omega$. We can assume that
\begin{align*}
\Cc = \Omega \cap {\rm ConvHull} \left\{ \xi^{(1)}(\partial_\infty \Gamma) \right\}
\end{align*}
and
\begin{align*} \overline{\Cc} \cap \partial \Omega = \xi^{(1)}(\partial_\infty \Gamma).
\end{align*}
Now let $\Gc \subset T^1 \Omega$ be the projective model of the geodesic flow constructed in Section~\ref{sec:proj_geod_flow} and let $T : \Gc \rightarrow \wt{U\Gamma}$ denote the homeomorphism in Corollary~\ref{cor:Gromov_model}.
Next, for $x,y \in \partial_\infty \Gamma$ distinct we define a projection \begin{align*} p_{x,y} : \Pb\left(\Sym_2(\Rb^d)\right) \setminus \left( \xi_S^{(d-1)}(x) \cap \xi_S^{(d-1)}(y) \right) \rightarrow \xi_S^{(1)}(x) + \xi^{(1)}_S(y) \end{align*} by \begin{align*} \{ p_{x,y}(v) \} = \Big( \xi_S^{(1)}(x) + \xi^{(1)}_S(y)\Big) \cap \Big( v + \xi_S^{(d-1)}(x) \cap \xi_S^{(d-1)}(y) \Big). \end{align*}
\begin{observation} If $m \in M \setminus \{\xi^{(1)}(x),\xi^{(1)}(y) \}$, then $p_{x,y}(\Phi(m))$ is contained in the line segment joining $\xi^{(1)}_S(x)$ to $\xi^{(1)}_S(y)$ in $\Omega$. \end{observation}
\begin{proof} Since $M$ is $\rho$-controlled, \begin{align*} m \notin \xi^{(d-1)}(x) \cup \xi^{(d-1)}(y). \end{align*} Hence \begin{align*} \Phi(m) \notin \xi^{(d-1)}_S(x) \cup \xi^{(d-1)}_S(y). \end{align*} Observation~\ref{obs:PD_matrices} implies that $\Phi(m) \in \overline{\Pc^+}$ and (2) of Theorem~\ref{thm:cc_action} says that $\Pc^+$ and $\Omega$ are in the same connected component of \begin{align*} \Pb\left(\Sym_2(\Rb^d)\right) \setminus \left( \xi_S^{(d-1)}(x) \cup \xi_S^{(d-1)}(y) \right). \end{align*} Hence $p_{x,y}(\Phi(m))$ is contained in the line segment joining $\xi^{(1)}_S(x)$ to $\xi^{(1)}_S(y)$ in $\Omega$. \end{proof}
Next, for $x,y \in \partial_\infty \Gamma$ distinct we define a map \begin{align*} \wh{p}_{x,y} : M \setminus \{\xi^{(1)}(x),\xi^{(1)}(y) \} \rightarrow \Gc \end{align*} by letting $\wh{p}_{x,y}(m)$ be the unit tangent vector based at $p_{x,y}(\Phi(m))$ pointing towards $\xi^{(1)}_S(y)$.
Finally, we define \begin{align*} \pi_{x,y} : M \setminus \left\{ \xi^{(1)}(x), \xi^{(1)}(y) \right\} \rightarrow L_{x,y} \end{align*} by $\pi_{x,y} = T \circ \wh{p}_{x,y}$. Recall that $T$ is defined in Corollary \ref{cor:Gromov_model}.
By construction we have \begin{align*} \pi_{x,y}=\rho(\gamma)^{-1} \circ \pi_{\gamma\cdot x,\gamma\cdot y}\circ \rho(\gamma) \end{align*} for all $(x,y) \in \partial_\infty \Gamma^{(2)}$ and $\gamma\in\Gamma$. Further, by (2) of Corollary~\ref{cor:Gromov_model} \begin{align*} \lim_{p\to \xi^{(1)}(x)}\pi_{x,y}(p)=x \text{ and } \lim_{p\to \xi^{(1)}(y)}\pi_{x,y}(p)=y \end{align*} for all $(x,y) \in \partial_\infty \Gamma^{(2)}$.
\subsubsection{The construction for non-surface groups}
It is worth noting that for many word hyperbolic groups, post-composing with the representation $S : \GL_d(\Rb) \rightarrow \GL(\Sym_2(\Rb^d))$ is not necessary to construct a convex co-compact action.
\begin{theorem}\cite[Theorem 1.25]{Zimmer17} Suppose $\Gamma$ is a non-elementary word hyperbolic group which is not commensurable to a non-trivial free product or the fundamental group of a closed hyperbolic surface. Then for any irreducible $1$-Anosov representation $\rho: \Gamma \rightarrow \PGL_d(\Rb)$, the group $\rho(\Gamma)$ acts convex co-compactly on a properly convex domain $\Omega \subset \Pb(\Rb^d)$. \end{theorem}
\section{Sufficient conditions for differentiability of $\rho$-controlled subsets}\label{sec:suff_cond_diff}
The goal of this section is to prove Theorem \ref{thm:main_body}, which is a generalization of Theorem~\ref{thm:main} in terms of $\rho$-controlled subsets of $\Pb(\Rb^d)$ instead of the $1$-limit set.
\subsection{The quantity $\alpha^m(\rho)$}\label{sec:optimal1}
Suppose that $\rho$ is $(1,m)$-Anosov for some $m=2,\dots,d-1$. To state Theorem \ref{thm:main_body}, we first define a quantity $\alpha^m(\rho)$ as follows. Recall that $E:=\widetilde{U\Gamma}\times\Rb^d$. Let $\overline{\rho}:\Gamma\to\SL_d(\Rb)$ be a lift of $\rho$ (see Remark \ref{rem:lift}), and let $\norm{\cdot}$ be a $\Gamma$-invariant norm on $E$, i.e. $v\mapsto \norm{\cdot}_v$ is a continuous family of norms on $\Rb^d$ parameterized by $\widetilde{U\Gamma}$, so that $\norm{\overline{\rho}(\gamma)\cdot X}_{\gamma\cdot v}=\norm{X}_v$ for all $\gamma\in\Gamma$, $v\in\widetilde{U\Gamma}$ and $X\in \Rb^d$. For any $v=(v^+,v^-,v_0) \in \widetilde{U\Gamma}$, let \begin{align} E_1(v) & = \xi^{(1)}(v^+),\nonumber \\ E_2(v) & = \xi^{(d-1)}(v^-) \cap \xi^{(m)}(v^+),\label{eqn:E}\\ E_3(v) & = \xi^{(d-m)}(v^-),\nonumber \end{align} and define $f:\widetilde{U\Gamma}\times\Rb\to\Rb$ by \begin{align}\label{eqn:f} f(v,t):=\inf_{X_i\in S_i(v)}\left\{\log\frac{\norm{X_3}_{\phi_t(v)}}{\norm{X_1}_{\phi_t(v)}}\Bigg/\log\frac{\norm{X_2}_{\phi_t(v)}}{\norm{X_1}_{\phi_t(v)}}\right\}, \end{align}
where $S_i(v):=\{X\in E_i(v):\norm{X}_v=1\}$ for $i=1,2,3$. Then define \begin{equation}\label{eqn:alpham1} \alpha^m(\rho):=\liminf_{t\to\infty}\inf_{v\in \widetilde{U\Gamma}}f(v,t). \end{equation}
To see that $\alpha^m(\rho)$ is well-defined and strictly larger than $1$, we need the following observation.
\begin{observation} \label{lem:weak flow} There exist $C_1 \geq1$ and $\beta_1 \geq0$ such that \begin{align*} \frac{1}{C_1} e^{-\beta_1 t} \norm{X}_{v} \leq \norm{X}_{\phi_t(v)} \leq C_1 e^{\beta_1 t} \norm{X}_v \end{align*} for all $v \in \wt{U\Gamma}$, $t > 0$, and $X\in \Rb^d$. \end{observation}
\begin{proof} Since $\Gamma$ acts co-compactly on $\wt{U\Gamma}$ there exists $\beta_1 \geq 0$ such that \begin{align*} e^{-\beta_1} \norm{X}_{v} \leq \norm{X}_{\phi_t(v)} \leq e^{\beta_1} \norm{X}_v \end{align*} for all $v \in \wt{U\Gamma}$, $t \in [0,1]$, and $X\in \Rb^d$. Then for any $t>0$, let $k\in\Zb^+$ so that $t\in[k-1,k)$, and note that \begin{align*} e^{-k\beta_1} \norm{X}_{v} \leq \norm{X}_{\phi_t(v)} \leq e^{k\beta_1} \norm{X}_v. \end{align*} Thus, if we let $C_1:=e^{\beta_1}$, then \begin{align*} \frac{1}{C_1}e^{-t\beta_1} \norm{X}_{v}\leq \frac{1}{C_1}e^{-(k-1)\beta_1} \norm{X}_{v} \leq \norm{X}_{\phi_t(v)} \leq C_1e^{(k-1)\beta_1} \norm{X}_v\leq C_1e^{t\beta_1} \norm{X}_v. \end{align*} \end{proof}
By Theorem \ref{prop:dom_split}, the assumption that $\rho$ is $(1,m)$-Anosov ensures that there are constants $C_2,C_3\geq 1$ and $\beta_2,\beta_3>0$ so that for all $v \in \widetilde{U\Gamma}$, $X_i \in E_i(v)$ non-zero, and $t \geq 0$, we have \begin{align*} \frac{\norm{X_2}_{\phi_t(v)}}{\norm{X_1}_{\phi_t(v)}} \geq \frac{1}{C_2} e^{\beta_2 t} \frac{\norm{X_2}_{v}}{\norm{X_1}_{v}}\,\,\,\text{ and }\,\,\,\frac{\norm{X_3}_{\phi_t(v)}}{\norm{X_2}_{\phi_t(v)}} \geq \frac{1}{C_3} e^{\beta_3 t} \frac{\norm{X_3}_{v}}{\norm{X_2}_{v}}. \end{align*} This, together with Observation \ref{lem:weak flow}, then implies that \begin{align*} \log\frac{\norm{X_3}_{\phi_t(v)}}{\norm{X_1}_{\phi_t(v)}}\Bigg/\log\frac{\norm{X_2}_{\phi_t(v)}}{\norm{X_1}_{\phi_t(v)}}&=1+\log\frac{\norm{X_3}_{\phi_t(v)}}{\norm{X_2}_{\phi_t(v)}}\Bigg/\log\frac{\norm{X_2}_{\phi_t(v)}}{\norm{X_1}_{\phi_t(v)}}\\ &\geq1+\frac{\beta_3 t-\log C_3+\log\frac{\norm{X_3}_{v}}{\norm{X_2}_{v}}}{2\beta_1 t+2\log C_1+\log \frac{\norm{X_2}_{v}}{\norm{X_1}_{v}}} \end{align*} and \begin{align*} \log\frac{\norm{X_3}_{\phi_t(v)}}{\norm{X_1}_{\phi_t(v)}}\Bigg/\log\frac{\norm{X_2}_{\phi_t(v)}}{\norm{X_1}_{\phi_t(v)}}&\leq1+\frac{2\beta_1 t+2\log C_1+\log\frac{\norm{X_3}_{v}}{\norm{X_2}_{v}}}{\beta_2 t-\log C_2+\log \frac{\norm{X_2}_{v}}{\norm{X_1}_{v}}}. \end{align*} In particular, $\alpha^m(\rho)$ is a well-defined real number that is strictly larger than $1$. Also, observe that $\alpha^m(\rho)$ does not depend on the choice of $\norm{\cdot}$, nor on the choice of lift $\overline{\rho}$ of $\rho$.
With this, we can state the main theorem of this section.
\begin{theorem}\label{thm:main_body} Let $\rho:\Gamma\to\PGL_d(\Rb)$ be $(1,m)$-Anosov for some $m=2,\dots, d-1$. Suppose that $M\subset\Pb(\Rb^d)$ is a $\rho$-controlled subset that is also a topological $(m-1)$-dimensional manifold. If
\begin{enumerate} \item[($\dagger$)] $\rho$ is $m$-Anosov and $M$ is $m$-hyperconvex, \end{enumerate} then \begin{enumerate} \item [($\ddagger$)] $M$ is $C^\alpha$ along $\xi^{(1)}(\partial_\infty \Gamma)$ for all $\alpha$ so that $1<\alpha<\alpha^m(\rho)$. \end{enumerate} Moreover, for all $x\in\partial_\infty\Gamma$, the tangent space to $M$ at $\xi^{(1)}(x)$ is $\xi^{(m)}(x)$. \end{theorem}
\begin{remark} \ \begin{enumerate}
\item See Section~\ref{sec:terminology} for the definition of ``$C^\alpha$ along.''
\item As mentioned in the introduction, in the special case when $M = \xi^{(1)}(\partial_\infty \Gamma)$ this theorem was independently proven by Pozzetti-Sambarino-Wienhard \cite{PSW18} without the estimate on $\alpha$.
\end{enumerate}
\end{remark}
It is clear from Examples \ref{eg:limitset} and \ref{eg:convex} that Theorem~\ref{thm:main} and the first part of Theorem~\ref{thm:regularity2} follow immediately from Theorem \ref{thm:main_body}.
\subsection{The key inequality}\label{sec:optimal2}
Suppose that $\rho$ is $(1,m)$-Anosov for some $m=2,\dots,d-1$. Fix a distance $d_{\Pb}$ on $\Pb(\Rb^d)$ induced by a Riemannian metric. The following lemma is the key inequality needed to prove Theorem \ref{thm:main_body}.
\begin{lemma}\label{thm:optimal_contraction} Suppose that $M\subset\Pb(\Rb^d)$ is $\rho$-controlled and $m$-hyperconvex. Then for all $\alpha$ satisfying $0<\alpha<\alpha^m(\rho)$, there exists $D \geq 1$ with the following property: for every $x \in \partial_\infty \Gamma$ and $p\in M$, we have \begin{align} \label{eq:inequality_main} d_{\Pb}\left(p, \xi^{(m)}(x) \right) \leq D d_{\Pb}\left(p, \xi^{(1)}(x) \right)^{\alpha}. \end{align} \end{lemma}
We prove Lemma \ref{thm:optimal_contraction} via a series of small observations. First, from the definition of $\alpha^m(\rho)$, one observes the following.
\begin{observation}\label{obs:B0} If $0<\alpha<\alpha^m(\rho)$, then there is a constant $B\geq 1$ so that \begin{equation}\label{eqn:C1} \frac{\norm{X_1}_{\phi_t(v)}}{\norm{X_3}_{\phi_t(v)}} \leq B \left( \frac{\norm{X_1}_{\phi_t(v)}}{\norm{X_2}_{\phi_t(v)}} \right)^{\alpha} \end{equation} for all $v\in\wt{U\Gamma}$, $t\geq 0$, and $X_i\in S_i(v)$. \end{observation}
\begin{proof} Since $0<\alpha <\alpha^m(\rho)$, there exists $T > 0$ such that $\alpha < f(v,t)$ for all $t \geq T$ and $v \in \widetilde{U\Gamma}$, so \begin{align}\label{eqn:noconstant} \frac{\norm{X_1}_{\phi_t(v)}}{\norm{X_3}_{\phi_t(v)}} < \left( \frac{\norm{X_1}_{\phi_t(v)}}{\norm{X_2}_{\phi_t(v)}} \right)^{\alpha} \end{align} for all $t \geq T$, $v \in \widetilde{U\Gamma}$, and $X_i \in S_i(v)$. On the other hand, if $0\leq t\leq T$, then the $\Gamma$-invariance of $\norm{\cdot}$ implies that both sides of the inequality \eqref{eqn:noconstant} are continuous positive functions on $[0,T]\times S_{\overline{\rho}}$, where $S_{\overline{\rho}}\subset E_{\overline{\rho}}$ is a compact fiber bundle over $U\Gamma$ whose fiber over $[v]\in U\Gamma$ is $S_1(v)\times S_2(v)\times S_3(v)$. Thus, there exists some $B \geq 1$ so that \eqref{eqn:C1} holds. \end{proof}
Next, for $i=1,2,3$ and $v \in \wt{U\Gamma}$ define $P_{i,v} : \Rb^d \rightarrow E_i(v)$ to be the projection with kernel $E_{i-1}(v)+E_{i+1}(v)$, where the arithmetic in the subscripts is done modulo $3$.
The following observation is an immediate consequence of the fact that $M$ is $\rho$-controlled and $m$-hyperconvex.
\begin{observation} \label{obs:nonzero} If $v \in \wt{U\Gamma}$ and $p\in M \setminus \{\xi^{(1)}(v^+), \xi^{(1)}(v^-)\}$ then $P_{i,v}(X) \neq 0$ for all non-zero $X\in p$ and $i=1,2,3$. \end{observation}
Choose a compact set $K\subset\wt{U\Gamma}$ so that $\Gamma\cdot K=\wt{U\Gamma}$. By enlarging $K$ if necessary, we can ensure that $\{v^+:v\in K\}=\partial_\infty\Gamma$.
Next let \begin{align*} \pi_{x,y} : M \setminus \left\{ \xi^{(1)}(x), \xi^{(1)}(y) \right\} \rightarrow L_{x,y} \end{align*} be a family of maps which satisfy Proposition~\ref{prop:rho_controlled_sets_projections}. Then, as in Section~\ref{sec:rho_controlled_sets}, define \begin{align*} P(M):=\left\{ (v,p) \in \wt{U \Gamma} \times M : p \neq \xi^{(1)}(v^\pm) \text{ and } v = \pi_{v^+,v^-}(p) \right\}. \end{align*}
Using the fact that $\Gamma \backslash P(M)$ is compact (see (1) of Observation~\ref{obs:compact}) and Observation~\ref{obs:nonzero}, we deduce the next three observations, which we use to prove Lemma \ref{thm:optimal_contraction}.
\begin{observation}\label{obs:C} There is a constant $C\geq 1$ so that
\begin{equation}\label{eqn:C2} \frac{1}{C} \leq \frac{ \norm{ P_{i,v}\left( X \right) }_v}{\norm{P_{j,v}\left( X \right)}_v} \leq C \end{equation} for all $(v,p) \in P(M)$, all non-zero $X \in p$, and all $i,j \in \{1,2,3\}$, and \begin{equation}\label{eqn:C3} \frac{1}{C} \leq \frac{\norm{X}_v }{\norm{X}_{2}}\leq C \end{equation} for all $v \in K$ and non-zero $X\in\Rb^d$. \end{observation}
\begin{proof} Since $\norm{\cdot}$ is $\Gamma$-invariant, Observation \ref{obs:nonzero} implies that the map $P(M)/\Gamma\to\Rb$ defined by \begin{align*} [v,p]&\mapsto\frac{ \norm{ P_{i,v}\left( X \right) }_v}{\norm{P_{j,v}\left( X \right)}_v} \end{align*} where $X\in p$ is a non-zero vector, is a well-defined, continuous, positive function on $P(M)/\Gamma$. Hence, (1) of Observation \ref{obs:compact} implies that there exists $C\geq 1$, so that \eqref{eqn:C2} holds. Also, since the function $K\times\Pb(\Rb^d)\to\Rb$ defined by \[(v,[X])\mapsto \frac{\norm{X}_v}{\norm{X}_{2}}\] is also well-defined, continuous, and positive, by further enlarging $C$ if necessary, we may assume that \eqref{eqn:C3} holds. \end{proof}
Using the fact that $d_{\Pb}$ is induced by a Riemannian metric we have the following estimates.
\begin{observation} \label{obs:delta1} For any sufficiently small $\delta>0$, there exists $A \geq 1$ such that: for all $v \in K$, $p \in M$ so that $d_{\Pb}\left(\xi^{(1)}(v^+), p\right) \leq \delta$, and $X \in p$ non-zero, we have \begin{equation}\label{eqn:obs1} \frac{1}{A} \frac{\norm{P_{3,v}(X)}_{2}}{\norm{P_{1,v}(X)}_{2}}\leq d_{\Pb}\left( p, \xi^{(m)}(v^+) \right) \leq A \frac{\norm{P_{3,v}(X)}_{2}}{\norm{P_{1,v}(X)}_{2}} \end{equation} and \begin{equation}\label{eqn:obs2}
\frac{1}{A} \frac{\norm{P_{2,v}(X)}_{2}}{\norm{P_{1,v}(X)}_{2}}\leq d_{\Pb}\left(p, \xi^{(1)}(v^+)\right) \leq A \frac{\norm{P_{2,v}(X)}_{2}+\norm{P_{3,v}(X)}_{2}}{\norm{P_{1,v}(X)}_{2}}. \end{equation} \end{observation}
Using Observations \ref{obs:C} and \ref{obs:delta1}, we will now prove Lemma \ref{thm:optimal_contraction}.
\begin{proof}[Proof of Lemma \ref{thm:optimal_contraction}] Let $\delta>0$ be sufficiently small so that Observation \ref{obs:delta1} holds. Using (3) of Observation~\ref{obs:compact} and possibly decreasing $\delta > 0$, we may also assume that for all $v \in K$ and $p \in M \setminus \{\xi^{(1)}(v^+)\}$ satisfying $d_{\Pb}\left(\xi^{(1)}(v^+), p\right) \leq \delta$, there is some $t>0$ so that $(\phi_t (v), p) \in P(M)$.
Elementary considerations imply that it is sufficient to prove Lemma \ref{thm:optimal_contraction} for all $x \in \partial_\infty \Gamma$ and $p\in M \setminus \{\xi^{(1)}(x)\}$ so that $d_{\Pb}\left(\xi^{(1)}(x), p\right) \leq \delta$. By the assumptions on $K$, there exists some $v \in K$ such that $v^+ = x$. Further, by our choice of $\delta$, there exists $t > 0$ such that $(\phi_t(v),p) \in P(M)$.
For any non-zero $X \in p$ and for $i=1,2,3$, let \begin{align*} X_i := \frac{P_{i,v}(X)}{\norm{P_{i,v}(X)}_v}\in S_i(v). \end{align*} By \eqref{eqn:C2}, \eqref{eqn:C3}, and \eqref{eqn:obs1}, \begin{align} d_{\Pb}\left( p, \xi^{(m)}(x) \right) &\leq A \frac{\norm{P_{3,v}(X)}_{2}}{\norm{P_{1,v}(X)}_{2}} \nonumber \\ &\leq AC^3 \frac{\norm{P_{3,v}(X)}_{v}}{\norm{P_{1,v}(X)}_{v}} \frac{\norm{P_{1,v}(X)}_{\phi_t(v)}}{\norm{P_{3,v}(X)}_{\phi_t(v)}} \label{eqn:1AC3}\\ &= AC^3 \frac{\norm{X_1}_{\phi_t(v)}}{\norm{X_3}_{\phi_t(v)}}.\nonumber \end{align}
Repeating a similar argument, but with \eqref{eqn:obs2} in place of \eqref{eqn:obs1}, proves
\begin{align}
d_{\Pb}\left( p, \xi^{(1)}(x)\right) \geq \frac{1}{AC^3} \frac{\norm{X_1}_{\phi_t(v)}}{\norm{X_2}_{\phi_t(v)}}.\label{eqn:1AC3-} \end{align}
Finally, since $0<\alpha<\alpha^m(\rho)$, Observation \ref{obs:B0} and \eqref{eqn:1AC3} give \begin{align*} d_{\Pb}\left( p, \xi^{(m)}(x) \right) \leq A BC^3 \left( \frac{\norm{X_1}_{\phi_t(v)}}{\norm{X_2}_{\phi_t(v)}}\right)^{\alpha}. \end{align*} Combining this with \eqref{eqn:1AC3-} yields \begin{align*} d_{\Pb}\left( p, \xi^{(m)}(x) \right) \leq D d_{\Pb}\left( p, \xi^{(1)}(x)\right)^{\alpha} \end{align*} where $D := A^{1+\alpha}B C^{3+3\alpha}$.
\end{proof}
\subsection{Proof of Theorem \ref{thm:main_body}}\label{sec:proof_of_main_thm}
We now use Lemma \ref{thm:optimal_contraction} to prove Theorem \ref{thm:main_body}. Again, suppose that $\rho$ is $(1,m)$-Anosov for some $m=2,\dots,d-1$.
We begin by making the following simple observation. Fix a hyperplane $\Hc\subset\Rb^d$ and a $(d-m)$-dimensional subspace $\Vc\subset \Hc$. Then consider the affine chart $\Ab_{\Hc}:=\Pb(\Rb^d)\setminus[\Hc]$ of $\Pb(\Rb^d)$. Recall that $[\Hc]$ denotes the projectivization of $\Hc$, see \eqref{eqn:projectivization}. For any $m$-dimensional subspace $\Uc\subset\Rb^d$ that is transverse to $\Vc$, let \[\Pi_{\Uc,\Vc}:\Ab_{\Hc}\to [\Uc]\cap\Ab_{\Hc}\] be the projection given by $[X]\mapsto [U_X]$, where $X=U_X+V_X$ with $U_X\in \Uc\setminus\{0\}$ and $V_X\in \Vc$. Observe that the fibers of $\Pi_{\Uc,\Vc}$ are of the form $[\Rb\cdot X+\Vc]\cap\Ab_{\Hc}$ for some $X\in\Rb^d\setminus \Hc$. In particular, the fibers of $\Pi_{\Uc,\Vc}$ do not depend on $\Uc$, i.e. if $\Uc'\subset\Rb^d$ is another $m$-dimensional subspace of $\Rb^d$ that is transverse to $\Vc$, then the fibers of $\Pi_{\Uc,\Vc}$ and the fibers of $\Pi_{\Uc',\Vc}$ agree.
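This fiber independence is easy to verify in coordinates. The following sketch (our own illustration in $\Rb^3$, with hypothetical choices of $\Vc$, $\Uc$, $\Uc'$, not part of the paper's argument) checks that two points on the same fiber $[\Rb\cdot X+\Vc]$ project to a single point under both $\Pi_{\Uc,\Vc}$ and $\Pi_{\Uc',\Vc}$:

```python
import numpy as np

# Illustration (ours) in R^3: the fibers of Pi_{U,V} do not depend on U.
# Take V = span(e3) inside the hyperplane H = span(e2, e3), and two
# hypothetical 2-planes U, U' transverse to V.
V = np.array([[0.0], [0.0], [1.0]])                  # basis of V (column)
U = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])   # U  = span(e1, e2)
Up = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]])  # U' = span(e1+e3, e2)

def u_component(W, V, X):
    """Write X = W a + V b (using the splitting R^3 = W + V) and return W a."""
    M = np.hstack([W, V])
    coeffs = np.linalg.solve(M, X)
    return W @ coeffs[: W.shape[1]]

X = np.array([1.0, 2.0, 5.0])
Y = X + 3.0 * V[:, 0]        # another point on the fiber [R*X + V]

for W in (U, Up):
    pX, pY = u_component(W, V, X), u_component(W, V, Y)
    # pX and pY span the same line, i.e. [X] and [Y] project to one point:
    assert np.allclose(np.cross(pX, pY), 0.0)
print(u_component(U, V, X), u_component(Up, V, X))
```

The projections onto the two different planes disagree as maps, but points of a common fiber are never separated by either one.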
Now, fix $y\in\partial_\infty\Gamma$. We will specialize the observation in the previous paragraph to the case where $\Hc=\xi^{(d-1)}(y)$ and $\Vc=\xi^{(d-m)}(y)$. This yields the following statement, which we record as an observation.
\begin{observation}\label{obs:parallel} Let $\Ab_y :=\Ab_{\xi^{(d-1)}(y)}$. If $x\in\partial_\infty\Gamma\setminus\{y\}$, then \begin{align*} \Pi_{x,y}:=\Pi_{\xi^{(m)}(x),\xi^{(d-m)}(y)} : \Ab_y\rightarrow \xi^{(m)}(x)\cap\Ab_y \end{align*} is a projection whose fibers do not depend on $x$. \end{observation}
Since $M$ is $\rho$-controlled, $M\setminus\{\xi^{(1)}(y)\}\subset\Ab_y$, so we may define
\[F_{x,y}:=\Pi_{x,y}|_{M\setminus\{\xi^{(1)}(y)\}}.\]
\begin{lemma}\label{lem:homeo} If $x\in\partial_\infty\Gamma\setminus\{y\}$, then the map \begin{align*} F_{x,y}:M\setminus\{\xi^{(1)}(y)\} \rightarrow \xi^{(m)}(x) \cap \Ab_y \end{align*} is a homeomorphism. \end{lemma}
\begin{remark} Lemma \ref{lem:homeo} implies that $M\setminus\{\xi^{(1)}(y)\}$ can be viewed as the graph of a map from \[\xi^{(m)}(x) \cap \Ab_y\,\,\,\text{ to }\,\,\,\Pi_{x,y}^{-1}(\xi^{(1)}(x))=\left(\xi^{(1)}(x)+\xi^{(d-m)}(y)\right) \cap \Ab_y.\] In particular, $M\setminus\{\xi^{(1)}(y)\}$ is homeomorphic to $\Rb^{m-1}$. \end{remark}
The proof of Lemma \ref{lem:homeo} requires a basic result from topology.
\begin{theorem}[The Invariance of Domain Theorem] If $U \subset \Rb^d$ is open and $f: U \rightarrow \Rb^d$ is a continuous injective map, then $f(U)$ is open and $f$ induces a homeomorphism $U \rightarrow f(U)$. \end{theorem}
\begin{proof}[Proof of Lemma~\ref{lem:homeo}]
We first observe that the map $F_{x,y}$ is injective. If $p_1, p_2 \in M \setminus \{\xi^{(1)}(y)\}$ and $F_{x,y}(p_1) = F_{x,y}(p_2)$, then \begin{align*} p_1 + \xi^{(d-m)}(y) = p_2 + \xi^{(d-m)}(y), \end{align*} so the sum $p_1 + p_2 + \xi^{(d-m)}(y)$ is not direct. The assumption that $M$ is $m$-hyperconvex implies that $p_1=p_2$.
Since $F_{x,y}$ is continuous and injective, we can now apply the invariance of domain theorem to deduce that $F_{x,y}$ is a homeomorphism onto an open set $I(x,y)$ in $\xi^{(m)}(x)\cap \Ab_y$. To finish the proof, we now need to show that \begin{align}\label{eqn:Izy} I(x,y) = \xi^{(m)}(x) \cap \Ab_y. \end{align}
Suppose $\gamma \in \Gamma$ has infinite order, and denote its attracting and repelling fixed points in $\partial_\infty \Gamma$ by $\gamma^+$ and $\gamma^-$ respectively. Note that $\rho(\gamma)\cdot I(\gamma^+, \gamma^-) = I(\gamma^+,\gamma^-)$ and $\xi^{(1)}(\gamma^+)\in I(\gamma^+,\gamma^-)$. Since $\rho$ is $1$-Anosov, \begin{align*} \xi^{(m)}(\gamma^+)\cap \Ab_{\gamma^-} = \bigcup_{n \in \Nb}\rho(\gamma)^{-n}\cdot \Oc \end{align*} for any open set $\Oc \subset\xi^{(m)}(\gamma^+) \cap \Ab_{\gamma^-}$ containing $\xi^{(1)}(\gamma^+)$. Hence \begin{align*} I(\gamma^+, \gamma^-) = \bigcup_{n \in \Nb} \rho(\gamma)^{-n}\cdot I(\gamma^+, \gamma^-) = \xi^{(m)}(\gamma^+) \cap \Ab_{\gamma^-}. \end{align*} The density of $\{ (\gamma^+, \gamma^-) : \gamma \in \Gamma \text{ has infinite order} \}$ in $\partial_\infty \Gamma \times \partial_\infty \Gamma$ proves \eqref{eqn:Izy}. \end{proof}
With Lemma \ref{lem:homeo}, we can now proceed to the proof of Theorem \ref{thm:main_body}.
\begin{proof}[Proof of Theorem \ref{thm:main_body}] Fix $y\in\partial_\infty\Gamma$, and as before, consider the affine chart \[\Ab_y:=\Pb(\Rb^d) \setminus [\xi^{(d-1)}(y)].\] By working in some particular affine coordinates in the affine chart $\Ab_y$, we will show that Theorem \ref{thm:main_body} holds for all $x\in \partial_\infty\Gamma\setminus\{y\}$. Since $y$ was chosen arbitrarily, this suffices to prove the theorem.
Let $x \in \partial_\infty \Gamma\setminus\{y\}$ and choose affine coordinates $\Ab_y\simeq\Rb^{d-1}$ so that in these coordinates, \begin{itemize} \item $\xi^{(1)}(x) = 0$, \item $\xi^{(m)}(x)\cap\Ab_y = \Rb^{m-1} \times \{0\}$, \item $\left(\xi^{(1)}(x)+ \xi^{(d-m)}(y) \right)\cap\Ab_y = \{0\} \times \Rb^{d-m}$. \end{itemize} For any $z\in\partial_\infty\Gamma$ sufficiently close to $x$, there exists a unique affine map \begin{align*} A_z : \Rb^{m-1} \times \{0\} \rightarrow \{0\} \times \Rb^{d-m} \end{align*} whose graph is $H_z:=\xi^{(m)}(z)\cap\Ab_y $, i.e. \begin{align*} H_z = \left\{ u+A_z(u) : u \in \Rb^{m-1} \times \{0\} \right\}, \end{align*} see Figure \ref{fig:graph}. Let $L_z: \Rb^{m-1}\times\{0\} \rightarrow \{0\}\times\Rb^{d-m}$ denote the linear part of $A_z$ (in our choice of affine coordinates). Note that the maps $z \mapsto A_z$ and $z \mapsto L_z$ are continuous.
\begin{figure}
\caption{$M$ in the affine chart $\Ab_y$.}
\label{fig:graph}
\end{figure}
For any $z\in\partial_\infty\Gamma\setminus\{y\}$, Observation \ref{obs:parallel} implies that $\Pi_{z,y}^{-1}\left(\xi^{(1)}(z)\right)$ is parallel to $\Pi_{x,y}^{-1}\left(\xi^{(1)}(x)\right)=\{0\}\times\Rb^{d-m}$ in $\Ab_y$. Thus, as a consequence of Lemma~\ref{lem:homeo}, there exists a map \begin{align*} f_z : H_z \rightarrow \{0\} \times \Rb^{d-m} \end{align*} whose graph is $M \setminus \{\xi^{(1)}(y)\}$, i.e. \begin{align*} M \setminus \{\xi^{(1)}(y)\} = \{ u + f_z(u) : u \in H_z\}. \end{align*} Further, Lemma~\ref{thm:optimal_contraction} implies that for all $\alpha$ satisfying $1\leq\alpha <\alpha^m(\rho)$ and all $\xi^{(1)}(z)+ h \in H_z$, we have \begin{align*} f_z\left(\xi^{(1)}(z)+h\right) = { \rm o}(\norm{h}^\alpha). \end{align*}
Now, for any $u \in \Rb^{m-1} \times \{0\}$ and $z\in\partial_\infty\Gamma\setminus\{y\}$, \begin{align*} u + f_{x}(u) = \Big( u+ A_z(u) \Big) + f_z\Big( u+ A_z(u) \Big). \end{align*} Also, if $u_z:= \Pi_{x, y}(\xi^{(1)}(z))$ then $u_z+ A_z(u_z) = \xi^{(1)}(z)$, which means that $f_{x}(u_z) = A_z(u_z)$. Thus, for all $h\in\Rb^{m-1}\times\{0\}$, \begin{align*} f_{x}(u_z+h) &= A_z(u_z+h) + f_z\Big( u_z+h+ A_z(u_z+h) \Big) \\ & = A_z(u_z)+L_z(h) + f_z\Big( u_z+h+ A_z(u_z)+L_z(h) \Big) \\ &= f_{x}(u_z) + L_z(h) + f_z\Big(\xi^{(1)}(z)+h+L_z(h) \Big) \\ & = f_{x}(u_z) +L_z(h)+ { \rm o}(\norm{h+L_z(h)}^{\alpha}) \\ & = f_{x}(u_z) +L_z(h)+ { \rm o}(\norm{h}^{\alpha}). \end{align*} This proves the theorem.
\end{proof}
\section{Eigenvalue description of $\alpha^m(\rho)$}\label{sec:optimal}
For the rest of this section, let $\rho:\Gamma\to\PGL_d(\Rb)$ be a $(1,m)$-Anosov representation. Recall that in the introduction, we defined \begin{align*} \alpha_m(\rho) := \inf_{\gamma\in\Gamma}\left\{\log\frac{\lambda_1}{\lambda_{m+1}}(\rho(\gamma))\Bigg/\log\frac{\lambda_1}{\lambda_{m}}(\rho(\gamma)): \frac{\lambda_1}{\lambda_{m}}(\rho(\gamma)) \neq 1 \right\}. \end{align*}
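For intuition, here is a purely hypothetical numerical illustration (not taken from the paper): if some $\rho(\gamma)$ had eigenvalue moduli $\lambda_1=8$, $\lambda_2=4$ and $\lambda_3=2$, then for $m=2$ the term of the infimum contributed by $\gamma$ would be

```latex
\log\frac{\lambda_1}{\lambda_{3}}(\rho(\gamma))\Bigg/\log\frac{\lambda_1}{\lambda_{2}}(\rho(\gamma))
  = \frac{\log(8/2)}{\log(8/4)} = \frac{\log 4}{\log 2} = 2,
```

so if every element of $\rho(\Gamma)$ had this spectral profile, one would obtain $\alpha_2(\rho)=2$.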
The main result of this section is the following theorem.
\begin{theorem}\label{thm:alphas} If $\rho$ is irreducible, then \begin{align*} \alpha_m(\rho) = \alpha^m(\rho), \end{align*} where $\alpha^m(\rho)$ is the quantity defined by \eqref{eqn:alpham1}. \end{theorem}
The proof of Theorem \ref{thm:alphas} will be given in the following two subsections. In the first, we will use general properties of singular values to relate the quantity $f(v,t)$ (the function $f$ was defined by \eqref{eqn:f}) to the ratios of eigenvalues of $\rho(\gamma)$ when $v^\pm=\gamma^\pm$. In the second, we will use a deep result due to Benoist to finish the proof.
Before starting the proof we make several reductions. First, $\alpha_m(\rho)$ and $\alpha^m(\rho)$ are invariant under passing to a finite index subgroup (see Remark~\ref{rem:stablility}). So by Remark \ref{rem:lift}, we can assume that $\rho$ admits a lift $\overline{\rho}:\Gamma\to\SL_d(\Rb)$. Then, by passing to another finite index subgroup, we may also assume that the Zariski closure of $\rho(\Gamma)$ is connected. By Proposition~\ref{prop:strongly_irreducible} this representation is still irreducible.
\subsection{Singular values along closed orbits}
Let $E:=\wt{U\Gamma}\times\Rb^d$, and for $i=1,2,3$, let $E_i$ be the $\Gamma$-invariant sub-bundle of $E$ defined by \eqref{eqn:E}. (Recall that the $\Gamma$-action on $E$ is given by $\gamma\cdot (v,X)=(\gamma\cdot v,\overline{\rho}(\gamma)\cdot X)$.) Also, choose a $\Gamma$-invariant inner product $\langle\cdot,\cdot\rangle$ on $E$ so that $E=E_1\oplus E_2\oplus E_3$ is an orthogonal splitting. We may assume that the norm $\norm{\cdot}$ used in the definition of $\alpha^m(\rho)$ is given by $\norm{\cdot}_v=\sqrt{\langle\cdot,\cdot\rangle_v}$ for all $v\in \widetilde{U\Gamma}$.
For any $v,w\in \widetilde{U\Gamma}$, let $\sigma_i(v,w)$ denote the $i$-th singular value of \[\id=\id_{v,w}:(\Rb^d,\norm{\cdot}_v)\to(\Rb^d,\norm{\cdot}_w),\] and for $(v,t)\in \widetilde{U\Gamma}\times\Rb$, denote $\sigma_i(v,t):=\sigma_i(v,\phi_t(v))$.
Using this, define the function $h:\widetilde{U\Gamma}\times\Rb\to\Rb$ by \[h(v,t):=\log\frac{\sigma_{d-m}(v,t)}{\sigma_d(v,t)}\Bigg/\log\frac{\sigma_{d-m+1}(v,t)}{\sigma_{d}(v,t)}.\] The functions $h$ and $f$ (recall that $f$ is defined by \eqref{eqn:f}) are related by the following lemma.
\begin{lemma} \label{lem:f and g} For all $v\in \widetilde{U\Gamma}$ and for sufficiently large $t$, we have \[f(v,t)=h(v,t).\] In particular, $\displaystyle\alpha^m(\rho)=\liminf_{t\to\infty}\inf_{v\in \widetilde{U\Gamma}}h(v,t)$. \end{lemma}
\begin{proof} Since $E=E_1\oplus E_2\oplus E_3$ is an orthogonal splitting, Theorem \ref{prop:dom_split} implies that for all $v\in \widetilde{U\Gamma}$ and for sufficiently large $t$, \begin{itemize} \item $\sigma_d(v,t)=\norm{X}_{\phi_t(v)}$ for all $X\in S_1(v)$, \item $\sigma_{d-m+1}(v,t)=\sup_{X\in S_2(v)} \norm{X}_{\phi_t(v)}$, \item $\sigma_{d-m}(v,t)=\inf_{X\in S_3(v)} \norm{X}_{\phi_t(v)}$. \end{itemize} Thus, \begin{eqnarray*} h(v,t)&=&\log\frac{\sigma_{d-m}(v,t)}{\sigma_d(v,t)}\Bigg/\log\frac{\sigma_{d-m+1}(v,t)}{\sigma_{d}(v,t)}\\ &=&\log\frac{\inf_{X\in S_3(v)} \norm{X}_{\phi_t(v)}}{\sup_{X\in S_1(v)}\norm{X}_{\phi_t(v)}}\Bigg/\log\frac{\sup_{X\in S_2(v)} \norm{X}_{\phi_t(v)}}{\inf_{X\in S_1(v)}\norm{X}_{\phi_t(v)}}\\ &=&\inf_{X_i\in S_i(v)}\left\{\log\frac{\norm{X_3}_{\phi_t(v)}}{\norm{X_1}_{\phi_t(v)}}\Bigg/\log\frac{\norm{X_2}_{\phi_t(v)}}{\norm{X_1}_{\phi_t(v)}}\right\}\\ &=&f(v,t). \end{eqnarray*} Notice that in the third equality we used the fact that $\dim E_1(v) = 1$. \end{proof}
The following observation gives a simple but important bound for ratios of singular values. The proof is a straightforward calculation which we omit.
\begin{observation}\label{prop:easy comp} Suppose that for $i=1,\dots,4$, $\norm{\cdot}_{(i)}$ are norms on $\Rb^d$ so that for all $X\in\Rb^d$, $\frac{1}{A}\leq\frac{\norm{X}_{(1)}}{\norm{X}_{(2)}}\leq A$ and $\frac{1}{A'}\leq\frac{\norm{X}_{(3)}}{\norm{X}_{(4)}}\leq A'$ for some $A,A'>1$. Let $L:\left(\Rb^d,\norm{\cdot}_{(1)}\right)\to\left(\Rb^d,\norm{\cdot}_{(3)}\right)$ and $L':\left(\Rb^d,\norm{\cdot}_{(2)}\right)\to\left(\Rb^d,\norm{\cdot}_{(4)}\right)$ denote the identity maps. Then \[\frac{1}{AA'}\leq\frac{\sigma_i(L)}{\sigma_i(L')}\leq AA'.\] \end{observation}
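The observation can be sanity-checked numerically. The following sketch (our own illustration, not part of the paper, using hypothetical norms $\norm{x}_{(i)}=\norm{P_i x}_2$ built from matrices `P1`-`P4`) verifies the bound $\sigma_i(L)/\sigma_i(L')\in[1/(AA'),AA']$ in a random example:

```python
import numpy as np

# Numerical sanity check (ours) of the observation: if ||.||_(1) ~ ||.||_(2)
# up to a factor A and ||.||_(3) ~ ||.||_(4) up to a factor A', then the
# singular values of the identity maps L : (||.||_(1)) -> (||.||_(3)) and
# L' : (||.||_(2)) -> (||.||_(4)) satisfy 1/(AA') <= s_i(L)/s_i(L') <= AA'.
rng = np.random.default_rng(0)
d = 4

# Hypothetical norms ||x||_(i) = ||P_i x||_2 given by invertible matrices P_i.
Q = 0.1 * rng.standard_normal((d, d))
P1 = np.eye(d)
P2 = np.eye(d) + Q @ Q.T
P3 = np.diag([2.0, 1.0, 1.0, 0.5])
P4 = P3 + 0.5 * (Q @ Q.T)

def comp_const(P, R):
    """Smallest A with (1/A)||Rx|| <= ||Px|| <= A ||Rx|| for all x."""
    return max(np.linalg.svd(P @ np.linalg.inv(R), compute_uv=False)[0],
               np.linalg.svd(R @ np.linalg.inv(P), compute_uv=False)[0])

A, Ap = comp_const(P1, P2), comp_const(P3, P4)

# The singular values of the identity map (R^d,||.||_(1)) -> (R^d,||.||_(3))
# are the singular values of P3 @ P1^{-1}, and similarly for L'.
sv_L = np.linalg.svd(P3 @ np.linalg.inv(P1), compute_uv=False)
sv_Lp = np.linalg.svd(P4 @ np.linalg.inv(P2), compute_uv=False)

ratios = sv_L / sv_Lp
print(ratios, 1 / (A * Ap), A * Ap)
```

Here the comparability constants are computed as operator norms of the change-of-norm maps, which is the smallest choice of $A$ and $A'$ in the hypothesis.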
The next lemma relates the function $h$ to the eigenvalues of $\rho(\gamma)$.
\begin{lemma}\label{lem:g and eig} Let $\gamma\in\Gamma$ be an infinite-order element, and let $v=(v^+,v^-,v_0)\in \widetilde{U\Gamma}$ be such that $v^\pm=\gamma^\pm$. Then \[\lim_{t\to\infty}h(v,t)=\log\frac{\lambda_1}{\lambda_{m+1}}(\rho(\gamma))\Bigg/\log\frac{\lambda_1}{\lambda_{m}}(\rho(\gamma)).\] \end{lemma}
\begin{proof} Let $T$ denote the period of $\gamma$ (see Section \ref{sec:flowspace}). For all $k \in \Zb^+$ and $X \in \Rb^d$ \[\norm{X}_{\phi_{kT}(v)}=\norm{X}_{\gamma^k\cdot v}=\norm{\overline{\rho}(\gamma^{-k})\cdot X}_v.\] Hence, the singular values of the two linear maps \[\id:(\Rb^d,\norm{\cdot}_v)\to(\Rb^d,\norm{\cdot}_{\phi_{kT}(v)})\,\,\,\text{ and } \overline{\rho}(\gamma^{-k}):(\Rb^d,\norm{\cdot}_v)\to(\Rb^d,\norm{\cdot}_v)\,\,\,\] agree.
It is a straightforward calculation to show that for any inner product $\langle\cdot,\cdot\rangle$ on $\Rb^d$ and any invertible linear map $\overline{g}:(\Rb^d,\langle\cdot,\cdot\rangle)\to(\Rb^d,\langle\cdot,\cdot\rangle)$, \begin{align}\label{eqn:Ben} \lim_{k\to\infty}\frac{1}{k}\log\sigma_i(\overline{g}^k)=\log\lambda_i(\overline{g}). \end{align} Thus, we can deduce that \begin{equation}\label{eqn:Benoist}
\lim_{k\to\infty}\sigma_i(v,kT)^{\frac{1}{k}}=\lim_{k\to\infty}\sigma_i(\overline{\rho}(\gamma^{-k}))^{\frac{1}{k}}=|\lambda_i(\overline{\rho}(\gamma^{-1}))|=\frac{1}{|\lambda_{d+1-i}(\overline{\rho}(\gamma))|},\end{equation} which implies that \begin{eqnarray}\label{eqn:period contraction} \lim_{k\to\infty}h(v,kT)&=&\lim_{k\to\infty}\left(\log\frac{\sigma_{d-m}(v,kT)}{\sigma_d(v,kT)}\Bigg/\log\frac{\sigma_{d-m+1}(v,kT)}{\sigma_{d}(v,kT)}\right)\\ &=&\log\frac{\lambda_1}{\lambda_{m+1}}(\rho(\gamma))\Bigg/\log\frac{\lambda_1}{\lambda_{m}}(\rho(\gamma)).\nonumber \end{eqnarray}
For any $t>0$, let $k\in\Zb^+$ be such that $t\in[kT,(k+1)T)$. Then Observation \ref{lem:weak flow} implies that there are constants $C\geq 1$ and $\beta\geq 0$ so that \[\frac{1}{C}e^{-\beta T}\leq\frac{\norm{X}_{\phi_t v}}{\norm{X}_{\phi_{kT}v}}\leq Ce^{\beta T}\] for all such $t$ and $k$, and all $X\in\Rb^d$. This, together with Observation \ref{prop:easy comp}, implies that for all $i=1,\dots,d$, \[\frac{1}{C}e^{-\beta T}\leq\frac{\sigma_i(v,kT)}{\sigma_i(v,t)}\leq Ce^{\beta T}.\] Also, since $\rho$ is $(1,m)$-Anosov, we know that \[\lim_{k\to\infty}\log\frac{\sigma_{d-m}(v,kT)}{\sigma_d(v,kT)}=\infty=\lim_{k\to\infty}\log\frac{\sigma_{d-m+1}(v,kT)}{\sigma_{d}(v,kT)}.\] Hence, \begin{eqnarray*} \limsup_{t\to\infty}h(v,t)&=&\limsup_{t\to\infty}\log\frac{\sigma_{d-m}(v,t)}{\sigma_d(v,t)}\Bigg/\log\frac{\sigma_{d-m+1}(v,t)}{\sigma_{d}(v,t)}\\ &\leq&\limsup_{k\to\infty}\frac{\displaystyle2\log C+2\beta T+\log\frac{\sigma_{d-m}(v,kT)}{\sigma_d(v,kT)}}{\displaystyle-2\log C-2\beta T+\log\frac{\sigma_{d-m+1}(v,kT)}{\sigma_{d}(v,kT)}}\\ &=&\lim_{k\to\infty}h(v,kT). \end{eqnarray*} By a similar argument, $\displaystyle\liminf_{t\to\infty}h(v,t)\geq\lim_{k\to\infty}h(v,kT)$, so $\displaystyle\lim_{t\to\infty}h(v,t)=\lim_{k\to\infty}h(v,kT)$. This, together with \eqref{eqn:period contraction}, implies the lemma. \end{proof}
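The limit \eqref{eqn:Ben} used in the proof above can be checked numerically on a concrete matrix. The following sketch (our own illustration, with a hypothetical non-normal matrix $g$ whose eigenvalues are $2$ and $1/2$) approximates $|\lambda_i(g)|$ by $\sigma_i(g^k)^{1/k}$:

```python
import numpy as np

# Sanity check (ours) of the limit lim_k sigma_i(g^k)^(1/k) = |lambda_i(g)|,
# for a hypothetical non-normal matrix g with eigenvalues 2 and 1/2.
g = np.array([[2.0, 1.0],
              [0.0, 0.5]])
k = 60
gk = np.linalg.matrix_power(g, k)
sigma = np.linalg.svd(gk, compute_uv=False)  # singular values, descending
approx = sigma ** (1.0 / k)                  # k-th roots approximate |lambda_i|
print(approx)  # approximately [2, 0.5]
```

Note that $g$ is upper triangular but not diagonal, so its singular values differ from its eigenvalue moduli for every fixed power; only the $k$-th roots converge.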
\subsection{Asymptotic cones and eigenvalues}\label{sec:cones}
Recall that $\lambda,\mu:\GL_d(\Rb)\to\Rb^d$ respectively denote the Jordan and Cartan projections defined in Section \ref{sec:properties}. For any subgroup $G \leq \SL_d(\Rb)$, let $\Cc_\lambda(G) \subset \Rb^d$ denote the smallest closed cone containing $\lambda(G)$, that is \begin{align*} \Cc_\lambda(G) := \overline{\bigcup_{\overline{g} \in G} \Rb_{>0} \cdot \lambda(\overline{g})}. \end{align*} Also, let $\Cc_\mu(G)$ denote the \emph{asymptotic cone of $\mu(G)$}, that is \begin{align*} \Cc_\mu(G) := \{ x \in \Rb^d : \exists \overline{g_n} \in G, \exists t_n \searrow 0, \text{ with } \lim_{n \rightarrow \infty} t_n \mu(\overline{g_n}) =x\}. \end{align*}
A deep result of Benoist~\cite{B1997} implies the following.
\begin{theorem}\label{thm:cones} If $G \leq \SL_d(\Rb)$ is a connected semisimple real algebraic subgroup which acts irreducibly on $\Rb^d$ and $\Lambda \leq G$ is a Zariski dense subgroup, then \begin{align*} \Cc_\mu(\Lambda) = \Cc_\lambda(\Lambda). \end{align*} \end{theorem}
\begin{remark} Notice that for any subgroup $\Lambda\subset \SL_d(\Rb)$, the fact that $\Cc_\lambda(\Lambda) \subset \Cc_\mu(\Lambda)$ is a consequence of \eqref{eqn:Ben}. \end{remark}
A proof of Theorem \ref{thm:cones} is given in the appendix. Theorem \ref{thm:cones} can be used to prove the following lemma.
\begin{lemma} \label{lem:preliminary-inequality} For any $\epsilon > 0$ there exists $R > 0$ such that \begin{align*} \alpha_m(\rho)-\epsilon < \log\frac{\mu_1}{\mu_{m+1}}(\rho(\gamma))\Bigg/\log\frac{\mu_1}{\mu_{m}}(\rho(\gamma)) \end{align*} for all $\gamma \in \Gamma$ with $\norm{\mu(\overline{\rho}(\gamma))}_2\geq R$. \end{lemma}
\begin{proof} By Proposition~\ref{prop:Zclosure} and Theorem~\ref{thm:cones}, $\Cc_\mu(\overline{\rho}(\Gamma)) = \Cc_\lambda(\overline{\rho}(\Gamma))$. Fix $\epsilon > 0$ and suppose for contradiction that there exists a sequence $\{\gamma_n\}_{n=1}^\infty \subset \Gamma$ such that for all $n$, $\norm{\mu(\overline{\rho}(\gamma_n))}_2\geq n$ and \begin{align*} \alpha_m(\rho)-\epsilon \geq \log\frac{\mu_1}{\mu_{m+1}}(\rho(\gamma_n))\Bigg/\log\frac{\mu_1}{\mu_{m}}(\rho(\gamma_n)). \end{align*} By passing to a subsequence we can suppose that \begin{align*} \frac{1}{\norm{\mu(\overline{\rho}(\gamma_n))}_2} \mu(\overline{\rho}(\gamma_n)) \rightarrow x =(x_1,\dots,x_d)\in \Cc_\mu(\overline{\rho}(\Gamma)) = \Cc_\lambda(\overline{\rho}(\Gamma)). \end{align*} It follows that $\alpha_m(\rho)-\epsilon \geq\frac{x_1-x_{m+1}}{x_1-x_m}$. On the other hand, the definitions of $\alpha_m(\rho)$ and $\Cc_\lambda(\overline{\rho}(\Gamma))$ imply that \[\alpha_m(\rho) \leq \frac{x_1-x_{m+1}}{x_1-x_m},\] which is a contradiction.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:alphas}] It is clear from Lemma \ref{lem:f and g} and Lemma \ref{lem:g and eig} that $\alpha^m(\rho)\leq\alpha_m(\rho)$. We will now prove $\alpha^m(\rho)\geq\alpha_m(\rho)$. Let $K\subset \widetilde{U\Gamma}$ be a compact fundamental domain for the $\Gamma$-action on $\widetilde{U\Gamma}$. Since $h(v,t)=h(\gamma\cdot v,t)$ for all $\gamma\in\Gamma$ and all $v\in\widetilde{U\Gamma}$, by Lemma \ref{lem:f and g}, it is enough to show that \begin{align*} \alpha_m(\rho) \leq \liminf_{t\to\infty}\inf_{v\in K}h(v,t). \end{align*}
Fix $C > 1$ such that $\frac{1}{C} \norm{X}_{2} \leq \norm{X}_{v} \leq C \norm{X}_{2}$ for all $v\in K$ and $X \in \Rb^d$. By Lemma \ref{lem:preliminary-inequality}, there exists, for every $\epsilon>0$, a positive number $R' > 0$ such that \begin{align*} \alpha_m(\rho)-\epsilon < \log\frac{\mu_1}{\mu_{m+1}}(\rho(\gamma))\Bigg/\log\frac{\mu_1}{\mu_{m}}(\rho(\gamma)) \end{align*} for all $\gamma \in \Gamma$ with $\norm{\mu(\overline{\rho}(\gamma))}_2\geq R'$. Since $\rho$ is $1$-Anosov and \begin{align*} \log\frac{\mu_1}{\mu_{2}}(\rho(\gamma)) \leq \log\frac{\mu_1}{\mu_{k}}(\rho(\gamma)) \end{align*} for $k > 1$, Theorem~\ref{thm:SV_char_of_Anosov} and Corollary~\ref{thm:QI_Anosov} together imply that \[\log\frac{\mu_1}{\mu_{m+1}}(\rho(\gamma))\,,\,\log\frac{\mu_1}{\mu_{m}}(\rho(\gamma))\geq \frac{1}{A''}\norm{\mu(\overline{\rho}(\gamma))}_2-B''\] for some $A''\geq 1$ and $B''\geq 0$. Hence, there exists $R \geq R'$ such that
\begin{align*} \alpha_m(\rho)-2\epsilon < \left(\log\frac{\mu_1}{\mu_{m+1}}(\rho(\gamma))-4\log C\right)\bigg/\left(\log\frac{\mu_1}{\mu_{m}}(\rho(\gamma))+4\log C\right) \end{align*} for all $\gamma \in \Gamma$ with $\norm{\mu(\overline{\rho}(\gamma))}_2\geq R$.
Let $d=d_{\widetilde{U\Gamma}}$ denote the $\Gamma$-invariant metric on $\widetilde{U\Gamma}$ specified in Section \ref{sec:flowspace}, and let $D$ be the diameter of $K$. By Corollary~\ref{thm:QI_Anosov} and the fact that the orbit maps $\Gamma\to\widetilde{U\Gamma}$ are quasi-isometries, there exist $A \geq 1$ and $B \geq 0$ such that \begin{align*} \frac{1}{A} \norm{\mu(\overline{\rho}(\gamma))}_2 - B \leq d(v, \gamma \cdot v) \leq A \norm{\mu(\overline{\rho}(\gamma))}_2 + B \end{align*} for all $v \in \wt{U\Gamma}$. Also, since every $\phi_t$-orbit in $\widetilde{U\Gamma}$ is a quasi-isometric embedding, there exist $A'\geq1$ and $B'\geq0$ so that \begin{align*}
\frac{1}{A'} |t|- B' \leq d(v,\phi_t(v))\leq A' |t| + B' \end{align*} for all $t\in\Rb$ and $v \in \wt{U\Gamma}$.
Fix $t > A'(B'+D + AR + B)$ and $v \in K$. Let $\gamma \in \Gamma$ be such that $\gamma^{-1}\cdot\phi_t (v) \in K$. By the definition of $C$, we see that for any $X\in\Rb^d$, \[\frac{1}{C}\leq\frac{\norm{X}_v}{\norm{X}_{2}}, \frac{\norm{\overline{\rho}(\gamma)^{-1}\cdot X}_{\gamma^{-1}\cdot \phi_t(v)}}{\norm{\overline{\rho}(\gamma)^{-1}\cdot X}_{2}} \leq C.\]
Since $X\mapsto \norm{X}_{\phi_t (v)} = \norm{\overline{\rho}(\gamma)^{-1}\cdot X}_{\gamma^{-1}\cdot \phi_t (v)} $ and $X\mapsto\norm{\overline{\rho}(\gamma)^{-1}\cdot X}_{2}$ are both norms on $\Rb^d$, it follows from Proposition \ref{prop:easy comp} that \[\frac{1}{C^2}\frac{1}{\mu_{d+1-i}(\overline{\rho}(\gamma))}=\frac{1}{C^2}\mu_{i}(\overline{\rho}(\gamma)^{-1})\leq\sigma_i(v,t)\leq C^2\mu_{i}(\overline{\rho}(\gamma)^{-1})= C^2\frac{1}{\mu_{d+1-i}(\overline{\rho}(\gamma))}.\] Also, $d(\gamma\cdot v,v)\geq d(v,\phi_t(v))-d( \phi_t (v), \gamma\cdot v) \geq \frac{1}{A'}t-B'-D$, which means \begin{align*} \norm{\mu(\overline{\rho}(\gamma))}_2 \geq \frac{1}{A} \left(d(\gamma\cdot v, v) - B\right) \geq \frac{\frac{1}{A'}t-B'-D-B}{A} \geq R. \end{align*} Hence, \begin{align*} h(v,t) &= \log\frac{\sigma_{d-m}(v,t)}{\sigma_d(v,t)}\Bigg/\log\frac{\sigma_{d-m+1}(v,t)}{\sigma_{d}(v,t)} \\ & \geq \left(\log\frac{\mu_1}{\mu_{m+1}}(\rho(\gamma))-4\log C\right)\bigg/\left(\log\frac{\mu_1}{\mu_{m}}(\rho(\gamma))+4\log C\right)\\ & > \alpha_m(\rho)-2\epsilon. \end{align*} Since $v \in K$ and $t > A'(B'+D + AR+ B)$ were arbitrary, \begin{align*} \alpha_m(\rho)-2\epsilon \leq \liminf_{t\to\infty}\inf_{v\in K}h(v,t). \end{align*} Then since $\epsilon > 0$ was also arbitrary, we see that \begin{equation*} \alpha_m(\rho) \leq \liminf_{t\to\infty}\inf_{v\in K}h(v,t). \qedhere \end{equation*} \end{proof}
\section{Optimal regularity}\label{sec:regularity}
In this section we prove Theorem~\ref{thm:regularity} and the second part of Theorem~\ref{thm:regularity2}. By Examples \ref{eg:limitset} and \ref{eg:convex}, it is sufficient to prove the following theorem.
\begin{theorem} \label{thm:regularity_body}Suppose that $\rho: \Gamma \rightarrow \PGL_{d}(\Rb)$ is an irreducible, $(1,m)$-Anosov representation for some $m=2,\dots,d-1$, and suppose that $M\subset\Pb(\Rb^d)$ is a $\rho$-controlled, $m$-hyperconvex, topological $(m-1)$-dimensional submanifold. Then \begin{align*} \alpha_m(\rho)\leq \sup\left\{ \alpha \in (1,2) : M \text{ is } C^{\alpha} \text{ along }\xi^{(1)}(\partial_\infty \Gamma) \right\} \end{align*} with equality if \begin{itemize} \item[($\ast$)] $M \cap \left(p_1 + p_2 + \xi^{(d-m)}(y)\right)$ spans $p_1 + p_2 + \xi^{(d-m)}(y)$ for all pairwise distinct $p_1,p_2,\xi^{(1)}(y)\in M$. \end{itemize} \end{theorem}
As mentioned in the introduction (see (2) of Remark \ref{rem:stablility}), the condition ($\ast$) is trivial when $m=2$ and $m=d-1$. In Section \ref{sec:stability}, we show that when $M=\xi^{(1)}(\partial_\infty\Gamma)$, ($\ast$) is an open condition in $\Hom(\Gamma,\PSL_d(\Rb))$. Then, in Section \ref{sec:proof_regularity_body}, we prove Theorem \ref{thm:regularity_body}.
\subsection{Stability of hypotheses}\label{sec:stability}
To show that ($\ast$) is an open condition when $M=\xi^{(1)}(\partial_\infty\Gamma)$, we use the following two statements. The first is a standard fact about hyperbolic groups.
\begin{proposition}\label{prop:3_point_action} The $\Gamma$-action on $\partial_\infty \Gamma^{(3)}: = \{ (x,y,z) \in \partial_\infty \Gamma^3 : x,y,z \text{ distinct} \}$ is co-compact. \end{proposition}
The second is a well-known result about Anosov representations due to Guichard-Wienhard. In the case when $\Gamma$ is the fundamental group of a negatively curved Riemannian manifold, this result was established by Labourie~\cite[Proposition 2.1]{L2006}. Before stating the result we need some notation: If $\rho: \Gamma \rightarrow \PGL_{d}(\Rb)$ is a $k$-Anosov representation, let $\xi_\rho^{(k)}:\partial_\infty\Gamma\to\Gr_k(\Rb^d)$ denote the $k$-flag map of $\rho$.
\begin{theorem}\label{thm:gwstable} \cite[Theorem 5.13]{GW2012}\label{thm:continuous_limit_curve} Let \begin{align*} \Oc_k : = \{ \rho \in \Hom(\Gamma, \PGL_d(\Rb)) : \rho \text{ is $k$-Anosov} \}. \end{align*} Then $\Oc_k$ is open, and the map \begin{align*} \rho \in \Oc_k \rightarrow \xi^{(k)}_{\rho} \in C\left( \partial_\infty \Gamma, \Gr_k(\Rb^d)\right) \end{align*} is continuous. \end{theorem}
\begin{corollary} Suppose $\partial_\infty \Gamma$ is a topological $(m-1)$-manifold, and $\rho_0: \Gamma \rightarrow \PGL_{d}(\Rb)$ is a $(1,m)$-Anosov representation. If $\xi^{(1)}_{\rho_0}(x) + \xi^{(1)}_{\rho_0}(z) + \xi^{(d-m)}_{\rho_0}(y)$ is a direct sum and \begin{align*} \xi_{\rho_0}^{(1)}(\partial_\infty \Gamma) \cap \left(\xi_{\rho_0}^{(1)}(x) + \xi_{\rho_0}^{(1)}(z) + \xi_{\rho_0}^{(d-m)}(y)\right) \end{align*} spans $\xi_{\rho_0}^{(1)}(x) + \xi_{\rho_0}^{(1)}(z) + \xi_{\rho_0}^{(d-m)}(y)$ for all $x,y,z \in \partial_\infty \Gamma$ distinct, then any sufficiently small deformation of $\rho_0$ also has these properties. \end{corollary}
\begin{proof} It follows easily from Theorem \ref{thm:gwstable} and Proposition~\ref{prop:3_point_action} that there exists a neighborhood $\Oc\subset\Hom(\Gamma, \PGL_d(\Rb))$ of $\rho_0$ with the following property: if $\rho \in \Oc$, then $\rho$ is a $(1,m)$-Anosov representation and $\xi^{(1)}_{\rho}(x) + \xi^{(1)}_{\rho}(z) + \xi^{(d-m)}_{\rho}(y)$ is a direct sum for all $x,y,z \in \partial_\infty \Gamma$ distinct.
By Proposition~\ref{prop:3_point_action}, it is enough to fix $(x_0,y_0,z_0) \in \partial_\infty \Gamma^{(3)}$ and prove that there exists a neighborhood $U$ of $(x_0,y_0,z_0)$ in $\partial_\infty \Gamma^{(3)}$ such that \begin{align*} \xi_{\rho}^{(1)}(\partial_\infty \Gamma) \cap \left(\xi_{\rho}^{(1)}(x) + \xi_{\rho}^{(1)}(z) + \xi_{\rho}^{(d-m)}(y)\right) \end{align*} spans $\xi_{\rho}^{(1)}(x) + \xi_{\rho}^{(1)}(z) + \xi_{\rho}^{(d-m)}(y)$ for all $(x,y,z) \in U$ and any $\rho$ that is a sufficiently small deformation of $\rho_0$.
Let $e_1,\dots, e_d$ be the standard basis of $\Rb^d$. By changing coordinates we can assume that \begin{align*} \xi^{(1)}_{\rho_0}(x_0) & = \Rb \cdot e_1, \\ \xi^{(m)}_{\rho_0}(x_0) & = \Span\{e_1,\dots,e_m\},\\
\xi^{(d-m)}_{\rho_0}(y_0) &= \Span\{ e_{m+1},\dots, e_d\},\\
\xi^{(d-1)}_{\rho_0}(y_0) &= \Span\{e_2,\dots, e_d\}, \text{ and} \\
\xi^{(1)}_{\rho_0}(z_0) &= \Rb\cdot(e_1+e_2+e_d).
\end{align*}
Using Theorem~\ref{thm:continuous_limit_curve} and possibly shrinking $\Oc$, we can find a neighborhood $U_0$ of $(x_0,y_0,z_0)$ such that there exists a continuous map
\begin{align*}
(\rho, (x,y,z)) \in \Oc \times U_0 \rightarrow g_{\rho,(x,y,z)} \in \PGL_d(\Rb)
\end{align*}
such that $g_{\rho_0,(x_0,y_0,z_0)} =\id$,
\begin{align*} g_{\rho,(x,y,z)}\cdot\xi^{(1)}_{\rho}(x) & = \Rb \cdot e_1, \\ g_{\rho,(x,y,z)}\cdot\xi^{(m)}_{\rho}(x) & = \Span\{e_1,\dots,e_m\},\\ g_{\rho,(x,y,z)}\cdot \xi^{(d-m)}_{\rho}(y) &= \Span\{ e_{m+1},\dots, e_d\},\\ g_{\rho,(x,y,z)}\cdot \xi^{(d-1)}_{\rho}(y) &= \Span\{e_2,\dots, e_d\}, \text{ and} \\
g_{\rho,(x,y,z)}\cdot\xi^{(1)}_{\rho}(z) &= \Rb\cdot(e_1+e_2+e_d).
\end{align*}
By Theorem~\ref{thm:main}, for each $ (\rho, (x,y,z)) \in \Oc \times U_0$, there exists a unique $C^1$ function $f_{\rho, (x,y,z)} : \Rb^{m-1} \rightarrow \Rb^{d-m}$ such that
\begin{align*}
g_{\rho,(x,y,z)}\cdot\xi^{(1)}_{\rho}( \partial_\infty \Gamma \setminus \{y\}) = \left\{ [1:v:f_{\rho, (x,y,z)}(v)] : v \in \Rb^{m-1} \right\}. \end{align*} Then by Theorem~\ref{thm:continuous_limit_curve}, the map $\Oc \times U_0\to C\left(\Rb^{m-1}, \Rb^{d-m}\right)$ given by
\begin{align*}
(\rho, (x,y,z)) \mapsto f_{\rho,(x,y,z)}
\end{align*} is continuous. Notice that
\begin{align*} \xi^{(1)}_{\rho}( \partial_\infty & \Gamma \setminus \{y\}) \cap \left(\xi_{\rho}^{(1)}(x) + \xi_{\rho}^{(1)}(z) + \xi_{\rho}^{(d-m)}(y)\right) \\ & = g_{\rho,(x,y,z)}^{-1}\cdot\left\{ [1:te_2:f_{\rho, (x,y,z)}(te_2)] : t \in \Rb \right\}. \end{align*} So if \begin{align*} [1:t_1e_2:f_{\rho_0, (x_0,y_0,z_0)}(t_1e_2)], \dots, [1:t_{d-m+2}e_2:f_{\rho_0, (x_0,y_0,z_0)}(t_{d-m+2}e_2)] \end{align*} span $\xi_{\rho_0}^{(1)}(x_0) + \xi_{\rho_0}^{(1)}(z_0) + \xi_{\rho_0}^{(d-m)}(y_0)$, then \begin{align*} g_{\rho,(x,y,z)}^{-1}[1:t_1e_2:f_{\rho, (x,y,z)}(t_1e_2)], \dots, g_{\rho,(x,y,z)}^{-1}[1:t_{d-m+2}e_2:f_{\rho, (x,y,z)}(t_{d-m+2}e_2)] \end{align*} span $\xi_{\rho}^{(1)}(x) + \xi_{\rho}^{(1)}(z) + \xi_{\rho}^{(d-m)}(y)$ when $(\rho,(x,y,z))$ is sufficiently close to $(\rho_0, (x_0,y_0,z_0))$. \end{proof}
\subsection{Proof of Theorem \ref{thm:regularity_body}}\label{sec:proof_regularity_body}
We begin with the following observation. Let $e_1,\dots, e_d$ denote the standard basis of $\Rb^d$, and let $\overline{g} \in \GL_{d}(\Rb)$ be a proximal element so that \begin{itemize} \item $e_1$ spans the eigenspace corresponding to $\lambda_1(\overline{g})$, \item $e_m$ lies in the generalized eigenspace corresponding to $\lambda_m(\overline{g})$, and \item $e_{m+1}$ lies in the generalized eigenspace corresponding to $\lambda_{m+1}(\overline{g})$. \end{itemize} Then observe that \begin{align}\label{eqn:P2stretch} \log\lambda_1(\overline{g})&=\lim_{n \rightarrow \infty} \frac{1}{n} \log \norm{\overline{g}^n\cdot e_1},\nonumber\\ \log\lambda_m (\overline{g})&= \lim_{n \rightarrow \infty} \frac{1}{n} \log \norm{\overline{g}^n\cdot e_m},\text{ and }\\ \log\lambda_{m+1}(\overline{g}) &= \lim_{n \rightarrow \infty} \frac{1}{n} \log \norm{\overline{g}^n\cdot \sum_{j=m+1}^d v_j e_j}\text{ when }v_{m+1} \neq 0.\nonumber \end{align}
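The limits in \eqref{eqn:P2stretch} are insensitive to non-trivial Jordan blocks; the following minimal two-dimensional check (our own illustration, not needed for the proof) shows why the polynomial factors coming from a Jordan block do not affect the exponential rate.

```latex
% A single Jordan block: take
\[
\overline{g} = \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix},
\qquad \lambda > 0,
\qquad\text{so}\qquad
\overline{g}^{\,n} \cdot e_2 = n\lambda^{n-1} e_1 + \lambda^n e_2 .
\]
% Then
\[
\frac{1}{n}\log\norm{\overline{g}^{\,n}\cdot e_2}_{2}
= \log\lambda + \frac{1}{n}\log\sqrt{\,n^2\lambda^{-2}+1\,}
\xrightarrow[n\to\infty]{} \log\lambda ,
\]
% since the polynomial factor contributes nothing to the exponential
% growth rate.
```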
\begin{proof}[Proof of Theorem \ref{thm:regularity_body}] From Theorem~\ref{thm:main_body} and Theorem \ref{thm:alphas}, we see that
\begin{align*} \alpha_m(\rho) \leq \sup\left\{ \alpha \in (1,2): M \text{ is } C^{\alpha} \text{ along }\xi^{(1)}(\partial_\infty \Gamma) \right\}. \end{align*}
To prove the equality case, fix some $\gamma \in \Gamma$ with infinite order and let $\gamma^{\pm} \in \partial_\infty \Gamma$ denote the attracting and repelling fixed points of $\gamma$. We can make a change of basis and assume that $\xi^{(1)}(\gamma^+) = \Rb\cdot e_1$, $\xi^{(m)}(\gamma^+) = \Span\{ e_1,\dots, e_m\}$, $\xi^{(d-m)}(\gamma^-) = \Span\{ e_{m+1}, \dots, e_d\}$, and $\xi^{(d-1)}(\gamma^-) = \Span\{e_2,\dots, e_d\}$. Now fix a lift $\overline{g}\in\GL_d(\Rb)$ of $\rho(\gamma)\in\PGL_d(\Rb)$. Then
\begin{align*} \overline{g} = \begin{pmatrix} \lambda & & \\ & U & \\ & & V \end{pmatrix}
\end{align*} where $\lambda \in \Rb$, $U \in \GL_{m-1}(\Rb)$, and $V \in \GL_{d-m}(\Rb)$. By a further change of basis, we can assume that $e_m$ lies in the generalized eigenspace corresponding to $\lambda_m(\overline{g})$, and $e_{m+1}$ lies in the generalized eigenspace corresponding to $\lambda_{m+1}(\overline{g})$.
By Theorem \ref{thm:main_body}, $M$ is $C^1$ along $\xi^{(1)}(\partial_\infty \Gamma)$, and the tangent space to $M$ at $\xi^{(1)}(\gamma^+)$ is $\xi^{(m)}(\gamma^+)$. Thus, for any $\epsilon > 0$ sufficiently small there exists some $p \in M$ such that \begin{align*} p = \left[e_1 + \epsilon e_m + \sum_{j=m+1}^d y_j e_j \right]. \end{align*} Then \begin{align*} \xi^{(1)}(\gamma^+) + p + \xi^{(d-m)}(\gamma^-) = \Span\{e_1,e_m, e_{m+1}, \dots, e_d\} \end{align*} and by hypothesis there exists some $q \in M$ such that \begin{align*} q= \left[z_1e_1 + z_m e_m + \sum_{j=m+1}^d z_j e_j \right] \end{align*} and $z_{m+1} \neq 0$. The sums $q+\xi^{(d-1)}(\gamma^-)$ and $\xi^{(1)}(\gamma^+) + q + \xi^{(d-m)}(\gamma^-)$ are both direct, so $z_1 \neq 0\neq z_m$.
Next fix a distance $d_{\Pb}$ on $\Pb(\Rb^d)$ induced by a Riemannian metric. Since \[\lim_{n\to\infty}\rho(\gamma^n)\cdot q= \xi^{(1)}(\gamma^+),\] Observation~\ref{obs:delta1} implies that if \[X_n:=\overline{\rho}(\gamma^n)\cdot \left(z_1e_1 + z_m e_m + \sum_{j=m+1}^d z_j e_j\right),\] then there is some $A\geq 1$ so that for sufficiently large $n$, \[\frac{1}{A} \frac{\norm{P_{3,v}(X_n)}_{2}}{\norm{P_{1,v}(X_n)}_{2}}\leq d_{\Pb}\left( \rho(\gamma^n)\cdot q, \xi^{(m)}(\gamma^+) \right) \leq A \frac{\norm{P_{3,v}(X_n)}_{2}}{\norm{P_{1,v}(X_n)}_{2}}\] and \[\frac{1}{A} \frac{\norm{P_{2,v}(X_n)}_{2}}{\norm{P_{1,v}(X_n)}_{2}}\leq d_{\Pb}\left( \rho(\gamma^n)\cdot q, \xi^{(1)}(\gamma^+) \right) \leq A \frac{\norm{P_{2,v}(X_n)}_{2}+\norm{P_{3,v}(X_n)}_{2}}{\norm{P_{1,v}(X_n)}_{2}}.\] It then follows from \eqref{eqn:P2stretch} that \begin{align*} \lim_{n \rightarrow \infty} \frac{1}{n} \log d_{\Pb}\left(\rho(\gamma^n)\cdot q, \xi^{(m)}(\gamma^+) \right)=\log \frac{\lambda_{m+1}}{\lambda_1} \end{align*} and \begin{align*} \lim_{n \rightarrow \infty} \frac{1}{n} \log d_{\Pb}\left(\rho(\gamma^n)\cdot q, \xi^{(1)}(\gamma^+) \right)= \log \frac{\lambda_{m}}{\lambda_1}. \end{align*}
Finally, if $M$ is $C^\alpha$ along $\xi^{(1)}(\partial_\infty \Gamma)$, then there exists $C > 0$ such that \begin{align*} d_{\Pb}\left( \xi^{(m)}(\gamma^+), \rho(\gamma^n)\cdot q \right) \leq C d_{\Pb}\left( \xi^{(1)}(\gamma^+), \rho(\gamma^n)\cdot q \right)^{\alpha} \end{align*} for all sufficiently large $n$. Taking logarithms on both sides, dividing by $n$, and then taking the limit, we see that \begin{align*} \alpha \leq \frac{\log \frac{\lambda_{1}}{\lambda_{m+1}}}{\log \frac{\lambda_{1}}{\lambda_m}}. \end{align*} Since $\gamma \in \Gamma$ was arbitrary, we see that $\alpha \leq \alpha_m(\rho)$. \end{proof}
\section{Necessary conditions for differentiability of $\rho$-controlled subsets}
In this section, we establish Theorem~\ref{thm:nec_general}. By Example \ref{eg:limitset}, it is sufficient to prove the following theorem.
\begin{theorem} \label{thm:nec_general_body}Suppose $\Gamma$ is a hyperbolic group and $\rho: \Gamma \rightarrow \PGL_{d}(\Rb)$ is an irreducible $1$-Anosov representation such that $\bigwedge^m \rho: \Gamma \rightarrow \PGL(\bigwedge^m \Rb^d)$ is also irreducible. Also, suppose that $M$ is a $\rho$-controlled, $(m-1)$-dimensional topological manifold. If \begin{enumerate} \item[($\ddagger$)] $M$ is $C^\alpha$ along $\xi^{(1)}(\partial_\infty\Gamma)$ for some $\alpha>1$, \end{enumerate} then \begin{enumerate} \item[($\dagger$')] $\rho$ is $m$-Anosov and $\xi^{(1)}(x) + p + \xi^{(d-m)}(y)$ is a direct sum for all pairwise distinct $\xi^{(1)}(x),p,\xi^{(1)}(y) \in M$. \end{enumerate} \end{theorem}
\begin{remark} Note that ($\dagger'$) in Theorem \ref{thm:nec_general_body} is a weaker condition than ($\dagger$) in Theorem \ref{thm:main_body}. However, when $M=\xi^{(1)}(\partial_\infty\Gamma)$, the two conditions are identical. \end{remark}
First, in Section \ref{sec:wedge}, we define, for any $1\leq m\leq d$ and any representation $\rho:\Gamma\to\PGL_d(\Rb)$, the representation \[\bigwedge^m\rho:\Gamma\to\PGL\left(\bigwedge^m\Rb^d\right),\] whose irreducibility appears as a hypothesis in the statement of Theorem~\ref{thm:nec_general_body}. Then, in Section \ref{sec:irredeg}, we give an example demonstrating that the irreducibility of $\bigwedge^m\rho$ is a necessary hypothesis of Theorem \ref{thm:nec_general_body} (and also of Theorem~\ref{thm:nec_general}). Next, we prove Theorem \ref{thm:nec_general_body} in two main steps. In Section \ref{sec:egap}, we use the fact that $M$ is an $(m-1)$-dimensional topological manifold that is $C^\alpha$ along the $1$-limit set of $\rho$ for some $\alpha>1$ to deduce that $\log\frac{\lambda_m}{\lambda_{m+1}}(\rho(\gamma))$ grows linearly in the word length of $\gamma$. Then, in Section \ref{sec:sgap}, we use this to deduce that $\rho$ is $m$-Anosov and to obtain the required transversality condition.
\subsection{The wedge representation}\label{sec:wedge}
Observe that for any $m\leq d-1$, there is a natural linear $\GL_d(\Rb)$-action on $\bigwedge^m\Rb^d$ given by \[g\cdot(u_1\wedge\dots\wedge u_m):=(g\cdot u_1)\wedge\dots\wedge (g\cdot u_m),\] where $u_i\in\Rb^d$ for all $i$. This defines a representation \[\iota_{d,m}:\GL_d(\Rb)\to\GL\left(\bigwedge^m\Rb^d\right),\] which in turn defines a representation \[\widehat{\iota_{d,m}}:\PGL_d(\Rb)\to\PGL\left(\bigwedge^m\Rb^d\right).\] Using this, we may define the \emph{$m$-wedge representation} of $\overline{\rho}:\Gamma\to\GL_d(\Rb)$ (resp. $\rho:\Gamma\to\PGL_d(\Rb)$) to be \[\bigwedge^m\overline{\rho}:=\iota_{d,m}\circ\overline{\rho}:\Gamma\to\GL\left(\bigwedge^m\Rb^d\right)\,\,\,\left(\text{resp. }\bigwedge^m\rho:=\widehat{\iota_{d,m}}\circ\rho:\Gamma\to\PGL\left(\bigwedge^m\Rb^d\right)\right).\]
Also, if $\langle\cdot,\cdot\rangle_{\Rb^d}$ denotes the standard inner product on $\Rb^d$ with orthonormal basis $e_1,\dots,e_d$, then we may define a bilinear pairing $\langle\cdot,\cdot\rangle_{\bigwedge^m\Rb^d}$ on $\bigwedge^m\Rb^d$ by first defining \[\langle u_{1}\wedge\dots\wedge u_{m},v_{1}\wedge\dots\wedge v_{m}\rangle_{\bigwedge^m\Rb^d}:=\sum_{\sigma\in S_m}\sgn(\sigma)\prod_{k=1}^m\langle u_{k},v_{\sigma(k)}\rangle_{\Rb^d}=\det\left(\langle u_k,v_l\rangle_{\Rb^d}\right)_{k,l}\] for all $u_{k},v_{k}\in\Rb^d$, and then extending it bilinearly to all of $\bigwedge^m\Rb^d$. Observe that \[\{e_{i_1}\wedge\dots\wedge e_{i_m}:1\leq i_1<\dots<i_m\leq d\}\] is an orthonormal basis of $\bigwedge^m\Rb^d$, so $\langle\cdot,\cdot\rangle_{\bigwedge^m\Rb^d}$ is an inner product. Using this, we may define the norm $\norm{\cdot}_{\bigwedge^m\Rb^d}$ on $\bigwedge^m\Rb^d$ associated to $\langle\cdot,\cdot\rangle_{\bigwedge^m\Rb^d}$.
Next, let $\overline{g}\in\GL_d(\Rb)$. For all $i$, let $\mu_i(\iota_{d,m}(\overline{g}))$ denote the $i$-th singular value of $\iota_{d,m}(\overline{g})$ with respect to the norm $\norm{\cdot}_{\bigwedge^m\Rb^d}$ on $\bigwedge^m\Rb^d$. One can verify from the definition of the $\GL_d(\Rb)$-action on $\bigwedge^m\Rb^d$ that for all $i$, there exists $i_1 < i_2 < \dots < i_m$ such that \[\lambda_i(\iota_{d,m}(\overline{g}))=\lambda_{i_1}(\overline{g})\dots\lambda_{i_m}(\overline{g})\,\,\,\text{ and }\,\,\,\mu_i(\iota_{d,m}(\overline{g}))=\mu_{i_1}(\overline{g})\dots\mu_{i_m}(\overline{g}).\] This implies that \begin{align}\label{eqn:evalue1} \lambda_1(\iota_{d,m}(\overline{g}))=\prod_{i=1}^m\lambda_{i}(\overline{g})\,\,\,\text{ and }\,\,\,\lambda_2(\iota_{d,m}(\overline{g}))=\lambda_{m+1}(\overline{g})\prod_{i=1}^{m-1}\lambda_{i}(\overline{g}) \end{align} and \begin{align}\label{eqn:0evalue1} \mu_1(\iota_{d,m}(\overline{g}))=\prod_{i=1}^m\mu_{i}(\overline{g})\,\,\,\text{ and }\,\,\,\mu_2(\iota_{d,m}(\overline{g}))=\mu_{m+1}(\overline{g})\prod_{i=1}^{m-1}\mu_{i}(\overline{g}). \end{align} Hence, for any $\gamma\in\Gamma$, \begin{align}\label{eqn:evalue2} \frac{\lambda_1}{\lambda_2}\left(\bigwedge^m\rho(\gamma)\right)=\frac{\lambda_m}{\lambda_{m+1}}(\rho(\gamma)) \end{align} and \begin{align}\label{eqn:0evalue2} \frac{\mu_1}{\mu_2}\left(\bigwedge^m\rho(\gamma)\right)=\frac{\mu_m}{\mu_{m+1}}(\rho(\gamma)). \end{align}
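As a sanity check of \eqref{eqn:evalue1} and \eqref{eqn:0evalue1} in the smallest interesting case (our own illustration, not needed later), take $d=3$, $m=2$ and $\overline{g}=\mathrm{diag}(a,b,c)$ with $a>b>c>0$.

```latex
% \iota_{3,2}(\overline{g}) is diagonal in the basis
% e_1\wedge e_2,\ e_1\wedge e_3,\ e_2\wedge e_3 of \bigwedge^2\Rb^3:
\[
\iota_{3,2}(\overline{g})\,(e_1\wedge e_2)=ab\,(e_1\wedge e_2),\quad
\iota_{3,2}(\overline{g})\,(e_1\wedge e_3)=ac\,(e_1\wedge e_3),\quad
\iota_{3,2}(\overline{g})\,(e_2\wedge e_3)=bc\,(e_2\wedge e_3).
\]
% Hence
\[
\lambda_1(\iota_{3,2}(\overline{g}))=ab=\lambda_1\lambda_2(\overline{g}),
\qquad
\lambda_2(\iota_{3,2}(\overline{g}))=ac
  =\lambda_3(\overline{g})\,\lambda_1(\overline{g}),
\]
% in agreement with \eqref{eqn:evalue1} for m=2; since \overline{g} is
% diagonal with positive entries, the same identities hold for the
% singular values, as in \eqref{eqn:0evalue1}.
```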
\subsection{Irreducibility of $\bigwedge^m\rho$}\label{sec:irredeg}
Now, we will discuss an example to demonstrate that the irreducibility of $\bigwedge^m\rho$ is a necessary hypothesis for Theorem~\ref{thm:nec_general_body} to hold.
The identification of $\Cb^3$ with $\Rb^6$ given by \[(z_1,z_2,z_3)\mapsto(\Re(z_1),\Im(z_1),\Re(z_2),\Im(z_2),\Re(z_3),\Im(z_3))\] defines an inclusion $j:\SL_3(\Cb)\to\SL_6(\Rb)$. The image of $j$ can be characterized as the subgroup of $\SL_6(\Rb)$ that commutes with the linear endomorphism $J$ on $\Rb^6$ defined by $J(x_1,y_1,x_2,y_2,x_3,y_3):=(-y_1,x_1,-y_2,x_2,-y_3,x_3)$. Let $\SU(2,1)\subset\SL_3(\Cb)$ be the subgroup that leaves invariant the bilinear pairing that is represented in the standard basis of $\Cb^3$ by the matrix \[\left(\begin{array}{ccc} 0&0&1\\ 0&1&0\\ 1&0&0 \end{array}\right),\] and define
\[\tau_0:=(\iota_{6,2}\circ j)|_{\SU(2,1)} : \SU(2,1) \rightarrow \SL\left(\bigwedge^2 \Rb^6\right).\]
Recall that $\iota_{d,m}$ was defined in Section \ref{sec:wedge}. Let $\bigwedge^2J$ be the linear endomorphism on $\bigwedge^2\Rb^6$ given by \[\left(\bigwedge^2J\right)(u_1\wedge u_2)=J(u_1)\wedge J(u_2).\] Consider the $\tau_0$-invariant subspace \begin{align*} E = \left\{ v \in \bigwedge^2 \Rb^6: \left(\bigwedge^2J\right)(v) = v\right\}, \end{align*} and let $\tau : \SU(2,1) \rightarrow \GL(E)$ be the representation defined by the $\tau_0$ action on $E$. Observe that if $e_1,\dots,e_6$ is the standard basis for $\Rb^6$, then \begin{align*} &f_1:=e_1\wedge e_2,&&f_4:=e_3\wedge e_4,&&f_7:=e_3\wedge e_5+e_4\wedge e_6,\\ &f_2:=e_2\wedge e_3-e_1\wedge e_4,&&f_5:=e_2\wedge e_5-e_1\wedge e_6,&&f_8:=e_4\wedge e_5-e_3\wedge e_6,\\ &f_3:=e_1\wedge e_3+e_2\wedge e_4,&&f_6:=e_1\wedge e_5+e_2\wedge e_6,&&f_9:=e_5\wedge e_6 \end{align*} is a basis of $E$. One can then explicitly verify that $\tau$ is irreducible.
If $g \in \SU(2,1)$, then there exists some $\lambda \geq 1$ such that the (complex) eigenvalues of $g$ have absolute values $\lambda, 1, \lambda^{-1}$. By conjugating $g$ by an appropriate element $h \in \SU(2,1)$, we may also assume that the generalized eigenvectors of $hgh^{-1}$ corresponding to $\lambda,1,\lambda^{-1}$ are $(1,0,0)^T$, $(0,1,0)^T$ and $(0,0,1)^T$ respectively. This implies that the eigenvalues of $j(hgh^{-1})$ have absolute values $\lambda,1,\lambda^{-1}$ (each with multiplicity $2$), and the corresponding invariant subspaces are $\Span_{\Rb}\{e_1,e_2\}$, $\Span_{\Rb}\{e_3,e_4\}$ and $\Span_{\Rb}\{e_5,e_6\}$ respectively. Using the basis of $E$ described above, one can then compute that the eigenvalues of $\tau(hgh^{-1})$, and hence $\tau(g)$, have absolute values \begin{align*} \lambda^2, \lambda, \lambda, 1,1,1, \lambda^{-1}, \lambda^{-1}, \lambda^{-2}. \end{align*} In particular, the image of $\tau$ lies in $\SL(E)$.
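The moduli listed above can be read off directly from the basis $f_1,\dots,f_9$ of $E$; the following bookkeeping (our own sketch) records the computation. Since each $f_i$ is a sum of wedges of vectors taken from two of the three invariant subspaces, $\tau(hgh^{-1})$ scales each $f_i$ by the product of the corresponding moduli:

```latex
\begin{align*}
f_1 &: \lambda\cdot\lambda=\lambda^2, &
f_2,f_3 &: \lambda\cdot 1=\lambda, &
f_4 &: 1\cdot 1=1,\\
f_5,f_6 &: \lambda\cdot\lambda^{-1}=1, &
f_7,f_8 &: 1\cdot\lambda^{-1}=\lambda^{-1}, &
f_9 &: \lambda^{-1}\cdot\lambda^{-1}=\lambda^{-2}.
\end{align*}
% In particular \lambda_4 = \lambda_5 = 1 for every g, so there is never
% a gap between the 4th and 5th eigenvalue moduli.
```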
With this setup, we can now give our example. It describes a $1$-Anosov, irreducible representation of a co-compact lattice $\Gamma\subset\SU(2,1)$ to $\SL(E)$, where $E$ is a $9$-dimensional vector space. We show that the $1$-limit set of this representation is a $3$-dimensional, $C^\infty$-submanifold of $\Pb(E)$, but the representation is not $4$-Anosov.
\begin{example}\label{ex:irred_bad_example} Fix a co-compact lattice $\Gamma \leq \SU(2,1)$. Since $\SU(2,1)$ is a rank-one Lie group, it acts transitively and by isometries on a negatively curved Riemannian symmetric space $\Hb_{\Cb}^2$ (the $2$-dimensional complex hyperbolic space), whose visual boundary $\partial_\infty\Hb_{\Cb}^2$ has the structure of a $3$-dimensional smooth sphere. Thus, the inclusion of $\Gamma$ into $\SU(2,1)$ specifies an identification of $\partial_\infty\Gamma=\partial_\infty\Hb_{\Cb}^2$.
As $\SU(2,1)$-spaces, $\partial_\infty\Hb_{\Cb}^2\simeq\SU(2,1)/B$, where $B\subset\SU(2,1)$ is the subgroup of upper triangular matrices. It is straightforward to check that $\tau(B)\subset\SL(E)$ lies in $P\cap Q$, where $P\subset\SL(E)$ is the subgroup that preserves the line spanned by $f_1$, and $Q\subset\SL(E)$ is the subgroup that preserves $\Span_{\Rb}(f_1,\dots,f_8)$. In particular, there are smooth, $\tau$-equivariant maps \[\xi^{(1)}:\partial_\infty\Gamma=\SU(2,1)/B\to\SL(E)/P=\Pb(E)\] and \[\xi^{(8)}:\partial_\infty\Gamma=\SU(2,1)/B\to\SL(E)/Q=\Pb^*(E)=\Gr_8(E).\] Furthermore, a result of Guichard-Wienhard \cite[Proposition 4.4]{GW2012} implies that $\tau_\Gamma:=\tau|_\Gamma:\Gamma\to\SL(E)$ is a $1$-Anosov representation whose $1$-flag map and $8$-flag map are $\xi^{(1)}$ and $\xi^{(8)}$ respectively. The eigenvalue calculation above implies that $\tau_\Gamma$ is not $4$-Anosov. However, the $1$-limit set $\xi^{(1)}(\partial_\infty \Gamma)$ is a $3$-dimensional $C^\infty$-submanifold of $\Pb(E)$. \end{example}
\subsection{Eigenvalue gaps from the $C^\alpha$ property along the $1$-limit set}\label{sec:egap}
Our goal now will be to prove the following proposition.
\begin{proposition}\label{prop:eigenvalue_est} Suppose that $\rho:\Gamma\to\PGL_d(\Rb)$ is a $1$-Anosov representation. Also, suppose that $\bigwedge^m \rho: \Gamma \rightarrow \PGL\left(\bigwedge^m \Rb^d\right)$ is irreducible and $M$ is a $\rho$-controlled, $(m-1)$-dimensional topological manifold that is $C^{\alpha}$ along the $1$-limit set of $\rho$ for some $\alpha>1$. If $\gamma \in \Gamma$, then \begin{align*} \frac{ \lambda_{m+1}}{\lambda_m}(\rho(\gamma)) \leq \left(\frac{ \lambda_{2}}{\lambda_1}(\rho(\gamma)) \right)^{\alpha-1}. \end{align*} In particular, $\log\frac{ \lambda_m}{\lambda_{m+1}}(\rho(\gamma))$ grows linearly with the word-length of $\gamma$. \end{proposition}
The proof of Proposition \ref{prop:eigenvalue_est} requires two observations and a lemma.
\begin{observation}\label{obs:dynamics} Let $g \in \PGL_{d}(\Rb)$ be proximal, let $\overline{g}\in\GL_d(\Rb)$ be a lift of $g$, and let $g^+ \in \Pb(\Rb^d)$ and $g^-\in\Gr_{d-1}(\Rb^d)$ be the attracting fixed point and repelling fixed hyperplane of $g$ respectively. Also, let $d_{\Pb}$ be a distance on $\Pb(\Rb^d)$ induced by a Riemannian metric. If $p\in\Pb(\Rb^d)$ satisfies $p\neq g^+$ and $p\notin g^-$, then \begin{align*} \log\frac{\lambda_2}{\lambda_1}(g) \geq \limsup_{n \rightarrow \infty} \frac{1}{n} \log d_{\Pb}\Big(g^n\cdot p, g^+ \Big). \end{align*} Moreover, there is a proper subspace $V\subset\Rb^d$ so that if $p\notin [V]$, then the above inequality holds as equality. \end{observation}
\begin{remark} In the above observation, we identify $g^-\in\Gr_{d-1}(\Rb^d)$ with a hyperplane of $\Pb(\Rb^d)$, which we also denote by $g^-$. \end{remark}
\begin{proof} Note that the affine chart $\Ab_{g^-}$ contains both $p$ and $g^+$. Equip $\Ab_{g^-}$ with a Euclidean metric $d_{\Ab}$, and let $\Bb$ be the unit ball in $\Ab_{g^-}$ centered at $g^+$. Since $p\notin g^-$, $g^n\cdot p\in\Bb$ for sufficiently large $n$. On $\Bb$, the metrics $d_{\Pb}$ and $d_{\Ab}$ are bi-Lipschitz equivalent, so there is a constant $A$ so that for sufficiently large $n$, \begin{align}\label{eqn:approx} \frac{1}{A}\frac{\norm{P_2(X)}_{2}}{\norm{P_1(X)}_{2}}\leq d_{\Pb}(g^n\cdot p,g^+)\leq A\frac{\norm{P_2(X)}_{2}}{\norm{P_1(X)}_{2}}, \end{align} where $X\in\Rb^d$ is a non-zero vector in $g^n\cdot p$, $P_1:\Rb^d\to g^+$ is the projection with kernel $g^-$, and $P_2:\Rb^d\to g^-$ is the projection with kernel $g^+$. On the other hand, it is straightforward to check that \begin{align}\label{eqn:proj} \log\frac{\lambda_2}{\lambda_1}(g) \geq \limsup_{n \rightarrow \infty} \frac{1}{n} \log \frac{\norm{P_2(\overline{g}^n\cdot X)}_{2}}{\norm{P_1(\overline{g}^n\cdot X)}_{2}}, \end{align} thus giving the desired inequality.
To determine $V$, choose a basis $\{e_1,\dots,e_d\}$ for $\Rb^d$ so that $g$ is in real Jordan normal form in this basis. We may assume that $e_1$ is an eigenvector of $g$ corresponding to $\lambda_1$, and that there is some $l$ so that $e_2,\dots,e_l$ span the invariant subspace corresponding to $\lambda_2$. Let $V$ be the span of $e_1,e_{l+1},\dots,e_d$; it is easy to see that the inequality \eqref{eqn:proj} holds with equality when $p\notin [V]$. This proves the observation.
\end{proof}
\begin{observation}\label{obs:slow}
Let $g\in\GL_d(\Rb)$ be such that $\frac{\lambda_1}{\lambda_d}(g)>1$. Let $\Rb^d=V_1+V_2$ be the $g$-invariant decomposition so that every eigenvalue of $g|_{V_1}$ has absolute value $\lambda_1$ and every eigenvalue of $g|_{V_2}$ has absolute value strictly less than $\lambda_1$. Suppose that $\dim(V_1)>1$ and $g$ has an invariant line $l\subset V_1$. Then for all $p\in \Pb(\Rb^d)\setminus[l+V_2]$, \begin{align} \label{eq:W_subspace_zero_limit} 0 = \lim_{n \rightarrow \infty} \frac{1}{n} \log d_{\Pb}\Big(g^{n}\cdot p, l\Big), \end{align} where $d_{\Pb}$ is a distance on $\Pb(\Rb^d)$ induced by a Riemannian metric. \end{observation}
\begin{proof} First, note that since $d_{\Pb}$ has bounded diameter, \begin{align*}
\limsup_{n \rightarrow \infty} \frac{1}{n} \log d_{\Pb}\Big(g^{n}\cdot p, l\Big) \leq 0. \end{align*} Now assume for a contradiction that Equation~\eqref{eq:W_subspace_zero_limit} does not hold for some $p\in \Pb(\Rb^d)\setminus[l+V_2]$. Then by taking a subsequence, we may assume that \begin{align} \label{eq:bad_assumption}
\lim_{k \rightarrow \infty} \frac{1}{n_k} \log d_{\Pb}\Big(g^{n_k}\cdot p, l\Big) < 0. \end{align} Notice that this implies that $g^{n_k}\cdot p \rightarrow l$ as $k\to\infty$.
Using the real Jordan normal form of $g$, we can decompose $V_1 = \bigoplus_{j=1}^r V_{1,j}$ where \begin{enumerate} \item $V_{1,1} = l$,
\item for $2 \leq j \leq r$
\begin{enumerate}
\item $V_{1,j}$ is either one or two dimensional,
\item there exists a linear transformation $L_j : V_{1,j} \rightarrow V_{1,j}$ such that \begin{align*} g \cdot Y \in L_j\cdot Y + V_{1,j-1} \end{align*} for all $Y\in V_{1,j}$, \item $\norm{L_j \cdot Y}_{2} = \lambda_1(g)\norm{Y}_{2}$ for all $Y \in V_{1,j}$.
\end{enumerate}
\end{enumerate} Also, let $P_{1,j} : \Rb^d \rightarrow V_{1,j}$ and $P_2 : \Rb^d \rightarrow V_{2}$ be the projections relative to the decomposition $\Rb^d = V_{1,1} \oplus \dots \oplus V_{1,r}\oplus V_2$.
Since $g^{n_k}\cdot p$ converges to $l$, \eqref{eqn:approx} in the first part of the proof of Observation~\ref{obs:dynamics} implies that there exists $A \geq 1$ such that \begin{align} \label{eq:affine_chart_estimate_non_proximal} \frac{1}{A} \left( \frac{\sum_{j=2}^r \norm{P_{1,j}(g^{n_k}\cdot X)}_{2} + \norm{P_2(g^{n_k}\cdot X)}_{2}}{\norm{P_{1,1}(g^{n_k}\cdot X)}_{2}} \right) \leq d_{\Pb}\Big(g^{n_k}\cdot p, l\Big) \end{align} for all non-zero $X\in p$ and all sufficiently large $k$. Since $X \notin l+V_2$, there exists $2 \leq j_0 \leq r$ such that $P_{1,j_0}(X) \neq 0$. By increasing $j_0$ if necessary, we can also assume that $P_{1,j}(X) = 0$ for $j_0 < j \leq r$. This implies that \begin{align} \label{eq:P1j0_estimate}
\norm{P_{1,j_0}(g^{n}\cdot X)}_{2} = \lambda_1(g)^n \norm{P_{1,j_0}(X)}_{2}.
\end{align} Further, by increasing $A \geq 1$ if necessary, we can assume that \begin{align} \label{eq:P11_estimate} \norm{P_{1,1}(g^{n}\cdot X)}_{2} \leq A\norm{g^{n}\cdot X}_{2} \leq A\norm{g^{n}}_{\mathrm{op}}\norm{X}_{2} \end{align} for all $n \geq 0$.
Then by Equations~\eqref{eq:affine_chart_estimate_non_proximal},~\eqref{eq:P1j0_estimate}, and~\eqref{eq:P11_estimate}, \begin{align*} \lim_{k \rightarrow \infty} \frac{1}{n_k} \log d_{\Pb}\Big(g^{n_k}\cdot p, l\Big) &\geq \limsup_{k \rightarrow \infty} \frac{1}{n_k} \log \frac{\norm{P_{1,j_0}(g^{n_k}\cdot X)}_{2}}{A\norm{P_{1,1}(g^{n_k}\cdot X)}_{2}} \\ & \geq \log(\lambda_1(g))+\limsup_{k \rightarrow \infty} \frac{1}{n_k} \log \frac{\norm{P_{1,j_0}(X)}_{2}}{A^2\norm{g^{n_k}}_{\mathrm{op}}\norm{X}_{2}} \\ & \geq \log(\lambda_1(g))-\liminf_{k \rightarrow \infty} \frac{1}{n_k} \log\norm{g^{n_k}}_{\mathrm{op}}. \end{align*} But Gelfand's formula states that \begin{align*} \lim_{n \rightarrow \infty} \frac{1}{n} \log\norm{g^{n}}_{\mathrm{op}} = \log(\lambda_1(g)), \end{align*} so by \eqref{eq:bad_assumption}, \begin{align*} 0 & > \lim_{k \rightarrow \infty} \frac{1}{n_k} \log d_{\Pb}\Big(g^{n_k}\cdot p, l\Big) \geq 0, \end{align*} a contradiction. \end{proof}
Next, suppose $\rho:\Gamma\to\PGL_d(\Rb)$ and $M\subset\Pb(\Rb^d)$ satisfy the hypothesis of Proposition \ref{prop:eigenvalue_est}. Define the map \begin{align}\label{eqn:Fdm} F_{d,m}:\Gr_m(\Rb^d)\to\Pb\left(\bigwedge^m\Rb^d\right)\,\,\,\text{ by }\,\,\,F_{d,m}:V\mapsto\left[\bigwedge_{i=1}^mv_i\right], \end{align} where $v_1,\dots,v_m$ is a basis of $V$.
Note that $F_{d,m}$ is well defined, smooth, and $\widehat{\iota_{d,m}}$-equivariant. Since $M$ is differentiable along the $1$-limit set $\xi^{(1)}(\partial_\infty\Gamma)$ of $\rho$, we can define $\overline{\Phi}:\xi^{(1)}(\partial_\infty\Gamma)\to\Gr_m(\Rb^d)$ to be the map that associates to every point in $\xi^{(1)}(\partial_\infty\Gamma)$ its tangent space. Then define \begin{align}\label{eqn:Phi} \Phi:= F_{d,m}\circ\overline{\Phi}: \xi^{(1)}(\partial_\infty\Gamma) \rightarrow \Pb\left(\bigwedge^{m} \Rb^{d}\right). \end{align}
\begin{remark}\label{rem:d1d2} Fix distances $d_1$ on $\Pb(\Rb^{d})$ and $d_2$ on $\Pb\left(\bigwedge^{m} \Rb^{d}\right)$ which are induced by Riemannian metrics. Since $M$ is $C^\alpha$ along $\xi^{(1)}(\partial_\infty\Gamma)$ and $F_{d,m}$ is smooth, a calculation shows that there is some $C\geq 1$ so that \begin{align*} d_2(\Phi(q_1), \Phi(q_2)) \leq C d_1(q_1, q_2)^{\alpha-1} \end{align*} for all $q_1, q_2 \in \xi^{(1)}(\partial_\infty\Gamma)$. \end{remark}
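The calculation behind the remark can be sketched in two steps (the constants $C'$, $C''$ and the Grassmannian distance $d_{\Gr_m(\Rb^d)}$ are our own labels): being $C^\alpha$ along $\xi^{(1)}(\partial_\infty\Gamma)$ means, in suitable affine charts, that the tangent-plane map $\overline{\Phi}$ is $(\alpha-1)$-H\"older, while $F_{d,m}$ is smooth on the compact Grassmannian and hence Lipschitz:

```latex
\[
d_2(\Phi(q_1),\Phi(q_2))
\;\leq\; C'\, d_{\Gr_m(\Rb^d)}\big(\overline{\Phi}(q_1),\overline{\Phi}(q_2)\big)
\;\leq\; C'C''\, d_1(q_1,q_2)^{\alpha-1},
\]
% where the first inequality is the Lipschitz bound for F_{d,m} on
% \Gr_m(\Rb^d), the second is the (\alpha-1)-Hölder continuity of the
% tangent-plane map \overline{\Phi}, and both hold uniformly on the
% compact set \xi^{(1)}(\partial_\infty\Gamma).
```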
\begin{lemma}\label{lem:proximal} Suppose that $\rho:\Gamma\to\PGL_d(\Rb)$ is a $1$-Anosov representation. Also, suppose that $\bigwedge^m \rho: \Gamma \rightarrow \PGL\left(\bigwedge^m \Rb^d\right)$ is irreducible and $M$ is a $\rho$-controlled, $(m-1)$-dimensional topological manifold that is $C^{\alpha}$ along the $1$-limit set of $\rho$ for some $\alpha>1$. If $\gamma \in \Gamma$ has infinite order, then $g:=\left(\bigwedge^{m} \rho\right)(\gamma)$ is proximal and $\Phi(\xi^{(1)}(\gamma^+))\in\Pb\left(\bigwedge^m\Rb^d\right)$ is the attracting fixed point of $g$. \end{lemma}
\begin{proof} Let $\overline{h} \in \GL_d(\Rb)$ be a lift of $\rho(\gamma)$, $\overline{g}:=\bigwedge^{m} \overline{h}$, and $\lambda_i = \lambda_i(\overline{h})$ for $i=1,\dots,d$. Then by \eqref{eqn:evalue1}, $\lambda_1(\overline{g})=\lambda_1\cdots\lambda_m$. Thus it is equivalent to prove that $\overline{g}$ is proximal and $\Phi(\xi^{(1)}(\gamma^+))$ is the eigenline of $\overline{g}$ whose eigenvalue has absolute value $\lambda_1 \cdots \lambda_{m}$.
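To illustrate \eqref{eqn:evalue1} in the smallest nontrivial case (a sanity check that is not needed for the proof): if $d=3$, $m=2$, and $\overline{h}$ is diagonalizable with eigenvalues $a_1, a_2, a_3$ ordered so that $|a_1| \geq |a_2| \geq |a_3|$, then the eigenvalues of $\overline{g} = \bigwedge^2 \overline{h}$ are \begin{align*} a_1 a_2, \quad a_1 a_3, \quad a_2 a_3, \end{align*} and hence $\lambda_1(\overline{g}) = |a_1 a_2| = \lambda_1 \lambda_2$.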
We first show that $\Phi(\xi^{(1)}(\gamma^+))$ is an eigenline of $\overline{g}$ whose eigenvalue has absolute value $\lambda_1 \cdots \lambda_{m}$. Let $\{n_k\}_{k=1}^\infty$ be an increasing sequence of integers such that \begin{align*} \frac{1}{\norm{\overline{g}^{n_k}}}\overline{g}^{n_k} \end{align*}
converges to some $T \in \End\left(\bigwedge^{m} \Rb^{d}\right)$. Also, let $\bigwedge^m \Rb^d = V_1 \oplus V_2$ be a $\overline{g}$-invariant decomposition of $\bigwedge^m \Rb^d$, where every eigenvalue of $\overline{g}|_{V_1}$ has absolute value $\lambda_1 \cdots \lambda_m$ and every eigenvalue of $\overline{g}|_{V_2}$ has absolute value strictly less than $\lambda_1 \cdots \lambda_m$. Observe that the image of $T$ is contained in $V_1$. Since \[\overline{g}\cdot\Phi(\xi^{(1)}(\gamma^+))=\Phi(\xi^{(1)}(\gamma\cdot \gamma^+))=\Phi(\xi^{(1)}(\gamma^+)),\] $\Phi(\xi^{(1)}(\gamma^+))$ is an eigenline of $\overline{g}$. Thus, we only need to show that $\Phi(\xi^{(1)}(\gamma^+))$ is contained in the image of $T$.
We claim that the image of $T$ is exactly $\Phi(\xi^{(1)}(\gamma^+))$. Notice that if $p=[v] \in \Pb(\bigwedge^{m} \Rb^{d})$ and $v \notin \ker T$ then \begin{align*} [T(v)] = \lim_{k \rightarrow \infty} g^{n_k}\cdot p \end{align*} (recall that $[v]$ denotes the projective line containing $v$). Further, since $\bigwedge^{m} \rho : \Gamma \rightarrow \PGL\left(\bigwedge^{m} \Rb^d\right)$ is irreducible, the set $\{ \Phi(x) : x \in \xi^{(1)}(\partial_\infty\Gamma)\}$ spans $\bigwedge^{m} \Rb^d$. Thus there exist $x_1, \dots, x_N \in \partial_\infty \Gamma$ such that \begin{align*} \Phi(\xi^{(1)}(x_1)), \dots, \Phi(\xi^{(1)}(x_N)) \end{align*} span $\bigwedge^{m} \Rb^d$. By perturbing and relabelling the $x_i$ (if necessary) we can also assume that $\gamma^- \notin \{x_1, \dots, x_N\}$, and that there exists $1 \leq \ell \leq N$ such that \begin{align*} \Phi(\xi^{(1)}(x_1)) + \dots + \Phi(\xi^{(1)}(x_\ell)) + \ker T = \bigwedge^{m} \Rb^d \end{align*} is a direct sum. For $1 \leq i \leq \ell$, \begin{align*} T ( \Phi(\xi^{(1)}(x_i)) ) = \lim_{k \rightarrow \infty} g^{n_k}\cdot \Phi(\xi^{(1)}(x_i)) = \lim_{k \rightarrow \infty} \Phi( \xi^{(1)}( \gamma^{n_k}\cdot x_i)) = \Phi(\xi^{(1)}(\gamma^+)), \end{align*} so the image of $T$ is $\Phi(\xi^{(1)}(\gamma^+))$. Thus, $\Phi(\xi^{(1)}(\gamma^+))$ is an eigenline of $\overline{g}$ whose eigenvalue has absolute value $\lambda_1 \cdots \lambda_{m}$.
We next argue that $\overline{g}$ is proximal, or equivalently that $\dim V_1 = 1$. Let $d_1$ and $d_2$ be as defined in Remark \ref{rem:d1d2}. Let \begin{align*} W := V_2 + \Phi(\xi^{(1)}(\gamma^+)), \end{align*} and suppose for contradiction that $\dim V_1 > 1$. This implies that $W\subset\bigwedge^m\Rb^d$ is a proper subspace. By Observation \ref{obs:slow}, \begin{align*} 0 = \lim_{n \rightarrow \infty} \frac{1}{n} \log d_2\Big(g^{n}\cdot p, \Phi(\xi^{(1)}(\gamma^+))\Big) \end{align*} when $p \in \Pb\left(\bigwedge^{m} \Rb^d\right) \setminus [W]$.
Since $\left\{ \Phi(x) : x \in \xi^{(1)}(\partial_\infty\Gamma)\right\}$ spans $\bigwedge^{m} \Rb^d$, there exists $x \in \partial_\infty \Gamma$ such that $\Phi(\xi^{(1)}(x)) \notin [W]$. By perturbing $x$ (if necessary) we can assume that $x \neq \gamma^-$. Then \begin{align*} \lim_{n\to\infty}\rho(\gamma)^n\cdot \xi^{(1)}(x)=\xi^{(1)}(\gamma^+) \text{ and } \lim_{n\to\infty}g^{n} \cdot\Phi(\xi^{(1)}(x))=\Phi(\xi^{(1)}(\gamma^+)). \end{align*} So, by Observation~\ref{obs:dynamics}, \begin{align*} 0 > \log \frac{\lambda_2}{\lambda_1} & \geq \limsup_{n \rightarrow \infty} \frac{1}{n} \log d_1\Big(\rho(\gamma)^n\cdot \xi^{(1)}(x), \xi^{(1)}(\gamma^+)\Big) \\
&\geq \limsup_{n \rightarrow \infty} \frac{1}{(\alpha-1) n} \log d_2\Big( \Phi(\xi^{(1)}(\gamma^n\cdot x)),\Phi( \xi^{(1)}(\gamma^+))\Big) \\ & = \limsup_{n \rightarrow \infty} \frac{1}{(\alpha-1) n} \log d_2\Big( g^n\cdot\Phi(\xi^{(1)}(x)),\Phi( \xi^{(1)}(\gamma^+))\Big) =0, \end{align*} where the last inequality follows from Remark \ref{rem:d1d2} and the final equality holds because $\Phi(\xi^{(1)}(x)) \notin [W]$. This is a contradiction, so $\overline{g}$ is proximal.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:eigenvalue_est}] Fix some $\gamma \in \Gamma$. If $\gamma$ has finite order, then \begin{align*} \frac{\lambda_i}{\lambda_j}(\rho(\gamma)) = 1 \end{align*} for all $1 \leq i,j \leq d$ and there is nothing to prove. So suppose that $\gamma$ has infinite order and let $\gamma^+ \in \partial_\infty \Gamma$ be the attracting fixed point of $\gamma$. By Lemma \ref{lem:proximal}, $g:=\bigwedge^m \rho(\gamma)$ is proximal and $\Phi(\xi^{(1)}(\gamma^+)) = g^+$.
By \eqref{eqn:evalue2} and Observation~\ref{obs:dynamics}, there exists a proper subspace $V \subset \bigwedge^m \Rb^d$ such that: if $p\in \Pb(\bigwedge^m \Rb^{d}) \setminus [V]$ and $p$ is not in the repelling hyperplane of $g$, then \begin{align*} \log \frac{ \lambda_{m+1}}{\lambda_m}(\rho(\gamma)) = \lim_{n \rightarrow \infty} \frac{1}{n} \log d_2\Big( g^n\cdot p, \Phi(\xi^{(1)}(\gamma^+)) \Big). \end{align*} Since $\{ \Phi(q) : q \in \xi^{(1)}(\partial_\infty\Gamma)\}$ spans $\bigwedge^m \Rb^d$ we can find $x \in \partial_\infty \Gamma$ such that $\Phi(\xi^{(1)}(x)) \notin [V]$. By perturbing $x$ if necessary, we can also assume that $x \neq \gamma^-$. Then \begin{align*} \lim_{n \rightarrow \infty} g^n\cdot \Phi(\xi^{(1)}(x)) = \Phi(\xi^{(1)}(\gamma^+)), \end{align*} so $\Phi(\xi^{(1)}(x))$ does not lie in the repelling hyperplane of $g$. Thus, by Observation~\ref{obs:dynamics} and Remark \ref{rem:d1d2} \begin{align*} \log \frac{ \lambda_{m+1}}{\lambda_m}(\rho(\gamma)) &\leq (\alpha-1) \lim_{n \rightarrow \infty} \frac{1}{n} \log d_1\Big( \rho(\gamma)^n\cdot \xi^{(1)}(x), \xi^{(1)}(\gamma^+) \Big)\\ & \leq (\alpha-1) \log \frac{ \lambda_2}{\lambda_1}(\rho(\gamma)). \end{align*} \end{proof}
\subsection{Anosovness from eigenvalue gaps}\label{sec:sgap}
To prove Theorem~\ref{thm:nec_general_body}, we will use the following proposition.
\begin{proposition}\label{prop:nec_general} Suppose that $\rho:\Gamma\to\PGL_d(\Rb)$ is an irreducible $1$-Anosov representation and $M$ is a $\rho$-controlled subset which is $C^1$ along $\xi^{(1)}(\partial_\infty\Gamma)$. Suppose also that there exists $\alpha>1$ such that for all $\gamma\in\Gamma$ with infinite order, \begin{align*} \frac{ \lambda_{m+1}}{\lambda_m}(\rho(\gamma)) \leq \left(\frac{ \lambda_{2}}{\lambda_1}(\rho(\gamma)) \right)^{\alpha-1}, \end{align*} $g:=\left(\bigwedge^{m} \rho\right)(\gamma)$ is proximal, and $\Phi(\xi^{(1)}(\gamma^+))\in\Pb\left(\bigwedge^m\Rb^d\right)$ is the attracting fixed point of $g$. Then $\rho$ is $m$-Anosov, and $\xi^{(1)}(x) + p + \xi^{(d-m)}(y)$ is a direct sum for all pairwise distinct $\xi^{(1)}(x), p, \xi^{(1)}(y) \in M$. \end{proposition}
Assuming Proposition \ref{prop:nec_general}, we can prove Theorem \ref{thm:nec_general_body}.
\begin{proof}[Proof of Theorem~\ref{thm:nec_general_body}] If Condition ($\ddagger$) holds, then Proposition \ref{prop:eigenvalue_est}, Lemma \ref{lem:proximal} and Proposition \ref{prop:nec_general} imply Condition ($\dagger'$). \end{proof}
We start the proof of Proposition~\ref{prop:nec_general} by making some initial reductions. First, notice that the reduction made in Remark \ref{rem:lift} does not impact the hypothesis or conclusion of the Proposition. So we may assume that there exists a lift $\overline{\rho}:\Gamma\to\SL_d(\Rb)$ of $\rho$. Second, notice that passing to a finite index subgroup also does not impact the hypotheses or conclusion of the Proposition (see Proposition~\ref{prop:strongly_irreducible}). Hence we may also assume that the Zariski closure of $\overline{\rho}(\Gamma)$ is connected.
The proof of Proposition \ref{prop:nec_general} requires the following lemma.
\begin{lemma}\label{lem:est_on_sing_values}
If $1<\beta < \alpha$, then there exists $C > 0$ such that \begin{align*} \log \frac{ \mu_{m+1}}{\mu_m}(\rho(\gamma)) \leq (\beta-1) \log \frac{ \mu_2}{\mu_1}(\rho(\gamma)) +C \end{align*} for all $\gamma \in \Gamma$. \end{lemma}
\begin{proof} Let $\Cc_\mu=\Cc_\mu(\overline{\rho}(\Gamma))$ and $\Cc_\lambda=\Cc_\lambda(\overline{\rho}(\Gamma))$ be the cones defined in Section~\ref{sec:cones}. Then $\Cc_\mu = \Cc_\lambda$ by Proposition~\ref{prop:Zclosure} and Theorem~\ref{thm:cones}. By hypothesis, if $x=(x_1, \dots, x_d) \in \Cc_\lambda$, then \begin{align*} x_{m+1}-x_m \leq (\alpha-1) (x_2 - x_1). \end{align*} Further, since $\rho$ is $1$-Anosov, $x_2 - x_1 <0$ for all $x=(x_1, \dots, x_d) \in \Cc_\lambda$.
Next, we will prove that there exists $R > 0$ with the following property: if $\norm{\mu(\overline{\rho}(\gamma))}_2 \geq R$, then
\begin{align*} \log \frac{ \mu_{m+1}}{\mu_m}(\rho(\gamma)) \leq (\beta-1) \log \frac{ \mu_2}{\mu_1}(\rho(\gamma)). \end{align*} Suppose for contradiction that there exists $\{\gamma_n\}_{n=1}^\infty \subset \Gamma$ with $\norm{\mu(\overline{\rho}(\gamma_n))}_2 \rightarrow \infty$ and
\begin{align*} \log \frac{ \mu_{m+1}}{\mu_m}(\rho(\gamma_n)) > (\beta-1) \log \frac{ \mu_2}{\mu_1}(\rho(\gamma_n)). \end{align*} By passing to a subsequence, we can assume that \begin{align*} \frac{1}{\norm{\mu(\overline{\rho}(\gamma_n))}_2} \mu(\overline{\rho}(\gamma_n)) \rightarrow x=(x_1, \dots, x_d). \end{align*} Then $x \in \Cc_\mu = \Cc_\lambda$ and \begin{align*} x_{m+1}-x_m \geq (\beta-1)( x_2 -x_1) > (\alpha-1) (x_2 - x_1), \end{align*} where the strict inequality uses $x_2 - x_1 < 0$ and $\beta < \alpha$. This contradicts the estimate $x_{m+1}-x_m \leq (\alpha-1)(x_2-x_1)$ above.
The lemma then follows from the observation that since $\rho$ is $1$-Anosov, the set $\{ \gamma \in \Gamma : \norm{\mu(\overline{\rho}(\gamma))}_2 < R\}$ is finite. \end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:nec_general}] Since $\rho$ is $1$-Anosov, Theorem~\ref{thm:SV_char_of_Anosov} implies that there exist $C_0, c_0 > 0$ such that \begin{align*} \frac{ \mu_2}{\mu_1}(\rho(\gamma)) \leq C_0 e^{-c_0 d_S(\gamma, \id)} \end{align*} for all $\gamma \in \Gamma$. Then by Lemma \ref{lem:est_on_sing_values}, there exist $C,c > 0$ such that \begin{align*} \frac{ \mu_{m+1}}{\mu_m}(\rho(\gamma)) \leq C e^{-c d_S(\gamma, \id)} \end{align*} for all $\gamma \in \Gamma$. Thus, Theorem~\ref{thm:SV_char_of_Anosov} implies that $\rho$ is $m$-Anosov.
To finish the proof, we will now show that $\xi^{(1)}(x) + p + \xi^{(d-m)}(y)$ is a direct sum for all pairwise distinct $\xi^{(1)}(x),p,\xi^{(1)}(y) \in M$. Let \[\wh{M}: = \{ (\xi^{(1)}(x),p,\xi^{(1)}(y)) \in M^3 : \xi^{(1)}(x),p,\xi^{(1)}(y) \text{ are pairwise distinct} \}\]
and let \[\Oc :=\left\{(\xi^{(1)}(x),p,\xi^{(1)}(y))\in \wh{M}:\xi^{(1)}(x) + p+ \xi^{(d-m)}(y)\text{ is a direct sum}\right\}.\] Notice that $\Oc$ is open and $\Gamma$-invariant. Also, recall that $\Gamma$ acts co-compactly on $\widetilde{U\Gamma}$, the flow space associated to $\Gamma$ described in Section~\ref{sec:flowspace}. Hence, there exists a compact set $K \subset \widetilde{U\Gamma}$ such that $\Gamma \cdot K = \widetilde{U\Gamma}$. Then define \begin{align*} 0 < \epsilon := \min\{ d_{\Pb}(\xi^{(1)}(v^+), \xi^{(1)}(v^-)) : v \in K\}, \end{align*} where $d_{\Pb}$ is a distance on $\Pb(\Rb^d)$ induced by a Riemannian metric.
Given two proper subspaces $V, W \subset \Rb^d$ define \begin{align*} d(V,W):= \min\{ \norm{v-w}_2 : v \in V, w \in W, \norm{v}_2=\norm{w}_2=1\}. \end{align*}
Note that $\{(x,y)\in\partial_\infty\Gamma^2:d_{\Pb}(\xi^{(1)}(x), \xi^{(1)}(y)) \geq \epsilon\}$ is compact. Since $\xi^{(m)}(x) + \xi^{(d-m)}(y)=\Rb^d$ when $x \neq y$, this implies that there exists $\theta_0 > 0$ with the following property: if $x,y \in \partial_\infty \Gamma$ and $d_{\Pb}(\xi^{(1)}(x), \xi^{(1)}(y)) \geq \epsilon$, then \[d(\xi^{(m)}(x),\xi^{(d-m)}(y))\geq\theta_0.\]
Also, by hypothesis, if $\gamma \in \Gamma$ has infinite order and $\gamma^+\in \partial_\infty \Gamma$ is the attracting fixed point of $\gamma$, then \begin{align*} \xi^{(m)}(\gamma^+) = T_{\xi^{(1)}(\gamma^+)} M. \end{align*}
So by the continuity of $\xi^{(m)}$ and the density of $\{ \gamma^+: \gamma \in \Gamma \text{ has infinite order}\}$ in $\partial_\infty \Gamma$ we see that $\xi^{(m)}(x) = T_{\xi^{(1)}(x)} M$ for all $x \in \partial_\infty \Gamma$. Thus, the compactness of $M$ implies that there exists $\delta > 0$ with the following property: if $\xi^{(1)}(x),p \in M$ and $d_{\Pb}(\xi^{(1)}(x), p) \leq \delta$, then \begin{align*} d\left( \xi^{(1)}(x)+p,\xi^{(m)}(x) \right) < \theta_0/2. \end{align*} Using this, define \[\Uc :=\left\{(\xi^{(1)}(x),p,\xi^{(1)}(y))\in \wh{M}:d_{\Pb}(\xi^{(1)}(x),\xi^{(1)}(y)) \geq \epsilon\text{ and } d_{\Pb}(\xi^{(1)}(x), p) \leq \delta\right\}.\] We claim that $\Uc\subset \Oc$. Indeed, by the definition of $\theta_0$ and $\delta$, if $(\xi^{(1)}(x),p,\xi^{(1)}(y)) \in \Uc$ then \begin{align*} d\left( \xi^{(1)}(x)+p,\xi^{(d-m)}(y) \right) > \theta_0/2. \end{align*} This implies that $\xi^{(1)}(x) +p+ \xi^{(d-m)}(y)$ is direct, so $(\xi^{(1)}(x),p,\xi^{(1)}(y)) \in \Oc$.
Next, let $P(M) \subset \widetilde{U\Gamma} \times M$ be the set defined by \eqref{eqn:P(M)}, and recall that $\phi_t$ denotes the geodesic flow on $\widetilde{U\Gamma}$. Note that there exists $T \geq 0$ such that if $v \in K$, $t \geq T$, and $(\phi_t(v),p) \in P(M)$, then $(\xi^{(1)}(v^+),p,\xi^{(1)}(v^-)) \in \Uc\subset \Oc$. Now, choose any $(\xi^{(1)}(x),p,\xi^{(1)}(y)) \in \wh{M}$. From the definition of $P(M)$, there exists $v \in \widetilde{U\Gamma}$ such that $v^+ = x$, $v^- = y$, and $(v,p) \in P(M)$. Further, there exists $\gamma \in \Gamma$ such that $w := \gamma \cdot\phi_{-T}(v) \in K$. Since the $\Gamma$-action on $\wt{U\Gamma}$ commutes with the geodesic flow, \begin{align*} (\phi_T(w),\rho(\gamma)\cdot p) = \gamma\cdot ( v, p) \in P(M) \end{align*} and so $\gamma\cdot (\xi^{(1)}(x),p,\xi^{(1)}(y))=(\xi^{(1)}(w^+),\rho(\gamma)\cdot p,\xi^{(1)}(w^-))\in \Oc$, which means $(\xi^{(1)}(x),p,\xi^{(1)}(y)) \in \Oc$. Thus, $\Oc =\wh{M}$.
\end{proof}
\section{Necessary conditions for differentiability of $1$-dimensional $\rho$-controlled subsets}\label{sec:nec_surface}
In this section we prove Theorem~\ref{thm:nec_surface}. Again by Example \ref{eg:limitset}, it is sufficient to prove the following theorem.
\begin{theorem} \label{thm:nec_surface_body} Suppose $\Gamma$ is a hyperbolic group and $\rho: \Gamma \rightarrow \PGL_{d}(\Rb)$ is an irreducible $1$-Anosov representation. Also, suppose that $M$ is a $\rho$-controlled, topological circle. If \begin{enumerate} \item[($\ddagger$)] $M$ is $C^\alpha$ along $\xi^{(1)}(\partial_\infty\Gamma)$ for some $\alpha>1$, \end{enumerate} then \begin{enumerate} \item[($\dagger$')] $\rho$ is $m$-Anosov and $\xi^{(1)}(x) + p + \xi^{(d-m)}(y)$ is a direct sum for all pairwise distinct $\xi^{(1)}(x),p,\xi^{(1)}(y) \in M$. \end{enumerate} \end{theorem}
Before proving Theorem \ref{thm:nec_surface_body}, we give an example demonstrating that the irreducibility of $\rho$ is a necessary hypothesis for Theorem \ref{thm:nec_surface_body} (and also Theorem~\ref{thm:nec_surface}) to hold.
\subsection{Irreducibility of $\rho$}\label{sec:rhoirred}
For $d \in \Nb$, let $\overline{\tau}_d : \GL_2(\Rb) \rightarrow \GL_d(\Rb)$ be the standard irreducible representation, which is constructed as follows. First, identify $\Rb^d$ with the space of homogeneous degree $d-1$ polynomials in two variables with real coefficients by \[(a_1,\dots,a_d)\mapsto \sum_{i=1}^{d}a_iX^{d-i}Y^{i-1}.\] Using this, we may define a $\GL_2(\Rb)$-action on $\Rb^d$ by \begin{align*} \left(\begin{array}{cc} a&b\\ c&d \end{array}\right) \cdot P(X,Y) = P\left( \left(\begin{array}{cc} a&b\\ c&d \end{array}\right)^{-1}\cdot(X,Y)\right). \end{align*} It is easy to check that this $\GL_2(\Rb)$-action is linear. Thus, it has an associated linear representation $\overline{\tau}_d : \GL_2(\Rb) \rightarrow \GL_d(\Rb)$, which descends to a representation $\tau_d:\PGL_2(\Rb)\to\PGL_d(\Rb)$.
One can verify that if $\lambda, \lambda^{-1}$ are the absolute values of the eigenvalues of $\overline{g} \in \SL_2^\pm(\Rb)$, then \begin{align} \label{eq:eigenvalues_std_repn} \lambda^{d-1}, \lambda^{d-3}, \dots, \lambda^{-(d-1)} \end{align} are the absolute values of the eigenvalues of $\overline{\tau}_d(\overline{g})$. Further, if $B_k\subset\GL_k(\Rb)$ denotes the subgroup of upper triangular matrices, then $\overline{\tau}_d(B_2)\subset B_d$. In particular, $\overline{\tau}_d$ induces a smooth map \[\Psi_d:\Pb(\Rb^2)\simeq\PGL_2(\Rb)/B_2\to\PGL_d(\Rb)/B_d.\] Since $\PGL_d(\Rb)/B_d$ is the space of complete flags in $\Rb^d$, there is an obvious smooth projection $p_m:\PGL_d(\Rb)/B_d\to\Gr_m(\Rb^d)$ for each $m=1,\dots,d-1$. Using this, define $\Psi_{d,m}:=p_m\circ\Psi_d:\Pb(\Rb^2)\to\Gr_m(\Rb^d)$. It is clear that $\Psi_{d,m}$ is $\overline{\tau}_d$-equivariant.
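To verify \eqref{eq:eigenvalues_std_repn} directly in the smallest case (a sanity check): let $d=3$ and $\overline{g} = \mathrm{diag}(\lambda, \lambda^{-1})$. Then $\overline{g}^{-1}\cdot(X,Y) = (\lambda^{-1}X, \lambda Y)$, so the monomials diagonalize $\overline{\tau}_3(\overline{g})$: \begin{align*} X^2 \mapsto \lambda^{-2}X^2, \qquad XY \mapsto XY, \qquad Y^2 \mapsto \lambda^{2}Y^2, \end{align*} and the absolute values of the eigenvalues of $\overline{\tau}_3(\overline{g})$ are $\lambda^{2}, 1, \lambda^{-2}$, as claimed.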
Next, observe that the subgroup $\Pb(\GL_d(\Rb)\times\GL_{d+2}(\Rb))\subset\PGL_{2d+2}(\Rb)$ preserves both the subspaces $\Pb(\Rb^d)$ and $\Pb(\Rb^{d+2})$ of $\Pb(\Rb^{2d+2})$ induced respectively by the obvious inclusions of $\Rb^d\simeq\Rb^d\oplus\{0\}$ and $\Rb^{d+2}\simeq\{0\}\oplus\Rb^{d+2}$ into $\Rb^d\oplus\Rb^{d+2}\simeq\Rb^{2d+2}$. Similarly, the subspaces $\Gr_{d-1}(\Rb^d)$ and $\Gr_{d+1}(\Rb^{d+2})$ of $\Gr_{2d+1}(\Rb^{2d+2})$ that are respectively defined by the inclusions $V\mapsto V\oplus\Rb^{d+2}$ and $U\mapsto \Rb^d \oplus U$ are $\Pb(\GL_d(\Rb)\times\GL_{d+2}(\Rb))$-invariant. In particular, the representation \[\overline{\tau}_d\oplus\overline{\tau}_{d+2}:\GL_2(\Rb)\to\GL_d(\Rb)\times\GL_{d+2}(\Rb)\subset\GL_{2d+2}(\Rb)\] defines the representation $\tau_d\oplus\tau_{d+2}:\PGL_2(\Rb)\to\PGL_{2d+2}(\Rb)$ by projectivizing, and the maps \begin{align*} &\Psi_{d,1}:\Pb(\Rb^2)\to\Pb(\Rb^d)\subset\Pb(\Rb^{2d+2}),\\ &\Psi_{d+2,1}:\Pb(\Rb^2)\to\Pb(\Rb^{d+2})\subset\Pb(\Rb^{2d+2}),\\ &\Psi_{d,d-1}:\Pb(\Rb^2)\to\Gr_{d-1}(\Rb^d)\subset\Gr_{2d+1}(\Rb^{2d+2}),\\ &\Psi_{d+2,d+1}:\Pb(\Rb^2)\to\Gr_{d+1}(\Rb^{d+2})\subset\Gr_{2d+1}(\Rb^{2d+2}) \end{align*} are smooth and $\tau_d\oplus\tau_{d+2}$-equivariant.
One can check that $(\Psi_{d,1},\Psi_{d,d-1})$ and $(\Psi_{d+2,1},\Psi_{d+2,d+1})$ are transverse pairs of maps. It also follows from \eqref{eq:eigenvalues_std_repn} that for any $g\in\PGL_2(\Rb)$ with $\lambda_1(g)>\lambda_2(g)$, $(\tau_d\oplus\tau_{d+2})(g)$ is proximal. However, the attracting eigenline and repelling hyperplane of $(\tau_d\oplus\tau_{d+2})(g)$ lie in the images of $\Psi_{d+2,1}$ and $\Psi_{d+2,d+1}$ respectively, so only the pair of maps $(\Psi_{d+2,1},\Psi_{d+2,d+1})$ is dynamics preserving. \begin{example}\label{ex:surface_bad_example} Fix a co-compact lattice $\Gamma \leq \PGL_2(\Rb)$. The inclusion of $\Gamma$ into $\PGL_2(\Rb)$ induces an identification $\partial_\infty\Gamma\simeq\Pb(\Rb^2)$, and thus equips $\partial_\infty\Gamma$ with the structure of a smooth manifold. Consider the representation
\[\rho :=(\tau_d \oplus \tau_{d+2})|_{\Gamma}:\Gamma\to\PGL(\Rb^d\oplus\Rb^{d+2}).\] By the discussion above, \[\Psi_{d+2,1}:\partial_\infty\Gamma\to\Pb(\Rb^{2d+2})\,\,\,\text{ and }\,\,\,\Psi_{d+2,d+1}:\partial_\infty\Gamma\to\Gr_{2d+1}(\Rb^{2d+2})\] is a pair of smooth, dynamics preserving, $\rho$-equivariant, transverse maps. Thus, one deduces from \eqref{eq:eigenvalues_std_repn} that $\rho$ is $1$-Anosov, but it is not $2$-Anosov because \begin{align*} \frac{\lambda_2}{\lambda_3}(\rho(\gamma)) =1 \end{align*} for any $\gamma \in \Gamma$. However, since $\Psi_{d+2,1}$ is a smooth map, the $1$-limit set of $\rho$ is a $1$-dimensional, $C^{\infty}$-submanifold of $\Pb(\Rb^{2d+2})$. This shows that the conclusion of Theorem \ref{thm:nec_surface_body} can fail when the irreducibility hypothesis is dropped. \end{example}
\subsection{Proof of Theorem \ref{thm:nec_surface_body}}
Lemma \ref{lem:surface1} and Lemma \ref{lem:surface2} stated below are respectively the analogs of Lemma \ref{lem:proximal} and Proposition \ref{prop:eigenvalue_est} in the case when $M$ is a $1$-dimensional topological manifold. With these two lemmas, we can replicate the proof of Theorem \ref{thm:nec_general_body} to prove Theorem~\ref{thm:nec_surface_body}.
\begin{remark} In Lemma \ref{lem:proximal} and Proposition \ref{prop:eigenvalue_est}, we assumed that $\bigwedge^2\rho$ is irreducible, but in Lemma \ref{lem:surface1} and Lemma \ref{lem:surface2} we assume that $\rho$ is irreducible. \end{remark}
\begin{lemma} \label{lem:surface1} Suppose that $\rho:\Gamma\to\PGL_d(\Rb)$ is an irreducible $1$-Anosov representation. Also, suppose that $M$ is a $\rho$-controlled topological circle that is $C^{\alpha}$ along the $1$-limit set of $\rho$ for some $\alpha>1$, and let $\Phi:\xi^{(1)}(\partial_\infty\Gamma)\to\Pb\left(\bigwedge^2\Rb^d\right)$ be as defined in \eqref{eqn:Phi}. If $\gamma \in \Gamma$ has infinite order, then $\bigwedge^{2} \rho(\gamma)$ is proximal and $\Phi\left(\xi^{(1)}(\gamma^+)\right)\in\Pb\left(\bigwedge^2\Rb^d\right)$ is the attracting fixed point of $\bigwedge^{2} \rho(\gamma)$. \end{lemma}
\begin{proof} Define $\overline{\Psi}: \xi^{(1)}(\partial_\infty\Gamma)\times \xi^{(1)}(\partial_\infty\Gamma)\rightarrow \Gr_2(\Rb^d)$ by letting $\overline{\Psi}(p,q)$ be the projective line containing $p,q$ when $p \neq q$ and letting $\overline{\Psi}(p,p)$ be the projective line tangent to $M$ at $p$. Then define \[\Psi:=F_{d,2}\circ\overline{\Psi}: \xi^{(1)}(\partial_\infty\Gamma)\times \xi^{(1)}(\partial_\infty\Gamma)\to\Pb\left(\bigwedge^2\Rb^d\right),\] where $F_{d,2}$ is defined by \eqref{eqn:Fdm}. Observe that $\Psi$ is continuous and $\Phi(p)=\Psi(p,p)$ for all $p\in \xi^{(1)}(\partial_\infty\Gamma)$.
Fix distances $d_1$ on $\Pb(\Rb^d)$ and $d_2$ on $\Pb\left(\bigwedge^2\Rb^d\right)$ that are induced by Riemannian metrics. Since $M$ is $C^{\alpha}$ along the $1$-limit set of $\rho$ for some $\alpha>1$, there exists $C > 0$ such that \begin{align}\label{eqn:m=2} d_2\Big( \Psi(p,p), \Psi(p, q) \Big) \leq Cd_1(p, q)^{\alpha-1} \end{align} for all $p,q \in \xi^{(1)}(\partial_\infty\Gamma)$. Also, since $\rho$ is irreducible, the elements of $ \xi^{(1)}(\partial_\infty\Gamma)$ span $\Rb^d$, so \begin{align*} \Psi\left( \xi^{(1)}(\partial_\infty\Gamma) \times \xi^{(1)}(\partial_\infty\Gamma)\right) \end{align*} spans $\bigwedge^2 \Rb^d$. Now the rest of the proof closely follows the proof of Lemma \ref{lem:proximal}, but we use $\Psi(\xi^{(1)}(\gamma^+),\xi^{(1)}(\gamma^+))$ in place of $\Phi(\xi^{(1)}(\gamma^+))$ and $\Psi(\xi^{(1)}(x),\xi^{(1)}(\gamma^+))$ in place of $\Phi(\xi^{(1)}(x))$. \end{proof}
\begin{remark} In the case when $M$ is a topological $(m-1)$-dimensional manifold with $m>2$, it is not true in general that $\xi^{(1)}(x_1)+\dots+\xi^{(1)}(x_m)$ converges to $\xi^{(m)}(x)$ as $x_i\to x$, so the direct analog of \eqref{eqn:m=2} cannot hold. As such, we need the additional assumption that $\bigwedge^m\rho$ is irreducible in Theorem \ref{thm:nec_general_body}. \end{remark}
\begin{lemma}\label{lem:surface2} Suppose that $\rho:\Gamma\to\PGL_d(\Rb)$ is an irreducible $1$-Anosov representation. Also, suppose that $M$ is a $\rho$-controlled topological circle that is $C^{\alpha}$ along the $1$-limit set of $\rho$ for some $\alpha>1$. If $\gamma \in \Gamma$, then \begin{align*} \frac{ \lambda_{m+1}}{\lambda_m}(\rho(\gamma)) \leq \left(\frac{ \lambda_{2}}{\lambda_1}(\rho(\gamma)) \right)^{\alpha-1}. \end{align*} \end{lemma}
\begin{proof} Use the same argument as we did in the proof of Proposition~\ref{prop:eigenvalue_est}, but with $\Psi(\xi^{(1)}(\gamma^+),\xi^{(1)}(\gamma^+))$ and $\Psi(\xi^{(1)}(x),\xi^{(1)}(\gamma^+))$ in place of $\Phi(\xi^{(1)}(\gamma^+))$ and $\Phi(\xi^{(1)}(x))$ respectively, and Lemma \ref{lem:surface1} in place of Lemma \ref{lem:proximal}. \end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:nec_surface_body}] Use the same proof as Theorem~\ref{thm:nec_general_body}, but replace Lemma \ref{lem:proximal} and Proposition \ref{prop:eigenvalue_est} by Lemma \ref{lem:surface1} and Lemma \ref{lem:surface2} respectively. \end{proof}
\section{$\PGL_d(\Rb)$-Hitchin representations}\label{sec:Hitchin}
In this section, let $\Gamma:=\pi_1(\Sigma)$, where $\Sigma$ is a closed, orientable, connected hyperbolic surface of genus at least $2$.
\begin{definition}\label{defn:hitchin_reps}
A \emph{$\PGL_d(\Rb)$-Hitchin representation} is a continuous deformation (in $\Hom(\Gamma,\PGL_d(\Rb))$) of $\tau_d\circ j$, where $j:\Gamma\to\PGL_2(\Rb)$ is a Fuchsian representation, and $\tau_d:\PGL_2(\Rb)\to\PGL_d(\Rb)$ is the representation defined in Section \ref{sec:rhoirred}.
\end{definition}
The goal of this section is to show that if $\rho$ is a $\PGL_d(\Rb)$-Hitchin representation, then for all $k=1,\dots,d-1$, $\bigwedge^k\rho:\Gamma\to\PGL(\bigwedge^k\Rb^d)$ satisfies the hypothesis of Theorem \ref{thm:main} (see Example \ref{cor:hitchin}). The following proposition is a straightforward consequence of Labourie's deep work on the Hitchin component~\cite{L2006} and has also been observed by Pozzetti-Sambarino-Wienhard \cite{PSW18}.
\begin{proposition}\label{prop:3_hyperconvex_exterior_prod} Let $\rho$ be a $\PGL_d(\Rb)$-Hitchin representation, let $k \in \{1,\dots, d-1\}$, and set $D := \dim \left(\bigwedge^k \Rb^d\right)$. Then $\bigwedge^k \rho : \Gamma \rightarrow \PGL\left(\bigwedge^k \Rb^d\right)$ is $(1,2)$-Anosov, and its $1$-flag map $\zeta^{(1)}$ and $(D-2)$-flag map $\zeta^{(D-2)}$ satisfy the property that \begin{align*} \zeta^{(1)}(x)+\zeta^{(1)}(y) + \zeta^{(D-2)}(z) \end{align*} is a direct sum for all pairwise distinct $x,y,z \in \partial_\infty \Gamma$. \end{proposition}
For the rest of the section fix some $\PGL_d(\Rb)$-Hitchin representation $\rho$ and some finite generating set $S$ of $\Gamma$.
\subsection{Preliminaries}\label{sec:prelim_Hitchin}
Before proving the proposition, we recall some results of Labourie. By Theorem 4.1 and Proposition 3.2 in~\cite{L2006},
\begin{enumerate}
\item\label{item:hitchin1} $\rho$ is $k$-Anosov for every $1 \leq k \leq d-1$. Denote the $k$-flag map of $\rho$ by $\xi^{(k)}$.
\item\label{item:hitchin2} If $x,y,z \in \partial_\infty \Gamma$ are distinct, $k_1,k_2,k_3 \geq 0$, and $k_1+k_2+k_3 =d$, then
\begin{align*}
\xi^{(k_1)}(x) + \xi^{(k_2)}(y) + \xi^{(k_3)}(z) = \Rb^d
\end{align*}
is a direct sum.
\item\label{item:hitchin3} If $x,y,z \in \partial_\infty \Gamma$ are distinct and $0\leq k < d-2$, then
\begin{align*}
\xi^{(k+1)}(y) + \xi^{(d-k-2)}(x) + \Big(\xi^{(k+1)}(z) \cap \xi^{(d-k)}(x) \Big)= \Rb^d
\end{align*}
is a direct sum.
\item $\rho$ admits a lift $\overline{\rho}:\Gamma\to\GL_d(\Rb)$ whose image lies in $\SL_d(\Rb)$.
\item\label{item:hitchin4} If $\gamma \in \Gamma \setminus \{1\}$, then the absolute values of the eigenvalues of $\overline{\rho}(\gamma)$ satisfy
\begin{align*}
\lambda_1(\overline{\rho}(\gamma)) > \dots > \lambda_d(\overline{\rho}(\gamma)).
\end{align*}
\item\label{item:hitchin5} If $\gamma \in \Gamma \setminus \{1\}$, then $ \xi^{(k)}(\gamma^+)$ is the span of the eigenspaces of $\overline{\rho}(\gamma)$ corresponding to the eigenvalues
\begin{align*}
\lambda_1(\overline{\rho}(\gamma)), \dots, \lambda_k(\overline{\rho}(\gamma)).
\end{align*} \end{enumerate}
\subsection{Proof of Proposition~\ref{prop:3_hyperconvex_exterior_prod}}
Since $\rho$ is $k$-Anosov for every $1 \leq k \leq d-1$, Theorem~\ref{thm:SV_char_of_Anosov} implies that there exist $C,c>0$ such that \begin{align*} \log \frac{\mu_k}{\mu_{k+1}}(\rho(\gamma)) \geq C d_S(1,\gamma) -c
\end{align*} for all $\gamma \in \Gamma$ and $1 \leq k \leq d-1$.
\begin{lemma} $\bigwedge^k \rho$ is $(1,2)$-Anosov. \end{lemma}
\begin{proof} By Theorem~\ref{thm:SV_char_of_Anosov} it is enough to prove that there exist $A,a>0$ such that \begin{align*}
\log \frac{\mu_1}{\mu_{2}}\left(\bigwedge^k\rho(\gamma)\right) \geq A d_S(1,\gamma) -a
\end{align*}
and \begin{align*}
\log \frac{\mu_2}{\mu_{3}}\left(\bigwedge^k \rho(\gamma)\right) \geq A d_S(1,\gamma) -a
\end{align*} for all $\gamma \in \Gamma$.
Fix $\gamma \in \Gamma$ and let $\overline{g} \in \SL_d(\Rb)$ be a lift of $\rho(\gamma)$. Then let \begin{align*} \sigma_1 \geq \dots \geq \sigma_d \end{align*} denote the singular values of $\overline{g}$ (in the Euclidean norm on $\Rb^d$), and let \begin{align*} \chi_1 \geq \dots \geq \chi_D \end{align*} denote the singular values of $\bigwedge^k \overline{g}$ (in the induced norm on $\bigwedge^k\Rb^d$).
Recall that Equation~\eqref{eqn:0evalue1} says that \begin{align*} \chi_1 = \sigma_1 \cdots \sigma_k \text{ and } \chi_2 = \sigma_1 \cdots \sigma_{k-1} \sigma_{k+1}. \end{align*} Hence \begin{align*} \log \frac{\mu_1}{\mu_{2}}\left(\bigwedge^k\rho(\gamma)\right) = \log \frac{\chi_1}{\chi_2} = \log \frac{\sigma_k}{\sigma_{k+1}} \geq C d_S(1,\gamma) -c.
\end{align*}
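As a sanity check, consider the smallest nontrivial case $d=3$, $k=2$: the singular values of $\bigwedge^2\overline{g}$ are \begin{align*} \chi_1 = \sigma_1\sigma_2 \geq \chi_2 = \sigma_1\sigma_3 \geq \chi_3 = \sigma_2\sigma_3, \end{align*} so $\chi_1/\chi_2 = \sigma_2/\sigma_3 = \sigma_k/\sigma_{k+1}$, and the remaining gap $\chi_2/\chi_3 = \sigma_1/\sigma_2$ is likewise controlled by a singular value gap of $\overline{g}$.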
To verify the other inequality, pick $1 \leq i_1 < \dots <i_k \leq d$ such that \begin{align*} \chi_3 = \sigma_{i_1} \cdots \sigma_{i_k}. \end{align*} We consider two cases based on the value of $i_{k-1}$.
\noindent \textbf{Case 1:} Suppose $i_{k-1} = k-1$. Then $i_j = j$ for $j \leq k-1$ and $i_k \geq k$. Since \begin{align*} (i_1,\dots, i_k) \notin \{ (1,\dots, k), (1,\dots,k-1,k+1)\} \end{align*} we must have $i_k \geq k+2$. So \begin{align*} \log \frac{\chi_2}{\chi_3} = \log \left(\frac{\sigma_1} {\sigma_{i_1}}\cdots \frac{\sigma_{k-1}}{\sigma_{i_{k-1}}} \frac{\sigma_{k+1}}{\sigma_{i_{k}}}\right) = \log \frac{\sigma_{k+1}}{\sigma_{i_k}} \geq \log \frac{\sigma_{k+1}}{\sigma_{k+2}} \geq C d_S(1,\gamma)-c. \end{align*}
\noindent \textbf{Case 2:} Suppose $i_{k-1} \geq k$. Then $i_k \geq k+1$ and $i_j \geq j$ for all $j$ so
\begin{align*} \log \frac{\chi_2}{\chi_3} = \log \left(\frac{\sigma_1}{\sigma_{i_1}} \cdots \frac{\sigma_{k-2}}{\sigma_{i_{k-2}}} \frac{\sigma_{k-1}}{\sigma_{i_{k-1}}} \frac{\sigma_{k+1}}{\sigma_{i_{k}}}\right) \geq \log \frac{\sigma_{k-1}}{\sigma_{i_{k-1}}} \geq \log \frac{\sigma_{k-1}}{\sigma_{k}} \geq C d_S(1,\gamma)-c. \end{align*} In either case, \begin{align*} \log \frac{\mu_2}{\mu_{3}}\left(\bigwedge^k \rho(\gamma)\right)=\log \frac{\chi_2}{\chi_3} \geq C d_S(1,\gamma) -c. \end{align*} Then since $\gamma \in \Gamma$ was arbitrary, we see that $\bigwedge^k \rho$ is $(1,2)$-Anosov. \end{proof}
Given subspaces $V_1,\dots,V_k\subset\Rb^d$, we will let $V_1\wedge\dots\wedge V_k$ denote the subspace of $\bigwedge^k\Rb^d$ that is spanned by $\{X_1\wedge\dots\wedge X_k:X_i\in V_i\}$. For $\ell\in\{1,2,D-2, D-1\}$ define maps \begin{align*} \zeta^{(\ell)} : \partial_\infty \Gamma \rightarrow \Gr_\ell\left(\bigwedge^k \Rb^d\right) \end{align*} by \begin{align*} \zeta^{(1)}(x) = \bigwedge^k \xi^{(k)}(x), \end{align*} \begin{align*} \zeta^{(2)}(x) =\left( \bigwedge^{k-1} \xi^{(k-1)}(x) \right) \wedge \xi^{(k+1)}(x), \end{align*} \begin{align*} \zeta^{(D-2)}(x) =\xi^{(d-k-1)}(x) \wedge \left( \bigwedge^{k-1} \Rb^d \right) + \xi^{(d-k)}(x) \wedge \xi^{(d-k+1)}(x) \wedge \left( \bigwedge^{k-2} \Rb^d \right), \end{align*} \begin{align*} \zeta^{(D-1)}(x) = \xi^{(d-k)}(x) \wedge \left(\bigwedge^{k-1} \Rb^d \right). \end{align*} These maps are clearly continuous and $\bigwedge^k \rho$-equivariant.
\begin{lemma} $\zeta^{(1)}, \zeta^{(2)}, \zeta^{(D-2)}, \zeta^{(D-1)}$ are the flag maps of $\bigwedge^k \rho$. \end{lemma}
\begin{proof} By the density of attracting fixed points in $\partial_\infty \Gamma$ and the continuity of the maps, it is enough to verify that $\zeta^{(j)}(\gamma^+)$ is the attracting fixed point of $\bigwedge^k \rho(\gamma)$ in $\Gr_j(\bigwedge^k \Rb^d)$ when $\gamma^+ \in \partial_\infty \Gamma$ is the attracting fixed point of $\gamma \in \Gamma$.
By Property~\eqref{item:hitchin5} in Section~\ref{sec:prelim_Hitchin}, there exists a basis $v_1, \dots, v_d$ of $\Rb^d$ of eigenvectors of $\rho(\gamma)$ such that \begin{align*} \xi^{(j)}(\gamma^+) = \Span\{ v_1,\dots, v_j\} \text{ for } j=1,\dots, d. \end{align*} Let $I_1 = \{ d-k+1, d-k+2, \dots, d\}$ and $I_2 = \{ d-k, d-k+2, d-k+3,\dots, d\}$. Then a calculation shows that \begin{align*} \zeta^{(1)}(\gamma^+) = \left[ v_1 \wedge \dots \wedge v_k\right], \end{align*} \begin{align*} \zeta^{(2)}(\gamma^+) = \left\{ v_1 \wedge \dots \wedge v_{k-1} \wedge (av_k+bv_{k+1}) : a,b \in \Rb\right\}, \end{align*} \begin{align*} \zeta^{(D-2)}(\gamma^+) = \Span\left\{ v_{i_1} \wedge \dots \wedge v_{i_k} : \{ i_1, \dots, i_k\} \notin \{ I_1, I_2\} \right\}, \end{align*} \begin{align*} \zeta^{(D-1)}(\gamma^+) = \Span\left\{ v_{i_1} \wedge \dots \wedge v_{i_k} : \{ i_1, \dots, i_k\} \neq I_1 \right\}. \end{align*} So $\zeta^{(j)}(\gamma^+)$ is the attracting fixed point of $\bigwedge^k \rho(\gamma)$ in $\Gr_j(\bigwedge^k \Rb^d)$. \end{proof}
\begin{lemma} $\zeta^{(1)}(x) + \zeta^{(1)}(y) + \zeta^{(D-2)}(z)$ is a direct sum for all $x,y,z \in \partial_\infty \Gamma$ distinct. \end{lemma}
\begin{proof} Fix $x,y,z \in \partial_\infty \Gamma$ distinct, and choose a basis $v_1,\dots, v_d \in \Rb^d$ such that \begin{align*} [v_\ell] = \xi^{(\ell)}(x) \cap \xi^{(d-\ell+1)}(y) \end{align*} for $1 \leq \ell \leq d$. Next pick $u_1, \dots, u_k \in \Rb^d$ such that \begin{align*} \xi^{(k)}(z) = \Span\{ u_1,\dots, u_k\}. \end{align*} Then $\zeta^{(1)}(z) = [ u_1 \wedge \dots \wedge u_k]$.
If $I = \{ 1,\dots, k-1, k+1\}$, then a computation shows that \begin{align*} \zeta^{(1)}(x) + \zeta^{(D-2)}(y) = \Span\left\{ v_{i_1} \wedge \dots \wedge v_{i_k} : \{ i_1, \dots, i_k\} \neq I \right\}. \end{align*} Since \begin{align*} \xi^{(k)}(z) + \left( \xi^{(k)}(x) \cap \xi^{(d-k+1)}(y) \right) + \xi^{(d-k-1)}(y) = \Rb^d \end{align*} is a direct sum and \begin{align*} \left( \xi^{(k)}(x) \cap \xi^{(d-k+1)}(y) \right) + \xi^{(d-k-1)}(y) = \Span \{ v_k, v_{k+2}, \dots, v_d\} \end{align*} we see that \begin{align*} (u_1 \wedge \dots \wedge u_k) \wedge (v_k \wedge v_{k+2} \wedge \dots \wedge v_d) \neq 0. \end{align*} Hence $\zeta^{(1)}(z) = [u_1 \wedge \dots \wedge u_k]$ has a non-zero component on $v_1 \wedge \dots \wedge v_{k-1} \wedge v_{k+1}$, so \begin{equation*} \zeta^{(1)}(x) +\zeta^{(1)}(z) + \zeta^{(D-2)}(y) = \bigwedge^k \Rb^d. \end{equation*} Since $x,y,z$ were arbitrary distinct points, the lemma follows.
\end{proof}
\section{Real hyperbolic lattices}\label{sec:real_hyp_lattices}
The goal of this section is to justify Example \ref{cor:hyperbolic_lattices}. More precisely, we need to prove the following proposition.
\begin{proposition}\label{thm:hyperbolic} Suppose $\tau: \PO(m,1) \rightarrow \PGL_d(\Rb)$ is a representation, $\Gamma \leq \PO(m,1)$ is a co-compact lattice, and $\rho = \tau|_{\Gamma} : \Gamma \rightarrow \PGL_d(\Rb)$ is the representation obtained by restricting $\tau$ to $\Gamma$. If $\rho$ is irreducible and $1$-Anosov, then $\rho$ is also $m$-Anosov and \begin{align*} \xi^{(1)}(x) + \xi^{(1)}(y) + \xi^{(d-m)}(z) \end{align*} is a direct sum for all $x,y,z \in \partial_\infty \Gamma$ distinct. Thus, the same is true for any small deformation of $\rho$.
\end{proposition}
Let $\PO(m,1)\subset\PGL_{m+1}(\Rb)$ be the subgroup that leaves invariant the bilinear pairing that is represented in the standard basis of $\Rb^{m+1}$ by the matrix
\[\left(
\begin{array}{ccccc}
1&0&\dots&0&0\\
0&1&\dots&0&0\\
\vdots&\vdots&\ddots&\vdots&\vdots\\
0&0&\dots&1&0\\
0&0&\dots&0&-1
\end{array}\right).\]
\subsection{Preliminaries}
Consider the unit ball $\Bb_m \subset \Rb^m$ endowed with the metric \begin{align*} d(x,y) = \frac{1}{2}\log \frac{\norm{y-a}_2\norm{x-b}_2}{\norm{x-a}_2\norm{y-b}_2} \end{align*} where $a,b \in \partial \Bb_m \cap ( x+\Rb(y-x) )$ ordered $a,x,y,b$, and $\norm{\cdot}_2$ is the standard Euclidean norm on $\Rb^m$. The metric space $(\Bb_m, d)$ is usually known as the \emph{Klein-Beltrami model} of real hyperbolic $m$-space. Further, $\PO(m,1)$ acts transitively and by isometries on $(\Bb_m, d)$ via fractional linear transformations, that is \begin{align*} \begin{bmatrix} A & u \\ {^tv} & a \end{bmatrix} \cdot x = \frac{ Ax + u}{{^tv}x + a}. \end{align*}
Using the formula for the distance, one can compute that $d(e^{sH} \cdot 0, 0) = s$, where \[H:= \begin{bmatrix} 0 & e_1 \\ {^t e_1} & 0 \end{bmatrix}.\] In fact, one can verify that the map $\gamma_0:\Rb\to\Bb_m$ given by \[\gamma_0:s\mapsto\tanh(s)e_1=e^{sH}\cdot 0\] is a unit-speed geodesic in $\Bb_m$ with $-e_1$ and $e_1$ as its backward and forward endpoints respectively.
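As an informal numerical aside, the identity $d(\gamma_0(s), 0) = s$ can be checked directly from the cross-ratio formula; the sketch below hard-codes the chord endpoints $a = -e_1$, $b = e_1$ for points on the $e_1$-axis, so only the scalar coordinates enter.

```python
import math

def klein_distance_on_axis(x, y):
    # distance between x*e1 and y*e1 in the Klein-Beltrami ball, -1 < x < y < 1;
    # the chord through them meets the boundary at a = -e1 and b = e1
    a, b = -1.0, 1.0
    return 0.5 * math.log(((y - a) * (b - x)) / ((x - a) * (b - y)))

s = 0.7
# the curve s -> tanh(s) e1 is unit speed: d(gamma_0(s), 0) = s
assert abs(klein_distance_on_axis(0.0, math.tanh(s)) - s) < 1e-12
```

The same formula also confirms that distances add along the geodesic, i.e. $d(\gamma_0(s_1), \gamma_0(s_2)) = s_2 - s_1$.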
A computation also verifies that $K:=\{g\in\PO(m,1):g\cdot 0=0\}$ is given by \begin{align}\label{eqn:K} K = \left\{ \begin{bmatrix} A & 0 \\ 0 & \sigma \end{bmatrix}: \sigma \in \{-1,1\}, \ A \in {\rm O}(m)\right\}.
\end{align} In particular, $K$ acts transitively on the set of unit vectors in $T_0\Bb_m$. Since $\PO(m,1)$ acts transitively on $\Bb_m$, this implies that $\PO(m,1)$ acts transitively on the unit tangent bundle of $\Bb_m$. Also, if $p\in\Bb_m$, then $d(0,p)=d(0,k\cdot p)$ for all $k\in K$. This, together with the $KAK$-decomposition theorem \cite[Theorem 7.39]{knapp}, implies the following observation.
\begin{observation}\label{obs:KAK} If $g \in \PO(m,1)$, then there exists $k_1, k_2 \in K$ such that \begin{align*} g = k_1 e^{d(g \cdot 0, 0)H} k_2. \end{align*} \end{observation}
Recall that an element $g \in \PO(m,1)$ is called \emph{hyperbolic} if there exists some geodesic $\gamma: \Rb \rightarrow \Bb_m$ and some $\ell(g) > 0$ such that \begin{align*} g(\gamma(t)) = \gamma(t+\ell(g)) \end{align*} for all $t \in \Rb$. The number $\ell(g)$ is called the \emph{translation length of $g$}. For co-compact lattices in $\PO(m,1)$, we have the following proposition.
\begin{proposition} If $\Gamma \leq \PO(m,1)$ is a co-compact lattice and $\gamma \in \Gamma$ has infinite order, then $\gamma$ is a hyperbolic element. \end{proposition}
\begin{proof} See for instance~\cite[Chapter 12, Proposition 2.6]{dC1992}. \end{proof}
Let $\gamma_0$ be the geodesic as defined above, and let $M$ be the subgroup of $\PO(m,1)$ that fixes the image of the geodesic $\gamma_0$ pointwise, i.e.
\[M := \left\{ k \in K: k \cdot e_1 = e_1\right\}.\]
\begin{proposition}\label{prop:conj} If $h \in \PO(m,1)$ is hyperbolic, then $h=ge^{\ell(h)H}k g^{-1}$ for some $k \in M$ that commutes with $e^{\ell(h)H}$. \end{proposition}
\begin{proof}
Since $h$ is hyperbolic, there exists some geodesic $\gamma: \Rb \rightarrow \Bb_m$ such that $h \gamma(t) = \gamma(t+\ell(h))$ for all $t \in \Rb$. Also, since $\PO(m,1)$ acts transitively on the unit tangent bundle of $\Bb_m$, there exists $g \in \PO(m,1)$ so that $g\circ\gamma=\gamma_0$. Since $h$ translates along $\gamma$ by $\ell(h)$ and $e^{-\ell(h)H}$ translates along $\gamma_0$ by $-\ell(h)$, we see that $e^{-\ell(h)H}ghg^{-1}=ghg^{-1}e^{-\ell(h)H}$ fixes the image of $\gamma_0$ pointwise, and therefore lies in $M$. Hence, there is some $k\in M$ so that
\begin{align*}
ghg^{-1}=e^{\ell(h)H}k=ke^{\ell(h)H}.
\end{align*}
\end{proof}
\subsection{Proof of Proposition~\ref{thm:hyperbolic}} Let $\tau$, $\rho$, and $\Gamma$ satisfy the hypothesis of Proposition~\ref{thm:hyperbolic}. To prove Proposition \ref{thm:hyperbolic}, we use the following two lemmas.
Let $\tau_0 = \tau|_{e^{\Rb \cdot H}}$ and let $\overline{\tau}_0 : e^{\Rb \cdot H} \rightarrow \SL_d(\Rb)$ be the lift of $\tau_0$ (since $\Rb$ is simply connected, such a lift exists).
\begin{lemma} \label{lem:prox}
$\overline{\tau}_0\left(e^H\right)$ is proximal and the eigenvalue with maximal modulus is a positive real number. \end{lemma}
\begin{proof}The group $\tau(M)$ is a compact subgroup of $\PGL_d(\Rb)$, so every element in $\tau(M)$ is elliptic. Now suppose that $\gamma \in \Gamma$ has infinite order. Since $\rho$ is $1$-Anosov, $\tau(\gamma)$ has a representative in $\SL_d^\pm(\Rb)$ whose eigenvalue of maximal absolute value has multiplicity $1$. On the other hand, by Proposition~\ref{prop:conj}, $\gamma$ is conjugate to $k e^{s H}$ for some $s > 0$ and $k \in M$. Then since $\tau(k)$ is elliptic and commutes with $\tau(e^{sH})$, the eigenvalues of $\tau(e^{sH})$ and $\tau(k)\tau(e^{sH}) = \tau(k e^{sH})$ have the same absolute values. So $\tau(e^{sH})$ also has a unique eigenvalue with maximal absolute value. This implies that $\tau(e^{tH})$ is proximal for every $t > 0$.
Since $\overline{\tau}_0(\id) = \id$ has all positive eigenvalues, and the eigenvalue with maximal modulus of $\overline{\tau}_0(e^{tH})$ is real (its complex conjugate would have the same modulus) and varies continuously in $t$, it remains positive for all $t \geq 0$.
\end{proof}
\begin{lemma}\label{lem:2espace} Let $e^\lambda$ denote the eigenvalue of $\overline{\tau}_0\left(e^H\right)$ with maximal modulus. There is some integer $k$ so that the set of eigenvalues of $\overline{\tau}_0\left(e^H\right)$ is \[\{e^{\lambda-n}:0\leq n\leq k\}.\] Furthermore, the eigenspace corresponding to $e^{\lambda-1}$ has dimension $m-1$. \end{lemma}
The proof of Lemma \ref{lem:2espace} is a standard argument from the theory of weight spaces. We give this argument in Appendix \ref{app:lem}. With Lemmas \ref{lem:prox} and \ref{lem:2espace}, we can prove Proposition \ref{thm:hyperbolic}.
\begin{proof}[Proof of Proposition \ref{thm:hyperbolic} ] By Lemmas \ref{lem:prox} and \ref{lem:2espace}, the eigenvalues of $\overline{\tau}_0(e^{sH})$ are \begin{align*} e^{\lambda s}, e^{(\lambda-1)s}, \dots, e^{(\lambda-1)s}, e^{(\lambda-2)s}, \dots, \end{align*} and the multiplicity of $e^{(\lambda-1)s}$ is $m-1$. In particular, \begin{align*} \frac{ \mu_{m}}{ \mu_{m+1}}(\tau(e^{sH})) = e^{s}. \end{align*} Also, the group $\tau(K)\subset\PGL_d(\Rb)$ is compact, so it lifts to a compact subgroup $\wh{K} \subset\SL_d^{\pm}(\Rb)$. Hence, there exists some $C > 1$ such that \begin{align*} \frac{1}{C} \mu_i(T) \leq \mu_i\Big(k_1 T k_2\Big) \leq C \mu_i(T) \end{align*} for all $1 \leq i \leq d$, all $k_1, k_2 \in \wh{K}$, and all $T \in \End(\Rb^d)$. By Observation~\ref{obs:KAK}, \begin{align*} \log \frac{ \mu_{m}}{ \mu_{m+1}}(\rho(\gamma)) \geq \log \frac{ \mu_{m}}{ \mu_{m+1}}\left(\tau\left(e^{d(\gamma\cdot 0,0)H}\right)\right)-\log (C^2) = d(\gamma \cdot 0, 0)-\log (C^2), \end{align*} which implies that $\rho$ is $m$-Anosov.
Since $\rho$ is the restriction of $\tau$ to $\Gamma$, the $\rho$-equivariant flag maps \[\xi^{(i)} : \partial_\infty\Gamma\simeq\partial \Bb_m \rightarrow \Gr_i(\Rb^d)\] are $\tau$-equivariant for $i=1,d-1,m,d-m$. Further, by the description of $K$ given by \eqref{eqn:K}, we see that $\PO(m,1)$ acts transitively on triples of distinct points $x,y,z \in \partial \Bb_m$. Thus it is enough to show that \begin{align*} \xi^{(1)}(x) + \xi^{(1)}(y) + \xi^{(d-m)}(z) \end{align*} is direct for some $x,y,z \in \partial \Bb_m$ distinct. Fix $y,z \in \partial \Bb_m$ distinct; by transversality of the flag maps, the sum $\xi^{(1)}(y) + \xi^{(d-m)}(z)$ is already direct. Since $\tau$ is irreducible we must have \begin{align*} \Rb^d = \Span \{ \xi^{(1)}(x) : x \in \partial \Bb_m\}, \end{align*} so there exists some $x \in \partial \Bb_m$ with $\xi^{(1)}(x) \not\subset \xi^{(1)}(y) + \xi^{(d-m)}(z)$, and for such $x$ the sum \begin{align*} \xi^{(1)}(x) + \xi^{(1)}(y) + \xi^{(d-m)}(z) \end{align*} is direct. \end{proof}
\appendix
\section{Proof of Theorem \ref{thm:cones}}
\begin{proof} First notice that $\Cc_\lambda(\Lambda)$ is invariant under conjugation in $\SL_d(\Rb)$, i.e., $\Cc_\lambda(\Lambda)=\Cc_\lambda(g\Lambda g^{-1})$ for all $g\in \SL_d(\Rb)$. Further, if $h \in \SL_d(\Rb)$, then from the geometric description of the Cartan projection given in Section~\ref{sec:properties}, there exists some $C > 0$ such that \begin{align*} \norm{\mu(g) - \mu(hgh^{-1}) }_2 \leq C \end{align*} for all $g \in \SL_d(\Rb)$. Hence $\Cc_\mu(\Lambda)$ is also invariant under conjugation in $\SL_d(\Rb)$.
Let $\sL_d(\Rb) = \kL + \pL$ denote the standard Cartan decomposition of $\sL_d(\Rb)$, that is \begin{align*} \kL = \{ X \in \sL_d(\Rb) : {^tX} = -X\}\,\,\,\text{ and }\,\,\,\pL = \{ X \in \sL_d(\Rb) : {^tX} = X\}. \end{align*} Let $\gL$ denote the Lie algebra of $G$. Using Theorem 7 in~\cite{M1955} and conjugating $G$ we may assume that \begin{align*} \gL = \kL \cap \gL + \pL \cap \gL \end{align*} is a Cartan decomposition of $\gL$. Fix a maximal abelian subspace $\aL \subset \pL \cap \gL$. By~\cite[Chapter V, Lemma 6.3]{H2001}, there exists some $k \in \SO(d)$ such that $\Ad(k) \aL$ is a subspace of the diagonal matrices in $\sL_d(\Rb)$. Since $\Ad(k) \pL = \pL$, by replacing $G$ with $kGk^{-1}$ we can assume that $\aL$ is itself a subspace of the diagonal matrices. Finally fix a Weyl chamber $\aL^+$ of $\aL$.
Next let $K \subset G$ denote the subgroup corresponding to $\kL \cap \gL$, let $A = \exp(\aL)$, and let $A^+ = \exp(\aL^+)$. By~\cite[Chapter IX, Theorem 1.1]{H2001}, each $g \in G$ can be written as \begin{align*} g = k_1 \exp( \mu_G(g) ) k_2 \end{align*} where $k_1, k_2 \in K$ and $\mu_G(g) \in \overline{\aL^+}$ is unique. The map $\mu_G : G \rightarrow \overline{\aL^+}$ is called the \emph{Cartan projection of $G$ relative to the decomposition $G = K \overline{A}^+ K$.} Since $K \subset \SO(d)$ and $\aL$ is a subspace of the diagonal matrices, the diagonal entries of $\mu_G(g)$ coincide with the entries of $\mu(g)$ up to permuting indices.
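For $G = \SL_d(\Rb)$ itself, the decomposition $g = k_1 \exp(\mu_G(g)) k_2$ is simply the singular value decomposition, with $\mu(g)$ the vector of logarithms of the singular values. As an informal numerical aside (using NumPy's SVD; the sign and determinant normalization is our own bookkeeping):

```python
import numpy as np

rng = np.random.default_rng(1)
g = rng.standard_normal((3, 3))
if np.linalg.det(g) < 0:
    g = -g                                   # odd dimension: flipping the sign makes det > 0
g /= np.linalg.det(g) ** (1.0 / 3.0)         # normalize into SL(3, R)

k1, s, k2 = np.linalg.svd(g)                 # g = k1 diag(s) k2, with k1, k2 orthogonal
mu = np.log(s)                               # the Cartan projection mu(g)

assert np.allclose(k1 @ np.diag(s) @ k2, g)  # the K exp(a^+) K decomposition
assert np.all(np.diff(mu) <= 1e-12)          # ordered: mu_1 >= mu_2 >= mu_3
assert abs(mu.sum()) < 1e-9                  # sum of entries is 0 since |det(g)| = 1
```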
Every $g \in G$ can be written as a product $g=g_e g_h g_u$ of commuting elements, where $g_e$ is elliptic, $g_h$ is hyperbolic, and $g_u$ is unipotent. This is called the \emph{Jordan decomposition of $g$ in $G$}. The element $g_h$ is conjugate to a unique element $\exp(\lambda_G(g)) \in \overline{A^+}$ and the map $\lambda_G : G \rightarrow \overline{\aL^+}$ is called the \emph{Jordan projection}. Since $G$ is an irreducible real algebraic subgroup of $\SL_d(\Rb)$, the Jordan decomposition in $G$ coincides with the standard Jordan decomposition in $\SL_d(\Rb)$. Then, since $\aL$ is a subspace of the diagonal matrices, the diagonal entries of $\lambda_G(g)$ coincide with the entries of $\lambda(g)$ up to permuting indices.
Next define cones $\Cc_1, \Cc_2 \subset \overline{\aL^+}$ as follows: \begin{align*} \Cc_1 := \overline{\bigcup_{\gamma \in \Gamma} \Rb_{>0} \cdot \lambda_G(\gamma)} \end{align*} and \begin{align*} \Cc_2 := \{ x \in \overline{\aL^+} : \exists \gamma_n \in \Gamma, \exists t_n \searrow 0, \text{ with } \lim_{n \rightarrow \infty} t_n \mu_G(\gamma_n) =x\}. \end{align*} Then the main result in~\cite{B1997} says that $\Cc_1 = \Cc_2$. Since $\mu_G(g)$ and $\mu(g)$ (respectively $\lambda_G(g)$ and $\lambda(g)$) coincide up to permuting indices, this implies that $\Cc_\mu(\Lambda) = \Cc_\lambda(\Lambda)$. \end{proof}
\section{Proof of Lemma \ref{lem:2espace}\label{app:lem} }
Let $\mathfrak{so}(m,1)$ denote the Lie algebra of $\PO(m,1)$, and let $e_1,\dots, e_{m+1}$ be the standard basis of $\Rb^{m+1}$. By fixing the signature $(m,1)$-form on $\Rb^{m+1}$ that is represented in this basis by the matrix \begin{align*} \begin{bmatrix} 1& 0 & \cdots & 0 & 0 \\ 0 & 1 & & & 0\\ \vdots & & \ddots & & \vdots\\ 0 & & &1 & 0\\ 0 & 0 & \cdots & 0 & -1 \end{bmatrix}, \end{align*} one can compute that
\begin{align*}
\mathfrak{so}(m,1) = \left\{ \begin{bmatrix} A & u \\ {^tu} & 0 \end{bmatrix} : {^tA}=-A \right\}.
\end{align*} Define the following subspaces of $\mathfrak{so}(m,1)$:
\begin{align*}
\aL & = \left\{ \begin{bmatrix} 0 & \lambda e_1 \\ \lambda {^te_1} & 0 \end{bmatrix} : \lambda \in \Rb \right\}, \\
\gL_0 & = \left\{ \begin{bmatrix} A & \lambda e_1 \\ \lambda {^te_1} & 0 \end{bmatrix} : {^tA}=-A, \quad Ae_1 =0, \text{ and } \lambda \in \Rb \right\}, \\ \gL_{-1} & = \left\{ \begin{bmatrix} u{^te_1} - e_1 {^tu} & u \\ {^tu} & 0 \end{bmatrix} : \ip{u,e_1} =0\right\}, \text{ and}\\ \gL_{1} & = \left\{ \begin{bmatrix} -u{^te_1} + e_1 {^tu} & u \\ {^tu} & 0 \end{bmatrix} : \ip{u,e_1} =0\right\}.
\end{align*} Then $\mathfrak a\subset\mathfrak g_0$ is a maximal abelian subalgebra, and the decomposition \[\mathfrak{so}(m,1) = \gL_0 + \gL_{-1} + \gL_{1}\] is the associated (restricted) \emph{root space decomposition} of $\mathfrak{so}(m,1)$.
Recall that \[H:= \begin{bmatrix} 0 & e_1 \\ {^t e_1} & 0 \end{bmatrix}\in\mathfrak{so}(m,1).\] The following lemma states some basic properties of the root space decomposition \cite[Chapter II.1]{knapp}, and can be verified explicitly in this special case.
\begin{lemma}\label{obs:rootspace}\ \begin{enumerate} \item Let $\sigma\in\{0,1,-1\}$, and $Y \in \gL_\sigma$. Then \[[H,Y]=\sigma Y\,\,\,\text{ and }\,\,\, \Ad \left( e^{s H} \right) Y= e^{\sigma s} Y.\] \item Let $\alpha, \beta \in \{0,-1,1\}$. Then $[\gL_\alpha, \gL_\beta] \subset \gL_{\alpha+\beta}$, where $\gL_{-2}:=\{0\}=:\gL_2$. \end{enumerate} \end{lemma}
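As an informal numerical aside (for $m = 3$; the basis construction below is ours, not part of the text), the decomposition can be confirmed by computing the spectrum of $\ad(H)$ on $\mathfrak{so}(3,1)$: it should consist of the eigenvalue $0$ with multiplicity $1 + \dim \mathfrak{so}(2) = 2$ and the eigenvalues $\pm 1$ with multiplicity $m - 1 = 2$ each.

```python
import itertools

import numpy as np

m = 3
# Basis of so(m,1): block matrices [[A, u], [u^T, 0]] with A antisymmetric.
basis = []
for i, j in itertools.combinations(range(m), 2):   # antisymmetric upper-left block
    M = np.zeros((m + 1, m + 1))
    M[i, j], M[j, i] = 1.0, -1.0
    basis.append(M)
for i in range(m):                                 # symmetric off-block part
    M = np.zeros((m + 1, m + 1))
    M[i, m] = M[m, i] = 1.0
    basis.append(M)

H = np.zeros((m + 1, m + 1))
H[0, m] = H[m, 0] = 1.0                            # the element [[0, e1], [e1^T, 0]]

def coords(M):
    # coordinates of M in the basis above (orthogonal w.r.t. the trace form)
    return np.array([np.tensordot(M, B, 2) / np.tensordot(B, B, 2) for B in basis])

adH = np.column_stack([coords(H @ B - B @ H) for B in basis])
eigs = np.round(np.linalg.eigvals(adH).real, 8)
assert sorted(eigs.tolist()) == [-1.0, -1.0, 0.0, 0.0, 1.0, 1.0]
```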
Next, suppose that $\tau:\PO(m,1)\to\PGL_d(\Rb)$ is an irreducible representation so that $\tau(e^{H})$ is proximal and $\overline{\tau}_0 : e^{\Rb \cdot H} \rightarrow \SL_d(\Rb)$ is the lift of $\tau_0:=\tau|_{e^{\Rb \cdot H}}$.
Let $\mathfrak{sl}_d(\Rb)$ denote the Lie algebra of $\PGL_d(\Rb)$ and let $d\tau:\mathfrak{so}(m,1)\to\mathfrak{sl}_d(\Rb)$ be the derivative at the identity of the homomorphism $\tau:\PO(m,1)\to\PGL_d(\Rb)$. The next lemma gives a description of the eigenvalues and eigenspaces of $\overline{\tau}_0(e^{H})$.
\begin{lemma}\label{lem:weights} Let $e^\lambda$ denote the largest eigenvalue of $\overline{\tau}_0(e^{H})$ and let $V_0 \subset \Rb^d$ denote the eigenspace of $\overline{\tau}_0(e^{H})$ corresponding to $e^\lambda$. For $n \in \Nb$, define \begin{align*} V_{n+1} := d\tau(\gL_{-1}) V_{n}. \end{align*} Then: \begin{enumerate} \item If $v \in V_n$, then $\overline{\tau}_0\left(e^{H}\right)v = e^{\lambda -n}v$. \item If $Z \in \gL_0$, then $d\tau(Z)V_n \subset V_n$. \item If $Z \in \gL_{1}$, then $d\tau(Z)V_0 = \{0\}$ and $d\tau(Z) V_n \subset V_{n-1}$ when $n>0$. \item $\sum_{n \geq 0} V_n = \Rb^d$. \end{enumerate} \end{lemma}
\begin{proof} (1): By definition $v = d\tau(Y)w$ for some $Y \in \gL_{-1}$ and $w \in V_{n-1}$. Then by induction \begin{align*} \overline{\tau}_0\left(e^{H}\right)d\tau(Y)w &= d\tau( \Ad(e^{H})Y )\overline{\tau}_0\left(e^{H}\right)w \\ &= d\tau( e^{-1}Y )\left(e^{\lambda-(n-1)}w\right) \\ & = e^{\lambda-n} d\tau( Y )w, \end{align*} where the second equality is a consequence of (1) of Lemma \ref{obs:rootspace}.
(2): Fix some $v \in V_n$. Then by definition $v = d\tau(Y)w$ for some $Y \in \gL_{-1}$ and $w \in V_{n-1}$. Then $[Z,Y] \in \gL_{-1}$ by (2) of Lemma \ref{obs:rootspace}, so \begin{align*} d\tau(Z)d\tau(Y)w = d\tau([Z,Y])w - d\tau(Y)d\tau(Z)w \in V_n \end{align*} by induction.
(3): If $v_0 \in V_0$, then \begin{align*} \overline{\tau}_0\left(e^H\right)d\tau(Z)v_0 &= d\tau( \Ad(e^H)Z )\overline{\tau}_0\left(e^H\right)v_0 \\ & = e^{\lambda+1} d\tau( Z )v_0. \end{align*} Since $e^\lambda$ is the largest eigenvalue of $\overline{\tau}_0(e^H)$ we must have $d\tau(Z)v_0=0$. Since $v_0 \in V_0$ was arbitrary, we then have $d\tau(Z)V_0 = \{0\}$.
Next fix some $v \in V_n$. Then by definition $v = d\tau(Y)w$ for some $Y \in \gL_{-1}$ and $w \in V_{n-1}$. Then $[Z,Y] \in \gL_{0}$ by (2) of Lemma \ref{obs:rootspace}, so \begin{align*} d\tau(Z)d\tau(Y)w = d\tau([Z,Y])w - d\tau(Y)d\tau(Z)w \in V_{n-1} \end{align*} by (2) and induction.
(4): The previous parts show that $\sum_{n \geq 0} V_n$ is a $d\tau$-invariant and hence $\tau$-invariant subspace. Since $\tau$ is irreducible, we then have $\sum_{n \geq 0} V_n=\Rb^d$. \end{proof}
\begin{proof}[Proof of Lemma \ref{lem:2espace}] The first statement of the lemma is an immediate consequence of Lemma \ref{lem:prox} and \ref{lem:weights}. To prove the second statement, fix some non-zero $v_0 \in V_0$, and consider the linear map $T: \gL_{-1} \rightarrow V_1$ given by \begin{align*} T(Y) = d\tau(Y)v_0. \end{align*} Since $T$ is onto and $\dim_{\Rb} \gL_{-1} = m-1$, we see that $\dim_{\Rb} V_1 \leq m-1$. It is now sufficient to prove that $\ker T = \{0\}$. To see this, again let \begin{align*}
M := \left\{ k \in K: k \cdot e_1 = e_1\right\}.
\end{align*} Then a calculation shows that $\Ad(M)$ preserves and acts irreducibly on $\gL_{-1}$. Notice that $\tau(M)$ commutes with $\overline{\tau}_0\left(e^H\right)$ up to scalars and hence preserves the line $V_0$, so for each $k \in M$ we have $\tau(k)v_0 = c(k)v_0$ for some non-zero scalar $c(k)$. Further, if $Y \in \gL_{-1}$ and $k \in M$, then \begin{align*} T(\Ad(k)Y) = d\tau\left(\Ad(k)Y\right)v_0 = \tau(k)d\tau(Y) \tau(k^{-1}) v_0 = c(k)^{-1}\tau(k)T(Y). \end{align*} So $\ker T$ is an $\Ad(M)$-invariant subspace, and therefore either $\ker T = \{0\}$ or $\ker T = \gL_{-1}$. If $\ker T = \gL_{-1}$, then $V_1 = \{0\}$, hence $V_n = \{0\}$ for all $n \geq 1$, and part (4) of Lemma~\ref{lem:weights} would give $V_0 = \Rb^d$; since $\dim_\Rb V_0 = 1$ and $d > 1$, this is impossible. So $\ker T =\{0\}$ and hence $\dim_\Rb V_1 = m-1$. \end{proof}
\end{document}
"id": "1903.11021.tex",
"language_detection_score": 0.5730786323547363,
"char_num_after_normalized": null,
"contain_at_least_two_stop_words": null,
"ellipsis_line_ratio": null,
"idx": null,
"lines_start_with_bullet_point_ratio": null,
"mean_length_of_alpha_words": null,
"non_alphabetical_char_ratio": null,
"path": null,
"symbols_to_words_ratio": null,
"uppercase_word_ratio": null,
"type": null,
"book_name": null,
"mimetype": null,
"page_index": null,
"page_path": null,
"page_title": null
} | arXiv/math_arXiv_v0.2.jsonl | null | null |
\begin{document}
\title{Quantum clocks and their synchronisation --- the Alternate Ticks Game} \author{Sandra Rankovi\'{c}} \affiliation {Institute for Theoretical Physics, ETH Zurich, 8093 Zurich, Switzerland} \author{Yeong-Cherng Liang} \affiliation{Department of Physics, National Cheng Kung University, Tainan 701, Taiwan} \affiliation {Institute for Theoretical Physics, ETH Zurich, 8093 Zurich, Switzerland} \author{Renato Renner} \affiliation {Institute for Theoretical Physics, ETH Zurich, 8093 Zurich, Switzerland}
\begin{abstract} Time plays a crucial role in the intuitive understanding of the world around us. Within quantum mechanics, however, time is not usually treated as an observable quantity; it enters merely as a parameter in the laws of motion of physical systems. Here we take an operational approach to time. Towards this goal we consider quantum clocks, i.e., quantum systems that generate an observable time scale. We then study the quality of quantum clocks in terms of their ability to stay synchronised. To quantify this, we introduce the ``Alternate Ticks Game'' and analyse a few strategies pertinent to this game. \end{abstract}
\maketitle
\epigraph{``He who made eternity out of years remains beyond our reach. His ways remain inscrutable because He not only plays dice with matter but also with time."}{Karel V. Kucha\v{r}~\cite{Kuchar}}
\section{Introduction}
{\it Time} is a central concept used for the description of the world around us. We perceive it intuitively and we can measure it to very good precision \cite{Wineland, Haroche, AtomicClocks}. Nevertheless, it remains one of the biggest unknowns of modern physics~\cite{EinsteinClock, Heisenberg, PauliTwo, DeWitt, Peres, Busch, Maccone, Kuchar}. In quantum mechanics, time is not considered an observable, but plays merely a parametric role in the equation of motion. In fact, it cannot be treated like other observables, as there cannot exist a self-adjoint time operator that is canonically conjugate to a Hamiltonian having a semi-bounded spectrum~\cite{PauliTwo}. There have been various attempts, though, to establish an understanding of time that goes beyond this parametric view~\cite{AharonovBohm,PageWootters, Wootters, Miyake, Vlatko, LLoydQuantumTime,ThermodynamicsOfTime}, e.g., by considering time as arising from correlations between physical systems. In this work we intend to take further steps in this direction.
Our approach is operational, in the sense that we study clocks, i.e., physical systems that provide time information. This is motivated by earlier work~\cite{TimeBook,Wootters}, where it has also been suggested to distinguish between {\it coordinate time} and {\it clock time}. While the first refers to a parameter used within a physical theory, the latter is an observable quantity. Here we are interested in the latter. Specifically, we view clock time as the observable output of a quantum system, hereafter referred to as a {\it quantum clock}.
Clearly, it is desirable to have a notion of quantum clocks that agrees with our usual perception of classical clocks in a certain limit. Yet, for our operational approach towards time, we shall assume only some basic features that a clock must exhibit. Importantly, a clock should generate a clock time that can be used to order events. This feature will play a central role in our treatment and will be captured below by our definition of a {\it time scale}.
Within Newtonian theory, there do not seem to be any fundamental limitations to the accuracy to which clocks can stay synchronised (see e.g.,~\cite{Wineland,AtomicClocks}). This gives rise to a {\it global} notion of time. A notion of global time is also assumed in the usual treatment of quantum mechanics. However, the corresponding global structures have various non-trivial features, which are a topic of ongoing research (see, e.g.,~\cite{Oreshkov, Bancal:2012aa, Brukner, Curchod:2014, Oreshkov2014, Amin}). Furthermore, a global notion of time is generally not achievable in relativistic theories~\cite{DeWitt, UnruhTime, Isham, Anderson, Kuchar, PhilosophyTime}. In our operational approach, we will thus treat quantum clocks as {\it local} physical systems, governed by the laws of quantum mechanics. This raises the question of {\it synchronisation}. In operational terms, we are interested in whether two clocks that are separated, so that no communication between them is possible, can order events consistently. If this is the case, we say that the time scales generated by the two clocks are compatible. We note that this question relates to the general idea of reference frames in quantum theory~\cite{Spekkens} and, in particular, the question of how well such reference frames can be correlated.
To quantify the level of synchronisation between clocks, we consider a particular scenario, which we term the {\it Alternate Ticks Game}. In this game, two players, each equipped with a quantum clock, are asked to send {\it tick signals} in alternating order to a referee. The number of tick signals that they can generate in the correct order is then taken as a measure of how well the clocks are synchronised and, hence, of the quality of these clocks.
Quantum clocks should, by definition, produce a measurable output --- the clock time. As any measurement of a quantum system unavoidably disturbs its state, one should expect fundamental limitations to the precision of quantum clocks, and hence the level of synchronisation attainable between them. It is one of the major motivations of the present work to understand these limitations.
This paper is structured as follows. In Section~\ref{sec:Time}, we introduce our definition of a {\it quantum clock} and the associated {\it time scale}. Section~\ref{sec:Cont} is concerned with a {\it continuity} condition, which ensures that quantum clocks can be regarded as self-contained devices, in particular that they do not implicitly rely on a time-dependent control mechanism. Then, in Section~\ref{sec:Game}, we consider the synchronisation of clocks and introduce the {\it Alternate Ticks Game}, which serves as a method to investigate the quality of quantum clocks. The numerical results of some possible strategies pertinent to this game are presented in Section~\ref{sec:Results}. In our conclusions in Section~\ref{sec:Conclusion} we discuss possible future research directions that are spurred by this work.
\section{Quantum clocks and time scales}
\label{sec:Time}
The aim of this section is to motivate and define our notion of a {\it quantum clock} (Definition~\ref{Def1}) and the derived concept of a {\it time scale} (Definition~\ref{Def2}). Roughly speaking, a quantum clock is a quantum physical system equipped with a mechanism that generates time information. We model this mechanism as a process involving the following two systems (see Fig.~\ref{fig:Fig1}). \begin{itemize} \item {\it Clockwork (denoted $C$):} This is the dynamical part of the clock whose state evolves with respect to the coordinate time. In addition, it interacts with its environment, thereby outputting time information.
\item {\it Tick registers (denoted $T_1, T_2, \ldots $):} These belong to the environment of the clockwork. Each tick register $T_i$ is briefly in contact with the clockwork and records the output of the latter. After this interaction, it is separated from the clockwork, keeping a record of the clock time. \end{itemize}
The evolution of a physical clock appears to be continuous. However, as the notion of continuity implicitly refers to some underlying time parameter, we model the evolution of a quantum clock more generally as a sequence of discrete steps. Continuity may then be approximated by imposing an additional condition (see Definition~\ref{Def3} below). Each of the discrete evolution steps consists of the interaction between the clockwork $C$ and a fresh tick register $T_i$. Crucially, all steps are governed by the same dynamics, specified by a map $\cM$. Accordingly, all tick registers $T_i$ are taken to be isomorphic to a virtual system~$T$. Furthermore, we specify the initial state of the clockwork $C$ by a density operator, denoted by $\rho^0_C$. This idea is captured by the following definition.
\begin{mydef} \label{Def1}
A {\it quantum clock} is defined by a pair $(\rho^0_C, \cM_{C \to CT})$, where $\rho^0_C$ is a density operator of a system $C$, called {\it clockwork}, and $\cM_{C \to C T}$ is a completely positive trace-preserving (CPTP) map from $C$ to a composite system $C \otimes T$. \end{mydef}
The clock time produced by a clock gives rise to a {\it time scale}. Since, in our model, the clock time is recorded by the tick registers, a time scale corresponds to the cumulative content of these registers (see Fig.~\ref{fig:Fig2}). Formally, this is defined as follows.
\begin{mydef} \label{Def2}
Let $(\rho^0_C, \cM_{C \to CT})$ be a quantum clock and let $T_1, T_2, \ldots,$ be a sequence of systems isomorphic to $T$, called {\it tick registers}. Then the {\it time scale} generated by the quantum clock is the sequence $\{\rho^N_{T_1 \cdots T_N}\}_{N \in \mathbb{N}}$ of density operators on $T_1 \otimes \cdots \otimes T_N$ defined by the partial trace over $C$ of
\begin{align} \label{eq_rhoNdef}
\rho^N_{C T_1 \cdots T_N} = \bigcirc_{j = 1}^N \cM_{C \to C T_j}(\rho^0_C) \ ,
\end{align}
where $\cM_{C \to C T_j}$ denotes the completely positive map that acts like $\cM_{C \to C T}$ from $C$ to $C \otimes T_j$, while leaving $T_1, \ldots, T_{j-1}$ unchanged.
\end{mydef}
It is easy to verify that for any $N' > N$ we have
\begin{align}
\rho^N_{T_1 \cdots T_N} = \tr_{T_{N+1} \cdots T_{N'}}(\rho^{N'}_{T_1 \cdots T_{N'}}) \ .
\end{align}
The sequence $\{\rho^N_{T_1 \cdots T_N}\}_{N \in \mathbb{N}}$ of density operators is thus just a way to specify the state of the system\footnote{Note that the tensor product of infinitely many finite Hilbert spaces is not necessarily a separable Hilbert space.} $T_1 \otimes T_2 {\otimes \cdots}$ formed by all (infinitely many) tick registers after infinitely many invocations of the map~$\cM$.
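To make Definition~\ref{Def2} concrete, the time scale of a small clockwork can be computed by enumerating measurement branches; when the tick registers are written by a projective measurement, the state $\rho^N_{T_1 \cdots T_N}$ is a classical distribution over tick strings. The sketch below is an illustration only, for a hypothetical qubit clock (a small rotation as $\cM^{\mathrm{int}}$, a computational-basis measurement as $\cM^{\mathrm{meas}}$):

```python
import numpy as np

# Hypothetical qubit clock (illustration only): the internal map is a rotation
# by theta, the measurement is projective in the computational basis.
theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
P = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]

def time_scale(rho0, N):
    """Distribution over tick strings (t_1, ..., t_N) in the sense of Definition 2."""
    branches = [((), rho0, 1.0)]            # (tick string, conditional state of C, prob)
    for _ in range(N):
        new = []
        for ticks, rho, p in branches:
            rho = U @ rho @ U.T             # internal dynamics M_int
            for t, pi in enumerate(P):      # measurement branches of M_meas
                post = pi @ rho @ pi        # sqrt(pi) = pi for projectors
                pt = np.trace(post).real
                if pt > 1e-12:
                    new.append((ticks + (t,), post / pt, p * pt))
        branches = new
    return {ticks: p for ticks, _, p in branches}

dist = time_scale(np.diag([1.0, 0.0]), 3)
assert abs(sum(dist.values()) - 1.0) < 1e-9   # the channel is trace preserving
```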
\begin{figure}
\caption{{\it Model of a quantum clock.} A quantum clock consists of a clockwork $C$ that interacts with tick registers $T_1, T_2, \ldots$. These registers are propagated away from $C$ after their interaction, keeping a record of the clock time. }
\label{fig:Fig1}
\end{figure}
Realistic clocks consist of various physical components, such as a power source to drive the clock. Our definition of a quantum clock, i.e., the pair $(\rho^0_C, \cM_{C \to C T})$, does not specify these explicitly. It rather provides an abstract description of how the clock generates time information. While such an abstract model is suitable for our considerations, one should keep in mind that it is quite generic. That is, any physically realisable clock admits an abstract description in terms of a pair $(\rho^0_C, \cM_{C \to CT})$, but not any such pair corresponds to a physically realisable quantum clock. Indeed, in Section~\ref{sec:Cont} we will introduce an additional condition that accounts for certain physical constraints.
Our definition of a quantum clock also does not explicitly require certain features that may be expected from a clock --- constant time intervals, cyclicity, and unidirectional evolution. These turn out to be unnecessary for the operational task of synchronising quantum clocks. Take, as an example, the notion of constant time intervals. Intuitively, this seems to be necessary to determine the duration of events in some well-defined units (e.g., seconds). But if the duration of a second were to change every time the clock ticks (i.e., if it were not constant), and if {\it all} clocks performed these adjustments automatically without us noticing, then such a change would not lead to any observable consequences.
In the following we will often consider particular constructions of quantum clocks where the map $\cM$ can be written as the concatenation of two CPTP maps, \begin{align} \label{eq_typicalclock}
\cM_{C \to C T} = \cM^{\mathrm{meas}}_{C \to C T} \circ \cM^{\mathrm{int}}_{C \to C} \ . \end{align} $\cM^{\mathrm{int}}$ acts only on the clockwork $C$ and $\cM^{\mathrm{meas}}$ corresponds to a measurement on $C$ of the form \begin{align} \label{eq_meas}
\cM^{\mathrm{meas}}_{C \to C T} : \, \rho_C \mapsto \sum_{t \in \cT} \sqrt{\pi_t} \rho_C \sqrt{\pi_t} \otimes \proj{t}_T \ , \end{align} where $\{\pi_t\}_{t \in \cT}$ is a Positive-Operator Valued Measure (POVM)\footnote{A POVM is a family of positive semidefinite operators $\pi_t$ such that $\sum_t \pi_t = \mathrm{id}$, where $\mathrm{id}$ denotes the identity operator.} on $C$ and where $\{\ket{t}\}_{t \in \cT}$ is a family of orthonormal vectors on the Hilbert space of $T$. $\cM^{\mathrm{int}}$ can be interpreted as the internal mechanism that drives the clockwork, whereas $\cM^{\mathrm{meas}}$ extracts time information and copies it to the tick registers.
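The measurement map~\eqref{eq_meas} is easy to realise numerically for an arbitrary POVM. The following Python/NumPy sketch (ours, not from the text) implements it using a matrix square root and illustrates that it is trace-preserving and maps states to states.

```python
import numpy as np

def sqrtm_psd(P):
    """Square root of a positive semidefinite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(P)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T

def meas_map(rho_C, povm):
    """M^meas: rho_C -> sum_t sqrt(pi_t) rho_C sqrt(pi_t) (x) |t><t|_T."""
    d, nT = rho_C.shape[0], len(povm)
    out = np.zeros((d * nT, d * nT), dtype=complex)
    for t, pi_t in enumerate(povm):
        s = sqrtm_psd(pi_t)                 # s is Hermitian, so s rho s = s rho s^dag
        ket = np.zeros((nT, 1)); ket[t] = 1.0
        out += np.kron(s @ rho_C @ s, ket @ ket.T)
    return out
```

Since $\sum_t \pi_t = \mathrm{id}$, the trace of the output equals the trace of the input for any POVM, which the test below checks for a non-diagonal qubit POVM.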
For a simple example of a clock of the form~\eqref{eq_typicalclock}, assume that the clockwork $C$ is binary (i.e., described by the Hilbert space $\mathbb{C}^2$), with orthonormal states $\ket{0}$ and $\ket{1}$, and that the map $\cM^{\mathrm{int}}$ flips the state on $C$, corresponding to a logical NOT operation. Furthermore, $\cM^{\mathrm{meas}}$ could be a measurement defined by the projectors $\pi_0 = \proj{0}$ and $\pi_1 = \proj{1}$. Then, with $C$ initially set to $\ket{0}$, the resulting time scale $\{\rho_{T_1 \cdots T_N}^N\}_{N}$ would consist of operators of the form
\begin{align*}
\rho_{T_1 \cdots T_N}^N = \proj{1} \otimes \proj{0} \otimes \proj{1} \otimes \proj{0} \otimes \proj{1} \otimes \cdots .
\end{align*}
Because of its deterministic character such a clock would be perfectly precise, in the sense that two such clocks could stay synchronised infinitely long. In particular, they would perform arbitrarily well in the Alternate Ticks Game defined in Section~\ref{sec:Game}.
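The deterministic time scale above can be reproduced with a few lines of Python (a trivial classical simulation; since all states involved are mutually orthogonal, the quantum description adds nothing here):

```python
def not_clock_timescale(n_steps):
    """Deterministic NOT clock: flip the bit, then measure it; the measured
    values fill the tick registers T_1, ..., T_N."""
    c, ticks = 0, []        # clockwork starts in |0>
    for _ in range(n_steps):
        c ^= 1              # M^int: logical NOT
        ticks.append(c)     # M^meas: projective measurement (deterministic here)
    return ticks
```

The output `[1, 0, 1, 0, ...]` matches the alternating pattern of the time scale displayed above.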
\begin{figure}
\caption{{\it Tick registers and the time scale.} Intuitively, one may think of the tick registers $T_i$ as propagating through space (coordinate~$x$), starting from the location of the clockwork $C$, in terms of coordinate time (denoted by~$t$). An observer located at position $x_k$ would observe them in sequential order (with respect to~$t$). The content of the sequence of tick registers defines a time scale. }
\label{fig:Fig2}
\end{figure}
\section{Continuous quantum clocks} \label{sec:Cont}
It appears to be easy to physically realise the simple clock described in the previous section --- one merely needs to implement a NOT gate and a measurement. However, it is unclear whether these operations can be carried out without the help of an additional {\it time-dependent} control mechanism, i.e., one that turns on and off certain interactions at well-defined points in coordinate time. And if such a control mechanism is necessary, the clock can of course no longer be regarded as a self-contained device. This motivates an additional assumption, namely that the clockwork evolves continuously. As our model is inherently discrete, we formalise this by the requirement that the individual steps of the evolution can be made arbitrarily small.
\begin{mydef}
\label{Def3}
A quantum clock $(\rho^0_C, \cM_{C \to C T})$ is called {\it $\epsilon$-continuous} if the map $\cM_{C \to C T}$ restricted to $C$ is $\epsilon$-close to the identity map $\cI_C$, i.e.,
\begin{align}
\| \tr_{T} \circ \cM_{C \to C T} - \cI_C \|_{\diamond} \leq \epsilon \ ,
\end{align}
where $\| \cdot \|_{\diamond}$ is the diamond norm.
\end{mydef}
$\epsilon$-continuity, for arbitrarily small values~$\epsilon$, is a necessary condition for a clock to be self-contained.\footnote{That a clock is self-contained means that any possible control mechanism is regarded as part of it. In this case the Hamiltonian of the system is time-independent. Considering small steps in coordinate time, such a clock can always be modelled as an $\epsilon$-continuous quantum clock, for any $\epsilon> 0$.} Furthermore, combined with our requirement that all evolution steps are identical, it ensures that the evolution of the state of the clockwork does not have an implicit time dependence. In particular, it does not depend on the timing of the interaction between the clockwork and the individual tick registers. To see this, consider a gear system that, in each step, provides a fresh tick register~$T$ in a fixed state $\sigma_T$ and then lets the joint state of the clockwork~$C$ and the tick register~$T$ evolve according to some map $\cU = \cU_{C T}$. This corresponds to a quantum clock defined by the map \begin{align} \label{eq:Umap}
\cM_{C \to CT} : \, \rho_C \mapsto \cU(\rho_C \otimes \sigma_T) \ . \end{align} In particular, if the initial state of the clockwork~$C$ is $\rho_C^0$ then the states after the first and the second step are \begin{align*}
\rho^1_{C} & = (\tr_T \circ\, \cU) (\rho^0_C \otimes \sigma_T) \\
\rho^2_{C} & = (\tr_T \circ\, \cU)(\rho^1_C \otimes \sigma_T) \ . \end{align*} This may now be compared to a situation where the gear system fails to deliver a fresh tick register between the first and the second execution of $\cU$. In this case the state of the clockwork after the second step would be \begin{align*}
\bar{\rho}^2_{C} = (\tr_T \circ\, \cU \circ \cU)(\rho^0_C \otimes \sigma_T) \ . \end{align*}
For any map $\cM$ that is $\epsilon$-continuous one can choose the corresponding map $\cU$ such that $\cU = \cI + \cD$ with $\| \cD \|_{\diamond} \leq \epsilon$. Inserting this into the above expressions for the states gives \begin{align*}
\rho_C^2 & = \rho^0_C + 2(\tr_T \circ \cD)(\rho^0_C \otimes \sigma_T) +\delta_C \\
\bar{\rho}_C^2 & = \rho^0_C + 2(\tr_T \circ \cD)(\rho^0_C \otimes \sigma_T) + \bar{\delta}_C \end{align*} with \begin{align*}
\delta_C & = (\tr_T \circ \cD)\bigl((\tr_T \circ \cD)(\rho^0_C \otimes \sigma_T) \otimes \sigma_T\bigr) \\
\bar{\delta}_C & = (\tr_T \circ \cD \circ \cD)(\rho^0_C \otimes \sigma_T) \ . \end{align*} Using the fact that both the partial trace $\tr_T$ and the tensoring map $X_C \mapsto X_C \otimes \sigma_T$ have diamond norm equal to~$1$, we find \begin{align*}
\| \delta_C \|_1 \leq \| \cD \|_{\diamond}^2 \leq \epsilon^2 \quad \text{and} \quad
\| \bar{\delta}_C \|_1 \leq \| \cD \|_{\diamond}^2 \leq \epsilon^2 \ . \end{align*} From this we obtain a bound on the distance between the two states, \begin{align*}
\| \rho_C^2- \bar{\rho}_C^2 \|_1
= \| \delta_C - \bar{\delta}_C \|_1
\leq \| \delta_C \|_1 + \|\bar{\delta}_C \|_1 \leq 2 \epsilon^2 \ . \end{align*}
Suppose now that the clock runs for $N+1$ steps where, as above, in each step the state of $C \otimes T$ undergoes a mapping $\cU$. Let $\rho^{N+1}_C$ be the state of $C$ under the assumption that a fresh tick register (initialised in state $\sigma_T$) is provided in between any two executions of $\cU$. Let furthermore $\bar{\rho}^{N+1}_C$ be the corresponding state where this condition may fail with probability~$p$ in between any of the steps. Generalising the reasoning above, we find that the distance between the two states is bounded by \begin{align*}
\| \rho_C^{N+1} - \bar{\rho}_C^{N+1} \|_1 \leq 2 N p \epsilon^2 \ . \end{align*} Note that the value $N$ necessary to achieve a certain change of the state of the clockwork scales inverse proportionally to $\epsilon$, i.e., $N = c / \epsilon$ for some constant~$c$ (independent of~$\epsilon$). Inserting this in the above bound we conclude that the distance is of the order \begin{align*}
\| \rho_C^{N+1} - \bar{\rho}_C^{N+1} \|_1 \leq O(\epsilon)\ , \end{align*} i.e., the effect onto the state of the clockwork~$C$ due to failures in the replacement of the tick registers disappears as $\epsilon$ tends to~$0$. In other words, the performance of a continuous quantum clock does not depend on the exact timing of the insertion of the tick registers.
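The quadratic suppression derived above can be checked numerically. The following Python/NumPy sketch (our own; the two-qubit Hamiltonian is an arbitrary random choice, not from the text) compares the two-step states with and without replacement of the tick register, and confirms that their trace distance scales as $\epsilon^2$.

```python
import numpy as np

def trace_norm(X):
    return float(np.sum(np.linalg.svd(X, compute_uv=False)))

def ptrace_T(rho):
    # trace out the second qubit (the tick register) of a two-qubit state
    return np.einsum('ikjk->ij', rho.reshape(2, 2, 2, 2))

def defect(eps, seed=0):
    """|| rho^2_C - bar rho^2_C ||_1 for U = exp(-i eps H) with a fixed
    random two-qubit Hamiltonian H; the clock map is then O(eps)-continuous."""
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    H = (A + A.conj().T) / 2
    w, V = np.linalg.eigh(H)
    U = V @ np.diag(np.exp(-1j * eps * w)) @ V.conj().T
    rho0 = np.diag([1.0, 0.0]).astype(complex)   # clockwork C
    sig = np.diag([1.0, 0.0]).astype(complex)    # fresh tick register T
    step = lambda r: U @ r @ U.conj().T
    rho1 = ptrace_T(step(np.kron(rho0, sig)))
    rho2 = ptrace_T(step(np.kron(rho1, sig)))            # register replaced
    bar2 = ptrace_T(step(step(np.kron(rho0, sig))))      # register not replaced
    return trace_norm(rho2 - bar2)
```

Reducing $\epsilon$ by a factor of $10$ reduces the defect by roughly a factor of $100$, in line with the $O(\epsilon^2)$ bound.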
The clock with the NOT-based clockwork described at the end of Section~\ref{sec:Time} is not $\epsilon$-continuous for any $0 \leq \epsilon < 2$, as in each step the state of $C$ is deterministically changed to an orthogonal one. In fact, constructing a continuous clock is a bit more challenging. One conceivable approach could be to split the NOT operation used in the construction into small identical steps. Specifically, the map $\cM^{\mathrm{int}}$ could be defined as an $n^{\mathrm{th}}$ root of the NOT operation, for some large $n$, corresponding to a rotation in state space by a small angle. For any $\epsilon > 0$ and sufficiently large $n$ this map would be $\epsilon$-close to the identity. However, it is unclear how to choose the measurement $\cM^{\mathrm{meas}}$. In fact, if the described small rotations are applied in between two measurements, the outcome of the second measurement would no longer be deterministic. Consequently, the clock would generate a rather randomised time scale and thus not be very precise.
As we shall see, the price to pay for more precision seems to be an increased size of the clockwork~$C$. To illustrate this, we consider a clockwork that mimics the propagation of a wavepacket. Specifically, let $C$ be equipped with an orthonormal basis $\{\ket{c}\}_{c = 0, \ldots, d-1}$, for $d \in \mathbb{N}$ and define, for any $\bar{c} \,\in\{0, \ldots, d-1\}$ and $\Delta \ll d$, \begin{align}
\Pi_{\bar{c}} = \sum_{c: \, |c - \bar{c}| \leq \Delta} \proj{c} \ . \end{align} Furthermore, assume that the initial state $\rho^0_C$ is contained in the support of the projector $\Pi_{c_0}$ with $c_0 = 0$, i.e., \begin{align}
\tr(\Pi_{c_0} \rho_C^0) = 1 \ . \end{align} Intuitively, one may think of $\rho_C^0$ as a wavepacket of breadth~$\Delta$ localised around $c_0$. Let $\cM^{\mathrm{int}}$ be a map that moves this wavepacket by some fixed distance $\nu \ll 1$, in the sense that the state $\rho_C^N$ of the clockwork after $N$ applications of $\cM^{\mathrm{int}}$ [see~\eqref{eq_rhoNdef}] satisfies \begin{align} \label{eq_regularmovement}
\tr(\Pi_{c_N} \rho_C^N) \approx 1 \quad \text{for $c_N = \lfloor N \nu \rfloor$} \end{align} (provided that $N \nu + \Delta < d-1$). Finally, the measurement $\cM^{\mathrm{meas}}$ [see~\eqref{eq_meas}] could be defined by the POVM elements \begin{align}\label{Eq:Def:pi}
\pi_0 = \mathrm{id}_C - \delta \Pi_{\bar{c}} \quad \text{and} \quad \pi_1 = \delta \Pi_{\bar{c}} \end{align} for $\bar{c}= d-1$ and some $\delta \ll 1$. It is easy to see that, for any $\epsilon > 0$, the clock can be made $\epsilon$-continuous by choosing $\nu$ and $\delta$ sufficiently small.
We note that this construction relies on the assumption that the (mimicked) wavepacket propagates regularly (such that condition~\eqref{eq_regularmovement} is satisfied). This is however only possible if its breadth~$\Delta$ is sufficiently large so that the wavepacket has not too much spread in momentum. Since the dimension~$d$ of $C$ must certainly be larger than $\Delta$, the precision of the proposed construction depends strongly on the size of the clockwork.
To analyse the time scale that the clock generates we first observe that, as long as $N \nu + 2 \Delta < d$, the state of the clockwork $\rho_C^N$ has only negligible overlap with $\pi_1$ and is therefore not disturbed by the measurement $\cM^{\mathrm{meas}}$. During this phase, the clock would output a series of states $\ket{0}$ to the tick registers. Only once $N \nu$ is approximately equal to $d$ will the clock at some point output a state $\ket{1}$. Hence, assuming that $\Delta \ll d$, the first ``tick'' almost never occurs before $N_{\min} = d/\nu$ steps, but is very likely to occur before $N_{\max} = d/\nu + O(1/\delta)$ steps. Since the length of the interval $[N_{\min}, N_{\max}]$ can be made short relative to $N_{\min}$ by choosing $d \gg \nu / \delta$, the clock produces a relatively precise first tick. However, after this first tick, the clock would in each step output $\ket{1}$ with probability~$\delta$, resulting in a rather randomised pattern of ticks.
The construction above could be leveraged to a more useful clock by adapting the measurement $\cM^{\mathrm{meas}}$ such that it resets the state of $C$ to $\rho^0_C$ whenever the output $\ket{1}$ is written to the tick register. Formally, this corresponds to a clock where the CPTP map defined by Eq.~\eqref{eq_meas} is replaced by \begin{equation}\label{eq_meas_reset} \begin{split}
\cM^{\mathrm{meas}}_{C \to C T} : \, \rho_C \mapsto &\sqrt{\pi_0}\, \rho_C\, \sqrt{\pi_0} \otimes \proj{0}_T \\
&+\tr (\pi_1\, \rho_C)\, \hat{\rho}_C \otimes \proj{1}_T \end{split} \end{equation} with $\hat{\rho}_C = \rho^0_C$. The resulting time scale would then consist of almost equally long sequences of states $\ket{0}$ separated by states $\ket{1}$. However, the precision depends crucially on the size $d$ of the clockwork (see Section~\ref{sec:Results} for numerical results). In fact, it appears to be impossible to construct continuous clocks of finite size that are infinitely precise.
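The qualitative behaviour of the reset clock can be illustrated with a classical caricature (our own simplification: the packet position is treated as a deterministic classical variable and the quantum spreading of the wavepacket is ignored). It reproduces the pattern of nearly equal tick intervals of length roughly $d/\nu + 1/\delta$ steps.

```python
import random

def tick_gaps(d=100, nu=1.0, delta=0.05, n_ticks=300, seed=1):
    """Classical caricature of the reset clock: the position advances by nu
    per step; once it reaches the detection window (last tenth of the range)
    a tick fires with probability delta per step, after which the position
    is reset to 0. Returns the list of inter-tick intervals (in steps)."""
    rng = random.Random(seed)
    pos, step, last, gaps = 0.0, 0, 0, []
    while len(gaps) < n_ticks:
        step += 1
        pos += nu
        if pos >= 0.9 * d and rng.random() < delta:
            gaps.append(step - last)
            last, pos = step, 0.0
    return gaps
```

With the default parameters no tick can occur before $0.9\,d/\nu = 90$ steps after a reset, and the mean interval is close to $90 + 1/\delta = 110$ steps, with a geometric spread of order $1/\delta$.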
\section{Synchronisation of clocks --- the Alternate Ticks Game} \label{sec:Game}
So far we have specified what type of physical systems we consider as clocks, but we have not yet provided any criteria to assess their capacity to generate precise time information. As it seems to be impossible to define the accuracy of a clock in an operational manner without comparing it to another one, we consider multiple clocks and ask how well they can stay synchronised.
There are various ways to define {\it synchronisation} between two clocks. In general, synchronisation means that the time scales generated by the two clocks are in some sense compatible. The strongest possible criterion would be that they are identical. However, such a perfect synchronisation would require perfectly precise clocks, which cannot be achieved with continuous clocks of finite size.
In the following, we introduce a quantitative measure for synchronisation. We formulate it in terms of a game, the {\it Alternate Ticks Game}. It involves two collaborating players, $A$ and $B$, each of them equipped with a quantum clock. The players can agree on a common strategy, but are not allowed to communicate once the game has begun. They are asked to provide {\it ticks} to a referee, who checks whether these ticks are received in an alternating order --- first from $A$, then from $B$, then again from $A$, and so on (see Fig.~\ref{fig:Fig3}). The goal of the players is to maximise the number of ticks respecting the posed alternate ticks condition.
\begin{figure}
\caption{{\it Schematic representation of the Alternate Ticks Game.} Two players $A$ and $B$ choose quantum clocks, possibly under certain constraints, e.g., a bound on the dimension of their clockworks, $C_A$ and $C_B$. The clocks are defined by initial states $\rho^{0,A}_C$, $\rho^{0,B}_C$ as well as maps $\mathcal{M}^A_{C\to CT}$, $\mathcal{M}^B_{C\to CT}$,
respectively. Each of them generates a stream of tick registers, denoted by $T^A_{j}$ and $T^B_{j}$, whose cumulative contents are denoted by $\rho^N_A$ and $\rho^N_B$, respectively. The tick registers are sent to a referee~$R$, who checks the alternate ticks condition.}
\label{fig:Fig3}
\end{figure}
Formally, the strategy of the players is defined by their respective choice of a quantum clock, $(\rho^{0,A}_C, \cM^A_{C \to C T})$ and $(\rho^{0,B}_C, \cM^B_{C \to C T})$ (cf.\ Def.~\ref{Def1}). This choice may be subject to certain constraints, e.g., on the size of the clockwork. To capture the idea that the players have no access to additional time information, we do not allow them to carry out any non-trivial operations on the individual tick registers $T_j$ (such as operations that depend on $j$). Specifically, we shall assume that they simply transmit their tick registers to the referee. The referee continuously monitors the incoming stream of tick registers from both players via predefined projective measurements $\{\tau^A, \mathrm{id}_T - \tau^A\}$ and $\{\tau^B, \mathrm{id}_T - \tau^B\}$, whose outcomes are interpreted as ``tick'' and ``no tick'', respectively.
For a quantitative analysis, it is convenient to introduce an operator that counts the ticks generated by each of the players. Let us denote by $\tau^A_j$ the projection operator $\tau^A$ applied to player $A$'s $j^{\mathrm{th}}$ tick register, and define $\nu^A_j = \mathrm{id}_{T_j} - \tau_j^A$ (and analogously $\tau^B_j$, $\nu^B_j$ for player $B$). Furthermore, for any $N \in \mathbb{N}$ and $0 < k_1 < k_2 < \cdots < k_s < N$, we define the operator (dropping the superscripts $A$ and $B$, and with the identity understood on the remaining registers $T_{k_s+1}, \ldots, T_N$) \begin{align} \label{eq_ticklocations}
\Pi^N_{k_1, k_2, \ldots, k_{s}} = {} & \nu_1 \otimes \cdots \otimes \nu_{k_1-1} \otimes \tau_{k_1}\\
& {} \otimes \nu_{k_1+1} \otimes \cdots \otimes \nu_{k_2-1} \otimes \tau_{k_2} \nonumber \\
& {} \otimes \cdots \nonumber\\
& {} \otimes \nu_{k_{s-1}+1} \otimes \cdots \otimes \nu_{k_s-1} \otimes \tau_{k_s} \nonumber
\end{align} on $T_1 \otimes \cdots \otimes T_N$. Our measure of success in the Alternate Ticks Game can then be formalised as follows.
\begin{mydef} For two clocks the {\it success probability for $t$ ticks} is defined as \begin{align}
p_t = \lim_{N \to \infty} \tr\bigl( (\rho^N_A \otimes \rho^N_B) \, \Pi^N_t \bigr) \ , \end{align} where $\{\rho^N_A\}_{N \in \mathbb{N}}$ and $\{\rho^N_B\}_{N \in \mathbb{N}}$ are the time scales of the two clocks and where $\Pi^N_t$ is the projector defined by\footnote{The specific expression captures the case of $t$ even, but can easily be adapted to $t$ odd.} \begin{align}
\Pi^N_t = \! \! \! \! \sum_{\substack{0 < k_1 < k_2 < k_3 < k_4 \\ < \cdots < k_{t-1} < k_t < N}} \! \! \! \! \Pi^{N}_{k_1, k_3, \ldots, k_{t-1}} \otimes \Pi^{N}_{k_2, k_4, \ldots, k_t } \ , \end{align} with the projectors in the sum given by~\eqref{eq_ticklocations}. \end{mydef}
Operationally, $p_t$ corresponds to the probability that {\it at least} $t$ ticks are produced in the correct {\it alternating} order. In particular, we have $p_{t'} \leq p_{t}$ whenever $t' \geq t$. There are different ways to condense this probabilistic statement into a single quantity. One would be to consider the {\it expected} (or the average) number $\bar{t}$ of ticks until the referee detects a failure, i.e., \begin{align}
\bar{t} = \sum_{t \geq 1} t \, (p_t - p_{t+1}) = \sum_{t \geq 1} p_t \ , \end{align} where $p_t - p_{t+1}$ is the probability that the referee observes exactly $t$ alternating ticks. This is the figure of merit that we will adopt in Section~\ref{sec:Results}. Alternatively, one may ask for the maximum number $t^\delta_{\max}$ of ticks such that the failure probability is below a specified {\it threshold} $\delta$, i.e., \begin{align*}
t^\delta_{\max} = \max\{t: p_t \geq 1-\delta \} \ . \end{align*}
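Both figures of merit are immediate to compute from a non-increasing sequence of success probabilities $p_t$. A minimal Python sketch (ours), using the standard identity $\mathbb{E}[T] = \sum_{t \geq 1} \Pr[T \geq t]$ for a nonnegative integer random variable:

```python
def expected_ticks(p):
    """Expected number of alternating ticks, where p[t-1] = p_t = P(at least
    t ticks); uses E[T] = sum_{t>=1} P(T >= t)."""
    return sum(p)

def t_max(p, delta):
    """Largest t such that p_t >= 1 - delta (0 if there is none)."""
    return max((t + 1 for t, pt in enumerate(p) if pt >= 1 - delta), default=0)
```

For example, for $p_1 = 1$, $p_2 = 0.9$, $p_3 = 0.5$, $p_4 = 0.1$ one finds $\bar{t} = 2.5$ and $t^{0.1}_{\max} = 2$.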
We observe that two players equipped with the NOT-based clock described in Section~\ref{sec:Time} could continue the game arbitrarily long, i.e., $p_t = 1$ for all $t \in \mathbb{N}$. In this sense, the NOT-based clock is perfectly precise. However, as already mentioned earlier, it is not continuous. Conversely, the $\epsilon$-continuous clock described in Section~\ref{sec:Cont} has a probabilistic behaviour, which would at some point lead to a failure of the alternating ticks condition. However, for large sizes $d \gg 1$ of the clockwork, this failure may happen only after many tick signals. In fact, if the size is unbounded, there may be strategies that achieve an arbitrarily large expected number~$\bar{t}$ of ticks.
\section{Two specific strategies for the Alternate Ticks Game}
\label{sec:Results}
In this section we describe two specific strategies for the Alternate Ticks Game and present selected numerical results illustrating their performance. The strategies are defined by the choice of quantum clocks by the two players, $A$ and $B$. Specifically, we consider clocks of the form~\eqref{eq_typicalclock} characterised by the following parameters: \begin{enumerate}
\item the dimension $d$ of the clockwork $C$ and its initial state $\rho_C^{0}$;
\item a unitary map $\cM^{\mathrm{int}}: \rho \mapsto {\rm e}^{-{\rm i} H^{{\rm int}} \theta}\,\rho\, {\rm e}^{{\rm i} H^{{\rm int}} \theta}$, specified by a Hermitian operator $H^{{\rm int}}$ as well as a parameter $\theta$;
\item a map $\cM^{\mathrm{meas}}$ of the form~\eqref{eq_meas_reset}, specified by nonnegative operators $\pi_0$ and $\pi_1$ and a state $\hat{\rho}_C$. \end{enumerate}
We assume that the two players use identical quantum clocks, except for their initial states, which are chosen as \begin{equation}
\rho_{C}^{0,A}=\proj{0}, \quad \rho_{C}^{0,B}=\proj{\left\lfloor {\textstyle \frac{d}{2}} \right\rfloor}, \end{equation} where $\{\ket{0}, \ket{1},\ldots, \ket{d-1}\}$ is an orthonormal basis for the clockwork $C$.
The first specific strategy is inspired by Peres' model of a simple quantum clock~\cite{Peres} and defined by the Hermitian operator \begin{subequations} \begin{align}
H^{\mathrm{int}}_P={\rm i}\ln U_P\ , \end{align} where \begin{equation}
U_P=\ketbra{0}{d-1} + \sum_{k=0}^{d-2} \ketbra{k+1}{k} \end{equation} \end{subequations} is a unitary matrix that cyclically permutes the basis states --- bringing state $\ket{k}$ to $\ket{k+1\,\text{mod}\, d}$ for all $k \in\{0,\ldots,d-1\}$. The second strategy we consider is based on a clock defined by \begin{align}
H^{\mathrm{int}}_{S}=U_P + U_P^\dag\ . \end{align} For both types of clock the map $\cM^{\mathrm{meas}}$ is defined by the state $\hat{\rho}_C = \proj{0}$ and the operators \begin{equation}\label{Eq:POVMTick}
\pi_0 = \mathrm{id}-\pi_1 \quad \text{and} \quad \pi_1 = \delta\sum_{j=d-d_0}^{d-1} \proj{j} \ , \end{equation} where we chose $d_0=\lceil\frac{d}{10} \rceil$.
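The operators defining the two strategies are straightforward to set up numerically. The Python/NumPy sketch below (ours, not from the text) constructs $U_P$, $H^{\mathrm{int}}_P = {\rm i}\ln U_P$, $H^{\mathrm{int}}_S = U_P + U_P^\dag$ and the tick POVM of Eq.~\eqref{Eq:POVMTick}.

```python
import numpy as np

def clock_operators(d, delta):
    """Operators of the two strategies: the cyclic shift U_P, the Peres
    generator H_P = i ln U_P (principal branch, via eigendecomposition;
    U_P has distinct eigenvalues e^{2 pi i k / d}), the hopping generator
    H_S = U_P + U_P^dag, and the tick POVM with window size d0 = ceil(d/10)."""
    U_P = np.roll(np.eye(d), 1, axis=0)           # |k> -> |k+1 mod d>
    w, V = np.linalg.eig(U_P)
    H_P = 1j * V @ np.diag(np.log(w)) @ np.linalg.inv(V)
    H_S = U_P + U_P.conj().T
    d0 = -(-d // 10)                              # ceil(d/10)
    pi_1 = delta * np.diag([1.0 if j >= d - d0 else 0.0 for j in range(d)])
    pi_0 = np.eye(d) - pi_1
    return U_P, H_P, H_S, pi_0, pi_1
```

The assertions below check the cyclic action of $U_P$, the Hermiticity of both generators, and the normalisation $\pi_0 + \pi_1 = \mathrm{id}$.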
\begin{figure}
\caption{{\it Performance in the Alternate Ticks Game for the two strategies described in the text, fixed $\delta$, and varying~$\theta$.} The plot shows the average number of alternating ticks as a function of the size of the clockwork, $d\in\{ 2,...,60\}$, for different step sizes, $\theta \in \{1, 0.1, 0.01\}$ (with $\delta=0.1$). Each data point plotted is the averaged result of $500$ runs of the simulation.}
\label{Fig:Delta0.01}
\end{figure}
The results of our numerical analysis of these strategies in the Alternate Ticks Game are summarised in Figs.~\ref{Fig:Delta0.01} and~\ref{Fig:theta1}. The performance obviously depends strongly on the choice of $H^{\mathrm{int}}$ as well as the parameters $\theta$ (which determines the step size in between two interactions of the clockwork with the tick register) and $\delta$ (which determines the strength of the interaction with the tick register). Nonetheless, the results clearly show that the number of achievable alternate ticks increases with the size~$d$ of the clock, at least in the regimes that we explored.
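For intuition, the special case $\theta = 1$ of the Peres strategy admits a cheap simulation: ${\rm e}^{-{\rm i} H^{\mathrm{int}}_P}$ is exactly the cyclic shift $U_P$, and since both the evolution and the (diagonal) POVM map basis states to basis states, each clockwork remains in a basis state and a run of the game reduces to a Markov chain. The toy referee below is our own simplification of the rules in the text: it accepts the first tick from either player and then demands strict alternation.

```python
import random

def average_alternating_ticks(d=60, delta=0.1, n_trials=300, seed=7):
    """Monte Carlo estimate of the average number of alternating ticks for
    two Peres clocks with theta = 1, started at positions 0 and d//2."""
    d0 = -(-d // 10)                      # detection window size ceil(d/10)
    rng = random.Random(seed)
    total = 0
    for _ in range(n_trials):
        pos = {'A': 0, 'B': d // 2}
        expected, ticks, alive = None, 0, True
        while alive:
            fired = []
            for p in ('A', 'B'):
                pos[p] = (pos[p] + 1) % d          # cyclic shift U_P
                if pos[p] >= d - d0 and rng.random() < delta:
                    fired.append(p)                # tick with probability delta
                    pos[p] = 0                     # reset after a tick
            if not fired:
                continue
            if len(fired) == 1 and expected in (None, fired[0]):
                ticks += 1
                expected = 'B' if fired[0] == 'A' else 'A'
            else:
                alive = False                      # simultaneous or out-of-order tick
        total += ticks
    return total / n_trials
```

Varying $d$, $\delta$ and the initial offset in this toy model reproduces the qualitative trend of the figures: larger clockworks sustain more alternating ticks.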
\begin{figure}
\caption{{\it Performance in the Alternate Ticks Game for the strategy defined by the Peres model, fixed~$\theta$, and varying~$\delta$.} The plot shows the average number of alternating ticks as a function of the size of the clockwork, $d\in\{2,\ldots,250\}$, for different choices of~$\delta\in\{0.02, 0.05, 0.08, 0.1\}$ (with $\theta=1$). Each data point plotted is the averaged result of $500$ runs of the simulation.}
\label{Fig:theta1}
\end{figure}
While the optimal choice of $\theta$ depends on the size of the clockwork, a quantum clock that evolves according to $H^{\mathrm{int}}_P$ with $\theta=1$ has a good performance for a large range of clock sizes, as shown in Fig.~\ref{Fig:theta1}. This choice of parameters therefore has the potential of reaching arbitrary precision in the limit of large clockwork sizes. We expect that this remains true even if one imposes stronger continuity by further decreasing the value of the parameter~$\delta$.
Although the particular quantum clocks we considered here are already reasonably precise (in terms of the number of alternating ticks generated) they are certainly not optimal. While the construction of good quantum clocks appears to be a non-trivial task, it is certainly an interesting and important one. In fact, the question of what features make good operational clocks is still largely open.
\section{Conclusions and future work}
\label{sec:Conclusion}
The aim of this work is to propose a framework to study local clocks and their synchronisation in quantum mechanics. The Alternate Ticks Game is a simple and natural approach to quantifying the extent to which quantum clocks can be synchronised in an operationally meaningful manner. Considering the performance of two copies of a clock in the game, we also obtain a measure for the accuracy of a single quantum clock. Our numerical results show that, even with small-size clocks, a reasonable performance is achievable.
Nonetheless, for any given size of the clock --- measured by the dimension of the state space of the clockwork --- there must exist a non-trivial upper bound on the maximum number of alternating ticks achievable, irrespective of the strategy adopted by the players. This is due to the unavoidable interaction between the clockwork and its environment (the tick registers in our model) and the nature of quantum mechanics --- interactions degrade the ideal evolution of the system and measurements introduce disturbances. We note that, for a different notion of quantum clocks introduced by Salecker and Wigner~\cite{Salecker1958}, the limitations of time measurements have also been discussed, yet from a different perspective, considering the minimal mass required by the clock system (see also~\cite{Ng2008}).
The related question of uncertainty in time measurements has a long history. The famous uncertainty relations by Heisenberg, Robertson and others~\cite{Heisenberg, Robertson} were derived for observables in quantum theory. Since coordinate time is not such an observable, the analogous time-energy uncertainty relations are only applicable in specific cases, e.g., when considering the minimal time needed for a particle to be excited from one energy state to another~\cite{Busch}. Similarly, entropic uncertainty relations~\cite{Deutsch,Uffink,BertaM} can only be used for quantum observables, and are hence not applicable to coordinate time. However, it is conceivable that such relations can be obtained for clock time. The operational approach presented here may serve as a starting point towards this goal.
The synchronisation of clocks is a widely studied problem and commonly employed approaches include phase estimation, light signals exchanged between parties, or shared entanglement, as described, e.g., in~\cite{Chuang2000,Jozsa2000,SethLl,Preskill2000,Spekkens}. It was found, however, that shared entangled states provided by a third party do not seem to increase the performance of these synchronisation techniques~\cite{Preskill2000,Spekkens}. One may therefore suspect that the use of entanglement would also not increase the performance of two players in the Alternate Ticks Game.
There are various possible directions of further research. Clearly, the game that we have introduced can be generalised to one involving multiple players, i.e., multiple local clocks. One may then impose the synchronisation criterion that no two successive ticks should come from the same player, or other variations thereof. Conceivably, by increasing the number of players, one can obtain a combined time scale with smaller intervals (as measured by some external time parameter) between consecutive ticks --- corresponding to one that would be produced by a clock with higher precision. Such a construction may also be used to define a global but still operational notion of time in non-relativistic quantum mechanics. Conversely, it may be worth investigating the impact of relativistic effects on the operational notion of time considered here.
Finally, we briefly comment on the connection between our work and some less closely related research areas. Over the past years there has been renewed interest in the question of how quantum mechanics can be reconciled with general relativity (GR) --- one of the biggest problems of modern physics. Here the ``problem of time'' arises, as witnessed by the apparent impossibility of combining the time parameter used to describe the evolution of quantum systems with the structure of GR, where time is defined locally. This has led to interesting novel approaches to define time. For example, since the Wheeler-DeWitt equation~\cite{DeWitt} corresponds to a timeless Universe, a possible way out is to regard time as correlations among systems~\cite{Peres, Wootters}. Our approach has various similarities to this idea. In particular, our notion of time scales is ultimately a stationary concept and the question of synchronisation may be reformulated as the question of whether the correlations between two time scales have a certain desired structure.
\section*{Acknowledgments}
The authors would like to thank \"Amin Baumeler, Roger Colbeck, Paul Erker, Cisco Gooding, Mauro Iazzi, Philipp Kammerlander, Chiara Marletto, Sandu Popescu, Nicolas Sangouard, Ralph Silva, Andr\'e Stefanov, Vlatko Vedral and Stefan Wolf for discussions and useful comments. This work was supported by the Swiss National Science Foundation through the National Centre of Competence in Research ``QSIT'', the European Commission through the ERC grant No.~258932, as well as by the Ministry of Education, Taiwan, R.O.C., through ``The Aim for the Top University Project'' granted to the National Cheng Kung University (NCKU).
\end{document}
\begin{document}
\title{Almost Designs and Their Links with Balanced Incomplete Block Designs
}
\author{Jerod Michel \and
Qi Wang }
\institute{Jerod Michel \at
Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China. \\
\email{michelj@sustc.edu.cn}
\and
Qi Wang \at
Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China.\\
\email{wangqi@sustc.edu.cn}\newline The authors were supported by the National Science Foundation of China under Grant No. 11601220. }
\date{Received: date / Accepted: date}
\maketitle
\begin{abstract} Almost designs ($t$-adesigns) were proposed and discussed by Ding as a certain generalization of combinatorial designs related to almost difference sets. Unlike a $t$-design, which is always a $(t-1)$-design, it is not clear whether a $t$-adesign must also be a $(t-1)$-design or a $(t-1)$-adesign. In this paper we discuss a particular class of 3-adesigns, i.e., 3-adesigns coming from certain strongly regular graphs and tournaments, and find that these are also $2$-designs. We construct several classes of these, and discuss some of the restrictions on the parameters of such a class. We also construct several new classes of 2-adesigns, and discuss some of their properties as well. \keywords{Almost difference set, difference set, strongly regular graph, $t$-adesign, tournament.}
\subclass{05B05 \and 05E30} \end{abstract}
\section{Introduction}\label{sec1} Combinatorial designs are an interesting subject of combinatorics closely related to finite geometry \cite{BET}, \cite{DEM}, \cite{HIR}, with applications in experimental design \cite{FISH}, coding theory \cite{ASS}, \cite{CUN}, \cite{HP} and cryptography \cite{CDR}, \cite{STIN}. \subsection{Finite incidence structures} A (finite) {\it incidence structure} is a triple $(V,\mathcal{B},I)$ such that $V$ is a finite set of elements called {\em points}, $\mathcal{B}$ is a finite set of elements called {\em blocks}, and $I$ ($\subseteq V\times\mathcal{B}$) is a binary relation between $V$ and $\mathcal{B}$. Since, in the following, all incidence structures $(V,\mathcal{B},I)$ are such that $\mathcal{B}$ is a collection (i.e., a multiset) of nonempty subsets of $V$, and $I$ is given by membership (i.e., a point $p\in V$ and a block $B\in\mathcal{B}$ are incident if and only if $p\in B$), we will denote the incidence structure $(V,\mathcal{B},I)$ simply by $(V,\mathcal{B})$. An incidence structure that has no repeated blocks is called {\em simple}. All of the incidence structures discussed in the following are assumed to be simple. A $t$-$(v,k,\lambda)$ {\it design} (or $t$-design, for short), with $0<t<k<v$, is an incidence structure $(V,\mathcal{B})$ where $V$ is a set of $v$ points and $\mathcal{B}$ is a collection of $k$-subsets of $V$ such that any $t$-subset of $V$ is contained in exactly $\lambda$ blocks \cite{BET}. In the literature, $t$-designs with $t=1$ are sometimes referred to as {\it tactical configurations}, and those with $t=2$ are sometimes referred to as {\it balanced incomplete block designs}. We will denote the number of blocks of an incidence structure by $b$, and the number of blocks containing a given subset $A\subseteq V$ of points by $r_{A}^{\mathcal{B}}$ (when $A$ is a singleton, and $(V,\mathcal{B})$ is a tactical configuration, simply by $r^{\mathcal{B}}$). 
Then the identities \[ bk=vr^{\mathcal{B}},\] and \[r^{\mathcal{B}}(k-1)=(v-1)\lambda\] restrict the possible sets of parameters of $2$-designs. A $t$-design in which $b=v$ and $r^{\mathcal{B}}=k$ is called {\it symmetric}. The {\it dual} $(V, \mathcal{B})^{\perp}$ of the incidence structure $(V,\mathcal{B})$ is the incidence structure $(\mathcal{B},V)$ with the roles of points and blocks interchanged. A symmetric incidence structure has the same parameters as its dual. For an ambient set $V$, and a subset $A\subseteq V$, we will sometimes denote by $\overline{A}$ its complement $V\setminus A$. The $v\times b$ $0$-$1$ matrix whose rows and columns are indexed by $V$ and $\mathcal{B}$, respectively, and whose $(p,B)$-th entry is $1$ if and only if $p\in B$, is called the {\it incidence matrix} of $(V,\mathcal{B})$. \par For a matrix $M$, let $(M)_{ij}$ denote the $(i,j)$-th entry of $M$. Let $\mathcal{V}_{M}$ denote the set by which the columns of $M$ are indexed, and let $\mathcal{B}_{M}$ denote the set of supports of the rows of $M$. We will denote by $J$ and $I$ the all-one matrix and the identity matrix, respectively (dimensions will be clear from the context). \subsection{Difference sets and almost difference sets} One important way of obtaining (symmetric) balanced incomplete block designs is by constructing difference sets \cite{BET}, \cite{STIN}. Let $G$ be a group (written additively) of order $v$, and let $k$ and $\lambda$ be integers satisfying $2\leq k<v$. A $(v,k,\lambda)$ {\it difference set} in $G$ is a $k$-subset $D\subseteq G$ such that the multiset $\{* \ x-y\mid x,y\in D,x\neq y \ *\}$ contains every nonidentity member of $G$ exactly $\lambda$ times. It is not difficult to see that the incidence structure given by $(G,Dev(D))$ is a $2$-$(v,k,\lambda)$ design, where $Dev(D)$, called the {\it development} of $D$, denotes the set $\{D+g\mid g\in G\}$ (where $D+g:=\{d+g\mid d\in D\}$) of translates of $D$ over $G$. \par Almost difference sets are a generalization of difference sets. 
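The fact that $(G,Dev(D))$ is a $2$-$(v,k,\lambda)$ design is easy to confirm by machine on a toy case. The sketch below (Python; the classical $(7,3,1)$ quadratic-residue difference set $\{1,2,4\}$ in $\mathbb{Z}_7$ is our choice of illustration, not an example drawn from the text) builds the development and counts pair coverage.

```python
from itertools import combinations

def development(D, v):
    """Dev(D): all translates D + g of D over Z_v."""
    return [frozenset((d + g) % v for d in D) for g in range(v)]

def pair_coverage(v, blocks):
    """Map each 2-subset of Z_v to the number of blocks containing it."""
    return {pair: sum(set(pair) <= B for B in blocks)
            for pair in combinations(range(v), 2)}

# the (7,3,1) quadratic-residue difference set in Z_7
D = {1, 2, 4}
blocks = development(D, 7)
counts = pair_coverage(7, blocks)

# every pair of points lies in exactly lambda = 1 block, so
# (Z_7, Dev(D)) is a (symmetric) 2-(7,3,1) design -- the Fano plane
assert set(counts.values()) == {1}
```

Since $b=v=7$ and $r^{\mathcal{B}}=k=3$ here, this development is also an instance of a symmetric design.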
In the literature there are two different definitions of almost difference sets \cite{DAV}, \cite{DING00}. The following unification was given in \cite{DHM}. A $(v,k,\lambda,s)$ {\it almost difference set} in $G$ is a $k$-subset $D\subseteq G$ such that the multiset $\{* \ x-y\mid x,y\in D,x\neq y \ *\}$ contains $s$ nonidentity members of $G$ with multiplicity $\lambda$, and $v-1-s$ nonidentity members with multiplicity $\lambda+1$. \par A difference set can be viewed as an almost difference set with $s=0$ or $s=v-1$. The complement $G\setminus D$ of a $(v,k,\lambda,s)$ almost difference set is an almost difference set with parameters $(v,v-k,v-2k+\lambda,s)$. A simple restriction which can be applied to the parameters of almost difference sets is that $(v-1)(\lambda+1)-s=k(k-1)$ must hold for any $(v,k,\lambda,s)$ almost difference set. \par Difference sets and almost difference sets also have extensive applications in various fields such as communications, sequence design, error correcting codes, and CDMA and cryptography \cite{CDR}, \cite{CUN}, \cite{GOL}. For a good survey on almost difference sets, the reader is referred to \cite{CUN}. \subsection{Strongly Regular Graphs and Tournaments}\label{sec1.3}
We will assume some familiarity with graph theory. A graph $\Gamma=(V,E)$ consists of a vertex set $V$ with $\left|V\right|=n$, an edge set $E$, and a relation that associates with each edge a pair of vertices. A {\it strongly regular graph} with parameters $(n,k,\lambda,\mu)$ is a graph $\Gamma$ with $n$ vertices in which the number of common neighbors of $x$ and $y$ is $k,\lambda$ or $\mu$ according as $x$ and $y$ are equal, adjacent or non-adjacent, respectively. The complement of an $(n,k,\lambda,\mu)$ strongly regular graph is an $(n,n-k-1,n-2k+\mu-2,n-2k+\lambda)$ strongly regular graph. For a good introduction to strongly regular graphs the reader is referred to \cite{VAN}. Strongly regular graphs whose parameters are (up to complementation) of the form $(n,\frac{n-1}{2},\frac{n-5}{4},\frac{n-1}{4})$, are called {\it Paley type}, and are closely related to conference matrices \cite{STIN}. A {\it conference matrix} of order $n$ is an $n\times n$ matrix $C$ with diagonal entries $0$, and off-diagonal entries $\pm 1$, which satisfies $CC^{T} =(n-1)I$. Assume $C$ is such a matrix, and let $S$ be the matrix obtained from $C$ by deleting the first row and column, and let $A$ be the matrix obtained from $S$ by replacing $-1$ by $1$, and $1$ by $0$. If $n \equiv 2 \ ({\rm mod} \ 4)$ then $S$ is symmetric, and $A$ is the adjacency matrix of a strongly regular graph of Paley type. Conference matrices of order $n \equiv 2 \ ({\rm mod} \ 4)$ are in fact equivalent to Paley type strongly regular graphs (see \cite{GOETH} and \cite{REID}). Cayley graphs which are strongly regular are equivalent to partial difference sets. For a formulation of strongly regular graphs in terms of partial difference sets, the reader is referred to \cite{VAN}.
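The defining conditions of a Paley type strongly regular graph can be checked directly on a small instance. The sketch below (Python) uses the Paley graph on $\mathbb{Z}_{13}$, where $x\sim y$ whenever $x-y$ is a nonzero square; this particular example is our own illustration, not taken from the text, and it realizes the Paley type parameters $(n,\frac{n-1}{2},\frac{n-5}{4},\frac{n-1}{4})=(13,6,2,3)$.

```python
from itertools import combinations

n = 13
squares = {(x * x) % n for x in range(1, n)}   # nonzero quadratic residues mod 13

def adjacent(x, y):
    # Paley graph: x ~ y iff x - y is a nonzero square mod n
    # (well defined since -1 is a square when n = 1 mod 4)
    return (x - y) % n in squares

neighbors = {x: {y for y in range(n) if y != x and adjacent(x, y)}
             for x in range(n)}

# degree k = (n-1)/2 = 6 at every vertex
assert all(len(neighbors[x]) == (n - 1) // 2 for x in range(n))

# common neighbours: lambda = (n-5)/4 = 2 on edges, mu = (n-1)/4 = 3 on non-edges
for x, y in combinations(range(n), 2):
    expected = (n - 5) // 4 if adjacent(x, y) else (n - 1) // 4
    assert len(neighbors[x] & neighbors[y]) == expected
```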
\par A directed graph $\Gamma=(V,\mathcal{E})$ consists of a vertex set $V$ with $\left|V\right|=n$, and a set $\mathcal{E}$ of ordered pairs of vertices (or arcs). A {\it tournament} is a directed graph $\Gamma$ with $n$ vertices in which each pair of vertices $x,y\in V$ is joined by exactly one member of $\mathcal{E}$. The in-degree and out-degree of a vertex $x\in V$ are defined to be the number of arcs of the form $yx$, and the number of arcs of the form $xy$, for $y\in V$, respectively. A {\it doubly regular tournament} is a tournament $\Gamma$ on $n$ vertices in which every vertex has in-degree and out-degree $\frac{n-1}{2}$, and each pair of vertices has $\frac{n-3}{4}$ common out-neighbors, and the same number of common in-neighbors. Doubly regular tournaments are closely related to skew conference matrices. Let $C$ be a conference matrix of order $n$, let $S$ be the matrix obtained from $C$ by deleting the first row and column, and let $A$ be the matrix obtained from $S$ by replacing $-1$ by $1$, and $1$ by $0$. If $n \equiv 0 \ ({\rm mod} \ 4)$ then $S$ is skew-symmetric, and $A$ is the adjacency matrix of a doubly regular tournament. Conference matrices of order $n \equiv 0 \ ({\rm mod} \ 4)$ are in fact equivalent to doubly regular tournaments (see \cite{GOETH} and \cite{REID}). Cayley graphs which are doubly regular tournaments are equivalent to skew Hadamard difference sets. For a formulation of doubly regular tournaments in terms of skew Hadamard difference sets, the reader is referred to \cite{STIN} and \cite{DPW}. \subsection{Adesigns} Recent interest in almost difference sets and their codes is the main motivation for studying adesigns (almost designs). Let $V$ be a $v$-set and $\mathcal{B}$ a collection of subsets of $V$, called blocks, each having cardinality $k$. 
If there is a positive integer $\lambda$ such that every $t$-subset of $V$ is incident with either $\lambda$ blocks or with $\lambda+1$ blocks, and $(V,\mathcal{B})$ is not a $t$-design, then $(V,\mathcal{B})$ is called a $t$-$(v,k,\lambda)$ {\it adesign} (or $t$-adesign for short). It is easy to see that a $0$-$1$ matrix $A$ is the incidence matrix of a $2$-$(v,k,\lambda)$ design $(V,\mathcal{B})$ with repetition number $r=r^{\mathcal{B}}$ if and only if \begin{equation}\label{eqAA} AA^{T}=rI+\lambda(J-I) \text{ and } A^{T}J=kJ,\end{equation} and is the incidence matrix of a $2$-$(v,k,\lambda)$ adesign $(V,\mathcal{B})$ with constant repetition number $r=r^{\mathcal{B}}$ if and only if there exists a $v\times v$ $0$-$1$ matrix $S$, whose diagonal entries are all zero, such that \begin{equation}\label{eqAB} AA^{T}=rI+\lambda S + (\lambda+1)(J-I-S) \text{ and } A^{T}J=kJ.\end{equation} The following lemma illustrates the relation between almost difference sets and adesigns. The relation is analogous to that between difference sets and $2$-designs. The proof is easy and so is omitted. \begin{lemma}\label{le1} Let $D$ be a $(v,k,\lambda,s)$ almost difference set in an abelian group $G$. Then $(G,Dev(D))$ is a $2$-$(v,k,\lambda)$ adesign. Moreover, we have \[ r_{\{x,y\}}^{Dev(D)}=\begin{cases}
\lambda,& \text{ if } \ |(D+x)\cap(D+y)|=\lambda,\\ \lambda+1,&\text{ otherwise,}\end{cases}\]for all distinct $x,y \in G$. \end{lemma} \par The term {\it adesign} was first coined by Ding in \cite{CUN}, and several constructions of adesigns and their applications were further investigated in \cite{MDI} and, indirectly, in \cite{DYI} and \cite{WW}, as it was shown in \cite{MDI} that almost difference families give $2$-adesigns. It should also be noted that adesigns need not always come from the developments of difference sets or almost difference sets, e.g., there are the duals of quasi-symmetric designs whose block intersection numbers have a difference of one (see Example 5.4 of \cite{MICH00}), as well as those discussed in Example 6.5 of \cite{MDI}. Partial geometric designs, an important type of incidence structure (see \cite{NEUM}) in which one of the defining properties of partial geometries is generalized, were considered in \cite{MICH00}, where an investigation was made into exactly when a $2$-adesign is partial geometric. It was found that, for this to occur, some strong conditions must be satisfied (see Examples 5.4 and 5.5 of \cite{MICH00}). In this paper we will study a special class of $3$-adesigns, i.e., $3$-adesigns coming from certain strongly regular graphs and tournaments, and find that these are also $2$-designs. We give several constructions of such $3$-adesigns and we discuss some restrictions on their parameters as well as their links to some other combinatorial objects such as $\lambda$-coverings. Moreover, we construct several new families of $2$-adesigns and discuss some of the restrictions on their parameters. \par The remainder of this paper is organized as follows. In Section 2 we make an initial investigation into when a $(t+1)$-adesign is a $t$-design or a $t$-adesign. 
In Section 3 we give two generic constructions of $3$-adesigns which are balanced incomplete block designs and, furthermore, we discuss the question of when a $3$-adesign is a $2$-design or $2$-adesign. In Section 4 we give some new constructions of $2$-adesigns and we discuss some of the restrictions on their parameters as well. Section 5 concludes the paper with some open problems. \section{A note on the parameters of $(t+1)$-adesigns which are either $t$-designs or $t$-adesigns}\label{sec2} It is well-known that $(t+1)$-designs are always $t$-designs (see \cite{BET}). However, it is not clear whether a $(t+1)$-adesign need always be a $t$-design or $t$-adesign. In this section we make a preliminary investigation into when a $(t+1)$-adesign is a $t$-design, or a $t$-adesign, by eliminating some of the possible parameters. \par Suppose that $(V,\mathcal{B})$ is a $(t+1)$-$(v,k,\lambda)$ adesign with $b$ blocks. Let $r_{Y}$ denote the number of blocks containing the $t$-subset $Y$ of $V$, and define \[
I_{Y}=\{(z,B)\mid z\in V\setminus Y\text{ and }Y\cup\{z\}\subseteq B\in\mathcal{B}\}.\]We will count $|I_{Y}|$ in two ways. There are $v-t$ ways to choose $z$, and since $(V,\mathcal{B})$ is a $(t+1)$-adesign, neither $|I_{Y}|=\lambda(v-t)$ nor $|I_{Y}|=(\lambda+1)(v-t)$ can hold for all $t$-subsets $Y$ contained in $V$, otherwise $(V,\mathcal{B})$ would be a $(t+1)$-design. Thus $\lambda(v-t)\leq|I_{Y}|\leq(\lambda+1)(v-t)$. We also have $r_{Y}$ ways to choose a block $B$ containing $Y$, and for each choice of $B$, there are $k-t$ ways to choose $z$. This gives us $\lambda(v-t)\leq r_{Y}(k-t)\leq(\lambda+1)(v-t)$ for all possible $t$-subsets $Y$ contained in $V$, whence \begin{equation}\label{eq30} \lambda\leq r_{Y}\frac{k-t}{v-t}\leq\lambda+1. \end{equation} Notice that if $\frac{v-t}{k-t}<2$, then \[\left\lceil\lambda\frac{v-t}{k-t}\right\rceil\leq r_{Y}\leq\left\lfloor\lambda\frac{v-t}{k-t}+\frac{v-t}{k-t}\right\rfloor<\left\lceil\lambda\frac{v-t}{k-t}\right\rceil+2,\]so that, as $Y$ runs over the $t$-subsets of $V$, the only possible values for $r_{Y}$ are $\lceil\lambda\frac{v-t}{k-t}\rceil$ or $\lceil\lambda\frac{v-t}{k-t}\rceil+1$. Thus, $(V,\mathcal{B})$ is either a $t$-$(v,k,\lambda')$ adesign with $\lambda'=\lceil\lambda\frac{v-t}{k-t}\rceil$, or a $t$-$(v,k,\lambda')$ design with $\lambda'=\lceil\lambda\frac{v-t}{k-t}\rceil$ or $\lceil\lambda\frac{v-t}{k-t}\rceil+1$. Also notice that if $(V,\mathcal{B})$ is in fact a $t$-design, then by (\ref{eq30}) we must have $\lambda'\frac{k-t}{v-t}-1<\lambda<\lambda'\frac{k-t}{v-t}$ so that $\lambda=\lfloor\lambda'\frac{k-t}{v-t}\rfloor$. 
Moreover, multiplying through (\ref{eq30}) by $\binom{v}{t+1}/\binom{k}{t+1}$, and taking into account that the inequality must be strict, we have $\lambda\binom{v}{t+1}/\binom{k}{t+1}<\lambda'\binom{v}{t}/\binom{k}{t}<(\lambda+1)\binom{v}{t+1}/\binom{k}{t+1}$ from which it follows that \begin{equation}\label{eq3.1}\lambda\binom{v}{t+1}/\binom{k}{t+1}<b<(\lambda+1)\binom{v}{t+1}/\binom{k}{t+1}.\end{equation} We have thus shown the following. \begin{lemma}\label{le30.0} Let $(V,\mathcal{B})$ be a $(t+1)$-$(v,k,\lambda)$ adesign with $b$ blocks. Then for any $t$-subset $Y$ of $V$ we have $\lambda\leq r_{Y}\frac{k-t}{v-t}\leq\lambda+1$. If $\frac{k-t}{v-t}>\frac{1}{2}$ then $(V,\mathcal{B})$ is either a $t$-$(v,k,\lambda')$ adesign with $\lambda'=\lceil\lambda\frac{v-t}{k-t}\rceil$, or a $t$-$(v,k,\lambda')$ design with $\lambda'=\lceil\lambda\frac{v-t}{k-t}\rceil$ or $\lceil\lambda\frac{v-t}{k-t}\rceil+1$. Moreover, if $(V,\mathcal{B})$ is a $t$-$(v,k,\lambda')$ design, then $\lambda=\lfloor\lambda'\frac{k-t}{v-t}\rfloor$ and $\lambda\binom{v}{t+1}/\binom{k}{t+1}<b<(\lambda+1)\binom{v}{t+1}/\binom{k}{t+1}$. \end{lemma} It should be noted that the purpose behind stating the inequality in (\ref{eq3.1}) is not to give an estimate on the number of blocks in the $2$-design (since we already know the number of blocks in a $2$-design), but to show the relationship between the number of blocks and $\lambda$, and the resulting strictness of inequality. It should also be noted that, in what follows, it will be made evident that the assumption that $\frac{k-t}{v-t}>\frac{1}{2}$ is sufficient for the $(t+1)$-adesign to be either a $t$-adesign or a $t$-design, but not necessary. The necessary conditions seem difficult to discern, and are left as an open problem. We now give some constructions of $3$-adesigns which are balanced incomplete block designs. 
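The parameter arithmetic of Lemma \ref{le30.0} can be packaged as a small routine. The sketch below (Python) computes the admissible $t$-level values and the strict bounds on $b$; the sample parameters $v=10$, $k=7$, $t=2$, $\lambda=3$ are hypothetical, chosen only so that the lemma's hypothesis $\frac{k-t}{v-t}>\frac{1}{2}$ holds.

```python
from math import comb

def t_level_parameters(v, k, t, lam):
    """For a (t+1)-(v,k,lam) adesign with (k-t)/(v-t) > 1/2: return the
    t-adesign value lam', the two candidate t-design values, and the open
    interval that must contain the number of blocks b."""
    assert 2 * (k - t) > v - t, "requires (k-t)/(v-t) > 1/2"
    lam_ad = -((-lam * (v - t)) // (k - t))             # ceil(lam(v-t)/(k-t)), exact
    design_values = (lam_ad, lam_ad + 1)                # candidates if it is a t-design
    b_lo = lam * comb(v, t + 1) / comb(k, t + 1)        # strict lower bound on b
    b_hi = (lam + 1) * comb(v, t + 1) / comb(k, t + 1)  # strict upper bound on b
    return lam_ad, design_values, (b_lo, b_hi)

# hypothetical parameters with (k-t)/(v-t) = 5/8 > 1/2
lam_ad, design_values, (b_lo, b_hi) = t_level_parameters(10, 7, 2, 3)
admissible_b = [b for b in range(1, 2 * comb(10, 2)) if b_lo < b < b_hi]
```

Here $\lceil 3\cdot 8/5\rceil = 5$, so a hypothetical $3$-$(10,7,3)$ adesign would have to be a $2$-$(10,7,5)$ adesign or a $2$-$(10,7,\lambda')$ design with $\lambda'\in\{5,6\}$, with the number of blocks strictly between $72/7$ and $96/7$.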
\section{Constructions of $3$-adesigns which are balanced incomplete block designs}\label{sec3} Both of the constructions in this section follow the same basic method. We take the sets of supports of the rows of adjacency matrices of certain graphs and consider their unions. Note that these are not the first constructions of $3$-adesigns (see Section 6 of \cite{MDI}). However, the constructions in this work are more general, and show definite links between other combinatorial objects such as balanced incomplete block designs and strongly regular graphs. \par We first give a construction based on strongly regular graphs. We will need the following lemma, which simply formalizes some of our discussion in Section \ref{sec1.3}. \begin{lemma}\label{le00} Let $A$ be the adjacency matrix of a strongly regular graph with parameters $(v,k,\lambda,\mu)$. Then \[ A^{2}=kI+\lambda A+\mu (J-I-A) \text{ and } AJ=kJ.\] \end{lemma} Here we give our first construction. \begin{theorem}\label{th5} Let $A$ be the adjacency matrix of a Paley type strongly regular graph on $n$ vertices, and denote by $A'$ the matrix $J-I-A$. Then $(\mathcal{V}_{A},\mathcal{B}_{A}\cup\mathcal{B}_{A'})$ is a $2$-$(n,\frac{n-1}{2},\frac{n-3}{2})$ design and a $3$-$(n,\frac{n-1}{2},\frac{n-9}{4})$ adesign. \end{theorem} \begin{proof} For simplicity, let $\mathcal{V}=\mathcal{V}_{A}$, $\mathcal{B}=\mathcal{B}_{A}$ and $\mathcal{B}'=\mathcal{B}_{A'}$, and denote $\frac{n-1}{2}$ by $k$ and $\frac{n-5}{4}$ by $\lambda$. By Lemma \ref{le00} (and using the fact that $A$ is symmetric) it is easy to see that \[ \begingroup\setlength\arraycolsep{2pt}\def\arraystretch{1.0}\begin{pmatrix} A & A' \end{pmatrix}\endgroup \begingroup\setlength\arraycolsep{2pt}\def\arraystretch{1.0}\begin{pmatrix} A \\ A' \end{pmatrix}\endgroup = A^{2} + (A')^{2} = (n-1)I+(k-1)(J-I). \] Then, by (\ref{eqAA}), and the regularity of the graphs, we have that $(\mathcal{V},\mathcal{B}\cup\mathcal{B}')$ is a $2$-$(n,k,k-1)$ design. 
\par Now we want to show that $(\mathcal{V},\mathcal{B}\cup\mathcal{B}')$ is a $3$-adesign. To count the number of blocks of $\mathcal{B}\cup\mathcal{B}'$ in which $x,y$ and $z$ appear together, we first count the number of blocks of $\mathcal{B}\cup\mathcal{\overline{B}}$ in which $x,y$ and $z$ appear together, where $\mathcal{\overline{B}}=\mathcal{B}_{J-A}$. Let $x,y$ and $z$ be distinct members of $\mathcal{V}$. Suppose that $x,y$ and $z$ appear together in $\omega$ blocks of $\mathcal{B}$. \newline {\bf Case 1:} Assume that $(A)_{xy}=(A)_{xz}=(A)_{yz}=1$. (In other words, assume that any two of $x,y$ and $z$ appear together in $\lambda$ blocks of $\mathcal{B}$.) Lemma \ref{le00} together with the principle of inclusion and exclusion implies that there are $n-3k+3\lambda-\omega$ blocks in $\mathcal{\overline{B}}$ containing $x,y$ and $z$. Thus, there are $n-3k+3\lambda$ blocks in $\mathcal{B}\cup\mathcal{\overline{B}}$ containing $x,y$ and $z$. We want to know how many of these correspond to rows of $J-A$ whose indices are $x,y$ or $z$. But if any one of the three blocks corresponding to the rows of $J-A$ indexed by $x,y$ and $z$ contains each of the points $x,y$ and $z$, then two of $(A)_{xy},(A)_{xz}$ and $(A)_{yz}$ must be equal to zero, a contradiction. Thus, in this case, $x,y$ and $z$ appear together in exactly $n-3k+3\lambda=\lambda-1$ blocks of $\mathcal{B}\cup\mathcal{B}'$. \newline {\bf Case 2:} Assume that $(A)_{xy}=(A)_{xz}=1$ and $(A)_{yz}=0$. By Lemma \ref{le00}, and the principle of inclusion and exclusion, there are $n-3k+3\lambda+1-\omega$ blocks in $\mathcal{\overline{B}}$ containing $x,y$ and $z$, whence $n-3k+3\lambda+1$ blocks in $\mathcal{B}\cup\mathcal{\overline{B}}$ containing $x,y$ and $z$. 
As in the last case, if we suppose that any one of the three blocks corresponding to the rows of $J-A$ indexed by $x,y$ and $z$ contains each of the points $x,y$ and $z$, then two of $(A)_{xy},(A)_{xz}$ and $(A)_{yz}$ must be equal to zero, again leading to a contradiction. Thus, in this case, $x,y$ and $z$ appear together in exactly $n-3k+3\lambda+1=\lambda$ blocks of $\mathcal{B}\cup\mathcal{B}'$. \newline {\bf Case 3:} Assume that $(A)_{xy}=1$ and $(A)_{xz}=(A)_{yz}=0$. By Lemma \ref{le00}, and the principle of inclusion and exclusion, there are $n-3k+3\lambda+2-\omega$ blocks in $\mathcal{\overline{B}}$ containing $x,y$ and $z$, whence $n-3k+3\lambda+2$ blocks in $\mathcal{B}\cup\mathcal{\overline{B}}$ containing $x,y$ and $z$. The only one of the three blocks corresponding to the rows of $J-A$ indexed by $x,y$ and $z$ that contains each of the points $x,y$ and $z$ is that corresponding to $z$ (i.e., the support of the $z$-th row). Thus, in this case, $x,y$ and $z$ appear together in exactly $n-3k+3\lambda+1=\lambda$ blocks of $\mathcal{B}\cup\mathcal{B}'$. Thus $(\mathcal{V},\mathcal{B}\cup\mathcal{B}')$ is a $3$-$(n,k,\lambda-1)$ adesign. \end{proof} There is also the following construction, where the union of the complementary block sets is considered. Its proof is similar and so is omitted. \begin{corollary} Let $A$ be the adjacency matrix of a Paley type strongly regular graph on $n$ vertices, and denote by $A'$ the matrix $J-I-A$. Then $(\mathcal{V}_{A},\mathcal{B}_{A+I}\cup\mathcal{B}_{A'+I})$ is a $2$-$(n,\frac{n+1}{2},\frac{n+1}{2})$ design and a $3$-$(n,\frac{n+1}{2},\frac{n-1}{4})$ adesign. \end{corollary} \begin{example}\label{ex00} If $C$ is a conference matrix of order $n \equiv 2 \ ({\rm mod} \ 4)$, then let $S$ be the matrix obtained from $C$ by deleting the first row and column, and let $A$ be the matrix obtained from $S$ by replacing $-1$ by $1$, and $1$ by $0$. 
Then $A$ is the adjacency matrix of a strongly regular graph of Paley type (see Section \ref{sec1.3}) and, by Theorem \ref{th5}, $(\mathcal{V}_{A},\mathcal{B}_{A}\cup\mathcal{B}_{A'})$ is a $2$-$(n-1,\frac{n-2}{2},\frac{n-4}{2})$ design and a $3$-$(n-1,\frac{n-2}{2},\frac{n-10}{4})$ adesign. \end{example} We now consider a construction based on tournaments. We will need the following lemma, which is also a mere formalization of part of the discussion in Section \ref{sec1.3}. \begin{lemma}\label{le01} {\rm \cite{GOETH}} Let $A$ be the adjacency matrix of a tournament $\Gamma$ on $n$ vertices, and denote by $S$ the matrix $2A+I-J$. Then $\Gamma$ is doubly regular if and only if \[SS^{T}=nI-J.\] \end{lemma} Here we give the construction. \begin{theorem}\label{th6} Let $A$ be the adjacency matrix of a doubly regular tournament on $n$ vertices, and denote by $A'$ the matrix $J-I-A$. Then $(\mathcal{V}_{A},\mathcal{B}_{A}\cup\mathcal{B}_{A'})$ is a $2$-$(n,\frac{n-1}{2},\frac{n-3}{2})$ design and a $3$-$(n,\frac{n-1}{2},\frac{n-7}{4})$ adesign. \end{theorem} \begin{proof} Let $\Gamma=(V,\mathcal{E})$ be the tournament with adjacency matrix $A$. Again, for simplicity, let $\mathcal{V}=\mathcal{V}_{A}$, $\mathcal{B}=\mathcal{B}_{A}$, $\mathcal{B}'=\mathcal{B}_{A'}$, and $\overline{\mathcal{B}}=\mathcal{B}_{J-A}$, and denote $\frac{n-1}{2}$ by $k$ and $\frac{n-3}{4}$ by $\lambda$. The fact that $r_{\{x,y\}}^{\mathcal{B}}$ and $r_{\{x,y\}}^{\mathcal{B}'}$ are both equal to the constant $\lambda$ for all $x,y\in \mathcal{V}$ implies that $(\mathcal{V},\mathcal{B}\cup\mathcal{B}')$ is a $2$-$(n,k,2\lambda)$ design. \par We need to show that $(\mathcal{V},\mathcal{B}\cup\mathcal{B}')$ is a $3$-adesign. Like in the previous construction, we will assume that $x,y$ and $z$ appear together in $\omega$ blocks of $\mathcal{B}$, and we will first count the number of blocks of $\mathcal{B}\cup\mathcal{\overline{B}}$ in which $x,y$ and $z$ appear together. 
By the principle of inclusion and exclusion, there are $n-3k+3\lambda-\omega$ blocks in $\mathcal{\overline{B}}$ containing $x,y$ and $z$. Then there are $n-3k+3\lambda$ blocks in $\mathcal{B}\cup\mathcal{\overline{B}}$ containing $x,y$ and $z$. We want to know how many of these correspond to the rows of $J-A$ whose indices are $x,y$ or $z$. Notice if we suppose that the two blocks corresponding to the rows of $J-A$ indexed by $x$ and $y$ both contain each of the points $x,y$ and $z$, then $\mathcal{E}$ must contain both of the arcs $xy$ and $yx$, a contradiction to Lemma \ref{le01}. Thus, no more than one of the three blocks corresponding to the rows of $J-A$ indexed by $x,y$ and $z$ can contain each of the points $x,y$ and $z$. We need only show that $(\mathcal{V},\mathcal{B}\cup\mathcal{B}')$ is not a $3$-design, and we will be done. If $(\mathcal{V},\mathcal{B}\cup\mathcal{B}')$ were a $3$-design, then, by the above arguments, the only choices for the constant $r_{\{x,y,z\}}^{\mathcal{B}\cup\mathcal{B}'}(=:\lambda')$ would be $\lambda$ or $\lambda-1$. The number of blocks in $\mathcal{B}\cup\mathcal{B}'$ is given by $\lambda'\binom{n}{3}/\binom{k}{3}$, whence the equation \begin{equation}\label{eq8} 2n=\lambda'\binom{n}{3}/\binom{k}{3}\end{equation} must hold. If $\lambda'=\lambda$ then (\ref{eq8}) becomes $n=n+3$, a contradiction, and if $\lambda'=\lambda-1$ then (\ref{eq8}) becomes $(k-1)(k-2)=(\lambda-1)(n-2)$, which again leads to a contradiction. \end{proof} In the following, the union of the complementary block sets is considered. The proof is similar and so is omitted. \begin{corollary} Let $A$ be the adjacency matrix of a doubly regular tournament on $n$ vertices, and denote by $A'$ the matrix $J-I-A$. Then $(\mathcal{V}_{A},\mathcal{B}_{A+I}\cup\mathcal{B}_{A'+I})$ is a $2$-$(n,\frac{n+1}{2},\frac{n+1}{2})$ design and a $3$-$(n,\frac{n+1}{2},\frac{n-3}{4})$ adesign. 
\end{corollary} \begin{example}\label{ex01} If $C$ is a conference matrix of order $n \equiv 0 \ ({\rm mod} \ 4)$, then let $S$ be the matrix obtained from $C$ by deleting the first row and column, and let $A$ be the matrix obtained from $S$ by replacing $-1$ by $1$, and $1$ by $0$. Then $A$ is the adjacency matrix of a doubly regular tournament (see Section \ref{sec1.3}) and, by Theorem \ref{th6}, $(\mathcal{V}_{A},\mathcal{B}_{A}\cup\mathcal{B}_{A'})$ is a $2$-$(n-1,\frac{n-2}{2},\frac{n-4}{2})$ design and a $3$-$(n-1,\frac{n-2}{2},\frac{n-8}{4})$ adesign. \end{example} It is clear that the balanced incomplete block designs resulting from the $3$-adesigns constructed from partial difference sets in Example \ref{ex00}, and from skew Hadamard difference sets in Example \ref{ex01}, can also be realized as difference families, each consisting of two distinct difference sets (or almost difference sets). To the best of our knowledge, the only instances of these that have been reported on previously are those constructed via cyclotomy, discussed by Wilson in \cite{WIL} as difference families, and also by Liu and Ding in \cite{LIU} as balanced incomplete block designs. \par Assuming that $(V,\mathcal{B})$ is a $2$-$(v,k,\lambda')$ design, the condition $\lambda=\lfloor\lambda'\frac{k-2}{v-2}\rfloor$ stated in Lemma \ref{le30.0} is necessary for $(V,\mathcal{B})$ to be a $3$-$(v,k,\lambda)$ adesign, but not sufficient, as the following example, whose construction method was discussed in {\rm \cite{MDI}}, illustrates.
\begin{example} Let $n$ be an odd integer divisible by $3$. Consider, for fixed $a\in\mathbb{Z}_{n}$, all pairs $\{a-i \ ({\rm mod} \ n),a+i \ ({\rm mod} \ n)\}$, for $i=1,...,\frac{n-1}{2}$. The union of any two distinct pairs gives a block consisting of four points. Denote, for fixed $a\in\mathbb{Z}_{n}$, the set of all blocks obtained in this way by $\mathcal{B}_{a}$. Then $(\mathbb{Z}_{n},\cup_{a\in\mathbb{Z}_{n}}\mathcal{B}_{a})$ is a $2$-$(n,4,n)$ design. Also notice that $\lfloor n\frac{2}{n-2}\rfloor=2$ for all $n\geq 9$, and the number of blocks is $b=n\binom{(n-1)/2}{2}$ so that $2\binom{n}{3}/\binom{4}{3}<b<3\binom{n}{3}/\binom{4}{3}$ is satisfied; however, since $n$ is divisible by 3, we can find $3$-subsets of $\mathbb{Z}_{n}$ not contained in any block (choose three points $x,y$ and $z$ so that $|x-y|=|x-z|=|y-z|$). \end{example} \section{Constructions of $2$-adesigns} We begin this section with a discussion on the possible number of blocks of $2$-adesigns. \subsection{Possible number of blocks of $2$-adesigns}\label{secB} Let $(V,\mathcal{B})$ be a $2$-$(v,k,\lambda)$ adesign with $b$ blocks. According to Lemma \ref{le30.0}, if $(V,\mathcal{B})$ is a tactical configuration, then $\lambda=\lfloor r^{\mathcal{B}}\frac{k-1}{v-1}\rfloor$ and \[ \lambda\binom{v}{2}/\binom{k}{2}<b<(\lambda+1)\binom{v}{2}/\binom{k}{2}.
\] Let $v,k$ and $\lambda$ be positive integers and let $(V,\mathcal{B})$ be an incidence structure with $|V|=v$ and $|B|=k$ for all $B\in\mathcal{B}$. If each pair of points occurs in at least $\lambda$ blocks, then $(V,\mathcal{B})$ is a $(v,k,\lambda)$-{\it covering}. If each pair of points occurs in at most $\lambda$ blocks, then $(V,\mathcal{B})$ is a $(v,k,\lambda)$-{\it packing}. The classical bound for coverings is the Sch\"{o}nheim bound \cite{SCHON}, which states that, if $\mathcal{B}$ is a $\lambda$-covering with $b$ blocks, then \[b\geq C_{\lambda}\text{ where }C_{\lambda}:=\left\lceil\frac{v}{k}\left\lceil\frac{\lambda (v-1)}{k-1}\right\rceil\right\rceil,\] and the classical bound for packings is the Johnson bound \cite{JOHN}, which states that, if $\mathcal{B}$ is a $\lambda$-packing with $b$ blocks, then \[b\leq P_{\lambda}\text{ where }P_{\lambda}:=\left\lfloor\frac{v}{k}\left\lfloor\frac{\lambda (v-1)}{k-1}\right\rfloor\right\rfloor.\] \par In \cite{HOR}, Horsley showed that, in certain situations, these bounds could be improved. Denote the incidence matrix of a $(v,k,\lambda)$-covering resp. -packing by $M_{c}$ resp. $M_{p}$. Also note that, if $b$ is the number of blocks in $\mathcal{B}$, then \[ b\geq{\rm rank}(M)\geq{\rm rank}(MM^{T})\]where $M$ is either $M_{c}$ or $M_{p}$. \begin{lemma}\label{le4} {\rm \cite{HOR}} Let $v,k$ and $\lambda$ be positive integers such that $3\leq k<v$, and let $r$ and $d$ be the integers such that $\lambda(v-1)=r(k-1)-d$ and $0\leq d<k-1$. If $d<r-\lambda$, then \[ {\rm rank}(M_{c}M_{c}^{T})\geq C'_{\lambda}(r,d)\text{ where }C'_{\lambda}(r,d):=\left\lceil\frac{v(r+1)}{k+1}\right\rceil.\] \end{lemma} \begin{lemma}\label{le5} {\rm \cite{HOR}} Let $v,k$ and $\lambda$ be positive integers such that $3\leq k<v$, and let $r$ and $d$ be the integers such that $\lambda(v-1)=r(k-1)+d$ and $0\leq d<k-1$. 
If $d<r-\lambda$, then \[ b\leq P'_{\lambda}(r,d)\text{ where }P'_{\lambda}(r,d):=\left\lfloor\frac{v(r-1)}{k-1}\right\rfloor.\] \end{lemma} Again let $(V,\mathcal{B})$ be a $2$-$(v,k,\lambda)$ adesign with $b$ blocks. Clearly $(V,\mathcal{B})$ is a $(v,k,\lambda)$-covering and a $(v,k,\lambda+1)$-packing. Let $r_{1},d_{1}$ and $\lambda$ be defined, respectively, as $r,d$ and $\lambda$ were defined in Lemma \ref{le4}, and let $r_{2},d_{2}$ and $\lambda+1$ be defined, respectively, as $r,d$ and $\lambda$ were defined in Lemma \ref{le5}. Then, summing up the above discussion gives us \begin{equation}\label{eq9} C^{*}\leq b \leq P^{*}, \end{equation} where $C^{*}$ is equal to $C'_{\lambda}(r_{1},d_{1})$ if $r_{1}$ and $d_{1}$ satisfy the conditions of Lemma \ref{le4}, and equal to $C_{\lambda}$ otherwise, and $P^{*}$ is equal to $P'_{\lambda}(r_{2},d_{2})$ if $r_{2}$ and $d_{2}$ satisfy the conditions of Lemma \ref{le5}, and equal to $P_{\lambda}$ otherwise. \subsection{Constructions of $2$-adesigns} This section will also be concerned with constructions via adjacency matrices of strongly regular graphs, and we will assume much of the same notation used in Section \ref{sec3}. We open the discussion with the following simple construction. \par Let $A$ be the adjacency matrix of a strongly regular graph with parameters $(v,k,\lambda,\mu)$ where $\mu=\lambda+1$ or $\lambda+3$. By Lemma \ref{le00} we have that \begin{equation}\label{eq00C} (A+I)^{2}=(k+1)I+(\lambda+2)A+\mu(J-I-A).\end{equation} Then, by the regularity of the graph, (\ref{eqAB}) applies, and $(\mathcal{V}_{A},\mathcal{B}_{I+A})$ is a $2$-$(v,k+1,\lambda')$ adesign where $\lambda'=\lambda+1$ or $\lambda+2$. 
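This simple construction is easy to confirm on a small instance. The sketch below (Python) takes the Paley graph on $13$ vertices, an SRG with parameters $(13,6,2,3)$, so that $\mu=\lambda+1$; this particular instance is our own illustration, not drawn from the text. The blocks of $\mathcal{B}_{I+A}$ are the closed neighbourhoods, adjacent pairs are covered $\lambda+2=4$ times, non-adjacent pairs $\mu=3$ times, and the result is a $2$-$(13,7,3)$ adesign.

```python
from itertools import combinations

n = 13
squares = {(x * x) % n for x in range(1, n)}   # nonzero squares mod 13

# adjacency matrix A of the Paley graph on Z_13
A = [[1 if i != j and (i - j) % n in squares else 0 for j in range(n)]
     for i in range(n)]

# blocks of B_{I+A}: supports of the rows of A + I (closed neighbourhoods)
blocks = [{j for j in range(n) if A[i][j] == 1 or i == j} for i in range(n)]
assert all(len(B) == 7 for B in blocks)        # block size k + 1 = 7

# each pair of vertices lies in mu = 3 or lambda + 2 = 4 blocks,
# so (V_A, B_{I+A}) is a 2-(13,7,3) adesign
counts = {p: sum(set(p) <= B for B in blocks) for p in combinations(range(n), 2)}
assert set(counts.values()) == {3, 4}
```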
\begin{example}\label{ex00E} Strongly regular graphs with parameters of the form $(m^{2},d(m-1),d^{2}-3d+m,d(d-1))$ are called {\it pseudo-Latin square} type \cite{VAN}, and are known to exist whenever $m$ is an odd prime power and $2\leq d\leq m$, via orthogonal arrays (see Theorem 6.39 of \cite{STIN} and Section 2.5.2 of \cite{BEH}). Then, taking as $m$ the odd prime power $q$, and setting $d=\frac{q-1}{2}$, we can construct a strongly regular graph whose complementary graph has parameters $(q^{2},\frac{q^{2}+2q-3}{2},\frac{q^{2}+4q-9}{4},\frac{q^{2}+4q+3}{4})$. If $A$ is the adjacency matrix of such a strongly regular graph, then, by (\ref{eq00C}), we have that $(\mathcal{V}_{A},\mathcal{B}_{I+A})$ is a $2$-$(q^{2},\frac{q^{2}+2q-1}{2},\frac{q^{2}+4q-1}{4})$ adesign. \end{example} In the next constructions, we again consider unions of row supports of certain strongly regular graphs. \par Suppose that $A$ and $A'$ are the adjacency matrices of strongly regular graphs with parameters $(v,k,\lambda,\mu)$ where $\mu=\lambda+1$ or $\mu=\lambda-1$, and the property that either $A+A'$ is a $0$-$1$ matrix, or $A+A'+I$ is a $1$-$2$ matrix (i.e., all of the off-diagonal entries of $A+A'$ are either $1$ or $2$). By Lemma \ref{le00} we have that \begin{equation}\label{eq001} \begingroup\setlength\arraycolsep{2pt}\def\arraystretch{1.0}\begin{pmatrix} A & A' \end{pmatrix}\endgroup \begingroup\setlength\arraycolsep{2pt}\def\arraystretch{1.0}\begin{pmatrix} A \\ A' \end{pmatrix}\endgroup = A^{2} + (A')^{2} = 2kI+(\lambda-\mu)(A+A')+2\mu(J-I). \end{equation} Thus, it follows from (\ref{eqAB}) by the regularity of the graphs, and the fact that either $A+A'$ is a $0$-$1$ matrix or $A+A'+I$ is a $1$-$2$ matrix, that $(\mathcal{V}_{A},\mathcal{B}_{A}\cup\mathcal{B}_{A'})$ is a $2$-$(v,k,\lambda')$ adesign where \[ \lambda'=\begin{cases} 2\mu\text{ or }2\mu-1,&\text{ if }A+A'\text{ is a } 0\text{-}1\text{ matrix,}\\
2\mu+1\text{ or }2\mu-2,&\text{ if }A+A'+I\text{ is a } 1\text{-}2\text{ matrix.}\end{cases}\] Indeed, if $\mu=\lambda+1$, and $A+A'$ is a $0$-$1$ matrix, then (\ref{eq001}) implies that any pair of distinct points of $\mathcal{V}_{A}$ (since $A$ and $A'$ have the same dimensions, we may assume that $\mathcal{V}_{A}=\mathcal{V}_{A'}$) appears in either $2\mu-1$ or $2\mu$ blocks of $\mathcal{B}_{A}\cup\mathcal{B}_{A'}$. The other cases can be checked in a similar way. \par Now suppose that $A$ and $A'$ are the adjacency matrices of strongly regular graphs with parameters $(v,k,\lambda,\mu)$ where $\mu=\lambda+1$ or $\mu=\lambda+3$, and the property that either $A+A'$ is a $0$-$1$ matrix, or $A+A'+I$ is a $1$-$2$ matrix. By Lemma \ref{le00} we have \begin{equation}\label{eq002} \begingroup\setlength\arraycolsep{2pt}\def\arraystretch{1.0}\begin{pmatrix} A+I & A'+I \end{pmatrix}\endgroup \begingroup\setlength\arraycolsep{2pt}\def\arraystretch{1.0}\begin{pmatrix} A+I \\ A'+I \end{pmatrix}\endgroup = (A+I)^{2} + (A'+I)^{2} = 2(k+1)I+(\lambda-\mu+2)(A+A')+2\mu(J-I). \end{equation} Thus, it follows from (\ref{eqAB}) by regularity of the graphs and the fact that either $A+A'$ is a $0$-$1$ matrix or $A+A'+I$ is a $1$-$2$ matrix, that $(\mathcal{V}_{A},\mathcal{B}_{A+I}\cup\mathcal{B}_{A'+I})$ is a $2$-$(v,k+1,\lambda')$ adesign where \[ \lambda'=\begin{cases} 2\mu\text{ or }2\mu-1,&\text{ if }A+A'\text{ is a }0\text{-}1\text{ matrix,}\\
2\mu+1\text{ or }2\mu-2,&\text{ if }A+A'+I\text{ is a }1\text{-}2\text{ matrix.}\end{cases}\] Indeed, if $\mu=\lambda+3$, and $A+A'+I$ is a $1$-$2$ matrix, then (\ref{eq002}) implies that any pair of distinct points of $\mathcal{V}_{A}$ ($=\mathcal{V}_{A'}$) appears in either $2\mu-2$ or $2\mu-1$ blocks of $\mathcal{B}_{A+I}\cup\mathcal{B}_{A'+I}$. The other cases can be checked in a similar way. \begin{example}\label{ex01E} Let $q$ be an odd prime power. Let $G=(\mathbb{F}_{q},+)\times(\mathbb{F}_{q},+)$, and define \[ D=\{(a,b)\in G\mid a\text{ and }b\text{ are both squares or both nonsquares}\},\]and \[ \tilde{D}=\{(a,b)\in G\mid \text{one of }a,b\text{ is square and the other is nonsquare}\}.\] By Lemma \ref{le00A} (see Appendix), both $D$ and $\tilde{D}$ are partial difference sets with parameters \\$(q^{2},\frac{q^{2}-2q+1}{2},\frac{q^{2}-4q+7}{4},\frac{q^{2}-4q+3}{4})$. Thus, the incidence matrices $A$ resp. $A'$ of $(G,Dev(D))$ resp. $(G,Dev(\tilde{D}))$ are the adjacency matrices of strongly regular graphs where $A+A'$ is a $0$-$1$ matrix, and $2J-I-(A+A')$ is a $1$-$2$ matrix. Then, from the above assertions, it follows that \begin{enumerate}[(i)] \item $(G,Dev(D)\cup Dev(\tilde{D}))$ is a $2$-$(q^{2},\frac{q^{2}-2q+1}{2},\frac{q^{2}-4q+3}{2})$ adesign, and \item $(G,Dev(\overline{D})\cup Dev(\overline{\tilde{D}}))$ is a $2$-$(q^{2},\frac{q^{2}+2q-1}{2},\frac{q^{2}+4q-1}{2})$ adesign.\end{enumerate} In particular, if $q=5$, so that \[ D=\{(2,3),(1,4),(3,2),(4,1),(3,3),(1,1),(4,4),(2,2)\}\] and \[ \tilde{D}=\{(4,3),(1,2),(1,3),(3,1),(2,1),(2,4),(3,4),(4,2)\},\]then the incidence structure $(\mathbb{F}_{5}\times\mathbb{F}_{5},Dev(D)\cup Dev(\tilde{D}))$ is a $2$-$(25,8,4)$ adesign with 50 blocks. \end{example} The above discussion is summarized in the following.
\begin{theorem}\label{th001} Let $A$ and $A'$ be the adjacency matrices of strongly regular graphs with parameters $(v,k,\lambda,\mu)$ and the property that $A+A'$ is a $0$-$1$ matrix or $A+A'+I$ is a $1$-$2$ matrix. \begin{enumerate}[(i)] \item If $\mu=\lambda+1$ or $\lambda-1$, then $(\mathcal{V}_{A},\mathcal{B}_{A}\cup\mathcal{B}_{A'})$ is a $2$-$(v,k,\lambda')$ adesign, and \item if $\mu=\lambda+1$ or $\lambda+3$, then $(\mathcal{V}_{A},\mathcal{B}_{A+I}\cup\mathcal{B}_{A'+I})$ is a $2$-$(v,k+1,\lambda')$ adesign,\end{enumerate} where \[ \lambda'=\begin{cases} 2\mu\text{ or }2\mu-1,&\text{ if }A+A'\text{ is a } 0\text{-}1\text{ matrix,}\\
2\mu+1\text{ or }2\mu-2,&\text{ if }A+A'+I\text{ is a } 1\text{-}2\text{ matrix.}\end{cases}\] \end{theorem} \begin{remark}\label{re01} The strongly regular graphs of pseudo-Latin square type mentioned in Example \ref{ex00E}, or those constructed by Pasechnik in \cite{PAS}, or either of their complements, have parameters which make them good candidates for satisfying the conditions of either part of Theorem \ref{th001}. However, realizing two distinct graphs of either one of these types whose incidence matrices $A$ and $A'$ are such that $A+A'$ is a $0$-$1$ matrix, or $A+A'+I$ is a $1$-$2$ matrix, seems difficult in general. We leave this as an open problem. \end{remark} Next we show how the concepts of derived and residual designs can be applied to certain adesigns. \begin{theorem}\label{th004} Let $A$ be the adjacency matrix of a Paley type $(v,k,\lambda,\lambda+1)$ strongly regular graph. Fix a row of $A$ and let $R$ denote its support. Define \[\mathcal{B}=\{R\cap S\mid S \ (\ne R)\text{ is the support of a row of }A\},\] and let $\mathcal{B}_{\infty}$ denote the set containing all members of $\mathcal{B}$ of size $\lambda+1$, and all members of $\mathcal{B}$ of size $\lambda$ modified by adjoining the point $\infty$. Then $(R\cup \{\infty\},\mathcal{B}_{\infty})$ is a $2$-$(k+1,\lambda+1,\lambda-1)$ adesign. \end{theorem} \begin{proof} If $x,y\in R$ are distinct, then the number of members of $\mathcal{B}$ in which they appear together, which is either $\lambda$ or $\lambda+1$, is also the number of members of $\mathcal{B}_{\infty}$ in which they appear together. We want to count the number of members of $\mathcal{B}_{\infty}$ in which $x$ and $\infty$ appear together. Notice there are $k-1$ members $B_{1},...,B_{k-1}\in\mathcal{B}_{\infty}$ containing $x$. The number of $B_{i}$'s also containing $\infty$ is the number of common neighbors of the two vertices corresponding to $R$ and $x$. 
Since $x\in R$, these two vertices are adjacent, whence the number of common neighbors is $\lambda$. \end{proof} The proof of the following corollary is a simple application of the principle of inclusion and exclusion, and so is omitted. \begin{corollary} Let $A$ be the adjacency matrix of a Paley type $(v,k,\lambda,\lambda+1)$ strongly regular graph. Fix a row of $A$ and let $R$ denote its support. Let $\mathcal{B}_{\overline{\infty}}$ be the set of complements in $\mathcal{V}_{A}\cup \{\infty\}$ of members of $\mathcal{B}_{\infty}$. Then $(R\cup \{\infty\},\mathcal{B}_{\overline{\infty}})$ is a $2$-$(k+1,\lambda+2,\lambda+1)$ adesign. \end{corollary} \begin{example} Let $q\equiv 1 \ ({\rm mod} \ 4)$ be a prime power and let $D\subseteq \mathbb{F}_{q}$ be the quadratic residues. If we take $\mathcal{B}=Dev(D)$ then, by Theorem \ref{th004}, $(D\cup\{\infty\},\mathcal{B}_{\infty})$ is a $2$-$(\frac{q+1}{2},\frac{q-1}{4},\frac{q-9}{4})$ adesign with $q-1$ blocks. Moreover, since $\lfloor\frac{vr}{k}\rfloor = q+1$, where $r=\frac{q-1}{2}$, any such adesign is two blocks short of meeting the Johnson bound for packings (See Section \ref{secB}). \end{example}
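Theorem \ref{th004} can likewise be checked numerically. The following Python sketch (our own illustration; the string "inf" stands in for the adjoined point $\infty$) runs the construction on the Paley graph of order $13$, where $\lambda=2$:

```python
import itertools

# Paley graph on 13 vertices: a Paley type (v, k, lam, lam+1) strongly
# regular graph with (v, k, lam, mu) = (13, 6, 2, 3).
q, lam = 13, 2
D = {(x * x) % q for x in range(1, q)}          # row 0 of A has support R = D
R = frozenset(D)
supports = [frozenset((d + i) % q for d in D) for i in range(1, q)]  # other rows
INF = "inf"                                      # stands in for the point oo
B_inf = []
for S in supports:
    inter = R & S
    if len(inter) == lam + 1:                    # keep size-(lam+1) intersections
        B_inf.append(inter)
    elif len(inter) == lam:                      # adjoin oo to size-lam ones
        B_inf.append(inter | {INF})
points = sorted(R) + [INF]
pair_counts = {p: sum(1 for B in B_inf if p[0] in B and p[1] in B)
               for p in itertools.combinations(points, 2)}
print(sorted(set(pair_counts.values())))         # -> [1, 2]
```

Every pair of $R\cup\{\infty\}$ lies in $\lambda-1=1$ or $\lambda=2$ of the $q-1=12$ blocks, i.e., a $2$-$(7,3,1)$ adesign, as the theorem predicts.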
Let $(V,\mathcal{B})$ be an incidence structure. Let $p\in V$ and define $\mathcal{B}_{p}=\{B\setminus\{p\}\mid p\in B,B\in\mathcal{B}\}$. The incidence structure $(V\setminus\{p\},\mathcal{B}_{p})$ is called the {\it contraction} of $(V,\mathcal{B})$ at the point $p$. We can obtain new symmetric $2$-adesigns by contracting on a point of any one of the $3$-adesigns constructed in Section \ref{sec3}. It is easy to see that contracting at points of a $3$-adesign will result in a $2$-adesign as long as not all $3$-subsets of points containing the contracted point occur in the same number of blocks of the contraction. \begin{remark}\label{re1} Let $(V,\mathcal{B})$ be any one of the $3$-$(v,k,\lambda)$ adesigns constructed in Section \ref{sec3}, and let $p$ be any point of $V$. It is clear from their proofs that contracting at $p$ gives a $2$-$(v-1,k-1,\lambda)$ adesign since we can always find a pair $x,y\in V\setminus\{p\}$ such that $x,y$ and $p$ appear together in $\lambda$ blocks of $\mathcal{B}$, and we can also find another pair $x',y'\in V\setminus\{p\}$ such that $x',y'$ and $p$ appear together in $\lambda+1$ blocks of $\mathcal{B}$. Moreover, contracting at a point $p$ of $(V,\mathcal{B})$ gives a $\lambda$-covering which meets the bound given in Lemma \ref{le4}, i.e., it is a {\em minimal} $\lambda$-{\em covering}. \end{remark} \begin{example} Let $q$ be an odd prime power and let $D$ and $\tilde{D}$ be defined as in Example \ref{ex01E}. Denote $\mathbb{F}_{q}\times\mathbb{F}_{q}$ by $V$ and $Dev(D\cup (\mathbb{F}_{q}\times\{0\}))\cup Dev(\tilde{D}\cup (\{0\}\times \mathbb{F}_{q}))$ by $\mathcal{B}$. It was shown in \cite{ZHANG} that $D\cup (\mathbb{F}_{q}\times\{0\})$ is a Paley type partial difference set in $V$, whence by Theorem \ref{th5}, the incidence structure $(V,\mathcal{B})$ is a $3$-$(q^{2},\frac{q^{2}+1}{2},\frac{q^{2}-1}{4})$ adesign with $2q^{2}$ blocks.
If we contract at the point $(0,0)$, then we get the incidence structure $(V\setminus\{(0,0)\},\mathcal{B}_{(0,0)})$, which, by Remark \ref{re1}, is a $2$-$(q^{2}-1,\frac{q^{2}-1}{2},\frac{q^{2}-1}{4})$ adesign with $b=q^{2}+1$ blocks. Now let $r=\frac{q^{2}+1}{2}$ and $d=\frac{q^{2}-5}{4}$. Then with $v=q^{2}-1,k=\frac{q^{2}-1}{2}$ and $\lambda=\frac{q^{2}-1}{4}$, we have that $\lambda(v-1)=r(k-1)-d$ with $0\leq d<r-\lambda$, and $r$ and $d$ satisfy the conditions of Lemma \ref{le4}. Then, since $(q^{2}-1)(q^{2}+3)/(q^{2}+1)>q^{2}$, we have \[ \left\lceil \frac{v(r+1)}{(k+1)}\right\rceil=\left\lceil \frac{(q^{2}-1)(q^{2}+3)}{(q^{2}+1)}\right\rceil=q^{2}+1=b,\]i.e., $b$ meets the bound given in Lemma \ref{le4}, and $(V\setminus\{(0,0)\},\mathcal{B}_{(0,0)})$ is a minimal $\lambda$-covering. \end{example} Our next construction is a modification of Bose's construction of Steiner triple systems \cite{BOS2}. \begin{theorem} Let $n>3$ be an odd integer, let $G=\mathbb{Z}_{n}\times\mathbb{Z}_{3}$, and let ``$<$'' be any total ordering on $\mathbb{Z}_{n}$ (e.g. $0<1<\cdots<n-1$). Define \begin{eqnarray*} \mathcal{B} &=&\left\{\{(a,i),(b,i),(\frac{n+1}{2}(a+b),i+1)\}\bigg\vert a,b\in\mathbb{Z}_{n},a<b,i\in\mathbb{Z}_{3}\right\}\\ & &\cup\left\{\{(a,i),(b-1,i),(\frac{n+1}{2}(a+b),i+1)\}\bigg\vert a,b\in\mathbb{Z}_{n},a\not<b,a\neq b-1,i\in\mathbb{Z}_{3}\right\}\\ & &\cup\bigg\{\{(a,0),(a,1),(a,2)\}\bigg\vert a\in\mathbb{Z}_{n}\bigg\}.\end{eqnarray*}Then $(G,\mathcal{B})$ is a $2$-$(3n,3,1)$ adesign (with $3n^{2}-2n$ blocks). \end{theorem} \begin{proof} Let $(\alpha,j),(\beta,k)\in G$. It is clear that each block is incident with three points. If $\alpha=\beta$ then the pair occurs in the block $\{(\alpha,0),(\alpha,1),(\alpha,2)\}$ or in the block $\{(\alpha,j),(\alpha-1,j),(\alpha,j+1)\}$ ($=\{(\alpha,j),(\beta-1,j),((n+1)/2\cdot(\alpha+\beta),j+1)\}$) and in no other block. \par Now assume $\alpha\neq \beta$. Without loss of generality, we can assume that $\alpha<\beta$.
There are three cases depending on the residues of $k$ and $j$ modulo $3$: \newline {\bf (i)} If $k=j$, the pair $(\alpha,j),(\beta,k)$ occurs in the block $\{(\alpha,k),(\beta,k),((n+1)/2\cdot(\alpha+\beta),k+1)\}$ and in no other block.\newline {\bf (ii)} If $k=j+1 \ ({\rm mod} \ 3)$, the equation $\frac{n+1}{2}(x+\alpha)=\beta$ has a unique solution $x=\gamma$. Since $\frac{n+1}{2}(a+a)=a$ for all $a\in\mathbb{Z}_{n}$, i.e., the binary operation $f(a,b):=\frac{n+1}{2}(a+b)$ is idempotent, and $\alpha\neq\beta$, we have $\alpha\neq\gamma$. If $\gamma<\alpha$, the pair $(\alpha,j),(\beta,k)$ occurs in the block $\{(\alpha,j),(\gamma,j),(\beta,k)\}$ ($=\{(\alpha,j),(\gamma,j),((n+1)/2\cdot(\alpha+\gamma),j+1)\}$), as well as in the block $\{(\alpha,j),(\gamma-1,j),(\beta,k)\}$ ($=\{(\alpha,j),(\gamma-1,j),((n+1)/2\cdot(\alpha+\gamma),j+1)\}$), and in no other block. If $\alpha<\gamma$ the pair occurs in the block $\{(\alpha,j),(\gamma,j),(\beta,k)\}$ and in no other block.\newline {\bf (iii)} If $j=k+1 \ ({\rm mod} \ 3)$ then the equation $\frac{n+1}{2}(x+\beta)=\alpha$ has a unique solution $x=\gamma$. Since the binary operation $f(a,b):=\frac{n+1}{2}(a+b)$ is idempotent, and $\alpha\neq\beta$, we have $\gamma\neq\beta$. If $\gamma<\beta$ then the pair $(\alpha,j),(\beta,k)$ occurs in the block $\{(\gamma,k),(\beta,k),(\alpha,j)\}$ ($=\{(\gamma,k),(\beta,k),((n+1)/2\cdot(\gamma+\beta),k+1)\}$), as well as in the block $\{(\beta,k),(\gamma-1,k),(\alpha,j)\}$ ($=\{(\beta,k),(\gamma-1,k),((n+1)/2\cdot(\beta+\gamma),k+1)\}$), and in no other block. If $\gamma>\beta$ then the pair occurs in the block $\{(\beta,k),(\gamma,k),(\alpha,j)\}$ and in no other block.\newline This completes the proof. \end{proof} \section{Concluding Remarks} In this correspondence we investigated $t$-adesigns and their links with other combinatorial objects such as balanced incomplete block designs, coverings and packings.
We first considered the question of when a $(t+1)$-adesign is a $t$-design or a $t$-adesign, and then we constructed several classes of $3$-adesigns which are, in fact, balanced incomplete block designs. We have also discussed some of the restrictions on the possible sets of feasible parameters for both $2$-adesigns and $3$-adesigns. The $2$-adesigns we constructed have new parameters, and some of them have the interesting property that they are minimal $\lambda$-coverings. We leave the reader with the following open problems: (1) We have yet to find an example of a $t$-adesign which is not a $(t-1)$-design. Must a $t$-adesign always be a $(t-1)$-design? (2) In Section \ref{sec2} we made an initial investigation into when a $t$-adesign is either a $(t-1)$-adesign or a $(t-1)$-design. Is it possible to formulate necessary and sufficient conditions for a $t$-adesign to be a $(t-1)$-design? (3) Do the strongly regular graphs mentioned in Remark \ref{re01} (or any other strongly regular graphs not necessarily coming from partial difference sets) satisfy the conditions of either part of Theorem \ref{th001}? \section*{Acknowledgment} The authors are very grateful to the three anonymous referees and to the Coordinating Editor for all of their detailed comments that greatly improved the quality and the presentation of this paper. \section*{Appendix}
We will need some facts about cyclotomic classes and cyclotomic numbers. Let $q=ef+1$ be a prime power, and $\gamma$ a primitive element of the finite field $\mathbb{F}_{q}$ with $q$ elements. The {\it cyclotomic classes} of order $e$ are given by $D_{i}^{(e,q)}=\gamma^{i}\langle \gamma^{e} \rangle$ for $i=0,1,...,e-1$. The {\it cyclotomic numbers of order $e$} are given by $(i,j)_{e}=|D_{i}^{(e,q)}\cap (D_{j}^{(e,q)}+1)|$. It is obvious that there are at most $e^{2}$ different cyclotomic numbers of order $e$. When it is clear from the context, we will simply denote $(i,j)_{e}$ by $(i,j)$. \par We will need to use the cyclotomic numbers of order $2$. \begin{lemma}\label{le2} {\rm \cite{STO}} For a prime power $q$, if $q \equiv 1$ (mod 4), then the cyclotomic numbers of order two are given by \begin{eqnarray*} (0,0) & = & \frac{q-5}{4}, \\ (0,1) & = & (1,0) = (1,1) = \frac{q-1}{4}. \end{eqnarray*} If $q \equiv 3$ (mod 4) then the cyclotomic numbers of order two are given by \begin{eqnarray*} (0,1) & = & \frac{q+1}{4}, \\ (0,0) & = & (1,0) = (1,1) = \frac{q-3}{4}. \end{eqnarray*} \end{lemma} \begin{lemma}\label{le00A} Let $q$ be an odd prime power. Let $G=(\mathbb{F}_{q},+)\times(\mathbb{F}_{q},+)$, and define \[ D=\{(a,b)\in G\mid a\text{ and }b\text{ are both squares or both nonsquares}\},\]and \[ \tilde{D}=\{(a,b)\in G\mid \text{one of }a,b\text{ is square and the other is nonsquare}\}.\]Then both $D$ and $\tilde{D}$ are $(q^{2},\frac{q^{2}-2q+1}{2},\frac{q^{2}-4q+7}{4},\frac{q^{2}-4q+3}{4})$ partial difference sets. \end{lemma} \begin{proof} The case for $D$ was shown in \cite{ZHANG}. To show this for $\tilde{D}$, we count the number of solutions to the equation \begin{equation}\label{eq5}(a,b)=(a_{1},b_{1})-(a_{2},b_{2}),\end{equation} where $(a_{1},b_{1}),(a_{2},b_{2})\in \tilde{D}$. We use a method similar to that used in \cite{ZHANG}. \newline
Assume that $a$ and $b$ are both square. If $a_{1}$ and $a_{2}$ are square and $b_{1}$ and $b_{2}$ are nonsquare, then, using Lemma \ref{le2}, the number of solutions to (\ref{eq5}) is $(0,0)_{2}(1,1)_{2}$. There are three other cases depending on which of $a_{1},a_{2},b_{1}$ and $b_{2}$ are square and which are nonsquare, and the number of solutions to (\ref{eq5}), as we run over these other possibilities, is one of $(0,1)_{2}(1,0)_{2},(1,0)_{2}(0,1)_{2}$ or $(1,1)_{2}(0,0)_{2}$. Summing over all four possibilities, the total number of solutions to (\ref{eq5}) when $a$ and $b$ are both square is $\frac{q^{2}-4q+3}{4}$ (regardless of the residue of $q$ modulo $4$).
\par
The other three cases where neither $a$ nor $b$ is zero can be argued similarly. When $a$ and $b$ are both nonsquare, the total number of solutions to (\ref{eq5}) is $\frac{q^{2}-4q+3}{4}$, and when one of $a$ and $b$ is square and the other is nonsquare, the total number of solutions is $\frac{q^{2}-4q+7}{4}$. If $a\neq0$ and $b=0$ then (\ref{eq5}) becomes $(a_{2}a^{-1},b_{2})+(1,0)=(a_{1}a^{-1},b_{1})$ which, using Lemma \ref{le2} again, has $((0,0)_{2}+(1,1)_{2})\frac{q-1}{2}=\frac{q^{2}-4q+3}{4}$ solutions. A similar argument shows that when $a=0$ and $b\neq0$ the number of solutions to (\ref{eq5}) is again $\frac{q^{2}-4q+3}{4}$. Thus each member of $\tilde{D}$ appears as a difference of two distinct members of $\tilde{D}$ exactly $\frac{q^{2}-4q+7}{4}$ times, and each member of $G\setminus(\tilde{D}\cup\{(0,0)\})$ appears as a difference of two distinct members of $\tilde{D}$ exactly $\frac{q^{2}-4q+3}{4}$ times. This completes the proof. \end{proof}
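As a numerical check of the proof above (our own illustration, in Python), for $q=5$ each element of $\tilde{D}$ arises $\frac{q^{2}-4q+7}{4}=3$ times as a difference of two distinct elements of $\tilde{D}$, and each other nonzero element of $G$ arises $\frac{q^{2}-4q+3}{4}=2$ times:

```python
from collections import Counter
from itertools import product

q = 5
sq = {(x * x) % q for x in range(1, q)}      # nonzero squares mod 5: {1, 4}
nsq = set(range(1, q)) - sq                  # nonsquares: {2, 3}
Dt = {(a, b) for a, b in product(range(q), repeat=2)
      if (a in sq and b in nsq) or (a in nsq and b in sq)}
diffs = Counter(((a1 - a2) % q, (b1 - b2) % q)
                for (a1, b1) in Dt for (a2, b2) in Dt if (a1, b1) != (a2, b2))
lam = (q * q - 4 * q + 7) // 4               # = 3, count for members of Dt
mu = (q * q - 4 * q + 3) // 4                # = 2, count for other nonzero elements
others = set(product(range(q), repeat=2)) - Dt - {(0, 0)}
print(all(diffs[d] == lam for d in Dt), all(diffs[d] == mu for d in others))
# -> True True
```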
\end{document}
\begin{document}
\title{\LARGE \bf
Stochastic Optimization for Deep CCA
\\ via Nonlinear Orthogonal Iterations
} \thispagestyle{empty} \pagestyle{empty}
\begin{abstract}
Deep CCA is a recently proposed deep neural network extension to the traditional canonical correlation analysis (CCA), and has been successful for multi-view representation learning in several domains. However, stochastic optimization of the deep CCA objective is not straightforward, because it does not decouple over training examples. Previous optimizers for deep CCA are either
batch-based algorithms or stochastic optimization using large minibatches, which can have high memory consumption. In this paper, we tackle the problem of stochastic optimization for deep CCA with small minibatches, based on an iterative solution to the CCA objective, and show that we can achieve as good performance as previous optimizers and thus alleviate the memory requirement.
\end{abstract}
\section{Introduction} \label{s:intro}
Stochastic gradient descent (SGD) is a fundamental and popular optimization method for machine learning problems~\cite{Bottou91a,Lecun_98b,Bottou04a,Zhang04b,Bertsek11a}. SGD is particularly well-suited for large-scale machine learning problems because it is extremely simple and easy to implement, it often achieves better generalization (test) performance (which is the focus of machine learning research) than sophisticated batch algorithms, and it usually achieves large error reduction very quickly in a small number of passes over the training set~\cite{BottouBousquet08a}. One intuitive explanation for the empirical success of stochastic gradient descent for large data is that it makes better use of data redundancy, with an extreme example given by \cite{Lecun_98b}: If the training set consists of $10$ copies of the same set of examples, then computing an estimate of the gradient over one single copy is $10$ times more efficient than computing the full gradient over the entire training set, while achieving the same optimization progress in the following gradient descent step.
At the same time, ``multi-view'' data are becoming increasingly available, and methods based on canonical correlation analysis (CCA)~\cite{Hotell36a} that use such data to learn representations (features) form an active research area. The views can be multiple measurement modalities, such as simultaneously recorded audio + video~\cite{Kidron_05a,Chaudh_09a}, audio + articulation~\cite{AroraLivesc13a}, images + text~\cite{Hardoon_04a,SocherLi10a,Hodosh_13a}, or parallel text in two languages~\cite{Vinokour_03a,Haghig_08a,Chandar_14a,FaruquiDyer14a,Lu_15a}, but may also be different information extracted from the same source, such as words + context~\cite{Pennin_14a} or document text + text of inbound hyperlinks~\cite{BickelScheff04a}. The presence of multiple information sources presents an opportunity to learn better representations (features) by analyzing multiple views simultaneously. Among various multi-view learning approaches, the recently proposed deep canonical correlation analysis \cite{Andrew_13a}, which extends traditional CCA with deep neural networks (DNNs), has been shown to be advantageous over previous methods in several domains \cite{Wang_15a,Wang_15b,YanMikolaj15a}, and scales to large data better than its nonparametric counterpart kernel CCA~\cite{LaiFyfe00a,BachJordan02a,Hardoon_04a}.
In contrast with most DNN-based methods, the objective of deep CCA couples together all of the training examples due to its whitening constraint, making stochastic optimization challenging. Previous optimizers for this model are
batch-based, e.g., limited-memory BFGS (L-BFGS) \cite{Nocedal80a} as in \cite{Andrew_13a}, or stochastic optimization with large minibatches~\cite{Wang_15a}, because it is difficult to obtain an accurate estimate of the gradient with a small subset of the training examples (again due to the whitening constraint). As a result, these approaches have high memory complexity and may not be practical for large DNN models with hundreds of millions of weight parameters (common with web-scale data~\cite{Dean_12a}), or if one would like to run the training procedure on GPUs which are equipped with faster but smaller (more expensive) memory than CPUs. In such cases there is not enough memory to save all intermediate hidden activations of the batch/large minibatch used in error backpropagation.
In this paper, we tackle this problem with two key ideas. First, we reformulate the CCA solution with orthogonal iterations, and embed the DNN parameter training in the orthogonal iterations with a nonlinear least squares regression objective, which naturally decouples over training examples. Second, we use adaptive estimates of the covariances used by the CCA whitening constraints and carry out whitening \emph{only} for the minibatch used at each step to obtain training signals for the DNNs. This results in a stochastic optimization algorithm that can operate on small minibatches and thus consume little memory. Empirically, the new stochastic optimization algorithm performs as well as previous optimizers in terms of convergence speed, even when using small minibatches with which the previous stochastic approach makes no training progress.
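The second idea can be sketched as follows (a minimal NumPy illustration of per-minibatch whitening with adaptively estimated covariances; the state initialization, forgetting factor \texttt{rho}, and ridge term \texttt{eps} are our own illustrative assumptions, and the embedding into the nonlinear least-squares regression of the first idea is omitted):

```python
import numpy as np

def whiten_minibatch(F_b, G_b, state, rho=0.99, eps=1e-4):
    """Whiten one minibatch using adaptively estimated auto-covariances.

    F_b, G_b: centered minibatch outputs of the two networks (d x b).
    state:    running covariance estimates, updated in place.
    rho, eps: forgetting factor and ridge term (illustrative values,
              not taken from the paper).
    """
    b = F_b.shape[1]
    state["Sff"] = rho * state["Sff"] + (1 - rho) * (F_b @ F_b.T) / b
    state["Sgg"] = rho * state["Sgg"] + (1 - rho) * (G_b @ G_b.T) / b

    def inv_sqrt(S):
        # S^{-1/2} via eigendecomposition of the regularized covariance
        w, Q = np.linalg.eigh(S + eps * np.eye(S.shape[0]))
        return (Q * w ** -0.5) @ Q.T

    # whitening is applied only to the current minibatch
    return inv_sqrt(state["Sff"]) @ F_b, inv_sqrt(state["Sgg"]) @ G_b

rng = np.random.default_rng(0)
state = {"Sff": np.eye(3), "Sgg": np.eye(3)}
Fw, Gw = whiten_minibatch(rng.standard_normal((3, 8)),
                          rng.standard_normal((3, 8)), state)
print(Fw.shape, Gw.shape)   # (3, 8) (3, 8)
```

The whitened minibatch projections would then serve as regression targets for the opposite view's network, so memory usage scales with the minibatch size rather than the training set size.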
In the following sections, we briefly introduce deep CCA and discuss the difficulties in training it (Section~\ref{s:dcca}); motivate and propose our new algorithm (Section~\ref{s:algorithm}); describe related work (Section~\ref{s:related}); and present experimental results comparing different optimizers (Section~\ref{s:experiments}).
\section{Deep CCA} \label{s:dcca}
\noindent\textbf{Notation.} In the multi-view feature learning setting, we have access to paired observations from two views, denoted $\{(\ensuremath{\mathbf{x}}_1,\ensuremath{\mathbf{y}}_1),\dots,(\ensuremath{\mathbf{x}}_N,\ensuremath{\mathbf{y}}_N)\}$, where $N$ is the training set size, $\ensuremath{\mathbf{x}}_i\in \ensuremath{\mathbb{R}}^{D_x}$ and $\ensuremath{\mathbf{y}}_i\in \ensuremath{\mathbb{R}}^{D_y}$ for $i=1,\dots,N$. We also denote the data matrices for View 1 and View 2 by $\ensuremath{\mathbf{X}}=[\ensuremath{\mathbf{x}}_1,\dots,\ensuremath{\mathbf{x}}_N]$ and $\ensuremath{\mathbf{Y}}=[\ensuremath{\mathbf{y}}_1,\dots,\ensuremath{\mathbf{y}}_N]$, respectively. We use bold-face letters, e.g.~$\ensuremath{\mathbf{f}}$, to denote mappings implemented by DNNs, with a corresponding set of learnable parameters, denoted, e.g., $\ensuremath{\mathbf{W}}_\ensuremath{\mathbf{f}}$. The dimensionality of the
learned features is denoted $L$.
\begin{figure}
\caption{Schematic diagram of deep canonical correlation analysis.}
\label{f:dcca}
\end{figure}
Deep CCA (DCCA)~\cite{Andrew_13a} extends (linear) CCA~\cite{Hotell36a} by extracting $d_x$- and $d_y$-dimensional nonlinear features with two DNNs $\ensuremath{\mathbf{f}}$ and $\ensuremath{\mathbf{g}}$ for views 1 and 2 respectively, such that the canonical correlation (measured by CCA) between the DNN outputs is maximized, as illustrated in Fig.~\ref{f:dcca}. The goal of the final CCA is to find $L \le \min(d_x,d_y)$ pairs of linear projection vectors $\ensuremath{\mathbf{U}} \in \ensuremath{\mathbb{R}}^{d_x \times L} $ and $\ensuremath{\mathbf{V}} \in \ensuremath{\mathbb{R}}^{d_y \times L}$ such that the projections of each view (a.k.a.~canonical variables,~\cite{Hotell36a}) are maximally correlated with their counterparts in the other view, constrained such that the dimensions in the representation are uncorrelated with each other.
Formally, the DCCA objective can be written as\footnote{In this paper, we use the scaled covariance matrices (scaled by $N$) so that the dimensions of the projection are orthonormal and comply with the custom of orthogonal iterations.} \begin{gather} \label{e:dcca}
\max_{\ensuremath{\mathbf{W}}_\ensuremath{\mathbf{f}}, \ensuremath{\mathbf{W}}_\ensuremath{\mathbf{g}}, \ensuremath{\mathbf{U}}, \ensuremath{\mathbf{V}}} \quad \trace{\ensuremath{\mathbf{U}}^\top \ensuremath{\mathbf{F}} \ensuremath{\mathbf{G}}^\top \ensuremath{\mathbf{V}}} \\
\text{s.t.} \quad \ensuremath{\mathbf{U}}^\top \ensuremath{\mathbf{F}} \ensuremath{\mathbf{F}}^\top \ensuremath{\mathbf{U}} = \ensuremath{\mathbf{V}}^\top \ensuremath{\mathbf{G}} \ensuremath{\mathbf{G}}^\top \ensuremath{\mathbf{V}} = \ensuremath{\mathbf{I}}, \nonumber \end{gather} where $\ensuremath{\mathbf{F}}=\ensuremath{\mathbf{f}}(\ensuremath{\mathbf{X}})=[\ensuremath{\mathbf{f}}(\ensuremath{\mathbf{x}}_1),\dots,\ensuremath{\mathbf{f}}(\ensuremath{\mathbf{x}}_N)] \in \ensuremath{\mathbb{R}}^{d_x \times N}$ and $\ensuremath{\mathbf{G}}=\ensuremath{\mathbf{g}}(\ensuremath{\mathbf{Y}})=[\ensuremath{\mathbf{g}}(\ensuremath{\mathbf{y}}_1),\dots,\ensuremath{\mathbf{g}}(\ensuremath{\mathbf{y}}_N)] \in \ensuremath{\mathbb{R}}^{d_y \times N}$. We assume that $\ensuremath{\mathbf{F}}$ and $\ensuremath{\mathbf{G}}$ are centered at the origin for notational simplicity; if they are not, we can center them as a pre-processing operation. Notice that if we use the original input data without further feature extraction, i.e.~$\ensuremath{\mathbf{F}}=\ensuremath{\mathbf{X}}$ and $\ensuremath{\mathbf{G}}=\ensuremath{\mathbf{Y}}$, then we recover the CCA objective. In DCCA, the final features (projections) are \begin{gather}\label{e:concat}
\tilde{\f}(\ensuremath{\mathbf{x}})=\ensuremath{\mathbf{U}}^\top \ensuremath{\mathbf{f}}(\ensuremath{\mathbf{x}}) \qquad \text{and} \qquad \tilde{\g}(\ensuremath{\mathbf{y}})=\ensuremath{\mathbf{V}}^\top \ensuremath{\mathbf{g}}(\ensuremath{\mathbf{y}}). \end{gather} We observe that the last CCA step with linear projection mappings $\ensuremath{\mathbf{U}}$ and $\ensuremath{\mathbf{V}}$ can be considered as adding a linear layer on top of the feature extraction networks $\ensuremath{\mathbf{f}}$ and $\ensuremath{\mathbf{g}}$ respectively. In the following, we sometimes refer to the concatenated networks $\tilde{\f}$ and $\tilde{\g}$ as defined in \eqref{e:concat}, with $\ensuremath{\mathbf{W}}_{\tilde{\f}}=\{\ensuremath{\mathbf{W}}_\ensuremath{\mathbf{f}},\ensuremath{\mathbf{U}}\}$ and $\ensuremath{\mathbf{W}}_{\tilde{\g}}=\{\ensuremath{\mathbf{W}}_\ensuremath{\mathbf{g}},\ensuremath{\mathbf{V}}\}$.
\footnote{In principle there is no need for the final linear layer; we could define DCCA such that the correlation objective and constraints are imposed on the final nonlinear layer. However, the linearity of the final layer is crucial for algorithmic implementations such as ours.}
Let $\bSigma_{fg}= \ensuremath{\mathbf{F}} \ensuremath{\mathbf{G}}^\top$, $\bSigma_{ff}=\ensuremath{\mathbf{F}} \ensuremath{\mathbf{F}}^\top$ and $\bSigma_{gg}=\ensuremath{\mathbf{G}} \ensuremath{\mathbf{G}}^\top$ be the (scaled) cross- and auto-covariance matrices of the feature-mapped data in the two views. It is well-known that, when $\ensuremath{\mathbf{f}}$ and $\ensuremath{\mathbf{g}}$ are fixed, the last CCA step in \eqref{e:dcca} has a closed-form solution as follows. Define $\tilde{\bSigma}_{fg}=\bSigma_{ff}^{-\frac{1}{2}} \bSigma_{fg} \bSigma_{gg}^{-\frac{1}{2}}$, and let $\tilde{\bSigma}_{fg}=\tilde{\U} \Lambda \tilde{\V}^\top $ be its rank-$L$ singular value decomposition (SVD), where $\Lambda$ contains the singular values $\sigma_1 \ge \dots \ge \sigma_L \ge 0$ on its diagonal. Then the optimum of \eqref{e:dcca} is achieved by $(\ensuremath{\mathbf{U}},\ensuremath{\mathbf{V}})=(\bSigma_{ff}^{-\frac{1}{2}} \tilde{\U}, \bSigma_{gg}^{-\frac{1}{2}} \tilde{\V} )$, and the optimal objective value (the total canonical correlation) is $\sum_{j=1}^L \sigma_j$. By rewriting $\max (\cdot)$ as $-\min (-(\cdot))$, and adding $1/2$ times the constraints, it is straightforward to show that \eqref{e:dcca} is equivalent to the following: \begin{gather}\label{e:dcca2}
\min_{\ensuremath{\mathbf{W}}_\ensuremath{\mathbf{f}}, \ensuremath{\mathbf{W}}_\ensuremath{\mathbf{g}}, \ensuremath{\mathbf{U}}, \ensuremath{\mathbf{V}}} \quad \frac{1}{2} \norm{\ensuremath{\mathbf{U}}^\top \ensuremath{\mathbf{F}} - \ensuremath{\mathbf{V}}^\top \ensuremath{\mathbf{G}}}^2_F \\
\qquad \text{s.t.} \quad (\ensuremath{\mathbf{U}}^\top \ensuremath{\mathbf{F}}) (\ensuremath{\mathbf{U}}^\top \ensuremath{\mathbf{F}})^\top = (\ensuremath{\mathbf{V}}^\top \ensuremath{\mathbf{G}}) (\ensuremath{\mathbf{V}}^\top \ensuremath{\mathbf{G}})^\top = \ensuremath{\mathbf{I}}. \nonumber \end{gather} In other words, CCA minimizes the squared difference between the projections of the two views, subject to the whitening constraints. This alternative formulation of CCA will also shed light on our proposed algorithm for DCCA.
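The closed-form solution and the equivalence of the two formulations can be checked numerically. The following NumPy sketch (our own illustration on made-up synthetic data, not the paper's implementation) computes the CCA step for fixed features and verifies that, at the optimum, the whitening constraints hold and the squared-difference objective equals $L-\sum_{j=1}^L \sigma_j$:

```python
# A minimal NumPy sketch (ours, not the paper's code) of the closed-form
# CCA step for fixed features F, G, plus a check of the equivalence above.
import numpy as np

def inv_sqrt(S):
    """Inverse matrix square root of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(w ** -0.5) @ V.T

def cca_closed_form(F, G, L):
    """Top-L CCA of centered feature matrices F (d_x x N) and G (d_y x N)."""
    T = inv_sqrt(F @ F.T) @ (F @ G.T) @ inv_sqrt(G @ G.T)  # Sigma_tilde_fg
    Ut, s, Vh = np.linalg.svd(T)
    U = inv_sqrt(F @ F.T) @ Ut[:, :L]
    V = inv_sqrt(G @ G.T) @ Vh[:L].T
    return U, V, s[:L].sum()   # projections and total canonical correlation

rng = np.random.default_rng(0)
Z = rng.standard_normal((2, 500))                        # shared latent signal
F = rng.standard_normal((5, 2)) @ Z + 0.1 * rng.standard_normal((5, 500))
G = rng.standard_normal((4, 2)) @ Z + 0.1 * rng.standard_normal((4, 500))
F -= F.mean(axis=1, keepdims=True)
G -= G.mean(axis=1, keepdims=True)

U, V, corr = cca_closed_form(F, G, L=2)
A, B = U.T @ F, V.T @ G
# whitening constraints hold at the optimum ...
assert np.allclose(A @ A.T, np.eye(2), atol=1e-6)
assert np.allclose(B @ B.T, np.eye(2), atol=1e-6)
# ... and the squared-difference objective equals L minus the total correlation
assert np.isclose(0.5 * np.sum((A - B) ** 2), 2 - corr, atol=1e-6)
```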
The DCCA objective \eqref{e:dcca} differs from typical DNN regression or classification training objectives. Those objectives are typically unconstrained and can be written as the expectation (or sum) of error functions (e.g., squared loss or cross entropy) incurred at each training example. This property naturally suggests stochastic gradient descent (SGD) for optimization, where one iteratively generates random unbiased estimates of the gradient based on one or a few training examples (a minibatch) and takes a small step in the opposite direction. However, the objective in \eqref{e:dcca} cannot be written as an unconstrained sum of per-example errors: the training examples are coupled through the auto-covariance matrices in the constraints, which cannot be reliably estimated from a small amount of data.
When introducing deep CCA, \cite{Andrew_13a} used the L-BFGS algorithm for optimization. To compute the gradients of the objective with respect to $(\ensuremath{\mathbf{W}}_\ensuremath{\mathbf{f}},\ensuremath{\mathbf{W}}_\ensuremath{\mathbf{g}})$, one first computes the gradients\footnote{Technically we are computing subgradients as the ``sum of singular values'' (trace norm) is not a differentiable function of the matrix.} with respect to $(\ensuremath{\mathbf{F}},\ensuremath{\mathbf{G}})$ as \begin{align}\label{e:gradient}
\frac{\partial \sum_{j=1}^L \sigma_j} {\partial \ensuremath{\mathbf{F}}} &= 2\Delta_{ff} \ensuremath{\mathbf{F}} + \Delta_{fg} \ensuremath{\mathbf{G}}, \\
\text{with}\qquad \Delta_{ff} & = -\frac{1}{2} \ensuremath{\boldsymbol{\Sigma}}_{ff}^{-1/2} \tilde{\ensuremath{\mathbf{U}}} \Lambda \tilde{\ensuremath{\mathbf{U}}}^\top \ensuremath{\boldsymbol{\Sigma}}_{ff}^{-1/2} \nonumber \\
\Delta_{fg} & = \ensuremath{\boldsymbol{\Sigma}}_{ff}^{-1/2} \tilde{\ensuremath{\mathbf{U}}} \tilde{\ensuremath{\mathbf{V}}}^\top \ensuremath{\boldsymbol{\Sigma}}_{gg}^{-1/2} \nonumber \end{align} where $\tilde{\bSigma}_{fg}=\tilde{\U}\Lambda\tilde{\V}^\top$ is the SVD of $\tilde{\bSigma}_{fg}$ as in the closed-form solution to CCA, and $\partial \sum_{j=1}^L \sigma_j / \partial \ensuremath{\mathbf{G}}$ has an analogous expression. One can then compute the gradients with respect to $\ensuremath{\mathbf{W}}_\ensuremath{\mathbf{f}}$ and $\ensuremath{\mathbf{W}}_\ensuremath{\mathbf{g}}$ via the standard backpropagation procedure~\cite{Rumelh_86c}. From the gradient formulas, it is clear that the key to optimizing DCCA is the SVD of $\tilde{\bSigma}_{fg}$; various nonlinear optimization techniques can be used here once the gradient is computed. In practice, however, batch optimization is undesirable for applications with large training sets or large DNN architectures, as each gradient step computed on the entire training set can be expensive in both memory and time.
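These gradient formulas can be verified numerically. The sketch below (ours; it takes $L=\min(d_x,d_y)$ so that the objective is the full trace norm and generically differentiable) implements the expressions above and checks them against a central finite difference along a random direction:

```python
# Our sketch of the gradient of the total canonical correlation w.r.t. F,
# following the formulas above, verified by central finite differences.
import numpy as np

def inv_sqrt(S):
    w, V = np.linalg.eigh(S)
    return V @ np.diag(w ** -0.5) @ V.T

def total_corr(F, G):
    T = inv_sqrt(F @ F.T) @ (F @ G.T) @ inv_sqrt(G @ G.T)
    return np.linalg.svd(T, compute_uv=False).sum()

def grad_F(F, G):
    Sff_is, Sgg_is = inv_sqrt(F @ F.T), inv_sqrt(G @ G.T)
    Ut, s, Vh = np.linalg.svd(Sff_is @ (F @ G.T) @ Sgg_is)
    L = len(s)                       # full trace norm: L = min(d_x, d_y)
    Ut, Vt = Ut[:, :L], Vh[:L].T
    Delta_ff = -0.5 * Sff_is @ Ut @ np.diag(s) @ Ut.T @ Sff_is
    Delta_fg = Sff_is @ Ut @ Vt.T @ Sgg_is
    return 2 * Delta_ff @ F + Delta_fg @ G

rng = np.random.default_rng(1)
F = rng.standard_normal((4, 200))
G = rng.standard_normal((3, 200))
D = rng.standard_normal(F.shape)     # random perturbation direction
eps = 1e-5
numeric = (total_corr(F + eps * D, G) - total_corr(F - eps * D, G)) / (2 * eps)
analytic = np.sum(grad_F(F, G) * D)
assert abs(numeric - analytic) < 1e-6
```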
Later, it was observed by \cite{Wang_15a} that stochastic optimization still works well even for the DCCA objective, as long as larger minibatches are used to estimate the covariances and $\tilde{\bSigma}_{fg}$ when computing the gradient with \eqref{e:gradient}. More precisely, the authors find that learning plateaus at a poor objective value if the minibatch is too small, but fast convergence and better generalization than batch algorithms can be obtained once the minibatch size is larger than some threshold, presumably because a large minibatch contains enough information to estimate the covariances and therefore the gradient accurately enough (the threshold of minibatch size varies for different datasets because they have different levels of data redundancy). Theoretically, the necessity of using large minibatches in this approach can also be established. Let the empirical estimate of $\tilde{\bSigma}_{fg}$ using a minibatch of $n$ samples be $\hat{\ensuremath{\boldsymbol{\Sigma}}}_{fg}^{(n)}$. It can be shown that the expectation of $\hat{\ensuremath{\boldsymbol{\Sigma}}}_{fg}^{(n)}$ does not equal the true $\tilde{\bSigma}_{fg}$ computed using the entire dataset, mainly due to the nonlinearities in the matrix inversion and multiplication operations in computing $\tilde{\bSigma}_{fg}$, and the nonlinearity in the ``sum of singular values''
(trace norm) of $\tilde{\bSigma}_{fg}$; moreover, the spectral norm of the error $\norm{\hat{\ensuremath{\boldsymbol{\Sigma}}}_{fg}^{(n)} - \tilde{\bSigma}_{fg}}$ decays slowly
as $\frac{1}{\sqrt{n}}$. Consequently, the gradient estimated on a minibatch using \eqref{e:gradient} does not equal the true gradient of the objective in expectation, indicating that the stochastic approach of \cite{Wang_15a} does not qualify as a stochastic gradient descent method for the DCCA objective.
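The bias of small-minibatch estimates is easy to observe numerically. In the following demonstration (ours; two independent random views, so the population canonical correlations are all zero), the minibatch estimates of the total canonical correlation are systematically inflated far above the full-data value:

```python
# Our small demonstration that minibatch estimates of the total canonical
# correlation are biased: on independent views they are badly inflated.
import numpy as np

def total_corr(F, G):
    def inv_sqrt(S):
        w, V = np.linalg.eigh(S)
        return V @ np.diag(w ** -0.5) @ V.T
    T = inv_sqrt(F @ F.T) @ (F @ G.T) @ inv_sqrt(G @ G.T)
    return np.linalg.svd(T, compute_uv=False).sum()

rng = np.random.default_rng(0)
F = rng.standard_normal((5, 2000))
G = rng.standard_normal((4, 2000))        # independent of F: true correlation ~ 0
full = total_corr(F, G)                   # full-data estimate (small)
mini = np.mean([total_corr(F[:, b], G[:, b])
                for b in rng.integers(0, 2000, size=(200, 20))])
assert mini > full + 0.5                  # average minibatch estimate is inflated
```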
\section{Our algorithm} \label{s:algorithm}
\subsection{An iterative solution to linear CCA}
\begin{algorithm}[t]
\caption{CCA projections via alternating least squares.}
\label{alg:cca-iterative}
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\begin{algorithmic}
\REQUIRE Data matrices $\ensuremath{\mathbf{F}}\in \ensuremath{\mathbb{R}}^{d_x \times N}$, $\ensuremath{\mathbf{G}}\in \ensuremath{\mathbb{R}}^{d_y \times N}$. Initialization $\tilde{\U}_0\in \ensuremath{\mathbb{R}}^{d_x\times L}$ s.t. $\tilde{\U}_0^\top \tilde{\U}_0=\ensuremath{\mathbf{I}}$.
\STATE $\ensuremath{\mathbf{A}}_0 \leftarrow \tilde{\U}_0^\top \bSigma_{ff}^{-\frac{1}{2}} \ensuremath{\mathbf{F}}$
\FOR{$t=1,2,\dots,T$}
\STATE $\ensuremath{\mathbf{B}}_t \leftarrow \ensuremath{\mathbf{A}}_{t-1} \ensuremath{\mathbf{G}}^\top \left( \ensuremath{\mathbf{G}} \ensuremath{\mathbf{G}}^\top \right)^{-1} \ensuremath{\mathbf{G}}$
\STATE $\ensuremath{\mathbf{B}}_{t} \leftarrow \left(\ensuremath{\mathbf{B}}_{t}\ensuremath{\mathbf{B}}_{t}^\top \right)^{-\frac{1}{2}} \ensuremath{\mathbf{B}}_t$
\STATE $\ensuremath{\mathbf{A}}_t \leftarrow \ensuremath{\mathbf{B}}_{t} \ensuremath{\mathbf{F}}^\top \left( \ensuremath{\mathbf{F}} \ensuremath{\mathbf{F}}^\top \right)^{-1} \ensuremath{\mathbf{F}}$
\STATE $\ensuremath{\mathbf{A}}_{t} \leftarrow \left(\ensuremath{\mathbf{A}}_{t}\ensuremath{\mathbf{A}}_{t}^\top \right)^{-\frac{1}{2}} \ensuremath{\mathbf{A}}_t$
\ENDFOR
\ENSURE $\ensuremath{\mathbf{A}}_{T}$/$\ensuremath{\mathbf{B}}_{T}$ are the CCA projections of view 1/2.
\end{algorithmic} \end{algorithm}
Our solution to \eqref{e:dcca} is inspired by the iterative solution for finding the linear CCA projections $(\ensuremath{\mathbf{U}}^\top \ensuremath{\mathbf{F}}, \ensuremath{\mathbf{V}}^\top \ensuremath{\mathbf{G}})$ for inputs $(\ensuremath{\mathbf{F}}, \ensuremath{\mathbf{G}})$, as shown in Algorithm~\ref{alg:cca-iterative}. This algorithm computes the top-$L$ singular vectors $(\tilde{\U},\tilde{\V})$ of $\tilde{\bSigma}_{fg}$ via orthogonal iterations \cite{GolubLoan96a}. An essentially identical algorithm (named \emph{alternating least squares} for reasons that will soon become evident) appears in \cite[Algorithm 5.2]{GolubZha95a}, and according to those authors the idea goes back to John von Neumann. A similar algorithm was also recently used by \cite[Algorithm~1]{LuFoster14a} for large-scale linear CCA with high-dimensional sparse inputs, although their algorithm either omits the whitening operations $\ensuremath{\mathbf{A}}_{t} \leftarrow \left(\ensuremath{\mathbf{A}}_{t}\ensuremath{\mathbf{A}}_{t}^\top \right)^{-\frac{1}{2}} \ensuremath{\mathbf{A}}_t$ and $\ensuremath{\mathbf{B}}_{t} \leftarrow \left(\ensuremath{\mathbf{B}}_{t}\ensuremath{\mathbf{B}}_{t}^\top \right)^{-\frac{1}{2}} \ensuremath{\mathbf{B}}_t$ or replaces them with a QR decomposition. The convergence of Algorithm~\ref{alg:cca-iterative} is characterized by the following theorem, which parallels \cite[Theorem~1]{LuFoster14a}.
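A direct NumPy transcription of Algorithm~\ref{alg:cca-iterative} (our sketch, on synthetic data with a shared latent signal) illustrates the convergence: the total correlation $\operatorname{tr}(\ensuremath{\mathbf{A}}_T \ensuremath{\mathbf{B}}_T^\top)$ of the iterates matches the SVD-based value $\sum_{j=1}^L \sigma_j$.

```python
# Our NumPy transcription of Algorithm 1 (alternating least squares),
# checked against the SVD-based total canonical correlation.
import numpy as np

def inv_sqrt(S):
    w, V = np.linalg.eigh(S)
    return V @ np.diag(w ** -0.5) @ V.T

def cca_als(F, G, L, T=200, seed=1):
    rng = np.random.default_rng(seed)
    U0, _ = np.linalg.qr(rng.standard_normal((F.shape[0], L)))  # U0^T U0 = I
    A = U0.T @ inv_sqrt(F @ F.T) @ F
    for _ in range(T):
        B = A @ G.T @ np.linalg.inv(G @ G.T) @ G   # least-squares fit, view 2
        B = inv_sqrt(B @ B.T) @ B                  # whiten
        A = B @ F.T @ np.linalg.inv(F @ F.T) @ F   # least-squares fit, view 1
        A = inv_sqrt(A @ A.T) @ A                  # whiten
    return A, B

rng = np.random.default_rng(0)
Z = rng.standard_normal((2, 1000))                 # shared latent signal
F = rng.standard_normal((5, 2)) @ Z + 0.1 * rng.standard_normal((5, 1000))
G = rng.standard_normal((4, 2)) @ Z + 0.1 * rng.standard_normal((4, 1000))
F -= F.mean(axis=1, keepdims=True)
G -= G.mean(axis=1, keepdims=True)

A, B = cca_als(F, G, L=2)
T_mat = inv_sqrt(F @ F.T) @ (F @ G.T) @ inv_sqrt(G @ G.T)
corr_svd = np.linalg.svd(T_mat, compute_uv=False)[:2].sum()
assert abs(np.trace(A @ B.T) - corr_svd) < 1e-5   # iterates reach the optimum
```

Note that the iterates converge to the optimal subspace only up to a common rotation in $\ensuremath{\mathbb{R}}^L$, which leaves $\operatorname{tr}(\ensuremath{\mathbf{A}}_T\ensuremath{\mathbf{B}}_T^\top)$ unchanged.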
\begin{theorem}
Let the singular values of $\tilde{\bSigma}_{fg}$ be
\begin{gather*}
\sigma_1 \ge \dots \ge \sigma_L > \sigma_{L+1} \ge \dots \ge \sigma_{\min(d_x,d_y)}
\end{gather*}
and
suppose $\tilde{\U}_0^\top \tilde{\U}$ is nonsingular. Then the output $(\ensuremath{\mathbf{A}}_T,\ensuremath{\mathbf{B}}_T)$ of Algorithm~\ref{alg:cca-iterative} converges to the CCA projections as $T\rightarrow \infty$. \end{theorem} \begin{proof}
We focus on showing that $\ensuremath{\mathbf{A}}_T$ converges to the view 1 projection; the proof for $\ensuremath{\mathbf{B}}_T$ is similar.
First recall that $\tilde{\bSigma}_{fg}=\tilde{\U} \Lambda \tilde{\V}^\top$ is the rank-$L$ SVD of $\bSigma_{ff}^{-\frac{1}{2}} \bSigma_{fg} \bSigma_{gg}^{-\frac{1}{2}}$, and thus $\tilde{\U}$ contains the top-$L$ eigenvectors of $\tilde{\bSigma}_{fg} \tilde{\bSigma}_{fg}^\top = \tilde{\U} \Lambda^2 \tilde{\U}^\top$.
Since the operation $\left(\ensuremath{\mathbf{A}} \ensuremath{\mathbf{A}}^\top\right)^{-\frac{1}{2}}\ensuremath{\mathbf{A}}$ extracts an orthonormal basis of the row space of $\ensuremath{\mathbf{A}}$, at iteration $t$ we can write
\begin{align*}
\ensuremath{\mathbf{A}}_{t-1} \ensuremath{\mathbf{G}}^\top \left( \ensuremath{\mathbf{G}} \ensuremath{\mathbf{G}}^\top \right)^{-1} \ensuremath{\mathbf{G}} & = \ensuremath{\mathbf{P}}_t \ensuremath{\mathbf{B}}_t\\
\ensuremath{\mathbf{B}}_{t} \ensuremath{\mathbf{F}}^\top \left( \ensuremath{\mathbf{F}} \ensuremath{\mathbf{F}}^\top \right)^{-1} \ensuremath{\mathbf{F}} & = \ensuremath{\mathbf{Q}}_t \ensuremath{\mathbf{A}}_t
\end{align*}
where $\ensuremath{\mathbf{P}}_t \in \ensuremath{\mathbb{R}}^{L\times L}$ and $\ensuremath{\mathbf{Q}}_t \in \ensuremath{\mathbb{R}}^{L\times L}$ are coefficient matrices representing the left-hand sides in their row-space bases; both are nonsingular under the assumption that $\tilde{\U}_0^\top \tilde{\U}$ is nonsingular. Combining the above two equations gives the following recursion at iteration $t$:
\begin{gather*}
\ensuremath{\mathbf{A}}_{t-1} \ensuremath{\mathbf{G}}^\top \left( \ensuremath{\mathbf{G}} \ensuremath{\mathbf{G}}^\top \right)^{-1} \ensuremath{\mathbf{G}} \ensuremath{\mathbf{F}}^\top \left( \ensuremath{\mathbf{F}} \ensuremath{\mathbf{F}}^\top \right)^{-1} \ensuremath{\mathbf{F}} = \ensuremath{\mathbf{P}}_t \ensuremath{\mathbf{Q}}_t \ensuremath{\mathbf{A}}_t.
\end{gather*}
By induction, it can be shown that by the end of iteration $t$ we have
\begin{multline*}
\ensuremath{\mathbf{A}}_0 \left( \ensuremath{\mathbf{G}}^\top \left( \ensuremath{\mathbf{G}} \ensuremath{\mathbf{G}}^\top \right)^{-1} \ensuremath{\mathbf{G}} \ensuremath{\mathbf{F}}^\top \left( \ensuremath{\mathbf{F}} \ensuremath{\mathbf{F}}^\top \right)^{-1} \ensuremath{\mathbf{F}} \right)^t = \ensuremath{\mathbf{O}}_t \ensuremath{\mathbf{A}}_t,
\end{multline*} where $\ensuremath{\mathbf{O}}_t=\ensuremath{\mathbf{P}}_1 \ensuremath{\mathbf{Q}}_1 \dots \ensuremath{\mathbf{P}}_t \ensuremath{\mathbf{Q}}_t \in \ensuremath{\mathbb{R}}^{L\times L}$ is nonsingular.
Plugging in the definition of $\ensuremath{\mathbf{A}}_0$, this equation reduces to
\begin{gather}
\tilde{\U}_0^\top \left(\tilde{\bSigma}_{fg} \tilde{\bSigma}_{fg}^\top \right)^t \bSigma_{ff}^{-\frac{1}{2}} \ensuremath{\mathbf{F}} = \ensuremath{\mathbf{O}}_t \ensuremath{\mathbf{A}}_t.
\end{gather} It is then clear that $\ensuremath{\mathbf{A}}_t$ can be written as \begin{gather*} \ensuremath{\mathbf{A}}_t = \tilde{\U}_t^\top \bSigma_{ff}^{-\frac{1}{2}} \ensuremath{\mathbf{F}} \end{gather*}
with \begin{gather*} \tilde{\U}_t = \left(\tilde{\bSigma}_{fg} \tilde{\bSigma}_{fg}^\top \right)^t \tilde{\U}_0 \ensuremath{\mathbf{O}}_t^{-1} \; \in \ensuremath{\mathbb{R}}^{d_x\times L}. \end{gather*} And since $\ensuremath{\mathbf{A}}_t$ has orthonormal rows, we have \begin{gather*} \ensuremath{\mathbf{I}}=\ensuremath{\mathbf{A}}_t \ensuremath{\mathbf{A}}_t^\top = \tilde{\U}_t^\top \bSigma_{ff}^{-\frac{1}{2}} (\ensuremath{\mathbf{F}} \ensuremath{\mathbf{F}}^\top) \bSigma_{ff}^{-\frac{1}{2}} \tilde{\U}_t = \tilde{\U}_t^\top \tilde{\U}_t, \end{gather*} indicating that $\tilde{\U}_t$ has orthonormal columns.
As a result, we consider the algorithm as working implicitly in the space of $\{ \tilde{\U}_t\in \ensuremath{\mathbb{R}}^{d_x\times L}, t=0,\dots,T\}$, and have
\begin{gather} \label{e:orth-iteration}
(\tilde{\bSigma}_{fg} \tilde{\bSigma}_{fg}^\top)^T \tilde{\U}_0 = \tilde{\U}_T \ensuremath{\mathbf{O}}_T.
\end{gather} Following the argument of~\cite[Theorem~8.2.2]{GolubLoan96a} for orthogonal iterations, under the assumptions of our theorem, the column space of $\tilde{\U}_T$ converges to that of $\tilde{\U}$, the top-$L$ eigenvectors of $\tilde{\bSigma}_{fg} \tilde{\bSigma}_{fg}^\top$, with a linear convergence rate depending on the ratio $\sigma_{L+1}/\sigma_L$. In view of the relationship between $\tilde{\U}_T$ and $\ensuremath{\mathbf{A}}_T$, we conclude that $\ensuremath{\mathbf{A}}_T$ converges to the view 1 CCA projection as $T\rightarrow \infty$. \end{proof}
It is interesting to note that, besides the whitening operations $\left(\ensuremath{\mathbf{A}}_{t}\ensuremath{\mathbf{A}}_{t}^\top \right)^{-\frac{1}{2}} \ensuremath{\mathbf{A}}_t$, the other basic operations in each iteration of Algorithm~\ref{alg:cca-iterative} are of the form \begin{gather}\label{e:lsq}
\ensuremath{\mathbf{A}}_t \leftarrow \ensuremath{\mathbf{B}}_{t} \ensuremath{\mathbf{F}}^\top \left( \ensuremath{\mathbf{F}} \ensuremath{\mathbf{F}}^\top \right)^{-1} \ensuremath{\mathbf{F}} \end{gather} which is solving a linear least squares (regression) problem with input $\ensuremath{\mathbf{F}}$ and target output $\ensuremath{\mathbf{B}}_{t}$ satisfying $\ensuremath{\mathbf{B}}_{t}\ensuremath{\mathbf{B}}_{t}^\top=\ensuremath{\mathbf{I}}$, i.e., \begin{gather*}
\min_{\ensuremath{\mathbf{U}}_t} \quad \norm{\ensuremath{\mathbf{U}}_t^\top \ensuremath{\mathbf{F}} - \ensuremath{\mathbf{B}}_t}_F^2. \end{gather*} By setting the gradient of this unconstrained objective to zero, we obtain $\ensuremath{\mathbf{U}}_t=(\ensuremath{\mathbf{F}}\ensuremath{\mathbf{F}}^\top)^{-1} \ensuremath{\mathbf{F}} \ensuremath{\mathbf{B}}_t^\top$ and so the optimal projection $\ensuremath{\mathbf{U}}_t^\top \ensuremath{\mathbf{F}}$ coincides with the update \eqref{e:lsq}.
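As a quick numerical sanity check (our sketch, with random placeholder data), the normal-equations solution indeed reproduces the projection update \eqref{e:lsq}:

```python
# Our check that U_t = (F F^T)^{-1} F B_t^T gives the same projection
# as the least-squares update rule above.
import numpy as np

rng = np.random.default_rng(0)
F = rng.standard_normal((5, 100))    # inputs
B = rng.standard_normal((2, 100))    # regression targets
U = np.linalg.solve(F @ F.T, F @ B.T)            # normal-equations solution
update = B @ F.T @ np.linalg.inv(F @ F.T) @ F    # the update rule above
assert np.allclose(U.T @ F, update)
```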
For \cite{LuFoster14a}, the advantage of the alternating least squares formulation over the exact solution to CCA is that it does not need to form the high-dimensional (nonsparse) matrix $\tilde{\bSigma}_{fg}$; instead it directly operates on the projections, which are much smaller in size, and one can solve the least squares problems using iterative algorithms that require only sparse matrix-vector multiplications.
\subsection{Extension to DCCA}
Our intuition for adapting Algorithm~\ref{alg:cca-iterative} to DCCA is as follows. During DCCA optimization, the DNN weights $(\ensuremath{\mathbf{W}}_\ensuremath{\mathbf{f}},\ensuremath{\mathbf{W}}_\ensuremath{\mathbf{g}})$ are updated frequently, and thus the outputs $\left( \ensuremath{\mathbf{f}}(\ensuremath{\mathbf{X}}),\ensuremath{\mathbf{g}}(\ensuremath{\mathbf{Y}}) \right)$, which are the inputs to the last CCA step, change upon each weight update. Therefore, the last CCA step needs to adapt to this fast-evolving input data distribution. On the other hand, if we are updating the CCA weights $(\ensuremath{\mathbf{U}},\ensuremath{\mathbf{V}})$ based on a small minibatch of data (as happens in stochastic optimization), it is intuitively wasteful to solve for $(\ensuremath{\mathbf{U}},\ensuremath{\mathbf{V}})$ to optimality rather than to make a simple update based on the minibatch. Moreover, the objective of this ``simple update'' can be used to derive a gradient estimate for $(\ensuremath{\mathbf{W}}_\ensuremath{\mathbf{f}}, \ensuremath{\mathbf{W}}_\ensuremath{\mathbf{g}})$.
In view of Algorithm~\ref{alg:cca-iterative}, it is a natural choice to embed the optimization of $(\ensuremath{\mathbf{f}}, \ensuremath{\mathbf{g}})$ into the iterative solution to linear CCA. Instead of solving the regression problem $\ensuremath{\mathbf{F}} \rightarrow \ensuremath{\mathbf{B}}_{t}$ exactly with $\ensuremath{\mathbf{A}}_t \leftarrow \ensuremath{\mathbf{B}}_{t} \ensuremath{\mathbf{F}}^\top \left( \ensuremath{\mathbf{F}} \ensuremath{\mathbf{F}}^\top \right)^{-1} \ensuremath{\mathbf{F}}$, we try to solve the problem $\ensuremath{\mathbf{X}} \rightarrow \ensuremath{\mathbf{B}}_{t}$ on a minibatch with a gradient descent step on $(\ensuremath{\mathbf{W}}_\ensuremath{\mathbf{f}}, \ensuremath{\mathbf{U}})$ jointly (recall $\ensuremath{\mathbf{F}}=\ensuremath{\mathbf{f}}(\ensuremath{\mathbf{X}})$ is a function of $\ensuremath{\mathbf{W}}_\ensuremath{\mathbf{f}}$). Notice that this regression objective is unconstrained and decouples over training samples, so an unbiased gradient estimate for this problem can be easily derived through standard backpropagation using minibatches (however, this gradient estimate may not be unbiased for the original DCCA objective; see discussion in Section~\ref{s:related}).
The less trivial part of Algorithm~\ref{alg:cca-iterative} to implement in DCCA is the whitening operation $\left(\ensuremath{\mathbf{A}}_{t}\ensuremath{\mathbf{A}}_{t}^\top \right)^{-\frac{1}{2}} \ensuremath{\mathbf{A}}_t$, which needs $\ensuremath{\mathbf{A}}_t\in\ensuremath{\mathbb{R}}^{L\times N}$, the projections of all training samples. We would like to avoid the exact computation of $\ensuremath{\mathbf{A}}_t$ as it requires feeding forward the entire training set $\ensuremath{\mathbf{X}}$ with the updated $\ensuremath{\mathbf{W}}_{\tilde{\f}}$, and the computational cost of this operation is as high as (half of) the cost of evaluating the batch gradient (the latter requires both the forward and backward passes). We bypass this difficulty by noting that the only portion of $\ensuremath{\mathbf{A}}_{t}$ needed is the updated projection of the minibatch used in the subsequent view 2 regression problem $\ensuremath{\mathbf{X}} \rightarrow \ensuremath{\mathbf{A}}_{t}$ (corresponding to the step $\ensuremath{\mathbf{B}}_{t+1} \leftarrow \ensuremath{\mathbf{A}}_t \ensuremath{\mathbf{G}}^\top \left( \ensuremath{\mathbf{G}} \ensuremath{\mathbf{G}}^\top \right)^{-1} \ensuremath{\mathbf{G}}$ in Algorithm~\ref{alg:cca-iterative}). Therefore, if we have an estimate of the covariance $\bSigma_{\tf\tf}^t:=\ensuremath{\mathbf{A}}_{t}\ensuremath{\mathbf{A}}_{t}^\top$ without feeding forward the entire training set, we can estimate the updated projection for this minibatch only. Specifically, we estimate this quantity by\footnote{We add a small value $\epsilon>0$ to the diagonal of the covariance estimates in our implementation for numerical stability.} \begin{gather}\label{e:memory}
\bSigma_{\tf\tf}^{t} \leftarrow \rho\bSigma_{\tf\tf}^{t-1} + (1-\rho) \frac{N}{\abs{b}} \tilde{\f}(\ensuremath{\mathbf{X}}_b)\tilde{\f}(\ensuremath{\mathbf{X}}_b)^\top, \end{gather} where $\rho\in[0,1]$, $\ensuremath{\mathbf{X}}_b$ denotes a minibatch of data with index set $b$, and $\abs{b}$ denotes the size (number of samples) of this minibatch. The time constant $\rho$ controls how much of the previous covariance estimate is kept in the update; a larger $\rho$ means the ``memory'' is forgotten more slowly. Assuming that the parameters do not change much from time $t-1$ to $t$, $\bSigma_{\tf\tf}^{t-1}$ will be close to $\bSigma_{\tf\tf}^{t}$, and incorporating it helps to reduce the variance of the term $\tilde{\f}(\ensuremath{\mathbf{X}}_b)\tilde{\f}(\ensuremath{\mathbf{X}}_b)^\top$ when $\abs{b}\ll N$. The update in \eqref{e:memory} has a form similar to that of the widely used momentum technique in the optimization~\cite{Polyak64a} and neural network~\cite{Sutskev_13a,Schaul_13a} literatures, and is also used by \cite{Brand06a,SantosMilidiu10a,Yger_12a} for online subspace tracking and anomaly detection. We note that the memory cost of $\bSigma_{\tf\tf}^{t} \in \ensuremath{\mathbb{R}}^{L\times L}$ is small, as we look for low-dimensional projections (small $L$) in practice. These advantages motivate our choice of whitening operations over the QR decomposition used by \cite{LuFoster14a}.
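The behavior of the update \eqref{e:memory} is illustrated by the following sketch (our code and naming; minibatches sampled with replacement from a fixed set of projections): streaming the exponential moving average over many minibatches drives the estimate close to the full-data covariance.

```python
# Our minimal sketch of the adaptive covariance update above: an exponential
# moving average of scaled minibatch covariances.
import numpy as np

def update_cov(S_prev, Fb, N, rho):
    """One update from a minibatch Fb whose columns are projected samples."""
    return rho * S_prev + (1 - rho) * (N / Fb.shape[1]) * Fb @ Fb.T

rng = np.random.default_rng(0)
F = rng.standard_normal((3, 2000))                 # fixed projections, N = 2000
S_full = F @ F.T                                   # full-data covariance
batches = rng.integers(0, 2000, size=(500, 100))   # random minibatch indices
S = (2000 / 100) * F[:, batches[0]] @ F[:, batches[0]].T   # warm start
for b in batches[1:]:
    S = update_cov(S, F[:, b], N=2000, rho=0.9)
# the streamed estimate stays close to the full-data covariance
assert np.linalg.norm(S - S_full) < 0.2 * np.linalg.norm(S_full)
```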
\begin{algorithm}[t]
\caption{Nonlinear orthogonal iterations (NOI) for DCCA.}
\label{alg:dcca}
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\begin{algorithmic}
\REQUIRE Data matrix $\ensuremath{\mathbf{X}}\in \ensuremath{\mathbb{R}}^{D_x \times N}$, $\ensuremath{\mathbf{Y}}\in \ensuremath{\mathbb{R}}^{D_y \times N}$. Initialization $(\ensuremath{\mathbf{W}}_{\tilde{\f}}, \ensuremath{\mathbf{W}}_{\tilde{\g}})$, time constant $\rho$, learning rate $\eta$.
\STATE Randomly choose a minibatch $(\ensuremath{\mathbf{X}}_{b_0},\ensuremath{\mathbf{Y}}_{b_0})$
\STATE $\bSigma_{\tf\tf} \leftarrow \frac{N}{\abs{b_0}}\sum_{i\in b_0} \tilde{\f}(\ensuremath{\mathbf{x}}_i)\tilde{\f}(\ensuremath{\mathbf{x}}_i)^\top$
\STATE $\bSigma_{\tg\tg} \leftarrow \frac{N}{\abs{b_0}}\sum_{i\in b_0} \tilde{\g}(\ensuremath{\mathbf{y}}_i)\tilde{\g}(\ensuremath{\mathbf{y}}_i)^\top$
\FOR{$t=1,2,\dots,T$}
\STATE Randomly choose a minibatch $(\ensuremath{\mathbf{X}}_{b_t},\ensuremath{\mathbf{Y}}_{b_t})$
\STATE $\bSigma_{\tf\tf} \leftarrow \rho \bSigma_{\tf\tf} + (1-\rho) \frac{N}{\abs{b_t}}\sum_{i\in b_t} \tilde{\f}(\ensuremath{\mathbf{x}}_i)\tilde{\f}(\ensuremath{\mathbf{x}}_i)^\top$
\STATE $\bSigma_{\tg\tg} \leftarrow \rho \bSigma_{\tg\tg} + (1-\rho) \frac{N}{\abs{b_t}}\sum_{i\in b_t} \tilde{\g}(\ensuremath{\mathbf{y}}_i)\tilde{\g}(\ensuremath{\mathbf{y}}_i)^\top$
\STATE Compute the gradient $\partial \ensuremath{\mathbf{W}}_{\tilde{\f}}$ of the objective
\begin{gather*}
\min_{\ensuremath{\mathbf{W}}_{\tilde{\f}}}\; \frac{1}{\abs{b_t}} \sum_{i\in b_t} \norm{\tilde{\f}(\ensuremath{\mathbf{x}}_i) - \bSigma_{\tg\tg}^{-\frac{1}{2}}\tilde{\g}(\ensuremath{\mathbf{y}}_i) }^2
\end{gather*}
\STATE Compute the gradient $\partial \ensuremath{\mathbf{W}}_{\tilde{\g}}$ of the objective
\begin{gather*}
\min_{\ensuremath{\mathbf{W}}_{\tilde{\g}}}\; \frac{1}{\abs{b_t}} \sum_{i\in b_t} \norm{\tilde{\g}(\ensuremath{\mathbf{y}}_i) - \bSigma_{\tf\tf}^{-\frac{1}{2}}\tilde{\f}(\ensuremath{\mathbf{x}}_i) }^2
\end{gather*}
\STATE $\ensuremath{\mathbf{W}}_{\tilde{\f}} \leftarrow \ensuremath{\mathbf{W}}_{\tilde{\f}} - \eta \partial \ensuremath{\mathbf{W}}_{\tilde{\f}}$, $\ensuremath{\mathbf{W}}_{\tilde{\g}} \leftarrow \ensuremath{\mathbf{W}}_{\tilde{\g}} - \eta \partial \ensuremath{\mathbf{W}}_{\tilde{\g}}$.
\ENDFOR
\ENSURE The updated $(\ensuremath{\mathbf{W}}_{\tilde{\f}}, \ensuremath{\mathbf{W}}_{\tilde{\g}})$.
\end{algorithmic} \end{algorithm}
We give the resulting nonlinear orthogonal iterations procedure (NOI) for DCCA in Algorithm~\ref{alg:dcca}. Now adaptive whitening is used to obtain suitable target outputs of the regression problems for computing derivatives $(\partial \ensuremath{\mathbf{W}}_{\tilde{\f}}, \partial \ensuremath{\mathbf{W}}_{\tilde{\g}})$, and we no longer maintain the whitened projections of the entire training set at each iteration.
Therefore, by the end of the algorithm, $(\tilde{\f}(\ensuremath{\mathbf{X}}),\tilde{\g}(\ensuremath{\mathbf{Y}}))$ may not satisfy the whitening constraints of \eqref{e:dcca}. One may use an additional CCA step on $(\tilde{\f}(\ensuremath{\mathbf{X}}),\tilde{\g}(\ensuremath{\mathbf{Y}}))$ to obtain a feasible solution of the original problem if desired, and this amounts to linear transforms in $\ensuremath{\mathbb{R}}^L$ which do not change the canonical correlations between the projections for both the training and test sets.
In practice, we adaptively estimate the mean of $\tilde{\f}(\ensuremath{\mathbf{X}})$ and $\tilde{\g}(\ensuremath{\mathbf{Y}})$ with an update formula similar to that of \eqref{e:memory} and center the samples accordingly before estimating the covariances and computing the target outputs. We also use momentum in the stochastic gradient steps for the nonlinear least squares problems as is commonly used in the deep learning community \cite{Sutskev_13a}. Overall, Algorithm~\ref{alg:dcca} is intuitively quite simple: It alternates between adaptive covariance estimation/whitening and stochastic gradient steps over (a stochastic version of) the least squares objectives, without any involved gradient computation.
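To make the structure of one NOI iteration concrete, here is a small sketch (ours; linear maps stand in for the DNNs $\ensuremath{\mathbf{f}}$ and $\ensuremath{\mathbf{g}}$, and mean-centering is omitted for brevity): update the running covariances on the minibatch, form the whitened regression targets from the other view, and take one gradient step on each least-squares objective.

```python
# Our sketch of one NOI iteration, with linear feature maps Wf, Wg standing
# in for the DNNs; centering and momentum are omitted for brevity.
import numpy as np

def inv_sqrt(S, eps=1e-8):
    w, V = np.linalg.eigh(S)
    return V @ np.diag((w + eps) ** -0.5) @ V.T   # eps added for stability

def noi_step(Wf, Wg, Sff, Sgg, Xb, Yb, N, rho=0.5, eta=0.05):
    A, B = Wf @ Xb, Wg @ Yb                         # minibatch view outputs
    scale = N / Xb.shape[1]
    Sff = rho * Sff + (1 - rho) * scale * A @ A.T   # running covariances
    Sgg = rho * Sgg + (1 - rho) * scale * B @ B.T
    Ta = inv_sqrt(Sgg) @ B                          # whitened target, view 1
    Tb = inv_sqrt(Sff) @ A                          # whitened target, view 2
    gWf = (2 / Xb.shape[1]) * (A - Ta) @ Xb.T       # grad of mean squared error
    gWg = (2 / Yb.shape[1]) * (B - Tb) @ Yb.T
    return Wf - eta * gWf, Wg - eta * gWg, Sff, Sgg

rng = np.random.default_rng(0)
Xb = rng.standard_normal((4, 64)); Yb = rng.standard_normal((3, 64))
Wf = 0.1 * rng.standard_normal((2, 4)); Wg = 0.1 * rng.standard_normal((2, 3))
Sff = (Wf @ Xb) @ (Wf @ Xb).T; Sgg = (Wg @ Yb) @ (Wg @ Yb).T

Wf2, Wg2, Sff2, Sgg2 = noi_step(Wf, Wg, Sff, Sgg, Xb, Yb, N=64)
# with the whitened target held fixed, the view-1 loss decreases after the step
Ta = inv_sqrt(Sgg2) @ (Wg @ Yb)
loss = lambda W: np.mean(np.sum((W @ Xb - Ta) ** 2, axis=0))
assert loss(Wf2) < loss(Wf)
```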
\section{Related Work} \label{s:related}
Stochastic (and online) optimization techniques for fundamental problems, such as principal component analysis and partial least squares, are of continuous research interest~\cite{Krasul69a,OjaKarhun85a,WarmutKuzmin08a,Arora_12a,Arora_13a,Mitliag_13a,Balsub_13a,Shamir15a}. However, as pointed out by \cite{Arora_12a}, the CCA objective is more challenging due to the whitening constraints.
Recently, \cite{Yger_12a} proposed an adaptive CCA algorithm with efficient online updates based on matrix manifolds defined by the whitening constraints. However, the goal of their algorithm is anomaly detection rather than optimizing the canonical correlation objective for a given dataset. Based on the alternating least squares formulation of CCA (Algorithm~\ref{alg:cca-iterative}), \cite{LuFoster14a} proposed an iterative solution of CCA for very high-dimensional and sparse input features; the key idea is to solve the high-dimensional least squares problems with randomized PCA and (batch) gradient descent.
\begin{algorithm}[t]
\caption{CCA via gradient descent over least squares. }
\label{alg:cca-gd}
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\begin{algorithmic}
\REQUIRE Data matrix $\ensuremath{\mathbf{F}}\in \ensuremath{\mathbb{R}}^{d_x \times N}$, $\ensuremath{\mathbf{G}}\in \ensuremath{\mathbb{R}}^{d_y \times N}$. Initialization ${\ensuremath{\mathbf{u}}}_0 \in \ensuremath{\mathbb{R}}^{d_x}$, ${\ensuremath{\mathbf{v}}}_0 \in \ensuremath{\mathbb{R}}^{d_y}$. Learning rate $\eta$.
\FOR{$t=1,2,\dots,T$}
\STATE ${\ensuremath{\mathbf{u}}}_t \leftarrow {\ensuremath{\mathbf{u}}}_{t-1} - \eta \ensuremath{\mathbf{F}} (\ensuremath{\mathbf{F}}^\top {\ensuremath{\mathbf{u}}}_{t-1} - \frac{1}{\norm{\ensuremath{\mathbf{v}}_{t-1}^\top \ensuremath{\mathbf{G}}}} \ensuremath{\mathbf{G}}^\top {\ensuremath{\mathbf{v}}}_{t-1})$
\STATE ${\ensuremath{\mathbf{v}}}_t \leftarrow {\ensuremath{\mathbf{v}}}_{t-1} - \eta \ensuremath{\mathbf{G}} (\ensuremath{\mathbf{G}}^\top {\ensuremath{\mathbf{v}}}_{t-1} - \frac{1}{\norm{\ensuremath{\mathbf{u}}_{t-1}^\top \ensuremath{\mathbf{F}}}} \ensuremath{\mathbf{F}}^\top {\ensuremath{\mathbf{u}}}_{t-1})$
\ENDFOR
\STATE $\ensuremath{\mathbf{u}} \leftarrow \frac{{\ensuremath{\mathbf{u}}}_T}{\norm{\ensuremath{\mathbf{u}}_T^\top \ensuremath{\mathbf{F}}}}$,$\quad$$\ensuremath{\mathbf{v}} \leftarrow \frac{{\ensuremath{\mathbf{v}}}_T}{\norm{\ensuremath{\mathbf{v}}_T^\top \ensuremath{\mathbf{G}}}}$
\ENSURE $\ensuremath{\mathbf{u}}$/$\ensuremath{\mathbf{v}}$ are the CCA directions of view 1/2.
\end{algorithmic} \end{algorithm}
Upon submission of this paper, we became aware of the very recent publication of \cite{Ma_15b}, which extends \cite{LuFoster14a} by solving the linear least squares problems with (stochastic) gradient descent. We notice that a special case of our algorithm ($\rho=0$) is equivalent to theirs for linear CCA. To see this, we give the linear CCA version of our algorithm (for a one-dimensional projection, to be consistent with the notation of \cite{Ma_15b}) in Algorithm~\ref{alg:cca-gd}, where we take a batch gradient descent step over the least squares objectives in each iteration. This algorithm is equivalent to Algorithm~3 of \cite{Ma_15b}.\footnote{Although Algorithm~3 of \cite{Ma_15b} maintains two copies---the normalized and the unnormalized versions---of the weight parameters, we observe that the sole purpose of the normalized version in the intermediate iterations is to provide whitened target output for the least squares problems; our version of the algorithm eliminates this copy and the normalized version can be retrieved by a whitening step at the end.} Though intuitively very simple, the analysis of this algorithm is challenging.
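One can check numerically that the top CCA directions, scaled by $\sigma_1$, are stationary under the updates of Algorithm~\ref{alg:cca-gd}. The following sketch (ours; the $\sigma_1$ scaling is a choice one can verify makes both updates stationary) implements one batch update and confirms this fixed-point property:

```python
# Our sketch of one batch update of Algorithm 2 (gradient descent over the
# alternating least squares objectives), plus a numerical check that the
# sigma_1-scaled top CCA directions are a fixed point of the update.
import numpy as np

def als_gd_step(u, v, F, G, eta):
    un = u - eta * F @ (F.T @ u - G.T @ v / np.linalg.norm(v @ G))
    vn = v - eta * G @ (G.T @ v - F.T @ u / np.linalg.norm(u @ F))
    return un, vn

def inv_sqrt(S):
    w, V = np.linalg.eigh(S)
    return V @ np.diag(w ** -0.5) @ V.T

rng = np.random.default_rng(0)
F = rng.standard_normal((4, 300)); F -= F.mean(axis=1, keepdims=True)
G = rng.standard_normal((3, 300)); G -= G.mean(axis=1, keepdims=True)
Fis, Gis = inv_sqrt(F @ F.T), inv_sqrt(G @ G.T)
U_, s, Vh = np.linalg.svd(Fis @ (F @ G.T) @ Gis)
u = s[0] * Fis @ U_[:, 0]            # scaled top CCA direction, view 1
v = s[0] * Gis @ Vh[0]               # scaled top CCA direction, view 2
u2, v2 = als_gd_step(u, v, F, G, eta=1e-3)
assert np.allclose(u2, u, atol=1e-8) and np.allclose(v2, v, atol=1e-8)
```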
In~\cite{Ma_15b} it is shown that the solution to the CCA objective is a fixed point of this algorithm, but no global convergence property is given. We also notice that the gradients used in this algorithm are derived from the alternating least squares problems \begin{gather*} \min_{\ensuremath{\mathbf{u}}}\; \norm{ \ensuremath{\mathbf{u}}^\top \ensuremath{\mathbf{F}} - \frac{\ensuremath{\mathbf{v}}^\top \ensuremath{\mathbf{G}}}{\norm{\ensuremath{\mathbf{v}}^\top \ensuremath{\mathbf{G}}}} }_F^2 \text{\ and \ } \min_{\ensuremath{\mathbf{v}}}\; \norm{ \ensuremath{\mathbf{v}}^\top \ensuremath{\mathbf{G}} - \frac{\ensuremath{\mathbf{u}}^\top \ensuremath{\mathbf{F}}}{\norm{\ensuremath{\mathbf{u}}^\top \ensuremath{\mathbf{F}}}} }_F^2, \end{gather*} while the true CCA objective can be written as \begin{gather*} \min_{\ensuremath{\mathbf{u}},\ensuremath{\mathbf{v}}}\; \norm{ \frac{\ensuremath{\mathbf{u}}^\top \ensuremath{\mathbf{F}}}{\norm{\ensuremath{\mathbf{u}}^\top \ensuremath{\mathbf{F}}}} - \frac{\ensuremath{\mathbf{v}}^\top \ensuremath{\mathbf{G}}}{\norm{\ensuremath{\mathbf{v}}^\top \ensuremath{\mathbf{G}}}}}_F^2. \end{gather*} This shows that Algorithm~3 is \emph{not} implementing gradient descent over the CCA objective.
When extending Algorithm~3 to stochastic optimization, the key differences between their algorithm and ours are as follows. Due to the evolving $(\ensuremath{\mathbf{W}}_\ensuremath{\mathbf{f}},\ensuremath{\mathbf{W}}_\ensuremath{\mathbf{g}})$, the last CCA step in the DCCA model deals with different $(\ensuremath{\mathbf{f}}(\ensuremath{\mathbf{X}}),\ensuremath{\mathbf{g}}(\ensuremath{\mathbf{Y}}))$ and covariance structures at different iterates, even though the original inputs $(\ensuremath{\mathbf{X}},\ensuremath{\mathbf{Y}})$ are the same; this motivates the adaptive estimate of covariances in \eqref{e:memory}. In the whitening steps of \cite{Ma_15b}, however, the covariances are estimated using \emph{only} the current minibatch at each iterate, without consideration of the remaining training samples or previous estimates, which corresponds to $\rho\rightarrow 0$ in our estimate. \cite{Ma_15b} also
suggests using a minibatch size of the order $\ensuremath{\mathcal{O}}(L)$, the dimensionality of the covariance matrices to be estimated, in order to obtain a high-accuracy estimate for whitening. As we will show in the experiments, in both CCA and DCCA, it is important to incorporate the previous covariance estimates ($\rho\rightarrow 1$) at each step to reduce the variance, especially when small minibatches are used. Based on the above analysis for batch gradient descent, solving the least squares problem with stochastic gradient descent is \emph{not} implementing stochastic gradient descent over the CCA objective. Nonetheless, as shown in the experiments, this stochastic approach works remarkably well and can match the performance of batch optimization, for both linear and nonlinear CCA, and is thus worth careful analysis.
Finally, we remark that other possible approaches for solving \eqref{e:dcca} exist. Since the difficulty lies in the whitening constraints, one can relax the constraints and solve the Lagrangian formulation repeatedly with updated Lagrange multipliers, as done by \cite{LaiFyfe00a}; or one can introduce auxiliary variables and apply the quadratic penalty method \cite{NocedalWright06a}, as done by \cite{CarreirWang14b}. The advantage of such approaches is that the training samples are no longer coupled when optimizing the primal variables (the DNN weight parameters), so SGD applies directly. On the other hand, one needs to deal with the Lagrange multipliers, or to set a (non-trivial) schedule for the quadratic penalty parameter, and to alternately optimize over two sets of variables in order to obtain a solution of the original constrained problem.
\begin{table}[t]
\centering
\caption{Statistics of two real-world datasets.}
\label{t:datasets}
\begin{tabular}{|c||c|c|c|}
\hline
dataset & training/tuning/test & $L$ & DNN architectures \\ \hline
JW11 & 30K/11K/9K & 112 & \caja{c}{c}{273-1800-1800-112\\112-1200-1200-112} \\ \hline
MNIST & 50K/10K/10K & 50 & \caja{c}{c}{392-800-800-50\\392-800-800-50} \\
\hline
\end{tabular} \end{table}
\section{Experiments} \label{s:experiments}
\subsection{Experimental setup} We now demonstrate the NOI algorithm on the two real-world datasets used by \cite{Andrew_13a} when introducing DCCA. The first dataset is a subset of the University of Wisconsin X-Ray Microbeam corpus~\cite{Westbur94a}, which consists of simultaneously recorded acoustic and articulatory measurements during speech. Following \cite{Andrew_13a,Wang_15a}, the acoustic view inputs are 39D Mel-frequency cepstral coefficients and the articulatory view inputs are horizontal/vertical displacements of 8 pellets attached to different parts of the vocal tract, each then concatenated over a 7-frame context window, for speaker `JW11'. The second dataset consists of the left/right halves of the images in the MNIST dataset~\cite{Lecun_98a}, so the input of each view consists of $28\times 14$ grayscale images. We do not tune the neural network architectures, as that is beyond the scope of this paper. Instead, we use DNN architectures similar to those used by \cite{Andrew_13a} with ReLU activations~\cite{NairHinton10a}, and we achieve better generalization performance with these architectures, mainly due to better optimization. The statistics of each dataset and the chosen DNN architectures (widths of input layer-hidden layers-output layer) are given in Table~\ref{t:datasets}. The projection dimensionality $L$ is set to 112/50 for JW11/MNIST respectively, as in \cite{Andrew_13a}; these are also the maximum possible total canonical correlations for the two datasets.
We compare three optimization approaches: full batch optimization by L-BFGS~\cite{Andrew_13a}, using the implementation of \cite{Schmid12a} which includes a good line-search procedure; stochastic optimization with large minibatches~\cite{Wang_15a}, denoted STOL; and our algorithm, denoted NOI. We create training/tuning/test splits for each dataset and measure the total canonical correlations on the test sets (measured by linear CCA on the projections) for different optimization methods. Hyperparameters of each algorithm, including $\rho$ for NOI, minibatch size $n=\abs{b_1}=\abs{b_2},\dots$, learning rate $\eta$ and momentum $\mu$ for both STOL and NOI, are chosen by grid search on the tuning set. All methods use the same random initialization for DNN weight parameters. We set the maximum number of iterations to $300$ for L-BFGS and number of epochs (one pass over the training set) to $50$ for STOL and NOI.
\begin{table*}[!t]\centering
\caption{Total test set canonical correlation obtained by different algorithms.}
\label{t:corr}
\begin{tabular}{|c|c|c|c|c|c|c|c|}\hline
\multirow{2}{*}{dataset} & \multirow{2}{*}{L-BFGS} & \multicolumn{2}{c|}{STOL} & \multicolumn{4}{c|}{NOI} \\ \cline{3-8}
&& $n=100$ & $n=500$ & $n=10$ & $n=20$ & $n=50$ & $n=100$ \\ \hline
JW11 & 78.7 & 33.0 & 86.7 & 83.6 & 86.9 & 87.9 & 89.1 \\ \hline
MNIST & 47.0 & 26.1 & 47.0 & 45.9 & 46.4 & 46.4 & 46.4 \\ \hline
\end{tabular} \end{table*}
\begin{figure}
\caption{Learning curves of different algorithms on tuning sets with different minibatch size $n$.}
\label{f:varyn}
\end{figure}
\subsection{Effect of minibatch size $n$} In the first set of experiments, we vary the minibatch size $n$ of NOI over $\{10,20,50,100\}$, while tuning $\rho$, $\eta$ and $\mu$. Learning curves (objective value vs.~number of epochs) on the tuning set for each $n$ with the corresponding optimal hyperparameters are shown in Fig.~\ref{f:varyn}. For comparison, we also show the learning curves of STOL with $n=100$ and $n=500$, where $\eta$ and $\mu$ are also tuned by grid search. We observe that STOL performs very well at $n=500$ (with the performance on MNIST being somewhat better due to higher data redundancy), but it cannot achieve much progress in the objective over the random initialization with $n=100$, for the reasons described earlier. In contrast, NOI achieves very competitive performance with various small minibatch sizes, with fast improvement in objective during the first few iterations, although larger $n$ tends to achieve slightly higher correlation on tuning/test sets eventually. Total canonical correlations on the test sets are given in Table~\ref{t:corr}, showing that we achieve better results than \cite{Andrew_13a} with similar DNN architectures.
\subsection{Effect of time constant $\rho$}
\begin{figure}
\caption{Total correlation achieved by NOI on tuning sets with different $\rho$.}
\label{f:varyr}
\end{figure}
In the second set of experiments, we demonstrate the importance of $\rho$ in NOI for different minibatch sizes. The total canonical correlations achieved by NOI on the tuning set for $\rho=\{0,\, 0.2,\, 0.4,\, 0.6,\, 0.8,\, 0.9,\, 0.99,\, 0.999,\, 0.9999\}$ are shown in Fig.~\ref{f:varyr}, while the other hyperparameters are set to their optimal values. We confirm that for relatively large $n$, NOI works reasonably well with $\rho=0$ (so we are using the same covariance estimate/whitening as \cite{Ma_15b}). Also as expected, when $n$ is small, it is beneficial to incorporate the previous estimate of the covariance, because the covariance information contained in each small minibatch is noisy. Conversely, as $\rho$ becomes too close to $1$, the covariance estimates are not adapted to the DNN outputs and the performance of NOI degrades. Moreover, we observe that the optimal value of $\rho$ differs across minibatch sizes $n$.
\begin{figure}
\caption{Pure stochastic optimization of linear CCA using NOI. We show total correlation achieved by NOI with $n=1$ on the MNIST training sets at different $\rho$, by the random initialization used by NOI, by the exact solution, and by STOL with $n=500$.}
\label{f:cca-noi}
\end{figure}
\subsection{Pure stochastic optimization for CCA}
Finally, we carry out pure stochastic optimization ($n=1$) for linear CCA on the MNIST dataset. Notice that linear CCA is a special case of DCCA with $(\tilde{\f},\tilde{\g})$ both being single-layer linear networks (although we have used small weight-decay terms for the weights, leading to a slightly different objective than that of CCA). Total canonical correlations achieved by STOL with $n=500$ and by NOI (50 training epochs) on the training set with different $\rho$ values are shown in Fig.~\ref{f:cca-noi}. The objective of the random initialization and the closed-form solution (by SVD) are also shown for comparison. NOI could not improve over the random initialization without memory ($\rho=0$, corresponding to the algorithm of \cite{Ma_15b}), but gets very close to the optimal solution and matches the objective obtained by the previous large minibatch approach when $\rho\rightarrow 1$. This result demonstrates the importance of our adaptive estimate \eqref{e:memory} also for CCA.
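For reference, the closed-form linear CCA solution used as the baseline above can be sketched as follows. This is the textbook SVD-based computation (with a small ridge term added for numerical stability, an assumption of ours), not the paper's code.

```python
import numpy as np

def total_canonical_correlation(X, Y, L, reg=1e-8):
    """Closed-form linear CCA: whiten each view's covariance and sum the
    top-L singular values of the whitened cross-covariance matrix."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n

    def inv_sqrt(S):
        # symmetric inverse square root via eigendecomposition
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    T = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    singular_values = np.linalg.svd(T, compute_uv=False)
    return singular_values[:L].sum()
```

Two identical views give $L$ canonical correlations equal to one, so the total correlation approaches $L$, which is a quick sanity check on the implementation.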
\section{Conclusions} \label{s:conclusion}
In this paper, we have proposed a stochastic optimization algorithm NOI for training DCCA which updates the DNN weights based on small minibatches and performs competitively with previous optimizers.
One direction for future work is to better understand the convergence properties of NOI, which presents several difficulties. First, we note that convergence of the alternating least squares formulation of CCA (Algorithm~\ref{alg:cca-iterative}, or rather orthogonal iterations) is usually stated as the angle between the estimated subspace and the ground-truth subspace converging to zero. In the stochastic optimization setting, we need to relate this measure of progress (or some other measure) to the nonlinear least squares problems we are trying to solve in the NOI iterations. As discussed in Section~\ref{s:related}, even the convergence
of the linear CCA version of NOI with batch gradient descent is not well understood~\cite{Ma_15b}. Second, the use of memory in estimating covariances \eqref{e:memory} complicates the analysis and ideally we would like to come up with ways of determining the time constant $\rho$.
We have also tried using the same form of adaptive covariance estimates in both views for the STOL approach for computing the gradients \eqref{e:gradient}, but its performance with small minibatches is much worse than that of NOI. Presumably this is because the gradient computation of STOL suffers from noise in both views, which is further combined through various nonlinear operations, whereas the noise in the gradient computation of NOI only comes from the output target (due to inexact whitening); as a result NOI is more tolerant to the noise resulting from using small minibatches. This deserves further analysis as well.
\end{document}
"id": "1510.02054.tex",
"language_detection_score": 0.6982173919677734,
"char_num_after_normalized": null,
"contain_at_least_two_stop_words": null,
"ellipsis_line_ratio": null,
"idx": null,
"lines_start_with_bullet_point_ratio": null,
"mean_length_of_alpha_words": null,
"non_alphabetical_char_ratio": null,
"path": null,
"symbols_to_words_ratio": null,
"uppercase_word_ratio": null,
"type": null,
"book_name": null,
"mimetype": null,
"page_index": null,
"page_path": null,
"page_title": null
} | arXiv/math_arXiv_v0.2.jsonl | null | null |
\begin{document}
\title{Splitting gradient algorithms for solving monotone equilibrium problems \thanks{The authors were supported in part by National Foundation for Science and Technology Development (NAFOSTED, Vietnam) under grant number 101.01-2017.315.} }
\author{Le Dung Muu \and Phung Minh Duc \and Xuan Thanh Le} \institute{Le Dung Muu \at Thang Long Institute of Mathematics and Applied Sciences (TIMAS), Thang Long University, Hanoi, Vietnam, \email{ldmuu@math.ac.vn} \and Phung Minh Duc \at Faculty of Basic Science, College of Technology Medical Equipment, Hanoi, Vietnam,\\ \email{ducphungminh@gmail.com} \and Xuan Thanh Le \at Institute of Mathematics, Vietnam Academy of Science and Technology, Hanoi, Vietnam,\\ \email{lxthanh@math.ac.vn} }
\maketitle
\begin{abstract} It is well known that the projection method in general is not convergent for monotone equilibrium problems. Recently Sosa \textit{et al.} \cite{SS2011} proposed a projection algorithm that converges for paramonotone equilibrium problems. In this paper we modify this algorithm to obtain a splitting convergent one for the case when the bifunction is the sum of two bifunctions. At each iteration, two strongly convex subprograms must be solved separately, one for each component bifunction. We show that the algorithm converges for paramonotone bifunctions without any Lipschitz-type condition or H\"older continuity of the involved bifunctions. Furthermore, we show that the ergodic sequence defined by the algorithm's iterates converges to a solution without the paramonotonicity property. We use the proposed algorithm to solve a jointly constrained Cournot-Nash model. The computational results show that the algorithm, combined with a restart strategy, is efficient for this model.
\keywords{ Monotone equilibria; splitting algorithm; ergodic sequence; restart strategy }
\end{abstract}
\section{Introduction}
Let $\mathcal{H}$ be a real Hilbert space endowed with weak topology defined by the inner product
$\langle \cdot , \cdot \rangle$ and its induced norm $\| \cdot \|$. Let $C \subseteq \mathcal{H}$ be a nonempty closed convex subset and $f: \mathcal{H} \times \mathcal{H} \to \mathbb{R} \cup \{+\infty\}$ a bifunction such that $f(x, y) < +\infty$ for every $x, y \in C$. The equilibrium problem defined by the Nikaido-Isoda-Fan inequality that we are going to consider in this paper is given as
$$\text{Find}\ x \in C: f(x, y) \geq 0 \ \forall y \in C. \eqno(EP)$$ This inequality was first used in 1955 by Nikaido and Isoda \cite{NI1955} in convex game models. Then in 1972 Ky Fan \cite{F1972} called this inequality a minimax one and established existence theorems for Problem $(EP)$. After the appearance of the paper by Blum and Oettli \cite{BO1994},
Problem $(EP)$ has attracted much attention from researchers. It has been shown in \cite{BCPP2013, BO1994, MO1992} that some important problems such as optimization, variational inequality, Kakutani fixed point and Nash equilibrium problems can be formulated in the form of $(EP)$. Many papers concerning the solution existence, stability, as well as algorithms for Problem $(EP)$ have been published (see e.g. \cite{HM2011, IS2003, M2003, MQ2009, QAM2012, QMH2008, SS2011} and the survey paper \cite{BCPP2013}). A basic method for Problem $(EP)$ is the gradient (or projection) one, where the sequence of iterates is defined by taking \begin{equation}\label{1m}
x^{k+1} = \arg\min\left\{ \lambda_k f(x^k,y) +\frac{1}{2} \|y-x^k\|^2 : y \in C \right\}, \end{equation} where $\lambda_k$ is an appropriately chosen real number. Note that in the variational inequality case, where $f(x,y) := \langle F(x), y-x\rangle$, the iterate $x^{k+1}$ defined by (\ref{1m}) becomes $$x^{k+1} = P_C\left(x^k - \lambda_k F(x^k)\right),$$ where $P_C$ stands for the metric projection onto $C$.
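For a strongly monotone operator the projection iteration above converges; the following minimal Python sketch (a toy example of ours, with $C$ a box so that $P_C$ is a componentwise clip) illustrates it.

```python
import numpy as np

def projection_method(F, x0, lam, steps, lo=-1.0, hi=1.0):
    """Gradient (projection) method x^{k+1} = P_C(x^k - lam * F(x^k)),
    with C = [lo, hi]^n so the projection is a componentwise clip."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = np.clip(x - lam * F(x), lo, hi)
    return x

# Strongly monotone toy operator F(x) = x - b: the iterates converge to P_C(b).
b = np.array([0.3, -2.0])
x_star = projection_method(lambda x: x - b, np.zeros(2), lam=0.5, steps=100)
```

Here the fixed point is the projection of $b$ onto the box, namely $(0.3, -1.0)$.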
It is well known that under certain conditions on the parameter $\lambda_k$, the projection method is convergent if $f$ is strongly pseudomonotone or paramonotone \cite{IS2003, QMH2008}. However when $f$ is monotone, it may fail to converge. In order to obtain convergent algorithms for monotone, even pseudomonotone, equilibrium problems, the extragradient method first proposed by Korpelevich \cite{K1976} for the saddle point and related problems has been extended to equilibrium problems \cite{QMH2008}. In this extragradient algorithm, at each iteration, it requires solving the two strongly convex programs
\begin{equation}\label{2m}
y^k = \arg\min\left\{ \lambda_k f(x^k,y) +\frac{1}{2} \|y-x^k\|^2 : y \in C \right\}, \end{equation}
\begin{equation}\label{3m}
x^{k+1} = \arg\min\left\{ \lambda_k f(y^k,y) +\frac{1}{2} \|y-x^k\|^2 : y \in C \right\}, \end{equation} which may be computationally expensive. In order to reduce the computational cost, several convergent algorithms that require solving only one strongly convex program or computing only one projection at each iteration have been proposed. These algorithms apply to some classes of bifunctions, such as strongly pseudomonotone and paramonotone ones, with or without using an ergodic sequence (see e.g. \cite{AHT2016, DMQ2016, SS2011}). In another direction, also for the sake of reducing computational cost, splitting algorithms have been developed (see e.g. \cite{AH2017, HV2017, M2009}) for monotone equilibrium problems where the bifunction can be decomposed into the sum of two bifunctions. In these algorithms the convex subprograms (resp. regularized subproblems) involving the bifunction $f$ are replaced by two convex subprograms (resp. regularized subproblems), one for each $f_i$ $(i=1, 2)$, handled independently.
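In the variational inequality case $f(x,y) = \langle F(x), y-x\rangle$, both strongly convex subprograms of the extragradient method reduce to projections: $y^k = P_C(x^k - \lambda_k F(x^k))$ and $x^{k+1} = P_C(x^k - \lambda_k F(y^k))$. The following Python sketch (a toy of ours, with $C$ a box) shows that this scheme handles a monotone rotation operator on which the plain projection method fails.

```python
import numpy as np

def extragradient(F, x0, lam, steps, lo=-1.0, hi=1.0):
    """Korpelevich extragradient: an extrapolation step followed by a
    correction step, each a projection onto the box C = [lo, hi]^n."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        y = np.clip(x - lam * F(x), lo, hi)  # extrapolation step
        x = np.clip(x - lam * F(y), lo, hi)  # correction step
    return x

# Monotone (but not strongly monotone) rotation operator F(x) = A x,
# whose unique solution over the box is the origin.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
x_star = extragradient(lambda v: A @ v, np.array([0.5, 0.5]), lam=0.3, steps=500)
```

For this rotation operator the interior update is $x^{k+1} = (1-\lambda^2)x^k - \lambda A x^k$, so the norm contracts by a factor $\sqrt{1-\lambda^2+\lambda^4} < 1$ per step.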
In this paper we modify the projection algorithm in \cite{SS2011} to obtain a splitting convergent algorithm for paramonotone equilibrium problems. The main feature of this algorithm is that at each iteration it requires solving only one strongly convex program. Furthermore, in the case when $f = f_1 + f_2$, this strongly convex subprogram can be replaced by two strongly convex subprograms, one for each of $f_1$ and $f_2$, as in the algorithms of \cite{AH2017, HV2017}; however, for the convergence we do not require any additional conditions such as H\"older continuity or a Lipschitz-type condition as in \cite{AH2017, HV2017}. We also show that the ergodic sequence defined by the algorithm's iterates converges to a solution without the paramonotonicity property. We apply the ergodic algorithm to solve a Cournot-Nash model with joint constraints. The computational results and experience show that the ergodic algorithm, combined with a restart strategy, is efficient for this model.
The remaining part of the paper is organized as follows. The next section contains preliminaries, including some lemmas that will be used in proving the convergence of the proposed algorithm. Section \ref{SectionAlgorithm} is devoted to the description of the algorithm and its convergence analysis. Section \ref{SectionExperiments} shows an application of the algorithm in solving a Cournot-Nash model with joint constraints. Section \ref{SectionConclusion} closes the paper with some conclusions.
\section{Preliminaries}
We recall from \cite{BCPP2013} the following well-known definition on monotonicity of bifunctions. \begin{defi} A bifunction $f: \mathcal{H} \times \mathcal{H} \to \mathbb{R} \cup \{+\infty\}$ is said to be \begin{itemize}[leftmargin = 0.5 in] \item[(i)] strongly monotone on $C$ with modulus $\beta > 0$ (shortly $\beta$-strongly monotone) if
$$f(x, y) + f(y, x) \leq -\beta \| y - x \|^2 \quad \forall x, y \in C;$$
\item[(ii)] monotone on $C$ if
$$f(x, y) + f(y, x) \leq 0 \quad \forall x, y \in C;$$
\item[(iii)] strongly pseudo-monotone on $C$ with modulus $\beta > 0$ (shortly $\beta$-strongly pseudo-monotone) if
$$f(x, y) \geq 0 \implies f(y, x) \leq -\beta\| y - x \|^2 \quad \forall x, y \in C;$$
\item[(iv)] pseudo-monotone on $C$ if
$$f(x, y) \geq 0 \implies f(y, x) \leq 0 \quad \forall x, y \in C.$$
\item[(v)] paramonotone on $C$ with respect to a set $S$ if
$$x^* \in S, x\in C \text{ and } f(x^*, x) = f(x, x^*) = 0 \text{ implies } x \in S.$$ \end{itemize} \end{defi}
Obviously, $(i) \implies (ii) \implies (iv)$ and $(i) \implies (iii) \implies (iv)$. Note that a strongly pseudo-monotone bifunction may not be monotone. Paramonotone bifunctions have been used in e.g. \cite{SS2011,S2011}. Some properties of paramonotone operators can be found in \cite{IS1998}. Clearly in the case of optimization problem when $f(x,y) = \varphi(y) - \varphi(x)$, the bifunction $f$ is paramonotone on $C$ with respect to the solution set of the problem $\min_{x\in C} \varphi(x)$.
The following well known lemmas will be used for proving the convergence of the algorithm to be described in the next section.
\begin{lem}\label{lem1}{\em (see \cite{TX1993} Lemma 1)} Let $\{\alpha_k\}$ and $\{\sigma_k\}$ be two sequences of nonnegative numbers such that $\alpha_{k+1} \leq \alpha_k + \sigma_k$ for all $k \in \mathbb{N}$, where $\sum_{k=1}^{\infty} \sigma_k < \infty$. Then the sequence $\{\alpha_k\}$ is convergent. \end{lem}
\begin{lem}\label{lem2}{\em (see \cite{P1979})} Let $\mathcal{H}$ be a Hilbert space, $\{x^k\}$ a sequence in $\mathcal{H}$. Let $\{r_k\}$ be a sequence of nonnegative numbers such that $\sum_{k=1}^{\infty} r_k = +\infty$ and set $z^k := \dfrac{\sum_{i=1}^k r_i x^i}{\sum_{i=1}^k r_i}$. Assume that there exists a nonempty, closed convex set $S \subset \mathcal{H}$ satisfying: \begin{itemize}[leftmargin = 0.5 in]
\item[(i)] For every $z \in S$, $\lim_{k \to \infty}\|z^k - z\|$ exists; \item[(ii)] Any weak cluster point of the sequence $\{z^k\}$ belongs to $S$. \end{itemize} Then the sequence $\{z^k\}$ weakly converges. \end{lem}
\begin{lem}\label{LemmaXu} {\em (see \cite{X2002})} Let $\{\lambda_k\}, \{\delta_k\}, \{\sigma_k\}$ be sequences of real numbers such that \begin{itemize}[leftmargin = 0.5 in]
\item[(i)] $\lambda_k \in (0, 1)$ for all $k \in \mathbb{N}$;
\item[(ii)] $\sum_{k = 1}^{\infty} \lambda_k = +\infty$;
\item[(iii)] $\limsup_{k \to +\infty} \delta_k \le 0$;
\item[(iv)] $\sum_{k = 1}^{\infty} |\sigma_k| < +\infty$. \end{itemize} Suppose that $\{\alpha_k\}$ is a sequence of nonnegative real numbers satisfying \begin{equation*} \alpha_{k+1} \le (1 - \lambda_k) \alpha_k + \lambda_k \delta_k + \sigma_k \qquad \forall k \in \mathbb{N}. \end{equation*} Then we have $\lim_{k \to +\infty} \alpha_k = 0$. \end{lem}
\section{The problem, algorithm and its convergence}\label{SectionAlgorithm}
\subsection{The problem} In what follows, for the following equilibrium problem $$\text{Find}\ x \in C: f(x, y) \geq 0 \ \forall y \in C \eqno(EP)$$
we suppose that $f(x, y) = f_1(x, y) + f_2(x, y)$ for every $x, y \in C$, and that $f_i(x, x) = 0$ ($i=1, 2$) for every $x \in C$. The following assumptions on the bifunctions $f, f_1, f_2$ will be used in the sequel.
\begin{itemize}[leftmargin = 0.5 in] \item[(A1)] For each $i =1, 2$ and each $x\in C$, the function $f_i(x, \cdot)$ is convex and subdifferentiable, while $f(\cdot, y)$ is weakly upper semicontinuous on $C$; \item[(A2)] If $\{x^k\} \subset C$ is bounded, then for each $i = 1, 2$, the sequence $\{g^k_i\}$ with $g^k_i \in \partial_2 f_i(x^k,x^k)$ is bounded, where $\partial_2$ denotes the subdifferential with respect to the second argument; \item[(A3)] The bifunction $f$ is monotone on $C$. \end{itemize}
Assumption (A2) has been used in e.g. \cite{S2011}. Note that Assumption (A2) is satisfied if the functions $f_1$ and $f_2$ are jointly weakly continuous on an open convex set containing $C$ (see \cite{VSN2015} Proposition 4.1).\\
The dual problem of $(EP)$ is $$\text{find}\ x \in C: f(y, x) \leq 0 \ \forall y \in C. \eqno(DEP)$$ We denote the solution sets of $(EP)$ and $(DEP)$ by $S(C,f)$ and $S^d(C,f)$, respectively. A relationship between $S(C,f)$ and $S^d(C,f)$ is given in the following lemma.
\begin{lem}\label{lem3}{\em (see \cite{KS2000} Proposition 2.1)} (i) If $f(\cdot, y)$ is weakly upper semicontinuous and $f(x, \cdot)$ is convex for all $x, y \in C$, then $S^d(C,f) \subset S(C,f)$.
(ii) If $f$ is pseudomonotone, then $S(C, f) \subset S^d(C, f)$. \end{lem}
Therefore, under the assumptions (A1)-(A3) one has $S(C, f) = S^d(C, f)$. In this paper we suppose that $S(C, f)$ is nonempty.
\subsection{The algorithm and its convergence analysis}
The algorithm below is a gradient-type one for the paramonotone equilibrium problem $(EP)$. The stepsize is computed as in the algorithm for equilibrium problems in \cite{SS2011}.
\begin{algorithm}[H] \caption{A splitting algorithm for solving paramonotone or strongly pseudo-monotone equilibrium problems.}\label{alg2} \begin{algorithmic} \State \textbf{Initialization:} Choose $x^0\in C$ and a sequence $\{\beta_k\}_{k \geq 0}$ of positive real numbers satisfying the following conditions \begin{equation*} \quad \sum_{k=0}^\infty \beta_k = +\infty, \quad \sum_{k=0}^\infty \beta_k^2 < +\infty. \end{equation*} \State \textbf{Iteration} $k = 0, 1, \ldots$:\\ \qquad Take $g_1^k \in \partial_2 f_1(x^k, x^k), g_2^k \in \partial_2 f_2(x^k, x^k)$.\\ \qquad Compute \begin{align*}
\eta_k &:= \max\{\beta_k, \|g_1^k\|, \|g_2^k\|\}, \ \lambda_k := \dfrac{\beta_k}{\eta_k},\\
y^k &:= \arg\min\{\lambda_k f_1(x^k, y) + \dfrac{1}{2}\|y - x^k\|^2 \mid y \in C\},\\
x^{k+1} &:= \arg\min\{\lambda_k f_2(x^k, y) +\dfrac{1}{2}\|y - y^k\|^2 \mid y \in C\}. \end{align*} \end{algorithmic} \end{algorithm}
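To make the iteration concrete, the following Python sketch instantiates Algorithm \ref{alg2} for the special case $f_i(x,y) = \langle F_i(x), y-x\rangle$ with $C$ a box, where $g_i^k = F_i(x^k)$ and each strongly convex subprogram reduces to a projection. The choice $\beta_k = 1/(k+1)$ satisfies the two summability conditions; the toy operators below are ours, chosen so that the exact solution is known.

```python
import numpy as np

def splitting_algorithm(F1, F2, x0, steps, lo=-1.0, hi=1.0):
    """Algorithm 1 specialized to f_i(x, y) = <F_i(x), y - x> on the box
    C = [lo, hi]^n: g_i^k = F_i(x^k), and each argmin is a projection."""
    x = np.asarray(x0, dtype=float)
    for k in range(steps):
        beta = 1.0 / (k + 1)  # sum beta_k = inf, sum beta_k^2 < inf
        g1, g2 = F1(x), F2(x)
        eta = max(beta, np.linalg.norm(g1), np.linalg.norm(g2))
        lam = beta / eta
        y = np.clip(x - lam * g1, lo, hi)  # subprogram for f_1
        x = np.clip(y - lam * g2, lo, hi)  # subprogram for f_2
    return x

# Paramonotone example: F_1(x) = x - b1 and F_2(x) = x - b2 sum to the
# gradient of ||x - (b1 + b2)/2||^2, whose solution on C is the projected
# midpoint (b1 + b2)/2.
b1, b2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x_star = splitting_algorithm(lambda v: v - b1, lambda v: v - b2,
                             np.zeros(2), steps=2000)
```

For these operators the iterates reach the midpoint $(0.5, 0.5)$ and remain there, since it is a fixed point of both projected steps.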
\begin{thm} \label{thm1} In addition to the assumptions {\em (A1), (A2), (A3)}, we suppose that $f$ is paramonotone on $C$, and that either $\mathrm{int}\, C \neq \emptyset$ or, for each $x\in C$, both functions $f_1(x, \cdot)$ and $f_2(x, \cdot)$ are continuous at some point in $C$. Then the sequence $\{x^k\}$ generated by Algorithm \ref{alg2} converges weakly to a solution of $(EP)$. Moreover, if $f$ is strongly pseudomonotone, then $\{x^k\}$ converges strongly to the unique solution of $(EP)$. \end{thm}
\begin{proof} First, we show that, for each $x^* \in S(f, C)$, the sequence $\{\|x^k - x^*\|\}$ is convergent.
Indeed, for each $k \geq 0$, for simplicity of notation, let \begin{equation*}
h_1^k (x) := \lambda_k f_1(x^k, x) + \frac{1}{2}\| x-x^k \|^2, \end{equation*} \begin{equation*}
h_2^k (x) := \lambda_k f_2(x^k, x) + \frac{1}{2}\| x-y^k \|^2. \end{equation} By Assumption (A1), the function $h_1^k$ is strongly convex with modulus $1$ and subdifferentiable, which implies \begin{equation} \label{ct2}
h_1^k (y^k) + \langle u_1^k, x - y^k \rangle + \frac{1}{2} \| x - y^k \|^2 \leq h_1^k(x) \quad \forall x \in C \end{equation} for any $u_1^k \in \partial h_1^k (y^k)$. On the other hand, from the definition of $y^k$, using the regularity condition, the optimality condition for convex programming gives $$0 \in \partial h_1^k (y^k) + N_C (y^k),$$ where $N_C(y^k)$ denotes the normal cone of $C$ at $y^k$.
This implies that there exists $u_1^k \in \partial h_1^k (y^k)$ with $-u_1^k \in N_C(y^k)$, i.e., $\langle u_1^k, x - y^k \rangle \geq 0$ for all $x \in C$. Hence, from (\ref{ct2}), for each $x \in C$, it follows that
$$ h_1^k(y^k) + \frac{1}{2} \| x - y^k \|^2 \leq h_1^k(x), $$ i.e.,
$$\lambda_k f_1(x^k, y^k) + \frac{1}{2}\| y^k - x^k \|^2 + \dfrac{1}{2}\|x - y^k\|^2
\leq \lambda_k f_1(x^k, x) + \frac{1}{2}\| x - x^k \|^2,$$ or equivalently, \begin{equation} \label{ct3}
\|y^k - x\|^2 \leq \|x^k - x\|^2 +2\lambda_k \left( f_1(x^k, x)-f_1(x^k, y^k) \right) - \|y^k - x^k\|^2. \end{equation} Using the same argument for $x^{k+1}$, we obtain \begin{equation} \label{ct4}
\|x^{k+1} - x\|^2 \leq \|y^k - x\|^2 +2\lambda_k \left( f_2(x^k, x) -f_2(x^k, x^{k+1}) \right) - \|x^{k+1} - y^k\|^2. \end{equation} Combining (\ref{ct3}) and (\ref{ct4}) yields \begin{align} \label{ct5}
\|x^{k+1} - x\|^2 &\leq \|x^k - x\|^2 - \|y^k - x^k\|^2 - \|x^{k+1} - y^k\|^2 \notag\\
& \quad + 2 \lambda_k \left( f_1(x^k, x) + f_2(x^k, x) \right) -2 \lambda_k \left( f_1(x^k, y^k) + f_2(x^k, x^{k+1}) \right) \notag\\
&= \|x^k - x\|^2 - \|y^k - x^k\|^2 - \|x^{k+1} - y^k\|^2 \notag\\ & \quad + 2 \lambda_k f(x^k, x) - 2\lambda_k \left( f_1(x^k, y^k) + f_2(x^k, x^{k+1}) \right). \end{align} From $g_1^k \in \partial_2f_1(x^k,x^k)$ and $f_1(x^k, x^k) = 0$, it follows that \begin{equation*} f_1(x^k, y^k) - f_1(x^k, x^k) \ge \langle g_1^k, y^k - x^k \rangle, \end{equation*} which implies \begin{equation} \label{ct6} -2 \lambda_k f_1(x^k, y^k) \leq - 2 \lambda_k \langle g_1^k, y^k - x^k \rangle.
\end{equation} By using the Cauchy-Schwarz inequality and the fact that $\|g_1^k\| \leq \eta_k$, from (\ref{ct6}) one can write \begin{equation} \label{ct7}
-2 \lambda_k f_1(x^k, y^k) \leq 2 \frac{\beta_k}{\eta_k} \eta_k \|y^k - x^k\| = 2 \beta_k \|y^k - x^k\|. \end{equation} By the same argument, we obtain \begin{equation} \label{ct8}
-2 \lambda_k f_2(x^k, x^{k+1}) \leq 2 \beta_k \|x^{k+1} - x^k\|. \end{equation} Substituting (\ref{ct7}) and (\ref{ct8}) into (\ref{ct5}) we get \begin{align} \label{ct9}
\|x^{k+1} - x\|^2 &\leq \|x^k - x\|^2 + 2\lambda_k f(x^k, x) \notag\\
&\quad + 2 \beta_k \|y^k - x^k\| + 2 \beta_k \|x^{k+1}-x^k\| - \|y^k - x^k\|^2 - \|x^{k+1} - y^k\|^2 \notag\\
&= \|x^k - x\|^2 + 2\lambda_k f(x^k, x) \notag \\
&\quad + 2\beta_k^2 - \left(\|y^k - x^k\| - \beta_k\right)^2 - \left(\|x^{k+1} - x^k\| - \beta_k\right)^2 \notag\\
&\leq \|x^k - x\|^2 + 2\lambda_k f(x^k, x) + 2\beta_k^2. \end{align} Note that by definition of $x^* \in S(f, C) = S^d(f, C)$ we have $f(x^k, x^*) \leq 0$. Therefore, by taking $x = x^*$ in (\ref{ct9}) we obtain \begin{align}\label{ct12}
\|x^{k+1} - x^*\|^2 &\leq \|x^k - x^*\|^2 + 2 \lambda_k f(x^k, x^*) + 2\beta_k^2 \notag\\
&\leq \|x^k - x^*\|^2 + 2\beta_k^2. \end{align} Since $\sum_{k = 0}^{\infty} \beta_k^2 < +\infty$ by assumption, in virtue of Lemma
\ref{lem1}, it follows from (\ref{ct12}) that the sequence $\{\|x^k - x^*\|\}$ is convergent.
Next, we prove that any cluster point of the sequence $\{x^k\}$ is a solution of $(EP)$.
Indeed, from (\ref{ct12}) we have \begin{equation}\label{ct16bc}
- 2 \lambda_k f(x^k, x^*) \leq \|x^k - x^*\|^2 - \|x^{k+1} - x^*\|^2 + 2 \beta_k^2 \quad \forall k \in \mathbb{N}. \end{equation} By summing up we obtain \begin{equation*} 2 \displaystyle \sum_{i = 0}^\infty \lambda_i\left(- f(x^i, x^*)\right)
\leq \|x^0 - x^*\|^2 + 2 \displaystyle \sum_{i = 0}^\infty \beta_i^2 < \infty. \end{equation*}
On the other hand, by Assumption (A2) the sequences $\{g_1^k\}, \{g_2^k\}$ are bounded. This fact, together with the construction of $\{\beta_k\}$, implies that there exists $M > 0$ such that $\|g_1^k\| \leq M, \|g_2^k\| \leq M, \beta_k \leq M$ for all $k \in \mathbb{N}$. Hence for each $k \in \mathbb{N}$ we have \begin{equation*}
\eta_k = \max\{\beta_k, \|g_1^k\|, \|g_2^k\|\} \leq M, \end{equation*} which implies
$\sum_{i=0}^\infty \lambda_i = \infty$. Thus, since $-f(x^i,x^*) \geq 0$ for all $i$ and $\sum_{i=0}^\infty \lambda_i \left(-f(x^i, x^*)\right) < \infty$, it holds that $$\limsup_{k \to \infty} f(x^k,x^*) = 0 \quad \forall x^* \in S(C,f).$$ Fix $x^* \in S(C,f)$ and let $\{x^{k_j}\}$ be a subsequence of $\{x^k\}$ such that $$\limsup_{k \to \infty} f(x^k,x^*) = \lim_{j \to \infty} f(x^{k_j},x^*) = 0.$$ Since $\{x^{k_j}\}$ is bounded (recall that $\{\|x^k - x^*\|\}$ is convergent), we may assume that $\{x^{k_j}\}$ weakly converges to some $\bar{x}$. Since $f(\cdot,x^*)$ is weakly upper semicontinuous by Assumption (A1), we have \begin{equation}\label{ct21} f(\bar{x},x^*) \geq \lim_{j \to \infty} f(x^{k_j},x^*) = 0. \end{equation} Then it follows from the monotonicity of $f$ that $f(x^*,\bar{x}) \leq 0$. On the other hand, since $x^* \in S(C, f)$, by definition we have $f(x^*,\bar{x}) \geq 0$. Therefore we obtain $f(x^*,\bar{x}) = 0$. Again, the monotonicity of $f$ implies $f(\bar{x}, x^*) \le 0$, and therefore, by (\ref{ct21}), one has $f(\bar{x}, x^*) = 0$. Since $f(x^*,\bar{x}) = 0$ and $f(\bar{x}, x^*) = 0$, it follows from the paramonotonicity of $f$
that $\bar{x}$ is a solution to $(EP)$. Since $\bar{x} \in S(C,f)$, the sequence $\{\|x^k - \bar{x}\|\}$ is convergent by the first part of the proof; combined with the fact that $\{x^{k_j}\}$ weakly converges to $\bar{x}$, this implies that the whole sequence $\{x^k\}$ weakly converges to $\bar{x}$.
Note that if $f$ is strongly pseudomonotone, then Problem $(EP)$ has a unique solution (see \cite{MQ2015} Proposition 1). Let $x^*$ be the unique solution of $(EP)$. By definition of $x^*$ we have \begin{equation*} f(x^*, x) \ge 0 \quad \forall x \in C, \end{equation*} which, by strong pseudomonotonicity of $f$, implies \begin{equation}\label{ct22}
f(x, x^*) \le - \beta \|x-x^*\|^2 \quad \forall x \in C. \end{equation} By choosing $x = x^k$ in (\ref{ct22}) and then applying to (\ref{ct9}) we obtain \begin{equation*}
\|x^{k+1} - x^*\|^2 \le (1 - 2\beta\lambda_k) \|x^k - x^*\|^2 + 2 \beta_k^2 \quad \forall k \in \mathbb{N}, \end{equation*}
which together with the construction of $\beta_k$ and $\lambda_k$, by virtue of Lemma \ref{LemmaXu} with $\delta_k \equiv 0$, implies that \begin{equation*}
\lim_{k \to +\infty} \|x^k - x^*\|^2 = 0, \end{equation*} i.e., $x^k$ strongly converges to the unique solution $x^*$ of $(EP)$. $\square$ \end{proof}
The following simple example, taken from \cite{FP2003}, shows that without paramonotonicity the algorithm may fail to converge. Let $f(x,y):= \langle Ax, y-x\rangle$ and $C:= \mathbb{R}^2$, where \begin{equation*} A = \begin{bmatrix} 0 & 1\\ -1 & 0 \end{bmatrix}. \end{equation*} Clearly, $x^*=(0,0)^T$ is the unique solution of this problem. It is easy to check that this bifunction is monotone, but not paramonotone. An elementary computation shows that $$x^{k+1} = x^k - \lambda_k A x^k = (x^k_1-\lambda_k x^k_2, x^k_2 + \lambda_k x^k_1)^T.$$
Thus, $\|x^{k+1}\|^2 = (1+\lambda^2_k)\|x^k\|^2 > \|x^k\|^2$ whenever $x^k \neq 0$, which implies that the sequence $\{x^k\}$ does not converge to the solution $x^* = 0$ for any starting point $x^0 \neq 0$.
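The growth of the iterates away from $x^* = 0$ is easy to verify numerically; the following Python sketch (with step sizes $\lambda_k = 1/(k+1)$, a choice of ours) checks that the norm strictly increases at every step.

```python
import numpy as np

# Projection iterates for the monotone but not paramonotone rotation
# f(x, y) = <Ax, y - x>; since C = R^2, no projection is needed.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
x = np.array([1.0, 0.0])
norms = [np.linalg.norm(x)]
for k in range(50):
    lam = 1.0 / (k + 1)
    x = x - lam * (A @ x)
    norms.append(np.linalg.norm(x))
# ||x^{k+1}||^2 = (1 + lam_k^2) ||x^k||^2, so the norms strictly increase.
```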
To illustrate our motivation let us consider the following optimization problem \begin{align*} (OP) \quad \min \quad &\varphi(x) := \frac{1}{2} x^T Q x - \displaystyle \sum_{i=1}^n \ln (1 + \max\{0, x_i\})\\ \text{subject to} \quad &x_i \in [a_i, b_i] \subset \mathbb{R} \quad (i = 1, \ldots, n), \end{align*} where $Q \in \mathbb{R}^{n \times n}$ is a positive semidefinite matrix. This problem is equivalent to the following equilibrium problem \begin{equation*} \text{Find } x^* \in C \text{ such that } f(x^*, y) \ge 0 \ \forall y \in C, \end{equation*} where $C := [a_1, b_1] \times \ldots \times [a_n, b_n]$, and $f(x, y) := \varphi(y) - \varphi(x)$. We can split the function $f(x, y) = f_1(x, y) + f_2(x, y)$ by taking \begin{equation*} f_1(x, y) = \frac{1}{2} y^T Q y - \frac{1}{2} x^T Q x, \end{equation*} and \begin{equation*} f_2(x, y) = \displaystyle \sum_{i = 1}^n \left( \ln(1 + \max\{0, x_i\}) - \ln(1 + \max\{0, y_i\}) \right). \end{equation*} Since $Q$ is a positive semidefinite matrix and $\ln(\cdot)$ is concave on $(0, +\infty)$, the functions $f_1, f_2$ are equilibrium functions satisfying conditions (A1)-(A3). Clearly, $f_1(x, \cdot)$ is convex quadratic, not necessarily separable, while $f_2(x, \cdot)$ is separable, not necessarily differentiable, but their sum does not inherit these properties.
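The splitting above can be checked directly: by construction $f_1(x,y) + f_2(x,y) = \varphi(y) - \varphi(x)$. A small Python sketch (with random data of ours) verifies this identity.

```python
import numpy as np

def phi(x, Q):
    """Objective of (OP): (1/2) x^T Q x - sum_i ln(1 + max(0, x_i))."""
    return 0.5 * x @ Q @ x - np.sum(np.log1p(np.maximum(0.0, x)))

def f1(x, y, Q):
    """Quadratic part of the splitting."""
    return 0.5 * y @ Q @ y - 0.5 * x @ Q @ x

def f2(x, y):
    """Separable logarithmic part of the splitting."""
    return np.sum(np.log1p(np.maximum(0.0, x)) - np.log1p(np.maximum(0.0, y)))

# The two parts recombine into phi(y) - phi(x):
rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))
Q = M.T @ M                      # positive semidefinite
x, y = rng.normal(size=4), rng.normal(size=4)
gap = f1(x, y, Q) + f2(x, y) - (phi(y, Q) - phi(x, Q))
```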
In order to obtain convergence without paramonotonicity, we use the iterates $x^k$ to define an ergodic sequence by taking $$z^k:= \dfrac{\sum_{i=0}^k \lambda_i x^i} { \sum_{i=0}^k \lambda_i}.$$ Then we have the following convergence result.
\begin{thm}\label{thm2} Under the assumptions of Theorem 1, the ergodic sequence $\{z^k\}$
converges weakly to a solution of $(EP)$. \end{thm}
\begin{proof} In the proof of Theorem 1, we have shown that the sequence $\{\|x^k - x^*\|\}$ is convergent.
By the definition of $z^k$, the sequence $\{\|z^k - x^*\|\}$ is convergent too. In order to apply Lemma \ref{lem2}, we now show that all weak cluster points
of $\{z^k\}$ belong to $S(f, C)$.
In fact, using inequality (\ref{ct12}) and summing both of its sides over $i = 0, \ldots, k$, we have \begin{align*} 2 \displaystyle \sum_{i = 0}^k \lambda_i f(x, x^i)
&\leq \displaystyle \sum_{i = 0}^k \left(\|x^i - x\|^2 - \|x^{i+1} - x\|^2 + 2 \beta_i^2\right)\\
&= \|x^0 - x\|^2 - \|x^{k+1} - x\|^2 + 2 \displaystyle \sum_{i = 0}^k \beta_i^2\\
&\leq \|x^0 - x\|^2 + 2 \displaystyle \sum_{i = 0}^k \beta_i^2. \end{align*} Combining this inequality with the definition of $z^k$ and the convexity of $f(x, \cdot)$, we can write \begin{align}\label{ct18} f(x, z^k) &= f\left(x, \dfrac{\sum_{i=0}^k \lambda_i x^i}{\sum_{i=0}^k \lambda_i} \right) \notag\\ &\leq \dfrac{\sum_{i = 0}^k \lambda_i f(x, x^i)}{ \sum_{i = 0}^k \lambda_i} \notag\\
&\leq \dfrac{\|x^0 - x\|^2 + 2 \sum_{i = 0}^k \beta_i^2}{2 \sum_{i = 0}^k \lambda_i}. \end{align}
As shown in the proof of Theorem 1, \begin{equation*} \lambda_k = \dfrac{\beta_k}{\eta_k} \geq \dfrac{\beta_k}{M}. \end{equation*} Since $\sum_{k = 0}^{\infty} \beta_k = +\infty$, we have
$\sum_{k=0}^{\infty} \lambda_k = +\infty.$ Then, it follows from
(\ref{ct18}) that \begin{equation}\label{ct20} \liminf_{k \to \infty} f(x, z^k) \leq 0. \end{equation} Let $\bar{z}$ be any weak cluster point of $\{z^k\}$. Then there exists a subsequence $\{z^{k_j}\}$ of $\{z^k\}$ such that $z^{k_j} \rightharpoonup \bar{z}$. Since $f(x, \cdot)$ is lower semicontinuous, it follows from (\ref{ct20}) that \begin{equation*} f(x, \bar{z}) \le 0. \end{equation*} Since this inequality holds for arbitrary $x \in C$, it means that $\bar{z} \in S^d(f, C) = S(f, C)$.
Thus it follows from Lemma \ref{lem2} that the sequence $\{z^k\}$ converges weakly to a point $z^* \in S(f, C)$, which is a solution to $(EP)$. $\square$ \end{proof}
\begin{rem}\label{rema2} In the case that $\mathcal{H}$ is finite dimensional, we have
$\|z^{k+1}-z^k\| \to 0$ as $k\to \infty$. Since $\sum_{k = 0}^{+\infty} \lambda_k^2 < +\infty$, for $k$ large enough the value of $\lambda_k$ is close to $0$, which makes the intermediate iteration points $y^k, x^{k+1}$ close to $x^k$. In turn, the newly generated ergodic point $z^{k+1}$ does not differ much from the previous one. This slows down the convergence of the sequence $\{z^k\}$. To enhance the convergence of the algorithm, this suggests a restart strategy: replace the starting point $x^0$ with $x^k$
whenever $\|z^{k+1}-z^k\| \leq \tau$ for an appropriate $\tau > 0$. \end{rem}
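The restart rule of Remark \ref{rema2} can be summarized in code. The sketch below is an illustration only: the update $x^{k+1} = x^k - \lambda_k x^k$ (a gradient step for the toy problem $\min \|x\|^2/2$), the step sizes, and all helper names are assumptions standing in for the actual algorithm.

```python
import math

def run_with_restart(x0, tau=1e-3, eps=1e-4, max_iter=10_000):
    """Ergodic iteration with a restart rule in the spirit of Remark 2,
    applied to toy dynamics x^{k+1} = x^k - lambda_k * x^k."""
    x = list(x0)
    num, den = [0.0] * len(x), 0.0          # running sums defining z^k
    z_prev, restarts = None, 0
    for k in range(max_iter):
        lam = 0.5 / (k + 1)                 # illustrative step size
        x = [xi - lam * xi for xi in x]     # toy update step
        num = [n + lam * xi for n, xi in zip(num, x)]
        den += lam
        z = [n / den for n in num]          # ergodic point z^k
        if z_prev is not None:
            gap = math.dist(z, z_prev)
            if gap < eps:                   # stopping criterion met
                return z, restarts
            if gap <= tau:                  # restart from the current iterate
                num, den = [0.0] * len(x), 0.0
                z_prev = None
                restarts += 1
                continue
        z_prev = z
    return z, restarts

z, restarts = run_with_restart([1.0, 2.0])
```

The key point mirrored from the remark: when consecutive ergodic points stall (gap at most $\tau$ but above the stopping tolerance $\epsilon$), the averaging is reset from the current iterate rather than carrying the stale history forward.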
\section{Numerical experiments}\label{SectionExperiments}
We used MATLAB R2016a for implementing the proposed algorithms. All experiments were conducted on a computer with a Core i5 processor, 16 GB of RAM, and Windows 10.
As noted in Remark \ref{rema2}, to improve the performance of our proposed algorithm, we reset $x^0$ to $x^k$ whenever $\|z^{k+1}-z^k\| \leq \tau$
with an appropriate $\tau > 0$ and then restart the algorithm from the beginning with the new starting point $x^0$ if the stopping criterion $\|z^{k+1}-z^k\| \leq \epsilon$ is still not satisfied. In all experiments, we set $\tau := 10^{-3}$, and terminated the algorithm when either the number of iterations exceeded $10^4$, or the distance between two consecutive generated ergodic points was less than $\epsilon := 10^{-4}$
(i.e., $\|z^{k+1} - z^k\| < 10^{-4}$). All the tests reported below were solved within 60 seconds.
We applied Algorithm 1 to compute a Nash equilibrium of a linear Cournot oligopolistic model with some additional joint constraints on the model's variables. The precise description of this model is as follows.
There are $n$ firms producing a common homogeneous commodity. Let $x_i$ be the production level of firm $i$, and $x = (x_1, \ldots, x_n)$ the vector of production levels of all these firms. Assume that the production price $p_i$ given by firm $i$ depends on the total quantity $\sigma = \sum_{i = 1}^n x_i$ of the commodity as follows \begin{center} $p_i(\sigma) = \alpha_i - \delta_i \sigma\qquad(\alpha_i > 0, \delta_i > 0, i = 1, \ldots, n).$ \end{center} Let $h_i(x_i)$ denote the production cost of firm $i$ when its production level is $x_i$ and assume that the cost functions are affine of the forms \begin{center} $h_i(x_i) = \mu_i x_i + \xi_i \qquad(\mu_i > 0, \xi_i \ge 0, i = 1, \ldots, n).$ \end{center} The profit of firm $i$ is then given by \begin{center} $q_i(x_1, \ldots, x_n) = x_i p_i(x_1 + \ldots + x_n) - h_i(x_i) \qquad (i = 1, \ldots, n).$ \end{center} Each firm $i$ has a strategy set $C_i \subset \mathbb{R}_+$ consisting of its possible production levels, i.e., $x_i \in C_i$. Assume that there are lower and upper bounds on quota of the commodity (i.e., there exist $\underline{\sigma}, \overline{\sigma} \in \mathbb{R}_+$ such that $\underline{\sigma} \le \sigma = \sum_{i = 1}^n x_i \le \overline{\sigma}$). So the set of feasible production levels can be described by \begin{equation*}
\Omega := \{x \in \mathbb{R}^n_+ \ | \ x_i \in C_i (i = 1, \ldots, n), \sum_{i = 1}^n x_i \in [\underline{\sigma}, \overline{\sigma}]\}. \end{equation*} Each firm $i$ seeks to maximize its profit by choosing the corresponding production level $x_i$ under the presumption that the production levels of the other firms are parametric inputs. In this context, a Nash equilibrium point for the model is a point $x^* \in \Omega$ satisfying \begin{center} $q_i(x^*[x_i]) \le q_i(x^*) \qquad \forall x_i \text{ such that } x^*[x_i] \in \Omega, \ i = 1,\ldots, n,$ \end{center} where $x^*[x_i]$ stands for the vector obtained from $x^*$ by replacing the component $x_i^*$ by $x_i$. It means that, if some firm $i$ leaves its equilibrium strategy while the others keep their equilibrium positions, then the profit of firm $i$ does not increase. It has been shown that the unique Nash equilibrium point $x^*$ is also the unique solution to the following equilibrium problem \begin{equation} \textrm {Find $x \in \Omega$ such that $f(x, y) := (\tilde{B} x + \mu - \alpha)^T (y - x) + \frac{1}{2} y^T B y - \frac{1}{2} x^T B x \ge 0 \ \forall y \in \Omega$,} \tag{$EP1$} \end{equation} where $\mu = (\mu_1, \ldots, \mu_n)^T, \alpha = (\alpha_1, \ldots, \alpha_n)^T$, and \begin{equation*} \tilde{B} = \begin{bmatrix} 0 & \delta_1 & \delta_1 & \ldots & \delta_1\\ \delta_2 & 0 & \delta_2 & \ldots & \delta_2\\ \cdot & \cdot & \cdot & \ldots & \cdot\\ \delta_n & \delta_n & \delta_n & \ldots & 0 \end{bmatrix},\qquad B = \begin{bmatrix} 2 \delta_1 & 0 & 0 & \ldots & 0\\ 0 & 2 \delta_2 & 0 & \ldots & 0\\ \cdot & \cdot & \cdot & \ldots & \cdot\\ 0 & 0 & 0 & \ldots & 2 \delta_n \end{bmatrix}. \end{equation*} Note that $f(x, y) = f_1(x, y) + f_2(x, y)$ in which \begin{align*} f_1(x, y) &= (\tilde{B} x + \mu - \alpha)^T (y - x),\\ f_2(x, y) &= \frac{1}{2} y^T B y - \frac{1}{2} x^T B x. \end{align*} It is obvious that $f, f_1, f_2$ are equilibrium functions satisfying conditions (A1)-(A3).
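To make the structure of $(EP1)$ concrete, the following sketch (plain Python; everything here is illustrative) builds $\tilde{B}$ and $B$ and evaluates $f = f_1 + f_2$ at a sample pair of production vectors, using the parameter values from the experiments below:

```python
def cournot_f(x, y, alpha, delta, mu):
    """Evaluate f(x, y) of (EP1): f1 + f2 with the matrices B~ and B."""
    n = len(x)
    Btilde = [[0.0 if i == j else delta[i] for j in range(n)] for i in range(n)]
    B = [[2.0 * delta[i] if i == j else 0.0 for j in range(n)] for i in range(n)]
    Bx = [sum(Btilde[i][j] * x[j] for j in range(n)) for i in range(n)]
    f1 = sum((Bx[i] + mu[i] - alpha[i]) * (y[i] - x[i]) for i in range(n))
    quad = lambda v: 0.5 * sum(v[i] * B[i][j] * v[j]
                               for i in range(n) for j in range(n))
    f2 = quad(y) - quad(x)
    return f1 + f2

n = 3
alpha, delta, mu = [120.0] * n, [1.0] * n, [30.0] * n
val = cournot_f([30.0] * n, [31.0, 29.0, 30.0], alpha, delta, mu)  # val = 2.0
```

Note that $f(x, x) = 0$ by construction, as required for an equilibrium bifunction.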
For the numerical experiments, we set $C_i = [10, 50]$ for $i = 1, \ldots, n$, $\underline{\sigma} = 10n + 10$, and $\overline{\sigma} = 50n - 10$. The initial guess was set to $x^0_i = 30$ $(i = 1, \ldots, n)$. We tested the algorithm on problem instances with different numbers $n$ of firms but with the fixed parameter values $\alpha_i = 120, \delta_i = 1, \mu_i = 30$ for $i = 1, \ldots, n$. Table \ref{TableExp2} reports the outcomes of Algorithm 1 with the restart strategy applied to these instances for different values of the dimension $n$ and appropriate values of the parameter $\beta_k$.
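As a sanity check on these parameter choices (a back-of-the-envelope computation, not part of the reported experiments): with identical firms and all constraints inactive, the first-order condition $\alpha - \delta\sigma - \delta x_i - \mu = 0$ at a symmetric point gives $x_i = (\alpha - \mu)/((n+1)\delta)$.

```python
def symmetric_equilibrium(n, alpha=120.0, delta=1.0, mu=30.0):
    # First-order condition of firm i at a symmetric point, assuming the
    # box and quota constraints are inactive (with these parameter values
    # the bound x_i >= 10 requires n <= 8, so this is only valid for small n).
    return (alpha - mu) / ((n + 1) * delta)

x2 = symmetric_equilibrium(2)   # = 30.0, exactly the initial guess x^0_i
```

For $n = 2$ this equilibrium coincides with the initial guess $x^0_i = 30$, which may explain why the algorithm stops after only 2 iterations in that row of the table.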
\begin{table}[H] \centering
\begin{tabular}{|c|c|c||c|c|} \hline \multirow{3}{*}{$n$} & \multirow{3}{*}{$\beta_k$} & \multirow{2}{*}{Total number of} & \multirow{2}{*}{Number of} & \multirow{2}{*}{Number of iterations}\\
& & & & \\
& & iterations & restarts & from the last restart\\ \hline 2 & $10/(k+1)$ & 2 & 0 & 2\\ 3 & $10/(k+1)$ & 639 & 2 & 9\\ 4 & $10/(k+1)$ & 911 & 2 & 4\\ 5 & $10/(k+1)$ & 1027 & 2 & 2\\ 10 & $10/(k+1)$ & 1201 & 1 & 2\\ 10 & $100/(k+1)$ & 266 & 1 & 2\\ 15 & $10/(k+1)$ & 2967 & 2 & 2\\ 15 & $100/(k+1)$ & 408 & 1 & 2\\ 20 & $10/(k+1)$ & 5007 & 2 & 2\\ 20 & $100/(k+1)$ & 539 & 1 & 2\\ \hline \end{tabular} \caption{Performance of Algorithm 1 in solving linear Cournot oligopolistic model with additional joint constraints.} \label{TableExp2} \end{table}
On one hand, the results reported in Table \ref{TableExp2} show the applicability of Algorithm 1 for solving the linear Cournot oligopolistic model with joint constraints. On the other hand, the table shows that the choice of the parameter $\beta_k$ is crucial for the convergence of the algorithm, since changing its value may significantly reduce the number of iterations. Furthermore, the last two columns of Table \ref{TableExp2} show that, by applying our suggested restart strategy, we can find `good' starting points from which the algorithm terminates after only a few iterations.
\section{Conclusion}\label{SectionConclusion}
We have proposed splitting algorithms for monotone equilibrium problems where the bifunction is the sum of two bifunctions. The first algorithm uses an ergodic sequence to ensure convergence without an extragradient (double projection) step. The second one is for paramonotone equilibrium problems and ensures convergence without using the ergodic sequence. A restart strategy has been used to enhance the convergence of the proposed algorithms.
\end{document}
\begin{document}
\title{{\bf KAM for quasi-linear and fully nonlinear forced KdV}}
\date{}
\author{Pietro Baldi, Massimiliano Berti, Riccardo Montalto}
\maketitle
\noindent {\bf Abstract:} We prove the existence of quasi-periodic, small amplitude, solutions for quasi-linear and fully nonlinear forced perturbations of KdV equations. For Hamiltonian or reversible nonlinearities we also obtain the linear stability of the solutions. The proofs are based on a combination of different ideas and techniques: $(i)$ a Nash-Moser iterative scheme in Sobolev scales. $(ii)$ A regularization procedure, which conjugates the linearized operator to a differential operator with constant coefficients plus a bounded remainder. These transformations are obtained by changes of variables induced by diffeomorphisms of the torus and pseudo-differential operators. $(iii)$ A reducibility KAM scheme, which completes the reduction to constant coefficients of the linearized operator, providing a sharp asymptotic expansion of the perturbed eigenvalues.
\noindent {\it Keywords:} KdV, KAM for PDEs, quasi-linear PDEs, fully nonlinear PDEs, Nash-Moser theory, quasi-periodic solutions, small divisors.
\tableofcontents
\section{Introduction}
One of the most challenging and open questions in KAM theory concerns its possible extension to \emph{quasi-linear} and \emph{fully nonlinear} PDEs, namely partial differential equations whose nonlinearities contain derivatives of the same order as the linear operator. Besides its mathematical interest, this question is also relevant in view of applications to physical real world nonlinear models, for example in fluid dynamics and elasticity.
The goal of this paper is to develop KAM theory for quasi-periodically forced KdV equations of the form \begin{equation}\label{equation main} u_{t} + u_{xxx} + \varepsilon f(\o t , x , u, u_{x}, u_{xx}, u_{xxx} ) = 0 \, , \quad x \in \mathbb T := \mathbb R / 2\pi\mathbb Z \, . \end{equation} First, we prove in Theorem \ref{thm:main} an existence result of quasi-periodic solutions for a large class of quasi-linear nonlinearities $ f $. Then for Hamiltonian or reversible nonlinearities, we also prove the linear stability of the solutions, see Theorems \ref{thm:mainH}, \ref{thm:mainrev}. Theorem \ref{thm:mainrev} also holds for fully nonlinear perturbations. The precise meaning of stability is stated in Theorem \ref{cor:stab}. The key analysis is the reduction to constant coefficients of the linearized KdV equation, see Theorem \ref{thm:reducibility}. To the best of our knowledge, these are the first KAM results for quasi-linear or fully nonlinear PDEs.
Let us outline a short history of the subject. KAM and Nash-Moser theory for PDEs, which counts nowadays on a wide literature, started with the pioneering works of Kuksin \cite{Ku} and Wayne \cite{W1}, and was developed in the 1990s by Craig-Wayne \cite{CW}, Bourgain \cite{Bo1}, \cite{B3}, P\"oschel \cite{Po2} (see also \cite{k1}, \cite{C} for more references). These papers concern wave and Schr\"odinger equations with bounded Hamiltonian nonlinearities.
The first KAM results for \emph{unbounded} perturbations have been obtained by Kuksin \cite{K2}, \cite{k1}, and, then, Kappeler-P\"oschel \cite{KaP}, for Hamiltonian, analytic perturbations of KdV. Here the highest constant coefficients linear operator is $\partial_{xxx}$ and the nonlinearity contains one space derivative $\partial_x$. Their approach has been recently improved by Liu-Yuan \cite{LY} and Zhang-Gao-Yuan \cite{ZGY} for $1$-dimensional derivative NLS (DNLS) and Benjamin-Ono equations, where the highest order constant coefficients linear operator is $ \partial_{xx}$ and the nonlinearity contains one derivative $\partial_x$. These methods apply to dispersive PDEs with derivatives like KdV, DNLS, the Duffing oscillator (see Bambusi-Graffi \cite{Bambusi-Graffi}), but not to derivative wave equations (DNLW) which contain first order derivatives $\partial_x , \partial_t $ in the nonlinearity.
For DNLW, KAM theorems have been recently proved by Berti-Biasco-Procesi for both Hamiltonian \cite{BBiP1} and reversible \cite{BBiP2} equations. The key ingredient is an asymptotic expansion of the perturbed eigenvalues that is sufficiently accurate to impose the second order Melnikov non-resonance conditions. In this way, the scheme produces a constant coefficients normal form around the invariant torus (\emph{reducibility}), implying the linear stability of the solution. This is achieved introducing the notion of ``quasi-T\"oplitz'' vector field, which is inspired by the concepts of ``quasi-T\"oplitz'' and ``T\"oplitz-Lipschitz'' Hamiltonians, developed, respectively, in Procesi-Xu \cite{PX} and Eliasson-Kuksin \cite{EK}, \cite{EK1} (see also Geng-You-Xu \cite{GXY}, Gr\'ebert-Thomann \cite{GT}, Procesi-Procesi \cite{PP}).
Existence of quasi-periodic solutions of PDEs can also be proved by imposing only the first order Melnikov conditions. This approach has been developed by Bourgain \cite{Bo1}-\cite{B5} extending the work of Craig-Wayne \cite{CW} for periodic solutions. It is especially convenient for PDEs in higher space dimension, because of the high multiplicity of the eigenvalues: see also the recent results by Wang \cite{Wang}, Berti-Bolle \cite{BBo10}, \cite{BB12} (and \cite{Berti-book}, \cite{Berti-Bolle-Procesi-AIHP-2010}, \cite{GP} for periodic solutions). This method does not provide information about the stability of the quasi-periodic solutions, because the linearized equations have variable coefficients.
All the aforementioned results concern ``semilinear'' PDEs, namely equations in which the nonlinearity contains \emph{strictly less} derivatives than the linear differential operator. For quasi-linear or fully nonlinear PDEs the perturbative effect is much stronger, and the possibility of extending KAM theory in this context is doubtful, see \cite{KaP}, \cite{C}, \cite{LY}, because of the possible phenomenon of formation of singularities outlined in Lax \cite{Lax}, Klainerman and Majda \cite{KM}. For example, Kappeler-P\"oschel \cite{KaP} (remark 3, page 19) wrote: ``{\it It would be interesting to obtain perturbation results which also include terms of higher order, at least in the region where the KdV approximation is valid. However, results of this type are still out of reach, if true at all}''. The study of this important issue is at its infancy.
For quasi-linear and fully nonlinear PDEs, the literature concerns, so far, only existence of \emph{periodic} solutions. We quote the classical bifurcation results of Rabinowitz \cite{Rabinowitz-tesi-1967} for fully nonlinear forced wave equations with a small dissipation term. More recently, Baldi \cite{Baldi Kirchhoff} proved existence of periodic forced vibrations for quasi-linear Kirchhoff equations. Here the quasi-linear perturbation term depends explicitly only on time. Both these results are proved via Nash-Moser methods.
For the water waves equations, which are a fully nonlinear PDE, we mention the pioneering work of Iooss-Plotnikov-Toland \cite{Ioo-Plo-Tol} about the existence of time periodic standing waves, and of Iooss-Plotnikov \cite{IP09}, \cite{IP11} for 3-dimensional traveling water waves. The key idea is to use diffeomorphisms of the torus $\mathbb T^2$ and pseudo-differential operators, in order to conjugate the linearized operator (at an approximate solution) to a constant coefficients operator plus a sufficiently regularizing remainder. This is enough to invert the whole linearized operator by Neumann series.
Very recently Baldi \cite{Baldi-Benj-Ono} has further developed the techniques of \cite{Ioo-Plo-Tol}, proving the existence of periodic solutions for fully nonlinear autonomous, reversible Benjamin-Ono equations.
These approaches do not imply the linear stability of the solutions and, unfortunately, they do not work for quasi-periodic solutions, because stronger small divisors difficulties arise, see the observation \ref{obs4} below.
We finally mention that, for quasi-linear Klein-Gordon equations on spheres, Delort \cite{Delort-2009} has proved long time existence results via Birkhoff normal form methods.
In the present paper we combine different ideas and techniques. The key analysis concerns the linearized KdV operator \eqref{linearized op} obtained at any step of the Nash-Moser iteration. First, we use changes of variables, like quasi-periodic time-dependent diffeomorphisms of the space variable $ x $, a quasi-periodic reparametrization of time, multiplication operators and Fourier multipliers, in order to reduce the linearized operator to constant coefficients up to a bounded remainder, see \eqref{L6red}. These transformations, which are inspired by \cite{Baldi-Benj-Ono}, \cite{Ioo-Plo-Tol}, are very different from the usual KAM transformations. Then, we perform a quadratic KAM reducibility scheme {\it \`a la} Eliasson-Kuksin, which completely diagonalizes the linearized operator. For reversible or Hamiltonian KdV perturbations we get that the eigenvalues of this diagonal operator are purely imaginary, i.e. we prove the linear stability. In section \ref{sec:ideas} we present the main ideas of proof.
We remark that the present approach could also be applied to quasi-linear and fully nonlinear perturbations of dispersive PDEs like 1-dimensional NLS and Benjamin-Ono equations (but not to the wave equation, which is not dispersive). For definiteness, we have developed all the computations in the KdV case.
In the next subsection we state precisely our KAM results. In order to highlight the main ideas, we consider the simplest setting of nonlinear perturbations of the Airy-KdV operator $ \partial_t + \partial_{xxx}$ and we look for small amplitude solutions.
\subsection{Main results}
We consider
problem \eqref{equation main} where $ \varepsilon > 0 $ is a small parameter, the nonlinearity is quasi-periodic in time with diophantine frequency vector \begin{equation}\label{omdio} \omega = \lambda \bar \omega \in \mathbb R^{\nu} \, , \quad \lambda \in \Lambda := \Big[ \frac12\,, \frac32 \Big], \quad
|\bar \omega \cdot l | \geq \frac{3 \g_0}{|l|^{\t_0}} \quad \forall l \in \mathbb Z^{\nu} \setminus \{ 0 \}, \end{equation} and $ f(\varphi , x, z )$, $\varphi \in \mathbb T^{\nu}$, $ z := (z_0, z_1, z_2, z_3) \in \mathbb R^4 $, is a finitely many times differentiable function, namely \begin{equation}\label{f classe Cq} f \in C^q ( \mathbb T^{\nu} \times \mathbb T \times \mathbb R^4; \mathbb R) \end{equation} for some $ q \in \mathbb N $ large enough. For simplicity we fix in \eqref{omdio} the diophantine exponent $ \t_0 := \nu $. The only ``external'' parameter in \eqref{equation main} is $ \lambda $, which is the length of the frequency vector (this corresponds to a time scaling).
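For intuition, the diophantine condition \eqref{omdio} can be probed numerically on a truncated lattice. In the sketch below everything is an assumption made for illustration only: $\nu = 2$, $\bar\omega = (1, \sqrt{2})$, and $|l|$ read as the $\ell^1$ norm. We compute the largest $\gamma_0$ for which the inequality holds for all nonzero $l$ with entries bounded by $L$:

```python
import itertools, math

def truncated_gamma0(omega_bar, tau0, L):
    """Largest gamma_0 such that |omega_bar . l| >= 3 * gamma_0 / |l|^tau0
    holds for every nonzero integer vector l with entries in [-L, L]."""
    worst = math.inf
    for l in itertools.product(range(-L, L + 1), repeat=len(omega_bar)):
        if all(li == 0 for li in l):
            continue
        norm1 = sum(abs(li) for li in l)                   # |l| (l^1 norm)
        dot = abs(sum(w * li for w, li in zip(omega_bar, l)))
        worst = min(worst, dot * norm1 ** tau0 / 3.0)
    return worst

g0 = truncated_gamma0((1.0, math.sqrt(2.0)), tau0=2, L=30)
# g0 > 0 on any finite range, since no nonzero integer vector
# annihilates (1, sqrt(2))
```

Of course, the actual condition requires a uniform $\gamma_0 > 0$ over all of $\mathbb Z^\nu \setminus \{0\}$, which this finite check cannot certify; it only illustrates how the small divisors $|\bar\omega \cdot l|$ are weighted against $|l|^{\tau_0}$.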
We consider the following questions:
\begin{itemize} \item {\it For $ \varepsilon $ small enough, do there exist quasi-periodic solutions of \eqref{equation main} for positive measure sets of $ \lambda \in \Lambda $? }
\item {\it Are these solutions linearly stable?} \end{itemize} Clearly, if $ f(\varphi ,x, 0)$ is not identically zero, then $ u = 0 $ is not a solution of \eqref{equation main} for $ \varepsilon \neq 0 $. Thus we look for non-trivial $ (2 \pi)^{\nu+1}$-periodic solutions $ u(\varphi ,x) $ of \begin{equation}\label{eq:invqp} \omega \cdot \partial_{\varphi } u + u_{xxx} + \varepsilon f(\varphi , x , u, u_{x}, u_{xx}, u_{xxx} ) = 0 \end{equation} in the Sobolev space \begin{align} H^s & := H^s ( \mathbb T^\nu \times \mathbb T; \mathbb R ) \label{Hs1} \\ & := \Big\{ u(\varphi ,x) = \sum_{(l,j) \in \mathbb Z^{\nu} \times \mathbb Z} u_{l,j} \, e^{{\rm i} (l \cdot \varphi + jx)} \in \mathbb R, \ \ {\bar u}_{l,j} = u_{-l,-j} \,, \ \
\| u \|_s^2 := \sum_{(l,j) \in \mathbb Z^{\nu} \times \mathbb Z} \langle l, j \rangle^{2s}
| u_{l,j} |^ 2 < \infty \Big\} \nonumber \end{align} where \[
\langle l,j \rangle := \max \{ 1, |l|, |j| \}. \] From now on, we fix $ {\mathfrak s}_0 := (\nu + 2) / 2 > (\nu +1 ) / 2 $, so that for all $s \geq \mathfrak{s}_0$ the Sobolev space $H^s$ is a Banach algebra, and it is continuously embedded $ H^s (\mathbb T^{\nu+1} ) \hookrightarrow C(\mathbb T^{\nu+1} ) $.
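As a concrete reading of the norm in \eqref{Hs1}, the following sketch evaluates $\|u\|_s$ for a finitely supported family of Fourier coefficients. The coefficient values and $\nu = 2$ are assumptions for illustration, $|l|$ is read as $\max_i |l_i|$, and the reality constraint $\bar u_{l,j} = u_{-l,-j}$ is ignored here:

```python
def sobolev_norm(coeffs, s):
    """||u||_s with weights <l, j> = max{1, |l|, |j|}; coeffs maps
    (l, j), l a tuple of integers and j an integer, to u_{l, j}."""
    total = 0.0
    for (l, j), u in coeffs.items():
        w = max(1, max(abs(li) for li in l), abs(j))
        total += w ** (2 * s) * abs(u) ** 2
    return total ** 0.5

coeffs = {((0, 0), 1): 1.0, ((2, -1), 3): 0.5}
# weights: <(0,0),1> = 1 and <(2,-1),3> = 3, so ||u||_1^2 = 1 + 9/4 = 3.25
```

The weight $\langle l, j\rangle^{2s}$ penalizes high frequencies in time and space symmetrically, which is what makes $H^s$ an algebra for $s \geq \mathfrak{s}_0$.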
We need some assumptions on the nonlinearity. We consider {\it fully nonlinear} perturbations satisfying \begin{itemize} \item {\sc Type (F)} \begin{equation} \label{type F} \partial_{z_2} f = 0, \end{equation} \end{itemize} namely $ f $ is independent of $ u_{xx} $. Otherwise, we require that \begin{itemize} \item {\sc Type (Q)} \begin{equation} \label{type Q} \partial^2_{z_3 z_3} f = 0, \quad \partial_{z_2} f = \a(\varphi) \Big( \partial^2_{z_3 x} f + z_1 \partial^2_{z_3 z_0} f + z_2 \partial^2_{z_3 z_1} f + z_3 \partial^2_{z_3 z_2} f \Big) \end{equation} for some function $ \a(\varphi) $ (independent of $ x $). \end{itemize} If (Q) holds, then the nonlinearity $ f $ depends linearly on $u_{xxx} $, namely equation \eqref{equation main} is {\it quasi-linear}. We note that the Hamiltonian nonlinearities, see \eqref{f Ham}, are a particular case of those satisfying (Q), see remark \ref{rem:coeff in cases Q F}. In comment \ref{linea} after Theorem \ref{cor:stab} we explain the reason for assuming either condition (F) or (Q).
The following theorem is an existence result of quasi-periodic solutions for quasi-linear KdV equations.
\begin{theorem} \label{thm:main} {\bf (Existence)} There exist $ s := s( \nu ) > 0$, $ q := q( \nu) \in \mathbb N $, such that: \\[1mm] For every quasi-linear nonlinearity $ f \in C^q $ of the form \begin{equation}\label{f = der g} f = \partial_x \big( g(\omega t, x, u, u_x, u_{xx})\big) \end{equation} satisfying the (Q)-condition \eqref{type Q}, for all $\varepsilon \in (0, \varepsilon_0)$, where $\varepsilon_0 := \varepsilon_0 (f, \nu) $ is small enough, there exists a Cantor set $ {\cal C}_\varepsilon \subset \Lambda $ of asymptotically full Lebesgue measure, i.e. \begin{equation}\label{Cmeas}
| {\cal C}_\varepsilon | \to 1 \quad \text{as} \quad \varepsilon \to 0, \end{equation} such that, $ \forall \lambda \in {\cal C}_\varepsilon $ the perturbed KdV equation \eqref{eq:invqp}
has a solution $ u( \varepsilon, \l) \in H^s $ with $ \| u(\varepsilon, \l) \|_s \to 0 $ as $ \varepsilon \to 0 $. \end{theorem}
We may ensure the {\it linear stability} of the solutions requiring further conditions on the nonlinearity, see Theorem \ref{cor:stab} for the precise statement. The first case is that of {\it Hamiltonian} KdV equations \begin{equation}\label{Ham-KdV} u_t = \partial_x \nabla_{L^2} H(t,x,u, u_x) \, , \quad H(t,x,u,u_x) := \int_{\mathbb T} \frac{u_x^2}{2}\, + \varepsilon F(\omega t,x,u,u_x) \, dx \end{equation} which have the form \eqref{equation main}, \eqref{f = der g} with \begin{equation} \label{f Ham} f(\varphi,x,u,u_x, u_{xx}, u_{xxx})
= - \partial_x \big\{ (\partial_{z_0} F)(\varphi, x, u , u_x) \big\} + \partial_{xx} \big\{ (\partial_{z_1} F)(\varphi, x, u, u_x ) \big\} \, . \end{equation} The phase space of \eqref{Ham-KdV} is $$ H^1_0 (\mathbb T) := \Big\{ u(x) \in H^1(\mathbb T, \mathbb R) \, : \, \int_{\mathbb T} u(x) \, dx = 0 \Big\} $$ endowed with the non-degenerate symplectic form \begin{equation}\label{KdV symplectic} \Omega (u,v) := \int_{\mathbb T} (\partial_x^{-1} u) v \, dx \, , \quad \forall u, v \in H_0^1 (\mathbb T) \, , \end{equation} where $ \partial_x^{-1} u $ is the periodic primitive of $ u $ with zero average, see \eqref{dx-1}. As proved in remark \ref{rem:coeff in cases Q F}, the Hamiltonian nonlinearity $ f $ in \eqref{f Ham} satisfies also the (Q)-condition \eqref{type Q}. As a consequence, Theorem \ref{thm:main} implies the existence of quasi-periodic solutions of \eqref{Ham-KdV}. In addition, we also prove their linear stability.
\begin{theorem} \label{thm:mainH} {\bf (Hamiltonian KdV)} For all Hamiltonian quasi-linear KdV equations \eqref{Ham-KdV} the quasi-periodic solution $u(\varepsilon,\lambda)$ found in Theorem \ref{thm:main} is {\sc linearly stable} (see Theorem \ref{cor:stab}). \end{theorem}
The stability of the quasi-periodic solutions also follows by the {\it reversibility} condition \begin{equation} \label{parity f} f (-\varphi, -x, z_0, -z_1, z_2, -z_3) = - f(\varphi, x, z_0, z_1, z_2, z_3). \end{equation} Actually \eqref{parity f} implies that the infinite-dimensional non-autonomous dynamical system $$ u_t = V(t, u ), \quad V(t,u ) := - u_{xxx} - \varepsilon f(\o t , x , u, u_{x}, u_{xx}, u_{xxx}) $$ is reversible with respect to the involution \[ S : u(x) \rightarrow u(-x), \quad S^2 = I, \] namely \[ - S V(-t,u) = V(t,Su) \, . \] In this case it is natural to look for ``reversible" solutions of \eqref{eq:invqp}, that is \begin{equation}\label{solPP} u ( \varphi ,x ) = u ( -\varphi , -x ) \, . \end{equation}
\begin{theorem} \label{thm:mainrev} {\bf (Reversible KdV)} There exist $ s := s( \nu ) > 0$, $ q := q( \nu) \in \mathbb N $, such that:
\noindent For every nonlinearity $ f \in C^q $ that satisfies \\[1mm] \indent $(i)$ the reversibility condition \eqref{parity f}, \\[1mm] \noindent and \\[1mm] \indent $(ii)$ either the (F)-condition \eqref{type F} or the (Q)-condition \eqref{type Q}, \\[1mm] for all $\varepsilon \in (0, \varepsilon_0)$, where $\varepsilon_0 := \varepsilon_0 (f, \nu) $ is small enough, there exists a Cantor set $ {\cal C}_\varepsilon \subset \Lambda $ with Lebesgue measure satisfying \eqref{Cmeas}, such that for all $ \lambda \in {\cal C}_\varepsilon $ the perturbed KdV equation \eqref{eq:invqp} has a solution $ u (\varepsilon, \l) \in H^s $ that satisfies \eqref{solPP}, with
$ \| u (\varepsilon, \l) \|_s \to 0 $ as $ \varepsilon \to 0 $. In addition, $u(\varepsilon,\lambda)$ is {\sc linearly stable}. \end{theorem}
Let us make some comments on the results.
\begin{enumerate} \item The previous theorems (in particular the Hamiltonian Theorem \ref{thm:mainH}) give a positive answer to the question that was posed by Kappeler-P\"oschel \cite{KaP}, page 19, Remark 3, about the possibility of KAM type results for quasi-linear perturbations of KdV.
\item In Theorem \ref{thm:main} we do not have information about the linear stability of the solutions because the nonlinearity $ f $ has no special structure and it may happen that some eigenvalues of the linearized operator have nonzero real part (partially hyperbolic tori). We remark that, in any case, we may compute the eigenvalues (i.e. Lyapunov exponents) of the linearized operator with any order of accuracy. With further conditions on the nonlinearity---like reversibility or in the Hamiltonian case---the eigenvalues are purely imaginary, and the torus is linearly stable. The present situation is very different from that of \cite{CW}, \cite{Bo1}-\cite{B5}, \cite{BBo10}-\cite{BB12} and also \cite{Ioo-Plo-Tol}-\cite{IP11}, \cite{Baldi-Benj-Ono},
where the lack of stability information is due to the fact that the linearized equation has variable coefficients, and it is not reduced as in Theorem \ref{thm:reducibility} below. \item One cannot expect the existence of quasi-periodic solutions of \eqref{eq:invqp} for {\it any} perturbation $ f $. Actually, if $ f = m \neq 0 $ is a constant, then, integrating \eqref{eq:invqp} in $ (\varphi ,x) $ we find the contradiction $ \varepsilon m = 0 $. This is a consequence of the fact that \begin{equation}\label{Kernel} \mathrm{Ker} (\omega \cdot \partial_\varphi + \partial_{xxx}) = \mathbb R \end{equation} is non trivial. Both the condition \eqref{f = der g} (which is satisfied by the Hamiltonian nonlinearities) and the reversibility condition \eqref{parity f} allow us to overcome this obstruction, working in a space of functions with zero average. The degeneracy \eqref{Kernel} is also reflected in the fact that the solutions of \eqref{eq:invqp} appear as a $1$-dimensional family $ c + u_c( \varepsilon, \l) $ parametrized by the ``average'' $ c \in \mathbb R $. We could also avoid this degeneracy by adding a ``mass'' term $ + m u $ in \eqref{equation main}, but it does not seem to have physical meaning.
\item In Theorem \ref{thm:main} we have not considered the case in which $ f $ is fully nonlinear and satisfies condition (F) in \eqref{type F}, because any nonlinearity of the form \eqref{f = der g} is automatically quasi-linear (and so the first condition in \eqref{type Q} holds) and \eqref{type F} trivially implies the second condition in \eqref{type Q} with $ \alpha (\varphi ) = 0 $.
\item The solutions $ u \in H^s $ have the same regularity in both variables $ (\varphi ,x) $. This functional setting is convenient when using changes of variables that mix the time and space variables, like the composition operators $\mathcal{A}$, $\mathcal{T}$ in sections \ref{step-1}, \ref{step-4}.
\item In the Hamiltonian case \eqref{Ham-KdV}, the nonlinearity $f$ in \eqref{f Ham} satisfies the reversibility condition \eqref{parity f} if and only if $ F( -\varphi, -x, z_0, -z_1) = F( \varphi, x, z_0, z_1) $. \end{enumerate}
Theorems \ref{thm:main}-\ref{thm:mainrev} are based on a Nash-Moser iterative scheme. An essential ingredient in the proof---which also implies the linear stability of the quasi-periodic solutions---is the {\it reducibility} of the linear operator \begin{equation}\label{linearized op} \mathcal{L} := \mathcal{L} (u) = \omega \cdot \partial_\varphi + (1 + a_3(\varphi ,x)) \partial_{xxx} + a_2(\varphi ,x) \partial_{xx} + a_1(\varphi ,x) \partial_x + a_0 (\varphi ,x) \end{equation} obtained by linearizing \eqref{eq:invqp} at any approximate (or exact) solution $ u $; the coefficients $ a_i (\varphi , x) $ are defined in \eqref{ai formula}. Let $ H^s_x := H^s (\mathbb T) $ denote the usual Sobolev space of functions of $ x \in \mathbb T $ only (phase space).
\begin{theorem}\label{thm:reducibility} {{\bf (Reducibility)}} There exist $ \bar \sigma > 0 $, $ q \in \mathbb N $, depending on $ \nu $, such that: \\[1mm] For every nonlinearity $ f \in C^q $ that satisfies the hypotheses of Theorem \ref{thm:main} or Theorem \ref{thm:mainrev}, for all $\varepsilon \in (0, \varepsilon_0)$, where $\varepsilon_0 := \varepsilon_0 (f, \nu) $ is small enough,
for all $u$ in the ball $\| u \|_{ { \mathfrak s}_0 + \bar \s} \leq 1$, there exists a Cantor-like set $ \L_\infty (u) \subset \Lambda $ such that, for all $ \lambda \in \L_\infty (u) $: \\[1mm]
{i)} for all $ s \in ({\mathfrak s}_0, q - \bar \s) $, if $\| u \|_{ s + \bar \s} < + \infty $ then there exist linear invertible bounded operators $ W_1 $, $ W_2 : H^s (\mathbb T^{\nu+1})\to H^s ( \mathbb T^{\nu+1} ) $ with bounded inverse, that semi-conjugate the linear operator $ {\cal L}(u) $ in \eqref{linearized op} to the diagonal operator $ {\cal L}_\infty $, namely \begin{equation}\label{semicon} {\cal L}(u) = W_1 {\cal L}_\infty W_2^{-1} \, , \quad {\cal L}_\infty := \om \cdot \pa_{\ph} + {\cal D}_\infty \end{equation} where \begin{equation}\label{thm:diag} {\cal D}_\infty := {\rm diag}_{j \in \mathbb Z} \{ \mu_j \}, \quad \mu_j := {\rm i} (-m_3 j^3 + m_1 j) + r_j \, , \quad m_3, m_1 \in \mathbb R \, ,
\quad \sup_j |r_j | \leq C \varepsilon \, . \end{equation} {ii)} For each $ \varphi \in \mathbb T^\nu $ the operators $ W_i $ are also bounded linear bijections of the phase space (see notation \eqref{notationA}) $$ W_i ( \varphi ) \, , W_i^{-1} ( \varphi ) : H^s_x \to H^s_x \, , \quad i = 1,2 \, . $$ A curve $ h(t) = h(t, \cdot ) \in H^{s}_x $ is a solution of the quasi-periodically forced linear KdV equation \begin{equation}\label{KdV:lin} \partial_t h + (1 + a_3(\omega t,x)) \partial_{xxx}h + a_2(\omega t,x) \partial_{xx}h + a_1(\omega t,x) \partial_xh + a_0 (\omega t,x)h = 0 \end{equation} if and only if the transformed curve $$ v(t) := v(t, \cdot ) := W_2^{-1} ( \omega t ) [h(t)] \in H^{s}_x $$ is a solution of the constant coefficients dynamical system \begin{equation}\label{Lin: Red} \partial_t v + {\cal D}_\infty v = 0 \, , \quad {\dot v}_j = - \mu_j v_j \, , \ \ \forall j \in \mathbb Z \, . \end{equation} In the reversible or Hamiltonian case all the $ \mu_j \in {\rm i} \mathbb R $ are purely imaginary. \end{theorem}
The exponents $ \mu_j $ can be effectively computed. All the solutions of \eqref{Lin: Red} are $$ v(t) = \sum_{j \in \mathbb Z} v_j(t) e^{\ii j x} \, , \quad v_j(t) = e^{- \mu_j t } v_j(0) \, . $$ If the $ \mu_j $ are purely imaginary -- as in the reversible or the Hamiltonian cases -- all the solutions of \eqref{Lin: Red} are almost periodic in time (in general) and the Sobolev norm \begin{equation}\label{constant v}
\| v(t) \|_{H^s_x} = \Big( \sum_{j \in \mathbb Z} |v_j(t)|^2 \langle j \rangle^{2s}\Big)^{1/2} =
\Big( \sum_{j \in \mathbb Z} |v_j(0)|^2 \langle j \rangle^{2s}\Big)^{1/2} =
\| v(0) \|_{H^s_x} \end{equation} is constant in time. As a consequence we have:
\begin{theorem}\label{cor:stab} {\bf (Linear stability)} Assume the hypotheses of Theorem \ref{thm:reducibility} and, in addition, that $ f $ is Hamiltonian (see \eqref{f Ham}) or satisfies the reversibility condition \eqref{parity f}. Then, $ \forall s \in ( \mathfrak{s}_0, q - \bar \sigma - \mathfrak s_0) $,
if $ \| u \|_{s+ \mathfrak s_0 + \bar \sigma } < + \infty $, there exists $ K_0 > 0 $ such that for all $\lambda \in \Lambda_\infty(u) $, $\varepsilon \in (0,\varepsilon_0)$, all the solutions of \eqref{KdV:lin} satisfy \begin{equation} \label{stability s}
\| h(t)\|_{H^s_x} \leq K_0 \| h(0)\|_{H^s_x} \, \end{equation} and, for some $ \mathtt a \in (0,1) $, \begin{equation} \label{stability epsilon}
\| h(0)\|_{H^s_x} - \varepsilon^{\mathtt a} K_0 \| h(0)\|_{H^{s+1}_x}
\leq \| h(t)\|_{H^s_x} \leq
\| h(0)\|_{H^s_x} + \varepsilon^{\mathtt a} K_0 \| h(0)\|_{H^{s+1}_x} \, . \end{equation} \end{theorem}
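The conservation law \eqref{constant v} is elementary to check numerically. The following sketch (ours, not from the paper, with illustrative values of $m_3$, $m_1$ and corrections $r_j$, all chosen purely imaginary as in the reversible and Hamiltonian cases) verifies that the $H^s_x$ norm of a solution of \eqref{Lin: Red} is constant in time.

```python
import numpy as np

# Sketch (ours): for purely imaginary exponents mu_j, the flow
# v_j(t) = exp(-mu_j t) v_j(0) has |v_j(t)| = |v_j(0)|, so every
# Sobolev norm is preserved.  m3, m1, r_j below are illustrative.
def sobolev_norm(v, j, s):
    """H^s norm (sum_j |v_j|^2 <j>^{2s})^{1/2}, with <j> = (1+j^2)^{1/2}."""
    return np.sqrt(np.sum(np.abs(v) ** 2 * (1.0 + j ** 2) ** s))

j = np.arange(-20, 21)
m3, m1 = 1.02, 0.03                      # assumed: m3 = 1 + O(eps), m1 = O(eps)
r = 1j * 0.01 * np.sin(j)                # assumed purely imaginary corrections
mu = 1j * (-m3 * j ** 3 + m1 * j) + r    # eigenvalues as in the theorem

rng = np.random.default_rng(0)
v0 = rng.standard_normal(j.size) + 1j * rng.standard_normal(j.size)

t = 7.3
vt = np.exp(-mu * t) * v0                # solution of dot v_j = -mu_j v_j

n0, nt = sobolev_norm(v0, j, s=2.0), sobolev_norm(vt, j, s=2.0)
assert abs(n0 - nt) < 1e-9 * n0          # H^s norm constant in time
```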
Theorems \ref{thm:main}-\ref{cor:stab} are proved in section \ref{sec:proof} collecting all the information from sections \ref{sec:2}-\ref{sec:NM}.
\subsection{Ideas of proof}\label{sec:ideas}
The proof of Theorems \ref{thm:main}-\ref{thm:mainrev} is based on a Nash-Moser iterative scheme in the scale of Sobolev spaces $ H^s $. The main issue concerns the invertibility of the linearized KdV operator $ {\cal L} $ in \eqref{linearized op}, at each step of the iteration, and the proof of the tame estimates \eqref{L-1alta} for its right inverse. This information is obtained in Theorem \ref{inversione linearizzato} by conjugating $ {\cal L} $ to constant coefficients. This is also the key property that implies the stability results for the Hamiltonian and reversible nonlinearities, see Theorems \ref{thm:reducibility}-\ref{cor:stab}.
We now explain the main ideas of the reducibility scheme. The term of $ {\cal L} $ that produces the strongest perturbative effects on the spectrum (and eigenfunctions) is $ a_3 (\varphi ,x) \partial_{xxx} $, and then $ a_2 (\varphi ,x) \partial_{xx} $. The usual KAM transformations are not able to deal with these terms because they are ``too close'' to the identity. Our strategy is the following. First, we conjugate the operator $ \mathcal{L} $ in \eqref{linearized op} to a constant coefficients third order differential operator plus a zero order remainder \begin{equation}\label{L6red} \mathcal{L}_5 = \omega \cdot \partial_\varphi + m_3 \partial_{xxx} + m_1 \partial_x + {\cal R}_0, \quad m_3 = 1 + O(\varepsilon), \ m_1 = O(\varepsilon ) \, , \ m_1, m_3 \in \mathbb R \, , \end{equation} (see \eqref{mL5}), via changes of variables induced by diffeomorphisms of the torus, reparametrization of time, and pseudo-differential operators. This is the goal of section \ref{sec:regu}. All these transformations could be composed into one map, but we find it more convenient to split the regularization procedure into separate steps (sections \ref{step-1}-\ref{step-5}), both to highlight the basic ideas and, especially, in order to derive estimates on the coefficients (section \ref{subsec:mL0 mL5}). Let us make some comments on this procedure. \begin{enumerate} \item \label{comment2} In order to eliminate the space-variable dependence of the highest order perturbation $ a_3 (\varphi ,x) \partial_{xxx} $ (see \eqref{mL1}) we use, in section \ref{step-1}, $\varphi $-dependent changes of variables
like $$ ({\cal A} h)(\varphi , x) := h(\varphi , x + \beta (\varphi , x)) \, . $$ These transformations converge pointwise to the identity as $ \beta \to 0 $, but not in operator norm. If $ \beta $ is odd, $\mathcal{A}$ preserves the reversible structure, see remark \ref{reversibilitˆ step 1}. On the other hand, for the Hamiltonian KdV \eqref{Ham-KdV} we use the modified transformation \begin{equation}\label{operatore1 simplettico} ({\cal A}h)(\varphi ,x):= (1+ \b_x(\varphi , x)) \, h(\varphi , x + \beta (\varphi , x)) = \frac{d}{dx} \big\{ ({\partial_x}^{-1} h )(\varphi , x+ \b(\varphi ,x)) \big\} \end{equation} for all $ h(\varphi , \cdot ) \in H^1_0 (\mathbb T) $. This map is canonical, for each $ \varphi \in \mathbb T^\nu $, with respect to the KdV-symplectic form \eqref{KdV symplectic}, see remark \ref{rem: Ham0}. Thus \eqref{operatore1 simplettico} preserves the Hamiltonian structure and also eliminates the term of order $ \partial_{xx} $, see remark \ref{rem: Ham1}. \item In the second step of section \ref{step-2} we eliminate the time dependence of the coefficients of the highest order spatial derivative operator $ \partial_{xxx} $ by a quasi-periodic time re-parametrization. This procedure preserves the reversible and the Hamiltonian structure, see remarks \ref{reversibilitˆ step 2} and \ref{rem: Ham2}. \item \label{linea} Assumptions (Q) (see \eqref{type Q}) or (F) (see \eqref{type F}) allow us to eliminate terms like $ a (\varphi , x) \partial_{xx} $ along this reduction procedure, see \eqref{mL3}. This is possible, by a conjugation with multiplication operators (see \eqref{cambio3}), if (see \eqref{viapez}) \begin{equation} \label{zero mean in the intro} \int_\mathbb T \frac{a_2(\varphi,x)}{1 + a_3(\varphi,x)} \, dx = 0 \, . \end{equation} If (F) holds, then the coefficient $ a_2(\varphi,x) = 0 $ and \eqref{zero mean in the intro} is satisfied.
If (Q) holds, then an easy computation shows that $ a_2(\varphi,x) = \a(\varphi) \, \partial_x a_3(\varphi,x) $ (using the explicit expression of the coefficients in \eqref{ai formula}), and so \[ \int_\mathbb T \frac{a_2(\varphi,x)}{1 + a_3(\varphi,x)} \, dx = \int_\mathbb T \a(\varphi) \, \partial_x \big( \log[ 1+a_3(\varphi,x)] \big) \, dx = 0 \, . \] In both cases (Q) and (F), condition \eqref{zero mean in the intro} is satisfied.
In the Hamiltonian case there is no need of this step because the symplectic transformation \eqref{operatore1 simplettico} also eliminates the term of order $ \partial_{xx} $, see remark \ref{rem: Ham2}.
We note that without assumptions (Q) or (F) we may always reduce $\mathcal{L}$ to a time-dependent operator with a term $ a (\varphi ) \partial_{xx} $. If $ a(\varphi ) $ were a constant, then this term would even simplify the analysis, killing the small divisors. The pathological situation that we want to exclude by assuming (Q) or (F) is when $ a(\varphi ) $ changes sign. In such a case, this term acts as a friction when $ a(\varphi ) < 0 $ and as an amplifier when $ a(\varphi ) > 0 $.
\item In sections \ref{step-4}-\ref{step-5}, we are finally able to conjugate the linear operator to another one with a coefficient in front of $ \partial_x $ which is constant, i.e. obtaining \eqref{L6red}. In this step we use a transformation of the form $ I + w(\varphi ,x) \partial_x^{-1} $, see \eqref{mS}. In the Hamiltonian case we use the symplectic map $ e^{\pi_0 w(\varphi , x) \partial_x^{-1}} $, see remark \ref{rem:Ham5}.
\item \label{obs4} We can iterate the regularization procedure at any {\it finite} order $ k = 0, 1, \ldots $, conjugating $ {\cal L} $ to an operator of the form ${\mathfrak D} + {\cal R }$, where $$ {\mathfrak D} = \omega \cdot \partial_\varphi + \mathcal{D}, \quad \mathcal{D} = m_3 \partial_{x}^3 + m_1 \partial_x + \ldots + m_{-k} \partial_x^{-k} \, , \quad m_{i} \in \mathbb R \, , $$ has constant coefficients, and the rest $ {\cal R } $ is arbitrarily regularizing in space, namely \begin{equation}\label{Lmany} \partial_x^{k} \circ \mathcal{R} = \text{bounded} \, . \end{equation} However, one cannot iterate this regularization infinitely many times, because it is not a quadratic scheme, and therefore, because of the small divisors, it does not converge. This regularization procedure is sufficient to prove the invertibility of $ {\cal L} $, giving tame estimates for the inverse, in the periodic case, but it does not work for quasi-periodic solutions. The reason is the following. In order to use Neumann series, one needs that ${\mathfrak D} ^{-1} \mathcal{R} = ({\mathfrak D}^{-1} \partial_x^{-k}) (\partial_x^{k} \mathcal{R})$ is bounded, namely, in view of \eqref{Lmany}, that $ {\mathfrak D} ^{-1} \partial_x^{-k} $ is bounded. In the region where the eigenvalues $({\rm i} \omega \cdot l + \mathcal{D}_j)$ of ${\mathfrak D} $ are small, space and time derivatives are related,
namely $|\omega \cdot l| \sim |j|^3$, where $l$ denotes the Fourier index in time and $j$ the Fourier index in space,
and $\mathcal{D}_j = - \ii m_3 j^3 + \ii m_1 j + \ldots$ are the eigenvalues of $\mathcal{D}$. Imposing the first order Melnikov conditions $|{\rm i} \omega \cdot l + \mathcal{D}_j| > \gamma|l|^{-\t}$, in that region, $( {\mathfrak D} ^{-1} \partial_x^{-k})$ has eigenvalues \[
\Big| \frac{1}{({\rm i} \omega \cdot l + \mathcal{D}_j) j^{k}}\, \Big|
< \frac{|l|^\t}{\gamma|j|^{k}} \, < \frac{C |l|^\t}{|\omega \cdot l|^{k/3}} \,. \]
In the periodic case, $\omega \in \mathbb R$, $l \in \mathbb Z$, $|\omega \cdot l| = |\omega| |l|$, and this determines the order of regularization that is required by the procedure: $ k \geq 3 \t$.
In the quasi-periodic case, instead, $|l|$ is not controlled by $|\omega \cdot l|$, and the argument fails. \end{enumerate}
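The remark in step \ref{comment2}, that the composition operators converge to the identity pointwise but not in operator norm, can be illustrated in the simplest case of a constant shift $\beta$: on the Fourier side $({\cal A}-I)e^{\ii jx} = (e^{\ii j\beta}-1)e^{\ii jx}$, so the $L^2$ operator norm is $\sup_j |e^{\ii j\beta}-1|$, which stays of size $2$ for every fixed $\beta \neq 0$. A numerical sketch (ours):

```python
import numpy as np

# Sketch (ours): the shift (A_b h)(x) = h(x + b) tends to the identity
# pointwise as b -> 0, but not in L^2 operator norm, since
# ||A_b - I|| = sup_j |exp(i j b) - 1| is close to 2 for every b != 0.
def opnorm_shift_minus_id(b, jmax=10_000):
    j = np.arange(-jmax, jmax + 1)
    return np.max(np.abs(np.exp(1j * j * b) - 1.0))

def pointwise_error(b):
    # action on the single smooth function h(x) = e^{ix}: error |e^{ib} - 1|
    return abs(np.exp(1j * b) - 1.0)

for b in [1e-1, 1e-2, 1e-3]:
    assert pointwise_error(b) < 2 * b          # vanishes with b
    assert opnorm_shift_minus_id(b) > 1.9      # operator norm does not decay
```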
Once \eqref{L6red} has been obtained, we implement a quadratic reducibility KAM scheme to diagonalize $ {\cal L}_5 $, namely to conjugate $ {\cal L}_5 $ to the diagonal operator $ {\cal L}_\infty $ in \eqref{semicon}. Since we work with finite regularity, we perform a Nash-Moser smoothing regularization (time-Fourier truncation). We use standard KAM transformations, in order to decrease, quadratically at each step, the size of the perturbation $\mathcal{R}$, see section \ref{the-reducibility-step}. This iterative scheme converges (Theorem \ref{thm:abstract linear reducibility}) because the initial remainder $ {\cal R}_0 $ is a bounded operator (of the space variable $x$), and this property is preserved along the iteration. This is the reason for performing the regularization procedure of sections \ref{step-1}-\ref{step-5}. We manage to impose the second order Melnikov non-resonance conditions \eqref{Omgj}, which are required by the reducibility scheme, thanks to the good control of the eigenvalues $ \mu_j = - \ii m_3(\varepsilon,\l) j^3 + \ii m_1(\varepsilon,\l) j + r_j (\varepsilon,\l) $,
where $ \sup_j |r_j (\varepsilon,\l)| = O(\varepsilon ) $.
Note that the eigenvalues $ \mu_j $ need not be purely imaginary, i.e. $ r_j $ could have a non-zero real part which depends on the nonlinearity (unlike the reversible or Hamiltonian case, where $ r_j \in {\rm i} \mathbb R $). In such a case, the invariant torus could be (partially) hyperbolic. Since we do not control the real part of $ r_j $ (i.e. the hyperbolicity may vanish), we perform the measure estimates by proving Diophantine lower bounds for the imaginary part of the small divisors.
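The mechanism by which the quadratic scheme beats the small divisors can be caricatured by an elementary iteration (a toy model of ours, not the actual scheme of section \ref{the-reducibility-step}): even with a divergent loss $N_n^{\tau}$ at each step, the remainder $\varepsilon_{n+1} = C N_n^{\tau} \varepsilon_n^2$ with $N_n = N_0^{\chi^n}$ tends to zero super-exponentially.

```python
# Toy model (ours) of a quadratic reducibility scheme: despite the
# divergent factor N_n**tau coming from the small divisors, the
# quadratic power wins and eps_n -> 0 super-exponentially.
def kam_iterates(eps0, C=10.0, tau=2.0, N0=10.0, chi=1.5, steps=6):
    eps, out = eps0, []
    for n in range(steps):
        out.append(eps)
        eps = C * (N0 ** (chi ** n)) ** tau * eps ** 2
    return out

seq = kam_iterates(1e-8)
assert all(b < a for a, b in zip(seq, seq[1:]))   # monotone decrease
assert seq[-1] < 1e-100                           # super-exponential decay
```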
The final comment concerns the dynamical consequences of Theorem \ref{thm:reducibility}-$ii$). All the above transformations (both the changes of variables of sections \ref{step-1}-\ref{step-5} as well as the KAM matrices of the reducibility scheme) are time-dependent quasi-periodic maps of the phase space (of functions of $ x$ only), see section \ref{sec: dyn redu}. It is thanks to this ``T\"oplitz-in-time" structure that
the linear KdV equation \eqref{KdV:lin} is transformed into the dynamical system
\eqref{Lin: Red}. Note that in \cite{Ioo-Plo-Tol} (and also \cite{B5}, \cite{BBo10},\cite{BB12}) the analogous transformations
do not have this T\"oplitz-in-time structure, and stability information is not obtained.
\emph{Acknowledgements.} We warmly thank W. Craig for many discussions about the reduction approach of the linearized operators and the reversible structure, and P. Bolle for deep observations about the Hamiltonian case. We also thank T. Kappeler, M. Procesi for many useful comments.
\section{Functional setting}\label{sec:2}
For a function $f : \Lambda_o \to E$, $\lambda \mapsto f(\lambda)$, where $(E, \| \ \|_E)$ is a Banach space and $ \L_o $ is a subset of $\mathbb R$, we define the sup-norm and the Lipschitz semi-norm \begin{equation} \label{def norma sup lip}
\| f \|^{\sup}_E
:= \| f \|^{\sup}_{E,\L_o}
:= \sup_{ \lambda \in \Lambda_o } \| f(\lambda) \|_E \, , \quad
\| f \|^{{\rm lip}}_E
:= \| f \|^{{\rm lip}}_{E,\Lambda_o} := \sup_{\begin{subarray}{c} \lambda_1, \lambda_2 \in \Lambda_o \\ \lambda_1 \neq \lambda_2 \end{subarray}}
\frac{ \| f(\lambda_1) - f(\lambda_2) \|_E }{ | \lambda_1 - \lambda_2 | }\,, \end{equation} and, for $ \gamma> 0 $, the Lipschitz norm \begin{equation} \label{def norma Lipg}
\| f \|^{{\rm{Lip}(\g)}}_E
:= \| f \|^{\rm{Lip}(\g)}_{E,\Lambda_o}
:= \| f \|^{\sup}_E + \gamma\| f \|^{{\rm lip}}_E \, .
\end{equation} If $ E = H^s $ we simply denote $ \| f \|^{{\rm{Lip}(\g)}}_{H^s} := \| f \|^{{\rm{Lip}(\g)}}_s $.
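For a scalar-valued family ($E = \mathbb R$ with $\| \cdot \|_E = | \cdot |$) the three norms in \eqref{def norma sup lip}-\eqref{def norma Lipg} can be approximated on a finite sample of parameters. A minimal sketch (ours, with the illustrative family $f(\lambda) = \lambda^2$ on $\Lambda_o = [1/2, 3/2]$):

```python
import numpy as np

# Sketch (ours): sup, Lipschitz and Lip(gamma) norms of a parameter
# family f : Lambda_o -> R, evaluated on a finite grid of parameters.
def norms(f, lambdas, gamma):
    vals = np.array([f(l) for l in lambdas])
    sup = np.max(np.abs(vals))
    lip = max(abs(vals[i] - vals[j]) / abs(lambdas[i] - lambdas[j])
              for i in range(len(lambdas)) for j in range(i))
    return sup, lip, sup + gamma * lip

lams = np.linspace(0.5, 1.5, 101)
sup, lip, Lip = norms(lambda l: l ** 2, lams, gamma=0.1)
assert abs(sup - 2.25) < 1e-12       # sup of l^2 on [1/2, 3/2]
assert abs(lip - 3.0) < 0.05         # Lipschitz constant ~ sup |2l| = 3
assert abs(Lip - (sup + 0.1 * lip)) < 1e-12
```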
As a notation, we write $$ a \leq_s b \quad \ \Longleftrightarrow \quad a \leq C(s) b $$ for some constant $ C(s) $. For $ s = \mathfrak s_0 := (\nu+2) \slash 2 $ we only write $ a \lessdot b $. More generally, the notation $ a \lessdot b $ means $ a \leq C b $ where the constant $ C $ may depend on the data of the problem, namely the nonlinearity $ f $, the number $ \nu $ of frequencies, the Diophantine vector $ \bar \omega $, and the Diophantine exponent $ \tau > 0 $ in the non-resonance conditions in \eqref{Omegainfty}. Also the small constants $ \delta $ in the sequel depend on the data of the problem.
\subsection{Matrices with off-diagonal decay}
Let $ b \in \mathbb N $ and consider the exponential basis $\{ e_i : i \in \mathbb Z^b \} $
of $L^2(\mathbb T^b) $, so that $L^2(\mathbb T^b)$ is the vector space $\{ u = \sum u_i e_i$, $\sum |u_i |^2 < \infty \}$. Any linear operator $A : L^2 (\mathbb T^b) \to L^2 (\mathbb T^b) $ can be represented by the infinite dimensional matrix \[ ( A_{i}^{i'} )_{i, i' \in \mathbb Z^b}, \quad A_{i}^{i'} := ( A e_{i'}, e_{i})_{L^2(\mathbb T^b)}, \quad A u = \sum_{i, i'} A_{i}^{i'} u_{i'} e_{i}. \] We now define the $ s $-norm (introduced in \cite{BBo10}) of an infinite dimensional matrix. \begin{definition}\label{def:norms} The $s$-decay norm of an infinite dimensional matrix $ A := (A_{i_1}^{i_2} )_{i_1, i_2 \in \mathbb Z^b } $ is \begin{equation} \label{matrix decay norm}
\left| A \right|_{s}^2 := \sum_{i \in \mathbb Z^b} \left\langle i \right\rangle^{2s} \Big( \sup_{ \begin{subarray}{c} i_{1} - i_{2} = i \end{subarray}}
| A^{i_2}_{i_1}| \Big)^{2} \, . \end{equation} For parameter dependent matrices $ A := A(\l) $, $\lambda \in \L_o \subseteq \mathbb R$, the definitions \eqref{def norma sup lip} and \eqref{def norma Lipg} become \[
| A |^{\sup}_s := \sup_{ \lambda \in \Lambda_o } | A(\lambda) |_s \, , \quad
| A |^{{\rm lip}}_s := \sup_{\lambda_1 \neq \lambda_2}
\frac{ | A(\lambda_1) - A(\lambda_2) |_s }{ | \lambda_1 - \lambda_2 | }\,, \quad
| A |^{{\rm{Lip}(\g)}}_s := | A |^{\sup}_s + \gamma| A |^{{\rm lip}}_s \,. \] \end{definition} Clearly, the matrix decay norm \eqref{matrix decay norm} is increasing with respect to the index $ s $, namely $$
| A |_s \leq | A |_{s'} \, , \quad \forall s < s'. $$ The $ s $-norm is designed to estimate the polynomial off-diagonal decay of matrices; indeed it implies $$
|A_{i_1}^{i_2}| \leq \frac{|A|_s}{\langle i_1 - i_2 \rangle^s} \, , \quad \forall i_1, i_2 \in \mathbb Z^b \, , $$ and, on the diagonal elements, \begin{equation}\label{Aii}
|A_i^i | \leq |A|_0 \, , \quad |A_i^i |^{\rm lip} \leq |A|_0^{\rm lip} \, . \end{equation} We now list some properties of the matrix decay norm proved in \cite{BBo10}. \begin{lemma}\label{lem:multi} {\bf (Multiplication operator)} Let $ p = \sum_i p_i e_i \in H^s(\mathbb T^b)$. The multiplication operator $ h \mapsto p h$ is represented by the T\"oplitz matrix $ T_i^{i'} = p_{i - i'} $ and \begin{equation}\label{multiplication}
|T|_s = \| p \|_s. \end{equation} Moreover, if $p = p(\l)$ is a Lipschitz family of functions, \begin{equation}\label{multiplication Lip}
|T|_s^{\rm{Lip}(\g)} = \| p \|_s^{\rm{Lip}(\g)}\,. \end{equation} \end{lemma}
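Identity \eqref{multiplication} can be tested directly in the case $b = 1$: for a trigonometric polynomial $p$ the sup over each diagonal $i_1 - i_2 = i$ of the T\"oplitz matrix is exactly $|p_i|$, so the $s$-decay norm of $T$ coincides with the Sobolev norm of $p$. A numerical sketch (ours, with $\langle i \rangle := (1+i^2)^{1/2}$ and illustrative coefficients):

```python
import numpy as np

def bracket(i):
    return np.sqrt(1.0 + i ** 2)          # <i> := (1 + |i|^2)^{1/2}

def decay_norm(A, idx, s):
    """s-decay norm of a matrix A indexed by idx x idx (case b = 1)."""
    total = 0.0
    for d in range(idx.min() - idx.max(), idx.max() - idx.min() + 1):
        sup = max((abs(A[i1, i2]) for i1 in range(len(idx))
                   for i2 in range(len(idx)) if idx[i1] - idx[i2] == d),
                  default=0.0)
        total += bracket(d) ** (2 * s) * sup ** 2
    return np.sqrt(total)

# Toeplitz matrix of h -> p h, with Fourier coefficients p_k supported
# in |k| <= 2 (illustrative values).
idx = np.arange(-6, 7)
p = {0: 1.0, 1: 0.3 - 0.1j, -1: 0.3 + 0.1j, 2: 0.05j, -2: -0.05j}
T = np.array([[p.get(int(i1 - i2), 0.0) for i2 in idx] for i1 in idx])

s = 2.0
sob = np.sqrt(sum(bracket(k) ** (2 * s) * abs(c) ** 2 for k, c in p.items()))
assert abs(decay_norm(T, idx, s) - sob) < 1e-10   # |T|_s = ||p||_s
```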
The $s$-norm satisfies classical algebra and interpolation inequalities.
\begin{lemma} \label{prodest} {\bf (Interpolation)} For all $s \geq s_0 > b/2 $ there are $ C(s) \geq C(s_0) \geq 1 $ such that \begin{equation} \label{interpm}
| A B|_{s} \leq C(s) |A|_{s_0} |B|_s + C(s_0) |A|_s |B|_{s_0} \, . \end{equation} In particular, the algebra property holds \begin{equation} \label{algebra}
|A B |_s \leq C(s) |A|_s |B|_s \, . \end{equation} If $A = A(\lambda)$ and $B = B(\lambda)$ depend in a Lipschitz way on the parameter $\lambda \in \Lambda_o \subset \mathbb R$, then \begin{align} \label{algebra Lip}
|A B |_s^{{\rm{Lip}(\g)}}
& \leq C(s) |A|_s^{{\rm{Lip}(\g)}} |B|_s^{{\rm{Lip}(\g)}} \, , \\ \label{interpm Lip}
|A B|_{s}^{{\rm{Lip}(\g)}}
& \leq C(s) |A|_{s}^{{\rm{Lip}(\g)}} |B|_{s_0}^{{\rm{Lip}(\g)}}
+ C(s_0) |A|_{s_0}^{{\rm{Lip}(\g)}} |B|_{s}^{{\rm{Lip}(\g)}} . \end{align} \end{lemma}
For all $n \geq 1$, using \eqref{algebra} with $ s = s_0 $, we get \begin{equation}\label{Mnab}
| A^n |_{s_0} \leq [C(s_0)]^{n-1} | A |_{s_0}^n \qquad \text{and} \qquad
| A^n |_{s} \leq n [ C(s_0) |A|_{s_0} ]^{n-1} C(s) | A |_{s} \, , \ \forall s \geq s_0 \, .
\end{equation} Moreover \eqref{interpm Lip} implies that \eqref{Mnab} also holds for Lipschitz norms $| \ |_s^{\rm{Lip}(\g)}$.
The $ s $-decay norm controls the Sobolev norm, also for Lipschitz families: \begin{equation}\label{interpolazione norme miste}
\| A h \|_s \leq C(s) \big(|A|_{s_0} \| h \|_s + |A|_{s} \| h \|_{s_0} \big), \ \
\| A h \|_s^{\rm{Lip}(\g)}
\leq C(s) \big(|A|_{s_0}^{\rm{Lip}(\g)} \| h \|_s^{\rm{Lip}(\g)} + |A|_{s}^{\rm{Lip}(\g)} \| h \|_{s_0}^{\rm{Lip}(\g)} \big). \end{equation} \begin{lemma}\label{lem:inverti} Let $ \Phi = I + \Psi $ with $\Psi := \Psi(\l)$, depending in a Lipschitz way on the parameter $\lambda \in \L_o \subset \mathbb R $,
such that $ C(s_0) | \Psi |_{s_0}^{{\rm{Lip}(\g)}} \leq 1/ 2 $. Then $ \Phi $ is invertible and, for all $ s \geq s_0 > b / 2 $, \begin{equation}\label{PhINV}
| \Phi^{-1} - I |_s \leq C(s) | \Psi |_s \, , \quad
| \Phi^{-1} |_{s_0}^{{\rm{Lip}(\g)}} \leq 2 \, , \quad
| \Phi^{-1} - I |_{s}^{{\rm{Lip}(\g)}} \leq C(s) | \Psi |_{s}^{{\rm{Lip}(\g)}} \, .
\end{equation} If $ \Phi_i = I + \Psi_i $, $ i = 1,2 $, satisfy $ C(s_0) | \Psi_i |_{s_0}^{{\rm{Lip}(\g)}} \leq 1/ 2 $, then \begin{equation}\label{derivata-inversa-Phi} \vert \Phi_2^{-1} - \Phi_1^{-1} \vert_{s} \leq C(s) \big( \vert \Psi_2 - \Psi_1 \vert_{s} + \big( \vert \Psi_1 \vert_s + \vert \Psi_2 \vert_s \big) \vert \Psi_2 - \Psi_1 \vert_{s_0} \big) \, . \end{equation} \end{lemma}
\begin{pf} Estimates \eqref{PhINV} follow by Neumann series and \eqref{Mnab}. To prove \eqref{derivata-inversa-Phi}, observe that
\[ \Phi_2^{-1} - \Phi_1^{-1} = \Phi_1^{-1} (\Phi_1 - \Phi_2) \Phi_2^{-1} = \Phi_1^{-1} (\Psi_1 - \Psi_2) \Phi_2^{-1} \] and use \eqref{interpm}, \eqref{PhINV}. \end{pf}
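The Neumann-series argument behind \eqref{PhINV} can be illustrated in finite dimension (our sketch, using the spectral norm in place of the $s$-decay norm): if $\|\Psi\| \leq 1/2$, then $\Phi^{-1} = \sum_{n \geq 0} (-\Psi)^n$ and $\|\Phi^{-1} - I\| \leq \|\Psi\|/(1 - \|\Psi\|)$.

```python
import numpy as np

# Sketch (ours): Neumann-series inversion of Phi = I + Psi for small
# Psi, with the spectral norm standing in for the s-decay norm.
rng = np.random.default_rng(1)
n = 40
Psi = rng.standard_normal((n, n))
Psi *= 0.4 / np.linalg.norm(Psi, 2)          # ||Psi|| = 0.4 <= 1/2

Phi = np.eye(n) + Psi
neumann = sum((-1) ** k * np.linalg.matrix_power(Psi, k) for k in range(60))
assert np.linalg.norm(neumann - np.linalg.inv(Phi), 2) < 1e-10

# quantitative bound: ||Phi^{-1} - I|| <= ||Psi|| / (1 - ||Psi||)
err = np.linalg.norm(np.linalg.inv(Phi) - np.eye(n), 2)
assert err <= 0.4 / (1 - 0.4) + 1e-12
```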
\subsubsection{T\"oplitz-in-time matrices}
Let now $ b := \nu + 1 $ and $$ e_i (\varphi , x) := e^{{\rm i} (l \cdot \varphi + j x)}, \quad i := (l, j) \in \mathbb Z^b , \quad l \in \mathbb Z^\nu, \quad j \in \mathbb Z \, . $$ An important sub-algebra of matrices is formed by the matrices T\"oplitz in time defined by \begin{equation}\label{Topliz matrix}
A^{(l_2, j_2)}_{(l_1, j_1)} := A^{j_2}_{j_1}(l_1 - l_2 )\, , \end{equation} whose decay norm \eqref{matrix decay norm} is \begin{equation}\label{decayTop}
|A|_s^2 = \sum_{j \in \mathbb Z, l \in \mathbb Z^\nu} \sup_{j_1 - j_2 = j} |A_{j_1}^{j_2}(l)|^2 \langle l,j \rangle^{2 s} \, . \end{equation} These matrices are identified with the $ \varphi $-dependent family of operators \begin{equation}\label{Aphi} A(\varphi ) := \big( A_{j_1}^{j_2} (\varphi )\big)_{j_1, j_2 \in \mathbb Z} \, , \quad A_{j_1}^{j_2} (\varphi ) := \sum_{l \in \mathbb Z^\nu} A_{j_1}^{j_2}(l) e^{\ii l \cdot \varphi } \end{equation} which act on functions of the $x$-variable as \begin{equation}\label{notationA} A(\varphi ) : h(x) = \sum_{j \in \mathbb Z} h_j e^{\ii jx} \mapsto A(\varphi ) h(x) = \sum_{j_1, j_2 \in \mathbb Z} A_{j_1}^{j_2} (\varphi ) h_{j_2} e^{\ii j_1 x} \, .
\end{equation} We still denote by $ | A(\varphi ) |_s $ the $ s $-decay norm of the matrix in \eqref{Aphi}.
\begin{lemma}\label{Aphispace} Let $ A $ be a T\"oplitz matrix as in \eqref{Topliz matrix}, and $\mathfrak s_0 := (\nu + 2)/2$ (as defined above). Then $$
|A(\varphi )|_{s} \leq C(\mathfrak s_0) |A|_{s+ \mathfrak s_0} \, , \quad \forall \varphi \in \mathbb T^\nu \, . $$ \end{lemma}
\begin{pf} For all $ \varphi \in \mathbb T^\nu $ we have \begin{eqnarray*}
|A(\varphi )|_{s}^2 & := & \sum_{j \in \mathbb Z} \langle j \rangle^{2 s} \sup_{j_1 - j_2 = j} |A_{j_1}^{j_2}(\varphi )|^2
\lessdot
\sum_{j \in \mathbb Z} \langle j \rangle^{2 s}
\sup_{j_1 - j_2 = j} \sum_{l \in \mathbb Z^\nu} |A_{j_1}^{j_2}(l)|^2 \langle l \rangle^{2 {\mathfrak s}_0} \\
& \lessdot & \sum_{j \in \mathbb Z} \sup_{j_1 - j_2 = j} \sum_{l \in \mathbb Z^\nu} |A_{j_1}^{j_2}(l)|^2 \langle l,j \rangle^{2 (s + {\mathfrak s}_0)}
\lessdot \sum_{j \in \mathbb Z, l \in \mathbb Z^\nu} \sup_{j_1 - j_2 = j} |A_{j_1}^{j_2}(l)|^2 \langle l,j \rangle^{2 (s + {\mathfrak s}_0)}\\
& \stackrel{\eqref{decayTop}} \lessdot & |A|_{s + {\mathfrak s}_0}^2 , \end{eqnarray*} whence the lemma follows. \end{pf}
Given $ N \in \mathbb N $, we define the smoothing operator $\Pi_N$ as \begin{equation}\label{SM} \big(\Pi_N A \big)^{(l_2, j_2)}_{(l_1, j_1)} := \begin{cases}
A^{(l_2, j_2)}_{(l_1, j_1)} \qquad \, {\rm if} \ | l_1 - l_2| \leq N \\ 0 \quad \qquad \qquad {\rm otherwise.} \end{cases} \end{equation} \begin{lemma} The operator $ \Pi_N^\bot := I - \Pi_N $ satisfies \begin{equation}\label{smoothingN}
| \Pi_N^\bot A |_{s} \leq N^{- \b} | A |_{s+\b} \, , \quad
| \Pi_N^\bot A |_{s}^{{\rm{Lip}(\g)}} \leq N^{- \b} | A |_{s+\b}^{{\rm{Lip}(\g)}} \, , \quad \beta \geq 0, \end{equation} where in the second inequality $A := A(\l)$ is a Lipschitz family $\lambda \in \L$. \end{lemma}
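In its simplest scalar incarnation, estimate \eqref{smoothingN} says that discarding the Fourier modes $|l| > N$ of a function costs a factor $N^{-\beta}$ at the price of $\beta$ extra derivatives. A numerical sketch (ours):

```python
import numpy as np

# Sketch (ours) of the smoothing estimate for a single function:
# ||Pi_N^perp u||_s <= N^{-beta} ||u||_{s+beta}, since <l> > N on the
# discarded modes |l| > N.
def hs_norm(c, l, s):
    return np.sqrt(np.sum(np.abs(c) ** 2 * (1.0 + l ** 2) ** s))

l = np.arange(-200, 201)
rng = np.random.default_rng(2)
c = rng.standard_normal(l.size) / (1 + np.abs(l)) ** 4   # illustrative u

N, s, beta = 30, 1.0, 2.0
tail = np.where(np.abs(l) > N, c, 0.0)                   # Pi_N^perp u
assert hs_norm(tail, l, s) <= N ** (-beta) * hs_norm(c, l, s + beta)
```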
\subsection{Dynamical reducibility}\label{sec: dyn redu}
All the transformations that we construct in sections \ref{sec:regu} and \ref{sec:redu} act on functions $ u(\varphi , x ) $ (of time and space). They can also be seen as: \begin{itemize} \item[$(a)$] transformations of the phase space $H^s_x$ that depend quasi-periodically on time (sections \ref{step-1}, \ref{step-3}-\ref{step-5} and \ref{sec:redu}); \item[$(b)$] quasi-periodic reparametrizations of time (section \ref{step-2}). \end{itemize} This observation allows us to interpret the conjugacy procedure from a dynamical point of view.
Consider a quasi-periodic linear dynamical system \begin{equation}\label{SD} \partial_t u = L(\omega t) u. \end{equation} We want to describe how \eqref{SD} changes under the action of a transformation of type $(a)$ or $(b)$.
Let $A(\omega t)$ be of type $(a)$, and let $u = A(\omega t)v$. Then \eqref{SD} is transformed into the linear system \begin{equation}\label{sistematrasformato} \partial_{t} v = L_+(\omega t)v \quad {\rm where} \quad L_{+}(\omega t) = A(\omega t)^{-1} L(\omega t) A(\omega t) - A(\omega t)^{-1} \partial_t A(\omega t) \, . \end{equation} The transformation $A(\omega t)$ may be regarded as acting on functions $ u(\varphi , x) $ as \begin{equation}\label{trasfo spazio} ({\tilde A} u)(\varphi ,x) := \big(A(\varphi )u(\varphi , \cdot )\big) (x) := A(\varphi )u(\varphi , x) \end{equation} and one can check that $ ({\tilde A}^{-1} u)(\varphi ,x) = A^{-1}(\varphi ) u(\varphi , x) $. The operator associated to \eqref{SD} (on quasi-periodic functions) \begin{equation}\label{associa} {\cal L} := \omega \cdot \partial_\varphi - L(\varphi ) \end{equation} transforms under the action of $ {\tilde A} $ into $$ {\tilde A}^{-1} {\cal L} {\tilde A} = \omega \cdot \partial_\varphi - L_+(\varphi ), $$ which is exactly the linear system in \eqref{sistematrasformato}, acting on quasi-periodic functions.
Now consider a transformation of type $(b)$, namely a change of the time variable \begin{equation}\label{time repar} \tau := t + \a(\omega t) \ \Leftrightarrow \ t = \tau + \tilde \alpha (\omega \t); \quad (Bv)(t) := v(t + \a(\omega t)), \ \ (B^{-1} u)(\t) = u(\tau + \tilde \a(\omega \t)), \end{equation} where $\alpha = \a(\varphi)$, $\varphi \in \mathbb T^\nu$, is a $2\pi$-periodic function of $\nu$ variables (in other words, $ t \mapsto t + \a(\omega t) $ is the diffeomorphism of $\mathbb R$ induced by the transformation $B$). If $ u(t) $ is a solution of \eqref{SD}, then $ v(\tau) $, defined by $ u = Bv$, solves \begin{equation}\label{timeqpDS} \partial_\tau v(\t) = L_+ (\omega \tau) v (\tau) \, , \quad
L_+ (\omega \tau) := \Big( \frac{L(\omega t)}{1+ (\om \cdot \pa_{\ph} \a) (\omega t)} \Big)_{|t= \tau + \tilde \alpha (\omega \t)} \,. \end{equation} We may regard the associated transformation on quasi-periodic functions defined by $$ (\tilde B h)(\varphi ,x) := h( \varphi + \omega \alpha (\varphi ), x) \, , \quad (\tilde B^{-1} h)(\varphi ,x) := h( \varphi + \omega \tilde \alpha (\varphi ), x) \, , $$ as in step \ref{step-2}, where we calculate $$ B^{-1} {\cal L} B = \rho(\varphi ) {\cal L}_+ \, , \quad \rho(\varphi ) := B^{-1} (1+ \omega \cdot \partial_\varphi \a) \, , $$ \begin{equation}\label{timeqp} {\cal L}_+ = \omega \cdot \partial_\varphi - L_+(\varphi ) \, , \ \ L_+(\varphi ) := \frac{1}{\rho(\varphi )} L(\varphi + \omega {\tilde \a}(\varphi )) \, . \end{equation} \eqref{timeqp} is nothing but the linear system \eqref{timeqpDS}, acting on quasi-periodic functions.
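The transformation rule \eqref{sistematrasformato} can be verified on a finite-dimensional example (our sketch, with illustrative matrices $A$, $L$): integrate the original and the transformed systems and compare $u(T)$ with $A\, v(T)$.

```python
import numpy as np

# Sketch (ours): if u = A(t) v and du/dt = L(t) u, then dv/dt = L_+ v
# with L_+ = A^{-1} L A - A^{-1} dA/dt.  Checked by integrating both
# systems with RK4 and comparing the trajectories.
w = 1.3
A = lambda t: np.array([[np.cos(w * t), -np.sin(w * t)],
                        [np.sin(w * t),  np.cos(w * t)]])
dA = lambda t: w * np.array([[-np.sin(w * t), -np.cos(w * t)],
                             [ np.cos(w * t), -np.sin(w * t)]])
L = lambda t: np.array([[0.0, 1.0 + 0.1 * np.cos(w * t)],
                        [-1.0, 0.0]])
Lplus = lambda t: np.linalg.inv(A(t)) @ (L(t) @ A(t) - dA(t))

def rk4(F, y0, T, n):
    y, h = y0.astype(float), T / n
    for k in range(n):
        t = k * h
        k1 = F(t) @ y; k2 = F(t + h/2) @ (y + h/2 * k1)
        k3 = F(t + h/2) @ (y + h/2 * k2); k4 = F(t + h) @ (y + h * k3)
        y = y + h / 6 * (k1 + 2*k2 + 2*k3 + k4)
    return y

u0 = np.array([1.0, 0.5])
T = 2.0
u_T = rk4(L, u0, T, 4000)
v_T = rk4(Lplus, np.linalg.inv(A(0.0)) @ u0, T, 4000)
assert np.linalg.norm(u_T - A(T) @ v_T) < 1e-6
```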
\subsection{Real, reversible and Hamiltonian operators}
We consider the space of {\it real} functions \begin{equation}\label{funzionireali} Z := \{ u(\varphi ,x) = \overline{u(\varphi ,x)} \}, \end{equation} and of
{\it even} (in space-time), respectively
{\it odd}, functions
\begin{equation}\label{funzionipari} X := \{ u(\varphi ,x) = u(-\varphi ,-x) \} \, , \quad Y := \{ u(\varphi ,x) = -u(-\varphi ,-x) \} \, . \end{equation}
\begin{definition}\label{def:RR} An operator $ R $ is \begin{enumerate} \item {\sc real} if $ R : Z \to Z $ \item {\sc reversible} if $ R : X \to Y $ \item {\sc reversibility-preserving} if $ R : X \to X $, $ R : Y \to Y $. \end{enumerate} \end{definition} The composition of a reversible and a reversibility-preserving operator is reversible.
The above properties may be characterized in terms of matrix elements. \begin{lemma} \label{lem:PR} We have $$ R : X \to Y \ \Longleftrightarrow \ R^{-j}_{-k}(-l) = - R^j_{k}(l) \, , \qquad R : X \to X \ \Longleftrightarrow \ R^{-j}_{-k}(-l) = R^j_k (l) \, , $$ $$ R : Z \to Z \quad \Longleftrightarrow \quad \overline{R^j_{k}(l)} = R^{-j}_{-k}(-l) \, . $$ \end{lemma}
For the Hamiltonian KdV the phase space is $ H^1_0 := \{ u \in H^1 (\mathbb T) \, : \, \int_{\mathbb T} u(x) dx = 0 \} $ and it is more convenient to adopt the dynamical systems perspective.
\begin{definition} A time dependent linear vector field $ X(t) : H_0^1 \to H_0^1$ is \textsc{Hamiltonian} if $ X(t) = \partial_x G(t) $ for some real linear operator $ G(t) $ which is self-adjoint with respect to the $ L^2 $ scalar product.
If $ G(t) = G(\omega t)$ is quasi-periodic in time, we say that the associated operator $ \omega \cdot \partial_{\varphi } - \partial_x G( \varphi ) $ (see \eqref{associa}) is Hamiltonian. \end{definition}
\begin{definition} A map $ A : H_0^1 \to H_0^1$ is
\textsc{symplectic} if
\begin{equation}\label{mappa simplettica}
\Omega(A u, A v) = \Omega (u, v) \, , \quad \forall u,v \in H_0^1 \, , \end{equation} where the symplectic 2-form $ \Omega $ is defined in \eqref{KdV symplectic}. Equivalently $ A^T \partial_x^{-1} A = \partial_x^{-1} $.
If $ A (\varphi ) $, $ \forall \varphi \in \mathbb T^\nu $, is a family of symplectic maps we say that the corresponding operator in \eqref{trasfo spazio} is symplectic.
\end{definition}
Under a time dependent family of symplectic transformations
$ u = \Phi(t) v $ the linear Hamiltonian equation $$ u_t = \partial_x G(t) u \quad {\rm with \ Hamiltonian} \quad H(t, u) := \tfrac12 \, \big(G(t)u ,u \big)_{L^2}
$$ transforms into the equation \[ v_t = \partial_x E(t) v, \quad E(t) := \Phi(t)^T G(t) \Phi(t) - \Phi(t)^T \partial_x^{-1} \Phi_t(t) \] with Hamiltonian \begin{equation}\label{transformed KdV} K(t,v) = \tfrac12\, \big( G(t) \Phi (t) v , \Phi(t) v \big)_{L^2} - \tfrac12\, \big( \partial_x^{-1} \Phi_t(t)v, \Phi (t) v \big)_{L^2} \, . \end{equation} Note that $E(t)$ is self-adjoint with respect to the $L^2$ scalar product because $\Phi^T \partial_x^{-1} \Phi_t + \Phi_t^T \partial_x^{-1} \Phi = 0$.
\section{Regularization of the linearized operator}\label{sec:regu}
Our existence proof is based on a Nash-Moser iterative scheme. The main step concerns the invertibility of the linearized operator (see \eqref{linearized op}) \begin{equation} \label{mL} \mathcal{L} h = \mathcal{L}(\lambda,u,\varepsilon) h := \om \cdot \pa_{\ph} h + (1 + a_3) \partial_{xxx} h + a_2 \partial_{xx} h + a_1 \partial_{x} h + a_0 h \end{equation} obtained linearizing \eqref{eq:invqp} at any approximate (or exact) solution $ u $. The coefficients $a_i = a_i(\varphi,x) = a_i(u,\varepsilon)(\varphi,x)$ are periodic functions of $(\varphi,x)$, depending on $u,\varepsilon$. They are explicitly obtained from the partial derivatives of $\varepsilon f(\varphi,x,z)$ as \begin{equation} \label{ai formula} a_i(\varphi,x) = \varepsilon (\partial_{z_i} f)\big( \varphi, x, u(\varphi,x), u_x(\varphi,x), u_{xx}(\varphi,x), u_{xxx}(\varphi,x) \big), \quad i=0,1,2,3. \end{equation} The operator $\mathcal{L}$ depends on $\lambda$ because $\omega = \lambda \bar\omega$. Since $\varepsilon$ is a (small) fixed parameter, we simply write $\mathcal{L}(\lambda,u)$ instead of $\mathcal{L}(\lambda,u,\varepsilon)$, and $a_i(u)$ instead of $a_i(u,\varepsilon)$. We emphasize that the coefficients $a_i$ do not depend explicitly on the parameter $\lambda$ (they depend on $ \lambda $ only through $ u(\l) $).
In the Hamiltonian case \eqref{f Ham} the linearized KdV operator \eqref{mL} has the form $$ {\cal L}h = \om \cdot \pa_{\ph} h + \partial_{x} \Big( \partial_x \big\{ A_1 (\varphi ,x) \partial_x h \big\} - A_0 (\varphi ,x) h \Big) $$ where $$ A_1 (\varphi ,x) := 1 + \varepsilon (\partial_{z_1 z_1} F) (\varphi ,x,u,u_x)\, , \quad A_0 (\varphi ,x) := - \varepsilon \partial_x \{ (\partial_{z_0 z_1} F)(\varphi,x,u,u_x) \} + \varepsilon (\partial_{z_0 z_0} F) (\varphi ,x,u,u_x) $$ and it is generated by the quadratic Hamiltonian $$ H_L(\varphi , h) := \frac{1}{2} \int_{\mathbb T} \Big( A_0 (\varphi , x) h^2 + A_1 (\varphi , x) h_x^2 \Big) \, dx \,, \quad h \in H^1_0 \,. $$ \begin{remark} In the reversible case, i.e. the nonlinearity $ f$ satisfies \eqref{parity f} and $ u \in X $ (see \eqref{funzionipari}, \eqref{solPP}) the coefficients $ a_i $ satisfy the parity \begin{equation}\label{a3a1a2a0} a_3, a_1 \in X, \quad a_2, a_0 \in Y, \end{equation} and $\mathcal{L}$ maps $X$ into $Y$, namely $\mathcal{L}$ is reversible, see Definition \ref{def:RR}. \end{remark}
\begin{remark} \label{rem:coeff in cases Q F} In the Hamiltonian case \eqref{f Ham}, assumption (Q)-\eqref{type Q} is automatically satisfied (with $ \alpha (\varphi ) = 2 $) because $$ f(\varphi,x,u,u_x, u_{xx}, u_{xxx}) = a(\varphi, x, u, u_x) + b(\varphi, x, u, u_x) u_{xx} + c(\varphi, x, u, u_x) u_{xx}^2 + d(\varphi, x, u, u_x) u_{xxx} $$ where $$ b = 2 (\partial_{z_1 z_1 x}^3 F) + 2 z_1 (\partial_{z_1 z_1 z_0}^3 F), \qquad c = \partial_{z_1}^3 F, \qquad d = \partial_{z_1}^2 F, $$ and so $$ \partial_{z_2} f = b + 2 z_2 c = 2(d_x + z_1 d_{z_0} + z_2 d_{z_1}) = 2 \Big( \partial^2_{z_3 x} f + z_1 \partial^2_{z_3 z_0} f + z_2 \partial^2_{z_3 z_1} f + z_3 \partial^2_{z_3 z_2} f \Big) \, . $$ \end{remark}
The coefficients $a_i$, together with their derivative $\partial_u a_i(u)[h]$ with respect to $u$ in the direction $h$, satisfy tame estimates:
\begin{lemma} \label{lemma:stime ai} Let $ f \in C^q $, see \eqref{f classe Cq}. For all
$ \mathfrak s_{0} \leq s \leq q - 2 $, $ \| u \|_{\mathfrak s_0 + 3} \leq 1 $, we have, for all $i = 0,1,2,3$, \begin{align} \label{stima coeff ai 1}
\| a_i(u) \|_s
& \leq \varepsilon \, C(s) \big( 1 + \| u \|_{s+3} \big), \\ \label{stima coeff ai 2}
\| \partial_{u} a_i(u)[h] \|_{s}
& \leq \varepsilon \, C(s) \big( \| h \|_{s+3} + \| u \|_{s+3} \| h \|_{\mathfrak s_0+3} \big) \, . \end{align}
If, moreover, $ \lambda \mapsto u(\l) \in H^s $ is a Lipschitz family satisfying $ \| u \|_{\mathfrak s_0 + 3}^{{\rm{Lip}(\g)}} \leq 1 $ (see \eqref{def norma Lipg}), then \begin{equation} \label{stima coeff ai 3}
\| a_i \|_{s}^{{\rm{Lip}(\g)}} \leq \varepsilon \, C(s) \big( 1 + \| u \|_{s+3}^{{\rm{Lip}(\g)}} \big) \, . \end{equation} \end{lemma}
\begin{pf} The tame estimate \eqref{stima coeff ai 1} follows by Lemma \ref{lemma:composition of functions, Moser}$(i)$ applied to the function $\partial_{z_i}f$, $i=0,\ldots,3 $, which is valid for $s+1 \leq q$. The tame bound \eqref{stima coeff ai 2} for \[ \partial_u a_i(u) [h] \stackrel{\eqref{ai formula}} = \varepsilon \sum_{k=0}^3 (\partial^2_{z_k z_i} f)\big( \varphi, x, u, u_x, u_{xx}, u_{xxx} \big) \, \partial_x^k h, \quad i = 0, \ldots, 3, \] follows by \eqref{asymmetric tame product} and applying Lemma \ref{lemma:composition of functions, Moser}$(i)$ to the functions $\partial^2_{z_k z_i}f$, which gives \[
\| (\partial^2_{z_k z_i} f)\big( \varphi, x, u, u_x, u_{xx}, u_{xxx} \big) \|_s
\leq C(s) \| f \|_{C^{s+2}} (1 + \| u \|_{s+3}), \]
for $s+2 \leq q$. The Lipschitz bound \eqref{stima coeff ai 3} follows similarly. \end{pf}
\subsection{Step 1. Change of the space variable} \label{step-1}
We consider a $ \varphi $-dependent family of diffeomorphisms of the $ 1 $-dimensional torus $ \mathbb T $ of the form \begin{equation}\label{cambio1} y = x + \beta(\varphi ,x), \end{equation} where $ \beta $ is a (small) real-valued function, $2\pi$ periodic in all its arguments. The change of variables (\ref{cambio1}) induces on the space of functions the linear operator \begin{equation}\label{operatore1} ({\cal A}h)(\varphi ,x):= h(\varphi , x + \beta (\varphi , x)). \end{equation} The operator $ {\cal A} $ is invertible, with inverse \begin{equation}\label{inverse} ({\cal A}^{-1} v)(\varphi ,y) = v(\varphi , y + {\tilde \beta}(\varphi ,y) ), \end{equation} where $ y \mapsto y + {\tilde \beta}(\varphi ,y) $ is the inverse diffeomorphism of \eqref{cambio1}, namely \begin{equation} \label{INVDIF} x = y + {\tilde \beta}(\varphi ,y) \quad \Longleftrightarrow \quad y = x + \beta(\varphi ,x). \end{equation} \begin{remark}\label{rem: Ham0} In the Hamiltonian case \eqref{f Ham} we use, instead of \eqref{operatore1}, the modified change of variable \eqref{operatore1 simplettico} which is symplectic, for each $ \varphi \in \mathbb T^\nu $. Indeed, setting $ U := \partial_x^{-1} u $ (and neglecting to write the $ \varphi $-dependence) \begin{align*} \Omega ({\cal A}u, {\cal A}v) & = \int_{\mathbb T} \partial_{x}^{-1} \Big( \partial_x \big\{ U(x+ \beta (x) ) \big\} \Big) \, (1+ \b_x (x) ) v(x+ \beta (x) ) \, dx \\ & = \int_{\mathbb T} U(x+ \beta (x)) (1+ \b_x (x) ) v(x+ \beta (x) ) dx - c \int_{\mathbb T} (1+ \b_x (x) ) v(x+ \beta (x) ) dx \\ & = \int_{\mathbb T} U(y) v(y ) dy = \Omega (u,v) \, , \quad v \in H^1_0 \, , \end{align*} where $ c $ is the average of $ U(x+ \beta (x) ) $ in $ \mathbb T $. The inverse operator of \eqref{operatore1 simplettico} is $ ({\cal A}^{-1} v) (\varphi , y) = (1+ {\tilde \beta}_y (\varphi , y)) v( y + \tilde \beta (\varphi , y)) $ which is also symplectic. \end{remark}
\noindent Now we calculate the conjugate $ {\cal A}^{-1} {\cal L} {\cal A} $ of the linearized operator $\mathcal{L}$ in \eqref{mL} with $ {\cal A} $ in \eqref{operatore1}.
The conjugate $ {\cal A} ^{-1} a {\cal A} $ of any multiplication operator $a : h(\varphi,x) \mapsto a(\varphi,x) h(\varphi,x)$ is the multiplication operator $( {\cal A} ^{-1} a)$ that maps $v(\varphi,y) \mapsto ( {\cal A} ^{-1} a)(\varphi,y) \, v(\varphi,y)$. By conjugation, the differential operators become \begin{align*} {\cal A} ^{-1} \om \cdot \pa_{\ph} {\cal A} & = \om \cdot \pa_{\ph} + \{ {\cal A} ^{-1}(\om \cdot \pa_{\ph} \b) \} \, \partial_y, \\ {\cal A} ^{-1} \partial_x {\cal A} & = \{ {\cal A} ^{-1}(1 + \b_x) \} \, \partial_y, \\ {\cal A} ^{-1} \partial_{xx} {\cal A} & = \{ {\cal A} ^{-1} (1+\b_x)^2 \} \, \partial_{yy} + \{ {\cal A} ^{-1} (\b_{xx}) \} \, \partial_y, \\ {\cal A} ^{-1} \partial_{xxx} {\cal A} & = \{ {\cal A} ^{-1} (1+\b_x)^3 \} \, \partial_{yyy} + \{ 3 {\cal A} ^{-1}[ (1+\b_x) \b_{xx}] \} \, \partial_{yy} + \{ {\cal A} ^{-1} (\b_{xxx}) \} \, \partial_y, \end{align*} where all the coefficients $\{ {\cal A} ^{-1} (\ldots) \}$ are periodic functions of $(\varphi,y)$. Thus (recall \eqref{mL}) \begin{equation} \label{inv1s} \mathcal{L}_1 := {\cal A}^{-1} \mathcal{L} {\cal A} = \om \cdot \pa_{\ph} + b_3(\varphi,y) \partial_{yyy} + b_2(\varphi,y) \partial_{yy} + b_1(\varphi,y) \partial_{y} + b_0(\varphi,y) \end{equation} where \begin{alignat}{2} \label{b1 b3} b_3 & = {\cal A} ^{-1}[(1+a_3) (1+\b_x)^3], & \qquad b_1 & = {\cal A} ^{-1}[\om \cdot \pa_{\ph} \beta + (1+a_3) \b_{xxx} + a_2 \b_{xx} + a_1 (1+\b_x)], \\ b_0 & = {\cal A} ^{-1}(a_0), & \qquad b_2 & = {\cal A} ^{-1}[(1+a_3) 3 (1+\b_x) \b_{xx} + a_2 (1+\b_x)^2]. \label{b0 b2} \end{alignat} We look for $\b(\varphi,x)$ such that the coefficient $b_3(\varphi,y)$ of the highest order derivative $\partial_{yyy}$ in \eqref{inv1s} does not depend on $y$, namely \begin{equation} \label{b(ph) 1} b_3(\varphi,y) \stackrel{\eqref{b1 b3}} = {\cal A}^{-1} [(1+a_3) (1+\b_x)^3] (\varphi,y) = b(\varphi ) \end{equation} for some function $b(\varphi)$ of $\varphi $ only.
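The conjugation rules above all follow from the chain rule. For instance, for the first order derivative, $$ \partial_x \big[ h(\varphi, x + \beta(\varphi,x)) \big] = \big( 1 + \b_x(\varphi,x) \big) \, (\partial_y h)(\varphi, x + \beta(\varphi,x)) \, , $$ namely $ \partial_x {\cal A} = (1 + \b_x) \, {\cal A} \, \partial_y $, whence $ {\cal A}^{-1} \partial_x {\cal A} = \{ {\cal A}^{-1}(1 + \b_x) \} \, \partial_y $. The second and third order rules are obtained by iterating this identity and applying the Leibniz rule.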
Since ${\cal A}$ changes only the space variable, ${\cal A}b = b$ for every function $b(\varphi)$ that is independent of $y$. Hence \eqref{b(ph) 1} is equivalent to \begin{equation}\label{eq:ste1} \big( 1 + a_3(\varphi,x) \big) \big( 1 + \b_x(\varphi,x) \big)^3 = b(\varphi), \end{equation} namely \begin{equation} \label{primaequazione} \beta_{x} = \rho_0, \qquad \rho_0(\varphi ,x) := b(\varphi)^{1/3} \big( 1 + a_3(\varphi,x) \big)^{-1/3} - 1. \end{equation} The equation \eqref{primaequazione} has a solution $\b$, periodic in $ x $, if and only if $ \int_{\mathbb T}{\rho_0(\varphi ,x) \, dx} = 0 $. This condition uniquely determines \begin{equation} \label{c} b(\varphi ) = \left( \frac{1}{2\pi}\int_{\mathbb T} \big( 1 + a_3(\varphi ,x) \big)^{-\frac13} \, dx \right)^{-3}. \end{equation} Then we fix the solution (with zero average) of \eqref{primaequazione}, \begin{equation}\label{defb1b0} \beta(\varphi ,x) := \, (\partial_x^{-1} \rho_0)(\varphi,x) \, , \end{equation} where $ \partial_x^{-1} $ is defined by linearity as \begin{equation}\label{dx-1} \partial_x^{-1} e^{\ii j x} := \frac{ e^{\ii j x} }{\ii j}\, \quad \forall j \in \mathbb Z \setminus \{ 0 \}, \qquad \partial_x^{-1} 1 = 0. \end{equation} In other words, $\partial_x^{-1} h$ is the primitive of $h$ with zero average in $x $.
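Let us verify that, with the choice \eqref{c} of $ b(\varphi) $, the function $ \rho_0 $ in \eqref{primaequazione} has indeed zero average in $ x $: $$ \int_{\mathbb T} \rho_0(\varphi,x) \, dx = b(\varphi)^{1/3} \int_{\mathbb T} \big( 1 + a_3(\varphi,x) \big)^{-1/3} dx - 2\pi \stackrel{\eqref{c}}{=} 2\pi - 2\pi = 0 \, , $$ so that \eqref{defb1b0} defines a function $ \beta $ which is $ 2\pi $-periodic in $ x $.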
With this choice of $ \beta $, we get (see \eqref{inv1s}, \eqref{b(ph) 1}) \begin{equation} \label{mL1} \mathcal{L}_1 = {\cal A}^{-1} \mathcal{L} {\cal A} = \om \cdot \pa_{\ph} + b_3(\varphi) \partial_{yyy} + b_2(\varphi,y) \partial_{yy} + b_1(\varphi,y) \partial_y + b_0(\varphi,y), \end{equation} where $ b_3(\varphi) := b(\varphi) $ is defined in $ \eqref{c} $.
\begin{remark}\label{reversibilitˆ step 1} In the reversible case, $ \beta \in Y $ because $a_3 \in X$, see \eqref{a3a1a2a0}. Therefore the operator $ {\cal A} $ in \eqref{operatore1}, as well as $ {\cal A}^{-1} $ in \eqref{inverse}, maps $ X \to X $ and $ Y \to Y $, namely it is reversibility-preserving, see Definition \ref{def:RR}. By \eqref{a3a1a2a0} the coefficients of $\mathcal{L}_1$ (see \eqref{b1 b3}, \eqref{b0 b2}) have parity \begin{equation}\label{b3b1b2b0} b_3, b_1 \in X, \qquad b_2, b_0 \in Y, \end{equation} and $\mathcal{L}_1$ maps $X \to Y$, namely it is reversible. \end{remark}
\begin{remark}\label{rem: Ham1} In the Hamiltonian case \eqref{f Ham} the resulting operator $ {\cal L}_1 $ in \eqref{mL1} is Hamiltonian and $ b_2 (\varphi , y) = 2 \partial_y b_3 (\varphi ) \equiv 0 $. Actually, by \eqref{transformed KdV}, the corresponding Hamiltonian has the form \begin{equation}\label{Ham K} K(\varphi , v) = \frac{1}{2} \int_{\mathbb T} \Big( b_3(\varphi ) v_y^2 + B_0 (\varphi ,y) v^2 \Big) \, dy \, , \end{equation} for some function $ B_0 (\varphi ,y) $. \end{remark}
\subsection{Step 2. Time reparametrization} \label{step-2}
The goal of this section is to make constant the coefficient of the highest order spatial derivative operator $\partial_{yyy}$ of $ {\cal L}_1 $ in \eqref{mL1}, by a quasi-periodic reparametrization of time. We consider a diffeomorphism of the torus $ \mathbb T^{\nu} $ of the form \begin{equation}\label{cambio2} \varphi \mapsto \varphi + \omega \a(\varphi ), \quad \varphi \in \mathbb T^{\nu}, \quad \alpha (\varphi ) \in \mathbb R \, , \end{equation} where $ \alpha $ is a (small) {\it real} valued function, $ 2\pi $-periodic in all its arguments. The induced linear operator on the space of functions is \begin{equation}\label{operatore2} (Bh)(\varphi ,y):= h \big( \varphi + \omega \a(\varphi ), \,y \big) \end{equation} whose inverse is \begin{equation}\label{B-1} (B^{-1} v)(\vartheta,y):= v \big( \vartheta + \omega {\tilde \a}(\vartheta), \,y \big) \end{equation} where $ \varphi = \vartheta + \omega {\tilde \a}(\vartheta) $ is the inverse diffeomorphism of $ \vartheta = \varphi + \omega \a(\varphi ) $. By conjugation, the differential operators become \begin{equation} \label{anche def rho} B^{-1} \om \cdot \pa_{\vartheta} B = \rho(\vartheta)\, \om \cdot \pa_{\vartheta} , \quad B^{-1} \partial_y B = \partial_y, \quad \rho := B^{-1}(1 + \om \cdot \pa_{\ph} \a). \end{equation} Thus, see \eqref{mL1}, \begin{equation}\label{L1B} B^{-1} \mathcal{L}_1 B = \rho \, \om \cdot \pa_{\vartheta} + \{ B^{-1} b_3 \} \, \partial_{yyy} + \{ B^{-1} b_2 \} \, \partial_{yy} + \{ B^{-1} b_1 \} \, \partial_{y} + \{ B^{-1} b_0 \} . \end{equation} We look for $\a(\varphi)$ such that the (variable) coefficients of the highest order derivatives ($\om \cdot \pa_{\vartheta}$ and $\partial_{yyy}$) are proportional, namely \begin{equation} \label{B-1b3} \{ B^{-1} b_3\}(\vartheta) = m_3 \rho(\vartheta) = m_3 \{ B^{-1}(1 + \om \cdot \pa_{\ph} \a)\}(\vartheta) \end{equation} for some constant $m_3 \in \mathbb R$. 
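The first identity in \eqref{anche def rho} is again a consequence of the chain rule: $$ \om \cdot \pa_{\ph} \big[ h( \varphi + \omega \a(\varphi), y) \big] = \big( 1 + \om \cdot \pa_{\ph} \a(\varphi) \big) \, (\om \cdot \pa_{\vartheta} h)( \varphi + \omega \a(\varphi), y) \, , $$ namely $ \om \cdot \pa_{\ph} \, B = (1 + \om \cdot \pa_{\ph} \a) \, B \, \om \cdot \pa_{\vartheta} $, whence $ B^{-1} \om \cdot \pa_{\ph} B = \{ B^{-1} (1 + \om \cdot \pa_{\ph} \a) \} \, \om \cdot \pa_{\vartheta} = \rho \, \om \cdot \pa_{\vartheta} $.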
Since $ B $ is invertible, this is equivalent to requiring that \begin{equation} \label{proportional} b_3(\varphi) = m_3 \big( 1 + \om \cdot \pa_{\ph} \a(\varphi) \big). \end{equation} Integrating on $\mathbb T^\nu$ determines the value of the constant $m_3$, \begin{equation} \label{mu 3} m_3 := \frac{1}{(2\pi)^\nu} \, \int_{\mathbb T^\nu} b_3(\varphi) \, d\varphi. \end{equation} Thus we choose the unique solution of \eqref{proportional} with zero average \begin{equation} \label{alpha} \a(\varphi) := \frac{1}{m_3} \, (\om \cdot \pa_{\ph})^{-1} (b_3 - m_3)(\varphi) \end{equation} where $ (\om \cdot \pa_{\ph})^{-1} $ is defined by linearity $$ (\om \cdot \pa_{\ph})^{-1} e^{\ii l \cdot \varphi } := \frac{e^{\ii l \cdot \varphi }}{{\rm i} \omega \cdot l} \, , \ l \neq 0 \, , \quad (\om \cdot \pa_{\ph})^{-1} 1 = 0 \, . $$ With this choice of $ \alpha $ we get (see \eqref{L1B}, \eqref{B-1b3}) \begin{equation}\label{mL2} B^{-1} \mathcal{L}_1 B = \rho \, \mathcal{L}_2, \qquad \mathcal{L}_2 := \om \cdot \pa_{\vartheta} + m_3 \, \partial_{yyy} + c_2(\vartheta,y) \, \partial_{yy} + c_1(\vartheta,y) \, \partial_{y} + c_0(\vartheta,y), \end{equation} where \begin{equation}\label{coefficienti mL2} c_i := \frac{B^{-1} b_i}{\rho}\,, \quad i = 0,1,2. \end{equation}
\begin{remark}\label{reversibilitˆ step 2} In the reversible case, $\a$ is odd because $b_3$ is even (see \eqref{b3b1b2b0}), and $ B $ is reversibility preserving. Since $\rho $ (defined in \eqref{anche def rho}) is even, the coefficients $ c_1 \in X$, $ c_2, c_0 \in Y $ and $\mathcal{L}_2 : X \to Y $ is reversible. \end{remark}
\begin{remark}\label{rem: Ham2} In the Hamiltonian case, the operator $ \mathcal{L}_2 $ is still Hamiltonian (the new Hamiltonian is the old one at the new time, divided by the factor $ \rho $). The coefficient $ c_2 (\vartheta, y) \equiv 0 $ because $ b_2 \equiv 0 $, see remark \ref{rem: Ham1}. \end{remark}
\subsection{Step 3. Descent method: step zero}\label{step-3}
The aim of this section is to eliminate the term of order $\partial_{yy}$ from $ {\cal L}_2 $ in \eqref{mL2}.
Consider the multiplication operator \begin{equation}\label{cambio3} {\cal M} h := v(\vartheta , y) h \end{equation} where the function $v$ is periodic in all its arguments.
Calculate the difference \begin{equation} {\cal L}_2 \, {\cal M} - {\cal M} \, (\omega \cdot \partial_{\vartheta} + m_3 \partial_{yyy}) = T_2 \partial_{yy} + T_{1} \partial_{y} + T_{0}, \label{carmelino} \end{equation} where \begin{equation} \label{T2} T_{2} := 3 m_3 v_{y} + c_{2} v, \quad T_{1} := 3 m_3 v_{yy} + 2 c_{2} v_{y} + c_{1} v, \quad T_{0} := \o\cdot\partial_{\vartheta} v + m_3 v_{yyy} + c_{2} v_{yy} + c_{1} v_{y} + c_0 v . \end{equation} To eliminate the factor $T_2$, we need \begin{equation}\label{sted} 3 m_3 v_{y} + c_{2} v = 0. \end{equation} Equation \eqref{sted} has the periodic solution \begin{equation} \label{descent ordine zero} v(\vartheta,y) = \exp \Big\{ - \frac{1}{3m_3} \, (\partial_y^{-1} c_2)(\vartheta,y) \Big\} \end{equation} provided that \begin{equation}\label{zeromean} \int_\mathbb T c_2(\vartheta,y) \, dy = 0. \end{equation} Let us prove \eqref{zeromean}. By \eqref{coefficienti mL2}, \eqref{anche def rho}, for each $\vartheta = \varphi + \omega \a(\varphi)$ we get \[ \int_\mathbb T c_2(\vartheta,y) \, dy = \frac{1}{ \{ B^{-1}(1 + \om \cdot \pa_{\ph} \a)\}(\vartheta)} \, \int_\mathbb T (B^{-1} b_2)(\vartheta,y) \, dy = \frac{1}{ 1 + \om \cdot \pa_{\ph} \a(\varphi) } \, \int_\mathbb T b_2(\varphi,y) \, dy. \] By the definition \eqref{b0 b2} of $b_2$ and
changing variable $ y = x + \b(\varphi,x) $ in the integral (recall \eqref{operatore1}) \begin{align} \int_\mathbb T b_2(\varphi,y) \, dy & \stackrel{\eqref{b0 b2}} = \int_\mathbb T \Big( (1+a_3) 3 (1+\b_x) \b_{xx} + a_2 (1+\b_x)^2 \Big) \, (1 + \b_x) \, dx \nonumber \\ & \stackrel{ \eqref{eq:ste1}} = b(\varphi) \Big\{ 3 \int_\mathbb T \frac{ \b_{xx}(\varphi,x)}{1 + \b_x(\varphi,x)} \, dx + \int_\mathbb T \frac{ a_2(\varphi,x) }{ 1 + a_3(\varphi,x) } \, dx \Big\}. \label{viapez} \end{align} The first integral in \eqref{viapez} is zero because $\b_{xx} / (1 + \b_x) = \partial_x \log (1 + \b_x)$. The second one is zero because of assumptions (Q)-\eqref{type Q} or (F)-\eqref{type F}, see \eqref{zero mean in the intro}. As a consequence \eqref{zeromean} is proved, and
\eqref{sted} has the periodic solution $v$ defined in \eqref{descent ordine zero}. Note that $ v $ is close to $ 1 $ for $ \varepsilon $ small. Hence the multiplication operator $ {\cal M} $ defined in \eqref{cambio3} is invertible and $ {\cal M} ^{-1} $ is the multiplication operator for $ 1 / v $. By \eqref{carmelino} and since $T_2 = 0$, we deduce \begin{equation} \label{mL3} \mathcal{L}_3 := {\cal M} ^{-1} \mathcal{L}_2 {\cal M} = \om \cdot \pa_{\vartheta} + m_3 \partial_{yyy} + d_{1}(\vartheta , y)\partial_y + d_{0}(\vartheta , y) , \qquad d_i := \frac{T_i}{v},\quad i=0,1. \end{equation}
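Note that $ v $ in \eqref{descent ordine zero} indeed solves \eqref{sted}: by \eqref{zeromean} the function $ c_2 $ has zero average in $ y $, so $ \partial_y \partial_y^{-1} c_2 = c_2 $ and $$ v_y = - \frac{c_2}{3 m_3} \, v \qquad \Longrightarrow \qquad 3 m_3 v_y + c_2 v = 0 \, . $$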
\begin{remark}\label{reversibilitˆ step 3} In the reversible case, since $c_2$ is odd (see Remark \ref{reversibilitˆ step 2}), the function $v$ is even; hence $ {\cal M} $, $ {\cal M} ^{-1}$ are reversibility preserving and, by \eqref{T2} and \eqref{mL3}, $d_1 \in X$ and $d_0 \in Y$, which implies that $\mathcal{L}_3 : X \rightarrow Y$. \end{remark}
\begin{remark}\label{rem: Ham3} In the Hamiltonian case, there is no need to perform this step because $ c_2 \equiv 0 $, see remark \ref{rem: Ham2}. \end{remark}
\subsection{Step 4. Change of space variable (translation)}\label{step-4}
Consider the change of the space variable $$ z = y + p(\vartheta) $$ which induces the operators \begin{equation}\label{MM-1} {\cal T} h(\vartheta,y) := h(\vartheta, y + p(\vartheta)), \quad {\cal T}^{-1} v(\vartheta,z) := v(\vartheta, z - p(\vartheta)). \end{equation} The differential operators become \[ {\cal T}^{-1} \om \cdot \pa_{\vartheta} {\cal T} = \om \cdot \pa_{\vartheta} + \{ \om \cdot \pa_{\vartheta} p(\vartheta) \} \, \partial_z, \qquad {\cal T}^{-1} \partial_y {\cal T} = \partial_z. \] Thus, by \eqref{mL3}, \[ \mathcal{L}_4 := {\cal T}^{-1} \mathcal{L}_3 {\cal T} = \om \cdot \pa_{\vartheta} + m_3 \partial_{zzz} + e_1(\vartheta,z) \, \partial_z + e_0(\vartheta,z) \] where \begin{equation}\label{e0 e1} e_1(\vartheta,z) := \om \cdot \pa_{\vartheta} p(\vartheta) + ({\cal T}^{-1} d_1) (\vartheta,z), \quad e_0(\vartheta,z) := ({\cal T} ^{-1} d_0)(\vartheta,z). \end{equation} Now we look for $p(\vartheta)$ such that the average \begin{equation} \label{condiz fastidio} \frac{1}{2\pi} \, \int_\mathbb T e_1(\vartheta,z) \, dz = m_1 \, , \quad \forall \vartheta \in \mathbb T^\nu \, , \end{equation} for some constant $m_1 \in \mathbb R$ (independent of $ \vartheta $). Equation \eqref{condiz fastidio} is equivalent to \begin{equation}\label{equazione omologica step 4} \omega \cdot \partial_{\vartheta} p = m_1 - \frac{1}{2\pi} \, \int_{\mathbb T} d_{1}(\vartheta , y) \, dy =: V(\vartheta). \end{equation} The equation \eqref{equazione omologica step 4} has a periodic solution $p(\vartheta)$ if and only if $\int_{\mathbb T^{\nu}}V(\vartheta) \, d \vartheta = 0$. Hence we have to define \begin{equation}\label{m-p} m_1 := \frac{1}{(2\pi)^{\nu+1}} \, \int_{\mathbb T^{\nu + 1}} d_{1}(\vartheta , y) \, d \vartheta d y \end{equation} and \begin{equation}\label{def:p} p(\vartheta) := (\omega \cdot \partial_{\vartheta})^{-1}V(\vartheta) \, . 
\end{equation} With this choice of $p$, after renaming the space-time variables $z = x$ and $\vartheta = \varphi$, we have \begin{equation} \label{mL4} \mathcal{L}_4 = \om \cdot \pa_{\ph} + m_3 \partial_{xxx} + e_1(\varphi,x) \, \partial_x + e_0(\varphi,x), \qquad \frac{1}{2\pi} \, \int_{\mathbb T} e_1(\varphi,x) \, dx = m_1 \, , \ \ \forall \varphi \in \mathbb T^\nu \, . \end{equation}
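For completeness, we check that $ V $ defined in \eqref{equazione omologica step 4} has zero average on $ \mathbb T^\nu $, so that $ p $ in \eqref{def:p} is well defined: by the choice \eqref{m-p} of $ m_1 $, $$ \frac{1}{(2\pi)^\nu} \int_{\mathbb T^\nu} \Big( \frac{1}{2\pi} \int_{\mathbb T} d_1(\vartheta,y) \, dy \Big) \, d\vartheta = m_1 \, , $$ namely the $ \vartheta $-average of the ($y$-averaged) coefficient $ d_1 $ is exactly $ m_1 $.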
\begin{remark}\label{reversibilitˆ step 4} By \eqref{equazione omologica step 4}, \eqref{def:p} and since $ d_1 \in X $ (see remark \ref{reversibilitˆ step 3}), the function $p$ is odd. Then $ {\cal T} $ and $ {\cal T}^{-1}$ defined in \eqref{MM-1} are reversibility preserving and the coefficients $ e_1, e_0 $ defined in \eqref{e0 e1} satisfy $e_1 \in X$, $e_0 \in Y$. Hence $\mathcal{L}_4 : X \rightarrow Y$ is reversible. \end{remark}
\begin{remark}\label{rem: Ham4} In the Hamiltonian case the operator $ \mathcal{L}_4 $ is Hamiltonian, because the operator $ {\cal T} $ in \eqref{MM-1} is symplectic (it is a particular case of the change of variables \eqref{operatore1 simplettico} with $ \beta (\varphi ,x) = p( \varphi ) $). \end{remark}
\subsection{Step 5. Descent method: conjugation by pseudo-differential operators}\label{step-5}
The goal of this section is to conjugate $ \mathcal{L}_4 $ in \eqref{mL4} to an operator of the form $ \omega \cdot \partial_{\varphi } + m_3 \partial_{xxx} + m_1 \partial_{x} + \mathcal{R} $ where the constants $m_3$, $m_1$ are defined in \eqref{mu 3}, \eqref{m-p}, and $\mathcal{R}$ is a pseudo-differential operator of order $0$.
Consider an operator of the form \begin{equation}\label{mS} \mathcal{S} := I + w(\varphi,x) \partial_{x}^{-1} \end{equation} where $w : \mathbb T^{\nu + 1}\rightarrow \mathbb R$ and the operator $\partial_{x}^{-1}$ is defined in \eqref{dx-1}. Note that $\partial_x^{-1} \partial_x = \partial_x \partial_x^{-1} = \pi_0$, where $ \pi_0 $ is the $ L^2 $-projector on the subspace $ H_0 := \{ u(\varphi,x) \in L^2 (\mathbb T^{\nu+1})\, : \, \int_{\mathbb T} u(\varphi, x) \, dx = 0 \} $.
A direct computation shows that the difference \begin{equation} \label{carmelino2} \mathcal{L}_4 \mathcal{S} - \mathcal{S} (\omega \cdot \partial_{\varphi } + m_3 \partial_{xxx} + m_1 \partial_{x}) = r_1 \partial_{x} + r_0 + r_{-1} \partial_{x}^{-1} \end{equation} where (using $ \partial_x \pi_0 = \pi_0 \partial_x = \partial_x $, $ \partial_x^{-1} \partial_{xxx} = \partial_{xx} $) \begin{eqnarray} r_{1} & := & 3m_3 w_{x} + e_{1}(\varphi ,x) - m_1 \label{r1}\\ r_{0}& := & e_{0} + \big( 3m_3 w_{xx} + e_{1}w - m_1 w \big)\pi_{0}\label{r0}\\ r_{-1}& := & \omega \cdot \partial_{\varphi }w + m_3 w_{xxx} + e_{1} w_{x} + e_{0} w \,.\label{r-1} \end{eqnarray} We look for a periodic function $ w (\varphi , x )$ such that $ r_1 = 0$. By \eqref{r1} and \eqref{condiz fastidio} we take \begin{equation}\label{w} w = \frac{1}{3m_3}\partial_{x}^{-1}[m_1 - e_{1}]. \end{equation} For $ \varepsilon $ small enough the operator $ {\cal S} $ is invertible and we obtain, by \eqref{carmelino2}, \begin{equation}\label{mL5} \mathcal{L}_5 := \mathcal{S}^{-1} \mathcal{L}_4 \mathcal{S} = \omega \cdot \partial_{\varphi } + m_3 \partial_{xxx} + m_1 \partial_{x} + {\cal R}, \qquad {\cal R} := \mathcal{S}^{-1} ( r_{0} + r_{-1} \partial_{x}^{-1} ). \end{equation}
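Note that $ w $ in \eqref{w} indeed cancels $ r_1 $: since $ \partial_x \partial_x^{-1} = \pi_0 $ and the function $ m_1 - e_1(\varphi, \cdot) $ has zero average in $ x $ by \eqref{mL4}, we get $$ 3 m_3 w_x = \pi_0 [ m_1 - e_1 ] = m_1 - e_1 \qquad \Longrightarrow \qquad r_1 \stackrel{\eqref{r1}}{=} 3 m_3 w_x + e_1 - m_1 = 0 \, . $$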
\begin{remark}\label{reversibilitˆ step 5} In the reversible case, the function $w \in Y$, because $e_1 \in X$, see remark \ref{reversibilitˆ step 4}. Then $\mathcal{S}$, $\mathcal{S}^{-1}$ are reversibility preserving. By \eqref{r0} and \eqref{r-1}, $r_0 \in Y$ and $r_{-1} \in X$. Then the operators $ \mathcal{R}, \mathcal{L}_5 $ defined in \eqref{mL5} are reversible, namely $\mathcal{R}, \mathcal{L}_5 : X \rightarrow Y$. \end{remark}
\begin{remark}\label{rem:Ham5} In the Hamiltonian case, we consider, instead of \eqref{mS}, the modified operator \begin{equation}\label{modifiedmS} \mathcal{S} := e^{\pi_0 w(\varphi ,x) \partial_{x}^{-1}} := I + \pi_0 w(\varphi ,x) \partial_{x}^{-1} + \ldots \end{equation} which, for each $ \varphi \in \mathbb T^\nu $, is symplectic. Actually $ \mathcal{S} $ is the time-one flow map of the Hamiltonian vector field $ \pi_0 w(\varphi , x) \partial_{x}^{-1} $ which is generated by the Hamiltonian $$ H_\mathcal{S}(\varphi , u) := - \frac12\, \int_{\mathbb T} w(\varphi , x) \big( \partial_x^{-1} u \big)^2 dx \, \, , \quad u \in H^1_0 \, . $$ The corresponding $ \mathcal{L}_5 $ in \eqref{mL5} is Hamiltonian. Note that the operators \eqref{modifiedmS} and \eqref{mS} differ only by pseudo-differential smoothing operators of order $ O( \partial_{x}^{-2} ) $ and of smaller size $ O( w^2 ) = O(\varepsilon^2) $. \end{remark}
\subsection{Estimates on $\mathcal{L}_5$} \label{subsec:mL0 mL5}
Summarizing the steps performed in the previous sections \ref{step-1}-\ref{step-5}, we have (semi)-conjugated the operator $ \mathcal{L} $ defined in \eqref{mL} to the operator $\mathcal{L}_5 $ defined in \eqref{mL5}, namely \begin{equation} \label{Phi 1 2 def} \mathcal{L} = \Phi_1 \mathcal{L}_5 \Phi_2^{-1}, \qquad \Phi_1 := {\cal A} B \rho {\cal M} {\cal T} \mathcal{S}, \quad \Phi_2 := {\cal A} B {\cal M} {\cal T} \mathcal{S} \end{equation} (where $ \rho $ means the multiplication operator for the function $ \rho $ defined in \eqref{anche def rho}).
In the next lemma we give tame estimates for $\mathcal{L}_5$ and $\Phi_1, \Phi_2$. We define the constants \begin{equation}\label{costanti lemma mostro} \sigma := 2\t_0 + 2 \nu + 17, \quad \s' := 2\t_0 + \nu + 14 \end{equation} where $ \t_0 $ is defined in \eqref{omdio} and $ \nu $ is the number of frequencies.
\begin{lemma} \label{lemma:mostro} Let $ f \in C ^q $, see \eqref{f classe Cq}, and
$ \mathfrak s_0 \leq s \leq q - \sigma $. There exists $\delta > 0 $ such that, if $ \varepsilon \g_0 ^{-1} < \delta $ (the constant $ \g_0 $ is defined in \eqref{omdio}), then, for all \begin{equation} \label{palla di sicurezza}
\| u \|_{\mathfrak s_0 + \s} \leq 1 \, , \end{equation} $(i)$ the transformations ${\Phi}_1, {\Phi}_2$ defined in \eqref{Phi 1 2 def} are invertible operators of $H^s(\mathbb T^{\nu+1})$, and satisfy \begin{equation} \label{stima Phi 12 nel lemma}
\| \Phi_i h \|_s + \| \Phi_i^{-1} h \|_s
\leq C(s) \big( \| h \|_s + \| u \|_{s+\s} \| h \|_{\mathfrak s_0} \big), \end{equation} for $i = 1, 2$. Moreover, if $u(\lambda)$, $h(\lambda)$ are Lipschitz families with \begin{equation} \label{palla Lip di sicurezza}
\| u \|_{\mathfrak s_0 + \s}^{{\rm{Lip}(\g)}} \leq 1, \end{equation} then \begin{equation} \label{stima Lip Phi 12 nel lemma}
\| \Phi_i h \|_s^{{\rm{Lip}(\g)}} + \| \Phi_i^{-1} h \|_s^{{\rm{Lip}(\g)}}
\leq C(s) \big( \| h \|_{s+3}^{{\rm{Lip}(\g)}} + \| u \|_{s+\s}^{{\rm{Lip}(\g)}} \| h \|_{\mathfrak s_0+3}^{{\rm{Lip}(\g)}} \big), \quad i = 1,2. \end{equation} $(ii)$ The constant coefficients $m_3, m_1$ of $\mathcal{L}_5$ defined in \eqref{mL5} satisfy \begin{align} \label{coefficienti costanti 1}
| m_3 - 1| + |m_1| & \leq \varepsilon C \, , \\ \label{coefficienti costanti 2}
| \partial_u m_3(u)[h]| + | \partial_u m_1(u)[h]|
& \leq \varepsilon C \| h \|_{\s} \, . \end{align} Moreover, if $u(\lambda)$ is a Lipschitz family satisfying \eqref{palla Lip di sicurezza}, then \begin{equation} \label{Lete 3}
| m_3 - 1 |^{{\rm{Lip}(\g)}} + | m_1 |^{{\rm{Lip}(\g)}} \leq \varepsilon C . \end{equation} $(iii)$ The operator $\mathcal{R}$ defined in \eqref{mL5} satisfies: \begin{align} \label{stima R 1}
| \mathcal{R} |_s
& \leq \varepsilon C(s) (1 + \| u \|_{s + \s}), \\ \label{stima R 2}
| \partial_u \mathcal{R}(u)[h] \, |_{s}
& \leq \varepsilon C(s) \big( \| h \|_{s + \s'}
+ \| u \|_{s + \s} \| h \|_{\mathfrak s_0 + \s'} \big) \, , \end{align} where $ \sigma > \s' $ are defined in \eqref{costanti lemma mostro}. Moreover, if $u(\lambda)$ is a Lipschitz family satisfying \eqref{palla Lip di sicurezza}, then \begin{equation} \label{stima R 3}
| \mathcal{R} |_s^{{\rm{Lip}(\g)}}
\leq \varepsilon C(s) (1 + \| u \|_{s + \s}^{{\rm{Lip}(\g)}}). \end{equation} Finally, in the reversible case, the maps $\Phi_i, \Phi_i^{-1}$, $i=1,2$ are reversibility preserving and $\mathcal{R}, \mathcal{L}_5 : X \rightarrow Y$ are reversible. In the Hamiltonian case the operator $ \mathcal{L}_5 $ is Hamiltonian. \end{lemma}
\begin{pf} In section \ref{sec:proofs}. \end{pf}
\begin{lemma} \label{lemma:stime stabilita Phi 12} Under the same hypotheses as Lemma \ref{lemma:mostro}, for all $\varphi \in \mathbb T^\nu$, the operators ${\cal A}(\varphi)$, $ {\cal M} (\varphi)$, $ {\cal T} (\varphi)$, $\mathcal{S}(\varphi)$ are invertible operators of the phase space $H^s_x := H^s(\mathbb T)$, with \begin{align} \label{A(ph)}
\| {\cal A}^{\pm 1}(\varphi) h \|_{H^s_x} & \leq
C(s) \big( \| h \|_{H^s_x} + \| u \|_{s + \mathfrak s_0 + 3} \| h \|_{H^1_x} \big), \\ \label{A(ph)-I}
\| ({\cal A}^{\pm 1}(\varphi) - I) h \|_{H^s_x} & \leq
\varepsilon C(s) \big( \| h \|_{H^{s+1}_x} + \| u \|_{s + \mathfrak s_0 + 3} \| h \|_{H^2_x} \big), \\ \label{Phi mM mS (ph)}
\| ({\cal M} (\varphi) {\cal T}(\varphi) \mathcal{S}(\varphi))^{\pm 1} h \|_{H^s_x}
& \leq C(s) \big( \| h \|_{H^s_x}
+ \| u \|_{s + \s} \| h \|_{H^1_x} \big), \\ \label{Phi mM mS (ph) - I}
\| (( {\cal M} (\varphi) {\cal T}(\varphi) \mathcal{S}(\varphi) )^{\pm 1} - I) h \|_{H^s_x} & \leq \varepsilon \g_0^{-1} C(s)
\big( \| h \|_{H^{s+1}_x}
+ \| u \|_{s + \s} \| h \|_{H^1_x} \big). \end{align} \end{lemma}
\begin{pf} In section \ref{sec:proofs}. \end{pf}
\section{Reduction of the linearized operator to constant coefficients}\label{sec:redu}
The goal of this section is to diagonalize the linear operator $ \mathcal{L}_5 $ obtained in \eqref{mL5}, and therefore to complete the reduction of $ {\cal L} $ in \eqref{mL} to constant coefficients. For $ \tau > \t_0 $ (see \eqref{omdio}) we define the constant \begin{equation} \label{defbq} \beta := 7 \tau + 6 \, . \end{equation}
\begin{theorem}\label{teoremadiriducibilita} Let $ f \in C^q $, see \eqref{f classe Cq}. Let $ \gamma\in (0,1) $ and $ \mathfrak s_0 \leq s \leq q - \sigma - \beta $ where $ \sigma $ is defined in \eqref{costanti lemma mostro}, and $ \beta $ in \eqref{defbq}. Let $u(\lambda) $ be a family of functions depending on the parameter $\lambda \in \Lambda_o \subset \Lambda := [1/2, 3/2]$ in a Lipschitz way, with \begin{equation}\label{norma bassa u riducibilitˆ} \Vert u \Vert_{\mathfrak s_0 + \sigma + \b, \L_o}^{{\rm{Lip}(\g)}} \leq 1. \end{equation} Then there exist $ \delta_{0} $, $ C $ (depending on the data of the problem) such that, if \begin{equation}\label{condizione-kam} \varepsilon \gamma^{-1} \leq \delta_{0} \, , \end{equation} then: \\[1mm] $(i)$ {\bf (Eigenvalues)} $\forall \lambda \in \Lambda $ there exists a sequence \begin{equation}\label{espressione autovalori} \mu_j^\infty(\lambda) := \mu_j^\infty(\lambda, u) = {\tilde \mu}^{0}_j(\lambda) + r_j^\infty(\lambda) \, , \ {\tilde \mu}^0_j(\lambda) :=
{\rm i} \big( - {\tilde m}_3 ( \lambda) j^3 + {\tilde m}_1(\lambda) j \big) \, , \ j \in \mathbb Z \, , \end{equation} where $ {\tilde m}_3, {\tilde m}_1$ coincide with the coefficients of $ {\cal L}_5 $ in \eqref{mL5} for all $ \lambda \in \Lambda_o $, and the corrections $r_j^\infty$ satisfy \begin{align} \label{autofinali}
| {\tilde m}_3 - 1 |^{{\rm{Lip}(\g)}} + | {\tilde m}_1 |^{{\rm{Lip}(\g)}} + | r^{\infty}_j |^{{\rm{Lip}(\g)}}_{\Lambda} & \leq \varepsilon C \, , \ \ \forall j \in \mathbb Z \, . \end{align} Moreover, in the reversible case (i.e. \eqref{parity f} holds) or Hamiltonian case (i.e. \eqref{f Ham} holds), all the eigenvalues $\mu_j^{\infty}$ are purely imaginary.
$(ii)$ {\bf (Conjugacy)}. For all $\lambda$ in \begin{equation} \label{Omegainfty} \Lambda_\infty^{2\g} := \Lambda_\infty^{2\g} (u) := \Big\{ \lambda \in \L_o \, : \,
| {\rm i} \lambda \bar\omega \cdot l + \mu^{\infty}_j (\lambda) - \mu^{\infty}_{k} (\lambda) |
\geq 2 \gamma | j^{3} - k^{3} | \langle l \rangle^{-\tau}, \ \forall l \in \mathbb Z^{\nu}, \, j ,k \in \mathbb Z \Big\} \end{equation} there is a bounded, invertible linear operator $\Phi_\infty(\lambda) : H^s \to H^s$, with bounded inverse $\Phi_\infty^{-1}(\lambda)$, that conjugates $\mathcal{L}_5$ in \eqref{mL5} to constant coefficients, namely \begin{equation}\label{Lfinale} {\cal L}_{\infty}(\lambda) := \Phi_{\infty}^{-1}(\lambda) \circ \mathcal{L}_5(\lambda) \circ \Phi_{\infty}(\lambda) = \lambda \bar \omega \cdot \partial_{\varphi } + {\cal D}_{\infty}(\lambda), \quad {\cal D}_{\infty}(\lambda) := {\rm diag}_{j \in \mathbb Z} \mu^{\infty}_{j}(\lambda) \, . \end{equation} The transformations $\Phi_\infty, \Phi_\infty^{-1}$ are close to the identity in matrix decay norm, with estimates \begin{equation} \label{stima Phi infty}
| \Phi_{\infty} (\lambda) - I |_{s,\Lambda_\infty^{2\g}}^{\rm{Lip}(\g)}
+ | \Phi_{\infty}^{- 1} (\lambda) - I |_{s,\Lambda_\infty^{2\g}}^{\rm{Lip}(\g)}
\leq \varepsilon \gamma^{-1} C(s) \big( 1 + \| u \|_{s + \sigma + \beta ,\Lambda_o }^{\rm{Lip}(\g)} \big). \end{equation} For all $\varphi \in \mathbb T^\nu$, the operator $\Phi_\infty(\varphi) : H^s_x \to H^s_x $ is invertible (where $H^s_x := H^s(\mathbb T)$) with inverse $ (\Phi_\infty(\varphi))^{-1} = \Phi_\infty^{-1}(\varphi)$, and \begin{align} \label{Phi infty (ph) - I}
\| (\Phi_\infty^{\pm 1}(\varphi) - I) h \|_{H^s_x} & \leq
\varepsilon \g^{-1} C(s) \big( \| h \|_{H^s_x} + \| u \|_{s + \sigma + \beta + \mathfrak s_0} \| h \|_{H^1_x} \big). \end{align}
In the reversible case $\Phi_{\infty}, \Phi_{\infty}^{-1} : X \rightarrow X $, $Y \rightarrow Y$ are reversibility preserving, and $\mathcal{L}_\infty : X \rightarrow Y$ is reversible. In the Hamiltonian case the final $\mathcal{L}_\infty $ is Hamiltonian. \end{theorem}
An important point of Theorem \ref{teoremadiriducibilita} is that it requires {\it only} the bound \eqref{norma bassa u riducibilitˆ} on the low norm of $ u $,
yet it provides the estimate for $ \Phi_\infty^{\pm 1} - I $ in \eqref{stima Phi infty} also in the higher norms $ | \cdot |_s $, which depend on the high norms of $ u $ as well. From Theorem \ref{teoremadiriducibilita} we shall deduce tame estimates for the inverse linearized operators in Theorem \ref{inversione linearizzato}.
Note also that the set $ \L_{\infty}^{2 \g} $ in \eqref{Omegainfty} depends only on the final eigenvalues, and it is not defined inductively as in the usual KAM theorems. This characterization of the set of parameters which fulfill all the required Melnikov non-resonance conditions (at any step of the iteration) was first observed in \cite{BB4}, \cite{BB10} in an analytic setting. Theorem \ref{teoremadiriducibilita} extends this property to a differentiable setting. A main advantage of this formulation is that it allows one to discuss the measure estimates only once, and not inductively: the Cantor set $ \L_{\infty}^{2 \g} $ in \eqref{Omegainfty} could be
empty (actually its measure satisfies $ |\L_{\infty}^{2 \g} | = 1 - O(\g) $ as $ \gamma\to 0 $) but the functions $ \mu^\infty_j (\l) $ are in any case well defined for all $ \lambda \in \Lambda $, see \eqref{espressione autovalori}. In particular, we shall perform the measure estimates only along the nonlinear iteration, see section \ref{sec:NM}.
Theorem \ref{teoremadiriducibilita} is deduced from the following iterative Nash-Moser reducibility theorem for a linear operator of the form \begin{equation}\label{L0} {\cal L}_{0} = \omega \cdot \partial_{\varphi } + {\cal D}_{0} + {\cal R}_{0} \, , \end{equation} where $\omega = \lambda \bar\omega $, \begin{equation}\label{D0R0} {\cal D}_{0} := m_3 (\l,u(\l)) \partial_{xxx} + m_1(\l,u(\l)) \partial_{x} \, , \quad \mathcal{R}_0(\l,u(\l)) := \mathcal{R}(\l,u(\l)) \, , \end{equation} the coefficients $ m_3(\l,u(\l)), m_1 (\l,u(\l)) $ are real, and $ u(\l) $ is defined for $ \lambda \in \L_o \subset \Lambda $. Clearly $\mathcal{L}_5$ in \eqref{mL5} has the form \eqref{L0}. Define \begin{equation}\label{defN} N_{-1} := 1 \, , \quad N_{\nu} := N_{0}^{\chi^{\nu}} \ \forall \nu \geq 0 \, , \quad \chi := 3/2 \end{equation} (then $ N_{\nu+1} = N_{\nu}^\chi $, $ \forall \nu \geq 0 $) and
\begin{equation}\label{alpha-beta}
\alpha := 7 \tau + 4, \quad \s_2 := \sigma + \beta
\end{equation}
where $\s$ is defined in \eqref{costanti lemma mostro} and $\b$ is defined in \eqref{defbq}.
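For the reader's convenience, let us note explicitly that the scales \eqref{defN} grow super-exponentially and satisfy the stated recurrence:
$$
N_{\nu}^{\chi} = \big( N_{0}^{\chi^{\nu}} \big)^{\chi} = N_{0}^{\chi^{\nu+1}} = N_{\nu+1} \, , \qquad N_{\nu} = N_{0}^{(3/2)^{\nu}} \longrightarrow + \infty \, ,
$$
which is the super-linear rate of convergence required by the KAM iteration, see \eqref{Rsb} below.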
\begin{theorem}{{\bf (KAM reducibility)}} \label{thm:abstract linear reducibility} Let $ q > \sigma + \mathfrak s_0 + \beta $. There exist $C_0 > 0 $, $ N_{0} \in \mathbb N $ large, such that, if \begin{equation}\label{piccolezza1}
N_{0}^{C_0} |{\cal R}_{0} |_{\mathfrak s_{0} + \beta}^{{\rm{Lip}(\g)}} \g^{-1} \leq 1, \end{equation} then, for all $ \nu \geq 0 $:
\begin{itemize} \item[${\bf(S1)_{\nu}}$] There exists an operator \begin{equation}\label{def:Lj} {\cal L}_\nu := \omega \cdot \partial_{\varphi } + {\cal D}_\nu + {\cal R}_\nu \quad \text{where} \quad {\cal D}_\nu = {\rm diag}_{j \in \mathbb Z} \{ \mu^{\nu}_{j}(\lambda ) \} \end{equation} \begin{equation}\label{mu-j-nu} \mu_j^{\nu}(\l) = \mu_j^0(\l) + r_j^{\nu}(\l),\quad \mu_j^0(\l) := - {\rm i} \big( m_3(\l,u(\l)) j^3 - m_1(\l,u(\l)) j \big) , \ \ j \in \mathbb Z \, , \end{equation} defined for all $ \lambda \in \L_{\nu}^{\g}(u)$, where $\L_{0}^{\g}(u) := \L_o $ (the domain of $ u $) and, for $\nu \geq 1$, \begin{equation}\label{Omgj} \L_{\nu}^{\g} :=
\L_{\nu}^{\gamma}(u):= \Big\{\lambda \in \L_{\nu - 1}^{\gamma} : \left| {\rm i} \omega \cdot l + \mu^{\nu-1}_{j}(\l) -
\mu^{\nu - 1}_{k}(\l) \right| \geq \gamma \frac{ |j^{3} - k^{3}|}{\left\langle l\right\rangle^{\tau}}
\ \forall \left|l\right| \leq N_{ \nu-1}, \ j, k \in \mathbb Z \Big\}. \end{equation} For $\nu \geq 0$, $r_j^{\nu} = \overline{r_{-j}^{\nu}}$, equivalently $ \mu_j^{\nu} = \overline{\mu_{-j}^{\nu}}$, and \begin{equation} \label{rjnu bounded}
|r_j^{\nu}|^{{\rm{Lip}(\g)}} := |r_j^{\nu}|^{{\rm{Lip}(\g)}}_{\L_{\nu}^\g} \leq \varepsilon C \, . \end{equation} The remainder $ {\cal R}_\nu $ is real (Definition \ref{def:RR}) and, $ \forall s \in [ \mathfrak s_0, q - \sigma - \beta ] $, \begin{equation}\label{Rsb}
\left|{\cal R}_{\nu}\right|_{s}^{{\rm{Lip}(\g)}} \leq \left|{\cal R}_{0}\right|_{s+\beta}^{{\rm{Lip}(\g)}}N_{\nu - 1}^{-\alpha} \, , \quad
\left|{\cal R}_{\nu}\right|_{s + \beta}^{{\rm{Lip}(\g)}} \leq \left|{\cal R}_{0}\right|_{s+\beta}^{{\rm{Lip}(\g)}}\,N_{\nu - 1}\,. \end{equation} Moreover, for $ \nu \geq 1 $, \begin{equation} \label{Lnu+1} {\cal L}_{\nu} = \Phi_{\nu-1}^{-1} {\cal L}_{\nu-1} \Phi_{\nu-1} \, , \quad \Phi_{\nu-1} := I + \Psi_{\nu-1} \, , \end{equation} where the map $ \Psi_{\nu-1} $ is real, T\"oplitz in time $ \Psi_{\nu-1} := \Psi_{\nu-1}(\varphi ) $ (see \eqref{Aphi}),
and satisfies \begin{equation}\label{Psinus}
\left|\Psi_{\nu-1} \right|_{s}^{{\rm{Lip}(\g)}} \leq
|{\cal R}_{0} |_{s+\beta}^{{\rm{Lip}(\g)}} \gamma^{-1} N_{\nu-1}^{2 \t+1} N_{\nu - 2}^{- \a} \, . \end{equation} In the reversible case, $\mathcal{R}_{\nu} : X \rightarrow Y$, $\Psi_{\nu - 1}, \Phi_{\nu - 1}, \Phi_{\nu - 1}^{-1} $ are reversibility preserving. Moreover, all the $ \mu^\nu_{j}(\l) $ are purely imaginary and $ \mu^{\nu}_{j} = - \mu^{\nu}_{-j} $, $ \forall j \in \mathbb Z $.
\item[${\bf(S2)_{\nu}}$] For all $ j \in \mathbb Z $, there exist Lipschitz extensions $ \widetilde{\mu}_{j}^{\nu}(\cdot ): \Lambda \to \mathbb R $ of $ \mu_{j}^{\nu}(\cdot ) : \L_\nu^\gamma\to \mathbb R $ satisfying, for $\nu \geq 1$, \begin{equation}\label{lambdaestesi}
|\widetilde{\mu}_{j}^{\nu} - \widetilde{\mu}_{j}^{\nu-1} |^{{\rm{Lip}(\g)}} \leq | {\cal R}_{\nu-1} |^{{\rm{Lip}(\g)}}_{\mathfrak s_0} \,. \end{equation} \item[ ${\bf (S3)_{\nu}}$] Let $ u_1(\l)$, $ u_2(\l)$ be Lipschitz families of Sobolev functions, defined for $\lambda \in \L_o$ and such that conditions \eqref{norma bassa u riducibilitˆ}, \eqref{piccolezza1} hold with $ {\cal R}_0 := {\cal R}_0 ( u_i) $, $ i = 1,2 $, see \eqref{D0R0}.
Then, for $ \nu \geq 0 $, $\forall \lambda \in \L_{\nu}^{\g_1}(u_1) \cap \L_{\nu}^{\g_2}(u_2)$, with $ \g_1, \g_2 \in [\g/2, 2\g]$, \begin{equation}\label{derivate-R-nu}
|{\cal R}_{\nu}(u_2) - \mathcal{R}_{\nu}(u_1)|_{\mathfrak s_{0}}\leq\varepsilon N_{\nu - 1}^{-\alpha} \Vert u_1 - u_2 \Vert_{\mathfrak s_0 + \s_2},\,\,|{\cal R}_{\nu}(u_2) - \mathcal{R}_{\nu}(u_1)|_{\mathfrak s_{0}+\beta} \leq \varepsilon N_{\nu - 1} \Vert u_1 - u_2 \Vert_{\mathfrak s_0 + \s_2} \, . \end{equation} Moreover, for $\nu \geq 1$, $ \forall s \in [\mathfrak s_{0},\mathfrak s_{0}+\b] $, $\forall j \in\mathbb Z $, \begin{equation}\label{deltarj12}
\big|\big(r_{j}^{\nu}(u_2) - r_{j}^{\nu}(u_1)\big) - \big(r_{j}^{\nu-1}(u_2) - r_{j}^{\nu-1}(u_1)\big) \big|\leq \vert {\cal R}_{\nu -1}(u_2) - \mathcal{R}_{\nu -1}(u_1) \vert_{\mathfrak s_0} \,, \end{equation} \begin{equation}\label{Delta12 rj}
| r_j^{\nu}(u_2) - r_j^{\nu}(u_1) | \leq \varepsilon C \| u_1 - u_2 \|_{\mathfrak s_0 + \s_2} \, . \end{equation}
\item[ ${\bf (S4)_{\nu}}$] Let $u_1, u_2$ be as in $({\bf S3})_\nu $ and let $ 0 < \rho < \gamma/ 2 $. Then, for all $ \nu \geq 0 $, \begin{equation}\label{legno} \varepsilon C N_{\nu - 1}^{\tau} \Vert u_1 - u_2 \Vert_{\mathfrak s_0 + \s_2}^{\rm sup} \leq \rho \quad \Longrightarrow \quad
\Lambda_{\nu }^{\gamma}(u_1) \subseteq
\Lambda_{\nu}^{\gamma - \rho}(u_2) \, . \end{equation} \end{itemize} \end{theorem}
\begin{remark}\label{rem:Ham6} In the Hamiltonian case $ \Psi_{\nu-1}$ is Hamiltonian and, instead of \eqref{Lnu+1}, we consider the symplectic map \begin{equation}\label{modified Phi nu} \Phi_{\nu-1} := \exp(\Psi_{\nu-1}) \, . \end{equation} The corresponding operators $ \mathcal{L}_\nu $, ${\cal R}_\nu $ are Hamiltonian. Note that the operators \eqref{modified Phi nu} and \eqref{Lnu+1} differ by an operator of order $\Psi_{\nu - 1}^2$. \end{remark}
The proof of Theorem \ref{thm:abstract linear reducibility} is postponed to Subsection \ref{subsec:proof of thm abstract linear reducibility}. We first give some consequences.
\begin{corollary}\label{lem:convPhi} {\bf (KAM transformation)} $ \forall \lambda \in \cap_{\nu \geq 0} \L_{\nu}^{\g} $ the sequence \begin{equation}\label{Phicompo} \widetilde{\Phi}_{\nu} := \Phi_{0} \circ \Phi_1 \circ \cdots\circ \Phi_{\nu}
\end{equation} converges in $ |\cdot |_{s}^{{\rm{Lip}(\g)}}$ to an operator $\Phi_{\infty} $ and \begin{equation}\label{Phinftys}
\left|\Phi_{\infty} - I \right|_{s}^{{\rm{Lip}(\g)}} + \left|\Phi_{\infty}^{-1} - I \right|_{s}^{{\rm{Lip}(\g)}} \leq
C(s) \left|{\cal R}_{0}\right|_{s + \beta}^{{\rm{Lip}(\g)}} \gamma^{-1} \, . \end{equation} In the reversible case $\Phi_\infty$ and $\Phi_{\infty}^{-1}$ are reversibility preserving. \end{corollary}
\begin{pf}
To simplify notation we write $|\cdot|_s $ for $|\cdot|_s^{{\rm{Lip}(\g)}}$. For all $ \nu \geq 0 $ we have $ \widetilde{\Phi}_{\nu + 1} = \widetilde{\Phi}_{\nu}\circ \Phi_{\nu + 1} =
\widetilde{\Phi}_{\nu} + \widetilde{\Phi}_{\nu}\Psi_{\nu + 1} $ (see \eqref{Lnu+1}) and so
\begin{equation}\label{lasopra}
|\widetilde{\Phi}_{\nu + 1}|_{\mathfrak s_{0}} \stackrel{\eqref{algebra Lip}} \leq
|\widetilde{\Phi}_{\nu} |_{\mathfrak s_{0}} + C
|\widetilde{\Phi}_{\nu} |_{\mathfrak s_{0}}\left|\Psi_{\nu + 1}\right|_{\mathfrak s_{0}}
\stackrel{\eqref{Psinus}} \leq |\widetilde{\Phi}_{\nu} |_{\mathfrak s_{0}} ( 1+ \varepsilon_\nu )
\end{equation} where $ \varepsilon_\nu := C' |{\cal R}_{0} |_{\mathfrak s_0 +\beta}^{{\rm{Lip}(\g)}} \gamma^{-1} N_{\nu+1}^{2 \t+1} N_{\nu}^{- \a} $. Iterating \eqref{lasopra} we get, for all $ \nu $, \begin{equation}\label{bassa}
|\widetilde{\Phi}_{\nu + 1}|_{\mathfrak s_{0}} \leq | \widetilde{\Phi}_0 |_{\mathfrak s_{0}}
\prod_{j \geq 0} (1+ \varepsilon_j) \leq | \Phi_0 |_{\mathfrak s_0} e^{C |{\cal R}_{0} |_{\mathfrak s_0+\beta}^{{\rm{Lip}(\g)}} \gamma^{-1}} \leq 2
\end{equation} using \eqref{Psinus} (with $ \nu =1 $, $ s = \mathfrak s_0 $) to estimate $ | \Phi_0 |_{\mathfrak s_0} $ and \eqref{piccolezza1}. The high norm of $ \widetilde{\Phi}_{\nu + 1} = \widetilde{\Phi}_{\nu} + \widetilde{\Phi}_{\nu}\Psi_{\nu + 1} $ is estimated by \eqref{interpm Lip}, \eqref{bassa} (for $ {\widetilde \Phi}_\nu $), as \begin{eqnarray*}
|\widetilde{\Phi}_{\nu + 1}|_s
& \leq & | \widetilde{\Phi}_{\nu} |_s ( 1 + C(s)\left|\Psi_{\nu + 1}\right|_{\mathfrak s_{0}} )
+ C(s) \left|\Psi_{\nu + 1}\right|_s \nonumber \\
& \stackrel{\eqref{Psinus}, \eqref{alpha-beta}}\leq &|
\widetilde{\Phi}_{\nu} |_s ( 1 + \varepsilon_{\nu}^{(0)}) + \varepsilon_{\nu}^{(s)} \, ,
\ \varepsilon_{\nu}^{(0)} := |{\cal R}_0|_{\mathfrak s_0+\b} \g^{-1} N_\nu^{-1} \, ,
\ \varepsilon_{\nu}^{(s)} := |{\cal R}_0|_{s+\b} \g^{-1} N_\nu^{-1} \, . \end{eqnarray*} Iterating the above inequality and using $ \prod_{j \geq 0} (1+ \varepsilon_j^{(0)}) \leq 2 $, we get \begin{equation}\label{trieste}
|\widetilde{\Phi}_{\nu + 1}|_s \leq_s \sum_{j=0}^\infty \varepsilon_j^{(s)} +
|\widetilde{\Phi}_0 |_s \leq C(s) \big(1+ | {\cal R}_0|_{s+ \b} \g^{-1} \big)
\end{equation} using $ |\Phi_0 |_s \leq 1+ C(s) | {\cal R}_0|_{s+ \b} \g^{-1} $.
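For the reader's convenience, we record why the sequences $ \varepsilon_\nu $, $ \varepsilon_\nu^{(s)} $ above are summable: by \eqref{defN} and \eqref{alpha-beta},
$$
N_{\nu+1}^{2\tau+1} N_{\nu}^{-\alpha} = N_{\nu}^{\chi(2\tau+1) - \alpha} = N_{\nu}^{3\tau + \frac32 - (7\tau + 4)} = N_{\nu}^{-4\tau - \frac52} \leq N_{\nu}^{-1} \, ,
$$
so that $ \sum_{\nu \geq 0} \varepsilon_\nu \lessdot |{\cal R}_0|_{\mathfrak s_0 + \b} \g^{-1} $ and $ \sum_{\nu \geq 0} \varepsilon_\nu^{(s)} \lessdot |{\cal R}_0|_{s+\b} \g^{-1} $, since $ \sum_{\nu \geq 0} N_\nu^{-1} < \infty $ by the super-exponential growth of the $ N_\nu $.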
Finally, the $\widetilde{\Phi}_{j}$ form a Cauchy sequence in the norm $ | \cdot |_s $ because \begin{eqnarray}
| \widetilde{\Phi}_{\nu+m} - \widetilde{\Phi}_{\nu} |_{s} \! \!\! \! \!\! & \! \!\! \leq \! \!\! & \! \!\! \! \!\!
\sum_{j =\nu}^{\nu+m-1} |\widetilde{\Phi}_{j + 1} - \widetilde{\Phi}_{j} |_{s}
\stackrel{ \eqref{interpm Lip}}
{\leq_s} \sum_{j = \nu}^{\nu+m-1}\left( |\widetilde{\Phi}_j |_s |\Psi_{j + 1} |_{\mathfrak s_{0}}
+ |\widetilde{\Phi}_j |_{\mathfrak s_{0}} |\Psi_{j + 1} |_{s}\right) \nonumber \\
& \! \!\! \! \! \stackrel{\eqref{trieste}, \eqref{Psinus}, \eqref{bassa}, \eqref{piccolezza1}} {\leq_s} \! \!\! \! \! & \sum_{j \geq \nu} \left|{\cal R}_{0}\right|_{s + \beta} \gamma^{-1} N_j^{-1} \leq_s \left|{\cal R}_{0}\right|_{s + \beta} \gamma^{-1} N_\nu^{-1} \, . \label{quasifi} \end{eqnarray}
Hence $ \widetilde{\Phi}_{\nu} \stackrel{\left|\cdot\right|_ s}{\rightarrow} \Phi_{\infty} $. The bound for $ \Phi_\infty - I $ in \eqref{Phinftys} follows by \eqref{quasifi} with $ m = \infty $, $ \nu = 0 $ and
$ |\widetilde \Phi_0 - I |_s = $ $ |\Psi_0|_s \lessdot \g^{-1} | {\cal R}_0|_{s+\b} $. Then the estimate for $ \Phi_\infty^{-1} - I $ follows by \eqref{PhINV}.
In the reversible case all the $\Phi_\nu $ are reversibility preserving and so $\widetilde{\Phi}_\nu $, $\Phi_{\infty}$ are reversibility preserving. \end{pf}
\begin{remark}\label{rem:Ham7} In the Hamiltonian case, the transformation $\widetilde{\Phi}_\nu$ in \eqref{Phicompo} is symplectic, because $\Phi_\nu$ is symplectic for all $\nu$ (see Remark \ref{rem:Ham6}). Therefore $\Phi_\infty$ is also symplectic. \end{remark}
Let us define for all $j \in \mathbb Z$ $$ \mu^{\infty}_{j}(\l) = \lim_{\nu \to + \infty} \widetilde{\mu}_{j}^{\nu}(\l) = \tilde{\mu}_j^0(\l) + r_j^{\infty}(\l), \quad r_j^{\infty}(\l) := \lim_{\nu \to + \infty} \tilde{r}_j^{\nu}(\l) \quad \forall \lambda \in \L. $$ It could happen that $ \L_{\nu_0}^\gamma= \emptyset $ (see \eqref{Omgj}) for some $ \nu_0 $. In such a case the iterative process of Theorem \ref{thm:abstract linear reducibility} stops after finitely many steps. However, we can always set $ \widetilde{\mu}_{j}^{\nu} := \widetilde{\mu}_{j}^{\nu_0} $, $ \forall \nu \geq \nu_0 $, and the functions $ \mu^{\infty}_{j} : \Lambda \to \mathbb R $ are always well defined.
\begin{corollary} {\bf (Final eigenvalues)} For all $ \nu \in \mathbb N $, $ j \in \mathbb Z $ \begin{equation}\label{autovcon}
| { \mu }_{j}^{\infty} - {\widetilde \mu }^{\nu}_{j} |_\L^{{\rm{Lip}(\g)}} =
| r_{j}^{\infty} - {\widetilde r }^{\nu}_{j} |^{{\rm{Lip}(\g)}}_\Lambda \leq
C \left|{\cal R}_{0}\right|_{\mathfrak s_{0}+\beta}^{{\rm{Lip}(\g)}} N_{\nu-1}^{-\alpha} \, , \ \
| { \mu }_{j}^{\infty} - {\widetilde \mu }^{0}_{j}|_\L^{{\rm{Lip}(\g)}} = | r_j^{\infty} |_\L^{{\rm{Lip}(\g)}}
\leq C \left|{\cal R}_{0}\right|_{\mathfrak s_{0}+\beta}^{{\rm{Lip}(\g)}} \, . \end{equation} \end{corollary}
\begin{pf} The bound \eqref{autovcon} follows from \eqref{lambdaestesi} and \eqref{Rsb} by summing the telescoping series. \end{pf}
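In more detail, the telescoping argument behind the first bound in \eqref{autovcon} reads
$$
| r_{j}^{\infty} - \widetilde{r}_{j}^{\nu} |^{{\rm{Lip}(\g)}}_{\Lambda}
\leq \sum_{m \geq \nu} | \widetilde{\mu}_{j}^{m+1} - \widetilde{\mu}_{j}^{m} |^{{\rm{Lip}(\g)}}
\stackrel{\eqref{lambdaestesi}} \leq \sum_{m \geq \nu} | {\cal R}_{m} |^{{\rm{Lip}(\g)}}_{\mathfrak s_0}
\stackrel{\eqref{Rsb}} \leq | {\cal R}_{0} |^{{\rm{Lip}(\g)}}_{\mathfrak s_0 + \beta} \sum_{m \geq \nu} N_{m-1}^{-\alpha}
\leq C | {\cal R}_{0} |^{{\rm{Lip}(\g)}}_{\mathfrak s_0 + \beta} N_{\nu-1}^{-\alpha} \, ,
$$
and the second bound in \eqref{autovcon} follows taking $ \nu = 0 $ (recall that $ N_{-1} := 1 $, see \eqref{defN}).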
\begin{lemma} {\bf (Cantor set)} \begin{equation}\label{cantorinclu} \L_{\infty}^{2 \g} \subset \cap_{\nu \geq 0} \L_{\nu}^\gamma\, . \end{equation} \end{lemma}
\begin{pf} Let $ \lambda \in \Lambda_{\infty}^{2\g} $. By definition $ \Lambda_{\infty}^{2\g} \subset \L_0^\gamma:= \L_o $. Then
for all $ \nu > 0 $, $ | l | \leq N_{\nu} $, $ j \neq k $ \begin{eqnarray*}
\left| {\rm i} \omega \cdot l + { \mu}_{j}^{\nu} - { \mu}_k^{\nu}\right| & \geq &
\left| {\rm i} \omega \cdot l + \mu_j^{\infty} -
\mu_k^{\infty}\right| - \left|
{ \mu}_j^{\nu} - \mu_j^{\infty}\right| -
\left| {\mu}_k^{\nu} - \mu_k^{\infty}\right| \\ & \stackrel{\eqref{Omegainfty}, \eqref{autovcon}} \geq &
2\gamma \left |j^{3} - k^{3}\right|\left\langle l\right\rangle^{-\tau} - 2 C | {\cal R}_0|_{\mathfrak s_0+ \b}
N_{\nu-1}^{-\alpha} \geq \gamma \left|j^{3} - k^{3}\right|\left\langle l\right\rangle^{-\tau} \end{eqnarray*} because
$ \gamma |j^{3} - k^{3} | \langle l \rangle^{-\tau} \geq
\gamma N_\nu^{-\tau} \stackrel{\eqref{piccolezza1}} \geq 2 C | {\cal R}_0|_{\mathfrak s_0+ \b} N_{\nu-1}^{-\alpha} $. \end{pf}
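For completeness, we spell out the last inequality of the proof above: by \eqref{defN} and since $ \chi \tau \leq \alpha $ (see \eqref{alpha-beta}),
$$
\gamma N_{\nu}^{-\tau} = \gamma N_{\nu-1}^{-\chi \tau} \geq \gamma N_{\nu-1}^{-\alpha} \geq 2 C | {\cal R}_0|_{\mathfrak s_0 + \b} N_{\nu-1}^{-\alpha} \, ,
$$
where the last step holds because \eqref{piccolezza1} gives $ 2 C | {\cal R}_0|_{\mathfrak s_0 + \b} \gamma^{-1} \leq 2 C N_0^{-C_0} \leq 1 $ for $ N_0 $ large enough.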
\begin{lemma}\label{lemma42} For all $\lambda \in \L_{\infty}^{2\g} (u) $, \begin{equation}\label{realtˆ autovalori finali} \mu_j^{\infty}(\l) = \overline{\mu_{-j}^{\infty}(\l)}, \quad r_j^{\infty}(\l) = \overline{r_{-j}^{\infty}(\l)}\,, \end{equation} and in the reversible case \begin{equation}\label{reversibilitˆ autovalori finali} \mu_j^{\infty}(\l) = - \mu_{-j}^{\infty}(\l), \quad r_j^{\infty}(\l) = - r_{-j}^{\infty}(\l)\,. \end{equation} Actually in the reversible case the $ \mu_j^{\infty}(\l) $ are purely imaginary for all $ \lambda \in \Lambda $. \end{lemma}
\begin{pf} Formulas \eqref{realtˆ autovalori finali} and \eqref{reversibilitˆ autovalori finali} follow because, for all $ \lambda \in \L_\infty^{2\g} \subseteq \cap_{\nu\geq 0} \L_\nu^\gamma$ (see \eqref{cantorinclu}), we have $ \mu_j^{\nu} = \overline{\mu_{-j}^{\nu}} $, $ r_j^{\nu} = \overline{r_{-j}^{\nu}} $, and, in the reversible case, the $ \mu_j^{\nu} $ are purely imaginary and $ \mu_j^{\nu} = - \mu_{-j}^{\nu} $, $ r_j^{\nu} = - r_{-j}^{\nu} $. The final statement follows because, in the reversible case, the $ \mu_j^\nu (\l) \in {\rm i} \mathbb R $, as do their extensions $ {\widetilde \mu}_j^\nu (\l) $. \end{pf}
\begin{remark}\label{r0=0} In the reversible case, \eqref{reversibilitˆ autovalori finali} implies that $\mu_0^\infty = r_0^\infty = 0$. \end{remark}
\noindent {\bf Proof of Theorem \ref{teoremadiriducibilita}.} We apply Theorem \ref{thm:abstract linear reducibility} to the linear operator $ {\cal L}_0 := {\cal L}_5$ in \eqref{mL5}, where
$ \mathcal{R}_0 = {\cal R }$ defined in \eqref{D0R0} satisfies
\begin{equation}\label{R0tame}
\left|{\cal R}_{0}\right|_{\mathfrak s_{0} + \beta}^{{\rm{Lip}(\g)}} \stackrel{\eqref{stima R 3}}\leq
\varepsilon C(\mathfrak s_0 + \b) \Big(1 + \| u \|_{\mathfrak s_{0} + \sigma + \b}^{{\rm{Lip}(\g)}}\Big) \stackrel{\eqref{norma bassa u riducibilitˆ}}\leq 2 \varepsilon C(\mathfrak s_0 + \b) \,. \end{equation} Then the smallness condition (\ref{piccolezza1}) is implied by \eqref{condizione-kam} taking $\delta_0:= \delta_0(\nu)$ small enough.
For all $ \lambda \in \L_\infty^{2\g} \subset \cap_{\nu \geq 0} \L_\nu^\gamma$ (see \eqref{cantorinclu}), the operators \begin{equation}\label{conjugnu} {\cal L}_{\nu} \stackrel{\eqref{def:Lj}} = \omega \cdot \partial_{\varphi } + {\cal D}_{\nu} + {\cal R}_{\nu}
\stackrel{\left|\cdot\right|_{s}^{{\rm{Lip}(\g)}}}{\longrightarrow}\o\cdot \partial_{\varphi } + {\cal D}_{\infty} =: {\cal L}_{\infty} \, , \quad {\cal D}_{\infty} := {\rm diag}_{j \in \mathbb Z} \mu_j^{\infty} \end{equation} because $$
\left|{\cal D}_{\nu} - {\cal D}_{\infty}\right|_{s}^{{\rm{Lip}(\g)}} = \sup_{j \in \mathbb Z}
\left|{\mu}_{j}^{\nu} - \mu_{j}^{\infty}\right|^{{\rm{Lip}(\g)}} \stackrel{\eqref{autovcon}}\leq
C \left|{\cal R}_{0}\right|_{\mathfrak s_{0}+\beta}^{{\rm{Lip}(\g)}} N_{\nu - 1}^{- \alpha}, \quad
\left|{\cal R}_{\nu}\right|_{s}^{{\rm{Lip}(\g)}} \stackrel{\eqref{Rsb}} \leq
\left|{\cal R}_{0}\right|_{s + \beta}^{{\rm{Lip}(\g)}} N_{\nu - 1}^{-\alpha} \, . $$ Applying \eqref{Lnu+1} iteratively we get $ {\cal L}_{\nu} = {{\widetilde \Phi}_{\nu-1}}^{-1} {\cal L}_0 {\widetilde \Phi}_{\nu-1} $ where $ {\widetilde \Phi}_{\nu-1} $ is defined by \eqref{Phicompo} and
$ {\widetilde \Phi}_{\nu-1} \to {\Phi}_\infty $ in $ | \cdot |_s $ (Corollary \ref{lem:convPhi}). Passing to the limit, we deduce \eqref{Lfinale}. Moreover \eqref{autovcon} and \eqref{R0tame} imply \eqref{autofinali}. Then \eqref{Phinftys}, \eqref{stima R 3} (applied to $ {\cal R}_0 = {\cal R} $) imply \eqref{stima Phi infty}.
Estimate \eqref{Phi infty (ph) - I} follows from \eqref{interpolazione norme miste} (in $ H^s_x (\mathbb T) $), Lemma \ref{Aphispace}, and the bound \eqref{stima Phi infty}.
In the reversible case, since $\Phi_\infty$, $\Phi_{\infty}^{-1}$ are reversibility preserving (see Corollary \ref{lem:convPhi}), and $\mathcal{L}_0$ is reversible (see Remark \ref{reversibilitˆ step 5} and Lemma \ref{lemma:mostro}), we get that $\mathcal{L}_\infty$ is reversible too. The eigenvalues $ \mu_j^{\infty} $ are purely imaginary by Lemma \ref{lemma42}.
In the Hamiltonian case, $ \mathcal{L}_0 \equiv \mathcal{L}_5 $ is Hamiltonian, $\Phi_{\infty}$ is symplectic, and therefore ${\cal L}_{\infty} = \Phi_{\infty}^{-1} \mathcal{L}_5 \Phi_{\infty}$ (see \eqref{Lfinale}) is Hamiltonian, namely $\mathcal{D}_\infty$ has the structure $ \mathcal{D}_\infty = \partial_x \mathcal{B} $, where $\mathcal{B} = \mathrm{diag}_{j \neq 0} \{ b_j \}$ is self-adjoint. This means that $b_j \in \mathbb R$, and therefore $\mu_j^\infty = \ii j b_j$ are all purely imaginary. \rule{2mm}{2mm}
\subsection{Proof of Theorem \ref{thm:abstract linear reducibility}} \label{subsec:proof of thm abstract linear reducibility}
\noindent {\sc Proof of ${\bf ({S}i)}_{0}$, $i=1,\ldots,4$.} Properties \eqref{def:Lj}-\eqref{Rsb} in ${\bf({S}1)}_0$ hold by \eqref{L0}-\eqref{D0R0} with $ \mu_j^0 $ defined in \eqref{mu-j-nu} and
$ r_j^0(\lambda) = 0 $ (for \eqref{Rsb} recall that $ N_{-1} := 1 $, see \eqref{defN}). Moreover, since $m_1$, $m_3$ are real functions, the $\mu_j^0$ are purely imaginary, $\mu_j^0 = \overline{{\mu}_{-j}^0}$ and $\mu_j^0 = - \mu_{-j}^0$. In the reversible case, Remark \ref{reversibilitˆ step 5} implies that $\mathcal{R}_0 := \mathcal{R}$, $\mathcal{L}_0 := \mathcal{L}_5$ are reversible operators. Then there is nothing else to verify.
${\bf({S}2)}_0 $ holds extending from $ \Lambda^\g_0 := \Lambda_o $ to
$ \Lambda $ the eigenvalues $\mu_{j}^0 (\l) $, namely extending the functions $ m_1 (\l) $, $ m_3 (\l) $ to $ {\tilde m}_1 (\l) $, $ {\tilde m}_3 (\l) $, preserving the sup norm and the Lipschitz semi-norm, by the Kirszbraun theorem.
${\bf({S}3)}_0 $ follows by \eqref{stima R 2}, for $s = \mathfrak s_0 , \mathfrak s_0 + \b$, and \eqref{norma bassa u riducibilitˆ}, \eqref{alpha-beta}.
${\bf({S}4)}_0 $ is trivial because, by definition, $\L_0^\g(u_1) = \L_o = \L_0^{\g-\rho}(u_2)$.
\subsubsection{The reducibility step}\label{the-reducibility-step}
We now describe the generic inductive step, showing how to define $ {\cal L}_{\nu+1 } $ (and $ \Phi_\nu $, $ \Psi_\nu $, etc.). To simplify notation, in this section we drop the index $ \nu $ and write $ + $ for $ \nu + 1$. We have \begin{eqnarray} {\cal L} \Phi h & = & \omega \cdot \partial_{\varphi } (\Phi(h)) + {\cal D} \Phi h + {\cal R} \Phi h \nonumber \\ & = & \omega \cdot \partial_{\varphi } h + \Psi \omega \cdot \partial_{\varphi } h + (\omega \cdot \partial_{\varphi } \Psi ) h + {\cal D} h + {\cal D} \Psi h + {\cal R} h + {\cal R} \Psi h \nonumber \\ & = & \Phi \Big( \omega \cdot \partial_{\varphi } h + {\cal D} h\Big) + \Big(\omega \cdot \partial_{\varphi } \Psi + \left[{\cal D}, \Psi \right] + \Pi_{N} {\cal R}\Big) h + \Big(\Pi_{N}^{\bot} {\cal R} + {\cal R} \Psi \Big) h \label{1stepNM} \end{eqnarray} where $[{\cal D}, \Psi ] := {\cal D} \Psi - \Psi {\cal D} $ and $\Pi_{N}{\cal R}$ is defined in \eqref{SM}.
\begin{remark} The application of the smoothing operator $ \Pi_N $ is necessary since we are performing a differentiable Nash-Moser scheme. Note also that $ \Pi_N $ regularizes only in time (see \eqref{SM}) because the loss of derivatives of the inverse operator is only in $ \varphi $ (see \eqref{solomo} and the bound on the small divisors \eqref{Omgj}). \end{remark}
We look for a solution of the {\it homological} equation \begin{equation}\label{eq:homo} \omega \cdot \partial_{\varphi } \Psi + \left[{\cal D}, \Psi \right] + \Pi_{N} {\cal R} = [{\cal R}] \qquad {\rm where} \qquad [{\cal R}] := {\rm diag}_{j \in \mathbb Z} {\cal R}^{j}_{j}(0) \, . \end{equation}
\begin{lemma} \label{lemma:redu} {\bf (Homological equation)} For all $ \lambda \in {\L}_{\nu+1}^{\gamma} $ (see \eqref{Omgj}), there exists a unique solution $ \Psi := \Psi (\varphi ) $ of the homological equation \eqref{eq:homo}. The map $ \Psi $ satisfies \begin{equation}\label{PsiR}
\left|\Psi\right|_{s}^{{\rm{Lip}(\g)}} \leq C N^{2\tau + 1} \gamma^{-1} \left|{\cal R}\right|_{s}^{{\rm{Lip}(\g)}} \, . \end{equation} Moreover if $\g/ 2 \leq \g_1, \g_2 \leq 2\g$ and if $ u_1(\l)$, $ u_2(\l) $ are Lipschitz functions, then $\forall s \in [\mathfrak s_0, \mathfrak s_0 + \b] $, $\lambda \in \L_{\nu + 1}^{\gamma_1}(u_1) \cap \L_{\nu + 1}^{\gamma_2}(u_2)$ \begin{equation}\label{differenza finita Psi}
| \Delta_{12} \Psi |_s \leq C N^{2\tau + 1}\g^{-1}\Big(|\mathcal{R}(u_2)|_s \Vert u_1 - u_2 \Vert_{\mathfrak s_0 + \s_2} + |\Delta_{12}\mathcal{R}|_s \Big) \end{equation} where we define $ \Delta_{12} \Psi := \Psi ( u_1) -\Psi (u_2) $.
In the reversible case, $ \Psi $ is reversibility-preserving. \end{lemma}
\begin{pf} Since ${\cal D} := {\rm diag}_{j \in \mathbb Z} (\mu_{j})$ we have $ [{\cal D}, \Psi ]_{j}^{k} = (\mu_j - \mu_k) \Psi_{j}^{k}(\varphi ) $ and \eqref{eq:homo} amounts to $$ \omega \cdot \partial_{\varphi } \Psi_{j}^{k}(\varphi ) + (\mu_{j} - \mu_{k}) \Psi_{j}^{k}(\varphi ) + {\cal R}_{j}^{k}(\varphi ) = [{\cal R}]_{j}^{k} \, , \quad \forall j, k \in \mathbb Z \, , $$ whose solutions are $ \Psi_{j}^{k}(\varphi ) = \sum_{l \in \mathbb Z^\nu} \Psi_{j}^{k}(l) e^{\ii l \cdot \varphi } $ with coefficients \begin{equation}\label{solomo} {\Psi}_{j}^{k}(l) := \begin{cases} \dfrac{{\cal R}_{j}^{k}(l)}{ \delta_{ljk}(\l) } \quad \, &
\text{if} \ (j-k, l) \neq (0,0) \ \ \text{and} \ \ |l | \leq N \, , \ \ \text{where} \ \ \delta_{ljk}(\l) := {\rm i} \omega \cdot l + \mu_j - \mu_k, \\ 0 & \text{otherwise.} \end{cases} \end{equation} Note that, for all $ \lambda \in \L_{\nu + 1}^{\g} $, by \eqref{Omgj} and \eqref{omdio}, if $ j \neq k $ or $ l \neq 0 $ the divisors $ \delta_{ljk}(\l) \neq 0 $. Recalling the definition of the $ s $-norm in \eqref{matrix decay norm} we deduce by \eqref{solomo}, \eqref{Omgj}, \eqref{omdio}, that \begin{equation}\label{supomo}
| \Psi |_s \leq \g^{-1} N^\tau | {\cal R} |_s \, , \quad \forall \lambda \in \L_{\nu+1}^\gamma\, . \end{equation} For $ \l_{1}, \l_{2} \in \L_{\nu + 1}^{\gamma} $, \begin{equation}\label{italia}
|\Psi_{j}^{k}(l) (\l_{1}) - \Psi_{j}^{k}(l) (\l_{2})| \leq
\frac{|{\cal R}_{j}^{k}(l) (\l_{1}) - {\cal R}_{j}^{k}(l) (\l_{2})|}{|\delta_{ljk}(\l_{1})|} \, +
|{\cal R}_{j}^{k}(l) (\l_{2})| \, \frac{|\delta_{ljk}(\l_{1}) - \delta_{ljk}(\l_{2})|}{|\delta_{ljk}(\l_{1})| |\delta_{ljk}(\l_{2})|} \end{equation} and, since $\omega = \lambda \bar\omega$, \begin{eqnarray}
|\delta_{ljk}(\l_{1}) - \delta_{ljk}(\l_{2})| & \stackrel{\eqref{solomo}} = &
|(\l_{1}-\l_{2})\bar\o\cdot l + (\mu_{j} - \mu_{k})(\l_{1}) - (\mu_{j}-\mu_{k})(\l_{2})| \\
& \stackrel{\eqref{mu-j-nu}} \leq & |\l_{1}-\l_{2}||\bar\o\cdot l | + |m_3(\l_{1}) - m_3(\l_{2})| |j^{3}-k^{3} | + |m_1(\l_{1}) - m_1(\l_{2})| | j-k | \nonumber\\
& & + \, |r_{j}(\l_{1}) - r_{j}(\l_{2})| + |r_{k}(\l_{1}) - r_{k}(\l_{2})| \nonumber \\
& \lessdot & |\l_{1}-\l_{2}| \Big( | l | + \varepsilon\g^{-1} |j^{3}-k^{3} | + \varepsilon\g^{-1} | j-k | + \varepsilon\g^{-1} \Big) \label{ultieq} \end{eqnarray} because \[
\g|m_3|^{\rm lip}
= \g|m_3 - 1|^{\rm lip}
\leq |m_3 - 1|^{{\rm{Lip}(\g)}} \leq \varepsilon C, \quad
|m_1|^{{\rm{Lip}(\g)}} \leq \varepsilon C, \quad
|r_{j} |^{{\rm{Lip}(\g)}} \leq \varepsilon C \quad \forall j \in \mathbb Z. \] Hence, for $ j \neq k $, $ \varepsilon \g^{-1} \leq 1 $, \begin{eqnarray}
\frac{|\delta_{ljk}(\l_{1}) - \delta_{ljk}(\l_{2})|}{|\delta_{ljk}(\l_{1})||\delta_{ljk}(\l_{2})|} \!\!\! & \stackrel{\eqref{ultieq}, \eqref{Omgj}} \lessdot & \!\!\!
|\l_{1}-\l_{2}| \Big( | l | + |j^{3}-k^{3} | \Big) \frac{\left\langle l\right\rangle^{2\tau}}{\gamma^{2}\left|j^{3}-k^{3}\right|^{2}}
\lessdot |\l_{1} - \l_{2}| N^{2\tau + 1} \gamma^{-2} \label{francia} \end{eqnarray}
for $ |l| \leq N $. Finally, recalling \eqref{matrix decay norm}, the bounds \eqref{italia}, \eqref{francia} and \eqref{supomo} imply \eqref{PsiR}. Now we prove \eqref{differenza finita Psi}. By \eqref{solomo}, for any $ \lambda \in \L_{\nu + 1}^{\g_1} (u_1) \cap \L_{\nu + 1}^{\g_2} (u_2) $, $ l \in \mathbb Z^{\nu} $, $ j \neq k $, we get \begin{equation}\label{delta coefficienti Psi} \Delta_{12}\Psi_j^k(l) = \frac{\Delta_{12}\mathcal{R}_j^k(l)}{\delta_{ljk}(u_1)} - \mathcal{R}_j^k(l)(u_2) \frac{\Delta_{12}\delta_{ljk}}{\delta_{ljk}(u_1) \delta_{ljk}(u_2)} \end{equation} where \begin{eqnarray}
|\Delta_{12}\delta_{ljk}| & = & |\Delta_{12}(\mu_j - \mu_k)| \leq |\Delta_{12}m_3 | \, | j^{3} - k^{3}| +
|\Delta_{12}m_1|\, |j - k| + |\Delta_{12}r_j | + | \Delta_{12}r_k| \nonumber \\
& \stackrel{\eqref{coefficienti costanti 2}, \eqref{Delta12 rj} } \lessdot &
\varepsilon |j^3 - k^3| \Vert u_1 - u_2 \Vert_{\mathfrak s_0 + \s_2} \, . \label{cina} \end{eqnarray} Then \eqref{delta coefficienti Psi}, \eqref{cina}, $\varepsilon \gamma^{-1} \leq 1$, $\g_{1}^{-1}, \g_{2}^{-1} \leq \g^{-1}$ imply $$
|\Delta_{12}\Psi_j^k(l)| \lessdot N^{2\tau} \gamma^{-1}
\Big(|\Delta_{12}\mathcal{R}_j^k (l)| + |\mathcal{R}_j^k (l)(u_2)| \Vert u_1 - u_2 \Vert_{\mathfrak s_0 + \s_2} \Big) $$ which proves \eqref{differenza finita Psi} (in fact, \eqref{differenza finita Psi} holds with $2\t$ instead of $2\t+1$).
In the reversible case $ {\rm i} \omega \cdot l + \mu_j - \mu_k \in \ii\mathbb R $, $\overline{{\mu}_{-j}} = \mu_j$ and $ \mu_{-j} = - \mu_j $. Hence Lemma \ref{lem:PR} and \eqref{solomo} imply $$ \overline{ {\Psi}_{-j}^{-k}(-l)} = \frac{ \overline{{\cal R}_{-j}^{-k}(-l)}}{ {-{\rm i} \omega \cdot (-l) + \overline{{\mu}_{-j}} - \overline{{\mu}_{-k}}} } = \frac{{\cal R}_{j}^{k}(l)}{{\rm i} \omega \cdot l + \mu_{j} - \mu_k } = {\Psi}_j^k(l) $$ and so $ \Psi $ is real, again by Lemma \ref{lem:PR}. Moreover, since $ {\cal R} : X \to Y $, $$ \Psi^{-k}_{-j}(-l) = \frac{ {\cal R}^{-k}_{-j}(-l)}{{\rm i} \omega \cdot (-l) + \mu_{-j} - \mu_{-k} } = \frac{- {\cal R}^{k}_{j}(l)}{ \ii\omega \cdot (-l) - \mu_{j} + \mu_{k} } = {\Psi}^{k}_{j}(l) $$ which implies $ \Psi : X \rightarrow X $ by Lemma \ref{lem:PR}. Similarly we get $ \Psi : Y \rightarrow Y $. \end{pf}
\begin{remark} In the Hamiltonian case $ {\cal R} $ is Hamiltonian and the solution $ \Psi $ in \eqref{solomo} of the homological equation is Hamiltonian, because $ \overline{ \d_{l,j,k} } = \d_{-l,k,j} $ and, in terms of matrix elements, an operator $G(\varphi)$ is self-adjoint if and only if $ \overline{ G_j^k(l) } = G_k^j(-l) $. \end{remark}
Let $ \Psi $ be the solution of the homological equation \eqref{eq:homo} which has been constructed in Lemma \ref{lemma:redu}. By Lemma \ref{lem:inverti}, if $ C(\mathfrak s_0) | \Psi |_{\mathfrak s_0} < 1 /2 $ then $ \Phi := I + \Psi $ is invertible and
by \eqref{1stepNM} (and \eqref{eq:homo}) we deduce that \begin{equation}\label{defL+} {\cal L}_{+} := \Phi^{-1} {\cal L} \Phi = \omega \cdot \partial_{\varphi } + {\cal D}_{+} + {\cal R}_{+} \,, \end{equation} where \begin{equation}\label{D+} {\cal D}_{+} := {\cal D} + [{\cal R}] \, , \quad {\cal R}_{+} := \Phi^{-1} \Big(\Pi_N^{\bot} {\cal R} + {\cal R} \Psi - \Psi [{\cal R}] \Big). \end{equation} Note that $\mathcal{L}_+$ has the same form as $ {\cal L} $, but the remainder $ \mathcal{R}_+ $ is the sum of a quadratic function of $ \Psi, {\cal R} $ and a remainder supported on the high modes.
\begin{lemma}\label{nuovadiagonale} {\bf (New diagonal part).} The eigenvalues of \[ {\cal D}_{+} = {\rm diag}_{j \in \mathbb Z} \{ \mu^{+}_{j}(\l) \}, \quad \text{where} \ \ \mu^{+}_{j} := \mu_{j} + {\cal R}^{j}_{j}(0) = \mu_j^0 + r_j + \mathcal{R}_j^j (0) = \mu_j^{0} + r_j^+, \quad r_j^+ := r_j + \mathcal{R}_j^j (0), \] satisfy $\mu_{j}^{+} = \overline{{\mu}^{+}_{-j}}$ and \begin{equation}\label{ultrasim}
|\mu^{+}_{j} - \mu_{j} |^{\rm lip}
= |r^{+}_{j} - r_{j} |^{\rm lip}
= |\mathcal{R}_j^j(0)|^{\rm lip}
\leq \left|{\cal R}\right|_{\mathfrak s_0}^{\rm lip},\quad \forall j \in \mathbb Z. \end{equation} Moreover if $ u_1 (\l)$, $u_2 (\l)$ are Lipschitz functions, then for all $\l\in \L_{\nu}^{\g_1}(u_1) \cap \L_{\nu}^{\g_2}(u_2)$ \begin{equation}\label{spagna}
|\Delta_{12} r_j^+ - \Delta_{12} r_j| \leq |\Delta_{12}\mathcal{R}|_{\mathfrak s_0}\,. \end{equation} In the reversible case, all the $\mu_j^{+}$ are purely imaginary and satisfy $\mu^{+}_{j} = - \mu^{+}_{-j}$ for all $j \in \mathbb Z$. \end{lemma}
\begin{pf} The estimates \eqref{ultrasim}-\eqref{spagna} follow using \eqref{Aii} because
$ | {\cal R}^{j}_{j}(0) |^{\rm lip} = $ $ |{\cal R}^{(l,j)}_{(l,j)} |^{\rm lip}\leq $
$ |{\cal R} |_0^{\rm lip} \leq $ $ |{\cal R} |_{\mathfrak s_0}^{\rm lip} $ and $$
|\Delta_{12}r^{+}_{j} - \Delta_{12} r_{j} |
= | \Delta_{12} {\cal R}^{j}_{j}(0) |
= |\Delta_{12}{\cal R}^{(l,j)}_{(l,j)} | \leq |\Delta_{12}{\cal R} |_0 \leq
|\Delta_{12}{\cal R} |_{\mathfrak s_0} \, . $$ Since $ {\cal R} $ is real, by Lemma \ref{lem:PR}, $$ {\cal R}^{k}_{j}(l)\,=\, \overline{{\cal R}^{-k}_{-j}(-l)} \qquad \Longrightarrow \qquad \mathcal{R}_{j}^{j}(0) = \overline{{\mathcal{R}}_{-j}^{-j}(0)} $$ and so
$ \mu_{j}^+ = \overline{{\mu}_{-j}^{+}} $. If $\mathcal{R}$ is also reversible, by Lemma \ref{lem:PR}, $$ {\cal R}^{k}_{j}(l) = - {\cal R}^{-k}_{-j}(-l) \, , \quad {\cal R}^{k}_{j}(l) = \overline{{\cal R}^{-k}_{-j}(-l)} = - \overline{{ {\cal R}^{k}_{j}(l)}} \, . $$ We deduce that $ {\cal R}^{j}_{j}(0) = - {\cal R}^{-j}_{-j}(0) $, $ {\cal R}^{j}_{j}(0) \in {\rm i} \mathbb R $ and therefore, $ \mu^{+}_{j} = - \mu^{+}_{-j} $ and $\mu_{j}^{+} \in {\rm i} \mathbb R$. \end{pf}
\begin{remark}\label{rem:Ham10} In the Hamiltonian case, $\mathcal{D}_\nu $ is Hamiltonian, namely $ \mathcal{D}_\nu = \partial_x \mathcal{B} $ where $\mathcal{B} = \mathrm{diag}_{j \neq 0} \{ b_j \}$ is self-adjoint. This means that $b_j \in \mathbb R$, and therefore all $\mu_j^\nu = \ii j b_j $ are purely imaginary. \end{remark}
\subsubsection{The iteration}
Let $\nu \geq 0$, and suppose that the statements $({\bf S}i)_{\nu}$, $i=1,\ldots,4$, hold true. We prove $({\bf S}i)_{\nu+1}$, $i=1,\ldots,4$.
To simplify notation, we write $|\cdot|_s$ instead of $|\cdot|_s^{{\rm{Lip}(\g)}}$.
\noindent {\sc Proof of $({\bf S1})_{\nu + 1}$}. By ${\bf (S1)_\nu} $, the eigenvalues $\mu_j^\nu$ are defined on $\L_\nu^\g$. Therefore the set $\L_{\nu+1}^\g$ is well-defined. By Lemma \ref{lemma:redu}, for all $ \lambda \in \L_{\nu+1}^{\g} $ there exists a real solution $ \Psi_{\nu} $ of the homological equation \eqref{eq:homo} which satisfies, $ \forall s \in [\mathfrak s_0, q- \sigma - \beta ] $, \begin{equation}\label{Psinu}
\left|\Psi_{\nu}\right|_{s} \stackrel{\eqref{PsiR}}{\lessdot}
N_{\nu}^{2\tau + 1}\left|{\cal R}_{\nu}\right|_{s} \gamma^{-1} \stackrel{\eqref{Rsb}}
{\lessdot} \left|{\cal R}_{0}\right|_{s + \beta} \gamma^{-1} N_{\nu}^{2\tau + 1}\ N_{\nu-1}^{- \a} \end{equation} which is \eqref{Psinus} at the step $ \nu +1 $. In particular, for $ s = \mathfrak s_0 $, \begin{equation}\label{Psinu0}
C(\mathfrak s_0) \left|\Psi_{\nu}\right|_{\mathfrak s_0} \stackrel{\eqref{Psinu}} \leq
C(\mathfrak s_0) \left|{\cal R}_{0}\right|_{\mathfrak s_0 + \beta} \gamma^{-1} N_{\nu}^{2\tau + 1}\ N_{\nu-1}^{- \a} \stackrel{\eqref{piccolezza1}} \leq 1/2 \end{equation} for $ N_0 $ large enough. Then the map $ \Phi_{\nu} := I + \Psi_\nu $ is invertible and, by \eqref{PhINV}, \begin{equation}\label{Phis0}
\left|\Phi_{\nu}^{-1}\right|_{\mathfrak s_{0}} \leq 2 \, , \quad \left|\Phi_{\nu}^{-1}\right|_{s} \leq 1 + C(s) | \Psi_\nu |_s \, . \end{equation} Hence \eqref{defL+}-\eqref{D+} imply $ {\cal L}_{\nu + 1} := $ $ \Phi_{\nu}^{-1} {\cal L}_{\nu} \Phi_{\nu} = $ $ \omega \cdot \partial_{\varphi } + {\cal D}_{\nu + 1} + {\cal R}_{\nu + 1} $ where (see Lemma \ref{nuovadiagonale}) \begin{equation}\label{Dnu+1} {\cal D}_{\nu + 1} := {\cal D}_{\nu} + [{\cal R}_{\nu}] = {\rm diag}_{j \in \mathbb Z} (\mu_j^{\nu + 1}) \, , \quad \mu_j^{\nu + 1} := \mu_j^{\nu} + ({\cal R}_{\nu})_{j}^{j}(0) \, , \end{equation} with $\mu_j^{\nu + 1} = \overline{\mu_{-j}^{\nu + 1}} $ and \begin{equation}\label{Rnu+1} {\cal R}_{\nu+1} := \Phi_\nu^{-1} H_{\nu},\quad H_{\nu}:= \Pi_{N_\nu}^{\bot} {\cal R}_\nu + {\cal R}_\nu \Psi_\nu - \Psi_\nu [{\cal R}_\nu] \, . \end{equation} In the reversible case, $\mathcal{R}_\nu : X \rightarrow Y$, therefore, by Lemma \ref{lemma:redu}, $\Psi_\nu$, $\Phi_\nu$, $\Phi_{\nu}^{-1}$ are reversibility preserving, and then, by formula \eqref{Rnu+1}, also $\mathcal{R}_{\nu + 1} : X \rightarrow Y$.
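The invertibility bound \eqref{Phis0} rests on the Neumann series $ \Phi_\nu^{-1} = \sum_{k \geq 0} (-\Psi_\nu)^k $, valid whenever the norm of $ \Psi_\nu $ is below $ 1/2 $. A minimal numerical sketch, with the matrix spectral norm standing in for the decay norm $ |\cdot|_{\mathfrak s_0} $ (an illustrative simplification, not the norm used in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random perturbation Psi, rescaled so its operator (spectral) norm is 1/2.
n = 50
Psi = rng.standard_normal((n, n))
Psi *= 0.5 / np.linalg.norm(Psi, 2)

# Phi = I + Psi is invertible by the Neumann series, and
# ||Phi^{-1}|| <= 1 / (1 - ||Psi||) = 2, i.e. the analogue of (Phis0).
Phi_inv = np.linalg.inv(np.eye(n) + Psi)
print(np.linalg.norm(Phi_inv, 2) <= 2.0)  # True
```

The bound is sharp in general: $ \| (I+\Psi)^{-1} \| = 1/\sigma_{\min}(I+\Psi) $ and $ \sigma_{\min}(I+\Psi) \geq 1 - \|\Psi\| $.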
Let us prove the estimates \eqref{Rsb} for $ {\cal R}_{\nu + 1} $. For all $ s \in [\mathfrak s_0, q - \sigma - \beta ] $ we have \begin{eqnarray}
|{\cal R}_{\nu + 1} |_{s} & \!\! \!\! \!\! \stackrel{\eqref{Rnu+1}, \eqref{interpm Lip}} {\leq_s} \!\!\!\! \!\! &
| \Phi_\nu^{-1} |_{\mathfrak s_{0}} \Big( |\Pi_{N_\nu}^\bot {\cal R}_\nu |_s + |{\cal R}_\nu |_s |\Psi_\nu |_{\mathfrak s_{0}} +
|{\cal R}_\nu |_{\mathfrak s_{0}} |\Psi_\nu |_s \Big) +
| \Phi_\nu^{-1} |_{s} \Big( |\Pi_{N_\nu}^\bot {\cal R}_\nu |_{\mathfrak s_{0}} + |{\cal R}_\nu |_{\mathfrak s_{0}} |\Psi_\nu |_{\mathfrak s_{0}} \Big) \nonumber \\ & \!\!\!\! \!\! \stackrel{\eqref{Phis0}} {\leq_s} \!\!\!\! \!\! & 2
\Big( |\Pi_{N_\nu}^\bot {\cal R}_\nu |_s + |{\cal R}_\nu |_s |\Psi_\nu |_{\mathfrak s_{0}} +
|{\cal R}_\nu |_{\mathfrak s_{0}} |\Psi_\nu |_s \Big) + (1 + | \Psi_\nu |_s)
\Big( |\Pi_{N_\nu}^\bot {\cal R}_\nu |_{\mathfrak s_{0}} + |{\cal R}_\nu |_{\mathfrak s_{0}} |\Psi_\nu |_{\mathfrak s_{0}} \Big) \nonumber \\
& \!\! \stackrel{\eqref{Psinu0}} {\leq_s} \!\! & |\Pi_{N_\nu}^\bot {\cal R}_\nu |_s + |{\cal R}_\nu |_s |\Psi_\nu |_{\mathfrak s_{0}} +
|{\cal R}_\nu |_{\mathfrak s_{0}} |\Psi_\nu |_s \!\! \stackrel{\eqref{PsiR}} {\leq_s} \!\! |\Pi_{N_\nu}^\bot {\cal R}_\nu |_s + N_\nu^{2\t+1} \g^{-1} |{\cal R}_\nu |_s | {\cal R}_\nu |_{\mathfrak s_{0}} \, . \label{Rsgen} \end{eqnarray} Hence \eqref{Rsgen} and \eqref{smoothingN} imply \begin{equation}
|{\cal R}_{\nu + 1} |_s {\leq_s}
N_\nu^{-\b} | {\cal R}_\nu |_{s+\b} + N_\nu^{2\t+1} \g^{-1} |{\cal R}_\nu |_s | {\cal R}_\nu |_{\mathfrak s_{0}} \label{sch1} \end{equation} which shows that the iterative scheme is quadratic plus a super-exponentially small term. In particular $$
|{\cal R}_{\nu + 1} |_{s} \!\!
\stackrel{\eqref{sch1}, \eqref{Rsb}} {\leq_s} \!\! N_\nu^{-\b} |{\cal R}_0|_{s+\b} N_{\nu-1} +
N_\nu^{2\t+1} \g^{-1} |{\cal R}_0|_{s+\b} |{\cal R}_0|_{\mathfrak s_{0}+\b} N_{\nu-1}^{- 2\a}
\!\! \stackrel{\eqref{defbq}, \eqref{alpha-beta}, \eqref{piccolezza1}}\leq \!\! |{\cal R}_0|_{s+\b} N_{\nu}^{-\a} $$ ($ \chi = 3 / 2 $) which is the first inequality of \eqref{Rsb} at the step $ \nu +1 $.
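The exponent bookkeeping in the last chain of inequalities can be checked on a toy, log-scale version of the scheme \eqref{sch1}. The constants below ($\tau = 2$, $\alpha = 36$, $\beta = 60$, $\chi = 3/2$, $N_0 = 10$, $\gamma = 1$, $K \sim |{\cal R}_0|$) are illustrative choices compatible in spirit with \eqref{defbq}, \eqref{alpha-beta}, \eqref{piccolezza1}; they are not the paper's actual values:

```python
import math

tau, alpha, beta, chi = 2, 36, 60, 1.5
logN0, logK = 1.0, -45.0            # log10 of N_0 and of K ~ |R_0| (gamma = 1)

def logN(nu):                       # log10 N_nu, with the convention N_{-1} := 1
    return 0.0 if nu < 0 else logN0 * chi ** nu

# logx ~ log10 |R_nu|_s (low norm), logy ~ log10 |R_nu|_{s+beta} (high norm).
logx, logy = logK, logK
for nu in range(8):
    term1 = -beta * logN(nu) + logy              # smoothing term  N_nu^{-beta} |R_nu|_{s+beta}
    term2 = (2 * tau + 1) * logN(nu) + 2 * logx  # quadratic term  N_nu^{2tau+1} |R_nu|_s^2
    logx = max(term1, term2) + math.log10(2)     # log10(a + b) <= log10(2 max(a, b))
    logy = logN(nu) + logK                       # high norm grows at most like N_nu K
    # super-exponential decay: |R_{nu+1}|_s <= K N_nu^{-alpha}
    assert logx <= logK - alpha * logN(nu)
print("super-exponential decay verified")
```

After the first step the smoothing term dominates, and the quadratic term decays like $ N_{\nu-1}^{(2\tau+1)\chi - (2-\chi)\alpha} $, which is summable precisely because $ \alpha $ is large with respect to $ \tau $ as in \eqref{alpha-beta}.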
The next key step is to control the divergence of the high norm $ | {\cal R}_{\nu+1} |_{s+\b} $. By \eqref{Rsgen} (with $ s + \beta $ instead of $ s $) we get \begin{equation}
|{\cal R}_{\nu + 1} |_{s+\b} \,
{\leq_{s+\b}} \, | {\cal R}_\nu |_{s+ \b} + N_\nu^{2\t+1} \g^{-1} |{\cal R}_\nu |_{s+\b}| {\cal R}_\nu |_{\mathfrak s_{0}} \label{sch2} \end{equation} (the difference with respect to \eqref{sch1} is that here we apply no smoothing to
$ | \Pi_{N_\nu}^\bot {\cal R}_{\nu} |_{s+\b} $). Then \eqref{sch2}, \eqref{Rsb}, \eqref{piccolezza1}, \eqref{alpha-beta} imply the inequality $$
|{\cal R}_{\nu + 1} |_{s+\b} \leq C(s+\b) | {\cal R}_\nu |_{s+ \b}, $$ whence, iterating, $$
|{\cal R}_{\nu + 1} |_{s+\b} \leq N_{\nu} |{\cal R}_0 |_{s+ \b} $$ for $ N_0 := N_0 (s,\b) $ large enough, which is the second inequality of \eqref{Rsb} with index $ \nu +1 $.
By Lemma \ref{nuovadiagonale} the eigenvalues $ \mu_j^{\nu + 1} := \mu_j^0 + r_j^{\nu + 1} $, defined on $ \L_{\nu+1}^{\g} $, satisfy $\mu_j^{\nu + 1} = \overline{{\mu}_{-j}^{\nu + 1}} $, and, in the reversible case, the $\mu_{j}^{\nu + 1}$ are purely imaginary and $\mu_j^{\nu + 1} = - \mu_{-j}^{\nu + 1}$.
It remains only to prove \eqref{rjnu bounded} at step $\nu+1$; this is done in the proof of $({\bf S2})_{\nu + 1}$ below.
\noindent {\sc Proof of ${\bf({S}2)}_{\nu + 1} $}. By \eqref{ultrasim}, \begin{equation}\label{lnu+1lnu}
|\mu_j^{\nu + 1} - \mu_j^{\nu} |^{{\rm{Lip}(\g)}} = |r_j^{\nu + 1} - r_j^{\nu} |^{{\rm{Lip}(\g)}}
\leq |{\cal R}_{\nu} |_{\mathfrak s_{0}}^{{\rm{Lip}(\g)}}
\stackrel{\eqref{Rsb}}\leq \left|{\cal R}_{0}\right|^{{\rm{Lip}(\g)}}_{\mathfrak s_{0} + \beta}N_{\nu - 1}^{- \alpha}\,. \end{equation} By the Kirszbraun theorem, we extend the function $ \mu_j^{\nu + 1} - \mu_j^{\nu} = r_j^{\nu + 1} - r_j^{\nu} $ to the whole $ \Lambda $, still satisfying \eqref{lnu+1lnu}. In this way we define $ \tilde \mu_j^{\nu + 1}$. Finally \eqref{rjnu bounded} follows by summing all the terms in \eqref{lnu+1lnu} and using \eqref{stima R 3}.
\noindent {\sc Proof of ${\bf({S}3)}_{\nu + 1} $}. Set, for brevity,
$$
\mathcal{R}_{\nu}^{i} := \mathcal{R}_{\nu}(u_i),\quad \Psi_{\nu - 1}^i := \Psi_{\nu - 1}(u_i),\quad \Phi_{\nu - 1}^{i} := \Phi_{\nu - 1}(u_i),
\quad H_{\nu - 1}^i := H_{\nu - 1} (u_i) \, , \quad i:= 1, 2 \, ,
$$ which are all operators defined for $\lambda \in \L_{\nu}^{\gamma_1}(u_1) \cap \L_{\nu}^{\gamma_2}(u_2) $. By Lemma \ref{lemma:redu} one can construct $\Psi_{\nu}^{i}:= \Psi_{\nu}(u_i)$, $\Phi_{\nu}^i := \Phi_{\nu}(u_i)$, $i = 1, 2$, for all $\lambda \in \Lambda_{\nu + 1}^{\g_1}(u_1) \cap \Lambda_{\nu + 1}^{\g_2}(u_2)$. One has \begin{align} \vert \Delta_{12} \Psi_{\nu} \vert_{\mathfrak s_0} & \stackrel{\eqref{differenza finita Psi}}{\lessdot}
N_\nu^{2\t+1} \g^{-1} \Big( |\mathcal{R}_\nu(u_2)|_{\mathfrak s_0} \| u_2 - u_1 \|_{\mathfrak s_0 + \s_2}
+ |\D_{12} \mathcal{R}_\nu|_{\mathfrak s_0} \Big) \notag \\ & \stackrel{\eqref{Rsb},\eqref{derivate-R-nu}}{\lessdot}
N_\nu^{2\t+1} N_{\nu-1}^{-\a} \g^{-1} \big( |\mathcal{R}_0|_{\mathfrak s_0+\b} + \varepsilon \big)
\| u_2 - u_1 \|_{\mathfrak s_0 + \s_2} \notag \\ & \stackrel{\eqref{stima R 3}, \eqref{norma bassa u riducibilitˆ}}{\lessdot}
N_\nu^{2\t+1} N_{\nu-1}^{-\a} \varepsilon \g^{-1}
\| u_2 - u_1 \|_{\mathfrak s_0 + \s_2}
\leq \| u_2 - u_1 \|_{\mathfrak s_0 + \s_2} . \label{delta Psi nu bassa} \end{align} for $ \varepsilon \g^{-1} $ small enough, using \eqref{alpha-beta}. By \eqref{derivata-inversa-Phi}, applied to $\Phi:= \Phi_{\nu}$, and \eqref{delta Psi nu bassa}, we get
\begin{equation}\label{delta Phi nu s}
\vert \Delta_{12} \Phi_{\nu}^{-1} \vert_{s} \leq_s
\big( \vert \Psi_{\nu}^{1} \vert_s + \vert \Psi_{\nu}^{2} \vert_{s} \big)\Vert u_1 - u_2 \Vert_{\mathfrak s_0 + \s_2} + \vert \Delta_{12} \Psi_{\nu} \vert_{s}
\end{equation}
which implies for $s = \mathfrak s_0$, and using \eqref{Psinus}, \eqref{piccolezza1}, \eqref{delta Psi nu bassa}
\begin{equation}\label{derivata-inversa-Phi-bassa}
\vert \Delta_{12} \Phi_{\nu}^{-1} \vert_{\mathfrak s_0} \lessdot \Vert u_1 - u_2 \Vert_{\mathfrak s_0 + \s_2}.
\end{equation} Let us prove the estimates \eqref{derivate-R-nu} for $\Delta_{12}\mathcal{R}_{\nu + 1}$, which is defined on $\lambda \in \Lambda_{\nu + 1}^{\g_1}(u_1) \cap \Lambda_{\nu + 1}^{\g_2}(u_2)$. For all $s \in [ {\mathfrak s}_{0}, {\mathfrak s}_{0}+\b]$, using the interpolation \eqref{interpm} and \eqref{Rnu+1}, \begin{equation}\label{derivata-R-nu+1}
|\Delta_{12}{\cal R}_{\nu + 1} |_{s} \! \stackrel{}{\leq_{s}} \!
|\Delta_{12}\Phi_{\nu}^{-1} |_{s} |H_{\nu}^{1} |_{\mathfrak s_{0}} + \vert \Delta_{12}\Phi_{\nu}^{-1}\vert_{\mathfrak s_{0}} \vert H_{\nu}^{1} \vert_{s} \! +
|(\Phi_{\nu}^2 )^{-1}|_s |\Delta_{12}H_{\nu} |_{\mathfrak s_0} +
|(\Phi_{\nu}^2 )^{-1} |_{\mathfrak s_{0}} |\Delta_{12}H_\nu |_s \, . \end{equation} We estimate the above terms separately. Set for brevity
$ A^\nu_{s} := | \mathcal{R}_\nu(u_1) |_s + | \mathcal{R}_\nu(u_2) |_s $. By \eqref{Rnu+1} and \eqref{interpm}, \begin{eqnarray}
| \Delta_{12}H_{\nu}|_s \! \! \! \! \! \!
& \leq_s & \! \! \! \! \! \! \left|\Pi_{N_{\nu}}^{\bot}\Delta_{12}{\cal R}_{\nu}\right|_{s} + |\Delta_{12}\Psi_{\nu}|_{s}
|{\cal R}_{\nu}^1 |_{\mathfrak s_{0}} + |\Delta_{12}\Psi_{\nu} |_{\mathfrak s_{0}} |{\cal R}_{\nu}^1|_{s}
+ |\Psi_{\nu}^2 |_{s} |\Delta_{12}{\cal R}_{\nu} |_{\mathfrak s_{0}} +
|\Psi_{\nu}^2 |_{\mathfrak s_{0}} |\Delta_{12}{\cal R}_{\nu} |_{s} \nonumber \\ & \stackrel{\eqref{PsiR},\eqref{differenza finita Psi}}{\leq_{s}} &
\left|\Pi_{N_{\nu}}^{\bot}\Delta_{12}{\cal R}_{\nu}\right|_{s} + N_{\nu}^{2\tau+1}\gamma^{-1} A^\nu_{\mathfrak s_0} A^\nu_s \Vert u_1 - u_2\Vert_{\mathfrak s_{0}+\s_2} \nonumber \\ & & \, + \, N_{\nu}^{2\tau+1}\gamma^{-1} A^\nu_{s} \vert \Delta_{12}{\cal R}_{\nu} \vert_{\mathfrak s_{0}} + N_{\nu}^{2\tau+1}\gamma^{-1} A^\nu_{\mathfrak s_0} \vert\Delta_{12}{\cal R}_{\nu} \vert_s \label{stima-derivata-H-nu} \, . \end{eqnarray} Estimating the four terms in the right-hand side of \eqref{derivata-R-nu+1} in the same way, using \eqref{delta Phi nu s}, \eqref{Rnu+1}, \eqref{PsiR}, \eqref{differenza finita Psi}, \eqref{Psinus}, \eqref{derivata-inversa-Phi-bassa}, \eqref{Phis0}, \eqref{stima-derivata-H-nu}, \eqref{Rsb}, we deduce \begin{eqnarray}
\vert \Delta_{12}{\cal R}_{\nu+1} \vert_{s} & {\leq_{s}} & |\Pi_{N_{\nu}}^{\bot} \Delta_{12}\mathcal{R}_{\nu}|_s + N_{\nu}^{2\tau + 1} \gamma^{-1} A_{s}^{\nu} A_{\mathfrak s_0}^{\nu} \| u_1 - u_2 \|_{\mathfrak s_0 + \s_2} \nonumber\\
& &+ N_{\nu}^{2\tau + 1}\g^{-1} A_{s}^{\nu} |\Delta_{12}\mathcal{R}_{\nu}|_{\mathfrak s_0} + N_{\nu}^{2\tau + 1} \gamma^{-1} A_{\mathfrak s_0}^{\nu} |\Delta_{12}\mathcal{R}_{\nu}|_s \label{stima-derivata-R-nu+1-s} \, . \end{eqnarray} Specializing \eqref{stima-derivata-R-nu+1-s} for $ s = \mathfrak s_0 $ and using \eqref{stima R 3}, \eqref{smoothingN}, \eqref{Rsb}, \eqref{derivate-R-nu}, we deduce $$ \vert \Delta_{12}{\cal R}_{\nu + 1}\vert_{\mathfrak s_{0}} \leq C ( \varepsilon N_{\nu - 1}N_{\nu}^{-\b} + N_{\nu}^{2\tau + 1}N_{\nu - 1}^{-2\alpha}\varepsilon^{2}\gamma^{-1} ) \Vert u_1 - u_2 \Vert_{\mathfrak s_{0}+\s_2} \leq \varepsilon N_{\nu}^{-\alpha} \Vert u_1 - u_2 \Vert_{\mathfrak s_{0}+\s_2} $$ for $N_{0}$ large and $\varepsilon\gamma^{-1}$ small. Next by \eqref{stima-derivata-R-nu+1-s} with $ s = \mathfrak s_0 + \beta $ \begin{eqnarray} \vert \Delta_{12}{\cal R}_{\nu+1} \vert_{\mathfrak s_{0}+\b} & \stackrel{\eqref{Rsb}, \eqref{derivate-R-nu}, \eqref{piccolezza1}} {\leq_{\mathfrak s_{0}+\b}} & A_{\mathfrak s_0 + \b}^{\nu} \Vert u_1 - u_2 \Vert_{\mathfrak s_{0}+\s_2} + \vert \Delta_{12}{\cal R}_{\nu}\vert_{\mathfrak s_{0}+\b} \nonumber\\ &\stackrel{\eqref{Rsb}, \eqref{derivate-R-nu}}{\leq}& C(\mathfrak s_{0}+\b) \varepsilon N_{\nu - 1} \Vert u_1 - u_2 \Vert_{\mathfrak s_{0}+\s_2} \leq \varepsilon N_{\nu} \Vert u_1 - u_2 \Vert_{\mathfrak s_{0}+\s_2}\nonumber \end{eqnarray} for $N_{0}$ large enough. Finally note that \eqref{deltarj12} is nothing but \eqref{spagna}.
\noindent {\sc Proof of ${\bf({S}4)}_{\nu + 1} $}. We have to prove that, if $C \varepsilon N_{\nu}^{\tau} \Vert u_1 - u_2 \Vert_{\mathfrak s_0 + \s_2} \leq \rho$, then $$ \lambda \in \Lambda_{\nu+1}^{\g}(u_1) \quad \Longrightarrow \quad \lambda \in \Lambda_{\nu+1}^{\g- \rho}(u_2) \, . $$ Let $ \lambda \in \Lambda_{\nu+1}^{\g}(u_1) $. Definition \eqref{Omgj} and ${\bf({S}4)_{\nu}}$ (see \eqref{legno}) imply that $ \Lambda_{\nu+1}^{\g}(u_1) \subseteq \Lambda_{\nu}^{\g}(u_1) \subseteq \Lambda_{\nu}^{\g- \rho}(u_2) $. Hence $ \lambda \in \Lambda_{\nu}^{\gamma - \rho}(u_2) \subset \Lambda_{\nu}^{\gamma/2}(u_2) $. Then, by ${\bf({S}1)_{\nu}}$, the eigenvalues $\mu_{j}^{\nu}(\lambda, u_2(\lambda))$ are well defined. Now \eqref{mu-j-nu} and the estimates \eqref{coefficienti costanti 2}, \eqref{Delta12 rj} (which holds because $ \lambda \in \L_{\nu}^{\g}(u_1) \cap \L_{\nu}^{\g/2}(u_2) $) imply that \begin{eqnarray}
|(\mu_j^{\nu} - \mu_k^{\nu})(\l, u_2(\l)) - (\mu_j^{\nu} - \mu_k^{\nu})(\l, u_1(\l))| & \leq & |(\mu_j^{0} - \mu_k^{0})(\l, u_2(\l)) - (\mu_j^{0} - \mu_k^{0})(\l, u_1(\l))|\nonumber\\
& & + \, 2 \sup_{j \in \mathbb Z} |r_{j}^{\nu}(\l, u_2(\l)) - r_{j}^{\nu}(\l, u_1(\l))| \nonumber\\
& \leq & \varepsilon C|j^3 - k^3| \Vert u_2 - u_1 \Vert_{\mathfrak s_0 + \s_2}^{\rm sup} \, . \label{legno3} \end{eqnarray}
Then we conclude that for all $|l| \leq N_{\nu}$, $j \neq k$, using the definition of $\Lambda_{\nu+1}^\g(u_1)$ (which is \eqref{Omgj} with $\nu+1$ instead of $\nu$) and \eqref{legno3}, \begin{eqnarray}
|{\rm i} \omega \cdot l + \mu_j^{\nu} (u_2) - \mu_k^{\nu} (u_2) | & \geq & |{\rm i} \omega \cdot l + \mu_j^{\nu} (u_1) - \mu_k^{\nu} (u_1) | - |(\mu_j^{\nu} - \mu_k^{\nu})(u_2) - (\mu_j^{\nu} - \mu_k^{\nu})(u_1) | \nonumber\\ & {\geq}
& \gamma |j^3 - k^3| \langle l \rangle^{-\tau} - C \varepsilon |j^3 - k^3| \Vert u_1 - u_2 \Vert_{\mathfrak s_0 + \s_2} \nonumber\\
& \geq & (\gamma - \rho)|j^3 - k^3| \langle l \rangle^{-\tau} \nonumber \end{eqnarray} provided $C \varepsilon N_{\nu}^{\tau} \Vert u_1 - u_2 \Vert_{\mathfrak s_0 + \s_2} \leq \rho$. Hence $ \lambda \in \L^{\gamma- \rho}_{\nu+1} ( u_2 ) $. This proves \eqref{legno} at the step $ \nu + 1 $.
\subsection{Inversion of ${\cal L}(u)$}\label{sec:inversione}
In \eqref{Phi 1 2 def} we have conjugated the linearized operator $ \mathcal{L}$ to $\mathcal{L}_5$ defined in \eqref{mL5}, namely $\mathcal{L} = \Phi_1 \mathcal{L}_5 \Phi_2^{-1}$. In Theorem \ref{teoremadiriducibilita} we have conjugated the operator $\mathcal{L}_5$ to the diagonal operator $\mathcal{L}_{\infty}$ in \eqref{Lfinale}, namely $ \mathcal{L}_5 = \Phi_{\infty} \mathcal{L}_{\infty} \Phi_{\infty}^{-1}$. As a consequence \begin{equation} \label{L-coniugato} \mathcal{L} = W_1 \mathcal{L}_\infty W_2^{-1}, \quad W_i := \Phi_{i} \Phi_{\infty}, \quad i= 1,2\,. \end{equation} We first prove that $W_1, W_2 $ and their inverses are linear bijections of $H^{s}$. We take \begin{equation}\label{gamma-tau} \gamma \leq \g_0 / 2 \,, \quad \tau \geq \tau_0\,. \end{equation}
\begin{lemma}\label{stime-tame-coniugio} Let $ \mathfrak s_{0} \leq s \leq q - \sigma - \beta -3 $ where $ \beta $ is defined in \eqref{defbq} and $ \sigma $ in \eqref{costanti lemma mostro}. Let $u:= u(\l)$ satisfy $ \Vert u \Vert_{\mathfrak s_0 + \sigma + \beta + 3}^{{\rm{Lip}(\g)}} \leq 1 $, and $ \varepsilon \g^{-1} \leq \delta $ be small enough. Then $ W_i $, $ i = 1, 2 $, satisfy, $ \forall \lambda \in \L_{\infty}^{2\g}(u) $, \begin{equation}\label{W1W2}
\left\| W_i h\right\|_{s} + \left\| W_i^{-1}h\right\|_{s} \leq C(s) \big(\left\|h\right\|_{s }
+ \left\|u\right\|_{s + \sigma + \b}\left\|h\right\|_{\mathfrak s_{0}} \big)\, , \end{equation} \begin{equation}\label{tame-Phi1Phi2}
\left\| W_i h\right\|_{s}^{{\rm{Lip}(\g)}} + \left\| W_i^{-1}h\right\|_{s}^{{\rm{Lip}(\g)}} \leq C(s)
\big(\left\|h\right\|_{s + 3}^{{\rm{Lip}(\g)}} + \left\|u\right\|^{{\rm{Lip}(\g)}}_{s + \sigma + \beta + 3}\left\|h\right\|_{\mathfrak s_{0}+3}^{{\rm{Lip}(\g)}} \big)\,. \end{equation} In the reversible case (i.e. \eqref{parity f} holds), $ W_i $, $ W_i^{-1} $, $ i = 1, 2 $ are reversibility-preserving. \end{lemma}
\begin{pf} The bound \eqref{W1W2}, resp. \eqref{tame-Phi1Phi2}, follows by \eqref{stima Phi infty}, \eqref{stima Phi 12 nel lemma}, resp. \eqref{stima Lip Phi 12 nel lemma}, \eqref{interpolazione norme miste} and Lemma \ref{lemma astratto composizioni}. In the reversible case $ W_i^{\pm 1} $ are reversibility preserving because $ \Phi_i^{\pm1} $, $ \Phi_\infty^{\pm 1} $ are reversibility preserving. \end{pf}
By \eqref{L-coniugato} we are reduced to show that, $ \forall \lambda \in \L^{2\g}_{\infty}(u) $, the operator $$ {\cal L}_\infty := {\rm diag}_{j \in \mathbb Z} \{{\rm i} \lambda \bar \omega \cdot l + \mu_j^\infty (\l)\} \, , \quad \mu_j^\infty (\l) = -{\rm i} \big( m_3 (\l) j^3 - m_1(\l) j \big) + r_j^\infty (\l) $$ is invertible, assuming \eqref{f = der g} or the reversibility condition \eqref{parity f}.
We introduce the following notation: \begin{equation}\label{proiez00} \Pi_C u := \frac{1}{(2\pi)^{\nu+1}}\, \int_{\mathbb T^{\nu+1}} u(\varphi,x) \, d\varphi dx, \ \ \mathbb{P} u := u - \Pi_C u, \ \ H^s_{00} := \{ u \in H^s(\mathbb T^{\nu+1}) : \Pi_C u = 0 \}. \end{equation} If \eqref{f = der g} holds, then the linearized operator $ {\cal L} $ in \eqref{mL} satisfies \begin{equation} \label{mL mappa tutto in media nulla} \mathcal{L} : H^{s+3} \to H^s_{00} \end{equation} (for $ \mathfrak s_0 \leq s \leq q-1 $). In the reversible case \eqref{parity f} \begin{equation} \label{mL mappa tutto in media nulla REV} \mathcal{L} : X \cap H^{s+3} \to Y \cap H^s \subset H^s_{00} \, . \end{equation} \begin{lemma}\label{lem:modo0} Assume
either \eqref{f = der g} or the reversibility condition \eqref{parity f}. Then the eigenvalue \begin{equation}\label{zero eigenvalue} \mu_0^\infty (\l) = r^\infty_0 (\l) = 0 \, , \quad \forall \lambda \in \L_\infty^{2\g} (u) \, . \end{equation} \end{lemma}
\begin{pf} Assume \eqref{f = der g}. If $ r_0^\infty \neq 0 $ then there exists a solution of $ {\cal L}_\infty w = 1 $, which is $ w = 1 / r_0^\infty $. Therefore, by \eqref{L-coniugato}, $$
{\cal L} W_2 [1 / r^\infty_0] = {\cal L} W_2 w = W_1 {\cal L}_\infty w = W_1 [1] $$ which is a contradiction because $ \Pi_C W_1 [1] \neq 0 $, for $ \varepsilon \g^{-1} $ small enough, but the average $ \Pi_C {\cal L} W_2 [1 / r^\infty_0] = 0 $ by \eqref{mL mappa tutto in media nulla}. In the reversible case $ r^\infty_0 = 0 $ was proved in remark \ref{r0=0}. \end{pf}
As a consequence of \eqref{zero eigenvalue}, the definition of $ \L_\infty^{2 \g} $ in \eqref{Omegainfty} (just specializing \eqref{Omegainfty} with $ k = 0 $), and \eqref{omdio} (with $\g$ and $\t$ as in \eqref{gamma-tau}), we deduce also the {\it first} order Melnikov non-resonance conditions \begin{equation}\label{cantor inversione linearizzato} \forall \lambda \in \L_{\infty}^{2 \g} \, , \qquad
\big|{\rm i} \l\bar\omega \cdot l + \mu_j^\infty (\l) \big| \geq 2 \gamma\frac{ \langle j \rangle^3}{ \langle l \rangle^\t}, \quad \forall (l, j) \neq (0, 0) \, . \end{equation}
\begin{lemma} {\bf (Invertibility of ${\cal L}_\infty $)} \label{stima-L-infinito} For all $ \lambda \in \L_\infty^{2 \g} (u) $, for all $ g \in H^s_{00} $ the equation $ {\cal L}_\infty w = g $ has the unique solution with zero average \begin{equation}\label{Linfty inverse} \mathcal{L}_{\infty}^{-1} \, g (\varphi,x) := \sum_{(l,j) \neq (0,0)} \frac{g_{lj}}{{\rm i} \l\bar\omega \cdot l + \mu_j^\infty (\l) }\, e^{\ii(l \cdot \varphi + j x)}. \end{equation} For all Lipschitz family $ g := g(\l) \in H^s_{00} $ we have \begin{equation}\label{tame-L-infinito}
\left\|{\cal L}_{\infty}^{-1} g \right\|_{s}^{{\rm{Lip}(\g)}} \leq C \gamma^{-1} \left\| g \right\|_{s + 2\tau + 1}^{{\rm{Lip}(\g)}} \, . \end{equation} In the reversible case, if $ g \in Y $ then $ {\cal L}_\infty^{-1} g \in X $. \end{lemma}
\begin{pf} For all $\lambda \in \L_\infty^{2\g} (u) $, by \eqref{cantor inversione linearizzato}, formula
\eqref{Linfty inverse} is well defined and
\begin{equation}\label{stima-0-inverso-L-infinito}
\left\|{\cal L}_{\infty}^{-1}(\l)g(\l)\right\|_{s} \lessdot \gamma^{-1}\left\|g(\l)\right\|_{s + \tau}\, . \end{equation} Now we prove the Lipschitz estimate. For $ \l_1 , \l_2 \in \L_\infty^{2\g} (u) $ \begin{equation}\label{delta-L-infinito} {\cal L}_{\infty}^{-1}(\l_{1})g(\l_{1}) - {\cal L}_{\infty}^{-1}(\l_{2})g(\l_{2}) = {\cal L}_{\infty}^{-1}(\l_{1}) [g(\l_1) - g(\l_2)] + \big({\cal L}_{\infty}^{-1}(\l_1) - {\cal L}_{\infty}^{-1}(\l_2) \big)g(\l_{2})\,. \end{equation} By \eqref{stima-0-inverso-L-infinito} \begin{equation}\label{primo-pezzo-delta-L-infinito}
\gamma\|{\cal L}_{\infty}^{-1}(\l_{1}) [g(\l_{1}) - g(\l_{2})] \|_s
\lessdot \| g(\l_{1})- g(\l_{2}) \|_{s + \tau} \leq \g^{-1} \Vert g \Vert_{s + \tau}^{{\rm{Lip}(\g)}} |\l_1 - \l_2 | \, . \end{equation} Now we estimate the second term of \eqref{delta-L-infinito}. We simplify notations writing $ g := g(\l_{2}) $ and $ \d_{lj} := {\rm i} \lambda \bar \omega \cdot l + \mu_j^\infty $. \begin{equation}\label{secondo-pezzo-delta-L-infinito} \big({\cal L}_{\infty}^{-1}(\l_{1}) - {\cal L}_{\infty}^{-1}(\l_{2})\big)g = \sum_{(l , j)\neq(0,0)} \frac{\delta_{lj}(\l_{2}) - \delta_{lj}(\l_{1})}{\delta_{lj}(\l_{1})\delta_{lj}(\l_{2})} \, g_{lj} e^{\ii(l \cdot \varphi + j x)} \, . \end{equation} The bound \eqref{autofinali} imply
$ \vert \mu_{j}^{\infty} \vert^{\rm lip} \lessdot \varepsilon \g^{-1} | j |^{3} \lessdot | j |^{3} $ and, using also \eqref{cantor inversione linearizzato}, \begin{eqnarray}
\g\frac{|\delta_{lj}(\l_{2}) - \delta_{lj}(\l_{1}) |}{|\delta_{lj}(\l_{1})| |\delta_{lj}(\l_{2}) |}
\! \! & \lessdot & \! \! \frac{( | l | + | j |^{3}) \langle l \rangle^{2\tau}}{\gamma \langle j \rangle^{6}} |\l_{2} - \l_{1} |
\lessdot \langle l \rangle^{2\tau + 1} \gamma^{-1} | \l_2 - \l_1 | \, . \label{deltalj-o1-o2} \end{eqnarray} Then \eqref{secondo-pezzo-delta-L-infinito} and \eqref{deltalj-o1-o2} imply
$ \gamma\| ({\cal L}_{\infty}^{-1}(\l_2) - {\cal L}_{\infty}^{-1}(\l_1) )g \|_s
\lessdot \gamma^{-1} \|g \|_{s + 2\tau + 1}^{{\rm{Lip}(\g)}} |\l_2 - \l_1 | $ that, finally, with \eqref{stima-0-inverso-L-infinito}, \eqref{primo-pezzo-delta-L-infinito}, prove \eqref{tame-L-infinito}. The last statement follows by the property \eqref{reversibilitˆ autovalori finali}. \end{pf}
In order to solve the equation $ {\cal L} h = f $ we first prove the following lemma.
\begin{lemma} \label{lemma:iso zero mean} Let $\mathfrak s_0 + \tau + 3 \leq s \leq q - \sigma - \beta - 3 $. Under the assumption \eqref{f = der g} we have \begin{equation} \label{iso W1 Hs0} W_1 (H^s_{00}) = H^s_{00} \, , \quad \ W_1^{-1} (H^s_{00}) = H^s_{00} \, . \end{equation} \end{lemma}
\begin{pf} It is sufficient to prove that $ W_1 (H^s_{00}) = H^s_{00} $ because the second equality of \eqref{iso W1 Hs0} follows applying the isomorphism $ W_1^{-1} $. Let us give the proof of the inclusion \begin{equation} \label{inclusion} W_1 (H^s_{00}) \subseteq H^s_{00} \end{equation} (which is essentially algebraic). For any $ g \in H^s_{00}$, let $ w(\varphi,x) := {\cal L}_\infty^{-1} g \in H^{s - \t}_{00} $ defined in \eqref{Linfty inverse}. Then $ h := W_2 w \in H^{s-\t} $ satisfies \[ \mathcal{L} h \stackrel{\eqref{L-coniugato}} = W_1 \mathcal{L}_\infty W_2^{-1} h = W_1 \mathcal{L}_\infty w = W_1 g \, . \] By \eqref{mL mappa tutto in media nulla} we deduce that $W_1 g = \mathcal{L} h \in H^{s - \tau - 3}_{00} $. Since $ W_1 g \in H^s $ by Lemma \ref{stime-tame-coniugio}, we conclude $ W_1 g \in H^s \cap H^{s - \tau - 3}_{00} = H^s_{00}$. The proof of \eqref{inclusion} is complete.
It remains to prove that $H^s_{00} \setminus W_1(H^s_{00}) = \emptyset$. By contradiction, let $ f \in H^s_{00} \setminus W_1(H^s_{00}) $. Let $ g := W_1^{-1} f \in H^s $ by Lemma \ref{stime-tame-coniugio}. Since $W_1 g = f \notin W_1(H^s_{00})$, it follows that $ g \notin H^s_{00} $ (otherwise it contradicts \eqref{inclusion}), namely $c := \Pi_C g \neq 0$. Decomposing $ g = c + \mathbb{P} g $ (recall \eqref{proiez00}) and applying $W_1$, we get $ W_1 g = c W_1[1] + W_1 \mathbb{P} g $. Hence \[ W_1[1] = c^{-1} (W_1 g - W_1 \mathbb{P} g) \in H^s_{00} \] because $W_1 g = f \in H^s_{00}$ and $W_1 \mathbb{P} g \in W_1(H^s_{00}) \subseteq H^s_{00}$ by \eqref{inclusion}. However, $ \Pi_C W_1[1] \neq 0 $, a contradiction. \end{pf}
\begin{remark} In the Hamiltonian case (which always satisfies \eqref{f = der g}), the $ W_i (\varphi ) $ are maps of (a subspace of) $ H^1_0 $ so that Lemma \ref{lemma:iso zero mean} is automatic, and there is no need of Lemma \ref{lem:modo0}. \end{remark}
We may now prove the main result of sections \ref{sec:regu} and \ref{sec:redu}.
\begin{theorem}\label{inversione linearizzato} {\bf (Right inverse of $ {\cal L}$)} Let \begin{equation}\label{perdite stima tame linearizzato} \tau_1 := 2\tau + 7, \quad \mu:= 4\tau + \sigma + \beta + 14\,, \end{equation} where $ \sigma $, $\beta $ are defined in \eqref{costanti lemma mostro}, \eqref{defbq} respectively. Let $ u ( \lambda ) $, $ \lambda \in \L_o \subseteq \Lambda $, be a Lipschitz family with \begin{equation}\label{verasmall} \Vert u \Vert_{\mathfrak s_0 + \mu}^{{\rm{Lip}(\g)}} \leq 1 \, . \end{equation} Then there exists $ \delta $ (depending on the data of the problem) such that if $$ \varepsilon\g^{-1} \leq \delta \, , $$ and condition \eqref{f = der g}, resp. the reversibility condition \eqref{parity f}, holds, then for all $ \lambda \in \L_\infty^{2 \g}(u)$ defined in \eqref{Omegainfty}, the linearized operator $\mathcal{L}:= \mathcal{L}(\l, u(\l))$ (see \eqref{mL}) admits a right inverse on $ H^s_{00} $, resp. $ Y \cap H^s $. More precisely, for $\mathfrak s_0 \leq s \leq q - \mu$, for all Lipschitz family $ f(\l) \in H^s_{00} $, resp. $ Y \cap H^s $, the function \begin{equation}\label{scelta} h := {\cal L}^{-1} f := W_2 \mathcal{L}_{\infty }^{-1} \, W_1^{-1} f \end{equation} is a solution of $ {\cal L} h = f $. In the reversible case, $ {\cal L}^{-1} f \in X $. Moreover \begin{equation}\label{tame-L} \Vert \mathcal{L}^{-1} f \Vert_{s}^{{\rm{Lip}(\g)}} \leq C(s)\g^{-1} \Big(\Vert f \Vert_{s + \t_1}^{{\rm{Lip}(\g)}} + \Vert u \Vert_{s + \mu}^{{\rm{Lip}(\g)}} \Vert f \Vert_{\mathfrak s_0}^{{\rm{Lip}(\g)}} \Big) \, . \end{equation} \end{theorem}
\begin{pf} Given $f \in H^s_{00}$, resp. $ f \in Y \cap H^s $, with $ s $ like in Lemma \ref{lemma:iso zero mean}, the equation $ \mathcal{L} h = f $ can be solved for $ h $ because $\Pi_C f = 0 $. Indeed, by \eqref{L-coniugato}, the equation $ {\cal L} h = f $ is equivalent to $ \mathcal{L}_\infty W_2^{-1} h = W_1^{-1} f $ where $W_1^{-1} f \in H^s_{00} $ by Lemma \ref{lemma:iso zero mean}, resp. $ W_1^{-1} f \in Y \cap H^s $ being $ W_1^{-1} $ reversibility-preserving (Lemma \ref{stime-tame-coniugio}). As a consequence, by Lemma \ref{stima-L-infinito}, all the solutions of $ {\cal L} h = f $ are \begin{equation}\label{tutte le soluzioni} h = c W_2[1] + W_2 \mathcal{L}_{\infty}^{-1} W_1^{-1} f, \quad c \in \mathbb R \, . \end{equation} The solution \eqref{scelta} is the one with $ c = 0 $. In the reversible case, the fact that $ {\cal L}^{-1} f \in X $ follows by \eqref{scelta} and the fact that $ W_i $, $ W_i^{-1}$ are reversibility-preserving and $ {\cal L}_\infty^{-1} : Y \to X $, see Lemma \ref{stima-L-infinito}.
Finally \eqref{tame-Phi1Phi2}, \eqref{tame-L-infinito}, \eqref{verasmall} imply $$ \Vert \mathcal{L}^{-1} f \Vert_s^{{\rm{Lip}(\g)}} \leq C(s)\g^{-1} \big( \Vert f \Vert_{s + 2\tau + 7}^{{\rm{Lip}(\g)}} + \Vert u \Vert_{s + 2\tau + \sigma + \beta + 7}^{{\rm{Lip}(\g)}} \Vert f \Vert_{\mathfrak s_0 + 2\tau + 7}^{{\rm{Lip}(\g)}} \big) $$ and \eqref{tame-L} follows using \eqref{interpolation estremi fine} with $ b_0 = \mathfrak s_0 $, $ a_0 := \mathfrak s_0 + 2 \tau + \sigma + \beta + 7 $, $ q = 2 \tau + 7 $, $ p = s - \mathfrak s_0 $. \end{pf}
In the next section we apply Theorem \ref{inversione linearizzato} to deduce
tame estimates for the inverse linearized operators at any step of the Nash-Moser scheme. The approximate solutions along the iteration will satisfy \eqref{verasmall}.
\section{The Nash-Moser iteration} \label{sec:NM}
We define the finite-dimensional subspaces of trigonometric polynomials $$
H_{n} := \Big\{ u \in L^{2}(\mathbb T^{\nu + 1}) : u(\varphi ,x)=\sum_{\left|(l , j)\right|\leq N_{n}}u_{lj} e^{\ii(l\cdot\varphi + j x)} \Big\} $$ where $ N_n := N_0^{\chi^n}$ (see \eqref{defN}) and the corresponding orthogonal projectors $$ \Pi_{n}:=\Pi_{N_{n}} : L^{2}(\mathbb T^{\nu + 1}) \rightarrow H_{n} \, , \quad \Pi_{n}^\bot := I - \Pi_{n} \, . $$ The following smoothing properties hold: for all $\alpha , s \geq 0$, \begin{equation}\label{smoothing-u1}
\|\Pi_{n}u \|_{s + \alpha}^{\rm{Lip}(\g)}
\leq N_{n}^{\alpha} \| u \|_{s}^{\rm{Lip}(\g)}, \ \ \forall u(\lambda) \in H^{s} \,; \quad
\|\Pi_{n}^\bot u \|_{s}^{\rm{Lip}(\g)}
\leq N_{n}^{-\alpha} \|u \|_{s + \alpha}^{\rm{Lip}(\g)}, \ \ \forall u(\lambda) \in H^{s + \alpha}, \end{equation} where the function $u(\lambda)$ depends on the parameter $\lambda $ in a Lipschitz way. The bounds \eqref{smoothing-u1} are the classical smoothing estimates for truncated Fourier series, which also hold
with the norm $\| \cdot \|^{\rm{Lip}(\g)}_s $ defined in \eqref{def norma Lipg}.
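For completeness, here is a one-line proof of the second estimate in \eqref{smoothing-u1} for the plain Sobolev norm, assuming the standard weighted-$\ell^2$ definition $ \| u \|_s^2 = \sum_{(l,j)} \langle l,j \rangle^{2s} |u_{lj}|^2 $ (an assumption consistent with \eqref{def norma Lipg}); on the tail, $ \langle l,j \rangle > N_n $:

```latex
\[
\| \Pi_{n}^{\bot} u \|_{s}^{2}
= \sum_{|(l,j)| > N_{n}} \langle l,j \rangle^{2s} \, |u_{lj}|^{2}
\leq N_{n}^{-2\alpha} \sum_{|(l,j)| > N_{n}} \langle l,j \rangle^{2(s+\alpha)} \, |u_{lj}|^{2}
\leq N_{n}^{-2\alpha} \, \| u \|_{s+\alpha}^{2} \, .
\]
```

The first estimate follows in the same way from $ \langle l,j \rangle^{2(s+\alpha)} \leq N_n^{2\alpha} \langle l,j \rangle^{2s} $ for $ |(l,j)| \leq N_n $, and the Lipschitz parts of $ \| \cdot \|_s^{\rm{Lip}(\g)} $ pass through because $ \Pi_n $, $ \Pi_n^{\bot} $ do not depend on $ \lambda $.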
Let \begin{equation}\label{operatorF(u)} F(u) := F(\l, u) := \lambda \bar \omega \cdot \partial_{\varphi } u + u_{xxx} + \varepsilon f(\varphi , x , u, u_{x}, u_{xx}, u_{xxx} ) \, . \end{equation} We define the constants \begin{equation}\label{sigma-beta-kappa} \kappa := 28 + 6 \mu, \qquad \b_1 := 50 + 11 \mu, \, \end{equation} where $\mu$ is the loss of regularity in \eqref{perdite stima tame linearizzato}.
\begin{theorem}\label{iterazione-non-lineare} {\bf (Nash-Moser)} Assume that $ f \in C^q $, $ q \geq \mathfrak s_0 + \mu + \b_1 $, satisfies the assumptions of Theorem \ref{thm:main} or Theorem \ref{thm:mainrev}. Let $ 0 < \gamma \leq {\rm min}\{\g_0, 1/48 \} $, $ \tau > \nu + 1 $. Then there exist $ \delta > 0 $, $ C_* > 0 $, $ N_0 \in \mathbb N $ (that may depend also on $ \tau $) such that, if $ \varepsilon \g^{-1} < \delta $, then, for all $ n \geq 0 $: \begin{itemize} \item[$({\cal P}1)_{n}$] there exists a function $u_n : \mathcal{G}_n \subseteq \Lambda \to H_n$, $\lambda \mapsto u_n(\lambda)$,
with $ \| u_{n} \|_{\mathfrak s_0 + \mu}^{{\rm{Lip}(\g)}} \leq 1 $, $ u_0 := 0$, where ${\cal G}_{n} $ are Cantor-like subsets of $ \Lambda := [1/2, 3/2] $ defined inductively by: $ {\cal G}_{0} := \Lambda $, \begin{eqnarray}\label{G-n+1} {\cal G}_{n+1} & := &
\Big\{ \lambda \in {\cal G}_{n} \, : \, |{\rm i} \lambda \bar \omega \cdot l + \mu_j^\infty (u_{n}) -
\mu_k^\infty (u_{n})| \geq \frac{2\gamma_{n} |j^{3}-k^{3}|}{\left\langle l\right\rangle^{\tau}}\, , \ \forall j , k \in \mathbb Z, \ l \in \mathbb Z^{\nu} \Big\} \end{eqnarray} where $ \gamma_{n}:=\gamma (1 + 2^{-n}) $. In the reversible case, namely when \eqref{parity f} holds, $ u_n (\lambda ) \in X $.
The difference $h_n := u_{n} - u_{n-1}$, where, for convenience, $h_0 := 0$, satisfies \begin{equation} \label{hn}
\| h_{n} \|_{\mathfrak s_0 + \mu}^{\rm{Lip}(\g)} \leq C_* \varepsilon \gamma^{-1} N_{n}^{-\s_1} \,, \quad \s_1 := 18 + 2 \mu \, . \end{equation}
\item[$({\cal P}2)_{n}$] $ \| F(u_n) \|_{\mathfrak s_{0}}^{{\rm{Lip}(\g)}} \leq C_* \varepsilon N_{n}^{- \kappa}$.
\item[$({\cal P}3)_{n}$] \emph{(High norms).} $ \|u_n \|_{\mathfrak s_{0}+ \beta_1}^{{\rm{Lip}(\g)}} \leq C_* \varepsilon \gamma^{-1} N_{n}^{\kappa}$
and $ \|F(u_n ) \|_{\mathfrak s_{0}+\beta_1}^{{\rm{Lip}(\g)}} \leq C_* \varepsilon N_{n}^{\kappa}$.
\item[$({\cal P}4)_{n}$] \emph{(Measure).} The measures of the Cantor-like sets satisfy \begin{equation}\label{Gmeasure}
|{\cal G}_0 \setminus {\cal G}_1| \leq C_* \gamma\, , \quad \big| {\cal G}_n \setminus {\cal G}_{n+1} \big| \leq \g C_* N_{n}^{-1} \, , \ n \geq 1 . \end{equation}
\end{itemize} All the $ {\rm{Lip}(\g)} $ norms are defined on $ {\cal G}_{n} $. \end{theorem}
\begin{pf}
The proof of Theorem \ref{iterazione-non-lineare} is split into several steps. For simplicity of notation, we denote $ \| \cdot \|^{{\rm{Lip}(\g)}} $ by $ \| \cdot \| $.
\noindent {\sc Step 1:} \emph{prove $(\mP1,2,3)_0$.} $(\mP1)_0$ and the first inequality of $(\mP3)_0$ are trivial because $u_0 = h_0 = 0$. $(\mP2)_0$ and the second inequality of $(\mP3)_0$ follow with
$ C_* \geq $ $ \max\{ \| f(0)\|_{\mathfrak s_0} N_0^\kappa, $ $ \| f(0)\|_{\mathfrak s_0+ \b_1} N_0^{-\kappa} \} $.
\noindent {\sc Step 2:} \emph{assume that $(\mP1,2,3)_n$ hold for some $n \geq 0$, and prove $(\mP1,2,3)_{n+1}$.}
By $(\mP1)_n$ we know that $ \| u_n \|_{\mathfrak s_{0} + \mu} \leq 1 $, namely condition \eqref{verasmall} is satisfied. Hence, for $ \varepsilon \g^{-1}$ small enough, Theorem \ref{inversione linearizzato} applies. Then, for all $\lambda \in {\cal G}_{n+1} $ defined in \eqref{G-n+1}, the linearized operator \[ \mathcal{L}_n(\lambda) := {\cal L}(\lambda, u_{n}(\lambda)) = F'(\lambda, u_n(\lambda)) \] (see \eqref{mL}) admits a right inverse for all $ h \in H^s_{00} $, if condition \eqref{f = der g} holds, respectively for $ h \in Y \cap H^s $ if the reversibility condition \eqref{parity f} holds. Moreover \eqref{tame-L} gives the estimates \begin{align} \label{L-1alta}
\| {\cal L}_n^{-1} h \|_s
& \leq_s \g^{-1} \Big( \| h \|_{s+\t_1} + \| u_n \|_{s+ \mu} \|h \|_{\mathfrak s_0} \Big) \, , \quad \forall h(\lambda), \\ \label{L-1s0}
\| {\cal L}_n^{-1} h \|_{\mathfrak s_0}
& \leq \g^{-1} N_{n+1}^{\t_1} \| h \|_{\mathfrak s_0} \, , \quad \forall h(\lambda) \in H_{n+1} \,, \end{align} (use
\eqref{smoothing-u1} and $ \| u_n \|_{\mathfrak s_{0} + \mu} \leq 1 $), for all Lipschitz maps $h(\lambda)$. Then, for all $\lambda \in {\cal G}_{n+1} $, we define \begin{equation}\label{soluzioni-approssimate} u_{n+1} := u_{n} + h_{n + 1} \in H_{n+1} \, , \quad h_{n + 1}:= - \Pi_{n + 1} {\cal L}_n^{-1} \Pi_{n + 1} F(u_{n}) \, , \end{equation} which is well defined because, if condition \eqref{f = der g} holds then $ \Pi_{n + 1} F(u_n) \in H^s_{00} $, and, respectively, if \eqref{parity f} holds, then $ \Pi_{n + 1} F(u_{n}) \in Y \cap H^s $ (hence in both cases $ {\cal L}_n^{-1} \Pi_{n + 1} F(u_n) $ exists). Note also that in the reversible case $ h_{n + 1} \in X $ and so $ u_{n + 1} \in X $.
Recalling \eqref{operatorF(u)} and that $\mathcal{L}_n := F'(u_n) $, we write \begin{equation}\label{FTay} F(u_{n + 1}) = F(u_{n}) + {\cal L}_n h_{n + 1} + \varepsilon Q(u_{n}, h_{n + 1}) \end{equation} where $$ Q(u_{n},h_{n + 1}) := {\cal N}(u_{n} + h_{n + 1}) - {\cal N}(u_{n}) - {\cal N}'(u_{n}) h_{n + 1}, \quad \mathcal{N}(u) := f(\varphi,x,u,u_x, u_{xx}, u_{xxx}). $$ With this definition, \[ F(u) = L_\omega u + \varepsilon {\cal N}(u), \quad F'(u) h = L_\omega h + \varepsilon \mathcal{N}'(u)h, \quad L_\omega := \om \cdot \pa_{\ph} + \partial_{xxx}. \] By \eqref{FTay} and \eqref{soluzioni-approssimate} we have \begin{eqnarray} F(u_{n + 1}) & = & F(u_{n}) - \mathcal{L}_n \Pi_{n + 1} \mathcal{L}_n^{-1} \Pi_{n + 1} F(u_{n}) + \varepsilon Q(u_{n},h_{n + 1}) \nonumber\\ & = & \Pi_{n + 1}^{\bot} F(u_{n}) + \mathcal{L}_n \Pi_{n + 1}^{\bot} \mathcal{L}_n^{-1} \Pi_{n + 1} F(u_{n}) + \varepsilon Q(u_{n},h_{n + 1}) \nonumber\\ & = & \Pi_{n + 1}^{\bot} F(u_{n}) + \Pi_{n + 1}^{\bot} \mathcal{L}_n \mathcal{L}_n^{-1} \Pi_{n + 1} F(u_{n}) + [ \mathcal{L}_n , \Pi_{n + 1}^{\bot} ] \mathcal{L}_n^{-1} \Pi_{n + 1} F(u_{n}) + \varepsilon Q(u_{n},h_{n + 1})\nonumber \\ & = & \Pi_{n + 1}^{\bot} F(u_{n}) + \varepsilon [{\cal N}'(u_{n}) , \Pi_{n + 1}^{\bot} ] \mathcal{L}_n^{-1} \Pi_{n + 1} F(u_{n}) + \varepsilon Q(u_{n},h_{n + 1}) \label{F(u-n+1)} \end{eqnarray} where we have gained an extra $\varepsilon$ from the commutator $$ [{\cal L}_n , \Pi_{n + 1}^{\bot} ] = [ L_{\o} + \varepsilon{\cal N}'(u_{n}) , \Pi_{n + 1}^{\bot} ] = \varepsilon [{\cal N}'(u_{n}) , \Pi_{n + 1}^{\bot} ] \,. $$ \begin{lemma} Set \begin{equation}\label{BnB'n} U_{n}:=\Vert u_{n} \Vert_{\mathfrak s_{0}+\beta_1} + \g^{-1} \Vert F(u_{n}) \Vert_{\mathfrak s_{0}+\beta_1} \,, \qquad w_n := \g^{-1} \Vert F(u_{n}) \Vert_{\mathfrak s_{0}} \, . 
\end{equation} There exists $C_0 := C ( \t_1, \mu, \nu, \b_1) > 0 $ such that \begin{equation}\label{quadratico} w_{n+1} \leq C_0 N_{n + 1}^{- \b_1 + \mu'} U_n ( 1 + w_n ) + C_0 N_{n + 1}^{6 + 2\mu} w_n^2 , \qquad U_{n+1} \leq C_0 N_{n + 1}^{9 + 2\mu} ( 1 + w_n )^2 \, U_n \, . \end{equation} \end{lemma}
\begin{pf} The operators $\mathcal{N}'(u_n)$ and $Q(u_n,\cdot)$ satisfy the following tame estimates: \begin{align} \label{tameQ}
\| Q(u_n , h) \|_s
& \leq_s \| h \|_{\mathfrak s_0 + 3} \Big( \| h \|_{s + 3}
+ \| u_n \|_{s + 3} \| h \|_{\mathfrak s_0+3} \Big) \quad \ \forall h(\lambda), \\ \label{Qs0}
\| Q(u_n , h) \|_{\mathfrak s_0}
& \leq N_{n+1}^6 \| h \|_{\mathfrak s_0}^2 \ \quad \forall h(\lambda) \in H_{n+1} , \\ \label{tameN'}
\| \mathcal{N}'(u_n) h \|_{s}
& \leq_{s} \| h \|_{s + 3} + \| u_n \|_{s + 3} \| h \|_{\mathfrak s_0+3} \quad \forall h(\lambda), \end{align} where $h(\lambda)$ depends on the parameter $\lambda$ in a Lipschitz way. The bounds \eqref{tameQ} and \eqref{tameN'} follow by Lemma \ref{lemma:composition of functions, Moser}$(i)$ and Lemma \ref{lemma:Lip generale}.
Estimate \eqref{Qs0} is simply \eqref{tameQ} at $s = \mathfrak s_0$, using that $\| u_n \|_{\mathfrak s_0 + 3} \leq 1$, $u_n, h_{n+1} \in H_{n+1}$ and the smoothing \eqref{smoothing-u1}.
By \eqref{L-1alta} and \eqref{tameN'}, the term (in \eqref{F(u-n+1)}) $ R_n := [ {\cal N}' (u_n), \Pi_{n+1}^\bot ] {\cal L}_n^{-1} \Pi_{n+1} F(u_n) $ satisfies, using also that $ u_n \in H_n $ and \eqref{smoothing-u1}, \begin{align} \label{Rn alta}
\| R_n \|_s & \leq_s \g^{-1} N_{n+1}^{\mu'}
\Big( \| F(u_n) \|_s + \| u_n \|_{s} \| F(u_n) \|_{\mathfrak s_0} \Big), \quad \mu' := 3 + \mu, \\ \label{RNS0}
\| R_n \|_{\mathfrak s_0}
& \leq_{\mathfrak s_0 + \b_1} \g^{-1} N_{n+1}^{-\b_1 + \mu'} \Big(\| F(u_n) \|_{\mathfrak s_0+ \b_1} +
\| u_n \|_{\mathfrak s_0 + \b_1} \| F(u_n) \|_{\mathfrak s_0} \Big), \end{align} because $\mu \geq \t_1 + 3$. In proving \eqref{Rn alta} and \eqref{RNS0}, we have simply estimated $\mathcal{N}'(u_n) \Pi_{n+1}^\perp$ and $\Pi_{n+1}^\perp \mathcal{N}'(u_n)$ separately, without using the commutator structure.
From the definition \eqref{soluzioni-approssimate} of $h_{n+1}$, using \eqref{L-1alta}, \eqref{L-1s0} and \eqref{smoothing-u1}, we get \begin{align} \label{h-n+1-alta}
\| h_{n + 1} \|_{\mathfrak s_{0}+ \beta_1}
& \leq_{\mathfrak s_0 + \beta_1} \gamma^{-1} N_{n + 1}^{\mu} \Big( \|F(u_{n}) \|_{\mathfrak s_0+\beta_1} +
\| u_n \|_{\mathfrak s_0 + \beta_1} \| F(u_{n}) \|_{\mathfrak s_0} \Big), \\ \label{h-n+1-bassa}
\|h_{n + 1} \|_{\mathfrak s_{0}}
& \leq_{\mathfrak s_0} \g^{-1} N_{n + 1}^{\mu} \| F(u_n) \|_{\mathfrak s_0} \end{align} because $\mu \geq \t_1$. Then \begin{align}
\|u_{n + 1} \|_{\mathfrak s_0 + \beta_1}
& \stackrel{\eqref{soluzioni-approssimate}} \leq \|u_{n} \|_{\mathfrak s_0 + \beta_1} + \| h_{n + 1} \|_{\mathfrak s_0 + \beta_1} \notag \\
& \stackrel{\eqref{h-n+1-alta}} \leq_{\mathfrak s_0 + \b_1} \| u_n \|_{\mathfrak s_0 + \b_1} \Big( 1 + \g^{-1} N_{n+1}^{\mu} \| F (u_n) \|_{\mathfrak s_0} \Big)
+ \gamma^{-1} N_{n + 1}^{\mu} \| F(u_n) \|_{\mathfrak s_0 + \beta_1}. \label{u-n+1-alta} \end{align} Formula \eqref{F(u-n+1)} for $F(u_{n+1})$, and \eqref{RNS0}, \eqref{Qs0}, \eqref{h-n+1-bassa}, $\varepsilon \g^{-1} \leq 1$, \eqref{smoothing-u1}, imply \begin{equation}\label{stima-induttiva-bassa}
\| F(u_{n + 1}) \|_{\mathfrak s_0} \leq_{\mathfrak s_0 + \b_1} N_{n + 1}^{- \b_1 + \mu'}
\Big( \| F(u_n) \|_{\mathfrak s_0 + \beta_1} + \| u_n \|_{\mathfrak s_0 + \beta_1} \| F(u_n) \|_{\mathfrak s_0} \Big)
+ \varepsilon \g^{-2} N_{n + 1}^{6 + 2 \mu} \| F(u_{n}) \|_{\mathfrak s_{0}}^{2}. \end{equation} Similarly, using the ``high norm'' estimates \eqref{Rn alta}, \eqref{tameQ}, \eqref{h-n+1-alta}, \eqref{h-n+1-bassa}, $\varepsilon \g^{-1} \leq 1$ and \eqref{smoothing-u1}, \begin{equation}\label{stima-induttiva-alta}
\| F(u_{n + 1}) \|_{\mathfrak s_0 + \beta_1} \leq_{\mathfrak s_0 + \b_1}
\Big( \| F(u_n) \|_{\mathfrak s_0 + \b_1} + \| u_n \|_{\mathfrak s_0 + \b_1} \| F(u_n) \|_{\mathfrak s_0} \Big)
\Big( 1 + N_{n+1}^{\mu'} + N_{n+1}^{9 + 2 \mu} \g^{-1} \| F(u_n) \|_{\mathfrak s_0} \Big). \end{equation} By \eqref{u-n+1-alta}, \eqref{stima-induttiva-bassa} and \eqref{stima-induttiva-alta} we deduce \eqref{quadratico}. \end{pf}
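The quadratic smallness of the Taylor remainder $ Q(u_n, h_{n+1}) $ in \eqref{FTay}, which drives the convergence of the scheme, can be illustrated on a scalar toy model. In the sketch below, $\mathcal{N}(u) = u^3$ is a hypothetical stand-in for the composition operator (an assumption of this illustration, not the $f$ of the paper):

```python
# hypothetical scalar stand-in N(u) = u^3 for the composition operator
# (an assumption of this illustration, not the f of the paper)
def N(u):
    return u ** 3

def dN(u):
    return 3 * u ** 2            # N'(u)

def Q(u, h):
    # Taylor remainder N(u + h) - N(u) - N'(u) h = 3 u h^2 + h^3
    return N(u + h) - N(u) - dN(u) * h

u0 = 0.7
# quadratic smallness: halving h shrinks Q by a factor ~ 4
assert abs(Q(u0, 0.3)) / abs(Q(u0, 0.15)) > 3.5
assert abs(Q(u0, 1e-4)) < 1e-7   # |Q(u, h)| = O(h^2)
```

The same $O(h^2)$ behaviour is what \eqref{Qs0} encodes for the full operator, up to the projection and smoothing factors.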
By $ ({\cal P}2)_n $ we deduce, for $ \varepsilon \g^{-1} $ small, that (recall the definition of $ w_n $ in \eqref{BnB'n}) \begin{equation} \label{control wU} w_n \leq \varepsilon \g^{-1} C_* N_{n}^{-\kappa} \leq 1 \, . \end{equation} Then, by the second inequality in \eqref{quadratico}, \eqref{control wU}, $ ({\cal P}3)_n $ (recall the definition of $ U_n $ in \eqref{BnB'n}) and the choice of $ \kappa $ in \eqref{sigma-beta-kappa}, we deduce $ U_{n+1} \leq C_* \varepsilon \g^{-1} N_{n+1}^\kappa $, for $ N_0 $ large enough. This proves $ ({\cal P}3)_{n+1} $.
Next, by the first inequality in \eqref{quadratico}, \eqref{control wU}, $ ({\cal P}2)_n $ (recall the definition of $ w_n $ in \eqref{BnB'n}) and \eqref{sigma-beta-kappa}, we deduce $ w_{n+1} \leq C_* \varepsilon \g^{-1} N_{n+1}^{-\kappa} $, for $ N_0 $ large, $ \varepsilon \g^{-1}$ small. This proves $ ({\cal P}2)_{n+1} $.
The bound \eqref{hn} at the step $ n +1$ follows by \eqref{h-n+1-bassa} and $({\cal P}2)_n $ (and \eqref{sigma-beta-kappa}). Then $$
\| u_{n+1} \|_{\mathfrak s_0 + \mu} \leq \| u_0 \|_{\mathfrak s_0+ \mu} + \sum_{k=1}^{n+1} \| h_k \|_{\mathfrak s_0 + \mu}
\leq \sum_{k=1}^\infty C_* \varepsilon \g^{-1} N_k^{-\s_1} \leq 1 $$ for $ \varepsilon \g^{-1} $ small enough. As a consequence $(\mP1,2,3)_{n+1}$ hold.
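The convergence of the series $ \sum_{k \geq 1} N_k^{-\s_1} $ rests on the super-exponential growth $ N_n = N_0^{\chi^n} $. The following numerical sketch assumes the usual Nash-Moser choice $\chi = 3/2$ and an illustrative value of $\mu$ (hence of $\s_1 = 18 + 2\mu$); both are assumptions of this check:

```python
# a sketch, assuming the usual Nash-Moser choice chi = 3/2 and an
# illustrative mu = 10, hence sigma_1 = 18 + 2*mu = 38
chi, N0, sigma1 = 1.5, 10, 38
Ns = [N0 ** (chi ** n) for n in range(1, 12)]    # N_n = N_0^{chi^n}
tail = sum(N ** (-sigma1) for N in Ns)
# super-exponential decay: the series is dominated by its first term,
# so sum_k C_* (eps/gamma) N_k^{-sigma_1} <= 1 for eps/gamma small
assert tail < 2 * Ns[0] ** (-sigma1)
assert tail < 1e-50
```

The first term already dominates the whole sum, which is why the smallness of $\varepsilon \g^{-1}$ alone closes the estimate.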
\noindent {\sc Step 3:} \emph{prove $(\mP4)_n$, $n \geq 0$.} For all $n \geq 0$, \begin{equation}\label{natale} {\cal G}_n \setminus{\cal G}_{n+1} = \bigcup_{l \in \mathbb Z^{\nu}, j,k \in \mathbb Z} R_{ljk} (u_{n}) \end{equation} where \begin{eqnarray} R_{ljk} (u_{n}) & := &
\left\{\lambda\in{\cal G}_n : \left|{\rm i} \lambda\bar\omega \cdot l + \mu_{j}^{\infty}(\lambda,u_{n}(\lambda)) - \mu_{k}^{\infty}(\lambda,u_{n}(\lambda))\right| < 2\gamma_{n} | j^{3}-k^{3} | \left\langle l\right\rangle^{-\tau}\right\} \, . \label{R-ljk(u-n)} \end{eqnarray} Notice that, by the definition \eqref{R-ljk(u-n)}, $R_{ljk} (u_{n}) = \emptyset$ for $j = k$. Then we can suppose in the sequel that $j \neq k$. We divide the estimate into several lemmata.
\begin{lemma}\label{risonanti-1} For $ \varepsilon \g^{-1}$ small enough,
for all $n \geq 1$, $|l|\leq N_n$, \begin{equation}\label{inclusione-1} R_{ljk}(u_{n}) \subseteq R_{ljk}(u_{n - 1}). \end{equation} \end{lemma}
\begin{pf} We claim that, for all $ j , k \in \mathbb Z $, \begin{equation}\label{marco}
|(\mu_{j}^{\infty} - \mu_{k}^{\infty})(u_{n})
- (\mu_{j}^{\infty} - \mu_{k}^{\infty})(u_{n-1})|
\leq C \varepsilon |j^{3} - k^{3}| N_n^{-\a} \, , \quad \forall \lambda \in {\cal G}_n \, , \end{equation} where $\mu_{j}^{\infty}(u_{n}) := \mu_{j}^{\infty}(\lambda, u_{n}(\lambda))$ and $ \alpha $ is defined in \eqref{alpha-beta}. Before proving \eqref{marco} we show how it implies \eqref{inclusione-1}.
For all $ j \neq k$, $|l| \leq N_{n}$, $\lambda \in \mathcal{G}_n$, by \eqref{marco} \begin{align*}
|{\rm i} \lambda \bar\omega \cdot l + \mu_{j}^{\infty}(u_{n}) - \mu_{k}^{\infty}(u_{n}) | & \geq
|{\rm i} \lambda \bar\omega \cdot l + \mu_{j}^{\infty}(u_{n-1}) - \mu_{k}^{\infty}(u_{n-1}) |
- |(\mu_{j}^{\infty}- \mu_{k}^{\infty})(u_{n}) - (\mu_{j}^{\infty}- \mu_{k}^{\infty})(u_{n - 1}) | \\ & \geq
2\gamma_{n - 1} |j^3 - k^3| \langle l \rangle^{-\tau} - C \varepsilon |j^3 - k^3| N_{n}^{-\a}
\geq 2\gamma_{n} |j^3 - k^3| \langle l \rangle^{-\tau} \end{align*} for $C \varepsilon \g^{-1} N_{n}^{\tau -\a}\, 2^{n+1} \leq 1$ (recall that $\gamma_n := \gamma(1 + 2^{-n})$), which implies \eqref{inclusione-1}. \\[1mm] {\sc Proof of \eqref{marco}.} By \eqref{espressione autovalori}, \begin{align} (\mu_{j}^{\infty}- \mu_{k}^{\infty})(u_{n}) - (\mu_{j}^{\infty}- \mu_{k}^{\infty})(u_{n - 1}) & = -{\rm i} \big[ m_3(u_{n}) - m_{3}(u_{n - 1}) \big] (j^{3} - k^{3}) +{\rm i} \big[ m_1(u_{n}) - m_1(u_{n-1}) \big] (j - k) \nonumber \\ & \quad + r_{j}^{\infty}(u_{n}) - r_{j}^{\infty}(u_{n - 1}) - \big( r_{k}^{\infty}(u_{n}) - r_{k}^{\infty}(u_{n-1}) \big) \label{vici} \end{align} where $ m_3 (u_{n}) := m_3(\lambda, u_{n}(\lambda))$ and similarly for $ m_1, r_{j}^{\infty}$. We first apply Theorem \ref{thm:abstract linear reducibility}-${\bf (S4)_{\nu}}$ with $ \nu = n + 1 $, $ \gamma= \g_{n-1} $, $ \gamma- \rho = \g_n $, and $ u_1 $, $ u_2 $, replaced, respectively, by $ u_{n-1} $, $ u_n $,
in order to conclude that \begin{equation}\label{primoste} \L_{n+1}^{\g_{n-1}} ( u_{n-1}) \subseteq \L_{n+1}^{\g_n} ( u_n ) \, . \end{equation} The smallness condition in \eqref{legno} is satisfied because $\s_2 < \mu$ (see definitions \eqref{alpha-beta}, \eqref{perdite stima tame linearizzato}) and so $$ \varepsilon C N_n^{\tau} \Vert u_n - u_{n - 1} \Vert_{\mathfrak s_0 + \s_2} \leq \varepsilon C N_n^{\tau} \Vert u_n - u_{n - 1} \Vert_{\mathfrak s_0 + \mu} \stackrel{\eqref{hn}}{\leq} \varepsilon^2 \g^{-1} C C_* N_{n}^{\tau - \sigma_1} \leq \gamma_{n-1} - \gamma_{n} =: \rho = \g 2^{-n } $$ for $\varepsilon \gamma^{-1}$ small enough, because $ \s_1 > \tau $ (see \eqref{hn}, \eqref{perdite stima tame linearizzato}). Then, by the definitions \eqref{G-n+1} and \eqref{Omegainfty}, we have $$ {\cal G}_{n} := {\cal G}_{n-1} \cap \L_{\infty}^{2 \g_{n-1}} (u_{n-1}) \stackrel{\eqref{cantorinclu}} \subseteq \bigcap_{\nu \geq 0} \Lambda_{\nu}^{\gamma_{n - 1}}(u_{n - 1}) \subset \Lambda_{n+1}^{\g_{n-1}}(u_{n-1}) \stackrel{\eqref{primoste}} \subseteq \Lambda_{n+1}^{\g_n}(u_n). $$ Next, for all $ \lambda \in {\cal G}_n \subset \Lambda_{n+1}^{\g_{n-1}}(u_{n-1}) \cap
\Lambda_{n+1}^{\g_n}(u_n) $ both $ r_j^{n+1} (u_{n-1}) $ and $ r_j^{n+1} (u_{n}) $ are well defined, and
we deduce by Theorem \ref{thm:abstract linear reducibility}-${\bf (S3)}_\nu $
with $ \nu = n+1 $, that \begin{equation}\label{vicin+1}
| r^{n+1}_j (u_n) - r_j^{n+1} (u_{n-1})| \stackrel{\eqref{Delta12 rj}}
\lessdot \varepsilon \| u_{n-1} - u_n \|_{\mathfrak s_0 + \s_2} \, . \end{equation} Moreover \eqref{autovcon} (with $ \nu = n+1 $) and \eqref{stima R 1} imply that \begin{eqnarray}\label{diffrkn}
| r_j^{\infty}(u_{n -1}) - r_j^{n + 1}(u_{n - 1})| +
|r_j^{\infty}(u_{n}) - r_j^{n + 1}(u_{n})| & \lessdot &
\varepsilon (1 + \| u_{n-1} \|_{\mathfrak s_0 + \beta + \s}+ \| u_n \|_{\mathfrak s_0 + \beta + \s}) N_{n}^{-\a} \nonumber \\ & \lessdot & \varepsilon N_{n}^{-\a} \end{eqnarray}
because $ \sigma + \beta < \mu $ and $ \| u_{n-1} \|_{\mathfrak s_0 + \mu} + $ $ \| u_n \|_{\mathfrak s_0 + \mu} \leq 2 $ by $ ({\cal P}1)_{n-1} $ and $ ({\cal P}1)_{n} $. Therefore, for all $\lambda \in {\cal G}_{n}$, $ \forall j \in \mathbb Z $, \begin{align}
\big| r_j^{\infty}(u_{n}) - r_j^{\infty}(u_{n - 1}) \big| & \leq
\big| r_j^{n + 1}(u_{n}) - r_j^{n + 1}(u_{n-1}) \big|
+ | r_j^{\infty}(u_{n}) - r_j^{n + 1}(u_{n})| + | r_j^{\infty}(u_{n -1}) - r_j^{n + 1}(u_{n - 1})| \nonumber\\ & \stackrel{\eqref{vicin+1}, \eqref{diffrkn}} \lessdot
\varepsilon \| u_n - u_{n - 1} \|_{\mathfrak s_0+ \s_2} + \varepsilon N_{n}^{-\a} \stackrel{\eqref{hn}} \lessdot \varepsilon N_{n}^{-\a} \label{variazione autovalori finali in u} \end{align} because $ \s_1 > \alpha $ (see \eqref{alpha-beta}, \eqref{hn}). Finally \eqref{vici}, \eqref{variazione autovalori finali in u}, \eqref{coefficienti costanti 2},
$\| u_n \|_{\mathfrak s_0 + \mu}\leq 1$, imply \eqref{marco}. \end{pf}
By definition, $ R_{ljk}(u_n) \subset {\cal G}_n $ (see \eqref{R-ljk(u-n)}) and, by \eqref{inclusione-1}, for all $ |l| \leq N_n $, we have $ R_{ljk} (u_n) \subseteq R_{ljk} (u_{n-1}) $. On the other hand $ R_{ljk}(u_{n-1}) \cap {\cal G}_n = \emptyset $, see \eqref{G-n+1}.
As a consequence, $ \forall |l| \leq N_n $, $ R_{ljk} (u_n) = \emptyset $, and \begin{equation}\label{parametri cattivi} {\cal G}_{n} \setminus{\cal G}_{n+1} \stackrel{\eqref{natale}}
\subseteq \bigcup_{|l|> N_{n}, j,k \in \mathbb Z} R_{ljk}(u_{n}) \, , \quad \forall n \geq 1. \end{equation}
\begin{lemma}\label{risonanti-2} Let $n \geq 0$.
If $R_{ljk}(u_{n}) \neq \emptyset$, then $|j^{3}-k^{3}| \leq 8 |\bar\omega \cdot l|$. \end{lemma}
\begin{pf} If $R_{ljk}(u_{n})\,\neq\,\emptyset$ then there exists $\lambda\in \Lambda$ such that
$ |{\rm i} \lambda\bar\o\cdot l + \mu_{j}^{\infty}(\lambda,u_{n}(\lambda))-
\mu_{k}^{\infty}(\lambda,u_{n}(\lambda)) | < $ $ 2 \gamma_{n} |j^{3}-k^{3} | \langle l \rangle^{-\tau} $ and, therefore, \begin{equation}\label{lambda-j-lambda-k}
|\mu_{j}^{\infty}(\lambda,u_{n}(\lambda)) - \mu_{k}^{\infty}(\lambda,u_{n}(\lambda)) |
< 2\gamma_{n} |j^{3}-k^{3}| \langle l \rangle^{-\tau}\, + 2 |\bar\o\cdot l|. \end{equation} Moreover, by \eqref{espressione autovalori}, \eqref{coefficienti costanti 1}, \eqref{autofinali}, for $\varepsilon$ small enough, \begin{equation}\label{lower-bound-jk}
|\mu_{j}^{\infty} - \mu_{k}^{\infty}|
\geq |m_3| |j^{3}-k^{3}| - |m_1| |j-k| - |r_j^{\infty}| - |r_k^{\infty}| \geq
\frac{1}{2} |j^{3}-k^{3}| - C \varepsilon |j - k| - C \varepsilon
\geq \frac{1}{3} |j^{3}-k^{3}| \end{equation} if $j \neq k$. Since $\g_n \leq 2\g$ for all $n \geq 0$, $\gamma\leq 1/ 48$, by \eqref{lambda-j-lambda-k} and \eqref{lower-bound-jk} we get $$
2 |\bar\omega \cdot l|
\geq \Big(\frac13 -\frac{4\gamma}{\langle l \rangle^{\tau}} \Big) |j^{3}-k^{3}|\
\geq \frac14 |j^{3}-k^{3}| $$ proving the lemma. \end{pf}
\begin{lemma}\label{risonanti-3} For all $n \geq 0$, \begin{equation}\label{stima-risonanti}
|R_{ljk}(u_{n})| \leq C \gamma \left\langle l\right\rangle^{-\tau}. \end{equation} \end{lemma}
\begin{pf} Consider the function $ \phi : \Lambda \to \mathbb C$ defined by \begin{eqnarray} \phi(\l) & := & {\rm i} \lambda \bar\o\cdot l +\mu_{j}^{\infty}(\l)- \mu_{k}^{\infty}(\l) \nonumber \\ & \stackrel{\eqref{espressione autovalori}} = &{\rm i} \lambda \bar\o\cdot l - {\rm i} {\tilde m}_3(\lambda)(j^{3}-k^{3}) +
{\rm i} {\tilde m}_1(\lambda)(j-k) + r_{j}^{\infty}(\lambda)- r_{k}^{\infty}(\lambda) \nonumber \end{eqnarray} where $ {\tilde m}_3 (\l) $, $ {\tilde m}_1 (\l) $, $ r^{\infty}_j (\l) $, $ \mu_{j}^{\infty}(\l)$, are defined for all $ \lambda \in \L$ and satisfy \eqref{autofinali}
by $ \| u_n \|^{{\rm{Lip}(\g)}}_{\mathfrak s_0 + \mu, \mathcal{G}_n} \leq 1$ (see $({\cal P}1)_n$).
Recalling $ | \cdot |^{\rm lip} \leq \g^{-1} | \cdot |^{\rm{Lip}(\g)} $ and using \eqref{autofinali} \begin{equation}\label{31}
| \mu_{j}^{\infty} - \mu_{k}^{\infty} |^{\rm lip} \leq |{\tilde m}_3|^{\rm lip} |j^{3} - k^{3}|
+ | {\tilde m}_1|^{\rm lip} |j - k| + |r_{j}^{\infty}|^{\rm lip} + |r_{k}^{\infty}|^{\rm lip} \leq C \varepsilon \g^{-1} |j^{3} - k^{3}|\,. \end{equation} Moreover Lemma \ref{risonanti-2} implies that, $\forall \l_1, \l_2 \in \Lambda $, $$
|\phi(\l_1) - \phi(\l_2)| \geq \big( |\bar{\o} \cdot l| - |\mu_j^{\infty} - \mu_k^{\infty}|^{\rm lip} \big) |\l_1 - \l_2|
\stackrel{\eqref{31}} \geq \big(\frac18 - C \varepsilon\g^{-1} \big)|j^3 - k^3| |\l_1 - \l_2| \geq \frac{|j^3 - k^3|}{9} |\l_1 - \l_2| $$ for $\varepsilon\gamma^{-1}$ small enough. Hence $$
|R_{ljk}(u_n)| \leq \frac{4 \g_n|j^{3}-k^3|}{\langle l \rangle^{\t}} \frac{9}{|j^3 - k^3|} \leq \frac{72 \g}{\langle l \rangle^{\t}} \,, $$ which is \eqref{stima-risonanti}. \end{pf}
Now we prove $(\mP4)_0$.
We observe that, for each fixed $l$, all the indices $j,k$ such that $R_{ljk}(0) \neq \emptyset$ are confined in the ball $j^2 + k^2 \leq 16 |\bar\omega| |l|$, because $$
|j^3 - k^3| = |j-k| |j^2+jk+k^2|
\geq j^2 + k^2 - |jk| \geq \frac12\, (j^2 + k^2) \, , \quad \forall j,k \in \mathbb Z, \ j \neq k, $$
and $|j^{3}-k^{3}| \leq 8 |\bar\o| |l|$ by Lemma \ref{risonanti-2}. As a consequence $$
| {\cal G}_0 \setminus {\cal G}_1 | \stackrel{\eqref{natale}} =
\Big| \bigcup_{l,j,k} R_{ljk}(0) \Big|
\leq \sum_{l \in \mathbb Z^\nu} \sum_{j^2 + k^2 \leq 16 |\bar\omega| |l|} |R_{ljk}(0)| \stackrel{\eqref{stima-risonanti}} \lessdot \sum_{l \in \mathbb Z^\nu} \gamma \langle l \rangle^{-\tau+1} \leq C \gamma $$ since $\tau > \nu + 1$. Thus the first estimate in \eqref{Gmeasure} is proved, taking a larger $C_*$ if necessary.
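The elementary inequality $ |j^3 - k^3| \geq \frac12 (j^2 + k^2) $ for integers $ j \neq k $, used to confine the indices, can be confirmed by brute force on a window of integers:

```python
# brute-force check of |j^3 - k^3| >= (j^2 + k^2)/2 on a window of integers
for j in range(-30, 31):
    for k in range(-30, 31):
        if j != k:
            assert abs(j ** 3 - k ** 3) >= (j * j + k * k) / 2
```

The window is of course finite, but the factorization $ |j^3-k^3| = |j-k| \, |j^2+jk+k^2| $ with $ |j-k| \geq 1 $ and $ |jk| \leq \frac12 (j^2+k^2) $ gives the inequality for all integers.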
Finally, $(\mP4)_n$ for $n \geq 1$, follows by \begin{eqnarray*}
|{\cal G}_{n } \setminus {\cal G}_{n+1}| & \stackrel{\eqref{parametri cattivi}} \leq
& \sum_{|l|> N_{n}, \, |j|,|k|\leq C |l|^{1/2}} |R_{ljk}(u_{n})| \stackrel{\eqref{stima-risonanti}} \lessdot
\sum_{|l|> N_{n}, \, |j|,|k|\leq C|l|^{1/2}} \gamma\langle l \rangle^{-\t} \\ & \lessdot
& \sum_{|l| > N_n} \gamma\langle l \rangle^{-\tau + 1} \lessdot \g N_{n}^{-\tau + \nu + 1} \leq C \g N_{n}^{-1} \end{eqnarray*} and \eqref{Gmeasure} is proved. The proof of Theorem \ref{iterazione-non-lineare} is complete. \end{pf}
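The decay in $ N_n $ of the tail of the lattice sum can also be observed numerically. The sketch below uses the sup-norm on $ \mathbb Z^\nu $ with $ \langle l \rangle := 1 + |l|_\infty $ and the illustrative choices $ \nu = 2 $, $ \tau = \nu + 2 $ (assumptions of this check):

```python
# tail of the lattice sum sum_{|l| > N} <l>^{-tau+1} over l in Z^nu,
# with |l| the sup-norm and <l> := 1 + |l|; nu = 2, tau = nu + 2 = 4
# are illustrative choices (assumptions of this sketch)
nu, tau = 2, 4

def tail(N, L=400):
    # group lattice points into shells |l| = m; truncate the sum at m = L
    return sum(((2 * m + 1) ** nu - (2 * m - 1) ** nu) * (1 + m) ** (-tau + 1)
               for m in range(N + 1, L + 1))

# O(1/N) decay for this choice of tau = nu + 2
for N in (5, 10, 20):
    assert tail(N) <= 10.0 / N
assert tail(20) < tail(10) < tail(5)
```

The shell count $ (2m+1)^\nu - (2m-1)^\nu = O(m^{\nu-1}) $ is what converts the one-parameter sum into the stated power of $ N_n $.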
\subsection{Proof of Theorems \ref{thm:main}, \ref{thm:mainH}, \ref{thm:mainrev}, \ref{thm:reducibility} and \ref{cor:stab}}\label{sec:proof}
\textsc{Proof of Theorems \ref{thm:main}, \ref{thm:mainH}, \ref{thm:mainrev}.} Assume that $ f \in C^q $ satisfies the assumptions in Theorem \ref{thm:main} or in Theorem \ref{thm:mainrev} with a smoothness exponent $ q := q(\nu) \geq \mathfrak s_0 + \mu + \b_1 $ which depends only on $ \nu $ once we have fixed $ \tau := \nu + 2 $ (recall that $ \mathfrak s_0 := (\nu + 2 ) \slash 2 $, $ \b_1 $ is defined in \eqref{sigma-beta-kappa} and $ \mu $ in \eqref{perdite stima tame linearizzato}).
For $ \gamma= \varepsilon^a $, $ a \in (0,1) $ the smallness condition $ \varepsilon \g^{- 1} = \varepsilon^{1- a} < \delta $ of Theorem \ref{iterazione-non-lineare} is satisfied. Hence on the Cantor set ${\cal G}_{\infty} := \cap_{n \geq 0} {\cal G}_{n} $, the sequence $ u_{n}(\l) $ is well defined and
converges in norm $ \|\cdot \|_{\mathfrak s_{0}+\mu, \mathcal{G}_\infty}^{{\rm{Lip}(\g)}}$ (see \eqref{hn}) to a solution $u_{\infty}(\l)$ of $$
F(\lambda, u_\infty(\lambda)) = 0 \quad {\rm with} \quad \sup_{\lambda \in {\cal G}_\infty} \| u_\infty (\l) \|_{\mathfrak s_0 + \mu} \leq C \varepsilon \g^{-1} = C \varepsilon^{1-a} \, , $$ namely $u_{\infty}(\l)$ is a solution of the perturbed KdV equation \eqref{eq:invqp} with $ \omega = \lambda \bar \omega $. Moreover, by \eqref{Gmeasure}, the measure of the complementary set satisfies \[
|\Lambda \setminus {\cal G}_{\infty}|
\leq \sum_{n \geq 0}|{\cal G}_{n } \setminus {\cal G}_{n+1} | \leq C \gamma + \sum_{n \geq 1} \g C N_{n}^{-1} \leq C \gamma = C \varepsilon^{a} \, , \] proving \eqref{Cmeas}. The proof of Theorem \ref{thm:main} is complete. In order to finish the proof of Theorems \ref{thm:mainH} or \ref{thm:mainrev}, it remains to prove the linear stability of the solution, namely Theorem \ref{cor:stab}.
\noindent \textsc{Proof of Theorem \ref{thm:reducibility}.} Part $(i)$ follows by \eqref{L-coniugato}, Lemma \ref{stime-tame-coniugio}, Theorem \ref{teoremadiriducibilita} (applied to the solution $ u_\infty (\l) $) with the exponents $ \bar \sigma := \sigma + \beta + 3 $, $ \L_\infty (u) := \L_\infty^{2\g} (u) $, see \eqref{Omegainfty}. Part $(ii)$ follows by the dynamical interpretation of the conjugation procedure, as explained in Section \ref{sec: dyn redu}. Explicitly, in Sections \ref{sec:regu} and \ref{sec:redu}, we have proved that \[ \mathcal{L} = {\cal A} B \rho W \mathcal{L}_\infty W^{-1} B^{-1} {\cal A}^{-1}, \quad W := {\cal M} {\cal T} \mathcal{S} \Phi_\infty \, . \] By the arguments in Section \ref{sec: dyn redu} we deduce that a curve $h(t)$ in the phase space $H^s_x$ is a solution of the dynamical system \eqref{KdV:lin} if and only if the transformed curve \begin{equation}\label{vh} v(t) := W^{-1}(\omega t) B^{-1} {\cal A}^{-1}(\omega t) h(t) \end{equation} (see notation \eqref{notationA}, Lemma \ref{lemma:stime stabilita Phi 12}, \eqref{Phi infty (ph) - I}) is a solution of the constant-coefficient dynamical system \eqref{Lin: Red}.
\noindent \textsc{Proof of Theorem \ref{cor:stab}.} If all $ \mu_j $ are purely imaginary, the Sobolev norm of the solution $ v(t) $ of \eqref{Lin: Red} is constant in time, see \eqref{constant v}. We now show that also the Sobolev norm of the solution $ h(t) $ in \eqref{vh} does not grow in time. For each $t \in \mathbb R$, $ {\cal A}(\omega t) $ and $W(\omega t)$ are transformations of the phase space $H^s_x $ that depend quasi-periodically on time, and satisfy, by \eqref{A(ph)}, \eqref{Phi mM mS (ph)}, \eqref{Phi infty (ph) - I}, \begin{equation} \label{nuova carla}
\| {\cal A}^{\pm 1}(\omega t) g \|_{H^s_x}
+ \| W^{\pm 1}(\omega t) g \|_{H^s_x}
\leq C(s) \| g \|_{H^s_x} \, , \quad \forall t \in \mathbb R, \ \forall g = g(x) \in H^s_x,
\end{equation} where the constant $C(s)$ depends on $\| u \|_{s + \sigma + \beta + \mathfrak s_0} < + \infty $. Moreover, the transformation $B$ is a quasi-periodic reparametrization of the time variable (see \eqref{time repar}), namely \begin{equation} \label{B t tau} Bf(t) = f(\psi(t)) = f(\t), \quad B^{-1}f(\t) = f(\psi^{-1}(\t)) = f(t) \quad \forall f : \mathbb R \to H^s_x, \end{equation} where $\tau = \psi(t) := t + \a(\omega t)$, $t = \psi^{-1}(\t) = \tau + \tilde \a(\omega \t)$ and $\a$, $\tilde\a$ are defined in Section \ref{step-2}. Thus
\begin{align*}
\| h(t) \|_{H^s_x} & \stackrel{\eqref{vh}} =
\| {\cal A}(\omega t) B W(\omega t) v (t) \|_{H^s_x} \stackrel{\eqref{nuova carla}} \leq
C(s) \| B W(\omega t) v (t) \|_{H^s_x}
\stackrel{\eqref{B t tau}} = C(s) \| W(\omega \t) v (\t) \|_{H^s_x} \\ &
\stackrel{\eqref{nuova carla}} \leq C(s) \| v (\t) \|_{H^s_x}
\stackrel{\eqref{constant v}} = C(s) \| v (\t_0) \|_{H^s_x}
\stackrel{\eqref{vh}} = C(s) \| W^{-1}(\omega \t_0) B^{-1} {\cal A}^{-1}(\omega \t_0) h(\t_0) \|_{H^s_x} \\ &
\stackrel{\eqref{nuova carla}} \leq C(s) \| B^{-1} {\cal A}^{-1}(\omega \t_0) h(\t_0) \|_{H^s_x}
\stackrel{\eqref{B t tau}} = C(s) \| {\cal A}^{-1}(0) h(0) \|_{H^s_x}
\stackrel{\eqref{nuova carla}} \leq C(s) \| h(0) \|_{H^s_x} \end{align*} having chosen $\t_0 := \psi(0) = \a(0)$ (in the reversible case, $\a$ is an odd function, and so $\a(0) = 0$). Hence \eqref{stability s} is proved. To prove \eqref{stability epsilon}, we collect the estimates \eqref{A(ph)-I}, \eqref{Phi mM mS (ph) - I}, \eqref{Phi infty (ph) - I} into \begin{equation} \label{nuova rosa}
\| ({\cal A}^{\pm 1}(\omega t) - I) g \|_{H^s_x}
+ \| (W^{\pm 1}(\omega t) - I) g \|_{H^s_x}
\leq \varepsilon \g^{-1} C(s) \| g \|_{H^{s+1}_x} \,, \quad \forall t \in \mathbb R, \ \forall g \in H^s_x,
\end{equation} where the constant $C(s)$ depends on $\| u \|_{s + \sigma + \beta + \mathfrak s_0}$. Thus \begin{eqnarray*}
\| h(t) \|_{H^s_x}
& \!\!\!\!\!\! \stackrel{\eqref{vh}} = \!\!\!\!\!\! & \| {\cal A}(\omega t) B W(\omega t) v (t) \|_{H^s_x} \leq
\| B W(\omega t) v (t) \|_{H^s_x} + \| ({\cal A}(\omega t)-I) B W(\omega t) v (t) \|_{H^s_x} \\
& \!\!\!\!\!\! \stackrel{\eqref{B t tau}\eqref{nuova rosa}} \leq \!\!\! \!\!\! & \| W(\omega \t) v(\t) \|_{H^s_x}
+ \varepsilon \g^{-1} C(s) \| B W(\omega t) v (t) \|_{H^{s+1}_x} \\
& \!\!\! \!\!\! \stackrel{\eqref{B t tau}} = \!\!\!\!\!\! & \| W(\omega \t) v(\t) \|_{H^s_x}
+ \varepsilon \g^{-1} C(s) \| W(\omega \t) v (\t) \|_{H^{s+1}_x} \\
& \!\!\!\!\!\! \stackrel{\eqref{nuova carla}} \leq \!\!\! \!\!\! & \| v(\t) \|_{H^s_x} + \| (W(\omega \t) - I) v(\t) \|_{H^s_x}
+ \varepsilon \g^{-1} C(s) \| v (\t) \|_{H^{s+1}_x} \\
& \!\!\! \!\!\! \stackrel{\eqref{nuova rosa}} \leq \!\!\!\!\!\! & \| v(\t) \|_{H^s_x} + \varepsilon \g^{-1} C(s) \| v(\t) \|_{H^{s+1}_x}
\stackrel{\eqref{constant v}} = \| v(\t_0) \|_{H^s_x} + \varepsilon \g^{-1} C(s) \| v(\t_0) \|_{H^{s+1}_x} \\
& \!\!\!\!\!\! \stackrel{\eqref{vh}} = \!\!\!\!\!\! & \| W^{-1}(\omega \t_0) B^{-1} {\cal A}^{-1}(\omega \t_0) h(\t_0) \|_{H^s_x}
+ \varepsilon \g^{-1} C(s) \| W^{-1}(\omega \t_0) B^{-1} {\cal A}^{-1}(\omega \t_0) h(\t_0) \|_{H^{s+1}_x} \, . \end{eqnarray*} Applying the same chain of inequalities at $ \tau = \t_0 $, $ t = 0 $, we get that the last term is $$
\leq \| h(0) \|_{H^s_x} + \varepsilon \g^{-1} C(s) \| h(0) \|_{H^{s+1}_x} \, , $$ proving the second inequality in \eqref{stability epsilon} with $ \mathtt a := 1 - a $. The first one follows similarly.
\section{Appendix A. General tame and Lipschitz estimates}
In this Appendix we present standard tame and Lipschitz estimates for composition of functions and changes of variables which are used in the paper. Similar material is contained in
\cite{Hormander-geodesy}, \cite{Ioo-Plo-Tol}, \cite{Berti-Bolle-Ck-Nodea},
\cite{Baldi-Benj-Ono}.
We first recall the classical embedding, algebra, interpolation and tame estimates in the Sobolev spaces $ H^s := H^{s}(\mathbb T^d,\mathbb C) $ and $ W^{s, \infty} := W^{s, \infty}(\mathbb T^d,\mathbb C) $, $ d \geq 1 $.
\begin{lemma} \label{lemma:standard Sobolev norms properties}
Let
$ s_0 > d/2$. Then \\
$(i)$ {\bf Embedding.} $\| u \|_{L^\infty} \leq C(s_0) \| u \|_{s_0}$ for all $u \in H^{s_0} $. \\
$(ii)$ {\bf Algebra.} $\| uv \|_{s_0} \leq C(s_0) \| u \|_{s_0} \| v \|_{s_0}$ for all $u, v \in H^{s_0}$. \\ $(iii)$ {\bf Interpolation.} For $0 \leq s_1 \leq s \leq s_2$, $s = \lambda s_1 + (1-\lambda) s_2$, \begin{equation} \label{interpolation GN}
\| u \|_{s} \leq \| u \|_{s_1}^\lambda \| u \|_{s_2}^{1-\lambda} \, , \quad \forall u \in H^{s_2} \, . \end{equation} Let $ a_0, b_0 \geq 0$ and $ p,q > 0 $. For all $ u \in H^{a_0 + p + q} $, $ v \in H^{b_0 + p + q} $, \begin{equation} \label{interpolation estremi fine}
\| u \|_{a_0 + p} \| v \|_{b_0 + q} \leq
\| u \|_{a_0 + p + q} \| v \|_{b_0}
+ \| u \|_{a_0} \| v \|_{b_0 + p + q} \, .
\end{equation} Similarly, for the $|u|_{s, \infty} := \sum_{|\b| \leq s} | D^\b u |_{L^\infty} $ norm, \begin{equation} \label{interpolation GN-s}
| u |_{s, \infty} \leq C(s_1, s_2) | u |_{s_1, \infty}^\lambda | u |_{s_2, \infty}^{1-\lambda} \, , \quad \forall u \in W^{s_2, \infty} \, . \end{equation} and $ \forall u \in W^{a_0 + p + q, \infty} $, $ v \in W^{b_0 + p + q, \infty} $, \begin{equation} \label{interpolation estremi fine-s}
| u |_{a_0 + p, \infty} | v |_{b_0 + q, \infty} \leq C(a_0, b_0, p,q)\big(
| u |_{a_0 + p + q, \infty} | v |_{b_0, \infty}
+ | u |_{a_0, \infty} | v |_{b_0 + p + q, \infty}\big) \, . \end{equation} \noindent $(iv)$ {\bf Asymmetric tame product.} For $s \geq s_0$, \begin{equation} \label{asymmetric tame product}
\| uv \|_s \leq C(s_0) \|u\|_s \|v\|_{s_0} + C(s) \|u\|_{s_0} \| v \|_s \, , \quad \forall u,v \in H^s \, . \end{equation}
\noindent $(v)$ {\bf Asymmetric tame product in $W^{s,\infty}$.} For $s \geq 0$, $s \in \mathbb N$, \begin{equation} \label{tame product infty}
| uv |_ {s, \infty} \leq \tfrac32 \, | u |_ {L^\infty} |v|_ {s, \infty} + C(s) |u|_ {s, \infty} |v|_ {L^\infty} \, , \quad \forall u,v \in W^{s,\infty} \, . \end{equation}
\noindent $(vi)$ {\bf Mixed norms asymmetric tame product.} For $s \geq 0$, $s \in \mathbb N$, \begin{equation} \label{mixed norms tame product}
\| uv \|_s \leq \tfrac32 \, |u|_ {L^\infty} \| v \|_s + C(s)| u |_ {s, \infty} \| v \|_0 \, , \quad \forall u \in W^{s,\infty} \, , \ v \in H^s \, . \end{equation} If $u := u(\l)$ and $v := v(\l)$ depend in a Lipschitz way on $\lambda \in \Lambda \subset \mathbb R $, all the previous statements hold if we replace the norms $\Vert \cdot \Vert_s$, $\vert \cdot \vert_ {s, \infty} $ with the norms $\Vert \cdot \Vert_s^{{\rm{Lip}(\g)}}$, $\vert \cdot \vert_ {s, \infty}^{{\rm{Lip}(\g)}}$.
\end{lemma}
\begin{pf} The interpolation estimate \eqref{interpolation GN} for the Sobolev norm \eqref{Hs1} follows by H\"older inequality, see also \cite{Moser-Pisa-66}, page 269. Let us prove \eqref{interpolation estremi fine}. Let $ a = a_0 \lambda + a_1 (1-\lambda) $, $ b = b_0 (1-\lambda) + b_1 \lambda $, $ \lambda \in [0,1] $. Then \eqref{interpolation GN} implies \begin{equation} \label{ulti2}
\| u \|_a \| v \|_b \leq
\big( \| u \|_{a_0} \| v \|_{b_1} \big)^{\lambda}
\big( \| u \|_{a_1} \| v \|_{b_0} \big)^{1-\lambda} \leq \lambda \| u \|_{a_0} \| v \|_{b_1} + (1- \l) \| u \|_{a_1} \| v \|_{b_0} \end{equation} by Young inequality. Applying \eqref{ulti2} with $ a = a_0 + p $, $ b = b_0 + q $, $ a_1 = a_0 + p + q $, $ b_1 = b_0 + p + q $, then $ \lambda = q \slash (p+q) $ and we get \eqref{interpolation estremi fine}. Also the interpolation estimates
\eqref{interpolation GN-s} are classical (see e.g. \cite{Hormander-geodesy}, \cite{Berti-Bolle-Procesi-AIHP-2010})
and \eqref{interpolation GN-s} implies \eqref{interpolation estremi fine-s} as above.
$(iv)$: see the Appendix of \cite{Berti-Bolle-Procesi-AIHP-2010}. $(v)$: we write, in the standard multi-index notation, \begin{equation} \label{derivate pure} D^\a(uv) = \sum_{\b+\gamma= \a} C_{\b, \g} (D^\b u) (D^\g v) = u D^\a v + \sum_{\b+\gamma= \a, \beta \neq 0} C_{\b, \g} (D^\b u) (D^\g v) \, . \end{equation}
Using $ |(D^\b u)(D^\g v)|_ {L^\infty} \leq |D^\b u|_ {L^\infty} |D^\g v|_ {L^\infty} \leq |u|_{|\b|, \infty} |v|_{|\g|, \infty} $, and the interpolation inequality \eqref{interpolation GN-s} for every $ \beta \neq 0$
with $\lambda := |\b| / |\a| \in (0,1]$ (where $ |\a| \leq s $), we get, for any $K > 0$, \begin{align}
C_{\b, \g} |D^\b u|_ {L^\infty} |D^\g v|_ {L^\infty} & \leq C_{\b, \g} C(s)
\big( |v|_{L^\infty} |u|_ {s, \infty} \big)^{\lambda} \big( |v|_ {s, \infty} |u|_ {L^\infty} \big)^{1 - \lambda} \nonumber \\ & =
\frac{C(s)}{K} \big[ (K C_{\b, \g})^{\frac{1}{\lambda}} |v|_ {L^\infty} |u|_ {s, \infty} \big]^{\lambda}
\big( |v|_ {s, \infty} |u|_ {L^\infty} \big)^{1 - \lambda} \nonumber \\ & \leq \frac{C(s)}{K} \,
\big\{ (K C_{\b, \g})^{\frac{|\a|}{|\b|}} |v|_ {L^\infty} |u|_ {s, \infty} \, + \, |v|_ {s, \infty} |u|_ {L^\infty} \big\}. \label{riga3} \end{align} Then \eqref{tame product infty} follows by \eqref{derivate pure}, \eqref{riga3} taking $ K := K(s) $ large enough. $(vi)$: same proof as $(v)$, using the elementary inequality
$\|(D^\b u)(D^\g v) \|_0 \leq |D^\b u|_{L^\infty} \| D^\g v \|_0 $. \end{pf}
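For the reader's convenience, the Young inequality used in \eqref{ulti2} is the elementary convexity bound: for $ x, y > 0 $ and $ \lambda \in [0,1] $,
$$
x^{\lambda} y^{1-\lambda} = e^{\lambda \log x + (1-\lambda) \log y} \leq \lambda \, e^{\log x} + (1-\lambda) \, e^{\log y} = \lambda x + (1-\lambda) y \, ,
$$
by the convexity of $ t \mapsto e^t $ (the case $ xy = 0 $ being trivial).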
We now recall classical tame estimates for composition of functions, see \cite{Moser-Pisa-66}, section 2, pages 272--275, and \cite{Rabinowitz-tesi-1967}-I, Lemma 7 in the Appendix, pages 202--203.
A function $ f : \mathbb T^d \times B_1 \to \mathbb C $, where $B_1 := \{ y \in \mathbb R^m : |y| < 1\} $, induces the composition operator \begin{equation}\label{comp} \tilde f(u)(x) := f(x,u(x),Du(x),\ldots,D^p u(x))
\end{equation} where $D^k u(x)$ denotes the partial derivatives $\partial_x^\a u(x)$ of order $|\a|=k$ (the number $ m $ of $ y $-variables depends on $p, d $).
\begin{lemma} {\bf (Composition of functions)} \label{lemma:composition of functions, Moser} Assume $ f \in C^r (\mathbb T^d \times B_1)$. Then
$(i)$
For all $ u \in H^{r+p} $ such that $ |u |_{p, \infty} < 1 $, the composition operator \eqref{comp} is well defined and \[
\| \tilde f(u) \|_r
\leq C \| f \|_{C^r} (\|u\|_{r+p} + 1) \] where the constant $C $ depends on $ r,d,p $. If $ f \in C^{r+2} $,
then, for all $ u, h $ with $ |u|_{p, \infty} < 1/2 $ and $ | h |_{p, \infty} < 1 / 2 $, \begin{align*}
\big\| \tilde f(u+h) - \tilde f (u) \big\|_r
& \leq C \| f \|_{C^{r+1}} \, ( \| h \|_{r+p} + | h |_{p,\infty} \| u \|_{r+p}) \, , \\
\big\| \tilde f(u+h) - \tilde f (u) - \tilde f'(u) [h] \big\|_r
& \leq C \| f \|_{C^{r+2}} \, | h |_{p,\infty} ( \| h \|_{r+p} + | h |_{p,\infty} \| u \|_{r+p}) \, . \end{align*} $(ii)$ The previous statement also holds replacing
$\| \ \|_r$
with the norms $| \ |_{r, \infty} $. \end{lemma}
\begin{lemma} {\bf (Lipschitz estimate on parameters)} \label{lemma:Lip generale}
Let $d \in \mathbb N$, $d/2 < s_0 \leq s$, $p \geq 0$, $\g>0$. Let $ F $ be a $ C^1 $-map satisfying the tame estimates: $ \forall \| u \|_{s_0+p} \leq 1 $, $ h \in H^{s+p} $, \begin{align} \label{aux tame 1}
\| F(u) \|_s
& \leq C(s) (1 + \| u \|_{s+p}) \, , \\ \label{aux tame 2}
\| \partial_u F(u)[h] \|_s
& \leq C(s) (\| h \|_{s+p} + \| u \|_{s+p} \| h \|_{s_0 + p} ) \, . \end{align} For $\Lambda \subset \mathbb R $, let $ u(\lambda) $ be a Lipschitz family of functions with
$ \| u \|_{s_0 +p}^{{\rm{Lip}(\g)}} \leq 1 $ (see \eqref{def norma Lipg}). Then \[
\| F(u) \|_s^{\rm{Lip}(\g)} \leq C(s) \big( 1 + \| u \|_{s+p}^{\rm{Lip}(\g)} \big). \]
The same statement also holds when all the norms $\| \ \|_s $ are replaced by $| \ |_{s, \infty} $. \end{lemma}
\begin{pf}
By \eqref{aux tame 1} we get $ \sup_\lambda \| F(u(\lambda)) \|_s \leq
C(s) ( 1 + \| u \|_{s+p}^{{\rm{Lip}(\g)}}) $. Then, denoting $ u_1 := u(\lambda_1)$, $ u_2 := u(\lambda_2) $ and $h := u_2 - u_1$, we have \begin{align*}
\| F(u_2) - F(u_1) \|_s &
\leq \int_0^1 \| \partial_u F(u_1 + t (u_2-u_1))[h] \, \|_s \, dt
\\ & \stackrel{\eqref{aux tame 2}} \leq_s \| h \|_{s+p}
+ \| h \|_{s_0 + p} \int_0^1 \big( (1-t) \| u(\lambda_1) \|_{s+p} + t \| u(\lambda_2) \|_{s+p} \big) \, dt \end{align*} whence \begin{align*} \gamma\, \sup_{\begin{subarray}{c} \lambda_1, \lambda_2 \in \Lambda \\ \lambda_1 \neq \lambda_2 \end{subarray}}
\frac{\| F(u(\lambda_1)) - F(u(\lambda_2)) \|_{s}}{|\lambda_1 - \lambda_2|} \, &
\leq_s \| u \|_{s+p}^{\rm{Lip}(\g)}
+ \| u \|_{s_0 + p}^{\rm{Lip}(\g)} \sup_{\lambda_1, \lambda_2} \big( \, \| u(\lambda_1) \|_{s+p} + \, \| u(\lambda_2) \|_{s+p} \big) \\
& \leq_s \| u \|_{s+p}^{\rm{Lip}(\g)}
+ \| u \|_{s_0 + p}^{\rm{Lip}(\g)} \| u \|_{s+p}^{\rm{Lip}(\g)}
\leq C(s) \| u \|_{s+p}^{\rm{Lip}(\g)} \, , \end{align*}
because $ \| u \|_{s_0 +p}^{{\rm{Lip}(\g)}} \leq 1 $, and the lemma follows. \end{pf}
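Note that the previous proof bounds separately the two terms of the weighted norm $ \| F(u) \|_s^{\rm{Lip}(\g)} $, which, according to \eqref{def norma Lipg}, is the sum of the sup norm and of the $\g$-weighted Lipschitz seminorm,
$$
\sup_{\lambda \in \Lambda} \| F(u(\lambda)) \|_s \; + \; \gamma \sup_{\begin{subarray}{c} \lambda_1, \lambda_2 \in \Lambda \\ \lambda_1 \neq \lambda_2 \end{subarray}} \frac{\| F(u(\lambda_1)) - F(u(\lambda_2)) \|_{s}}{|\lambda_1 - \lambda_2|} \, :
$$
the first summand is controlled by \eqref{aux tame 1} and the second one by \eqref{aux tame 2}.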
The next lemma is also classical, see for example \cite{Hormander-geodesy}, Appendix, and \cite{Ioo-Plo-Tol}, Appendix G. The present version is proved in \cite{Baldi-Benj-Ono}, adapting Lemma 2.3.6 on page 149 of \cite{Hamilton}, except for the part on the Lipschitz dependence on a parameter, which is proved here below.
\begin{lemma} {\bf (Change of variable)} \label{lemma:utile} Let $p:\mathbb R^d \to \mathbb R^d$ be a $2\pi$-periodic function in $W^{s,\infty}$, $ s \geq 1$, with
$ |p|_{1, \infty} \leq 1/2 $. Let $f(x) = x + p(x)$. Then:
$(i)$ $f$ is invertible, its inverse is $f^{-1}(y) = g(y) = y + q(y)$ where $q$ is $ 2 \pi $-periodic, $q \in W^{s,\infty}(\mathbb T^d,\mathbb R^d)$, and
$|q|_{s, \infty} \leq C |p|_{s, \infty} $. More precisely, \begin{equation} \label{stime-q}
| q |_{L^\infty} = | p |_{L^\infty}, \quad
| Dq |_{L^\infty} \leq 2 | Dp |_{L^\infty} , \quad
| Dq |_{s-1, \infty} \leq C | Dp |_{s-1, \infty} \, , \end{equation} where the constant $C$ depends on $d, s$.
Moreover, suppose that $p = p_\lambda$ depends in a Lipschitz way on a parameter $\lambda \in \Lambda \subset \mathbb R $, and suppose, as above, that
$|D_x p_\lambda|_ {L^\infty} \leq 1/2$ for all $\lambda$. Then $q = q_\lambda$ is also Lipschitz in $\lambda$, and \begin{equation}\label{stime-lipschitz-q}
|q|_{s, \infty}^{{\rm{Lip}(\g)}}
\leq C \Big( |p|_ {s, \infty} ^{{\rm{Lip}(\g)}} + \big\{ \sup_{\lambda \in \L} |p_\lambda|_{s+1, \infty} \big\} \, |p|_{L^\infty}^{{\rm{Lip}(\g)}} \Big)
\leq C |p|_{s+1, \infty}^{{\rm{Lip}(\g)}} \, . \end{equation} The constant $C$ depends on $d, s $ (and is independent of $\g$).
$(ii)$ If $u \in H^s (\mathbb T^d,\mathbb C)$, then $u\circ f(x) = u(x+p(x))$ is also in $H^s $, and, with the same $C$ as in $(i)$, \begin{align}
\| u \circ f \|_s
& \leq C (\|u\|_s + |Dp|_{s-1, \infty} \|u\|_1), \label{tame-cambio-di-variabile} \\
\| u \circ f - u \|_s
& \leq C \big( | p |_{L^\infty} \| u \|_{s + 1} + |p|_{s, \infty} \| u \|_{2} \big) , \label{cambio di variabile meno identita} \\
\| u \circ f \|_{s}^{{\rm{Lip}(\g)}} & \leq C \,
\big( \| u \|_{s+1}^{{\rm{Lip}(\g)}} + |p|_{s, \infty}^{{\rm{Lip}(\g)}} \| u \|_2^{{\rm{Lip}(\g)}} \big). \label{tame-lipschitz-cambio-di-variabile} \end{align} Estimates \eqref{tame-cambio-di-variabile}, \eqref{cambio di variabile meno identita} and \eqref{tame-lipschitz-cambio-di-variabile} also hold for $u \circ g$.
$(iii)$ Part $(ii)$ also holds with $\| \cdot \|_k$ replaced by
$| \cdot |_{k, \infty}$, and $\Vert \cdot \Vert_{s}^{{\rm{Lip}(\g)}}$ replaced by $\vert \cdot \vert_{s, \infty}^{{\rm{Lip}(\g)}}$, namely \begin{align} \label{composizione infty}
| u \circ f |_{s, \infty}
& \leq C (|u|_{s, \infty} + |Dp|_{s-1, \infty} |u|_{1, \infty}), \\ \label{composizione infty Lip}
| u \circ f |_{s, \infty}^{\rm{Lip}(\g)}
& \leq C (|u|_{s+1, \infty}^{\rm{Lip}(\g)} + |Dp|_{s-1, \infty}^{\rm{Lip}(\g)} |u|_{2, \infty}^{\rm{Lip}(\g)}). \end{align} \end{lemma}
\begin{pf} The bounds \eqref{stime-q}, \eqref{tame-cambio-di-variabile} and \eqref{composizione infty} are proved in \cite{Baldi-Benj-Ono}, Appendix B. Let us prove \eqref{stime-lipschitz-q}. Denote $p_\lambda(x) := p(\lambda,x)$, and similarly for $q_\lambda, g_\lambda, f_\lambda$. Since $y = f_\lambda(x) = x + p_\lambda(x)$ if and only if $x = g_\lambda(y) = y + q_\lambda(y)$, one has \begin{equation}\label{q-p-lambda} q_\lambda(y) + p_\lambda(g_\lambda(y)) = 0 \, , \quad \forall \lambda \in \L, \ y \in \mathbb T^d. \end{equation} Let $\lambda_{1}, \lambda_{2} \in \Lambda $, and denote, in short, $q_1 = q_{\lambda_1}$, $q_2 = q_{\lambda_2}$, and so on. By \eqref{q-p-lambda}, \begin{align} q_1 - q_2 & = p_2 \circ g_2 - p_1 \circ g_1 = (p_2 \circ g_2 - p_1 \circ g_2) + (p_1 \circ g_2 - p_1 \circ g_1) \notag \\ & = A_2^{-1} (p_2 - p_1) + \int_{0}^{1} A_t^{-1} (D_{x} p_1) \, dt \, (q_2 - q_1) \label{contrazione Lip} \end{align} where $ A_2^{-1} h := h \circ g_2 $, $ A_t^{-1} h := h \circ \big( g_1 + t [g_2 - g_1] \big)$, $ t \in [0,1]$. By \eqref{contrazione Lip}, the $ L^\infty $ norm of $(q_2 - q_1)$ satisfies \[
|q_2 - q_1|_{L^\infty}
\leq
|A_2^{-1} (p_2 - p_1)|_{L^\infty}
+ \int_{0}^{1} |A_t^{-1} (D_{x} p_1)|_{L^\infty} \, dt \, |q_2 - q_1|_{L^\infty} \leq
|p_2 - p_1|_{L^\infty}
+ \int_{0}^{1} |D_{x} p_1|_{L^\infty} dt \, |q_2 - q_1|_{L^\infty} \]
whence, using the assumption $|D_{x} p_1|_{L^\infty} \leq 1/2$, \begin{equation} \label{norma 0 q2-q1}
|q_2 - q_1|_{L^\infty} \leq 2 |p_2 - p_1|_{L^\infty} \, . \end{equation} By \eqref{contrazione Lip}, using \eqref{tame product infty}, the $W^{s,\infty}$ norm of $(q_2 - q_1)$, for $s \geq 0$, satisfies \[
|q_1 - q_2|_{s, \infty}
\leq |A_2^{-1} (p_2 - p_1)|_{s, \infty}
+ \frac32\, \int_{0}^{1} |A_t^{-1} (D_{x} p_1)|_{L^\infty} \, dt \, |q_2 - q_1|_{s, \infty}
+ C(s) \int_{0}^{1} |A_t^{-1} (D_{x} p_1)|_{s, \infty} \, dt \, |q_2 - q_1|_{L^\infty}. \]
Since $|A_t^{-1} (D_{x} p_1)|_{L^\infty} = |D_x p_1|_{L^\infty} \leq 1/2$, \[
\Big( 1 - \frac34 \Big) |q_2 - q_1|_{s, \infty}
\leq |A_2^{-1} (p_2 - p_1)|_{s, \infty} + C(s) \int_{0}^{1} |A_t^{-1} (D_{x} p_1)|_{s, \infty} \, dt \, |q_2 - q_1|_{L^\infty}. \] Using \eqref{norma 0 q2-q1}, \eqref{composizione infty}, \eqref{interpolation estremi fine-s} and \eqref{stime-q}, \[
|q_2 - q_1|_{s, \infty}
\leq C(s) \Big( |p_2 - p_1|_{s, \infty} + \big\{ \sup_{\lambda \in \Lambda} |p_\lambda|_{s+1, \infty} \big\} |p_2 - p_1|_{L^\infty} \Big) \] and \eqref{stime-lipschitz-q} follows.
{\it Proof of \eqref{cambio di variabile meno identita}}. We have $ u \circ f - u = \int_{0}^{1} A_t (D_x u)\,d t\, p $ where $A_t u (x) := u (x + t p(x)) $, $t \in [0, 1]$. Then, by \eqref{mixed norms tame product} and \eqref{tame-cambio-di-variabile}, \begin{eqnarray*}
\Big\| \int_{0}^{1} A_t (D_x u)\,d t\, p \Big\|_{s} & \leq_s
& \int_{0}^{1} \Vert A_t (D_x u) \Vert_{s} \, d t\, |p|_{L^{\infty}} + \int_{0}^{1} \Vert A_t (D_x u) \Vert_{0} \, d t\, |p|_{s, \infty} \\ & \leq_s
& \Vert u \Vert_{s + 1} |p|_{L^{\infty}} + |p|_{s, \infty} |p|_{L^\infty} \Vert u \Vert_{2} + |p|_{s,\infty} \Vert u \Vert_{1} \,, \end{eqnarray*} which implies \eqref{cambio di variabile meno identita}.
{\it Proof of \eqref{tame-lipschitz-cambio-di-variabile}}. With the same notation as above, \[ u_2 \circ f_2 - u_1 \circ f_1 = (u_2 \circ f_2 - u_2 \circ f_1) + (u_2 \circ f_1 - u_1 \circ f_1) = \int_{0}^{1} A_t (D_{x} u_2) \, dt \, (f_2 - f_1) + A_1 (u_2 - u_1), \] where $A_1 h = h \circ f_1$ and $A_t h = h\circ (f_1 + t[f_2 - f_1])$. Using \eqref{mixed norms tame product} and \eqref{tame-cambio-di-variabile}, \[
\Big\| \int_{0}^{1} A_t (D_{x} u_2) \, dt \, (f_2 - f_1) \Big\|_s \leq_s
\Big( \| D_x u_2 \|_{s} + \big( \sup_\lambda |D_x p_\lambda|_{s-1, \infty} \big) \| D_x u_2 \|_1 \Big)
|p_2 - p_1|_{L^\infty}
+ \| D_x u_2 \|_{0} |p_2 - p_1|_{s, \infty} \] and
$ \| A_1 (u_2 - u_1) \|_s \leq_s \| u_2 - u_1 \|_s + |D_x p_1|_{s-1, \infty} \| u_2 - u_1 \|_1 $. Therefore \begin{eqnarray*}
\| u_2 \circ f_2 - u_1 \circ f_1 \|_s & \leq_s &
|p_2 - p_1|_{L^\infty}
\Big( \sup_\lambda \| u_\lambda \|_{s+1} +
\big( \sup_\lambda |p_\lambda|_{s, \infty} \big) \big( \sup_\lambda \| u_\lambda \|_{2} \big) \Big) \\
& & + |p_2 - p_1|_{s, \infty} \big( \sup_\lambda \| u_\lambda \|_1 \big)
+ \| u_2 - u_1 \|_s + \big( \sup_\lambda |p_\lambda|_{s, \infty} \big) \, \| u_2 - u_1 \|_1 \end{eqnarray*} whence \eqref{tame-lipschitz-cambio-di-variabile} follows. The proof of \eqref{composizione infty Lip} is the same as for \eqref{tame-lipschitz-cambio-di-variabile},
replacing all norms $\| \cdot \|_s$ with $| \cdot |_{s, \infty}$. \end{pf}
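For completeness, we sketch one standard construction of the inverse in part $(i)$, which also explains the first equality in \eqref{stime-q}. For each fixed $ y $, the value $ q(y) $ is the unique fixed point of the map $ T_y(z) := - p(y + z) $, which is a contraction because
$$
| T_y(z_1) - T_y(z_2) | \leq | Dp |_{L^\infty} \, |z_1 - z_2| \leq \tfrac12 \, |z_1 - z_2| \, ,
$$
by the assumption $ |p|_{1, \infty} \leq 1/2 $. Moreover $ q = - p \circ g $ with $ g = f^{-1} $ a bijection, whence $ | q |_{L^\infty} = | p \circ g |_{L^\infty} = | p |_{L^\infty} $.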
\begin{lemma} {\bf (Composition)} \label{lemma astratto composizioni} Suppose that for all $\Vert u \Vert_{s_{0}+ \mu_i} \leq 1$ the operator ${\cal Q}_i(u)$ satisfies \begin{equation}\label{tame Phi-i} \Vert {\cal Q}_i h \Vert_{s} \leq C(s) \big(\Vert h \Vert_{s + \tau_i} + \Vert u \Vert_{s + \mu_i} \Vert h \Vert_{s_{0}+\tau_i} \big), \quad i = 1, 2. \end{equation} Let $\tau := {\rm max}\{ \t_1, \t_2 \}$, $\mu := {\rm max}\{ \mu_1, \mu_2 \}$. Then, for all \begin{equation}\label{u-s0-tau-mu}
\Vert u \Vert_{s_{0}+ \tau + \mu} \leq 1 \, , \end{equation} the composition operator ${\cal Q} := {\cal Q}_{1} \circ {\cal Q}_2$ satisfies the tame estimate \begin{equation} \label{stima tame astratta composizioni} \Vert {\cal Q} h \Vert_{s} \leq C(s) \big( \Vert h \Vert_{s + \tau_1 + \tau_2} + \Vert u \Vert_{s + \tau + \mu} \Vert h \Vert_{s_0 + \tau_1 + \tau_2}\big). \end{equation} Moreover, if ${\cal Q}_1$, ${\cal Q}_2$, $u$ and $h$ depend in a Lipschitz way on a parameter $\lambda$, then \eqref{stima tame astratta composizioni} also holds with $\Vert \cdot \Vert_s$ replaced by $\Vert \cdot \Vert_{s}^{{\rm{Lip}(\g)}}$. \end{lemma}
\begin{pf} Apply the estimate \eqref{tame Phi-i} first for ${\cal Q}_1$, then for ${\cal Q}_2$, using condition \eqref{u-s0-tau-mu}. \end{pf}
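In detail, the two applications of \eqref{tame Phi-i} read as follows. First, \eqref{tame Phi-i} with $ i = 1 $ gives
$$
\Vert {\cal Q}_1 {\cal Q}_2 h \Vert_{s} \leq C(s) \big( \Vert {\cal Q}_2 h \Vert_{s + \tau_1} + \Vert u \Vert_{s + \mu_1} \Vert {\cal Q}_2 h \Vert_{s_0 + \tau_1} \big) \, ,
$$
then, applying \eqref{tame Phi-i} with $ i = 2 $ at the indices $ s + \tau_1 $ and $ s_0 + \tau_1 $,
$$
\Vert {\cal Q}_2 h \Vert_{s + \tau_1} \leq C(s) \big( \Vert h \Vert_{s + \tau_1 + \tau_2} + \Vert u \Vert_{s + \tau_1 + \mu_2} \Vert h \Vert_{s_0 + \tau_2} \big) \, , \qquad
\Vert {\cal Q}_2 h \Vert_{s_0 + \tau_1} \leq C \, \Vert h \Vert_{s_0 + \tau_1 + \tau_2} \, ,
$$
the last bound because $ \Vert u \Vert_{s_0 + \tau_1 + \mu_2} \leq 1 $ by \eqref{u-s0-tau-mu}. Since $ \Vert u \Vert_{s + \mu_1}, \Vert u \Vert_{s + \tau_1 + \mu_2} \leq \Vert u \Vert_{s + \tau + \mu} $ and $ \Vert h \Vert_{s_0 + \tau_2} \leq \Vert h \Vert_{s_0 + \tau_1 + \tau_2} $, the estimate \eqref{stima tame astratta composizioni} follows.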
\section{Appendix B. Proof of Lemmata \ref{lemma:mostro} and \ref{lemma:stime stabilita Phi 12}} \label{sec:proofs}
The proof is elementary, based on repeated use of the tame estimates of the lemmata of Appendix A. For convenience, we split it into several points. We recall that $ \mathfrak s_0 := (\nu+2) / 2 $ is fixed (it plays the role of the constant $ s_0 $ in Lemma \ref{lemma:standard Sobolev norms properties}).
{\bf Estimates in Step $ 1 $.}
1. --- We prove that $b_3 = b $ defined in \eqref{c} satisfies the tame estimates \begin{align} \label{stima b3}
\| b_3 - 1 \|_{s}
& \leq \varepsilon \, C(s) \big( 1 + \| u \|_{s + 3} \big), \\ \label{stima derivata b3}
\| \partial_u b_3(u)[h] \|_{s}
& \leq \varepsilon \, C(s) \big( \| h \|_{s+3} + \| u \|_{s+3} \| h \|_{ \mathfrak s_0+3} \big), \\ \label{stima Lip b3}
\| b_3-1 \|_{s}^{\rm{Lip}(\g)}
& \leq \varepsilon \, C(s) \big( 1 + \| u \|_{s+3}^{\rm{Lip}(\g)} \big). \end{align} {\it Proof of \eqref{stima b3}}. Write $b_3 = b $ (see \eqref{c}) as \begin{equation} \label{b as} b_3 - 1 = \psi\big( M [ g(a_3) - g(0) ] \, \big) - \psi(0), \quad \psi(t) := (1 + t)^{-3}, \quad Mh := \frac{1}{2\pi}\, \int_\mathbb T h \, dx, \quad g(t) := (1 + t)^{-\frac13}. \end{equation} Thus, for $ \varepsilon $ small, \[
\| b_3 - 1 \|_{s} \leq C(s) \| M [ g(a_3) - g(0) ] \, \|_s
\leq C(s) \| g(a_3) - g(0) \|_s
\leq C(s) \| a_3 \|_s . \] In the first inequality we have applied Lemma \ref{lemma:composition of functions, Moser}$(i)$ to the function $\psi$, with $u=0$, $ p = 0 $, $h=M[g(a_3)-g(0)]$.
In the second inequality we have used the trivial fact that $ \| Mh \|_{s} \leq \| h \|_{s} $ for all $h$. In the third inequality we have applied again Lemma \ref{lemma:composition of functions, Moser}$(i)$ to the function $g$, with $u=0$, $ p = 0 $, $h=a_3$. Finally we estimate $a_3 $ by \eqref{stima coeff ai 1} with $ s_0 = \mathfrak s_0 $, which holds for $s+2 \leq q$.
\noindent {\it Proof of \eqref{stima derivata b3}}. Using \eqref{b as}, the derivative of $b_3$ with respect to $u$ in the direction $h$ is \[ \partial_u b_3(u)[h] = \psi' \big( M [ g(a_3) - g(0) ] \big) \, M \big( g'(a_3) \partial_u a_3[h] \, \big). \] Then use \eqref{asymmetric tame product}, Lemma \ref{lemma:composition of functions, Moser}$(i)$ applied to the functions $\psi'$ and $g'$, and \eqref{stima coeff ai 2}.
\noindent {\it Proof of \eqref{stima Lip b3}}. It follows from \eqref{stima b3}, \eqref{stima derivata b3} and Lemma \ref{lemma:Lip generale}.
2. --- Using the definition \eqref{primaequazione} of $\rho_0$, estimates \eqref{stima b3}, \eqref{stima derivata b3}, \eqref{stima Lip b3} for $b_3$ and estimates \eqref{stima coeff ai 1}, \eqref{stima coeff ai 2}, \eqref{stima coeff ai 3} for $a_3$, one proves that $\rho_0$ also satisfies the same estimates \eqref{stima b3}, \eqref{stima derivata b3}, \eqref{stima Lip b3} as $(b_3 - 1)$. Since $\beta = \partial_x^{-1} \rho_0$ (see \eqref{defb1b0}), by Lemma \ref{lemma:standard Sobolev norms properties}($i$) we get \begin{align} \label{stima beta}
| \beta |_{s, \infty}
& \leq C(s) \| \beta \|_{s + {\mathfrak s}_0} \leq C(s) \| \rho_0 \|_{s + {\mathfrak s}_0}
\leq \varepsilon \, C(s) \big( 1 + \| u \|_{s + {\mathfrak s}_0 + 3} \big), \\ \intertext{and, with the same chain of inequalities,} \label{stima derivata beta}
| \partial_u \b(u)[h] |_{s, \infty} &
\leq \varepsilon \, C(s) \big( \| h \|_{s + {\mathfrak s}_0 +3} + \| u \|_{s + {\mathfrak s}_0 +3} \| h \|_{{\mathfrak s}_0 +3} \big) \, . \end{align} Then Lemma \ref{lemma:Lip generale} implies \begin{equation} \label{stima Lip beta}
| \beta |_{s, \infty}^{\rm{Lip}(\g)}
\leq \varepsilon \, C(s) \big( 1 + \| u \|_{s+ {\mathfrak s}_0 +3}^{\rm{Lip}(\g)} \big), \end{equation} for all $s + {\mathfrak s}_0 + 3 \leq q$.
Note that $x \mapsto x + \b(\varphi,x)$ is a well-defined diffeomorphism if $|\b|_{1, \infty} \leq 1/2$, and, by \eqref{stima beta}, this condition is satisfied provided
$ \varepsilon \, C \big( 1 + \| u \|_{{\mathfrak s}_0 + 4} \big) \leq 1/2$.
Let $(\varphi,y) \mapsto (\varphi, y + \tilde\b(\varphi,y))$ be the inverse diffeomorphism of $(\varphi,x) \mapsto (\varphi, x + \b(\varphi,x))$. By Lemma \ref{lemma:utile}($i$) on the torus $\mathbb T^{\nu+1}$, $\tilde\b$ satisfies \begin{equation} \label{stima beta tilde}
| \tilde\beta |_{s, \infty} \leq C | \beta |_{s, \infty} \stackrel{\eqref{stima beta}} \leq \varepsilon \, C(s) \big( 1 + \| u \|_{s + 3 + {\mathfrak s}_0} \big). \end{equation} Writing explicitly the dependence on $u$, we have $ \tilde\b(\varphi,y;u) + \b\big( \varphi, \, y + \tilde\b(\varphi,y;u) ; u \big) = 0 $. Differentiating the last equality with respect to $u$ in the direction $h$ gives \[ (\partial_u \tilde\b)[h] = - {\cal A}^{-1} \Big( \frac{ \partial_u \beta [h] }{ 1+\b_x } \Big), \] therefore, applying Lemma \ref{lemma:utile}$(iii)$ to deal with ${\cal A}^{-1}$, \eqref{tame product infty} for the product $(\partial_u \beta [h]) (1 + \b_x)^{-1}$, the estimates \eqref{stima beta}, \eqref{stima derivata beta}, \eqref{stima Lip beta} for $\b$, and \eqref{interpolation estremi fine} (with $ a_0 = \mathfrak s_0 + 3 $, $ b_0 = \mathfrak s_0 + 4 $, $ p = 1 $, $ q = s - 1$), we obtain (for $s + {\mathfrak s}_0 + 4 \leq q$) \begin{equation}\label{stima derivata beta tilde}
| \partial_u \tilde\b(u)[h] |_{s, \infty}
\leq \varepsilon \, C(s) \big( \| h \|_{s+3 + {\mathfrak s}_0} + \| u \|_{s+ 4 + {\mathfrak s}_0} \| h \|_{3 + {\mathfrak s}_0} \big) \, . \end{equation} Then, using Lemma \ref{lemma:Lip generale} with $ p = 4 + \mathfrak s_0 $, the bounds \eqref{stima beta tilde}, \eqref{stima derivata beta tilde} imply \begin{equation} \label{stima Lip beta tilde}
| \tilde\beta |_{s, \infty}^{\rm{Lip}(\g)}
\leq \varepsilon \, C(s) \big( 1 + \| u \|_{s+ 4 + {\mathfrak s}_0}^{\rm{Lip}(\g)} \big). \end{equation}
3. --- {\sc Estimates of ${\cal A}(u)$ and ${\cal A}(u)^{-1}$. } By \eqref{tame-cambio-di-variabile}, \eqref{stima beta} and \eqref{stima beta tilde}, \begin{equation}
\Vert {\cal A}(u)h \Vert_{s} + \| {\cal A}(u)^{-1} h \|_{s}
\leq \, C(s) \big(\| h \|_{s} + \| u \|_{s + {\mathfrak s}_0 + 3} \| h \|_1 \big). \label{stima A e sua inversa} \end{equation} Moreover, by \eqref{tame-lipschitz-cambio-di-variabile}, \eqref{stima Lip beta} and \eqref{stima Lip beta tilde}, \begin{equation}\label{stima A e sua inversa lipschitz}
\|{\cal A}(u) h \|_{s}^{\rm{Lip}(\g)} + \|{\cal A}(u)^{-1} h \|_{s}^{\rm{Lip}(\g)}
\leq C(s) \big(\|h\|_{s + 1}^{\rm{Lip}(\g)} + \| u \|_{s + {\mathfrak s}_0 + 4}^{\rm{Lip}(\g)} \|h \|_{2}^{\rm{Lip}(\g)} \big). \end{equation}
Since ${\cal A}(u)g(\varphi , x) = g(\varphi , x + \beta(\varphi , x; u))$, the derivative of ${\cal A}(u)g$ with respect to $u$ in the direction $h$ is the product $ \partial_u \big( {\cal A}(u)g \big)[h]\,=\, ( {\cal A}(u)g_{x} ) \, \partial_u\beta(u)[h] $. Then, by \eqref{mixed norms tame product}, \eqref{stima derivata beta} and \eqref{stima A e sua inversa}, \begin{equation}\label{stima derivata A}
\| \partial_u ({\cal A}(u)g) [h] \, \|_s
\leq \varepsilon C(s) \Big(\| g \|_{s + 1} \| h \|_{ {\mathfrak s}_0 + 3} + \| g \|_{2} \| h \|_{s + {\mathfrak s}_0 + 3}
+ \| u \|_{s + {\mathfrak s}_0 + 3} \| g \|_{2} \| h \|_{{\mathfrak s}_0 + 3}\Big). \end{equation} Similarly $\partial_u ({\cal A}(u)^{-1} g ) [h] = ({\cal A}(u)^{-1} g_x) \, \partial_u \tilde{\beta}(u)[h]$, therefore \eqref{mixed norms tame product}, \eqref{stima derivata beta tilde}, \eqref{stima A e sua inversa} imply that \begin{equation}\label{stima derivata inversa A}
\| \partial_u ( {\cal A}^{-1}(u)g ) [h] \, \|_s
\leq \varepsilon C(s) \Big(\| g \|_{s + 1} \| h \|_{{\mathfrak s}_0 + 3} + \| g \|_{2} \| h \|_{s + {\mathfrak s}_0 + 3} +
\| u \|_{s + {\mathfrak s}_0 + 4}\| g \|_{2} \| h \|_{{\mathfrak s}_0 + 3} \Big). \end{equation}
4. --- The coefficients $b_0, b_1, b_2$ are given in \eqref{b1 b3}, \eqref{b0 b2}. By \eqref{mixed norms tame product}, \eqref{stima A e sua inversa}, \eqref{palla Lip di sicurezza}, \eqref{stima beta} and \eqref{stima coeff ai 1}, \begin{equation}\label{stime bi}
\| b_i \|_s \leq \varepsilon C(s) ( 1 + \| u \|_{s + {\mathfrak s}_0 + 6} ), \quad i = 0 , 1 , 2. \end{equation} Moreover, in analogous way, by \eqref{mixed norms tame product}, \eqref{stima A e sua inversa lipschitz}, \eqref{palla Lip di sicurezza}, \eqref{stima Lip beta} and \eqref{stima coeff ai 3}, \begin{equation}\label{stime bi lipschitz}
\| b_i \|_{s}^{\rm{Lip}(\g)} \leq \varepsilon C(s) (1 + \| u \|_{s + {\mathfrak s}_0 + 7}^{\rm{Lip}(\g)} ), \quad i = 0 , 1 , 2. \end{equation} Now we estimate the derivative with respect to $u$ of $b_1$. The estimates for $b_0$ and $b_2$ are analogous. By \eqref{b1 b3} we write $b_1(u) = {\cal A}(u)^{-1} b_1^*(u)$ where $ b_1^* := $ $ \o\cdot\partial_{\varphi }\beta + $ $ (1 + a_3)\beta_{xxx} + $ $ a_2 \beta_{xx} + $ $ a_1(1 + \beta_x) $. The bounds \eqref{stima coeff ai 2}, \eqref{stima derivata beta}, \eqref{stima beta}, \eqref{palla Lip di sicurezza}, and \eqref{mixed norms tame product} imply that \begin{equation}\label{stima b1*}
\| \partial_u b_{1}^{*}(u)[h]\|_{s} \leq \varepsilon C(s) \big(\| h \|_{s + {\mathfrak s}_0 + 6} +
\| u \|_{s + {\mathfrak s}_0 + 6} \| h \|_{{\mathfrak s}_0 + 6} \big)\,. \end{equation} Now, \begin{equation}\label{marcellino} \partial_u b_1(u)[h] = \partial_u \big({\cal A}(u)^{-1} b_1^{*}(u) \big) [h] = (\partial_u {\cal A}(u)^{-1}) (b_1^*(u)) [h] + {\cal A}(u)^{-1} (\partial_{u}b_1^{*}(u)[h]). \end{equation} Then \eqref{asymmetric tame product}, \eqref{marcellino}, \eqref{stima A e sua inversa}, \eqref{stima derivata inversa A}, \eqref{interpolation estremi fine} (with $ a_0 = \mathfrak s_0 +4 $, $ b_0 = \mathfrak s_0 + 6 $, $ p = s- 1$, $ q = 1 $) and \eqref{stima b1*} imply \begin{eqnarray}
\| \partial_u {\cal A}(u)^{-1} (b_1^*(u)) [h] \|_s
& \leq & \varepsilon C(s) \big( \| h \|_{s + {\mathfrak s}_0 + 3} + \| u \|_{s + {\mathfrak s}_0 + 7} \| h \|_{{\mathfrak s}_0 + 3} \big) \label{primo pezzo derivata b1} \\
\| {\cal A}(u)^{-1} \partial_{u} b_1^{*}(u)[h] \|_{s}
& \leq & \varepsilon C(s) \big( \| h \|_{s + {\mathfrak s}_0 + 6} + \| u \|_{s + {\mathfrak s}_0 + 6} \| h \|_{ {\mathfrak s}_0 + 6} \big). \label{secondo pezzo derivata b1} \end{eqnarray} Finally \eqref{marcellino}, \eqref{primo pezzo derivata b1} and \eqref{secondo pezzo derivata b1} imply \begin{equation}\label{stima finale b1}
\| \partial_u b_1(u)[h] \|_s \leq \varepsilon C(s) \big(\| h \|_{s + {\mathfrak s}_0 + 6} + \| u \|_{s + {\mathfrak s}_0 + 7}
\| h \|_{{\mathfrak s}_0 + 6} \big), \end{equation} which holds for all $s + {\mathfrak s}_0 + 7 \leq q$.
{\bf Estimates in Step 2. }
5. --- We prove that the coefficient $m_3$, defined in \eqref{mu 3}, satisfies the following estimates: \begin{eqnarray}
|m_3 - 1 | \, ,
|m_3 - 1 |^{\rm{Lip}(\g)} &\leq& \varepsilon C \label{stima mu3-1 Lip}\\
|\partial_u m_3(u)[h]| & \leq & \varepsilon C \Vert h \Vert_{{\mathfrak s}_0 + 3} . \label{stima derivata mu 3} \end{eqnarray} Using \eqref{mu 3}, \eqref{stima b3} and \eqref{palla Lip di sicurezza}, we get $$
|m_3 - 1| \leq \frac{1}{(2\pi)^{\nu}} \int_{\mathbb T^{\nu}}|b_3 - 1| \, d\varphi \leq C \Vert b_3 - 1\Vert_{{\mathfrak s}_0} \leq \varepsilon C. $$ Similarly we get the Lipschitz part of \eqref{stima mu3-1 Lip}. The estimate \eqref{stima derivata mu 3} follows by \eqref{stima derivata b3}, since $$
| \partial_u m_3(u)[h] \, | \leq \frac{1}{(2\pi)^{\nu}} \int_{\mathbb T^{\nu}}\vert\partial_{u}b_3(u)[h] \vert \, d\varphi \leq C \Vert \partial_{u}b_3(u)[h]\Vert_{{\mathfrak s}_0} \leq \varepsilon C \Vert h \Vert_{{\mathfrak s}_0 + 3}. $$ 6. --- \textsc{Estimates of $\alpha$. } The function $\alpha(\varphi)$, defined in \eqref{alpha}, satisfies \begin{eqnarray} \vert \alpha \vert_{s, \infty} & \leq & \varepsilon \gamma_0^{-1} \, C(s) \big(1 + \Vert u \Vert_{s + \tau_0 + {\mathfrak s}_0 + 3} \big) \label{stima alpha} \\
\vert \alpha \vert_{s, \infty}^{\rm{Lip}(\g)} & \leq & \varepsilon \gamma_0^{-1} \, C(s) \big(1 + \Vert u \Vert_{s + \tau_0 + {\mathfrak s}_0 + 3}^{\rm{Lip}(\g)} \big) \label{stima Lip alpha} \\ \vert \partial_u \alpha(u)[h] \vert_{s, \infty} & \leq & \varepsilon \gamma_0^{-1} \, C(s) \big(\Vert h \Vert_{s + \tau_0 + {\mathfrak s}_0 + 3} + \Vert u \Vert_{s + \tau_0 + {\mathfrak s}_0 + 3}\Vert h \Vert_{ {\mathfrak s}_0 + 3} \big). \label{stima derivata alpha} \end{eqnarray}
Recall that $\omega = \lambda \bar\omega$, and $|\bar\omega \cdot l| \geq 3 \g_0 |l|^{-\t_0}$, $ \forall l \neq 0$, see \eqref{omdio}. By \eqref{stima b3} and \eqref{stima mu3-1 Lip}, $$
| \alpha |_{s, \infty}
\leq \| \alpha \|_{s + \mathfrak s_0} \leq C \gamma_0^{-1} \Vert b_3 - m_3 \Vert_{s + \mathfrak s_0 + \tau_0} \leq C(s) \gamma_0^{-1} \varepsilon (1 + \Vert u \Vert_{s +\tau_0 + \mathfrak s_0 + 3} ) $$ proving \eqref{stima alpha}. Then \eqref{stima Lip alpha} holds similarly using \eqref{stima Lip b3} and $ (\om \cdot \pa_{\ph})^{-1} = \lambda^{-1} \, (\bar\omega \cdot \partial_\varphi)^{-1} $. Differentiating formula \eqref{alpha} with respect to $u$ in the direction $h$ gives $$ \partial_{u}\alpha(u)[h] = (\lambda \bar\omega \cdot \partial_{\varphi })^{-1} \Big(\frac{\partial_u b_3(u)[h] m_3 - b_3 \partial_{u}m_3(u)[h]}{m_3^{2}}\Big) $$ then, the standard Sobolev embedding, \eqref{stima b3}, \eqref{stima derivata b3}, \eqref{stima mu3-1 Lip}, \eqref{stima derivata mu 3} imply \eqref{stima derivata alpha}.
Estimates \eqref{stima Lip alpha} and \eqref{stima derivata alpha} hold for $s + \t_0 + {\mathfrak s}_0 + 3 \leq q$. Note that \eqref{cambio2} is a well-defined diffeomorphism if $|\a|_{1, \infty} \leq 1/2$, and, by \eqref{stima Lip alpha}, this condition is guaranteed by \eqref{palla di sicurezza}.
7. --- {\sc Estimates of $\tilde{\alpha}$. } Let $\vartheta \rightarrow \vartheta + \omega \tilde{\alpha}(\vartheta)$ be the inverse change of variable of \eqref{cambio2}. The following estimates hold: \begin{eqnarray} \vert \tilde{\alpha} \vert_{s, \infty} & \leq & \varepsilon\gamma_0^{-1} \, C(s) \big(1 + \Vert u \Vert_{s + \tau_0 + {\mathfrak s}_0 + 3} \big) \label{stima alpha tilde}\\ \vert \tilde{\alpha} \vert_{s, \infty}^{\rm{Lip}(\g)} & \leq & \varepsilon\gamma_0^{-1} \, C(s) \big(1 + \Vert u \Vert_{s + \tau_0 + {\mathfrak s}_0 + 4}^{\rm{Lip}(\g)} \big) \label{stima Lip alpha tilde}\\ \vert \partial_{u}\tilde{\alpha}(u)[h] \vert_{s, \infty} & \leq & \varepsilon\gamma_0^{-1} \, C(s) \big(\Vert h \Vert_{s + \tau_0 + {\mathfrak s}_0 + 3} + \Vert u \Vert_{s + \tau_0 + {\mathfrak s}_0 + 4} \Vert h \Vert_{\tau_0 + {\mathfrak s}_0 + 3} \big). \label{stima derivata alpha tilde} \end{eqnarray} The bounds \eqref{stima alpha tilde}, \eqref{stima Lip alpha tilde} follow by \eqref{stime-q}, \eqref{stima alpha}, and \eqref{stime-lipschitz-q}, \eqref{stima Lip alpha}, respectively. To estimate the partial derivative of $\tilde{\alpha}$ with respect to $u$ we differentiate the identity
$ \tilde{\alpha}(\vartheta ; u) + \alpha(\vartheta + \omega \tilde{\alpha}(\vartheta; u); u) = 0 $, which gives $$
\partial_u \tilde{\alpha}(u)[h] = - B^{-1} \Big(\frac{\partial_u \alpha[h]}{1 + \o\cdot\partial_{\varphi }\alpha} \Big).
$$ Then applying Lemma \ref{lemma:utile}$(iii)$ to deal with $B^{-1}$, \eqref{tame product infty} for the product $\partial_u \alpha[h] \, (1 + \o\cdot \partial_{\varphi }\alpha )^{-1}$, and estimates \eqref{stima Lip alpha}, \eqref{stima derivata alpha}, \eqref{interpolation estremi fine}, we obtain \eqref{stima derivata alpha tilde}.
8. --- The transformations $B(u)$ and $B(u)^{-1}$, defined in \eqref{operatore2} resp. \eqref{B-1}, satisfy the following estimates: \begin{eqnarray} \Vert B(u) h \Vert_s + \Vert B(u)^{-1} h \Vert_s \!\!\!\! & \leq & \!\! \!\!C(s) \big(\Vert h \Vert_s + \Vert u \Vert_{s + \tau_0 + {\mathfrak s}_0 + 3} \Vert h \Vert_1 \big) \label{stima B e sua inversa} \\ \Vert B(u) h \Vert_s^{\rm{Lip}(\g)} + \Vert B(u)^{-1} h \Vert_s^{\rm{Lip}(\g)} \!\! \!\!&\leq & \!\!\!\! C(s) \big(\Vert h \Vert_{s + 1}^{\rm{Lip}(\g)} + \Vert u \Vert_{s + \tau_0 + {\mathfrak s}_0 + 4}^{\rm{Lip}(\g)} \Vert h \Vert_2^{\rm{Lip}(\g)} \big) \label{stima Lip B e sua inversa} \\ \Vert \partial_u (B(u)g) [h] \Vert_s \!\! \!\! & \leq & \!\!\!\! C(s) \big(\Vert g\Vert_{s + 1} \Vert h \Vert_{\s_0} + \Vert g\Vert_{1} \Vert h \Vert_{s + \s_0} + \Vert u \Vert_{s + \s_0} \Vert g\Vert_{2} \Vert h \Vert_{ \s_0} \big) \label{stima derivata B} \\ \Vert \partial_u ( B(u)^{-1}g ) [h] \Vert_s \!\!\!\! &\leq& \!\!\!\! C(s) \big( \Vert g\Vert_{s + 1} \Vert h \Vert_{\s_0} + \Vert g \Vert_{1} \Vert h \Vert_{s +\s_0} + \Vert u \Vert_{s + \s_0 + 1} \Vert g\Vert_{2} \Vert h \Vert_{\s_0} \big) \label{stima derivata B inversa} \end{eqnarray} where $ \s_0 := \t_0 + {\mathfrak s}_0 +3 $. Estimates \eqref{stima B e sua inversa} and \eqref{stima Lip B e sua inversa} follow by Lemma \ref{lemma:utile}$(ii)$ and \eqref{stima alpha}, \eqref{stima alpha tilde}, \eqref{stima Lip alpha}, \eqref{stima Lip alpha tilde}. The derivative of $B(u)g$ with respect to $u$ in the direction $h$ is the product $f z$ where $f := B(u)(\o\cdot\partial_{\varphi }g)$ and $z := \partial_{u}\alpha(u)[h]$.
By \eqref{mixed norms tame product}, $\| fz \|_s \leq C(s) ( \| f \|_s |z|_{L^\infty} + \| f \|_0 |z|_{s, \infty})$. Then \eqref{stima derivata alpha}, \eqref{stima B e sua inversa} imply \eqref{stima derivata B}. In an analogous way, \eqref{stima derivata alpha tilde} and \eqref{stima B e sua inversa} give \eqref{stima derivata B inversa}.
9. --- {\sc estimates of $\rho$. } The function $\rho$ defined in \eqref{anche def rho}, namely $\rho = 1 + B^{-1} (\om \cdot \pa_{\ph} \a)$, satisfies \begin{eqnarray} \vert \rho -1 \vert_{s, \infty} & \leq & \varepsilon \g_0 ^{-1} \, C(s) ( 1 + \Vert u \Vert_{s +\tau_0 + {\mathfrak s}_0 + 4} ) \label{stima rho} \\ \vert \rho -1 \vert_{s, \infty}^{\rm{Lip}(\g)} & \leq & \varepsilon \g_0 ^{-1} \, C(s) (1 + \Vert u \Vert_{s + \tau_0 + {\mathfrak s}_0 + 5}^{\rm{Lip}(\g)} ) \label{stima Lip rho} \\ \Vert \partial_u \rho(u)[h] \, \Vert_{s} & \leq & \varepsilon \g_0 ^{-1} \, C(s ) \big( \Vert h \Vert_{s + \tau_0 + {\mathfrak s}_0 + 4} + \Vert u \Vert_{s + \tau_0 + {\mathfrak s}_0 + 5} \Vert h \Vert_{ \tau_0 + {\mathfrak s}_0 + 4} \big). \label{stima derivata rho} \end{eqnarray} The bound \eqref{stima rho} follows by \eqref{anche def rho}, \eqref{composizione infty}, \eqref{stima alpha}, \eqref{palla di sicurezza}. Similarly \eqref{stima Lip rho} follows by \eqref{composizione infty Lip}, \eqref{stima Lip alpha} and \eqref{palla Lip di sicurezza}. Differentiating \eqref{anche def rho} with respect to $u$ in the direction $h$ we obtain
$$
\partial_{u}\rho(u)[h]\,=\,\partial_{u}B(u)^{-1}(\o\cdot\partial_{\varphi }\alpha) [h] + B(u)^{-1}\big(\o\cdot\partial_{\varphi }(\partial_u\alpha(u)[h]) \big).
$$ By \eqref{stima derivata B inversa}, \eqref{stima alpha}, and \eqref{palla di sicurezza}, we get
\begin{equation}\label{stima primo pezzo derivata rho}
\Vert \partial_{u}B(u)^{-1}(\o\cdot\partial_{\varphi }\alpha) [h] \Vert_{s}
\leq \varepsilon \g_0 ^{-1} \, C(s) \big( \Vert h \Vert_{s + \tau_0 + {\mathfrak s}_0 + 3} + \Vert u \Vert_{s + \tau_0 + {\mathfrak s}_0 + 5} \Vert h \Vert_{ \tau_0 + {\mathfrak s}_0 + 3} \big).
\end{equation} Using \eqref{stima B e sua inversa}, \eqref{stima derivata alpha}, \eqref{palla di sicurezza},
and applying \eqref{interpolation estremi fine}, one has
\begin{equation}\label{stima secondo pezzo derivata rho}
\Vert B(u)^{-1}\big(\o\cdot\partial_{\varphi }(\partial_u\alpha(u)[h]) \big) \Vert_{s}
\leq \varepsilon \g_0 ^{-1} \, C(s) \big(\Vert h \Vert_{s + \tau_0 + {\mathfrak s}_0 + 4} + \Vert u \Vert_{s + \tau_0 + {\mathfrak s}_0 + 4}\Vert h \Vert_{ \tau_0 + {\mathfrak s}_0 + 4} \big)\,.
\end{equation} Then \eqref{stima primo pezzo derivata rho} and \eqref{stima secondo pezzo derivata rho} imply \eqref{stima derivata rho}, for all $s + \t_0 + {\mathfrak s}_0 + 5 \leq q$.
10. --- The coefficients $c_0$, $c_1$, $c_2$ defined in \eqref{coefficienti mL2} satisfy the following estimates: for $ i = 0,1,2 $, $ s \geq \mathfrak s_0 $, \begin{eqnarray} \Vert c_i \Vert_{s} & \leq & \varepsilon C(s) \big(1 + \Vert u \Vert_{s + \tau_0 + {\mathfrak s}_0 + 6} \big), \label{stime ci} \\ \Vert c_i \Vert_{s}^{\rm{Lip}(\g)} & \leq & \varepsilon C(s) \big(1 + \Vert u \Vert_{s + \tau_0 + {\mathfrak s}_0 + 7}^{\rm{Lip}(\g)} \big),
\label{stime Lip ci} \\ \Vert \partial_u c_i [h] \Vert_s & \leq & \varepsilon C(s ) \big(\Vert h \Vert_{s + \tau_0 + {\mathfrak s}_0 + 6} + \Vert u \Vert_{s + \tau_0 + {\mathfrak s}_0 + 7} \Vert h \Vert_{ \tau_0 + 2{\mathfrak s}_0 + 6} \big) \, . \label{stime derivate ci} \end{eqnarray} The definition of $c_i$ in \eqref{coefficienti mL2}, \eqref{mixed norms tame product}, \eqref{palla di sicurezza}, \eqref{stima B e sua inversa}, \eqref{stima rho}, \eqref{stime bi} and $\varepsilon \g_0^{-1} < 1 $, imply \eqref{stime ci}. Similarly \eqref{palla Lip di sicurezza}, \eqref{stima Lip B e sua inversa}, \eqref{stima Lip rho} and \eqref{stime bi lipschitz} imply \eqref{stime Lip ci}. Finally \eqref{stime derivate ci} follows from differentiating the formula of $c_i(u)$ and using \eqref{palla di sicurezza}, \eqref{stime bi}, \eqref{stima derivata B inversa}, \eqref{stima B e sua inversa},
\eqref{asymmetric tame product}-\eqref{mixed norms tame product}, \eqref{stima rho}, \eqref{stima derivata rho}.
{\bf Estimates in the Step 3.}
11. --- The function $v$ defined in \eqref{descent ordine zero} satisfies the following estimates: \begin{eqnarray} \Vert v - 1 \Vert_s & \leq & \varepsilon C(s)\big(1 + \Vert u \Vert_{s + \tau_0 + {\mathfrak s}_0 + 6} \big) \label{stima v} \\ \Vert v - 1 \Vert_{s}^{\rm{Lip}(\g)} & \leq & \varepsilon C(s) \big(1 + \Vert u \Vert_{s + \tau_0 + {\mathfrak s}_0 + 7}^{\rm{Lip}(\g)} \big) \label{stima Lip v} \\ \Vert \partial_u v[h] \Vert_s & \leq & \varepsilon C(s) \big(\Vert h \Vert_{s + \tau_0 + {\mathfrak s}_0 + 6} + \Vert u \Vert_{s + \tau_0 + {\mathfrak s}_0 + 7} \Vert h \Vert_{ \tau_0 + 2 {\mathfrak s}_0 + 6} \big) \label{stima derivata v} \end{eqnarray} In order to prove \eqref{stima v} we apply the Lemma \ref{lemma:composition of functions, Moser}$(i)$ with $f(t) := \exp(t) $ (and $ u = 0 $, $ p = 0 $): $$ \Vert v - 1 \Vert_s
= \Big\| f\Big(- \frac{\partial_y^{-1}c_2}{3 m_3}\Big) - f(0) \Big\|_s \stackrel{\eqref{stima mu3-1 Lip}} \leq C \Vert c_2 \Vert_s \stackrel{\eqref{stime ci}} \leq \varepsilon C(s)\big(1 + \Vert u \Vert_{s + \tau_0 + {\mathfrak s}_0 + 6} \big) \, . $$ Similarly \eqref{stima Lip v} follows. Differentiating formula \eqref{descent ordine zero} we get
$$
\partial_u v [h] = - f'\Big(- \frac{\partial_y^{-1}c_2}{3 m_3} \Big) \left\{\frac{1}{3 m_3}\partial_u \Big(\partial_y^{-1} c_2 \Big)[h] - \frac{\partial_y^{-1}c_2 \partial_u m_3[h]}{3 m_3^{2}} \right\} .
$$ Then using \eqref{palla di sicurezza}, \eqref{asymmetric tame product}, Lemma \ref{lemma:composition of functions, Moser}$(i)$ applied to $f' = f$, and the estimates \eqref{stime ci}, \eqref{stime derivate ci}, \eqref{stima mu3-1 Lip} and \eqref{stima derivata mu 3} we get \eqref{stima derivata v}.
12. --- The multiplication operator $ {\cal M} $ defined in \eqref{cambio3} and its inverse $ {\cal M} ^{-1}$ (which is the multiplication operator by $v^{-1}$) both satisfy \begin{align} \label{stima Phi}
\| {\cal M} ^{\pm 1} h \|_s
& \leq C(s) \big( \| h \|_s + \|u \|_{s + \tilde\s} \| h \|_{\mathfrak s_0} \big), \\ \label{stima Lip Phi}
\| {\cal M} ^{\pm 1} h \|_s^{\rm{Lip}(\g)}
& \leq C(s) \big( \| h \|_s^{\rm{Lip}(\g)} + \|u \|_{s + \tilde\sigma +1}^{\rm{Lip}(\g)} \| h \|_{\mathfrak s_0}^{\rm{Lip}(\g)} \big), \\ \label{stima derivata Phi}
\| \partial_u {\cal M} ^{\pm 1} (u) g [h] \|_s & \leq \varepsilon C(s) \big( \Vert g\Vert_{s} \Vert h \Vert_{\mathfrak s_0 + \tilde\s} + \Vert g \Vert_{\mathfrak s_0} \Vert h \Vert_{s + \tilde\s} + \Vert u \Vert_{s + \tilde\sigma +1} \Vert g\Vert_{\mathfrak s_0} \Vert h \Vert_{\mathfrak s_0 + \tilde\s} \big), \end{align} with $\tilde\sigma := \t_0 + {\mathfrak s}_0 + 6$.
The inequalities \eqref{stima Phi}-\eqref{stima derivata Phi} follow by \eqref{palla di sicurezza}, \eqref{palla Lip di sicurezza}, \eqref{asymmetric tame product}, \eqref{stima v}-\eqref{stima derivata v}.
13. --- The coefficients $d_1, d_0$, defined in \eqref{mL3}, satisfy, for $ i = 0,1 $ \begin{align}
\| d_i \|_s
& \leq \varepsilon C(s) (1 + \| u \|_{s + \t_0 + {\mathfrak s}_0 + 9}), \label{stima di} \\
\| d_i \|_s^{\rm{Lip}(\g)}
& \leq \varepsilon C(s) (1 + \| u \|_{s + \t_0 + {\mathfrak s}_0 + 10}^{\rm{Lip}(\g)}), \label{stima Lip di} \\
\| \partial_u d_i(u)[h] \|_s
& \leq \varepsilon C(s) \big(\| h \|_{s + \tau_0 + {\mathfrak s}_0 + 9}
+ \| u \|_{s + \tau_0 + {\mathfrak s}_0 + 10} \| h \|_{ \tau_0 +2 {\mathfrak s}_0 + 9} \big), \label{stima derivata di} \end{align} by \eqref{asymmetric tame product}, \eqref{palla di sicurezza}, \eqref{palla Lip di sicurezza}, \eqref{stime ci}-\eqref{stime derivate ci} and \eqref{stima v}-\eqref{stima derivata v}.
{\bf Estimates in the Step 4.}
14. --- The constant $m_1$ defined in \eqref{m-p} satisfies \begin{equation}\label{stime mu1} \vert m_1 \vert + \vert m_1 \vert^{\rm{Lip}(\g)} \leq \varepsilon C, \quad \vert \partial_u m_1(u)[h] \vert \leq \varepsilon C \Vert h \Vert_{ \tau_0 + 2 {\mathfrak s}_0 + 9} \, , \end{equation} by \eqref{palla Lip di sicurezza}, \eqref{stima di}-\eqref{stima derivata di}.
15. --- The function $p(\vartheta)$ defined in \eqref{def:p} satisfies the following estimates:
\begin{eqnarray}
\vert p \vert_{s, \infty} & \leq & \varepsilon\gamma_{0}^{-1} C(s) ( 1 + \Vert u \Vert_{s + 2\tau_0 + 2{\mathfrak s}_0 + 9})\label{stima p}\\
\vert p \vert_{s, \infty}^{\rm{Lip}(\g)} & \leq & \varepsilon\gamma_{0}^{-1} C(s) ( 1 + \Vert u \Vert_{s + 2\tau_0 + 2 {\mathfrak s}_0 + 10}^{\rm{Lip}(\g)})\label{stima Lip p}\\
\vert \partial_u p(u)[h] \vert_{s, \infty} & \leq & \varepsilon\gamma_{0}^{-1} C(s) \big(\Vert h \Vert_{s + 2\tau_0 + 2{\mathfrak s}_0 + 9} + \Vert u \Vert_{s + 2\tau_0 + 2{\mathfrak s}_0 + 10} \Vert h \Vert_{\tau_0 + 2{\mathfrak s}_0 + 9} \big).
\label{stima derivata p}
\end{eqnarray} which follow by \eqref{stima di}-\eqref{stima derivata di} and \eqref{stime mu1} applying the same argument used in the proof of \eqref{stima Lip alpha}.
16. --- The operators $ {\cal T} $, ${\cal T}^{-1} $ defined in \eqref{MM-1} satisfy \begin{eqnarray} \Vert {\cal T}^{\pm1} h \Vert_{s} & \leq & C(s) \big(\Vert h \Vert_s + \Vert u \Vert_{s + \bar\s} \Vert h \Vert_1 \big) \label{stima M e sua inversa} \\ \Vert {\cal T}^{\pm1} h \Vert_{s}^{\rm{Lip}(\g)} & \leq & C(s) \big(\Vert h \Vert_{s + 1}^{\rm{Lip}(\g)} + \Vert u \Vert_{s + \bar\sigma +1}^{\rm{Lip}(\g)} \Vert h \Vert_2^{\rm{Lip}(\g)} \big) \label{stima Lip M e sua inversa} \\ \Vert \partial_u({\cal T}^{\pm1} (u)g)[h] \Vert_s & \leq & \varepsilon\gamma_0^{-1} \, C(s)\big(\Vert g \Vert_{s+1}\Vert h \Vert_{\bar\s} + \Vert g \Vert_1\Vert h \Vert_{s + \bar\s} + \Vert u \Vert_{s + \bar\sigma + 1}\Vert g \Vert_2 \Vert h \Vert_{ \bar\s} \big), \label{stima derivata M e sua inversa} \end{eqnarray} with $ \bar\s:= 2\tau_0 + 2{\mathfrak s}_0 + 9 $. The estimates \eqref{stima M e sua inversa} and \eqref{stima Lip M e sua inversa} follow by \eqref{tame-cambio-di-variabile}, \eqref{tame-lipschitz-cambio-di-variabile} and using \eqref{stima p} and \eqref{stima Lip p}. The derivative $ \partial_u( {\cal T} (u)g)[h] $ is the product $ ({\cal T}(u) g_y)\,\partial_u p(u)[h]$. Hence \eqref{mixed norms tame product}, \eqref{stima M e sua inversa} and \eqref{stima derivata p} imply \eqref{stima derivata M e sua inversa}.
17. --- The coefficients $e_0$, $e_1$, defined in \eqref{e0 e1}, satisfy the following estimates: for $ i = 0, 1 $ \begin{eqnarray} \Vert e_i \Vert_s & \leq & \varepsilon C(s) (1 + \Vert u \Vert_{s + 2\tau_0 + 2{\mathfrak s}_0 + 9}), \label{stima ei} \\ \Vert e_i \Vert_s^{\rm{Lip}(\g)} & \leq & \varepsilon C(s) (1 + \Vert u \Vert_{s + 2\tau_0 + 2{\mathfrak s}_0 + 10}^{\rm{Lip}(\g)}), \label{stima Lip ei} \\ \Vert \partial_u e_i(u)[h] \Vert_s & \leq & \varepsilon C(s)\big(\Vert h \Vert_{s + 2\tau_0 + 2{\mathfrak s}_0 + 9} + \Vert u \Vert_{s + 2\tau_0 + 2{\mathfrak s}_0 + 10} \Vert h \Vert_{ 2\tau_0 + 2{\mathfrak s}_0 + 9} \big) \, . \label{stima derivata ei} \end{eqnarray} The estimates \eqref{stima ei}, \eqref{stima Lip ei} follow
by \eqref{palla di sicurezza}, \eqref{palla Lip di sicurezza}, \eqref{equazione omologica step 4}, \eqref{stima di}, \eqref{stima Lip di}, \eqref{stima M e sua inversa} and \eqref{stima Lip M e sua inversa}. The estimate \eqref{stima derivata ei} follows differentiating the formulae of $e_0$ and $e_1$ in \eqref{e0 e1}, and applying \eqref{stima di}, \eqref{stima derivata di}, \eqref{stima M e sua inversa} and \eqref{stima derivata M e sua inversa}.
{\bf Estimates in the Step 5.}
18. --- The function $w$ defined in \eqref{w} satisfies the following estimates: \begin{eqnarray} \Vert w \Vert_s & \leq & \varepsilon C(s) (1 + \Vert u \Vert_{s + 2\tau_0 + 2{\mathfrak s}_0 + 9})\label{stima w}\\ \Vert w \Vert_s^{\rm{Lip}(\g)} & \leq & \varepsilon C(s) (1 + \Vert u \Vert_{s + 2\tau_0 + 2{\mathfrak s}_0 + 10}^{\rm{Lip}(\g)})\label{stima Lip w}\\ \Vert \partial_u w(u)[h] \Vert_s & \leq & \varepsilon C(s)\big(\Vert h \Vert_{s + 2\tau_0 + 2{\mathfrak s}_0 + 9} + \Vert u \Vert_{s + 2\tau_0 + 2{\mathfrak s}_0 + 10} \Vert h \Vert_{ 2\tau_0 + 2{\mathfrak s}_0 + 9} \big) \label{stima derivata w} \end{eqnarray} which follow by \eqref{stima mu3-1 Lip}, \eqref{stima derivata mu 3}, \eqref{stime mu1}, \eqref{stima ei}-\eqref{stima derivata ei}, \eqref{palla di sicurezza}, \eqref{palla Lip di sicurezza}.
19. --- The operator $\mathcal{S} = I + w \partial_x^{-1}$, defined in \eqref{mS}, and its inverse $\mathcal{S}^{-1}$ both satisfy the following estimates
(where the $s$-decay norm $| \cdot |_s $ is defined in \eqref{matrix decay norm}): \begin{eqnarray}
| \mathcal{S}^{\pm1} - I |_s & \leq & \varepsilon C(s)(1 + \Vert u \Vert_{s + 2\tau_0 + 2{\mathfrak s}_0 + 9}), \label{stima mS} \\
| \mathcal{S}^{\pm1} - I |_{s}^{\rm{Lip}(\g)} & \leq & \varepsilon C(s)(1 + \Vert u \Vert_{s + 2\tau_0 + 2{\mathfrak s}_0 + 10}^{\rm{Lip}(\g)}), \label{stima Lip mS} \\
\big| \partial_{u}\mathcal{S}^{\pm1}(u) [h]\big|_{s} & \leq & \varepsilon C(s) \big(\Vert h \Vert_{s + 2\tau_0 + 2{\mathfrak s}_0 + 9} + \Vert u \Vert_{s + 2\tau_0 + 2{\mathfrak s}_0 + 10} \Vert h \Vert_{ 2\tau_0 + 3{\mathfrak s}_0 + 9} \big). \label{stima derivata mS} \end{eqnarray} Thus \eqref{stima mS}-\eqref{stima derivata mS} for $\mathcal{S}$ follow by \eqref{stima w}-\eqref{stima derivata w} and the fact that
the matrix decay norm $| \partial_x^{-1} |_s \leq 1$, $s \geq 0$, using \eqref{multiplication}, \eqref{multiplication Lip}, \eqref{algebra}, \eqref{algebra Lip}. The operator $\mathcal{S}^{-1}$ satisfies the same bounds \eqref{stima mS}-\eqref{stima Lip mS} by Lemma \ref{lem:inverti}, which may be applied thanks to \eqref{stima mS}, \eqref{palla di sicurezza}, \eqref{palla Lip di sicurezza} and $\varepsilon$ small enough.
Finally \eqref{stima derivata mS} for $\mathcal{S}^{-1}$ follows by $$ \partial_u \mathcal{S}^{-1}(u) [h] = - \mathcal{S}^{-1}(u) \, \partial_u \mathcal{S}(u)[h] \, \mathcal{S}^{-1}(u) \, , $$ and \eqref{interpm}, \eqref{stima mS} for $\mathcal{S}^{-1}$, and \eqref{stima derivata mS} for $\mathcal{S}$.
20. --- The operator $\mathcal{R}$, defined in \eqref{mL5}, where $r_0$, $r_{-1}$ are defined in \eqref{r0}, \eqref{r-1},
satisfies the following estimates: \begin{eqnarray}
\big| \mathcal{R} \big|_s & \leq & \varepsilon C(s)(1 + \Vert u \Vert_{s + 2\tau_0 + 2{\mathfrak s}_0 + 12})\label{stima mR}\\
\big| \mathcal{R} \big|_s^{\rm{Lip}(\g)}& \leq & \varepsilon C(s)(1 + \Vert u \Vert_{s + 2\tau_0 + 2{\mathfrak s}_0 + 13}^{\rm{Lip}(\g)})\label{stima Lip mR}\\
\big| \partial_u \mathcal{R}(u)[h] \big|_s & \leq & \varepsilon C(s)\big(\Vert h \Vert_{s + 2\tau_0 + 2{\mathfrak s}_0 + 12} + \Vert u \Vert_{s + 2\tau_0 + 2{\mathfrak s}_0 + 13} \Vert h \Vert_{2\tau_0 + 3{\mathfrak s}_0 +12} \big). \label{stima derivata mR} \end{eqnarray} Let $T := r_0 + r_{-1}\partial_x^{-1}$. By \eqref{multiplication}, \eqref{multiplication Lip}, \eqref{asymmetric tame product}, \eqref{stima w}, \eqref{stima Lip w}, \eqref{stima ei}, \eqref{stima Lip ei}, \eqref{stime mu1}, \eqref{stima mu3-1 Lip}, and using the trivial fact that
$|\partial_x^{-1}|_s \leq 1$ and $|\pi_0|_s \leq 1$ for all $s \geq 0$, we get \begin{eqnarray}
\big| T \big|_{s} & \leq & \varepsilon C(s)(1 + \Vert u \Vert_{s + 2\tau_0 + 2{\mathfrak s}_0 + 12})\label{stima T}\\
\big|T \big|_{s}^{\rm{Lip}(\g)} & \leq & \varepsilon C(s)(1 + \Vert u \Vert_{s + 2\tau_0 + 2{\mathfrak s}_0 + 13}^{\rm{Lip}(\g)}). \label{stima Lip T} \end{eqnarray} Differentiating $T$ with respect to $u$, and using \eqref{multiplication}, \eqref{asymmetric tame product}, \eqref{stima derivata w}, \eqref{stima derivata ei}, \eqref{stime mu1}, \eqref{stima mu3-1 Lip} and \eqref{stima derivata mu 3}, one has \begin{equation}\label{stima derivata T}
\big| \partial_u T(u)[h] \big|_s \leq \varepsilon C(s)\big(\Vert h \Vert_{s + 2\tau_0 + 2{\mathfrak s}_0 + 12} + \Vert u \Vert_{s + 2\tau_0 + 2{\mathfrak s}_0 +13} \Vert h \Vert_{ 2\tau_0 + 3{\mathfrak s}_0 +12} \big). \end{equation} Finally \eqref{interpm}, \eqref{interpm Lip}, \eqref{stima mS}-\eqref{stima derivata mS}, \eqref{stima T}-\eqref{stima derivata T} imply the estimates \eqref{stima mR}-\eqref{stima derivata mR}.
21. --- Using Lemma \ref{lemma astratto composizioni}, \eqref{palla di sicurezza} and all the previous estimates on ${\cal A}, B, \rho, {\cal M} , {\cal T}, \mathcal{S}$, the operators $\Phi_1 = {\cal A} B \rho {\cal M} {\cal T} \mathcal{S}$ and $\Phi_2 = {\cal A} B {\cal M} {\cal T} \mathcal{S} $, defined in \eqref{Phi 1 2 def}, satisfy \eqref{stima Phi 12 nel lemma} (note that $ \sigma > 2\t_0 + 2{\mathfrak s}_0 + 9$). Finally, if the condition \eqref{palla Lip di sicurezza} holds, we get the estimate \eqref{stima Lip Phi 12 nel lemma}.
The other estimates \eqref{coefficienti costanti 1}-\eqref{stima R 3} follow by
\eqref{stima mu3-1 Lip}, \eqref{stima derivata mu 3}, \eqref{stime mu1}, \eqref{stima mR}-\eqref{stima derivata mR}. The proof of Lemma \ref{lemma:mostro} is complete.
\noindent \textbf{Proof of Lemma \ref{lemma:stime stabilita Phi 12}.} For each fixed $\varphi \in \mathbb T^\nu$, ${\cal A}(\varphi)h(x) := h(x+\b(\varphi,x))$. Apply \eqref{tame-cambio-di-variabile} to the change of variable $\mathbb T \to \mathbb T$, $x \mapsto x + \b(\varphi,x)$: \[
\| {\cal A}(\varphi) h \|_{H^s_x}
\leq C(s) \big( \| h \|_{H^s_x} + | \b(\varphi, \cdot) |_{W^{s, \infty}(\mathbb T)} \| h \|_{H^1_x} \big). \]
Since $| \b(\varphi, \cdot) |_{W^{s, \infty}(\mathbb T)} \leq | \beta |_{s, \infty}$ for all $\varphi \in \mathbb T^\nu$, by \eqref{stima beta} we deduce \eqref{A(ph)}. Using \eqref{cambio di variabile meno identita}, \eqref{palla di sicurezza}, and \eqref{stima beta}, \[
\| ({\cal A}(\varphi) - I) h \|_{H^s_x}
\leq_s | \beta |_{L^\infty} \| h \|_{H^{s+1}_x}
+ |\b|_{s,\infty} \| h \|_{H^2_x} \leq_s
\varepsilon \big( \| h \|_{H^{s+1}_x} + \| u \|_{s + \mathfrak s_0 + 3} \| h \|_{H^2_x} \big). \] By \eqref{stima beta tilde}, estimates \eqref{A(ph)} and \eqref{A(ph)-I} also hold for ${\cal A}(\varphi)^{-1} = {\cal A}^{-1}(\varphi)$ $: h(y) \mapsto h(y + \tilde \b(\varphi,y))$.
The multiplication operator $ {\cal M} (\varphi) : H^s_x \to H^s_x$, $\, {\cal M} (\varphi) h := v(\varphi, \cdot) h$ satisfies \begin{multline}
\| ({\cal M} (\varphi) - I)h \|_{H^s_x}
= \| (v(\varphi, \cdot) - 1) h \|_{H^s_x} \leq_s
\| v(\varphi, \cdot) - 1 \|_{H^s_x} \| h \|_{H^1_x}
+ \| v(\varphi, \cdot) - 1 \|_{H^1_x} \| h \|_{H^s_x} \\ \leq_s
\| v-1 \|_{s + \mathfrak s_0} \| h \|_{H^1_x}
+ \| v-1 \|_{1 + \mathfrak s_0} \| h \|_{H^s_x} \leq_s
\varepsilon \big( \| h \|_{H^s_x} + \| u \|_{s + \tau_0 + 2 {\mathfrak s}_0 + 6} \| h \|_{H^1_x} \big) \label{Phi(ph)-I} \end{multline} by \eqref{asymmetric tame product}, \eqref{multiplication}, Lemma \ref{Aphispace}, \eqref{stima v} and \eqref{palla di sicurezza}. The same estimate also holds for $ {\cal M} (\varphi)^{-1} = {\cal M} ^{-1}(\varphi)$, which is the multiplication operator by $v^{-1}(\varphi,\cdot)$. The operators $ {\cal T}^{\pm 1}(\varphi)h(x) = h(x \pm p(\varphi))$ satisfy \begin{equation} \label{mM(ph)-I}
\| {\cal T}^{\pm 1}(\varphi) h \|_{H^s_x} = \| h \|_{H^s_x}, \quad
\| ({\cal T}^{\pm 1}(\varphi) - I) h \|_{H^s_x} \leq \varepsilon \g_0^{-1} C \| h \|_{H^{s+1}_x}, \end{equation} by \eqref{cambio di variabile meno identita}, \eqref{palla di sicurezza}, \eqref{stima p} and by the fact that $p(\varphi)$ is independent of the space variable.
By \eqref{interpolazione norme miste}, \eqref{stima mS}, \eqref{palla di sicurezza} and Lemma \ref{Aphispace}, the operator $\mathcal{S}(\varphi) = I + w(\varphi,\cdot) \partial_x^{-1}$ and its inverse satisfy \begin{equation} \label{mS(ph)-I}
\| (\mathcal{S}^{\pm 1}(\varphi) - I)h \|_{H^s_x} \leq_s \varepsilon
\big( \| h \|_{H^s_x} + \| u \|_{s + 2\tau_0 + 3{\mathfrak s}_0 + 9} \| h \|_{H^1_x} \big). \end{equation}
Collecting estimates \eqref{Phi(ph)-I}, \eqref{mM(ph)-I}, \eqref{mS(ph)-I} we get \eqref{Phi mM mS (ph)} and \eqref{Phi mM mS (ph) - I}. Lemma \ref{lemma:stime stabilita Phi 12} is proved.
\noindent Massimiliano Berti, Pietro Baldi, Dipartimento di Matematica e Applicazioni ``R. Caccioppoli", Universit\`a degli Studi Napoli Federico II, Via Cintia, Monte S. Angelo, I-80126, Napoli, Italy, {\tt m.berti@unina.it}, {\tt pietro.baldi@unina.it}. \\[1mm] \noindent Riccardo Montalto, SISSA, Via Bonomea 265, 34136, Trieste, Italy, {\tt riccardo.montalto@sissa.it}. \\[2mm] \indent This research was supported by the European Research Council under FP7 and partially by the PRIN2009 grant ``Critical Point Theory and Perturbative Methods for Nonlinear Differential Equations".
\end{document}
"id": "1211.6672.tex",
"language_detection_score": 0.4165472388267517,
"char_num_after_normalized": null,
"contain_at_least_two_stop_words": null,
"ellipsis_line_ratio": null,
"idx": null,
"lines_start_with_bullet_point_ratio": null,
"mean_length_of_alpha_words": null,
"non_alphabetical_char_ratio": null,
"path": null,
"symbols_to_words_ratio": null,
"uppercase_word_ratio": null,
"type": null,
"book_name": null,
"mimetype": null,
"page_index": null,
"page_path": null,
"page_title": null
} | arXiv/math_arXiv_v0.2.jsonl | null | null |
\begin{document}
\title{\sc Regular integers modulo $n$} \author{{\bf L\'aszl\'o T\'oth} (P\'ecs, Hungary)} \date{} \maketitle
\centerline{Annales Univ. Sci. Budapest., Sect. Comp., {\bf 29} (2008), 263-275} \vskip4mm
\vskip1mm \centerline{\it Dedicated to Professor Imre K\'atai on his 70th birthday} \vskip1mm
\begin{abstract} Let $n=p_1^{\nu_1}\cdots p_r^{\nu_r} >1$ be an integer. An integer $a$ is called regular (mod $n$) if there is an integer $x$ such that $a^2x\equiv a$ (mod $n$). Let $\varrho(n)$ denote the number of regular integers $a$ (mod $n$) such that $1\le a\le n$. Here $\varrho(n)=(\phi(p_1^{\nu_1})+1)\cdots (\phi(p_r^{\nu_r})+1)$, where $\phi(n)$ is the Euler function. In this paper we first summarize some basic properties of regular integers (mod $n$). Then in order to compare the rates of growth of the functions $\varrho(n)$ and $\phi(n)$ we investigate the average orders and the extremal orders of the functions $\varrho(n)/\phi(n)$, $\phi(n)/\varrho(n)$ and $1/\varrho(n)$. \end{abstract}
{\it Mathematics Subject Classification}: 11A25, 11N37
{\it Key Words and Phrases}: regular integers (mod $n$), unitary divisor, Euler's function, average order, extremal order
\vskip1mm {\bf 1. Introduction}
\vskip1mm Let $n>1$ be an integer. Consider the integers $a$ for which there exists an integer $x$ such that $a^2x\equiv a$ (mod $n$). In the background of this property is the notion of a regular ring element: an element $a$ of a ring $R$ is said to be regular (following J. von Neumann) if there is an $x\in R$ such that $a=axa$. In the case of the ring $\mathds{Z}_n$ this is exactly the condition above.
Properties of these integers were investigated by J. Morgado \cite{M72}, \cite{M74}, who called them regular (mod $n$). In a recent paper O. Alkam and E. A. Osba \cite{AO}, using ring-theoretic considerations, rediscovered some of the statements proved elementarily by J. Morgado. It was observed in \cite{M72}, \cite{M74} that $a>1$ is regular (mod $n$) if and only if the gcd $(a,n)$ is a unitary divisor of $n$. We recall that $d$ is said to be a unitary divisor of $n$ if $d\mid n$ and gcd $(d,n/d)=1$, notation $d \mid \mid n$.
These integers occur in the literature also in another context. An integer $a$ is said to possess a weak order (mod $n$) if there exists an integer $k\ge 1$ such that $a^{k+1} \equiv a$ (mod $n$). Then the weak order of $a$ is the smallest $k$ with this property, see \cite{J81}, \cite{F}. It turns out that $a$ is regular (mod $n$) if and only if $a$ possesses a weak order (mod $n$).
Let $\operatorname{Reg}_n=\{a: 1\le a\le n$, $a$ is regular (mod $n$)$\}$ and let $\varrho(n)=\# \operatorname{Reg}_n$ denote the number of regular integers $a$ (mod $n$) such that $1\le a\le n$. This function is multiplicative and $\varrho(p^{\nu})=\phi(p^{\nu})+1= p^{\nu}-p^{\nu-1}+1$ for every prime power $p^{\nu}$ ($\nu \ge 1$), where $\phi$ is the Euler function. Consequently, $\displaystyle \varrho(n)=\sum_{d \mid \mid n} \phi(d)$ for every $n\ge 1$, also $\phi(n)< \varrho(n)\le n$ for every $n>1$, and $\varrho(n)= n$ if and only if $n$ is squarefree, see \cite{M72}, \cite{J81}, \cite{AO}.
Let us compare the functions $\varrho(n)$ and $\phi(n)$. The first few values of $\varrho(n)$ and $\phi(n)$ are given by the next tables ($\varrho(n)$ is sequence $A055653$ in Sloane's On-Line Encyclopedia of Integer Sequences \cite{S}). Note that $\varrho(n)$ is even iff $n\equiv 2$ (mod $4$), and $\sqrt{n}\le \varrho(n)\le n$ for every $n\ge 1$, see \cite{AO}.
$$
\begin{array}{c||c|c|c|c|c|c|c|c|c|c|c|c|c|c|c}
n & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 \\
\hline\hline
\varrho(n) & 1 & 2 & 3 & 3 & 5 & 6 & 7 & 5 & 7 & 10 & 11 & 9 & 13 & 14 & 15 \\
\hline\hline
\phi(n) & 1 & 1 & 2 & 2 & 4 & 2 & 6 & 4 & 6 & 4 & 10 & 4 & 12 & 6 & 8
\end{array}
$$
$$
\begin{array}{c||c|c|c|c|c|c|c|c|c|c|c|c|c|c|c}
n & 16 & 17 & 18 & 19 & 20 & 21 & 22 & 23 & 24 & 25 & 26 & 27 & 28 & 29 & 30 \\
\hline\hline
\varrho(n) & 9 & 17 & 14 & 19 & 15 & 21 & 22 & 23 & 15 & 21 & 26 & 19 & 21 & 29 & 30 \\
\hline\hline
\phi(n) & 8 & 16 & 6 & 18 & 8 & 12 & 10 & 22 & 8 & 20 & 12 & 18 & 12 & 28 & 8
\end{array}
$$
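The table values, the parity criterion and the bounds $\sqrt{n}\le \varrho(n)\le n$ can be cross-checked numerically. The following Python sketch (an added illustration, not part of the original paper; the helper names are ours) computes $\varrho(n)$ directly from the unitary-divisor representation $\varrho(n)=\sum_{d\mid\mid n}\phi(d)$ quoted above.

```python
from math import gcd

def phi(n):
    """Euler's totient, by direct count (adequate for small n)."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def rho(n):
    """varrho(n) = sum of phi(d) over the unitary divisors d || n."""
    return sum(phi(d) for d in range(1, n + 1)
               if n % d == 0 and gcd(d, n // d) == 1)

# First table row: varrho(1), ..., varrho(15)
assert [rho(n) for n in range(1, 16)] == [1, 2, 3, 3, 5, 6, 7, 5, 7, 10, 11, 9, 13, 14, 15]

# varrho(n) is even iff n = 2 (mod 4), and sqrt(n) <= varrho(n) <= n
for n in range(1, 101):
    assert (rho(n) % 2 == 0) == (n % 4 == 2)
    assert rho(n) ** 2 >= n and rho(n) <= n
```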
Figure 1 is a plot of the function $\varrho(n)$ for $1\le n\le 10\, 000$.
{\mapleplot{Fig101.eps}}
For the Euler $\phi$-function, \[ \lim_{x\to \infty} \frac1{x^2} \sum_{n\le x} \phi(n) = \frac{3}{\pi^2} \approx 0.3039. \]
The average order of the function $\varrho(n)$ was considered in \cite{J81}, \cite{F}. One has \[ \lim_{x\to \infty} \frac1{x^2} \sum_{n\le x} \varrho(n) =\frac1{2} A
\approx 0.4407, \] where \[ A=\prod_p \left(1-\frac1{p^2(p+1)}\right)=\zeta(2)\prod_p \left(1-\frac1{p^2}-\frac1{p^3}+\frac1{p^4}\right)\approx 0.8815 \] is the so-called quadratic class-number constant. For its numerical evaluation see \cite{NM}.
More exactly, \[ \sum_{n\le x} \varrho(n)=\frac1{2}Ax^2+ R(x), \] where $R(x)=O(x\log^3 x)$, as shown in \cite{J81} using elementary arguments. This was improved to $R(x)=O(x\log^2 x)$ in \cite{Y86}, and to $R(x)=O(x\log x)$ in \cite{HS92}, using analytic methods. Also, $R(x)=\Omega_{\pm}(x\sqrt{\log \log x})$, see \cite{HS92}.
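Both the constant $A$ and the mean value $\tfrac12 A\approx 0.4407$ can be approximated numerically. The sketch below is an added illustration, not taken from the cited papers: it truncates the Euler product at $10^4$ and sums $\varrho(n)$ up to $N=20000$ via a standard smallest-prime-factor sieve together with $\varrho(p^{\nu})=p^{\nu}-p^{\nu-1}+1$.

```python
def primes_upto(n):
    """Simple sieve of Eratosthenes."""
    is_p = [True] * (n + 1)
    is_p[0] = is_p[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_p[p]:
            for m in range(p * p, n + 1, p):
                is_p[m] = False
    return [p for p in range(2, n + 1) if is_p[p]]

# Euler product A = prod_p (1 - 1/(p^2 (p+1))), truncated at 10^4
A = 1.0
for p in primes_upto(10_000):
    A *= 1.0 - 1.0 / (p * p * (p + 1))

def rho_table(N):
    """varrho(1..N) via a smallest-prime-factor sieve."""
    spf = list(range(N + 1))
    for p in range(2, int(N ** 0.5) + 1):
        if spf[p] == p:
            for m in range(p * p, N + 1, p):
                if spf[m] == m:
                    spf[m] = p
    rho = [0] * (N + 1)
    rho[1] = 1
    for n in range(2, N + 1):
        p, m, pv = spf[n], n, 1
        while m % p == 0:        # strip the full power p^v dividing n
            m //= p
            pv *= p
        rho[n] = rho[m] * (pv - pv // p + 1)   # varrho(p^v) = p^v - p^(v-1) + 1
    return rho

N = 20_000
mean = sum(rho_table(N)[1:]) / N ** 2   # should be close to A/2 = 0.4407...
```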
In this paper we first summarize some basic properties of regular integers (mod $n$). We give also their direct proofs, because the proofs of \cite{M72}, \cite{M74} are lengthy and those of \cite{AO} are ring theoretical.
Then in order to compare the rates of growth of the functions $\varrho(n)$ and $\phi(n)$ we investigate the average orders and the extremal orders of the functions $\varrho(n)/\phi(n)$, $\phi(n)/\varrho(n)$ and $1/\varrho(n)$. The study of the minimal order of $\varrho(n)$ was initiated in \cite{AO}.
\vskip1mm {\bf 2. Characterization of regular integers (mod $n$)}
\vskip1mm The integer $a=0$ and those coprime to $n$ are regular (mod $n$) for each $n>1$. If $a\equiv b$ (mod $n$), then $a$ and $b$ are regular (mod $n$) simultaneously. If $a$ and $b$ are regular (mod $n$), then $ab$ is also regular (mod $n$).
In what follows let $n>1$ be of canonical form $n=p_1^{\nu_1}\cdots p_r^{\nu_r}$.
\vskip1mm {\bf Theorem 1.} {\it For an integer $a\ge 1$ the following assertions are equivalent:
i) $a$ is regular (mod $n$),
ii) for every $i\in \{1,...,r\}$ either $p_i\nmid a$ or $p_i^{\nu_i}\mid a$,
iii) $(a,n)=(a^2,n)$,
iv) $(a,n) \mid \mid n$,
v) $\displaystyle a^{\phi(n)+1}\equiv a$ (mod $n$),
vi) there exists an integer $k\ge 1$ such that $a^{k+1} \equiv a$ (mod $n$).}
\vskip1mm {\bf Proof.} i) $\Rightarrow$ ii). If $a^2x\equiv a$ (mod $n$) for an integer $x$, then $a(ax-1)\equiv 0$ (mod $p_i^{\nu_i}$) for every $i$. We have two cases: $p_i\nmid a$ and $p_i\mid a$. In the second case, since $(a,ax-1)=1$, we obtain that $a \equiv 0$ (mod $p_i^{\nu_i}$).
ii) $\Rightarrow$ i). If $p_i^{\nu_i}\mid a$, then $a^2x \equiv a$ (mod $p_i^{\nu_i}$) for any $x$. If $p_i\nmid a$, then the linear congruence $ax\equiv 1$ (mod $p_i^{\nu_i}$) has solutions in $x$ and we also obtain $a^2x \equiv a$ (mod $p_i^{\nu_i}$).
ii) $\Leftrightarrow$ iii). Follows at once by the property of the gcd.
ii) $\Leftrightarrow$ iv). Follows at once by the definition of the unitary divisors (the unitary divisors of a prime power $p^\nu$ are $1$ and $p^\nu$).
ii) $\Rightarrow$ v) (\cite{AO}) If $p_i^{\nu_i}\mid a$, then $a^{\phi(n)+1} \equiv a$ (mod $p_i^{\nu_i}$). If $p_i\nmid a$, then using Euler's theorem, $a^{\phi(n)+1}\equiv a(a^{\phi(p_i^{\nu_i})})^{\phi(n)/\phi(p_i^{\nu_i})} \equiv a$ (mod $p_i^{\nu_i})$. Therefore $a^{\phi(n)+1}\equiv a$ (mod $p_i^{\nu_i}$) for every $i$ and $a^{\phi(n)+1}\equiv a$ (mod $n$).
v) $\Rightarrow$ i) (\cite{AO}) If $a^{\phi(n)+1}\equiv a$ (mod $n$), then $a^2 a^{\phi(n)-1}\equiv a$ (mod $n$), hence $a^2x\equiv a$ (mod $n$) is verified for $x=a^{\phi(n)-1}$ (which is the von Neumann inverse of $a$ in $\mathds Z_n$).
v) $\Rightarrow$ vi) Immediate by taking $k=\phi(n)$.
vi) $\Rightarrow$ i) If $a^{k+1}\equiv a$ (mod $n$) for an integer $k\ge 1$, then $a^2x\equiv a$ (mod $n$) holds for $x=a^{k-1}$, finishing the proof.
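For small moduli the equivalences of Theorem 1 can be verified by brute force. The following Python sketch (an added illustration; the helper names are ours, not from the paper) checks that the defining condition i) agrees with conditions iii), iv) and v).

```python
from math import gcd

def is_regular(a, n):
    """Definition i): a is regular (mod n) iff a^2 x = a (mod n) for some x."""
    return any((a * a * x - a) % n == 0 for x in range(n))

def phi(n):
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

for n in range(2, 50):
    f = phi(n)
    for a in range(1, n + 1):
        d = gcd(a, n)
        iii = (d == gcd(a * a, n))           # iii)  (a, n) = (a^2, n)
        iv = (gcd(d, n // d) == 1)           # iv)   (a, n) is a unitary divisor of n
        v = (pow(a, f + 1, n) == a % n)      # v)    a^(phi(n)+1) = a (mod n)
        assert is_regular(a, n) == iii == iv == v
```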
\vskip1mm Note that the proof of i) $\Leftrightarrow$ v) given in \cite{M74} uses Dirichlet's theorem on arithmetic progressions, which is unnecessary.
\vskip1mm {\bf Theorem 2.} {\it The function $\varrho(n)$ is multiplicative and $\varrho (p^\nu)=p^\nu-p^{\nu-1}+1$ for every prime power $p^\nu$ ($\nu \ge 1$). For every $n\ge 1$, \[ \varrho(n)=\sum_{d \mid \mid n} \phi(d). \]}
\vskip1mm {\bf Proof.} By Theorem 1, $a$ is regular (mod $n$) iff for every $i\in \{1,...,r\}$ either $p_i\nmid a$ or $p_i^{\nu_i}\mid a$.
Let $a\in \operatorname{Reg}_n$. If $p_i\nmid a$ for every $i$, then $(a,n)=1$; the number of these integers $a$ is $\phi(n)$. Suppose that $p_i^{\nu_i}\mid a$ for exactly one value $i$ and that for all $j\ne i$, $(p_j,a)=1$. Then $a=bp_i^{\nu_i}$, where $1\le b\le n/p_i^{\nu_i}$ and $(b, n/p_i^{\nu_i})=1$. The number of such integers $a$ is $\phi(n/p_i^{\nu_i})$. Now suppose that $p_i^{\nu_i}\mid a$, $p_j^{\nu_j}\mid a$, $i<j$, and for all $k\ne i$, $k\ne j$, $(p_k,a)=1$. Then $a=cp_i^{\nu_i}p_j^{\nu_j}$, where $1\le c\le n/(p_i^{\nu_i}p_j^{\nu_j})$ and $(c, n/(p_i^{\nu_i}p_j^{\nu_j}))=1$. The number of such integers $a$ is $\phi(n/(p_i^{\nu_i}p_j^{\nu_j}))$, etc. We obtain \[ \varrho(n) = \phi(n)+\sum_{1\le i\le r} \phi(n/p_i^{\nu_i})+ \sum_{1\le i<j\le r} \phi(n/(p_i^{\nu_i}p_j^{\nu_j}))+...+ \phi(n/(p_1^{\nu_1}\cdots p_r^{\nu_r})). \]
Let $y_i=\phi(p_i^{\nu_i})$, $1\le i\le r$, and $y=y_1\cdots y_r$. Then $\phi(n)=y$ and \[ \varrho(n) = y+ \sum_{1\le i\le r} \frac{y}{y_i} + \sum_{1\le i<j\le r} \frac{y}{y_iy_j}+...+ \frac{y}{y_1\cdots y_r} = \] \[ =(y_1+1)\cdots (y_r+1)= (\phi(p_1^{\nu_1})+1)\cdots (\phi(p_r^{\nu_r})+1). \]
The given representation of $\varrho(n)$ now follows at once taking into account that the unitary convolution preserves the multiplicativity of functions, see for example \cite{Mc86}.
Another method, see \cite{M72}: Group the integers $a\in \{ 1,2,...,n\}$ according to the value $(a,n)$. Here $(a,n)=d$ if and only if $(j,n/d)=1$, where $a=jd$, $1\le j\le n/d$, hence the number of integers $a$ with $(a,n)=d$ is $\phi(n/d)$. According to Theorem 1, $a$ is regular (mod $n$) if and only if $d=(a,n)\mid \mid n$, and we obtain that \[ \varrho(n)=\sum_{d \mid \mid n} \phi(n/d)= \sum_{d \mid \mid n} \phi(d). \]
Now the multiplicativity of $\varrho(n)$ is a direct consequence of this representation.
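The characterization in Theorem 1 and the formulas of Theorem 2 are easy to cross-check numerically. The following Python sketch (an illustration of mine, not part of the paper, which uses Maple) computes $\varrho(n)$ three ways: by the product over prime powers, by counting the regular integers directly via the Theorem 1 criterion, and by the unitary-divisor sum.

```python
from math import gcd

def factorize(n):
    """Return the prime factorization of n as a list of (p, nu) pairs."""
    f, p = [], 2
    while p * p <= n:
        if n % p == 0:
            nu = 0
            while n % p == 0:
                n //= p
                nu += 1
            f.append((p, nu))
        p += 1
    if n > 1:
        f.append((n, 1))
    return f

def rho(n):
    """rho(n) = prod over p^nu || n of (p^nu - p^(nu-1) + 1)  (Theorem 2)."""
    r = 1
    for p, nu in factorize(n):
        r *= p**nu - p**(nu - 1) + 1
    return r

def rho_direct(n):
    """Count a in {1,...,n} that are regular (mod n): by Theorem 1,
    for every prime power p^nu exactly dividing n, either p does not
    divide a or p^nu divides a."""
    fac = factorize(n)
    return sum(1 for a in range(1, n + 1)
               if all(a % p != 0 or a % p**nu == 0 for p, nu in fac))

def phi(n):
    """Euler's totient, via the factorization."""
    r = 1
    for p, nu in factorize(n):
        r *= p**nu - p**(nu - 1)
    return r

def rho_unitary(n):
    """rho(n) = sum of phi(d) over unitary divisors d || n."""
    return sum(phi(d) for d in range(1, n + 1)
               if n % d == 0 and gcd(d, n // d) == 1)
```

All three agree, e.g. $\varrho(12)=3\cdot 3=9$ and $\varrho(8)=5$.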
\vskip1mm Let $S(n)$ denote the sum of regular integers $a\in \operatorname{Reg}_n$. We give a simple formula for $S(n)$, not considered in the cited papers, which is analogous to $\sum_{1\le a\le n, (a,n)=1} a =n\phi(n)/2$ ($n>1$).
\vskip1mm {\bf Theorem 3.} {\it For every $n\ge 1$, \[ S(n)=\frac{n(\varrho(n)+1)}{2}. \]}
\vskip1mm {\bf Proof.} Similar to the counting procedure above, or by grouping the integers $a\in \{ 1,2,...,n\}$ according to the value $(a,n)$: \[ S(n)= \sum_{a\in \operatorname{Reg}_n} a = \sum_{d\mid \mid n} \sum_{\substack{a\in \operatorname{Reg}_n \\ (a,n)=d}} a = \sum_{d\mid \mid n} d \sum_{\substack{j=1\\(j,n/d)=1}}^{n/d} j= \] \[ =n+ \sum_{\substack{d\mid \mid n\\ d<n}} d\frac{n\phi(n/d)}{2d} = n+ \frac{n}{2} \sum_{\substack{d\mid \mid n\\ d<n}} \phi(n/d)= \frac{n(\varrho(n)+1)}{2}. \]
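Theorem 3 can also be verified by brute force. The short Python sketch below (mine, for illustration) uses the standard solvability criterion for linear congruences: $a^2x\equiv a \pmod n$ has a solution iff $\gcd(a^2,n)\mid a$.

```python
from math import gcd

def is_regular(a, n):
    """a is regular mod n iff a^2 x = a (mod n) is solvable,
    i.e. iff gcd(a^2, n) divides a."""
    return a % gcd(a * a, n) == 0

def rho(n):
    """Number of regular integers a in {1,...,n}."""
    return sum(1 for a in range(1, n + 1) if is_regular(a, n))

def S(n):
    """Sum of the regular integers a in {1,...,n}."""
    return sum(a for a in range(1, n + 1) if is_regular(a, n))
```

For instance $S(4)=1+3+4=8=4(\varrho(4)+1)/2$ with $\varrho(4)=3$.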
\vskip1mm {\bf 3. Average orders}
\vskip1mm {\bf Theorem 4.} {\it For the quotient $\varrho(n)/\phi(n)$ we have \[ \sum_{n\le x} \frac{\varrho(n)}{\phi(n)} = B x + O(\log^2 x),\] where $B=\pi^2/6\approx 1.6449$. }
\vskip1mm {\bf Proof.} By Theorem 2, $\varrho(p^{\nu})/\phi(p^{\nu}) = 1+1/\phi(p^{\nu})$ for every prime power $p^{\nu}$ ($\nu \ge 1$). Hence, taking into account the multiplicativity, for every $n\ge 1$, \[ \frac{\varrho(n)}{\phi(n)} =\sum_{d\mid \mid n} \frac1{\phi(d)}. \]
Using this representation (given also in \cite{AO}) we obtain \[ \sum_{n\le x} \frac{\varrho(n)}{\phi(n)} =\sum_{\substack{de\le x\\ (d,e)=1}} \frac1{\phi(d)} = \sum_{d\le x} \frac1{\phi(d)}\sum_{\substack{e\le x/d \\ (e,d)=1}} 1= \] \[ =\sum_{d\le x} \frac1{\phi(d)} \left(\frac{\phi(d)x}{d^2}+O(2^{\omega(d)}) \right)= x \sum_{d\le x} \frac1{d^2} + O\left(\sum_{d\le x} \frac{2^{\omega(d)}}{\phi(d)} \right), \] where $\omega(d)$ denotes, as usual, the number of distinct prime factors of $d$. Furthermore, let $\tau(n)$ and $\sigma(n)$ denote the number and the sum of divisors of $n$, respectively. Using that $\phi(n)\sigma(n)\gg n^2$, we have $2^{\omega(d)}/\phi(d) \ll \tau(d)\sigma(d)/d^2$. Here $\sum_{d\le x} \tau(d)\sigma(d) \ll x^2\log x$, according to a result of Ramanujan, and we obtain by partial summation that the error term is $O(\log^2 x)$.
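The constant $B=\pi^2/6$ is visible numerically. The Python sketch below (illustrative; the cutoff $x=20000$ and the tolerance are my choices, not the paper's) averages $\varrho(n)/\phi(n)$ using its multiplicative form $\varrho(p^\nu)/\phi(p^\nu)=1+1/\phi(p^\nu)$.

```python
from math import pi

def factorize(n):
    """Prime factorization of n as a list of (p, nu) pairs."""
    f, p = [], 2
    while p * p <= n:
        if n % p == 0:
            nu = 0
            while n % p == 0:
                n //= p
                nu += 1
            f.append((p, nu))
        p += 1
    if n > 1:
        f.append((n, 1))
    return f

def rho_over_phi(n):
    # rho(p^nu)/phi(p^nu) = 1 + 1/(p^nu - p^(nu-1)), multiplicatively
    r = 1.0
    for p, nu in factorize(n):
        r *= 1.0 + 1.0 / (p**nu - p**(nu - 1))
    return r

x = 20000
mean = sum(rho_over_phi(n) for n in range(1, x + 1)) / x
```

The running mean settles near $\pi^2/6\approx 1.6449$, consistent with the $O(\log^2 x)$ error term.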
Figure 2 is a plot of the error term $\sum_{n\le x} \varrho(n)/\phi(n) - B x$ for $1\le x \le 1000$.
{\mapleplot{Fig201.eps}}
Consider now the quotient $f(n)=\phi(n)/\varrho(n)$, where $f(n)\le 1$. According to a well-known result of H. Delange, $f(n)$ has a mean value given by \[ C= \prod_p \left(1-\frac1{p}\right)\left(1+ \sum_{\nu=1}^{\infty} \frac{f(p^{\nu})}{p^{\nu}} \right)= \prod_p \left(1-\frac1{p}\right) \left(1+ (1-\frac1{p}) \sum_{\nu=1}^{\infty} \frac1{p^{\nu}-p^{\nu-1}+1} \right). \]
Here $C\approx 0.6875$, which can be obtained using that for every $k\ge 1$, \[ C= \prod_p \left(1-\frac1{p}\right)\left(1+ (1-\frac1{p}) \sum_{\nu=1}^k \frac1{p^{\nu}-p^{\nu-1}+1} +\frac1{p^kr_p}\right), \] where $p-1<r_p<p$ for each prime $p$.
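The Delange mean value can be checked empirically as well. The following sketch (mine; cutoff and tolerance are arbitrary choices) averages $f(n)=\phi(n)/\varrho(n)$ directly, using $f(p^\nu)=1-1/(p^\nu-p^{\nu-1}+1)$ multiplicatively.

```python
def factorize(n):
    """Prime factorization of n as a list of (p, nu) pairs."""
    f, p = [], 2
    while p * p <= n:
        if n % p == 0:
            nu = 0
            while n % p == 0:
                n //= p
                nu += 1
            f.append((p, nu))
        p += 1
    if n > 1:
        f.append((n, 1))
    return f

def phi_over_rho(n):
    # phi(p^nu)/rho(p^nu) = 1 - 1/(p^nu - p^(nu-1) + 1), multiplicatively
    r = 1.0
    for p, nu in factorize(n):
        r *= 1.0 - 1.0 / (p**nu - p**(nu - 1) + 1)
    return r

x = 20000
mean = sum(phi_over_rho(n) for n in range(1, x + 1)) / x
```

At $x=20000$ the mean is already close to $C\approx 0.6875$.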
We prove the following asymptotic formula:
\vskip1mm {\bf Theorem 5.} {\it \[ \sum_{n\le x} \frac{\phi(n)}{\varrho(n)}= C x + O((\log x)^{5/3}(\log \log x)^{4/3}). \]}
\vskip1mm {\bf Proof.} For $f(n)=\phi(n)/\varrho(n)$ let \[ f(n) =\sum_{d\mid n} \frac{\phi(d)}{d}\, v(n/d), \] that is, in terms of the Dirichlet convolution, $f=\phi/E * v$, $f=\mu/E*I*v$, $v=f* I/E * \mu$, where $\mu(n)$ is the M\"obius function, $E(n)=n$, $I(n)=1$ ($n\ge 1$).
The function $v(n)$ is multiplicative, for every prime power $p^\nu$ ($\nu \ge 1$), \[ v(p^{\nu})=f(p^{\nu})-\left(1-\frac1{p}\right) \left(f(p^{\nu-1})+ \frac1{p}f(p^{\nu-2})+... +\frac1{p^{\nu-2}}f(p)+ \frac1{p^{\nu-1}}\right), \]
and $v(p)=0$, $|v(p^2)|\le 1/p$ for every prime $p$. Also, \[ f(p^{\nu})=\frac{p^{\nu}-p^{\nu-1}}{p^{\nu}-p^{\nu-1}+1}=\frac{1-1/p}{1-(1/p- 1/p^{\nu})}= \left(1-\frac1{p}\right)\left(1+\left(\frac1{p}-\frac1{p^\nu}\right)+ \left(\frac1{p}-\frac1{p^\nu}\right)^2+...\right)= \] \[= \left(1-\frac1{p}\right)\left(1+\frac1{p}+\frac1{p^2} +...+\frac1{p^{\nu}}- \frac1{p^{\nu}} +O\left(\frac1{p^{\nu+1}}\right)\right), \] and obtain that for every fixed $\nu \ge 3$, \[ f(p^{\nu})= 1-\frac1{p^{\nu}}+O\left(\frac1{p^{\nu+1}}\right), \] consequently, \[ v(p^\nu)= 1-\frac1{p^\nu}+O\left(\frac1{p^{\nu+1}}\right)-\left(1-\frac1{p}\right)\left(1+\frac1{p}+...+\frac1{p^{\nu-1}} - \frac{\nu-1}{p^{\nu-1}} +O\left(\frac1{p^{\nu}}\right)\right), \] \[ v(p^\nu)= \frac{\nu-1}{p^{\nu-1}} + O\left(\frac1{p^\nu}\right). \leqno(*) \]
It follows that there exists $x_0$ such that for every prime $p>x_0$ and for every $\nu \ge 3$, \[
|v(p^\nu)|\le \frac1{p^{3\nu/5}}. \leqno(**) \]
Now we show that \[ \sum_{n\le x} |v(n)|=O(\log x), \quad \sum_{n>x} \frac{|v(n)|}{n} = O\left(\frac{\log x}{x}\right). \]
We deduce the first estimate, the second one will follow by partial summation. Let ${\cal M}_1 =\{ n: \ p \mid n \ \Rightarrow \ p\le x_0\}$, ${\cal M}_2 =\{ n: \ p \mid n \ \Rightarrow \ p^3\mid n, p>x_0\}$, ${\cal M}_3 =\{ n: \ p \mid n \ \Rightarrow \ p^2\mid n, p^3\nmid n, p>x_0\}$. If $v(n)\ne 0$, then $n$ can be written uniquely as $n=n_1n_2n_3$, where $n_1\in {\cal M}_1$, $n_2\in {\cal M}_2$, $n_3\in {\cal M}_3$. We have the following estimates.
If $n_3\in {\cal M}_3$, then $n_3=m^2$ with $|\mu(m)|=1$. Using $|v(p^2)|\le 1/p$ we have $|v(n_3)|\le 1/m$, and \[ \sum_{\substack{n_3\le x\\ n_3\in {\cal M}_3}} |v(n_3)| \ll \sum_{m\le \sqrt{x}} \frac{|\mu(m)|}{m}\ll \log x. \]
By (**), for $x_0$ sufficiently large, \[ \sum_{\substack{n_2\le x\\ n_2\in {\cal M}_2}} |v(n_2)| \ll \prod_{p>x_0} \left(1+|v(p^3)|+|v(p^4)|+...\right)\ll \prod_{p> x_0} \left(1+\frac1{p^{9/5}}+\frac1{p^{12/5}}+ ...\right) \ll \] \[ \ll \prod_{p> x_0} \left(1+\frac{2}{p^{9/5}}\right)< \infty. \]
Using (*) we also have \[ \sum_{\substack{n_1\le x\\ n_1\in {\cal M}_1}} |v(n_1)| \ll \prod_{p\le x_0} \left(1+\frac1{p}+ |v(p^3)|+ |v(p^4)|+...\right)< \infty. \]
Hence \[
\sum_{n\le x} |v(n)| = \sum_{n_1n_2n_3\le x} |v(n_1)|\, |v(n_2)|\,
|v(n_3)| = \sum_{n_1n_2 \le x} |v(n_1)|\, |v(n_2)|\, \sum_{n_3\le x/n_1n_2} |v(n_3)| \ll \log x. \]
Now applying the following well-known result of Walfisz, \[ \sum_{n\le x} \frac{\phi(n)}{n} =\frac{6}{\pi^2} x+ O((\log x)^{2/3}(\log \log x)^{4/3}) \] we have \[ \sum_{n\le x} f(n)= \sum_{d\le x} v(d) \sum_{e\le x/d} \frac{\phi(e)}{e} = \frac{6}{\pi^2} x \sum_{d\le x} \frac{v(d)}{d} + O\Big((\log x)^{2/3}(\log \log x)^{4/3}\sum_{d\le x} |v(d)|\Big)= \] \[ =\frac{6}{\pi^2} x \sum_{d=1}^{\infty} \frac{v(d)}{d} + O((\log x)^{5/3}(\log \log x)^{4/3}), \] ending the proof of Theorem 5.
Figure 3 is a plot of the error term $\sum_{n\le x} \phi(n)/\varrho(n) - C x$ for $1\le x\le 1000$.
{\mapleplot{Fig301.eps}}
{\bf Theorem 6.} {\it \[ \sum_{n\le x} \frac1{\varrho(n)}=D\log x + E +O\left(\frac{\log^9x}{x}\right), \] where $D$ and $E$ are constants, \[ D=\frac{\zeta(2)\zeta(3)}{\zeta(6)} \prod_p \left(1-\frac{p(p-1)}{p^2-p+1}\sum_{\nu=1}^{\infty} \frac1{p^{\nu}(p^{\nu}-p^{\nu-1}+1)} \right).\] }
\vskip1mm {\bf Proof.} Write \[ \frac1{\varrho(n)}=\sum_{\substack{de=n\\(d,e)=1}} \frac{h(d)}{\phi(e)},\] where $h$ is multiplicative and for every prime power $p^{\nu}$ ($\nu \ge 1$), \[ \frac1{\varrho(p^{\nu})}= h(p^{\nu})+ \frac1{\phi(p^{\nu})}, \quad h(p^{\nu})=- \frac1{\phi(p^{\nu})(\phi(p^{\nu})+1)}, \] therefore $h(n)\ll 1/\phi^2(n)$. We need the following known result, cf. for example \cite{MV07}, p. 43, \[ \sum_{\substack{n\le x\\ (n,k)=1}} \frac1{\phi(n)} = K a(k) \left(\log x + \gamma + b(k)\right)+ O\left(2^{\omega(k)} \frac{\log x}{x}\right), \] where $\gamma$ is Euler's constant, \[ K= \frac{\zeta(2)\zeta(3)}{\zeta(6)}, \ a(k)=\prod_{p\mid k}\left(1-\frac{p}{p^2-p+1}\right) \le \frac{\phi(k)}{k}, \] \[ b(k)=\sum_{p\mid k} \frac{\log p}{p-1}- \sum_{p\nmid k} \frac{\log p}{p^2-p+1} \ll \frac{\psi(k)\log k}{\phi(k)}, \ \text{ with } \ \psi(k)= k \prod_{p\mid k} \left(1+\frac1{p}\right). \]
We have \[ \sum_{n\le x} \frac1{\varrho(n)}=\sum_{d\le x} h(d) \sum_{\substack{e\le x/d\\ (e,d)=1}} \frac1{\phi(e)}= \] \[ = K \left((\log x+\gamma)\sum_{d\le x} h(d)a(d)+\sum_{d\le x} h(d)a(d)(b(d)-\log d) \right)+O\left(\frac{\log x}{x}\sum_{d\le x} d
|h(d)|2^{\omega(d)} \right), \] and we obtain the given result with the constants \[ D=K\sum_{n=1}^{\infty} h(n)a(n), \ E=K\gamma \sum_{n=1}^{\infty} h(n)a(n) + K \sum_{n=1}^{\infty} h(n)a(n)(b(n)-\log n), \] these series being convergent taking into account the estimates of above. For the error terms, \[
\sum_{n>x} |h(n)|a(n) \ll \sum_{n>x} \frac1{n\phi(n)} \ll \sum_{n>x}
\frac{\sigma(n)}{n^3}\ll \frac1{x}, \ \sum_{n>x} |h(n)| a(n)\log n \ll \frac{\log x}{x}, \] \[
\sum_{n>x} |h(n)a(n)b(n)| \ll \sum_{n>x} \frac{\tau^3(n)\log n}{n^2} \ll \frac{\log^8 x}{x}, \] using that $\sum_{n\le x} \tau^3(n) \ll x \log^7 x$ (Ramanujan), and \[
\sum_{n\le x} n|h(n)|2^{\omega(n)} \ll \sum_{n\le x} \frac{\tau^3(n)}{n} \ll \log^8 x. \]
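As a numerical sanity check on Theorem 6 (my own, with arbitrary cutoffs and tolerances), one can evaluate the product formula for $D$ with primes up to $10^4$ and compare it with the empirical slope of $\sum_{n\le x} 1/\varrho(n)$ against $\log x$:

```python
from math import log

def primes_upto(limit):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(limit**0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
    return [p for p in range(2, limit + 1) if sieve[p]]

def zeta(s, N=10000):
    # partial sum plus the integral tail estimate N^(1-s)/(s-1)
    return sum(k ** -s for k in range(1, N + 1)) + N ** (1 - s) / (s - 1)

K = zeta(2) * zeta(3) / zeta(6)

# D = K * prod_p (1 - p(p-1)/(p^2-p+1) * sum_nu 1/(p^nu (p^nu - p^(nu-1) + 1)))
D = K
for p in primes_upto(10**4):
    inner, pw = 0.0, p
    while True:
        term = 1.0 / (pw * (pw - pw // p + 1))
        inner += term
        if term < 1e-18:
            break
        pw *= p
    D *= 1.0 - (p * (p - 1) / (p * p - p + 1)) * inner

def rho(n):
    """rho(n) by trial-division factorization."""
    r, p, m = 1, 2, n
    while p * p <= m:
        if m % p == 0:
            pw = 1
            while m % p == 0:
                m //= p
                pw *= p
            r *= pw - pw // p + 1
        p += 1
    if m > 1:
        r *= m
    return r

acc, S = 0.0, {}
for n in range(1, 20001):
    acc += 1.0 / rho(n)
    if n in (2000, 20000):
        S[n] = acc

# slope of S(x) against log x; the constant E cancels in the difference
slope = (S[20000] - S[2000]) / (log(20000) - log(2000))
```

The computed $D$ and the empirical slope agree closely, as they should since the error term $O(\log^9 x/x)$ is small in practice.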
\vskip1mm {\bf 4. Extremal orders}
\vskip1mm Since $\varrho(n)\le n$ for every $n\ge 1$ and $\varrho(p)=p$ for every prime $p$, it is immediate that \\ $\limsup_{n\to \infty} \varrho(n)/n = 1$. The minimal order of $\varrho(n)$ is also the same as that of $\phi(n)$, namely,
\vskip1mm {\bf Theorem 7.} {\it \[\liminf_{n\to \infty} \frac{\varrho(n)\log \log n}{n}=e^{-\gamma}. \]}
\vskip1mm {\bf Proof.} We apply the following result (\cite{TW}, Corollary 1): If $f$ is a nonnegative real-valued multiplicative arithmetic function such that for each prime $p$,
i) $\rho(p):=\sup_{\nu \ge 0} f(p^{\nu})\le (1-1/p)^{-1}$, and
ii) there is an exponent $e_p=p^{o(1)}\in \mathds N$ satisfying $f(p^{e_p})\ge 1+1/p$,
then \[ \displaystyle \limsup_{n\to \infty} \frac{f(n)}{\log \log n}=e^{\gamma}\prod_p \left(1-\frac1{p}\right) \rho(p). \]
Take $f(n)=n/\varrho (n)$, where $f(p^{\nu})=(1-1/p+ 1/p^{\nu})^{-1} < (1-1/p)^{-1}=\rho(p)$, and for $e_p=3$, \[ f(p^3)= 1+\frac{p^2-1}{p^3-p^2+1}> 1+\frac1{p} \] for every prime $p$.
\vskip1mm It is immediate that $\liminf_{n\to \infty} \varrho(n)/\phi(n)=1$. The maximal order of $\varrho(n)/\phi(n)$ is given by
\vskip1mm {\bf Theorem 8.} {\it \[ \limsup_{n\to \infty} \frac{\varrho(n)}{\phi(n)\log \log n}=e^{\gamma}.\] }
\vskip1mm {\bf Proof.} Now let $f(n)=\varrho (n)/\phi(n)$ in the result given above. Here \[ f(p^{\nu})=1+\frac1{p^\nu-p^{\nu-1}}\le 1+\frac1{p-1}= \left(1-\frac1{p}\right)^{-1}= \rho(p), \] and for $e_p=1$, $f(p)>1+1/(p-1)> 1+1/p$ for every prime $p$.
\vskip1mm {\bf 5.} The plots were produced using Maple. The function $\varrho(n)$ was generated by the following procedure: \begin{verbatim}
rho := proc(n) local x, i, f, p, a:
  x := 1: f := ifactors(n)[2]:
  for i from 1 to nops(f) do
    p := f[i][1]: a := f[i][2]:
    x := x*(p^a - p^(a-1) + 1):
  od:
  RETURN(x):
end; \end{verbatim}
\vskip1mm \noindent {{\bf L\'aszl\'o T\'oth}\\ University of P\'ecs\\ Institute of Mathematics and Informatics\\ Ifj\'us\'ag u. 6\\ 7624 P\'ecs, Hungary\\ ltoth@ttk.pte.hu}
\end{document}
"id": "0710.1936.tex",
"language_detection_score": 0.5579369068145752,
"char_num_after_normalized": null,
"contain_at_least_two_stop_words": null,
"ellipsis_line_ratio": null,
"idx": null,
"lines_start_with_bullet_point_ratio": null,
"mean_length_of_alpha_words": null,
"non_alphabetical_char_ratio": null,
"path": null,
"symbols_to_words_ratio": null,
"uppercase_word_ratio": null,
"type": null,
"book_name": null,
"mimetype": null,
"page_index": null,
"page_path": null,
"page_title": null
} | arXiv/math_arXiv_v0.2.jsonl | null | null |
\begin{document}
\title{Multimode Quantum State Tomography Using Unbalanced Array Detection} \author{Brandon S. Harms} \author{Blake E. Anthony} \author{Noah T. Holte} \author{Hunter A. Dassonville} \author{Andrew M.C. Dawes} \email[]{dawes@pacificu.edu} \homepage[]{www.amcdawes.com} \affiliation{Pacific University, Department of Physics, Forest Grove Oregon USA 97116}
\date{\today}
\begin{abstract} We measure the joint Q-function of a multi-spatial-mode field using a charge-coupled device array detector in an unbalanced heterodyne configuration. The intensity pattern formed by interference between a weak signal field and a strong local oscillator is resolved using Fourier analysis and used to reconstruct quadrature amplitude statistics for 22 spatial modes simultaneously. The local oscillator and signal propagate at an angle of \SI{12}{\milli\radian} thus shifting the classical noise to modes that do not overlap with the signal. In this configuration, balanced detection is not necessary. \end{abstract}
\pacs{42.50.Ar, 03.65.Ta, 03.65.Wj} \keywords{tomography, Q-function}
\maketitle
Multimode quantum optical systems are increasingly important in applications such as slow and stored light devices \cite{Grodecka-Grad:2012aa}. A complete understanding of these devices requires a fidelity measurement of the stored quantum state, but there has not yet been a simple way to measure the quantum state of a multi-spatial mode field. Balanced homodyne detection (BHD) is the standard measurement technique used to construct the complete quantum mechanical state of light \cite{leonhardt_measuring_1997}. This technique has one primary weakness: multimode fields or fields in an unknown spatial mode suffer significant losses due to mode mismatch between the local oscillator (LO) and the signal field \cite{leonhardt_measuring_1997,beck_quantum_2000}.
Balanced array detection overcomes mode-matching losses and has been used to perform measurements of more than two modes at once \cite{raymer_many-port_1993}. It is possible to obtain simultaneous (but not joint) measurements of the Wigner functions of many modes by using array detectors \cite{dawes_simultaneous_2001,dawes_mode_2003}. It has also been shown theoretically that array detectors may be used to measure the joint Q-function of a multimode field \cite{iaconis_temporal_2000}. One of the difficulties of these methods is the need to balance two array detectors and perform pixel-by-pixel subtraction to eliminate the classical intensity noise of the LO \cite{Yuen:1983aa}. These alignment requirements become prohibitive when trying to measure two transverse directions. This difficulty was removed by Beck et al.\ who achieved the same effect with a single array, using unbalanced detection, by arranging the LO and signal fields to isolate the classical LO noise \cite{Beck:2001aa}.
In this Letter, we describe a multi-spatial-mode detection method that is analogous to the multi-temporal-mode detection method of Beck et al. We simultaneously measure many spatial modes without having to vary the propagation direction or the phase of the LO; this reduces the amount of data that needs to be acquired and enables simultaneous joint measurements of two modes. This measurement technique can measure joint statistics between two independent spatial modes. Knowledge of these inter-mode correlations could provide a new route to information storage in a wide variety of systems.
To develop the theory of unbalanced array detection, we consider the spatial intensity $S(\mathbf{x})$ incident on the array detector due to the combination of an LO field $E_{LO}$ and the signal field $E_S$. For simplicity, we describe only one transverse coordinate (i.e. horizontal); generalization to both transverse coordinates is straightforward. If the LO is normally incident on the detector and the signal field propagates at a small angle to the LO, the spatial intensity at the detector is given by \begin{align}
S(\mathbf{x})&=|E_{LO}(\mathbf{x})+E_S(\mathbf{x})\exp(i\mathbf{k_S\cdot x})|^2\nonumber\\
&=|E_{LO}(\mathbf{x})|^2+|E_S(\mathbf{x})|^2\nonumber\\
&+\left[{E}^*_{LO}(\mathbf{x})E_S(\mathbf{x})\exp(i\mathbf{k_S\cdot x})+ c.c.\right]. \end{align}
The Fourier transform of this measurement yields \begin{align}
\widetilde{S}(\mathbf{k})&=\widetilde{E}_{LO}^*(\mathbf{-k})\otimes\widetilde{E}_{LO}(\mathbf{k})+ \widetilde{E}_S^*(\mathbf{-k})\otimes\widetilde{E}_S(\mathbf{k})\nonumber\\
&\quad + f(\mathbf{k-k_S})+f^*(\mathbf{-k-k_S}), \end{align} where $f(\mathbf{k}) = \widetilde{E}_{LO}^*(\mathbf{-k})\otimes\widetilde{E}_S(\mathbf{k})$ and $\otimes$ denotes the convolution. The first of the three terms peaks at $\mathbf{k=0}$ and contains the second-order classical LO noise that would be removed in balanced detection. The second term is negligible if the signal field is weak. If $\mathbf{k_S}$ is large enough, the function $f$ contains all information on the heterodyned signal. This function has a peak value at $\mathbf{k=k_S}$ and is therefore separate from the classical noise at $\mathbf{k}=0$. Just as Beck et al.~\cite{Beck:2001aa} eliminate classical LO noise by temporally separating the signal and LO fields, we eliminate classical LO noise by separating the LO and signal fields by their propagation direction---they have different transverse components of their propagation wave vector \cite{Dawes:2013aa}.
\begin{figure}
\caption{A normally-incident plane-wave local oscillator (LO) interferes
with a signal field at angle $\theta$. The array detector pixels measure $\delta x$ and
the detector size is $D_x$.}
\label{fig:losig}
\end{figure}
To describe the detection process from the perspective of quantum mechanics, we consider the spatial pattern detected at an array, as in Fig.~\ref{fig:losig}, where $\hat{n}_j=\hat{a}^\dagger_j\hat{a}_j$ is the operator corresponding to the number of photons incident on pixel $j$ of the array. Each pixel measures a different spatial position $x_j=j\delta x$ and there are $N$ pixels labeled by $-N/2 \leq j < N/2$. We can also express the measured field in terms of plane-wave modes $\hat{b}_k$, $-N/2\leq k<N/2$ using the Fourier relation \begin{align}
\label{a_to_b}
\hat{a}_j = \sum_k \exp\left[-i2\pi j k/N\right]\hat{b}_k \end{align} We assume the LO occupies the $2M+1$ plane-wave modes near the center of the spectral window. The signal occupies the plane-wave modes with positive $k$ and the plane-wave modes with negative $k$ contain only the vacuum field. This is explicitly written as \begin{equation}
\label{3bs}
\hat{b}_k = \begin{cases}
\hat{b}_k^{(vac)} & -N/2\leq k < -M,\\
\hat{b}_k^{(LO)} & -M\leq k \leq M,\\
\hat{b}_k^{(s)} & M < k < N/2.
\end{cases} \end{equation} The operator we measure corresponds to the inverse Fourier transform of $\hat{n}_j$ \begin{align}
\label{K_to_n}
\hat{K}_p = \frac{1}{\sqrt{N}} \sum_j \exp\left[i2\pi p j/N\right]\hat{n}_j \end{align} Combining Eqns.~\ref{a_to_b} and \ref{3bs} yields an expression for the per-pixel number operator \(\hat{n}_j\) in terms of plane-wave mode operators \(\hat{b}_k\). Terms that combine two weak fields \(\hat{b}_k^{(vac)}\) and \(\hat{b}_k^{(s)}\) are discarded as second-order. From here, Eqn.~\ref{K_to_n} is used to relate \(\hat{K}_p\) in terms of the \(\hat{b}_k\)'s. Finally, the large-amplitude of the LO field means the LO operators \(\hat{b}_k^{(LO)}\) can be replaced with their coherent state amplitudes $\beta_k$. For $p > 2M$ we find \begin{equation}
\label{K_to_b}
\hat{K}_p = \sum_{k=-M}^{M}\left( \beta_k^*\hat{b}^{(s)}_{k+p}+
\beta_k\hat{b}^{\dagger(vac)}_{k-p}\right). \end{equation}
If the LO occupies only a single ($k=0$) plane-wave mode, then Eq.~\ref{K_to_b} simplifies to \begin{equation}
\label{eqn:K_to_b2}
\hat{K}_p = \beta_0^*\hat{b}^{(s)}_{p}+ \beta_0\hat{b}^{\dagger(vac)}_{-p}. \end{equation} For an LO in a single plane-wave mode, a measurement of $\hat{K}_p$ (the Fourier transform of the photocount data) returns a complex number. Using Eq.~\ref{eqn:K_to_b2}, this complex number can be interpreted as a measurement of $\hat{b}_p^{(s)}$, the signal mode annihilation operator, plus a vacuum contribution (the second term in Eq.~\ref{eqn:K_to_b2}). The annihilation operator is itself the sum of the two field quadrature amplitudes \begin{equation}
\label{eq:quadratures}
\hat{b}_p^{(s)} = \frac{1}{\sqrt{2}}\left(\hat{x}_p+i\hat{y}_p\right). \end{equation} Therefore, the real and imaginary components of each $\hat{K}_p$ correspond to simultaneous measurement of the quadrature amplitudes $x_p$ and $y_p$. Of course these observables are noncommuting so the ability to measure them simultaneously comes at the price of additional vacuum noise \cite{arthurs_simultaneous_1965,leonhardt_measuring_1997}. It is precisely this additional noise that prevents reconstruction of the Wigner function, instead allowing reconstruction of the Q-function.
It is important to note that each Fourier-transformed exposure of the array returns a set of $(N/2) - M$ complex numbers. Each of these, indexed by $p$, has a real and imaginary part that corresponds to the signal mode field quadrature $(x_p,y_p)$. If one value of $p$ is selected, and the corresponding $(x_p,y_p)$ pairs are histogrammed, the result tends toward the Q distribution for the field quadratures of mode $p$. Because the data for all values of $p$ are collected simultaneously, any mode $p$ can be analyzed from a single set of data. Additionally, joint Q-distributions can be computed for any pair or combination of modes.
Our implementation of this system, shown in Fig.~\ref{fig:setup}, begins with a \SI{780}{\nano\meter} external-cavity diode laser. The laser output is sent through an acousto-optic modulator (80 MHz). The first order diffracted beam is then spatially filtered using an optical fiber (780-HP). After the fiber output, a telescope, starting with a 50-mm focal length lens, expands the beam diameter to \SI{1}{\centi\meter}. A 50/50 beam splitter separates the LO from the signal and a 250-mm focal length lens collimates the LO beam after expansion. The signal beam remains slightly diverging after passing through a 300-mm focal length lens. The mode mismatch between the signal and the LO provides a simple multimode signal for our demonstration of this method. The signal is attenuated by a factor of $10^5$ using neutral density filters. The signal and LO interfere on a charge coupled device (CCD) camera \footnote{Princeton Instruments, PyLoN 400BR eXcelon.}. The CCD is a 1340$\times$400 array of $\delta x =$~\SI{20}{\micro\meter} square pixels with quantum efficiency of 98\% at \SI{780}{\nano\meter}, cooled to \SI{-120}{\celsius} to achieve a dark current rate of 2-3 electrons per pixel per hour.
The LO power incident on the CCD is $\sim$\SI{1}{\micro\watt} and the LO and signal beams interfere on the CCD at an angle of $\sim$\SI{12}{\milli\radian}. The AOM signal is modulated by a function generator (Rigol DG4062) to create a \SI{7.5}{\milli\second} rectangular pulse. The pulse is triggered \SI{5}{\milli\second} after the start of the \SI{20}{\milli\second} exposure to keep the CCD dark during readout.
\begin{figure}
\caption{Experimental setup: a 780 nm external cavity diode laser is coupled
into single-mode fiber, gated using an 80-MHz acousto-optic modulator (AOM),
and split into LO and signal fields that interfere on a CCD array. Lens
focal lengths are: L1 50 mm, L2 250 mm, L3 300 mm.}
\label{fig:setup}
\end{figure}
The largest noise source in our measurements is the electronic readout noise. We characterize this noise by measuring the variance in photocounts $\Delta n_j$ with the LO illuminating the CCD and without illumination. The variance with illumination is an average of \SI{15}{\deci\bel} above the variance without illumination so our signal to noise ratio is more than adequate.
If the signal is blocked, the signal mode entering the detector is then the vacuum mode, and the quadrature amplitudes measured should be zero. Despite our spatial filtering, the residual spatial components in the LO beam leave a background signal in the FFT output that should be subtracted before computing the Q-function. To carry out this correction, we collect 500 exposures with the signal blocked (i.e. the signal in the vacuum state) and average the FFT output across these 500 exposures. This vacuum average $\langle \hat{K}_{p}\rangle_\mathrm{vac}$ is then subtracted from subsequent data prior to further calculations.
We collect 1000-8000 shots of CCD data from a region of interest (ROI) at the center of the CCD (600 pixels wide by 10 pixels tall), sum the ten rows vertically to obtain $N = 600$ data points, and compute the FFT of the resulting 600-element array. The output of this calculation is an array of 600 complex values of $K_p$, of which the first 300 correspond to unique modes indexed by $p$ \footnote{Given that the raw CCD data is real, the FFT output is symmetric about the midpoint.}.
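The symmetry invoked in the footnote is just the conjugate symmetry of the discrete Fourier transform of real data, $K_{N-p}=K_p^*$; a quick check (Python, simulated photocounts):

```python
import numpy as np

rng = np.random.default_rng(0)
counts = rng.poisson(1000.0, size=600).astype(float)  # one shot of real photocounts
K = np.fft.fft(counts)
```

Only the first $N/2$ values carry independent information, which is why 600 pixels yield 300 unique modes.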
\begin{figure}
\caption{Single-mode Q-function. The mode selected here has an
average of $\langle n \rangle = 7.2$ photons and $\Delta n=2.8$, within 4\% of the quantum noise limit. Shown on the left is the raw-data histogram and on the right is a kernel density estimate calculated from the raw data.}
\label{fig:qfunc}
\end{figure}
We compute the quadratures by scaling the data by the total number of photons $n_t$ detected per shot in the CCD region of interest. For weak signal fields,
$n_t$ is essentially equal to the average number of photons in the LO and therefore related to the classical LO amplitude as $$|\beta| = \sqrt{n_t}.$$ Therefore, the quadratures in Eq.~\ref{eq:quadratures} are calculated from experimental values of $K_p$ as \begin{align}
\label{eq:quads}
x_p = \sqrt{\frac{2}{n_t}}\mathrm{Re}\left(K_p - \langle K_p \rangle_{vac}\right),\\
y_p = \sqrt{\frac{2}{n_t}}\mathrm{Im}\left(K_p - \langle K_p \rangle_{vac}\right). \end{align}
From the quadratures, we compute the Q-function \(Q(x_p,y_p)\) for a specific mode $p$. This calculation is done in two ways. First, we simply histogram the $x_p$ and $y_p$ pairs using 30 bins in each dimension. For large data sets, this histogram will approximate \(Q(x_p,y_p)\). Additionally, we calculate \(Q(x_p,y_p)\) by applying a kernel density estimate (KDE) to the $x_p$ and $y_p$ pairs. This method essentially places a narrow Gaussian function at each point $(x_p,y_p)$ and computes the sum. The width of the Gaussian kernel is determined using Scott's rule and results in some smoothing relative to our raw histogram \cite{Scott:2009aa}.
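A minimal version of both estimators in plain NumPy (my sketch, with simulated Gaussian quadrature data rather than the measured values, and a hand-rolled Gaussian KDE using Scott's bandwidth $\sigma\, n^{-1/(d+4)}$ with $d=2$):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4000
# simulated quadrature pairs for one mode: a coherent-state-like Gaussian blob
xp = 2.7 + rng.standard_normal(n) / np.sqrt(2)
yp = 0.8 + rng.standard_normal(n) / np.sqrt(2)

# raw-data histogram estimate of Q(x_p, y_p), 30 bins per axis as in the text
H, xe, ye = np.histogram2d(xp, yp, bins=30, density=True)

def kde(pts_x, pts_y, grid_x, grid_y):
    """Gaussian kernel density estimate on a 2-D grid (Scott's rule)."""
    h_x = pts_x.std() * len(pts_x) ** (-1.0 / 6.0)
    h_y = pts_y.std() * len(pts_y) ** (-1.0 / 6.0)
    gx = grid_x[:, None, None]
    gy = grid_y[None, :, None]
    k = np.exp(-0.5 * (((gx - pts_x) / h_x) ** 2 + ((gy - pts_y) / h_y) ** 2))
    return k.sum(axis=2) / (len(pts_x) * 2 * np.pi * h_x * h_y)

gx = np.linspace(xp.min(), xp.max(), 30)
gy = np.linspace(yp.min(), yp.max(), 30)
Q = kde(xp, yp, gx, gy)
```

The KDE smooths the raw histogram but peaks at the same quadrature values and integrates to (approximately) unity.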
\begin{figure}
\caption{Average photon number vs. plane wave mode angle. For the 15 modes occupied by the signal field we plot $\langle n_p \rangle$ with error bars given by $\Delta n_p$.}
\label{fig:nvp}
\end{figure}
In the following figures, both the raw-data histogram and the KDE versions of the Q-function are presented for comparison. Shown in Fig.~\ref{fig:qfunc} is $Q(x_p,y_p)$ for a mode with $\langle n \rangle = 7.2$ photons and $\Delta n = 2.8$. The Q-function can be computed for any mode $p$ using the same set of data. Because data is collected for all modes simultaneously, we can also compute a quantity of interest (such as the average photon number $\langle n_p \rangle$) and measure that quantity for all modes simultaneously. In Fig.~\ref{fig:nvp} we plot average photon number $\langle n_p \rangle$ vs. plane wave mode angle $\theta_p = p \lambda / (N \delta x)$. This illustrates the mode spectrum of the signal field. In particular, with a multimode field there are several modes with comparable amplitudes. While Fig.~\ref{fig:nvp} shows the 22 modes near the signal, we measure 600 modes simultaneously; highly multimode signals can be measured using this technique.
\begin{figure}
\caption{Q-function for mode with randomized phase. With randomized phase, the Q-function takes on a donut shape centered at the origin. The consistent radius indicates a stable photon number while the angular spread indicates the changing phase.}
\label{fig:randphase}
\end{figure}
To demonstrate the detection of a signal with variable phase, we dither the position of a piezo-mounted mirror in the signal beam path and observe modulation through \SI{2\pi}{\radian} of phase. The Q-function calculated from such data is shown in Fig.~\ref{fig:randphase}. The characteristic donut shape corresponds to phase noise although the phase modulation is not completely random as evidenced by the peaks present in the Q-function.
\begin{figure}
\caption{Two-mode joint Q-function. For adjacent modes ($p=197$ and $p'=198$) the joint Q-function illustrates the correlation between the real parts, $x_{p}$, of each mode.}
\label{fig:joint}
\end{figure}
It is important to note that we can also compute any quantity that is a function of the quadratures $x_p$ and $y_p$. One such quantity is the joint Q-function, $Q(x_p,x_{p'})$. The joint Q-function shown in Fig.~\ref{fig:joint} is for two nearby modes $p$ and $p'=p+1$. As expected, these modes exhibit strong positive correlation between $x_p$ and $x_{p'}$.
In conclusion, we point out that this method could be used to measure the quantum state of light stored in a multimode quantum memory such as that described in \cite{Grodecka-Grad:2012aa}. The peak sensitivity of our CCD system is near the \SI{780}{\nano\meter} and \SI{795}{\nano\meter} resonances of Rubidium. Very recently, quantum state tomography measurements have been performed on single-photon states retrieved from a stored-light system \cite{Bimbard:2014aa}. Multimode measurements of such systems could be made using methods presented in this Letter and would yield new information about correlations between modes retrieved from stored light systems.
\acknowledgments
We thank M. Beck for helpful discussions. This material is based upon work supported by the National Science Foundation under Grant No. 1205828. Additional financial support was provided by the Research Corporation for Science Advancement and the Pacific Research Institute for Science and Mathematics.
\end{document}
"id": "1407.6389.tex",
"language_detection_score": 0.8072298765182495,
"char_num_after_normalized": null,
"contain_at_least_two_stop_words": null,
"ellipsis_line_ratio": null,
"idx": null,
"lines_start_with_bullet_point_ratio": null,
"mean_length_of_alpha_words": null,
"non_alphabetical_char_ratio": null,
"path": null,
"symbols_to_words_ratio": null,
"uppercase_word_ratio": null,
"type": null,
"book_name": null,
"mimetype": null,
"page_index": null,
"page_path": null,
"page_title": null
} | arXiv/math_arXiv_v0.2.jsonl | null | null |
\begin{document}
\title{Group distance magic Cartesian product of two cycles}
\author{Sylwia Cichacz$^{1}$\footnote{This work was partially supported by the Faculty of Applied Mathematics AGH UST statutory tasks within subsidy of Ministry of Science and Higher Education.}, Pawe{\l} Dyrlaga$^1$, Dalibor Froncek$^2$\\ $^1$AGH University of Science and Technology, Poland\\ $^2$University of Minnesota Duluth, U.S.A.}
\maketitle \begin{abstract} Let $G=(V,E)$ be a graph and $\Gamma $ an Abelian group both of order $n$. A $\Gamma$-distance magic labeling of $G$ is a bijection $\ell \colon V\rightarrow \Gamma $ for which there exists $\mu \in \Gamma $ such that $ \sum_{x\in N(v)}\ell (x)=\mu $ for all $v\in V$, where $N(v)$ is the neighborhood of $v$. Froncek showed that the Cartesian product $C_m \square C_n$, $m, n\geq3$ is a $\mathbb{Z}_{mn}$-distance magic graph if and only if $mn$ is even. It is also known that if $mn$ is even then $C_m \square C_n$ has a $\mathbb{Z}_{\alpha}\times \mathcal{A}$-distance magic labeling for any $\alpha \equiv 0 \pmod {\mathop{\rm lcm}\nolimits(m,n)}$ and any Abelian group $\mathcal{A}$ of order $mn/\alpha$. However, the full characterization of group distance magic Cartesian products of two cycles is still unknown.\\
In the paper we make progress towards the complete solution of this problem by proving some necessary conditions. We further prove that for $n$ even the graph $C_{n}\square C_{n}$ has a $\Gamma$-distance magic labeling for any Abelian group $\Gamma$ of order $n^{2}$. Moreover we show that if $m\neq n$, then there does not exist a $(\mathbb{Z}_2)^{m+n}$-distance magic labeling of the Cartesian product $C_{2^m} \square C_{2^{n}}$. We also give necessary and sufficient condition for $C_{m} \square C_{n}$ with $\mathop{\rm gcd}\nolimits(m,n)=1$ to be $\Gamma$-distance magic. \end{abstract}
\section{Introduction}\label{sec:intro}
All graphs $G=(V,E)$ are finite undirected simple graphs. For standard graph theoretic notation and definitions we refer to Diestel \cite{Diest}.
In 1963 Sedl\'{a}\v{c}ek \cite{ref_Sed} noticed the following connection between a~magic square $M$ of size $n\times n$ and an edge labeling of the complete bipartite graph $K_{n,n}$. Namely, assigning the entry in row $i$ and column $j$ of the magic square to the edge connecting the $i$-th vertex in one partite set to the $j$-th vertex in the other set, we obtain the sum of labels of edges incident with each vertex equal to the magic square constant. This type of labeling became known as \emph{magic labeling}, \emph{supermagic labeling} or \emph{vertex-magic edge labeling}. Another concept in graph labeling that was motivated by the construction of magic squares labels vertices instead. A \emph{distance magic labeling} of a graph $G$ of order $n$ is a bijection $\ell:V\rightarrow\{1,2,\dots,n\}$ with the property that there exists a positive integer $\mu$ such that
$$w(v)=\sum_{u\in N(v)}\ell(u)=\mu$$
for all $v\in V$, where $N(v)$ is the open neighborhood of $v$ and $w(v)$ is called the \emph{weight of the vertex} $v$. The constant $\mu$ is called the \emph{magic constant} of the labeling $\ell$. Any graph admitting a distance magic labeling is called a \emph{distance magic graph}.
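For small graphs the definition can be checked mechanically. The following Python sketch (ours, purely illustrative and not part of the paper) verifies the distance magic condition on cycles; the labeling of $C_4$ below pairs opposite vertices with labels summing to $5$.

```python
from itertools import permutations

def cycle_adj(n):
    # Open neighborhoods in the cycle C_n on vertices 0, ..., n-1.
    return {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}

def magic_constant(adj, labels):
    # Return the common neighborhood weight mu if `labels` is a distance
    # magic labeling of the graph `adj`, and None otherwise.
    weights = {sum(labels[u] for u in adj[v]) for v in adj}
    return weights.pop() if len(weights) == 1 else None

# C_4 with opposite vertices labeled {1, 4} and {2, 3}: every weight is 5.
mu4 = magic_constant(cycle_adj(4), {0: 1, 1: 2, 2: 4, 3: 3})

# Exhaustive search over all 5! labelings shows C_5 is not distance magic.
adj5 = cycle_adj(5)
found5 = any(magic_constant(adj5, dict(enumerate(p))) is not None
             for p in permutations(range(1, 6)))
```
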
We recall one out of four standard graph products (see \cite{IK}). Let $G$ and $H$ be two graphs. The \emph{Cartesian product} $G\square H$ is a graph with vertex set $V(G)\times V(H)$. Two vertices $(g,h)$ and $(g^{\prime },h^{\prime })$ are adjacent if and only if either $g=g^{\prime }$ and $h$ is adjacent with $h^{\prime }$ in $H$, or $h=h^{\prime }$ and $g$ is adjacent with $ g^{\prime }$ in $G$.
Rao et al. proved the following result for the Cartesian product of cycles in \cite{RSP}. \begin{thm}[\cite{RSP}]\label{thm:cart_cycle_integers}The Cartesian product $C_m \square C_n$, $ m,n\geq3$ is a distance magic graph if and only if $m,n \equiv 2\imod 4$ and $m=n$. \end{thm}
Assume $\Gamma$ is a finite Abelian group of order $n$ with the operation denoted by $+$. For convenience we will write $ka$ to denote $a + a + \ldots + a$ (where the element $a$ appears $k$ times), $-a$ to denote the inverse of $a$, and use $a - b$ instead of $a+(-b)$. Moreover, the notation $\sum_{a\in S}{a}$ will be used as a short form for $a_1+a_2+a_3+\dots+a_t$, where $a_1, a_2, a_3, \dots,a_t$ are all elements of the set $S$. The identity element of $\Gamma$ will be denoted by $0$.
Recall that any group element $\iota\in\Gamma$ of order 2 (i.e., $\iota\neq 0$ and $2\iota=0$) is called an \emph{involution}.
The magic labeling (in the classical point of view) with labels being the elements of an Abelian group has been studied for a long time (see papers by Stanley~\cite{ref_Sta,ref_Sta2}). Therefore, it was a natural step to label the vertices of a graph $G$ with elements of an Abelian group also in the case of distance magic labeling. This concept was introduced by~Froncek in~\cite{Fro1}.\\% defined the notion of group distance magic graphs, i.e. the graphs allowing the bijective labeling of vertices with elements of an Abelian group resulting in constant sums of neighbor labels.\\
A $\Gamma$\emph{-distance magic labeling} of a graph $G = (V, E)$ with $|V| = n$ is a bijection $\ell$ from $V$ to an Abelian group $\Gamma$ of order $n$
such that the weight $w(v) =\sum_{u\in N(v)}\ell(u)$ of every vertex $v \in V$ is equal to the same element $\mu\in \Gamma$, called the \emph{magic constant}. A graph $G$ is called a \emph{group distance magic graph} if there exists a $\Gamma$-distance magic labeling for every Abelian group $\Gamma$ of order $|V(G)|$.\\
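The group version of the condition can likewise be verified on small cases. The sketch below (ours, illustrative only) represents an Abelian group as a direct product of cyclic groups with componentwise addition; the $\mathbb{Z}_4$-labeling of $C_4$ is one concrete example.

```python
def gadd(a, b, orders):
    # Componentwise addition in Z_{orders[0]} x ... x Z_{orders[-1]}.
    return tuple((x + y) % m for x, y, m in zip(a, b, orders))

def group_magic_constant(adj, labels, orders):
    # Return mu if every open-neighborhood label sum equals the same
    # group element, else None.
    weights = set()
    for v in adj:
        w = (0,) * len(orders)
        for u in adj[v]:
            w = gadd(w, labels[u], orders)
        weights.add(w)
    return weights.pop() if len(weights) == 1 else None

# C_4 is Z_4-distance magic: label v0, v1, v2, v3 with 0, 2, 1, 3, so that
# each pair of second neighbors sums to 1 modulo 4.
adj = {v: [(v - 1) % 4, (v + 1) % 4] for v in range(4)}
mu = group_magic_constant(adj, {0: (0,), 1: (2,), 2: (1,), 3: (3,)}, (4,))
```
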
The first result on $\Gamma$-distance magic labelings of Cartesian products of cycles was proved by Froncek~\cite{Fro1}:
\begin{thm}\label{thm:cart_cycle_cyclic}\emph{(\cite{Fro1})} The Cartesian product $C_m \square C_n$, $m, n\geq3$ is a $\mathbb{Z}_{mn}$-distance magic graph if and only if
$mn$ is even. \end{thm}
The result was later improved by Cichacz~\cite{ref_CicAus}. \begin{thm}\label{thm:cart_cycle_Sylwia}\emph{(\cite{ref_CicAus})} Let $m$ or $n$ be even and $l =\mathop{\rm lcm}\nolimits(m,n)$. Then $C_m \square C_n$ has a $\mathbb{Z}_{\alpha}\times \mathcal{A}$-magic labeling for any $\alpha \equiv 0 \pmod {l}$ and any Abelian group $\mathcal{A}$ of order $mn/\alpha$. \end{thm}
The following related results were also proved in the respective papers.
\begin{thm}\label{thm:C_2^n_by_Z_2}\emph{(\cite{Fro1})}
The graph $C_{2^n}\square C_{2^n}$ has a $(\mathbb{Z}_2)^{2n}$-distance magic labeling for $n \geq 2$ and $\mu = (0,0,\ldots,0)$. \end{thm}
\begin{thm}\label{thm:odd-m,n}\emph{(\cite{ref_CicAus})}
If $m,n$ are odd, then $C_m \square C_n$ is not a $\Gamma$-distance magic graph for any Abelian group $\Gamma$ of order $mn$. \end{thm}
The following general problem is still widely open.
\begin{prob}\label{prob:general}\emph{(\cite{Fro1})}
For a given graph $C_m \square C_n$, determine all Abelian groups $\Gamma$ such that the graph $C_m \square C_n$ admits a $\Gamma$-distance magic labeling. \end{prob}
\indent Note that if a graph $G$ of order $n$ is distance magic, then it is $\mathbb{Z}_n$-distance magic. Moreover there are infinitely many distance magic graphs that at the same time are group distance magic \cite{ref_AnhCicPetTep1}. Hence Cichacz and Froncek stated the following conjecture. \begin{conj}[\cite{CicFro}]\label{conj:DM->GDM} If $G$ is a distance magic graph, then $G$ is group distance magic. \end{conj}
In the paper we make some progress towards a solution of Problem~\ref{prob:general} by proving some necessary conditions as well as some new existence results. In particular, we prove that for $n$ even the graph $C_{n}\square C_{n}$ has a $\Gamma$-distance magic labeling for any Abelian group $\Gamma$ of order $n^{2}$. Moreover we show that if $m\neq n$, then there does not exist a $(\mathbb{Z}_2)^{m+n}$-distance magic labeling of the Cartesian product $C_{2^m} \square C_{2^{n}}$. We prove a necessary and sufficient condition for $C_{m} \square C_{n}$ with $\mathop{\rm gcd}\nolimits(m,n)=1$ to be $\Gamma$-distance magic. Observe that the Cartesian product $C_{2^m} \square C_{2^{n}}$ is $\mathbb{Z}_{2^{m+n}}$-distance magic by Theorem~\ref{thm:cart_cycle_Sylwia} but is not distance magic by Theorem~\ref{thm:cart_cycle_integers}. Therefore, this result is the first example that shows that the assumptions in Conjecture~\ref{conj:DM->GDM} cannot be relaxed, that is, the statement that if a graph $G$ of order $n$ is $\mathbb{Z}_n$-distance magic then it is group distance magic is not true.
\section{Sufficient conditions}\label{sec:sufficient}
Recall that the \emph{exponent} $\exp{(\Gamma)}$ of a group $\Gamma$ of order $q$ with elements $a_1,a_2,\dots,a_q$ is the smallest possible $r$ such that $ra_i=0$ for any $a_i\in \Gamma$. It is well known that in Abelian groups, $r=\mathop{\rm lcm}\nolimits(o_1,o_2,\dots,o_q)$ where $o_i$ is the order of $a_i$ for $i=1,2,\dots,q$.
It is also well known that if $\Gamma$ has an even order, then there is an element $a_i$ of even order, and hence $\exp{(\Gamma)}=r$ must be even. Because the non-existence of $\Gamma$-labelings of $C_m\Box C_n$ for $|\Gamma|=mn$ odd follows from Theorem~\ref{thm:odd-m,n}, we will from now on only consider the case where $|\Gamma|=mn$ is even. Consequently, we will always have $\exp{(\Gamma)}=r$ even.
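For a group given as a direct product of cyclic factors, the exponent is simply the least common multiple of the factor orders; a minimal sketch (ours):

```python
from math import gcd
from functools import reduce

def exponent(orders):
    # Exponent of Z_{orders[0]} x ... x Z_{orders[-1]}: the lcm of the
    # cyclic factor orders, which equals the lcm of all element orders.
    return reduce(lambda a, b: a * b // gcd(a, b), orders, 1)
```

For example, $\exp(\mathbb{Z}_2\times\mathbb{Z}_4)=4$ and $\exp((\mathbb{Z}_2)^3)=2$, while $\mathbb{Z}_3\times\mathbb{Z}_4$ is cyclic of order 12.
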
We start with the following general theorem for Cartesian product of graphs:
\begin{thm}\label{thm:product} Let $\Gamma_1$ and $\Gamma_2$ be Abelian groups with exponents $r_1$ and $r_2$, respectively. Let $a_1$ and $a_2$ be some positive integers. If an $a_1r_{1}$-regular graph $G_{1}$ is $\Gamma _{1}$-distance magic and an $ a_2r_{2}$ -regular graph $G_{2}$ is $\Gamma _{2}$-distance magic, then the Cartesian product $G_{1}\Box G_{2}$ is $\Gamma _{1}\times \Gamma _{2}$ -distance magic. \end{thm}
\begin{proof}
Let $\ell _{i}\colon V(G_{i})\rightarrow \Gamma_{i}$ be a $\Gamma _{i}$-distance magic labeling, and $\mu _{i}$ the magic constant for the graph $G_{i}$, $i\in \{1,2\}$. Define the labeling $ \ell :V(G_{1}\Box G_{2})\rightarrow \Gamma _{1}\times \Gamma _{2}$ for $G_{1}\Box G_{2}$, as: \begin{equation*} \ell ((x,y))=(\ell _{1}(x),\ell _{2}(y)). \end{equation*} Obviously, $\ell $ is a bijection and moreover, for any $(u,w)\in V(G_{1}\Box G_{2})$: \begin{eqnarray*} w(u,w) &=&\sum_{(x,y)\in N_{G_{1}\Box G_{2}}((u,w))}{\ell (x,y)}\\[6pt]
&=&\left(\sum_{x\in N_{G_{1}}(u)}\ell _{1}(x) +a_2r_{2}\ell(u),\sum_{y\in N_{G_{2}}(w)}\ell_{2}(y)+a_1r_1\ell(w)\right) \\[8pt]
&=&(\mu _{1},\mu _{2})=\mu , \end{eqnarray*} which settles the proof.\end{proof}
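Theorem~\ref{thm:product} can be checked numerically on a small instance. In the sketch below (ours, not part of the argument), any bijection $C_4\to(\mathbb{Z}_2)^2$ is distance magic, because the two repeated neighbor labels cancel in characteristic 2, and the product labeling makes $C_4\square C_4$ a $(\mathbb{Z}_2)^4$-distance magic graph.

```python
from itertools import product

def torus_adj(m, n):
    # Adjacency of the Cartesian product of the cycles C_m and C_n.
    return {(i, j): [((i - 1) % m, j), ((i + 1) % m, j),
                     (i, (j - 1) % n), (i, (j + 1) % n)]
            for i in range(m) for j in range(n)}

def xor(a, b):
    # Addition in (Z_2)^k.
    return tuple(x ^ y for x, y in zip(a, b))

# Any bijection C_4 -> (Z_2)^2 works here: the two neighbors sharing a
# coordinate with v contribute the same factor twice, which cancels.
ell1 = dict(enumerate(product((0, 1), repeat=2)))

# Product labeling ell((x, y)) = (ell1(x), ell1(y)) on C_4 x C_4.
adj = torus_adj(4, 4)
labels = {(x, y): ell1[x] + ell1[y] for (x, y) in adj}
weights = set()
for v in adj:
    w = (0, 0, 0, 0)
    for u in adj[v]:
        w = xor(w, labels[u])
    weights.add(w)
```
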
Theorem~\ref{thm:product} implies the following observation. \begin{obs} Let $d \equiv 0 \mod 4$. A hypercube $\mathcal{Q}_d$ is $\Gamma$-distance magic for any Abelian $\Gamma$ of order $2^d$ with $\exp{(\Gamma)}\leq4$. \end{obs} \begin{proof}
Note that in the factorization of $\Gamma$ we have only factors $\mathbb{Z}_2$ and $\mathbb{Z}_4$ since $\exp{(\Gamma)}\leq4$. The proof is by induction on $d$. Because $\mathcal{Q}_4\cong C_4 \Box C_4$ we obtain by Theorems~\ref{thm:cart_cycle_Sylwia} and \ref{thm:C_2^n_by_Z_2} that $\mathcal{Q}_4$ is $\Gamma$-distance magic for any Abelian $\Gamma$ of order $16$ with $\exp{(\Gamma)}\leq4$. Recall that for $d\geq8$ the hypercube $\mathcal{Q}_d$ can be also defined recursively in terms of the Cartesian product of two graphs as $\mathcal{Q}_d=\mathcal{Q}_{d-4}\Box \mathcal{Q}_4$. Obviously $\mathcal{Q}_d$ is $d$-regular. Therefore we are done by Theorem~\ref{thm:product}. \end{proof}
Let $V(C_m \square C_n)=\{x_{i,j}:0\leq i\leq m-1, 0\leq j\leq n-1\}$, where $N(x_{i,j})=\{x_{i,j-1},x_{i,j+1},x_{i+1,j},x_{i-1,j}\}$ and the operations on the first and second suffix are performed modulo $m$ and $n$, respectively. Without loss of generality we can assume $m\leq n$. By a diagonal $D^j$ of $C_m \square C_n$ we mean a sequence of vertices $(x_{0,j},x_{1,j+1},\ldots,$ $x_{m-1,j+m-1},x_{0,j+m}, x_{1,j+m+1},$ $\ldots,x_{m-1,j-1})$ of length $l$. It is easy to observe that $l = \mathop{\rm lcm}\nolimits(m,n)$, the least common multiple of $m$ and $n$. We denote the $j$-th diagonal by $D^j = (d^j_0,d^j_1,\ldots,d^j_{l-1})$ and call $D^0$ the \textit{main diagonal}.
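The two counting facts used throughout, that each diagonal has length $\mathop{\rm lcm}\nolimits(m,n)$ and that there are $\mathop{\rm gcd}\nolimits(m,n)$ of them, are easy to confirm directly; a short sketch (ours, illustrative):

```python
from math import gcd

def diagonal(m, n, j):
    # Diagonal D^j of C_m x C_n: start at x_{0,j} and step to
    # x_{i+1, j+1} (first index mod m, second mod n) until it closes.
    diag, i, jj = [], 0, j
    while True:
        diag.append((i, jj))
        i, jj = (i + 1) % m, (jj + 1) % n
        if (i, jj) == (0, j):
            return diag

m, n = 4, 6
l = m * n // gcd(m, n)                       # lcm(m, n) = 12
diags = [diagonal(m, n, j) for j in range(gcd(m, n))]
```
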
Now we slightly strengthen Theorem~\ref{thm:cart_cycle_Sylwia}.
\begin{thm}\label{thm:lcm/2} Let $mn$ be even and $l =\mathop{\rm lcm}\nolimits(m,n)$. Then $C_m \square C_n$ has a $\mathbb{Z}_{\alpha}\times \mathcal{A}$-magic labeling for any $\alpha \equiv 0 \pmod {l/2}$ and any Abelian group $\mathcal{A}$ of order $mn/\alpha$. \end{thm} \begin{proof} Notice that $l=2k$ for some $k$ and $\alpha=kh$ for some $h$. Notice that $d=\mathop{\rm gcd}\nolimits(m,n)$ is the number of diagonals of $C_m \square C_n$.
For $\alpha\equiv0\pmod{l}$ the claim follows from Theorem~\ref{thm:cart_cycle_Sylwia}. Hence, we only need to consider the case when $\alpha \equiv l/2 \pmod l$.
Let $r=mn/\alpha$ and $\Gamma\cong\mathbb{Z}_{\alpha}\times \mathcal{A}$, thus if $g \in \Gamma$, then we can write that $g=(j,a_i)$ for $j \in \mathbb{Z}_{\alpha}$ and $a_i \in \mathcal{A}$ for $i=0,1,\ldots,r-1$. We can assume that $a_0$ is the identity in $\mathcal{A}$. Let $\ell(x)=(l_1(x),l_2(x))$ where $l_1(x)\in\mathbb{Z}_{\alpha}$ and $l_2(x)\in\mathcal{A}$.
There exists a subgroup $\langle h' \rangle$ of $\mathbb{Z}_{\alpha}$ of order $k=l/2$, therefore the element $h=(h',a_0)$ generates a subgroup $H$ in $\Gamma$ of order $k$. Let $b_0,b_1,\ldots,b_{2d-1}$ be the set of coset representatives for $\Gamma/H$. Notice that in any cyclic group $\mathbb{Z}_{2j}$, $j\geq 1$ there exists an element $g\neq0$ such that there is no $a\in \mathbb{Z}_{2j}$ satisfying $2a=g$ (for instance take $g=1\in \mathbb{Z}_{2j}$). Thus, by the Fundamental Theorem of Finite Abelian Groups, because $|\Gamma/H|$ is even, we can assume without loss of generality that $b_1\in \Gamma/H$ is such that $b_1\neq 2b$ for any $b\in \Gamma/H$. Moreover we can partition $\Gamma/H$ into $d$ pairs $(h_i,h_i')$, where $h_i+h_i'=b_1$ and $h_i\neq h_i'$ for $i=0,1,\ldots,d-1$.
Label the vertices of $D^0$ as follows:
$$\ell(d^0_{2i})=ih+h_0,\; \ell(d^0_{2i+1})=-ih-h_0+b_1$$ for $i=0,1,\ldots,k-1$.
The vertices in $D^1,D^2,D^3,\dots,D^{d-1}$ will be labeled as
\begin{eqnarray*} \ell(d^{j}_{g})&=l_1(d^{j-1}_g)+h_{j+1} \ \text{if} \ g\equiv 1 \imod 2,\\ \ell(d^{j}_{g})&=l_1(d^{j-1}_g)-h_{j+1} \ \text{if} \ g\equiv 0 \imod 2. \end{eqnarray*}
Observe that the labeling $\ell$ is a bijection because $h_j\neq -h_i+b_1$ for any $i\neq j$. Moreover,
\begin{eqnarray*}
\ell(d^{j}_{2i})+\ell(d^{j}_{2i+1}) &= &b_1 \hskip35pt \text{and}\\
\ell(d^{j}_{2i+1})+\ell(d^{j}_{2i+2}) &= & h+b_1 \end{eqnarray*} for any $i$.
If $d>2$, then the vertex $x_{i',j'}=d^j_i$ has in $C_m \square C_n$ neighbors $d^{j-1}_{i},d^{j-1}_{i+1},d^{j+1}_{i-1}$ and $d^{j+1}_{i}$. Therefore $w(d^j_i)=h+2b_1$ and the labeling is $\Gamma$-distance magic as desired.
If $d\leq2$, then the vertex $x_{i',j'}$ has in $C_m \square C_n$ neighbors $d^{j}_{a},d^{j}_{a+1},d^{j}_{b-1}$ and $d^{j}_{b} $ for $0\leq j\leq 1$, $0\leq a<b\leq l-1$. We know that at least one of $m,n$ is even, so we can assume that $m = 2s$. Because $d^{j}_{a}=x_{i',j'-1}$ and $d^{j}_{b}= x_{i',j'+1}$, it is clear that $a = b + qm$ for some $1\leq q < l/m$. But $m = 2s$ and $a = b + 2qs$ and hence $a$ and $b$ have the same parity. When $a$ and $b$ are even, say $a = 2c$ and $b = 2f$, then
$$\ell(d^{j}_{a})+\ell(d^{j}_{a+1})=\ell(d^{j}_{2c})+\ell(d^{j}_{2c+1})=b_1$$
and
$$\ell(d^{j}_{b-1})+\ell(d^{j}_{b})=\ell(d^{j}_{2f-1})+\ell(d^{j}_{2f})=b_1+h,$$
which implies $w(x_{i',j'})=h+2b_1$.\end{proof}
Now we present a class of group distance magic cycle products, that is, cycle products that are $\Gamma$-distance magic for any Abelian group $\Gamma$ of an appropriate order.
\begin{thm}\label{thm:C_n x C_n} Let $n$ be even. Then $C_{n} \square C_{n}$ has a $\Gamma$-distance magic labeling for any Abelian group $\Gamma$ of order $n^2$. \end{thm} \begin{proof} The Fundamental Theorem of Finite Abelian Groups states that a finite Abelian group $\Gamma$ of order $m=n^2$ can be expressed as the direct product of cyclic subgroups of prime-power order. This implies that
$$
\Gamma\cong\mathbb{Z}_{p_1^{\alpha_1}}\times\mathbb{Z}_{p_2^{\alpha_2}}\times\ldots\times\mathbb{Z}_{p_k^{\alpha_k}}\;\;\; \mathrm{where}\;\;\; n = p_1^{\alpha_1}\cdot p_2^{\alpha_2}\cdot\ldots\cdot p_k^{\alpha_k} $$
and $p_i$ for $i \in \{1, 2,\ldots,k\}$ are primes, not necessarily distinct. This product is unique up to the order of the direct product. Therefore there exists $H<\Gamma$ such that $|H|=n$. Let $b_0,b_1,\ldots,b_{n-1}$ be coset representatives of $\Gamma/H$. Recall that in any Abelian group of even order the number of involutions is odd, therefore
$\Gamma/H$ has $2t-1$ involutions $\iota_1,\iota_2,\ldots,\iota_{2t-1}$ for $t\geq 1$.
Let $b_0$ be the identity element of $\Gamma/H$ and $b_i=\iota_i$ for $i\in\{1,2,\ldots,2t-1\}$, and $b_{i+1} = -b_i$ for $i \in\{ 2t,2t+2,2t+4,\ldots,n-2\}$.
Observe that because $|H|$ is even there exists an involution $\iota\in H$ ($\iota\neq 0$ and $2\iota=0$). We will define a bijection $\varphi\colon H \rightarrow H$ such that $\varphi(x)\neq x$ for any $x\in H$. For $n\equiv 0 \pmod 4$ let $\varphi(x)=x+\iota$. When $n\equiv 2 \pmod 4$, then let $\varphi(x)=-x+\iota$. Notice that for $n\equiv 2 \pmod 4$ there does not exist $x\in H$ such that $2x=\iota$.
Hence we can partition $H$ into $n/2$ pairs $(h_i,h_i')$ such that $h_i'=\varphi(h_i)$ and $h_i\neq h_i'$ for $i=0,1,\ldots,n/2-1$.
Now, for $i=0,1,\dots,n-1$, label the vertices of $D^0$ as $$\ell(d^0_{i})=h_i,\;\;\;\ell(d^0_{n/2+i})=\varphi(h_i) $$ and the vertices of $D^2$ as \begin{eqnarray*}
\ell(d^2_{i}) &=-\ell(d^0_{i+1})+b_2 & \text{if} \ i\equiv 0 \imod 2 \hskip10pt \text{and}\\
\ell(d^2_{i}) &=-\ell(d^0_{i+1})-b_2 &\text{if} \ i\equiv 1 \imod 2. \end{eqnarray*}
For $r\in\{1,3\}$ label the vertices of $D^r$ as \begin{eqnarray*}
\ell(d^r_{i}) &=\ell(d^{r-1}_{i})-b_{r-1}+b_r & \text{if} \ i\equiv 0 \imod 2 \hskip10pt \text{and}\\
\ell(d^r_{i}) &=\ell(d^{r-1}_{i})+b_{r-1}-b_r &\text{if} \ i\equiv 1 \imod 2. \end{eqnarray*}
For $r\in\{4,5,\ldots,n-1\}$ the vertices in $D^r$ will be labeled as \begin{eqnarray*}
\ell(d^{r}_{i}) &=\ell(d^{r-4}_{i+2})-b_{r-4}+b_{r} & \text{if} \ i\equiv 0 \imod 2\hskip10pt \text{and}\\
\ell(d^{r}_{i}) &=\ell(d^{r-4}_{i+2})+b_{r-4}-b_{r} & \text{if} \ i\equiv 1 \imod 2. \end{eqnarray*}
Observe that $$w(d^{r}_{i})= \ell(d^{r-1}_{i})+\ell(d^{r-1}_{i+1})+\ell(d^{r+1}_{i-1})+\ell(d^{r+1}_{i}).$$
Assume first that $r\not \in \{n-1,0\}$. When $r\equiv 1,2\pmod 4$, then $$
w(d^{r}_{i})= \ell(d^{r-1}_{i})+\ell(d^{r-1}_{i+1})-\ell(d^{r-1}_{i})-\ell(d^{r-1}_{i+1})+2\iota=0. $$ If
$r\equiv 0,3\pmod 4$,
then $$
w(d^{r}_{i})= -\ell(d^{r-3}_{i+1})-\ell(d^{r-3}_{i+2})+2\iota+\ell(d^{r-3}_{i+1})+\ell(d^{r-3}_{i+2})=0. $$
Assume now that $r=n-1$ and $n\equiv 0 \pmod 4$. Then
\begin{eqnarray*}
w(d^{n-1}_{i})
&=&\ell(d^{n-2}_{i})+\ell(d^{n-2}_{i+1})+\ell(d^{0}_{i-1})+\ell(d^{0}_{i})\\
&=&\ell(d^{2}_{i+2(n/4-1)})+\ell(d^{2}_{i+1+2(n/4-1)})+\ell(d^{0}_{i-1})+\ell(d^{0}_{i})\\
&=&\ell(d^2_{i-2+n/2})+\ell(d^2_{i-1+n/2})+\ell(d^{0}_{i-1})+\ell(d^{0}_{i})\\
&=&-\ell(d^0_{i-1+n/2})-\ell(d^0_{i+n/2})+\ell(d^{0}_{i-1})+\ell(d^{0}_{i})\\
&=&-h_{i-1}-h_{i}+h_{i-1}+h_i=0. \end{eqnarray*}
If $r=n-1$ and $n\equiv 2 \pmod 4$, then \begin{eqnarray*}
w(d^{n-1}_{i})
&=&\ell(d^{n-2}_{i})+\ell(d^{n-2}_{i+1})+\ell(d^{0}_{i-1})+\ell(d^{0}_{i})\\
&=&\ell(d^{0}_{i+(n/2-1)})+\ell(d^{0}_{i+1+(n/2-1)})+\ell(d^{0}_{i-1})+\ell(d^{0}_{i})\\
&=&\ell(d^0_{i-1+n/2})+\ell(d^0_{i+n/2})+\ell(d^{0}_{i-1})+\ell(d^{0}_{i})\\
&=&h'_{i-1}+h'_{i}+h_{i-1}+h_i\\
&=&0. \end{eqnarray*}
Similarly we obtain $w(d^{0}_{i})=0$. Hence the labeling is $\Gamma$-distance magic as desired.\end{proof}
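The fixed-point-free pairing map $\varphi$ used in the proof above can be tested computationally when $H$ is cyclic. The sketch below (ours, illustrative) specializes to $H=\mathbb{Z}_n$ with involution $\iota=n/2$ and checks both congruence cases.

```python
def phi(x, n):
    # The pairing map from the proof, specialised to H = Z_n with the
    # involution iota = n/2: x + iota when n = 0 (mod 4), and
    # -x + iota when n = 2 (mod 4).
    iota = n // 2
    return (x + iota) % n if n % 4 == 0 else (-x + iota) % n

ns = (6, 8, 10, 12)
fixed_point_free = all(phi(x, n) != x for n in ns for x in range(n))
pairs_up = all(phi(phi(x, n), n) == x for n in ns for x in range(n))
```

Since $\varphi$ is an involution without fixed points, its orbits partition $\mathbb{Z}_n$ into $n/2$ pairs, as required in the proof.
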
Theorem~\ref{thm:C_n x C_n} now immediately implies the following. \begin{cor}\label{cor:C_2^n}
The graph $C_{2^n}\square C_{2^n}$ has a $\Gamma$-distance magic labeling for $n \geq 2$ and any Abelian group $\Gamma$ of order $2^{2n}$. \end{cor}
\section{Necessary conditions}\label{sec:necessary}
Now we present theorems showing that if we have a group $\Gamma\cong\mathbb{Z}_{p_1^{\alpha_1}}\times\mathbb{Z}_{p_2^{\alpha_2}}\times\ldots\times\mathbb{Z}_{p_k^{\alpha_k}}$ with elements $(g_1,g_2,\dots,g_k)$ and the exponent $r=\exp(\Gamma)$ is rather small compared with the length of the diagonal of $C_{m} \square C_{n}$, then there is no $\Gamma$-distance magic labeling of the cycle product. In other words, the results show that if some entries $g_i$ of $(g_1,g_2,\dots,g_k)\in\Gamma$ would have to repeat too many times, then such a labeling does not exist.
For a positive integer $m$ define a function
$$f(m)=\left\{ \begin{array}{lll} m/4&\textrm{if}&m\equiv0 \pmod 4,\\ m/2 &\textrm{if}&m\equiv2 \pmod 4,\\ m &\textrm{if}&m\equiv1 \pmod 2. \end{array}\right.$$
\begin{thm}\label{thm:main-f}
Let $\Gamma$ be an Abelian group of an even order $mn$ with exponent $r$. If $2r\min\{f(m),f(n)\} <\mathop{\rm lcm}\nolimits(m,n)$, then there does not exist a $\Gamma$-distance magic labeling of the Cartesian product $C_{m} \square C_{n}$. \end{thm}
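The hypothesis of Theorem~\ref{thm:main-f} is easy to evaluate mechanically; a short sketch (ours, illustrative):

```python
from math import gcd

def f(m):
    # The function f defined before the theorem.
    if m % 4 == 0:
        return m // 4
    return m // 2 if m % 4 == 2 else m     # m = 2 (mod 4), or m odd

def rules_out(m, n, r):
    # True when 2 * r * min(f(m), f(n)) < lcm(m, n), i.e. when the
    # theorem excludes a Gamma-distance magic labeling of C_m x C_n
    # for every Abelian Gamma of order m*n with exponent r.
    return 2 * r * min(f(m), f(n)) < m * n // gcd(m, n)
```

For instance, with $r=2$ (as for $(\mathbb{Z}_2)^{10}$) the pair $(m,n)=(16,64)$ is excluded, while the criterion is silent for $(16,16)$.
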
\begin{proof}
For the sake of contradiction, suppose that there exists a $\Gamma$-distance magic labeling $\ell$ of the Cartesian product $C_{m} \square C_{n}$ with magic constant $\mu$. By our assumption, we have $mn$ even. Without loss of generality we can assume that $m<n$ and $\ell(x_{0,0})=0$.
Let us consider the weights of $x_{0,1}$ and $x_{m-1,2}$: $$
w(x_{0,1})=\ell(x_{0,0})+\ell(x_{1,1})+\ell(x_{m-1,1})+\ell(x_{0,2}) $$ and $$
w(x_{m-1,2})=\ell(x_{m-1,1})+\ell(x_{0,2})+\ell(x_{m-2,2})+\ell(x_{m-1,3}). $$
Because we assumed that $\ell$ is a $\Gamma$-distance magic labeling, we have $w(x_{0,1})=w(x_{m-1,2})=\mu$, which yields
$$ \ell(x_{0,0})+\ell(x_{1,1})+\ell(x_{m-1,1})+\ell(x_{0,2})=\ell(x_{m-1,1})+\ell(x_{0,2})+\ell(x_{m-2,2})+\ell(x_{m-1,3}). $$
and hence
$$ \ell(x_{0,0})+\ell(x_{1,1})=\ell(x_{m-2,2})+\ell(x_{m-1,3}). $$
Similarly, comparing weights of vertices $x_{m-2,3}$ and $x_{m-3,4}$ we obtain
\begin{align*}
\ell(x_{m-2,2})+\ell(x_{m-1,3})+\ell(x_{m-3,3})+\ell(x_{m-2,4})=\\
\ell(x_{m-3,3})+\ell(x_{m-2,4})+\ell(x_{{m}-4,4})+\ell(x_{{m}-3,5}) \end{align*}
and thus
$$ \ell(x_{m-2,2})+\ell(x_{m-1,3})=\ell(x_{{m}-4,4})+\ell(x_{{m}-3,5}), $$
which implies
$$ \ell(x_{0,0})+\ell(x_{1,1})=\ell(x_{{m}-4,4})+\ell(x_{{m}-3,5}). $$
Repeating that procedure we conclude that
$$
\ell(x_{0,0})+\ell(x_{1,1})=\ell(x_{-2\alpha,2\alpha})+\ell(x_{1-2\alpha,1+2\alpha})=a_0 $$
for some $a_0\in \Gamma$ and any natural number $\alpha$.
Recall that by the {main diagonal} of $C_m \square C_n$ we mean the cyclic sequence of vertices $(x_{0,0},x_{1,1},\ldots,$ $x_{m-1,m-1},x_{0,m}, x_{1,m+1},$ $\ldots,x_{m-1,n-1})$ of length $l= \mathop{\rm lcm}\nolimits(m,n)$. We now consider the following system of equations, going along the main diagonal. Notice that the subscripts need to be read modulo $m$ and $n$, respectively. So, for instance, when $m<n$, the vertex denoted by $x_{m,m}$ is in fact $x_{0,m}$.
Analogously, we get \begin{align*} \ell(x_{j,j})+\ell(x_{j+1,j+1})=\ell(x_{j-2\alpha,j+2\alpha})+\ell(x_{j+1-2\alpha,j+1+2\alpha})=a_j \end{align*} for some $a_j\in \Gamma$ and any natural number $\alpha$.
Note that $x_{-2\alpha,2\alpha}$ belongs to the same diagonal as $x_{0,0}$ for $2\alpha\equiv-2\alpha\pmod m$ (then $x_{-2\alpha,2\alpha}=x_{2\alpha,2\alpha}$) or $2\alpha\equiv-2\alpha \pmod n$ (then $x_{-2\alpha,2\alpha}=x_{-2\alpha,-2\alpha}$), which happens for both $\alpha=f(m)$ and $\alpha=f(n)$.
This implies that taking $k=\min\{f(m),f(n)\}$ we obtain: \begin{align}\label{eq:rownanie} \ell(x_{j,j})+\ell(x_{j+1,j+1})=\ell(x_{j+2k,j+2k})+\ell(x_{j+1+2k,j+1+2k}). \end{align}
Note that $x_{2rk,2rk}\neq x_{0,0}$ since $2rk< \mathop{\rm lcm}\nolimits(m,n)$. Also, the elements $a_0,a_1,\dots,a_{2k-1}$ are not necessarily all distinct.
\begin{align}\label{uklad} \left\{ \begin{array}{lll}
\ell(x_{0,0})&+\ell(x_{1,1}) & =a_0 \\
\ell(x_{1,1})&+\ell(x_{2,2}) & =a_1 \\
\vdots\\
\ell(x_{2k-1,2k-1})&+\ell(x_{2k,2k})&=a_{2k-1} \\
\ell(x_{2k,2k})&+\ell(x_{2k+1,2k+1}) & =a_0 \\
\ell(x_{2k+1,2k+1})&+\ell(x_{2k+2,2k+2}) & =a_1 \\
\vdots\\
\ell(x_{4k-1,4k-1})&+\ell(x_{4k,4k})&=a_{2k-1} \\
\vdots\\
\ell(x_{2(r-1)k,2(r-1)k})&+\ell(x_{2(r-1)k+1,2(r-1)k+1}) & =a_0 \\
\ell(x_{2(r-1)k+1,2(r-1)k+1})&+\ell(x_{2(r-1)k+2,2(r-1)k+2}) & =a_1 \\
\vdots\\
\ell(x_{2rk-1,2rk-1})&+\ell(x_{2rk,2rk})&=a_{2k-1} .\\ \end{array}\right. \end{align} Multiplying every other equation by $-1$, starting with
$$-\ell(x_{1,1})-\ell(x_{2,2}) =-a_1,$$
and adding all equations, we obtain
$$ \ell(x_{0,0})-\ell(x_{2rk,2rk}) =r\sum_{i=0}^{2k-1}(-1)^{i}a_i. $$
Recall that $\ell(x_{0,0})=0$. Because $r=\exp(\Gamma)$, we have $r\sum_{i=0}^{2k-1}(-1)^{i}a_i=0$. This implies $-\ell(x_{2rk,2rk})=0$, which is a contradiction, because the labeling is injective and we have assumed that $x_{2rk,2rk}\neq x_{0,0}$. \end{proof}
The following theorem gives a similar result in terms of a more obvious bound, using $\mathop{\rm gcd}\nolimits(m,n)$, the number of diagonals in $C_{m} \square C_{n}$.
\begin{thm}\label{thm:gcd} Let $\Gamma$ be an Abelian group of an even order $mn$ with exponent $r$. If $2r\mathop{\rm gcd}\nolimits(m,n) <\mathop{\rm lcm}\nolimits(m,n)$, then there does not exist a $\Gamma$-distance magic labeling of the Cartesian product $C_{m} \square C_{n}$. \end{thm}
\begin{proof} We again use contradiction and assume that there exists a $\Gamma$-distance magic labeling $\ell$ of $C_{m} \square C_{n}$ with magic constant $\mu$ and $mn$ is even, $m<n$ and $\ell(x_{0,0})=0$.
By the \emph{first backward diagonal} of $C_m \square C_n$ we mean the cyclic sequence of vertices
$(x_{0,1},x_{1,0},x_{2,n-1},\ldots,$ $x_{m-1,n-m+2},x_{0,n-m+1},x_{1,n-m},\ldots,x_{m-1,2})$
of length $l = \mathop{\rm lcm}\nolimits(m,n)$. Similarly, the sequence
$(x_{0,2},x_{1,1},x_{2,0},\ldots,x_{m-1,n-m+3},$ \linebreak $ x_{0,n-m+2}, x_{1,n-m+1},\ldots,x_{m-1,3})$ is the \emph{second backward diagonal} and so on.
Set $k=\mathop{\rm gcd}\nolimits(m,n)$. Because the length of each backward diagonal is $\mathop{\rm lcm}\nolimits(m,n)$, there are $k$ backward diagonals, and the vertices
$ x_{0,k+1},x_{1,k},x_{2,k-1}$ $\ldots, x_{0,2k+1},x_{1,2k},x_{2,2k-1}\ldots, $ $x_{k,k+1},\ldots, x_{0,3k+1},x_{1,3k},x_{2,3k-1},\ldots,x_{2k,2k+1}, \dots $
belong to the same backward diagonal as $x_{0,1}$.
We look at weights of vertices of that sequence, starting at $x_{0,1}$ and going the opposite direction, that is, $w(x_{0,1}),w(x_{m-1,2}),w(x_{m-2,3}),\ldots,w(x_{1,0})$.
The weights of $x_{0,1}$ and $x_{m-1,2}$ are again $$ w(x_{0,1})=\ell(x_{0,0})+\ell(x_{1,1})+\ell(x_{m-1,1})+\ell(x_{0,2})=\mu $$ and $$ w(x_{m-1,2})=\ell(x_{m-1,1})+\ell(x_{0,2})+\ell(x_{m-2,2})+\ell(x_{m-1,3})=\mu. $$
Comparing the above equalities, we get
$$ \ell(x_{0,0})+\ell(x_{1,1})=\ell(x_{m-2,2})+\ell(x_{m-1,3}). $$
Continuing this way, we obtain
\begin{align}\label{eq:rownanie-gcd-a_0} \ell(x_{0,0})+\ell(x_{1,1})=\ell(x_{-2\alpha_0,2\alpha_0})+\ell(x_{1-2\alpha_0,1+2\alpha_0})=c_1 \end{align}
for some $c_1\in \Gamma$ and any natural number $\alpha_0$. In particular, because there are $k$ diagonals and $x_{2k,2k+1}$ belongs to the same backward diagonal as $x_{0,1}$, we must also have\\
$$\ell(x_{0,0})+\ell(x_{1,1})=\ell(x_{2k,2k})+\ell(x_{2k+1,2k+1})=c_1,$$ $$\ell(x_{0,0})+\ell(x_{1,1})=\ell(x_{4k,4k})+\ell(x_{4k+1,4k+1})=c_1,$$ $$\vdots$$ $$\ell(x_{0,0})+\ell(x_{1,1})=\ell(x_{2(r-1)k,2(r-1)k})+\ell(x_{2(r-1)k+1,2(r-1)k+1})=c_1.$$
Looking at the first backward diagonal again and starting to compare the weights at $x_{1,0},x_{0,1},x_{m-1,2},\dots$ instead, we get $$ w(x_{1,0})=\ell(x_{1,n-1})+\ell(x_{2,0})+\ell(x_{0,0})+\ell(x_{1,1})=\mu $$ and $$ w(x_{0,1})=\ell(x_{0,0})+\ell(x_{1,1})+\ell(x_{m-1,1})+\ell(x_{0,2})=\mu. $$
Comparing these equalities, we obtain $$ \ell(x_{1,n-1})+\ell(x_{2,0})=\ell(x_{m-1,1})+\ell(x_{0,2})=d_1 $$ for some element $d_1$.
Comparing the weights of remaining vertices, we again have $$ \ell(x_{1,n-1})+\ell(x_{2,0})=\ell(x_{1-2\beta_1,-1+2\beta_1})+\ell(x_{2-2\beta_1,2\beta_1})=d_1 $$ for any $\beta_1\in\mathbb{Z}$.
Proceeding in the same fashion, we obtain two such equalities for each diagonal. Ordering them conveniently and renaming the elements $c_i$ and $d_i$, we again obtain the same system of equations as in the previous theorem. Namely,
$$ \left\{ \begin{array}{lll} \ell(x_{0,0})&+\ell(x_{1,1}) & =a_0 \\ \ell(x_{1,1})&+\ell(x_{2,2}) & =a_1 \\ \vdots\\ \ell(x_{2k-1,2k-1})&+\ell(x_{2k,2k})&=a_{2k-1} \\ \ell(x_{2k,2k})&+\ell(x_{2k+1,2k+1}) & =a_0 \\ \ell(x_{2k+1,2k+1})&+\ell(x_{2k+2,2k+2}) & =a_1 \\ \vdots\\ \ell(x_{4k-1,4k-1})&+\ell(x_{4k,4k})&=a_{2k-1} \\ \vdots\\ \ell(x_{2(r-1)k,2(r-1)k})&+\ell(x_{2(r-1)k+1,2(r-1)k+1}) & =a_0 \\ \ell(x_{2(r-1)k+1,2(r-1)k+1})&+\ell(x_{2(r-1)k+2,2(r-1)k+2}) & =a_1 \\ \vdots\\ \ell(x_{2rk-1,2rk-1})&+\ell(x_{2rk,2rk})&=a_{2k-1} .\\ \end{array}\right. $$
Solving it the same way as before, we again obtain $$\ell(x_{2rk,2rk})=r\sum_{i=0}^{2k-1}(-1)^{i}a_i=0$$ by the property of group exponent $r$. But because $\ell(x_{0,0})=0$ and $x_{2rk,2rk}\neq x_{0,0}$, we get a contradiction, which completes the proof. \end{proof}
Notice that each of the above two theorems is useful in different scenarios. For instance, when $m\equiv0\pmod4$ and $n=mq$, then Theorem~\ref{thm:main-f} gives a stronger result; when $m$ is odd and $n\neq mq$, then it is better to use Theorem~\ref{thm:gcd}.
To illustrate the strength of the theorems above with a concrete example, we present two special cases separately.
\begin{cor}\label{cor:many-Z_s}
Let $\Gamma\cong (\mathbb{Z}_s)^t$ when $s$ is even or $\Gamma\cong (\mathbb{Z}_s)^t\times\mathbb{Z}_2$ when $s$ is odd, and $|\Gamma|=mn$. Then $C_m\square C_n$ does not have a $\Gamma$-distance magic labeling if $n>sm$. \end{cor}
\begin{proof}
Follows directly from Theorem~\ref{thm:gcd}. \end{proof}
The following non-existence result is in a certain sense `dual' to the statement above. We show that for a given cycle length $m$, we can always find a corresponding long cycle $C_n$ and an Abelian group $\Gamma$ for which $C_m\square C_n$ admits no $\Gamma$-distance magic labeling.
\begin{obs}\label{obs:long-C_n-m-only}
For any positive integer $m$ there exists $n$ for which the Cartesian product $C_m\square C_n$ is not group distance magic. That is, there exists an Abelian group $\Gamma$ such that $C_m\square C_n$ is not $\Gamma$-distance magic. \end{obs}
\begin{proof}
Let $n=2m^3$ and $\Gamma\cong(\mathbb{Z}_m)^4\times\mathbb{Z}_2$. Then $\exp(\Gamma)\leq2m$ and $\mathop{\rm gcd}\nolimits(m,n)=m$. Hence we have
$2r\mathop{\rm gcd}\nolimits(m,n)\leq 4m^2<2m^3$, because $m\geq3$. But $\mathop{\rm lcm}\nolimits(m,n)=n=2m^3$, and the product $C_m\square C_n$ is not $\Gamma$-distance magic by Theorem~\ref{thm:gcd}. \end{proof}
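The inequality used in the proof of Observation~\ref{obs:long-C_n-m-only} can be confirmed numerically over a range of $m$; a sketch (ours, illustrative):

```python
from math import gcd
from functools import reduce

def lcm(*xs):
    return reduce(lambda a, b: a * b // gcd(a, b), xs, 1)

checks = []
for m in range(3, 30):
    n = 2 * m ** 3
    r = lcm(m, 2)                         # exponent of (Z_m)^4 x Z_2
    # Criterion of Theorem thm:gcd: 2 r gcd(m, n) < lcm(m, n).
    checks.append(2 * r * gcd(m, n) < lcm(m, n))
```
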
The following extreme case is worth mentioning.
\begin{obs}\label{gdc} Let $\mathop{\rm gcd}\nolimits(m,n)=1$. There exists a $\Gamma$-distance magic labeling of the Cartesian product $C_{m} \square C_{n}$ if and only if $mn$ is even and either $\Gamma\cong \mathbb{Z}_2\times \mathbb{Z}_{mn/2}$ or $\Gamma\cong \mathbb{Z}_{mn}$.
\end{obs}
\begin{proof}
For $\Gamma\cong \mathbb{Z}_{mn}$ and $\Gamma\cong \mathbb{Z}_2\times \mathbb{Z}_{mn/2}$ the labelings exist by Theorems~\ref{thm:cart_cycle_cyclic} and~\ref{thm:lcm/2}.
Because $\mathop{\rm gcd}\nolimits(m,n)=1$, we cannot have both $m$ and $n$ even, so without loss of generality $m$ is odd. If $n$ is odd, then $C_m \square C_n$ is not $\Gamma$-distance magic graph for any Abelian group $\Gamma$ of order $mn$ by Theorem~\ref{thm:odd-m,n}.
It is well known that for an Abelian group $\Gamma$ of order $2k$, $\exp(\Gamma) =k$ if and only if $k$ is even and $\Gamma\cong\mathbb{Z}_2\times\mathbb{Z}_k$.
Therefore, if $\Gamma\ncong \mathbb{Z}_{mn}$ and $\Gamma\ncong \mathbb{Z}_2\times \mathbb{Z}_{mn/2}$, we must have $\exp(\Gamma)< mn/2$. On the other hand, we have $\mathop{\rm lcm}\nolimits(m,n)=mn$, because $\mathop{\rm gcd}\nolimits(m,n)=1$. Hence
$2r\mathop{\rm gcd}\nolimits(m,n)<mn=\mathop{\rm lcm}\nolimits(m,n)$ and
there does not exist a $\Gamma$-distance magic labeling of $C_{m} \square C_{n}$ by Theorem~\ref{thm:gcd}. \end{proof}
Finally, we complement Theorem~\ref{thm:C_2^n_by_Z_2} and Corollary~\ref{cor:C_2^n} by the following related result.
\begin{thm}\label{obs:C_2^n}
There exists a $(\mathbb{Z}_2)^{m+n}$-distance magic labeling of the Cartesian product $C_{2^m} \square C_{2^{n}}$ if and only if $2\leq m =n$. \end{thm}
\begin{proof}
When $m=n$, then we are done by Theorem~\ref{thm:C_2^n_by_Z_2}. Suppose that $m\neq n$. Without loss of generality we can assume that $2\leq m<n$; then $\min\{f(2^m),f(2^n),\mathop{\rm gcd}\nolimits(2^m,2^n)\}=2^{m-2}$, whereas $\mathop{\rm lcm}\nolimits(2^m,2^n)=2^n$. Observe that the exponent of $(\mathbb{Z}_2)^{m+n}$ is $r=2$. Therefore $2r\min\{f(2^m),f(2^n)\}=2^{m} <2^{n}=\mathop{\rm lcm}\nolimits(2^m,2^n)$ and we are done by Theorem~\ref{thm:main-f}. \end{proof}
\section{Conclusion}
We made some progress towards the full characterization of Abelian groups $\Gamma$ such that $|\Gamma|=mn$ and there exists a $\Gamma$-distance magic labeling of $C_m\square C_n$.
We improved a previous bound for the existence of such a labeling, showing that if $\Gamma$ has a cyclic subgroup of order $\mathop{\rm lcm}\nolimits(m,n)/2$, then $C_m\square C_n$ is $\Gamma$-distance magic, lowering the bound from $\mathop{\rm lcm}\nolimits(m,n)$.
On the other hand, we have shown that groups with an exponent $\exp(\Gamma)$ that is relatively small compared with $\mathop{\rm lcm}\nolimits(m,n)$ do not admit such a labeling.
Since we found necessary conditions for the existence of such a labeling (Theorems~\ref{thm:main-f} and~\ref{thm:gcd}), which in some cases are also sufficient (Observation~\ref{gdc}), we now pose the following conjecture.
\begin{conj}
Let $\Gamma$ be an Abelian group of an even order $mn$ with exponent $r$. There exists a $\Gamma$-distance magic labeling of the Cartesian product $C_{m} \square C_{n}$ if and only if $2r\min\{f(m),f(n),\mathop{\rm gcd}\nolimits(m,n)\} \ge\mathop{\rm lcm}\nolimits(m,n)$. \end{conj}
\end{document}
\begin{document}
\title[A projective system of $k$-tree evolutions]{Aldous diffusion I:
\\ a projective system of continuum $k$-tree evolutions}
\author{N\MakeLowercase{\sc oah} F\MakeLowercase{\sc orman,} S\MakeLowercase{\sc oumik} P\MakeLowercase{\sc al,} D\MakeLowercase{\sc ouglas} R\MakeLowercase{\sc izzolo, and} M\MakeLowercase{\sc atthias} W\MakeLowercase{\sc inkel}}
\address{\hspace{-0.42cm}N.~Forman\\ Department of Mathematics\\ University of Washington\\ Seattle WA 98195\\ USA\\ Email: noah.forman@gmail.com}
\address{\hspace{-0.42cm}S.~Pal\\ Department of Mathematics\\ University of Washington\\ Seattle WA 98195\\ USA\\ Email: soumikpal@gmail.com}
\address{\hspace{-0.42cm}D.~Rizzolo\\ 531 Ewing Hall\\ Department of Mathematical Sciences\\ University of Delaware\\ Newark DE 19716\\ USA\\ Email: drizzolo@udel.edu}
\address{\hspace{-0.42cm}M.~Winkel\\ Department of Statistics\\ University of Oxford\\ 24--29 St Giles'\\ Oxford OX1 3LB\\ UK\\ Email: winkel@stats.ox.ac.uk}
\keywords{Brownian CRT, reduced tree, interval partition, stable process, squared Bessel processes, excursion, de-Poissonisation}
\subjclass[2010]{60J80}
\date{\today}
\thanks{This research is partially supported by NSF grants DMS-1204840, DMS-1444084, DMS-1612483, EPSRC grant EP/K029797/1, and the University of Delaware Research Foundation}
\begin{abstract} The Aldous diffusion is a conjectured Markov process on the space of real trees that is the continuum analogue of discrete Markov chains on binary trees. We construct this conjectured process via a consistent system of stationary evolutions of binary trees with $k$ labeled leaves and edges decorated with diffusions on a space of interval partitions constructed in previous work by the same authors. This pathwise construction allows us to study and compute path properties of the Aldous diffusion including evolutions of projected masses and distances between branch points. A key part of proving the consistency of the projective system is Rogers and Pitman's notion of intertwining. \end{abstract}
\maketitle
\section{Introduction}
The Aldous chain \cite{Aldous00,Schweinsberg02} is a Markov chain on the space of (rooted) binary trees with $n$ labeled leaves. Each transition of the chain, called a down-up move, has two stages. First, a uniform random leaf is deleted and its parent branch point is contracted away. Next, a uniform random edge is selected, we insert a new branch point into the middle of that edge, and we extend a new leaf-edge out from that branch point. This is illustrated in Figure \ref{fig:AC_move} where $n=6$ and the leaf labeled $3$ is deleted and re-inserted. The stationary distribution of this chain is the uniform distribution on rooted binary trees with $n$ labeled leaves. \begin{figure}
\caption{From left to right, one Aldous down-up move.}
\label{fig:AC_move}
\end{figure}
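The two-stage move just described can also be phrased algorithmically. The Python sketch below is our own illustration, not code from the paper: trees are encoded as nested pairs of leaf labels, and all helper names are hypothetical.

```python
import random

def leaves(t):
    # leaf labels of a rooted binary tree encoded as nested pairs
    return [t] if isinstance(t, int) else leaves(t[0]) + leaves(t[1])

def remove_leaf(t, i):
    # delete leaf i and contract its parent branch point away
    l, r = t
    if l == i:
        return r
    if r == i:
        return l
    return (remove_leaf(l, i), r) if i in leaves(l) else (l, remove_leaf(r, i))

def subtrees(t):
    # one subtree per edge of the tree (including the root edge)
    return [t] if isinstance(t, int) else [t] + subtrees(t[0]) + subtrees(t[1])

def insert_leaf(t, target, i):
    # insert a new branch point in the middle of the edge above `target`
    # and extend a new leaf-edge carrying label i out from it
    if t is target:
        return (target, i)
    if isinstance(t, int):
        return t
    return (insert_leaf(t[0], target, i), insert_leaf(t[1], target, i))

def down_up(t, rng=random):
    i = rng.choice(leaves(t))          # stage 1: uniform leaf deleted
    t = remove_leaf(t, i)
    return insert_leaf(t, rng.choice(subtrees(t)), i)   # stage 2: uniform edge
```

Iterating `down_up` runs the Aldous chain; the left/right order carried by the tuple encoding is a planar artifact and is not part of the tree.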
Aldous observed that if $b$ branch points are selected in the initial tree and the leaves are partitioned into sets based on whether or not they are in the same component of the tree minus these branch points then, up until one of the selected branch points disappears, the scaling limit of the evolution of the sizes of these sets under the Aldous chain is a Wright--Fisher diffusion, running at quadruple speed, with some positive parameters and some negative ones. He conjectured that these diffusions were recording aspects of a diffusion on continuum trees and posed the problem of constructing this process \cite{ADProb2, AldousDiffusionProblem}. This process would be stationary under the law of the Brownian continuum random tree (BCRT) because this is the scaling limit of the stationary distribution of the Aldous chain \cite{AldousCRT1, AldousCRT2, AldousCRT3}. Building on \cite{Paper0,Paper1,Paper2,Paper3}, the present paper resolves this conjecture via a pathwise construction of a \textit{consistent finite-dimensional projected} system of $k$-tree evolutions that are described below. In a follow-up paper \cite{Paper5} we establish some important properties of this process. The idea of constructing a BCRT as a projective system of finite random binary trees goes back to the original construction in \cite{AldousCRT1}. The critical extension here is that the projective consistency now has to hold for the entire stochastic process.
The concept of a $k$-tree is best understood by generating a random $k$-tree from the BCRT. Consider a BCRT $(\mathcal{T},d,\rho,\mu)$. Roughly, $\left(\mathcal{T}, d\right)$ is a metric space that is tree-shaped and has a distinguished element $\rho$ called the root and an associated Borel probability measure $\mu$ called the leaf mass measure. Of course, these quantities are all random. Given an instance of this tree, let $\Sigma_n$, $n\!\ge\!1$, denote a sequence of leaves sampled conditionally i.i.d.\ with law $\mu$. Denote by $\mathcal{R}_k$ the subtree of $\mathcal{T}$ spanned by $\rho,\Sigma_1,\ldots,\Sigma_k$, and by $\mathcal{R}_k^\circ$ the subtree of $\mathcal{R}_k$ spanned by the set ${\rm Br}(\mathcal{R}_k)$ of branch points and the root $\rho$ of $\mathcal{R}_k$. Almost surely, $\mathcal{R}_k$ is a binary tree.
Let $[k]\!:=\!\{1,\ldots,k\}$ and suppose that $k\geq 2$. The \emph{Brownian reduced $k$-tree}, denoted by $\mathcal{T}_k\!=\!(\mathbf{t}_k,(X_j^{(k)}\!,j\!\in\![k]),(\beta_E^{(k)}\!,E\!\in\!\mathbf{t}_k))$, is defined as follows. \begin{itemize}
\item The combinatorial \emph{tree shape} $\mathbf{t}_k$ is in one-to-one correspondence with ${\rm Br}(\mathcal{R}_k)$ or with the branches of $\mathcal{R}_k^\circ$. Specifically, for each \emph{branch}, i.e.\ each connected component $\mathcal{B}\subseteq\mathcal{R}_k^\circ\setminus{\rm Br}(\mathcal{R}_k)$, the corresponding element of $\mathbf{t}_k$ is the set $E$ of all $i\in[k]$ for which leaf $\Sigma_i$ is ``above $\mathcal{B}$,'' i.e.\ is in the component of $\mathcal{R}_k\setminus\mathcal{B}$ not containing the root. We denote by $\mathcal{B}_E$ the branch corresponding to $E$. Each element of $\mathbf{t}_k$ is a set containing at least two leaf labels; we call the elements of $\mathbf{t}_k$ the \emph{internal edges}.
\item For $j\in[k]$, the \emph{top mass} $X_j^{(k)}$ is the $\mu$-mass of the component of $\mathcal{T}\setminus\mathcal{R}_k^\circ$ containing $\Sigma_j$.
\item Each edge $E\in\mathbf{t}_k$ is assigned an interval partition $\beta_E^{(k)}$ as in \cite{GnedPitm05,PitmWink09}. Consider the connected components of $\mathcal{T}\setminus\mathcal{R}^\circ_k$ that attach to $\mathcal{R}_k^\circ$ along the interior of $\mathcal{B}_E$. These are totally (but not sequentially) ordered by decreasing distance between their attachment points on $\mathcal{B}_E$ and the root. Then $\beta_E^{(k)}$ is the interval partition whose \textit{block} lengths are the $\mu$-masses of components in this total order. \end{itemize} See Figure \ref{fig:B_k_tree_proj} for an illustration of such a $k$-tree.
\begin{figure}
\caption{Left: Simulation of a BCRT (courtesy of Igor Kortchemski)
with root $\rho$, and $k=5$ leaves $\Sigma_1,\ldots,\Sigma_5$. The bold lines and triangles are the branches and vertices of $\mathcal{R}_5^\circ$. Right: The associated Brownian reduced $k$-tree.}
\label{fig:B_k_tree_proj}
\end{figure}
We later provide more precise definitions of sets of $k$-trees that support the laws of the Brownian reduced $k$-trees. There is a natural projection map $\pi_{k}$ from $(k+1)$-trees to $k$-trees for every $k\ge 2$ such that $\pi_k(\mathcal{T}_{k+1})=\mathcal{T}_k$. There is also a natural map $S_k$ taking $k$-trees to rooted metric measure trees such that, almost surely, $S_k(\mathcal{T}_k)$ is the rooted metric measure tree that results from projecting the leaf mass measure of $\mathcal{T}$ onto $\mathcal{R}_k^\circ$. To see this, observe that since the combinatorial tree shape of $\mathcal{R}_k^\circ$ is $\mathbf{t}_k$ and atoms of the projected measure are given by the top masses and intervals of the interval partitions, one only needs to be able to recover the metric structure of $\mathcal{R}_k^\circ$ from $\mathcal{T}_k$. This is accomplished using the diversity of the interval partitions on the edges as in \cite{PitmWink09}; see also Equation \eqref{eq diversitydef} below for a precise definition of diversity. Given any system of $k$-trees $\mathbf{T}=(T_k, k\geq 2)$, we define $S(\mathbf{T}) = \lim_{k\rightarrow\infty} S_k(T_k)$, where the limit is taken with respect to the rooted Gromov--Hausdorff--Prokhorov metric \cite{M09}, provided this limit exists, and $S(\mathbf{T})$ is the trivial one-point tree if the limit does not exist. From the definition of $\mathcal{R}^\circ_k$, it is clear that $S((\mathcal{T}_k,k\geq 2)) = \mathcal{T}$ almost surely; see e.g.\ \cite{PitmWink09} for a similar argument in the context of bead-crushing constructions.
The following collection of results summarizes the main contribution of this paper.
\begin{theorem}\label{thm:intromain}
There is a projective system $(\widebar\mathcal{T}^u, u\geq 0)=((\widebar\mathcal{T}^u_{\!\!k},k\geq 2), u\geq 0)$ such that the following hold.
\begin{enumerate}[itemsep=.1cm]
\item \label{main markov} For each $k$, $(\widebar{\mathcal{T}}_{\!\!k}^u,u\geq 0)$ is a $k$-tree-valued Markovian evolution.
\item \label{main cons} These processes are consistent in the sense that $(\widebar\mathcal{T}^u_{\!\!k},u\geq 0) = (\pi_k(\widebar\mathcal{T}^u_{\!\!k+1}),u\geq 0)$.
\item \label{main stat} The law of the consistent family of Brownian reduced $k$-trees $(\widebar\mathcal{T}_{\!\!k}, k\geq 2)$ is a stationary law for the process $(\widebar\mathcal{T}^u, u\geq 0)$.
\item \label{main Wright--Fisher} Let $\widebar\mathcal{T}^u_{\!\!k} = (\mathbf{t}^u_k,(X^u_j\!,j\!\in\![k]),(\beta^u_E,E\!\in\!\mathbf{t}^u_k))$, let $\|\beta^u_E\|$ be the mass of the interval partition $\beta^u_E$, i.e.\ the sum of the lengths of the intervals in the partition, and let $\tau$ be the first time either a top mass or the mass of an interval partition is $0$. For $u<\tau$ we have $\mathbf{t}^u_k=\mathbf{t}^0_k$. The process $((( X^{u/4}_j\!,j\!\in\![k]),(\|\beta^{u/4}_E\|,E\!\in\!\mathbf{t}^0_k)), 0\leq u <\tau)$ is a Wright--Fisher diffusion, killed when one of the coordinates vanishes, with the parameters proposed by Aldous, which are $-1/2$ for coordinates corresponding to top masses and $1/2$ for coordinates corresponding to masses of interval partitions.
\item \label{main GHPAldous} If $((\widebar\mathcal{T}^u_{\!\!k},k\geq 2), u\geq 0)$ is running in stationarity, then $(S(\widebar\mathcal{T}^u), u\geq 0)$ is a stationary stochastic process of rooted metric measure trees whose stationary distribution is the BCRT.
\end{enumerate} \end{theorem}
For fixed $k$, we give a pathwise definition of $(\widebar{\mathcal{T}}_{\!\!k}^u,u\geq 0)$ in Definition \ref{def:resamp_1}. Properties \ref{main markov} and \ref{main stat} are proved in Theorem \ref{thm:dePoi}. Property \ref{main cons} follows from Theorem \ref{thm:consistency} and the Kolmogorov consistency theorem. Property \ref{main Wright--Fisher} is proved in Corollary \ref{cor:WF}. Property \ref{main GHPAldous} is an immediate consequence of Property \ref{main stat} and the discussion immediately following the definition of $S$ above.
\begin{definition}\label{def:Aldousdiff} The \textit{Aldous diffusion} is the process $(S(\widebar\mathcal{T}^u), u\geq 0)$ defined in Theorem \ref{thm:intromain}\ref{main GHPAldous}. \end{definition}
In the follow-up paper \cite{Paper5} we show that the Aldous diffusion has the Markov property and a continuous modification. The techniques used to show these properties are quite different from those used in the current paper.
The current paper and its companion \cite{Paper5} are the culmination of several ideas that we have developed in our previous joint work. Although it is not necessary to read all the previous papers to follow the mathematics here, in the interest of the ``big picture'', Table \ref{tbl:project_outline} outlines their dependence structure. The $k$-tree evolutions described here require Markovian evolutions on spaces of interval partitions. Their existence and properties have been worked out in \cite{Paper1}. As mentioned above, the leaf masses of the continuum-tree-valued process are captured by the interval lengths of the evolving interval partitions. What is not obvious is that the evolution of the metric structure of the continuum-tree-valued process is also captured by the so-called \textit{diversity} of the interval-partition-valued process. This has been dealt with in \cite{Paper0}. The $2$-tree-valued process, which is an important building block in the current work, has been separately constructed in \cite{Paper3}. The consistency of the $k$-tree-valued process is subtle and requires a non-trivial labeling of the $k$-tree shapes and a \textit{resampling} mechanism when the randomly evolving tree shape drops a leaf label. In \cite{Paper2}, a similar labeling and resampling scheme has been worked out for the Aldous Markov chain where it has been proved to lead to consistent projections. The proofs of consistency in \cite{Paper2} and the current work are related in the sense that they both employ a combination of two well-known criteria: intertwining and Dynkin--Kemeny--Snell. However, the proof of \cite{Paper2} cannot be directly generalized in the continuum and it necessitates new arguments that are developed here tying together the threads of \cite{Paper1, Paper3, Paper0}.
\begin{table}
\centering
\begin{tabular}{cc}
Uniform control of local times of stable processes \cite{Paper0} & \\
$\downarrow$ & \\
Interval-partition-valued diffusions \cite{Paper1} & \\
$\downarrow$
& Consistent projections \\
Interval partition evolutions with emigration \cite{Paper3} & of the Aldous chain \cite{Paper2}
\end{tabular}
$\underbrace{\hphantom{Interval partition evolutions with emigration [20] of the Aldous chain [20] fillerfil}}$\\
$\downarrow$\\
Projective system of continuum $k$-tree evolutions {[this paper]}\\
$\downarrow$\\
Properties of the continuum-tree-valued process \cite{Paper5}
\vphantom{text}
\caption{Outline of the present authors' construction of the Aldous diffusion.\label{tbl:project_outline}}
\end{table}
\subsection{Related work of L\"ohr, Mytnik, and Winter}
Recently L\"ohr, Mytnik, and Winter \cite{LohrMytnWint18} used a martingale problem to independently prove the existence of a process on a new space of trees, which they call algebraic measure trees. They named their process the \textit{Aldous diffusion on binary algebraic non-atomic measure trees}, which they abbreviated to \textit{Aldous diffusion}; for clarity, we will abbreviate it as the \textit{algebraic Aldous diffusion}. Algebraic measure trees can be thought of as the structures that remain when one forgets the metric on a metric measure tree but retains knowledge of the branch points; cf.\ mass-structural equivalence in \cite{IPTrees}. Equivalence classes of algebraic trees form the state space for the algebraic Aldous diffusion. They showed that, on this state space with a new topology that they introduce, the Aldous chain with time rescaled by $N^2$ converges to the algebraic Aldous diffusion as the number of leaves tends to infinity. In explaining the move to a new state space that does not include the metric structure of the trees, they say that, although it is traditionally useful to include the metric structure and this was part of the original problem, the metric might not evolve naturally under the Aldous chain, because the quadratic variation of some functions of the distance scales differently than one would anticipate from the $N^2$ time scaling predicted by Aldous and further supported by Pal \cite{Pal13}. We believe, however, that this is not the case. Our results show that the evolution of the metric can be described using the local time of a L\'evy process. One does not expect this to be a semimartingale, and we believe this is the cause of the difficulty identified in \cite{LohrMytnWint18}.
Changing to a new state space and topology allows the authors of \cite{LohrMytnWint18} to use standard martingale problem techniques to establish the existence of the algebraic Aldous diffusion; however, it also results in the loss of some information that we believe formed an important aspect of Aldous's conjecture. In particular, in order to remove information about the metric, it seems that one must also sacrifice some detailed information about the distribution of leaves. Indeed, from Pitman and Winkel \cite{PitmWink09}, a detailed knowledge of the distribution of leaves allows one to reconstruct the metric. This forms a central part of our construction here. The new state space and topology also seem insufficient for recovering the mixed Wright--Fisher diffusions that Aldous conjectured were recording aspects of the underlying continuum-tree-valued diffusion. Rather, in \cite{LohrMytnWint18} the authors recover the annealed behavior of the mass split around a typical branch point, thus recovering some aspects of the negative Wright--Fisher diffusion in an averaged sense (the methods they use may be able to recover the behavior of the mass split among multiple branch points in the same averaged sense, but they have not done this). In contrast, our approach shows clearly how the mixed Wright--Fisher diffusions observed by Aldous embed into the process we construct on continuum trees. More generally, our pathwise construction grants access to sample path properties of the diffusion, which may be difficult to study from the martingale problem approach.
There is a natural conjecture relating the processes we call the Aldous diffusion and the algebraic Aldous diffusion. In particular, in this paper in Theorem \ref{thm:intromain} we construct a consistent system of $k$-tree evolutions $((\mathcal{T}^y_k)_{y\geq 0})_{k\geq 2}$ that captures, in a consistent way, the mixed Wright--Fisher diffusions observed by Aldous. In Definition \ref{def:Aldousdiff} we define the Aldous diffusion by mapping $\mathcal{T}^y_k$ to a metric measure tree using the diversities and block sizes in the interval partitions to determine branch lengths and masses of atoms, then taking a projective limit as $k\rightarrow \infty$ in the Gromov--Hausdorff--Prokhorov metric. If instead of mapping $\mathcal{T}^y_k$ to a metric measure tree we map it to an algebraic measure tree and take the limit in the topology of \cite{LohrMytnWint18} (which is easily seen to exist), we conjecture that the resulting process is the algebraic Aldous diffusion.
\subsection{Structure of the paper}
In Section \ref{sec:type012} we briefly recall the main objects and some properties from \cite{Paper1,Paper3}. In Section \ref{sec:def_thm} we formally introduce the space of $k$-trees, define various $k$-tree-valued Markov processes, and state our main results. In particular, we begin by defining processes in which the total mass of the tree fluctuates and eventually the tree dies out; we obtain processes with constant mass by ``de-Poissonization.'' We prove some properties of these processes for fixed $k$, including ``pseudo-stationarity'' results for the processes with fluctuating mass, in Section \ref{sec:k_fixed}. We apply these pseudo-stationarity results in Section \ref{sec:consistency} to prove that these Markov processes are projectively consistent under an additional hypothesis that we remove in Section \ref{sec:non_acc}. Section \ref{sec:const:other} then includes proofs of additional projective consistency results.
\section{Preliminaries on type-0, type-1, and type-2 evolutions}\label{sec:type012}
Type-0, type-1, and type-2 evolutions are Markov processes introduced with pathwise constructions in \cite{Paper1,Paper3}. In \cite[Section 1.1]{Paper3}, we argue via a connection to ordered Chinese restaurant processes that the type-2 evolution is a continuum analogue of a certain 2-tree projection of the discrete Aldous chain discussed in \cite[Appendix A]{Paper2}. By the same argument, the $k$-tree projection of the Aldous chain discussed in \cite[Appendix A]{Paper2} can be decomposed into parts whose evolutions are analogous to type-0/1/2 evolutions. In Figure \ref{fig:B_k_tree_proj}, the dashed lines separate parts of the $k$-tree that evolve as type-0/1/2 evolutions; see Definition \ref{def:killed_ktree}.
The aforementioned pathwise construction brings a lot of symmetry to light, and it makes many calculations accessible. In this paper, we will not delve into this construction, and in fact, only a few key properties of these processes are needed. For the sake of completeness, we include enough information to fully specify the distributions of these processes.
An \emph{interval partition} (IP) is a set $\alpha$ of disjoint open intervals that cover some compact interval $[0,M]$ up to a Lebesgue-null set. $M$ is called the mass of $\alpha$ and denoted by $\|\alpha\|$. We call the elements $(a,b)\in\alpha$ the \emph{blocks} of the partition. A block $(a,b)$ has \emph{mass} $b-a$. We say that $\alpha$ has the \emph{$\frac12$-diversity property} if the following limit exists for every $t\ge0$:
\begin{equation}\label{eq diversitydef}
\mathscr{D}_{\alpha}(t) := \sqrt{\pi}\lim_{h\rightarrow 0}\sqrt{h}\#\{(a,b)\in\alpha\colon |b-a|>h, b < t\}.
\end{equation}
Let $\mathcal{I}$ denote the space of interval partitions with the $\frac12$-diversity property. We denote the left-to-right concatenation of interval partitions by the operator $\star$ or $ \mbox{\huge $\star$} $, as in $\alpha\star \beta$ or $ \mbox{\huge $\star$} _{n\ge 1}\alpha_n$. Then $\mathcal{I}$ is closed under pairwise concatenation.
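As a numerical illustration of the $\frac12$-diversity limit \eqref{eq diversitydef}, consider the deterministic toy partition whose block masses are $c/k^2$, $k\ge1$, laid out left to right: about $\sqrt{c/h}$ blocks exceed mass $h$, so the limit evaluates to $\sqrt{\pi c}$. The following Python sketch is our own toy example, not a construction from the paper.

```python
import math

def diversity(blocks, t, h):
    # finite-h approximation of the limit defining D_alpha(t) above
    return math.sqrt(math.pi * h) * sum(
        1 for (a, b) in blocks if b - a > h and b < t)

# illustrative partition: blocks of mass c/k^2, k = 1, 2, ...
c = 6 / math.pi ** 2          # normalizes the total mass to 1
blocks, pos = [], 0.0
for k in range(1, 10 ** 5):
    blocks.append((pos, pos + c / k ** 2))
    pos += c / k ** 2

for h in (1e-2, 1e-4, 1e-6):
    print(h, diversity(blocks, 2.0, h))   # approaches sqrt(pi * c)
```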
This space is equipped with a metric $d_{\mathcal{I}}$ \cite[Definition 2.1]{Paper1}. In a continuous process $(\alpha^y,y\ge0)$ in this metric topology, large blocks of the partition persist over time, with continuously fluctuating endpoints $(a^y,b^y)$, while smaller blocks may be born or die out (enter and exit continuously at mass 0) in such a way that the diversity $\mathscr{D}_{\alpha^y}(t)$ up to any point $t$ in the partition evolves continuously in $y$ as well.
We follow the convention of \cite{Paper3} that type-$i$ evolutions, for $i=0,1,2$, are valued in $[0,\infty)^i\times \mathcal{I}$ (we emphasized a different, $\mathcal{I}$-valued, representation of type-1 evolutions in \cite{Paper1}). We refer to the real-valued first coordinates of type-1 and type-2 evolutions as \emph{top blocks} or \emph{top masses}. Each block in a type-$i$ evolution, including these top blocks, has mass that fluctuates as a squared Bessel diffusion \distribfont{BESQ}\checkarg[-1]. For $\theta\in\mathbb{R}$ and $m\ge0$, the $\distribfont{BESQ}_m(\theta)$ diffusion satisfies
\begin{equation}
Z_t = m + \theta t + \int_{u=0}^t 2\sqrt{|Z_u|}dB_u \qquad \text{for }t\le \inf\{u\ge0\colon Z_u = 0\}.
\end{equation}
See \cite{RevuzYor}. We take the convention that for $\theta\le0$, $\distribfont{BESQ}(\theta)$ diffusions are absorbed at zero.
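For intuition, the SDE above is easy to simulate. The following Euler--Maruyama sketch is our own illustration (step size and absorption handling are ad hoc choices), not a statement about the constructions in \cite{Paper1,Paper3}; for $\theta\le0$ the simulated path is absorbed at zero, matching the convention above.

```python
import math, random

def besq_path(m, theta, dt=1e-4, horizon=1.0, seed=0):
    # Euler-Maruyama for dZ = theta dt + 2 sqrt(|Z|) dB, started from m;
    # for theta <= 0 the path is absorbed on first hitting zero
    rng = random.Random(seed)
    z, path = m, [m]
    for _ in range(int(horizon / dt)):
        z += theta * dt + 2 * math.sqrt(abs(z)) * rng.gauss(0.0, math.sqrt(dt))
        if theta <= 0 and z <= 0:
            path.append(0.0)
            break
        path.append(z)
    return path

path = besq_path(1.0, -1)   # one top-mass evolution, BESQ(-1) from mass 1
```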
Informally, when a top block of a type-1 or type-2 evolution hits mass zero, the leftmost blocks of the interval partition component of the evolution are successively (informally speaking, as the blocks are not well-ordered) pulled out of the interval partition to serve as new top blocks, until their \distribfont{BESQ}\checkarg[-1] masses are absorbed at zero.
\begin{proposition}[Propositions 4.30, 5.4, 5.16 of \cite{Paper1}]\label{prop:012:transn}
Fix a time $y>0$ and a state $\beta\in\mathcal{I}$. For each block $(a,b)\in\beta$,
\begin{itemize}
\item let $\chi_{(a,b)}\sim \distribfont{Bernoulli}(1-e^{-w/2y})$, where $w = b-a$,
\item let $L_{(a,b)}$ be a $(0,\infty)$-valued random variable with probability density function
\begin{equation}
\mathbb{P}\{L_{(a,b)}\in du\} = \frac{1}{\sqrt{2\pi}}\frac{\sqrt{y}}{u^{3/2}}\frac{e^{-u/2y}}{e^{w/2y}-1}\left( 1 - \cosh\!\left(\frac{\sqrt{wu}}{y}\right) + \frac{\sqrt{wu}}{y} \sinh\!\left(\frac{\sqrt{wu}}{y}\right) \right)du,
\end{equation}
\item let $S_{(a,b)}\sim\distribfont{Exponential}\checkarg[(2y)^{-1/2}]$, and
\item let $R_{(a,b)}$ be a subordinator with Laplace exponent
$\lambda\mapsto \left(\lambda+\frac{1}{2y}\right)^{1/2}-\left(\frac{1}{2y}\right)^{1/2}$.
\end{itemize}
We take these objects all to be jointly independent. Let $(R_0,S_0)$ denote an additional independent pair with the same law as that given for $(R_{(a,b)},S_{(a,b)})$. For the purpose of the following, we take
$$\textsc{ip}(R,S) := \big\{(R(t-),R(t))\colon t < S,\ R(t-)<R(t)\big\}.$$
The map that sends an initial state $\beta$ and a time $y$ to the law of
\begin{equation}\label{eq:type1:kernel}
\mathop{ \raisebox{-2pt}{\Huge$\star$} } _{U\in \beta\colon \chi_U=1}\big((0,L_U)\star \textsc{ip}(R_U,S_U)\big)
\end{equation}
gives a transition semigroup on $\mathcal{I}$. The same holds for the map sending $\beta$ and $y$ to the law of
\begin{equation}\label{eq:type0:kernel}
\textsc{ip}(R_0,S_0)\star \mathop{ \raisebox{-2pt}{\Huge$\star$} } _{U\in \beta\colon \chi_U=1}\big((0,L_U)\star \textsc{ip}(R_U,S_U)\big).
\end{equation} \end{proposition}
\begin{definition}\label{def:type01}
\emph{Type-0 evolutions} are right-continuous Markov processes on $\mathcal{I}$ with transition kernel \eqref{eq:type0:kernel}. \emph{IP-valued type-1 evolutions} are right-continuous Markov processes $(\gamma^y,y\ge0)$ on $\mathcal{I}$ with transition kernel \eqref{eq:type1:kernel}. Such an IP-valued type-1 evolution is associated with a \emph{(pair-valued) type-1 evolution} $((m^y,\alpha^y),y\ge0)$ where $m^y$ is the mass of the leftmost block of $\gamma^y$, or zero if there is no leftmost block, while $\alpha^y$ comprises the rest of the partition, so that $(0,m^y)\star\alpha^y = \gamma^y$.
A type-1 evolution is said to \emph{degenerate} when it is absorbed at $(0,\emptyset)$. A type-0 evolution is never absorbed and is said to have degeneration time $\infty$. \end{definition}
It is helpful to bear in mind that, for any interval partition $\beta$ and $y>0$, for $\chi_{(a,b)}$ as defined in Proposition \ref{prop:012:transn}, the sum $\sum_{U\in\beta}\chi_U$ is a.s.\ finite; this is easily proved using the Borel--Cantelli lemma. In other words, for a type-0/1 evolution with initial state $\beta$ a.s.\ only finitely many of $(0,L_U)\star \textsc{ip}(R_U,S_U)$ contribute to the state of the type-0/1 evolution at the later time. From this it is seen that the IP-valued type-1 evolution a.s.\ has a leftmost block at any given time, unless it has degenerated before that time.
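The finiteness of $\sum_{U\in\beta}\chi_U$ is easy to see numerically. The sketch below (our own illustration) samples the Bernoulli indicators of Proposition \ref{prop:012:transn} for an initial state with summable block masses:

```python
import math, random

def surviving_blocks(masses, y, rng):
    # chi_U ~ Bernoulli(1 - exp(-w/(2y))): blocks of the initial state
    # that still contribute at time y
    return [w for w in masses if rng.random() < 1.0 - math.exp(-w / (2.0 * y))]

rng = random.Random(1)
masses = [1 / k ** 2 for k in range(1, 10 ** 5)]   # summable block masses
counts = [len(surviving_blocks(masses, y=0.5, rng=rng)) for _ in range(20)]
# since sum(1 - exp(-w/(2y))) <= sum w/(2y) < infinity, only finitely many
# blocks survive; the sampled counts here stay small
```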
\begin{definition}\label{def:type2}
Let $(x_1,x_2,\beta)\in [0,\infty)^2\times\mathcal{I}$ with $x_1+x_2>0$. A \emph{type-2 evolution} starting from $(x_1,x_2,\beta)$ is a process of the form $((m_1^y,m_2^y,\alpha^y),y\ge 0)$, with $(m_1^0,m_2^0,\alpha^0)=(x_1,x_2,\beta)$. Its law is specified by the following construction.
Let $\left(\mathbf{m}^{(0)}, \gamma^{(0)} \right)$ be a type-1 evolution starting from $( x_2, \beta)$, independent of $\mathbf{f}^{(0)}\sim \distribfont{BESQ}_{x_1}(-1)$. Let $Y_0:=0$ and $Y_1:=\zeta(\mathbf{f}^{(0)})$, where $\zeta$ denotes the absorption time at zero. For $0\le y \le Y_1$, define the type-2 evolution as
\[
\left( m_1^y, m_2^y, \alpha^y \right):=\left(\mathbf{f}^{(0)}(y), \mathbf{m}^{(0)}(y), \gamma^{(0)}(y) \right), \quad 0\le y \le Y_1.
\]
Now proceed inductively. Suppose, for some $n\ge 1$, these processes have been constructed until time $Y_n$ with $m_1^{Y_n}+m_2^{Y_n}>0$. Conditionally given this history, consider a type-1 evolution $(\mathbf{m}^{(n)}, \gamma^{(n)})$ with initial condition $(0,\alpha^{Y_n}) = (0,\gamma^{(n-1)}(Y_n-Y_{n-1}))$ that is independent of $\mathbf{f}^{(n)}$, a $\distribfont{BESQ}(-1)$ diffusion with initial value $\mathbf{m}^{(n-1)}(Y_n-Y_{n-1})$. The latter equals $m_2^{Y_n}$ if $n$ is odd or $m_1^{Y_n}$ if $n$ is even.
Set $Y_{n+1}=Y_n+\zeta(\mathbf{f}^{(n)})$. For $y\in(0,Y_{n+1}-Y_n]$, define
\begin{align*}
&\left(m_1^{Y_n+y},m_2^{Y_n+y},\alpha^{Y_n+y}\right):= \left\{\begin{array}{ll}
(\mathbf{m}^{(n)}(y),\mathbf{f}^{(n)}(y),\gamma^{(n)}(y)), &\mbox{if $n$ is odd},\\[3pt]
(\mathbf{f}^{(n)}(y),\mathbf{m}^{(n)}(y),\gamma^{(n)}(y)), &\mbox{if $n$ is even.}
\end{array}\right.
\end{align*}
If, for some $n\ge 1$, $m_1^{Y_n}+m_2^{Y_n}=0$, set $(m_1^y,m_2^y,\alpha^y):=(0,0,\emptyset)$ for all $y>Y_n$ and $Y_{n+1} := \infty$. This time $Y_n$ is the \emph{lifetime} of the type-2 evolution. We also define the earlier time $D\in [Y_{n-1},Y_n)$ that corresponds to the degeneration of the type-1 evolution $(\mathbf{m}^{(n)},\gamma^{(n)})$ at time $D-Y_{n-1}$. Equivalently, $D$ is the time at which either $m_1^y+\|\alpha^y\|$ or $m_2^y+\|\alpha^y\|$ hits zero and is absorbed. This is the \emph{degeneration time} of the type-2 evolution. \end{definition}
Though this fact is obscure in the above definition, the roles of the two top masses in a type-2 evolution are symmetric.
\begin{lemma}[Lemma 19 of \cite{Paper3}]\label{lem:type2:symm}
If $\big(\big(m_1^y,m_2^y,\alpha^y\big),y\ge0\big)$ is a type-2 evolution then so is $\big(\big(m_2^y,m_1^y,\alpha^y\big),y\ge0\big)$. In particular, if $Y := \inf\big\{y>0\colon m_2^y = 0\big\}$ then given $\big(m_2^0,m_1^0,\alpha^0\big)$, the process $\big(m_2^y,y\in [0,Y]\big)$ is a $\distribfont{BESQ}\checkarg_{m_2^0}(-1)$ stopped when it hits zero, while $\big(\big(m_1^y,\alpha^y\big),y\in [0,Y]\big)$ is conditionally distributed as an independent type-1 evolution stopped at time $Y$. \end{lemma}
\begin{proposition}[Theorem 1.4 of \cite{Paper1}, Theorem 2, Corollary 16 of \cite{Paper3}]\label{prop:012:pred}
\begin{enumerate}
\item Type-0/1/2 evolutions and IP-valued type-1 evolutions are Borel right Markov processes.\label{item:pred:Markov}
\item Type-0 evolutions and IP-valued type-1 evolutions are path-continuous.\label{item:pred:0}
\item If $\big(\big(m_1^y,m_2^y,\alpha^y\big),y\ge0\big)$ is a type-2 evolution and $I(y)\in\{1,2\}$ is defined by $I(y)\equiv \max\{n\ge 0\colon Y_n\le y\} \pmod 2$, where $(Y_n,n\ge0)$ is as in Definition \ref{def:type2}, then the \emph{IP-valued type-2 evolution} $\big(\big(0,m_{3-I(y)}^y\big)\star \big(0,m_{I(y)}^y\big)\star\alpha^y,y\ge0\big)$ is a diffusion. Also,
each of $m_1^y$ and $m_2^y$ can only equal zero when $\alpha^y$ has no leftmost block, and they can only both equal zero if $\alpha^y = \emptyset$.\label{item:pred:2}
\end{enumerate}
\noindent In particular type-0/1/2 evolutions are predictable Markov processes: the value of a type-0/1/2 evolution at any stopping time $Y$ is a (deterministic) function of its left limit at that time. \end{proposition}
\begin{proposition}[Concatenation properties; Proposition 9 of \cite{Paper3}]\label{prop:012:concat}
Let $((m^y,\alpha^y),y\ge0)$ denote a type-1 evolution.
\begin{enumerate}[label=(\roman*),ref=(\roman*)]
\item \label{item:012concat:BESQ+0}
Let $\zeta$ denote the first time that $m^y$ hits zero. Then $(m^y\mathbf{1}\{y\le\zeta\},y\ge0)$ is a \distribfont{BESQ}\checkarg[-1] and $(\alpha^y,y\in [0,\zeta])$ is distributed as an independent type-0 evolution stopped at time $\zeta$.
\item \label{item:012concat:0+1}
If $(\widetilde\alpha^y,y\ge0)$ is an independent type-0 evolution then $(\widetilde\alpha^y\star(0,m^y)\star\alpha^y,y\ge0)$ is a type-0 evolution.
\item \label{item:012concat:1+1}
Suppose instead that $((\widetilde m^y,\widetilde\alpha^y),y\ge0)$ is an independent type-1 evolution and let $\widetilde D$ denote its degeneration time. Then the following process is a type-1 evolution.
\begin{equation}\label{eq:012concat:1+1}
\left\{\begin{array}{ll}
(\widetilde m^y,\widetilde\alpha^y\star(0,m^y)\star\alpha^y) & \text{for }y\in [0,\widetilde D),\\
(m^y,\alpha^y) & \text{for }y\ge \widetilde D.
\end{array}\right.
\end{equation}
\item \label{item:012concat:2+1}
Suppose instead that $((\widetilde m_1^y,\widetilde m_2^y,\widetilde\alpha^y),y\ge0)$ is an independent type-2 evolution. Let $\widetilde D$ denote its degeneration time. Let $(\widehat x_1,\widehat x_2)$ equal $(\widetilde m_1^{\widetilde D},m^{\widetilde D})$ if $\widetilde m_2^{\widetilde D} = 0$ (i.e.\ if label 2 is the label that degenerates at time $\widetilde{D}$), or equal $(m^{\widetilde D},\widetilde m_2^{\widetilde D})$ otherwise (if label 1 degenerates). Let $((\widehat m_1^y,\widehat m_2^y,\widehat\alpha^y),y\ge0)$ be a type-2 evolution with initial state $(\widehat x_1,\widehat x_2, \alpha^{\widetilde D})$, conditionally independent of the other processes given its initial state. The following is a type-2 evolution:
\begin{equation}\label{eq:012concat:2+1}
\left\{\begin{array}{ll}
(\widetilde m_1^y,\widetilde m_2^y,\widetilde\alpha^y\star(0,m^y)\star\alpha^y) & \text{for }y\in [0,\widetilde D),\\
(\widehat m_1^{y-\widetilde D},\widehat m_2^{y-\widetilde D},\widehat\alpha^{y-\widetilde D}) & \text{for }y\ge \widetilde D.
\end{array}\right.
\end{equation}
\end{enumerate}
Moreover, the concatenated evolutions constructed in \ref{item:012concat:0+1}, \ref{item:012concat:1+1}, and \ref{item:012concat:2+1} each possess the strong Markov property in the larger filtrations generated by their constituent parts. \end{proposition}
\begin{remark}\label{rmk:decomp_ker}
Note that in assertion \ref{item:012concat:2+1}, we may consider the conditional joint distribution of $((m^y,\alpha^y),y\in [0,\wt D))$ and $((\widetilde m_1^y,\widetilde m_2^y,\widetilde\alpha^y),y\in [0,\wt D))$ given the path of the concatenated process of display \eqref{eq:012concat:2+1} prior to time $\wt D$. Such a regular conditional distribution exists because $(\mathcal{I},d_{\mathcal{I}})$ is Lusin \cite[Theorem 2.7]{Paper1} and these evolutions have c\`adl\`ag paths. We can do the same for the assertions \ref{item:012concat:0+1} and \ref{item:012concat:1+1} of Proposition \ref{prop:012:concat}. In each case, the independence of the two constituent evolutions is lost under this conditioning; it is only recovered after integrating over the law of the concatenated type-$i$ evolution. \end{remark}
Recall from \cite{Paper1} that a Poisson--Dirichlet interval partition with parameters $\big(\frac12,\frac12\big)$, called \distribfont{PDIP}\checkarg[\frac12,\frac12], is an interval partition whose ranked block sizes have law $\distribfont{PD}\checkarg[\frac12,\frac12]$, with the blocks exchangeably ordered from left to right. Let $A\sim \distribfont{Beta}\checkarg[\frac12,\frac12]$, $(A_1,A_2,A_3)\sim\distribfont{Dirichlet}\big(\frac12,\frac12,\frac12\big)$, and $\bar\beta\sim\distribfont{PDIP}\checkarg[\frac12,\frac12]$ be independent of each other. A probability distribution on $\mathcal{I}$ is said to be a \emph{pseudo-stationary law for the type-0 evolution} if it is the law of $M\bar\beta$, i.e.\ $\bar\beta$ scaled by $M$, for an independent random mass $M>0$. Likewise a law on $[0,\infty)\times\mathcal{I}$, respectively $[0,\infty)^2\times\mathcal{I}$, is a \emph{pseudo-stationary law for the type-1, {\rm resp.} type-2, evolution} if it is the law of any independent multiple of $(A,(1-A)\bar\beta)$, resp.\ $(A_1,A_2,A_3\bar\beta)$. This language is in reference to the following proposition.
\begin{proposition}[Theorem 6.1 of \cite{Paper1}, Proposition 33 of \cite{Paper3}]\label{prop:012:pseudo}
For $i=0,1,2$, if a type-$i$ evolution has a pseudo-stationary initial distribution then, given that it does not degenerate prior to time $y$, its conditional law at time $y$ is also pseudo-stationary. In the special case that its initial mass has law $\distribfont{Gamma}\checkarg[\frac{1+i}{2},\gamma]$, then its mass at time $y$ has conditional law $\distribfont{Gamma}\checkarg[\frac{1+i}{2},\gamma/(2\gamma y+1)]$. \end{proposition}
\begin{proposition}[Theorem 1.5 of \cite{Paper1}, Theorem 3 of \cite{Paper3}]\label{prop:012:mass}
For $i=0,1,2$, the total mass process for a type-$i$ evolution is a \distribfont{BESQ}\checkarg[1-i]. In particular, this total mass process is Markovian in the filtration of the type-$i$ evolution. \end{proposition}
\section{Definitions of $k$-tree evolutions and statements of main results}\label{sec:def_thm}
\subsection{State spaces and killed $k$-tree evolutions}
A \emph{tree shape} is a rooted binary combinatorial tree with leaves labelled by a non-empty finite set $A\subset\mathbb{N}$, with the convention that the root vertex has degree one. We refer to the root as the ancestor of the other vertices, and the edge incident to the root as the ancestor of other edges. We will use genealogical language, such as ``child/parent/sibling/uncle,'' to describe relations between vertices or between edges of the tree shape. Think of each edge of a tree shape (and the branch point at the end of the edge farthest from the root) as being labeled by the set of labels of leaves that are separated from the root by that edge. This collection of edge labels specifies the tree shape. I.e.\ a tree shape can be represented as a collection $\mathcal{H}$ of subsets of $A$ with certain properties, including: $A\in\mathcal{H}$, representing the edge incident to the root, and $\{i\}\in\mathcal{H}$ for each $i\in A$, representing edges incident to the leaves.
We use the related representation that omits the singletons. We denote the set of such (representations of) tree shapes by $\bT^{\textnormal{shape}}_A$. For example, $$\mathbb{T}_{[3]}^{\rm shape}=\{\{[3],\{2,3\}\},\ \{[3],\{1,3\}\},\ \{[3],\{1,2\}\}\},\qquad\mbox{edge sets of }\
\parbox{0.9cm}{\includegraphics[height=1cm]{1alonecrop.pdf}}\;,\;\;
\parbox{0.9cm}{\includegraphics[height=1cm]{2alonecrop.pdf}}\;,\;\;
\parbox{0.9cm}{\includegraphics[height=1cm]{3alonecrop.pdf}}.$$
Given a tree shape $\mathbf{t}\in\bT^{\textnormal{shape}}_A$, for $E\in \mathbf{t}\cup \{\{i\}\colon i\in A\}$, $E\neq A$, we write $\parent{E}$ to denote the parent edge, $$\parent{E} = \bigcap_{F\in \mathbf{t}\colon E\subsetneq F}F.$$ We say an internal edge $E\in\mathbf{t}$ is of type 0, 1, or 2, if 0, 1, or 2 of its children are leaf edges, respectively. For example, in the tree shape $\{[3],\{1,3\}\}$, the edge $\{1,3\}$ is of type 2 with leaves 1 and 3 as its children, while $[3]$ is of type 1 with leaf 2 as its child.
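The edge-type bookkeeping above is easy to mechanize. The following Python sketch counts the leaf-edge children of an internal edge; the encoding of a tree shape as a set of frozensets (including the root edge but excluding leaf singletons, as in the representation above) and the function name are our own illustrative devices.

```python
def edge_type(t, E):
    """Type of internal edge E in tree shape t: the number (0, 1, or 2)
    of children of E that are leaf edges."""
    A = max(t, key=len)                          # root edge carries all labels
    edges = t | {frozenset({a}) for a in A}      # adjoin the leaf singletons
    def parent(F):                               # smallest edge strictly containing F
        return min((G for G in edges if F < G), key=len)
    return sum(1 for F in edges if F < E and parent(F) == E and len(F) == 1)
```

On the example $\{[3],\{1,3\}\}$ from the text, this classifies $\{1,3\}$ as type 2 and $[3]$ as type 1.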
For a finite, non-empty set $A\subset\mathbb{N}$, an \emph{$A$-tree} is a tree shape $\mathbf{t}\in\mathbb{T}_{A}^{\rm shape}$ equipped with non-negative weights on leaf edges and interval partitions marking the internal edges: \begin{equation}
\widebar{\bT}_{A}=\bigcup_{\mathbf{t}\in\mathbb{T}^{\rm shape}_{A}}\{\mathbf{t}\}\times[0,\infty)^A\times\mathcal{I}^\mathbf{t}. \end{equation}
For $k\ge1$ we refer to elements of $\widebar{\bT}_{k} := \widebar{\bT}_{[k]}$ as \emph{$k$-trees}. Consider $T=(\mathbf{t},(x_i,i\in A),(\beta_E,E\in\mathbf{t}))$ $\in \widebar{\bT}_{A}$. We write $\|T\|=\sum_{i\in A}x_i+\sum_{E\in\mathbf{t}}\|\beta_E\|$. Think of this representation in connection with Figure \ref{fig:B_k_tree_proj} and the description of the $k$-tree projection of a Brownian CRT in the introduction. The $x_i$ represent masses of subtrees corresponding to leaves of the tree represented by $\mathbf{t}$, while the $\beta_E$ represent totally ordered collections of subtree masses.
In this interpretation, the intervals in $\beta_E$ that are closer to $0$ represent subtrees that are farther from the root of the CRT.
We refer to each top mass $x_i$, $i\in A$, and each interval in each of the partitions $\beta_E$, $E\in\mathbf{t}$, as a \emph{block} of $T$. Formally, we denote the set of blocks by \begin{equation}
\textsc{block}(\mathbf{t},(x_i,i\in A),(\beta_E,E\in\mathbf{t})) := A\cup\{(E,a,b)\colon E\in\mathbf{t},\,(a,b)\in \beta_E\}. \end{equation}
We will write $\|\ell\|$ for the \emph{mass} of $\ell\in \textsc{block}(T)$; i.e.\ for the top masses $\|\ell\|:=x_\ell$, $\ell\in A$, and for the other blocks $\|\ell\|=\|(E,a,b)\|:=b-a$.
Then $\sum_{\ell\in \textsc{block}(T)}\|\ell\|=\|T\|$.
For each label set $A$ and each $\mathbf{t}\in\bT^{\textnormal{shape}}_A$, we topologize the set of $A$-trees with shape $\mathbf{t}$ by the product over the topologies in the components. This can be metrized by setting \begin{equation}\label{eq:ktree:metric_1}
d_{\mathbb{T}}(T,T') = \sum_{i\in A} |x_i-x'_i| + \sum_{E\in\mathbf{t}}d_{\IPspace}(\beta_E,\beta'_E) \end{equation} for $T,T'\in\widebar{\bT}_A$ with shapes $\mathbf{t}=\mathbf{t}'$. Within the set of trees with a given label set $A$ and shape, there is a single $A$-tree of zero total mass; we topologize the space of all $A$-trees, for all finite label sets $A$, by identifying all of these trees of zero mass, thereby gluing these spaces together. This is metrized by \begin{equation}\label{eq:ktree:metric_2}
d_{\mathbb{T}}(T,T') = \sum_{i\in A} x_i + \sum_{i\in A'}x'_i + \sum_{E\in\mathbf{t}}d_{\IPspace}(\beta_E,\emptyset) + \sum_{E\in\mathbf{t}'}d_{\IPspace}(\beta'_E,\emptyset) \end{equation}
for $T\in\widebar{\bT}_A$, $T'\in\widebar{\bT}_{A'}$ with differing tree shapes. We note that $d_{\IPspace}(\beta,\emptyset) = \max\{\|\beta\|,\mathscr{D}_\beta(\infty)\}$ for any $\beta\in\mathcal{I}$.
\begin{proposition}\label{prop:Lusin}
$\big(\big(\bigcup_A \widebar{\bT}_A \big)\big/ \big\{T\in \bigcup_A \widebar{\bT}_A\colon \|T\|=0\big\},d_{\mathbb{T}}\big)$ is a Lusin space. \end{proposition}
\begin{proof}
From \cite[Theorem 2.7]{Paper1}, $(\mathcal{I},d_{\IPspace})$ is Lusin. Thus, so is the above gluing of countably many product topologies of this with the Euclidean topology. \end{proof}
We are interested in $k$-tree-valued Markov processes that avoid certain degenerate states. For example, states with multiple zero top masses will be inaccessible by our evolutions. We also exclude states having a zero top mass with an empty partition on its parent edge. These latter states will arise as left limits but force jumps ``away from the boundary.'' Specifically, \begin{align}
\widetilde{\bT}_A &:= \left\{ T = (\mathbf{t},(x_i,i\in A),(\beta_E,E\in\mathbf{t}))\in\widebar{\bT}_A\, \middle|
\begin{array}{l}
x_i+x_j>0\mbox{ for all }E=\{i,j\}\in\mathbf{t}\\
\mbox{and }x_i\!+\!\big\|\beta_{\parent{\{i\}}}\big\| \!=\! 0\mbox{ for at most one }i\!\in\! A
\end{array}\!\!\right\}\!\notag\\ \label{eq:k_tree_spaces}
\bT_{A} &:= \left\{ T = \left(\mathbf{t},(x_i,i\in A),(\beta_E,E\in\mathbf{t})\right)\in\widetilde{\bT}_A \,\middle|\,
x_i+\big\|\beta_{\parent{\{i\}}}\big\| > 0\mbox{ for all }i\in A\right\}. \end{align}
Let $I\colon\widetilde{\bT}_A\rightarrow A\cup\{\infty\}$ record $I(T)=i$ if $x_i+\|\beta_{\parent{\{i\}}}\| = 0$ and set $I(T)=\infty$ if $T\in\bT_A$. In the former case, we say that \emph{label $i$ is degenerate in $T$}.
Because we will only ever consider single leaf trees in the case where the leaf has label 1, we take the convention that $\widetilde{\bT}_{1} = [0,\infty)$ and $\bT_{1} = (0,\infty)$, with this real number representing the mass on the leaf 1 component, which is then the total mass of the tree. We define $\bT_\emptyset = \{0\}$.
As noted above Proposition \ref{prop:Lusin}, we identify all trees of zero mass. We take the convention of writing $0$ to denote such a tree. Furthermore, let $\partial\notin \bigcup_A\widebar{\bT}_A$ denote an isolated cemetery state.
\begin{definition}[Killed $A$-tree evolution]\label{def:killed_ktree}
Consider an $A$-tree $T = (\mathbf{t},(m^0_i,i\in A),(\alpha^0_E,E\in\mathbf{t}))$ $\in\bT_A$ for some finite $A\subset\mathbb{N}$ with $\#A\ge 2$.
\begin{itemize}
\item For each type-2 edge $E = \{i,j\}\in \mathbf{t}$, let $((m_i^y,m_j^y,\alpha_E^y),y\ge0)$ denote a type-2 evolution from initial state $(m_i^0,m_j^0,\alpha_E^0)$, and let $D_E$ denote its degeneration time.
\item For each type-1 edge $E = \parent{\{i\}}\in \mathbf{t}$, let $((m_i^y,\alpha_E^y),y\ge0)$ denote a type-1 evolution from initial state $(m_i^0,\alpha_E^0)$, and let $D_E$ denote its absorption time in $(0,\emptyset)$.
\item For each type-0 edge $E\in\mathbf{t}$, let $(\alpha_E^y,y\ge0)$ denote a type-0 evolution from initial state $\alpha^0_E$ and define $D_E=\infty$.
\end{itemize}
We take these evolutions to be jointly independent. Let $D = \min_{E\in\mathbf{t}}D_E$. Define $\mathcal{T}^y = (\mathbf{t},(m_i^y,i\in A),(\alpha_E^y,E\in\mathbf{t}))$ for $y\in [0,D)$ and $\mathcal{T}^y = \partial$ for $y\ge D$. This is the \emph{killed $A$-tree evolution from initial state $T$}. We call $D$ the \emph{degeneration time} of the evolution.
For $\#A=1$, define $(\mathcal{T}^y)$ to be a \distribfont{BESQ}\checkarg[-1] starting from $T$, killed upon hitting zero. \end{definition}
In light of this construction, in a $k$-tree, we refer to each type-2 edge partition together with its two top masses, $(x_i,x_j,\beta_{\{i,j\}})$, as a \emph{type-2 compound}. Likewise, for a type-1 edge $E = \parent{\{i\}}$, we call $(x_i,\beta_E)$ a \emph{type-1 compound}, and for each type-0 edge $F$, the partition $\beta_F$ is a \emph{type-0 compound}. In Figure \ref{fig:B_k_tree_proj}, $\beta_{[5]}^{(5)}$ is a type-0 compound, $\big(X_2^{(5)},\beta_{\{1,2,4\}}^{(5)}\big)$ is a type-1 compound, and $\big(X_3^{(5)},X_5^{(5)},\beta_{\{3,5\}}^{(5)}\big)$ and $\big(X_1^{(5)},X_4^{(5)},\beta_{\{1,4\}}^{(5)}\big)$ are type-2 compounds.
\subsection{Label swapping and non-resampling $k$-tree evolution}\label{sec:non_resamp_def}
In the theory of Borel right Markov processes, \emph{branch states} are states that are not visited by the right-continuous Markov process but may be attained as a left limit, triggering an instantaneous jump. We will now define non-resampling $k$-tree evolutions with branch states in $\widetilde{\bT}_A\setminus\bT_A$. When a type-1 or type-2 compound in an $A$-tree degenerates, we project this compound down and the evolution proceeds with one fewer leaf label.
However, in \cite{Paper2} we found that in the discrete regime, in order to construct a family of projectively consistent Markov processes, it was necessary to have degenerate labels sometimes swap places with other nearby, higher labels before dropping the degenerate component and its label with it. The following two definitions lead to an analogous construction in the present setting. The role of this mechanism in preserving consistency will be evident in the proof of Proposition \ref{prop:Dynkin:killed}.
\begin{figure}
\caption{Example of the swap-and-reduce map on a tree shape. Least labels in the two subtrees descended from sibling and uncle of leaf edge $\{i\}$ are shown in bold.}
\label{fig:label_swap}
\end{figure}
\textbf{Swap-and-reduce map for tree shapes. }
Consider a tree shape $\mathbf{t}\in\bT^{\textnormal{shape}}_A$ on some label set with $\#A\ge2$ and a label $i\in A$. Let $J(\mathbf{t},i) = \max\{i,a,b\}$, where $a$ and $b$ are, respectively, the smallest labels of the sibling and the uncle of leaf edge $\{i\}$ in $\mathbf{t}$. In the special case that the parent $\parent{\{i\}}$ is $A$, so that $\{i\}$ has no uncle, we set $b=0$. In the example in Figure \ref{fig:label_swap},
$$\mathbf{t} = \big\{\, [9],\, \{5,7\},\, \{1,2,3,4,6,8,9\},\, \{1,2,4,6,8,9\},\, \{1,6\},\, \{2,4,8,9\},\, \{4,8,9\},\, \{4,8\} \,\big\}.$$
Leaf edge $\{2\}$ has sibling $\{4,8,9\}$ and uncle $\{1,6\}$, so $a=4$, $b=1$, and $J(\mathbf{t},2) = \max\{2,4,1\} = 4$.
We define a \emph{swap-and-reduce map on tree shapes}, $\widetilde\varrho\colon \bT^{\textnormal{shape}}_A\times A \rightarrow \bigcup_{a\in A}\bT^{\textnormal{shape}}_{A\setminus\{a\}}$ mapping $(\mathbf{t},i)$ to the tree shape $\mathbf{t}'$ obtained from $\mathbf{t}$ by first swapping labels $i$ and $j=J(\mathbf{t},i)$, then deleting the leaf subsequently labeled $j$ and contracting away its parent branch point. Formally, $\mathbf{t}'$ is the image of $\mathbf{t}\setminus\big\{\parent{\{i\}}\big\}$ under the map $\phi_{\mathbf{t},i}$ that modifies label sets $E\in\mathbf{t}$ by first deleting label $i$ from the sets, and then replacing label $j$ by $i$.
In the example in Figure \ref{fig:label_swap}, $\phi_{\mathbf{t},2}$ is
\newcommand{\mapsdown}{\rotatebox[origin=c]{-90}{\ensuremath{\mapsto}}}
\definecolor{darkblue}{rgb}{0.25,0.25,0.7}
\definecolor{mygreen}{rgb}{0,0.75,0}
\definecolor{gold}{rgb}{.8,0.7,0}
$$
\begin{array}{r@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}l}
\mathbf{t} = \big\{ & \hphantom{,}[9], & \hphantom{,}\{5,7\}, & \hphantom{,}\{1,2,3,4,6,8,9\}, & \hphantom{,}\{1,2,4,6,8,9\}, & \hphantom{,}\textcolor{blue}{\{1,6\}}, & \hphantom{,}\textcolor{red}{\{2,4,8,9\}}, & \hphantom{,}\textcolor{mygreen}{\{4,8,9\}}, & \hphantom{,}\{4,8\}\hphantom{,}&\big\}\\
& \mapsdown & \mapsdown & \mapsdown & \mapsdown & \mapsdown & & \mapsdown & \mapsdown & \\
\mathbf{t}' = \big\{& \hphantom{,}[9]\!\setminus\!\{4\}, & \hphantom{,}\{5,7\}, & \hphantom{,}\{1,2,3,6,8,9\}, & \hphantom{,}\{1,2,6,8,9\}, & \hphantom{,}\textcolor{blue}{\{1,6\}}, & & \hphantom{,}\textcolor{mygreen}{\{2,8,9\}}, & \hphantom{,}\{2,8\}\hphantom{,} &\big\}.
\end{array}
$$
Note that in the preceding definition, $\phi_{\mathbf{t},i}(E_1) = \phi_{\mathbf{t},i}(E_2)$ if and only if $E_1\setminus\{i\} = E_2\setminus\{i\}$. But the only distinct edges $E_1\neq E_2$ in $\mathbf{t}$ with this relationship are the sibling and parent of leaf edge $\{i\}$. Thus, by excluding $\parent{\{i\}}$ from its domain, we render $\phi_{\mathbf{t},i}$ injective and ensure that the range of this map is an element of $\bT^{\textnormal{shape}}_{A\setminus\{J(\mathbf{t},i)\}}$.
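As a sanity check, the swap-and-reduce map on tree shapes can be implemented directly from the definition. In the Python sketch below (the encoding, as a set of frozensets including the root edge but not the leaf singletons, and the function names are ours), the function returns the pair $(J(\mathbf{t},i),\mathbf{t}')$; running it on the example of Figure \ref{fig:label_swap} reproduces the displayed $\mathbf{t}'$.

```python
def parent(edges, E):
    """Unique minimal edge strictly containing E (unique for binary shapes)."""
    return min((F for F in edges if E < F), key=len)

def swap_and_reduce(t, i):
    """Sketch of the swap-and-reduce map: returns (J(t, i), t').
    t: set of frozensets, the edges of the tree shape, including the
    root edge A but not the leaf singletons."""
    A = max(t, key=len)                           # root edge carries all labels
    edges = t | {frozenset({a}) for a in A}       # adjoin leaf edges
    leaf = frozenset({i})
    par = parent(edges, leaf)
    sibling = next(E for E in edges
                   if E != leaf and E < par and parent(edges, E) == par)
    if par == A:                                  # {i}'s parent is the root edge:
        b = 0                                     # no uncle, so b = 0
    else:
        grand = parent(edges, par)
        uncle = next(E for E in edges
                     if E != par and E < grand and parent(edges, E) == grand)
        b = min(uncle)
    j = max(i, min(sibling), b)
    def phi(E):                                   # delete i, then relabel j as i
        E = E - {i}
        return (E - {j}) | {i} if j in E else E
    return j, {phi(E) for E in t if E != par}
```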
The swap-and-reduce map on tree shapes induces a corresponding map for degenerate $A$-trees, where labels are swapped and the degenerate component is projected away, but everything else remains unchanged.
\textbf{Swap-and-reduce map for $A$-trees. }
Consider $T = (\mathbf{t},(x_a,a\in A),(\beta_E,E\in\mathbf{t}))\in\widetilde{\bT}_A\setminus\bT_A$. Recall that for such an $A$-tree, $I(T)$ denotes the unique index $i\in A$ for which $x_i+\big\|\beta_{\parent{\{i\}}}\big\| = 0$. We define $J\colon\widetilde{\bT}_A\setminus\bT_A\rightarrow A$ by
$J(T) = J(\mathbf{t},I(T))$, as defined above. The \emph{swap-and-reduce map on $A$-trees} is the map
$$\varrho\colon \widetilde{\bT}_A\setminus \bT_A \rightarrow \bigcup_{j\in A}\bT_{A\setminus\{j\}}$$
that sends $T$ to
$(\widetilde\varrho(T),(x'_a, a\in A\setminus\{J(T)\}),(\beta'_E, E\in \widetilde\varrho(T)))$ where: (1) $x'_a = x_a$ for $a\neq I(T)$, (2) $x'_{I(T)} = x_{J(T)}$ if $I(T)\neq J(T)$, and (3) $\beta'_{E} = \beta_{\phi_{\mathbf{t},I(T)}^{-1}(E)}$ for each $E\in \widetilde\varrho(T)$, where $\phi_{\mathbf{t},I(T)}$ is the injective map defined above.
\begin{definition}[Non-resampling $k$-tree evolution]\label{def:nonresamp_1}
Set $A_1 = [k]$ and fix some $\mathcal{T}^0_{(1)} =T\in \bT_{A_1}$. Inductively for $1\le n\le k$, let $(\mathcal{T}^y_{(n)},y\in [0,\Delta_n))$ denote a killed $A_{n}$-tree evolution from initial state $\mathcal{T}_{(n)}^0$, run until its degeneration time $\Delta_n$, conditionally independent of $(\mathcal{T}_{(j)},j < n)$ given its initial state. If $n<k$, we then set $A_{n+1} = A_n\setminus \{J(\mathcal{T}_{(n)}^{\Delta_n-})\}$ and let $\mathcal{T}_{(n+1)}^0 = \varrho(\mathcal{T}_{(n)}^{\Delta_n-})$.
For $1\le n\le k$ we define $D_n = \sum_{j=1}^n\Delta_j$ and set $D_0=0$. For $y\in [D_{n-1},D_{n})$ we define $\mathcal{T}^y = \mathcal{T}_{(n)}^{y-D_{n-1}}$. For $y\ge D_k$ we set $\mathcal{T}^y = 0 \in \bT_{\emptyset}$. Then $(\mathcal{T}^y,y\ge0)$ is a \emph{non-resampling $k$-tree evolution from initial state $T$}. We say that at each time $D_n$, label $I(\mathcal{T}^{D_n-})$ has \emph{caused degeneration} and label $J(\mathcal{T}^{D_n-})$ is \emph{dropped in degeneration}. \end{definition}
\subsection{Resampling $k$-tree evolutions and de-Poissonization}\label{sec:resamp_def}
We now define a resampling $k$-tree evolution in which at degeneration times we first apply $\varrho$ and then jump into a random state according to a resampling kernel, which reinserts the label lost in degeneration, so that the evolution always retains all $k$ labels.
\textbf{Label insertion operator $\oplus$. }
\emph{For tree shapes.} Consider $\mathbf{t}\in\bT^{\textnormal{shape}}_A$. Given an edge $F\in\mathbf{t}\cup\{\{a\}\colon a\in A\}$, we define $\mathbf{t}\oplus (F,j)$ to be the tree shape with labels $A\cup\{j\}$ formed by replacing edge $F$ by a path of length 2, and inserting label $j$ as a child of the new branch point in the middle of the path. Formally, for each $E\in\mathbf{t}$ we define (a) $\phi(E)=E\cup\{j\}$ if $F\subsetneq E$ and (b) $\phi(E)=E$ otherwise. Then $\mathbf{t}\oplus (F,j)$ equals $\phi(\mathbf{t})\cup \{F\cup\{j\}\}$.
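For tree shapes the insertion operator is a short manipulation of edge label sets; the following Python sketch (our own encoding, as a set of frozensets including the root edge) implements it directly from the formal definition.

```python
def insert_label(t, F, j):
    """Tree-shape insertion t (+) (F, j): replace edge F by a path of length 2
    and hang the new leaf j off the branch point in the middle.
    F may be an internal edge of t or a leaf singleton."""
    def phi(E):
        return E | {j} if F < E else E            # strict ancestors of F gain label j
    # phi(t) plus the new edge F u {j} above the new branch point
    return {phi(E) for E in t} | {frozenset(F) | {j}}
```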
\emph{For $A$-trees.} Consider an $A$-tree $T = (\mathbf{t},(x_m,m\in A),(\beta_E,E\in\mathbf{t}))$, a label $i\in A$, and a 2-tree
$U = (y_1,y_2,\gamma)\in \bT_{2}$ with $\|U\| = 1$, where we have dropped the tree shape because all elements of $\bT_{2}$ have the same shape. We define $T\oplus (i,j,U)$ to be the $(A\cup\{j\})$-tree formed by replacing the leaf block $i$ and its weight $x_i$ by the rescaled 2-tree in which label $i$ gets weight $x_iy_1$, a new label $j$ gets weight $x_iy_2$, and their new parent edge bears partition $x_i\gamma$.
Formally,
$T\oplus (i,j,U) = (\mathbf{t}\oplus(\{i\},j),(x'_m,m\in A\cup\{j\}),(\beta'_E,E\in\mathbf{t}\oplus(\{i\},j)))$, where: (i) $(x'_i,x'_j,\beta'_{\{i,j\}}) = x_i U$, (ii) $x'_m=x_m$ for $m\notin \{i,j\}$, and (iii) $\beta'_E = \beta_{\phi^{-1}(E)}$ for $E\neq \{i,j\}$, where $\phi$ is as above.
Now consider a block $\ell = (F,a,b)\in \textsc{block}(T)$. This block splits $\beta_F$ into $\beta_{F,0}\star (0,b-a)\star \beta_{F,1}$. We define $T\oplus (\ell,j,U)$ to be the $A\cup\{j\}$ tree formed by inserting label $j$ into block $\ell$. In this definition, $U$ is redundant. Formally, $T\oplus (\ell,j,U) = (\mathbf{t}\oplus (F,j),(x'_m,m\in A\cup\{j\}),(\beta'_E,E\in\mathbf{t}\oplus(F,j)))$, where: (i) $x'_m=x_m$ for $m\neq j$, (ii) $\beta'_E = \beta_{\phi^{-1}(E)}$ for $E\notin \{F,F\cup\{j\}\}$, and (iii) $(\beta'_F,x'_j,\beta'_{F\cup\{j\}}) = (\beta_{F,0},b-a,\beta_{F,1})$.
\textbf{Resampling kernel for $A$-trees. }
For finite non-empty $A\subset\mathbb{N}$ and $j\in\mathbb{N}\setminus A$, we define the \emph{resampling kernel} as the distribution of the tree obtained by inserting label $j$ into a block chosen at random according to the masses of blocks and, if the chosen block is a top mass $x_i$, then replacing the block by a rescaled Brownian reduced $2$-tree. More formally, we define a kernel $\Lambda_{j,A}$ from $\bT_A$ to $\bT_{A\cup\{j\}}$ by
\begin{equation}
\int_{T^\prime\in\bT_{A\cup\{j\}}}\varphi(T^\prime)\Lambda_{j,A}(T,dT^\prime)=\sum_{\ell\in \textsc{block}(T)}\frac{\|\ell\|}{\|T\|}\int_{U\in\bT_{2}}\varphi(T\oplus(\ell,j,U))Q(dU),
\end{equation}
where $Q$ denotes the distribution of a Brownian reduced $2$-tree with leaf labels $\{1,2\}$, as defined in the introduction.
In \eqref{eq:B_ktree_resamp}, we describe how these resampling kernels generate a Brownian reduced $k$-tree.
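The only randomness in the kernel $\Lambda_{j,A}$, beyond the independent Brownian reduced 2-tree $U$, is the mass-proportional choice of a block. A minimal Python sketch of that choice (the names are ours; the uniform variate is passed in explicitly rather than drawn internally, so the choice is reproducible):

```python
def choose_block(blocks, u):
    """Mass-proportional block selection underlying the resampling kernel:
    block l is chosen with probability ||l|| / ||T||.
    blocks: list of (label, mass) pairs; u: a Uniform(0,1) variate."""
    total = sum(mass for _, mass in blocks)
    acc = 0.0
    for label, mass in blocks:
        acc += mass / total
        if u < acc:
            return label
    return blocks[-1][0]                          # guard against rounding at u ~ 1
```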
\begin{definition}[Resampling $k$-tree evolution]\label{def:resamp_1}
Fix some $\mathcal{T}^0_{(1)} =T\in \bT_{k}$. Inductively for $n\ge1$, let $(\mathcal{T}^y_{(n)},y\in [0,\Delta_n))$ denote a killed $k$-tree evolution from initial state $\mathcal{T}_{(n)}^0$, run until its degeneration time $\Delta_n$, conditionally independent of $(\mathcal{T}_{(j)},j< n)$ given its initial state. We define $\mathcal{T}_{(n+1)}^0$ to have conditional distribution $\Lambda_{J_n,[k]\setminus\{J_n\}}\big(\varrho(\mathcal{T}_{(n)}^{\Delta_n-}),\cdot\,\big)$ given $(\mathcal{T}_{(j)},j\le n)$, where $J_n = J(\mathcal{T}_{(n)}^{\Delta_n-})$.
We define $D_n = \sum_{j=1}^n\Delta_j$ and set $D_0=0$. For $y\in [D_{n-1},D_{n})$ we define $\mathcal{T}^y = \mathcal{T}_{(n)}^{y-D_{n-1}}$. For $y\!\ge\! D_\infty:=\sup_{n\ge 0}D_n$ we set $\mathcal{T}^y\! =\! 0\! \in\! \bT_{\emptyset}$. Then $(\mathcal{T}^y,y\!\ge\!0)$ is a \emph{resampling $k$-tree evolution with initial state $T$}. \end{definition}
\begin{theorem}\label{thm:Markov}
Killed $A$-tree evolutions and non-resampling and resampling $k$-tree evolutions are Borel right Markov processes but are not quasi-left-continuous; thus they are not Hunt processes. \end{theorem}
\begin{proposition}\label{prop:non_accumulation}
For resampling $k$-tree evolutions with degeneration times $D_n$, $n\ge 1$, the limit $D_\infty = \lim_{n\rightarrow\infty}D_n$ equals $\inf\{y\ge0\colon \|\mathcal{T}^{y-}\|=0\}$, and this is a.s.\ finite. \end{proposition}
\begin{theorem}\label{thm:total_mass}
For a non-resampling or resampling $k$-tree evolution $(\mathcal{T}^y,y\ge0)$ whose initial state has mass $\|\mathcal{T}^0\| = m$, the total mass process $(\|\mathcal{T}^y\|,y\ge0)$ has law $\distribfont{BESQ}\checkarg_m(-1)$. \end{theorem}
We prove Theorem \ref{thm:Markov} and a partial form of Theorem \ref{thm:total_mass} at the start of Section \ref{sec:k_fixed}. In particular, we prove the assertion of Theorem \ref{thm:total_mass} for non-resampling evolutions, and we reduce the resampling case to Proposition \ref{prop:non_accumulation}, which we then prove in Section \ref{sec:non_acc} and Appendix \ref{sec:non_acc_2}.
\begin{proposition}\label{prop:pseudo:resamp}
Let $(\mathcal{T}^y,y\ge 0)$ be a resampling $k$-tree evolution starting from a scaled Brownian reduced $k$-tree of total mass $M$, and let $B\sim\distribfont{BESQ}_M(-1)$. Then at any fixed time $y\ge0$, $\mathcal{T}^y$ has the distribution of a scaled Brownian reduced $k$-tree of mass $B(y)$. \end{proposition}
In light of this result, we refer to the laws of independently scaled Brownian reduced $k$-trees as the \emph{pseudo-stationary laws for resampling $k$-tree evolutions.} We prove this proposition in Section \ref{sec:pseudo}.
\subsection{De-Poissonized $k$-tree evolutions}
Given a c\`adl\`ag\ path $\mathbf{T} = (T^y,y\!\ge\!0)$ in $\bigcup_A\widetilde{\bT}_{A}$, let \begin{equation}\label{eq:dePoi:time_change}
\rho(\mathbf{T})\colon[0,\infty)\rightarrow[0,\infty],\qquad\rho_u(\mathbf{T}) := \inf\left\{y\ge 0\colon\int_0^y\|T^z\|^{-1}dz>u\right\},\quad u\ge 0. \end{equation}
This process is continuous and strictly increasing until it is possibly absorbed at $\infty$. If the total mass process $(\|T^y\|,y\ge0)$ evolves as a $\distribfont{BESQ}(-1)$, as in Theorem \ref{thm:total_mass}, then $\rho(\mathbf{T})$ is bijective onto $[0,\zeta)$, where $\zeta = \inf\{y\ge0\colon \|T^y\| = 0\}$ is a.s.\ finite; see e.g.\ \cite[pp.\ 314--315]{GoinYor03}.
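Numerically, the time change $\rho_u(\mathbf{T})$ of \eqref{eq:dePoi:time_change} can be approximated from a discretized total-mass path by accumulating the integral $\int_0^y\|T^z\|^{-1}dz$ until it exceeds $u$. A crude left-endpoint sketch (the step size and names are ours):

```python
def rho(mass_path, dt, u):
    """Left-endpoint discretization of the time change rho_u: the first y with
    int_0^y ||T^z||^{-1} dz > u. mass_path[n] plays the role of ||T^{n dt}||;
    returns inf if the accumulated integral never exceeds u."""
    acc = 0.0
    for n, m in enumerate(mass_path):
        if m <= 0.0:
            return float("inf")                   # mass hit zero: absorbed
        acc += dt / m
        if acc > u:
            return (n + 1) * dt
    return float("inf")
```

For constant mass $m$ the integral equals $y/m$, so $\rho_u = mu$, which the discretization recovers up to one step.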
Let $\bT_{k,1} := \big\{T\in\bT_{k}\colon \|T\| = 1\big\}$.
\begin{definition}\label{def:dePoi}
Let $\mathbf{T} = (\mathcal{T}^y,y\ge0)$ denote a resampling (respectively, non-resampling) $k$-tree evolution from initial state $T\in\bT_{k,1}$. Then
$$\widebar{\mathcal{T}}^u := \big\|\mathcal{T}^{\rho_u(\mathbf{T})}\big\|^{-1} \mathcal{T}^{\rho_u(\mathbf{T})},\quad u\ge0$$
is a \emph{de-Poissonized resampling} (resp.\ \emph{non-resampling}) \emph{$k$-tree evolution from initial state $T$}.
\end{definition}
\begin{theorem}\label{thm:dePoi}
De-Poissonized resampling and de-Poissonized non-resampling $k$-tree evolutions are Borel right Markov processes. The former are stationary with the laws of the Brownian reduced $k$-trees. The latter are eventually absorbed at the state $1\in\bT_{1,1}$ of the tree whose only leaf is $1$, carrying unit weight. \end{theorem}
Analogous de-Poissonization results have been given for type-0/1/2 evolutions in \cite[Theorem 1.6]{Paper1} and \cite[Theorem 4]{Paper3}. The results stated in Section \ref{sec:resamp_def} suffice to prove this theorem by the same method.
\begin{proof}
The right-continuity of sample paths is preserved by both the time change and the normalization. The rest of the proof of \cite[Theorem 1.6]{Paper1}, including the auxiliary results from Proposition 6.7 to Theorem 6.9 of that paper, is easily adapted, with the Markov property, total mass, and pseudo-stationarity results of Theorems \ref{thm:Markov} and \ref{thm:total_mass} and Proposition \ref{prop:pseudo:resamp} serving in place of Proposition 5.20 and Theorems 1.5 and 6.1 in \cite{Paper1}. \end{proof}
We also obtain the following result for de-Poissonized resampling $k$-tree evolutions.
\begin{corollary}\label{cor:WF}
Let $\widebar{\mathbf{T}} = (\widebar{\mathcal{T}}^u,u\!\ge\!0)= ((\widebar{\mathbf{t}}^u_k,(\widebar X^u_j ,j\!\in\![k]),(\widebar\beta^u_E,E\!\in\!\widebar{\mathbf{t}}^u_k)), u\!\geq\! 0)$ denote a de-Poissonized resampling $k$-tree evolution from initial state $T= (\mathbf{t}_k,(X_j,j\!\in\![k]),(\beta_E,E\!\in\!\mathbf{t}_k))$ $\in\bT_{k,1}$, and let $\tau$ be the first time either a top mass or an interval partition has mass $0$. Observe that $\tau \leq \widebar D_1$, where $\widebar D_1$ is the first time $\widebar{\mathbf{T}}$ resamples, so for $u<\tau$, $\widebar{\mathbf{t}}^u_k=\mathbf{t}_k$. Then $\big(\big((\widebar X^{u/4}_j,j\!\in\![k]),(\|\widebar\beta^{u/4}_E\|,E\!\in\!\mathbf{t}_k)\big), 0\leq u <\tau\big)$ is a Wright--Fisher diffusion, killed when one of the coordinates vanishes, with parameters $-1/2$ for coordinates corresponding to top masses and $1/2$ for coordinates corresponding to masses of interval partitions. \end{corollary}
\begin{proof}
Let $\mathbf{T}$ be a resampling $k$-tree evolution started from $T$. By Propositions \ref{prop:012:pred} and \ref{prop:012:concat}, up until the first time a top mass or the mass of an interval partition is zero, the top masses evolve as \distribfont{BESQ}\checkarg[-1] processes and the masses of internal interval partitions evolve as \distribfont{BESQ}\checkarg[1] processes, and all of these are independent. The effect of the de-Poissonization procedure in Definition \ref{def:dePoi} on these evolutions is identical to the de-Poissonization procedure \cite{Pal11, Pal13} used to construct Wright--Fisher diffusions, and the result follows.\qedhere \end{proof}
\subsection{Projective consistency results}
\begin{definition}[Projection maps for $A$-trees]\label{def:proj}
For $j\in\mathbb{N}$ and $A\subseteq\mathbb{N}$ with $\#(A\setminus\{j\})\in [1,\infty)$, we define a projection map
$$\pi_{-j}\colon \bT_A\rightarrow\bT_{A\setminus\{j\}}$$
to remove label $j$ from an $A$-tree, as follows. Let $T = (\mathbf{t},(x_i,i\in A),(\beta_E,E\in\mathbf{t}))\in \bT_A$. If $j\notin A$ then $\pi_{-j}(T) = T$. Otherwise, let $\phi$ denote the map $E\mapsto E\setminus\{j\}$ for $E\in\mathbf{t}\setminus\big\{\parent{\{j\}}\big\}$. As noted for a similar map in Section \ref{sec:non_resamp_def}, this map is injective. Then $\pi_{-j}(T) := (\mathbf{t}',(x'_i,i\in A\setminus\{j\}),(\beta'_E,E\in\mathbf{t}'))$, where
\begin{enumerate}[label=(\roman*),ref=(\roman*)]
\item $\displaystyle\mathbf{t}' = \phi(\mathbf{t}) = \big\{E\setminus\{j\}\colon E\in \mathbf{t}\setminus\big\{\parent{\{j\}}\big\}\big\}$,
\item if $E = \parent{\{j\}}$ is a type-1 edge in $\mathbf{t}$ then $\beta'_{E\setminus\{j\}} = \beta_{E\setminus\{j\}}\star (0,x_j)\star\beta_E$,\label{item:proj:merge}
\item if $\parent{\{j\}} = \{a,j\}$ is a type-2 edge in $\mathbf{t}$ then $x_a' = x_a+x_j+\|\beta_{\{a,j\}}\|$, \label{item:proj:add}
\item if $i\in A\setminus\{j\}$ is \emph{not} the sibling of $\{j\}$ in $\mathbf{t}$, then $x'_i = x_i$, and
\item if $E\in \mathbf{t}'$ is \emph{not} the sibling of $\{j\}$ in $\mathbf{t}$, then $\beta'_E = \beta_{\phi^{-1}(E)}$.
\end{enumerate}
For $k\ge1$ and any finite $A\subseteq\mathbb{N}$ with $A\cap[k]\neq\emptyset$, we define $\pi_k\colon \bT_A\rightarrow\bT_{A\cap [k]}$ to be the composition $\pi_{-(k+1)}\circ\pi_{-(k+2)}\circ\cdots\circ\pi_{-\max(A)}$. It is straightforward to check that these projection maps commute, so the order of composition is immaterial. \end{definition}
These projections are illustrated in Figure \ref{fig:k_tree_proj}. Note how, in that example, in passing from $T$ to $\pi_5(T)$, the condition of item \ref{item:proj:merge} of the above definition applies, whereas in passing from $\pi_5(T)$ to $\pi_4(T)$, item \ref{item:proj:add} applies.
\begin{figure}
\caption{Projections of a $6$-tree.}
\label{fig:k_tree_proj}
\end{figure}
\begin{theorem}\label{thm:consistency}
\begin{enumerate}[label=(\roman*),ref=(\roman*)]
\item \label{item:cnst:nonresamp}
Let $2\le j<k$. For $(\mathcal{T}^y,y\ge0)$ any non-resampling $k$-tree evolution, $(\pi_j(\mathcal{T}^y),y\ge0)$ is a non-resampling $j$-tree evolution.
\item \label{item:cnst:resamp}
If $(\mathcal{T}^y,y\ge0)$ is a resampling $k$-tree evolution and $\mathcal{T}^0$ satisfies
\begin{equation}\label{eq:cnst:init}
\mathbb{E} [\varphi(\mathcal{T}^0)] = \int_{(T_i)_{i=j+1}^k \in \bT_{j+1}\times\cdots\times \bT_{k}}\Lambda_{j+1,[j]}(T_j,dT_{j+1})\cdots\Lambda_{k,[k-1]}(T_{k-1},dT_k)\varphi(T_k)
\end{equation}
for some $T_j\in \bT_{j}$, then $(\pi_j(\mathcal{T}^y),y\ge0)$ is a resampling $j$-tree evolution.
\item \label{item:cnst:dePoi} These same results hold for de-Poissonised versions of these processes.
\end{enumerate} \end{theorem}
For $j\ge1$, we say that $(T_k,k\ge j)$ is a consistent family of $k$-trees if $\pi_{k-1}(T_k)=T_{k-1}$ for all $k > j$. We say that a family of $k$-tree evolutions $(\mathcal{T}_k^y,y\ge 0)$, $k\ge j$, is consistent if $(\mathcal{T}_k^y,k\ge j)$ is consistent for each $y\ge 0$. The next result follows from Theorem \ref{thm:consistency} by the Kolmogorov consistency theorem.
\begin{corollary}\label{cor:consistent_fam}
\begin{enumerate}[label=(\roman*), ref=(\roman*)]
\item For every consistent family $T_k\in\bT_{k}$, $k\ge 1$, there are consistent families of non-resampling $k$-tree evolutions $(\mathcal{T}_k^{y},y\ge 0)$, $k\ge 1$, with $\mathcal{T}_k^0 = T_k$ for each $k$.\label{item:cnstfam:nonresamp}
\item For any fixed $j\ge 1$ and $T\in\bT_{j}$, there exists a consistent family of resampling $k$-tree evolutions $(\mathcal{T}_k^y,y\ge0)$, $k\ge j$, with $\mathcal{T}_j^0 = T$.\label{item:cnstfam:resamp}
\item Assertions \ref{item:cnstfam:nonresamp} and \ref{item:cnstfam:resamp} hold for de-Poissonised versions of these processes; in particular, there is a consistent family of stationary de-Poissonised resampling $k$-tree evolutions, $(\widebar{\mathcal{T}}_k^{u},u\ge 0)$, $k\ge 1$.\label{item:cnstfam:dePoi}
\end{enumerate} \end{corollary}
We note one more consistency result. For the following result, we require notation $\pi_A$ generalising Definition \ref{def:proj} of $\pi_k$ to a general finite non-empty label set $A\subset \mathbb{N}$. This is defined in the obvious, analogous manner.
\begin{proposition}\label{prop:resamp_to_non}
Suppose $(\mathcal{T}^y,y\ge0)$ is a resampling $k$-tree evolution. Then there exists a process $((A_y,B_y,\sigma_y),y\ge0)$ that is constant between degeneration times, such that $A_y$ and $B_y$ are subsets of $[k]$ and $\sigma_y$ is a bijection between them, and such that $\sigma_y\circ\pi_{A_y}(\mathcal{T}^y)\in\bT_{B_y}$, $y\ge0$, is a non-resampling $k$-tree evolution. \end{proposition}
In Section \ref{sec:consistency}, we prove that the consistency of Theorem \ref{thm:consistency}\ref{item:cnst:resamp} holds until time $D_\infty$. We apply this result in Section \ref{sec:non_acc} to prove Proposition \ref{prop:non_accumulation}, which in turn completes the proof of Theorem \ref{thm:consistency}\ref{item:cnst:resamp}. The remaining results of Theorem \ref{thm:consistency} and Proposition \ref{prop:resamp_to_non} are proved in Section \ref{sec:const:other}.
\section{Proofs of results for fixed $k$}\label{sec:k_fixed}
\subsection{Markov property and total mass}\label{sec:Markov_mass}
\begin{proof}[Proof of Theorem \ref{thm:Markov}]
Killed $A$-tree evolutions are Borel right Markov processes: they are effectively tuples of independent type-0/1/2 evolutions, killed at a stopping time, and these component evolutions are themselves Borel right Markov processes, as noted in Proposition \ref{prop:012:pred}\ref{item:pred:Markov}. Note that, because the evolutions of the various type-0/1/2 compounds in the tree are independent and their degeneration times are continuous random variables, almost surely one of them degenerates strictly before all of the others. Since each type-$i$ compound has $i$ positive top masses and positive interval partition mass at almost all times before its degeneration time, $\mathcal{T}^{D_1-}\in\widetilde{\bT}_{k}$ a.s. Therefore, the non-resampling and resampling $k$-tree evolutions are well-defined. Moreover, the type of construction undertaken in Definitions \ref{def:nonresamp_1} and \ref{def:resamp_1} of non-resampling and resampling $k$-tree evolutions is well-studied; it yields a Borel right Markov process by Th\'eor\`eme 1 and the Remarque on p.\ 474 of Meyer \cite{Mey75}.
Finally, we use stopping times $S_n := \inf\big\{y\ge0\colon \min_{i\in A}(m_i^y+\|\beta_{\parent{\{i\}}}^y\|) < 1/n\big\}$, $n\ge1$, to show that these processes fail to be quasi-left-continuous.
These times are eventually strictly increasing (as soon as they exceed 0) and they converge to the (first) degeneration time. Thus, the first degeneration time is an increasing limit of stopping times, so it is visible in the left-continuous filtration and is a time at which the three processes are discontinuous. \end{proof}
\begin{proposition}\label{prop:total_mass_0}
The total mass process of a non-resampling $A$-tree evolution is a \distribfont{BESQ}\checkarg[-1]. The total mass process of a resampling $k$-tree evolution is also a \distribfont{BESQ}\checkarg[-1], killed at the random time $D_\infty := \sup_nD_n$. \end{proposition}
This is a partial form of Theorem \ref{thm:total_mass}. In Section \ref{sec:non_acc}, we prove Proposition \ref{prop:non_accumulation}, that $D_\infty = \inf\{y\ge0\colon \|\mathcal{T}^{y-}\|=0\}$, completing the proof of Theorem \ref{thm:total_mass}.
\begin{proof}
Let $(\mathcal{T}^y,y\ge0)$ denote a non-resampling $k$-tree evolution. Up until its first degeneration, its total mass $\|\mathcal{T}^y\|$ is the sum of the total masses of $k-1$ type-0/1/2 evolutions -- one compound for each internal edge $E\in\mathbf{t}$ of the tree shape. In particular, the sum of the ``type numbers'' of these compounds is $k$: if we let $n_i$ denote the number of type-$i$ compounds, $i\in\{0,1,2\}$, then
$$k-1 = n_0+n_1+n_2 \qquad\text{and} \qquad k = 0\times n_0 + 1\times n_1 + 2\times n_2 = n_1 + 2n_2.$$
Subtracting the first identity from the second gives $n_2 - n_0 = 1$, i.e.\ $n_2 = n_0+1$. By Proposition \ref{prop:012:mass},
the total mass process of a type-$i$ evolution is a \distribfont{BESQ}\checkarg[1-i]. Then, by the additivity of squared Bessel processes \cite[Proposition 1]{PitmWink18}, the sum of these total masses, $(\|\mathcal{T}^y\|,y\in [0,D_1])$, evolves as a squared Bessel process with parameter $n_0\times 1 + 0\times n_1 - 1\times n_2 = -1$, stopped at a stopping time in a filtration to which the squared Bessel process is adapted. Moreover, the same argument and the strong Markov property show that the total mass continues to evolve as a \distribfont{BESQ}\checkarg[-1] between the first and second degeneration times, and so on. Thus, the process evolves as a \distribfont{BESQ}\checkarg[-1] until its absorption at $0$. The same argument proves the above assertion for the resampling $k$-tree evolution. \end{proof}
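The counting argument in this proof can be checked mechanically. The following sketch (with hypothetical helper names, not code from the paper) enumerates all compound-type counts $(n_0,n_1,n_2)$ compatible with the two displayed constraints and confirms that $n_2 = n_0+1$ and that the resulting squared Bessel parameter is always $-1$.

```python
# For a k-tree, the internal edges carry n_i type-i compounds (i = 0, 1, 2)
# subject to: n0 + n1 + n2 = k - 1   (one compound per internal edge)
#             n1 + 2*n2   = k        (k top masses in total).
# Each type-i total mass is a BESQ(1 - i); by additivity of squared Bessel
# processes, the total-mass parameter is n0*1 + n1*0 + n2*(-1) = n0 - n2.

def besq_parameters(k):
    """Yield (n0, n1, n2, parameter) over all admissible compound counts."""
    for n0 in range(k):
        for n1 in range(k):
            n2 = (k - 1) - n0 - n1
            if n2 >= 0 and n1 + 2 * n2 == k:
                yield n0, n1, n2, n0 - n2

for k in range(2, 8):
    for n0, n1, n2, param in besq_parameters(k):
        assert n2 == n0 + 1   # as derived in the proof
        assert param == -1    # total mass is always a BESQ(-1)
```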
\subsection{The Brownian reduced $k$-tree}\label{sec:B_ktree}
For a tree shape $\mathbf{t}\in \bT^{\textnormal{shape}}_{[k]}$, let $\phi_{\mathbf{t}}\colon \mathbf{t}\rightarrow\{k+1,\ldots,2k-1\}$ be a bijection (e.g.\ one that preserves lexicographic order).
\begin{proposition}[Section 3.3 of \cite{PW13}]\label{prop:B_ktree}
Fix $k\ge1$.
\begin{itemize}[topsep=3pt]
\item Let $\tau$ be a uniform random element of $\bT^{\textnormal{shape}}_{[k]}$.
\item Independently, let $(M_i,1\le i\le 2k-1)\sim \distribfont{Dirichlet}\big(\frac12,\ldots,\frac12\big)$.
\item Independently, let $(\beta_i,k+1\le i\le 2k-1)$ be a sequence of i.i.d.\ \distribfont{PDIP}\checkarg[\frac12,\frac12].
\end{itemize}
Then $(\tau,(M_i,i\in [k]),(M_{\phi_{\tau}(E)}\beta_{\phi_{\tau}(E)},E\in\tau))$ is a Brownian reduced $k$-tree, as described in the introduction. In particular, its distribution is invariant under permutation of labels. \end{proposition}
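As an illustration only (hypothetical helper name, not code from the paper), the mass components of this construction can be sampled by normalizing $2k-1$ independent $\distribfont{Gamma}\checkarg[\frac12,\gamma]$ variables; their unnormalized sum is $\distribfont{Gamma}\checkarg[k-\frac12,\gamma]$, the initial total mass law appearing in the special cases of Section \ref{sec:pseudo}.

```python
import random

def brownian_reduced_ktree_masses(k, gamma_rate=1.0, rng=random):
    """Sample the 2k-1 mass components of a Brownian reduced k-tree:
    k leaf masses and k-1 internal edge-partition masses.

    The unnormalized components are i.i.d. Gamma(1/2, rate gamma_rate);
    their sum is Gamma(k - 1/2, gamma_rate), and dividing by the sum
    yields a Dirichlet(1/2, ..., 1/2) mass split.
    """
    # random.gammavariate takes (shape, scale); scale = 1/rate.
    raw = [rng.gammavariate(0.5, 1.0 / gamma_rate) for _ in range(2 * k - 1)]
    total = sum(raw)
    return total, [m / total for m in raw]

random.seed(42)
total, masses = brownian_reduced_ktree_masses(4)
print(len(masses), round(sum(masses), 12))
```

This covers only the mass split; the uniform tree shape $\tau$ and the \distribfont{PDIP}\checkarg[\frac12,\frac12] interval partitions are omitted from the sketch.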
Denote by $Q_{z,A}(dU)$ the distribution on $\bT_A$ of the $A$-tree obtained from the distribution $Q_{z,[k]}(dU)$ of a Brownian reduced $k$-tree scaled to have total mass $z$, with leaves then relabeled by the increasing bijection $[k]\rightarrow A$, for $k=\#A$. The resampling kernel $\Lambda_{j,A}$ of Definition \ref{def:resamp_1} satisfies \begin{equation}\label{eq:B_ktree_resamp}
\int_{T\in\bT_{k}}Q_{z,[k]}(dT)f(T) = \int_{(T_i,i\in [2,k])}\Lambda_{2,[1]}(z,dT_2)\cdots\Lambda_{k,[k-1]}(T_{k-1},dT_k)f(T_k), \end{equation} where $z\in\bT_{1}$ denotes the $1$-tree with weight $z$ on its sole component, leaf 1. This formula indicates that the Markov chain that begins with $z$ and, at each step, adds the next label via the resampling kernel has as its path a consistent system of Brownian reduced $k$-trees, $k\ge1$, each scaled to have total mass $z$. Like the above proposition, this formula follows from the development in \cite[Section 3.3]{PW13}.
\subsection{Pseudo-stationarity}\label{sec:pseudo}
In this section, we will prove Proposition \ref{prop:pseudo:resamp}. Recall that type-0 evolutions do not degenerate (and are reflected when reaching zero total mass), while type-1 evolutions degenerate when they reach the (absorbing) state of zero total mass, and type-2 evolutions degenerate when they reach a single-top-mass state on an empty interval partition. In particular, conditioning a total mass evolution on no degeneration up to time $y$ has no effect for type-0 evolutions, as we are conditioning on an event of probability 1; for type-1 evolutions it amounts to conditioning the total mass to be positive; and for type-2 evolutions the conditioning event depends on the underlying evolution, not on its total mass alone.
\begin{proposition}\label{prop:pseudo:pre_D}
Let $(\mathcal{T}^y,y\ge0)$ be a killed/non-resampling/resampling $k$-tree evolution starting from a Brownian reduced $k$-tree scaled by an independent random initial mass. Then for $y\ge0$, given $\{D_1>y\}$, the tree $\mathcal{T}^y$ is again conditionally a Brownian reduced $k$-tree scaled by an independent random mass. In the special case that $\|\mathcal{T}^0\|\sim\distribfont{Gamma}\checkarg[k-\frac12,\gamma]$, given $\{D_1>y\}$, $\|\mathcal{T}^y\|$ has conditional law $\distribfont{Gamma}\checkarg[k-\frac12,\gamma/(2\gamma y+1)]$. \end{proposition}
\begin{proof}
First, suppose that the independent initial mass is $M\sim \distribfont{Gamma}\checkarg[k-\frac12,\gamma]$. Note that
\begin{equation}
\mathbb{P}(D_1>y) = (2y\gamma+1)^{-k},
\end{equation}
since each type-1 compound contributes $(2y\gamma+1)^{-1}$ by \cite[Equation (6.3)]{Paper1} and each type-2 compound contributes $(2y\gamma+1)^{-2}$, by Proposition \ref{prop:012:pseudo}, all independently, with $k$ top masses altogether. Conditioning on non-degeneration means conditioning each independent type-$i$ evolution, $i=1,2$, not to degenerate; thus, this conditioning does not break the independence of these evolutions. By Proposition \ref{prop:012:pseudo}, the conditional distribution of each edge partition and top mass at time $y$ is the same as the initial distribution, but with each mass and partition scaled up by a factor of $2\gamma y+1$, as claimed.
The result for deterministic initial total mass follows by Laplace inversion, and for general random mass by integration. We leave the details to the reader and refer to \cite[Proposition 6.4 and the proof of Theorem 6.1]{Paper1} or \cite[Propositions 32, 34]{Paper3} for similar arguments. \end{proof}
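The scaling step in this proof, that multiplying a $\distribfont{Gamma}\checkarg[a,\gamma]$ mass by $2\gamma y+1$ yields a $\distribfont{Gamma}\checkarg[a,\gamma/(2\gamma y+1)]$ mass, can be sanity-checked through the first two moments. This is a sketch with hypothetical helper names, not code from the paper.

```python
# If M ~ Gamma(shape a, rate g) and c > 0, then c*M ~ Gamma(shape a, rate g/c).
# We check this via the mean (shape/rate) and variance (shape/rate^2).

def gamma_moments(shape, rate):
    """Mean and variance of a Gamma(shape, rate) distribution."""
    return shape / rate, shape / rate ** 2

def scaled_moments(shape, rate, c):
    """Mean and variance of c*M for M ~ Gamma(shape, rate)."""
    mean, var = gamma_moments(shape, rate)
    return c * mean, c ** 2 * var

k, g, y = 4, 1.5, 2.0
c = 2 * g * y + 1          # the scaling factor 2*gamma*y + 1 from the proof
a = k - 0.5                # shape parameter of the initial total mass law
m1, v1 = scaled_moments(a, g, c)
m2, v2 = gamma_moments(a, g / c)
assert abs(m1 - m2) < 1e-9 and abs(v1 - v2) < 1e-9
```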
\begin{lemma}\label{lem:scaling}
If $(\mathcal{T}^y,y\ge0)$ is a killed/non-resampling/resampling $k$-tree evolution, then for any $c>0$, so is $(c\mathcal{T}^{y/c},y\ge0)$. \end{lemma}
\begin{proof}
This follows from the corresponding scaling properties for type-0/1/2 evolutions, noted in \cite[Lemma 6.3]{Paper1} and \cite[Lemma 35]{Paper3}. \end{proof}
\begin{proposition}\label{prop:pseudo:degen}
Let $(\mathcal{T}^y,y\ge 0)$ be a killed/non-resampling/resampling $k$-tree evolution starting from a Brownian reduced $k$-tree, scaled by any independent initial mass $M$, and let $y\ge 0$. Then the following hold.
\begin{enumerate}[label=(\roman*), ref=(\roman*)]
\item \label{item:pseudoD:J}
The label $J = J(\mathcal{T}^{D-})$ dropped at the first degeneration time $D = D_1$ has law $\mathbb{P}(J=2)=2/(k(2k-3))$ and $\mathbb{P}(J=j)=(4j-5)/(k(2k-3))$, $j\in\{3,\ldots,k\}$.
\item \label{item:pseudoD:swapred}
Conditionally given $J\!=\!j$, the normalized tree $\varrho\big(\mathcal{T}^{D-}\big) \big/ \big\|\mathcal{T}^{D-}\big\|$, which is simply $\mathcal{T}^D / \big\|\mathcal{T}^D\big\|$ in the non-resampling case, is a Brownian reduced $([k]\!\setminus\!\{j\})$-tree.
\item \label{item:pseudoD:indep}
The pair $\big(J\big(\mathcal{T}^{D-}\big),\varrho\big(\mathcal{T}^{D-}\big) \big/ \big\|\mathcal{T}^{D-}\big\|\big)$ is independent of $\big(M,D,\big\|\mathcal{T}^{D-}\big\|\big)$.
\item \label{item:pseudoD:gamma_mass}
In the special case that $M\sim\distribfont{Gamma}\checkarg[k-\frac12,\gamma]$, conditionally given $D=y$,
$$\|\mathcal{T}^y\|\sim\distribfont{Gamma}\checkarg[k-\textstyle\frac32,\gamma/(2y\gamma+1)].$$
\item \label{item:pseudoD:resamp}
In the resampling case, properties \ref{item:pseudoD:J}, \ref{item:pseudoD:swapred}, and \ref{item:pseudoD:indep} also hold at each subsequent degeneration time $D=D_n$, $n\ge1$. Moreover, $\mathcal{T}^{D_n}/\big\|\mathcal{T}^{D_n}\big\|$ is a Brownian reduced $k$-tree.
\end{enumerate} \end{proposition}
\begin{proof}
First, we derive \ref{item:pseudoD:resamp} as a consequence of the other assertions. Equation \eqref{eq:B_ktree_resamp}, along with exchangeability of labels in the Brownian reduced $k$-tree, implies that taking a Brownian reduced $([k]\!\setminus\!\{j\})$-tree and inserting label $j$ via the resampling kernel results in a Brownian reduced $k$-tree. Thus, \ref{item:pseudoD:swapred} gives us $\mathcal{T}^{D_1}/\big\|\mathcal{T}^{D_1}\big\| \stackrel{d}{=} \mathcal{T}^0/\big\|\mathcal{T}^0\big\|$. Assertion \ref{item:pseudoD:resamp} for subsequent degeneration times then follows by induction and the strong Markov property of resampling $k$-tree evolutions at degeneration times.
It remains to prove \ref{item:pseudoD:J}, \ref{item:pseudoD:swapred}, \ref{item:pseudoD:indep}, and \ref{item:pseudoD:gamma_mass}. We begin with the special case $M\!\sim\!\distribfont{Gamma}\checkarg[k\!-\!\frac12,\gamma]$. In this case, by Proposition \ref{prop:B_ktree}, each type-$i$ compound has initial mass $\distribfont{Gamma}\checkarg[(i+1)/2,\gamma]$, with all initial masses being independent. For $y>0$, each type-1 compound avoids degeneration prior to time $y$ with probability $(2y\gamma+1)^{-1}$ by \cite[Equation (6.3)]{Paper1}. For type-2 compounds, the corresponding probability is $(2y\gamma+1)^{-2}$ by \cite[Proposition 38]{Paper3}. Moreover, when a type-2 compound degenerates, each of the two labels is equally likely to be the one to cause degeneration \cite[Proposition 39]{Paper3}. Thus, the first label $I$ to cause degeneration is uniformly random in $[k]$ and is jointly independent of the initial tree shape $\tau_k$ and the time of degeneration $D$. But recall that this does not necessarily mean that label $I$ is dropped at degeneration; we must account for the swapping part of the swap-and-reduce map $\varrho$.
This places us in the setting of our study of a modified Aldous chain in \cite{Paper2}, where we begin with a uniform random tree shape and select a uniform random leaf for removal, with the same label-swapping dynamics as in the definition of $\varrho$ in Section \ref{sec:non_resamp_def}. In particular, \cite[Corollary 5]{Paper2} gives $p_1:=\mathbb{P}(J=1)=0$, $p_2:=\mathbb{P}(J=2)=2/(k(2k-3))$ and $p_j:=\mathbb{P}(J=j)=(4j-5)/(k(2k-3))$ for $j\in\{3,\ldots,k\}$; and \cite[Lemma 4]{Paper2} says that given $\{J=j\}$, the tree shape $\tau_{k-1}$ after swapping and reduction is conditionally uniformly distributed on $\bT^{\textnormal{shape}}_{[k]\setminus\{j\}}$.
Since $D$ is independent of $(\tau_k,I)$ and since $\tau_{k-1} = \varrho(\tau_k,J(\tau_k,I))$, if we additionally condition on $D$ then the above conclusion still holds: given $\{D=y,J=j\}$, the resulting tree shape $\tau_{k-1}$ is still conditionally uniform on $\bT^{\textnormal{shape}}_{[k]\setminus\{j\}}$. Moreover, by the independence of the evolutions of the type-0/1/2 compounds in the tree (prior to conditioning), each type-1 compound that does not degenerate is conditionally distributed as a type-1 evolution in pseudo-stationarity, conditioned not to die up to time $y$, and correspondingly for type-2 and type-0 compounds. As noted in Proposition \ref{prop:012:pseudo}, the law at time $y$ is the same as the initial distribution, but with total mass scaled up by a factor of $2y\gamma+1$, meaning that each top mass $m_a^y$ in these compounds is conditionally independent with law $\distribfont{Gamma}\checkarg[\frac12,\gamma/(2y\gamma+1)]$, and each internal edge partition $\alpha_E^y$ is a conditionally independent \distribfont{PDIP}\checkarg[\frac12,\frac12] scaled by a $\distribfont{Gamma}\checkarg[\frac12,\gamma/(2y\gamma+1)]$ mass. Similarly, if the degeneration occurs in a type-2 compound in $\tau_k$, then the remaining top mass in that compound also has conditional law $\distribfont{Gamma}\checkarg[\frac12,\gamma/(2y\gamma+1)]$ \cite[Proposition 39]{Paper3}. Thus, $\varrho\big(\mathcal{T}^{D-}\big)/\big\|\mathcal{T}^{D-}\big\|$ is conditionally a Brownian reduced $([k]\!\setminus\!\{j\})$-tree, as claimed, and is conditionally independent of $\|\mathcal{T}^D\| \sim \distribfont{Gamma}\checkarg[k-\frac32,\gamma/(2y\gamma+1)]$.
This completes the proof of \ref{item:pseudoD:gamma_mass} as well as of \ref{item:pseudoD:J} and \ref{item:pseudoD:swapred} in the special case when the initial total mass is $M\sim\distribfont{Gamma}\checkarg[k-\frac12,\gamma]$. Moreover, since the above conditional law of the normalized tree does not depend on the particular value of $D$, we find that the pair in \ref{item:pseudoD:indep} is independent of $\big(D,\big\|\mathcal{T}^D\big\|\big)$ in this case; it remains to show independence from $M$.
Now consider a $k$-tree evolution $(\widebar{\mathcal{T}}^{y},y\ge 0)$ starting from a unit-mass Brownian reduced $k$-tree, with first degeneration time $\widebar{D}$. Let $M$ be independent of this evolution with law $\distribfont{Gamma}\checkarg[k-\frac12,\gamma]$. By the scaling property of Lemma \ref{lem:scaling}, $\mathcal{T}^y = M\widebar{\mathcal{T}}^{y/M}$, $y\ge0$, is a $k$-tree evolution with initial mass $M$, as studied above. In particular,
\begin{equation*}
\left(J\big(\mathcal{T}^{D-}\big),\frac{\varrho\big(\mathcal{T}^{D-}\big)}{\big\|\mathcal{T}^{D-}\big\|}\right) = \left(J\big(\widebar{\mathcal{T}}^{\widebar{D}-}\big),\frac{\varrho\big(\widebar{\mathcal{T}}^{\widebar{D}-}\big)}{\big\|\widebar{\mathcal{T}}^{\widebar{D}-}\big\|}\right) \quad\text{and}\quad \big(D,\big\|\mathcal{T}^{D-}\big\|\big) = \big(M\widebar{D},M\big\|\widebar{\mathcal{T}}^{\widebar{D}-}\big\|\big).
\end{equation*}
The special case above shows that
\begin{equation*}
\begin{split}
&\int
\mathbb{E} \left[ f\left( J\big(\widebar{\mathcal{T}}^{\widebar{D}-}\big) , \frac{\varrho\big(\widebar{\mathcal{T}}^{\widebar{D}-}\big)}{\big\|\widebar{\mathcal{T}}^{\widebar{D}-}\big\|}\right)
g\left(x\left\|\widebar{\mathcal{T}}^{\widebar{D}-}\right\|,x\widebar{D}\right)\right] \frac{\gamma^{k-\frac12}}{\Gamma(k-\frac12)}x^{k-\frac32}e^{-\gamma x} dx\\
&\ \ = \mathbb{E} [ f(J^*,\widebar{\mathcal{T}}^*) ] \int \mathbb{E}\left[g\left(x\left\|\widebar{\mathcal{T}}^{\widebar{D}-}\right\|,x\widebar{D}\right)\right] \frac{\gamma^{k-\frac12}}{\Gamma(k-\frac12)}x^{k-\frac32}e^{-\gamma x} dx,
\end{split}
\end{equation*}
where $J^*$ and $\widebar{\mathcal{T}}^*$ have the laws described in \ref{item:pseudoD:J} and \ref{item:pseudoD:swapred} for the dropped label and the normalized tree. If we cancel out the constant factors of $\gamma^{k-\frac12}/\Gamma(k-\frac12)$ on each side, appeal to Laplace inversion, and then cancel out factors of $x^{k-\frac32}$, then we find
\begin{equation*}
\mathbb{E} \left[ f\left( J\big(\widebar{\mathcal{T}}^{\widebar{D}-}\big) , \frac{\varrho\big(\widebar{\mathcal{T}}^{\widebar{D}-}\big)}{\big\|\widebar{\mathcal{T}}^{\widebar{D}-}\big\|}\right)
g\left(x\left\|\widebar{\mathcal{T}}^{\widebar{D}-}\right\|,x\widebar{D}\right)\right] = \mathbb{E} [ f(J^*,\widebar{\mathcal{T}}^*) ] \mathbb{E}\left[g\left(x\left\|\widebar{\mathcal{T}}^{\widebar{D}-}\right\|,x\widebar{D}\right)\right]
\end{equation*}
for Lebesgue-a.e.\ $x>0$. By Lemma \ref{lem:scaling}, it follows that this holds for every $x>0$, thus proving \ref{item:pseudoD:J} and \ref{item:pseudoD:swapred} for fixed initial mass, or for any independent random initial mass by integration. This formula also demonstrates the independence of the dropped label and normalized tree from the degeneration time and mass at degeneration. Since the laws that we find for the dropped label and the normalized tree do not depend on the initial mass $x$, this also proves \ref{item:pseudoD:indep}. \end{proof}
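As a quick consistency check on the law of the dropped label in \ref{item:pseudoD:J} (a sketch with a hypothetical helper name, not code from the paper), the stated probabilities form a probability distribution for every $k\ge2$:

```python
from fractions import Fraction

def dropped_label_law(k):
    """Exact law of the dropped label J at the first degeneration:
    P(J=1) = 0, P(J=2) = 2/(k(2k-3)), P(J=j) = (4j-5)/(k(2k-3)), 3 <= j <= k."""
    denom = k * (2 * k - 3)
    law = {1: Fraction(0), 2: Fraction(2, denom)}
    for j in range(3, k + 1):
        law[j] = Fraction(4 * j - 5, denom)
    return law

for k in range(2, 12):
    law = dropped_label_law(k)
    assert sum(law.values()) == 1          # probabilities sum to exactly 1
    assert all(p >= 0 for p in law.values())
```

Exact rational arithmetic avoids any floating-point slack: $2 + \sum_{j=3}^k (4j-5) = k(2k-3)$.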
This means that, started from any independently scaled Brownian reduced $k$-tree, the non-resampling $k$-tree evolution runs through independent multiples of trees $\widebar{\mathcal{T}}_k^{*},\widebar{\mathcal{T}}_{k-1}^{*},\ldots,\widebar{\mathcal{T}}_2^{*},\widebar{\mathcal{T}}_1^{*},0$, where for each $m<k$ the $m$-tree has as its distribution the appropriate mixture of Brownian reduced trees with label sets of size $m$. We now combine the previous results to establish the laws of independently scaled Brownian reduced $k$-trees as pseudo-stationary laws for resampling $k$-tree evolutions.
\begin{proof}[Proof of Proposition \ref{prop:pseudo:resamp}]
By Proposition \ref{prop:pseudo:degen} and \eqref{eq:B_ktree_resamp}, conditional on $D_1 = z>0$, the tree $\mathcal{T}^{z}$ is distributed as a Brownian reduced $k$-tree scaled by an independent random mass. By the strong Markov property at degeneration times and induction, the same holds conditional on $D_n=z>0$, for any $n\ge1$.
While we have not yet proved Proposition \ref{prop:non_accumulation}, that $D_\infty$ is the hitting time of zero for the total mass process, it is easier to prove in this special setting. Indeed, the rescaled inter-degeneration times $(D_{n+1}-D_n)/\|\mathcal{T}^{D_n}\|$ are independent and identically distributed. As noted at the end of Section \ref{sec:Markov_mass}, the total mass $\|\mathcal{T}^y\|$ evolves as a \distribfont{BESQ}\checkarg[-1] up until time $D_\infty$, so $D_\infty$ must be a.s.\ finite, as it cannot exceed the a.s.\ finite absorption time of the \distribfont{BESQ}\checkarg[-1]. From this we conclude that the masses $\|\mathcal{T}^{D_n}\|$ must tend to zero.
Let $(\mathcal{T}^x_n,x\ge0)$ denote a resampling $k$-tree evolution with $\mathcal{T}^0_n=\mathcal{T}^{D_n}$, conditionally independent of $(\mathcal{T}^y,y\ge0)$ given $\mathcal{T}^{D_n}$. Now, we condition on $D_n = z\le y < D_{n+1}$. Then by the strong Markov property at time $D_n$, the tree $\mathcal{T}^y$ is conditionally distributed according to the conditional law of $\mathcal{T}^{y-z}_n$, given that $(\mathcal{T}^x_n)$ does not degenerate prior to this time. By Proposition \ref{prop:pseudo:pre_D}, this too is a Brownian reduced $k$-tree scaled by an independent random mass. Integrating out this conditioning preserves the property of $\mathcal{T}^y$ being a Brownian reduced $k$-tree scaled by an independent mass. \end{proof}
\section{Proof of consistency results for resampling $k$-tree evolutions}\label{sec:consistency}
In this section, we prove the following partial result towards Theorem \ref{thm:consistency}\ref{item:cnst:resamp}.
\begin{proposition}\label{prop:consistency_0}
Fix $T\in\bT_{k}$. If $(\mathcal{T}^y_{k+1},y\ge0)$ is a resampling $(k\!+\!1)$-tree evolution with $\mathcal{T}^0_{k+1}\sim \Lambda_{k+1,[k]}(T,\cdot\,)$ then $\big(\pi_k\big(\mathcal{T}^y_{k+1}\big),y\ge0\big)$ is a resampling $k$-tree evolution killed at the limit $D_\infty$ of degeneration times in $\big(\mathcal{T}_{k+1}^y\big)$. \end{proposition}
Once we have proved this proposition, we use it in Section \ref{sec:non_acc} to prove Proposition \ref{prop:non_accumulation}, that $D_\infty$ is a.s.\ the time at which the total mass $\|\mathcal{T}^y\|$ first approaches zero. This, in turn, completes the proof of Theorem \ref{thm:consistency}\ref{item:cnst:resamp} from the above proposition.
In Section \ref{sec:const:intertwin} we introduce a process that is intermediate between a resampling $(k\!+\!1)$-tree evolution and a resampling $k$-tree evolution. We appeal to a novel lemma presented in Appendix \ref{sec:intertwining_lem} to prove that the intermediate process is intertwined below the $(k+1)$-tree evolution. In Section \ref{sec:const:Dynkin1}, we show that the projection from the intermediate process to a $k$-tree-valued process satisfies Dynkin's criterion, thus completing a two-step proof of Proposition \ref{prop:consistency_0}, similar to the approach to an analogous discrete result in \cite[proof of Theorem 2]{Paper2}. This pair of relations is illustrated in the commutative diagram in Figure \ref{fig:inter_Dynkin}.
\begin{figure}
\caption{Combining intertwining with Dynkin's criterion to show, via an intermediate process, that the $k$-tree projection of a resampling $(k+1)$-tree evolution is in turn a Markov process. Intertwining and Dynkin's criterion can be thought of as the upper and lower boxes in this diagram commuting, respectively. The kernels denoted above are introduced in Section \ref{sec:const:intertwin}.}
\label{fig:inter_Dynkin}
\end{figure}
\subsection{Intermediate process intertwined below a resampling $(k+1)$-tree}\label{sec:const:intertwin}
We define marked $k$-trees as $k$-trees with one block of the tree ``marked.'' In particular, we are interested in projecting from $(k\!+\!1)$-trees and marking the block of the resulting $k$-tree into which label $k\!+\!1$ must be inserted to recover the $(k\!+\!1)$-tree from the $k$-tree. See Figure \ref{fig:mark_k_tree}.
\begin{figure}
\caption{Two marked $k$-trees based on the same $k$-tree, with marked blocks indicated with a ``$\bigstar$.'' In example (A), the marking is on an internal block. Then the kernel $\Ast\Lambda_k$ inserts label $k\!+\!1$ into the block. In (B), the marking is on a leaf block. Then $\Ast\Lambda_k$ splits the marked block into a Brownian reduced 2-tree.}
\label{fig:mark_k_tree}
\end{figure}
\begin{definition}\label{def:mark}
We define the set of \emph{marked $k$-trees}
\begin{equation}\label{eq:marked}
\begin{split}
\Ast\bTInt_k &:= \left\{\!(T,\ell)\,\middle|\,
T\!\in\! \widebar{\bT}_{k}\!\setminus\!\{0\},\ \ell \in \textsc{block}(T)\cup\left\{(F,a,a)\colon F\!\in\!\mathbf{t},\,a\!\in\!\big[0,\|\beta_F\|\big]\setminus\bigcup\nolimits_{U\in\beta_F}\!U\right\}\!
\right\}\cup\{0\}
\end{split}
\end{equation}
where $\mathbf{t}$, $\beta_F$, and $x_i$ denote the tree shape, interval partitions, and leaf masses of $T$, respectively.
We view marked $k$-trees as intermediate objects between $(k\!+\!1)$-trees and $k$-trees, via a pair of projection maps. First, $\phi_1\colon \Ast\bTInt_k\rightarrow \widebar{\bT}_{k}$ is the projection $\phi_1(T,\ell) = T$. We define $\phi_2\colon \widebar{\bT}_{k+1}\rightarrow\Ast\bTInt_k$ as follows.
\begin{enumerate}[label=(\roman*),ref=(\roman*)]
\item If in $T\in\widebar{\bT}_{k+1}$ we have $\longparent{\{k\!+\!1\}} = \{j,k+1\}$ for some $j\in [k]$, then $\phi_2(T) = (\pi_{k}(T),j)$.
\item Otherwise, if $E = \longparent{\{k\!+\!1\}}$ is \emph{not} a type-2 edge, then recall part \ref{item:proj:merge} of Definition \ref{def:proj}, in which the interval partitions of $E$ and $F := E\setminus\{k\!+\!1\}$ are combined with the leaf mass $x_{k+1}$ to form the partition $\beta'_F = \beta_F\star (0,x_{k+1})\star\beta_{E}$ of $F$ in the projected tree. In this case we define $\phi_2(T) = \big(\pi_{k}(T),(F,\|\beta_F\|,\|\beta_F\|+x_{k+1})\big)$, where the marked block is the block in $\pi_{k}(T)$ corresponding to the leaf mass $x_{k+1}$ in $T$.
\end{enumerate}
We also define a stochastic kernel from $\Ast\bTInt_k$ to $\widebar{\bT}_{k+1}$. Recall the label insertion operator, $\oplus$, of Section \ref{sec:resamp_def}. Let $\Ast\Lambda_k$ denote the kernel from $\Ast\bTInt_k$ to $\widebar{\bT}_{k+1}$ that associates with each $(T,\ell)\in \Ast\bTInt_k$ the law of $T\oplus (\ell,k\!+\!1,U)$, where $U\sim Q$ is a Brownian reduced $2$-tree of unit mass.
We adopt the convention that $\phi_2(0) = \phi_1(0) = 0$ and $\Ast\Lambda_k(0,\cdot\,) = \delta_0(\,\cdot\,)$. \end{definition}
The term in \eqref{eq:marked} in which we allow a marking $\ell$ of the form $(F,a,a)$ allows the description of a $(k\!+\!1)$-tree in which the leaf mass $x_{k+1}$ equals $0$ and sits in a type-1 compound. The case $\ell = (F,\|\beta_F\|,\|\beta_F\|)$ corresponds to a degenerate zero leaf mass above a zero interval partition; likewise, if $\ell = \big(\parent{\{i\}},0,0\big)$ and $x_i=0$, then label $i$ is degenerate in the $(k\!+\!1)$-tree. We write $\Ast\bT_{k}:=\phi_2(\bT_{k+1})$ and $\Ast\tdTInt_k:=\phi_2(\widetilde{\bT}_{k+1})$ for the spaces of marked trees with respectively no degenerate labels or at most one degenerate label, which could be label $k+1$, as discussed above.
We may think of the resampling kernel $\Lambda_{k+1,[k]}$ of Section \ref{sec:resamp_def} as representing a two-step transition, in which a block is first selected at random and then, if a leaf block was chosen, it is split into a scaled Brownian reduced 2-tree. Then $\Ast\Lambda_k$ represents the second of these steps: \begin{equation}\label{eq:resamp_vs_mark}
\Ast\Lambda_k\left(\left(T,\ell\right),\cdot\,\right) = \Lambda_{k+1,[k]}\left(T,\cdot\ \middle|\ k\!+\!1\text{ is inserted into }\ell\right)\quad \text{for }(T,\ell)\in\Ast\bT_{k}\text{ with }\|\ell\|>0. \end{equation}
We metrize $\Ast\bTInt_k$ by \begin{equation}\label{eq:markk:dist_def}
d_{\Ast\mathbb{T}}\left(\Ast T_{k,1},\Ast T_{k,2}\right) := \inf\left\{ d_{\mathbb{T}}(T_{k+1,1},T_{k+1,2})\colon \phi_2(T_{k+1,1}) = \Ast T_{k,1},\,\phi_2(T_{k+1,2}) = \Ast T_{k,2}\right\}. \end{equation}
We note that, if $T_{k,1}$ and $T_{k,2}$ have the same tree shape and are both marked at the same leaf block $i\in [k]$, then \begin{equation}\label{eq:markk:dist_1}
d_{\Ast\mathbb{T}}\left((T_{k,1},i),(T_{k,2},i)\right) = d_{\mathbb{T}}(T_{k,1},T_{k,2}). \end{equation} Indeed, $d_{\mathbb{T}}(T_{k,1},T_{k,2})$ is a general lower bound for distances between marked $k$-trees. In this special case, the bound can be seen to be sharp by splitting block $i$ in each of the marked $k$-trees into a very small block $k\!+\!1$, a small edge partition with little diversity, and a block $i$ carrying the remaining mass, in order to form $(k\!+\!1)$-trees that project down as desired. In the limit as block $k\!+\!1$ and the edge partition on its parent approach mass and diversity zero, the $d_{\mathbb{T}}$-distance between the resulting $(k\!+\!1)$-trees converges to $d_{\mathbb{T}}(T_{k,1},T_{k,2})$.
On the other hand, if two marked $k$-trees have equal tree shape but the marked blocks lie in different leaf components or internal edge partitions, then \begin{equation}\label{eq:markk:dist_2}
d_{\Ast\mathbb{T}}\left((T_{k,1},\ell_1),(T_{k,2},\ell_2)\right) = d_{\mathbb{T}}(T_{k,1},0) + d_{\mathbb{T}}(0,T_{k,2}). \end{equation} If $T_{k,1}$ and $T_{k,2}$ have different tree shapes, then both \eqref{eq:markk:dist_1} and \eqref{eq:markk:dist_2} hold, as the right-hand sides are then equal, by \eqref{eq:ktree:metric_2}.
This leaves only the case where the two marked $k$-trees have the same shape and the marked blocks each lie in corresponding internal edge partitions in the two trees. Then each marked $k$-tree is as in example (A) in Figure \ref{fig:mark_k_tree}: for $i=1,2$, there is a unique $(k\!+\!1)$-tree $T_{k+1,i}$ for which $\phi_2(T_{k+1,i}) = (T_{k,i},\ell_i)$. Then
\begin{equation}\label{eq:markk:dist_3}
d_{\Ast\mathbb{T}}\big((T_{k,1},(E,a_1,b_1)),\, (T_{k,2},(E,a_2,b_2))\big) = d_{\mathbb{T}}(T_{k+1,1},T_{k+1,2}).
\end{equation}
\begin{lemma}\label{lem:Lstar_cont}
The kernel $\Ast\Lambda_k$ is weakly continuous in its first coordinate. \end{lemma}
\begin{proof}
We separately check continuity at zero, at $k$-trees with a marked leaf, and at $k$-trees with the mark in a block of an interval partition. In each case, we consider a sequence $((T_n,\ell_n),n\ge1)$ of marked $k$-trees converging to a limit $\Ast T_\infty$ of that type.
Case 1: $\Ast T_\infty = 0$. Then the total mass $\|T_n\|$ and the diversities of all interval partition components of the $T_n$ must go to zero. Let $U = (m_1,m_2,\alpha)\sim Q$ denote a Brownian reduced 2-tree of unit mass. For $n\ge1$, let $\mathcal{T}_n := T_n\oplus (\ell_n,U)$. Then $\mathcal{T}_n$ has law $\Ast\Lambda_k((T_n,\ell_n),\cdot\,)$. We recall from \cite[Lemma 2.12]{Paper1} that scaling an interval partition by $c$ causes its diversity to scale by $\sqrt{c}$. Thus,
$$d_{\mathbb{T}}(\mathcal{T}_n,0) \le d_{\mathbb{T}}(T_n,0) + \sqrt{\|\ell_n\|}\mathscr{D}_{\alpha}(\infty) \le d_{\mathbb{T}}(T_n,0) + \sqrt{\|T_n\|}\mathscr{D}_{\alpha}(\infty),$$
which goes to zero as $n$ tends to infinity. We conclude that $\Ast\Lambda_k$ is weakly continuous at 0.
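As a check on the $\sqrt{c}$ scaling of diversity invoked here: up to the normalizing constant in the definition of diversity, which cancels from this computation, $\mathscr{D}_{\beta}(\infty) = \lim_{h\downarrow 0}\sqrt{h}\,\#\{(a,b)\in\beta\colon b-a>h\}$ for an interval partition $\beta$, so substituting $h = ch'$ yields
\begin{equation*}
\mathscr{D}_{c\beta}(\infty) = \lim_{h\downarrow 0}\sqrt{h}\,\#\{(a,b)\in\beta\colon b-a>h/c\} = \sqrt{c}\,\lim_{h'\downarrow 0}\sqrt{h'}\,\#\{(a,b)\in\beta\colon b-a>h'\} = \sqrt{c}\,\mathscr{D}_{\beta}(\infty).
\end{equation*}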
Case 2: $\Ast T_\infty = (T_\infty,i)$ for some $i\in [k]$. Then by \eqref{eq:markk:dist_2}, for all sufficiently large $n$, $\ell_n = i$ and $T_n$ has the same tree shape as $T_\infty$; call this tree shape $\mathbf{t}$. Let $U$ and $(\mathcal{T}_n,n\ge1)$ be as in Case 1. Let $x_{n,i}$ denote the mass of block $i$ in $T_n$. By the bounds on $d_{\IPspace}$-distance between rescaled interval partitions in \cite[Lemma 2.12]{Paper1},
$$d_{\mathbb{T}}(T_n,T_m) \le d_{\mathbb{T}}(\mathcal{T}_n,\mathcal{T}_m) \le d_{\mathbb{T}}(T_n,T_m) + \big|\sqrt{x_{n,i}} - \sqrt{x_{m,i}}\big|\mathscr{D}_{\alpha}(\infty).$$
Since $x_{n,i}\to x_{\infty,i}$ as $n\to\infty$, it follows from the above bounds that $(\mathcal{T}_n,n\ge1)$ is a.s.\ convergent, with limit $\mathcal{T}_\infty := T_\infty\oplus(i,U)$, which has law $\Ast\Lambda_k((T_\infty,i),\cdot\,)$. Thus, we conclude that $\Ast\Lambda_k((T_n,\ell_n),\cdot\,)$ converges weakly to $\Ast\Lambda_k(\Ast T_\infty,\cdot\,)$.
Case 3: $\Ast T_\infty = (T_\infty,\ell_\infty)$ with $\ell_\infty$ a block of an internal edge partition. Then $\ell_n = (E,a_n,b_n)$ for some $E\in \mathbf{t}$, for all sufficiently large $n$. For each such large $n$, there is a unique $(k\!+\!1)$-tree $T_{k+1,n}$ such that $\Ast\Lambda_k((T_n,\ell_n),\cdot\,) = \delta_{T_{k+1,n}}(\,\cdot\,)$, and likewise $\Ast\Lambda_k(\Ast T_\infty,\cdot\,) = \delta_{T_{k+1,\infty}}(\,\cdot\,)$. By \eqref{eq:markk:dist_3}, since the marked $k$-trees $(T_n,\ell_n)$ converge to $\Ast T_\infty$, the $(k\!+\!1)$-trees $T_{k+1,n}$ converge to $T_{k+1,\infty}$, so the corresponding point masses converge weakly.
This proves that $\Ast\Lambda_k$ is weakly continuous in its first coordinate everywhere on $\Ast\bTInt_k$. \end{proof}
When composing stochastic kernels, we adopt the standard convention that sequential transitions are ordered from left to right: $$\int PQ(x,dz)f(z) = \int P(x,dy)\int Q(y,dz)f(z).$$
\begin{definition}\label{def:intertwining}
Consider a Markov process $X$ with transition kernels $(P_t,t\ge0)$ on a state space $(S,\mathcal{S})$, a measurable map $\phi\colon S\rightarrow T$, and a stochastic kernel $\Lambda\colon T\times\mathcal{S}\rightarrow [0,1]$. Following \cite{RogersPitman}, the image process $Y(t) := \phi(X(t))$, $t\ge0$, is \emph{intertwined below} $X$ (with respect to $\Lambda$) if:
\begin{enumerate}
\item \label{item:intertwin:up} $\Lambda\Phi$ is the identity kernel on $(T,\mathcal{T})$, where $\Phi$ is the kernel $\Phi(x,\cdot\,) = \delta_{\phi(x)}(\,\cdot\,)$, and
\item \label{item:intertwining} $\Lambda P_t = Q_t\Lambda$, $t\ge0$, where $Q_t := \Lambda P_t\Phi$.
\end{enumerate} \end{definition}
If $Y$ is intertwined below $X$ and $X$ additionally satisfies \begin{enumerate}[start=3]
\item \label{item:intertwin:init} $X(0)$ has regular conditional distribution (r.c.d.) $\Lambda(Y(0),\cdot\,)$ given $Y(0)$, \end{enumerate} then $(Y(t),t\ge0)$ is a Markov process \cite[Theorem 2]{RogersPitman}. If conditions \ref{item:intertwin:up} and \ref{item:intertwin:init} are satisfied, then \ref{item:intertwining} is equivalent to \begin{enumerate}[start=2, label=(\roman*'), ref=(\roman*')]
\item \label{item:intertwining_v2} $X(t)$ is conditionally independent of $Y(0)$ given $Y(t)$, with r.c.d.\ $\Lambda(Y(t),\cdot\,)$.
\end{enumerate}
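For orientation, we sketch the standard argument of \cite{RogersPitman} in this notation. Under conditions \ref{item:intertwin:up}--\ref{item:intertwin:init}, one shows inductively that $X(t)$ has r.c.d.\ $\Lambda(Y(t),\cdot\,)$ given $(Y(u),u\le t)$, whence, for bounded measurable $f$ and $s,t\ge0$,
\begin{equation*}
\mathbb{E}\big[f(Y(t+s))\,\big|\,Y(u),u\le t\big] = \int \Lambda(Y(t),dx)\int P_s(x,dx')\,f(\phi(x')) = \Lambda P_s\Phi f(Y(t)) = Q_s f(Y(t)),
\end{equation*}
so $Y$ is Markov with transition kernels $(Q_s,s\ge0)$.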
Now, fix $(T,\ell)\in\Ast\bT_{k}$. Let $(\mathcal{T}_{k+1}^y,y\ge0)$ denote a resampling $(k\!+\!1)$-tree evolution with initial distribution $\Ast\Lambda_k((T,\ell),\cdot\,)$. Let $\Ast\mathcal{T}_{k}^y = (\mathcal{T}_{k}^y,\ell^y) = \phi_2(\mathcal{T}^y_{k+1})$, $y\ge0$. We denote the components of these evolutions by \begin{equation}\label{eq:const:component_notation} \begin{split}
\mathcal{T}^y_{k+1} &= \big(\tau^y,(m_i^y,i\in [k+1]), (\alpha_E^y,E\in\tau^y)\big),\\
\mathcal{T}^y_k &= \big(\widetilde\tau^y,(\widetilde m_i^y,i\in [k]), (\widetilde\alpha_E^y,E\in\widetilde\tau^y)\big). \end{split} \end{equation} Note that conditions \ref{item:intertwin:up} and \ref{item:intertwin:init} above are satisfied with $\phi = \phi_2$ and $\Lambda = \Ast\Lambda_k$. To conclude that $\Ast\mathcal{T}_k$ is Markovian, it remains to check condition \ref{item:intertwining} or \ref{item:intertwining_v2}.
Our approach to proving intertwining goes by way of Lemma \ref{lem:intertwin_jump}, which is a novel, general result for proving intertwining between processes that jump away from some set of ``boundary'' states, which arise only as left limits of the processes at a discrete sequence of times. The lemma states that it suffices to prove that the processes killed upon first approaching the ``boundary'' are intertwined, and that the Markov chains of states that the processes jump into at those times are intertwined as well.
Let $D_1,D_2,\ldots$ denote the sequence of degeneration times of $\big(\mathcal{T}^y_{k+1}\big)$. Recall the definition of the swap-and-reduce map, $\varrho$, in Section \ref{sec:non_resamp_def}. When a label $i$ in a $(k\!+\!1)$-tree degenerates, it swaps places with $\max\{i,a,b\}$, where $a$ is the least label descended from the sibling of $i$ and $b$ is the least label descended from its uncle. Since $k\!+\!1$ is the greatest label in the tree, there are three cases in which it will resample:
\begin{enumerate}[label=(D\arabic*), ref= (D\arabic*), topsep=4pt, itemsep=3pt]
\item\label{case:degen:type2} $k\!+\!1$ belongs to a type-2 compound, and either it (in the case $k\!+\!1=i$) or its sibling (in the case $k\!+\!1=a$) causes degeneration;
\item\label{case:degen:self} $k\!+\!1$ belongs to a type-1 compound and causes degeneration, so $k\!+\!1=i$; or
\item\label{case:degen:nephew} $k\!+\!1=b$, as $k\!+\!1$ belongs to a type-1 compound and its sibling in the tree shape is an internal edge that belongs to a type-1 or type-2 compound that degenerates. \end{enumerate} In case \ref{case:degen:type2}, the resampling kernel $\Lambda_{k+1,[k]}(\varrho(\mathcal{T}_{k+1}^{D_n-}),\cdot\,)$ may select the block of the former sibling of $k\!+\!1$ as the point of reinsertion. If it does, the marked $k$-tree is continuous at this degeneration time, $\Ast\mathcal{T}^{D_n}_k = \Ast\mathcal{T}^{D_n-}_k$, and the degeneration time is not a stopping time in the filtration generated by the marked $k$-tree process $\big(\Ast\mathcal{T}_k^y\big)$; we then say that $D_n$ is an \emph{invisible} degeneration time. Otherwise, the degeneration time is said to be \emph{visible}. In particular, in cases \ref{case:degen:self} and \ref{case:degen:nephew} the degeneration is always visible.
Consider the subsequence $D'_1,D'_2,\ldots$ of visible degeneration times. We define $D_0 = D'_0 := 0$. Let $\Phi_2$ denote the stochastic kernel from $\bT_{k+1}$ to $\Ast\bT_{k}$ given by $\Phi_2(T,\cdot\,) = \delta_{\phi_2(T)}(\,\cdot\,)$. Let $P^{\circ,k+1}_y$, $y\ge0$, denote the transition kernels for a resampling $(k\!+\!1)$-tree evolution that jumps and is absorbed at the zero tree at time $D'_1$.
\begin{proposition}\label{prop:inter:killed}
$\Ast\mathcal{T}^{\circ,y}_{k} := \Ast\mathcal{T}^y_{k}\mathbf{1}\{y<D'_1\}$, $y\ge 0$, is a Markov process intertwined below $\mathcal{T}^{\circ,y}_{k+1} := \mathcal{T}^y_{k+1}\mathbf{1}\{y<D'_1\}$, $y \ge0$, with kernel $\Ast\Lambda_k$.
\end{proposition}
\begin{proof}
As noted above, conditions \ref{item:intertwin:up} and \ref{item:intertwin:init} in Definition \ref{def:intertwining} are satisfied by our construction. Thus, it suffices to check condition \ref{item:intertwining_v2}, that $\mathcal{T}^{\circ,y}_{k+1}$ has r.c.d.\ $\Ast\Lambda_k(\Ast\mathcal{T}^{\circ,y}_k,\cdot\,)$ given $\Ast\mathcal{T}^{\circ,y}_k$.
By construction, we have the a.s.\ equality of events $\big\{\mathcal{T}^{\circ,y}_{k+1} = 0\big\} = \big\{\Ast\mathcal{T}^{\circ,y}_k = 0\big\} = \big\{D'_1 < y\big\}$. Since $\Ast\Lambda_k(0,\cdot\,) = \delta_0$, this gives the correct conditional distribution on this event. It remains to check the claimed r.c.d.\ on the event $\big\{\Ast\mathcal{T}^{\circ,y}_k = \Ast T\big\}$ for $\Ast T\in \Ast\bT_{k}\setminus\{0\}$.
Recall that $(T,\ell)\in\Ast\bT_{k}$ denotes the fixed initial state of $\Ast\mathcal{T}^y_k = \big(\mathcal{T}^y_k,\ell^y\big)$, $y\ge0$.
Case 1: $\ell\notin [k]$. Then label $k\!+\!1$ belongs to a type-1 compound in $\mathcal{T}^0_{k+1}$, so the first degeneration cannot be a case \ref{case:degen:type2} degeneration, and it must be visible, $D_1 = D'_1$. Thus, $\{\Ast\mathcal{T}^y_k \neq 0\} = \{y<D_1\}$. On the event $\{y<D_1\}$, the tree shape of $\Ast\mathcal{T}^y_k$ must equal that of $T$, and its marked block must lie in the same internal edge as the initial marked block $\ell$. For any marked $k$-tree $\Ast T$ with the latter of these properties, there is a unique $(k\!+\!1)$-tree $T'$ that satisfies $\Ast T = \phi_2(T')$, as in example (A) in Figure \ref{fig:mark_k_tree}, and $\Ast\Lambda_k\big(\Ast T,\cdot\,\big) = \delta_{T'}$. Thus, if $\ell\notin [k]$ then $\mathcal{T}^{\circ,y}_{k+1}$ is in fact a deterministic function of $\Ast\mathcal{T}^{\circ,y}_k$, and $\Ast\Lambda_k(\Ast\mathcal{T}^{\circ,y}_k,\cdot\,)$ is a r.c.d.\ for $\mathcal{T}^{\circ,y}_{k+1}$ given $\Ast\mathcal{T}^{\circ,y}_k$ as claimed, albeit in a trivial sense.
Case 2: $\ell = i\in [k]$. By the discussion above this proposition, for $n\ge1$, on the event $\{D_n < D'_1\}$, the time $D_n$ is an invisible degeneration time, so label $k\!+\!1$ has just been dropped from a type-2 compound $(m_i^{D_n-},m_{k+1}^{D_n-},\alpha_{\{i,k+1\}}^{D_n-})$ and it resamples into block $i$. Formally,
\begin{equation}\label{eq:degen_filtration}
\{D_n < D'_1\} = \{D_{n-1}< D'_1\} \cap\{ \ell^{D_n} = i \}.
\end{equation}
For $n\ge1$ let $\mathcal{F}_{n-} := \sigma\big(\mathcal{T}^{\circ,y}_{k+1},\,y \in [0,D_n)\big)$ and $\mathcal{F}_n := \sigma\big(\mathcal{T}^{\circ,y}_{k+1},\,y \in [0,D_n]\big)$. The first event on the right hand side in \eqref{eq:degen_filtration} is measurable in $\mathcal{F}_{n-}$ while the second is measurable in $\mathcal{F}_n$. Moreover, by definition of invisible degeneration times, on the event $\{D_n < D'_1\}$ we get $\Ast\mathcal{T}^{\circ,D_n}_k = \Ast\mathcal{T}^{\circ,D_n-}_k$, the latter of which is $\mathcal{F}_{n-}$-measurable. We also get $\varrho\big(\mathcal{T}^{\circ,D_n-}_{k+1}\big) = \mathcal{T}^{\circ,D_n-}_k = \mathcal{T}^{\circ,D_n}_k$. Thus, by \eqref{eq:resamp_vs_mark},
$\Ast\Lambda_k(\Ast\mathcal{T}^{\circ,D_n}_k,\cdot\,)$ is a r.c.d.\ for $\mathcal{T}^{\circ,D_n}_{k+1}$ given $\mathcal{F}_{n-}$.
Let $N := \sup\{n\ge0\colon D_n<y\}$. Note that $\{n = N\} = \{D_n < y\}\cap \{y \le D_{n+1}\}$.
Thus, for bounded, measurable $f\colon \bT_{k+1}\rightarrow\mathbb{R}$ with $f(0)=0$,
\begin{equation}\label{eq:inter:killed:calc}
\begin{split}
\mathbb{E}\big[f\big(\mathcal{T}^{\circ,y}_{k+1}\big)\big] &= \sum_{n\ge 0}\int_{[0,y)\times\Ast\bT_{k}}\mathbb{P}\!\left\{D_n\in dz,\,\Ast\mathcal{T}^{\circ,D_n}_k \in d\Ast T\right\}\int_{\bT_{k+1}} \Ast\Lambda_k\!\left(\Ast T,dT'\right)\\
&\qquad\qquad \int_{\bT_{k+1}}\mathbb{P}(\mathcal{T}^{\circ,y}_{k+1}\mathbf{1}\{y<D_{n+1}\} \in dT''\ |\ \mathcal{T}^{\circ,z}_{k+1} = T',\,D_n = z)f(T'').
\end{split}
\end{equation}
By the strong Markov property of $\big(\mathcal{T}^{\circ,y}_{k+1},y\ge0\big)$,
\begin{equation*}
\mathbb{P}\left(\mathcal{T}^{\circ,y}_{k+1}\mathbf{1}\{y<D_{n+1}\} \in dT''\ \middle|\ \mathcal{T}^{\circ,z}_{k+1} = T',\,D_n = z\right) = \mathbb{P}\!\left(\widehat\mathcal{T}^{y-z}_{k+1}\mathbf{1}\{y-z < \widehat D_1\} \in dT''\ \middle|\ \widehat\mathcal{T}^0_{k+1} = T'\right),
\end{equation*}
where $(\widehat\mathcal{T}^y_{k+1},y\ge0)$ is a $(k\!+\!1)$-tree evolution and $\widehat D_1$ its first degeneration time. Now, suppose we start this evolution with initial distribution $\Ast\Lambda_k\big(\Ast T,\cdot\,\big)$, where the marked block in $\Ast T$ is a leaf block $i\in [k]$. Then in $\widehat\mathcal{T}^0_{k+1}$, the type-2 compound $\big(\widehat m^y_i,\widehat m_{k+1}^y,\widehat\alpha_{\{i,k+1\}}^y\big)$ begins in a pseudo-stationary initial distribution. Conditioning the $(k\!+\!1)$-tree evolution not to degenerate prior to time $y-z$ means conditioning this type-2 compound not to degenerate, and likewise for the other, independently evolving compounds that comprise this $(k\!+\!1)$-tree evolution. Thus, by Proposition \ref{prop:012:pseudo}, $\big(\widehat m^{y-z}_i,\widehat m_{k+1}^{y-z},\widehat\alpha_{\{i,k+1\}}^{y-z}\big)$ is conditionally a Brownian reduced 2-tree scaled by an independent random mass, and this is independent of all other compounds. Equivalently, $\Ast\Lambda_k\big(\phi_2\big(\widehat\mathcal{T}^{y-z}_{k+1}\mathbf{1}\{y-z < \widehat D_1\}\big),\cdot\,\big)$ is a r.c.d.\ for $\widehat\mathcal{T}^{y-z}_{k+1}\mathbf{1}\{y-z < \widehat D_1\}$ given $\phi_2\big(\widehat\mathcal{T}^{y-z}_{k+1}\mathbf{1}\{y-z < \widehat D_1\}\big)$. Plugging this back into \eqref{eq:inter:killed:calc}, we get
\begin{align*}
\mathbb{E}\big[f\big(\mathcal{T}^{\circ,y}_{k+1}\big)\big] &= \sum_{n\ge 0}\int_{[0,y)\times\Ast\bT_{k}}\mathbb{P}\!\left\{D_n\in dz,\,\Ast\mathcal{T}^{\circ,D_n}_k \in d\Ast T\right\}\\
&\qquad \int_{\Ast\bT_{k}} \mathbb{P}\left(\Ast\mathcal{T}^{\circ,y}_{k}\mathbf{1}\{y<D_{n+1}\} \in d\Ast T'\ \middle|\ D_n = z,\,\Ast\mathcal{T}^{\circ,D_n}_k = \Ast T\right)\int_{\bT_{k+1}}\Ast\Lambda_k\left(\Ast T',dT''\right)f(T'')\\
&= \mathbb{E}\left[\int\Ast\Lambda_k\left(\Ast\mathcal{T}^{\circ,y}_{k},dT''\right)f(T'')\right].
\end{align*}
Thus, $\Ast\Lambda_k\big(\Ast\mathcal{T}^{\circ,y}_k,\cdot\,\big)$ is a r.c.d.\ for $\mathcal{T}^{\circ,y}_{k+1}$ given $\Ast\mathcal{T}^{\circ,y}_k$, as claimed. \end{proof}
\begin{proposition}\label{prop:inter:skel}
$\Big(\!\Big(\Ast\mathcal{T}^{D'_n}_{k},D'_n\Big),n\!\ge\!0\Big)$ is a Markov chain intertwined below $\Big(\!\Big(\mathcal{T}^{D'_n}_{k+1},D'_n\Big),n\!\ge\!0\Big)$ with kernel $\Ast\Lambda_k\otimes I$, where $I$ denotes the identity kernel on $\mathbb{R}$. \end{proposition}
\begin{proof}
Again, it suffices to verify condition \ref{item:intertwining_v2}, that for $n\ge1$, $\Ast\Lambda_k\otimes I\big(\big(\Ast\mathcal{T}^{D'_n}_{k},D'_n\big),\cdot\,\big)$ is a r.c.d.\ for $\big(\mathcal{T}^{D'_n}_{k+1},D'_n\big)$ given $\big(\Ast\mathcal{T}^{D'_n}_{k},D'_n\big)$. Equivalently, we must show that $\Ast\Lambda_k\big(\Ast\mathcal{T}^{D'_n}_{k},\cdot\,\big)$ is a r.c.d.\ for $\mathcal{T}^{D'_n}_{k+1}$ given $\big(\Ast\mathcal{T}^{D'_n}_{k},D'_n\big)$. In fact, if we can prove this claim for $n=1$, then by the strong Markov property of the resampling $(k\!+\!1)$-tree evolution at time $D'_1$, we will have that $\big(\big(\Ast\mathcal{T}_{k}^{D'_{n+1}},D'_{n+1}-D'_1\big),n\ge0 \big)$ is another instance of the same Markov chain, but with a different initial distribution that also satisfies condition \ref{item:intertwin:init} of Definition \ref{def:intertwining}. Then, by induction, the full result will follow, for all $n\ge1$.
To verify the claim, we first note that on the event that label $k\!+\!1$ resamples at time $D'_1$, as at any time label $k\!+\!1$ resamples, we have $\mathcal{T}^{D'_1}_k = \varrho\big(\mathcal{T}^{D'_1-}_{k+1}\big)$. Thus, \eqref{eq:resamp_vs_mark} yields
\begin{equation*}
\mathbb{E}\left[\mathbf{1}\left\{J\left(\mathcal{T}^{D'_1-}_{k+1}\right)=k\!+\!1\right\}f\left(\mathcal{T}^{D'_1}_{k+1}\right)\right]
=\mathbb{E}\left[\mathbf{1}\left\{J\left(\mathcal{T}^{D'_1-}_{k+1}\right)=k\!+\!1\right\}\int_{\mathbb{T}_{k+1}}\Ast\Lambda_k\left(\Ast\mathcal{T}^{D'_1}_{k},dT\right)f(T)\right].
\end{equation*}
Also, defining $N$ so that $D_{N+1} = D'_1$ and conditioning on this $N$ as in the proof of Proposition \ref{prop:inter:killed}, we can show that
\begin{equation*}
\mathbb{E}\left[\mathbf{1}\left\{J\left(\mathcal{T}^{D'_1-}_{k+1}\right)\in[k]\right\}f\left(\mathcal{T}^{D'_1-}_{k+1}\right)\right]
= \mathbb{E}\left[\mathbf{1}\left\{J\left(\mathcal{T}^{D'_1-}_{k+1}\right)\in[k]\right\}\int_{\mathbb{T}_{k+1}}\Ast\Lambda_k\left(\Ast\mathcal{T}^{D'_1-}_{k},dT\right)f(T)\right].
\end{equation*}
Specifically, conditioning a resampling $(k\!+\!1)$-tree evolution (the post-$D_n$ evolution) to drop a label in $[k]$ at the first degeneration means conditioning the compound containing label $k\!+\!1$ to be non-degenerate at the independent first degeneration time of the other compounds, so any initial pseudo-stationarity of this compound yields pseudo-stationarity at this independent random time, by Proposition \ref{prop:012:pseudo}.
To complete the proof, we want to deduce that we have
\begin{equation}\label{eq:rcd_small_labels}
\mathbb{E}\left[\mathbf{1}_{A} f\left(\mathcal{T}^{D'_1}_{k+1}\right)\right]
=\mathbb{E}\left[\mathbf{1}_{A}\int_{\mathbb{T}_{k+1}}\Ast\Lambda_k\left(\Ast\mathcal{T}^{D'_1}_{k},dT\right)f(T)\right]
\end{equation}
for $A = A_j := \big\{J\big(\mathcal{T}^{D'_1-}_{k+1}\big) = j\big\}$, $j\in[k]$. To this end, we recall that, by the definition of resampling $(k\!+\!1)$-tree evolutions, $\Lambda_{j,[k\!+\!1]\setminus\{j\}}\big(\varrho\big(\mathcal{T}^{D'_1-}_{k+1}\big),\cdot\,\big)$ is a r.c.d.\ for $\mathcal{T}^{D'_1}_{k+1}$ given $\mathcal{T}^{D'_1-}_{k+1}$ on the event $J\big(\mathcal{T}^{D'_1-}_{k+1}\big) = j$. This means that, on $A_j$, we may write $\mathcal{T}_{k+1}^{D'_1}$ in terms of $\Ast\mathcal{T}_k^{D'_1-} = \big(\mathcal{T}_k^{D'_1-},L\big)$ as
\begin{equation*}
\mathcal{T}_{k+1}^{D'_1}=\left(\varrho\left(\mathcal{T}_k^{D'_1-}\right)\oplus(L,k\!+\!1,\mathcal{U})\right)\oplus(M,j,\mathcal{V}),
\end{equation*}
where $\mathcal{U},\mathcal{V}\sim Q$ are two independent Brownian reduced 2-trees and, given $\Ast\mathcal{T}_k^{D'_1-}$, $\mathcal{U}$ and $\mathcal{V}$, the block $M$ of $\varrho\big(\mathcal{T}_k^{D'_1-}\big)\oplus(L,k+1,\mathcal{U})$ is chosen at random according to the masses of blocks. We derive \eqref{eq:rcd_small_labels} separately for $A=A_{j,i}$, $1\!\le\! i\!\le\! 5$, where $\{A_{j,i}\colon 1\!\le\! i\!\le\! 5\}$ is a partition of $A_j$, which we define in the following.
On $A_{j,1} := \big\{L\!\in\![k], M\!\in\!\textsc{block}\big(\varrho\big(\mathcal{T}_k^{D'_1-}\big)\big), M\!\neq\! L\big\}$, we have $\Ast\mathcal{T}_k^{D'_1} = \big(\varrho\big(\mathcal{T}_k^{D'_1-}\big)\!\oplus\!(M,j,\mathcal{V}),L\big)$ and $\mathcal{T}_{k+1}^{D'_1} = \mathcal{T}_k^{D'_1}\oplus(L,k+1,\mathcal{U})$, with $\mathcal{U}$ independent of $\Ast\mathcal{T}_k^{D'_1}$. Hence, \eqref{eq:rcd_small_labels} holds for $A=A_{j,1}$.
On $A_{j,2} := \{M=k+1\}$, we have $\mathcal{T}^{D'_1}_{k+1} = \mathcal{T}_k^{D'_1}\oplus(j,k+1,\widebar{\mathcal{V}})$, where $\widebar{\mathcal{V}}$ is formed by swapping leaf labels in $\mathcal{V}$. As the Brownian reduced 2-tree has exchangeable labels, $\widebar{\mathcal{V}}\sim Q$, so \eqref{eq:rcd_small_labels} holds for $A = A_{j,2}$.
On $A_{j,3} := \big\{L\notin[k],\, M\neq k\!+\!1\big\}$, the tree $\mathcal{T}^{D'_1}_{k+1}$ is the unique $(k\!+\!1)$-tree with $\phi_2\big(\mathcal{T}^{D'_1}_{k+1}\big) = \Ast\mathcal{T}_k^{D'_1}$, as in example (A) in Figure \ref{fig:mark_k_tree}, and indeed $\Ast\Lambda_k\big(\Ast\mathcal{T}^{D'_1}_k,\cdot\,\big)$ is a point mass at this $(k\!+\!1)$-tree. Then, as in Case 1 in the proof of Proposition \ref{prop:inter:killed}, we conclude trivially.
Consider $A_{j,4} := \{L\in[k],\, M\in \{(\{L,k\!+\!1\},a,b)\colon a,b\in\mathbb{R}\}\}$. We view this as a sub-event of $B := A_{j,4} \cup \{L\in [k],\, M\in \{L,k\!+\!1\}\}$. Then $B$ is the event that label $k\!+\!1$ is inserted into a leaf block of $\varrho\big(\mathcal{T}_k^{D'_1-}\big)$ forming a type-2 compound in $\varrho\big(\mathcal{T}_{k+1}^{D'_1-}\big)$, and then label $j$ is inserted into a block in this type-2 compound. On this event, the aforementioned type-2 compound is a Brownian reduced 2-tree. Thus, by \eqref{eq:B_ktree_resamp}, the subtree of $\mathcal{T}^{D'_1}_{k+1}$ comprising leaf blocks $L$, $j$, and $k\!+\!1$ and the partitions on their parent edges is a Brownian reduced 3-tree. The tree shape of this 3-tree has two leaves at distance 3 from the root, sitting in a type-2 compound, and one leaf at distance 2 from the root, in a type-1 compound. It follows from Proposition \ref{prop:B_ktree} that the type-2 compound is then a Brownian reduced 2-tree $\mathcal{W}$ scaled by an independent random mass. Conditioning on $A_{j,4}$ is equivalent to conditioning label $j$ to sit in the type-1 compound. By Proposition \ref{prop:B_ktree}, the labels in the Brownian reduced 3-tree are exchangeable, so $\mathcal{W}$ remains a Brownian reduced 2-tree under this conditioning. Hence, \eqref{eq:rcd_small_labels} holds for $A = A_{j,4}$.
The event $A_{j,5}:=\{L\in[k],\, M=L\}$ is another sub-event of $B$ introduced in the preceding paragraph. This is the sub-event in which label $k\!+\!1$ sits in the type-1 compound in the Brownian reduced 3-tree. We argue as for $A_{j,3}$. \end{proof}
As we have mentioned above, Lemma \ref{lem:intertwin_jump} is a novel, general lemma for proving intertwining. Propositions \ref{prop:inter:killed} and \ref{prop:inter:skel} provide the ingredients needed to apply this lemma, together with Lemma \ref{lem:intertwin_strong}, and deduce the following conclusion.
\begin{proposition}\label{prop:intertwined}
$\big(\Ast\mathcal{T}^y_k,y\ge 0\big)$ is a strong Markov process intertwined below $\big(\mathcal{T}^y_{k+1},y\ge0\big)$ with kernel $\Ast\Lambda_k$. Moreover, the intertwining criteria \ref{item:intertwining} and \ref{item:intertwining_v2} hold not just at fixed times $y$, but at all stopping times for $\big(\Ast\mathcal{T}^y_k,y\ge 0\big)$. \end{proposition}
We call $\big(\Ast\mathcal{T}^y_k,y\ge 0\big)$ the \emph{(resampling) marked $k$-tree evolution from initial state $(T,\ell)$}.
\subsection{Projection from marked $k$-tree}\label{sec:const:Dynkin1}
We continue with the notation of the preceding subsection, but now we study $\mathcal{T}^y_k = \phi_1\big(\Ast\mathcal{T}^y_k\big) = \pi_{k}\big(\mathcal{T}_{k+1}^y\big)$, $y\ge0$. For the moment, we know only that this is a $\bT_k$-valued stochastic process; in the course of this section, we will show that it is a resampling $k$-tree evolution up to time $D_\infty = \sup_n D_n$. Let $(D''_n,n\ge1)$ denote the subsequence of degeneration times of $(\mathcal{T}_{k+1}^y,y\ge0)$ at which a label other than $k\!+\!1$ is killed. This is a further subsequence of $(D'_n,n\ge1)$.
\begin{proposition}\label{prop:Dynkin:killed}
$\big(\mathcal{T}^{y}_k\mathbf{1}\{y<D''_1\},y\ge0\big)$ is a killed $k$-tree evolution and is Markovian in the filtration generated by $\big(\mathcal{T}^y_{k+1},y\ge0\big)$. \end{proposition}
This next proof is where we finally appreciate the usefulness of the swap-and-reduce map of Section \ref{sec:non_resamp_def} in preserving projective consistency at degeneration times.
\begin{proof}
This will be done by a coupling argument. In particular, we will define a killed $k$-tree evolution $\big(\widetilde \mathcal{T}^y_k, y\in [0,\wt D)\big)$ with which we will couple a resampling $(k\!+\!1)$-tree evolution $\big(\wt\mathcal{T}^y_{k+1}, y\in [0,\wt D\wedge D_{\infty})\big)$ in such a way that $\big(\pi_k\big(\wt\mathcal{T}^y_{k+1}\big), y\in [0,\wt D\wedge D_{\infty})\big) = \big(\wt \mathcal{T}^y_{k}, y\in [0,\wt D\wedge D_{\infty}) \big)$ almost surely.
Let $(T,\ell)\in\Ast\bT_{k}$ be as in the previous section. We denote the coordinates of $T$ by $(\mathbf{t},(x_i,i\in[k]),(\beta_E,E\in\mathbf{t}))$. Let $(\Omega_{(0)},\mathcal{F}_{(0)},\mathbb{P}_{(0)})$ denote a probability space on which we have defined an independent type-$d$ evolution corresponding to each type-$d$ compound in $T$, with initial state equal to that compound, for $d=0,1,2$. We denote the top mass evolution corresponding to each leaf $i$ by $\big(\wt m_i^y,y\ge0\big)$, and we denote the interval-partition-valued process associated with each internal edge $E$ by $\big(\wt\alpha_E^y,y\ge0\big)$. Let $\widetilde D$ denote the minimum of the degeneration times of these type-$d$ evolutions. As in Definition \ref{def:killed_ktree}, $\widetilde\mathcal{T}^y_k := \big(\mathbf{t},\big(\widetilde m_j^y,j\in[k]\big), \big(\widetilde\alpha_E^y,E\in\mathbf{t}\big)\big)$, $y\in [0,\widetilde D)$, is a killed $k$-tree evolution.
Case 1: the initial marked block $\ell$ is a leaf block in $T$. Then we extend our probability space to $(\Omega_{(1)},\mathcal{F}_{(1)},\mathbb{P}_{(1)})$ to include a process $\big(U_{(1)}^y,y\ge0\big)$ so that, given the sub-$\sigma$-algebra of $\mathcal{F}_{(1)}$ corresponding to $\mathcal{F}_{(0)}$, it is distributed as a pseudo-stationary type-2 evolution conditioned to have total mass evolution $\big\|U_{(1)}^y\big\| = \wt m_\ell^y$ for $0\le y \le \inf\big\{z\ge 0\colon \wt m_\ell^z = 0\big\}$. Such a conditional distribution exists as $(\mathcal{I},d_{\mathcal{I}})$ is Lusin \cite[Theorem 2.7]{Paper1}, and type-2 evolutions are c\`adl\`ag. As noted in Lemma \ref{lem:type2:symm} and Proposition \ref{prop:012:concat}\ref{item:012concat:BESQ+0}, this total mass process is a \distribfont{BESQ}\checkarg[-1], so after integrating out this conditioning, $\big(U_{(1)}^y,y\ge0\big)$ is a type-2 evolution by Proposition \ref{prop:012:mass}. We define $D_{(1)}$ to be the degeneration time of $\big(U_{(1)}^y,y\ge0\big)$ and set
\begin{equation}\label{eq:Dynkin:D1}
\widetilde D_1 := \widetilde D\wedge D_{(1)}.
\end{equation}
Recall the label insertion operator $\oplus$ defined in Section \ref{sec:resamp_def}. We define
\begin{equation}\label{eq:Dynkin:insert_ext}
\wt\mathcal{T}^y_{k+1} := \wt\mathcal{T}^y_k\oplus \left(\ell,U_{(1)}^y/\left\|U_{(1)}^y\right\|\right), \quad y\in \left[0,\widetilde D_1\right).
\end{equation}
Then this is a killed $(k\!+\!1)$-tree evolution with initial law $\Ast\Lambda_k((T,\ell),\cdot\,)$, in which the type-2 compound containing label $k\!+\!1$ equals $\big(U_{(1)}^y,y\ge0\big)$, up to relabeling.
Case 2: $\ell$ is an internal block $(E,a,b)$ in a type-$d$ edge in $T$, $d = 0,1,2$. We consider the case $d=2$, so $E = \{i,j\}$ for some $i,j\in [k]$; the other cases can be handled similarly. As noted in Remark \ref{rmk:decomp_ker}, there exists a stochastic kernel $\kappa$ that takes the path of a type-2 evolution, together with a block in its interval partition component at time zero, and specifies the conditional law of a pair of processes, a type-2 evolution and a type-1 evolution stopped at the lesser of their two degeneration times, conditioned to concatenate to form the specified path split around the specified block. Moreover, after mixing over the law of the type-2 evolution, the constituent evolutions are independent. In this case, we are interested in such a pair with conditional law
\begin{equation}
\left( \Gamma_{(1)}^y,\; \left(m_{(1)}^y,\alpha_{(1)}^y\right),\; y\in [0,D_{(1)})\right) \sim
\kappa\left( \left( \left( \left(\wt m_i^y,\wt m_j^y,\wt\alpha_E^y \right),y\ge0\right),\,(a,b)\right),\,\cdot\,\right).
\end{equation}
To be clear, concatenating these two processes in the sense of \eqref{eq:012concat:2+1} would yield $\big( \big(\wt m_i^y,\wt m_j^y,\wt\alpha_E^y \big),y\in [0,D_{(1)})\big)$ prior to the first time $D_{(1)}$ that one of $\big(\Gamma_{(1)}^y\big)$ or $\big(\big(m_{(1)}^y,\alpha_{(1)}^y\big)\big)$ degenerates, and in this concatenation, the top mass $m_{(1)}^0$ corresponds to the block $(a,b)\in \wt\alpha_E^0$. We extend our probability space to $(\Omega_{(1)},\mathcal{F}_{(1)},\mathbb{P}_{(1)})$ to include a pair with this conditional law given the sub-$\sigma$-algebra of $\mathcal{F}_{(1)}$ corresponding to $\mathcal{F}_{(0)}$.
We define $a_{(1)}^y$ to equal the mass of the interval partition component of $\Gamma_{(1)}^y$ and we set $b_{(1)}^y := a_{(1)}^y + m_{(1)}^y$ for $y\in [0,D_{(1)})$. We define $\wt D_1$ as in \eqref{eq:Dynkin:D1}. Then we set
\begin{equation}\label{eq:Dynkin:insert_int}
\wt\mathcal{T}^y_{k+1} := \wt\mathcal{T}^y_k\oplus \left(\left(E,a_{(1)}^y,b_{(1)}^y\right),\,k+1,\,U \right),\quad y\in \big[0,\widetilde D_1\big),
\end{equation}
where $U$ is an arbitrary 2-tree, say $(1/2,1/2,\emptyset)$, which, we recall from Section \ref{sec:resamp_def}, is redundant in the label insertion operator when inserting into an internal block.
In each case, the constructed process $\big(\widetilde\mathcal{T}^y_{k+1},y\in [0,\widetilde D_1)\big)$ is a killed $(k\!+\!1)$-tree evolution with initial distribution $\Ast\Lambda_k((T,\ell),\cdot\,)$. Recall the three cases, \ref{case:degen:type2}, \ref{case:degen:self}, and \ref{case:degen:nephew}, in which label $k\!+\!1$ may resample in a resampling evolution. Case \ref{case:degen:type2} corresponds to Case 1 above, and the event that in that case, $D_{(1)} < \widetilde D$. Cases \ref{case:degen:self} and \ref{case:degen:nephew} correspond to Case 2 above, and the events that in that case, $\big(\big(m_{(1)}^y,\alpha_{(1)}^y\big)\big)$ or $\big(\Gamma_{(1)}^y\big)$, respectively, are the first to degenerate among $\big(\big(m_{(1)}^y,\alpha_{(1)}^y\big)\big)$, $\big(\Gamma_{(1)}^y\big)$, and $\big(\widetilde\mathcal{T}^y_k\big)$. In other words, we have the equality of events \begin{equation}
A_1 := \{D_{(1)} < \widetilde D\} = \left\{J\left(\wt\mathcal{T}_{k+1}^{\widetilde D_1-}\right) = k\!+\!1\right\}.
\end{equation}
We now confirm that on $A_1$,
\begin{equation}\label{eq:swapred:k+1}
\varrho\left(\wt\mathcal{T}^{\widetilde D_1-}_{k+1}\right) = \wt\mathcal{T}^{\wt D_1}_k.
\end{equation}
In Case 1 above, this is clear: we had split the block $\ell$ into a type-2 compound, and at time $\widetilde D_1$, regardless of whether label $k\!+\!1$ or label $\ell$ was the cause of degeneration in the compound, after applying the swap-and-reduce map $\varrho$, label $k\!+\!1$ gets dropped and the single remaining mass bears label $\ell$. In Case 2, on the event that label $k\!+\!1$ was the cause of degeneration, this is again clear: $\big(m_{(1)}^{\wt D_1-},\alpha_{(1)}^{\wt D_1-}\big) = (0,\emptyset)$, so all that remains of the compound containing edge $E$ is $\Gamma_{(1)}^{\wt D_1-}$.
It remains to confirm \eqref{eq:swapred:k+1} in Case 2, on the event that one of the nephews of label $k\!+\!1$, corresponding to one of the labels in $\big(\Gamma_{(1)}^y\big)$, is the cause of degeneration. We will address the case where the edge $E$ containing the marked block is a type-2 edge, $E = \{u,v\}$, and say label $u$ causes degeneration; the type-1 case is similar. Then, in $\wt\mathcal{T}_{k+1}^{\wt D_1-}$, block $u$ and edge $\{u,v\}$ both have mass zero; but in $\varrho\big(\wt\mathcal{T}_{k+1}^{\wt D_1-}\big)$, label $u$ displaces label $k\!+\!1$, and the edge that was formerly $\longparent{\{k\!+\!1\}} = \{u,v,k\!+\!1\}$ gets relabeled as $\{u,v\}$, so that the newly labeled block $u$ has mass $m_{(1)}^{\wt D_1}$ while edge $\{u,v\}$ bears the partition $\alpha_{(1)}^{\wt D_1}$. This is consistent with the second line of the formula in Proposition \ref{prop:012:concat}\ref{item:012concat:2+1}, so that $\big(m_{(1)}^{\wt D_1},\alpha_{(1)}^{\wt D_1}\big) = \big(\wt m_u^{\wt D_1},\wt\alpha_E^{\wt D_1}\big)$. Thus, again, $\varrho\big(\wt\mathcal{T}^{\widetilde D_1-}_{k+1}\big) = \wt\mathcal{T}^{\wt D_1}_k$.
We now extend this construction recursively. Suppose that we have defined $\big(\wt\mathcal{T}^y_{k+1},y\in [0,\wt D_n)\big)$ on some extension $(\Omega_{(n)},\mathcal{F}_{(n)},\mathbb{P}_{(n)})$ of $(\Omega_{(0)},\mathcal{F}_{(0)},\mathbb{P}_{(0)})$ so that this is distributed as a resampling $(k\!+\!1)$-tree evolution stopped either just before the first time that a label in $[k]$ resamples or the $n^{\text{th}}$ time that label $k\!+\!1$ resamples, whichever comes first. Suppose also that $\wt\mathcal{T}^y_k = \pi_k\big(\wt\mathcal{T}^y_{k+1}\big)$ for $y\in [0,\wt D_n)$ and that, on the event $A_n$ that the $n^{\text{th}}$ resampling time for label $k\!+\!1$ precedes the first time that a lower label would resample, also $\varrho\big(\wt\mathcal{T}^{\widetilde D_n-}_{k+1}\big) = \wt\mathcal{T}^{\wt D_n}_k$.
We further extend our probability space to $(\Omega_{(n+1)},\mathcal{F}_{(n+1)},\mathbb{P}_{(n+1)})$ to include random objects with the following conditional distributions given the sub-$\sigma$-algebra of $\mathcal{F}_{(n+1)}$ that corresponds to $\mathcal{F}_{(n)}$.
\begin{itemize}
\item A random block $L_{n}$, conditionally distributed as a size-biased pick from $\textsc{block}\big(\wt\mathcal{T}_k^{\wt D_n}\big)$ given $A_n$, and equal to a cemetery state $\partial$ on $A_n^c$.
\item A process $\big(U_{(n+1)}^y,y\in [0,D_{(n+1)})\big)$ that, if we additionally condition on $A_{n,1} := \{L_n\in [k]\}$, is conditionally distributed as a pseudo-stationary type-2 evolution stopped at degeneration, conditioned to have total mass process $\big\|U_{(n+1)}^y\big\| = m_{L_n}^{\wt D_n + y}$, $y\in [0,D_{(n+1)})$. On $A_{n,1}^c$ we instead define this to be the constant process at $\partial$.
\item A pair of processes that, if we additionally condition on $L_n = (E,a,b)\in \textsc{block}\big(\wt\mathcal{T}^{\wt D_n}_k\big)\setminus [k]$, have conditional law
\begin{equation}
\left( \Gamma_{(n+1)}^y,\; \left(m_{(n+1)}^y,\alpha_{(n+1)}^y\right),\; y\in [0,D_{(n+1)})\right) \sim
\kappa\left( \left( \left( \wt \Gamma_E^{\wt D_n + y},y\ge0\right),\,(a,b)\right),\,\cdot\,\right),
\end{equation}
where $\big(\wt\Gamma_E^y,y\ge0\big)$ denotes the evolution on the type-0/1/2 compound in $\big(\wt\mathcal{T}^y_k\big)$ containing edge $E$. We define these to be constant $\partial$ processes on the event $A_n^c\cup A_{n,1}$.
\end{itemize}
We define $\wt D_{n+1} := \wt D\wedge (\wt D_n + D_{(n+1)})$. On $A_{n,1}$, we define $\big(\wt\mathcal{T}^y_{k+1},y\in [\wt D_n,\wt D_{n+1})\big)$ as in Case 1 above, with the obvious modifications. On $A_{n,2} := A_n\setminus A_{n,1}$, we define this process as in Case 2 with the obvious modifications. On $A_n^c$, we get $\wt D_{n+1} = \wt D_n = \wt D$, and we do not define $\wt\mathcal{T}^y_{k+1}$ for $y\ge \wt D$.
As required for our induction, $\big(\wt\mathcal{T}^y_{k+1},y\in [0,\wt D_{n+1})\big)$ is distributed as a resampling $(k\!+\!1)$-tree evolution stopped at either the first time a label in $[k]$ would resample or the $(n\!+\!1)^{\text{st}}$ time that label $k\!+\!1$ would resample, and $\wt\mathcal{T}^y_k = \pi_k\big(\wt\mathcal{T}^y_{k+1}\big)$ for $y\in [0,\wt D_{n+1})$. By the same arguments as before, \eqref{eq:swapred:k+1} holds at $\wt D_{n+1}$ on the event $A_{n+1}$ that label $k\!+\!1$ resamples an $(n\!+\!1)^{\text{st}}$ time before the first time that a lower label would resample.
By the Ionescu Tulcea theorem \cite[Theorem 6.17]{Kallenberg}, there is a probability space $(\Omega_\infty,\mathcal{F}_{\infty},\mathbb{P}_{\infty})$ on which we can define a resampling $(k\!+\!1)$-tree evolution $\big(\wt\mathcal{T}^y_{k+1}\big)$ run until either the times at which label $k\!+\!1$ resamples have an accumulation point $\wt D_\infty$ or a lower label would resample for the first time, with $\pi_k\big(\wt\mathcal{T}^y_{k+1}\big) = \wt\mathcal{T}^y_k$ on the time interval $[0,\wt D_\infty)$ and $\wt\mathcal{T}^0_{k+1}\sim \Ast\Lambda_k((T,\ell),\cdot\,)$. The claim that $\big(\wt\mathcal{T}^y_{k},y\in [0,\wt D)\big)$ is Markovian in the filtration generated by itself and $\big(\wt\mathcal{T}^y_{k+1},y\in [0,\wt D_\infty)\big)$ follows from our construction and the assertions concerning filtrations at the ends of Propositions \ref{prop:012:concat} and \ref{prop:012:mass}. \end{proof}
We now prove a pair of results, one intertwining-like and the other Dynkin-like, regarding the behavior of $\mathcal{T}_{k+1}^y$, $\Ast\mathcal{T}_k^y$, and $\mathcal{T}_k^y$ at the degeneration times $D''_n$. Note that at such times, $\mathcal{T}_k^{D''_n-} = \pi_{-(k+1)}\big(\mathcal{T}_{k+1}^{D''_n-}\big)$ is degenerate, with $$\left(I\!\left(\mathcal{T}_k^{D''_n-}\right),J\!\left(\mathcal{T}_k^{D''_n-}\right)\right) = \left(I\!\left(\mathcal{T}_{k+1}^{D''_n-}\right),J\!\left(\mathcal{T}_{k+1}^{D''_n-}\right)\right)
\quad\text{and}\quad
\varrho\!\left(\mathcal{T}_k^{D''_n-}\right) = \pi_{-(k+1)}\circ\varrho\!\left(\mathcal{T}_{k+1}^{D''_n-}\right)\!.$$
\begin{lemma}\label{lem:preswap}
Fix $n\ge1$. Given $\big(\Ast\mathcal{T}_k^y,y\in [0,D''_n)\big)$ and the event $\{D_n''<D_\infty\}$, with $\Ast\mathcal{T}_k^{D''_n-} = (U,L)$ and $J\big(\mathcal{T}_{k}^{D''_n-}\big) = j\in [k]$,
\begin{enumerate}
\item $\mathcal{T}_{k+1}^{D''_n-}$ has conditional law $\Ast\Lambda_k((U,L),\cdot\,)$ and\label{item:preswap:inter}
\item $\mathcal{T}_{k}^{D''_n}$ has conditional law $\Lambda_{j,[k]\setminus\{j\}}(\varrho(U),\cdot\,)$.\label{item:preswap:Dynkin}
\end{enumerate} \end{lemma}
These assertions are illustrated in Figure \ref{fig:preswap}.
\begin{figure}\label{fig:preswap}
\end{figure}
\begin{proof}
\ref{item:preswap:inter} Observe that $D_n''$ is previsible for $\Ast\mathcal{T}^y_{k}$: letting $C_a''$ denote the first time after $D_{n-1}''$ at which $m_i^y+\|\beta_{\parent{\{i\}}}^y\|$ is less than $1/a$ for some label $i$ other than $k\!+\!1$ or its sibling or nephews, for $a\ge 1$, we have
$C_a''<D_n''$ and $\lim_{a\rightarrow\infty}C_a'' = D_n''$. From Proposition \ref{prop:inter:killed},
\[ \mathbb{P}\left(\mathcal{T}^{C_a''}_{k+1} \in \,\cdot\, \middle|\, \big(\Ast\mathcal{T}_k^y,y\in [0,C''_a]\big)\right) = \Ast\Lambda_k(\Ast\mathcal{T}_k^{C''_a},\cdot\,). \]
Since $\mathcal{T}^{C_a''}_{k+1} \rightarrow \mathcal{T}^{D_n''-}_{k+1}$ almost surely, and $\sigma \big(\Ast\mathcal{T}_k^y,y\in [0,C''_a]\big) \uparrow \sigma \big(\Ast\mathcal{T}_k^y,y\in [0,D''_n)\big)$ by \cite[Lemma VI.17.9]{RogersWilliams2}, it follows from \cite[Theorem 5.5.9]{Durrett} and Lemma \ref{lem:Lstar_cont} that
\[\mathbb{E} \left[ f\left(\mathcal{T}^{D_n''-}_{k+1}\right)\, \middle|\, \big(\Ast\mathcal{T}_k^y,y\in [0,D''_n)\big)\right] = \int f(T) \Ast\Lambda_k(\Ast\mathcal{T}_k^{D''_n-},dT\,)\]
for every bounded continuous function $f$, as desired.
\ref{item:preswap:Dynkin} To compute $\mathbb{E}\big(F(\Ast\mathcal{T}_k^y,y\in[0,D_n''))\,G(\mathcal{T}_k^{D_n''})\big)$ for functionals $F(\Ast\mathcal{T}_k^y,y\in[0,D_n''))$ that vanish outside $\{J(\mathcal{T}_k^{D_n''-})=j\}$, for some $j\in[k]$, we apply \ref{item:preswap:inter} to write this expectation as
$$\mathbb{E}\left(F\!\left(\Ast\mathcal{T}_k^y,y\!\in\![0,D_n'')\right)\!\int_{W_1\in\mathbb{T}_{[k+1]}}\int_{W_2\in\mathbb{T}_{[k+1]}}\!\!G(\pi_k(W_2))\Lambda_{j,[k+1]\setminus\{j\}}(\varrho(W_1),dW_2)\Ast\Lambda_k(\Ast\mathcal{T}_k^{D_n''-},dW_1)\!\right)\!.$$ Thus, it suffices to show
$$\int_{\widetilde{\bT}_{k+1}}\Ast\Lambda_k((U,L),dT')\int_{\bT_{k+1}}\Lambda_{j,[k+1]\setminus\{j\}}(\varrho(T'),dT'')\,G\circ\pi_k(T'') = \int_{\widetilde{\bT}_{k}}\Lambda_{j,[k]\setminus\{j\}}(\varrho(U),dT''')\,G(T''').$$
This is trivial in the case that $L$ marks an internal block in $U$: in that case, $\mathcal{T}_{k+1}^{D''_n-}$ is a deterministic function of $\Ast\mathcal{T}_k^{D''_n-} = (U,L)$, as in Figure \ref{fig:mark_k_tree}(A), and there is a natural weight- and tree-structure-preserving bijection between the blocks of the former and those of the latter, allowing us to couple $\Lambda_{j,[k+1]\setminus\{j\}}\big(\varrho\big(\mathcal{T}^{D''_n-}_{k+1}\big),\cdot\,\big)$ with $\Lambda_{j,[k]\setminus\{j\}}\big(\varrho\big(\Ast\mathcal{T}^{D''_n-}_{k}\big),\cdot\,\big)$.
Henceforth, we assume that $L = i\in [k]$ with block mass $x_i$ in $U$. Let $H$ denote the event that, after resampling, leaf $j$ is the sibling, uncle, or nephew of leaf $k\!+\!1$ in $\mathcal{T}_{k+1}^{D''_n}$. Equivalently, $H$ is the event that in the marked $k$-tree, label $j$ resamples into the marked block $i$ of $\varrho\big(\mathcal{T}_k^{D''_n-}\big)$ so that after resampling, the marking sits somewhere in a type-2 compound in $\Ast\mathcal{T}_k^{D''_n}$ containing label $j$. Again, the assertion is trivial on the event $H^c$, as then there is a weight- and tree-structure-preserving bijection between the \emph{remaining} blocks of the trees, i.e.\ the unmarked blocks in $\mathcal{T}_k^{D''_n-}$ and the blocks outside of the type-2 compound containing $k\!+\!1$ in $\mathcal{T}_{k+1}^{D''_n-}$. The only remaining event is when $L = i\in [k]$ and $H$ holds.
Given $\Ast\mathcal{T}_k^{D''_n-} = (U,L)$, the event $H$ has conditional probability $x_i / \|U\|$; in particular, it is conditionally independent of the normalized ``internal structure'' of the type-2 compound containing label $k\!+\!1$ in $\mathcal{T}_{k+1}^{D''_n-}$,
$$x_i^{-1}\left(m_i^{D''_n-},m_{k+1}^{D''_n-},\alpha_{\{i,k+1\}}^{D''_n-}\right) \quad \text{where }m_i^{D''_n-} + m_{k+1}^{D''_n-} + \left\|\alpha_{\{i,k+1\}}^{D''_n-}\right\| = x_i.$$
By assertion \ref{item:preswap:inter}, given $\Ast\mathcal{T}^{D''_n-}_{k} = (U,L)$ and $L=i$, this internal structure is conditionally a Brownian reduced 2-tree. By \eqref{eq:B_ktree_resamp} and the exchangeability of labels in Brownian reduced $k$-trees noted in Proposition \ref{prop:B_ktree}, this means that after inserting label $j$, blocks $i$, $j$, and $k\!+\!1$, along with the partitions marking their parent edges, comprise an independently scaled Brownian reduced 3-tree. Thus, the $\pi_{-(k+1)}$-projection of this 3-tree is an independently scaled Brownian reduced 2-tree, as required for $\Lambda_{j,[k]\setminus\{j\}}$ on the event $H$.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:consistency_0}]
By Proposition \ref{prop:Dynkin:killed} and Lemma \ref{lem:preswap}\ref{item:preswap:Dynkin}, $\mathcal{T}^{y}_k = \pi_k\big(\mathcal{T}^y_{k+1}\big)$ evolves as a resampling $k$-tree evolution up to time $D''_1$. This holds up to time $D_\infty$ by induction and the strong Markov property applied at the degeneration times $D''_n$, $n\ge1$. \end{proof}
In fact, we have shown that the map $\phi_1$ on $\big(\Ast\mathcal{T}^y_k,y\ge0\big)$ satisfies Dynkin's criterion.
\section{Accumulation of degeneration times as mass hits zero}\label{sec:non_acc}
We now have all of the ingredients needed to prove Proposition \ref{prop:non_accumulation}: that $D_\infty := \sup_nD_n$ equals $\inf\{y\ge0\colon \|\mathcal{T}^{y-}\|=0\}$ for a resampling $k$-tree evolution. This proposition immediately completes the proofs of Theorems \ref{thm:total_mass} and \ref{thm:consistency}\ref{item:cnst:resamp}, namely that the total mass of a resampling $k$-tree evolution evolves as a \distribfont{BESQ}\checkarg[-1] and that resampling $k$-tree evolutions are projectively consistent, from the partial results in Propositions \ref{prop:total_mass_0} and \ref{prop:consistency_0}, respectively.
We require the following lemma.
\begin{lemma}\label{lem:degen_diff}
Fix $k\ge 3$ and $\epsilon>0$. Let $T\in\bT_{k-1}$ with $\|T\|>\epsilon$ and let $(\mathcal{T}^y,y\ge0)$ be a resampling $k$-tree evolution with $\mathcal{T}^0\sim\Lambda_{k,[k-1]}(T,\cdot\,)$. Let $(D^*_n,n\ge1)$ denote the subsequence of degeneration times at which label $k$ is dropped and resamples. Assume that with probability one we get $D_\infty > D^*_2$. Then there is some $\delta = \delta(k,\epsilon)>0$ that does not depend on $T$ such that $\mathbb{P}(D^*_2>\delta)>\delta$. \end{lemma}
The proof of this lemma is somewhat technical, so we postpone it until Appendix \ref{sec:non_acc_2}.
\begin{proof}[Proof of Proposition \ref{prop:non_accumulation} using Lemma \ref{lem:degen_diff}]
By Proposition \ref{prop:total_mass_0}, the total mass $\|\mathcal{T}^y\|$ of a resampling $k$-tree evolution evolves as a \distribfont{BESQ}\checkarg[-1] stopped at a random time $D_\infty$ that is not necessarily measurable in the filtration of the total mass process. By the independence of the type-$i$ evolutions in the compounds of the $k$-tree in Definition \ref{def:killed_ktree} and the continuity of the distributions of their degeneration times, there is a.s.\ no time at which two compounds degenerate simultaneously. Thus, since a \distribfont{BESQ}\checkarg[-1] a.s.\ hits zero in finite time, the degeneration times a.s.\ accumulate in finite time: $D_\infty < \infty$. In fact, the lifetime of a $\distribfont{BESQ}_x(-1)$ before absorption has law \distribfont{InverseGamma}\checkarg[3/2,x/2] by \cite[equation (13)]{GoinYor03}. Thus, $\mathbb{E}[D_\infty] < \infty$.
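Under the usual parametrization of an \distribfont{InverseGamma}\checkarg[\alpha,\beta] law (density proportional to $t^{-\alpha-1}e^{-\beta/t}$, mean $\beta/(\alpha-1)$ for $\alpha>1$; we assume this is the convention of \cite{GoinYor03}), this finiteness is quantitative: writing $x := \|\mathcal{T}^0\|$ and bounding $D_\infty$ by the \distribfont{BESQ}\checkarg[-1] lifetime,

```latex
\mathbb{E}[D_\infty]\;\le\;\frac{x/2}{3/2-1}\;=\;x\;<\;\infty.
```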
We will prove the proposition by showing that for every $\epsilon\in (0,\|\mathcal{T}^0\|)$ the time $H_\epsilon := \inf\{y\ge0\colon 0<\|\mathcal{T}^y\|\le\epsilon\}$ is a.s.\ finite. As $\epsilon$ decreases to zero, these times increase to a limit in $[0,D_\infty]$ at which $\|\mathcal{T}^y\|$ converges to zero, by continuity. Thus, by the argument of the previous paragraph, this limit equals $D_\infty$, which completes the proof. We prove $H_\epsilon<\infty$ by showing that
\begin{equation}\label{eq:degen_diff}
\mathbb{P}\big(D_{j+2}-D_{j} > \delta\ \big|\ H_{\epsilon}>D_{j}\big) > \delta \quad \text{for }j\ge n
\end{equation}
for some sufficiently small $\delta>0$ and sufficiently large $n$.
This implies
\begin{equation*}
\infty > \mathbb{E}[D_\infty] > \sum_{j \ge n}\delta^2 \mathbb{P}\{H_{\epsilon} > D_{2j}\}.
\end{equation*}
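This display follows by summing increments over even-indexed degeneration times; spelled out (a routine step, using \eqref{eq:degen_diff} with $2j$ in place of $j$):

```latex
\mathbb{E}[D_\infty]
\;\ge\; \sum_{j\ge n}\mathbb{E}\big[(D_{2j+2}-D_{2j})\,\mathbf{1}\{H_\epsilon>D_{2j}\}\big]
\;\ge\; \sum_{j\ge n}\delta\,\mathbb{P}\big(D_{2j+2}-D_{2j}>\delta,\ H_\epsilon>D_{2j}\big)
\;\ge\; \sum_{j\ge n}\delta^{2}\,\mathbb{P}\{H_\epsilon>D_{2j}\}.
```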
From this, it follows by the Borel--Cantelli Lemma that $H_\epsilon$ is a.s.\ finite, as desired. We proceed to verify \eqref{eq:degen_diff}.
Fix $\epsilon\in (0,\|\mathcal{T}^0\|)$. We proceed by induction on the number of labels. Consider a resampling 2-tree evolution. There is some $\delta>0$ such that a pseudo-stationary 2-tree evolution with initial mass $\epsilon$ will \emph{not} degenerate prior to time $\delta$ with probability at least $\delta$. By the scaling property, Lemma \ref{lem:scaling}, the same holds for any larger initial mass with the same $\delta$, proving \eqref{eq:degen_diff} in this case.
Now, suppose that the proposition holds for $k$-tree evolutions and consider a resampling $(k\!+\!1)$-tree evolution $(\mathcal{T}^y,y\ge0)$. By Proposition \ref{prop:consistency_0}, $(\pi_k(\mathcal{T}^y),y\ge0)$ is a resampling $k$-tree evolution up to the accumulation time $D_\infty$ of degenerations of the $(k\!+\!1)$-tree evolution. The degeneration times of $(\pi_k(\mathcal{T}^y))$ are the times at which a label less than or equal to $k$ resamples in $(\mathcal{T}^y)$. By the inductive hypothesis, these degeneration times do not have an accumulation point prior to the extinction time of the \distribfont{BESQ}\checkarg[-1] total mass. Thus, $D_\infty$ must equal the accumulation point of degeneration times $(D^*_j,j\ge1)$ at which label $k\!+\!1$ resamples. Lemma \ref{lem:degen_diff} now yields \eqref{eq:degen_diff} with $(D_m,m\ge1)$ replaced by $(D_m^*,m\ge1)$. \end{proof}
\section{Proofs of remaining consistency results}\label{sec:const:other}
The remaining assertions of Theorem \ref{thm:consistency} largely follow from our arguments in the proof of Proposition \ref{prop:Dynkin:killed}. We summarize the proofs of these results below.
\begin{proof}[Proof of Theorem \ref{thm:consistency}\ref{item:cnst:nonresamp}]
Suppose $\big(\mathcal{T}_{k+1}^y,y\ge0\big)$ is a non-resampling $(k\!+\!1)$-tree evolution and let $\mathcal{T}_{k}^y = \pi_{k}\big(\mathcal{T}_{k+1}^y\big)$, $y\ge0$. For the purpose of this argument, let $D^*$ denote the time at which label $k\!+\!1$ is dropped in degeneration and let $(D'_n,n\in [k])$ denote the sequence of times at which labels in $[k]$ are dropped. For $y\ge D^*$, $\mathcal{T}_{k}^y = \mathcal{T}_{k+1}^y$, and both evolve from time $D^*$ onwards as non-resampling $k$-tree evolutions. From a slight extension of the argument of Proposition \ref{prop:Dynkin:killed}, allowing $\mathcal{T}_{k+1}^0$ to be any $(k\!+\!1)$-tree satisfying $\pi_k\big(\mathcal{T}_{k+1}^0\big) = \mathcal{T}_k^0$ in Case 1 in that proof, we find that $\mathcal{T}_{k}^y$ evolves as a $k$-tree evolution up to time $D'_1$. At this degeneration time,
$$\mathcal{T}_k^{D'_1} = \pi_{k}\circ\varrho\big(\mathcal{T}_{k+1}^{D'_1-}\big) = \varrho\circ \pi_{k}\big(\mathcal{T}_{k+1}^{D'_1-}\big) = \varrho\big(\mathcal{T}_k^{D'_1-}\big),$$
as in Definition \ref{def:nonresamp_1} of non-resampling $k$-tree evolutions. After this time, if label $k\!+\!1$ has not yet been dropped, $\mathcal{T}_{k+1}^y$ continues as a non-resampling $([k\!+\!1]\!\setminus\!\{j\})$-tree evolution, where $j$ was the first label dropped. Then, by the same argument as above, $\mathcal{T}_k^y$ evolves until the next degeneration time as a $([k]\!\setminus\!\{j\})$-tree evolution. By an induction applying the strong Markov property of $\big(\mathcal{T}_{k+1}^y\big)$ at the times $(D'_n,n\in [k])$, the process $(\mathcal{T}_k^y,y\ge0)$ is a non-resampling $k$-tree evolution. A second induction argument allows one to project away multiple labels, rather than just one. \end{proof}
\begin{proof}[Proof of Theorem \ref{thm:consistency}\ref{item:cnst:dePoi}]
Fix $1\le j<k$. Suppose $\mathbf{T}_k := (\mathcal{T}_k^y,y\ge0)$ is a resampling $k$-tree evolution with initial distribution as in \eqref{eq:cnst:init}, so that $\mathbf{T}_j = (\mathcal{T}_j^y,y\ge0) := (\pi_j(\mathcal{T}_k^y),y\ge0)$ is a resampling $j$-tree evolution. Then, because these evolutions have the same total mass process, they require the same time change for de-Poissonization: $(\rho_u(\mathbf{T}_k),u\ge0) = (\rho_u(\mathbf{T}_j),u\ge0)$. Thus, the associated de-Poissonized processes are also projectively consistent. The same argument holds in the non-resampling case. \end{proof}
We can now also prove Proposition \ref{prop:resamp_to_non}.
\begin{proof}[Proof of Proposition \ref{prop:resamp_to_non}]
Suppose $\big(\mathcal{T}_{k,+}^y,y\ge0\big)$ is a resampling $k$-tree evolution. Let $(D_n,n\ge1)$ denote its sequence of degeneration times, and set $D_0 := 0$. Recall that we consider each edge in a tree shape to be labeled by the set of all labels of leaves in the subtree above that edge. Recall the definition of $\varrho$ in Section \ref{sec:non_resamp_def}: when a label $i_n := I\big(\mathcal{T}_{k,+}^{D_n-}\big)$ causes degeneration, it swaps places with label $j_n := J\big(\mathcal{T}_{k,+}^{D_n-}\big) = \max\{i_n,a_n,b_n\}$, where $a_n$ and $b_n$ are respectively the least labels on the sibling and uncle of leaf edge $\{i_n\}$ in the tree shape of $\mathcal{T}_{k,+}^{D_n-}$. In the resampling evolution, label $j_n$ is resampled.
We extend this notation slightly. Let $E_{n}^{(a)}$ and $E_n^{(b)}$ denote the sets of labels on the sibling and uncle of edge $\{i_n\}$, so that $a_n = \min\big(E_n^{(a)}\big)$ and $b_n = \min\big(E_n^{(b)}\big)$. Let $\tau_n$ denote the transposition permutation that swaps $i_n$ with $j_n$.
Set $A_0 := B_0 := [k]$ and let $\sigma_0$ denote the identity map on $[k]$. Now suppose for a recursive construction that we have defined $(A_{n-1},B_{n-1},\sigma_{n-1})$. We consider four cases.
\begin{enumerate}[label = Case \arabic*:, ref = \arabic*, topsep=5pt, itemsep=5pt, itemindent=1.82cm, leftmargin=0pt]
\item $i_n\notin A_{n-1}$ and $j_n\notin A_{n-1}$. In this case, the degeneration, swap-and-reduce map, and resampling in $\mathcal{T}_k^{D_n}$ are invisible under $\sigma_{n-1}\circ \pi_{A_{n-1}}$, since the projection erases both labels involved. We set $(A_n,B_n,\sigma_n) := (A_{n-1},B_{n-1},\sigma_{n-1})$.
\item $i_n\notin A_{n-1}$ and $j_n\in A_{n-1}$. In this case, the label $i_n$ that has caused degeneration is invisible under $\pi_{A_{n-1}}$, so there is no degeneration in the projected process, but $i_n$ displaces a label that \emph{is} visible. To maintain continuity in the projected process at this time, $i_n$ takes the place of $j_n$ in such a way that $\sigma_n(i_n) = \sigma_{n-1}(j_n)$. In particular, $A_n:= (A_{n-1}\setminus\{j_n\})\cup \{i_n\}$, $B_n := B_{n-1}$, and $\sigma_n := \sigma_{n-1}\circ\tau_n|_{A_n}$.
\item \label{case:r2n:degen} $i_n\in A_{n-1}$ and $E_n^{(a)}$ and $E_n^{(b)}$ both intersect $A_{n-1}$ non-trivially. Let ${\tilde{\textit{\i}}}_n := \sigma_{n-1}(i_n)$. In this case, the degeneration caused by $i_n$ in $\mathcal{T}_{k,+}$ corresponds to a degeneration caused by ${\tilde{\textit{\i}}}_n$ in $\mathcal{T}_{k,-}$.
Let $\tilde a_n := \min\!\big(\sigma_{n-1}\big(E_n^{(a)}\cap A_{n-1}\big)\!\big)$ and $\tilde b_n := \min\!\big(\sigma_{n-1}\big(E_n^{(b)}\cap A_{n-1}\big)\!\big)$. Let ${\tilde{\textit{\j}}}_n := \max\{{\tilde{\textit{\i}}}_n,\tilde a_n,\tilde b_n\}$ and let $\tilde\tau_n$ denote the transposition permutation that swaps ${\tilde{\textit{\i}}}_n$ with ${\tilde{\textit{\j}}}_n$. If $j_n\in A_{n-1}$ then we set $A_n := A_{n-1}\setminus\{j_n\}$; otherwise, we set $A_n := A_{n-1}\setminus\{i_n\}$. In either case, we define $B_n := B_{n-1}\setminus\{{\tilde{\textit{\j}}}_n\}$ and $\sigma_n := \tilde\tau_n\circ\sigma_{n-1}\circ\tau_n|_{A_n}$.
\item \label{case:r2n:shrink} $i_n\in A_{n-1}$ and $E_n^{(a)}$ is disjoint from $A_{n-1}$. Then leaf block $i_n$ and the subtree that contains label set $E_n^{(a)}$ in $\mathcal{T}_{k,+}^y$ project down to a single leaf block, $\sigma_{n-1}(i_n)$, in $\mathcal{T}_{k,-}^y$ as $y$ approaches $D_n$. By leaving open the possibility that $E_n^{(b)}$ may be disjoint from $A_{n-1}$ as well, we include in this case the possibility that the subtree of $\mathcal{T}_{k,+}^y$ with label set $E_n^{(b)}$ projects to this same leaf block as well. Regardless, this degeneration is ``invisible'' in $\mathcal{T}_{k,-}$. In order to keep label $\sigma_{n-1}(i_n)$ in place in the projected process, if label $i_n$ resamples or swaps with a label in $E_n^{(b)}$, then we choose a label in $E_n^{(a)}$ to map to $\sigma_{n-1}(i_n)$ under $\sigma_n$.
\begin{enumerate}[label = Case \theenumi.\arabic*: , ref=\theenumi.\arabic*, topsep=0pt, itemsep=0pt, itemindent=2cm, leftmargin=0pt]
\item $j_n = a_n$. Then we define $(A_n,B_n,\sigma_n) := (A_{n-1},B_{n-1},\sigma_{n-1})$.\label{case:r2n:shrink:null}
\item $j_n = i_n$ or $j_n = b_n$. Then let $\hat\tau_n$ denote the transposition that swaps $i_n$ with $a_n$. If $j_n\in A_{n-1}$, as is always the case when $j_n = i_n$, then we set $A_n := (A_{n-1}\setminus\{j_n\})\cup\{a_n\}$. Otherwise, if $j_n\notin A_{n-1}$ then we set $A_n := (A_{n-1}\setminus\{i_n\})\cup\{a_n\}$. In either case, we define $B_n := B_{n-1}$ and $\sigma_n := \sigma_{n-1}\circ\hat\tau_n\circ\tau_n|_{A_n}$.\label{case:r2n:shrink:swap}
\end{enumerate}
\item $i_n\in A_{n-1}$ while $E_n^{(a)}$ intersects $A_{n-1}$ non-trivially but $E_n^{(b)}$ does not. This degeneration time in $\mathcal{T}_{k,+}^y$ corresponds to a time at which labeled leaf block $\sigma_{n-1}(i_n)$ in $\mathcal{T}_{k,-}^y$ has mass approaching zero (really, it is a.s.\ an accumulation point of times at which this mass equals zero) while the interval partition on its parent edge has a leftmost block. The subtree of $\mathcal{T}^{D_n-}_{k,+}$ that contains the leaf labels $E_n^{(b)}$ maps to a single internal block, the aforementioned leftmost block, in $\mathcal{T}_{k,-}^{D_n-}$. Therefore, we define $\sigma_n$ in such a way that some label that sits in the subtree corresponding to that block gets mapped to $\sigma_{n-1}(i_n)$, so that this latter label ``moves into'' the leftmost block in the projected process, as in a type-1 or type-2 evolution; see Proposition \ref{prop:012:pred}. In fact, we can accomplish this with the same definitions of $(A_n,B_n,\sigma_n)$ as in Cases \ref{case:r2n:shrink:null} and \ref{case:r2n:shrink:swap}, but with roles of $a_n$ and $b_n$ reversed.
\end{enumerate}
It follows from the consistency result of Theorem \ref{thm:consistency}\ref{item:cnst:nonresamp} that for each $n$, the projected process evolves as a stopped non-resampling $k$-tree evolution (or $B_n$-tree evolution) during the interval $[D_n,D_{n+1})$. By our construction, we have $\mathcal{T}_{k,-}^{D_n} = \varrho\big(\mathcal{T}_{k,-}^{D_n-}\big)$ in Case \ref{case:r2n:degen}, as in Definition \ref{def:nonresamp_1} of non-resampling evolutions. In the other cases, it follows from the arguments in the proof of Proposition \ref{prop:Dynkin:killed} that each type-0/1/2 compound in $\mathcal{T}_{k,-}^{D_n}$ attains the value required by the type-0/1/2 evolution in that compound, given its left limit in $\mathcal{T}_{k,-}^{D_n-}$; see Proposition \ref{prop:012:pred}. Thus, $\big(\mathcal{T}_{k,-}^y,y\ge0\big)$ is a non-resampling $k$-tree evolution. \end{proof}
\appendix
\section{Intertwining for processes that jump from branch states}\label{sec:intertwining_lem}
In this appendix we state and prove a general lemma that can be used to prove intertwining for a function of a Markov process that attains ``forbidden states'' as left limits in its path from which it jumps away, in the manner of the resampling and non-resampling $k$-tree evolutions at degeneration times. More precisely, we consider a strong Markov process $(X^\bullet(t),t\ge0)$ constructed as follows.
\newcommand{\widetilde{\Lambda}}{\widetilde{\Lambda}}
Let $(\widebar{\mathbb{X}},d_\mathbb{X})$ be a metric space, $\mathbb{X}\subset\widebar{\mathbb{X}}$ a Borel subset, $\partial\not\in\widebar{\mathbb{X}}$ a cemetery state, and set $\mathbb{X}_\partial:=\mathbb{X}\cup\{\partial\}$, extending the topology of $\mathbb{X}$ so that $\partial$ is isolated in $\mathbb{X}_\partial$. We denote the Borel sigma algebra on $\mathbb{X}$ by $\mathcal{X}$. Consider an $\mathbb{X}_\partial$-valued Borel right Markov process $(X^\circ(t),t\ge 0)$ with transition kernels $(P_t^\circ,t\ge 0)$. Suppose that $(X^\circ(t),t\ge 0)$ has left limits in $\widebar{\mathbb{X}}$ and is absorbed in $\partial$ the first time a left limit is in $\widebar{\mathbb{X}}\setminus\mathbb{X}$, or by an earlier jump to $\partial$ from within $\mathbb{X}$. We denote this absorption time by $\zeta$ and refer to $(X^\circ(t),t\ge 0)$ as the \em killed Markov process\em. We use the standard setup where our basic probability space supports a family $(\mathbb{P}_x,x\in\mathbb{X})$ of probability measures under which $X^\circ$ has initial state $X^\circ(0)=x$. Suppose for simplicity that $\mathbb{P}_x(\zeta<\infty)=1$ for all $x\in\mathbb{X}$.
Let $\kappa\colon\widebar{\mathbb{X}}\times\mathcal{X}\rightarrow[0,1]$ be a stochastic kernel. We use $\kappa$ as a \em regeneration kernel \em by sampling $X^\bullet(\zeta)$ from $\kappa(X^\circ(\zeta-),\,\cdot\,)$ and continuing according to the killed Markov process starting from $X^\bullet(\zeta)$. More formally, let $X^\bullet(0)=x\in\mathbb{X}$ and $S_0=0$. Inductively, given $(X^\bullet(t),0\le t\le S_n)$ for any $n\ge 0$, let $(X^\circ_n(t),t\ge 0)$ be a killed Markov process starting from $X^\bullet(S_n)$ with absorption time $\zeta_n$, set $X^\bullet(S_n+t)=X^\circ_n(t)$, $0\le t<\zeta_n$, and $S_{n+1}=S_n+\zeta_n$, and then sample $X^\bullet(S_{n+1})$ from the regeneration kernel $\kappa(X^\bullet(S_{n+1}-),\,\cdot\,)$. Finally, set $X^\bullet(t)=\partial$ for $t\ge S_\infty:=\lim_{n\rightarrow\infty}S_n$.
Then $\widetilde{P}((x,s),\,\cdot\,)=\mathbb{P}_x((X^\bullet(\zeta),s+\zeta)\in\,\cdot\,)$ is clearly a Markov transition kernel. Meyer \cite{Mey75} showed that $(X^\bullet(t),t\ge 0)$ is a (Borel right) Markov process, and we denote its transition kernels by $(P_t^\bullet,t\ge 0)$.
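The recursive construction of $X^\bullet$ from killed excursions and the regeneration kernel $\kappa$ is algorithmic, and a toy simulation may help fix ideas. In the sketch below (all names hypothetical; a downward-biased random walk absorbed at $0$ stands in for the killed process $X^\circ$, and a simple restart rule for $\kappa$; this is emphatically not the tree-valued setting of the present paper), killed excursions are concatenated at the times $S_n$:

```python
import random

def run_killed(x, rng):
    """Toy stand-in for the killed process X^circ: a downward-biased
    random walk started at x > 0, absorbed the first time it hits 0
    (the bias ensures a.s. finite absorption, cf. the BESQ(-1) drift).
    Returns (left-limit state at absorption, lifetime zeta)."""
    state, zeta = x, 0
    while state > 0:
        state += rng.choice([-1, -1, 1])  # biased toward absorption
        zeta += 1
    return state, zeta  # state plays the role of X^circ(zeta-)

def kappa(left_limit, rng):
    """Toy regeneration kernel: restart at a random positive state.
    A real kernel may depend arbitrarily on the pre-absorption limit."""
    return left_limit + rng.randint(1, 3)

def run_regenerated(x, n_excursions, rng):
    """Concatenate killed excursions via kappa, mirroring the recursive
    construction of X^bullet: S_{n+1} = S_n + zeta_n, and
    X^bullet(S_{n+1}) is sampled from kappa(X^bullet(S_{n+1}-), .)."""
    regen_times, regen_states = [0], [x]
    for _ in range(n_excursions):
        left_limit, zeta = run_killed(regen_states[-1], rng)
        regen_times.append(regen_times[-1] + zeta)   # S_{n+1}
        regen_states.append(kappa(left_limit, rng))  # X^bullet(S_{n+1})
    return regen_times, regen_states
```

The returned sequences record the regeneration times $S_n$ and the regenerated states $X^\bullet(S_n)$; by Meyer's result cited above, the concatenated process is again a Borel right Markov process.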
We recall Definition \ref{def:intertwining} of intertwining. Consider a measurable map $\phi\colon\widebar{\mathbb{X}}\rightarrow\widebar{\mathbb{Y}}$ to another metric space $(\widebar{\mathbb{Y}},d_\mathbb{Y})$ with $\partial\notin\widebar{\mathbb{Y}}$. We extend this map, defining $\phi(\partial) = \partial$ and set $\mathbb{Y}_\partial:=\phi(\mathbb{X})\cup\{\partial\}$. Let $\Phi(x,\cdot)=\delta_{\phi(x)}$ denote the trivial kernel associated with $\phi$. Let $\Lambda$ denote a stochastic kernel from $\mathbb{Y}$ to $\mathbb{X}$ such that $\Lambda \Phi$ is the identity.
We define $Y^\bullet(t) := \phi(X^\bullet(t))$ and $Y^\circ(t) := \phi(X^\circ(t))$, $t\ge0$. We set $Q^\circ_t := \Lambda P^\circ_t \Phi$, $t\ge0$, $Q^\bullet_t:=\Lambda P^\bullet_t\Phi$ and $\widetilde Q := \widetilde\Lambda\widetilde P\widetilde\Phi$, where \[
\widetilde{\Lambda}((y,s), dx\,dt)= \Lambda(y, dx)\,\delta_s(dt) \quad \text{and}\quad \widetilde{\Phi}((x,t), dy\,ds) = \delta_{(\phi(x),t)}(dy\,ds). \]
Following \cite{RogersPitman}, the criteria for a discrete-time process to be intertwined below a Markov \emph{chain} are the same as in Definition \ref{def:intertwining}, but with single-step transition kernels in place of $P_t$ and $Q_t$ in \ref{item:intertwining}. This definition is sufficient for the same conclusion as in the continuous setting: if the processes additionally satisfy criterion \ref{item:intertwin:init} noted after Definition \ref{def:intertwining}, then the image process is also Markovian.
\begin{lemma}\label{lem:intertwin_jump}
Suppose the pair of triplets of stochastic kernels $(P_t^\circ,Q_t^\circ,\Lambda)$ and $(\widetilde{P},\widetilde{Q},\widetilde{\Lambda})$
satisfy the intertwining conditions
\begin{equation}\label{eq:assumed_intertwin}
\Lambda P_t^\circ = Q_t^\circ\Lambda \quad \text{and}\quad \widetilde\Lambda \widetilde P = \widetilde Q\widetilde\Lambda
\end{equation}
for all $t\ge0$. Then $(P^\bullet_t,Q^\bullet_t,\Lambda)$ also satisfies the intertwining condition $\Lambda P_t^\bullet=Q_t^\bullet\Lambda$ for all
$t\ge0$.
\end{lemma}
\begin{proof}
Let $f\colon \mathbb{X}_\partial\rightarrow [0, \infty)$ be bounded and measurable such that $f(\partial)=0$ and fix $t>0$. Now, for any $y \in \mathbb{Y}$,
\begin{align*}
\int_{\mathbb{X}} \Lambda(y,dx) & \int_{\mathbb{X}_\partial} P_t^\bullet (x,dz) f(z)= \int_{\mathbb{X}} \Lambda(y, dx) \mathbb{E}_x\left[f\left( X_t^\bullet\right)\right]\\
&= \sum_{k=0}^\infty \int_{\mathbb{X}} \Lambda(y, dx) \mathbb{E}_x \left[ f\left( X_t^\bullet\right) \mathbf{1}\{ S_k \le t < S_{k+1}\} \right]\\
&= \sum_{k=0}^\infty \int_{\mathbb{X}} \Lambda(y, dx) \mathbb{E}_x \left[ \mathbf{1}\{ S_k < t\} \mathbb{E}_{X^\bullet(S_k)}\left( f\left( X_{t-S_k}^\bullet\right) \mathbf{1}\{ t - S_k < \zeta\} \right) \right].
\end{align*}
In the last line, we have used the strong Markov property for $X^\bullet$ at the stopping times $(S_k)$.
Now, we write the above expectations in terms of the Markov chain $\big( \big(X^\bullet(S_n), S_n\big) \big)$, on a probability space where, under
$\mathbb{P}_{x,s}$ this Markov chain starts from $(x,s)$. Define a function $h\colon\mathbb{X}\times [0, \infty) \rightarrow \mathbb{R}$ by
\[
\begin{split}
h(x,s)&= \mathbf{1}\{ s < t\} \mathbb{E}_{x} \left( f\left( X_{t-s}^\bullet\right) \mathbf{1}\{ t - s < S_1\} \right)
= \mathbf{1}\{ s < t\} \mathbb{E}_{x} \left( f\left( X_{t-s}^\circ\right)\right).
\end{split}
\]
Applying \eqref{eq:assumed_intertwin} twice, first for the chain and then for the killed process, we get
\begin{align*}
\int_{\mathbb{X}} & \Lambda(y,dx) \int_{\mathbb{X}_\partial} P_t^\bullet (x,dz) f(z)
= \sum_{k=0}^\infty \int_{\mathbb{X}\times [0, \infty)} \widetilde{\Lambda}((y,0), dx ds) \mathbb{E}_{x,s}\left[ h(X^\bullet(S_k), S_k) \right]
\\
&= \sum_{k=0}^\infty \int_{\mathbb{X}\times [0, \infty)} \widetilde{\Lambda}((y,0), dx ds) \int_{\mathbb{X}_\partial\times [0, \infty)}\widetilde{P}^k \left( (x,s), dz du\right) h(z,u)\\
&= \sum_{k=0}^\infty \int_{\mathbb{Y}\times [0, \infty)} \widetilde{Q}^k((y,0),dw ds) \int_{\mathbb{X}_\partial \times [0, \infty)} \widetilde{\Lambda}\left( (w,s), dzdu \right) h(z,u)\\
&= \sum_{k=0}^\infty \int_{\mathbb{Y}\times [0, \infty)} \widetilde{Q}^k((y,0),dw ds) \mathbf{1}\{s < t \} \int_{\mathbb{X}_\partial} \Lambda\left( w, dz \right) \mathbb{E}_{z} \left[ f\left( X_{t-s}^\circ\right) \right] \\
&=\sum_{k=0}^\infty \int_{\mathbb{Y}\times [0, \infty)} \widetilde{Q}^k((y,0),dw ds) \mathbf{1}\{s < t \} \int_{\mathbb{X}_\partial} \Lambda\left( w, dz \right) \int_{\mathbb{X}_\partial} P^\circ_{t-s} (z, dr) f(r) \\
&= \sum_{k=0}^\infty \int_{\mathbb{Y} \times [0, \infty)} \widetilde{Q}^k((y,0),dw ds) \mathbf{1}\{s < t \} \int_{\mathbb{Y}_\partial} Q_{t-s}^\circ(w,dv) \int_{\mathbb{X}_\partial} \Lambda(v,dr) f(r).
\end{align*}
We now claim that the last line in the above display is exactly
\[
\int_{\mathbb{Y}_\partial} Q_t^\bullet(y,dw)\int_{\mathbb{X}_\partial} \Lambda(w,dx) f(x),
\]
which will prove the statement of the lemma. To see this let $F:\mathbb{X}_\partial\rightarrow [0, \infty)$ be given by
\[
F(x)= \int_{\mathbb{X}_\partial} \Lambda(\phi(x), dz) f(z).
\]
Then, by definition of $Q_t^\bullet$ and a very similar calculation as before, but replacing $f$ by $F$, we get
\begin{align*}
\int_{\mathbb{Y}_\partial} &Q_t^\bullet(y,dw)\int_{\mathbb{X}_\partial} \Lambda(w,dx) f(x)= \int_{\mathbb{X}_\partial} \Lambda(y,dx) \int_{\mathbb{X}_\partial} P^\bullet_t(x,dz) F(z)\\
&= \int_\mathbb{X} \Lambda(y,dx) \sum_{k=0}^\infty \mathbb{E}_x\left[ F\left( X^\bullet_t \right) \mathbf{1}\{ S_k \le t < S_{k+1}\}\right]\\
&= \sum_{k=0}^\infty \int_{\mathbb{X}\times [0, \infty)} \widetilde{\Lambda}\left((y,0), dxdu \right) \int_{{\mathbb{X}_\partial}\times[0, \infty)}\widetilde{P}^k((x,u), dzds)
\mathbf{1}\{s<t\}\int_{\mathbb{X}_\partial}P^\circ_{t-s}(z,dr)F(r)\\
&=\sum_{k=0}^\infty \int_{\mathbb{Y}_\partial\times [0, \infty)} \widetilde{Q}^k\left( (y,0), dwds\right) \mathbf{1}\{ s < t \} \int_{\mathbb{X}_\partial} \Lambda(w,dz) \int_{\mathbb{X}_\partial} P_{t-s}^\circ(z,dr) F(r)\\
&= \sum_{k=0}^\infty \int_{\mathbb{Y}_\partial\times [0, \infty)} \widetilde{Q}^k\left( (y,0), dwds\right) \mathbf{1}\{ s < t \} \int_{\mathbb{X}_\partial} \Lambda(w,dz) \int_{\mathbb{X}_\partial} P_{t-s}^\circ(z,dr) \int_{\mathbb{X}_\partial} \Lambda(\phi(r), du) f(u)\\
&=\sum_{k=0}^\infty \int_{\mathbb{Y}_\partial\times [0, \infty)} \widetilde{Q}^k\left( (y,0), dwds\right) \mathbf{1}\{ s < t \} \int_{\mathbb{Y}_\partial} Q_{t-s}^\circ(w,dv) \int_{\mathbb{X}_\partial} \Lambda(v,du) f(u),
\end{align*}
where the final line applies the definition of $Q_t^\circ$. This proves the result. \end{proof}
In fact, this proof does not require that $X^\circ$ have c\`adl\`ag paths, but only that we can make some sense of taking a left limit at the absorption time $\zeta$, so that we can carry out the construction.
We require one additional property related to intertwining. For the following, suppose that $(X(t),t\ge0)$ and $(Y(t),t\ge0)$ are strong Markov processes on respective metric spaces $(\mathbb{X},d_\mathbb{X})$ and $(\mathbb{Y},d_\mathbb{Y})$, and that they are intertwined via the measurable map $\phi\colon \mathbb{X}\rightarrow\mathbb{Y}$ and the kernel $\Lambda\colon \mathbb{Y}\times\mathcal{X}\rightarrow [0,1]$, where $\mathcal{X}$ is the Borel $\sigma$-algebra associated with $d_\mathbb{X}$. Let $(\mathcal{F}^Y(t),t\ge0)$ denote the filtration generated by $(Y(t),t\ge0)$. Rogers and Pitman \cite{RogersPitman} note that, if two processes satisfy criteria (i)--(iii) of Definition \ref{def:intertwining} of intertwining, then they additionally satisfy the following strong form of criterion \ref{item:intertwining_v2}: \begin{equation}\label{eq:inter:strongish}
\mathbb{P}\big\{X(t)\in A\ \big|\ \mathcal{F}^Y(t)\big\} = \Lambda(Y(t),A) \quad \text{for }t\ge0,\ A\in\mathcal{X}. \end{equation}
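For orientation, integrating \eqref{eq:inter:strongish} against the law of $Y(t)$ recovers the one-dimensional distributions of $X$ from those of $Y$. This is a standard observation, stated here under the additional assumption that the processes are started from an intertwined initial condition, i.e.\ $X(0)$ has conditional law $\Lambda(Y(0),\cdot\,)$:

```latex
\[
\mathbb{P}\{X(t)\in A\}
= \mathbb{E}\big[\Lambda(Y(t),A)\big]
= \int_{\mathbb{Y}} \mathbb{P}\{Y(t)\in dy\}\,\Lambda(y,A),
\qquad t\ge0,\ A\in\mathcal{X}.
\]
```

The next lemma extends \eqref{eq:inter:strongish} from fixed times to stopping times.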
\begin{lemma}\label{lem:intertwin_strong}
Suppose that $\phi$ is continuous, $\Lambda(y,\cdot\,)$ is weakly continuous in $y\in\mathbb{Y}$, and $X$ and $Y$ have c\`adl\`ag sample paths. Then for any stopping time $\tau$ in $(\mathcal{F}^Y(t),t\ge0)$, equation \eqref{eq:inter:strongish} is satisfied with $\tau$ in place of $t$. \end{lemma}
\begin{proof}
First, we prove the result for a discrete stopping time. Suppose that $\mathcal{T}:=\{ t_1, t_2, \ldots\}$ is a countable discrete set such that $\tau\in \mathcal{T}$ a.s. Fix an arbitrary $G \in \mathcal{F}^Y(\tau)$. Then
\[
\mathbb{P}\left( \{ X(\tau) \in A\} \cap G \right)= \sum_{i=1}^\infty \mathbb{P}\left( \{ X(t_i) \in A\} \cap G \cap \{ \tau =t_i\} \right).
\]
By definition, $G \cap \{ \tau =t_i\} \in \mathcal{F}^Y(t_i)$. Hence, by applying \eqref{eq:inter:strongish} at each $t_i$, we get
\[
\begin{split}
\mathbb{P}\left( \{ X(\tau) \in A\} \cap G \right)&= \sum_{i=1}^\infty \mathbb{E}\left[ \mathbf{1}_{G \cap \{ \tau =t_i\}} \mathbb{P}\left(X(t_i)\in A\mid \mathcal{F}^Y(t_i)\right) \right]\\
&= \sum_{i=1}^\infty \mathbb{E}\left[ \mathbf{1}_{G \cap \{ \tau =t_i\}} \Lambda\left(Y(t_i), A\right) \right]= \mathbb{E}\left[ \mathbf{1}_G \Lambda\left(Y(\tau), A \right)\right].
\end{split}
\]
Since $\Lambda\left(Y(\tau), \cdot\, \right)$ is measurable with respect to $\mathcal{F}^Y(\tau)$, the claim follows in this case.
For an arbitrary stopping time $\tau$, there exists a sequence of discrete stopping times $\tau_k$, $k \in \mathbb{N}$, such that $\tau_k \downarrow \tau$, almost surely. In particular $\mathcal{F}^Y(\tau) \subseteq \mathcal{F}^Y(\tau_k)$, for all $k \in \mathbb{N}$. Hence, if $G \in \mathcal{F}^Y(\tau)$, then by the above paragraph, for each $k\in \mathbb{N}$, and for a continuous function $f\colon \mathbb{X}\rightarrow [0,\infty)$,
\[
\mathbb{E}\left[ f(X(\tau_k))\mathbf{1}_G \right] = \mathbb{E}\left[ \mathbf{1}_G\int_\mathbb{X} \Lambda\left(Y(\tau_k),dx\right) f(x)\right].
\]
We take the limit as $k\rightarrow\infty$ above and appeal to the right-continuity of $X$ and $Y$ and continuity of $\Lambda(y, \cdot\,)$ in $y$ to conclude that $\mathbb{E}\left[ f(X(\tau))\mathbf{1}_G \right] = \mathbb{E}\big[ \mathbf{1}_G\int_\mathbb{X} \Lambda\left(Y(\tau),dx\right) f(x)\big]$. \end{proof}
\section{Proof of Lemma \ref{lem:degen_diff}}\label{sec:non_acc_2}
As in the statement of the lemma, fix $k\ge 3$, $\epsilon>0$, $T\in \bT_{k-1}$ with $\|T\|>\epsilon$, and let $(\mathcal{T}^y,y\ge0)$ denote a resampling $k$-tree evolution with initial distribution $\mathcal{T}^0\sim\Lambda_{k,[k-1]}(T,\cdot\,)$. Let $(D_n,n\ge1)$ denote the sequence of all degeneration times of this evolution and $(D^*_n,n\ge1)$ the subsequence of degeneration times at which label $k$ drops and resamples. We prove this lemma in three cases. \begin{enumerate}[label = Case \arabic*:, ref = \arabic*, itemindent=1.82cm, leftmargin=0pt]
\item\label{case:degdif:leaf} $T$ contains a leaf block of mass $x_i > \|T\|/2k$.
\item\label{case:degdif:IP_big} $T$ contains an edge partition $\beta$ of mass at least $\|T\|/2k$, and $\beta$ contains a block of mass at least $\|\beta\|/2k^2$.
\item\label{case:degdif:IP_small} $T$ contains an edge partition $\beta$ of mass at least $\|T\|/2k$, and each block in $\beta$ has mass less than $\|\beta\|/2k^2$. \end{enumerate}
\begin{proof}[Proof of Lemma \ref{lem:degen_diff}, Case \ref{case:degdif:leaf}]
With probability at least $1/2k$, the kernel $\Lambda_{k,[k-1]}(T,\cdot\,)$ inserts label $k$ into the large leaf block $i$, splitting it into a Brownian reduced 2-tree $(x_i,x_k,\beta_{\{i,k\}})$. This type-2 compound $(\mathcal{U}^y,y\in [0,D_1))$ will then evolve in pseudo-stationarity, as in Proposition \ref{prop:012:pseudo}, until the first degeneration time $D_1$ of $(\mathcal{T}^y,y\ge0)$.
Let $A_1$ denote the event that $\mathcal{U}^{D_1-}$ is not degenerate, i.e.\ some other compound degenerates at time $D_1$. On $A_1$, some outside label may swap places with $i$ and cause $i$ to resample. However, as noted in the discussion of cases \ref{case:degen:type2}, \ref{case:degen:self}, and \ref{case:degen:nephew} in Section \ref{sec:const:intertwin}, no label will swap places with label $k$ at time $D_1$ on the event $A_1$. Let $\mathcal{R}_1$ denote the subtree of $\varrho(\mathcal{T}^{D_1-})$ corresponding to $\mathcal{U}^{D_1-}$. This equals $\mathcal{U}^{D_1-}$ if no label swaps with $i$. By Proposition \ref{prop:012:pseudo} and exchangeability of labels in Brownian reduced 2-trees, on the event $A_1$ the tree $\mathcal{R}_1$ is a Brownian reduced 2-tree.
Let $B_1$ denote the event that the label dropped at $D_1$ resamples into a block of $\mathcal{R}_1$. Let $\mathcal{U}^{D_1}$ denote the resulting subtree after resampling. On the event $A_1\cap B_1^c$, $\mathcal{U}^{D_1} = \mathcal{R}_1$ is again a Brownian reduced 2-tree, by definition of the resampling $k$-tree evolution. On the event $A_1\cap B_1$, the tree $\mathcal{U}^{D_1}$ is a Brownian reduced 3-tree, by \eqref{eq:B_ktree_resamp} and the exchangeability of labels.
We extend this construction inductively. Suppose that on the event $\bigcap_{m=1}^n A_m$, the tree $\mathcal{U}^{D_n}$ is a Brownian reduced $M$-tree, for some (random) $M$. We define $(\mathcal{U}^y,y\in [D_n,D_{n+1}))$ to be the $M$-tree evolution in this subtree during this time interval. Let $A_{n+1}$ denote the event that $\mathcal{U}^{D_{n+1}-}$ is non-degenerate, $\mathcal{R}_{n+1}$ the corresponding subtree in $\varrho(\mathcal{T}^{D_{n+1}-})$, $B_{n+1}$ the event that the dropped label resamples into a block in $\mathcal{R}_{n+1}$, and $\mathcal{U}^{D_{n+1}}$ the corresponding subtree in $\mathcal{T}^{D_{n+1}}$. Then on $\bigcap_{m=1}^{n+1} A_m$ the tree $\mathcal{R}_{n+1}$ is again a Brownian reduced $M$-tree, by the same arguments as above, with Proposition \ref{prop:pseudo:pre_D} in place of Proposition \ref{prop:012:pseudo}. On $B_{n+1}^c\cap \bigcap_{m=1}^{n+1} A_m$, the tree $\mathcal{U}^{D_{n+1}} = \mathcal{R}_{n+1}$ is a Brownian reduced $M$-tree, and on $B_{n+1}\cap \bigcap_{m=1}^{n+1} A_m$ the tree $\mathcal{U}^{D_{n+1}}$ is a Brownian reduced $(M\!+\!1)$-tree.
In this manner, we define $(\mathcal{U}^y,y\in [0,D_N))$ where $D_N$ is the first time that $\mathcal{U}^{y-}$ attains a degenerate state as a left limit. Let $A^y$ denote the label set of $\mathcal{U}^y$ for $y\in [0,D_N)$; by the preceding argument and Proposition \ref{prop:pseudo:pre_D}, $\mathcal{U}^y$ is conditionally a Brownian reduced $(\# A^y)$-tree given $\{y < D_N\}$. Let $(\sigma^y,y\in [0,D_N))$ denote the evolving permutation that composes all label swaps due to the swap-and-reduce map, $\sigma^y = \tau_n\circ\tau_{n-1}\circ\cdots\circ\tau_1$ for $y\in [D_n,D_{n+1})$, where $\tau_m$ is the label swap permutation that occurs at time $D_m$.
We can simplify this account by considering a pseudo-stationary killed $k$-tree evolution $(\mathcal{V}^y,y\in [0,D''))$ coupled so that $\pi_{A^y}\circ\sigma^y(\mathcal{V}^y) = \mathcal{U}^y$, $y\in [0,D'')$, where $D''$ is the degeneration time of $(\mathcal{V}^y)$. Such a coupling is possible due to the consistency result of Proposition \ref{prop:consistency_0} and the exchangeability of labels evident in Definition \ref{def:killed_ktree} of killed $k$-tree evolutions. Note that, in particular, $D''$ precedes the first time at which a label in $(\mathcal{U}^y)$ degenerates. Moreover, following the discussion of cases \ref{case:degen:type2}, \ref{case:degen:self}, and \ref{case:degen:nephew} in Section \ref{sec:const:intertwin}, label $k$ cannot be dropped in degeneration until a label within $(\mathcal{U}^y)$ degenerates.
There is some $\delta>0$ sufficiently small so that a pseudo-stationary $k$-tree evolution with initial mass $\epsilon/2k$ will avoid degenerating prior to time $\delta$ with probability at least $2k\delta$. By the scaling property of Lemma \ref{lem:scaling}, this same $\delta$ bound holds for pseudo-stationary $k$-tree evolutions with greater initial mass. Applying this bound to $(\mathcal{V}^y,y\in [0,D''))$ proves the lemma in this case. \end{proof}
\begin{proof}[Proof of Lemma \ref{lem:degen_diff}, Case \ref{case:degdif:IP_big}]
In this case, with probability at least $1/4k^3$, label $k$ is inserted into a ``large'' block in $\beta$ of mass at least $\|\beta\|/2k^2$. If another label resamples into this same block prior to time $D^*_1$, then we are in the regime of Case \ref{case:degdif:leaf}, and the same argument applies, albeit with smaller initial mass proportion. However, if no other label resamples into this block then, although it is unlikely for this block to vanish quickly, it is possible for label $k$ to be dropped in degeneration if a label that is a nephew of $k$ causes degeneration (case \ref{case:degen:nephew} in Section \ref{sec:const:intertwin}). In this latter case, however, that label swaps into the block in which label $k$ was sitting. Then, label $k$ resamples and may jump back into this large block with probability bounded away from zero. This, again, puts us in the regime of Case \ref{case:degdif:leaf}. In this case, $D^*_1$ may be small with high probability, but not $D^*_2$.
More formally, a version of the argument for Case \ref{case:degdif:leaf} yields $\delta>0$ for which, with probability at least $\delta$: (i) the kernel $\Lambda_{k,[k-1]}(T,\cdot\,)$ inserts label $k$ into a block in $\beta$ with mass at least $\|\beta\|/2k^2$; (ii) this block, or a subtree created within this block, survives to time $\delta$ with its mass staying above $\|\beta\|/3k^2$; (iii) the total mass stays below $2\|T\|$; and either (iv) $D^*_1 > \delta$, or (v) $D^*_{1} \le \delta$ but at time $D^*_{1}$ label $k$ resamples back into this same block, which holds only a single other label at that time, and then (vi) $D^*_{2} - D^*_{1} > \delta$. \end{proof}
To prove Case \ref{case:degdif:IP_small}, we require two lemmas, one of which recalls additional properties of type-0/1/2 evolutions from \cite{Paper1,Paper3}.
\begin{lemma}\label{lem:012:clade_ish}
Fix $(x_1,x_2,\beta)\in [0,\infty)^2\times\mathcal{I}$ with $x_1+x_2>0$. There exist a type-0 evolution $(\alpha_0^y,y\ge0)$, a type-1 evolution $((m^y,\alpha_1^y),y\ge0)$, and a type-2 evolution $((m_1^y,m_2^y,\alpha_2^y),y\ge0)$ with respective initial states $\beta$, $(x_1,\beta)$, and $(x_1,x_2,\beta)$, coupled in such a way that for every $y$, there exists an injective, left-to-right order-preserving and mass-preserving map sending the blocks of $\alpha_2^y$ to blocks of $\alpha_1^y$, and a map with these same properties sending the blocks of $\alpha_1^y$ to blocks of $\alpha_0^y$. \end{lemma}
These assertions are immediate from the pathwise constructions of type-0/1/2 evolutions in \cite[Definitions 3.21, 5.14]{Paper1} and \cite[Definition 17]{Paper3}. They can alternatively be derived as consequences of Definitions \ref{def:type01} and \ref{def:type2} and the transition kernels described in Proposition \ref{prop:012:transn}.
\begin{lemma}\label{lem:type2:smallblocks}
Fix $c\in (0,1/2)$ and $x>0$. Consider $u_1,u_2\ge0$ with $u_1+u_2>0$ and $\beta\in\mathcal{I}$ with $\|\beta\|>x$ and none of its blocks having mass greater than $c\|\beta\|$. For every $\epsilon>0$ there exists some $\delta = \delta(x,c) > 0$ that does not depend on $(u_1,u_2,\beta)$ such that with probability at least $1-\epsilon$, a type-2 evolution with initial state $(u_1,u_2,\beta)$ avoids degenerating prior to time $\delta$. \end{lemma}
\begin{proof}
Fix a block $(a,b)\in\beta$ with $a\in [c\|\beta\|,2c\|\beta\|]$ and let
\begin{equation*}
\beta_0 := \{(a',b')\in\beta\colon a'<a\},\quad \beta_1 := \{(a'-b,b'-b)\colon (a',b')\in\beta, a'\ge b\}
\end{equation*}
so that $\beta = \beta_0\star (0,b-a)\star\beta_1$. We follow Proposition \ref{prop:012:concat}\ref{item:012concat:2+1}, in which a type-2 evolution is formed by concatenating a type-2 with a type-1. In particular, let $\wh\Gamma^y := \big( \wh m_1^y, \wh m_2^y,\wh \alpha^y\big)$ and $\widetilde\Gamma^y := (\widetilde m^y,\widetilde\alpha^y)$, $y\ge0$, denote a type-2 and a type-1 evolution with respective initial states $(u_1,u_2,\beta_0)$ and $(b-a,\beta_1)$. Let $\wh D$ denote the degeneration time of $(\wh\Gamma^y,y\ge0)$ and let $\wh Z$ denote the time at which $\|\wh\Gamma^y\|$ hits zero. Let $I$ equal 1 if $\wh m_1^{\wh D}>0$ or 2 if $\wh m_2^{\wh D}>0$, and set $(X_I,X_{3-I}) := \big(\wh m_I^{\wh D},\widetilde m^{\wh D}\big)$. Finally, let $\big(\widebar m_1^y,\widebar m_2^y,\widebar\alpha^y\big)$, $y\ge 0$, denote a type-2 evolution with initial state $(X_1,X_2,\widetilde\alpha^{\widehat D})$, conditionally independent of $\big( (\wh\Gamma^y,\widetilde\Gamma^y), y\in [0,\widehat D] \big)$ given this initial state, but coupled to have $\widebar m_I^y = \wh m_I^{\widehat D + y}$ for $y\in [0,\widehat Z-\widehat D]$. By Proposition \ref{prop:012:concat}\ref{item:012concat:2+1}, the following is a type-2 evolution:
\begin{equation}\label{eq:SB:concat_1}
\left\{\begin{array}{ll}
(\wh m_1^y,\;\wh m_2^y,\;\wh\alpha^y\star(0,\widetilde m^y)\star\widetilde\alpha^y) & \text{for }y\in [0,\wh D),\\
(\widebar m_1^{y-\wh D},\;\widebar m_2^{y-\wh D},\;\widebar\alpha^{y-\wh D}) & \text{for }y\ge \wh D.
\end{array}\right.
\end{equation}
Moreover, by the Markov property of type-1 evolutions, Definition \ref{def:type2} of type-2 evolutions, and the symmetry noted in Lemma \ref{lem:type2:symm}, the following is a stopped type-1 evolution:
\begin{equation}\label{eq:SB:concat_2}
\left\{\begin{array}{ll}
(\widetilde m^y,\widetilde\alpha^y) & \text{for }y\in [0,\wh D),\\
(\widebar m_{3-I}^{y-\wh D},\widebar\alpha^{y-\wh D}) & \text{for }y\in [\wh D,\wh Z].
\end{array}\right.
\end{equation}
Let $\delta>0$ be sufficiently small so that, with probability at least $\sqrt{1-\epsilon}$, a $\distribfont{BESQ}_{cx}(-1)$ avoids hitting zero prior to time $\delta$, and likewise for a $\distribfont{BESQ}_{(1-2c)x}(0)$. Then with probability at least $1 - \epsilon = (\sqrt{1-\epsilon})^2$, both $(\|\wh\Gamma^y\|,y\ge0)$ and the $\distribfont{BESQ}(0)$ total mass of the type-1 evolution of \eqref{eq:SB:concat_2} avoid hitting zero prior to time $\delta$. On this event, the type-2 evolution of \eqref{eq:SB:concat_1} does not degenerate prior to time $\delta$. \end{proof}
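The choice of $\delta$ above rests on the fact that a $\distribfont{BESQ}(-1)$ started from positive mass avoids zero for a short time with high probability. The following Monte Carlo sketch illustrates this numerically; it is not part of the proof. The Euler scheme, function names, and parameter choices are our own illustrative assumptions, and the scheme's discretization error is ignored.

```python
import math
import random

def besq_hitting_times(z0, dim, horizon, dt=1e-3, n_paths=200, seed=1):
    """Euler scheme for dZ = dim*dt + 2*sqrt(Z)*dW (a BESQ(dim) process).
    Returns, for each simulated path, the first time Z hits 0, or math.inf
    if the path survives to the horizon. Illustrative only."""
    rng = random.Random(seed)
    times = []
    steps = int(horizon / dt)
    for _ in range(n_paths):
        z, hit = z0, math.inf
        for k in range(steps):
            # Euler increment: drift dim*dt plus diffusion 2*sqrt(Z)*dW.
            z += dim * dt + 2.0 * math.sqrt(max(z, 0.0)) * rng.gauss(0.0, math.sqrt(dt))
            if z <= 0.0:
                hit = (k + 1) * dt
                break
        times.append(hit)
    return times

def survival_prob(times, delta):
    """Fraction of paths whose hitting time of zero exceeds delta."""
    return sum(1 for t in times if t > delta) / len(times)
```

By construction, `survival_prob` is nonincreasing in `delta`, mirroring the fact that the bound in the proof only improves as $\delta$ shrinks.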
\begin{proof}[Proof of Lemma \ref{lem:degen_diff}, Case \ref{case:degdif:IP_small}]
Informally, we proved that degenerations of $k$ may take a long time in Cases \ref{case:degdif:leaf} and \ref{case:degdif:IP_big} by controlling
the degeneration times of pseudo-stationary structures inserted into large blocks in repeated resampling events. In Case
\ref{case:degdif:IP_small}, there are no large blocks, and indeed large blocks may never form. Instead, there must be a large interval partition,
which we can cut rather evenly into $2k-1$ sub-partitions. We will control the degeneration of evolving sub-partitions and the probability
that insertions are into distinct non-adjacent internal sub-partitions.
Specifically, let us follow the notation introduced at the start of this appendix. We can decompose $\beta=\beta_1\star\cdots\star\beta_{2k-1}$
into sub-partitions $\beta_i$ with $\|\beta_1\|\ge \|\beta\|/k$ and $\|\beta\|/4k\le\|\beta_i\|\le\|\beta\|/2k$ for $2\le i\le 2k-1$, since no
block exceeds mass $\|\beta\|/2k^2\le\|\beta\|/4k$.
With probability at least $1/8k^2$, the kernel $\Lambda_{k,[k-1]}(T,\cdot\,)$ inserts label $k$ into $\beta_2$, splitting
$\beta_2=\beta_2^-\star(0,x_k)\star\beta_2^+$. Then label $k$ is in the type-1 compound
$\mathcal{U}^0=(x_k,\beta_2^+\star\beta_3\star\ldots\star\beta_{2k-1})$, while $\beta_1\star\beta_2^-$ is the interval partition of the
compound associated with the sibling edge of $k$. Consider the concatenation $\mathcal{V}^y:=\mathcal{V}^y_3\star\mathcal{V}^y_4\star\cdots\star\mathcal{V}^y_{2k-1}$
of type-1 evolutions $(\mathcal{V}^y_i,y\ge 0)$, $3\le i\le 2k-1$, starting respectively from $\beta_2^+\star\beta_3,\beta_4,\ldots,\beta_{2k-1}$, for
times $y$ up to the first time $D_\mathcal{V}$ that one of them reaches half or double its initial mass. We denote by $D_\mathcal{W}$ the degeneration time of
the sibling edge of $k$ as part of this resampling $k$-tree evolution. If this sibling edge of $k$ is a type-2 edge,
denote by $u_1$ and $u_2$ its top masses and consider a type-2 evolution $(\mathcal{W}^y,y\ge 0)$ starting from $(u_1,u_2,\beta_1\star\beta_2^-)$, with
degeneration time $D_\mathcal{W}$.
Otherwise, this edge has three or more labels, so one or both children of this edge have more than one label. For each of these children, we
choose as $u_1$ or $u_2$, respectively, the top mass of its smallest label. Then the degeneration time of a type-2 evolution
$(\widetilde{\mathcal{W}}^y,y\ge 0)$ starting from $(u_1,u_2,\beta_1\star\beta_2^-)$ is stochastically dominated by the time $D_\mathcal{W}$ at which the sibling edge of $k$ degenerates. We denote by $D_\mathcal{T}$ the first time that $\|\mathcal{T}^y\|$ reaches half or double its initial mass.
Since $\|\beta_1\star\beta_2^-\|\ge\|\beta_1\|\ge\|\beta\|/k\ge\|T\|/2k^2\ge\epsilon/2k^2$, Lemma \ref{lem:type2:smallblocks} yields
$\delta_\mathcal{W}=\delta(\epsilon/2k^2,1/2k)>0$ such that $\mathbb{P}(D_\mathcal{W}>\delta_\mathcal{W})\ge 1-1/(32k^2)^k$.
Let $\delta_\mathcal{T}>0$ be such that a $\distribfont{BESQ}_{\epsilon}(-1)$ stays in $(\epsilon/2,2\epsilon)$ up to time $\delta_\mathcal{T}$ with probability at least
$1-1/(32k^2)^k$. Then $\mathbb{P}(D_\mathcal{T}>\delta_\mathcal{T})\ge 1-1/(32k^2)^k$. Finally, let $\delta_\mathcal{V}>0$ be such that the probability that
$\distribfont{BESQ}_{\epsilon/8k^2}(0)$ does not exit $(\epsilon/16k^2,\epsilon/4k^2)$ before time $\delta_\mathcal{V}$ exceeds $(1/2)^{1/2k}$. Since the $2k-3$
independent type-1 evolutions are starting from greater initial mass, we obtain from Proposition \ref{prop:012:mass} and the scaling property of
Lemma \ref{lem:scaling} that $\mathbb{P}(D_\mathcal{V}>\delta_\mathcal{V})>1/2$.
We proceed in a way similar to Case \ref{case:degdif:leaf} and inductively construct a subtree evolution $(\mathcal{U}^y,y\in[0,D_\mathcal{U}))$ coupled to $(\mathcal{V}^y,y\in[0,D_\mathcal{V}))$ on
events $A_{n+1}$, $n\ge 0$, on which $D_{n+1}<\min\{D_\mathcal{V},D_\mathcal{W},D_\mathcal{T}\}$ and any resampling of a label at time $D_{n+1}$ when $\mathcal{U}^{D_{n+1}-}$
has $j-1$ labels occurs into a block of $\mathcal{V}^{D_{n+1}}_{2j}$, $j=2,\ldots,k-1$. Given that $D_{n+1}<\min\{D_\mathcal{V},D_\mathcal{W},D_\mathcal{T}\}$, such a block is
chosen by the resampling kernel with (conditional) probability exceeding $\|\beta_{2j}\|/(4\|T\|)\ge 1/32k^2$.
Thus, with probability at least $\delta:=\min\{\delta_\mathcal{V},\delta_\mathcal{W},\delta_\mathcal{T},1/(32k^2)^{k-2}\}$, we have $D_1^*>\delta$ and so $D_2^*>\delta$.
\end{proof}
\end{document}
\begin{document}
\title{Semantics and Proof Theory of the\\ Epsilon Calculus} \titlerunning{Semantics and Proof Theory of the Epsilon Calculus}
\author{\href{http://ucalgary.ca/rzach}{Richard Zach}\thanks{Research supported by the Natural Sciences and Engineering Research Council.}} \institute{Department of Philosophy, University of Calgary, Canada\\
\email{rzach@ucalgary.ca}}
\maketitle
\begin{abstract} The epsilon operator is a term-forming operator which replaces quantifiers in ordinary predicate logic. The application of this undervalued formalism has been hampered by the absence of well-behaved proof systems on the one hand, and accessible presentations of its theory on the other. One significant early result for the original axiomatic proof system for the \ensuremath{\varepsilon}-calculus is the first epsilon theorem, for which a proof is sketched. The system itself is discussed, also relative to possible semantic interpretations. The problems facing the development of proof-theoretically well-behaved systems are outlined. \end{abstract}
\section{Introduction}
A formalism for logical choice operators has long been available in the form of Hilbert's epsilon calculus. The epsilon calculus is one of the first formal systems of first-order predicate logic. It was introduced in 1921 by David Hilbert \cite{Hilbert:22a}, who proposed to use it for the formalization and proof theoretical investigation of mathematical systems. In the epsilon calculus, a term-forming operator \ensuremath{\varepsilon}{} is used, the intuitive meaning of which is an indefinite choice function: $\meps x {A(x)}$ is some $x$ which satisfies $A(x)$ if $A(x)$ is satisfied at all, and arbitrary otherwise. Quantifiers can then be defined, e.g., $(\exists x)A(x)$ as $A(\meps x {A(x)})$.
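Spelled out as displayed definitions (the same clauses reappear in the \ensuremath{\varepsilon}-translation below), both quantifiers are definable from \ensuremath{\varepsilon}:

```latex
\begin{align*}
(\exists x)\,A(x) &\;:\equiv\; A(\meps x {A(x)}),\\
(\forall x)\,A(x) &\;:\equiv\; A(\meps x {\lnot A(x)}).
\end{align*}
```

The universal clause picks a witness for $\lnot A$ if there is one, so $A$ can hold of that term only if $A$ holds of every term.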
The epsilon calculus and proof theoretic methods developed for it, such as the so-called epsilon substitution method, have mainly been applied to the proof theoretic analysis of mathematical systems of arithmetic and analysis (especially in work by Ackermann, Mints, Arai). (See \cite{AvigadZach:2002} for a survey of the epsilon calculus and its history.) Despite its long history and manifold uses, the epsilon calculus as a logical formalism in general is not thoroughly understood, and its potential for applications in logic and other areas, especially linguistics and computer science, has not yet been fully explored.
There are various options for definitions of semantics of the epsilon operator. The choice of $\meps x {A(x)}$ may be extensional (i.e., depend only on the set of $x$ which satisfy $A(x)$; this definition validates the so-called axiom of \ensuremath{\varepsilon}-extensionality), it may be intensional (i.e., depend also on $A(x)$ itself; \ensuremath{\varepsilon}-extensionality fails), and it may be completely indeterministic (i.e., different occurrences of the same \ensuremath{\varepsilon}-term $\meps x {A(x)}$ may select different witnesses for $A(x)$). The first and third versions have been investigated by Blass and Gurevich \cite{BlassGurevich:2000:JSL}. These different semantics result in different expressive power (in particular, over finite models), and are characterized by different formalizations. Below we present the first two versions of the semantics of the \ensuremath{\varepsilon}-calculus and sketch completeness results.
The very beginnings of proof theory in the work of Hilbert and his students consisted in the proof theoretic study of axiom systems for the \ensuremath{\varepsilon}-calculus. Among the most significant results in this connection are the epsilon theorems. The first epsilon theorem plays a role similar to Gentzen's midsequent theorem in the proof theory of the sequent calculus: it yields a version of Herbrand's Theorem. In fact, it was used to give the first correct proof of Herbrand's theorem (Hilbert and Bernays \cite{HB70}). In a simple formulation, the theorem states that if an existential formula $(\exists x)A(x)$ (not containing \ensuremath{\varepsilon}) is derivable in the epsilon calculus, then there are terms $t_1$, \dots, $t_n$ such that the (Herbrand) disjunction $A(t_1) \lor \ldots \lor A(t_n)$ is derivable in propositional logic. The proof gives a constructive procedure that, given a derivation of $(\exists x)A(x)$, produces the corresponding Herbrand disjunction. An analysis of this proof (see \cite{MoserZach:06}) gives a hyper-exponential bound on the length of the Herbrand disjunction in terms of the number of critical formulas occurring in the proof. The bound is essentially optimal, since it is known from work by Orevkov and Statman that the length of Herbrand disjunctions is hyper-exponential in the length of proofs of the original existential formula (this is the basis for familiar speed-up theorems of systems with cut over cut-free systems). In Section~\ref{epsthm} we prove the first epsilon theorem with identity, along the lines of Bernays's proof.
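To illustrate why several terms may be needed, consider the following classically valid formula (a standard illustration of Herbrand's theorem, not taken from the sources cited above):

```latex
\[
(\exists x)\big(P(x) \lif P(f(x))\big).
\]
```

No single instance $P(t) \lif P(f(t))$ is a propositional tautology, but the two-term disjunction $\big(P(c) \lif P(f(c))\big) \lor \big(P(f(c)) \lif P(f(f(c)))\big)$ is: if the first disjunct is false, then $P(f(c))$ is false, so the second disjunct has a false antecedent and is true.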
A general proof theory of the epsilon calculus requires formal systems that are more amenable to proof-theoretic investigations than the Hilbert-type axiomatic systems studied in the Hilbert school. Although some sequent systems for the epsilon calculus exist, it is not clear that they are the best possible formulations, nor have their proof-theoretic properties been investigated in depth. Maehara's~\cite{Maehara:55} and Leisenring's \cite{Leisenring:1969} systems were not cut-free complete. Yasuhara \cite{Yasuhara:82} studied a cut-free complete system, but only gave a semantic cut-elimination proof. Section~\ref{proofth} surveys these and other systems, and highlights some of the difficulties in developing a systematic proof theory on the basis of them. Proof-theoretically suitable formalisms for the \ensuremath{\varepsilon}-calculus are still a desideratum for applications of the epsilon calculus.
The classical \ensuremath{\varepsilon}-calculus is usually investigated as a proof-theoretic formalism, and no systematic study of the model theory of epsilon calculi other than Asser's classic \cite{Asser:1957} exists. However, Abiteboul and Vianu \cite{AbiteboulVianu:91}, Blass and Gurevich \cite{BlassGurevich:2000:JSL}, and Otto \cite{Otto:00} have studied the model theory of choice operators in the context of finite model theory and database query languages. And applications of choice operators to model definite and indefinite noun phrases in computational linguistics (Meyer Viol \cite{MeyerViol:95}, von Heusinger \cite{Heusinger:00,Heusinger:04}) have led to the definition of the indexed epsilon calculus by Mints and Sarenac \cite{MintsSarenac:2003}.
With a view to applications, it is especially important to develop the semantics and proof theory of epsilon operators in non-classical logics. Of particular importance in this context is the development of epsilon calculi for intuitionistic logic, not least because this is the context in which the epsilon calculus can and has been applied in programming language semantics. Some work has been done on intuitionistic $\ensuremath{\varepsilon}$-calculi (e.g., Bell \cite{Bell:93a}, DeVidi \cite{DeVidi:95}, Meyer Viol \cite{MeyerViol:95}, Mints \cite{Mints:77}), but there are still many important open questions. The straightforward extensions of intuitionistic logic by epsilon operators are not conservative and result in intermediate logics related to G\"odel logic. Meyer Viol \cite{MeyerViol:95} has proposed a conservative extension of intuitionistic logic by epsilon operators which warrants further study.
\section{Syntax and Axiomatic Proof Systems}
\begin{defn}
The language of the elementary calculus~$L_\mrm{EC}^=$ contains the
usual logical symbols (variables, function and predicate
symbols, $=$). A subscript $\ensuremath{\varepsilon}$ will indicate the presence of the
symbol~\ensuremath{\varepsilon}, and $\forall$ the presence of the quantifiers $\forall$
and $\exists$. The \emph{terms}~$\mrm{Trm}$ and \emph{formulas}~$\mrm{Frm}$
of~$L_{\eps\forall}$ are defined as usual, but simultaneously, to include: \begin{quote}
If $A$ is a formula in which $x$ has a free occurrence but no bound
occurrence, then $\meps x A$ is a term, and all occurrences of $x$ in
it are bound. \end{quote} If $E$ is an expression (term or formula), then $\mrm{FV}(E)$ is the set of variables which have free occurrences in~$E$. \end{defn}
When $E$, $E'$ are expressions (terms or formulas), we write $E \seq E'$ iff $E$ and $E'$ are syntactically identical up to a renaming of bound variables. We say that a term $t$ is \emph{free for $x$ in $E$} iff $x$ does not occur free in the scope of an \ensuremath{\varepsilon}-operator $\meps y{}$ or quantifier $\forall y$, $\exists y$ for any~$y \in \mrm{FV}(t)$.
If $E$ is an expression and $t$ is a term, we write $\st E x t$ for the result of substituting every free occurrence of~$x$ in~$E$ by~$t$, provided $t$ is free for $x$ in $E$, and renaming bound variables in~$t$ if necessary. We write $E(x)$ to indicate that $x \in \mrm{FV}(E)$, and $E(t)$ for $\st E x t$. We write $\ST E t u$ for the result of replacing every occurrence of $t$ in $E$ by $u$.\footnote{Skipping
details, (a)~we want to replace not just every occurrence of $t$ by
$u$, but every occurrence of a term $t' \seq t$. (b)~$t$ may have
an occurrence in $E$ where a variable in $t$ is bound by a
quantifier or \ensuremath{\varepsilon}{} outside $t$, and such occurrences shouldn't be
replaced (they are not subterm occurrences). (c)~When replacing $t$
by $u$, bound variables in $u$ might have to be renamed to avoid
conflicts with the bound variables in $E$, and bound variables in
$E$ might have to be renamed to avoid free variables in $u$ being
bound.}
\begin{defn}[\ensuremath{\varepsilon}-Translation]
If $E$ is an expression, define $E^\ensuremath{\varepsilon}$ by:
\begin{enumerate}
\item $E^\ensuremath{\varepsilon} = E$ if $E$ is a variable, a constant symbol,
or~$\bot$.
\item If $E = f^n_i(t_1, \dots, t_n)$, $E^\ensuremath{\varepsilon} = f^n_i(t_1^\ensuremath{\varepsilon},
\dots, t_n^\ensuremath{\varepsilon})$.
\item If $E = P^n_i(t_1, \dots, t_n)$, $E^\ensuremath{\varepsilon} = P^n_i(t_1^\ensuremath{\varepsilon},
\dots, t_n^\ensuremath{\varepsilon})$.
\item If $E = \lnot A$, then $E^\ensuremath{\varepsilon} = \lnot A^\ensuremath{\varepsilon}$.
\item If $E = (A \land B)$, $(A \lor B)$, $(A \lif B)$, or $(A \liff
B)$, then $E^\ensuremath{\varepsilon} = (A^\ensuremath{\varepsilon} \land B^\ensuremath{\varepsilon})$, $(A^\ensuremath{\varepsilon} \lor
B^\ensuremath{\varepsilon})$, $(A^\ensuremath{\varepsilon} \lif B^\ensuremath{\varepsilon})$, or $(A^\ensuremath{\varepsilon} \liff B^\ensuremath{\varepsilon})$,
respectively.
\item If $E = \exists x\, A(x)$ or $\forall x\, A(x)$, then $E^\ensuremath{\varepsilon}
= A^\ensuremath{\varepsilon}(\meps x{A(x)^\ensuremath{\varepsilon}})$ or $A^\ensuremath{\varepsilon}(\meps x {\lnot A(x)^\ensuremath{\varepsilon}})$.
\item If $E = \meps x {A(x)}$, then $E^\ensuremath{\varepsilon} = \meps x{A(x)^\ensuremath{\varepsilon}}$.
\end{enumerate} \end{defn}
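For a simple worked instance of the translation (with an illustrative unary predicate symbol~$P$, not fixed by the definition above): since $P(x)$ is atomic, $P(x)^\ensuremath{\varepsilon} = P(x)$, and the quantifier clauses give

```latex
\begin{align*}
  (\exists x\, P(x))^\ensuremath{\varepsilon} &= P(\meps x {P(x)}), \\
  (\forall x\, P(x))^\ensuremath{\varepsilon} &= P(\meps x {\lnot P(x)}).
\end{align*}
```

For nested quantifiers the translation is applied inside out; e.g., with an illustrative binary~$Q$, $(\exists x\, \forall y\, Q(x,y))^\ensuremath{\varepsilon} = Q(e, \meps y{\lnot Q(e, y)})$ where $e \seq \meps x{Q(x, \meps y{\lnot Q(x,y)})}$.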
\begin{defn}
An \ensuremath{\varepsilon}-term $p \seq \meps x {B(x; x_1, \dots, x_n)}$ is a
\emph{type of an \ensuremath{\varepsilon}-term~$\meps x {A(x)}$} iff
\begin{enumerate}
\item $\meps x {A(x)} \seq \st{\st{p}{x_1}{t_1}\dots}{x_n}{t_n}$
for some terms $t_1$, \dots,~$t_n$.
\item $\mrm{FV}(p) = \{x_1, \dots, x_n\}$.
\item $x_1$, \dots, $x_n$ are all immediate subterms of $p$.
\item Each $x_i$ has exactly one occurrence in~$p$.
\item The occurrence of $x_i$ is left of the occurrence of $x_j$ in
$p$ if $i < j$.
\end{enumerate}
We denote the set of types as~\mrm{Typ}. \end{defn}
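A small example may help (the symbols $P$, $f$, $c$, $d$ are illustrative only):

```latex
\begin{quote}
  \emph{Example.} Let $e \seq \meps x{P(x, f(c), d)}$. Its type is
  $p \seq \meps x{P(x, x_1, x_2)}$: substituting $t_1 \seq f(c)$ for
  $x_1$ and $t_2 \seq d$ for $x_2$ in~$p$ recovers~$e$;
  $\mrm{FV}(p) = \{x_1, x_2\}$; $x_1$, $x_2$ are immediate subterms
  of~$p$, each occurring exactly once, with $x_1$ to the left
  of~$x_2$. Note that $\meps x{P(x, d, d)}$ has the same type~$p$
  (with $t_1 \seq t_2 \seq d$), so distinct \ensuremath{\varepsilon}-terms may share a
  type.
\end{quote}
```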
\begin{proposition}
The type of an \ensuremath{\varepsilon}-term~$\meps x {A(x)}$ is unique up to
renaming of bound variables and disjoint renaming of free variables. \end{proposition}
\begin{defn}
An \ensuremath{\varepsilon}-term $e$ is \emph{nested in} an \ensuremath{\varepsilon}-term $e'$ if $e$ is a
proper subterm of~$e'$. \end{defn}
\begin{defn}
The \emph{degree~$\deg e$} of an \ensuremath{\varepsilon}-term~$e$ is defined as
follows: (1) $\deg e = 1$ iff $e$ contains no nested \ensuremath{\varepsilon}-terms.
(2) $\deg e = \max\{\deg {e_1}, \dots, \deg{e_n}\} + 1$ if $e_1$,
\dots,~$e_n$ are all the \ensuremath{\varepsilon}-terms nested in~$e$. For convenience,
let $\deg{t} = 0$ if $t$ is not an \ensuremath{\varepsilon}-term. \end{defn}
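For instance (with illustrative predicate symbols $P$, $Q$):

```latex
\begin{quote}
  \emph{Example.} The term $\meps y{Q(y)}$ contains no nested
  \ensuremath{\varepsilon}-terms, so $\deg{\meps y{Q(y)}} = 1$. The term
  $e \seq \meps x{P(x, \meps y{Q(y)})}$ has exactly one nested
  \ensuremath{\varepsilon}-term, namely $\meps y{Q(y)}$, so
  $\deg{e} = \deg{\meps y{Q(y)}} + 1 = 2$.
\end{quote}
```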
\begin{defn}
An \ensuremath{\varepsilon}-term $e$ is \emph{subordinate to} an \ensuremath{\varepsilon}-term $e' = \meps x
A(x)$ if some $e'' \seq e$ occurs in $e'$ and $x \in \mrm{FV}(e'')$. \end{defn}
Note that if $e$ is subordinate to $e'$ it is \emph{not} a subterm of $e'$, because $x$ is free in $e$ and so the occurrence of $e$ (really, of the variant $e''$) in $e'$ is in the scope of~$\ensuremath{\varepsilon}_x$.\footnote{One might think that replacing $e$ in $\meps x{A(x)}$ by a new variable $y$ would result in an \ensuremath{\varepsilon}-term $\meps x{A'(y)}$ so that $e' \equiv \st {\meps x{A'(y)}}{y}{e}$. But (a) $\meps x{A'(y)}$ is not in general a term, since it is not guaranteed that $x$ is free in $A'(y)$ and (b) $e$ is not free for $y$ in $\meps x {A'(y)}$.}
\begin{defn}
The \emph{rank~$\rk e$} of an \ensuremath{\varepsilon}-term~$e$ is defined as follows:
(1) $\rk e = 1$ iff $e$ contains no subordinate \ensuremath{\varepsilon}-terms. (2) $\rk
e = \max\{\rk {e_1}, \dots, \rk{e_n}\} + 1$ if $e_1$, \dots,~$e_n$
are all the \ensuremath{\varepsilon}-terms subordinate to~$e$. \end{defn}
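Degree and rank come apart; the following contrast (with illustrative $P$, $Q$, and using the reading of subterm occurrences from the footnote on substitution above) may help:

```latex
\begin{quote}
  \emph{Example.} In $e \seq \meps x{P(x, \meps y{Q(y)})}$, the term
  $\meps y{Q(y)}$ is nested in~$e$ but not subordinate to it, since
  $x \notin \mrm{FV}(\meps y{Q(y)})$; hence $\deg{e} = 2$ but
  $\rk{e} = 1$. In $e' \seq \meps x{P(x, \meps y{Q(x, y)})}$, the
  term $\meps y{Q(x, y)}$ has $x$ free and so is subordinate
  to~$e'$; its occurrence in~$e'$ is not a subterm occurrence, so
  $\deg{e'} = 1$ while $\rk{e'} = 2$.
\end{quote}
```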
\begin{proposition}
If $p$ is the type of $e$, then $\rk{p} = \rk{e}$. \end{proposition}
\subsection{Axioms and Proofs}
\begin{defn} The axioms of the \emph{elementary calculus}~\mrm{EC}{} are \begin{align}
& A & \text{for any tautology~$A$} \tag{Taut} \end{align} and its only rule of inference is \[\infer[$MP$]{B}{A & A \lif B}\] For \ensuremath{\mathrm{EC}^=}, we add \begin{align} t & = t & \text{for any term~$t$} \tag{$=_1$} \\ t = u & \lif (\st A x t \liff \st A x u). \tag{$=_2$} \end{align} The axioms and rules of the (intensional) \emph{\ensuremath{\varepsilon}-calculus}~\ensuremath{\mathrm{EC}_\eps}{} (\ensuremath{\mathrm{EC}_\eps^=}) are those of~\mrm{EC}{} (\ensuremath{\mathrm{EC}^=}) plus the \emph{critical formulas} \begin{align} A(t) \lif A(\meps x {A(x)}). \tag{crit} \end{align} The axioms and rules of the \emph{extensional \ensuremath{\varepsilon}-calculus}~$\ensuremath{\mathrm{EC}_\eps^\ext}$ are those of~\ensuremath{\mathrm{EC}_\eps^=}{} plus \begin{align}
(\forall x(A(x) \liff B(x)))^\ensuremath{\varepsilon} & \lif \meps x {A(x)} = \meps x
{B(x)}, \tag{ext} \\
\intertext{that is,}
A(\meps x{\lnot(A(x) \liff B(x))}) \liff B(\meps x{\lnot(A(x) \liff
B(x))}) & \lif \meps x {A(x)} = \meps x {B(x)} \notag \end{align} The axioms and rules of \ensuremath{\mathrm{EC}_\forall}, \ensuremath{\mathrm{EC}_{\eps\forall}}, $\ensuremath{\mathrm{EC}_{\eps\forall}}^\mathrm{ext}$ are those of \mrm{EC}, \ensuremath{\mathrm{EC}_\eps}, $\ensuremath{\mathrm{EC}_\eps}^\mathrm{ext}$, respectively, together with the axioms \begin{align} A(t) & \lif \exists x\, A(x) \tag{Ax$\exists$} \\ \forall x\, A(x) & \lif A(t) \tag{Ax$\forall$} \end{align} and the rules \[ \infer[R\exists]{\exists x\, A(x) \lif B}{A(x) \lif B} \qquad \infer[R\forall]{B \lif \forall x\, A(x)}{B \lif A(x)} \] Applications of these rules must satisfy the \emph{eigenvariable
condition}, viz., the variable $x$ must not appear in the conclusion or anywhere below it in the proof. \end{defn}
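The critical formulas are exactly what is needed to simulate the quantifier axioms under the \ensuremath{\varepsilon}-translation; the following observation, spelled out here for a formula $A(x)$ using the translation clauses above, underlies the \ensuremath{\varepsilon}-Embedding Lemma:

```latex
\begin{quote}
  \emph{Example.} The \ensuremath{\varepsilon}-translation of an instance
  $A(t) \lif \exists x\, A(x)$ of (Ax$\exists$) is
  \[ A^\ensuremath{\varepsilon}(t^\ensuremath{\varepsilon}) \lif A^\ensuremath{\varepsilon}(\meps x{A^\ensuremath{\varepsilon}(x)}), \]
  a critical formula. The translation of
  $\forall x\, A(x) \lif A(t)$ is
  \[ A^\ensuremath{\varepsilon}(\meps x{\lnot A^\ensuremath{\varepsilon}(x)}) \lif A^\ensuremath{\varepsilon}(t^\ensuremath{\varepsilon}), \]
  the contrapositive of the critical formula
  $\lnot A^\ensuremath{\varepsilon}(t^\ensuremath{\varepsilon}) \lif
   \lnot A^\ensuremath{\varepsilon}(\meps x{\lnot A^\ensuremath{\varepsilon}(x)})$,
  and hence derivable in~\ensuremath{\mathrm{EC}_\eps}.
\end{quote}
```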
\begin{defn}
If $\Gamma$ is a set of formulas, a \emph{proof of $A$ from $\Gamma$
in $\ensuremath{\mathrm{EC}_{\eps\forall}}^\mathrm{ext}$} is a sequence~$\pi$ of formulas $A_1$, \dots, $A_n
= A$ where for each $i \le n$, $A_i \in \Gamma$, $A_i$ is an
instance of an axiom, or follows from formulas $A_j$ ($j<i$) by a
rule of inference.
If $\pi$ only uses the axioms and rules of \mrm{EC}, \ensuremath{\mathrm{EC}_\eps}, $\ensuremath{\mathrm{EC}_\eps}^\mathrm{ext}$,
etc., then it is a proof of $A$ from $\Gamma$ in \mrm{EC}, \ensuremath{\mathrm{EC}_\eps},
$\ensuremath{\mathrm{EC}_\eps}^\mathrm{ext}$, etc., and we write $\Gamma \proves[\pi]{} A$, $\Gamma
\proves[\pi]{\ensuremath{\varepsilon}} A$, $\Gamma \proves[\pi]{\ensuremath{\varepsilon}\mathrm{ext}} A$, etc.
We say that $A$ is provable from $\Gamma$ in \mrm{EC}, etc. ($\Gamma
\proves{} A$, etc.), if there is a proof of $A$ from $\Gamma$ in
\mrm{EC}, etc. \end{defn}
Note that our definition of proof, because of its use of $\seq$, includes a tacit rule for renaming bound variables. Note also that substitution into members of~$\Gamma$ is \emph{not} permitted. However, we can simulate a provability relation in which substitution into members of $\Gamma$ is allowed by considering $\inst \Gamma$, the set of all substitution instances of members of~$\Gamma$. If $\Gamma$ is a set of sentences, then $\inst\Gamma = \Gamma$.
\begin{proposition}\label{proof-subst}
If $\pi = A_1$, \dots, $A_n \equiv A$ is a proof of $A$ from
$\Gamma$ and $x \notin \mrm{FV}(\Gamma)$ is not an eigenvariable in
$\pi$, then $\st \pi x t = \st {A_1} x t$, \dots, $\st {A_n} x t$ is
a proof of $\st A x t$ from $\inst \Gamma$. \end{proposition}
\begin{lemma}\label{ded-lemma}
If $\pi$ is a proof of $B$ from $\Gamma \cup \{A\}$, then there is a
proof $\pi[A]$ of $A \lif B$ from $\Gamma$, provided $A$
contains no eigenvariables of~$\pi$ free. \end{lemma}
\begin{proof}
By induction on the length of~$\pi$, as in the classical case. \end{proof}
\begin{theorem}[Deduction Theorem]\label{deduction-thm}
If $\Sigma \cup \{A\}$ is a set of sentences, $\Sigma \proves{} A \lif B$
iff $\Sigma \cup \{A\} \proves{} B$. \end{theorem}
\begin{corollary}\label{incons}
If $\Sigma \cup \{A\}$ is a set of sentences, $\Sigma \proves{} A$
iff $\Sigma \cup \{\lnot A\} \proves{} \bot$. \end{corollary}
\begin{lemma}[\ensuremath{\varepsilon}-Embedding Lemma]
If $\Gamma \proves[\pi]{\ensuremath{\varepsilon}\forall} A$, then there is a proof
$\pi^\ensuremath{\varepsilon}$ so that $\inst{{\Gamma^\ensuremath{\varepsilon}}} \proves[\pi^\ensuremath{\varepsilon}]{\ensuremath{\varepsilon}}
A^\ensuremath{\varepsilon}$. \end{lemma}
\begin{proof} By induction, see \cite{MoserZach:06}. \end{proof}
\section{Semantics and Completeness}
\subsection{Semantics for \ensuremath{\mathrm{EC}_\eps^\ext}}
\begin{defn}
A \emph{structure}~$\mathfrak{M} = \langle \card \mathfrak{M}, (\cdot)^\mathfrak{M}\rangle$
consists of a nonempty \emph{domain}~$\card \mathfrak{M} \neq \emptyset$ and a
mapping $(\cdot)^\mathfrak{M}$ on function and predicate symbols where $(f^0_i)^\mathfrak{M} \in \card \mathfrak{M}$, $(f^n_i)^\mathfrak{M} \in \card \mathfrak{M}^{\card \mathfrak{M}^n}$, and
$(P^n_i)^\mathfrak{M} \subseteq \card \mathfrak{M}^n$. \end{defn}
\begin{defn}
An \emph{extensional choice function~$\Phi$ on $\mathfrak{M}$} is a function
$\Phi\colon \wp(\card\mathfrak{M}) \to \card\mathfrak{M}$ where $\Phi(X) \in X$ whenever
$X \neq \emptyset$. \end{defn}
Note that $\Phi$ is total on $\wp(\card\mathfrak{M})$, and so $\Phi(\emptyset) \in \card{\mathfrak{M}}$.
\begin{defn}
An \emph{assignment~$s$ on $\mathfrak{M}$} is a function $s\colon \mrm{Var} \to
\card\mathfrak{M}$.
If $x \in \mrm{Var}$ and $m \in \card\mathfrak{M}$, $\st s x m$ is the assignment
defined by \[ \st s x m(y) = \begin{cases} m & \text{if $y = x$} \\ s(y) &
\text{otherwise} \end{cases} \] \end{defn}
\begin{defn}\label{ext-sat}
The \emph{value~$\val \mathfrak{M} \Phi s t$ of a term} and the
\emph{satisfaction relation $\sat \mathfrak{M} \Phi s A$} are defined as
follows: \begin{enumerate} \item $\val \mathfrak{M} \Phi s x = s(x)$ \item $\sat \mathfrak{M} \Phi s \top$ and $\mathfrak{M}, \Phi, s \not\models \bot$ \item $\val \mathfrak{M} \Phi s {f^n_i(t_1, \dots, t_n)} = (f^n_i)^\mathfrak{M}(\val \mathfrak{M}
\Phi s {t_1}, \dots, \val \mathfrak{M} \Phi s {t_n})$ \item $\sat \mathfrak{M} \Phi s {t_1 = t_2}$ iff $\val \mathfrak{M}
\Phi s {t_1} = \val \mathfrak{M} \Phi s {t_2}$ \item $\sat \mathfrak{M} \Phi s {P^n_i(t_1, \dots, t_n)}$ iff $\langle\val \mathfrak{M}
\Phi s {t_1}, \dots, \val \mathfrak{M} \Phi s {t_n}\rangle \in (P^n_i)^\mathfrak{M}$ \item\label{epsilon-sat} $\val \mathfrak{M} \Phi s {\meps x{A(x)}} = \Phi(\val \mathfrak{M}
\Phi s {A(x)})$ where \[ \val \mathfrak{M} \Phi s {A(x)} = \{ m \in \card\mathfrak{M} : \sat \mathfrak{M} \Phi {\st s x m} A(x)\} \] \item $\sat \mathfrak{M} \Phi s {\exists x\, A(x)}$ iff for some $m \in
\card\mathfrak{M}$, $\sat \mathfrak{M} \Phi {\st s x m} {A(x)}$ \item $\sat \mathfrak{M} \Phi s {\forall x\, A(x)}$ iff for all $m \in
\card\mathfrak{M}$, $\sat \mathfrak{M} \Phi {\st s x m} {A(x)}$ \end{enumerate} \end{defn}
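A small evaluation example (the structure is of course illustrative): take $\card\mathfrak{M} = \{1, 2\}$ with unary predicate symbols interpreted by $P^\mathfrak{M} = \{2\}$ and $Q^\mathfrak{M} = \emptyset$.

```latex
\begin{quote}
  \emph{Example.} For any choice function $\Phi$ and assignment $s$,
  \[ \val \mathfrak{M} \Phi s {\meps x{P(x)}} = \Phi(\{2\}) = 2, \]
  since $\Phi(X) \in X$ for nonempty~$X$; hence
  $\sat \mathfrak{M} \Phi s {P(\meps x{P(x)})}$. By contrast,
  $\val \mathfrak{M} \Phi s {\meps x{Q(x)}} = \Phi(\emptyset)$, which depends
  on~$\Phi$, and $\sat \mathfrak{M} \Phi s {Q(\meps x{Q(x)})}$ fails in any
  case, since $Q^\mathfrak{M} = \emptyset$.
\end{quote}
```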
\begin{proposition}
If $s(x) = s'(x)$ for all $x \notin \mrm{FV}(t) \cup \mrm{FV}(A)$, then $\val
\mathfrak{M} \Phi s t = \val \mathfrak{M} \Phi {s'} t$ and $\sat \mathfrak{M} \Phi s A$ iff $\sat
\mathfrak{M} \Phi {s'} A$. \end{proposition}
\begin{proposition}[Substitution Lemma] If $m = \val \mathfrak{M} \Phi s u$, then
$\val \mathfrak{M} \Phi s {t(u)} = \val \mathfrak{M} \Phi {\st s x m} {t(x)}$ and
$\sat \mathfrak{M} \Phi s {A(u)}$ iff $\sat \mathfrak{M} \Phi {\st s x m} {A(x)}$ \end{proposition}
\begin{defn} \begin{enumerate} \item $A$ is \emph{locally true} in $\mathfrak{M}$ w.r.t.\ $\Phi$ and~$s$ iff
$\sat \mathfrak{M} \Phi s A$. \item $A$ is \emph{true} in $\mathfrak{M}$ with respect to $\Phi$, $\mathfrak{M}, \Phi
\models A$, iff for all $s$ on $\mathfrak{M}$: $\sat \mathfrak{M} \Phi s A$. \item $A$ is \emph{generically true} in $\mathfrak{M}$ with respect to~$s$, $\mathfrak{M},
s \models^g A$, iff for all choice functions $\Phi$
on~$\mathfrak{M}$: $\sat \mathfrak{M} \Phi s A$. \item $A$ is \emph{generically valid} in $\mathfrak{M}$, $\mathfrak{M} \models^v A$, iff for
all choice functions $\Phi$ and assignments~$s$ on~$\mathfrak{M}$: $\sat \mathfrak{M}
\Phi s A$. \end{enumerate} \end{defn}
\begin{defn} Let $\Gamma \cup\{A\}$ be a set of formulas. \begin{enumerate} \item $A$ is a \emph{local consequence} of $\Gamma$, $\Gamma
\models^l A$, iff for all $\mathfrak{M}$, $\Phi$, and $s$:\\
\qquad if $\sat \mathfrak{M} \Phi s \Gamma$ then $\sat \mathfrak{M} \Phi s A$. \item $A$ is a \emph{truth consequence} of $\Gamma$, $\Gamma
\models A$, iff for all $\mathfrak{M}$, $\Phi$:\\
\qquad if $\mathfrak{M}, \Phi \models \Gamma$
then $\mathfrak{M}, \Phi \models A$. \item $A$ is a \emph{generic consequence} of $\Gamma$, $\Gamma
\models^g A$, iff for all $\mathfrak{M}$ and $s$:\\
\qquad if $\mathfrak{M}, s \models^g \Gamma$
then $\mathfrak{M}, s \models^g A$. \item $A$ is a \emph{generic validity consequence} of $\Gamma$, $\Gamma
\models^v A$, iff for all $\mathfrak{M}$:\\
\qquad if $\mathfrak{M} \models^v \Gamma$ then $\mathfrak{M}
\models^v A$. \end{enumerate} \end{defn}
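The difference between local and truth consequence already shows up for atomic formulas with free variables (illustrative unary predicate symbol~$P$):

```latex
\begin{quote}
  \emph{Example.} $\{P(x)\} \models P(y)$: if $\mathfrak{M}, \Phi \models
  P(x)$, then $\sat \mathfrak{M} \Phi s {P(x)}$ for every~$s$, i.e., $P^\mathfrak{M} =
  \card\mathfrak{M}$, and so $\mathfrak{M}, \Phi \models P(y)$. But $\{P(x)\}
  \not\models^l P(y)$: with $\card\mathfrak{M} = \{1,2\}$, $P^\mathfrak{M} = \{1\}$,
  $s(x) = 1$, and $s(y) = 2$, we have $\sat \mathfrak{M} \Phi s {P(x)}$ but
  not $\sat \mathfrak{M} \Phi s {P(y)}$.
\end{quote}
```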
\begin{proposition}
If $\Sigma \cup \{A\}$ is a set of sentences, $\Sigma \models^l A$
iff $\Sigma \models A$. \end{proposition}
\begin{proposition}
If $\Sigma \cup \{A, B\}$ is a set of sentences, $\Sigma \cup \{A\}
\models B$ iff $\Sigma \models A \lif B$. \end{proposition}
\begin{corollary}
If $\Sigma \cup \{A\}$ is a set of sentences, $\Sigma \models A$
iff for no $\mathfrak{M}$, $\Phi$, $\mathfrak{M} \models \Sigma \cup \{\lnot A\}$ \end{corollary}
\subsection{Soundness and Completeness}
\begin{theorem} If $\Gamma \proves{\ensuremath{\varepsilon}} A$, then $\Gamma \models^l A$. \end{theorem}
\begin{proof}
Suppose $\sat \mathfrak{M} \Phi s \Gamma$. We show by induction on the
length $n$ of a proof $\pi$ that $\sat \mathfrak{M} \Phi {s'} A$ for all $s'$
which agree with $s$ on $\mrm{FV}(\Gamma)$. We may assume that no
eigenvariable~$x$ of $\pi$ is in $\mrm{FV}(\Gamma)$ (if it is, let $y
\notin \mrm{FV}(\Gamma)$ be a variable not occurring in~$\pi$; consider $\st \pi x y$
instead of $\pi$).
If $n = 0$ there's nothing to prove. Otherwise, we distinguish cases
according to the last line $A_n$ in $\pi$. The only interesting case is when
$A_n$ is a critical formula, i.e., $A_n \equiv A(t) \lif A(\meps
x {A(x)})$. Then either $\sat \mathfrak{M} \Phi s {A(t)}$ or not (in which
case there's nothing to prove). If yes, $\sat\mathfrak{M} \Phi {\st s x m} {A(x)}$
for $m = \val \mathfrak{M} \Phi s {t}$, and so $Y = \val \mathfrak{M} \Phi s {A(x)} \neq
\emptyset$. Consequently, $\Phi(Y) \in Y$, and hence $\sat \mathfrak{M} \Phi
s {A(\meps x {A(x)})}$. \end{proof}
\begin{lemma}
If $\Gamma$ is a set of sentences and $\Gamma
\nproves{\ensuremath{\varepsilon}} \bot$, then there are $\mathfrak{M}$, $\Phi$ so that $\mathfrak{M},
\Phi \models \Gamma$. \end{lemma}
\begin{theorem}[Completeness]
If $\Gamma \cup \{A\}$ are sentences and $\Gamma \models
A$, then $\Gamma \proves{\ensuremath{\varepsilon}\mathrm{ext}} A$. \end{theorem}
\begin{proof}
Suppose $\Gamma \not\models A$. Then for some $\mathfrak{M}$, $\Phi$ we have
$\mathfrak{M}, \Phi \models \Gamma$ but $\mathfrak{M}, \Phi \not\models A$. Hence $\mathfrak{M},
\Phi \models \Gamma \cup \{\lnot A\}$. By the Lemma, $\Gamma \cup
\{\lnot A\} \proves{\ensuremath{\varepsilon}} \bot$. By Corollary~\ref{incons}, $\Gamma
\proves{\ensuremath{\varepsilon}} A$. \end{proof}
The proof of the Lemma comes in several stages. We have to show that if $\Gamma$ is consistent, we can construct $\mathfrak{M}$, $\Phi$, and $s$ so that $\sat \mathfrak{M} \Phi s \Gamma$. Since $\mrm{FV}(\Gamma) = \emptyset$, we then have $\mathfrak{M}, \Phi \models \Gamma$.
\begin{lemma}
If $\Gamma \nproves{\ensuremath{\varepsilon}} \bot$, there is $\Gamma^* \supseteq
\Gamma$ with (1) $\Gamma^* \nproves{\ensuremath{\varepsilon}} \bot$ and (2) for all
formulas $A$, either $A \in \Gamma^*$ or $\lnot A \in \Gamma^*$. \end{lemma}
\begin{proof}
Let $A_1$, $A_2$, \dots\ be an enumeration of $\mrm{Frm}_\ensuremath{\varepsilon}$. Define
$\Gamma_0 = \Gamma$ and \[ \Gamma_{n+1} = \begin{cases}
\Gamma_n \cup \{A_n\} &
\text{if $\Gamma_n \cup \{A_n\} \nproves{\ensuremath{\varepsilon}} \bot$} \\
\Gamma_n \cup \{\lnot A_n\} &
\text{otherwise} \end{cases} \] Let $\Gamma^* = \bigcup_{n\ge 0} \Gamma_n$. Obviously, $\Gamma \subseteq \Gamma^*$. For (1), observe that if $\Gamma^* \proves[\pi]{\ensuremath{\varepsilon}} \bot$, then $\pi$ contains only finitely many formulas from~$\Gamma^*$, so for some $n$, $\Gamma_n \proves[\pi]{\ensuremath{\varepsilon}} \bot$. But $\Gamma_n$ is consistent by definition.
To verify (2), we have to show that for each $n$, either $\Gamma_n \cup \{A_n\} \nproves{\ensuremath{\varepsilon}} \bot$ or $\Gamma_n \cup \{\lnot A_n\} \nproves{\ensuremath{\varepsilon}} \bot$, i.e., that each $\Gamma_n$ is consistent. For $n = 0$, this is the assumption of the lemma. So suppose $\Gamma_n$ is consistent, but $\Gamma_n \cup \{A_n\} \proves[\pi]{\ensuremath{\varepsilon}} \bot$ and $\Gamma_n \cup \{\lnot A_n\} \proves[\pi']{\ensuremath{\varepsilon}} \bot$. Then by Lemma~\ref{ded-lemma}, we have $\Gamma_n \proves[{\pi[A_n]}]{\ensuremath{\varepsilon}} A_n \lif \bot$ and $\Gamma_n \proves[{\pi'[\lnot A_n]}]{\ensuremath{\varepsilon}} \lnot A_n \lif \bot$. Since $(A_n \lif \bot) \lif ((\lnot A_n \lif \bot) \lif \bot)$ is a tautology, we have $\Gamma_n \proves{\ensuremath{\varepsilon}} \bot$, contradicting the assumption that $\Gamma_n$ is consistent.
\begin{lemma}\label{lem-closed} If $\Gamma^* \proves{\ensuremath{\varepsilon}} B$, then $B \in \Gamma^*$. \end{lemma}
\begin{proof}
If not, then $\lnot B \in \Gamma^*$ by maximality, so $\Gamma^*$
would be inconsistent. \end{proof}
\begin{defn}
Let $\approx$ be the relation on $\mrm{Trm}_\ensuremath{\varepsilon}$ defined by \[ t \approx u \text{ iff } t = u \in \Gamma^* \] It is easily seen that $\approx$ is an equivalence relation. Let $\widetilde t = \{u : u \approx t\}$ and $\widetilde \mrm{Trm} = \{\widetilde t : t \in \mrm{Trm}\}$. \end{defn}
\begin{defn} A set $T \subseteq \widetilde \mrm{Trm}$ is \emph{represented by $A(x)$} if $T = \{\widetilde t : A(t) \in \Gamma^*\}$.
Let $\Phi_0$ be a fixed choice function on $\widetilde \mrm{Trm}$, and define \[ \Phi(T) = \begin{cases} \widetilde{\meps x {A(x)}} & \text{if $T$ is represented by $A(x)$}\\ \Phi_0(T) & \text{otherwise.} \end{cases} \] \end{defn}
\begin{proposition} $\Phi$ is a well-defined choice function on $\widetilde \mrm{Trm}$. \end{proposition}
\begin{proof}
Use (ext) for well-definedness and (crit) for choice function. \end{proof}
Now let $\mathfrak{M} =\langle \widetilde \mrm{Trm}, (\cdot)^\mathfrak{M}\rangle$ with $c^\mathfrak{M} = \widetilde c$, $(P_i^n)^\mathfrak{M} = \{\langle\widetilde t_1, \dots, \widetilde t_n\rangle : P_i^n(t_1, \ldots, t_n) \in \Gamma^*\}$, and let $s(x) = \widetilde x$.
\begin{proposition}
$\sat \mathfrak{M} \Phi s {\Gamma^*}$. \end{proposition}
\begin{proof}
We show that $\val \mathfrak{M} \Phi s t = \widetilde t$ and $\sat \mathfrak{M} \Phi s
A$ iff $A \in \Gamma^*$ by simultaneous induction on the complexity
of $t$ and $A$.
If $t = c$ is a constant, the claim holds by definition of
$(\cdot)^\mathfrak{M}$. If $A = \bot$ or $= \top$, the claim holds by
Lemma~\ref{lem-closed}.
If $A \equiv P^n(t_1, \ldots, t_n)$, then by induction hypothesis,
$\val \mathfrak{M} \Phi s t_i = \widetilde {t_i}$. By definition of
$(\cdot)^\mathfrak{M}$, $\langle \widetilde{t_1}, \dots,
\widetilde{t_n}\rangle \in (P^n_i)(t_1, \dots, t_n)$ iff $P^n_i(t_1,
\dots, t_n) \in \Gamma^*$.
If $A\equiv \lnot B$, $(B \land C)$, $(B \lor C)$, $(B \lif C)$, $(B
\liff C)$, the claim follows immediately from the induction
hypothesis and the definition of $\models$ and the closure
properties of $\Gamma^*$. For instance, $\sat \mathfrak{M} \Phi s {(B \land
C)}$ iff $\sat \mathfrak{M} \Phi s B$ and $\sat \mathfrak{M} \Phi s C$. By induction
hypothesis, this is the case iff $B \in \Gamma^*$ and $C \in
\Gamma^*$. But since $B, C \proves{\ensuremath{\varepsilon}} B \land C$, $B \land C
\proves{\ensuremath{\varepsilon}} B$, and $B \land C \proves{\ensuremath{\varepsilon}} C$, this is the case iff $(B
\land C) \in \Gamma^*$. Remaining cases: Exercise.
If $t \seq \meps x {A(x)}$, then $\val \mathfrak{M} \Phi s t = \Phi(\val \mathfrak{M}
\Phi s {A(x)})$. Since $\val \mathfrak{M} \Phi s {A(x)} $ is represented by
$A(x)$ by induction hypothesis, we have $\val \mathfrak{M} \Phi s t =
\widetilde{\meps x {A(x)}}$ by definition of $\Phi$. \end{proof}
\subsection{Semantics for \ensuremath{\mathrm{EC}_\eps}}
In order to give a complete semantics for \ensuremath{\mathrm{EC}_\eps}, i.e., for the calculus without the extensionality axiom~(ext), it is necessary to change the notion of choice function so that two \ensuremath{\varepsilon}-terms \meps x {A(x)} and \meps x {B(x)} may be assigned different representatives even when $\sat \mathfrak{M} \Phi s {\forall x(A(x) \liff B(x))}$, since then the negation of (ext) is consistent in the resulting calculus. The idea is to add the \ensuremath{\varepsilon}-term itself as an additional argument to the choice function. However, in order for this semantics to be sound for the calculus---specifically, in order for ($=_2$) to be valid---we have to use not \ensuremath{\varepsilon}-terms but \ensuremath{\varepsilon}-types.
\begin{defn}
An \emph{intensional choice operator} is a mapping $\Psi\colon \mrm{Typ}
\times \card{\mathfrak{M}}^{<\omega} \to \card{\mathfrak{M}}^{\wp(\card{\mathfrak{M}})}$ such that
for every type $p = \meps x{A(x; y_1, \dots, y_n)}$ and all
$m_1$, \dots,~$m_n \in \card{\mathfrak{M}}$, $\Psi(p, m_1, \dots, m_n)$ is a
choice function. \end{defn}
\begin{defn}
If $\mathfrak{M}$ is a structure, $\Psi$ an intensional choice operator, and
$s$ an assignment, $\val \mathfrak{M} \Psi s t$ and $\sat \mathfrak{M} \Psi s A$ is
defined as before, except (\ref{epsilon-sat}) in Definition~\ref{ext-sat} is
replaced by:
\begin{enumerate}
\item[($\ref{epsilon-sat}'$)] $\val \mathfrak{M} \Psi s {\meps x{A(x)}} =
\Psi(p, m_1, \dots, m_n)(\val \mathfrak{M} \Psi s {A(x)})$ where
\begin{enumerate}
\item $p = \meps x{A'(x; x_1, \dots, x_n)}$ is the type of $\meps
x {A(x)}$,
\item $t_1$, \dots, $t_n$ are the subterms corresponding to $x_1$,
\dots, $x_n$, i.e., $\meps x{A(x)} \seq \meps x{A'(x; t_1,
\dots, t_n)}$,
\item $m_i = \val \mathfrak{M} \Psi s {t_i}$, and
\item
$\val \mathfrak{M} \Psi s {A(x)} =
\{ m \in \card\mathfrak{M} : \sat \mathfrak{M} \Psi {\st s x m} A(x)\}$
\end{enumerate} \end{enumerate} \end{defn}
The soundness and completeness proofs generalize to \ensuremath{\mathrm{EC}_\eps}, \ensuremath{\mathrm{EC}_\eps^=}, and~\ensuremath{\mathrm{EC}_{\eps\forall}}.
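To see why the intensional choice operator must take types rather than \ensuremath{\varepsilon}-terms as arguments, consider the following (illustrative binary predicate symbol $P$ and constants $c$, $d$):

```latex
\begin{quote}
  \emph{Example.} Suppose $c^\mathfrak{M} = d^\mathfrak{M} = m$. The terms $\meps
  x{P(x, c)}$ and $\meps x{P(x, d)}$ share the type $p \seq \meps
  x{P(x, x_1)}$, so by clause~($\ref{epsilon-sat}'$) both receive the
  value $\Psi(p, m)(X)$, where $X = \{m' \in \card\mathfrak{M} : \langle m',
  m\rangle \in P^\mathfrak{M}\}$. Hence $c = d \lif \meps x{P(x,c)} = \meps
  x{P(x,d)}$ is valid, as ($=_2$) requires. Had $\Psi$ taken the
  \ensuremath{\varepsilon}-term itself as its first argument, the two terms could have
  received different values, and ($=_2$) would fail. On the other
  hand, coextensional $\meps x{A(x)}$ and $\meps x{B(x)}$ of
  different types may still receive different values, so (ext) is
  not validated.
\end{quote}
```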
\section{The First Epsilon Theorem}\label{epsthm}
\subsection{The Case Without Identity}
\begin{defn}
An \ensuremath{\varepsilon}-term $e$ is \emph{critical in~$\pi$} if $A(t) \lif A(e)$ is
one of the critical formulas in $\pi$. The \emph{rank~$\rk{\pi}$ of
a proof~$\pi$} is the maximal rank of its critical \ensuremath{\varepsilon}-terms. The
\emph{$r$-degree~$\deg{\pi, r}$} of $\pi$ is the maximum degree of
its critical \ensuremath{\varepsilon}-terms of rank~$r$. The \emph{$r$-order~$o(\pi,
r)$} of $\pi$ is the number of different (up to renaming of bound
variables) critical \ensuremath{\varepsilon}-terms of rank~$r$. \end{defn}
\begin{lemma}
If $e = \meps x{A(x)}$ and $\meps y {B(y)}$ are critical in $\pi$,
$\rk{e} = \rk{\pi}$, and $B^* \seq B(u) \lif B(\meps y{B(y)})$ is a
critical formula in~$\pi$, then, if $e$ is a subterm of $B^*$, it is
a subterm of $B(y)$ or a subterm of~$u$. \end{lemma}
\begin{proof}
Suppose not. Since $e$ is a subterm of $B^*$, we have $B(y)
\seq B'(\meps x{A'(x, y)}, y)$ and either $e \seq \meps x{A'(x, u)}$
or $e \seq \meps x{A'(x, \meps y{B(y)})}$. In each case, we see that
$\meps x{A'(x, y)}$ and $e$ have the same rank, since the latter is
an instance of the former (and so have the same type). On the
other hand, in either case, $\meps y {B(y)}$ would be
\[
\meps y {B'(\meps x{A'(x, y)}, y)}
\]
and so would have a higher rank than $\meps x{A'(x, y)}$ as that
\ensuremath{\varepsilon}-term is subordinate to it. This contradicts $\rk{e} = \rk{\pi}$. \end{proof}
\begin{lemma}
Let $e$, $B^*$ be as in the preceding lemma, let $t$ be any term, and let $u' \seq \ST u e t$. Then
\begin{enumerate}
\item If $e$ is not a subterm of $B(y)$, $\ST{B^*}{e}{t} \equiv
B(u') \lif B(\meps y{B(y)})$.
\item If $e$ is a subterm of $B(y)$, i.e., $B(y) \seq B'(e, y)$,
$\ST{B^*}{e}{t} \equiv B'(t, u') \lif B'(t, \meps{y}{B'(t, y)})$.
\end{enumerate} \end{lemma}
\begin{lemma}
If $\proves[\pi]{\ensuremath{\varepsilon}} E$ and $E$ does not contain \ensuremath{\varepsilon}, then there
is a proof $\pi'$ such that $\proves[\pi']{\ensuremath{\varepsilon}} E$ and $\rk{\pi'}
\le \rk{\pi} = r$ and $o(\pi', r) < o(\pi, r)$. \end{lemma}
\begin{proof}
Let $e$ be an \ensuremath{\varepsilon}-term critical in $\pi$ and let $A(t_1) \lif
A(e)$, \dots, $A(t_n) \lif A(e)$ be all its critical formulas in
$\pi$.
Consider $\ST \pi e t_i$, i.e., $\pi$ with $e$ replaced by $t_i$
throughout. Each critical formula belonging to $e$ now is of the
form $A(t_j') \lif A(t_i)$, since $e$ obviously cannot be a subterm
of $A(x)$ (if it were, $e$ would be a subterm of $\meps x {A(x)}$,
i.e., of itself!). Let $\hat\pi_i$ be the sequence of tautologies
$A(t_i) \lif (A(t_j') \lif A(t_i))$ for $j = 1$, \dots,~$n$,
followed by $\ST \pi e {t_i}$. Each of the formulas $A(t_j') \lif
A(t_i)$ follows from one of these by (MP) from $A(t_i)$. Hence,
$A(t_i) \proves[\hat\pi_i]{\ensuremath{\varepsilon}} E$. Let $\pi_i = \hat\pi_i[A(t_i)]$ as
in Lemma~\ref{ded-lemma}. We have $\proves[\pi_i]{\ensuremath{\varepsilon}} A(t_i) \lif E$.
The \ensuremath{\varepsilon}-term $e$ is not critical in $\pi_i$: Its original critical
formulas are replaced by $A(t_i) \lif (A(t_j') \lif A(t_i))$, which
are tautologies. By (1) of the preceding Lemma, no critical
\ensuremath{\varepsilon}-term of rank~$r$ was changed at all. By (2) of the preceding
Lemma, no critical \ensuremath{\varepsilon}-term of rank $< r$ was replaced by a
critical \ensuremath{\varepsilon}-term of rank~$\ge r$. Hence, $o(\pi_i, r) = o(\pi, r) -
1$.
Let $\pi''$ be the sequence of tautologies $\lnot \bigvee_{i=1}^n
A(t_i) \lif (A(t_i) \lif A(e))$ followed by $\pi$. Then
$\lnot\bigvee_{i=1}^n A(t_i) \proves[\pi'']{\ensuremath{\varepsilon}} E$, $e$ is not critical in
$\pi''$, and otherwise $\pi$ and $\pi''$ have the same critical
formulas. The same goes for $\pi''[\lnot \bigvee A(t_i)]$, a proof
of $\lnot\bigvee A(t_i) \lif E$.
We now obtain $\pi'$ as the $\pi_i$, $i = 1$, \dots,~$n$, followed
by $\pi''[\lnot \bigvee_{i=1}^n A(t_i)]$, followed by the tautology
\[
(\lnot \bigvee A(t_i) \lif E) \lif ((A(t_1) \lif E) \lif (\dots \lif
((A(t_n) \lif E) \lif E)\dots))
\]
from which $E$ follows by $n+1$ applications of (MP). \end{proof}
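The construction in the preceding proof may be easier to follow in the smallest case, $n = 1$ (illustrative formula $A$ and term $t$):

```latex
\begin{quote}
  \emph{Example.} Suppose $\proves[\pi]{\ensuremath{\varepsilon}} E$ with $E$
  \ensuremath{\varepsilon}-free, and the only critical formula of $e \seq \meps x{A(x)}$
  in~$\pi$ is $A(t) \lif A(e)$. Replacing $e$ by $t$ throughout and
  applying Lemma~\ref{ded-lemma} gives a proof $\pi_1$ of $A(t) \lif
  E$. Prefixing $\pi$ with the tautology $\lnot A(t) \lif (A(t) \lif
  A(e))$ gives a proof of $E$ from $\lnot A(t)$ in which $e$ is no
  longer critical, and hence a proof of $\lnot A(t) \lif E$. From
  these, $E$ follows by the tautology $(\lnot A(t) \lif E) \lif
  ((A(t) \lif E) \lif E)$ and two applications of (MP).
\end{quote}
```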
\begin{theorem}[First Epsilon Theorem for \ensuremath{\mathrm{EC}_\eps}]
If $E$ is a formula not containing any \ensuremath{\varepsilon}-terms
and $\proves{\ensuremath{\varepsilon}} E$, then $\proves{} E$. \end{theorem}
\begin{proof}
By induction on $o(\pi, r)$, we have: if $\proves[\pi]{\ensuremath{\varepsilon}} E$,
then there is a proof $\pi^*$ of $E$ with $\rk{\pi^*} < r$. By
induction on $\rk{\pi}$ we have a proof $\pi^{**}$ of $E$ with
$\rk{\pi^{**}} = 0$, i.e., without critical formulas at all. \end{proof}
\begin{corollary}[Extended First \ensuremath{\varepsilon}-Theorem]
If $\proves{\ensuremath{\varepsilon}} E(e_1, \dots, e_n)$, then $\proves{}
\bigvee_{j=1}^m E(t_1^j, \dots, t_n^j)$ for some terms $t_i^j$ (in \mrm{EC}). \end{corollary}
\begin{proof} If $E$ contains \ensuremath{\varepsilon}-terms, say, $E$ is $E(e_1, \dots, e_n)$, then replacement of \ensuremath{\varepsilon}-terms in the construction of $\pi_i$ may change $E$---but of course only the \ensuremath{\varepsilon}-terms appearing as subterms in it. In each step we obtain not a proof of $E$ but of some disjunction of instances $E(e_1', \dots, e_n')$. For details, see \cite{MoserZach:06}. \end{proof}
\subsection{The Case with Identity}
In the presence of the identity ($=$) predicate in the language, things get a bit more complicated. The reason is that instances of the ($=_2$) axiom schema, \[ t = u \lif (A(t) \lif A(u)) \] may also contain \ensuremath{\varepsilon}-terms, and the replacement of an \ensuremath{\varepsilon}-term~$e$ by a term~$t_i$ in the construction of $\pi_i$ may result in a formula which no longer is an instance of ($=_2$). For instance, suppose that $t$ is a subterm of $e = e'(t)$ and $A(t)$ is of the form $A'(e'(t))$. Then the original axiom is \[ t = u \lif (A'(e'(t)) \lif A'(e'(u))) \] which after replacing $e = e'(t)$ by $t_i$ turns into \[ t = u \lif (A'(t_i) \lif A'(e'(u))). \] So this must be avoided. In order to do this, we first observe that just as in the case of the predicate calculus, the instances of ($=_2$) can be derived from restricted instances. In the case of the predicate calculus, the restricted axioms are \begin{align}
t = u & \lif (P^n(s_1, \dots, t, \dots, s_n) \lif P^n(s_1, \dots, u,
\dots, s_n)) \tag{$=_2'$} \\
t = u & \lif f^n(s_1, \dots, t, \dots, s_n) = f^n(s_1, \dots, u,
\dots, s_n) \tag{$=_2''$} \\ \intertext{to which we have to add the \emph{\ensuremath{\varepsilon}-identity axiom schema:}}
t = u & \lif \meps x {A(x; s_1, \dots, t, \dots s_n)} =
\meps x {A(x; s_1, \dots, u, \dots s_n)} \tag{$=_\ensuremath{\varepsilon}$} \end{align} where $\meps x {A(x; x_1, \dots, x_n)}$ is an \ensuremath{\varepsilon}-type.
\begin{proposition}\label{atomic}
Every instance of $(=_2)$ can be derived from $(=_2')$, $(=_2'')$,
and $(=_\ensuremath{\varepsilon})$. \end{proposition}
\begin{proof}
By induction. \end{proof}
Now replacing every occurrence of $e$ in an instance of ($=_2'$) or ($=_2''$)---where $e$ obviously can only occur inside one of the terms $t$, $u$, $s_1$, \dots, $s_n$---results in a (different) instance of ($=_2'$) or ($=_2''$). The same is true of ($=_\ensuremath{\varepsilon}$), \emph{provided
that} the $e$ is neither $\meps x{A(x; s_1, \dots, t, \dots s_n)}$ nor $\meps x{A(x; s_1, \dots, u, \dots s_n)}$. This would be guaranteed if the type of $e$ is not $\meps x {A(x; x_1, \dots,
x_n)}$, in particular, if the rank of $e$ is higher than the rank of $\meps x {A(x; x_1, \dots, x_n)}$. Moreover, the result of replacing $e$ by $t_i$ in any such instance of $(=_\ensuremath{\varepsilon}$) results in an instance of $(=_\ensuremath{\varepsilon})$ which belongs to the same \ensuremath{\varepsilon}-type. Thus, in order for the proof of the first \ensuremath{\varepsilon}-theorem to work also when $=$ and axioms $(=_1)$, $(=_2')$, $(=_2''$), and $(=_\ensuremath{\varepsilon})$ are present, it suffices to show that the instances of $(=_\ensuremath{\varepsilon})$ with \ensuremath{\varepsilon}-terms of rank~$\rk{\pi}$ can be removed. Call an \ensuremath{\varepsilon}-term $e$ \emph{special} in~$\pi$, if $\pi$ contains an occurrence of $t = u \lif e' = e$ as an instance of $(=_\ensuremath{\varepsilon})$.
\begin{theorem}
If $\proves[\pi]{\ensuremath{\varepsilon}=} E$, then there is a proof $\pi^=$ so that
$\proves[\pi^=]{\ensuremath{\varepsilon}=} E$, $\rk{\pi^=} = \rk{\pi}$, and the
special \ensuremath{\varepsilon}-terms in $\pi^=$ have rank $< \rk{\pi}$. \end{theorem}
\begin{proof}
The basic idea is simple: Suppose $t = u \lif e' = e$ is an instance of $(=_\ensuremath{\varepsilon})$, with $e' \seq \meps x {A(x; s_1, \dots, t, \dots s_n)}$ and $e \seq \meps x {A(x; s_1, \dots, u, \dots s_n)}$. Replace $e$ everywhere in the proof by~$e'$. Then the instance of $(=_\ensuremath{\varepsilon})$ under consideration is removed, since it is now provable from $e' = e'$. This potentially interferes with critical formulas belonging to $e$, but this can also be fixed: we just have to show that by a judicious choice of~$e$ it can be done in such a way that the other $(=_\ensuremath{\varepsilon})$ axioms are still of the required form.
Let $p = \meps x{A(x; x_1, \dots, x_n)}$ be an \ensuremath{\varepsilon}-type of rank $\rk{\pi}$, and let $e_1$, \dots, $e_l$ be all the \ensuremath{\varepsilon}-terms of type~$p$ which have a corresponding instance of~$(=_\ensuremath{\varepsilon})$ in~$\pi$. Let $T_i$ be the set of all immediate subterms of $e_1$, \dots, $e_l$ in the same position as $x_i$, i.e., the smallest set of terms so that if $e_j \seq \meps x{A(x; t_1, \dots, t_n)}$, then $t_i \in T_i$. Now let $T^*$ be the set of all instances of $p$ with terms from $T_i$ substituted for the $x_i$. Obviously, each $T_i$ and thus $T^*$ is finite (up to renaming of bound variables). Pick a strict order $\prec$ on $T_1 \cup \dots \cup T_n$ which respects degree, i.e., if $\deg{t} < \deg{u}$ then $t \prec u$. Extend $\prec$ to $T^*$ by \[
\meps x{A(x; t_1, \dots, t_n)} \prec \meps x{A(x; t'_1, \dots, t'_n)} \] iff \begin{enumerate} \item $\max\{\deg{t_i} : i = 1, \dots, n\} < \max\{\deg{t'_i} : i = 1, \dots, n\}$ or \item $\max\{\deg{t_i} : i = 1, \dots, n\} = \max\{\deg{t'_i} : i = 1, \dots, n\}$ and there is a $k < n$ such that \begin{enumerate} \item $t_i \seq t_i'$ for $i = 1$, \dots,~$k$, and \item $t_{k+1} \prec t_{k+1}'$. \end{enumerate} \end{enumerate} \end{proof}
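Purely as an illustration of the order just defined (our own sketch, not part of the formal development), one can model terms abstractly as (degree, label) pairs and implement $\prec$ as a comparator: instances of an \ensuremath{\varepsilon}-type are compared first by the maximal degree of their immediate subterms, then lexicographically at the first position where they differ.

```python
# Illustrative sketch only: terms are modeled as (degree, label) pairs,
# and an instance of an epsilon-type is the list of its immediate
# subterms t_1, ..., t_n.

def deg(term):
    """Degree of a term; in this toy model the degree is stored directly."""
    return term[0]

def precedes_terms(t, u):
    """A strict order on terms that respects degree: lower degree comes
    first, ties are broken by the label (any fixed tie-break works)."""
    return (deg(t), t[1]) < (deg(u), u[1])

def precedes_instances(ts, us):
    """The extension of the order to instances eps x A(x; t_1, ..., t_n):
    compare the maximal subterm degree first, then compare
    lexicographically at the first differing position."""
    m_t = max(deg(t) for t in ts)
    m_u = max(deg(u) for u in us)
    if m_t != m_u:
        return m_t < m_u
    for t, u in zip(ts, us):
        if t == u:
            continue
        return precedes_terms(t, u)
    return False  # identical instances are not strictly ordered
```

Since degrees are bounded and the tie-break is lexicographic, this comparator is a strict order on the (finite) set of instances, which is all the lemma below requires.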
\begin{lemma}
Suppose $\proves[\pi]{\ensuremath{\varepsilon}=} E$, $e$ a special \ensuremath{\varepsilon}-term in $\pi$
with $\rk{e} = \rk{\pi}$, $\deg{e}$ maximal among the special
\ensuremath{\varepsilon}-terms of rank~$\rk{\pi}$, and $e$ maximal with respect to
$\prec$ defined above. Let $t = u \lif e' = e$ be an instance of
$(=_\ensuremath{\varepsilon})$ in $\pi$. Then there is a proof $\pi'$,
$\proves[\pi']{\ensuremath{\varepsilon}=} E$ such that
\begin{enumerate}
\item $\rk{\pi'} = \rk{\pi}$
\item $\pi'$ does not contain $t = u \lif e' = e$ as an axiom
\item Every special \ensuremath{\varepsilon}-term $e''$ of $\pi'$ with the same type
as $e$ is so that $e'' \prec e$.
\end{enumerate} \end{lemma}
\begin{proof}
Let $\pi_0 = \ST \pi e {e'}$ and suppose $t' = u' \lif e''' = e''$
is an $(=_\ensuremath{\varepsilon})$ axiom in $\pi$.
If $\rk{e''} < \rk{e}$, then the replacement of $e$ by $e'$ can only
change subterms of $e''$ and $e'''$. In this case, the uniform
replacement results in another instance of $(=_\ensuremath{\varepsilon})$ with
\ensuremath{\varepsilon}-terms of the same \ensuremath{\varepsilon}-type, and hence of the same rank $<
\rk{\pi}$, as the original.
If $\rk{e''} = \rk{e}$ but has a different type than $e$, then this
axiom is unchanged in $\pi_0$: Neither $e''$ nor $e'''$ can be $\seq
e$, because they have different \ensuremath{\varepsilon}-types, and neither $e''$ nor
$e'''$ (nor $t'$ or $u'$, which are subterms of $e''$, $e'''$) can
contain $e$ as a subterm, since then $e$ wouldn't be degree-maximal
among the special \ensuremath{\varepsilon}-terms of $\pi$ of rank~$\rk{\pi}$.
If the type of $e''$, $e'''$ is the same as that of $e$, $e$ cannot
be a proper subterm of $e''$ or $e'''$, since otherwise $e''$ or
$e'''$ would again be a special \ensuremath{\varepsilon}-term of rank~$\rk{\pi}$ but of
higher degree than~$e$. So either $e \seq e''$ or $e \seq e'''$,
without loss of generality suppose $e \seq e''$. Then the
$(=_\ensuremath{\varepsilon})$ axiom in question has the form
\[
t' = u' \lif \underbrace{\meps x {A(x; s_1, \dots t', \dots
s_n)}}_{e'''} = \underbrace{\meps x {A(x; s_1, \dots u', \dots
s_n)}}_{e'' \seq e}
\]
and with $e$ replaced by $e'$:
\[
t' = u' \lif \underbrace{\meps x {A(x; s_1, \dots t', \dots
s_n)}}_{e'''} = \underbrace{\meps x {A(x; s_1, \dots t, \dots
s_n)}}_{e'}
\]
which is no longer an instance of $(=_\ensuremath{\varepsilon})$, but can be proved from
new instances of~$(=_\ensuremath{\varepsilon})$. We have to distinguish two cases
according to whether the indicated position of $t$ and $t'$ in $e'$,
$e'''$ is the same or not. In the first case, $u \seq u'$, and the
new formula
\begin{align}
t' = u & \lif \underbrace{\meps x {A(x; s_1, \dots t', \dots
s_n)}}_{e'''} = \underbrace{\meps x {A(x; s_1, \dots t, \dots
s_n)}}_{e'} \notag\\
\intertext{can be proved from $t = u$ together with} t' = t & \lif
\underbrace{\meps x {A(x; s_1, \dots t', \dots s_n)}}_{e'''} =
\underbrace{\meps x {A(x; s_1, \dots t, \dots
s_n)}}_{e'} \tag{$=_\ensuremath{\varepsilon}$}\\
t = u & \lif (t' = u \lif t' = t) \tag{$=_2'$}
\end{align}
Since $e'$ and $e'''$ already occurred in~$\pi$, by assumption $e'$, $e'''
\prec e$.
In the second case, the original formulas read, with terms indicated:
\begin{align*}
t = u & \lif \underbrace{\meps x {A(x; s_1, \dots t, \dots, u',
\dots, s_n)}}_{e'} = \underbrace{\meps x {A(x; s_1, \dots u,
\dots, u', \dots,
s_n)}}_{e} \\
t' = u' & \lif \underbrace{\meps x {A(x; s_1, \dots u, \dots, t',
\dots, s_n)}}_{e'''} = \underbrace{\meps x {A(x; s_1, \dots u,
\dots, u', \dots,
s_n)}}_{e'' \seq e} \\
\intertext{and with $e$ replaced by $e'$ the latter becomes:}
t' = u' & \lif \underbrace{\meps x {A(x; s_1, \dots u, \dots, t', \dots
s_n)}}_{e'''} = \underbrace{\meps x {A(x; s_1, \dots t, \dots,
u', \dots, s_n)}}_{e'} \\
\intertext{This new formula is provable from $t = u$ together with}
u = t & \lif \underbrace{\meps x {A(x; s_1, \dots u, \dots, t', \dots
s_n)}}_{e'''} = \underbrace{\meps x {A(x; s_1, \dots t, \dots,
t', \dots, s_n)}}_{e''''} \\
t' = u' & \lif \underbrace{\meps x {A(x; s_1, \dots t, \dots, t', \dots
s_n)}}_{e''''} = \underbrace{\meps x {A(x; s_1, \dots t, \dots,
u', \dots, s_n)}}_{e'}
\end{align*}
and some instances of $(=_2')$. Hence, $\pi'$ contains a
(possibly new) special \ensuremath{\varepsilon}-term~$e''''$. However, $e''''
\prec e$.
In the special case where $e'' \seq e$ and $e''' \seq e'$, i.e., where
the axiom in question is the instance of $(=_\ensuremath{\varepsilon})$ we started with,
replacing $e$ by $e'$ results in $t = u \lif e' = e'$, which is
provable from $e' = e'$, an instance of $(=_1)$.
Let $\pi_1$ be $\pi_0$ with the necessary new instances of
$(=_\ensuremath{\varepsilon})$ added. The instances of $(=_\ensuremath{\varepsilon})$ in $\pi_1$ satisfy
the properties required in the statement of the lemma.
However, the replacement of $e$ by $e'$ may also have affected
some of the critical formulas in the original proof. A
critical formula to which $e \seq \meps x {A(x, u)}$ belongs is of the form
\begin{align}
A(t', u) & \lif A(\meps x {A(x, u)}, u) \\
\intertext{which after replacing $e$ by $e'$ becomes}
A(t'', u) & \lif A(\meps x {A(x, t)}, u) \label{newcrit}
\end{align}
which is no longer a critical formula. This formula, however, can
be derived from $t = u$ together with
\begin{align}
A(t'', t) & \lif A(\meps x {A(x, t)}, t) \tag{\ensuremath{\varepsilon}}\\
t = u & \lif (A(\meps x{A(x, t)}, t) \lif A(\meps x{A(x, t)}, u))
\tag{$=_2$} \\
u = t & \lif (A(t'', u) \lif A(t'', t)) \tag{$=_2$}
\end{align}
Let $\pi_2$ be $\pi_1$ plus these derivations of (\ref{newcrit})
with the instances of $(=_2)$ themselves proved from $(=_2')$ and
$(=_\ensuremath{\varepsilon})$. The rank of the new critical formulas is the same, so
the rank of $\pi_2$ is the same as that of~$\pi$. The new
instances of $(=_\ensuremath{\varepsilon})$ required for the derivation of the last
two formulas only contain \ensuremath{\varepsilon}-terms of lower rank than that
of~$e$, as can be verified.
$\pi_2$ is thus a proof of $E$ from $t = u$ which satisfies the
conditions of the lemma. From it, we obtain a proof $\pi_2[t =
u]$ of $t = u \lif E$ by the deduction theorem. On the other
hand, the instance $t = u \lif e' = e$ under consideration can
also be proved trivially from $t \neq u$. The resulting proof
$\pi[t \neq u]$ is then a proof of $t \neq u \lif E$ which likewise
satisfies the conditions of the lemma. We obtain $\pi'$ by
combining the two proofs by a case distinction on $t = u$. \end{proof}
\begin{theorem}[First Epsilon Theorem for \ensuremath{\mathrm{EC}_\eps^=}]
If $E$ is a formula not containing any \ensuremath{\varepsilon}-terms
and $\proves{\ensuremath{\varepsilon}=} E$, then $\proves{=} E$ (in \ensuremath{\mathrm{EC}^=}). \end{theorem}
\begin{proof}
By repeated application of the Lemma, every instance of $(=_\ensuremath{\varepsilon})$
involving \ensuremath{\varepsilon}-terms of a given type~$p$ can be eliminated from~$\pi$.
The Theorem follows by induction on the number of different
types of special \ensuremath{\varepsilon}-terms of rank~$\rk{\pi}$ in~$\pi$. \end{proof}
\section{Proof Theory of the Epsilon Calculus}\label{proofth}
\subsection{Sequent Calculi}
Leisenring \cite{Leisenring:1969} presented a one-sided sequent calculus for the \ensuremath{\varepsilon}-calculus. It operates on sets of formulas (sequents); proofs are trees of sets of formulas each of which is either an axiom (at a leaf of the tree) or follows from the sets of formulas above it by an inference rule. Axioms are $A, \lnot A$. The rules are given below: \begin{center} \begin{tabular}{ccc} $\infer[\land R]{\Gamma, A \land B}{\Gamma, A & \Gamma, B}$ & $\infer[\land L]{\Gamma, \lnot(A \land B)}{\Gamma, \lnot A, \lnot B}$ & $\infer[\lnot\lnot]{\Gamma, \lnot\lnot A}{\Gamma, A}$ \\ $\infer[\lor R]{\Gamma, A \lor B}{\Gamma, A, B}$ & $\infer[\lor L]{\Gamma, \lnot(A \lor B)}{\Gamma, \lnot A & \Gamma, \lnot B}$ & $\infer[\textit{cut}]{\Pi, \Lambda}{\Pi, A & \Lambda, \lnot A}$ \\ $\infer[\exists R]{\Gamma, \exists x\, A(x)}{\Gamma, A(t)}$ & $\infer[\exists L]{\Gamma, \lnot \exists x\, A(x)}{\Gamma, \lnot A(\meps x A(x))}$ & $\infer[w]{\Gamma, A, B}{\Gamma, A}$ \\ $\infer[\forall R]{\Gamma, \forall x\, A(x)}{\Gamma, A(\meps x \lnot A(x))}$ & $\infer[\forall L]{\Gamma, \lnot\forall x\, A(x)}{\Gamma, \lnot A(t)}$ \end{tabular} \end{center} In contrast to classical sequent systems, there are no eigenvariable conditions!
It is complete, since proofs can easily be translated into derivations in $\mrm{EC}_\ensuremath{\varepsilon}$; in particular it derives critical formulas: \[ \infer[\textit{cut}]{\lnot A(t), A(\meps x A(x))}{
\infer[\exists R]{\lnot A(t), \dem{\exists x\, A(x)}}{\lnot A(t), A(t)}
&
\infer[\exists L]{\dem{\lnot\exists x\, A(x)}, A(\meps x A(x))}{\lnot A(\meps x A(x)), A(\meps x A(x))}} \] This sequent, however, has no cut-free proof.
Maehara \cite{Maehara:55} instead proposed to simply add axioms corresponding to critical formulas and leave out quantifier rules. Hence, its axioms are $\lnot A, A$ and $\lnot A(t), A(\meps x A(x))$. It is complete, since the additional axioms allow derivation of critical formulas. However, it is also not cut-free complete. Converses of critical formulas are derivable using cut: \[ \infer[\mathit{cut}]{\lnot A(\meps x \lnot A(x)), A(t)}{ \dem{\lnot\lnot A(t)}, \lnot A(\meps x \lnot A(x)) & \dem{\lnot A(t)}, A(t) } \] But these obviously have no cut-free proof. Furthermore, addition of these converses as axioms will not result in a cut-free complete system, either. Consider the example given by Wessels: Let $e = \meps x \lnot(A(x) \lor B(x))$. \[ \infer[\mathit{cut}]{\lnot A(\meps x \lnot(A(x) \lor B(x))), A(t) \lor B(t)}
{
\infer*[\mathit{cut}]{\dem{\lnot (A(e) \lor B(e))}, A(t) \lor B(t)}{
\dem{\lnot\lnot(A(t) \lor B(t))}, \lnot (A(e) \lor B(e))
}
&
\infer[\lor R]{\lnot A(e), \dem{A(e) \lor B(e)}}{\lnot A(e), A(e)}
} \]
Wessels \cite{Wessels:77} proposed to add instead the following rule to the propositional one-sided sequent calculus: \[ \infer[\eps0]{\Gamma, \Delta(\meps x A(x))}{ \Gamma, \Delta(z), \lnot A(z) & \Gamma, A(t) } \] Here, $\Delta(z)$ must not be empty, and $z$ may not occur in the lower sequent. This system also derives critical formulas, and so is complete: \[ \infer[\eps0]{\lnot A(t), A(\meps x A(x))}{
\infer[w]{\underbrace{\lnot A(t)}_\Gamma, \underbrace{A(z)}_{\Delta(z)}, \lnot A(z)}{A(z), \lnot A(z)}
&
\underbrace{\lnot A(t)}_\Gamma, A(t) } \] The rule $\eps0$ is sound.\footnote{Suppose the upper sequents are
valid but the lower sequent is not, i.e., for some $\mathfrak{M}, \Psi, s$,
$\mathfrak{M}, \Psi, s \not\models \Gamma, \Delta(\meps x A(x))$. In particular,
$\mathfrak{M},\Psi, s \not\models \Gamma$. Hence, $\mathfrak{M},\Psi,s \models A(t)$,
i.e., $\mathfrak{M},\Psi,s \models A'(t, t_1, \ldots, t_n)$, as the right
premise is valid. So $\val \mathfrak{M} \Psi s t \in \val \mathfrak{M} \Psi s
{A(x)}$. Now let $s(z) = \val \mathfrak{M} \Psi s {\meps x A'(x, t_1, \ldots,
t_n)}$. Then $\mathfrak{M}, \Psi, s \models A(z)$ and so $\mathfrak{M}, \Psi, s
\not\models \lnot A(z)$. Since the left premise is valid, $\mathfrak{M}, \Psi,
s \models \Delta(z)$. But also $\mathfrak{M}, \Psi, s \not\models \Delta(z)$
since $\mathfrak{M}, \Psi, s \not\models \Delta(\meps x A(x))$.}
Wessels offered a cut-elimination proof for her system. However, the proof relied on a false lemma to which Maehara gave a counterexample. \begin{center} \textbf{Wessels' Lemma.} If $\vdash \Gamma, \Delta(\meps x A(x))$ then $\vdash \Gamma, \Delta(z), \lnot A(z)$. \end{center}
Let $A(x) = P(x, \meps y Q(\meps u P(u, y)))$, $\Delta(z) = Q(z)$, and $\Gamma = \lnot Q(\meps x P(x, w))$. Then \[ \underbrace{\lnot Q(\meps x P(x, w))}_\Gamma, \underbrace{Q(\meps x P(x, \meps y Q(\meps u P(u, y))))}_{\Delta(\meps x A(x))} \] is derivable, since it is of the form $\lnot B(w), B(\meps y B(y))$ (taking $B(y) \seq Q(\meps u P(u, y))$). However, the corresponding sequent in the consequent of the lemma, \[ \underbrace{\lnot Q(\meps x P(x, w))}_\Gamma, \underbrace{Q(z)}_{\Delta(z)}, \underbrace{\lnot P(z, \meps y Q(\meps u P(u, y)))}_{\lnot A(z)} \] is not derivable, because it is not valid.\footnote{Let $\card{\mathfrak{M}} = \{1,
2\}$, $Q^\mathfrak{M} = \{1\}, P^\mathfrak{M} = \{\langle 1,2\rangle, \langle
2,2\rangle\}$, $s(z) = s(w) = 2$. Since $\langle 1, 2\rangle \in P^\mathfrak{M}$, we
can choose $\Psi$ so that $\val \mathfrak{M} \Psi s {\meps x P(x, 2)} = 1$. So
$\mathfrak{M}, \Psi, s \not\models \lnot Q(\meps x P(x, w))$. Also,
$\mathfrak{M},\Psi,s \not\models Q(z)$. As $\val \mathfrak{M} \Psi s {\meps u P(u, 2)}
= 1$ and $1 \in Q^\mathfrak{M}$, we can also fix $\Psi$ so that $\val \mathfrak{M} \Psi
s {\meps y Q(\meps u P(u, y))} = 2$. But then $\mathfrak{M}, \Psi, s
\not\models \lnot P(z, \meps y Q(\meps u P(u, y)))$.}
Mints (in a review of Wessels' paper) proposed the following rule instead: \[ \infer[\eps1]{\Gamma, \Delta(\meps x A(x))}{ \Gamma, \Delta(\meps x A(x)), \lnot A(\meps x A(x)) & \Gamma, A(t) } \] It, too, derives all critical formulas: \[ \infer[\eps1]{\lnot A(t), A(\meps x A(x))}{
\infer[w]{\underbrace{\lnot A(t)}_\Gamma, \underbrace{A(\meps x A(x))}_\Delta, \lnot A(\meps x A(x))}{A(\meps x A(x)), \lnot A(\meps x A(x))}
&
\underbrace{\lnot A(t)}_\Gamma, A(t) } \] The system was developed in detail by Yasuhara \cite{Yasuhara:82}. The Mints-Yasuhara system is cut-free complete. However, it is not known if the sequent has a cut-elimination theorem that transforms a proof with cuts successively into one without cuts. Both Gentzen's and Tait's approach to cut-elimination do not seem to work. In a Gentzen-style proof, the main induction is on on cut length, i.e., the height of the proof tree above an uppermost cut. In the induction step, a cut is permuted upward to reduce the cut length. For instance, we replace the subproof proof ending in a cut \[ \infer[\textit{cut}]{\Pi, \Lambda, \exists x\, B(x)}{
\infer*[\pi]{\Pi, \dem{A}}{} &
\infer[\exists R]{\dem{\lnot A}, \Lambda, \exists x\, B(x)}{
\infer*[\pi']{\lnot A, \Lambda, B(t)}{}
} } \qquad\text{by}\qquad \infer[\exists R]{\Pi, \Lambda, \exists x\, B(x)}{ \infer[\textit{cut}]{\Pi, \Lambda, B(t)}{
\infer*[\pi]{\Pi, \dem{A}}{} &
\infer*[\pi']{\dem{\lnot A}, \Lambda, B(t)}{}
} } \] To permute a cut across the $\ensuremath{\varepsilon} 1$ rule: \[ \infer[\textit{cut}]{\Pi, \Gamma, \Delta(\meps x B(x))}{
\infer*[\pi]{\Pi, \dem{A}}{} &
\infer[\ensuremath{\varepsilon} 1]{\dem{\lnot A}, \Gamma, \Delta(\meps x B(x))}{
\infer*[\pi']{\lnot A, \Gamma, \Delta(\meps x B(x)), \lnot B(\meps x B(x))}{}
&
\infer*[\pi'']{\Gamma, B(t)}{}
} } \] one might try to replace the proof tree with \[ \infer[\ensuremath{\varepsilon} 1]{\Pi, \Gamma, \Delta(\meps x B(x))}{ \infer[\textit{cut}]{\Pi, \Gamma, \Delta(\meps x B(x)), \lnot B(\meps x B(x))}{
\infer*[\pi]{\Pi, \dem{A}}{} &
\infer*[\pi']{\dem{\lnot A}, \Gamma, \Delta(\meps x B(x)), \lnot B(\meps x B(x))}{} } &
\infer*[\pi'']{\Gamma, B(t)}{} } \] However, here the condition on $\eps1$ is violated if $\lnot A$ is in $\Delta$.
In a Tait-style cut elimination proof, the main induction is on cut rank, i.e., complexity of the cut formula. In the induction step, the complexity of the cut formula is reduced. For instance, if a subproof ends in a cut \[ \infer[\mathit{cut}]{\Pi, \Lambda}{
\infer*[\pi]{\Pi, \dem{\lnot(A \land B)}}{}
&
\infer*[\pi']{\Lambda, \dem{A \land B\strut}}{} } \] we replace it with \[ \infer[\mathit{cut}]{\Pi, \Lambda}{
\infer[\mathit{cut}]{\Pi, \Lambda, \dem{\lnot B}}{
\infer*[\pi_1]{\Pi, \dem{\lnot A}, \lnot B}{}
&
\infer*[\pi'_1]{\Lambda, \dem{A}}{}
}
&
\infer*[\pi'_2]{\Lambda, \dem{B}}{} } \] This approach requires \emph{inversion lemmas}. A typical case is: If $\pi' \vdash \Pi, A \land B$ then there is a $\pi_1' \vdash \Pi, A$ of cut rank and length $\le$ that of $\pi'$. In the proof of the inversion lemma, one replaces all ancestors of $A \land B$ in $\pi'$ by $A$ and ``fixes'' those rules that are no longer valid. For instance, replace \[ \infer[\land R]{\Gamma, A}{
\infer*{\Gamma, A}{} & \infer*{\Gamma, B}{} } \quad\text{by} \quad \infer*{\Gamma, A}{} \] But now consider a derivation $\pi'$ which contains the $\ensuremath{\varepsilon} 1$ rule:\footnote{$A \land B(\meps x C(x))$ is $\Delta(\meps x C(x))$ in
this case.} \[ \infer[\ensuremath{\varepsilon} 1]{\Pi, A \land B(\meps x C(x))}{
\infer*{\Pi, A \land B(\meps x C(x)), \lnot C(\meps x C(x))}{}
&
\infer*{\Pi, C(t)}{} } \] The inversion lemma produces \[ \infer[\ensuremath{\varepsilon} 1]{\Pi, A}{
\infer*{\Pi, A, \lnot C(\meps x C(x))}{}
&
\infer*{\Pi, C(t)}{} } \] This, again, no longer satisfies the condition of $\eps1$.
\begin{prob} Prove cut-elimination for the Mints-Yasuhara system, or give a similarly simple sequent calculus for which it can be proved. \end{prob}
\subsection{Natural deduction}
In Gentzen's classical natural deduction system NK, the quantifier rules are given by \begin{center}
\begin{tabular}{c@{\qquad}c} $\infer[\forall I]{\forall x\, A(x)}{A(z)}$ & $\infer[\forall E]{A(t)}{\forall x\, A(x)}$ \\ $\infer[\exists I]{\exists x\, A(x)}{A(t)}$ & $\infer[\exists E]{B}{\exists x\, A(x) & \infer*{C}{[A(z)]}}$
\end{tabular}
\end{center} where $z$ must not appear in any undischarged assumptions (nor in $A(x)$ or $B$). Meyer Viol \cite{MeyerViol:95} has proposed a system in which the $\exists E$ rule is replaced by \[ \infer[\exists E_\ensuremath{\varepsilon}]{A(\meps x A(x))}{\exists x\, A(x)} \] and the following term rule is added \[ \infer[I\ensuremath{\varepsilon}]{A(\meps x A(x))}{A(t)} \]
\begin{prob} Does Meyer Viol's system have a normal form theorem? \end{prob}
Adding $\exists E_\ensuremath{\varepsilon}$ and $I\ensuremath{\varepsilon}$ to the intuitionistic system NJ results in a system that is not conservative over intuitionistic logic. For instance, \emph{Plato's principle}, the formula \[ \exists x(\exists y\, A(y) \to A(x)) \] becomes derivable: \[ \infer[\exists I]{\exists x(\exists y\, A(y) \to A(x))}{
\infer[\to I]{\exists y\, A(y) \to A(\meps x A(x))}{
\infer[\exists E_\ensuremath{\varepsilon}]{A(\meps x A(x))}{[\exists y\, A(y)]}
}} \] However, the system also does not collapse to classical logic: it is conservative for propositional formulas.
Intuitionistic natural deduction systems are especially intriguing, as Abadi, Gonthier and Werner \cite{AbadiGonthierWerner:04} have shown that a system of quantified propositional intuitionistic logic with a choice operator $\meps X$ can be given a Curry-Howard correspondence via a type system in which $\meps X A(X)$ is a type such that $A(\meps X A(X))$ is inhabited whenever $A(T)$ is inhabited for some type~$T$. System $\cal E$ is paired with a simply typed $\lambda$-calculus that, in addition to $\lambda$-abstraction and application, features \emph{implementation}: \( \langle t\colon A \text{ with } X = T\rangle \) of type $A(\epsilon_X A/X)$. If $A(X)$ is a type specification of an interface with variable type $X$, then $A(T)$ for some type $T$ is an implementation of that interface.
\nocite{Mancosu:98}
\end{document}
\begin{document}
\title{Smooth Symbolic Regression:\\Transformation of Symbolic Regression into a Real-valued Optimization Problem} \titlerunning{Smooth Symbolic Regression} \toctitle{Smooth Symbolic Regression: Transformation of Symbolic Regression into a Real-valued Optimization Problem}\footnotetext[1]{The final publication is available at \url{https://link.springer.com/chapter/10.1007/978-3-319-27340-2_47}} \author{Erik Pitzer \and Gabriel Kronberger} \authorrunning{E. Pitzer and G. Kronberger} \tocauthor{Erik Pitzer, and Gabriel Kronberger} \institute{Heuristic and Evolutionary Algorithms Laboratory, School of Informatics, Communications and Media, University of Applied Sciences Upper Austria, Franz-Fritsch Strasse 11, 4600 Wels, AUSTRIA, \email{\{erik.pitzer,gabriel.kronberger\}@fh-hagenberg.at}, \texttt{http://heal.heuristiclab.com} and \texttt{http://fh-ooe.at}}
\maketitle
\begin{abstract}
The typical methods for symbolic regression produce rather abrupt changes in
solution candidates. In this work, we have tried to transform symbolic
regression from an optimization problem, with a landscape that is so rugged
that typical analysis methods do not produce meaningful results, to one that
can be compared to typical and very smooth real-valued problems. While the
ruggedness might not interfere with the performance of optimization, it
restricts the possibilities of analysis. Here, we have explored different
aspects of a transformation and propose a simple procedure to create
real-valued optimization problems from symbolic regression problems.
\end{abstract}
\section{Introduction} When analyzing complex data sets not only to predict a variable from others but also trying to gain insights into the relationships between variables, symbolic regression is a very valuable tool. It can often produce human-interpretable explanations for relationships between variables. The considered formulas are explored by substituting variables or re-organizing the syntax tree. This results in rather abrupt changes in the behavior of these formulas.
Genetic programming (GP) \cite{Koza1992} is a technique that uses an evolutionary algorithm to evolve computer programs that solve a given problem when executed. These programs are often represented as symbolic expression trees, and operations such as crossover and mutation are performed on sub-trees. One particular problem that can be solved by GP is symbolic regression \cite{Koza1992}, where the goal is to find a function, mapping the known values of input variables to the value of a target variable with minimal error.
Several specialized and improved variants of GP for symbolic regression have already been described in the literature e.g. \cite{Keijzer2003,Kotanchek2007,Vladislavleva2009,Kommenda2013} and it has been shown that other techniques are also viable for solving symbolic regression problems such as FFX \cite{McConaghy2011} or dynamic programming \cite{Worm2013}.
While GP -- or other (quasi-)combinatorial methods -- typically work well in identifying formulas that describe relations between variables and, therefore, do not create a black-box function but a transparent and often human-interpretable description of these relations, the process for obtaining these formulas is complicated, and considerable effort is necessary to follow its progression.
During the optimization, the symbolic expression tree can vary wildly between individuals and between generations. In other words, when a GP fitness landscape is analyzed it appears extremely rugged. A typical measure for ruggedness, the autocorrelation \cite{Weinberger1990} between two ``neighboring'' solution candidates, is usually very close to zero.
This makes it very hard to derive any meaningful conclusions about GP fitness landscapes, as typical ``neighbors'' are too dissimilar to each other, even more so when crossover operators are employed, which create even more drastic changes, let alone the difficulties of crossover landscapes themselves \cite{Stadler1998}.
In this paper, we present the idea of creating a smoother fitness landscape for symbolic regression problems by borrowing ideas from neural networks and combining them with the typical tree formulations found in genetic programming. The intent is to have a smooth transition from one syntax tree to another. At the same time, we want to ensure that the final output settles on and determines one of the available operators at each node.
\subsection{Symbolic Regression}\label{sec:symbolic-regression} In a regression problem the task is to find a mapping from a set of input variables to one or more output variables so that, using only the input values, the output can be accurately predicted. This task can be tackled with two fundamentally different approaches. On the one hand, so-called black-box models focus on providing predictions of maximum quality, sacrificing ``understanding'' of the model, for example when employing neural networks \cite{Rosenblatt1958} and deep learning \cite{Deng2013} or Support Vector Machines \cite{Cortes1995}. On the other hand, white-box models try not only to give predictions of good quality but also to provide some insight into the relationship between the variables. Examples are most prominently linear regression \cite{Freedman2009} and generalized linear regression \cite{Nelder1972}.
As an extension to these methods, symbolic regression provides great freedom in the formulation of a regression formula by allowing an arbitrary syntax tree as the formulation of the relationship between the variables. This freedom, however, comes at the price of a much larger solution space and, hence, many more possibilities for a good solution. Therefore, powerful methods have to be used to control the complexity of this approach.
Genetic Programming (GP) \cite{Koza1992} is a method that is able to conquer these problems and create white-box models with good quality and often understandable models that can give new insights into the relationships between the variables in addition to the provided predictions.
In Genetic Programming, syntax trees are usually modified using two different methods both drawing the solution candidates from a pool called the current generation. The most important form of modification is achieved via crossover, where two trees are recombined into a new tree that has some features from both predecessors. The second form of modification is mutation that randomly introduces or changes some features of the syntax tree, such as replacing operators or changing constants.
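The two modification operators just described can be sketched on a minimal tree encoding. The following Python sketch is our own illustration (nested tuples as syntax trees, a hypothetical operator set consisting of addition and multiplication), not the operators of any particular GP system:

```python
import random

# Trees as nested tuples: ('+', left, right) or a leaf such as 'x' or 1.0.

def nodes(tree, path=()):
    """Enumerate (path, subtree) pairs; a path is a tuple of child indices."""
    yield path, tree
    if isinstance(tree, tuple):
        for i, child in enumerate(tree[1:], start=1):
            yield from nodes(child, path + (i,))

def replace(tree, path, sub):
    """Return a copy of tree with the subtree at path replaced by sub."""
    if not path:
        return sub
    i = path[0]
    return tree[:i] + (replace(tree[i], path[1:], sub),) + tree[i + 1:]

def crossover(a, b, rng=random):
    """Subtree crossover: graft a random subtree of b into a random site of a."""
    path_a, _ = rng.choice(list(nodes(a)))
    _, sub_b = rng.choice(list(nodes(b)))
    return replace(a, path_a, sub_b)

def mutate(tree, rng=random):
    """Point mutation: swap the operator at a random internal node."""
    internal = [(p, t) for p, t in nodes(tree) if isinstance(t, tuple)]
    if not internal:
        return tree
    path, (op, *children) = rng.choice(internal)
    new_op = rng.choice(['+', '*'])
    return replace(tree, path, (new_op, *children))
```

Even in this toy form it is apparent that a single crossover can exchange arbitrarily large subtrees, which is the source of the abrupt fitness changes discussed below.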
Overall, genetic programming provides powerful means of evolving symbolic trees and has proven its effectiveness and efficiency in providing good-quality white-box models of difficult regression problems. On the downside, however, the changes introduced during the evolution of syntax trees are quite drastic, the process by which genetic programming arrives at these solutions is very difficult to follow, and its analysis is complicated.
\subsection{Fitness Landscape Analysis} Every optimization problem implies a so-called fitness landscape that describes the relationship of solution candidates and their associated fitness. Formally, a fitness landscape can be defined as the triple $\mathcal{F} := \left( \mathcal{S}, f, \mathcal{X} \right)$, where $\mathcal{S}$ is an arbitrary set of solution candidates for the optimization problem at hand. The function $f: \mathcal{S} \rightarrow \mathds{R}$ is the actual fitness function that assigns a value of desirability to every solution candidate; its evaluation is often the most expensive part of the optimization of a problem. Finally, $\mathcal{X}$ describes how solution candidates relate to each other: a very simple case is to define $\mathcal{X} \subseteq \mathcal{S} \times \mathcal{S}$ as a relation of neighboring solution candidates or, alternatively, as a distance function $\mathcal{X} : \mathcal{S} \times \mathcal{S} \rightarrow \mathds{R}$. The important observation is that this definition of a fitness landscape can be made for any optimization problem and, therefore, provides the foundation for very general and portable problem analysis techniques.
Based on this formulation, several different analysis methods have been proposed, many of which require a sample of the solution space, i.e.\ so-called walkable landscapes \cite{Hordijk1996}. This sample often comes in the form of a trajectory inside the solution space, following the neighborhood relation or distance function.
Based on these trajectories, different measures can be defined that characterize the fitness landscape. One example is the autocorrelation \cite{Weinberger1990}, which measures the average decay of the correlation of fitness values as the trajectory moves away from a point. Typically, it is only reported for the very first step, but it can be continued to an arbitrary distance. The distance at which the correlation is no longer statistically significant is called the correlation length, also defined in \cite{Weinberger1990}.
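To make the autocorrelation measure concrete, here is a minimal sketch of its empirical estimation from a sampled fitness trajectory (our own illustration following the usual definition, not code from the cited work):

```python
def autocorrelation(fitness, lag=1):
    """Empirical autocorrelation of a fitness trajectory at a given lag:
    the correlation between f(x_t) and f(x_{t+lag}) along a random walk
    through the landscape."""
    n = len(fitness) - lag
    mean = sum(fitness) / len(fitness)
    var = sum((f - mean) ** 2 for f in fitness) / len(fitness)
    cov = sum((fitness[t] - mean) * (fitness[t + lag] - mean)
              for t in range(n)) / n
    return cov / var
```

On a smooth landscape, where neighboring candidates have similar fitness, this value is close to 1; on a rugged GP landscape it is close to zero, which is exactly the problem addressed in this paper.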
Other techniques include the information analysis proposed in \cite{Vassilev2000}, where several measures are defined that try to capture information-theoretic characteristics of these trajectories. One particularly simple but interesting property is the information stability, which simply captures the maximum fitness difference between neighboring solution candidates and has proven to be a very characteristic property of a problem instance.
Besides these trajectory-based analysis methods several other methods have been proposed such as analytical decomposition into elementary landscapes \cite{Stadler1994,Chicano2011} or fitness distance correlation \cite{Jones1995}.
\section{Transformation} As described in the previous sections, most landscape analysis methods as well as trajectory-based optimization algorithms rely on a relatively smooth landscape or, in other words, on high correlation between neighboring solution candidates. Typical modifications in genetic programming, however, are very large, and previous attempts at applying classic fitness landscape analysis (FLA) have failed.
Figure~\ref{fig:basic-ideas} summarizes the basic ideas for this transformation: To overcome the drastic fitness changes induced by changes in the tree structure, the first simple idea is to fix the tree structure (Figure~\ref{fig:fixed-structure}) to a full (e.g.\ binary) tree. This is comparable to the limited tree depth or tree size that can often be found in genetic programming. This limits the maximum change to the replacement of an operator in the tree. However, directly switching from e.g.\ an addition to a multiplication can still have quite a large impact on the behavior of the formula. Therefore, the second idea is to make this transition smooth as well, as illustrated in Figure~\ref{fig:smooth-transition}. A very simple way to achieve this smooth transition is to use a weighted average over all possible operators as shown in Equation~\ref{eq:weighted-average}, where the $\mathrm{op}_i(x, y)$ are the possible operators and $\mathrm{op}(x, y)$ is the overall operation result. In the simplest case, when only two operators are available, i.e.\ addition and multiplication, only a single factor needs to be tuned, i.e.\ $\mathrm{op}(x, y) := \alpha \cdot (x+y) + (1 - \alpha) \cdot (x \cdot y)$. \begin{equation}
\label{eq:weighted-average}
\textrm{op}(x, y) := \sum_i^{2^d-1} \alpha_i \cdot \textrm{op}_i(x, y) \end{equation}
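For the two-operator case, this mixing can be written down directly (an illustrative sketch; the function name is ours):

```python
def smooth_op(x, y, alpha):
    # Weighted average of the two available operators:
    # alpha = 1 selects pure addition, alpha = 0 pure multiplication.
    return alpha * (x + y) + (1 - alpha) * (x * y)
```

Intermediate values of $\alpha$ blend the two behaviors, which is exactly what makes neighboring parameter vectors have similar fitness.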
\begin{figure}
\caption{Basic Ideas}
\label{fig:fixed-structure}
\label{fig:smooth-transition}
\label{fig:basic-ideas}
\end{figure}
So far, this yields a smooth optimization problem for the operator choices where only real values have to be adjusted. However, this simply replaces every operator with a weighted average of other operators and makes the formula much more complicated. Therefore, another simple addition is necessary: The overall fitness function is augmented with a penalty for undecided operator choices. In the simplest possible case, where a choice between two operators is made, an inverted quadratic function that peaks at 0.5 and intersects the abscissa at zero and one can be used. When more than two operations are available, a different penalty function is required. In this case the least penalty should occur when exactly one operator is chosen, and it should increase progressively the more weight the other operators receive. Therefore, the simple formula shown in Equation~\ref{eq:op-penalty} can be used to steer the optimization towards a unique operator choice. \begin{equation}
\label{eq:op-penalty}
p_{\mathrm{op}}(\alpha) := \left(\sum_i^{k} \alpha_i - \max_i \alpha_i \right)/(k-1) \end{equation}
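A sketch of such a penalty over the raw operator weights (any rescaling used in the actual implementation is not modeled here):

```python
def op_penalty(alphas):
    # Zero when exactly one operator carries all the weight,
    # progressively larger the more weight the other operators receive;
    # normalized by the number of competing operators.
    k = len(alphas)
    return (sum(alphas) - max(alphas)) / (k - 1)
```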
Now that the structure and the operators of the tree can be selected using only real values, the last remaining aspect is how to make a smooth choice between variables. This task is a little more complicated, as the variables should also have a weight attached. However, another simple solution can be applied, as shown in Equation~\ref{eq:variable-selection}, where the operation in the leaf node is selected in addition to the variables, and $n$ is the number of input variables in $X$. Please note that one additional weight $\beta_{i,n+1}$ is included to allow a constant to be selected, which gives the ability to ``mute'' parts of the tree by selecting, for example, a multiplication with one or an addition with zero for some subtree. \begin{equation}
\label{eq:variable-selection}
\mathrm{op}(X) := \sum_i^{2^{d-1}} \alpha_i \cdot
\mathrm{op}(\beta_{i, 1}\cdot X_1, \ldots, \beta_{i, n} \cdot X_n, \beta_{i,n+1}) \end{equation} This yields quite a large number of variable weights, as the number of leaf nodes increases exponentially with the tree depth and is further multiplied by the number of input variables. For example, a smooth symbolic regression problem with ten variables and a tree depth of five has only 31 operator weights but 176 variable weights.
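The variable mixing at a leaf can be sketched in isolation (simplified: we show a single weight vector rather than one per operator, and the names are ours):

```python
def smooth_leaf(X, betas):
    # Weighted mix of the n input variables plus one constant term;
    # betas has length len(X) + 1, the last entry weighting the constant 1.
    # Driving all variable weights to zero "mutes" the leaf.
    return sum(b * x for b, x in zip(betas, X)) + betas[-1]
```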
In Figure~\ref{fig:alt-var-sel}, several alternative ideas for variable selection schemes with fewer resulting variables are shown. However, they have not been very promising in preliminary tests. Therefore, we can only recommend using the variable selection scheme with more variables, given the complexity of the variable selection problem per se. The first idea was to use only a single angle and take the two closest variables weighted by distance, as shown in Figure~\ref{fig:angular-selection}; however, this blinds the algorithm by completely hiding other choices. It might still be a good option for other algorithms where the diversity of the population is kept very high. Another idea was to use multidimensional scaling \cite{Borg2005} to project the variables according to their correlation onto, e.g., a two-dimensional plane and use only two coordinates to choose between all variables. Figure~\ref{fig:nearest-neighbor} shows the selection of the two nearest neighbors, which has similar problems as the angular selection, as it completely hides other variable choices. The second alternative is again to use a weighted average, however, using the distance to the coordinates as the weights.
\begin{figure}
\caption{Alternative Variable Selection Strategies}
\label{fig:angular-selection}
\label{fig:nearest-neighbor}
\label{fig:weighted-average}
\label{fig:alt-var-sel}
\end{figure}
It has to be noted that, for the variable selection too, the optimization has to be guided towards limiting the number of variables. This can be achieved similarly to the operator choice, except that this time two or more non-zero weights are acceptable and their weights can be subtracted from the penalty.
In summary, the new encoding transforms a problem with $2^d-1$ tree nodes, where $d$ is the depth of the tree, $k$ possible operations at each node, and $n$ input variables into a problem with $(2^d-1)\cdot (k-1)$ operator weights $\alpha$ and $2^{d-1}\cdot (n+1)$ variable weights $\beta$. So, for example, for a depth of five, with two operations and ten variables, we get a $31+176=207$-dimensional real-valued problem instead of a combinatorial problem with $6\cdot 10^{20}$ possible choices. Obviously, this does not make the problem less complex (dimensions compared with choices); however, it makes the fitness landscape much smoother. One could think of discrete points in space in the combinatorial formulation, with the volume between them filled in by the smooth approach.
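The weight counts can be reproduced with a short calculation (assuming a full binary tree and counting the constant weight once per leaf; the function is ours):

```python
def encoding_dimensions(d, k, n):
    # d: tree depth, k: operators per node, n: input variables.
    nodes = 2 ** d - 1             # the tree structure itself is fixed
    leaves = 2 ** (d - 1)
    op_weights = nodes * (k - 1)   # k - 1 free operator weights per node
    var_weights = leaves * (n + 1) # n variables plus one constant per leaf
    return op_weights, var_weights
```

For $d=5$, $k=2$, $n=10$ this gives the 31 operator and 176 variable weights mentioned above.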
\section{Experimental Results} We have tested this new implementation only on comparatively simple problems, most notably the \mbox{Poly-10}~Problem \cite{Langdon2008}, using custom operators in HeuristicLab \cite{Wagner2009} with a CMA Evolution Strategy \cite{Hansen2006}. One very interesting aspect is shown in Figure~\ref{fig:results}, where the penalties for operator and variable weights have been successively turned on. It can be seen that the correlation of the formula slightly decreases as the optimization tries to lower the operator penalty but quickly recovers with very low operator penalties, indicating a crisp operator choice. This choice is also achieved rather quickly, indicating that it might not be so difficult to make this crisp operator choice. When the variable selection penalty is turned on, a distinct knee and steeper slope can be seen in the variable selection penalty curve; however, it is much harder to decrease, as many more weights are involved.
\begin{figure}
\caption{Results}
\label{fig:results}
\end{figure}
Finally, we have used the new formulation to calculate some fitness landscape analysis measures. For the first time, traditional techniques can be applied to symbolic regression problems and reasonable results can be obtained, as shown in Table~\ref{tab:fla}, where the fitness landscape of the \mbox{Poly-10}~Problem was analyzed using different neighborhoods: In particular, polynomial one-position or all-position manipulators were used with a contiguity of 15 or only 2, as well as a uniform one-position manipulator. Both random and up-down walks \cite{Jones1995} were performed to get a first impression of the landscape's characteristics.
\begin{table}[htbp]
\caption{Fitness Landscape Analysis of Symbolic Regression Problem}
\label{tab:fla}
\centering\scriptsize
\begin{tabular}{l|rrrrr}
& Poly-1-15 & Poly-All-15 & Poly-1-2 & Poly-All-2 & Uni-1 \\\hline auto correlation & 0.999 & 0.910 & 0.998 & 0.547 & 0.991 \\ corr. length & 2245 & 57 & 1246 & 11 & 290 \\ density basin information & 0.628 & 0.619 & 0.626 & 0.593 & 0.628 \\ information content & 0.546 & 0.394 & 0.686 & 0.403 & 0.399 \\ information stability & 0.058 & 0.141 & 0.058 & 0.255 & 0.037 \\ partial inf. content & 0.476 & 0.532 & 0.457 & 0.586 & 0.506 \\\hline up walk length & 328.063 & 124.872 & 14.321 & 5.179 & 83.433 \\ up walk len. variance & 46668.196 & 5405.852 & 35.814 & 3.176 & 870.758 \\ down walk length & 296.563 & 123.400 & 12.140 & 4.878 & 83.100 \\ down walk len. variance & 27152.263 & 4627.477 & 26.232 & 2.713 & 832.024 \end{tabular} \end{table}
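The auto correlation and correlation length reported above are the standard random-walk estimators; they could be computed along the following lines (a sketch with a toy series; the table's values of course come from much longer walks):

```python
import math

def autocorrelation(fs, lag=1):
    # Empirical lag-one autocorrelation of a fitness series from a random walk.
    n = len(fs)
    mean = sum(fs) / n
    var = sum((f - mean) ** 2 for f in fs) / n
    cov = sum((fs[i] - mean) * (fs[i + lag] - mean)
              for i in range(n - lag)) / (n - lag)
    return cov / var

def correlation_length(rho1):
    # Number of steps over which the correlation decays by a factor of e.
    return -1.0 / math.log(abs(rho1))
```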
\section{Conclusions} While there is certainly still a lot of work needed to fine-tune the optimization process and to experiment with different variants of variable selection and penalty schemes, this transformation principle opens the door for classical fitness landscape analysis applied to symbolic regression problems. The focus of future work should therefore not be the tuning of algorithm performance but rather the interpretation and utilization of FLA results generated for the class of symbolic regression problems.
\subsubsection*{Acknowledgments} The work described in this paper was done within the COMET Project Heuristic Optimization in Production and Logistics (HOPL), \#843532 funded by the Austrian Research Promotion Agency (FFG).
\end{document}
\begin{document}
\theoremstyle{plain}
\newtheorem{theorem}{Theorem}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{proposition}[theorem]{Proposition}
\theoremstyle{definition}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{example}[theorem]{Example}
\newtheorem{conjecture}[theorem]{Conjecture}
\theoremstyle{remark}
\newtheorem{remark}[theorem]{Remark}
\begin{center}
\vskip 1cm{\LARGE\bf
Skew Dyck paths having no peaks at level 1 (Sequence A128723)
}
\vskip 1cm
\large Helmut Prodinger \\ Stellenbosch University \\
Department of Mathematical Sciences\\
7602 Stellenbosch \\
South Africa, \\
and \\ NITheCS\\(National Institute for Theoretical and Computational Sciences)\\
South Africa\\
{\tt hproding@sun.ac.za} \\
\end{center}
\vskip .2 in
\begin{abstract}
Skew Dyck paths are a variation of Dyck paths, where in addition to the steps $(1,1)$ and $(1,-1)$ a south-west step $(-1,-1)$ is also allowed, provided that the path
does not intersect itself. Replacing the south-west step by a red south-east step, we end up with decorated Dyck paths.
Sequence A128723 of the Encyclopedia of Integer Sequences considers such paths where peaks at level 1 are forbidden. We provide a thorough analysis of a more general scenario, namely partial
decorated Dyck paths, ending on a prescribed level $j$, both from left-to-right and from right-to-left (decorated Dyck paths are not symmetric).
The approach is completely based on generating functions.
\end{abstract}
\begin{center}
\end{center}
\section{Introduction}
Skew Dyck paths are a variation of Dyck paths, where in addition to the steps $(1,1)$ and $(1,-1)$ a south-west step $(-1,-1)$ is also allowed, provided that the path
does not intersect itself. Replacing the south-west step by a red south-east step, we end up with decorated Dyck paths. Our earlier publication \cite{skew-old} studied such paths using generating functions.
\begin{figure}
\caption{All 10 skew Dyck paths of length 6 (consisting of 6 steps).}
\label{F1}
\end{figure}
\begin{figure}
\caption{The 10 paths redrawn, with red south-east edges instead of south-west edges.}
\label{F2}
\end{figure}
Sequence A128723 considers such paths where peaks at level 1 are forbidden. These paths are the main objects of the present paper.
The Figures \ref{F1}, \ref{F2}, \ref{F3} describe such paths of length 6.
\begin{figure}
\caption{The 6 paths without peaks on level 1.}
\label{F3}
\end{figure}
We catch the essence of a decorated Dyck path using a state-diagram (Fig.~\ref{arse4}):
\begin{figure}
\caption{Three layers of states according to the type of steps leading to them (up, down-black, down-red).}
\label{arse4}
\end{figure}
It has three types of states, with $j$ ranging from 0 to infinity; in the drawing, only $j=0..8$ is shown. The first
layer of states refers to an up-step leading to a state, the second layer refers to a black down-step leading to a state
and the third layer refers to a red down-step leading to a state.
If the dashed edge is present, the graph models skew (decorated) Dyck paths. Any path from the origin to a node on level $j$
represents such a decorated Dyck path ending on level $j$. In particular, if $j=0$, the path comes back to the $x$-axis.
Note that the syntactic rules of forbidden patterns
\begin{tikzpicture}[scale=0.3]\draw [thick](0,0)--(1,1); \draw [red,thick] (1,1)--(2,0);\end{tikzpicture}
and
\begin{tikzpicture}[scale=0.3] \draw [red,thick] (0,1)--(1,0);\draw [thick](1,0)--(2,1);\end{tikzpicture}
can be clearly seen from the picture.
However, if the dashed edge is \emph{not} present, it means that peaks at level 1 cannot be modeled by this graph, and that
is what we want in the present paper.
We will work out generating functions describing all paths
leading to a particular state. We will use the notations $f_j,g_j,h_j$ for the three respective layers, from top to bottom.
Although one could in principle compute all these functions separately, we are mainly interested in $s_j=f_j+g_j+h_j$, so
we are interested in paths arriving on level $j$ but we do not care in which way this final level has been reached.
It is also clear that a path of length $n$ leading to a state at level $j$ must satisfy $n\equiv j\bmod2$.
In a last section, the right-to-left model is briefly described. Then, red down-steps become blue up-steps.
\section{Generating functions and the kernel method}
The functions depend on the variable $z$ (marking the number of steps), but mostly
we just write $f_j$ instead of $f_j(z)$, etc.
The following recursions can be read off immediately from the diagram (Fig. \ref{arse4}):
\begin{gather*}
f_0=1,\quad f_{i+1}=zf_i+zg_i,\quad i\ge0,\\
g_i=zf_{i+1}+zg_{i+1}+zh_{i+1},\quad i\ge1,\\
g_0=zg_{1}+zh_{1},\\
h_i=zh_{i+1}+zg_{i+1},\quad i\ge0.
\end{gather*} We can make a few direct observations: $f_0=1$, $f_1=z+zg_0$, $g_0=h_0$. The latter can be proved by combinatorial reasoning, since switching the last step from black to red resp.\ from red to black constitutes a bijection; this works because there are no peaks at level 1, as otherwise the syntactic restrictions might be violated.
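These recursions can be cross-checked by elementary path counting: a small dynamic program over the three-layer state diagram (our own verification sketch, not part of the derivation) reproduces the counts $1,0,2,6,22,84$ for lengths $0,2,\dots,10$, in line with sequence A128723.

```python
def skew_counts(max_steps):
    # f[i] / g[i] / h[i]: number of paths ending at level i whose last
    # step was an up-step / black down-step / red down-step.
    L = max_steps + 2
    f, g, h = [0] * L, [0] * L, [0] * L
    f[0] = 1  # the empty path
    returns = [f[0] + g[0] + h[0]]
    for _ in range(max_steps):
        nf, ng, nh = [0] * L, [0] * L, [0] * L
        for i in range(L - 1):
            nf[i + 1] = f[i] + g[i]        # up-step (never after a red step)
            nh[i] = g[i + 1] + h[i + 1]    # red down-step (never after an up-step)
            if i >= 1:
                ng[i] = f[i + 1] + g[i + 1] + h[i + 1]  # black down-step
        ng[0] = g[1] + h[1]  # dashed edge absent: no peaks at level 1
        f, g, h = nf, ng, nh
        returns.append(f[0] + g[0] + h[0])
    return returns

counts = skew_counts(10)
# counts[n] is the number of such paths of length n ending at level 0
```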
Now it is time to introduce \emph{bivariate} generating functions:
\begin{equation*}
F(z,u)=\sum_{i\ge0}f_i(z)u^i,\quad
G(z,u)=\sum_{i\ge0}g_i(z)u^i,\quad
H(z,u)=\sum_{i\ge0}h_i(z)u^i.
\end{equation*}
Again, often we just write $F(u)$ instead of $F(z,u)$ and treat $z$ as a `silent' variable. Summing the recursions leads to
\begin{align*}
\sum_{i\ge0}u^if_{i+1}&=\sum_{i\ge0}u^izf_i+\sum_{i\ge0}u^izg_i,\\
\sum_{i\ge1}u^ig_i&=\sum_{i\ge1}u^izf_{i+1}+\sum_{i\ge1}u^izg_{i+1}+\sum_{i\ge1}u^izh_{i+1},\\
\sum_{i\ge0}u^ih_i&=\sum_{i\ge0}u^izh_{i+1}+\sum_{i\ge0}u^izg_{i+1}.
\end{align*}
This can be rewritten as
\begin{align*}
\frac1u(F(u)-1)&=zF(u)+zG(u),\\*
\sum_{i\ge1}u^ig_i+g_0&=\sum_{i\ge1}u^izf_{i+1}+\sum_{i\ge1}u^izg_{i+1}+zg_{1}+\sum_{i\ge1}u^izh_{i+1}+zh_1,\\
\sum_{i\ge0}u^ig_i&=\sum_{i\ge1}u^izf_{i+1}+\sum_{i\ge0}u^izg_{i+1}+\sum_{i\ge0}u^izh_{i+1},\\
G(u)&=\frac zu[F(u)-f_0-uf_1]+\frac zu[G(u)-g_0]+\frac zu[H(u)-h_0],\\
H(u)&= \frac zu(G(u)-G(0))+\frac zu(H(u)-H(0)).
\end{align*} Instead of working with 3 functions, we can reduce the system to just one equation (with the variable $G$):
\begin{equation*}
F=\frac{1+zuG}{1-zu},\quad H=\frac{z(G-g_0-h_0)}{u-z}.
\end{equation*} Using this, we get
\begin{equation*}
G=\frac{-z^3u(u-z)+z(1-zu)(2+zu-z^2)g_0}{z(u-r_1)(u-r_2)}
\end{equation*} with
\begin{equation*}
r_1=\frac{1+z^2+\sqrt{1-6z^2+5z^4}}{2z},\quad r_2=\frac{1+z^2-\sqrt{1-6z^2+5z^4}}{2z}. \end{equation*} Note that $r_1r_2=2-z^2$. Since the factor $u-r_2$ in the denominator is ``bad,'' it must also cancel in the numerator. This is an essential step in the kernel method, see for instance our own survey \cite{Prodinger-kernel}. This leads to the new version
\begin{equation*}
G=\frac{-z^3(u-z+r_2)-z^2(-z^2+zu+1+zr_2)g_0}{z(u-r_1)}.
\end{equation*}
Plugging in $u=0$ and solving the equation
\begin{equation*}
G(z,0)=g_0=\frac{-z^3(-z+r_2)-z^2(-z^2+1+zr_2)g_0}{z(-r_1)}
\end{equation*} leads to
\begin{equation*}
g_0={\frac {1-2{z}^{4}-3{z}^{2}-\sqrt { 1-6z^2+5z^4 }}{ 2( {z}^{2}+ 3 ) {z}^{2}}}.
\end{equation*} Knowing this, we know $G$, and thus $F$ and $H$. As the first goal, we still set $u=0$, thus considering paths coming back to the $x$-axis. Using Maple,
\begin{equation*}
s_0:= f_0+g_0+h_0={\frac {1-{z}^{4}-\sqrt { 1-6z^2+5z^4 }}{ ( {z}^{2}+ 3 ) {z}^{2}}}.
\end{equation*}
\subsection*{The conjecture}
We write $z^2=x$, since skew paths, as discussed here, must have an even number of steps.
The function
\begin{equation*}
y(x)={\frac {1-{x}^{2}-\sqrt { 1-6x+5x^2 }}{ x( x+ 3 ) }}
\end{equation*}
is the generating function of the sequence A128723:
\begin{equation*}
1, 0, 2, 6, 22, 84, 334, 1368, 5734, 24480, 106086, 465462, 2063658, 9231084, 41610162, \dots
\end{equation*}
\textsc{Gfun}, as described in \cite{gfun}, produces the algebraic equation that $y(x)$ satisfies:
\begin{equation*}
-(x-1)(x-2)+3x+2(1-x^2)y-x(3+x)y^2=0
\end{equation*} and from this the differential equation
\begin{equation*}
- \left( 9 {x}^{2}+5 {x}^{3}+3-17 x \right) xy' + \left( 9 {x}^{2}+7 x-5 {x}^{3}-3 \right) y
+3+9 {x}^{2}-5 {x}^{3}-7 x
\end{equation*}
and finally from the differential equation the recursion for the coefficients $s_n=[x^n]y(x)$:
\begin{equation*}
3(n+4)s_{n+3}-(17n+41)s_{n+2}+9ns_{n+1}+5(n+1)s_n=0.
\end{equation*}
An equivalent recursion was conjectured in the description of sequence A128723~\cite{OEIS}.
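The recursion is easy to check numerically: seeding it with the first four terms and applying it for $n\ge1$ regenerates the listed terms (a verification sketch of ours; note that every division is exact).

```python
def extend_sequence(seed, count):
    # Apply 3(n+4) s_{n+3} = (17n+41) s_{n+2} - 9n s_{n+1} - 5(n+1) s_n
    # for n >= 1, starting from the seed values s_0, ..., s_3.
    s = list(seed)
    for n in range(1, count - 3):
        num = (17 * n + 41) * s[n + 2] - 9 * n * s[n + 1] - 5 * (n + 1) * s[n]
        assert num % (3 * (n + 4)) == 0  # the coefficients stay integral
        s.append(num // (3 * (n + 4)))
    return s

terms = extend_sequence([1, 0, 2, 6], 15)
```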
\subsection*{Partial paths}
Another computation with Maple leads to \begin{equation*} S(z,u)=F(z,u)+G(z,u)+H(z,u)={\frac {-{z}^{4}-{z}^{4}g_0-{z}^{2}g_0+{z}^{2}-1}{z(u-r_1)}}. \end{equation*}
Further
\begin{align*} s_j:=[u^j]S(z,u)&={\frac {{z}^{4}+{z}^{4}g_0+{z}^{2}g_0-{z}^{2}+1}{zr_1(1-u/r_1)}}\\ &={\frac {{z}^{4}+{z}^{4}g_0+{z}^{2}g_0-{z}^{2}+1}{zr_1^{j+1}}}.
\end{align*}
One sees the parity: the powers of $z$ occurring in $s_j$ are even/odd according as $j$ is even/odd.
If it is desired, $1/r_1$ may be expressed by $r_2$ (and a factor).
\subsection*{Open-ended paths}
We might allow \emph{any} level as end-level of the path. In terms of generating functions, this means to consider $S(z,1)$, viz.
\begin{equation*} S(z,1)=\frac{-2 {z}^{5}-3{z}^{4}+{z}^{3}-5 {z}^{2}-3 z+4-(z^2+3z+4)\sqrt{1-6z^2+5z^4}}{2z(3+z^2)(z^2+2z-1)}.
\end{equation*} The sequence of coefficients \begin{equation*} 1, 1, 1, 2, 5, 8, 18, 31, 71, 126, 290, 527, 1218, 2253, 5223, 9796, 22763, 43170, 100502, 192347, \dots \end{equation*} is not in the encyclopedia \cite{OEIS}.
\section{Reading the decorated paths from right to left}
Since decorated paths are not symmetric, it makes sense to consider this scenario separately. \begin{figure}
\caption{All 6 dual (=right-to-left) skew Dyck paths of length 6 (consisting of 6 steps), having no peak at level 1.}
\end{figure}
We catch the essence of a decorated (dual skew) Dyck path using a state-diagram:
\begin{figure}
\caption{Three layers of states according to the type of steps leading to them (down, up-black, up-blue).}
\end{figure} Note that the syntactic rules of forbidden patterns \begin{tikzpicture}[scale=0.3]\draw [thick,cyan](0,0)--(1,1); \draw [thick] (1,1)--(2,0);\end{tikzpicture} and \begin{tikzpicture}[scale=0.3] \draw [thick] (0,1)--(1,0);\draw [thick,cyan](1,0)--(2,1);\end{tikzpicture} can be clearly seen from the picture.
As in the earlier section, if the dashed edge is removed it means that the condition `no peak at level 1' is modeled, which is what we need to do. Using the letters $c_j,a_j,b_j$ (in that order) for the generating functions to reach state $j$ in the particular layer, we find the
following recursions immediately from the diagram: \begin{gather*}
a_0=1,\quad a_{i+1}=za_i+zb_i+zc_i,\quad i\ge0,\\
b_0=zb_1,\quad b_i=za_{i+1}+zb_{i+1},\quad i\ge1,\\
c_{i+1}=za_{i}+zc_{i},\quad i\ge0. \end{gather*} We introduce \emph{bivariate} generating functions: \begin{equation*}
A(z,u)=\sum_{i\ge0}a_i(z)u^i,\quad
B(z,u)=\sum_{i\ge0}b_i(z)u^i,\quad
C(z,u)=\sum_{i\ge0}c_i(z)u^i. \end{equation*}
Summing the recursions leads to \begin{align*}
A(u)&=\sum_{i\ge0}u^ia_i =1+u\sum_{i\ge0}u^i(za_i+zb_i+zc_i)\\
&=1+uzA(u)+uzB(u)+uzC(u),\\
\sum_{i\ge0}u^ib_i &= \sum_{i\ge1}u^iza_{i+1}+\sum_{i\ge0}u^izb_{i+1},\\
B(u)&=\frac zu\sum_{i\ge2}u^ia_i+\frac zu\sum_{i\ge1}u^ib_i\\
&=\frac zu[A(u)-a_0-ua_1]+\frac zu[B(u)-b_0],\\
\sum_{i\ge1}u^ic_i &=uz\sum_{i\ge0}u^ia_i+uz\sum_{i\ge0}u^ic_i,\\
C(u)-c_0&=uzA(u)+uzC(u). \end{align*} We have $c_0=0$, $a_0=1$, and $a_1=z+zb_0$. We may write \begin{equation*} C(u)=\frac{uzA(u)}{1-uz}, \end{equation*} \begin{equation*}
A(u) =1+uzA(u)+uzB(u)+\frac{u^2z^2A(u)}{1-uz}=\frac{1+uzB(u)}{1-uz-\frac{u^2z^2}{1-uz}}=\frac{1-uz}{1-2uz}[1+uzB(u)], \end{equation*} \begin{equation*} C(u)=\frac{uz}{1-2uz}[1+uzB(u)]. \end{equation*} Solving for $B(u)$, \begin{equation*} B(u)=\frac{z \left( 2{u}^{2}{z}^{2}+2{z}^{2}{u}^{2}b_0+b_0zu-b_0 \right)}{z(z^2-2)(u-r_1^{-1})(u-r_2^{-1})}. \end{equation*} We cancel the bad factor $(u-r_1^{-1})$ out of the numerator: \begin{equation*} B(u)=\frac {z ( 2r_1uz+2r_1uzb_0+b_0r_1+2z+2zb_0 ) r_2}{r_1 ( {z}^{2}- 2 ) ( ur_2-1 ) } \end{equation*} Plugging in $u=0$ results in the equation \begin{equation*} b_0=\frac {-z ( b_0r_1+2z+2zb_0 ) r_2}{r_1 ( {z}^{2}- 2 ) } \end{equation*} and thus \begin{equation*} b_0=\frac{1-z^4-\sqrt{1-6z^2+5z^4}}{z^2(3+z^2)}-1, \end{equation*} as expected, since $1+b_0$ is the generating function of all skew Dyck paths without peaks at level 1.
Expressions for $A(z,u)+B(z,u)+C(z,u)$ and $[u^j](A(z,u)+B(z,u)+C(z,u))$ could be explicitly written, which we leave to the reader.
The open paths in this model are enumerated via \begin{equation*} A(z,1)+B(z,1)+C(z,1), \end{equation*} which is a long expression, with coefficients \begin{equation*} 1, 2, 4, 10, 24, 56, 134, 318, 764, 1824, 4390, 10520, 25346, 60878, 146768,\dots \end{equation*} which are again not in the OEIS \cite{OEIS}.
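Both sequences can be regenerated by the same kind of dynamic program as in the left-to-right model (again our own verification sketch): counting over the dual state diagram reproduces the closed counts $1,0,2,6,22,84$ and the open counts $1, 2, 4, 10, 24, 56, 134, 318,\dots$

```python
def dual_counts(max_steps):
    # a[i] / b[i] / c[i]: paths ending at level i whose last step was a
    # black up-step / down-step / blue up-step (dual-model recursions).
    L = max_steps + 2
    a, b, c = [0] * L, [0] * L, [0] * L
    a[0] = 1  # the empty path
    closed, total = [1], [1]
    for _ in range(max_steps):
        na, nb, nc = [0] * L, [0] * L, [0] * L
        for i in range(L - 1):
            na[i + 1] = a[i] + b[i] + c[i]   # black up-step
            nc[i + 1] = a[i] + c[i]          # blue up-step (never after a down-step)
            if i >= 1:
                nb[i] = a[i + 1] + b[i + 1]  # down-step
        nb[0] = b[1]                         # no peaks at level 1
        a, b, c = na, nb, nc
        closed.append(a[0] + b[0] + c[0])    # paths returning to level 0
        total.append(sum(a) + sum(b) + sum(c))  # paths ending on any level
    return closed, total
```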
Explicit formul\ae{} for this model are a bit unpleasant, but easily regenerated using Maple.
\section{Conclusion}
In order to keep this paper short (and not boring) we refrained from working out many additional parameters. That might be a good project for graduate students.
\hrule
\noindent 2010 {\it Mathematics Subject Classification}:
Primary 05A15; Secondary 05A19.
\noindent \emph{Keywords: Skew Dyck path, peak, forbidden pattern, generating function, kernel-method.}
\hrule
\noindent (Concerned with sequence
{A128723}.)
\vspace*{+.1in}
\noindent
\hrule
\noindent
Return to
\htmladdnormallink{Journal of Integer Sequences home page}{https://cs.uwaterloo.ca/journals/JIS/}.
\vskip .1in
\end{document}
\begin{document}
\title{Superposed Coherent and Squeezed Light} \begin{abstract} We first calculate the mean photon number and quadrature variance of superposed coherent and squeezed light, following a procedure of analysis based on combining the Hamiltonians and using the usual definition for the quadrature variance of superposed light beams. This procedure of analysis leads to a physically unjustifiable mean photon number of the coherent light and quadrature variance of the superposed light. We then determine both of these properties employing a procedure of analysis based on superposing the Q functions and applying a slightly modified definition for the quadrature variance of a pair of superposed light beams. We find the expected mean photon number of the coherent light and the quadrature variance of the superposed light. Moreover, the quadrature squeezing of the superposed output light turns out to be equal to that of the superposed cavity light. \end{abstract} \hspace*{9.5mm}Keywords: Q function, Mean photon number, Quadrature Squeezing, Superposition \vspace*{5mm} \begin{multicols}{2} \section*{1. Introduction}
\vspace*{-2mm} It has been established that a one-mode subharmonic generator produces light with a maximum quadrature squeezing of $50\%$ below the coherent-state level [1-4]. In addition, some authors [5-7] have studied the statistical and squeezing properties of superposed coherent and squeezed light, produced by harmonic and subharmonic generations in the same cavity. These authors have carried out their analysis by combining the pertinent Hamiltonians and applying the usual definition for the quadrature variance. But such a procedure leads to a physically unjustifiable mean photon number of the coherent light and quadrature variance of the superposed coherent and squeezed light.
In order to see clearly the problems connected with the mean photon number of the coherent light and quadrature variance of the superposed coherent and squeezed light, we undertake a procedure of analysis based on combining the Hamiltonians and using the usual definition for the quadrature variance of the superposed light. From the results of our analysis, we discover that the presence of the squeezed light has some effect on the mean photon number of the coherent light. This must certainly be due to the procedure of analysis we have employed. Moreover, we find that the quadrature variance of the superposed coherent and squeezed light is equal to that of the squeezed light only. In other words, the coherent light has no contribution to the quadrature variance of the superposed light. The origin of this problem must be the use of the usual definition for the quadrature variance of the superposed light.
In order to avoid the aforementioned problems, we calculate the mean photon number and quadrature variance of the superposed coherent and squeezed light, applying a procedure of analysis based on superposing the Q functions and slightly modifying the usual definition for the quadrature variance of the superposed light. We also determine the quadrature squeezing of the superposed cavity and output light.
\section*{2. The Hamiltonian combining \hspace*{8mm} procedure} \subsection*{2.1 The mean photon number}
We seek to obtain the mean photon number and quadrature variance of superposed coherent and squeezed light, produced by harmonic and subharmonic generations, in a cavity coupled to a vacuum reservoir via a single-port mirror. We consider the case in which the coherent and squeezed light beams have the same frequency. The process of harmonic generation is described by the Hamiltonian \begin{equation}\label{1} \hat{H}_{c}=i\varepsilon_{1}(\hat{a}^{\dagger}-\hat{a}), \end{equation} where $\varepsilon_{1}$ is proportional to the amplitude of the coherent light driving the cavity mode. And the process of subharmonic generation is described by the Hamiltonian
\begin{equation}\label{2} \hat{H}_{s}={i\varepsilon_{2}\over 2}(\hat{a}^{2}-\hat{a}^{\dagger 2}), \end{equation} with $\varepsilon_{2}$ being proportional to the amplitude of the coherent light pumping the nonlinear crystal. The analysis of the superposed coherent and squeezed light is usually carried out employing the Hamiltonian [5-7] \begin{equation}\label{3} \hat{H}=i\varepsilon_{1}(\hat{a}^{\dagger}-\hat{a})+{i\varepsilon_{2}\over 2}(\hat{a}^{2}-\hat{a}^{\dagger 2}), \end{equation} which is the sum of the Hamiltonians given by Eqs. (\ref{1}) and (\ref{2}).
We will calculate the expectation values of only normally-ordered operators pertaining to a cavity mode coupled to a vacuum reservoir. We then note that the noise operator associated with the vacuum reservoir has no effect on the dynamics of the cavity mode operators. Hence upon dropping this noise operator, the quantum Langevin equation for the operator $\hat{a}$ can be written as \begin{equation}\label{4} {d\hat{a}(t)\over dt}=-{1\over 2}\kappa\hat{a}(t)-i[\hat{a}(t),\hat{H}], \end{equation} where $\kappa$ is the cavity damping constant. Therefore, on account of (\ref{4}) and (\ref{3}), we have \begin{equation}\label{5} {d\hat{a}(t)\over dt}=-{1\over 2}\kappa\hat{a}(t)-\varepsilon_{2}\hat{a}^{\dagger}(t)+\varepsilon_{1}. \end{equation} Now applying the relation \begin{equation}\label{6} {d\over dt}\langle\hat{a}(t)\hat{a}(t)\rangle=\bigg\langle{d\hat{a}(t)\over dt}\hat{a}(t)\bigg\rangle+\bigg\langle\hat{a}(t){d\hat{a}(t)\over dt}\bigg\rangle \end{equation} along with Eq. (\ref{5}), we get \begin{eqnarray}\label{7} {d\over dt}\langle\hat{a}(t)\hat{a}(t)\rangle\hspace*{-3mm}&=\hspace*{-3mm}&-\kappa\langle\hat{a}^{2}(t)\rangle-2\varepsilon_{2} \langle\hat{a}^{\dagger}(t)\hat{a}(t)\rangle\nonumber\\&& +2\varepsilon_{1}\langle\hat{a}(t)\rangle-\varepsilon_{2}. \end{eqnarray} Moreover, using the relation \begin{equation}\label{8} {d\over dt}\langle\hat{a}^{\dagger}(t)\hat{a}(t)\rangle=\bigg\langle{d\hat{a}^{\dagger}(t)\over dt}\hat{a}(t)\bigg\rangle+\bigg\langle\hat{a}^{\dagger}(t){d\hat{a}(t)\over dt}\bigg\rangle \end{equation} together with Eq. (\ref{5}), we find \begin{eqnarray}\label{9} {d\over dt}\langle\hat{a}^{\dagger}(t)\hat{a}(t)\rangle\hspace*{-3mm}&=\hspace*{-3mm}&-\kappa\langle\hat{a}^{\dagger}(t)\hat{a}(t)\rangle -\varepsilon_{2}\langle\hat{a}^{2}(t)\rangle\nonumber\\&& -\varepsilon_{2}\langle\hat{a}^{\dagger 2}(t)\rangle+\varepsilon_{1} \langle\hat{a}(t)\rangle\nonumber\\&& +\varepsilon_{1}\langle\hat{a}^{\dagger}(t)\rangle. \end{eqnarray}
The steady-state solutions of Eqs. (\ref{7}) and (\ref{9}) have the form \begin{equation}\label{10} \langle\hat{a}^{2}\rangle=-b\langle\hat{a}^{\dagger}\hat{a}\rangle+a\langle\hat{a}\rangle-{b\over 2}, \end{equation} \begin{equation}\label{11}
\langle\hat{a}^{\dagger}\hat{a}\rangle=-{b\over 2}\langle\hat{a}^{2}\rangle-{b\over 2} \langle\hat{a}^{\dagger 2}\rangle+{a\over 2}\langle\hat{a}\rangle+{a\over 2}\langle\hat{a}^{\dagger}\rangle, \end{equation} where \begin{equation}\label{12} a=2\varepsilon_{1}/\kappa \end{equation} and \begin{equation}\label{13} b=2\varepsilon_{2}/\kappa. \end{equation} In addition, the steady-state solution of the expectation value of Eq. (\ref{5}) turns out to be \begin{equation}\label{14} \langle\hat{a}\rangle=-b\langle\hat{a}^{\dagger}\rangle+a. \end{equation} We also note that \begin{equation}\label{15} \langle\hat{a}^{\dagger}\rangle=-b\langle\hat{a}\rangle+a. \end{equation} Upon adding and subtracting Eq. (\ref{15}) to and from Eq. (\ref{14}), we arrive at \begin{equation}\label{16} \langle\hat{a}\rangle{\pm}\langle\hat{a}^{\dagger}\rangle={a{\pm}a\over 1+b}. \end{equation} It then follows that \begin{equation}\label{17} \langle\hat{a}\rangle=\langle\hat{a}^{\dagger}\rangle={a\over{b+1}}. \end{equation}
Furthermore, with the aid of (\ref{17}), Eqs. (\ref{10}) and (\ref{11}) can be rewritten as \begin{equation}\label{18} \langle\hat{a}^{2}\rangle=-b\langle\hat{a}^{\dagger}\hat{a}\rangle+{2a^{2}-b(1+b)\over 2(1+b)}, \end{equation} \begin{equation}\label{19}
\langle\hat{a}^{\dagger}\hat{a}\rangle=-b\langle\hat{a}^{2}\rangle+{a^{2}\over{1+b}}. \end{equation} Following the same procedure as the one leading to the result given by (\ref{17}), we obtain from Eqs. (\ref{18}) and (\ref{19}) that \begin{equation}\label{20} \langle\hat{a}^{2}\rangle={a^{2}\over{(1+b)^{2}}}+{b\over{2(b^{2}-1)}} \end{equation} and \begin{equation}\label{21} \langle\hat{a}^{\dagger}\hat{a}\rangle={a^{2}\over{(1+b)^{2}}}+{b^{2}\over{2(1-b^{2})}}. \end{equation} Now on account of (\ref{12}) and (\ref{13}), we see that \begin{equation}\label{22} \langle\hat{a}^{\dagger}\hat{a}\rangle={4\varepsilon_{1}^{2}\over(\kappa+2\varepsilon_{2})^{2}}+{2\varepsilon_{2}^{2} \over{\kappa^{2}-4\varepsilon_{2}^{2}}}. \end{equation} Evidently the second term in Eq. (\ref{22}) is the mean photon number of the squeezed light. However, the first term does not represent the mean photon number of the coherent light. The mean photon number of the coherent light is just the first term with $\varepsilon_{2}=0$. We clearly see that the presence of the squeezed light has some effect on the mean photon number of the coherent light. There cannot be any physically valid justification for this effect. The origin of the problem connected with the first term in Eq. (\ref{22}) must certainly be the procedure of analysis we used, which is based on combining the Hamiltonians.
\subsection*{2.2 Quadrature variance} We next wish to calculate the quadrature variance for the superposed coherent and squeezed light. The variance of the quadrature operators \begin{equation}\label{23} \hat{a}_{+}=\hat{a}^{\dagger}+\hat{a} \end{equation} and \begin{equation}\label{24} \hat{a}_{-}=i(\hat{a}^{\dagger}-\hat{a}), \end{equation}
for the superposed light is usually defined by \begin{equation}\label{25} (\Delta a_{\pm})^{2}=1+\langle:\hat{a}_{\pm},\hat{a}_{\pm}:\rangle. \end{equation} This can be put in the form \begin{eqnarray} (\Delta a_{\pm})^{2}\hspace*{-3mm}&=\hspace*{-3mm}&1+2\langle\hat{a}^{\dagger}\hat{a}\rangle {\pm}\langle\hat{a}^{2}\rangle{\pm}\langle\hat{a}^{\dagger 2}\rangle\nonumber\\&&{\mp}\langle\hat{a}\rangle^{2}{\mp}\langle\hat{a}^{\dagger}\rangle^{2} -2\langle\hat{a}^{\dagger}\rangle\langle\hat{a}\rangle. \end{eqnarray} Now employing Eqs. (\ref{17}), (\ref{20}), and (\ref{21}), one can readily verify that \begin{equation}\label{26} (\Delta a_{\pm})^{2}=1{{\mp}{b\over{1{\pm}b}}} \end{equation} and in view of (\ref{13}), we have \begin{equation}\label{27} (\Delta a_{\pm})^{2}=1{{\mp}{2\varepsilon_{2}\over{\kappa{\pm}2\varepsilon_{2}}}}. \end{equation} This is just the quadrature variance of the squeezed light alone [4]. Contrary to our expectation, the coherent light has no contribution to the quadrature variance of the superposed light. We maintain the standpoint that the problems connected with the first term in Eq. (\ref{22}) and the quadrature variance described by Eq. (\ref{27}) could be resolved by carrying out the pertinent analysis based on superposing the Q functions, instead of combining the Hamiltonians, and by slightly modifying the usual definition for the quadrature variance of superposed light beams.
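The variance in Eq. (\ref{26}) follows by inserting the moments (\ref{17}), (\ref{20}), and (\ref{21}) into the expansion below Eq. (\ref{25}); a symbolic check of this substitution (a sketch assuming SymPy):

```python
import sympy as sp

a, b = sp.symbols('a b', positive=True)
mean = a/(1 + b)                               # Eq. (17); <a> = <a^dagger>
sq   = a**2/(1 + b)**2 + b/(2*(b**2 - 1))      # Eq. (20); <a^2> = <a^dagger2>, real
n    = a**2/(1 + b)**2 + b**2/(2*(1 - b**2))   # Eq. (21)

# (Delta a_+)^2 = 1 + 2n + 2<a^2> - 2<a>^2 - 2<a><a^dagger>, similarly for minus
var_plus  = sp.simplify(1 + 2*n + 2*sq - 2*mean**2 - 2*mean**2)
var_minus = sp.simplify(1 + 2*n - 2*sq + 2*mean**2 - 2*mean**2)
print(var_plus, var_minus)  # equal to 1 - b/(1+b) and 1 + b/(1-b), Eq. (26)
```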
\section*{3. The Q function superposing procedure} \subsection*{3.1 The Q function} In this section we first calculate the Q functions for the coherent and squeezed light beams. Then using these results, we determine the Q function for the superposed coherent and squeezed light.
\subsubsection*{3.1.1 The coherent light Q function} We now seek to obtain the Q function for the coherent light produced by harmonic generation. To this end, upon setting $\varepsilon_{2}=0$ in Eq. (\ref{5}), we have \begin{equation}\label{29} {d\hat{a}(t)\over dt}=-{1\over 2}\kappa\hat{a}(t)+\varepsilon_{1}. \end{equation} The solution of this equation can be written as \begin{equation}\label{30} \hat{a}(t)=a(1-e^{-\kappa t/2}) +\hat{a}'(t),
\end{equation} where \begin{equation}\label{31} \hat{a}'(t)=\hat{a}(0)e^{-\kappa t/2} \end{equation} and $a$ is given by (\ref{12}). Then we easily get
\begin{equation}\label{32} {d\hat{a}'(t)\over dt}=-{1\over 2}\kappa\hat{a}'(t) \end{equation} and with the coherent light considered to be initially in a vacuum state, we see that \begin{equation}\label{33} \langle\hat{a}'(t)\rangle=0. \end{equation} The Q function for a single-mode light can be expressed as \begin{equation}\label{34} Q(\alpha^{*},\alpha,t)={1\over\pi^{2}}\int d^{2}z\phi_{a}(z,t)e^{z^{*}\alpha-z\alpha^{*}}, \end{equation} in which \begin{equation}\label{35} \phi_{a}(z,t)=\langle e^{-z^{*}\hat{a}(t)}e^{z\hat{a}^{\dagger}(t)}\rangle \end{equation} is the antinormally-ordered characteristic function. With the aid of the identity \begin{equation}\label{36} e^{\hat{A}}e^{\hat{B}}=e^{\hat{A}+\hat{B}+{1\over 2}[\hat{A},\hat{B}]}, \end{equation} valid when $[\hat{A},\hat{B}]$ commutes with both $\hat{A}$ and $\hat{B}$, Eq. (\ref{35}) can be put in the form \begin{equation}\label{37} \phi_{a}(z,t)=e^{-z^{*}z/2}\langle e^{z\hat{a}^{\dagger}(t)-z^{*}\hat{a}(t)}\rangle. \end{equation} Now on substituting (\ref{30}) into Eq. (\ref{37}) and observing that the operator $\hat{a}'(t)$ is a Gaussian variable with a vanishing mean, we have \begin{eqnarray}\label{38} \phi_{a}(z,t)\hspace*{-3mm}&=\hspace*{-3mm}&\exp[-z^{*}z+a(z-z^{*})(1-e^{-\kappa t/2}) \nonumber\\&&+{1\over 2}z^{2}\langle\hat{a}'^{2}(t)\rangle+{1\over 2}z^{* 2}\langle\hat{a}'^{\dagger 2}(t)\rangle\nonumber\\&& -z^{*}z\langle\hat{a}'^{\dagger}(t)\hat{a}'(t)\rangle]. \end{eqnarray} Furthermore, applying Eq. (\ref{31}) and considering the coherent light to be initially in a vacuum state, one easily gets \begin{equation}\label{39} \langle\hat{a}'^{2}(t)\rangle=\langle\hat{a}'^{\dagger}(t)\hat{a}'(t)\rangle=0. \end{equation} Therefore, in view of this result, Eq. (\ref{38}) takes at steady state the form \begin{equation}\label{40} \phi_{a}(z)=\exp[-z^{*}z+a(z-z^{*})]. \end{equation} Finally, upon introducing (\ref{40}) into Eq.
(\ref{34}) and carrying out the integration, the coherent light Q function is found to be \begin{equation}\label{41} Q(\alpha^{*},\alpha)={1\over\pi}\exp[-\alpha^{*}\alpha+a(\alpha+\alpha^{*})-a^{2}]. \end{equation}
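As a numerical sanity check (a sketch assuming NumPy; the grid size and the value of $a$ are arbitrary test choices, not part of the original analysis), the Q function (\ref{41}) integrates to unity and has mean amplitude $a$:

```python
import numpy as np

a = 0.6  # a = 2*eps1/kappa, Eq. (12); arbitrary test value
x = np.linspace(-8.0, 8.0, 1001)
X, Y = np.meshgrid(x, x)

# Eq. (41) with alpha = x + iy, so a(alpha + alpha*) = 2*a*x
Q = (1.0/np.pi)*np.exp(-(X**2 + Y**2) + 2*a*X - a**2)
dA = (x[1] - x[0])**2

print(Q.sum()*dA)       # normalization: ~1
print((Q*X).sum()*dA)   # Re <alpha> = a: ~0.6
```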
\subsubsection*{3.1.2 The squeezed light Q function}
We next proceed to determine the Q function for the squeezed light produced by subharmonic generation. Thus upon setting $\varepsilon_{1}=0$ in Eqs. (\ref{5}), (\ref{20}), and (\ref{21}), we have \begin{equation}\label{42} {d\hat{a}(t)\over dt}=-{1\over 2}\kappa\hat{a}(t)-\varepsilon_{2}\hat{a}^{\dagger}(t), \end{equation}
\begin{equation}\label{43} \langle\hat{a}^{2}\rangle={b\over{2(b^2-1)}}, \end{equation} and \begin{equation}\label{44} \langle\hat{a}^{\dagger}\hat{a}\rangle={b^{2}\over{2(1-b^2)}}. \end{equation}
Using Eq. (\ref{42}) and assuming the squeezed light to be initially in a vacuum state, one can readily verify that $\langle\hat{a}(t)\rangle=0$. Now on realizing that the operator $\hat{a}(t)$ is a Gaussian variable with a vanishing mean and taking into account (\ref{43}) together with (\ref{44}), Eq. (\ref{37}) can be put at steady state in the form
\begin{equation}\label{45}
\phi_{a}(z)=\exp[-a_{1}z^{*}z+a_{2}(z^{2}+z^{* 2})/2],
\end{equation} in which \begin{equation}\label{46} a_{1}=1+{b^{2}\over{2(1-b^{2})}}, \end{equation} \begin{equation}\label{47} a_{2}={b\over{2(b^{2}-1)}}. \end{equation} Hence introducing (\ref{45}) into Eq. (\ref{34}) and carrying out the integration, the squeezed light Q function is found to be \begin{eqnarray}\label{48} Q(\alpha^{*},\alpha)\hspace*{-3mm}&=\hspace*{-3mm}&{[u^{2}-v^{2}]^{1\over 2}\over\pi}exp[-u\alpha^{*}\alpha\nonumber\\&& +v(\alpha^{2}+\alpha^{* 2})/2], \end{eqnarray} where \begin{equation}\label{49} u={1-b^{2}/2\over{1-b^{2}/4}} \end{equation} and \begin{equation}\label{50} v=-{b/2\over{1-b^{2}/4}}. \end{equation}
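A quick numeric check (a sketch assuming NumPy; $b=0.5$ is an arbitrary choice with $|b|<1$) that the prefactor $[u^{2}-v^{2}]^{1/2}/\pi$ in Eq. (\ref{48}) correctly normalizes the squeezed-light Q function:

```python
import numpy as np

b = 0.5
u = (1 - b**2/2)/(1 - b**2/4)   # Eq. (49)
v = -(b/2)/(1 - b**2/4)         # Eq. (50)

x = np.linspace(-6.0, 6.0, 801)
X, Y = np.meshgrid(x, x)
# With alpha = x + iy: alpha* alpha = x^2 + y^2, (alpha^2 + alpha*^2)/2 = x^2 - y^2
Q = (np.sqrt(u**2 - v**2)/np.pi)*np.exp(-u*(X**2 + Y**2) + v*(X**2 - Y**2))
dA = (x[1] - x[0])**2
print(Q.sum()*dA)  # ~1
```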
\subsection*{3.1.3 The superposed light Q function} We finally seek to calculate the Q function for the superposed coherent and squeezed light. This Q function is expressible as [8] \begin{eqnarray}\label{51} Q(\alpha^{*},\alpha)\hspace*{-3mm}&=\hspace*{-3mm}&{1\over\pi}\int d^{2}\beta d^{2}\gamma Q_{c}(\beta^{*}, \alpha-\gamma)\nonumber\\&& \times Q_{s}(\gamma^{*},\alpha-\beta) exp(-\alpha^{*}\alpha-\beta^{*}\beta\nonumber\\&& -\gamma^{*}\gamma +\alpha^{*}\beta+\alpha\beta^{*}+\alpha^{*}\gamma+\alpha\gamma^{*}\nonumber\\&& -\beta^{*}\gamma-\beta\gamma^{*}), \end{eqnarray} where $Q_{c}(\beta^{*},\alpha-\gamma)$ and $Q_{s}(\gamma^{*},\alpha-\beta)$ represent the Q functions for the coherent and squeezed light beams. With the aid of (\ref{41}) and (\ref{48}), one can put Eq. (\ref{51}) in the form \begin{eqnarray}\label{52} Q(\alpha^{*},\alpha)\hspace*{-3mm}&=\hspace*{-3mm}&{[u^{2}-v^{2}]^{1\over 2}\over\pi^{3}}\int d^{2}\beta d^{2}\gamma \exp[-\alpha^{*}\alpha\nonumber\\&& -\beta^{*}\beta-\gamma^{*}\gamma+[\alpha^{*}+(u-1)\gamma^{*}\nonumber\\&& -v\alpha]\beta+a\beta^{*}+(\alpha^{*}-a)\gamma\nonumber\\&& +(1-u)\alpha\gamma^{*}+a\alpha-a^{2}\nonumber\\&& +v(\alpha^{2}+\beta^{2}+\gamma^{* 2})/2], \end{eqnarray} so that on carrying out the integration, we readily arrive at \begin{eqnarray}\label{53} Q(\alpha^{*},\alpha)\hspace*{-3mm}&=\hspace*{-3mm}&{A\over\pi}\exp[-u\alpha^{*}\alpha +v(\alpha^{2}+\alpha^{* 2})/2\nonumber\\&& +a(u-v)(\alpha+\alpha^{*})], \end{eqnarray} in which \begin{equation}\label{54} A=[u^{2}-v^{2}]^{1\over 2}exp[a^{2}(v-u)]. \end{equation}
\subsection*{3.2 The mean photon number} We now proceed to calculate the mean photon number of the superposed coherent and squeezed light. We recall that the expectation value of an operator function $\hat{A}(\hat{a}^{\dag},\hat{a})$ can be written as \begin{equation}\label{55} \langle\hat{A}\rangle=\int d^{2}\alpha Q(\alpha^{*},\alpha)A_{a}(\alpha^{*},\alpha), \end{equation} where $A_{a}(\alpha^{*},\alpha)$ is the c-number function corresponding to $\hat{A}(\hat{a}^{\dag},\hat{a})$ in the antinormal order. Now applying (\ref{53}), one can put Eq. (\ref{55}) in the form \begin{eqnarray}\label{56} \langle\hat{A}\rangle\hspace*{-3mm}&=\hspace*{-3mm}&{A\over\pi}\int d^{2}\alpha\exp[-u\alpha^{*}\alpha+v(\alpha^{2}+\alpha^{* 2})/2\nonumber\\&& +a(u-v)(\alpha+\alpha^{*})]A_{a}(\alpha^{*},\alpha). \end{eqnarray} Now using the fact that $A_{a}=\alpha^{*}\alpha-1$ for $\hat{A}=\hat{a}^{\dag}\hat{a}$, the mean photon number can be written as \begin{eqnarray}\label{57} \langle\hat{a}^{\dag}\hat{a}\rangle\hspace*{-3mm}&=\hspace*{-3mm}&{A\over\pi}\int d^{2}\alpha \exp[-u\alpha^{*}\alpha+v(\alpha^{2} +\alpha^{* 2})/2\nonumber\\&& +a(u-v)(\alpha+\alpha^{*})](\alpha^{*}\alpha-1), \end{eqnarray} so that on carrying out the integration, there follows \begin{eqnarray}\label{58} \langle\hat{a}^{\dag}\hat{a}\rangle=a^{2}+{b^{2}\over 2(1-b^{2})}. \end{eqnarray} Finally, in view of (\ref{12}) and (\ref{13}), we see that \begin{equation}\label{59} \langle\hat{a}^{\dagger}\hat{a}\rangle={4\varepsilon_{1}^{2}\over\kappa^{2}}+{2\varepsilon_{2}^{2} \over{\kappa^{2}-4\varepsilon_{2}^{2}}}. \end{equation} As expected, this is the sum of the mean photon number of the coherent light and that of the squeezed light.
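The moment integral (\ref{57}) and the closed form (\ref{59}) can be cross-checked numerically (a sketch assuming NumPy; the parameter values are arbitrary, chosen with $2\varepsilon_{2}<\kappa$):

```python
import numpy as np

eps1, eps2, kappa = 0.3, 0.2, 1.0
a, b = 2*eps1/kappa, 2*eps2/kappa               # Eqs. (12), (13)
u = (1 - b**2/2)/(1 - b**2/4)                   # Eq. (49)
v = -(b/2)/(1 - b**2/4)                         # Eq. (50)
A = np.sqrt(u**2 - v**2)*np.exp(a**2*(v - u))   # Eq. (54)

x = np.linspace(-8.0, 8.0, 1001)
X, Y = np.meshgrid(x, x)
# Superposed Q function, Eq. (53), with alpha = x + iy
Q = (A/np.pi)*np.exp(-u*(X**2 + Y**2) + v*(X**2 - Y**2) + 2*a*(u - v)*X)
dA = (x[1] - x[0])**2

n_numeric = (Q*(X**2 + Y**2)).sum()*dA - 1                          # Eq. (57)
n_closed  = 4*eps1**2/kappa**2 + 2*eps2**2/(kappa**2 - 4*eps2**2)   # Eq. (59)
print(n_numeric, n_closed)  # the two values agree
```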
We next wish to calculate the mean photon number of the output light. The output mode operator $\hat{a}_{out}$ can be written in terms of the cavity mode operator $\hat{a}$ and the input mode operator $\hat{a}_{in}$ as \begin{equation}\label{60} \hat{a}_{out}=\sqrt{\kappa}\hat{a}-\hat{a}_{in}, \end{equation} where $\kappa$ is the cavity damping constant. When calculating the expectation values of only normally-ordered output operators, with the cavity mode coupled to a vacuum reservoir, one can use the relation \begin{equation}\label{61} \hat{a}_{out}=\sqrt{\kappa}\hat{a}. \end{equation} Then the mean photon number of the output light, defined by \begin{equation}\label{62} \overline{n}_{out}=\langle\hat{a}^{\dagger}_{out}\hat{a}_{out}\rangle, \end{equation} is expressible as \begin{equation}\label{63} \overline{n}_{out}=\kappa\langle\hat{a}^{\dagger}\hat{a}\rangle. \end{equation} We then see that the mean photon number of the output light is just $\kappa$ times that of the cavity light.
\subsection*{3.3 Quadrature squeezing}
We finally discuss the quadrature squeezing of the superposed coherent and squeezed light. We define the variance of the quadrature operators \begin{equation}\label{64} \hat{a}_{+}=\hat{a}^{\dagger}+\hat{a} \end{equation} and \begin{equation}\label{65} \hat{a}_{-}=i(\hat{a}^{\dagger}-\hat{a}), \end{equation} for a pair of superposed light beams, by \begin{equation}\label{66} (\Delta a_{\pm})^{2}=2+\langle:\hat{a}_{\pm},\hat{a}_{\pm}:\rangle. \end{equation} This can be rewritten as \begin{eqnarray}\label{67} (\Delta a_{\pm})^{2}\hspace*{-3mm}&=\hspace*{-3mm}&2+2\langle\hat{a}^{\dagger}\hat{a}\rangle{\pm}\langle\hat{a}^{2}\rangle{\pm}\langle\hat{a}^{\dagger 2}\rangle\nonumber\\&& {\mp}\langle\hat{a}\rangle^{2}{\mp}\langle\hat{a}^{\dagger}\rangle^{2} -2\langle\hat{a}^{\dagger}\rangle\langle\hat{a}\rangle. \end{eqnarray}
We next proceed to determine the expectation value of the operator $\hat{a}^{2}$. This expectation value can be written, employing (\ref{53}), as \begin{eqnarray}\label{68} \langle\hat{a}^{2}\rangle\hspace*{-3mm}&=\hspace*{-3mm}&{A\over\pi}\int d^{2}\alpha \exp[-u\alpha^{*}\alpha+v(\alpha^{2}+\alpha^{* 2})/2\nonumber\\&& +a(u-v)(\alpha+\alpha^{*})]\alpha^{2}. \end{eqnarray} Hence on carrying out the integration, we readily get \begin{equation}\label{69} \langle\hat{a}^{2}\rangle=a^{2}+{b\over{2(b^{2}-1)}}. \end{equation} Moreover, following the same procedure, one can easily verify that \begin{equation}\label{70} \langle\hat{a}\rangle=a. \end{equation} Now on account of Eq. (\ref{67}) along with (\ref{58}), (\ref{69}), and (\ref{70}), the quadrature variance of the superposed coherent and squeezed light turns out to be \begin{equation}\label{71} (\Delta a_{\pm})^{2}=2{\mp}{b\over 1{\pm}b}. \end{equation} We observe that the quadrature variance given by (\ref{71}) is the sum of the quadrature variance of the coherent light and that of the squeezed light.
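Substituting (\ref{58}), (\ref{69}), and (\ref{70}) into (\ref{67}) gives (\ref{71}); the following symbolic sketch (assuming SymPy) verifies the simplification:

```python
import sympy as sp

a, b = sp.symbols('a b', positive=True)
n    = a**2 + b**2/(2*(1 - b**2))   # Eq. (58)
sq   = a**2 + b/(2*(b**2 - 1))      # Eq. (69); <a^2> = <a^dagger2>, real
mean = a                            # Eq. (70); <a> = <a^dagger>

var_plus  = sp.simplify(2 + 2*n + 2*sq - 2*mean**2 - 2*mean**2)
var_minus = sp.simplify(2 + 2*n - 2*sq + 2*mean**2 - 2*mean**2)
print(var_plus, var_minus)  # equal to 2 - b/(1+b) and 2 + b/(1-b), Eq. (71)
```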
One usually calculates the quadrature squeezing of a pair of superposed light beams relative to the quadrature variance of a single coherent light beam. But now this does not appear to be a correct procedure. We then assert that the quadrature squeezing of a pair of superposed light beams must be calculated relative to the quadrature variance of a pair of superposed coherent light beams. Evidently, employing (\ref{67}) one easily finds the quadrature variance of a pair of superposed coherent light beams to be two. We see from Eq. (\ref{71}) that the squeezing of the superposed coherent and squeezed light occurs in the plus quadrature. We then define the quadrature squeezing of the superposed light by \begin{equation}\label{72} S={2-(\Delta a_{+})^{2}\over 2} \end{equation} and in view of (\ref{71}), we see that \begin{equation}\label{73} S={b\over 2(1+b)}. \end{equation} This is just half of the quadrature squeezing of the squeezed light.
Finally, we seek to determine the quadrature squeezing of the output light. We define the variance of the quadrature operators \begin{equation}\label{74} \hat{a}^{out}_{+}=\hat{a}_{out}^{\dagger}+\hat{a}_{out} \end{equation} and \begin{equation}\label{75} \hat{a}^{out}_{-}=i(\hat{a}_{out}^{\dagger}-\hat{a}_{out}), \end{equation} for the superposed coherent and squeezed output light, by \begin{equation}\label{76} (\Delta a^{out}_{\pm})^{2}=2\kappa+\langle:\hat{a}^{out}_{\pm},\hat{a}^{out}_{\pm}:\rangle, \end{equation} with $2\kappa$ being the quadrature variance of a pair of superposed coherent output light. On account of (\ref{61}) and (\ref{66}), one can put Eq. (\ref{76}) in the form \begin{equation}\label{77} (\Delta a^{out}_{\pm})^{2}=\kappa(\Delta a_{\pm})^{2}. \end{equation} We now realize that the quadrature variance of the output light is just $\kappa$ times that of the cavity light. Moreover, we define the quadrature squeezing of the superposed coherent and squeezed output light by \begin{equation}\label{78} S^{out}={2\kappa-(\Delta a^{out}_{+})^{2}\over 2\kappa}. \end{equation} Hence on account of Eq. (\ref{77}), we see that \begin{equation}\label{79} S^{out}={2-(\Delta a_{+})^{2}\over 2}, \end{equation} which is exactly the same as the result described by Eq. (\ref{72}). We then see that the quadrature squeezing of the output light is equal to that of the cavity light.
\section*{4. Conclusion}
Our calculation of the mean photon number and quadrature variance of the superposed coherent and squeezed light, following a procedure of analysis based on combining the Hamiltonians and using the usual definition for the quadrature variance of superposed light beams, leads to a physically unjustifiable mean photon number for the coherent light and quadrature variance for the superposed light. We have then determined both properties employing a procedure of analysis based on superposing the Q functions and applying a slightly modified definition for the quadrature variance of a pair of superposed light beams. We have recovered the usual mean photon number of the coherent light, and the quadrature variance turned out to be, as expected, the sum of the quadrature variance of the coherent light and that of the squeezed light. In addition, our analysis shows that the quadrature squeezing of the output light is exactly the same as that of the cavity light. It is also perhaps worth mentioning that the presence of the coherent light leads to an increase in the mean photon number and to a decrease in the quadrature squeezing.
\vspace*{5mm} \noindent {\bf References} \vspace*{3mm}
\noindent [1] G.J. Milburn and D.F. Walls, Phys. Rev. A 27, 392 (1983).\newline [2] M.J. Collet and C.W. Gardiner, Phys. Rev. A 30, 1386 (1984).\newline [3] J. Anwar and M.S. Zubairy, Phys. Rev. A 45, 1804 (1992).\newline [4] B. Daniel and K. Fesseha, Opt. Commun. 151, 384 (1998).\newline [5] Berihu Teklu, Opt. Commun. 261, 310 (2006).\newline [6] Tewodros Y. Darge and Fesseha Kassahun, PMC Physics B, 1 (2010).\newline [7] Misrak Getahun, PhD Dissertation (Addis Ababa University, 2009).\newline [8] Fesseha Kassahun, Fundamentals of Quantum Optics (Lulu Press Inc., North Carolina, 2008).\newline
\end{multicols}
\end{document}
\begin{document}
\setcounter{tocdepth}{1}
\title{The intrinsic stable normal cone}
\author{Marc Levine} \email{marc.levine@uni-due.de} \address{Fakult\"at Mathematik\\ Universit\"at Duisburg-Essen\\ Thea-Leymann-Stra{\ss}e 9\\ 45127 Essen\\ Germany}
\subjclass[2020]{14N35 (primary), 14F42, 55P42 (secondary)} \keywords{Enumerative geometry, virtual fundamental class, motivic homotopy theory} \thanks{The author thanks the DFG for support through the grant LE 2259/8-1 and the ERC for support through the project QUADAG. This paper is part of a project that has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 832833).}
\begin{abstract} We construct an analog of the intrinsic normal cone of Behrend-Fantechi in the setting of motivic stable homotopy theory. A perfect obstruction theory gives rise to a virtual fundamental class in ${\mathcal E}$-cohomology for any motivic cohomology theory ${\mathcal E}$; this includes the oriented Chow groups of Barge-Morel and Fasel. \end{abstract}
\maketitle
\tableofcontents
\section{Introduction}\label{sec:Intro} The various versions of modern enumerative geometry, including Gromov-Witten theory and Donaldson-Thomas theory, are based on two important constructions due to Behrend and Fantechi \cite{BF}. The first is the construction of the intrinsic normal cone $\mathfrak{C}_Z$ of a Deligne-Mumford stack $Z$ over some base-scheme $B$. The second, based on the first, is the virtual fundamental class $[Z,[\phi]]^\text{\it vir}\in {\rm CH}_r(Z)$ associated to a perfect obstruction theory $[\phi]:E_\bullet\to {\mathbb L}_{Z/B}$ on $Z$, with $r$ the virtual rank of $E_\bullet$. In case $r=0$ and $Z$ is proper over a field $k$, one has the numerical invariant $\text{deg}_k[Z,[\phi]]^\text{\it vir}$; more generally, one can cut down $[Z,[\phi]]^\text{\it vir}$ to dimension zero by taking so-called descendants, and then taking the degree of the resulting 0-cycle.
A perfect obstruction theory on $Z$ is given by a map $[\phi]:E_\bullet\to {\mathbb L}_{Z/B}$ in $D^\text{\it perf}(Z)$ such that $E_\bullet$ is locally represented on $Z$ by a two-term complex $F_1\to F_0$ in degrees $0, 1$ (we use homological notation) and such that the map $[\phi]$ induces an isomorphism on the sheaf $h_0$ and a surjection on $h_1$.
In case $[\phi]$ admits a global resolution $(F_1\to F_0)\to{\mathbb L}_{Z/B}$, the virtual fundamental class is defined by embedding $\mathfrak{C}_Z$ in the quotient stack $[F^1/F^0]$ ($F^i:={\mathbb V}(F_i)$), pulling back $\mathfrak{C}_Z$ via the quotient map $F^1\to [F^1/F^0]$, which gives the subcone $\mathfrak{C}(F_\bullet)\subset F^1$, and then intersecting with the zero-section: \[ [Z,[\phi]]^\text{\it vir}:=0_{F^1}^!([\mathfrak{C}(F_\bullet)]). \] Here $F^i\to Z$ is the vector bundle dual to $F_i$ and $[\mathfrak{C}(F_\bullet)]$ is the fundamental class associated to the closed subscheme $\mathfrak{C}(F_\bullet)$ of $F^1$. If one wishes to extend this type of construction to more general cohomology theories, there may be a problem in even defining the fundamental class $[\mathfrak{C}(F_\bullet)]$. For instance, in algebraic cobordism $\Omega_*$, fundamental classes of arbitrary schemes do not exist \cite[\S3]{LevineFund}.
The main point of this paper is to reinterpret the constructions of the intrinsic normal cone, its fundamental class, and the virtual fundamental class associated to a perfect obstruction theory in the setting of motivic homotopy theory. Rather than taking a DM or Artin stack as our basic object, we work in the $G$-equivariant setting, following the current state of the art in motivic stable homotopy theory, for which unfortunately a suitable theory for stacks is not yet available. We will assume that $G$ is {\em tame} in the sense of \cite{Hoyois6}; this includes the case of a split torus, a finite \'etale group scheme of order prime to all residue characteristics, or a reductive group scheme in characteristic zero. For the full theory, we will also need to assume that the base-scheme $B$ is affine and the $G$-scheme $Z$ carrying the perfect obstruction theory is $G$-quasi-projective over $B$.
In spite of these restrictions, we gain a great deal of generality. We construct an ``intrinsic stable normal cone'' $\mathfrak{C}^{st}_Z$ for each $G$-quasi-projective $B$-scheme $Z$, with $\mathfrak{C}^{st}_Z$ defined as an object in the equivariant motivic stable homotopy category ${\operatorname{SH}}^G(B)$ (see Theorem~\ref{thm:IntStableNormalCone} and Definition~\ref{Def:IntStableNormalCone}). $\mathfrak{C}^{st}_Z$ carries a fundamental class $[\mathfrak{C}^{st}_Z]$ in co-homotopy $\mathbb{S}_B^{0,0}(\mathfrak{C}^{st}_Z)$ (Definition~\ref{def:FundClass}). Moreover, for a perfect obstruction theory $[\phi]:E_\bullet\to {\mathbb L}_{Z/B}$ on $Z$, we use $[\mathfrak{C}^{st}_Z]$ to construct a virtual fundamental class in twisted Borel-Moore homology (see Definition~\ref{def:BMHomology}) \[ [Z, [\phi]]^\text{\it vir}\in \mathbb{S}_B^{\operatorname{B.M.}}(Z, {\mathbb V}(E_\bullet)). \]
As we are not relying on a theory in the setting of stacks, the construction (Definition~\ref{def:VirtFundClass}) relies on a number of choices; Proposition~\ref{prop:VirtFundClass} provides the crucial independence of these choices.
If ${\mathcal E}\in {\operatorname{SH}}^G(B)$ is a motivic ring spectrum (i.e., a monoid object in ${\operatorname{SH}}^G(B)$) with unit map $\epsilon_{\mathcal E}:\mathbb{S}_B\to{\mathcal E}$, applying $\epsilon_{\mathcal E}$ to $[\mathfrak{C}^{st}_Z]$ or $[Z, [\phi]]^\text{\it vir}$ gives us elements \begin{align*} &[\mathfrak{C}^{st}_Z]_{\mathcal E}\in {\mathcal E}^{0,0}(\mathfrak{C}^{st}_Z);\\ &[Z, [\phi]]^\text{\it vir}_{\mathcal E}\in {\mathcal E}^{\operatorname{B.M.}}(Z, {\mathbb V}(E_\bullet)). \end{align*}
For simplicity, we consider the case $G=\{{\operatorname{\rm Id}}\}$. If we take ${\mathcal E}=H{\mathbb Z}$, the spectrum representing motivic cohomology, then, suitably interpreted, these classes reduce to the classes defined by Behrend-Fantechi. More generally, if ${\mathcal E}$ is orientable, then \[ {\mathcal E}^{\operatorname{B.M.}}(Z, {\mathbb V}(E_\bullet))\cong {\mathcal E}^{\operatorname{B.M.}}_{2r,r}(Z) \] with $r$ the virtual rank of $E_\bullet$. We can thus identify ${\mathcal E}^{0,0}(\mathfrak{C}^{st}_Z)$ and ${\mathcal E}^{\operatorname{B.M.}}(Z, {\mathbb V}(E_\bullet))$ as Borel-Moore ${\mathcal E}$-homology, giving classes \begin{align*} &[\mathfrak{C}^{st}_Z]_{\mathcal E}\in {\mathcal E}^{\operatorname{B.M.}}_{2d_M,d_M}(\mathfrak{C}_i);\\ &[Z, [\phi]]^\text{\it vir}_{\mathcal E}\in {\mathcal E}^{\operatorname{B.M.}}_{2r,r}(Z). \end{align*} Here $\mathfrak{C}_i$ is the normal cone of $Z$ for a given closed immersion $i:Z\to M$ with $M$ smooth over $B$, $d_M$ is the dimension of $M$ over $B$ (which is the same as the dimension of $\mathfrak{C}_i$ over $B$) and $r$ is the virtual rank of $E_\bullet$. Besides motivic cohomology, this includes such oriented theories as (homotopy invariant) algebraic $K$-theory or algebraic cobordism.
If we work with theories ${\mathcal E}$ that are not oriented, the identification of the group carrying the virtual fundamental class becomes more complicated. However, there is an interesting class of theories, the $\operatorname{SL}$-oriented theories, which admit a Thom isomorphism for bundles with a trivial determinant. One such theory is cohomology in the sheaf of Milnor-Witt $K$-groups (see \cite{MorelA1}). The part of this theory corresponding to the Chow groups gives the Barge-Morel theory of {\em oriented} Chow groups (or Chow-Witt groups) \[ \widetilde{{\rm CH}}^n(X):=H^n(X, {\mathcal K}^{MW}_n), \] a formula reminiscent of Bloch's formula relating the classical Chow groups with Milnor $K$-theory. There are also twisted versions of the oriented Chow groups \[ \widetilde{{\rm CH}}^n(X; L):=H^n(X, {\mathcal K}^{MW}_n(L)) \] for a line bundle $L$ on $X$. These formulas for the oriented Chow groups are only valid for smooth $X$, but one has a straightforward extension to the general case using Borel-Moore homology. The general theory gives us classes \begin{align*} &[\mathfrak{C}^{st}_Z]_{{\mathcal K}^{MW}_*}\in \widetilde{{\rm CH}}_{d_M}(\mathfrak{C}_i; i^*\omega^{-1}_{M/B});\\ &[Z, [\phi]]^\text{\it vir}_{{\mathcal K}^{MW}_*}\in \widetilde{{\rm CH}}_r(Z; \operatorname{det} E_\bullet). \end{align*}
In this setting the push-forward maps on the oriented Chow groups are restricted to {\em oriented} proper morphisms. This still allows one to achieve a refinement of the usual Gromov-Witten type invariants in case the given perfect deformation theory $E_\bullet$ not only has virtual rank zero, but also has trivial virtual determinant bundle modulo squares. In this case, we have \[ \text{deg}([Z, [\phi]]^\text{\it vir}_{{\mathcal K}^{MW}_*})\in {\operatorname{GW}}(k), \] where ${\operatorname{GW}}(k)$ is the Grothendieck-Witt group of the base-field $k$; applying the rank homomorphism ${\operatorname{GW}}(k)\to {\mathbb Z}$ recovers the classical degree. We hope that this approach will prove useful, for example, in studying the enumerative geometry of real varieties, where one takes the signature rather than the rank to obtain the relevant invariant.
Our approach is essentially formal: our construction uses three ingredients beyond some elementary geometry of normal cones, namely: \begin{enumerate} \item The existence of Grothendieck's six operations for the equivariant motivic stable homotopy category ${\operatorname{SH}}^G(-):({\operatorname{\mathbf{Sch}}}^{G}/B)^{\text{\rm op}}\to \mathbf{Tr}$. Here $\mathbf{Tr}$ is the 2-category of triangulated categories. In particular, for each $G$-vector bundle $V\to X$, we have the autoequivalence $\Sigma^V:{\operatorname{SH}}^G(X)\to {\operatorname{SH}}^G(X)$. \item For $X\in {\operatorname{\mathbf{Sch}}}^G/B$, we have the path groupoid ${\mathcal V}^G(X)$ of the $G$-equivariant $K$-theory space of $X$. We need the existence of a natural transformation $\Sigma^{-}:{\mathcal V}^G(-)\to {\operatorname{Aut}}({\operatorname{SH}}^G(-))$ extending the map $V\mapsto \Sigma^V$, such that the exceptional push-forward and pull-back for a smooth morphism $f:X\to Y$ are given by $f^!=\Sigma^{T_{X/Y}}\circ f^*$, $f_!=f_\#\circ \Sigma^{-T_{X/Y}}$, where $f_\#$ is the left adjoint to $f^*$. \item ${\mathbb A}^1$-homotopy invariance: for $p:V\to Z$ an affine space bundle, the co-unit of the adjunction $p_!p^!\to {\operatorname{\rm Id}}_{{\operatorname{SH}}^G(Z)}$ is an isomorphism. \end{enumerate} Presumably other contravariant functors ${\operatorname{\mathbf{Sch}}}^G/B\to \mathbf{Tr}$ have these three properties.
A construction of the fundamental class of the normal cone $\mathfrak{C}_{Z\subset M}$ in algebraic cobordism was communicated to us by Parker Lowrey some years ago. Our construction of the fundamental class may be viewed as a generalization of this method, see Example~\ref{ex:ClassicalExamples} for further details. F. D\'eglise, F. Jin and A. Khan \cite{DJK}
have generalized aspects of the work of Lowrey-Sch\"urg \cite{LowreySchuerg}, constructing fundamental classes of quasi-smooth derived schemes in a motivic stable homotopy category of derived schemes; we expect there is a suitable dictionary translating between some of their constructions and some of the ones given here. For instance, A. Khan \cite{KhanDerVir} has constructed a virtual fundamental class associated to a quasi-smooth morphism $f:{\mathcal X}\to {\mathcal Y}$ of derived higher stacks. Due to his use of (higher) stacks, Khan's construction takes place in the framework of the motivic stable homotopy category for the \'etale topology, which places the virtual fundamental class in a Borel-Moore homology for a theory that satisfies \'etale descent and thus is not applicable for getting invariants in more general theories, such as those closely related to the Grothendieck-Witt ring.
It seems one can restrict Khan's construction to the setting of derived schemes, for which one does have a good theory comparable to that of ${\operatorname{SH}}(B)$ for $B$ a scheme; in fact, a central result of Khan \cite{KhanDerived} states that for $X$ the underlying classical scheme of a (connective) derived scheme ${\mathcal X}$, the restriction map ${\operatorname{SH}}({\mathcal X})\to {\operatorname{SH}}(X)$ is an equivalence. A quasi-smooth map of derived schemes $f:{\mathcal X}\to {\mathcal Y}$ does give rise to a (relative) perfect obstruction theory on the underlying classical scheme of ${\mathcal X}$, and we expect that Khan's classes agree with the ones constructed here. However, Sch\"urg \cite{Schurg} has found obstructions for a given perfect obstruction theory to arise this way, so the approach using derived schemes may not always be applicable. These issues of relating the derived theory with the one presented here are being investigated by A. D'Angelo \cite{DAngelo}.
In \S\ref{sec:Background} we recall the necessary background from motivic homotopy theory. We construct the intrinsic stable normal cone in \S\ref{sec:NormalThom} and its fundamental class in \S\ref{sec:FundClass}. The construction of the virtual fundamental class in a special case (for a {\em reduced, normalized representative} of a perfect obstruction theory) is given in \S\ref{sec:RedNormVirClass}, where the formula is completely analogous to the one of Behrend-Fantechi.
In \S\ref{sec:ObstThy}, we show how a Jouanolou cover $p_Z:\tilde{Z}\to Z$ and a perfect obstruction theory $[\phi]:E_\bullet\to {\mathbb L}_{Z/B}$ give rise to an {\em induced} perfect obstruction theory $p_Z^![\phi]:p_Z^!E_\bullet\to {\mathbb L}_{\tilde{Z}/B}$, which admits a reduced, normalized representative.
The general case is handled in \S\ref{sec:VFCGeneral}. Roughly speaking, one uses Jouanolou's trick and ${\mathbb A}^1$-homotopy invariance properties and the results of \S\ref{sec:ObstThy} to reduce the general case to the case handled in \S\ref{sec:RedNormVirClass}. The main point is to show that the resulting class is independent of the various choices made along the way.
We compare our constructions with those of Behrend-Fantechi and discuss variants in \S\ref{sec:Comp}. We conclude with some explicit computations in \S\ref{sec:LocalDeg}, looking at critical loci and relating the virtual fundamental class of a local complete intersection to the ${\mathbb A}^1$ local degree defined in \cite{KW}.
I would like to thank the referee for a series of insightful comments, which were used to improve an earlier version of this paper. Further improvements were made possible by detailed comments and suggestions from Sabrina Pauli, for which I am very grateful.
\section{Background on motivic homotopy theory}\label{sec:Background} We begin by recalling some of the aspects of the six operations for the motivic stable homotopy category. We refer the reader to \cite{Ayoub, CD, JardineMotSym, MorelVoev, VoevICM} for details on the non-equivariant case and \cite{Hoyois6} for the extension to the equivariant setting.
Fix a noetherian affine scheme $U$ with a flat, finitely presented linearly reductive group scheme $G_0$ over $U$ (see \cite[Definition 2.14]{Hoyois6}).
We fix a quasi-projective $U$-scheme $B\to U$ (with trivial $G_0$-action) as base-scheme and let $G=G_0\times_UB$. A $G$-equivariant morphism $q:Y\to X$ of $G$-schemes over $B$ is called {\em $G$-quasi-projective} if there is a $G$-vector bundle $V\to X$ and a $G$-equivariant locally closed immersion $i:Y\to {\mathbb P}(V)$ of $X$-schemes. We let ${\operatorname{\mathbf{Sch}}}^G/B$ be the full subcategory of $G$-schemes over $B$ with objects the $G$-quasi-projective $B$-schemes and let ${\mathbf{Sm}}^G/B$ be the full subcategory of smooth $B$-schemes in ${\operatorname{\mathbf{Sch}}}^G/B$. We will usually denote the structure morphism for $Z\in {\operatorname{\mathbf{Sch}}}^G/B$ by $\pi_Z:Z\to B$.
For $X\in {\operatorname{\mathbf{Sch}}}^G/B$, we have the category $\operatorname{QCoh}_X^G$ of quasi-coherent ${\mathcal O}_X$-modules with $G$-action and the full subcategory $\operatorname{Coh}^G_X$ of coherent sheaves. We call ${\mathcal F}$ in $\operatorname{Coh}_X^G$ locally free if ${\mathcal F}$ is locally free as an ${\mathcal O}_X$-module. We let $D^b_G(X)$ denote the bounded derived category of coherent $G$-sheaves, $D_G(X)$ the unbounded derived category of quasi-coherent $G$-sheaves and $D^\text{\it perf}_G(X)$ the full subcategory of $D_G(X)$ of complexes isomorphic in $D_G(X)$ to a bounded complex of locally free coherent sheaves. Such a complex is called a {\em perfect} complex.
We will use homological notation for complexes: for a homological complex $C_\bullet$, $\tau_{\le n}C_\bullet$ is the quotient complex which is $C_m$ in degree $m<n$, $0$ in degree $m>n$ and $C_n/\partial(C_{n+1})$ in degree $n$.
We will assume that $U$ has the {\em $G_0$-resolution property}, namely, that each ${\mathcal F}\in \operatorname{Coh}^{G_0}_U$ admits a surjection ${\mathcal E}\to {\mathcal F}$ from a locally free ${\mathcal E}$ in $\operatorname{Coh}^{G_0}_U$ (see \cite[Definition 2.7]{Hoyois6}). This implies that the group scheme $G$ over $B$ is {\em tame} in the sense of \cite[Definition 2.26]{Hoyois6}.
Examples of linearly reductive $G_0$ such that $U$ has the $G_0$-resolution property include \begin{itemize} \item $G_0$ is finite locally free of order invertible on $U$. \item $G_0$ is of multiplicative type and is isotrivial. \item $U$ has characteristic zero and $G_0$ is reductive with isotrivial radical and coradical (e.g., $G_0$ is semisimple). \end{itemize} See \cite[Examples 2.8, 2.16, 2.27]{Hoyois6}.
Hoyois shows \cite[Lemma 2.11]{Hoyois6} that each $Z\in {\operatorname{\mathbf{Sch}}}^G/B$ has the $G$-resolution property. In addition, if $Z\in {\operatorname{\mathbf{Sch}}}^G/B$ is affine, then a locally free coherent $G$-sheaf ${\mathcal F}$ on $Z$ is projective in $\operatorname{QCoh}_Z^G$ \cite[Lemma 2.17]{Hoyois6}. This also implies that a complex $E_\bullet$ in $D_G(Z)$ that is locally (on $Z_{\text{\rm Zar}}$) a perfect complex is in fact a perfect complex on $Z$. Similarly, if $E_\bullet\in D_G(Z)$ is locally isomorphic to a complex of locally free coherent sheaves that is $0$ in degrees outside a given interval $[a,b]$, then $E_\bullet$ is isomorphic to a complex of coherent locally free $G$-sheaves on $Z$ that is $0$ in degrees outside $[a,b]$. Such a complex is called a {\em perfect complex supported in $[a,b]$}.
For $E\in \operatorname{Coh}_X^G$ locally free, we have the associated vector bundle $p:{\mathbb V}(E)\to X$, with ${\mathbb V}(E):={\rm Spec\,}_{{\mathcal O}_X}{\operatorname{Sym}}^* E$. The $G$-action on $E$ gives ${\mathbb V}(E)$ a $G$-action, with $p:{\mathbb V}(E)\to X$ a $G$-equivariant morphism.
We will often drop the ``$G$'' in our notations, speaking of $B$-morphisms for $G$-equivariant $B$-morphisms, vector bundles $V\to X$ for $G$-vector bundles, etc.
Let $\mathbf{Tr}$ be the 2-category of triangulated categories. Following \cite[\S6, Theorem 6.18]{Hoyois6}, we have the motivic stable homotopy category \[ {\operatorname{SH}}^G(-):({\operatorname{\mathbf{Sch}}}^G/B)^{\text{\rm op}}\to \mathbf{Tr}; \] for $f:Y\to X$ in ${\operatorname{\mathbf{Sch}}}^G/B$, we have the exact functor $f^*:{\operatorname{SH}}^G(X)\to {\operatorname{SH}}^G(Y)$ with right adjoint $f_*:{\operatorname{SH}}^G(Y)\to {\operatorname{SH}}^G(X)$ and the exceptional pull-back $f^!:{\operatorname{SH}}^G(X)\to {\operatorname{SH}}^G(Y)$ with left adjoint $f_!:{\operatorname{SH}}^G(Y)\to {\operatorname{SH}}^G(X)$. If $f$ is a smooth morphism, $f^*$ admits the left adjoint $f_\#$. ${\operatorname{SH}}^G(X)$ is a closed symmetric monoidal triangulated category with product denoted $\wedge_X$, unit $1_X$ (we often write $\mathbb{S}_B$ for $1_B$) and internal Hom $\mathcal{H}om_X(-,-)$; $f^*$ is a symmetric monoidal functor, and $f_*$ and $f_!$ satisfy projection formulas, that is, for $f:Y\to X$, $f_*$ and $f_!$ are ${\operatorname{SH}}^G(X)$-module maps; the same holds for $f_\#$ if $f$ is smooth. There is a natural transformation $\eta^f_{!*}:f_!\to f_*$ which is an isomorphism if $f$ is proper. See also the earlier treatments \cite{Ayoub} and \cite{CD} for the non-equivariant case.
For the pair of adjoint functors $a_!\dashv a^!$, we let $e_a:a_!a^!\to {\operatorname{\rm Id}}$ denote the co-unit. For the pair of adjoint functors $a^*\dashv a_*$, we let $u_a:{\operatorname{\rm Id}}\to a_*a^*$ denote the unit. We will use analogous notation for other adjoint pairs, leaving the context to make the meaning clear.
For $p:V\to X$ a vector bundle with zero-section $s:X\to V$, we have the $V$-suspension and $-V$-suspension operators $\Sigma^{\pm V}:{\operatorname{SH}}^G(X)\to {\operatorname{SH}}^G(X)$ defined as \[ \Sigma^V:=p_\#\circ s_*=p_\#\circ s_!,\quad \Sigma^{-V}:=s^!\circ p^*, \] with $\Sigma^V$ the left adjoint to $\Sigma^{-V}$. These endofunctors are in fact inverse equivalences \cite[Proposition 6.5]{Hoyois6}.
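To orient the reader, we note the basic example (standard, and not otherwise used below): for the trivial bundle $p:{\mathbb A}^n_X\to X$, the suspension operator recovers the Tate twist, \[ \Sigma^{{\mathbb A}^n_X}\cong \Sigma^{2n,n},\qquad \Sigma^{-{\mathbb A}^n_X}\cong \Sigma^{-2n,-n}, \] so the fact that $\Sigma^V$ and $\Sigma^{-V}$ are inverse equivalences generalizes the invertibility of $T$-suspension in the stable category.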
For a locally free sheaf $E\in\operatorname{Coh}_X^G$, we thus have the autoequivalence $\Sigma^{{\mathbb V}(E)}$ of ${\operatorname{SH}}^G(X)$. Ayoub \cite[Th\'eor\`eme 1.5.18]{Ayoub} and Riou \cite[Proposition 4.1.1]{RiouRR} have shown that for $G=\{{\operatorname{\rm Id}}\}$, the association $E\mapsto \Sigma^{{\mathbb V}(E)}$ extends to a functor \[ \Sigma^{-}:{\mathcal V}(X)\to {\operatorname{Aut}}({\operatorname{SH}}(X)). \] Here ${\mathcal V}(X)$ is the path groupoid of the $K$-theory space of $X$ and ${\operatorname{Aut}}({\operatorname{SH}}(X))$ is the category with objects the auto-equivalences of ${\operatorname{SH}}(X)$ and morphisms the natural isomorphisms.
$\Sigma^{-}$ is natural with respect to $f^*$, $f^!$, $f_*$ and $f_!$ for morphisms $f:Y\to X$: \[ \Sigma^{f^*{\mathbb V}(E)}\circ f^?\cong f^?\circ \Sigma^{{\mathbb V}(E)},\quad f_?\circ \Sigma^{f^*{\mathbb V}(E)} \cong \Sigma^{{\mathbb V}(E)}\circ f_?, \] for $?=*, !$. This extends without problem to the equivariant case to give a functor $\Sigma^{-}:{\mathcal V}^G(X)\to {\operatorname{Aut}}({\operatorname{SH}}^G(X))$, with ${\mathcal V}^G(X)$ the path groupoid of the $G$-equivariant $K$-theory space of $X$, having the properties listed above.
We write $\Sigma^{{\mathbb V}(E_\bullet)}$ for the image of a perfect complex $E_\bullet$ under this functor.\footnote{Many authors, for example \cite{Hoyois6}, use the notation $\Sigma^{E_\bullet}$ for our notation $\Sigma^{{\mathbb V}(E_\bullet)}$. For example, for $X$ smooth over $S$ and $E=\Omega_{X/S}$, we have ${\mathbb V}(E)=T_{X/S}$, the relative tangent bundle, and the operator denoted $\Sigma^{\Omega_{X/S}}$ in \cite{Hoyois6} will be written here as $\Sigma^{T_{X/S}}$.} For each distinguished triangle $E^1_\bullet\to E_\bullet\to E^2_\bullet\to$ in $D_G^\text{\it perf}(X)$ there is an isomorphism \[ \Sigma^{{\mathbb V}(E_\bullet)}\cong \Sigma^{{\mathbb V}(E^{1}_\bullet)}\circ \Sigma^{{\mathbb V}(E^{2}_\bullet)}, \] natural with respect to isomorphisms of distinguished triangles, and for each $E_\bullet$ an isomorphism $\Sigma^{{\mathbb V}(E_\bullet[1])}\cong (\Sigma^{{\mathbb V}(E_\bullet)})^{-1}$. Thus, if $E_\bullet=(E_n\to\ldots\to E_m)$ is supported in $[m,n]$, then $\Sigma^{{\mathbb V}(E_\bullet)}$ is canonically isomorphic to $\Sigma^{(-1)^n{\mathbb V}(E_n)}\circ\ldots\circ \Sigma^{(-1)^m{\mathbb V}(E_m)}$.
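For instance, specializing the last formula to a two-term complex: if $E_\bullet=(E_1\to E_0)$ is supported in $[0,1]$, then \[ \Sigma^{{\mathbb V}(E_\bullet)}\cong \Sigma^{-{\mathbb V}(E_1)}\circ\Sigma^{{\mathbb V}(E_0)}, \] so the twist depends only on the class $[E_0]-[E_1]$ in $K$-theory; if moreover $E_0$ and $E_1$ are trivial of ranks $r_0$ and $r_1$, this is the Tate twist $\Sigma^{2(r_0-r_1),\,r_0-r_1}$.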
For $f:Y\to X$ smooth, there are canonical isomorphisms \cite[Theorem 6.9]{Hoyois6} \[ f_!\cong f_\#\circ \Sigma^{-T_f}, \quad f^!\cong \Sigma^{T_f}\circ f^*, \] with $T_f\to Y$ the relative tangent bundle, giving the canonical isomorphism \[ f_!f^!\cong f_\#f^*. \] If $f:V\to X$ is an affine space bundle over $X$, then the ${\mathbb A}^1$-homotopy property shows that the co-unit of the adjunction $f_\#\dashv f^*$, $e_f:f_\#f^*\to {\operatorname{\rm Id}}$, is a natural isomorphism.
Besides the equivariant stable motivic homotopy category, Hoyois has defined an equivariant unstable motivic category ${\mathcal H}^G_\bullet(X)$ for $X\in{\operatorname{\mathbf{Sch}}}^G/B$ \cite[\S5]{Hoyois6}, generalizing the constructions of Morel-Voevodsky \cite{MorelVoev} in the non-equivariant case. There is an infinite $T$-suspension functor $\Sigma^\infty_T:{\mathcal H}^G_\bullet(X)\to {\operatorname{SH}}^G(X)$; we often simply write ${\mathcal X}$ for $\Sigma^\infty_T{\mathcal X}$ when the context makes the meaning clear. ${\mathcal H}_\bullet^G(X)$ is a symmetric monoidal category with unit $S^0_X:=X_+$, and the unit $1_X\in {\operatorname{SH}}^G(X)$ is $\Sigma^\infty_T(S^0_X)$.
The functors $f^*$, $f_*$ are $T$-stabilizations of functors $f^*:{\mathcal H}_\bullet(X)\to{\mathcal H}_\bullet(Y)$, $f_*:{\mathcal H}_\bullet(Y)\to {\mathcal H}_\bullet(X)$, with $f^*$ left adjoint to $f_*$. If $f:Y\to X$ is smooth, $f_\#$ is the $T$-stabilization of $f_\#:{\mathcal H}_\bullet(Y)\to {\mathcal H}_\bullet(X)$, left adjoint to $f^*$. Similarly, if $i:Y\to X$ is a closed immersion, the maps $i_*=i_!:{\operatorname{SH}}^G(Y)\to {\operatorname{SH}}^G(X)$ are the $T$-stabilizations of the unstable $i_*$.
To give the reader some intuition about the suspension functors, we mention that for $V\to X$ a vector bundle with zero section $s$, the suspension $\Sigma^V(1_X)$ is the stabilization of the {\em Thom space} ${\operatorname{Th}}_X(V):=p_\#s_*(1_X)\in{\mathcal H}^G_\bullet(X)$, and that ${\operatorname{Th}}_X(V)$ is canonically isomorphic to the cofiber of $V\setminus s(X)\to V$ in ${\mathcal H}^G_\bullet(X)$: ${\operatorname{Th}}_X(V)\cong V/(V\setminus s(X))$. We will not be using these facts in the sequel.
There are {\em exchange morphisms} associated to a cartesian diagram \[ \xymatrix{ Z\ar[r]^-q\ar@{}@<-12pt>[r]_\Delta\ar[d]_g&Y\ar[d]^f\\ W\ar[r]_-p&X } \] as follows:\\ 1. We have $Ex(\Delta^*_*):p^*f_*\to g_*q^*$ defined as the composition \[ p^*f_*\xrightarrow{u_g}g_*g^*p^*f_*=g_*q^*f^*f_*\xrightarrow{e_f}g_*q^*. \] $Ex(\Delta^*_*)$ is an isomorphism if $p$ is smooth or if $f$ is proper.\\ 2. Suppose that $p$ is smooth. The isomorphism $Ex(\Delta^*_\#):q_\#g^*\to f^*p_\#$ is defined as the composition \[ q_\#g^*\xrightarrow{u_p}q_\#g^*p^*p_\#=q_\#q^*f^*p_\#\xrightarrow{e_q}f^*p_\#. \] 3. Suppose $p$ is smooth. We have $Ex(\Delta_{\#*}):p_\#g_*\to f_*q_\#$ defined as the composition \[ p_\#g_*\xrightarrow{u_f}f_*f^*p_\#g_*\xrightarrow{Ex(\Delta^*_\#)^{-1}}f_*q_\#g^*g_* \xrightarrow{e_g}f_*q_\#. \] 4. Suppose $p$ is smooth. We have the isomorphism $Ex(\Delta^{*!}):g^*p^!\to q^!f^*$ defined as the composition \[ g^*p^!\cong g^*\Sigma^{T_{W/X}}p^*\cong \Sigma^{T_{Z/Y}}g^*p^*\cong \Sigma^{T_{Z/Y}}q^*f^*\cong q^!f^*. \] 5. Suppose $p$ is smooth. We have $Ex(\Delta_{!*}):p_!g_*\to f_*q_!$ defined as the composition \[ p_!g_*\cong p_\#\Sigma^{-T_{W/X}}g_*\cong p_\#g_*\Sigma^{-T_{Z/Y}}\xrightarrow{Ex(\Delta_{\#*})}f_*q_\#\Sigma^{-T_{Z/Y}}\cong f_*q_!. \] 6. For arbitrary $p$, we have the base-change isomorphism $Ex(\Delta^*_!):p^* f_!\to g_!q^*$ (see \cite[Theorem 6.12]{Hoyois6}) satisfying $\eta^g_{!*}\circ Ex(\Delta^*_!)=Ex(\Delta^*_*)\circ \eta^f_{!*}$. 
If $f$ is smooth, combining $Ex(\Delta^*_!)$ with the naturality of $\Sigma^-$ gives the base-change isomorphism $Ex(\Delta^*_\#):p^*f_\#\to g_\#q^*$.
Suppose we have a closed immersion $i:Z\to X$ in ${\operatorname{\mathbf{Sch}}}^G/B$, with open complement $j:U\to X$. This yields the {\em localization distinguished triangle} in ${\operatorname{SH}}^G(X)$ \begin{equation}\label{eqn:LocTria} j_!j^!\xrightarrow{e_j} {\operatorname{\rm Id}}_{{\operatorname{SH}}^G(X)}\xrightarrow{u_i} i_*i^*\to j_!j^![1]. \end{equation} Note that $j_!=j_\#$, $j^!=j^*$ and $i_*=i_!$.
\begin{definition} Let $\pi_Z:Z\to B$ be in ${\operatorname{\mathbf{Sch}}}^G/B$. The {\em Borel-Moore motive} of $Z$ over $B$ is the object $Z_{\operatorname{B.M.}}:=\pi_{Z!}(1_Z)$ in ${\operatorname{SH}}^G(B)$. For $v:={\mathbb V}(E_\bullet)$, we have the twisted Borel-Moore motive $Z(v)_{\operatorname{B.M.}}:= \pi_{Z!}(\Sigma^{v}(1_Z))$. If we need to denote the base-scheme $B$, we write these as $Z/B_{\operatorname{B.M.}}$ and $Z/B(v)_{\operatorname{B.M.}}$, respectively. \end{definition}
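For example, for a closed immersion $i:Z\to X$ in ${\operatorname{\mathbf{Sch}}}^G/B$ with open complement $j:U\to X$, applying $\pi_{X!}$ to the localization triangle \eqref{eqn:LocTria} evaluated at $\Sigma^v(1_X)$, and using the isomorphisms $\pi_{X!}j_!\cong \pi_{U!}$ and $\pi_{X!}i_!\cong \pi_{Z!}$, gives the distinguished triangle in ${\operatorname{SH}}^G(B)$ \[ U(j^*v)_{\operatorname{B.M.}}\to X(v)_{\operatorname{B.M.}}\to Z(i^*v)_{\operatorname{B.M.}}\to U(j^*v)_{\operatorname{B.M.}}[1], \] the localization sequence for twisted Borel-Moore motives.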
Let $p:Z\to W$ be a morphism in ${\operatorname{\mathbf{Sch}}}^G/B$. For $p$ proper, we have the natural transformation ({\em proper pull-back})
\begin{equation}\label{eqn:Projpush-forward}
p^*:\pi_{W!}\to \pi_{Z!}\circ p^*
\end{equation}
defined as the composition \[ \pi_{W!} \xrightarrow{u_p} \pi_{W!}p_*p^*\xrightarrow{(\eta^p_{!*})^{-1}} \pi_{W!}p_!p^*\cong \pi_{Z!}\circ p^*. \] Applying $p^*$ to $\Sigma^{{\mathbb V}(E_\bullet)}(1_W)$ gives the morphism \[ p^*:W(v)_{\operatorname{B.M.}}\to Z(p^*v)_{\operatorname{B.M.}} \] in ${\operatorname{SH}}^G(B)$. One checks easily that $(pq)^*=q^*p^*$ for composable proper morphisms $p$ and $q$.
Let $f:Z\to W$ be a smooth morphism in ${\operatorname{\mathbf{Sch}}}^G/B$. We have the natural transformation ({\em smooth push-forward}) \begin{equation}\label{eqn:Smoothpull-back} f_*: \pi_{Z!}\Sigma^{T_f}f^*\to \pi_{W!} \end{equation} defined by the composition \[ \pi_{Z!} \Sigma^{T_f}f^*\cong \pi_{W!}f_! f^!\xrightarrow{e_f} \pi_{W!}. \] One checks that $f\mapsto f_*$ is functorial: for smooth morphisms $f:Z\to W$, $g:Y\to Z$, we have \[ (fg)_*\circ\theta_{Y/Z/W}=f_*\circ [g_*\circ(\Sigma^{T_f}\circ f^*)], \] where \[ \theta_{Y/Z/W}:(\pi_{Y!} \Sigma^{T_g}g^*)\circ(\Sigma^{T_f}\circ f^*)\to \pi_{Y!} \Sigma^{T_{fg}}\circ(fg)^* \] is the isomorphism induced by the exact sequence \[ 0\to T_g\to T_{fg}\to g^*T_f\to 0. \] Applying $f_*$ to $\Sigma^{{\mathbb V}(E_\bullet)}(1_W)$ gives the smooth push-forward map \[ f_*:Z(f^*v+T_f)_{\operatorname{B.M.}}\to W(v)_{\operatorname{B.M.}}. \]
Suppose a smooth morphism $f:Z\to W$ admits a section $s:W\to Z$. We have the canonical isomorphism \begin{equation}\label{eqn:a0} f_!s_!\cong (fs)_!\tag{a} \end{equation} which induces the isomorphism on the adjoints \begin{equation}\label{eqn:b0} s^!f^!\cong (fs)^!\tag{b} \end{equation} Let $e_s:s_!s^!\to{\operatorname{\rm Id}}$, $e_f:f_!f^!\to {\operatorname{\rm Id}}$, $e_{fs}: (fs)_!(fs)^!\to {\operatorname{\rm Id}}$ be the co-units of adjunction. This gives us the commutative diagram \[ \xymatrix{
{\operatorname{\rm Id}}\ar@{}[r]|{\hbox{$\cong$}}\ar@{=}[d]&f_!s_!s^!f^!\ar[r]^-{f_!e_sf^!}\ar[d]_{(a)\circ(b)}^\wr&f_!f^!\ar[r]^{e_f}&{\operatorname{\rm Id}}\ar@{=}[d]\\
{\operatorname{\rm Id}} \ar@{}[r]|{\hbox{$\cong$}}\ar@/_0.6cm/[rrr]_{{\operatorname{\rm Id}}}& (fs)_!(fs)^!\ar[rr]_{e_{fs}}&&{\operatorname{\rm Id}},
}
\] in other words, $f_!e_sf^!$ is a right inverse to $e_f$.\footnote{I am grateful to F. D\'eglise and D.C. Cisinski for this argument.} Define the natural transformation \begin{equation}\label{eqn:Section} s_!:\pi_{W!}\to \pi_{Z!}\circ\Sigma^{T_f}\circ f^* \end{equation} as the composition \[ \pi_{W!}\xrightarrow{\pi_{W!}\circ f_!e_sf^!}\pi_{W!}\circ f_!\circ f^!\cong \pi_{W!}\circ f_!\circ \Sigma^{T_f}\circ f^*\cong\pi_{Z!}\circ\Sigma^{T_f}\circ f^*. \] We call $s_!$ the {\em Gysin push-forward}.
\begin{lemma}\label{lem:SectionIsos} 1. Let $f:Z\to W$ be a smooth morphism in ${\operatorname{\mathbf{Sch}}}^G/B$ with a section $s:W\to Z$. Then the composition \[ \pi_{W!}\xrightarrow{s_!} \pi_{Z!}\circ\Sigma^{T_f}\circ f^*\xrightarrow{f_*} \pi_{W!} \] is the identity.\\ 2. If $f:Z\to W$ is a vector bundle over $W$, then \[ f_*:\pi_{Z!}\circ\Sigma^{T_f}\circ f^*\to \pi_{W!} \] is an isomorphism, with inverse $s_!$.\\ 3. If $f:Z\to W$ is an affine space bundle, then \[ f_*:\pi_{Z!}\circ\Sigma^{T_f}\circ f^*\to \pi_{W!} \] is an isomorphism. \end{lemma}
\begin{proof} The fact that $e_f\circ(f_!e_sf^!)={\operatorname{\rm Id}}$ implies (1).
For (2), it suffices by (1) to show that $f_*$ is an isomorphism. Since $Z\to W$ is a vector bundle, it follows by ${\mathbb A}^1$-homotopy invariance that $e_f:f_!f^!=f_\#f^*\to {\operatorname{\rm Id}}$ is an isomorphism; applying $\pi_{W!}\circ-$ yields (2).
As an affine space bundle is locally for the Zariski topology a vector bundle, (3) follows from (2) and Mayer-Vietoris. \end{proof}
\begin{lemma}\label{lem:BaseChange} Suppose we have a cartesian diagram in ${\operatorname{\mathbf{Sch}}}^G/B$ \[ \xymatrix{ Z\ar[r]^-q\ar@{}@<-12pt>[r]_\Delta\ar[d]_g&Y\ar[d]^f\\ W\ar[r]_-p&X } \] with $p$ proper and $f$ smooth. Then the diagram \[ \xymatrixcolsep{3pt} \xymatrix{ \pi_{Y!}\circ\Sigma^{T_f}\circ f^*\hskip20pt\ar[rrr]^-{q^*\circ(\Sigma^{T_f}\circ f^*)}\ar[d]_{f_*} &&&\hskip20pt\pi_{Z!}\circ q^*\circ\Sigma^{T_f}\circ f^* \ar@{=}[r]&\pi_{Z!}\circ \Sigma^{T_g}\circ g^*\circ p^*\ar[d]^{g_*\circ p^*}\\ \pi_{X!}\ar[rrrr]_{p^*}&&&&\pi_{W!}\circ p^* } \] commutes. In other words, proper pull-back commutes with smooth push-forward. \end{lemma}
\begin{proof} In what follows we simply write $\xrightarrow{\sim}$ for isomorphisms that follow from functoriality, such as $\pi_{X!}f_!\cong\pi_{Y!}$, or from the isomorphisms $\Sigma^{T_f}f^*\cong f^!$ and $\Sigma^{T_g}g^*\cong g^!$.
We fit a number of diagrams together. \begin{equation}\label{eqn:a}\tag{a} \xymatrixcolsep{30pt} \xymatrix{ \pi_{Y!}\Sigma^{T_f}f^*\ar[r]^-{\pi_{Y!}\circ u_q}\ar[dd]_\wr&\pi_{Y!}q_*q^*\Sigma^{T_f}f^*\ar[d]^\wr\\ &\pi_{Y!}q_*q^*f^!\ar[d]^\wr\\ \pi_{X!}f_!f^!\ar[r]^{\pi_{X!}f_!u_qf^!}\ar[ddd]_{\pi_{X!}e_f}&\pi_{X!}f_!q_*q^*f^!\ar[d]^{\pi_{X!}Ex(\Delta_{!*})q^*f^!}\\ &\pi_{X!}p_*g_!q^*f^!\ar[d]^{\pi_{X!}p_*g_!Ex(\Delta^{*!})}_\wr\\ &\pi_{X!}p_*g_!g^!p^*\ar[d]^{\pi_{X!}p_*e_gp^*}\\ \pi_{X!}\ar[r]_{\pi_{X!}u_p}&\pi_{X!}p_*p^* } \end{equation} \begin{equation}\label{eqn:b1}\tag{b1} \xymatrixcolsep{60pt} \xymatrix{ \pi_{Y!}q_*q^*\Sigma^{T_f}f^*\ar[d]^\wr&\pi_{Y!}q_!q^*\Sigma^{T_f}f^*\ar[l]^\sim_{\pi_{Y!}\eta^q_{!*}\Sigma^{T_f}f^*} \ar[d]^\wr\\ \pi_{Y!}q_*q^*f^!\ar[d]^\wr&\pi_{Y!}q_!q^*f^! \ar[l]^\sim_{\pi_{Y!}\eta^q_{!*}f^!}\ar[d]^\wr\\ \pi_{X!}f_!q_*q^*f^!\ar[d]_{\pi_{X!}Ex(\Delta_{!*})}&\pi_{X!}p_!g_!q^*f^!\ar[ld]^{\pi_{X!}\eta^p_{!*}g_!q^*f^!}_\sim\\ \pi_{X!}p_*g_!q^*f^!& } \end{equation} \begin{equation}\label{eqn:b2}\tag{b2} \xymatrixcolsep{60pt} \xymatrix{ \pi_{X!}p_*g_!q^*f^!\ar[d]^\wr_{\pi_{X!}p_*g_!Ex(\Delta^{*!})}&\pi_{X!}p_!g_!q^*f^!\ar[l]^\sim_{\pi_{X!}\eta^p_{!*}g_!q^*f^!}\ar[d]_\wr^{\pi_{X!}p_!g_!Ex(\Delta^{*!})}\\ \pi_{X!}p_*g_!g^!p^*\ar[d]_{\pi_{X!}p_*e_gp^*}&\pi_{X!}p_!g_!g^!p^*\ar[l]^\sim_{\pi_{X!}\eta^p_{!*}g_!g^!p^*}\ar[d]^{\pi_{X!}p_!e_gp^*}\\ \pi_{X!}p_*p^*&\pi_{X!}p_!p^*\ar[l]_\sim^{{\pi_{X!}\eta^p_{!*}p^*}} } \end{equation}
\begin{equation}\label{eqn:c}\tag{c} \xymatrixcolsep{60pt} \xymatrix{ \pi_{Y!}q_!q^*\Sigma^{T_f}f^*\ar[r]^\sim\ar[d]^\wr&\pi_{Z!}\Sigma^{T_g}g^*p^*\ar[d]^\wr\\ \pi_{Y!}q_!q^*f^!\ar[d]^\wr&\pi_{Z!}g^!p^*\ar[d]^\wr\\ \pi_{X!}p_!g_!q^*f^!\ar[r]_\sim^{\pi_{W!}g_!Ex(\Delta^{*!})}\ar[d]^\sim_{\pi_{X!}p_!g_!Ex(\Delta^{*!})} &\pi_{W!}g_!g^!p^*\ar[dl]^{\sim\circ{\operatorname{\rm Id}}_{g_!g^!p^*}}\ar[dd]^{\pi_{W!}e_gp^*}\\ \pi_{X!}p_!g_!g^!p^*\ar[d]_{\pi_{X!}p_!e_gp^*}\\ \pi_{X!}p_!p^*\ar[r]^\sim&\pi_{W!}p^* } \end{equation}
These fit together as \[ \xymatrixcolsep{0pt} \xymatrixrowsep{5pt} \xymatrix{ \pi_{Y!}\circ\Sigma^{T_f}\circ f^*\ar[rrrr]^-{q^*\circ(\Sigma^{T_f}\circ f^*)}\ar[dddd]_{f_*}&&&&\pi_{Z!}\circ\Sigma^{T_{Z/W}}\circ g^*\circ p^*\ar[dddd]^{g_*\circ p^*}\\ &&(b1)&\\ &(a)&&(c)\\ &&(b2)&\\ \pi_{X!}\ar[rrrr]_{p^*}&&&&\pi_{W!}\circ p^* } \] The four diagrams (a), (b1), (b2) and (c) all commute; this follows from the commutativity of transformations acting on separate parts of a composition of functors, or the naturality of the unit and co-unit of an adjunction, or the fact that the exchange isomorphisms $Ex(\Delta^{*!})$ and $Ex(\Delta_{!*})$ are derived from the functoriality of composition for $(-)^*$ and $(-)_*$, combined with units and co-units of various adjunctions. For instance, the commutativity of the lower square in (a) is equivalent to the commutativity of the square \[ \xymatrix{ f_\#f^*\ar[r]^-{u_q}\ar[ddd]_{e_f}&f_\#q_*q^*f^*\ar[d]^{Ex(\Delta_{\#*})}\\ &p_*g_\#q^*f^*\ar[d]^\wr\\ &p_*g_\#g^*p^*\ar[d]^{e_g}\\ {\operatorname{\rm Id}}\ar[r]_-{u_p}&p_*p^* } \] We fill this in as follows \[ \xymatrix{ f_\#f^*\ar[rrr]^-{u_q}\ar@{}@<-12pt>[rrr]_-{(i)}\ar[dr]^{u_p}\ar[dddd]_{e_f}\ar@{}@<15pt>[dddd]^{(v)} &&&f_\#q_*q^*f^*\ar[dd]^{Ex(\Delta_{\#*})}\ar@{}@<-20pt>[dd]_{(ii)} \ar[dl]_{u_p}\\ &p_*p^*f_\#f^*\ar[r]^-{u_q}\ar@{}@<-14pt>[r]_-{(iii)}&p_*p^*f_\#q_*q^*f^*\\ &p_*g_\#q^*f^*\ar[u]_\wr^{Ex(\Delta^*_\#)}\ar[r]_-{u_q}\ar[d]^\wr&p_*g_\#q^*q_*q^*f^*\ar[u]^\wr_{Ex(\Delta^*_\#)}\ar[r]_{e_q} &p_*g_\#q^*f^*\ar[d]^\wr\ar@{}@<-96pt>[d]^{(iv)}\\ &p_*g_\#g^*p^*\ar[rr]^{{\operatorname{\rm Id}}}\ar[drr]_{e_g}\ar@{}@<-8pt>[rr]_(.7){(vi)}&&p_*g_\#g^*p^*\ar[d]^{e_g}\\ 
{\operatorname{\rm Id}}\ar[rrr]_-{u_p}&&&p_*p^* } \] The commutativity of (i), (iii) and (vi) is obvious, that of (ii) is the definition of $Ex(\Delta_{\#*})$ and that of (iv) is the standard identity $(e_q\circ q^*)\circ(q^*\circ u_q)={\operatorname{\rm Id}}$ for the unit and co-unit of an adjunction. The commutativity of (v) reduces to that of \[ \xymatrixcolsep{30pt} \xymatrix{ f_\#f^*\ar[r]^-{u_p}\ar[d]_{e_f}&p_*p^*f_\#f^*\ar[d]_{e_f} &p_*g_\#q^*f^*\ar[l]^\sim_{Ex(\Delta^*_\#)}\ar[d]^\wr\\ {\operatorname{\rm Id}}\ar[r]_-{u_p}&p_*p^*&p_*g_\#g^*p^*\ar[l]^{e_g} } \] The commutativity of the left side is obvious and, using the definition of $Ex(\Delta^*_\#)$, that of the right side reduces to the commutativity of \[ \xymatrix{ g_\#q^*f^*\ar[d]_{u_f}&g_\#g^*p^*\ar[l]_\sim\ar[r]^-{e_g}&p^*\\ g_\#q^*f^*f_\#f^*\ar[r]_\sim &g_\#g^*p^*f_\#f^*\ar[r]_-{e_g}&p^*f_\#f^*\ar[u]_{e_f} } \] Filling this in as \[ \xymatrix{ &g_\#g^*p^*\ar[dl]_\sim\ar[r]^-{e_g}&p^*\\ g_\#q^*f^*\ar[d]_{u_f}\ar@{=}[r]&g_\#q^*f^*\ar[u]_\wr\\ g_\#q^*f^*f_\#f^*\ar[r]_\sim\ar[ru]_{e_f} &g_\#g^*p^*f_\#f^*\ar[r]_-{e_g}&p^*f_\#f^*\ar[uu]_{e_f} } \] we see that the commutativity follows from the identity $(e_f\circ f^*)\circ(f^*\circ u_f)={\operatorname{\rm Id}}$.
The commutativity of the remaining diagrams is much easier to verify and we leave the details to the reader. \end{proof}
\begin{remark} \label{rem:basechange} Proper pull-back, smooth push-forward and Gysin push-forward are all compatible with base-change in the following sense. Fix a morphism $g:B'\to B$.
Let $p:Z\to W$ in ${\operatorname{\mathbf{Sch}}}^G/B$ be proper. Using the base-change isomorphisms $Ex(\Delta^*_!)$, we see that the proper pull-back morphism $p^*:\pi_{W!}\to \pi_{Z!}\circ p^*$ is natural with respect to pull-back by $g:B'\to B$. In detail, we have the cartesian square
\[
\xymatrix{ Z_{B'}\ar[r]^{g_Z}\ar[d]_{p_{B'}}&Z\ar[d]^p\\ W_{B'}\ar[r]^{g_W}&W } \] with $Z_{B'}=Z\times_BB'$, $W_{B'}=W\times_BB'$, both considered as objects in ${\operatorname{\mathbf{Sch}}}^G/B'$. The base-change isomorphisms $Ex(\Delta^*_!)$ give the commutative diagram
\[ \xymatrix{ \pi_{W_{B'}!}\circ g_W^*\ar[r]_-\sim\ar[d]_{p_{B'}^*}&g^*\circ \pi_{W!}\ar[d]^{p^*}\\ \pi_{Z_{B'}!}\circ p_{B'}^*\circ g_W^*\ar[r]_-\sim&g^*\circ\pi_{Z!}\circ p^* } \]
For $f:Z\to W$ smooth in ${\operatorname{\mathbf{Sch}}}^G/B$, inducing $f_{B'}:Z_{B'}\to W_{B'}$, the base-change isomorphisms $Ex(\Delta^*_!)$ and $Ex(\Delta^{*!})$, and the naturality of $\Sigma^{-}$ give us the commutative diagram \[ \xymatrix{ \pi_{Z_{B'}!}\circ \Sigma^{T_{f_{B'}}}\circ f_{B'}^*\circ g_W^*\ar[r]_-\sim\ar[d]_{f_{B'*}}& g^*\circ \pi_{Z!}\circ \Sigma^{T_{f}}\circ f^*\ar[d]^{f_*}\\ \pi_{W_{B'}!}\circ g_W^*\ar[r]_-\sim&g^*\circ \pi_{W!} } \] If $s:W\to Z$ is a section to a smooth $f:Z\to W$, we have the induced section $s_{B'}:W_{B'}\to Z_{B'}$ to $f_{B'}$, and the base-change isomorphisms $Ex(\Delta^*_!)$ and $Ex(\Delta^{*!})$, and the naturality of $\Sigma^{-}$ give us the commutative diagram \[ \xymatrix{ \pi_{Z_{B'}!}\circ \Sigma^{T_{f_{B'}}}\circ f_{B'}^*\circ g_W^*\ar[r]_-\sim& g^*\circ \pi_{Z!}\circ \Sigma^{T_{f}}\circ f^*\\ \pi_{W_{B'}!}\circ g_W^*\ar[r]_-\sim\ar[u]^{s_{B'!}}&g^*\circ\pi_{W!}\ar[u]^{s_!} } \] These are all diagrams of functors from ${\operatorname{SH}}^G(W)$ to ${\operatorname{SH}}^G(B')$, and all the arrows marked with a ``$\sim$'' are isomorphisms.
\end{remark}
In the next definition, we adapt the notation for a bivariant theory introduced in \cite{DJK}. \begin{definition}\label{def:BMHomology} For ${\mathcal E}\in {\operatorname{SH}}^G(B)$, $\pi_Z:Z\to B$ in ${\operatorname{\mathbf{Sch}}}^G/B$, $E_\bullet\in {\mathcal V}^G(Z)$ and $v={\mathbb V}(E_\bullet)$, define the {\em twisted Borel-Moore homology} with values in ${\mathcal E}$ as \[ {\mathcal E}^{\operatorname{B.M.}}_{a,b}(Z, v):={\rm Hom}_{{\operatorname{SH}}^G(B)}(\Sigma^{a,b}Z(v)_{\operatorname{B.M.}}, {\mathcal E}). \] To simplify the notation, we write ${\mathcal E}^{\operatorname{B.M.}}(Z, v)$ for ${\mathcal E}^{\operatorname{B.M.}}_{0,0}(Z, v)$.
For an object ${\mathcal F}\in {\operatorname{SH}}^G(B)$, we have the {\em ${\mathcal E}$-cohomology} \[ {\mathcal E}^{a,b}({\mathcal F}):={\rm Hom}_{{\operatorname{SH}}^G(B)}({\mathcal F}, \Sigma^{a,b}{\mathcal E}); \] for ${\mathcal X}\in {\mathcal H}^G_\bullet(B)$ define ${\mathcal E}^{a,b}({\mathcal X}):={\mathcal E}^{a,b}(\Sigma^\infty_T{\mathcal X})$ and for ${\mathcal Y}\in {\mathcal H}^G(B)$, define ${\mathcal E}^{a,b}({\mathcal Y}):={\mathcal E}^{a,b}({\mathcal Y}_+)$. Finally, for $X\in {\mathbf{Sm}}^G/B$, $E_\bullet\in {\mathcal V}^G(X)$ and $v={\mathbb V}(E_\bullet)$ define the twisted ${\mathcal E}$-cohomology \[ {\mathcal E}^{a,b}(X,v):={\mathcal E}^{a,b}(\pi_{X\#}(\Sigma^{-v}(1_X)))={\rm Hom}_{{\operatorname{SH}}^G(B)}(\Sigma^\infty_TX_+, \Sigma^{a,b}\Sigma^v{\mathcal E}) \] \end{definition}
\begin{remark}\label{rem:GysinSmoothCartesian} Let
\[
\xymatrix{
Z'\ar[r]^q\ar[d]_g&Z\ar[d]^f\\
W'\ar[r]^p&W
}
\]
be a cartesian diagram in ${\operatorname{\mathbf{Sch}}}^G/B$ with $p, q$ smooth and with $f:Z\to W$ a vector bundle. Let $s:W\to Z$ be a section and let $t:W'\to Z'$ be the induced section. Then the diagram
\[
\xymatrix{ Z(f^*v+T_f)_{\operatorname{B.M.}} &&W(v)_{\operatorname{B.M.}}\ar[ll]_-{s_!}\\ Z'(q^*(f^*v+ T_f)+T_q)_{\operatorname{B.M.}} \ar[u]^{q_*}& Z'(g^*(p^*v+ T_p)+T_g)_{\operatorname{B.M.}} \ar@{=}[l]&W'(p^*v+T_p)_{\operatorname{B.M.}}\ar[u]^{p_*}\ar[l]^-{t_!}
} \] commutes. To see this, we may replace the maps $s_!$, $t_!$ with their respective inverses $f_*$, $g_*$ (Lemma~\ref{lem:SectionIsos}), and then the commutativity follows from the functoriality of smooth push-forward: $p_*g_*=f_*q_*$.
In fact, the Gysin push-forward commutes with smooth push-forward in cartesian squares, without assuming the smooth maps are vector bundles, but as we do not need this result, we omit the proof. \end{remark}
\begin{remark} The usual operations on Borel-Moore homology: proper push-forward, smooth pull-back, intersection with a section, all follow by applying proper pull-back $p^*$, smooth push-forward $f_*$ or Gysin push-forward $s_!$ to morphisms $\Sigma^{a,b}Z/B(v)_{\operatorname{B.M.}}\to {\mathcal E}$. Explicitly, a proper map $p:Z\to W$ in ${\operatorname{\mathbf{Sch}}}^G/B$ induces the functorial proper push-forward \[ p_*: {\mathcal E}^{\operatorname{B.M.}}_{a,b}(Z, p^*v)\to {\mathcal E}^{\operatorname{B.M.}}_{a,b}(W, v) \] defined by $p_*:=(p^*)^*$, a smooth map $f:Z\to W$ in ${\operatorname{\mathbf{Sch}}}^G/B$ induces the functorial smooth pull-back \[ f^*:{\mathcal E}^{\operatorname{B.M.}}_{a,b}(W, v)\to{\mathcal E}^{\operatorname{B.M.}}_{a,b}(Z, f^*v+T_f) \] defined by $f^*:=(f_*)^*$, and for $f:Z\to W$ a smooth morphism in ${\operatorname{\mathbf{Sch}}}^G/B$ with a section $s:W\to Z$, we have the Gysin map \[ s^!:{\mathcal E}^{\operatorname{B.M.}}_{a,b}(Z, f^*v+T_f)\to {\mathcal E}^{\operatorname{B.M.}}_{a,b}(W, v) \] defined by $s^!:=(s_!)^*$, and satisfying $s^!\circ f^*={\operatorname{\rm Id}}$. We hope that the context will enable the reader to distinguish the {\em maps} $p_*$, $f^*$ and $s^!$ from the {\em functors} $p_*$, $f^*$ and $s^!$.
With this translation, Lemma~\ref{lem:BaseChange} says that on twisted Borel-Moore homology, proper push-forward commutes with smooth pull-back in a cartesian square, and Remark~\ref{rem:GysinSmoothCartesian} says that the Gysin map commutes with smooth pull-back in a cartesian square of vector bundles.
For $\pi_X:X\to B$ in ${\mathbf{Sm}}^G/B$, the purity isomorphism $\pi_{X!}\cong \pi_{X\#}\circ \Sigma^{-T_X}$ gives the isomorphism \[ {\mathcal E}^{\operatorname{B.M.}}_{a,b}(X, v)\cong {\mathcal E}^{-a,-b}(X, T_X-v). \]
Finally, for $f:Z\to W$ a morphism in ${\operatorname{\mathbf{Sch}}}^G/B$ with $W\in {\mathbf{Sm}}^G/B$, and ${\mathcal E}\in {\operatorname{SH}}^G(B)$ a commutative ring spectrum (i.e. a commutative monoid object in ${\operatorname{SH}}^G(B)$), we have the cap product \[ -\cap f^*(-): {\mathcal E}^{\operatorname{B.M.}}_{a,b}(Z, v)\times {\mathcal E}^{p,q}(W, w)\to {\mathcal E}^{\operatorname{B.M.}}_{a-p,b-q}(Z, v-f^*w) \] defined via the monoidal structure \begin{multline*} {\rm Hom}_{{\operatorname{SH}}^G(B)}(\Sigma^{a,b}Z(v)_{\operatorname{B.M.}}, {\mathcal E})\times {\rm Hom}_{{\operatorname{SH}}^G(B)}(\Sigma^\infty_TW_+, \Sigma^{p,q}\Sigma^w{\mathcal E})\\ \to {\rm Hom}_{{\operatorname{SH}}^G(B)}(\Sigma^{a-p,b-q}Z(v)_{\operatorname{B.M.}}\wedge \Sigma^{-w}\Sigma^\infty_TW_+,{\mathcal E}\wedge{\mathcal E}), \end{multline*} followed by the multiplication map ${\mathcal E}\wedge{\mathcal E}\to {\mathcal E}$, the purity isomorphism \[ Z(v)_{\operatorname{B.M.}}\wedge \Sigma^{-w} \Sigma^\infty_TW_+\cong Z\times_BW(p_1^*v-p_2^*w+p_2^*T_W)_{\operatorname{B.M.}} \] and the Gysin map for the graph morphism $\gamma_f:Z\to Z\times_BW$ \[ \gamma_f^!:{\mathcal E}^{\operatorname{B.M.}}_{a-p,b-q}(Z\times_BW, p_1^*v-p_2^*w+p_2^*T_W)\to {\mathcal E}^{\operatorname{B.M.}}_{a-p,b-q}(Z, v-f^*w). \]
\end{remark}
\section{The intrinsic stable normal cone}\label{sec:NormalThom} Take $Z\in {\operatorname{\mathbf{Sch}}}^G/B$. Since $Z$ is $G$-quasi-projective over $B$, $Z$ admits a closed immersion $i:Z\to M$ in ${\operatorname{\mathbf{Sch}}}^G/B$ with $M\in {\mathbf{Sm}}^G/B$. As in \cite{BF}, we have the {\em normal cone} $\mathfrak{C}_{i}$: \[ \mathfrak{C}_{i}:={\rm Spec\,}_{{\mathcal O}_Z}(\oplus_{n\ge0}{\mathcal I}_Z^n/{\mathcal I}_Z^{n+1}), \] where ${\mathcal I}_Z$ is the ideal sheaf of $Z\subset M$. Let $p_i:\mathfrak{C}_{i}\to Z$ be the projection and $\sigma_i:\mathfrak{C}_i\to M$ the composition $i\circ p_i$. As before, we denote the structure morphism for $Y\in {\operatorname{\mathbf{Sch}}}^G/B$ by $\pi_Y:Y\to B$. For $f:Y\to Z$ a smooth morphism in ${\operatorname{\mathbf{Sch}}}^G/B$, we often denote the relative tangent bundle $T_f$ by $T_{Y/Z}$, and in case $f=\pi_Y:Y\to B$ is the (smooth) structure morphism we may write $T_Y$ instead of $T_{Y/B}$.
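\begin{ex} For orientation, here are two elementary, non-equivariant illustrations of the normal cone. If $i:Z\to M$ is a regular embedding, the canonical surjection ${\operatorname{Sym}}^*({\mathcal I}_Z/{\mathcal I}_Z^2)\to \oplus_{n\ge0}{\mathcal I}_Z^n/{\mathcal I}_Z^{n+1}$ is an isomorphism, so $\mathfrak{C}_i={\mathbb V}({\mathcal I}_Z/{\mathcal I}_Z^2)$ is the normal bundle of $Z$ in $M$. For a non-reduced example, take $Z={\rm Spec\,}k[x]/(x^2)\subset {\mathbb A}^1_k=M$; then ${\mathcal I}_Z=(x^2)$ is generated by a nonzerodivisor, each ${\mathcal I}_Z^n/{\mathcal I}_Z^{n+1}$ is free of rank one over ${\mathcal O}_Z$, and $\mathfrak{C}_i\cong {\mathbb A}^1_Z$. \end{ex}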
\begin{lemma} \label{lem:CanonIso} Suppose we have closed immersions $i:Z\to M$, $i':Z\to M'$ in ${\operatorname{\mathbf{Sch}}}^G/B$ with $M, M'\in {\mathbf{Sm}}^G/B$. Then there is a canonical isomorphism \[ \psi_{i,i'}: \mathfrak{C}_{i'}(\sigma_{i'}^*T_{M'})_{\operatorname{B.M.}}\xrightarrow{\sim} \mathfrak{C}_{i}(\sigma_i^* T_{M})_{\operatorname{B.M.}}. \] If we have a third closed immersion $i'':Z\to M''$ then $\psi_{i,i'}\circ \psi_{i',i''}=\psi_{i,i''}$. \end{lemma}
\begin{proof} Suppose we have defined the required isomorphism $\psi_g:=\psi_{i,i'}$ for each commutative diagram in ${\operatorname{\mathbf{Sch}}}^G/B$ \begin{equation}\label{eqn:SmoothLift} \xymatrix{ Z\ar[d]_{i'}\ar[dr]^i\\ M'\ar[r]_g&M } \end{equation} with $g$ a smooth morphism in ${\mathbf{Sm}}^G/B$ and $i, i'$ closed immersions, satisfying $\psi_{gg'}=\psi_g\circ\psi_{g'}$ for each commutative diagram \[ \xymatrix{ &Z\ar[dl]_{i''}\ar[d]^{i'}\ar[dr]^i\\ M''\ar[r]_{g'}&M'\ar[r]_g &M } \] For an arbitrary pair $i:Z\to M$, $i':Z\to M'$, we have the closed immersion $(i,i'):Z\to M\times_BM'$ and set $\psi_{i,i'}:=\psi_{p_1}\circ\psi_{p_2}^{-1}$, which solves our problem.
We consider a commutative diagram \eqref{eqn:SmoothLift}. The smooth morphism $g$ induces a smooth morphism $\mathfrak{C}(g):\mathfrak{C}_{i'}\to \mathfrak{C}_{i}$, giving the commutative diagram \[ \xymatrix{ \mathfrak{C}_{i'}\ar[r]^{\mathfrak{C}(g)}\ar[d]_{\sigma_{i'}}&\mathfrak{C}_{i} \ar[d]^{\sigma_i}\\ M'\ar[r]_g&M } \] The projection $\mathfrak{C}(g)$ makes $\mathfrak{C}_{i'}$ into a torsor over $\mathfrak{C}_{i}$ for the vector bundle $p_i^*i^{\prime*}T_{M'/M}$, giving a canonical identification \[ T_{\mathfrak{C}_{i'}/\mathfrak{C}_{i}}\cong \sigma_{i'}^*T_{M'/M}. \]
The exact sequence \[ 0\to T_{M'/M}\to T_{M'}\to g^*T_{M}\to 0 \] gives the canonical isomorphism \[ \Sigma^{\sigma_{i'}^*T_{M'}} \xrightarrow{\theta_g}\Sigma^{\sigma_{i'}^*T_{M'/M}} \circ \Sigma^{\mathfrak{C}(g)^*\sigma_i^*T_{M}}. \]
By Lemma~\ref{lem:SectionIsos}, the smooth push-forward \[ \mathfrak{C}(g)_*:\pi_{\mathfrak{C}_{i'}!}\circ\Sigma^{\sigma_{i'}^*T_{M'/M}}\circ \mathfrak{C}(g)^*\to \pi_{\mathfrak{C}_{i}!} \] is an isomorphism.
Composing $\mathfrak{C}(g)_*$ with the isomorphism $\Theta_g$, defined as the composition \begin{multline*} \pi_{\mathfrak{C}_{i'}!}\circ\Sigma^{\sigma_{i'}^*T_{M'}}\circ \mathfrak{C}(g)^*\xrightarrow[\sim]{\theta_g} \pi_{\mathfrak{C}_{i'}!}\circ\Sigma^{\sigma_{i'}^*T_{M'/M}} \circ \Sigma^{\mathfrak{C}(g)^*\sigma_i^*T_{M}}\circ \mathfrak{C}(g)^*\\ \cong \pi_{\mathfrak{C}_{i'}!}\circ\Sigma^{\sigma_{i'}^*T_{M'/M}}\circ \mathfrak{C}(g)^*\circ \Sigma^{\sigma_i^*T_{M}}, \end{multline*} gives us the isomorphism \[ \mathfrak{C}(g)_*\circ\Theta_g:\pi_{\mathfrak{C}_{i'}!}\circ\Sigma^{\sigma_{i'}^*T_{M'}}\circ \mathfrak{C}(g)^*\to \pi_{\mathfrak{C}_{i}!}\circ \Sigma^{\sigma_i^*T_{M}}. \] Evaluating at $1_{\mathfrak{C}_{i}}$ gives the isomorphism \[ \psi_g: \mathfrak{C}_{i'}(\sigma_{i'}^*T_{M'})_{\operatorname{B.M.}}\to \mathfrak{C}_{i}(\sigma_i^*T_{M})_{\operatorname{B.M.}}. \]
Suppose we have another smooth morphism $g':M''\to M'$ and a closed immersion $i'':Z\to M''$ with $g'\circ i''=i'$. Let $\Xi_{g,g'}$ be the isomorphism \begin{multline*} \pi_{\mathfrak{C}_{i''}!}\circ\Sigma^{\sigma_{i''}^*T_{M''/M}}\circ\mathfrak{C}(gg')^*\circ\Sigma^{\sigma_i^*T_{M}}\\ \xrightarrow{\Xi_{g,g'}} \pi_{\mathfrak{C}_{i''}!}\circ\Sigma^{\sigma_{i''}^*T_{M''/M'}}\circ\mathfrak{C}(g')^*\circ\Sigma^{\sigma_{i'}^*T_{M'/M}}\circ\mathfrak{C}(g)^*\circ \Sigma^{\sigma_i^*T_{M}} \end{multline*} defined as the composition \begin{multline*} \pi_{\mathfrak{C}_{i''}!}\circ\Sigma^{\sigma_{i''}^*T_{M''/M}}\circ\mathfrak{C}(gg')^*\circ\Sigma^{\sigma_i^*T_{M}} \\ \xrightarrow{\theta_{g'}} \pi_{\mathfrak{C}_{i''}!}\circ\Sigma^{\sigma_{i''}^*T_{M''/M'}}\circ\Sigma^{\mathfrak{C}(g')^*\sigma_{i'}^*T_{M'/M}}\circ\mathfrak{C}(g')^*\circ \mathfrak{C}(g)^*\circ\Sigma^{\sigma_i^*T_{M}}\\ \cong \pi_{\mathfrak{C}_{i''}!}\circ\Sigma^{\sigma_{i''}^*T_{M''/M'}}\circ\mathfrak{C}(g')^*\circ\Sigma^{\sigma_{i'}^*T_{M'/M}}\circ\mathfrak{C}(g)^*\circ \Sigma^{\sigma_i^*T_{M}}. \end{multline*} The functoriality of smooth push-forward gives the identity \begin{equation}\label{eqn:Fun} \mathfrak{C}(gg')_*=\mathfrak{C}(g)_*\circ \mathfrak{C}(g')_*\circ\Xi_{g,g'} \end{equation} as maps from $\pi_{\mathfrak{C}_{i''}!}\circ \Sigma^{\sigma_{i''}^*T_{M''}}\circ \mathfrak{C}(gg')^*\circ\Sigma^{\sigma_{i}^*T_{M}}$ to $\pi_{\mathfrak{C}_i!}\circ\Sigma^{\sigma_{i}^*T_{M}}$.
We then have \begin{align*} \psi_g\circ \psi_{g'}&=\mathfrak{C}(g)_*\circ \Theta_g\circ \mathfrak{C}(g')_*\circ \Theta_{g'}(1_{\mathfrak{C}_i})\\ &=\mathfrak{C}(g)_*\circ\mathfrak{C}(g')_*\circ \Theta'_g\circ \Theta_{g'}(1_{\mathfrak{C}_i})\\ &=\mathfrak{C}(g)_*\circ\mathfrak{C}(g')_*\circ\Xi_{g,g'}\circ \Theta_{gg'}(1_{\mathfrak{C}_i})\\ &=\mathfrak{C}(gg')_*\circ \Theta_{gg'}(1_{\mathfrak{C}_i})\\ &=\psi_{gg'}, \end{align*} where $\Theta'_g$ is the isomorphism \begin{multline*} \pi_{\mathfrak{C}_{i''}!}\circ\Sigma^{\sigma_{i''}^*T_{M''/M'}}\circ\mathfrak{C}(g')^*\circ\Sigma^{\sigma_{i'}^*T_{M'}}\circ \mathfrak{C}(g)^*\\ \to \pi_{\mathfrak{C}_{i''}!}\circ\Sigma^{\sigma_{i''}^*T_{M''/M'}}\circ\mathfrak{C}(g')^*\circ\Sigma^{\sigma_{i'}^*T_{M'/M}}\circ \mathfrak{C}(g)^*\circ \Sigma^{\sigma_{i}^*T_{M}} \end{multline*} constructed similarly to $\Theta_g$. All the above identities are easy consequences of the naturality of the functor $\Sigma^{-}:D^\text{\it perf}_{G, iso}(-)\to {\operatorname{Aut}}{\operatorname{SH}}^G(-)$, and of \eqref{eqn:Fun}. \end{proof}
Relying on Lemma~\ref{lem:CanonIso}, we can prove the main result of this section: the construction of an analog of the Behrend-Fantechi intrinsic normal cone in the setting of the stable motivic homotopy category.
\begin{theorem}\label{thm:IntStableNormalCone} For each $Z\in {\operatorname{\mathbf{Sch}}}^G/B$, there is an object $\mathfrak{C}^{st}_Z\in {\operatorname{SH}}^G(B)$ satisfying the following:\\[5pt] 1. Let $i:Z\to M$ be a closed immersion in ${\operatorname{\mathbf{Sch}}}^G/B$ with $M$ smooth over $B$. Then there is a canonical isomorphism \[ \alpha_i:\mathfrak{C}^{st}_Z\xrightarrow{\sim} \mathfrak{C}_{i}(\sigma_i^*T_{M})_{\operatorname{B.M.}}. \] 2. For $i':Z\to M'$ another closed immersion in ${\operatorname{\mathbf{Sch}}}^G/B$ with $M'\in{\mathbf{Sm}}^G/B$, we have \[ \alpha_i=\psi_{i,i'}\circ\alpha_{i'}, \] where $\psi_{i,i'}:\mathfrak{C}_{i'}(\sigma_{i'}^*T_{M'})_{\operatorname{B.M.}}\xrightarrow{\sim} \mathfrak{C}_{i}(\sigma_i^*T_{M})_{\operatorname{B.M.}}$ is the isomorphism defined in Lemma~\ref{lem:CanonIso}. \end{theorem}
\begin{proof} Choose a closed immersion $i_0:Z\to M_0$ in ${\operatorname{\mathbf{Sch}}}^G/B$ with $M_0\in {\mathbf{Sm}}^G/B$, which exists by the definition of the category ${\operatorname{\mathbf{Sch}}}^G/B$, and define \[ \mathfrak{C}^{st}_Z:=\mathfrak{C}_{i_0}(\sigma_{i_0}^*T_{M_0})_{\operatorname{B.M.}}. \] For $i:Z\to M$ a closed immersion in ${\operatorname{\mathbf{Sch}}}^G/B$ with $M\in {\mathbf{Sm}}^G/B$, Lemma~\ref{lem:CanonIso} gives us the isomorphism \[ \alpha_i:=\psi_{i,i_0}:\mathfrak{C}^{st}_Z\to \mathfrak{C}_{i}(\sigma_i^*T_{M})_{\operatorname{B.M.}} \] and shows that if $i':Z\to M'$ is another closed immersion, the diagram \[ \xymatrix{ \mathfrak{C}^{st}_Z\ar[r]^-{\alpha_{i'}}\ar[dr]_-{\alpha_i}& \mathfrak{C}_{i'}(\sigma_{i'}^*T_{M'})_{\operatorname{B.M.}}\ar[d]^{\psi_{i,i'}}\\ &\mathfrak{C}_{i}(\sigma_i^*T_{M})_{\operatorname{B.M.}} } \] commutes. \end{proof}
\begin{definition} \label{Def:IntStableNormalCone} For $Z$ in ${\operatorname{\mathbf{Sch}}}^G/B$, we call $\mathfrak{C}^{st}_Z\in {\operatorname{SH}}^G(B)$ the {\em intrinsic stable normal cone} of $Z$. \end{definition}
\begin{ex} Suppose $\pi_Z:Z\to B$ is smooth over $B$. Then we may take the identity for $i:Z\to M$, giving $\mathfrak{C}_{i}=Z$ and $\mathfrak{C}^{st}_Z=\pi_{Z!}\circ \Sigma^{T_{Z}}(1_Z)=\pi_{Z\#}(1_Z)=\Sigma^\infty_TZ_+$ in ${\operatorname{SH}}^G(B)$. \end{ex}
\begin{remark} Here is an intuitive justification for our definition of $\mathfrak{C}^{st}_Z$. Let $V\to Z$ be a rank $r$ vector bundle on a $B$-scheme $Z$ and let $p:C\to Z$ be a morphism in ${\operatorname{\mathbf{Sch}}}^G/B$. We consider $V$ as a group-scheme over $Z$ via fiberwise vector addition.
Suppose we have a (smooth) morphism $q:C\to \bar{C}$ over $Z$ that makes $C$ a principal $V$-bundle over $\bar{C}$; let $\bar{p}:\bar{C}\to Z$ be the structure morphism and let $\pi_C:C\to B$, $\pi_{\bar{C}}:\bar{C}\to B$ be the structure morphisms to $B$. We have a canonical isomorphism $T_{C/\bar{C}}\cong p^*V$, the purity isomorphism $q_!\circ\Sigma^{p^*V}\cong q_\#$ and the homotopy invariance isomorphism $1_{\bar{C}}\cong q_\#(1_C)$. This gives the isomorphism in ${\operatorname{SH}}^G(B)$ \begin{multline*} \bar{C}_{\operatorname{B.M.}}:=\pi_{\bar{C}!}(1_{\bar{C}})\cong \pi_{\bar{C}!}(q_\#(1_C))\cong \pi_{\bar{C}!}\circ q_!\circ\Sigma^{p^*V}(1_C)\cong \pi_{C!}(\Sigma^{p^*V}(1_C))=C(p^*V)_{\operatorname{B.M.}}. \end{multline*}
Now suppose we have only an action of $V$ on $C$ over $Z$, giving us the quotient stack $[V\backslash C]$ with projection $\pi:C\to [V\backslash C]$. In the category of Artin stacks, $\pi$ is a principal $V$-bundle, so we could reasonably {\em define} the Borel-Moore motive of $[V\backslash C]$ by \[ [V\backslash C]_{\operatorname{B.M.}}:=C(p^*V)_{\operatorname{B.M.}}. \]
The cone $p:\mathfrak{C}_i\to Z$ for a closed immersion $i:Z\to M$ has the canonical action by $i^*T_M$, giving the Behrend-Fantechi intrinsic normal cone $\mathfrak{C}_Z:=[i^*T_M\backslash \mathfrak{C}_i]$. We should thus define the Borel-Moore motive of $\mathfrak{C}_Z$ by \[ (\mathfrak{C}_Z)_{\operatorname{B.M.}}=[i^*T_M\backslash \mathfrak{C}_i]_{\operatorname{B.M.}}:=\mathfrak{C}_i(\sigma_i^*T_M)_{\operatorname{B.M.}}, \] which is exactly our definition of $\mathfrak{C}^{st}_Z$. \end{remark}
\begin{lemma}\label{lem:HomotopyInv} Suppose we have a closed immersion $i:Z\to M$ with $M\in{\mathbf{Sm}}^G/B$ and an affine space bundle $q:V\to M$, giving the cartesian diagram \[ \xymatrix{ Z'\ar[r]^{i'}\ar[d]_{q_Z}&V\ar[d]^q\\ Z\ar[r]_i&M } \] Let $\mathfrak{C}(q):\mathfrak{C}_{i'}\to \mathfrak{C}_{i}$ be the morphism induced by $(q_Z, q)$ and let \begin{equation}\label{eqn:StableConeIso} \mathfrak{C}^{st}(q):\mathfrak{C}^{st}_{Z'}\to \mathfrak{C}^{st}_Z \end{equation} be defined as the composition \begin{align*} \mathfrak{C}^{st}_{Z'}&\xrightarrow{\alpha_{i'}}\pi_{\mathfrak{C}_{i'}!}\circ \Sigma^{\sigma_{i'}^*T_{V}}(1_{\mathfrak{C}_{i'}})\\ &\xrightarrow[\sim]{\theta} \pi_{\mathfrak{C}_{i'}!}\circ \Sigma^{\sigma_{i'}^*q^*V}\circ \Sigma^{\sigma_{i'}^*q^*T_{M}}\circ \mathfrak{C}(q)^*(1_{\mathfrak{C}_{i}})\\ &\xrightarrow{\sim} \pi_{\mathfrak{C}_{i'}!}\circ \Sigma^{\sigma_{i'}^*q^*V}\circ \Sigma^{\mathfrak{C}(q)^*\sigma_{i}^*T_{M}}\circ \mathfrak{C}(q)^*(1_{\mathfrak{C}_{i}})\\ &\xrightarrow{\sim} \pi_{\mathfrak{C}_{i'}!}\circ \Sigma^{\sigma_{i'}^*q^*V}\circ \mathfrak{C}(q)^*\circ\Sigma^{\sigma_{i}^*T_{M}}(1_{\mathfrak{C}_{i}})\\ &\xrightarrow{\mathfrak{C}(q)_*} \pi_{\mathfrak{C}_{i}!}\circ\Sigma^{\sigma_{i}^*T_{M}}(1_{\mathfrak{C}_{i}})\\ &\xrightarrow{\alpha_i^{-1}}\mathfrak{C}^{st}_Z, \end{align*} where the isomorphism $\theta$ is induced by the exact sequence \[ 0\to q^*V\to T_{V}\to q^*T_{M}\to 0. \] Then $\mathfrak{C}^{st}(q)$ is an isomorphism. Moreover, if we have an extension of our diagram to a cartesian diagram in ${\operatorname{\mathbf{Sch}}}^G/B$ \[ \xymatrix{ Z''\ar[r]^{i''}\ar[d]_{q'_Z}&W\ar[d]^{q'}\\ Z'\ar[r]^{i'}\ar[d]_{q_Z}&V\ar[d]^q\\ Z\ar[r]_i&M } \] with $q':W\to V$ an affine space bundle, then \[ \mathfrak{C}^{st}(q\circ q')=\mathfrak{C}^{st}(q)\circ \mathfrak{C}^{st}(q'). \] \end{lemma}
\begin{proof} We use the closed immersion $i':Z'\to V$ to compute $\mathfrak{C}^{st}_{Z'}$. The morphism \[ \mathfrak{C}(q):\mathfrak{C}_{i'}\to \mathfrak{C}_{i} \] identifies $\mathfrak{C}_{i'}$ with $\mathfrak{C}_{i}\times_MV$, and hence by Lemma~\ref{lem:SectionIsos}, the map $\mathfrak{C}(q)_*$ is an isomorphism, which implies that $\mathfrak{C}^{st}(q)$ is an isomorphism as well.
The functoriality $\mathfrak{C}^{st}(q\circ q')=\mathfrak{C}^{st}(q)\circ \mathfrak{C}^{st}(q')$ follows from the functoriality of smooth push-forward, the naturality of the isomorphisms $\theta_{-}$ and the naturality of $\Sigma^{-}:{\mathcal V}^G(-)\to {\operatorname{Aut}}({\operatorname{SH}}^G(-))$, as in the proof of Lemma~\ref{lem:CanonIso}. \end{proof}
\section{The fundamental class}\label{sec:FundClass} The next step in the construction is to define a fundamental class in the cohomotopy of our intrinsic stable normal cone: $[\mathfrak{C}^{st}_Z]\in \mathbb{S}_B^{0,0}(\mathfrak{C}^{st}_Z)$. We do this by the method of specialization to the normal cone, suitably interpreted.
Choose as before a closed immersion $i:Z\to M$ in ${\operatorname{\mathbf{Sch}}}^G/B$ with $M\in {\mathbf{Sm}}^G/B$. Let \[ \tilde{\pi}:\widetilde{M\times{\mathbb A}^1}\to M\times{\mathbb A}^1 \] be the blow-up of $M\times{\mathbb A}^1$ along $Z\times0$, that is, \[ \widetilde{M\times{\mathbb A}^1}={\operatorname{Proj}}_{M\times{\mathbb A}^1}\oplus_{n\ge0}{\mathcal I}_{Z\times0}^n. \] Writing $M\times{\mathbb A}^1={\rm Spec\,}{\mathcal O}_M[t]$, the element $t\in {\mathcal I}_{Z\times0}$, considered as an element of $\oplus_{n\ge0}{\mathcal I}_{Z\times0}^n$ of degree one, gives a $G$-invariant section of ${\mathcal O}(1)$, which we denote by $T$. We define \[ Def(i):=\widetilde{M\times{\mathbb A}^1}\setminus (T=0)\in {\operatorname{\mathbf{Sch}}}^G/B, \] so \[ Def(i)={\rm Spec\,}_{M\times{\mathbb A}^1}(\oplus_{n\ge0}{\mathcal I}_{Z\times0}^n[T^{-1}])_0, \] where the subscript 0 denotes the subsheaf of homogeneous sections of degree 0. The projection $p:Def(i)\to M\times{\mathbb A}^1$ is flat, $p^{-1}(M\times({\mathbb A}^1\setminus\{0\}))$ is isomorphic via $p$ to $M\times({\mathbb A}^1\setminus\{0\})$ and $p^{-1}(M\times 0)=\mathfrak{C}_{i}$. Thus $\mathfrak{C}_{i}$ is a principal effective Cartier divisor on $Def(i)$, with ideal sheaf $(t){\mathcal O}_{Def(i)}$. Let $i_{\mathfrak{C}}:\mathfrak{C}_{i}\to Def(i)$ be the inclusion, and let $j:M\times({\mathbb A}^1\setminus\{0\})\to Def(i)$ be the inclusion of the open complement.
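\begin{ex} To see the deformation concretely, here is the standard illustration, stated non-equivariantly over $B={\rm Spec\,}k$: take $M={\mathbb A}^1={\rm Spec\,}k[y]$ and $Z=\{0\}$, so ${\mathcal I}_{Z\times0}=(y,t)$. Then \[ Def(i)={\rm Spec\,}k[y,t][y/t]={\rm Spec\,}k[u,t],\quad y=ut. \] The fiber of $p$ over $M\times\{t_0\}$ with $t_0\ne0$ is isomorphic to $M$ via $p$, while the fiber over $t=0$ is ${\rm Spec\,}k[u]=\mathfrak{C}_i$, the normal bundle of $\{0\}$ in ${\mathbb A}^1$. \end{ex}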
The localization triangle \eqref{eqn:LocTria}, twisted by $\Sigma^{p^*p_1^*T_{M/B}}$ and pushed forward by $\pi_{Def(i)!}$, gives us the distinguished triangle in ${\operatorname{SH}}^G(B)$ \[ \pi_{M\times({\mathbb A}^1\setminus\{0\})!}\circ \Sigma^{p_1^*T_{M}}(1_{M\times({\mathbb A}^1\setminus\{0\})})\to \pi_{Def(i)!}\circ \Sigma^{p^*p_1^*T_{M}}(1_{Def(i)})\to \mathfrak{C}_{i}(\sigma_i^*T_{M})_{\operatorname{B.M.}}\xrightarrow{+1}. \] The isomorphism \[ T_{M\times ({\mathbb A}^1\setminus\{0\})}\cong p_1^*T_{M}\oplus p_2^*T_{{\mathbb A}^1\setminus\{0\}} \] and the canonical isomorphism $T_{{\mathbb A}^1\setminus\{0\}}\cong {\mathcal O}_{{\mathbb A}^1\setminus\{0\}}$ give the isomorphisms \begin{align*} \pi_{M\times({\mathbb A}^1\setminus\{0\})!}\circ \Sigma^{p_1^*T_{M}}(1_{M\times({\mathbb A}^1\setminus\{0\})})&\cong \pi_{M\#}(1_M)\wedge_B\pi_{{\mathbb A}^1\setminus\{0\}\#}(\Sigma_T^{-1}1_{{\mathbb A}^1\setminus\{0\}})\\ &\cong \Sigma_T^{-1}M\times({\mathbb A}^1\setminus\{0\})_+\\ &\cong \Sigma_{S^1}^{-1}\Sigma_{{\mathbb G}_m}^{-1}M\times({\mathbb A}^1\setminus\{0\})_+. \end{align*}
Our distinguished triangle thus gives us the map in ${\operatorname{SH}}^G(B)$ \[ \partial:\mathfrak{C}_i(\sigma_i^*T_M)_{\operatorname{B.M.}}\to \Sigma_{{\mathbb G}_m}^{-1} M\times({\mathbb A}^1\setminus\{0\})_+. \] Let \[ \bar{p}^M_2:M\times({\mathbb A}^1\setminus\{0\})_+\to {\mathbb G}_m \] be the projection $p_2:M\times ({\mathbb A}^1\setminus\{0\})_+\to ({\mathbb A}^1\setminus\{0\})_+$ followed by the quotient map $({\mathbb A}^1\setminus\{0\})_+\to ({\mathbb A}^1\setminus\{0\},\{1\})={\mathbb G}_m$. This gives us the map \[ \Sigma_{{\mathbb G}_m}^{-1}\bar{p}^M_2\circ \partial:\mathfrak{C}_i(\sigma_i^*T_M)_{\operatorname{B.M.}}\to \mathbb{S}_B. \]
\begin{definition}\label{def:FundClass0} The fundamental class $[\mathfrak{C}_i]\in \mathbb{S}^{\operatorname{B.M.}}_B(\mathfrak{C}_i, \sigma_i^*T_M)$ is the element represented by $\Sigma_{{\mathbb G}_m}^{-1}\bar{p}^M_2\circ \partial$.
If ${\mathcal E}$ is a commutative monoid in ${\operatorname{SH}}^G(B)$ with unit $\epsilon_{\mathcal E}:\mathbb{S}_B\to {\mathcal E}$, we define the fundamental class $[\mathfrak{C}_i]_{\mathcal E}\in {\mathcal E}^{\operatorname{B.M.}}(\mathfrak{C}_i, \sigma_i^*T_M)$ by composing $[\mathfrak{C}_i]$ with $\epsilon_{\mathcal E}$. \end{definition}
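\begin{ex} As a check on the construction, suppose $Z=M$ and $i={\operatorname{\rm Id}}_M$. Then ${\mathcal I}_Z=0$ and ${\mathcal I}_{Z\times0}=(t)$ is generated by a nonzerodivisor, so $Def(i)=M\times{\mathbb A}^1$ with $p={\operatorname{\rm Id}}$ and $\mathfrak{C}_i=M$. The fundamental class $[\mathfrak{C}_i]$ then lies in $\mathbb{S}_B^{\operatorname{B.M.}}(M, T_M)\cong \mathbb{S}_B^{0,0}(M)$, the isomorphism being the purity isomorphism ${\mathcal E}^{\operatorname{B.M.}}_{a,b}(X,v)\cong{\mathcal E}^{-a,-b}(X, T_X-v)$ of the previous section applied with $v=T_M$. \end{ex}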
We have the canonical isomorphism $\alpha_i:\mathfrak{C}^{st}_Z\to \mathfrak{C}_{i}(\sigma_i^*T_{M})_{\operatorname{B.M.}}$, giving the map in ${\operatorname{SH}}^G(B)$ \begin{equation}\label{eqn:Boundary} [\mathfrak{C}_i]\circ\alpha_i:\mathfrak{C}^{st}_Z\to \mathbb{S}_B. \end{equation}
\begin{lemma}\label{lem:FundClassInd} The map \[ [\mathfrak{C}_i]\circ\alpha_i:\mathfrak{C}^{st}_Z\to \mathbb{S}_B \] is independent of the choice of closed immersion $i:Z\to M$. \end{lemma}
\begin{proof} We reduce as in the proof of Lemma~\ref{lem:CanonIso} to the case in which we have a commutative diagram \[ \xymatrix{ Z\ar[r]^{i'}\ar[rd]_i&M'\ar[d]^g\\ &M} \] with $g$ smooth. The map $g$ induces a smooth morphism $Def(g):Def(i')\to Def(i)$, giving us the commutative diagram \[ \xymatrix{ Def(i')\ar[r]^{p_{M'}}\ar[d]_{Def(g)}&M'\times{\mathbb A}^1\ar[d]^{g\times{\operatorname{\rm Id}}}\\ Def(i)\ar[r]_{p_M}&M\times{\mathbb A}^1} \] The restriction of $Def(g)$ to $\mathfrak{C}_{i'}$ is the map $\mathfrak{C}(g):\mathfrak{C}_{i'}\to \mathfrak{C}_{i}$ induced by $g$.
This gives us the map of distinguished triangles (we suppress the isomorphisms on the suspension operations induced by the various exact sequences) \[ \xymatrix{ \pi_{M'\times({\mathbb A}^1\setminus\{0\})!}\circ \Sigma^{p_1^*T_{M'}}(1_{M'\times({\mathbb A}^1\setminus\{0\})}) \hskip-50pt \ar[dd]\ar[rd]^{\hskip30pt(g\times{\operatorname{\rm Id}})_*\circ\Theta_g}\\ &\hskip-50pt\pi_{M\times({\mathbb A}^1\setminus\{0\})!}\circ\Sigma^{p_1^*T_{M}}(1_{M\times({\mathbb A}^1\setminus\{0\})}) \ar[dd]\\ \pi_{Def(i')!}\circ \Sigma^{p_{M'}^*p_1^*T_{M'}}(1_{Def(i')})\ar[dd]\ar[rd]^{\hskip30pt Def(g)_*\circ\Theta_g}\\ & \pi_{Def(i)!}\circ \Sigma^{p_M^*p_1^*T_{M}}(1_{Def(i)})\ar[dd]\\ \mathfrak{C}_{i'}(\sigma_{i'}^*T_{M'})_{\operatorname{B.M.}} \ar[dr]^{\hskip20pt\psi_g=\mathfrak{C}(g)_*\circ\Theta_g}\\ &\mathfrak{C}_{i}(\sigma_i^* T_{M})_{\operatorname{B.M.}} } \] which in turn gives the commutative diagram \[ \xymatrixcolsep{50pt} \xymatrix{ \mathfrak{C}^{st}_Z\ar[d]_{\alpha_{i'}}\ar[dr]^{\alpha_i}\\ \mathfrak{C}_{i'}(\sigma_{i'}^*T_{M'})_{\operatorname{B.M.}}\ar[r]^{\psi_g}\ar[d]_{\partial}& \mathfrak{C}_{i}(\sigma_i^* T_{M})_{\operatorname{B.M.}}\ar[d]^\partial\\ \Sigma_{{\mathbb G}_m}^{-1} M'\times({\mathbb A}^1\setminus\{0\})_+\ar[r]^{\Sigma_{{\mathbb G}_m}^{-1}g\times{\operatorname{\rm Id}}}\ar[dr]_{\bar{p}_2^{M'}}& \Sigma_{{\mathbb G}_m}^{-1} M\times({\mathbb A}^1\setminus\{0\})_+\ar[d]^{\bar{p}_2^M}\\ &\mathbb{S}_B, } \] completing the proof. \end{proof}
This lemma allows us to make the following definition.
\begin{definition}\label{def:FundClass} Let $Z$ be in ${\operatorname{\mathbf{Sch}}}^G/B$. The {\em fundamental class} $[\mathfrak{C}^{st}_Z]\in \mathbb{S}_B^{0,0}(\mathfrak{C}^{st}_Z)$ is the element represented by the map $[\mathfrak{C}_i]\circ\alpha_i:\mathfrak{C}^{st}_Z\to \mathbb{S}_B$ for any choice of closed immersion $i:Z\to M$ in ${\operatorname{\mathbf{Sch}}}^G/B$ with $M$ smooth over $B$.
If ${\mathcal E}$ is a commutative monoid in ${\operatorname{SH}}^G(B)$ with unit $\epsilon_{\mathcal E}:\mathbb{S}_B\to {\mathcal E}$, we define the fundamental class $[\mathfrak{C}^{st}_Z]_{\mathcal E}\in {\mathcal E}^{0,0}(\mathfrak{C}^{st}_Z)$ by composing $[\mathfrak{C}^{st}_Z]$ with $\epsilon_{\mathcal E}$. \end{definition}
\section{The virtual fundamental class for a reduced normalized representative}\label{sec:RedNormVirClass}
Let ${\mathbb L}_{Z/B}$ be the relative cotangent complex on $Z\in{\operatorname{\mathbf{Sch}}}^G/B$ and let $[\phi]:E_\bullet\to {\mathbb L}_{Z/B}$ be a perfect obstruction theory on $Z$. Recall that we use homological notation for complexes.
If we choose a closed immersion $i:Z\to M$ in ${\operatorname{\mathbf{Sch}}}^G/B$ with $M\in {\mathbf{Sm}}^G/B$, we have the explicit model $({\mathcal I}_Z/{\mathcal I}_Z^2\xrightarrow{d}i^*\Omega_{M/B})$ for $\tau_{\le 1}{\mathbb L}_{Z/B}$. Since $E_\bullet$ is by definition supported in degrees $[0,1]$ and $Z$ satisfies the $G$-resolution property, we have a global resolution; that is, we have a two-term complex of locally free sheaves in $\operatorname{Coh}_Z^G$, $F_\bullet:=(F_1\to F_0)$, and a map of complexes \[ \phi:(F_1\xrightarrow{d_F} F_0)\to ({\mathcal I}_Z/{\mathcal I}_Z^2\xrightarrow{d}i^*\Omega_{M/B}) \] which induces an isomorphism on $h_0$ and a surjection on $h_1$, representing $\tau_{\le1}$ applied to $[\phi]:E_\bullet\to {\mathbb L}_{Z/B}$.
\begin{definition} We call a representative $(F_\bullet, \phi)$ of $[\phi]$ as above a {\em normalized} representative if the maps $\phi_0, \phi_1$ are surjective. If in addition $F_0=i^*\Omega_{M/B}$ and $\phi_0$ is the identity, we call $\phi$ {\em reduced}. \end{definition}
Let $i:Z\to M$ be a closed immersion in ${\operatorname{\mathbf{Sch}}}^G/B$ with $M$ smooth over $B$ and let \[ \phi:(F_1\to F_0=i^*\Omega_{M/B})\to ({\mathcal I}_Z/{\mathcal I}_Z^2\to i^*\Omega_{M/B}) \] be a reduced normalized representative of a perfect obstruction theory $[\phi]$ on $Z$. The surjection $\phi_1:F_1\to {\mathcal I}_Z/{\mathcal I}_Z^2$ induces a surjection of graded algebras ${\operatorname{Sym}}^*F_1\to \oplus_n {\mathcal I}_Z^n/{\mathcal I}_Z^{n+1}$, and thereby a closed immersion of cones over $Z$ \[ i_\phi:\mathfrak{C}_{i}\to F^1:={\mathbb V}(F_1). \] Let $p_{F^1}:F^1\to Z$ be the projection and let $0_{F^1}:Z\to F^1$ be the 0-section.
The map $i_\phi$ induces the proper push-forward \[ i_{\phi*}:\mathbb{S}_B^{\operatorname{B.M.}}(\mathfrak{C}_{i}, \sigma_i^*T_M)\to \mathbb{S}_B^{\operatorname{B.M.}}(F^1,p_{F^1}^*i^*T_M). \] Noting that $T_{F^1/Z}=p_{F^1}^*(F^1)$, the zero-section $0_{F^1}$ gives us the Gysin map \[ 0_{F^1}^!:\mathbb{S}_B^{\operatorname{B.M.}}(F^1, p_{F^1}^*i^*T_M)\to \mathbb{S}_B^{\operatorname{B.M.}}(Z, i^*T_M-F^1). \]
We have the fundamental class $[\mathfrak{C}_i]\in \mathbb{S}_B^{\operatorname{B.M.}}(\mathfrak{C}_{i}, \sigma_i^*T_M)$ (Definition~\ref{def:FundClass0}). \begin{definition} \label{defVFCRN}The {\em virtual fundamental class} for a reduced normalized representative $\phi$ of a perfect obstruction theory $[\phi]:E_\bullet\to {\mathbb L}_{Z/B}$ with respect to a closed immersion $i$ is defined as \[ [Z,\phi, i]^\text{\it vir}:=0_{F^1}^!(i_{\phi*}([\mathfrak{C}_i]))\in \mathbb{S}^{\operatorname{B.M.}}_B(Z, i^*T_M-F^1)= \mathbb{S}^{\operatorname{B.M.}}_B(Z,{\mathbb V}(E_\bullet)). \] \end{definition} The identity $\mathbb{S}^{\operatorname{B.M.}}_B(Z, i^*T_M-F^1)= \mathbb{S}^{\operatorname{B.M.}}_B(Z,{\mathbb V}(E_\bullet))$ is really the isomorphism induced by the given isomorphism $(F_1\to F_0)\cong E_\bullet$ in $D^\text{\it perf}_G(Z)$.
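\begin{ex} As an illustration of the definition, suppose $Z$ is smooth over $B$ and $i:Z\to M$ is a closed immersion with $M\in{\mathbf{Sm}}^G/B$, so that $i$ is a regular embedding with locally free conormal sheaf ${\mathcal I}_Z/{\mathcal I}_Z^2$. Taking the reduced normalized representative with $F_1={\mathcal I}_Z/{\mathcal I}_Z^2$ and $\phi={\operatorname{\rm Id}}$, we get $F^1={\mathbb V}({\mathcal I}_Z/{\mathcal I}_Z^2)=\mathfrak{C}_i$ and $i_\phi={\operatorname{\rm Id}}$, so \[ [Z,\phi,i]^\text{\it vir}=0_{F^1}^!([\mathfrak{C}_i])\in \mathbb{S}^{\operatorname{B.M.}}_B(Z, i^*T_M-F^1)=\mathbb{S}^{\operatorname{B.M.}}_B(Z, T_Z), \] the last identification coming from the exact sequence $0\to T_Z\to i^*T_M\to {\mathbb V}({\mathcal I}_Z/{\mathcal I}_Z^2)\to 0$. \end{ex}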
\section{Reduced normalized representatives of a perfect obstruction theory}\label{sec:ObstThy} We show how a given perfect obstruction theory $[\phi]:E_\bullet\to {\mathbb L}_{Z/B}$ on some $Z\in {\operatorname{\mathbf{Sch}}}^G/B$ admits a reduced normalized representative ``up to ${\mathbb A}^1$-homotopy equivalence'', in a sense made precise in this section. If $Z$ is affine, $[\phi]$ already admits a reduced normalized representative. In general, we replace $Z$ with a Jouanolou cover $p_Z:\tilde{Z}\to Z$ and construct an ``induced perfect obstruction theory'' $p_Z^![\phi]:p_Z^!E_\bullet\to {\mathbb L}_{\tilde{Z}/B}$ on $\tilde{Z}$.
As in the previous section, let ${\mathbb L}_{Z/B}$ be the relative cotangent complex on $Z$ and let $[\phi]:E_\bullet\to {\mathbb L}_{Z/B}$ be a perfect obstruction theory on $Z$. Choose a closed immersion $i:Z\to M$ in ${\operatorname{\mathbf{Sch}}}^G/B$ with $M\in {\mathbf{Sm}}^G/B$, a two-term complex of locally free sheaves in $\operatorname{Coh}_Z^G$, $F_\bullet:=(F_1\to F_0)$, and a map of complexes \[ \phi:(F_1\xrightarrow{d_F} F_0)\to ({\mathcal I}_Z/{\mathcal I}_Z^2\xrightarrow{d}i^*\Omega_{M/B}) \] which induces an isomorphism on $h_0$ and a surjection on $h_1$, representing $\tau_{\le1}$ applied to $[\phi]:E_\bullet\to {\mathbb L}_{Z/B}$.
By the $G$-resolution property for $Z$, there is a locally free sheaf ${\mathcal F}$ on $Z$ and a surjection $p:{\mathcal F}\to {\mathcal I}_Z/{\mathcal I}_Z^2$. We may then replace $(F_\bullet,\phi)$ by \[ F_1\oplus {\mathcal F}\xrightarrow{d_F\oplus {\operatorname{\rm Id}}_{\mathcal F}}F_0\oplus {\mathcal F} \] and map the copy of ${\mathcal F}$ in degree 0 to $i^*\Omega_{M/B}$ by $d\circ p$, giving the map \[ \phi':(F_1\oplus {\mathcal F}\xrightarrow{d_F\oplus {\operatorname{\rm Id}}_{\mathcal F}}F_0\oplus {\mathcal F})\to ({\mathcal I}_Z/{\mathcal I}_Z^2\xrightarrow{d}i^*\Omega_{M/B}). \] The new complex is chain homotopy equivalent to $F_\bullet$, the added summand $({\mathcal F}\xrightarrow{{\operatorname{\rm Id}}}{\mathcal F})$ being contractible, so $\phi'$ still represents $[\phi]$. The map $\phi'_1:=\phi_1+p$ is surjective by construction, and the assumption that $h_0(\phi)$ is an isomorphism implies that $\phi'_0$ is surjective as well. Thus, each $[\phi]$ admits a normalized representative.
For a normalized representative $\phi:F_\bullet\to ({\mathcal I}_Z/{\mathcal I}_Z^2\xrightarrow{d}i^*\Omega_{M/B})$, we let $K_i\subset F_i$ be the kernel of $\phi_i$. We let $F^i\to Z$ be the vector bundle ${\mathbb V}(F_i):={\rm Spec\,}_{{\mathcal O}_Z}{\operatorname{Sym}}^* F_i$ and similarly define $K^i:={\mathbb V}(K_i)$.
\begin{lemma} \label{lem:ExactSeq} Let $\phi:F_\bullet\to ({\mathcal I}_Z/{\mathcal I}_Z^2\xrightarrow{d}i^*\Omega_{M/B})$ be a normalized representative of a perfect obstruction theory $[\phi]$. Let $K(h_1(F_\bullet))$ be the kernel of the surjection $h_1(\phi):h_1(F_\bullet)\to h_1({\mathbb L}_{Z/B})$. Then $K_0$ is locally free and in the commutative diagram \[ \xymatrix{ &0\ar[d]&0\ar[d]&0\ar[d]\\ 0\ar[r]&K(h_1(F_\bullet))\ar[r]\ar[d]&K_1\ar[r]\ar[d]&K_0\ar[r]\ar[d]&0\ar[d]\\ 0\ar[r]&h_1(F_\bullet)\ar[r]\ar[d]_{h_1(\phi)}&F_1\ar[r]^{d_F}\ar[d]_{\phi_1}&F_0\ar[d]_{\phi_0}\ar[r]&h_0(F_\bullet)\ar[d]_{h_0(\phi)}\ar[r]&0\\ 0\ar[r]&h_1({\mathbb L}_{Z/B})\ar[r]\ar[d]&{\mathcal I}_Z/{\mathcal I}_Z^2\ar[r]_d\ar[d]&i^*\Omega_{M/B}\ar[r]\ar[d]&h_0({\mathbb L}_{Z/B})\ar[r]\ar[d]&0\\ &0&0&0&0 } \] all the rows and columns are exact. \end{lemma}
\begin{proof} Our assumption that $\phi$ is a normalized representative is just that $\phi_0, \phi_1$ and $h_1(\phi)$ are surjective, and $h_0(\phi)$ is an isomorphism. The rest follows by the snake lemma. \end{proof}
\begin{lemma}\label{lem:ReducedNorm} Suppose $Z$ is affine, choose a closed immersion $i:Z\to M$ in ${\operatorname{\mathbf{Sch}}}^G/B$ with $M\in {\mathbf{Sm}}^G/B$ and let $[\phi]$ be a perfect obstruction theory on $Z$. Then $[\phi]$ admits a reduced normalized representative. \end{lemma}
\begin{proof} We have already seen that $[\phi]$ admits a normalized representative \[ \phi:(F_1\to F_0)\to ({\mathcal I}_Z/{\mathcal I}_Z^2\to i^*\Omega_{M/B}). \]
We use the notation of Lemma~\ref{lem:ExactSeq}. Since $Z$ is affine, each locally free ${\mathcal F}$ in $\operatorname{QCoh}_Z^G$ is a projective object in $\operatorname{QCoh}_Z^G$. Thus, we may choose a splitting $s_K:K_0\to K_1$ to the surjection $K_1\to K_0$. This gives us the commutative diagram \[ \xymatrix{ K_0\ar@{=}[d]\ar@/^20pt/[rr]^{i_1}\ar[r]^{s_K}&K_1\ar[r]\ar[d]&F_1\ar[d]^{d_F}\ar[r]^{\phi_1}&{\mathcal I}_Z/{\mathcal I}^2_Z\ar[d]^d\\ K_0\ar@{=}[r]&K_0\ar[r]&F_0\ar[r]_{\phi_0}&i^*\Omega_{M/B} } \] Replacing $F_1$ with $F_1':=F_1/i_1(K_0)$ and $F_0$ with $i^*\Omega_{M/B}\cong F_0/K_0$, we have the reduced normalized representative \[ (F'_1\xrightarrow{d_{F'}}i^*\Omega_{M/B})\xrightarrow{(\phi'_1,{\operatorname{\rm Id}})} ({\mathcal I}_Z/{\mathcal I}_Z^2\to i^*\Omega_{M/B}) \] for $[\phi]$. \end{proof}
\begin{lemma}\label{lem:RedNormIso} Suppose $Z$ is affine and choose a closed immersion $i:Z\to M$ in ${\operatorname{\mathbf{Sch}}}^G/B$ with $M\in {\mathbf{Sm}}^G/B$. If $\phi:F_\bullet\to ({\mathcal I}_Z/{\mathcal I}_Z^2\to i^*\Omega_{M/B})$ and $\phi':F'_\bullet\to ({\mathcal I}_Z/{\mathcal I}_Z^2\to i^*\Omega_{M/B})$ are two reduced normalized representatives of a given perfect obstruction theory $[\phi]:E_\bullet\to {\mathbb L}_{Z/B}$ on $Z$, then the induced isomorphism $F_\bullet\cong E_\bullet\cong F'_\bullet$ in $D^\text{\it perf}_G(Z)$ arises from an isomorphism of complexes $\rho_\bullet:F_\bullet\to F'_\bullet$ making the diagram
\[
\xymatrix{
F_\bullet\ar[r]^{\rho_\bullet}\ar[dr]&F'_\bullet\ar[d]\\
&({\mathcal I}_Z/{\mathcal I}_Z^2\to i^*\Omega_{M/B})
}
\]
commute. Moreover $\rho_\bullet$ is unique up to chain homotopy of the form $h:i^*\Omega_{M/B}\to h_1(F'_\bullet)\subset F_1'$ satisfying $d_{F'_\bullet}h=0=\phi_1'hd_{F_\bullet}$. \end{lemma}
\begin{proof} We denote the homology of the complex $({\mathcal I}_Z/{\mathcal I}_Z^2\to i^*\Omega_{M/B})$ by $h_1, h_0$. As we are given an isomorphism $F_\bullet\cong F'_\bullet$ in $D^\text{\it perf}_G(Z)$, we have isomorphisms $\xi_i:h_i(F_\bullet)\cong h_i(F_\bullet')$ in $\operatorname{Coh}_Z^G$ such that the induced map on the Ext groups
\[
{\operatorname{Ext}}_{\operatorname{Coh}_Z^G}^2(h_0(F_\bullet), h_1(F_\bullet))\to {\operatorname{Ext}}_{\operatorname{Coh}_Z^G}^2(h_0(F'_\bullet), h_1(F'_\bullet))
\]
sends the class of $0\to h_1(F_\bullet)\to F_1\to F_0\to h_0(F_\bullet)\to 0$ to the class of $0\to h_1(F'_\bullet)\to F'_1\to F'_0\to h_0(F'_\bullet)\to 0$.
Let $M_0\subset i^*\Omega_{M/B}$ be the image subsheaf $d({\mathcal I}_Z/{\mathcal I}_Z^2)$. The maps $\phi$, $\phi'$ give rise to exact sequences
\begin{equation}\label{eqn:Ext1}
0\to h_1(F_\bullet)\to F_1\xrightarrow{d_F}M_0\to 0,
\end{equation}
\begin{equation}\label{eqn:Ext1'}
0\to h_1(F'_\bullet)\to F_1'\xrightarrow{d_{F'}}M_0\to 0
\end{equation}
and
\begin{equation}\label{eqn:Ext1''}
0\to M_0\to i^*\Omega_{M/B}\to h_0\to 0.
\end{equation}
Thus the ${\operatorname{Ext}}^2$ classes considered above arise from the ${\operatorname{Ext}}^1$-classes of \eqref{eqn:Ext1} and \eqref{eqn:Ext1'} respectively by taking the boundary in the long exact ${\operatorname{Ext}}$ sequence associated to \eqref{eqn:Ext1''}. Since $Z$ is affine, ${\operatorname{Ext}}^1_{\operatorname{Coh}_Z^G}(i^*\Omega_{M/B}, -)=0$, so the isomorphism $h_1(F_\bullet)\cong h_1(F'_\bullet)$ sends the ${\operatorname{Ext}}^1$-class of \eqref{eqn:Ext1} to that of \eqref{eqn:Ext1'}, giving us the isomorphism $\rho_1:F_1\to F_1'$ making the diagram
\[
\xymatrix{
h_1(F_\bullet)\ar[r]\ar[d]_{\xi_1}&F_1\ar[d]_{\rho_1}\ar[r]&M_0\ar@{=}[d]\\
h_1(F'_\bullet)\ar[r]&F_1'\ar[r]&M_0
}
\]
commute. This gives us the isomorphism $\rho:=(\rho_1,{\operatorname{\rm Id}}):F_\bullet\to F_\bullet'$ of perfect complexes.
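To spell out the boundary argument: the relevant segment of the long exact sequence associated to \eqref{eqn:Ext1''} reads
\[
{\operatorname{Ext}}^1_{\operatorname{Coh}_Z^G}(i^*\Omega_{M/B}, h_1(F_\bullet))\to {\operatorname{Ext}}^1_{\operatorname{Coh}_Z^G}(M_0, h_1(F_\bullet))\xrightarrow{\partial} {\operatorname{Ext}}^2_{\operatorname{Coh}_Z^G}(h_0, h_1(F_\bullet)),
\]
and the vanishing of the first term makes the boundary map $\partial$ injective; thus the agreement of the ${\operatorname{Ext}}^2$-classes forces the agreement of the ${\operatorname{Ext}}^1$-classes.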
Since $Z$ is affine and $G$ is tame, $\operatorname{Coh}_Z^G$ has enough projectives, so $D^-(\operatorname{Coh}_Z^G)$ is equivalent to the bounded-above homotopy category of the full subcategory of projective objects in $\operatorname{Coh}_Z^G$; the projective objects are the ${\mathcal F}\in \operatorname{Coh}_Z^G$ which are locally free as ${\mathcal O}_Z$-modules.
Consider the two morphisms $\phi, \phi'\circ\rho:F_\bullet\to ({\mathcal I}_Z/{\mathcal I}_Z^2\to i^*\Omega_{M/B})$. We have $[\phi]=[\phi'\circ\rho]$ as maps in $D^b(\operatorname{Coh}_Z^G)$, and as $F_\bullet$ is a perfect complex, the maps $\phi, \phi'\circ\rho$ are chain homotopic. Since $\phi'_1$ is surjective, for $h:F_0\to {\mathcal I}_Z/{\mathcal I}_Z^2$ a chain homotopy, there is a $\tilde{h}:F_0\to F'_1$ with $\phi'_1\circ\tilde{h}=h$. Since $\phi_0={\operatorname{\rm Id}}_{i^*\Omega_{M/B}}$, we have
\[
0=d\circ h=d\circ\phi'_1\circ\tilde{h}={\operatorname{\rm Id}}_{i^*\Omega_{M/B}}\circ d_{F'_\bullet}\circ\tilde{h},
\]
so we may consider $\tilde{h}$ as a map $\tilde{h}:i^*\Omega_{M/B}\to h_1(F'_\bullet)$. Replacing $\rho$ with $\rho':=\rho-d\tilde{h}$, we see that $\rho':F_\bullet\to F'_\bullet$ satisfies $\phi'\circ\rho'=\phi$ as maps of complexes. Changing notation, we may assume that $\rho$ is an isomorphism satisfying $\phi'\circ\rho=\phi$.
Given two isomorphisms $\rho_1, \rho_2:F_\bullet\to F'_\bullet$ over ${\mathbb L}_{Z/B}$, both representing the same morphism in $D^\text{\it perf}_G(Z)$, then $\tau_{\le1}\rho_1$ and $\tau_{\le1}\rho_2$ are chain homotopic by a map $h:F_0=i^*\Omega_{M/B}\to F_1'$. The fact that both $\tau_{\le1}\rho_i$ are isomorphisms over $({\mathcal I}_Z/{\mathcal I}_Z^2\to i^*\Omega_{M/B})$ implies the identities $d_{F'_\bullet}h=0=\phi_1'hd_{F_\bullet}$, the first of which shows that $h(F_0)\subset h_1(F'_\bullet)$. \end{proof}
We return to the case of arbitrary $Z\in {\operatorname{\mathbf{Sch}}}^G/B$ with a closed immersion $i_Z:Z\to M$ in ${\operatorname{\mathbf{Sch}}}^G/B$, $M\in {\mathbf{Sm}}^G/B$. Suppose we have a smooth morphism $p_M:\tilde{M}\to M$. Form the cartesian square
\[
\xymatrix{
\tilde{Z}\ar[r]^{i_{\tilde{Z}}}\ar[d]_{p_Z}&\tilde{M}\ar[d]^{p_M}\\
Z\ar[r]_{i_Z}&M
}
\]
Since $p_Z$ is smooth, we have the distinguished triangle in $D^\text{\it perf}_G(\tilde{Z})$
\begin{equation}\label{eqn:InducedDistTriangle1}
p_Z^*{\mathbb L}_{Z/B}\xrightarrow{{\mathbb L}(p_Z)} {\mathbb L}_{\tilde{Z}/B}\to \Omega_{\tilde{Z}/Z}\to p_Z^*{\mathbb L}_{Z/B}[1]
\end{equation}
and the isomorphism of locally free sheaves on $\tilde{Z}$, $\Omega_{\tilde{Z}/Z}\cong i_{\tilde{Z}}^*\Omega_{\tilde{M}/M}$. Applying the truncation functor $h_0$ gives us the exact sequence of sheaves
\[
0\to p_Z^*\Omega_{Z/B}\to \Omega_{\tilde{Z}/B}\xrightarrow{\pi} \Omega_{\tilde{Z}/Z}\to0
\]
Given a perfect obstruction theory $[\phi]:E_\bullet\to {\mathbb L}_{Z/B}$, we say that a perfect obstruction theory $[\tilde{\phi}]:\tilde{E}_\bullet\to {\mathbb L}_{\tilde{Z}/B}$ is {\em induced from } $[\phi]$ if\\[5pt]
i. $\tilde{E}_\bullet=p_Z^*E_\bullet\oplus \Omega_{\tilde{Z}/Z}$\\[2pt]
ii. $[\tilde{\phi}]$ fits into a commutative diagram
\[
\xymatrix{
p_Z^*E_\bullet \ar[r]^i\ar[d]_{p_Z^*[\phi]}&\tilde{E}_\bullet\ar[d]^{[\tilde{\phi}]}\ar[r]^p&\Omega_{\tilde{Z}/Z}\ar@{=}[d]\\
p_Z^*{\mathbb L}_{Z/B}\ar[r]^{{\mathbb L}(p_Z)}& {\mathbb L}_{\tilde{Z}/B}\ar[r]& \Omega_{\tilde{Z}/Z}
}
\]
with $i$ and $p$ the canonical inclusion and projection.
Since $h_0([\phi]):h_0(E_\bullet)\to \Omega_{Z/B}$ and $h_0([\tilde{\phi}]):h_0(\tilde{E}_\bullet)\to \Omega_{\tilde{Z}/B}$ are both isomorphisms, a necessary condition for an induced obstruction theory to exist is that the surjection $\Omega_{\tilde{Z}/B}\to \Omega_{\tilde{Z}/Z}$ should split.
\begin{lemma}\label{lem:InducedObstThy} Suppose $\tilde{Z}$ is affine. Then for each perfect obstruction theory $[\phi]$ on $Z$, there exists an induced obstruction theory $[\tilde{\phi}]:\tilde{E}_\bullet\to {\mathbb L}_{\tilde{Z}/B}$ on $\tilde{Z}$.
Moreover, $[\tilde{\phi}]$ is unique up to canonical isomorphism $\alpha:(\tilde{E},[\tilde{\phi}])\xrightarrow{\sim}(\tilde{E},[\tilde{\phi}'])$ of obstruction theories. \end{lemma}
\begin{proof} Since $\tilde{Z}={\rm Spec\,} A$ is affine and $G$ is tame, ${\rm Hom}_{D^\text{\it perf}_G(\tilde{Z})}(\Omega_{\tilde{Z}/Z}, p_Z^*{\mathbb L}_{Z/B}[1])=0$. This gives us a splitting $\tilde{s}$ of the distinguished triangle
\[
p_Z^*{\mathbb L}_{Z/B}\to {\mathbb L}_{\tilde{Z}/B}\xrightarrow{\pi} \Omega_{\tilde{Z}/Z}\to p_Z^*{\mathbb L}_{Z/B}[1]
\]
and thereby an isomorphism ${\mathbb L}_{\tilde{Z}/B}\cong p_Z^*{\mathbb L}_{Z/B} \oplus \Omega_{\tilde{Z}/Z}$ in $D^\text{\it perf}_G(\tilde{Z})$. From this we see that each perfect obstruction theory $[\phi]:E_\bullet\to {\mathbb L}_{Z/B}$ admits an induced perfect obstruction theory of the form
\[
[\tilde\phi_{\tilde{s}}]:=[\phi]\oplus{\operatorname{\rm Id}}_{\Omega_{\tilde{Z}/Z}}:p_Z^*E_\bullet \oplus \Omega_{\tilde{Z}/Z}\to p_Z^*{\mathbb L}_{Z/B} \oplus \Omega_{\tilde{Z}/Z}\cong {\mathbb L}_{\tilde{Z}/B},
\]
with the isomorphism depending on the choice of splitting $\tilde{s}$ of the distinguished triangle.
We have the distinguished triangle \widehat{+}[ \tau_{\ge1}{\mathbb L}_{\tilde{Z}/B}\to {\mathbb L}_{\tilde{Z}/B}\to h_0{\mathbb L}_{\tilde{Z}/B}\to (\tau_{\ge1}{\mathbb L}_{\tilde{Z}/B})[1] \widehat{+}] Since $\tilde{Z}$ is affine, $\Omega_{\tilde{Z}/Z}$ is locally free and $h_n\tau_{\ge1}{\mathbb L}_{\tilde{Z}/B}=0$ for $n\le0$ and for $n>>0$, we have ${\rm Hom}_{D^\text{\it perf}_G(\tilde{Z})}(\Omega_{\tilde{Z}/Z}, \tau_{\ge1}{\mathbb L}_{\tilde{Z}/B}[i])=0$ for all $i\ge0$, so each map of sheaves \widehat{+}[ s:\Omega_{\tilde{Z}/Z}\to h_0{\mathbb L}_{\tilde{Z}/B}=\Omega_{\tilde{Z}/B} \widehat{+}] lifts uniquely to a map $\tilde{s}:\Omega_{\tilde{Z}/Z}\to {\mathbb L}_{\tilde{Z}/B}$ in $D^\text{\it perf}_G(\tilde{Z})$. This shows that splittings of the map $\pi$ are in bijection with splittings of the surjection $\Omega_{\tilde{Z}/B}\to \Omega_{\tilde{Z}/Z}$.
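Spelled out, applying ${\rm Hom}_{D^\text{\it perf}_G(\tilde{Z})}(\Omega_{\tilde{Z}/Z},-)$ to this triangle gives the exact sequence
\[
{\rm Hom}(\Omega_{\tilde{Z}/Z}, \tau_{\ge1}{\mathbb L}_{\tilde{Z}/B})\to {\rm Hom}(\Omega_{\tilde{Z}/Z}, {\mathbb L}_{\tilde{Z}/B})\to {\rm Hom}(\Omega_{\tilde{Z}/Z}, h_0{\mathbb L}_{\tilde{Z}/B})\to {\rm Hom}(\Omega_{\tilde{Z}/Z}, \tau_{\ge1}{\mathbb L}_{\tilde{Z}/B}[1]),
\]
in which the outer terms vanish by the stated ${\rm Hom}$-vanishing, so the middle map is an isomorphism; this is the asserted unique lifting.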
Since $[\phi]$ is a perfect obstruction theory, we also have $h_0(p_Z^*E_\bullet)=p_Z^*\Omega_{Z/B}$, and the same argument as above shows that a second splitting $s'$ of $\Omega_{\tilde{Z}/B}\to \Omega_{\tilde{Z}/Z}$ determines a map $\lambda:\Omega_{\tilde{Z}/Z}\to p_Z^*E_\bullet$. The map $\alpha:p_Z^*E_\bullet \oplus \Omega_{\tilde{Z}/Z}\to p_Z^*E_\bullet \oplus \Omega_{\tilde{Z}/Z}$ with matrix
\[
\alpha=\begin{pmatrix}{\operatorname{\rm Id}}&\lambda\\0&{\operatorname{\rm Id}}\end{pmatrix}
\]
thus defines an automorphism of $p_Z^*E_\bullet \oplus \Omega_{\tilde{Z}/Z}$ intertwining $[\tilde{\phi}_{\tilde{s}}]$ and $[\tilde{\phi}_{\tilde{s}'}]$. \end{proof}
Assuming that $\tilde{Z}$ is affine as above, we write $p_Z^![\phi]$ for the (unique up to canonical isomorphism) perfect obstruction theory induced by a given perfect obstruction theory $[\phi]$.
\begin{remark}[Jouanolou covers]\label{rem:Jouanolou} Hoyois \cite[Proposition 2.20]{Hoyois6} has shown that the Jouanolou trick extends to the equivariant case: for each $M\in {\operatorname{\mathbf{Sch}}}^G/B$, there is an affine space bundle $\tilde{M}\to M$ in ${\operatorname{\mathbf{Sch}}}^G/B$ such that $\pi_{\tilde{M}}:\tilde{M}\to B$ is an affine morphism. We call such a map $\tilde{M}\to M$ a {\em Jouanolou cover} of $M$. For affine $B$, Lemma~\ref{lem:HomotopyInv} will thus enable us to reduce various constructions to the case of affine $Z\in{\operatorname{\mathbf{Sch}}}^G/B$. \end{remark}
We now assume that $B$ is affine. Using Remark~\ref{rem:Jouanolou}, for each $Z\in {\operatorname{\mathbf{Sch}}}^G/B$ and each closed immersion $Z\to M$ with $M\in {\mathbf{Sm}}^G/B$, there is an affine space bundle $\tilde{M}\to M$ with $\tilde{M}$ affine, and thus $\tilde{Z}:=Z\times_M\tilde{M}$ is affine as well.
\begin{lemma}\label{lem:Discussion} Suppose $B$ is affine and we have a closed immersion $i_Z:Z\to M$ with $M$ in ${\mathbf{Sm}}^G/B$. Let $\tilde{M}\to M$ be a Jouanolou cover of $M$ and form the cartesian square
\[
\xymatrix{
\tilde{Z}\ar[r]^{i_{\tilde{Z}}}\ar[d]_{p_Z}&\tilde{M}\ar[d]^{p_M}\\
Z\ar[r]_{i_Z}&M
}
\]
Let $(E_\bullet, [\phi])$ be a perfect obstruction theory on $Z$. Then an induced obstruction theory $p_Z^![\phi]$ on $\tilde{Z}$ exists, is unique up to canonical isomorphism, and admits a reduced normalized representative $\tilde\phi$ for $p_Z^![\phi]$. \end{lemma}
\begin{proof} Since $\tilde{Z}$ is affine, this follows from Lemma~\ref{lem:ReducedNorm} and Lemma~\ref{lem:InducedObstThy}. \end{proof}
\section{The virtual fundamental class---the general case}\label{sec:VFCGeneral} In this section, we assume that $B$ is affine.
We use our construction in \S\ref{sec:RedNormVirClass} of the virtual fundamental class for a reduced normalized representative together with the results of \S\ref{sec:ObstThy} to give a well-defined virtual fundamental class for a perfect obstruction theory on a general $Z\in {\operatorname{\mathbf{Sch}}}^G/B$.
Let $[\phi]:E\to {\mathbb L}_{Z/B}$ be a perfect obstruction theory on some $Z\in {\operatorname{\mathbf{Sch}}}^G/B$. Choose a closed immersion $i:Z\to M$ with $M\in{\mathbf{Sm}}^G/B$ and take a Jouanolou cover
\[
\xymatrix{
\tilde{Z}\ar[r]^{i_{\tilde{Z}}}\ar[d]_{p_Z}&\tilde{M}\ar[d]^{p_M}\\
Z\ar[r]_{i_Z}&M
}
\]
By Lemma~\ref{lem:Discussion}, we have an induced perfect obstruction theory $(\tilde{E}_\bullet, [\tilde\phi]):=(p_Z^!E_\bullet, p_Z^![\phi])$ on $\tilde{Z}$ and a reduced normalized representative $\tilde\phi:\tilde{F}_\bullet\to \tau_{\le 1}{\mathbb L}_{\tilde{Z}/B}$ for $[\tilde\phi]$. In particular, we have the identity
\[
\tilde{E}_\bullet= p_Z^*E_\bullet\oplus \Omega_{\tilde{Z}/Z}
\]
inducing an isomorphism of suspension operators
\[
\Sigma^{p_Z^*{\mathbb V}(E_\bullet)+T_{\tilde{Z}/Z}}\cong \Sigma^{{\mathbb V}(\tilde{E}_\bullet)}.
\]
The quasi-isomorphism $\tilde{F}_\bullet\to\tilde{E}_\bullet$, and the fact that $\tilde{F}_\bullet$ is reduced and normalized, give us a canonical isomorphism
\[
\Sigma^{i_{\tilde{Z}}^*T_{\tilde{M}}-\tilde{F}^1}\cong \Sigma^{{\mathbb V}(\tilde{E}_\bullet)}.
\]
Since $p_Z:\tilde{Z}\to Z$ is an affine space bundle, homotopy invariance implies that the smooth push-forward map
\[
p_{Z*}:\pi_{\tilde{Z}!}\circ \Sigma^{p_Z^*{\mathbb V}(E_\bullet)+T_{\tilde{Z}/Z}}\circ p_Z^*\to \pi_{Z!}\circ \Sigma^{{\mathbb V}(E_\bullet)}
\]
is an isomorphism. Applying this to $1_Z$, using the isomorphisms
\[
\Sigma^{p_Z^*{\mathbb V}(E_\bullet)+T_{\tilde{Z}/Z}}\cong
\Sigma^{{\mathbb V}(\tilde{E}_\bullet)}\cong \Sigma^{i_{\tilde{Z}}^*T_{\tilde{M}}-\tilde{F}^1}
\]
described above and taking the inverse gives us the isomorphism \begin{equation}\label{eqn:JouanolouComp} \vartheta_{i_Z, p_Z, \tilde{F}_\bullet}: Z({\mathbb V}(E_\bullet))_{\operatorname{B.M.}}\to \tilde{Z}(i_{\tilde{Z}}^*T_{\tilde{M}}-\tilde{F}^1)_{\operatorname{B.M.}}. \end{equation}
We have the virtual fundamental class $[\tilde{Z}, \tilde{\phi}, i_{\tilde{Z}}]^\text{\it vir}\in \mathbb{S}^{\operatorname{B.M.}}_B(\tilde{Z}, i_{\tilde{Z}}^*T_{\tilde{M}}-\tilde{F}^1)$ (Definition~\ref{defVFCRN}), giving us the class
\[
\vartheta_{i_Z, p_Z, \tilde{F}_\bullet}^*([\tilde{Z}, \tilde{\phi}, i_{\tilde{Z}}]^\text{\it vir})\in \mathbb{S}^{\operatorname{B.M.}}_B(Z, {\mathbb V}(E_\bullet)).
\]
The main point is that this class is independent of the various choices made. This is proven with the help of the following two results.
\begin{lemma}\label{lem:BundleInv} Let $p_M:\tilde{M}\to M$ be a Jouanolou cover and let $q_M:\hat{M}\to \tilde{M}$ be a vector bundle. Form the cartesian diagram
\begin{equation}\label{eqn:DiagBundleInv}
\xymatrix{
\hat{Z}\ar@/_1cm/[dd]_{\hat{p}_Z}\ar[r]^{i_{\hat{Z}}}\ar[d]_{q_Z}&\hat{M}\ar[d]^{q_M}\ar@/^1cm/[dd]^{\hat{p}_M}\\
\tilde{Z}\ar[r]^{i_{\tilde{Z}}}\ar[d]_{p_Z}&\tilde{M}\ar[d]^{p_M}\\
Z\ar[r]_{i_Z}&M
}
\end{equation}
Let $[\phi]:E_\bullet\to {\mathbb L}_{Z/B}$ be a perfect obstruction theory, form the induced obstruction theories $(p_Z^!E_\bullet, p_Z^![\phi])$ and $(\hat{p}_Z^!E_\bullet, \hat{p}_Z^![\phi])$, and let
\[
\tilde{\phi}:\tilde{F}_\bullet\to ({\mathcal I}_{\tilde{Z}}/{\mathcal I}_{\tilde{Z}}^2\xrightarrow{d}i_{\tilde{Z}}^*\Omega_{\tilde{M}/B}),\quad \hat{\phi}:\hat{F}_\bullet\to ({\mathcal I}_{\hat{Z}}/{\mathcal I}_{\hat{Z}}^2\xrightarrow{d}i_{\hat{Z}}^*\Omega_{\hat{M}/B})
\]
be reduced normalized representatives of $p_Z^![\phi]$ and $\hat{p}_Z^![\phi]$, respectively. We have the isomorphisms of Lemma~\ref{lem:HomotopyInv},
\[
\mathfrak{C}^{st}(p_M):\mathfrak{C}^{st}_{\tilde{Z}}\to \mathfrak{C}^{st}_Z,\quad \mathfrak{C}^{st}(\hat{p}_M):\mathfrak{C}^{st}_{\hat{Z}}\to \mathfrak{C}^{st}_Z.
\]
The respective 0-sections $0_{\tilde{F}^1}:\tilde{Z}\to \tilde{F}^1$, $0_{\hat{F}^1}:\hat{Z}\to \hat{F}^1$ induce the Gysin push-forward maps $0_{\tilde{F}^1!}$ and $0_{\hat{F}^1!}$, and the respective closed immersions $i_{\tilde{\phi}}:\mathfrak{C}_{i_{\tilde{Z}}}\to \tilde{F}^1$ and $i_{\hat{\phi}}:\mathfrak{C}_{i_{\hat{Z}}}\to \hat{F}^1$ induce the proper pull-back maps
$i^*_{\tilde{\phi}}$ and $i^*_{\hat{\phi}}$. We have as well the isomorphisms
$\alpha_{i_{\tilde{Z}}}:\mathfrak{C}^{st}_{\tilde{Z}}\to \mathfrak{C}_{i_{\tilde{Z}}}(\sigma_{i_{\tilde{Z}}}^*T_{\tilde{M}})_{\operatorname{B.M.}}$ and $\alpha_{i_{\hat{Z}}}:\mathfrak{C}^{st}_{\hat{Z}}\to\mathfrak{C}_{i_{\hat{Z}}}(\sigma_{i_{\hat{Z}}}^*T_{\hat{M}})_{\operatorname{B.M.}}$.
Define maps $s_{\tilde\phi}:\tilde{Z}(i_{\tilde{Z}}^*T_{\tilde{M}}-\tilde{F}^1)_{\operatorname{B.M.}}\to \mathfrak{C}^{st}_{\tilde{Z}}$, $s_{\hat\phi}:\hat{Z}(i_{\hat{Z}}^*T_{\hat{M}}-\hat{F}^1)_{\operatorname{B.M.}}\to \mathfrak{C}^{st}_{\hat{Z}}$ by \begin{equation}\label{eqn:CompMapDef} s_{\tilde\phi}:=\alpha_{i_{\tilde{Z}}}^{-1}\circ i^*_{\tilde{\phi}}\circ 0_{\tilde{F}^1!}\text{ and } s_{\hat\phi}:=\alpha_{i_{\hat{Z}}}^{-1}\circ i^*_{\hat{\phi}}\circ 0_{\hat{F}^1!}. \end{equation}
Suppose that $\hat{F}_1=q_Z^*\tilde{F}_1$. Then
\[
\mathfrak{C}^{st}(p_M)\circ s_{\tilde{\phi}}\circ\vartheta_{i_Z,p_Z, \tilde{F}_\bullet}=\mathfrak{C}^{st}(\hat{p}_M)\circ s_{\hat{\phi}}\circ \vartheta_{i_Z, \hat{p}_Z, \hat{F}_\bullet}.
\]
\end{lemma}
\begin{proof} The identity $\hat{F}_1=q_Z^*\tilde{F}_1$ gives us the map of vector bundles over $q_Z$, $q_F:\hat{F}^1\to \tilde{F}^1$. This identifies $\hat{F}^1$ with the vector bundle $\hat{Z}\times_{\tilde{Z}}\tilde{F}^1\to \tilde{F}^1$, which we denote by $q_F:V\to \tilde{F}^1$.
Saying that $(\tilde{F}_\bullet, \tilde\phi)$ represents $p_Z^![\phi]$ gives us an isomorphism $\tilde{F}_\bullet\cong p_Z^*E_\bullet\oplus \Omega_{\tilde{Z}/Z}$; similarly, we are given an isomorphism $\hat{F}_\bullet\cong \hat{p}_Z^*E_\bullet \oplus \Omega_{\hat{Z}/Z}$. These give us isomorphisms
\begin{gather*}
\tilde\alpha:\tilde{Z}(i^*_{\tilde{Z}}T_{\tilde{M}}-\tilde{F}^1)_{\operatorname{B.M.}}\xrightarrow{\sim} \tilde{Z}(p_Z^*{\mathbb V}(E_\bullet)+T_{\tilde{Z}/Z})_{\operatorname{B.M.}},\\
\hat\alpha:\hat{Z}(i^*_{\hat{Z}}T_{\hat{M}}-\hat{F}^1)_{\operatorname{B.M.}}\xrightarrow{\sim} \hat{Z}(\hat{p}_Z^*{\mathbb V}(E_\bullet)+T_{\hat{Z}/Z})_{\operatorname{B.M.}}
\end{gather*}
The exact sequence
\[
0\to T_{\hat{Z}/\tilde{Z}}\to T_{\hat{Z}/Z}\to q_Z^*T_{\tilde{Z}/Z}\to0
\]
gives the isomorphism
\[
\alpha:\hat{Z}(\hat{p}_Z^*{\mathbb V}(E_\bullet)+T_{\hat{Z}/Z})_{\operatorname{B.M.}}\to \hat{Z}(q_Z^*(p_Z^*{\mathbb V}(E_\bullet)+ T_{\tilde{Z}/Z}) +T_{\hat{Z}/\tilde{Z}})_{\operatorname{B.M.}}
\]
Since $q_Z:\hat{Z}\to \tilde{Z}$ is a vector bundle over $\tilde{Z}$, the smooth push-forward map
\[
q_{Z*}:\hat{Z}(q_Z^*(p_Z^*{\mathbb V}(E_\bullet)+T_{\tilde{Z}/Z})+T_{\hat{Z}/\tilde{Z}})_{\operatorname{B.M.}}\to \tilde{Z}(p_Z^*{\mathbb V}(E_\bullet)+T_{\tilde{Z}/Z})_{\operatorname{B.M.}}
\]
is an isomorphism. Putting these together gives us the isomorphism
\[
\beta:\hat{Z}(i^*_{\hat{Z}}T_{\hat{M}}-\hat{F}^1)_{\operatorname{B.M.}}\xrightarrow{\sim} \tilde{Z}(i^*_{\tilde{Z}}T_{\tilde{M}}-\tilde{F}^1)_{\operatorname{B.M.}}
\]
defined as $\beta:=(\tilde\alpha)^{-1}\circ q_{Z*}\circ \alpha\circ \hat{\alpha}$.
The functoriality of smooth push-forward for the composition $\hat{p}_Z=p_Z\circ q_Z$ yields the identity
\begin{equation}\label{eqn:BundleInv(A)}\tag{A}
\beta\circ\vartheta_{i_Z,\hat{p}_Z, \hat{F}_\bullet}=\vartheta_{i_Z, p_Z, \tilde{F}_\bullet}.
\end{equation}
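The identity \eqref{eqn:BundleInv(A)} can be checked schematically as follows; this is only a sketch, suppressing the canonical suspension identifications. Writing $\vartheta_{i_Z,p_Z,\tilde{F}_\bullet}=\tilde\alpha^{-1}\circ (p_{Z*})^{-1}$ and $\vartheta_{i_Z,\hat{p}_Z,\hat{F}_\bullet}=\hat\alpha^{-1}\circ(\hat{p}_{Z*})^{-1}$, the functoriality $\hat{p}_{Z*}=p_{Z*}\circ q_{Z*}$ gives
\[
\beta\circ\vartheta_{i_Z,\hat{p}_Z,\hat{F}_\bullet}
=\tilde\alpha^{-1}\circ q_{Z*}\circ\alpha\circ(\hat{p}_{Z*})^{-1}
=\tilde\alpha^{-1}\circ(p_{Z*})^{-1}
=\vartheta_{i_Z,p_Z,\tilde{F}_\bullet},
\]
the middle equality using that $\alpha$ intertwines $\hat{p}_{Z*}$ with $p_{Z*}\circ q_{Z*}$.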
Let $p_{\hat{F}^1}:\hat{F}^1\to \hat{Z}$, $p_{\tilde{F}^1}:\tilde{F}^1\to \tilde{Z}$ be the projections. The smooth push-forward map for the smooth map $q_F$ combined with the exact sequence of vector bundles on $\hat{F}^1$
\begin{equation}\label{eqn:ExactSeqBundleInv}
0\to q_F^*V\to p_{\hat{F}^1}^*i_{\hat{Z}}^*T_{\hat{M}}\to q_F^*p_{\tilde{F}^1}^*i_{\tilde{Z}}^*T_{\tilde{M}}\to0
\end{equation}
induces the morphism
\[
q_{F*}: \hat{F}^1(p_{\hat{F}^1}^*i_{\hat{Z}}^*T_{\hat{M}})_{\operatorname{B.M.}}\to \tilde{F}^1(p_{\tilde{F}^1}^*i_{\tilde{Z}}^*T_{\tilde{M}})_{\operatorname{B.M.}},
\]
giving us the diagram
\begin{equation}\label{eqn:BundleInv(B)}\tag{B}
\xymatrix{
\hat{Z}(i_{\hat{Z}}^*T_{\hat{M}}-\hat{F}^1)_{\operatorname{B.M.}}\ar[r]^-\beta\ar[d]_{0_{\hat{F}^1!}}&
\tilde{Z}(i^*_{\tilde{Z}}T_{\tilde{M}}-\tilde{F}^1)_{\operatorname{B.M.}} \ar[d]^{0_{\tilde{F}^1!}}\\
\hat{F}^1(p_{\hat{F}^1}^*i_{\hat{Z}}^*T_{\hat{M}})_{\operatorname{B.M.}} \ar[r]^-{q_{F*}} & \tilde{F}^1(p_{\tilde{F}^1}^*i_{\tilde{Z}}^*T_{\tilde{M}})_{\operatorname{B.M.}}
}
\end{equation}
This commutes by Remark~\ref{rem:GysinSmoothCartesian}: the Gysin push-forward commutes with smooth push-forward in a cartesian square of vector bundles.
The cartesian diagram \eqref{eqn:DiagBundleInv} gives rise to the cartesian diagram
\[
\xymatrix{
\mathfrak{C}_{i_{\hat{Z}}}\ar[r]^-{i_{\hat{\phi}}}\ar[d]_{\mathfrak{C}(q_M)}&\hat{F}^1\ar[d]^{q_F}\\
\mathfrak{C}_{i_{\tilde{Z}}}\ar[r]_-{i_{\tilde{\phi}}}&\tilde{F}^1
}
\]
Using the exact sequence \eqref{eqn:ExactSeqBundleInv} again, smooth push-forward for the smooth morphism $\mathfrak{C}(q_M)$ gives the map
\[
\mathfrak{C}(q_M)_*:\mathfrak{C}_{i_{\hat{Z}}}(\sigma_{i_{\hat{Z}}}^*T_{\hat{M}})_{\operatorname{B.M.}}\to \mathfrak{C}_{i_{\tilde{Z}}}(\sigma_{i_{\tilde{Z}}}^*T_{\tilde{M}})_{\operatorname{B.M.}}
\]
The compatibility of smooth push-forward and proper pull-back in cartesian squares (see Lemma~\ref{lem:BaseChange}) implies that the diagram
\begin{equation}\label{eqn:BundleInv(C)}\tag{C}
\xymatrix{
\hat{F}^1(p_{\hat{F}^1}^*i_{\hat{Z}}^*T_{\hat{M}})_{\operatorname{B.M.}} \ar[r]^-{q_{F*}}\ar[d]_{i_{\hat{\phi}}^*} & \tilde{F}^1(p_{\tilde{F}^1}^*i_{\tilde{Z}}^*T_{\tilde{M}})_{\operatorname{B.M.}} \ar[d]_{i_{\tilde{\phi}}^*}\\
\mathfrak{C}_{i_{\hat{Z}}}(\sigma_{i_{\hat{Z}}}^*T_{\hat{M}})_{\operatorname{B.M.}} \ar[r]_-{\mathfrak{C}(q_M)_*}& \mathfrak{C}_{i_{\tilde{Z}}}(\sigma_{i_{\tilde{Z}}}^*T_{\tilde{M}})_{\operatorname{B.M.}}
}
\end{equation}
commutes.
Finally, it follows directly from the definitions of the various morphisms involved that the diagram
\begin{equation}\label{eqn:BundleInv(D)}\tag{D}
\xymatrix{
\mathfrak{C}^{st}_{\hat{Z}}\ar[r]^-{\mathfrak{C}^{st}(q_M)}\ar[d]_{\alpha_{i_{\hat{Z}}}}&\mathfrak{C}^{st}_{\tilde{Z}}\ar[d]^{\alpha_{i_{\tilde{Z}}}}\\
\mathfrak{C}_{i_{\hat{Z}}}(\sigma_{i_{\hat{Z}}}^*T_{\hat{M}})_{\operatorname{B.M.}} \ar[r]_-{\mathfrak{C}(q_M)_*} & \mathfrak{C}_{i_{\tilde{Z}}}(\sigma_{i_{\tilde{Z}}}^*T_{\tilde{M}})_{\operatorname{B.M.}}
}
\end{equation}
commutes. Putting together the identity \eqref{eqn:BundleInv(A)} with the commutativity of the diagrams \eqref{eqn:BundleInv(B)}, \eqref{eqn:BundleInv(C)} and \eqref{eqn:BundleInv(D)} gives the identity
\[
s_{\tilde{\phi}}\circ\vartheta_{i_Z,p_Z, \tilde{F}_\bullet}=\mathfrak{C}^{st}(q_M)\circ s_{\hat{\phi}}\circ \vartheta_{i_Z, \hat{p}_Z, \hat{F}_\bullet};
\]
composing on the left with $\mathfrak{C}^{st}(p_M)$ and using the functoriality
\[
\mathfrak{C}^{st}(p_M)\circ \mathfrak{C}^{st}(q_M)=\mathfrak{C}^{st}(\hat{p}_M)
\]
completes the proof. \end{proof}
\begin{proposition} \label{prop:VirtFundClass} Suppose $B$ is affine, take $Z\in {\operatorname{\mathbf{Sch}}}^G/B$ and let $[\phi]:E_\bullet\to{\mathbb L}_{Z/B}$ be a perfect obstruction theory on $Z$. Choose a closed immersion $i_Z:Z\to M$ with $M\in {\mathbf{Sm}}^G/B$. Choose a Jouanolou cover $p_M:\tilde{M}\to M$ and let $p_Z:\tilde{Z}\to Z$ be the pull-back $\tilde{M}\times_MZ$. Choose a reduced normalized representative $\tilde{\phi}:\tilde{F}_\bullet\to ({\mathcal I}_{\tilde{Z}}/{\mathcal I}_{\tilde{Z}}^2\to i_{\tilde{Z}}^*\Omega_{\tilde{M}/B})$ of $p_Z^![\phi]$. This gives us the isomorphisms $\mathfrak{C}^{st}(p_M)$ \eqref{eqn:StableConeIso} and $\vartheta_{i_Z, p_Z, \tilde{F}_\bullet}$ \eqref{eqn:JouanolouComp}, and the map $s_{\tilde\phi}$ \eqref{eqn:CompMapDef}. Composing these morphisms gives
\[
\Phi_{E_\bullet,[\phi]}:=\mathfrak{C}^{st}(p_M)\circ s_{\tilde\phi}\circ \vartheta_{i_Z, p_Z, \tilde{F}_\bullet}: Z({\mathbb V}(E_\bullet))_{\operatorname{B.M.}}\to \mathfrak{C}^{st}_Z.
\]
Then the morphism $\Phi_{E_\bullet,[\phi]}$ depends only on the perfect obstruction theory $(E_\bullet,[\phi])$, that is, it is independent of the choice of closed immersion $i_Z$, the choice of Jouanolou cover $p_M$, the choice of induced obstruction theory $p_Z^![\phi]$ and the choice of reduced normalized representative $\tilde{\phi}$ of $p_Z^![\phi]$.
Moreover, we have the identity
\[
\vartheta_{i_Z, p_Z, \tilde{F}_\bullet}^*([\tilde{Z}, \tilde{\phi}, i_{\tilde{Z}}]^\text{\it vir})= [\mathfrak{C}^{st}]\circ \Phi_{E_\bullet,[\phi]}.
\]
\end{proposition}
Relying on Proposition~\ref{prop:VirtFundClass}, we may make the following definition.
\begin{definition}\label{def:VirtFundClass} Suppose $B$ is affine, take $Z\in {\operatorname{\mathbf{Sch}}}^G/B$ and let $[\phi]:E_\bullet\to{\mathbb L}_{Z/B}$ be a perfect obstruction theory on $Z$. We have the fundamental class $[\mathfrak{C}^{st}]\in \mathbb{S}_B^{0,0}(\mathfrak{C}^{st})$. Define the {\em virtual fundamental class}
\[
[Z,[\phi]]^\text{\it vir}\in \mathbb{S}^{\operatorname{B.M.}}_B(Z,{\mathbb V}(E_\bullet))
\]
by
\[
[Z,[\phi]]^\text{\it vir}=[\mathfrak{C}^{st}]\circ \Phi_{E_\bullet,[\phi]}.
\]
If we have a motivic ring spectrum ${\mathcal E}$ in ${\operatorname{SH}}^G(B)$ with unit $\epsilon_{{\mathcal E}}:\mathbb{S}_B\to {\mathcal E}$, we define
\[
[Z,[\phi]]^\text{\it vir}_{\mathcal E}:=\epsilon_{{\mathcal E}}([Z,[\phi]]^\text{\it vir})\in {\mathcal E}^{\operatorname{B.M.}}(Z,{\mathbb V}(E_\bullet)).
\]
\end{definition}
\begin{remark} It follows from the last assertion in Proposition~\ref{prop:VirtFundClass} that, given a closed immersion $i_Z:Z\to M$ with $M\in{\mathbf{Sm}}^G/B$, a Jouanolou cover $\tilde{M}\to M$, inducing the Jouanolou cover $p_Z:\tilde{Z}\to Z$, and a reduced normalized representative $\tilde\phi:\tilde{F}_\bullet\to {\mathbb L}_{\tilde{Z}/B}$ for $p_Z^![\phi]$, we have
\[
[Z,[\phi]]^\text{\it vir}=\vartheta_{i_Z, p_Z, \tilde{F}_\bullet}^*([\tilde{Z}, \tilde{\phi}, i_{\tilde{Z}}]^\text{\it vir}).
\]
\end{remark}
\begin{proof}[Proof of Proposition~\ref{prop:VirtFundClass}]
Suppose we have fixed a closed immersion $i_Z:Z\to M$ and a Jouanolou cover $\tilde{M}\to M$. By Lemma~\ref{lem:Discussion}, there exists an induced obstruction theory $p_Z^![\phi]$ on $\tilde{Z}$, which is unique up to canonical isomorphism.
Suppose we have two reduced normalized obstruction theories $(\tilde{F}_\bullet,\phi)$, $(\tilde{F}'_\bullet,\phi')$ representing $p_Z^![\phi]$. By Lemma~\ref{lem:RedNormIso}, there is an isomorphism $\rho_\bullet:\tilde{F}_\bullet\to \tilde{F}'_\bullet$ of perfect obstruction theories. In particular, the map $\rho^1:\tilde{F}^{\prime 1}\to \tilde{F}^1$ satisfies $\rho^1\circ i_{\phi'}=i_\phi$. This gives us the commutative diagram
\[
\xymatrix{
Z({\mathbb V}(E_\bullet))_{\operatorname{B.M.}}\ar[r]^-{\vartheta_{i_Z, p_Z, \tilde{F}_\bullet}} \ar[d]_{\vartheta_{i_Z, p_Z, \tilde{F}'_\bullet}}&\tilde{Z}(i_{\tilde{Z}}^*T_{\tilde{M}}-\tilde{F}^1)_{\operatorname{B.M.}}\ar[d]^{s_{\phi}}\\
\tilde{Z}(i_{\tilde{Z}}^*T_{\tilde{M}}-\tilde{F}^{\prime1})_{\operatorname{B.M.}}\ar[r]_-{s_{\phi'}}\ar[ur]_{\gamma(\rho^1)}&\mathfrak{C}^{st}_Z
}
\]
where $\gamma(\rho^1)$ is the isomorphism induced by $\rho^1$. This yields the independence of $\Phi_{E_\bullet,[\phi]}$ of the choice of reduced normalized obstruction theory representing $p_Z^![\phi]$.
To show the independence of the choice of Jouanolou cover $p_M:\tilde{M}\to M$ over a fixed closed immersion $i:Z\to M$, we may assume that we are comparing one cover $p_M:\tilde{M}\to M$ with a second cover $\hat{p}_M:\hat{M}\to M$ which factors as
\[
\hat{M}\xrightarrow{q_M}\tilde{M}\xrightarrow{p_M}M
\]
with $q_M:\hat{M}\to \tilde{M}$ a vector bundle over $\tilde{M}$. The independence here follows from Lemma~\ref{lem:BundleInv} and what we have already shown.
Finally, suppose we have a smooth morphism $q:N\to M$ and a closed immersion $i'_Z:Z\to N$ with $q\circ i'_Z=i_Z$. By what we have already shown, we may suppose that $M$ is affine; if we take a Jouanolou cover $\tilde{N}\to N$, then as $Z$ is affine, the cover admits a section over $Z$, so we may replace $N$ with $\tilde{N}$, change notation and assume that $N$ is also affine. With what we have already proven, we may choose a reduced normalized representative $\phi:F_\bullet\to ({\mathcal I}_{i(Z)}/{\mathcal I}_{i(Z)}^2\to i_Z^*\Omega_{M/B})$ for $[\phi]$.
We have the commutative diagram with exact rows
\[
\xymatrix{
0\ar[r]&q_Z^*({\mathcal I}_{i(Z)}/{\mathcal I}_{i(Z)}^2)\ar[r]\ar[d]^d&{\mathcal I}_{i'(Z)}/{\mathcal I}_{i'(Z)}^2\ar[r]\ar[d]^d&i^{\prime*}\Omega_{N/M}\ar[r]\ar@{=}[d]&0\\
0\ar[r]&i^{\prime*}q^*\Omega_{M/B}\ar[r]&i^{\prime*}\Omega_{N/B}\ar[r]&i^{\prime*}\Omega_{N/M}\ar[r]&0
}
\]
Choosing a splitting $\rho:i^{\prime*}\Omega_{N/M}\to {\mathcal I}_{i'(Z)}/{\mathcal I}_{i'(Z)}^2$ gives us the splitting $\sigma:=d\circ\rho:i^{\prime*}\Omega_{N/M}\to i^{\prime*}\Omega_{N/B}$. This gives the reduced normalized representative for $[\phi]$ (with respect to $i'_Z$)
\[
F'_\bullet:=(F_1\oplus i^{\prime*}\Omega_{N/M}\xrightarrow{q^*d_F\oplus \sigma}
i^{\prime*}\Omega_{N/B})\xrightarrow{(q^*\phi_1+\rho, {\operatorname{\rm Id}})} ( {\mathcal I}_{i'(Z)}/{\mathcal I}^2_{i'(Z)}\xrightarrow{d}i^{\prime*}\Omega_{N/B}).
\]
We have the cartesian diagram
\[
\xymatrix{
\mathfrak{C}_{i'_Z}\ar[d]_{\mathfrak{C}(q)}\ar[r]^-{i_{\phi'}}&F^1\oplus i^{\prime*}T_{N/M}\ar[d]^{p_1}\\
\mathfrak{C}_{i_Z}\ar[r]_-{i_\phi}&F^1
}
\]
identifying $\mathfrak{C}_{i'_Z}$ with the bundle $\sigma_{i'}^*T_{N/M}=\mathfrak{C}_{i_Z}\times_Z i^{\prime*}T_{N/M}$ over $\mathfrak{C}_{i_Z}$ and $i_{\phi'}$ with $i_\phi\times {\operatorname{\rm Id}}$.
We have the isomorphism (Lemma~\ref{lem:CanonIso})
\[
\psi_q:=\psi_{i_Z',i_Z}: \mathfrak{C}_{i_Z'}(\sigma_{i_Z'}^*T_{N})_{\operatorname{B.M.}}\to \mathfrak{C}_{i_Z}(\sigma_{i_Z}^*T_{M})_{\operatorname{B.M.}}.
\]
By Theorem~\ref{thm:IntStableNormalCone}, the diagram
\begin{equation}\label{eqn:VFCDiag1}
\xymatrix{
\mathfrak{C}^{st}_Z\ar[r]^-{\alpha_i}_-\sim\ar[d]_{\alpha_{i'}}^\wr&
\mathfrak{C}_{i_Z}(\sigma_{i_Z}^*T_{M})_{\operatorname{B.M.}} \\
\mathfrak{C}_{i_Z'}(\sigma_{i_Z'}^*T_{N})_{\operatorname{B.M.}} \ar[ur]_{\psi_q}
}
\end{equation}
commutes.
We have the isomorphism $\theta_q$ \widehat{+}[ \Sigma^{\sigma_{i'_Z}^*T_{N}}\cong \Sigma^{\sigma_{i'_Z}^*T_{N/M}}\circ\Sigma^{\sigma_{i'_Z}^*q^*T_{M}} \widehat{+}] induced by the exact sequence \begin{equation}\label{eqn:ExactSeqFundClassInv} 0\to T_{N/M}\to T_{N}\to q^*T_{M}\to0 \end{equation} and the isomorphism $nat$ \widehat{+}[ \Sigma^{\sigma_{i'_Z}^*q^*T_{M}}\circ \mathfrak{C}(q)^*=\Sigma^{ \mathfrak{C}(q)^*\sigma_{i_Z}^*T_{M}}\circ \mathfrak{C}(q)^*\cong \mathfrak{C}(q)^*\circ\Sigma^{\sigma_{i_Z}^*T_{M}}, \widehat{+}] giving the composition \begin{multline*} \pi_{\mathfrak{C}_{i'_Z}!}\circ\Sigma^{\sigma_{i'_Z}^*T_{N}}\circ \mathfrak{C}(q)^*
\xrightarrow{\theta_q}
\pi_{\mathfrak{C}_{i'_Z}!}\circ\Sigma^{\sigma_{i'_Z}^*T_{N/M}}\circ\Sigma^{\sigma_{i'_Z}^*q^*T_{M}}\circ \mathfrak{C}(q)^*\widehat{+}
\xrightarrow{nat}
\pi_{\mathfrak{C}_{i'_Z}!}\circ\Sigma^{\sigma_{i'_Z}^*T_{N/M}}\circ\mathfrak{C}(q)^*\circ\Sigma^{\sigma_{i_Z}^*T_{M}}
\xrightarrow{\mathfrak{C}(q)_*}
\pi_{\mathfrak{C}_{i_Z}!}\Sigma^{\sigma_{i_Z}^*T_{M}} .
\end{multline*} Let $\Theta_q:=nat\circ \theta_q$. By the definition of $\psi_q$ we have \[ \psi_q=(\mathfrak{C}(q)_*\circ \Theta_q)(1_{\mathfrak{C}_{i_Z}}). \]
We have the isomorphism \[ \theta'_q: \Sigma^{-F^{\prime1}}\circ\Sigma^{i^{\prime*}T_{N}} \xrightarrow{\sim}
\Sigma^{-F^1}\circ\Sigma^{i^*T_{M}} \] induced by the exact sequence \eqref{eqn:ExactSeqFundClassInv} and the identity $F^{\prime1}\cong F^1\oplus i^{\prime*}T_{N/M}$. Let $p_1:F^{\prime1}\to F^1$ be the projection.
Consider the diagram \begin{equation}\label{eqn:VFCDiag22} \xymatrixcolsep{2pt} \xymatrix{ \pi_{Z!}\circ \Sigma^{-F^{\prime1}}\circ\Sigma^{i^{\prime*}T_{N}}\ar[rr]^-{\theta'_q}_-\sim\ar[d]_{0_{F^{\prime1}!}}&& \pi_{Z!}\circ \Sigma^{-F^1}\circ\Sigma^{i^*T_{M}} \ar[d]^{0_{F^1!}}\\
\pi_{F^{\prime1}!}\circ\Sigma^{p_{F^{\prime1}}^*i^{\prime*}_ZT_{N}}\circ p_{F^{\prime1}}^*\ar[dr]_\sim^-{\Theta_q}&& \pi_{F^{1}!}\circ\Sigma^{p_{F^{1}}^*i^{*}_ZT_{M}}\circ p_{F^{1}}^*\\ &\kern-80pt{\pi_{F^{\prime1}!}\circ\Sigma^{p_{F^{\prime1}}^*i^{\prime*}_ZT_{N/M}}\circ p_1^*\circ \Sigma^{p_{F^{1}}^*i^{*}_ZT_{M}}\circ p_{F^{1}}^*}\kern-50pt\ar[ru]^{p_{1*}}_\sim } \end{equation}
As in the proof of Remark~\ref{rem:GysinSmoothCartesian}, one shows that this diagram commutes by using the functoriality of smooth push-forward, replacing $0_{F^1!}$ and $0_{F^{\prime1}!}$ with their respective inverses $p_{F^1*}$ and $p_{F^{\prime1}*}$.
The commutativity of the left-hand side of the diagram \begin{equation}\label{eqn:VFCDiag23} \xymatrixcolsep{2pt} \xymatrix{ \pi_{F^{\prime1}!}\circ\Sigma^{p_{F^{\prime1}}^*i^{\prime*}_ZT_{N}}\circ p_{F^{\prime1}}^*\ar[dr]_\sim^-{\Theta_q}\ar[dd]_{i^*_{\phi'}}&& \pi_{F^{1}!}\circ\Sigma^{p_{F^{1}}^*i^{*}_ZT_{M}}\circ p_{F^{1}}^*\ar[dd]^{i^*_{\phi}}\\ &\kern-60pt{\pi_{F^{\prime1}!}\circ\Sigma^{p_{F^{\prime1}}^*i^{\prime*}_ZT_{N/M}}\circ p_1^*\circ \Sigma^{p_{F^{1}}^*i^{*}_ZT_{M}}\circ p_{F^{1}}^*}\kern-50pt\ar[ru]_\sim^{p_{1*}} \ar[dd]_{i^*_{\phi'}}\\ \pi_{\mathfrak{C}_{i'_Z}!}\circ\Sigma^{\sigma_{i^{\prime}_Z}^*T_{N}}\circ p_{i'_Z}^*\ar[dr]^-{\Theta_q}_-\sim&& \pi_{\mathfrak{C}_{i_Z}!}\circ\Sigma^{\sigma_{i_Z}^*T_{M}}\circ p_{i_Z}^*\\ &\kern-50pt\pi_{\mathfrak{C}_{i_Z}!}\circ\Sigma^{\sigma_{i^{\prime}_Z}^*T_{N/M}}\circ\mathfrak{C}(q)^*\circ\Sigma^{\sigma_{i^{\prime}_Z}^*T_{M}}\circ p_{i'_Z}^*\ar[ru]_\sim^{\mathfrak{C}(q)_*}\kern-50pt } \end{equation} follows from the naturality of $\Sigma^{(-)}:D^\text{\it perf}_{G, iso}(?)\to {\operatorname{Aut}}({\operatorname{SH}}^G(?))$, and that of the right-hand side from the commutativity of smooth push-forward with proper pull-back, Lemma~\ref{lem:BaseChange}.
Putting these two diagrams together and evaluating at $1_Z$ gives us the commutative diagram
\begin{equation}\label{eqn:VFCDiag2} \xymatrixcolsep{40pt} \xymatrix{ Z(i^{\prime*}T_{N}-F^{\prime1})_{\operatorname{B.M.}}\ar[r]^-{\theta'_q(1_Z)}_-\sim\ar[d]_{i_{\phi'}^*\circ 0_{F^{\prime1}!}}& Z(i^*T_{M}-F^1)_{\operatorname{B.M.}} \ar[d]^{i_{\phi}^*\circ 0_{F^1!}}\\ \mathfrak{C}_{i'_Z}(\sigma_{i^{\prime}_Z}^*T_{N})_{\operatorname{B.M.}}\ar[r]^-\sim_-{\psi_q}& \mathfrak{C}_{i_Z}(\sigma_{i_Z}^*T_{M})_{\operatorname{B.M.}}. } \end{equation}
The functoriality of $\Sigma^{(-)}:D^\text{\it perf}_{G, iso}(Z)\to {\operatorname{Aut}}({\operatorname{SH}}^G(Z))$ gives us the commutative diagram of isomorphisms \begin{equation}\label{eqn:VFCDiag3} \xymatrixcolsep{0pt} \xymatrix{ &Z({\mathbb V}(E_\bullet))_{\operatorname{B.M.}}\ar[dl]_{\vartheta_{i_Z, {\operatorname{\rm Id}}_M, F_\bullet}}\ar[dr]^{\vartheta_{i'_Z, {\operatorname{\rm Id}}_N, F'_\bullet}}\\ Z(i^{\prime*}T_{N}-F^{\prime1})_{\operatorname{B.M.}}\ar[rr]^-\sim_-{\theta'_q(1_Z)} && Z(i^*T_{M}-F^1)_{\operatorname{B.M.}} } \end{equation} Putting the diagrams \eqref{eqn:VFCDiag1}, \eqref{eqn:VFCDiag2} and \eqref{eqn:VFCDiag3} together, the definition of $s_\phi$ and $s_{\phi'}$ gives the identity \[ \alpha_{i_Z}\circ s_{\phi'}\circ\vartheta_{i'_Z, {\operatorname{\rm Id}}_N, F'_\bullet}= \psi_q\circ \alpha_{i'_Z}\circ s_{\phi'}\circ\vartheta_{i'_Z, {\operatorname{\rm Id}}_N, F'_\bullet}= \alpha_{i_Z}\circ s_{\phi}\circ\vartheta_{i_Z, {\operatorname{\rm Id}}_M, F_\bullet} \] or \[ s_{\phi'}\circ\vartheta_{i'_Z, {\operatorname{\rm Id}}_N, F'_\bullet}=s_{\phi}\circ\vartheta_{i_Z, {\operatorname{\rm Id}}_M, F_\bullet}. \] As we are taking the trivial Jouanolou covers, we have $p_M={\operatorname{\rm Id}}_M$, $p_N={\operatorname{\rm Id}}_N$, completing the proof. \end{proof}
We conclude with a result on compatibility of the fundamental class and virtual fundamental class with respect to base-change.
\begin{proposition}\label{prop:basechange} Let $\pi_Z:Z\to B$ be in ${\operatorname{\mathbf{Sch}}}^G/B$, let $f:B'\to B$ be a morphism in ${\operatorname{\mathbf{Sch}}}/B$ and let $\pi_{Z'}:Z'=Z\times_BB'\to B'$ be the pull-back. Suppose that the cartesian square \[ \xymatrix{ Z'\ar[d]\ar[r]^{p_1}&Z\ar[d]\\ B'\ar[r]&B } \] is Tor-independent: ${\operatorname{\rm Tor}}^i_{{\mathcal O}_B}({\mathcal O}_{B'},{\mathcal O}_Z)=0$ for $i>0$. Then\\[5pt] 1. Suppose we have a closed immersion $i:Z\to M$ in ${\operatorname{\mathbf{Sch}}}^G/B$ with $M$ in ${\mathbf{Sm}}^G/B$, let $M'=M\times_BB'$, and let $i_{Z'}:Z'\to M'$ be the induced closed immersion. Then we have natural isomorphisms $\mathfrak{C}_{i_{Z'}}\cong \mathfrak{C}_{i_Z}\times_BB'$, $Def(i_{Z'})\cong Def(i)\times_BB'$ over $M'\times{\mathbb A}^1$.\\ 2. The isomorphism $\mathfrak{C}_{i_{Z'}}\cong \mathfrak{C}_{i_Z}\times_BB'$ of (1) induces an isomorphism $\mathfrak{C}^{st}_{Z'}\cong f^*\mathfrak{C}^{st}_Z$ sending the fundamental class $[\mathfrak{C}^{st}_{Z'}]\in \mathbb{S}_{B'}^{0,0}(\mathfrak{C}^{st}_{Z'})$ to $f^*[\mathfrak{C}^{st}_Z]$.\\ 3. Let $[\phi]:E_\bullet\to {\mathbb L}_{Z/B}$ be a perfect obstruction theory on $Z$. Then the map $[\phi']:p_1^*E_\bullet\to {\mathbb L}_{Z'/B'}$ defined as the composition \[ p_1^*E_\bullet\xrightarrow{p_1^*[\phi]} p_1^*{\mathbb L}_{Z/B}\xrightarrow{can} {\mathbb L}_{Z'/B'} \] is a perfect obstruction theory on $Z'$, we have a canonical isomorphism $f^*(Z({\mathbb V}(E_\bullet))_{\operatorname{B.M.}})\cong Z'({\mathbb V}(p_1^*E_\bullet))_{\operatorname{B.M.}}$ and via this isomorphism we have $p_1^*([Z, [\phi]]^\text{\it vir})=[Z', [\phi']]^\text{\it vir}$. \end{proposition}
\begin{proof} The Tor-independence implies that $p_1^*{\mathbb L}_{Z/B}\cong {\mathbb L}_{Z'/B'}$ and that $p_1^*[\phi]:p_1^*E_\bullet\to {\mathbb L}_{Z'/B'}$ is a perfect obstruction theory on $Z'$.
Letting $p_M:M':=M\times_BB'\to M$ be the projection, the Tor-independence implies that the canonical map $p_M^*{\mathcal I}_Z\to {\mathcal I}_{Z'}$ is an isomorphism. This readily implies (1) and gives us the isomorphism of the diagram \[ \xymatrix{ \mathfrak{C}_{i_{Z'}}\ar[r]\ar[d]&Def(i_{Z'})\ar[d]&\ar[l]M'\times({\mathbb A}^1\setminus\{0\})\ar@{=}[d]\\ Z'\ar[r]&M'\times{\mathbb A}^1&\ar[l] M'\times ({\mathbb A}^1\setminus\{0\}) } \] with \[ \left[ \raise15pt\vbox to 20pt{\hbox{$ \xymatrix{ \mathfrak{C}_{i_{Z}}\ar[r]\ar[d]&Def(i_{Z})\ar[d]&\ar[l]M\times({\mathbb A}^1\setminus\{0\})\ar@{=}[d]\\ Z\ar[r]&M\times{\mathbb A}^1&\ar[l] M\times ({\mathbb A}^1\setminus\{0\}) }$}}\right]\times_BB' \] (2) follows from this, the naturality of $\Sigma^{-}:{\mathcal V}(?)\to {\operatorname{SH}}^G(?)$, the localization distinguished triangle with respect to $f^*$, and the base-change isomorphism $Ex(\Delta^*_!)$. This also gives us the isomorphism $f^*(Z({\mathbb V}(E_\bullet))_{\operatorname{B.M.}})\cong Z'({\mathbb V}(p_1^*E_\bullet))_{\operatorname{B.M.}}$. Using the definition of the virtual fundamental class, (3) follows from (1) and (2) together with the compatibility of proper pull-back, smooth push-forward and Gysin push-forward with base-change by $f^*$ (Remark~\ref{rem:basechange}). \end{proof} \section{Comparisons and examples}\label{sec:Comp} We relate our constructions to the constructions of \cite{BF} in the case of motivic cohomology/Chow groups, or more generally the case of an oriented theory, for example, $K$-theory or algebraic cobordism. For simplicity, we take $G=\{{\operatorname{\rm Id}}\}$ and we remind the reader that we are assuming that $B$ is affine.
\subsection{Oriented theories} Let ${\mathcal E}$ be an oriented commutative ring spectrum in ${\operatorname{SH}}(B)$. For $Z\in {\operatorname{\mathbf{Sch}}}^G/B$ and $v\in D^\text{\it perf}_G(Z)$ of virtual rank $r$, we have the {\em Thom isomorphism} \[ \vartheta_v: \Sigma^{v}\pi_Z^!{\mathcal E}\cong \Sigma^{2r,r}\pi_Z^!{\mathcal E} \] giving the isomorphism on ${\mathcal E}$-Borel-Moore homology \[ \vartheta_v^*: {\mathcal E}^{\operatorname{B.M.}}_{a,b}(Z, v)\xrightarrow{\sim}{\mathcal E}^{\operatorname{B.M.}}_{2r+a, r+b}(Z). \] If $p_Z:Z\to B$ is smooth of relative dimension $d_Z$, the purity isomorphism $p_{Z\#}\circ\Sigma^{-T_Z}\cong p_{Z!}$ gives the isomorphism \[
{\mathcal E}^{\operatorname{B.M.}}_{a,b}(Z, v)\cong
(\Sigma^{T_Z-v}{\mathcal E})^{-a, -b}(Z)\cong {\mathcal E}^{2d_Z-2r-a, d_Z-r-b}(Z).
\]
For $i:Z\to M$ a closed immersion in ${\operatorname{\mathbf{Sch}}}^G/B$, with $M$ a smooth $B$-scheme of dimension $d_M$, we thus have \[ {\mathcal E}^{a,b}(\mathfrak{C}^{st}_Z)\xymatrix{\ar[r]^{(\alpha_i^{-1})^*}_\sim&} {\mathcal E}^{a,b}(\mathfrak{C}_i(\sigma_i^*T_M)_{\operatorname{B.M.}})\cong {\mathcal E}_{2d_M-a, d_M-b}^{\operatorname{B.M.}}(\mathfrak{C}_{i}). \]
Noting that $\mathfrak{C}_{i}$ has pure dimension $d_\mathfrak{C}=d_M$ over $B$, the fundamental class $[\mathfrak{C}^{st}_Z]_{\mathcal E}$ is thus an element of ${\mathcal E}^{0,0}(\mathfrak{C}^{st}_Z)\cong{\mathcal E}^{\operatorname{B.M.}}_{2d_\mathfrak{C},d_\mathfrak{C}}(\mathfrak{C}_{i})$ and the virtual fundamental class $[Z,[\phi]]^\text{\it vir}_{\mathcal E}$ associated to a perfect obstruction theory $(E_\bullet, [\phi])$ of virtual rank $r$ lives in ${\mathcal E}^{\operatorname{B.M.}}(Z,{\mathbb V}(E_\bullet))\cong {\mathcal E}^{\operatorname{B.M.}}_{2r,r}(Z)$.
\subsection{Fundamental classes} Let ${\mathcal E}$ be an oriented theory. Suppose we have an integral $B$-scheme $D$ and a principal effective Cartier divisor $\mathfrak{C}$ on $D$ such that $D\setminus \mathfrak{C}$ is smooth over $B$. Suppose $D$ has pure dimension $d_\mathfrak{C}+1$ over $B$, so $\mathfrak{C}$ has pure dimension $d_\mathfrak{C}$ over $B$. Let $t\in \Gamma(D, {\mathcal O}_D)$ be a generator for ${\mathcal I}_\mathfrak{C}$. The map $t:D\setminus \mathfrak{C}\to {\mathbb G}_m$ determines an element \[ [t]\in {\mathcal E}^{1,1}(D\setminus \mathfrak{C})\cong {\mathcal E}^{\operatorname{B.M.}}_{2d_\mathfrak{C}+1, d_\mathfrak{C}}(D\setminus \mathfrak{C}). \] We have the localization sequence \[ \ldots\to{\mathcal E}^{\operatorname{B.M.}}_{2d_\mathfrak{C}+1, d_\mathfrak{C}}(D\setminus \mathfrak{C})\xrightarrow{\partial} {\mathcal E}^{\operatorname{B.M.}}_{2d_\mathfrak{C}, d_\mathfrak{C}}(\mathfrak{C})\xrightarrow{i_{\mathfrak{C}*}} {\mathcal E}^{\operatorname{B.M.}}_{2d_\mathfrak{C}, d_\mathfrak{C}}(D)\to \ldots \] and we have the fundamental class $[\mathfrak{C}]_{\mathcal E}\in {\mathcal E}^{\operatorname{B.M.}}_{2d_\mathfrak{C}, d_\mathfrak{C}}(\mathfrak{C})$ defined by $[\mathfrak{C}]_{\mathcal E}:=\partial[t]$. This class is independent of the choice of defining equation $t$. In case $D=Def(i)$ for our closed immersion $i:Z\to M$, and $t:Def(i)\to {\mathbb A}^1$ is the structure morphism $Def(i)\xrightarrow{p}M\times {\mathbb A}^1\xrightarrow{p_2}{\mathbb A}^1$, we have $\mathfrak{C}=\mathfrak{C}_i$ and $[\mathfrak{C}]$ agrees with the fundamental class $[\mathfrak{C}^{st}_Z]_{\mathcal E}\in {\mathcal E}^{0,0}(\mathfrak{C}^{st}_Z)$ (see \S\ref{sec:FundClass}) after identifying ${\mathcal E}^{\operatorname{B.M.}}_{2d_\mathfrak{C}, d_\mathfrak{C}}(\mathfrak{C})$ with ${\mathcal E}^{0,0}(\mathfrak{C}^{st}_Z)$ as above.
The construction of the fundamental class of $\mathfrak{C}_{i}$ by this method in the case of algebraic cobordism was described to us by Parker Lowrey and appears in his paper with Timo Sch\"urg \cite{LowreySchuerg}.
\begin{ex} \label{ex:ClassicalExamples} Take $B={\rm Spec\,} k$, and $Z\in{\operatorname{\mathbf{Sch}}}/B$. For ${\mathcal E}=H{\mathbb Z}$, the ring spectrum representing motivic cohomology, $H{\mathbb Z}^{\operatorname{B.M.}}_{2d,d}(Z)$ is the classical Chow group ${\rm CH}_d(Z)$. For ${\mathcal E}={\operatorname{KGL}}$, the ring spectrum representing Quillen $K$-theory, ${\operatorname{KGL}}^{\operatorname{B.M.}}_{2d,d}(Z)=G_0(Z)$, the Grothendieck group of coherent sheaves on $Z$. For ${\mathcal E}={\operatorname{MGL}}$, the ring spectrum representing Voevodsky's algebraic cobordism \cite{VoevICM}, and $k$ a field of characteristic zero, ${\operatorname{MGL}}^{\operatorname{B.M.}}_{2d,d}(Z)=\Omega_d(Z)$, the algebraic cobordism of \cite{LevineMorel}.
For ${\mathcal E}=H{\mathbb Z}$, the fundamental class $[\mathfrak{C}]_{H{\mathbb Z}}\in {\mathcal E}^{\operatorname{B.M.}}_{2d_\mathfrak{C}, d_\mathfrak{C}}(\mathfrak{C})$ is the cycle class associated to the scheme $\mathfrak{C}$. For ${\mathcal E}={\operatorname{KGL}}$, $[\mathfrak{C}]_{{\operatorname{KGL}}}$ is the class in $G_0(\mathfrak{C})$ of the structure sheaf ${\mathcal O}_\mathfrak{C}$. For ${\mathcal E}={\operatorname{MGL}}$, $[\mathfrak{C}]_{\operatorname{MGL}}$ is the class associated to the pseudo-divisor ${\operatorname{div}}[t]$ on $D$, applied to any resolution of singularities $f:\tilde{D}\to D$: \[ [\mathfrak{C}]_{\operatorname{MGL}}=f_*({\operatorname{div}}_{\tilde{D}}(f^*t)). \] \end{ex}
\subsection{Push-forward and intersection with the 0-section} Let $p:Y\to X$ be a projective map in ${\operatorname{\mathbf{Sch}}}/B$ and let ${\mathcal E}$ be an oriented theory. We have the push-forward map \[ p_*:{\mathcal E}^{\operatorname{B.M.}}_{a,b}(Y)\to {\mathcal E}^{\operatorname{B.M.}}_{a,b}(X). \] Given a rank $r$ vector bundle $f:V\to Z$ with a section $s:Z\to V$, we have the maps \[ f^*:{\mathcal E}^{\operatorname{B.M.}}_{a,b}(Z)\to {\mathcal E}^{\operatorname{B.M.}}_{a,b}(V, T_{V/Z})\cong {\mathcal E}^{\operatorname{B.M.}}_{a+2r,b+r}(V) \] \[ s^!: {\mathcal E}^{\operatorname{B.M.}}_{a+2r,b+r}(V)\cong {\mathcal E}^{\operatorname{B.M.}}_{a,b}(V, T_{V/Z})\to {\mathcal E}^{\operatorname{B.M.}}_{a,b}(Z) \] with $s^!=(f^*)^{-1}$, by Lemma~\ref{lem:SectionIsos}. $f^*$ is the usual pull-back for the smooth morphism $f$ and $s^!$ is the classical ``intersection with the 0-section'' defined as the inverse of $f^*$.
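For orientation, we record a standard consequence of these definitions for oriented theories, the classical self-intersection formula for the zero-section; it is included here only as a consistency check and is not a new claim. For the zero-section $0_V:Z\to V$ of the rank $r$ bundle $V$,
\[
0_V^!\bigl(0_{V*}(\eta)\bigr)=c_r(V)\cap \eta,\qquad \eta\in {\mathcal E}^{\operatorname{B.M.}}_{a,b}(Z),
\]
with $c_r(V)\in {\mathcal E}^{2r,r}(Z)$ the top Chern class. For ${\mathcal E}=H{\mathbb Z}$ this recovers the familiar statement that pushing a cycle forward along the zero-section and then intersecting with the zero-section caps with the Euler class of $V$.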
\begin{remark} Suppose we have a closed immersion $i:Z\hookrightarrow M$ with $M$ smooth of dimension $d_M$ over $B$. The intrinsic normal cone $\mathfrak{C}_Z$ as defined by Behrend-Fantechi \cite{BF} is the quotient stack $[\mathfrak{C}_{i}/i^*T_{M/B}]$. They also define the normal sheaf ${\mathcal N}_{i}:={\rm Spec\,}_{{\mathcal O}_Z}{\operatorname{Sym}}^*{\mathcal I}_Z/{\mathcal I}_Z^2$; the surjection ${\operatorname{Sym}}^*{\mathcal I}_Z/{\mathcal I}_Z^2\to \oplus_n{\mathcal I}^n_Z/{\mathcal I}_Z^{n+1}$ defines the closed immersion $\mathfrak{C}_{i}\hookrightarrow {\mathcal N}_{i}$. This induces the closed immersion of quotient stacks $\mathfrak{C}_Z\hookrightarrow {\mathcal N}_Z:=[{\mathcal N}_{i}/i^*T_{M/B}]$.
Suppose we have a perfect obstruction theory $[\phi]$ of virtual rank $r$ on $i:Z\hookrightarrow M$, with global resolution $(F_1\to F_0)\xrightarrow{\phi_\bullet}({\mathcal I}_Z/{\mathcal I}_Z^2\to i^*\Omega_{M/B})$. We may assume that $(F_\bullet, \phi)$ is normalized. The assumption that $\phi$ is a perfect obstruction theory implies that $\phi$ induces closed immersions \[ \mathfrak{C}_Z\hookrightarrow {\mathcal N}_Z\hookrightarrow [F^1/F^0]. \] Let $\mathfrak{C}(F_\bullet)\subset {\mathcal N}(F_\bullet)\subset F^1$ be the pull-back of this sequence of closed immersions by the quotient map $F^1\to [F^1/F^0]$.
One can describe ${\mathcal N}(F_\bullet)$ explicitly as follows: We have the commutative diagram \[ \xymatrix{ F_1\ar[r]^{d_F}\ar[d]_{\phi_1}&F_0\ar[d]^{\phi_0}\\ {\mathcal I}_Z/{\mathcal I}_Z^2\ar[r]_d&i^*\Omega_{M/B} } \] Let $F:={\mathcal I}_Z/{\mathcal I}_Z^2\times_{i^*\Omega_{M/B}}F_0$. The map $(\phi_1, d_F)$ gives a surjection $\phi_{\mathcal N}:F_1\to F$; ${\mathcal N}(F_\bullet)$ is the closed subscheme ${\rm Spec\,}_{{\mathcal O}_Z}{\operatorname{Sym}}^*F$ of $F^1={\rm Spec\,}_{{\mathcal O}_Z}{\operatorname{Sym}}^*F_1$.
The virtual fundamental class $[Z,[\phi]]^\text{\it vir}_{BF}$ as defined by Behrend-Fantechi {\it loc. cit.} is the element of ${\rm CH}_r(Z)$ given by \[ [Z,[\phi]]^\text{\it vir}_{BF}:=0_{F^1}^!([\mathfrak{C}(F_\bullet)]). \]
Suppose that we have a Jouanolou cover $p_M:\tilde{M}\to M$, with pull-back $p_Z:\tilde{Z}\to Z$. An induced perfect obstruction theory $p_Z^![F_\bullet]\to ({\mathcal I}_{\tilde{Z}}/{\mathcal I}_{\tilde{Z}}^2\to i_{\tilde{Z}}^*\Omega_{\tilde{M}/B})$ is defined (see Lemma~\ref{lem:Discussion}).
Writing $(\tilde{F}_1\to \tilde{F}_0):=p_Z^![F_\bullet]$, we have $\tilde{F}_1=p_Z^*F_1$ and $\tilde{F}_0=p_Z^*F_0\oplus \Omega_{\tilde{Z}/Z}$. Thus, we have the isomorphism of quotients of $\tilde{F}_1$ \[ \tilde{F}:={\mathcal I}_{\tilde{Z}}/{\mathcal I}^2_{\tilde{Z}}\times_{i_{\tilde{Z}}^*\Omega_{\tilde{M}/B}}\tilde{F}_0\cong p_Z^*F \] which shows that ${\mathcal N}(\tilde{F}_\bullet)\subset \tilde{F}^1$ is equal to $p_Z^*{\mathcal N}(F_\bullet)$. Thus \[ \mathfrak{C}(\tilde{F}_\bullet)=p_Z^*\mathfrak{C}(F_\bullet)\subset p_Z^*F^1=\tilde{F}^1, \] which implies that \[ p_Z^*([Z,[\phi]]^\text{\it vir}_{BF})=[\tilde{Z},p_Z^![\phi]]^\text{\it vir}_{BF} \] in ${\rm CH}_{r+d}(\tilde{Z})$, where $d$ is the rank of $\Omega_{\tilde{Z}/Z}$.
Since $p_Z:\tilde{Z}\to Z$ induces an isomorphism \[ p_Z^*:{\rm CH}_r(Z)\to {\rm CH}_{r+d}(\tilde{Z}), \] we may assume that $Z$ is affine for the purpose of comparing our construction of virtual fundamental classes with that of Behrend-Fantechi.
Assuming then that $Z$ is affine, with a closed immersion $i:Z\to M$ into a smooth affine $B$-scheme $M$, we may take a reduced normalized representative $(F_1\to F_0)\to ({\mathcal I}_Z/{\mathcal I}_Z^2\to i^*\Omega_{M/B})$ of a given rank $r$ perfect obstruction theory $[\phi]$; let $r_i:=\text{rank}(F_i)$. In this case, we have $\mathfrak{C}(F_\bullet)=\mathfrak{C}_{i}\subset F^1$, and $d_\mathfrak{C}=r_0$. We have already identified our construction of the fundamental class $[\mathfrak{C}_{i}]_{H{\mathbb Z}}$ with the cycle class $[\mathfrak{C}_{i}]\in {\rm CH}_{r_0}(\mathfrak{C}_{i})$; we have also identified the push-forward map $i_{\mathfrak{C}*}:{\rm CH}_{r_0}(\mathfrak{C}_{i})\to {\rm CH}_{r_0}(F^1)$ and the intersection with the 0-section $0_{F^1}^!:{\rm CH}_{r_0}(F^1)\to {\rm CH}_{r_0-r_1}(Z)$, as defined here, with the classical ones. This gives the identity of virtual fundamental classes \[ [Z,[\phi]]^\text{\it vir}_{BF}=[Z,[\phi]]^\text{\it vir} \] in ${\rm CH}_r(Z)$.
Of course, the Behrend-Fantechi theory is defined for perfect obstruction theories on Deligne-Mumford stacks, whereas the theory presented here is limited to quasi-projective $B$-schemes for affine $B$. \end{remark}
\subsection{Gromov-Witten invariants for oriented theories} If $[\phi]$ is a virtual rank zero perfect obstruction theory on some $Z\in {\operatorname{\mathbf{Sch}}}/B$, and ${\mathcal E}$ is an oriented ring spectrum in ${\operatorname{SH}}(B)$, the virtual fundamental class lives in ${\mathcal E}^{\operatorname{B.M.}}_{0,0}(Z)={\mathcal E}^{0,0}(\pi_{Z!}(1_Z))$. If $\pi_Z:Z\to B$ is proper, we have the push-forward map \[ \pi_{Z*}:{\mathcal E}_{a,b}^{\operatorname{B.M.}}(Z)\to {\mathcal E}^{\operatorname{B.M.}}_{a,b}(B)={\mathcal E}^{-a,-b}(B), \] induced by the map $\pi_Z^*:1_B\to \pi_{Z!}(1_Z)$, giving the GW-invariant \[ \text{deg}_{\mathcal E}([Z,[\phi]]^\text{\it vir}):=\pi_{Z*}([Z,[\phi]]^\text{\it vir})\in {\mathcal E}^{0,0}(B). \] This is the classical ``degree of the virtual fundamental class'' in case ${\mathcal E}=H{\mathbb Z}$. For more general theories, perfect obstruction theories of non-zero virtual rank may also give rise to non-zero degrees: \[ \text{deg}_{\mathcal E}([Z,[\phi]]^\text{\it vir}):=\pi_{Z*}([Z,[\phi]]^\text{\it vir})\in {\mathcal E}^{\operatorname{B.M.}}_{2r,r}(B) ={\mathcal E}^{-2r,-r}(B) \] for $[\phi]$ of virtual rank $r$.
If we have a morphism $f:Z\to W$, one can twist $[Z,[\phi]]^\text{\it vir}$ by classes coming from $W$; if $Z$ is proper over $B$, pushing forward gives the descendant classes in ${\mathcal E}^{\operatorname{B.M.}}_{*,*}(B)$.
\subsection{Gromov-Witten invariants for $\operatorname{SL}$-oriented theories} We now consider theories ${\mathcal E}$ which are not oriented in the sense of the previous section, but are rather $\operatorname{SL}$-oriented. This means that, given a perfect complex $E_\bullet$ on some $Z\in {\operatorname{\mathbf{Sch}}}/B$ of virtual rank $r$ and virtual determinant $\operatorname{det} E_\bullet$, there is a canonical isomorphism \[ \lambda_{E_\bullet}:\Sigma^{{\mathbb V}(E_\bullet)-{\mathbb V}({\mathcal O}^r_Z)}\pi_Z^*{\mathcal E}\cong \Sigma^{{\mathbb V}(\operatorname{det} E_\bullet)-{\mathbb V}({\mathcal O}_Z)}\pi_Z^*{\mathcal E}. \] Moreover, for $L$ a line bundle on $Z$, we have a canonical isomorphism \[ \Sigma^{{\mathbb V}(L^{\otimes 2})-{\mathbb V}({\mathcal O}_Z)}\pi_Z^*{\mathcal E}\cong \pi_Z^*{\mathcal E}. \] This gives us the isomorphisms \[ {\mathcal E}^{\operatorname{B.M.}}_{a,b}(Z, {\mathbb V}(E_\bullet))\cong {\mathcal E}^{\operatorname{B.M.}}_{a+2r-2,b+r-1}(Z, {\mathbb V}(\operatorname{det}(E_\bullet))) \] and \[ {\mathcal E}^{\operatorname{B.M.}}_{a,b}(Z)\cong {\mathcal E}^{\operatorname{B.M.}}_{a-2,b-1}(Z, L^{\otimes 2}). \]
This also implies that, if $\beta:V\to V$ is an automorphism of a vector bundle $V\to Z$, the induced map \[ \Sigma^\beta: {\mathcal E}^{\operatorname{B.M.}}_{a,b}(Z,V)\to {\mathcal E}^{\operatorname{B.M.}}_{a,b}(Z,V) \] is given by multiplication by the automorphism $\langle\operatorname{det}\beta\rangle\in {\rm End}_{{\operatorname{SH}}(Z)}(1_Z)$.
Suppose we have a rank $r$ perfect obstruction theory $([\phi], E_\bullet)$ on some $Z\in {\operatorname{\mathbf{Sch}}}/B$, with $\pi_Z:Z\to B$ projective over $B$. The virtual fundamental class $[Z,[\phi]]^\text{\it vir}$ lives in ${\mathcal E}^{\operatorname{B.M.}}(Z, {\mathbb V}(E_\bullet))\cong {\mathcal E}^{\operatorname{B.M.}}_{2r,r}(Z, {\mathbb V}(\operatorname{det} E_\bullet)-{\mathbb V}({\mathcal O}_Z))$. Given an isomorphism \[ \alpha:\operatorname{det} E_\bullet\to L^{\otimes 2} \] for some line bundle $L$ on $Z$, we have the isomorphism \[ \lambda_\alpha:{\mathcal E}^{\operatorname{B.M.}}_{2r,r}(Z, {\mathbb V}(\operatorname{det} E_\bullet)-{\mathbb V}({\mathcal O}_Z))\xrightarrow{\sim} {\mathcal E}^{\operatorname{B.M.}}_{2r,r}(Z), \] so we may push $\lambda_\alpha([Z,[\phi]]^\text{\it vir})$ forward by $\pi_Z$ to give \[ \text{deg}_{\mathcal E}([Z,[\phi]]^\text{\it vir}, \alpha):=\pi_{Z*}(\lambda_\alpha([Z,[\phi]]^\text{\it vir}))\in {\mathcal E}^{\operatorname{B.M.}}_{2r,r}(B). \]
\begin{ex} We take $B={\rm Spec\,} k$, $k$ a perfect field, $G=\{{\operatorname{\rm Id}}\}$ and ${\mathcal E}=H_0(\mathbb{S}_k)$. We have the sheaf of graded rings on ${\mathbf{Sm}}_k$ given by {\em Milnor-Witt $K$-theory}, ${\mathcal K}^{MW}_*$ (see \cite{MorelA1} for details). The 0-th homology $H_0(\mathbb{S}_k)$ with respect to the homotopy $t$-structure on ${\operatorname{SH}}(k)$ represents the theory of Milnor-Witt $K$-theory: for $X\in {\mathbf{Sm}}/k$, there is a canonical isomorphism \[ H_0(\mathbb{S}_k)^{a+b,b}(X)\cong H^a_{Nis}(X, {\mathcal K}^{MW}_b). \]
$H_0(\mathbb{S}_k)$ is $\operatorname{SL}$-oriented, giving the canonical isomorphism \[ H_0(\mathbb{S}_k)^{\operatorname{B.M.}}_{a,b}(Z, v-{\mathbb V}({\mathcal O}_Z^r))\cong H_0(\mathbb{S}_k)_{a, b}^{\operatorname{B.M.}}(Z, \operatorname{det} v-{\mathbb V}({\mathcal O}_Z)) \] for $v$ of virtual rank $r$ on $Z\in {\operatorname{\mathbf{Sch}}}/B$. This gives the isomorphism \[ H_0(\mathbb{S}_k)^{\operatorname{B.M.}}_{2n,n}(Z, {\mathbb V}(E_\bullet))\cong \widetilde{{\rm CH}}_n(Z,\operatorname{det} E_\bullet) \] for $Z\in {\operatorname{\mathbf{Sch}}}/k$, $E_\bullet\in D^\text{\it perf}(Z)$, where $\widetilde{{\rm CH}}_*$ is the Chow-Witt theory of Barge-Morel \cite{BargeMorel} and Fasel \cite{FaselCW} (also called the oriented Chow groups). For example, we have $\widetilde{{\rm CH}}_0({\rm Spec\,} k)={\operatorname{GW}}(k)$, the Grothendieck-Witt ring of non-degenerate symmetric bilinear forms over $k$.
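To fix ideas about the target ${\operatorname{GW}}(k)$, we recall two standard computations (recorded here only for orientation): rank gives an isomorphism ${\operatorname{GW}}(\mathbb{C})\cong \mathbb{Z}$, while ${\operatorname{GW}}(\mathbb{R})$ is free abelian of rank two on the classes $\langle 1\rangle$ and $\langle -1\rangle$, detected by rank and signature. Thus a quadratically refined degree valued in ${\operatorname{GW}}(\mathbb{R})$ simultaneously records a count of points and a signed count.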
For $i:Z\hookrightarrow M$, $M$ smooth of dimension $d_M$ over $k$, and $(\phi, E_\bullet)$ a rank $r$ perfect obstruction theory, this gives us the fundamental class and virtual fundamental class \begin{align*} &[\mathfrak{C}^{st}_Z]\in \widetilde{{\rm CH}}_{d_M}(\mathfrak{C}_{i}, \sigma_i^*\omega_{M/k})\\ &[Z, [\phi]]^\text{\it vir}\in H_0(\mathbb{S}_k)^{\operatorname{B.M.}}_{2r, r}(Z, {\mathbb V}(\operatorname{det} E_\bullet)-{\mathbb V}({\mathcal O}_Z)) = \widetilde{{\rm CH}}_r(Z, \operatorname{det} E_\bullet). \end{align*}
Thus, if we have a rank 0 perfect obstruction theory $(E_\bullet, [\phi])$ on some $Z\in {\operatorname{\mathbf{Sch}}}/k$, with $Z$ projective over $k$, and an isomorphism $\alpha:\operatorname{det} E_\bullet\xrightarrow{\sim} L^{\otimes 2}$ for some line bundle $L$ on $Z$, we have \[ \text{deg}_{H_0(\mathbb{S}_k)}([Z,[\phi]]^\text{\it vir}, \alpha):=\pi_{Z*}(\lambda_\alpha([Z,[\phi]]^\text{\it vir}))\in
H_0(\mathbb{S}_k)^{\operatorname{B.M.}}_{0,0}({\rm Spec\,} k)=K^{MW}_0(k)={\operatorname{GW}}(k). \]
More generally, if $E_\bullet$ has virtual rank $r$, and we have a morphism $f:Z\to W$ with $W\in {\mathbf{Sm}}/k$, a line bundle $L'$ on $W$, a line bundle $L$ on $Z$ and an isomorphism \[ \alpha: \operatorname{det} E_\bullet\otimes f^*L'\to L^{\otimes 2}, \] then a class \[ \beta\in H^r(W, {\mathcal K}^{MW}_{r+s}(L'))=H_0(\mathbb{S}_k)^{2r+s, r+s}(W, {\mathbb V}({\mathcal O}_W)-{\mathbb V}(L')) \] gives via the cap product \begin{multline*} H_0(\mathbb{S}_k)^{\operatorname{B.M.}}_{2r, r}(Z, {\mathbb V}(\operatorname{det} E_\bullet)-{\mathbb V}({\mathcal O}_Z))\times H_0(\mathbb{S}_k)^{2r+s, r+s}(W, {\mathbb V}({\mathcal O}_W)-{\mathbb V}(L'))\\ \xrightarrow{-\cap f^*(-)} H_0(\mathbb{S}_k)^{\operatorname{B.M.}}_{-s, -s}(Z, {\mathbb V}(\operatorname{det} E_\bullet\otimes f^*L')-{\mathbb V}({\mathcal O}_Z)) \end{multline*} the element \[ \lambda_\alpha([Z,[\phi]]^\text{\it vir}\cap f^*\beta)\in H_0(\mathbb{S}_k)^{\operatorname{B.M.}}_{-s,-s}(Z), \] and we can define the descendant class \begin{multline*} \text{deg}^s_{H_0(\mathbb{S}_k)}([Z,[\phi]]^\text{\it vir}\cap f^*\beta,\alpha):= \pi_{Z*}(\lambda_\alpha([Z,[\phi]]^\text{\it vir}\cap f^*\beta))\\ \in H_0(\mathbb{S}_k)^{\operatorname{B.M.}}_{-s, -s}({\rm Spec\,} k)=H_0(\mathbb{S}_k)^{s, s}({\rm Spec\,} k)=K^{MW}_s(k). \end{multline*}
There is a universal $\operatorname{SL}$-oriented theory, ${\operatorname{MSL}}$, with ${\operatorname{MSL}}_n$ the Thom space of the universal bundle $\tilde{E}_n\to \operatorname{BSL}_n$. Just as for ${\operatorname{MGL}}$, ${\operatorname{MSL}}^{-2r, -r}(k)$ is non-zero for all $r\ge0$, so we have a non-trivial target for the degree map for perfect obstruction theories of all non-negative ranks whose determinant bundle is trivialized up to a square. \end{ex}
\section{Generating ideals, \hbox{${\mathbb A}^1$}-local degree and critical loci} \label{sec:LocalDeg} The most elementary type of obstruction theory is one that is already normalized and reduced. Fix a scheme $B$. For $X\to B$ a $B$-scheme we let $\pi_X:X\to B$ denote the structure morphism. Let $M\to B$ be a smooth $B$-scheme and $i:Z\hookrightarrow M$ a closed subscheme. This gives us the cone $\mathfrak{C}_{i}$ and the fundamental class $[\mathfrak{C}_i]\in \mathbb{S}_B^{\operatorname{B.M.}}(\mathfrak{C}_i,\sigma_i^*T_M)$.
We suppose we have a locally free sheaf ${\mathcal V}$ on $M$ and an ${\mathcal O}_Z$-linear surjective map $F:{\mathcal V}^\vee\otimes{\mathcal O}_Z\to {\mathcal I}_Z/{\mathcal I}_Z^2$; if $M$ is quasi-projective, this always exists. We let $\partial_\phi:{\mathcal V}^\vee\otimes{\mathcal O}_Z\to \Omega_{M/B}\otimes{\mathcal O}_Z$ be the map $d\circ F$, with $d:{\mathcal I}_Z/{\mathcal I}_Z^2\to \Omega_{M/B}\otimes{\mathcal O}_Z$ induced by the canonical derivation $d:{\mathcal O}_M\to \Omega_{M/B}$. This gives us the perfect obstruction theory \[ \phi=(F, {\operatorname{\rm Id}}):({\mathcal V}^\vee\otimes{\mathcal O}_Z\xrightarrow{\partial_\phi} \Omega_{M/B}\otimes{\mathcal O}_Z)\to ({\mathcal I}_Z/{\mathcal I}_Z^2\xrightarrow{d} \Omega_{M/B}\otimes{\mathcal O}_Z) \] on $Z$, which is already reduced and normalized.
The map $F$ induces the closed immersion of $Z$-schemes $i_\phi:\mathfrak{C}_i\hookrightarrow i^*V$, where $V\to M$ is the vector bundle ${\mathbb V}({\mathcal V}^\vee)$. The associated virtual fundamental class is then \[ [Z, [\phi]]^\text{\it vir}:=0_{i^*V}^!i_{\phi*}([\mathfrak{C}_i])\in \mathbb{S}_B^{\operatorname{B.M.}}(Z, i^*T_M-i^*V). \] Since $\phi$ is determined by $F$, we write this as $[Z, F]^\text{\it vir}$.
A choice of an isomorphism $\psi:T_M\xrightarrow{\sim} V$ (if one exists) simplifies this to a virtual fundamental class $[Z, F]^\text{\it vir}\in \mathbb{S}_B^{\operatorname{B.M.}}(Z)$. If we pass to an $\operatorname{SL}$-oriented theory ${\mathcal E}$ and $V$ and $T_M$ have the same rank, then we only need an isomorphism $\rho: \operatorname{det}{\mathcal V}\otimes \omega_{M/B}\otimes{\mathcal O}_Z\to {\mathcal L}^{\otimes 2}$ for some invertible sheaf ${\mathcal L}$ on $Z$ to reduce to $[Z, F]^\text{\it vir}_{\mathcal E}\in {\mathcal E}^{\operatorname{B.M.}}(Z)$. For example, if ${\mathcal V}=\Omega_{M/B}$, we have the identity $\operatorname{det} {\mathcal V}\otimes\omega_{M/B}=\omega_{M/B}^{\otimes 2}$. If $M$ has even dimension $2m$ and ${\mathcal V}=\Omega_{M/B}\otimes{\mathcal L}$ for some invertible sheaf ${\mathcal L}$, then we have the canonical isomorphism $\operatorname{det}{\mathcal V}\otimes \omega_{M/B}\cong(\omega_{M/B}\otimes{\mathcal L}^{\otimes m})^{\otimes 2}$.
\begin{ex} Let $R$ be a commutative ring, let $B={\rm Spec\,} R$, and let $Z\subset {\mathbb A}^n_B$ be a closed subscheme. Let $I\subset R[X_1, \ldots, X_n]$ be the ideal of $Z$ and choose $F_1,\ldots, F_n\in I$ that generate $I/I^2$ as an ${\mathcal O}_Z$-module.
We have the ${\mathcal O}_{Z}$-linear surjection $F:\Omega^\vee_{{\mathbb A}^n_B/B}\otimes{\mathcal O}_{Z}\to I/I^2$ defined by sending the basis element $\partial/\partial X_i$ to the image of $F_i$ in $I/I^2$. Using the trivialization of $T_{{\mathbb A}^n_B/B}$ and $T^\vee_{{\mathbb A}^n_B/B}$ by the bases $\{\partial/\partial X_i\}_i$ and $\{dX_i\}_i$, we have the canonical isomorphism $\Sigma^{i^*T_{{\mathbb A}^n_B/B}-i^*T^\vee_{{\mathbb A}^n_B/B}}\cong {\operatorname{\rm Id}}$, giving us the virtual fundamental class $[Z, F]^\text{\it vir}\in \mathbb{S}_B^{\operatorname{B.M.}}(Z)= \mathbb{S}_B^{0,0}(Z_{\operatorname{B.M.}})$. Note that $[Z, F]^\text{\it vir}$ depends only on the images of the $F_i$ modulo $I^2$. \end{ex}
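To illustrate the last point of the example, consider the following special case (included only for illustration). Take $B={\rm Spec\,} k$, $n=1$ and $Z=V(x^2)\subset {\mathbb A}^1_k$, so $I=(x^2)$. The generators
\[
F_1=x^2\quad\text{and}\quad F_1'=x^2(1+x^2)=x^2+x^4
\]
differ by $x^4\in I^2$, hence have the same image in $I/I^2$ and determine the same virtual fundamental class $[Z, F]^\text{\it vir}=[Z, F']^\text{\it vir}$.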
\begin{definition}[${\mathbb A}^1$-local degree \hbox{\cite[Definition 10]{KW}}]\label{def:A1LocDeg} Let $B={\rm Spec\,} A$ be an affine scheme and let $g=(g_1,\ldots, g_n):{\mathbb A}^n_B\to {\mathbb A}^n_B$ be a polynomial map, $g_i\in A[X_1,\ldots, X_n]$. Suppose that $g^{-1}(0_B)_{\rm red}$ is a disjoint union, $g^{-1}(0_B)_{\rm red}=x\amalg x'$, with $g_{|x}:x\to B$ finite. Choose an open neighborhood $U\subset {\mathbb A}^n$ of $x$ with $U\cap g^{-1}(0_B)=x$. The {\em ${\mathbb A}^1$-local degree} of $g$ along $x$, $\delta_{{\mathbb A}^1}(g, x)$, is the element of $\mathbb{S}_B^{0,0}(B)={\rm End}_{{\operatorname{SH}}(B)}(\mathbb{S}_B)$ given by stabilizing the composition
\[
S^{2n,n}_B={\mathbb P}^n_B/{\mathbb P}^{n-1}_B\to {\mathbb P}^n_B/{\mathbb P}^n_B\setminus x \xleftarrow{\sim}U/U\setminus x\xrightarrow{g_{|(U,U\setminus x)}} {\mathbb P}^n_B/{\mathbb P}^n_B\setminus\{0_B\}\xleftarrow{\sim} {\mathbb P}^n_B/{\mathbb P}^{n-1}_B=S^{2n,n}_B.
\]
The map $U/U\setminus x\to {\mathbb P}^n_B/{\mathbb P}^n_B\setminus x$ is an isomorphism by excision and the map ${\mathbb P}^n_B/{\mathbb P}^{n-1}_B\to {\mathbb P}^n_B/{\mathbb P}^n_B\setminus\{0_B\}$ is an isomorphism by homotopy invariance. It is easy to see that this composition is independent of the choice of $U$. \end{definition}
\begin{remarks}\label{rem:LocalDegAdditivity} Let $g:{\mathbb A}^n_B\to {\mathbb A}^n_B$ and $x\subset g^{-1}(0_B)\subset {\mathbb A}^n_B$ be as in Definition~\ref{def:A1LocDeg}.\\[5pt] 1. Suppose $x$ is a disjoint union $x=x_1\amalg x_2$. Then
\[
\delta_{{\mathbb A}^1}(g, x)=\delta_{{\mathbb A}^1}(g,x_1)+\delta_{{\mathbb A}^1}(g, x_2).
\]
This follows by considering the Nisnevich cover
\[
U\setminus x_2\amalg U\setminus x_1\to U,
\]
where $U\subset {\mathbb A}^n$ is an open subscheme with $g^{-1}(0_B)\cap U= x$.\\[3pt] 2. Let $f:B'\to B$ be a morphism, so that $f^*(\delta_{{\mathbb A}^1}(g, x))\in \mathbb{S}_B^{0,0}(B')$, and let $g':{\mathbb A}^n_{B'}\to {\mathbb A}^n_{B'}$ be the pull-back morphism, with $x':=f^{-1}(x)\subset g^{\prime-1}(0_{B'})$. Then
\[
f^*(\delta_{{\mathbb A}^1}(g, x))=\delta_{{\mathbb A}^1}(g', x').
\]
Indeed, if we take $U\subset {\mathbb A}^n_B$ with $g^{-1}(0_B)_{\rm red}\cap U=x$, then letting $U'=U\times_BB'$, we have $g^{\prime-1}(0_{B'})_{\rm red}\cap U'=x'$, and the result follows by applying the base-change isomorphism $Ex(\Delta^*_{\#})$ to the sequence of maps defining $\delta_{{\mathbb A}^1}(g, x)$ and $\delta_{{\mathbb A}^1}(g', x')$.\\[2pt] 3. In case $B={\rm Spec\,} k$, $k$ a perfect field, Morel's theorem identifies $\mathbb{S}_k^{0,0}({\rm Spec\,} k)$ with ${\operatorname{GW}}(k)$, so we have $\delta_{{\mathbb A}^1}(g, x)\in {\operatorname{GW}}(k)$. \end{remarks}
\begin{remark}\label{rem:Automorphism} Let $K$ be a perfect field, $f:{\mathbb A}^n_K\to {\mathbb A}^n_K$ a linear automorphism and $p:{\mathbb A}^n_K\to {\rm Spec\,} K$ the projection. Then the map $f_*:p_!(1_{{\mathbb A}^n})\to p_!(1_{{\mathbb A}^n})$ is multiplication by the rank one quadratic form $\langle\operatorname{det} f\rangle$. Indeed, we may use a matrix representation for $f$. Since $\operatorname{SL}_n(K)$ is generated by elementary matrices, $f$ is ${\mathbb A}^1$-homotopic to the map $(x_1,\ldots, x_n)\mapsto (ux_1,x_2,\ldots, x_n)$ with $u=\operatorname{det} f$.
Via the canonical isomorphism $p_!=p_\#\circ\Sigma^{-T_{{\mathbb A}^n/K}}$, we have $p_!(1_{{\mathbb A}^n})\cong \Sigma^n_{{\mathbb P}^1}(1_{{\rm Spec\,} K})$, with the action of $f$ going over to $\Sigma^{n-1}_{{\mathbb P}^1}(p_{{\mathbb A}^1!}(\operatorname{det} f))$, with $\operatorname{det} f:{\mathbb A}^1_K\to {\mathbb A}^1_K$ the multiplication map. This reduces us to the case $n=1$. In this case, via the isomorphism $p_!(1_{{\mathbb A}^1})\cong \Sigma_{{\mathbb P}^1}(1_{{\rm Spec\,} K})$, multiplication by $u=\operatorname{det} f$ goes over to the map ${\mathbb P}^1\to {\mathbb P}^1$ sending $[x_0:x_1]$ to $[x_0:ux_1]$. Morel's isomorphism ${\operatorname{GW}}(K)\to {\rm End}_{{\operatorname{SH}}(K)}(1_{{\rm Spec\,} K})$ sends the quadratic form $\langle u\rangle$ to the stable version of this latter map. \end{remark}
\begin{lemma}\label{lem:VIrDegEt} Let $k$ be a perfect field, let $g:{\mathbb A}^n_k\to {\mathbb A}^n_k$ be a polynomial map, $g=(g_1,\ldots, g_n)$, and suppose we have closed points $x_1,\ldots, x_r$ of ${\mathbb A}^n_k$ which are all isolated points of $g^{-1}(0)$. Let $Z= \{x_1,\ldots, x_r\}$ and suppose that the map $g$ is \'etale in a neighborhood of $Z$. Let $\pi_Z:Z\to {\rm Spec\,} k$ be the projection, giving the push-forward map $\pi_{Z*}:\mathbb{S}_k^{\operatorname{B.M.}}(Z)\to \mathbb{S}_k^{\operatorname{B.M.}}({\rm Spec\,} k)=\mathbb{S}_k^{0,0}({\rm Spec\,} k)$. Then
\[
\pi_{Z*}([Z,g]^\text{\it vir})=\delta_{{\mathbb A}^1}(g, Z).
\]
\end{lemma}
\begin{proof} We use the standard basis $dX_1,\ldots, dX_n$ for $\Omega_{{\mathbb A}^n/k}$. This gives us the canonical isomorphisms
\[
\mathfrak{C}_{Z\subset{\mathbb A}^n}\cong {\mathbb A}^n_Z,\quad T_{{\mathbb A}^n/k}^\vee\otimes{\mathcal O}_Z\cong {\mathbb A}^n_Z
\]
via which the isomorphism $i_g:\mathfrak{C}_{Z\subset{\mathbb A}^n}\to T_{{\mathbb A}^n/k}^\vee\otimes{\mathcal O}_Z$ becomes the ${\mathcal O}_Z$-linear map ${\mathbb A}^n_Z\to {\mathbb A}^n_Z$ with matrix the Jacobian matrix ${\mathcal J}(g)=(\partial g_i/\partial X_j)$ restricted to $Z$. Letting $J(g)=\operatorname{det}{\mathcal J}(g)$, it follows from Remark~\ref{rem:Automorphism} and the definition of $[Z,g]^\text{\it vir}$ that $[Z,g]^\text{\it vir}$ is the rank one quadratic form $\langle J(g)(Z)\rangle\in {\operatorname{GW}}(Z)=\mathbb{S}_k^{0,0}(Z)$, that is, at $x_i\in Z$, $[Z,g]^\text{\it vir}$ takes the value $\langle J(g)(x_i)\rangle\in {\operatorname{GW}}(k(x_i))=\mathbb{S}_k^{0,0}(x_i)$.
Since $Z\to {\rm Spec\,} k$ is \'etale, we have $Z_{\operatorname{B.M.}}=Z$ and the map $\pi_{Z*}:\mathbb{S}_k^{0,0}(Z)\to \mathbb{S}_k^{0,0}({\rm Spec\,} k)$ is identified with the trace map
\[
{\rm Tr}_{k[Z]/k}=\sum_i{\rm Tr}_{k(x_i)/k}:\prod_{i=1}^r{\operatorname{GW}}(k(x_i))\to {\operatorname{GW}}(k)
\]
(see \cite[Lemma 5.3]{HoyoisTrace}). By \cite[Proposition 14]{KW}, $\delta_{{\mathbb A}^1}(g, Z)={\rm Tr}_{k[Z]/k}\langle J(g)(Z)\rangle$, thus $\pi_{Z*}([Z,g]^\text{\it vir})=\delta_{{\mathbb A}^1}(g, Z)$, as claimed. \end{proof}
We can remove the condition that $g$ is \'etale along $Z$ by a deformation argument, taken from \cite{KW}.
\begin{proposition}\label{prop:VirDeg} Let $k$ be a perfect field of characteristic different from 2. Let $g=(g_1,\ldots, g_n):{\mathbb A}^n_k\to{\mathbb A}^n_k$ be a polynomial map. Suppose that $g^{-1}(0)$ is a disjoint union of closed subschemes $Z\amalg Z'$ with $Z$ of pure dimension zero and write $Z_{\rm red}=z$. Then $\pi_{Z*}([Z, g]^\text{\it vir})\in \mathbb{S}_k^{0,0}(k)={\operatorname{GW}}(k)$ is the ${\mathbb A}^1$-local degree $\delta_{{\mathbb A}^1}(g, z)$. \end{proposition}
\begin{proof} Since the map ${\operatorname{GW}}(k)\to {\operatorname{GW}}(k')$ is injective for $k\subset k'$ a field extension that is the union over a tower of finite extensions $k\subset k_\alpha$ of odd degree, we may assume that $k$ is an infinite field.
Let $\mathfrak{m}\subset k[X_1,\ldots, X_n]$ be the ideal of $z$. Then $Z$ is a complete intersection component of the subscheme of ${\mathbb A}^n_k$ defined by $(g_1,\ldots, g_n)$. Moreover, the cone $\mathfrak{C}_{Z\subset{\mathbb A}^n}$, the fundamental class $[\mathfrak{C}_{Z\subset{\mathbb A}^n}]$ and the map $\phi_{g}$ are unchanged if we replace the $g_i$ with polynomials $g_i'\in k[X_1,\ldots, X_n]$ such that $g_i'-g_i\in \mathfrak{m}^b$ for $b\gg 0$. By \cite[Lemma 15(3)]{KW}, the same holds for the ${\mathbb A}^1$-local degree $\delta_{{\mathbb A}^1}(g, z)$.
Adding to each $g_i$ a suitably general $h_i\in (X_1,\ldots, X_n)^b$ for sufficiently high $b$, we may assume that each of the $g_i$ has the same degree $d$ and that the map $g$ extends to a morphism $\bar{g}:{\mathbb P}^n_k\to {\mathbb P}^n_k$ satisfying \begin{equation}\label{eqn:Assumption18} \vbox{ \begin{enumerate} \item[(1)] $\bar{g}$ is finite, flat and of degree prime to the characteristic of $k$. \item[(2)] $\bar{g}$ is \'etale at each point of $\bar{g}^{-1}(0)\setminus Z$. \item[(3)] $\bar{g}^{-1}({\mathbb A}^n)\subset {\mathbb A}^n$. \end{enumerate} } \end{equation} This is proven in \cite[Lemmas 19-21, Proposition 22]{KW} under the assumption that $Z$ is supported at 0, but the same proof works in the more general case. We construct a morphism over ${\mathbb A}^1$, ${\mathcal G}:{\mathbb P}^n_{{\mathbb A}^1}\to {\mathbb P}^n_{{\mathbb A}^1}$, satisfying \eqref{eqn:Assumption18}(1) and in addition \begin{enumerate} \item[(2$'$)] There is an open subset $V\subset {\mathbb A}^1$ containing $1$ such that ${\mathcal G}$ is \'etale over a neighborhood of ${\mathcal G}^{-1}(0\times V)$. \item[(3$'$)] ${\mathcal G}^{-1}({\mathbb A}^n\times {\mathbb A}^1)\subset {\mathbb A}^n\times{\mathbb A}^1$. \item[(4$'$)] Letting ${\mathcal G}_\lambda:{\mathbb P}^n_k\to {\mathbb P}^n_k$ be the pull-back of ${\mathcal G}$ over $\lambda\in {\mathbb A}^1(k)$, we have ${\mathcal G}_0=\bar{g}$. \end{enumerate} Indeed, since $\bar{g}$ is finite and of degree prime to the characteristic, $\bar{g}$ is \'etale over a dense open subset $U\subset {\mathbb P}^n_k$. Since $k$ is infinite, there is a $k$-point $u\in U$, and thus $\bar{g}$ is \'etale over a neighborhood of $\bar{g}^{-1}(u)$.
Let $\phi:{\mathbb A}^1\to {\operatorname{Aut}}{\mathbb P}^n_k$ be the morphism sending $t$ to translation by $tu$ (in the Euclidean subgroup of affine linear automorphisms of ${\mathbb A}^n$, embedded as a subgroup of ${\operatorname{Aut}}({\mathbb P}^n)$ in the usual way), and define ${\mathcal G}$ by ${\mathcal G}(x,t)=\phi(-t)(\bar{g}(x))$.
Let ${\mathcal Z}\subset {\mathbb A}^n\times{\mathbb A}^1$ be the closed subscheme ${\mathcal G}^{-1}(0\times{\mathbb A}^1)$. Then $\pi_{\mathcal Z}:{\mathcal Z}\to {\mathbb A}^1$ is finite and flat with fiber over 0 the disjoint union of the two closed subschemes $Z\amalg Z'$. Moreover, $Z'$ is reduced and the map $g$ is \'etale on a neighborhood of $Z'$. Let $\tilde{{\mathcal G}}={\mathcal G}_{|{\mathbb A}^n\times{\mathbb A}^1}:{\mathbb A}^n\times{\mathbb A}^1\to {\mathbb A}^n\times{\mathbb A}^1$, and let $\tilde{{\mathcal G}}_\lambda$ be the restriction of $\tilde{{\mathcal G}}$ over $\lambda\in {\mathbb A}^1(k)$, so $\tilde{{\mathcal G}}_0=g$. Similarly, let ${\mathcal Z}_\lambda$ be the fiber of ${\mathcal Z}$ over $\lambda$.

Replacing ${\operatorname{SH}}(k)$ with ${\operatorname{SH}}({\mathbb A}^1_k)$, we have the virtual fundamental class
\[
[{\mathcal Z}, \tilde{{\mathcal G}}]^\text{\it vir}\in {\rm Hom}_{{\operatorname{SH}}({\mathbb A}^1_k)}(\pi_{{\mathcal Z}!}(1_{\mathcal Z}), \mathbb{S}_{{\mathbb A}^1}),
\]
its push-forward $\pi_{{\mathcal Z}*}([{\mathcal Z}, \tilde{{\mathcal G}}]^\text{\it vir})\in {\rm End}_{{\operatorname{SH}}({\mathbb A}^1_k)}(\mathbb{S}_{{\mathbb A}^1})$ and the ${\mathbb A}^1$-local degree of $\tilde{{\mathcal G}}$, $\delta_{{\mathbb A}^1}(\tilde{{\mathcal G}}, {\mathcal Z}_{\rm red})\in {\rm End}_{{\operatorname{SH}}({\mathbb A}^1_k)}(\mathbb{S}_{{\mathbb A}^1})$. Since ${\mathcal Z}$ is flat over ${\mathbb A}^1$, we have for each $\lambda\in {\mathbb A}^1(k)$, with inclusion $i_\lambda:{\rm Spec\,} k\to {\mathbb A}^1$, the identity
\[
i_\lambda^*(\pi_{{\mathcal Z}*}([{\mathcal Z}, \tilde{{\mathcal G}}]^\text{\it vir}))= \pi_{{\mathcal Z}_\lambda*}([{\mathcal Z}_\lambda, \tilde{{\mathcal G}}_\lambda]^\text{\it vir})
\]
(see Proposition~\ref{prop:basechange}). Similarly,
\[
i_\lambda^*(\delta_{{\mathbb A}^1}(\tilde{{\mathcal G}}, {\mathcal Z}_{\rm red}))=\delta_{{\mathbb A}^1}( \tilde{{\mathcal G}}_\lambda, {\mathcal Z}_{\lambda{\rm red}}).
\]
On the other hand, for $p:{\mathbb A}^1_k\to {\rm Spec\,} k$ the projection, we have
\[
{\rm End}_{{\operatorname{SH}}({\mathbb A}^1_k)}(\mathbb{S}_{{\mathbb A}^1})={\rm Hom}_{{\operatorname{SH}}({\mathbb A}^1)}(1_{{\mathbb A}^1}, p^*(1_k))= {\rm Hom}_{{\operatorname{SH}}(k)}(p_\#1_{{\mathbb A}^1}, 1_k)=\mathbb{S}^{0,0}_k({\mathbb A}^1_k),
\]
so by homotopy invariance and Remark~\ref{rem:LocalDegAdditivity} we have \begin{align*} &\pi_{{\mathcal Z}_1*}([{\mathcal Z}_1, \tilde{{\mathcal G}}_1]^\text{\it vir})=\pi_{{\mathcal Z}_0*}([{\mathcal Z}_0, \tilde{{\mathcal G}}_0]^\text{\it vir})=\pi_{Z*}([Z, g]^\text{\it vir})+\pi_{Z'*}([Z', g]^\text{\it vir}),\\ &\delta_{{\mathbb A}^1}(\tilde{{\mathcal G}}_1, {\mathcal Z}_{1{\rm red}})=\delta_{{\mathbb A}^1}(\tilde{{\mathcal G}}_0, {\mathcal Z}_{0{\rm red}})= \delta_{{\mathbb A}^1}(g, Z_{\rm red})+\delta_{{\mathbb A}^1}(g, Z'). \end{align*} By Lemma~\ref{lem:VIrDegEt}, we have
\[
\pi_{{\mathcal Z}_1*}([{\mathcal Z}_1, \tilde{{\mathcal G}}_1]^\text{\it vir})=\delta_{{\mathbb A}^1}(\tilde{{\mathcal G}}_1, {\mathcal Z}_{1{\rm red}})\text{ and }\pi_{Z'*}([Z', g]^\text{\it vir})=\delta_{{\mathbb A}^1}(g, Z'),
\]
so
\[
\pi_{Z*}([Z, g]^\text{\it vir})=\delta_{{\mathbb A}^1}(g, Z_{\rm red}).
\]
\end{proof}
\begin{remark} Proposition~\ref{prop:VirDeg} deals with the trivial case of a virtual fundamental class, namely, the case of a local complete intersection. In the classical case with values in the Chow groups, the virtual fundamental class is just the cycle class associated to the local complete intersection, in other words, the result given by classical intersection theory. The refined version of this trivial case is still interesting, as it points out how the classical intersection multiplicity is replaced by the ${\mathbb A}^1$-local degree. \end{remark}
As an example of the above construction we have the virtual fundamental class of the critical locus of a function $f:M\to {\mathbb A}^1$, $M$ a smooth $k$-scheme. The critical locus $Z$ of $f$ is simply the 0-subscheme of the section $df$ of $\Omega_{M/k}$.
Taking the Hessian matrix of $f$ gives the globally defined morphism of sheaves
\[
H:{\mathcal O}_Z\to \mathcal{H}om(\Omega^{\vee}_{M/k}\otimes{\mathcal O}_Z, \Omega_{M/k}\otimes{\mathcal O}_Z).
\]
In a coordinate neighborhood with coordinates $x_1,\ldots, x_n$, $H(1)$ is the map sending $\partial/\partial x_i$ to $\sum_j (\partial^2f/\partial x_i\partial x_j)\, dx_j$. We have as well the commutative diagram of sheaves on $Z$
\[
\xymatrix{ \Omega^{\vee}_{M/k}\otimes{\mathcal O}_Z\ar[r]^{H}\ar[d]_{\text{\it ev}_{df}}&\Omega_{M/k}\otimes{\mathcal O}_Z\ar@{=}[d]\\ {\mathcal I}_Z/{\mathcal I}_Z^2\ar[r]_d&\Omega_{M/k}\otimes{\mathcal O}_Z }
\]
where $\text{\it ev}_{df}$ is the map evaluating a vector field on $df$, giving us a perfect obstruction theory on $Z$.
Clearly $[\Omega^{\vee}_{M/k}\otimes{\mathcal O}_Z\xrightarrow{H} \Omega_{M/k}\otimes{\mathcal O}_Z]$ has virtual rank 0 and virtual determinant $\omega_{M/k}^{\otimes 2}$. Thus, this perfect obstruction theory gives us a virtual fundamental class
\[
[Z, \partial_f]^\text{\it vir}\in {\mathcal E}^{\operatorname{B.M.}}_{0,0}(Z)
\]
for any cohomology theory ${\mathcal E}\in {\operatorname{SH}}(k)$. If in addition $Z$ is projective over $k$, and ${\mathcal E}$ is $\operatorname{SL}$-oriented, then we have the push-forward map, giving
\[
\text{deg}_{\mathcal E}([Z, \partial_f]^\text{\it vir}):=\pi_{Z*}([Z, \partial_f]^\text{\it vir})\in {\mathcal E}^{0,0}({\rm Spec\,} k).
\]
For instance, we may take ${\mathcal E}$ to be hermitian $K$-theory or Chow-Witt theory, giving
\[
\text{deg}_{\mathcal E}([Z, \partial_f]^\text{\it vir})\in {\operatorname{GW}}(k)
\]
with rank the degree of the usual virtual fundamental class.
One can generalize this construction slightly. Assuming that the critical subscheme of $f$ is a disjoint union of components, $Z\amalg Z'$, we can restrict the whole construction to $Z$, giving the virtual fundamental class $[Z, \partial_f]^\text{\it vir}\in {\mathcal E}^{\operatorname{B.M.}}_{0,0}(Z)$.
As a direct consequence of Proposition~\ref{prop:VirDeg} we have the following description of the virtual fundamental class of a zero-dimensional component of the critical locus.
\begin{corollary}\label{cor:VirClassA1Deg} Let $f:{\mathbb A}^n\to {\mathbb A}^1$ be a function and suppose that the critical subscheme of $f$ is a disjoint union of closed subschemes $Z\amalg Z'$ with $Z$ of pure dimension zero. We have the polynomial map
\[
\partial f=(\partial f/\partial X_1,\ldots, \partial f/\partial X_n): {\mathbb A}^n\to {\mathbb A}^n.
\]
Then
\[
\pi_{Z*}([Z,\partial_f]^\text{\it vir})=\delta_{{\mathbb A}^1}(\partial f, Z_{\rm red})
\]
in ${\operatorname{GW}}(k)$. \end{corollary}
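As a minimal sanity check (our illustration, not taken from the text), the one-variable case can be computed by hand:

```latex
\begin{ex}
Let $k$ have characteristic different from $2$ and take $f:{\mathbb A}^1_k\to{\mathbb A}^1_k$,
$f(X)=X^2$. The critical subscheme is $Z=\{0\}$, the map $\partial f=2X$ is
\'etale at $0$, and its Jacobian determinant is $J(\partial f)=2$, so
Corollary~\ref{cor:VirClassA1Deg} together with Lemma~\ref{lem:VIrDegEt} gives
\[
\pi_{Z*}([Z,\partial_f]^\text{\it vir})=\delta_{{\mathbb A}^1}(2X,0)
=\langle 2\rangle\in{\operatorname{GW}}(k),
\]
a class of rank one refining the classical intersection multiplicity $1$.
\end{ex}
```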
\end{document}
"id": "1703.03056.tex",
"language_detection_score": 0.5409358739852905,
"char_num_after_normalized": null,
"contain_at_least_two_stop_words": null,
"ellipsis_line_ratio": null,
"idx": null,
"lines_start_with_bullet_point_ratio": null,
"mean_length_of_alpha_words": null,
"non_alphabetical_char_ratio": null,
"path": null,
"symbols_to_words_ratio": null,
"uppercase_word_ratio": null,
"type": null,
"book_name": null,
"mimetype": null,
"page_index": null,
"page_path": null,
"page_title": null
} | arXiv/math_arXiv_v0.2.jsonl | null | null |
\begin{document}
\title{\scshape{Discrepancy estimates for sequences: new results and open problems}} \author{Gerhard Larcher\thanks{is partially supported by the Austrian Science Fund (FWF), Project P21943.} } \date{} \maketitle
\abstract{ In this paper we give an overview of recent results on (upper and lower) discrepancy estimates for (concrete) sequences in the unit-cube. In particular we also give an overview of discrepancy estimates for certain classes of hybrid sequences. Here by a hybrid sequence we understand an $(s+t)$-dimensional sequence which is a combination of an $s$-dimensional sequence of a certain type (e.g. Kronecker-, Niederreiter-, Halton-, $\ldots$ type) and a $t$-dimensional sequence of another type. The analysis of the discrepancy of hybrid sequences (and of their components) is a current and very active branch of research. We give a collection of challenging open problems on this topic.}
\noindent {\bf Keywords:} Digital sequences, hybrid sequences, discrepancy\\ \\ {\bf AMS classification:} 11K06, 11K38
\section{Introduction}
Let $(\boldsymbol{z}_n)_{n \ge 0}$ be a sequence in the $d$-dimensional unit-cube $\left[0,1\right)^d $. Then the {\it discrepancy} of the first $N$ points of the sequence is defined by
$$D_N = \sup_{B \subseteq \left[0,1\right)^d }\left|\frac{A_N(B)}{N}-\lambda (B)\right|,$$ where $$A_N(B) := \;\#\; \{n : 0 \le n < N, \boldsymbol{z}_n \in B\}, $$ $\lambda$ is the $d$-dimensional volume and the supremum is taken over all axis-parallel subintervals $B \subseteq \left[0,1\right)^d$. The sequence $(\boldsymbol{z}_n)_{n \ge 0}$ is called uniformly distributed if $\lim\limits_{N \rightarrow\infty} D_N = 0$. If the supremum is restricted to all $B$ with lower-left corner in the origin, then we speak of the star-discrepancy $D_N^*$. The most famous conjecture in the theory of irregularities of distribution states that for every sequence $(\boldsymbol{z}_n)_{n \ge 0}$ in $\left[0,1\right)^d$ we have $$ D_N \ge c_d \frac{(\log N )^d }{N}$$ for a constant $c_d > 0$ and for infinitely many $N$. This was proven for dimension $d=1$ by Schmidt \cite{Schmidt2}. Hence sequences whose discrepancy satisfies $ D_N = O \left( \frac{(\log N )^d }{N}\right)$ are called ``low-discrepancy sequences''. \newline Note that recent investigations of Bilyk, Lacey et al., see for example \cite{Bilyk} or \cite{Bilyk et}, have led some people to conjecture that $\frac{(\log N )^{\frac{d+1}{2}} }{N}$ instead of $\frac{(\log N )^d }{N}$ is the best possible order for the discrepancy of sequences in $\left[0,1\right)^d$. At the moment the best known general lower bound for the discrepancy of sequences in $\left[0,1\right)^d$ for $d \ge 2$ is $$D_N \ge c_d \frac{(\log N )^{\frac{d}{2}+\epsilon (d)} }{N}$$ for infinitely many $N$, with some small $\epsilon (d) > 0$. For more details on this topic see \cite{Beck2} or \cite{BilykL}.
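For $d=1$ the star-discrepancy of a finite point set can be computed exactly from the sorted points via the well-known formula $D_N^* = \max_{1\le i\le N}\max\bigl(\frac{i}{N}-x_{(i)},\; x_{(i)}-\frac{i-1}{N}\bigr)$. The following Python sketch (our illustration, not part of the original text) implements this formula:

```python
def star_discrepancy_1d(points):
    """Exact star-discrepancy of a finite point set in [0,1).

    Uses Niederreiter's formula: for sorted points x_(1) <= ... <= x_(N),
    D*_N = max_i max( i/N - x_(i), x_(i) - (i-1)/N ).
    """
    xs = sorted(points)
    n = len(xs)
    return max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(xs))

# The centered regular grid (i + 0.5)/N attains the optimal value 1/(2N).
print(star_discrepancy_1d([(i + 0.5) / 10 for i in range((10))]))
```

For instance, the first $1000$ points of the one-dimensional Kronecker sequence $\{n\sqrt2\}$ already have a star-discrepancy well below $0.05$, reflecting the $O(\log N/N)$ behaviour for badly approximable $\alpha$.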
There are three groups of (almost) low-discrepancy sequences which are of main interest for applications in quasi-Monte Carlo methods. (Here by a quasi-Monte Carlo method we mean simulation in the setting of Monte Carlo methods, but using deterministic, i.e., quasi-random (usual low-discrepancy) point sets instead of pseudo-random point sets.) Indeed, these are (until now) the only known types of sequences containing concrete examples of (almost) low-discrepancy sequences. These are Kronecker sequences, Halton sequences and digital $({\bf T},s)$-sequences in the sense of Niederreiter.
The most classical type are the {\em Kronecker sequences}. A Kronecker sequence is of the form $$ \left(\boldsymbol{z}_n\right)_{n \ge 0} = \left(\left\{n \boldsymbol{\alpha}\right\}\right)_{n \ge 0} = \left(\left(\left\{n \alpha_1\right\}, \ldots, \left\{n \alpha_d\right\}\right)\right)_{n \ge 0}$$ for some $\boldsymbol{\alpha} =\left(\alpha_1,\ldots,\alpha_d\right) \in \left[0,1\right)^d$. The sequence is uniformly distributed in $\left[0,1\right)^d$ if and only if $1,\alpha_1, \ldots , \alpha_d$ are linearly independent over ${\mathbb Z}$.
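A short Python sketch (our illustration, not from the text) generating the first points of a two-dimensional Kronecker sequence; with $\boldsymbol{\alpha}=(\sqrt{2},\sqrt{3})$ the numbers $1,\sqrt2,\sqrt3$ are linearly independent over ${\mathbb Z}$, so the sequence is uniformly distributed in $[0,1)^2$:

```python
from math import sqrt

def kronecker_sequence(alpha, n_points):
    """First n_points of the Kronecker sequence z_n = ({n*alpha_1}, ..., {n*alpha_d})."""
    return [tuple((n * a) % 1.0 for a in alpha) for n in range(n_points)]

# 1, sqrt(2), sqrt(3) are linearly independent over the integers,
# hence the sequence is uniformly distributed in [0,1)^2.
pts = kronecker_sequence((sqrt(2), sqrt(3)), 1000)
```

As a rough uniformity check, the empirical mean of each coordinate over the first $1000$ points is already close to $1/2$.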
(Good) {\it lattice point sets} are -- in some sense -- finite versions of Kronecker sequences. They are of the following form: $$ \boldsymbol{z}_n = \left(\left\{ n \frac{a_1}{N}\right\}, \ldots, \left\{n \frac{a_d}{N}\right\}\right), \quad n=0,1,\ldots,N-1,$$ for some given $N \in {\mathbb N}$ and $a_1, \ldots, a_d \in {\mathbb Z}$.
The second type of sequences are {\em Halton sequences} which are defined as follows: Let $b_1, \ldots , b_d \ge 2$ be pairwise coprime integers, then the Halton sequence $\left(\boldsymbol{z}_n\right)_{n \ge 0}$ with $\boldsymbol{z}_n=\left(z_n^{(1)},\ldots,z_n^{(d)}\right)$, in bases $b_1, \ldots , b_d$ is given by $$z_n^{(j)} := \psi_{b_j}(n)$$ where $$\psi_{b_j}(n) := \sum_{i=0}^{\infty}n_i b_j^{-i-1}$$ for $n \in {\mathbb N}_0$ with base $b_j$ representation $$n = \sum_{i=0}^{\infty} n_i b_j^i, \quad 0 \le n_i < b_j \, .$$ It is easy to show that every Halton sequence is a low-discrepancy sequence.
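The radical-inverse function $\psi_b$ simply mirrors the base-$b$ digits of $n$ about the radix point. A minimal Python sketch (our illustration) of $\psi_b$ and of the resulting Halton points:

```python
def radical_inverse(n, b):
    """psi_b(n): reflect the base-b digits of n about the radix point."""
    x, weight = 0.0, 1.0 / b
    while n > 0:
        n, digit = divmod(n, b)  # peel off the lowest base-b digit
        x += digit * weight
        weight /= b
    return x

def halton(n_points, bases=(2, 3)):
    """First n_points of the Halton sequence in pairwise coprime bases."""
    return [tuple(radical_inverse(n, b) for b in bases) for n in range(n_points)]

# e.g. psi_2(6) = psi_2(110_2) = 0.011_2 = 3/8
print(radical_inverse(6, 2))
```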
The concept of {\em digital-sequences} in a base $q$ (see for example \cite{Dick}, \cite{Lar}, \cite{Nied1}, or \cite{Nied2}) was introduced by Niederreiter and it contains also earlier special examples of sequences of the same type considered by Sobol' and by Faure.
Let $q$ be a prime number and let ${\mathbb Z}_q$ be the finite field of order $q$. We identify ${\mathbb Z}_q$ with the set $\{0,1,\ldots, q-1\}$ equipped with arithmetic operations modulo $q$. Let $d \in {\mathbb N}$. Let $C_1, \ldots, C_d \in {\mathbb Z}_q^{{\mathbb N} \times {\mathbb N}}$ be ${\mathbb N} \times {\mathbb N}$ matrices over ${\mathbb Z}_q$. Let $n \in {\mathbb N}_0$, with $q$-adic expansion $n = n_0 + n_1q + n_2q^2 + \ldots$ and set $$ \vec{n} = (n_0, n_1, n_2, \ldots)^\top \in ({\mathbb Z}_q^{\mathbb N})^\top.$$ Then define $$ \vec{x}_{n,j} = C_j \vec{n} \quad \mbox{for} \quad j=1, \ldots, d, $$ where all arithmetic operations are taken modulo $q$. Let $\vec{x}_{n,j} = (x_{n,j,1}, x_{n,j,2}, \ldots)^\top$ and define $$ x_{n,j} = x_{n,j,1}q^{-1} + x_{n,j,2}q^{-2} + \cdots \, .$$ Then the $n$th point $\boldsymbol{x}_n$ of the sequence ${\bf S} (C_1, \ldots, C_d)$ is given by $\boldsymbol{x}_n = (x_{n,1}, \ldots, x_{n,d})$. A sequence ${\bf S} (C_1, \ldots, C_d)$ constructed this way is called a \textit{digital sequence (over ${\mathbb Z}_q$)} with generating matrices $C_1, \ldots, C_d$. Under certain conditions on the generating matrices it can be shown that the star discrepancy of ${\bf S} (C_1, \ldots, C_d)$ is of order of magnitude $\frac{(\log N)^d}{N}$ in $N$. For more information we refer to \cite{Dick}, \cite{Nied1}, \cite{Nied2} and the references therein.
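A small Python sketch of this digital construction (our illustration; for computability we truncate the ${\mathbb N}\times{\mathbb N}$ generating matrices to $m\times m$ blocks, which suffices for $n<q^m$). Taking $q=2$ and the identity matrix as the single generating matrix reproduces the van der Corput sequence in base 2:

```python
def digital_sequence(gen_matrices, n_points, q=2):
    """Digital sequence over Z_q; gen_matrices[j] is an m x m matrix
    (list of rows) truncating the generating matrix C_j."""
    points = []
    for n in range(n_points):
        # q-adic digit vector (n_0, n_1, ...) of n
        digits, m = [], n
        while m > 0:
            m, d = divmod(m, q)
            digits.append(d)
        point = []
        for C in gen_matrices:
            x = 0.0
            for i, row in enumerate(C):
                # i-th digit of x_{n,j}: i-th row of C_j times the digit vector, mod q
                x_i = sum(row[k] * digits[k] for k in range(min(len(digits), len(row)))) % q
                x += x_i * float(q) ** (-(i + 1))
            point.append(x)
        points.append(tuple(point))
    return points

# Identity generating matrix over Z_2 gives the van der Corput sequence in base 2.
identity = [[int(i == k) for k in range(16)] for i in range(16)]
vdc = [p[0] for p in digital_sequence([identity], 8)]
```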
The finite versions of digital sequences are digital $(t,m,s)$-nets (over ${\mathbb Z}_q$). These are point-sets $\boldsymbol{x}_n = (x_{n,1}, \ldots, x_{n,d})$ with $n=0,1,\ldots, N-1$, where $N=q^m$, and which are defined in the same way as digital sequences but now with $C_1, \ldots, C_d \in {\mathbb Z}_q^{m \times m}$.
In the following we will give some recent results on the discrepancy of these point sequences and point sets, and of combinations (hybrids) of them, and we will also give a collection of challenging open problems in this context.
\section{Metrical and average type discrepancy estimates for digital point sets and sequences and for good-lattice point-sets}
In \cite{Beck} Beck has given upper and lower metrical bounds for the discrepancy of the $d$-dimensional Kronecker sequence. He showed:
\begin{theorem}(Beck, 1994)\label{Beck1} For almost all $\boldsymbol{\alpha} \in [0,1)^d$ the discrepancy $D_N$ of the $d$-dimensional Kronecker sequence satisfies $$ D_N = O \left(\frac{(\log N)^d (\log \log N)^{1+\epsilon}}{N}\right) $$ for every $\epsilon > 0$. \end{theorem}
\begin{theorem}(Beck, 1994)\label{Beck2} For almost all $\boldsymbol{\alpha} \in [0,1)^d$ the discrepancy $D_N$ of the $d$-dimensional Kronecker sequence satisfies $$ D_N \ge C (\boldsymbol{\alpha}, d) \frac{(\log N)^d \log \log N}{N} $$ for infinitely many $N$, where $C (\boldsymbol{\alpha}, d) > 0$. \end{theorem}
It is quite interesting that until now, for dimensions $d \ge 2$, no \textbf{concrete} choice of $\alpha_1, \ldots, \alpha_d$ is known such that the upper discrepancy estimate in Theorem \ref{Beck1} of Beck is satisfied for this \textbf{concrete} sequence. The discrepancy of a Kronecker sequence generated by $\boldsymbol{\alpha} = (\alpha_1, \ldots, \alpha_d)$ heavily depends on how well $\alpha_1, \ldots, \alpha_d$ can be simultaneously approximated by rationals.
For example if $\alpha_1, \ldots, \alpha_d$ are algebraic numbers such that $1, \alpha_1, \ldots, \alpha_d$ are linearly independent over ${\mathbb Z}$, then by the Theorem of Thue-Siegel-Roth-Schmidt we have $D_N = O \left(\frac{1}{N^{1-\epsilon}}\right)$ for every $\epsilon > 0$. Further discrepancy estimates for the Kronecker sequence in dependence on diophantine approximation properties of $\alpha_1, \ldots, \alpha_d$ can be found in \cite{KuNied2006} or in \cite[Theorem 2]{Nied2012}.
For the proofs of Theorems \ref{Beck1} and \ref{Beck2} Beck uses a Poisson summation formula for the discrepancy function and some results from probabilistic diophantine approximation. In particular, he uses a result of Schmidt from \cite{Schmidt} which gives for almost all $\boldsymbol{\alpha} = (\alpha_1, \ldots, \alpha_d) \in [0,1)^d$ a rather exact formula for \begin{align*} N(h):= \# \Big\{ (q_1, \ldots, q_d) \in {\mathbb Z}^d \, \Big| \, \left| q_i \right| \le h \ \mbox{for} \ i=1, \ldots, d \ \mbox{and} \ \left\{ q_1 \alpha_1 + \cdots + q_d \alpha_d\right\} < \phi (q_1, \ldots, q_d)\Big\}. \end{align*} Indeed, for suitable $\phi$ we have $$ N(h) = \sum_{\substack{q_1, \ldots, q_d\\ |q_i| \le h}}\phi(q_1, \ldots, q_d) + R, $$ with a certain ``small'' error term $R$.
In \cite{Lar-Pill} an analogous result to Theorem \ref{Beck1} was given for digital sequences.
\begin{theorem} (Larcher, 1998) Let $d \in {\mathbb N}$ and let $q$ be a prime number. Then for $\mu_d$-almost all $d$-tuples $(C_1, \ldots, C_d) \in ({\mathbb Z}_q^{{\mathbb N} \times {\mathbb N}})^d$ of generating matrices the digital sequence generated by $(C_1, \ldots, C_d)$ has discrepancy satisfying $$ D_N = O \left(\frac{(\log N)^d (\log \log N)^{2+\epsilon}}{N}\right)$$ for every $\epsilon > 0$. (Here $\mu_d$ is a probability measure on $({\mathbb Z}_q^{{\mathbb N} \times {\mathbb N}})^d$ defined in a quite natural way, see \cite{Lar-Pill}.) \end{theorem}
For the proof of this result one has to combine some counting arguments with results from metrical non-archimedean diophantine approximation. An important subclass of the class of digital sequences is the class of digital Kronecker sequences. These sequences form a ``non-archimedean analog'' of classical Kronecker sequences. They were introduced by Niederreiter \cite[Section 4]{Nied2}, and further investigated by Larcher and Niederreiter \cite{Lar2}.
Let ${\mathbb Z}_q[x]$ be the set of all polynomials over ${\mathbb Z}_q$ and let ${\mathbb Z}_q((x^{-1}))$ be the field of formal Laurent series $g$ with $g=0$ or $$ g = \sum_{k=\omega}^{\infty} a_k x^{-k} \quad \mbox{with} \quad a_k \in {\mathbb Z}_q \quad \mbox{and} \quad \omega \in {\mathbb Z} \quad \mbox{with} \quad a_\omega \neq 0. $$ ${\mathbb Z}_q ((x^{-1}))$ contains the field of rational functions over ${\mathbb Z}_q$ as a subfield. The discrete exponential evaluation $\nu$ of $g$ is defined by $\nu(g):=-\omega$ and $\nu(0):=-\infty$. Furthermore, we define the ``fractional part'' of $g$ by $$ \{g\}:= \sum_{k=\max (1, \omega)}^{\infty} a_k x^{-k}.$$ We associate a nonnegative integer $n$ with $q$-adic expansion $n=n_0 + n_1 q + \cdots + n_rq^r$ with the polynomial $n(x) = n_0 + n_1 x + \cdots + n_r x^r$ in ${\mathbb Z}_q [x]$ and vice versa.
For every $d$-tuple $\boldsymbol{f} = (f_1, \ldots, f_d)$ of elements of ${\mathbb Z}_q((x^{-1}))$ we define the sequence $S(\boldsymbol{f}) = (\boldsymbol{x}_n)_{n \ge 0}$ by $$ \boldsymbol{x}_n = (\{n(x)f_1(x)\}_{x=q}, \ldots, \{n(x) f_d (x)\}_{x=q}) \quad \mbox{for} \quad n \in {\mathbb N}_0. $$ In analogy to classical Kronecker sequences it has been shown in \cite{Lar2} that a digital Kronecker sequence $S(\boldsymbol{f})$ is uniformly distributed in $[0,1)^d$ if and only if $1,f_1, \ldots, f_d$ are linearly independent over ${\mathbb Z}_q [x]$. By $\mu$ we denote the normalized Haar-measure on ${\mathbb Z}_q((x^{-1}))$ and by $\tilde{\mu}_d$ the $d$-fold product measure on $({\mathbb Z}_q((x^{-1})))^d$.
In \cite{Lar1995} Larcher proved the following metrical upper bound on the star discrepancy of digital Kronecker sequences.
\begin{theorem}(Larcher, 1995). Let $d \in {\mathbb N}$, let $q$ be a prime number and let $\epsilon > 0$. For $\tilde{\mu}_d$-almost all $\boldsymbol{f} \in ({\mathbb Z}_q((x^{-1})))^d$ the digital Kronecker sequence $S (\boldsymbol{f})$ has star discrepancy satisfying $$ D_N (S(\boldsymbol{f})) = O \left( \frac{(\log N)^d (\log \log N)^{2+\epsilon}}{N}\right). $$ \end{theorem}
Quite recently Larcher and Pillichshammer were able to give corresponding metrical lower bounds for the discrepancy of digital sequences and of digital Kronecker sequences (see \cite{Lar-Pill2}, \cite{Lar-Pill3}).
\begin{theorem} (Larcher and Pillichshammer, 2013). \label{Lar-Pill1} Let $d \in {\mathbb N}$ and let $q$ be a prime number. Then for $\mu_d$-almost all $d$-tuples $(C_1, \ldots, C_d) \in ({\mathbb Z}_q^{{\mathbb N} \times {\mathbb N}})^d$ of generating matrices the digital sequence $S (C_1, \ldots, C_d)$ over ${\mathbb Z}_q$ has discrepancy satisfying $$ D_N (S (C_1, \ldots, C_d)) \ge c(q, d)\frac{(\log N)^d \log \log N}{N} \quad \mbox{for infinitely many} \, N \in {\mathbb N} $$ with some $c (q, d) > 0$ not depending on $N$. \end{theorem}
\begin{theorem} (Larcher and Pillichshammer, 2013).\label{Lar-Pill2} Let $d \in {\mathbb N}$ and let $q$ be a prime number. For $\tilde{\mu}_d$-almost all $\boldsymbol{f} \in ({\mathbb Z}_q((x^{-1})))^d$ the digital Kronecker sequence $S (\boldsymbol{f})$ has star discrepancy satisfying $$ D_N (S (\boldsymbol{f})) \ge c(q, d)\frac{(\log N)^d \log \log N}{N} \quad \mbox{for infinitely many} \, N \in {\mathbb N} $$ with some $c(q,d) > 0$ not depending on $N$. \end{theorem}
For the proofs of Theorems \ref{Lar-Pill1} and \ref{Lar-Pill2} we used, in analogy to the method of Beck, a Poisson summation formula based on Walsh-functions for the discrepancy-function of these digital sequences and again certain results on non-archimedean diophantine approximation.
For the finite versions of these point sequences, namely for the good lattice point sets (as discrete versions of Kronecker sequences), for digital $(t,m,s)$-nets (as discrete versions of digital sequences), as well as for digital nets generated by rational functions over finite fields, until now only \textit{upper} ``metrical'' (i.e., average-type) bounds are known.
The last-mentioned class of point sets is a finite analogue of the digital Kronecker sequences. Digital nets generated by rational functions over finite fields are defined as follows: let $f \in {\mathbb Z}_q [x]$ with $\deg (f) = t \ge 1$, and $g_1, \ldots, g_d \in {\mathbb Z}_q [x]$ with $\gcd (g_i, f) = 1$ for $i=1, \ldots, d$ and $\deg (g_i) < t$.
Then we consider point sets $(\boldsymbol{x}_n)_{n \ge 0}$ in $[0,1)^d$ of the form
$$ \boldsymbol{x}_n := \left(\left. \left\{\frac{n(x)g_1(x)}{f(x)}\right\} \right|_{x=q}, \ldots, \left.\left\{\frac{n(x) g_d (x)}{f(x)}\right\}\right|_{x=q}\right).$$ The mentioned upper bounds for the discrepancy of these three classes of finite point sets were proven in \cite{Bykov}, \cite{Lar1993} and \cite{Lar1998}.
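For readers who wish to experiment with these point sets, the following Python sketch (our own illustration, not taken from the cited papers, and assuming for simplicity that $f$ is monic) generates the points $\boldsymbol{x}_n$. Polynomials over ${\mathbb Z}_q$ are stored as little-endian coefficient lists, and the fractional part $\{n(x)g(x)/f(x)\}$ is expanded digit by digit into its Laurent series before evaluation at $x=q$.

```python
def base_q_poly(n, q):
    """Base-q digits of n, read as coefficients of the polynomial n(x)."""
    d = []
    while n:
        n, r = divmod(n, q)
        d.append(r)
    return d or [0]

def poly_mul(a, b, q):
    """Product of two polynomials over Z_q (little-endian coefficient lists)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % q
    return out

def poly_rem(a, f, q):
    """Remainder of a(x) modulo a monic f(x) of degree t, padded to length t."""
    a, t = a[:], len(f) - 1
    for i in range(len(a) - 1, t - 1, -1):
        c = a[i]
        if c:
            for j in range(t + 1):
                a[i - t + j] = (a[i - t + j] - c * f[j]) % q
    a = a[:t]
    return a + [0] * (t - len(a))

def laurent_digits(r, f, q, m):
    """First m digits c_1, c_2, ... of the expansion r(x)/f(x) = sum_k c_k x^{-k}."""
    t, digs = len(f) - 1, []
    r = r[:t] + [0] * (t - len(r))
    for _ in range(m):
        c = r[t - 1]                # quotient digit of x*r(x) divided by monic f(x)
        xr = [0] + r                # multiply r by x
        r = [(xr[j] - c * f[j]) % q for j in range(t)]
        digs.append(c)
    return digs

def net_point(n, gs, f, q, m=30):
    """n-th point of the digital net generated by g_1,...,g_d and monic f over Z_q."""
    nx = base_q_poly(n, q)
    coords = []
    for g in gs:
        r = poly_rem(poly_mul(nx, g, q), f, q)
        digs = laurent_digits(r, f, q, m)
        coords.append(sum(c * float(q) ** -(k + 1) for k, c in enumerate(digs)))
    return tuple(coords)
```

For instance, with $q=2$, $f(x)=x^2$ and $g_1(x)=1$ the four points are $0, 1/4, 1/2, 3/4$; over ${\mathbb Z}_2$ one also recovers the Laurent expansion $1/(x^2+x+1) = x^{-2}+x^{-3}+x^{-5}+x^{-6}+\cdots$.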
\begin{theorem}(Bykovskii, 2012).\label{Bykovskii} For all $\chi<1$ there is a constant $C_\chi > 0$ such that for all $N$ the discrepancy $D_N$ of the lattice point set $$ \left(\left\{n \frac{a_1}{N}\right\}, \ldots, \left\{n \frac{a_d}{N}\right\}\right)_{n=0, \ldots, N-1} $$ satisfies $$D_N \le C_\chi \frac{(\log N)^{d-1} \log \log N}{N}$$ for at least $\chi N^d$ choices for $(a_1, \ldots, a_d)$ with $0 \le a_1, \ldots, a_d < N$. \end{theorem}
In dimension $d=2$ this result had already been shown by Larcher in \cite{Lar1986}. Moreover, in dimension $d=2$ the result has a close connection to the conjecture of Zaremba on continued fractions. One version of this conjecture is the following:
\begin{con}(Zaremba). \textit{There is an absolute constant $A$ such that for all $N \in {\mathbb N}$ there is an $a \in {\mathbb N}$ relatively prime to $N$ such that all continued fraction coefficients of $\frac{a}{N}$ are less than $A$.} \end{con}
A weaker version of this conjecture is the following conjecture (which is said to have been first stated by Moser):
\begin{con}(Moser) \textit{There is an absolute constant $B$ such that for all $N \in {\mathbb N}$ there is an $a \in {\mathbb N}$ relatively prime to $N$ such that the sum of all continued fraction coefficients of $\frac{a}{N}$ is less than $B \log N$.} \end{con}
Of course the conjecture of Moser is true if the conjecture of Zaremba holds. From the correctness of Moser's conjecture it follows that for all $N$ there exists an $a$ relatively prime to $N$ such that for the discrepancy $D_N$ of the $2$-dimensional lattice point set $$\left(\left\{n \frac{1}{N}\right\}, \left\{n \frac{a}{N}\right\}\right)_{n=0, \ldots, N-1}$$ we have $$ D_N \le C(B) \frac{\log N}{N} ,$$ which is an improvement of Theorem \ref{Bykovskii} for dimension $2$, if we see this just as an existence result. Concerning the conjecture of Zaremba there has been important progress quite recently in the paper \cite{BouKont} of Bourgain and Kontorovich, who showed that the conjecture of Zaremba holds at least for a set of integers $N$ with density $1$. See in this connection also the papers of Frolenkov and Kontorovich \cite{Frolenkov} and \cite{Kontorovich}.
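Zaremba's conjecture is easy to explore numerically. The following short Python check (our own illustration; the constant $5$ below is the value usually conjectured to suffice) computes, for a given denominator $N$, the smallest achievable largest partial quotient over all numerators coprime to $N$.

```python
from math import gcd

def cf_coeffs(p, q):
    """Partial quotients of p/q for 0 < p < q (leading integer part 0 dropped)."""
    coeffs = []
    while q:
        coeffs.append(p // q)
        p, q = q, p % q
    return coeffs[1:]

def zaremba_height(N):
    """min over a coprime to N of the largest continued fraction coefficient of a/N."""
    return min(max(cf_coeffs(a, N)) for a in range(1, N) if gcd(a, N) == 1)
```

Running `zaremba_height(N)` over a range of denominators returns values $\le 5$ throughout, consistent with the conjecture; e.g. for $N=6$ the best numerator is $a=5$, since $5/6 = [0;1,5]$.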
For digital nets and for digital nets generated by rational functions over finite fields we have the following upper average type estimates (see \cite{Lar1993} and \cite{Lar1998}).
\begin{theorem} (Larcher, 1998)\label{Lar2} For given integers $d \ge 1, m \ge 2$ and prime base $q$ we have: for all $\delta$ with $0 < \delta < 1$, the number of $d$-tuples $C = (C_1, \ldots, C_d)$ of $m \times m$-matrices, providing a digital net over ${\mathbb Z}_q$ with discrepancy $D_N$ satisfying $$ D_N \le \frac{1}{\delta} B(d, q)\frac{(\log N)^{d-1} \log \log N}{N} + O \left(\frac{(\log N)^{d-1}}{N}\right), $$ is at least $$ (1-\delta) \# M_d (m). $$ (Here $B(d,q)$ is a constant depending only on $d$ and $q$, whereas the O-constant also depends on $\delta$, and $M_d(m)$ is the set of all $d$-tuples of $m \times m$ matrices over ${\mathbb Z}_q$.) \end{theorem}
\begin{theorem}(Larcher, 1993)\label{Lar3} For every $t \in {\mathbb N}$ there are $g_1, \ldots, g_d \in {\mathbb Z}_q[x], g_1 =1, \gcd (g_i, x) = 1, i=1, \ldots, d$, such that for the discrepancy $D_N$ of the point set
$$ \boldsymbol{x}_n:= \left( \left. \left\{\frac{n(x)g_1(x)}{x^t}\right\} \right|_{x=q} , \ldots, \left. \left\{\frac{n(x)g_d(x)}{x^t}\right\} \right|_{x=q}\right), $$ $$ \quad \quad n=0, \ldots, q^t - 1 =: N-1, $$ we have $$ D_N^* < c (d, q) \frac{(\log N)^{d-1} (\log \log N)}{N}, $$ with a constant $c (d, q)$ depending only on $d$ and $q$. \end{theorem}
Indeed, from the proof of Theorem \ref{Lar3} in \cite{Lar1993} it is easy to see that this existence result can even be stated as an average-type result like Theorem \ref{Lar2}. An analogous result for such point sets, but with a denominator $f \in {\mathbb Z}_q[x]$ of degree $t$ with $\gcd (f,x)= 1$ instead of the denominator $x^t$, was shown by Kritzer and Pillichshammer in \cite{KriPill}.
\begin{op}\label{latpois} \textit{Show that for every $\chi < 1$ there is a constant $C'_{\chi} > 0$ such that for all $N$ for the discrepancy $D_N$ of the lattice point set $$ \left(\left\{n \frac{a_1}{N}\right\}, \ldots, \left\{n \frac{a_d}{N}\right\}\right)_{n = 0, \ldots, N-1} $$ we have $$ D_N \ge C'_\chi \frac{(\log N)^{d-1} \log \log N}{N} $$ for at least $\chi N^d$ choices for $(a_1, \ldots, a_d)$ with $0 \le a_1, \ldots, a_d < N$.} \end{op}
\begin{op}\label{constant} \textit{Show that for given integers $d \ge 1, m \ge 2$ and prime base $q$ there exists a constant $\tilde{B} (d, q)> 0$ such that: for all $\delta$ with $0 < \delta < 1$, the number of $d$-tuples $C = (C_1, \ldots, C_d)$ of $m \times m$-matrices, providing a digital net over ${\mathbb Z}_q$ with discrepancy $D_N$ satisfying $$ D_N \ge \delta \tilde{B}(d, q) \frac{(\log N)^{d-1} \log \log N}{N}$$ for infinitely many $N$, is at least $(1-\delta) \# M_d(m)$, where $M_d(m)$ is the set of all $d$-tuples of $m \times m$ matrices over ${\mathbb Z}_q$.} \end{op}
\begin{op}\label{ratfunc} \textit{Give an average-type lower bound (in the style as stated in the Open Problem \ref{constant}) for the discrepancy of digital nets generated by rational functions over finite fields.} \end{op}
For the proof of Problem \ref{latpois} (and analogously for the proofs of Problems \ref{constant} and \ref{ratfunc}) we think that it is possible to use the basic method from the proof of Theorem \ref{Beck2} (respectively from Theorems \ref{Lar-Pill1} and \ref{Lar-Pill2}) together with a certain average-type result on discrete diophantine approximation. For example, a result of the following type, which would be a discrete version of the above-mentioned result of Schmidt from \cite{Schmidt}, would be quite helpful:
\begin{op}\label{asymprep} \textit{Find (for suitable $\phi$) an asymptotic representation with a ``small'' error-term $R_\chi$ for $$N (h; a_1, \ldots, a_d):=$$
$$\# \left\{ -h \le n_1, \ldots, n_d \le h \left| \right. \left\{n_1 \frac{a_1}{N} + n_2 \frac{a_2}{N} + \ldots + n_d \frac{a_d}{N} \right\} < \phi (n_1, \ldots, n_d)\right\}$$ of the following form: For all $\chi < 1$ and all $h \in {\mathbb N}$ we have $$ N(h; a_1, \ldots, a_d) = \sum_{-h \le n_1, \ldots, n_d < h} \phi (n_1, \ldots, n_d) + R_\chi(h)$$ for at least $\chi N^d$ choices of $(a_1, \ldots, a_d)$ with $0 \le a_1, \ldots, a_d < N$.} \end{op}
\noindent For the proof of Open Problems \ref{constant} and \ref{ratfunc} we again probably will need a non-archimedean version of Problem \ref{asymprep}.
It probably should be a quite challenging task to extend the investigations of the above type to Halton-Niederreiter sequences. By a Halton-Niederreiter sequence we understand any $d$-dimensional sequence $(\boldsymbol{z}_n)_{n \ge 0}$ which is obtained by combining $l$ sequences $(\boldsymbol{z}_n^{(1)})_{n \ge 0}, \cdots, (\boldsymbol{z}_n^{(l)})_{n \ge 0}$ in dimensions $d_1, \cdots, d_l$ with $d_1 + \cdots + d_l = d$, i.e., $$ \boldsymbol{z}_n = (\boldsymbol{z}_n^{(1)}, \boldsymbol{z}_n^{(2)}, \ldots, \boldsymbol{z}_n^{(l)}), $$ where $(\boldsymbol{z}_n^{(i)})_{n \ge 0}$ is a digital sequence in base $q_i$, and $\gcd (q_1, \cdots, q_l)=1$.
The Halton sequence is a special case of a Halton-Niederreiter sequence. It was shown in \cite{HofKriLarPill} that a Halton-Niederreiter sequence is uniformly distributed if and only if each of its $l$ components is uniformly distributed. The discrepancy of such sequences was investigated in \cite{HofLar} and in \cite{Hofer}. It turns out that, on the one hand, the class of Halton-Niederreiter sequences contains low-discrepancy sequences, but on the other hand in general Halton-Niederreiter sequences are not low-discrepancy sequences even when all of their components are of low-discrepancy. A simple example of such a uniformly distributed Halton-Niederreiter sequence, generated by two low-discrepancy digital sequences, which itself is not of low-discrepancy, is the two-dimensional sequence generated by the matrix \[ C_1 = \begin{pmatrix} 1 & 1 & 1 & 1 & 1 & 1 & \ldots\\ 0 & 1 & 0 & 0 & 0 & 0 & \ldots\\ 0 & 0 & 1 & 0 & 0 & 0 & \ldots\\ 0 & 0 & 0 & 1 & 0 & 0 & \ldots\\ 0 & 0 & 0 & 0 & 1 & 0 & \ldots\\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ \end{pmatrix} \in {\mathbb Z}_3^{{\mathbb N} \times {\mathbb N}} \] in base 3 and the unit matrix in base 2.
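The two components of this example are easy to generate explicitly. In the sketch below (our own illustration) the first coordinate uses the matrix $C_1$ above, which acts as the identity except that its first row sums all base-3 digits, and the second coordinate is the usual radical inverse in base 2 produced by the unit matrix.

```python
def radical_inverse(n, q):
    """Digital sequence generated by the unit matrix in base q (van der Corput)."""
    x, k = 0.0, 1.0
    while n:
        n, r = divmod(n, q)
        k /= q
        x += r * k
    return x

def c1_coordinate(n, q=3):
    """Digital sequence in base 3 generated by C_1: identity rows plus an
    all-ones first row that sums every digit of n modulo 3."""
    digs = []
    while n:
        n, r = divmod(n, q)
        digs.append(r)
    if not digs:
        digs = [0]
    y = digs[:]                     # rows i >= 1 act as the identity
    y[0] = sum(digs) % q            # the all-ones first row mixes every digit
    return sum(c * float(q) ** -(i + 1) for i, c in enumerate(y))

def hn_point(n):
    """n-th point of the 2-D Halton-Niederreiter example (base 3 and base 2)."""
    return (c1_coordinate(n), radical_inverse(n, 2))
```

For example $n=3$ has base-3 digits $(0,1)$, so the first coordinate is $1/3 + 1/9 = 4/9$, paired with the van der Corput value $3/4$.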
It would be of interest to determine the order of discrepancy of almost all Halton-Niederreiter sequences, i.e., to solve the following:
\begin{op}\label{copbas} \textit{For given $d_1, \ldots, d_l \in {\mathbb N}$ with $d_1 + \ldots + d_l = d$ and given coprime bases $q_1, \ldots, q_l$ determine $g (N)$ as small as possible and $f(N)$ as large as possible such that we have: for the discrepancy $D_N$ of almost all Halton-Niederreiter sequences with $d_i$-dimensional components in base $q_i; i=1, \ldots, l$ we have $$ D_N = \Omega (f(N)) \quad \mbox{and} \quad D_N = O (g(N)).$$ }\end{op}
We think that, to solve this problem, the techniques developed by Hellekalek in \cite{Hell} based on certain function systems should be quite helpful (see also \cite{Hell2} and \cite{HellNied}).
In the analyses of Halton-Niederreiter sequences carried out until now, it turned out that the analysis is much easier if the generating matrices of the components are all ``finite-row matrices'', that is, if each row of each generating matrix has only finitely many entries different from zero. Moreover, it seems that the metrical investigation of the discrepancy of Halton-Niederreiter sequences will lead to a smaller order of discrepancy if we restrict ourselves to considering ``finite-row digital Halton-Niederreiter sequences'' than in the general case. Of course, we first have to consider what a suitable measure for these finite-row sequences is.
\begin{op}\label{copbascomp} \textit{For given $d_1, \ldots, d_l$ with $d_1 + \ldots + d_l = d$ and given coprime bases $q_1, \ldots, q_l$ determine $g (N)$ as small as possible and $f(N)$ as large as possible such that we have: for the discrepancy $D_N$ of almost all \textbf{finite-row} Halton-Niederreiter sequences (with respect to a suitable measure) with $d_i$-dimensional components in base $q_i; i=1, \ldots, l$ we have $$D_N = \Omega (f(N)) \quad \mbox{and} \quad D_N = O (g(N)).$$ }\end{op}
\section{Discrepancy estimates for and applications of hybrid sequences}\label{discest} As already mentioned above there are three groups of ((almost) low-discrepancy) sequences which are of main interest. Indeed, these are (until now) the only known types of sequences containing concrete examples of (almost) low-discrepancy sequences. These are Kronecker sequences, Halton sequences and digital sequences in the sense of Niederreiter. (In some sense we could consider the classes of sequences introduced by Levin \cite{Levin} as a fourth class of low-discrepancy sequences.)
In recent years considerable interest has grown in the distribution of ``hybrids'' of such sequences, and also of such sequences combined with pseudo-random number sequences (from a theoretical point of view as well as from the point of view of applications). A hybrid sequence is defined as follows: take an $s$-dimensional sequence $(\boldsymbol{x}_n)_{n \ge 0}$ of a certain type and a $t$-dimensional sequence $(\boldsymbol{y}_n)_{n \ge 0}$ of another type and combine them to an $(s+t)$-dimensional {\em hybrid sequence} $$(\boldsymbol{z}_n)_{n \ge 0} : = ((\boldsymbol{x}_n,\boldsymbol{y}_n))_{n \ge 0}.$$
For the application of these hybrid sequences in QMC methods see for example \cite{Keller}, or \cite{Spanier}. A possible, quite natural application of hybrid sequences in financial risk management is described at the end of this section.
Hybrids of the two classical types of sequences, namely of Halton sequences and of Kronecker sequences (we call them Halton-Kronecker sequences), were first studied by Niederreiter in \cite{Nied2009}, \cite{Nied2010} and \cite{Nied2012}. From a metrical point of view the discrepancy of Halton-Kronecker sequences was studied in \cite{HofLar2} and in \cite{Lar2013}, where the following was shown:
\begin{theorem}(Larcher, 2013).\label{Lar2013} For every Halton sequence in $[0,1)^s$ and almost every $\boldsymbol{\alpha} \in \mathbb{R}^t$ for the discrepancy of the ($s+t$)-dimensional Halton-Kronecker sequence we have $D_N = O \left( \frac{(\log N)^{s+t + \epsilon}}{N} \right)$ for every $\epsilon > 0$. \end{theorem}
The proof of this theorem again heavily depends on the techniques developed by Beck in \cite{Beck}.
So for almost all $\boldsymbol{\alpha} \in \mathbb{R}^t$ a Halton-Kronecker sequence is an (almost) low-discrepancy sequence. However, until now, we do not know any \textbf{concrete} example of an (almost) low-discrepancy Halton-Kronecker sequence. Even in the simplest case $s=t=1$ we do not have any such concrete example. So, for example, it would be of great interest to study the discrepancy $D_N$ of $ (\boldsymbol{z}_n)_{n \ge 0} = (x_n, \{n \sqrt{2}\})_{n \ge 0} $ where $(x_n)_{n \ge 0}$ is the one-dimensional Halton sequence in base $2$, i.e., the van der Corput sequence in base 2.
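The hybrid sequence just described is at least easy to generate and to test numerically. The following sketch (our own, using a naive $O(N^3)$ search over the critical boxes, whose upper-right corners have coordinates taken from the point set or equal to 1) computes the exact two-dimensional star discrepancy of its first $N$ points.

```python
from math import sqrt

def vdc(n):
    """van der Corput sequence in base 2."""
    x, k = 0.0, 1.0
    while n:
        n, r = divmod(n, 2)
        k /= 2
        x += r * k
    return x

def halton_kronecker(N):
    """First N points of the hybrid sequence (x_n, {n sqrt(2)})."""
    return [(vdc(n), (n * sqrt(2)) % 1.0) for n in range(N)]

def star_discrepancy_2d(pts):
    """Exact star discrepancy of a finite 2-D point set: check every critical
    box corner with both closed and open point counts."""
    N = len(pts)
    xs = sorted({x for x, _ in pts} | {1.0})
    ys = sorted({y for _, y in pts} | {1.0})
    d = 0.0
    for a in xs:
        for b in ys:
            closed = sum(1 for x, y in pts if x <= a and y <= b) / N
            opened = sum(1 for x, y in pts if x < a and y < b) / N
            d = max(d, abs(closed - a * b), abs(a * b - opened))
    return d
```

For a single point at $(1/2,1/2)$ the routine returns $3/4$, as it should; for moderate $N$ the hybrid sequence already shows a visibly small discrepancy.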
\begin{op}\label{vanCorpseq} \textit{Study the discrepancy of concrete Halton-Kronecker sequences, as for example $(x_n, \{n \sqrt{2}\})_{n \ge 0}$ with $(x_n)_{n \ge 0}$ the van der Corput sequence.} \end{op}
To handle Open Problem \ref{vanCorpseq} it is necessary to study the growth of the largest coefficients $A_K$ in the continued fraction expansion of $2^K \sqrt{2}$ for $ K=1,2,\ldots $. So, as preliminary work towards solving Problem \ref{vanCorpseq}, it would be helpful to give an answer to the following question.
\begin{op}\label{sharboun} \textit{Let $A_K$ denote the largest continued fraction coefficient of $2^K \sqrt{2}$ for $K=1,2,\ldots$. Give sharp bounds for the growth behaviour of $$ B_L := \underset{K \le L}{\max} \; A_K. $$} \end{op}
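Since $2^K\sqrt{2} = \sqrt{2^{2K+1}}$ is a quadratic irrational, its continued fraction is eventually periodic, and $A_K$ and $B_L$ can be computed exactly with the standard surd algorithm. The following Python sketch is our own illustration of this.

```python
from math import isqrt

def sqrt_cf_period(D):
    """Periodic part of the continued fraction of sqrt(D) (D not a perfect square),
    computed with the classical m, d, a recurrence; the period ends with 2*a0."""
    a0 = isqrt(D)
    m, d, a = 0, 1, a0
    period = []
    while a != 2 * a0:
        m = d * a - m
        d = (D - m * m) // d
        a = (a0 + m) // d
        period.append(a)
    return period

def A(K):
    """Largest continued fraction coefficient of 2**K * sqrt(2) = sqrt(2**(2K+1))."""
    return max(sqrt_cf_period(2 ** (2 * K + 1)))

def B(L):
    """B_L = max of A_K over K <= L."""
    return max(A(K) for K in range(1, L + 1))
```

For example, $\sqrt{8}$ has period $[1,4]$ and $\sqrt{32}$ has period $[1,1,1,10]$, so $A_1 = 4$ and $A_2 = 10$.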
Indeed for a solution of Problem \ref{vanCorpseq} even more subtle investigations on the continued fraction coefficients of $2^K \sqrt{2}$ will be necessary.
In \cite{Kritzer} Kritzer has shown an analog of Theorem \ref{Lar2013} for \textbf{finite} hybrid point sets. He showed an ``average-type'' upper bound for the combination of a Hammersley point set with a good lattice point set. We recall that a Hammersley point set is a set in $[0,1)^d$ of the form $$ (\boldsymbol{x}_n)_{n=0}^{N-1}= \left(\frac{n}{N}, \boldsymbol{y}_n\right)_{n=0}^{N-1}$$ where $(\boldsymbol{y}_n)_{n=0}^{N-1}$ are the first $N$ elements of a $(d-1)$-dimensional Halton sequence.
\begin{theorem}(Kritzer, 2012).\label{Kritzer1} Let $b_1, \ldots, b_s$ be $s$ distinct prime numbers and let $N$ be a prime that is different from $b_1, \ldots, b_s$. Let $H_N := (\boldsymbol{x}_n)_{n=0}^{N-1}$ be the $(s+1)$-dimensional Hammersley point set to the bases $b_1, \ldots, b_s$. Then there exists a generating vector $\boldsymbol{g} \in \{1, \ldots, N-1\}^t$ such that the point set $$ S_N = (\boldsymbol{z}_n)_{n=0}^{N-1} = ((\boldsymbol{x}_n, \boldsymbol{y}_n))_{n=0}^{N-1} $$ in $[0,1)^{s+t+1}$ with $\boldsymbol{y}_n = \{\frac{n\boldsymbol{g}}{N}\},$ for $0 \le n \le N - 1$, satisfies $$ D_N (S_N) = O \left( \frac{(\log N)^{s+t+1}}{N}\right), $$ with an implied constant independent of $N$. \end{theorem}
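Point sets of the type appearing in this theorem are easy to write down once a generating vector $\boldsymbol{g}$ is fixed; the sketch below (our own illustration, with an arbitrary, not optimized, choice of $\boldsymbol{g}$) builds such a Hammersley-good lattice point hybrid.

```python
def radical_inverse(n, q):
    """van der Corput radical inverse of n in base q."""
    x, k = 0.0, 1.0
    while n:
        n, r = divmod(n, q)
        k /= q
        x += r * k
    return x

def hammersley_lattice(N, bases, g):
    """Points (n/N, x_n, {n g / N}) for n = 0..N-1: a Hammersley set in the
    given prime bases, combined with a rank-1 lattice set with generating vector g."""
    pts = []
    for n in range(N):
        pt = [n / N]
        pt += [radical_inverse(n, q) for q in bases]
        pt += [(n * gi) % N / N for gi in g]
        pts.append(tuple(pt))
    return pts
```

For instance `hammersley_lattice(7, [2, 3], [3])` gives $7$ points in $[0,1)^4$; the theorem asserts that some choice of $\boldsymbol{g}$ makes the discrepancy of such a set small, but finding it is a separate search.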
Indeed, the result of Kritzer is not a discrete analog of Theorem \ref{Lar2013}, but of the result given in \cite{HofLar}, which is a predecessor of the result given in Theorem \ref{Lar2013}. The result of Theorem \ref{Lar2013} is better by essentially a logarithmic factor than the result in \cite{HofLar}. The reason for this is that \cite{HofLar} and \cite{Kritzer} work with techniques analogous to the methods used by Schmidt in \cite{Schmidt1}, whereas \cite{Lar2013} works with the more powerful techniques of Beck. Hence it can be conjectured that by suitably adapting the methods of Beck and of \cite{Lar2013} to the discrete case it should be possible to improve the result of Kritzer as well by almost a logarithmic factor. Note that in \cite{KriLeoPill} a component-by-component construction of such point sets was given.
\begin{op} \textit{Improve the result of Kritzer cited in Theorem \ref{Kritzer1} above on the discrepancy of Hammersley-good lattice point-sets by almost a logarithmic factor.} \end{op}
The result of Theorem \ref{Lar2013} should probably be essentially the best possible metrical estimate for the discrepancy of Halton-Kronecker sequences. However, whereas Beck was able to prove this assertion for pure Kronecker sequences, it seems to be out of reach at the moment to give a rather sharp metrical lower bound in the general case of Halton-Kronecker sequences. The reason for this is that until now we do not even have a satisfying lower bound for the discrepancy of the pure $s$-dimensional Halton sequence in dimension $s \ge 2$ (see Section \ref{discest}). So (even if it is tempting) we do not state the search for a metrical lower bound for Halton-Kronecker sequences as an open problem in general, but just for two special cases:
\begin{op} \textit{Show for the sequence of Problem \ref{vanCorpseq}, namely $$ (x_n, \{n \sqrt{2}\})_{n \ge 0}, $$ where $(x_n)_{n \ge 0}$ is the van der Corput sequence in base 2, that $$ D_N \ge c \frac{(\log N)^2}{N} $$ holds for a constant $c > 0$ and infinitely many $N$.} \end{op}
\begin{op} \textit{Show for the discrepancy $D_N$ of the sequence $(x_n, \{n \alpha\})_{n \ge 0}$, where $(x_n)_{n \ge 0}$ is the van der Corput sequence in base 2, that for almost all $\alpha$ there is a $C(\alpha) > 0$ such that $$ D_N \ge C(\alpha) \frac{(\log N)^2}{N} $$ holds for infinitely many $N$.} \end{op}
The next more general step would be to study the discrepancy of Niederreiter-Kronecker sequences, i.e., hybrid sequences generated by the combination of digital sequences with Kronecker sequences. Again, as in Problems \ref{copbas} and \ref{copbascomp}, it seems easier to attack the problem first for \textbf{finite-row} digital sequences as the digital component. For the case of infinite-row digital sequences we think it should be challenge enough to first investigate the most basic case. So concerning the metrical discrepancy analysis of Niederreiter-Kronecker sequences we state the following two open problems:
\begin{op}\label{shupmet} \textit{Give an essentially sharp upper metrical bound for the discrepancy of Niederreiter-Kronecker sequences $$ \boldsymbol{z}_n = (\boldsymbol{x}_n, \{n \boldsymbol{\alpha}\})_{n \ge 0} $$ where $(\boldsymbol{x}_n)_{n \ge 0}$ is a given digital $s$-dimensional sequence generated by matrices with finite rows, and $(\{n \boldsymbol{\alpha}\})_{n \ge 0}$ is a $t$-dimensional Kronecker sequence, which is valid for almost all $\boldsymbol{\alpha}$.} \end{op}
\begin{op}\label{shmetup} \textit{Give sharp metrical upper bounds (and, if possible, also lower bounds) for the discrepancy of the sequence $$ \boldsymbol{z}_n = (x_n, \{n \alpha\})_{n \ge 0} $$ where $(\{n \alpha\})_{n \ge 0}$ is a one-dimensional Kronecker sequence, and where $(x_n)_{n \ge 0}$ is the one-dimensional digital sequence in base 2 generated by the matrix \[ C= \begin{pmatrix} 1 & 1 & 1 & 1 & 1 & 1 & \ldots\\ 0 & 1 & 0 & 0 & 0 & 0 & \ldots\\ 0 & 0 & 1 & 0 & 0 & 0 & \ldots\\ 0 & 0 & 0 & 1 & 0 & 0 & \ldots\\ \vdots &\vdots & \vdots & \vdots & \vdots & \vdots \end{pmatrix} \in {\mathbb Z}_2^{{\mathbb N} \times {\mathbb N}} . \] The estimates should hold for almost all $\alpha \in {\mathbb R}$.} \end{op}
The investigation of the discrepancy of the sequence considered in Problem \ref{shmetup} in a first step leads necessarily to the investigation of the discrepancy of the following one-dimensional sequence:
\begin{op}\label{exmetor} \textit{Give the exact metrical order of the discrepancy of the sequence $$ (\{n_k \alpha\})_{k \ge 0} , $$ where $(n_k)_{k \ge 0}$ denotes the increasing sequence of positive integers $n_k$ for which $S_2 (n_k) \equiv 0 \mod{2}$ holds, where $S_2 (n)$ denotes the sum of digits of $n$ in base 2.} \end{op}
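The integers $n_k$ in this problem (sometimes called ``evil numbers'') and the resulting one-dimensional sequence are simple to generate; the following sketch is our own illustration.

```python
def s2(n):
    """Sum of the binary digits of n."""
    return bin(n).count("1")

def evil_numbers(count):
    """First `count` positive integers n with S_2(n) even."""
    out, n = [], 1
    while len(out) < count:
        if s2(n) % 2 == 0:
            out.append(n)
        n += 1
    return out

def subsequence(alpha, count):
    """First `count` terms of ({n_k alpha}) along the evil numbers."""
    return [(n * alpha) % 1.0 for n in evil_numbers(count)]
```

The first few indices are $3, 5, 6, 9, 10, 12, 15, \ldots$; the open problem asks for the metrical order of the discrepancy of $(\{n_k\alpha\})$ along this subsequence.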
In the remaining part of this section we make a side-step from discrepancy theory to applications of QMC methods, and especially of hybrid sequences. The analysis of hybrid sequences has already been motivated by Spanier in \cite{Spanier} and by Keller in \cite{Keller} by applications to transport problems and to image processing. Here we give a suggestion for an application in finance, especially in credit risk management, where hybrid sequences quite naturally seem to be the suitable tool for generating a simulation scenario. Here, in contrast to the examples given above, by a hybrid sequence we mean the combination of any of the low-discrepancy sequences with a pseudo-random sequence (for the analysis of the distribution of such sequences see for example \cite{GomHofNied}, \cite{Nied2011} or \cite{NiedWin}).
The credit risk management system Credit Metrics of J.P. Morgan (for all theoretical details see \cite{CredMet}) analyses the risk inherent in a large portfolio of credits held by a bank. This is done by calculating the $1 \%$ percentile of the future value of the credit portfolio in one year. To determine this percentile, the original version of Credit Metrics suggests Monte Carlo simulation. In this simulation problem each of the credits in the portfolio represents one dimension. Credit portfolios usually contain many more than 1000 single credits, so we have to deal here with a very high-dimensional simulation problem; hence it is certainly too high-dimensional for a pure QMC method. (The benefits of QMC methods in high dimensions usually appear only when the number of sample points is very high.)
On the other hand it could be a good idea to handle the (few) credits in the portfolio which contribute most to the risk of the whole portfolio more carefully, by using a low-discrepancy sequence for the dimensions in the simulation problem which correspond to these highest-risk credits. The risk contribution of a credit is a function of the face amount of the credit, of its rating class (i.e., of its downgrading or even default probabilities), and of its correlation properties within the credit portfolio. So the (sketch of a) program would be the following: given a number $N$ of sample points and the dimension $d$ of the simulation problem (i.e., $d$ is the number of credits in the portfolio), \begin{itemize} \item[-] determine a dimension $s < d$, such that it is preferable to use an $s$-dimensional low-discrepancy sequence consisting of $N$ points for an $s$-dimensional simulation problem instead of a pseudo-random sequence, \item[-] determine the $s$ credits of the portfolio which contribute most to the risk of the whole credit portfolio, \item[-] carry out the simulations suggested by the system Credit Metrics, but using for the $s$ selected credits a QMC sequence and for all other (many) credits a pseudo-random sequence, i.e., choose a hybrid sequence for the simulation. \end{itemize}
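A minimal sketch of the last step could look as follows (our own toy illustration: the function names and the choice of Halton bases are ours, and all actual credit-model logic is omitted). The scenario generator uses Halton coordinates for the $s$ selected credits and a seeded pseudo-random generator for the remaining dimensions.

```python
import random

def radical_inverse(n, q):
    """van der Corput radical inverse of n in base q."""
    x, k = 0.0, 1.0
    while n:
        n, r = divmod(n, q)
        k /= q
        x += r * k
    return x

def hybrid_scenarios(n_scenarios, s, d, seed=0):
    """Scenario vectors in [0,1)^d: Halton coordinates for the s riskiest
    credits (assumed to be the first s dimensions), pseudo-random for the rest."""
    primes = [2, 3, 5, 7, 11, 13, 17, 19][:s]   # toy choice: supports s <= 8
    rng = random.Random(seed)
    scenarios = []
    for n in range(1, n_scenarios + 1):
        qmc = [radical_inverse(n, q) for q in primes]
        mc = [rng.random() for _ in range(d - s)]
        scenarios.append(qmc + mc)
    return scenarios
```

Each scenario vector would then be mapped through the portfolio model (rating transitions, correlations, revaluation) exactly as in a pure Monte Carlo run.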
\begin{op} \textit{Carry out the above sketched program for using hybrid sequences in the credit risk management program Credit Metrics in detail and analyse the performance in comparison with pure Monte Carlo or pure quasi-Monte Carlo methods.} \end{op}
\section{Miscellaneous problems}\label{Miscel}
Of course there exist the big open problems in the theory of uniform distribution and essentially in discrepancy theory, like the most prominent one:
\textit{ \begin{itemize} \item determine the correct order of the best general lower bound for discrepancy, holding for all sequences in $[0,1)^d$ (with the most important contributions of Beck \cite{Beck2}, Bilyk and Lacey \cite{Bilyk}, \cite{BilykL} or Roth \cite{Roth}).\end{itemize} Or the still open question: \begin{itemize} \item is the sequence $\left(\left\{\left(\frac{3}{2}\right)^n \right\}\right)_{n \ge 0}$ uniformly distributed in $[0,1)$ or not?\end{itemize} Another well-known example is: \begin{itemize} \item do there exist $\alpha, \beta \in {\mathbb R}$ such that the $2$-dimensional Kronecker sequence $$ (\{n \alpha\}, \{n \beta\})_{n \ge 0} $$ is a low-discrepancy sequence, i.e., satisfies $$ D_N \le C \frac{(\log N)^2}{N} \quad \mbox{for all} \quad N , $$ or not? \end{itemize}}
This question has close connections to the still open conjecture of Littlewood in diophantine approximation, stating that for all $\alpha, \beta \in {\mathbb R}$ we have
$$ \underset{n \rightarrow \infty}{\lim \inf} \quad n \left\| n \alpha \right\| \cdot \left\| n \beta \right\| = 0, $$
where $\left\| x \right\|$ denotes the distance of $x$ to the nearest integer.\\
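Numerically the products $n \left\| n \alpha \right\| \left\| n \beta \right\|$ are easy to inspect in floating point (only an illustration of ours, of course; it proves nothing about the conjecture):

```python
from math import sqrt

def dist_to_nearest_int(x):
    """||x||: distance of x to the nearest integer."""
    return abs(x - round(x))

def littlewood_min(alpha, beta, N):
    """min over 1 <= n <= N of n * ||n alpha|| * ||n beta||."""
    return min(n * dist_to_nearest_int(n * alpha) * dist_to_nearest_int(n * beta)
               for n in range(1, N + 1))
```

For $\alpha = \sqrt{2}$, $\beta = \sqrt{3}$ the running minimum keeps decreasing as $N$ grows, in accordance with what the conjecture predicts for every pair.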
In the following we give some further problems in discrepancy theory which, in our opinion, are also of considerable interest and, most probably, easier to attack than the prominent problems mentioned above:
The first open problem in this section concerns the best possible lower bound for the star discrepancy $D_N^*$ of one-dimensional sequences in $[0,1)$. The best lower bound known until now is already quite old: Bejian \cite{Bejian} showed in 1982 that for every sequence $(\boldsymbol{x}_n)_{n \ge 0}$ in $[0,1)$ we have $$D_N^* \ge \left(0.06015\ldots\right) \frac{\log N}{N}$$ for infinitely many $N$.
In \cite{Ostromoukhov} it was shown by Ostromoukhov that there exist $(\boldsymbol{x}_n)_{n \ge 0}$ in $[0,1)$ with $$D_N^* \le \left(0.222\ldots\right) \frac{\log N}{N}$$ for all $N$ large enough. So we state
\begin{op} \textit{Let $c$ be maximal such that for every sequence $(\boldsymbol{x}_n)_{n \ge 0}$ in $[0,1)$ we have $$ D_N^* \ge c \frac{\log N}{N} $$ for infinitely many $N$.\\ \\ Improve the currently best known bounds $0.06015 \ldots \le c \le 0.222 \ldots$ for $c$.} \end{op}
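For one-dimensional point sets the star discrepancy can be computed exactly with Niederreiter's closed formula for sorted points, so such constants can at least be explored empirically. Below, our own sketch applies it to the van der Corput sequence in base 2 (which is not the optimal sequence of Ostromoukhov, just a convenient test case).

```python
from math import log

def vdc(n):
    """van der Corput sequence in base 2."""
    x, k = 0.0, 1.0
    while n:
        n, r = divmod(n, 2)
        k /= 2
        x += r * k
    return x

def star_disc_1d(points):
    """Exact star discrepancy of a finite one-dimensional point set:
    D*_N = max_i max(x_(i) - (i-1)/N, i/N - x_(i)) over the sorted points."""
    xs = sorted(points)
    N = len(xs)
    return max(max((i + 1) / N - x, x - i / N) for i, x in enumerate(xs))

def normalised(N):
    """N * D*_N / log N along the van der Corput sequence."""
    return N * star_disc_1d([vdc(n) for n in range(N)]) / log(N)
```

Note that at $N = 2^k$ the first $N$ van der Corput points are exactly the grid $\{j/2^k\}$, so $D_N^* = 2^{-k}$ there; the interesting (large) values of $N \cdot D_N^*/\log N$ occur at other $N$ and must be scanned over $N$.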
First new investigations of the author in this direction show that, by refining a method developed by Liardet \cite{Liardet} and by Tijdeman and Wagner \cite{TijdWag}, a rather simple proof of $c \ge 0.06182$ should be possible.
As already noted above it is the most well-known conjecture in the theory of irregularities of distribution, that for every sequence $(\boldsymbol{z}_n)_{n \ge 0}$ in $[0,1)^d$ we have $$ D_N \ge c_d \frac{(\log N)^d}{N} $$ for a constant $c_d > 0$ and for infinitely many $N$. At the moment the best known general lower bound for the discrepancy of sequences in $[0,1)^d$ is $$ D_N \ge c_d \frac{(\log N)^{\frac{d}{2} + \epsilon(d)}}{N} $$ for infinitely many $N$, with some small $\epsilon(d) > 0$.
Indeed the problem is that even for some of the most well known and seemingly simple sequences in dimensions $d \ge 2$ we do not know the right order of discrepancy, like for example for the $2$-dimensional Kronecker sequences, for the $2$-dimensional Halton sequences and for most of the $2$-dimensional digital low-discrepancy sequences.
Faure has shown in \cite{Faure} that for a certain digital low-discrepancy sequence (i.e., with $D_N \le c \cdot \frac{(\log N)^2}{N}$ for all $N$) we indeed also have $$ D_N \ge c' \frac{(\log N)^2}{N} $$ for infinitely many $N$.
\begin{op} \textit{Show that for all $2$-dimensional digital sequences in base 2 we have $$ D_N \ge c \frac{(\log N)^2}{N} $$ for infinitely many $N$.\\ \\ Or, show this estimate at least for a certain class of such (low-discrepancy) sequences (e.g. for NUT sequences or finite-row sequences).} \end{op}
A probably even more difficult and challenging, but for us most appealing, problem is to find the right order of discrepancy for the Halton sequence in dimension 2. Until now there is no better lower bound known for the discrepancy of this sequence than the general lower bound given by Bilyk and Lacey, which holds for \textbf{all} sequences in $[0,1)^2$.
So we finish this paper, which should give the reader a survey of some recent results in discrepancy theory and of some open problems in current research, with the author's favorite open problem listed in this paper:
\begin{op} \textit{Give an improved lower bound for the discrepancy of the Halton sequence in dimension 2.\\ \\ In the best case decide whether the right order of the discrepancy is $\frac{(\log N)^2}{N}$, or not.} \end{op}
\end{document}
\begin{document}
\title[On entropy of $\Phi$-irregular and $\Phi$-level sets]{On entropy of $\Phi$-irregular and $\Phi$-level sets in maps with the shadowing property}
\author[M. Fory\'{s}-Krawiec]{Magdalena Fory\'{s}-Krawiec} \address[M. Fory\'s-Krawiec]{
National Supercomputing Centre IT4Innovations, Division of the University of Ostrava,
Institute for Research and Applications of Fuzzy Modeling,
30. dubna 22, 70103 Ostrava,
Czech Republic} \email{magdalena.forys@osu.cz}
\author[J. Kupka]{Jiri Kupka} \address[J. Kupka]{National Supercomputing Centre IT4Innovations, Division of the University of Ostrava,
Institute for Research and Applications of Fuzzy Modeling,
30. dubna 22, 70103 Ostrava,
Czech Republic} \email{jiri.kupka@osu.cz}
\author[P. Oprocha]{Piotr Oprocha} \address[P. Oprocha]{AGH University of Science and Technology, Faculty of Applied
Mathematics, al.
Mickiewicza 30, 30-059 Krak\'ow, Poland
-- and --
National Supercomputing Centre IT4Innovations, Division of the University of Ostrava,
Institute for Research and Applications of Fuzzy Modeling,
30. dubna 22, 70103 Ostrava,
Czech Republic} \email{oprocha@agh.edu.pl}
\author[X. Tian]{Xueting Tian} \address[X. Tian]{School of Mathematical Sciences, Fudan University, Shanghai 200433, People's Republic of China} \email{xuetingtian@163.com} \maketitle
\section{Introduction} Studies of the shadowing property have a long tradition within the theory of dynamical systems. They originated in the work of Bowen and Anosov in the 1970s. These early works brought evidence that there are strong connections between the shadowing property, entropy and ergodic measures (e.g. see \cite{Bow1}, \cite{Bow2}), providing motivation for intensive studies over the last 50 years. It was also Bowen who introduced the specification property, a strong mixing condition that can be observed in mixing maps with the shadowing property. This started a very interesting chapter in the study of tracing (pseudo) orbits. In particular, in \cite{Sig1} Sigmund showed that for dynamical systems with the periodic specification property the set of ergodic measures is the complement of a set of first category in the space of invariant measures. In fact, the technique developed by Sigmund, which allows one to approximate a given measure by an ergodic measure, is among the standard tools nowadays.
It was also observed that the regularity of dynamics guaranteed by the ergodic theorem is always complemented by quite ``wild'' or ``irregular'' dynamical behavior. Let us state more precisely what we mean by these irregularities. For a continuous function $\Phi \in \mathcal{C}(X,\mathbb{R})$ and every $x \in X$ we call the sum $\frac{1}{n}\sum_{i=0}^{n-1}\Phi(T^ix)$ a \textit{Birkhoff average}. By the classical Birkhoff ergodic theorem these averages converge on a set of points of full measure with respect to every $T$-invariant measure. The points for which the Birkhoff averages converge are called \textit{$\Phi$-regular points}. The complementary set is called a~\textit{$\Phi$-irregular set} and is denoted by $I_{\Phi}(T)$. The set of all irregular points $$ I(T) = \bigcup_{\Phi\in\mathcal{C}(X,\mathbb{R})}I_{\Phi}(T) $$ is called the \textit{irregular set}. Irregular points are also referred to as \textit{points with historical behavior}, which suggests that points for which the Birkhoff average converges present just the average behavior of the system, while the irregular points capture the ``complete history'' of the system. It is clear that from the ergodic point of view irregular sets are negligible. However, if we study the qualitative aspects of their dynamics, it turns out that their dynamical structure may be quite complicated and interesting; in some cases they may even be dynamically as complex as the whole space. By \cite{FFW} we know that in symbolic dynamics $I_{\Phi}(T)$ is either empty or has full topological entropy, while in \cite{BSch} the authors showed that for a class of systems containing horseshoes and conformal repellers irregular sets carry full entropy and Hausdorff dimension. Studies of Olsen (e.g. see \cite{O1}, \cite{O2}, \cite{Ow}) on deformations and empirical measures, together with the methodology from \cite{TV}, motivated the authors of \cite{ChTS} to consider irregular points in systems with the specification property.
They showed that $I(T)$ has full topological entropy in that case, while the authors of \cite{LW} gave a topological characterization of the $\Phi$-irregular set $I_{\Phi}(T)$ in systems with the specification property, proving that it is either empty or residual. Later, Thompson made an attempt to weaken the assumptions and proved in \cite{Thom} that for dynamical systems with almost specification the set $I_{\Phi}(T)$ is either empty or carries full topological entropy. In \cite{DOT} the authors considered (possibly not transitive) systems with the shadowing property and proved that $I(T)$ carries full topological entropy when nonempty.
In fact, the shadowing property allows one to look deeper into the topological structure of the system. In \cite{Mooth} and \cite{MoothO} the authors proved that both the collection of uniformly recurrent points and the collection of regularly recurrent points are dense in the non-wandering set (that is, Toeplitz minimal systems are dense). Later, Li and Oprocha \cite{LO} extended these results, showing that points whose orbit closures are odometers are dense in the non-wandering set and, moreover, that in transitive systems with shadowing the collection of ergodic measures supported on odometers is dense in the space of invariant measures. They also proved that ergodic measures supported on Toeplitz systems may approximate other measures in the weak$^*$ topology and in terms of the value of entropy. While in the case of specification we control only convergence of measures, the shadowing property provides deeper insight into the structure of $\omega$-limit sets.
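To keep a concrete example in mind, recall the standard construction of an irregular point in the full shift $(\Sigma_2,\sigma)$: take $\Phi(x)=x_0$ and consider the point $$ x = 0^{n_1}1^{n_2}0^{n_3}1^{n_4}\dots, $$ where the block lengths grow fast enough that each block dominates the whole preceding prefix (e.g. $n_{k+1}=k(n_1+\dots+n_k)$). Along the times ending a block of $0$'s the Birkhoff averages of $\Phi$ tend to $0$, while along the times ending a block of $1$'s they tend to $1$, hence $x \in I_{\Phi}(\sigma)\subseteq I(\sigma)$.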
The successful approach of \cite{LO} provides the motivation for the present work. We aim for a better understanding of the local dynamical structure of systems with the shadowing property. The paper is organized as follows.
In the preliminary section we introduce some basic facts about dynamical systems with the shadowing property, measures and ergodic theory. In the next section we focus on $\Phi$-irregular sets in dynamical systems with the shadowing property. We estimate the entropy of the $\Phi$-irregular set over each chain recurrent class it intersects (see Theorem \ref{thmhtop}) and then use this result to express the entropy of nonempty $\Phi$-irregular sets in terms of entropies of chain recurrent classes (Corollaries \ref{cor:CRclass} and \ref{CorSup}). We also show that $\Phi$-irregular sets of full entropy are typical (Theorem~\ref{thmtyp}).
At the end, we study properties of $\Phi$-level sets (sets of points whose Birkhoff averages lie in a given set) and relate them to entropies of some ergodic measures (see Theorem~\ref{LevelSetThm}). We also consider level sets with respect to reference measures, proving local analogs of a result of L. Young \cite{Young}, who considered the problem of large deviations in systems with the specification property (see Theorem \ref{thm:v-}). While the shadowing property provides slightly better control of tracing than specification, we cannot apply the global ``gluing'' condition provided by specification. This is why we have to work in neighborhoods of chain recurrent classes. Surprisingly, despite the local character of our considerations, some global estimates are obtained.
\section{Preliminaries}
\subsection{Basic notions and definitions}
A \textit{dynamical system} is a pair $(X,T)$ consisting of a compact metric space $(X,d)$ and a continuous mapping $T:X\rightarrow X$. Let $x\in X$. By the \textit{trajectory} of the point $x$ we mean the sequence $\{T^nx\}_{n\geq 0}$, and the \emph{orbit} of $x$ is the set of all its iterates: $$ \{T^nx: n \geq 0\}. $$ For two integers $L>K\geq0$, by writing $x_{[K,L]}$ we mean a~\textit{block}, i.e. a finite part of the trajectory of $x$: $$ T^Kx, T^{K+1}x, \dots, T^{L-1}x, T^{L}x. $$
For a finite set of indices $\Lambda\subseteq\{0,1,\dots,n-1\}$ and $x,y \in X$ we define a \textit{Bowen distance} of $x,y$ along $\Lambda$ by the following formula: $$ d_{\Lambda}(x,y) = \max_{j\in \Lambda}\{d(T^{j}x,T^jy)\}, $$ and a \textit{Bowen ball} of radius $\varepsilon>0$ centered in $x\in X$ as the following set: $$ B_{\Lambda}(x,\varepsilon) = \{y \in X: d_{\Lambda}(x,y)<\varepsilon\}. $$ In particular, when $\Lambda = \{0,1,\dots,n-1\}$ we denote $d_{\Lambda}(x,y)$ by $d_n(x,y)$ and $B_{\Lambda}(x,\varepsilon)$ by $B_n(x,\varepsilon)$. A set $B_n(x,\varepsilon)$ is then called an $(n,\varepsilon)$-Bowen ball.
\begin{defn} A dynamical system $(X,T)$ is \emph{topologically transitive} if for every pair of nonempty open sets $U,V \subseteq X$ there exists an integer $M$ such that $T^M(U)\cap V \neq \emptyset$. \end{defn}
\begin{defn} Given $\delta>0$, a sequence $\{x_n\}_{n \in \mathbb{N}}\subseteq X$ is a \emph{$\delta$-pseudo-orbit} of $T$ if:
$$
d(Tx_n,x_{n+1})<\delta \text{ for all } n \in \mathbb{N}.
$$ A finite $\delta$-pseudo-orbit $\{x_n\}_{n=0}^k$ is often called a \emph{$\delta$-chain} from $x_0$ to $x_k$. \end{defn}
\begin{defn}
A map $T:X\rightarrow X$ has the \emph{shadowing property} if for all $\varepsilon>0$ there exists $\delta>0$ such that for any $\delta$-pseudo-orbit $\{x_n\}_{n\in \mathbb{N}}$ there exists a point $y \in X$ such that:
$$
d(T^ny,x_n)<\varepsilon \text{ for all }n \in \mathbb{N}.
$$
We then say that the $\delta$-pseudo-orbit $\{x_n\}_{n \in \mathbb{N}}$ is \emph{$\varepsilon$-shadowed} (or \emph{$\varepsilon$-traced}) by the orbit of $y$. \end{defn}
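For example, the full shift $(\Sigma_2,\sigma)$ on two symbols, with the metric $d(x,y) = 2^{-\min\{i\,:\,x_i\neq y_i\}}$, has the shadowing property. Given $\varepsilon>0$ choose $N$ with $2^{-N}<\varepsilon$ and put $\delta = 2^{-N}$. If $\{x^{(n)}\}_{n\in \mathbb{N}}$ is a $\delta$-pseudo-orbit, then $d(\sigma x^{(n)},x^{(n+1)})<2^{-N}$ forces $x^{(n+1)}_i = x^{(n)}_{i+1}$ for $0\leq i\leq N$, so the point $y$ defined by $y_n = x^{(n)}_0$ satisfies $(\sigma^n y)_i = x^{(n+i)}_0 = x^{(n)}_i$ for $0\leq i \leq N$, and therefore $d(\sigma^ny,x^{(n)})\leq 2^{-(N+1)}<\varepsilon$ for all $n \in \mathbb{N}$.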
For a point $x \in X$ define its \textit{$\omega$-limit set} $\omega_T (x)$ as the set of limit points of the trajectory of $x$: $$\omega_T(x) = \bigcap_{n=0}^{\infty} \overline{\{T^k(x): k\geq n\}}.$$ Let $Y^{\omega}$ denote the set of all points whose $\omega$-limit sets are subsets of $Y$, that is: $$ Y^{\omega}=\{x \in X: \omega_T(x)\subseteq Y\}. $$ It is well known that $Y^{\omega}$ need not be compact. \begin{defn}
Let $x,y \in X$ and $\varepsilon>0$. If there exist a sequence of points $\{x_i\}_{i=0}^n~\subseteq~X$ and an increasing sequence of positive integers $\{t_i\}_{i=0}^{n-1}$ such that:
\begin{align*}
x_0&=x,\\
x_n&=y,\\
d(T^{t_i}x_i,x_{i+1})&<\varepsilon \text{ for } i=0,\dots,n-1,
\end{align*}
we say that $x$ is in a \emph{chain stable set} of $y$. \end{defn}
If $x$ is in a chain stable set of $y$ and $y$ is in a chain stable set of $x$, we say that the points $x$ and $y$ are \textit{chain related}. If $x$ is chain related to itself, we say that $x$ is a \textit{chain recurrent point}. By $CR(X)\subseteq X$ we denote the set of chain recurrent points in $X$; note that the chain relation is an equivalence relation on $CR(X)$, and a \textit{chain recurrent class} is an equivalence class in $CR(X)$ given by the chain relation.
\begin{defn}
A subset $Y \subseteq X$ is an \emph{internally chain transitive set} if for all $x,y \in Y$ and for every $\varepsilon>0$ there exists an $\varepsilon$-chain from $x$ to $y$ consisting only of points from $Y$. \end{defn}
\begin{rem}\label{omegaSub} Since each chain recurrent class $Y\subseteq X$ is invariant, we always have $Y\subseteq Y^{\omega}$. \end{rem} The statement follows from the fact that every chain recurrent class is closed and $T$-invariant ($T(Y) \subseteq Y$).
\begin{defn}
A point $x \in X$ is \emph{$\mu$-generic} for a measure $\mu$ if for every continuous mapping $f:X\rightarrow \mathbb{R}$ the following condition holds:
$$\lim_{n\rightarrow \infty}\frac{1}{n}\sum_{i=0}^{n-1}f(T^ix)=\int_Xfd\mu,$$
or, equivalently,
$$
\lim_{n\rightarrow \infty}\frac{1}{n}\sum_{i=0}^{n-1}\delta_{T^ix}=\mu \quad \text{in the weak$^*$ topology},
$$
where $\delta_y$ denotes the Dirac measure on $y$. \end{defn}
From the above definition it follows that the orbit of a generic point $x$ is dense in the support of the measure, provided that $x$ belongs to that support. In that case the orbit of $x$ has a~nonempty intersection with every open neighbourhood of $x$, hence every such generic point is also recurrent.
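For example, let $\mu$ be the $(\frac{1}{2},\frac{1}{2})$-Bernoulli measure on the full shift $(\Sigma_2,\sigma)$. Characteristic functions of cylinder sets are continuous (cylinders are clopen) and their linear combinations are dense in $\mathcal{C}(\Sigma_2,\mathbb{R})$, so a point $x$ is $\mu$-generic if and only if every word $w \in \{0,1\}^k$ occurs in $x$ with asymptotic frequency $2^{-k}$; by the Birkhoff ergodic theorem $\mu$-almost every point has this property.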
\subsection{Measures}
For a compact metric space $X$ by $\mathcal{B}$ we denote the $\sigma$-algebra of Borel subsets of $X$. Let $\mathcal{M}(X)$ be the set of all Borel probability measures on the space $(X,\mathcal{B})$. The \textit{support} of a measure $\mu \in \mathcal{M}(X)$, denoted by $\supp(\mu)$, is the smallest closed subset $C\subseteq X$ such that $\mu(C)=1$.
For a dynamical system $(X,T)$ we say that a measure $\mu \in \mathcal{M}(X)$ is \textit{$T$-invariant} if $\mu(T^{-1}A)=\mu(A)$ for all $A \in \mathcal{B}$, and $\mu$ is \textit{ergodic} if the only Borel sets $B$ satisfying $T^{-1}B = B$ are sets of zero or full measure, i.e. $\mu(B)=0$ or $\mu(B)=1$. By $\mathcal{M}_T(X)$ we denote the set of all $T$-invariant measures and the set of all ergodic measures on $X$ is denoted by $\mathcal{M}_e(X)$.
By the Riesz representation theorem we may look at $\mathcal{M}(X)$ as a compact metric space with the metric given by the weak$^*$ topology of the dual space $\mathcal{C}(X,\mathbb{R})$. To define the convergence in $\mathcal{M}(X)$, we say that a sequence of measures $\{\mu_n\}_{n \in \mathbb{N}}$ converges to a~measure $\mu \in \mathcal{M}(X)$ in the weak$^*$ topology if the following expression holds for every $\phi \in \mathcal{C}(X,\mathbb{R})$: $$ \lim_{n\rightarrow\infty}\int\phi d\mu_n = \int \phi d\mu. $$
Let $BL(X)$ be the set of all bounded Lipschitz real-valued functions on $X$. Note that $BL(X)$ is dense in $\mathcal{C}(X,\mathbb{R})$. Let $\|\varphi\|_{BL} = \|\varphi\|_{\infty}+\|\varphi\|_{L}$, where $\|\varphi\|_{\infty}$ is the supremum norm, and: $$
\|\varphi\|_{L} = \sup_{x\neq y}\frac{|\varphi(x)-\varphi(y)|}{d(x,y)}<\infty. $$
Fix a countable sequence $\{\varphi_n\}_{n \in \mathbb{N}} \subset \{\varphi \in BL(X): \|\varphi\|_{BL}\leq 1 \}$ which is dense in the unit ball of $BL(X)$, and for measures $\mu,\nu~\in~\mathcal{M}(X)$ define the following metric: $$
d_{\mathcal{M}}(\mu,\nu) = \sum_{n=1}^{\infty}\frac{1}{2^n}\left|\int \varphi_n d\mu - \int \varphi_n d\nu\right|. $$ Then $d_{\mathcal{M}}$ is a metric on $\mathcal{M}(X)$ and the topology it induces coincides with the weak$^*$ topology.
\subsection{Metric entropy}
The idea of metric entropy has its motivation in Shannon's information theory. We will present the definition of the so-called Kolmogorov-Sinai metric entropy. Let $(X,\mathcal{B},\mu)$ be a probability space, where $\mathcal{B}$ is a $\sigma$-algebra of subsets of $X$. Let $T:X\rightarrow X$ be a measure preserving transformation, i.e. $T^{-1}A \in~\mathcal{B}$ and $\mu(A) = \mu(T^{-1}A)$ for all $A \in \mathcal{B}$. Let $\alpha = \{A_1,\dots,A_k\}$ be a finite partition of $X$. Put $T^{-i}\alpha = \{ T^{-i}A_1,\dots, T^{-i}A_k\}$. For two partitions $\alpha$ and $\beta$ denote $\alpha\vee\beta = \{A\cap B: A \in \alpha, B \in \beta\}$, so that $\bigvee_{i=0}^{n-1}T^{-i}\alpha$ is the collection of sets of the form $\{x: x \in A_{i_0},Tx \in A_{i_1},\dots, T^{n-1}x \in A_{i_{n-1}}\}$ for some $(i_0,i_1,\dots,i_{n-1})$. Now we give a formal definition of the metric entropy using the above notation. \begin{defn} The metric entropy of the mapping $T$ is defined as follows: \begin{align*} H(\alpha) &= H(\mu(A_1),\dots,\mu( A_k)) = -\sum_{i=1}^{k}\mu(A_i)\log \mu(A_i),\\ h_{\mu}(T,\alpha) &= \lim_{n\rightarrow\infty}\frac{1}{n}H(\bigvee_{i=0}^{n-1}T^{-i}\alpha),\\ h_{\mu}(T) &= \sup_{\alpha}h_{\mu}(T,\alpha). \end{align*} \end{defn} Moreover, for ergodic measures we have the following characterization of the metric entropy presented in \cite{BK}: \begin{thm} Let $\mu$ be an ergodic measure. Then for $\mu$-a.e. $x \in X$: $$ h_{\mu}(T) = \lim_{\varepsilon\rightarrow 0} \limsup_{n\rightarrow\infty}-\frac{1}{n}\log \mu(B_n(x,\varepsilon)) = \lim_{\varepsilon\rightarrow 0}\liminf_{n\rightarrow\infty}-\frac{1}{n}\log \mu(B_n(x,\varepsilon)). $$ \end{thm}
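As a worked example, consider the $(p,1-p)$-Bernoulli measure $\mu$ on the full shift $(\Sigma_2,\sigma)$ and the partition $\alpha = \{[0],[1]\}$ into cylinders of length one. The elements of $\bigvee_{i=0}^{n-1}\sigma^{-i}\alpha$ are exactly the cylinders of length $n$, and by independence of the coordinates: $$ H\left(\bigvee_{i=0}^{n-1}\sigma^{-i}\alpha\right) = -\sum_{w\in\{0,1\}^n}\mu([w])\log\mu([w]) = n\left(-p\log p - (1-p)\log(1-p)\right), $$ hence $h_{\mu}(\sigma,\alpha) = -p\log p-(1-p)\log(1-p)$, and since $\alpha$ is a generating partition this is also the value of $h_{\mu}(\sigma)$.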
\subsection{Topological entropy} Below we define two types of topological entropy, namely the upper capacity topological entropy and the Bowen topological entropy. The reader should keep in mind that these two entropies coincide on compact invariant sets.
Let $E\subseteq X$. A set $S\subseteq X$ is \textit{$(n,\varepsilon)$-separated} for $E$ if $S\subseteq E$ and $d_{n}(x,y)>\varepsilon$ for any $x,y \in S$, $x\neq y$. A set $S\subseteq X$ is \textit{$(n,\varepsilon)$-spanning} for $E$ if $S\subseteq E$ and for any $x \in E$ there exists $y \in S$ such that $d_{n}(x,y)<\varepsilon$. Define: \begin{eqnarray*}
s_n(E,\varepsilon) &=& \sup\{|S|: S \text{ is }(n,\varepsilon)\text{-separated for } E\},\\
r_n(E,\varepsilon) &=& \inf\{|S|:S \text{ is }(n,\varepsilon)\text{-spanning for }E\}. \end{eqnarray*} It is true that: \begin{equation}\label{SepSpanSet}
r_n(E,\varepsilon)\leq s_n(E,\varepsilon)\leq r_n(E,\frac{\varepsilon}{2}). \end{equation} \begin{defn} The \textit{upper capacity topological entropy} of $E\subset X$ is defined by the following formula:
$$
h_d(T,E) = \lim_{\varepsilon\rightarrow 0}\limsup_{n\rightarrow\infty}\frac{\log s_n(E,\varepsilon)}{n}=\lim_{\varepsilon\rightarrow0}\limsup_{n\rightarrow\infty}\frac{\log r_n(E,\varepsilon)}{n}.
$$ \end{defn} Now let $\mathcal{G}_n(E,\varepsilon)$ be the collection of all finite or countable coverings of the set $E$ with the Bowen balls $B_v(x,\varepsilon)$ for $v\geq n$. Define: $$ C(E;t,n,\varepsilon,T) = \inf_{C \in \mathcal{G}_n(E,\varepsilon)}\sum_{B_v(x,\varepsilon) \in C}e^{-tv} $$ and $$ C(E;t,\varepsilon,T) = \lim_{n\rightarrow\infty}C(E;t,n,\varepsilon,T). $$ Set: $$ h_{\text{top}}(E,\varepsilon,T) = \inf\{t: C(E;t,\varepsilon,T)=0 \} = \sup\{t: C(E;t,\varepsilon,T)=\infty \}. $$ \begin{defn} The \textit{Bowen topological entropy} of the set $E\subset X$ is defined by the following formula: $$ h_{\text{top}}(T,E) = \lim_{\varepsilon\rightarrow 0}h_{\text{top}}(E,\varepsilon,T). $$ \end{defn}
By the definitions of the metric and the topological entropy we have $h_{\mu}(T)\leq h_{\text{top}}(T,X)$. The following theorem, known as the \emph{variational principle}, states a stronger relation between the metric and the topological entropy: \begin{thm} Let $T:X\rightarrow X$ be a continuous map of a compact metric space $X$. Then: $$ h_{\text{top}}(T,X) = \sup_{\mu}h_{\mu}(T), $$ where the supremum is taken over all $T$-invariant Borel probability measures $\mu$. \end{thm} For ergodic measures we also have the \emph{Katok entropy formula} proved in \cite{K}, which is a generalization of Bowen's formula: \begin{equation}\label{KatokEntrForm} h_{\mu}(T) = \lim_{\varepsilon\rightarrow 0}\lim_{n\rightarrow\infty}\frac{1}{n}\log N_T(n,\varepsilon,\delta), \end{equation} where $N_T(n,\varepsilon,\delta)$ denotes the smallest number of $(n,\varepsilon)$-Bowen balls needed to cover a~subset of $X$ of $\mu$-measure at least $1-\delta$, for an ergodic measure $\mu$ and $\delta \in (0,1)$.
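To illustrate the variational principle, consider the full shift on $k$ symbols: $h_{\text{top}}(\sigma,\Sigma_k) = \log k$, while the $(p_1,\dots,p_k)$-Bernoulli measure has metric entropy $-\sum_{i=1}^k p_i\log p_i \leq \log k$, with equality exactly for $p_1=\dots=p_k=\frac{1}{k}$. Thus the supremum in the variational principle is attained here by the uniform Bernoulli measure.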
\section{Main results}
\subsection{$\Phi$-irregular points and shadowing} \begin{thm}\label{thmhtop} Let $(X,T)$ be a dynamical system with the shadowing property and let $Y\subseteq X$ be a chain recurrent class. If $\Phi\in \mathcal{C}(X,\mathbb{R})$ is such that there exist $\mu_1,\mu_2 \in \mathcal{M}_e(Y)$ with: $$ \int \Phi d\mu_1 \neq \int \Phi d\mu_2 $$ then $h_{\text{top}}(T,I_\Phi(T))\geq h_{\text{top}}(T,Y)$. \end{thm}
\begin{proof} It suffices to show that for any $\gamma>0$ we have: $$h_{\text{top}}(T,I_\Phi(T))\geq h_{\text{top}}(T,Y)-6\gamma.$$ We will achieve this goal by constructing a closed set $A\subset I_\Phi(T)$ such that: $$h_{\text{top}}(T,A)\geq h_{\text{top}}(T,Y)-6\gamma.$$ So let $\gamma >0$ be fixed. Without loss of generality we can assume by the variational principle that: \begin{equation}\label{VarPrinc} h_{\mu_1}(T)>h_{\text{top}}(T,Y)-\gamma. \end{equation} Denote: \begin{align} \int\Phi d\mu_1&=\alpha, \label{alpha} \\ \int\Phi d\mu_2&=\beta. \label{beta} \end{align} Without loss of generality we may assume that $\alpha<\beta$. For $\xi_0 \in (0,1)$ sufficiently close to $1$ we have: \begin{equation}\label{1-xi} 1-\xi_0 < \frac{\gamma}{h_{\text{top}}(T,Y)}. \end{equation} Take $\rho \in (1/2,1)$ such that: \begin{equation}\label{rho} 1-\xi_0\rho < \frac{\gamma}{h_{\text{top}}(T,Y)}. \end{equation}
Choose $\eta>0$ such that $8\eta< (1-\xi_0)(\beta-\alpha)$. The function $\Phi$ is continuous, so it is also bounded on $X$. In particular, there is $M>0$ such that $|\Phi(x)|+\beta<M$ for all $x \in X$.
Since $\mu_1$-generic points have full $\mu_1$ measure, for sufficiently large $\tilde{L}_1>0$ there is a subset $D_1\subseteq Y$ with $\mu_1(D_1)>\frac{3}{4}$ such that for every $x \in D_1$ and for every $n>\tilde{L}_1$ we have: \begin{equation}\label{LCond}
|\frac{1}{n}\sum_{i=0}^{n-1}\Phi(T^ix)-\alpha|<\frac{\eta}{4}. \end{equation} By the Katok entropy formula (\ref{KatokEntrForm}), for each sufficiently small $\varepsilon>0$ there is $N>0$ such that for all $n>N$ we have: $$ \frac{1}{n}\log N_T(n,4\varepsilon,\frac{1}{2})>h_{\mu_1}(T)-\gamma, $$ where $N_T(n,4\varepsilon,\frac{1}{2})$ is defined as in (\ref{KatokEntrForm}). If we denote by $E_m$ a maximal $(m,4\varepsilon)$-separated set in $D_1$, then for $m > N$ we have: \begin{equation}\label{E'set}
|E_m|\geq N_T(m,4\varepsilon,\frac{1}{2})> e^{m(h_{\mu_1}(T)-\gamma)}, \end{equation}
since $\{B_m(x,4\varepsilon)\}_{x \in E_m}$ is a cover of $D_1$. Moreover, by decreasing $\varepsilon$ if necessary, we can assume that $d(x,y)<\varepsilon$ implies $|\Phi(x)-\Phi(y)|<\eta$. Fix $\delta>0$ such that any $2\delta$-pseudo-orbit is $\varepsilon$-traced. Let $\mathcal{U} = \{U_i\}_{i=1}^S$ be a finite open cover of $Y$ such that $\mesh (\mathcal{U}) < \delta $. For $i,j \in \{1,\dots,S\}$ by $\omega_{ij}$ we denote the length of a chosen $\delta$-chain between $U_i$ and $U_j$ (such a chain exists, since $Y$ is a chain recurrent class). Let $\omega_{max} = \max_{i,j \in \{1,\dots,S\}}\omega_{ij}$. For every $m\geq N$ and $i,j~\in~\{1,\dots,S\}$ we define a family of sets: $$ \Lambda^{(m)}_{ij}=\{x \in E_m\cap U_i:T^mx \in U_j\}. $$ For each $m>N$ by $i_m,j_m$ we denote the indices of a set with the highest cardinality, that is: $$
|\Lambda^{(m)}_{i_mj_m}| = \max\left\{|\Lambda^{(m)}_{ij}|:i,j\in \{1,\dots,S\}\right\}. $$ Note that there are exactly $S$ sets in the cover which gives us at most $S^2$ possible pairs $(i_m,j_m)\in \{1,\dots,S\}\times \{1,\dots,S\}$. Hence for the sequence $\{\Lambda^{(m)}_{i_mj_m}\}_{m=N+1}^{\infty}$ we can find an increasing subsequence $\{m_k\}_{k=0}^{\infty}$ and an indexing pair $(\bar{i},\bar{j})$ which occurs infinitely many times, i.e. $(i_{m_k},j_{m_k}) = (\bar{i},\bar{j})$ for all $k\geq 0$.
Denote $U=U_{\bar{i}}$, $V = U_{\bar{j}}$ and $E'_{m_k} = \Lambda^{(m_k)}_{\bar{i}\bar{j}}$. That way we get a new family $\{E'_{m_k}\}_{k=0}^{\infty}$ of $(m_k,4\varepsilon)$-separated sets, with: $$ E'_{m_k} = \{ x \in E_{m_k}\cap U: T^{m_k}x \in V \} $$ and clearly: $$
S^2|E'_{m_k}|\geq |E_{m_k}| > e^{m_k(h_{\mu_1}(T)-\gamma)}. $$ Consequently, by (\ref{VarPrinc}), for sufficiently large $k$ we have: \begin{equation}\label{2gamma}
|E'_{m_k}|> e^{m_k(h_{\text{top}}(T,Y)-2\gamma)}. \end{equation} Choose $k_1 \in \mathbb{N}$ large enough so that: \begin{equation}\label{LBound} L=m_{k_1} > \frac{\omega_{max}M}{\eta} \end{equation} and each $m_k$ for $k>k_1$ satisfies both (\ref{LCond}) and (\ref{2gamma}).
Analogously, since $\mu_2$-generic points have full $\mu_2$ measure, for sufficiently large $\tilde{L}_2>0$ there is a subset $D_2\subseteq Y$ with $\mu_2(D_2)>3/4$ such that for every $x \in D_2$ and for every $n>\tilde{L}_2$ we have: \begin{equation}\label{L'Cond}
|\frac{1}{n}\sum_{i=0}^{n-1}\Phi(T^ix)-\beta|<\frac{\eta}{4}. \end{equation} Take: $$ \tilde{L} = \max\{\tilde{L}_1,\tilde{L}_2\}. $$ Choose some sufficiently large $k_2 \in \mathbb{N}$ such that: $$ m_{k_2} > \max \{ \frac{2\tilde{L}}{\xi_0}, \frac{\tilde{L}}{1-\xi_0} \} $$ and there is $J>m_{k_2}$ such that: $$ \frac{\xi_0\rho}{2} < \frac{m_{k_2}}{J} < \xi_0 $$ and \begin{equation}\label{KBound} J>\frac{M\omega_{max}}{\eta}. \end{equation} It follows that: \begin{align} 8\eta&< (1-\frac{m_{k_2}}{J})(\beta-\alpha),\\ (1-\frac{m_{k_2}}{J})&<\frac{\gamma}{h_{\text{top}}(T,Y)}.\label{gammaentr} \end{align} Define $\xi = \frac{m_{k_2}}{J}$. That way $\xi J$ satisfies (\ref{LCond}) and (\ref{2gamma}), $(1-\xi) J$ satisfies (\ref{L'Cond}) and both $\xi J$ and $(1-\xi)J$ are integers. Let:
$$
\zeta = \xi\alpha + (1-\xi)\beta.
$$ Fix an arbitrary $\mu_2$-generic point $y \in D_2$. Let $U'$ be a set from $\mathcal{U}$ such that $y \in U'$ and fix any $V' \in \mathcal{U}$ such that $T^{(1-\xi)J}(y) \in V'$. By the definition of $\omega_{max}$ there are $\delta$-chains $\{p_i\}_{i=0}^{P}$, $\{q_i\}_{i=0}^{Q}$, $\{w_i\}_{i=0}^{W}$ of length at most $\omega_{max}$ such that: \begin{itemize}
\item[(p)] $\{p_i\}_{i=0}^{P}$ is such that $p_0 \in V'$ and $p_P \in U$,
\item[(q)] $\{q_i\}_{i=0}^{Q}$ is such that $q_0 \in V$ with $q_Q \in U$,
\item[(w)] $\{w_i\}_{i=0}^{W}$ is such that $w_0 \in V$ and $w_W \in U'$. \end{itemize} In case some of the relevant sets coincide, we simply take the corresponding $\delta$-chain to be of length zero. If we put $K = J+W$, then $K$ satisfies (\ref{KBound}) as well. Take $\Gamma_K$ as the set of all $\delta$-pseudo-orbits of length $K$ built from the following blocks: \begin{enumerate}
\item block of length $\xi J$ of the orbit of some point from $E'_{\xi J}$ (by definition that point is $\mu_1$-generic),
\item $\delta$-chain $\{w_i\}_{i=0}^{W-1}$ from $V$ to $U'$ (note that we skipped the last point of the chain),
\item block of length $(1-\xi)J$ of the orbit of the chosen point $y \in U'$ (by definition $y$ is $\mu_2$-generic). \end{enumerate} Note that for any element $y \in \Gamma_K$ we have the following estimate: \begin{multline}\label{zetasplit}
\left|\frac{1}{K}\sum_{i=0}^{K-1}\Phi(T^iy)-\zeta\right| \leq \left|\frac{1}{K}\sum_{i=0}^{\xi J -1}\Phi(T^iy)-\frac{\xi J}{K}\alpha\right|\\ +
\left|\frac{1}{K}\sum_{i=\xi J}^{\xi J + W-1}\Phi(T^iy)-\frac{K-J}{K}\zeta\right| \\
+ \left|\frac{1}{K}\sum_{i=\xi J + W}^{K-1}\Phi(T^iy)-\frac{(1-\xi)J}{K}\beta\right|. \end{multline} Now let us present the estimates of the three summands above. For the first one we get: \begin{equation}\label{zeta1}
\left|\frac{1}{K}\sum_{i=0}^{\xi J-1}\Phi(T^iy)-\frac{\xi J}{K}\alpha\right| = \frac{\xi J}{K}\left|\frac{1}{\xi J}\sum_{i=0}^{\xi J-1}\Phi(T^iy)-\alpha\right|<\frac{\xi J }{K}\frac{\eta}{4}<\frac{\eta}{4}, \end{equation} and, for the second one, by using (\ref{KBound}) and the definition of $K$, we get: \begin{align}\label{zeta2}
\left|\frac{1}{K}\sum_{i=\xi J}^{\xi J +W-1}\Phi(T^iy) \right.&\left.- \frac{K-J}{K}\zeta\right| < \frac{1}{K}\sum_{i=0}^{W-1}\left|\Phi(T^{i+\xi J}y)-\frac{K-J}{W}\zeta\right|\\
& = \frac{1}{K}\sum_{i=0}^{W-1}\left|\Phi(T^{i+\xi J}y)-\zeta\right| <\frac{WM}{K}< \eta. \nonumber \end{align} And, finally, for the third one we have: \begin{align}\label{zeta3}
\left|\frac{1}{K}\sum_{i=\xi J +W}^{K-1}\Phi(T^iy)\right. &\left.-\frac{(1-\xi)J}{K}\beta\right| = \frac{(1-\xi)J}{K}\left|\frac{1}{(1-\xi)J}\sum_{i=\xi J + W}^{K-1}\Phi(T^iy)-\beta\right|\\
& = \frac{(1-\xi) J}{K}\left|\frac{1}{(1-\xi)J}\sum_{i=0}^{(1-\xi)J-1}\Phi(T^{i+\xi J + W}y)-\beta\right| \nonumber \\ &<\frac{(1-\xi)J}{K}\frac{\eta}{4}<\frac{\eta}{4}. \nonumber \end{align} Going back to (\ref{zetasplit}), taking (\ref{zeta1}), (\ref{zeta2}), (\ref{zeta3}) into consideration we have: \begin{equation}\label{KzetaEq}
\left|\frac{1}{K}\sum_{i=0}^{K-1}\Phi(T^iy)-\zeta\right| < 2\eta. \end{equation} By $(\ref{gammaentr})$ we have: \begin{equation}\label{GammaK}
|\Gamma_K|\geq |E'_{\xi J}| \geq e^{\xi J(h_{\text{top}}(T,Y)-2\gamma)}\geq e^{J(h_{\text{top}}(T,Y)-3\gamma)}. \end{equation}
{\bf Construction (*)}
We are going to define a $\delta$-pseudo-orbit $\mathcal Z=\{z_i\}_{i=0}^{\infty}$ in $Y$ combining cyclically two types of blocks: \begin{enumerate}\label{blockTypes} \item[(C1)] blocks of length $L+Q$ consisting of the part of the orbit $x_{[0,L-1]}$ of some point $x$ from $E'_L$ and the $\delta$-chain $\{q_i\}_{i=0}^{Q-1}$ returning from $V$ to $U$ (note that $T^{L}x, q_0 \in V$ and $q_Q, x \in U$), \item[(C2)] blocks of length $K+P$ consisting of the pseudo-orbit from $\Gamma_K$ and the $\delta$-chain $\{p_i\}_{i=0}^{P-1}$ returning to $U$ (note that $T^{(1-\xi)J}y, p_0 \in V'$ and $p_P \in U$ and the first point of each pseudo-orbit in $\Gamma_K$ is in $U$). \end{enumerate} By the above, any concatenation of blocks of types (C1) and (C2) is a~$\delta$-pseudo-orbit as well.
Before we start the construction of the $\delta$-pseudo-orbit we would like to make sure that the total length of the blocks (C1) and (C2) is divisible by the same number in every step of the construction. That is why we choose positive integers $\lambda, \kappa$ such that: $$ \lambda(L+Q) = \kappa(K+P). $$ Without loss of generality we may assume that $\lambda Q \geq \kappa P$, which implies: \begin{equation}\label{LKrelation} \lambda L\leq \kappa K. \end{equation} We also choose two increasing sequences of integers $\{l_n\}_{n=1}^{\infty}$, $\{l_n'\}_{n=1}^{\infty}$ and two inductively defined sequences $\{a_n\}_{n=1}^\infty$ and $\{b_n\}_{n=1}^\infty$ (see (\ref{EQ:anbn}) below) so that if we denote: $$M_n = l_n\lambda (L+Q),$$ and $$M_n' = l_n'\kappa (K+P),$$ then the following conditions are satisfied: \begin{eqnarray} M_n &>& \frac{M-\eta}{\eta}a_n, \label{MCond}\\ M_n' &>& \frac{M-\eta}{\eta}b_n\label{M'Cond}. \end{eqnarray} The sequences of integers $\{a_n\}_{n=1}^{\infty}$ and $\{ b_n\}_{n=1}^{\infty}$ are defined inductively as follows for $n \geq 1$: \begin{align} a_1 &= 0,\nonumber \\ b_n &= a_n+ M_n, \label{EQ:anbn}\\ a_{n+1} &= b_n+M_n' = a_n + M_n + M_n'. \nonumber \end{align} Values $M_n$ (resp. $M'_n$) determine the length of the blocks $\tilde{x}^{(n)}$ (resp. $\tilde{y}^{(n)}$) which we will use in the construction of the $\delta$-pseudo-orbit below. The first one is the~concatenation of $l_n\lambda$ blocks (C1). The second is the concatenation of $l_n'\kappa$ blocks (C2). Strictly speaking: \begin{align*} \tilde{x}^{(n)} &= x^{(1)}_{[0,L-1]}q_0\dots q_{Q-1} x^{(2)}_{[0,L-1]}q_0\dots q_{Q-1}\dots x^{(l_n\lambda)}_{[0,L-1]}q_0\dots q_{Q-1}\\ \tilde{y}^{(n)} &= y^{(1)}_{[0,K-1]}p_0\dots p_{P-1}y^{(2)}_{[0,K-1]} p_0\dots p_{P-1}\dots y^{(l_n'\kappa)}_{[0,K-1]}p_0\dots p_{P-1}, \end{align*} for some not necessarily pairwise distinct points $x^{(i)} \in E'_L\subseteq U$ for $i=1,\dots,l_n\lambda$ and $y^{(j)} \in \Gamma_K$ for $j =1,\dots, l_n'\kappa$.
Note that $\tilde{x}^{(n)}$ and $\tilde{y}^{(n)}$ each represent in fact a whole family of $\delta$-pseudo-orbits, depending on the choice of the points $x^{(i)}$ and of the elements of $\Gamma_K$, respectively.
Observe that the value $b_n$ is the length of the initial block of the pseudo-orbit $\mathcal{Z}$ up to the moment when we have used the block $\tilde{x}^{(n)}$ in the $n$th step of our construction. The value $a_n$ is the total length of the $\delta$-pseudo-orbit $\mathcal{Z}$ after $(n-1)$ steps of the construction, which is also the moment when we have used the block $\tilde{y}^{(n-1)}$ in the $(n-1)$st step of the construction.
Now let us present more precisely the steps of the construction of the infinite $\delta$-pseudo-orbit $\mathcal{Z}$. As described above, we first have: \begin{align*} z_{[0,M_1-1]} &= \tilde{x}^{(1)},\\ z_{[M_1,M_1+M_1'-1]} &= \tilde{y}^{(1)}. \end{align*} and then, in consecutive steps of the construction, we repeat this procedure by putting: \begin{align*} z_{[a_n,b_n-1]} &= \tilde{x}^{(n)},\\ z_{[b_n, b_n+M_n'-1]} = z_{[b_n,a_{n+1}-1]}&= \tilde{y}^{(n)}. \end{align*} The full $n$th step of the construction can be seen in the following scheme: $$\dots \xymatrix{
*[A]{ }\ar@{|--|}[rr]^{\tilde{x}_{[0,M_n]}}_<{a_n}&\hspace{0,2cm}&*[B]{ }
\ar@{~|}[rr]^{\tilde{y}_{[0,M'_n]}}_<{b_n}&\hspace{0,5cm}&*[D]{ } \ar@{ }[]_<{a_{n+1}} }\dots $$ This finishes the construction of a single pseudo-orbit $\mathcal Z$ (Construction (*)). Note that different choices for $\tilde{x}^{(n)}$ lead to many different elements $\mathcal{Z}$. Later we will calculate how this is reflected in the value of the entropy.
Now let $A\subseteq X$ be the closure of the set of points of all possible orbits $\varepsilon$-tracing all possible $\delta$-pseudo-orbits $\mathcal{Z}$ obtained by Construction (*). This means that for such points $u$ we can find a $\delta$-pseudo-orbit $\mathcal{Z} = \{z_i\}_{i=0}^{\infty}$ such that $d(T^i(u),z_i)<\varepsilon$ for all $i\geq 0$. To show that $A$ is indeed a subset of the $\Phi$-irregular set $I_{\Phi}(T)$ we need to show that the Birkhoff average diverges for every point $u \in A$, that is: \begin{equation}\label{BirkhDivCond} \liminf_{n\rightarrow\infty}\frac{1}{n}\sum_{i=0}^{n-1}\Phi(T^iu) \neq \limsup_{n\rightarrow\infty}\frac{1}{n}\sum_{i=0}^{n-1}\Phi(T^iu). \end{equation} Note that by the construction of the pseudo-orbit $\mathcal{Z}$ and the choice of the elements of the set $A$, the corresponding blocks of the trajectory of every such point $u$ are within $\varepsilon$-distance from the trajectories of length $L$ of the points $x^{(i)}$ whose orbits build the appropriate blocks $\tilde{x}^{(n)}$.
First we estimate the lower limit of the Birkhoff average of a point $u \in A$ using the subsequence $\{b_n \}_{n=1}^{\infty}$. In fact, all calculations below are performed for an arbitrarily chosen point $u$ whose orbit $\varepsilon$-traces some $\delta$-pseudo-orbit from Construction (*). However, if we take any point $v \in A$ from the closure, then $v$ is the limit of some sequence of points of $A$, and so in any sufficiently long prefix of the orbit of $v$ we find the same structure of blocks as in the elements of the sequence converging to $v$. That means the estimates below hold for such points as well. By (\ref{MCond}) we have that: $$ \frac{a_n}{b_n}M < \eta $$ and by (\ref{LBound}) we have: $$ \frac{l_n\lambda QM}{b_n} < \frac{l_n\lambda \omega_{max} M }{l_n \lambda L} < \eta. $$ Note that by the construction of the block $\tilde{x}^{(n)}$ we have for every $n \geq 1$: \begin{multline}\label{bnaverage} \frac{1}{b_n}\sum_{i=a_n}^{b_n-1}\Phi(T^iu) = \frac{1}{b_n}\sum_{j=1}^{l_n\lambda}\sum_{i=a_n+(j-1)(L+Q)}^{a_n+jL+(j-1)Q-1}\Phi(T^iu) \\ + \frac{1}{b_n}\sum_{j=1}^{l_n\lambda}\sum_{i=a_n+jL+(j-1)Q}^{a_n+j(L+Q)-1}\Phi(T^iu). \end{multline} Hence for every sufficiently large $n$: \begin{align*}
|\frac{1}{b_n}&\sum_{i=0}^{b_n-1}\Phi(T^iu)-\alpha| \leq \frac{1}{b_n}|\sum_{i=0}^{a_n-1}(\Phi(T^iu)-\alpha)| + |\frac{1}{b_n}\sum_{i=a_n}^{b_n-1}(\Phi(T^iu)-\alpha)|\\
&\leq \frac{a_n}{b_n}M + |\frac{1}{b_n}\sum_{j=1}^{l_n\lambda}\sum_{i=a_n+(j-1)(L+Q)}^{a_n+jL+(j-1)Q-1}(\Phi(T^iu)- \alpha)|\\
&\qquad + |\frac{1}{b_n}\sum_{j=1}^{l_n\lambda}\sum_{i=a_n+jL+(j-1)Q}^{a_n+j(L+Q)-1}(\Phi(T^iu)-\alpha)|\\
&\leq \frac{a_n}{b_n}M + \frac{1}{b_n}\sum_{j=1}^{l_n\lambda}\sum_{i=a_n+(j-1)(L+Q)}^{a_n+jL+(j-1)Q-1}|\Phi(T^iu)-\Phi(T^{i-a_n-(j-1)(L+Q)}x^{(j)})| \\
&\qquad + |\frac{1}{b_n}\sum_{j=1}^{l_n\lambda}\sum_{i=a_n+(j-1)(L+Q)}^{a_n+jL+(j-1)Q-1}(\Phi(T^{i-a_n-(j-1)(L+Q)}x^{(j)})-\alpha)|\\
&\qquad + \frac{1}{b_n}\sum_{j=1}^{l_n\lambda}\sum_{i=a_n+jL+(j-1)Q}^{a_n+j(L+Q)-1}|\Phi(T^iu)-\alpha|\\
&\leq \frac{a_n}{b_n}M + \frac{l_n\lambda L}{b_n}\eta + |\frac{1}{b_n}\sum_{j=1}^{l_n\lambda}\sum_{i=0}^{L-1}(\Phi(T^ix^{(j)})-\alpha)| + \frac{l_n\lambda Q}{b_n}M\\
& \leq 3\eta + |\frac{1}{b_n}\sum_{j=1}^{l_n\lambda}\sum_{i=0}^{L-1}(\Phi(T^ix^{(j)})-\alpha)|. \end{align*} Now observe that by the choice of $L$ the last component can be bounded from above as follows: \begin{align*}
|\frac{1}{b_n}\sum_{j=1}^{l_n\lambda}\sum_{i=0}^{L-1}(\Phi(T^ix^{(j)})-\alpha)| &\leq \frac{L}{b_n}|\sum_{j=1}^{l_n\lambda}(\frac{1}{L}\sum_{i=0}^{L-1}\Phi(T^ix^{(j)})-\alpha)|\\ &\leq \frac{l_n\lambda L\eta}{4b_n} \leq \frac{\eta}{4}. \end{align*} It follows that: \begin{equation}\label{liminf}
|\frac{1}{b_n} \sum_{i=0}^{b_n-1}\Phi(T^iu)-\alpha| \leq 4\eta, \end{equation} which means that $$ \liminf_{n\rightarrow\infty}\frac{1}{n}\sum_{i=0}^{n-1}\Phi(T^iu) \leq \alpha + 4\eta. $$ To estimate the upper limit of the Birkhoff average of the point $u \in A$ we use the subsequence $\{ a_n \}_{n=1}^{\infty}$. By (\ref{M'Cond}) we also have that: $$ \frac{b_n}{a_{n+1}}M < \eta, $$ and similarly to (\ref{bnaverage}) we will consider the following splitting for every $n\geq 1$: \begin{align*} \frac{1}{a_{n+1}}\sum_{i=b_n}^{a_{n+1}-1}\Phi(T^iu) &= \frac{1}{a_{n+1}}\sum_{j=1}^{l'_n\kappa}\sum_{i=b_n+(j-1)(K+P)}^{b_n+jK+(j-1)P-1}\Phi(T^iu)\\ &+ \frac{1}{a_{n+1}}\sum_{j=1}^{l'_n\kappa}\sum_{i=b_n+jK+(j-1)P}^{b_n+j(K+P)-1}\Phi(T^iu). \end{align*} Hence for every sufficiently large $n$ we have:
\begin{align*}
|\frac{1}{a_{n+1}}\sum_{i=0}^{a_{n+1}-1}(\Phi(T^iu)-\zeta)|&\leq \frac{1}{a_{n+1}}|\sum_{i=0}^{b_n-1}(\Phi(T^iu)-\zeta)| + \frac{1}{a_{n+1}}|\sum_{i=b_n}^{a_{n+1}-1}(\Phi(T^iu)-\zeta)|\\
& \leq \frac{b_n}{a_{n+1}}M + |\frac{1}{a_{n+1}}\sum_{j=1}^{l'_n\kappa}\sum_{i=b_n+(j-1)(K+P)}^{b_n+jK+(j-1)P-1}(\Phi(T^iu)-\zeta)| \\
&\qquad + |\frac{1}{a_{n+1}}\sum_{j=1}^{l'_n\kappa}\sum_{i=b_n+jK+(j-1)P}^{b_n+j(K+P)-1}(\Phi(T^iu)-\zeta)|\\ &\leq \frac{b_n}{a_{n+1}}M \\
&+ \frac{1}{a_{n+1}}\sum_{j=1}^{l'_n\kappa}\sum_{i=b_n+(j-1)(K+P)}^{b_n+jK+(j-1)P-1}|\Phi(T^iu) - \Phi(T^{i-b_n-(j-1)(K+P)}y^{(j)})|\\
&\qquad+ |\frac{1}{a_{n+1}}\sum_{j=1}^{l'_n\kappa}\sum_{i=b_n+(j-1)(K+P)}^{b_n+jK+(j-1)P-1}(\Phi(T^{i-b_n-(j-1)(K+P)}y^{(j)}) - \zeta)| \\
&\qquad+ \frac{1}{a_{n+1}}\sum_{j=1}^{l'_n\kappa}\sum_{i=b_n+jK+(j-1)P}^{b_n+j(K+P)-1}|\Phi(T^iu)-\zeta|\\
&\leq \frac{b_n}{a_{n+1}}M + \frac{l'_n\kappa K}{a_{n+1}}\eta + |\frac{1}{a_{n+1}}\sum_{j=1}^{l'_n\kappa}\sum_{i=0}^{K-1}(\Phi(T^iy^{(j)})-\zeta)| + \frac{l'_n\kappa P}{a_{n+1}}M\\
&\leq 3\eta + |\frac{1}{a_{n+1}}\sum_{j=1}^{l'_n\kappa}\sum_{i=0}^{K-1}(\Phi(T^iy^{(j)})-\zeta)|. \end{align*}
The last component is bounded as follows: \begin{align*}
|\frac{1}{a_{n+1}}\sum_{j=1}^{l_n'\kappa}\sum_{i=0}^{K-1}(\Phi(T^iy^{(j)})-\zeta)|&\leq \frac{K}{a_{n+1}}|\sum_{j=1}^{l_n'\kappa}(\frac{1}{K}\sum_{i=0}^{K-1}\Phi(T^iy^{(j)})-\zeta)|\\ &\leq \frac{l_n'\kappa K\eta}{4a_{n+1}}\leq \frac{\eta}{4}. \end{align*} It follows that: \begin{equation}\label{limsup}
|\frac{1}{a_{n+1}} \sum_{i=0}^{a_{n+1}-1}\Phi(T^iu)-\zeta| \leq 4\eta. \end{equation} Altogether by (\ref{liminf}) and (\ref{limsup}) we have: $$ \liminf_{n\rightarrow\infty}\frac{1}{n}\sum_{i=0}^{n-1}\Phi(T^iu) \leq \alpha + 4\eta < \zeta-4\eta \leq \limsup_{n\rightarrow\infty}\frac{1}{n}\sum_{i=0}^{n-1}\Phi(T^iu), $$ which means that the condition (\ref{BirkhDivCond}) is satisfied and the set $A$ is indeed a subset of the $\Phi$-irregular set $I_{\Phi}(T)$.
The final step of the proof is to estimate the value of topological entropy of the set $A$. Denote $h = h_{\mu_1}(T)-5\gamma$ and recall that by (\ref{VarPrinc}) we have $h > h_{\text{top}}(T,Y)-6\gamma$. We are going to prove that: $$ C(A;h,\varepsilon,T)\geq 1, $$ as it implies $h_{\text{top}}(T,A)\geq h$. The constructed set $A$ is compact, so we can restrict our attention to finite covers of $A$. We will show that for every sufficiently large $n$ we have: $$ \sum_{B_v(x,\varepsilon)\in \mathcal{C}}e^{-hv}\geq 1 $$ for every finite cover $\mathcal{C} \in \mathcal{G}_n(A,\varepsilon)$. Of course the choice of $n$ affects the choice of the admissible covers $\mathcal{C}$ of the set $A$ in the above sum.
Fix an integer $q$ such that: \begin{equation}\label{qcond} \frac{q+1}{q}\leq \frac{h_{\mu}(T)-3\gamma}{h_{\mu}(T)-5\gamma} \end{equation} and a sufficiently large $r_0$ such that for each $r\geq r_0$ we have: \begin{equation}\label{1/qcond} \frac{\lambda(L+Q)}{a_r}<\frac{1}{q}. \end{equation}
By $\mathcal{A} = \{A_1,\dots,A_t\}$ denote the alphabet of cardinality $t = \min\{ |E'_L|^l, |\Gamma_K|^k \}$. Each symbol of that alphabet will uniquely represent blocks of the $\delta$-pseudo-orbit $\mathcal{Z}$ in the following way. The blocks in each $\mathcal{Z}$ are of the following possible types: \begin{enumerate}
\item[(D1)] blocks of length $\lambda(L+Q)$ consisting of the parts of the orbit of some points $x \in E'_L$ intertwined by $\delta$-chains from $V$ to $U$,
\item[(D2)] blocks of length $\kappa(K+P)$ consisting of pseudo-orbits from $\Gamma_K$ intertwined by $\delta$-chains from $V'$ to $U$. \end{enumerate}
Using the above splitting, every $\delta$-pseudo-orbit uniquely defines an infinite word over the alphabet $\mathcal{A}$. We restrict the number of the blocks of types (D1), (D2) when necessary, so that there are exactly $t$ choices of each type. Using that representation, given some $\delta$-pseudo-orbit $\mathcal{Z}$ denote by $S_m = |A_1\dots A_m|$ the total length of the prefix built from $m$ blocks of the form (D1), (D2) in the splitting of $\mathcal{Z}$, whose code is $A_1,\dots,A_m$. By the definition of $S_m$ there are $r>0$ and $0\leq s < 2l_r$ such that: \begin{eqnarray} S_m &=& a_r+s\lambda (L+Q),\label{def:ar}\\ S_{m+1} &=& S_m + \lambda(L+Q)\nonumber. \end{eqnarray}
For every $x \in A$ we have a unique $\delta$-pseudo-orbit $\mathcal{Z} = \{z_i\}_{i=0}^{\infty}$ derived from the construction (*) such that $d(T^ix,z_i)\leq \varepsilon$. This $\mathcal{Z}$ defines a unique sequence of symbols $A_1,A_2\dots$ and associated increasing sequence of integers $S_m$. Therefore every cover $\mathcal{C} \in \mathcal{G}_{n}(A,\varepsilon)$ induces a cover $\mathcal{C}'$ where each ball $B_v(x,\varepsilon) \in \mathcal{C}$ is replaced by $B_{S_m}(x,\varepsilon)\in \mathcal{C}'$ such that $S_m\leq v < S_{m+1}$. We may assume $a_r$ from \eqref{def:ar} satisfies \eqref{1/qcond} since we are interested only in large values of $n$, and this easily ensures $r>r_0$. Observe that by the definition: \begin{equation}\label{BowenBallEst} \sum_{B_v(x,\varepsilon)\in \mathcal{C}}e^{-hv}\geq \sum_{B_{S_m}(x,\varepsilon)\in \mathcal{C}'}e^{-hS_{m+1}}. \end{equation} Consider such a cover $\mathcal{C}'$ and let $c$ be the largest value $m$ used for replacement of $B_v(x,\varepsilon)$ by $B_{S_m}(z,\varepsilon) \in \mathcal{C}'$. By $\mathcal{W}_m$ define the set of all possible words of length $m$ over $\mathcal{A}$, while by $\mathcal{V}_c$ denote the set of all possible words of length at most $c$, that is: $$ \mathcal{V}_c = \bigcup_{m=1}^c\mathcal{W}_m. $$
Any word $w \in \mathcal{W}_m$ is a unique representation of some prefix, consisting of $m$ blocks, of a $\delta$-pseudo-orbit. Furthermore, observe that each word of length $m$ over $\mathcal{A}$ is a prefix of exactly $\frac{|\mathcal{W}_c|}{|\mathcal{W}_m|}$ words from $\mathcal{W}_c$. Therefore, if we consider a set $\mathcal{W}\subseteq \mathcal{V}_c$ containing a prefix of every word from $\mathcal{W}_c$, then: $$
\sum_{m=1}^c |\mathcal{W}\cap \mathcal{W}_m|\frac{|\mathcal{W}_c|}{|\mathcal{W}_m|}\geq |\mathcal{W}_c|, $$
as for every word in $\mathcal{W}_c$ one of its prefixes has to be in $\mathcal{W}\cap \mathcal{W}_m$ for some $1\leq m\leq c$ and the number of words in $\mathcal{W}_c$ with that prefix cannot exceed $\frac{|\mathcal{W}_c|}{|\mathcal{W}_m|}$. By the above discussion, if $\mathcal{W}$ contains a prefix of every word from $\mathcal{W}_c$ then: $$
\sum_{m=1}^c \frac{|\mathcal{W}\cap\mathcal{W}_m|}{|\mathcal{W}_m|}\geq 1. $$
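The counting argument above is elementary and can be checked directly on a small alphabet. The sketch below (illustrative only; the values $t=3$ and $c=4$ are arbitrary toy choices, not the quantities from the proof) verifies that each word of length $m$ is a prefix of exactly $|\mathcal{W}_c|/|\mathcal{W}_m|$ words of $\mathcal{W}_c$, and that a set $\mathcal{W}$ containing a prefix of every word of $\mathcal{W}_c$ satisfies $\sum_m |\mathcal{W}\cap\mathcal{W}_m|/|\mathcal{W}_m| \ge 1$.

```python
from itertools import product

t, c = 3, 4                          # toy alphabet size and maximal word length
W = {m: [''.join(map(str, w)) for w in product(range(t), repeat=m)]
     for m in range(1, c + 1)}

# Each word of length m extends to exactly |W_c| / |W_m| words of length c.
for m in range(1, c + 1):
    extensions = sum(1 for w in W[c] if w.startswith(W[m][0]))
    assert extensions == len(W[c]) // len(W[m])

# A prefix set hitting every word of W_c: length-2 prefixes starting with '0',
# length-3 prefixes for everything else.
cover = [w for w in W[2] if w[0] == '0'] + [w for w in W[3] if w[0] != '0']
assert all(any(w.startswith(p) for p in cover) for w in W[c])

# The weighted count from the displayed inequality is at least 1.
weight = sum(1 / len(W[len(p)]) for p in cover)
print(weight)
```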
Recall that any $x \in B_{S_m}(z,\varepsilon)$ uniquely defines a word $A_{i_1}\dots A_{i_m}$ over $\mathcal{A}$, $i_1,\dots,i_m \in \{1,\dots,t\}$, such that $x_{[0,S_m]} \approx A_{i_1}\dots A_{i_m} \in \mathcal{W}_c$, where $\approx$ denotes the identification of a pseudo-orbit with symbols in $\mathcal{A}$. We claim that the block $x_{[0,S_m]}$ defines the same $m$-letter word as any $z \in B_{S_m}(x,\varepsilon)\cap A$. Assume on the contrary that $z_{[0,S_m]}\approx A_{j_1}\dots A_{j_m}$ and $i_\iota\neq j_\iota$ for some $\iota$. Then in case of a block of the form (D1) there are $p,q \in E'_L$, $p\neq q$ (the case of (D2) is analogous with $p,q \in E'_{\xi J}$) and $1\leq i\leq m $ and $0\leq \kappa<l$ such that: $$ T^{S_{i-1}+\kappa(L+Q)}z \in B_L(p,\varepsilon) \text{ and } T^{S_{i-1}+\kappa(L+Q)}x \in B_L(q,\varepsilon) $$ and so for $0\leq j< L$ we also have: \begin{multline*} d(T^jp,T^jq) \leq d(T^jp,T^{S_{i-1}+\kappa(L+Q)+j}z)\\ + d(T^{S_{i-1}+\kappa(L+Q)+j}z,T^{S_{i-1}+\kappa(L+Q)+j}x) \\+ d(T^{S_{i-1}+\kappa(L+Q)+j}x, T^jq) <3\varepsilon. \end{multline*} This is a contradiction, as the set $E'_L$ is $(L,4\varepsilon)$-separated. Hence the claim holds, that is, each $B_{S_m}(z,\varepsilon)$ defines a unique word over $\mathcal{A}$. This immediately implies that: \begin{equation}\label{WmBallEst}
\sum_{B_{S_m}(x,\varepsilon)\in \mathcal{C}'}\frac{1}{|\mathcal{W}_m|}\geq 1. \end{equation} By (\ref{GammaK}) and $(\ref{LKrelation})$ we have the following estimation for the cardinality of set $\mathcal{W}_m$: \begin{equation}\label{WmCard}
|\mathcal{W}_m| = t^m \geq e^{m\lambda L(h_{\text{top}}(T,Y)-3\gamma)}. \end{equation} Moreover by (\ref{qcond}) and (\ref{1/qcond}): \begin{align*}\label{SEst} \frac{S_{m+1}}{S_m} &= 1 + \frac{\lambda(L+Q)}{S_m} \leq 1+\frac{\lambda(L+Q)}{a_r}\\ &\leq \frac{q+1}{q} \leq \frac{h_{\mu}(T)-3\gamma}{h_{\mu}(T)-5\gamma}. \end{align*} Combining (\ref{BowenBallEst}), (\ref{WmBallEst}), (\ref{WmCard}) we get: \begin{align*} \sum_{B_v(x,\varepsilon) \in \mathcal{C}}e^{-hv}&\geq \sum_{B_{S_m}(x,\varepsilon) \in \mathcal{C}'}e^{-S_m(h_{\text{top}}(T,Y)-3\gamma)}\\ &\geq \sum_{B_{S_m}(x,\varepsilon) \in \mathcal{C}'}e^{-m\lambda L(h_{\text{top}}(T,Y)-3\gamma)} \geq 1. \end{align*} That implies: $$ h_{\text{top}}(T,A)\geq h_{\mu_1}(T)-5\gamma > h_{\text{top}}(T,Y)-6\gamma. $$ The proof is finished. \end{proof}
\begin{cor}\label{cor:CRclass}
Let $(X,T)$ be a dynamical system with the shadowing property and let $Y\subset X$ be a chain recurrent class. If $\Phi \in \mathcal{C}(X,\mathbb{R})$ is such that $I_\Phi(T)\cap Y \neq \emptyset$ then $h_{\text{top}}(T,I_\Phi(T))\geq h_{\text{top}}(T,Y)$. \end{cor} \begin{proof}
Since $I_\Phi(T)\cap Y\neq \emptyset$, there is a point $x\in Y$ such that $\lim_{n\to\infty} \frac{1}{n}\sum_{i=0}^{n-1}\Phi(T^ix)$ does not exist. By ergodic decomposition \cite[Lemma~2.1]{Thom} we have that there are ergodic measures $\mu_1, \mu_2 \in \mathcal{M}_e(Y)$ such that $\int\Phi d\mu_1\neq \int\Phi d\mu_2$. The result follows by Theorem \ref{thmhtop}. \end{proof}
By Corollary~\ref{cor:CRclass} we obtain that: $$ h_{\text{top}}(T,I_{\Phi}(T)) \geq \sup \left\{h_{\text{top}}(T,Y): I_{\Phi}(T)\cap Y \neq \emptyset \text{ and }Y\in C_c(X,T) \right\}, $$ where $C_c(X,T)$ denotes the set of all chain recurrent classes of $(X,T)$.
Unfortunately, we were unable to prove that the inequality cannot be strict in the above formula. However, if we slightly modify the above set, we can ensure equality, as shown below. \begin{cor}\label{CorSup} Let $(X,T)$ be a dynamical system with the shadowing property and let $\Phi:X\rightarrow \mathbb{R}$ be a continuous function. Then we have: \begin{equation}\label{EQ:EXT} h_{\text{top}}(T,I_{\Phi}(T)) = \sup\{h_{\text{top}}(T,Y):I_{\Phi}(T)\cap Y^{\omega}\neq\emptyset \text{ and }Y\in C_c(X,T) \}. \end{equation} \begin{proof} Fix any chain-recurrent class $Y$ such that $I_{\Phi}(T)\cap Y^{\omega}\neq\emptyset$. Take any $x \in I_{\Phi}(T)\cap Y^{\omega}$ and let $V_T(x)$ denote the weak$^*$ limit set of $\frac{1}{n}\sum_{i=0}^{n-1}\delta_{T^ix}$. Any measure $\mu~ \in~V_T(x)$ is supported on $\omega_T(x)$, so it is supported on $Y$ as well. Since $x \in I_{\Phi}(T)$, there are measures $\mu_1, \mu_2$ with supports in $Y$ such that $\int \Phi d\mu_1 \neq \int \Phi d\mu_2$. By ergodic decomposition (see \cite[Lemma 2.1]{Thom}) we may assume that $\mu_1, \mu_2 $ are ergodic and so we may apply Theorem \ref{thmhtop} which proves that $h_{\text{top}}(T,I_{\Phi}(T))\geq h_{\text{top}}(T,Y)$. This proves ``$\geq$" in \eqref{EQ:EXT}.
To prove the opposite inequality ``$\leq$" in \eqref{EQ:EXT} we start by showing that: $$h_{\text{top}}(T,Y^{\omega})=h_{\text{top}}(T,Y).$$ We need the result from \cite{Bow} which states that letting: $$QR(t)=\{ x \in X: \text{ there exists } \mu \in V_T(x) \text{ with }h_{\mu}(T)\leq t \},$$ we have $h_{\text{top}}(T,QR(t))\leq t$. Taking $t = \sup_{\mu \in V_T(x), x \in Y^{\omega}}h_{\mu}$ we see that $Y^{\omega}~\subseteq~QR(t)$ which gives: \begin{eqnarray}\label{htopeq} h_{top}(T,Y^{\omega}) &\leq & \sup\{h_{\mu}:\mu \in V_T(x),\; x \in Y^{\omega}\}\\ &\leq& \sup\{h_{\mu}: \supp(\mu)\subseteq Y\} = h_{top}(T,Y).\nonumber \end{eqnarray}
The opposite inequality is obvious from the inclusion $Y\subset Y^\omega$. This way we obtain that if we denote: $$ \hat{t} = \sup\{h_{\text{top}}(T,Y): I_{\Phi}(T)\cap Y^{\omega}\neq \emptyset \text{ and }Y\in C_c(X,T)\} $$ then: $$ \hat{t} = \sup\{h_{\text{top}}(T,Y^\omega): I_{\Phi}(T)\cap Y^{\omega}\neq \emptyset \text{ and }Y\in C_c(X,T)\}. $$
Finally observe that if $x\in I_{\Phi}(T)$ then there is a chain-recurrent class $Y$ such that $\omega_T(x)\subset Y$ (e.g. see \cite{HSZ}) and so $I_{\Phi}(T)\cap Y^\omega\neq \emptyset$. Therefore $$ I_{\Phi}(T) \subseteq \bigcup_{Y \in C_c(X,T), Y^{\omega}\cap I_{\Phi}(T)\neq \emptyset}Y^{\omega}\subseteq QR(\hat{t}), $$ and the proof is completed by the above-mentioned result from \cite{Bow}.
\end{proof} \end{cor}
\subsection{Typical $\Phi$-irregular sets}
For any dynamical system $(X,T)$, from \cite[Lemma 3.3]{Tian2017} the set $$\mathcal C^*:=\{\Phi\in C^0(X,\mathbb{R})\,:\,I_\Phi(T) \text{ is not empty}\}$$ is either empty or is an open and dense subset in the space of continuous functions. Suppose that $(X,T)$ has the shadowing property and has positive topological entropy. By \cite{DOT} the set of irregular points $I(T)$ is not empty and carries full topological entropy; in particular there exists some $\Phi$ with $I_{\Phi}(T)\neq \emptyset.$ Thus $\mathcal C^*$ is an open and dense subset in the space of continuous functions. On one hand, if additionally $T$~is transitive, then by Theorem~\ref{thmhtop} we obtain that: $$\mathcal C^*=\{\Phi\in C^0(X,\mathbb{R})\,: \,I_\Phi(T) \text{ is not empty and carries full topological entropy }\}$$
so that the latter set is an open and dense subset in the space of continuous functions. On the other hand, if the system is not transitive, one can have some $\Phi$ such that $I_\Phi$ is not empty but does not carry full topological entropy. For example, suppose that the system $(X,T)$ is composed of two disjoint transitive subsystems $(X_i,f_i)$, $i=1,2$, with shadowing, for which $f_{1}$ has entropy larger than that of $f_{2}$, and suppose that $f_{2}$ has at least two invariant measures; then one can take a continuous function $\Phi$~such that $\Phi |_{X_1}=0$ but $$\inf_{\mu\in \mathcal{M}_{f_2}(X_2)} \int \Phi d\mu<\sup_{\mu\in \mathcal{M}_{f_2}(X_2)}\int \Phi d\mu.$$ In this case $I_{\Phi}(T)$ is not empty and carries entropy equal to that of $f_2$ (not equal to the entropy of the whole system). However, we can show that ``most'' functions still have the property that $I_\Phi(T)$ is not empty and carries full topological entropy.
\begin{thm}\label{thmtyp} Suppose that $(X,T)$ has the shadowing property and positive topological entropy. Then: $$ \mathcal{R}=\{\Phi\in C^0(X,\mathbb{R}): I_\Phi(T) \text{ is not empty and carries full topological entropy }\} $$ is a dense $G_\delta$ subset in $C^{0}(X,\mathbb{R})$. In fact, for any $\varepsilon>0$: \begin{multline*} \mathcal{R}_{\varepsilon}=\{\Phi\in C^0(X,\mathbb{R}): I_\Phi(T) \text{ is not empty} \\ \text{and carries entropy larger than }h_{top}(T)-\varepsilon\} \end{multline*} is an open and dense subset in $C^{0}(X,\mathbb{R})$. \end{thm}
\begin{proof} Clearly $\mathcal R=\bigcap_n \mathcal R_{\frac1n}$, so by the Baire category theorem it is enough to prove the second statement.
Fix any $\varepsilon>0.$ First we are going to show that $\mathcal{R}_{\varepsilon}$ is dense. By \cite[Corollary 3.5]{DOT} there are two ergodic measures $\mu$ and $\nu$ such that $h_{\mu}(T)>h_{top}(T)-\varepsilon$ and $\mu\neq \nu,$ $\supp(\nu)\subseteq \supp(\mu)$. By the definition of weak$^*$ topology, there exists some continuous function $\Phi_\varepsilon$ such that $\int\Phi_\varepsilon d\mu\neq \int \Phi_\varepsilon d\nu.$ Note that by ergodicity $T$ restricted to $\supp(\mu)$ is transitive, in particular $\supp(\mu)$ is contained in a chain recurrent class $Y$. Therefore by Theorem~\ref{thmhtop} we see that $$
h_{top}(I_{\Phi_\varepsilon}(T))\geq h_{top}(T|_Y)\geq h_{\mu}(T)>h_{top}(T)-\varepsilon. $$
For any given continuous function $\Phi$, if $\inf_{\mu\in \mathcal{M}_T(Y)}\int\Phi d\mu<\sup_{\mu\in \mathcal{M}_{T}(Y)}\int\Phi d\mu,$ then by Theorem~\ref{thmhtop} together with the ergodic decomposition theorem we obtain that $\Phi\in \mathcal{R}_{\varepsilon}. $
In the second case, that is when $\inf_{\mu\in \mathcal{M}_{T}(Y)}\int\Phi d\mu=\sup_{\mu\in \mathcal{M}_{T}(Y)}\int\Phi d\mu,$ consider the functions $\Phi_{n}=\Phi+ \frac1n\Phi_{\varepsilon}$ which converge to $\Phi$ and satisfy: $$ \inf_{\mu\in \mathcal{M}_{T}(Y)}\int\Phi_n d\mu<\sup_{\mu\in \mathcal{M}_{T}(Y)}\int\Phi_n d\mu. $$ By the previous case, we see that $\Phi_n\in \mathcal{R}_{\varepsilon}$ for every $n$. Hence $\mathcal{R}_{\varepsilon}$ is dense.
Now we will show that $\mathcal{R}_{\varepsilon}$ is also open. Once again we will need the result of Bowen from \cite{Bow} which states that letting $$QR(t)=\{ x \in X: \text{ there exists } \mu \in V_T(x) \text{ with }h_{\mu}(T)\leq t \},$$ we have $h_{\text{top}}(T,QR(t))\leq t$. Take any $\Phi\in \mathcal{R}_{\varepsilon}. $ We claim that there exists a point $y\in I_{\Phi}(T)$ such that for every $\mu\in V_{T}(y)$ we have $h_{\mu}(T)>h_{top}(T)-\varepsilon$. Otherwise, $ I_{\Phi}(T) \subseteq QR(h_{\text{top}}(T)-\varepsilon)$ and Bowen's result implies that $h_{\text{top}}(I_{\Phi}(T))\leq h_{\text{top}}(T)-\varepsilon$, contradicting $\Phi\in \mathcal{R}_{\varepsilon}$. This proves the claim. In other words, there is $y\in I_{\Phi}(T)$ and measures $\mu,\nu\in V_T(y)$ such that $ \int\Phi d\mu\neq \int \Phi d\nu.$ Recall that $\cup_{\mu\in V_T(y)} \supp(\mu)\subseteq \omega_T(y).$ It is well known (e.g. see \cite{HSZ}) that $\omega_T(y)$ is always a~subset of some chain recurrent class, say $Y$, and consequently $V_T(y)\subseteq \mathcal M_{T}(Y)$. Thus: \begin{align*}
h_{top}(Y)&=\sup\{h_\mu(T)|\, \mu\in \mathcal M_{T}(Y)\}\\
&\geq \sup\{h_\mu(T)|\, \mu\in V_T(y)\} \geq h_{\mu}(T)>h_{top}(T)-\varepsilon \end{align*} and $\inf_{\mu\in \mathcal{M}_{T}(Y)}\int\Phi d\mu<\sup_{\mu\in \mathcal{M}_{T}(Y)}\int\Phi d\mu.$ If we take a sufficiently small open neighborhood $\mathcal U$ of $\Phi$ in $C^0(X,\mathbb{R})$ with the supremum metric, then for any $\Psi\in \mathcal U$ we will also have $\inf_{\mu\in \mathcal{M}_{T}(Y)}\int\Psi d\mu<\sup_{\mu\in \mathcal{M}_{T}(Y)}\int\Psi d\mu$. Using once again Theorem~\ref{thmhtop} we see that $\Psi\in \mathcal{R}_{\varepsilon}.$ This completes the proof. \end{proof}
\subsection{Level sets with respect to reference measure and shadowing}
The considerations below have their motivation in \cite{Young}. We assume that $m \in \mathcal{M}$ is a~finite Borel measure on $X$ and think of it as a reference measure. Define: $$ h_m(T,x) = \lim_{\varepsilon\rightarrow 0}\limsup_{n\rightarrow\infty}-\frac{\log m(B_n(x,\varepsilon))}{n} $$ and $$ h_m(T,\nu) = \nu - \esssup h_m(T,x), $$ where: $$\nu - \esssup h_m(T,x) = \inf\left\{ \alpha \in \mathbb{R}: \nu\left(\{x \in X:h_m(T,x)>\alpha\}\right)=0\right\}.$$
Note that the measure $m$ need not even be $T$-invariant; however, if we take $m$ to be an ergodic measure, then $h_{m}(T,x) = h_{m}(T)$ $m$-a.e. by \cite{BK}. In particular if $m=\nu$ and $\nu$ is ergodic then $h_m(T,\nu) = h_{\nu}(T)$. Now define the set: \begin{multline*} \mathcal{V}^-=\{\xi \in C(X,\mathbb R): \text{ there exist an arbitrarily small } \varepsilon>0 \text{ and } C=C(\varepsilon)\\ \text{ such that, for all }x \in X \text{ and }n \geq 0: m(B_n(x,\varepsilon))\geq C e^{-\sum_{i=0}^{n-1}\xi(T^ix)}\}. \end{multline*} For a bad choice of the reference measure $m$ it may happen that $\mathcal{V}^-$ is empty. However, in many cases there are natural candidates for $m$ which also ensure $\mathcal{V}^-\neq \emptyset$ (e.g. see Example \ref{exV}).
For $\Phi \in \mathcal{C}(X,\mathbb{R})$ and $E\subseteq \mathbb{R}$ put: $$ \underline{R}(\Phi, E)=\liminf_{n\rightarrow\infty}\frac{1}{n}\log m\left(\{x \in X: \frac{1}{n}\sum_{i=0}^{n-1}\Phi(T^ix) \in E\}\right). $$ In \cite[Theorem 1]{Young} such a result was proved for dynamical systems with the specification property. Here we state an analogous theorem for dynamical systems with the shadowing property. \begin{thm}\label{thm:v-} Let $(X,T)$ be a dynamical system with the shadowing property such that $h_{\text{top}}(T,X)<\infty$. Then for every $\Phi \in \mathcal{C}(X,\mathbb{R})$, every $c \in \mathbb{R}$ and every $\xi \in \mathcal{V}^-$ we have: \begin{multline*} \underline{R}(\Phi,(c,\infty))\geq \sup \{h_{\nu}(T)-\int\xi d\nu: \nu \in \mathcal{M}_T(X), \int \Phi d\nu>c,\\ \nu \text{ is supported on some chain recurrent class } Y\subseteq X\}. \end{multline*} \end{thm} \begin{proof} Fix some $\xi \in \mathcal{V}^-$, $c \in \mathbb{R}$ and $\Phi \in \mathcal{C}(X,\mathbb{R})$. Let $$D_n = \{ x \in X: \frac{1}{n}\sum_{i=0}^{n-1}\Phi(T^ix)>c \}$$
and pick an arbitrary $\nu \in \mathcal{M}_T(X)$ supported on some chain recurrent class $Y\subseteq X$ with $\int\Phi d\nu >c$. Fix an arbitrary $\gamma>0$ and put $\delta = \frac{1}{4}(\int\Phi d\nu -c)$. Observe that, in fact, we have $\int \Phi d\nu = c+4\delta$.
By Theorem 4.3 from \cite{LO} there exists a sequence of ergodic measures $\{\nu_n\}_{n \in \mathbb{N}}$ supported on some (possibly different) chain recurrent classes in $X$ such that $ \lim_{n\rightarrow\infty}\nu_n = \nu$ and $\lim_{n\rightarrow\infty}h_{\nu_n}(T)=h_{\nu}(T).$ Hence there exist $N_1, N_2>0$ such that: \begin{align*}
\left|\int\Phi d\nu_n - \int \Phi d\nu\right| &< \delta \text{ and } \left|h_{\nu_n}(T)-h_{\nu}(T)\right|<\delta \text{ for all }n>N_1,\\
\left|\int \xi d\nu_n - \int \xi d\nu\right|& <\gamma \text{ and } |h_{\nu_n}(T)-h_{\nu}(T)|<\gamma \text{ for all }n>N_2. \end{align*} Put $M = \max\{N_1,N_2\}$ and denote $$ \mu=\nu_M. $$ Clearly, it follows that $\int \Phi d\mu > c+3\delta$ and $h_{\mu}(T) - \int \xi d\mu \geq h_{\nu}(T) - \int\xi d\nu - 2\gamma$.
Now we are going to prove that: $$ \liminf_{n\rightarrow\infty}\frac{1}{n}\log m(D_n)\geq h_{\mu}(T) - \int \xi d\mu - 3\gamma \geq h_{\nu}(T) - \int \xi d\nu - 5\gamma, $$
which requires estimating the value of $m(D_n)$ in terms of $h_{\mu}(T) - \int \xi d\mu$. Choose $\varepsilon>0$ such that $d(x,y)<\varepsilon$ implies $|\Phi(x)-\Phi(y)|<\delta$. Denote by $N_T(n,2\varepsilon,\frac{1}{2})$ the minimal number of $(n,2\varepsilon)$-Bowen balls covering a set of $\mu$-measure at least $\frac{1}{2}$; then by the Brin--Katok entropy formula we have (decreasing $\varepsilon$ when necessary): \begin{equation} h_{\mu}(T)-\gamma/2 \leq \liminf_{n\rightarrow\infty}\frac{1}{n}\log N_T(n,2\varepsilon,\frac{1}{2}).\label{eq:brin-katok} \end{equation}
For $\xi \in \mathcal{V}^-$ chosen above and every $x\in X$ we have: \begin{equation}\label{BBallMeasureEst} \frac{1}{n}\log m(B_n(x,\varepsilon))\geq \frac{1}{n}\log C - \frac{1}{n}\sum_{i=0}^{n-1}\xi(T^ix). \end{equation}
As $\mu$ is ergodic, by the Birkhoff ergodic theorem we have $\lim_{n\to\infty}\frac{1}{n}\sum_{i=0}^{n-1}\Phi(T^ix)=\int\Phi d\mu>c+3\delta$ $\mu$-a.e. Furthermore, note that if we take $x \in X$ such that $|\frac{1}{n}\sum_{i=0}^{n-1}\xi(T^ix)-\int\xi d\mu|<\gamma$ for all $n \geq N$ then, by (\ref{BBallMeasureEst}), for all sufficiently large $n$ we have: \begin{equation} \frac{1}{n}\log m(B_n(x,\varepsilon))\geq \frac{1}{n}\log C-\frac{1}{n}\sum_{i=0}^{n-1}\xi(T^ix)>-\int \xi d\mu - 2\gamma. \end{equation} Again by the ergodic theorem, $\mu$-a.e. point satisfies this condition for sufficiently large $N$. In particular, there is an $N \in \mathbb{N}$ such that $\mu(D)>\frac{2}{3}$, where:
\begin{multline*} D = \left\{ x \in X:
|\frac{1}{n}\sum_{i=0}^{n-1}\xi(T^ix)-\int\xi d\mu|<\gamma \text{ and } \right. \\ \left. \frac{1}{n}\sum_{i=0}^{n-1}\Phi(T^ix)>c+\delta \text{ for all } n \geq N \right\}. \end{multline*}
Clearly $D\subseteq D_n$ for all $n\geq N$ and so $\mu(D_n)>\frac{2}{3}$. For each $x\in D$ and $y \in B_n(x,\varepsilon)$ we have: \begin{align*} \frac{1}{n}\sum_{i=0}^{n-1}\Phi(T^iy)>-\delta + \frac{1}{n}\sum_{i=0}^{n-1}\Phi(T^ix)>c, \end{align*} therefore for each $n\geq N$ we have $B_n(x,\varepsilon)\subseteq D_n$, provided that $x\in D$.
Let $\mathcal{E}_n \subseteq D$ be a maximal $(n,2\varepsilon)$-separated subset of $D$. Then for any distinct $x, x' \in \mathcal{E}_n$
we have $B_n(x,\varepsilon)\cap B_n(x',\varepsilon)=\emptyset$ and clearly $D\subset \bigcup_{x\in \mathcal{E}_n}B_{n}(x,2\varepsilon)$, so:
$$
N_T(n,2\varepsilon,\frac{1}{2})\leq |\mathcal{E}_n|.
$$
Thus, increasing the value of $N$ if necessary, by \eqref{eq:brin-katok} for $n\geq N$ we have:
$$
h_{\mu}(T)-\gamma \leq \frac{1}{n}\log |\mathcal{E}_n|
$$
which we can equivalently write as:
$$
e^{n(h_{\mu}(T)-\gamma)}\leq |\mathcal{E}_n|.
$$
Combining the above observations together, we see that for $n\geq N$:
\begin{align*} \frac{1}{n}\log m(D_n) &\geq \frac{1}{n}\log \sum_{x \in \mathcal{E}_n}m(B_n(x,\varepsilon)) \geq \frac{1}{n}\log \sum_{x \in \mathcal{E}_n}Ce^{-\sum_{i=0}^{n-1}\xi(T^ix)} \\
&\geq \frac{1}{n}\log C|\mathcal{E}_n|e^{-n(\int \xi d\mu +2\gamma)} \geq \frac{1}{n}\log C e^{n(h_{\mu}(T)-\gamma)}e^{-n(\int\xi d\mu + 2\gamma)}.
\end{align*}
It follows that:
\begin{align*}
\liminf_{n\rightarrow\infty}\frac{1}{n}\log m(D_n)\geq h_{\mu}(T)-\int\xi d\mu -3\gamma
\end{align*}
which, since $\gamma>0$ was arbitrary and $h_{\mu}(T)-\int\xi d\mu \geq h_{\nu}(T)-\int\xi d\nu-2\gamma$, completes the proof. \end{proof}
Now for $a\in \mathbb{R}$ and $\theta>0$ define the following two level sets: \begin{align*} R_{\Phi}(a) &= \left\{x \in X: \lim_{n\rightarrow\infty}\frac{1}{n}\sum_{i=0}^{n-1}\Phi(T^ix)=a\right\},\\ R_{\Phi}(a,\theta) &= \left\{x \in X: a-\theta<\liminf_{n\rightarrow\infty}\frac{1}{n}\sum_{i=0}^{n-1}\Phi(T^ix)\leq \limsup_{n\rightarrow\infty}\frac{1}{n}\sum_{i=0}^{n-1}\Phi(T^ix)<a+\theta\right\}. \end{align*}
By the definition $R_{\Phi}(a) = \bigcap_{\theta>0}R_{\Phi}(a,\theta)$, hence the set $R_{\Phi}(a)$ is much harder to deal with.
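For orientation (an illustration outside the argument): for the doubling map $T(x)=2x \bmod 1$ with $\Phi=\mathbf{1}_{[0,1/2)}$, a point whose binary expansion is periodic with pattern $001$ has Birkhoff average exactly $2/3$, so it lies in $R_\Phi(2/3)$ and hence in $R_\Phi(2/3,\theta)$ for every $\theta>0$. The sketch below checks this on the digit sequence directly (avoiding floating-point orbits):

```python
# Doubling map, Phi = indicator of [0, 1/2): Phi(T^i x) = 1 iff binary digit i of x is 0.
digits = [0, 0, 1] * 2000            # x with binary expansion (001)^infinity
phi = [1 - d for d in digits]

# Birkhoff averages along a growing sequence of times divisible by the period 3:
# each full period contributes 2 to the sum, so every average equals 2/3 exactly.
averages = [sum(phi[:n]) / n for n in (6, 60, 600, 6000)]
print(averages)
```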
\begin{thm}\label{LevelSetThm} Let $(X,T)$ be a dynamical system with the shadowing property and $\Phi \in \mathcal{C}(X,\mathbb{R})$, $a\in \mathbb{R}$ and $\theta>0$ be such that $R_{\Phi}(a,\theta)\neq \emptyset$. Then
\begin{align*} h_{\text{top}}(T,R_{\Phi}(a,\theta)) &= \sup \{ h_{\mu}(T):\int \Phi d\mu \in (a-\theta,a+\theta), \mu \in \mathcal{M}_e(X) \} \\ &= \sup \{h_{\mu}(T): \int \Phi d\mu \in (a-\theta,a+\theta), \supp(\mu)\subset Y \in C_c(X,T) \}. \end{align*} \end{thm}
\begin{proof} Given $x\in X$ as usual we denote by $V_T(x)$ the weak$^*$ limit set of $\frac{1}{n}\sum_{i=0}^{n-1}\delta_{T^ix}$. Recall that $\bigcup_{\mu \in V_T(x)}\supp(\mu)\subseteq \omega_T(x)$ so each $\mu\in V_T(x)$ is supported on some chain-recurrent class. Denote the set of generic points for $\mu$ by $$ G_{\mu} = \{x \in X: V_T(x) = \{\mu\} \}. $$ By the result of Bowen \cite[Theorem 3]{Bow} we have that $h_{\text{top}}(T,G_{\mu}) = h_{\mu}(T)$ provided that $\mu$ is ergodic. Note that if $\int \Phi d\mu \in (a-\theta,a+\theta)$ and $\mu$ is ergodic then the set of generic points for $\mu$ is a subset of $R_{\Phi}(a,\theta)$, which automatically implies: $$ h_{\mu}(T)=h_{\text{top}}(T,G_{\mu})\leq h_{\text{top}}(T,R_{\Phi}(a,\theta)). $$ In other words $$ h_{\text{top}}(T,R_{\Phi}(a,\theta)) \geq \sup \{ h_{\mu}(T):\int \Phi d\mu \in (a-\theta,a+\theta), \mu \in \mathcal{M}_e(X) \}. $$ Recall that by Bowen's result from \cite[Theorem 2]{Bow} we know that: $$h_{\text{top}}(T,QR(t))~\leq~t,$$ where: $$ QR(t) = \{ x \in X: \text{ there exists } \mu \in V_T(x) \text{ with }h_{\mu}(T)\leq t \}.$$ If we put $$ \hat{t}=\sup_{x\in R_{\Phi}(a,\theta)}\sup_{\mu \in V_T(x)} h_\mu(T), $$ then $R_{\Phi}(a,\theta)\subset QR(\hat{t})$ and so we have $h_{\text{top}}(T,R_{\Phi}(a,\theta))\leq \hat{t}$. But for every $x\in X$ and $\mu \in V_T(x)$, by \cite[Theorem~4.3]{LO} we can find a sequence of ergodic measures $\nu_n$ with $\nu_n\to \mu$ and $h_{\nu_n}(T)~\to~ h_\mu(T)$. By the definition, $\int \Phi d\nu_n~\to~\int \Phi d\mu$, and the support of every ergodic measure is internally chain recurrent, so: \begin{align*} h_{\text{top}}(T,R_{\Phi}(a,\theta)) &\leq \sup \{h_{\mu}(T): \int \Phi d\mu \in (a-\theta,a+\theta) \text{ and }\supp(\mu)\subseteq Z \\ & \text{ for some internally chain transitive set } Z\subseteq X \}\\ &\leq \sup \{h_{\mu} (T): \int \Phi d\mu \in (a-\theta,a+\theta)\text{ and }\supp(\mu)\subseteq Y\in C_c(X,T) \}. \end{align*}
It remains to show that: \begin{align*} \sup&\{ h_{\mu}(T): \int \Phi d\mu \in (a-\theta,a+\theta), \mu \in \mathcal{M}_e(X) \} \\ \geq &\sup \{ h_{\mu}(T): \int\Phi d\mu \in (a-\theta,a+\theta), \supp(\mu) \subset Y\in C_c(X,T) \}. \end{align*}
Take any $\eta>0$ and any invariant measure $\mu$ supported on some chain recurrent class in $X$ with $\int \Phi d\mu \in (a-\theta,a+\theta)$. Then by the aforementioned result of \cite{LO} we find an ergodic measure $\nu$ with $\int \Phi d\nu \in (a-\theta,a+\theta)$ and $h_{\nu}(T)>h_{\mu}(T)-\eta$, which completes the proof. \end{proof}
Theorem~\ref{thm:v-} is motivated by a result for maps with the specification property in \cite{Young}, in particular mixing maps with the shadowing property. The following example shows that it can also hold when there are numerous chain-recurrent classes. Of course a trivial example is provided by the identity on the Cantor set with any measure, but it would be nice to have a more sophisticated example.
\begin{ex}\label{exV}
We are going to construct a map $f\colon [0,1]\to [0,1]$ such that $f$ has the shadowing property, infinitely many chain recurrent classes,
Lebesgue measure $m$ on $[0,1]$ is not $f$-invariant, and $\mathcal{V}^-\neq \emptyset$. This shows that Theorem~\ref{thm:v-} applies
also in the case when there are infinitely many chain-recurrent classes.
Let $a_n=2^{-n}$ and $b_n=5a_{n+1}/4$ for $n=0,1,\ldots$. Let $\lambda_n\in (\sqrt{2},2]$ be any sequence such that the tent map with slope $\lambda_n$
has the shadowing property. Equivalently, this map satisfies the so-called linking property \cite{Chen}, and by the results of \cite{Yorke} most slopes in $(\sqrt{2},2]$ satisfy it.
We also assume that $\lambda_0=2$, which is admissible, since the full tent map has the shadowing property.
Let $f\colon [0,1]\to [0,1]$ be defined as follows. On the interval $[b_n,a_n]$ the map $f$ is the tent map with slope $\lambda_n$ and on $[a_{n+1},b_n]$ we have an affine map with $f(a_{n+1})=b_{n+1}$ and $f(b_{n})=b_n$. Note that $f(b_n)-f(a_{n+1})=b_n-b_{n+1}=5a_{n+1}/8$, while $b_n-a_{n+1}=a_{n+1}/4$, which shows that $f$ has slope $5/2$ on each of the intervals $[a_{n+1},b_n]$.
This shows that $f$ is Lipschitz with Lipschitz constant $5$, and so if $m$ is Lebesgue measure, then for every $x$ and every $\varepsilon>0$ we have:
$$
m(B_n(x,\varepsilon))\geq \varepsilon 5^{-n}=\varepsilon e^{-n \ln (5)}.
$$
Then clearly $\mathcal{V}^{-}$ contains every function $\xi\geq \ln(5)$, but of course it contains other functions as well.
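As a sanity check on the constants above, the slope of each affine piece can be computed numerically (a small sketch in Python, assuming only the definitions $a_n=2^{-n}$ and $b_n=5a_{n+1}/4$; the helper names are ours). The affine piece joins $(a_{n+1},b_{n+1})$ to $(b_n,b_n)$; its slope is $5/2$ for every $n$, so the Lipschitz bound $5$ used above is indeed valid.

```python
# Verify the slope of the affine piece of f on [a_{n+1}, b_n],
# where f(a_{n+1}) = b_{n+1} and f(b_n) = b_n.
def a(n: int) -> float:
    return 2.0 ** (-n)

def b(n: int) -> float:
    return 5.0 * a(n + 1) / 4.0

def affine_slope(n: int) -> float:
    # rise over run for the segment joining (a_{n+1}, b_{n+1}) and (b_n, b_n)
    return (b(n) - b(n + 1)) / (b(n) - a(n + 1))

for n in range(10):
    # the slope is 5/2 for every n, hence bounded by the Lipschitz constant 5
    assert abs(affine_slope(n) - 2.5) < 1e-9
```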
To see that $f$ has shadowing, it is enough to apply Theorem~7 from \cite{Chris}. Simply, let $\pi_{n}\colon [a_{n+1},a_0]\to [a_n,a_0]$ be defined for $n>0$
by $\pi_n(x)=\max\{x,a_{n}\}$. In other words, $\pi_n$ collapses $[a_{n+1},a_n]$ to the point $a_n$. Note that $f_n\circ \pi_n=\pi_n\circ f_{n+1}$,
where $f_n=f|_{[a_n,a_0]}$. Also, each $f_n$ has the shadowing property, as it is piecewise linear and by definition has the linking property (since $f$ on each $[b_n,a_n]$ has it).
The coordinate-wise action of the maps $f_n$ on the inverse limit $\varprojlim(\pi_n, [a_n,a_0])$ has the shadowing property. But it is also easy to check that, by the definition, it is conjugate with $(f,[0,1])$. \end{ex}
\end{document}
\begin{document}
\title[On tangent sphere bundles with contact pseudo-metric structures]
{On tangent sphere bundles with contact pseudo-metric structures}
\author[N. Ghaffarzadeh]{Narges Ghaffarzadeh}
\address{Department of Pure Mathematics,\\ Faculty of Mathematical Sciences,\\ University of Tabriz, Tabriz, Iran.}
\email{n.ghaffarzadeh@tabrizu.ac.ir}
\author[M. Faghfouri]{Morteza Faghfouri} \address{Department of Pure Mathematics,\\ Faculty of Mathematical Sciences,\\ University of Tabriz, Tabriz, Iran.} \email{faghfouri@tabrizu.ac.ir}
\subjclass{53C15, 53C50, 53C07}
\keywords{contact pseudo-metric structure, tangent sphere bundle, unit tangent sphere bundle, Sasaki pseudo-metric}
\date{January 1, 2004}
\dedicatory{To my boss}
\begin{abstract} In this paper, we introduce a contact pseudo-metric structure on the tangent sphere bundle $T_\varepsilon M$. We prove that the tangent sphere bundle $T_{\varepsilon}M$ is a $(\kappa, \mu)$-contact pseudo-metric manifold
if and only if the manifold $M$ is of constant sectional curvature. Also, we prove that this structure on the tangent sphere bundle is $K$-contact if and only if the base manifold has constant curvature $\varepsilon$. \end{abstract}
\maketitle
\section{Introduction} In 1956, S. Sasaki \cite{sasaki:bundle} introduced a Riemannian metric on the tangent bundle $TM$ and the tangent sphere bundle $T_1M$ over a Riemannian manifold $M$; this metric was later called the Sasaki metric. In 1962, Dombrowski \cite{Dombrowski} showed that at each $Z\in TM$ one has $T_ZTM=HTM_Z\oplus VTM_Z$, where $HTM_Z$ and $VTM_Z$ are orthogonal subspaces of dimension $n$, called the horizontal and vertical distributions, respectively. He defined an almost K\"{a}hlerian structure on $TM$ and proved that it is a K\"{a}hlerian manifold if $M$ is flat. In the same year, Tachibana and Okumura \cite{tachibana:bundlecomplex} showed that the tangent bundle $TM$ of any non-flat Riemannian space $M$ always admits an almost K\"{a}hlerian structure which is not K\"{a}hlerian. Tashiro \cite{Tashiro:contactbundle} introduced a contact metric structure on the unit tangent sphere bundle $T_1M$ and proved that this contact metric structure is $K$-contact if and only if $M$ has constant curvature 1, in which case the structure is Sasakian.
Kowalski \cite{Kowalski} computed the curvature tensor of the Sasaki metric. Thus, on $T_1M$, $R(X,Y)\xi$ can be computed from the formulas for the curvature of $TM$.
In \cite{Blair:Contactmetricmanifoldssatisfyingnullitycondition}, Blair et al. introduced $(\kappa,\mu)$-contact Riemannian manifolds and proved that the tangent sphere bundle $T_{1}M$ is a $(\kappa, \mu)$-contact Riemannian manifold if and only if the base manifold $M$ is of constant sectional curvature $c$.
Takahashi \cite{Takahashi:SasakianManifoldWithPseud} introduced contact pseudo-metric structures $(\eta,g)$, where $\eta$ is a contact one-form and $g$ is a pseudo-Riemannian metric associated to it; these structures are a natural generalization of contact metric structures. Recently, contact pseudo-metric manifolds have been studied by Calvaruso and Perrone \cite{Calvaruso.Perrone:ContactPseudoMtricManifolds,Perrone:CurvatureKcontact}, and the authors of the present paper \cite{GhaffarzadehFaghfouri} introduced and studied $(\kappa, \mu)$-contact pseudo-metric manifolds.
In this paper, we suppose that $(M,g)$ is a pseudo-metric manifold and define a pseudo-metric on $TM$. We also introduce a contact pseudo-metric structure $(\varphi,\xi,\eta,g_{cm})$ on $T_\varepsilon M$ and prove that \begin{align*} \bar{R}(X,Y)\xi=c(4\varepsilon-(c+2))\{\eta(Y)X-\eta(X)Y\}-2\varepsilon c\{\eta(Y)hX-\eta(X)hY\} \end{align*}
if and only if the base manifold $M$ is of constant sectional curvature $c$. That is, the tangent sphere bundle $T_{\varepsilon}M$ is a $(\kappa, \mu)$-contact pseudo-metric manifold if and only if the base manifold $M$ is of constant sectional curvature. Also, the contact pseudo-metric structure $(\varphi,\xi,\eta,g_{cm})$ on $T_{\varepsilon}M$ is $K$-contact if and only if the base manifold $(M,g)$ has constant curvature $\varepsilon$.
\section{Preliminaries} Let $(M,g)$ be a pseudo-metric manifold, $\nabla$ the associated Levi-Civita connection, and $R=[\nabla,\nabla]-\nabla_{[,]}$ the curvature tensor. The tangent bundle of $M$, denoted by $TM$, consists of pairs $(x,u)$, where $x\in M$ and $u\in T_xM$ (i.e., $TM=\cup_{x\in M}T_xM$). The mapping $\pi:TM\to M$, $\pi(x, u)=x$, is the natural projection.\\ The tangent space $T_{(x,u)}TM$ splits into the vertical subspace $VTM_{(x,u)}:=\ker\pi_{*}\vert_{(x,u)}$ and the horizontal subspace $HTM_{(x,u)}$ determined by $\nabla$: $$T_{(x,u)}TM=VTM_{(x,u)}\oplus HTM_{(x,u)}.$$ For every $X\in T_{x}M$, there is a unique vector $X^{h}\in HTM_{(x,u)}$ such that $\pi_{*}(X^{h})=X$. It is called the horizontal lift of $X$ to $(x,u)$. Also, there is a unique vector $X^{v}\in VTM_{(x,u)}$ such that $X^{v}(df)=Xf$ for all $f\in C^\infty(M)$; $X^{v}$ is called the vertical lift of $X$ to $(x,u)$. The maps $X\mapsto X^{h}$ between $T_{x}M$ and $HTM_{(x,u)}$, and $X\mapsto X^{v}$ between $T_{x}M$ and $VTM_{(x,u)}$, are isomorphisms. Hence, every tangent vector $\bar{Z}\in T_{(x,u)}TM$ can be decomposed as $\bar{Z}=X^{h}+Y^{v}$ for uniquely determined vectors $X,Y\in T_{x}M$. The horizontal (respectively, vertical) lift of a vector field $X$ on $M$ to $TM$ is the vector field $X^{h}$ (respectively, $X^{v}$) on $TM$ whose value at the point $(x,u)$ is the horizontal (respectively, vertical) lift of $X_{x}$ to $(x,u)$.\\ A system of local coordinates $(x^{1},\ldots,x^{n})$ on an open subset $U$ of $M$ induces on $\pi^{-1}(U)\subset TM$ a system of local coordinates $(\bar{x}^{1},\ldots,\bar{x}^{n};u^{1},\ldots,u^{n})$ as follows:
$\bar{x}^{i}(x,u)=(x^{i}\circ\pi)(x,u)=x^{i}(x), u^{i}(x,u)=dx^{i}(u)=ux^{i}$
for $i=1,\ldots,n$ and $(x,u)\in\pi^{-1}(U)$. With respect to the induced local coordinate system, the horizontal and vertical lifts of a vector field $X= X^{i}\frac{\partial}{\partial x^{i}}$ on $U$ are given by \begin{gather} X^{h}= (X^{i}\circ\pi)\frac{\partial}{\partial\bar{x}^{i}}-u^{b}((X^{a}\Gamma^{i}_{ab})\circ\pi)\frac{\partial}{\partial u^{i}},\label{0000}\\ X^{v}=(X^{i}\circ\pi)\frac{\partial}{\partial u^{i}},\label{0001} \end{gather} where $\Gamma^{i}_{jk}$ are the local components of $\nabla$, i.e., $\nabla_{\frac{\partial}{\partial x^{j}}}\frac{\partial}{\partial x^{k}}=\Gamma^{i}_{jk}\frac{\partial}{\partial x^{i}}$. The span of the horizontal lifts at $t\in TM$ is called the horizontal subspace of $T_{t}TM$. For all $t\in TM$, the connection map $\mathcal{K}:TTM\to TM$ is given by $\mathcal{K}X_t^{h}=0 $ and $ \mathcal{K}X_{t}^{v}=X_{\pi(t)}$ \cite{Dombrowski}.
From (\ref{0000}) and (\ref{0001}), one can easily calculate the brackets of vertical and horizontal lifts: \begin{gather} [X^{h},Y^{h}]=[X,Y]^{h}-v\{R(X,Y)u\},\label{0002}\\ [X^{h},Y^{v}]=(\nabla_{X}Y)^{v},\label{0003}\\ [X^{v},Y^{v}]=0,\label{0004} \end{gather} for all $X,Y\in\Gamma(TM)$. We use some notation, due to M. Sekizawa (\cite{Sekizawa}). Let $T$ be a tensor field of type $(1,s)$ on $M$ and $X_{1},\ldots,X_{s-1}\in\Gamma(TM)$, the vertical vector field $v\{T(X_{1},\ldots,u,\ldots,X_{s-1})\}$ on $\pi^{-1}(U)$ is given by $$v\{T(X_{1},\ldots,u,\ldots,X_{s-1})\}:= u^{a}(T(X_{1},\ldots,\frac{\partial}{\partial x^{a}},\ldots,X_{s-1}))^{v}.$$ Analogously, one defines the horizontal vector field $h\{T(X_{1},\ldots,u,\ldots,X_{s-1})\}$ and $h\{T(X_{1},\ldots,u,\ldots,u,\ldots,X_{s-2})\}$ and the vertical vector field $v\{T(X_{1},\ldots$ $,u,\ldots,u,\ldots,X_{s-2})\}$. Note that these vector fields do not depend on the choice of coordinates on $U$. Let $(M,g)$ be a pseudo-metric manifold. On the tangent bundle $TM$, we can define a pseudo-metric $Tg$ to be \begin{align}\label{0005} Tg(X^{h},Y^{h})=Tg(X^{v},Y^{v})=g(X,Y)\circ\pi,\quad Tg(X^{h},Y^{v})=0 \end{align} for all $X,Y\in\Gamma(TM)$. We call it Sasaki pseudo-metric.
According to \eqref{0005}, if $\{E_1,\ldots,E_n\}$ is an orthonormal frame field on $U$, then $\{E_1^v,\ldots,E_n^v,E_1^h,\ldots,E_n^h\}$ is an orthonormal frame field on $\pi^{-1}(U)$. So, we have the following:
\begin{pro} If the index of $g$ is $\nu$ then the index of the Sasaki pseudo-metric $Tg$ is $2\nu$.
\end{pro}
Let $\tilde{\nabla}$ be the Levi-Civita connection of $Tg$. It is easy to check that for $X,Y\in\Gamma(TM)$ and $(x,u)\in TM$ (see \cite{Kowalski} for more details): \begin{align}\label{0006} \begin{split} (\tilde{\nabla}_{X^{v}} Y^{v})&=0,\qquad (\tilde{\nabla}_{X^{v}} Y^{h})=\frac{1}{2}h\{R(u,X)Y\},\\ (\tilde{\nabla}_{X^{h}} Y^{v})&=({\nabla}_{X} Y)^{v}+\frac{1}{2}h\{R(u,Y)X\},\\ (\tilde{\nabla}_{X^{h}} Y^{h})&=({\nabla}_{X} Y)^{h}-\frac{1}{2}v\{R(X,Y)u\}. \end{split} \end{align} \section{The curvature of the unit tangent sphere bundle with pseudo-metric}
Let $(TM,Tg)$ be the tangent bundle of $(M,g)$ endowed with its Sasaki pseudo-metric. We consider the hypersurface $T_{\varepsilon}M=\{(x,u)\in TM\mid g_{x}(u,u)=\varepsilon\}$, which we call the unit tangent sphere bundle. A unit normal vector field $N$ along $T_{\varepsilon}M$ is the (vertical) vector field $N= u^{i}\frac{\partial}{\partial u^{i}}=u^{i}(\frac{\partial}{\partial x^{i}})^{v}$. $N$ is independent of the choice of local coordinates and is defined globally on $TM$. We introduce some more notation. If $X\in T_{x}M$, we define the tangential lift of $X$ to $(x,u)\in T_{\varepsilon}M$ by \begin{align}\label{0007} X^{t}_{(x,u)}=X^{v}_{(x,u)}-\varepsilon g(X,u)N_{(x,u)}. \end{align} Clearly, the tangent space to $T_{\varepsilon}M$ at $(x,u)$ is then spanned by vectors of the form $X^{h}$ and $X^{t}$, where $X\in T_{x}M$. Note that $u^{t}_{(x,u)}=0$. The tangential lift of a vector field $X$ on $M$ to $T_{\varepsilon}M$ is the vertical vector field $X^{t}$ on $T_{\varepsilon}M$ whose value at the point $(x,u)\in T_{\varepsilon}M$ is the tangential lift of $X_{x}$ to $(x,u)$. For a tensor field $T$ of type $(1,s)$ on $M$ and $X_{1},\ldots,X_{s-1}\in\Gamma(TM)$, we define the vertical vector fields $t\{T(X_{1},\ldots,u,\ldots,X_{s-1})\}$ and $t\{T(X_{1},\ldots,u,\ldots,u,\ldots,X_{s-2})\}$ on $T_{\varepsilon}M$ in the obvious way.\\ In what follows, we give explicit expressions for the brackets of vector fields on $T_{\varepsilon}M$ involving tangential lifts, and for the Levi-Civita connection and the associated curvature tensor of the induced metric $\bar{g}$ on $T_{\varepsilon}M$.\\ First, for the brackets of vector fields on $T_{\varepsilon}M$ involving tangential lifts, we obtain \begin{gather} [X^{h},Y^{t}]=(\nabla_{X}Y)^{t},\label{0008}\\ [X^{t},Y^{t}]=\varepsilon g(X,u)Y^{t}-\varepsilon g(Y,u)X^{t}.\label{0009} \end{gather} Next, we denote by $\bar{g}$ the pseudo-metric induced on $T_{\varepsilon}M$ from $Tg$ on $TM$.
\begin{pro} The Levi-Civita connection $\bar{\nabla}$ of $(T_{\varepsilon}M,\bar{g})$ is described completely by \begin{align}\label{0010} \begin{split} \bar{\nabla}_{X^{t}} Y^{t}&=-\varepsilon g(Y,u)X^{t},\\ \bar{\nabla}_{X^{t}} Y^{h}&=\frac{1}{2}h\{R(u,X)Y\},\\ \bar{\nabla}_{X^{h}} Y^{t}&=({\nabla}_{X} Y)^{t}+\frac{1}{2}h\{R(u,Y)X\},\\ \bar{\nabla}_{X^{h}} Y^{h}&=({\nabla}_{X} Y)^{h}-\frac{1}{2}t\{R(X,Y)u\} \end{split} \end{align} for all vector fields $X$ and $Y$ on $M$. \end{pro} \begin{proof} This is obtained by an easy calculation using \eqref{0006} and the following equation $$\bar{\nabla}_{\bar{A}}\bar{B}=\tilde{\nabla}_{\bar{A}}\bar{B}-\varepsilon Tg(\bar{\nabla}_{\bar{A}}\bar{B},N)N,$$ for vector fields $\bar{A},\bar{B}$ tangent to $T_{\varepsilon}M$. \end{proof} \begin{pro} The curvature tensor $\bar{R}$ of $(T_{\varepsilon}M,\bar{g})$ is described completely by \begin{gather} \bar{R}(X^{t},Y^{t})Z^{t}=\varepsilon\{-\bar{g}(X^{t},Z^{t})Y^{t}+\bar{g}(Z^{t},Y^{t})X^{t}\},\label{0011}\\ \bar{R}(X^{t},Y^{t})Z^{h}=(R(X,Y)Z)^{h}-\varepsilon\{g(Y,u)h(R(X,u)Z)+g(X,u)h(R(u,Y)Z)\}\nonumber\\ \quad\quad\quad\quad\quad\quad\quad+\frac{1}{4}h\{[R(u,X),R(u,Y)]Z\},\label{0012}\\ \bar{R}(X^{h},Y^{t})Z^{t}=-\frac{1}{2}(R(Y,Z)X)^{h}+\frac{\varepsilon}{2}\{g(Y,u)h(R(u,Z)X)+g(Z,u)h(R(Y,u)X)\}\nonumber\\ \quad\quad\quad\quad\quad\quad\quad-\frac{1}{4}h\{R(u,Y)R(u,Z)X\},\label{0013}\\ \bar{R}(X^{h},Y^{t})Z^{h}=\frac{1}{2}(R(X,Z)Y)^{t}-\frac{\varepsilon}{2}g(Y,u)t\{R(X,Z)u\}\nonumber\\ \quad\quad\quad\quad\quad\quad\quad-\frac{1}{4}t\{R(X,R(u,Y)Z)u\}+\frac{1}{2}h\{(\nabla_{X}R)(u,Y)Z\},\label{0014}\\ \bar{R}(X^{h},Y^{h})Z^{t}=(R(X,Y)Z)^{t}-\varepsilon g(Z,u)t\{R(X,Y)u\}\nonumber\\ \quad\quad\quad\quad\quad\quad\quad+\frac{1}{4}t\{R(Y,R(u,Z)X)u-R(X,R(u,Z)Y)u\}\nonumber\\ \quad\quad\quad\quad\quad\quad\quad+\frac{1}{2}h\{(\nabla_{X}R)(u,Z)Y-(\nabla_{Y}R)(u,Z)X\},\label{0015}\\ \bar{R}(X^{h},Y^{h})Z^{h}=(R(X,Y)Z)^{h}+\frac{1}{2}h\{R(u,R(X,Y)u)Z\}\nonumber\\ 
\quad\quad\quad\quad\quad\quad\quad-\frac{1}{4}h\{R(u,R(Y,Z)u)X-R(u,R(X,Z)u)Y\}\nonumber\\ \quad\quad\quad\quad\quad\quad\quad+\frac{1}{2}t\{(\nabla_{Z}R)(X,Y)u\}\label{0016} \end{gather} for all vector fields $X,Y$ and $Z$ on $M$. \end{pro} \begin{proof} The proof follows from the equation below, in which the covariant derivatives are computed using \eqref{0010} and the brackets using \eqref{0002}, \eqref{0008} and \eqref{0009}:
$$\bar{R}(\bar{A},\bar{B})\bar{C}=\bar{\nabla}_{\bar{A}}\bar{\nabla}_{\bar{B}}\bar{C}-\bar{\nabla}_{\bar{B}}\bar{\nabla}_{\bar{A}}\bar{C}-\bar{\nabla}_{[\bar{A},\bar{B}]}\bar{C}.$$
\end{proof} \section{The contact pseudo-metric structure of the unit tangent sphere bundle} First, we give some basic facts on contact pseudo-metric structures. A pseudo-Riemannian manifold $(M^{2n+1},g)$ has a contact pseudo-metric structure if it admits a vector field $\xi$, a one-form $\eta$ and a $(1,1)$-tensor field $\varphi$ satisfying \begin{align}\label{0017} \begin{split} &\eta (\xi)= 1,\\ &\varphi^2(X)=-X+\eta(X)\xi,\\ &g(\varphi X,\varphi Y)=g(X,Y)-\varepsilon\eta(X)\eta(Y),\\ &d\eta(X,Y) =g(X,\varphi Y), \end{split} \end{align} where $\varepsilon=\pm1$ and $X,Y\in\Gamma(TM)$. In this case, $(M,\varphi,\xi,\eta,g)$ is called a contact pseudo-metric manifold. In particular, the above conditions imply that the characteristic curves, i.e., the integral curves of the characteristic vector field $\xi$, are geodesics.\\ If $\xi$ is in addition a Killing vector field with respect to $g$, then the manifold is said to be a $K$-contact (pseudo-metric) manifold. Another characterizing property of such contact pseudo-metric manifolds is the following:\\
geodesics which are orthogonal to $\xi$ at one point always remain orthogonal to $\xi$. This yields a second special class of geodesics, the $\varphi$-geodesics.\\ Next, a contact pseudo-metric manifold $(M^{2n+1},\varphi,\xi,\eta,g)$ satisfying the additional condition $N_{\varphi}(X,Y)+2d\eta(X,Y)\xi=0$ is said to be Sasakian, where $N_{\varphi}(X,Y)=\varphi^2[X,Y]+[\varphi X,\varphi Y]-\varphi[\varphi X,Y]-\varphi[X,\varphi Y]$ is the Nijenhuis torsion tensor of $\varphi$. Moreover, an almost contact pseudo-metric manifold $(M^{2n+1},\varphi,\xi,\eta,g)$ is a Sasakian manifold if and only if $(\nabla_X\varphi)Y=g(X,Y)\xi-\varepsilon\eta(Y)X$.
In particular, one can show that the characteristic vector field $\xi$ is a Killing vector field; hence a Sasakian manifold is also a $K$-contact manifold (see \cite{Calvaruso.Perrone:ContactPseudoMtricManifolds,Perrone:CurvatureKcontact} for more details). If a contact pseudo-metric manifold satisfies $R(X,Y)\xi= \varepsilon\kappa(\eta(Y )X -\eta(X)Y ) +\varepsilon \mu(\eta(Y )hX -\eta(X)hY )$ for some $(\kappa ,\mu)\in \mathbb{R}^2$, we call it a $(\kappa, \mu)$-contact pseudo-metric manifold. A $(\kappa, \mu)$-contact pseudo-metric manifold is Sasakian if and only if $\kappa=\varepsilon$ (see \cite{GhaffarzadehFaghfouri} for more details).
Take now an arbitrary pseudo-metric manifold $(M,g)$. One can easily define an almost complex structure $J$ on $TM$ in the following way: \begin{align}\label{0018} JX^{h}=X^{v},\quad JX^{v}=-X^{h} \end{align} for all vector fields $X$ on $M$.
From (\ref{0002}), (\ref{0003}) and (\ref{0004}), we see that the almost complex structure $J$ is integrable if and only if $(M,g)$ is flat.
From the definition (\ref{0005}) of the pseudo-metric $Tg$ on $TM$, it follows immediately that $(TM,Tg,J)$ is almost Hermitian. Moreover, $J$ defines an almost K\"{a}hlerian structure. It is a K\"{a}hler manifold only when $(M,g)$ is flat \cite{Dombrowski}.\\ Next, we consider the unit tangent sphere bundle $(T_{\varepsilon}M,\bar{g})$, which is isometrically embedded as a hypersurface in $(TM,Tg)$ with unit normal field $N$. Using the almost complex structure $J$ on $TM$, we define a unit vector field $\xi'$, a one-form $\eta'$ and a $(1,1)$-tensor field $\varphi'$ on $T_{\varepsilon}M$ by \begin{align}\label{0019} \xi'=-JN,\quad JX=\varphi' X+\eta'(X)N. \end{align} In local coordinates, $\xi'$, $\eta'$ and $\varphi'$ are described by \begin{align}\label{0020} \begin{split} &\xi'=u^{i}(\frac{\partial}{\partial x^{i}})^{h},\\ &\eta'(X^{t})=0,\quad\eta'(X^{h})=\varepsilon g(X,u),\\ &\varphi'(X^{t})=-X^{h}+\varepsilon g(X,u)\xi',\quad\varphi'(X^{h})=X^{t}, \end{split} \end{align} where $X,Y\in\Gamma(TM)$. It is easily checked that these tensors satisfy the conditions (\ref{0017}) except for the last one, where we find $\bar{g}(X,\varphi' Y)=2 d\eta'(X,Y)$. So, strictly speaking, $(\varphi',\xi',\eta',\bar{g})$ is not a contact pseudo-metric structure. Of course, the difficulty is easily rectified, and $$\eta=\frac{1}{2}\eta',\quad\xi=2\xi',\quad\varphi=\varphi',\quad g_{cm}=\frac{1}{4}\bar{g}$$ is taken as the standard contact pseudo-metric structure on $T_{\varepsilon}M$. With respect to the induced local coordinates $(x^{i},u^{i})$ on $TM$, the characteristic vector field is given by $$\xi=2u^{i}(\frac{\partial}{\partial x^{i}})^{h}.$$ The vector field $u^{i}(\frac{\partial}{\partial x^{i}})^{h}$ is the well-known geodesic flow vector field on $T_{\varepsilon}M$. Before proving our theorems, we explicitly obtain the covariant derivatives of $\xi$ and $\varphi$. For a horizontal tangent vector field, we may use a horizontal lift again.
Then $$\bar{\nabla}_{X^{h}}\xi=\tilde{\nabla}_{X^{h}}\xi=-v\{R(X,u)u\}$$ and hence for any horizontal vector $X^{h}$ at $(x,u)\in T_{\varepsilon}M$, we have $$\bar{\nabla}_{X^{h}}\xi=-v\{R(X,u)u\}=-t\{R(X,u)u\}.$$ For a vertical vector field $X^{v}$ tangent to $T_{\varepsilon}M$, we have $$\bar{\nabla}_{X^{v}}\xi=\tilde{\nabla}_{X^{v}}\xi=-2\varphi X^{v}-h\{R(X,u)u\}.$$ Since $J(\frac{\partial}{\partial x^{i}})^{h}=(\frac{\partial}{\partial x^{i}})^{v}$, or in terms of tangential lifts of a vector $X$ on $M$, $$\bar{\nabla}_{X^{t}}\xi=-2\varphi X^{t}-h\{R(X,u)u\}.$$ Comparing with $\bar{\nabla}_{X}\xi=-\varepsilon\varphi X-\varphi hX$ on $T_{\varepsilon}M$ for a vertical vector $V$ and a horizontal vector $X$ orthogonal to $\xi$, $hV$ and $hX$ are given by \begin{align}\label{085} hV=(2-\varepsilon)V-v\{R(\mathcal{K}V,u)u\}\quad\text{and}\quad hX=-\varepsilon X+h\{R(\pi_{*}X,u)u\}. \end{align} Also for any tangent vector fields $X$ and $Y$, we have \begin{align*} (\bar{\nabla}_{X}\varphi)Y=&\tilde{\nabla}_{X}JY-(\bar{\nabla}_{X}\eta')(Y)N+\eta'(Y)AX\\ &-\varepsilon\bar{g}(X,A\varphi Y)N-J(\tilde{\nabla}_{X}Y)-\varepsilon\bar{g}(X,AY)\xi'. \end{align*} We present two computations, one done in each manner.\\
As before, for $X,Y$ horizontal vector fields, we suppose that they are horizontal lifts, and we have $$(\bar{\nabla}_{X^{h}}\varphi)Y^{h}=\frac{1}{2}h\{R(u,X)Y\},$$ where we used the first Bianchi identity.\\ For $Y^{v}=Y^{i}\frac{\partial}{\partial u^{i}}$ a vertical vector field tangent to $T_{\varepsilon}M$ and $X^{h}$ a horizontal tangent vector, we have $$(\bar{\nabla}_{X^{h}}\varphi)Y^{v}=\frac{1}{2}t\{R(X,u)Y\},$$ where we used $$(\bar{\nabla}_{X^{h}}\eta')(Y^{v})=\varepsilon\bar{g}(Y^{v},\bar{\nabla}_{X^{h}}\xi')=-\frac{\varepsilon}{2}g(Y,R(X,u)u)=\frac{\varepsilon}{2}Tg(N,(R(X,u)Y)^{v}).$$ Similarly, we obtain $$(\bar{\nabla}_{X^{v}}\varphi)Y^{h}=t\{R(X,u)Y\}-2\eta(Y^{h})X^{v},$$ $$(\bar{\nabla}_{X^{v}}\varphi)Y^{v}=\frac{1}{2}h\{R(X,u)Y\}+2\varepsilon g_{cm}(X,Y)\xi.$$ \begin{theorem}\label{th:R} Let $(\varphi,\xi,\eta,g_{cm})$ be a contact pseudo-metric structure on the tangent sphere bundle $T_{\varepsilon}M$. Then \begin{align}\label{R} \bar{R}(X,Y)\xi=c(4\varepsilon-(c+2))\{\eta(Y)X-\eta(X)Y\}-2\varepsilon c\{\eta(Y)hX-\eta(X)hY\} \end{align}
if and only if the base manifold $M$ is of constant sectional curvature $c$. \end{theorem} \begin{proof} Assume that the manifold $M$ is a pseudo-metric manifold of constant curvature $c$. Then from equations (\ref{0011}-\ref{0016}), for $X,Y$ orthogonal to $\xi$, we have $\bar{R}(X, Y)\xi=0$; for a vertical vector $V$, we get $\bar{R}(V,\xi)\xi=c^{2}V$; and for a horizontal vector $X$ orthogonal to $\xi$, we obtain $\bar{R}(X,\xi)\xi=(4\varepsilon c- 3c^{2})X$. Moreover, from equations (\ref{085}), $hV=(2-\varepsilon (1+c))V$ and $hX= \varepsilon (c- 1)X$. Thus for all $X,Y$ on $T_{\varepsilon}M$, the curvature tensor on $T_{\varepsilon}M$ satisfies $$\bar{R}(X,Y)\xi=c(4\varepsilon-(c+2))\{\eta(Y)X-\eta(X)Y\}-2\varepsilon c\{\eta(Y)hX-\eta(X)hY\}.$$ Conversely, if the contact pseudo-metric structure on $T_{\varepsilon}M$ satisfies the condition $$\bar{R}(X,Y)\xi=\varepsilon\kappa\{\eta(Y)X-\eta(X)Y\}+\varepsilon \mu\{\eta(Y)hX-\eta(X)hY\},$$ where $\kappa=c(4-\varepsilon(c+2))$ and $\mu= -2c$, then \begin{align}\label{086} \bar{R}(X,\xi)\xi=\varepsilon\kappa X+\varepsilon\mu hX, \end{align} for any $X$ orthogonal to $\xi$. Now, for a vector $u$ on $M$ with $g(u,u)=\varepsilon$, define a symmetric operator $\psi_{u}:\langle u\rangle^{\perp}\to \langle u\rangle^{\perp}$
by $\psi_{u}X= R(X, u)u$. Substituting equation (\ref{085}) into (\ref{086}), we get
\begin{align}\label{1}
\bar{R}(V,\xi)\xi=\varepsilon(\kappa+\mu(2-\varepsilon))V-\varepsilon\mu\, v\{\psi_{u}\mathcal{K}V\}.
\end{align}
Also using equations (\ref{0011}-\ref{0016}), we have
\begin{align}\label{2}
\bar{R}(V,\xi)\xi=-v(R(R(u,\mathcal{K}V)u,u)u)=v\{\psi^{2}_{u}\mathcal{K}V\}.
\end{align} From a comparison of equations \eqref{1} and \eqref{2}, the operator $\psi_{u}$ satisfies the equation \begin{align}\label{1.1}\psi^{2}_{u}+\varepsilon\mu \psi_{u}-\varepsilon(\kappa+(2-\varepsilon)\mu)I=0.\end{align} In a similar way, for a horizontal $X$ orthogonal to $\xi$, $$\bar{R}(X,\xi)\xi=(\varepsilon\kappa-\mu)X+\varepsilon\mu\, h(\psi_{u}\pi_{*}X),$$ and, from equations \eqref{0011}-\eqref{0016}, we get $$\bar{R}(X,\xi)\xi=h(4\psi_{u}\pi_{*}X-3\psi^{2}_{u}\pi_{*}X),$$ giving \begin{align}\label{1.2}3\psi^{2}_{u}+(\varepsilon\mu-4)\psi_{u}+(\varepsilon\kappa-\mu)I=0.\end{align} From equations \eqref{1.1} and \eqref{1.2}, the eigenvalues $a$ of $\psi_{u}$ satisfy the two quadratic equations $$a^{2}+\varepsilon\mu a-(\varepsilon\kappa+(2\varepsilon-1)\mu)=0,\quad a^{2}+\frac{\varepsilon\mu-4}{3}a+\frac{\varepsilon\kappa-\mu}{3}=0.$$ Thus, the common root of the above quadratic equations is $a=\varepsilon c$.
Hence from $\psi_{u}X= R(X, u)u=\varepsilon cX$ and $g(u,u)=\varepsilon$, $M$ is of constant curvature $c$. \end{proof} We can now rephrase Theorem \ref{th:R} as the following result. \begin{result}\label{4.2}
The tangent sphere bundle $T_{\varepsilon}M$ is a $(\kappa, \mu)$-contact pseudo-metric manifold
if and only if the base manifold $M$ is of constant sectional curvature $c$, with $\kappa=c(4-\varepsilon(c+2))$ and $\mu= -2c$. \end{result} We now have the following theorem about the $K$-contact structure. \begin{theorem} The contact pseudo-metric structure $(\varphi,\xi,\eta,g_{cm})$ on $T_{\varepsilon}M$ is $K$-contact if and only if the base manifold $(M,g)$ has constant curvature $\varepsilon$. \end{theorem} \begin{proof} By Theorem 3.3 of \cite{Perrone:CurvatureKcontact}, $T_{\varepsilon}M$ is $K$-contact if and only if the sectional curvature of all nondegenerate plane sections containing $\xi$ equals $\varepsilon$, and this holds if and only if the base manifold has constant curvature $\varepsilon$. \end{proof} \begin{theorem} Let $M$ be an $n$-dimensional pseudo-metric manifold, $n>2$, of constant sectional curvature $c$. The tangent sphere bundle $T_{\varepsilon}M$ has constant $\varphi$-sectional curvature $(4c(\varepsilon-1)+\varepsilon c^{2})$ if and only if $c=2\varepsilon\pm\sqrt{5}$. \end{theorem} \begin{proof} By Theorem 3.5 of \cite{GhaffarzadehFaghfouri}, a $(\kappa, \mu)$-contact pseudo-metric manifold has constant $\varphi$-sectional curvature $-(\kappa+\varepsilon\mu)$ if and only if $\mu=\varepsilon\kappa+1$. So, using Result \ref{4.2} and a straightforward calculation, the proof is completed. \end{proof} \begin{result}\label{4.3}
For $\varepsilon=+1$, the tangent sphere bundle $T_{1}M$ is a $(\kappa, \mu)$-contact pseudo-metric manifold
if and only if the base manifold $M$ is of constant sectional curvature $c$ and $\kappa=c(2-c), \mu= -2c $. Also, $T_{1}M$ is Sasakian if and only if $c=1$. \end{result} \begin{result}\label{4.4}
For $\varepsilon=-1$, the tangent sphere bundle $T_{-1}M$ is a $(\kappa, \mu)$-contact pseudo-metric manifold
if and only if the base manifold $M$ is of constant sectional curvature $c$ and $\kappa=c(6+c), \mu= -2c $. Also, $T_{-1}M$ is Sasakian if and only if $c=-3\pm2\sqrt{2}.$ \end{result}
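The constants in Results \ref{4.3} and \ref{4.4} can be double-checked numerically (a small sketch in Python; the function names are ours): with $\kappa=c(4-\varepsilon(c+2))$ and $\mu=-2c$ as in Result \ref{4.2}, the Sasakian condition $\kappa=\varepsilon$ gives $c=1$ for $\varepsilon=1$ and $c=-3\pm2\sqrt{2}$ for $\varepsilon=-1$, while the constant $\varphi$-sectional curvature condition $\mu=\varepsilon\kappa+1$ gives $c=2\varepsilon\pm\sqrt{5}$.

```python
import math

def kappa(eps: int, c: float) -> float:
    # kappa = c(4 - eps(c+2)), as in Result 4.2
    return c * (4 - eps * (c + 2))

# Sasakian condition kappa = eps:
assert abs(kappa(1, 1.0) - 1) < 1e-9                      # eps = +1, c = 1
for c in (-3 + 2 * math.sqrt(2), -3 - 2 * math.sqrt(2)):  # eps = -1
    assert abs(kappa(-1, c) - (-1)) < 1e-9

# Constant phi-sectional curvature condition mu = eps*kappa + 1, with mu = -2c:
for eps in (1, -1):
    for c in (2 * eps + math.sqrt(5), 2 * eps - math.sqrt(5)):
        assert abs(-2 * c - (eps * kappa(eps, c) + 1)) < 1e-9
```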
\end{document}
\begin{document}
\title{Meaningful aggregation functions mapping ordinal scales into an ordinal scale: a state of the art}
\theoremstyle{plain} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}{Lemma}[section] \newtheorem{proposition}{Proposition}[section] \newtheorem{corollary}{Corollary}[section]
\theoremstyle{definition} \newtheorem{definition}{Definition}[section] \newtheorem{example}{Example}[section] \newtheorem{conjecture}{Conjecture}[section] \newtheorem{remark}{Remark}[section]
\newcommand{\mathbb{N}}{\mathbb{N}} \newcommand{\mathbb{R}}{\mathbb{R}} \newcommand{{\rm ran}}{{\rm ran}} \newcommand{\mathcal{I}_n[E]}{\mathcal{I}_n[E]} \newcommand{\mathcal{I}_n^*[E]}{\mathcal{I}_n^*[E]} \newcommand{L_c}{L_c} \newcommand{L_{\xi(I)}}{L_{\xi(I)}} \newcommand{\mathcal{C}_\mu}{\mathcal{C}_\mu} \newcommand{\mathcal{S}_\mu}{\mathcal{S}_\mu} \newcommand{\mathcal{C}_n}{\mathcal{C}_n} \newcommand{\mathcal{C}_n[E]}{\mathcal{C}_n[E]} \newcommand{\mathbf{x}}{\mathbf{x}} \newcommand{\mathbf{y}}{\mathbf{y}} \newcommand{\mathbf{a}}{\mathbf{a}} \newcommand{\mathbf{b}}{\mathbf{b}} \newcommand{\mathbf{f}}{\mathbf{f}} \newcommand{\boldsymbol{\phi}}{\boldsymbol{\phi}} \newcommand{~\Big\{{<\atop =}\Big\}~}{~\Big\{{<\atop =}\Big\}~} \def\mathop{\rm argmax}{\mathop{\rm argmax}} \def\mathop{\boldsymbol{\times}}{\mathop{\boldsymbol{\times}}}
\begin{abstract} We present an overview of the meaningful aggregation functions mapping ordinal scales into an ordinal scale. Three main classes are discussed, namely order invariant functions, comparison meaningful functions on a single ordinal scale, and comparison meaningful functions on independent ordinal scales. It appears that the most prominent meaningful aggregation functions are lattice polynomial functions, that is, functions built only on projections and minimum and maximum operations. \end{abstract}
\noindent{\bf Keywords:} Aggregation functions, ordinal scales, order invariance, comparison meaningfulness, lattice polynomial functions.\\ 2000 Mathematics Subject Classification: Primary 39B22, 39B72, Secondary 06A05, 91E45.
\section{Introduction}
In many domains we are faced with the problem of aggregating a collection of numerical readings to obtain a typical or average value. Such aggregation problems arise in an increasing number of areas, not only in mathematics and physics, but also in engineering and in the economic, social, and other sciences. Various aggregation functions and processes have already been proposed in the literature, and many others are still to be designed to fulfill ever newer requirements.
Studies on the aggregation problem have shown that the choice of the aggregation function is far from being arbitrary and should be based upon properties dictated by the framework in which the aggregation is performed. One of the main concerns when choosing an appropriate function is to take into account the scale types of the variables being aggregated. On this issue, Luce~\cite{Luc59} observed that the general form of the functional relationship between variables is greatly restricted if we know the scale types of the dependent and independent variables. For instance, if all the variables define a common ordinal scale, it is clear that any relevant aggregation function cannot be constructed from usual arithmetic operations, unless these operations involve only order. Thus, computing the arithmetic mean is forbidden whereas the median or any order statistic is permitted.
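The contrast between the median and the arithmetic mean under ordinal (strictly increasing) transformations can be illustrated concretely (a minimal sketch in Python; the sample data and the transformation $\phi(x)=x^3$ are our choices):

```python
import statistics

def phi(x: float) -> float:
    # an admissible ordinal transformation: strictly increasing on the reals
    return x ** 3

data = [1.0, 2.0, 5.0]

# The median commutes with every strictly increasing transformation ...
assert statistics.median(phi(x) for x in data) == phi(statistics.median(data))

# ... but the arithmetic mean does not, so the mean is not meaningful
# on an ordinal scale.
assert statistics.mean(phi(x) for x in data) != phi(statistics.mean(data))
```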
Specifically, suppose $x_1,\ldots,x_n,x_{n+1}$ are $n+1$ variables, each $x_i$ having a real interval as a domain, and $x_{n+1}=F(x_1,\ldots,x_n)$ is some unknown aggregation function. The problem is to find the general form of the function $F$ knowing the scale types of the input and output variables. The \emph{scale type}\/ of a variable $x_i$ is defined by the class of \emph{admissible transformations}, transformations, such as that from grams to pounds or degrees Fahrenheit to degrees centigrade, that change the scale into an alternative acceptable scale. In the case of a \emph{ratio scale}, for example, an admissible transformation is a function of the form $x\mapsto rx$, with some $r>0$, which changes the unit of the scale. Similarly, for an \emph{interval scale}, an admissible transformation is a function $x\mapsto rx+s$, with $r>0$ and $s\in\mathbb{R}$, which modifies both the origin and the unit of the scale. For an \emph{ordinal scale}, an admissible transformation is a strictly increasing function $x\mapsto \phi(x)$, which changes the values of the scale while preserving their order. For more details on the theory of scale types, see \cite{KraLucSupTve71,LucKraSupTve90,Nar02,Rob79,Rob94}.
Luce's principle, called ``principle of theory construction", is based on the requirement that admissible transformations of the input variables must lead to an admissible transformation of the output variable. For example, if the input variables are independent scales, then the aggregation function $F$ should satisfy the following condition. For any admissible transformations $\phi_1,\ldots,\phi_n$ of the input variables, there is an admissible transformation $\psi_{\phi_1,\ldots,\phi_n}$ of the output variable so that $F\big(\phi_1(x_1),\ldots,\phi_n(x_n)\big)=\psi_{\phi_1,\ldots,\phi_n}(x_{n+1})$ or, equivalently, \begin{equation}\label{eq:fppf1} F\big(\phi_1(x_1),\ldots,\phi_n(x_n)\big)=\psi_{\phi_1,\ldots,\phi_n}\big(F(x_1,\ldots,x_n)\big). \end{equation}
The solutions of this functional equation constitute the set of the possible aggregation functions, which are ``meaningful" in the sense that they do not depend upon the particular scales of measurement chosen for the variables, but only upon their scale types.
We can also assume that the input variables define the same scale, which implies that the same admissible transformation must be applied to all the input variables. In this case, the condition on $F$ is the following. For any common admissible transformation $\phi$ of the input variables, there is an admissible transformation $\psi_{\phi}$ of the output variable so that \begin{equation}\label{eq:fppf2} F\big(\phi(x_1),\ldots,\phi(x_n)\big)=\psi_{\phi}\big(F(x_1,\ldots,x_n)\big). \end{equation}
In the extreme case where all the input and output variables define the same scale, then, for any admissible transformation $\phi$ of the input and output variables, we must have \begin{equation}\label{eq:fppf3} F\big(\phi(x_1),\ldots,\phi(x_n)\big)=\phi\big(F(x_1,\ldots,x_n)\big). \end{equation}
Equations (\ref{eq:fppf1}) and (\ref{eq:fppf2}) were completely solved in the eighties for ratio scale variables, with domain $\mathbb{R}_+:=[0,\infty)$, and interval scale variables, with domain $\mathbb{R}$, even under some further assumptions such as symmetry, continuity, and nondecreasing monotonicity (in each argument); see \cite{Acz87,AczGroSch94,AczRob89,AczRobRos86,Kim90}.
\begin{example}[\cite{AczRobRos86}] If all the input variables are independent ratio scales and the output variable is also a ratio scale then the meaningful aggregation functions $F\colon\mathbb{R}^n_+\to\mathbb{R}_+$ are exactly the solutions of (\ref{eq:fppf1}), where each admissible transformation is a multiplication by a positive constant. These solutions are given by $$F(x_1,\ldots,x_n)=a\prod_{i=1}^n f_i(x_i)\qquad (a>0),$$ where the functions $f_i\colon\mathbb{R}_+\to\mathbb{R}_+$ fulfill the equations $f_i(x_iy_i)=f_i(x_i)f_i(y_i)$ $(i=1,\ldots,n)$. Under continuity, these solutions are of the form $$ F(x_1,\ldots,x_n)=a\prod_{i=1}^n x_i^{c_i}\qquad (a>0, \; c_1,\ldots,c_n\in\mathbb{R}). $$ \end{example}
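As a quick numerical check of the continuous solutions above (a Python sketch with illustrative values; the names \texttt{F}, \texttt{psi} and the chosen exponents are ours, not taken from the cited works), rescaling each input $x_i$ by a positive factor $r_i$ rescales the output by $\prod_i r_i^{c_i}$, which is exactly the meaningfulness condition (\ref{eq:fppf1}) for independent ratio scales.

```python
import math

# Sketch: F(x1,...,xn) = a * prod(x_i ** c_i) satisfies equation (1) for
# independent ratio scales: F(r1*x1,...,rn*xn) = (prod r_i**c_i) * F(x).
# The constants a, c and the sample points are illustrative assumptions.

A = 2.0
C = (0.5, 1.0, 2.0)

def F(x):
    return A * math.prod(xi ** ci for xi, ci in zip(x, C))

x = (1.0, 2.0, 3.0)
r = (4.0, 0.5, 10.0)                               # one unit change per input

lhs = F(tuple(ri * xi for ri, xi in zip(r, x)))
psi = math.prod(ri ** ci for ri, ci in zip(r, C))  # induced output transformation
rhs = psi * F(x)
assert math.isclose(lhs, rhs)
```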
For ordinal scales and without any further assumptions, equations (\ref{eq:fppf1}) and (\ref{eq:fppf2}) long resisted solution. The complete description of their solutions is now known and was presented recently in a couple of articles; see \cite{MarMes04,MarMesRuc05}.
The main purpose of this paper is to present a catalog of all the possible meaningful aggregation functions mapping ordinal scales into an ordinal scale. More precisely, we describe all the possible solutions of each of the functional equations above, where the admissible transformations are strictly increasing functions. We also present the solutions under some further assumptions such as continuity, symmetry, idempotency, and nondecreasing monotonicity (in each argument).
In such an ordinal framework, it is natural to assume that the common domain of the input variables be any open real interval or even the whole real line. However, we consider the more general situation where the domain of the input variables is any real interval $E$, possibly unbounded, and where the domain of the output variable is the real line, except for equation (\ref{eq:fppf3}) where this domain must also be the set $E$. We further assume that the admissible transformations of the input variables are confined to the increasing bijections from $E$ onto $E$. This latter assumption, which brings no restriction to the solutions of the functional equations, enables us to consider closed domains $E$ whose endpoints remain fixed under any admissible transformation.
We thus provide and discuss all the solutions $F\colon E^n\to\mathbb{R}$ of the functional equations (\ref{eq:fppf1}) and (\ref{eq:fppf2}), where $\phi_1,\ldots,\phi_n$, and $\phi$ are arbitrary increasing bijections from $E$ onto $E$ and where $\psi_{\phi_1,\ldots,\phi_n}$ and $\psi_{\phi}$ are strictly increasing functions from ${\rm ran}(F)$ into ${\rm ran}(F)$. We call those solutions \emph{comparison meaningful functions on independent ordinal scales}\/ and \emph{comparison meaningful functions on a single ordinal scale}, respectively. We also provide all the solutions $F\colon E^n\to E$ of the functional equation (\ref{eq:fppf3}), where $\phi$ is an arbitrary increasing bijection from $E$ onto $E$. We call those solutions \emph{order invariant functions}.
The outline of this paper is as follows. In \S{2} we introduce the concept of order invariant subsets, which will play a key role in the description of the most general solutions of the three functional equations. In \S{3} we introduce the lattice polynomial functions, which represent most of the regular (e.g., nondecreasing) solutions of equations (\ref{eq:fppf2}) and (\ref{eq:fppf3}). We also recall some of their properties as aggregation functions. In \S{4} we present and discuss all the order invariant functions. In \S\S{5} and 6 we present respectively the comparison meaningful functions on a single ordinal scale and the comparison meaningful functions on independent ordinal scales. Finally in \S{7} we provide interpretations of equations (\ref{eq:fppf1})--(\ref{eq:fppf3}) in the setting of aggregation on finite chains.
Throughout this paper we denote by $E$ any real interval, bounded or not, with interior $E^\circ$. We also denote by $B[E]$ the set of \emph{included boundaries} of $E$, that is $B[E]:=E\setminus E^\circ$. The set of all increasing bijections $\phi$ of $E$ onto itself is denoted by $\Phi[E]$. As each function $\phi\in\Phi[E]$ preserves the ordinal structure of $E$, the set $\Phi[E]$ is actually the \emph{order automorphism group}, under composition, of $E$. Finally, the symbol $[n]$ denotes the index set $\{1,\ldots,n\}$ and, for any $\mathbf{x}\in E^n$ and any $\boldsymbol{\phi}\in\Phi[E]^n$, the symbol $\boldsymbol{\phi}(\mathbf{x})$ denotes the vector $\big(\phi_1(x_1),\ldots,\phi_n(x_n)\big)$.
\section{Order invariant subsets}\label{sec:gfass3}
The space $E^n$ can be partitioned into \emph{order invariant subsets}, which are very useful in describing the general solutions of the functional equations introduced above. Those subsets were introduced first by Ovchinnikov (see \cite[\S{3}]{Ovc96} and \cite[\S{2}]{Ovc98c}) in the general framework of ordered sets and then independently by Bart{\l}omiejczyk and Drewniak~\cite{BarDre04} for closed real intervals; see also \cite{MarMes04,MarMesRuc05,MesRuc04}. In this section we introduce them through the concept of group orbit.\footnote{For definitions and results about the concept of orbit in algebra, see e.g.\ \cite{Gri07}.}
Consider the product set $\Phi[E]^n$ and its \emph{diagonal restriction} $$\Phi_n[E]:=\big\{(\underbrace{\phi,\ldots,\phi}_n) : \phi\in \Phi[E]\big\}.$$ As $\Phi_n[E]$ is clearly a subgroup of $\Phi[E]^n$, we can define the orbit of any element $\mathbf{x}\in E^n$ under the action of $\Phi_n[E]$, that is, $\Phi_n[E](\mathbf{x}) := \{\boldsymbol{\phi}(\mathbf{x}) : \boldsymbol{\phi} \in \Phi_n[E]\}$. The set of orbits of $E^n$ under $\Phi_n[E]$ forms a partition of $E^n$ into equivalence classes, where $\mathbf{x},\mathbf{y}\in E^n$ are equivalent if their orbits are the same, that is, if there exists $\boldsymbol{\phi}\in\Phi_n[E]$ such that $\mathbf{y}=\boldsymbol{\phi}(\mathbf{x})$. The orbits of $E^n$ under $\Phi_n[E]$ are \emph{order invariant subsets} in the following sense (see \cite{BarDre04}).
\begin{definition} A nonempty subset $I$ of $E^n$ is called \emph{order invariant}\/ if, for any $\mathbf{x}\in I$, we have $\boldsymbol{\phi}(\mathbf{x})\in I$ for all $\boldsymbol{\phi}\in \Phi_n[E]$.\footnote{Equivalently, $I$ is order invariant if $\boldsymbol{\phi}(I)\subseteq I$ for all $\boldsymbol{\phi}\in\Phi_n[E]$. Actually, since $\Phi_n[E]$ is a group, we can even write $\boldsymbol{\phi}(I)=I$.} An order invariant subset of $E^n$ is \emph{minimal}\/ if it has no proper order invariant subset. \end{definition}
It is easy to see that the set $\mathcal{I}_n[E]:=E^n/\Phi_n[E]$ of orbits of $E^n$ under $\Phi_n[E]$ is identical to the set of minimal order invariant subsets of $E^n$. Moreover, any order invariant subset is a union of those orbits.
The following proposition (for closed $E$, see \cite{BarDre04,MesRuc04}) yields a complete description of the orbits.
\begin{proposition}\label{prop:DescOrbits} We have $I\in \mathcal{I}_n[E]$ if and only if there exists a permutation $\pi$ on $[n]$ and a sequence $\{\lhd_i\}_{i=0}^n$ of symbols $\lhd_i\in\{<,=\}$, containing at least one symbol $<$ if $\inf E\in E$ and $\sup E\in E$, such that $$I=\{\mathbf{x}\in E^n : \inf E \,\lhd_0\, x_{\pi(1)} \,\lhd_1\,\cdots\,\lhd_{n-1}\, x_{\pi(n)} \,\lhd_n\, \sup E\},$$ where $\lhd_0$ is $<$ if $\inf E\notin E$ and $\lhd_n$ is $<$ if $\sup E\notin E$. \end{proposition}
\begin{example}[\cite{MesRuc04}]\label{ex:fggger} The unit square $[0,1]^2$ contains exactly eleven minimal order invariant subsets, namely the open triangles $\{(x_1,x_2) : 0<x_1<x_2<1\}$ and $\{(x_1,x_2) : 0<x_2<x_1<1\}$, the open diagonal $\{(x_1,x_2) : 0<x_1=x_2<1\}$, the four square vertices, and the four open line segments joining neighboring vertices. \end{example}
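The eleven classes of this example can be verified computationally. The following Python sketch (our own illustration, not part of the cited works) classifies sample points of $[0,1]^2$ by the order pattern of Proposition~\ref{prop:DescOrbits}, i.e., by the equalities and strict inequalities among $0$, $x_1$, $x_2$, $1$, and counts the distinct patterns.

```python
from itertools import product

# Sketch: two points of [0,1]^2 lie in the same orbit under the diagonal
# action of increasing bijections of [0,1] iff they exhibit the same pattern
# of equalities/inequalities among 0, x1, x2, 1.

def signature(x1, x2):
    cmp = (x1 > x2) - (x1 < x2)          # -1, 0, or 1
    return (x1 == 0.0, x1 == 1.0, x2 == 0.0, x2 == 1.0, cmp)

grid = [0.0, 0.25, 0.5, 0.75, 1.0]       # hits every orbit of the unit square
orbits = {signature(x1, x2) for x1, x2 in product(grid, repeat=2)}
assert len(orbits) == 11                 # the eleven minimal order invariant subsets
```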
\begin{remark}\label{rem:perm2} From Proposition~\ref{prop:DescOrbits} we can easily derive an alternative way to characterize the membership of given vectors $\mathbf{x},\mathbf{y}\in E^n$ in the same orbit. Let $\Pi_n$ be the set of permutations on $\{0,1,\ldots,n+1\}$ and, for any $\mathbf{x}\in E^n$, define $$ \Pi(\mathbf{x}):=\{\pi\in\Pi_n : x_{\pi(0)}\leqslant x_{\pi(1)} \leqslant\cdots\leqslant x_{\pi(n+1)}\}, $$ where $x_0:=\inf E$ and $x_{n+1}:=\sup E$. Then, for any $\mathbf{x},\mathbf{y}\in E^n$, there exists $I\in\mathcal{I}_n[E]$ such that $\mathbf{x},\mathbf{y}\in I$ if and only if $\Pi(\mathbf{x})=\Pi(\mathbf{y})$.\footnote{This condition is more restrictive than \emph{comonotonicity} of vectors $\mathbf{x}$ and $\mathbf{y}$, which simply means that $\Pi(\mathbf{x})$ and $\Pi(\mathbf{y})$ overlap; see \cite{HarLitPol52}.} \end{remark}
Since $\Phi[E]^n$ is itself a group, we can also define the orbit of any element $\mathbf{x}\in E^n$ under the action of $\Phi[E]^n$, that is, $\Phi[E]^n(\mathbf{x}) := \{\boldsymbol{\phi}(\mathbf{x}) : \boldsymbol{\phi} \in \Phi[E]^n\}$. Just as for the subgroup $\Phi_n[E]$, the set of orbits of $E^n$ under $\Phi[E]^n$ forms a partition of $E^n$ into equivalence classes, where $\mathbf{x},\mathbf{y}\in E^n$ are equivalent if there exists $\boldsymbol{\phi}\in\Phi[E]^n$ such that $\mathbf{y}=\boldsymbol{\phi}(\mathbf{x})$. The orbits of $E^n$ under $\Phi[E]^n$ are \emph{strongly order invariant subsets} in the following sense (see \cite{MarMesRuc05}).
\begin{definition} A nonempty subset $I$ of $E^n$ is called \emph{strongly order invariant}\/ if, for any $\mathbf{x}\in I$, we have $\boldsymbol{\phi}(\mathbf{x})\in I$ for all $\boldsymbol{\phi}\in \Phi[E]^n$.\footnote{Equivalently, $I$ is strongly order invariant if $\boldsymbol{\phi}(I)\subseteq I$ for all $\boldsymbol{\phi}\in\Phi[E]^n$. Once again, since $\Phi[E]^n$ is a group, we can even write $\boldsymbol{\phi}(I)=I$.} A strongly order invariant subset of $E^n$ is \emph{minimal}\/ if it has no proper strongly order invariant subset. \end{definition}
The set $\mathcal{I}_n^*[E]:=E^n/\Phi[E]^n$ of orbits of $E^n$ under $\Phi[E]^n$ is identical to the set of minimal strongly order invariant subsets of $E^n$. Moreover, any strongly order invariant subset is a union of those orbits.
The following proposition \cite{MarMesRuc05} yields a complete description of the orbits.
\begin{proposition}
We have $\mathcal{I}_n^*[E] = \{\times_{i=1}^n I_i : I_i\in \mathcal{I}_1[E]\}= (\mathcal{I}_1[E])^n$, with cardinality $|\mathcal{I}_n^*[E]|=(1+|B[E]|)^n$. \end{proposition}
\begin{example}[\cite{MesRuc04}] The unit square $[0,1]^2$ contains exactly nine minimal strongly order invariant subsets, namely the open square $(0,1)^2$, the four square vertices, and the four open line segments joining neighboring vertices. \end{example}
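This count, too, is easy to verify numerically. In the following Python sketch (our illustration), orbits under the full product group $\Phi[E]^2$ are classified coordinatewise, each coordinate of $[0,1]^2$ falling into $\{0\}$, $(0,1)$, or $\{1\}$, in agreement with the formula $|\mathcal{I}_n^*[E]|=(1+|B[E]|)^n=3^2=9$.

```python
from itertools import product

# Sketch: orbits under the product group factor coordinatewise, so each
# coordinate lies in one of the three orbits {0}, (0,1), {1} of [0,1].

def coord_class(x):
    return 0 if x == 0.0 else (2 if x == 1.0 else 1)

grid = [0.0, 0.25, 0.5, 0.75, 1.0]
strong_orbits = {(coord_class(x1), coord_class(x2))
                 for x1, x2 in product(grid, repeat=2)}
assert len(strong_orbits) == 9           # (1 + |B[E]|)**n with |B[E]| = 2, n = 2
```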
Let us now show that the set $\mathcal{I}_n^*[E]$ can be described by means of the set $\mathcal{I}_n[E]$. For any $i\in [n]$, let ${\rm P}_i\colon E^n\to E$ be the projection operator onto the $i$th coordinate, that is, ${\rm P}_i(\mathbf{x}):=x_i$. We can easily see that, for any $I\in\mathcal{I}_n[E]$, we have ${\rm P}_i(I)\in\mathcal{I}_1[E]$. Define an equivalence relation $\sim$ on $\mathcal{I}_n[E]$ as $$ I\sim J \quad \Leftrightarrow \quad {\rm P}_i(I)={\rm P}_i(J) \quad (i\in [n]). $$ Then, it is easy to see \cite{MarMesRuc05} that $$ \mathcal{I}_n^*[E] = \Big\{\bigcup_{\textstyle{J\in\mathcal{I}_n[E]\atop J\sim I}}J : I\in \mathcal{I}_n[E]\Big\} = \Big\{\mathop{\boldsymbol{\times}}_{i=1}^n {\rm P}_i(I) : I\in \mathcal{I}_n[E]\Big\}. $$
Now, to easily describe certain nondecreasing aggregation functions, it is useful to consider partial orders on $\mathcal{I}_n[E]$ and $\mathcal{I}_n^*[E]$. Starting from the natural order $\{\inf E\} \prec E^\circ \prec \{\sup E\}$ on $\mathcal{I}_1[E]$, we can straightforwardly derive a partial order $\preccurlyeq$ on $\mathcal{I}_n[E]$, namely $$ I\preccurlyeq J\quad \Leftrightarrow \quad {\rm P}_i(I)\preccurlyeq {\rm P}_i(J) \quad (i\in [n]). $$ The corresponding partial order on $\mathcal{I}_n^*[E]$ is defined similarly.
\begin{remark} Consider again the set $\Pi_n$ of permutations on $\{0,1,\ldots,n+1\}$ (see Remark~\ref{rem:perm2}). For any $\mathbf{x}\in E^n$, we can define \begin{eqnarray*} \lefteqn{\Pi^*(\mathbf{x}) :=\{\pi\in\Pi_n : \pi(i)\leqslant \ell(\mathbf{x}) ~\Leftrightarrow ~x_i=\inf E}\\ && \mbox{ and }~ \pi(j)\geqslant n+1-u(\mathbf{x}) ~\Leftrightarrow ~x_j=\sup E\}, \end{eqnarray*} where $x_0:=\inf E$, $x_{n+1}:=\sup E$ and $\ell(\mathbf{x}):=|\{i\in [n] : x_i=\inf E\}|$, $u(\mathbf{x}):=|\{j\in [n] : x_j=\sup E\}|$ denote the numbers of components of $\mathbf{x}$ equal to $\inf E$ and $\sup E$, respectively. Then, for any $\mathbf{x},\mathbf{y}\in E^n$, there exists $I\in\mathcal{I}_n^*[E]$ such that $\mathbf{x},\mathbf{y}\in I$ if and only if $\Pi^*(\mathbf{x})=\Pi^*(\mathbf{y})$. \end{remark}
\section{Lattice polynomial functions and some of their properties}
As we will see in the subsequent sections, certain solutions of equations (\ref{eq:fppf2}) and (\ref{eq:fppf3}) are constructed from \emph{lattice polynomial functions}. In this section we briefly recall the basic material about these functions. As we are concerned with aggregation functions defined in real domains, we do not consider lattice polynomial functions on a general lattice, but simply on $\mathbb{R}$, which is a particular lattice. The lattice operations $\wedge$ and $\vee$ then represent the minimum and maximum operations, respectively.
\subsection{Lattice polynomial functions} \label{sec:sfdaag}
Let us first recall the concept of lattice polynomial function (with real variables); see e.g.\ Birkhoff~\cite[\S{II.5}]{Bir67} or Gr\"atzer~\cite[\S{I.4}]{Grae03}.
\begin{definition} The class of lattice polynomial functions from $\mathbb{R}^n$ to $\mathbb{R}$ is defined as follows. \begin{enumerate} \item[(i)] For any $k\in [n]$, the projection $\mathrm{P}_k\colon \mathbf{x}\mapsto x_k$ is a lattice polynomial function from $\mathbb{R}^n$ to $\mathbb{R}$.
\item[(ii)] If $p$ and $q$ are lattice polynomial functions from $\mathbb{R}^n$ to $\mathbb{R}$, then $p\wedge q$ and $p\vee q$ are lattice polynomial functions from $\mathbb{R}^n$ to $\mathbb{R}$.
\item[(iii)] Every lattice polynomial function from $\mathbb{R}^n$ to $\mathbb{R}$ is constructed by finitely many applications of the rules (i) and (ii). \end{enumerate} \end{definition}
Because $\mathbb{R}$ is a distributive lattice, any lattice polynomial function can be written in {\em disjunctive}\/ and {\em conjunctive}\/ forms as follows; see e.g.\ \cite[\S{II.5}]{Bir67}.
\begin{proposition}\label{prop:lp dnf} Let $p\colon\mathbb{R}^n\to \mathbb{R}$ be any lattice polynomial function. Then there are nonconstant set functions $\alpha\colon 2^{[n]}\to\{0,1\}$ and $\beta\colon 2^{[n]}\to\{0,1\}$, with $\alpha(\varnothing)=0$ and $\beta(\varnothing)=1$, such that \begin{equation}\label{eq:pab} p(\mathbf{x})=\bigvee_{\textstyle{S\subseteq [n]\atop \alpha(S)=1}}\bigwedge_{i\in S}x_i = \bigwedge_{\textstyle{S\subseteq [n]\atop \beta(S)=0}}\bigvee_{i\in S}x_i. \end{equation} \end{proposition}
The set functions $\alpha$ and $\beta$ that disjunctively and conjunctively define the polynomial function $p$ in Proposition~\ref{prop:lp dnf} are not unique. For example, we have $x_1 \vee (x_1 \wedge x_2) = x_1 = x_1\wedge (x_1 \vee x_2)$. However, it can be shown \cite{Mar02c} that, from among all the possible set functions that disjunctively define a given lattice polynomial function, only one is nondecreasing. Similarly, from among all the possible set functions that conjunctively define a given lattice polynomial function, only one is nonincreasing. These particular set functions are given by $\alpha(S) = p(\mathbf{1}_S)$ and $\beta(S) = p(\mathbf{1}_{[n]\setminus S})$ for all $S\subseteq [n]$, where $\mathbf{1}_S$ denotes the characteristic vector of $S$ in $\{0,1\}^n$. Thus, a lattice polynomial function $p\colon\mathbb{R}^n\to\mathbb{R}$ can always be written as $$ p(\mathbf{x})=\bigvee_{\textstyle{S\subseteq [n]\atop p(\mathbf{1}_S)=1}}\bigwedge_{i\in S}x_i=\bigwedge_{\textstyle{S\subseteq [n]\atop p(\mathbf{1}_{[n]\setminus S})=0}}\bigvee_{i\in S}x_i. $$
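The canonical set function $\alpha(S)=p(\mathbf{1}_S)$ and the resulting disjunctive form can be illustrated as follows. This Python sketch (our illustration) uses the hypothetical two-variable polynomial function $p(x_1,x_2)=x_1\vee(x_1\wedge x_2)$, which reduces to $x_1$, recovers $\alpha$ from the values of $p$ on characteristic vectors, and checks the disjunctive normal form on sample points.

```python
from itertools import chain, combinations, product

# Sketch: recover the unique nondecreasing set function alpha(S) = p(1_S)
# that disjunctively defines a lattice polynomial function p, then rebuild p
# from alpha and compare with the original on sample points.

n = 2
def p(x):                                # p(x1, x2) = x1 v (x1 ^ x2) = x1
    return max(x[0], min(x[0], x[1]))

subsets = list(chain.from_iterable(combinations(range(n), k)
                                   for k in range(n + 1)))

def indicator(S):                        # characteristic vector 1_S
    return tuple(1.0 if i in S else 0.0 for i in range(n))

alpha = {S: p(indicator(S)) for S in subsets}     # values in {0.0, 1.0}

def p_dnf(x):                            # disjunctive form built from alpha
    return max(min(x[i] for i in S)
               for S in subsets if S and alpha[S] == 1.0)

for x in product([0.1, 0.4, 0.7], repeat=n):
    assert p_dnf(x) == p(x) == x[0]
```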
\begin{remark} Now it becomes evident that any $n$-variable lattice polynomial function is a nondecreasing and continuous order invariant function in $\mathbb{R}^n$. We will see in Proposition~\ref{prop:cindilp} that the converse is also true: a nondecreasing (or continuous) order invariant function in $\mathbb{R}^n$ is necessarily a lattice polynomial function. \end{remark}
Denote by $p_{\alpha}^{\vee}$ (resp.\ $p_{\beta}^{\wedge}$) the lattice polynomial function disjunctively (resp.\ conjunctively) defined by a given set function $\alpha$ (resp.\ $\beta$) as defined in Proposition~\ref{prop:lp dnf}. Let $f\colon\{0,1\}^n\to\{0,1\}$ be a nonconstant and nondecreasing Boolean function. Then the lattice polynomial function $p_{\alpha}^{\vee}$, where $\alpha\colon 2^{[n]}\to\{0,1\}$ is defined by $\alpha(S):=f(\mathbf{1}_S)$ for all $S\subseteq [n]$, is an extension to $\mathbb{R}^n$ of $f$. Indeed, we immediately have $f(\mathbf{1}_S)=\alpha(S)=p_{\alpha}^{\vee}(\mathbf{1}_S)$ for all $S\subseteq [n]$. Consequently, any $n$-variable lattice polynomial function is an extension to $\mathbb{R}^n$ of a nonconstant and nondecreasing Boolean function.
Throughout we will denote by $\mathcal{C}_n$ the set of $\{0,1\}$-valued nonconstant and nondecreasing set functions on $[n]$. By definition, this set is equipollent to the set of $n$-variable lattice polynomial functions, as well as to the set of nonconstant and nondecreasing Boolean functions.\footnote{The problem of enumerating the distinct nondecreasing Boolean functions of $n$ variables is known as Dedekind's problem \cite{Kle69,KleMar75} (Sloane's integer sequence A000372). Although Dedekind first considered this question in 1897, there is still no concise closed-form expression for this sequence.}
Now, regard the lattice polynomial function $p$ as a function from $E^n$ to $E$. If $E$ is a bounded lattice, we necessarily have $\bigvee_{x\in\varnothing}x:=\inf E$ and $\bigwedge_{x\in\varnothing}x:=\sup E$. Then, from (\ref{eq:pab}), we immediately see that $p\equiv\inf E$ if $\alpha\equiv 0$, and $p\equiv\sup E$ if $\alpha\equiv 1$. Thus we can extend the definition of lattice polynomial functions by allowing the set function $\alpha$ to be constant.
Let $\mathcal{C}_n[E]$ denote the set $\mathcal{C}_n$ completed with the constant set function $\alpha\equiv 0$, if $\inf E\in E$, and the constant set function $\alpha\equiv 1$, if $\sup E\in E$. Evidently $\mathcal{C}_n[E]$ can be partially ordered by the standard partial order on set functions, namely $\alpha_1 \preccurlyeq \alpha_2$ if and only if $\alpha_1(S) \leqslant \alpha_2(S)$ for all $S\subseteq [n]$. We will refer to this partial order in the subsequent sections.
\subsection{Special lattice polynomial functions} \label{sec:plp}
We now consider the important special case of symmetric lattice polynomial functions. Denote by $x_{(1)},\ldots,x_{(n)}$ the {\em order statistics}\/ resulting from reordering the variables $x_1,\ldots,x_n$ in nondecreasing order, that is, $x_{(1)}\leqslant\cdots\leqslant x_{(n)}$. As Ovchinnikov~\cite[\S{7}]{Ovc96} observed, any order statistic is a symmetric lattice polynomial function. More precisely, for any $k\in [n]$, we have $$
x_{(k)}=\bigvee_{\textstyle{S\subseteq [n]\atop |S|=n-k+1}}\bigwedge_{i\in S}x_i=\bigwedge_{\textstyle{S\subseteq [n]\atop |S|=k}}\bigvee_{i\in S}x_i. $$ Conversely, Marichal~\cite[\S{2}]{Mar02c} showed that any symmetric lattice polynomial function is an order statistic.
Let us denote by $\mathrm{os}_k\colon \mathbb{R}^n\to\mathbb{R}$ the $k$th \emph{order statistic function}, that is, $\mathrm{os}_k(\mathbf{x}):=x_{(k)}$. It is then easy to see that, for any $S\subseteq [n]$, we have $\mathrm{os}_k(\mathbf{1}_S)=1$ if and only if $|S|\geqslant n-k+1$ and, likewise, we have
$\mathrm{os}_k(\mathbf{1}_{[n]\setminus S})=0$ if and only if $|S|\geqslant k$. Note that when $n$ is odd, say $n = 2k-1$, the particular order statistic $x_{(k)}$ is the well-known \emph{median}\/ function $$ {\rm median}(x_1,\ldots,x_{2k-1}) := x_{(k)}. $$
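The disjunctive representation of $x_{(k)}$ displayed above can be checked directly. The following Python sketch (our illustration) compares the lattice expression, a maximum of minima over subsets of size $n-k+1$, with ordinary sorting on a few sample points.

```python
from itertools import combinations, product

# Sketch: verify x_(k) = max over subsets S of size n-k+1 of min_{i in S} x_i.

def os_k(x, k):
    return sorted(x)[k - 1]              # k-th order statistic by sorting

def os_k_lattice(x, k):
    n = len(x)
    return max(min(x[i] for i in S)
               for S in combinations(range(n), n - k + 1))

for x in product([0.2, 0.5, 0.9], repeat=3):
    for k in (1, 2, 3):
        assert os_k_lattice(x, k) == os_k(x, k)
```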
Another special case of lattice polynomial functions is given by the projection functions, already introduced in \S\ref{sec:gfass3}. Recall that, for any $k \in [n]$, the \emph{projection}\/ function ${\rm P}_k\colon\mathbb{R}^n \to \mathbb{R}$ associated with the $k$th argument is defined by ${\rm P}_k(\mathbf{x}) := x_k$. The projection function ${\rm P}_k$ consists in projecting $\mathbf{x} \in \mathbb{R}^n$ onto the $k$th coordinate axis. As a particular aggregation function, it corresponds to a dictatorial aggregation.
\subsection{Some aggregation properties} \label{sec:fgrtre}
Lattice polynomial functions $p\colon E^n\to E$ are clearly continuous and nondecreasing functions. They are also order invariant functions in the sense that they fulfill equation (\ref{eq:fppf3}) with arbitrary increasing bijections $\phi\colon E\to E$; see e.g.\ \cite{Ovc98c}.
Lattice polynomial functions also fulfill other properties shared by many aggregation functions. We now examine three of them: internality, idempotency, and discretizability.
The most often encountered functions in aggregation theory are means or averaging functions, such as the weighted arithmetic means. Cauchy~\cite{Cau21} considered in 1821 the mean of $n$ independent variables $x_1,\ldots,x_n$ as a function $F(x_1,\ldots,x_n)$ which should be internal to the set of $x_i$ values.
\begin{definition} $F\colon E^n \to \mathbb{R}$ is an \emph{internal} function if $\bigwedge_{i=1}^n x_i \leqslant F(\mathbf{x}) \leqslant \bigvee_{i=1}^n x_i$ for all $\mathbf{x} \in E^n$. \end{definition}
Such means trivially satisfy the property of \emph{idempotency}: if all the $x_i$ are identical, $F(\mathbf{x})$ returns that common value.
\begin{definition} $F\colon E^n \to \mathbb{R}$ is an \emph{idempotent} function if $F(x,\ldots,x) = x$ for all $x \in E$. \end{definition}
Conversely, we can easily see that any nondecreasing and idempotent function $F\colon E^n \to \mathbb{R}$ is internal.
As any lattice polynomial function is clearly internal, it is a mean in the Cauchy sense. Thus, the internality property makes it possible to define means even on ordinal scales (see, e.g.\ \cite{Ovc96}). For example, as a particular lattice polynomial function, the classical median function (see \S\ref{sec:plp}), which gives the middle value of an odd-length sequence of ordered values, is a continuous, nondecreasing, and symmetric mean defined on ordinal scales. To give a second example, consider the classical \emph{mode}\/ function, $\mathrm{mode}\colon E^n\to E$, defined by\footnote{As usual, $\mathop{\rm argmax}$ stands for the argument of the maximum, that is to say, the value of the given argument for which the value of the given expression attains its maximum value.} \begin{equation}\label{eq:mode} \mathrm{mode}(\mathbf{x}):=\mathop{\rm argmax}_{r\in E}\,\sum_{i=1}^n \mathbf{1}_{\{0\}}(x_i-r), \end{equation} where the function $\mathbf{1}_{\{0\}}\colon\mathbb{R}\to\mathbb{R}$ is defined by $\mathbf{1}_{\{0\}}(0):=1$ and $\mathbf{1}_{\{0\}}(x):=0$ for all $x\neq 0$ (in case of multiple values for $\mathop{\rm argmax}$, take the smallest one). This function, which gives the (lowest) most repeated value of a sequence of values, is a symmetric mean defined on ordinal scales, and even on nominal scales.\footnote{The admissible transformations associated with a nominal scale are one-to-one transformations (injections) of $E$ into itself; see \cite[p.~66]{Rob79}.} However, since the mode function is not nondecreasing, it is not a lattice polynomial function.
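A minimal Python sketch of the mode function (\ref{eq:mode}) as just defined (our illustration; ties broken by taking the smallest value). The last assertion also exhibits the failure of nondecreasing monotonicity: raising one argument can lower the output.

```python
from collections import Counter

# Sketch of the mode function (2): the lowest value attaining the maximum
# multiplicity in the sequence x_1, ..., x_n.

def mode(x):
    counts = Counter(x)
    best = max(counts.values())
    return min(v for v, c in counts.items() if c == best)

assert mode((3, 1, 3, 2, 1)) == 1        # 1 and 3 tie; the smallest wins
assert mode((5, 5, 2)) == 5
# not nondecreasing: raising the third argument from 2 to 3 lowers the output
assert mode((1, 2, 2)) == 2 and mode((1, 2, 3)) == 1
```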
We can also observe that any lattice polynomial function is \emph{discretizable}\/ in the sense that it always yields the value of one of its variables. This property was actually introduced in the framework of triangular norms (see e.g.\ \cite{DeBMes03,Fod00}) but is easily extended to any function as follows.
\begin{definition}\label{de:disc} $F\colon E^n \to E$ is a \emph{discretizable} function if $F(\mathbf{x})\in\{x_1,\ldots,x_n\} \cup B[E]$ for all $\mathbf{x}\in E^n$. \end{definition}
We can readily prove~\cite{Mes01} that $F\colon E^n \to E$ is a discretizable function if and only if, for any nonempty finite subset $C\subset E$ and any $\mathbf{x}\in (C\cup B[E])^n$, we have $F(\mathbf{x})\in C\cup B[E]$. Thus, this property means that the domain and range of $F$ can be restricted to a finite or countable chain.
Another interesting property is {\em self-duality} (for bounded $E$, see \cite{GarMar08} and the references therein), which is fulfilled for example by the median and the mode functions.
\begin{definition}\label{de:selfduality} Let $\psi\colon E\to E$ be a decreasing and involutive (i.e., $\psi\circ\psi=\mathrm{id}$) bijection (hence necessarily $B[E]$ is not a singleton). \begin{itemize} \item The {\em $\psi$-dual} of a function $F\colon E^n\to E$ is the function $F_{\psi}\colon E^n\to E$, defined by $$ F_{\psi}(\mathbf{x}):=\psi^{-1}\big(F\big(\psi(x_1),\ldots,\psi(x_n)\big)\big). $$ \item A function $F\colon E^n\to E$ is said to be {\em $\psi$-self-dual} if $F_{\psi}=F$.
\item A function $F\colon E^n\to E$ is said to be {\em weakly self-dual} if it is $\psi$-self-dual for some decreasing and involutive bijection $\psi\colon E\to E$. \end{itemize} If $E$ is bounded, then the only affine decreasing bijection from $E$ onto itself is given by $\psi^d(x):=\inf E+\sup E-x$, and $\psi^d$-duality is then called \emph{duality}, with notation $F^d:=F_{\psi^d}$. A function $F\colon E^n\to E$ is said to be {\em self-dual} if $F^d=F$. \end{definition}
\begin{remark} \begin{enumerate} \item[(i)] By definition, for any function $F\colon E^n\to E$ and any decreasing and involutive bijection $\psi\colon E\to E$, we have $(F_{\psi})_{\psi}=F$.
\item[(ii)] We note that $\psi$-duality is an example of {\em $\psi$-conjugacy} \cite[Chapter~8]{KucChoGer90}, whose definition is the same except that it does not require $\psi$ to be decreasing or involutive. We also observe that the classic notion of duality in ordered sets concerns the converse order relations; see \cite[p.~3]{Bir67}. \end{enumerate} \end{remark}
Assume that $B[E]$ is not a singleton and let $\alpha\in\mathcal{C}_n$. We can straightforwardly show that the lattice polynomial function $p_{\alpha}^{\vee}\colon E^n\to E$ is weakly self-dual if and only if $\alpha^d=\alpha$, where $\alpha^d\in\mathcal{C}_n$ is the dual of $\alpha$, defined by $\alpha^d(S):=1-\alpha([n]\setminus S)$.
The special case of order statistics is dealt with in the following immediate result (see \cite[\S{5}]{Mar02c}), which characterizes the median as the only weakly self-dual order statistic.
\begin{proposition}\label{prop:charmed} Assume that $n$ is odd and that $B[E]$ is not a singleton. The $k$th order statistic function ${\rm os}_k\colon E^n\to E$ is weakly self-dual if and only if $n=2k-1$. In this case ${\rm os}_k$ is the median function. \end{proposition}
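On $E=[0,1]$, where the dual bijection is $\psi^d(x)=1-x$, one can check numerically that the $\psi^d$-dual of ${\rm os}_k$ is ${\rm os}_{n-k+1}$, so that ${\rm os}_k$ is self-dual exactly when $n=2k-1$. The following Python sketch (our illustration, with dyadic sample values so that floating-point arithmetic is exact) demonstrates this.

```python
# Sketch: on E = [0, 1] with psi(x) = 1 - x, the dual of the k-th order
# statistic is the (n-k+1)-th; hence os_k is self-dual iff n = 2k - 1.

def os_k(x, k):
    return sorted(x)[k - 1]

def dual(F, x):                          # psi-dual: psi^{-1}(F(psi(x)))
    return 1.0 - F(tuple(1.0 - xi for xi in x))

x = (0.125, 0.5, 0.75)                   # dyadic values: exact in binary floats
n = len(x)
for k in (1, 2, 3):
    assert dual(lambda y: os_k(y, k), x) == os_k(x, n - k + 1)
assert dual(lambda y: os_k(y, 2), x) == os_k(x, 2)   # the median is self-dual
```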
\section{Order invariant functions}
The first meaningful aggregation functions we consider are the \emph{order invariant functions}, which were first investigated (as ordinally stable functions) by Marichal and Roubens~\cite{MarRou93}, and then by many other authors; see \cite{BarDre04,FodRou95,KamOvc95,Mar98,Mar02c,MarMat01,MarMes04,MarMesRuc05,Mes01,MesRuc04,Ovc96,Ovc98c,OvcDuk00,OvcDuk02}.
\subsection{Definition and first results}
Let $x_1,\ldots,x_n$ be independent variables defining the same ordinal scale, with domain $E$, and suppose that, when aggregating these variables by a function $F\colon E^n\to E$, we require that the dependent variable \begin{equation}\label{eq:yem} x_{n+1}=F(x_1,\ldots,x_n) \end{equation} defines the same scale. As equation (\ref{eq:yem}) should represent a meaningful relation between the independent and dependent variables, the aggregation function $F$ should be invariant under actions from $\Phi[E]$. That is, $\phi(x_{n+1})=F(\phi(x_1),\ldots,\phi(x_n))$ for all $\phi\in \Phi[E]$. Thus, the order invariance property is defined as follows.
\begin{definition}\label{de:inv} $F\colon E^n \to E$ is said to be an \emph{order invariant}\/ function if $$ F\big(\phi(x_1),\ldots,\phi(x_n)\big) = \phi\big(F(x_1,\ldots,x_n)\big) $$ for all $\mathbf{x}\in E^n$ and all $\phi \in \Phi[E]$. \end{definition}
The following result (see Propositions~\ref{prop:eondin} and \ref{prop:eocoin} below) shows that the lattice polynomial functions are the most prominent order invariant functions (see, however, Theorem~\ref{thm:nvkjfd} for a full description of order invariant functions).
\begin{proposition}\label{prop:cindilp} Assume that $E$ is open and consider a function $F\colon E^n\to E$. Then the following three assertions are equivalent: \begin{enumerate} \item[(i)] $F$ is a nondecreasing order invariant function.
\item[(ii)] $F$ is a continuous order invariant function.
\item[(iii)] $F$ is a lattice polynomial function. \end{enumerate} \end{proposition}
Proposition~\ref{prop:cindilp} poses the interesting question of how we can interpret the continuity property for order invariant functions. Let $\Phi'[E]$ be the superset of $\Phi[E]$ consisting of the continuous nondecreasing surjections $\phi\colon E\to E$. The following result \cite[\S{5.2}]{MarMes04}, inspired by \cite[Proposition 2]{BouPir97}, shows that the conjunction of continuity and order invariance is equivalent to requiring that the admissible transformations belong to $\Phi'[E]$.
\begin{proposition} $F\colon E^n\to E$ is a continuous order invariant function if and only if $F(\phi(x_1),\ldots,\phi(x_n)) = \phi(F(x_1,\ldots,x_n))$ for all $\mathbf{x}\in E^n$ and all $\phi \in \Phi'[E]$. \end{proposition}
Let $\Phi''[E]$ be the superset of $\Phi[E]$ consisting of all the monotone bijections of $E$ onto itself (assuming that $B[E]$ is not a singleton). It is clear (for bounded $E$, see e.g.\ \cite[\S{3}]{MesRuc04}) that the conjunction of weak self-duality (cf.\ Definition~\ref{de:selfduality}) and order invariance is equivalent to requiring that the admissible transformations belong to $\Phi''[E]$. The independent and dependent variables then define a \emph{nominal} scale.
\begin{proposition} Assume that $B[E]$ is not a singleton. $F\colon E^n\to E$ is a weakly self-dual order invariant function if and only if $F(\phi(x_1),\ldots,\phi(x_n)) = \phi(F(x_1,\ldots,x_n))$ for all $\mathbf{x}\in E^n$ and all $\phi \in \Phi''[E]$. \end{proposition}
\subsection{General descriptions}
When $E$ is open we have the following description (see \cite[Theorem~5.1]{Ovc98c}).
\begin{proposition}\label{prop:eoin} Assume that $E$ is open. Then $F\colon E^n\to E$ is an order invariant function if and only if there exists a mapping $\xi\colon\mathcal{I}_n[E]\to [n]$ such that
$F|_I={\rm P}_{\xi(I)}|_I$ for all $I\in\mathcal{I}_n[E]$. \end{proposition}
This result shows that, when $E$ is open, the restriction of $F$ to any minimal order invariant subset is a projection function onto one coordinate. That is, for any $I\in\mathcal{I}_n[E]$, there exists $k_I\in [n]$ such that $F|_I=\mathrm{P}_{k_I}|_I$. Clearly, such a function is internal and hence idempotent.
As an example, any nonconstant lattice polynomial function is a continuous, nondecreasing, idempotent, and order invariant function. On the other hand, the mode function (\ref{eq:mode}) is an idempotent and order invariant function that is neither continuous nor nondecreasing.
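These properties of the mode can be checked numerically. The sketch below assumes a standard convention for the mode function (the most frequent value, with ties broken by taking the smallest), which may differ in details from the convention of (\ref{eq:mode}); it verifies idempotency and order invariance under the increasing bijection $\phi(x)=x^3$ of $\mathbb{R}$, and exhibits a failure of nondecreasing monotonicity.

```python
from collections import Counter

def mode(*xs):
    # most frequent value; ties broken by the smallest (assumed convention)
    counts = Counter(xs)
    best = max(counts.values())
    return min(v for v, c in counts.items() if c == best)

phi = lambda x: x ** 3  # an increasing bijection of R onto itself

# idempotent: mode(x, ..., x) = x
assert mode(0.3, 0.3, 0.3) == 0.3
# order invariant: mode(phi(x1), ..., phi(xn)) = phi(mode(x1, ..., xn))
assert mode(phi(2), phi(3), phi(3)) == phi(mode(2, 3, 3))
# not nondecreasing: raising one argument can lower the value
assert mode(2, 3, 3) == 3 and mode(2, 3, 9) == 2
```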
When $E$ is not open, the restriction of $F$ to any minimal order invariant subset reduces to a constant function or a projection function onto one coordinate (see \cite{MarMesRuc05,MesRuc04}).
\begin{theorem}\label{thm:nvkjfd} $F\colon E^n\to E$ is an order invariant function if and only if there exists a mapping $\xi\colon\mathcal{I}_n[E]\to [n]$ such that, for any $I\in\mathcal{I}_n[E]$, \begin{itemize}
\item either $F|_I\equiv c\in B[E]$ (assuming $B[E]\neq\varnothing$),
\item or $F|_I={\rm P}_{\xi(I)}|_I$. \end{itemize} \end{theorem}
\begin{remark} It was proved in \cite[Proposition~3.1]{Mar02c} (see \cite{KamOvc95,Ovc96,Ovc98c} for preliminary results) that any order invariant function is discretizable, and hence it is internal whenever $E$ is open; it is clear that this follows from Theorem~\ref{thm:nvkjfd}. For instance, the mode function (\ref{eq:mode}) is order invariant and hence discretizable. The converse is not true. For example, the function $F\colon (0,1)^2\to (0,1)$ defined by $$ F(x_1,x_2):= \begin{cases} x_1, & \mbox{if $x_1+x_2<1$},\\ x_2, & \mbox{otherwise}, \end{cases} $$ is discretizable but not order invariant. \end{remark}
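The failure of order invariance for the function above is easy to witness numerically. The sketch below uses the increasing bijection $\phi(x)=x^2$ of $(0,1)$ onto itself, chosen so that the region $x_1+x_2\geqslant 1$ is not preserved.

```python
def F(x1, x2):
    # the discretizable but not order invariant example on (0,1)^2
    return x1 if x1 + x2 < 1 else x2

phi = lambda x: x * x   # an increasing bijection of (0,1) onto itself

x = (0.6, 0.5)                   # x1 + x2 = 1.1 >= 1, so F(x) = x2 = 0.5
lhs = F(phi(x[0]), phi(x[1]))    # 0.36 + 0.25 = 0.61 < 1, so this is 0.36
rhs = phi(F(*x))                 # phi(0.5) = 0.25
assert abs(lhs - rhs) > 0.1      # order invariance fails at this point
```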
When an order invariant function is idempotent, it must clearly reduce to a projection function on the open diagonal of $E^n$, as well as on $I=\{(\inf E,\ldots,\inf E)\}$ (if $\inf E\in E$) and on $I=\{(\sup E,\ldots,\sup E)\}$ (if $\sup E\in E$).
\subsection{The nondecreasing case}
We now present descriptions of order invariant functions which are nondecreasing. The following result (see \cite[Corollary~4.4]{Mar02c}) shows that, when $E$ is open, the family of nondecreasing order invariant functions in $E^n$ is identical to that of lattice polynomial functions in $E^n$.
\begin{proposition}\label{prop:eondin} Assume that $E$ is open. Then $F\colon E^n\to E$ is a nondecreasing order invariant function if and only if it is a lattice polynomial function. \end{proposition}
\begin{corollary}\label{cor:gdfg} Assume that $E$ is open. Then $F\colon E^n\to E$ is a symmetric, nondecreasing, and order invariant function if and only if it is an order statistic function. \end{corollary}
Combining Proposition~\ref{prop:charmed} with Corollary~\ref{cor:gdfg} immediately yields the following axiomatization of the median function.
\begin{corollary} Assume that $n$ is odd and that $E$ is open. Then $F\colon E^n\to E$ is a symmetric, weakly self-dual, nondecreasing, and order invariant function if and only if it is the median function. \end{corollary}
A complete description of nondecreasing order invariant functions in $E^n$, with open or non-open interval $E$, is given in the following theorem (see \cite{MarMesRuc05,MesRuc04}). It shows that discontinuities of $F$ may occur only on the border of $E^n$. Recall that the lattice polynomial function in $E^n$ disjunctively defined by $\alpha\in\mathcal{C}_n[E]$ is denoted $p_{\alpha}^{\vee}$ (see \S\ref{sec:sfdaag}).
\begin{theorem} $F\colon E^n\to E$ is a nondecreasing order invariant function if and only if there exists a nondecreasing mapping $\xi\colon\mathcal{I}_n^*[E]\to\mathcal{C}_n[E]$ such that
$F|_I=p_{\xi(I)}^{\vee}|_I$ for all $I\in\mathcal{I}_n^*[E]$. \end{theorem}
\begin{example} Consider the semiopen interval $E:=[a,b)$.\footnote{Here the poset $\mathcal{I}_n^*[E]$ contains 8 elements (a point, three open line segments, three open square facets, and an open cube).} The function $F\colon [a,b)^3\to [a,b)$ defined by $$ F(x_1,x_2,x_3):=\begin{cases} a, & \mbox{if $x_1=a$}, \\ x_3, & \mbox{if $x_1\neq a$ and $x_2=a$}, \\ x_1\vee x_2\vee x_3, & \mbox{otherwise},\end{cases} $$ is a nondecreasing order invariant function in $[a,b)^3$. \end{example}
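The order invariance of this example can be probed numerically. The sketch below takes the illustrative choice $a=0$, $b=1$ and uses the increasing bijection $\phi(x)=x^2$ of $[0,1)$, which fixes the endpoint $a$ as required.

```python
import random

a = 0.0
def F(x1, x2, x3):
    # the nondecreasing order invariant example on [a,b)^3
    if x1 == a:
        return a
    if x2 == a:
        return x3
    return max(x1, x2, x3)

phi = lambda x: x * x  # increasing bijection of [0,1) with phi(a) = a

random.seed(1)
points = [(random.random(), random.random(), random.random()) for _ in range(100)]
points += [(a, 0.5, 0.7), (0.3, a, 0.9)]  # hit the boundary cases too
for x in points:
    assert abs(F(*map(phi, x)) - phi(F(*x))) < 1e-12
```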
\begin{corollary} $F\colon E^n\to E$ is a nondecreasing, idempotent, and order invariant function if and only if there exists a nondecreasing mapping
$\xi\colon\mathcal{I}_n^*[E]\to\mathcal{C}_n[E]$, where $\xi[(E^\circ)^n]$ is nonconstant, such that $F|_I=p_{\xi(I)}^{\vee}|_I$ for all $I\in\mathcal{I}_n^*[E]$. \end{corollary}
\subsection{The continuous case}\label{sec:OIcont}
We now consider the family of continuous order invariant functions. It was shown in \cite[Corollary~4.2]{Mar02c} that, when $E$ is open, this family is identical to the family of lattice polynomial functions in $E^n$; see also \cite[\S{3.4.2}]{Mar98}.
\begin{proposition}\label{prop:eocoin} Assume that $E$ is open. Then $F\colon E^n\to E$ is a continuous order invariant function if and only if it is a lattice polynomial function. \end{proposition}
\begin{remark} Note that this result was independently stated and proved earlier by Ovchinnikov~\cite[Theorem~5.3]{Ovc98} in the more general setting where the range of variables is a doubly homogeneous simple order, that is, a simple order $X$ satisfying the following property: \begin{quote} For any $x_1, x_2, y_1, y_2 \in X$, with $x_1 < x_2$ and $y_1 < y_2$, there is an automorphism $\phi\colon X \to X$ such that $\phi(x_1) = y_1$ and $\phi(x_2) = y_2$. \end{quote} As any open interval $E$ of the real line is clearly a doubly homogeneous simple order, Ovchinnikov's result encompasses that of Proposition~\ref{prop:eocoin}.\footnote{Note that the extension of this result to the (infinite) case of functional operators was described in \cite{OvcDuk00}; see also~\cite{OvcDuk02}.} \end{remark}
A complete description of continuous order invariant functions in $E^n$ was stated in \cite[Corollary~4.3]{Mar02c} as follows (see also \cite{MarMesRuc05}).
\begin{theorem}\label{thm:co/inv} $F\colon E^n\to E$ is a continuous order invariant function if and only if there exists $\alpha\in\mathcal{C}_n[E]$ such that $F=p_{\alpha}^{\vee}$. \end{theorem}
Theorem \ref{thm:co/inv} actually says that a continuous order invariant function $F\colon E^n\to E$ is either the constant function $F\equiv\inf E$ if $\inf E\in E$, or the constant function $F\equiv\sup E$ if $\sup E\in E$, or any lattice polynomial function in $E^n$ (any order statistic function in $E^n$ if $F$ is symmetric).
\begin{remark} From Theorems~\ref{thm:nvkjfd} and \ref{thm:co/inv} it follows that a function $F\colon E^n\to E$ is a lattice polynomial function if and only if its restriction to each closed simplex of the standard triangulation of $E^n$ is a projection function onto one coordinate (see also \cite[Proposition~2.1]{Mar02c}). \end{remark}
\begin{corollary}\label{cor:coidin} $F\colon E^n\to E$ is a continuous, idempotent, and order invariant function if and only if it is a lattice polynomial function. \end{corollary}
\begin{corollary}\label{cor:sycoidin} $F\colon E^n\to E$ is a symmetric, continuous, idempotent, and order invariant function if and only if it is an order statistic function. \end{corollary}
\begin{corollary}\label{cor:dsfs} Assume that $n$ is odd and that $B[E]$ is not a singleton. Then $F\colon E^n\to E$ is a symmetric, weakly self-dual, continuous, idempotent, and order invariant function if and only if it is the median function. \end{corollary}
\begin{remark} By combining Proposition~\ref{prop:charmed} with Theorem \ref{thm:co/inv}, we immediately see that idempotency is not necessary in Corollary~\ref{cor:dsfs}. Indeed, a weakly self-dual lattice polynomial function cannot be constant and hence it is necessarily idempotent. \end{remark}
\section{Comparison meaningful functions on a single ordinal scale}
We now present the class of \emph{comparison meaningful functions on a single ordinal scale}. These functions were introduced first by Orlov~\cite{Orl81} and then investigated by many other authors; see \cite{KamOvc95,Mar98,Mar01,Mar02c,MarMat01,MarMes04,MarMesRuc05,Ovc96,Yan89}.
\subsection{Definition and first results}
Let $x_1,\ldots,x_n$ be independent variables defining the same ordinal scale, with domain $E$, and suppose that, when aggregating these variables by a function $F\colon E^n\to \mathbb{R}$, we require that the dependent variable $x_{n+1}=F(x_1,\ldots,x_n)$ defines an ordinal scale, with an arbitrary domain in $\mathbb{R}$. According to Luce's principle \cite{Luc59}, any admissible transformation of the independent variables must lead to an admissible transformation of the dependent variable. This condition can be formulated as follows.
\begin{definition}\label{de:cmfsos} $F\colon E^n \to \mathbb{R}$ is said to be a \emph{comparison meaningful function on a single ordinal scale}\/ if, for any $\boldsymbol{\phi}\in \Phi_n[E]$, there is a strictly increasing mapping $\psi_{\boldsymbol{\phi}}\colon {\rm ran}(F)\to{\rm ran}(F)$ such that $F\big(\boldsymbol{\phi}(\mathbf{x})\big)=\psi_{\boldsymbol{\phi}}\big(F(\mathbf{x})\big)$ for all $\mathbf{x}\in E^n$. \end{definition}
Comparison meaningful functions on a single ordinal scale were first introduced by Orlov~\cite{Orl81} as those functions preserving the comparison of aggregated values when changing the scale defined by the independent variables.\footnote{A general study on meaningfulness of ordinal comparisons can be found in \cite{RobRos94}.} We paraphrase from Orlov: \begin{quote} When one compares two sets of objects according to a criterion, it is sometimes required to evaluate each object on the same ordinal scale (e.g., by means of measurement or expert estimate). The aggregated values of the evaluations corresponding to each set of objects are computed by a certain aggregation function, and then compared together. It is natural to require that the inferences made from this comparison are meaningful, that is, depend only on the initial information, but not on the scale used.\footnote{More generally, a statement using scales of measurement is said to be {\sl meaningful}\/ if its truth or falsity is invariant when every scale is replaced by another acceptable version of it; see \cite[p.~59]{Rob79}.} \end{quote} The equivalence between Definition~\ref{de:cmfsos} and Orlov's definition can be formulated mathematically as follows (see \cite{MarMes04}).
\begin{proposition}\label{prop:cmim} $F\colon E^n \to \mathbb{R}$ is a comparison meaningful function on a single ordinal scale if and only if $$ F(\mathbf{x})~\Big\{{<\atop =}\Big\}~ F(\mathbf{x}') \quad\Rightarrow\quad F\big(\boldsymbol{\phi}(\mathbf{x})\big)~\Big\{{<\atop =}\Big\}~ F\big(\boldsymbol{\phi}(\mathbf{x}')\big) $$ for all $\mathbf{x},\mathbf{x}'\in E^n$ and all $\boldsymbol{\phi}\in\Phi_n[E]$.\footnote{Equivalently, $F\colon E^n \to \mathbb{R}$ is a comparison meaningful function on a single ordinal scale if and only if $$ F(\mathbf{x})\leqslant F(\mathbf{x}') \quad\Leftrightarrow\quad F\big(\boldsymbol{\phi}(\mathbf{x})\big)\leqslant F\big(\boldsymbol{\phi}(\mathbf{x}')\big) $$ for all $\mathbf{x},\mathbf{x}'\in E^n$ and all $\boldsymbol{\phi}\in\Phi_n[E]$.} \end{proposition}
\begin{remark} Although the condition in Proposition~\ref{prop:cmim} is natural and even mandatory when aggregating ordinal values, it severely restricts the allowable operations for defining a meaningful aggregation function. For example, the comparison of two arithmetic means is meaningless on an ordinal scale. Indeed, considering the pairs of values $(3,5)$ and $(1,8)$, we have $\frac 12(3+5) <\frac 12(1+8)$ and, using any admissible transformation $\phi$ such that $\phi(1)=1$, $\phi(3)=4$, $\phi(5)=7$, and $\phi(8)=8$, we have $\frac 12(\phi(3)+\phi(5)) >\frac 12(\phi(1)+\phi(8))$. \end{remark}
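The reversal in the remark above is easy to verify. The sketch below realizes $\phi$ as the piecewise linear increasing interpolation through the four prescribed values, which is one admissible choice among many.

```python
# piecewise linear increasing interpolation through the prescribed values
knots = [(1, 1), (3, 4), (5, 7), (8, 8)]
def phi(x):
    for (x0, y0), (x1, y1) in zip(knots, knots[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("outside interpolation range")

mean = lambda u, v: (u + v) / 2
assert mean(3, 5) < mean(1, 8)                       # 4.0 < 4.5
assert mean(phi(3), phi(5)) > mean(phi(1), phi(8))   # 5.5 > 4.5: comparison reversed
```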
Order invariant functions and comparison meaningful functions on a single ordinal scale can actually be related through the idempotency property. Indeed, when a comparison meaningful function on a single ordinal scale is idempotent, the output scale must coincide with the input scale. This result is stated in the next proposition (see \cite[Proposition~3.3]{Mar02c} and preliminary work in \cite{KamOvc95,Ovc96}).
\begin{proposition}\label{prop:idcmin} Consider a function $F\colon E^n \to E$. \begin{itemize} \item If $F$ is idempotent and comparison meaningful on a single ordinal scale then it is order invariant.
\item If $F$ is order invariant then it is comparison meaningful on a single ordinal scale.
\item If $E$ is open then $F$ is idempotent and comparison meaningful on a single ordinal scale if and only if it is order invariant. \end{itemize} \end{proposition}
Just as for order invariant functions, continuity of comparison meaningful functions on a single ordinal scale can be interpreted by means of the set $\Phi'[E]$ of continuous nondecreasing surjections from $E$ onto $E$; see \cite[\S{5.2}]{MarMes04}. Denote by $\Phi'_n[E]$ the diagonal restriction of $\Phi'[E]^n$ (see \S\ref{sec:gfass3}).
\begin{proposition} $F\colon E^n \to \mathbb{R}$ is a continuous and comparison meaningful function on a single ordinal scale if and only if, for any $\boldsymbol{\phi}\in \Phi'_n[E]$, there is a continuous and nondecreasing mapping $\psi_{\boldsymbol{\phi}}\colon{\rm ran}(F)\to{\rm ran}(F)$ such that $F\big(\boldsymbol{\phi}(\mathbf{x})\big)=\psi_{\boldsymbol{\phi}}\big(F(\mathbf{x})\big)$ for all $\mathbf{x}\in E^n$. \end{proposition}
\subsection{General descriptions}
The class of comparison meaningful functions on a single ordinal scale can be described as follows (see \cite[Theorem~3.1]{MarMesRuc05}).
\begin{theorem} $F\colon E^n\to\mathbb{R}$ is a comparison meaningful function on a single ordinal scale if and only if, for any $I\in\mathcal{I}_n[E]$, there exist an index $k_I\in [n]$
and a strictly monotonic or constant function $g_I\colon \mathrm{P}_{k_I}(I)\to\mathbb{R}$ such that $F|_I=(g_I\circ\mathrm{P}_{k_I})|_I$, where, for any $I,I'\in\mathcal{I}_n[E]$, \begin{itemize} \item either $g_I=g_{I'}$,
\item or ${\rm ran}(g_I)={\rm ran}(g_{I'})$ is a singleton,
\item or ${\rm ran}(g_I)<{\rm ran}(g_{I'})$,
\item or ${\rm ran}(g_I)>{\rm ran}(g_{I'})$.\footnote{Note that ${\rm ran}(g_I)<{\rm ran}(g_{I'})$ means that for all $r\in{\rm ran}(g_I)$ and all $r'\in{\rm ran}(g_{I'})$, we have $r<r'$.} \end{itemize} \end{theorem}
Thus, a comparison meaningful function on a single ordinal scale reduces, on each minimal order invariant subset of $E^n$, to a constant or a transformed projection function onto one coordinate.
\begin{example} We have seen in Example~\ref{ex:fggger} that there are eleven minimal order invariant subsets in the unit square $[0,1]^2$, namely \begin{itemize} \item $I_1:=\{(0,0)\}$, $I_2:=\{(1,0)\}$, $I_3:=\{(1,1)\}$, $I_4:=\{(0,1)\}$,
\item $I_5:=(0,1)\times\{0\}$, $I_6:=\{1\}\times (0,1)$, $I_7:=(0,1)\times\{1\}$, $I_8:=\{0\}\times (0,1)$,
\item $I_9:=\{(x_1,x_2)\mid 0<x_1=x_2 <1\}$, $I_{10}:=\{(x_1,x_2)\mid 0<x_1<x_2<1\}$, $I_{11}:=\{(x_1,x_2)\mid 0<x_2<x_1<1\}$. \end{itemize} Let $k_{I_j}:=1$ and $g_{I_j}(x):=1-x$ if $j\in\{1,2,3,5,6,9,11\}$, and $k_{I_j}:=2$ and $g_{I_j}(x):=2x-3$ if $j\in\{4,7,8,10\}$, where always $x\in {\rm P}_{k_{I_j}}(I_j)$. Then the corresponding comparison meaningful function $F\colon [0,1]^2\to \mathbb{R}$ is given by $$ F(x_1,x_2):= \begin{cases} 1-x_1, & \mbox{if $x_1\geqslant x_2$}, \\ 2x_2-3, & \mbox{if $x_1<x_2$}.\end{cases} $$ \end{example}
When a comparison meaningful function on a single ordinal scale is idempotent, it must satisfy $g_I(x)=x$, for all $x\in {\rm P}_{k_I}(I)$, whenever either $I$ is the open diagonal of $E^n$, or $I=\{(\inf E,\ldots,\inf E)\}$ (if $\inf E\in E$), or $I=\{(\sup E,\ldots,\sup E)\}$ (if $\sup E\in E$).
\subsection{The nondecreasing case}
The following result~\cite{MarMesRuc05} yields, when $E$ is open, a description of all nondecreasing comparison meaningful functions $F\colon E^n\to \mathbb{R}$ on a single ordinal scale.
\begin{proposition}\label{prop:eondcm} Assume that $E$ is open. Then $F\colon E^n\to\mathbb{R}$ is a nondecreasing comparison meaningful function on a single ordinal scale if and only if there exist $\alpha\in\mathcal{C}_n$ and a strictly increasing or constant function $g\colon E\to\mathbb{R}$ such that $F=g\circ p_{\alpha}^{\vee}$. \end{proposition}
As we can see, all the functions described in Proposition~\ref{prop:eondcm} are continuous up to possible discontinuities of the function $g$.
The following corollaries~\cite[Theorem~4.4]{Mar02c} (see~\cite[Theorem~3.1]{MarMat01} for preliminary results) immediately follow from Proposition~\ref{prop:eondcm}.
\begin{corollary} Assume that $E$ is open. Then $F\colon E^n\to \mathbb{R}$ is a nondecreasing, idempotent, and comparison meaningful function on a single ordinal scale if and only if it is a lattice polynomial function. \end{corollary}
\begin{corollary}\label{cor:sdsds} Assume that $E$ is open. Then $F\colon E^n\to \mathbb{R}$ is a symmetric, nondecreasing, idempotent, and comparison meaningful function on a single ordinal scale if and only if it is an order statistic function. \end{corollary}
\begin{corollary} Assume that $n$ is odd and that $E$ is open. Then $F\colon E^n\to \mathbb{R}$ is a symmetric, weakly self-dual, nondecreasing, idempotent, and comparison meaningful function on a single ordinal scale if and only if it is the median function. \end{corollary}
A complete description of nondecreasing comparison meaningful functions $F\colon E^n\to \mathbb{R}$ on a single ordinal scale is given in the next theorem~\cite[Corollary~4.1]{MarMesRuc05}. Let $\mathcal{G}(E)$ be the set of all strictly increasing or constant real functions $g$ defined either on $E^\circ$, or on the singleton $\{\inf E\}\cap E$, or on the singleton $\{\sup E\}\cap E$ (if these singletons exist). This set is partially ordered as follows: $g_1 \preccurlyeq g_2$ if either $g_1=g_2$, or ${\rm ran}(g_1)={\rm ran}(g_2)$ is a singleton, or ${\rm ran}(g_1)<{\rm ran}(g_2)$.
\begin{theorem} $F\colon E^n\to \mathbb{R}$ is a nondecreasing comparison meaningful function on a single ordinal scale if and only if there exist nondecreasing mappings
$\gamma\colon \mathcal{I}_n^*[E]\to \mathcal{G}(E)$ and $\xi\colon \mathcal{I}_n^*[E]\to\mathcal{C}_n[E]$ such that $F|_I=(\gamma(I)\circ p_{\xi(I)}^{\vee})|_I$ for all $I\in\mathcal{I}_n^*[E]$. \end{theorem}
If furthermore $F$ is idempotent, then by nondecreasing monotonicity, we have ${\rm ran}(F)=E$ and, by Proposition~\ref{prop:idcmin}, $F$ is order invariant. Hence we have the following corollary.
\begin{corollary}
$F\colon E^n\to \mathbb{R}$ is a nondecreasing, idempotent, and comparison meaningful function on a single ordinal scale if and only if there exists a nondecreasing mapping $\xi\colon \mathcal{I}_n^*[E]\to\mathcal{C}_n[E]$, where $\xi[(E^\circ)^n]$ is nonconstant, such that $F|_I=p_{\xi(I)}^{\vee}|_I$ for all $I\in\mathcal{I}_n^*[E]$. \end{corollary}
\subsection{The continuous case}
Based on a preliminary result~\cite[\S{4}]{MarMat01} (see also~\cite[\S{3.4.2}]{Mar98}), a full description of continuous comparison meaningful functions on a single ordinal scale was given in \cite[Theorem~4.2]{Mar02c} as follows.
\begin{theorem} $F\colon E^n\to \mathbb{R}$ is a continuous comparison meaningful function on a single ordinal scale if and only if there exist $\alpha\in\mathcal{C}_n$ and a continuous and strictly monotonic or constant function $g\colon E\to\mathbb{R}$ such that $F=g\circ p_{\alpha}^{\vee}$. \end{theorem}
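The structure $F=g\circ p_{\alpha}^{\vee}$ makes comparison meaningfulness transparent: since $p_{\alpha}^{\vee}$ is order invariant, $F\big(\boldsymbol{\phi}(\mathbf{x})\big)=(g\circ\phi\circ g^{-1})\big(F(\mathbf{x})\big)$, and $g\circ\phi\circ g^{-1}$ is strictly increasing on ${\rm ran}(F)$ whenever $g$ is strictly increasing. A numerical sketch with the illustrative (assumed) choices $g=\exp$, $p_{\alpha}^{\vee}$ the median for $n=3$, and $\phi(t)=t^3$ on $E=\mathbb{R}$:

```python
import math
import random

def median3(x1, x2, x3):
    return sorted((x1, x2, x3))[1]   # a lattice polynomial function for n = 3

g = math.exp                         # continuous, strictly increasing on E = R
F = lambda *x: g(median3(*x))

phi = lambda t: t ** 3               # an increasing bijection of R onto itself
psi = lambda s: g(phi(math.log(s)))  # psi = g o phi o g^{-1}, strictly increasing

random.seed(3)
for _ in range(100):
    x = [random.uniform(-5, 5) for _ in range(3)]
    # F(phi(x)) = psi(F(x)): the admissible input rescaling induces an
    # admissible rescaling of the output
    assert math.isclose(F(*map(phi, x)), psi(F(*x)))
```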
\begin{corollary}\label{cor:coidcm} $F\colon E^n\to \mathbb{R}$ is a continuous, idempotent, and comparison meaningful function on a single ordinal scale if and only if it is a lattice polynomial function. \end{corollary}
\begin{remark} The result in Corollary~\ref{cor:coidcm} was stated and proved first in social choice theory by Yanovskaya~\cite[Theorem~1]{Yan89} when $E=\mathbb{R}$. \end{remark}
\begin{corollary} $F\colon E^n\to \mathbb{R}$ is a symmetric, continuous, and comparison meaningful function on a single ordinal scale if and only if there exist $k\in [n]$ and a continuous strictly monotonic or constant function $g\colon E\to\mathbb{R}$ such that $F=g\circ {\rm os}_k$. \end{corollary}
\begin{corollary}\label{cor:sycoidcm} $F\colon E^n\to \mathbb{R}$ is a symmetric, continuous, idempotent, and comparison meaningful function on a single ordinal scale if and only if it is an order statistic function. \end{corollary}
\begin{remark} A slightly stronger version of the result in Corollary~\ref{cor:sycoidcm}, consisting in replacing idempotency with internality, was actually proved first by Orlov~\cite{Orl81} in $\mathbb{R}^n$, then by Marichal and Roubens~\cite[Theorem~1]{MarRou93} in $E^n$ (see also \cite[Theorem~3.4.13]{Mar98}), and finally by Ovchinnikov~\cite[Theorem~4.3]{Ovc96} in the more general framework where the range of variables is a simple order $X$ whose open intervals are homogeneous and nonempty (see also~\cite[\S{6}]{Ovc98}). \end{remark}
\begin{corollary} Assume that $n$ is odd and that $B[E]$ is not a singleton. Then $F\colon E^n\to \mathbb{R}$ is a symmetric, weakly self-dual, continuous, idempotent, and comparison meaningful function on a single ordinal scale if and only if it is the median function. \end{corollary}
\section{Comparison meaningful functions on independent ordinal scales}
In this section we present the class of \emph{comparison meaningful functions on independent ordinal scales}, which were introduced by Acz\'el and Roberts~\cite[Case~\#21]{AczRob89} and studied by Kim~\cite{Kim90} (see preliminary work in Osborne~\cite{Osb70}), and subsequently investigated by several other authors; see \cite{Mar02c,MarMat01,MarMes04,MarMesRuc05}.
\subsection{Definition and first results}\label{sec:sdsgf}
Let $x_1,\ldots,x_n$ be independent variables defining independent ordinal scales, with a common domain $E$, and suppose that, when aggregating these variables by a function $F\colon E^n\to \mathbb{R}$, we require that the dependent variable $x_{n+1}=F(x_1,\ldots,x_n)$ defines an ordinal scale, with an arbitrary domain in $\mathbb{R}$. This condition can be formulated as follows.
\begin{definition}\label{de:cmfios} $F\colon E^n \to \mathbb{R}$ is said to be a \emph{comparison meaningful function on independent ordinal scales}\/ if, for any $\boldsymbol{\phi}\in \Phi[E]^n$, there is a strictly increasing mapping $\psi_{\boldsymbol{\phi}}\colon {\rm ran}(F)\to{\rm ran}(F)$ such that $F\big(\boldsymbol{\phi}(\mathbf{x})\big)=\psi_{\boldsymbol{\phi}}\big(F(\mathbf{x})\big)$ for all $\mathbf{x}\in E^n$. \end{definition}
Comparison meaningful functions on independent ordinal scales can also be defined as those functions preserving the comparison of aggregated values when changing the scales defined by the independent variables (see \cite{MarMes04}).
\begin{proposition}\label{prop:cmim2} $F\colon E^n \to \mathbb{R}$ is a comparison meaningful function on independent ordinal scales if and only if $$ F(\mathbf{x})~\Big\{{<\atop =}\Big\}~ F(\mathbf{x}') \quad\Rightarrow\quad F\big(\boldsymbol{\phi}(\mathbf{x})\big)~\Big\{{<\atop =}\Big\}~ F\big(\boldsymbol{\phi}(\mathbf{x}')\big) $$ for all $\mathbf{x},\mathbf{x}'\in E^n$ and all $\boldsymbol{\phi}\in\Phi[E]^n$.\footnote{Equivalently, $F\colon E^n \to \mathbb{R}$ is a comparison meaningful function on independent ordinal scales if and only if $$ F(\mathbf{x})\leqslant F(\mathbf{x}') \quad\Leftrightarrow\quad F\big(\boldsymbol{\phi}(\mathbf{x})\big)\leqslant F\big(\boldsymbol{\phi}(\mathbf{x}')\big) $$ for all $\mathbf{x},\mathbf{x}'\in E^n$ and all $\boldsymbol{\phi}\in\Phi[E]^n$.} \end{proposition}
Comparison meaningfulness on independent ordinal scales is a very strong condition, much stronger than comparison meaningfulness on a single ordinal scale. For example, it was proved~\cite[Lemma~5.2]{Mar02c} that this condition reduces any lattice polynomial function to a projection function onto one coordinate.
Regarding continuity of comparison meaningful functions on independent ordinal scales, it can be interpreted in the same way as for comparison meaningful functions on a single ordinal scale; see \cite[\S{5.2}]{MarMes04}. Consider again the set $\Phi'[E]$ of continuous nondecreasing surjections from $E$ onto $E$.
\begin{proposition} $F\colon E^n \to \mathbb{R}$ is a continuous and comparison meaningful function on independent ordinal scales if and only if, for any $\boldsymbol{\phi}\in \Phi'[E]^n$, there is a continuous and nondecreasing mapping $\psi_{\boldsymbol{\phi}}\colon {\rm ran}(F)\to{\rm ran}(F)$ such that $F\big(\boldsymbol{\phi}(\mathbf{x})\big)=\psi_{\boldsymbol{\phi}}\big(F(\mathbf{x})\big)$ for all $\mathbf{x}\in E^n$. \end{proposition}
\subsection{General descriptions}
The description of comparison meaningful functions on independent ordinal scales is very similar to that of comparison meaningful functions on a single ordinal scale. The result can be formulated as follows~\cite[Corollary~3.1]{MarMesRuc05}.
\begin{theorem}\label{thm:fdgfasd} $F\colon E^n\to\mathbb{R}$ is a comparison meaningful function on independent ordinal scales if and only if, for any $I\in\mathcal{I}_n^*[E]$, there exist an index $k_I\in
[n]$ and a strictly monotonic or constant function $g_I\colon \mathrm{P}_{k_I}(I)\to\mathbb{R}$ such that $F|_I=(g_I\circ\mathrm{P}_{k_I})|_I$, where, for any $I,I'\in\mathcal{I}_n^*[E]$, \begin{itemize} \item either $g_I=g_{I'}$,
\item or ${\rm ran}(g_I)={\rm ran}(g_{I'})$ is a singleton,
\item or ${\rm ran}(g_I)<{\rm ran}(g_{I'})$,
\item or ${\rm ran}(g_I)>{\rm ran}(g_{I'})$. \end{itemize} \end{theorem}
Thus, a comparison meaningful function on independent ordinal scales reduces, on each minimal strongly order invariant subset of $E^n$, to a constant or a transformed projection function onto one coordinate.
When a comparison meaningful function on independent ordinal scales is idempotent, it must satisfy $g_I(x)=x$, for all $x\in {\rm P}_{k_I}(I)$, whenever either $I=(E^\circ)^n$, or $I=\{(\inf E,\ldots,\inf E)\}$ (if $\inf E\in E$), or $I=\{(\sup E,\ldots,\sup E)\}$ (if $\sup E\in E$).
When $E$ is open, the family $\mathcal{I}_n^*[E]$ reduces to $\{(E^\circ)^n\}$, thus considerably simplifying Theorem~\ref{thm:fdgfasd} as follows.
\begin{proposition}\label{prop:eoscm} Assume that $E$ is open. Then $F\colon E^n\to \mathbb{R}$ is a comparison meaningful function on independent ordinal scales if and only if there exist $k\in [n]$ and a strictly monotonic or constant function $g\colon E\to\mathbb{R}$ such that $F=g\circ {\rm P}_k$. \end{proposition}
\begin{corollary} Assume that $E$ is open. Then $F\colon E^n\to \mathbb{R}$ is an idempotent and comparison meaningful function on independent ordinal scales if and only if it is a projection function. \end{corollary}
It follows from Proposition~\ref{prop:eoscm} that, when $E$ is open and $n\geqslant 2$, any symmetric and comparison meaningful function on independent ordinal scales is necessarily a constant function. In this case, it cannot be idempotent.
\subsection{The nondecreasing case}
Starting from Proposition~\ref{prop:eoscm} we deduce immediately the following characterizations.
\begin{proposition}\label{prop:eondscm} Assume that $E$ is open. Then $F\colon E^n\to \mathbb{R}$ is a nondecreasing comparison meaningful function on independent ordinal scales if and only if there exist $k\in [n]$ and a strictly increasing or constant function $g\colon E\to\mathbb{R}$ such that $F=g\circ {\rm P}_k$. \end{proposition}
When $E$ is not open, we have the following (see \cite[Corollary~4.2]{MarMesRuc05}).
\begin{theorem} $F\colon E^n\to \mathbb{R}$ is a nondecreasing comparison meaningful function on independent ordinal scales if and only if there exist a mapping
$\xi\colon \mathcal{I}_n^*[E]\to [n]$ and a nondecreasing mapping $\gamma\colon\mathcal{I}_n^*[E]\to \mathcal{G}(E)$ such that $F|_I=(\gamma(I)\circ {\rm P}_{\xi(I)})|_I$ for all $I\in\mathcal{I}_n^*[E]$, where if $\gamma(I)=\gamma(I')$ then also $\xi(I)=\xi(I')$ (unless $\gamma(I)=\gamma(I')$ is constant). \end{theorem}
\subsection{The continuous case}
As we already mentioned in \S\ref{sec:sdsgf}, comparison meaningfulness on independent ordinal scales reduces any lattice polynomial function to a projection function onto one coordinate. From this result we deduce immediately the following characterizations; see \cite[\S{5}]{Mar02c}.
\begin{theorem}\label{thm:coscm} $F\colon E^n\to \mathbb{R}$ is a continuous and comparison meaningful function on independent ordinal scales if and only if there exist $k\in [n]$ and a continuous and strictly monotonic or constant function $g\colon E\to\mathbb{R}$ such that $F=g\circ {\rm P}_k$. \end{theorem}
\begin{corollary} $F\colon E^n\to \mathbb{R}$ is a continuous, idempotent, and comparison meaningful function on independent ordinal scales if and only if it is a projection function. \end{corollary}
\begin{remark} The result in Theorem~\ref{thm:coscm} was proved first by Kim~\cite[Corollary~1.2]{Kim90} in $\mathbb{R}^n$ (see Osborne~\cite{Osb70} for preliminary results). \end{remark}
It follows from Theorem~\ref{thm:coscm} that, if $n\geqslant 2$, any symmetric, continuous, and comparison meaningful function on independent ordinal scales is necessarily a constant function.
\section{Aggregation on finite chains by chain independent functions} \label{sec:aod}
In this final section, mainly based on a paper by the authors~\cite{MarMes04}, we give interpretations of order invariance and comparison meaningfulness properties in the setting of aggregation on finite chains (i.e., totally ordered finite sets). These interpretations show that the order invariant functions and comparison meaningful functions always have isomorphic discrete representatives defined on finite chains. These discrete functions do not depend on the chains on which they are defined.
\subsection{Introduction}
Let $A$ be a set of \emph{alternatives} (objects, individuals, etc.) and consider an open real interval $E$, possibly unbounded.\footnote{Without loss of generality, we can assume that $E=(0,1)$ or $E=\mathbb{R}$.} In representational measurement theory~\cite{Rob79,Rob94}, a \emph{scale of measurement} can be seen as a mapping $h\colon A\to E$ that assigns a real number to each element of $A$ according to some attribute or criterion.\footnote{A criterion is an attribute defined in a preference-ordered domain.} As already mentioned in the introduction, such a scale is an ordinal scale if any other acceptable version of it is of the form $\phi\circ h$ for some strictly increasing function $\phi\colon E\to E$.
An ordinal scale is finite if ${\rm ran}(h)$ is a finite subset of $E$, that is of the form $$ {\rm ran}(h)=\{e_1 < e_2 < \cdots < e_k\}, $$
where the values $e_1,e_2,\ldots,e_k$ represent the possible rating benchmarks defined along some ordinal criterion. We shall assume throughout that $|{\rm ran}(h)|=k\geqslant 2$.
Since the values $e_1,e_2,\ldots,e_k$ of the scale are defined up to order, that is, up to a strictly increasing function $\phi\colon E\to E$, we can simply replace ${\rm ran}(h)$ with a finite chain $(S,\preccurlyeq)$ of $k$ elements, that is, $$ S=\{s_1 \prec s_2 \prec \cdots \prec s_k\}, $$ where $\preccurlyeq$ represents a total order on $S$ and $\prec$ represents its asymmetric part. In this representation we denote by $s_*:=s_1$ (resp.\ $s^*:=s_k$) the bottom element (resp.\ top element) of the chain.
\begin{example} Consider the problem of evaluating a commodity by a consumer according to a given ordinal criterion. Typically this evaluation is done by rating the product on a finite ordinal scale. For instance we could consider the following rating benchmarks: $$ 1=\mathrm{Bad},~2=\mathrm{Weak},~3=\mathrm{Fair},~4=\mathrm{Good},~5=\mathrm{Excellent}. $$ Since the scale values are determined only up to order, this scale can be replaced with a finite chain $S=\{\mathrm{B} \prec \mathrm{W} \prec \mathrm{F} \prec \mathrm{G} \prec \mathrm{E}\}$ whose elements B, W, F, G, E refer to the following linguistic terms: {\it bad}, {\it weak}, {\it fair}, {\it good}, {\it excellent}. \end{example}
It is well known (see \cite[Chapter~1]{KraLucSupTve71}) that the total order $\preccurlyeq$ defined on $S$ can always be numerically represented in $E$ by means of an isomorphism (a strictly increasing function) $f\colon S\to E$ such that $$ s_i\preccurlyeq s_j \quad\Leftrightarrow \quad f(s_i)\leqslant f(s_j) \qquad (s_i,s_j\in S). $$ Moreover, just as for the mapping $h$, the isomorphism $f$ is defined up to a strictly increasing function $\phi\colon E\to E$. That is, with $f$ all isomorphisms $f'=\phi\circ f$ (and only these) represent the same order on $S$.
By choosing $f$ so that $f(s_i)=e_i$ for $i=1,\ldots,k$, we immediately see that the elements of $A$ can be ordinally evaluated not only by means of the numerical mapping $h\colon A\to {\rm ran}(h)$ but also by the non-numerical mapping $h_S\colon A\to S$, defined by $h_S:=f^{-1}\circ h$. The following diagram illustrates the relationship among the mappings, where $h$ and $f$ are defined within a strictly increasing function $\phi\colon E\to E$: $$ \xymatrix{
A \ar[r]^{\phi\circ h} \ar[rd]_{h_S} & E \\
& S \ar[u]_{\phi\circ f} } $$
We may also consider non-open intervals $E$ with the natural condition that if $f(s)=\inf E\in E$ for some $s\in S$ then $s=s_*$, and similarly, if $f(s)=\sup E\in E$ for some $s\in S$ then $s=s^*$. In that case, the isomorphism $f$ is required to be \emph{endpoint preserving}, that is, if $\inf E\in E$ (resp.\ $\sup E\in E$) then $f(s_*)=\inf E$ (resp.\ $f(s^*)=\sup E$), regardless of the chain $(S,\preccurlyeq)$ considered.\footnote{This amounts to assuming that all the chains considered have a common bottom element $s_*$ (resp.\ a common top element $s^*$) whose numerical representation is $\inf E$ (resp.\ $\sup E$).} Consequently, all the functions $\phi\colon E\to E$ must also be endpoint preserving in the sense that $\phi(x)=x$ for all $x\in B[E]$. Due to the finiteness of the ordinal scales, we may even assume that the functions $\phi$ are continuous, which amounts to assuming that they all belong to $\Phi[E]$.
The endpoint preservation assumption of $f$ (and hence of $\phi$) clarifies why we consider numerical representations in an interval $E$ of $\mathbb{R}$, possibly non-open, rather than $\mathbb{R}$ itself.\footnote{If $E$ is closed, one typically chooses $E=[0,1]$ or $E=\mathbb{R}\cup\{-\infty,\infty\}$.}
In this section, the set of all endpoint preserving isomorphisms $f\colon S\to E$ is denoted $\mathrm{F}[S,E]$. The diagonal restriction of $\mathrm{F}[S,E]^n$ is the set $$ \mathrm{F}_n[S,E]:=\{\underbrace{(f,\ldots,f)}_{n} : f\in\mathrm{F}[S,E]\}. $$ Finally, for any $\mathbf{a}\in S^n$ and any $\mathbf{f}\in\mathrm{F}[S,E]^n$, the symbol $\mathbf{f}(\mathbf{a})$ denotes the vector $\big(f_1(a_1),\ldots,f_n(a_n)\big)$.
\subsection{Aggregation by order invariant functions} \label{sec:aif}
Suppose we have $n$ evaluations expressed in a finite chain $(S,\preccurlyeq)$, with $|S|=k\geqslant 2$. To aggregate these evaluations and obtain an overall evaluation in the same chain, we can use a discrete aggregation function $G\colon S^n\to S$, which is a ranking function sorting $k^n$ $n$-tuples into $k$ classes. (Here, ``discrete'' means that the domain of the function $G$ is a discrete set.)
Among all the possible aggregation functions, we could choose one that is ``independent'' of the chain used.\footnote{For example, we could use any lattice polynomial function, which does not depend on the chain used.} Such a \emph{chain independent}\/ aggregation function is necessarily based on a numerical function $F\colon E^n\to E$ that can be represented in any finite chain $(S,\preccurlyeq)$ by a discrete analog $G\colon S^n\to S$ in the sense that the following identity $$ F(x_1,\ldots,x_n)=f\big(G\big(f^{-1}(x_1),\ldots,f^{-1}(x_n)\big)\big)\qquad (\mathbf{x}\in E^n) $$ holds for all isomorphisms $f\in\mathrm{F}[S,E]$.
As the following theorem shows~\cite[Proposition~4.1]{MarMes04}, this condition completely characterizes the order invariant functions.
\begin{theorem}\label{thm:si} $F\colon E^n\to E$ is an order invariant function if and only if, for any finite chain $(S,\preccurlyeq)$, there exists an aggregation function $G\colon S^n\to S$ such that, for any $f\in\mathrm{F}[S,E]$, we have \begin{equation}\label{eq:si} F\big(f(a_1),\ldots,f(a_n)\big)=f\big(G(a_1,\ldots,a_n)\big) \qquad (\mathbf{a}\in S^n). \end{equation} \end{theorem}
Thus, an order invariant function is characterized by the fact that it can always be represented by a discrete aggregation function $G\colon S^n\to S$ on any finite chain $(S,\preccurlyeq)$, regardless of the cardinality of this chain.\footnote{It is important to remember that considering a discrete function $G\colon S^n\to S$, where $(S,\preccurlyeq)$ is a given chain, is not equivalent to considering an order invariant function $F\colon E^n\to E$. Indeed, defining an order invariant function is much more restrictive since such a function should be independent of any scale. For instance, if $n=2$ and $E$ is open, we see by Proposition~\ref{prop:eoin} that there are only four order invariant functions (namely $x_1$,
$x_2$, $x_1\wedge x_2$, and $x_1\vee x_2$) while the number of possible discrete functions $G\colon S^2\to S$ is clearly $k^{k^2}$, where $k=|S|$.} It is informative to represent the identity (\ref{eq:si}) by the following commutative diagram, where $\mathbf{f}:=(f,\ldots,f)$: $$ \xymatrix{
E^n \ar[r]^F & E \\
S^n \ar[r]_G \ar[u]^{\mathbf{f}} & S \ar[u]_{f} } $$
It is clear from (\ref{eq:si}) that the discrete function $G$ representing $F$ in $(S,\preccurlyeq)$ is uniquely determined and, in some sense, is isomorphic to the ``restriction'' of $F$ to $S^n$. For example, if $n=2$ and $F(\mathbf{x})=x_1\wedge x_2$ (resp.\ $F(\mathbf{x})=\inf E$) then the unique representative $G$ of $F$ is defined by $G(\mathbf{a})=a_1\wedge a_2$ (resp.\ $G(\mathbf{a})=s_*$).
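The representation identity is easy to check mechanically for a concrete order invariant function. The sketch below is our own illustration (with a hypothetical strictly increasing map $f$): it encodes a chain of $k$ elements by the ranks $0,\ldots,k-1$ and verifies (\ref{eq:si}) exhaustively for $F(\mathbf{x})=x_1\wedge x_2\wedge x_3$, whose discrete representative is $G(\mathbf{a})=a_1\wedge a_2\wedge a_3$:

```python
import itertools

# Encode a finite chain S = {s_1 < ... < s_k} by the ranks 0, ..., k-1 and pick
# a (hypothetical) strictly increasing isomorphism f: S -> E = (0, 1).
k, n = 5, 3
S = range(k)
f = {s: (s + 1) / (k + 1) for s in S}

F = min  # order invariant numerical function on E^n (here: the minimum)
G = min  # its discrete representative on S^n

# Verify F(f(a_1), ..., f(a_n)) = f(G(a_1, ..., a_n)) for every a in S^n.
for a in itertools.product(S, repeat=n):
    assert F(*(f[ai] for ai in a)) == f[G(*a)]
```

Replacing $f$ by any other strictly increasing map leaves the check valid, which is exactly the chain independence expressed by Theorem~\ref{thm:si}.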
Evidently an order invariant function is nondecreasing if and only if its discrete representative is nondecreasing. Another property that might be required on order invariant functions is continuity (see \S\ref{sec:OIcont}) whose discrete counterpart, called \emph{smoothness}, is defined as follows (see \cite{GodSie88}).\footnote{Fodor~\cite[Theorem~2]{Fod00} (see \cite{MarMes04} for the general case) showed that the smoothness condition is equivalent to the discrete version of the intermediate value theorem~\cite[Lemma~1]{FunFu75}.}
\begin{definition}\label{de:smooth} Consider $(n+1)$ finite chains $(S_0,\preccurlyeq_{S_0}),\ldots,(S_n,\preccurlyeq_{S_n})$. A discrete function $G\colon\mathop{\boldsymbol{\times}}_{i=1}^n S_i\to S_0$ is said to be \emph{smooth} if, for any $\mathbf{a},\mathbf{b}\in \mathop{\boldsymbol{\times}}_{i=1}^n S_i$, the elements $G(\mathbf{a})$ and $G(\mathbf{b})$ are equal or neighboring whenever there exists $j\in [n]$ such that $a_j$ and $b_j$ are neighboring and $a_i=b_i$ for all $i\neq j$. \end{definition}
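Definition~\ref{de:smooth} translates directly into a finite check. The following is a small sketch of ours (chains encoded as ranks $0,\ldots,k_i-1$, output also rank-encoded) that tests smoothness by flipping one coordinate at a time to its upper neighbor:

```python
import itertools

def is_smooth(G, chains):
    """Return True iff G is smooth on the product of the given chains:
    moving one coordinate to a neighboring element changes the
    (rank-encoded) output by at most one rank."""
    for a in itertools.product(*chains):
        for j in range(len(chains)):
            if a[j] + 1 in chains[j]:  # upper neighbor exists in coordinate j
                b = a[:j] + (a[j] + 1,) + a[j + 1:]
                if abs(G(a) - G(b)) > 1:
                    return False
    return True
```

For instance, $G(\mathbf a)=a_1\wedge a_2$ and $G(\mathbf a)=a_1\vee a_2$ are smooth on any product of chains, whereas a function jumping from the bottom rank to the top rank is not.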
The following important result~\cite[Proposition~5.1]{MarMes04} relates the continuity property of order invariant functions to the smoothness condition of its discrete representatives, thus making continuity sensible and even appealing for order invariant functions.
\begin{proposition}\label{prop:cisr} An order invariant function $F\colon E^n\to E$ is continuous if and only if it is represented only by smooth discrete aggregation functions. \end{proposition}
\subsection{Aggregation by comparison meaningful functions on a single ordinal scale} \label{sec:aif2}
Consider the more general situation where the evaluations to be aggregated are expressed in the same finite chain $(S,\preccurlyeq_S)$ and the overall evaluation is expressed in a finite chain $(T,\preccurlyeq_T)$, possibly different from $(S,\preccurlyeq_S)$. Again, we can consider aggregation functions $G\colon S^n\to T$ and, among them, we might want to choose aggregation functions that are independent of the chains used.
As the following theorem shows~\cite[Proposition~4.3]{MarMes04}, such chain independent functions are constructed from numerical functions $F\colon E^n\to\mathbb{R}$ that are exactly the comparison meaningful functions on a single ordinal scale.
\begin{theorem}\label{thm:si2} $F\colon E^n\to \mathbb{R}$ is a comparison meaningful function on a single ordinal scale if and only if, for any finite chain $(S,\preccurlyeq_S)$, there exists a finite chain $(T,\preccurlyeq_T)$ and a surjective aggregation function $G\colon S^n\to T$ such that, for any $\mathbf{f}\in\mathrm{F}_n[S,E]$, there is an isomorphism $g_{\mathbf{f}}\colon T\to\mathbb{R}$ such that \begin{equation}\label{eq:iuif} F\big(\mathbf{f}(\mathbf{a})\big)=g_{\mathbf{f}}\big(G(\mathbf{a})\big) \qquad (\mathbf{a}\in S^n). \end{equation} \end{theorem}
Thus, a comparison meaningful function on a single ordinal scale is characterized by the fact that it can always be represented by a discrete aggregation function $G\colon S^n\to T$ on any finite chain $(S,\preccurlyeq)$, regardless of the cardinality of this chain. The identity (\ref{eq:iuif}) can be graphically represented by the following commutative diagram $$ \xymatrix{
E^n \ar[r]^F & \mathbb{R} \\
S^n \ar[r]_G \ar[u]^{\mathbf{f}} & T \ar[u]_{g_{\mathbf{f}}} } $$
It can be easily shown~\cite[\S{4.2}]{MarMes04} that, given a comparison meaningful function $F\colon E^n\to \mathbb{R}$ on a single ordinal scale and a finite chain $(S,\preccurlyeq_S)$, the output chain $(T,\preccurlyeq_T)$ and the functions $G\colon S^n\to T$ and $g_{\mathbf{f}}\colon T\to\mathbb{R}$ are uniquely determined.
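The uniqueness statement can be made concrete: given $F$ and $f$, both the output chain $T$ and the isomorphism $g_{\mathbf f}$ can be read off from the image of the discrete representative. A small sketch of ours, using the median of three (order invariant, hence in particular comparison meaningful on a single ordinal scale):

```python
import itertools

# For F = median of three, construct its discrete representative G, the
# output chain T (the image of G), and g_f mapping each element of T to the
# corresponding numerical value.
k, n = 4, 3
S = range(k)
f = {s: (s + 1) / (k + 1) for s in S}   # a hypothetical increasing isomorphism
F = lambda xs: sorted(xs)[1]            # median of three, order invariant

G = {a: sorted(a)[1] for a in itertools.product(S, repeat=n)}
T = sorted(set(G.values()))             # output chain (here T coincides with S)
g_f = {t: f[t] for t in T}              # isomorphism g_f: T -> R

# Verify F(f(a)) = g_f(G(a)) for all a in S^n.
for a, t in G.items():
    assert F([f[ai] for ai in a]) == g_f[t]
```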
The analog of Proposition~\ref{prop:cisr} can be stated as follows~\cite[Proposition~5.2]{MarMes04}. Unfortunately here we no longer have a necessary and sufficient condition.
\begin{proposition} A continuous comparison meaningful function $F\colon E^n\to \mathbb{R}$ on a single ordinal scale is represented only by smooth discrete aggregation functions. \end{proposition}
\subsection{Aggregation by comparison meaningful functions on independent ordinal scales} \label{sec:aif3}
We now assume that the $n$ evaluations are expressed in independent finite chains $(S_i,\preccurlyeq_{S_i})$, $i=1,\ldots,n$, and that the overall evaluation is expressed in a finite chain $(T,\preccurlyeq_T)$. We can consider aggregation functions $G\colon \mathop{\boldsymbol{\times}}_{i=1}^n S_i\to T$ and, among them, we might want to choose aggregation functions that are independent of the chains used.
As the following theorem shows~\cite[Proposition~4.6]{MarMes04}, such chain independent functions are constructed from numerical functions $F\colon E^n\to\mathbb{R}$ that are exactly the comparison meaningful functions on independent ordinal scales.
\begin{theorem}\label{thm:si3} $F\colon E^n\to \mathbb{R}$ is a comparison meaningful function on independent ordinal scales if and only if, for any finite chains $(S_i,\preccurlyeq_{S_i})$, $i=1,\ldots,n$, there exists a finite chain $(T,\preccurlyeq_T)$ and a surjective aggregation function $G\colon \mathop{\boldsymbol{\times}}_{i=1}^n S_i\to T$ such that, for any $\mathbf{f}\in\mathrm{F}[S,E]^n$, there is an isomorphism $g_{\mathbf{f}}\colon T\to\mathbb{R}$ such that $$ F\big(\mathbf{f}(\mathbf{a})\big)=g_{\mathbf{f}}\big(G(\mathbf{a})\big) \qquad (\mathbf{a}\in \mathop{\boldsymbol{\times}}_{i=1}^n S_i). $$ \end{theorem}
Thus, a comparison meaningful function on independent ordinal scales is characterized by the fact that it can always be represented by a discrete aggregation function $G\colon\mathop{\boldsymbol{\times}}_{i=1}^n S_i\to T$, regardless of the cardinality of the chains considered. Here the commutative diagram is given by $$ \xymatrix{
E^n \ar[r]^F & \mathbb{R} \\
\mathop{\boldsymbol{\times}}\limits_{i=1}^n S_i \ar[r]_G \ar[u]^{\mathbf{f}} & T \ar[u]_{g_{\mathbf{f}}} } $$
Here again, it can be easily shown~\cite[\S{4.3}]{MarMes04} that, given a comparison meaningful function $F\colon E^n\to \mathbb{R}$ on independent ordinal scales and $n$ finite chains $(S_i,\preccurlyeq_{S_i})$, $i=1,\ldots,n$, the output chain $(T,\preccurlyeq_T)$ and the functions $G\colon\mathop{\boldsymbol{\times}}_{i=1}^n S_i\to T$ and $g_{\mathbf{f}}\colon T\to\mathbb{R}$ are uniquely determined.
Regarding continuous comparison meaningful functions, we have the following result~\cite[Proposition~5.3]{MarMes04}.
\begin{proposition} A continuous comparison meaningful function $F\colon E^n\to \mathbb{R}$ on independent ordinal scales is represented only by smooth discrete aggregation functions. \end{proposition}
\section*{Acknowledgments}
Radko Mesiar gratefully acknowledges the support of grants APVV-0375-06 and VEGA 1/4209/07.
\end{document}
\begin{document}
\sloppy \title{
A (tight) upper bound for the length of confidence intervals with conditional
coverage
}
\begin{abstract} We show that two popular selective inference procedures, namely data carving \citep{fithian2017} and selection with a randomized response \citep{tian2018}, when combined with the polyhedral method \citep{lee2016}, result in confidence intervals whose length is bounded. This contrasts results for confidence intervals based on the polyhedral method alone, whose expected length is typically infinite \citep{kivaranovic2020}. Moreover, we show that these two procedures always dominate corresponding sample-splitting methods in terms of interval length. \end{abstract}
\noindent {\it Keywords:} Inference after model selection, selective inference,
confidence interval, interval length
\section{Introduction} \label{intro}
\cite{kivaranovic2020} showed that the expected length of confidence intervals based on the polyhedral method of \cite{lee2016} is typically infinite. The polyhedral method can be modified by combining it with data carving \citep{fithian2017} or with selection on a randomized response \citep{tian2018}. In these references, the authors found, in simulations, that this combination results in significantly shorter intervals than those based on the polyhedral method alone. In this paper, we give a formal analysis of this phenomenon. We show that the polyhedral method, when combined with the proposals of \cite{fithian2017} or of \cite{tian2018}, delivers intervals that are always bounded. Our upper bound is easy to compute, easy to interpret and, in the interesting case where the polyhedral method alone gives intervals with infinite expected length, also sharp. Moreover, we show that the intervals of \cite{fithian2017} and \cite{tian2018} are always shorter than the intervals obtained by corresponding sample splitting methods.
There are several ongoing developments on inference procedures after model selection. Roughly speaking, one can divide them into two branches: inference conditional on the selected model and inference that is simultaneously valid over all potential models. Pioneering works in these two areas are \cite{lee2016} and \cite{berk2013}, respectively. This paper is concerned with procedures in the first branch, i.e., procedures that evolved from the polyhedral method. See, among others, \cite{fithian2017, markovic2018, panigrahi2018, panigrahi2019, reid2017, reid2018, taylor2018, tian2016, tian2017, tian2018b, tian2018, tibshirani2016}. For related literature from the second branch, see among others, \cite{bachoc2019,bachoc2020,kuchibhotla2018a,kuchibhotla2018b}. We also want to note that all these works address model-dependent targets and not the true underlying parameter in the classical sense. Valid inference for the underlying true parameter is a more challenging task, as demonstrated by the impossibility results of \cite{leeb2006,leeb2008}.
This paper is structured as follows. In Section \ref{sec_core} we present the main technical result, Theorem \ref{th_core}, where we show that a confidence interval based on a particular conditional distribution of normal random variables has bounded length. We also perform simulations that provide deeper insights. In Section~\ref{sec_selective}, we demonstrate that the conditional distribution considered in Theorem~\ref{th_core} frequently arises in selective inference. In particular, we consider the polyhedral method combined with data carving \citep{fithian2017} as well as the polyhedral method combined with selection on a randomized response \citep{tian2018}. We show that these procedures give confidence intervals whose length is always bounded, and that they strictly dominate corresponding sample-splitting methods in terms of interval length. While the discussion in Section~\ref{sec_selective} is generic, we also provide a detailed example in Section~\ref{sec_lasso}, namely model selection using the Lasso on a randomized response. A discussion in Section \ref{sec_discussion} concludes. All proofs are collected in the appendix.
\section{Main technical result} \label{sec_core}
With the polyhedral method, confidence intervals are constructed based on an observation from a so-called truncated normal distribution, i.e., the distribution of \begin{equation}\label{cond_tn}
X ~ | ~ X \in T, \end{equation} where $X$ is a $N(\mu,\sigma^2)$-distributed random variable with $\mu\in\mathbb R$ and $\sigma^2 \in (0,\infty)$. The truncation set $T$ depends on the selected model and is of the form \begin{equation} \label{trunc_set}
T ~ = ~ \bigcup_{i=1}^k ~ (a_i,b_i), \end{equation} where $k \in \mathbb N$ and $-\infty \leq a_1 < b_1 < \dots < a_k < b_k \leq \infty$. \cite{kivaranovic2020} showed that the resulting confidence interval has infinite expected length if and only if the truncation set $T$ is bounded from above or from below (or both). In that reference, it is also found that the polyhedral method typically leads to truncation sets that are bounded from above or from below, e.g., when the Lasso is used for model selection.
Several recent extensions of the polyhedral method provide confidence intervals that are constructed based on an observation of a random variable that is distributed as \begin{equation} \label{cond_rv}
X ~ | ~ X + U \in T, \end{equation} where $X$ and $T$ are as before and $U\sim N(0,\tau^2)$ is independent of $X$ with $\tau^2 \in (0,\infty)$. Clearly, \eqref{cond_rv} can be viewed as a randomized version of \eqref{cond_tn}, where the randomization is controlled by $\tau^2$, i.e., by the variance of $U$.
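Since $X$ and $U$ are independent, this conditional c.d.f. can be written out explicitly: with $\Phi$ and $\varphi$ denoting the standard normal c.d.f. and density, we have $\Pr(t+U\in T)=\sum_{i=1}^{k}\big[\Phi\big(\frac{b_i-t}{\tau}\big)-\Phi\big(\frac{a_i-t}{\tau}\big)\big]$ and therefore
$$
\Pr\big(X\le x ~\big|~ X+U\in T\big)
~=~
\frac{\int_{-\infty}^{x}\frac{1}{\sigma}\,\varphi\big(\frac{t-\mu}{\sigma}\big)\sum_{i=1}^{k}\big[\Phi\big(\frac{b_i-t}{\tau}\big)-\Phi\big(\frac{a_i-t}{\tau}\big)\big]\,\mathrm{d}t}
{\int_{-\infty}^{\infty}\frac{1}{\sigma}\,\varphi\big(\frac{t-\mu}{\sigma}\big)\sum_{i=1}^{k}\big[\Phi\big(\frac{b_i-t}{\tau}\big)-\Phi\big(\frac{a_i-t}{\tau}\big)\big]\,\mathrm{d}t}.
$$
This representation is convenient for numerical evaluation.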
Let $\Phi(x)$ be the cumulative distribution function (c.d.f.) of the standard normal distribution and denote by $F_{\mu,\sigma^2}^{\tau^2}(x)$ the conditional c.d.f. of the random variable in \eqref{cond_rv}. For the sake of readability, we do not show the dependence of this c.d.f. on $T$ in the notation. For $q \in (0,1)$, let $\mu_q(x)$ satisfy
\begin{equation} \label{eq_bound}
F_{\mu_q(x),\sigma^2}^{\tau^2}(x) ~ = ~ 1-q.
\end{equation} The quantity $\mu_q(x)$ is well-defined and strictly increasing as a function of $q$ for fixed $x \in \mathbb R$ (cf. Lemma \ref{le_cdf}). For all $q_1,q_2 \in (0,1)$ such that $q_1 \leq q_2$, we have \begin{equation*}
\mathbb P \left(
\mu \in [\mu_{q_1}(X),\mu_{q_2}(X)] ~ | ~ X + U \in T
\right) ~ = ~ q_2 - q_1 \end{equation*} by a textbook result for confidence bounds (e.g., Chapter 3.5 in \citealp{lehmann2006}). A common choice is to set $q_1=\alpha/2$ and $q_2=1-\alpha/2$ such that, conditional on $\{X + U \in T\}$, $[\mu_{q_1}(X),\mu_{q_2}(X)]$ is an equal-tailed confidence interval for $\mu$ at level $1-\alpha$. Another option is to choose $q_1$ and $q_2$ such that, conditional on $\{X + U \in T\}$, $[\mu_{q_1}(X),\mu_{q_2}(X)]$ is an unbiased confidence interval at level $1-\alpha$ (cf. Chapter 5.5 in \citealp{lehmann2006}).
It is easy to see that, as $\tau^2$ goes to $0$, $F_{\mu,\sigma^2}^{\tau^2}(x)$ converges weakly to the c.d.f. of the truncated normal distribution \eqref{cond_tn}, which leads to confidence intervals with infinite expected length if the truncation set $T$ is bounded from above or from below. On the other hand, as $\tau^2$ goes to $\infty$, it is similarly easy to see that $F_{\mu,\sigma^2}^{\tau^2}(x)$ converges weakly to the c.d.f. of the normal distribution with mean $\mu$ and variance $\sigma^2$. Hence, in the case where the intervals are based on $\lim_{\tau^2\to \infty} F_{\mu,\sigma^2}^{\tau^2}(x)$, the length of the resulting confidence interval equals $\sigma \left(\Phi^{-1}(q_2) - \Phi^{-1}(q_1) \right)$. These observations suggest that, in the case where $\tau^2 \in (0,\infty)$, the length of $[\mu_{q_1}(x),\mu_{q_2}(x)]$ might be bounded somewhere between $\sigma \left(\Phi^{-1}(q_2) - \Phi^{-1}(q_1) \right)$ and $\infty$. This idea is formalized by the following result.
\begin{theorem} \label{th_core}
Let $\tau^2 \in (0,\infty)$ and let $T$ be as in \eqref{trunc_set}. Let $F_{\mu,\sigma^2}^{\tau^2}(x)$ be the c.d.f. of \eqref{cond_rv} and let $\mu_q(x)$ satisfy \eqref{eq_bound}. Set $\rho^2 = \tau^2 / (\sigma^2+\tau^2)$. For all $x \in \mathbb R$ and all $q_1,q_2 \in (0,1)$ such that $q_1 \leq q_2$, we then have
\begin{equation*}
\mu_{q_2}(x) - \mu_{q_1}(x) ~ < ~ \frac{\sigma}{\rho} \left(\Phi^{-1}(q_2) - \Phi^{-1}(q_1) \right).
\end{equation*}
If $\sup T = b_k < \infty$, the left-hand side converges to the
right-hand side as $x \to \infty$. The same is true if $\inf T = a_1 > -\infty$ and $x \to -\infty$. \end{theorem}
The upper bound in Theorem~\ref{th_core} is easy to compute, does not depend on the truncation set $T$, and increases as the amount of randomization $\tau^2$ decreases. As $\tau^2$ goes to zero, the upper bound diverges to infinity, in accordance with \cite{kivaranovic2020}. On the other hand, as $\tau^2$ goes to $\infty$, the upper bound converges to $\sigma \left(\Phi^{-1}(q_2) - \Phi^{-1}(q_1) \right)$. Also note that the upper bound is sharp if $T$ is bounded from above or from below, i.e., in the case where confidence intervals based on \eqref{cond_tn} have infinite expected length.
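To get a feel for the size of the bound, take $\sigma^2=\tau^2=1$ and $q_1=1-q_2=0.025$ (the parameter values used in the simulations below): then $\rho^2=1/2$ and
$$
\frac{\sigma}{\rho}\left(\Phi^{-1}(q_2)-\Phi^{-1}(q_1)\right)
~=~ \sqrt{2}\cdot 2\,\Phi^{-1}(0.975) ~\approx~ 5.54,
$$
compared to the unconditional length $\sigma\left(\Phi^{-1}(q_2)-\Phi^{-1}(q_1)\right)\approx 3.92$.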
In Figure \ref{fig_length}, we plot the length of $[\mu_{q_1}(x),\mu_{q_2}(x)]$ as a function of $x$ for several truncation sets $T$. In the left panel the truncation set is of the form $(-a, a)$ (bounded) and in the right panel the truncation set is of the form $(-\infty, -a) \cup (a,\infty)$ (unbounded with a gap in the middle). The top dashed line denotes the upper bound $\frac{\sigma}{\rho} \left(\Phi^{-1}(q_2) - \Phi^{-1}(q_1) \right)$; the bottom dashed line denotes $\sigma\left(\Phi^{-1}(q_2) - \Phi^{-1}(q_1) \right)$, i.e., the length of the confidence interval with unconditional coverage. In the left panel, we see that $\mu_{q_2}(x) -
\mu_{q_1}(x)$ approximates the upper bound as $|x|$ diverges. The smaller $a$, i.e., the smaller the truncation set $T$, the faster the convergence. Also in this case, where the truncation set is an interval, the left panel indicates that the length is bounded from below by $\sigma\left(\Phi^{-1}(q_2) - \Phi^{-1}(q_1) \right)$. In the right panel, we see that our upper bound is not sharp when the truncation set is unbounded. It appears that the length converges to $\sigma\left(\Phi^{-1}(q_2) - \Phi^{-1}(q_1) \right)$ as $x$ diverges. However, as the gap of the truncation set becomes larger (i.e., as $a$ grows), we see that the length approximates the upper bound for values around $a$ and $-a$. Finally, $\sigma\left(\Phi^{-1}(q_2) - \Phi^{-1}(q_1) \right)$ is not a lower bound in this case, as we can see that the length is considerably smaller for values in the gap $(-a,a)$. It seems that the length converges to zero for values of $x$ around $0$ as $a$ diverges. \begin{figure}
\caption{The length of $[\mu_{q_1}(x),\mu_{q_2}(x)]$ is plotted as a function of $x$. In the left panel the truncation sets are of the form $(-a, a)$ and in the right panel the truncation sets are of the form $(-\infty, -a) \cup (a,\infty)$. The different values for $a$ are shown below the plot. The remaining parameters were: $q_1=1-q_2=0.025$ and $\sigma^2 = \tau^2 = 1$.}
\label{fig_length}
\end{figure}
In Figure \ref{fig_expected_length}, we plot Monte-Carlo approximations of the conditional expected length of $[\mu_{q_1}(X),\mu_{q_2}(X)]$ given $X+U\in T$ as a function of $\mu$, for the same scenarios as considered in Figure~\ref{fig_length}. For the Monte-Carlo simulations, we draw $2000$ independent samples from the distribution in \eqref{cond_rv}, compute the confidence interval for each and estimate the conditional expected length by the sample mean of the lengths. In the left panel, we observe that the conditional expected length is minimized at $\mu=0$ and converges to the upper bound as $\mu$ diverges. We also note that the smaller the truncation set, the larger the conditional expected length. This means that the conditional expected length is close to the upper bound if the probability of the conditioning event is small. In the right panel, we again observe that the conditional expected length is minimized at $\mu=0$. However, it does not converge to the upper bound but to $\sigma\left(\Phi^{-1}(q_2) - \Phi^{-1}(q_1) \right)$ as $\mu$ diverges. Surprisingly, the conditional expected length at $\mu=0$ decreases as $a$ increases and becomes significantly smaller than $\sigma\left(\Phi^{-1}(q_2) - \Phi^{-1}(q_1) \right)$. In particular, and in contrast to the left panel, we here observe that the conditional expected length decreases as the probability of the conditioning event decreases.
\begin{figure}
\caption{The conditional expected length of $[\mu_{q_1}(X),\mu_{q_2}(X)]$ is plotted as a function of $\mu$. In the left panel the truncation sets are of the form $(-a, a)$ and in the right panel the truncation sets are of the form $(-\infty, -a) \cup (a,\infty)$. The different values for $a$ are shown below the plot. The remaining parameters were: $q_1=1-q_2=0.025$ and $\sigma^2 = \tau^2 = 1$.}
\label{fig_expected_length}
\end{figure}
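The draws from \eqref{cond_rv} used for Figure~\ref{fig_expected_length} can be generated by plain rejection sampling. A minimal sketch of ours (rejection is wasteful when $\Pr(X+U\in T)$ is small, but it is adequate for illustration):

```python
import math
import random

def sample_conditional(n_samples, mu, sigma2, tau2, T, rng=None, max_tries=10**7):
    """Draw n_samples from X | X + U in T: simulate independent
    X ~ N(mu, sigma2) and U ~ N(0, tau2), keep X whenever X + U lands in T."""
    rng = rng or random.Random()
    sd, tau = math.sqrt(sigma2), math.sqrt(tau2)
    out = []
    for _ in range(max_tries):
        x = rng.gauss(mu, sd)
        u = rng.gauss(0.0, tau)
        if any(a < x + u < b for a, b in T):
            out.append(x)
            if len(out) == n_samples:
                return out
    raise RuntimeError("acceptance probability too small for rejection sampling")
```

For example, with $\mu=0$, $\sigma^2=\tau^2=1$ and $T=(0,\infty)$, the conditional mean of $X$ is $1/\sqrt{\pi}\approx 0.56$, which the sample mean of the accepted draws approximates.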
\section{Application to selective inference} \label{sec_selective}
Throughout this section, we consider the generic sample mean setting that lies at the heart of the polyhedral method and of many procedures derived from it. To use these methods with more complex model selectors, one essentially has to reduce the more complex situation at hand to the generic setting considered here. This is demonstrated by the example in Section~\ref{sec_lasso}; further examples can be obtained from the papers on selective inference mentioned in Section~\ref{intro}.
Let $n\in\mathbb N$ and let $X_1,\dots, X_n$ be i.i.d. normal random variables with mean $\mu\in \mathbb R$ and variance $\sigma^2\in (0,\infty)$. The outcome of a model-selection procedure can often be characterized through an event of the form $\bar X_n \in T$, where $\bar X_n$ denotes the sample mean and $T$ is as in Section~\ref{sec_core}. The polyhedral method provides a confidence interval for $\mu$ with pre-specified coverage probability conditional on the event $\bar X_n \in T$, based on the conditional distribution of
$\bar X_n | \bar X_n \in T$. The (conditional) expected length of this interval is infinite if and only if $T$ is bounded from above or from below; cf. \cite{kivaranovic2020}.
\subsection{Data carving} \label{carving}
Data carving \citep{fithian2017} means that only a subset of the data is used for model selection while the entire dataset is used for inference based on the selected model. Let $\delta \in (0,1)$ be such that $\delta n$ is a positive integer. If only the first $\delta n$ observations are used for selection, the outcome of a model-selection procedure can often be characterized through an event of the form $\bar X_{\delta n} \in T$; here $\bar X_{\delta n}$ is the sample mean of the first $\delta n$ observations and the truncation set $T$ is again as in Section~\ref{sec_core}. (Of course, the truncation sets used by the plain polyhedral method and by the polyhedral method with data carving might differ.) Inference for $\mu$ is based on the conditional distribution \begin{equation} \label{rv_carving}
\bar X_n ~ | ~ \bar X_{\delta n} \in T. \end{equation} In the preceding display, the conditioning variable $\bar X_{\delta n}$ can be written as $\bar X_{\delta n}= \bar X_n + U$ for $U = \bar X_{\delta n} - \bar X_n$; using elementary properties of the normal distribution, it is not difficult to obtain the following result.
\begin{proposition}
The conditional c.d.f. of the random variable in \eqref{rv_carving} is equal to $F_{\mu,\tilde \sigma^2}^{\tilde \tau^2}(x)$ with truncation set $T$, \begin{equation*}
\tilde \sigma^2 ~ = ~ \frac{\sigma^2}{n} \quad \text{and} \quad \tilde \tau^2 ~ = ~ \frac{\sigma^2}{n}\frac{1-\delta}{\delta}. \end{equation*} \end{proposition}
Let $\tilde \mu_q(x)$ satisfy $F_{\tilde \mu_q(x),\tilde \sigma^2}^{\tilde \tau^2}(x) = 1-q$, where $\tilde \sigma^2$ and $\tilde \tau^2$ are as in the proposition. Then $[\tilde \mu_{q_1}(\bar X_n), \tilde \mu_{q_2}(\bar X_n)]$ is a confidence interval for $\mu$ with conditional coverage probability $q_2-q_1$ given $\bar X_{\delta n} \in T$ ($0<q_1<q_2<1$). Theorem \ref{th_core} implies that \begin{equation*}
\tilde \mu_{q_2}(x) - \tilde \mu_{q_1}(x) ~ < ~ \frac{\sigma}{\sqrt{n}} \left(\Phi^{-1}(q_2) - \Phi^{-1}(q_1) \right) \frac{1}{\sqrt{1-\delta}}. \end{equation*} We see that the length of $[\tilde \mu_{q_1}(\bar X_n),\tilde \mu_{q_2}(\bar X_n)]$ shrinks at the same $\sqrt{n}$-rate as in the unconditional case. The price of conditioning is at most the factor $1/\sqrt{1-\delta}$. Note that, for $\delta=1$, data carving reduces to the polyhedral method.
A corresponding sample-splitting method is the following: Again, the model is selected based on the first $\delta n$ observations, resulting in the same event $\bar X_{\delta n} \in T$. For subsequent inference, however, only the last $(1-\delta) n$ observations are used. Because these are independent of $\bar X_{\delta n}$, one obtains the standard confidence interval for $\mu$ based on the mean of the last $(1-\delta)n$ observations, whose length is $\sigma (\Phi^{-1}(q_2) - \Phi^{-1}(q_1)) /\sqrt{(1-\delta)n}$. By the inequality in the preceding display, this interval is strictly longer than the interval obtained with data carving.
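The comparison between data carving and sample splitting thus boils down to a numerical identity: the data-carving bound coincides with the sample-splitting length. A small sketch of ours (the normal quantile is computed by bisection so that only the standard library is needed):

```python
import math

def norm_ppf(q, tol=1e-12):
    """Standard normal quantile by bisection on the erf-based c.d.f."""
    lo, hi = -10.0, 10.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def carving_length_bound(sigma, n, delta, q1, q2):
    """Upper bound on the data-carving interval length: here rho^2 = 1 - delta,
    so the bound is sigma/sqrt(n) * (z_{q2} - z_{q1}) / sqrt(1 - delta)."""
    return sigma / math.sqrt(n) * (norm_ppf(q2) - norm_ppf(q1)) / math.sqrt(1.0 - delta)

def splitting_length(sigma, n, delta, q1, q2):
    """Length of the standard interval built from the last (1-delta)*n points."""
    return sigma * (norm_ppf(q2) - norm_ppf(q1)) / math.sqrt((1.0 - delta) * n)
```

The two quantities agree exactly; since the inequality of Theorem~\ref{th_core} is strict, the data-carving interval is always strictly shorter than the sample-splitting one.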
\subsection{Selection with a randomized response} \label{randomized}
This method of \cite{tian2018} performs model-selection with a randomized version of the data (i.e., after adding noise), while inference based on the selected model is performed with the original data. Let $\omega \sim N(0,\tau^2 I_n)$ be a noise vector independent of $X_1,\dots, X_n$ and write $\bar \omega_n$ for the mean of its components. If, in the model-selection step, the randomized data $X_1+\omega_1,\dots, X_n + \omega_n$ are used instead of the original data, then the outcome of the model-selection step can often be characterized through an event of the form $\bar X_n + \bar \omega_n \in T$ where $T$ again is a truncation set as in Section~\ref{sec_core} (possibly different from the truncation sets used by the plain polyhedral method or by the polyhedral method with data carving). Here, inference for $\mu$ is based on the conditional distribution \begin{equation} \label{rv_lasso}
\bar X_n ~|~\bar X_n+\bar \omega_n \in T, \end{equation} which is easy to compute.
\begin{proposition} The conditional c.d.f. of the random variable in \eqref{rv_lasso} is equal to $F_{\mu, \bar \sigma^2}^{\bar \tau^2}(x)$ with truncation set $T$, \begin{equation*}
\bar \sigma^2 ~ = ~ \frac{\sigma^2}{n}
\quad \text{and}\quad \bar \tau^2 ~ = ~ \frac{\tau^2}{n}. \end{equation*} \end{proposition}
Let $\bar \mu_q(x)$ satisfy $F_{\bar \mu_q(x), \bar \sigma^2}^{\bar \tau^2}(x) = 1-q$, where $\bar \sigma^2$ and $\bar \tau^2$ are as in the proposition, so that $[\bar\mu_{q_1}(\bar X_n), \bar\mu_{q_2}(\bar X_n)]$ is a confidence interval for $\mu$ with conditional coverage probability $q_2-q_1$ given $\bar X_n+\bar\omega_n\in T$ ($0<q_1<q_2<1$). With Theorem \ref{th_core}, we see that \begin{equation*} \bar\mu_{q_2}(x) - \bar\mu_{q_1}(x) ~<~ \frac{\sigma}{\sqrt{n}} \left( \Phi^{-1}(q_2) - \Phi^{-1}(q_1)\right) \sqrt{1+\frac{\sigma^2}{\tau^2}}. \end{equation*} Similarly to data carving, the length of the interval shrinks at the same $\sqrt{n}$-rate as in the unconditional case, the price of conditioning is controlled by the factor $\sqrt{1+\sigma^2/\tau^2}$, and the method reduces to the polyhedral method if $\tau=0$.
To obtain a sample-splitting method that is comparable to selection with a randomized response, we proceed as follows: We use the first $n \sigma^2/(\sigma^2+\tau^2)$ observations for model selection and the remaining $m = n \tau^2/(\sigma^2+\tau^2)$ observations for inference (assuming, for simplicity, that these numbers are positive integers). Write $\tilde X_m$ for the mean of the last $m$ observations. Because the first $n-m$ observations are independent of $\tilde X_m$, we thus obtain the standard confidence interval for $\mu$ based on $\tilde X_m$ with length $\sigma(\Phi^{-1}(q_2) - \Phi^{-1}(q_1))/\sqrt{m} = \sigma(\Phi^{-1}(q_2) - \Phi^{-1}(q_1))\sqrt{1+\sigma^2/\tau^2}/\sqrt{n}$. In terms of length, this interval is dominated by the interval $[\bar \mu_{q_1} (\bar X_n), \bar \mu_{q_2}(\bar X_n)]$ considered above.
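Numerically (a sketch with illustrative values), the sample-splitting length based on $m = n\tau^2/(\sigma^2+\tau^2)$ observations equals the upper bound for selection with a randomized response, which makes the dominance explicit:

```python
# Sketch: the sample-splitting interval based on m = n tau^2/(sigma^2+tau^2)
# observations has exactly the length of the upper bound for selection with
# a randomized response (values below are illustrative).
from scipy.stats import norm

def randomized_bound(sigma, tau, n, q1, q2):
    # (sigma / sqrt(n)) (Phi^{-1}(q2) - Phi^{-1}(q1)) sqrt(1 + sigma^2/tau^2)
    return sigma / n ** 0.5 * (norm.ppf(q2) - norm.ppf(q1)) * (1 + sigma ** 2 / tau ** 2) ** 0.5

def splitting_length(sigma, tau, n, q1, q2):
    m = n * tau ** 2 / (sigma ** 2 + tau ** 2)  # observations kept for inference
    return sigma * (norm.ppf(q2) - norm.ppf(q1)) / m ** 0.5

bound = randomized_bound(1.0, 2.0, 125, 0.025, 0.975)
split = splitting_length(1.0, 2.0, 125, 0.025, 0.975)
```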
For data carving, choosing a corresponding sample-splitting method was obvious; cf. Subsection~\ref{carving}. This is not the case in the setting considered here. Nevertheless, the considerations in the preceding paragraph show that selection with a randomized response dominates any sample-splitting scheme that uses at most $m=n \tau^2 /(\sigma^2+\tau^2)$ observations for inference and the rest for selection.
\begin{remark*} \normalfont Throughout this section, we have considered Gaussian data. In non-Gaussian settings, our results can be applied asymptotically, provided that the estimator used in the inference step is asymptotically normal and provided that the probability of the truncation set does not vanish. \end{remark*}
\section{Example: Lasso selection with a randomized response} \label{sec_lasso}
Consider a response vector $Y$ and a matrix $A$ of explanatory variables. In particular, assume that $Y\sim N(\theta, \sigma^2 I_n)$ with $n\in\mathbb N$, $\theta \in \mathbb R^n$ and $\sigma^2 \in (0,\infty)$, and assume that $A \in \mathbb R^{n\times d}$ is a fixed matrix whose columns are in general position ($d\in\mathbb N$).
The Lasso estimator, denoted by $\hat{\beta}(y)$, is the minimizer of the least squares problem with an additional penalty on the absolute size of the regression coefficients \citep{frank1993, tibshirani1996}: $$ \hat{\beta}(y)\quad=\quad \argmin_{\beta \in \mathbb R^d}
\frac{1}{2}\| y-A \beta\|_2^2 + \lambda \|\beta\|_1, $$ where $y\in\mathbb R^n$ and $\lambda \in (0,\infty)$ is a given tuning parameter. Because the columns of $A$ are in general position, $\hat{\beta}(y)$ is well-defined; cf. \cite{tibshirani2013}. Since individual components of $\hat{\beta}(Y)$ are zero with positive probability, the non-zero coefficients of $\hat{\beta}(Y)$ can be viewed as the model $\hat{m}(Y)$ selected by the Lasso. More formally, for each $y\in \mathbb R^n$, let $\hat{m}(y) \subseteq \{1,\dots, d\}$ and
$\hat{s}(y)\in \{-1,1\}^{|\hat{m}(y)|}$
denote the set of indices and the vector of signs, respectively, of the non-zero components of $\hat{\beta}(y)$. For $m\subseteq \{1,\dots, d\}$ and $s \in \{-1,1\}^{|m|}$, the set $\{ y:\; \hat{m}(y) = m, \hat{s}(y) = s\}$ can be written as $\{y:\; A_{m,s} y < b_{m,s}\}$ (up to a set of measure zero), where the matrix $A_{m,s}$ and the vector $b_{m,s}$ depend on $m$ and $s$; see Theorem~3.3 of \cite{lee2016} or Appendix~C of \cite{kivaranovic2020} for explicit formulae. In particular, this set is a polyhedron in $y$-space. Obviously, the set $\{y:\; \hat{m}(y)=m\}$ can be written as a finite disjoint union of such polyhedra.
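The selection step above can be sketched in code (a toy illustration, not the authors' implementation). Note that scikit-learn's \texttt{Lasso} minimizes $\|y-Ab\|_2^2/(2n) + \alpha\|b\|_1$, so setting $\alpha = \lambda/n$ matches the objective in the display above; the data and tuning parameter below are illustrative:

```python
# Sketch: extract the selected model \hat m(y) and sign vector \hat s(y)
# from a Lasso fit. scikit-learn rescales the least-squares term by 1/(2n),
# so alpha = lam / n matches the objective used in the text.
import numpy as np
from sklearn.linear_model import Lasso

def lasso_model_and_signs(A, y, lam):
    n = A.shape[0]
    fit = Lasso(alpha=lam / n, fit_intercept=False, max_iter=100000).fit(A, y)
    m = np.flatnonzero(np.abs(fit.coef_) > 1e-8)  # indices of non-zero coefficients
    s = np.sign(fit.coef_[m]).astype(int)         # their signs
    return m, s

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5))
y = 2.0 * A[:, 0] - 3.0 * A[:, 1] + rng.standard_normal(50)
m, s = lasso_model_and_signs(A, y, lam=10.0)
```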
Throughout the rest of this section, consider a model $m\subseteq\{1,\dots, d\}$ and a sign-vector
$s \in \{-1,1\}^{|m|}$ such that the event $\{\hat{m}(Y)=m, \hat{s}(Y)=s\}$
has positive probability (this also ensures that $|m| \leq n$). Write $A_m$ for the matrix of those columns of $A$ whose index lies in $m$, and set $\beta_m = (A_m'A_m)^{-1} A_m' \theta$. Suppose the quantity of interest is $\gamma'\beta_m$ for some
$\gamma\in \mathbb R^{|m|}$. Equivalently, this quantity can be written as $\gamma' \beta_m = \eta_m' \theta$ for $\eta_m = A_m (A_m'A_m)^{-1} \gamma$. The polyhedral method provides a confidence interval for $\eta_m'\theta$ with pre-specified coverage probability conditional on the event $\hat{m}(Y)=m$:
Set $P_{\eta_m} = \eta_m\eta_m'/\|\eta_m\|_2^2$, decompose $Y$ into $P_{\eta_m}Y + (I_n-P_{\eta_m})Y$, i.e., into the sum of two independent components, and note that $\mathbb E( \eta_m' Y) = \eta_m' \theta$. For each $z$ orthogonal to $\eta_m$, the set $\{y:\; \hat{m}(y)=m, \hat{s}(y)=s, (I_n-P_{\eta_m})y = z\}$ is a (possibly unbounded) open line-segment (up to a set of measure zero). Therefore, the conditional distribution $$
\eta_m'Y\;|\; \hat{m}(Y)=m, \hat{s}(Y)=s, (I_n-P_{\eta_m})Y=z $$
is a truncated normal distribution with mean $\eta_m'\theta$, variance $\sigma^2 \|\eta_m\|_2^2$ and a truncation set $T_{m,s}(z)$ that is a (possibly unbounded) open interval which depends on $m$, $s$ and $z$ (see \citealp{lee2016}, or \citealp{kivaranovic2020}, for explicit formulae for $T_{m,s}(z)$). In a similar fashion, the conditional distribution $$
\eta_m'Y\;|\; \hat{m}(Y)=m, (I_n-P_{\eta_m})Y=z $$ is a truncated normal distribution with the same mean and variance and a truncation set $T_m(z)$ that is a finite union of (possibly unbounded) disjoint open intervals. Based on these two conditional distributions, and arguing as in Section~\ref{sec_core}, mutatis mutandis, the polyhedral method gives a confidence interval for $\eta_m'\theta$ with pre-specified coverage probability conditional on either the event $\{\hat{m}(Y)=m,\hat{s}(Y)=s, (I_n-P_{\eta_m})Y=z\}$ or the event $\{\hat{m}(Y)=m, (I_n-P_{\eta_m})Y=z\}$. As this holds true for each possible value $z$ of $(I_n-P_{\eta_m})Y$, the resulting intervals also maintain coverage conditional on larger events like $\{\hat{m}(Y)=m,\hat{s}(Y)=s\}$ or $\{\hat{m}(Y)=m\}$, respectively. \cite{kivaranovic2020} showed that the confidence intervals based on the first conditional distribution always have infinite expected length, and that the same is typically true for those based on the second conditional distribution.
Following \cite{tian2018}, we here consider the case where the data is randomized in the model-selection step. Let $\omega \sim N(0,\tau^2 I_n)$, with $\tau\in (0,\infty)$, be independent of $Y$. Replacing $Y$ by $Y+\omega$ during model selection amounts to computing the Lasso estimator as $\hat{\beta}(Y+\omega)$, the selected model as $\hat{m}(Y+\omega)$ and the corresponding sign-vector as $\hat{s}(Y+\omega)$. Arguing as in the preceding paragraph, but with the events $\{\hat{m}(Y)=m, \hat{s}(Y)=s, (I_n-P_{\eta_m})Y=z\}$ and $\{\hat{m}(Y)=m, (I_n-P_{\eta_m})Y=z\}$ replaced by $\{\hat{m}(Y+\omega)=m, \hat{s}(Y+\omega)=s, (I_n-P_{\eta_m})(Y+\omega)=z\}$ and $\{\hat{m}(Y+\omega)=m, (I_n-P_{\eta_m})(Y+\omega)=z\}$, respectively, it is easy to see that the c.d.f. of $$
\eta_m'Y \;|\; \hat{m}(Y+\omega)=m, \hat{s}(Y+\omega)=s, (I_n-P_{\eta_m})(Y+\omega)=z $$ is equal to $F_{\check{\mu}, \check{\sigma}^2}^{\check \tau^2}(x)$ with the truncation set $T$ replaced by $T_{m,s}(z)$, where $\check{\mu}=\eta_m'\theta$,
$\check{\sigma}^2 = \sigma^2\|\eta_m\|_2^2$ and
$\check{\tau}^2 = \tau^2 \|\eta_m\|_2^2$, for each $z$ orthogonal to $\eta_m$. Similarly, the c.d.f. of $$
\eta_m'Y \;|\; \hat{m}(Y+\omega)=m, (I_n-P_{\eta_m})(Y+\omega)=z $$ is equal to $F_{\check{\mu}, \check{\sigma}^2}^{\check{\tau}^2}(x)$ with the truncation set $T$ now replaced by $T_{m}(z)$. (The truncation sets $T_{m,s}(z)$ and $T_m(z)$ are as in the preceding paragraph.)
Clearly, the two conditional distributions in the preceding paragraph are covered by Theorem~\ref{th_core}: If $\check{\mu}_q(x)$ is such that $F_{\check{\mu}_q(x),\check{\sigma}^2}^{\check{\tau}^2}(x) = 1-q$, then $$ \check{\mu}_{q_2}(x) - \check{\mu}_{q_1}(x) \;<\; \check{\sigma} \left(\Phi^{-1}(q_2) - \Phi^{-1}(q_1)\right) \sqrt{1+\frac{\sigma^2}{\tau^2}} $$ ($0<q_1<q_2<1$), so that, in particular, the length of the confidence interval $[\check{\mu}_{q_1}(\eta_m'Y),\check{\mu}_{q_2}(\eta_m'Y)]$ for $\eta_m'\theta$ is always bounded by a constant, irrespective of the truncation set. If the truncation set is $T_{m,s}(z)$, then, by construction, the conditional coverage probability $$ \mathbb P\Big( \eta_m'\theta \in [\check{\mu}_{q_1}(\eta_m'Y),\check{\mu}_{q_2}(\eta_m'Y)]
\;\Big|\; \hat{m}(Y+\omega)=m, \hat{s}(Y+\omega)=s, (I_n-P_{\eta_m})(Y+\omega)=z \Big) $$ equals $q_2 - q_1$, and the same is true if one conditions on larger events like $\{ \hat{m}(Y+\omega)=m, \hat{s}(Y+\omega)=s\}$ or $\{ \hat{m}(Y+\omega)=m\}$. That statement continues to hold if the truncation set is $T_m(z)$, provided that the restriction $\hat{s}(Y+\omega)=s$ is removed throughout.
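The randomized selection step can be sketched as follows (a toy illustration; the explicit truncation sets $T_{m,s}(z)$ and the inversion of the conditional c.d.f. are omitted, and the data below are illustrative). The model is selected from $Y+\omega$, and the constant bounding the interval length, $\check\sigma(\Phi^{-1}(q_2)-\Phi^{-1}(q_1))\sqrt{1+\sigma^2/\tau^2}$, is finite irrespective of the truncation set:

```python
# Sketch of selection with a randomized response for the Lasso (toy data).
# Selection is run on Y + omega; inference would then use the original Y
# together with the truncation sets T_{m,s}(z), which are omitted here.
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, d, sigma, tau, lam = 50, 5, 1.0, 1.0, 10.0
A = rng.standard_normal((n, d))
Y = 3.0 * A[:, 0] + sigma * rng.standard_normal(n)
omega = tau * rng.standard_normal(n)                 # randomization noise

fit = Lasso(alpha=lam / n, fit_intercept=False, max_iter=100000).fit(A, Y + omega)
m = np.flatnonzero(np.abs(fit.coef_) > 1e-8)         # \hat m(Y + omega)

A_m = A[:, m]
gamma = np.zeros(len(m)); gamma[0] = 1.0             # target: first selected coefficient
eta = A_m @ np.linalg.solve(A_m.T @ A_m, gamma)      # eta_m
check_sigma = sigma * np.linalg.norm(eta)            # \check sigma
q1, q2 = 0.025, 0.975
length_bound = check_sigma * (norm.ppf(q2) - norm.ppf(q1)) * (1 + sigma ** 2 / tau ** 2) ** 0.5
```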
\section{Discussion} \label{sec_discussion}
We have shown that the length of certain confidence intervals with conditional coverage guarantee can be drastically shortened by adding some noise to the data in the model-selection step; compare \eqref{cond_tn} and \eqref{cond_rv}. Examples include data carving and selection with a randomized response, both combined with the polyhedral method. Our findings clearly support the observations of \cite{fithian2017} and \cite{tian2018} that sacrificing some power in the model-selection step results in a significant increase in power in subsequent inferences. Performing selection and inference on the same data is unfavorable when the events describing the outcome of the selection step correspond to bounded regions in sample space (in our case, the truncation set $T$), because then the resulting confidence set has infinite expected length; cf. \cite{kivaranovic2020}. There are, however, situations where this case cannot occur: For example, \cite{heller2019} studies a situation where first a global hypothesis is tested and subsequent tests are only performed if the global hypothesis is rejected. There, bounded selection regions do not arise and excessively long intervals are not an issue. Hence, we recommend choosing the selection procedure with care. In some situations, adding noise in the selection step (e.g., through data carving or randomized selection) may be beneficial; in other situations, it may not be necessary.
\begin{appendices}
\section{Proof of Theorem \ref{th_core}}
We first provide some intuition behind the theorem. Second, we state Propositions \ref{prop_core1} and \ref{prop_core2}, the two core results on which the proof of Theorem \ref{th_core} relies. Third, we prove Theorem \ref{th_core} with the help of these two propositions. In Sections \ref{sec_prop_core1} and \ref{sec_prop_core2} we prove Propositions \ref{prop_core1} and \ref{prop_core2}, respectively. The first of these two propositions is considerably more difficult to prove. In Section \ref{sec_aux} we collect several auxiliary results which are required for the proofs of the main results.
Throughout this section, fix $\sigma^2$ and $\tau^2$ in $(0,\infty)$, and simplify notation by setting $F_\mu (x) = F_{\mu,\sigma^2}^{\tau^2}(x)$ and $f_\mu (x) = f_{\mu,\sigma^2}^{\tau^2}(x)$, where $F_{\mu,\sigma^2}^{\tau^2}(x)$ and $f_{\mu,\sigma^2}^{\tau^2}(x)$ denote the conditional c.d.f. and the conditional p.d.f., respectively, of the random variable in \eqref{cond_rv}. Recall that $\rho^2 = \tau^2 / (\sigma^2+\tau^2)$, and that $\mu_q(x)$ is defined by \eqref{eq_bound}. Denote by $\phi(x)$ and $\Phi(x)$ the p.d.f. and c.d.f. of the standard normal distribution with the usual convention that $\phi(-\infty)=\phi(\infty)=\Phi(-\infty)=1-\Phi(\infty)=0$.
Observe that the random vector $(X, X+U)'$ has a two-dimensional normal distribution with mean vector $(\mu,\mu)'$, variances $\sigma^2$ and $\sigma^2+\tau^2$, and covariance $\sigma^2$. It is elementary to verify that, for any $v \in \mathbb R$, \begin{equation} \label{rv_cond_v}
X ~ | ~ X + U = v ~ \sim ~ N(\rho^2\mu + (1-\rho^2)v, \sigma^2\rho^2). \end{equation} Let $G_\mu(x, v)$ denote the c.d.f. of this normal distribution, i.e., \begin{equation} \label{cdf_normal}
G_\mu(x,v) = \Phi\left(\frac{x - (\rho^2\mu + (1-\rho^2)v)}{\sigma\rho} \right). \end{equation} By definition of $F_\mu(x)$, we have \begin{equation} \label{eq_cdf_cond}
F_\mu(x) = \mathbb E\left[G_\mu(x, V_\mu) \right], \end{equation} where $V_\mu$ is a random variable that is truncated normally distributed with mean $\mu$, variance $\sigma^2+\tau^2$ and truncation set $T$.
Assume, for this paragraph, that $T$ is the singleton set $T = \{v\}$ for some fixed $v\in\mathbb R$ (singleton truncation sets are excluded by our definition of $T$ in \eqref{trunc_set}). Then the c.d.f.s $G_\mu(x,v)$ and $F_\mu(x)$ coincide, and it is elementary to verify that the length of $[\mu_{q_1}(x),\mu_{q_2}(x)]$ is equal to $(\sigma/\rho)(\Phi^{-1}(q_2) -\Phi^{-1}(q_1))$, which is exactly the upper bound in Theorem \ref{th_core}. The theorem thus implies that confidence intervals only become shorter if one conditions on a set $T$ with positive Lebesgue measure instead of a singleton. On the other hand, if $T$ is equal to $\mathbb R$, it is clear that the length of $[\mu_{q_1}(x),\mu_{q_2}(x)]$ is equal to $\sigma(\Phi^{-1}(q_2)-\Phi^{-1}(q_1))$. Surprisingly, this latter quantity is not necessarily a lower bound for the length of $[\mu_{q_1}(x),\mu_{q_2}(x)]$ if $T$ is a proper subset of $\mathbb R$; cf. the r.h.s. of Figure \ref{fig_length}, or Figure 1 of \cite{kivaranovic2020} for the case where $\tau$ equals $0$.
\begin{proposition} \label{prop_core1}
For all $x \in \mathbb R$ and all $\mu \in \mathbb R$, we have
\begin{equation} \label{ineq_dmu}
\frac{\partial \Phi^{-1}(F_\mu(x))}{\partial \mu} ~ < ~ -\frac{\rho}{\sigma}.
\end{equation} \end{proposition}
\begin{proposition} \label{prop_core2}
Let $G_\mu(x,v)$ be defined as in \eqref{cdf_normal} and let $q \in (0,1)$. If $\sup T = b_k < \infty$, then
\begin{equation*}
\lim_{x \to \infty} G_{\mu_{q}(x)}(x,b_k) = 1-q.
\end{equation*}
If $\inf T = a_1 > -\infty$, then
\begin{equation*}
\lim_{x \to -\infty} G_{\mu_{q}(x)}(x,a_1) = 1-q.
\end{equation*} \end{proposition}
This proposition entails that $G_{\mu_{q}(x)}(x,b_k)$ converges to $F_{\mu_{q}(x)}(x) = 1-q$ as $x \to \infty$ if the truncation set $T$ is bounded from above, and that $G_{\mu_{q}(x)}(x,a_1)$ converges to $F_{\mu_{q}(x)}(x) = 1-q$ as $x\to-\infty$ if $T$ is bounded from below. We continue now with the proof of Theorem \ref{th_core}.
\begin{proof}[Proof of Theorem \ref{th_core}]
By definition of $\mu_{q_1}(x)$, we have $\Phi^{-1}(F_{\mu_{q_1}(x)}(x)) = \Phi^{-1}(1-q_1)$. Because Proposition \ref{prop_core1} holds for any $x$ and $\mu$, it follows that for any $c \in (0,\infty)$, we have
\begin{equation*}
\Phi^{-1}(F_{\mu_{q_1}(x) + c}(x)) < \Phi^{-1}(1-q_1) - \rho c/\sigma.
\end{equation*}
We plug $c=\sigma(\Phi^{-1}(q_2) - \Phi^{-1}(q_1))/\rho$ into the inequality, apply the strictly increasing function $\Phi$ to both sides and use the symmetry $\Phi^{-1}(q) = -\Phi^{-1}(1-q)$ to obtain
\begin{equation*}
F_{\mu_{q_1}(x) + \sigma(\Phi^{-1}(q_2) - \Phi^{-1}(q_1))/\rho}(x) < 1-q_2.
\end{equation*}
Because $F_\mu(x)$ is strictly decreasing in $\mu$ by Lemma \ref{le_cdf} and $\mu_{q_2}(x)$ satisfies the equation $F_{\mu_{q_2}(x)}(x) = 1-q_2$ , the previous inequality implies that
\begin{equation*}
\mu_{q_2}(x) < \mu_{q_1}(x) + \frac{\sigma}{\rho} \left(\Phi^{-1}(q_2) - \Phi^{-1}(q_1) \right).
\end{equation*}
Subtracting $\mu_{q_1}(x)$ on both sides gives the inequality of the theorem. It remains to show that this upper bound is tight if the truncation set $T$ is bounded. We only consider the case $\sup T = b_k < \infty$ here, because the case $\inf T = a_1 > -\infty$ can be treated by similar arguments, mutatis mutandis. In view of the definition of $G_\mu(x,v)$ in \eqref{cdf_normal}, Proposition \ref{prop_core2} and the symmetry $\Phi^{-1}(q) = -\Phi^{-1}(1-q)$ imply that
\begin{equation*}
\lim_{x \to \infty} \frac{x - (\rho^2 \mu_{q_1}(x) + (1-\rho^2)b_k)}{\sigma\rho} = -\Phi^{-1}(q_1)
\end{equation*}
and
\begin{equation*}
\lim_{x \to \infty} \frac{x - (\rho^2 \mu_{q_2}(x) + (1-\rho^2)b_k)}{\sigma\rho} = -\Phi^{-1}(q_2).
\end{equation*}
Subtracting the second limit from the first and multiplying by $\sigma/\rho$ gives
\begin{equation*}
\lim_{x\to\infty} \mu_{q_2}(x) - \mu_{q_1}(x) = \frac{\sigma}{\rho} (\Phi^{-1}(q_2)-\Phi^{-1}(q_1)).
\end{equation*}
Hence the upper bound is tight. \end{proof}
\subsection{Proof of Proposition \ref{prop_core1}} \label{sec_prop_core1} The proof of Proposition \ref{prop_core1} is split into a sequence of lemmas, each of which is proven directly here. \begin{lemma} \label{le_dx}
For all $x \in \mathbb R$ and all $\mu \in \mathbb R$, we have
\begin{equation} \label{ineq_dx}
\frac{\partial \Phi^{-1}(F_\mu(x))}{\partial x} ~ < ~ \frac{1}{\sigma\rho}.
\end{equation} \end{lemma} \begin{proof}
By the inverse function theorem, we have $\partial \Phi^{-1}(q) / \partial q = 1/\phi(\Phi^{-1}(q))$. This equation and the chain rule imply that \begin{equation*}
\frac{\partial \Phi^{-1}(F_\mu(x))}{\partial x} = \frac{f_\mu(x)}{\phi(\Phi^{-1}(F_\mu(x)))}. \end{equation*} It suffices to show that the numerator on the r.h.s. is less than $\phi(\Phi^{-1}(F_\mu(x))) / (\sigma\rho)$.
In view of \eqref{cdf_normal}--\eqref{eq_cdf_cond}, Leibniz's rule implies that the p.d.f. $f_\mu(x)$ is equal to \begin{equation*}
f_\mu(x) ~ = ~ \frac{1}{\sigma\rho} \mathbb E\left[\phi\left( \frac{x-(\rho^2\mu + (1-\rho^2)V_\mu)}{\sigma\rho}\right) \right] ~ = ~ \frac{1}{\sigma\rho} \mathbb E\left[\phi\left(\Phi^{-1}(G_\mu(x,V_\mu))\right)\right]. \end{equation*} Observe now that it is sufficient to show that the function $\phi(\Phi^{-1}(q))$ is strictly concave because, by Jensen's inequality, it follows then that $f_\mu(x)$ is bounded from above by \begin{equation*}
\frac{1}{\sigma\rho} \phi\left(\Phi^{-1}\left(\mathbb E\left[G_\mu(x,V_\mu)\right]\right)\right) ~ = ~ \frac{1}{\sigma\rho}\phi(\Phi^{-1}(F_\mu(x))), \end{equation*} completing the proof.
Elementary calculus shows that \begin{equation*}
\frac{\partial \phi(\Phi^{-1}(q))}{\partial q} = -\Phi^{-1}(q) \qquad\text{and hence}\qquad \frac{ \partial^2 \phi(\Phi^{-1}(q))}{\partial q^2} = \frac{-1}{\phi(\Phi^{-1}(q))}. \end{equation*} Because $\phi(\Phi^{-1}(q))$ is positive for all $q \in (0,1)$, it follows that the second derivative is negative for all $q \in (0,1)$. That is, $\phi(\Phi^{-1}(q))$ is strictly concave, and the proof is complete. \end{proof} Note that the inequality \eqref{ineq_dx} of this lemma resembles inequality \eqref{ineq_dmu} of Proposition \ref{prop_core1}. While inequality \eqref{ineq_dx} is surprisingly easy to prove, inequality \eqref{ineq_dmu} is more difficult. Equation \eqref{eq_cdf_cond} provides intuition for why this is the case: The distribution of the random variable $V_\mu$ does not depend on $x$, but it does depend on $\mu$. Hence, to prove inequality \eqref{ineq_dmu}, we cannot interchange integration and differentiation, and we cannot apply Jensen's inequality as we did in the proof of Lemma \ref{le_dx}. We did not find a direct proof of inequality \eqref{ineq_dmu}. However, in the following we show that inequality \eqref{ineq_dx} in fact implies \eqref{ineq_dmu}.
To show this implication, we need a different representation of $f_\mu(x)$. Elementary calculus and properties of the conditional normal distribution imply that the conditional p.d.f. $f_\mu(x)$ can also be written as \begin{equation} \label{eq_pdf_v2}
f_\mu(x) ~ = ~ \frac{1}{\sigma}\phi\left(\frac{x-\mu}{\sigma}\right)\frac{\sum_{i=1}^k \Phi\left(\frac{b_i-x}{\tau}\right) - \Phi\left(\frac{a_i-x}{\tau}\right) }{\sum_{i=1}^k \Phi\left(\frac{b_i - \mu}{\sqrt{\sigma^2+\tau^2}}\right) - \Phi\left(\frac{a_i - \mu}{\sqrt{\sigma^2+\tau^2}}\right)}. \end{equation}
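As a numerical sanity check (a sketch; the truncation set and parameter values below are illustrative), the density in the preceding display can be implemented directly and verified to integrate to one; the monotonicity of the c.d.f. in $\mu$ (Lemma \ref{le_cdf} below) can be checked the same way:

```python
# Sketch: the conditional density f_mu(x) from the display above, with
# T given as a finite union of intervals (a_i, b_i); we check numerically
# that it integrates to one. Parameter values are illustrative.
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def f(x, mu, sigma, tau, T):
    s = (sigma ** 2 + tau ** 2) ** 0.5
    num = sum(norm.cdf((b - x) / tau) - norm.cdf((a - x) / tau) for a, b in T)
    den = sum(norm.cdf((b - mu) / s) - norm.cdf((a - mu) / s) for a, b in T)
    return norm.pdf(x, loc=mu, scale=sigma) * num / den

T = [(-1.0, 0.5), (1.0, 2.0)]
total, _ = quad(lambda u: f(u, 0.3, 1.0, 0.7, T), -20.0, 20.0)
```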
\begin{lemma} \label{le_dmu_formula}
Let $G_\mu(x, v)$ be defined as in \eqref{cdf_normal}. For all $x \in \mathbb R$ and all $\mu \in \mathbb R$, we have
\begin{equation*}
\frac{\partial \Phi^{-1}(F_\mu(x))}{\partial \mu} = \frac{-f_\mu(x) + \sum_{i=1}^k h(b_i)(F_\mu(x)-G_\mu(x,b_i))-h(a_i)(F_\mu(x)-G_\mu(x,a_i))}{
\phi(\Phi^{-1}(F_\mu(x)))},
\end{equation*}
where
\begin{equation*}
h(v) = \frac{\frac{1}{\sqrt{\sigma^2+\tau^2}}\phi\left(\frac{v - \mu}{\sqrt{\sigma^2+\tau^2}}\right)}{\sum_{i=1}^k \Phi\left(\frac{b_i - \mu}{\sqrt{\sigma^2+\tau^2}}\right) - \Phi\left(\frac{a_i - \mu}{\sqrt{\sigma^2+\tau^2}}\right)}.
\end{equation*} \end{lemma} \begin{proof}
The chain rule implies that
\begin{equation*}
\frac{\partial \Phi^{-1}(F_\mu(x))}{\partial \mu} = \frac{\partial \Phi^{-1}(F_\mu(x))}{\partial F_\mu(x)} \frac{\partial F_\mu(x)}{\partial \mu}.
\end{equation*}
The inverse function theorem implies that the first derivative on the r.h.s. is equal to $1/\phi(\Phi^{-1}(F_\mu(x)))$. Therefore, it remains to show that $\partial F_\mu(x)/\partial \mu$ is equal to the numerator on the r.h.s. of the equation of the lemma. Leibniz's rule implies that $\partial F_\mu(x)/ \partial \mu = \int_{-\infty}^x \partial f_\mu(u) / \partial\mu ~ du$. Therefore, we compute $\partial f_\mu(x) / \partial\mu$ first. Lemma \ref{le_dens_dmu} implies that
\begin{equation*}
\frac{\partial f_\mu(x)}{\partial \mu} = \frac{x-\mu}{\sigma^2}f_\mu(x) + f_\mu(x) \sum_{i=1}^k h(b_i) - h(a_i).
\end{equation*} We use the expression on the r.h.s. to compute $\int_{-\infty}^x \partial f_\mu(u) /\partial \mu d u$.
Lemma \ref{le_int_f1} implies that the integral of the first summand is equal to
\begin{equation*}
-f_\mu(x) - \sum_{i=1}^k h(b_i)G_\mu(x,b_i) - h(a_i)G_\mu(x,a_i).
\end{equation*}
Because, in the second-to-last display,
the second summand on the r.h.s. depends on $x$ only through $f_\mu(x)$, it is easy to see that the integral of this function is equal to
\begin{equation*}
F_\mu(x) \sum_{i=1}^k h(b_i) - h(a_i).
\end{equation*}
The sum of the last two expressions is equal to the numerator on the r.h.s of
the equation of the lemma, which completes the proof. \end{proof}
This lemma implies that proving Proposition \ref{prop_core1} is equivalent to showing that $B_\mu(x)<0$ for \begin{multline} \label{eq_B}
B_\mu(x) = \frac{\rho}{\sigma} \phi(\Phi^{-1}(F_\mu(x))) -f_\mu(x) \\ + \sum_{i=1}^k h(b_i)(F_\mu(x)-G_\mu(x,b_i))-h(a_i)(F_\mu(x)-G_\mu(x,a_i)). \end{multline} Observe that \begin{align*}
0 &= \lim_{|x|\to\infty} f_\mu(x) = \lim_{|x|\to\infty} \phi(\Phi^{-1}(F_\mu(x))) \\
&= \lim_{x\to-\infty} F_\mu(x) = \lim_{x\to\infty} 1-F_\mu(x) \\
&= \lim_{x\to-\infty} G_\mu(x,v) = \lim_{x\to\infty} 1-G_\mu(x,v). \end{align*}
This equation chain implies that $B_\mu(x)$ converges to $0$ as $|x| \to \infty$. Holding $\mu, \sigma^2$ and $\tau^2$ fixed, this is the same as saying that $B_\mu(x)$ converges to $0$ as $F_\mu(x)$ goes to $0$ or $1$. Let the function $F_\mu^{-1}(q)$ be defined by the equation \begin{equation} \label{F_inv}
F_\mu(F_\mu^{-1}(q)) = q. \end{equation} Clearly, $F_\mu^{-1}(q)$ is well-defined for all $q \in (0,1)$ and we have that $F_\mu^{-1}(F_\mu(x)) = x$ for all $x \in \mathbb R$. To prove Proposition \ref{prop_core1}, it is now sufficient to show that $B_\mu(F_\mu^{-1}(q))$ is strictly convex as a function of $q$ for any fixed $\mu, \sigma^2$ and $\tau^2$, because a strictly convex function on $(0,1)$ that converges to $0$ at both endpoints must be negative throughout $(0,1)$ (in view of the second-to-last display). \begin{lemma}
Let $B_\mu(x)$ be defined in \eqref{eq_B} and $F_\mu^{-1}(q)$ in \eqref{F_inv}. Let $\mu, \sigma^2$ and $\tau^2$ be fixed. Then, for all $x \in \mathbb R$,
\begin{equation*}
\frac{\partial^2 B_\mu(F_\mu^{-1}(q))}{\partial q^2}\Bigr\rvert_{q = F_\mu(x)} = -\frac{\rho}{\sigma \phi(\Phi^{-1}(F_\mu(x)))} + \frac{1}{\sigma^2 f_\mu(x)}.
\end{equation*} \end{lemma} \begin{proof}
We start by computing the first derivative of $B_\mu(F_\mu^{-1}(q))$ with respect to $q$. The chain rule implies that
\begin{equation*}
\frac{\partial B_\mu(F_\mu^{-1}(q))}{\partial q} = \frac{\partial B_\mu(F_\mu^{-1}(q))}{\partial F_\mu^{-1}(q)} \frac{\partial F_\mu^{-1}(q)}{\partial q}.
\end{equation*}
The inverse function theorem implies that the second derivative on the r.h.s. is equal to $1/ f_\mu(F_\mu^{-1}(q))$. In view of the definitions of $B_\mu(x)$ and $F_\mu^{-1}(q)$ in \eqref{eq_B} and \eqref{F_inv}, we see that to compute the first derivative on the r.h.s. we need to compute the derivatives of $q$, $\phi(\Phi^{-1}(q))$, $f_\mu(F_\mu^{-1}(q))$ and $G_\mu(F_\mu^{-1}(q),v)$ with respect to $F_\mu^{-1}(q)$. Because $\partial F_\mu^{-1}(q) / \partial q$ is equal to $1/ f_\mu(F_\mu^{-1}(q))$, it follows that the derivative of $q$ with respect to $F_\mu^{-1}(q)$ is equal to $f_\mu(F_\mu^{-1}(q))$. The chain rule implies that the derivative of $\phi(\Phi^{-1}(q))$ with respect to $F_\mu^{-1}(q)$ is equal to $-\Phi^{-1}(q) f_\mu(F_\mu^{-1}(q))$. Lemma \ref{le_dens_dx} implies that the derivative of $f_\mu(F_\mu^{-1}(q))$ with respect to $F_\mu^{-1}(q)$ is equal to
\begin{equation*}
-\frac{F_\mu^{-1}(q)-\mu}{\sigma^2} f_\mu(F_\mu^{-1}(q)) - f_\mu(F_\mu^{-1}(q)) \sum_{i=1}^k l(F_\mu^{-1}(q),b_i) - l(F_\mu^{-1}(q),a_i),
\end{equation*}
where $l(x,v)$ is defined in Lemma \ref{le_dens_dx}. Finally, Lemma \ref{le_dens_G_dx} implies that the derivative of $G_\mu(F_\mu^{-1}(q),v)$ with respect to $F_\mu^{-1}(q)$ is equal to
\begin{equation*}
\frac{f_\mu(F_\mu^{-1}(q)) l(F_\mu^{-1}(q),v)}{h(v)},
\end{equation*}
where $h(v)$ is defined in Lemma \ref{le_dmu_formula} and $l(x,v)$ is defined in Lemma \ref{le_dens_dx}. The previous four derivatives and the definition of $B_\mu(x)$ in \eqref{eq_B} entail, after straightforward simplifications, that
\begin{equation*}
\frac{\partial B_\mu(F_\mu^{-1}(q))}{\partial F_\mu^{-1}(q)} = f_\mu(F_\mu^{-1}(q)) \left( -\frac{\rho}{\sigma} \Phi^{-1}(q) + \frac{F_\mu^{-1}(q)-\mu}{\sigma^2} + \sum_{i=1}^k h(b_i) - h(a_i) \right)
\end{equation*}
and therefore
\begin{equation*}
\frac{\partial B_\mu(F_\mu^{-1}(q))}{\partial q} = -\frac{\rho}{\sigma} \Phi^{-1}(q) + \frac{F_\mu^{-1}(q)-\mu}{\sigma^2} + \sum_{i=1}^k h(b_i) - h(a_i).
\end{equation*}
Now it is easy to see that
\begin{equation*}
\frac{\partial^2 B_\mu(F_\mu^{-1}(q))}{\partial q^2} = -\frac{\rho}{\sigma \phi(\Phi^{-1}(q))} + \frac{1}{\sigma^2 f_\mu(F_\mu^{-1}(q))}.
\end{equation*}
The claim of the lemma follows by evaluating the second derivative at $q=F_\mu(x)$.
\end{proof} To prove Proposition \ref{prop_core1}, it remains to show that \begin{equation*}
-\frac{\rho}{\sigma \phi(\Phi^{-1}(F_\mu(x)))} + \frac{1}{\sigma^2 f_\mu(x)} > 0. \end{equation*} But this is the same as showing \begin{equation*}
\frac{f_\mu(x)}{\phi(\Phi^{-1}(F_\mu(x)))} < \frac{1}{\sigma \rho}. \end{equation*} The l.h.s. is equal to $\partial \Phi^{-1}(F_\mu(x)) / \partial x$, and Lemma \ref{le_dx} shows that this inequality is true. This completes the proof of Proposition \ref{prop_core1}.
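The inequality of Proposition \ref{prop_core1} can be spot-checked numerically (a sketch, not a proof; the parameter values, truncation set and evaluation point are illustrative): the finite-difference slope of $\Phi^{-1}(F_\mu(x))$ in $\mu$ lies below $-\rho/\sigma$.

```python
# Numeric spot-check of Proposition prop_core1: the slope of
# Phi^{-1}(F_mu(x)) in mu lies below -rho/sigma. F_mu is obtained by
# numerical integration of the explicit density; values are illustrative.
from scipy.stats import norm
from scipy.integrate import quad

sigma, tau, T = 1.0, 0.7, [(-1.0, 0.5), (1.0, 2.0)]
rho = tau / (sigma ** 2 + tau ** 2) ** 0.5

def F(x, mu):
    s = (sigma ** 2 + tau ** 2) ** 0.5
    den = sum(norm.cdf((b - mu) / s) - norm.cdf((a - mu) / s) for a, b in T)
    dens = lambda u: norm.pdf(u, loc=mu, scale=sigma) * sum(
        norm.cdf((b - u) / tau) - norm.cdf((a - u) / tau) for a, b in T) / den
    return quad(dens, -20.0, x)[0]

h, x, mu = 1e-4, 0.2, 0.1
slope = (norm.ppf(F(x, mu + h)) - norm.ppf(F(x, mu - h))) / (2 * h)
```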
\subsection{Proof of Proposition \ref{prop_core2}} \label{sec_prop_core2}
Let $\epsilon > 0$. We will only consider the case where $\sup T = b_k < \infty$; the other case follows by similar arguments, mutatis mutandis. Recall the definition of $F_\mu(x)$ in \eqref{eq_cdf_cond}. By the law of total probability, we can write $F_\mu(x)$ as \begin{equation*}
\mathbb P(V_\mu >b_k -\epsilon) \mathbb E\left[G_\mu(x, V_\mu)| V_\mu > b_k-\epsilon \right] + \mathbb P(V_\mu \leq b_k -\epsilon) \mathbb
E\left[G_\mu(x, V_\mu)| V_\mu \leq b_k-\epsilon \right]. \end{equation*} Note that both conditional expectations are bounded by $1$. Because the random variable $V_\mu$ (defined under equation \eqref{eq_cdf_cond}) is a truncated normal with mean $\mu$, variance $\sigma^2+\tau^2$ and truncation set $T$, it follows that $V_\mu$ converges in probability to $b_k$ as $\mu$ goes to $\infty$. Moreover, $\mu_q(x)$ goes to $\infty$ as $x$ goes to $\infty$ (cf. the discussion following Lemma \ref{le_cdf}). This implies that \begin{equation*}
\lim_{x\to\infty} F_{\mu_{q}(x)}(x) = \lim_{x\to\infty} \mathbb
E\left[G_{\mu_{q}(x)}(x, V_{\mu_{q}(x)})| V_{\mu_{q}(x)} > b_k-\epsilon \right]. \end{equation*} Note that $G_\mu(x,v)$ is strictly decreasing in $v$. This means that $F_\mu(x)$ is bounded from below by $G_\mu(x,b_k)$. But this also means that $\lim_{x\to\infty} F_{\mu_{q}(x)}(x)$ is bounded from below by $\lim_{x\to\infty} G_{\mu_{q}(x)}(x,b_k)$. On the other hand, observe that the conditional expectation on the r.h.s. of the preceding display is bounded from above by $G_\mu(x,b_k-\epsilon)$. This means that $\lim_{x\to\infty} F_{\mu_{q}(x)}(x)$ is bounded from above by $\lim_{x\to\infty} G_{\mu_{q}(x)}(x,b_k- \epsilon)$. Since $F_{\mu_{q}(x)}(x)$ is equal to $1-q$ for all $x \in \mathbb R$, it follows that \begin{equation*}
\lim_{x\to\infty} G_{\mu_{q}(x)}(x,b_k) \leq 1-q \leq \lim_{x\to\infty} G_{\mu_{q}(x)}(x,b_k- \epsilon). \end{equation*} Because $\epsilon$ was arbitrary and $G_\mu(x,v)$ is a normal c.d.f. evaluated at an affine function of $v$, and hence uniformly continuous in $v$ (uniformly in $x$ and $\mu$), the claim of the proposition follows.
\subsection{Auxiliary results} \label{sec_aux}
\begin{lemma} \label{le_cdf}
For every $x \in \mathbb R$, $F_\mu(x)$ is continuous and strictly decreasing in $\mu$ and satisfies
\begin{equation*}
\lim_{\mu \to \infty} F_\mu(x) = \lim_{\mu \to -\infty} 1-F_\mu(x) = 0.
\end{equation*} \end{lemma} \begin{proof}
Continuity is obvious. For monotonicity, it is sufficient to show that $f_\mu(x)$ has a monotone likelihood ratio, because \cite{lee2016} already showed that a monotone likelihood ratio implies monotonicity. That is, for $\mu_1 < \mu_2$, we need to show that $f_{\mu_2}(x) / f_{\mu_1}(x)$ is strictly increasing in $x$. In view of the definition of $f_\mu(x)$ in \eqref{eq_pdf_v2}, it is easy to see that $f_{\mu_2}(x) / f_{\mu_1}(x)$ can be written as $c \exp((\mu_2-\mu_1)x/\sigma^2)$, where $c$ is a positive constant that does not depend on $x$. Because $\mu_1 < \mu_2$ and the exponential function is strictly increasing, it follows that $f_\mu(x)$ has a monotone likelihood ratio. Finally, we show that $\lim_{\mu \to \infty} F_\mu(x) = 0$. The other part of this equation follows by similar arguments. Let $M < b_k$. Recall the definition of $F_\mu(x)$ in \eqref{eq_cdf_cond}. By the law of total probability, we can write $F_\mu(x)$ as
\begin{equation*}
\mathbb P(V_\mu > M) \mathbb E\left[G_\mu(x, V_\mu)| V_\mu > M \right]
+ \mathbb P(V_\mu \leq M) \mathbb E\left[G_\mu(x, V_\mu)| V_\mu \leq M \right].
\end{equation*}
Note that both conditional expectations are bounded by $1$. Because the random variable $V_\mu$ (defined under equation \eqref{eq_cdf_cond}) is a truncated normal with mean $\mu$, variance $\sigma^2+\tau^2$ and truncation set $T$, it follows that $\lim_{\mu\to\infty} \mathbb P(V_\mu > M) = 1- \lim_{\mu\to\infty} \mathbb P(V_\mu \leq M) = 1$. This implies that
\begin{equation*}
\lim_{\mu\to\infty} F_\mu(x) = \lim_{\mu\to\infty} \mathbb
E\left[G_\mu(x, V_\mu)| V_\mu > M \right].
\end{equation*}
Because $G_\mu(x,v)$ is strictly decreasing in $v$, it follows that the conditional expectation on the r.h.s. is bounded from above by $G_\mu(x, M)$. But this means that $\lim_{\mu\to\infty} F_\mu(x)$ is bounded by $\lim_{\mu\to\infty} G_\mu(x, M)$. Because the latter limit is equal to $0$, the same follows for the former limit. \end{proof} This lemma ensures that the function $\mu_q(x)$ is well-defined, continuous, strictly increasing in $x$ and $q$, and that $\lim_{x\to\infty} \mu_q(x) = \lim_{x\to-\infty} -\mu_q(x)=\infty$ (see also Lemma A.3 in \cite{kivaranovic2020}).
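The likelihood-ratio factorization used in this proof can be verified numerically (a sketch; the truncation set and parameter values are illustrative): dividing $f_{\mu_2}(x)/f_{\mu_1}(x)$ by $\exp((\mu_2-\mu_1)x/\sigma^2)$ leaves a quantity that is constant in $x$.

```python
# Numeric check of the likelihood ratio in the proof above:
# f_{mu2}(x) / f_{mu1}(x) = c * exp((mu2 - mu1) x / sigma^2) with c free
# of x, so dividing out the exponential yields a constant.
import numpy as np
from scipy.stats import norm

def f(x, mu, sigma, tau, T):
    s = (sigma ** 2 + tau ** 2) ** 0.5
    num = sum(norm.cdf((b - x) / tau) - norm.cdf((a - x) / tau) for a, b in T)
    den = sum(norm.cdf((b - mu) / s) - norm.cdf((a - mu) / s) for a, b in T)
    return norm.pdf(x, loc=mu, scale=sigma) * num / den

sigma, tau, T = 1.3, 0.7, [(-1.0, 0.5), (1.0, 2.0)]
mu1, mu2 = -0.4, 0.9
xs = np.array([-0.5, 0.0, 0.8, 1.5])
c = f(xs, mu2, sigma, tau, T) / f(xs, mu1, sigma, tau, T) * np.exp(-(mu2 - mu1) * xs / sigma ** 2)
```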
\begin{lemma} \label{le_dens_dmu}
Let the function $h(v)$ be defined as in Lemma \ref{le_dmu_formula}. For all $x \in \mathbb R$ and all $\mu \in \mathbb R$, we have
\begin{equation*}
\frac{\partial f_\mu(x)}{\partial \mu} = \frac{x-\mu}{\sigma^2}f_\mu(x) + f_\mu(x) \sum_{i=1}^k \left( h(b_i) - h(a_i) \right).
\end{equation*} \end{lemma} \begin{proof}
In view of the definition of $f_\mu(x)$ in \eqref{eq_pdf_v2}, the chain rule and product rule imply that the derivative of $f_\mu(x)$ with respect to $\mu$ is equal to
\begin{gather*} \frac{x-\mu}{\sigma^2}\frac{1}{\sigma}\phi\left(\frac{x-\mu}{\sigma}\right)\frac {\sum_{i=1}^k \Phi\left(\frac{b_i-x}{\tau}\right) - \Phi\left(\frac{a_i-x}{\tau}\right) }{\sum_{i=1}^k \Phi\left(\frac{b_i - \mu}{\sqrt{\sigma^2+\tau^2}}\right) - \Phi\left(\frac{a_i - \mu}{\sqrt{\sigma^2+\tau^2}}\right)} \\
+ \frac{1}{\sigma}\phi\left(\frac{x-\mu}{\sigma}\right)\frac{\left(\sum_{i=1}^k \Phi\left(\frac{b_i-x}{\tau}\right) - \Phi\left(\frac{a_i-x}{\tau}\right)\right) \left( \frac{1}{\sqrt{\sigma^2+\tau^2}} \sum_{i=1}^k \phi\left(\frac{b_i - \mu}{\sqrt{\sigma^2+\tau^2}}\right) - \phi\left(\frac{a_i - \mu}{\sqrt{\sigma^2+\tau^2}}\right) \right)}{ \left(\sum_{i=1}^k \Phi\left(\frac{b_i - \mu}{\sqrt{\sigma^2+\tau^2}}\right) - \Phi\left(\frac{a_i - \mu}{\sqrt{\sigma^2+\tau^2}}\right)\right)^2}.
\end{gather*}
In view of the definition of $f_\mu(x)$, it is easy to see that the first summand is equal to
\begin{equation*}
\frac{x-\mu}{\sigma^2}f_\mu(x)
\end{equation*}
and, in view of the definitions of $f_\mu(x)$ and $h(v)$, that the second summand is equal to
\begin{equation*}
f_\mu(x) \sum_{i=1}^k \left( h(b_i) - h(a_i) \right).
\end{equation*} \end{proof}
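Lemma \ref{le_dens_dmu} can be spot-checked against a central finite difference. The sketch below assumes $h(v)=\phi\bigl((v-\mu)/\sqrt{\sigma^2+\tau^2}\bigr)/\bigl(\sqrt{\sigma^2+\tau^2}\,D(\mu)\bigr)$, where $D(\mu)$ denotes the normalizing sum in the denominator of \eqref{eq_pdf_v2}; this matches the displayed computation, though the official definition of $h$ is in Lemma \ref{le_dmu_formula}. All numeric values are illustrative.

```python
# Finite-difference check of the formula for d f_mu(x) / d mu.
# Parameters and T are illustrative assumptions; h is taken to be
# h(v) = phi((v - mu)/s) / (s * D(mu)), matching the displayed computation.
import math

def phi(z):   # standard normal density
    return math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)

def Phi(z):   # standard normal cdf
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

sigma, tau = 1.3, 0.7
s = math.sqrt(sigma**2 + tau**2)
T = [(-2.0, -0.5), (1.0, 3.0)]

def D(mu):    # denominator of eq_pdf_v2
    return sum(Phi((b - mu) / s) - Phi((a - mu) / s) for a, b in T)

def f(mu, x):
    num = sum(Phi((b - x) / tau) - Phi((a - x) / tau) for a, b in T)
    return phi((x - mu) / sigma) / sigma * num / D(mu)

def h(v, mu):
    return phi((v - mu) / s) / (s * D(mu))

mu, x, eps = 0.4, 0.9, 1e-6
lhs = (f(mu + eps, x) - f(mu - eps, x)) / (2 * eps)   # numerical derivative
rhs = ((x - mu) / sigma**2) * f(mu, x) \
    + f(mu, x) * sum(h(b, mu) - h(a, mu) for a, b in T)
assert abs(lhs - rhs) < 1e-7
```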
\begin{lemma} \label{le_int_f1}
Let $G_\mu(x, v)$ be defined as in \eqref{cdf_normal} and $h(v)$ as in Lemma \ref{le_dmu_formula}. For all $x \in \mathbb R$ and all $\mu \in \mathbb R$, we have
\begin{equation*}
\int_{-\infty}^x \frac{u-\mu}{\sigma^2} f_\mu(u) du ~ = ~ -f_\mu(x) - \sum_{i=1}^k \left( h(b_i)G_\mu(x,b_i) - h(a_i)G_\mu(x,a_i) \right).
\end{equation*} \end{lemma} \begin{proof}
By definition of $f_\mu(x)$ in \eqref{eq_pdf_v2}, the integral can be written as
\begin{equation*}
\frac{\sum_{i=1}^k \int_{-\infty}^x \frac{u-\mu}{\sigma^3} \phi\left(\frac{u-\mu}{\sigma}\right) \Phi\left(\frac{b_i-u}{\tau}\right) du - \int_{-\infty}^x \frac{u-\mu}{\sigma^3} \phi\left(\frac{u-\mu}{\sigma}\right) \Phi\left(\frac{a_i-u}{\tau}\right) du }{\sum_{i=1}^k \Phi\left(\frac{b_i - \mu}{\sqrt{\sigma^2+\tau^2}}\right) - \Phi\left(\frac{a_i - \mu}{\sqrt{\sigma^2+\tau^2}}\right)}.
\end{equation*}
Note that all integrands in the numerator are of the same form; they differ only in the constants $a_1,b_1,\dots,a_k,b_k$. This means we can apply Equation 10,011.1 of \cite{owen1980} to each integral. This equation implies that the numerator is equal to
\begin{align*}
&\sum_{i=1}^k \frac{-1}{\sqrt{\sigma^2+\tau^2}} \phi\left(\frac{b_i - \mu}{\sqrt{\sigma^2+\tau^2}} \right) \Phi\left(\frac{x - (\rho^2\mu + (1-\rho^2)b_i)}{\sigma\rho} \right) - \frac{1}{\sigma} \phi\left(\frac{x-\mu}{\sigma}\right) \Phi\left(\frac{b_i-x}{\tau}\right) \\
&- \left(\frac{-1}{\sqrt{\sigma^2+\tau^2}} \phi\left(\frac{a_i - \mu}{\sqrt{\sigma^2+\tau^2}} \right) \Phi\left(\frac{x - (\rho^2\mu + (1-\rho^2)a_i)}{\sigma\rho} \right) - \frac{1}{\sigma} \phi\left(\frac{x-\mu}{\sigma}\right) \Phi\left(\frac{a_i-x}{\tau}\right) \right).
\end{align*}
(Also note that this equation can easily be verified by differentiation of the antiderivative.) In view of the definitions of $f_\mu(x)$, $h(v)$ and $G_\mu(x, v)$, we can see that the claim of the lemma is true. \end{proof}
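The integral identity of Lemma \ref{le_int_f1} can also be confirmed numerically. The sketch below approximates the l.h.s. with a composite Simpson rule on a truncated domain and assumes $\rho=\tau/\sqrt{\sigma^2+\tau^2}$, the value under which the factorization in the proof of Lemma \ref{le_dens_G_dx} holds (the excerpt does not restate the definition of $\rho$); parameters are illustrative.

```python
# Numerical check of the integral identity of Lemma le_int_f1.
# Assumption: rho = tau / sqrt(sigma^2 + tau^2).  Parameters are illustrative.
import math

def phi(z):
    return math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)

def Phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

sigma, tau = 1.3, 0.7
s = math.sqrt(sigma**2 + tau**2)
rho = tau / s
T = [(-2.0, -0.5), (1.0, 3.0)]

def D(mu):
    return sum(Phi((b - mu) / s) - Phi((a - mu) / s) for a, b in T)

def f(mu, x):
    num = sum(Phi((b - x) / tau) - Phi((a - x) / tau) for a, b in T)
    return phi((x - mu) / sigma) / sigma * num / D(mu)

def h(v, mu):
    return phi((v - mu) / s) / (s * D(mu))

def G(x, v, mu):
    return Phi((x - (rho**2 * mu + (1 - rho**2) * v)) / (sigma * rho))

mu, x = 0.3, 1.2
# Composite Simpson rule for the integral over (-inf, x], truncated at mu - 12
# (the integrand is negligible further out because of the Gaussian factor).
lo, n = mu - 12.0, 4000
step = (x - lo) / n
integral = 0.0
for i in range(n + 1):
    u = lo + i * step
    w = 1 if i in (0, n) else (4 if i % 2 else 2)
    integral += w * (u - mu) / sigma**2 * f(mu, u)
integral *= step / 3.0

rhs = -f(mu, x) - sum(h(b, mu) * G(x, b, mu) - h(a, mu) * G(x, a, mu)
                      for a, b in T)
assert abs(integral - rhs) < 1e-6
```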
\begin{lemma} \label{le_dens_dx}
For all $x \in \mathbb R$ and all $\mu \in \mathbb R$, we have
\begin{equation*}
\frac{\partial f_\mu(x)}{\partial x} = -\frac{x-\mu}{\sigma^2}f_\mu(x) - f_\mu(x) \sum_{i=1}^k \left( l(x,b_i) - l(x,a_i) \right),
\end{equation*}
where
\begin{equation*}
l(x,v) = \frac{\frac{1}{\tau} \phi\left(\frac{v -x}{\tau}\right)}{\sum_{i=1}^k \Phi\left(\frac{b_i -x}{\tau}\right) - \Phi\left(\frac{a_i -x}{\tau}\right)}.
\end{equation*} \end{lemma} \begin{proof}
In view of the definition of $f_\mu(x)$ in \eqref{eq_pdf_v2}, the chain rule and product rule imply that the derivative of $f_\mu(x)$ with respect to $x$ is equal to
\begin{align*} &-\frac{x-\mu}{\sigma^2}\frac{1}{\sigma}\phi\left(\frac{x-\mu}{\sigma}\right) \frac{\sum_{i=1}^k \Phi\left(\frac{b_i-x}{\tau}\right) - \Phi\left(\frac{a_i-x}{\tau}\right) }{\sum_{i=1}^k \Phi\left(\frac{b_i - \mu}{\sqrt{\sigma^2+\tau^2}}\right) - \Phi\left(\frac{a_i - \mu}{\sqrt{\sigma^2+\tau^2}}\right)} \\
&- \frac{1}{\sigma}\phi\left(\frac{x-\mu}{\sigma}\right)\frac{\frac{1}{\tau} \sum_{i=1}^k \phi\left(\frac{b_i-x}{\tau}\right) - \phi\left(\frac{a_i-x}{\tau}\right)}{ \sum_{i=1}^k \Phi\left(\frac{b_i - \mu}{\sqrt{\sigma^2+\tau^2}}\right) - \Phi\left(\frac{a_i - \mu}{\sqrt{\sigma^2+\tau^2}}\right) }.
\end{align*}
It is easy to see that the first summand is equal to
\begin{equation*}
-\frac{x-\mu}{\sigma^2}f_\mu(x)
\end{equation*}
and, in view of the definitions of $f_\mu(x)$ and $l(x,v)$, that the second summand is equal to
\begin{equation*}
-f_\mu(x) \sum_{i=1}^k \left( l(x, b_i) - l(x,a_i) \right).
\end{equation*} \end{proof}
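A finite-difference check of Lemma \ref{le_dens_dx}, analogous to the one for the $\mu$-derivative; all numeric values and the truncation set are illustrative assumptions.

```python
# Finite-difference check of the formula for d f_mu(x) / d x.
# Parameters and the truncation set T are illustrative assumptions.
import math

def phi(z):   # standard normal density
    return math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)

def Phi(z):   # standard normal cdf
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

sigma, tau = 1.3, 0.7
s = math.sqrt(sigma**2 + tau**2)
T = [(-2.0, -0.5), (1.0, 3.0)]

def f(mu, x):
    num = sum(Phi((b - x) / tau) - Phi((a - x) / tau) for a, b in T)
    den = sum(Phi((b - mu) / s) - Phi((a - mu) / s) for a, b in T)
    return phi((x - mu) / sigma) / sigma * num / den

def l(x, v):  # l(x, v) as defined in Lemma le_dens_dx
    num = phi((v - x) / tau) / tau
    return num / sum(Phi((b - x) / tau) - Phi((a - x) / tau) for a, b in T)

mu, x, eps = 0.4, 0.9, 1e-6
lhs = (f(mu, x + eps) - f(mu, x - eps)) / (2 * eps)   # numerical derivative
rhs = -((x - mu) / sigma**2) * f(mu, x) \
    - f(mu, x) * sum(l(x, b) - l(x, a) for a, b in T)
assert abs(lhs - rhs) < 1e-7
```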
\begin{lemma} \label{le_dens_G_dx}
Let $h(v)$ be defined as in Lemma \ref{le_dmu_formula} and $l(x,v)$ as in Lemma \ref{le_dens_dx}. For all $x \in \mathbb R$ and all $\mu \in \mathbb R$, we have
\begin{equation*}
\frac{\partial G_\mu(x,v)}{\partial x} = \frac{f_\mu(x) l(x,v)}{h(v)}.
\end{equation*} \end{lemma}
\begin{proof}
By definition of $G_\mu(x,v)$ in \eqref{cdf_normal}, we have
\begin{equation*}
\frac{\partial G_\mu(x,v)}{\partial x} = \frac{1}{\sigma\rho} \phi\left(\frac{x-(\rho^2\mu+ (1-\rho^2)v)}{\sigma\rho}\right).
\end{equation*}
We claim that
\begin{equation*}
\frac{1}{\sigma\rho} \phi\left(\frac{x-(\rho^2\mu+ (1-\rho^2)v)}{\sigma\rho}\right) = \frac{\frac{1}{\sigma}\phi\left(\frac{x-\mu}{\sigma}\right)\frac{1}{\tau}\phi \left(\frac{v-x}{\tau}\right)}{\frac{1}{\sqrt{\sigma^2+\tau^2}} \phi\left(\frac{v-\mu}{\sqrt{\sigma^2+\tau^2}}\right)}.
\end{equation*}
To see this note that we can write the l.h.s. as $c_1 \exp(-d_1/2)$ and r.h.s. as $c_2 \exp(-d_2/2)$, where
\begin{equation*}
c_1 = \frac{1}{\sigma\rho \sqrt{2\pi}} = \frac{\sqrt{\sigma^2+\tau^2}}{\sigma \tau \sqrt{2\pi}} = c_2
\end{equation*}
and
\begin{align*}
d_1 &= \left( \frac{x-(\rho^2\mu+ (1-\rho^2)v)}{\sigma\rho} \right)^2 \\
&= \frac{x^2 - 2\rho^2\mu x - 2(1-\rho^2)vx + \rho^4\mu^2 + (1-\rho^2)^2 v^2 + 2\rho^2(1-\rho^2)\mu v}{\sigma^2\rho^2} \\
&= \frac{\rho^2(x-\mu)^2 + (1-\rho^2)(v-x)^2 - \rho^2(1-\rho^2)(v-\mu)^2 }{\sigma^2\rho^2} \\
&= \left(\frac{x-\mu}{\sigma}\right)^2 + \left( \frac{v-x}{\tau} \right)^2 - \left(\frac{v-\mu}{\sqrt{\sigma^2+\tau^2}}\right)^2 = d_2.
\end{align*}
Hence the claimed equation is true. Observe that the r.h.s. of that equation can be written as
\begin{equation*}
\Scale[1.05]{
\frac{1}{\sigma}\phi\left(\frac{x-\mu}{\sigma}\right) \frac{\sum_{i=1}^k \Phi\left(\frac{b_i -x}{\tau}\right) - \Phi\left(\frac{a_i -x}{\tau}\right)}{\sum_{i=1}^k \Phi\left(\frac{b_i - \mu}{\sqrt{\sigma^2+\tau^2}}\right) - \Phi\left(\frac{a_i - \mu}{\sqrt{\sigma^2+\tau^2}}\right)} \frac{\frac{1}{\tau}\phi\left(\frac{v-x}{\tau}\right)}{\sum_{i=1}^k \Phi\left(\frac{b_i -x}{\tau}\right) - \Phi\left(\frac{a_i -x}{\tau}\right)} \frac{\sum_{i=1}^k \Phi\left(\frac{b_i - \mu}{\sqrt{\sigma^2+\tau^2}}\right) - \Phi\left(\frac{a_i - \mu}{\sqrt{\sigma^2+\tau^2}}\right)}{\frac{1}{\sqrt{\sigma^2+\tau^2}} \phi\left(\frac{v-\mu}{\sqrt{\sigma^2+\tau^2}}\right)}
}.
\end{equation*}
In view of the definitions of $f_\mu(x)$, $h(v)$ and $l(x,v)$, it is easy to see that the previous expression is equal to $f_\mu(x) l(x,v) / h(v)$. Hence the derivative of $G_\mu(x,v)$ with respect to $x$ is of the claimed form. \end{proof}
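The algebraic identities $c_1=c_2$ and $d_1=d_2$ underlying this proof are easy to confirm mechanically. The sketch below does so at random points, assuming $\rho=\tau/\sqrt{\sigma^2+\tau^2}$ (the excerpt does not restate the definition of $\rho$); the parameter values are arbitrary.

```python
# Spot check of the density factorization used in the proof: with
# rho = tau / sqrt(sigma^2 + tau^2) (assumption), the constants c_1, c_2
# and the exponents d_1, d_2 agree.  Parameter values are arbitrary.
import math, random

sigma, tau = 1.3, 0.7
s = math.sqrt(sigma**2 + tau**2)
rho = tau / s

# constants c_1 = 1/(sigma*rho*sqrt(2 pi)) and c_2 = s/(sigma*tau*sqrt(2 pi))
c1 = 1.0 / (sigma * rho * math.sqrt(2 * math.pi))
c2 = s / (sigma * tau * math.sqrt(2 * math.pi))
assert abs(c1 - c2) < 1e-12

random.seed(0)
for _ in range(1000):
    x, v, mu = (random.uniform(-5, 5) for _ in range(3))
    d1 = ((x - (rho**2 * mu + (1 - rho**2) * v)) / (sigma * rho)) ** 2
    d2 = ((x - mu) / sigma) ** 2 + ((v - x) / tau) ** 2 \
       - ((v - mu) / s) ** 2
    assert abs(d1 - d2) < 1e-9
```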
\end{appendices}
\end{document}
\begin{document}
\title[Complete quadrics]{Birational geometry of the space of complete quadrics}
\author[Lozano]{C\'esar Lozano Huerta} \address{University of Illinois at Chicago\\\newline Department of Mathematics, Statistics and Computer Science \\ Chicago, IL.} \email{lozano@math.uic.edu}
\begin{abstract} We study the birational geometry of the moduli space of complete $n$-quadrics $X$. We exhibit generators for Eff$(X)$ and Nef$(X)$, the cone of effective divisors and the cone of nef divisors, respectively. As a corollary, we show that $X$ is a Fano variety. Furthermore, we run the Minimal Model Program on $X$ and find a moduli interpretation for the models induced by the generators of the nef cone. In the case of complete quadric surfaces, we describe all the birational models of $X$ induced by the movable cone and find a moduli interpretation for some of these models. \end{abstract}
\maketitle \section*{Introduction} \noindent In 1864 Chasles \cite{CHAS} introduced the space of complete conics in order to solve a famous enumerative problem. This space parameterizes conics with a marking, and it is defined as follows. Let $C$ and $C^*$ be a smooth conic and its dual conic, respectively. The space of complete conics is the closure of the set of pairs $X=\overline{\{(C,C^*)\}}\subset {\mathbb P}^5\times {\mathbb P}^{5*}$. Soon after, Schubert in his seminal work \cite{Schu} introduced the higher dimensional generalization: the space of complete $n$-quadrics, which parametrizes marked quadrics and is the space we will study.
The space of complete $(n-1)$-quadrics is a compactification of the family of smooth quadric hypersurfaces in ${\mathbb P}^n$. In this paper we study the birational geometry of this classical space, which we denote by $X_n$, using the Minimal Model Program (MMP). In other words, we wish to understand the collection of all morphisms from $X_n$ into a target variety which is normal and projective. All such target varieties can be defined in terms of the geometry of $X_n$. Indeed, let $X_n(D):=\mathrm{Proj}(\oplus_{m\ge 0}H^0(X_n, mD))$ be a \textit{model} of $X_n$ induced by the divisor $D$. For example, the space of complete conics has two such models: the projections to ${\mathbb P}^5$ and ${\mathbb P}^{5*}$. We provide an explicit description of some models in the next dimensional case, which is the space of complete quadric surfaces (Theorem A). For example, we show there are eight distinct birational models.
Remarkably, models obtained by running the MMP on a moduli space often can be interpreted as moduli spaces themselves. A priori, there is no reason for this to be the case. However, Hassett and Keel first exhibited this phenomenon in the context of the Deligne-Mumford compactification of the moduli space of Riemann surfaces $\overline{\mathcal{M}}_g$ \cite{HAS}, \cite{HHD}, \cite{HHF}. In the same vein, Arcara, Bertram, Coskun and Huizenga \cite{ABCH} showed that the same phenomenon holds for the models of $\displaystyle{\mathbf{Hilb}}^n({\mathbb P}^2)$, a compactification of the configuration space of $n$ points on the plane. In this case, they explicitly demonstrate that the models can be interpreted as moduli spaces of Bridgeland stable objects on ${\mathbb P}^2$.
The purpose of the present paper is to show that the remarkable phenomenon first found by Hassett and Keel also applies to the space of complete quadrics and its birational models. In other words, the birational models of $X_n$ are compactifications of the family of smooth quadrics and they can all be interpreted as moduli spaces. Theorem A and Theorem C are our main results about this.
\begin{thm1}\label{THEO1} \textit{The cone of effective divisors of the space of complete quadric surfaces has eight Mori chambers. Furthermore, there is a moduli interpretation for some of the birational models induced by such a chamber decomposition.} \end{thm1}
In studying the models of a given algebraic variety, the first question we may ask is, how many are there? The Mori chamber decomposition of the cone of effective divisors has the information about the number of models a given variety may have. However, it is typically very difficult to compute. In the case of complete quadric surfaces $X_3$, we can exhibit such a decomposition explicitly by analyzing the stable base locus of each effective divisor $D\in \mathrm{Eff}(X_3)$. The relation between the Mori chambers and the stable base locus of a divisor has been studied in detail in \cite{POPA1}, \cite{POPA2}. Hence, Theorem A asserts, in particular, that there are finitely many Mori chambers for the space $X_3$, which implies there is a finite number of models of this space. This is an important finiteness property of a so-called Mori dream space. In general, we can show there is a finite number of Mori chambers by showing that the space of complete $(n-1)$-quadrics $X_n$ is a Fano variety, hence a Mori dream space (Corollary \ref{FANO}) by \cite{BC}.
We can say more about the birational models of $X_3$. Each of them can be interpreted as a moduli space of objects described in terms of classical geometry. For example, a smooth quadric surface $Q\subset {\mathbb P}^3$ contains two rulings $i.e.$, two $1$-parameter families of lines. It turns out that the smooth quadric $Q\subset {\mathbb P}^3$ and the family of lines contained in it determine each other. This implies that we get a rational map from the space of complete $2$-quadrics to the space of (flat) families of lines contained in quadrics. This map is not an isomorphism; it induces a \textit{flip}.
The proof of Theorem A will rely on the relation between $\overline{\mathcal{M}}_{0,0}(\mathbb{G}(1,3),2)$, the Kontsevich moduli space of stable maps, and the space of complete $2$-quadrics. The former space is a two-fold ramified cover of the latter (Lemma \ref{2COVER}). The birational geometry of $\overline{\mathcal{M}}_{0,0}(\mathbb{G}(1,3),2)$ has been analyzed by Chen and Coskun \cite{CC}.
In higher dimensions, we start our study of the birational geometry of the space of complete $(n-1)$-quadrics $X_n$ by exhibiting generators of Eff$(X_n)$ and Nef$(X_n)$, the cone of effective and nef divisors, respectively (Theorem B). Using Theorem A as a guiding example, we describe a moduli structure on $X_n(H_k)$, the models induced by the generators of the nef cone (Theorem C).
Now, let us define the space $X_n$. The following definition is originally a theorem in \cite{VAIN}; we adopt it as a definition because it is better suited to the purposes of the present paper. We present the historically accurate definition of $X_n$ in Section \ref{SEIS}.
Let ${\mathbb P}^N$, where $N=\binom{n+2}{2}-1$, be the space parametrizing quadric hypersurfaces in ${\mathbb P}^n$. We can stratify ${\mathbb P}^N$ by the rank of the quadric hypersurfaces, $$\Phi_1\subset \cdots \subset \Phi_{n-1}\subset \Phi_n\subset {\mathbb P}^N $$ where $\Phi_i$ denotes the locus of quadrics of rank at most $i$. The space of complete quadrics is obtained as a sequence of blowups of ${\mathbb P}^N$ along all the $\Phi_i$'s, for $i\le n-1$.
\begin{defn}[Vainsencher, \cite{VAIN}]\label{1.2} Let ${\mathbb P}^N=X(0)$, and $X(k)=Bl_{\tilde{\Phi}_k}X(k-1)$, where $\tilde{\Phi}_k$ denotes the strict transform of the locus of quadrics of rank at most $k$. The space of complete $(n-1)$-quadrics is defined as $X_n = Bl_{\tilde{\Phi}_{n-1}}X(n-2)$. \end{defn}
It turns out that the exceptional divisors in the previous definition, which we denote by $E_j$ for $1\le j \le n$, generate the effective cone Eff$(X_n)$. Moreover, points along such $E_j$'s parametrize quadrics which are marked over their singular loci by other quadrics. In other words, if $Q\in X_n$ is a generic complete quadric of rank $k$, then $Q=(Q',q)$ where $Q'$ is a quadric hypersurface in ${\mathbb P}^n$ of rank $k$, and $q$ is a smooth quadric over $Sing(Q')\cong {\mathbb P}^{n-k}$. In this case, $Q$ is in the boundary divisor $E_k$. The divisor $E_n$ is the strict transform of $\Phi_n\subset {\mathbb P}^N$; hence, $Q\in E_n$ represents a quadric of rank $n$.
\noindent \textit{Example:} Consider $X_3$, the space of complete $2$-quadrics in ${\mathbb P}^3$. A quadric of rank one, whose marking consists of a double line with two marked points, can be written $Q=(x_0^2,x_1^2,(ax_2+bx_3)^2)\in E_1$. We will study this space in more detail later.
Let us introduce generators for the nef divisors whose mention can already be found in Schubert \cite{Schu}.
\begin{defn}\label{DEFH} Let $H_i\subset X_n$ denote the closure in $X_n$ of the subvariety parametrizing smooth quadric hypersurfaces in ${\mathbb P}^{n}$ which are tangent to a fixed linear subspace of dimension $i-1$. \end{defn}
\begin{thm2} \label{THEO2} Let $X_n$ be the space of complete $(n\!-\!1)$-quadrics in ${\mathbb P}^n$. The cone of effective divisors on $X_n$ is generated by boundary divisors $\displaystyle{\mathrm{Eff}}(X_n)=\langle E_1,\ldots , E_{n}\rangle$. Furthermore, the nef cone is generated by $\displaystyle{\mathrm{Nef}}(X_n)=\langle H_1,\ldots , H_{n}\rangle$. \end{thm2}
This result allows us to see that $X_n$ is a Fano variety. Hence, $X_n$ is a Mori dream space by \cite{BC}. In particular, $N^1(X_n)\otimes \mathbb{Q}=\mathrm{Pic}(X_n)\otimes \mathbb{Q}$.
Let us define the space which carries the desired moduli structure of $X_n(H_k)$, the models induced by the generators of the nef cone.
The second order Chow variety $\displaystyle{\mathbf{Chow}}_2(k-1,X_n)$ parametrizes tangent $(k-1)$-planes to complete $(n-1)$-quadrics. In other words, if $Q\subset {\mathbb P}(V)$ is a smooth quadric hypersurface, then the tangent $(k-1)$-planes to $Q$ are parametrized by the Chow form $CF_Q(k)\subset \mathbb{G}(k-1,n)$, which is a divisor of degree two. Thus, $[CF_Q(k)]$ is an element of the linear system $|\mathcal{O}_{\mathbb{G}}(2)|\subset {\mathbb P}(S^2(\wedge^kV))$. The association $Q\mapsto [CF_Q(k)]$ induces a birational morphism (\cite{BERTRAM}) $$\rho_k:X_n\rightarrow {\mathbb P}(S^2(\wedge^kV))\ .$$ We define the second order Chow variety $\displaystyle{\mathbf{Chow}}_2(k-1,X_n)$ as the image of $\rho_k$.
\begin{thm3} \label{THEO3} Let $X_n$ be the space of complete $(n-1)$-quadrics. The birational model $X_n(H_k)$, induced by any generator of the nef cone $\displaystyle{\mathrm{Nef}}(X_n)$, is isomorphic to the normalization $\displaystyle{\mathbf{Chow}}^{\nu}_2(k-1,X_n)$ of the second order Chow variety. \end{thm3}
The paper is organized as follows. Section \ref{UNO} studies higher dimensional quadrics. It contains the proofs of Theorem B and Theorem C. From Section \ref{DOS} onwards, we focus on the case of surfaces. Section \ref{DOS} studies divisors on the space of complete quadric surfaces $X_3$. Section \ref{TRES} contains the stable base locus decomposition of Eff$(X_3)$. Section \ref{CUATRO} describes some birational models that appear in Theorem A. Section \ref{CINCO} contains the proof of Theorem A. We include a final section containing historical remarks in which we describe how this paper unifies results by J.G. Semple \cite{SEM}, \cite{SEMII} using the MMP. We also state a connection of this work with representation theory and GIT-quotients which we would like to explore in the future. We work over the field of complex numbers throughout.
\section*{Acknowledgments} \noindent I would like to express my gratitude to my advisor Izzet Coskun for his guidance and encouragement over the years. I want to thank Dawei Chen for relevant conversations about this work. Thanks to Kevin Whyte and Artie Prendergast-Smith for useful comments about this project, and Francesco Cavazzani for pointing out the precise reference which relates this work to GIT-quotients. Many thanks to Christopher Gomes for improving the language of this work.
\section{Higher dimensional complete quadrics}\label{UNO}
\noindent In this section, we prove Theorem B and Theorem C. We denote by $H_i$, for $1 \le i \le n$, the closure in $X_n$ of the smooth quadrics tangent to a fixed $(i-1)$-plane $\Lambda\subset {\mathbb P}^n$. We refer the reader to \cite{THROOP} for a comprehensive description of the tangency properties of complete quadrics and linear subspaces.
\begin{proof}[Proof of Theorem B] We make use of the following strategy. Let $\overline{\mathrm{NE}}(X_n)$ be the dual cone of $\mathrm{Nef}(X_n)$; this is the Mori cone of effective curves. If the divisors $H_i$ are basepoint-free, then $\langle H_1,\ldots, H_n\rangle \subset \mbox{Nef}(X_n)$. The opposite containment is equivalent to $\langle H_1,\ldots, H_n\rangle^{\vee} \subset \mbox{Nef}(X_n)^{\vee}\cong \overline{\mathrm{NE}}(X_n)$. We show this latter statement holds by exhibiting that the dual curves to $\langle H_1,\ldots , H_n\rangle$ are effective curves.
The divisors $H_i$ are basepoint-free. In other words, given $\Lambda_i$, a linear subspace of dimension $i-1$, and $Q\in X_n$ such that $H_i(\Lambda_i)$ vanishes on $Q$, then we can find a distinct $\Lambda'_i$ such that $H_i(\Lambda'_i)$ does not vanish on $Q$. Indeed, if $Q$ is smooth or $\mbox{dim }Sing(Q)< \mbox{codim} \Lambda_i$, then it is clear. If $\mbox{dim }Sing(Q)\ge \mbox{codim} \Lambda_i$, then $\Lambda_i$ is tangent to the complete quadric $Q=(Q',q)$ as long as the restriction $\Lambda_i|_{Sing(Q)}$ is tangent to the marking-quadric $q$ \cite{THROOP}. If the marking-quadric $q$ is smooth, then $H_i(\Lambda_i')$ does not vanish on $Q$ if the restriction $\Lambda_i'|_{Sing(Q)}$ is not tangent to $q$. In case the marking-quadric $q$ is singular, we repeat the previous argument for the restriction $\Lambda_i|_{Sing(q)}$. So, inductively, we can find $\Lambda_i'$ such that the complete quadric $Q$ is not tangent to $\Lambda_i'$ and consequently $H_i(\Lambda_i')$ does not vanish on $Q$. Hence, $H_i$ is basepoint-free and $\langle H_1,\ldots ,H_n\rangle \subset \mbox{Nef}(X_n)$.
Let us show the opposite containment. Consider the following flag, $$\mathbf{Fl}_{\circ}=\{pt=F_1\subset F_2\subset \cdots \subset F_{n+1}={\mathbb P}^n \}\ ,$$ where each $F_i$ stands for a linear subspace of dimension $i-1$ contained in $F_{i+1}$. Observe that the most singular complete quadric $Q\in X_n$ can be interpreted as the flag $\mathbf{Fl}_{\circ}$ where the nested marking quadrics all have rank $1$. Hence, by letting the subspace $F_i$ vary inside $F_{i+1}$ such that it contains $F_{i-1}$, we get a curve $\mathbf{Fl}_i\subset X_n$ for each $1\le i\le n$. Observe that $\mathbf{Fl}_j.H_i=\delta_{ij}$. This implies that the curves $\langle \mathbf{Fl}_1,\ldots , \mathbf{Fl}_n\rangle$ span the dual cone to $\langle H_1,\ldots , H_n\rangle$. Since the $\mathbf{Fl}_i$ are effective, the result follows.
Let us now prove the claim about the effective cone. It is clear that $\langle E_1,\ldots E_n \rangle \subset \mbox{Eff}(X_n)$. To show that this is an equality, we consider a general effective divisor $D$ and show that it can be written as a linear combination $D=a_1E_1+\cdots + a_nE_n$, where $a_i\ge 0$ for all $i$. In order to do that, consider the following curves which sweep out each boundary divisor $E_k$, for $1\le k \le n-1$. Let us denote by $B_k$ the $1$-parameter family of complete quadrics $Q=(Q',q)\in E_k$, such that $Q'$ is fixed and the marking quadric $q\subset {\mathbb P}^{n-k}\cong Sing(Q')$ varies in a general pencil of dual quadrics. The following intersection numbers hold, \begin{equation} B_k.E_{k}\le 0 \quad \mbox{and }\quad B_k.E_{k+1}>0\ ,\end{equation}
and zero otherwise. In fact, the number $B_k.E_{k+1}=n-k+1$, as it is the number of times the marking quadric $q$ becomes singular. On the other hand, observe that $B_k\subset E_{k}$, and that the normal bundle $N_{E_k/X_n}\cong \mathcal{O}_{E_k}(-1)$ for $1\le k \le n-1$. Thus, $B_k.E_{k}=c_1(\mathcal{O}_{E_k}(-1)|_{B_k})=-(n-k)$.
Let $D=a_1E_1+\cdots + a_nE_n$ be a general effective divisor. Then, it does not contain any of the curves $B_k$. This means that $B_k.D\ge 0$, and by $(1)$, we have that $(n-k)a_k\le (n-k+1)a_{k+1}$ for $1\le k \le n-1$. This implies that $$a_1\le \tfrac{n}{n-1}a_2\le \ldots \le na_{n}\ .$$ By intersecting $D$ with the pullback to $X_n$ of a general pencil in ${\mathbb P}^{N*}$, we get that $0\le a_1$. \end{proof}
The following corollary tells us that the MMP yields finitely many models when applied to the space $X_n$. \begin{cor}\label{FANO} The space $X_n$ of complete $(n-1)$-quadrics is a Fano variety, hence a Mori dream space. \end{cor} \begin{proof} By definition, one can compute the canonical class of $X_n$ recursively as follows. Recall our notation $X(1)\cong Bl_{\Phi_1}({\mathbb P}^N)$, $X(2)\cong Bl_{\tilde{\Phi}_2}X(1)$, and so on. Then, \begin{equation*}\begin{aligned} K_{X_n}&=K_{X(n-2)}+2E_{n-1},\\[-.2cm] &\vdots \\[-.2cm] K_{X(i)}&=K_{X(i-1)}+(\Gamma_i-1) E_{i},\\[-.2cm] &\vdots \\[-.2cm] K_{X(1)}&=K_{{\mathbb P}^N}+(\Gamma_1-1) E_{1},\\ \end{aligned} \end{equation*} where $N=\binom{n+2}{2}-1$, and $\Gamma_i$ denotes the codimension of the locus $\Phi_i\subset {\mathbb P}^N$. From \cite[~page 38]{DECON}, we have that $E_i=2H_i-H_{i-1}-H_{i+1}$, for $1<i<n$. Then, we can write the canonical class $K_{X_n}$ in terms of the generators of the nef cone, $$K_{X_n}=-2H_1-H_2-\cdots -H_{n-1}-2H_n\ .$$ Hence, $X_n$ is Fano by Theorem B, and a Mori dream space by \cite{BC}. \end{proof}
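The computation of $K_{X_3}$ in Corollary \ref{FANO} can be replayed with exact integer arithmetic. The sketch below encodes divisor classes as coefficient vectors in the basis $(H_1,H_2,H_3)$, uses $E_1=2H_1-H_2$ (equivalent to Lemma \ref{1.7}) together with the cited relation $E_2=2H_2-H_1-H_3$, and the standard codimension $\binom{n+2-i}{2}$ of the rank-$\le i$ symmetric determinantal locus.

```python
# Sanity check of the canonical-class computation for n = 3 (complete quadric
# surfaces), with exact integer arithmetic.  Assumptions: E_1 = 2H_1 - H_2
# (equivalent to Lemma 1.7), E_2 = 2H_2 - H_1 - H_3 (the cited relation), and
# Gamma(i) = C(n+2-i, 2), the codimension of the rank <= i locus of
# (n+1) x (n+1) symmetric matrices.  Only E_1, E_2 enter, since the blowups
# stop at i = n - 1 = 2.
n, N = 3, 9

def Gamma(i):
    """Codimension of the rank <= i symmetric determinantal locus."""
    return (n + 2 - i) * (n + 1 - i) // 2

# Divisor classes as coefficient vectors in the basis (H_1, H_2, H_3):
E1 = [2, -1, 0]       # E_1 = 2H_1 - H_2
E2 = [-1, 2, -1]      # E_2 = 2H_2 - H_1 - H_3
K = [-(N + 1), 0, 0]  # pullback of K_{P^9} = -10 H_1

# K_{X_3} = K_{P^9} + (Gamma(1)-1) E_1 + (Gamma(2)-1) E_2
for i, E in ((1, E1), (2, E2)):
    K = [k + (Gamma(i) - 1) * e for k, e in zip(K, E)]

assert K == [-2, -1, -2]   # K_{X_3} = -2H_1 - H_2 - 2H_3, as claimed
```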
Since the entries of $\wedge^k Q\in {\mathbb P}(S^2(\wedge^kV))$ are the $(k\times k)$-minors of $Q$, it follows that the birational morphism $\rho_k:X_n\rightarrow {\mathbb P}(S^2(\wedge^kV))$ is a bijection over the locus of non-singular quadrics. Indeed, given two non-singular matrices $A$ and $B$, if each of the respective $(k\times k)$-minors of $A$ and $B$ are equal, then $A=\lambda B$ for some non-zero scalar $\lambda$.
\begin{defn}\label{def3} Let $Q\subset {\mathbb P}^n={\mathbb P}(V)$ be a smooth quadric hypersurface. The second order Chow form $CF_Q(k)\in {\mathbb P} H^0(\mathbb{G}(k-1,n),\mathcal{O}(2))\subset {\mathbb P}(S^2(\wedge^kV))$ parametrizes tangent $(k-1)$-planes to $Q$. \end{defn}
\begin{lemma}\label{LEMMA5} Let $Q\subset {\mathbb P}^n$ be a smooth quadric hypersurface. The second order Chow form $CF_Q(k)=\wedge^k Q$ is equal to the $k$-th wedge of $Q$. \end{lemma} \begin{proof}
Let $L\subset {\mathbb P}^n$ be a $(k-1)$-plane, and $q=Q|_{L}$ be the restriction of $Q$ to $L$. If $L$ is not contained in $Q$, then the $(k-1)$-plane $L$ is tangent to $Q$ if and only if $q$ is singular, which is equivalent to $\mbox{det}\ q=0$. Observe that $L\in \mathbb{G}(k-1,n)$ belongs to the zero locus of the Chow form $CF_Q(k)$ if and only if $L$ belongs to the zero locus of the quadric $\wedge^kQ$. Indeed, \begin{equation*} \begin{aligned} L^t (\wedge^kQ) L =&\wedge^k(L^t Q L)\\ =&\wedge^k q \\ =& \mbox{det}( q)\ . \end{aligned} \end{equation*} It follows that $\mbox{det}\ q=0$ if and only if $L$ is in the zero locus of $\wedge^k Q$. Hence, $CF_Q(k)$ and $\wedge^k Q$ define the same divisor on $\mathbb{G}(k-1,n)$. \end{proof}
Lemma \ref{LEMMA5} implies that the image of the morphism $\rho_k:X_n\rightarrow {\mathbb P}(S^2(\wedge^kV))$ carries a moduli interpretation: it parametrizes tangent $(k-1)$-planes to complete quadrics. We define the second order Chow variety $\mathbf{Chow}_2(k-1,X_n)$ as the image $\rho_k(X_n)\subset {\mathbb P}(S^2(\wedge^kV))$.
\begin{thm3} Let $H_k$ be a generator of $\displaystyle{\mathrm{Nef}}(X_n)$, the nef cone of $X_n$. For each $1 \le k\le n$, the model $X_n(H_k)$ is isomorphic to the normalization of the second order Chow variety, $$X_n(H_k)=\mathrm{Proj}\left(\bigoplus_{m\ge 0}H^0(X_n,mH_k)\right)\cong \mathbf{Chow}_2(k-1,X_n)^{\nu}. $$ \end{thm3} \begin{proof} In order to establish that $X_n(H_k)\cong \rho_k(X_n)^{\nu}$, it suffices to show that both $\rho_k$ and the induced map $\phi_{H_k}:X_n\rightarrow X_n(H_k)$ contract the same extremal rays in $\overline{\mathrm{NE}}(X_n)$ (See, \cite{LAZ} for a proof of this fact). By Theorem B, we know that $\phi_{H_k}$ contracts the classes $\mathbf{Fl}_j$ for $j\ne k$, which generate the Mori cone of curves $\overline{\mathrm{NE}}(X_n)$. In order to show that the morphism $\rho_k$ contracts those same curve classes, we use a parametric representation of them. Let us describe such a parametrization.
The following description follows closely \cite{SEMII} and \cite{TYRELL}. We write a complete quadric as $Q=M^t q M$, where the matrix $M=(M_{ij})$ has $1$'s along the diagonal, and the entries $M_{k,k+1}=t_k$ are affine parameters above the diagonal, and zero otherwise. For example, $M$ has the following form in the case $n=3$, $$M=\left( \begin{array}{cccc} 1&t_1&0&0\\ &1&t_2 &0\\ &&1&t_3\\ & & &1 \end{array} \right) .$$ The matrix $q=[1,q_1,q_1q_2,\ldots , q_1\cdots q_n]$ is a diagonal matrix, where $q_j$ are affine parameters. Observe that the matrix $M$, as described above, and $q_r=0$, for $1\le r\le n$, give rise to the complete quadric $$Q=M^tqM=((x_0+t_1x_1)^2,(x_1+t_2x_2)^2,\ldots ,(x_{n-1}+t_nx_n)^2),$$ where the marking has rank $1$ (Section \ref{SEIS} further clarifies this notation).
We obtain a parametrization of the representatives of the curve classes $\mathbf{Fl}_j\in \overline{\mathrm{NE}}(X_n)$ by setting $q_1=\cdots =q_n=0$ and $t_k=0$ for all $k\ne j$ \cite{TYRELL}. Hence, the parameter $t_j$ in the expression of $M$ is an affine parameter of the curve $\mathbf{Fl}_j$.
In order to conclude that $\rho_k$ contracts $\mathbf{Fl}_j$ for $j\ne k$, it suffices to show that if $t_l=0$, for $l\ne j$ ($i.e.$, only $t_j$, the parameter of $\mathbf{Fl}_j$, survives), then the form $\wedge^kQ=\wedge^k M^t(\wedge^k q) \wedge^k M$ is constant. For example, consider $n=3$. From the following matrix, $$\wedge^2Q=\left( \begin{array}{cccccc} 1&t_2&0&t_1t_2&0&0\\ t_2&t_2^2&0 &t_1t_2^2&0&0\\ 0&0&0&0&0&0\\ t_1t_2&t_1t_2^2 &0 &t_1^2t_2^2&0&0\\ 0&0 &0 &0&0&0\\ 0&0 &0 &0&0&0\\ \end{array} \right), $$ it follows that $\rho_2$ contracts $\mathbf{Fl}_1=\{t_2=t_3=0\}$ and $\mathbf{Fl}_3=\{t_1=t_2=0\}$. The fact that $q=[1,0,\ldots,0]$ greatly simplifies the computation of $\wedge^kQ$ in general. We omit the details since no difficulty arises. This completes the proof. \end{proof}
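Reading the proof in the rank-one limit $q\to[1,0,\ldots,0]$, the form $\wedge^2Q$ becomes proportional to $vv^t$, where $v$ is the first row of $\wedge^2M$, $i.e.$, the $2\times2$ minors of the first two rows of $M$; this reading of the limit is our assumption, not a statement from the paper. Under it, the contraction claim for $n=3$ reduces to tracking $v$:

```python
# Coordinate check, for n = 3, that rho_2 contracts Fl_1 and Fl_3 but not
# Fl_2.  Assumption: in the rank-one limit q -> [1,0,0,0], wedge^2 Q is
# proportional to v v^T, where v collects the 2x2 minors of the first two
# rows of M, so it suffices to track v.
from itertools import combinations

def v(t1, t2, t3):
    """Pluecker vector of the first two rows of the matrix M(t1, t2, t3)."""
    M = [[1, t1, 0, 0], [0, 1, t2, 0], [0, 0, 1, t3], [0, 0, 0, 1]]
    r1, r2 = M[0], M[1]
    return tuple(r1[i] * r2[j] - r1[j] * r2[i]
                 for i, j in combinations(range(4), 2))

# Along Fl_1 (t2 = t3 = 0) and Fl_3 (t1 = t2 = 0) the image point is constant:
assert v(0.3, 0, 0) == v(0.9, 0, 0) == (1, 0, 0, 0, 0, 0)
assert v(0, 0, 0.3) == v(0, 0, 0.9) == (1, 0, 0, 0, 0, 0)
# Along Fl_2 (t1 = t3 = 0) it moves, so rho_2 does not contract Fl_2:
assert v(0, 0.3, 0) != v(0, 0.9, 0)
```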
\begin{rmk} Following the historically accurate definition of $X_n$ (see, Section \ref{SEIS}), the morphisms $\rho_k$ are very natural projection maps. A good deal of the rest of the paper is devoted to fully understanding all of the maps $\rho_k$ in the case $n=3$.\end{rmk}
\section{Divisors on the space of complete $2$-quadrics}\label{DOS}
\noindent J.G. Semple in \cite{SEM}, \cite{SEMII} studied in detail $X_3$, the space of complete quadric surfaces. The rest of the paper is devoted to further studying this space by applying the Minimal Model Program. Indeed, we will interpret the spaces studied by Semple as models $X_3(D)=\mathrm{Proj}(\oplus_{m\ge 0}H^0(X_3, mD))$, where $D$ lies in the cone $\mathrm{Nef}(X_3)$. This section contains the preliminaries needed to show more; we aim to describe all the models $X_3(D)$, where $D$ lies in a larger cone; the movable cone $\mathrm{Mov}(X_3)$ (Definition \ref{MOVABLE}). Moreover, we exhibit some of these models as moduli spaces.
Let $\Phi_2$ denote the locus of symmetric $(4\times 4)$-matrices of rank at most two. By definition, the space $X_3\cong Bl_{\tilde{\Phi}_2} X(1)$, where $X(1)$ is a blowup of ${\mathbb P}^9$ along $\Phi_1$, the locus of symmetric matrices of rank at most one. We can also obtain $X_3$ by blowing up ${\mathbb P}^{9*}$, the space of dual quadrics in ${\mathbb P}^3$, in a similar manner. Let us interpret these spaces as models of $X_3$.
The divisor classes $H_i$ in $\mathrm{Pic}(X_3)$, as in Definition \ref{DEFH}, coincide with the classes of the strict transforms of generators of the ideals of the $\Phi_i\subset {\mathbb P}^9$. Indeed, let us denote by $p:X_3 \rightarrow {\mathbb P}^9$ the blowup map; clearly $p^*(\mathcal O_{{\mathbb P}^9}(1))=H_1$. Moreover, let $h_1$ and $h_2$ be generators of the ideals $I(\Phi_1)$ and $I(\Phi_2)$, respectively. Since \begin{equation*}\begin{aligned} p^*([h_1])&=2H_1-E_1,\\ p^*([h_2])& =3H_1-2E_1-E_2, \end{aligned}\end{equation*} in $\mathrm{Pic}(X_3)$, we can compare these classes with those of $H_2$ and $H_3$.
\begin{lemma}\label{1.7} Let $H_2, H_3$ be the divisors as in Definition \ref{DEFH}. Their classes in $\mathrm{Pic}(X_3)$ are \begin{equation}\label{EQ} \begin{aligned} H_2&=2H_1-E_1 , \\ H_3&=3H_1-2E_1-E_2 . \end{aligned}\end{equation} \end{lemma} \begin{proof} Let $G,C_2,L_2\subset X_3$ be the following test curves. The curve $G$ stands for a general pencil. $C_2$ is defined by the product of a fixed plane $P_0$ and a pencil of planes $P_t$ such that $C_2=\{P_0P_t\}$. The curve $L_2$ is defined by fixing two planes whose intersection is the line $l$ and letting one of the two marked points on $l$ vary.
The following intersection numbers determine the classes of $H_i$ for $i=2,3$. \begin{equation*} \begin{aligned}
G.H_1&=1\quad & C_2.H_1&=1\quad & L_2.H_1&=0 \\
G.H_2&=2\quad & C_2.H_2&=0\quad & L_2.H_2&=0 \\
G.H_3&=3\quad & C_2.H_3&=0\quad & L_2.H_3&=1 \\
G.E_1&=0\quad & C_2.E_1&=2\quad & L_2.E_1&=0 \\
G.E_2&=0\quad & C_2.E_2&=-1\quad & L_2.E_2&=-1
\end{aligned} \end{equation*} The normal bundle $N_{E_2/X_3}\cong \mathcal{O}_{E_2}(-1)$. Hence its restriction to the generic line $L_2\subset E_2$ is isomorphic to $\mathcal{O}_{{\mathbb P}^1}(-1)$, and $L_2.E_2=-1$. Similarly $C_2.E_2=-1$. If we write $H_i=aH_1+bE_1+cE_2$ for $i=2,3$ and use the test curves $G,C_2,L_2$ to find the values of $a,b,c$, the result follows. \end{proof}
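Solving for $a,b,c$ from the intersection numbers above is routine linear algebra; a quick symbolic check in Python with sympy (the numbers are exactly those listed in the proof):

```python
import sympy as sp

a, b, c = sp.symbols('a b c')

# Intersection numbers of the test curves with H_1, E_1, E_2, from the proof:
curves = {'G':   (1, 0, 0),     # G.H_1=1,   G.E_1=0,   G.E_2=0
          'C_2': (1, 2, -1),    # C_2.H_1=1, C_2.E_1=2, C_2.E_2=-1
          'L_2': (0, 0, -1)}    # L_2.H_1=0, L_2.E_1=0, L_2.E_2=-1

def pair(curve, cls):
    """Intersection of a curve with the class cls = (a, b, c),
    meaning a*H_1 + b*E_1 + c*E_2."""
    h1, e1, e2 = curves[curve]
    return h1*cls[0] + e1*cls[1] + e2*cls[2]

D = (a, b, c)
# H_2 satisfies G.H_2=2, C_2.H_2=0, L_2.H_2=0:
sol_H2 = sp.solve([pair('G', D) - 2, pair('C_2', D), pair('L_2', D)], [a, b, c])
# H_3 satisfies G.H_3=3, C_2.H_3=0, L_2.H_3=1:
sol_H3 = sp.solve([pair('G', D) - 3, pair('C_2', D), pair('L_2', D) - 1], [a, b, c])

print(sol_H2)  # H_2 = 2H_1 - E_1
print(sol_H3)  # H_3 = 3H_1 - 2E_1 - E_2
```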
The following proposition complements Theorem A; see diagram \ref{DIAGRAM}. We will denote by $H_1$ the pull-back of $\mathcal{O}_{{\mathbb P}^9}(1)$, and $H_2$ the pull-back of a generator of the ideal $I(\Phi_1)$.
\begin{prop}\label{X1} Let $X(1)=Bl_{\Phi_1}{\mathbb P}^9$. Then, $\mathrm{Nef}(X(1))\cong \langle H_1,H_2\rangle$. \end{prop} \begin{proof} $H_1$ is clearly an extremal ray of the nef cone. The divisor $H_2$ is basepoint-free by definition and we have that $H_2.S=0$, where $S$ denotes a pencil $Q_1+tQ_1'$ of quadrics of rank $1$. Since the curve $S$ sweeps out the secant variety $Sec(\Phi_1)=\Phi_2$, the divisor $H_2$ induces a small contraction $\phi_1:X(1)\rightarrow Z$. \end{proof} The canonical divisor $K_{X(1)}=-10H_1+5E_1=-5H_2$. Hence, $K_{X(1)}.S=0$, and $X(1)$ is a flop of a space $Y(1)$ over $Z$ in the following diagram, \begin{equation}\label{X1DIAGRAM} \xymatrix @!=1pc{ & &X_3\ar[dr]^{\pi_2} \ar[dl]_{\pi_1}& & \\ & X(1)\ar[dl] \ar@{.>}[rr]^{\mbox{\tiny{flop}}} \ar[dr]^{\phi_1}& & Y(1) \ar[dl]_{\phi_2}\ar[dr]&\\ {\mathbb P}^{9}&&Z&& {\mathbb P}^{9*}} \end{equation} where $X(1)$ is as above, and $Y(1)=Bl_{\Phi_1}{\mathbb P}^{9*}$. The morphism $\phi_1$ is induced by the sub-linear series of $(2\times 2)$-minors cutting out $\Phi_1\subset {\mathbb P}^9$ scheme-theoretically. Observe that both morphisms $\pi_1$ and $\pi_2$ contract the divisor $E_2$.
The following divisor class, and the model induced by it, was not analyzed in \cite{SEMII}. This constitutes new information about the birational geometry of $X_3$.
\begin{defn}\label{P} Let $P$ denote the closure in $X_3$ of the locus of smooth quadrics $Q$ such that the $2$-plane $\Lambda_Q\subset {\mathbb P}^5$ induced by one of the rulings of $Q$ has a non-empty intersection with a fixed $2$-plane in the Pl\"{u}cker embedding of $\mathbb{G}(1,3)$. \end{defn} \begin{lemma}\label{DIVISORP} The divisor class of $P$ is $$[P]=2(2H_1-H_2+2H_3) \ .$$ \end{lemma}
\begin{proof}The assertion follows from the following intersection numbers $$ P.C_1^*=0, \quad P.R_2=4, \quad P.C_3=0 ,$$ where the curves $C_1^*, R_2,C_3$ are defined as follows. The curve $C_1^*$ is a double plane with a pencil of dual conics on it, $R_2$ denotes the strict transform to $X_3$ of the pencil $Q_2+\lambda Q'_2$, where $Q_2$ and $Q'_2$ denote quadrics of rank 2 in ${\mathbb P}^3$, and the curve class $C_3$ is defined by a cone over a general pencil of conics.
Let us compute the intersection number $P.R_2$. Since $R_2.E_2=2$ and $R_2.E_1=R_2.E_3=0$, the pencil induces a $2$-fold cover of curves $\gamma(\lambda)\rightarrow R_2(\lambda)$, where $\gamma(\lambda)$ represents the curve of $2$-planes induced by the pencil $R_2$. Indeed, for each $\lambda$, the lines contained in the complete quadric $Q(\lambda)\in R_2(\lambda)$ form two curves $C_{\lambda}, C'_{\lambda}$ in the Grassmannian $\mathbb{G}(1,3)$; this is the Fano variety of lines $F_1(Q)$ (or a flat limit of it). Each such curve $C_{\lambda}$ is contained in a unique $2$-plane $\Lambda_{C_{\lambda}}\subset {\mathbb P}^5$. Consequently, $P.R_2=\deg(\gamma)$, where $\gamma$ is viewed as a subvariety of $\mathbb{G}(2,5)$. On the other hand, the class of the surface $S$ that a curve $C_{\lambda}$ sweeps out in the Grassmannian $\mathbb{G}(1,3)$ (as we vary $\lambda$) is $[S]=\sigma_2+\sigma_{1,1}\in A_2(\mathbb{G}(1,3))$. Thus, \begin{equation*} \begin{aligned} P.R_2&=\deg \gamma & \quad\mbox{in }\mathbb{G}(2,5)\\ &=2S.\sigma_1^2 \\ &=2(\sigma_{1,1}+\sigma_2)\sigma^2_1 & \quad \mbox{in }\mathbb{G}(1,3)\\ &=4 \ . \end{aligned} \end{equation*} The numbers $P.C_1^*=0$ and $C_3.P=0$ follow from the fact that all the conics induced by them lie in a fixed plane. The result follows.\end{proof}
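The Schubert-calculus step $2(\sigma_{1,1}+\sigma_2)\sigma_1^2=4$ can be verified mechanically with the Pieri rule for $\mathbb{G}(1,3)$; a minimal sketch in Python (the Pieri rule is standard, and $[S]$ is the class computed in the proof):

```python
from collections import defaultdict

# Schubert classes of G(1,3) are indexed by partitions (p, q) with
# 2 >= p >= q >= 0; by the Pieri rule, multiplying by sigma_1 adds one box.
def mult_sigma1(cls):
    out = defaultdict(int)
    for (p, q), coeff in cls.items():
        if p + 1 <= 2:
            out[(p + 1, q)] += coeff
        if q + 1 <= p:
            out[(p, q + 1)] += coeff
    return dict(out)

# [S] = sigma_2 + sigma_{1,1}, the class of the surface swept out by C_lambda:
S = {(2, 0): 1, (1, 1): 1}
prod = mult_sigma1(mult_sigma1(S))     # S . sigma_1^2
deg = 2 * prod.get((2, 2), 0)          # P.R_2 = 2 * S.sigma_1^2
print(deg)  # 4
```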
\section{Stable Base Locus Decomposition}\label{TRES}
\noindent Two divisors $D_1,D_2$ induce Mori equivalent models $X(D_1), X(D_2)$ precisely when they belong to the same Mori chamber. Thus, we can partition the cone $\mathrm{Eff}(X)$ into Mori chambers by looking at the models $X(D)$. Typically, Mori chambers are very difficult to compute. In order to describe the Mori chamber decomposition of $\mathrm{Eff}(X)$ we use the fact that the Mori chambers can be identified by looking at the stable base loci of the respective divisors. This relation between Mori chambers and the stable base locus decomposition has been studied in \cite{POPA1, POPA2}. In our case, there are finitely many chambers in $\mathrm{Eff}(X_3)$, as the space of complete quadric surfaces is a Mori dream space.
Divisors $D$ for which the map $\phi_{D}:X\rightarrow X(D)$ is an isomorphism in codimension one are of special importance: the models they induce are the small modifications of $X$, arising from flips of $X$. Such divisors are called movable, and they form the so-called movable cone $\mathrm{Mov}(X)$ (Definition \ref{MOVABLE}). We will focus on studying models $X_3(D)$, where $D$ is movable.
In this section, we compute the stable base locus decomposition of $\mathrm{Eff}(X_3)$. In order to do that, we need curve classes and their intersection numbers with divisors. We summarize these intersection numbers in the following table, and define the curve classes immediately after.
$$
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
Curve class & $C.H_1$ & $C.H_2$ & $C.H_3$ & $C.E_1$ & $C.E_2$ & $C.E_3$ & Deformations cover \\ \hline
$G$ & $1$ & $2$ & $3$ & $0$ & $0$ & $4$ & $X_3$ \\[2mm]
$G^*$ & $3$ & $2$ & $1$ & $4$ & $0$ & $0$ & $X_3$ \\[2mm]
$C_1$ & $0$ & $1$ & $2$ & $-1$ & $0$ & $3$ & $E_1$ \\[2mm]
$C_1^*$ & $0$ & $2$ & $1$ & $-2$ & $3$ & $0$ & $E_1$ \\[2mm]
$C_2$ & $1$ & $0$ & $0$ & $2$ & $-1$ & $0$ & $E_2$ \\[2mm]
$C_3$ & $1$ & $2$ & $0$ & $0$ & $3$ & $-2$ & $E_3$ \\[2mm]
$C_{1,2}$ & $0$ & $1$ & $0$ & $-1$ & $2$ & $-1$ & $E_1\cap E_3$ \\[2mm]
$L_2$ & $0$ & $0$ & $1$ & $0$ & $-1$ & $2$ & $E_2$ \\[2mm]
\hline
\end{tabular}
$$
\noindent The curve class $G$ (respectively, $G^*$) stands for the strict transform to $X_3$ of a general pencil in ${\mathbb P}^9$ (respectively, ${\mathbb P}^{9*}$). The class of $C_1$ (respectively, $C_1^*$) is defined by considering a general pencil of conics (respectively, dual conics) on a fixed double plane. The curve class $C_2$ is defined as the product of a fixed plane $P_0$ and a pencil of planes $P_t$ such that the marking is fixed. The class $C_3$ consists of the cone over the pencil of conics in a fixed plane. The curve $C_{1,2}$ consists of a pencil of rank $2$ conics on a fixed double plane; such a pencil of conics consists of a fixed line $l_0$ together with a pencil of lines whose base point lies on $l_0$. Similarly, the curve $L_2$ is defined by fixing two planes whose intersection is the line $l$ and letting one of the two marked points on $l$ vary.
\textbf{Notation.} We denote by $c(H_1,\overline{P})$ the set of linear combinations $aH_1+bP$ with $a\ge 0$ and $b> 0$.
\begin{prop}\label{prop2} The stable base locus decomposition partitions the effective cone of $X_3$ into the following chambers: \begin{itemize} \item[(1)] In the closed cone spanned by non-negative linear combinations of $\langle H_1, H_2, H_3 \rangle$, the stable base locus is empty. \item[(2)] In the domain spanned by positive linear combinations of $\langle H_1,H_3,P\rangle$ along with the set $c(H_1,\overline{P})\cup c(H_3,\overline{P})$, the stable base locus is $E_1\cap E_3$ and consists of double planes marked with a singular conic of rank 2. \item[(3)] In the domain spanned by positive linear combinations of $\langle H_3,E_3,P\rangle $ along with $c(H_3,\overline{E}_3)\cup c(P,\overline{E}_3)$, the stable base locus consists of the divisor $E_3$. \item[(4)] In the domain spanned by positive linear combinations of $\langle H_1, E_1, P\rangle$ along with $c(H_1,\overline{E}_1)\cup c(P,\overline{E}_1)$, the stable base locus consists of the divisor $E_1$. \item[(5)] In the domain spanned by positive linear combinations of $\langle P, E_1, E_3\rangle $ along with $c(E_1,E_3)$, the stable base locus consists of the union $E_1\cup E_3$. \item[(6)] In the domain spanned by positive linear combinations of $\langle H_3, E_2, E_3\rangle $ along with $c(E_2,E_3)$, the stable base locus consists of the union $E_2\cup E_3$. \item[(7)] In the domain spanned by positive linear combinations of $\langle H_1, E_1, E_2\rangle$ along with $c(E_1,E_2)$, the stable base locus consists of the union $E_1\cup E_2$. \item[(8)] In the domain bounded by $H_1,H_2,H_3$ and $E_2$ along with $c(H_1,\overline{E}_2)\cup c(H_3,\overline{E}_2)$, the stable base locus consists of the divisor $E_2$. \end{itemize} \end{prop} \begin{center} \begin{figure}
\caption{Stable base locus decomposition of $\mathrm{Eff}(X_3)$.}
\end{figure} \end{center} \begin{proof} We will make use of the symmetry induced by the map $\xi:X_3 \rightarrow X_3$ defined by sending the quadric $Q$ to $\wedge^3Q$, $$\xi: Q\longmapsto \wedge^3Q \ .$$ Note that $\xi$ maps $E_1$ (respectively, $H_1$) to $E_3$ (respectively, $H_3$) and keeps $E_2$ (respectively, $H_2$) fixed. The stable base locus of the divisor $\xi^*(D)$ is equal to the inverse image under $\xi$ of the stable base locus of $D$. The symmetry given by $\xi$ will simplify our calculations.
By Theorem B, any divisor contained in the closed cone generated by $H_1$, $H_2$ and $H_3$ is basepoint-free, hence its (stable) base locus is empty.
Let $D$ be a general divisor $D=aH_1+bH_2+cH_3$. Consider the curves $C_1$ and $C_3$ as defined above. Then, \begin{equation}\label{3} C_1.D=b+2c, \qquad C_3.D=a+2b \ . \end{equation} Since the curve $C_1$ (respectively, $C_3$) covers $E_1$ (respectively, $E_3$), it follows that $E_1$ (respectively, $E_3$) is in the base locus of any divisor $D$ satisfying $b+2c<0$ (respectively, $a+2b<0$).
On the other hand, $\xi$ maps the plane $b+2c=0$ to the plane $b+2a=0$. Consequently, $E_3$ is in the base locus of any divisor satisfying $b+2a<0$. Similarly, $E_1$ is in the base locus of the linear system $|D|$ if $c+2b<0$. We conclude that $E_1$ is in the base locus of any divisor contained in the region bounded by $E_1,E_2,H_1$ and $E_3$. Similarly, $E_3$ is in the base locus of any divisor contained in the region bounded by $E_3,E_2,H_3$ and $E_1$.
Let the curve classes $C_2$ and $L_2$ be as defined above. We have that \begin{equation}\label{4} C_2.D=a, \qquad L_2.D=c \ . \end{equation} Since both curves $C_2$ and $L_2$ cover the divisor $E_2$, the divisor $E_2$ is in the base locus of any divisor $D$ satisfying $a<0$, as well as of any $D$ satisfying $c<0$. The inequality $a<0$ tells us that the union $E_2\cup E_3$ is in the base locus of any divisor in the region spanned by positive linear combinations of $\langle E_3, H_3, E_2\rangle $ along with the set $c(E_3,\overline{E}_2)\cup c(H_3, \overline{E}_2)$. Similarly, the union $E_2\cup E_1$ is in the base locus of any divisor in the region spanned by positive linear combinations of $\langle E_1,E_2, H_1 \rangle $ along with the set $c(E_1,\overline{E}_2)\cup c(H_1,\overline{E}_2)$. Intersecting these two regions, we see that the union $E_3\cup E_1$ is in the base locus of any divisor in the region spanned by positive linear combinations of $\langle E_1,P, E_3 \rangle $ along with the set $c(E_1,E_3)$.
By the equation (5) above, $E_3$ is in the base locus of any divisor in the region spanned by positive linear combinations of $E_3,H_3$ and $P$, along with $c(P,\overline{E}_3)\cup c(H_3,\overline{E}_3)$. Symmetry implies that $E_1$ is in the base locus in the region spanned by positive linear combinations of $P$, $H_1$ and $E_1$ along with $c(P,\overline{E}_1)\cup c(H_1, \overline{E}_1)$.
Let $C_{1,2}$ be the curve as defined above. We have that $$C_{1,2}.D=b \ . $$ Since the deformations of $C_{1,2}$ cover the intersection $E_1\cap E_3$, this locus is in the base locus of any divisor contained in the region bounded by $H_1$, $H_3$, $P$ and $c(H_1,\overline{P})\cup c(H_3,\overline{P})$. Finally, $E_2$ is in the base locus of any divisor $D$ in the region bounded by $H_1, H_2,H_3$ and $E_2$. This shows that each claimed locus is contained in the corresponding stable base locus.
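The pairings (\ref{3}), (\ref{4}) and $C_{1,2}.D=b$ above are read off mechanically from the intersection table of this section; a quick symbolic check in Python with sympy:

```python
import sympy as sp

a, b, c = sp.symbols('a b c')

# Rows of the intersection table (columns C.H_1, C.H_2, C.H_3) for the
# curve classes used in the proof; D = a*H_1 + b*H_2 + c*H_3.
table = {'C_1': (0, 1, 2), 'C_3': (1, 2, 0), 'C_2': (1, 0, 0),
         'L_2': (0, 0, 1), 'C_12': (0, 1, 0)}
dot = {k: h1*a + h2*b + h3*c for k, (h1, h2, h3) in table.items()}

assert dot['C_1'] == b + 2*c   # equation (3)
assert dot['C_3'] == a + 2*b
assert dot['C_2'] == a         # equation (4)
assert dot['L_2'] == c
assert dot['C_12'] == b        # the curve C_{1,2}, covering E_1 cap E_3
print("pairings match the table")
```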
In order to finish the proof, we need to show that the stable base locus does not get any bigger than our description of it above.
(i) The divisors $H_1,H_2,H_3$ are basepoint-free by Theorem B. Hence, for divisors contained in the closed cone generated by $H_1,H_2,H_3$ the base locus is empty.
(ii) Since $H_1$ and $H_3$ are basepoint-free, it follows that for any divisor $D$ in the interior of the cone generated by $H_1, H_3$ and $P$, the base locus of the linear system $|D|$ is contained in that of $|P|$. The same applies to the walls $c(H_1,\overline{P})$ and $c(H_3,\overline{P})$. Observe that the base locus of $|P|$ is the locus in $X_3$ parametrizing those complete quadric surfaces whose rulings induce a double line with two marked points in $\mathbb{G}(1,3)$ (Proposition \ref{CC}). Indeed, for any complete quadric $Q$ inducing either rank $3$ or rank $2$ conics in $\mathbb{G}(1,3)$, there is a unique $2$-plane in ${\mathbb P}^5$ containing them. The indeterminacy of $|P|$ does not get bigger because for any pair of $2$-planes $\Lambda_i$ ($i=1,2$) in ${\mathbb P}^5$, we can find another $2$-plane missing them both. It follows that for a quadric $Q$ defining a $2$-plane $\Lambda\subset {\mathbb P}^5$, there is a $D\in |P|$ such that $D$ does not vanish at $Q$. We conclude that the base locus of $|P|$ consists of the quadrics inducing double lines with two marked points in $\mathbb{G}(1,3)$, $i.e.$, $E_1\cap E_3$.
(iii-iv) Since $P$ can be written as $P=E_3+2H_1=E_1+2H_3$, it follows that for any divisor $D$ contained in the interior of the cone generated by $E_3, H_3$ and $P$, or along the wall $c(H_3,\overline{P})$, the base locus of $D$ must be contained in $E_3$. Similarly, for any divisor $D$ contained in the interior of the cone of $E_1, H_1$ and $P$, or along the wall $c(H_1,\overline{P})$, the base locus of $D$ must be contained in $E_1$.
(v) By the previous argument, for any divisor $D$ in the interior of the cone $\langle E_1,E_3, P\rangle$, its base locus must be contained in $E_1\cup E_3$.
(vi-vii) These follow easily from the arguments above.
(viii) For any divisor $D$ in the interior of the cone generated by $H_1$, $H_3$ and $E_2$, the base locus of $D$ must be contained in $E_2$. Moreover, since we know the nef cone, the base locus of any divisor in this part of the complement of the nef cone must be contained in $E_2$. This completes the proof. \end{proof}
\begin{defn}\label{MOVABLE}
Let $Y$ be a smooth projective variety over $\mathbb{C}$. The movable cone $\overline{\mathrm{Mov}}(Y)\subset N^1(Y)$ is the closure of the cone generated by classes of effective Cartier divisors $L$ such that the base locus of $|L|$ has codimension at least two. We say that a divisor is movable if its numerical class lies in $\overline{\mathrm{Mov}}(Y)$. \end{defn}
\begin{cor} The movable cone $\mathrm{Mov}(X_3)$ of $X_3$ is the closed cone spanned by non-negative linear combinations of $H_1,H_2,H_3$ and $P$. \end{cor}
\noindent
\section{Birational Models of complete quadric surfaces}\label{CUATRO} In this section we describe some birational models of the space $X_3$. We present the results very explicitly at the risk of making proofs longer than optimal. This approach will exhibit the moduli structure on the birational models constructed.
\subsection*{Second Order Chow Variety $\mathbf{Chow}_2(1, X_3)$}
We define the second order Chow variety, $\mathbf{Chow}_2(1, X_3)$, as the parameter space of tangent lines to complete quadric surfaces. More precisely, let $Q\in X_3$ be a smooth complete quadric and let $TQ$ denote the set of tangent lines to it in the Grassmannian $\mathbb{G}=\mathbb{G}(1,3)$. Since the class $[TQ]=2\sigma_1\in A^1(\mathbb{G})$, it follows that the subvariety $TQ$ is defined by an element in the linear system $|\mathcal{O}_{\mathbb{G}}(2)|$. Consequently, we have a map $Q\mapsto TQ\in {\mathbb P} H^0(\mathbb{G},\mathcal{O}(2))\cong {\mathbb P}^{19}$. The subvariety $TQ$ is the so-called quadric line-complex.
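The ${\mathbb P}^{19}$ appearing above comes from a standard dimension count, not spelled out in the text; a quick check in Python:

```python
from math import comb

# Standard count (an illustrative check, not from the paper): quadrics on
# P^5 form a vector space of dimension C(5+2, 2) = 21.  Restricting to
# G(1,3) kills exactly the span of the Pluecker quadric, leaving
# dimension 20, i.e. the projective space P^19.
V = comb(5 + 2, 2)      # dim of degree-2 forms on P^5: 21
on_G = V - 1            # dim H^0(G, O(2)): 20
print(on_G - 1)         # projective dimension: 19
```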
\begin{lemma} Let $X_3^{\circ}\subset X_3$ be the open subset parameterizing smooth quadric surfaces. Then, we have an embedding $$\phi: X_3^{\circ}\rightarrow {\mathbb P}(H^0(\mathbb{G},\mathcal{O}(2)))\cong {\mathbb P}^{19}$$ by mapping a smooth quadric $Q$ to its associated degree $2$ hypersurface $TQ\subset \mathbb{G}(1,3)$. \end{lemma} \begin{proof} Let $Q$ and $Q'$ be two distinct smooth quadrics. Then there exists a point $x\in Q$ which is not in $Q'$. The tangent space $T_xQ$ contains a $1$-parameter family of lines tangent to $Q$, among which only $2$ are also tangent to $Q'$. This says that $TQ\ne TQ'$, as desired. \end{proof}
\begin{prop}\label{propchow1} The map $\phi$ extends to a morphism $\rho_2:X_3\rightarrow \mathbf{Chow}_2(1,X_3)$. \end{prop} \begin{proof} By Serre's criterion \cite{EISEN}, the rational map $\phi$ extends over the complement of a subset of codimension at least $2$ in $X_3$. Furthermore, the space $X_3$ is stratified by $SL_4$-orbits as follows: there is an open dense orbit $X_3^{\circ}$, codimension $1$ and $2$ orbits $E_i^{\circ}$ and $E_i^{\circ}\cap E_j^{\circ}$ ($i\ne j$) respectively, and a unique closed orbit $E_1\cap E_2\cap E_3$. Therefore, the result follows if the map $\phi$ extends to each of the $E_i$'s, $i.e.$, if $\phi(E_i)$ is well-defined for $i=1,2,3$.
Let us show that the map $\phi:X_3^{\circ}\rightarrow {\mathbb P}^{19}$ extends to the divisor $E_1$ by performing an explicit computation. First, we exhibit the extension of the map $\phi$ to the open $SL_4 \mathbb{C}$-orbit $E_1^{\circ}$.
To simplify the computations, let us assume $Q_t\subset {\mathbb P}^3$ is the family $Q_t=\{x^2+t(ay^2+byz+cyw+dz^2+ezw+fw^2)=0\}$. The limit as $t\rightarrow 0$ is a complete quadric $(Q_0,q[y:z:w])\in E_1$.
We compute the Chow form $TQ_t$ directly from the definition. A line $l\subset {\mathbb P}^3$ is the image of the morphism $$[\alpha,\beta]\overset{g}{\longmapsto} [a_1\alpha+b_1\beta:\cdots :a_4\alpha+b_4\beta]\in {\mathbb P}^3\ .$$ The line $l$ is tangent to a quadric $Q$ precisely when the restriction of $Q$ to $l$ consists of a single point (with multiplicity two). Therefore, the vanishing of the discriminant $B^2-4AC$ of the quadratic polynomial in $[\alpha,\beta]$,
\begin{equation*} \begin{aligned} g^*Q_t&= A\alpha^2+B\alpha\beta+C\beta^2\\ &=(a_1^2+t(aa_2^2+ba_2a_3+\cdots +fa^2_4))\alpha^2+(2a_1b_1+t(2aa_2b_2+b(a_2b_3+a_3b_2)+\cdots ))\alpha\beta\\ &\quad +(b_1^2+t(ab_2^2+\cdots))\beta^2 \end{aligned} \end{equation*} gives the desired equation, which in Pl\"{u}cker coordinates reads \begin{equation}\label{chow1} \begin{aligned} TQ_t=\{ap_0^2+bp_0p_1+dp_1^2+cp_0p_2+ep_1p_2+fp_2^2+t(\mbox{extra terms})=0\} \ . \end{aligned} \end{equation} This shows that $\phi:X_3^{\circ}\rightarrow {\mathbb P}^{19}$ can be extended to the whole of $E_1$. Similar computations show that there are extensions to all of $E_2$ and $E_3$. Indeed, since $E_2$ is $SL_4\mathbb{C}$-invariant, we can assume that $Q_t=xy+t(az^2+bzw+cw^2)$ and analyze the normal directions at this point. Following the same argument as above, we find that the associated hypersurface, in Pl\"{u}cker coordinates, is $$\Sigma_1(Q_t)=\{p_0^2+t(\mbox{other terms})=0\}\ .$$
It follows that the (radical of the) limit as $t\rightarrow 0$ coincides with the Schubert cycle $\Sigma_1(L)\subset \mathbb{G}(1,3)$ where $L=\{x=y=0\}$. This gives the natural extension for $\phi_{|E_2}$ as desired. The case for $E_3$ is clear. This completes the proof. \end{proof} Semple's notation for $\phi(X_3)$ is $C_9^{92}[19]$. He showed \cite{SEMII} that $\rho_2(X_3)$ is normal. Hence, the following result follows from Theorem B. \begin{cor}\label{Chow} The morphism $\rho_2:X_3\rightarrow \mathbf{Chow}_2(1,X_3)\subset {\mathbb P}^{19}$ contracts the divisor $E_2$. Furthermore, $X_3(H_2)\cong \mathbf{Chow}_2(1,X_3)$. \end{cor}
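The extension computation in the proof above, namely that the $t$-linear term of the discriminant recovers the Chow form (\ref{chow1}), can be checked symbolically; a sketch in Python with sympy, using the same parametrization of the line:

```python
import sympy as sp

a1, b1, a2, b2, a3, b3, a4, b4 = sp.symbols('a1 b1 a2 b2 a3 b3 a4 b4')
a, b, c, d, e, f, t, al, be = sp.symbols('a b c d e f t alpha beta')

# Restrict Q_t = x^2 + t(ay^2+byz+cyw+dz^2+ezw+fw^2) to the line
# [alpha:beta] -> [a1*al+b1*be : a2*al+b2*be : a3*al+b3*be : a4*al+b4*be].
x, y, z, w = (a1*al + b1*be, a2*al + b2*be, a3*al + b3*be, a4*al + b4*be)
Qt = x**2 + t*(a*y**2 + b*y*z + c*y*w + d*z**2 + e*z*w + f*w**2)

poly = sp.Poly(sp.expand(Qt), al, be)
A = poly.coeff_monomial(al**2)
B = poly.coeff_monomial(al*be)
C = poly.coeff_monomial(be**2)
disc = sp.expand(B**2 - 4*A*C)   # tangency condition for the line

# Pluecker coordinates p0, p1, p2 of the line, as in the proof:
p0, p1, p2 = a1*b2 - a2*b1, a1*b3 - a3*b1, a1*b4 - a4*b1
claimed = -4*(a*p0**2 + b*p0*p1 + d*p1**2 + c*p0*p2 + e*p1*p2 + f*p2**2)

# The discriminant vanishes at t = 0, and its t-linear term is (a multiple
# of) the Chow form displayed in the proof:
assert disc.coeff(t, 0) == 0
print(sp.expand(disc.coeff(t, 1) - claimed) == 0)  # True
```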
\begin{rmk} The degree of $\mathbf{Chow}_2(1,X_3)\subset {\mathbb P}^{19}$ is $92$, which has the following enumerative significance: it is the number of smooth quadric surfaces in ${\mathbb P}^3$ that are tangent to $9$ fixed lines in general position. \end{rmk}
\subsection*{The flip of $X_3$} We now construct the flip $X^{+}_3$ of the space of complete quadric surfaces. We do so by analyzing a $\mathbb{Z}/2$-action on $\mathbf{Hilb}^{2x+1}(\mathbb{G}(1,3))$.
\begin{defn} Let $\mathbf{Hilb}=\mathbf{Hilb}^{2x+1}(\mathbb{G}(1,3))$ denote the Hilbert scheme parametrizing subschemes of $\mathbb{G}(1,3)\subset {\mathbb P}^5$ whose Hilbert polynomial is $P(x)=2x+1$. \end{defn}
\begin{prop} Let $\mathbf{Hilb}$ be as defined above. Then $$\mathbf{Hilb}\cong \mbox{Bl}_{\mathbb{OG}}(\mathbb{G}(2,5))$$ where $\mathbb{O G}\subset \mathbb{G}(2,5)$ denotes the Orthogonal Grassmannian inside the Grassmannian of $2$-planes in ${\mathbb P}^5$. \end{prop} \begin{proof} Observe that a generic smooth curve with Hilbert polynomial $P(x)=2x+1$ in ${\mathbb P}^5$ is a plane conic $C$. Thus, its ideal $I_C\subset k[p_0,...,p_5]$ is generated by a quadric $F$ and three independent linear forms $L_1,L_2,L_3$. Since $C\subset \mathbb{G}=\mathbb{G}(1,3)$, the equation $F$ is the quadric corresponding to the Grassmannian $\mathbb{G} \subset {\mathbb P}^5$ under the Pl\"{u}cker embedding. This description gives rise to a rational map $$f:\mathbb{G}(2,5)\dashrightarrow \mathbf{Hilb} $$ by assigning to the $2$-plane $P$ defined by the independent linear forms $(L_1,L_2,L_3)$ the ideal $\langle L_1,L_2,L_3\rangle+\langle F\rangle \subset k[p_0,...,p_5]$. Observe that the exceptional locus of $f$ consists of planes in ${\mathbb P}^5$ for which there is a containment of ideals $\langle F\rangle \subset \langle L_1,L_2,L_3\rangle$, $i.e.$, planes $P$ which are contained in the quadric $\mathbb{G} \subset {\mathbb P}^5$. We denote the locus parametrizing such planes by $\mathbb{OG}$. This locus is precisely the Orthogonal Grassmannian. Now, we resolve the rational map $f$, $$ \xymatrix @!=2pc{ **[c]Bl_{\mathbb{ O G}}(\mathbb{G}(2,5))&& \\ **[c]\mathbb{G}(2,5)\ar@{<-}[u]^{\pi} \ar@{-->}[rr]_{f} & & \mathbf{Hilb} \ar@{<-}[ull]_{\tilde f}
} $$ The morphism $\tilde f$ is an isomorphism. Indeed, the rational map $f$ is birational as it has an inverse morphism $g:\mathbf{Hilb}\rightarrow \mathbb{G}(2,5)$ defined as follows: let $[C]\in \mathbf{Hilb}$ be a generic element, then the ideal $I(C)=(F)+ (\mbox{plane})\overset{g}{\mapsto}(\mbox{plane})\in \mathbb{G}(2,5)$. It is clear that $f \circ g=Id$, hence $f$ and consequently $\tilde f$ are birational. Furthermore, $\tilde f$ is a bijection. Indeed, since the exceptional divisor $E\subset Bl_{\mathbb{OG}}(\mathbb{G}(2,5))$ is a ${\mathbb P}^5$-bundle over $\mathbb{OG}$, any point of $E$ can be written as $p=(P,C)$, where $P\subset {\mathbb P}^5$ is a $2$-plane and $C\subset P$ is a plane conic. Thus, Zariski's Main Theorem implies that $\tilde f$ is an isomorphism. \end{proof} \begin{cor} Let $\mathbf{Hilb}$ be as above. Then $\mathrm{Pic}(\mathbf{Hilb})\cong \langle H^{+},E_2^{+},E_{1,1}^{+}\rangle$, where $H^{+}$ is the pullback of $\sigma_1\in A^1(\mathbb{G}(2,5))$ and the $E^{+}$'s are the exceptional divisors of the blowup. \end{cor} \begin{proof} The orthogonal Grassmannian $\mathbb{OG}$ has two components, hence the result follows. \end{proof}
If the field $k$ is algebraically closed, then for a given smooth quadric $Q\subset {\mathbb P}^3_k$, the Fano variety of lines $F_1(Q)\subset \mathbb{G}(1,3)$ consists of two smooth conics. By exchanging such conics we get a $\mathbb{Z}/2$-action on $\mathbf{Hilb}^{2x+1}(\mathbb{G}(1,3))$.
\noindent \begin{lemma} There is a nontrivial globally defined $\mathbb{Z}/2$-action on $\mathbf{Hilb}^{2x+1}(\mathbb{G}(1,3))$. \end{lemma} \begin{proof} Let $Q\subset {\mathbb P}^3$ be a smooth quadric hypersurface. The Fano variety of lines $F_1(Q)$ is the zero locus of a section of the following bundle, \begin{equation*} \xymatrix @!=3pc{ **[c] Sym^2(S^*)\ar[d]^{\pi} \\
\mathbb{G}(1,3) \ar@/^2pc/[u]^{Q|L}.} \end{equation*} A smooth conic in ${\mathbb P}^5$ determines uniquely a $2$-plane, thus in the Pl\"{u}cker embedding $\mathbb{G}(1,3)\subset {\mathbb P}^5$, we have that \begin{itemize} \item[(1)] $F_1(Q)$ determines two $2$-planes if $rank(Q)$ is either $2$ or $4$, \item[(2)] $F_1(Q)$ determines a single $2$-plane if $rank(Q)$ is either $1$ or $3$. \end{itemize} Exchanging such planes gives rise to a $\mathbb{Z}/2$-action on $\mathbb{G}(2,5)$, the Grassmannian of $2$-planes in ${\mathbb P}^5$. Clause $(2)$ above, says that such a $\mathbb{Z}/2$-action on $\mathbb{G}(2,5)$ preserves the Orthogonal Grassmannian $\mathbb{OG}$, hence inducing a $\mathbb{Z}/2$-action on the blowup $\mathbf{Hilb}^{2x+1}(\mathbb{G}(1,3))$. \end{proof}
Observe that there is an $SL_4\mathbb{C}$-action on $\mathbf{Hilb}$ induced by the action of $SL_4$ on ${\mathbb P}^3$. This action stratifies $\mathbf{Hilb}$ into $SL_4$-orbits compatible with the exceptional divisors $E_2^+,E_{1,1}^+$. Notice that $\mathbb{Z}/2$ acts trivially (as the identity) on $SL_4$-orbits of codimension $2$. In codimension $1$, we have that $\mathbb{Z}/2$ acts as the identity on the exceptional divisors $E_2^{+}$ and $E_{1,1}^{+}$.
\begin{defn} Considering the $\mathbb{Z}/2$-action defined above, let us denote the quotient $X^{+}_3:=\mathbf{Hilb}/(\mathbb{Z}/2)$. \end{defn}
Let $\overline{\mathcal{M}}_{0,0}(\mathbb{G},2)$ be the Kontsevich moduli space of degree $2$ stable maps into the Grassmannian $\mathbb{G}=\mathbb{G}(1,3)$. \begin{lemma}\label{2COVER} There is a nontrivial globally defined $\mathbb{Z}/2$-action on the Kontsevich moduli space $\overline{\mathcal{M}}_{0,0}(\mathbb{G}(1,3),2)$. \end{lemma} \begin{proof} Observe that we have a generic $2$-$1$ morphism from the Kontsevich moduli space $\overline{\mathcal{M}}=\overline{\mathcal{M}}_{0,0}(\mathbb{G}(1,3),2)=\{(C,C^*)\}$ to the space $X_3$ of complete quadric surfaces defined as follows $$(C,C^*)\mapsto \left(\bigcup_{L\in C}L,C^*\right)$$ where the notation $(S,C^*)$ means a surface $S$, and a curve $C^*$ as its marking. This map is $2$ to $1$ over the open subset parametrizing smooth quadric surfaces as well as over the divisor of complete quadrics of rank $2$. Indeed, if $\bigcup_{L\in C} L$ sweeps out a smooth quadric $S$, then $C$ is a ruling of $S$. The other ruling induces another conic $C'$ which gets mapped to $S$. The situation is similar over the locus of complete quadrics of rank $2$. Notice that this map is $1$-$1$ outside these two regions. We now define the $\mathbb{Z}/2$-action on $\overline{\mathcal{M}}$ by exchanging the two curves $C$ and $C'$. \end{proof}
\begin{cor} The quotient of $\overline{\mathcal{M}}_{0,0}(\mathbb{G},2)$ by the $\mathbb{Z}/2$-action is isomorphic to $X_3$. In particular, the quotient is smooth. \end{cor} \begin{proof} Let $Z$ denote the quotient of $\overline{\mathcal{M}}$ by the $\mathbb{Z}/2$-action defined above. Observe that $X_3$ and $Z$ are birational and there is a bijection between them. Zariski's Main Theorem implies the corollary. \end{proof}
\section{Proof of Theorem A}\label{CINCO}
\noindent The Corollary at the end of Section \ref{TRES} states that the movable cone $\mathrm{Mov}(X_3)$ is the closed cone spanned by non-negative linear combinations of $H_1,H_2,H_3$ and $P$, where the latter divisor is defined in Definition \ref{P}. In this section we describe all the models $X_3(D)$, where $D\in \mathrm{Mov}(X_3)$. We interpret the spaces constructed thus far, $\mathbf{Chow}_2(1,X_3)$ and $X_3^+$, as models $X_3(D)$ for some $D$ in $\mathrm{Mov}(X_3)$, which exhibits the moduli structure on these models.
\begin{prop}\label{CC} There is a morphism from $X^{+}_3=\mathbf{Hilb}/(\mathbb{Z}/2)$ to the $\mathbb{Z}/2$-Chow variety defined by forgetting the scheme structure and considering only the underlying cycle class. Likewise, there is a morphism from the space of complete quadrics $X_3$ to the same $\mathbb{Z}/2$-Chow variety. \end{prop} \begin{proof} Define the following spaces: let $I=\{(C,C^*,\Lambda)\}$ be the incidence correspondence such that $C$ is a connected, arithmetic genus zero, degree two curve in $\mathbb{G}(1,3)\cap \Lambda$ and $C^*$ is the dual curve in $\Lambda^*$. Let $\overline{\mathcal{M}}_{0,0}(\mathbb{G},2)$ be the Kontsevich space of degree two stable maps into the Grassmannian $\mathbb{G}=\mathbb{G}(1,3)$. Let $\mathcal{C}$ denote the Chow variety of conics in ${\mathbb P}^5$ which are contained in $\mathbb{G}(1,3)$. The incidence correspondence $I$ admits a map to both $\overline{\mathcal{M}}_{0,0}(\mathbb{G},2)$ and $\mathbf{Hilb}$ by projecting to the first two factors, and by projecting to the first and third factors, respectively. By projection to the first factor, we get a map to $\mathcal{C}$. Since the morphisms $Kh$ and $Ch$ are small contractions, and $\mathbb{Z}/2$ acts trivially on $SL_4$-orbits of codimension $2$ and higher, the Chow variety $\mathcal{C}$ inherits a $\mathbb{Z}/2$-action. We thus have the following, \begin{equation}\label{ZDIAGRAM}
\xymatrix @!=2pc{ & I \ar@{->}[dl]\ar@{->}[dr] & \\
**[c] \overline{\mathcal{M}}_{0,0}(\mathbb{G},2) \ar@{->}[dr]^{Kh}\ar@{->}[d]_{\mathbb{Z}/2} & &\ar@{->}[dl]_{Ch} \mathbf{Hilb} \ar@{->}[d]^{\mathbb{Z}/2}\\ **[c] X_3 \ar@{->}[dr]_{\pi_1} & \mathcal{C} \ar@{->}[d] & X^{+}_3 \ar@{->}[dl]^{\pi_2}\\ **[c] & \mathcal{C}/(\mathbb{Z}/2) &
}
\end{equation} The existence of the morphisms $\pi_1$ and $\pi_2$ follows from the fact that $X_3$ as well as $X^{+}_3$ are $\mathbb{Z}/2$-quotients. \end{proof}
We can identify models $X_3(D)$ thanks to the following.
\begin{lemma} Let $f:X\rightarrow Y$ be a birational morphism, where $X$ and $Y$ are normal projective algebraic varieties. Let $D$ be an ample divisor on $Y$. Then $$\mathrm{Proj}\left(\bigoplus_{m\ge 0}H^0(X,mf^*D)\right)=Y.$$ \end{lemma}
In the main Theorem of this section, we list only the models $X_3(D)$ for which we have exhibited a moduli interpretation.
\begin{thm1}\label{T1} Let $D$ be an integral effective divisor in the space of complete quadric surfaces $X_3$, and let $$X_3(D)=\mathrm{Proj}\left( \bigoplus_{m\ge 0}H^0(X_3, mD)\right)$$ be the model induced by $D$. Then we have the following models for $X_3(D)$: \begin{itemize} \item[1.] $X_3(D)\cong X_3$ for $D$ contained in the cone spanned by $H_1$, $H_2$ and $H_3$. \item[2.] $X_3(H_1)\cong \mathbf{Hilb}^{(x+1)^2}({\mathbb P}^3)\cong {\mathbb P}^9$ and $f:X_3\rightarrow X_3(H_1)$ contracts the divisors $E_1$ and $E_2$. \item[3.]$ X_3(H_3)\cong \mathbf{Hilb}^{(x+1)^2}({\mathbb P}^{3*})\cong {\mathbb P}^{9*}$ and $g:X_3\rightarrow X_3(H_3)$ contracts the divisors $E_3$ and $E_2$. \item[4.]$ X_3(H_2)\cong \mathbf{Chow}_2(1,X_3)$ and $\phi:X_3\rightarrow X_3(H_2)$ contracts the divisor $E_2$. \item[5.] $X_3(D)\cong \mathcal{C}/(\mathbb{Z}/2)$ where $\mathcal{C}$ is the Chow variety of Proposition \ref{CC} and $D=tH_1+(1-t)H_3$ for $0<t<1$. The map $\phi_1:X_3\rightarrow \mathcal{C}/(\mathbb{Z}/2)$ is a small contraction, whose exceptional locus is $Exc(\phi_1)=E_1\cap E_3$. \item [6.] $X_3(D)\cong X^{+}_3$ for $D$ contained in the domain spanned by $H_1$, $H_3$ and $P$. The map $\phi_2: X^{+}_3\rightarrow \mathcal{C}/(\mathbb{Z}/2)$ is the flip of $\phi_1$, where the flipping locus consists of subschemes supported on a line. \item [7.] $X_3(P)\cong \mathbb{G}(2,5)/(\mathbb{Z}/2)$, where $P$ is defined in Definition \ref{P}. \end{itemize} \end{thm1}
The result above can be best seen from the following diagram: \begin{equation}\label{DIAGRAM} \xymatrix @!=3pc{ {\mathbb P}^9 & \ar[l]_{2} X_3\ar[dr]_{\phi_1}^5\ar[d]_4 \ar[dl]_3 \ar@{.>}[rr]_{\mbox{flip}}^6& &X^+_3 \ar[dr]^{7} \ar[dl]^{\phi_2} & \\
{\mathbb P}^{9*}& \mathbf{Chow}_2 & \mathcal{C}/(\mathbb{Z}/2) & & \mathbb{G}(2,5)/(\mathbb{Z}/2) \ . } \end{equation} where $\phi_1$ and $\phi_2$ are small contractions and the other maps, except for the flip, are all divisorial contractions. Observe that from Corollary 16 and Proposition 7 we know abstractly all the divisorial contractions of $X_3$ or $X^+_3$ induced by $\mathrm{Mov}(X_3)$.
\begin{proof}[Proof of Theorem A] (1), (2), (3) follow from Theorem B and the description of $X_3$ given in section $2$.
(4) This is established in Corollary \ref{Chow}.
(5) Let $C_{1,2}$ be the curve defined before Proposition \ref{prop2}, whose deformations cover the codimension 2 subvariety $E_1\cap E_3$. Observe that for any divisor $D=t H_1+(1-t)H_3$ where $0<t<1$, we have that $C_{1,2}.D=0$, which says that the map $\phi_D$ contracts the codimension 2 locus $E_1\cap E_3$. The locus which is contracted does not get any larger: the map $X_3\rightarrow \mathcal{C}/(\mathbb{Z}/2)$ behaves locally like that of diagram (\ref{DIAGRAM}) and, by the observation made about the $\mathbb{Z}/2$-action on $SL_4$-orbits, its exceptional locus behaves as in \cite{CC}.
(6) By definition, the morphism $\phi_2:X^{+}_3\rightarrow \mathcal{C}/(\mathbb{Z}/2)$ is the flip of $\phi_1:X_3\rightarrow \mathcal{C}/(\mathbb{Z}/2)$ if, for any divisor $D$ in the domain spanned by $H_1$, $H_3$ and $P$, both $-D$ is $\phi_1$-ample and $D$ is $\phi_2$-ample.
It is important to notice that the $\mathbb{Z}/2$-action on $X_3^{+}$ is the identity over the locus which is flipped.
The following analysis takes place in codimension $2$; by the previous observation we can neglect the $\mathbb{Z}/2$-action altogether. We verify that any $D$ as above is $\phi_2$-ample. Note that $\phi_2^{-1}(p)\cong {\mathbb P}^1$. Indeed, for $L\subset \mathbb{G}(1,3)$, where $L$ denotes a line, consider the tangent space $\mathbb{T}_L\mathbb{G}(1,3)\cong {\mathbb P}^3$. Now $\mathbb{T}_L\mathbb{G}(1,3)\cap \mathbb{G}(1,3)$ is a quadric of rank $1$ (a double plane) with a double line $2L$ on it. The planes of the pencil containing $L$ are different points of $\mathbf{Hilb}$; however, they all map to the same point $[2L]$ of the Chow variety $\mathcal{C}$. This means that the fiber of $\phi_2$ over the point $[2L]$ is a pencil of planes, hence ${\mathbb P}^1$. Now let $D=a\overline{H}_1+b\overline{H}_3+cP$ with positive $a,b,c \in \mathbb{Q}$, where $\overline{H}_1$ and $\overline{H}_3$ are defined in $\mathrm{Pic}(\mathbf{Hilb})$ as follows.
$\overline{H}_1=\{(C,\Lambda)|\ C\cap \sigma_2(Pl)\ne \emptyset \}$ for a fixed plane $Pl\subset {\mathbb P}^3$, and $\overline{H}_3=\{(C,\Lambda)|C\cap \sigma_{1,1}(p)\ne \emptyset \}$ for a fixed point $p\in {\mathbb P}^3$. Consequently, for the curve $\gamma=\phi^{-1}_2(2L)$, we have \begin{equation*} \begin{aligned} \gamma.D&=\gamma.(a\overline{H}_1+b\overline{H}_3+cP)\\ &=c(\gamma.P) \\ &=c(\gamma.\sigma_1) \quad \mbox{in }\mathbb{G}(2,5)\\ &> 0 \ . \end{aligned} \end{equation*} Thus, $D$ is $\phi_2$-ample.
Let us now describe the fiber $\phi^{-1}_1(p)$. By Nakai's ampleness criterion, $-D$ will be $\phi_1$-ample if and only if $D.\gamma<0$ for any curve $\gamma$ contained in the fiber of $\phi_1$. The curve $C_{1,2}$ is contained in such a fiber as it is contracted by $\phi_1$. Hence, by Lemma \ref{DIVISORP}, \begin{equation*} \begin{aligned} C_{1,2}.D&=C_{1,2}.(aH_1+bH_3+cP)\\ &=c(C_{1,2}.P) \\ &=-c(C_{1,2}.H_2)\\ &< 0 \ . \end{aligned} \end{equation*} Thus, $-D$ is $\phi_1$-ample.
(7) Follows by construction. This completes the proof of Theorem A. \end{proof}
\section{Historical Remarks and Future Work} \label{SEIS} \noindent In this section, we state the historically accurate definition of the space $X_n$, and comment on the relation of Semple's work \cite{SEM}, \cite{SEMII} to the results in this paper. Moreover, we link our work to GIT and representation theory.
Let us recall the historically accurate definition of the space of complete quadrics. Under this definition, the description of $X_n$ in Definition \ref{1.2} is a Theorem in \cite{VAIN}. Let $Q\subset {\mathbb P}^{n}={\mathbb P}(V)$ be a smooth quadric hypersurface. It defines a symmetric linear map $Q:V\rightarrow V^*$, which induces a symmetric linear map $\wedge ^k Q: \wedge^k V\rightarrow \wedge^k V^*$ for any $1\le k\le n$. Hence, $\wedge^kQ\in S^2(\wedge^kV)$. If $Q$ is smooth, then the association $Q\mapsto \wedge^kQ$ is injective up to multiplication by scalars. Consequently, we get an embedding of $X^{\circ}_n$, the family of smooth quadric hypersurfaces in ${\mathbb P}(V)$, into the space $W={\mathbb P}(S^2 (V))\times{\mathbb P}(S^2 (\wedge^2V))\times\ldots \times {\mathbb P}(S^2(\wedge^{n}V))$ through the map $$\rho: Q\mapsto (Q,\wedge^2Q,\ldots ,\wedge^{n}Q) .$$
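The passage from $Q$ to $\wedge^k Q$ is the classical $k$-th compound matrix, whose entries are the $k\times k$ minors of $Q$. The following small computational sketch (our own illustrative code; the function names are ours, and the Sylvester--Franke identity $\det C_k(A)=\det(A)^{\binom{n-1}{k-1}}$ is used only as a sanity check) makes the construction concrete.

```python
from fractions import Fraction
from itertools import combinations

def det(matrix):
    """Exact determinant by Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in matrix]
    n = len(m)
    sign = 1
    for col in range(n):
        piv = next((r for r in range(col, n) if m[r][col] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != col:
            m[col], m[piv] = m[piv], m[col]
            sign = -sign
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n):
                m[r][c] -= f * m[col][c]
    prod = Fraction(sign)
    for i in range(n):
        prod *= m[i][i]
    return prod

def compound(A, k):
    """k-th compound matrix of A: rows and columns are indexed by the
    k-subsets of {0,...,n-1} in lexicographic order, and each entry is the
    corresponding k x k minor.  For symmetric A this is the matrix of the
    induced quadric wedge^k Q on wedge^k V."""
    idx = list(combinations(range(len(A)), k))
    return [[det([[A[r][c] for c in cols] for r in rows]) for cols in idx]
            for rows in idx]
```

For a symmetric $4\times 4$ matrix $Q$ (a quadric surface), `compound(Q, 2)` is the symmetric $6\times 6$ matrix representing $\wedge^2 Q$ on $\wedge^2 V$.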
\begin{defn} \label{CQUADRICS} The space of complete $(n-1)$-quadrics $X_n$ is defined as the closure $\overline{\rho(X_n^{\circ})}\subset W$. \end{defn}
Consider $X_3\subset {\mathbb P}^9\times {\mathbb P}^{19}\times {\mathbb P}^{9*}={\mathbb P}_1\times {\mathbb P}_2\times {\mathbb P}_3$, the space of complete quadric surfaces. Theorem A claims that the image of the projection map $\rho_i:X_3\rightarrow {\mathbb P}_i$ is isomorphic to $X_3(H_i)$, for $1\le i\le 3$. One can also consider the projection $$\rho_{i,j}:X_3\rightarrow {\mathbb P}_i\times {\mathbb P}_j\ ,$$ for $1\le i<j\le 3$. Semple focused on the projections $\rho_2$ and $\rho_{1,3}$. For example, he denotes the space $\rho_2(X_3)$ by $C^{92}_9[19]$ and carefully studies its singularities. By Proposition \ref{X1}, the projection $\rho_{1,2}(X_3)$ (respectively, $\rho_{2,3}(X_3)$) is a divisorial contraction isomorphic to $Bl_{\Phi_1}{\mathbb P}^9$ (respectively, $Bl_{\Phi_1}{\mathbb P}^{9*}$). The projection $\rho_{1,3}(X_3)$ is of a different kind. It is a small contraction which Semple denotes by $W_9$. He carefully analyzes the singularities of $W_9$ as well as its geometry. As we have seen, these spaces are the models arising from divisors in the nef cone of $X_3$.
Birational models of $X_3$ which are not analyzed in \cite{SEMII} arise when we study the models $X_3(D)$ induced by divisors $D$ which are not nef, but which are contained in the movable cone. One such example is the flip of $X_3$ over $\rho_{1,3}(X_3)$.
On the other hand, the space of complete $(n-1)$-quadrics can be obtained as a GIT quotient. Indeed, De Concini and Procesi in \cite{DECON} constructed the ``wonderful compactification" of a symmetric variety. Viewing $SL_{n+1}(\mathbb{C})\cong SL_{n+1}(\mathbb{C})\times SL_{n+1}(\mathbb{C})/\Delta$ as a symmetric variety, one can consider the wonderful compactification $\overline{G}=\overline{SL_{n+1}\mathbb{C}}$. This is an $H$-variety, where $H$ is the fixed subgroup of the $SL_{n+1}$-involution $A\mapsto \ ^{t}\!A^{-1}$. Thus, we can take the GIT-quotient $\overline{G}^{ss}/\!/H$ for a suitable choice of linearization of $H$. This quotient is a compactification of $SL_{n+1}/H$, which is isomorphic to complete $(n-1)$-quadrics \cite{KAN}. This point of view suggests that we might understand Theorem A as a variation of GIT.
Observe that the models $X_3(D)$, for $D\in \mathrm{Mov}(X_3)$, are $SL_4$-equivariant compactifications of the homogeneous space $SL_4/\overline{SO}(4)$. Then, the results of this paper admit a description in terms of the Luna-Vust Theory on compactifications of spherical varieties \cite{LUNA}. The latter theory is written in representation-theoretic terms, and aims to understand the $G$-equivariant embeddings of homogeneous spaces $G/H\rightarrow X$, where $X$ is a $G$-variety. In future work, we aim to study the relation among the $SL_4$-equivariant compactifications of $SL_4/\overline{SO}(4)$, \`a la Luna-Vust, and the small modifications (Definition \ref{MOVABLE}) of the De Concini-Procesi wonderful compactification of $SL_4/\overline{SO}(4)$.
\nocite{TAD} \nocite{CHE} \nocite{DECON} \nocite{DGP} \nocite{HART} \nocite{LAZ} \nocite{GH} \nocite{JANOS} \nocite{CHJ} \nocite{HARTD} \nocite{JANOS3} \nocite{LAK} \nocite{KLE}
\end{document}
"id": "1401.8050.tex",
"language_detection_score": 0.7491210699081421,
"char_num_after_normalized": null,
"contain_at_least_two_stop_words": null,
"ellipsis_line_ratio": null,
"idx": null,
"lines_start_with_bullet_point_ratio": null,
"mean_length_of_alpha_words": null,
"non_alphabetical_char_ratio": null,
"path": null,
"symbols_to_words_ratio": null,
"uppercase_word_ratio": null,
"type": null,
"book_name": null,
"mimetype": null,
"page_index": null,
"page_path": null,
"page_title": null
} | arXiv/math_arXiv_v0.2.jsonl | null | null |
\begin{document}
\title{Forbidden minors for graphs with no first obstruction to parametric Feynman integration} \begin{abstract}
We give a characterization of 3-connected graphs which are planar and forbid cube, octahedron, and $H$ minors, where $H$ is the graph which is one $\Delta-Y$ away from each of the cube and the octahedron. Next we say a graph is \emph{Feynman 5-split} if no choice of edge ordering gives an obstruction to parametric Feynman integration at the fifth step. The 3-connected Feynman 5-split graphs turn out to be precisely those characterized above. Finally we derive the full list of forbidden minors for Feynman 5-split graphs of any connectivity. \end{abstract}
\tableofcontents
\section{Introduction}
The Robertson-Seymour theorem \cite{RSXX} tells us that any minor closed graph property is defined by a finite set of forbidden minors. The set of forbidden minors itself can vary from the sublime, such as Wagner's theorem, to the ridiculous, like the sixty-eight billion (and counting) forbidden minors for $\Delta-Y$ reducibility \cite{YuYDY}. Middle ground includes cases like \cite{DDpathwidth} where the 3-connected situation is simple while the general case is more intricate.
In this paper we are interested in a minor closed graph property, called \emph{Feynman 5-splitting}, which originates from a physics residue calculation. The property of interest is defined in Section \ref{sec splitting} in its original physics manifestation, and matroidally using Edmonds’ matroid intersection theorem in subsection \ref{matroid splitting}.
Briefly, Feynman graphs in quantum field theory encode integrals which describe particle interactions. Francis Brown \cite{Brbig} developed an algebro-geometric algorithm to integrate certain scalar Feynman integrals one edge of the graph at a time. The denominators after each step of the integration are key to the algorithm and also can be interpreted combinatorially as certain polynomials defined from the original graph. A graph which is not Feynman 5-split is a graph which, for at least one order of its edges, has an obstruction to continuing the algorithm after the fifth step.
Our first result in the classification of the excluded minors for Feynman 5-splitting is a purely graph theoretic structure theorem showing that a simple 3-connected graph $G$ has a certain width property, closely related to pathwidth 3, if and only if $G$ does not have one of a family of five minors. To state this precisely we require some further terminology. Let $C$ and $O$ denote the cube and octahedron graphs respectively, let $H$ denote the graph depicted in Figure \ref{firsth}, and define $\mathcal{F}_0 = \{ K_{3,3}, K_5, C, H, O \}$.
\begin{figure}
\caption{$H$}
\label{firsth}
\end{figure}
For any set of edges $A \subseteq E(G)$ we let $G_A$ denote the subgraph of $G$ induced by $A$. A \emph{separation} of $G$ consists of a pair $(A,B)$ of subsets of $E(G)$ for which $A \cap B = \emptyset$ and $A \cup B = E(G)$. We say that this is a separation \emph{on} $V(G_A) \cap V(G_B)$, we call $| V(G_A) \cap V(G_B) |$ the \emph{order} of the separation, and write $\partial(A)=\partial(B)=V(G_A) \cap V(G_B)$. If the order of $(A,B)$ is at most $k$, we call $(A,B)$ a $k$-separation. Finally, a separation $(A,B)$ is \emph{proper} if both $V(G_A) \setminus V(G_B)$ and $V(G_B) \setminus V(G_A)$ are nonempty.
\begin{definition}[\cite{Brbig} section 1.4\footnote{Brown uses the term ``vertex width'' instead of ``width''. This notion is also known as ``linear-width'' by Thilikos \cite{Twidth}}] Let $G$ be a graph with $n$ edges. The \emph{width} of an ordering $e_1, e_2, \ldots, e_n$ of $E(G)$ is the maximum order of a separation of the form $( \{e_1, \ldots, e_{\ell}\}, \{e_{\ell+1}, \ldots, e_n \} )$. The \emph{width} of $G$ is the minimum width among all edge orders of $G$. \end{definition}
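Since the width is defined as a minimum over all edge orders, on very small graphs it can be computed directly by brute force. The sketch below is our own illustrative code (factorial-time, so usable only on tiny examples) and simply transcribes the definition.

```python
from itertools import permutations

def width(edges):
    """Exact width of a graph, given as a list of (u, v) edge pairs.

    The width of an ordering is the largest number of vertices shared by a
    prefix and the complementary suffix; the width of the graph is the
    minimum of this quantity over all edge orderings."""
    best = len(edges)
    for order in permutations(edges):
        worst = 0
        for i in range(1, len(order)):
            va = {v for e in order[:i] for v in e}   # vertices of the prefix
            vb = {v for e in order[i:] for v in e}   # vertices of the suffix
            worst = max(worst, len(va & vb))
        best = min(best, worst)
    return best
```

For instance, a cycle has width 2, while $K_4$, which is 3-connected with no $\mathcal{F}_0$ minor, has width exactly 3, consistent with Theorem \ref{decomp}.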
The notion of width will be explored in detail in Section \ref{sec matroids}, however let us note here that it is closely related to the well-known concept of path width. We may state the structure theorem as follows.
\begin{theorem} \label{decomp} A simple 3-connected graph has width at most $3$ if and only if it has no minor in $\mathcal{F}_0$. \end{theorem}
The proof of this theorem appears in Section \ref{seccharac}. Related characterizations in the literature include cube-free graphs \cite{Mcube}, octahedron-free graphs \cite{MR3090713} and planar cube-free graphs \cite{PScube}. In the other direction, Thilikos \cite{Twidth} gave the list of forbidden minors for width at most 2.
Now we turn our attention back to the notion of Feynman 5-splitting. As with \cite{DDpathwidth}, our forbidden minor result breaks into the 3-connected simple case and the non-3-connected case.
Feynman 5-split graphs which are at least 3-connected turn out to again be those 3-connected graphs which forbid $\mathcal{F}_0$ (Theorem \ref{heyguesswhatsplits}). The 2-connected case is more intricate. First we must observe that certain small 2-cuts are functionally the same for the purposes of Feynman 5-splitting (Section \ref{wands}). This then gives 34 more functionally distinct forbidden minors to complete our characterization (Lemma \ref{fminor}).
\section{3-connected graphs with no $\mathcal{F}_0$ minor}\label{seccharac}
In this section we prove Theorem \ref{decomp}. This argument will require the notion of a rooted minor. Let $G$ and $R$ be loopless graphs. As usual, we say that $G$ has $R$ as a \emph{minor} if for every $v \in V(R)$ there exists a set of vertices $X_v \subseteq V(G)$ satisfying the following: \begin{itemize} \item $X_v \cap X_w = \emptyset$ whenever $v,w \in V(R)$ and $v \neq w$. \item The subgraph of $G$ induced by $X_v$ is connected for every $v \in V(R)$. \item For all $v,w \in V(R)$ with $v \neq w$, the number of edges in $G$ between $X_v$ and $X_w$ is at least the number of edges between $v$ and $w$ in $R$. \end{itemize}
If $U = \{u_1, \ldots, u_n \} \subseteq V(R)$ and $T \subseteq V(G)$ have the same size and in addition $|T \cap X_{u_i}| = 1$ for every $1 \le i \le n$, then $(G,T)$ has a \emph{rooted} $(R,U)$-minor. If moreover $T = \{t_1, \ldots, t_n\}$ with $t_i \in X_{u_i}$ for every $1 \le i \le n$, then $(G; t_1, \ldots, t_n)$ has a \emph{rooted} $(R; u_1, \ldots, u_n)$ minor. In these cases we refer to the vertices of both $T$ and $U$ as \emph{roots}.
\begin{figure}\label{s_plus}
\end{figure}
We define $S$ to be the first graph depicted in Figure \ref{s_plus} and consider the vertices $\{v_1,v_2,v_3\}$ to be roots.
\begin{lem} \label{fan2} Let $G$ be a simple 3-connected graph with no minor in $\mathcal{F}_0$ and let $(A,B)$ be a proper 3-separation on $X = \{x_1,x_2,x_3\}$. If $X$ is independent in $G_B$ and $(G_B; x_1, x_2, x_3)$ has a rooted $(S; v_1, v_2, v_3)$-minor, then $G_A \setminus x_1$ is a path from $x_2$ to $x_3$. \end{lem}
\begin{proof} If $G_A \setminus x_1$ contains a cycle $D$, then by the 3-connectivity of $G$ we may choose three vertex disjoint paths in $G_A$ starting at $x_1,x_2,x_3$ and terminating in $D$ (the paths starting at $x_2$ and $x_3$ may be trivial). It then follows that $G$ contains $H$ as a minor, which is a contradiction.
Since $G$ is 3-connected, the graph $G_A \setminus x_1$ is connected and so must be a tree. If $u$ is a leaf of this tree which is not one of $x_2,x_3$, then $u$ has degree at most 2 in $G$, a contradiction. It follows that $G_A \setminus x_1$ is a path from $x_2$ to $x_3$ as desired. \end{proof}
\begin{definition} Consider a graph $G$ with a separation $(A,B)$ on $X$. We say that $A$ is \emph{rich} if for every $x \in X$ there exists a cycle
$D \subseteq G$ with $x \not\in V(D)$ so that $|E(D) \setminus A| \le 1$, and similarly for $B$. If $A$ ($B$) is not rich, we call it \emph{poor}. \end{definition}
Note that in the previous lemma we may add the conclusion that $A$ is poor since there does not exist a cycle in $G \setminus x_1$ containing at most one edge of $B$.
\begin{lem} \label{getsrich} Let $G$ be a 3-connected graph with no $\mathcal{F}_0$ minor which has a proper 3-separation $(A,B)$ on $X$. If $X$ is independent in $G_B$ and $\mathit{deg}_{G_B}(x) \ge 2$ for every $x \in X$, then $A$ is poor, and there exists a 3-separation $(A',B')$ for which $A'$ and $B'$ are rich. \end{lem}
\begin{proof} Let $X = \{x_1,x_2,x_3\}$. Since $K_5, K_{3,3} \in \mathcal{F}_0$, the graph $G$ is planar, so we may consider an embedding of $G$ in the sphere. For $1 \le i < j \le 3$ let $P_{ij}$ be a non-trivial path in $G_B$ from $x_i$ to $x_j$ which forms part of the boundary of a face. Note that since $G$ is 3-connected, these paths do not depend on the choice of embedding. In addition, our assumptions imply that each $P_{ij}$ has at least one interior vertex and that these paths are internally disjoint. By restricting our embedding of $G$ we may obtain an embedding of $G_B$ in a disc with the cycle $D = P_{12} \cup P_{23} \cup P_{13}$ on the boundary of the disc. Define a \emph{bridge} to be a subgraph of $G_B$ which either consists of a single edge $e \in E(G_B) \setminus E(D)$ for which both ends of $e$ are in $V(D)$, or consists of a component $G'$ of $G_B \setminus V(D)$ together with all edges joining a vertex of $G'$ and a vertex of $D$ (together with any endpoint of such an edge). An \emph{attachment} of a bridge $K$ is a vertex in $V(K) \cap V(D)$. Note that by the embedding of $G_B$ and the 3-connectivity of $G$, for every bridge $K$, and every path $P_{ij}$, the attachments of $K$ which lie in $V(P_{ij})$ must form an interval of this path, and $K$ must have an attachment outside of this path.
If there is a bridge with attachments in the interiors of $P_{12}$, $P_{23}$ and $P_{13}$, then $(G_B, \{x_1,x_2,x_3\})$ has the third graph from Figure \ref{s_plus} together with $\{v_1,v_2,v_3\}$ as a rooted minor. It then follows that $G$ has $C$ as a minor, which is a contradiction. Next suppose that there exists a bridge $K$ with $x_1$ and a vertex in the interior of $P_{23}$ as attachments. We may assume (without loss) that there are no attachments of $K$ in the interior of $P_{12}$. Let $u$ be the attachment of $K$ on $P_{23}$ which is closest along this path to $x_2$, and note that $u \neq x_2$ (otherwise this would force all of $V(P_{12})$ to be attachments of $K$). It now follows that $G_B$ has a 2-separation on $\{x_1,u\}$. Furthermore, by planarity, every bridge with an attachment in the interior of $P_{12}$ ($P_{13}$) also has an attachment in the interior of $P_{23}$. Therefore $(G_B,X)$ has a rooted $(S,\{v_1,v_2,v_3\})$ minor. Now Lemma \ref{fan2} implies that $G_A \setminus x_1$ is a path, so $G_A$ is poor. Furthermore, for any interior vertex $w$ of this path we have that $w$ and $x_1$ are adjacent (by 3-connectivity) and thus there is a 3-separation $(A',B')$ of $G$ on $\{w,x_1,u\}$ for which both $A'$ and $B'$ are rich. Argue similarly for any bridge from $x_2$ or $x_3$ to the interior of the opposite side.
We may now assume that there does not exist a bridge with attachments as considered in the previous paragraph. It follows from this that every bridge has attachments in the interior of exactly two of the three paths $P_{12}$, $P_{13}$, $P_{23}$. If for each pair of these paths there is a bridge which attaches to both interiors, then $(G_B, \{x_1,x_2,x_3\})$ has the second graph from Figure \ref{s_plus} together with $\{v_1,v_2,v_3\}$ as a rooted minor and it follows that $G$ contains $H$ as a minor, which is a contradiction. So, we may assume (without loss) that there are no bridges with attachments in the interior of both $P_{12}$ and $P_{13}$. It follows from this that $x_1$ is not an attachment of any bridge. Now let $u$ be the vertex on $P_{23}$ which is closest to $x_2$ along this path with the property that $u$ is an attachment of a bridge which also has an attachment in the interior of $P_{13}$. Now $G_B$ has a 2-separation on $\{x_1,u\}$ and $(G_B,X)$ has a rooted $(S,\{v_1,v_2,v_3\})$ minor. As before, Lemma \ref{fan2} implies that $G_A \setminus x_1$ is a path, so $G_A$ is poor, and for any interior vertex $w$ of this path, we have a 3-separation $(A',B')$ of $G$ on $\{w,x_1,u\}$ for which both $A'$ and $B'$ are rich. \end{proof}
The proof of Theorem \ref{decomp} also requires the following classical theorem on graph minors.
\begin{theorem}[Halin, Jung \cite{HJ}] \label{octk5} Every simple graph with minimum degree at least four contains either $K_5$ or $O$ as a minor. \end{theorem}
\begin{proof}[Proof of Theorem \ref{decomp}.] \mbox{} Let $G = (V,E)$ be a simple 3-connected graph with no minor in $\mathcal{F}_0$. By Theorem \ref{octk5} we may choose a vertex $v$ of degree 3. We now form a sequence $e_1, e_2, \ldots$ of edges by choosing $e_1,e_2,e_3$ to be the three edges incident with $v$ and then greedily extending the list by a new edge $e_k$ if there exists such an edge with the property that $( \{e_1, \ldots, e_k\} , E \setminus \{e_1, \ldots, e_k \} )$ is a 3-separation. If this procedure exhausts all of the edges, then we are done. Otherwise, at the point where we get stuck, we have a 3-separation $( \{e_1, \ldots, e_{\ell} \} , E \setminus \{e_1, \ldots, e_{\ell} \} )$ on $X$ with the property that no edge in $E \setminus \{e_1, \ldots, e_{\ell} \}$ has both ends in $X$, and every vertex in $X$ has degree at least two in $G_{ E \setminus \{e_1, \ldots, e_{\ell} \} }$. So we may apply Lemma \ref{getsrich} to choose a 3-separation $(A',B')$ of $G$ for which $A'$ and $B'$ are both rich. Now we form a sequence of edges $b_1, b_2, \ldots $ of $B'$ in a greedy fashion by repeatedly choosing an edge $b_{i}$ so that $(A' \cup \{b_1, \ldots, b_i \}, B' \setminus \{b_1, \ldots, b_i \} )$ is a 3-separation. If this procedure terminates before all edges of $B'$ have been used, say after choosing $b_i$, then we have a 3-separation $(A' \cup \{b_1, \ldots, b_i\}, B' \setminus \{b_1, \ldots, b_i\} )$ satisfying the hypothesis of Lemma \ref{getsrich} and this implies that $A' \cup \{b_1, \ldots, b_i\}$ is poor, contradicting the fact that $A'$ is rich. Therefore, this process terminates with a sequence $b_1, \ldots, b_m$ containing all edges of $B'$. Similarly, we may greedily sequence the edges of $A'$ as $a_1, a_2, \ldots, a_{\ell}$ so that $(A' \setminus \{a_1, \ldots, a_j\} , B' \cup \{a_1, \ldots, a_j \})$ is a 3-separation for every $1 \le j \le \ell$. Now the edge sequence $a_{\ell}, \ldots, a_1, b_1, \ldots, b_m$ has width 3.
For the other direction, note that the graphs $K_{3,3}$, $K_5$, $C$, and $O$ do not contain a 3-separation $(A,B)$ with $|A|, |B| \ge 4$, and the graph $H$ has no such 3-separation $(A,B)$ with $|A|, |B| \ge 5$. It follows that all of these graphs have width at least 4. \end{proof}
\section{Splitting}\label{sec splitting}
In this section we introduce the notion of Feynman 5-splitting, which serves as the central focus of our investigation. Given a not necessarily simple graph $G$ assign to each edge $e$ a variable $x_e$. Then the \emph{Kirchhoff} polynomial of $G$ is \[ \Psi_G = \sum_{\substack{T \text{ spanning}\\\text{tree of } G}} \left( \prod_{e\not\in T}x_e \right) \]
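As an illustrative sketch (the function names are ours), the monomial support of $\Psi_G$ can be enumerated directly from the spanning trees: each tree $T$ contributes the squarefree monomial $\prod_{e \notin T} x_e$. Edges carry explicit labels so that parallel edges, which occur in Feynman graphs, are handled correctly.

```python
from itertools import combinations

def spanning_trees(vertices, edges):
    """All spanning trees of a multigraph; edges are (label, u, v) triples."""
    n = len(vertices)
    trees = []
    for T in combinations(edges, n - 1):
        parent = {v: v for v in vertices}
        def find(v):
            while parent[v] != v:
                v = parent[v]
            return v
        acyclic = True
        for _, u, v in T:
            ru, rv = find(u), find(v)
            if ru == rv:          # adding this edge would close a cycle
                acyclic = False
                break
            parent[ru] = rv
        if acyclic:               # n-1 edges and acyclic => spanning tree
            trees.append(T)
    return trees

def kirchhoff_monomials(vertices, edges):
    """Psi_G as a set of monomials: for each spanning tree, the frozenset
    of labels of the edges *not* in the tree."""
    labels = {lab for lab, _, _ in edges}
    return {frozenset(labels - {lab for lab, _, _ in T})
            for T in spanning_trees(vertices, edges)}
```

For the triangle this returns the three singleton monomials of $\Psi = x_1 + x_2 + x_3$, and for $K_4$ it enumerates the $16$ spanning trees predicted by Cayley's formula.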
The \emph{Feynman period} of $G$ is \[ \int_{0}^\infty\cdots \int_0^\infty \frac{\prod_e dx_e}{\Psi_G^2}\delta(\sum_{e}x_e-1) \] where $\delta$ is the Dirac delta. This is only a part of the full Feynman integral of a graph, which in general would involve external momenta, masses, and for non-scalar Feynman diagrams, further terms in the numerator incorporating the tensor structure of the diagram. However, it is an important part physically. The full Feynman integral typically diverges, and with appropriate regularization this part is the coefficient of logarithmic growth at infinity, hence is a kind of residue \cite{numbers}. Furthermore, for primitive divergent graphs, defined below, this part gives the most complicated contributions to the $\beta$-function of the theory \cite{Sphi4} and is invariant under a wide variety of renormalization schemes since no choices have to be made for subdivergences. It is also a mathematically very interesting object, both algebro-geometrically (see for example \cite{AMdet, bek, BrS}) and combinatorially \cite{BrSY, BrY, SFq}.
Let $\ell(G)$ be the cyclotomic number of $G$, the minimum number of edges needed to be removed to create an acyclic graph, and let $e(G)$ be the number of edges of $G$. The Feynman period converges provided $e(G) = 2\ell(G)$ and $e(\gamma) > 2\ell(\gamma)$ for any nonempty proper subset of edges $\gamma$ of $G$. Graphs that meet this requirement are known as \emph{primitive divergent}. Calculating the values of Feynman periods is quite difficult. Oliver Schnetz \cite{Sphi4}, following older work of David Broadhurst and Dirk Kreimer \cite{bkphi4}, calculated as many as possible with current techniques.
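For tiny graphs the primitive divergent condition can be tested exhaustively. The sketch below is our own code and implements the condition in the form $e(G) = 2\ell(G)$ together with $e(\gamma) > 2\ell(\gamma)$ on every nonempty proper edge subset $\gamma$, computing $\ell$ as edges minus vertices plus components.

```python
from itertools import combinations

def loop_number(edges):
    """Cyclotomic number ell = |E| - |V| + (components), computed on the
    vertex set actually touched by `edges` (pairs; parallel edges allowed)."""
    verts = {v for e in edges for v in e}
    parent = {v: v for v in verts}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    comps = len(verts)
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            comps -= 1
    return len(edges) - len(verts) + comps

def primitive_divergent(edges):
    """Check e(G) = 2*ell(G) and e(gamma) > 2*ell(gamma) for every
    nonempty proper subset gamma of the edge list (exponential time)."""
    if len(edges) != 2 * loop_number(edges):
        return False
    for k in range(1, len(edges)):
        for idx in combinations(range(len(edges)), k):
            gamma = [edges[i] for i in idx]
            if len(gamma) <= 2 * loop_number(gamma):
                return False
    return True
```

For example $K_4$ ($e=6$, $\ell=3$) passes, a triangle fails the global condition, and two parallel-edge bubbles in series fail on the subdivergence given by one bubble.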
There is also a matrix-theoretic point of view on the Kirchhoff polynomials which is very useful. Following Brown \cite{Brbig} we will use an exploded Laplacian which is well suited to our form of the Kirchhoff polynomial.
Given an undirected graph $G$, choose an arbitrary orientation for the edges and let $\xi_G$ be the $|E(G)| \times |V(G)|$ incidence matrix for this directed graph.
Let $A$ be the diagonal matrix with entries $x_e$ for $e \in E(G)$ with the same choice of order as in the construction of $\xi_G$.
Let $\widehat{\xi}_G$ be a submatrix of the incidence matrix $\xi_G$ obtained by deleting an arbitrary column. Define the matrix $M_G$ as, \[
M_G = \left[\begin{array}{c|c}A & \widehat{\xi}_G \\ \hline -\widehat{\xi}^T_G & \bf{0}\end{array}\right]. \]
The first $|E(G)|$ rows and columns are indexed by the edges of $G$, and the remaining $|V(G)|-1$ rows and columns are indexed by the set of vertices of $G$ other than the one removed in the construction of $\widehat{\xi}$.
Then the matrix-tree theorem implies \[ \Psi_G = \det(M_G) \] and hence that the determinant of $M_G$ does not depend on the orderings, orientations, or choice of removed vertex (see Proposition 3.7 of \cite{Ycov} for a concise derivation of this from the standard form of the matrix-tree theorem).
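The identity $\Psi_G = \det(M_G)$ can be sanity-checked numerically: evaluate both sides at integer values of the $x_e$ and compare. The sketch below (our own code; exact rational arithmetic, tiny graphs only) builds $M_G$ with an arbitrary edge orientation and drops the last vertex's column, as in the construction above.

```python
from fractions import Fraction
from itertools import combinations

def det(matrix):
    """Exact determinant by Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in matrix]
    n = len(m)
    sign = 1
    for col in range(n):
        piv = next((r for r in range(col, n) if m[r][col] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != col:
            m[col], m[piv] = m[piv], m[col]
            sign = -sign
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n):
                m[r][c] -= f * m[col][c]
    prod = Fraction(sign)
    for i in range(n):
        prod *= m[i][i]
    return prod

def psi_via_matrix(vertices, edges, x):
    """det(M_G) with numeric edge variables x[i]; edges are (u, v) pairs,
    oriented u -> v, and the last vertex's column is deleted."""
    n, m = len(vertices), len(edges)
    vi = {v: i for i, v in enumerate(vertices)}
    xi = [[0] * (n - 1) for _ in range(m)]       # truncated incidence matrix
    for r, (u, v) in enumerate(edges):
        if vi[u] < n - 1:
            xi[r][vi[u]] += 1
        if vi[v] < n - 1:
            xi[r][vi[v]] -= 1
    M = [[0] * (m + n - 1) for _ in range(m + n - 1)]
    for r in range(m):
        M[r][r] = x[r]
        for c in range(n - 1):
            M[r][m + c] = xi[r][c]
            M[m + c][r] = -xi[r][c]
    return det(M)

def psi_via_trees(vertices, edges, x):
    """Sum over spanning trees T of the product of x[i] for edges not in T."""
    total = 0
    for T in combinations(range(len(edges)), len(vertices) - 1):
        parent = {v: v for v in vertices}
        def find(v):
            while parent[v] != v:
                v = parent[v]
            return v
        acyclic = True
        for i in T:
            u, v = edges[i]
            ru, rv = find(u), find(v)
            if ru == rv:
                acyclic = False
                break
            parent[ru] = rv
        if acyclic:
            p = 1
            for i in range(len(edges)):
                if i not in T:
                    p *= x[i]
            total += p
    return total
```

For the triangle with $x = (2,3,5)$ both sides give $2+3+5=10$, and for $K_4$ at arbitrary integer points the two evaluations agree.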
In \cite{Brbig} Francis Brown gave an algorithm to compute some Feynman periods one edge variable at a time. The key to the algorithm is the structure of the denominator at each step; if the denominator at a given step factors into two terms which are each linear in a common variable, then the algorithm can proceed to the next step. The numerators at each stage are explicit polylogarithms. The obstruction to this algorithm is the denominator not factoring. The first time when this can occur is at the fifth step.
\begin{definition} Let $\Psi_G$ be the Kirchhoff polynomial for a graph $G$ and $M_G$ the matrix used above. Let $I,J,K \subseteq E(G)$. Let $M_G(I,J)_K$ be the matrix obtained by deleting rows indexed by the edges in $I$, columns indexed by the edges in $J$, and setting $x_e = 0$ for all $e \in K$.
If $|I| = |J|$ define the \emph{Dodgson polynomial} \[ \Psi^{I,J}_K = \det(M_G(I,J)_K). \] If $K = \emptyset$, write this as $\Psi^{I,J}$. \end{definition}
It is demonstrated in \cite{Brbig} that Dodgson polynomials are well-defined up to overall sign. For any given calculation, by fixing a choice of matrix $M_G$ the relative signs between Dodgson polynomials are also well-defined.
\begin{definition} For a graph $G$, a \emph{5-configuration} is a set $S \subseteq E(G)$ such that $|S| = 5$. \end{definition}
\begin{definition} Let $G$ be a graph and $S = \{e_1, \ldots, e_5\}$ a 5-configuration of $G$. The \emph{five-invariant} is the polynomial \[ ^5\Psi(e_1,e_2,e_3,e_4,e_5) = \pm (\Psi^{e_1e_2,e_3e_4}_{e_5} \Psi^{e_1e_3e_5,e_2e_4e_5} - \Psi^{e_1e_3,e_2e_4}_{e_5} \Psi^{e_1e_2e_5,e_3e_4e_5}). \]
\end{definition}
The 5-invariant is the denominator at the fifth stage of integration. It was first implicitly found in \cite{bek}, equation (8.13).
\begin{lem}[\cite{Brbig} Lemma 87] \label{reindex} Reordering the edges in a five-invariant may at most change the sign of the polynomial. \end{lem}
The Dodgsons which appear in the definition of the 5-invariant must have $|I \cup J \cup K| =5$, $|I \cap K| = |J \cap K| = 0$, and either $|I| = |J| = 2$, $|K| = 1$, and $|I \cap J| = 0$ or $|I| = |J| = 3$, $|K| = 0$, and $|I \cap J| = 1$. Trivially, $\Psi^{I,J}_K =\Psi^{J,I}_K$. Thus for a fixed 5-configuration $S$, there are thirty generically distinct Dodgsons of these forms associated with $S$.
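The count of thirty can be verified mechanically. In the sketch below (our own code), a shape records the unordered pair $\{I,J\}$ together with $K$; enumerating both cases for a fixed 5-configuration yields fifteen shapes of each kind.

```python
from itertools import combinations

def dodgson_shapes(S):
    """All (unordered {I, J}, K) shapes of Dodgsons attached to a
    5-configuration S: either |I| = |J| = 2, I and J disjoint, |K| = 1,
    or |I| = |J| = 3, |I cap J| = 1, K empty, with I u J u K = S."""
    S = frozenset(S)
    shapes = set()
    # Case 1: |I| = |J| = 2, |K| = 1.
    for K in combinations(S, 1):
        rest = S - set(K)
        for I in combinations(rest, 2):
            J = rest - set(I)
            shapes.add((frozenset({frozenset(I), J}), frozenset(K)))
    # Case 2: |I| = |J| = 3 sharing one element, K empty.
    for shared in S:
        rest = S - {shared}
        for half in combinations(rest, 2):
            I = frozenset(half) | {shared}
            J = (rest - set(half)) | {shared}
            shapes.add((frozenset({I, J}), frozenset()))
    return shapes
```

Each case contributes $5 \times 3 = 15$ unordered shapes, for thirty in total.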
\begin{definition} Let $G$ be a graph and fix a 5-configuration $S$ in $G$. We say that $S$ \emph{splits} if at least one of the 30 Dodgson polynomials associated to $S$ is identically zero.
If $S$ splits for every possible 5-configuration $S \subseteq E(G)$, we say that $G$ itself is \emph{Feynman 5-split}, which we will henceforth abbreviate as \emph{split} for the sake of brevity. \end{definition}
If a 5-configuration $\{e_1,e_2,e_3,e_4,e_5\}$ splits, then, by Lemma \ref{reindex}, it is possible to permute the indices such that $^5\Psi(e_1,...,e_5) = \pm \Psi^{e_1e_2,e_3e_4}_{e_5} \Psi^{e_1e_3e_5,e_2e_4e_5}$. As each Dodgson is, by construction, linear in each variable, this five-invariant factors into a product of two polynomials, each linear in each variable. Therefore, splitting is a way of avoiding the obstruction to the integration algorithm at the fifth step. It is a theoretically nice way to avoid the obstruction since the factorization comes for combinatorial reasons, and it is also in practice the sort of factorization which occurs.
On the other hand, splitting is a very strong condition as it requires not just that there is some way to avoid the obstruction but that every choice of 5 edges avoids the obstruction. Furthermore Theorems \ref{decomp} and \ref{heyguesswhatsplits} tell us that split graphs have width 3. A result of Brown (\cite{Brbig} Theorems 1 and 2) is that for width 3 graphs there is at least one edge ordering so that the algorithm can continue until all edges have been integrated. Thus splitting requires that all ways of starting the algorithm are well behaved but as a consequence gives that there is some ordering with no obstructions at any point.
The following proposition provides a more graph-theoretic method of calculating Dodgson polynomials that will prove useful.
\begin{prop} \label{calcdodgson} For a graph $G$ and $I,J,K \subseteq E(G)$ such that $|I| = |J|$, the Dodgson polynomial $$\Psi^{I,J}_K = \sum_{T \subseteq E(G)} \left( \pm \prod_{e \notin T \cup S} x_e \right)$$ where $S = I \cup J \cup K$ and the sum is over all edge sets which induce spanning trees in both graph minors $G \setminus I / ((J \cup K) - I)$ and $G \setminus J / ((I \cup K) - J)$. \end{prop}
In particular, the above proposition immediately implies the following useful corollary.
\begin{corollary}\label{split-tree-cor}
Let $G$ be a graph and let $S \subseteq E(G)$ be a 5-configuration. Then $S$ splits if and only if it has a partition $\{ \{e\}, S_1 , S_2\}$ with $|S_1| = 2 = |S_2|$ so that one of the graphs $G \setminus e$ or $G/e$ does not have a set of edges $T$ for which both $T \cup S_1$ and $T \cup S_2$ are spanning trees. \end{corollary}
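This corollary gives a finite check for splitting that is easy to automate on small graphs. The sketch below is our own brute-force code (function names ours): it tests every partition $\{\{e\},S_1,S_2\}$ of a 5-configuration against both minors $G \setminus e$ and $G/e$.

```python
from itertools import combinations

def is_spanning_tree(vertices, tree_edges):
    """Union-find check: right edge count and no cycles (loops included)."""
    if len(tree_edges) != len(vertices) - 1:
        return False
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for u, v in tree_edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True

def config_splits(vertices, edges, S):
    """Does the 5-configuration S (a tuple of edge indices) split?  Tests
    every partition {{e}, S1, S2}: S splits iff for some partition one of
    G\\e, G/e has no T with T+S1 and T+S2 both spanning trees."""
    for e in S:
        rest = [i for i in S if i != e]
        for S1 in combinations(rest, 2):
            S2 = tuple(i for i in rest if i not in S1)
            for contract in (False, True):
                u0, v0 = edges[e]
                if contract:   # G/e: merge v0 into u0, keeping parallel edges
                    verts = [v for v in vertices if v != v0]
                    eds = {i: (u0 if a == v0 else a, u0 if b == v0 else b)
                           for i, (a, b) in enumerate(edges) if i != e}
                else:          # G \ e
                    verts = list(vertices)
                    eds = {i: ab for i, ab in enumerate(edges) if i != e}
                free = [i for i in eds if i not in S]
                need = len(verts) - 3          # tree size minus |S1|
                found = need >= 0 and any(
                    is_spanning_tree(verts, [eds[i] for i in T + S1]) and
                    is_spanning_tree(verts, [eds[i] for i in T + S2])
                    for T in combinations(free, need))
                if not found:
                    return True
    return False
```

Every 5-configuration of $K_4$ splits, as it must since $K_4$ has no $\mathcal{F}_0$ minor, while $K_5$ has non-splitting 5-configurations, consistent with $K_5 \in \mathcal{F}_0$.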
Key for us is the observation that being split is a minor-closed property. Hence, by the Robertson-Seymour Theorem, splitting is characterized by a finite list of forbidden minors, and our goal is to find them.
\begin{prop} Splitting is a minor-closed property. \end{prop}
\begin{proof} Suppose a graph $G$ splits. For any 5-configuration $S$ and edge $e \notin S$, the deletion or contraction of $e$ cannot create more spanning trees. It follows from Corollary \ref{split-tree-cor} that $G\setminus e$ and $G / e$ must also split. \end{proof}
The following proposition is another key property of splitting that will be useful.
\begin{prop}\label{closedunderdual} Let $G$ be a planar graph and $G^*$ its planar dual. Then $G$ splits if and only if $G^*$ splits. \end{prop}
\begin{proof}
This follows immediately from Corollary \ref{split-tree-cor}, using the edges dual to those not in the tree in $G$ to create a tree in $G^*$, and from the fact that edge deletion and contraction are dual operations. It follows immediately from this that if a graph $G$ is minor-minimal non-splitting, then $G^*$ must be also. \end{proof}
The following calculations using the 5-configurations indicated in Figure \ref{nonsplitfives} show that the graphs in $\mathcal{F}_0$ are non-split. In the next section, following Corollary \ref{whatsplitmeans}, we will provide an alternate proof of this fact.
\begin{figure}
\caption{Non-splitting five configurations.}
\label{nonsplitfives}
\end{figure}
\begin{align*} ^5\Psi_{K_5} (e_0,e_1,e_2,e_6,e_8) = & x_7 x_5 x_4 x_3 (x_3 x_5 x_7 + x_3 x_4 x_9 \\ &+ x_3 x_5 x_9 + x_3 x_7 x_9 + x_4 x_9^2)\end{align*} \begin{align*} ^5\Psi_{K_{3,3}} (e_0,e_1,e_3,e_5,e_8) = & x_2 x_4 x_6 + x_2 x_4 x_7 + x_2 x_6 x_7 - x_4 x_6 x_7 + x_2 x_7^2 \end{align*} \begin{align*} ^5\Psi_{O} (e_0,e_1,e_2,e_3,e_5) = &x_{11} x_9 x_8 x_6 x_4 ( -x_6 x_7^2 x_9 - x_6 x_7^2 x_{10} \\ &+ x_4 x_7 x_9 x_{10} - x_6 x_7 x_9 x_{10} + x_4 x_7 x_{10}^2 \\ &+ x_4 x_8 x_{10}^2 + x_7 x_8 x_{10}^2 + x_4 x_9 x_{10}^2 \\ &- x_6 x_7^2 x_{11} - x_7^2 x_{10} x_{11} ) \end{align*} \begin{align*} ^5\Psi_{H} (e_0,e_1,e_3,e_4,e_8) = &x_6 x_5 ( x_2 x_5 x_6 x_9 x_{10} + x_2 x_5 x_7 x_9 x_{10} \\ &+ x_2 x_5 x_6 x_{10}^2 + x_2 x_5 x_7 x_{10}^2 + x_2 x_6 x_9 x_{10}^2 \\ &+ x_2 x_7 x_9 x_{10}^2 - x_2 x_5 x_9^2 x_{11} - x_2 x_7 x_9^2 x_{11} \\ &- x_2 x_5 x_9 x_{10} x_{11} + x_2 x_6 x_9 x_{10} x_{11} - x_2 x_9^2 x_{10} x_{11} \\ &- x_7 x_9^2 x_{10} x_{11} - x_2 x_9^2 x_{11}^2 - x_7 x_9^2 x_{11}^2 ) \end{align*} \begin{align*} ^5\Psi_{C} (e_0,e_1,e_3,e_5,e_7) = &-x_2 x_4 x_6 x_8 x_9 - x_2 x_4 x_6 x_9^2 - x_4 x_6 x_8 x_9^2 \\ &- x_4 x_6 x_8 x_9 x_{10} - x_2 x_4 x_6 x_9 x_{11} + x_2 x_4 x_8 x_{10} x_{11} \\ &+ x_2 x_6 x_8 x_{10} x_{11} + x_2 x_4 x_9 x_{10} x_{11} + x_2 x_8 x_9 x_{10} x_{11} \\ &+ x_6 x_8 x_9 x_{10} x_{11} + x_2 x_8 x_{10}^2 x_{11} + x_6 x_8 x_{10}^2 x_{11} \\ &+ x_2 x_4 x_{10} x_{11}^2 + x_2 x_8 x_{10} x_{11}^2 \end{align*}
\begin{theorem} \label{heyguesswhatsplits} A simple 3-connected graph splits if and only if it has no minor in $\mathcal{F}_0$. \end{theorem}
\begin{proof} It follows from the above discussion that every graph with a minor in $\mathcal{F}_0$ is non-splitting. Now let $G$ be a simple 3-connected graph with no minor in $\mathcal{F}_0$. By Theorem \ref{decomp} we may choose an edge ordering $e_1, \ldots, e_m$ of $G$ of width 3. Now let $S \subseteq E(G)$ be a 5-configuration and choose $1 \le \ell \le m$ so that $e_{\ell}$ is the third edge of this 5-configuration in the ordering. Now both $\partial( \{e_1, \ldots, e_{\ell - 1} \})$ and $\partial( \{e_1, \ldots, e_{\ell} \} )$ have size three. If these sets are equal, then $G / e_{\ell}$ has a 2-separation with two edges in $S$ on each side. Otherwise each of these sets has exactly one endpoint of $e_{\ell}$ and $G \setminus e_{\ell}$ has a 2-separation with two edges in $S$ on each side. In either case, it follows from Corollary \ref{split-tree-cor} that $S$ splits. \end{proof}
\section{Matroids}\label{sec matroids}
In this section we introduce matroid theory to the study of splitting. This section is divided into three subsections. The first is a primer on matroids which we intend as an introduction for those not familiar with their theory. This subsection can safely be skipped by those readers familiar with matroids. The second subsection introduces a notion of width for matroids which is quite closely related to the width we have already established for graphs, and compares these two notions. The final subsection calls upon the Matroid Intersection Theorem to reformulate the notion of splitting in terms of basic connectivity properties.
\subsection{Introduction}
Our primary goal in this subsection is to acquaint the reader with matroids, and to highlight a couple of powerful theorems from this realm with particularly broad scope for application.
A matroid $M$ consists of a finite ground set $E$ together with a collection $\mathcal{I}$ of subsets of $E$ called \emph{independent sets}, which satisfy the following axioms: \begin{enumerate} \item $\emptyset \in \mathcal{I}$. \item If $A \in \mathcal{I}$ and $A' \subseteq A$ then $A' \in \mathcal{I}$.
\item If $A,B \in \mathcal{I}$ and $|A| < |B|$ then there exists $b \in B \setminus A$ so that $A \cup \{b\} \in \mathcal{I}$. \end{enumerate}
Two of the principal examples of matroids are:
\begin{itemize} \item Let $E$ be a finite set of vectors from a vector space and define $S \subseteq E$ to be independent if these vectors are linearly independent. \item Let $E$ be the edge set of a graph $G$ and define $S \subseteq E$ to be independent if these edges do not contain (the edge set of) a cycle. This matroid, denoted $M(G)$, is called the \emph{cycle matroid} of $G$. \end{itemize}
The fact that the first example above satisfies the axioms for a matroid is immediate. Indeed we may view matroids as a natural abstraction of the concept of linear independence in vector spaces. Our second example is actually a special case of the first. To see this, let $G = (V,E)$ and define $M$ to be the $V \times E$ incidence matrix of $G$ viewed as a matrix over $\mathbb{F}_2$. Now associating each edge with the corresponding column of $M$ we find that a set of edges $S \subseteq E$ has no cycle if and only if the corresponding column vectors are linearly independent.
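The cycle-matroid example can also be verified exhaustively on a small case. The sketch below (a standard union-find cycle test applied to $K_4$; the names are ours) confirms the exchange axiom by brute force and counts the maximal independent sets:

```python
from itertools import combinations

def independent(edges, subset):
    """Graphic-matroid independence: `subset` (a set of edge indices) is
    independent iff it contains no cycle, tested with union-find."""
    parent = {}
    def root(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for e in subset:
        u, v = edges[e]
        ru, rv = root(u), root(v)
        if ru == rv:                     # adding e would close a cycle
            return False
        parent[ru] = rv
    return True

edges = [(0,1), (0,2), (0,3), (1,2), (1,3), (2,3)]   # K4
ground = range(len(edges))
indep = [set(S) for r in range(len(edges) + 1)
         for S in combinations(ground, r) if independent(edges, S)]

# Axiom 3 (exchange): a smaller independent set can always be augmented.
for A in indep:
    for B in indep:
        if len(A) < len(B):
            assert any(independent(edges, A | {b}) for b in B - A)

# Maximal independent sets are the spanning trees: all of size 3,
# and Cayley's formula gives 4^{4-2} = 16 of them.
bases = [S for S in indep if len(S) == 3]
assert all(len(S) <= 3 for S in indep) and len(bases) == 16
```

The count of 16 maximal sets anticipates the discussion of bases below.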
A \emph{basis} of a matroid is a maximal independent set. Generalizing a familiar property from linear algebra, we observe that the third axiom implies that any two bases must have the same size. Note that for a connected graph, the bases of its cycle matroid are edge sets of spanning trees. Pleasingly, every matroid $M$ has a dual $M^*$ with the property that $B$ is a basis of $M$ if and only if $E \setminus B$ is a basis of $M^*$. This duality generalizes that found in planar graphs in the sense that the cycle matroids associated with two dual planar graphs will be dual matroids.
Another concept from linear algebra which generalizes naturally to matroids is that of rank. For an arbitrary set $S \subseteq E$, the \emph{rank} of $S$, denoted $r(S)$, is the size of the largest independent set in $S$. We define the \emph{rank} of the matroid to be $r(M) = r(E)$. The following classical theorem shows that both covering and packing by bases are well-characterized in terms of the rank function.
\begin{theorem} Let $M$ be a matroid with ground set $E$ and let $k \ge 0$. The following hold: \begin{itemize}
\item Either there exist $k$ bases with union $E$ or there is a subset $S \subseteq E$ with $|S| > k \cdot r(S)$.
\item Either there exist $k$ disjoint bases in $E$ or there is a subset $S \subseteq E$ so that $|E \setminus S| < k \cdot (r(M) - r(S))$. \end{itemize} \end{theorem}
In each of the above cases, the ``or'' is actually an exclusive or, since the two conditions are mutually exclusive. To see this, we need only note that every basis contains at most $r(S)$ elements from a set $S$ (so must contain at least $r(M) - r(S)$ elements from $E \setminus S$).
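Both bullets, and the exclusivity of the alternatives, can be confirmed by brute force on a small cycle matroid. A sketch for $M(K_4)$ with $k = 2$ (union-find rank, exhaustive search; the helper names are ours):

```python
from itertools import combinations

def graphic_rank(n, S):
    """r(S) = n - #components of (V, S) for the cycle matroid on n vertices,
    computed by counting union-find merges."""
    parent = list(range(n))
    def root(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    r = 0
    for (u, v) in S:
        ru, rv = root(u), root(v)
        if ru != rv:
            parent[ru] = rv
            r += 1
    return r

edges = [(0,1), (0,2), (0,3), (1,2), (1,3), (2,3)]   # K4
n, k = 4, 2
rM = graphic_rank(n, edges)                           # rank 3
bases = [set(B) for B in combinations(edges, rM)
         if graphic_rank(n, B) == rM]
assert len(bases) == 16                               # the 16 spanning trees

# k = 2 bases covering E exist, and k = 2 disjoint bases exist ...
cover = any(B1 | B2 == set(edges) for B1 in bases for B2 in bases)
packing = any(not (B1 & B2) for B1 in bases for B2 in bases)
assert cover and packing

# ... so, as the theorem predicts, no subset S violates either rank bound.
for r in range(len(edges) + 1):
    for S in combinations(edges, r):
        S = set(S)
        assert len(S) <= k * graphic_rank(n, S)
        assert len(set(edges) - S) >= k * (rM - graphic_rank(n, S))
```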
Next we state another famous and extremely useful theorem concerning finding a common independent set in two matroids. This is the result we will require for our characterization of splitting 5-configurations.
\begin{theorem}[Matroid Intersection] Let $M_1$ and $M_2$ be matroids on $E$ with rank functions $r_1$ and $r_2$ and let $k \ge 0$. Exactly one of the following holds: \begin{enumerate}
\item There exists a set $S \subseteq E$ with $|S| = k$ which is independent in both $M_1$ and $M_2$. \item There exists a pair of disjoint sets $A,B \subseteq E$ with $A \cup B = E$ so that $r_1(A) + r_2(B) < k$. \end{enumerate} \end{theorem}
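A classical instance of this theorem is bipartite matching, obtained by intersecting two partition matroids in which each part has capacity one. The sketch below checks the equivalent min-max form on a toy three-edge bipartite graph: the largest common independent set has size $\min_{A \subseteq E}\, (r_1(A) + r_2(E \setminus A))$, so alternative 1 holds for $k$ up to this value and alternative 2 holds beyond it (the instance and names are ours):

```python
from itertools import combinations

def partition_rank(parts, S):
    """Rank in a partition matroid with capacity one: parts met by S."""
    return sum(1 for p in parts if p & S)

# Edges of a tiny bipartite graph: 0 = ax, 1 = ay, 2 = bx.
E = {0, 1, 2}
r1 = lambda S: partition_rank([{0, 1}, {2}], set(S))   # one edge per left vertex
r2 = lambda S: partition_rank([{0, 2}, {1}], set(S))   # one edge per right vertex

subsets = [set(c) for r in range(len(E) + 1) for c in combinations(E, r)]
# Common independent sets are exactly the matchings; the largest has size 2.
max_common = max(len(S) for S in subsets
                 if r1(S) == len(S) and r2(S) == len(S))
# Minimum of r1(A) + r2(E - A) over all partitions of the ground set.
min_cover = min(r1(A) + r2(E - A) for A in subsets)
assert max_common == min_cover == 2
```

With $k = 2$ the first alternative holds and the second fails; with $k = 3$ it is the other way around.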
\subsection{Matroid separations and caterpillar width}
In this section we introduce the notion of a separation of a matroid and a corresponding notion of width for matroids.
\begin{definition} Let $M$ be a matroid on $E$ with rank function $r$. A \emph{separation} of $M$ is a pair $(A,B)$, $A,B \subseteq E$ such that $A \cap B = \emptyset$ and $A \cup B = E$. The \emph{order} of the separation is $r(A) + r(B) - r(M) + 1$. \end{definition}
Now let us consider a connected graph $G=(V,E)$ and its cycle matroid $M(G)$. Let $A,B \subseteq E$ satisfy $A \cap B = \emptyset$ and $A \cup B = E$. Then $(A,B)$ is a separation of the graph $G$ with order $|V(G_A) \cap V(G_B)|$. Further, $(A,B)$ is a separation of the matroid $M(G)$ with order $|V(G_A) \cap V(G_B)| + 2 - \text{comp}(G_A) - \text{comp}(G_B)$ where $\text{comp}$ gives the number of connected components (this follows from the rank formula $r(S) = |V| - \text{comp}(V,S)$ for every $S \subseteq E$). So, whenever $A,B \neq \emptyset$ the order of $(A,B)$ in the graph $G$ is greater than or equal to its order in the matroid $M(G)$.
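The relation between the two orders can be checked directly. The sketch below (names ours) does so on a 5-cycle split into two disconnected halves, where the graph order is 4 while the matroid order is 2, and verifies the component formula from the text:

```python
def components(vertices, edge_list):
    """Number of connected components of (vertices, edge_list), via union-find."""
    parent = {v: v for v in vertices}
    def root(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    c = len(parent)
    for u, v in edge_list:
        ru, rv = root(u), root(v)
        if ru != rv:
            parent[ru] = rv
            c -= 1
    return c

edges = [(0,1), (1,2), (2,3), (3,4), (4,0)]        # a 5-cycle
A = [edges[0], edges[3]]                           # G_A: two disjoint edges
B = [edges[1], edges[2], edges[4]]                 # G_B: also two components

VA = {v for e in A for v in e}
VB = {v for e in B for v in e}
rank = lambda S: 5 - components(range(5), S)       # r(S) = |V| - comp(V, S)

graph_order = len(VA & VB)                         # |V(G_A) /\ V(G_B)|
matroid_order = rank(A) + rank(B) - rank(edges) + 1
assert (graph_order, matroid_order) == (4, 2)
# The formula from the text, and the resulting inequality:
assert matroid_order == graph_order + 2 - components(VA, A) - components(VB, B)
assert matroid_order <= graph_order
```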
Next we introduce the important notion of branch width. If $T$ is a tree, we say that $T$ is \emph{cubic} if every vertex of $T$ has degree $1$ or $3$.
\begin{definition} A \emph{branch decomposition} of a matroid $M$ with ground set $E$ consists of a cubic tree $T$ and an injection $\phi : E \rightarrow V(T)$ with range a subset of leaf vertices. If $T_1,T_2$ are the two components of $T \setminus e$, then the \emph{width} of $e$ is the order of the separation $( \phi^{-1} (V(T_1)), \phi^{-1}(V(T_2)) )$. The \emph{width} of $T$ is the maximum width over all of its edges. The \emph{branch width} of $M$ is the minimum width over all branch decompositions of $M$. \end{definition}
For a graph $G$ with cycle matroid $M(G)$, the standard graph-theoretic notion of tree width of $G$ is closely related to the branch width of $M(G)$ \cite{RSX}. The problem of this paper naturally gives rise to decompositions which are linear, rather than tree-like. The most linear cubic trees are \emph{caterpillars}, that is, trees whose non-leaves form a path. The \emph{caterpillar width} of a matroid is the minimum width of a branch decomposition whose underlying tree is a cubic caterpillar.
\begin{figure}
\caption{A caterpillar}
\label{caterpillar1}
\end{figure}
\begin{theorem}
If $G = (V,E)$ is a connected graph with $|E| \ge 2$, then the caterpillar width of $M(G)$ is at most the width of $G$. \end{theorem}
\begin{proof}
Suppose $k$ is the width of $G$ and let $e_1, \ldots, e_m$ be an ordering of $E$ with width $k$. If $|E| \ge 3$ then we construct a cubic caterpillar $F$ with vertex set $\{ x_1, \ldots, x_m \} \cup \{ y_2, \ldots, y_{m-1} \}$ and edges $y_i y_{i+1}$ for $2 \le i \le m-2$ and $x_i y_i$ for $2 \le i \le m-1$ and also $x_1 y_2$ and $x_m y_{m-1}$. If $|E| = 2$ then we let $F$ be the one-edge graph with vertex set $\{x_1,x_2\}$. Now define a branch decomposition of $M(G)$ by mapping $E$ to the leaves of $F$ by the rule that $e_i$ maps to $x_i$.
For every edge $y_i y_{i+1}$ of $F$, the width of this edge is the order of the separation $(\{e_1, \ldots, e_i\}, \{e_{i+1} , \ldots, e_m \} )$ in $M(G)$ which is at most its order in $G$, which is at most $k$. Every other edge of $F$ corresponds to a separation of the form $(\{e_i\}, E \setminus \{e_i\})$ for some $1 \le i \le m$ and therefore has order at most $2$. Since $k \ge 1$, we have the desired outcome unless $k=1$ and there exists an edge $e_i$ for which $(\{e_i\}, E \setminus \{e_i\})$ has order 2 in the matroid. However, if $k = 1$ then $G$ cannot contain a cycle, so $G$ itself is a tree, but in this case the order of $(\{e_i\}, E \setminus \{e_i\})$ in $M(G)$ will be 1 for every $e_i$. \end{proof}
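For a small example one can compute both quantities exhaustively. The sketch below does this for a 5-cycle, assuming (as earlier in the paper) that the width of an edge ordering is the largest boundary $|\partial(\{e_1,\ldots,e_i\})|$ over its prefixes, and modelling a caterpillar decomposition by an edge ordering together with the singleton separations at its leaves (all names are ours):

```python
from itertools import permutations

def n_components(vertices, edge_list):
    """Connected components of (vertices, edge_list), via union-find."""
    parent = {v: v for v in vertices}
    def root(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    c = len(parent)
    for u, v in edge_list:
        ru, rv = root(u), root(v)
        if ru != rv:
            parent[ru] = rv
            c -= 1
    return c

edges = [(0,1), (1,2), (2,3), (3,4), (4,0)]        # a 5-cycle
n = 5
rank = lambda S: n - n_components(range(n), S)
r_M = rank(edges)

def boundary(prefix, suffix):                      # |partial| of a prefix
    return len({v for e in prefix for v in e} & {v for e in suffix for v in e})

def order(A, B):                                   # matroid order of (A, B)
    return rank(A) + rank(B) - r_M + 1

def graph_width():
    return min(max(boundary(o[:i], o[i:]) for i in range(1, len(o)))
               for o in permutations(edges))

def caterpillar_width():
    """Best caterpillar decomposition arising from an edge ordering:
    prefix separations along the spine plus one singleton per leaf."""
    leaf = max(order([e], [f for f in edges if f != e]) for e in edges)
    spine = min(max(order(o[:i], o[i:]) for i in range(1, len(o)))
                for o in permutations(edges))
    return max(leaf, spine)

# For the 5-cycle both widths equal 2, consistent with the theorem.
assert caterpillar_width() <= graph_width() == 2
```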
On the other hand, there do exist graphs $G$ for which the width of $G$ is greater than the caterpillar width of $M(G)$. One such graph is depicted in Figure \ref{k13s}. This graph has caterpillar width 1 (in fact every such branch decomposition achieves this) but has width 2. To see the latter claim, note that any edge ordering which does not begin with a pair of the form $e_i', e_i$ immediately gives a separation of order at least two, while in an ordering that does begin this way, whatever edge appears next gives rise to a separation of order at least two.
\begin{figure}
\caption{A small tree}
\label{k13s}
\end{figure}
Note that whenever a graph $G$ has an edge ordering $e_1, \ldots, e_m$ of minimum width which has the added property that both $G_{ \{e_1, \ldots, e_i\} }$ and $G_{ \{e_{i+1}, \ldots, e_m\} }$ are always connected, the width of $G$ and the caterpillar width of $M(G)$ will be equal. In fact, this phenomenon is closely related to the difference between two other graph parameters. Just as the caterpillar width is a linear version of branch width, the path width of a graph is a linear version of tree width. In the study of path width, there is a somewhat stronger notion, that of connected path width (where the graphs on either side of any separation must always be connected), and the discrepancy between the width of $G$ and caterpillar width of $M(G)$ is closely related to the discrepancy between the path width and connected path width of $G$.
\subsection{A matroidal look at splitting} \label{matroid splitting}
In this section we use the matroid intersection theorem to show that certain natural necessary conditions for non-splitting are in fact sufficient. In fact, the matroid intersection theorem immediately gives us a necessary and sufficient condition in terms of matroidal order for an arbitrary Dodgson polynomial to vanish. Here we will translate this into a more natural graph theoretic condition which will be easy to work with.
\begin{theorem} \label{edmondssplitting} Let $G = (V,E)$ be a connected graph and let $S \subseteq E$ with $|S| = 4$. There is a partition $S = S_1 \cup S_2$, $|S_1| = |S_2| = 2$ such that $\Psi^{S_1,S_2} = 0$ if and only if there exist edge-disjoint subgraphs $G_1$ and $G_2$ with $G_1 \cup G_2 = G$ such that one of the following holds: \begin{enumerate}
\item $|V(G_1) \cap V(G_2)| \leq 2$ and $|E(G_1) \cap S| = 2$.
\item $|V(G_1) \cap V(G_2)| = 1$ and $|E(G_1) \cap S| = 1$. \end{enumerate} \end{theorem}
\begin{proof} If either numbered condition holds, then there is a partition $\{S_1,S_2\}$ of $S$ for which no set $T \subseteq E \setminus S$ makes both $T \cup S_1$ and $T \cup S_2$ edge sets of spanning trees. It then follows from Proposition \ref{calcdodgson} that $\Psi^{S_1, S_2}$ is zero.
Conversely, suppose $\{S_1,S_2\}$ is a partition of $S$ with $\Psi^{S_1,S_2} = 0$. Then there does not exist a $T \subseteq E \setminus S$ such that $T \cup S_1$ and $T \cup S_2$ are spanning trees. By assumption, $G$ is connected, so setting $|V(G)| = n$ and $M = M(G)$ we have $r(M) = n-1$. Define matroids $M_1 = M / S_1 \setminus S_2$ and $M_2 = M / S_2 \setminus S_1$. If either $S_1$ or $S_2$ contains an edge cut or cycle, the proof is complete, so assume not. Then both of these are cycle matroids of connected graphs with $n-2$ vertices, and thus have rank $n-3$. Furthermore, it follows immediately from our choice of which edges to contract that a set $T \subseteq E \setminus S$ makes $T \cup S_i$ a spanning tree of $G$ if and only if $T$ is a basis of $M_i$ for $i = 1,2$. By Edmonds' Matroid Intersection Theorem, there exist sets $A$ and $B$ with $A \cap B = \emptyset$ and $A \cup B = E \setminus S$ so that $r_1(A) + r_2(B) < n-3$.
Setting $A^* = A \cup S_1$ and $B^* = B \cup S_2$, we have that $(A^*,B^*)$ is a separation in the original matroid $M$ satisfying \begin{align*} o(A^*,B^*) &= r(A^*) + r(B^*) - r(M) +1 \\ &= (r_1(A)+2) + (r_2(B)+2) - (n-1)+1 \\ &< 3. \end{align*} So, there exists a separation $(A^*,B^*)$ of order at most two (in the matroid) such that $A^*$ and $B^*$ each contain exactly two edges in $S$. Among all such separations, choose one $(A^*,B^*)$ such that $\text{comp}(G_{A^*}) + \text{comp}(G_{B^*})$ is minimum. Let $X = V(G_{A^*}) \cap V(G_{B^*})$, $F_1,...,F_l$ be the components of $G_{A^*}$, and $H_1,...,H_m$ be the components of $G_{B^*}$. By the formula for the order of the separation, we have $|X| \leq l+m$. As $G$ is connected and $F_1,...,F_l$ are distinct components of $G_{A^*}$, the sets $X\cap V(F_1)$ through $X \cap V(F_l)$ are disjoint and nonempty. Suppose there is a component with $|X \cap V(F_i)| =1$. If $E(F_i) \cap S \neq \emptyset$, then $F_i$ and the subgraph induced by $E(G) \setminus E(F_i)$ must satisfy either (1) or (2). Otherwise, replacing $A^*$ by $A^* \setminus E(F_i)$ and $B^*$ by $B^* \cup E(F_i)$ contradicts our choice of $(A^*,B^*)$. So, we may assume that every $F_i$ contains at least two vertices of $X$, and similarly every $H_j$ contains at least two vertices in $X$. By disjointness and $|X| \leq l +m$, we now have $l = m$, each $F_i$ and $H_j$ contains precisely two points in $X$, and further the vertices in $X$ may be cyclically ordered as $v_1,v_2,...,v_{2m}$ so that every $F_i$ contains $v_{2i-1}$ and $v_{2i}$ and every $H_i$ contains $v_{2i}$ and $v_{2i+1}$ modulo $2m$. As $A^*$ and $B^*$ both contain precisely two edges of $S$, each $F_i$ and $H_j$ may contain at most two edges in $S$, and hence there must be subgraphs $G_1$ and $G_2$ satisfying the first property above. This completes the proof. \end{proof}
The following corollary relates Theorem \ref{edmondssplitting} to our interests in Dodgson polynomials and 5-configurations.
\begin{corollary} \label{whatsplitmeans} Let $G$ be a graph and $S$ a 5-configuration. Then $S$ splits if and only if there exists an $e \in S$ so that, setting $G'$ equal to one of $G \setminus e$ or $G / e$, there exist edge-disjoint subgraphs $G_1$ and $G_2$ of $G'$ with $G' = G_1 \cup G_2$ satisfying one of the following: \begin{enumerate} \item $|V(G_1) \cap V(G_2)| \leq 2$ and $|E(G_1) \cap S| = 2$. \item $|V(G_1) \cap V(G_2)| = 1$ and $|E(G_1) \cap S| = 1$. \end{enumerate} \end{corollary}
With this corollary in hand, we can give a simple proof that the graphs in $\mathcal{F}_0$ are non-split. All of these graphs have the property
that whenever $(A,B)$ is a 3-separation, $\min\{ |A|, |B| \} \le 3$. It follows from this and Corollary \ref{whatsplitmeans} that whenever $S$ is any set of edges in one of these graphs which does not contain a triangle or all three edges incident with a vertex of degree 3, then $S$ does not split. So, in particular, the 5-configurations highlighted in Figure \ref{nonsplitfives} do not split.
\section{Enhanced Graphs}\label{wands}
As we shall shortly prove, every minor-minimal non-split graph is rather close to being 3-connected in the sense that it may only have a small number of specially structured 2-separations. It is natural then to replace the small side of such a separation by a single edge encoded with some added information so as to operate in the setting of 3-connected graphs. In this section we adopt this approach, and reduce our problem to that of finding excluded minors in this new setting.
An \emph{enhanced graph} consists of a graph $\widetilde{G}$ together with two distinguished subsets $\widetilde{C}, \widetilde{D} \subseteq E( \widetilde{G} )$. The edges in $\widetilde{C}$ are called \emph{contract-proof} and the edges in $\widetilde{D}$ are \emph{delete-proof}. As before, a set $\widetilde{S}$ of five edges is called a 5-configuration. However, we shall define splitting differently for enhanced graphs. We say that a 5-configuration $\widetilde{S}$ \emph{splits} if some enhanced graph, either $\widetilde{G}$ itself or one obtained from it by deleting an edge in $\widetilde{S} \setminus \widetilde{D}$ or contracting an edge in $\widetilde{S} \setminus \widetilde{C}$, has a separation $(A,B)$ satisfying one of the following: \begin{itemize} \item $(A,B)$ has order at most 1 and $A \cap \widetilde{S} \neq \emptyset \neq B \cap \widetilde{S}$.
\item $(A,B)$ has order 2 and $\min\{ |A \cap \widetilde{S}|, |B \cap \widetilde{S}| \} = 2$. \end{itemize} Note that in the special case when $\widetilde{C} = \widetilde{D} = \emptyset$ this definition aligns with the notion of splitting in ordinary graphs thanks to Corollary \ref{whatsplitmeans}. We say that a 2-separation $(A,B)$ satisfying the second property above is a \emph{bad} $2$-separation (this will generally be the obstruction we encounter). We will refer to both contract-proof and delete-proof edges as forms of \emph{protection}.
Let $G$ be an ordinary connected graph and let $S$ be a non-split 5-configuration in $G$. It follows from the definitions that every 1-separation $(A,B)$ of $G$ must satisfy $A \cap S = \emptyset$ or $B \cap S = \emptyset$. So, there exists a block $G'$ of
$G$ with $S \subseteq E(G')$. Now consider $G'$ and note that it is 2-connected and non-split. It follows from the definitions that every 2-separation $(A,B)$ of $G'$ must satisfy $\min \{ |A \cap S|, |B \cap S| \} \le 1$. Define a subset $\emptyset \neq L \subset E(G')$ to be a
\emph{lobe} if $(L, E(G') \setminus L)$ is a 2-separation with $|L \cap S| \le 1$. It follows from the 2-connectivity of $G'$ that whenever $L,L'$ are lobes with $L \cap L' \neq \emptyset$ the set $L \cup L'$ is also a lobe. Furthermore, every edge $e$ must be contained in a lobe, since $( \{e\}, E(G') \setminus \{e\} )$ is a 2-separation. Therefore, the maximal lobes give us a partition of $E(G')$. Next we construct an enhanced graph $\widetilde{G}$ with $\widetilde{C}$, $\widetilde{D}$ and a 5-configuration $\widetilde{S}$ as follows. We begin with $\widetilde{G} = G'$ and with $\widetilde{C} = \widetilde{D} = \widetilde{S} = \emptyset$. Now, for every maximal lobe $L$ in the graph $G'$ with $\partial(L) = \{x,y\}$, we replace $L$ in $\widetilde{G}$ by the single edge $xy$. If $L \cap S \neq \emptyset$ then we add the edge $xy$ to $\widetilde{S}$. If in addition $L \setminus S$ contains a path from $x$ to $y$ then we add $xy$ to $\widetilde{D}$ and if the unique edge in $S \cap L$ does not have ends $x,y$ then we add $xy$ to $\widetilde{C}$. We say that the enhanced graph $\widetilde{G}$ together with $\widetilde{S}$ are \emph{associated with} $G$ and $S$. The following proposition captures the key properties of this operation.
\begin{prop} Let $S$ be a non-splitting 5-configuration in the connected graph $G$ and let $\widetilde{G}$ together with $\widetilde{S}$ be associated with $G$ and $S$. Then we have: \begin{enumerate} \item $\widetilde{G}$ is 3-connected. \item $\widetilde{S}$ does not split in $\widetilde{G}$. \end{enumerate} \end{prop}
\begin{proof} It follows immediately from the construction that $\widetilde{G}$ is 3-connected. Therefore, the enhanced graph $\widetilde{G}$ has no bad 2-separation. If $e \in \widetilde{S}$ is not delete-proof and $\widetilde{G} \setminus e$ has a bad 2-separation, then it follows immediately that the graph obtained from $G$ by deleting the edge in $S$ associated with $e$ also has a bad 2-separation. A similar argument for contraction reveals that $\widetilde{S}$ does not split in $\widetilde{G}$, as desired. \end{proof}
\begin{center} \begin{figure}
\caption{Excluded Minors isomorphic to $K_4$}
\caption{Excluded Minors isomorphic to $W_4$}
\caption{Excluded Minors isomorphic to $W_5$}
\caption{Excluded Minors isomorphic to $K_5^-$}
\caption{Excluded Minors isomorphic to $P$}
\caption{The graph $P^+$}
\caption{Excluded Minors isomorphic to $P^+$}
\caption{Excluded Minors isomorphic to $D$ and $D^*$}
\caption{Our starting configuration}
\caption{An $(x_1,x_2,x_3)$-doublefan}
\label{ordinary-enhanced}
\label{mainskeleton}
\label{mm3sep}
\label{enhanced dual}
\label{secwheel}
\label{easy_break}
\label{excl_k4}
\label{k4}
\label{nonsplit_wheel}
\label{excl_w4}
\label{w4}
\label{excl_w5}
\label{w5}
\label{k5m_split}
\label{excl_k5m}
\label{exclk5mlem}
\label{excl_p}
\label{exclp}
\label{pp_itself}
\label{excl_pp}
\label{exclpp}
\label{excld}
\label{sec full char}
\label{fminor}
\label{init_struc}
\label{subsec basic}
\label{spinerib}
\label{basic_decomp}
\label{cube-v}
\label{iftriangle}
\label{tame_triangle}
\label{subsec triangle}
\label{notriangle}
\label{triangle-doublefan}
\label{notriangle_extras}
\label{central_doublefan}
\label{spinespinecases}
\end{figure} \end{center}
\begin{case} One of $e_1,e_4$ is a spine, the other a rib. \end{case}
Without loss of generality, assume that $e_4$ is a $v_3$-rib in $G_1$. Further, we may assume that $e_4$ is an inner rib, as otherwise this reduces to Lemma \ref{iftriangle}.
If $e_1$ and $e_4$ do not share a vertex and there is a strictly lower order $v_1$-rib that does not share a vertex with $e_4$, then there is a $P^+(3)$ minor. Hence, either $e_1$ and $e_4$ share a vertex or all rib edges of lower order are $v_3$-ribs.
If there is an inner $v_1$-rib in both $G_0$ and $G_1$ we get a $K_5^-(4)$ minor, seen in the fourth graph of Figure \ref{spineribcases}. If neither side has an inner $v_1$-rib, $\widetilde{G}$ is a wheel.
If $e_1$ is a spine furthest along this path from $v_2$, then $e_2$ must be protected; if $e_2$ is a $v_1$-rib it must be delete-proof, and otherwise it must be contract-proof and $e_1$ must be double protected. In either case, we may assume that there is a non-terminal $v_1$-rib in $G_1$ to avoid a wheel enhanced graph. This produces either a $K_5^-(4)$ or $K_5^-(7)$ minor.
If we suppose that $e_1$ is not a spine of highest index, then precisely one of $G_0$ or $G_1$ must have at least one inner $v_1$-rib, and the other, call it $G_i$, contains only $v_3$-ribs. As such, the second edge of $\widetilde{S}$ in $G_i$ must be protected; contract-proof if it is a $v_3$-rib and delete-proof otherwise. In all cases, this produces a non-splitting graph, each shown in Figure \ref{spineribcases}: the first row covers the case where $G_1$ has no $v_1$-ribs, the second the case where $G_0$ has none, and the columns correspond to spine, $v_1$-rib, and $v_3$-rib protection, respectively. Note in particular that the first $W_4(6)$ minor is produced whether or not the two selected spine edges share a vertex, since there may only be $v_3$-ribs between them if the two distinguished edges do not share a vertex, and the rib must be double protected otherwise. Furthermore, in the case of a $v_3$-rib, illustrated in the last graph of Figure \ref{spineribcases}, we can ignore the case when this $v_3$-rib shares a vertex with $e_1$ by Lemma \ref{notriangle}.
\begin{figure}\label{spineribcases}
\end{figure}
\begin{case} Both $e_1$ and $e_4$ are ribs. \end{case}
Again, we may assume that $e_1$ and $e_4$ are inner ribs by Lemma \ref{notriangle}. By symmetry both distinguished ribs may be assumed to be $v_3$-ribs, as otherwise we get a $K_5^-(2)$ minor. If there exists a $v_1$-rib of strictly lower order, we get a $P^+(5)$ minor, the first graph in Figure \ref{ribribcases}. Thus, we may assume that either there are no vertices properly between $e_1$ and $e_4$ or that there are only $v_3$-ribs properly between. In either case, there may be at most one side with an inner $v_1$-rib after (or incident to) these ribs, as otherwise we get a $K_5^-(7)$ minor, the second graph in Figure \ref{ribribcases}.
If there are no vertices or only $v_3$-ribs between the two closest ribs, then precisely one of $G_0$ or $G_1$ must have at least one inner $v_1$-rib to avoid $\widetilde{G}$ being a wheel; say this is $G_i$. The second edge in $\widetilde{S}$ in $G_i$ must be protected; contract-proof if it is a $v_3$-rib and delete-proof otherwise. These are all non-splitting; the last three graphs in Figure \ref{ribribcases} show these non-splitting minors.
\begin{figure}\label{ribribcases}
\end{figure}
Hence, any minor-minimal non-split graph with two $(v_1,v_2,v_3)$-doublefans is already in $\mathcal{F}$. \end{proof}
\subsection{Two doublefans}
\begin{lemma} \label{general_doublefan} If $\widetilde{G}_i$ is a $(v_1,v_2,v_3)$-doublefan with no delete-proof spine edge, then $\widetilde{G}_{1-i}$ is not a $\{v_1,v_2,v_3\}$-doublefan. \end{lemma}
\begin{proof}
Suppose $\widetilde{G}_0$ is a $\{v_1 , v_2 , v_3\}$-doublefan and $\widetilde{G}_1$ is a $(v_1 , v_2 , v_3 )$-doublefan with no delete-proof spine edge.
Again, $e_1,e_2 \in \widetilde{G}_0$ with $e_1 \geq e_2$ and $e_4,e_5\in \widetilde{G}_1$ with $e_4 \leq e_5$. As $e_4$ must be protected, by assumption it cannot be a spine edge. Without loss of generality say $e_4$ is a $v_1$-rib.
By Lemma \ref{central_doublefan} we can assume that $\widetilde{G}_0$ is either a $(v_1, v_3, v_2)$-doublefan or a $(v_2,v_1,v_3)$-doublefan. By Lemma \ref{notriangle} we know that $e_4$ must be an inner rib.
\startingcases \begin{case} Edge $e_4$ is not incident to $v_2$. \end{case}
By 3-connectivity of $\widetilde{G}$, there is at least one additional $v_2$-rib. If there is a $v_2$-rib which is not incident to $v_1$ or $v_3$ then $G$ has a $P(1)$ minor. Otherwise $\widetilde{G}$ consists of two central doublefans and we are done by Lemma \ref{central_doublefan}. These possibilities are illustrated in Figure \ref{dfdf_case1}.
\begin{figure}\label{dfdf_case1}
\end{figure}
\begin{case} $\widetilde{G}_0$ is a $(v_1,v_3,v_2)$-doublefan and $e_4$ is incident to $v_2$. \end{case}
In this case $e_2$ must be protected. Let $v$ be the last vertex in the spine of $\widetilde{G}_0$. This case has subcases based on what kind of edge $e_2$ is.
Suppose $e_2$ is a $v_2$-rib. Then $e_2$ is contract-protected. If it is not incident to $v$ then $G$ has a $K_5^-(2)$ minor as in Figure \ref{dfdf_case2a}. Suppose $e_2$ is incident to $v$. Then $e_1$ is the rib from $v$ to $v_1$ and so we are done by Lemma \ref{notriangle}.
\begin{figure}\label{dfdf_case2a}
\end{figure}
Suppose $e_2$ is a $v_1$-rib. We may assume that $e_2$ is not incident to $v$ by Lemma \ref{notriangle}. Suppose $e_1$ is not incident to $v$. If there are no inner $v_2$-ribs in $\widetilde{G}_0$ and no inner $v_3$-ribs in $\widetilde{G}_1$ then $\widetilde{G}$ is a wheel and so by Lemma \ref{nonsplit_wheel} it has one of the known minors. Assume there is at least one such rib $r$. There are now two possibilities.
\begin{figure}\label{dfdf_case2b}
\end{figure}
\begin{itemize}
\item Suppose $r$ is either in $\widetilde{G}_1$ or is before $e_2$ in $\widetilde{G}_0$. To avoid a 2-separation separating $e_2$ and $e_3$ from $e_4$ and $e_5$, either there is a $v_2$-rib in $\widetilde{G}_0$ at or after $e_2$, or $e_1$ is protected. In the case of the extra $v_2$-rib or a delete-protected $e_1$, i.e. $e_1$ on the spine or $e_1$ the rib from $v$ to $v_2$, $G$ has a $K_5^-(7)$ minor. In the case where $e_1$ is a contract-protected $v_1$-rib we get a $W_4(3)$ minor. These possibilities are illustrated in the first two graphs of Figure \ref{dfdf_case2b}.
\item Suppose $r$ is at or above $e_2$ in $\widetilde{G}_0$ and we are not in the previous point. As above but with the sides flipped, to avoid a 2-separation separating $e_2$ and $e_3$ from $e_1$ and $e_4$, $e_5$ must be protected. If $e_5$ is delete-protected then $G$ has a $K_5^-(7)$ minor, and otherwise $G$ has a $W_4(3)$ minor. These possibilities are illustrated in the third and fourth graphs of Figure \ref{dfdf_case2b}. \end{itemize}
To complete Case 2 suppose $e_2$ is a spine. If there is no inner $v_2$-rib in $\widetilde{G}_0$ except possibly from $v_2$ to $v_3$ then we are done by Lemma \ref{central_doublefan}. So assume there is such a $v_2$-rib. If this $v_2$-rib is smaller than $e_2$ then we get a $P^+(3)$ minor as illustrated in the first graph of Figure \ref{dfdf_case2_spine}. Assume all $v_2$-ribs are larger than $e_2$ and there is at least one such rib.
To avoid a 2-cut separating $e_4$ and $e_1$ from $e_2$ and $e_3$, either $e_5$ is protected or there is an inner $v_3$-rib. There are four possibilities:
\begin{figure}\label{dfdf_case2_spine}
\end{figure}
\begin{itemize} \item If $e_5$ is protected and is a $v_3$-rib (possibly from $v_2$ to $v_3$) and there is no inner $v_3$-rib, then either there is also a $v_1$-rib in $\widetilde{G}_0$ before $e_2$, or $e_3$ is double protected. In either case this gives $W_4(6)$ as a minor. \item If $e_5$ is protected and is a $v_1$-rib then $G$ has a $W_4(1)$ minor. \item If $e_5$ is protected and is a spine then $G$ has a $P^+(1)$ minor. \item If there is an inner $v_3$-rib then $G$ has a $K^-_5(4)$ minor. \end{itemize}
These possibilities are illustrated in the last four graphs of Figure \ref{dfdf_case2_spine}.
\begin{case} $\widetilde{G}_0$ is a $(v_2,v_1,v_3)$-doublefan and $e_4$ is incident to $v_2$. \end{case}
Assume we are not in the previous case, and so there is at least one inner $v_3$-rib in $\widetilde{G}_0$. If there is at least one inner $v_2$-rib then $G$ has a $K_5^-(2)$ minor. Otherwise both sides again become central doublefans, so we appeal to Lemma \ref{central_doublefan}. These possibilities are illustrated in Figure \ref{dfdf_case3}.
\begin{figure}\label{dfdf_case3}
\end{figure}
\end{proof}
\subsection{Doublefan \& ladder}\label{subsec df-l}
In this subsection we establish another special case of the main theorem by proving that our minimal counterexample $\widetilde{G}$ cannot have a particular structure. However, we first need to define a key structure of interest.
\begin{center} \begin{figure}[h] \centerline{\includegraphics[height=5.5cm]{figures/ladder-def.pdf}} \caption{$(x_1,x_2,x_3)$-ladders} \end {figure} \end{center}
Let $L$ be a graph with distinct vertices $x_1,x_2,x_3$. We define $L$ to be a \emph{basic} $(x_1,x_2,x_3)$-\emph{ladder} if there exist two internally disjoint paths $P_1,P_2$ in $L$ with the following properties: \begin{enumerate} \item $P_i$ is a path of length at least 2 from $x_i$ to $x_{i+1}$ for $i=1,2$. \item Every vertex apart from $x_1,x_2,x_3$ has degree at least 3. \item Every edge in $E(L) \setminus E(P_1 \cup P_2)$ has one end in $P_1$ and the other in $P_2$. \item There do not exist edges $y_1y_2, z_1 z_2 \in E(L) \setminus E(P_1 \cup P_2)$ which ``cross'' in the sense that $y_1, z_1, y_2, z_2$ are distinct and appear in this order along the path $P_1 \cup P_2$. \end{enumerate} We call edges in $P_1 \cup P_2$ \emph{supports} and the other edges \emph{rungs}. Observe that our assumptions imply that the unique neighbour of $x_2$ on $P_1$ and the unique neighbour of $x_2$ on $P_2$ must be adjacent. We say that $L$ is an \emph{extended} $(x_1,x_2,x_3)$-\emph{ladder} if $x_2$ is adjacent to a single vertex $x_2'$ in $L$ and $L \setminus x_2$ is a basic $(x_1,x_2',x_3)$-ladder. We will say that $L$ is an $(x_1,x_2,x_3)$-\emph{ladder} if it is either a basic or extended $(x_1,x_2,x_3)$-ladder.
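Condition 4 is a non-crossing condition on the rungs. If the vertices of $P_1 \cup P_2$ are numbered consecutively from $x_1$ through $x_2$ to $x_3$, two rungs cross exactly when their endpoint intervals interleave without nesting. A minimal sketch of this check (our own illustration, not part of the proof; the numbering convention and function names are our own):

```python
def crosses(e, f):
    """Two rungs cross iff their endpoint intervals along P1 u P2
    interleave: a < c < b < d or c < a < d < b."""
    (a, b), (c, d) = sorted(e), sorted(f)
    return a < c < b < d or c < a < d < b

def is_noncrossing(rungs):
    """Check condition 4 for rungs given as pairs of positions
    along the path P1 u P2 (numbered from x1 to x3)."""
    return not any(crosses(e, f)
                   for i, e in enumerate(rungs)
                   for f in rungs[i + 1:])
```

The strict inequalities automatically enforce the requirement that the four endpoints be distinct, so rungs sharing an endpoint never count as crossing.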
As we did with doublefans, it will be helpful to introduce a total preorder on the edges of a basic ladder. To introduce this, let $L$ be a basic $(x_1,x_2,x_3)$-ladder as defined above and define a preorder on $E(L)$ by the following rule. If $uv$ is a rung, then $uv \le f$ for every edge $f$ with both ends in the subpath of $P_1\cup P_2$ between $u$ and $v$. If $uv$ is a support on the path $P_1$ with $u$ closer to $x_1$ than $v$ (along this path), then let $w$ be the furthest vertex from $v$ on the path $P_1 \cup P_2$ which is joined to $v$ by a rung. Then we define $uv \le f$ for every edge $f$ with both ends in the subpath of $P_1 \cup P_2$ from $v$ to $w$. Finally define the two edges of $P_1 \cup P_2$ incident with $x_2$ to be comparable with each other in both directions. It is straightforward to check that this is indeed a total preorder.
\begin{lemma} \label{df-lad} For $i=0,1$ we do not have $\widetilde{G}_i$ a $(v_1,v_2,v_3)$-doublefan with a delete-proof spine edge and $\widetilde{G}_{1-i}$ a $(v_1,v_2,v_3)$-ladder. \end{lemma}
\begin{center} \begin{figure}[h] \centerline{\includegraphics[scale=0.7]{figures/df-lad_01.pdf}} \caption{} \label{df-lad1} \end {figure} \end{center}
\begin{proof} Suppose towards a contradiction that the lemma fails for $i=0$ (a similar argument works for $i=1$). Note that the delete-proof spine edge of $\widetilde{G}_0$ must be in $\widetilde{S}$ by Lemma \ref{mm3sep}, so we may assume (by possibly relabeling) that $e_2$ is a delete-proof spine edge. If there is a rib edge which precedes $e_2$, then $\widetilde{G}$ contains $P^+(1)$ as a minor (as indicated in the first enhanced graph of Figure \ref{df-lad1}) which is a contradiction. So, we may assume that $e_2$ is the minimal edge in $\widetilde{G}_0$. Similarly, by 3-connectivity, $\widetilde{G}$ will contain $P^+(1)$ if $\widetilde{G}_1$ is an extended ladder, so it must be that $\widetilde{G}_1$ is a basic ladder. If the edges $\{e_4,e_5\}$ are the two elements incident with $v_2$ on this ladder, then $\widetilde{G}$ contains exactly three edges incident with $v_2$ and all are in $\widetilde{S}$, so all must be delete-proof. It then follows that $\widetilde{G}$ contains a $P(5)$ minor (as indicated in the second enhanced graph of Figure \ref{df-lad1}) which is a contradiction. So, we may assume that $e_4 < e_5$ without loss. If $e_4$ is a rung, then $\widetilde{G} / e_4$ has a bad 2-separation, so $e_4$ must be contract-proof. Similarly, if $e_4$ is a support, then $\widetilde{G} \setminus e_4$ has a bad 2-separation, so $e_4$ must be delete-proof. We now split into cases depending on $e_4$.
\begin{center} \begin{figure}[h] \centerline{\includegraphics[height=4cm]{figures/df-lad_struc.pdf}} \caption{} \label{df-lads} \end {figure} \end{center}
\startingcases \begin{case} $e_4$ is a rung. \end{case}
If $e_4$ is not incident with either $v_1$ or $v_3$, then $\widetilde{G}$ contains a $P(3)$ minor as indicated in the third enhanced graph from Figure \ref{df-lad1} which is a contradiction. So, by possibly reindexing, we may assume $e_4$ is incident with $v_1$. If there is an inner $v_3$-rib of $\widetilde{G}_0$, then by contracting the edge of $P_1$ incident with $v_2$ we find that $\widetilde{G}$ contains a $W_4(1)$ minor (see the fourth enhanced graph in Figure \ref{df-lad1}) which is a contradiction. Therefore every inner rib of $\widetilde{G}_0$ is incident with $v_1$. Now setting $A_1^0 = \{ f \in A_1 \setminus \{e_4\} \mid f \le e_4 \}$ and $A_1^1 = A_1 \setminus (A_1^0 \cup \{e_4\})$ we find that $\widetilde{G}$ has the structure indicated on the left in Figure \ref{df-lads}. If the edge $e_1$ is a spine edge, then it must be delete-proof, and $\widetilde{G}$ contains a $P^+(1)$ minor as before, which is contradictory. So, $e_1$ must be a rib edge. If $e_1$ is incident with $v_3$, then it must be delete-proof and then $\widetilde{G}$ contains $W_4(1)$ as a minor (see the fifth enhanced graph in Figure \ref{df-lad1}), which is contradictory. Therefore, $e_1$ is incident with $v_1$, and now it must be contract-proof. However, again in this case $\widetilde{G}$ contains a $W_4(1)$ minor (this is similar to the fourth enhanced graph in Figure \ref{df-lad1} with the subdivided edge replaced by a contract-proof one), which is a contradiction.
\begin{center} \begin{figure}[h] \centerline{\includegraphics[height=3cm]{figures/df-lad_02.pdf}} \caption{} \label{df-lad2} \end {figure} \end{center}
\begin{case} $e_4$ is a support. \end{case}
We may assume (without loss) that $e_4$ is contained in the path $P_1$ from $v_1$ to $v_2$. Let $Q$ be the component of $P_1 \setminus e_4$ containing $v_1$. If there exists an edge between a vertex in $Q$ and the interior of the path $P_2$, then $\widetilde{G}$ contains a $P^+(2)$ minor
(as shown in the first enhanced graph of Figure \ref{df-lad2}) which is contradictory. Therefore, no such edge exists. If there is an inner rib of $\widetilde{G}_0$ incident with $v_1$, then $\widetilde{G}$ contains $P(4)$ (as shown in the second enhanced graph of Figure \ref{df-lad2}) which is a contradiction. So, all inner ribs are incident with $v_3$. Now setting $A_1^0 = \{ f \in A_1 \setminus \{e_4\} \mid f \le e_4 \}$ and $A_1^1 = A_1 \setminus (A_1^0 \cup \{e_4\})$ we find that $\widetilde{G}$ has the structure depicted on the right in Figure \ref{df-lads}. Note that $A_1^0$ is empty if and only if $e_4$ is incident with $v_1$. However, $A_1^1$ contains the edge $e_5$, so it must be nonempty. As before, we now turn our attention to the edge $e_1$. If $e_1$ is a spine edge, then $e_1$ must be delete-proof and then $\widetilde{G}$ contains $P^+(1)$ as before, which is a contradiction. If $e_1$ is incident with $v_3$, then $e_1$ must be contract-proof and we find that $\widetilde{G}$ contains a $P(4)$ minor (this is similar to the second enhanced graph in Figure \ref{df-lad2} except with the subdivided edge replaced by a contract-proof one), which is contradictory. Therefore $e_1$ is the unique edge of $A_0$ incident with $v_1$. In this case $e_1$ must be delete-proof. If $A_1^0$ is empty, then all three edges incident with $v_1$ are in $\widetilde{S}$ and must therefore be delete-proof, and this gives us a $P(7)$ minor (as indicated in the third enhanced graph of Figure \ref{df-lad2}), which is a contradiction. Otherwise, $A_1^0$ is nonempty and $\widetilde{G}$ contains a $W_4(6)$ minor as indicated in the fourth enhanced graph from Figure \ref{df-lad2}. This final contradiction completes the proof. \end{proof}
\subsection{Proof of Theorem \ref{mainskeleton}}\label{subsec proof}
In this subsection we will complete our proof of the main result. We begin by investigating the presence of small rooted minors. For brevity, we will say that $\widetilde{G}_i$ contains one of the enhanced graphs $F$ from Figure \ref{triad_halves} if $(\widetilde{G}_i ; v_1,v_2,v_3)$ contains $(F; v_1, v_2, v_3)$ as a rooted minor.
\begin{center} \begin{figure}[h] \centerline{\includegraphics[height=2.4cm]{figures/triad_halves.pdf}} \caption{} \label{triad_halves} \end {figure} \end{center}
\begin{lemma} \label{no_triad} For $i=0,1$, either $\widetilde{G}_i$ contains triad, or it is a $(v_1,v_2,v_3)$-doublefan. \end{lemma}
\begin{proof} If $\widetilde{G}_i \setminus \{v_1,v_3\}$ contains a cycle, there exist three vertex disjoint paths from $\{v_1,v_2,v_3\}$ to this cycle, and $\widetilde{G}_i$ contains triad. Otherwise, Lemma \ref{basic_decomp} along with Lemmas \ref{iftriangle} and \ref{notriangle} imply that $\widetilde{G}_i$ is a $\{v_1,v_2,v_3\}$-doublefan, and the result follows easily. \end{proof}
\begin{lemma} \label{triad_struc} For $i=0,1$, if $\widetilde{G}_i$ contains triad, then $\widetilde{G}_{1-i}$ is either a $(v_1,v_2,v_3)$-doublefan or a $(v_1,v_2,v_3)$-ladder. \end{lemma}
\begin{proof} We will not use the size of the $A_i$ in this argument, so we may assume without loss that $i=0$. If $v_2$ is incident with at least two edges in $A_1$ then set $G' = \widetilde{G}_1$ and set $v_2' = v_2$. Otherwise, let $v_2'$ be the vertex adjacent to $v_2$ by an edge in $A_1$ and set $G' = \widetilde{G}_1 \setminus v_2$.
\begin{center} \begin{figure}[h] \centerline{\includegraphics[scale=0.65]{figures/triad_extras2.pdf}} \caption{} \label{triad_extras} \end {figure} \end{center}
First suppose that $v_2'$ is adjacent to one of $v_1,v_3$. If the enhanced graph $G' \setminus \{v_1,v_3\}$ contains a cycle, then $\widetilde{G}$ contains $P^+(1)$ as a minor (as seen in the first enhanced graph in Figure \ref{triad_extras}) which is contradictory. Otherwise, $\widetilde{G}_1$ is a $(v_1,v_2,v_3)$-doublefan and we are finished. So, we may assume $v_2'$ is not adjacent to $v_1,v_3$. Now, $\widetilde{G}$ is a 3-connected planar graph, and $v_1,v_2',v_3$ form a 3-vertex separation, so there is a path $P_1 \subseteq G'$ from $v_1$ to $v_2'$ which forms a part of a facial cycle, and there is a similar path $P_2 \subseteq G'$ from $v_2'$ to $v_3$. Note that our assumptions imply that these paths both have length at least two, and meet only at $v_2'$.
There must not exist a path of length at least two which is internally disjoint from $P_1 \cup P_2$ and has one end in the interior of $P_1$ and the other in the interior of $P_2$, as in this case $\widetilde{G}$ contains $P(3)$ (as shown in the second graph from Figure \ref{triad_extras}). Since the graph obtained from the union of the faces of $\widetilde{G}$ containing $v_2'$ by deleting the vertex $v_2'$ is a cycle (by 3-connectivity and planarity) it then follows that $v_2'$ is adjacent with exactly two vertices in $G'$, one vertex $v_2^-$ in $V(P_1)$ and one vertex $v_2^+$ in $V(P_2)$ and these three vertices form a triangular face.
If $V(G') = V(P_1) \cup V(P_2)$ then the planarity and 3-connectivity of $\widetilde{G}$ imply that $G'$ is a $(v_1,v_2',v_3)$-ladder and we are finished. Otherwise, let $X$ be the vertex set of a component of $G' \setminus (V(P_1) \cup V(P_2))$. There must not exist a vertex in the interior of $P_1$ adjacent to a point in $X$ and another in the interior of $P_2$ adjacent to a point in $X$, as then $\widetilde{G}$ would contain $P(3)$ (as before). Thus by 3-connectivity and planarity (and symmetry between $v_1$ and $v_3$), we may assume that there is a vertex in $X$ adjacent to $v_3$ and all other vertices outside $X$ which are adjacent to a point in $X$ lie in the path $P_1$. Note that this implies that $G'$ contains Hawk.
\begin{center} \begin{figure}[h] \centerline{\includegraphics[height=4.3cm]{figures/triad_lem_struc2.pdf}} \caption{} \label{triad_lem_struc} \end {figure} \end{center}
Let $w$ be the vertex on the path $P_1$ nearest $v_2'$ which is adjacent to a point in $X$. Now, by planarity, there exists a partition
of $E(G')$ into $\{A_1^0, A_1^1 \}$ so that $\partial(A_1^0) = \{v_1, w, v_3\}$ and $\partial(A_1^1) = \{ v_2', w, v_3 \}$ as on the right-hand side of Figure \ref{triad_lem_struc}. Furthermore, $|A_1^0|, |A_1^1| \ge 4$, so by Lemma \ref{mm3sep} we have $|A_1^0 \cap \widetilde{S}| = |A_1^1 \cap \widetilde{S}| = 1$.
Equipped with this information, we now turn our attention to $\widetilde{G}_0$. If $\widetilde{G}_0 \setminus \{v_1,v_3\}$ contains a cycle, then $\widetilde{G}_0$ contains Half-Basket, and since $\widetilde{G}_1$ also contains Half-Basket, we find that $\widetilde{G}$ contains $D^*(1)$, which is a contradiction. It follows that $\widetilde{G}_0$ is a $(v_1,v_2,v_3)$-doublefan. Let $u_1, u_2, \ldots, u_n$ be the vertex sequence in the path $\widetilde{G}_0 \setminus \{v_1,v_3\}$ and assume $v_2 = u_1$. Note that the assumption that $\widetilde{G}_0$ contains triad implies that $\widetilde{G}_0$ has a delete-proof spine edge $u_i u_{i+1}$. If there is a smaller rib edge, then $\widetilde{G}$ contains $P^+(1)$ (as shown in the first graph of Figure \ref{triad_extras}) which is a contradiction. So, by 3-connectivity, the only delete-proof spine edge is $u_1 u_2 = v_2 u_2$. For the same reason, it must be that $v_2' = v_2$. If there is an inner $v_1$-rib, then $\widetilde{G}$ contains $W_4(1)$ (as shown in the third graph of Figure \ref{triad_extras}) which is a contradiction. So, we may assume that every inner rib is incident with $v_3$, and Figure \ref{triad_lem_struc} depicts our graph.
The edge $u_1 u_2$ must be in $\widetilde{S}$ by Lemma \ref{mm3sep}, so we may assume $e_2 = u_1 u_2$. If $e_1$ is a spine edge, then it must be delete-proof (since $\widetilde{G} \setminus e_1$ has a bad 2-separation using the vertices $w,v_3$). However, then $\widetilde{G}$ contains $P^+(1)$ (as shown in the first graph of Figure \ref{triad_extras}) which is contradictory. If $e_1$ is a $v_3$-rib then it must be contract-proof, and $\widetilde{G}$ contains $W_4(1)$ (as shown in the last graph of Figure \ref{triad_extras}) which is contradictory. Thus $e_1 = v_1 u_n$, and this edge must be delete-proof. However then $\widetilde{G}$ has a $K_5^-(4)$ minor similar to that in the third graph of Figure \ref{triad_extras}. This final contradiction completes the proof. \end{proof}
Now we are ready to complete our results by proving Lemma \ref{fminor}.
\noindent{\it Proof of Lemma \ref{fminor}.} If neither $\widetilde{G}_0$ nor $\widetilde{G}_1$ contains triad, then Lemma \ref{no_triad} implies that both are $(v_1,v_2,v_3)$-doublefans contradicting Lemma \ref{central_doublefan}. If they both contain triad, then by Lemma \ref{triad_struc} each of $\widetilde{G}_0,\widetilde{G}_1$ must either be a $(v_1,v_2,v_3)$-doublefan or a $(v_1,v_2,v_3)$-ladder. If both are $(v_1,v_2,v_3)$-doublefans, then we again have a contradiction to Lemma \ref{central_doublefan}. If both are $(v_1,v_2,v_3)$-ladders, then each contains Half-Basket, so $\widetilde{G}$ has $D^*(1)$ as a minor, which is a contradiction. Finally, if exactly one is a ladder, then the other is a $(v_1,v_2,v_3)$-doublefan with a delete-proof spine edge, contradicting Lemma \ref{df-lad}.
In the remaining case we may assume $\widetilde{G}_0$ does not contain triad but $\widetilde{G}_1$ does. It follows from Lemma \ref{no_triad} and the assumption that $\widetilde{G}_0$ has no triad that $\widetilde{G}_0$ is a $(v_1,v_2,v_3)$-doublefan without a delete-proof spine edge. Since there is no triangle of edges in $\widetilde{S}$, we may then assume (by possibly switching $v_1$ and $v_3$) that $\widetilde{G}_0$ contains Sail. If $\widetilde{G}_1 \setminus \{v_1,v_2,v_3\}$ contains a cycle, then $\widetilde{G}$ has a $P(1)$ minor which is contradictory. Thus, Lemma \ref{basic_decomp} implies that $\widetilde{G}_1$ is a $\{v_1,v_2,v_3\}$-doublefan, and now Lemma \ref{general_doublefan} gives us a contradiction. \qed
\section{Code}
Throughout this project, we used computers in a variety of ways. The lion's share of code was written in Sage \cite{sagemath}, a freely available, open source mathematical software package built on top of the Python programming language. Initially, we developed code to explore the splitting property of graphs ``from the bottom up,'' considering for each configuration of five edges the possible ways in which their endpoints were connected within a larger graph. The code looked for graphs that had induced spanning trees in minors corresponding to the various five-edge configurations, using Proposition \ref{calcdodgson} in Section \ref{sec splitting}. This approach yielded limited results for certain connected configurations, but the number of possibilities quickly became unwieldy for more general configurations.
We then took the opposite ``top down'' approach to characterizing split graphs, starting with the graph and looking at various configurations of five edges contained in it. This latter approach proved to be much more effective, although not initially. The computational overhead for calculating Kirchhoff matrices and Dodgson polynomials for each combination of five edges was simply too high (more than a week running on 4 parallel 2.13 GHz cores to run through graphs up to 12 edges). This bottleneck was somewhat alleviated by using large random integers rather than symbolic polynomials, although this opened the door to the (extremely unlikely) possibility of falsely identifying a five-edge configuration as split as a result of a numerical coincidence. It should be stressed that these calculations were meant as rough guides towards the characterization of the set of forbidden minors, not as the substance of rigorous proofs.
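The random-evaluation trick is sound because, by the Matrix--Tree theorem, the spanning-tree (Kirchhoff) polynomial of a graph evaluated at integer edge weights equals the determinant of the reduced weighted Laplacian, so polynomial expressions can be compared at random integer points instead of symbolically. The following self-contained sketch illustrates the idea in plain Python (our own illustration; the project code itself was written in Sage and also handled Dodgson polynomials):

```python
import random
from fractions import Fraction
from itertools import combinations

def kirchhoff_value(n, weighted_edges):
    """Evaluate the Kirchhoff polynomial of a graph on vertices 0..n-1
    at the given edge weights, as the determinant of the reduced
    weighted Laplacian (Matrix-Tree theorem)."""
    L = [[Fraction(0)] * n for _ in range(n)]
    for (u, v), w in weighted_edges:
        L[u][u] += w; L[v][v] += w
        L[u][v] -= w; L[v][u] -= w
    # determinant of the Laplacian with row/column 0 removed,
    # by Gaussian elimination over exact rationals
    M = [row[1:] for row in L[1:]]
    det = Fraction(1)
    for i in range(n - 1):
        pivot = next((r for r in range(i, n - 1) if M[r][i] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != i:
            M[i], M[pivot] = M[pivot], M[i]
            det = -det
        det *= M[i][i]
        for r in range(i + 1, n - 1):
            f = M[r][i] / M[i][i]
            for c in range(i, n - 1):
                M[r][c] -= f * M[i][c]
    return det

def spanning_tree_sum(n, weighted_edges):
    """Brute force: sum over spanning trees of the product of weights."""
    total = Fraction(0)
    for subset in combinations(weighted_edges, n - 1):
        parent = list(range(n))
        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]
                a = parent[a]
            return a
        acyclic = True
        for (u, v), _ in subset:
            ru, rv = find(u), find(v)
            if ru == rv:
                acyclic = False
                break
            parent[ru] = rv
        if acyclic:
            p = Fraction(1)
            for _, w in subset:
                p *= w
            total += p
    return total
```

Evaluating both sides of a candidate identity at large random integer weights takes one determinant per side; as noted above, a coincidental agreement of two distinct polynomials at such a point is possible but extremely unlikely.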
Each new discovery, as a result of an exhaustive search in Sage, led to new theoretical developments that necessitated rewriting the code to optimize the search. Especially after relaxing the 3-connected hypothesis, the pantheon of minor-minimal non-splitting graphs grew with each larger size of graphs tested. Using brute-force methods, the search was reaching the limits of feasibility. Here the development of the matroid approach, using enhanced graphs (Section \ref{wands}), was pivotal. The calculation was broken into ``phases,'' initially assuming that all types of protection were both contract- and delete-proof (one of the configurations of three edges on the right of Figure \ref{ordinary-enhanced}). Those graphs that did not split were passed to the second phase, in which the smaller choices of protection were tested. In this way, \emph{minimal} non-splitting graphs were found. To alleviate the significant memory strain, results from the first phase were written to file in a systematic way and retrieved for the second phase of the calculation. All told, the total processing time for this final successful approach was measured in days. Using this approach we exhaustively checked all enhanced graphs where the underlying graph had at most 11 edges.
Happily, this exhaustive search for minimal non-split graphs agrees completely with the characterization in Theorem \ref{mainskeleton}.
\appendix
\section{All excluded minors}\label{app all_excl}
The full list of excluded minors for splitting is $K_5$, $K_{3,3}$, the octahedron $O$, the cube $C$, $H$ as defined in Figure \ref{firsth}, and the following enhanced graphs:
\noindent
\includegraphics[width=\linewidth]{figures/excl_all2}
\end{document}
\begin{document}
\title[higher-order Bernoulli, Euler and Hermite polynomials]{Some identities of higher-order Bernoulli, Euler and Hermite polynomials arising from umbral calculus}
\author{Dae San Kim$^1$} \address{$^1$ Department of Mathematics, Sogang University, Seoul 121-742, Republic of Korea.} \email{dskim@sogang.ac.kr} \author{Taekyun Kim$^2$} \address{$^2$ Department of Mathematics, Kwangwoon University, Seoul 139-701, Republic of Korea.} \email{tkkim@kw.ac.kr} \author{Dmitry V. Dolgy $^3$} \address{$^3$ Hanrimwon, Kwangwoon University, Seoul 139-701, Republic of Korea.} \email{dgekw2011@gmail.com}
\author{Seog-Hoon Rim$^4$} \address{$^4$ Department of Mathematics Education, Kyungpook National University, Taegu 702-701, Republic of Korea.} \email{shrim@knu.ac.kr}
\subjclass{05A10, 05A19.} \keywords{Bernoulli polynomial, Euler polynomial, Abel polynomial.}
\maketitle
\begin{abstract} In this paper, we use the umbral calculus to derive some interesting identities of the higher-order Bernoulli, Euler and Hermite polynomials, obtained by expanding each family of polynomials in terms of another. \end{abstract}
\section{Introduction} As is well known, the Hermite polynomials are defined by the generating function to be \begin{equation}\label{1} e^{2xt-t^2}=e^{H(x)t}=\sum_{n=0} ^{\infty} H_n (x) \frac{t^n}{n!}, \end{equation} with the usual convention about replacing $H^n(x)$ by $H_n(x)$ (see \cite{08, 10}). In the special case, $x=0$, $H_n(0)=H_n$ are called the {\it{$n$-th Hermite numbers}}. The Bernoulli polynomials of order $r$ are given by the generating function to be \begin{equation}\label{2} \left(\frac{t}{e^t-1}\right)^re^{xt}=\sum_{n=0} ^{\infty} B_n ^{(r)} (x) \frac{t^n}{n!},~(r \in {\mathbb{R}}). \end{equation} From \eqref{2}, the $n$-th Bernoulli numbers of order $r$ are defined by $B_n ^{(r)}(0)=B_n^{(r)}$ (see [1-16]). The higher-order Euler polynomials are also defined by the generating function to be \begin{equation}\label{3} \left(\frac{2}{e^t+1}\right)^re^{xt}=\sum_{n=0} ^{\infty}E_n ^{(r)} (x)\frac{t^n}{n!},~(r \in {\mathbb{R}}), \end{equation} and $E_n ^{(r)}(0)=E_n ^{(r)}$ are called the {\it{$n$-th Euler numbers}} of order $r$ (see [1-16]).
The first Stirling number is given by \begin{equation}\label{4} (x)_n=x(x-1)\cdots(x-n+1)=\sum_{l=0} ^n S_1(n,l)x^l,{\text{ (see [6,13])}}, \end{equation} and the second Stirling number is defined by the generating function to be \begin{equation}\label{5} (e^t-1)^n=n!\sum_{l=n} ^{\infty} S_2(l,n) \frac{t^l}{l!},{\text{ (see [6,9,13])}}. \end{equation} For $\lambda(\neq 1) \in {\mathbb{C}}$, the {\it{Frobenius-Euler polynomials}} are given by \begin{equation}\label{6}
\left(\frac{1-\lambda}{e^t-\lambda}\right)^re^{xt}=\sum_{n=0} ^{\infty} H_n ^{(r)}(x|\lambda)\frac{t^n}{n!},~(r \in {\mathbb{R}}){\text{ (see [1,5])}}. \end{equation}
In the special case, $x=0$, $H_n ^{(r)}(0|\lambda)=H_n ^{(r)}(\lambda)$ are called the {\it{$n$-th Frobenius-Euler numbers}} of order $r$.
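The polynomials in \eqref{2} and \eqref{3} can be generated directly from these exponential generating functions by formal power series arithmetic. The following sketch (our own illustration, not part of the paper; all helper names are ours) expands $\frac{t}{e^t-1}e^{xt}$ and $\frac{2}{e^t+1}e^{xt}$ with exact rational coefficients, recovering for instance $B_2(x)=x^2-x+\frac16$ and $E_2(x)=x^2-x$:

```python
from fractions import Fraction as F
from math import factorial

def pmul(p, q):
    """Multiply two polynomials in x (lists of coefficients)."""
    r = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def padd(p, q):
    """Add two polynomials in x."""
    r = [F(0)] * max(len(p), len(q))
    for i, a in enumerate(p): r[i] += a
    for j, b in enumerate(q): r[j] += b
    return r

def series_mul(A, B, N):
    """Product of power series in t with polynomial-in-x coefficients,
    truncated at order N."""
    C = [[F(0)] for _ in range(N)]
    for i in range(min(N, len(A))):
        for j in range(min(N - i, len(B))):
            C[i + j] = padd(C[i + j], pmul(A[i], B[j]))
    return C

def series_inv(A, N):
    """Reciprocal of a power series with nonzero constant term."""
    inv = [[F(0)] for _ in range(N)]
    inv[0] = [F(1) / A[0][0]]
    for n in range(1, N):
        s = [F(0)]
        for k in range(1, n + 1):
            if k < len(A):
                s = padd(s, pmul(A[k], inv[n - k]))
        inv[n] = [-F(1) / A[0][0] * c for c in s]
    return inv

N = 5
e_xt = [[F(0)] * n + [F(1, factorial(n))] for n in range(N)]     # e^{xt}
f1 = [[F(1, factorial(n + 1))] for n in range(N)]                # (e^t-1)/t
f2 = [[F(1)]] + [[F(1, 2 * factorial(n))] for n in range(1, N)]  # (e^t+1)/2

bern = series_mul(series_inv(f1, N), e_xt, N)   # sum B_n(x) t^n / n!
eulr = series_mul(series_inv(f2, N), e_xt, N)   # sum E_n(x) t^n / n!
B = [[factorial(n) * c for c in bern[n]] for n in range(N)]
E = [[factorial(n) * c for c in eulr[n]] for n in range(N)]
```

Raising the inverted factors to an $r$-th power before multiplying by $e^{xt}$ would produce the order-$r$ polynomials $B_n^{(r)}(x)$ and $E_n^{(r)}(x)$ in the same way.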
Let ${\mathcal{F}}$ be the set of all formal power series in the variable $t$ over ${\mathbb{C}}$ with \begin{equation*}
{\mathcal{F}}=\left\{ \left.f(t)=\sum_{k=0} ^{\infty} \frac{a_k}{k!} t^k~\right|~ a_k \in {\mathbb{C}} \right\}. \end{equation*}
Let us assume that ${\mathbb{P}}$ is the algebra of polynomials in the variable $x$ over ${\mathbb{C}}$ and ${\mathbb{P}}^{*}$ is the vector space of all linear functionals on ${\mathbb{P}}$. $\left< L~|~p(x)\right>$ denotes the action of the linear functional $L$ on a polynomial $p(x)$ and we recall that the vector space structure on ${\mathbb{P}}^{*}$ is defined by \begin{equation*} \begin{split}
\left<L+M|p(x) \right>&=\left<L|p(x) \right>+\left<M|p(x) \right> ,\\
\left<cL|p(x) \right>&=c\left<L|p(x) \right>, \end{split} \end{equation*} where $c$ is a complex constant (see \cite{06,09,13}).
The formal power series $f(t)=\sum_{k=0} ^{\infty} \frac{a_k}{k!}t^k \in {\mathcal{F}}$ defines a linear functional on ${\mathbb{P}}$ by setting \begin{equation}\label{7}
\left<f(t)|x^n \right>=a_n,{\text{ for all }} n\geq 0{\text{ (see [6,9,13])}}. \end{equation} Then, by \eqref{7}, we get \begin{equation}\label{8}
\left<t^k | x^n \right>=n! \delta_{n,k},~(n,k \geq 0), \end{equation} where $\delta_{n,k}$ is the Kronecker symbol (see \cite{06,09,13}).
Let $f_L(t)=\sum_{k=0} ^{\infty} \frac{\left<L|x^k\right>}{k!}t^k$ (see \cite{13}). Then we have $\left<f_L(t)|x^n\right>=\left<L|x^n\right>$. The map $L \mapsto f_L(t)$ is a vector space isomorphism from ${\mathbb{P}}^{*}$ onto ${\mathcal{F}}$. Henceforth, ${\mathcal{F}}$ will be thought of as both a formal power series and a linear functional. We shall call ${\mathcal{F}}$ the {\it{umbral algebra}}. The umbral calculus is the study of the umbral algebra (see \cite{06,09,13}).
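Concretely, the pairing defined by \eqref{7} is coefficient extraction: if $f(t)=\sum_{k\geq 0} a_k t^k/k!$ and $p(x)=\sum_{n} c_n x^n$, then by \eqref{8} the pairing collapses to $\left<f(t)|p(x)\right>=\sum_n c_n a_n$; in particular $\left<e^{yt}|p(x)\right>=p(y)$, the evaluation functional used below. A minimal sketch (our own illustration; the function names are ours):

```python
from fractions import Fraction as F

def pairing(a, c):
    """<f(t) | p(x)> where f(t) = sum a[k] t^k / k! and
    p(x) = sum c[n] x^n: since <t^k | x^n> = n! delta_{n,k},
    the pairing is sum_n c[n] * a[n]."""
    return sum(cn * an for an, cn in zip(a, c))

def eval_functional(y, deg):
    """Coefficient sequence a_k = y^k of f(t) = e^{yt}."""
    return [F(y) ** k for k in range(deg + 1)]

# p(x) = 3x^2 - x + 5, stored by ascending degree
p = [F(5), F(-1), F(3)]
```

For instance, pairing the evaluation functional at $y$ with $p$ returns $p(y)$, and pairing $f(t)=t$ (coefficients $[0,1,0]$) with $x^n$ reproduces the Kronecker relation \eqref{8}.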
The order $o(f(t))$ of the non-zero power series $f(t)$ is the smallest integer $k$ for which the coefficient of $t^k$ does not vanish. A series $f(t)$ having $o(f(t))=1$ is called a {\it{delta series}} and a series $f(t)$ having $o(f(t))=0$ is called an {\it{invertible series}}. Let $f(t)$ be a delta series and $g(t)$ be an invertible series. Then there exists a unique sequence $S_n(x)$ of polynomials such that $\left<g(t)f(t)^k|S_n(x)\right>=n!\delta_{n,k}$ where $n,k \geq 0$. The sequence $S_n(x)$ is called {\it{Sheffer sequence}} for $(g(t),f(t))$, which is denoted by $S_n(x)\sim (g(t),f(t))$. By \eqref{7} and \eqref{8}, we see that $\left<e^{yt}|p(x)\right>=p(y)$. For $f(t)\in {\mathcal{F}}$ and $p(x) \in {\mathbb{P}}$, we have \begin{equation}\label{9}
f(t)=\sum_{k=0} ^{\infty} \frac{\left<f(t)|x^k\right>}{k!} t^k,~p(x)=\sum_{k=0} ^{\infty} \frac{\left<t^k|p(x)\right>}{k!}x^k, \end{equation} and, by \eqref{9}, we get \begin{equation}\label{10}
p^{(k)}(0)=\left<t^k|p(x)\right>,~\left<1|p^{(k)}(x)\right>=p^{(k)}(0). \end{equation} Thus, from \eqref{10}, we have \begin{equation}\label{11} t^kp(x)=p^{(k)}(x)=\frac{d^kp(x)}{dx^k}. \end{equation}
In \cite{06,09,13}, we note that $\left<f(t)g(t)|p(x)\right>=\left<g(t)|f(t)p(x)\right>$.
For $S_n(x) \sim \left(g(t),f(t)\right)$, we have \begin{equation}\label{11a} \frac{1}{g({\bar{f}}(t))}e^{y{\bar{f}}(t)}=\sum_{k=0} ^{\infty} \frac{S_k(y)}{k!}t^k,{\text{ for all }}y \in {\mathbb{C}}, \end{equation} where ${\bar{f}}(t)$ is the compositional inverse of $f(t)$. For $S_n(x) \sim \left(g(t),f(t)\right)$ and $r_n(x)\sim(h(t),l(t))$, let us assume that \begin{equation}\label{12} S_n(x)=\sum_{k=0} ^n C_{n,k} r_k(x),{\text{ (see [6,9,13])}}. \end{equation} Then we have \begin{equation}\label{13}
C_{n,k}=\frac{1}{k!}\left. \left< \frac{h({\bar{f}}(t))}{g({\bar{f}}(t))}l({\bar{f}}(t))^k\right| x^n \right>,{\text{ (see [13])}}. \end{equation} Equations \eqref{12} and \eqref{13} give alternative expressions of one Sheffer sequence in terms of another.
In this paper, we apply this machinery to derive some interesting identities of the higher-order Bernoulli, Euler and Hermite polynomials by expanding each family of polynomials in terms of another.
\section{Some identities of higher-order Bernoulli, Euler and Hermite polynomials}
In this section, we use the formula \eqref{13} for the coefficients $C_{n,k}$ to obtain our results. Let us consider the following Sheffer sequences: \begin{equation}\label{14} E_n ^{(r)}(x)\sim\left(\left(\frac{e^t+1}{2}\right)^r,t\right),H_n(x)\sim\left(e^{\frac{1}{4}t^2},\frac{t}{2}\right). \end{equation} Then, by \eqref{12}, we may write \begin{equation}\label{15} E_n ^{(r)}(x)=\sum_{k=0} ^n C_{n,k}H_k(x). \end{equation} From \eqref{13} and \eqref{15}, we have \begin{equation}\label{16} \begin{split}
C_{n,k}&=\frac{1}{k!}\left. \left<\frac{e^{\frac{1}{4}t^2}}{\left(\frac{e^t+1}{2}\right)^r}\left(\frac{t}{2}\right)^k \right| x^n\right> \\
&=\frac{1}{k!2^k} \left. \left< \left(\frac{2}{e^t+1}\right)^re^{\frac{1}{4}t^2}\right|t^kx^n\right> \\
&=2^{-k}\binom{n}{k}\left.\left<\left(\frac{2}{e^t+1}\right)^r\right| e^{\frac{1}{4}t^2}x^{n-k}\right> \\
&=2^{-k}\binom{n}{k}\left< \left.\left(\frac{2}{e^t+1}\right)^r\right| \sum_{l=0} ^{\left[\frac{n-k}{2}\right]} \frac{1}{4^ll!}t^{2l}x^{n-k}\right> \\
&=2^{-k}\binom{n}{k}\sum_{l=0} ^{\left[\frac{n-k}{2}\right]} \frac{1}{2^{2l}l!}(n-k)_{2l}\left<\left.1\right. \left|\left(\frac{2}{e^t+1}\right)^rx^{n-k-2l}\right. \right>\\ &=2^{-k}\binom{n}{k}\sum_{l=0} ^{\left[\frac{n-k}{2}\right]} \frac{1}{2^{2l}l!}(n-k)_{2l} E_{n-k-2l} ^{(r)} \\ &=n!\sum_{0 \leq l \leq n-k,~l:{\text{even}}}\frac{E_{n-k-l} ^{(r)}}{\left(\frac{l}{2}\right)!2^{k+l}k!(n-k-l)!}. \end{split} \end{equation} Therefore, by \eqref{15} and \eqref{16}, we obtain the following theorem. \begin{theorem} For $n \geq 0$, we have \begin{equation*} E_n ^{(r)} (x)=n!\sum_{k=0} ^n \left\{\sum_{0 \leq l \leq n-k,~l:{\text{even}}}\frac{E_{n-k-l} ^{(r)}}{k!(n-k-l)!2^{k+l}\left(\frac{l}{2}\right)!}\right\}H_k(x). \end{equation*} \end{theorem} Let us consider the following two Sheffer sequences: \begin{equation}\label{17} B_n ^{(r)}(x)\sim\left(\left(\frac{e^t-1}{t}\right)^r,t\right),~H_n(x)\sim\left(e^{\frac{1}{4}t^2},\frac{t}{2}\right). \end{equation} Let us assume that \begin{equation}\label{18} B_n ^{(r)}(x)=\sum_{k=0} ^n C_{n,k}H_k(x). \end{equation} By \eqref{13} and \eqref{17}, we get \begin{equation}\label{19} \begin{split}
C_{n,k}&=\frac{1}{k!}\left. \left<\frac{e^{\frac{1}{4}t^2}}{\left(\frac{e^t-1}{t}\right)^r}\left(\frac{t}{2}\right)^k \right| x^n\right> \\
&=2^{-k}\binom{n}{k}\left.\left<\left(\frac{t}{e^t-1}\right)^r\right| \sum_{l=0} ^{\infty} \left(\frac{1}{4}\right)^l \frac{1}{l!} t^{2l}x^{n-k}\right> \\
&=2^{-k}\binom{n}{k}\sum_{l=0} ^{\left[\frac{n-k}{2}\right]} \frac{1}{l!4^l}(n-k)_{2l} \left. \left<\left(\frac{t}{e^t-1}\right)^r \right|x^{n-k-2l}\right>\\
&=2^{-k}\binom{n}{k}\sum_{l=0} ^{\left[\frac{n-k}{2}\right]} \frac{(n-k)!}{l!2^{2l}(n-k-2l)!}\left< \left.1 \right. \left| \left(\frac{t}{e^t-1} \right)^rx^{n-k-2l}\right. \right> \\ &=2^{-k}\binom{n}{k}\sum_{l=0} ^{\left[\frac{n-k}{2}\right]} \frac{(n-k)!}{l!2^{2l}(n-k-2l)!}B_{n-k-2l} ^{(r)} \\ &=n!\sum_{0 \leq l \leq n-k,~l:{\text{even}}}\frac{B_{n-k-l} ^{(r)}}{k!(n-k-l)!2^{k+l}\left(\frac{l}{2}\right)!}. \end{split} \end{equation} Therefore, by \eqref{18} and \eqref{19}, we obtain the following theorem. \begin{theorem} For $n \geq 0$, we have \begin{equation*} B_n ^{(r)}(x)=n!\sum_{k=0} ^n \left\{\sum_{0 \leq l \leq n-k,~l:{\text{even}}}\frac{B_{n-k-l} ^{(r)}}{k!(n-k-l)!2^{k+l}\left(\frac{l}{2}\right)!} \right\}H_k(x). \end{equation*} \end{theorem} Consider \begin{equation}\label{20}
H_n ^{(r)}(x|\lambda)\sim\left(\left(\frac{e^t-\lambda}{1-\lambda}\right)^r,t\right),~H_n(x)\sim\left(e^{\frac{1}{4}t^2},\frac{t}{2}\right). \end{equation} Let us assume that \begin{equation}\label{21}
H_n ^{(r)}(x|\lambda)=\sum_{k=0} ^n C_{n,k}H_k(x). \end{equation} By \eqref{13} and \eqref{20}, we get \begin{equation}\label{22} \begin{split}
C_{n,k}&=\frac{1}{k!}\left. \left<\frac{e^{\frac{1}{4}t^2}}{\left(\frac{e^t-\lambda}{1-\lambda}\right)^r}\left(\frac{t}{2}\right)^k \right| x^n\right> \\
&=\frac{1}{k!2^k}\left.\left<\left(\frac{1-\lambda}{e^t-\lambda}\right)^re^{\frac{1}{4}t^2}\right| t^k x^n\right> \\
&=2^{-k}\binom{n}{k}\sum_{l=0} ^{\left[\frac{n-k}{2}\right]} \frac{(n-k)_{2l}}{l!4^l}\left< \left.1 \right. \left| \left(\frac{1-\lambda}{e^t-\lambda} \right)^rx^{n-k-2l}\right. \right> \\ &=n!\sum_{l=0} ^{\left[\frac{n-k}{2}\right]}\frac{H_{n-k-2l} ^{(r)}(\lambda)}{l!2^{2l+k}(n-k-2l)!k!} \\ &=n!\sum_{0 \leq l \leq n-k,~l:{\text{even}}}\frac{H_{n-k-l} ^{(r)}(\lambda)}{\left(\frac{l}{2}\right)!2^{k+l}(n-k-l)!k!}. \end{split} \end{equation} Therefore, by \eqref{21} and \eqref{22}, we obtain the following theorem. \begin{theorem} For $n \geq 0$, we have \begin{equation*}
H_n ^{(r)} (x|\lambda)=n!\sum_{k=0} ^n \left\{\sum_{0 \leq l \leq n-k,~l:{\text{even}}}\frac{H_{n-k-l} ^{(r)}(\lambda)}{k!(n-k-l)!\left(\frac{l}{2}\right)!2^{k+l}}\right\}H_k(x). \end{equation*} \end{theorem} Let us assume that \begin{equation}\label{23} H_n (x)=\sum_{k=0} ^n C_{n,k}E_k ^{(r)}(x). \end{equation} From \eqref{13}, \eqref{14} and \eqref{23}, we have \begin{equation}\label{24} \begin{split}
C_{n,k}&=\frac{1}{k!}\left. \left<\frac{\left(\frac{e^{2t}+1}{2}\right)^r}{e^{\frac{1}{4}(2t)^2}}(2t)^k\right| x^n\right> \\
&=\frac{1}{k!}\left. \left<\frac{\left(\frac{e^{t}+1}{2}\right)^r}{e^{\frac{1}{4}t^2}}t^k\right| (2x)^n\right> \\
&=\frac{1}{k!}2^n \left. \left<\left(\frac{e^t+1}{2}\right)^re^{-\frac{1}{4}t^2}\right|t^k x^n \right> \\
&=2^n\binom{n}{k}\left. \left<\left(\frac{e^t+1}{2}\right)^r\right|\sum_{l=0} ^{\infty} \frac{(-1)^l}{l!4^l}t^{2l}x^{n-k}\right> \\
&=2^{n-r}\binom{n}{k}\sum_{l=0} ^{\left[\frac{n-k}{2}\right]} \frac{(-1)^l}{l!2^{2l}}(n-k)_{2l} \left. \left<\left(e^t+1\right)^r \right|x^{n-k-2l}\right>\\ &=\frac{1}{2^r} \sum_{j=0} ^r \sum_{l=0} ^{\left[\frac{n-k}{2}\right]} \frac{\binom{n}{k}\binom{r}{j}2^k(-1)^l(n-k)!}{l!(n-k-2l)!}(2j)^{n-k-2l}. \end{split} \end{equation} Therefore, by \eqref{23} and \eqref{24}, we obtain the following theorem. \begin{theorem} For $n \geq 0$, we have \begin{equation*} H_n(x)=\frac{1}{2^r}\sum_{k=0} ^n \left\{\sum_{j=0} ^r \sum_{l=0} ^{\left[\frac{n-k}{2}\right]} \frac{\binom{n}{k}\binom{r}{j}2^k(-1)^l(n-k)!(2j)^{n-k-2l}}{l!(n-k-2l)!}\right\}E_k ^{(r)} (x). \end{equation*} \end{theorem} Note that $H_n(x)\sim\left(e^{\frac{1}{4}t^2},\frac{t}{2}\right)$. Thus, we have \begin{equation}\label{25} e^{\frac{1}{4}t^2}H_n(x)\sim\left(1,\frac{t}{2}\right),{\text{ and }}(2x)^n\sim\left(1,\frac{t}{2}\right). \end{equation} From \eqref{25}, we have \begin{equation}\label{26} e^{\frac{1}{4}t^2}H_n(x)=(2x)^n \Leftrightarrow H_n(x)=e^{-\frac{1}{4}t^2}(2x)^n. \end{equation} By \eqref{24} and \eqref{26}, we also see that \begin{equation}\label{27} \begin{split}
C_{n,k}&=\frac{1}{k!}\left. \left<\frac{\left(\frac{e^{2t}+1}{2}\right)^r}{e^{\frac{1}{4}(2t)^2}}(2t)^k\right| x^n\right> \\
&=\frac{1}{k!} \left. \left<\left(\frac{e^t+1}{2}\right)^rt^k\right|e^{-\frac{1}{4}t^2}(2x)^n \right> \\
&=\frac{1}{k!2^r}\left< \left. (e^t+1)^r\right|t^kH_n(x)\right>=\frac{1}{2^r}\binom{n}{k}2^k \sum_{j=0} ^r \binom{r}{j}H_{n-k} (j). \end{split} \end{equation} Therefore, by \eqref{23} and \eqref{27}, we obtain the following theorem. \begin{theorem} For $n \geq 0$, we have \begin{equation*} H_n(x)=\frac{1}{2^r}\sum_{k=0} ^n\binom{n}{k}2^k\left[\sum_{j=0} ^r\binom{r}{j}H_{n-k} (j) \right]E_k ^{(r)}(x). \end{equation*} \end{theorem} Let us assume that \begin{equation}\label{28} H_n (x)=\sum_{k=0} ^n C_{n,k}B_k ^{(r)}(x). \end{equation} From \eqref{13}, \eqref{17} and \eqref{28}, we have \begin{equation}\label{29} \begin{split}
C_{n,k}&=\frac{1}{k!}\left. \left<\frac{\left(\frac{e^{2t}-1}{2t}\right)^r}{e^{\frac{1}{4}(2t)^2}}(2t)^k\right| x^n\right> \\
&=\frac{1}{k!}\left. \left<\frac{\left(\frac{e^t-1}{t}\right)^r}{e^{\frac{1}{4}t^2}}t^k\right| (2x)^n\right> \\
&=\frac{1}{k!}\left. \left<\left(\frac{e^t-1}{t}\right)^rt^k\right|{e^{-\frac{1}{4}t^2}}(2x)^n \right>. \end{split} \end{equation} From \eqref{26} and \eqref{29}, we have \begin{equation}\label{30}
C_{n,k}=\frac{1}{k!}\left. \left<\left(\frac{e^t-1}{t}\right)^rt^k\right|H_n(x) \right>. \end{equation} For $r>n$, by \eqref{5} and \eqref{30}, we get \begin{equation} \begin{split}\label{31}
C_{n,k}&=\frac{1}{k!}\left< \left.(e^t-1)^k \right. \left| \left(\frac{e^t-1}{t} \right)^{r-k}H_n(x)\right. \right> \\
&=\frac{1}{k!}\left< \left.(e^t-1)^k \right. \left| \sum_{l=0} ^n \frac{(r-k)!}{(l+r-k)!}S_2(l+r-k,r-k)t^lH_n(x)\right. \right>\\
&=\frac{1}{k!}\sum_{l=0} ^n \frac{(r-k)!}{(l+r-k)!}S_2(l+r-k,r-k)2^l(n)_l\left<(e^t-1)^k|H_{n-l}(x)\right> \\ &=\frac{1}{k!}\sum_{l=0} ^n \frac{(r-k)!}{(l+r-k)!}S_2(l+r-k,r-k)2^l \frac{n!}{(n-l)!}\sum_{j=0} ^k\binom{k}{j}(-1)^{k-j}H_{n-l}(j) \\ &=n!\sum_{j=0} ^k \sum_{l=0} ^n \frac{(r-k)!S_2(l+r-k,r-k)(-1)^{k-j}\binom{k}{j}2^lH_{n-l}(j)}{(l+r-k)!k!(n-l)!}. \end{split} \end{equation} Therefore, by \eqref{28} and \eqref{31}, we obtain the following theorem. \begin{theorem} For $r>n\geq0$, we have \begin{equation*} H_n(x)=n!\sum_{k=0} ^n \left\{\sum_{j=0} ^k \sum_{l=0} ^n \frac{(r-k)!S_2(l+r-k,r-k)(-1)^{k-j}\binom{k}{j}2^lH_{n-l}(j)}{(l+r-k)!k!(n-l)!}\right\}B_k ^{(r)}(x). \end{equation*} \end{theorem} Let us assume that $r \leq n$. For $0 \leq k<r$, by \eqref{31}, we get \begin{equation}\label{32} C_{n,k}=n!\sum_{j=0} ^k \sum_{l=0} ^n \frac{(r-k)!S_2(l+r-k,r-k)(-1)^{k-j}\binom{k}{j}2^lH_{n-l}(j)}{(l+r-k)!k!(n-l)!}. \end{equation} For $r \leq k\leq n$, by \eqref{30}, we get \begin{equation}\label{33} \begin{split}
C_{n,k}&=\frac{1}{k!}\sum_{j=0} ^r \binom{r}{j}(-1)^{r-j}\left<e^{jt}|D^{k-r}H_n(x)\right>\\ &=\frac{2^{k-r}n!}{k!(n-k+r)!}\sum_{j=0} ^r\binom{r}{j}(-1)^{r-j}H_{n-k+r} (j). \end{split} \end{equation} Therefore, by \eqref{28}, \eqref{32} and \eqref{33}, we obtain the following theorem. \begin{theorem} For $n \geq r$, we have \begin{equation*} \begin{split} H_n(x)=&n!\sum_{k=0} ^{r-1}\left\{\sum_{j=0} ^k\sum_{l=0} ^n \frac{(r-k)!S_2(l+r-k,r-k)(-1)^{k-j}\binom{k}{j}2^lH_{n-l}(j)}{(l+r-k)!k!(n-l)!}\right\}B_k ^{(r)}(x)\\ &+n!\sum_{k=r} ^n \left\{\sum_{j=0} ^r \frac{(-1)^{r-j}\binom{r}{j}2^{k-r}H_{n-k+r}(j)}{k!(n-k+r)!}\right\}B_k ^{(r)}(x). \end{split} \end{equation*} \end{theorem} Let us assume that \begin{equation}\label{37}
H_n (x)=\sum_{k=0} ^n C_{n,k}H_k ^{(r)}(x|\lambda). \end{equation} Then, by \eqref{13}, \eqref{20} and \eqref{37}, we get \begin{equation}\label{34} \begin{split}
C_{n,k}&=\frac{1}{k!}\left. \left<\frac{\left(\frac{e^{2t}-\lambda}{1-\lambda}\right)^r}{e^{\frac{1}{4}(2t)^2}}(2t)^k\right| x^n\right> \\
&=\frac{1}{k!}\left. \left<\frac{\left(\frac{e^t-\lambda}{1-\lambda}\right)^r}{e^{\frac{1}{4}t^2}}t^k\right| (2x)^n\right> \\
&=\frac{1}{k!} \left. \left<\left(\frac{e^t-\lambda}{1-\lambda}\right)^rt^k\right| e^{-\frac{1}{4}t^2}(2x)^n\right>. \end{split} \end{equation} By \eqref{26} and \eqref{34}, we get \begin{equation}\label{35} \begin{split}
C_{n,k}&=\frac{1}{k!}\left. \left<\left(\frac{e^t-\lambda}{1-\lambda}\right)^rt^k\right| H_n(x)\right>=\frac{1}{k!(1-\lambda)^r}\left< \left. (e^t-\lambda)^r\right|t^kH_n(x)\right> \\
&=\frac{\binom{n}{k}2^k}{(1-\lambda)^r} \sum_{j=0} ^r\binom{r}{j}(-\lambda)^{r-j}\left<e^{jt}|H_{n-k} (x)\right> \\ &=\frac{\binom{n}{k}2^k}{(1-\lambda)^r}\sum_{j=0} ^r\binom{r}{j}(-\lambda)^{r-j}H_{n-k}(j). \end{split} \end{equation} Therefore, by \eqref{37} and \eqref{35}, we obtain the following theorem. \begin{theorem} For $n \geq 0$, we have \begin{equation*}
H_n (x)=\frac{1}{(1-\lambda)^r}\sum_{k=0} ^n \binom{n}{k}2^k\left[\sum_{j=0} ^r\binom{r}{j}(-\lambda)^{r-j}H_{n-k}(j)\right]H_k ^{(r)}(x|\lambda). \end{equation*} \end{theorem} {\scshape Remark.} From \eqref{34}, we have \begin{equation}\label{36} \begin{split}
C_{n,k}&=\frac{1}{k!}\left. \left<\frac{\left(\frac{e^t-\lambda}{1-\lambda}\right)^r}{e^{\frac{1}{4}t^2}}t^k\right| (2x)^n\right>=\frac{2^n}{k!} \left. \left<\left(\frac{e^t-\lambda}{1-\lambda}\right)^re^{-\frac{1}{4}t^2}\right| t^k x^n\right>\\
&=\frac{(n)_k}{k!}2^n \sum_{l=0} ^{\left[\frac{n-k}{2}\right]}\frac{(-1)^l}{l!4^l}\left. \left<\left(\frac{e^t-\lambda}{1-\lambda}\right)^r\right| t^{2l}x^{n-k}\right> \\
&=\frac{\binom{n}{k}2^n}{(1-\lambda)^r}\sum_{l=0} ^{\left[\frac{n-k}{2}\right]}\frac{(-1)^l}{l!2^{2l}}(n-k)_{2l}\left. \left<\left(e^t-\lambda\right)^r\right| x^{n-k-2l}\right> \\ &=\frac{1}{(1-\lambda)^r}\sum_{j=0} ^r \sum_{l=0} ^{\left[\frac{n-k}{2}\right]} \frac{\binom{n}{k}\binom{r}{j}2^k(-1)^l(-\lambda)^{r-j}(n-k)!(2j)^{n-k-2l}}{l!(n-k-2l)!}. \end{split} \end{equation} Thus, by \eqref{37} and \eqref{36}, we get \begin{equation*} H_n(x)=\frac{1}{(1-\lambda)^r}\sum_{k=0} ^n \left\{\sum_{j=0} ^r \sum_{l=0} ^{\left[\frac{n-k}{2}\right]}
\frac{\binom{n}{k}\binom{r}{j}2^k(-1)^l(-\lambda)^{r-j}(n-k)!(2j)^{n-k-2l}}{l!(n-k-2l)!}\right\}H_k ^{(r)}(x|\lambda). \end{equation*}
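The expansion theorems above can be checked numerically. The following sketch (ours, not part of the paper) verifies the first theorem of this section, expanding both sides from the standard generating functions $(2/(e^t+1))^r e^{xt}=\sum_n E_n^{(r)}(x)t^n/n!$ for the higher-order Euler polynomials and $e^{2xt-t^2}=\sum_n H_n(x)t^n/n!$ for the physicists' Hermite polynomials, which correspond to the Sheffer pairs $\left(\left(\frac{e^t+1}{2}\right)^r,t\right)$ and $\left(e^{\frac14 t^2},\frac t2\right)$ used in the text; the truncation order $N$ and the value of $r$ are illustrative choices.

```python
import sympy as sp

x, t = sp.symbols('x t')
N, r = 5, 2   # truncation order and an illustrative value of r

def poly_seq(gen, n):
    """Return [p_0(x), ..., p_n(x)] where gen = sum_k p_k(x) t^k / k!."""
    s = sp.expand(sp.series(gen, t, 0, n + 1).removeO())
    return [sp.expand(sp.factorial(k) * s.coeff(t, k)) for k in range(n + 1)]

E = poly_seq((2 / (sp.exp(t) + 1))**r * sp.exp(x * t), N)  # E_n^{(r)}(x)
H = poly_seq(sp.exp(2 * x * t - t**2), N)                  # physicists' H_n(x)
En = [e.subs(x, 0) for e in E]                             # Euler numbers E_n^{(r)}

def rhs(n):
    # n! sum_k { sum_{l even} E^{(r)}_{n-k-l} / ((l/2)! 2^{k+l} k! (n-k-l)!) } H_k(x)
    out = 0
    for k in range(n + 1):
        coef = sum(En[n - k - l] /
                   (sp.factorial(l // 2) * 2**(k + l)
                    * sp.factorial(k) * sp.factorial(n - k - l))
                   for l in range(0, n - k + 1, 2))
        out += coef * H[k]
    return sp.expand(sp.factorial(n) * out)

for n in range(N + 1):
    assert sp.expand(rhs(n) - E[n]) == 0   # both sides agree as polynomials
```

The Bernoulli and Frobenius–Euler expansions can be checked the same way by replacing the first generating function with $(t/(e^t-1))^r e^{xt}$ or $((1-\lambda)/(e^t-\lambda))^r e^{xt}$.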
\end{document}
\begin{document}
\begin{frontmatter}
\title{Generation of Fractional Factorial Designs}
\author{Roberto Fontana\thanksref{c}} \author{Giovanni Pistone\thanksref{c}}
\address[c]{DIMAT Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Torino, Italy}
\begin{abstract} The joint use of counting functions, Hilbert bases and Markov bases makes it possible to define a procedure that generates all the fractions satisfying a given set of constraints expressed in terms of orthogonality. We study the general case of mixed level designs, with no restriction on the number of levels of each factor (such as primes or prime powers). The methodology has been tested on some significant classes of fractional factorial designs, including mixed level orthogonal arrays. \end{abstract}
\begin{keyword} Design of Experiments \sep Hilbert basis \sep Markov basis \sep Algebraic statistics \sep Indicator polynomial \sep Counting function. \end{keyword}
\end{frontmatter}
\section{Introduction}
All the fractional factorial designs that satisfy a set of conditions expressed in terms of orthogonality between factors have been described as the zero-set of a system of polynomial equations in which the indeterminates are the complex coefficients of their counting polynomial functions (\cite{pistone|rogantin:2008-JSPI}, \cite{fontana|pistone|rogantin:2000}). A short review of this theory can be found in \cite{fontana|rogantin:sudo2008}. In Section \ref{sec:back} we recall the part of it that is needed in the sequel. In Section \ref{sec:balance} we write the problem of finding fractional factorial designs that satisfy a set of conditions as a system of linear equations in which the indeterminates are non-negative integers. In Section \ref{sec:gof}, using 4ti2 (\cite{4ti2_1.3.1}), we find all the generators of some classes of fractional factorial designs, including mixed level orthogonal arrays and sudoku designs. Finally, in Section \ref{sec:moves} we consider the moves between different fractions as integer-valued functions defined over the full factorial design and build a procedure, based on Markov bases, to move between fractions.
\section{Notation and background} \label{sec:back} \subsection{Full factorial design}
We adopt the notation used in \cite{pistone|rogantin:2008-JSPI} and denote:
\begin{itemize}
\item by ${\mathcal D}_j$ a \emph{factor} with $n_j$ levels coded with the $n_j$-th roots of the unity:
\begin{equation*}{\mathcal D}_{j} = \{\omega_0,\ldots,\omega_{n_j-1}\}
\qquad \omega_k=\exp\left(i\: \frac {2\pi}{n_j} \ k\right) \ ;
\end{equation*}
\item by ${\mathcal D}$ the \emph{full factorial design} with \emph{complex coding}
\begin{equation*}
{\mathcal D} = {\mathcal D}_1 \times \cdots \times {\mathcal D}_j \times \cdots \times {\mathcal D}_m \ .
\end{equation*}
\item by $\# {\mathcal D}$ the cardinality of ${\mathcal D}$. \item by $L$ the \emph{full factorial design} with \emph{integer coding} \begin{eqnarray*}
L = \mathbb Z_{n_1} \times \cdots \times \mathbb Z_{n_j} \times \cdots \times \mathbb Z_{n_m} \ , \end{eqnarray*} \item by $\alpha$ an element of $L$ $$ \alpha= (\alpha_1,\ldots,\alpha_m) \quad \alpha_j = 0,\ldots,n_j-1, \quad j=1,\ldots,m \ . $$ \item by $[\alpha-\beta]$ the $m$-tuple made by the componentwise difference
$$\left(\left[\alpha_1-\beta_1 \right]_{n_1}, \ldots, \left[\alpha_j-\beta_j \right]_{n_j}, \ldots, \left[\alpha_m - \beta_m\right]_{n_m} \right)\ ; $$ the computation of the $j$-th element is in the ring $\mathbb Z_{n_j}$.
\item by $X_j$ the $j$-th component function, which maps a point to its $j$-th component: \begin{equation*} X_j : \quad {\mathcal D} \ni (\zeta_1,\ldots,\zeta_m)\ \longmapsto \ \zeta_j \in {\mathcal D}_j \ ; \end{equation*} the function $X_j$ is called a \emph{simple term} or, by abuse of terminology, a \emph{factor}.
\item by $X^\alpha$ the \emph{interaction term} $X_1^{\alpha_1} \cdots X_m^{\alpha_m}$, i.e. the function \begin{equation*} X^\alpha : \quad {\mathcal D} \ni (\zeta_1,\ldots,\zeta_m)\ \mapsto \ \zeta_1^{\alpha_1}\cdots \zeta_m^{\alpha_m} \ ; \end{equation*}
\end{itemize}
We notice that $L$ is both the full factorial design with integer coding and \emph{the exponent set of all the simple factors and interaction terms} and $\alpha$ is both a treatment combination in the integer coding and a multi-exponent of an interaction term.
The full factorial design in complex coding is identified as the zero-set in $\mathbb C^m$ of the system of polynomial equations \begin{equation}\label{eq:design} X_j^{n_j}-1=0 \quad , \qquad j=1,\ldots,m \ . \end{equation}
\begin{definition} \begin{enumerate} \item A \emph{response} $f$ on the design ${\mathcal D}$ is a $\mathbb C$-valued polynomial function defined on ${\mathcal D}$. \item The \emph{mean value} on ${\mathcal D}$ of a response $f$, denoted by $E_{{\mathcal D}}(f) $, is: \begin{equation*} E_{{\mathcal D}}(f) = \frac 1 {\#{\mathcal D}} \sum_{\zeta \in {\mathcal D}} f(\zeta) \ . \end{equation*} \item A response $f$ is \emph{centered} on ${\mathcal D}$ if $E_{\mathcal D}(f) = 0$. Two responses $f$ and $g$ are \emph{orthogonal on ${\mathcal D}$} if $E_{\mathcal D}(f \ \overline g) = 0$, where $\overline g$ is the complex conjugate of $g$. \end{enumerate} \end{definition} It should be noticed that the set of all the responses is a complex Hilbert space with the Hermitian product:
$$ f\cdot g = E_{\mathcal D}(f \ \overline g) \ . $$ Moreover \begin{enumerate}
\item $X^\alpha\overline{X^\beta}=X^{[\alpha-\beta]}$;
\item $E_{{\mathcal D}}(X^0)=1$, and $E_{{\mathcal D}}(X^\alpha)=0$ for $\alpha
\neq 0$. \end{enumerate}
The set of functions $\left\{X^\alpha \ , \ \alpha \in L \right \}$ is an orthonormal basis of the \emph{complex responses} on design ${\mathcal D}$. In fact $\#L=\#{\mathcal D}$ and, from properties (i) and (ii) above, it follows that: \begin{equation*}E_{{\mathcal D}}(X^\alpha\overline{X^\beta}) = E_{{\mathcal D}}(X^{[\alpha-\beta]}) = \begin{cases} 1 & \text{if } \alpha=\beta \\ 0 & \text{if } \alpha \neq \beta \end{cases}\end{equation*}
In particular, each response $f$ can be represented as a unique $\mathbb C$-linear combination of constant, simple and interaction terms. This representation is obtained by repeated applications of the re-writing rules derived from Equations (\ref{eq:design}). Such a polynomial is called the {\em normal form} of $f$ on ${\mathcal D}$. Throughout the paper, all computations are carried out on normal forms.
\begin{example} \label{ex:2to3}
Consider the $2^3$ full factorial design. All the monomial responses on ${\mathcal D}$ are $$ 1, \ X_1, \ X_2 , \ X_3, \ X_1 X_2 , \ X_1 X_3 , \ X_2 X_3 , \ X_1 X_2 X_3 $$ or, equivalently, $$X^{(0,0,0)}, X^{(1,0,0)}, X^{(0,1,0)}, X^{(0,0,1)}, X^{(1,1,0)}, X^{(1,0,1)}, X^{(0,1,1)}, X^{(1,1,1)} $$ and $L$ is $$ L=\{(0,0,0),(1,0,0),(0,1,0),(0,0,1),(1,1,0),(1,0,1),(0,1,1),(1,1,1)\} \ . $$ \end{example}
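The orthonormality of the monomial basis of Example \ref{ex:2to3} is easy to check by direct computation. The following sketch (ours, for illustration; the helper name `X` is not from the paper) codes the two levels as the square roots of unity $\{1,-1\}$ and verifies $E_{\mathcal D}(X^\alpha\overline{X^\beta})=\delta_{\alpha\beta}$ for all eight pairs of exponents.

```python
import itertools

D = list(itertools.product([1, -1], repeat=3))    # full factorial design
L = list(itertools.product(range(2), repeat=3))   # exponent set

def X(alpha, zeta):
    """Evaluate the interaction term X^alpha at the point zeta."""
    out = 1
    for a, z in zip(alpha, zeta):
        out *= z ** a
    return out

# E_D(X^a conj(X^b)) = 1 if a == b else 0
for a in L:
    for b in L:
        m = sum(X(a, z) * X(b, z).conjugate() for z in D) / len(D)
        assert m == (1 if a == b else 0)

print(len(D), len(L))   # 8 8
```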
\subsection{Fractions of a full factorial design} A fraction ${\mathcal F}$ is a multiset $({\mathcal F}_*,f_*)$ whose underlying set of elements ${\mathcal F}_*$ is contained in ${\mathcal D}$ and $f_*$ is the multiplicity function $f_*: {\mathcal F}_* \rightarrow \mathbb N$ that for each element in ${\mathcal F}_*$ gives the number of times it belongs to the multiset ${\mathcal F}$.
All fractions can be obtained by adding further polynomial equations, called \emph{generating equations}, to the design equations (\ref{eq:design}) in order to restrict the set of solutions.
\begin{definition}
If $f$ is a response on ${\mathcal D}$ then its \emph{mean value on ${\mathcal F}$}, denoted by $E_{{\mathcal F}}(f)$, is $$E_{{\mathcal F}}(f)= \frac 1 {\# {\mathcal F}} \sum_{\zeta\in{\mathcal F}} f(\zeta)$$ where $\# {\mathcal F}$ is the total number of treatment combinations of the fraction.
A response $f$ is \emph{centered} if $E_{\mathcal F}(f) = 0$. Two responses $f$ and $g$ are \emph{orthogonal on ${\mathcal F}$} if $E_{\mathcal F}(f \ \overline g) = 0$. \end{definition}
With the complex coding, the vector orthogonality of two interaction terms $X^\alpha$ and $X^\beta$ as defined above (with respect to the given Hermitian product) corresponds to combinatorial orthogonality: all the level combinations of the pair $(X^\alpha, X^\beta)$ appear equally often.
We consider the general case in which fractions can contain points that are replicated.
\begin{definition} \label{de:indicator}
The \emph{counting function} $R$ of a fraction ${\mathcal F}$ is a response defined on ${\mathcal D}$ so that for each $\zeta \in {\mathcal D}$, $R(\zeta)$ equals the number of appearances of $\zeta$ in the fraction. A $0-1$ valued counting function is called \emph{indicator function} of a single replicate fraction ${\mathcal F}$. We denote by $c_\alpha$ the coefficients of the representation of $R$ on ${\mathcal D}$ using the monomial basis $\{X^\alpha, \ \alpha \in L\}$: \begin{equation*} R(\zeta) = \sum_{\alpha \in L} c_\alpha X^\alpha(\zeta) \quad \zeta\in{\mathcal D} \quad c_\alpha \in \mathbb C \ . \end{equation*} \end{definition}
As the counting function is real valued, we have $\overline{c_\alpha} = c_{[-\alpha]}$. We will write $c_0$ in place of $c_{0,\dots,0}$.
\begin{remark} The counting function $R$ coincides with the multiplicity function $f_*$ on ${\mathcal F}_*$ and vanishes elsewhere on ${\mathcal D}$. \end{remark}
\begin{proposition} \label{pr:bc-alpha}
Let ${\mathcal F}$ be a fraction of a full factorial design ${\mathcal D}$ and $R = \sum_{\alpha \in L} c_\alpha X^\alpha$ be its counting function. \begin{enumerate}
\item \label{it:balpha} The coefficients $c_\alpha$ are:
$$c_\alpha= \frac 1 {\#{\mathcal D}} \sum_{\zeta \in {\mathcal F}} \overline{X^\alpha(\zeta)} \ ;$$ in particular, $c_0$ is the ratio between the number of points of the fraction and that of the design.
\item \label{it:system} In a fraction without replications, the coefficients $c_\alpha$ are related according to: $$ c_\alpha = \sum_{\beta \in L} c_\beta \ c_{[\alpha - \beta]} \ .$$
\item The term $X^\alpha$ is centered on ${\mathcal F}$, i.e. $E_{{\mathcal F}}(X^\alpha)=0$, if, and only if,
$$c_\alpha=c_{[-\alpha]}=0 \ . $$
\item The terms $X^\alpha$ and $X^\beta$ are orthogonal on ${\mathcal F}$, i.e. $E_{{\mathcal F}}(X^\alpha \ \overline{X^\beta})=0$, if, and only if, $$c_{[\alpha-\beta]}=0 \ .$$ \end{enumerate} \end{proposition}
\begin{example}
We consider the fraction ${\mathcal F} = \{(-1,-1,1), (-1,1,-1)\}$ of the $2^3$ full factorial design of Example \ref{ex:2to3}. All the monomial responses on ${\mathcal F}$ and their values on the points are \begin{equation*}
\begin{array}{c|r|r|r|r|r|r|r|r}
\zeta &1 & X_1 & X_2 & X_3 & X_1 X_2 & X_1 X_3 & X_2 X_3& X_1 X_2 X_3 \\ \hline (-1,-1,1) & 1 & -1 & -1 & 1 & 1 & -1 & -1 & 1 \\
(-1,1,-1) & 1 & -1 & 1 & -1 & -1 & 1 & -1 & 1 \end{array} \end{equation*} Using Item \ref{it:balpha} of Proposition \ref{pr:bc-alpha}, it is easy to compute the coefficients $c_\alpha$: \
$ c_{(0,1,0)}= c_{(0,0,1)}= c_{(1,1,0)}=c_{(1,0,1)}=0$; \
$ c_{(0,0,0)}=c_{(1,1,1)}= \frac 2 4$ and $c_{(1,0,0)}= c_{(0,1,1)}= - \frac 2 4 $. Hence, the indicator function is $$ F= \frac 1 2 \left( 1 - X_1 -X_2 X_3+X_1X_2 X_3\right) \ .$$ From the null coefficients we see that $X_2$ and $X_3$ are centered and that $X_1$ is orthogonal to both $X_2$ and $X_3$.
$\square$ \end{example}
\subsection{Projectivity and orthogonal arrays}
\begin{definition}
A fraction ${\mathcal F}$ {\em factorially projects} onto the $I$-factors, $I\subset \{1,\ldots,m\}$, if the projection is a multiple full factorial design, i.e. a full factorial design where each point appears equally often. A fraction ${\mathcal F}$ is a {\em mixed orthogonal array} of strength $t$ if it factorially projects onto any $I$-factors with $\#I=t$. \end{definition} Strength $t$ means that, for any choice of $t$ columns of the matrix design, all possible combinations of symbols appear equally often.
\begin{proposition} [Projectivity] \label{pr:projectivity} \begin{enumerate} \item \label{it:fact} A fraction \emph{factorially projects onto the $I$-factors} if, and only if,
all the coefficients of the counting function involving only the $I$-factors are 0.
\item If there exists a subset $J$ of $\{1,\ldots,m\}$ such that the $J$-factors appear in all the non null elements of the counting function,
the fraction \emph{factorially projects onto the $I$-factors}, with $I=J^c$.
\item A fraction is an \emph{orthogonal array of strength $t$} if, and
only if, all the coefficients of the counting function up to the order $t$ are zero:
$$ c_{\alpha}=0 \quad \textrm{for all} \ \alpha \textrm{ of order up to }t, \ \alpha \ne (0,0, \ldots ,0)\ . $$ \end{enumerate} \end{proposition}
\begin{example} [Orthogonal array]
The fraction of a $2^6$ full factorial design {\footnotesize \begin{eqnarray*} {\mathcal F}_O&=& \{(-1, -1, -1, -1, -1, 1), ( -1, -1, -1, 1, 1, 1), ( -1, -1, 1, -1, -1, -1), \\ & & ( -1, -1, 1, 1, 1, -1),( -1, 1, -1, -1, -1, -1), ( -1, 1, -1, 1, 1, -1), ( -1, 1, 1, -1, 1, 1), \\ & & ( -1, 1, 1, 1, -1,1 ), ( 1, -1, -1, -1, 1, 1), ( 1, -1, -1, 1, -1, 1), ( 1, -1, 1, -1, 1, -1), \\ & & ( 1, -1, 1, 1, -1, -1),( 1, 1, -1, -1, 1, -1), ( 1, 1, -1, 1, -1, -1), ( 1, 1, 1, -1, -1, 1), \\ & &( 1, 1, 1, 1, 1, 1)\} \end{eqnarray*}}
is an orthogonal array of strength 2; in fact, its indicator function
\begin{eqnarray*}
F &=&\frac 1 4+ \frac 1 4 X_2X_3X_6- \frac 1 8 X_1X_4X_5+ \frac 1 8 X_1X_4X_5X_6+ \frac 1 8 X_1X_3X_4X_5\\
& &
+ \frac 1 8 X_1X_2X_4X_5+ \frac 1 8 X_1X_3X_4X_5X_6+ \frac 1 8 X_1X_2X_4X_5X_6
\\
& &+ \frac 1 8 X_1X_2X_3X_4X_5-\frac 1 8 X_1X_2X_3X_4X_5X_6
\end{eqnarray*}
contains only terms of order greater than 2, together with the constant term.
$\square$ \end{example}
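The strength of ${\mathcal F}_O$ can also be confirmed combinatorially: for every choice of two columns, each of the four level pairs must appear the same number of times (here $16/4=4$). A short check (ours; the point list is copied from the example):

```python
import itertools
from collections import Counter

FO = [(-1,-1,-1,-1,-1, 1), (-1,-1,-1, 1, 1, 1), (-1,-1, 1,-1,-1,-1),
      (-1,-1, 1, 1, 1,-1), (-1, 1,-1,-1,-1,-1), (-1, 1,-1, 1, 1,-1),
      (-1, 1, 1,-1, 1, 1), (-1, 1, 1, 1,-1, 1), ( 1,-1,-1,-1, 1, 1),
      ( 1,-1,-1, 1,-1, 1), ( 1,-1, 1,-1, 1,-1), ( 1,-1, 1, 1,-1,-1),
      ( 1, 1,-1,-1, 1,-1), ( 1, 1,-1, 1,-1,-1), ( 1, 1, 1,-1,-1, 1),
      ( 1, 1, 1, 1, 1, 1)]

# strength 2: every pair of columns projects onto a multiple full 2^2 design
for i, j in itertools.combinations(range(6), 2):
    counts = Counter((p[i], p[j]) for p in FO)
    assert len(counts) == 4 and set(counts.values()) == {4}
```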
\section{Counting functions and strata} \label{sec:balance}
From Proposition \ref{pr:bc-alpha} and Proposition \ref{pr:projectivity} we have that the problem of finding fractional factorial designs that satisfy a set of conditions in terms of orthogonality between factors can be written as a polynomial system in which the indeterminates are the complex coefficients $c_\alpha$ of the counting polynomial function.
\begin{example} Let's consider 3 factors, each one with two levels. The indicator functions $F=\sum_\alpha {c_\alpha X^{\alpha}}$ such that the terms $X_1, X_2, X_3$ are centered on ${\mathcal F}$ and the terms $X_i, X_j$, $i,j=1,2,3$, $i\neq j$, are orthogonal on ${\mathcal F}$, where ${\mathcal F} = \{\zeta \in {\mathcal D}: F(\zeta)=1 \}$, are those for which the following conditions on the coefficients of $F$ hold: \begin{equation*} \begin{cases} c_0=c_0^2 + c_{123}^2 \\ c_{123}=2 c_0 c_{123} \end{cases}\end{equation*}
Apart from the trivial solutions $F=0$, i.e. ${\mathcal F}=\emptyset$, and $F=1$, i.e. ${\mathcal F}={\mathcal D}$, we find $F=\frac{1}{2} ( 1+X_1 X_2 X_3)$ and $F=\frac{1}{2} ( 1-X_1 X_2 X_3)$. \end{example}
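The four solutions of this example can also be found by brute force over the $2^8$ candidate indicator functions, imposing the six centering and orthogonality conditions directly on the point values. An illustrative sketch (ours; the enumeration order of ${\mathcal D}$ is the one produced by `itertools.product`):

```python
import itertools

D = list(itertools.product([1, -1], repeat=3))   # full 2^3 design

def X(alpha, zeta):
    out = 1
    for a, z in zip(alpha, zeta):
        out *= z ** a
    return out

# c_alpha = 0 for the three simple terms and the three two-factor interactions
constraints = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 0), (1, 0, 1), (0, 1, 1)]

solutions = []
for y in itertools.product([0, 1], repeat=len(D)):   # candidate indicator values
    if all(sum(yy * X(a, z) for yy, z in zip(y, D)) == 0 for a in constraints):
        solutions.append(y)

# empty fraction, full design and the two regular half-fractions
assert len(solutions) == 4
```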
Let's now introduce a different way to describe the full factorial design ${\mathcal D}$ and all its subsets. Let's consider the indicator functions $1_{\zeta}$ of all the single points of ${\mathcal D}$
\begin{equation*} 1_{\zeta} : \quad {\mathcal D} \ni (\zeta_1,\ldots,\zeta_m)\ \mapsto \begin{cases} 1 \quad \zeta=(\zeta_1,\ldots,\zeta_m) \\ 0 \quad \zeta \neq (\zeta_1,\ldots,\zeta_m) \end{cases} \end{equation*}
It follows that the counting function $R$ of a fraction ${\mathcal F}$ can be written as \[ \sum_{\zeta \in {\mathcal D}} y_{\zeta} 1_{\zeta} \] with $y_{\zeta} \equiv R(\zeta) \in \{0,1,2,\ldots\}$. The particular case in which $R$ is an indicator function corresponds to $y_{\zeta} \in \{0,1\}$.
The coefficients $y_{\zeta}$ are related to the coefficients $c_\alpha$ as in the following Proposition \ref{pr:beqy}.
\begin{proposition} \label{pr:beqy} Let ${\mathcal F}$ be a fraction of ${\mathcal D}$. Its counting function $R$ can be expressed both as $R=\sum_\alpha c_\alpha X^\alpha$ and $R=\sum_{\zeta \in {\mathcal D}} y_{\zeta} 1_{\zeta}$. The relation between the coefficients $c_\alpha$ and $y_{\zeta}$ is \[ c_\alpha=\frac{1}{\#{\mathcal D}}\sum_{\zeta \in {\mathcal D}} y_{\zeta} \overline{X^\alpha(\zeta)} \] \end{proposition} \begin{proof} From Proposition \ref{pr:bc-alpha} we have \begin{eqnarray*} c_\alpha&=&\frac{1}{\#{\mathcal D}}\sum_{\zeta \in {\mathcal F}} \overline{X^\alpha(\zeta)} = \\
&=& \frac{1}{\#{\mathcal D}}\sum_{\zeta \in {\mathcal D}} y_{\zeta} \overline{X^\alpha(\zeta)} \end{eqnarray*} \end{proof}
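The relation of Proposition \ref{pr:beqy} is a discrete Fourier transform of the multiplicities $y_\zeta$. A minimal numerical sketch (ours) for a single factor with $n=3$ levels: computing the $c_\alpha$ from an arbitrary $y$ and summing $R(\zeta)=\sum_\alpha c_\alpha X^\alpha(\zeta)$ back recovers $y$ exactly.

```python
import cmath

n = 3
omega = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]  # D = Omega_3
y = [2, 0, 1]                       # an arbitrary counting function on D

# c_alpha = (1/#D) sum_zeta y_zeta conj(X^alpha(zeta))
c = [sum(y[k] * (omega[k] ** a).conjugate() for k in range(n)) / n
     for a in range(n)]

# inverse: R(zeta) = sum_alpha c_alpha X^alpha(zeta) should equal y
R = [sum(c[a] * omega[k] ** a for a in range(n)) for k in range(n)]
assert all(abs(R[k] - y[k]) < 1e-12 for k in range(n))
```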
\subsection{Strata} As described in Section \ref{sec:back}, we consider $m$ factors, ${\mathcal D}_1, \ldots, {\mathcal D}_m$ where
${\mathcal D}_j \equiv \Omega_{n_j}=\{\omega_0, \ldots, \omega_{n_j-1}\}$, for $j=1,\ldots,m$. From \cite{pistone|rogantin:2008-JSPI}, we recall two basic properties which hold true for the full design ${\mathcal D}$.
\begin{proposition} \label{pr:P-1} Let $X_j$ be the simple term with level set $\Omega_{n_j} = \{\omega_0, \ldots, \omega_{n_j-1}\}$. Let's consider the term $X_j^r$ and define \[ s_j= \begin{cases} 1 & \; r=0 \\ n_j/\gcd(r,n_j) & \; r>0 \end{cases} \] Over ${\mathcal D}$, the term $X_j^r$ takes all the values of $\Omega_{s_j}$ equally often. \end{proposition}
\begin{proposition} \label{pr:P-2} Let $X^\alpha=X^{\alpha_1}_{1}\cdots X^{\alpha_m}_{m}$ be an interaction term. $X^{\alpha_i}_i$ takes values in $\Omega_{s_i}$, where $s_i$ is determined according to the previous Proposition \ref{pr:P-1}. Let's define $s=\operatorname{lcm}(s_1,\ldots,s_m)$. Over ${\mathcal D}$, the term $X^\alpha$ takes all the values of $\Omega_s$ equally often. \end{proposition}
Let's now define the strata that are associated to simple and interaction terms.
\begin{definition} Given a term $X^\alpha$, $\alpha \in L=\mathbb Z_{n_1} \times \ldots \times \mathbb Z_{n_m}$, the full design ${\mathcal D}$ is partitioned into the following strata \[ D_h^\alpha = \left \{ \zeta \in {\mathcal D}: \overline{X^\alpha(\zeta)} = \omega_h \right \} \] where $\omega_h \in \Omega_s$ and $s$ is determined according to the previous Propositions \ref{pr:P-1} and \ref{pr:P-2}. \end{definition}
\begin{remark} We define strata using the conjugate $\overline{X^\alpha}$ of the term in place of the term ${X^\alpha}$ itself because it will simplify the notations. \end{remark}
\begin{remark}
Each stratum is a regular fraction whose defining equation is $X^\alpha(\zeta)=\omega_{[-h]}$; see \cite{pistone|rogantin:2008-JSPI}. \end{remark}
We use $n_{\alpha,h}$ to denote the number of points of the fraction ${\mathcal F}$ that are in the stratum $D_h^\alpha$, with $h=0,\dots,s-1$, \[ n_{\alpha,h}=\sum_{\zeta \in D_h^\alpha} y_\zeta \]
The following Proposition \ref{pr:ceqn} links the coefficients $c_\alpha$ with $n_{\alpha,h}$. \begin{proposition} \label{pr:ceqn} Let ${\mathcal F}$ be a fraction of ${\mathcal D}$ with counting function $R=\sum_{\alpha \in L} c_\alpha X^\alpha$. Each $c_\alpha, \alpha \in L$, depends on $n_{\alpha,h}, h=0,\ldots,s-1$, as \[ c_\alpha=\frac{1}{\#{\mathcal D}} \sum_{h=0}^{s-1} n_{\alpha,h} \omega_h \] where $s$ is determined by $X^\alpha$ (see Proposition \ref{pr:P-2}). Vice versa, each $n_{\alpha,h}, h=0,\ldots,s-1$, depends on $c_{[-k\alpha]}, k=0,\ldots,s-1$ as \[ n_{\alpha,h}=\frac{\#{\mathcal D}}{s} \sum_{k=0}^{s-1} c_{[-k \alpha]} \omega_{[hk]} \] \end{proposition} \begin{proof} Using Proposition \ref{pr:beqy}, it follows that we can write the coefficients $c_\alpha$ in the following way \[ c_\alpha = \frac{1}{\#{\mathcal D}}\sum_{\zeta \in {\mathcal D}} y_{\zeta} \overline{X^\alpha(\zeta)} =\frac{1}{\#{\mathcal D}} \sum_{h=0}^{s-1} \omega_h \sum_{\zeta \in D_h^\alpha} y_{\zeta} = \frac{1}{\#{\mathcal D}} \sum_{h=0}^{s-1} n_{\alpha,h} \omega_h \] For the converse, we observe that the indicator functions of the strata can be obtained as follows. We define \[ \tilde{F}_{0}^s(\zeta)=\sum_{k=0}^{s-1} \zeta^k = \begin{cases} \frac{1-\zeta^s}{1-\zeta} & \text{if } \zeta \neq 1 \\ s & \text{if } \zeta=1 \end{cases} \] We have $\tilde{F}_{0}^s(\omega_k)=0$ for all $\omega_k \in \Omega_s, k \neq 0$. It follows that \[ F_{\alpha,0}(\zeta) = \frac{1}{s} \tilde{F}_{0}^s(\zeta^\alpha)= \frac{1}{s} \left( 1+\zeta^\alpha+\ldots+\zeta^{(s-1)\alpha} \right) \] is the indicator function associated to $D_0^\alpha$.
The indicator of $D_h^\alpha=\left \{ \zeta \in {\mathcal D}: \overline{X^\alpha(\zeta)} = \omega_h \right \}= \left \{ \zeta \in {\mathcal D}: X^\alpha(\zeta) = \omega_{[-h]} \right \}$ will be \[ F_{\alpha,h}(\zeta)=\frac{1}{s} \tilde{F}_{0}^s(\omega_{h} \zeta^\alpha)= \frac{1}{s} \left( 1+\omega_{h}\zeta^\alpha+\ldots+\omega_{[(s-1)h]}\zeta^{(s-1)\alpha}\right ) \] We get \begin{eqnarray*} n_{\alpha,h} &=& \sum_{\zeta \in D_h^\alpha} R(\zeta) = \sum_{\zeta \in {\mathcal D}} F_{\alpha,h}(\zeta)R(\zeta)=\\ &=& \sum_{\zeta \in {\mathcal D}} \left( \frac{1}{s} \sum_{k=0}^{s-1} \omega_{[kh]} X^{k\alpha}(\zeta) \right) \left( \sum_{\beta} c_\beta X^{\beta}(\zeta)\right)= \\ &=& \frac{\#{\mathcal D}}{s} \sum_{k,\beta: [k\alpha+\beta]=0} \omega_{[kh]} c_\beta = \frac{\#{\mathcal D}}{s} \sum_{k=0}^{s-1} \omega_{[kh]} c_{[-k\alpha]} \\ \end{eqnarray*}
\end{proof}
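Both formulas of Proposition \ref{pr:ceqn} are easy to verify numerically. A sketch (ours) for a single factor with $n=4$ levels and the term $X^\alpha$ with $\alpha=2$, so that $s=n/\gcd(\alpha,n)=2$; the multiplicities $y$ are an arbitrary illustrative choice.

```python
import cmath
from math import gcd

n, alpha = 4, 2
s = n // gcd(alpha, n)                                   # here s = 2
D = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]
Omega_s = [cmath.exp(2j * cmath.pi * h / s) for h in range(s)]
y = [1, 3, 0, 2]                                         # arbitrary multiplicities

# all coefficients c_beta of the counting function
c = [sum(y[k] * (D[k] ** b).conjugate() for k in range(n)) / n for b in range(n)]

# strata counts n_{alpha,h}: sum of y over {zeta : conj(zeta^alpha) = omega_h}
n_ah = [sum(y[k] for k in range(n)
            if abs((D[k] ** alpha).conjugate() - Omega_s[h]) < 1e-9)
        for h in range(s)]

# direct formula: c_alpha = (1/#D) sum_h n_{alpha,h} omega_h
lhs = sum(n_ah[h] * Omega_s[h] for h in range(s)) / n
assert abs(lhs - c[alpha]) < 1e-9

# inverse formula: n_{alpha,h} = (#D/s) sum_k c_{[-k alpha]} omega_{[hk]}
for h in range(s):
    rec = n / s * sum(c[(-k * alpha) % n] * Omega_s[(h * k) % s] for k in range(s))
    assert abs(rec - n_ah[h]) < 1e-9
```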
\begin{remark} From Proposition \ref{pr:ceqn} we get \begin{eqnarray*} n_{0,h}&=&0, \; h=1,\ldots,s-1 \\ n_{\alpha,0}&=&\frac{\#{\mathcal D}}{s}\sum_{k=0}^{s-1} c_{[-k\alpha]} \end{eqnarray*} and in particular $n_{0,0}=\#{\mathcal F}$. \end{remark}
We now use a part of Proposition 3 of \cite{pistone|rogantin:2008-JSPI} to get conditions on $n_{\alpha,h}$ that make $X^\alpha$ centered on the fraction ${\mathcal F}$.
\begin{proposition} \label{pr:pr3jspi} Let $X^\alpha$ be a term with level set $\Omega_s$ on the full design ${\mathcal D}$. Let $P(\zeta)$ be the complex polynomial associated to the sequence $(n_{\alpha,h})_{h=0,\ldots,s-1}$, so that \[ P(\zeta)= \sum_{h=0}^{s-1} n_{\alpha,h} \zeta^{h} \] and let's denote by $\Phi_s$ the $s$-th cyclotomic polynomial. \begin{enumerate} \item Let $s$ be prime. The term $X^\alpha$ is centered on the fraction ${\mathcal F}$ if, and only if, its $s$ levels appear equally often: \[ n_{\alpha,0}=n_{\alpha,1}=\ldots=n_{\alpha,s-1}=\lambda_\alpha \] \item Let $s=p_1^{h_1} \dots p_d^{h_d}$ with $p_i$ prime, for $i=1,\ldots,d$. The term $X^\alpha$ is centered on the fraction ${\mathcal F}$ if, and only if, the remainder \[ H(\zeta)=P(\zeta) \text{ mod } \Phi_s(\zeta) \] whose coefficients are integer linear combinations of $n_{\alpha,h}, h=0,\ldots,s-1$, is identically zero. \end{enumerate} \end{proposition}
\begin{proof}
See Proposition 3 of \cite{pistone|rogantin:2008-JSPI}. \end{proof}
\begin{remark} Since the strata $D_h^\alpha$, $h=0,\ldots,s-1$, form a partition of ${\mathcal D}$, if $s$ is prime we get $\lambda_\alpha=\frac{\#{\mathcal F}}{s}$. \end{remark}
Recalling that the $n_{\alpha,h}$ are related to the values of the counting function $R$ of a fraction ${\mathcal F}$ by \[ n_{\alpha,h}=\sum_{\zeta \in D_h^\alpha} y_\zeta, \] Proposition \ref{pr:pr3jspi} allows us to express the condition \emph{$X^\alpha$ is centered on ${\mathcal F}$} through integer linear combinations of the values $R(\zeta)$ of the counting function over the full design ${\mathcal D}$. In Section \ref{sec:gof} we will show how to use this property to generate fractional factorial designs.
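For instance, the equal-frequency criterion for prime $s$ can be checked by direct counting. The following Python sketch is a toy example of ours (two ternary factors, with levels written as exponents of $\omega=e^{2\pi i/3}$, and the 3-point fraction $\{\zeta : \zeta_1+\zeta_2 \equiv 0 \pmod 3\}$); it computes the $n_{\alpha,h}$ by counting the fraction points falling in each stratum:

```python
from itertools import product

s, m = 3, 2
D = list(product(range(s), repeat=m))      # full design, levels as exponents of omega
F = [z for z in D if sum(z) % s == 0]      # a toy 3-point fraction (our example)

def strata_counts(points, alpha):
    # n_{alpha,h}: number of fraction points in each stratum D_h^alpha,
    # where X^alpha(z) = omega^(alpha . z mod s)
    n = [0] * s
    for z in points:
        n[sum(a * x for a, x in zip(alpha, z)) % s] += 1
    return n

def is_centered(alpha):
    # for prime s, X^alpha is centered on F iff all s levels appear equally often
    return len(set(strata_counts(F, alpha))) == 1

print(strata_counts(F, (1, 0)))   # -> [1, 1, 1]: X^(1,0) is centered on F
print(strata_counts(F, (1, 1)))   # -> [3, 0, 0]: X^(1,1) is not
```

Only the exponents of the form $k(1,1)$ fail the criterion here, in agreement with the fraction being defined by $\zeta_1+\zeta_2\equiv 0$.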
We conclude this section by restricting to the particular case where all factors have the same number of levels $s$, with $s$ prime. We provide some results concerning the coefficients of counting functions, regular fractions, wordlength patterns and margins.
\subsection{Coefficients of the polynomial counting function} From Proposition \ref{pr:pr3jspi} we get the following result on the coefficients of a counting function. \begin{proposition} \label{pr:samestrata} Given a counting function $R=\sum_{\alpha} c_{\alpha} X^{\alpha}$, if $c_\alpha=0$ then $c_{[k\cdot \alpha]}=0$ for all $k=1,\ldots,s-1$, where $[k\cdot \alpha]$ is $\underbrace{\alpha+\ldots+\alpha}_{k \; times}$ in $\mathbb Z_{s}^m$. \end{proposition} \begin{proof} Let's consider $c_{[k\cdot \alpha]}$. From Proposition \ref{pr:pr3jspi}, $c_{[k\cdot \alpha]}$ is equal to zero if, and only if, \[ \sum_{\zeta \in D_0^{k\cdot \alpha}} y_{\zeta} = \sum_{\zeta \in D_1^{k\cdot \alpha}} y_{\zeta} = \ldots = \sum_{\zeta \in D_{s-1}^{k\cdot \alpha}} y_{\zeta} \] We observe that \begin{eqnarray*} D_h^{k\cdot \alpha} &=& \left \{ \zeta \in {\mathcal D}: \overline{X^{k\cdot \alpha}(\zeta)} = \omega_h \right \} = \\ &=& \left \{ \zeta \in {\mathcal D}: \overline{X^{\alpha}(\zeta)}^k = \omega_h \right \}= \left \{ \zeta \in {\mathcal D}: \overline{X^{\alpha}(\zeta)}= \omega_{[k^{-1}h]} \right \} = D_{[k^{-1}h]}^{\alpha} \end{eqnarray*} where $k^{-1}$ is the multiplicative inverse of $k$ in $\mathbb Z_{s}$, which exists since $s$ is prime.
It follows that $X^{\alpha}$ and $X^{k \cdot \alpha}$ partition ${\mathcal D}$ into the same strata, and the proof follows. \end{proof}
\subsection{Regular designs}
Let's consider a fraction ${\mathcal F}$ without replicates and with indicator function $F=\sum_{\alpha}c_\alpha X^\alpha$. Proposition 5 in \cite{pistone|rogantin:2008-JSPI} states that a fraction ${\mathcal F}$ is regular if, and only if, its indicator function $F$ has the form \[ F=\frac{1}{l}\sum_{\alpha \in \mathcal L} \overline{e(\alpha)}X^\alpha \] where $\mathcal L \subseteq L$, $\mathcal L$ is a subgroup of $L$, $l=\#\mathcal L$ and $e:\mathcal L \rightarrow \{\omega_0,\ldots,\omega_{s-1}\}$ is a given mapping.
If we use Proposition \ref{pr:pr3jspi} we immediately get a characterisation of regular fractions based on the frequencies $n_{\alpha,h}$.
\begin{proposition} Given a single-replicate fraction ${\mathcal F}$ with indicator function $F=\sum_{\alpha} c_{\alpha}X^\alpha$, the following statements are equivalent: \begin{enumerate} \item[(i)] ${\mathcal F}$ is regular \item [(ii)] for the $n_{\alpha,h}$ there are only two possibilities \begin{enumerate}
\item if $c_\alpha=0$ then $n_{\alpha,h} = \frac{\#{\mathcal F}}{s}, \; h=0,\ldots,s-1$,
\item if $c_\alpha \neq 0$ then $\exists h_{*} \in \{0,\ldots,s-1\}$ such that
\begin{equation*}
n_{\alpha,h}=
\begin{cases} \frac{\#{\mathcal D}}{l} & \text{if } h=h_{*}\\ 0 & \text{otherwise}
\end{cases}
\end{equation*} \end{enumerate} \end{enumerate} \end{proposition} \begin{proof} Using Proposition \ref{pr:ceqn} we get \[ c_\alpha = \frac{1}{\#{\mathcal D}} \sum_{h=0}^{s-1} n_{\alpha,h} \omega_h \]
Proposition 5 in \cite{pistone|rogantin:2008-JSPI} gives the following conditions on the coefficients of the indicator function $F$ of a regular fraction ${\mathcal F}$: \[ c_\alpha=\begin{cases} \frac{\overline{e(\alpha)}}{l}, & \alpha \in \mathcal L \subseteq L \\ 0 & \text{otherwise} \end{cases} \] where $e:\mathcal L \rightarrow \{\omega_0,\ldots,\omega_{s-1}\}$, $l=\#\mathcal L$ and $\mathcal L$ is a subgroup of $L$.
Let's consider $\alpha \in \mathcal L$. We get \[ \frac{1}{\#{\mathcal D}} \sum_{h=0}^{s-1} n_{\alpha,h} \omega_h = \frac{\overline{e(\alpha)}}{l} \] Let's suppose $e(\alpha)=\omega_{h_*}$. We obtain \begin{equation} \label{eq} \frac{1}{\#{\mathcal D}} \sum_{h=0, h\neq h_*}^{s-1} n_{\alpha,h} \omega_h + \left(\frac{1}{\#{\mathcal D}} n_{\alpha,h_*} - \frac{1}{l}\right) \omega_{h_*} = 0 \end{equation} To simplify the notation we let $a_h=\frac{1}{\#{\mathcal D}}n_{\alpha,h}$ for $h=0,\ldots,s-1$, $h \neq h_*$, and $a_{h_*}=\frac{1}{\#{\mathcal D}} n_{\alpha,h_*} - \frac{1}{l}$. Therefore, from the proof of item (1) of Proposition \ref{pr:pr3jspi}, for the relation (\ref{eq}) to be valid it must be \[ a_0 = a_1 = \ldots = a_{s-1} \] Since $\sum_{h=0}^{s-1} n_{\alpha,h} = \#{\mathcal F}$ it follows that \[ \sum_{h=0}^{s-1} n_{\alpha,h}=\sum_{h=0, h\neq h_*}^{s-1} (\#{\mathcal D}) a_h + (\#{\mathcal D})( a_{h_*} + \frac{1}{l}) = (\#{\mathcal D}) \sum_{h=0}^{s-1} a_{h} + \frac{(\#{\mathcal D})}{l} = \#{\mathcal F} \] and so \[ a_h = \frac{1}{s(\#{\mathcal D})} (\#{\mathcal F} - \frac{(\#{\mathcal D})}{l} ) \] We finally get \[ n_{\alpha,h} = \begin{cases} \frac{1}{s} (\#{\mathcal F} - \frac{(\#{\mathcal D})}{l} ) + \frac{(\#{\mathcal D})}{l} & \text{if } h=h_*\\ \frac{1}{s} (\#{\mathcal F} - \frac{(\#{\mathcal D})}{l} ) & \text{ otherwise} \end{cases} \] Since $\mathcal L$ is a subgroup of $L$, it follows that $0 \in \mathcal L$ and so $c_0=1/l$. We also know that $c_0=\frac{\#{\mathcal F}}{\#{\mathcal D}}$ and therefore \[ \#{\mathcal F}=\frac{\#{\mathcal D}}{l} \] Substituting $\#{\mathcal F}=\frac{\#{\mathcal D}}{l}$ into the expression for $n_{\alpha,h}$ above gives $n_{\alpha,h_*}=\frac{\#{\mathcal D}}{l}$ and $n_{\alpha,h}=0$ otherwise, as stated. For the null coefficients of $F$, $\{c_\alpha: \alpha \in L - \mathcal L \}$, it is enough to use Proposition \ref{pr:c-strata} to conclude the proof. \end{proof}
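This dichotomy can be verified numerically. The following Python sketch (our own toy code) checks it for the eight-run regular fraction with indicator $\frac{1}{4}(1+X_1X_2X_3)(1+X_1X_4X_5)$, coded with levels $\{1,-1\}$, for which $\#{\mathcal D}=32$, $l=4$ and $\#{\mathcal F}=8$:

```python
from itertools import product

D = list(product([1, -1], repeat=5))
# eight-run regular fraction: points where 1/4 (1+X1X2X3)(1+X1X4X5) equals 1
F = [z for z in D if z[0] * z[1] * z[2] == 1 and z[0] * z[3] * z[4] == 1]

def n_alpha(alpha):
    # frequencies (n_{alpha,0}, n_{alpha,1}) of X^alpha over the fraction (s = 2)
    n = [0, 0]
    for z in F:
        val = 1
        for a, x in zip(alpha, z):
            val *= x ** a
        n[0 if val == 1 else 1] += 1
    return n

# defining subgroup of exponents with nonzero coefficient
L_sub = {(0, 0, 0, 0, 0), (1, 1, 1, 0, 0), (1, 0, 0, 1, 1), (0, 1, 1, 1, 1)}
for alpha in product([0, 1], repeat=5):
    n = n_alpha(alpha)
    # c_alpha != 0 exactly on L_sub: one level gets #D/l = 8, the other 0;
    # otherwise both levels get #F/s = 4
    ok = sorted(n) == [0, 8] if alpha in L_sub else n == [4, 4]
    assert ok
```

All $32$ exponents fall into one of the two cases of the proposition, as expected for a regular fraction.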
\subsection{Wordlength Pattern} \label{sec:wlp}
Aberration is often used as a criterion to compare fractional factorial designs. The generalized minimum aberration, proposed by \cite{xu|wu:2001}, is based on the generalized wordlength pattern, see also \cite{beder|willenbring:2009}. It can be shown that the generalized wordlengths can be written in terms of the squared moduli of the coefficients $c_\alpha$, obtaining \[
A_j = \left( \frac{\#{\mathcal D}}{\#{\mathcal F}} \right)^2 \sum_{wt(\alpha)=j} {\left| c_\alpha \right|^2} = \frac{1}{c_0^2} \sum_{wt(\alpha)=j} {\left| c_\alpha \right|^2} \; \text{ for } j=1,\ldots,m \] where $wt(\alpha)$ is the Hamming weight of $\alpha$, i.e. the number of nonzero components of $\alpha$. We now express the squared modulus of the coefficient $c_\alpha$ in terms of $n_{\alpha,h}$. \begin{proposition} \label{pr:modulo} \begin{equation*}
\left| c_\alpha \right|^2 = \frac{1}{(\#{\mathcal D})^2} \sum_{h=0}^{s-1} ( n_{\alpha,h}^2 - n_{\alpha,h} n_{\alpha,[h-\gamma]}) \; \text{ for } \gamma \in \{1,\ldots,s-1\} \end{equation*} \end{proposition} \begin{proof} From Proposition \ref{pr:ceqn} we get \[ c_\alpha = \frac{1}{\#{\mathcal D}} \sum_{h=0}^{s-1} n_{\alpha,h} \omega_h \] It follows that \begin{eqnarray*}
\left| c_\alpha \right|^2 &=& c_\alpha \overline{c_\alpha} = \\
&=&\frac{1}{(\#{\mathcal D})^2}( \sum_{h=0}^{s-1} n_{\alpha,h} \omega_h)( \sum_{k=0}^{s-1} n_{\alpha,k} \overline{\omega_k}) = \\
&=& \frac{1}{(\#{\mathcal D})^2} ( \sum_{h=0}^{s-1} n_{\alpha,h} \omega_h) ( \sum_{k=0}^{s-1} n_{\alpha,k} \omega_{[s-k]}) = \\
&=& \frac{1}{(\#{\mathcal D})^2} \sum_{\gamma=0}^{s-1} \sum_{p=0}^{s-1} n_{\alpha,p} n_{\alpha,[p-\gamma]} \omega_\gamma \end{eqnarray*}
$\left| c_\alpha \right|^2$ must be a real number. Since $\omega_0=1$, it follows that \begin{eqnarray} \label{eq1}
\left(\frac{1}{(\#{\mathcal D})^2} \sum_{p=0}^{s-1} n_{\alpha,p}^2 - \left| c_\alpha \right|^2\right)\omega_0 + \frac{1}{(\#{\mathcal D})^2} \sum_{\gamma=1}^{s-1} \sum_{p=0}^{s-1} n_{\alpha,p} n_{\alpha,[p-\gamma]} \omega_\gamma = 0 \end{eqnarray}
To simplify the notation we let $a_0=\frac{1}{(\#{\mathcal D})^2} \sum_{p=0}^{s-1} n_{\alpha,p}^2 - \left| c_\alpha \right|^2$ and $a_\gamma=\frac{1}{(\#{\mathcal D})^2} \sum_{p=0}^{s-1} n_{\alpha,p} n_{\alpha,[p-\gamma]}$, $\gamma=1,\ldots,s-1$. Therefore, by Lemma \ref{le:roots}, for the relation (\ref{eq1}) to be valid it must be \[ a_0 = a_1 = \ldots = a_{s-1} \] Using one of the equalities, $a_0=a_h$, $h=1,\ldots,s-1$, it follows that \[
\left| c_\alpha \right|^2 = \frac{1}{(\#{\mathcal D})^2} \sum_{p=0}^{s-1} ( n_{\alpha,p}^2 - n_{\alpha,p} n_{\alpha,[p-h]}) \] \end{proof}
\begin{remark}
Proposition \ref{pr:modulo} provides a useful tool to compute the moduli of the coefficients $c_\alpha$. Indeed it is enough to choose $\gamma=1$ and compute $\left| c_\alpha \right|^2$ as $\frac{1}{(\#{\mathcal D})^2} \sum_{h=0}^{s-1} ( n_{\alpha,h}^2 - n_{\alpha,h} n_{\alpha,[h-1]})$. \end{remark}
\begin{remark} We make these relations explicit for $2$- and $3$-level fractions.
If $s=2$ then \[
\left| c_\alpha \right|^2 = \frac{1}{(\#{\mathcal D})^2} ( n_{\alpha,0} - n_{\alpha,1})^2 \] If $s=3$ then, choosing $\gamma=1$, \[
\left| c_\alpha \right|^2 = \frac{1}{(\#{\mathcal D})^2} ( n_{\alpha,0}^2+ n_{\alpha,1}^2 + n_{\alpha,2}^2 - n_{\alpha,0} n_{\alpha,2} - n_{\alpha,1} n_{\alpha,0} - n_{\alpha,2} n_{\alpha,1}) \] \end{remark}
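As a numerical check, the following Python sketch (our own toy code) computes $c_\alpha$ directly from its definition for the nine-run regular fraction with counting function $\frac{1}{3}(1+X_1X_2X_3+X_1^2X_2^2X_3^2)$, i.e. the points with $\zeta_1+\zeta_2+\zeta_3 \equiv 0 \pmod 3$ with levels written as exponents of $\omega$, and compares $|c_\alpha|^2$ with the strata formula for $s=3$:

```python
import cmath
from itertools import product

s, m = 3, 3
omega = [cmath.exp(2j * cmath.pi * k / s) for k in range(s)]
D = list(product(range(s), repeat=m))
F = [z for z in D if sum(z) % s == 0]       # nine-run regular fraction

def c(alpha):
    # c_alpha = (1/#D) * sum over F of conj(X^alpha(zeta)); conj(omega^k) = omega^(-k)
    return sum(omega[-(sum(a * x for a, x in zip(alpha, z)) % s)] for z in F) / len(D)

def c_squared_from_strata(alpha, gamma=1):
    # |c_alpha|^2 from the frequencies n_{alpha,h}, using the formula above
    n = [0] * s
    for z in F:
        n[sum(a * x for a, x in zip(alpha, z)) % s] += 1
    return sum(n[h] ** 2 - n[h] * n[(h - gamma) % s] for h in range(s)) / len(D) ** 2

for alpha in product(range(s), repeat=m):
    assert abs(abs(c(alpha)) ** 2 - c_squared_from_strata(alpha)) < 1e-12
```

Only the exponents $(0,0,0)$, $(1,1,1)$ and $(2,2,2)$ give $|c_\alpha|^2 = 1/9$; all the others give $0$, as expected for this regular fraction.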
\begin{remark} We observe that, denoting by $\overline{n}_\alpha$ the mean of the values of $n_{\alpha,h}$, $\overline{n}_\alpha=\frac{1}{s} \sum_{h=0}^{s-1} n_{\alpha,h}$, we get \[ \sum_{h=0}^{s-1} {(n_{\alpha,h} - \overline{n}_\alpha)^2} = \sum_{h=0}^{s-1} {n_{\alpha,h}^2} - s \overline{n}_\alpha^2 \] We have \begin{eqnarray*} \overline{n}_\alpha^2 &=& \frac{1}{s^2} \sum_{h,k=0}^{s-1} n_{\alpha,h} n_{\alpha,k} = \\ &=& \frac{1}{s^2} \left( \sum_{h=0}^{s-1} {n_{\alpha,h}^2} + 2 \sum_{h=0}^{s-1} n_{\alpha,h} n_{\alpha,[h-1]} + \ldots + 2 \sum_{h=0}^{s-1} n_{\alpha,h} n_{\alpha,[h-s_*]} \right) \end{eqnarray*} where $s_*=\frac{s-1}{2}$ (here $s$ is an odd prime; the case $s=2$ can be checked directly). The proof of Proposition \ref{pr:modulo} shows that all the quantities $\sum_{h=0}^{s-1} n_{\alpha,h} n_{\alpha,[h-\gamma]}$, $\gamma=1,\ldots,s-1$, are equal and so, choosing, without loss of generality, $\gamma=1$, we get \[ \overline{n}_\alpha^2 = \frac{1}{s^2} \left( \sum_{h=0}^{s-1} {n_{\alpha,h}^2} + 2 s_* \sum_{h=0}^{s-1} n_{\alpha,h} n_{\alpha,[h-1]} \right) = \frac{1}{s^2} \left( \sum_{h=0}^{s-1} {n_{\alpha,h}^2} + (s-1) \sum_{h=0}^{s-1} n_{\alpha,h} n_{\alpha,[h-1]} \right) \] and therefore \begin{eqnarray*} \sum_{h=0}^{s-1} {(n_{\alpha,h} - \overline{n}_\alpha)^2} &=& \sum_{h=0}^{s-1} {n_{\alpha,h}^2} - s \overline{n}_\alpha^2 = \\ &=& \frac{s-1}{s} \left( \sum_{h=0}^{s-1} {n_{\alpha,h}^2} - \sum_{h=0}^{s-1} n_{\alpha,h} n_{\alpha,[h-1]} \right) = \\
&=& \frac{s-1}{s} (\# {\mathcal D})^2 \left| c_\alpha \right|^2 \end{eqnarray*} It follows that, if we denote by $\sigma_\alpha^2$ the variance of $n_{\alpha,h}$, $\sigma_\alpha^2=\frac{1}{s} \sum_{h=0}^{s-1} {(n_{\alpha,h} - \overline{n}_\alpha)^2}$ we get \[
\left| c_\alpha \right|^2 = \left( \frac{s^2}{(s-1) (\# {\mathcal D})^2} \right) \sigma_\alpha^2 \] and so the square of the module of $c_\alpha$ represents, apart from a multiplicative constant, the variance of the frequencies $n_{\alpha,h}$.
\end{remark} \subsection{Margins} \label{sec:margin}
We now examine the relationship between the margins and the coefficients of the counting functions. We refer to \cite{pistone|rogantin:2008-JSPI} and report here a part of it.
For each point $\zeta \in {\mathcal D}$ we consider the decomposition $\zeta=(\zeta_I,\zeta_J)$ where $I \subseteq \{1,\dots,m\}$ and $J = \{1,\dots,m\} - I \equiv I^c$ is its complement. We denote by $R_I(\zeta_I)$ the number of points in ${\mathcal F}$ whose projection on the $I$ factors is $\zeta_I$.
In particular if $I=\{1,\ldots,m\}$ we have $R_I=R$ and if $I = \emptyset$ we have $R_I=\# {\mathcal F}$.
We denote by $L_I$ the subset of the exponents restricted to the $I$ factors and by $\alpha_I$ an element of $L_I$: \[ L_I=\{\alpha_I =(\alpha_1,\dots,\alpha_m) : \alpha_j=0 \text{ if } j\in J\} \] Then for each $\alpha \in L$ and $\zeta \in {\mathcal D}$ we have $\alpha=\alpha_I+\alpha_J$ and $X^\alpha(\zeta)=X^{\alpha_I}(\zeta_I) X^{\alpha_J}(\zeta_J)$. Finally we denote by ${\mathcal D}_I$ and ${\mathcal D}_J$ the full factorial designs over the $I$ factors and the $J$ factors, respectively (${\mathcal D}={\mathcal D}_I \times {\mathcal D}_J$).
We have the following proposition (see items 1 and 2 of Proposition 4 of \cite{pistone|rogantin:2008-JSPI}).
\begin{proposition} \label{pr:pr4_gpmpr} Given a fraction ${\mathcal F}$ of ${\mathcal D}$ \begin{enumerate} \item the number of replicates of the points of ${\mathcal F}$ projected on the $I$ factors is: \[ R_I(\zeta_I)=\#{\mathcal D}_J\sum_{\alpha_I}c_{\alpha_I} X^{\alpha_I}(\zeta_I) \] \item ${\mathcal F}$ fully projects on the $I$ factors if, and only if, \[ R_I(\zeta_I)=\#{\mathcal D}_J \cdot c_0 = \#{\mathcal D}_J \frac{\#{\mathcal F}}{\#{\mathcal D}} = \frac{\#{\mathcal F}}{\#{\mathcal D}_I} \] \end{enumerate} \end{proposition}
We will refer to $R_I$ as $k$-margin, where $k=\#I$. The number of $k$-margins is $\binom{m}{k}$ and each $k$-margin can be computed over $s^{k}$ points $\zeta_I \in {\mathcal D}_I$. It follows that there are $(1+s)^m$ marginal values in total.
Using item 1 of Proposition \ref{pr:pr4_gpmpr} and recalling that each factor has the same prime number $s$ of levels, we have \[ R_I(\zeta_I)=s^{m-k} \sum_{\alpha_I} c_{\alpha_I} \zeta_I^{\alpha_I} \] or, by the definition of $R_I$ as the restriction of $R$ over the $I$ factors, \[ \sum_{\zeta_J \in {\mathcal D}_J} R(\zeta_I,\zeta_J) \equiv \sum_{\zeta_J \in {\mathcal D}_J} y_{\zeta_I,\zeta_J} = s^{m-k} \sum_{\alpha_I} c_{\alpha_I} \zeta_I^{\alpha_I} \]
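Margins are plain projections, so they can be computed by direct summation. The following Python sketch (our own toy code, using the nine-run regular fraction $\{\zeta : \zeta_1+\zeta_2+\zeta_3 \equiv 0 \pmod 3\}$ with levels written as exponents of $\omega$) checks the full-projection property of item 2:

```python
from itertools import product
from collections import Counter

s, m = 3, 3
D = list(product(range(s), repeat=m))
F = [z for z in D if sum(z) % s == 0]     # nine-run regular fraction, strength 2

def margin(points, I):
    # R_I(zeta_I): number of fraction points projecting onto each zeta_I
    return Counter(tuple(z[i] for i in I) for z in points)

R_1 = margin(F, (0,))      # 1-margins
R_12 = margin(F, (0, 1))   # 2-margins
print(set(R_1.values()))   # -> {3}: #F / s^1 replicates, full projection
print(set(R_12.values()))  # -> {1}: #F / s^2 replicates, full projection
```

Both margins are constant, in agreement with $R_I(\zeta_I)=\#{\mathcal F}/\#{\mathcal D}_I$ for a fraction that fully projects on the $I$ factors.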
We point out the following relationship between margins. \begin{proposition} \label{pr:hierarchy} If $A \subseteq B \subseteq \{1,\dots,m\}$ and $R_B(\zeta_B)=s^{m-k_B} c_0$, then $R_A(\zeta_A)=s^{m-k_A} c_{0}$, where $\#B=k_B$ and $\#A=k_A$. \end{proposition} \begin{proof} Let's put $A_1=B - A$. We have \[ R_A(\zeta_A)=\sum_{\zeta_{A_1}\in {\mathcal D}_{A_1}} R_{A \cup A_1}(\zeta_A,\zeta_{A_1})= \sum_{\zeta_{A_1}\in {\mathcal D}_{A_1}} R_B(\zeta_A,\zeta_{A_1})= s^{k_B-k_A} s^{m-k_B} c_{0}= s^{m-k_A} c_{0} \] \end{proof}
We finally observe that, as already pointed out, given $\mathcal C \subseteq L$, a set of conditions $c_\alpha=0$, $\alpha \in \mathcal C$, translates into a set of conditions $\sum_{\zeta \in D_h^\alpha} y_\zeta = \lambda$, $h=0,\dots,s-1$, $\alpha \in \mathcal C$, where $\lambda$ depends neither on $\alpha$ nor on $h$. In general, with respect to margins, the situation is different. For example, let's suppose that ${\mathcal F}$ fully projects over the $I_1$ and the $I_2$ factors, with $I_1 \cap I_2= \emptyset$ and $\#I_1 \neq \#I_2$. From Proposition \ref{pr:pr4_gpmpr} we obtain \[ R_{I_1}(\zeta_{I_1})=\frac{\#{\mathcal F}}{s^{\#I_1}} \text{ and } R_{I_2}(\zeta_{I_2})=\frac{\#{\mathcal F}}{s^{\#I_2}} \] so the two margins take different constant values.
\section{Generation of fractions} \label{sec:gof} Let us use strata to generate fractions that satisfy a given set of constraints on the coefficients of their counting functions. Formally, we give the following definition. \begin{definition} A counting function $R=\sum_{\alpha} {c_\alpha X^{\alpha}}$ associated to ${\mathcal F}$ is a $\mathcal{C}$-compatible counting function if its coefficients satisfy \[ c_{\alpha}=0, \; \alpha \in \mathcal{C}, \; \mathcal{C} \subseteq \mathbb Z_{n_1} \times \ldots \times \mathbb Z_{n_m} \] \end{definition} We will denote by $OF(n_1 \dots n_m,\mathcal C)$ the set of all the fractions whose counting functions are $\mathcal C$-compatible.
In the next sections, we will show our methodology on Orthogonal Arrays and Sudoku designs.
\subsection{$OA(n,s^m,t)$} Let's consider $OA(n,s^m,t)$, i.e. orthogonal arrays with $n$ rows and $m$ columns, where each column has $s$ symbols, $s$ prime, and strength $t$.
Using Proposition \ref{pr:projectivity} we have that the coefficients of the corresponding counting functions must satisfy the conditions $c_\alpha=0$ for all $\alpha \in \mathcal{C}$, where $\mathcal{C} = \{ \alpha \in L : 0 < \|\alpha \| \leq t \}$ and $\|\alpha \|$ is the number of non-null elements of $\alpha$. We have $N_1=\sum_{k=1}^t \binom{m}{k} (s-1)^k$ coefficients that must be null.
It follows that $OF(s^m,\mathcal C) =\bigcup_{n} OA(n,s^m,t)$.
Now using Proposition \ref{pr:pr3jspi}, we can express these conditions using strata. If we consider $\alpha \in \mathcal C$ we write the condition $c_\alpha=0$ as \begin{equation*} \begin{cases} \sum_{\zeta \in D_0^\alpha} y_\zeta = \lambda\\ \sum_{\zeta \in D_1^\alpha} y_\zeta = \lambda\\ \dots \\ \sum_{\zeta \in D_{s-1}^\alpha} y_\zeta = \lambda \end{cases} \end{equation*}
To obtain all the conditions it is enough to let $\alpha$ vary in $\mathcal{C}$. We use Proposition \ref{pr:samestrata} to restrict to the $\alpha$'s that give different strata. It is easy to show that we obtain $N_2 = \frac{N_1}{s-1}$ different $\alpha$'s, each of which generates $s$ linear equations, for a total of \[ N=s N_2 = s \sum_{k=1}^t \binom{m}{k} (s-1)^{k-1} \] constraints on the values of the counting function over ${\mathcal D}$.
We therefore get the following system of linear equations \[ A Y = \lambda \underline{1} \] where $A$ is the $(N \times s^m)$ matrix whose rows contain the values, over ${\mathcal D}$, of the indicator functions of the strata, $1_{D_h^\alpha}$, $Y$ is the $s^m$ column vector whose entries are the values of the counting function over ${\mathcal D}$, $\lambda$ is equal to $\frac{\#{\mathcal F}}{s}$ and $\underline{1}$ is the $N$ column vector whose entries are all equal to $1$. We can write an equivalent homogeneous system if we consider $\lambda$ as a new variable. We obtain \[ \tilde{A} \tilde{Y} = 0 \] where \[ \tilde{A} = \left[ \begin{array}{c|c} A & -\underline{1} \end{array} \right] = \left[ A , -\underline{1} \right] \quad \text{and} \quad \tilde{Y} = \left[ \begin{array}{c} Y \\ \hline \lambda \end{array} \right] = \left( Y , \lambda \right) \]
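The construction of the strata matrix $A$ can be sketched in a few lines of Python (a minimal illustration with our own variable names, not the 4ti2 input format); for $OA(n,2^5,2)$ it yields a $30 \times 32$ matrix:

```python
from itertools import product

s, m, t = 2, 5, 2
D = list(product(range(s), repeat=m))        # the 32 points of the full design

# exponents alpha with 1 <= wt(alpha) <= t (for s = 2 the reduction of the
# proposition on coefficients is vacuous, since s - 1 = 1)
alphas = [a for a in product(range(s), repeat=m)
          if 1 <= sum(x != 0 for x in a) <= t]

A = []
for alpha in alphas:
    for h in range(s):
        # row of A: values of the indicator 1_{D_h^alpha} over the full design
        A.append([1 if sum(a * z for a, z in zip(alpha, x)) % s == h else 0
                  for x in D])

print(len(A), len(A[0]))          # 30 x 32, before appending the -1 column
assert all(sum(row) == len(D) // s for row in A)   # each stratum has #D/s points
```

Appending a final column of $-1$'s then gives the $30 \times 33$ matrix $\tilde{A}$.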
In an equivalent way, we can also express the conditions $c_\alpha=0$ for all $\alpha \in \mathcal{C}$ in terms of margins. We obtain \[ R_I(\zeta_I) = s^{m-(\#I)} c_{0} \] where $I \subseteq \{1,\dots,m \}$ and $ 1 \leq \#I \leq t$. If we recall Proposition \ref{pr:hierarchy}, we can restrict to the margins $R_I$ with $\#I=t$. We have $s^t \binom{m}{t}$ values of such $t$-margins \[ \sum_{\zeta_J \in {\mathcal D}_J} y_{\zeta_I,\zeta_J} = s^{m-t} c_{0} \] In this case, with the same approach that we adopted for strata, we obtain a system of linear equations \[ B Y = \rho \underline{1} \] where $\rho=s^{m-t} c_{0}$, and its equivalent homogeneous system \[ \tilde{B} \tilde{Y} = 0 \]
Now we can find all the generators of $OF(s^m,\mathcal C)$, that is, of the orthogonal arrays $OA(n,s^m,t)$, by computing the Hilbert basis corresponding to $\tilde{A}$ (or, equivalently, to $\tilde{B}$). This approach is the same as in \cite{carlini|pistone:2007} but, in that work, the following conditions were used \[ c_\alpha= \frac 1 {\#{\mathcal D}} \sum_{\zeta \in {\mathcal F}} \overline{X^\alpha(\zeta)} =\frac 1 {\#{\mathcal D}} \sum_{\zeta \in {\mathcal D}} \overline{X^\alpha(\zeta)} y_\zeta =0 \] The advantage of using strata (or margins) is that we avoid computations with complex numbers ($\overline{X^\alpha(\zeta)}$). We illustrate this point in a couple of examples. For the computation we use 4ti2 (\cite{4ti2_1.3.1}).
We use both $\tilde{A}$ (strata) and $\tilde{B}$ (margins) because, although they are fully equivalent in terms of the solutions they generate, they differ in computational speed.
\subsubsection {$OA(n,2^5,2)$} \label{subsec:oa2_5}
$OA(n,2^5,2)$ were investigated in \cite{carlini|pistone:2007}. We build both the matrices $\tilde{A}$ and $\tilde{B}$. They have $30$ and $40$ rows, respectively, and $33$ columns.
We find the same $26,142$ solutions as in the cited paper.
\subsubsection {$OA(n,3^3,2)$} \label{subsec:oa3_3} We build both the matrices $\tilde{A}$ and $\tilde{B}$. They have $54$ and $27$ rows, respectively, and $28$ columns.
We find $66$ solutions: $12$ have $9$ points, all distinct, and $54$ have $18$ points, $17$ of them distinct.
Finally we point out that 4ti2 allows one to specify upper bounds for the variables. For example, if we use $\tilde{B}$ and we are interested in single-replicate orthogonal arrays, we can set $1$ as the upper bound for $y_{\zeta}$, $\zeta \in {\mathcal D}$. The upper bound for the variable $\rho$ can be set to $s^{m-t} \equiv 3^{3-2}$, which corresponds to $c_0=1$, i.e. to the full design ${\mathcal D}$.
\subsection{$OA(n,n_1 \dots n_m,t)$} Let's now consider the general case in which we do not put restrictions on the number of levels.
\subsubsection {$OA(n,4^2,1)$} \label{subsec:oa4_2}
In this case the number of levels is a power of a prime, $2^2$. Using Proposition \ref{pr:projectivity} we have that the coefficients of the corresponding counting functions must satisfy the conditions $c_\alpha=0$ for all $\alpha \in \mathcal{C}$, where $\mathcal{C} = \{ \alpha \in L : \|\alpha \| =1 \}$.
Let's consider $c_{1,0}$. From Proposition \ref{pr:P-1} we have that $X_1$ takes values in $\Omega_s$ with $s=4$. From Proposition \ref{pr:pr3jspi}, $X_1$ will be centered on ${\mathcal F}$ if, and only if, the remainder \[ H(\zeta)= P(\zeta) \text{ mod } \Phi_4(\zeta) \] is identically zero. We have $\Phi_4(\zeta)=1+\zeta^2$ (see \cite{lang:65}) and so we can compute the remainder \[ H(\zeta)=n_{(1,0),0} - n_{(1,0),2} + (n_{(1,0),1} - n_{(1,0),3}) \zeta \] The condition $H(\zeta)$ identically zero translates into \[ \begin{cases} n_{(1,0),0} - n_{(1,0),2}=0 \\ n_{(1,0),1} - n_{(1,0),3}=0 \end{cases} \] Let's now consider $c_{2,0}$. From Proposition \ref{pr:P-1} we have that $X_1^2$ takes values in $\Omega_s$ with $s=2$. From Proposition \ref{pr:pr3jspi}, $X_1^2$ will be centered on ${\mathcal F}$ if, and only if, the remainder \[ H(\zeta)= P(\zeta) \text{ mod } \Phi_2(\zeta) \] is identically zero. We have $\Phi_2(\zeta)=1+\zeta$ (see \cite{lang:65}) and so we can compute the remainder \[ H(\zeta)=n_{(2,0),0} - n_{(2,0),1} \] which must also vanish.
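The remainder $H(\zeta)=P(\zeta) \text{ mod } \Phi_s(\zeta)$ is ordinary polynomial division by a monic divisor, so it can be checked with a short Python sketch (the function name and the sample frequencies are ours):

```python
def poly_mod(p, d):
    # remainder of p modulo the monic polynomial d
    # (both given as coefficient lists in ascending powers of zeta)
    p = list(p)
    while len(p) >= len(d):
        lead = p[-1]
        for i in range(len(d)):
            p[len(p) - len(d) + i] -= lead * d[i]
        p.pop()
    return p

# s = 4: Phi_4 = 1 + zeta^2 reduces P to (n0 - n2) + (n1 - n3) zeta
print(poly_mod([3, 1, 2, 4], [1, 0, 1]))         # -> [1, -3] = [3 - 2, 1 - 4]

# s = 6: Phi_6 = 1 - zeta + zeta^2 reduces P to
# (n0 - n2 - n3 + n5) + (n1 + n2 - n4 - n5) zeta
print(poly_mod([1, 2, 3, 4, 5, 6], [1, -1, 1]))  # -> [0, -6]
```

For $s=2$, $\Phi_2=1+\zeta$ gives the single coefficient $n_0-n_1$, matching the remainder computed above.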
Repeating the same procedure for all the $\alpha$'s such that $\|\alpha \| =1$, and recalling that \[ n_{\alpha,h}=\sum_{\zeta \in D_h^\alpha} y_\zeta, \] the orthogonal arrays $OA(n,4^2,1)$ become the nonnegative integer solutions of the following linear homogeneous system
\[ \left[ \begin{array}{r r r r r r r r r r r r r r r r} 1 & 0 & -1 & 0 & 1 & 0 & -1 & 0 & 1 & 0 & -1 & 0 & 1 & 0 & -1 & 0 \\ 0 & 1 & 0 & -1 & 0 & 1 & 0 & -1 & 0 & 1 & 0 & -1 & 0 & 1 & 0 & -1 \\ 1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 \\ 1 & 0 & -1 & 0 & 1 & 0 & -1 & 0 & 1 & 0 & -1 & 0 & 1 & 0 & -1 & 0 \\ 0 & -1 & 0 & 1 & 0 & -1 & 0 & 1 & 0 & -1 & 0 & 1 & 0 & -1 & 0 & 1 \\ 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & -1 & -1 & -1 & -1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & -1 & -1 & -1 & -1 \\ 1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 & 1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 \\ 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & -1 & -1 & -1 & -1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & -1 & -1 & -1 & -1 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \end{array} \right] \left[ \begin{array}{c} y_{00} \\ y_{10} \\ y_{20} \\ y_{30} \\ y_{01} \\ y_{11} \\ y_{21} \\ y_{31} \\ y_{02} \\ y_{12} \\ y_{22} \\ y_{32} \\ y_{03} \\ y_{13} \\ y_{23} \\ y_{33} \end{array} \right] \]
Using 4ti2 we find $24$ solutions that correspond to all the Latin Hypercube Designs (LHD).
\subsubsection {$OA(n,6^2,1)$} \label{subsec:oa6_2}
As in the previous examples, using Proposition \ref{pr:projectivity} we have that the coefficients of the corresponding counting functions must satisfy the conditions $c_\alpha=0$ for all $\alpha \in \mathcal{C}$, where $\mathcal{C} = \{ \alpha \in L : \|\alpha \| =1 \}$.
Let's consider $c_{1,0}$. From Proposition \ref{pr:P-1} we have that $X_1$ takes values in $\Omega_s$ with $s=6$. From Proposition \ref{pr:pr3jspi}, $X_1$ will be centered on ${\mathcal F}$ if, and only if, the remainder \[ H(\zeta)= P(\zeta) \text{ mod } \Phi_6(\zeta) \] is identically zero. We have $\Phi_6(\zeta)=1-\zeta+\zeta^2$ (see \cite{lang:65}) and so we can compute the remainder \[ H(\zeta)=n_{(1,0),0} - n_{(1,0),2} - n_{(1,0),3} + n_{(1,0),5}+ (n_{(1,0),1} + n_{(1,0),2} - n_{(1,0),4} - n_{(1,0),5}) \zeta \]
Repeating the same procedure for all the $\alpha$'s such that $\|\alpha \| =1$, and recalling that \[ n_{\alpha,h}=\sum_{\zeta \in D_h^\alpha} y_\zeta, \] the orthogonal arrays $OA(n,6^2,1)$ become the nonnegative integer solutions of a linear homogeneous system $A Y=0$, where the matrix $A$ is built as in the previous case of $OA(n,4^2,1)$. Using 4ti2 we find $620$ solutions that correspond to all the Latin Hypercube Designs (LHD).
\subsection{Sudoku designs}
As shown in \cite{fontana|rogantin:sudo2008}, a sudoku can be described using its indicator function. Here we report a very short synthesis of Section 1.3 of that work.
A $p^2 \times p^2$ sudoku design, with $p$ prime, can be seen as a fraction ${\mathcal F}$ of the full factorial design ${\mathcal D}$: $$ {\mathcal D} = R_1 \times R_2 \times C_1 \times C_2 \times S_1 \times S_2 $$ where each factor is coded with the $p$-th roots of unity. $R_1$ and $R_2$, $C_1$ and $C_2$, $S_1$ and $S_2$ represent the rows, the columns and the symbols of the sudoku grid, respectively.
The following proposition (Proposition 5 of \cite{fontana|rogantin:sudo2008}) holds. \begin{proposition} \label{pr:pr5} Let $F$ be the indicator function of a fraction ${\mathcal F}$ of the design ${\mathcal D}$, $F=\sum_{\alpha \in L} b_\alpha X^\alpha$. The fraction ${\mathcal F}$ corresponds to a sudoku grid if and only if the coefficients $b_\alpha$ satisfy the following conditions: \begin{enumerate} \item \label{it:b0} $b_{000000} = 1/{p^2}$, i.e. the ratio between the number of points of the fraction and the number of points of the full factorial design is $ 1/{p^2}$; \item \label{it:b} for all $i_j \in \left\{0,1,\dots,p-1\right\}$: \begin{enumerate} \item $b_{i_1 i_2 i_3 i_4 0 0}=0$ for $(i_1, i_2, i_3, i_4) \neq (0,0,0,0)$, \item $b_{i_1 i_2 0 0 i_5 i_6}=0$ for $(i_1, i_2, i_5, i_6) \neq (0,0,0,0)$, \item $b_{0 0 i_3 i_4 i_5 i_6}=0$ for $(i_3, i_4, i_5, i_6) \neq (0,0,0,0)$, \item $b_{i_1 0 i_3 0 i_5 i_6}=0$ for $(i_1, i_3, i_5, i_6) \neq (0,0,0,0)$ \end{enumerate} i.e. the fraction factorially projects onto the first four factors and onto both symbol factors and row/column/box factors, respectively. \end{enumerate} \end{proposition}
From this Proposition, we define $\mathcal C$ as the union of $\mathcal C_1$, $\mathcal C_2$, $\mathcal C_3$ and $\mathcal C_4$, where \begin{eqnarray*} \mathcal C_1 &=&\{ (i_1 i_2 i_3 i_4 0 0): (i_1, i_2, i_3, i_4) \neq (0,0,0,0)\} \\ \mathcal C_2 &=&\{ (i_1 i_2 0 0 i_5 i_6): (i_1, i_2, i_5, i_6) \neq (0,0,0,0)\} \\ \mathcal C_3 &=&\{ (0 0 i_3 i_4 i_5 i_6): (i_3, i_4, i_5, i_6) \neq (0,0,0,0)\} \\ \mathcal C_4 &=&\{ (i_1 0 i_3 0 i_5 i_6): (i_1, i_3, i_5, i_6) \neq (0,0,0,0)\} \\ \end{eqnarray*} The problem of finding sudoku designs becomes equivalent to finding $\mathcal C$-compatible counting functions that (i) are indicator functions and (ii) satisfy the additional requirement $b_{000000}=1/{p^2}$.
\subsubsection {$4 \times 4$ Sudoku} \label{subsec:sudo4_4} We use the conditions $\mathcal C$ to build both the matrices $\tilde{A}$ and $\tilde{B}$. $\tilde{A}$ has $78$ rows. As for $\tilde{B}$, which corresponds to the margins that must be constant, recalling Proposition \ref{pr:hierarchy} we obtain $64$ constraints, all corresponding to $4$-margins.
To find all sudoku designs we use 4ti2, specifying the upper bounds for all the $65$ variables. The upper bounds for $y_\zeta$, $\zeta \in {\mathcal D}$, must be equal to $1$. If we use $\tilde{A}$, the upper bound for $\lambda$ must be set equal to $\frac{\#{\mathcal F}}{s}\equiv \frac{16}{2}=8$, while if we use $\tilde{B}$ the upper bound for $\rho$ must be set equal to $s^{m-k}b_0 \equiv 2^2 \frac{1}{4}=1$.
We find all the $288$ different $4 \times 4$ sudoku as in \cite{fontana|rogantin:sudo2008}. We point out that to solve the problem using $\tilde{A}$ the total time was $31.59$ minutes, while using $\tilde{B}$ the total time was only $58.04$ seconds on the same computer.
If we admit counting functions with values in $\{0,1,2\}$ and $\#{\mathcal F} \leq 32$ we find $55,992$ solutions.
\section{Moves} \label{sec:moves} Sometimes, given a set of conditions $\mathcal C$, we are more interested in picking one solution than in finding all the generators. The basic idea is to generate a starting solution somehow and then to walk randomly in the set of all the solutions for a certain number of steps, taking the arrival point as a new, but still $\mathcal C$-compatible, counting function.
Let's use the previous results on strata to get a suitable set of \emph{moves}. We will show this procedure in the case in which all the factors have the same number of levels $s$, with $s$ prime, but it can also be applied to the general case. In Section \ref{sec:gof} we have shown that counting functions must satisfy the following set of linear equations \[ A Y = \lambda \underline{1} \] where $A$ corresponds to the set of conditions $\mathcal C$ written in terms of strata.
It follows that if, given a $\mathcal C$-compatible solution $Y$ such that $A Y= \lambda \underline{1}$, we search for an additive move $X$ such that $A(Y+X)$ is still equal to $\lambda \underline{1}$, we have to solve the following linear homogeneous system \[ A X = 0 \] with $X=(x_\zeta)$, $\zeta \in {\mathcal D}$, $x_\zeta \in \mathbb Z$ and $y_\zeta+x_\zeta \geq 0$ for all $\zeta \in {\mathcal D}$. We observe that this set of conditions allows us to determine new $\mathcal C$-compatible solutions \emph{that give the same $\lambda$}. We know that $\lambda=\frac{\#{\mathcal F}}{s}$, so this homogeneous system determines moves that \emph{do not change the dimension of the solutions}.
Let's now consider the extended homogeneous system, where $\tilde{A}$ has already been defined in Section \ref{sec:gof}, \[ \tilde{A} \tilde{X} =0 \] with $\tilde{X}=(\tilde{x}_\zeta), \zeta \in {\mathcal D}$, $\tilde{x}_\zeta \in \mathbb Z$ and $\tilde{y}_\zeta+\tilde{x}_\zeta \geq 0$ for all $\zeta \in {\mathcal D}$.
Given $\tilde{Y}=(Y,\lambda_Y)$, where $Y$ is $\mathcal C$-compatible counting function and $\lambda_Y=\frac{\sum_\zeta y_\zeta}{s}$, the solutions of $\tilde{A} \tilde{X} =0$ determine all the other $\tilde{Y}+\tilde{X}=(Y+X,\lambda_{Y+X})$ such that $\tilde{A} (\tilde{Y}+\tilde{X}) =0$. $Y+X$ are $\mathcal C$-compatible counting functions whose sizes, $s \lambda_{Y+X}$, are, in general, \emph{different from that of $Y$}.
\subsection{Markov Basis}
We use the theory of Markov bases (see for example \cite{Drton|Sturmfels|Sullivant:2009}, which also contains a rich bibliography on this subject) to determine a set of generators of the moves.
We use the following procedure in order to randomly select a $\mathcal C$-compatible counting function. We compute a Markov basis of $\ker(A)$ using 4ti2 (\cite{4ti2_1.3.1}). Once we have determined the Markov basis of $\ker(A)$, we make a random walk on the \emph{fiber} of $Y$, where $Y$, as usual, contains the values of the counting function of an initial design ${\mathcal F}$. The fiber is made of all the $\mathcal C$-compatible counting functions that have the same size as ${\mathcal F}$. The random walk is done by randomly choosing one move among the feasible ones, i.e. among the moves for which we do not get negative values for the new counting function.
In the next paragraphs we consider moves for the cases that we have already studied in Section \ref{sec:gof}.
\subsection{Orthogonal arrays}
\subsubsection{$OA(n,2^5,2)$} We use the matrix $A$, already built in Section \ref{subsec:oa2_5}, and give it as input to 4ti2 to obtain the Markov basis, which we denote by $\mathcal{M}$. It contains $5,538$ different moves. Given $M=(x_\zeta)\in \mathcal M$ we define $M^+=\max(x_\zeta,0)$ and $M^{-}=\max(-x_\zeta,0)$. We have $M=M^+-M^-$.
As an initial fraction ${\mathcal F}_0$, we consider the eight-run regular fraction whose indicator function $R_0$ is \[ R_0=\frac{1}{4}(1+X_1X_2X_3)(1+X_1X_4X_5). \] We obtain the set of feasible moves by observing that a move $M \in \mathcal{M}$, to be feasible, must not make the counting function negative where $R_0$ is equal to zero, that is, \[ (1-R_0) M^{-} =0. \] We find $12$ such moves. Analogously, an element $M \in \mathcal{M}$ such that \[ (1-R_0) M^{+} =0 \] gives a feasible move, $-M$. In this case we find no such element.
Therefore, given $R_0$, the set of feasible moves is $\mathcal M_{R_0}$, which contains $12+0=12$ different moves.
We randomly choose one move $M_{R_0}$ out of the $12$ available ones and move to \[ R_1=R_0+M_{R_0}. \] We run $1{,}000$ simulations repeating the same loop, generating $R_i$ as $R_i=R_{i-1}+M_{R_{i-1}}$.
We obtain all the $60$ different 8-run fractions, each one with 8 different points, as in \cite{carlini|pistone:2007}.
Using $\tilde{A}$ we obtain the set $\mathcal{\tilde{M}}$, which contains $18$ different moves.
\subsubsection{$OA(n,3^3,2)$} Using $A$ as built in Section \ref{subsec:oa3_3}, we use 4ti2 to generate the Markov basis corresponding to the homogeneous system $AX=0$. We obtain $\mathcal{M}$, which contains $81$ different moves.
As an initial fraction we can consider the nine-run regular fraction ${\mathcal F}_0$ whose indicator function $R_0$ is \[ R_0=\frac{1}{3}(1+X_1 X_2 X_3+X_1^2 X_2^2 X_3^2). \] We run $1{,}000$ simulations repeating the same loop, i.e. generating $R_i$ as $R_i=R_{i-1}+M_{R_{i-1}}$.
We obtain all the $12$ different 9-run fractions, each one with 9 different points as known in the literature and as found in Section \ref{subsec:oa3_3}.
Using $\tilde{A}$ we also obtain the set $\mathcal{\tilde{M}}$ that contains $10$ different moves.
\subsubsection{ $4 \times 4$ sudoku} Using the matrix $A$ built in Section \ref{subsec:sudo4_4}, we run 4ti2 and obtain the Markov basis $\mathcal{M}$, which contains $34{,}920$ moves.
We randomly choose an initial sudoku
\renewcommand{\arraystretch}{1.2}
\begin{equation*} \begin{array} { |c c|c c|} \cline{1-4}
3 & 2 & 4 & 1 \\
4 & 1 & 3 & 2 \\ \cline{1-4}
2 & 3 & 1 & 4 \\
1 & 4 & 2 & 3 \\ \cline{1-4}
\end{array} \end{equation*} The corresponding indicator function is \[ F_0=\frac 1 4 ( 1 - R_2 C_1 S_1 S_2 ) ( 1 - R_1 C_2 S_1) \ . \] Then we extract from $\mathcal{M}$ the feasible moves. We obtain a subset $\mathcal M_{F_0}$ that contains $5$ different moves. We repeat the procedure on $-\mathcal{M}$ and obtain another $9$ moves.
We randomly choose one move $M_{F_0}$ out of the $5+9$ available ones and move to \[ F_1=F_0+M_{F_0}. \] We run $1{,}000$ simulations repeating the same loop $F_i=F_{i-1}+M_{F_{i-1}}$.
We obtain all the $288$ different $4 \times 4$ sudokus.
\section{Conclusions}
We considered mixed-level fractional factorial designs. Given the counting function $R$ of a fraction ${\mathcal F}$, we translated the constraint $c_\alpha=0$, where $c_\alpha$ is a generic coefficient of its polynomial representation $R=\sum_{\alpha}c_{\alpha} X^{\alpha}$, into a set of linear constraints with integer coefficients on the values $y_\zeta$ that $R$ takes on the points $\zeta \in {\mathcal D}$. We obtained the set of generators of the solutions of some problems using Hilbert bases. We also studied the moves between fractions and characterized them as the solutions of a homogeneous linear system. We defined a procedure, based on the Markov basis of this system, to walk randomly among the solutions, and we demonstrated the procedure on some examples. Computations were made using 4ti2 (\cite{4ti2_1.3.1}).
The main advantages of the procedure are that we do not put restrictions on the number of levels of the factors and that it is not necessary to use software that deals with complex polynomials.
One limitation is the high computational effort required. In particular, only a small part of the Markov basis is used, because counting functions can only take values greater than or equal to zero. The possibility of generating only the feasible moves could make the entire process more efficient and is part of current research.
\end{document}
\begin{document}
\date{\today}
\title[Solvable quadratic Lie algebras in low dimensions]{Solvable quadratic Lie algebras in low dimensions} \author{Tien Dat Pham, Anh Vu Le, Minh Thanh Duong}
\address{Department of Physics, Ho Chi Minh city University of Pedagogy, 280 An Duong Vuong, Ho Chi Minh city, Vietnam.}
\email{thanhdmi@hcmup.edu.vn} \email{vula@uel.edu.vn}
\keywords{Solvable Lie algebras. Quadratic Lie algebras. Double extension. Classification. Low dimension}
\subjclass[2000]{17B05, 17B30, 17B40} \begin{abstract}
In this paper, we classify solvable quadratic Lie algebras up to dimension 6. In dimensions less than 6, we use the Witt decomposition given in \cite{Bou59} and a result in \cite{PU07} to obtain two non-Abelian indecomposable solvable quadratic Lie algebras. In the case of dimension 6, by applying the method of double extension given in \cite{Kac85} and \cite{MR85} and the classification result of singular quadratic Lie algebras in \cite{DPU}, we obtain three families of indecomposable, pairwise non-isomorphic solvable quadratic Lie algebras. \end{abstract} \maketitle
\section{Introduction}
Throughout the paper, the base field is the complex field $\CC$. All considered vector spaces are finite-dimensional complex vector spaces.
As is well known, the Killing form is a useful tool in the study of semisimple Lie algebras. The Cartan criterion in the classification of Lie algebras, and the proof of the Kostant-Morosov theorem, an important tool in the classification of adjoint orbits of the classical Lie algebras $\mathfrak{o}(m)$ and $\mathfrak{sp}(2n)$, are based on the non-degeneracy of the Killing form. Another remarkable property of the Killing form is its invariance. Therefore, it is natural to ask whether there are Lie algebras, not necessarily semisimple, for which there exists a non-degenerate invariant bilinear form. We call them {\em quadratic} Lie algebras. The answer is affirmative. For instance, let $\mathfrak{g}$ be a Lie algebra, $\mathfrak{g}^*$ its dual space and $\operatorname{ad}^*:\mathfrak{g}\rightarrow \operatorname{End}(\mathfrak{g}^*)$ the coadjoint representation of $\mathfrak{g}$ on $\mathfrak{g}^*$. Then the vector space $\bar{\mathfrak{g}}=\mathfrak{g}\oplus \mathfrak{g}^*$ with the product defined by: \[ [X+f, Y+g]_{\bar{\mathfrak{g}}}=[X,Y]_{\mathfrak{g}} + \operatorname{ad}^*(X)(g)-\operatorname{ad}^*(Y)(f) \] becomes a quadratic Lie algebra with the invariant bilinear form given by: \[B(X+f, Y+g) = f(Y)+g(X)\] for all $X,\ Y \in \mathfrak{g},\ f,\ g\in\mathfrak{g}^*$.
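As a direct check of the invariance (a step the text leaves implicit), recall that $(\operatorname{ad}^*(X)(g))(Z)=-g([X,Z]_{\mathfrak{g}})$; then

```latex
\begin{align*}
B([X+f,Y+g],Z+h)
 &= \bigl(\operatorname{ad}^*(X)(g)-\operatorname{ad}^*(Y)(f)\bigr)(Z)+h([X,Y]_{\mathfrak{g}})\\
 &= -g([X,Z]_{\mathfrak{g}})+f([Y,Z]_{\mathfrak{g}})+h([X,Y]_{\mathfrak{g}}),\\
B(X+f,[Y+g,Z+h])
 &= f([Y,Z]_{\mathfrak{g}})+\bigl(\operatorname{ad}^*(Y)(h)-\operatorname{ad}^*(Z)(g)\bigr)(X)\\
 &= f([Y,Z]_{\mathfrak{g}})-h([Y,X]_{\mathfrak{g}})+g([Z,X]_{\mathfrak{g}}),
\end{align*}
```

and the two right-hand sides agree by antisymmetry of the bracket.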
Quadratic Lie algebras are interesting algebraic objects related to many problems in mathematics and physics (see \cite{Bor97}, \cite{FS96} and their references). The notion can also be generalized to Lie superalgebras or similarly considered for other algebras (\cite{Bor97}, \cite{BB99}, \cite{AB10} or \cite{ZC07}). Many works have been devoted to developing tools for the study of quadratic Lie algebras (\cite{Kac85}, \cite{MR85}, \cite{Bor97} and \cite{PU07}), among which the notion of double extension is an effective method for constructing the structure of a quadratic Lie algebra. It was initiated by V. Kac in the solvable case \cite{Kac85} and later developed by A. Medina and P. Revoy in the general case \cite{MR85}.
In this paper, we approach quadratic Lie algebras in a familiar way, namely in low dimensions. We focus on the solvable case and on the classification up to dimension 6. The classification of solvable quadratic Lie algebras up to dimension 4 can be found in \cite{ZC07}, but here we redo it in a shorter way based on the Witt decomposition in \cite{Bou59} combined with a result in \cite{PU07}. We use this method to classify solvable quadratic Lie algebras of dimension 5, which has been done in \cite{DPU} by the method of double extension. In the case of dimension 6, we apply the classification of $\operatorname{O}(n)$-adjoint orbits of the Lie algebra $\mathfrak{o}(n)$ (see \cite{CM93}) and the classification of singular quadratic Lie algebras given in \cite{DPU} to obtain three families of indecomposable, pairwise non-isomorphic solvable quadratic Lie algebras.
The paper is organized in 3 sections. The first one introduces basic definitions and preliminaries. The classification of solvable quadratic Lie algebras up to dimension 5 is given in Section 2. In Section 3, we consider a particular case of the notion of double extension and apply it to obtain all indecomposable solvable quadratic Lie algebras of dimension 6.
\section{Preliminaries}
\begin{defn}Let $\mathfrak{g}$ be a Lie algebra. A bilinear form $B:\ \mathfrak{g}\times\mathfrak{g}\rightarrow \CC$ is called:
\begin{enumerate}
\item[(i)] {\em symmetric} if $B(X,Y) = B(Y,X)$ for all $X, \ Y \in\mathfrak{g}$,
\item[(ii)] {\em non-degenerate} if $B(X,Y) = 0$ for all $Y\in\mathfrak{g}$ implies $X=0$,
\item[(iii)] {\em invariant} if $B([X,Y],Z) = B(X,[Y,Z])$ for all $X,\ Y,\ Z \in\mathfrak{g}$. \end{enumerate}
A Lie algebra $\mathfrak{g}$ is called {\em quadratic} if there exists a bilinear form $B$ on $\mathfrak{g}$ such that $B$ is symmetric, non-degenerate and invariant. \end{defn}
Let $(\mathfrak{g},B)$ be a quadratic Lie algebra and $V$ a subspace of $\mathfrak{g}$. Denote the {\em orthogonal complement} of $V$ by $V^\bot=\{X\in\mathfrak{g}\ |\ B(X,Y)=0,\ \forall \ Y\in V\}$; then we have: \[ \dim(V)+\dim(V^\bot) = \dim(\mathfrak{g}). \]
An element $X$ in $\mathfrak{g}$ is called {\em isotropic} if $B(X,X)=0$, and a subspace $W$ of $\mathfrak{g}$ is called {\em totally isotropic} if $B(X,Y)=0$ holds for all $X,\ Y\in W$. In this case, it is obvious that $W\subset W^\bot$.
The study of quadratic Lie algebras can be reduced to the study of {\em indecomposable} ones by the following decomposition \cite{Bor97}.
\begin{prop}\label{prop1.2}
Let $(\mathfrak{g},B)$ be a quadratic Lie algebra and $I$ be an ideal of $\mathfrak{g}$. Then $I^\bot$ is also an ideal of $\mathfrak{g}$. Moreover, if the restriction of $B$ on $I\times I$ is non-degenerate then the restriction of $B$ on $I^\bot\times I^\bot$ is also non-degenerate, $[I,I^\bot]=\{0\}$ and $I\cap I^\bot=\{0\}$.
\end{prop}
If the restriction of $B$ to $I\times I$ is non-degenerate then $I$ is called a {\em non-degenerate} ideal of $\mathfrak{g}$. In this case, $\mathfrak{g}=I\oplus I^\bot$. Since this direct sum is orthogonal, for convenience we use the notation $\mathfrak{g}=I\oplusp I^\bot$.
\begin{defn} We say a quadratic Lie algebra $\mathfrak{g}$ {\em indecomposable} if $\mathfrak{g} = \mathfrak{g}_1
\oplusp \mathfrak{g}_2$, with $\mathfrak{g}_1$ and $\mathfrak{g}_2$ ideals of $\mathfrak{g}$, implies $\mathfrak{g}_1$
or $\mathfrak{g}_2 = \{0\}$. \end{defn}
\begin{defn} Let $\mathfrak{g}$ and $\mathfrak{g}'$ be two Lie algebras endowed with non-degenerate invariant bilinear forms $B$ and $B'$ respectively. If $A$ is a Lie algebra isomorphism from $\mathfrak{g}$ onto $\mathfrak{g}'$ satisfying $B'(A(X),A(Y)) = B(X,Y)$ for all $X,Y\in\mathfrak{g}$ then we say that $\mathfrak{g}$ and $\mathfrak{g}'$ are \emph{i-isomorphic} and $A$ is an \emph{i-isomorphism} from $\mathfrak{g}$ onto $\mathfrak{g}'$. \end{defn}
Remark that the notions of isomorphism and i-isomorphism may not be equivalent. An example can be found in \cite{DPU}.
Next, we introduce another decomposition, called the {\em reduced} decomposition. This notion allows us to focus only on quadratic Lie algebras having a totally isotropic center.
\begin{prop} \label{prop2.8} \cite{PU07}
Let $(\mathfrak{g},B)$ be a non-Abelian quadratic Lie algebra. Then there
exist a central ideal $\mathfrak{z}$ and an ideal $\mathfrak{l} \neq \{0\}$ such
that:
\begin{enumerate}
\item[(i)] $\mathfrak{g} = \mathfrak{z} \oplusp \mathfrak{l}$, where $\left( \mathfrak{z}, B|_{\mathfrak{z} \times \mathfrak{z}} \right)$ and $\left(\mathfrak{l},
B|_{\mathfrak{l} \times \mathfrak{l}} \right)$ are quadratic Lie algebras. Moreover,
$\mathfrak{l}$ is non-Abelian.
\item[(ii)] The center $\mathscr Z(\mathfrak{l})$ is totally isotropic, i.e. $\mathscr Z(\mathfrak{l})
\subset [\mathfrak{l}, \mathfrak{l}]$ and
\[\dim(\mathscr Z(\mathfrak{l}))\leq \frac{1}{2} \dim(\mathfrak{l})\leq\dim([\mathfrak{l}, \mathfrak{l}]).
\]
\item[(iii)] Let $\mathfrak{g}'$ be a quadratic Lie algebra and $A : \mathfrak{g} \to \mathfrak{g}'$ be a
Lie algebra isomorphism. Then \[ \mathfrak{g}' = \mathfrak{z}' \oplusp \mathfrak{l}'\] where
$\mathfrak{z}' = A(\mathfrak{z})$ is central, $\mathfrak{l}' = A(\mathfrak{z})^\perp$, $\mathscr Z(\mathfrak{l}')$ is
totally isotropic and $\mathfrak{l}$ and $\mathfrak{l}'$ are isomorphic. Moreover if
$A$ is an i-isomorphism then $\mathfrak{l}$ and $\mathfrak{l}'$ are i-isomorphic.
\end{enumerate}
\end{prop}
\begin{defn} A quadratic Lie algebra $\mathfrak{g}\neq\{0\}$ is called {\em reduced} if $\mathscr Z(\mathfrak{g})$ is totally isotropic. \end{defn}
Note that if a quadratic Lie algebra $\mathfrak{g}$ of dimension greater than 1 is not reduced then there is a central element $X$ such that $B(X,X)\neq 0$. This means that the ideal spanned by $X$ is non-degenerate. By Proposition \ref{prop1.2}, $\mathfrak{g}$ is decomposable. \section{Solvable quadratic Lie algebras up to dimension 5} \label{Section2}
In this section, we classify all indecomposable solvable quadratic Lie algebras up to dimension 5. The classification is based on the {\em Witt decomposition} given in \cite{Bou59} as follows: \begin{prop} Let $V$ be a finite-dimensional complex vector space endowed with a non-degenerate bilinear form $B$, and let $U$ be a totally isotropic subspace of $V$. Then there exist a totally isotropic subspace $W$ and a non-degenerate subspace $F$ of $V$ such that $\dim(U)=\dim(W)$, $F=(U\oplus W)^\bot$ and \[V=F\oplusp (U\oplus W). \] \end{prop}
As a consequence, if $\{X_1,...,X_n\}$ is a basis of $U$ then there exists a basis $\{Y_1,...,Y_n\}$ of $W$ such that $B(X_i,Y_j)=\delta_{ij}$ for all $1\leq i,j\leq n$.
We recall the classification of solvable quadratic Lie algebras up to dimension 4 in \cite{ZC07}, but here we give a shorter proof based on a remarkable result in \cite{PU07}: if $\mathfrak{g}$ is a non-Abelian quadratic Lie algebra then $\dim([\mathfrak{g},\mathfrak{g}])\geq 3$. \begin{prop}\label{prop3.2} Let $\mathfrak{g}$ be a solvable quadratic Lie algebra with $\dim(\mathfrak{g})\leq 4$. Then we have the following cases: \begin{enumerate}
\item[(i)] If $\dim(\mathfrak{g})\leq 3$ then $\mathfrak{g}$ is Abelian.
\item[(ii)] If $\dim(\mathfrak{g})=4$ and $\mathfrak{g}$ is non-Abelian then $\mathfrak{g}$ is i-isomorphic to the diamond Lie algebra $\mathfrak{g}_4 = \operatorname{span}\{X,P,Q,Z\}$, where the subspaces spanned by $\{X,P\}$ and $\{Q,Z\}$ are totally isotropic, $B(X,Z) = B(P,Q) = 1$, $B(X,Q)= B(P,Z) = 0$ and the Lie bracket is defined by $[X,P] = P$, $[X,Q] = -Q$, $[P,Q] = Z$, otherwise trivial. \end{enumerate} \end{prop} \begin{proof} Since $\mathfrak{g}$ is solvable, $[\mathfrak{g},\mathfrak{g}]\neq \mathfrak{g}$. Therefore, if $\dim(\mathfrak{g})\leq 3$ then $\mathfrak{g}$ is Abelian. If $\dim(\mathfrak{g})=4$ and $\mathfrak{g}$ is non-Abelian, we show that $\mathfrak{g}$ is reduced. Indeed, if $\mathfrak{g}$ is not reduced then there exists a central element $X$ such that $B(X,X)\neq 0$. This implies that $I=\CC X$ is a non-degenerate ideal of $\mathfrak{g}$, and so we have \[\mathfrak{g} = I\oplusp I^\bot. \] Note that $I^\bot$ is a solvable quadratic Lie algebra of dimension 3 and hence Abelian. Therefore, $\mathfrak{g}$ is Abelian. This contradiction shows that $\mathfrak{g}$ is reduced.
Since $[\mathfrak{g},\mathfrak{g}]\neq \mathfrak{g}$ and $\dim([\mathfrak{g},\mathfrak{g}])\geq 3$, we get $\dim([\mathfrak{g},\mathfrak{g}])= 3$ and so, since $\mathscr Z(\mathfrak{g})=[\mathfrak{g},\mathfrak{g}]^\bot$, $\dim(\mathscr Z(\mathfrak{g}))= 1$. Assume $\mathscr Z(\mathfrak{g})=\CC Z$. Since $\mathscr Z(\mathfrak{g})$ is totally isotropic, by the Witt decomposition there exist a one-dimensional totally isotropic subspace $W$ and a two-dimensional non-degenerate subspace $F$ of $\mathfrak{g}$ such that: \[\mathfrak{g}=F\oplusp(\mathscr Z(\mathfrak{g})\oplus W). \] Moreover, we can choose an element $X$ in $W$ and a basis $\{P,Q\}$ of $F$ such that $B(X,Z)=B(P,Q)=1$ and $B(P,P)=B(Q,Q)=0$.
Since $[\mathfrak{g},\mathfrak{g}]=\mathscr Z(\mathfrak{g})^\bot$, we have $[\mathfrak{g},\mathfrak{g}]=\operatorname{span}\{Z,P,Q\}$, and therefore we can write \[[X,P]=a_1Z+b_1P+c_1Q,\ [X,Q]=a_2Z+b_2P+c_2Q \ \text{and}\ [P,Q]=a_3Z+b_3P+c_3Q, \] where $a_i,\ b_i,\ c_i\in\CC,\ 1\leq i\leq 3$.
Since $B(X,[X,P])=B([X,X],P)=0$ and $B(P,[X,P])=-B([P,P],X)=0$ one has $a_1=c_1=0$. Similarly, $a_2=b_2=0$ and $b_3=c_3=0$. Moreover, $B(X,[P,Q])=B([X,P],Q)=-B([X,Q],P)$ implies $b_1 = -c_2=a_3$.
Replacing $Z$ by $a_3Z$ and $X$ by $\frac{X}{a_3}$ (note that $a_3\neq 0$, since otherwise all brackets would vanish and $\mathfrak{g}$ would be Abelian), we obtain the Lie bracket $[X,P]=P$, $[X,Q]=-Q$ and $[P,Q]=Z$. \end{proof}
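The bracket and form just obtained can be verified mechanically. The following Python sketch (not part of the original argument) encodes the structure constants of $\mathfrak{g}_4$ and checks the Jacobi identity and the invariance of $B$ on all basis triples.

```python
import itertools

basis = ['X', 'P', 'Q', 'Z']
idx = {b: i for i, b in enumerate(basis)}
n = len(basis)

# bracket[i][j] = coefficient vector of [e_i, e_j] in the basis
bracket = [[[0] * n for _ in range(n)] for _ in range(n)]

def set_bracket(a, b, result):
    """Record [a,b] as given, and [b,a] = -[a,b]."""
    v = [0] * n
    for name, coeff in result.items():
        v[idx[name]] = coeff
    bracket[idx[a]][idx[b]] = v
    bracket[idx[b]][idx[a]] = [-c for c in v]

set_bracket('X', 'P', {'P': 1})    # [X,P] = P
set_bracket('X', 'Q', {'Q': -1})   # [X,Q] = -Q
set_bracket('P', 'Q', {'Z': 1})    # [P,Q] = Z

# invariant form: B(X,Z) = B(P,Q) = 1, symmetric, all else 0
Bmat = [[0] * n for _ in range(n)]
Bmat[idx['X']][idx['Z']] = Bmat[idx['Z']][idx['X']] = 1
Bmat[idx['P']][idx['Q']] = Bmat[idx['Q']][idx['P']] = 1

def lie(u, v):
    """Bracket of two coefficient vectors."""
    w = [0] * n
    for i, j, k in itertools.product(range(n), repeat=3):
        w[k] += u[i] * v[j] * bracket[i][j][k]
    return w

def B(u, v):
    return sum(u[i] * Bmat[i][j] * v[j]
               for i in range(n) for j in range(n))

e = [[int(i == j) for j in range(n)] for i in range(n)]
for a, b, c in itertools.product(e, repeat=3):
    jacobi = [r + s + t for r, s, t in
              zip(lie(a, lie(b, c)), lie(b, lie(c, a)), lie(c, lie(a, b)))]
    assert jacobi == [0] * n                     # Jacobi identity
    assert B(lie(a, b), c) == B(a, lie(b, c))    # invariance of B
```

The only non-trivial Jacobi instance is $(X,P,Q)$: $[X,[P,Q]]+[P,[Q,X]]+[Q,[X,P]]=[X,Z]+[P,Q]+[Q,P]=0$, which the loop confirms.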
Now, we continue with $\mathfrak{g}$ an indecomposable solvable quadratic Lie algebra of dimension 5. It is obvious that $\mathfrak{g}$ is reduced. By Proposition \ref{prop2.8}, there are only two cases: $\dim(\mathscr Z(\mathfrak{g}))= 1$ and $\dim(\mathscr Z(\mathfrak{g}))= 2$. \begin{enumerate}
\item[(i)] If $\dim(\mathscr Z(\mathfrak{g}))= 1$, assume $\mathscr Z(\mathfrak{g})=\CC Z$. Then there exist an isotropic element $Y$ and a subspace $F$ of $\mathfrak{g}$ such that $B(Z,Y)=1$ and $\mathfrak{g}=F\oplusp(\CC Z\oplus \CC Y)$, where $F=(\CC Z\oplus \CC Y)^\bot$. We can choose a basis $\{P,Q,X\}$ of $F$ satisfying $B(P,X)=B(Q,Q)=1$, the others zero.
Since $[\mathfrak{g},\mathfrak{g}]=\mathscr Z(\mathfrak{g})^\bot$, we have $[\mathfrak{g},\mathfrak{g}]=\operatorname{span}\{Z,P,Q,X\}$. Moreover, since $B([Y,X],Y)=B([Y,X],X) = 0$, we can assume $[Y,X]=a_1X + b_1Q$ with $a_1,\ b_1\in\CC$. Similarly, we have the Lie bracket defined by:
\[ [Y,Q]=a_2X+b_2P,\ \ [Y,P]=a_3Q+b_3P,\ \ [X,Q]=a_4X+b_4Z\] \[ [X,P]=a_5Q+b_5Z \text{ and }\ [Q,P]=a_6P+b_6Z\] where $a_i,\ b_i\in\CC$, $2\leq i\leq 6$.
By the invariance of $B$, we obtain $a_1=b_5=-b_3$, $b_1 = b_4=-b_2$, $a_2 = b_6=-a_3$ and $a_4=a_6=-a_5$. Therefore, we rewrite Lie brackets as follows:
\[ [Y,X]=xX+yQ,\ \ [Y,Q]=zX-yP,\ \ [Y,P]=-zQ-xP,\]
\[ [X,Q]=wX+yZ,\ \ [X,P]=-wQ+xZ\ \ \text{ and }\ [Q,P]=wP+zZ\]
where $x,\ y,\ z, \ w \in\CC$.
If $w\neq 0$, set $A:=[X,Q]$, $B:=[X,P]$ and $C:=[Q,P]$; then one has $[A,B]=-w^2 A$, $[B,C]=-w^2 C$ and $[C,A]=-w^2 B$. This implies that the vector space $U=\operatorname{span}\{A,B,C\}$ is a subalgebra of $\mathfrak{g}$, and this subalgebra is not solvable, contradicting the solvability of $\mathfrak{g}$. So $w = 0$. In this case, it is easy to check that $zX-xQ+yP\in \mathscr Z(\mathfrak{g})$. On the other hand, the numbers $x,\ y,\ z$ are not all zero, since otherwise $\mathfrak{g}$ would be Abelian. So $\dim(\mathscr Z(\mathfrak{g}))> 1$. This contradiction shows that this case does not occur.
\item[(ii)] If $\dim(\mathscr Z(\mathfrak{g}))= 2$, assume $\mathscr Z(\mathfrak{g})=\operatorname{span}\{Z_1,Z_2\}$. By the Witt decomposition, there exist elements $X_1,\ X_2$ and $T$ such that $\mathfrak{g}$ is spanned by $\{Z_1,Z_2,T,X_1,X_2\}$, the subspace $W=\operatorname{span}\{X_1,X_2\}$ is totally isotropic, and the bilinear form $B$ is defined by $B(T,T)=1$, $B(X_i,Z_j)=\delta_{ij}$, $1\leq i,j\leq 2$, the others zero.
Since $[\mathfrak{g},\mathfrak{g}]=\mathscr Z(\mathfrak{g})^\bot$, we have $[\mathfrak{g},\mathfrak{g}]=\operatorname{span}\{Z_1,Z_2,T\}$. Moreover, since $B$ is invariant, we can assume $[X_1,X_2]=x T$, $[X_1,T]=y Z_2$ and $[X_2,T]=z Z_1$, where $x,\ y,\ z\in\CC$. By the invariance of $B$ again, we obtain $x=-y=z$, and $x\neq 0$ since $\mathfrak{g}$ is non-Abelian. Replacing $X_1$ by $\frac{X_1}{x}$ and $Z_1$ by $x Z_1$, we obtain $[X_1,X_2]=T$, $[X_1,T]=- Z_2$ and $[X_2,T]=Z_1$. \end{enumerate}
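Since $Z_1$ and $Z_2$ are central, the only triple on which the Jacobi identity is not trivially satisfied is $(X_1,X_2,T)$, and indeed

```latex
[X_1,[X_2,T]]+[X_2,[T,X_1]]+[T,[X_1,X_2]]
  = [X_1,Z_1]+[X_2,Z_2]+[T,T] = 0.
```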
Finally, we obtain the following classification of solvable quadratic Lie algebras of dimension 5. \begin{prop} Let $(\mathfrak{g},B)$ be an indecomposable solvable quadratic Lie algebra of dimension 5. Then there exists a basis $\{Z_1,Z_2,T,X_1,X_2\}$ of $\mathfrak{g}$ such that $B(T,T)=1$, $B(X_i,Z_j)=\delta_{ij}$, $1\leq i,j\leq 2$, the others zero, and the Lie bracket is defined by $[X_1,X_2]=T$, $[X_1,T]=- Z_2$ and $[X_2,T]=Z_1$, the other brackets trivial. \end{prop} \section{Solvable quadratic Lie algebras of dimension 6} \subsection{Double extension} \begin{defn} Let $(\mathfrak{g},B)$ be a quadratic Lie algebra. A derivation $D$ of $\mathfrak{g}$ is called {\em skew-symmetric} if $D$ satisfies \[ B(D(X),Y)=-B(X,D(Y)), \ \ \ \forall X,\ Y\in\mathfrak{g}. \] \end{defn}
Denote by $\operatorname{Der}_a(\mathfrak{g})$ the vector space of skew-symmetric derivations of $\mathfrak{g}$. By the invariance of $B$, all inner derivations of $\mathfrak{g}$ are in $\operatorname{Der}_a(\mathfrak{g})$. \begin{defn} (\cite{Kac85} and \cite{MR85})
Let $(\mathfrak{g},B)$ be a quadratic Lie algebra and $C\in \operatorname{Der}_a(\mathfrak{g})$. On the vector space \[\bar{\mathfrak{g}}=\mathfrak{g}\oplus \CC e\oplus \CC f, \] we define the product: \[[X,Y]_{\bar{\mathfrak{g}}} = [X,Y]_{\mathfrak{g}} +B(C(X),Y)f,\ \ [e,X]=C(X)\ \text{ and } [f,\bar{\mathfrak{g}}]=0 \] for all $X,\ Y\in\mathfrak{g}$.
Then $\bar{\mathfrak{g}}$ becomes a Lie algebra. Moreover, $\bar{\mathfrak{g}}$ is a quadratic Lie algebra with an invariant bilinear form $\bar{B}$ defined by: \[\bar{B}(e,e)=\bar{B}(f,f)=\bar{B}(e,\mathfrak{g})=\bar{B}(f,\mathfrak{g})=0, \ \bar{B}(X,Y)=B(X,Y) \text{ and } \bar{B}(e,f)=1 \] for all $X,\ Y\in\mathfrak{g}$. In this case, we call $\bar{\mathfrak{g}}$ the {\em double extension of $\mathfrak{g}$ by $C$}. \end{defn}
The Lie algebra $\bar{\mathfrak{g}}$ is also called the {\em double extension of $\mathfrak{g}$ by the one-dimensional Lie algebra by means of $C$} (or a {\em one-dimensional double extension}, for short). A more general definition can be found in \cite{MR85}. However, the one-dimensional double extension is sufficient for studying solvable quadratic Lie algebras by the following proposition (see \cite{Kac85} or \cite{FS87}). \begin{prop}\label{prop3.4} Let $(\mathfrak{g},B)$ be a solvable quadratic Lie algebra of dimension $n$, $n\geq 2$. Assume $\mathfrak{g}$ non-Abelian. Then $\mathfrak{g}$ is a one-dimensional double extension of a solvable quadratic Lie algebra of dimension $n-2$. \end{prop} \begin{rem} A particular case of one-dimensional double extensions is $\mathfrak{g}$ Abelian; then $C$ is a skew-symmetric map in the Lie algebra $\mathfrak{o}(\mathfrak{g})$ and the Lie bracket on $\bar{\mathfrak{g}}$ is given by: \[[e,X]=C(X)\ \ \text{and}\ \ [X,Y]=B(C(X),Y)f,\ \ \ \forall\ X,\ Y\in\mathfrak{g}. \] Such Lie algebras have been classified up to isomorphism and up to i-isomorphism in \cite{DPU}. \end{rem} \begin{ex} Let $\mathfrak{q}$ be a two-dimensional complex vector space endowed with a non-degenerate symmetric bilinear form $B$. In this case, $\mathfrak{q}$ is called a two-dimensional {\em quadratic} vector space. We can choose a basis $\{P,Q\}$ of $\mathfrak{q}$ such that $B(P,P)=B(Q,Q)=0$ and $B(P,Q)=1$ (we call $\{P,Q\}$ a {\em canonical} basis of $\mathfrak{q}$). Let $C:\mathfrak{q}\rightarrow\mathfrak{q}$ be a skew-symmetric map of $\mathfrak{q}$, that is, an endomorphism satisfying $B(C(X),Y)=-B(X,C(Y))$ for all $X,\ Y\in\mathfrak{q}$. In this example, we choose $C$ with the following matrix with respect to the basis $\{P,Q\}$:
\[C = \begin{pmatrix} 1 & 0 \\ 0 & -1\end{pmatrix}. \] Set the vector space \[\mathfrak{g}=\mathfrak{q}\oplus \CC X\oplus \CC Z \] and define the product: \[[X,P]=C(P)=P,\ \ [X,Q]=C(Q)=-Q \ \ \text{ and }\ \ [P,Q]=B(C(P),Q)Z = Z \] Then $\mathfrak{g}$ is the diamond Lie algebra given in Proposition \ref{prop3.2} (ii). \end{ex} \subsection{Solvable quadratic Lie algebras of dimension 6}
By Proposition \ref{prop3.4}, the key to the classification of solvable quadratic Lie algebras is describing the skew-symmetric derivations of a solvable quadratic Lie algebra. In particular, in the case of dimension 6, it is necessary to describe the skew-symmetric derivations of solvable quadratic Lie algebras of dimension 4. By the classification result in Proposition \ref{prop3.2}, we focus on the Abelian Lie algebra of dimension 4 and the diamond Lie algebra. We need the following lemma.
\begin{lem}\label{lem3.8} Any skew-symmetric derivation of the diamond Lie algebra $\mathfrak{g}_4$ is inner. \end{lem} \begin{proof} Assume $\mathfrak{g}=\operatorname{span}\{X,P,Q,Z\}$, where $[X,P]=P$, $[X,Q]=-Q$ and $[P,Q]=Z$, and the bilinear form $B$ is given by $B(X,Z) = B(P,Q)=1$, the others zero. Let $D$ be a skew-symmetric derivation of $\mathfrak{g}$. Since $\mathscr Z(\mathfrak{g})$ is stable under $D$, we can assume $D(Z)=xZ$ with $x\in\CC$. Moreover, since $D$ is skew-symmetric, \[B(D(X),Z)=-B(X,D(Z)) = -B(X,xZ)=-x.\] So we can assume $D(X)=-xX+yP+zQ+wZ$ with $y,\ z,\ w\in\CC$. The ideal $[\mathfrak{g},\mathfrak{g}]$ is also stable under $D$, so we write: \[ D(P)=aP+bQ+cZ\ \ \text{ and }\ \ D(Q)=a'P+b'Q+c'Z, \] where $a,\ b,\ c,\ a',\ b',\ c'\in \CC$.
One has $D(P)=D([X,P]) = [D(X),P] + [X,D(P)]$, from which we obtain $x=b=0$ and $z=-c$. By a straightforward calculation, we also obtain $y=-c'$, $a'=w=0$ and $a=-b'$, so the matrix of $D$ with respect to the basis $\{X,P,Q,Z\}$ is:
\[D = \begin{pmatrix} 0 & 0 & 0 & 0 \\ y & a & 0 & 0 \\ z &
0 & -a & 0\\ 0 & -z & -y & 0\end{pmatrix}. \] It is clear that $D= \operatorname{ad}(aX-yP+zQ)$, hence $D$ is an inner derivation. \end{proof}
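The equality $D=\operatorname{ad}(aX-yP+zQ)$ can be seen by applying $\operatorname{ad}(u)$, with $u=aX-yP+zQ$, to each basis vector:

```latex
\begin{align*}
\operatorname{ad}(u)(X) &= -y[P,X]+z[Q,X] = yP+zQ,\\
\operatorname{ad}(u)(P) &= a[X,P]+z[Q,P] = aP-zZ,\\
\operatorname{ad}(u)(Q) &= a[X,Q]-y[P,Q] = -aQ-yZ,\\
\operatorname{ad}(u)(Z) &= 0,
\end{align*}
```

which are exactly the columns of the matrix of $D$.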
\begin{cor}\label{cor3.9} Any double extension of $\mathfrak{g}_4$ by a skew-symmetric derivation is decomposable. \end{cor} \begin{proof}
Keep the basis $\{X,P,Q,Z\}$ of $\mathfrak{g}_4$ as in the proof of Lemma \ref{lem3.8} and let $D$ be a skew-symmetric derivation of $\mathfrak{g}_4$. By the above lemma, the matrix of $D$ is:
\[D = \begin{pmatrix} 0 & 0 & 0 & 0 \\ y & a & 0 & 0 \\ z &
0 & -a & 0\\ 0 & -z & -y & 0\end{pmatrix} \] where $a,\ y,\ z\in\CC$.
Let $\bar{\mathfrak{g}}=\mathfrak{g}_4\oplus \CC e\oplus \CC f$ be the double extension of $\mathfrak{g}_4$ by $D$. Then the Lie bracket on $\bar{\mathfrak{g}}$ is defined as follows: \[[e,X]=yP+zQ,\ \ [e,P] = aP-zZ, \ \ [e,Q]=-aQ-yZ, \ \ [X,P]=P+zf,\] \[[X,Q]=-Q+yf \ \text{and } [P,Q]=Z+af. \] It is easy to check that $u=-e+aX-yP+zQ$ is in $\mathscr Z(\bar{\mathfrak{g}})$. Moreover, $f\in\mathscr Z(\bar{\mathfrak{g}})$ and $B(u,f)=-1$. Therefore $\mathscr Z(\bar{\mathfrak{g}})$ is not totally isotropic, and hence $\bar{\mathfrak{g}}$ is decomposable. \end{proof}
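The centrality of $u=-e+aX-yP+zQ$ claimed in the proof can be checked term by term against the brackets above (the brackets of $u$ with $Z$, $f$ and $e$ vanish trivially or by the same computation):

```latex
\begin{align*}
[u,X] &= -(yP+zQ)+y(P+zf)+z(Q-yf)=0,\\
[u,P] &= -(aP-zZ)+a(P+zf)-z(Z+af)=0,\\
[u,Q] &= -(-aQ-yZ)+a(-Q+yf)-y(Z+af)=0.
\end{align*}
```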
A more general result than Corollary \ref{cor3.9} can be found in \cite{FS96}, where it is shown that any double extension by an inner derivation is decomposable.
\begin{prop} Let $(\mathfrak{g},B)$ be a solvable quadratic Lie algebra of dimension 6. Assume $\mathfrak{g}$ indecomposable. Then there exists a basis $\{Z_1,Z_2,Z_3,X_1,X_2,X_3\}$ of $\mathfrak{g}$ such that the bilinear form $B$ is defined by $B(X_i,Z_j)=\delta_{ij}$, $1\leq i,j\leq 3$, the others zero, and $\mathfrak{g}$ is i-isomorphic to one of the following Lie algebras:
\begin{enumerate}
\item[(i)] $\mathfrak{g}_{6,1}$: $[X_3,Z_2]=Z_1$, $[X_3,X_1]=-X_2$ and $[Z_2,X_1]=Z_3$,
\item[(ii)] $\mathfrak{g}_{6,2}(\lambda)$: $[X_3,Z_1]=Z_1$, $[X_3,Z_2]=\lambda Z_2$, $[X_3,X_1]=-X_1$, $[X_3,X_2]=-\lambda X_2$, $[Z_1,X_1]=Z_3$ and $[Z_2,X_2]=\lambda Z_3$, where $\lambda\in\CC$ and $\lambda \neq 0$. In this case, $\mathfrak{g}_{6,2}(\lambda_1)$ and $\mathfrak{g}_{6,2}(\lambda_2)$ are i-isomorphic if and only if $\lambda_2=\pm\lambda_1$ or $\lambda_2={\lambda_1}^{-1}$,
\item[(iii)] $\mathfrak{g}_{6,3}$: $[X_3,Z_1]=Z_1$, $[X_3,Z_2]=Z_1+Z_2$, $[X_3,X_1]=-X_1-X_2$, $[X_3,X_2]=-X_2$ and $[Z_1,X_1]=[Z_2,X_1]=[Z_2,X_2]=Z_3$. \end{enumerate}
\end{prop} \begin{proof}
Assume that $(\mathfrak{g},B)$ is an indecomposable solvable quadratic Lie algebra of dimension 6. Then $\mathfrak{g}$ is a double extension of a four-dimensional solvable quadratic Lie algebra $\mathfrak{q}$ by a skew-symmetric derivation $C$. By Corollary \ref{cor3.9}, and since $\mathfrak{g}$ is indecomposable, $\mathfrak{q}$ must be Abelian. Therefore, $\mathfrak{g}$ can be written as $\mathfrak{g} = (\CC X_3\oplus\CC Z_3)\oplusp \mathfrak{q}$, where $\mathfrak{q}$ is a four-dimensional quadratic vector space, $C = \operatorname{ad}(X_3)\in\mathfrak{o}(\mathfrak{q},B_\mathfrak{q})$, $B(X_3,Z_3) = 1$, $B(X_3,X_3) = B(Z_3,Z_3) = 0$ and $B_\mathfrak{q} = B|_{\mathfrak{q}\times\mathfrak{q}}$ \cite{DPU}. Moreover, the classification of $\mathfrak{g}$ up to i-isomorphism reduces to the classification of $\operatorname{O}(\mathfrak{q})$-orbits in $\PP^1(\mathfrak{o}(\mathfrak{q}))$, where $\PP^1(\mathfrak{o}(\mathfrak{q}))$ denotes the projective space of $\mathfrak{o}(\mathfrak{q})$.
Let $\{Z_1,Z_2,X_1,X_2\}$ be a canonical basis of $\mathfrak{q}$, that is, $B_\mathfrak{q}(Z_i,Z_j) = B_\mathfrak{q}(X_i,X_j) = 0$ and $B_\mathfrak{q}(Z_i,X_j) = \delta_{ij}$, $1\leq i,j\leq 2$. Since $\mathfrak{g}$ is indecomposable, we choose the $\operatorname{O}(\mathfrak{q})$-orbits in $\PP^1(\mathfrak{o}(\mathfrak{q}))$ whose representative element $C$ satisfies $\ker(C)\subset\operatorname{Im}(C)$; the matrix of $C$ with respect to the basis $\{Z_1,Z_2,X_1,X_2\}$ is given by one of the following matrices:
\begin{enumerate}
\item[(i)] $C = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 &
0 & 0 & 0\\ 0 & 0 & -1 & 0\end{pmatrix}$: the nilpotent case, \item[(ii)] $C(\lambda) = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \lambda & 0 & 0 \\ 0 &
0 & -1 & 0\\ 0 & 0 & 0 & -\lambda\end{pmatrix},\ \lambda\neq 0$: the diagonalizable case, \item[(iii)] $C = \begin{pmatrix} 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 &
0 & -1 & 0\\ 0 & 0 & -1 & -1\end{pmatrix}$: the invertible case (see \cite{DPU} for more details). \end{enumerate}
Therefore, we have the three pairwise non-i-isomorphic families of quadratic Lie algebras given in the proposition, corresponding to the above cases. Note that for these Lie algebras, the notions of i-isomorphism and isomorphism are equivalent. For the second family, $\mathfrak{g}_{6,2}(\lambda_1)$ is i-isomorphic to $\mathfrak{g}_{6,2}(\lambda_2)$ if and only if there exists a nonzero $\mu\in\CC$ such that $C(\lambda_1)$ is in the $\operatorname{O}(\mathfrak{q})$-adjoint orbit through $\mu C(\lambda_2)$. This happens if and only if $\lambda_2=\pm\lambda_1$ or $\lambda_2={\lambda_1}^{-1}$.
\end{proof}
\begin{bibdiv} \begin{biblist} \bib{AB10}{article}{
author={Ayadi, I.},
author={Benayadi, S.},
title={Symmetric Novikov superalgebras},
journal={J. Math. Phys.},
fjournal={Journal of Mathematical Physics},
volume={51},
number={2},
date={2010},
pages={023501},
}
\bib{BB99}{article}{
author={Benamor, H.},
author={Benayadi, S.},
title={Double extension of quadratic Lie superalgebras},
journal={Comm. in Algebra},
fjournal={Communications in Algebra},
volume={27},
number={1},
date={1999},
pages={67 -- 88},
}
\bib{Bor97}{article}{
author={Bordemann, M.},
title={Nondegenerate invariant bilinear forms on nonassociative algebras},
journal={Acta Math. Univ. Comenianae},
volume={LXVI},
number={2},
date={1997},
pages={151 -- 201}, }
\bib{Bou59}{book}{
author={Bourbaki, N.},
title={Eléments de Mathématiques. Algèbre, Formes sesquilinéaires et formes quadratiques},
volume={Fasc. XXIV, Livre II},
publisher={Hermann},
place={Paris},
date={1959},
pages={}, }
\bib{CM93}{book}{
author={Collingwood, D. H.},
author={McGovern, W. M.},
title={Nilpotent Orbits in Semisimple Lie Algebras},
publisher={Van Nostrand Reinhold Mathematics Series},
place={New York},
date={1993},
pages={186}, } \bib{DPU}{article}{
author={Duong, M. T.},
author={Pinczon, G.},
author={Ushirobira, R.},
title={A new invariant of quadratic Lie algebras},
journal={Algebr. Represent. Theory},
fjournal={Algebras and Representation Theory},
note={DOI: 10.1007/s10468-011-9284-4},
volume={},
pages={41 pages}, } \bib{Kac85}{book}{
author={Kac, V.},
title={Infinite-dimensional Lie algebras},
publisher={Cambridge University Press},
place={New York},
date={1985},
pages={xvii + 280 pp}
} \bib{FS87}{article}{
author={Favre, G.},
author={Santharoubane, L.J.},
title={Symmetric, invariant, non-degenerate bilinear form on a Lie algebra},
journal={J. of Algebra},
fjournal={Journal of Algebra},
volume={105},
date={1987},
pages={451--464},
} \bib{FS96}{article}{
author={Figueroa-O'Farrill, J. M.},
author={Stanciu, S.},
title={On the structure of symmetric self-dual Lie algebras},
journal={J. Math. Phys.},
fjournal={Journal of Mathematical Physics},
volume={37},
number={8},
date={1996},
pages={4121 -- 4134}, }
\bib{Med85}{article}{
author={Medina, A.},
title={Groupes de Lie munis de m\'etriques bi-invariantes},
journal={T\^ohoku Math. Journ.},
fjournal={T\^ohoku Mathematical Journal},
volume={37},
date={1985},
pages={405--421},
}
\bib{MR85}{article}{
author={Medina, A.},
author={Revoy, P.},
title={Alg\`ebres de Lie et produit scalaire invariant},
journal={Ann. Sci. \'Ecole Norm. Sup.},
fjournal={Annales Scientifiques de l'\'Ecole Normale Sup\'erieure},
volume={4},
date={1985},
pages={553--561},
} \bib{PU07}{article}{
author={Pinczon, G.},
author={Ushirobira, R.},
title={New Applications of Graded Lie Algebras to Lie Algebras, Generalized Lie Algebras, and Cohomology},
journal={J. of Lie Theory},
fjournal={Journal of Lie Theory},
volume={17},
date={2007},
number={3},
pages={633 -- 668},
}
\bib{ZC07}{article}{
author={Zhu, F.},
author={Chen, Z.},
title={Novikov algebras with associative bilinear forms},
journal={J. Phys. A: Math. Theor.},
fjournal={Journal of Physics A: Mathematical and Theoretical},
volume={40},
date={2007},
number={47},
pages={14243--14251},
}
\end{biblist} \end{bibdiv}
\end{document}
\begin{document}
\title{Normalized solutions for $p$-Laplacian equation with critical Sobolev exponent and mixed nonlinearities}
\author{Shengbing Deng\footnote{Corresponding author.}\ \footnote{E-mail address:\, {\tt shbdeng@swu.edu.cn} (S. Deng), {\tt qrwumath@163.com} (Q. Wu).}\ \ and Qiaoran Wu\\ \footnotesize School of Mathematics and Statistics, Southwest University, Chongqing, 400715, P.R. China}
\date{ } \maketitle
\begin{abstract} {In this paper, we consider the existence and multiplicity of normalized solutions for the following $p$-Laplacian critical equation
\begin{align*}
\left\{\begin{array}{ll}
-\Delta_{p}u=\lambda\lvert u\rvert^{p-2}u+\mu\lvert u\rvert^{q-2}u+\lvert u\rvert^{p^*-2}u&\mbox{in}\ \mathbb{R}^N,\\
\int_{\mathbb{R}^N}\lvert u\rvert^pdx=a^p,
\end{array}\right.
\end{align*}
where $1<p<N$, $2<q<p^*=\frac{Np}{N-p}$, $a>0$, $\mu\in\mathbb{R}$ and $\lambda\in\mathbb{R}$ is a Lagrange multiplier. Using the concentration-compactness lemma, Schwarz rearrangement, the Ekeland variational principle and minimax theorems, we obtain several existence results for $\mu>0$ under additional assumptions. We also analyze the asymptotic behavior of these solutions as $\mu\rightarrow 0$ and as $\mu$ goes to its upper bound. Moreover, we prove a nonexistence result for $\mu<0$, and we show by genus theory that the $p$-Laplacian equation has infinitely many solutions when $p<q<p+\frac{p^2}{N}$. }
\emph{\bf Keywords:} Normalized solutions; $p$-Laplacian equation; Sobolev critical nonlinearities; Infinitely many solutions.
\emph{\bf 2020 Mathematics Subject Classification:} 35B33, 35J62, 35J92.
\end{abstract}
\section{\textbf{Introduction}}
In this paper, we consider the following $p$-Laplacian equation
\begin{equation}\label{equation}
-\Delta_{p}u=\lambda\lvert u\rvert^{p-2}u+\mu\lvert u\rvert^{q-2}u+\lvert u\rvert^{p^*-2}u\quad\mbox{in}\ \mathbb{R}^N,
\end{equation}
where $1<p<N$, $2<q<p^*=\frac{Np}{N-p}$, $\lambda,\mu\in\mathbb{R}$ and $\Delta_{p}u=\mbox{div}(\lvert\nabla u\rvert^{p-2}\nabla u)$ is the $p$-Laplacian operator.
If $p=2$, then equation (\ref{equation}) can be derived from the time-dependent equation as
\begin{equation}\label{time}
i\psi_{t}+\Delta\psi+\mu\lvert\psi\rvert^{q-2}\psi+\lvert\psi\rvert^{2^*-2}\psi=0\quad\mbox{in}\ \mathbb{R}_{+}\times\mathbb{R}^N,
\end{equation}
when we look for standing waves of the form $\psi(t,x)=e^{-i\lambda t}u(x)$. Equation (\ref{time}) arises both in the famous Schr\"odinger equation, which describes the laws of particle motion \cite{hhsca,sn1,sn2}, and in models of Bose-Einstein condensates \cite{fg,ttvmzx,zz}. Consider the following equation
\begin{equation}\label{f}
-\Delta u=\lambda u+f(u)\quad\mbox{in}\ \mathbb{R}^N.
\end{equation}
A direct way to study the existence of solutions of (\ref{f}) is to find critical points of the following functional
\[I(u)=\frac{1}{2}\int_{\mathbb{R}^N}|\nabla u|^2dx-\frac{\lambda}{2}\int_{\mathbb{R}^N}| u|^2dx-\int_{\mathbb{R}^N}F(u)dx,\]
where $F(s)=\int_{0}^{s}f(t)dt$. In this case, particular attention is devoted to least action solutions, namely solutions minimizing $I(u)$ among all non-trivial solutions; here we refer the readers to \cite{acosmasmm,fagf}. Another possible approach is to prescribe the $L^2$ mass of $u$, that is, to consider the constraint
\begin{equation}\label{l2con}
\int_{\mathbb{R}^N}\lvert u\rvert^2dx=a^2,
\end{equation}
where $a>0$ is a constant. Then, the corresponding functional of (\ref{f}) is
\[J(u)=\frac{1}{2}\int_{\mathbb{R}^N}|\nabla u|^2dx-\int_{\mathbb{R}^N}F(u)dx,\]
and $\lambda$ appears as a Lagrange multiplier. Solutions of (\ref{f}) with prescribed mass are called normalized solutions.
When we consider normalized solutions of (\ref{f}), a new critical exponent $2+\frac{4}{N}$ appears, called the $L^2$-critical exponent. This constant can be derived from the Gagliardo-Nirenberg inequality: for every $p<q<p^*$, there exists an optimal constant $C_{N,p,q}>0$ such that
\[\lVert u\rVert_{q}^q\leqslant C_{N,p,q}\lVert\nabla u\rVert_{p}^{q\gamma_{q}}\lVert u\rVert_{p}^{q(1-\gamma_{q})}\quad\forall u\in W^{1,p}(\mathbb{R}^N),\]
where
\[\gamma_{q}:=\frac{N(q-p)}{pq}.\]
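As a quick consistency check (our own computation, using only the definition of $\gamma_{q}$), the comparison between $q\gamma_{q}$ and $p$ reproduces the $L^{p}$-critical exponent that will govern the case distinctions below:

```latex
% From the definition \gamma_q = N(q-p)/(pq):
\[
q\gamma_{q}=\frac{N(q-p)}{p},
\qquad\text{hence}\qquad
q\gamma_{q}<p
\;\Longleftrightarrow\;
q<p+\frac{p^{2}}{N},
\]
```

with equality precisely at the $L^{p}$-critical exponent $q=p+\frac{p^{2}}{N}$.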
By \cite[section 1]{am}, $C_{N,p,q}$ is attained by some $\psi_{0}\in W^{1,p}(\mathbb{R}^N)$ which satisfies
\[-\Delta_{p}u+\lvert u\rvert^{p-2}u=\beta\lvert u\rvert^{q-2}u\]
for some $\beta>0$. Moreover, $\psi_{0}$ can be chosen non-negative, radially symmetric, radially non-increasing and tending to $0$ as $\lvert x\rvert\rightarrow+\infty$. If the nonlinearity of (\ref{f}) is a pure $L^2$-subcritical term, for example $f(u)=\lvert u\rvert^{q-2}u$ with $2<q<2+\frac{4}{N}$, then by the Gagliardo-Nirenberg inequality it is not difficult to prove that $J(u)$ is bounded from below on the constraint (\ref{l2con}), and one can find a global minimizer of (\ref{f}); here we refer the readers to \cite{lpl2,sma}. If the nonlinearity is a pure $L^2$-supercritical term, for example $f(u)=\lvert u\rvert^{q-2}u$ with $2+\frac{4}{N}<q<2^*$, then $J(u)$ is unbounded from below on the constraint (\ref{l2con}). The $L^2$-supercritical case was first treated by Jeanjean \cite{jl}, who proved that problem (\ref{f}) has a mountain-pass type solution under suitable assumptions. Compared with the pure $L^2$-subcritical or $L^2$-supercritical case, mixed nonlinearities are more complicated. Soave \cite{sn1} considered the following problem
\begin{equation}\label{secn}
\left\{\begin{array}{ll}
-\Delta u=\lambda u+\mu\lvert u \rvert^{q-2}u+\lvert u \rvert^{p-2}u&\mbox{in}\ \mathbb{R}^N,\\
\int_{\mathbb{R}^N}\lvert u\rvert^2dx=a^2,
\end{array}\right.
\end{equation}
where $N\geqslant 3$, $\mu>0$, $2<q\leqslant 2+\frac{4}{N}\leqslant p<2^*$, and analyzed the existence, asymptotic behavior and stability of solutions. All the references listed above concern the Sobolev subcritical case; the first result on normalized solutions in the Sobolev critical case is due to Soave \cite{sn2}, who studied the following problem
\begin{equation}\label{secncri}
\left\{\begin{array}{ll}
-\Delta u=\lambda u+\mu\lvert u \rvert^{q-2}u+\lvert u \rvert^{2^*-2}u&\mbox{in}\ \mathbb{R}^N,\\
\int_{\mathbb{R}^N}\lvert u\rvert^2dx=a^2,
\end{array}\right.
\end{equation}
where $N\geqslant 3$, $\mu>0$, $2<q<2^*$, and analyzed the existence, nonexistence, asymptotic behavior and stability of solutions. We refer to \cite{jljjltt,jlltt,wjwy} and the references therein for the existence of normalized solutions with mixed nonlinearities.
If $p\neq 2$, there are only a few papers on normalized solutions of the $p$-Laplacian equation.
Wang et al. \cite{wwlqzj} considered
\begin{equation*}
\left\{\begin{array}{ll}
-\Delta_{p}u+\lvert u\rvert^{p-2}u=\mu u+\lvert u\rvert^{s-2}u&\mbox{in}\ \mathbb{R}^N,\\
\int_{\mathbb{R}^N}\lvert u\rvert^2dx=\rho,
\end{array}\right.
\end{equation*}
where $1<p<N$, $\mu\in\mathbb{R}$ and $s\in(\frac{N+2}{N}p,p^*)$. They considered the $L^2$ constraint; by the Gagliardo-Nirenberg inequality, the corresponding $L^2$-critical exponent is $\frac{N+2}{N}p$. Moreover, since $L^2(\mathbb{R}^N)\not\subset W^{1,p}(\mathbb{R}^N)$, the work space is $W^{1,p}(\mathbb{R}^N)\cap L^2(\mathbb{R}^N)$, and the structure of this space is very important in \cite{wwlqzj}. The paper \cite{zzzz} was the first to study the $p$-Laplacian equation with the $L^p$ constraint:
\begin{equation}\label{Sovlevsub}
\left\{\begin{array}{ll}
-\Delta_{p}u=\lambda\lvert u\rvert^{p-2}u+\mu\lvert u\rvert^{q-2}u+g(u)&\mbox{in}\ \mathbb{R}^N,\\
\int_{\mathbb{R}^N}\lvert u\rvert^pdx=a^p,
\end{array}\right.
\end{equation}
where $g\in C(\mathbb{R},\mathbb{R})$ and there exist $p+\frac{p^2}{N}<\alpha\leqslant\beta<p^*$ such that for all $s\in\mathbb{R}\setminus\{0\}$,
\[0<\alpha G(s)\leqslant g(s)s\leqslant\beta G(s),\quad G(s)=\int_{0}^{s}g(t)dt.\]
A simple example is $g(s)=\lvert s\rvert^{r-2}s$ with $p+\frac{p^2}{N}<r<p^*$. Moreover, Wang and Sun \cite{wcsj} considered both the $L^2$ constraint and the $L^p$ constraint for the following problem
\begin{equation*}
\left\{\begin{array}{ll}
-\Delta_{p}u+V(x)\lvert u\rvert^{p-2}u=\lambda\lvert u\rvert^{r-2}u+\lvert u\rvert^{q-2}u&\mbox{in}\ \mathbb{R}^N,\\
\int_{\mathbb{R}^N}\lvert u\rvert^rdx=c,
\end{array}\right.
\end{equation*}
where $1<p<N$, $\lambda\in\mathbb{R}$, $r=p$ or $r=2$, $p<q<p^*$ and $V(x)$ is a trapping potential satisfying
\[V(x)\in C(\mathbb{R}^N),\quad\lim_{\lvert x\rvert\rightarrow+\infty}V(x)=+\infty\quad\mbox{and}\quad\inf_{x\in\mathbb{R}^N}V(x)=0.\]
In this work, we study the existence of normalized solutions for (\ref{equation}) by fixing $L^p$-norm of $u$.
Let
\[S_{a}:=\{u\in W^{1,p}(\mathbb{R}^N):\lVert u\rVert_{p}^p=a^p\},\]
where $a>0$ is a constant. Following \cite[definition 1]{sn2}, we give the definition of ground state as follows.
\begin{definition}\label{grosta}
{\rm We say that $u$ is a ground state of {\rm (\ref{equation})} on $S_{a}$ if $u$ is a solution to {\rm (\ref{equation})} and has minimal energy among all solutions belonging to $S_{a}$, that is,}
\[dE_{\mu}|_{S_{a}}(u)=0,\quad\mbox{{\rm and}}\quad E_{\mu}(u)=\inf\big\{E_{\mu}(v): dE_{\mu}|_{S_{a}}(v)=0,\ v\in S_{a}\big\}.\]
\end{definition}
Since $u$ is constrained to $S_{a}$, it is difficult to observe the structure of $E_{\mu}$ directly. A possible approach is to consider the auxiliary function
\[\Psi_{u}^{\mu}(s):=E_{\mu}(s\star u)=\frac{1}{p}e^{ps}\lVert\nabla u\rVert_{p}^p-\frac{\mu}{q}e^{q\gamma_{q}s}\lVert u\rVert_{q}^q-\frac{1}{p^*}e^{p^*s}\lVert u\rVert_{p^*}^{p^*},\]
where
\[s\star u:=e^{\frac{Ns}{p}}u(e^s\cdot).\]
It is clear that $s\star u\in S_{a}$ for all $s\in\mathbb{R}$ whenever $u\in S_{a}$. Thus, we can investigate the structure of $\Psi_{u}^{\mu}$ to infer the structure of $E_{\mu}|_{S_{a}}$.
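The membership $s\star u\in S_{a}$ and the exponents appearing in $\Psi_{u}^{\mu}$ follow from the change of variables $y=e^{s}x$; for the reader's convenience we record this short computation:

```latex
% Scaling identities for s \star u = e^{Ns/p} u(e^{s}\,\cdot):
\[
\lVert s\star u\rVert_{r}^{r}
=e^{\frac{Nrs}{p}-Ns}\lVert u\rVert_{r}^{r}
=e^{r\gamma_{r}s}\lVert u\rVert_{r}^{r},
\qquad
\lVert\nabla(s\star u)\rVert_{p}^{p}
=e^{Ns+ps-Ns}\lVert\nabla u\rVert_{p}^{p}
=e^{ps}\lVert\nabla u\rVert_{p}^{p}.
\]
```

Taking $r=p$ gives $\gamma_{p}=0$, so $\lVert s\star u\rVert_{p}=\lVert u\rVert_{p}$, while $r=q$ and $r=p^{*}$ produce the factors $e^{q\gamma_{q}s}$ and $e^{p^{*}s}$ in $\Psi_{u}^{\mu}$ (note $p^{*}\gamma_{p^{*}}=p^{*}$).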
Assume that $u$ is a critical point of $E_{\mu}|_{S_{a}}$. Then $0$ is expected to be a critical point of $\Psi_{u}^{\mu}$; if it is, we have $(\Psi_{u}^{\mu})'(0)=0$, that is,
\begin{equation}\label{Psider}
\lVert\nabla u\rVert_{p}^p=\mu\gamma_{q}\lVert u\rVert_{q}^q+\lVert u\rVert_{p^*}^{p^*}.
\end{equation}
In fact, by the Pohozaev identity for (\ref{equation}), all critical points of $E_{\mu}|_{S_{a}}$ satisfy (\ref{Psider}) (see Proposition \ref{Pohozaev}). Therefore, if we consider the manifold
\[\mathcal{P}_{a,\mu}=\{u\in S_{a}:P_{\mu}(u)=0\},\]
where
\[P_{\mu}(u)=\lVert\nabla u\rVert_{p}^p-\mu\gamma_{q}\lVert u\rVert_{q}^q-\lVert u\rVert_{p^*}^{p^*},\]
we know that all critical points of $E_{\mu}|_{S_{a}}$ belong to $\mathcal{P}_{a,\mu}$, and $s\star u\in\mathcal{P}_{a,\mu}$ if and only if $(\Psi_{u}^{\mu})'(s)=0$. The manifold $\mathcal{P}_{a,\mu}$ is usually called the Pohozaev manifold.
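The identification of $\mathcal{P}_{a,\mu}$ with critical points of the fiber maps can be verified directly (a one-line computation using the scaling identities for $s\star u$):

```latex
\[
(\Psi_{u}^{\mu})'(s)
=e^{ps}\lVert\nabla u\rVert_{p}^{p}
-\mu\gamma_{q}e^{q\gamma_{q}s}\lVert u\rVert_{q}^{q}
-e^{p^{*}s}\lVert u\rVert_{p^{*}}^{p^{*}}
=P_{\mu}(s\star u),
\]
```

so $s\star u\in\mathcal{P}_{a,\mu}$ exactly when $s$ is a critical point of $\Psi_{u}^{\mu}$.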
We divide $\mathcal{P}_{a,\mu}$ into three parts
\begin{align*}
\mathcal{P}_{a,\mu}^{+}&=\big\{u\in\mathcal{P}_{a,\mu}:(\Psi_{u}^{\mu})''(0)>0\big\}=\big\{u\in\mathcal{P}_{a,\mu}:p\lVert\nabla u\rVert_{p}^p>\mu q\gamma_{q}^2\lVert u\rVert_{q}^q+p^*\lVert u\rVert_{p^*}^{p^*}\big\},
\end{align*}
\begin{align*}
\mathcal{P}_{a,\mu}^{0}&=\big\{u\in\mathcal{P}_{a,\mu}:(\Psi_{u}^{\mu})''(0)=0\big\}=\big\{u\in\mathcal{P}_{a,\mu}:p\lVert\nabla u\rVert_{p}^p=\mu q\gamma_{q}^2\lVert u\rVert_{q}^q+p^*\lVert u\rVert_{p^*}^{p^*}\big\},
\end{align*}
and
\begin{align*}
\mathcal{P}_{a,\mu}^{-}&=\big\{u\in\mathcal{P}_{a,\mu}:(\Psi_{u}^{\mu})''(0)<0\big\}=\big\{u\in\mathcal{P}_{a,\mu}:p\lVert\nabla u\rVert_{p}^p<\mu q\gamma_{q}^2\lVert u\rVert_{q}^q+p^*\lVert u\rVert_{p^*}^{p^*}\big\}.
\end{align*}
Define
\[m(a,\mu)=\inf_{u\in\mathcal{P}_{a,\mu}}E_{\mu}(u),\quad m^{\pm}(a,\mu)=\inf_{u\in\mathcal{P}_{a,\mu}^{\pm}}E_{\mu}(u),\]
and
\[m_{r}(a,\mu)=\inf_{u\in\mathcal{P}_{a,\mu}\cap W_{rad}^{1,p}(\mathbb{R}^N)}E_{\mu}(u),\quad m_{r}^{\pm}(a,\mu)=\inf_{u\in\mathcal{P}_{a,\mu}^{\pm}\cap W_{rad}^{1,p}(\mathbb{R}^N)}E_{\mu}(u).\]
Obviously, by Definition \ref{grosta}, if we can prove $u$ is a critical point of $E_{\mu}|_{S_{a}}$ and $E_{\mu}(u)=m(a,\mu)$, then $u$ is a ground state of (\ref{equation}).
Although we have described a way to observe the structure of $E_{\mu}$ on $S_{a}$, a difficulty arises: how do we obtain the compactness of Palais-Smale (PS) sequences?
In \cite[lemma 2.9]{zzzz}, the authors considered equation (\ref{Sovlevsub}) and proved the compactness of PS sequences, but there the nonlinearities are Sobolev subcritical. When $p=2$, in \cite[proposition 3.1]{sn2}, the author proved a compactness lemma for PS sequences of (\ref{equation}): assume that $\{u_{n}\}\subset S_{a,r}=S_{a}\cap H_{rad}^1(\mathbb{R}^N)$ is a PS sequence of $E_{\mu}|_{S_{a}}$ at level $c$ and, furthermore, that $P_{\mu}(u_{n})\rightarrow 0$. Then, up to a subsequence, one of the following alternatives holds: either $u_{n}\rightharpoonup u$ in $H^{1}(\mathbb{R}^N)$ but not strongly, and
\[E_{\mu}(u)\leqslant c-\frac{1}{N}S^{\frac{N}{2}};\]
or $u_{n}\rightarrow u$ in $H^1(\mathbb{R}^N)$. However, the proof of \cite[proposition 3.1]{sn2} relies on the Br\'{e}zis-Lieb lemma \cite{bhle} and on the linearity of the Laplace operator. By using the concentration-compactness lemma (see \cite[section 4]{sm} or \cite[lemma 1.1]{lpl}) and following the idea of \cite{hdly}, we prove a compactness result for PS sequences similar to \cite[proposition 3.1]{sn2}. Therefore, our main goal is to obtain a PS sequence and exclude the case of merely weak convergence, thereby obtaining strong convergence.
Now we can state the existence results. Even though $\lVert u\rVert_{p^*}^{p^*}$ is always an $L^p$-supercritical term, the existence results differ according to whether $\lVert u\rVert_{q}^q$ is an $L^p$-subcritical, critical or supercritical term. Therefore, we state the existence results separately for $q<p+\frac{p^2}{N}$, $q=p+\frac{p^2}{N}$ and $q>p+\frac{p^2}{N}$.
Suppose first that $p<q<p+\frac{p^2}{N}$. Since $q\gamma_{q}<p$, the function $\Psi_{u}^{\mu}$ may have two critical points on $\mathbb{R}$ (such as $f(s)=50e^{2s}-50e^s-e^{6s}$): one is a local minimum point and the other is a global maximum point, denoted by $s_{u}$ and $t_{u}$ respectively. Moreover, it is not difficult to prove that $s_{u}\star u\in\mathcal{P}_{a,\mu}^{+}$ and $t_{u}\star u\in\mathcal{P}_{a,\mu}^{-}$. Of course, $\Psi_{u}^{\mu}$ may also have no critical points on $\mathbb{R}$ (such as $f(s)=50e^{2s}-200e^s-e^{6s}$). Therefore, it is natural to expect that $E_{\mu}$ has two critical points on $S_{a}$ under appropriate assumptions: one is a local minimizer, which is also a minimizer of $E_{\mu}$ on $\mathcal{P}_{a,\mu}^{+}$; the other is a mountain-pass type critical point, which is also a minimizer of $E_{\mu}$ on $\mathcal{P}_{a,\mu}^{-}$.
Let
\begin{equation*}
C'=\bigg(\frac{p^*S^{p^*/p}(p-q\gamma_{q})}{p(p^*-q\gamma_{q})}\bigg)^{\frac{p-q\gamma_{q}}{p^*-p}}\frac{q(p^*-p)}{pC_{N,q}^q(p^*-q\gamma_{q})},
\end{equation*}
and
\begin{equation*}
C''=\frac{pp^*}{N\gamma_{q}C_{N,q}^q(p^*-p)}\bigg(\frac{q\gamma_{q}S^{N/p}}{p-q\gamma_{q}}\bigg)^{\frac{p-q\gamma_{q}}{p}},
\end{equation*}
where $S$ is the optimal constant in the Sobolev inequality
\[S\lVert u\rVert_{p^*}^p\leqslant\lVert\nabla u\rVert_{p}^p\quad\forall u\in D^{1,p}(\mathbb{R}^N).\]
Define
\begin{equation}\label{alpha}
\alpha(N,p,q):=\min\{C',C''\}.
\end{equation}
Then, the existence result of a local minimizer for $p<q<p+\frac{p^2}{N}$ can be stated as follows.
\begin{theorem}\label{th1}
Let $N\geqslant 2$, $1<p<N$, $p<q<p+\frac{p^2}{N}$, and $a,\mu>0$. Assume that
\begin{equation}\label{muconsub}
\mu a^{q(1-\gamma_{q})}<\alpha(N,p,q),
\end{equation}
then $E_{\mu}|_{S_{a}}$ has a ground state $u_{a,\mu}^{+}$ which is positive, radially symmetric, radially non-increasing, and solves {\rm (\ref{equation})} for some $\lambda_{a,\mu}^{+}<0$. Moreover,
\[E_{\mu}(u_{a,\mu}^{+})=m(a,\mu)=m^{+}(a,\mu)<0,\]
and $u_{a,\mu}^{+}$ is a local minimizer of $E_{\mu}$ on the set
\[A_{k}:=\big\{u\in S_{a}: \lVert\nabla u\rVert_{p}\leqslant k\big\},\]
for a suitable $k>0$ sufficiently small. Any other ground state of $E_{\mu}|_{S_{a}}$ is a local minimizer of $E_{\mu}|_{A_{k}}$.
\end{theorem}
In addition to guaranteeing that $\Psi_{u}^{\mu}$ has two critical points on $\mathbb{R}$ (in fact, this already follows from $\mu a^{q(1-\gamma_{q})}<C'$), another important reason for assuming (\ref{muconsub}) is to ensure the convergence of PS sequences via the compactness lemma we have obtained.
The existence result of a mountain-pass type solution for $p<q<p+\frac{p^2}{N}$ can be stated as follows.
\begin{theorem}\label{th2}
Let $N\geqslant 2$, $1<p<N$, $p<q<p+\frac{p^2}{N}$, and let $a,\mu>0$ satisfy {\rm(\ref{muconsub})}. Further assume that $N\geqslant p^2$ or $N<p^2<9$. Then $E_{\mu}|_{S_{a}}$ has a critical point of mountain-pass type $u_{a,\mu}^{-}$ which is positive, radially symmetric, radially non-increasing, and solves {\rm (\ref{equation})} for some $\lambda_{a,\mu}^{-}<0$. Moreover, $E_{\mu}(u_{a,\mu}^{-})=m^{-}(a,\mu)$ and
\[0<m^{-}(a,\mu)<m^{+}(a,\mu)+\frac{1}{N}S^{\frac{N}{p}}.\]
\end{theorem}
In order to use the compactness lemma to obtain the convergence of PS sequences, the strict inequality $m^{-}(a,\mu)<m^{+}(a,\mu)+\frac{1}{N}S^{\frac{N}{p}}$ is a crucial step in our proof. Here we follow the ideas of \cite{gajpai2,wjwy} to prove the strict inequality. However, there are some difficulties in obtaining the inequality for $N<p^2$ and $p\geqslant 3$, and we do not know whether the result holds in that case.
Now suppose $q\geqslant p+\frac{p^2}{N}$. Since $q\gamma_{q}\geqslant p$, the function $\Psi_{u}^{\mu}$ has a unique critical point $t_{u}$, of mountain-pass type, under a suitable assumption on $\mu$ if $q=p+\frac{p^2}{N}$. Moreover, $t_{u}\star u\in\mathcal{P}_{a,\mu}^{-}$. Therefore, it is natural to expect that $E_{\mu}|_{S_{a}}$ has a critical point of mountain-pass type which is also a minimizer of $E_{\mu}$ on $\mathcal{P}_{a,\mu}^{-}$.
\begin{theorem}\label{th3}
Let $N\geqslant p^{\frac{3}{2}}$, $1<p<N$, $p+\frac{p^2}{N}\leqslant q<p^*$, and $a,\mu >0$. Further assume that
\begin{equation}\label{muconcri}
\mu a^{\frac{p^2}{N}}<\frac{q}{pC_{N,p,q}^q},
\end{equation}
if $q=p+\frac{p^2}{N}$. Then $E_{\mu}|_{S_{a}}$ has a ground state $u_{a,\mu}^{-}$ which is positive, radially symmetric, radially non-increasing, and solves {\rm (\ref{equation})} for some $\lambda_{a,\mu}^{-}<0$. Moreover, $u_{a,\mu}^{-}$ is a critical point of mountain-pass type and
\[E_{\mu}(u_{a,\mu}^{-})=m^{-}(a,\mu)=m(a,\mu)\in\Big(0,\frac{1}{N}S^{\frac{N}{p}}\Big).\]
\end{theorem}
Similar to Theorem \ref{th2}, to obtain the convergence of PS sequences, the strict inequality $m^{-}(a,\mu)<\frac{1}{N}S^{\frac{N}{p}}$ plays an important role. However, obtaining this result seems difficult for $N<p^{\frac{3}{2}}$ by the classical method (see \cite{bhnl,sn2}).
Now we analyze the asymptotic behavior of $u_{a,\mu}^{\pm}$ as $\mu\rightarrow 0$ and as $\mu$ goes to its upper bound. To state these results, let us first introduce some necessary notation. Through a scaling, we know that the equation
\begin{equation}\label{uniqueness}
-\Delta_{p}u+u^{p-1}=u^{q-1}\quad\mbox{in}\ \mathbb{R}^N,
\end{equation}
has a non-negative radial solution $u$. Similar to \cite[theorem A.1]{gajpai2}, we can prove $u\in L_{loc}^{\infty}(\mathbb{R}^N)$. Then, by the regularity result of \cite{tp}, we know $u\in C_{loc}^{1,\alpha}$ for some $\alpha\in(0,1)$. Thus, by \cite{sjtm}, (\ref{uniqueness}) has a unique radial ``ground state'' $\phi_{0}$; here, as in \cite{sjtm}, ``ground state'' means a non-negative non-trivial $C^1$ distributional solution of (\ref{uniqueness}).
The asymptotic result as $\mu\rightarrow 0$ can be stated as follows.
\begin{theorem}\label{th4}
Let $N\geqslant 2$, $1<p<N$, $p<q<p^*$, $a>0$ and $\mu>0$ sufficiently small. Let $u_{a,\mu}^{+}$ be the local minimizer obtained in Theorem {\rm \ref{th1}} and $u_{a,\mu}^{-}$ be the mountain-pass solution obtained in Theorems {\rm \ref{th2}} and {\rm \ref{th3}}. Then,
\noindent{\rm (1)}
\begin{minipage}[t]{\linewidth}
We have
\[\sigma_{0}^{\frac{1}{p-q}}\mu^{-\frac{N}{p(p-q\gamma_{q})}}u_{a,\mu}^{+}\Big(\sigma_{0}^{-\frac{1}{p}}\mu^{-\frac{1}{p-q\gamma_{q}}}\cdot\Big)\rightarrow\phi_{0}\]
in $W^{1,p}(\mathbb{R}^N)$ as $\mu\rightarrow 0$, where $\phi_{0}$ is the unique radial ``ground state" solution of {\rm (\ref{uniqueness})} and
\[\sigma_{0}=\Big(\frac{a^p}{\lVert\phi_{0}\rVert_{p}^p}\Big)^{\frac{p(q-p)}{p^2-N(q-p)}}.\]
\end{minipage}
\noindent{\rm (2)}
\begin{minipage}[t]{\linewidth}
For $N\leqslant p^2$, there exists $\sigma_{\mu}>0$ such that
\[w_{\mu}=\sigma_{\mu}^{\frac{N-p}{p}}u_{a,\mu}^{-}(\sigma_{\mu}\cdot)\rightarrow U_{\varepsilon_{0}}\]
in $D^{1,p}(\mathbb{R}^N)$ as $\mu\rightarrow 0$ for some $\varepsilon_{0}>0$, where $U_{\varepsilon_{0}}$ is given by {\rm (\ref{Udf})} and $\sigma_{\mu}\rightarrow 0$ as $\mu\rightarrow 0$.
\end{minipage}
\noindent{\rm (3)}
\begin{minipage}[t]{\linewidth}
For $N>p^2$, we have $u_{a,\mu}^{-}\rightarrow U_{\varepsilon_{0}}$ in $W^{1,p}(\mathbb{R}^N)$ as $\mu\rightarrow 0$, where $U_{\varepsilon_{0}}$ satisfies $\lVert U_{\varepsilon_{0}}\rVert_{p}^p=a^p$.
\end{minipage}
\end{theorem}
In fact, for $p<q<p+\frac{p^2}{N}$, we can prove that $\lVert\nabla u_{a,\mu}^{+}\rVert_{p}^p\rightarrow 0$ and $m^{+}(a,\mu)\rightarrow 0$ as $\mu\rightarrow 0$. Thus, $\{u_{a,\mu}^{+}\}$ does not converge strongly in $W^{1,p}(\mathbb{R}^N)$. For $p+\frac{p^2}{N}<q<p^*$, we can prove
\[\lVert\nabla u_{a,\mu}^{-}\rVert_{p}^p,\ \ \lVert u_{a,\mu}^{-}\rVert_{p^*}^{p^*}\rightarrow S^{\frac{N}{p}}\quad\mbox{and}\quad m^{-}(a,\mu)\rightarrow\frac{1}{N}S^{\frac{N}{p}}\]
as $\mu\rightarrow 0$ which implies $\{u_{a,\mu}^{-}\}$ is a minimizing sequence of minimizing problem
\[S=\inf_{u\in D^{1,p}(\mathbb{R}^N)\backslash\{0\}}\frac{\lVert\nabla u\rVert_{p}^p}{\lVert u\rVert_{p^*}^{p}}.\]
When $N>p^2$, we can obtain a strong convergence result. But if $N\leqslant p^2$, since $U_{\varepsilon}\notin L^p(\mathbb{R}^N)$ for all $\varepsilon>0$, it is impossible to obtain $u_{a,\mu}^{-}\rightarrow U_{\varepsilon_{0}}$ in $W^{1,p}(\mathbb{R}^N)$.
Next we analyze the asymptotic behavior as $\mu$ goes to its upper bound. Let
\[\bar{\alpha}=\frac{1}{pa^{p^2/N}C_{N,p,q}^q},\]
when $q=p+\frac{p^2}{N}$. We have the following asymptotic result.
\begin{theorem}\label{th5}
Let $N\geqslant 2$, $1<p<N$, $p+\frac{p^2}{N}\leqslant q<p^*$, and let $a,\mu>0$ satisfy {\rm (\ref{muconcri})} if $q=p+\frac{p^2}{N}$. Let $u_{a,\mu}^{-}$ be the mountain-pass solution obtained in Theorem {\rm \ref{th3}}. Then,
\noindent{\rm (1)}
\begin{minipage}[t]{\linewidth}
For $q=p+\frac{p^2}{N}$, we have
\[(\bar{\alpha}\sigma_{0})^{\frac{1}{p-q}}s_{\mu}^{\frac{N}{p}}u_{a,\mu}^{-}\Big(\sigma_{0}^{-\frac{1}{p}}s_{\mu}\cdot\Big)\rightarrow\phi_{0}\]
in $W^{1,p}(\mathbb{R}^N)$ as $\mu\rightarrow\bar{\alpha}$, where $\phi_{0}$ is the unique radial ``ground state'' solution of {\rm (\ref{uniqueness})}, $s_{\mu}=(\bar{\alpha}-\mu)^{-(N-p)/p^2}$ and
\[\sigma_{0}=\bar{\alpha}^{\frac{p^2}{N(q-p)-p^2}}\bigg(\frac{a^p}{\lVert\phi_{0}\rVert_{p}^p}\bigg)^{\frac{p(q-p)}{p^2-N(q-p)}}.\]
\end{minipage}
\noindent{\rm (2)}
\begin{minipage}[t]{\linewidth}
For $p+\frac{p^2}{N}<q<p^*$, we have
\[\sigma_{0}^{\frac{1}{p-q}}\mu^{-\frac{N}{p(p-q\gamma_{q})}}u_{a,\mu}^{+}\Big(\sigma_{0}^{-\frac{1}{p}}\mu^{-\frac{1}{p-q\gamma_{q}}}\cdot\Big)\rightarrow\phi_{0}\]
in $W^{1,p}(\mathbb{R}^N)$ as $\mu\rightarrow+\infty$, where $\phi_{0}$ is the unique radial ``ground state'' solution of {\rm (\ref{uniqueness})} and
\[\sigma_{0}=\Big(\frac{a^p}{\lVert\phi_{0}\rVert_{p}^p}\Big)^{\frac{p(q-p)}{p^2-N(q-p)}}.\]
\end{minipage}
\end{theorem}
For $q=p+\frac{p^2}{N}$, we can prove that $m^{-}(a,\mu)=0$ when $\mu\geqslant\bar{\alpha}$ (see Lemma \ref{asycrinonexi}), and hence $u_{a,\mu}^{-}$ does not exist. By Theorem \ref{th3}, we know that $u_{a,\mu}^{-}$ exists when $\mu<\bar{\alpha}$. Thus, $\bar{\alpha}$ is the sharp constant for the existence of $u_{a,\mu}^{-}$, and we can analyze the asymptotic behavior as $\mu\rightarrow\bar{\alpha}$. However, for $p<q<p+\frac{p^2}{N}$, we cannot claim that the constant $\alpha(N,p,q)$ given by (\ref{alpha}) is sharp for the existence of $u_{a,\mu}^{\pm}$. From our proof, it seems that $\alpha(N,p,q)$ is not optimal (see Section \ref{exisub}). Thus, it is not feasible to study the asymptotic behavior as $\mu$ goes to its upper bound when $p<q<p+\frac{p^2}{N}$.
Finally, we investigate the nonexistence and multiplicity of solutions of equation (\ref{equation}). The nonexistence result is obtained for $\mu<0$.
\begin{theorem}\label{th6}
Let $N\geqslant 2$, $1<p<N$, $p<q<p^*$, $a>0$ and $\mu<0$. Then
\noindent{\rm (1)}
\begin{minipage}[t]{\linewidth}
If $u$ is a critical point of $E_{\mu}|_{S_{a}}$ (not necessarily positive), then the associated Lagrange multiplier $\lambda$ is positive, and $E_{\mu}(u)>S^{\frac{N}{p}}/N$.
\end{minipage}
\noindent{\rm (2)}
\begin{minipage}[t]{\linewidth}
The problem
\begin{equation*}
-\Delta_{p}u=\lambda u^{p-1}+\mu u^{q-1}+u^{p^*-1},\quad u>0\quad\mbox{in}\ \mathbb{R}^N
\end{equation*}
has no solution $u\in W^{1,p}(\mathbb{R}^N)$ for any $\mu<0$.
\end{minipage}
\end{theorem}
The reason why we expect equation (\ref{equation}) to have no solution lies in Theorem \ref{th6}(1). In the previous existence results, the condition $c<\frac{1}{N}S^{\frac{N}{p}}$ is crucial to obtain the convergence of $\mbox{(PS)}_{c}$ sequences. Moreover, there is an example of a $\mbox{(PS)}_{c}$ sequence with $c=\frac{1}{N}S^{\frac{N}{p}}$ that has no convergent subsequence (see \cite{bh}). Thus, it is natural to conjecture that equation (\ref{equation}) has no solution.
We use genus theory to prove the multiplicity result. One crucial issue is that the functional should be bounded from below when genus theory is applied; but since $\lVert u\rVert_{p^*}^{p^*}$ is an $L^p$-supercritical term, $E_{\mu}|_{S_{a}}$ is not bounded from below. To overcome this problem, we introduce a truncation function to complete the proof.
Let
\[S_{a,r}=S_{a}\cap W_{rad}^{1,p}(\mathbb{R}^N).\]
The multiplicity result can be stated as follows.
\begin{theorem}\label{th7}
Let $N\geqslant 3$, $2<p<N$, $p<q<p+\frac{p^2}{N}$ and $a,\mu>0$ satisfies {\rm(\ref{muconsub})}. Then
equation $(\ref{equation})$ has infinitely many solutions on $S_{a,r}$ at negative levels.
\end{theorem}
In Theorem \ref{th7} we assume that $p>2$, since the quantitative deformation lemma \cite[lemma 5.15]{wm} will be used in the proof, and it requires the map $u\mapsto\lVert u\rVert_{p}^p$ to be of class $C^2$ on $L^p(\mathbb{R}^N)$. Therefore, we need $p>2$.
\noindent\textbf{Notations.} Throughout this paper, $C$ denotes various absolute positive constants which may change from line to line. $a\sim b$ means that there exists $C>1$ such that $C^{-1}a\leqslant b\leqslant Ca$.
\section{\textbf{Preliminaries}}\label{prelimilaries}
In this section, we collect some results which will be used in the rest of the paper. First, let us recall the Sobolev inequality.
\begin{lemma}
For every $N\geqslant 2$ and $1<p<N$, there exists an optimal constant $S$ depending only on $N$ and $p$ such that
\[S\lVert u\rVert_{p^*}^p\leqslant\lVert\nabla u\rVert_{p}^p\quad\forall u\in D^{1,p}(\mathbb{R}^N),\]
where $D^{1,p}(\mathbb{R}^N)$ denotes the completion of $C_{c}^{\infty}(\mathbb{R}^N)$ with respect to the norm $\lVert u\rVert_{D^{1,p}}:=\lVert\nabla u\rVert_{p}$.
\end{lemma}
It is well known \cite{tg} that the optimal constant is attained by
\begin{equation}\label{Udf}
U_{\varepsilon,y}(x)=d_{N,p}\varepsilon^{\frac{N-p}{p(p-1)}}\Big(\varepsilon^{\frac{p}{p-1}}+\lvert x-y\rvert^{\frac{p}{p-1}}\Big)^{\frac{p-N}{p}},
\end{equation}
where $\varepsilon>0$, $y\in\mathbb{R}^N$ and $d_{N,p}>0$ depends on $N$ and $p$, chosen so that $U_{\varepsilon,y}$ satisfies
\[-\Delta_{p}u=u^{p^*-1},\quad u>0\quad\mbox{in}\ \mathbb{R}^N,\]
and hence
\[\lVert\nabla U_{\varepsilon,y}\rVert_{p}^p=\lVert U_{\varepsilon,y}\rVert_{p^*}^{p^*}=S^{\frac{N}{p}}.\]
If $y=0$, we set $U_{\varepsilon}=U_{\varepsilon,0}$.
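For the reader's convenience we record why the two norms of $U_{\varepsilon,y}$ coincide with $S^{N/p}$ (a standard computation, not specific to this paper): testing the equation with $U_{\varepsilon,y}$ gives the first identity below, and equality in the Sobolev inequality gives the second.

```latex
\[
\lVert\nabla U_{\varepsilon,y}\rVert_{p}^{p}
=\lVert U_{\varepsilon,y}\rVert_{p^{*}}^{p^{*}}=:T,
\qquad
S=\frac{\lVert\nabla U_{\varepsilon,y}\rVert_{p}^{p}}{\lVert U_{\varepsilon,y}\rVert_{p^{*}}^{p}}
=T^{\,1-\frac{p}{p^{*}}}=T^{\frac{p}{N}},
\]
```

so that $T=S^{\frac{N}{p}}$, using $1-\frac{p}{p^{*}}=\frac{p}{N}$.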
Next, we introduce the Pohozaev identity for $p$-Laplacian.
\begin{lemma}\label{Pohozaevf}
{\rm\cite{jlsm}} Assume that $N\geqslant 2$, $1<p<N$, $f\in C(\mathbb{R},\mathbb{R})$ such that $f(0)=0$ and let $u$ be a local weak solution of
\[-\Delta_{p}u=f(u)\quad\mbox{in}\ D'(\mathbb{R}^N),\]
where $D(\mathbb{R}^N)=C_{c}^{\infty}(\mathbb{R}^N)$ and $D'(\mathbb{R}^N)$ is the dual space of $D(\mathbb{R}^N)$. Suppose that
\[u\in L_{loc}^{\infty}(\mathbb{R}^N),\quad\nabla u\in L^p(\mathbb{R}^N),\quad\mbox{and}\quad F(u)\in L^1(\mathbb{R}^N).\]
Then $u$ satisfies
\[(N-p)\lVert\nabla u\rVert_{p}^p=Np\int_{\mathbb{R}^N}F(u)dx.\]
\end{lemma}
By the Pohozaev identity, we can prove that all critical points belong to the Pohozaev manifold.
\begin{proposition}\label{Pohozaev}
Assume that $u\in S_{a}$ is a solution to {\rm(\ref{equation})}, then $u\in\mathcal{P}_{a,\mu}$.
\end{proposition}
\begin{proof}
Similar to \cite[lemma A1]{gajpai2}, we can prove that $u\in L_{loc}^{\infty}(\mathbb{R}^N)$. It is clear that
\[\nabla u\in L^p(\mathbb{R}^N)\quad\mbox{and}\quad F(u)=\frac{\lambda}{p}\lvert u\rvert^p+\frac{\mu}{q}\lvert u\rvert^q+\frac{1}{p^*}\lvert u\rvert^{p^*}\in L^1(\mathbb{R}^N).\]
Thus, by Lemma \ref{Pohozaevf}, we have
\begin{equation}\label{pohozaev}
(N-p)\lVert\nabla u\rVert_{p}^p=\lambda Na^p+\frac{\mu Np}{q}\lVert u\rVert_{q}^q+(N-p)\lVert u\rVert_{p^*}^{p^*}.
\end{equation}
Testing the equation (\ref{equation}) with $u$, we have
\[\lVert\nabla u\rVert_{p}^p=\lambda a^p+\mu\lVert u\rVert_{q}^q+\lVert u\rVert_{p^*}^{p^*},\]
which, together with (\ref{pohozaev}), implies
\[\lVert\nabla u\rVert_{p}^p=\mu\gamma_{q}\lVert u\rVert_{q}^q+\lVert u\rVert_{p^*}^{p^*},\]
that is, $P_{\mu}(u)=0$ and hence $u\in\mathcal{P}_{a,\mu}$.
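Explicitly, assuming the standard normalization $\gamma_{q}=N\big(\frac{1}{p}-\frac{1}{q}\big)$ (which is consistent with the conditions $q\gamma_{q}<p$ for $q<p+\frac{p^2}{N}$ and $\gamma_{q}<1$ for $q<p^*$ used below), the elimination of $\lambda$ runs as follows: multiplying the identity $\lVert\nabla u\rVert_{p}^p=\lambda a^p+\mu\lVert u\rVert_{q}^q+\lVert u\rVert_{p^*}^{p^*}$ by $N$ and subtracting (\ref{pohozaev}) yields
\[p\lVert\nabla u\rVert_{p}^p=\mu N\Big(1-\frac{p}{q}\Big)\lVert u\rVert_{q}^q+p\lVert u\rVert_{p^*}^{p^*},\]
and dividing by $p$ gives the stated conclusion, since $\frac{N}{p}\big(1-\frac{p}{q}\big)=N\big(\frac{1}{p}-\frac{1}{q}\big)=\gamma_{q}$.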
\end{proof}
\section{\textbf{Compactness of PS sequence}}\label{compactness}
In this section, we prove a compactness result for PS sequences under suitable assumptions. This is a crucial step in obtaining the existence of critical points for $E_{\mu}|_{S_{a}}$.
\begin{proposition}\label{compactnesslemma}
Let $N\geqslant 2$, $p<q<p^*$, and $a,\mu>0$. Let $\{u_{n}\}\subset S_{a,r}$ be a PS sequence for $E_{\mu}|_{S_{a}}$ at level $c$, with
\begin{equation*}
c<\frac{1}{N}S^{\frac{N}{p}}\quad\mbox{and}\quad c\neq 0.
\end{equation*}
Suppose in addition that $P_{\mu}(u_{n})\rightarrow 0$ as $n\rightarrow\infty$. Then, one of the following two conclusions holds:
{ \rm(i)}
\begin{minipage}[t]{\linewidth}
up to a subsequence, $u_{n}\rightharpoonup u$ in $W^{1,p}(\mathbb{R}^N)$ but not strongly, where $u\not\equiv 0$ is a solution to {\rm(\ref{equation})} for some $\lambda<0$, and
\begin{equation*}
E_{\mu}(u)\leqslant c-\frac{1}{N}S^{\frac{N}{p}};
\end{equation*}
\end{minipage}
{\rm (ii)}
\begin{minipage}[t]{\linewidth}
up to a subsequence, $u_{n}\rightarrow u$ in $W^{1,p}(\mathbb{R}^N)$, $E_{\mu}(u)=c$, and $u$ solves {\rm (\ref{equation})} for some $\lambda<0$.
\end{minipage}
\end{proposition}
Since the proof of Proposition \ref{compactnesslemma} is relatively long, we divide it into several lemmas.
\begin{lemma}\label{bounded}
$\{u_{n}\}$ is bounded in $W^{1,p}(\mathbb{R}^N)$.
\end{lemma}
\begin{proof}
First, assume that $p<q<p+\frac{p^2}{N}$, so that $q\gamma_{q}<p$. Since $P_{\mu}(u_{n})\rightarrow 0$, by the Gagliardo-Nirenberg inequality, we have
\begin{align*}
E_{\mu}(u_{n})&=\frac{1}{N}\lVert\nabla u_{n}\rVert_{p}^p-\mu\gamma_{q}\Big(\frac{1}{q\gamma_{q}}-\frac{1}{p^*}\Big)\lVert u_{n}\rVert_{q}^q+o_{n}(1)\\
&\geqslant\frac{1}{N}\lVert\nabla u_{n}\rVert_{p}^p-\mu\gamma_{q}\Big(\frac{1}{q\gamma_{q}}-\frac{1}{p^*}\Big)C_{N,p,q}^qa^{q(1-\gamma_{q})}\lVert\nabla u_{n}\rVert_{p}^{q\gamma_{q}}+o_{n}(1).
\end{align*}
Then, since $q\gamma_{q}<p$ and $E_{\mu}(u_{n})\rightarrow c$ as $n\rightarrow\infty$, we deduce that $\{u_{n}\}$ is bounded in $W^{1,p}(\mathbb{R}^N)$.
Now, let $q=p+\frac{p^2}{N}$, so that $q\gamma_{q}=p$. Then $P_{\mu}(u_{n})\rightarrow 0$ gives
\begin{equation*}
E_{\mu}(u_{n})=\frac{1}{N}\lVert u_{n}\rVert_{p^*}^{p^*}+o_{n}(1),
\end{equation*}
which implies $\{u_{n}\}$ is bounded in $L^{p^*}(\mathbb{R}^N)$. By the H\"older inequality
\begin{equation*}
\lVert u_{n}\rVert_{q}^q\leqslant\lVert u_{n}\rVert_{p^*}^{q\gamma_{q}}\lVert u_{n}\rVert_{p}^{q(1-\gamma_{q})}=a^{q(1-\gamma_{q})}\lVert u_{n}\rVert_{p^*}^{q\gamma_{q}},
\end{equation*}
we obtain $\{u_{n}\}$ is bounded in $L^{q}(\mathbb{R}^N)$. Using again that $P_{\mu}(u_{n})\rightarrow 0$, we know $\{u_{n}\}$ is bounded in $W^{1,p}(\mathbb{R}^N)$.
Finally, let $p+\frac{p^2}{N}<q<p^*$, so that $q\gamma_{q}>p$. Since $P_{\mu}(u_{n})\rightarrow 0$, we have
\begin{equation*}
E_{\mu}(u_{n})=\mu\gamma_{q}\Big(\frac{1}{p}-\frac{1}{q\gamma_{q}}\Big)\lVert u_{n}\rVert_{q}^q+\frac{1}{N}\lVert u_{n}\rVert_{p^*}^{p^*}+o_{n}(1),
\end{equation*}
and the coefficient of $\lVert u_{n}\rVert_{q}^q$ is positive. Therefore, $\{\lVert u_{n}\rVert_{q}\}$ and $\{\lVert u_{n}\rVert_{p^*}\}$ are both bounded, which implies that $\{\lVert\nabla u_{n}\rVert_{p}\}$ is bounded, since $P_{\mu}(u_{n})\rightarrow 0$.
\end{proof}
Now, we state the concentration-compactness lemma for $\{u_{n}\}$; the proof can be found in \cite[section 4]{sm} and \cite[lemma 1.1]{lpl}.
\begin{lemma}\label{concentration}
Suppose $u_{n}\rightharpoonup u$ in $W^{1,p}(\mathbb{R}^N)$ and $\lvert\nabla u_{n}\rvert^p\rightharpoonup\kappa, \lvert u_{n}\rvert^{p^*}\rightharpoonup\nu$ in the sense of measures where $\kappa$ and $\nu$ are bounded non-negative measures on $\mathbb{R}^N$. Then, for some at most countable set $J$, we have
\begin{equation*}
\kappa\geqslant\lvert\nabla u\rvert^p+\sum_{j\in J}\kappa_{j}\delta_{x_{j}},\quad\nu=\lvert u\rvert^{p^*}+\sum_{j\in J}\nu_{j}\delta_{x_{j}},
\end{equation*}
where $\kappa_{j}, \nu_{j}>0$ satisfy $S\nu_{j}^{\frac{p}{p^*}}\leqslant\kappa_{j}$ for all $j\in J$.
\end{lemma}
\begin{lemma}\label{concentrationnorm}
We have
\begin{equation}\label{pstar}
\lim_{n\rightarrow\infty}\lVert u_{n}\rVert_{p^*}^{p^*}=\lVert u\rVert_{p^*}^{p^*}+\sum_{j\in J}\nu_{j}.
\end{equation}
\end{lemma}
\begin{proof}
For every $R>0$, let $\varphi_{R}\in C_{c}^{\infty}(\mathbb{R}^N)$ be such that
\begin{equation*}
0\leqslant\varphi_{R}\leqslant 1,\quad\varphi_{R}=1\ \mbox{for}\ \lvert x\rvert\leqslant R,\quad\mbox{and}\quad\varphi_{R}=0\ \mbox{for}\ \lvert x\rvert\geqslant R+1.
\end{equation*}
Since $\lvert u_{n}\rvert^{p^*}\rightharpoonup\nu$, we have
\begin{align*}
\lim_{n\rightarrow\infty}\int_{\mathbb{R}^N}\lvert u_{n}\rvert^{p^*}dx&=\lim_{n\rightarrow\infty}\Big(\int_{\mathbb{R}^N}\lvert u_{n}\rvert^{p^*}\varphi_{R}dx+\int_{\mathbb{R}^N}\lvert u_{n}\rvert^{p^*}(1-\varphi_{R})dx\Big)\\
&=\int_{\mathbb{R}^N}\lvert u\rvert^{p^*}\varphi_{R}dx+\sum_{j\in J}\varphi_{R}(x_{j})\nu_{j}+\lim_{n\rightarrow\infty}\int_{\mathbb{R}^N}\lvert u_{n}\rvert^{p^*}(1-\varphi_{R})dx.
\end{align*}
Let $R\rightarrow +\infty$, by the Lebesgue dominated convergence theorem, we obtain
\begin{equation*}
\lim_{n\rightarrow\infty}\int_{\mathbb{R}^N}\lvert u_{n}\rvert^{p^*}dx=\int_{\mathbb{R}^N}\lvert u\rvert^{p^*}dx+\sum_{j\in J}\nu_{j}+\lim_{R\rightarrow +\infty}\lim_{n\rightarrow\infty}\int_{\mathbb{R}^N}\lvert u_{n}\rvert^{p^*}(1-\varphi_{R})dx.
\end{equation*}
Now, we prove that
\begin{equation}\label{Rinfty}
\lim_{R\rightarrow +\infty}\lim_{n\rightarrow\infty}\int_{\mathbb{R}^N}\lvert u_{n}\rvert^{p^*}(1-\varphi_{R})dx=0,
\end{equation}
which leads to (\ref{pstar}). Since $\{u_{n}\}$ is bounded in $W_{rad}^{1,p}(\mathbb{R}^N)$, by the radial (Strauss-type) estimate we have
\begin{equation*}
\lvert u_{n}(x)\rvert\leqslant C\lvert x\rvert^{\frac{1-N}{p}}\quad \mbox{a.e. in}\ \mathbb{R}^N,
\end{equation*}
where $C>0$ is a constant independent of $n$. It follows that
\begin{equation*}
\int_{\mathbb{R}^N}\lvert u_{n}\rvert^{p^*}(1-\varphi_{R})dx\leqslant\int_{\lvert x\rvert\geqslant R}\lvert u_{n}\rvert^{p^*}dx\leqslant CR^{\frac{N(1-p)}{N-p}},
\end{equation*}
which implies (\ref{Rinfty}) holds.
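For completeness, the last bound follows from a direct radial computation (the constant $C$ may change from line to line):
\[\int_{\lvert x\rvert\geqslant R}\lvert u_{n}\rvert^{p^*}dx\leqslant C\int_{R}^{+\infty}r^{-\frac{(N-1)p^*}{p}}r^{N-1}dr=C\int_{R}^{+\infty}r^{-\frac{p(N-1)}{N-p}}dr=CR^{\frac{N(1-p)}{N-p}},\]
where we used $\frac{p^*}{p}=\frac{N}{N-p}$; the integral converges since $p>1$ implies $\frac{p(N-1)}{N-p}>1$.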
\end{proof}
\noindent\textbf{Proof of Proposition \ref{compactnesslemma}} Since $\{u_{n}\}$ is a bounded PS sequence for $E_{\mu}|_{S_{a}}$, there exists $\{\lambda_{n}\}\subset\mathbb{R}$ such that for every $\psi\in W^{1,p}(\mathbb{R}^N)$,
\begin{equation}\label{Lagrange}
\int_{\mathbb{R}^N}\Big(\lvert\nabla u_{n}\rvert^{p-2}\nabla u_{n}\cdot\nabla\psi-\lambda_{n}\lvert u_{n}\rvert^{p-2}u_{n}\psi-\mu\lvert u_{n}\rvert^{q-2}u_{n}\psi-\lvert u_{n}\rvert^{p^*-2}u_{n}\psi\Big)dx=o_{n}(1)\lVert\psi\rVert_{W^{1,p}}
\end{equation}
as $n\rightarrow\infty$. Choosing $\psi=u_{n}$, we deduce that $\{\lambda_{n}\}$ is bounded as well, and hence, up to a subsequence $\lambda_{n}\rightarrow\lambda\in\mathbb{R}$. Using the fact that $P_{\mu}(u_{n})\rightarrow 0$ and $\gamma_{q}<1$, we know that
\begin{align}\label{lambda}
\lambda a^p&=\lim_{n\rightarrow\infty}\lambda_{n}\lVert u_{n}\rVert_{p}^{p}=\lim_{n\rightarrow\infty}\Big(\lVert\nabla u_{n}\rVert_{p}^{p}-\mu\lVert u_{n}\rVert_{q}^{q}-\lVert u_{n}\rVert_{p^*}^{p^*}\Big)\nonumber\\
&=\lim_{n\rightarrow\infty}\mu(\gamma_{q}-1)\lVert u_{n}\rVert_{q}^q=\mu(\gamma_{q}-1)\lVert u\rVert_{q}^q\leqslant 0,
\end{align}
with $\lambda=0$ if and only if $u\equiv 0$.
We consider $\varphi_{\varepsilon}\in C_{c}^{\infty}(\mathbb{R}^N)$ such that
\begin{equation*}
0\leqslant\varphi_{\varepsilon}\leqslant 1,\quad\varphi_{\varepsilon}=1\ \mbox{in}\ B_{\varepsilon}(x_{j}),\quad\varphi_{\varepsilon}=0\ \mbox{in}\ \mathbb{R}^N\setminus B_{2\varepsilon}(x_{j}),\quad\mbox{and}\quad\lvert\nabla\varphi_{\varepsilon}\rvert\leqslant\frac{2}{\varepsilon}.
\end{equation*}
It is clear that the sequence $\{\varphi_{\varepsilon}u_{n}\}$ is bounded in $W^{1,p}(\mathbb{R}^N)$, then, testing (\ref{Lagrange}) with $\psi=\varphi_{\varepsilon}u_{n}$, we obtain
\begin{align}\label{phiepsilon}
&\lim_{n\rightarrow\infty}\int_{\mathbb{R}^N}\lvert\nabla u_{n}\rvert^{p-2}u_{n}\nabla u_{n}\cdot\nabla\varphi_{\varepsilon}dx\nonumber\\
=&\lim_{n\rightarrow\infty}\int_{\mathbb{R}^N}\Big(\lambda_{n}\lvert u_{n}\rvert^p\varphi_{\varepsilon}+\mu\lvert u_{n}\rvert^q\varphi_{\varepsilon}+\lvert u_{n}\rvert^{p^*}\varphi_{\varepsilon}-\lvert\nabla u_{n}\rvert^p\varphi_{\varepsilon}\Big)dx\nonumber\\
=&\lambda\int_{\mathbb{R}^N}\lvert u\rvert^p\varphi_{\varepsilon}dx+\mu\int_{\mathbb{R}^N}\lvert u\rvert^q\varphi_{\varepsilon}dx+\int_{\mathbb{R}^N}\varphi_{\varepsilon}d\nu-\int_{\mathbb{R}^N}\varphi_{\varepsilon}d\kappa.
\end{align}
By the H\"older inequality,
\begin{align*}
\Big|\int_{\mathbb{R}^N}\lvert\nabla u_{n}\rvert^{p-2}u_{n}\nabla u_{n}\cdot\nabla\varphi_{\varepsilon}dx\Big|&\leqslant\frac{2}{\varepsilon}\int_{B_{2\varepsilon}(x_{j})}\lvert\nabla u_{n}\rvert^{p-1}\lvert u_{n}\rvert dx\\
&\leqslant\frac{2}{\varepsilon}\Big(\int_{B_{2\varepsilon}(x_{j})}\lvert\nabla u_{n}\rvert^pdx\Big)^{\frac{p-1}{p}}\Big(\int_{B_{2\varepsilon}(x_{j})}\lvert u_{n}\rvert^pdx\Big)^{\frac{1}{p}}\\
&\leqslant\frac{C}{\varepsilon}\lVert u_{n}\rVert_{L^p(B_{2\varepsilon}(x_{j}))},
\end{align*}
where $C>0$ is a constant independent of $n$. Thus, using the H\"older inequality again, we have
\begin{equation*}
\lim_{n\rightarrow\infty}\Big|\int_{\mathbb{R}^N}\lvert\nabla u_{n}\rvert^{p-2}u_{n}\nabla u_{n}\cdot\nabla\varphi_{\varepsilon}dx\Big|\leqslant\frac{C}{\varepsilon}\lVert u\rVert_{L^p(B_{2\varepsilon}(x_{j}))}\leqslant C\lVert u\rVert_{L^{p^*}(B_{2\varepsilon}(x_{j}))},
\end{equation*}
which implies
\begin{equation*}
\lim_{n\rightarrow\infty}\int_{\mathbb{R}^N}\lvert\nabla u_{n}\rvert^{p-2}u_{n}\nabla u_{n}\cdot\nabla\varphi_{\varepsilon}dx\rightarrow 0
\end{equation*}
as $\varepsilon\rightarrow 0$.
Suppose $J\neq\emptyset$. Letting $\varepsilon\rightarrow 0$ on both sides of (\ref{phiepsilon}), we obtain $\nu_{j}\geqslant\kappa_{j}$. By Lemma \ref{concentration}, since $\kappa_{j}\geqslant S\nu_{j}^{p/p^*}$, we have $\nu_{j}\geqslant S^{\frac{N}{p}}$. Therefore, by Lemma \ref{concentrationnorm},
\begin{equation*}
c=\lim_{n\rightarrow\infty}E_{\mu}(u_{n})\geqslant E_{\mu}(u)+\Big(\frac{1}{p}-\frac{1}{p^*}\Big)\sum_{k\in J}\nu_{k}\geqslant E_{\mu}(u)+\frac{1}{N}S^{\frac{N}{p}}.
\end{equation*}
Since $c<\frac{1}{N}S^{\frac{N}{p}}$, we deduce $E_{\mu}(u)<0$, which implies $u\not\equiv 0$. Following the idea of \cite[lemma 2.2]{yjf}, we can prove
\begin{equation*}
\lvert\nabla u_{n}\rvert^{p-2}\nabla u_{n}\rightharpoonup\lvert\nabla u\rvert^{p-2}\nabla u\quad\mbox{in}\ \big(L^{p'}(\mathbb{R}^N)\big)^{N}.
\end{equation*}
Thus, passing to the limit in (\ref{Lagrange}) by weak convergence, we see that $u$ is a solution to (\ref{equation}). Hence, case (i) of Proposition \ref{compactnesslemma} holds.
If instead $J=\emptyset$, then the Br\'{e}zis-Lieb lemma \cite{bhle} and (\ref{pstar}) imply $u_{n}\rightarrow u$ in $L^{p^*}(\mathbb{R}^N)$. Next, we prove $u\not\equiv 0$, and hence by (\ref{lambda}) we know $\lambda<0$. Suppose by contradiction that $u\equiv 0$. Then, since $P_{\mu}(u_{n})\rightarrow 0$,
\[c=\lim_{n\rightarrow\infty}E_{\mu}(u_{n})=\lim_{n\rightarrow\infty}\bigg(\frac{1}{N}\lVert u_{n}\rVert_{p^*}^{p^*}+\mu\gamma_{q}\Big(\frac{1}{p}-\frac{1}{q\gamma_{q}}\Big)\lVert u_{n}\rVert_{q}^q\bigg)=0,\]
which contradicts our assumptions. Let $T: W^{1,p}(\mathbb{R}^N)\rightarrow\big(W^{1,p}(\mathbb{R}^N)\big)^{*}$ be the mapping given by
\begin{equation*}
\langle Tu,v\rangle=\int_{\mathbb{R}^N}\Big(\lvert\nabla u\rvert^{p-2}\nabla u\cdot\nabla v-\lambda\lvert u\rvert^{p-2}uv\Big)dx.
\end{equation*}
Then, slightly modifying the proof in \cite[Lemma 3.6]{hdly}, we can derive that $u_{n}\rightarrow u$ in $W^{1,p}(\mathbb{R}^N)$, and hence case (ii) of Proposition \ref{compactnesslemma} holds.$
\qed$
\section{\textbf{Existence result to the case $p<q<p+\frac{p^2}{N}$}}\label{exisub}
In this section, we prove that, under assumption (\ref{muconsub}), $E_{\mu}|_{S_{a}}$ has two critical points: a local minimizer and a mountain-pass type solution. In order to prove this result, we need some properties of $E_{\mu}$ and $\mathcal{P}_{a,\mu}$, obtained by analyzing the structure of $\Psi_{u}^{\mu}$. \subsection{Some properties of $E_{\mu}$ and $\mathcal{P}_{a,\mu}$}
\
\newline
\indent For the Pohozaev manifold $\mathcal{P}_{a,\mu}$, we have the following properties.
\begin{lemma}\label{empty}
$\mathcal{P}_{a,\mu}^{0}=\emptyset$, and $\mathcal{P}_{a,\mu}$ is a smooth manifold of codimension $2$ in $W^{1,p}(\mathbb{R}^N)$.
\end{lemma}
\begin{proof}
Let us assume that there exists $u\in\mathcal{P}_{a,\mu}^{0}$. Then, combining $P_{\mu}(u)=0$ and $\big(\Psi_{u}^{\mu}\big)''(0)=0$, we deduce that
\begin{equation*}
\mu\gamma_{q}(p-q\gamma_{q})\lVert u\rVert_{q}^q=(p^*-p)\lVert u\rVert_{p^*}^{p^*}.
\end{equation*}
Using this equation in $P_{\mu}(u)=0$, we obtain
\begin{equation}\label{nablapstar}
\lVert\nabla u\rVert_{p}^p=\frac{p^*-q\gamma_{q}}{p-q\gamma_{q}}\lVert u\rVert_{p^*}^{p^*}\leqslant\frac{p^*-q\gamma_{q}}{S^{p^*/p}(p-q\gamma_{q})}\lVert\nabla u\rVert_{p}^{p^*},
\end{equation}
and
\begin{equation}\label{nablaq}
\lVert\nabla u\rVert_{p}^p=\frac{\mu\gamma_{q}(p^*-q\gamma_{q})}{p^*-p}\lVert u\rVert_{q}^q\leqslant\frac{\mu\gamma_{q}(p^*-q\gamma_{q})}{p^*-p}C_{N,q}^qa^{q(1-\gamma_{q})}\lVert\nabla u\rVert_{p}^{q\gamma_{q}}.
\end{equation}
From (\ref{nablapstar}) and (\ref{nablaq}), we infer that
\begin{equation*}
\bigg(\frac{S^{p^*/p}(p-q\gamma_{q})}{p^*-q\gamma_{q}}\bigg)^{\frac{1}{p^*-p}}\leqslant\bigg(\frac{\mu\gamma_{q}(p^*-q\gamma_{q})}{p^*-p}C_{N,q}^qa^{q(1-\gamma_{q})}\bigg)^{\frac{1}{p-q\gamma_{q}}},
\end{equation*}
that is
\begin{equation}\label{betacontradiction}
\mu a^{q(1-\gamma_{q})}\geqslant\bigg(\frac{S^{p^*/p}(p-q\gamma_{q})}{p^*-q\gamma_{q}}\bigg)^{\frac{p-q\gamma_{q}}{p^*-p}}\frac{p^*-p}{C_{N,q}^q\gamma_{q}(p^*-q\gamma_{q})}.
\end{equation}
We can check that this contradicts (\ref{muconsub}): it suffices to verify that the right-hand side of (\ref{muconsub}) is less than or equal to the right-hand side of (\ref{betacontradiction}), which is equivalent to
\begin{equation}\label{monotone}
\bigg(\frac{q\gamma_{q}}{p}\bigg)^{p^*-p}\bigg(\frac{p^*}{p}\bigg)^{p-q\gamma_{q}}\leqslant 1.
\end{equation}
Since $q\gamma_{q}<p<p^*$ and the function $\varphi(t)=\log t/(t-1)$ is monotone decreasing on $(0,+\infty)$, we have
\begin{equation*}
\varphi\bigg(\frac{q\gamma_{q}}{p}\bigg)\geqslant\varphi\bigg(\frac{p^*}{p}\bigg),
\end{equation*}
which is precisely (\ref{monotone}).
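The monotonicity of $\varphi$ used here can be checked directly: for $t>0$ with $t\neq 1$,
\[\varphi'(t)=\frac{\frac{t-1}{t}-\log t}{(t-1)^2}<0,\]
since the elementary inequality $\log t>1-\frac{1}{t}$ holds for every $t>0$ with $t\neq 1$.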
Now, we check that $\mathcal{P}_{a,\mu}$ is a smooth manifold of codimension $2$ in $W^{1,p}(\mathbb{R}^N)$. Recalling the definition of $\mathcal{P}_{a,\mu}$, since $P_{\mu}(u)$ and $G(u)=\lVert u\rVert_{p}^p-a^p$ are of class $C^1$ in $W^{1,p}(\mathbb{R}^N)$, it suffices to show that the differential $\big(dG(u),dP_{\mu}(u)\big): W^{1,p}(\mathbb{R}^N)\longmapsto\mathbb{R}^2$ is surjective for every $u\in\mathcal{P}_{a,\mu}$. To this end, we prove that for every $u\in\mathcal{P}_{a,\mu}$ there exists $\varphi\in T_{u}S_{a}$ such that $dP_{\mu}(u)[\varphi]\neq 0$. Once such a $\varphi$ exists, the system
\begin{equation*}
\left\{\begin{array}{l}
dG(u)[\alpha\varphi+\theta u]=x,\\
dP_{\mu}(u)[\alpha\varphi+\theta u]=y,
\end{array}\right.
\quad\Longleftrightarrow\quad\left\{\begin{array}{l}
a^p\theta=x,\\
\alpha dP_{\mu}(u)[\varphi]+\theta dP_{\mu}(u)[u]=y,
\end{array}\right.
\end{equation*}
is solvable with respect to $\alpha$ and $\theta$ for every $(x,y)\in\mathbb{R}^2$, and hence the surjectivity is proved. Suppose by contradiction that there exists $u\in\mathcal{P}_{a,\mu}$ such that $dP_{\mu}(u)[\varphi]=0$ for every $\varphi\in T_{u}S_{a}$. Then $u$ is a constrained critical point of the functional $P_{\mu}|_{S_{a}}$, and hence by the Lagrange multipliers rule there exists $\nu\in\mathbb{R}$ such that
\[-\Delta_{p}u=\nu\lvert u\rvert^{p-2}u+\frac{\mu q\gamma_{q}}{p}\lvert u\rvert^{q-2}u+\frac{p^*}{p}\lvert u\rvert^{p^*-2}u\quad\mbox{in}\ \mathbb{R}^N.\]
But, by the Pohozaev identity, this implies that
\[p\lVert\nabla u\rVert_{p}^{p}=\mu q\gamma_{q}^2\lVert u\rVert_{q}^q+p^*\lVert u\rVert_{p^*}^{p^*},\]
that is $u\in\mathcal{P}_{a,\mu}^{0}$, a contradiction.
\end{proof}
By the H\"older inequality, we have
\begin{equation}\label{Egreat}
E_{\mu}(u)\geqslant\frac{1}{p}\lVert\nabla u\rVert_{p}^p-\frac{\mu}{q}C_{N,q}^qa^{q(1-\gamma_{q})}\lVert\nabla u\rVert_{p}^{q\gamma_{q}}-\frac{1}{p^*S^{p^*/p}}\lVert\nabla u\rVert_{p}^{p^*}=h(\lVert\nabla u\rVert_{p}),
\end{equation}
where
\begin{equation*}
h(t)=\frac{1}{p}t^p-\frac{\mu}{q}C_{N,q}^qa^{q(1-\gamma_{q})}t^{q\gamma_{q}}-\frac{1}{p^*S^{p^*/p}}t^{p^*}.
\end{equation*}
\begin{lemma}\label{hfunction}
Under assumption {\rm (\ref{muconsub})}, the function $h$ has exactly two critical points: a local minimum at negative level and a global maximum at positive level. In addition, there exist $R_{1}>R_{0}>0$ such that $h(R_{0})=h(R_{1})=0$, and $h(t)>0$ if and only if $t\in (R_{0},R_{1})$.
\end{lemma}
\begin{proof}
For every $t>0$, we know $h(t)>0$ if and only if
\begin{equation*}
\varphi (t)>\frac{\mu}{q}C_{N,q}^qa^{q(1-\gamma_{q})},\quad\mbox{with}\quad\varphi (t)=\frac{1}{p}t^{p-q\gamma_{q}}-\frac{1}{p^*S^{p^*/p}}t^{p^*-q\gamma_{q}}.
\end{equation*}
We can check that $\varphi$ has a unique critical point on $(0,+\infty)$, which is a global maximum point at positive level. The critical point is
\begin{equation*}
\bar{t}=\bigg(\frac{p^*S^{p^*/p}(p-q\gamma_{q})}{p(p^*-q\gamma_{q})}\bigg)^{\frac{1}{p^*-p}},
\end{equation*}
and the maximum level is
\begin{equation*}
\varphi (\bar{t})=\frac{p^*-p}{p(p^*-q\gamma_{q})}\bigg(\frac{p^*S^{p^*/p}(p-q\gamma_{q})}{p(p^*-q\gamma_{q})}\bigg)^{\frac{p-q\gamma_{q}}{p^*-p}}.
\end{equation*}
Therefore, $h$ is positive on $(R_{0},R_{1})$ if and only if
\begin{equation*}
\varphi (\bar{t})>\frac{\mu}{q}C_{N,q}^qa^{q(1-\gamma_{q})},
\end{equation*}
that is, $\mu a^{q(1-\gamma_{q})}<C'$. It follows that $h$ has a global maximum at positive level in $(R_{0},R_{1})$. Moreover, since $h(t)\rightarrow 0^{-}$ as $t\rightarrow 0^{+}$, $h$ has a local minimum point at negative level in $(0,R_{0})$. The fact that $h$ has no other critical points follows since the equation $h'(t)=0$ has at most two solutions.
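The last assertion can be made explicit: a direct computation (with the same notation) gives
\[h'(t)=t^{q\gamma_{q}-1}\Big(t^{p-q\gamma_{q}}-\frac{1}{S^{p^*/p}}t^{p^*-q\gamma_{q}}-\mu\gamma_{q}C_{N,q}^qa^{q(1-\gamma_{q})}\Big),\]
and, since $p-q\gamma_{q}>0$ in the present range of $q$, the function $t\longmapsto t^{p-q\gamma_{q}}-S^{-p^*/p}t^{p^*-q\gamma_{q}}$ is strictly increasing and then strictly decreasing on $(0,+\infty)$, so the factor in brackets vanishes at most twice.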
\end{proof}
Using the properties of $h(t)$, we can analyze the structure of $\Psi_{u}^{\mu}$ and $E_{\mu}$.
\begin{lemma}\label{structure}
For every $u\in S_{a}$, the function $\Psi_{u}^{\mu}$ has exactly two critical points $s_{u}<t_{u}$ and two zeros $c_{u}<d_{u}$, with $s_{u}<c_{u}<t_{u}<d_{u}$. Moreover,
{\rm (i)}
\begin{minipage}[t]{\linewidth}
$s_{u}\star u\in\mathcal{P}_{a,\mu}^{+}, t_{u}\star u\in\mathcal{P}_{a,\mu}^{-}$, and if $s\star u\in\mathcal{P}_{a,\mu}$, then either $s=s_{u}$ or $s=t_{u}$.
\end{minipage}
{\rm (ii)}
\begin{minipage}[t]{\linewidth}
$\lVert\nabla(s\star u)\rVert_{p}\leqslant R_{0}$ for every $s\leqslant c_{u}$, and
\begin{equation*}
E_{\mu}(s_{u}\star u)=\min\Big\{E_{\mu}(s\star u):s\in\mathbb{R},\ \lVert\nabla(s\star u)\rVert_{p}\leqslant R_{0}\Big\}<0.
\end{equation*}
\end{minipage}
{\rm (iii)}
\begin{minipage}[t]{\linewidth}
We have
\[E_{\mu}(t_{u}\star u)=\max_{s\in\mathbb{R}}E_{\mu}(s\star u)>0.\]
and if $t_{u}<0$, then $P_{\mu}(u)<0$.
\end{minipage}
{\rm (iv)}
\begin{minipage}[t]{\linewidth}
The maps $u\in S_{a}\longmapsto s_{u}\in\mathbb{R}$ and $u\in S_{a}\longmapsto t_{u}\in\mathbb{R}$ are of class $C^1$.
\end{minipage}
\end{lemma}
\begin{proof}
By (\ref{Egreat}),
\[\Psi_{u}^{\mu}(s)=E_{\mu}(s\star u)\geqslant h(\lVert\nabla(s\star u)\rVert_{p})=h(e^{s}\lVert\nabla u\rVert_{p}).\]
Thus, by Lemma \ref{hfunction}, $\Psi_{u}^{\mu}(s)$ is positive on $\big(\log(R_{0}/\lVert\nabla u\rVert_{p}),\log(R_{1}/\lVert\nabla u\rVert_{p})\big)$. It is clear that $\Psi_{u}^{\mu}(-\infty)=0^{-}$ and $\Psi_{u}^{\mu}(+\infty)=-\infty$, hence $\Psi_{u}^{\mu}$ has at least two critical points $s_{u}<t_{u}$, where $s_{u}$ is a local minimum point on $(-\infty,\log(R_{0}/\lVert\nabla u\rVert_{p}))$ at negative level, and $t_{u}$ is a global maximum point at positive level. Moreover, there are no other critical points of $\Psi_{u}^{\mu}$, since the equation $\big(\Psi_{u}^{\mu}\big)'(s)=0$ has only two zeros. Now, the intermediate value theorem implies that $\Psi_{u}^{\mu}$ has exactly two zeros $c_{u}<d_{u}$, with $s_{u}<c_{u}<t_{u}<d_{u}$.
By Lemma \ref{Pohozaev}, $s\star u\in\mathcal{P}_{a,\mu}$ if and only if $\big(\Psi_{u}^{\mu}\big)'(s)=0$, that is, either $s=s_{u}$ or $s=t_{u}$. Since $0$ is a local minimum of $\Psi_{s_{u}\star u}^{\mu}(s)\big(=\Psi_{u}^{\mu}(s_{u}+s)\big)$, we have $\big(\Psi_{s_{u}\star u}^{\mu}\big)''(0)\geqslant 0$, which implies $s_{u}\star u\in\mathcal{P}_{a,\mu}^{+}\cup\mathcal{P}_{a,\mu}^{0}$. Lemma \ref{empty} gives $\mathcal{P}_{a,\mu}^{0}=\emptyset$, hence $s_{u}\star u\in\mathcal{P}_{a,\mu}^{+}$. In the same way, $t_{u}\star u\in\mathcal{P}_{a,\mu}^{-}$.
Since $\Psi_{u}^{\mu}$ is positive on $\big(\log(R_{0}/\lVert\nabla u\rVert_{p}),\log(R_{1}/\lVert\nabla u\rVert_{p})\big)$, we deduce that $c_{u}<\log(R_{0}/\lVert\nabla u\rVert_{p})$. Thus, $\lVert\nabla(c_{u}\star u)\rVert_{p}\leqslant R_{0}$ which implies $\lVert\nabla(s\star u)\rVert_{p}\leqslant R_{0}$ for all $s\leqslant c_{u}$. We know $s_{u}$ is a local minimum on $\big(-\infty,\log(R_{0}/\lVert\nabla u\rVert_{p})\big)$ at negative level, so
\begin{align*}
E_{\mu}(s_{u}\star u)&=\Psi_{u}^{\mu}(s_{u})=\min\Big\{\Psi_{u}^{\mu}(s): s\leqslant\log\big( R_{0}/\lVert\nabla u\rVert_{p}\big)\Big\}\\
&=\min\Big\{E_{\mu}(s\star u): \lVert\nabla(s\star u)\rVert_{p}\leqslant R_{0}\Big\}.
\end{align*}
Since $t_{u}$ is a global maximum of $\Psi_{u}^{\mu}$ and there are no critical points on $(t_{u},+\infty)$, we conclude that $\Psi_{u}^{\mu}$ is decreasing on $(t_{u},+\infty)$ and $\big(\Psi_{u}^{\mu}\big)'(s)<0$ for all $s\in (t_{u},+\infty)$. If $t_{u}<0$, then $P_{\mu}(u)=\big(\Psi_{u}^{\mu}\big)'(0)<0$.
It remains to show that $u\longmapsto s_{u}$ and $u\longmapsto t_{u}$ are of class $C^1$. Consider the function $\Phi(s,u):=\big(\Psi_{u}^{\mu}\big)'(s)$. Since $\Phi(s_{u},u)=0$, $\partial_{s}\Phi(s_{u},u)=\big(\Psi_{u}^{\mu}\big)''(s_{u})>0$, and it is not possible to pass with continuity from $\mathcal{P}_{a,\mu}^{+}$ to $\mathcal{P}_{a,\mu}^{-}$ (since $\mathcal{P}_{a,\mu}^{0}=\emptyset$), by the implicit function theorem we know that $u\longmapsto s_{u}$ is of class $C^1$. The same argument proves that $u\longmapsto t_{u}$ is of class $C^1$.
\end{proof}
\subsection{Existence of local minimizer}
\
\newline
\indent In this subsection, we always assume that the assumptions of Theorem \ref{th1} hold. Set
\begin{equation*}
A_{k}=\big\{u\in S_{a}: \lVert\nabla u\rVert_{p}\leqslant k\big\},
\end{equation*}
where $k>0$ is a constant.
\begin{lemma}\label{mplus}
$m^{+}(a,\mu)=\inf_{u\in A_{R_{0}}}E_{\mu}(u)<0<m^{-}(a,\mu)$.
\end{lemma}
\begin{proof}
For every $v\in\mathcal{P}_{a,\mu}^{+}$, by Lemma \ref{structure} (i) and (ii), we know $s_{v}=0$ and $\lVert\nabla v\rVert_{p}\leqslant R_{0}$. Thus,
\[\inf_{u\in A_{R_{0}}}E_{\mu}(u)\leqslant E_{\mu}(v)<0,\]
which implies
\[\inf_{u\in A_{R_{0}}}E_{\mu}(u)\leqslant m^{+}(a,\mu)<0.\]
For every $v\in A_{R_{0}}$, using again Lemma \ref{structure} (i) and (ii), we have $s_{v}\star v\in\mathcal{P}_{a,\mu}^{+}$ and $E_{\mu}(s_{v}\star v)\leqslant E_{\mu}(v)$. Hence,
\[m^{+}(a,\mu)\leqslant E_{\mu}(s_{v}\star v)\leqslant E_{\mu}(v),\]
which implies
\[m^{+}(a,\mu)\leqslant\inf_{u\in A_{R_{0}}}E_{\mu}(u).\]
It remains to prove that $m^{-}(a,\mu)>0$. For every $v\in\mathcal{P}_{a,\mu}^{-}$, by Lemma \ref{structure} (i) and (iii), $t_{v}=0$ and $E_{\mu}(v)\geqslant E_{\mu}(s\star v)$ for all $s\in\mathbb{R}$. Now, using (\ref{Egreat}) and Lemma \ref{hfunction}, we have
\[E_{\mu}(v)\geqslant\max_{s\in\mathbb{R}}E_{\mu}(s\star v)\geqslant\max_{s\in\mathbb{R}}h(e^s\lVert\nabla v\rVert_{p})=\max_{t>0}h(t)>0,\]
which implies
\[m^{-}(a,\mu)\geqslant\max_{t>0}h(t)>0.\]
\end{proof}
By (\ref{Egreat}) and Lemma \ref{hfunction},
\[\inf_{u\in\partial A_{R_{0}}}E_{\mu}(u)\geqslant\inf_{u\in\partial A_{R_{0}}}h(\lVert\nabla u\rVert_{p})=0.\]
Therefore, if we can find a minimizer for $E_{\mu}|_{A_{R_{0}}}$, it must be a local minimizer for $E_{\mu}|_{S_{a}}$.\\
\noindent\textbf{Proof of Theorem \ref{th1}:} Let $\{v_{n}\}$ be a minimizing sequence for $E_{\mu}|_{A_{R_{0}}}$. It is not restrictive to assume that each $v_{n}$ is radially symmetric and radially decreasing (if this is not the case, we can replace $v_{n}$ with $\lvert v_{n}\rvert^*$, the Schwarz rearrangement of $\lvert v_{n}\rvert$, and obtain another minimizing sequence for $E_{\mu}|_{A_{R_{0}}}$). Furthermore, by Lemma \ref{mplus}, we can assume that $v_{n}\in\mathcal{P}_{a,\mu}^{+}$.
Now, Ekeland's variational principle gives a new minimizing sequence $\{u_{n}\}\subset A_{R_{0}}$ which is also a PS sequence for $E_{\mu}|_{S_{a}}$, with the property that $\lVert u_{n}-v_{n}\rVert\rightarrow 0$ as $n\rightarrow\infty$. This implies that $P_{\mu}(u_{n})=P_{\mu}(v_{n})+o_{n}(1)\rightarrow 0$ as $n\rightarrow\infty$. Hence one of the cases in Proposition \ref{compactnesslemma} holds. Suppose case (i) occurs, that is, $u_{n}\rightharpoonup u$ in $W^{1,p}({\mathbb{R}^N})$, where $u$ solves (\ref{equation}) for some $\lambda <0$, and
\begin{equation}\label{Eless}
E_{\mu}(u)\leqslant m^{+}(a,\mu)-\frac{1}{N}S^{\frac{N}{p}}<-\frac{1}{N}S^{\frac{N}{p}}.
\end{equation}
Since $u$ solves (\ref{equation}), by the Pohozaev identity $P_{\mu}(u)=0$. Therefore, the Gagliardo-Nirenberg inequality implies
\begin{align*}
E_{\mu}(u)&=\frac{1}{N}\lVert\nabla u\rVert_{p}^p-\mu\gamma_{q}\Big(\frac{1}{q\gamma_{q}}-\frac{1}{p^*}\Big)\lVert u\rVert_{q}^q\\
&\geqslant\frac{1}{N}\lVert\nabla u\rVert_{p}^p-\mu\gamma_{q}\Big(\frac{1}{q\gamma_{q}}-\frac{1}{p^*}\Big)C_{N,q}^qa^{q(1-\gamma_{q})}\lVert\nabla u\rVert_{p}^{q\gamma_{q}},
\end{align*}
where we used the fact that $\lVert u\rVert_{p}\leqslant a$, by the Fatou lemma. We introduce the function
\[\varphi(t)=\frac{1}{N}t^p-\mu\gamma_{q}\Big(\frac{1}{q\gamma_{q}}-\frac{1}{p^*}\Big)C_{N,q}^qa^{q(1-\gamma_{q})}t^{q\gamma_{q}},\quad t>0.\]
Then, we can check that $\varphi$ has a unique critical point on $(0,+\infty)$, which is a global minimum point at negative level. The critical point is
\[\bar{t}=\bigg(\frac{\mu\gamma_{q}(p^*-q\gamma_{q})NC_{N,q}^q}{pp^*}\bigg)^{\frac{1}{p-q\gamma_{q}}}a^{\frac{q(1-\gamma_{q})}{p-q\gamma_{q}}},\]
and the minimum is
\[\varphi(\bar{t})=-\frac{p-q\gamma_{q}}{q}(N\gamma_{q})^{\frac{q\gamma_{q}}{p-q\gamma_{q}}}\bigg(\mu a^{q(1-\gamma_{q})}\frac{(p^*-q\gamma_{q})C_{N,q}^q}{pp^*}\bigg)^{\frac{p}{p-q\gamma_{q}}}<0.\]
By (\ref{alpha}), since $\mu a^{q(1-\gamma_{q})}<\alpha(N,p,q)\leqslant C''$, we have $\varphi(\bar{t})\geqslant-\frac{1}{N}S^{\frac{N}{p}}$, which contradicts (\ref{Eless}). This means that $u_{n}\rightarrow u$ in $W^{1,p}(\mathbb{R}^N)$ and $u\in S_{a}$ is a solution to (\ref{equation}) for some $\lambda<0$. Since each $u_{n}$ is radially symmetric and radially decreasing, $u$ is radially symmetric and radially non-increasing. Now, by the strong maximum principle, $u$ is positive.
In fact, $u$ is a ground state, since $E_{\mu}(u)=\inf_{\mathcal{P}_{a,\mu}}E_{\mu}$ and any other normalized solution stays on $\mathcal{P}_{a,\mu}$. It remains to show that any other ground state is a local minimizer for $E_{\mu}$ on $A_{R_{0}}$. Let $v$ be a critical point of $E_{\mu}|_{S_{a}}$ with $E_{\mu}(v)=m^{+}(a,\mu)$; then $v\in\mathcal{P}_{a,\mu}$. By Lemma \ref{structure}, we know $s_{v}=0$ and $\lVert\nabla v\rVert_{p}\leqslant R_{0}$. Therefore, $v$ is a local minimizer for $E_{\mu}$ on $A_{R_{0}}$.$
\qed$
\subsection{Existence of mountain-pass type solution}
\
\newline
\indent In this subsection, we prove Theorem \ref{th2}. By Lemma \ref{structure} and Lemma \ref{mplus}, we can construct a minimax structure. Let
\[E^{c}:=\Big\{u\in S_{a}:E_{\mu}(u)\leqslant c\Big\}.\]
We introduce the minimax class
\[\Gamma:=\Big\{\gamma=(\alpha,\theta)\in C([0,1],\mathbb{R}\times S_{a,r}): \gamma(0)\in\{0\}\times\mathcal{P}_{a,\mu}^{+},\ \gamma(1)\in\{0\}\times E^{2m^{+}(a,\mu)}\Big\},\]
with associated minimax level
\[\sigma(a,\mu):=\inf_{\gamma\in\Gamma}\max_{(\alpha,\theta)\in\gamma([0,1])}\tilde{E}_{\mu}(\alpha,\theta),\]
where
\[\tilde{E}_{\mu}(s,u):=E_{\mu}(s\star u).\]
In order to obtain the compactness of PS sequence by using Proposition \ref{compactnesslemma}, the following energy estimates are required.
\begin{lemma}\label{m-<m+1}
Let $N\geqslant p^2$. Then, we have
\[m^{-}(a,\mu)<m^{+}(a,\mu)+\frac{1}{N}S^{\frac{N}{p}}.\]
\end{lemma}
\begin{proof}
By Appendix \ref{A}, we have
\begin{equation*}
\lVert\nabla u_{\varepsilon}\rVert_{p}^p=S^{\frac{N}{p}}+O(\varepsilon^{\frac{N-p}{p-1}}),\quad\lVert u_{\varepsilon}\rVert_{p^*}^{p^*}=S^{\frac{N}{p}}+O(\varepsilon^{\frac{N}{p-1}}),
\end{equation*}
and
\begin{equation*}
\lVert u_{\varepsilon}\rVert_{r}^{r}=\left\{\begin{array}{ll}
C\varepsilon^{N-\frac{(N-p)r}{p}}+O\big(\varepsilon^{\frac{(N-p)r}{p(p-1)}}\big) &\mbox{if}\ N>p^2\ \mbox{or}\ p<r<p^*,\\
C\varepsilon^{p}\lvert\log\varepsilon\rvert+O(\varepsilon^{p}) &\mbox{if}\ N=p^2\ \mbox{and}\ r=p,
\end{array}\right.
\end{equation*}
where $p\leqslant r<p^*$.
Let $V_{\varepsilon,\tau}=u_{a,\mu}^+ +\tau u_{\varepsilon}(\cdot-x_{\varepsilon})$ and
\begin{equation}\label{wet}
W_{\varepsilon,\tau}(x)=\big(a^{-1}\lVert V_{\varepsilon,\tau}\rVert_{p}\big)^{\frac{N-p}{p}}V_{\varepsilon,\tau}\big(a^{-1}\lVert V_{\varepsilon,\tau}\rVert_{p}x\big),
\end{equation}
where $\lvert x_{\varepsilon}\rvert=\varepsilon^{-1}$. Then, we have
\[\lVert W_{\varepsilon,\tau}\rVert_{p}^p=a^p,\quad\lVert\nabla W_{\varepsilon,\tau}\rVert_{p}^p=\lVert\nabla V_{\varepsilon,\tau}\rVert_{p}^p,\quad\lVert W_{\varepsilon,\tau}\rVert_{p^*}^{p^*}=\lVert V_{\varepsilon,\tau}\rVert_{p^*}^{p^*},\]
and
\[\lVert W_{\varepsilon,\tau}\rVert_{q}^q=(a\lVert V_{\varepsilon,\tau}\rVert_{p}^{-1})^{q(1-\gamma_{q})}\lVert V_{\varepsilon,\tau}\rVert_{q}^{q}.\]
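These identities follow from the scaling in (\ref{wet}): writing $\theta=a^{-1}\lVert V_{\varepsilon,\tau}\rVert_{p}$, so that $W_{\varepsilon,\tau}(x)=\theta^{\frac{N-p}{p}}V_{\varepsilon,\tau}(\theta x)$, a change of variables gives
\[\lVert W_{\varepsilon,\tau}\rVert_{p}^p=\theta^{-p}\lVert V_{\varepsilon,\tau}\rVert_{p}^p=a^p,\quad\lVert\nabla W_{\varepsilon,\tau}\rVert_{p}^p=\theta^{(N-p)+p-N}\lVert\nabla V_{\varepsilon,\tau}\rVert_{p}^p=\lVert\nabla V_{\varepsilon,\tau}\rVert_{p}^p,\]
while, assuming the normalization $\gamma_{q}=\frac{N(q-p)}{pq}$, $\lVert W_{\varepsilon,\tau}\rVert_{q}^q=\theta^{\frac{(N-p)q}{p}-N}\lVert V_{\varepsilon,\tau}\rVert_{q}^q=\theta^{-q(1-\gamma_{q})}\lVert V_{\varepsilon,\tau}\rVert_{q}^q$; the case $q=p^*$ gives exponent $0$, which yields the $p^*$-norm identity.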
Thus, there exists a unique $t_{\varepsilon,\tau}\in\mathbb{R}$ such that $t_{\varepsilon,\tau}\star W_{\varepsilon,\tau}\in\mathcal{P}_{a,\mu}^{-}$, that is
\begin{equation}\label{tet}
e^{pt_{\varepsilon,\tau}}\lVert\nabla W_{\varepsilon,\tau}\rVert_{p}^p=\mu\gamma_{q} e^{q\gamma_{q}t_{\varepsilon,\tau}}\lVert W_{\varepsilon,\tau}\rVert_{q}^q+e^{p^*t_{\varepsilon,\tau}}\lVert W_{\varepsilon,\tau}\rVert_{p^*}^{p^*}.
\end{equation}
Since $u_{a,\mu}^+\in\mathcal{P}_{a,\mu}^{+}$, we know $t_{\varepsilon,0}>0$. By (\ref{tet}),
\[e^{(p^*-p)t_{\varepsilon,\tau}}<\frac{\lVert\nabla W_{\varepsilon,\tau}\rVert_{p}^p}{\lVert W_{\varepsilon,\tau}\rVert_{p^*}^{p^*}}=\frac{\lVert\nabla V_{\varepsilon,\tau}\rVert_{p}^p}{\lVert V_{\varepsilon,\tau}\rVert_{p^*}^{p^*}},\]
which implies $t_{\varepsilon,\tau}\rightarrow -\infty$ as $\tau\rightarrow +\infty$. By Lemma \ref{structure}, $t_{\varepsilon,\tau}$ is continuous in $\tau$, hence we can choose a suitable $\tau =\tau_{\varepsilon}>0$ such that $t_{\varepsilon,\tau_{\varepsilon}}=0$. It follows that
\begin{align}\label{wete}
m^-(a,\mu)&\leqslant E_{\mu}(W_{\varepsilon,\tau_{\varepsilon}})=\frac{1}{p}\lVert\nabla W_{\varepsilon,\tau_{\varepsilon}}\rVert_{p}^p-\frac{\mu}{q}\lVert W_{\varepsilon,\tau_{\varepsilon}}\rVert_{q}^q-\frac{1}{p^*}\lVert W_{\varepsilon,\tau_{\varepsilon}}\rVert_{p^*}^{p^*}\nonumber\\
&=\frac{1}{p}\lVert\nabla V_{\varepsilon,\tau_{\varepsilon}}\rVert_{p}^p-\frac{\mu}{q}(a\lVert V_{\varepsilon,\tau_{\varepsilon}}\rVert_{p}^{-1})^{q(1-\gamma_{q})}\lVert V_{\varepsilon,\tau_{\varepsilon}}\rVert_{q}^q-\frac{1}{p^*}\lVert V_{\varepsilon,\tau_{\varepsilon}}\rVert_{p^*}^{p^*}.
\end{align}
If $\liminf_{\varepsilon\rightarrow 0}\tau_{\varepsilon}=0$ or $\limsup_{\varepsilon\rightarrow 0}\tau_{\varepsilon}=+\infty$, then
\begin{equation*}
m^-(a,\mu)\leqslant\liminf_{\varepsilon\rightarrow 0} E_{\mu}(W_{\varepsilon,\tau_{\varepsilon}})\leqslant E_{\mu}(u_{a,\mu}^+)=m^+(a,\mu),
\end{equation*}
a contradiction with Lemma \ref{mplus}. Therefore, there exists $t_{2}>t_{1}>0$ independent of $\varepsilon$ such that $\tau_{\varepsilon}\in[t_{1},t_{2}]$.
Now, we estimate $\lVert\nabla V_{\varepsilon,\tau_{\varepsilon}}\rVert_{p}^p$. Using the inequality
\[(a+b)^p\leqslant a^p+b^p+C(a^{p-1}b+ab^{p-1})\quad\forall a,b\geqslant 0,\]
we have
\begin{align}\label{nablaVe1}
\lVert\nabla V_{\varepsilon,\tau_{\varepsilon}}\rVert_{p}^p&\leqslant\lVert\nabla u_{a,\mu}^{+}\rVert_{p}^p+\tau_{\varepsilon}^p\lVert\nabla u_{\varepsilon}\rVert_{p}^p\nonumber\\
&\qquad\qquad+C\int_{\mathbb{R}^N}\lvert\nabla u_{a,\mu}^{+}\rvert^{p-1}\lvert\nabla u_{\varepsilon}\rvert dx+C\int_{\mathbb{R}^N}\lvert\nabla u_{a,\mu}^{+}\rvert\lvert\nabla u_{\varepsilon}\rvert^{p-1}dx\nonumber\\
&=\lVert\nabla u_{a,\mu}^{+}\rVert_{p}^p+S^{\frac{N}{p}}\tau_{\varepsilon}^p\nonumber\\
&\qquad\qquad+C\int_{\mathbb{R}^N}\lvert\nabla u_{a,\mu}^{+}\rvert^{p-1}\lvert\nabla u_{\varepsilon}\rvert dx+ C\int_{\mathbb{R}^N}\lvert\nabla u_{a,\mu}^{+}\rvert\lvert\nabla u_{\varepsilon}\rvert^{p-1}dx+O(\varepsilon^{\frac{N-p}{p-1}}).
\end{align}
By the H\"older inequality,
\begin{align*}
\int_{\mathbb{R}^N}\lvert\nabla u_{a,\mu}^{+}\rvert^{p-1}\lvert\nabla u_{\varepsilon}\rvert dx&=\int_{B_{2}(x_{\varepsilon})}\lvert\nabla u_{a,\mu}^{+}\rvert^{p-1}\lvert\nabla u_{\varepsilon}\rvert dx\nonumber\\
&\leqslant\lVert\nabla u_{a,\mu}^{+}\rVert_{L^p(B_{2}(x_{\varepsilon}))}^{p-1}\lVert\nabla u_{\varepsilon}\rVert_{p}\nonumber\\
&\leqslant C\lVert\nabla u_{a,\mu}^{+}\rVert_{L^p(B_{2}(x_{\varepsilon}))}^{p-1}.
\end{align*}
We know there exists $\lambda_{a,\mu}^{+}<0$ such that
\[-\Delta_{p}u_{a,\mu}^+=\lambda_{a,\mu}^{+}\lvert u_{a,\mu}^+\rvert^{p-1}+\mu\lvert u_{a,\mu}^+\rvert^{q-1}+\lvert u_{a,\mu}^+\rvert^{p^*-1}.\]
Then, by \cite[theorem 8]{gfsj}, there exist $c,b>0$ such that
\[\lvert\nabla u_{a,\mu}^{+}(x)\rvert\leqslant ce^{-b\lvert x\rvert}\]
for $\lvert x\rvert$ sufficiently large. This means
\[\lVert\nabla u_{a,\mu}^{+}\rVert_{L^{p}(B_{2}(x_{\varepsilon}))}\leqslant Ce^{-b\varepsilon^{-1}},\]
and hence
\[\int_{\mathbb{R}^N}\lvert\nabla u_{a,\mu}^{+}\rvert^{p-1}\lvert\nabla u_{\varepsilon}\rvert dx\leqslant Ce^{-b(p-1)\varepsilon^{-1}}.\]
Similarly, we have
\[\int_{\mathbb{R}^N}\lvert\nabla u_{a,\mu}^{+}\rvert\lvert\nabla u_{\varepsilon}\rvert^{p-1}dx\leqslant Ce^{-b\varepsilon^{-1}}.\]
Therefore, by (\ref{nablaVe1}), we obtain
\begin{equation}\label{nablaVe}
\lVert\nabla V_{\varepsilon,\tau_{\varepsilon}}\rVert_{p}^{p}\leqslant\lVert\nabla u_{a,\mu}^{+}\rVert_{p}^{p}+S^{\frac{N}{p}}\tau_{\varepsilon}^{p}+O\big(\varepsilon^{\frac{N-p}{p-1}}\big).
\end{equation}
Next, we estimate $\lVert V_{\varepsilon,\tau_{\varepsilon}}\rVert_{p}^{p}$. We have
\begin{align*}
\int_{\mathbb{R}^N}\lvert u_{a,\mu}^{+}+\tau_{\varepsilon}u_{\varepsilon}\rvert^{p}dx&=\int_{B_{2}(x_{\varepsilon})}\lvert u_{a,\mu}^{+}+\tau_{\varepsilon}u_{\varepsilon}\rvert^{p}dx+\int_{B_{2}^{c}(x_{\varepsilon})}\lvert u_{a,\mu}^{+}\rvert^{p}dx\\
&\leqslant a^{p}+\int_{B_{2}(x_{\varepsilon})}\lvert u_{a,\mu}^{+}+\tau_{\varepsilon}u_{\varepsilon}\rvert^{p}dx\\
&\leqslant a^{p}+C\int_{B_{2}(x_{\varepsilon})}\big(\lvert u_{a,\mu}^{+}\rvert^{p}+\lvert u_{\varepsilon}\rvert^{p}\big)dx\\
&\leqslant a^{p}+C(e^{-pb\varepsilon^{-1}}+\varepsilon^{p}\lvert\log\varepsilon\rvert)\\
&=a^{p}+O(\varepsilon^{p}\lvert\log\varepsilon\rvert).
\end{align*}
Thus,
\begin{equation}\label{Vep}
\big(a\lVert V_{\varepsilon,\tau_{\varepsilon}}\rVert_{p}^{-1}\big)^{q(1-\gamma_{q})}\geqslant 1+O(\varepsilon^{p}\lvert\log\varepsilon\rvert).
\end{equation}
It is easy to see that
\begin{equation}\label{Veq}
\lVert V_{\varepsilon,\tau_{\varepsilon}}\rVert_{q}^{q}\geqslant\lVert u_{a,\mu}^{+}\rVert_{q}^{q}+\tau_{\varepsilon}^{q}\lVert u_{\varepsilon}\rVert_{q}^{q}\geqslant\lVert u_{a,\mu}^{+}\rVert_{q}^{q}+C\varepsilon^{N-\frac{(N-p)q}{p}},
\end{equation}
and
\begin{equation}\label{Vepstar}
\lVert V_{\varepsilon,\tau_{\varepsilon}}\rVert_{p^*}^{p^*}\geqslant\lVert u_{a,\mu}^{+}\rVert_{p^*}^{p^*}+\tau_{\varepsilon}^{p^*}\lVert u_{\varepsilon}\rVert_{p^*}^{p^*}=\lVert u_{a,\mu}^{+}\rVert_{p^*}^{p^*}+S^{\frac{N}{p}}\tau_{\varepsilon}^{p^*}+O(\varepsilon^{\frac{N}{p-1}}).
\end{equation}
Combining (\ref{wete}), (\ref{nablaVe}), (\ref{Vep}), (\ref{Veq}) and (\ref{Vepstar}), we obtain
\begin{align*}
m^{-}(a,\mu)&\leqslant m^{+}(a,\mu)+S^{\frac{N}{p}}\Big(\frac{1}{p}\tau_{\varepsilon}^{p}-\frac{1}{p^*}\tau_{\varepsilon}^{p^*}\Big)-C\varepsilon^{N-\frac{(N-p)q}{p}}+O(\varepsilon^{p}\lvert\log\varepsilon\rvert)\\
&<m^{+}(a,\mu)+\frac{1}{N}S^{\frac{N}{p}},
\end{align*}
by taking $\varepsilon$ sufficiently small.
\end{proof}
\begin{lemma}\label{m-<m+2}
Let $N<p^2<9$. Then, we have
\[m^{-}(a,\mu)<m^{+}(a,\mu)+\frac{1}{N}S^{\frac{N}{p}}.\]
\end{lemma}
\begin{proof}
We set $V_{\varepsilon,\tau}=u_{a,\mu}^{+}+\tau u_{\varepsilon}$, with $W_{\varepsilon,\tau}$ defined as in (\ref{wet}). Then, we can choose $\tau=\tau_{\varepsilon}>0$ such that $W_{\varepsilon,\tau_{\varepsilon}}\in\mathcal{P}_{a,\mu}^{-}$. Moreover, there exists $t_{2}>t_{1}>0$ independent of $\varepsilon$ such that $\tau_{\varepsilon}\in[t_{1},t_{2}]$. Therefore,
\begin{equation}
m^{-}(a,\mu)\leqslant\frac{1}{p}\lVert\nabla V_{\varepsilon,\tau_{\varepsilon}}\rVert_{p}^p-\frac{\mu}{q}(a\lVert V_{\varepsilon,\tau_{\varepsilon}}\rVert_{p}^{-1})^{q(1-\gamma_{q})}\lVert V_{\varepsilon,\tau_{\varepsilon}}\rVert_{q}^q-\frac{1}{p^*}\lVert V_{\varepsilon,\tau_{\varepsilon}}\rVert_{p^*}^{p^*}.
\end{equation}
Now, we estimate $\lVert\nabla V_{\varepsilon,\tau_{\varepsilon}}\rVert_{p}^{p}$ and $\lVert V_{\varepsilon,\tau_{\varepsilon}}\rVert_{p}^{p}$. One can show that, for all $a,b\geqslant 0$,
\[(a^2+b^2+2ab\cos\alpha)^{\frac{p}{2}}\leqslant a^p+b^p+pa^{p-1}b\cos\alpha+Ca^{\frac{p-1}{2}}b^{\frac{p+1}{2}},\]
uniformly in $\alpha$. Thus,
\begin{align}\label{nablaVete1}
\lVert\nabla V_{\varepsilon,\tau_{\varepsilon}}\rVert_{p}^{p}&=\int_{\mathbb{R}^N}\big(\lvert\nabla u_{a,\mu}^{+}\rvert^2+\tau_{\varepsilon}^2\lvert\nabla u_{\varepsilon}\rvert^2+2\tau_{\varepsilon}\nabla u_{a,\mu}^{+}\cdot\nabla u_{\varepsilon}\big)^{\frac{p}{2}}dx\nonumber\\
&\leqslant\lVert\nabla u_{a,\mu}^{+}\rVert_{p}^{p}+\tau_{\varepsilon}^{p}\lVert\nabla u_{\varepsilon}\rVert_{p}^{p}\nonumber\\
&\qquad\qquad+p\tau_{{\varepsilon}}\int_{\mathbb{R}^N}\lvert\nabla u_{a,\mu}^{+}\rvert^{p-2}\nabla u_{a,\mu}^{+}\cdot\nabla u_{\varepsilon}dx+C\int_{\mathbb{R}^N}\lvert\nabla u_{a,\mu}^{+}\rvert^{\frac{p-1}{2}}\lvert\nabla u_{\varepsilon}\rvert^{\frac{p+1}{2}}dx.
\end{align}
By \cite{tp}, $\nabla u_{a,\mu}^{+}$ is locally H\"older continuous, hence
\begin{equation}\label{interact}
\int_{\mathbb{R}^N}\lvert\nabla u_{a,\mu}^{+}\rvert^{\frac{p-1}{2}}\lvert\nabla u_{\varepsilon}\rvert^{\frac{p+1}{2}}dx\leqslant C\int_{\mathbb{R}^N}\lvert\nabla u_{\varepsilon}\rvert^{\frac{p+1}{2}}dx=O(\varepsilon^{\frac{(N-p)(p+1)}{2p(p-1)}}).
\end{equation}
By (\ref{equation}), we know
\begin{align}\label{lambda+}
&\int_{\mathbb{R}^N}\lvert\nabla u_{a,\mu}^{+}\rvert^{p-2}\nabla u_{a,\mu}^{+}\cdot\nabla u_{\varepsilon}dx\nonumber\\
=&\lambda_{a,\mu}^{+}\int_{\mathbb{R}^N}\lvert u_{a,\mu}^{+}\rvert^{p-1}u_{\varepsilon}dx+\mu\int_{\mathbb{R}^N}\lvert u_{a,\mu}^{+}\rvert^{q-1}u_{\varepsilon}dx+\int_{\mathbb{R}^N}\lvert u_{a,\mu}^{+}\rvert^{p^*-1}u_{\varepsilon}dx.
\end{align}
From (\ref{nablaVete1}), (\ref{interact}) and (\ref{lambda+}), we obtain
\begin{equation}\label{nablaVete}
\begin{split}
\lVert\nabla V_{\varepsilon,\tau_{{\varepsilon}}}\rVert_{p}^{p}&
\leqslant\lVert\nabla u_{a,\mu}^{+}\rVert_{p}^{p}+S^{\frac{N}{p}}\tau_{\varepsilon}^{p}+p\tau_{\varepsilon}\lambda_{a,\mu}^{+}\int_{\mathbb{R}^N}\lvert u_{a,\mu}^{+}\rvert^{p-1}u_{\varepsilon}dx\\
&\qquad\qquad+p\tau_{\varepsilon}\mu\int_{\mathbb{R}^N}\lvert u_{a,\mu}^{+}\rvert^{q-1}u_{\varepsilon}dx+p\tau_{\varepsilon}\int_{\mathbb{R}^N}\lvert u_{a,\mu}^{+}\rvert^{p^*-1}u_{\varepsilon}dx+O(\varepsilon^{\frac{(N-p)(p+1)}{2p(p-1)}}).
\end{split}
\end{equation}
In the same way, we have
\begin{align*}
\lVert V_{\varepsilon,\tau_{\varepsilon}}\rVert_{p}^{p}&\leqslant\lVert u_{a,\mu}^{+}\rVert_{p}^{p}+\tau_{\varepsilon}^{p}\lVert u_{\varepsilon}\rVert_{p}^{p}+p\tau_{\varepsilon}\int_{\mathbb{R}^N}\lvert u_{a,\mu}^{+}\rvert^{p-1}u_{\varepsilon}dx+C\int_{\mathbb{R}^N}\lvert u_{a,\mu}^{+}\rvert^{\frac{p-1}{2}}\lvert u_{\varepsilon}\rvert^{\frac{p+1}{2}}dx\nonumber\\
&\leqslant a^p+p\tau_{\varepsilon}\int_{\mathbb{R}^N}\lvert u_{a,\mu}^{+}\rvert^{p-1}u_{\varepsilon}dx+O(\varepsilon^{\frac{(N-p)(p+1)}{2p(p-1)}}),
\end{align*}
which implies
\begin{equation}\label{Vepp}
\big(a\lVert V_{\varepsilon,\tau_{\varepsilon}}\rVert_{p}^{-1}\big)^{q(1-\gamma_{q})}\geqslant 1-\frac{q(1-\gamma_{q})\tau_{\varepsilon}}{a^{p}}\int_{\mathbb{R}^N}\lvert u_{a,\mu}^{+}\rvert^{p-1}u_{\varepsilon}dx+O(\varepsilon^{\frac{(N-p)(p+1)}{2p(p-1)}}).
\end{equation}
Next, we estimate $\lVert V_{\varepsilon,\tau_{\varepsilon}}\rVert_{q}^{q}$ and $\lVert V_{\varepsilon,\tau_{\varepsilon}}\rVert_{p^*}^{p^*}$. For every $a,b\geqslant 0$, we know
\[(a+b)^{r}\geqslant\left\{\begin{array}{ll}
a^r+b^r+r(a^{r-1}b+ab^{r-1})&\forall r\geqslant 3\\
a^r+ra^{r-1}b&\forall r\geqslant 1.
\end{array}
\right.\]
Thus, we have
\begin{equation}\label{Veqq}
\lVert V_{\varepsilon,\tau_{\varepsilon}}\rVert_{q}^{q}\geqslant\lVert u_{a,\mu}^{+}\rVert_{q}^{q}+q\tau_{\varepsilon}\int_{\mathbb{R}^N}\lvert u_{a,\mu}^{+}\rvert^{q-1}u_{\varepsilon}dx,
\end{equation}
and
\begin{align}\label{Vepstarp}
\lVert V_{\varepsilon,\tau_{\varepsilon}}\rVert_{p^*}^{p^*}&\geqslant\lVert u_{a,\mu}^{+}\rVert_{p^*}^{p^*}+\tau_{\varepsilon}^{p^*}\lVert u_{\varepsilon}\rVert_{p^*}^{p^*}\nonumber\\
&\qquad\qquad+p^*\tau_{\varepsilon}\int_{\mathbb{R}^N}\lvert u_{a,\mu}^{+}\rvert^{p^*-1}u_{\varepsilon}dx+p^*\tau_{\varepsilon}^{p^*-1}\int_{\mathbb{R}^N}u_{a,\mu}^{+}\lvert u_{\varepsilon}\rvert^{p^*-1}dx\nonumber\\
&=\lVert u_{a,\mu}^{+}\rVert_{p^*}^{p^*}+S^{\frac{N}{p}}\tau_{\varepsilon}^{p^*}\nonumber\\
&\qquad\qquad+p^*\tau_{\varepsilon}\int_{\mathbb{R}^N}\lvert u_{a,\mu}^{+}\rvert^{p^*-1}u_{\varepsilon}dx+p^*\tau_{\varepsilon}^{p^*-1}\int_{\mathbb{R}^N}u_{a,\mu}^{+}\lvert u_{\varepsilon}\rvert^{p^*-1}dx+O(\varepsilon^{\frac{N}{p-1}}).
\end{align}
Combining the above bound on $m^{-}(a,\mu)$ with (\ref{nablaVete}), (\ref{Vepp}), (\ref{Veqq}), (\ref{Vepstarp}) and using $\lambda_{a,\mu}^{+}a^p=\mu(\gamma_{q}-1)\lVert u_{a,\mu}^{+}\rVert_{q}^q$, we obtain
\begin{align*}
m^{-}(a,\mu)&\leqslant m^{+}(a,\mu)+S^{\frac{N}{p}}\Big(\frac{1}{p}\tau_{\varepsilon}^p-\frac{1}{p^*}\tau_{\varepsilon}^{p^*}\Big)-\tau_{\varepsilon}^{p^*-1}\int_{\mathbb{R}^N}u_{a,\mu}^{+}u_{\varepsilon}^{p^*-1}dx+O(\varepsilon^{\frac{(N-p)(p+1)}{2p(p-1)}})\nonumber\\
&\leqslant m^{+}(a,\mu)+\frac{1}{N}S^{\frac{N}{p}}-C\varepsilon^{\frac{N-p}{p}}+O(\varepsilon^{\frac{(N-p)(p+1)}{2p(p-1)}})\nonumber\\
&<m^{+}(a,\mu)+\frac{1}{N}S^{\frac{N}{p}},
\end{align*}
by taking $\varepsilon$ sufficiently small.
\end{proof}
\begin{remark}
{\rm Although Lemma \ref{m-<m+1} and Lemma \ref{m-<m+2} establish the same estimate, the methods of proof are slightly different, so we state the two results separately.}
\end{remark}
For every $0<a<\big(\mu^{-1}\alpha\big)^{1/(q(\gamma_{q}-1))}$, let $u\in\mathcal{P}_{a,\mu}^{\pm}$; then $u_{b}=\frac{b}{a}u\in S_{b}$ for every $b>0$. By Lemma \ref{structure}, there exists a unique $t_{\pm}(b)\in\mathbb{R}$ such that $t_{\pm}(b)\star u_{b}\in\mathcal{P}_{b,\mu}^{\pm}$ for every $0<b<\big(\mu^{-1}\alpha\big)^{1/(q(\gamma_{q}-1))}$. Clearly, $t_{\pm}(a)=0$.
\begin{lemma}
For every $0<a<\big(\mu^{-1}\alpha\big)^{1/(q(\gamma_{q}-1))}$, $t'_{\pm}(a)$ exists and
\begin{equation}\label{tderivative}
t'_{\pm}(a)=\frac{\mu q\gamma_{q}\lVert u\rVert_{q}^q+p^*\lVert u\rVert_{p^*}^{p^*}-p\lVert\nabla u\rVert_{p}^p}{a\big(p\lVert\nabla u\rVert_{p}^p-\mu q\gamma_{q}^2\lVert u\rVert_{q}^q-p^*\lVert u\rVert_{p^*}^{p^*}\big)}.
\end{equation}
\end{lemma}
\begin{proof}
Since $t_{\pm}(b)\star u_{b}\in\mathcal{P}_{b,\mu}^{\pm}$, we have
\[\Big(\frac{b}{a}\Big)^pe^{pt_{\pm}(b)}\lVert\nabla u\rVert_{p}^p=\mu\gamma_{q}\Big(\frac{b}{a}\Big)^{q}e^{q\gamma_{q}t_{\pm}(b)}\lVert u\rVert_{q}^q+\Big(\frac{b}{a}\Big)^{p^*}e^{p^*t_{\pm}(b)}\lVert u\rVert_{p^*}^{p^*}.\]
Considering the function
\[\Phi(b,t)=\Big(\frac{b}{a}\Big)^pe^{pt}\lVert\nabla u\rVert_{p}^p-\mu\gamma_{q}\Big(\frac{b}{a}\Big)^{q}e^{q\gamma_{q}t}\lVert u\rVert_{q}^q-\Big(\frac{b}{a}\Big)^{p^*}e^{p^*t}\lVert u\rVert_{p^*}^{p^*},\]
then $\Phi(a,0)=0$ and $\Phi(b,t)$ has a continuous derivative in some neighborhood of $(a,0)$. Moreover, since $u\in\mathcal{P}_{a,\mu}^{\pm}$, we have
\[\partial_{t}\Phi(a,0)=p\lVert\nabla u\rVert_{p}^p-\mu q\gamma_{q}^2\lVert u\rVert_{q}^q-p^*\lVert u\rVert_{p^*}^{p^*}\neq 0.\]
Now, by the implicit function theorem, we know $t'_{\pm}(a)$ exists and (\ref{tderivative}) holds.
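To make the conclusion concrete, the implicit function theorem gives $t'_{\pm}(a)=-\partial_{b}\Phi(a,0)/\partial_{t}\Phi(a,0)$, and a direct differentiation yields
\[\partial_{b}\Phi(a,0)=\frac{1}{a}\big(p\lVert\nabla u\rVert_{p}^p-\mu q\gamma_{q}\lVert u\rVert_{q}^q-p^*\lVert u\rVert_{p^*}^{p^*}\big),\]
so the quotient is exactly the right-hand side of (\ref{tderivative}).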
\end{proof}
\begin{lemma}\label{decreasing}
$m^{\pm}(a,\mu)$ is non-increasing for $0<a<\big(\mu^{-1}\alpha\big)^{1/(q(\gamma_{q}-1))}$.
\end{lemma}
\begin{proof}
Since
\[E_{\mu}\big(t_{\pm}(b)\star u_{b}\big)=\frac{1}{p}\Big(\frac{b}{a}\Big)^pe^{pt_{\pm}(b)}\lVert\nabla u\rVert_{p}^p-\frac{\mu}{q}\Big(\frac{b}{a}\Big)^{q}e^{q\gamma_{q}t_{\pm}(b)}\lVert u\rVert_{q}^q-\frac{1}{p^*}\Big(\frac{b}{a}\Big)^{p^*}e^{p^*t_{\pm}(b)}\lVert u\rVert_{p^*}^{p^*},\]
and $u\in\mathcal{P}_{a,\mu}^{\pm}$, we have
\begin{align*}
\frac{dE_{\mu}\big(t_{\pm}(b)\star u_{b}\big)}{db}|_{b=a}&=\frac{1}{a}\big(\lVert\nabla u\rVert_{p}^p-\mu\lVert u\rVert_{q}^q-\lVert u\rVert_{p^*}^{p^*}\big)+\big(\lVert\nabla u\rVert_{p}^p-\mu\gamma_{q}\lVert u\rVert_{q}^q-\lVert u\rVert_{p^*}^{p^*}\big)t'_{\pm}(a)\\
&=\frac{\mu(\gamma_{q}-1)\lVert u\rVert_{q}^q}{a}<0,
\end{align*}
which implies $E_{\mu}\big(t_{\pm}(b)\star u_{b}\big)<E_{\mu}(u)$ for $a<b<\big(\mu^{-1}\alpha\big)^{1/(q(\gamma_{q}-1))}$. Therefore, $m^{\pm}(a,\mu)\geqslant m^{\pm}(b,\mu)$ for $a<b<\big(\mu^{-1}\alpha\big)^{1/(q(\gamma_{q}-1))}$.
\end{proof}
\begin{lemma}\label{msigma}
We have $m^{-}(a,\mu)=m_{r}^{-}(a,\mu)=\sigma(a,\mu)$.
\end{lemma}
\begin{proof}
By the definition of $m^{-}(a,\mu)$ and $m_{r}^{-}(a,\mu)$, we have $m^{-}(a,\mu)\leqslant m_{r}^{-}(a,\mu)$. For every $u\in\mathcal{P}_{a,\mu}^{-}$, let $v=\lvert u\rvert^*$, the Schwarz rearrangement of $\lvert u\rvert$, then
\[E_{\mu}(s\star v)\leqslant E_{\mu}(s\star u)\quad\forall s\in\mathbb{R}.\]
Therefore, by lemma \ref{structure},
\[m_{r}^{-}(a,\mu)\leqslant E_{\mu}(t_{v}\star v)\leqslant E_{\mu}(t_{v}\star u)\leqslant E_{\mu}(u),\]
which implies $m_{r}^{-}(a,\mu)\leqslant m^{-}(a,\mu)$.
Next, we prove $m_{r}^{-}(a,\mu)=\sigma(a,\mu)$.
For every $u\in\mathcal{P}_{a,\mu}^{-}\cap S_{a,r}$, choose $s_{0}$ such that $E_{\mu}(s_{0}\star u)\leqslant 2m^{+}(a,\mu)$ and define
\[\gamma_{u}: \tau\in [0,1]\longmapsto\big(0,((1-\tau)s_{u}+\tau s_{0})\star u\big)\in\mathbb{R}\times S_{a,r}.\]
By Lemma \ref{structure}, $\gamma_{u}\in\Gamma$. Thus
\begin{align*}
\sigma(a,\mu)&\leqslant\max_{\tau\in [0,1]}\tilde{E}_{\mu}\big(0,((1-\tau)s_{u}+\tau s_{0})\star u\big)\\
&=\max_{\tau\in [0,1]}E_{\mu}\big(((1-\tau)s_{u}+\tau s_{0})\star u\big)\\
&\leqslant E_{\mu}(t_{u}\star u)=E_{\mu}(u),
\end{align*}
which implies $\sigma(a,\mu)\leqslant m_{r}^{-}(a,\mu)$.
For every $\gamma\in\Gamma$, we have $\gamma(0)\in(0,\mathcal{P}_{a,\mu}^{+})$ and $\gamma(1)\in E^{2m^{+}(a,\mu)}$. Then, by Lemma \ref{structure}, we know
$t_{\theta(0)}>0>t_{\theta(1)}$, and since $t_{\alpha(\tau)\star\theta(\tau)}$ is continuous in $\tau$, there exists $\tau_{\gamma}\in [0,1]$ such that $t_{\alpha(\tau_{\gamma})\star\theta(\tau_{\gamma})}=0$. This implies
\begin{equation*}
\max_{(\alpha,\theta)\in\gamma([0,1])}\tilde{E}_{\mu}(\alpha,\theta)\geqslant\tilde{E}_{\mu}\big(\alpha(\tau_{\gamma}),\theta(\tau_{\gamma})\big)=E_{\mu}\big(\alpha(\tau_{\gamma})\star\theta(\tau_{\gamma})\big)\geqslant m_{r}^{-}(a,\mu).
\end{equation*}
Therefore, $\sigma(a,\mu)\geqslant m_{r}^{-}(a,\mu)$.
\end{proof}
\noindent\textbf{Proof of Theorem \ref{th2}:} Let
\[X=\mathbb{R}\times S_{a,r},\quad\mathcal{F}=\Gamma,\quad\text{and}\quad B=(0,A_{k})\cup(0,E^{2m^{+}(a,\mu)}).\]
Then, using the terminology in \cite[definition 5.1]{gn}, $\Gamma$ is a homotopy stable family of compact subsets of $\mathbb{R}\times S_{a,r}$ with extended closed boundary $(0,A_{k})\cup(0,E^{2m^{+}(a,\mu)})$. Let
\[\varphi=\tilde{E}_{\mu},\quad c=\sigma(a,\mu),\quad\text{and}\quad F=\Big\{(s,u)\in\mathbb{R}\times S_{a,r}: \tilde{E}_{\mu}(s,u)\geqslant c\Big\}.\]
One can check that $F$ satisfies assumptions (F'1) and (F'2) in \cite[theorem 5.2]{gn}.
Take a minimizing sequence $\{\gamma_{n}=(\alpha_{n},\theta_{n})\}\subset\Gamma$ for $\sigma(a,\mu)$ with the properties that $\alpha_{n}\equiv 0$ and $\theta_{n}(\tau)\geqslant 0$ for every $\tau\in [0,1]$ (if this is not the case, we just have to notice that $\{(0,\alpha_{n}\star\theta_{n})\}$ is also a minimizing sequence). Then, by \cite[theorem 5.2]{gn}, there exists a PS sequence $\{(s_{n},w_{n})\}\subset\mathbb{R}\times S_{a,r}$ for $\tilde{E}_{\mu}|_{\mathbb{R}\times S_{a,r}}$ at level $\sigma(a,\mu)$, that is,
\begin{equation}\label{PS}
\partial_{s}\tilde{E}_{\mu}(s_{n},w_{n})\rightarrow 0,\quad\text{and}\quad\lVert\partial_{u}\tilde{E}_{\mu}(s_{n},w_{n})\rVert_{(T_{w_{n}}S_{a,r})^*}\rightarrow 0\quad\text{as}\quad n\rightarrow\infty.
\end{equation}
Moreover,
\begin{equation}\label{dist}
\lvert s_{n}\rvert+\mathrm{dist}_{W^{1,p}}\big(w_{n},\theta_{n}([0,1])\big)\rightarrow 0\quad\text{as}\quad n\rightarrow\infty.
\end{equation}
Thus, we have
\[E_{\mu}(s_{n}\star w_{n})=\tilde{E}_{\mu}(s_{n},w_{n})\rightarrow\sigma(a,\mu)\quad\text{as}\quad n\rightarrow\infty,\]
and
\begin{align*}
dE_{\mu}(s_{n}\star w_{n})(s_{n}\star\varphi)&=\partial_{u}\tilde{E}_{\mu}(0,s_{n}\star w_{n})(s_{n}\star\varphi)\\
&=\partial_{u}\tilde{E}_{\mu}(s_{n},w_{n})\varphi\\
&=o(1)\lVert\varphi\rVert=o(1)\lVert s_{n}\star\varphi\rVert
\end{align*}
for every $\varphi\in T_{w_{n}}S_{a,r}$, which implies $\{u_{n}\}:=\{s_{n}\star w_{n}\}$ is a PS sequence for $E_{\mu}|_{S_{a,r}}$ at level $\sigma(a,\mu)$. Since $E_{\mu}$ is invariant under rotations, by \cite[theorem 2.2]{kjom}, $\{u_{n}\}$ is also a PS sequence for $E_{\mu}|_{S_{a}}$ at level $\sigma(a,\mu)$.
From (\ref{PS}), we have
\[P_{\mu}(u_{n})=P_{\mu}(s_{n}\star w_{n})=\partial_{s}\tilde{E}_{\mu}(s_{n},w_{n})\rightarrow 0\]
as $n\rightarrow\infty$. Thus, by Proposition \ref{compactnesslemma} and Lemmas \ref{m-<m+1} and \ref{m-<m+2}, one of the cases in Proposition \ref{compactnesslemma} holds. If case (i) occurs, we have $u_{n}\rightharpoonup u$ in $W^{1,p}(\mathbb{R}^N)$ and
\begin{equation}\label{eless0}
E_{\mu}(u)\leqslant m^{-}(a,\mu)-\frac{1}{N}S^{\frac{N}{p}}.
\end{equation}
Since $u$ solves (\ref{equation}) for some $\lambda<0$, by Theorem \ref{th1} and Lemma \ref{decreasing},
\[E_{\mu}(u)\geqslant m^{+}\big(\lVert u\rVert_{p},\mu\big)\geqslant m^{+}(a,\mu).\]
Therefore,
\[m^{+}(a,\mu)\leqslant m^{-}(a,\mu)-\frac{1}{N}S^{\frac{N}{p}},\]
which contradicts Lemmas \ref{m-<m+1} and \ref{m-<m+2}. This implies that case (ii) in Proposition \ref{compactnesslemma} holds, that is, $u_{n}\rightarrow u\in S_{a,r}$ in $W^{1,p}(\mathbb{R}^N)$, and $u$ solves (\ref{equation}) for some $\lambda<0$. Moreover, noticing that $\theta_{n}(\tau)\geqslant 0$ for every $\tau\in [0,1]$, (\ref{dist}) implies that $u$ is non-negative, and hence positive by the strong maximum principle.\qed
\section{\textbf{Existence result to the case $q=p+\frac{p^2}{N}$}}\label{critical}
In this section, we prove Theorem \ref{th3} for $q=p+\frac{p^2}{N}$. First, we analyze the properties of $E_{\mu}$ and $\mathcal{P}_{a,\mu}$, and then we construct a mini-max structure.
\begin{lemma}\label{empty2}
$\mathcal{P}_{a,\mu}^{0}=\emptyset$, and $\mathcal{P}_{a,\mu}$ is a smooth manifold of co-dimension $2$ in $W^{1,p}(\mathbb{R}^N)$.
\end{lemma}
\begin{proof}
If $u\in\mathcal{P}_{a,\mu}^{0}$, we have
\[\lVert\nabla u\rVert_{p}^p=\mu\gamma_{q}\lVert u\rVert_{q}^q+\lVert u\rVert_{p^*}^{p^*},\quad\text{and}\quad p\lVert\nabla u\rVert_{p}^p=\mu q\gamma_{q}^2\lVert u\rVert_{q}^q+p^*\lVert u\rVert_{p^*}^{p^*},\]
which implies
\[\mu\gamma_{q}(q\gamma_{q}-p)\lVert u\rVert_{q}^q+(p^*-p)\lVert u\rVert_{p^*}^{p^*}=0,\]
and hence $\lVert u\rVert_{p^*}=0$ (since $q\gamma_{q}=p$). This is impossible since $u\in S_{a}$. The rest of the proof is similar to that of Lemma \ref{empty}, and hence is omitted.
\end{proof}
\begin{lemma}\label{structure2}
For every $u\in S_{a}$, the function $\Psi_{u}^{\mu}$ has a unique critical point $t_{u}$ which is a strict maximum point at positive level. Moreover:
{\rm (i)}
\begin{minipage}[t]{\linewidth}
$\mathcal{P}_{a,\mu}=\mathcal{P}_{a,\mu}^{-}$, and $s\star u\in\mathcal{P}_{a,\mu}$ if and only if $s=t_{u}$.
\end{minipage}
{\rm (ii)}
\begin{minipage}[t]{\linewidth}
$t_{u}<0$ if and only if $P_{\mu}(u)<0$.
\end{minipage}
{\rm (iii)}
\begin{minipage}[t]{\linewidth}
The map $u\in S_{a}\longmapsto t_{u}\in\mathbb{R}$ is of class $C^1$.
\end{minipage}
\end{lemma}
\begin{proof}
Here we just prove $\mathcal{P}_{a,\mu}=\mathcal{P}_{a,\mu}^{-}$; the rest of the proof is similar to that of Lemma \ref{structure}. For every $u\in\mathcal{P}_{a,\mu}$, we have $t_{u}=0$, and hence $0$ is a strict maximum point of $\Psi_{u}^{\mu}$. Now, $\big(\Psi_{u}^{\mu}\big)''(0)\leqslant 0$ implies $u\in\mathcal{P}_{a,\mu}^{0}\cup\mathcal{P}_{a,\mu}^{-}$. By Lemma \ref{empty2}, we obtain $u\in\mathcal{P}_{a,\mu}^{-}$.
\end{proof}
\begin{lemma}
We have $m(a,\mu)=m^{-}(a,\mu)>0$.
\end{lemma}
\begin{proof}
By lemma \ref{structure2}, we know $m(a,\mu)=m^{-}(a,\mu)$. If $u\in\mathcal{P}_{a,\mu}$, then
\[\lVert\nabla u\rVert_{p}^p=\mu\gamma_{q}\lVert u\rVert_{q}^q+\lVert u\rVert_{p^*}^{p^*}.\]
Using the Gagliardo-Nirenberg inequality and Sobolev inequality, we have
\[\lVert\nabla u\rVert_{p}^p\leqslant\frac{\mu}{q}C_{N,q}^qa^{\frac{p^2}{N}}\lVert\nabla u\rVert_{p}^p+S^{-\frac{p^*}{p}}\lVert\nabla u\rVert_{p}^{p^*}.\]
Combining (\ref{muconcri}), we derive that
\[\inf_{u\in\mathcal{P}_{a,\mu}}\lVert\nabla u\rVert_{p}>0.\]
For every $u\in\mathcal{P}_{a,\mu}$, eliminating $\lVert u\rVert_{p^*}^{p^*}$ via $P_{\mu}(u)=0$, using $q\gamma_{q}=p$ and the Gagliardo-Nirenberg inequality, we have
\[E_{\mu}(u)=\frac{1}{N}\lVert\nabla u\rVert_{p}^p-\frac{p\mu}{Nq}\lVert u\rVert_{q}^q\geqslant\frac{1}{N}\Big(1-\frac{p}{q}C_{N,q}^{q}\mu a^{\frac{p^2}{N}}\Big)\lVert\nabla u\rVert_{p}^p,\]
which implies $m(a,\mu)>0$.
\end{proof}
\begin{lemma}\label{neighbor}
There exists $k>0$ sufficiently small such that
\[0<\sup_{u\in A_{k}}E_{\mu}(u)<m(a,\mu),\]
and
\[E_{\mu}(u), P_{\mu}(u)>0\quad\forall u\in A_{k},\]
where $A_{k}=\Big\{u\in S_{a}: \lVert\nabla u\rVert_{p}\leqslant k\Big\}$.
\end{lemma}
\begin{proof}
By the Gagliardo-Nirenberg inequality and Sobolev inequality, we have
\[P_{\mu}(u)\geqslant\Big(1-\frac{p}{q}C_{N,q}^q\mu a^{\frac{p^2}{N}}\Big)\lVert\nabla u\rVert_{p}^p-S^{-\frac{p^*}{p}}\lVert\nabla u\rVert_{p}^{p^*},\]
and
\[\frac{1}{p}\lVert\nabla u\rVert_{p}^p\geqslant E_{\mu}(u)\geqslant\Big(\frac{1}{p}-\frac{1}{q}C_{N,q}^q\mu a^{\frac{p^2}{N}}\Big)\lVert\nabla u\rVert_{p}^p-\frac{1}{pS^{p^*/p}}\lVert\nabla u\rVert_{p}^{p^*}.\]
Thus, we can choose suitable $k>0$ such that the conclusion holds.
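Concretely (with $\delta$ introduced here only for illustration), write $\delta:=1-\frac{p}{q}C_{N,q}^{q}\mu a^{\frac{p^2}{N}}>0$. The two bounds above show that any $k>0$ with
\[S^{-\frac{p^*}{p}}k^{p^*-p}\leqslant\frac{\delta}{2}\quad\text{and}\quad\frac{1}{p}k^{p}<m(a,\mu)\]
works: for $u\in A_{k}$ one then has $P_{\mu}(u)\geqslant\frac{\delta}{2}\lVert\nabla u\rVert_{p}^p>0$, $E_{\mu}(u)\geqslant\frac{\delta}{2p}\lVert\nabla u\rVert_{p}^p>0$ and $E_{\mu}(u)\leqslant\frac{1}{p}\lVert\nabla u\rVert_{p}^p<m(a,\mu)$.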
\end{proof}
By Lemma \ref{neighbor}, we can construct a mini-max structure. Let
\[E^{c}:=\Big\{u\in S_{a}:E_{\mu}(u)\leqslant c\Big\}.\]
We introduce the mini-max class
\[\Gamma:=\Big\{\gamma=(\alpha,\theta)\in C([0,1],\mathbb{R}
\times S_{a,r}): \gamma(0)\in(0,A_{k}), \gamma(1)\in(0,E^{0})\Big\},\]
with associated mini-max level
\[\sigma(a,\mu):=\inf_{\gamma\in\Gamma}\max_{(\alpha,\theta)\in\gamma([0,1])}\tilde{E}_{\mu}(\alpha,\theta),\]
where
\[\tilde{E}_{\mu}(s,u):=E_{\mu}(s\star u).\]
In order to use Proposition \ref{compactnesslemma}, we need the following lemmas.
\begin{lemma}\label{msigma2}
We have $m(a,\mu)=m_{r}(a,\mu)=\sigma(a,\mu)$.
\end{lemma}
\begin{proof}
The proof of $m(a,\mu)=m_{r}(a,\mu)$ is similar to Lemma \ref{msigma}, and hence we omit it. Next, we prove $m_{r}(a,\mu)=\sigma(a,\mu)$.
For every $u\in\mathcal{P}_{a,\mu}\cap S_{a,r}$, by Lemma \ref{structure2} we have $t_{u}=0$. Choose $s_{0}<0<s_{1}$ such that $\lVert\nabla(s_{0}\star u)\rVert_{p}\leqslant k$ and $E_{\mu}(s_{1}\star u)\leqslant 0$, and define
\[\gamma_{u}: \tau\in [0,1]\longmapsto\big(0,((1-\tau)s_{0}+\tau s_{1})\star u\big)\in\mathbb{R}\times S_{a,r};\]
then $\gamma_{u}\in\Gamma$. Thus
\begin{align*}
\sigma(a,\mu)&\leqslant\max_{\tau\in [0,1]}\tilde{E}_{\mu}\big(0,((1-\tau)s_{0}+\tau s_{1})\star u\big)\\
&=\max_{\tau\in [0,1]}E_{\mu}\big(((1-\tau)s_{0}+\tau s_{1})\star u\big)\\
&\leqslant E_{\mu}(t_{u}\star u)=E_{\mu}(u),
\end{align*}
which implies $\sigma(a,\mu)\leqslant m_{r}(a,\mu)$.
For every $\gamma\in\Gamma$, since $\gamma(0)\in(0,A_{k})$, by Lemma \ref{neighbor}, we have
$P_{\mu}\big(\theta(0)\big)>0$. Now we claim that $P_{\mu}\big(\theta(1)\big)<0$. Indeed, since $\gamma(1)\in(0,E^{0})$, we have $E_{\mu}\big(\theta(1)\big)\leqslant 0$, that is, $\Psi_{\theta(1)}^{\mu}(0)\leqslant 0$. Then, by Lemma \ref{structure2}, $t_{\theta(1)}<0$ and hence $P_{\mu}\big(\theta(1)\big)<0$. Since $\tau\longmapsto\alpha(\tau)\star\theta(\tau)$ is continuous from $[0,1]$ to $W^{1,p}(\mathbb{R}^N)$, there exists $\tau_{\gamma}\in(0,1)$ such that $P_{\mu}\big(\alpha(\tau_{\gamma})\star\theta(\tau_{\gamma})\big)=0$. This implies
\begin{equation*}
\max_{(\alpha,\theta)\in\gamma([0,1])}\tilde{E}_{\mu}(\alpha,\theta)\geqslant\tilde{E}_{\mu}\big(\alpha(\tau_{\gamma}),\theta(\tau_{\gamma})\big)=E_{\mu}\big(\alpha(\tau_{\gamma})\star\theta(\tau_{\gamma})\big)\geqslant m_{r}(a,\mu).
\end{equation*}
Therefore, $\sigma(a,\mu)\geqslant m_{r}(a,\mu)$.
\end{proof}
\begin{lemma}\label{energy2}
We have $m(a,\mu)<\frac{1}{N}S^{\frac{N}{p}}$.
\end{lemma}
\begin{proof}
Let
\begin{equation}
W_{\varepsilon}(x)=\big(a^{-1}\lVert u_{\varepsilon}\rVert_{p}\big)^{\frac{N-p}{p}}u_{\varepsilon}\big(a^{-1}\lVert u_{\varepsilon}\rVert_{p}x\big).
\end{equation}
Then, we have
\[\lVert W_{\varepsilon}\rVert_{p}^p=a^p,\quad\lVert\nabla W_{\varepsilon}\rVert_{p}^p=\lVert\nabla u_{\varepsilon}\rVert_{p}^p,\quad\lVert W_{\varepsilon}\rVert_{p^*}^{p^*}=\lVert u_{\varepsilon}\rVert_{p^*}^{p^*},\]
and
\[\lVert W_{\varepsilon}\rVert_{q}^q=(a\lVert u_{\varepsilon}\rVert_{p}^{-1})^{q(1-\gamma_{q})}\lVert u_{\varepsilon}\rVert_{q}^{q}.\]
Thus, there exists a unique $\tau_{\varepsilon}\in\mathbb{R}$ such that $\tau_{\varepsilon}\star W_{\varepsilon}\in\mathcal{P}_{a,\mu}$. By the definition of $m(a,\mu)$, we have
\begin{align}\label{m}
m(a,\mu)&\leqslant E_{\mu}(\tau_{\varepsilon}\star W_{\varepsilon})\nonumber\\
&=\frac{1}{p}e^{p\tau_{\varepsilon}}\lVert\nabla u_{\varepsilon}\rVert_{p}^p-\frac{\mu}{q}e^{q\gamma_{q}\tau_{\varepsilon}}\big(a\lVert u_{\varepsilon}\rVert_{p}^{-1}\big)^{q(1-\gamma_{q})}\lVert u_{\varepsilon}\rVert_{q}^q-\frac{1}{p^*}e^{p^*\tau_{\varepsilon}}\lVert u_{\varepsilon}\rVert_{p^*}^{p^*}.
\end{align}
If $\liminf_{\varepsilon\rightarrow 0}\tau_{\varepsilon}=-\infty$ or $\limsup_{\varepsilon\rightarrow 0}\tau_{\varepsilon}=+\infty$, then
\[m(a,\mu)\leqslant\liminf_{\varepsilon\rightarrow 0}E_{\mu}(\tau_{\varepsilon}\star W_{\varepsilon})\leqslant 0,\]
which contradicts Lemma \ref{neighbor}. Therefore, there exist $t_{2}>t_{1}$ independent of $\varepsilon$ such that $\tau_{\varepsilon}\in[t_{1},t_{2}]$. Now, (\ref{m}) implies
\begin{align*}
m(a,\mu)&\leqslant S^{\frac{N}{p}}\Big(\frac{1}{p}e^{p\tau_{\varepsilon}}-\frac{1}{p^*}e^{p^*\tau_{\varepsilon}}\Big)+O(\varepsilon^{\frac{N-p}{p-1}})-C\lVert u_{\varepsilon}\rVert_{p}^{q(\gamma_{q}-1)}\lVert u_{\varepsilon}\rVert_{q}^q\\
&\leqslant\frac{1}{N}S^{\frac{N}{p}}+O(\varepsilon^{\frac{N-p}{p-1}})-\left\{\begin{array}{ll}
C&N>p^2\\
C\lvert\log\varepsilon\rvert^{-\frac{p}{N}}&N=p^2\\
C\varepsilon^{\frac{p(p^2-N)}{N(p-1)}}&p^{\frac{3}{2}}<N<p^2\\
C\varepsilon^{\frac{p^{3/2}-p}{p-1}}\lvert\log\varepsilon\rvert&N=p^\frac{3}{2}
\end{array}\right.\\
&<\frac{1}{N}S^{\frac{N}{p}},
\end{align*}
by taking $\varepsilon$ sufficiently small.
\end{proof}
Now, we give the proof of Theorem \ref{th3} in the case $q=p+\frac{p^2}{N}$.\\
\noindent\textbf{Proof of Theorem \ref{th3}:} Let
\[X=\mathbb{R}\times S_{a,r},\quad\mathcal{F}=\Gamma,\quad\text{and}\quad B=(0,A_{k})\cup(0,E^{0}).\]
Then, using the terminology in \cite[definition 5.1]{gn}, $\Gamma$ is a homotopy stable family of compact subsets of $\mathbb{R}\times S_{a,r}$ with extended closed boundary $(0,A_{k})\cup(0,E^{0})$. Let
\[\varphi=\tilde{E}_{\mu},\quad c=\sigma(a,\mu),\quad\text{and}\quad F=\Big\{(s,u)\in\mathbb{R}\times S_{a,r}: \tilde{E}_{\mu}(s,u)\geqslant c\Big\}.\]
One can check that $F$ satisfies assumptions (F'1) and (F'2) in \cite[theorem 5.2]{gn}.
Similar to the proof of Theorem \ref{th2}, there exists a PS sequence $\{(s_{n},w_{n})\}\subset\mathbb{R}\times S_{a,r}$ for $\tilde{E}_{\mu}|_{\mathbb{R}\times S_{a,r}}$ at level $\sigma(a,\mu)$,
and we can check that $\{u_{n}\}:=\{s_{n}\star w_{n}\}$ is a PS sequence for $E_{\mu}|_{S_{a,r}}$ at level $\sigma(a,\mu)$. Thus, $\{u_{n}\}$ is also a PS sequence for $E_{\mu}|_{S_{a}}$ at level $\sigma(a,\mu)$.
Since
\[P_{\mu}(u_{n})=P_{\mu}(s_{n}\star w_{n})=\partial_{s}\tilde{E}_{\mu}(s_{n},w_{n})\rightarrow 0\]
as $n\rightarrow\infty$, by Proposition \ref{compactnesslemma} and Lemma \ref{energy2}, one of the cases in Proposition \ref{compactnesslemma} holds. If case (i) occurs, we have $u_{n}\rightharpoonup u$ in $W^{1,p}(\mathbb{R}^N)$ and
\begin{equation}\label{eless02}
E_{\mu}(u)\leqslant m(a,\mu)-\frac{1}{N}S^{\frac{N}{p}}<0.
\end{equation}
Since $u$ solves (\ref{equation}) for some $\lambda<0$, the Pohozaev identity $P_{\mu}(u)=0$ together with $q\gamma_{q}=p$ gives
\[E_{\mu}(u)=\frac{1}{N}\lVert u\rVert_{p^*}^{p^*}>0,\]
a contradiction with (\ref{eless02}). This implies that case (ii) in Proposition \ref{compactnesslemma} holds, that is, $u_{n}\rightarrow u\in S_{a,r}$ in $W^{1,p}(\mathbb{R}^N)$, and $u$ solves (\ref{equation}) for some $\lambda<0$. Moreover, we can choose $u$ to be non-negative, and hence positive by the strong maximum principle.
It remains to show that $u$ is a ground state. This is a direct consequence of Proposition \ref{Pohozaev} and Lemma \ref{msigma2}.\qed
\section{\textbf{Existence result for the case $p+\frac{p^2}{N}<q<p^*$}}
In this section, we always assume that the condition in Theorem \ref{th3} holds and $p+\frac{p^2}{N}<q<p^*$. We will omit some of the proofs, since they are very similar to those in Section \ref{critical}.
\begin{lemma}\label{empty3}
$\mathcal{P}_{a,\mu}^{0}=\emptyset$, and $\mathcal{P}_{a,\mu}$ is a smooth manifold of co-dimension $2$ in $W^{1,p}(\mathbb{R}^N)$.
\end{lemma}
\begin{proof}
If $u\in\mathcal{P}_{a,\mu}^{0}$, we have
\[\lVert\nabla u\rVert_{p}^p=\mu\gamma_{q}\lVert u\rVert_{q}^q+\lVert u\rVert_{p^*}^{p^*},\quad\text{and}\quad p\lVert\nabla u\rVert_{p}^p=\mu q\gamma_{q}^2\lVert u\rVert_{q}^q+p^*\lVert u\rVert_{p^*}^{p^*},\]
which implies
\[\mu\gamma_{q}(q\gamma_{q}-p)\lVert u\rVert_{q}^q+(p^*-p)\lVert u\rVert_{p^*}^{p^*}=0.\]
Since $q\gamma_{q}>p$, both terms on the left-hand side are non-negative, which forces $u=0$; this is impossible since $u\in S_{a}$.
\end{proof}
\begin{lemma}\label{structure3}
For every $u\in S_{a}$, the function $\Psi_{u}^{\mu}$ has a unique critical point $t_{u}$ which is a strict maximum point at positive level. Moreover:
{\rm (i)}
\begin{minipage}[t]{\linewidth}
$\mathcal{P}_{a,\mu}=\mathcal{P}_{a,\mu}^{-}$, and $s\star u\in\mathcal{P}_{a,\mu}$ if and only if $s=t_{u}$.
\end{minipage}
{\rm (ii)}
\begin{minipage}[t]{\linewidth}
$t_{u}<0$ if and only if $P_{\mu}(u)<0$.
\end{minipage}
{\rm (iii)}
\begin{minipage}[t]{\linewidth}
The map $u\in S_{a}\longmapsto t_{u}\in\mathbb{R}$ is of class $C^1$.
\end{minipage}
\end{lemma}
\begin{lemma}
We have $m(a,\mu)=m^{-}(a,\mu)>0$.
\end{lemma}
\begin{proof}
If $u\in\mathcal{P}_{a,\mu}$, then
\[\lVert\nabla u\rVert_{p}^p=\mu\gamma_{q}\lVert u\rVert_{q}^q+\lVert u\rVert_{p^*}^{p^*}.\]
Using the Gagliardo-Nirenberg inequality and Sobolev inequality, we have
\[\lVert\nabla u\rVert_{p}^p\leqslant\frac{\mu}{q}C_{N,q}^qa^{q(1-\gamma_{q})}\lVert\nabla u\rVert_{p}^{q\gamma_{q}}+S^{-\frac{p^*}{p}}\lVert\nabla u\rVert_{p}^{p^*}.\]
Since $q\gamma_{q}>p$, we derive that
\[\inf_{u\in\mathcal{P}_{a,\mu}}\lVert\nabla u\rVert_{p}^p>0,\]
and hence by $P_{\mu}(u)=0$, we know
\[\inf_{u\in\mathcal{P}_{a,\mu}}\big(\lVert u\rVert_{q}^q+\lVert u\rVert_{p^*}^{p^*}\big)>0.\]
For every $u\in\mathcal{P}_{a,\mu}$, we have
\[E_{\mu}(u)=\mu\gamma_{q}\Big(\frac{1}{p}-\frac{1}{q\gamma_{q}}\Big)\lVert u\rVert_{q}^q+\frac{1}{N}\lVert u\rVert_{p^*}^{p^*},\]
which implies $m(a,\mu)>0$.
\end{proof}
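\begin{remark}
{\rm For the reader's convenience, we sketch why $\inf_{u\in\mathcal{P}_{a,\mu}}\lVert\nabla u\rVert_{p}^p>0$ in the proof above. Dividing the displayed inequality by $\lVert\nabla u\rVert_{p}^p$ yields
\[1\leqslant C\mu\lVert\nabla u\rVert_{p}^{q\gamma_{q}-p}+S^{-\frac{p^*}{p}}\lVert\nabla u\rVert_{p}^{p^*-p}\]
for some constant $C>0$ depending only on $N$, $p$, $q$ and $a$. Since $q\gamma_{q}>p$ and $p^*>p$, both exponents on the right-hand side are positive, so the right-hand side tends to $0$ as $\lVert\nabla u\rVert_{p}\rightarrow 0$; hence $\lVert\nabla u\rVert_{p}$ is bounded away from $0$ on $\mathcal{P}_{a,\mu}$.}
\end{remark}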
\begin{lemma}\label{neighbor2}
There exists $k>0$ sufficiently small such that
\[0<\sup_{u\in A_{k}}E_{\mu}(u)<m(a,\mu),\]
and
\[E_{\mu}(u), P_{\mu}(u)>0\quad\forall u\in A_{k},\]
where $A_{k}=\Big\{u\in S_{a}: \lVert\nabla u\rVert_{p}\leqslant k\Big\}$.
\end{lemma}
\begin{proof}
By the Gagliardo-Nirenberg inequality and Sobolev inequality, we have
\[P_{\mu}(u)\geqslant\lVert\nabla u\rVert_{p}^p-\mu\gamma_{q}C_{N,p,q}^qa^{q(1-\gamma_{q})}\lVert\nabla u\rVert_{p}^{q\gamma_{q}}-S^{-\frac{p^*}{p}}\lVert\nabla u\rVert_{p}^{p^*},\]
and
\[\frac{1}{p}\lVert\nabla u\rVert_{p}^p\geqslant E_{\mu}(u)\geqslant\frac{1}{p}\lVert\nabla u\rVert_{p}^p-\frac{\mu}{q}C_{N,p,q}^qa^{q(1-\gamma_{q})}\lVert\nabla u\rVert_{p}^{q\gamma_{q}}-\frac{1}{p^*}S^{-\frac{p^*}{p}}\lVert\nabla u\rVert_{p}^{p^*}.\]
Thus, we can choose suitable $k>0$ such that the conclusion holds.
\end{proof}
As in Section \ref{critical}, for $c\in\mathbb{R}$ we set
\[E^{c}:=\Big\{u\in S_{a}:E_{\mu}(u)\leqslant c\Big\},\]
\[\Gamma:=\Big\{\gamma=(\alpha,\theta)\in C([0,1],\mathbb{R}
\times S_{a,r}): \gamma(0)\in(0,A_{k}), \gamma(1)\in(0,E^{0})\Big\},\]
\[\sigma(a,\mu):=\inf_{\gamma\in\Gamma}\max_{(\alpha,\theta)\in\gamma([0,1])}\tilde{E}_{\mu}(\alpha,\theta),\quad\text{where}\quad\tilde{E}_{\mu}(s,u):=E_{\mu}(s\star u).\]
\begin{lemma}\label{msigma3}
We have $m(a,\mu)=m_{r}(a,\mu)=\sigma(a,\mu)$.
\end{lemma}
\begin{lemma}\label{energy3}
We have $m(a,\mu)<\frac{1}{N}S^{\frac{N}{p}}$.
\end{lemma}
\begin{proof}
Similar to Lemma \ref{energy2}, we have
\begin{align*}
m(a,\mu)&\leqslant\frac{1}{N}S^{\frac{N}{p}}+O(\varepsilon^{\frac{N-p}{p-1}})-C\lVert u_{\varepsilon}\rVert_{p}^{q(\gamma_{q}-1)}\lVert u_{\varepsilon}\rVert_{q}^q\\
&\leqslant\frac{1}{N}S^{\frac{N}{p}}+O(\varepsilon^{\frac{N-p}{p-1}})-\left\{\begin{array}{ll}
C&N>p^2\\
C\lvert\log\varepsilon\rvert^{\frac{q(\gamma_{q}-1)}{p}}&N=p^2\\
C\varepsilon^{N-\frac{q(p-\gamma_{q})(N-p)}{p(p-1)}}&p^{\frac{3}{2}}\leqslant N<p^2
\end{array}\right.\\
&<\frac{1}{N}S^{\frac{N}{p}},
\end{align*}
by taking $\varepsilon$ sufficiently small.
\end{proof}
Now, we give the proof of Theorem \ref{th3} in the case $p+\frac{p^2}{N}<q<p^*$.
\noindent\textbf{Proof of Theorem \ref{th3}:} Similar to Section \ref{critical}, we can obtain a PS sequence $\{u_{n}\}$ for $E_{\mu}|_{S_{a}}$ at level $\sigma(a,\mu)$ with the property $P_{\mu}(u_{n})\rightarrow 0$ as $n\rightarrow\infty$. Therefore, by Proposition \ref{compactnesslemma} and Lemma \ref{energy3}, one of the cases in Proposition \ref{compactnesslemma} holds. If case (i) occurs, we have $u_{n}\rightharpoonup u$ in $W^{1,p}(\mathbb{R}^N)$ and
\begin{equation}\label{eless03}
E_{\mu}(u)\leqslant m(a,\mu)-\frac{1}{N}S^{\frac{N}{p}}<0.
\end{equation}
Since $u$ solves (\ref{equation}) for some $\lambda<0$, by the Pohozaev identity $P_{\mu}(u)=0$, we can derive that
\[E_{\mu}(u)=\mu\gamma_{q}\Big(\frac{1}{p}-\frac{1}{q\gamma_{q}}\Big)\lVert u\rVert_{q}^q+\frac{1}{N}\lVert u\rVert_{p^*}^{p^*}>0,\]
a contradiction with (\ref{eless03}). This implies that case (ii) in Proposition \ref{compactnesslemma} holds. The rest of the proof is similar to that in Section \ref{critical}.$
\qed$
\section{\textbf{Asymptotic behavior of $u_{a,\mu}^{\pm}$}}
In this section, the dependence on the parameter $a$ will not be considered, so we write $u_{a,\mu}^{\pm}$, $\mathcal{P}_{a,\mu}$, $S_{a}$, $m(a,\mu)$, $\lambda_{a,\mu}$, $\ldots$ as $u_{\mu}^{\pm}$, $\mathcal{P}_{\mu}$, $S$, $m(\mu)$, $\lambda_{\mu}$, $\ldots$.
\subsection{Asymptotic behavior of $u_{\mu}^{+}$ as $\mu\rightarrow 0$}
\
\newline
\indent In this subsection, we always assume that the assumptions of Theorem \ref{th4}(1) hold. In fact, we can prove that $u_{\mu}^{+}\rightarrow 0$ in $D^{1,p}(\mathbb{R}^N)$ as $\mu\rightarrow 0$. Therefore, we need a more accurate estimate of how fast $u_{\mu}^{+}$ approaches $0$.
\begin{lemma}\label{asylammu}
We have
\[-\lambda_{\mu}^{+}\sim\lVert\nabla u_{\mu}^{+}\rVert_{p}^p\sim\mu^{\frac{p}{p-q\gamma_{q}}}.\]
\end{lemma}
\begin{proof}
Since $u_{\mu}^{+}\in\mathcal{P}_{\mu}^{+}$, we have
\begin{equation*}
\lVert\nabla u_{\mu}^{+}\rVert_{p}^p=\mu\gamma_{q}\lVert u_{\mu}^{+}\rVert_{q}^q+\lVert u_{\mu}^{+}\rVert_{p^*}^{p^*},
\end{equation*}
and
\[p\lVert\nabla u_{\mu}^{+}\rVert_{p}^p>\mu q\gamma_{q}^2\lVert u_{\mu}^{+}\rVert_{q}^q+p^*\lVert u_{\mu}^{+}\rVert_{p^*}^{p^*}.\]
It follows from the Gagliardo-Nirenberg inequality that
\begin{equation}\label{asynablaq}
(p^*-p)\lVert\nabla u_{\mu}^{+}\rVert_{p}^p\leqslant\mu\gamma_{q}(p^*-q\gamma_{q})\lVert u_{\mu}^{+}\rVert_{q}^q\leqslant\mu\gamma_{q}(p^*-q\gamma_{q})C_{N,p,q}^qa^{q(1-\gamma_{q})}\lVert\nabla u_{\mu}^{+}\rVert_{p}^{q\gamma_{q}},
\end{equation}
which together with $q\gamma_{q}<p$ for $p<q<p+\frac{p^2}{N}$, implies
\[\lVert\nabla u_{\mu}^{+}\rVert_{p}^p\leqslant C\mu^{\frac{p}{p-q\gamma_{q}}}.\]
Using Gagliardo-Nirenberg inequality again, we know
\begin{equation}\label{asyuq}
\lVert u_{\mu}^{+}\rVert_{q}^q\leqslant C\mu^{\frac{q\gamma_{q}}{p-q\gamma_{q}}}.
\end{equation}
Let $u\in S$ be fixed. Then, there exists a unique $s_{u}(\mu)\in\mathbb{R}$ such that $s_{u}(\mu)\star u\in\mathcal{P}_{\mu}^{+}$, that is
\[e^{ps_{u}(\mu)}\lVert\nabla u\rVert_{p}^p=\mu\gamma_{q}e^{q\gamma_{q}s_{u}(\mu)}\lVert u\rVert_{q}^q+e^{p^*s_{u}(\mu)}\lVert u\rVert_{p^*}^{p^*},\]
and
\[pe^{ps_{u}(\mu)}\lVert\nabla u\rVert_{p}^p>\mu q\gamma_{q}^2e^{q\gamma_{q}s_{u}(\mu)}\lVert u\rVert_{q}^q+p^*e^{p^*s_{u}(\mu)}\lVert u\rVert_{p^*}^{p^*}.\]
It follows that
\[\bigg(\frac{\mu\gamma_{q}\lVert u\rVert_{q}^q}{\lVert\nabla u\rVert_{p}^p}\bigg)^{\frac{1}{p-q\gamma_{q}}}<e^{s_{u}(\mu)}<\bigg(\frac{\mu\gamma_{q}(p^*-q\gamma_{q})\lVert u\rVert_{q}^q}{(p^*-p)\lVert\nabla u\rVert_{p}^p}\bigg)^{\frac{1}{p-q\gamma_{q}}},\]
which implies $e^{s_{u}(\mu)}\sim\mu^{\frac{1}{p-q\gamma_{q}}}$. Thus, by $s_{u}(\mu)\star u\in\mathcal{P}_{\mu}^{+}$ and $q\gamma_{q}<p$ for $p<q<p+\frac{p^2}{N}$, we have
\[E_{\mu}(s_{u}(\mu)\star u)=\Big(\frac{1}{p}-\frac{1}{q\gamma_{q}}\Big)e^{ps_{u}(\mu)}\lVert\nabla u\rVert_{p}^p+\Big(\frac{1}{q\gamma_{q}}-\frac{1}{p^*}\Big)e^{p^*s_{u}(\mu)}\lVert u\rVert_{p^*}^{p^*}\sim-\mu^{\frac{p}{p-q\gamma_{q}}}.\]
Therefore, by $E_{\mu}(s_{u}(\mu)\star u)\geqslant m^{+}(\mu)$ and
\[m^{+}(\mu)=\mu\gamma_{q}\Big(\frac{1}{p}-\frac{1}{q\gamma_{q}}\Big)\lVert u_{\mu}^{+}\rVert_{q}^q+\frac{1}{N}\lVert u_{\mu}^{+}\rVert_{p^*}^{p^*}>\mu\gamma_{q}\Big(\frac{1}{p}-\frac{1}{q\gamma_{q}}\Big)\lVert u_{\mu}^{+}\rVert_{q}^q,\]
we obtain
\begin{equation}\label{asyuqgeq}
\lVert u_{\mu}^{+}\rVert_{q}^q\geqslant C\mu^{\frac{q\gamma_{q}}{p-q\gamma_{q}}}.
\end{equation}
Now, (\ref{asyuq}) and (\ref{asyuqgeq}) imply
\[\lVert u_{\mu}^{+}\rVert_{q}^q\sim\mu^{\frac{q\gamma_{q}}{p-q\gamma_{q}}}.\]
By the Pohozaev identity, we know
\[\lambda_{\mu}^{+}a^p=\mu(\gamma_{q}-1)\lVert u_{\mu}^{+}\rVert_{q}^q.\]
Therefore, by (\ref{asynablaq}),
\[-\lambda_{\mu}^{+}\sim\lVert\nabla u_{\mu}^{+}\rVert_{p}^p\sim\mu^{\frac{p}{p-q\gamma_{q}}}.\]
\end{proof}
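\begin{remark}
{\rm For the reader's convenience, the exponent in Lemma \ref{asylammu} comes from the elementary identity
\[1+\frac{q\gamma_{q}}{p-q\gamma_{q}}=\frac{p}{p-q\gamma_{q}},\]
so that the Pohozaev identity $-\lambda_{\mu}^{+}a^p=\mu(1-\gamma_{q})\lVert u_{\mu}^{+}\rVert_{q}^q$ together with $\lVert u_{\mu}^{+}\rVert_{q}^q\sim\mu^{\frac{q\gamma_{q}}{p-q\gamma_{q}}}$ gives $-\lambda_{\mu}^{+}\sim\mu^{\frac{p}{p-q\gamma_{q}}}$.}
\end{remark}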
\begin{remark}
{\rm It is clear that $u_{\mu}^{+}\rightarrow 0$ in $D^{1,p}(\mathbb{R}^N)$ and $m^{+}(\mu)\rightarrow 0$ as $\mu\rightarrow 0$.}
\end{remark}
\noindent\textbf{Proof of Theorem \ref{th4}(1)} Let
\[w_{\mu}=\mu^{-\frac{N}{p(p-q\gamma_{q})}}u_{\mu}^{+}\Big(\mu^{-\frac{1}{p-q\gamma_{q}}}\cdot\Big)\in S.\]
By Lemma \ref{asylammu},
\[\lVert\nabla w_{\mu}\rVert_{p}^p=\mu^{-\frac{p}{p-q\gamma_{q}}}\lVert\nabla u_{\mu}^{+}\rVert_{p}^p\leqslant C,\]
which implies $\{w_{\mu}\}$ is bounded in $W^{1,p}(\mathbb{R}^N)$. Thus, there exists $w\in W^{1,p}(\mathbb{R}^N)$ such that $w_{\mu}\rightharpoonup w$ in $W^{1,p}(\mathbb{R}^N)$, $w_{\mu}\rightarrow w$ in $L^q(\mathbb{R}^N)$, $w_{\mu}\rightarrow w$ a.e. in $\mathbb{R}^N$.
We know $u_{\mu}^{+}$ solves (\ref{equation}) for some $\lambda_{\mu}^{+}$. Direct calculations show that
\[-\Delta_{p}w_{\mu}=\lambda_{\mu}^{+}\mu^{-\frac{p}{p-q\gamma_{q}}}w_{\mu}^{p-1}+w_{\mu}^{q-1}+\mu^{\frac{p^2}{(N-p)(p-q\gamma_{q})}}w_{\mu}^{p^*-1}.\]
By Lemma \ref{asylammu}, we know $\{\lambda_{\mu}^{+}\mu^{-\frac{p}{p-q\gamma_{q}}}\}$ is bounded, hence there exists $\sigma_{0}>0$ such that, up to a subsequence, $\lambda_{\mu}^{+}\mu^{-\frac{p}{p-q\gamma_{q}}}\rightarrow-\sigma_{0}$ as $\mu\rightarrow 0$. Consider the mapping $T:W^{1,p}(\mathbb{R}^N)\rightarrow\Big(W^{1,p}(\mathbb{R}^N)\Big)^*$ given by
\[\langle Tu,v\rangle=\int_{\mathbb{R}^N}\big(\lvert\nabla u\rvert^{p-2}\nabla u\cdot\nabla v+\sigma_{0}\lvert u\rvert^{p-2}uv\big)dx.\]
Then, similar to \cite[Lemma 3.6]{hdly}, we can prove that $w_{\mu}\rightarrow w$ in $W^{1,p}(\mathbb{R}^N)$. Thus, $w$ satisfies the equation
\[-\Delta_{p}w+\sigma_{0}w^{p-1}=w^{q-1}.\]
Let $\tilde{w}=\sigma_{0}^{\frac{1}{p-q}}w\big(\sigma_{0}^{-\frac{1}{p}}\cdot\big)$. It is not difficult to show that
\[\sigma_{0}^{\frac{1}{p-q}}\mu^{-\frac{N}{p(p-q\gamma_{q})}}u_{\mu}^{+}\big(\sigma_{0}^{-\frac{1}{p}}\mu^{-\frac{1}{p-q\gamma_{q}}}\cdot\big)\rightarrow\tilde{w}\]
in $W^{1,p}(\mathbb{R}^N)$, and $\tilde{w}$ satisfies (\ref{uniqueness}). By the regularity and properties of $u_{\mu}^{+}$, we can derive that $\tilde{w}$ is the ``ground state'' of (\ref{uniqueness}) and hence $\tilde{w}=\phi_{0}$. Now, using $u_{\mu}^{+}\in S$, we can obtain that
\[\sigma_{0}=\bigg(\frac{a^p}{\lVert\phi_{0}\rVert_{p}^p}\bigg)^{\frac{p(q-p)}{p^2-N(q-p)}}.\]
$
\qed$
\subsection{Asymptotic behavior of $u_{\mu}^{-}$ as $\mu\rightarrow 0$}
\
\newline
\indent In this subsection, we always assume that the assumptions of Theorem \ref{th4}(2) or (3) hold. Unlike $u_{\mu}^{+}$, we can prove that $\lVert\nabla u_{\mu}^{-}\rVert_{p}^p,\lVert u_{\mu}^{-}\rVert_{p^*}^{p^*}\rightarrow S^{\frac{N}{p}}$ as $\mu\rightarrow 0$.
\begin{lemma}\label{infmax}
Let $\mu>0$ satisfy {\rm (\ref{muconsub})} for $p<q<p+\frac{p^2}{N}$ and {\rm (\ref{muconcri})} for $q=p+\frac{p^2}{N}$. Then,
\[m^{-}(\mu)=\inf_{u\in S}\max_{s\in\mathbb{R}}E_{\mu}(s\star u).\]
\end{lemma}
\begin{proof}
For every $v\in\mathcal{P}_{\mu}^{-}$, by Lemmas \ref{structure}, \ref{structure2} and \ref{structure3}, we know
\[E_{\mu}(v)=\max_{s\in\mathbb{R}}E_{\mu}(s\star v)\geqslant\inf_{u\in S}\max_{s\in\mathbb{R}}E_{\mu}(s\star u),\]
and hence
\[m^{-}(\mu)\geqslant\inf_{u\in S}\max_{s\in\mathbb{R}}E_{\mu}(s\star u).\]
For every $v\in S_{a}$, by Lemmas \ref{structure}, \ref{structure2} and \ref{structure3}, we know
\[\max_{s\in\mathbb{R}}E_{\mu}(s\star v)=E_{\mu}(t_{v}\star v)\geqslant\inf_{u\in\mathcal{P}_{\mu}^{-}}E_{\mu}(u),\]
and hence
\[\inf_{u\in S}\max_{s\in\mathbb{R}}E_{\mu}(s\star u)\geqslant m^{-}(\mu).\]
\end{proof}
\begin{lemma}\label{noninc}
Let $\tilde{\mu}>0$ satisfy {\rm (\ref{muconsub})} for $p<q<p+\frac{p^2}{N}$ and {\rm (\ref{muconcri})} for $q=p+\frac{p^2}{N}$. Then, the function $\mu\in(0,\tilde{\mu}]\mapsto m^{-}(\mu)\in\mathbb{R}$ is non-increasing.
\end{lemma}
\begin{proof}
Let $0<\mu_{1}<\mu_{2}\leqslant\tilde{\mu}$. By Lemma \ref{infmax}, we have
\begin{align*}
m^{-}(\mu_{2})&=\inf_{u\in S}\max_{s\in\mathbb{R}}E_{\mu_{2}}(s\star u)=\inf_{u\in S}\max_{s\in\mathbb{R}}\Big(E_{\mu_{1}}(s\star u)-\frac{\mu_{2}-\mu_{1}}{q}\lVert u\rVert_{q}^q\Big)\\
&\leqslant\inf_{u\in S}\max_{s\in\mathbb{R}}E_{\mu_{1}}(s\star u)=m^{-}(\mu_{1}).
\end{align*}
\end{proof}
\begin{lemma}\label{asyuneg}
We have $\lVert\nabla u_{\mu}^{-}\rVert_{p}^p, \lVert u_{\mu}^{-}\rVert_{p^*}^{p^*}\rightarrow S^{\frac{N}{p}}$ and $m^{-}(\mu)\rightarrow S^{\frac{N}{p}}/N$ as $\mu\rightarrow 0$.
\end{lemma}
\begin{proof}
Using the fact $m^{-}(\mu)<S^{\frac{N}{p}}/N$ and slightly modifying the proof of Lemma \ref{bounded}, we know $\{u_{\mu}^{-}\}$ is bounded in $W^{1,p}(\mathbb{R}^N)$. Thus, we can assume that $\lVert\nabla u_{\mu}^{-}\rVert_{p}^p\rightarrow l$ as $\mu\rightarrow 0$.
We claim that $l\neq 0$. Suppose by contradiction that $l=0$; then $E_{\mu}(u_{\mu}^{-})\rightarrow 0$ as $\mu\rightarrow 0$. However, by Lemmas \ref{mplus}, \ref{neighbor}, \ref{neighbor2} and \ref{noninc}, we know $E_{\mu}(u_{\mu}^{-})\geqslant m^{-}(\tilde{\mu})>0$ for every $0<\mu\leqslant\tilde{\mu}$, a contradiction.
Now, by $P_{\mu}(u_{\mu}^{-})=0$, we deduce that
\[\lVert u_{\mu}^{-}\rVert_{p^*}^{p^*}=\lVert\nabla u_{\mu}^{-}\rVert_{p}^p-\mu\gamma_{q}\lVert u_{\mu}^{-}\rVert_{q}^q\rightarrow l\]
as $\mu\rightarrow 0$. Therefore, by the Sobolev inequality we have $l\geqslant Sl^{\frac{p}{p^*}}$ which implies $l\geqslant S^{\frac{N}{p}}$. On the other hand, since
\[\frac{l}{N}=\lim_{\mu\rightarrow 0}\bigg(\frac{1}{N}\lVert\nabla u_{\mu}^{-}\rVert_{p}^p-\mu\gamma_{q}\Big(\frac{1}{q\gamma_{q}}-\frac{1}{p^*}\Big)\lVert u_{\mu}^{-}\rVert_{q}^q\bigg)=\lim_{\mu\rightarrow 0}E_{\mu}(u_{\mu}^{-})\leqslant\frac{1}{N}S^{\frac{N}{p}},\]
we obtain that $l=S^{\frac{N}{p}}$. This completes the proof.
\end{proof}
\noindent\textbf{Proof of Theorem \ref{th4}(2)} Lemma \ref{asyuneg} implies $\{u_{\mu}^{-}\}$ is a minimizing sequence of the following minimizing problem:
\[S=\inf_{u\in D^{1,p}(\mathbb{R}^N)\backslash\{0\}}\frac{\lVert\nabla u\rVert_{p}^p}{\lVert u\rVert_{p^*}^{p}}.\]
Since $u_{\mu}^{-}$ is radially symmetric, by \cite[Theorem 4.9]{sm}, there exists $\sigma_{\mu}>0$ such that
\[w_{\mu}=\sigma_{\mu}^{\frac{N-p}{p}}u_{\mu}^{-}(\sigma_{\mu}\cdot)\rightarrow U_{\varepsilon_{0}}\]
in $D^{1,p}(\mathbb{R}^N)$ as $\mu\rightarrow 0$ for some $\varepsilon_{0}>0$. Since $U_{\varepsilon_{0}}\notin S$ for $N\leqslant p^2$ and $\lVert w_{\mu}\rVert_{p}^p=a^p/\sigma_{\mu}^p$, the Fatou lemma implies $\sigma_{\mu}\rightarrow 0$ as $\mu\rightarrow 0$.$
\qed$
The fact that $U_{\varepsilon_{0}}\notin S$ for $N\leqslant p^2$ implies that $w_{\mu}$ does not converge to $U_{\varepsilon_{0}}$ in $W^{1,p}(\mathbb{R}^N)$ as $\mu\rightarrow 0$. However, since $U_{\varepsilon_{0}}\in S$ for $N>p^2$, we will next prove that $u_{\mu}^{-}\rightarrow U_{\varepsilon_{0}}$ in $W^{1,p}(\mathbb{R}^N)$ as $\mu\rightarrow 0$ for $N>p^2$. \\
\noindent\textbf{Proof of Theorem \ref{th4}(3)} Since $\lVert U_{\varepsilon}\rVert_{p}^p=\varepsilon^p\lVert U_{1}\rVert_{p}^p$, we can choose $\varepsilon_{0}>0$ such that $\lVert U_{\varepsilon_{0}}\rVert_{p}^p=a^p$. Hence, there exists a unique $t(\mu)\in\mathbb{R}$ such that $t(\mu)\star U_{\varepsilon_{0}}\in\mathcal{P}_{\mu}^{-}$, that is
\[e^{pt(\mu)}S^{\frac{N}{p}}=\mu\gamma_{q}e^{q\gamma_{q}t(\mu)}\lVert U_{\varepsilon_{0}}\rVert_{q}^q+e^{p^*t(\mu)}S^{\frac{N}{p}}.\]
Clearly, $t(0)=0$. By the implicit function theorem, $t(\mu)$ is of class $C^1$ in a neighborhood of $0$. By direct calculation, we have
\[t'(0)=-\frac{\gamma_{q}\lVert U_{\varepsilon_{0}}\rVert_{q}^q}{(p^*-p)S^{N/p}},\]
which implies
\[t(\mu)=t(0)+t'(0)\mu+o(\mu)=-\frac{\gamma_{q}\lVert U_{\varepsilon_{0}}\rVert_{q}^q}{(p^*-p)S^{N/p}}\mu+o(\mu).\]
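For the reader's convenience, the value of $t'(0)$ follows by implicit differentiation: setting
\[F(t,\mu)=e^{pt}S^{\frac{N}{p}}-\mu\gamma_{q}e^{q\gamma_{q}t}\lVert U_{\varepsilon_{0}}\rVert_{q}^q-e^{p^*t}S^{\frac{N}{p}},\]
we have $F(t(\mu),\mu)=0$, $\partial_{t}F(0,0)=(p-p^*)S^{\frac{N}{p}}$ and $\partial_{\mu}F(0,0)=-\gamma_{q}\lVert U_{\varepsilon_{0}}\rVert_{q}^q$, whence
\[t'(0)=-\frac{\partial_{\mu}F(0,0)}{\partial_{t}F(0,0)}=-\frac{\gamma_{q}\lVert U_{\varepsilon_{0}}\rVert_{q}^q}{(p^*-p)S^{N/p}}.\]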
Consequently,
\begin{align*}
E_{\mu}\big(t(\mu)\star U_{\varepsilon_{0}}\big)&=\Big(\frac{1}{p}-\frac{1}{q\gamma_{q}}\Big)e^{pt(\mu)}\lVert\nabla U_{\varepsilon_{0}}\rVert_{p}^p+\Big(\frac{1}{q\gamma_{q}}-\frac{1}{p^*}\Big)e^{p^*t(\mu)}\lVert U_{\varepsilon_{0}}\rVert_{p^*}^{p^*}\\
&=\Big(\frac{1}{p}-\frac{1}{q\gamma_{q}}\Big)\bigg(1-\frac{p\gamma_{q}\lVert U_{\varepsilon_{0}}\rVert_{q}^q}{(p^*-p)S^{N/p}}\mu+o(\mu)\bigg)S^{\frac{N}{p}}+\\
&\qquad\qquad\Big(\frac{1}{q\gamma_{q}}-\frac{1}{p^*}\Big)\bigg(1-\frac{p^*\gamma_{q}\lVert U_{\varepsilon_{0}}\rVert_{q}^q}{(p^*-p)S^{N/p}}\mu+o(\mu)\bigg)S^{\frac{N}{p}}\\
&=\frac{1}{N}S^{\frac{N}{p}}-\frac{\lVert U_{\varepsilon_{0}}\rVert_{q}^q}{q}\mu+o(\mu).
\end{align*}
By the definition of $m^{-}(\mu)$, we have
\begin{equation}\label{asyumuue}
m^{-}(\mu)=\frac{1}{N}\lVert\nabla u_{\mu}^{-}\rVert_{p}^p-\mu\gamma_{q}\Big(\frac{1}{q\gamma_{q}}-\frac{1}{p^*}\Big)\lVert u_{\mu}^{-}\rVert_{q}^q\leqslant\frac{1}{N}S^{\frac{N}{p}}-\frac{\lVert U_{\varepsilon_{0}}\rVert_{q}^q}{q}\mu+o(\mu).
\end{equation}
By the Sobolev inequality, we have
\begin{align*}
\lVert\nabla u_{\mu}^{-}\rVert_{p}^p&\geqslant S\lVert u_{\mu}^{-}\rVert_{p^*}^p=S\Big(\lVert\nabla u_{\mu}^{-}\rVert_{p}^p-\mu\gamma_{q}\lVert u_{\mu}^{-}\rVert_{q}^q\Big)^{\frac{p}{p^*}}\\
&=S\lVert\nabla u_{\mu}^{-}\rVert_{p}^{\frac{p^2}{p^*}}\bigg(1-\frac{\gamma_{q}\lVert u_{\mu}^{-}\rVert_{q}^q}{S^{N/p}}\mu+o(\mu)\bigg)^{\frac{p}{p^*}}\\
&=S\lVert\nabla u_{\mu}^{-}\rVert_{p}^{\frac{p^2}{p^*}}\bigg(1-\frac{p\gamma_{q}\lVert u_{\mu}^{-}\rVert_{q}^q}{p^*S^{N/p}}\mu+o(\mu)\bigg).
\end{align*}
Thus,
\[\lVert\nabla u_{\mu}^{-}\rVert_{p}^p\geqslant S^{\frac{N}{p}}\bigg(1-\frac{p\gamma_{q}\lVert u_{\mu}^{-}\rVert_{q}^q}{p^*S^{N/p}}\mu+o(\mu)\bigg)^{\frac{N}{p}}=S^{\frac{N}{p}}-\frac{(N-p)\gamma_{q}\lVert u_{\mu}^{-}\rVert_{q}^q}{p}\mu+o(\mu),\]
which together with (\ref{asyumuue}), implies
\begin{equation}\label{umugeque}
\lVert u_{\mu}^{-}\rVert_{q}^q\geqslant\lVert U_{\varepsilon_{0}}\rVert_{q}^q+o(1).
\end{equation}
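For the reader's convenience, the cancellation behind (\ref{umugeque}) rests on the identity $\frac{N-p}{Np}=\frac{1}{p^*}$: inserting the lower bound for $\lVert\nabla u_{\mu}^{-}\rVert_{p}^p$ into (\ref{asyumuue}), the coefficient of $\mu\lVert u_{\mu}^{-}\rVert_{q}^q$ on the left-hand side becomes
\[\frac{(N-p)\gamma_{q}}{Np}+\frac{1}{q}-\frac{\gamma_{q}}{p^*}=\frac{1}{q},\]
which is then compared with the coefficient $\frac{1}{q}$ of $\mu\lVert U_{\varepsilon_{0}}\rVert_{q}^q$ on the right-hand side.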
Since $\{u_{\mu}^{-}\}$ is bounded in $W^{1,p}(\mathbb{R}^N)$, there exists $u\in W^{1,p}(\mathbb{R}^N)$ such that $u_{\mu}^{-}\rightharpoonup u$ in $W^{1,p}(\mathbb{R}^N)$, $u_{\mu}^{-}\rightarrow u$ in $L^q(\mathbb{R}^N)$ and $u_{\mu}^{-}\rightarrow u$ a.e. in $\mathbb{R}^N$ as $\mu\rightarrow 0$. By (\ref{umugeque}), we know $u\neq 0$.
By the Pohozaev identity, we know
\[\lambda_{\mu}^{-}a^p=\mu(\gamma_{q}-1)\lVert u_{\mu}^{-}\rVert_{q}^q\rightarrow 0\]
as $\mu\rightarrow 0$. Thus, by the weak convergence, $u$ is the solution of the equation
\[-\Delta_{p}U=U^{p^*-1},\]
which implies $u=U_{\varepsilon,y}$ for some $(\varepsilon,y)\in\mathbb{R}^{+}\times\mathbb{R}^N$. Since $u_{\mu}^{-}$ is radially symmetric, we have $y=0$ and hence $u=U_{\varepsilon}$. Now, by the Fatou lemma and (\ref{umugeque}), we obtain
\[\lVert U_{\varepsilon}\rVert_{p}^p=\lVert u\rVert_{p}^p\leqslant a^p=\lVert U_{{\varepsilon_{0}}}\rVert_{p}^p,\quad\lVert U_{\varepsilon}\rVert_{q}^q=\lVert u\rVert_{q}^q=\lVert U_{\varepsilon_{0}}\rVert_{q}^q.\]
Therefore, $\varepsilon=\varepsilon_{0}$ and $u=U_{\varepsilon_{0}}$. Finally, since
\[\lVert U_{\varepsilon_{0}}\rVert_{p}^p=\lim_{\mu\rightarrow 0}\lVert u_{\mu}^{-}\rVert_{p}^p=a^p,\quad\text{and}\quad\lVert\nabla U_{\varepsilon_{0}}\rVert_{p}^p=\lim_{\mu\rightarrow 0}\lVert\nabla u_{\mu}^{-}\rVert_{p}^p=S^{\frac{N}{p}},\]
the Br\'{e}zis-Lieb lemma \cite{bhle} implies $u_{\mu}^{-}\rightarrow U_{\varepsilon_{0}}$ in $W^{1,p}(\mathbb{R}^N)$ as $\mu\rightarrow 0$.$
\qed$
\subsection{Asymptotic behavior of $u_{\mu}^{-}$ as $\mu$ goes to its upper bound}
\
\newline
\indent In this subsection, we always assume that the assumptions of Theorem \ref{th5} hold. First, we assume that $q=p+\frac{p^2}{N}$ and prove that $\bar{\alpha}$ is the upper bound of $\mu$ in this case.
\begin{lemma}\label{infty}
We have
\[\sup_{u\in S}\frac{\lVert\nabla u\rVert_{p}^p}{\lVert u\rVert_{q}^q}=+\infty.\]
\end{lemma}
\begin{proof}
By the Sobolev inequality, we just have to prove that
\[\sup_{u\in S}\frac{\lVert u\rVert_{p^*}^p}{\lVert u\rVert_{q}^q}=+\infty.\]
Let
\[u_{k}(x)=\frac{A_{k}\varphi_{k}(x)}{(1+\lvert x\rvert^2)^{a_{k}}}\in S,\]
where $A_{k}>0$ is a constant depending on $k$,
\[a_{k}=\frac{N-p}{2p}-\frac{1}{\log\log(k+2)},\]
and $\varphi_{k}\in C_{c}^{\infty}(\mathbb{R}^N)$ is a radial cut-off function satisfying
\[0\leqslant\varphi_{k}\leqslant 1,\quad\varphi_{k}=1\ \text{in}\ B_{k},\quad\text{and}\quad\varphi_{k}=0\ \text{in}\ B_{k+1}^c.\]
Since $u_{k}\in S$ and
\begin{align*}
\lVert u_{k}\rVert_{p}^p&=A_{k}^p\int_{\mathbb{R}^N}\frac{\varphi_{k}^p(x)}{(1+\lvert x\rvert^2)^{pa_{k}}}dx\sim A_{k}^p\int_{0}^{+\infty}\frac{\varphi_{k}^p(r)r^{N-1}}{(1+r^2)^{pa_{k}}}dr\\
&\sim A_{k}^p\int_{0}^{k}\frac{r^{N-1}}{(1+r^2)^{pa_{k}}}dr\sim\frac{A_{k}^pk^{N-2pa_{k}}}{N-2pa_{k}}\sim A_{k}^pk^{N-2pa_{k}}
\end{align*}
as $k\rightarrow\infty$, we have $A_{k}\sim k^{2a_{k}-N/p}$ as $k\rightarrow\infty$. Therefore,
\[\lVert u_{k}\rVert_{q}^q=A_{k}^q\int_{\mathbb{R}^N}\frac{\varphi_{k}^q(x)}{(1+\lvert x\rvert^2)^{qa_{k}}}dx\sim k^{2qa_{k}-N-p}\int_{0}^{k}\frac{r^{N-1}}{(1+r^2)^{qa_{k}}}dr\sim\frac{1}{k^p},\]
and
\[\lVert u_{k}\rVert_{p^*}^{p^*}=A_{k}^{p^*}\int_{\mathbb{R}^N}\frac{\varphi_{k}^{p^*}(x)}{(1+\lvert x\rvert^2)^{p^*a_{k}}}dx\sim k^{2p^*a_{k}-\frac{Np^*}{p}}\int_{0}^{k}\frac{r^{N-1}}{(1+r^2)^{p^*a_{k}}}dr\sim\frac{1}{(N-2p^*a_{k})k^{p^*}},\]
as $k\rightarrow\infty$, which implies
\[\frac{\lVert u_{k}\rVert_{p^*}^p}{\lVert u_{k}\rVert_{q}^q}\sim\frac{1}{(N-2p^*a_{k})^{p/p^*}}\rightarrow+\infty\]
as $k\rightarrow\infty$.
\end{proof}
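\begin{remark}
{\rm For the reader's convenience, we record the exponent bookkeeping used above. Since $q=p+\frac{p^2}{N}$ in this subsection, we have $\frac{Nq}{p}=N+p$, so $A_{k}^q\sim k^{2qa_{k}-\frac{Nq}{p}}=k^{2qa_{k}-N-p}$; moreover,
\[N-2qa_{k}=\frac{p^2}{N}+\frac{2q}{\log\log(k+2)}>0,\quad\text{and}\quad N-2p^*a_{k}=\frac{2p^*}{\log\log(k+2)}\rightarrow 0^{+}\]
as $k\rightarrow\infty$, which is why $\lVert u_{k}\rVert_{q}^q\sim k^{-p}$ remains comparable while the integral defining $\lVert u_{k}\rVert_{p^*}^{p^*}$ gains the divergent factor $(N-2p^*a_{k})^{-1}$.}
\end{remark}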
\begin{lemma}\label{notemp}
For $\mu\geqslant\bar{\alpha}$, we have $\mathcal{P}_{\mu}=\mathcal{P}_{\mu}^{-}\neq\emptyset$.
\end{lemma}
\begin{proof}
For every $\mu\geqslant\bar{\alpha}$, by Lemma \ref{infty}, there exists $u\in S$ such that $\lVert\nabla u\rVert_{p}^p>\mu\gamma_{q}\lVert u\rVert_{q}^q$. Then,
\[\Psi_{u}^{\mu}(s)=\frac{1}{p}e^{ps}\big(\lVert\nabla u\rVert_{p}^p-\mu\gamma_{q}\lVert u\rVert_{q}^q\big)-\frac{1}{p^*}e^{p^*s}\lVert u\rVert_{p^*}^{p^*}\]
has a critical point $t_{u}\in\mathbb{R}$. By Proposition \ref{Pohozaev}, we know $t_{u}\star u\in\mathcal{P}_{\mu}$, which implies $\mathcal{P}_{\mu}\neq\emptyset$.
If there exists $v\in\mathcal{P}_{\mu}^{0}\cup\mathcal{P}_{\mu}^{+}$, we have
\[\lVert\nabla v\rVert_{p}^p=\mu\gamma_{q}\lVert v\rVert_{q}^q+\lVert v\rVert_{p^*}^{p^*},\quad\text{and}\quad p\lVert\nabla v\rVert_{p}^p\geqslant\mu q\gamma_{q}^2\lVert v\rVert_{q}^q+p^*\lVert v\rVert_{p^*}^{p^*},\]
which implies $\lVert v\rVert_{p^*}^{p^*}\leqslant 0$ (since $q\gamma_{q}=p$), a contradiction since $v\in S$.
\end{proof}
\begin{lemma}\label{asycrinonexi}
For $\mu\geqslant\bar{\alpha}$, we have $m^{-}(\mu)=0$, and $m^{-}(\mu)$ cannot be attained by any $u\in S$.
\end{lemma}
\begin{proof}
For every $\mu\geqslant\bar{\alpha}$, by Lemma \ref{infty}, there exists $\{u_{n}\}\subset S$ such that
\[\frac{\lVert\nabla u_{n}\rVert_{p}^p}{\lVert u_{n}\rVert_{q}^q}>\mu\gamma_{q},\quad\text{and}\quad\frac{\lVert\nabla u_{n}\rVert_{p}^p}{\lVert u_{n}\rVert_{q}^q}\rightarrow\mu\gamma_{q}\]
as $n\rightarrow\infty$. Without loss of generality, replacing $u_{n}$ by $s_{n}^{N/p}u_{n}(s_{n}x)$ for a suitable $s_{n}>0$ if necessary (this scaling preserves $\lVert u_{n}\rVert_{p}$ and, since $q\gamma_{q}=p$, the ratio $\lVert\nabla u_{n}\rVert_{p}^p/\lVert u_{n}\rVert_{q}^q$), we may assume that $\lVert u_{n}\rVert_{q}^q=1$. Then, we have $\lVert\nabla u_{n}\rVert_{p}^p>\mu\gamma_{q}$ and $\lVert\nabla u_{n}\rVert_{p}^p\rightarrow\mu\gamma_{q}$ as $n\rightarrow\infty$.
Now, the function
\[\Psi_{u_{n}}^{\mu}(s)=\frac{1}{p}e^{ps}\big(\lVert\nabla u_{n}\rVert_{p}^p-\mu\gamma_{q}\big)-\frac{1}{p^*}e^{p^*s}\lVert u_{n}\rVert_{p^*}^{p^*}\]
has a critical point $t_{n}\in\mathbb{R}$. Hence, $t_{n}\star u_{n}\in\mathcal{P}_{\mu}^{-}$ and we have
\begin{equation}\label{tn}
\lVert\nabla u_{n}\rVert_{p}^p-\mu\gamma_{q}=e^{(p^*-p)t_{n}}\lVert u_{n}\rVert_{p^*}^{p^*}.
\end{equation}
By the Sobolev inequality $S\lVert u_{n}\rVert_{p^*}^p\leqslant\lVert\nabla u_{n}\rVert_{p}^p$ and the H\"older inequality $\lVert u_{n}\rVert_{q}^q\leqslant\lVert u_{n}\rVert_{p^*}^{q\gamma_{q}}\lVert u_{n}\rVert_{p}^{q(1-\gamma_{q})}$, we obtain $\lVert u_{n}\rVert_{p^*}^{p^*}\sim 1$. Thus, (\ref{tn}) implies $t_{n}\rightarrow-\infty$ as $n\rightarrow\infty$.
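For the reader's convenience, the bound $\lVert u_{n}\rVert_{p^*}^{p^*}\sim 1$ follows from the two-sided estimate
\[a^{-\frac{q(1-\gamma_{q})}{q\gamma_{q}}}\leqslant\lVert u_{n}\rVert_{p^*}\leqslant S^{-\frac{1}{p}}\lVert\nabla u_{n}\rVert_{p},\]
where the left inequality uses $\lVert u_{n}\rVert_{q}^q=1$, $\lVert u_{n}\rVert_{p}=a$ and the H\"older inequality, and the right one uses the Sobolev inequality together with the boundedness of $\lVert\nabla u_{n}\rVert_{p}$.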
By the definition of $m^{-}(\mu)$, we have $m^{-}(\mu)\leqslant E_{\mu}(t_{n}\star u_{n})$, that is
\[m^{-}(\mu)\leqslant\frac{1}{N}e^{pt_{n}}\big(\lVert\nabla u_{n}\rVert_{p}^p-\mu\gamma_{q}\big)\rightarrow 0\]
as $n\rightarrow\infty$, which implies $m^{-}(\mu)\leqslant 0$. For every $u\in\mathcal{P}_{\mu}^{-}$, we have
\[E_{\mu}(u)=\frac{1}{N}\lVert u\rVert_{p^*}^{p^*},\]
which implies $m^{-}(\mu)\geqslant 0$. Therefore, $m^{-}(\mu)=0$.
If there existed $u\in\mathcal{P}_{\mu}^{-}$ such that $E_{\mu}(u)=0$, then we would have $u\equiv 0$, which contradicts $u\in S$.
\end{proof}
Theorem \ref{th3} and Lemma \ref{asycrinonexi} imply that $\bar{\alpha}$ is the upper bound of $\mu$. Therefore, we can study the asymptotic behavior of $u_{\mu}^{-}$ as $\mu\rightarrow\bar{\alpha}$. We give the asymptotic behavior of $\lambda_{\mu}^{-}$ as follows.
\begin{lemma}\label{asycrilam}
For $\mu<\bar{\alpha}$, we have
\[-\lambda_{\mu}^{-}\sim\lVert\nabla u_{\mu}^{-}\rVert_{p}^p\sim(\bar{\alpha}-\mu)^{\frac{N-p}{p}},\]
as $\mu\rightarrow\bar{\alpha}$.
\end{lemma}
\begin{proof}
By the Gagliardo-Nirenberg inequality and Sobolev inequality,
\[\lVert\nabla u_{\mu}^{-}\rVert_{p}^p=\mu\gamma_{q}\lVert u_{\mu}^{-}\rVert_{q}^q+\lVert u_{\mu}^{-}\rVert_{p^*}^{p^*}\leqslant\frac{\mu}{\bar{\alpha}}\lVert\nabla u_{\mu}^{-}\rVert_{p}^p+S^{-\frac{p^*}{p}}\lVert\nabla u_{\mu}^{-}\rVert_{p}^{p^*},\]
which implies
\begin{equation}\label{asycrinab}
\lVert\nabla u_{\mu}^{-}\rVert_{p}^p\geqslant C(\bar{\alpha}-\mu)^{\frac{N-p}{p}},
\end{equation}
as $\mu\rightarrow\bar{\alpha}$.
Let $\varphi=\frac{a}{\lVert\psi_{0}\rVert_{p}}\psi_{0}\in S$, where $\psi_{0}$ is a minimizer of the Gagliardo-Nirenberg inequality. Then, direct calculations show that $t_{\mu}\star\varphi\in\mathcal{P}_{\mu}^{-}$, where
\[e^{t_{\mu}}=\frac{\lVert\psi_{0}\rVert_{p}\lVert\nabla\psi_{0}\rVert_{p}^{p/(p^*-p)}}{a\lVert\psi_{0}\rVert_{p^*}^{p^*/(p^*-p)}}\Big(1-\frac{\mu}{\bar{\alpha}}\Big)^{\frac{1}{p^*-p}}.\]
Therefore,
\[m^{-}(\mu)\leqslant E_{\mu}(t_{\mu}\star\varphi)=\frac{a^p\lVert\nabla\psi_{0}\rVert_{p}^p}{N\lVert\psi_{0}\rVert_{p}^p}\Big(1-\frac{\mu}{\bar{\alpha}}\Big)e^{pt_{\mu}}
=\frac{1}{N}\Big(1-\frac{\mu}{\bar{\alpha}}\Big)^{\frac{N}{p}}\frac{\lVert\nabla\psi_{0}\rVert_{p}^N}{\lVert\psi_{0}\rVert_{p^*}^N}.\]
Since
\[E_{\mu}(u_{\mu}^{-})=\frac{1}{N}\lVert u_{\mu}^{-}\rVert_{p^*}^{p^*},\]
we have
\[\lVert u_{\mu}^{-}\rVert_{p^*}^{p^*}\leqslant C(\bar{\alpha}-\mu)^{\frac{N}{p}}.\]
Now, by the H\"older inequality, we obtain
\[\lVert\nabla u_{\mu}^{-}\rVert_{p}^p=\mu\gamma_{q}\lVert u_{\mu}^{-}\rVert_{q}^q+\lVert u_{\mu}^{-}\rVert_{p^*}^{p^*}\leqslant\frac{\mu}{\bar{\alpha}}\lVert u_{\mu}^{-}\rVert_{p^*}^p+\lVert u_{\mu}^{-}\rVert_{p^*}^{p^*}\leqslant C(\bar{\alpha}-\mu)^{\frac{N-p}{p}},\]
as $\mu\rightarrow\bar{\alpha}$, which together with (\ref{asycrinab}), implies
\[\lVert\nabla u_{\mu}^{-}\rVert_{p}^p\sim(\bar{\alpha}-\mu)^{\frac{N-p}{p}}.\]
Using the Gagliardo-Nirenberg inequality and Sobolev inequality again, we have
\[\lVert u_{\mu}^{-}\rVert_{q}^q\leqslant C\lVert\nabla u_{\mu}^{-}\rVert_{p}^p\leqslant C(\bar{\alpha}-\mu)^{\frac{N-p}{p}},\]
and
\[\mu\gamma_{q}\lVert u_{\mu}^{-}\rVert_{q}^q=\lVert\nabla u_{\mu}^{-}\rVert_{p}^p-\lVert u_{\mu}^{-}\rVert_{p^*}^{p^*}\geqslant\lVert\nabla u_{\mu}^{-}\rVert_{p}^p-S^{-\frac{p^*}{p}}\lVert\nabla u_{\mu}^{-}\rVert_{p}^{p^*}\geqslant C(\bar{\alpha}-\mu)^{\frac{N-p}{p}},\]
as $\mu\rightarrow\bar{\alpha}$. Therefore,
\[\lVert u_{\mu}^{-}\rVert_{q}^q\sim(\bar{\alpha}-\mu)^{\frac{N-p}{p}},\]
as $\mu\rightarrow\bar{\alpha}$. By the Pohozaev identity $\lambda_{\mu}^{-}a^p=\mu(\gamma_{q}-1)\lVert u_{\mu}^{-}\rVert_{q}^q$, we know
\[-\lambda_{\mu}^{-}\sim(\bar{\alpha}-\mu)^{\frac{N-p}{p}},\]
as $\mu\rightarrow\bar{\alpha}$.
\end{proof}
\begin{remark}
{\em We have $u_{\mu}^{-}\rightarrow 0$ in $D^{1,p}(\mathbb{R}^N)$ and $m^{-}(\mu)\rightarrow 0$ as $\mu\rightarrow\bar{\alpha}$.}
\end{remark}
\noindent\textbf{Proof of Theorem \ref{th5}(1)} Let
\[w_{\mu}=s_{\mu}^{\frac{N}{p}}u_{\mu}^{-}(s_{\mu}\cdot)\in S,\]
where $s_{\mu}=(\bar{\alpha}-\mu)^{-(N-p)/p^2}$. By Lemma \ref{asycrilam},
\[\lVert\nabla w_{\mu}\rVert_{p}^p=s_{\mu}^p\lVert\nabla u_{\mu}^{-}\rVert_{p}^p\leqslant C,\]
which implies $\{w_{\mu}\}$ is bounded in $W^{1,p}(\mathbb{R}^N)$. Thus, there exists $w\in W^{1,p}(\mathbb{R}^N)$ such that $w_{\mu}\rightharpoonup w$ in $W^{1,p}(\mathbb{R}^N)$, $w_{\mu}\rightarrow w$ in $L^q(\mathbb{R}^N)$, $w_{\mu}\rightarrow w$ a.e. in $\mathbb{R}^N$.
Direct calculations show that
\[-\Delta_{p}w_{\mu}=\lambda_{\mu}^{-}s_{\mu}^pw_{\mu}^{p-1}+\mu w_{\mu}^{q-1}+s_{\mu}^{-\frac{p^2}{N-p}}w_{\mu}^{p^*-1}.\]
By Lemma \ref{asycrilam}, we know $\{\lambda_{\mu}^{-}s_{\mu}^p\}$ is bounded, hence there exists $\sigma_{0}>0$ such that, up to a subsequence, $\lambda_{\mu}^{-}s_{\mu}^p\rightarrow-\sigma_{0}$ as $\mu\rightarrow\bar{\alpha}$. Similar to the proof of Theorem \ref{th4}(1), we can prove that $w_{\mu}\rightarrow w$ in $W^{1,p}(\mathbb{R}^N)$. Thus, $w$ satisfies the equation
\[-\Delta_{p}w+\sigma_{0}w^{p-1}=\bar{\alpha}w^{q-1}.\]
Let $\tilde{w}=(\bar{\alpha}\sigma_{0})^{\frac{1}{p-q}}w\big(\sigma_{0}^{-\frac{1}{p}}\cdot\big)$. It is not difficult to show that
\[(\bar{\alpha}\sigma_{0})^{\frac{1}{p-q}}s_{\mu}^{\frac{N}{p}}u_{\mu}^{-}\big(\sigma_{0}^{-\frac{1}{p}}s_{\mu}\cdot\big)\rightarrow\tilde{w}\]
in $W^{1,p}(\mathbb{R}^N)$, and $\tilde{w}$ satisfies (\ref{uniqueness}). By the regularity and properties of $u_{\mu}^{-}$, we can derive that $\tilde{w}$ is the ``ground state'' of (\ref{uniqueness}) and hence $\tilde{w}=\phi_{0}$. Now, using $w\in S$, we can obtain that
\[\sigma_{0}=\bar{\alpha}^{\frac{p^2}{N(q-p)-p^2}}\bigg(\frac{a^p}{\lVert\phi_{0}\rVert_{p}^p}\bigg)^{\frac{p(q-p)}{p^2-N(q-p)}}.\]
$
\qed$
Now, we assume that $p+\frac{p^2}{N}<q<p^*$. Obviously, the upper bound of $\mu$ is $+\infty$.
\begin{lemma}\label{asysuplam}
We have
\[-\lambda_{\mu}^{-}\sim\lVert\nabla u_{\mu}^{-}\rVert_{p}^p\sim\mu^{-\frac{p}{q\gamma_{q}-p}},\]
as $\mu\rightarrow+\infty$.
\end{lemma}
\begin{proof}
By the Gagliardo-Nirenberg inequality and Sobolev inequality,
\[\lVert\nabla u_{\mu}^{-}\rVert_{p}^p=\mu\gamma_{q}\lVert u_{\mu}^{-}\rVert_{q}^q+\lVert u_{\mu}^{-}\rVert_{p^*}^{p^*}\leqslant\mu\gamma_{q} a^{q(1-\gamma_{q})}C_{N,p,q}^q\lVert\nabla u_{\mu}^{-}\rVert_{p}^{q\gamma_{q}}+S^{-\frac{p^*}{p}}\lVert\nabla u_{\mu}^{-}\rVert_{p}^{p^*},\]
which implies
\begin{equation}\label{asysupnab}
\lVert\nabla u_{\mu}^{-}\rVert_{p}^p\geqslant C\mu^{-\frac{p}{q\gamma_{q}-p}},
\end{equation}
as $\mu\rightarrow+\infty$.
Let $u\in S$ be fixed. Then, there exists $t_{\mu}\in\mathbb{R}$ such that $t_{\mu}\star u\in\mathcal{P}_{\mu}^{-}$, that is
\[e^{pt_{\mu}}\lVert\nabla u\rVert_{p}^p=\mu\gamma_{q}e^{q\gamma_{q}t_{\mu}}\lVert u\rVert_{q}^q+e^{p^*t_{\mu}}\lVert u\rVert_{p^*}^{p^*},\]
which implies
\[e^{t_{\mu}}\leqslant C\mu^{-\frac{1}{q\gamma_{q}-p}},\]
as $\mu\rightarrow+\infty$. Therefore,
\[E_{\mu}(t_{\mu}\star u)=\frac{1}{N}e^{pt_{\mu}}\lVert\nabla u\rVert_{p}^p-\mu\gamma_{q}\Big(\frac{1}{q\gamma_{q}}-\frac{1}{p^*}\Big)e^{q\gamma_{q}t_{\mu}}\lVert u\rVert_{q}^q\leqslant\frac{1}{N}e^{pt_{\mu}}\lVert\nabla u\rVert_{p}^p\leqslant C\mu^{-\frac{p}{q\gamma_{q}-p}},\]
as $\mu\rightarrow+\infty$. Since $q\gamma_{q}>p$,
\[E_{\mu}(u_{\mu}^{-})=\mu\gamma_{q}\Big(\frac{1}{p}-\frac{1}{q\gamma_{q}}\Big)\lVert u_{\mu}^{-}\rVert_{q}^q+\frac{1}{N}\lVert u_{\mu}^{-}\rVert_{p^*}^{p^*}\geqslant C\big(\mu\gamma_{q}\lVert u_{\mu}^{-}\rVert_{q}^q+\lVert u_{\mu}^{-}\rVert_{p^*}^{p^*}\big)=C\lVert\nabla u_{\mu}^{-}\rVert_{p}^{p},\]
and hence, by $E_{\mu}(u_{\mu}^{-})=m^{-}(\mu)\leqslant E_{\mu}(t_{\mu}\star u)$,
\[\lVert\nabla u_{\mu}^{-}\rVert_{p}^{p}\leqslant CE_{\mu}(t_{\mu}\star u)\leqslant C\mu^{-\frac{p}{q\gamma_{q}-p}},\]
as $\mu\rightarrow+\infty$, which together with (\ref{asysupnab}), implies
\[\lVert\nabla u_{\mu}^{-}\rVert_{p}^p\sim\mu^{-\frac{p}{q\gamma_{q}-p}}\]
as $\mu\rightarrow+\infty$.
Using the Gagliardo-Nirenberg inequality and Sobolev inequality again, we have
\[\lVert u_{\mu}^{-}\rVert_{q}^q\leqslant C\lVert\nabla u_{\mu}^{-}\rVert_{p}^{q\gamma_{q}}\leqslant C\mu^{-\frac{q\gamma_{q}}{q\gamma_{q}-p}},\]
and
\[\mu\gamma_{q}\lVert u_{\mu}^{-}\rVert_{q}^q=\lVert\nabla u_{\mu}^{-}\rVert_{p}^p-\lVert u_{\mu}^{-}\rVert_{p^*}^{p^*}\geqslant\lVert\nabla u_{\mu}^{-}\rVert_{p}^p-S^{-\frac{p^*}{p}}\lVert\nabla u_{\mu}^{-}\rVert_{p}^{p^*}\geqslant C\mu^{-\frac{p}{q\gamma_{q}-p}}\]
as $\mu\rightarrow+\infty$. Therefore,
\[\lVert u_{\mu}^{-}\rVert_{q}^q\sim\mu^{-\frac{q\gamma_{q}}{q\gamma_{q}-p}}\]
as $\mu\rightarrow+\infty$. By the Pohozaev identity $\lambda_{\mu}^{-}a^p=\mu(\gamma_{q}-1)\lVert u_{\mu}^{-}\rVert_{q}^q$, we know
\[-\lambda_{\mu}^{-}\sim\mu^{-\frac{p}{q\gamma_{q}-p}}\]
as $\mu\rightarrow+\infty$.
\end{proof}
\begin{remark}
{\em We have $u_{\mu}^{-}\rightarrow 0$ in $D^{1,p}(\mathbb{R}^N)$ and $m^{-}(\mu)\rightarrow 0$ as $\mu\rightarrow+\infty$.}
\end{remark}
\noindent\textbf{Proof of Theorem \ref{th5}(2)} Let
\[w_{\mu}=\mu^{\frac{N}{p(q\gamma_{q}-p)}}u_{\mu}^{-}\Big(\mu^{\frac{1}{q\gamma_{q}-p}}\cdot\Big)\in S.\]
Similar to the proof of Theorem \ref{th4}(1), we can prove that there exists $w\in W^{1,p}(\mathbb{R}^N)$ such that $w_{\mu}\rightarrow w$ in $W^{1,p}(\mathbb{R}^N)$ and $w$ satisfies
\[-\Delta_{p}w+\sigma_{0}w^{p-1}=w^{q-1}\]
for some $\sigma_{0}>0$.
Let $\tilde{w}=\sigma_{0}^{\frac{1}{p-q}}w\big(\sigma_{0}^{-\frac{1}{p}}\cdot\big)$. Then
\[\sigma_{0}^{\frac{1}{p-q}}\mu^{\frac{N}{q\gamma_{q}-p}}u_{\mu}^{-}\Big(\sigma_{0}^{-\frac{1}{p}}\mu^{\frac{1}{q\gamma_{q}-p}}\cdot\Big)\rightarrow\tilde{w}\]
in $W^{1,p}(\mathbb{R}^N)$ as $\mu\rightarrow+\infty$ and we can prove that $\tilde{w}=\phi_{0}$. Finally, using $w\in S_{a}$, we have
\[\sigma_{0}=\bigg(\frac{a^p}{\lVert\phi_{0}\rVert_{p}^p}\bigg)^{\frac{p(q-p)}{p^2-N(q-p)}}.\]
\qed
\section{\textbf{Nonexistence result}}
In this section, we prove the nonexistence result for $\mu<0$. The proof is not complicated and is a direct application of the result in \cite{asnsb}.\\
\noindent\textbf{Proof of Theorem \ref{th6}} Let $u$ be a critical point of $E_{\mu}|_{S_{a}}$. Then, $u$ solves (\ref{equation}) for some $\lambda\in\mathbb{R}$. By the Pohozaev identity, we have
\[\lambda a^p=\mu(\gamma_{q}-1)\lVert u\rVert_{q}^q,\]
which implies $\lambda<0$, since $\mu<0$, $\gamma_{q}<1$ and $u\in S_{a}$.
Using the Sobolev inequality and the fact that $P_{\mu}(u)=0$, we deduce that
\[\lVert\nabla u\rVert_{p}^p=\mu\gamma_{q}\lVert u\rVert_{q}^q+\lVert u\rVert_{p^*}^{p^*}<\lVert u\rVert_{p^*}^{p^*}\leqslant S^{-\frac{p^*}{p}}\lVert\nabla u\rVert_{p}^{p^*},\]
which implies $\lVert\nabla u\rVert_{p}^p>S^{\frac{N}{p}}$. Therefore,
\[E_{\mu}(u)=\frac{1}{N}\lVert\nabla u\rVert_{p}^p-\mu\gamma_{q}\Big(\frac{1}{q\gamma_{q}}-\frac{1}{p^*}\Big)\lVert u\rVert_{q}^q>\frac{1}{N}S^{\frac{N}{p}}.\]
This completes the proof of (1).
In order to prove (2), we use Corollary 4.2 in \cite{asnsb}. Let $Q=\Delta_{p}$, $\gamma=0$ and $g(u)=\lambda u+\mu u^{q-1}+u^{p^*-1}$. We know
\[\alpha^*=\frac{N-p}{N-1}>0,\]
thus,
\[\sigma^*=p-1+\frac{p-\gamma}{\alpha^*}=\frac{N(p-1)}{N-p}.\]
By (1), since $\lambda<0$, we have
\[\liminf_{s\rightarrow 0^+}s^{-\sigma^*}g(s)=+\infty.\]
Now, by \cite[Corollary 4.2]{asnsb}, (\ref{equation}) has no positive solution for any $\mu<0$. \qed
\section{\textbf{Multiplicity result}}
In this section, we will prove the multiplicity result. Thus, we always assume that the assumptions of Theorem \ref{th7} hold.
Firstly, we introduce the concept of genus. Let $X$ be a Banach space and $A$ be a subset of $X$. The set $A$ is said to be symmetric if $u\in A$ implies $-u\in A$. Denote by $\Sigma$ the family of closed symmetric subsets $A$ of $X$ such that $0\notin A$, that is
\[\Sigma=\{A\subset X\backslash\{0\}:\ A\ \mbox{is closed and symmetric with respect to the origin}\}.\]
For $A\in\Sigma$, define the genus $\gamma(A)$ by
\[\gamma(A)=\min\{k\in\mathbb{N}: \exists\phi\in C(A,\mathbb{R}^k\backslash\{0\})\ \mbox{such that}\ \phi(-x)=-\phi(x),\ \forall x\in A\}.\]
If no such odd map $\phi$ exists, we define $\gamma(A)=+\infty$. For all $k\in\mathbb{N}_{+}$, let
\[\Sigma_{k}=\{A\in\Sigma:\gamma(A)\geqslant k\}.\]
For every $\delta>0$ and $A\in\Sigma$, let
\[A_{\delta}=\{x\in X:\inf\nolimits_{y\in A}\lVert x-y\rVert_{X}\leqslant\delta\}.\]
We have the following lemma concerning the genus.
\begin{lemma}\label{genus}
{\rm\cite[section 7]{rph}} Let $A,B\in\Sigma$. Then the following statements hold.
\noindent{\rm (i)}
\begin{minipage}[t]{\linewidth}
If $\gamma(A)\geqslant 2$, then $A$ contains infinitely many distinct points.
\end{minipage}
\noindent{\rm (ii)}
\begin{minipage}[t]{\linewidth}
If there exists an odd mapping $f\in C(A,B)$, then $\gamma(A)\leqslant\gamma(B)$. In particular, if $f$ is a homeomorphism between $A$ and $B$, then $\gamma(A)=\gamma(B)$.
\end{minipage}
\noindent{\rm (iii)}
\begin{minipage}[t]{\linewidth}
Let $\mathbb{S}^{N-1}$ be the unit sphere in $\mathbb{R}^N$; then $\gamma(\mathbb{S}^{N-1})=N$.
\end{minipage}
\noindent{\rm (iv)}
\begin{minipage}[t]{\linewidth}
If $\gamma(B)<+\infty$, then $\gamma(\overline{A\backslash B})\geqslant\gamma(A)-\gamma(B)$.
\end{minipage}
\noindent{\rm (v)}
\begin{minipage}[t]{\linewidth}
If $A$ is compact, then $\gamma(A)<\infty$ and there exists $\delta>0$ such that $\gamma(A)=\gamma(A_{\delta})$.
\end{minipage}
\end{lemma}
Let $\varphi\in C^{1}(X,\mathbb{R})$ be an even functional and
\[V=\{v\in X:\psi(v)=1\},\]
where $\psi\in C^{2}(X,\mathbb{R})$ and $\psi'(v)\neq 0$ for all $v\in V$. We define the set of critical points of $\varphi|_{V}$ at level $c$ as
\[K^c=\{u\in V:\varphi(u)=c,\varphi|_{V}'(u)=0\}.\]
The following proposition is the key to proving the multiplicity result.
\begin{proposition}\label{minimax}
Assume that $\varphi|_{V}$ is bounded from below and satisfies the $(PS)_{c}$ condition for all $c<0$. Moreover, assume that $\Sigma_{k}\neq\emptyset$ for all $k\in\mathbb{N}_{+}$. Define a sequence of minimax values $-\infty<c_{1}\leqslant c_{2}\leqslant...\leqslant c_{n}\leqslant...<+\infty$ by
\[c_{k}:=\inf_{A\in\Sigma_{k}}\sup_{u\in A}\varphi(u)\quad\forall k\in\mathbb{N}_{+}.\]
Then the following statements hold.
\noindent{\rm (i)}
\begin{minipage}[t]{\linewidth}
If $c_{k}<0$, then $c_{k}$ is a critical value of $\varphi|_{V}$.
\end{minipage}
\noindent{\rm (ii)}
\begin{minipage}[t]{\linewidth}
If there exists $c<0$ such that
\[c_{k}=c_{k+1}=...=c_{k+l}=c,\]
then $\gamma(K^c)\geqslant l+1$. In particular, $\varphi|_{V}$ has infinitely many critical points at level $c$ if $l\geqslant 2$.
\end{minipage}
\end{proposition}
\begin{proof}
The proof is very similar to that of \cite[Theorem 2.1]{jlls}, replacing \cite[Lemma 2.3]{jlls} with the following quantitative deformation lemma.
\end{proof}
For every $c,d\in\mathbb{R}$ with $c<d$, define
\[\varphi|_{V}^{c}:=\{u\in V:\varphi(u)\leqslant c\},\quad\mbox{and}\quad\varphi^{-1}([c,d]):=\{u\in X:c\leqslant\varphi(u)\leqslant d\}.\]
Then we have the following quantitative deformation lemma.
\begin{lemma}
{\rm \cite[Lemma 5.15]{wm}} Let $\varphi\in C^{1}(X,\mathbb{R})$, $W\subset V$, $c\in\mathbb{R}$, and $\varepsilon,\delta>0$ such that
\[\lVert\varphi|_{V}'(u)\rVert\geqslant\frac{8\varepsilon}{\delta}\quad\forall u\in\varphi^{-1}([c-2\varepsilon,c+2\varepsilon])\cap W_{2\delta}.\]
Then there exists $\eta\in C([0,1]\times V,V)$ such that
\noindent{\rm (i)} $\eta(t,u)=u$ if $t=0$ or if $u\notin\varphi^{-1}([c-2\varepsilon,c+2\varepsilon])\cap W_{2\delta}$.
\noindent{\rm (ii)} $\eta(1,\varphi|_{V}^{c+\varepsilon}\cap W)\subset\varphi|_{V}^{c-\varepsilon}$.
\noindent{\rm (iii)} $\varphi(\eta(\cdot,u))$ is non-increasing on $[0,1]$ for all $u\in V$.
\noindent{\rm (iv)} $\eta(t,\cdot)$ is odd on $V$ for all $t\in [0,1]$ if $\varphi$ is even on $V$.
\end{lemma}
Let $\tau\in C^{\infty}(\mathbb{R}^{+},[0,1])$ be a non-increasing function satisfying
\[\tau(t)=1\ \mbox{for}\ t\in[0,R_{0}],\quad\mbox{and}\quad\tau(t)=0\ \mbox{for}\ t\in[R_{1},+\infty),\]
where $R_{0}$ and $R_{1}$ are obtained by Lemma \ref{hfunction}. Define the truncated functional as follows
\[E_{\tau}(u)=\frac{1}{p}\lVert\nabla u\rVert_{p}^p-\frac{\mu}{q}\lVert u\rVert_{q}^q-\frac{1}{p^*}\tau(\lVert\nabla u\rVert_{p})\lVert u\rVert_{p^*}^{p^*}.\]
For $u\in S_{a}$, by the Gagliardo-Nirenberg inequality and the Sobolev inequality, we have
\[E_{\tau}(u)\geqslant\frac{1}{p}\lVert\nabla u\rVert_{p}^p-\frac{\mu}{q}C_{N,p,q}^qa^{q(1-\gamma_{q})}\lVert\nabla u\rVert_{p}^{q\gamma_{q}}-\frac{1}{p^*S^{p^*/p}}\tau(\lVert\nabla u\rVert_{p})\lVert\nabla u\rVert_{p}^{p^*}=\tilde{h}(\lVert\nabla u\rVert_{p}),\]
where
\[\tilde{h}(t)=\frac{1}{p}t^p-\frac{\mu}{q}C_{N,p,q}^qa^{q(1-\gamma_{q})}t^{q\gamma_{q}}-\frac{\tau(t)}{p^*S^{p^*/p}}t^{p^*}.\]
By Lemma \ref{hfunction}, we know that $\tilde{h}(t)<0$ for $t\in(0,R_{0})$ and $\tilde{h}(t)>0$ for $t\in(R_{0},+\infty)$.
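The sign pattern of $\tilde{h}$ can be visualized numerically. The sketch below uses illustrative stand-ins for the Gagliardo-Nirenberg and Sobolev constants and for the radii $R_{0},R_{1}$ of Lemma \ref{hfunction} (the true values depend on $N,p,q,\mu,a$); with $N=4$, $p=3$, $q=4$ one has $p^*=12$ and $q\gamma_{q}=4/3<p$, and the computed $\tilde{h}$ is negative near $0$, positive for large $t$, and changes sign exactly once:

```python
import numpy as np

# Illustrative parameters: N=4, p=3, q=4, so p* = 12 and q*gamma_q = 4/3 < p.
p, pstar, qgamma = 3.0, 12.0, 4.0 / 3.0
C1, C2 = 0.25, 1.0 / 12.0   # stand-ins for (mu/q)C_{N,p,q}^q a^{q(1-gamma_q)} and 1/(p* S^{p*/p})
R0, R1 = 0.9, 1.0           # stand-ins for the radii from Lemma hfunction

def tau(t):
    # non-increasing cutoff: 1 on [0, R0], 0 on [R1, +inf), linear ramp in between
    return np.clip((R1 - t) / (R1 - R0), 0.0, 1.0)

def h_tilde(t):
    return t**p / p - C1 * t**qgamma - C2 * tau(t) * t**pstar

t = np.linspace(0.05, 3.0, 600)
crossings = np.count_nonzero(np.diff(np.sign(h_tilde(t))))
print(h_tilde(0.05) < 0, h_tilde(3.0) > 0, crossings)
```

With these stand-in constants the single sign change sits near $t\approx 0.89$; the qualitative picture (one crossing separating the negative well from the positive tail) is what Lemma \ref{hfunction} provides in general.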
\begin{lemma}\label{infps}
We have
\noindent{\rm (i)}
\begin{minipage}[t]{\linewidth}
$E_{\tau}\in C^{1}(W_{rad}^{1,p}(\mathbb{R}^N),\mathbb{R})$.
\end{minipage}
\noindent{\rm (ii)}
\begin{minipage}[t]{\linewidth}
$E_{\tau}|_{S_{a,r}}$ is coercive and bounded from below. Moreover, if $E_{\tau}(u)\leqslant 0$ on $S_{a,r}$, then $\lVert\nabla u\rVert_{p}\leqslant R_{0}$ and $E_{\tau}(u)=E_{\mu}(u)$.
\end{minipage}
\noindent{\rm (iii)}
\begin{minipage}[t]{\linewidth}
$E_{\tau}|_{S_{a,r}}$ satisfies the $(PS)_{c}$ condition for all $c<0$.
\end{minipage}
\end{lemma}
\begin{proof}
(i) In fact, we just have to prove $I(u)=\tau(\lVert\nabla u\rVert_{p})\in C^{1}(W_{rad}^{1,p}(\mathbb{R}^N),\mathbb{R})$. For every $u\in W_{rad}^{1,p}(\mathbb{R}^N)$, direct calculations show that
\[I'(u)v=\tau'(\lVert\nabla u\rVert_{p})\lVert\nabla u\rVert_{p}^{1-p}\int_{\mathbb{R}^N}\lvert\nabla u\rvert^{p-2}\nabla u\cdot\nabla v\,dx\quad\forall v\in W_{rad}^{1,p}(\mathbb{R}^N).\]
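This is the chain rule for $\tau$ composed with $u\mapsto\lVert\nabla u\rVert_{p}$, whose G\^ateaux derivative in the direction $v$ is $\lVert\nabla u\rVert_{p}^{1-p}\int\lvert\nabla u\rvert^{p-2}\nabla u\cdot\nabla v\,dx$. A finite-difference check on a discretized one-dimensional model (with the smooth stand-in $\tau(t)=e^{-t}$ and $p=3$, both illustrative choices, and the integral replaced by a trapezoid sum):

```python
import numpy as np

p = 3.0
x = np.linspace(0.0, 1.0, 2001)
u = np.sin(np.pi * x) * (1.0 + 0.3 * x)   # arbitrary smooth profile
v = x**2 * (1.0 - x)                      # arbitrary smooth direction

tau  = lambda t: np.exp(-t)               # smooth illustrative stand-in for the cutoff
dtau = lambda t: -np.exp(-t)

def integrate(f):
    # trapezoid rule on the grid x
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))

def grad_norm_p(w):
    return integrate(np.abs(np.gradient(w, x))**p)**(1.0 / p)

def I(w):
    return tau(grad_norm_p(w))

gu, gv = np.gradient(u, x), np.gradient(v, x)
Np = grad_norm_p(u)
# chain rule: I'(u)v = tau'(||grad u||_p) * ||grad u||_p^{1-p} * int |grad u|^{p-2} grad u . grad v
analytic = dtau(Np) * Np**(1.0 - p) * integrate(np.abs(gu)**(p - 2.0) * gu * gv)

h = 1e-6
fd = (I(u + h * v) - I(u - h * v)) / (2.0 * h)
print(abs(fd - analytic) / abs(analytic))
```

The central difference and the chain-rule value agree to high relative accuracy, since both are evaluated on the same discrete functional.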
(ii) For every $u\in S_{a,r}$, since $E_{\tau}(u)\geqslant\tilde{h}(\lVert\nabla u\rVert_{p})$ and $\tilde{h}(t)\rightarrow+\infty$ as $t\rightarrow+\infty$, we know $E_{\tau}|_{S_{a,r}}$ is coercive and bounded from below. If $E_{\tau}(u)\leqslant 0$ on $S_{a,r}$, then, using $E_{\tau}(u)\geqslant\tilde{h}(\lVert\nabla u\rVert_{p})$ again and the fact that $\tilde{h}(t)>0$ on $(R_{0},+\infty)$, we have $\lVert\nabla u\rVert_{p}\leqslant R_{0}$ and $E_{\tau}(u)=E_{\mu}(u)$.
(iii) Let $\{u_{n}\}\subset S_{a,r}$ be a PS sequence for $E_{\tau}|_{S_{a,r}}$ at level $c<0$. Then, by (ii), we know $\lVert\nabla u_{n}\rVert_{p}<R_{0}$ for $n$ sufficiently large and hence $E_{\tau}(u_{n})=E_{\mu}(u_{n})$. Therefore, $\{u_{n}\}$ is also a PS sequence for $E_{\mu}|_{S_{a,r}}$. Since $\{u_{n}\}$ is bounded in $W_{rad}^{1,p}(\mathbb{R}^N)$, similar to the proof of Proposition \ref{compactnesslemma}, by the concentration-compactness lemma, we can prove that one of the cases in Proposition \ref{compactnesslemma} holds. However, similar to the proof of Theorem \ref{th1}, we can prove that case (i) does not occur under the assumption $\mu<\alpha a^{q(\gamma_{q}-1)}$. Thus, $\{u_{n}\}$ converges strongly in $W_{rad}^{1,p}(\mathbb{R}^N)$.
\end{proof}
\begin{lemma}\label{gengeqn}
Given $n\in\mathbb{N}_{+}$, there exists $\varepsilon=\varepsilon(n)$ such that $\gamma(E_{\tau}|_{S_{a,r}}^{-\varepsilon})\geqslant n$.
\end{lemma}
\begin{proof}
The main idea of this proof comes from \cite{gajpai}. For every $n\in\mathbb{N}_{+}$ and $R>1$, let
\[u_{k}(x)=A_{k,R}(1+\lvert x\rvert^2)^k\varphi_{k,R}(x)\in S_{a}\quad \mbox{for}\quad k=1,...,n,\]
where $A_{k,R}$ is a constant and $\varphi_{k,R}\in C_{c}^{\infty}(\mathbb{R}^N)$ is a radial cut-off function satisfying $0\leqslant\varphi_{k,R}\leqslant 1$,
\[\varphi_{k,R}=1\ \mbox{in}\ B_{(2k+\frac{1}{2})R}\backslash B_{(2k-\frac{1}{2})R},\quad\varphi_{k,R}=0\ \mbox{in}\ B_{(2k-1)R}\cup B_{(2k+1)R}^c\quad\mbox{and}\quad\lvert\nabla\varphi_{k,R}\rvert\leqslant\frac{4}{R}.\]
Since $u_{k}\in S_{a}$ and
\begin{align*}
\lVert u_{k}\rVert_{p}^p&=A_{k,R}^p\int_{\mathbb{R}^N}(1+\lvert x\rvert^2)^{kp}\varphi_{k,R}^p(x)dx\sim A_{k,R}^p\int_{0}^{+\infty}(1+r^2)^{kp}r^{N-1}\varphi_{k,R}^p(r)dr\\
&\sim A_{k,R}^p\int_{(2k-\frac{1}{2})R}^{(2k+\frac{1}{2})R}(1+r^2)^{kp}r^{N-1}dr\sim A_{k,R}^pR^{2kp+N}
\end{align*}
as $R\rightarrow+\infty$, we have $A_{k,R}\sim R^{-2k-\frac{N}{p}}$. Moreover,
\[\nabla u_{k}(x)=A_{k,R}(1+\lvert x\rvert^2)^k\nabla\varphi_{k,R}(x)+2kA_{k,R}(1+\lvert x\rvert^2)^{k-1}\varphi_{k,R}(x)x.\]
Then, direct calculations show that
\begin{equation}\label{infnab}
\lVert\nabla u_{k}\rVert_{p}^p\leqslant\frac{C}{R^p}\quad\mbox{for}\quad k=1,...,n
\end{equation}
as $R\rightarrow+\infty$.
It is clear that $u_{1},...,u_{n}$ are linearly independent in $W_{rad}^{1,p}(\mathbb{R}^N)$, since they have pairwise disjoint supports. Thus, we can define an $n$-dimensional subspace of $W_{rad}^{1,p}(\mathbb{R}^N)$ by
\[E_{n}=\mbox{span}\{u_{1},...,u_{n}\}.\]
For every $v_{n}\in S_{a,r}\cap E_{n}$, there exist $a_{1},...,a_{n}\in\mathbb{R}$ such that $v_{n}=a_{1}u_{1}+...+a_{n}u_{n}$. Since $v_{n}\in S_{a,r}$ and, by the disjointness of the supports of the $u_{k}$,
\[\lVert a_{1}u_{1}+...+a_{n}u_{n}\rVert_{p}^p=\lVert a_{1}u_{1}\rVert_{p}^p+...+\lVert a_{n}u_{n}\rVert_{p}^p=(\lvert a_{1}\rvert^p+...+\lvert a_{n}\rvert^p)a^p,\]
we have
\[\lvert a_{1}\rvert^p+...+\lvert a_{n}\rvert^p=1.\]
Therefore, by (\ref{infnab}),
\begin{align}\label{infetau}
E_{\tau}(v_{n})&=\frac{1}{p}\lVert\nabla v_{n}\rVert_{p}^p-\frac{\mu}{q}\lVert v_{n}\rVert_{q}^q-\frac{\tau(\lVert\nabla v_{n}\rVert_{p})}{p^*}\lVert v_{n}\rVert_{p^*}^{p^*}\nonumber\\
&\leqslant\frac{1}{p}(\lvert a_{1}\rvert^p\lVert\nabla u_{1}\rVert_{p}^p+...+\lvert a_{n}\rvert^p\lVert\nabla u_{n}\rVert_{p}^p)-\frac{\mu}{q}\lVert v_{n}\rVert_{q}^q\nonumber\\
&\leqslant\frac{C}{R^p}-\frac{\mu}{q}\lVert v_{n}\rVert_{q}^q
\end{align}
as $R\rightarrow+\infty$. By the H\"older inequality,
\begin{align*}
a^p&=\lVert v_{n}\rVert_{p}^p=\int_{B_{(2n+1)R}}\lvert v_{n}\rvert^pdx\leqslant\Big(\int_{B_{(2n+1)R}}\lvert v_{n}\rvert^qdx\Big)^{\frac{p}{q}}\Big(\int_{B_{(2n+1)R}}dx\Big)^{\frac{q-p}{q}}\\
&=(2n+1)^{\frac{N(q-p)}{q}}\omega_{N}^{\frac{q-p}{q}}R^{\frac{N(q-p)}{q}}\lVert v_{n}\rVert_{q}^p,
\end{align*}
which implies
\begin{equation}\label{infq}
\lVert v_{n}\rVert_{q}^q\geqslant\frac{C}{R^{q\gamma_{q}}}
\end{equation}
as $R\rightarrow+\infty$. Now, combining (\ref{infetau}) and (\ref{infq}), we obtain
\[E_{\tau}(v_{n})\leqslant\frac{C-R^{p-q\gamma_{q}}}{R^p}<-\varepsilon,\]
by taking $R$ sufficiently large and $\varepsilon$ sufficiently small. Since $v_{n}\in S_{a,r}\cap E_{n}$ is arbitrary, this means that $S_{a,r}\cap E_{n}\subset E_{\tau}^{-\varepsilon}$. Since $E_{n}$ is finite-dimensional, all norms on $E_{n}$ are equivalent. Then, by Lemma \ref{genus},
\[\gamma(E_{\tau}^{-\varepsilon})\geqslant\gamma(S_{a,r}\cap E_{n})=\gamma(\mathbb{S}^{n-1})=n.\]
\end{proof}
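The norm additivity used above for $v_{n}=a_{1}u_{1}+...+a_{n}u_{n}$ rests on the $u_{k}$ being supported in pairwise disjoint annuli. A quick one-dimensional numerical check, with polynomial stand-in bumps at illustrative centers and $p=3$:

```python
import numpy as np

p, n = 3.0, 4
x = np.linspace(0.0, 10.0, 20001)
dx = x[1] - x[0]

def bump(center, width=0.4):
    # polynomial bump supported on (center - width, center + width); illustrative stand-in
    s = (x - center) / width
    return np.where(np.abs(s) < 1.0, (1.0 - s**2)**2, 0.0)

# supports (2k - 0.4, 2k + 0.4): pairwise disjoint, mimicking the annuli of the construction
us = [bump(2.0 * k) for k in range(1, n + 1)]
a = np.array([0.5, -1.0, 2.0, 0.25])

# ||sum a_k u_k||_p^p versus sum |a_k|^p ||u_k||_p^p (Riemann sums on the grid)
lhs = np.sum(np.abs(sum(ak * uk for ak, uk in zip(a, us)))**p) * dx
rhs = sum(abs(ak)**p * np.sum(uk**p) * dx for ak, uk in zip(a, us))
print(abs(lhs - rhs))
```

Since each grid point lies in the support of at most one $u_{k}$, the two sides agree up to floating-point rounding.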
Now, we can use Proposition \ref{minimax} to prove our multiplicity result.\\
\noindent\textbf{Proof of Theorem \ref{th7}} Let $\varphi=E_{\tau}$, $X=W_{rad}^{1,p}(\mathbb{R}^N)$. Since $p>2$, by \cite[Proposition 1.12]{wm},
\[\psi(u)=\frac{1}{a^p}\int_{\mathbb{R}^N}\lvert u\rvert^pdx\in C^2(W_{rad}^{1,p},\mathbb{R}),\]
which implies that we can take $V=S_{a,r}$. By Lemma \ref{infps}, $E_{\tau}|_{S_{a,r}}$ is bounded from below and satisfies the $(PS)_{c}$ condition for all $c<0$. Moreover, by Lemma \ref{gengeqn}, $\Sigma_{k}\neq\emptyset$ and $c_{k}<0$ for all $k\in\mathbb{N}_{+}$. Thus, Proposition \ref{minimax} implies that $E_{\tau}|_{S_{a,r}}$ has infinitely many critical points at negative levels. Using Lemma \ref{infps} again, we know $E_{\tau}|_{S_{a,r}}=E_{\mu}|_{S_{a,r}}$ at negative levels. Thus, $E_{\mu}|_{S_{a,r}}$ has infinitely many critical points. Finally, by the principle of symmetric criticality (see \cite[Theorem 2.2]{kjom}), we know $E_{\mu}|_{S_{a}}$ has infinitely many critical points. \qed
\appendix \section{\textbf{Some useful estimates}}\label{A}
For every $\varepsilon>0$, we define
\[u_{\varepsilon}(x)=\varphi(x)U_{\varepsilon}(x)=\varphi(x)d_{N,p}\varepsilon^{\frac{N-p}{p(p-1)}}\big(\varepsilon^{\frac{p}{p-1}}+\lvert x\rvert^{\frac{p}{p-1}}\big)^{\frac{p-N}{p}},\]
where $\varphi\in C_{c}^\infty(\mathbb{R}^N)$ is a radial cut-off function with $\varphi=1$ in $B_{1}$, $\varphi=0$ in $B_{2}^c$, and $\varphi$ radially decreasing. Then, we have the following estimates for $u_{\varepsilon}$.
\begin{lemma}
Let $N\geqslant 2$, $1<p<N$, $1\leqslant r<p^*$. Then, we have
\[\lVert\nabla u_{\varepsilon}\rVert_{p}^p=S^{\frac{N}{p}}+O(\varepsilon^{\frac{N-p}{p-1}}),\quad\lVert u_{\varepsilon}\rVert_{p^*}^{p^*}=S^{\frac{N}{p}}+O(\varepsilon^{\frac{N}{p-1}}),\]
\begin{align*}
\lVert\nabla u_{\varepsilon}\rVert_{r}^{r}\sim\left\{\begin{array}{ll}
\varepsilon^{\frac{N(p-r)}{p}}&\frac{N(p-1)}{N-1}<r<p\\
\varepsilon^{\frac{N(N-p)}{(N-1)p}}\lvert\log\varepsilon\rvert&r=\frac{N(p-1)}{N-1}\\
\varepsilon^{\frac{(N-p)r}{p(p-1)}}&1\leqslant r<\frac{N(p-1)}{N-1},
\end{array}\right.
\end{align*}
and
\begin{align*}
\lVert u_{\varepsilon}\rVert_{r}^r\sim\left\{\begin{array}{ll}
\varepsilon^{N-\frac{(N-p)r}{p}}&\frac{N(p-1)}{N-p}<r<p^*\\
\varepsilon^{\frac{N}{p}}\lvert\log\varepsilon\rvert&r=\frac{N(p-1)}{N-p}\\
\varepsilon^{\frac{(N-p)r}{p(p-1)}}&1\leqslant r<\frac{N(p-1)}{N-p},
\end{array}\right.
\end{align*}
as $\varepsilon\rightarrow 0$.
\end{lemma}
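The exponents above can be spot-checked numerically. For $N=3$, $p=2$ (so $p^*=6$ and $\frac{N(p-1)}{N-p}=3$) and $r=4$, the first regime predicts $\lVert u_{\varepsilon}\rVert_{4}^{4}\sim\varepsilon^{N-\frac{(N-p)r}{p}}=\varepsilon$. The sketch below sets $d_{N,p}=1$ and uses a linear-ramp cut-off (both illustrative simplifications) and estimates the scaling exponent from two values of $\varepsilon$:

```python
import numpy as np

N, p, r = 3, 2.0, 4.0
# predicted exponent in the regime N(p-1)/(N-p) < r < p*: N - (N-p)r/p = 1 here
pred = N - (N - p) * r / p

def norm_r_r(eps):
    s = np.geomspace(1e-8, 2.0, 400001)        # radial grid resolving the scale eps
    phi = np.clip(2.0 - s, 0.0, 1.0)           # illustrative cut-off: 1 on [0,1], 0 beyond 2
    U = eps**0.5 * (eps**2 + s**2)**(-0.5)     # U_eps with d_{N,p} = 1, N=3, p=2
    f = (phi * U)**r * s**(N - 1)
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(s))   # trapezoid rule

e1, e2 = 1e-2, 1e-3
slope = np.log(norm_r_r(e1) / norm_r_r(e2)) / np.log(e1 / e2)
print(pred, slope)
```

The fitted slope matches the predicted exponent $1$ up to the lower-order corrections coming from the cut-off region.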
\subsection*{Acknowledgments}
The authors were supported by National Natural Science Foundation of China 11971392.
\end{document}
"id": "2306.06709.tex",
"language_detection_score": 0.48522311449050903,
"char_num_after_normalized": null,
"contain_at_least_two_stop_words": null,
"ellipsis_line_ratio": null,
"idx": null,
"lines_start_with_bullet_point_ratio": null,
"mean_length_of_alpha_words": null,
"non_alphabetical_char_ratio": null,
"path": null,
"symbols_to_words_ratio": null,
"uppercase_word_ratio": null,
"type": null,
"book_name": null,
"mimetype": null,
"page_index": null,
"page_path": null,
"page_title": null
} | arXiv/math_arXiv_v0.2.jsonl | null | null |
\begin{document}
\title[]{Truncation and Spectral Variation in Banach Algebras}
\author{C. Tour\'{e}, F. Schulz \and R. Brits}
\address{Department of Mathematics, University of Johannesburg, South Africa}
\email{cheickkader89@hotmail.com, francoiss@uj.ac.za, rbrits@uj.ac.za}
\subjclass[2010]{15A60, 46H05, 46H10, 46H15, 47B10}
\keywords{spectrum; truncation; spectral radius; subharmonic; $C^\star$-algebra}
\begin{abstract} Let $a$ and $b$ be elements of a semisimple, complex and unital Banach algebra $A$. Using subharmonic methods, we show that if the spectral containment $\sigma(ax)\subseteq\sigma(bx)$ holds for all $x\in A$, then $ax$ belongs to the bicommutant of $bx$ for all $x\in A$. Given the aforementioned spectral containment, the strong commutation property then allows one to derive, for a variety of scenarios, a precise connection between $a$ and $b$. The current paper gives another perspective on the implications of the above spectral containment, which was also studied, not long ago, by J. Alaminos, M. Bre\v{s}ar \emph{et al.} \end{abstract}
\parindent 0mm
\maketitle
\section{Introduction}
Problems related to spectral variation under the multiplicative and additive operations in Banach algebras have recently attracted the attention of researchers working in the field of abstract spectral theory in Banach algebras. Specifically, the first contributions were made by Bre\v{s}ar and \v{S}penko \cite{specproperties}, and at around the same time, but independently, by Braatvedt and Brits \cite{univari}, and then later by J. Alaminos \emph{et al.} \cite{specproperties2}, and Brits and Schulz \cite{univarisoc}. The aim of this paper is to extend and elaborate on the results obtained in \cite{specproperties2} and \cite{univarisoc}; we shall employ techniques which are distinctly different from the methods used in \cite{specproperties2} and \cite{specproperties}.
Unless otherwise stated, $A$ will be assumed to be a semisimple, complex, and unital Banach algebra with the unit denoted by $\mathbf 1$. The group of invertible elements, and the centre of $A$ are denoted respectively by $G(A)$ and $Z(A)$. We shall use $\sigma_A$ and $\rho_A$ to denote, respectively, the spectrum $$\sigma_A(x):=\{\lambda\in\mathbb C:\lambda\mathbf 1-x\notin G(A)\},$$ and the spectral radius
$$\rho_A(x):=\sup\{|\lambda|:\lambda\in\sigma_A(x)\}$$ of an element $x\in A$ (and agree to omit the subscript if the underlying algebra is clear from the context). Denote further by $\sigma^\prime(x):=\sigma(x)\backslash\{0\}$ the \emph{non-zero spectrum} of $x\in A$. If $X$ is a compact Hausdorff space, then $A=C(X)$ is the Banach algebra of continuous, complex functions on $X$ with the usual pointwise operations and the spectral radius as the norm. If $X$ is a complex Banach space then $A=\mathcal L(X)$ is the Banach algebra of bounded linear operators on $X$ to $X$ (also in the usual sense). The main question of this paper is, loosely stated, the following:
\emph{ Let $A$ be a semisimple, complex, and unital Banach algebra, and suppose that $a,b\in A$ satisfy
\begin{equation}\label{contain}
\sigma(ax)\subseteq\sigma(bx)\mbox{ for all }x\in A.
\end{equation}
What is the relationship between $a$ and $b$?}
Observe, trivially, that $$\eqref{contain}\Rightarrow\sigma^\prime(ax)\subseteq\sigma^\prime(bx)\mbox{ for all }x\in A.$$ Since the non-zero spectrum is cyclic (Jacobson's Lemma, \cite[Lemma 3.1.2]{aupetit1991primer}) it turns out to be advantageous to assume, where applicable, the preceding implication of \eqref{contain} rather than \eqref{contain} itself. For easy reference we label \begin{equation}\label{contain2} \sigma^\prime(ax)\subseteq\sigma^\prime(bx)\mbox{ for all }x\in A, \end{equation} and then note that \eqref{contain2} is equivalent to the statement: $$\sigma^\prime(xa)\subseteq\sigma^\prime(xb)\mbox{ for all }x\in A.$$
Further, if \eqref{contain2} holds then we also have \begin{equation}\label{contain3} \sigma^\prime((b-a)x)\subseteq\sigma^\prime(bx)\mbox{ for all }x\in A. \end{equation} To see this, if $\lambda\not=0$ and $\lambda\notin\sigma^\prime(bx)$, then $$1+bx(\lambda\mathbf1-bx)^{-1}=\lambda(\lambda\mathbf1-bx)^{-1}\in G(A),$$ from which the assumption \eqref{contain2} implies that $\mathbf1+ax(\lambda\mathbf1-bx)^{-1}\in G(A).$ Then $$\lambda\mathbf 1-(b-a)x=(\mathbf1+ax(\lambda\mathbf1-bx)^{-1})(\lambda\mathbf 1-bx)\in G(A).$$
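The factorization used here, $\lambda\mathbf 1-(b-a)x=(\mathbf1+ax(\lambda\mathbf1-bx)^{-1})(\lambda\mathbf 1-bx)$, is a purely algebraic identity, and can be sanity-checked on random matrices (an illustrative choice of algebra; any $\lambda$ outside $\sigma(bx)$ works):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
a, b, x = (rng.standard_normal((n, n)) for _ in range(3))
Id = np.eye(n)

# choose lambda strictly larger than the spectral radius of bx, so lam*Id - b@x is invertible
lam = 1.0 + np.max(np.abs(np.linalg.eigvals(b @ x)))
res = np.linalg.inv(lam * Id - b @ x)

lhs = lam * Id - (b - a) @ x
rhs = (Id + a @ x @ res) @ (lam * Id - b @ x)
print(np.max(np.abs(lhs - rhs)))
```

Expanding the product on the right, the resolvent cancels and only $\lambda\mathbf 1-bx+ax$ remains, which is the left-hand side.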
We give a short list of some of the major known results which are related to \eqref{contain} and \eqref{contain2}:
\begin{itemize}
\item[(a)]{ \cite[Theorem 3.7]{specproperties}: Let $A$ be a prime $C^\star$-algebra and let $a,b\in A$ be such that $\rho(ax)\leq\rho(bx)$ for all $x\in A$. Then there exists $\lambda\in\mathbb C$ such that $|\lambda|\leq1$ and $a=\lambda b$. }
\item[(b)]{\cite[Theorem 2.6]{univari}: If $A$ is an arbitrary semisimple, complex and unital Banach algebra, and $a,b\in A$, then $a=b$ if and only if $\sigma(ax)=\sigma(bx)$ for all $x\in A$ satisfying $\rho(x-\mathbf 1)<1$ (the bound on the spectral radius is sharp). }
\item[(c)]{ \cite[Theorem 2.3]{specproperties2}: If $A$ is a unital $C^\star$-algebra and $a,b\in A$, then $\sigma(ax)\subseteq\sigma(bx)\cup\{0\}$ for every $x\in A$ if and only if there exists a central projection $z\in A^{\prime\prime}$, the second dual of $A$, such that $a=zb$. }
\item[(d)]{ \cite[Theorem 3.6]{specproperties2}: If $A$ is a unital $C^\star$-algebra and $a,b\in A$, then $\rho(ax)\leq\rho(bx)$ for every $x\in A$ if and only if there exists a central projection $z\in A^{\prime\prime}$, the second dual of $A$, such that $a=zb$ and $\|z\|\leq1$. }
\item[(e)]{\cite[Theorem 3.9]{univarisoc}: If $A$ is a semisimple, complex and unital Banach algebra with non-zero socle, denoted $\soc(A)$, then $A$ is prime if and only if for $a,b\in A$ the following are equivalent:
\begin{itemize}
\item[(i)]{$\rho(ax)\leq\rho(bx)$ for all $x\in A$.}
\item[(ii)]{$a=\lambda b$ for some $\lambda\in\mathbb C$ with $|\lambda|\leq1$. }
\end{itemize} In particular, if $A=\mathcal L(X)$, then (i) and (ii) are equivalent. } \end{itemize}
The following simple example serves as the impetus for this paper, and may perhaps indicate a general relationship between $a$ and $b$ when \eqref{contain} is satisfied:
\begin{example}\label{truncation}
Let $A=C(X)$ where $X=[0,1]$. Define $b,a\in A$ by respectively
\begin{equation}
b(t)=\left|t-1/2\right|,\ t\in X\mbox { and } a(t)=\left\{\begin{array}{cc} 0, & t\in[0,1/2)\\
t-1/2, & t\in[1/2,1].\end{array}\right.
\end{equation} \end{example}
Then \eqref{contain} holds, and moreover, from the graphs of $a$ and $b$, it is easy to see that $a$ is a truncation of $b$. The obvious question is whether, for arbitrary Banach algebras, \eqref{contain} or \eqref{contain2} implies that $a$ is, in some suitable sense, a ``truncation" of $b$. Example~\ref{truncation} suggests the following definition:
\begin{definition}[algebraic truncation]
Let $A$ be a complex and unital Banach algebra, and let $a,b\in A$. Then $a$ is said to be an \emph{algebraic truncation} of $b$ if
$$a(b-a)=(b-a)a=0.$$ \end{definition}
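In $C(X)$ the spectrum of an element is its range, so for Example~\ref{truncation} the containment \eqref{contain} says that the range of $ax$ is contained in the range of $bx$ for every $x$, while the definition above reduces to the pointwise identity $a(b-a)=0$. Both are easily checked on a grid (the sample function $x$ below is an arbitrary illustrative choice):

```python
import numpy as np

t = np.arange(0, 1001) / 1000.0        # grid on [0, 1] containing t = 1/2 exactly
b = np.abs(t - 0.5)
a = np.where(t < 0.5, 0.0, t - 0.5)

x = np.cos(3.0 * t) + 0.2              # arbitrary continuous sample function

trunc = np.all(a * (b - a) == 0.0)               # a(b - a) = 0 pointwise
contain = set(a * x).issubset(set(b * x))        # range(ax) inside range(bx) on the grid
print(trunc, contain)
```

The containment is exact on the grid: for $t\geqslant 1/2$ the products $ax$ and $bx$ coincide, and the value $0$ taken by $ax$ on $[0,1/2)$ is attained by $bx$ at $t=1/2$.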
\section{General results} We start with a simple but interesting observation: \begin{proposition}\label{reverse}
If $a,b\in A$ satisfy $ax(b-a)=0$ for all $x\in A$, then $$\sigma^\prime(ax)\subseteq\sigma^\prime(bx)\mbox{ for all }x\in A.$$ \end{proposition} \begin{proof}
We shall first prove that $ax(b-a)=0$ for all $x\in A$ implies that
$(b-a)xa=0$ for all $x\in A$. With the hypothesis, suppose that
$(b-a)x_0a\not=0$ for some $x_0\in A$. Since $A$ is semisimple we can find $y\in A$ such that $\sigma((b-a)x_0ay)\not=\{0\}$. But this gives a contradiction because $((b-a)x_0ay)^2=0$. Towards the spectral containment: If we write $ax+(b-a)x=bx$ then the preceding calculation implies that
$$\sigma^\prime(bx)=\sigma^\prime(ax)\cup \sigma^\prime\left((b-a)x\right),$$ which proves the claim.
\end{proof}
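For a concrete instance of Proposition~\ref{reverse}, take the (illustrative) algebra $A=M_{2}(\mathbb{C})\oplus M_{2}(\mathbb{C})$ with $a=(m,0)$ and $b=(m,m')$: then $ax(b-a)=0$ for every $x\in A$, and the spectral containment can be verified numerically on random $x$:

```python
import numpy as np

rng = np.random.default_rng(1)
m, m2 = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
Z = np.zeros((2, 2))

def elem(u, w):
    # element (u, w) of A = M_2 + M_2, represented as a 4x4 block-diagonal matrix
    return np.block([[u, Z], [Z, w]])

a, b = elem(m, Z), elem(m, m2)

ok = True
for _ in range(50):
    x = elem(rng.standard_normal((2, 2)), rng.standard_normal((2, 2)))
    ok &= np.allclose(a @ x @ (b - a), 0.0)                       # a x (b - a) = 0
    sa = [lam for lam in np.linalg.eigvals(a @ x) if abs(lam) > 1e-9]
    sb = np.linalg.eigvals(b @ x)
    ok &= all(np.min(np.abs(sb - lam)) < 1e-8 for lam in sa)      # sigma'(ax) inside sigma'(bx)
print(ok)
```

Here $\sigma^\prime(ax)$ consists of the non-zero eigenvalues of the first block of $ax$, all of which reappear in the block-diagonal $bx$, exactly as the decomposition $\sigma^\prime(bx)=\sigma^\prime(ax)\cup\sigma^\prime((b-a)x)$ in the proof predicts.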
In light of Proposition~\ref{reverse} the main question can therefore be phrased as whether the condition $ax(b-a)=0$ for all $x\in A$ is the only possible instance which fulfils the spectral containment \eqref{contain2}. Notice further that the condition $ax(b-a)=0$ for all $x\in A$ is equivalent to the condition $ax(b-a)x=0$ for all $x\in A$; in other words, to the condition that $ax$ is an algebraic truncation of $bx$ for all $x\in A$.
Our first main result shows that \eqref{contain2} forces strong commutation properties. The proof is an application of Vesentini's Theorem \cite[Theorem 3.4.7]{aupetit1991primer}:
\begin{theorem}\label{commute}
If $\sigma^\prime(ax)\subseteq\sigma^\prime(bx)\mbox{ for all }x\in A$ then $a$ belongs to the bicommutant of $b$. Hence, for each $x\in A$, $ax$ belongs to the bicommutant of $bx$. \end{theorem}
\begin{proof}
Pick $\alpha\in\mathbb C$ arbitrary but fixed, and suppose $c\in A$ commutes with $b$. By assumption, we have that
$$\sigma^\prime(ae^{\alpha c}xe^{-\alpha c})\subseteq\sigma^\prime(be^{\alpha c}xe^{-\alpha c})\mbox{ for all }x\in A.$$
Jacobson's Lemma, together with the fact that $b$ and $c$ commute, imply that
$$\sigma^\prime(e^{-\alpha c}ae^{\alpha c}x)\subseteq\sigma^\prime(bx)\mbox{ for all }x\in A.\eqno(1)$$
Now fix $x$ and take $\lambda\in\mathbb C$ with $\rho(bx)<|\lambda|$. Then
$\mathbf1+bx(\lambda\mathbf 1-bx)^{-1}\in G(A)$ from which (1) implies that
$\mathbf1+e^{-\alpha c}ae^{\alpha c}x(\lambda-bx)^{-1}\in G(A)$ and so we have
$\mathbf1+(\lambda-bx)^{-1}e^{-\alpha c}ae^{\alpha c}x\in G(A)$. Multiplication by $\lambda\mathbf1-bx$ on the left then implies that
$\lambda\mathbf1-(bx-e^{-\alpha c}ae^{\alpha c}x)\in G(A)$. Since $\rho(bx)<|\lambda|$ we observe that (1) implies $\lambda\mathbf1+e^{-\alpha c}ae^{\alpha c}x\in G(A)$. Arguing as before, factorizing
$$(\lambda\mathbf1+e^{-\alpha c}ae^{\alpha c}x)(\mathbf1-(\lambda\mathbf1+e^{-\alpha c}ae^{\alpha c}x)^{-1}bx),$$ followed by multiplication with $(\lambda\mathbf1+e^{-\alpha c}ae^{\alpha c}x)^{-1}$ on the left, we have that $\mathbf1-(\lambda\mathbf1+e^{-\alpha c}ae^{\alpha c}x)^{-1}bx\in G(A)$ whence $\mathbf1-(\lambda\mathbf1+e^{-\alpha c}ae^{\alpha c}x)^{-1}ax\in G(A)$.
Then
\begin{align*}
(\lambda\mathbf1+e^{-\alpha c}ae^{\alpha c}x)[\mathbf1-(\lambda\mathbf1+e^{-\alpha c}ae^{\alpha c}x)^{-1}ax]&=\lambda\mathbf1-[ax-e^{-\alpha c}ae^{\alpha c}x]\in G(A).
\end{align*}
From this it follows that, for each $\alpha\in\mathbb C$,
$$\rho(ax-e^{-\alpha c}ae^{\alpha c}x)\leq\rho(bx).$$
The subharmonic function
$$\alpha\mapsto\rho(ax-e^{-\alpha c}ae^{\alpha c}x)$$ is therefore bounded on $\mathbb C$, and, by Liouville's Theorem, it must be constant. In particular, with $\alpha=0$, we see that it vanishes everywhere on $\mathbb C$. Define $f:\mathbb C\rightarrow A$ by
\begin{displaymath}
f(\alpha)=\left\{\begin{array}{cc} [ax-e^{-\alpha c}ae^{\alpha c}x]/\alpha & \alpha\not=0\\
(ca-ac)x & \alpha=0.\end{array}\right.
\end{displaymath}
Then $f$ is analytic on $\mathbb C$ and $\rho(f(\alpha))=0$ holds for all $\alpha\not=0$. But this means that $\rho(f(0))=0$. Since $x$ was arbitrary, and $A$ is semisimple, we have that $ca-ac=0$ as required. \end{proof}
\begin{corollary}\label{cyc}
If $\sigma^\prime(ax)\subseteq\sigma^\prime(bx)\mbox{ for all }x\in A$ then $axb=bxa$ for all $x\in A$. \end{corollary} \begin{proof}
Theorem~\ref{commute} says that for each $x\in A$, $axbx=bxax$. Now, given $x\in A$, pick $\lambda\in\mathbb C$ such that $\lambda\mathbf 1-x$ is invertible. Then obviously $a(\lambda\mathbf 1-x)b=b(\lambda\mathbf 1-x)a$. So $axb=bxa$ follows from $ab=ba$. \end{proof}
To obtain one of our main results in this section, Theorem~\ref{group}, we shall need two lemmas. The first is somewhat folklore, but very well-known, and appears scattered throughout the literature on Banach algebras; the second lemma is, as far as the authors could establish, originally due to Ptak \cite{ptakderivations} and has since been ``rediscovered", and applied, in a number of papers related to Banach algebra theory.
\begin{lemma}\label{folklore}
If $A$ is a semisimple, complex and unital Banach algebra, and $p\in A$ is a projection, then $pAp$ is a semisimple Banach algebra with identity element $p$. Moreover
$$\sigma^\prime_{pAp}(z)=\sigma^\prime_{A}(z)\mbox{ holds for each }z\in pAp.$$ \end{lemma}
\begin{lemma}[Ptak]\label{center}
If $A$ is a semisimple, complex and unital Banach algebra, and $z\in A$ satisfies $\rho(zx)\leq \rho(x)$ for all $x \in A$, then $z \in Z(A)$. \end{lemma}
\begin{theorem}\label{group}
Suppose, for some $a,b\in A$, that $\sigma^\prime(ax)\subseteq\sigma^\prime(bx)$ for all $x\in A$.
\begin{itemize}
\item[(a)]{ If $a$ is invertible then $a=b$.}
\item[(b)]{ If $b$ is a projection then $ab=ba=a$ and $a$ is a projection. In particular if $b=\mathbf 1$, then $a$ is a projection belonging to $Z(A)$.}
\item[(c)]{ If $b$ is invertible then $a$ is group invertible. In particular, there exists a projection $p\in Z(A)$ such that $a=bp$.}
\item[(d)]{ If $a$ is a projection then $ab=ba=a$.}
\end{itemize} \end{theorem} \begin{proof}
(a) We shall first prove that if $a=\mathbf 1$ then $b=\mathbf 1$. Observe that, by Corollary~\ref{cyc}, $b\in Z(A)$. To obtain the preliminary result we consider two cases:
(i){ $b\in G(A)$: Obviously $b^{-1}\in Z(A)$ and, moreover, we have that $\sigma(b^{-1})\subseteq \sigma(bb^{-1})=\{1\}$ implies that $\sigma(b^{-1})=\{1\}$. Then $\rho((b^{-1}-\mathbf1)x)\leq \rho(b^{-1}-\mathbf1)\rho(x)=0$ implies (by semisimplicity) that $b^{-1}-\mathbf 1 =0$. Thus $b=\mathbf 1$.}
(ii){ $b\notin G(A)$: Pick $\lambda\in\mathbb C$ such that $\lambda \mathbf{1}-bx,\,\lambda\mathbf{1} \in G(A)$. Then
$$(\lambda \mathbf{1}-bx)(\mathbf 1+(\lambda\mathbf{1}-bx )^{-1}bx)=\lambda \mathbf{1}\in G(A)$$
from which it follows that $\mathbf 1+(\lambda \mathbf{1}-bx )^{-1}bx \in G(A)$ and hence that $\mathbf 1+(\lambda \mathbf{1}-bx )^{-1}x\in G(A)$ (using \eqref{contain2} together with the fact that $(\lambda \mathbf{1}-bx )^{-1}$ commutes with $x$). Thus
$$\lambda \mathbf{1}-(b-\mathbf1)x=(\lambda \mathbf{1}-bx )(\mathbf 1+(\lambda \mathbf{1}-bx )^{-1}x) \in G(A)$$ and so $\sigma^\prime((b-\mathbf1)x) \subseteq \sigma^\prime(bx)$. Since $b\in Z(A)$ we have that $bx$ is not invertible for any $x \in A$, and we infer that $\sigma((b-\mathbf1)x) \subseteq \sigma(bx)$. Taking $x=\mathbf 1$ we deduce $\sigma(b-\mathbf1) \subseteq \sigma(b)$. But since $0 \in \sigma(b)$ this would (inductively) imply that all negative integers belong to $\sigma(b)$, contradicting the compactness of the spectrum. Thus, if $a=\mathbf 1$ then $b=\mathbf 1$.}
To complete the proof of (a) notice that if $a\in G(A)$ then $\eqref{contain2}$ implies that $\sigma^\prime(x)\subseteq\sigma^\prime(ba^{-1}x)$ holds for all $x\in A$. So by the preceding paragraph $\mathbf 1=ba^{-1}$ and the result follows.
(b) From the hypothesis we deduce that $\sigma^\prime(abbxb)\subseteq\sigma^\prime(bbxb)$, and hence that $\sigma^\prime(abbxb)\subseteq\sigma^\prime(bxb)$ for all $x \in A$. Denote by $B$ the semisimple Banach algebra $bAb$. Using Theorem~\ref{commute} it follows that $ab=bab\in B$, and from Lemmas~\ref{folklore} and ~\ref{center} we deduce that $ab$ commutes with every $c\in B$. Since $b$ is a projection in $A$ we have that $\sigma_B(ab)\subseteq \{0,1\}.$ Therefore $\sigma_B(ab(ab-b))=\{0\}$ from which
$$\rho_B(ab(ab-b)c)\leq \rho_B(ab(ab-b))\rho_B(c)=0\mbox{ for each }c\in B.$$ Since $B$ is semisimple we conclude that $ab(ab-b)=0$, and hence that $ab$ is a projection. But the hypothesis also implies that $ \sigma(a(b-\mathbf 1)x) \subseteq \sigma(b(b-\mathbf 1)x)=\{0\} $ whence $a(b-\mathbf 1)=0$ by semisimplicity. Consequently $ab=a$ is a projection.
(c) If $b\in G(A)$ then \eqref{contain2} implies that $\sigma^\prime(ab^{-1}x)\subseteq\sigma^\prime(x)$ holds for all $x\in A$. It follows from part (b) that $ab^{-1}=p$ for some projection $p$ in $Z(A)$.
(d) Observe that $\sigma_{aAa}^{\prime}(a(axa))\subseteq \sigma_{aAa}^{\prime}(aba(axa))$ holds in the semisimple Banach algebra $aAa$ which has identity element $a$. It follows from part (a) that $a=aba$, and hence that $ab=ba=a$.
\end{proof}
The following is immediate from Theorem~\ref{group}:
\begin{corollary}\label{abinvert}
Suppose, for some $a,b\in A$, that $\sigma^\prime(ax)\subseteq\sigma^\prime(bx)$ for all $x\in A$. If either $a$ or $b$ is invertible, or a projection, then $a$ is an algebraic truncation of $b$. \end{corollary}
Theorem~\ref{scalar} settles the case, with respect to \eqref{contain2}, when $a$ and $b$ are linearly dependent. As a corollary we can then deduce a precise algebraic characterization of \eqref{contain2} for some important classes of Banach algebras.
\begin{theorem}\label{scalar}
If $a=\alpha b$ for some $\alpha\in\mathbb C$, and $\sigma^\prime(ax)\subseteq\sigma^\prime(bx)$ for all $x\in A$, then $\alpha=0$ or $\alpha=1$. \end{theorem} \begin{proof}
If $a$ or $b$ is invertible then $a=b$ or $a=0$, and the proof is complete; so we may assume $a,b\not\in G(A)$. If $\sigma(bx)=\{0\}$ for all $x\in G(A)$, then, by semisimplicity, $a=b=0$. We can therefore assume the existence of $x^\prime\in G(A)$ such that $\sigma(bx^\prime)\not=\{0\}$. If we can establish $ax^\prime=0$ or $ax^\prime=bx^\prime$, then $a=0$ or $a=b$. So we can assume, without loss of generality, that $\sigma(b)\not=\{0\}$. To obtain a contradiction we shall assume then that $\alpha\not=0$, $\alpha\not=1$. The first step is to show that $\alpha\in(0,1)\subset\mathbb R$: Via the spectral radius we obtain $|\alpha|\leq1$. But \eqref{contain3} implies that $1-\alpha$ is also a number satisfying
$$\sigma^\prime((1-\alpha)bx)\subseteq\sigma^\prime(bx)\mbox{ for all }x\in A$$ whence $|1-\alpha|\leq1$.
Observe next that if $\alpha$ is a number satisfying $\sigma^\prime(\alpha bx)\subseteq\sigma^\prime(bx)\mbox{ for all }x\in A,$ then so is the number $\alpha^n$ for any $n\in\mathbb N$. Arguing as before we therefore have
$$\sigma^\prime(\alpha^n bx)\subseteq\sigma^\prime(bx)\mbox{ and }\sigma^\prime((1-\alpha^n)bx)\subseteq\sigma^\prime(bx)\mbox{ for all }x\in A, n\in\mathbb N.$$ But if $\alpha\not\in(0,1)$ then, for some $n\in\mathbb N$, $\alpha^n$ would not be in the ``feasible region'' (by rotation) i.e.\ $\alpha^n\notin \{\lambda\in\mathbb C:|\lambda|\leq1\}\cap \{\lambda\in\mathbb C:|1-\lambda|\leq1\}$. So we conclude that $\alpha\in(0,1)$. At this stage we need to make two further observations: Firstly, if $\alpha\in (0,1)$ and $\sigma^\prime(\alpha bx)\subseteq\sigma^\prime(bx)$ holds for all $x\in A$, then, given any $\epsilon>0$, we can (by the preceding argument) find $\beta\in(0,1)$ such that $|\beta|<\epsilon$ and $\sigma^\prime(\beta bx)\subseteq\sigma^\prime(bx)$. Consequently we can also find $\gamma\in(0,1)$ such that $|\gamma-1|<\epsilon$ and $\sigma^\prime(\gamma bx)\subseteq\sigma^\prime(bx)$ for all $x\in A$. Secondly, if
$\sigma^\prime(\alpha bx)\subseteq\sigma^\prime(bx)$ holds for all $x\in A$, and $\xi\in\mathbb C$ is arbitrary then
$\sigma^\prime(\alpha (\xi b)x)\subseteq\sigma^\prime((\xi b)x)$ holds for all $x\in A$. Thus, by the second observation, we can assume without loss of generality that $\sigma^\prime(b)$ contains a complex number on a horizontal line $y=\pm(2k+1)\pi$ (for some $k\in\mathbb N$) in the complex plane. Now, if we take
$$x_0=\sum_{j=0}^\infty\frac{b^j}{(j+1)!}$$ then
$$e^b-1=\sum_{j=1}^\infty \frac{b^j}{j!}=bx_0.$$ Since $\sigma^\prime(b)$ is bounded and contains at least one complex number on the horizontal line $y=\pm(2k+1)\pi$ for some $k\in\mathbb N$, it follows, by the Spectral Mapping Theorem, that there exists an open interval $(t_1,t_2)\subset\mathbb R$ with $t_1<t_2<0$ such that $t_1\in\sigma^\prime(bx_0)$ and $(t_1,t_2)\cap\sigma^\prime(bx_0)=\emptyset$. But now, by the first observation, with $|\gamma-1|$ sufficiently small, we obtain a contradiction with $\sigma^\prime(\gamma bx_0)\subseteq\sigma^\prime(bx_0)$. Thus either $\alpha=0$ or $\alpha=1$.
\end{proof}
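The ``feasible region'' step of the preceding proof is easy to explore numerically: the lens $\{|\lambda|\leq1\}\cap\{|1-\lambda|\leq1\}$ is not closed under taking powers unless the base point lies in $(0,1)$. The following sketch illustrates this; the sample point $\alpha=0.5+0.5i$ and the helper names are our own choices, not from the text.

```python
# Numerical check: the lens {|z| <= 1} ∩ {|1 - z| <= 1} is not closed
# under taking powers unless the base point lies in the real interval (0, 1).

def in_lens(z, tol=1e-12):
    """Membership in {|z| <= 1} ∩ {|1 - z| <= 1}."""
    return abs(z) <= 1 + tol and abs(1 - z) <= 1 + tol

def first_escaping_power(alpha, max_n=1000):
    """Smallest n with alpha**n outside the lens, or None."""
    z = alpha
    for n in range(1, max_n + 1):
        if not in_lens(z):
            return n
        z *= alpha
    return None

alpha = 0.5 + 0.5j            # inside the lens, but not in (0, 1)
assert in_lens(alpha)
print(first_escaping_power(alpha))   # 2: alpha**2 = 0.5i and |1 - 0.5i| > 1
```

Any point of the lens outside $(0,1)$ has modulus or argument that moves some power past the boundary circle $|1-z|=1$, which is exactly the rotation argument used in the proof.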
For the last paragraph of this section we recall that an algebra $A$ is called \emph{prime} if $$axb=0\mbox{ for all }x\in A\Rightarrow a=0\mbox{ or } b=0.$$ Further, a prime Banach algebra $A$ is called \emph{centrally closed} if the \emph{extended centroid} (see Sections 7.4--7.6 in \cite{bresar2014} for the definition and properties) of $A$ is equal to the complex field. We can now establish:
\begin{corollary}\label{LX}
Let $A$ be a centrally closed semisimple prime Banach algebra, and let $a,b\in A$. Then $\sigma^\prime(ax)\subseteq\sigma^\prime(bx)$ for all $x\in A$ if and only if either $a=0$ or $a=b$. \end{corollary}
\begin{proof}
Notice that the condition $axb=bxa$ for all $x\in A$ (Corollary~\ref{cyc}) implies that $a$ and $b$ are linearly dependent over the extended centroid \cite[Lemma 7.41]{bresar2014}. The forward implication is then clear from Theorem~\ref{scalar}. The reverse implication is trivial. \end{proof}
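Corollary~\ref{LX} can be probed numerically in the prime algebra $M_2(\mathbb C)$: since the containment for all $x$ forces $a=0$ or $a=b$, a nontrivial algebraic truncation $a$ of $b$ must fail the containment at some witness $x$. The matrices below are our own illustrative choices.

```python
import numpy as np

# In the prime algebra M_2(C), Corollary LX forces a = 0 or a = b.
# Here a is a nontrivial algebraic truncation of b (a(b-a) = (b-a)a = 0),
# so the containment must fail for some x; the witness below is our choice.
a = np.array([[1.0, 0.0], [0.0, 0.0]])
b = np.array([[1.0, 0.0], [0.0, 2.0]])
assert np.allclose(a @ (b - a), 0) and np.allclose((b - a) @ a, 0)

x = np.array([[1.0, 1.0], [1.0, 1.0]])
ea = np.linalg.eigvals(a @ x)   # eigenvalues {1, 0}: sigma'(ax) = {1}
eb = np.linalg.eigvals(b @ x)   # eigenvalues {3, 0}: sigma'(bx) = {3}
assert any(abs(z - 1) < 1e-9 for z in ea)
assert all(abs(z - 1) > 1e-9 for z in eb)   # 1 is not in sigma'(bx)
print("containment fails at this x")
```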
Corollary~\ref{LX} covers some important classes of Banach algebras: Examples include prime $C^\star$-algebras (see for instance \cite[Proposition 2.2.10]{localmultipliers}) as well as primitive Banach algebras (see \cite[Corollary 4.1.2]{ringsgenid}); specifically, Corollary~\ref{LX} holds for $A=\mathcal L(X)$. Finally we may observe that semisimple prime algebras can be characterized in terms of the spectral containment \eqref{contain2}:
\begin{proposition}\label{prime}
$A$ is prime if and only if, whenever $a\not=0$ and $a\not=b$,
\begin{equation}\label{primecond}
\sigma^\prime(ax)\subseteq\sigma^\prime(bx)\ \forall\ x\in A\ \Rightarrow aA\cap (b-a)A\not=\{0\}.
\end{equation} \end{proposition} \begin{proof}
Suppose $A$ is prime, and there exist $a\not=0$, $b\not=a$ in $A$ satisfying
$\sigma^\prime(ax)\subseteq\sigma^\prime(bx)$ for all $x\in A$ with $aA\cap (b-a)A=\{0\}$. By Theorem~\ref{commute} we then have $(b-a)xa=ax(b-a)$ for all $x\in A$, from which it follows that $ax(b-a)=0$ for all $x\in A$. But, since $A$ is prime, this gives a contradiction. For the converse, suppose $A$ is not prime. Then, since $A$ is semisimple, we can find $a,c\in A$, both nonzero, and $a\not=c$, such that $axc=0$ for all $x\in A$. It then follows from Jacobson's Lemma, together with semisimplicity of $A$, that $cxa=0$ for all $x\in A$. Now set $b=c+a$. Then
$$\sigma^\prime(ax)\subseteq\sigma^\prime(cx)\cup\sigma^\prime(ax)=\sigma^\prime(cx+ax)=\sigma^\prime(bx)$$ holds for each $x\in A$. Suppose
$aA\cap (b-a)A\not=\{0\}$. Then we can find $x,y\in A$ such that $ax=cy\not=0$. Orthogonality implies that, for each $z\in A$, we have $\sigma(axzax)=\{0\}$ from which Jacobson's Lemma, semisimplicity of $A$, and the Spectral Mapping Theorem yield the contradiction that $ax=0$. So we conclude that if $A$ is not prime then \eqref{primecond} does not hold, which completes the proof. \end{proof}
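The orthogonal pair constructed in the second half of the proof can be realized concretely in the non-prime algebra $A=\mathbb C\oplus\mathbb C$, modelled as diagonal $2\times2$ matrices. The specific elements and helper names below are our own illustrative choices.

```python
import numpy as np

# The construction in the proof, made concrete in the non-prime algebra
# A = C ⊕ C (diagonal 2x2 matrices): a = (1,0), c = (0,1), b = a + c.
# Then a x c = 0 for every x, yet sigma'(ax) ⊆ sigma'(bx) with a != 0, a != b.

def spec_contained(m1, m2, tol=1e-8):
    """Is every nonzero eigenvalue of m1 also an eigenvalue of m2?"""
    e1 = [z for z in np.linalg.eigvals(m1) if abs(z) > tol]
    e2 = np.linalg.eigvals(m2)
    return all(min(abs(z - w) for w in e2) < tol for z in e1)

a = np.diag([1.0, 0.0])
c = np.diag([0.0, 1.0])
b = a + c                      # the identity of A

rng = np.random.default_rng(1)
for _ in range(100):
    x = np.diag(rng.standard_normal(2))
    assert np.allclose(a @ x @ c, 0)    # witnesses that A is not prime
    assert spec_contained(a @ x, b @ x)
print("ok")
```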
Admittedly, Proposition~\ref{prime} seems somewhat artificial; the natural conjecture here is that a semisimple Banach algebra $A$ is prime if and only if $$\sigma^\prime(ax)\subseteq\sigma^\prime(bx)\ \forall\ x\in A\ \Rightarrow a=0\mbox{ or }a=b.$$ However, a complete proof of this statement eludes us at this stage, so we leave it as a conjecture.
\section{$C^\star$-algebras}
As an application of the results in the preceding section we consider the case where $A$ is a $C^\star$-algebra. We should point out that Theorem~\ref{Cstar} can also very easily be obtained as a corollary of \cite[Theorem 2.3]{specproperties2}, and it is therefore not really new; the main difference here lies in the arguments leading to the respective results. We first establish the result for the commutative case:
\begin{theorem}\label{C(X)}
Let $a,b\in A=C(X)$ where $X$ is any compact Hausdorff space. Then $\sigma^\prime(ax)\subseteq\sigma^\prime(bx)\mbox{ for all }x\in A$, if and only if $a$ is an algebraic truncation of $b$. \end{theorem}
\begin{proof}
The reverse implication follows directly from Proposition~\ref{reverse}. We prove the forward implication: If either $a$ or $b$ belongs to $G(A)$ then the result follows from Corollary~\ref{abinvert}. So we assume neither is invertible. For $x\in A$ denote by $x^\star \in A$ the function $x^\star(t)=\overline{x(t)}$. Using
\eqref{contain2} we obtain
$$\sigma^\prime(aa^\star x)\subseteq\sigma^\prime(ab^\star x)\subseteq\sigma^\prime(bb^\star x)\mbox{ for all }x\in A.$$
In particular $\sigma(ab^\star )\subset\mathbb R^+$ which means that, for each $t\in X$, $a(t)\overline{b(t)}\in\mathbb R^+$. This shows that $ab^\star =a^\star b$. For each $x\in A$ define $K_x=\{t\in X:x(t)=0\}$. We can write
\begin{equation}
x(t)=\left\{\begin{array}{cc} r_x(t)\left[\cos\theta_x(t)+i\sin\theta_x(t)\right], & x(t)\not=0\\
0, & x(t)=0.\end{array}\right.
\end{equation}
where $r_x$ and $\theta_x$ are the functions
$$r_x:X\backslash K_x\rightarrow \mathbb R^+\mbox{ and }\theta_x:X\backslash K_x\rightarrow (-\pi,\pi].$$
Suppose now for some $t\in X$, $a(t)\not=0$ and $b(t)\not=0$. Then
$$a(t)b^\star (t)=r_a(t)r_b(t)\left[\cos(\theta_a(t)-\theta_b(t))+i\sin(\theta_a(t)-\theta_b(t))\right]\in\mathbb R^+$$ implies that $\sin(\theta_a(t)-\theta_b(t))=0$, from which we deduce
(i) $\theta_a(t)-\theta_b(t)=0$ or (ii) $\theta_a(t)-\theta_b(t)=\pm\pi$. If (ii) holds then we have a contradiction with $\sigma(ab^\star )\subset\mathbb R^+$, and so $\theta_a(t)=\theta_b(t)$. Hence, for each $t\in X$ such that $a(t)\not=0$ and $b(t)\not=0$, there exists a positive real number, say $\alpha(t)$, such that $a(t)=\alpha(t)b(t)$. Formally we have the following:
If $K=K_a\cup K_b$, and if $a$ and $b$ satisfy \eqref{contain2}, then there exists a continuous function $\alpha: X\backslash K\rightarrow (0,\infty)$ such that $$a(t)=\alpha(t)b(t)\mbox{ for all }t\in X\backslash K.$$
We now show that the function $\alpha(t)\equiv1$ for all $t\in X\backslash K$: By $\eqref{contain2}$ we have that $\sigma^\prime\left(ab^\star +iab^\star bb^\star \right)\subseteq \sigma^\prime(bb^\star +i(bb^\star )^2)$. Observe that the set on the right side is contained in the parabola $\Ima(z)=[\Rea(z)]^2$ in the first quadrant of the complex plane.
Using the fact that $a(t)=\alpha(t)b(t)$ for all $t\in X\backslash K$ where $\alpha(t)\in(0,\infty)$ yields the containment
$$\{\alpha(t)bb^\star (t)+i\alpha(t)(bb^\star (t))^2:t\in X\backslash K\}\subseteq \{bb^\star (t)+i(bb^\star (t))^2:t\in X\backslash K\}.$$ So every point of the left-hand set must also lie on the parabola, and for the point corresponding to $t_0\in X\backslash K$ this yields the relation $$[\alpha(t_0)bb^\star (t_0)]^2=\alpha(t_0)(bb^\star (t_0))^2$$
which forces $\alpha(t_0)=1$ hence proving our claim. We are now in a position to show that $a(b-a)=0$. Pick $x\in A$ arbitrary. If $t\in X\backslash K$ then we have that $(abx)(t)=(a^2x)(t)$; if $t\in K_a$ then $(abx)(t)=0=(a^2x)(t)$; if $t\in K_b$ then $(abx)(t)=0$ but we have no information about $(a^2x)(t)$. However, this argument suffices to conclude that $\sigma(abx)\subseteq\sigma(a^2x)$ (for any $x\in A$). On the other hand \eqref{contain2} says $\sigma^\prime(a^2x)\subseteq\sigma^\prime(abx)$ for an arbitrary $x$. Since $A$ is commutative and $a\notin G(A)$ we actually have $\sigma(a^2x)\subseteq\sigma(abx)$ for all $x\in A$. By a result of Braatvedt and Brits \cite[Theorem 2.6]{univari} it follows that $(b-a)a=0$. \end{proof}
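For a finite $X$ the algebra $C(X)$ is just $\mathbb C^n$ with pointwise operations, so the reverse direction of Theorem~\ref{C(X)} can be sanity-checked numerically. The vectors and the truncation pattern below are our own choices.

```python
import numpy as np

# Model C(X) for finite X as C^6 with pointwise multiplication.  If a is
# an algebraic truncation of b, i.e. a(t) ∈ {0, b(t)} for every t
# (equivalently a(b - a) = 0), each nonzero value of a*x is a value of
# b*x, so sigma'(ax) ⊆ sigma'(bx).

def spec_contained(u, v, tol=1e-10):
    """Do the nonzero entries of u all appear among the entries of v?"""
    return all(min(abs(s - t) for t in v) < tol for s in u if abs(s) > tol)

rng = np.random.default_rng(2)
b = rng.standard_normal(6)
mask = np.array([1, 0, 1, 1, 0, 0])   # where a agrees with b
a = mask * b                          # truncation: a(b - a) = 0 exactly

assert np.allclose(a * (b - a), 0)
for _ in range(200):
    x = rng.standard_normal(6)
    assert spec_contained(a * x, b * x)
print("ok")
```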
\begin{theorem}\label{Cstar}
Let $A$ be any unital $C^\star$-algebra. If $a,b\in A$ satisfy $\sigma^\prime(ax)\subseteq\sigma^\prime(bx)\mbox{ for all }x\in A$, then $a$ is an algebraic truncation of $b$. More generally, \eqref{contain2} is equivalent to the condition that $ax$ is an algebraic truncation of $bx$ for all $x\in A$. \end{theorem} \begin{proof}
We shall first establish the result under the assumption that $a$ and $b$ are normal: From Theorem~\ref{commute} we have that $a$ commutes with $b$ and $b^\star $. Observe further that \eqref{contain2} implies $\sigma^\prime(a^\star x)\subseteq\sigma^\prime(b^\star x)\mbox{ for all }x\in A,$ and so, arguing as before, we have that $a^\star $ commutes with $b$ and $b^\star $. Let $B$ be the Banach algebra generated by the set $\{\mathbf 1, a, a^\star , b, b^\star \}$. Then
$$\sigma_B(ax)=\sigma_A(ax)\subseteq\sigma_A(bx)=\sigma_B(bx)\mbox{ for each }x\in B.$$ Since $B$ is a commutative $C^\star $-algebra, it follows from the Gelfand-Naimark Theorem together with Theorem~\ref{C(X)} that $a(b-a)=(b-a)a=0$.
Suppose next that $a,b$ are arbitrary elements of $A$ satisfying \eqref{contain}. Then it follows that
\begin{equation}\label{invol}
\sigma^\prime(xa^\star )\subseteq\sigma^\prime(xb^\star )\ \forall\ x\in A.
\end{equation}
By Corollary~\ref{cyc}, $a^\star xb^\star =b^\star xa^\star $ for all $x\in A$. Observe then that
$$(ba^\star )(ba^\star )^\star =ba^\star ab^\star =ab^\star ba^\star =(ba^\star )^\star (ba^\star )$$ and hence $ba^\star $ is normal (if we then notice that \eqref{invol} implies that $\sigma(ba^\star )$ is on the real line we may deduce that $ba^\star $ is self-adjoint, and in fact $ba^\star \geq0$). Then
\begin{equation*}
\sigma^\prime(xaa^\star )=\sigma^\prime(a^\star xa)\subseteq\sigma^\prime(b^\star xa)=\sigma^\prime(xab^\star )=
\sigma^\prime(xba^\star )\subseteq\sigma^\prime(xbb^\star )
\end{equation*}
holds for all $x\in A$. Applying the result obtained in the first part of the proof to
the self-adjoint elements $aa^\star $ and $ba^\star $ we conclude that $(aa^\star )(ba^\star )=(aa^\star )^2$. Observe also that $(ba^\star )^2=ba^\star ab^\star =aa^\star bb^\star $. Now take the Banach algebra, say $B$, generated by the self-adjoint, mutually commuting, collection $\{\mathbf 1, ba^\star , aa^\star , bb^\star \}$. Again by the Gelfand-Naimark Theorem we may view $B$ as a $C(K)$ for some compact Hausdorff space $K$, whence $B$ is semisimple. If $\chi$ is any character of $B$ then, since
$$(aa^\star )(ba^\star )=(aa^\star )^2\mbox{ and }(ba^\star )^2=(aa^\star )(bb^\star ),$$ it follows that
$\chi(aa^\star )=\chi(ba^\star )$ and, by semisimplicity of $B$, we conclude that $aa^\star =ba^\star $. To complete the proof:
\begin{align*}
\|a(b-a)\|^2&=\|[a(b-a)][a(b-a)]^\star \|=
\|a(b-a)(b-a)^\star a^\star \|\\&=
\|(b-a)a(b-a)^\star a^\star \|=
\|(b-a)(ab^\star -aa^\star )a^\star \|\\&=
\|(b-a)(ba^\star -aa^\star )a^\star \|=0.
\end{align*} \end{proof}
\end{document}
"id": "1808.03108.tex",
"language_detection_score": 0.6423856019973755,
"char_num_after_normalized": null,
"contain_at_least_two_stop_words": null,
"ellipsis_line_ratio": null,
"idx": null,
"lines_start_with_bullet_point_ratio": null,
"mean_length_of_alpha_words": null,
"non_alphabetical_char_ratio": null,
"path": null,
"symbols_to_words_ratio": null,
"uppercase_word_ratio": null,
"type": null,
"book_name": null,
"mimetype": null,
"page_index": null,
"page_path": null,
"page_title": null
} | arXiv/math_arXiv_v0.2.jsonl | null | null |
\begin{document}
\title{Comment on `On the Quantum Theory of Molecules' [J. Chem. Phys. {\bf 137}, 22A544 (2012)]} \author {Brian T. Sutcliffe\thanks{B.T. Sutcliffe, Service de Chimie quantique et Photophysique, Universit\'{e} Libre de Bruxelles, B-1050 Bruxelles, Belgium. {email: bsutclif@ulb.ac.be}},~~ R. Guy Woolley\thanks{R.G.Woolley, School of Science and Technology, Nottingham Trent University, Nottingham NG11 8NS, U.K.}
\thanks{Manuscript accepted by \textit{Journal of Chemical Physics}, December 2013.}} \maketitle
\begin{abstract} In our previous paper [J. Chem. Phys. {\bf 137}, 22A544 (2012)] we argued that the Born-Oppenheimer approximation could not be based on an exact transformation of the molecular Schr\"{o}dinger equation. In this Comment we suggest that the fundamental reason for the approximate nature of the Born-Oppenheimer model is the lack of a complete set of functions for the electronic space, and the need to describe the continuous spectrum using spectral projection. \end{abstract}
After removal of the centre-of-mass variables, the internal molecular Hamiltonian may be written as\cite{SW:12, SW:13} \begin{equation} \mathsf{H}'~=~\mathsf{T}_{Nu}~+~\mathsf{V} \label{Moldec} \end{equation} where $\mathsf{T}_{Nu}$ is the nuclear kinetic energy. In the Born-Huang method $\mathsf{V}$ is identified with the electronic Hamiltonian for stationary nuclei (the `clamped-nuclei' Hamiltonian). Let $X$ stand for the nuclear positions, and $x$ for the electronic positions. Then for fixed $X_f$, Born-Huang write\cite{MB:51,BH:54} \begin{equation} \mathsf{V}~=~\mathsf{H}_o(X_f) \label{cnHam} \end{equation}
where $\mathsf{H}_o(X_f)$ is an operator on the electronic Hilbert space, $L^{2}_{x}$, with `eigenfunctions' \{$|\varphi_j(x,X)\rangle$\}, (where $j$ stands for both discrete and continuous labels), supposed `complete' in $L^{2}_{x}$ so that there is a resolution of the identity, at fixed $X_f$, \begin{equation}
\mathsf{I}(X_f)_x~=~\sum_{j}|\varphi_j(x,X_f)\rangle \langle \varphi_{j}(x,X_f)|. \label{idens} \end{equation}
The Born-Huang method has three steps: firstly use $\mathsf{I}(X_f)_x$ to write an expansion of a molecular wavefunction, $|\Psi_{i}\rangle$, \begin{eqnarray}
|\Psi_{i}\rangle~&=&~\sum_{j}|\varphi_j(x,X_f)\rangle \langle \varphi_{j}(x,X_f)|\Psi_{i}\rangle_x\\~&\equiv&~\sum_{j}a_{ij}(X_f)|\varphi_{j}(x,X_f)\rangle. \label{expans} \end{eqnarray}
Then write $\mathsf{H}'$ as in equation (\ref{Moldec}) with $\mathsf{V}$ identified by (\ref{cnHam}) and apply it to the expansion of molecular states, \begin{equation}
0~=~(\mathsf{H}'-E_i)\sum_{j}|\varphi_{j}(x,X_f)\rangle~a_{ij}(X_f). \label{expansBH} \end{equation}
Finally in the third step form the scalar product in $\mathsf{L}^{2}_{x}$ of this expression with a `basis' vector, $|\varphi_{k}(x,X_f)\rangle$ \begin{equation}
\langle \varphi_{k}(x,X_f)|(\mathsf{H}'-E_i)\sum_{j}|\varphi_{j}(x,X_f)\rangle_x~a_{ij}(X_f)=0. \label{aeqns} \end{equation} This leads in the well-known way to a system of coupled differential equations for the coefficients \{$a_{ij}(X)$\}, which are the unknowns: the nuclear wavefunctions. In the Chemistry and Physics literature this is commonly presented as an exact transformation of the original Schr\"{o}dinger equation \cite{AMG:12,LSC:13}.
In our earlier paper \cite{SW:12} we argued that for the decomposition (\ref{Moldec}) to be valid, the operator $\mathsf{V}$, which we denoted as $\mathsf{H}^{\mbox{elec}}$, must be written as a \textit{direct integral} over clamped-nuclei electronic Hamiltonians, the integral being taken over all nuclear positions; we further suggested that after this correction an expansion analogous to (\ref{expans}) would be problematic. The operator $\mathsf{H}^{\mbox{elec}}$ has purely continuous spectrum extending from some minimum value to $+\infty$ on the real-axis; it has no true eigenvalues, and no normalizable eigenvectors in the Hilbert space $L^2(x,X)$. We emphasize that this description is entirely in agreement with the mathematical physics literature but not with the conventional Born-Huang discussion summarized above since the distinction between $\mathsf{H}_o(X_f)$ and its direct integral $\mathsf{H}^{\mbox{elec}}$ is obvious and fundamental.
The recent work of Jecko \cite{TJ:13} gives a critical mathematical summary of the Born-Oppenheimer approximation including the expansion approach described by (\ref{idens})--(\ref{aeqns}), and emphasizes the fundamental role of spectral projection for operators with continuous spectra. Thus it is very much to the point that there is no `complete set of eigenfunctions' in Hilbert space for the clamped-nuclei Hamiltonian $\mathsf{H}_o(X_f)$ required for the expansion (\ref{expans}) (and likewise for $\mathsf{H}^{\mbox{elec}}$). This is crucial since the Born-Huang method effectively writes the clamped nuclei Hamiltonian in a spectral form using the `eigenfunctions' to provide a resolution of the identity \begin{equation}
\mathsf{H}_o(x,X_f)~=~\sum_{j}E_{j}(X_f)|\varphi_{j}(x,X_f)\rangle \langle \varphi_{j}(x,X_f)| \end{equation} Such a representation of an operator is valid for self-adjoint operators on a finite dimensional Hilbert space, or for operators with \textit{purely discrete spectrum} on an infinite dimensional Hilbert space; thus it would be valid for a system of light and heavy coupled oscillators which is often used as a model for testing the Born-Oppenheimer approximation. But it is not generally valid for unbounded operators with continuous spectra\cite{JMJ:72, TK:80}, as is the case here.
One might hope that $\mathsf{V}$ (either choice) has `generalized eigenfunctions' satisfying \begin{equation}
\mathsf{V}|\varphi_{E,n}\rangle~=~E|\varphi_{E,n} \rangle \end{equation} where \begin{eqnarray}
\langle \varphi_{E'',n} | \mathsf{V}|\varphi_{E',m}\rangle~&=&~E'~\delta(E''~-~E')~\delta_{nm}\nonumber\\
\langle \varphi_{E'',n}|\varphi_{E',m}\rangle~&=&~\delta(E''~-~E')~\delta_{nm} \label{4} \end{eqnarray} However nothing is known about the properties of such `generalized eigenfunctions' for Hamiltonians with Coulombic interactions other than for the special case of the 2-body system (H atom); even their very \textit{existence} does not seem to be demonstrated for $\mathsf{V}$ derived from the general $N$-particle Coulomb Hamiltonian\cite{TJ:13}. Thus there is no way to check an expansion of the molecular wavefunction based on a set of such `generalized eigenfunctions' (for either choice of $\mathsf{V}$), and such an approach cannot be described rationally as `exact'.
To deal correctly with the continuous spectrum we need the following abstract result; to functions of a self-adjoint operator $\mathsf{L}$ one can associate a projection operator valued function (measure) $\mathsf{P}_{\lambda}$ through the spectral relation \cite{JMJ:72,TK:80,RS1:75}
\begin{equation}
f(\mathsf{L}) = \int\limits_{-\infty}^{+\infty} f(\lambda)~ d\mathsf{P}_{\lambda},
\end{equation} where the spectral projector $\mathsf{P}_{\lambda}$ is given formally by the Stone formula, which relates functions of a self-adjoint operator to the discontinuity of its resolvent across the real axis \cite{RS1:75}. The spectral theorem guarantees that the spectral family $\mathsf{P}_{\lambda}$ is orthogonal, `diagonalizes' the operator $\mathsf{L}$ and provides a resolution of the identity. Spectral projection replaces the notion of `complete set of states' which is valid in general only for operators with purely discrete spectra; there is no general practical method for obtaining a family of spectral projectors and we do not propose this as an alternative route for molecular theory. These formal results do however justify our earlier claim that it is not possible to reduce the molecular Schr\"{o}dinger equation to a system of coupled differential equations of classical type for nuclear motion on PES, as suggested by Born\cite{MB:51}, without a further approximation of an essentially empirical character\cite{SW:12}.
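The abstract spectral relation becomes concrete in finite dimensions, where the projection-valued measure reduces to a finite sum of orthogonal eigenprojectors. The following sketch (the $4\times4$ Hermitian matrix is our own illustrative choice) verifies the resolution of the identity and the functional calculus numerically.

```python
import numpy as np

# Finite-dimensional analogue of f(L) = ∫ f(λ) dP_λ: for a Hermitian L
# with L = Σ λ_i P_i, the rank-one eigenprojectors P_i resolve the
# identity and give the functional calculus as a finite sum.
rng = np.random.default_rng(0)
m = rng.standard_normal((4, 4))
L = (m + m.T) / 2                     # self-adjoint

lam, V = np.linalg.eigh(L)
P = [np.outer(V[:, i], V[:, i]) for i in range(4)]

assert np.allclose(sum(P), np.eye(4))                        # Σ P_i = 1
assert np.allclose(sum(l * p for l, p in zip(lam, P)), L)    # Σ λ_i P_i = L

f = np.exp
fL = V @ np.diag(f(lam)) @ V.T        # f(L) via the spectral theorem
assert np.allclose(sum(f(l) * p for l, p in zip(lam, P)), fL)
print("ok")
```

For an operator with continuous spectrum, as in the molecular case, the sum must be replaced by the Stieltjes integral above; no finite or countable family of normalizable eigenvectors is available.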
The crucial point emphasized by Jecko\cite{TJ:13} is that if one gives up the goal of an exact computation of the molecular wavefunction $|\Psi\rangle$ one still has the opportunity to use ideas about PES to seek a \textit{good approximation} to it. The essential step is to fix the total energy of the molecular system at the outset and then construct an effective Hamiltonian depending on this energy working with the purely discrete part of the spectrum. The accuracy of the Born-Oppenheimer approximation is then defined by the quality of the replacement of the true Hamiltonian by the effective one (the `adiabatic Hamiltonian') and is measured in terms of the usual small parameter related to the electron/nucleon mass ratio. The procedure turns out to be universal in the sense that it applies to bound states, resonances, non-resonant scattering and time evolution, and it avoids the construction of the family of spectral projectors required for an exact description of the continuous spectrum. Mathematically it is an example of \textit{semiclassical analysis}\cite{TJ:13}.
The clamped-nuclei Hamiltonian and the associated notion of PES are crucial to the method; on the other hand PES make no appearance in the formally exact description based on spectral projection, so it is hard to claim any fundamental role for Potential Energy Surfaces in the quantum mechanics of molecules. This is not, of course, to ignore the important role of PES in approximate calculations and their role in interpreting experimental results. It is simply to place PES properly in relation to quantum theory.
We are aware that the above considerations have a much wider range of applicability in Chemistry and Physics than simply the Born-Oppenheimer approximation, for example the conventional perturbation theoretic accounts of the electrical, magnetic and optical properties of substances commonly involve sums over `complete sets of molecular states'. It seems likely that the strategies to avoid spectral projection (similar to that described by Jecko\cite{TJ:13}) to be found in the book by Kato\cite{TK:80} are relevant here.
\end{document}
"id": "1401.0873.tex",
"language_detection_score": 0.8187392354011536,
"char_num_after_normalized": null,
"contain_at_least_two_stop_words": null,
"ellipsis_line_ratio": null,
"idx": null,
"lines_start_with_bullet_point_ratio": null,
"mean_length_of_alpha_words": null,
"non_alphabetical_char_ratio": null,
"path": null,
"symbols_to_words_ratio": null,
"uppercase_word_ratio": null,
"type": null,
"book_name": null,
"mimetype": null,
"page_index": null,
"page_path": null,
"page_title": null
} | arXiv/math_arXiv_v0.2.jsonl | null | null |
\begin{document}
\title{A corrected AIC for the selection of seemingly unrelated regressions models} \author{J.\ L. van Velsen} \affiliation{Dutch Ministry of Justice, Research and Documentation Centre (WODC), P.\ O.\ Box 20301, 2500 EH The Hague, The Netherlands} \email{j.l.van.velsen@minjus.nl} \begin{abstract} A bias correction to Akaike's information criterion (AIC) is derived for seemingly unrelated regressions models. The correction is of particular use when the sample size is not much larger than the number of fitted parameters. A small-sample simulation study indicates that the bias-corrected AIC (AICc) provides better model choices than other model selection criteria. \end{abstract}
\maketitle
\section{Introduction}
The selection of a model from a set of fitted candidate models requires objective data-driven criteria. One such criterion often used in practice is Akaike's information criterion (AIC), which was designed to be an asymptotically unbiased estimator of the expected Kullback-Leibler information of a fitted model \cite{akaike73}. In finite samples, AIC has a non-vanishing bias that depends on the number of fitted parameters. This limits its effectiveness as a model selection criterion, particularly in instances where the sample size is not much larger than the number of fitted parameters of the most complex candidate model. For such instances, Hurvich and Tsai \cite{hurvich89} extended the bias-corrected AIC (AICc), originally suggested by Sugiura \cite{sugiura78} for linear regression models, to non-linear regression models and autoregressive models. Also, Hurvich and Tsai \cite{hurvich89} demonstrated the small-sample superiority of AICc over AIC as a model selection criterion. Since then, AICc has been extended to many other models, such as autoregressive moving average models \cite{hurvich90}, vector autoregressive models \cite{hurvich93} and multivariate linear regression models \cite{bedrick94}.
The objective of this work is to define AICc for seemingly unrelated regressions models. These are models of multiple response variables that follow a joint distribution \cite{zellner62,srivastava87}. In contrast to the multivariate linear regression model of Ref.\ \cite{bedrick94}, the response variables of a seemingly unrelated regressions model do not need to depend on the same covariates. Seemingly unrelated regressions models play a central role in econometrics \cite{goldberger91} but also appear in other contexts \cite{verbyla88,rochon96,andersson01}.
The remainder of this paper is organized as follows. In Sec.\ \ref{aicaicc}, the bias of AIC is calculated in seemingly unrelated regressions models with the assumption that the candidate model is either correctly specified or overspecified. The same assumption is required for AIC to be asymptotically unbiased \cite{linhart86} and has been used to calculate its bias in finite samples in other models \cite{hurvich90,hurvich93,bedrick94}. Expanded in inverse powers of the sample size $N$, the bias of AIC (${\mathcal B}_{\rm AIC}$) takes the form ${\mathcal B}_{\rm AIC}=-N^{-1}\beta(\Sigma_{0})+o(N^{-1})$, where the positive coefficient $\beta(\Sigma_{0})=O(1)$ depends on the unknown true $p \times p$ covariance matrix $\Sigma_{0}$ of the $p$ response variables. In Sec.\ \ref{aicaicc}, a lower bound $\beta^{*}>0$ of $\min_{\Omega}\beta(\Omega)$, where the minimization is over all $p \times p$ symmetric positive definite matrices $\Omega$, is found in terms of the number of fitted parameters and AICc is defined as ${\rm AICc}={\rm AIC}+N^{-1}\beta^{*}$. The performance of AICc as a model selection criterion is simulated in Sec.\ \ref{simulation} and compared to that of AIC and the Bayesian information criterion (BIC) of Schwarz \cite{schwarz78}. Finally, we give some concluding remarks in Sec.\ \ref{discussion}. Details about the calculation of $\beta(\Sigma_{0})$, its lower bound $\beta^{*}$ and the simulation study are given in, respectively, Appendices \ref{AppendixA}, \ref{AppendixB} and \ref{AppendixC}. Appendix \ref{AppendixC} also holds additional simulation results.
\section{AIC and AIC{\tiny c}} \label{aicaicc}
We consider the seemingly unrelated regressions model \begin{equation} Y=ZB+U. \label{model} \end{equation} Here, $Y$ is an $N \times p$ matrix of $p$ response variables on $N$ subjects, $Z$ is a known $N \times M$ matrix of $N$ values of $M$ covariates, each row of the $N \times p$ matrix $U$ has independent $N_{p}(0,\Sigma)$ distribution, and $B$ is an $M \times p$ matrix holding $K \le Mp$ regression coefficients and $(Mp-K)$ zeroes. The restriction $B_{ij}=0$ means that response variable $y_{j}$ of the $j$-th column of $Y$ does not depend on covariate $z_{i}$ of the $i$-th column of $Z$. The entries of the elements of the $j$-th column of $B$ that are not restricted to zero, are collected in the set $\mathcal{J}_{j}$. Each column of the matrix $B$ holds at least one regression coefficient, which means that $\mathcal{J}_{j}$ is non-empty for all $j$. Throughout this work, we assume that the $M \times M$ matrix $Z^{\rm T}Z$ is positive definite, that $p$ and $M$ do not scale with $N$, and that $\lim_{N \rightarrow \infty} N^{-1}Z^{\rm T}Z$ is finite and positive definite.
Suppose that $Y$ is not generated by the model of Eq.\ (\ref{model}), but by the model \begin{equation} Y=Z_{0}B_{0}+\mathcal{E}. \label{operating} \end{equation} Here, $Z_{0}$ is an $N \times M_{0}$ matrix of $N$ values of $M_{0}$ unknown true covariates, $B_{0}$ is an $M_{0} \times p$ matrix of unknown coefficients and each row of the $N \times p$ matrix $\mathcal{E}$ has independent $N_{p}(0,\Sigma_{0})$ distribution with unknown covariance matrix $\Sigma_{0}$. The entries of the non-vanishing elements of the $j$-th column of $B_{0}$ are collected in the set $\mathcal{J}_{0j}$. A measure of the discrepancy between the candidate (or approximating) model of Eq.\ (\ref{model}) and the data-generating model of Eq.\ (\ref{operating}) is the Kullback-Leibler information \begin{equation} \Delta(B,\Sigma) = E_{0}\{-2 \mathcal{L}(B,\Sigma)\} = Np\ln 2\pi + N \ln {\rm Det}\Sigma + {\rm Tr}(Z_{0}B_{0}-ZB)^{\rm T}(Z_{0}B_{0}-ZB)\Sigma^{-1}+ N{\rm Tr}\Sigma_{0}\Sigma^{-1}, \label{KL} \end{equation} where $E_{0}$ denotes expectation under the data-generating model and $\mathcal{L}(B,\Sigma)$ is the log-likelihood function of the candidate model, \begin{equation} -2\mathcal{L}(B,\Sigma)=Np\ln 2\pi + N\ln {\rm Det}\Sigma +{\rm Tr}(Y-ZB)^{\rm T}(Y-ZB)\Sigma^{-1}. \end{equation} AIC is an estimator of the expected Kullback-Leibler information $E_{0}\{\Delta(\hat{B},\hat{\Sigma})\}$, where $\hat{B}$ and $\hat{\Sigma}$ are the maximum likelihood estimators of, respectively, $B$ and $\Sigma$. It is defined as the sum of $-2\mathcal{L}(\hat{B},\hat{\Sigma})$ and twice the number of fitted parameters, \begin{equation} {\rm AIC}(\hat{\Sigma})=N\ln {\rm Det}\hat{\Sigma}+Np(\ln 2\pi + 1)+2K+p(p+1). \label{aic} \end{equation}
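In the special case where every equation shares the same covariate set, the maximum likelihood estimator of $B$ reduces to equation-by-equation least squares with $\hat\Sigma=N^{-1}\hat{U}^{\rm T}\hat{U}$, so Eq.\ (\ref{aic}) is easy to evaluate directly. The following is a minimal simulation sketch; the dimensions, seed and this OLS reduction are our own choices.

```python
import numpy as np

# Evaluate the AIC of Eq. (aic) in the special case where all p equations
# share the same M covariates, so the MLE of B is equation-wise OLS and
# Sigma_hat = U_hat' U_hat / N.
rng = np.random.default_rng(0)
N, M, p = 50, 3, 2
Z = rng.standard_normal((N, M))
B0 = rng.standard_normal((M, p))
Y = Z @ B0 + rng.standard_normal((N, p))

B_hat, *_ = np.linalg.lstsq(Z, Y, rcond=None)   # per-equation OLS = MLE
U_hat = Y - Z @ B_hat
Sigma_hat = U_hat.T @ U_hat / N

K = M * p                                       # fitted regression coefficients
sign, logdet = np.linalg.slogdet(Sigma_hat)
aic = N * logdet + N * p * (np.log(2 * np.pi) + 1) + 2 * K + p * (p + 1)
print(round(aic, 2))
```

With unequal covariate sets the MLE requires iterated feasible GLS, but the AIC formula itself is unchanged once $\hat\Sigma$ is available.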
In Appendix \ref{AppendixA}, with the assumption that the candidate model is either correctly specified or overspecified ($Z_{0}=Z$ and $\mathcal{J}_{0i} \subseteq \mathcal{J}_{i}$ for all $i$), we demonstrate that \begin{equation} \mathcal{B}_{\rm AIC} = E_{0}\{{\rm AIC}(\hat{\Sigma})\}-E_{0}\{\Delta(\hat{B},\hat{\Sigma})\} = -N^{-1}\beta(\Sigma_{0})+o(N^{-1}), \label{BAIC} \end{equation} where $\beta(\Sigma_{0})=O(1)$ takes the form \begin{equation} \beta(\Sigma_{0})= 6K(p+1)+2{\rm Tr}({\rm Tr}_{\rm S}P_{0})^{2}-3{\rm Tr}P_{0}^{\vphantom{{\rm T}_{\rm S}}}P_{0}^{{\rm T}_{\rm S}}-3{\rm Tr}({\rm Tr}_{\rm R}P_{0}^{\vphantom{\rm T}})({\rm Tr}_{\rm R}P_{0}^{\rm T})+p(p+1)^{2}. \label{bias} \end{equation} Here, the $Np \times Np$ oblique projection matrix $P_{0}$ is given by \begin{equation} P_{0}=X\{X^{\rm T}(\Sigma_{0}^{-1} \otimes \openone_{N})X\}^{-1}X^{\rm T}(\Sigma_{0}^{-1} \otimes \openone_{N}), \label{P0} \end{equation}
where $X$ is an $Np \times K$ block-diagonal matrix of $p$ blocks of $N \times |\mathcal{J}_{i}|$ matrices $X_{i}$ holding the $|\mathcal{J}_{i}|$ columns of $Z$ corresponding to $z_{j}$ with $j \in \mathcal{J}_{i}$, \begin{equation} X=\left( \begin{array}{ccc} X_{1} & 0 & 0 \\ 0 & \ddots & 0 \\ 0 & 0 & X_{p} \end{array} \right). \end{equation} In Eq.\ (\ref{bias}), the operators `${\rm Tr}_{\rm S}$' and `${\rm Tr}_{\rm R}$' denote partial traces over, respectively, the $N$ subjects and the $p$ response variables. Given an $Np \times Np$ matrix $A$, ${\rm Tr}_{\rm S}A$ is the $p \times p$ matrix defined componentwise as $({\rm Tr}_{\rm S}A)_{ij}=\sum_{n=1}^{N} A_{in,jn}$, where $A_{in,jm}$ is multi-index notation for $A_{(i-1)N+n,(j-1)N+m}$. Similarly, ${\rm Tr}_{\rm R}A$ is the $N \times N$ matrix with elements $({\rm Tr}_{\rm R}A)_{nm}=\sum_{i=1}^{p} A_{in,im}$. Finally, in Eq.\ (\ref{bias}), `${\rm T}_{\rm S}$' denotes the partial transpose with respect to subjects: $(A^{{\rm T}_{\rm S}})_{in,jm}=A_{im,jn}$.
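In practice the partial traces and the partial transpose are conveniently realized by reshaping the $Np \times Np$ matrix into a $(p,N,p,N)$ array, following the multi-index convention $A_{in,jm}=A_{(i-1)N+n,(j-1)N+m}$. A sketch, assuming NumPy (function names are ours):

```python
import numpy as np

def tr_s(A, N, p):
    """Partial trace over subjects: (Tr_S A)_{ij} = sum_n A_{in,jn}."""
    return np.einsum('injn->ij', A.reshape(p, N, p, N))

def tr_r(A, N, p):
    """Partial trace over responses: (Tr_R A)_{nm} = sum_i A_{in,im}."""
    return np.einsum('inim->nm', A.reshape(p, N, p, N))

def t_s(A, N, p):
    """Partial transpose of subjects: (A^{T_S})_{in,jm} = A_{im,jn}."""
    return A.reshape(p, N, p, N).transpose(0, 3, 2, 1).reshape(p * N, p * N)
```

Both partial traces preserve the full trace, and the partial transpose is an involution, which gives quick consistency checks.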
If $\mathcal{J}_{i}=\mathcal{J}_{j}$ for all $i$ and $j$, $\beta(\Sigma_{0})$ collapses to \begin{equation} \beta^{*}=3K(p+1)+2K^{2}p^{-1}+p(p+1)^{2}, \end{equation} which equals the coefficient of the first term of the expansion of $-\mathcal{B}_{\rm AIC}$ of Ref.\ \cite{bedrick94} in inverse powers of $N$. In Appendix \ref{AppendixB}, we demonstrate that \begin{equation} \beta^{*} \le \min_{\Omega} \beta(\Omega), \label{ineq} \end{equation} where the minimization is over all $p \times p$ symmetric positive definite matrices $\Omega$ and the equality sign is attained if and only if $\mathcal{J}_{i}=\mathcal{J}_{j}$ for all $i$ and $j$. We define AICc as \begin{equation} {\rm AICc}(\hat{\Sigma})={\rm AIC}(\hat{\Sigma})+N^{-1}\beta^{*}. \label{aicc} \end{equation} Because $0 < \beta^{*} \le \min_{\Omega} \beta(\Omega)$, \begin{equation} \mathcal{B}_{\rm AICc}=E_{0}\{{\rm AICc}(\hat{\Sigma})\}-E_{0}\{\Delta(\hat{B},\hat{\Sigma})\}=-N^{-1}\left\{ \beta(\Sigma_{0})-\beta^{*} \right\}+o(N^{-1}) \end{equation} satisfies \begin{equation} \lim_{N \rightarrow \infty} N\mathcal{B}_{\rm AIC} < \lim_{N \rightarrow \infty} N\mathcal{B}_{\rm AICc} \le 0. \label{biascomp} \end{equation}
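A direct transcription (ours) of the correction coefficient $\beta^{*}$ and of the AICc of Eq.\ (\ref{aicc}):

```python
def beta_star(K, p):
    """beta* = 3K(p+1) + 2K^2/p + p(p+1)^2, the AICc correction coefficient."""
    return 3 * K * (p + 1) + 2 * K ** 2 / p + p * (p + 1) ** 2

def aicc(aic_value, N, K, p):
    """AICc of Eq. (aicc): AIC + beta*/N."""
    return aic_value + beta_star(K, p) / N
```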
\section{A simulation study} \label{simulation}
We compare the performance of AIC, AICc and BIC in the selection of seemingly unrelated regressions models. For this purpose, 1000 samples of sizes $N=15$, $N=20$ and $N=50$ are created from the data-generating model (\ref{operating}) with $p=2$. For each sample and each criterion, the fitted candidate model with the smallest value of the criterion is selected from a set of candidate models. The matrix $Z$ holds the values of 10 covariates and its $10N$ elements are fixed after drawing them independently from $N(0,1)$. We consider 25 candidate models specified by $\mathcal{J}_{1}=\{1,\ldots,i\}$ and $\mathcal{J}_{2}=\{6,\ldots,5+j\}$, where $i$ and $j$ are integers ranging from 1 to 5. For the data-generating model, we set $Z_{0}=Z$ and take $\mathcal{J}_{01}=\{1,2\}$ and $\mathcal{J}_{02}=\{6,7\}$. The 4 non-vanishing elements of $B_{0}$ equal unity and the covariance matrix $\Sigma_{0}$ has parametrization $\Sigma_{0}=(1-\rho)\openone_{p}+ \rho j_{p}$, where $j_{p}$ is the $p \times p$ matrix of ones and $|\rho|<1$. The samples are constructed based on 1000 independent drawings of $\mathcal{E}$, where each row of $\mathcal{E}$ is independently drawn from $N_{p}(0,\Sigma_{0})$.
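For concreteness, one sample of this simulation design can be generated as follows (our sketch, assuming NumPy; the random seed is ours, and note that the paper fixes $Z$ once across all 1000 samples):

```python
import numpy as np

rng = np.random.default_rng(0)                  # seed is ours
N, p, M, rho = 15, 2, 10, 0.5

Z = rng.standard_normal((N, M))                 # covariate values, drawn once
B0 = np.zeros((M, p))
B0[[0, 1], 0] = 1.0                             # J_01 = {1, 2} (1-based)
B0[[5, 6], 1] = 1.0                             # J_02 = {6, 7} (1-based)
Sigma0 = (1 - rho) * np.eye(p) + rho * np.ones((p, p))

# one sample: rows of E are independent N_p(0, Sigma0) draws
E = rng.multivariate_normal(np.zeros(p), Sigma0, size=N)
Y = Z @ B0 + E
```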
The candidate models are fitted with the constrained maximization (CM) algorithm \cite{oberhofer74,meng93}: \begin{equation} \hat{\Sigma}_{n+1}=N^{-1}(Y-Z\hat{B}_{n})^{\rm T}(Y-Z\hat{B}_{n}), \quad \mbox{where} \quad {\rm vec}(Z\hat{B}_{n})=X\{X^{\rm T}(\hat{\Sigma}_{n}^{-1} \otimes \openone_{N})X\}^{-1}X^{\rm T}(\hat{\Sigma}_{n}^{-1} \otimes \openone_{N}){\rm vec}(Y). \label{CM} \end{equation}
Here, $\hat{\Sigma}_{n+1}$ and $\hat{B}_{n}$ are estimators of, respectively, $\Sigma$ and $B$, $n$ is a positive integer and `${\rm vec}$' is the column-wise vectorization operator. The algorithm is started with $\hat{\Sigma}_{1}=\openone_{p}$ and terminated if $|{\rm Det}\hat{\Sigma}_{n+1}-{\rm Det}\hat{\Sigma}_{n}| \le \delta {\rm Det}\hat{\Sigma}_{n}$, with $\delta=1 \cdot 10^{-7}$. If the log-likelihood function $\mathcal{L}(B,\Sigma)$ is globally concave, then $\hat{\Sigma}_{n+1}$ and $\hat{B}_{n}$ converge to, respectively, $\hat{\Sigma}$ and $\hat{B}$ and the numerical error of $\ln{\rm Det}\hat{\Sigma}$ is of the order of magnitude of $\delta$. If $\mathcal{L}(B,\Sigma)$ is multi-modal, the CM algorithm does not necessarily converge to the global maximum, but may end up in a local maximum or a saddle point \cite{drton04,drton06}. Although multi-modality is rare, we choose several other initial estimators $\hat{\Sigma}_{1}$ and calculate $\ln{\rm Det}\hat{\Sigma}$ with a numerical error of about $10\delta$ (see Appendix \ref{AppendixC} for details). This means that the difference between two values of a criterion has a numerical error of $20N\delta$.
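A minimal NumPy sketch (ours, not the authors' code) of one run of the CM iteration of Eq.\ (\ref{CM}), started at $\hat{\Sigma}_{1}=\openone_{p}$ as in the text; the function name and the `max_iter` safeguard are illustrative assumptions:

```python
import numpy as np

def cm_fit(Y, Z, J, delta=1e-7, max_iter=500):
    """CM iteration of Eq. (CM), started at Sigma_1 = I_p.

    Y : (N, p) responses;  Z : (N, M) covariates
    J : list of p index lists; J[j] holds the (0-based) columns of Z
        entering the j-th regression
    """
    N, p = Y.shape
    blocks = [Z[:, idx] for idx in J]
    K = sum(b.shape[1] for b in blocks)
    X = np.zeros((N * p, K))                       # block-diagonal X of Eq. (9)
    col = 0
    for j, b in enumerate(blocks):
        X[j * N:(j + 1) * N, col:col + b.shape[1]] = b
        col += b.shape[1]
    y = Y.reshape(-1, order='F')                   # column-wise vec(Y)
    Sigma = np.eye(p)
    for _ in range(max_iter):
        W = np.kron(np.linalg.inv(Sigma), np.eye(N))       # Sigma^{-1} (x) 1_N
        fit = X @ np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
        R = (y - fit).reshape(N, p, order='F')
        Sigma_new = R.T @ R / N
        # stopping rule of the text: |Det change| <= delta * Det
        if abs(np.linalg.det(Sigma_new) - np.linalg.det(Sigma)) \
                <= delta * np.linalg.det(Sigma):
            Sigma = Sigma_new
            break
        Sigma = Sigma_new
    return fit.reshape(N, p, order='F'), Sigma
```

For identical regressor sets ($\mathcal{J}_{1}=\mathcal{J}_{2}$) the fitted values coincide with equation-by-equation least squares, which gives a convenient correctness check on the implementation.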
The frequencies of selecting the 25 candidate models with the three criteria are given in Table \ref{modelselection1} for $\rho=0.5$ and $N=15$. The correct model ($i=j=2$) is more often selected with AICc than with AIC and BIC. To see how the improvement of AICc on AIC is related to the bias correction, we have plotted $E_{0}\{\Delta(\hat{B},\hat{\Sigma})\}$, $E_{0}\{{\rm AICc}(\hat{\Sigma})\}$ and $E_{0}\{{\rm AIC}(\hat{\Sigma})\}$ as a function of $i$ (with $j=2$) in Fig.\ \ref{biasplot}. The expected Kullback-Leibler information has a minimum at $i=2$ and increases rapidly with $i$ for $i>2$. This increase is tracked more closely by $E_{0}\{{\rm AICc}(\hat{\Sigma})\}$ than by $E_{0}\{{\rm AIC}(\hat{\Sigma})\}$, which explains why AIC more often selects models that are too complex. In Appendix \ref{AppendixC}, the frequencies of selecting the correct model with the three criteria are given for $\rho=0.2,0.5,0.8$ and $N=15,20,50$. The frequencies do not depend much on $\rho$ and, as expected, the improvement of AICc on AIC decreases as $N$ increases. For $N=20$, AICc and BIC perform equally well, while for $N=50$, the asymptotically consistent BIC outperforms AICc. In Appendix \ref{AppendixC}, we also demonstrate that $\delta$ is sufficiently small and that the results of Table \ref{modelselection1} are not affected by numerical errors.
\begin{table}[h!] \caption{Frequencies of selecting the 25 candidate models with AIC, AICc and BIC in 1000 samples of size $N=15$ for $\rho=0.5$.} \begin{tabular}{ccccccccccccccccccc} \hline
& & \multicolumn{5}{l}{AIC} & & \multicolumn{5}{l}{AICc} & & \multicolumn{5}{l}{BIC} \\ i & & j & & & & & & j & & & & & & j & & & & \\ \cline{3-7} \cline{9-13} \cline{15-19}
& & 1 & 2 & 3 & 4 & 5 & & 1 & 2 & 3 & 4 & 5 & & 1 & 2 & 3 & 4 & 5 \\ \hline 1 & & 0 & 1 & 0 & 0 & 0 & & 0 & 2 & 0 & 1 & 0 & & 0 & 2 & 0 & 0 & 0 \\ 2 & & 3 & 241 & 75 & 60 & 66 & & 9 & 488 & 83 & 50 & 40 & & 7 & 385 & 78 & 53 & 58 \\ 3 & & 1 & 53 & 20 & 16 & 34 & & 5 & 64 & 14 & 14 & 12 & & 4 & 59 & 15 & 18 & 23 \\ 4 & & 1 & 49 & 25 & 36 & 52 & & 3 & 49 & 12 & 18 & 21 & & 4 & 47 & 19 & 26 & 30 \\ 5 & & 4 & 63 & 31 & 46 & 123 & & 4 & 37 & 12 & 17 & 45 & & 1 & 50 & 20 & 29 & 72 \\ \hline \end{tabular} \label{modelselection1} \end{table}
\begin{figure}
\caption{Expected Kullback-Leibler information (triangles), AICc (squares) and AIC (circles) as a function of $i$ with $j=2$ for $N=15$ and $\rho=0.5$. The expected criteria are estimated with the same 1000 samples as the ones of Table \ref{modelselection1}. The standard error of the expected AIC (and AICc) is about $0.3$ for all $i$ and that of the expected Kullback-Leibler information ranges from $0.3$ ($i=1$) to $1.8$ ($i=5$).}
\label{biasplot}
\end{figure}
\section{Discussion} \label{discussion}
In the simulation study of Sec.\ \ref{simulation}, the data-generating model is finite dimensional and one of the candidate models is correctly specified. The case of an infinite dimensional data-generating model is not considered here. Although in this case the assumption of correct specification or overspecification does not hold for any candidate model, Hurvich and Tsai \cite{hurvich91} demonstrated that for linear regression models in small samples, AICc is much less biased than AIC for most choices of the data-generating model. A similar study can be done for seemingly unrelated regressions models. Also, for an infinite dimensional data-generating model, AIC and AICc are asymptotically efficient \cite{shibata80,shibata81} and, based on the results of Ref.\ \cite{hurvich91}, it can be surmised that in small samples, AICc is more efficient than AIC and BIC for most choices of the data-generating model.
\appendix
\section{Bias of AIC} \label{AppendixA}
In this Appendix, we demonstrate that $\mathcal{B}_{\rm AIC}=-N^{-1}\beta(\Sigma_{0}) + o(N^{-1})$, where $\beta(\Sigma_{0})=O(1)$ is given by Eq.\ (\ref{bias}). First, we calculate $\hat{\gamma}$ in the expansion \begin{equation} {\rm AIC}(\hat{\Sigma})-\Delta(\hat{B},\hat{\Sigma}) = -\hat{\gamma} + o_{p}(N^{-1}). \label{decomp} \end{equation} Taking the expectation under the data-generating model of both sides of Eq.\ (\ref{decomp}) yields $\mathcal{B}_{\rm AIC}=-E_{0}(\hat{\gamma})+o(N^{-1})$. Second, we calculate $E_{0}(\hat{\gamma})$ and find $\beta(\Sigma_{0})$ from $E_{0}(\hat{\gamma})=N^{-1}\beta(\Sigma_{0})+o(N^{-1})$.
\subsection{The first term of the expansion of Eq.\ (\ref{decomp})} \label{A1}
We consider the expansion \begin{equation} \lim_{n \rightarrow \infty} \left\{{\rm AIC}(\hat{\Sigma}_{n+1})-\Delta(\hat{B}_{n},\hat{\Sigma}_{n+1})\right\} = -\hat{\eta} + o_{p}(N^{-1}), \label{stochbias} \end{equation} where the estimators $\hat{\Sigma}_{n+1}$ and $\hat{B}_{n}$ of, respectively, $\Sigma$ and $B$ at the $n$-th step of the constrained maximization (CM) algorithm are given by Eq.\ (\ref{CM}). Drton and Richardson \cite{drton04} demonstrated that, depending on the initial estimator $\hat{\Sigma}_{1}$, the CM algorithm may end up in a local maximum or a saddle point of $\mathcal{L}(B,\Sigma)$, rather than in the global maximum $\mathcal{L}(\hat{B},\hat{\Sigma})$. It turns out, however, that $\hat{\eta}$ does not depend on $\hat{\Sigma}_{1}$, which implies $\hat{\gamma}=\hat{\eta}$.
Because the candidate model is either correctly specified or overspecified, the left-hand side of Eq.\ (\ref{stochbias}) can be written as \begin{multline} \lim_{n \rightarrow \infty} \left\{{\rm AIC}(\hat{\Sigma}_{n+1})-\Delta(\hat{B}_{n},\hat{\Sigma}_{n+1})\right\} = 2K+p(p+1) \\ - N \lim_{n \rightarrow \infty} \left\{{\rm Tr} \left(\underbrace{\Sigma_{0}-N^{-1}{\rm Tr}_{\rm S} \epsilon\epsilon^{\rm T}}_{O_{p}(N^{-1/2})}+\underbrace{N^{-1}{\rm Tr}_{\rm S} \epsilon\epsilon^{\rm T}\hat{P}_{n}^{\rm T}}_{O_{p}(N^{-1})}+\underbrace{N^{-1}{\rm Tr}_{\rm S} \hat{P}_{n}\epsilon\epsilon^{\rm T}}_{O_{p}(N^{-1})}\right) \underbrace{\hat{\Sigma}_{n+1}^{-1}}_{O_{p}(1)} \right\}, \label{stochbiasexpand} \end{multline} where $\epsilon={\rm vec}(\mathcal{E})$, `${\rm vec}$' is the column-wise vectorization operator, \begin{equation} \hat{\Sigma}_{n+1}=N^{-1} {\rm Tr}_{\rm S} (\openone_{Np}-\hat{P}_{n})\epsilon\epsilon^{\rm T}(\openone_{Np}-\hat{P}_{n})^{\rm T} \quad \mbox{and} \quad \hat{P}_{n}=X\{X^{\rm T}(\hat{\Sigma}_{n}^{-1} \otimes \openone_{N})X\}^{-1}X^{\rm T}(\hat{\Sigma}_{n}^{-1} \otimes \openone_{N}). \label{psig} \end{equation} (By writing it as $N{\rm Tr}\hat{\Sigma}_{n+1}^{\vphantom{-1}}\hat{\Sigma}_{n+1}^{-1}$, the part $Np$ of ${\rm AIC}$ is absorbed in the second line of Eq.\ (\ref{stochbiasexpand}).) The order symbols below the horizontal curly braces in Eq.\ (\ref{stochbiasexpand}) refer to the elements of the corresponding matrices. From now on, when an order symbol refers to a matrix, all of its elements are of the indicated order. (The $Np \times Np$ matrix $\hat{P}_{n}$ is $O_{p}(N^{-1})$ because $N^{-1}Z^{\rm T}Z=O(1)$.)
The matrix $\hat{\Sigma}_{n}^{-1}$ has expansion \begin{equation} \hat{\Sigma}_{n}^{-1}=\Sigma_{0}^{-1}\sum_{j=0}^{Q} (-1)^{j} \left\{ (\hat{\Sigma}_{n}-\Sigma_{0})\Sigma_{0}^{-1}\right\}^{j} + o_{p}(N^{-Q/2}), \label{exp1} \end{equation} where $Q$ is a non-negative integer. The expansion of Eq.\ (\ref{exp1}) holds because $p=O(1)$. Similarly, because $K=O(1)$, the matrix $\{X^{\rm T}(\hat{\Sigma}_{n}^{-1} \otimes \openone_{N})X\}^{-1}$ has expansion \begin{multline} \{X^{\rm T}(\hat{\Sigma}_{n}^{-1}\otimes \openone_{N})X\}^{-1}= \\ \{X^{\rm T}(\Sigma^{-1}_{0}\otimes \openone_{N})X \}^{-1}\sum_{j=0}^{Q'} (-1)^{j} \left[ X^{\rm T}\{(\hat{\Sigma}_{n}^{-1}-\Sigma^{-1}_{0})\otimes \openone_{N}\}X\{X^{\rm T}(\Sigma^{-1}_{0}\otimes \openone_{N})X\}^{-1} \right]^{j} + o_{p}(N^{-Q'/2-1}), \label{exp2} \end{multline} where $Q'$ is a non-negative integer. By combining Eq.\ (\ref{exp1}) with $Q=2$, Eq.\ (\ref{exp2}) with $Q'=2$ and $\lim_{n \rightarrow \infty} \hat{\Sigma}_{n}=N^{-1}{\rm Tr}_{\rm S}(\openone_{Np}-P_{0})\epsilon\epsilon^{\rm T}(\openone_{Np}-P_{0})^{\rm T}+o_{p}(N^{-1})$, where $P_{0}=O(N^{-1})$ is given by Eq.\ (\ref{P0}), we obtain \begin{equation} \lim_{n \rightarrow \infty} \hat{P}_{n}=P_{0}+\hat{P}_{-3/2}+\hat{P}_{-2}+o_{p}(N^{-2}), \label{Pdecomp} \end{equation} where the matrices $\hat{P}_{-3/2}=O_{p}(N^{-3/2})$ and $\hat{P}_{-2}=O_{p}(N^{-2})$ are given by \begin{equation} \hat{P}_{-3/2}=P_{0}\{(N^{-1}{\rm Tr}_{\rm S} \epsilon\epsilon^{\rm T}-\Sigma_{0})\otimes \openone_{N}\}(P_{0}^{\rm T}-\openone_{Np})(\Sigma_{0}^{-1} \otimes \openone_{N}) \end{equation} and \begin{multline} \hat{P}_{-2}=N^{-1}P_{0}\{({\rm Tr}_{\rm S}\epsilon\epsilon^{\rm T}P_{0}^{\rm T}+{\rm Tr}_{\rm S}P_{0}\epsilon\epsilon^{\rm T}-{\rm Tr}_{\rm S}P_{0}^{\vphantom{\rm T}}\epsilon\epsilon^{\rm T}P_{0}^{\rm T})\otimes \openone_{N}\}(\openone_{Np}-P^{\rm T}_{0})(\Sigma_{0}^{-1}\otimes\openone_{N}) \\ -\hat{P}_{-3/2}\{(N^{-1}{\rm Tr}_{\rm S} \epsilon\epsilon^{\rm T} - 
\Sigma_{0})\otimes\openone_{N}\}(\Sigma_{0}^{-1} \otimes \openone_{N}) (\openone_{Np}-P_{0}). \end{multline} By combining Eq.\ (\ref{exp1}) with $Q=3$ and $\lim_{n \rightarrow \infty}\hat{P}_{n}=P_{0}+\hat{P}_{-3/2}+o_{p}(N^{-3/2})$, we obtain \begin{equation} \lim_{n \rightarrow \infty} \hat{\Sigma}_{n+1}^{-1}= \Sigma_{0}^{-1} \sum_{j=0}^{3} (-1)^{j} \left( \left[N^{-1} {\rm Tr}_{\rm S} \{\openone_{Np}-(P_{0}+\hat{P}_{-3/2})\}\epsilon\epsilon^{\rm T}\{\openone_{Np}-(P_{0}+\hat{P}_{-3/2})\}^{\rm T}-\Sigma_{0}\right]\Sigma_{0}^{-1}\right)^{j}+o_{p}(N^{-3/2}). \label{Sigmadecomp} \end{equation}
Substituting the expansions of Eqs.\ (\ref{Pdecomp},\ref{Sigmadecomp}) in the right-hand side of Eq.\ (\ref{stochbiasexpand}), expressing it as the right-hand side of Eq.\ (\ref{stochbias}), and noting that $\hat{\gamma}=\hat{\eta}$ (because $\hat{\eta}$ does not depend on $\hat{\Sigma}_{1}$), yields \begin{equation} \hat{\gamma}=N{\rm Tr}(\Sigma_{0}-N^{-1}{\rm Tr}_{\rm S}\epsilon\epsilon^{\rm T})\Sigma_{0}^{-1} + \hat{\gamma}_{0}+\hat{\gamma}_{-1/2}+\hat{\gamma}_{-1}, \label{hatbeta} \end{equation} where $\hat{\gamma}_{0}=O_{p}(1)$, $\hat{\gamma}_{-1/2}=O_{p}(N^{-1/2})$ and $\hat{\gamma}_{-1}=O_{p}(N^{-1})$ are given by \begin{equation} \hat{\gamma}_{0}=-2K-p(p+1)+ {\rm Tr}({\rm Tr}_{\rm S}\epsilon\epsilon^{\rm T}P_{0}^{\rm T}+{\rm Tr}_{\rm S} P_{0}\epsilon\epsilon^{\rm T})\Sigma_{0}^{-1} + N{\rm Tr}\{(\Sigma_{0}-N^{-1}{\rm Tr}_{\rm S} \epsilon\epsilon^{\rm T})\Sigma_{0}^{-1} \}^{2}, \end{equation} \begin{equation} \begin{array}{lll} \hat{\gamma}_{-1/2} & = & 2{\rm Tr}(\Sigma_{0}-N^{-1}{\rm Tr}_{\rm S} \epsilon\epsilon^{\rm T})\Sigma_{0}^{-1}({\rm Tr}_{\rm S} \epsilon\epsilon^{\rm T}P^{\rm T}_{0}+{\rm Tr}_{\rm S} P_{0}\epsilon\epsilon^{\rm T})\Sigma_{0}^{-1}+N{\rm Tr}\{(\Sigma_{0}-N^{-1}{\rm Tr}_{\rm S} \epsilon\epsilon^{\rm T})\Sigma_{0}^{-1}\}^{3} \\
& - & {\rm Tr}(\Sigma_{0}-N^{-1}{\rm Tr}_{\rm S} \epsilon\epsilon^{\rm T})\Sigma_{0}^{-1}({\rm Tr}_{\rm S} P_{0}^{\vphantom{\rm T}}\epsilon\epsilon^{\rm T}P^{\rm T}_{0})\Sigma_{0}^{-1}
+ {\rm Tr}({\rm Tr}_{\rm S} \epsilon\epsilon^{\rm T}\hat{P}^{\rm T}_{-3/2}+{\rm Tr}_{\rm S} \hat{P}_{-3/2}\epsilon\epsilon^{\rm T})\Sigma_{0}^{-1} \end{array} \end{equation} and \begin{equation} \begin{array}{lll} \hat{\gamma}_{-1} & = & N^{-1}{\rm Tr}({\rm Tr}_{\rm S} \epsilon\epsilon^{\rm T}P^{\rm T}_{0}+{\rm Tr}_{\rm S} P_{0}\epsilon\epsilon^{\rm T})\Sigma_{0}^{-1}({\rm Tr}_{\rm S} \epsilon\epsilon^{\rm T}P^{\rm T}_{0}+{\rm Tr}_{\rm S} P_{0}\epsilon\epsilon^{\rm T}-{\rm Tr}_{\rm S}P_{0}^{\vphantom{\rm T}}\epsilon\epsilon^{\rm T}P^{\rm T}_{0})\Sigma_{0}^{-1} \\
& + & 2{\rm Tr}({\rm Tr}_{\rm S}\epsilon\epsilon^{\rm T}\hat{P}^{\rm T}_{-3/2}+{\rm Tr}_{\rm S} \hat{P}_{-3/2}\epsilon\epsilon^{\rm T})\Sigma_{0}^{-1}(\Sigma_{0}-N^{-1}{\rm Tr}_{\rm S} \epsilon\epsilon^{\rm T})\Sigma_{0}^{-1} \\
& + & {\rm Tr}({\rm Tr}_{\rm S}\epsilon\epsilon^{\rm T}\hat{P}^{\rm T}_{-2}+{\rm Tr}_{\rm S} \hat{P}_{-2}\epsilon\epsilon^{\rm T})\Sigma_{0}^{-1}+ N{\rm Tr}\{(\Sigma_{0}-N^{-1}{\rm Tr}_{\rm S} \epsilon\epsilon^{\rm T})\Sigma_{0}^{-1}\}^{4} \\
& - & {\rm Tr}({\rm Tr}_{\rm S}\hat{P}_{-3/2}\epsilon\epsilon^{\rm T}P^{\rm T}_{0}+{\rm Tr}_{\rm S}P_{0}\epsilon\epsilon^{\rm T}\hat{P}^{\rm T}_{-3/2})\Sigma_{0}^{-1}(\Sigma_{0}-N^{-1}{\rm Tr}_{\rm S} \epsilon\epsilon^{\rm T})\Sigma_{0}^{-1} \\
& + & 3{\rm Tr}({\rm Tr}_{\rm S} \epsilon\epsilon^{\rm T}P^{\rm T}_{0}+{\rm Tr}_{\rm S} P_{0}\epsilon\epsilon^{\rm T})\Sigma_{0}^{-1}\{(\Sigma_{0}-N^{-1}{\rm Tr}_{\rm S} \epsilon\epsilon^{\rm T})\Sigma_{0}^{-1}\}^{2} \\
& - & 2{\rm Tr}({\rm Tr}_{\rm S}P_{0}^{\vphantom{\rm T}}\epsilon\epsilon^{\rm T}P^{\rm T}_{0})\Sigma_{0}^{-1}\{(\Sigma_{0}-N^{-1}{\rm Tr}_{\rm S} \epsilon\epsilon^{\rm T})\Sigma_{0}^{-1}\}^{2}. \\ \end{array} \end{equation}
\subsection{Expectation under the data-generating model} \label{A2}
The elements of the $Np$-dimensional Gaussian column vector $\epsilon={\rm vec}(\mathcal{E})$ have vanishing mean and two-point average \begin{equation} E_{0}(\epsilon_{in}\epsilon_{jm})=(\Sigma_{0})_{ij}\delta_{nm}, \end{equation} where $\epsilon_{in}$ is multi-index notation for $\epsilon_{N(i-1)+n}=\mathcal{E}_{in}$ and $\delta_{nm}$ is a Kronecker delta. Because $E_{0}\{N{\rm Tr}(\Sigma_{0}-N^{-1}{\rm Tr}_{\rm S}\epsilon\epsilon^{\rm T})\Sigma_{0}^{-1}\}=0$, $E_{0}(\hat{\gamma})$ takes the form \begin{equation} E_{0}(\hat{\gamma})=E_{0}(\hat{\gamma}_{0})+E_{0}(\hat{\gamma}_{-1/2})+E_{0}(\hat{\gamma}_{-1}). \label{avegamma} \end{equation} Applying Wick's theorem, which states that the average of a product of $2g$ elements of $\epsilon$, where $g$ is a positive integer, equals the sum of products of all $\prod_{i=1}^{g}(2i-1)$ possible pairings of two-point averages, we obtain \begin{equation} E_{0}(\hat{\gamma}_{0})=0, \label{ave0} \end{equation} \begin{equation} E_{0}(\hat{\gamma}_{-1/2})=N^{-1} [ -6K(p+1)+3{\rm Tr}P_{0}^{\vphantom{{\rm T}_{\rm S}}}P_{0}^{{\rm T}_{\rm S}}+3{\rm Tr}({\rm Tr}_{\rm R}P_{0}^{\vphantom{\rm T}})({\rm Tr}_{\rm R}P_{0}^{\rm T})-\{p^{2}+3p+p(p+1)^{2}\} ] \label{ave1} \end{equation} and \begin{equation} E_{0}(\hat{\gamma}_{-1})= N^{-1} \{ 12K(p+1)+2{\rm Tr}({\rm Tr}_{\rm S}P_{0})^{2}-6{\rm Tr}P_{0}^{\vphantom{{\rm T}_{\rm S}}}P_{0}^{{\rm T}_{\rm S}}-6{\rm Tr}({\rm Tr}_{\rm R}P_{0}^{\vphantom{\rm T}})({\rm Tr}_{\rm R}P_{0}^{\rm T})+p^{2}+3p+2p(p+1)^{2} \}+o(N^{-1}). \label{ave2} \end{equation} Substituting Eqs.\ (\ref{ave0},\ref{ave1},\ref{ave2}) in Eq.\ (\ref{avegamma}) yields \begin{equation} E_{0}(\hat{\gamma})=N^{-1}\beta(\Sigma_{0})+o(N^{-1}), \end{equation} where $\beta(\Sigma_{0})=O(1)$ is given by Eq.\ (\ref{bias}).
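The pairing count in Wick's theorem, $\prod_{i=1}^{g}(2i-1)$, can be verified by brute-force enumeration of all perfect pairings of $2g$ elements (our sketch):

```python
from math import prod

def pairings(elems):
    """All perfect pairings of an even-sized list, by brute force."""
    if not elems:
        return [[]]
    first, rest = elems[0], list(elems[1:])
    result = []
    for k, partner in enumerate(rest):
        remaining = rest[:k] + rest[k + 1:]
        for sub in pairings(remaining):
            result.append([(first, partner)] + sub)
    return result

# number of pairings of 2g elements, g = 1, ..., 4
counts = [len(pairings(list(range(2 * g)))) for g in range(1, 5)]
# Wick's theorem predicts prod_{i=1}^{g} (2i - 1) = 1, 3, 15, 105
predicted = [prod(2 * i - 1 for i in range(1, g + 1)) for g in range(1, 5)]
```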
\section{Proof of Eq.\ (\ref{ineq})} \label{AppendixB}
In this Appendix, we demonstrate \begin{equation} -3K(p+1)+2K^{2}p^{-1} \le \min_{\Omega} \left[\left\{ 2{\rm Tr}({\rm Tr}_{\rm S}P_{0})^{2}-3{\rm Tr}P_{0}^{\vphantom{\rm T}}P_{0}^{{\rm T}_{\rm S}}-3{\rm Tr}({\rm Tr}_{\rm R}P_{0}^{\vphantom{\rm T}})({\rm Tr}_{\rm R}P_{0}^{\rm T}) \right\}_{\Sigma_{0}=\Omega} \right], \label{ineqapp} \end{equation} where the minimization is over all $p \times p$ symmetric positive definite matrices $\Omega$ and the equality sign is attained if and only if $\mathcal{J}_{i}=\mathcal{J}_{j}$ for all $i$ and $j$. By adding $6K(p+1)+p(p+1)^{2}$ on both sides of Eq.\ (\ref{ineqapp}), we obtain $\beta^{*} \le \min_{\Omega}\beta(\Omega)$ of Eq.\ (\ref{ineq}).
Using ${\rm Tr}_{\rm S}P_{0}=\Sigma_{0}^{1/2}({\rm Tr}_{\rm S}\mathcal{A})\Sigma_{0}^{-1/2}$, where \begin{equation} \mathcal{A}=(\Sigma_{0}^{-1/2} \otimes \openone_{N})P_{0}(\Sigma_{0}^{1/2} \otimes \openone_{N})=(\Sigma_{0}^{-1/2} \otimes \openone_{N})X\{X^{\rm T}(\Sigma_{0}^{-1} \otimes \openone_{N})X\}^{-1}X^{\rm T}(\Sigma_{0}^{-1/2} \otimes \openone_{N}), \end{equation} we find that ${\rm Tr}({\rm Tr}_{\rm S}P_{0})^{2}$ can be written as \begin{equation} {\rm Tr}({\rm Tr}_{\rm S}P_{0})^{2}={\rm Tr}({\rm Tr}_{\rm S}\mathcal{A})^{2}. \end{equation} From \begin{equation} \min_{\mathcal{C}} \left( {\rm Tr}\,\mathcal{C}^{2} \, \vline \, {\rm Tr}\,\mathcal{C}=K \right)=K^{2}p^{-1}, \label{bound1u} \end{equation} where the minimization is over all $p \times p$ symmetric matrices $\mathcal{C}$, we obtain \begin{equation} \min_{\Omega}\left[\left\{ {\rm Tr}({\rm Tr}_{\rm S}P_{0})^{2}\right\}_{\Sigma_{0}=\Omega}\right] = \min_{\Omega}\left[\left\{{\rm Tr}({\rm Tr}_{\rm S}\mathcal{A})^{2}\right\}_{\Sigma_{0}=\Omega}\right] \ge K^{2}p^{-1}. \label{bound1c} \end{equation}
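The minimum of Eq.\ (\ref{bound1u}) can be sanity-checked numerically (our sketch, assuming NumPy; the seed and sample count are ours): random symmetric matrices with ${\rm Tr}\,\mathcal{C}=K$ never fall below $K^{2}p^{-1}$, and the bound is attained at $\mathcal{C}=Kp^{-1}\openone_{p}$.

```python
import numpy as np

rng = np.random.default_rng(1)                 # seed is ours
p, K = 3, 4
vals = []
for _ in range(1000):
    C = rng.standard_normal((p, p))
    C = (C + C.T) / 2                          # symmetrize
    C += (K - np.trace(C)) / p * np.eye(p)     # enforce Tr C = K
    vals.append(np.trace(C @ C))
min_val = min(vals)
bound = K ** 2 / p                             # K^2 p^{-1}
C_star = (K / p) * np.eye(p)                   # the unique minimizer
attained = np.trace(C_star @ C_star)
```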
The minimum of Eq.\ (\ref{bound1u}) is attained if and only if $\mathcal{C}_{ii}=Kp^{-1}$ and $\mathcal{C}_{ij}=0$ for all $i \neq j$. This corresponds to ${\rm Tr}_{\rm S}\mathcal{A}=Kp^{-1}\openone_{p}$, which can be reached if $\mathcal{J}_{i}=\mathcal{J}_{j}$ for all $i$ and $j$ or if $\Sigma_{0}=\openone_{p}$ and $|\mathcal{J}_{i}|=|\mathcal{J}_{j}|$ for all $i$ and $j$.
Because \begin{equation} P_{0}^{{\rm T}_{\rm S}}=(\Sigma_{0}^{1/2} \otimes \openone_{N})\mathcal{A}^{{\rm T}_{\rm S}}(\Sigma_{0}^{-1/2} \otimes \openone_{N}), \end{equation} ${\rm Tr}P_{0}^{\vphantom{{\rm T}_{\rm S}}}P_{0}^{{\rm T}_{\rm S}}$ equals the inner product of $\mathcal{A}^{{\rm T}_{\rm S}}$ and $\mathcal{A}$: \begin{equation} {\rm Tr}P_{0}^{\vphantom{{\rm T}_{\rm S}}}P_{0}^{{\rm T}_{\rm S}}={\rm Tr}\mathcal{A}^{{\rm T}_{\rm S}}\mathcal{A}^{\rm T}. \end{equation} The squared length ${\rm Tr}\mathcal{A}\mathcal{A}^{\rm T}$ of $\mathcal{A}$ equals $K$ ($\mathcal{A}$ is an orthogonal projection matrix of rank $K$). The squared length of $\mathcal{A}^{{\rm T}_{\rm S}}$ equals that of $\mathcal{A}$ and we have \begin{equation} \max_{\Omega}\left\{ \left({\rm Tr}P_{0}^{\vphantom{{\rm T}_{\rm S}}}P_{0}^{{\rm T}_{\rm S}}\right)_{\Sigma_{0}=\Omega}\right\}= \max_{\Omega}\left\{ \left({\rm Tr}\mathcal{A}^{{\rm T}_{\rm S}}\mathcal{A}^{\rm T}\right)_{\Sigma_{0}=\Omega} \right\} \le \left\{{\rm Tr}\mathcal{A}{\mathcal{A}}^{\rm T}{\rm Tr}\mathcal{A}^{{\rm T}_{\rm S}}(\mathcal{A}^{{\rm T}_{\rm S}})^{\rm T}\right\}^{1/2}=K. \label{bound2c} \end{equation} The upper bound of $K$ in Eq.\ (\ref{bound2c}) is attained if and only if $\mathcal{A}=\mathcal{A}^{{\rm T}_{\rm S}}$, which can be reached if $\mathcal{J}_{i}=\mathcal{J}_{j}$ for all $i$ and $j$ or if $\Sigma_{0}=\openone_{p}$.
Using ${\rm Tr}_{\rm R}P_{0}={\rm Tr}_{\rm R}\mathcal{A}$, we find \begin{equation} {\rm Tr}({\rm Tr}_{\rm R}P_{0}^{\vphantom{\rm T}})({\rm Tr}_{\rm R}P_{0}^{\rm T})={\rm Tr}({\rm Tr}_{\rm R}\mathcal{A})({\rm Tr}_{\rm R}\mathcal{A})^{\rm T} = \sum_{i=1}^{p}\sum_{j=1}^{p} {\rm Tr} a_{ii}^{\vphantom{\rm T}}a_{jj}^{\rm T}, \end{equation} where $a_{ij}$ is the $ij$-th $N \times N$ submatrix of $\mathcal{A}$. The sum of the squared lengths of the $a_{ii}$'s is bounded by \begin{equation} \sum_{i=1}^{p}{\rm Tr}a_{ii}^{\vphantom{\rm T}}a_{ii}^{\rm T} \le \sum_{i=1}^{p}\sum_{j=1}^{p} {\rm Tr}a_{ij}^{\vphantom{\rm T}}a_{ij}^{\rm T} = {\rm Tr}\mathcal{A}\mathcal{A}^{\rm T}=K. \end{equation} The upper bound $pK$ of \begin{equation} \max_{\{c_{ii}\}} \left( \sum_{i=1}^{p}\sum_{j=1}^{p} {\rm Tr} c_{ii}^{\vphantom{\rm T}}c_{jj}^{\rm T} \vline \sum_{i=1}^{p}{\rm Tr}c_{ii}^{\vphantom{\rm T}}c_{ii}^{\rm T} \le K \right) \le pK, \label{maxa} \end{equation} where the maximization is over $p$ symmetric $N \times N$ matrices $c_{ii}$, is attained if and only if $c_{ii}=c_{jj}$ for all $i$ and $j$, and $\sum_{i=1}^{p} {\rm Tr}c_{ii}^{\vphantom{\rm T}}c_{ii}^{\rm T}=K$. Translated to $\mathcal{A}$, this means that $a_{ij}=0$ for all $i \neq j$ and the $a_{ii}$'s are identical orthogonal projection matrices of rank $Kp^{-1}$. This can be reached if and only if $\mathcal{J}_{i}=\mathcal{J}_{j}$ for all $i$ and $j$. It follows that \begin{equation} \max_{\Omega} \left[ \left\{{\rm Tr}({\rm Tr}_{\rm R}P_{0}^{\vphantom{\rm T}})({\rm Tr}_{\rm R}P_{0}^{\rm T})\right\}_{\Sigma_{0}=\Omega} \right] \le pK, \label{bound3c} \end{equation} where the equality sign is attained if and only if $\mathcal{J}_{i}=\mathcal{J}_{j}$ for all $i$ and $j$.
By combining the bounds of Eqs.\ (\ref{bound1c},\ref{bound2c},\ref{bound3c}), we obtain Eq.\ (\ref{ineqapp}).
\section{Details about the simulation study} \label{AppendixC}
In this Appendix, we give the algorithm used to calculate the maximum likelihood estimators. Also, additional simulation results are presented and $\delta$ is demonstrated to be sufficiently small.
\subsection{Calculating the maximum likelihood estimators} \label{C1}
The CM algorithm is run with $\hat{\Sigma}_{1}=\openone_{p}$ and, after convergence is achieved ($|{\rm Det}\hat{\Sigma}_{n+1}-{\rm Det}\hat{\Sigma}_{n}| \le \delta {\rm Det}\hat{\Sigma}_{n}$), we set $\hat{\Sigma}_{\rm temp}=\hat{\Sigma}_{n+1}$ and $\hat{B}_{\rm temp}=\hat{B}_{n}$. Then, another $\hat{\Sigma}_{1}$ is constructed by drawing a $p \times p$ matrix from $W_{p}(\openone_{p},p)$, where `$W$' denotes a Wishart distribution, and dividing it by $p$. With the randomly created $\hat{\Sigma}_{1}$, the CM algorithm is run up to convergence (possibly with a different number of iterations than in the previous run) and if the newly calculated $\hat{\Sigma}_{\rm new}=\hat{\Sigma}_{n+1}$ and $\hat{B}_{\rm new}=\hat{B}_{n}$ satisfy \begin{equation}
\mathcal{L}(\hat{B}_{\rm new},\hat{\Sigma}_{\rm new}) > \mathcal{L}(\hat{B}_{\rm temp},\hat{\Sigma}_{\rm temp}) \quad \mbox{and} \quad |{\rm Det}\hat{\Sigma}_{\rm new}-{\rm Det}\hat{\Sigma}_{\rm temp}| > 10\delta {\rm Det}\hat{\Sigma}_{\rm temp}, \label{algineq} \end{equation} we set $\hat{\Sigma}_{\rm temp}=\hat{\Sigma}_{\rm new}$ and $\hat{B}_{\rm temp}=\hat{B}_{\rm new}$. The above is repeated until $\hat{\Sigma}_{\rm temp}$ and $\hat{B}_{\rm temp}$ remain unchanged for 10 different randomly created $\hat{\Sigma}_{1}$'s in a row. When the algorithm is terminated, $\mathcal{L}(\hat{B}_{\rm temp},\hat{\Sigma}_{\rm temp})$ is considered to be the global maximum of $\mathcal{L}(B,\Sigma)$ and we set $\hat{B}=\hat{B}_{\rm temp}$ and $\hat{\Sigma}=\hat{\Sigma}_{\rm temp}$. Table \ref{jumps} holds the number of jumps of $\hat{\Sigma}_{\rm temp}$ (and $\hat{B}_{\rm temp}$) in 1000 samples of size $N=15$ for $\rho=0.5$. (These are the same samples as the ones of Sec.\ \ref{simulation}.) It turns out that multi-modality is indeed rare: The candidate model with $i=j=5$ has a multi-modal $\mathcal{L}(B,\Sigma)$ in at most 1.6\% of the 1000 samples. Table \ref{jumps} also holds the number of additional $\hat{\Sigma}_{1}$'s in the 1000 samples. There are about 2 to 3 additional $\hat{\Sigma}_{1}$'s per jump and a maximum of 5 additional $\hat{\Sigma}_{1}$'s per jump.
\begin{table}[h!] \caption{Number of jumps of $\hat{\Sigma}_{\rm temp}$ and number of additional $\hat{\Sigma}_{1}$'s in 1000 samples of size $N=15$ for $\rho=0.5$.} \begin{tabular}{ccccccccccccc} \hline
& & \multicolumn{5}{l}{jumps} & & \multicolumn{5}{l}{additional $\hat{\Sigma}_{1}$'s} \\ i & & j & & & & & & j & & & & \\ \cline{3-7} \cline{9-13}
& & \,1\, & \,2\, & \,3\, & \,4\, & \,5\, & & \,1\, & \,2\, & \,3\, & 4 & 5 \\ \hline 1 & & 0 & 0 & 0 & 0 & 0 & & 0 & 0 & 0 & 0 & 0 \\ 2 & & 0 & 0 & 0 & 1 & 1 & & 0 & 0 & 0 & 5 & 2 \\ 3 & & 0 & 0 & 1 & 2 & 1 & & 0 & 0 & 5 & 2 & 5 \\ 4 & & 0 & 2 & 1 & 2 & 7 & & 0 & 4 & 1 & 4 & 18 \\ 5 & & 0 & 2 & 5 & 5 & 16 & & 0 & 3 & 10 & 10 & 38 \\ \hline \end{tabular} \label{jumps} \end{table}
\subsection{Additional simulation results} \label{C2}
The frequencies of selecting the correct model with AIC, AICc and BIC in 1000 samples of sizes $N=15,20,50$ are given in Table \ref{modelselection2} for $\rho = 0.2,0.5,0.8$. In the simulation, the values of the covariates in the samples of sizes $N=15$ and $N=20$ are the same as, respectively, the first 15 and 20 values of the covariates in the samples of size $N=50$. Table \ref{modelselection2} also holds the number of times that the difference between the second smallest and smallest value of a criterion is less than $200N\delta$ (10 times the numerical error of the difference). These numbers are of order unity, indicating that $\delta$ is sufficiently small. The average (over 1000 samples) of the difference between the second smallest and smallest value of a criterion is not given in Table \ref{modelselection2}, but it ranges from 3.8 (for AIC with $N=15$ and $\rho=0.2$) to 52.4 (for BIC with $N=50$ and $\rho=0.8$). In all 1000 samples of sizes $N=20$ and $N=50$, there are no jumps of $\hat{\Sigma}_{\rm temp}$. In the samples of size $N=15$, the number of jumps of $\hat{\Sigma}_{\rm temp}$ and the number of additional $\hat{\Sigma}_{1}$'s do not depend much on $\rho$.
\begin{table}[h!] \caption{Frequency $f$ of selecting the correct model and frequency $\nu$ of the difference between the second smallest and smallest value of a criterion being smaller than $200N\delta$, in 1000 samples.} \begin{tabular}{ccccccccc} \hline N & $\rho$ & \multicolumn{3}{c}{$f$} & & \multicolumn{3}{c}{$\nu$} \\ \cline{3-5} \cline{7-9}
& & AIC & AICc & BIC & & AIC & AICc & BIC \\ \hline 15 & 0.2 & 249 & 500 & 397 & & 0 & 1 & 0 \\
& 0.5 & 241 & 488 & 385 & & 0 & 0 & 0 \\
& 0.8 & 233 & 473 & 365 & & 1 & 0 & 0 \\ 20 & 0.2 & 366 & 570 & 577 & & 0 & 0 & 0 \\
& 0.5 & 353 & 604 & 611 & & 0 & 2 & 0 \\
& 0.8 & 324 & 553 & 556 & & 0 & 0 & 0 \\ 50 & 0.2 & 493 & 590 & 835 & & 0 & 1 & 0 \\
& 0.5 & 528 & 616 & 832 & & 0 & 1 & 0 \\
& 0.8 & 503 & 594 & 848 & & 0 & 0 & 0 \\ \hline \end{tabular} \label{modelselection2} \end{table}
\end{document}
\begin{document}
\title{Every graph with no $\mathcal{K}_9^{-6}$ minor is $8$-colorable}
\begin{abstract}
For positive integers $t$ and $s$, let $\mathcal{K}_t^{-s}$ denote the family of graphs obtained from the complete graph $K_t$ by removing $s$ edges. A graph $G$ has no $\mathcal{K}_t^{-s}$ minor if it has no $H$ minor for every $H\in \mathcal{K}_t^{-s}$. Motivated by the famous Hadwiger's Conjecture, Jakobsen in 1971 proved that every graph with no $\mathcal{K}_7^{-2}$ minor is $6$-colorable; very recently the present authors proved that every graph with no $\mathcal{K}_8^{-4}$ minor is $7$-colorable. In this paper we continue our work and prove that every graph with no $\mathcal{K}_9^{-6}$ minor is $8$-colorable. Our result implies that $H$-Hadwiger's Conjecture, suggested by Paul Seymour in 2017, is true for all graphs $H$ on nine vertices such that $H$ is a subgraph of every graph in $ \mathcal{K}_9^{-6}$. \end{abstract}
\baselineskip 16pt
\section{Introduction}
All graphs in this paper are finite and undirected, and have no loops or parallel edges. For a graph $G$ we use $|G|$, $e(G)$, $\delta (G)$, $\Delta(G)$, $\alpha(G)$, $\chi(G)$ to denote the number of vertices, number of edges, minimum degree, maximum degree, independence number, and chromatic number of $G$, respectively. The \dfn{complement} of $G$ is denoted by $\overline{G}$. For any positive integer $k$, we define $[k]$ to be the set $\{1, \ldots, k\}$. A graph $H$ is a \dfn{minor} of a graph $G$ if $H$ can be
obtained from a subgraph of $G$ by contracting edges. We write $G\succcurlyeq H$ if $H$ is a minor of $G$. In those circumstances we also say that $G$ has an \dfn{$H$ minor}. For positive integers $t, s$, we use $\mathcal{K}_t^{-s}$ to denote the family of graphs obtained from the complete graph $K_t$ by deleting $s$ edges. We use $K_t^-$, $K_t^=$, and $K_t^\equiv$ to denote the unique graph obtained from $K_t$ by deleting one, two and three independent edges, respectively; and
$K_t^<$ to denote the unique graph obtained from $K_t$ by deleting two adjacent edges. Note that $\mathcal{K}_t^{-1}=\{K_t^-\}$ and $\mathcal{K}_t^{-2}=\{K_t^=, K_t^<\}$. A graph $G$ has \dfn{no $\mathcal{K}_t^{-s}$ minor} if it has no $H$ minor for every $H\in \mathcal{K}_t^{-s}$; and $G$ has a $\mathcal{K}_t^{-s}$ minor, otherwise. We write $G\succcurlyeq \mathcal{K}_t^{-s}$ if $G$ has a $\mathcal{K}_t^{-s}$ minor.
Our work is motivated by Hadwiger's Conjecture~\cite{Had43}, which is perhaps the most famous conjecture in graph theory.
\begin{conj}[Hadwiger's Conjecture~\cite{Had43}]\label{HC} Every graph with no $K_t$ minor is $(t-1)$-colorable. \end{conj}
\cref{HC} is trivially true for $t\le3$, and reasonably easy for $t=4$, as shown independently by Hadwiger~\cite{Had43} and Dirac~\cite{Dirac52}. However, for $t\ge5$, Hadwiger's Conjecture implies the Four Color Theorem~\cite{AH77,AHK77, RSST97}. Wagner~\cite{Wagner37} proved that the case $t=5$ of Hadwiger's Conjecture is, in fact, equivalent to the Four Color Theorem, and the same was shown for $t=6$ by Robertson, Seymour and Thomas~\cite{RST}. Despite receiving considerable attention over the years, Hadwiger's Conjecture remains wide open for all $t\ge 7$; it is considered among the most important problems in graph theory and has motivated numerous developments in graph coloring and graph minor theory.
K\"{u}hn and Osthus~\cite{KuhOst03c} proved that Hadwiger's Conjecture is true for $C_4$-free graphs of sufficiently large chromatic number, and for all graphs of girth at least $19$. Until very recently the best known upper bound on the chromatic number of graphs with no $K_t$ minor was $O(t (\log t)^{1/2})$, obtained independently by Kostochka~\cite{Kostochka82,Kostochka84} and Thomason~\cite{Thomason84}, while Norin, Postle and the second author~\cite{NPS20} improved the $(\log t)^{1/2}$ term to $(\log t)^{1/4}$. The current record
is $O(t\log \log t)$ due to Delcourt and Postle~\cite{DelcourtPostle}.
Given the notorious difficulty of Hadwiger's Conjecture, Paul Seymour in 2017 suggested the study of the following $H$-Hadwiger's Conjecture.
\begin{conj}[$H$-Hadwiger's Conjecture]\label{HHC} For every graph $H$ on $t$ vertices, every graph with no $H$ minor is $(t-1)$-colorable. \end{conj}
Jakobsen~\cite{Jakobsen71b} in 1971 proved that every graph with no $K_7^{-}$ minor is $7$-colorable. It is not known yet whether every graph with no $K_7$ minor is $7$-colorable; some progress has been made in \cite{RST22}. For $H\in\{K_7^-, K_7^=, K_7^<\}$, proving that graphs with no $H$ minor are $6$-colorable also remains open. Kostochka~\cite{Kos14} proved that $H$-Hadwiger's Conjecture is true for graphs with no $K_{s,t}$ minor, provided that $t>C(s\log s)^3$. Very recently, Norin and Seymour~\cite{NorSey22} proved that every graph on $n$ vertices with independence number two has an $H$ minor, where $H$ is a graph with $\lceil n/2\rceil$ vertices and at least $ 0.98688\cdot {{|H|}\choose2}-o(n^2)$ edges. We refer the reader to a recent paper of the present authors~\cite{K84} on partial results towards Hadwiger's Conjecture for $t\le 9$; and recent surveys~\cite{CV2020, K2015,Seymoursurvey} for further background on Hadwiger's Conjecture.
Dirac in 1964 began the study of a variation of $H$-Hadwiger's Conjecture in~\cite{Dirac64b} by excluding more than one forbidden minor simultaneously; he proved that every graph with no $\mathcal{K}_t^{-2}$ minor is $(t-1)$-colorable for each $t\in\{5,6\}$. Jakobsen~\cite{Jakobsen71a} in 1971 proved that every graph with no $\mathcal{K}_7^{-2}$ minor is $6$-colorable; this implies that $H$-Hadwiger's Conjecture is true for all graphs $H$ on seven vertices such that $\Delta(\overline{H})\ge2$ and $\overline{H}$ has a matching of size two.
Very recently, using the techniques developed in \cite{ KT05,RST} and generalized Kempe chains of contraction-critical graphs by Rolek and the second author~\cite{RolekSong17a}, the present authors considered the case when $t=8$ and proved the following result.
\begin{thm}[Lafferty and Song~\cite{K84}]\label{t:K84} Every graph with no $\mathcal{K}_8^{-4}$ minor is $7$-colorable. In particular, $H$-Hadwiger's Conjecture is true for all graphs $H$ on eight vertices such that $\Delta(\overline{H})\ge4$, and $\overline{H}$ has a perfect matching, a triangle and a cycle of length four. \end{thm}
The purpose of this paper is to consider the next step and prove the following main result.
\begin{restatable}{thm}{main}\label{t:main} Every graph with no $\km{9}{6}$ minor is $8$-colorable. \end{restatable}
\cref{t:main} implies that $H$-Hadwiger's Conjecture holds for all graphs $H$ on nine vertices such that $H$ is a subgraph of every graph in $ \km{9}{6}$. Following the ideas in \cite{K84}, our proof of \cref{t:main} utilizes an extremal function for $\km{9}{6}$ minors (see \cref{t:exfun}), generalized Kempe chains of contraction-critical graphs (see \cref{l:wonderful}), and the method for finding $\km{9}{6}$ minors from three different $K_6$ subgraphs in $7$-connected graphs on at least $19$ vertices (see \cref{l:threek6s}).
\begin{restatable}{thm}{exfun}\label{t:exfun} Every graph on $n\ge 9$ vertices with at least $5n-14$ edges has a $\km{9}{6}$ minor. \end{restatable}
\cref{t:exfun} is best possible in the sense that every $(K_8^=, 4)$-cockade on $n$ vertices has $5n-14$ edges but no $\km{9}{5}$ minor, where for a graph $H$ and an integer $k\ge1$, an $(H, k)$-cockade is defined recursively as follows: any graph isomorphic to $H$ is an $(H,k)$-cockade. Let $G_1$ and $G_2$ be $(H, k)$-cockades and let $G$ be obtained from the disjoint union of $G_1$ and $G_2$ by identifying a clique of size $k$ in $G_1$ with a clique of the same size in $G_2$. Then the graph $G$ is also an $(H,k)$-cockade, and every $(H,k)$-cockade can be constructed this way.
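For concreteness, the edge count of a $(K_8^=, 4)$-cockade $G$ on $n$ vertices can be verified by induction on the construction: the base graph $K_8^=$ has $\binom{8}{2}-2=26=5\cdot8-14$ edges, and if $G$ is obtained from $(K_8^=, 4)$-cockades $G_1$ and $G_2$ on $n_1$ and $n_2$ vertices, respectively, by identifying $4$-cliques, then
\[
e(G)=(5n_1-14)+(5n_2-14)-\binom{4}{2}=5(n_1+n_2-4)-14=5n-14,
\]
since $n=n_1+n_2-4$ and the $\binom{4}{2}=6$ edges of the identified clique are counted twice.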
This paper is organized as follows. In the next section, we introduce the necessary definitions and collect several tools which we will need later on. We prove \cref{t:main} in Section~\ref{s:coloring}, and \cref{t:exfun} in Section~\ref{s:exfun}.
\section{Notation and tools}
Let $G$ be a graph. If $x,y$ are adjacent vertices of $G$, then we denote by $G/xy$ the graph obtained from $G$ by contracting the edge $xy$ and deleting all resulting parallel edges. We simply write $G/e$ if $e=xy$. If $u,v$ are distinct nonadjacent vertices of $G$, then by $G+uv$ we denote the graph obtained from $G$ by adding an edge with ends $u$ and $v$. If $u,v$ are adjacent or equal, then we define $G+uv$ to be $G$. Similarly, if $M\subseteq E(G)\cup E(\overline{G})$, then by $G+M$ we denote the graph obtained from $G$ by adding all the edges of $M$ to $G$. Every edge in $\overline{G}$ is called a \dfn{missing edge} of $G$. For a vertex $x\in V(G)$, we will use $N(x)$ to denote the set of vertices in $G$ which are adjacent to $x$. We define $N[x] = N(x) \cup \{x\}$. The degree of $x$ is denoted by $d_G(x)$ or simply $d(x)$. If $A, B\subseteq V(G)$ are disjoint, we say that $A$ is \emph{complete} to $B$ if each vertex in $A$ is adjacent to all vertices in $B$, and $A$ is \emph{anticomplete} to $B$ if no vertex in $A$ is adjacent to any vertex in $B$. If $A=\{a\}$, we simply say $a$ is complete to $B$ or $a$ is anticomplete to $B$. We use $e(A, B)$ to denote the number of edges between $A$ and $B$ in $G$. The subgraph of $G$ induced by $A$, denoted by $G[A]$, is the graph with vertex set $A$ and edge set $\{xy \in E(G) \mid x, y \in A\}$. We denote by $B \setminus A$ the set $B - A$, and $G \setminus A$ the subgraph of $G$ induced on $V(G) \setminus A$, respectively. If $A = \{a\}$, we simply write $B \setminus a$ and $G \setminus a$, respectively. An $(A, B)$-path in $G$ is a path with one end in $A$ and the other in $B$ such that all its internal vertices lie in $G\setminus (A\cup B)$. We simply say an $(a, B)$-path if $A=\{a\}$. It is worth noting that each vertex in $A \cap B$ is an $(A, B)$-path.
For a positive integer $k$, a $k$-vertex is a vertex of degree $k$, and a $k$-clique is a set of $k$ pairwise adjacent vertices.
Let $\mathcal{F}$ be a family of graphs. A graph $G$ is \emph{$\mathcal{F}$-free} if it has no subgraph isomorphic to $H$ for every $H\in\mathcal{F}$. We simply say $G$ is $H$-free if $\mathcal{F}=\{H\}$. The \dfn{join} $G+H$ (resp. \dfn{union} $G\cup H$) of two vertex-disjoint graphs $G$ and $H$ is the graph having vertex set $V(G)\cup V(H)$ and edge set $E(G)
\cup E(H)\cup \{xy\, |\, x\in V(G), y\in V(H)\}$ (resp. $E(G)\cup E(H)$). We use the convention ``$A:=$" to mean that $A$ is defined to be the right-hand side of the relation. Finally, if $H$ is a connected subgraph of a graph $G$ and $y \in V(H)$, we say that we \textit{contract $H \setminus y$ onto $y$} when we contract $H$ to a single vertex, that is, contract all the edges of $H$.
To prove \cref{t:main}, we need to investigate the basic properties of contraction-critical graphs. For a positive integer $k$, a graph $G$ is \dfn{$k$-contraction-critical} if $\chi(G)=k$ and every proper minor of $G$ is $(k-1)$-colorable.
Dirac~\cite{Dirac60} introduced the notion of contraction-critical graphs and proved \cref{l:alpha2} below; in the same paper he also proved that $5$-contraction-critical graphs are $5$-connected. The latter was then extended by Mader~\cite{7con} as stated in \cref{t:7conn}. It remains unknown whether every $k$-contraction-critical graph is $8$-connected for all $k\ge8$.
\begin{lem}[Dirac~\cite{Dirac60}]\label{l:alpha2} Let $G$ be a $k$-contraction-critical graph. Then for each $v\in V(G)$, \[\alpha(G[N(v)])\le d(v)-k+2.\] \end{lem}
\begin{thm}[Mader~\cite{7con}]\label{t:7conn} For all $k \ge 7$, every $k$-contraction-critical graph is $7$-connected. \end{thm}
\cref{l:wonderful} on contraction-critical graphs turns out to be very powerful, as the existence of pairwise vertex-disjoint paths is guaranteed without using the connectivity of such graphs. Recall that every edge in $\overline{H}$ is a \dfn{missing edge} of a graph $H$.
\begin{lem}[Rolek and Song~\cite{RolekSong17a}]\label{l:wonderful} Let $G$ be any $k$-contraction-critical graph. Let $x\in V(G)$ be a vertex of
degree $k + s$ with $\alpha(G[N(x)]) = s + 2$ and let $S \subset N(x)$ with
$ |S| = s + 2$ be any independent set, where $k \ge 4$ and $s \ge 0$ are integers.
Let $M$ be a set of missing edges of $G[N(x) \setminus S]$. Then there
exists a collection $\{P_{uv}\mid uv\in M\} $ of paths in $G$ such that
for each $uv\in M$, $P_{uv}$ has ends $u, v$ and all its internal vertices
in $G \setminus N[x]$. Moreover, if vertices $u,v,w,z$ with $uv,wz\in M$ are distinct, then
the paths $P_{uv}$ and $P_{wz}$ are vertex-disjoint.
\end{lem}
The proof of \cref{l:wonderful} uses Kempe chains. Using a result of Mader~\cite{7con} on rooted $K_4$ minors and the proof of \cref{l:wonderful}, the present authors~\cite{K84} proved a strengthened version of the remark given in \cite[Page 17]{RolekSong17a}.
\begin{lem}[Lafferty and Song~\cite{K84}]\label{l:rootedK4} Let $G$ be any $k$-contraction-critical graph. Let $x\in V(G)$ be a vertex of
degree $k + s$ with $\alpha(G[N(x)]) = s + 2$ and let $S \subset N(x)$ with
$ |S| = s + 2$ be any independent set, where $k \ge 4$ and $s \ge 0$ are integers.
If \[M=\{x_1y_1, x_1y_2, x_2y_1, x_2y_2, a_1b_{11}, \dots, a_1b_{1r_1}, \dots, a_mb_{m1}, \dots, a_mb_{mr_m}\}\] is a set of missing edges of $G[N(x)\setminus S]$, where the vertices $x_1, x_2, y_1, y_2, a_1, \dots, a_m,
b_{11}, \dots, b_{mr_m}\in N(x)\setminus S$ are all distinct, and for all $1\le i \le m$, $a_ib_{i1}, \dots, a_ib_{ir_i}$ are $r_i$ missing edges with $a_i$ as a common end, and $x_1x_2, y_1y_2\in E(G)$, then $G \succcurlyeq G[N[x]]+M$. \end{lem}
\begin{rem}\label{r:rootedK4} As observed in \cite{K84}, \cref{l:rootedK4} can be applied when \[M=\{x_1y_1, x_1y_2, x_2y_1, x_2y_2, a_1b_{11}, \dots, a_1b_{1r_1}, \dots, a_mb_{m1}, \dots, a_mb_{mr_m}\}\] is a subset of edges and missing edges of $G[N(x)\setminus S]$, where $x_1, x_2, y_1, y_2, a_1, \dots, a_m,
b_{11}, \dots, b_{mr_m}\in N(x)\setminus S$ are all distinct, and $x_1x_2, y_1y_2\in E(G)$. Under those circumstances, it suffices to apply \cref{l:rootedK4} to $M^*$, where $M^*= \{e\in M\mid e \text{ is a missing edge of } G[N(x)\setminus S]\}$. It is straightforward to see that $G\succcurlyeq G[N[x]]+M$.
\end{rem}
\begin{figure}
\caption{The nine possibilities for three $5$-cliques in Theorem~\ref{t:goodpaths}.}
\label{fig:threek5s}
\end{figure}
Finally we need a tool to find a desired $\km{9}{6}$ minor through three different $6$-cliques in $7$-connected graphs. This method was first introduced by Robertson, Seymour and Thomas~\cite{RST} to prove Hadwiger's Conjecture for $t=6$: they found a desired $K_6$ minor via three different $4$-cliques in $6$-connected non-apex graphs. The method was later extended by Kawarabayashi and Toft~\cite{KT05} to find a desired $K_7$ minor via three different $5$-cliques in $7$-connected graphs. It is worth noting that \cref{t:goodpaths} corresponds to \cite[Lemma 5]{KT05}, where the existence of such seven ``good paths" follows from the proof of \cite[Lemma 5]{KT05}.
\begin{thm}[Kawarabayashi and Toft~\cite{KT05}]\label{t:goodpaths}
Let $G$ be a $7$-connected graph such that $|G| \geq 19$. Let $L_1, L_2$, and $L_3$ be three different $5$-cliques of $G$ such that $|L_1\cup L_2\cup L_3|\ge 12$, that is, they fit into one of the nine configurations depicted in Figure~\ref{fig:threek5s}. Then $G$ has seven pairwise vertex-disjoint ``good paths", where a ``good path" is an $(L_i, L_j)$-path in $G$ with $i \neq j$. \end{thm}
\section{Coloring graphs with no $\km{9}{6}$ minor}\label{s:coloring}
We first use \cref{t:goodpaths} to prove a lemma that finds a $\km{9}{6}$ minor via three different $6$-cliques in $7$-connected graphs with at least $19$ vertices.
\begin{lem} \label{l:threek6s}
Let $G$ be a $7$-connected graph such that $|G|\ge19$. If $L_1$, $L_2$, and $L_3$ are three $6$-cliques of $G$ satisfying \[\min\{|L_1 \setminus (L_2 \cup L_3)|, |L_2 \setminus (L_1 \cup L_3)|, |L_3 \setminus (L_1 \cup L_2)| \} \geq 1,\tag{$*$}\] then $G$ has a $\km{9}{6}$ minor. \end{lem}
\begin{proof} Suppose $G$ has no $\km{9}{6}$ minor. By the assumption ($*$), we see that $|L_1 \cap L_2\cap L_3| \le 5$, and $|L_i \cap L_j| \le 5$ for $1\le i<j\le 3$.
We first observe that $G$ is $\km{8}{5}$-free: suppose $G$ has an $H$ subgraph for some $H\in \km{8}{5}$. Since $G$ is $7$-connected, we see that there are at least seven pairwise disjoint $(V(H), V(C))$-paths in $G$ for every component $C$ of $G \setminus V(H)$. Thus we obtain a $\km{9}{6}$ minor by contracting a component of $G \setminus V(H)$ to a single vertex, a contradiction. It follows that $|L_1 \cap L_2\cap L_3| \ne 5$, and $|L_i \cap L_j| \ne 4$ for $1\le i<j\le 3$, for otherwise $G[L_1 \cup L_2\cup L_3]$ or $G[L_i \cup L_j]$, respectively, would not be $\km{8}{5}$-free. We may assume that $|L_1 \cap L_2| \ge |L_1 \cap L_3| \ge |L_2 \cap L_3|$.
Suppose first $|L_1 \cap L_2| \le 1$. Let $L_i'$ be a $5$-clique of $L_i$ for each $i\in[3]$ such that $L_i\cap L_j=L_i'\cap L_j'$ for $1\le i<j\le 3$. Then $L_1', L_2'$ and $ L_3'$ fit into one of the five configurations in Figure~\ref{fig:threek5s}(a,c,d,g,i). By Theorem~\ref{t:goodpaths} applied to $L_1', L_2'$ and $ L_3'$, there exist seven pairwise vertex-disjoint ``good paths", say $Q_1, \ldots, Q_7$, between $L_1, L_2$, and $L_3$; we choose $Q_1, \ldots, Q_7$ so that $|V(Q_1)|+ \cdots+|V(Q_7)|$ is as small as possible. It follows that no internal vertex of each $Q_i$ belongs to $L_1 \cup L_2 \cup L_3$, and no vertex of $L_i\cap L_j$ belongs to a ``good path'' of length at least one. Let $t_{i,j}$ denote the number of ``good paths" between $L_i$ and $L_j$ for $1\le i<j\le 3$. We may assume that $t_{1,2}\ge t_{1,3}\ge t_{2,3}$. Then $3\le t_{1,2}\le 5$. We may further assume that $Q_6$ and $Q_7$ are $(L_1, L_2)$-paths of length at least one. Suppose $t_{1,2}= 5$. By contracting each of $Q_1, \ldots, Q_5$ to a single vertex, all the edges, but one, of $Q_6$ (that is, contracting $Q_6$ to a $K_2$), and all the edges, but one, of $Q_7$ (that is, contracting $Q_7$ to a $K_2$), we see that $G\succcurlyeq \km{9}{6}$, a contradiction. Thus $3\le t_{1,2}\le 4$. Recall that $ t_{1,3}\ge t_{2,3}$.
Let $x\in L_2$; in addition, let $y\in L_1\cup L_2$ with $y\ne x$ when $t_{1,2}=3$, such that neither $x$ nor $y$ is an end of any ``good path". But now contracting each of $Q_1, \ldots, Q_7$ to a single vertex, together with $x$ and $y$, yields a $ \km{9}{6}$ minor in $G$ when $t_{1,2}=3$, and contracting each of $Q_1, \ldots, Q_6$ to a single vertex and $Q_7$ to a $K_2$, together with $x$, yields a $ \km{9}{6}$ minor in $G$ when $t_{1,2}=4$, a contradiction.
This proves that $|L_1 \cap L_2| \ge 2$. Let $a_1, \ldots, a_p\in L_1\cap L_2$, where $p:=|L_1\cap L_2|$. Then $p=5$ or $2\le p\le 3$.
Suppose next $2\le p\le 3$.
By Menger's Theorem, there exist $6-p\ge3$ pairwise vertex-disjoint $(L_1 \setminus L_2, L_2 \setminus L_1)$-paths, say $Q_1, \ldots, Q_{6-p}$, in $G \setminus \{ a_1, \ldots, a_p\}$. But then we obtain a $\km{9}{6}$ minor in $G$ from $G[L_1\cup L_2]$ by contracting each of $Q_1, Q_2, Q_3$ to a $K_2$; in addition, contracting $Q_4$ to a single vertex when $p=2$, a contradiction.
It remains to consider the case $p=5$. Let $x \in L_1 \setminus L_2 $ and $ y \in L_2 \setminus L_1 $. By the assumption ($*$), $x, y\notin L_3$. Let $z\in L_3\setminus (L_1\cup L_2)$. Since $G$ is $\km{8}{5}$-free, we see that $|L_1 \cap L_2\cap L_3|\le 2$, else $G[L_1\cup L_2\cup\{z\}]$ is not $\km{8}{5}$-free. Suppose $ |L_1 \cap L_2\cap L_3|= 2$. We may assume $a_4, a_5\in L_1 \cap L_2\cap L_3 $. Let $z'\in L_3\setminus(L_1\cup L_2)$ such that $z'\ne z$. Then $G\setminus \{a_4, a_5\}$ has five pairwise internally vertex-disjoint $(z, \{x, a_1, a_2, a_3, y\})$-paths, say $Q_1, \ldots, Q_5$. We may assume that $z'$ does not belong to $Q_1, \ldots, Q_4$. Let $Q_5^*$ be the $(z', w)$-subpath of $Q_5$ when $z'$ lies on $Q_5$, where $w$ is the other end of $Q_5$. Then $G\succcurlyeq \km{9}{6}$ from $G[L_1\cup L_2\cup\{z,z'\}]$ by contracting each of $Q_1\setminus z, \ldots, Q_5\setminus z$ to a single vertex when $z'\notin V(Q_5)$; and each of $Q_1\setminus z, \ldots, Q_4\setminus z $, and $Q_5^* \setminus z'$ to a single vertex when $z'\in V(Q_5)$, a contradiction.
This proves that $ |L_1 \cap L_2\cap L_3|\le 1$. By Menger's Theorem, $G\setminus y$ has six pairwise vertex-disjoint $(L_3, L_1)$-paths, say $Q_1, \dotsc, Q_6$. We may assume that $a_i$ is an end of $Q_i$ for each $i \in [5]$. Then $x$ is an end of $Q_6$. We may further assume that $a_5\notin L_3$. But then we obtain a $\km{9}{5}$ minor in $G$ from $G[L_1\cup L_2\cup L_3]$ by contracting each of $Q_1, \ldots, Q_4, Q_5\setminus a_5, Q_6\setminus x$ to a single vertex, a contradiction.
This completes the proof of \cref{l:threek6s}. \end{proof}
\begin{figure}
\caption{Six $K_5$-free graphs $H$ with $|H|=9$ and $\alpha(H)=2$.}
\label{fig:counter}
\end{figure}
\begin{lem} \label{l:H9}
Let $H$ be a graph such that $|H| = 9$ and $\alpha(H) = 2$. Then $H$ contains $K_5$ as a subgraph or one of the graphs in Figure~\ref{fig:counter} as a spanning subgraph. \end{lem} \begin{proof}
Suppose $H$ is $K_5$-free, and $H_i$-free for each $H_i$ given in Figure~\ref{fig:counter}. We may assume that $H$ is edge-minimal subject to being $K_5$-free and $\alpha(H)=2$. Then $H$ has no dominating edge, where an edge $xy\in E(H)$ is dominating if every vertex in $V(H)\setminus\{x, y\}$ is adjacent to $x$ or $y$. This implies that $\Delta(H)\le7$ and
\noindent (a) no vertex in $N(v)$ is complete to $V(H)\setminus N[v]$ for each $v\in V(H)$.
Since $\alpha(H)=2$, we see that, for each $v\in V(H)$, $V(H)\setminus N[v]$ is a clique, and so $|H\setminus N[v]|\le4$ because $H$ is $K_5$-free. Then $\delta(H) \ge 4$ and $H[N(v)]$ is $K_4$-free for each $v\in V(H)$. By (a), $\Delta(H)\le 6$.
Let $x\in V(H)$ be a vertex of degree $\Delta(H)$. Let $N(x):=\{x_1, \ldots, x_{d(x)}\}$ and $V(H)\setminus N[x]:=\{y_1, \ldots, y_{8-d(x)}\}$. Suppose $d(x)=4$. Then $V(H)\setminus N[x]$ is a $4$-clique, and each $y_i$ is adjacent to exactly one vertex in $N(x)$. We may assume that $x_1y_1\in E(H)$. We may further assume that $x_1y_4\notin E(H)$ and $x_4y_4\in E(H)$. Then $\{x_2, x_3, x_4\}$ is a $3$-clique, and $x_1$ is complete to $\{x_2, x_3\}$ because $y_4$ is anticomplete to $\{x_2, x_3\}$. But then $x_4$ is complete to $\{y_2, y_3\}$ because $x_1$ is anticomplete to $\{x_4, y_2,y_3\}$, contrary to the fact that $\Delta(H)=4$. Suppose next $d(x)=6$. Then $V(H)\setminus N[x]=\{y_1, y_2\}$, and by (a), both $y_1$ and $y_2$ are $4$-vertices such that $y_1$ and $y_2$ have no common neighbor in $N(x)$. Thus $N(y_1)\cap N(x)$ and $N(y_2)\cap N(x)$ are disjoint $3$-cliques in $H$. But then $H$ contains $H_1$ as a subgraph, a contradiction. This proves that $d(x)=5$, and so $N(x)=\{x_1, \ldots, x_5\}$ and $V(H)\setminus N[x]=\{y_1, y_2, y_3\}$.
Suppose $H[N(x)]$ is $K_3$-free. Note that $\alpha(H[N(x)])=2$. Thus $H[N(x)]=C_5$, say with vertices $x_1, \ldots, x_5$ in order. Since $d(y_1)\le 5$, we may assume that $y_1$ is anticomplete to $\{x_4, x_5\}$. Then $y_1$ is complete to $\{x_1, x_2, x_3\}$. By (a) applied to $\{x_1, x_2, x_3\}$, we may further assume that $y_3$ is anticomplete to $\{x_1, x_2\}$. Then $y_3$ is complete to $\{x_3, x_4, x_5\}$ and so $y_2x_3\notin E(H)$. Then $y_2$ is complete to $\{x_1, x_5\}$; in addition, $y_2$ is adjacent to exactly one of $x_2$ and $x_4$ because $\alpha(H)=2$ and $\Delta(H)=5$. It follows that $H$ contains $H_2$ as a subgraph, a contradiction. This proves that
\noindent (b) $H[N(v)]$ contains $K_3$ as a subgraph for every $5$-vertex $v$.
Note that $\delta(H[N(x)])\ge 1$ because $H$ is $K_5$-free. Suppose $\delta(H[N(x)])= 1$. We may assume that $x_5x_4\in E(H)$ and $x_5$ is anticomplete to $\{x_1, x_2, x_3\}$. By (a), we may assume that $x_5y_1\notin E(H)$. Then $x_5$ is a $4$-vertex, $\{y_1, x_1, x_2, x_3\}$ is a $4$-clique, and $x_5$ is complete to $\{y_2, y_3\}$. By (a) applied to $\{x_1, x_2, x_3\}$, we may assume that $y_2$ is anticomplete to $\{x_2, x_3\}$. Suppose $x_4y_2\notin E(H)$. Then $x_4$ is complete to $\{x_2, x_3\}$ and $y_2x_1, x_4y_3\in E(H)$. Thus $H$ contains $H_3$ as a subgraph, a contradiction. It follows that $x_4y_2\in E(H)$. Then $x_4y_3\notin E(H)$, else $H$ contains $H_4$ as a subgraph. Note that each of $x_1, x_2, x_3$ is adjacent to exactly one of $x_4$ and $y_3$; and either $e_H(\{x_1, x_2, x_3\}, x_4)=2$ and $e_H(\{x_1, x_2, x_3\}, y_3)=1$, or $e_H(\{x_1, x_2, x_3\}, x_4)=1$ and $e_H(\{x_1, x_2, x_3\}, y_3)=2$. In the former case, we may assume that $x_1y_3, x_2x_4, x_3x_4\in E(H)$; thus $H$ contains $H_3$ as a subgraph, a contradiction. In the latter case, we may assume that $x_1y_3, x_2y_3, x_3x_4\in E(H)$; again $H$ contains $H_3$ as a subgraph by drawing the graph $H$ according to the $5$-vertex $y_1$, a contradiction. This proves that
\noindent (c)\, $\delta(H[N(v)])\ge 2$ for every $5$-vertex $v$.
By (b), we may assume that $\{x_1, x_2, x_3\}$ is a clique. Suppose $x_4x_5\notin E(H)$. Note that neither $x_4$ nor $x_5$ is complete to $\{x_1, x_2, x_3\}$, and no vertex in $\{x_1, x_2, x_3\}$ is anticomplete to $\{ x_4, x_5\}$. By (c), we may assume that $x_4$ is complete to $\{x_1, x_2\}$ and $x_5$ is complete to $\{x_2, x_3\}$. Then $x_1x_5, x_3x_4\notin E(H)$. Note that each $y_j$ is adjacent to at least two vertices in $N(x)$ for each $j\in[3]$. However, each of $x_1$ and $x_3$ is adjacent to at most one vertex in $\{y_1, y_2, y_3\}$; each of $x_4$ and $x_5$ is adjacent to at most two vertices in $\{y_1, y_2, y_3\}$; and $x_2$ is anticomplete to $\{y_1, y_2, y_3\}$. It follows that $e_H(\{y_1, y_2, y_3\}, N(x))=6$; and every $x_i$ is a $5$-vertex and every $y_j$ is a $4$-vertex for each $i\in[5]$ and $j\in[3]$. We may assume that $x_4$ is complete to $\{y_1, y_2\}$. Then $x_4y_3\notin E(H)$ and so $y_3$ is complete to $\{x_3, x_5\}$. But then $y_3$ is adjacent to $x_5$ only in $H[N(x_3)]$, contrary to (c). This proves that
\noindent (d)\, $H[N(v)]$ is $K_3\cup \overline{K}_2$-free for every $5$-vertex $v$.
It remains to consider the case
$x_4x_5\in E(H)$. Suppose $x_i$ is complete to $\{x_4, x_5\}$ for some $i\in[3]$, say $i=3$. We may assume that $x_1x_4\notin E(H)$ because $H[N(x)]$ is $K_4$-free. By (d) applied to $H[N(x)]$, we have $x_2x_5\notin E(H)$; in addition, either $x_1x_5, x_2x_4\notin E(H) $, or $x_1x_5, x_2x_4\in E(H) $. Since $e_H(\{y_1, y_2, y_3\}, N(x))\ge 6$, we see that $\{x_1, x_2\}$ is anticomplete to $\{x_4, x_5\}$. By (a), we may assume that $x_1y_3, x_4y_j\notin E(H)$ for some $j\in[3]$. Then $y_3$ is complete to $\{x_4, x_5\}$. Thus $j\ne 3$. We may assume that $j=1$. Then $y_1$ is complete to $\{x_1, x_2\}$. Since $H$ is $H_5$-free, we see that $y_2$ is not complete to $\{x_2, x_4\}$. We may assume that $x_2y_2\notin E(H)$. Then $y_2$ is complete to $\{x_4, x_5\}$ because $\{x_1, x_2\}$ is anticomplete to $\{x_4, x_5\}$. But then $H$ contains $H_6$ as a subgraph, a contradiction. This proves that no $x_i$ is complete to $\{x_4, x_5\}$ for each $i\in[3]$. By (c), we may assume that $x_2x_4, x_3x_5\in E(H)$. Then $x_2x_5, x_3x_4\notin E(H)$. By (a), we may assume that $x_5y_3\notin E(H)$. Then $y_3x_2\in E(H)$. By (d) applied to $H[N(x_2)]$, we have $y_3x_4\in E(H)$. By (a), we may assume that $x_4y_2\notin E(H)$. Then $x_4$ is anticomplete to $\{x_3, y_2\}$, and so $x_3y_2\in E(H)$. Thus $y_1$ is anticomplete to $\{x_2, x_3\}$, and so $y_1$ is complete to $\{x_4, x_5\}$. But then $x_4$ is a $5$-vertex such that $H[N(x_4)]=C_5$, contrary to (b).
This completes the proof of \cref{l:H9}.
\end{proof}
\begin{figure}
\caption{Graphs in \cref{fig:counter} with bold vertices and edges depicted, and dashed edges added.}
\label{fig:wonderful}
\end{figure}
We are now ready to prove Theorem~\ref{t:main}, which we restate for convenience.\main*
\begin{proof} Suppose the assertion is false. Let $G$ be a graph with no $\km{9}{6}$ minor such that $\chi(G) \ge 9$. We may choose such a graph $G$ so that it is $9$-contraction-critical. Then $\delta(G)\ge8$, $G$ is $7$-connected by \cref{t:7conn}, and $\delta(G) \le 9$ by \cref{t:exfun}.
Let $x\in V(G)$ be of minimum degree. Since $G$ is $9$-contraction-critical and has no $\km{9}{6}$ minor, by \cref{l:alpha2} applied to $G[N(x)]$, we see that $\delta(G)=9$ and $\alpha(G[N(x)]) = 2$. We next prove that $G[N(x)]$ contains a $5$-clique. Suppose $G[N(x)]$ is $K_5$-free. By \cref{l:H9}, $G[N(x)]$ contains a spanning subgraph isomorphic to one of the graphs in Figure~\ref{fig:counter}. Let $A$ be the set of all bold vertices, $M$ be the set of all dashed edges and $e$ the bold edge in each $H_i$ given in Figure~\ref{fig:wonderful}. Since $G[N(x)]$ is $K_5$-free, we see that $A$ is not a clique in $G$. Let $S$ be a set of two nonadjacent vertices in $A$. Note that no vertex in $S$ is incident with any dashed edges in $M$. By \cref{l:rootedK4} applied to $G[N(x)]$ with $S$ and $M$ given above, we see that $G\succcurlyeq G[N[x]]+M\succcurlyeq \km{9}{6}$ by contracting the bold edge $e$, a contradiction. This proves that $G[N(x)]$ contains a $5$-clique for all such $9$-vertices $x$ in $G$. Let $n_9$ denote the number of $9$-vertices in $G$. Then $e(G)\ge \big(9n_9+10(|G|-n_9)\big)/2=(10|G|-n_9)/2 $. By \cref{t:exfun}, $5|G|-15\ge e(G)\ge (10|G|-n_9)/2$, that is, $10|G|-30\ge 10|G|-n_9$. It follows that $n_9\ge30$. Then $G$ contains at least three pairwise nonadjacent $9$-vertices. Let $x_1, x_2, x_3\in V(G) $ be three pairwise nonadjacent $9$-vertices in $G$. For each $i\in[3]$, $G[N(x_i)]$ has a $5$-clique; let $L_i $ be a $6$-clique of $G[N[x_i]]$ such that $x_i\in L_i $. Then
\[\min\{|L_1 \setminus (L_2 \cup L_3)|, |L_2 \setminus (L_1 \cup L_3)|, |L_3 \setminus (L_1 \cup L_2)| \} \geq 1.\]
By \cref{l:threek6s}, $G\succcurlyeq\km{9}{6}$, a contradiction.
This completes the proof of \cref{t:main}.
\end{proof}
\section{An extremal function for $\km{9}{6}$ minors}\label{s:exfun}
Throughout this section, if $G$ is a graph and $K$ is a subgraph of $G$, then by $N(K)$ we denote the set of vertices of $V(G)\setminus V(K)$ that are adjacent to a vertex of $K$.
If $V(K)=\{x\}$, then $N(K)=N(x)$. It can be easily checked that for each vertex $x\in V(G)$, if $K$ is a component of $G\setminus N[x]$, then $N(K)$ is a minimal separating set of $G$.
Lemma~\ref{l:k4-} follows from the proof of Lemma~16 of J\o rgensen~\cite{Jor01}. A proof can be found in~\cite{K84}.
\begin{lem}[J\o rgensen~\cite{Jor01}]\label{l:k4-} Let $G$ be a $4$-connected graph and let $S\subseteq V(G)$ be a separating set of four vertices. Let $G_1$ and $G_2$ be proper subgraphs of $G$ so that $G_1\cup G_2=G$ and $G_1\cap G_2=G[S]$. Let $d_1$ be the largest integer so that $G_1$ contains pairwise disjoint sets of vertices $V_1, V_2, V_3, V_4$ so that $G_1[V_j]$ is connected,
$|S\cap V_j|=1$ for $1\le j\le 4$, and so that the graph obtained from $G_1$ by contracting each of $G_1[V_1], G_1[V_2],G_1[V_3],G_1[V_4]$ to a single vertex and deleting $V(G_1)\setminus \bigcup_{j=1}^4 V_j$ has
$e(G[S])+d_1$ edges. If $|G_1|\ge 6$, then \[e(G[S]) + d_1 \geq 5.\] \end{lem}
We next prove a lemma that will be needed in the proof of Theorem~\ref{t:exfun}.
\begin{lem} \label{l:computer} Let $H$ be a graph on eight or nine vertices. If $\delta(H) \geq 5$, then
$H$ has a vertex $v$ such that $H \setminus v$ has a $ \km{7}{6}$ minor. \end{lem}
\begin{proof} We may assume that $\delta(H) =5$ and every edge is incident with a $5$-vertex in $H$. Suppose $H \setminus v $ has no $\km{7}{6}$ minor for every $v\in V(H)$. Then $|H|=9$, else for any $5$-vertex $v$ in $H$, $e(H\setminus v)\ge 20-5=e(K_7)-6$, and so $H\setminus v$ has a $\km{7}{6}$ minor, a contradiction. We claim that some edge in $H$ belongs to at most two triangles. Suppose not. Then every edge in $H$ belongs to at least three triangles. Let $x$ be a $5$-vertex in $H$. Then $\delta(H[N(x)])\ge3$, and so $H[N(x)]$ contains $K_5^=$ as a spanning subgraph; in addition, every vertex in $H\setminus N[x]$ is adjacent to at least three vertices in $N(x)$. It follows that $H[N[x]\cup\{y\}]\succcurlyeq \km{7}{5}$, where $y\in V(H)\setminus N[x]$, a contradiction. Thus there exists an edge $uw\in E(H)$ such that $uw$ belongs to at most two triangles. Let $H^*:=H/uw$. Then $e(H^*)\ge e(H)-3\ge 23-3=20$. Note that $|H^*|=8$. Similar to the case when $|H|=8$, we see that if $\delta(H^*)\ge5$, then $H^*\setminus v$ has a $\km{7}{6}$ minor for any $5$-vertex $v$ in $H^*$. Thus $\delta(H^*)\le 4$. Let $v\in V(H^*)$ such that $d_{H^*}(v)\le4$. Then $ e(H^*\setminus v)\ge 20-4=e(K_7)-5$, and so $ H^*\setminus v$ has a $\km{7}{5}$ minor, a contradiction. \end{proof}
We are now ready to prove Theorem~\ref{t:exfun}, which we restate for convenience. \exfun*
\begin{proof} Suppose the assertion is false. Let $G$ be a graph on $n \geq 9$ vertices with $ e(G) \geq 5n-14$ and, subject to this, $n$ is minimum. We may assume that $e(G) = 5n-14$. It is straightforward to check that $G\succcurlyeq \km{9}{6}$ when $n=9$. Thus $n \geq 10$. We next prove several claims.
\setcounter{counter}{0} \noindent {\bf Claim\refstepcounter{counter} \label{c:d5} \arabic{counter}.}
$\delta(G) \geq 6$.
\begin{proof} Suppose $\delta(G)\le 5$. Let $x\in V(G)$ with $d(x)\le 5$. Then
\[e(G \setminus x) =e(G)-d(x)\ge (5n-14)-5 =5(n-1)-14.\]
Thus $G \setminus x $ has a $ \km{9}{6}$ minor by the minimality of $G$, a contradiction. \end{proof}
\noindent {\bf Claim\refstepcounter{counter}\label{c:triangles} \arabic{counter}.} Every edge in $G$ belongs to at least five triangles. Moreover, $G[N[x]]=K_7$ if $x\in V(G)$ is a $6$-vertex, and $G[N[x]]$ contains a $K_8^\equiv$ subgraph if $x\in V(G)$ is a $7$-vertex.
\begin{proof}
Suppose there exists an edge $uv \in E(G)$ such that $uv$ belongs to at most four triangles. Then
\[e(G / uv) \ge (5n-14)-5 =5|G/uv| -14.\]
Thus $G /uv\succcurlyeq \km{9}{6}$ by the minimality of $G$, a contradiction. Since every edge in $G$ belongs to at least five triangles, we see that $G[N[x]]=K_7$ for each $6$-vertex $x$ in $G$, and $G[N[y]]$ contains a $K_8^\equiv$ subgraph for each $7$-vertex $y$ in $G$. \end{proof}
\noindent {\bf Claim\refstepcounter{counter}\label{c:n12} \arabic{counter}.} $n\ge12$.
\begin{proof} Suppose $10\le n\le 11$. Let $x\in V(G)$ be a vertex of degree $\delta(G)$. Then $d(x)\le 7$ because $e(G) = 5n-14$. By Claim~\ref{c:d5}, $6\le d(x)\le 7$. Suppose $ d(x)=7$. Then $G[N[x]]$ contains a $K_8^\equiv$ subgraph by Claim~\ref{c:triangles}, and every vertex in $ V(G)\setminus N[x]$ is adjacent to at least five vertices in $N(x)$. But then $G\succcurlyeq G[\{v\}\cup N[x]] \succcurlyeq\km{9}{6}$ for each $ v\in V(G)\setminus N[x]$, a contradiction. Thus $d(x)=6$. Then $G[N[x]]=K_7$ by Claim~\ref{c:triangles}. Let $y, z\in V(G)\setminus N[x]$. Then $e_G(\{y, z\}, N(x))\ge 2(6-(n-8))=28-2n$. If $e_G(\{y, z\}, N(x))\ge 9$, or $e_G(\{y, z\}, N(x))\ge 8$ and $yz\in E(G)$, then $G[\{y, z\}\cup N[x]]\succcurlyeq \km{9}{6}$, a contradiction. Thus $ e_G(\{y, z\}, N(x)) =8 $ and $yz\notin E(G)$, or $6\le e_G(\{y, z\}, N(x)) \le 7 $. In the former case, there exists $w\in V(G)\setminus N[x]$ such that $w$ is complete to $\{y,z\}$. But then $G[\{y, z,w\}\cup N[x]]/wz\succcurlyeq \km{9}{6}$. Thus $6\le e_G(\{y, z\}, N(x)) \le 7 $ for any two vertices $y, z\in V(G)\setminus N[x]$. It follows that $n=11$, $V(G)\setminus N[x]$ is a $4$-clique, no vertex of $V(G)\setminus N[x]$ has degree at least eight, and at least three vertices of $V(G)\setminus N[x]$ are $6$-vertices in $G$. Thus $e_G(V(G)\setminus N[x], N(x))\le 12+1=13$. But then \[e(G)=e(G[N[x]])+e_G(V(G)\setminus N[x], N(x))+e(G\setminus N[x])\le 21+13+6< 5\times 11-14,\] which is impossible. \end{proof}
\noindent {\bf Claim\refstepcounter{counter}\label{c:55} \arabic{counter}.} No three $6$-vertices in $G$ are pairwise adjacent.
\begin{proof} Suppose there exist three distinct $6$-vertices, say $x, y, z$, in $G$ such that $\{x,y,z\}$ is a $3$-clique.
Then $|G \setminus \{ x, y,z \}|=n-3 \geq 9$ by Claim~\ref{c:n12}, and \[ e(G \setminus \{ x, y,z \}) = e(G)-15= 5(n-3) - 14.\]
Thus $G \setminus \{ x, y,z \}$ has a $\km{9}{6}$ minor by the minimality of $G$, a contradiction.
\end{proof}
Let $S$ be a minimal separating set of vertices in $G$, and let $G_1$ and $G_2$ be proper subgraphs of $G$ such that $G=G_1\cup G_2$ and $G_1\cap G_2=G[S]$. For each $i\in[2]$, let $d_i$ be the largest integer such that $G_i$ contains pairwise disjoint sets of vertices $V_1, \dots, V_p$ for which each $G_i[V_j]$ is connected,
$|S\cap V_j|=1$ for $1\le j\le p :=|S|$, and the graph obtained from $G_i$ by contracting each of $G_i[V_1], \dots, G_i[V_p]$ to a single vertex and deleting $V(G_i)\setminus \bigcup_{j=1}^p V_j$ has $e(G[S])+d_i$ edges. It follows from the minimality of $G$ that for each $i\in[2]$,
\[ e(G_i) + d_{3-i} \le 5|G_i| - 15 \,\, \text{ if } |G_i|\ge9.\tag{$ \bigstar$}\]
\noindent {\bf Claim\refstepcounter{counter}\label{c:nodis7} \arabic{counter}.}
If $|G_i|=8$ for some $i\in [2]$, then $|S|\le 4$ and some vertex in $V(G_i)\setminus S$ is a $7$-vertex in $G$. \begin{proof}
Suppose, say, $|G_1| = 8$. Let $C$ be a component of $G_2 \setminus S$. We first prove that $G_1\setminus S$ is connected. Suppose not. By Claims~\ref{c:d5} and \ref{c:triangles}, $G_1\setminus S$ must contain two nonadjacent $6$-vertices in $G$ with $G[S]=K_6$. Thus $G_1=K_8^-$ and so $G\succcurlyeq \km{9}{3}$ by contracting $C$ to a single vertex, a contradiction. Thus $G_1\setminus S$ is connected. We next prove that some vertex in $V(G_1)\setminus S$ is a $7$-vertex in $G$. Suppose no vertex in $V(G_1)\setminus S$ is a $7$-vertex in $G$. Then every vertex in $V(G_1)\setminus S$ is a $6$-vertex in $G$. By Claim~\ref{c:triangles} and the fact that $G_1\setminus S$ is connected, we see that $V(G_1)\setminus S$ is a clique of order $8-|S| $. By Claim~\ref{c:55}, $|V(G_1)\setminus S|\le 2$. By the minimality of $S$, every vertex in $S$ is adjacent to at least one vertex in $V(G_1)\setminus S$. It follows that $|V(G_1)\setminus S|= 2$ and $G_1=K_8^-$. But then $G\succcurlyeq\km{9}{2}$ by contracting $C$ to a single vertex, a contradiction. Finally, let $x\in V(G_1)\setminus S$ be a $7$-vertex in $G$. By Claim~\ref{c:triangles}, $G_1=G[N[x]]$ contains $K_8^\equiv$ as a spanning subgraph. Thus $|S|\le 4$, else $G\succcurlyeq\km{9}{6}$ by contracting $C$ to a single vertex. \end{proof}
\noindent {\bf Claim\refstepcounter{counter}\label{c:no7} \arabic{counter}.} Neither $G_1$ nor $G_2$ has exactly eight vertices.
\begin{proof}
Suppose not, say $|G_1| = 8$. By Claim~\ref{c:nodis7}, $|S|\le 4$ and some vertex, say $x$, in $V(G_1)\setminus S$ is a $7$-vertex in $G$. By Claim~\ref{c:triangles}, $G_1$ contains $K_8^\equiv$ as a spanning subgraph.
We next prove that $|G_2|\ge 9$. Suppose $ |G_2|\le 8$. Note that $|G_2|\ge7$ by Claim~\ref{c:d5}. If $|G_2|=7$, then every vertex in $G_2\setminus S$ is a $6$-vertex. By Claim~\ref{c:triangles}, $G_2=K_7$; but then $V(G_2)\setminus S$ is a clique of order $7-|S|\ge3$ because $|S|\le 4$, contrary to Claim~\ref{c:55}.
Thus $|G_2|=8$ and $n=16-|S|$. By Claim~\ref{c:nodis7}, some vertex, say $y$, in $V(G_2)\setminus S$ is a $7$-vertex in $G$. By Claim~\ref{c:triangles}, $G_2$ contains $K_8^\equiv$ as a spanning subgraph.
Suppose $ |S|=4$. Then $G[S]=K_4$, else $G\succcurlyeq \km{9}{6}$ by contracting $G_2\setminus (S\cup\{y\})$ onto an end of a missing edge of $G[S]$. Thus $G_1=G_2=K_8^\equiv$: otherwise, say, $G_1$ contains $K_8^=$ as a spanning subgraph, and then $G[V(G_1)\cup\{y\}]\succcurlyeq\km{9}{6}$. But then $n=16-4=12$ and $e(G)=e(G_1)+e(G_2)-6=50-6=44<5\times 12-14$, a contradiction. Suppose next $ |S|= 3$. Then $n=13$; moreover, $G_1, G_2\in\{K_8^=, K_8^\equiv\}$: otherwise, say, $G_1 $ contains $K_8^-$ as a subgraph, and then $G[V(G_1)\cup\{y\}]\succcurlyeq \km{9}{6}$. But then $e(G[S])\ge2$ and $e(G)=e(G_1)+e(G_2)-e(G[S]) \le 26+26-2 =50<5\times 13-14$, a contradiction. Thus $|S|\le 2$. Then \[ 5(16-|S|)-14 = e(G)=e(G_1)+e(G_2)-e(G[S])\le 28+28-e(G[S]).\]
It follows that $|S|=2$, $G_1=G_2=K_8$ and $G[S]=\overline{K}_2$, which is impossible.
This proves that $|G_2|\ge 9$.
Recall that $G_1$ contains $K_8^\equiv$ as a subgraph. It is easy to check that $e(G[S])+d_1={{|S|}\choose2}$. Note that if $|S|=4$, then $G_1=K_8^\equiv$, else $G\succcurlyeq \km{9}{6}$ by contracting a component of $G_2\setminus S$ to a single vertex. But then \begin{align*} e(G_2)+d_1&=e(G)-e(G_1)+e(G[S])+d_1\\
&\ge 5n-14 -\big(28-\max\{0, 3(|S|-3)\}\big)+ {{|S|}\choose2} \\
&= \big(5\times (n-(8-|S|))-14\big) + 5\times (8-|S|) -\big(28-\max\{0, 3(|S|-3)\}\big)+{{|S|}\choose2}\\
&=(5 |G_2|-14) + 5\times (8-|S|) -\big(28-\max\{0, 3(|S|-3)\}\big)+{{|S|}\choose2}\\
&\ge 5|G_2|-14, \end{align*}
contrary to ($ \bigstar$) because $ |S|\le 4$ and $|G_2|\ge 9$. \end{proof}
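The last inequality in the display can be checked case by case: it reduces to the residual $5(8-|S|)-\big(28-\max\{0,\,3(|S|-3)\}\big)+\binom{|S|}{2}$ being nonnegative for $1\le |S|\le 4$. A quick script (an illustrative arithmetic check, not part of the proof) confirms this:

```python
# Check that 5*(8-s) - (28 - max(0, 3*(s-3))) + C(s,2) >= 0 for s = |S| in 1..4,
# which is exactly the slack needed to conclude e(G_2) + d_1 >= 5|G_2| - 14
# from the displayed chain of (in)equalities.
from math import comb

def residual(s):
    """Slack in the displayed inequality for a separator of size s = |S|."""
    return 5 * (8 - s) - (28 - max(0, 3 * (s - 3))) + comb(s, 2)

slacks = {s: residual(s) for s in range(1, 5)}   # {1: 7, 2: 3, 3: 0, 4: 1}
```

The slack vanishes only at $|S|=3$, so the inequality is tight there.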
Observe that, if $|G_1|\ge 9$ and $|G_2|\ge 9$, then by ($ \bigstar$), we have \begin{align*} 5n - 14= e(G) &= e(G_1) + e(G_2) - e(G[S]) \\
& \le (5|G_1|-15-d_2)+(5|G_2|-15-d_1)- e(G[S]) \\
&= 5(n + |S|) - 30 - d_1 - d_2 - e(G[S]).
\end{align*} It follows that
\[ 5|S| \ge 16 + d_1 + d_2 + e(G[S]) \, \, \text{ if } |G_1|\ge 9\, \text{ and } |G_2|\ge 9. \tag{$\blacklozenge$} \]
\noindent {\bf Claim\refstepcounter{counter}\label{c:not77} \arabic{counter}.}
For each $i\in[2]$, if $|G_i|=7$, then $|G_{3-i}|\ge9$; moreover, in that case $G[S]=K_5$ or $G[S]=K_6$.
\begin{proof}
Suppose $|G_1|=7$ but $|G_2|\le8$. By Claim~\ref{c:no7}, $|G_2|=7$.
By Claims~\ref{c:d5} and \ref{c:triangles}, $G_1=G_2=K_7$, and every vertex in $V(G_1)\setminus S$ is a $6$-vertex in $G$. By Claim~\ref{c:55}, $|V(G_1)\setminus S|\le 2$ and so $|S|\ge 5$. But then $n=14-|S|\le 9$, contrary to Claim~\ref{c:n12}. This proves that $|G_2|\ge9$. Moreover, the same argument shows that $G_1= K_7$ and $1\le |V(G_1)\setminus S|\le 2$ whenever $|G_1|=7$, and hence $G[S]=K_5$ or $G[S]=K_6$. \end{proof}
\noindent {\bf Claim\refstepcounter{counter}\label{c:5conn} \arabic{counter}.} $G$ is $5$-connected.
\begin{proof} Suppose $G$ is not 5-connected. Let $S$ be a minimal separating set of $G$, and $G_1, G_2, d_1, d_2$ be defined as prior to ($ \bigstar$). By Claim~\ref{c:not77},
$|G_1|\ne 7$ and $|G_2|\ne 7$. By Claim~\ref{c:no7}, $|G_1|\ge 9$ and $|G_2|\ge 9$. By ($\blacklozenge$),
$|S| \geq 4$, and so $G$ is $4$-connected. By Lemma~\ref{l:k4-}, $e(G[S])+d_1\ge5$. Note that $d_2\ge 1$ when $S$ is not a $4$-clique, and $e(G[S])=6$ when $S$ is a $4$-clique. In either case, we have $d_1 + d_2 + e(G[S])\ge6$, contrary to ($\blacklozenge$). \end{proof}
\noindent {\bf Claim\refstepcounter{counter}\label{c:almostclique} \arabic{counter}.}
If there exists $x \in S$ such that $S \setminus x$ is a clique, then $G[S] = K_5$ or $G[S] = K_6$.
\begin{proof} Suppose $S \setminus x$ is a clique but $G[S]\ne K_5$ and $G[S]\ne K_6$. Let $G_1$ and $G_2$ be as above. By Claim~\ref{c:not77},
$|G_1|\ne 7$ and $|G_2|\ne 7$. By Claim~\ref{c:no7}, $|G_1|\ge 9$ and $|G_2|\ge 9$. By Claim~\ref{c:5conn}, $|S| \geq 5$.
If $S$ contains a $7$-clique, then $G\succcurlyeq K_9^-$ by contracting a component of $G_1 \setminus S$ and a component of $G_2 \setminus S$ to two distinct vertices, a contradiction. Thus $5\le |S|\le 7$ and $S$ is not a clique. Then $\delta(G[S]) =d_{G[S]}(x)\leq |S| - 2$. Since $S\setminus x$ is a clique, we see that \[ d_1 = d_2 = |S| - 1 - d_{G[S]}(x)=|S| - 1 -\delta(G[S]). \]
It follows that \[ e(G[S]) ={{|S|-1}\choose 2}+d_{G[S]}(x)= {{|S|-1}\choose 2}+\delta(G[S]). \] This, together with ($\blacklozenge$), implies that
\begin{align*}
5|S| &\ge 16+d_1 + d_2 + e(G[S]) \\
&= 16 + 2(|S| - 1 -\delta(G[S])) + {{|S|-1}\choose 2}+\delta(G[S]) \\
&= 16+2(|S|-1)+(|S|^2 -3|S| + 2)/2 - \delta(G[S]) \\
&\ge 15+ (|S|^2 +|S|)/2 - (|S|-2)\\
&=17+(|S|^2 -|S|)/2,
\end{align*}
which is impossible because $5\le |S|\le 7$. \end{proof}
\noindent {\bf Claim\refstepcounter{counter}\label{c:k72} \arabic{counter}.}
$G$ is $\mathcal{K}_8^{-3}$-free.
\begin{proof} Suppose $G$ has a subgraph $H$ such that $H\in \mathcal{K}_8^{-3}$. Since $G$ is $5$-connected,
we obtain a $\km{9}{6}$ minor in $G$ by contracting a component of $G \setminus V(H)$ to a single vertex, a contradiction. \end{proof}
\noindent {\bf Claim\refstepcounter{counter}\label{c:dnot6} \arabic{counter}.} No vertex in $G$ is a $7$-vertex.
\begin{proof} Suppose to the contrary that $G$ has a $7$-vertex, say $x$. By Claim~\ref{c:triangles}, $G[N[x]]$ contains $K_8^\equiv$ as a spanning subgraph, contrary to Claim~\ref{c:k72}. \end{proof}
\noindent {\bf Claim\refstepcounter{counter}\label{c:onlyone5} \arabic{counter}.} $G$ has at most two $6$-vertices. Moreover, if $G$ has exactly two $6$-vertices, then they must be adjacent.
\begin{proof} Suppose to the contrary that $G$ has two distinct $6$-vertices, say $x, y$, such that $xy\notin E(G)$.
By Claim~\ref{c:triangles}, $N[x]$ and $N[y]$ are $7$-cliques in $G$. Then
$|N(x) \cap N(y)|\le 5$, else $G[N[x] \cup N[y]]=K_8^-$, contrary to Claim~\ref{c:k72}.
By Claim~\ref{c:5conn} and Menger's Theorem, there exist five pairwise internally vertex-disjoint $(x, y)$-paths, say $Q_1, \ldots, Q_5$. We choose $Q_1, \ldots, Q_5$ so that $|Q_1|+ \cdots+|Q_5|$ is as small as possible. Then each $Q_i$ contains exactly one vertex in $N(x)$ and exactly one in $N(y)$. It follows that $G\succcurlyeq\km{9}{4}$ by contracting all the edges of $Q_1\setminus\{x, y\}, \ldots, Q_5\setminus\{x, y\}$, a contradiction. This proves that $6$-vertices are pairwise adjacent in $G$. By Claim~\ref{c:55}, $G$ has at most two $6$-vertices.\end{proof}
\noindent {\bf Claim\refstepcounter{counter}\label{c:58} \arabic{counter}.} No $6$-vertex is adjacent to an $8$-vertex or $9$-vertex in $G$.
\begin{proof} Suppose to the contrary that there exists $xy \in E(G)$ such that $d(x) = 6$ and $d(y) \in\{8,9\}$. By Claim~\ref{c:triangles}, $G[N[x]]=K_7$, $N[x]\subseteq N[y]$ and
$\delta(G[N(y)])\ge5$. Then $d(y)=9$, otherwise $G\succcurlyeq G[N[y]]\succcurlyeq \km{9}{4}$, a contradiction. Let $ A:=N[y]\setminus N[x]$. Then $|A|=3$ because $xy\in E(G)$. Write $A=\{a_1, a_2, a_3\}$. Then either $e_G(\{a_1, a_2\}, N[x])\ge 10$, or $e_G(\{a_1, a_2\}, N[x])\ge 8$ and $a_1a_2\in E(G)$. But then $G[N[y]\setminus a_3]\succcurlyeq\km{9}{6}$ in both cases, a contradiction. \end{proof}
\noindent {\bf Claim\refstepcounter{counter}\label{c:disconnected} \arabic{counter}.} Let $x\in V(G)$ be an $8$-vertex or $9$-vertex in $G$, and let $M$ be the set of vertices of $N(x)$ not adjacent to all other vertices of $N(x)$. Then there is no component $K$ of $G \setminus N[x]$ such that $M\subseteq N(K)$. In particular, $G \setminus N[x]$ is disconnected if $x$ is an $8$-vertex.
\begin{proof} Suppose such a component $K$ exists. Then every vertex in $M$ has a neighbor in $K$ because $M\subseteq N(K)$.
By Lemma~\ref{l:computer}, there exists $y \in N(x)$ such that $G[N(x)] \setminus y$ has a $\km{7}{6}$ minor. If $y\notin M$, then $y$ is complete to $N[x]\setminus y$ and so $G[N[x]]\succcurlyeq\km{9}{6}$, a contradiction. Thus $y\in M\subseteq N(K)$. By contracting $K$ onto $y$, we obtain a $\km{9}{6}$ minor in $G$, a contradiction. This proves that no such component $K$ exists. Suppose $x$ is an $8$-vertex. By Claims~\ref{c:dnot6} and \ref{c:58}, every vertex in $M$ has degree at least eight, and thus every vertex in $M$ has a neighbor in $G \setminus N[x]$. It follows that $G \setminus N[x]$ is disconnected, as desired. \end{proof}
\noindent {\bf Claim\refstepcounter{counter}\label{c:dnot8} \arabic{counter}.} No vertex in $G$ is an $8$-vertex.
\begin{proof}
Suppose to the contrary that $G$ has an $8$-vertex, say $x$. Suppose first that $G[N(x)]$ has a $5$-clique. By Claim~\ref{c:triangles}, $\delta(G[N(x)])\ge 5$. It is straightforward to check that $e(G[N[x]])\ge 30=e(K_9)-6$, and so $G[N[x]]\succcurlyeq\km{9}{6}$, a contradiction. Thus $G[N(x)]$ is $K_5$-free. By Claim~\ref{c:disconnected}, $G \setminus N[x]$ is disconnected; let $C$ and $C'$ be two distinct components of $G \setminus N[x]$. By Claim~\ref{c:5conn}, $|N(C)|\ge5$ and $|N(C')|\ge5$. By Claim~\ref{c:almostclique} and the fact that $G[N(x)]$ is $K_5$-free, we see that
each of $G[N(C)]$ and $G[N(C')]$ has two independent missing edges. Let $yz$ and $uv$ be missing edges of $G[N(C)]$ and $G[N(C')]$, respectively, such that $ y\ne u, v$. Note that $e(G[N[x]])\ge 8+20=e(K_9)-8$. But then $G\succcurlyeq\km{9}{6}$ by contracting $C$ onto $y$ and $C'$ onto $u$, a contradiction. \end{proof}
\noindent {\bf Claim\refstepcounter{counter}\label{c:comporder1} \arabic{counter}.} Let $x\in V(G)$ be a $9$-vertex in $G$.
Then $G\setminus N[x]$ is disconnected. Moreover, $|C|\ge2$ for every component $C$ of $G \setminus N[x]$.
\begin{proof} Suppose $G\setminus N[x]$ is connected. Let $M$ be the set of vertices of $N(x)$ not adjacent to all other vertices of $N(x)$. By Claims~\ref{c:dnot6}, \ref{c:58} and \ref{c:dnot8}, every vertex in $M$ has degree at least nine, and thus every vertex in $M$ has a neighbor in $K:=G \setminus N[x]$. But then $M\subseteq N(K)$, contrary to Claim~\ref{c:disconnected}.
Next suppose there exists a component $C$ of $G \setminus N[x]$ such that $|C|=1$. Let $y$ be the only vertex in $C$. Suppose $y$ is not a $6$-vertex in $G$. Then $d(y)\ge 9$ by Claims~\ref{c:dnot6} and \ref{c:dnot8}, and so $N(C)=N(x)$, contrary to Claim~\ref{c:disconnected}. Thus $y$ is a $6$-vertex in $G$. By Claim~\ref{c:triangles}, $G[N[y]]=K_7$. But then $G[ \{x\}\cup N[y]]=K_8^-$, contrary to Claim~\ref{c:k72}. \end{proof}
\noindent {\bf Claim\refstepcounter{counter}\label{c:comp9} \arabic{counter}.}
Let $x\in V(G)$ be a $9$-vertex in $G$. Then for every component $C$ of $G \setminus N[x]$, there exists a vertex $v\in V(C)$ such that $d_G(v)=9$.
\begin{proof}
Suppose there exists a component $C$ of $G \setminus N[x]$ such that $d_G(v)\ne 9$ for every $v\in V(C)$. By Claim~\ref{c:comporder1}, $|C|\ge2$.
Observe that if all vertices in $V(C)$ are $6$-vertices in $G$, then $|C|=2$ by Claim~\ref{c:55} and $G[V(C)\cup N(C)]=K_7$ by Claim~\ref{c:triangles}; thus $G[\{x\}\cup V(C)\cup N(C)]$ is not $\km{8}{3}$-free, contrary to Claim~\ref{c:k72}. Thus there exists a vertex $y\in V(C)$ such that $d_G(y)\ne 6$. Since $d_G(v)\ne 9$ for every $v\in V(C)$, by Claims~\ref{c:dnot6} and \ref{c:dnot8}, we have $d_G(y)\ge10$.
Let $G_1 := G \setminus V(C)$ and $G_2 := G[V(C) \cup N(C)]$. Note that $|G_2|\ge11$ and $N(C)$ is a minimal separating set of $G$. By Claim~\ref{c:disconnected}, $N(C)\ne N(x)$.
Thus $|C|\ge 11-8=3$. Let $d_1$ be defined as in the paragraph prior to Claim~\ref{c:nodis7}. Let $z \in N(C)$ such that $d_{G[N(C)]}(z) = \delta(G[N(C)])$. Let $d:=d_{G[N(C)]}(z)$.
By contracting $G_1 \setminus N(C)$ onto $z$, we see that $d_1 \geq |N(C)| - d - 1$. By ($ \bigstar$),
\[ e(G_2) \le 5(|C| + |N(C)|) - 15 - (|N(C)| - d - 1)=5|C| + 4|N(C)| + d - 14. \tag{a} \] Now let $t := e_G ( C, N(C) )$ and let $p\le 2$ be the number of vertices in $V(C)$ that are $6$-vertices in $G$. Then $e(G_2) = e(C) + t + e(G[N(C)])$.
Note that $2e(C) \geq 10(|C| - p) + 6\times p - t=10|C|-4p-t$ and $2e(G[N(C)]) \geq d|N(C)|$. Thus \[ 2e(G_2) =2e(C) +2 t + 2e(G[N(C)])\geq 10|C| - 4p + t + d|N(C)|. \tag{b} \]
Combining (a) and (b) yields \[ 10|C| + 8|N(C)| + 2d - 28 \ge 2e(G_2) \geq 10|C| - 4p + t + d|N(C)| \] and so \[ -t \ge d\big(|N(C)| - 2\big) - 8|N(C)| + 28-4p.\tag{c} \]
Note that $\delta(G[N(x)]) \geq 5$ by Claim~\ref{c:triangles}, and $N(C)$ is a subset of $N(x)$, so \[ d = \delta(G[N(C)]) \geq 5 - (9 - |N(C)|) = |N(C)| - 4. \] This, together with (c), implies that
\begin{align*}
-t &\ge \big(|N(C)| - 4\big)\big(|N(C)| - 2\big) - 8|N(C)| + 28-4p \\
&= |N(C)|^2 - 14|N(C)| + 36-4p \\
&= \left( |N(C)|- 7\right)^2 -13-4p,
\end{align*}
so $-t \geq -13-4p$. But then \[ |C|(|C|-1)\ge 2e(C) \geq 10|C|-4p-t \geq 10|C| - 4p-13-4p=10|C|-13-8p.\]
Since $2e(C)$ is even, we have
\[ |C|(|C|-1)\ge 2e(C) \geq 10|C| - 12-8p.\tag{d}\]
If $p\ge1$, let $w\in V(C)$ be a $6$-vertex in $G$. Then $G[N_G[w]]=K_7$ and thus $|N_G(w)\cap V(C)|\ge3$, else $G[\{x\}\cup N_G(w)]$ is not $\km{8}{3}$-free, contrary to Claim~\ref{c:k72}. Thus $|C|\ge4$ when $p\ge1$.
Suppose $p\le1$. Then (d) implies that $|C|\ge9$ and $e(C)>5|C|-14$; thus $ G\succcurlyeq C\succcurlyeq\km{9}{6}$ by the minimality of $G$, contrary to the choice of $G$. Thus $p=2$ and $|C|\ge4$. Then $(d)$ yields $ |C|= 4$ or $|C|\ge7$; and $e(C)\ge 5|C|-14$. By the minimality of $G$, we have
$ |C|= 4$ or $7\le |C|\le 8$. Let $w'\in V(C)$ be the other $6$-vertex in $G$. Suppose $|C|=4$. Then $V(C)$ is a $4$-clique in $G$ because $|N_G(w)\cap V(C)|\ge3$ as observed earlier. Let $y'$ be the vertex in $V(C)\setminus\{w,w',y\}$. Then $d_G(y')\ge10$. Recall that $N(C)\ne N(x)$. Thus there exist $x_1, x_2 \in N(x)\setminus N_G(w)$ such that $x_1$ is complete to $\{y', y\}$, and $x_2$ is adjacent to $x_1$ and some vertex in $N_G(w)\cap N(x)$. But then $G[N[x]\cup V(C)]\succcurlyeq\km{9}{6}$ by first contracting the edge $x_1x_2$ to a single vertex, and then $G[N[x]\setminus (\{x_1, x_2\}\cup N_G(w)\cap N(x))] $ to another single vertex, a contradiction. This proves that $7\le |C|\le 8$. Suppose $|C|=8$. Then (d) implies that $e(C)\ge 5\times 8-14=e(K_8)-2$ and so $C\in\km{8}{2}$, contrary to Claim~\ref{c:k72}. Thus $|C|=7$. By (d), we see that $C=K_7$ because $e(C)\ge 5\times 7-14=e(K_7)$. Note that each vertex in $V(C)\setminus\{w, w'\}$ is adjacent to at least four vertices in $N(x)$. It follows that there exists $x'\in N(x)$ such that $x'$ is adjacent to at least three vertices in $V(C)\setminus\{w, w'\}$. But then $G[N[x]\cup V(C)]\succcurlyeq\km{9}{6}$ by contracting $G[N[x]]\setminus x' $ to a single vertex, a contradiction. \end{proof}
To complete the proof, since $e(G)=5n-14$, we have $\delta(G)\le9$. By Claims~\ref{c:d5}, \ref{c:55}, \ref{c:58}, \ref{c:dnot6} and \ref{c:dnot8}, $G$ has a $9$-vertex; let $x$ be a $9$-vertex in $G$. By Claim~\ref{c:comporder1}, $G \setminus N_G[x]$ is disconnected. Let $C$ be a component of $G \setminus N_G[x]$. We choose $x$ and $C$ so that $|C|$ is minimized. By Claim~\ref{c:comporder1}, $|C|\ge2$.
By Claim~\ref{c:comp9}, $C$ contains a $9$-vertex, say $y$, in $G$. Note that $N_G(x)\ne N(C)$ by Claim~\ref{c:disconnected}. Thus $N_G(x)\setminus N_G(y)\ne\emptyset$. Let $K$ be the component of $G \setminus N_G[y]$ containing $x$. Then $|K|\ge 2$ because $N_G(x)\setminus N_G(y)\ne\emptyset$. Note that $N_G(x)\cap N_G(y)\subseteq N(K)$, and every vertex in $N_G(x)\setminus N_G(y)$ belongs to $K$. Let $M$ be the set of vertices of $N_G(y)$ not adjacent to all other vertices of $N_G(y)$. By Claim~\ref{c:disconnected}, $M\not\subseteq N(K)$.
Let $z \in M \setminus N(K)$. Then $z \notin N_G(x)$, else $z\in N(K)$ because $x\in V(K)$. It follows that $z \in V(C)$.
Let $z'$ be a neighbor of $z$ in $G \setminus N_G[y]$. Note that $z' \in \big(N_G(x)\setminus N_G(y)\big) \cup V(C)$. By Claims~\ref{c:dnot6}, \ref{c:58} and \ref{c:dnot8}, we see that $d_G(z)\ge9$ and so $d_G(z')\ge9$.
Suppose $z' \notin V(K)$. Then $z'\in V(C)$ because every vertex in $N_G(x)\setminus N_G(y)$ belongs to $K$. Let $C'$ be the component of $G \setminus N_G[y]$ that contains $z'$. Then $|C'|\ge2$ by Claim~\ref{c:comporder1}, and $C'$ is a proper subset of $C$, contrary to our choice of $x$ and $C$. This proves that $z' \in V(K)$, and so $z \in N(K)$, contrary to the choice of $z$.
This completes the proof of Theorem~\ref{t:exfun}. \end{proof}
\end{document}
\begin{document}
\title[Degenerate Euler zeta function]{Degenerate Euler zeta function}
\author{Taekyun Kim} \address{${}^1$ Department of Mathematics, Kwangwoon University, Seoul 01897, Republic of Korea.} \email{tkkim@kw.ac.kr}
\begin{abstract} Recently, T. Kim considered the Euler zeta function, which interpolates the Euler polynomials at negative integers (see \cite{03}). In this paper, we study the degenerate Euler zeta function, a holomorphic function on the complex $s$-plane associated with the degenerate Euler polynomials at negative integers. \end{abstract}
\maketitle
\section{Introduction}
As is well known, the {\it{Euler polynomials}} are defined by the generating function to be
\begin{equation}\label{1} \frac{2}{e^t+1}e^{xt}=\sum_{n=0} ^{\infty} E_n(x)\frac{t^n}{n!},{\text{ (see [1-7])}}. \end{equation} When $x=0$, $E_n=E_n(0)$ are called {\it{Euler numbers}}.
For $s\in{\mathbb{C}}$, Kim considered the Euler zeta function defined by \begin{equation}\label{2} \zeta_E(s,x)=2\sum_{n=0} ^{\infty}\frac{(-1)^n}{(n+x)^s},~(x\neq0,-1,-2,\dots). \end{equation} Thus, he obtained the following equation: \begin{equation}\label{3} \zeta_E(-n,x)=E_n(x),~(n\in{\mathbb{N}}\cup\left\{0\right\}), \ {\text{ (see \cite{03})}}. \end{equation}
L. Carlitz introduced {\it{degenerate Euler polynomials}} which are defined by the generating function to be \begin{equation}\label{4}
\frac{2}{(1+\lambda t)^{\iol}+1}(1+\lambda t)^{\frac{x}{\lambda}}=\sum_{n=0} ^{\infty} {\mathcal{E}}_n(x|\lambda)\frac{t^n}{n!},{\text{ (see \cite{01})}}. \end{equation}
When $x=0$, ${\mathcal{E}}_n(\lambda)={\mathcal{E}}_n(0|\lambda)$ are called {\it{degenerate Euler numbers}}.
From \eqref{4}, we note that $\lim_{\lambda\rightarrow0}{\mathcal{E}}_n(x|\lambda)=E_n(x)$, $(n\geq0)$. By \eqref{4}, we easily get \begin{equation}\label{5}
{\mathcal{E}}_m(\lambda)+(-1)^n{\mathcal{E}}_m(n+1|\lambda)=2\sum_{l=0} ^n(-1)^l(l|\lambda)_m, \end{equation}
where $(l|\lambda)_m=l(l-\lambda)\cdots(l-\lambda(m-1))$ and $(l|\lambda)_0=1$.
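Multiplying both sides of \eqref{4} (with $x=0$) by $(1+\lambda t)^{1/\lambda}+1$ and comparing coefficients of $t^n/n!$ gives the recurrence $2{\mathcal{E}}_0(\lambda)=2$ and $2{\mathcal{E}}_n(\lambda)=-\sum_{k=0}^{n-1}\binom{n}{k}(1|\lambda)_{n-k}{\mathcal{E}}_k(\lambda)$ for $n\ge1$. The following sketch (ours, not from the paper) implements it with exact rational arithmetic and illustrates the limit $\lim_{\lambda\rightarrow0}{\mathcal{E}}_n(\lambda)=E_n$:

```python
# Degenerate Euler numbers E_n(l) from the generating function (4) with x = 0.
# Writing (1+l*t)^(1/l) = sum_n (1|l)_n t^n / n!, the relation
# ((1+l*t)^(1/l) + 1) * sum_n E_n(l) t^n/n! = 2 gives the recurrence
#   2*E_0 = 2,   2*E_n = -sum_{k<n} C(n,k) * (1|l)_{n-k} * E_k(l)   (n >= 1).
from fractions import Fraction
from math import comb

def deg_falling(l, m):
    # (1|l)_m = 1*(1-l)*(1-2l)*...*(1-(m-1)l), with (1|l)_0 = 1
    out = Fraction(1)
    for j in range(m):
        out *= 1 - l * j
    return out

def degenerate_euler_numbers(l, n_max):
    l = Fraction(l)
    E = [Fraction(1)]          # E_0(l) = 1
    for n in range(1, n_max + 1):
        s = sum(comb(n, k) * deg_falling(l, n - k) * E[k] for k in range(n))
        E.append(-s / 2)
    return E

# As lambda -> 0 the degenerate numbers reduce to the classical Euler
# numbers E_n = E_n(0): 1, -1/2, 0, 1/4, 0, -1/2, ...
classical = degenerate_euler_numbers(0, 5)
```

One also checks directly from the recurrence that, e.g., ${\mathcal{E}}_1(\lambda)=-\tfrac12$ and ${\mathcal{E}}_2(\lambda)=\lambda/2$.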
In this paper, we construct degenerate Euler zeta function which interpolates degenerate Euler polynomials at negative integers.
\section{Degenerate Euler zeta function}
For $s\in{\mathbb{C}} \smallsetminus\left\{0,-1,-2,\cdot\cdot\cdot\right\}$, we recall that {\it{gamma function}} is given by \begin{equation}\label{6} \Gamma(s)=\int_0 ^{\infty} e^{-t}t^{s-1}dt. \end{equation} Let \begin{equation*} F(t,x)=\frac{2}{e^t+1}e^{xt}=\sum_{n=0} ^{\infty} E_n(x)\frac{t^n}{n!}. \end{equation*} Then, by \eqref{6}, we get \begin{equation}\label{7} \begin{split} \frac{1}{\Gamma(s)}\int_0 ^{\infty} F(-t,x)t^{s-1}dt&=\frac{2}{\Gamma(s)}\int_0 ^{\infty}\frac{1}{1+e^{-t}}e^{-xt}t^{s-1}dt\\ =&\frac{2}{\Gamma(s)}\sum_{m=0} ^{\infty}(-1)^m\int_0 ^{\infty} e^{-(m+x)t}t^{s-1}dt\\ =&\frac{2}{\Gamma(s)}\sum_{m=0} ^{\infty}\frac{(-1)^m}{(m+x)^s}\int_0 ^{\infty} e^{-y}y^{s-1}dy=2\sum_{m=0} ^{\infty}\frac{(-1)^m}{(m+x)^s}\\ =&\zeta_E(s,x). \end{split} \end{equation} From \eqref{7}, we note that \begin{equation*} \zeta_E(-n,x)=E_n(x),~(n\in{\mathbb{N}}\cup\left\{0\right\}). \end{equation*} Let \begin{equation}\label{8}
F(t,x|\lambda)=\frac{2}{(1+\lambda t)^{\iol}+1}(1+\lambda t)^{\frac{x}{\lambda}}=\sum_{n=0} ^{\infty}{\mathcal{E}}_n(x|\lambda)\frac{t^n}{n!}. \end{equation} For $\lambda\in(0,1)$ and $s\in{\mathbb{C}}$ with $R(s)>0$, we define degenerate $\Gamma$-function as follows: \begin{equation}\label{9}
\Gamma(s|\lambda)=\int_0 ^{\infty} (1+\lambda t)^{-\iol}t^{s-1}dt. \end{equation}
Note that $\lim_{\lambda\rightarrow0}\Gamma(s|\lambda)=\Gamma(s)$. From \eqref{9}, we can derive \begin{equation}\label{10} \begin{split}
\Gamma(s+1|\lambda)=&\int_0 ^{\infty}(1+\lambda t)^{-\iol}t^sdt\\ =&-\frac{s}{\lambda-1}\int_0 ^{\infty}(1+\lambda t)^{-\frac{1-\lambda}{\lambda}}t^{s-1}dt\\ =&\frac{s}{1-\lambda}\int_0 ^{\infty}\left(1+\frac{\lambda}{1-\lambda}(1-\lambda)t\right)^{-\frac{1-\lambda}{\lambda}}t^{s-1}dt\\ =&\frac{s}{(1-\lambda)^{s+1}}\int_0 ^{\infty}\left(1+\frac{\lambda}{1-\lambda}y\right)^{-\frac{1-\lambda}{\lambda}}y^{s-1}dy\\
=&\frac{s}{(1-\lambda)^{s+1}}\Gamma\left(s\left|\frac{\lambda}{1-\lambda}\right.\right). \end{split} \end{equation} Therefore, by \eqref{10}, we obtain the following theorem.\\
\begin{theorem}\label{thm1} For $s\in{\mathbb{C}}$ with $R(s)>0$ and $\lambda\in(0,1)$, we have \begin{equation*}
\Gamma(s+1|\lambda)=\frac{s}{(1-\lambda)^{s+1}}\Gamma\left(s\left|\frac{\lambda}{1-\lambda}\right.\right). \end{equation*} \end{theorem}
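Theorem \ref{thm1} can be sanity-checked numerically. Note that the integral \eqref{9} converges at infinity only when $R(s)<1/\lambda$, so the test values below ($s=1.5$, $\lambda=0.2$, hence $\lambda/(1-\lambda)=0.25$) are chosen to keep both sides convergent; the quadrature scheme and parameter choices are ours, not from the paper:

```python
# Compare Gamma(s+1|l) with s/(1-l)^(s+1) * Gamma(s | l/(1-l))  (Theorem 1),
# where Gamma(s|l) = int_0^inf (1+l*t)^(-1/l) * t^(s-1) dt as in (9).

def deg_gamma(s, l, n=200_000):
    # Substitute t = u/(1-u), dt = du/(1-u)^2, and use the midpoint rule on (0,1).
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        t = u / (1.0 - u)
        total += (1.0 + l * t) ** (-1.0 / l) * t ** (s - 1.0) / (1.0 - u) ** 2
    return total * h

s, lam = 1.5, 0.2
lhs = deg_gamma(s + 1.0, lam)                                   # Gamma(s+1|l)
rhs = s / (1.0 - lam) ** (s + 1.0) * deg_gamma(s, lam / (1.0 - lam))
# lhs and rhs agree to quadrature accuracy (both are about 4.116)
```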
By Theorem \ref{thm1}, we get \begin{equation}\label{11} \begin{split}
\Gamma\left(s\left|\frac{\lambda}{1-\lambda}\right.\right)=&\frac{s-1}{\left(1-\frac{\lambda}{1-\lambda}\right)^s}\Gamma\left(s-1\left|\frac{\frac{\lambda}{1-\lambda}}{1-\frac{\lambda}{1-\lambda}}\right.\right)\\
=&\frac{(s-1)(1-\lambda)^s}{(1-2\lambda)^s}\Gamma\left(s-1\left|\frac{\lambda}{1-2\lambda}\right.\right), \end{split} \end{equation} and \begin{equation}\label{12}
\Gamma\left(s-1\left|\frac{\lambda}{1-2\lambda}\right.\right)=\frac{(s-2)(1-2\lambda)^{s-1}}{(1-3\lambda)^{s-1}}\Gamma\left(s-2\left|\frac{\lambda}{1-3\lambda}\right.\right). \end{equation} Continuing this process, we have \begin{equation}\label{13}
\Gamma\left(s-n\left|\frac{\lambda}{1-(n+1)\lambda}\right.\right)=\frac{(s-(n+1))(1-(n+1)\lambda)^{s-n}}{(1-(n+2)\lambda)^{s-n}}\Gamma\left(s-(n+1)\left|\frac{\lambda}{1-(n+2)\lambda}\right.\right). \end{equation} Thus, by Theorem \ref{thm1}, we get \begin{equation}\label{14} \begin{split}
\Gamma(s+1|\lambda)
=&\frac{s}{(1-\lambda)^{s+1}}\Gamma\left(s\left|\frac{\lambda}{1-\lambda}\right.\right)=\frac{s(s-1)}{(1-\lambda)(1-2\lambda)^s}\Gamma\left(s-1\left|\frac{\lambda}{1-2\lambda}\right.\right)\\
=&\frac{s(s-1)(s-2)}{(1-\lambda)(1-2\lambda)(1-3\lambda)^{s-1}}\Gamma\left(s-2\left|\frac{\lambda}{1-3\lambda}\right.\right)=\cdots\\ =&\frac{s(s-1)(s-2)\cdots(s-(n+1))}{(1-\lambda)(1-2\lambda)\cdots(1-(n+1)\lambda)}\left(\frac{1}{1-(n+2)\lambda}\right)^{s-n}\\
&\times\Gamma\left(s-(n+1)\left|\frac{\lambda}{1-(n+2)\lambda}\right.\right). \end{split} \end{equation} Therefore, by \eqref{14}, we obtain the following theorem.
\begin{theorem}\label{thm2} For $n\in{\mathbb{N}}$, we have \begin{equation*}
\frac{\Gamma(s+1|\lambda)}{\Gamma\left(s-(n+1)\left|\frac{\lambda}{1-(n+2)\lambda}\right.\right)}=\frac{s(s-1)(s-2)\cdots(s-(n+1))}{(1-\lambda)(1-2\lambda)\cdots(1-(n+1)\lambda)}\left(\frac{1}{1-(n+2)\lambda}\right)^{s-n}. \end{equation*} \end{theorem}
Let us take $s=n+2$ $(n\in{\mathbb{N}})$ and $\lambda\in\left(0,\frac{1}{n+3}\right)$. Then we have \begin{equation}\label{15} \begin{split}
\Gamma(n+3|\lambda) =&\frac{(n+2)!}{(1-\lambda)(1-2\lambda)\cdots(1-(n+1)\lambda)}\left(\frac{1}{1-(n+2)\lambda}\right)^2\\
&\times\Gamma\left(1\left|\frac{\lambda}{1-(n+2)\lambda}\right.\right). \end{split} \end{equation} For $\lambda\in\left(0,\frac{1}{n+3}\right)$, we observe that \begin{equation}\label{16} \begin{split}
&\Gamma\left(1\left|\frac{\lambda}{1-(n+2)\lambda}\right.\right)=\int_0 ^{\infty}\left(1+\left(\frac{\lambda}{1-(n+2)\lambda}\right)t\right)^{-\frac{1-(n+2)\lambda}{\lambda}}dt\\
=&\left.\left(\frac{1-(n+2)\lambda}{\lambda}\right)\left(\frac{\lambda}{(n+3)\lambda-1}\right)\left(1+\left(\frac{\lambda}{1-(n+2)\lambda}\right)t\right)^{\frac{(n+3)\lambda-1}{\lambda}}\right|_0 ^{\infty}\\ =&\left(\frac{1-(n+2)\lambda}{\lambda}\right)\left(\frac{\lambda}{(n+3)\lambda-1}\right)(-1)=\frac{1-(n+2)\lambda}{1-(n+3)\lambda}. \end{split} \end{equation} Thus, by \eqref{15} and \eqref{16}, we get \begin{equation}\label{17}
\Gamma(n+3|\lambda)=\frac{(n+2)!}{(1-\lambda)(1-2\lambda)\cdots(1-(n+2)\lambda)(1-(n+3)\lambda)}. \end{equation} Therefore, by \eqref{17}, we obtain the following theorem.\\
\begin{theorem}\label{thm3} For $n\in{\mathbb{N}}$ and $\lambda\in\left(0,\frac{1}{n}\right)$, we have \begin{equation*}
\Gamma(n|\lambda)=\frac{(n-1)!}{(1-\lambda)(1-2\lambda)\cdots(1-n\lambda)}. \end{equation*} \end{theorem}
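As an exact consistency check (ours, not part of the paper), the closed form of Theorem \ref{thm3} satisfies the functional equation of Theorem \ref{thm1}: indeed, $\frac{n}{(1-\lambda)^{n+1}}\cdot\frac{(n-1)!}{\prod_{j=1}^{n}\big(1-\frac{j\lambda}{1-\lambda}\big)}=\frac{n!}{\prod_{j=1}^{n+1}(1-j\lambda)}=\Gamma(n+1|\lambda)$. This can be verified in rational arithmetic:

```python
# The closed form Gamma(n|l) = (n-1)! / ((1-l)(1-2l)...(1-nl)) of Theorem 3
# satisfies Theorem 1, Gamma(n+1|l) = n/(1-l)^(n+1) * Gamma(n | l/(1-l)),
# identically; we verify this exactly for small n.
from fractions import Fraction
from math import factorial

def deg_gamma_closed(n, l):
    """Theorem 3: Gamma(n|l) for a positive integer n and l < 1/n."""
    denom = Fraction(1)
    for j in range(1, n + 1):
        denom *= 1 - l * j
    return factorial(n - 1) / denom

ok = True
for n in range(1, 8):
    l = Fraction(1, 4 * n)          # safely inside (0, 1/(n+1))
    lhs = deg_gamma_closed(n + 1, l)
    rhs = n / (1 - l) ** (n + 1) * deg_gamma_closed(n, l / (1 - l))
    ok = ok and lhs == rhs
```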
In the viewpoint of \eqref{7}, we define {\it{degenerate Euler zeta function}} as follows: \begin{equation}\label{18}
\zeta_E(s,x|\lambda)=\frac{1}{\Gamma(s|\lambda)}\int_0 ^{\infty} F(-t,x|-\lambda)t^{s-1}dt, \end{equation} where $\lambda\in(0,1)$ and $s\in{\mathbb{C}}$ with $R(s)>0$.
From \eqref{18}, we have \begin{equation}\label{19} \begin{split}
&\frac{1}{\Gamma(s|\lambda)}\int_0 ^{\infty} F(-t,x|-\lambda)t^{s-1}dt=\frac{2}{\Gamma(s|\lambda)}\sum_{m=0} ^{\infty}(-1)^m\int_0 ^{\infty}(1+\lambda t)^{-\frac{m+x}{\lambda}}t^{s-1}dt\\
\ \ \ \ =&\frac{2}{\Gamma(s|\lambda)}\sum_{m=0} ^{\infty}(-1)^m\int_0 ^{\infty}\left(1+\frac{\lambda}{m+x}(m+x)t\right)^{-\frac{m+x}\lambda}t^{s-1}dt\\
\ \ \ \ =&\frac{2}{\Gamma(s|\lambda)}\sum_{m=0} ^{\infty}\frac{(-1)^m}{(m+x)^s}\int_0 ^{\infty}\left(1+\frac{\lambda}{m+x}y\right)^{-\frac{m+x}{\lambda}}y^{s-1}dy\\
\ \ \ \ =&2\sum_{m=0} ^{\infty}\frac{(-1)^m}{(m+x)^s}\frac{\Gamma\left(s\left|\frac{\lambda}{m+x}\right.\right)}{\Gamma(s|\lambda)}, \end{split} \end{equation} where $x\neq0,-1,-2,\cdot\cdot\cdot$.
Therefore, by \eqref{18} and \eqref{19}, we obtain the following theorem.
\begin{theorem}\label{thm4} For $s\in{\mathbb{C}}$ with $R(s)>0$, we have \begin{equation*}
\zeta_E(s,x|\lambda)=2\sum_{m=0} ^{\infty}\frac{(-1)^m}{(m+x)^s}\frac{\Gamma\left(s\left|\frac{\lambda}{m+x}\right.\right)}{\Gamma(s|\lambda)}, \end{equation*} where $x\neq0,-1,-2,\cdot\cdot\cdot.$ \end{theorem}
Let $s=n\in{\mathbb{N}}$ and $\lambda\in\left(0,\frac{1}{n}\right)$. Then, we have \begin{equation}\label{20}
\zeta_E(n,x|\lambda)=2\sum_{m=0} ^{\infty}\frac{(-1)^m}{(m+x)^n}\frac{\Gamma\left(n\left|\frac{\lambda}{m+x}\right.\right)}{\Gamma(n|\lambda)}. \end{equation} From Theorem \ref{thm3}, we note that \begin{equation}\label{21} \begin{split}
&\frac{\Gamma\left(n\left|\frac{\lambda}{m+x}\right.\right)}{\Gamma(n|\lambda)}\\ =&\frac{(1-\lambda)(1-2\lambda)\cdots(1-n\lambda)}{(n-1)!}\frac{(n-1)!}{\left(1-\frac{\lambda}{m+x}\right)\left(1-\frac{2\lambda}{m+x}\right)\cdots\left(1-\frac{n\lambda}{m+x}\right)}\\ =&\frac{(1-\lambda)(1-2\lambda)\cdots(1-n\lambda)}{\left(1-\frac{\lambda}{m+x}\right)\left(1-\frac{2\lambda}{m+x}\right)\cdots\left(1-\frac{n\lambda}{m+x}\right)}. \end{split} \end{equation} Therefore, by \eqref{20} and \eqref{21}, we obtain the following theorem.\\
\begin{theorem}\label{thm5} For $n\in{\mathbb{N}}$ and $\lambda\in\left(0,\frac{1}{n}\right)$, we have \begin{equation*}
\zeta_E(n,x|\lambda)=2\sum_{m=0} ^{\infty}\frac{(-1)^m}{(m+x)^n}\frac{(1-\lambda)(1-2\lambda)\cdots(1-n\lambda)}{\left(1-\frac{\lambda}{m+x}\right)\left(1-\frac{2\lambda}{m+x}\right)\cdots\left(1-\frac{n\lambda}{m+x}\right)}. \end{equation*} \end{theorem}
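Theorem \ref{thm5} can likewise be checked numerically, by comparing a quadrature of the integral definition \eqref{18} (with $F(-t,x|-\lambda)$ written out as in \eqref{22}) against a partial sum of the series. The sketch below is illustrative only; the quadrature scheme and helper names are our own.

```python
def gamma_deg(n, lam, steps=100000):
    # quadrature of Gamma(n|lam) = int_0^inf (1 + lam t)^(-1/lam) t^(n-1) dt
    total, h = 0.0, 1.0 / steps
    for i in range(steps):
        u = (i + 0.5) * h
        t = u / (1.0 - u)
        total += (1.0 + lam * t) ** (-1.0 / lam) * t ** (n - 1) * h / (1.0 - u) ** 2
    return total

def zeta_integral(n, x, lam, steps=100000):
    # zeta_E(n,x|lam) from the integral definition (18), with
    # F(-t,x|-lam) = 2 (1 + lam t)^(-x/lam) / ((1 + lam t)^(-1/lam) + 1)
    total, h = 0.0, 1.0 / steps
    for i in range(steps):
        u = (i + 0.5) * h
        t = u / (1.0 - u)
        F = 2.0 * (1.0 + lam * t) ** (-x / lam) / ((1.0 + lam * t) ** (-1.0 / lam) + 1.0)
        total += F * t ** (n - 1) * h / (1.0 - u) ** 2
    return total / gamma_deg(n, lam, steps)

def zeta_series(n, x, lam, terms=4000):
    # partial sum of the alternating series of the theorem (terms decay like m^-n)
    s = 0.0
    for m in range(terms):
        prod = 1.0
        for j in range(1, n + 1):
            prod *= (1.0 - j * lam) / (1.0 - j * lam / (m + x))
        s += (-1) ** m * prod / (m + x) ** n
    return 2.0 * s

print(zeta_integral(3, 1.0, 0.05), zeta_series(3, 1.0, 0.05))
```

Both evaluations of $\zeta_E(3,1|0.05)$ agree to the accuracy of the quadrature.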
From \eqref{8}, we have \begin{equation}\label{22} \begin{split}
&\frac{1}{\Gamma(s|\lambda)}\int_0 ^{\infty} F(-t,x|-\lambda)t^{s-1}dt\\
=&\frac{1}{\Gamma(s|\lambda)}\int_0 ^{\infty}\frac{2}{(1+\lambda t)^{-\iol}+1}(1+\lambda t)^{-\frac{x}{\lambda}}t^{s-1}dt\\
=&\frac{1}{\Gamma(s|\lambda)}\sum_{m=0} ^{\infty}{\mathcal{E}}_m(x|-\lambda)\frac{(-1)^m}{m!}\int_0 ^{\infty} t^{s+m-1}dt. \end{split} \end{equation} Thus, by \eqref{22}, we get \begin{equation}\label{23}
\zeta_E(-n,x|\lambda)=\frac{2\pi i}{n!}{\mathcal{E}}_n(x|-\lambda)(-1)^n\frac{1}{\Gamma(-n|\lambda)}. \end{equation} Now, we observe that \begin{equation}\label{24} \begin{split}
\Gamma(-n|\lambda)=&\int_0 ^{\infty}(1+\lambda t)^{-\iol}t^{-n-1}dt\\ =&\frac{2\pi i}{n!}\frac{1}{\lambda}\left(1+\frac{1}{\lambda}\right)\left(2+\frac{1}{\lambda}\right)\cdots\left((n-1)+\frac{1}{\lambda}\right)(-1)^n\lambda^n\\ =&\frac{2\pi i}{n!}(\lambda +1)(2\lambda+1)\cdots((n-1)\lambda+1)(-1)^n. \end{split} \end{equation} From \eqref{23} and \eqref{24}, we have \begin{equation}\label{25}
\zeta_E(-n,x|\lambda)=\frac{{\mathcal{E}}_n(x|-\lambda)}{(\lambda+1)(2\lambda+1)\cdots((n-1)\lambda+1)}. \end{equation} Therefore, by \eqref{25}, we obtain the following theorem.
\begin{theorem}\label{thm6} For $n\in{\mathbb{N}}\cup\left\{0\right\}$, we have \begin{equation*}
\zeta_E(-n,x|\lambda)={\mathcal{E}}_n(x|-\lambda). \end{equation*} \end{theorem}
\noindent{\bf{Remark.}} We note that $\zeta_E(s,x|\lambda)$ is an analytic function in the whole complex $s$-plane.
\vskip .1in
\noindent{\bf{Remark.}} \begin{equation*}
\lim_{\lambda\rightarrow0}\zeta_E(-n,x|\lambda)=E_n(x)=\zeta_E(-n,x). \end{equation*}
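The remark can be illustrated numerically. Using the generating function $F(t,x|\lambda)=2(1+\lambda t)^{\frac{x}{\lambda}}/\left((1+\lambda t)^{\frac{1}{\lambda}}+1\right)$ (the form implicit in \eqref{22}), $E_n(x|-\lambda)$ is the $n$-th $t$-derivative of $F(t,x|-\lambda)$ at $t=0$, and for small $\lambda$ it is close to the classical $E_n(x)$. The finite-difference scheme below is an ad hoc choice, added for illustration only.

```python
from math import comb

def F(t, x, lam):
    # generating function of the degenerate Euler polynomials (cf. the form of
    # F(-t,x|-lam) used in (22)): F(t,x|lam) = 2(1+lam t)^{x/lam}/((1+lam t)^{1/lam}+1);
    # as lam -> 0 this tends to the classical 2 e^{xt}/(e^t + 1) = sum E_n(x) t^n/n!
    return 2.0 * (1.0 + lam * t) ** (x / lam) / ((1.0 + lam * t) ** (1.0 / lam) + 1.0)

def E_deg(n, x, lam, h=1e-2):
    # E_n(x|lam) = (d/dt)^n F(t,x|lam) at t = 0, via a central difference of order n
    return sum((-1) ** k * comb(n, k) * F((n / 2.0 - k) * h, x, lam)
               for k in range(n + 1)) / h ** n

x, lam = 0.3, 1e-5
degenerate = [E_deg(n, x, -lam) for n in range(3)]
classical = [1.0, x - 0.5, x * x - x]   # E_0(x), E_1(x), E_2(x)
print(degenerate, classical)
```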
\end{document}
\begin{document}
\begin{abstract} We describe the theory of flag paraproducts and their relationship to the field of differential equations. \end{abstract}
\title{Flag Paraproducts}
\section{Short Introduction}
The main goal of the present paper is to describe the theory of a new class of multi-linear operators which we named ``paraproducts with flag singularities'' (or in short ``flag paraproducts'').
These objects, which have been introduced in \cite{c} as being generalizations of the ``lacunary versions'' of the ``bi-est operators'' of \cite{mtt:walshbiest}, \cite{mtt:fourierbiest}, \cite{mtt:multiest}, turned out in the meantime to have very natural connections to several problems in the theory of differential equations.
While most of the article is expository, we also prove as a consequence of our discussion a new ``paradifferential Leibnitz rule'', which may be of independent interest.
In Section 2 we briefly recall the theory of classical paraproducts and then, in Section 3, we present the basic facts about the flag paraproducts. Sections 4, 5 and 6 are devoted to the description of the various connections of the flag paraproducts: first, to the AKNS systems of mathematical physics and scattering theory, then to what we called ``the grand Leibnitz rule'' for generic non-linearities and in the end to the theory of non-linear Schr\"{o}dinger equations. The last section, Section 7, presents a sketch of some of the main ideas needed to understand the boundedness properties of these flag paraproducts.
{\bf Acknowledgements}: The present article is based on the author's lecture at the ``8th International Conference on Harmonic Analysis and PDE'' held in El Escorial - Madrid, in June 2008. We take this opportunity to thank the organizers once more for the invitation and for their warm hospitality during our stay in Spain. We are also grateful to the NSF for partially supporting this work.
\section{Classical Paraproducts}
If $n\geq 1$, let us denote by $T$ the $n$ - linear singular integral operator given by
\begin{equation}\label{1} T(f_1, ..., f_n)(x) = \int_{{\mbox{\rm I\kern-.22em R}}^n} f_1(x-t_1) ... f_n(x-t_n) K(t) d t, \end{equation} where $K$ is a Calder\'{o}n - Zygmund kernel \cite{stein}.
Alternatively, $T$ can also be written as
\begin{equation}\label{2} T_m(f_1, ..., f_n)(x) = \int_{{\mbox{\rm I\kern-.22em R}}^n} m(\xi) \widehat{f_1}(\xi_1) ... \widehat{f_n}(\xi_n) e^{ 2\pi i x (\xi_1 + ... + \xi_n)} d \xi, \end{equation} where $m(\xi) = \widehat{K}(\xi)$ is a classical multiplier, satisfying the well known Marcinkiewicz - Mihlin - H\"{o}rmander condition
\begin{equation}\label{3}
|\partial^{\alpha} m(\xi)| \lesssim \frac{1}{|\xi|^{|\alpha|}} \end{equation} for sufficiently many multi-indices $\alpha$. \footnote{We use the standard notation $A\lesssim B$ to denote the fact that there exists a constant $C > 0$ so that $A \leq C\cdot B$. We also denote by ${\cal{M}}({\mbox{\rm I\kern-.22em R}}^n)$ the class of all such multipliers.}
These operators play a fundamental role in analysis and PDEs and they are called ``paraproducts''. \footnote{It is easy to observe that in the particular case when $m=1$, $T_m(f_1, ..., f_n)(x)$ becomes the product of the $n$ functions $f_1(x)\cdot ... \cdot f_n(x)$. Also, as stated, the formulas are for functions of one variable, but the whole theory extends easily to an arbitrary euclidean space ${\mbox{\rm I\kern-.22em R}}^d$.} The following Coifman - Meyer theorem is a classical result in harmonic analysis \cite{meyerc}, \cite{ks}, \cite{gt}.
\begin{theorem} For every $m\in {\cal{M}}({\mbox{\rm I\kern-.22em R}}^n)$, the $n$-linear multiplier $T_m$ maps $L^{p_1}\times ... \times L^{p_n} \rightarrow L^p$ boundedly, as long as $1< p_1, ..., p_n \leq \infty$, $1/p_1 + ... + 1/p_n = 1/p$ and $0<p<\infty$. \end{theorem}
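Before recalling the proof, here is a quick sanity check of formula (\ref{2}) in a toy discrete model (the Fourier transform on ${\mbox{\rm I\kern-.22em R}}$ replaced by a DFT on $\Z_N$; this replacement and all names below are our own): with $m\equiv 1$ the bilinear multiplier reproduces the pointwise product, as noted in the footnote.

```python
from math import pi
from cmath import exp

N = 16

def dft(f):
    # discrete Fourier coefficients: fhat[k] = (1/N) sum_x f[x] e^{-2 pi i k x / N}
    return [sum(f[x] * exp(-2j * pi * k * x / N) for x in range(N)) / N
            for k in range(N)]

def bilinear(m, f, g):
    # discrete analogue of (2): sum_{k1,k2} m(k1,k2) fhat(k1) ghat(k2) e^{2 pi i x (k1+k2)/N}
    fh, gh = dft(f), dft(g)
    return [sum(m(k1, k2) * fh[k1] * gh[k2] * exp(2j * pi * x * (k1 + k2) / N)
                for k1 in range(N) for k2 in range(N))
            for x in range(N)]

f = [float((x * x + 1) % 5) for x in range(N)]
g = [float((3 * x + 2) % 7) for x in range(N)]
out = bilinear(lambda k1, k2: 1.0, f, g)
err = max(abs(out[x] - f[x] * g[x]) for x in range(N))
print(err)
```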
To recall some of the main ideas which appear in the proof of the theorem, let us assume that the kernel $K(t)$ has the form
\begin{equation}\label{4} K(t) = \sum_{k\in \Z} \Phi_k^1(t_1) ... \Phi_k^n(t_n), \end{equation} where each $\Phi_k^j$ is an $L^1$ - normalized bump function adapted to the interval $[-2^{-k}, 2^{-k}]$ \footnote{In fact, modulo some technical issues, one can always assume that this is the case.}.
As a consequence, for any $1<p<\infty$, one has
$$\left\|T_m(f_1, ..., f_n)\right\|_p =
\left| \int_{{\mbox{\rm I\kern-.22em R}}}T_m(f_1, ..., f_n)(x) f_{n+1}(x) d x \right| = $$
\begin{equation}\label{5}
\left| \int_{{\mbox{\rm I\kern-.22em R}}}\sum_{k\in\Z} (f_1\ast\Phi_k^1)(x) ... (f_n\ast\Phi_k^n)(x) (f_{n+1}\ast\Phi_k^{n+1})(x) d x
\right|, \end{equation}
where $f_{n+1}$ is a well chosen function with $\|f_{n+1}\|_{p'} = 1$ (and $1/p+1/p'=1$), while the family $(\Phi_k^{n+1})_k$ is also as usual well chosen so that the above equality holds true. One should also recall the standard fact that since $K$ is a Calder\'{o}n - Zygmund kernel, one can always assume that at least two of the families $(\Phi_k^j)_k$ for $j=1, ..., n+1$ are of ``$\Psi$ type'', in the sense that the Fourier transform of the corresponding $k$th term is supported in $[-2^{k+1}, -2^{k-1}]\cup [2^{k-1}, 2^{k+1}]$, while all the others are of ``$\Phi$ type'', in the sense that the Fourier transform of the corresponding $k$th term is supported in $[-2^{k+1}, 2^{k+1}]$. For simplicity, we assume that in our case $(\Phi_k^1)_k$ and $(\Phi_k^2)_k$ are of ``$\Psi$ type'' \footnote{We will use this ``$\Psi$ - $\Phi$'' terminology throughout the paper.}.
Then, (\ref{5}) can be majorized by
$$\int_{{\mbox{\rm I\kern-.22em R}}}
\left(\sum_k |f_1\ast\Phi_k^1(x)|^2\right)^{1/2}
\left(\sum_k |f_2\ast\Phi_k^2(x)|^2\right)^{1/2} \prod_{j\neq 1,2}
\sup_k |f_j\ast\Phi_k^j(x)| d x \lesssim $$
$$\int_{{\mbox{\rm I\kern-.22em R}}} S f_1(x) \cdot S f_2(x) \cdot \prod_{j\neq 1,2} M f_j(x) dx$$ where $S$ is the Littlewood - Paley square function and $M$ is the Hardy - Littlewood maximal function.
Using now their boundedness properties \cite{stein}, one can easily conclude that $T_m$ is always bounded from $L^{p_1}\times ... \times L^{p_n} \rightarrow L^p$, as long as all the indices $p_1, ..., p_n, p$ are strictly between 1 and $\infty$. The $L^{\infty}$ case is significantly harder and it usually follows from the so called $T1$ - theorem of David and Journ\'{e} \cite{stein}. Once the ``Banach case'' of the theorem has been understood, the ``quasi - Banach case'' follows from it by using Calder\'{o}n - Zygmund decompositions for all the functions $f_1, ..., f_n$ carefully \cite{meyerc}, \cite{ks}, \cite{gt}.
\section{Flag Paraproducts}
We start with the following concrete example
\begin{equation}\label{6} T(f,g,h)(x) = \int_{{\mbox{\rm I\kern-.22em R}}^7} f(x-\alpha_1-\beta_1) g(x-\alpha_2-\beta_2-\gamma_1) h(x-\alpha_3-\gamma_2) K(\alpha) K(\beta) K(\gamma) d \alpha d \beta d \gamma \end{equation} which is a prototype of a ``flag paraproduct''. As one can see, there are now three kernels acting on our set of three functions. $K(\beta)$ and $K(\gamma)$, being kernels of two variables, act on the pairs $(f,g)$ and $(g,h)$ respectively, while $K(\alpha)$, being a kernel of three variables, acts on all three functions $(f,g,h)$, and all of them act in a ``paraproduct manner''. The point is that all these three ``actions'' happen simultaneously.
Alternatively, one can rewrite (\ref{6}) as
$$ T(f,g,h)(x) = \int_{{\mbox{\rm I\kern-.22em R}}^3} m(\xi) \widehat{f}(\xi_1) \widehat{g}(\xi_2) \widehat{h}(\xi_3)
e^{ 2\pi i x (\xi_1 + \xi_2 +\xi_3)} d \xi$$ where
$$m(\xi) = m'(\xi_1, \xi_2)\cdot m''(\xi_2, \xi_3)\cdot m'''(\xi_1, \xi_2, \xi_3)$$ is now a product of three classical symbols, two of them in ${\cal{M}}({\mbox{\rm I\kern-.22em R}}^2)$ and the third in ${\cal{M}}({\mbox{\rm I\kern-.22em R}}^3)$.
Generally, for $n\geq 1$, we denote by ${\cal{M}}_{flag}({\mbox{\rm I\kern-.22em R}}^n)$ the set of all symbols $m$ given by arbitrary products of the form
$$m(\xi) := \prod_{S\subseteq \{1,...,n\}} m_S(\xi_S)$$ where $m_S\in {\cal{M}}({\mbox{\rm I\kern-.22em R}}^{card(S)})$, the vector $\xi_S\in {\mbox{\rm I\kern-.22em R}}^{card(S)}$ is defined by $\xi_S:= (\xi_i)_{i\in S}$, while $\xi \in {\mbox{\rm I\kern-.22em R}}^n$ is the vector $\xi:= (\xi_i)_{i=1}^n$. Every such symbol $m\in {\cal{M}}_{flag}({\mbox{\rm I\kern-.22em R}}^n)$ defines naturally a generic flag paraproduct $T_m$ by the same formula (\ref{2}). Of course, as usual, the goal is to prove H\"{o}lder type estimates for them \footnote{A ``flag'' is an increasing sequence of subspaces of a vector space $V$: $\{0\}=V_0\subseteq V_1\subseteq ... \subseteq V_k = V$. It is easy to see that a generic symbol in ${\cal{M}}_{flag}({\mbox{\rm I\kern-.22em R}}^n)$ is singular along every possible flag of subspaces, spanned by the coordinate system of ${\mbox{\rm I\kern-.22em R}}^n$. It is also interesting to note that in a completely different direction (see \cite{nrs}, \cite{ns}) singular integrals generated by ``flag kernels'' (this time) appear also naturally in the theory of several complex variables.}.
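In the same toy discrete model as above (a DFT on $\Z_N$ in place of the Fourier transform; an illustration only, with hypothetical names), one can implement a tri-linear operator with a flag symbol $m(\xi)=a(\xi_1,\xi_2) b(\xi_2,\xi_3)$ and check that for $a=b\equiv 1$ it degenerates to the product $f_1 f_2 f_3$.

```python
from math import pi
from cmath import exp

N = 8

def dft(f):
    # fhat[k] = (1/N) sum_x f[x] e^{-2 pi i k x / N}
    return [sum(f[x] * exp(-2j * pi * k * x / N) for x in range(N)) / N
            for k in range(N)]

def flag(a, b, f1, f2, f3):
    # discrete analogue of T_m with the flag symbol m(xi) = a(xi1, xi2) * b(xi2, xi3)
    h1, h2, h3 = dft(f1), dft(f2), dft(f3)
    return [sum(a(k1, k2) * b(k2, k3) * h1[k1] * h2[k2] * h3[k3] *
                exp(2j * pi * x * (k1 + k2 + k3) / N)
                for k1 in range(N) for k2 in range(N) for k3 in range(N))
            for x in range(N)]

f1 = [float(x % 3) for x in range(N)]
f2 = [float((x + 1) % 4) for x in range(N)]
f3 = [float((2 * x) % 5) for x in range(N)]
out = flag(lambda k1, k2: 1.0, lambda k2, k3: 1.0, f1, f2, f3)
err = max(abs(out[x] - f1[x] * f2[x] * f3[x]) for x in range(N))
print(err)
```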
Let us assume, as in the case of classical paraproducts briefly discussed before, that the kernels $K(\alpha)$, $K(\beta)$, $K(\gamma)$ are given by
$$K(\alpha) = \sum_{k_1} \Phi_{k_1}(\alpha_1) \Phi_{k_1}(\alpha_2) \Phi_{k_1}(\alpha_3), $$
$$K(\beta) = \sum_{k_2} \Phi_{k_2}(\beta_1) \Phi_{k_2}(\beta_2) $$ and
$$K(\gamma) = \sum_{k_3} \Phi_{k_3}(\gamma_1) \Phi_{k_3}(\gamma_2). $$ In particular, the operator in (\ref{6}) becomes
\begin{equation} T(f, g, h)(x) = \sum_{k_1, k_2, k_3} (f\ast\Phi_{k_1}\ast\Phi_{k_2})(x)\cdot (g\ast\Phi_{k_1}\ast\Phi_{k_2}\ast\Phi_{k_3})(x)\cdot (h\ast\Phi_{k_1}\ast\Phi_{k_3})(x) \end{equation} and it should be clear, by looking at this expression, that there are no ``easy Banach space estimates'' this time. Moreover, assuming that such estimates existed, using the Calder\'{o}n - Zygmund decomposition as before to get the ``quasi - Banach estimates'' would not help either, because of the multi-parameter structure of the kernel $K(\alpha)K(\beta)K(\gamma)$.
In other words, completely new ideas are necessary to understand the boundedness properties of these flag paraproducts. More on this later on, in the last section of the paper. We end the current one with the following result from \cite{c}.
\begin{theorem}\label{main} Let $a, b \in {\cal{M}}({\mbox{\rm I\kern-.22em R}}^2)$. Then, the 3 - linear operator $T_{ab}$ defined by the formula
$$T_{ab}(f_1, f_2, f_3)(x) := \int_{{\mbox{\rm I\kern-.22em R}}^3} a(\xi_1, \xi_2)\cdot b(\xi_2, \xi_3) \widehat{f_1}(\xi_1) \widehat{f_2}(\xi_2) \widehat{f_3}(\xi_3) e^{2 \pi i x (\xi_1 + \xi_2 + \xi_3)} d \xi$$ maps $L^{p_1}\times L^{p_2}\times L^{p_3} \rightarrow L^p$ boundedly, as long as $1<p_1, p_2, p_3 <\infty$ and $1/p_1 + 1/p_2 + 1/p_3 =1/p$. \end{theorem} In addition, it has also been proven in \cite{c} that $T_{ab}$ also maps $L^{\infty}\times L^p\times L^q\rightarrow L^r$, $L^p\times L^{\infty}\times L^q\rightarrow L^r$, $L^p\times L^q\times L^{\infty}\rightarrow L^r$ and $L^{\infty}\times L^s\times L^{\infty}\rightarrow L^s$ boundedly, as long as $1<p,q,s<\infty$, $0<r<\infty$ and $1/p+1/q=1/r$. The only $L^{\infty}$ estimates that are not available are those of the form $L^{\infty}\times L^{\infty}\times L^{\infty}\rightarrow L^{\infty}$, $L^{\infty}\times L^{\infty}\times L^s\rightarrow L^s$ and $L^s\times L^{\infty}\times L^{\infty}\rightarrow L^s$. But this should not be surprising, since such estimates are in general false, as one can easily see by taking $f_2$ to be identically equal to $1$ in the formula above.
This operator $T_{ab}$ is the simplest flag paraproduct whose complexity goes beyond the one of a Coifman - Meyer paraproduct. However, as we remarked in \cite{c}, we believe that a similar result holds for generic flag paraproducts of arbitrary complexity, and we plan to address this general case in a future paper \cite{c1}.
In the next three sections we will try to answer (at least partially) the question ``Why is it worthwhile to consider and study this new class of operators?'' by describing three distinct instances from the theory of differential equations where they appear naturally.
\section{AKNS systems}
Let $\lambda\in{\mbox{\rm I\kern-.22em R}}$, $\lambda\neq 0$ and consider the system of differential equations
\begin{equation}\label{7} u' = i \lambda D u + N u \end{equation} where $u = [u_1,...,u_n]^t$ is a vector valued function defined on the real line, $D$ is a diagonal $n\times n$ constant matrix with real and distinct entries $d_1,...,d_n$ and $N = (a_{ij})_{i,j=1}^n$ is a matrix valued function defined also on the real line and having the property that $a_{ii}\equiv 0$ for every $i=1,...,n$. These systems play a fundamental role in mathematical physics and scattering theory and they are called AKNS systems \cite{ablowitzsegur}. The particular case $n=2$ is also known to be deeply connected to the classical theory of Schr\"{o}dinger operators \cite{ck1}, \cite{ck2}.
If $N\equiv 0$ it is easy to see that our system (\ref{7}) becomes a union of independent single equations
$$u'_k = i\lambda d_k u_k$$ for $k=1,...,n$ whose solutions are
$$u_k^{\lambda}(x) = C_{k,\lambda} e^{ i\lambda d_k x}$$ and they are all $L^{\infty}({\mbox{\rm I\kern-.22em R}})$-functions. An important problem in the field is the following.
\begin{problem} Prove (or disprove) that as long as $N$ is a matrix whose entries are $L^2({\mbox{\rm I\kern-.22em R}})$ functions, then for almost every real $\lambda$, the corresponding solutions $(u_k^{\lambda})_{k=1}^n$ are all bounded functions. \footnote{The conjecture is easy for $L^1({\mbox{\rm I\kern-.22em R}})$ entries, holds true for $L^p({\mbox{\rm I\kern-.22em R}})$ entries when $1\leq p <2$, thanks to the work of Christ and Kiselev \cite{ck1}, \cite{ck2} and is false for $p>2$, \cite{simon}. } \end{problem}
When $N\not\equiv 0$ one can use a simple variation of constants argument and write $u_k(x)$ as
$$u_k(x) := e^{ i\lambda d_k x} v_k(x)$$ for $k=1,...,n$. As a consequence, the column vector $v=[v_1,...,v_n]^t$ becomes the solution of the following system
\begin{equation}\label{8} v' = W v \end{equation} where the entries of $W$ are given by $w_{lm}(x):= a_{lm}(x)e^{ i\lambda (d_l-d_m) x}$. It is therefore enough to prove that the solutions of (\ref{8}) are bounded as long as the entries $a_{lm}$ are square integrable.
In the particular case when the matrix $N$ is upper (or lower) triangular, the system (\ref{8}) can be solved explicitly. A straightforward calculation shows that every single entry of the vector $v(x)$ can be written as a finite sum of expressions of the form
\begin{equation}\label{9} \int_{x_1<...<x_k<x} f_1(x_1) ... f_k(x_k) e^{i\lambda (\#_1 x_1 + ... + \#_k x_k)} d x_1 ... d x_k, \end{equation} where $f_1, ..., f_k$ are among the entries of the matrix $N$, while $\#_1, ..., \#_k$ are various differences of type $d_l - d_m$ as before and satisfying the nondegeneracy condition
$$\sum_{j=j_1}^{j_2} \#_j \neq 0$$ for every $1\leq j_1 < j_2\leq k$.
Given the fact that all the entries of the matrix $N$ are $L^2({\mbox{\rm I\kern-.22em R}})$ functions and using Plancherel, one can clearly conclude that the expression (\ref{9}) is bounded for a.e.\ $\lambda$, once one proves the following inequality
\begin{equation}
\left\| \sup_M
\left| \int_{\xi_1< ... <\xi_k<M} \widehat{f_1}(\xi_1) ... \widehat{f_k}(\xi_k) e^{2\pi i x(\#_1 \xi_1 + ... +\#_k \xi_k)} d \xi
\right|
\right\|_{L^{2/k}_x} \lesssim
\|f_1\|_2\cdot ... \cdot
\|f_k\|_2 < \infty. \end{equation} A simpler, non-maximal variant of it would be
\begin{equation}
\left\| \int_{\xi_1< ... <\xi_k} \widehat{f_1}(\xi_1) ... \widehat{f_k}(\xi_k) e^{2\pi i x(\#_1 \xi_1 + ... +\#_k \xi_k)} d \xi
\right\|_{L^{2/k}_x} \lesssim
\|f_1\|_2\cdot ... \cdot
\|f_k\|_2 < \infty. \end{equation}
The expression under the quasi-norm can be seen as a $k$-linear multiplier with symbol $\chi_{\xi_1< ... <\xi_k} = \chi_{\xi_1<\xi_2}\cdot ... \cdot \chi_{\xi_{k-1}<\xi_k}$. Now, the ``lacunary variant'' of this multi-linear operator \footnote{It is customary to do this when one faces operators which have some type of modulation invariance. For instance, the ``lacunary version'' of the Carleson operator is the maximal Hilbert transform, while the ``lacunary version'' of the bi-linear Hilbert transform is a paraproduct \cite{laceyt1}, \cite{laceyt2}. The surprise we had in \cite{mtt:walshbiest}, \cite{mtt:fourierbiest} with these operators is that even their ``lacunary versions'' hadn't been considered before. The reader more interested in learning about these operators is referred to the recent paper \cite{mtt:multiest}.} is obtained by replacing every bi-linear Hilbert transform type symbol $\chi_{\xi_{j-1}<\xi_j}$ \cite{laceyt1} with a smoother one $m(\xi_{j-1}, \xi_j)$, in the class ${\cal{M}}({\mbox{\rm I\kern-.22em R}}^2)$. The resulting expression is clearly a flag paraproduct.
\section{General Leibnitz rules}
The following inequality of Kato and Ponce \cite{kp} plays an important role in non-linear PDEs \footnote{ $\widehat{D^{\alpha} f}(\xi) := |\xi|^{\alpha}\widehat{f}(\xi)$ for any $\alpha > 0$}
\begin{equation}\label{10}
\|D^{\alpha}(fg)\|_p \lesssim
\|D^{\alpha} f\|_{p_1}\|g\|_{q_1} +
\|f\|_{p_2}\|D^{\alpha} g\|_{q_2} \end{equation} for any $1<p_i, q_i\leq\infty$, $1/p_i + 1/q_i = 1/p$ for $i=1,2$ and $0<p<\infty$.
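The operator $D^{\alpha}$ of the footnote acts on each frequency by multiplication by $|\xi|^{\alpha}$. In a toy discrete model (our own, a DFT on $\Z_N$ with signed frequencies, for illustration only) this is immediate to check on a single wave:

```python
from math import pi, cos
from cmath import exp

N = 32

def dft(f):
    # fhat[k] = (1/N) sum_x f[x] e^{-2 pi i k x / N}
    return [sum(f[x] * exp(-2j * pi * k * x / N) for x in range(N)) / N
            for k in range(N)]

def D_alpha(f, alpha):
    # D^alpha via its symbol: multiply the frequency-k coefficient by |k|^alpha
    # (signed frequencies -N/2..N/2-1 on the discrete circle; a toy model of
    # the footnote's definition, not the operator on all of R)
    fh = dft(f)
    out = []
    for x in range(N):
        s = 0j
        for k in range(N):
            kk = k if k < N // 2 else k - N   # signed frequency
            s += abs(kk) ** alpha * fh[k] * exp(2j * pi * k * x / N)
        out.append(s)
    return out

# on the pure wave cos(2 pi * 3 x / N), D^alpha just multiplies by 3^alpha
f = [cos(2 * pi * 3 * x / N) for x in range(N)]
g = D_alpha(f, 0.5)
err = max(abs(g[x] - 3 ** 0.5 * f[x]) for x in range(N))
print(err)
```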
It is known that the inequality holds for an arbitrary number of factors and it is also known that it is in general false if one of the indices $p_i, q_i$ is strictly smaller than one. Given (\ref{10}), it is natural to ask if one has similar estimates for more complex expressions, such as
\begin{equation}\label{11}
\left\|
D^{\alpha}[ D^{\beta}(f_1 f_2 f_3)\cdot D^{\gamma}(f_4 f_5)]\right\|_p. \end{equation} Clearly, one can first apply (\ref{10}) for two factors and majorize (\ref{11}) by
\begin{equation}\label{12}
\|D^{\alpha+\beta}(f_1 f_2 f_3)\|_{p_1}\|D^{\gamma}(f_4 f_5)\|_{q_1} +
\|D^{\beta}(f_1 f_2 f_3)\|_{p_2}\|D^{\alpha+\gamma}(f_4 f_5)\|_{q_2} \end{equation} and after that, one can apply (\ref{10}) four more times for two and three factors, to obtain a final upper bound. However, this iterative procedure has a problem. It doesn't work if one would like to end up with products of terms involving (for instance) only $L^2$ norms, since then one has to have $p_1=p_2=2/3$ and $q_1=q_2 = 1$ in (\ref{12}), for which the corresponding (\ref{10}) doesn't hold.
The usual way to prove such ``paradifferential Leibnitz rules'' as the one in (\ref{10}), is by reducing them to the Coifman - Meyer theorem mentioned before. Very briefly, the argument works as follows. First, one uses a standard Littlewood - Paley decomposition \cite{stein} and writes both $f$ and $g$ as
$$f = \sum_{k\in\Z} f\ast\Psi_k$$ and
$$g = \sum_{k\in\Z} g\ast\Psi_k$$ where $(\Psi_k)_k$ is a well chosen family of ``$\Psi$ type''. In particular, one has
$$fg = \sum_{k_1, k_2} (f\ast\Psi_{k_1})(g\ast\Psi_{k_2}) = \sum_{k_1\sim k_2} + \sum_{k_1<<k_2} + \sum_{k_2<<k_1} :=$$
$$I + II + III.$$ Then, one can write term II (for instance) as
$$\sum_{k_1<<k_2}(f\ast\Psi_{k_1})(g\ast\Psi_{k_2}) = \sum_{k_2}\left(\sum_{k_1<< k_2} (f\ast\Psi_{k_1})\right) (g\ast\Psi_{k_2}) :=$$
$$\sum_{k_2}(f\ast\Phi_{k_2})(g\ast\Psi_{k_2}) = \sum_{k}(f\ast\Phi_{k})(g\ast\Psi_{k}) =$$
$$\sum_k [(f\ast\Phi_k)(g\ast\Psi_k)]\ast\widetilde{\Psi}_k$$ for a well chosen family $(\widetilde{\Psi}_k)_k$ of ``$\Psi$ type'', where $(\Phi_k)_k$ is a ``$\Phi$ type'' family now.
Denote by
$$\Pi(f,g) = \sum_k [(f\ast\Phi_k)(g\ast\Psi_k)]\ast\widetilde{\Psi}_k.$$ Then, we have
$$D^{\alpha}(\Pi(f,g)) = \sum_k [(f\ast\Phi_k)(g\ast\Psi_k)]\ast D^{\alpha}\widetilde{\Psi}_k:=$$
$$\sum_k [(f\ast\Phi_k)(g\ast\Psi_k)]\ast 2^{k\alpha}\widetilde{\widetilde{\Psi}}_k =$$
$$\sum_k [(f\ast\Phi_k)(g\ast 2^{k\alpha}\Psi_k)]\ast \widetilde{\widetilde{\Psi}}_k :=$$
$$\sum_k [(f\ast\Phi_k)(g\ast D^{\alpha}\widetilde{\widetilde{\widetilde{\Psi}}}_k)]\ast \widetilde{\widetilde{\Psi}}_k =$$
$$\sum_k [(f\ast\Phi_k)(D^{\alpha} g\ast\widetilde{\widetilde{\widetilde{\Psi}}}_k)]\ast \widetilde{\widetilde{\Psi}}_k :=$$
$$\widetilde{\Pi}(f, D^{\alpha} g).$$ Now, it is easy to see that both $\Pi$ and $\widetilde{\Pi}$ are in fact bi-linear paraproducts whose symbols are
$$\sum_k\widehat{\Phi_k}(\xi_1)\widehat{\Psi_k}(\xi_2)\widehat{\widetilde{\Psi_k}}(\xi_1+\xi_2)$$ and
$$\sum_k\widehat{\Phi_k}(\xi_1)\widehat{\widetilde{\widetilde{\widetilde{\Psi_k}}}}(\xi_2)\widehat{\widetilde{\widetilde{\Psi_k}}}(\xi_1+\xi_2)$$ respectively.
Combining the above equality (between $\Pi$ and $\widetilde{\Pi}$) with the Coifman - Meyer theorem and treating similarly the other two terms I and III, give the desired (\ref{10}).
What we've learned from the calculations above, is that every non-linearity of the form $D^{\alpha}(fg)$ can be mollified and written as a finite sum of various bi-linear paraproducts applied to either $D^{\alpha}f$ and $g$ or to $f$ and $D^{\alpha}g$ and this fact allows one to reduce inequality (\ref{10}) to the Coifman - Meyer theorem on paraproducts.
Let us consider now the non-linear expression $D^{\alpha}(D^{\beta}(fg) h)$ which is the simplest non-linearity of the same complexity as the one in (\ref{11}). Clearly, this non-linearity which we say is of complexity 2, can be seen as a composition of two non-linearities of complexity 1. In particular, this may suggest that one way to mollify it is by composing the mollified versions of its lower complexity counterparts, thus obtaining expressions of type
$\Pi'(\Pi''(F,G), H)$ where $\Pi'$ and $\Pi''$ are bi-linear paraproducts as before. This procedure reduces the problem of estimating $\|D^{\alpha}(D^{\beta}(fg) h)\|_p$ to the problem of estimating $\|\Pi'(\Pi''(F,G), H)\|_p$, which can be done by applying the Coifman - Meyer theorem two times in a row. However, as we pointed out before, this point of view cannot handle the case when for instance both $F$ and $G$ are functions in an $L^{1+\epsilon}$ space for $\epsilon > 0$ a small number. To be able to understand completely these non-linearities of higher complexity one has to proceed differently. As we will see, the correct point of view is not to write $D^{\alpha}(D^{\beta}(fg) h)$ as a sum of composition of paraproducts, but instead as a sum of flag paraproducts.
Clearly, in order to achieve this, we need to understand two things. First, how to mollify an expression of type $\Pi(F,G)H$ and then, how to mollify $D^{\alpha} (\Pi(F,G)H)$.
Let us assume as before that $\Pi(F,G)$ is given by
$$\Pi(F,G) = \sum_{k_1<<k_2} (F\ast\Psi_{k_1}) (G\ast\Psi_{k_2})$$ and decompose $H$ as usual as
$$H = \sum_{k_3} H\ast\Psi_{k_3}.$$ As a consequence, we have
$$\Pi(F,G) H =$$
$$\sum_{k_1<<k_2; k_3} (F\ast\Psi_{k_1}) (G\ast\Psi_{k_2}) (H\ast\Psi_{k_3}) =$$
$$\sum_{k_1<<k_2; k_3<< k_2} + \sum_{k_1<<k_2; k_3 \sim k_2} + \sum_{k_1<<k_2<<k_3}:= $$
$$A + B + C.$$ It is not difficult to see that both $A$ and $B$ are simply 3-linear paraproducts, while $C$ can be written as
$$\int_{{\mbox{\rm I\kern-.22em R}}^3} m(\xi_1, \xi_2, \xi_3) \widehat{F}(\xi_1) \widehat{G}(\xi_2) \widehat{H}(\xi_3) e^{2\pi i x(\xi_1 +\xi_2 +\xi_3)} d \xi $$ where
$$m(\xi_1, \xi_2, \xi_3) = \sum_{k_1<<k_2<<k_3}\widehat{\Psi_{k_1}}(\xi_1)\widehat{\Psi_{k_2}}(\xi_2)\widehat{\Psi_{k_3}}(\xi_3) :=$$
$$\sum_{k_2<<k_3}\widehat{\Phi_{k_2}}(\xi_1)\widehat{\Psi_{k_2}}(\xi_2)\widehat{\Psi_{k_3}}(\xi_3) = $$
\begin{equation}\label{13} \sum_{k_2<<k_3}\widehat{\Phi_{k_2}}(\xi_1)\widehat{\Psi_{k_2}}(\xi_2) \widehat{\widetilde{\Phi_{k_3}}}(\xi_2) \widehat{\Psi_{k_3}}(\xi_3) \end{equation} for some well chosen family $(\widetilde{\Phi_{k_3}}(\xi_2))_{k_3}$ of ``$\Phi$ type''. But then, one observes that (\ref{13}) splits as
$$\left( \sum_{k_2}\widehat{\Phi_{k_2}}(\xi_1)\widehat{\Psi_{k_2}}(\xi_2)\right) \left(\sum_{k_3}\widehat{\widetilde{\Phi_{k_3}}}(\xi_2) \widehat{\Psi_{k_3}}(\xi_3)\right) $$ since the only way in which $\widehat{\Psi_{k_2}}(\xi_2) \widehat{\widetilde{\Phi_{k_3}}}(\xi_2)\neq 0$ is to have $k_2<<k_3$. This shows that the symbol of $C$ can be written as $a(\xi_1, \xi_2) b(\xi_2,\xi_3)$ with both $a$ and $b$ in ${\cal{M}}({\mbox{\rm I\kern-.22em R}}^2)$, which means that $C$ is indeed a flag paraproduct.
In order to understand now how to mollify $D^{\alpha} (\Pi(F,G)H)$, let us first rewrite (\ref{13}) as
\begin{equation}\label{14} \sum_{k_2<<k_3}\widehat{\Phi_{k_2}}(\xi_1)\widehat{\Psi_{k_2}}(\xi_2) \widehat{\widetilde{\Phi_{k_2}}}(\xi_1+\xi_2) \widehat{\widetilde{\widetilde{\Phi_{k_3}}}}(\xi_1+\xi_2) \widehat{\Psi_{k_3}}(\xi_3) \widehat{\widetilde{\Phi_{k_3}}}(\xi_1+\xi_2+\xi_3) \end{equation} where as usual, $\widehat{\widetilde{\Phi_{k_2}}}(\xi_1+\xi_2)$, $\widehat{\widetilde{\widetilde{\Phi_{k_3}}}}(\xi_1+\xi_2)$ and $\widehat{\widetilde{\Phi_{k_3}}}(\xi_1+\xi_2+\xi_3)$ have been inserted naturally into (\ref{13}).
The advantage of (\ref{14}) is that the corresponding 3-linear operator can be easily written as
\begin{equation}\label{15} \sum_{k_3} \left\{ \left( \sum_{k_2<<k_3} [(F\ast\Phi_{k_2})(G\ast\Psi_{k_2})]\ast\widetilde{\Phi_{k_2}}\right)\ast\widetilde{\widetilde{\Phi_{k_3}}} \cdot (H\ast\Psi_{k_3})\right\}\ast \widetilde{\Phi_{k_3}}. \end{equation} Then, exactly as before one can write
$$D^{\alpha} (\Pi(F,G)H) =$$
$$\sum_{k_3} \left\{ \left( \sum_{k_2<<k_3} [(F\ast\Phi_{k_2})(G\ast\Psi_{k_2})]\ast\widetilde{\Phi_{k_2}}\right)\ast\widetilde{\widetilde{\Phi_{k_3}}} \cdot (H\ast\Psi_{k_3})\right\}\ast D^{\alpha}\widetilde{\Phi_{k_3}} =$$
$$= ... =$$
$$ \sum_{k_3} \left\{ \left( \sum_{k_2<<k_3} [(F\ast\Phi_{k_2})(G\ast\Psi_{k_2})]\ast\widetilde{\Phi_{k_2}}\right)\ast\widetilde{\widetilde{\Phi_{k_3}}} \cdot (D^{\alpha}H\ast\widetilde{\Psi_{k_3}})\right\}\ast \widetilde{\widetilde{\Phi_{k_3}}}. $$ Now, this tri-linear operator has the symbol
$$\sum_{k_2<<k_3} \widehat{\Phi_{k_2}}(\xi_1) \widehat{\Psi_{k_2}}(\xi_2) \widehat{\widetilde{\Psi_{k_2}}}(\xi_1+\xi_2) \widehat{\widetilde{\widetilde{\Psi_{k_3}}}}(\xi_1+\xi_2) \widehat{\widetilde{\Psi_{k_3}}}(\xi_3) \widehat{\widetilde{\widetilde{\Psi_{k_3}}}}(\xi_1+\xi_2 +\xi_3) = $$
$$ \sum_{k_2<<k_3} \widehat{\Phi_{k_2}}(\xi_1) \widehat{\Psi_{k_2}}(\xi_2) \widehat{\widetilde{\Psi_{k_3}}}(\xi_3) \widehat{\widetilde{\widetilde{\Psi_{k_3}}}}(\xi_1+\xi_2 +\xi_3) = $$
$$ \sum_{k_2<<k_3} \widehat{\Phi_{k_2}}(\xi_1) \widehat{\Psi_{k_2}}(\xi_2) \widehat{\widetilde{\Phi_{k_3}}}(\xi_2) \widehat{\widetilde{\Psi_{k_3}}}(\xi_3) \widehat{\widetilde{\widetilde{\Psi_{k_3}}}}(\xi_1+\xi_2 +\xi_3) $$ and this splits again as
$$ \left(\sum_{k_2} \widehat{\Phi_{k_2}}(\xi_1) \widehat{\Psi_{k_2}}(\xi_2)\right) \left(\sum_{k_3} \widehat{\widetilde{\Phi_{k_3}}}(\xi_2) \widehat{\widetilde{\Psi_{k_3}}}(\xi_3) \widehat{\widetilde{\widetilde{\Psi_{k_3}}}}(\xi_1+\xi_2 +\xi_3)\right) := $$
$$\alpha(\xi_1,\xi_2)\cdot \beta(\xi_1, \xi_2, \xi_3)$$ which is an element of ${\cal{M}}_{flag}({\mbox{\rm I\kern-.22em R}}^3)$.
Since tri-linear operators of the form (\ref{15}) give rise to model operators which have been understood in \cite{c}, the above discussion proves the following ``grand Leibnitz rule'' for the simplest non-linearity of complexity 2
$$\|D^{\alpha}(D^{\beta}(fg) h)\|_p \lesssim$$
$$
\|D^{\alpha+\beta}f\|_{p_1} \|g\|_{q_1} \|h\|_{r_1} +
\|f\|_{p_2} \|D^{\alpha+\beta}g\|_{q_2} \|h\|_{r_2} +
\|D^{\beta}f\|_{p_3}\|g\|_{q_3}\|D^{\alpha}h\|_{r_3} +
\|f\|_{p_4}\|D^{\beta}g\|_{q_4}\|D^{\alpha}h\|_{r_4}$$ for every $1<p_i, q_i, r_i<\infty$ with $1/p_i+1/q_i+1/r_i = 1/p$ for $i=1,2,3,4$.
\section{Flag Paraproducts and the non-linear Schr\"{o}dinger equation}
In this section the goal is to briefly describe a recent study by Germain, Masmoudi and Shatah \cite{gms} on the non-linear Schr\"{o}dinger equation. We are grateful to Pierre Germain who explained parts of this work to us.
In just a few words, the main task of these three authors is to develop a general method for understanding the global existence for small initial data of various non-linear Schr\"{o}dinger equations and systems of non-linear Schr\"{o}dinger equations.
Consider the following ``quadratic example''
\begin{equation} \partial_t u + i\triangle u = u^2 \end{equation}
\begin{equation}
u|_{t=2} = u_2 \end{equation} for $(t,x)\in {\mbox{\rm I\kern-.22em R}}\times {\mbox{\rm I\kern-.22em R}}^n$ \footnote{There are some technical reasons for which the authors preferred the initial time to be 2, related to the norm of the space $X$ where the global existence takes place.}.
The Duhamel formula written on the Fourier side becomes
$$\widehat{u} (t,\xi) =
\widehat{u_2}(\xi) e^{i t |\xi|^2} + \int_2^t e^{-i (s-t)|\xi|^2} \widehat{u^2}(s,\xi) d s. $$
Then, if one writes $u=e^{-it\triangle} f$, the above formula becomes
\begin{equation}\label{16} \widehat{f}(t,\xi) =
\widehat{u_2}(\xi) + \int_2^t \int e^{is (-|\xi|^2 + |\eta|^2 + |\xi-\eta|^2)} \widehat{f}(s,\eta)\widehat{f}(s,\xi-\eta) d s d \eta. \end{equation}
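To see why (\ref{16}) holds (modulo the harmless phase factor coming from the initial time being $2$ rather than $0$), observe that on the Fourier side the substitution $u=e^{-it\triangle}f$ reads $\widehat{u}(t,\xi) = e^{it|\xi|^2}\widehat{f}(t,\xi)$, so multiplying the Duhamel formula by $e^{-it|\xi|^2}$ gives
$$\widehat{f}(t,\xi) = \widehat{u_2}(\xi) + \int_2^t e^{-is|\xi|^2}\widehat{u^2}(s,\xi) d s,$$ and then expanding the quadratic term as
$$\widehat{u^2}(s,\xi) = \int \widehat{u}(s,\eta)\widehat{u}(s,\xi-\eta) d \eta = \int e^{is(|\eta|^2+|\xi-\eta|^2)} \widehat{f}(s,\eta)\widehat{f}(s,\xi-\eta) d \eta$$ produces precisely the phase $-|\xi|^2+|\eta|^2+|\xi-\eta|^2$.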
A significant part of the argument in \cite{gms} depends on how well one estimates the integral expression in (\ref{16}). The idea would be to take advantage of the oscillation of the term $e^{is\Phi}$ where $\Phi:= -|\xi|^2 + |\eta|^2 + |\xi-\eta|^2$. However, $\Phi$ is ``too degenerate'' (i.e. $\Phi=0$ whenever $\xi=\eta$ or $\eta=0$) and as a consequence, the usual ``$\frac{d}{ds}\{\frac{e^{is\Phi}}{i\Phi}\}$'' argument doesn't work. One needs a ``wiser integration by parts'', not only in the $s$ variable but also in the $\eta$ variable. Denote by
$$P:= -\eta +\frac{1}{2}\xi$$ and
$$Z:= \Phi + P\cdot (\partial_{\eta}\Phi). $$ Observe that
$$Z = - (|\eta|^2 + |\xi-\eta|^2) $$ which is identically equal to zero only when $\xi=\eta=0$.
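Indeed, since $\partial_{\eta}\Phi = 2\eta - 2(\xi-\eta) = 4\eta - 2\xi$, a direct computation (which we include for the reader's convenience) gives
$$P\cdot (\partial_{\eta}\Phi) = \left(-\eta+\frac{1}{2}\xi\right)\cdot (4\eta-2\xi) = -4|\eta|^2 + 4\eta\cdot\xi - |\xi|^2$$ and therefore
$$Z = \left(-|\xi|^2 + |\eta|^2 + |\xi-\eta|^2\right) + \left(-4|\eta|^2 + 4\eta\cdot\xi - |\xi|^2\right) = -|\eta|^2 - |\xi-\eta|^2,$$ as one checks after expanding $|\xi-\eta|^2 = |\xi|^2 - 2\xi\cdot\eta + |\eta|^2$ on both sides.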
With these notations, one has
$$\frac{1}{iZ} (\partial_s + \frac{P}{s}\partial_{\eta}) e^{is\Phi} = e^{is\Phi}.$$ In particular, the inverse Fourier transform of the integral term in (\ref{16}) becomes
$${\cal{F}}^{-1} \int_2^t\int \frac{1}{iZ} (\partial_s + \frac{P}{s}\partial_{\eta}) e^{is\Phi} \widehat{f}(s,\xi-\eta) \widehat{f}(s,\eta) d s d \eta = $$
$$I + II.$$
Using the fact that $|\xi-\eta|^2/iZ$ and $|\eta|^2/iZ$ are both classical symbols in ${\cal{M}}({\mbox{\rm I\kern-.22em R}}^{2n})$, the Coifman--Meyer theorem proves that both $I$ and $II$ are ``smoothing expressions''. To conclude, expressions of type
$${\cal{F}}^{-1}\int_2^t\int e^{is\Phi} m(\xi,\eta) \widehat{g}(s,\eta) \widehat{h}(s,\xi-\eta) d s d \eta $$ for some $m\in {\cal{M}}({\mbox{\rm I\kern-.22em R}}^{2n})$ appear naturally, and it is easy to see that if one keeps $s$ fixed, the rest of the formula is just a bi-linear paraproduct.
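We recall, in the form needed here, the statement of the Coifman--Meyer theorem: if $m\in{\cal{M}}({\mbox{\rm I\kern-.22em R}}^{2n})$, i.e. $m$ satisfies the symbol estimates $|\partial^{\alpha} m(\xi,\eta)|\lesssim (|\xi|+|\eta|)^{-|\alpha|}$ for sufficiently many multi-indices $\alpha$, then the associated bi-linear operator
$$(g,h)\mapsto {\cal{F}}^{-1}\int m(\xi,\eta) \widehat{g}(\eta) \widehat{h}(\xi-\eta) d \eta$$ maps $L^p\times L^q$ into $L^r$ boundedly, for every $1<p,q<\infty$ with $1/p+1/q = 1/r$ and $r>1/2$.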
Coming back to $I$, one of the expressions related to it (after the integration by parts) is of the form
\begin{equation}\label{17} {\cal{F}}^{-1}\int_2^t\int e^{is\Phi} m(\xi,\eta) \partial_s\widehat{f}(s,\eta) \widehat{F}(s,\xi-\eta) d s d \eta \end{equation} for a certain new function $F$. Since $u=e^{-is\triangle} f$ it follows that $f = e^{is\triangle} u$ and then, by using the fact that $u$ solves the equation, we deduce that $\partial_s f = e^{is\triangle} u^2$ which means that
$$\widehat{\partial_s f}(s,\eta) = e^{-is|\eta|^2}\widehat{u^2}(\eta) = $$
$$e^{-is|\eta|^2} \int_{\eta_1+\eta_2 =\eta} \widehat{u}(\eta_1) \widehat{u}(\eta_2) d \eta_1 d \eta_2 =$$
$$e^{-is|\eta|^2} \int_{\eta_1+\eta_2 =\eta}
e^{is|\eta_1|^2} \widehat{f}(s,\eta_1)
e^{is|\eta_2|^2} \widehat{f}(s,\eta_2) d \eta_1 d \eta_2 =$$
$$e^{-is|\eta|^2}
\int e^{is|\tau|^2} \widehat{f}(s,\tau)
e^{is|\eta-\tau|^2} \widehat{f}(s,\eta-\tau) d \tau.$$ Using this in (\ref{17}) one obtains an expression of the form
$$ {\cal{F}}^{-1} \int_2^t\int e^{is\widetilde{\Phi}} m(\xi, \eta) \widehat{f}(s,\tau) \widehat{f}(s, \eta-\tau) \widehat{F}(s,\xi-\eta) d s d \eta d \tau $$
where $\widetilde{\Phi}:= -|\xi|^2 + |\xi-\eta|^2 + |\tau|^2 + |\eta-\tau|^2$.
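The new phase is obtained simply by collecting all the oscillatory factors: the original $e^{is\Phi}$ contributes $-|\xi|^2+|\eta|^2+|\xi-\eta|^2$, while the substitution for $\partial_s\widehat{f}$ contributes the extra factor $e^{is(-|\eta|^2+|\tau|^2+|\eta-\tau|^2)}$, so that
$$\widetilde{\Phi} = \left(-|\xi|^2+|\eta|^2+|\xi-\eta|^2\right) + \left(-|\eta|^2+|\tau|^2+|\eta-\tau|^2\right) = -|\xi|^2 + |\xi-\eta|^2 + |\tau|^2 + |\eta-\tau|^2.$$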
Using now a similar ``integration by parts argument'' in all three variables this time one obtains as before expressions of type
$$ {\cal{F}}^{-1} \int_2^t\int e^{is\widetilde{\Phi}} m(\xi, \eta) m(\xi,\eta,\tau) \widehat{f}(s,\tau) \widehat{f}(s, \eta-\tau) \widehat{F}(s,\xi-\eta) d s d \eta d \tau. $$ The inner formula (for a fixed $s$) is of the form
$${\cal{F}}^{-1} \int m(\xi, \eta) m(\xi,\eta,\tau) \widehat{g}(\tau) \widehat{f}(\eta-\tau) \widehat{h}(\xi-\eta) d \eta d \tau $$ and we claim that it is naturally related to the flag paraproducts we described earlier.
Alternatively, one can rewrite it as
\begin{equation}\label{18} \int_{{\mbox{\rm I\kern-.22em R}}^3} m(\xi, \eta) m(\xi,\eta,\gamma) \widehat{f}(\eta-\gamma) \widehat{g}(\gamma) \widehat{h}(\xi-\eta) e^{ix\xi} d \xi d \eta d \gamma. \end{equation} Then, if we change variables $\xi-\eta:=\xi_3$, $\eta-\gamma:=\xi_1$ and $\gamma:=\xi_2$ (\ref{18}) becomes
$$\int_{{\mbox{\rm I\kern-.22em R}}^3} m(\xi_1+\xi_2+\xi_3,\xi_1+\xi_2) m(\xi_1+\xi_2+\xi_3, \xi_1+\xi_2,\xi_2) \widehat{f}(\xi_1) \widehat{g}(\xi_2) \widehat{h}(\xi_3) e^{i x (\xi_1+\xi_2+\xi_3)} d \xi := $$
\begin{equation}\label{19} \int_{{\mbox{\rm I\kern-.22em R}}^3} \widetilde{m}(\xi_1+\xi_2,\xi_3) \widetilde{\widetilde{m}}(\xi_1, \xi_2, \xi_3) \widehat{f}(\xi_1) \widehat{g}(\xi_2) \widehat{h}(\xi_3) e^{i x (\xi_1+\xi_2+\xi_3)} d \xi \end{equation} where $\widetilde{m}\in{\cal{M}}({\mbox{\rm I\kern-.22em R}}^2)$ while $\widetilde{\widetilde{m}}\in{\cal{M}}({\mbox{\rm I\kern-.22em R}}^3)$.
As it stands, (\ref{19}) is not a flag paraproduct, but we will show that its analysis can be reduced to the analysis of a flag paraproduct.
Assume for simplicity that we have
$$ \widetilde{m}(\xi_1+\xi_2,\xi_3) = \sum_{k_1} \widehat{\Phi_{k_1}}(\xi_1+\xi_2) \widehat{\Phi_{k_1}}(\xi_3) $$ and
$$ \widetilde{\widetilde{m}}(\xi_1, \xi_2, \xi_3) = \sum_{k_2} \widehat{\Phi_{k_2}}(\xi_1) \widehat{\Phi_{k_2}}(\xi_2) \widehat{\Phi_{k_2}}(\xi_3) $$ as before\footnote{Of course, everything is defined in ${\mbox{\rm I\kern-.22em R}}^n$ now, but the extensions to arbitrary euclidean spaces are straightforward}. Then,
$$ \widetilde{m}(\xi_1+\xi_2,\xi_3) \widetilde{\widetilde{m}}(\xi_1, \xi_2, \xi_3) = $$
\begin{equation}\label{20} \sum_{k_1, k_2} \widehat{\Phi_{k_1}}(\xi_1+\xi_2) \widehat{\Phi_{k_1}}(\xi_3) \widehat{\Phi_{k_2}}(\xi_1) \widehat{\Phi_{k_2}}(\xi_2) \widehat{\Phi_{k_2}}(\xi_3). \end{equation} Clearly, we have two interesting cases:
\underline{Case 1: $k_2<<k_1$}.
Here, the only possibility is to have $(\widehat{\Phi_{k_1}}(\xi_3))_{k_1}$ of $\Phi$ type in which case $(\widehat{\Phi_{k_1}}(\xi_1+\xi_2))_{k_1}$ must be of ``$\Psi$ type''. Since (\ref{20}) can also be written as
$$ \sum_{k_2<< k_1} \widehat{\Phi_{k_1}}(\xi_1+\xi_2) \widehat{\Phi_{k_1}}(\xi_3) \widehat{\Phi_{k_2}}(\xi_1) \widehat{\Phi_{k_2}}(\xi_2) \widehat{\widetilde{\Phi_{k_2}}}(\xi_1+\xi_2) \widehat{\Phi_{k_2}}(\xi_3) $$ for a well chosen family $(\widehat{\widetilde{\Phi_{k_2}}}(\xi_1+\xi_2))_{k_2}$ we see that the only way in which $\widehat{\Phi_{k_1}}(\xi_1+\xi_2) \widehat{\widetilde{\Phi_{k_2}}}(\xi_1+\xi_2) \neq 0$ is to have $k_1\sim k_2$. But in this case, the multiplier belongs to ${\cal{M}}({\mbox{\rm I\kern-.22em R}}^3)$ and we simply face a tri-linear paraproduct.
\underline{Case 2: $k_2>>k_1$}.
This time, the only possibility is to have $(\widehat{\Phi_{k_2}}(\xi_3))_{k_2}$ of ``$\Phi$'' type. Then, we can ``complete'' the expression in (\ref{20}) as
\begin{equation}\label{21} \sum_{k_2>>k_1} \widehat{\Phi_{k_2}}(\xi_1) \widehat{\Phi_{k_2}}(\xi_2) \widehat{\Phi_{k_2}}(\xi_3) \widehat{\widetilde{\Phi_{k_2}}}(\xi_1+\xi_2) \widehat{\Phi_{k_1}}(\xi_1+\xi_2) \widehat{\Phi_{k_1}}(\xi_3) \widehat{\widetilde{\Phi_{k_1}}}(\xi_1+\xi_2+\xi_3). \end{equation}
Now, for $(\widehat{\widetilde{\Phi_{k_2}}}(\xi_1+\xi_2))_{k_2}$ we have two options. Either it is of ``$\Psi$ type'', in which situation the only non-zero case would be when $k_1\sim k_2$. But then, this means that we are again in a paraproduct setting. Or, $(\widehat{\widetilde{\Phi_{k_2}}}(\xi_1+\xi_2))_{k_2}$ is of ``$\Phi$ type'' but this can only happen when both $(\widehat{\Phi_{k_2}}(\xi_1))_{k_2}$ and $(\widehat{\Phi_{k_2}}(\xi_2))_{k_2}$ are of ``$\Psi$ type'' (and their oscillations cancel out). Since we also know that either $(\widehat{\Phi_{k_1}}(\xi_1+\xi_2))_{k_1}$ or $(\widehat{\Phi_{k_1}}(\xi_3))_{k_1}$ has to be of a ``$\Psi$ type'', it follows that (\ref{21}) splits as
$$\left(\sum_{k_2}\widehat{\Phi_{k_2}}(\xi_1) \widehat{\Phi_{k_2}}(\xi_2) \widehat{\Phi_{k_2}}(\xi_3) \widehat{\widetilde{\Phi_{k_2}}}(\xi_1+\xi_2)\right) \left(\sum_{k_1} \widehat{\Phi_{k_1}}(\xi_1+\xi_2) \widehat{\Phi_{k_1}}(\xi_3) \widehat{\widetilde{\Phi_{k_1}}}(\xi_1+\xi_2+\xi_3) \right). $$ But then, if we denote by $T(f,g,h)$ the corresponding tri-linear operator, one has
$$\Lambda_T(f,g,h,k):= \int T(f,g,h)(x) k(x) d x =$$
$$\int \left(\sum_{k_2}\widehat{\Phi_{k_2}}(\xi_1) \widehat{\Phi_{k_2}}(\xi_2) \widehat{\Phi_{k_2}}(\xi_3) \widehat{\widetilde{\Phi_{k_2}}}(\xi_1+\xi_2)\right) \left(\sum_{k_1} \widehat{\Phi_{k_1}}(\xi_1+\xi_2) \widehat{\Phi_{k_1}}(\xi_3) \widehat{\widetilde{\Phi_{k_1}}}(\xi_1+\xi_2+\xi_3) \right)\cdot $$
$$\cdot\widehat{f}(\xi_1) \widehat{g}(\xi_2) \widehat{h}(\xi_3) \widehat{k}(-\xi_1-\xi_2-\xi_3) d \xi = $$
$$\int \left(\sum_{k_2}\widehat{\widetilde{\Phi_{k_2}}}(-\xi_1) \widehat{\Phi_{k_2}}(\xi_2) \widehat{\Phi_{k_2}}(\xi_3) \widehat{\widetilde{\widetilde{\Phi_{k_2}}}}(-\xi_1-\xi_2)\right) \left(\sum_{k_1} \widehat{\widetilde{\Phi_{k_1}}}(-\xi_1-\xi_2) \widehat{\Phi_{k_1}}(\xi_3) \widehat{\widetilde{\widetilde{\Phi_{k_1}}}}(-\xi_1-\xi_2-\xi_3) \right)\cdot $$
$$\cdot\widehat{f}(\xi_1) \widehat{g}(\xi_2) \widehat{h}(\xi_3) \widehat{k}(-\xi_1-\xi_2-\xi_3) d \xi. $$ Then, if we denote by $\lambda:= -\xi_1-\xi_2-\xi_3$ the previous expression becomes
$$\int \left(\sum_{k_2}\widehat{\widetilde{\Phi_{k_2}}}(\lambda+\xi_2+\xi_3) \widehat{\Phi_{k_2}}(\xi_2) \widehat{\Phi_{k_2}}(\xi_3) \widehat{\widetilde{\widetilde{\Phi_{k_2}}}}(\lambda+\xi_3)\right) \left(\sum_{k_1} \widehat{\widetilde{\Phi_{k_1}}}(\lambda+\xi_3) \widehat{\Phi_{k_1}}(\xi_3) \widehat{\widetilde{\widetilde{\Phi_{k_1}}}}(\lambda) \right)\cdot $$
$$\cdot\widehat{f}(-\lambda-\xi_2-\xi_3) \widehat{g}(\xi_2) \widehat{h}(\xi_3) \widehat{k}(\lambda) d \xi d \lambda := $$
$$\int m(\xi_2,\xi_3, \lambda) m(\xi_3,\lambda) \widehat{g}(\xi_2) \widehat{h}(\xi_3) \widehat{k}(\lambda) \widehat{f}(-\lambda-\xi_2-\xi_3) d \xi d \lambda :=$$
$$\int \Pi_{flag}(g, h, k)(x) f(x) d x. $$ In conclusion, there exists a flag paraproduct $\Pi_{flag}$ so that
$$\Lambda_T(f,g,h,k) = \int \Pi_{flag}(g, h, k)(x) f(x) d x $$ which reduces the study of $T$ to the study of $\Pi_{flag}$.
As far as we understood from \cite{gms}, paraproducts appear in the study of the $3D$ quadratic NLS, while the flag paraproducts are in addition necessary to deal with the more delicate $2D$ quadratic NLS.
\section{Remarks about the proof of Theorem \ref{main}}
Let us first recall that the symbol of the operator in question is $a(\xi_1,\xi_2) b(\xi_2,\xi_3)$. Split as before both $a$ and $b$ as
$$a(\xi_1,\xi_2) = \sum_{k_1} \widehat{\Phi_{k_1}}(\xi_1) \widehat{\Phi_{k_1}}(\xi_2) $$ and
$$b(\xi_2,\xi_3) = \sum_{k_2} \widehat{\Phi_{k_2}}(\xi_2) \widehat{\Phi_{k_2}}(\xi_3) $$ which means that
$$a(\xi_1,\xi_2) b(\xi_2,\xi_3) = \sum_{k_1, k_2} \widehat{\Phi_{k_1}}(\xi_1) \widehat{\Phi_{k_1}}(\xi_2) \widehat{\Phi_{k_2}}(\xi_2) \widehat{\Phi_{k_2}}(\xi_3). $$
As usual, there are three cases. Either $k_1\sim k_2$ or $k_1<<k_2$ or $k_2<<k_1$. The first one is easy, since it generates paraproducts and so we only need to deal with the second one, the third one being completely symmetric. The corresponding symbol is
\begin{equation}\label{22} \sum_{k_1<<k_2} \widehat{\Phi_{k_1}}(\xi_1) \widehat{\Phi_{k_1}}(\xi_2) \widehat{\Phi_{k_2}}(\xi_2) \widehat{\Phi_{k_2}}(\xi_3). \end{equation} Clearly, in this case we must have $(\widehat{\Phi_{k_2}}(\xi_2))_{k_2}$ of ``$\Phi$ type''. For reasons that will be clearer later on, we would have liked instead of (\ref{22}) to face an expression of the form
\begin{equation}\label{23} \sum_{k_1<<k_2} \widehat{\Phi_{k_1}}(\xi_1) \widehat{\Phi_{k_1}}(\xi_2) \widehat{\Phi_{k_2}}(\xi_1+\xi_2) \widehat{\Phi_{k_2}}(\xi_3). \end{equation}
Indeed \footnote{We should emphasize here that a similar problem appeared in the ``bi-est case'' \cite{mtt:walshbiest}, \cite{mtt:fourierbiest}. There, the solution came from the
observation that inside a region of the form $|\xi_1-\xi_2|<< |\xi_2-\xi_3|$, one simply has the equality $\chi_{\xi_1<\xi_2}\chi_{\xi_2<\xi_3} = \chi_{\xi_1<\xi_2}\chi_{\frac{\xi_1+\xi_2}{2}<\xi_3}$. However, in our case a similar formula is not available unfortunately, since we are working with generic multipliers. This is the main reason for which we will have to face later on not only discrete models of type (\ref{24}) (which are the ``lacunary'' versions of the model operators in \cite{mtt:walshbiest}, \cite{mtt:fourierbiest}), but also the new ones described in (\ref{25}). }, if we had to deal with (\ref{23}) instead, we would have completed it as
\begin{equation} \sum_{k_1<<k_2} \widehat{\Phi_{k_1}}(\xi_1) \widehat{\Phi_{k_1}}(\xi_2) \widehat{\widetilde{\Phi_{k_1}}}(\xi_1+\xi_2) \widehat{\Phi_{k_2}}(\xi_1+\xi_2) \widehat{\Phi_{k_2}}(\xi_3) \widehat{\widetilde{\Phi_{k_2}}}(\xi_1+\xi_2+\xi_3) \end{equation} and then the 3-linear operator having this symbol could be conveniently rewritten as
$$\sum_{k_2} \left\{ \left( \sum_{k_1<<k_2} [(f\ast\Phi_{k_1})(g\ast\Phi_{k_1})]\ast\widetilde{\Phi_{k_1}}\right)\ast\Phi_{k_2}\cdot (h\ast\Phi_{k_2}) \right\}\ast\widetilde{\Phi_{k_2}} $$ which could be further discretized to become an average of model operators of the form
\begin{equation}\label{24}
\sum_I \frac{1}{|I|^{1/2}} \langle B_I(f,g), \Phi^1_I\rangle \langle h, \Phi^2_I\rangle \Phi^3_I, \end{equation} where
$$B_I(f,g) = \sum_{J: |J|>|I|}
\frac{1}{|J|^{1/2}} \langle f, \Phi^1_J\rangle \langle g, \Phi^2_J\rangle \Phi^3_J, $$ while $\Phi^i_I$ and $\Phi^i_J$ are all $L^2$ - normalized bump functions, adapted to dyadic intervals $I$ and $J$ respectively, which are either of ``$\Phi$ or $\Psi$ type'' \cite{c} \footnote{In fact, at least two of the families $(\Phi^i_I)_I$, $i=1,2,3$ and at least two of the families $(\Phi^i_J)_J$, $i=1,2,3$ are of a ``$\Psi$ type''.}.
Our next goal is to explain how one can bring the original symbol (\ref{22}) to its better variant (\ref{23}).
The idea is that in the case when $k_1<<k_2$, then $\widehat{\Phi_{k_2}}(\xi_2)$ becomes very close to $\widehat{\Phi_{k_2}}(\xi_1+\xi_2)$, since $\xi_1$ must be in the support of $\widehat{\Phi_{k_1}}(\xi_1)$, which lives at a much smaller scale.
Therefore, using a Taylor decomposition, one can write
$$\widehat{\Phi_{k_2}}(\xi_2) = \widehat{\Phi_{k_2}}(\xi_1+ \xi_2) + \frac{\widehat{\Phi_{k_2}}'(\xi_1+ \xi_2)}{1!}(-\xi_1) + \frac{\widehat{\Phi_{k_2}}''(\xi_1+ \xi_2)}{2!}(-\xi_1)^2 + ... + \frac{\widehat{\Phi_{k_2}}^M(\xi_1+ \xi_2)}{M!}(-\xi_1)^M + R_M(\xi_1, \xi_2).$$ Clearly, the $0$th term gives rise to a model operator similar to the one before. Fix now $0<l\leq M$ and consider an intermediate term $\frac{\widehat{\Phi_{k_2}}^l(\xi_1+ \xi_2)}{l!}(-\xi_1)^l$. If we insert it into (\ref{22}) the corresponding multiplier becomes
$$ \sum_{k_1<<k_2} \widehat{\Phi_{k_1}}(\xi_1) \widehat{\Phi_{k_1}}(\xi_2) \frac{\widehat{\Phi_{k_2}}^l(\xi_1+ \xi_2)}{l!}(-\xi_1)^l \widehat{\Phi_{k_2}}(\xi_3) = $$
$$ \sum_{\# = 1000}^{\infty} \sum_{k_2 = k_1 + \#}^{\infty} \widehat{\Phi_{k_1}}(\xi_1) \widehat{\Phi_{k_1}}(\xi_2) \frac{\widehat{\Phi_{k_2}}^l(\xi_1+ \xi_2)}{l!}(-\xi_1)^l \widehat{\Phi_{k_2}}(\xi_3) = $$
$$ \sum_{\# = 1000}^{\infty} \sum_{k_2 = k_1 + \#}^{\infty} \widehat{\Phi_{k_1}}(\xi_1)(-\xi_1)^l \widehat{\Phi_{k_1}}(\xi_2) \frac{\widehat{\Phi_{k_2}}^l(\xi_1+ \xi_2)}{l!} \widehat{\Phi_{k_2}}(\xi_3) := $$
$$ \sum_{\# = 1000}^{\infty} \sum_{k_2 = k_1 + \#}^{\infty} \frac{2^{k_1 l}}{2^{ k_2 l}} \widehat{\Phi_{k_1, l}}(\xi_1) \widehat{\Phi_{k_1}}(\xi_2) \widehat{\Phi_{k_2, l}}(\xi_1+ \xi_2) \widehat{\Phi_{k_2}}(\xi_3) = $$
$$ \sum_{\# = 1000}^{\infty} 2^{-\# l} \sum_{k_2 = k_1 + \#}^{\infty} \widehat{\Phi_{k_1, l}}(\xi_1) \widehat{\Phi_{k_1}}(\xi_2) \widehat{\Phi_{k_2, l}}(\xi_1+ \xi_2) \widehat{\Phi_{k_2}}(\xi_3). $$ But then, the inner expression (for a fixed $\#$) generates model operators of the form (see \cite{c})
\begin{equation}\label{25}
\sum_I \frac{1}{|I|^{1/2}} \langle B_I^{\#}(f,g), \Phi^1_I\rangle \langle h, \Phi^2_I\rangle \Phi^3_I \end{equation} where
$$B_I^{\#}(f,g) = \sum_{J: |J|= 2^{\#}|I|}
\frac{1}{|J|^{1/2}} \langle f, \Phi^1_J\rangle \langle g, \Phi^2_J\rangle \Phi^3_J. $$ Finally, it is not difficult to see that the ``rest'' operator (the one which corresponds to $R_M(\xi_1,\xi_2)$) can be written as
$$\sum_{\# = 1000}^{\infty} 2^{-\# M} T_{\#}(f,g,h)$$ where the symbol $m_{\#}$ of the operator $T_{\#}$ satisfies an estimate of the type
$$|\partial^{\alpha}m_{\#}(\xi)|\lesssim 2^{\# |\alpha|}\frac{1}{|\xi|^{|\alpha|}}$$ for sufficiently many multi-indices $\alpha$. As a consequence, the Coifman--Meyer theorem implies that each $T_{\#}$ is bounded with a bound of type $O(2^{100 \#})$, which is acceptable if we pick $M$ large enough.
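For completeness, the renormalized bumps $\widehat{\Phi_{k,l}}$ used in the chain of equalities above rely on the following elementary scaling facts (a sketch, assuming as usual that $\widehat{\Phi_{k}}(\xi) = \widehat{\Phi}(2^{-k}\xi)$ for a fixed bump $\widehat{\Phi}$). On the one hand, since $\widehat{\Phi_{k_1}}$ is supported where $|\xi_1|\lesssim 2^{k_1}$, one can write
$$\widehat{\Phi_{k_1}}(\xi_1)(-\xi_1)^l = 2^{k_1 l}\, \widehat{\Phi_{k_1, l}}(\xi_1), \quad \mbox{where } \widehat{\Phi_{k_1, l}}(\xi_1) := (-2^{-k_1}\xi_1)^l\, \widehat{\Phi_{k_1}}(\xi_1)$$ is again a bump adapted to the same frequency scale. On the other hand, each differentiation of a bump adapted to scale $2^{k_2}$ loses a factor of $2^{k_2}$:
$$\widehat{\Phi_{k_2}}^{(l)}(\xi) = 2^{-k_2 l}\, \widehat{\Phi}^{(l)}(2^{-k_2}\xi).$$ Together, these two observations generate the factor $2^{k_1 l}/2^{k_2 l} = 2^{-\# l}$ when $k_2 = k_1 + \#$.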
All of these show that one needs to understand the model operators (\ref{24}) and (\ref{25}), in order to prove our theorem.
The 4-linear form associated to (\ref{24}) can be written as
$$\Lambda(f,g,h,k) = $$
$$
\sum_I \frac{1}{|I|^{1/2}} \langle B_I(f,g), \Phi^1_I\rangle \langle h, \Phi^2_I\rangle \langle k, \Phi^3_I\rangle = $$
$$\sum_J
\frac{1}{|J|^{1/2}} \langle f, \Phi^1_J\rangle \langle g,\Phi^2_J\rangle \langle B_J(h,k),\Phi^3_J\rangle $$ where
$$B_J(h,k) :=
\sum_{I: |I|<|J|} \frac{1}{|I|^{1/2}} \langle h, \Phi^2_I\rangle \langle k, \Phi^3_I\rangle \Phi^1_I. $$ The above formula is of type
$$\sum_{J\in {\cal{J}}}
\frac{1}{|J|^{1/2}} a^1_J a^2_J a^3_J $$ where ${\cal{J}}$ is an arbitrary collection of dyadic intervals and such expressions can be estimated by an upper bound of the form (see \cite{c})
\begin{equation}\label{se} \prod_{i=1}^3 [{\rm size}_{{\cal{J}}}((a^i_J)_J)]^{1-\theta_i}\cdot [{\rm energy}_{{\cal{J}}}((a^i_J)_J)]^{\theta_i} \end{equation} for any $0\leq \theta_1,\theta_2, \theta_3 < 1$ so that $\theta_1+\theta_2+\theta_3 = 1$ with the implicit constants depending on these ``theta parameters''.
The definitions of these ``sizes'' and ``energies'' are as follows:
$${\rm size}_{{\cal{J}}}((a^i_J)_J) := \sup_{J\in{\cal{J}}} \frac{|a^i_J|}{|J|^{1/2}}$$ if $(a^i_J)_J$ is of ``$\Phi$ type'' (meaning that the corresponding implicit $\Phi^i_J$ functions are of ``$\Phi$ type''), and
$${\rm size}_{{\cal{J}}}((a^i_J)_J) := \sup_{J\in{\cal{J}}}
\frac{1}{|J|}
\left\|\left( \sum_{J'\in{\cal{J}}; J'\subseteq J}
\frac{|a^i_{J'}|^2}{|J'|}\chi_{J'} \right)^{1/2}
\right\|_{1,\infty} $$ if $(a^i_J)_J$ is of ``$\Psi$ type''.
Also, the ``energy'' is defined by
$${\rm energy}_{{\cal{J}}}((a^i_J)_J) := \sup_{n\in\Z} \sup_{{\cal{D}}}2^n (\sum_{J\in{\cal{D}}}|J|)$$ where ${\cal{D}}$ either ranges over those collections of disjoint dyadic intervals $J\in{\cal{J}}$ for which
$$\frac{|a^i_J|}{|J|^{1/2}} \geq 2^n$$ in the ``$\Phi$ case'', or it ranges over the collection of disjoint dyadic intervals $J\in{\cal{J}}$ having the property that
$$
\frac{1}{|J|}
\left\|\left( \sum_{J'\in{\cal{J}}; J'\subseteq J}
\frac{|a^i_{J'}|^2}{|J'|}\chi_{J'} \right)^{1/2}
\right\|_{1,\infty}\geq 2^n $$ in the ``$\Psi$ case''.
In the case of $(\langle f,\Phi^1_J\rangle)_J$ or $(\langle g,\Phi^2_J\rangle)_J$ sequences, there are ways to estimate further these sizes and energies, either by certain averages of $f$ and $g$ or by the $L^1$-norms of $f$ and $g$ \cite{c}. The case of $(\langle B_J(h,k),\Phi^3_J\rangle)_J$ is more complicated, since the inner function depends on the interval $J$. The main observation here (called ``the bi-est trick'' in \cite{mtt:walshbiest}, \cite{mtt:fourierbiest}) is that this dependence can actually be factored out.
More precisely, assume that one wants to estimate the size of such a sequence, and that the supremum is attained at an interval $J_0$. The size then becomes
\begin{equation}\label{26}
\frac{1}{|J_0|}
\left\|\left( \sum_{J\in{\cal{J}}; J\subseteq J_0}
\frac{| \langle B_J(h,k),\Phi^3_J\rangle |^2}{|J|}\chi_{J} \right)^{1/2}
\right\|_{1,\infty}. \end{equation} From the definition of $B_J(h,k)$ we see that terms of type
$\langle \Phi^1_I, \Phi^3_J\rangle$ for $|I|<|J|$ are implicit in the above expression. By using Plancherel, one then sees that one must have $\omega^3_J\subseteq \omega^1_I$ for such a term to be nonzero. Denote now by ${\cal{I}}_0$ the set of all dyadic intervals $I\in{\cal{I}}$ for which there exists $J\subseteq J_0$ so that $\omega^3_J\subseteq \omega^1_I$ \footnote{These ``omega intervals'' are the ``frequency intervals'' which support the Fourier transform of the corresponding functions. }. We then observe that
$$\langle B_J(h,k),\Phi^3_J\rangle = \langle B_{{\cal{I}}_0}(h,k),\Phi^3_J\rangle$$ for any $J\subseteq J_0$ where
$$ B_{{\cal{I}}_0}(h,k) := \sum_{I\in{\cal{I}}_0}
\frac{1}{|I|^{1/2}} \langle h,\Phi^2_I\rangle \langle k,\Phi^3_I\rangle \Phi^1_I. $$ The reason for this is that if a pair of type $\langle \Phi^1_I, \Phi^3_J\rangle$ appears for which one has the opposite inclusion $\omega^1_I\subseteq \omega^3_J$, for some $I\in{\cal{I}}_0$ then, by the definition of ${\cal{I}}_0$ there exists another interval $J'\subseteq J_0$ so that $\omega^3_{J'}\subseteq \omega^1_I\subseteq \omega^3_J$. But then, $\omega^3_{J'}$ and $\omega^3_J$ would be one strictly inside the other, which contradicts the frequency structure of the $(\omega^3_J)_J$ intervals.
This means that (\ref{26}) equals to
$$
\frac{1}{|J_0|}
\left\|\left( \sum_{J\in{\cal{J}}; J\subseteq J_0}
\frac{| \langle B_{{\cal{I}}_0}(h,k),\Phi^3_J\rangle |^2}{|J|}\chi_{J} \right)^{1/2}
\right\|_{1,\infty} $$ which can be estimated in terms of a certain average of $B_{{\cal{I}}_0}(h,k)$, which itself after a duality argument can be further estimated by using again this time a ``local variant'' of the general upper bound (\ref{se}). And then, a similar reasoning (based on the ``bi-est trick'') helps to understand the energies.
After that, to estimate the other 4-linear form $\Lambda^{\#}$ corresponding to (\ref{25}), one applies again the same generic estimate (\ref{se}) but this time the ``biest trick'' is no longer effective and some other ``ad hoc'' arguments are necessary. The point here is that all these forms can indeed be estimated with upper bounds which are independent on $\#$, which makes the whole sum over $\#$ convergent in the end. For more details, see the original paper \cite{c}.
We would like to end the article with the observation that the flag paraproduct which naturally appears in the study of the 2D quadratic NLS satisfies the same $L^p$ estimates as the operator $T_{ab}$ in Theorem \ref{main}. More precisely, we have
\begin{theorem}\label{nls} The flag paraproduct $T(f_1, f_2, f_3)$ defined by
$$T(f_1, f_2, f_3)(x):= \int_{{\mbox{\rm I\kern-.22em R}}^3}a(\xi_1, \xi_2) b(\xi_1, \xi_2, \xi_3) \widehat{f_1}(\xi_1) \widehat{f_2}(\xi_2) \widehat{f_3}(\xi_3) e^{2\pi i x(\xi_1 +\xi_2 +\xi_3)} d \xi $$ maps $L^{p_1}\times L^{p_2}\times L^{p_3} \rightarrow L^p$ boundedly, for every $1<p_1, p_2, p_3 <\infty$ with $1/p_1+1/p_2+1/p_3 = 1/p$. \end{theorem}
\begin{proof} Before starting the actual proof, we should mention that the ${\mbox{\rm I\kern-.22em R}}^d$ variant of the theorem also holds true and that this extension to euclidean spaces of arbitrary dimension is really straightforward. We shall describe the argument in the one dimensional case, to be consistent with the rest of the paper. The main point is to simply realize that the discrete model operators studied in \cite{c} are enough to cover this case as well. Let us assume that the symbols $a(\xi_1, \xi_2)$ and $b(\xi_1,\xi_2,\xi_3)$ are given by
$$a(\xi_1, \xi_2) = \sum_{k_1} \widehat{\Phi_{k_1}}(\xi_1) \widehat{\Phi_{k_1}}(\xi_2)$$ and
$$b(\xi_1,\xi_2,\xi_3) = \sum_{k_2} \widehat{\Phi_{k_2}}(\xi_1) \widehat{\Phi_{k_2}}(\xi_2) \widehat{\Phi_{k_2}}(\xi_3)$$ as usual \footnote{Again, as we pointed out earlier, modulo some minor technical issues, one can always assume that this is the case.}.
As a consequence, we have
\begin{equation}\label{27} a(\xi_1, \xi_2) b(\xi_1,\xi_2,\xi_3) = \sum_{k_1, k_2} \widehat{\Phi_{k_1}}(\xi_1) \widehat{\Phi_{k_1}}(\xi_2) \widehat{\Phi_{k_2}}(\xi_1) \widehat{\Phi_{k_2}}(\xi_2) \widehat{\Phi_{k_2}}(\xi_3). \end{equation} There are two cases:
\underline{Case 1: $k_2<<k_1$.}
This case is simple since we must either have $k_1 \sim k_2$ (and the corresponding expression generates a classical paraproduct) or both $(\widehat{\Phi_{k_1}}(\xi_1))_{k_1}$ and $(\widehat{\Phi_{k_1}}(\xi_2))_{k_1}$ are of ``$\Phi$ type'' which is impossible.
\underline{Case 2: $k_1<<k_2$.}
In this case we have that both $(\widehat{\Phi_{k_2}}(\xi_1))_{k_2}$ and $(\widehat{\Phi_{k_2}}(\xi_2))_{k_2}$ are of ``$\Phi$ type'' which implies in particular that $(\widehat{\Phi_{k_2}}(\xi_3))_{k_2}$ must be of ``$\Psi$ type''. We then rewrite (\ref{27}) as
\begin{equation}\label{28} a(\xi_1, \xi_2) b(\xi_1,\xi_2,\xi_3) = \sum_{k_1<< k_2} \widehat{\Phi_{k_1}}(\xi_1) \widehat{\Phi_{k_1}}(\xi_2) \widehat{\Phi_{k_2}}(\xi_1) \widehat{\Phi_{k_2}}(\xi_2) \widehat{\Phi_{k_2}}(\xi_3). \end{equation} As in the ``$a(\xi_1,\xi_2)b(\xi_2,\xi_3)$ case'' we would have liked instead of (\ref{28}) to face an expression of type
\begin{equation}\label{29} \sum_{k_1<< k_2} \widehat{\Phi_{k_1}}(\xi_1) \widehat{\Phi_{k_1}}(\xi_2) \widehat{\Phi_{k_2}}(\xi_1+\xi_2) \widehat{\widetilde{\Phi_{k_2}}}(\xi_1+\xi_2) \widehat{\Phi_{k_2}}(\xi_3). \end{equation} Indeed, as before, if we faced this instead, we would have rewritten it as
\begin{equation}\label{30} \sum_{k_1<< k_2} \widehat{\Phi_{k_1}}(\xi_1) \widehat{\Phi_{k_1}}(\xi_2) \widehat{\widetilde{\widetilde{\Phi_{k_1}}}}(\xi_1+\xi_2) \widehat{\widetilde{\widetilde{\widetilde{\Phi_{k_2}}}}}(\xi_1+\xi_2) \widehat{\Phi_{k_2}}(\xi_3) \end{equation} and this, as we have seen, generates discrete models of type (\ref{24}) which have been understood in \cite{c}.
We now show how one can transform (\ref{28}) into an expression closer to (\ref{29}). The idea is once again based on using a careful Taylor decomposition, this time for the functions $\widehat{\Phi_{k_2}}(\xi_1)$ and $\widehat{\Phi_{k_2}}(\xi_2)$. We write
$$\widehat{\Phi_{k_2}}(\xi_1) = \widehat{\Phi_{k_2}}(\xi_1+ \xi_2) + \frac{\widehat{\Phi_{k_2}}'(\xi_1+ \xi_2)}{1!}(-\xi_2) + \frac{\widehat{\Phi_{k_2}}''(\xi_1+ \xi_2)}{2!}(-\xi_2)^2 + ... + \frac{\widehat{\Phi_{k_2}}^M(\xi_1+ \xi_2)}{M!}(-\xi_2)^M + R_M(\xi_1, \xi_2)$$ and similarly
$$\widehat{\Phi_{k_2}}(\xi_2) = \widehat{\Phi_{k_2}}(\xi_1+ \xi_2) + \frac{\widehat{\Phi_{k_2}}'(\xi_1+ \xi_2)}{1!}(-\xi_1) + \frac{\widehat{\Phi_{k_2}}''(\xi_1+ \xi_2)}{2!}(-\xi_1)^2 + ... + \frac{\widehat{\Phi_{k_2}}^M(\xi_1+ \xi_2)}{M!}(-\xi_1)^M + \widetilde{R_M}(\xi_1, \xi_2).$$ Now, if we insert these two formulae into (\ref{28}), we obtain (most of the time) expressions whose general terms are of type
$$ \widehat{\Phi_{k_1}}(\xi_1) \widehat{\Phi_{k_1}}(\xi_2) \frac{\widehat{\Phi_{k_2}}^l(\xi_1+ \xi_2)}{l!}(-\xi_2)^l \frac{\widehat{\Phi_{k_2}}^{\widetilde{l}}(\xi_1+ \xi_2)}{\widetilde{l}!}(-\xi_1)^{\widetilde{l}} \widehat{\Phi_{k_2}}(\xi_3) := $$
$$ \frac{1}{2^{k_2 l}} \frac{1}{2^{k_2\widetilde{l}}} \widehat{\Phi_{k_1}}(\xi_1)(-\xi_1)^{\widetilde{l}} \widehat{\Phi_{k_1}}(\xi_2)(-\xi_2)^l \widehat{\Phi_{k_2, l}}(\xi_1+\xi_2) \widehat{\Phi_{k_2, \widetilde{l}}}(\xi_1+\xi_2) \widehat{\Phi_{k_2}}(\xi_3) := $$
$$ \frac{2^{k_1 l}}{2^{k_2 l}} \frac{2^{k_1\widetilde{l}}}{2^{k_2\widetilde{l}}} \widehat{\Phi_{k_1, \widetilde{l}}}(\xi_1) \widehat{\Phi_{k_1, l}}(\xi_2) \widehat{\Phi_{k_2, l, \widetilde{l}}}(\xi_1+\xi_2) \widehat{\Phi_{k_2}}(\xi_3) $$ for $0\leq l,\widetilde{l}\leq M$. Since $k_1<<k_2$, we can assume as before that $k_2 = k_1 + \#$, with $\#$ greater than or equal to (say) $1000$. In particular, if at least one of $l, \widetilde{l}$ is strictly bigger than zero, the corresponding operator can be written as
$$\sum_{\#=1000}^{\infty} 2^{-\#(l+\widetilde{l})} T_{\#}^{l,\widetilde{l}}.$$ Now, as we pointed out before, the analysis of each $T_{\#}^{l,\widetilde{l}}$ can be reduced to the analysis of the model operators (\ref{25}) which have been understood in \cite{c}. They are all bounded, with upper bounds which are uniform in $\#$ and this allows one to simply sum the above implicit geometric sum. In the case when both $l$ and $\widetilde{l}$ are equal to zero, then the corresponding symbol is precisely of the form (\ref{29}) and the operator generated by it can be reduced (as we already mentioned) to the model operators (\ref{24}) of \cite{c}.
Finally, the operators whose symbols are obtained when at least one of $R_M(\xi_1,\xi_2)$ or $\widetilde{R_M}(\xi_1,\xi_2)$ enters the picture can all be estimated by the Coifman--Meyer theorem.
\end{proof}
\end{document}
\begin{document}
\title{
Full-featured peak reduction in right-angled Artin groups
}
\begin{abstract} We prove a new version of the classical peak-reduction theorem for automorphisms of free groups in the setting of right-angled Artin groups. We use this peak-reduction theorem to prove two important corollaries about the action of the automorphism group of a right-angled Artin group $A_\Gamma$ on the set of $k$-tuples of conjugacy classes from $A_\Gamma$: orbit membership is decidable, and stabilizers are finitely presentable. Further, we explain procedures for checking orbit membership and building presentations of stabilizers. This improves on a previous result of the author's. We overcome a technical difficulty from the previous work by considering infinite generating sets for the automorphism groups. The method also involves a variation on the Hermite normal form for matrices. \end{abstract}
\section{Introduction} \subsection{Background} Let $F$ denote a finite-rank free group with automorphism group $\Aut(F)$. \emph{Peak reduction} is a technique in the study of $\Aut(F)$ that is a key ingredient in the solution of several important problems. J.H.C.\ Whitehead invented the technique in the 1930's in~\cite{Whitehead} to provide an algorithm that takes in two conjugacy classes (or more generally, $k$-tuples of conjugacy classes) from $F$ and determines whether there is an automorphism in $\Aut(F)$ that carries one to the other. In a series of papers in the 1970's~\cite{McCool1,McCool2,McCool3}, McCool used peak-reduction methods to reprove Nielsen's result that $\Aut(F)$ is finitely presented, and to prove that the stabilizer in $\Aut(F)$ of a tuple of conjugacy classes in $F$ is also finitely presented.
Given a finite simplicial graph $\Gamma$, the \emph{right-angled Artin group} $A_\Gamma$ is the group given by the finite presentation whose generators are the vertices of $\Gamma$, and whose only relations are that two generators commute if and only if they are adjacent as vertices in $\Gamma$. These groups are also called ``partially commutative groups'' and ``graph groups''. Free groups are extreme examples of right-angled Artin groups, but this class of groups also contains free abelian groups and many other groups.
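A standard family of examples may help fix ideas. If $\Gamma$ is the path with vertex set $\{a,b,c\}$ and edges $\{a,b\}$ and $\{b,c\}$, then
$$A_\Gamma = \langle a, b, c \mid ab=ba,\ bc=cb \rangle \cong F_2\times \mathbb{Z},$$ where the $F_2$ factor is generated by $a$ and $c$ and the central $\mathbb{Z}$ factor by $b$. More generally, if $\Gamma$ is the complete graph on $n$ vertices then $A_\Gamma\cong\mathbb{Z}^n$, while if $\Gamma$ has no edges then $A_\Gamma$ is the free group of rank $n$.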
The goal of this paper is to generalize the peak-reduction method to right-angled Artin groups. In a previous paper~\cite{Day1}, the author found a weak generalization of peak reduction and used it to prove that $\Aut(A_\Gamma)$ is finitely presented for every $\Gamma$. That paper and its sequel~\cite{Day2} used peak reduction to prove that the stabilizers of certain specific tuples of conjugacy classes in $A_\Gamma$ are finitely generated, and to investigate an analogue of mapping class groups of surfaces inside $\Aut(A_\Gamma)$. There is also a version of peak reduction for right-angled Artin groups from recent work of Charney--Stambaugh--Vogtmann~\cite{CharneyStambaughVogtmann}; they use peak reduction to study an outer space for $\Aut(A_\Gamma)$ (a contractible cell complex that $\Aut(A_\Gamma)$ acts on geometrically). We relate their theorem to our results after we state them below. However, the other applications of peak reduction do not seem to follow directly from either of the peak-reduction theorems just mentioned. The present paper proves a strong generalization of peak reduction to right-angled Artin groups and uses it to prove two important corollaries of peak reduction.
\subsection{Results} First we state our two corollaries of peak reduction. Let $A_\Gamma$ be a right-angled Artin group. \begin{theorem}\label{th:raagcheckorbit} There is an algorithm that takes in two tuples $U$ and $V$ of conjugacy classes from $A_\Gamma$ and either produces an automorphism $\alpha\in \Aut(A_\Gamma)$ with $\alpha\cdot U=V$ or determines that no such automorphism exists. \end{theorem}
\begin{theorem}\label{th:raagstabpres} There is an algorithm that takes in a tuple $W$ of conjugacy classes from $A_\Gamma$ and produces a finite presentation for its stabilizer $\Aut(A_\Gamma)_W$. \end{theorem}
These results follow from our result on peak reduction, which will take a few definitions to state. Let $X$ denote the vertex set of $\Gamma$. The length $\abs{u}$ of an element $u\in A_\Gamma$ is the usual length in terms of the generating set $X$. The length $\abs{w}$ of a conjugacy class $w$ is the minimum length of a representative, and the length $\abs{W}$ of a tuple $W$ of conjugacy classes is the sum of the lengths of its entries.
For $a\in X$, the \emph{star} $\st(a)$ is the subset of $X$ consisting of $a$ and all vertices adjacent to $a$. For $a$ and $b$ in $X$, $a$ and $b$ are in the same \emph{adjacent domination equivalence class} if $\st(a)=\st(b)$. We denote the adjacent domination equivalence class of $a$ by $[a]$; if $b\in[a]$ with $a\neq b$, then $a$ is necessarily adjacent to $b$. It is entirely possible that $[a]=\{a\}$. We define the set of \emph{generalized Whitehead automorphisms with multiplier set $[a]$}, denoted $\whset{a}$, to be the subgroup of automorphisms in $\Aut(A_\Gamma)$ such that $\alpha\in\whset{a}$ if and only if \begin{itemize} \item for each $b\in X\setminus [a]$, there are $u,v\in \genby{[a]}$ with $\alpha(b)=ubv$ and \item for each $b\in [a]$, we have $\alpha(b)\in\genby{[a]}$. \end{itemize} In general each $\whset{a}$ is an infinite group. We define the \emph{permutation automorphisms} $P$ to be the finite subgroup of $\Aut(A_\Gamma)$ with $\alpha$ in $P$ if and only if $\alpha$ restricts to a permutation of $X\cup X^{-1}$. We define the set of \emph{generalized Whitehead automorphisms} $\Omega$ to be the union of $P$ with all the $\whset{a}$ as $a$ varies over the vertices of $\Gamma$. It follows from a result of Laurence~\cite{Laurence} (see Theorem~\ref{th:Laurence} below) that $\Omega$ is a generating set for $\Aut(A_\Gamma)$. \begin{theorem}[Peak reduction]\label{th:fullfeaturedpeakreduction} Suppose $\alpha\in\Aut(A_\Gamma)$ and $W$ is a tuple of conjugacy classes in $A_\Gamma$. Then there is a factorization \[\alpha=\beta_m\beta_{m-1}\dotsm\beta_2\beta_1\] with $\beta_1,\dotsc,\beta_m$ in $\Omega$, such that the sequence of intermediate lengths \[k\mapsto \abs{\beta_k\dotsm \beta_1\cdot W}\quad\text{for $k=0,\dotsc,m$}\] strictly decreases from $k=0$ to $k=k_1$ for some $k_1$ with $0\leq k_1\leq m$, stays constant from $k=k_1$ to $k=k_2$ for some $k_2$ with $k_1\leq k_2\leq m$, and strictly increases from $k=k_2$ to $k=m$. 
Further, there is an algorithm to find such a factorization. \end{theorem}
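The combinatorial input to the theorem, the star $\st(a)$ and the adjacent-domination classes $[a]$, is easy to compute directly from the graph. The following is a minimal sketch (not from the paper; helper names are ours), assuming $\Gamma$ is encoded as an adjacency dict mapping each vertex to the set of its neighbors:

```python
def star(adj, a):
    """st(a): the vertex a together with all vertices adjacent to it."""
    return frozenset(adj[a]) | {a}

def adjacent_domination_classes(adj):
    """Group vertices into classes [a], where b is in [a] iff st(a) == st(b)."""
    classes = {}
    for v in adj:
        classes.setdefault(star(adj, v), set()).add(v)
    return list(classes.values())

# On the four-vertex path a-b-c-d all four stars are distinct,
# so every adjacent-domination class is a singleton.
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
path_classes = adjacent_domination_classes(adj)

# On a triangle (complete graph) all stars coincide, giving one class.
tri = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}
tri_classes = adjacent_domination_classes(tri)
```

Note that on the triangle the single class $[a]=\{a,b,c\}$ consists of pairwise-adjacent vertices, as the definition requires.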
A major difference between this formulation and the one for free groups is that the set $\Omega$ is finite in that setting, but is usually infinite in this setting. Since the classic applications rely on this set being finite, we need some additional results to get the applications to work. \begin{proposition}\label{pr:insamewhorbit} There is an algorithm that takes in two tuples $U$ and $V$ of conjugacy classes from $A_\Gamma$, and a vertex $a$ in $\Gamma$, and produces an automorphism $\alpha$ in $\whset{a}$ with $\alpha\cdot U=V$ or determines that no such automorphism exists. In particular, it is possible to determine whether a tuple $U$ can be shortened by an automorphism from $\whset{a}$. \end{proposition} \begin{proposition}\label{pr:whstab} There is an algorithm that takes in a tuple $U$ of conjugacy classes from $A_\Gamma$ and a vertex $a$ in $\Gamma$, and produces a finite presentation for the stabilizer $(\whset{a})_U$. \end{proposition} It turns out that the groups $\whset{a}$ embed in integer general linear groups, and the key to these propositions is a modified version of the Hermite normal form for integer matrices.
The following example motivates the use of the infinite generating sets. \begin{example}\label{ex:nofiniteprgenset} This is Example~4.1 from Day~\cite{Day1}, which shows that no finite generating set will work for a peak-reduction theorem formulated for the entirety of $\Aut(A_\Gamma)$. In this example, $\Gamma$ is the four-vertex path graph with vertices $a$, $b$, $c$ and $d$ in that order. In~\cite[Proposition~C]{Day1}, we show that for $k\in\Z$, the conjugacy class $w=ad^k$ in this $A_\Gamma$ is fixed by an automorphism $\alpha$ with $\alpha(a)=ac^k$ and $\alpha(d)=c^{-1}d$ and with $\alpha$ fixing $b$ and $c$. Further, we show that there is no peak-reduced factorization of $\alpha$ with respect to $w$ by automorphisms that are simpler in a specific sense. This contradicts the existence of a certain formulation of peak-reduction theorem, because such a theorem could be used to produce such factorizations with automorphisms taken from a fixed finite set; however, such a set cannot work for all choices of $k$. But since the automorphism $\alpha$ is in our set $\whset{c}$ for any $k$, this example does not contradict Theorem~\ref{th:fullfeaturedpeakreduction}. \end{example}
The results in the present paper are somewhat similar to certain other results. However, Theorems~\ref{th:raagcheckorbit} and~\ref{th:raagstabpres} do not appear to be a direct consequence of any results in the literature. First we compare Theorem~\ref{th:fullfeaturedpeakreduction} to Theorem~B of Day~\cite{Day1}. Although part~(3) of Theorem~B is a peak-reduction theorem, it applies only to a proper subgroup of $\Aut(A_\Gamma)$ (in general) and there does not appear to be a straightforward way to apply that theorem to characterize orbits or stabilizers under the entirety of $\Aut(A_\Gamma)$.
Next we note that there is a peak-reduction theorem for right-angled Artin groups in a recent preprint of Charney--Stambaugh--Vogtmann~\cite{CharneyStambaughVogtmann}. Theorem~5.19 of that paper proves that $\Aut(A_\Gamma)$ has peak reduction using a finite generating set, but only with respect to a specific kind of tuple of conjugacy classes: tuples containing all conjugacy classes of length one or two, and the images of such tuples under automorphisms. Their proof is elegant, but the methods do not seem to apply to other kinds of tuples of conjugacy classes (in particular, Example~\ref{ex:nofiniteprgenset} is still a problem).
We also note another special case where results like these are previously known. Collins--Zieschang have a series of papers,~\cite{CollinsZieschang84a}, \cite{CollinsZieschang84b}, \cite{CollinsZieschang84c} and \cite{CollinsZieschang87}, on the Whitehead method for free products of groups. If $\Gamma$ is a disjoint union of complete graphs, then $A_\Gamma$ is a free product of free abelian groups, and results of Collins--Zieschang similar to Theorems~\ref{th:raagcheckorbit} and~\ref{th:raagstabpres} apply. Their methods do not extend to $A_\Gamma$ for general $\Gamma$ since $A_\Gamma$ is freely indecomposable if $\Gamma$ is connected.
\begin{remark} Another potential application of results of this paper is to the algebraic geometry of groups. Casals-Ruiz and Kazachkov define an algorithm in~\cite{CasalsRuizKazachkov} for parametrizing solutions to systems of equations in right-angled Artin groups, and our Theorem~\ref{th:raagstabpres} is relevant to that algorithm. Specifically, a system of equations over a right-angled Artin group $A_\Gamma$ corresponds to a tuple $W$ of conjugacy classes (those which are set equal to $1$) in $A_\Gamma * F_n$, a free product of $A_\Gamma$ with a finite-rank free group. This free product is another right-angled Artin group $A_\Delta$. Theorem~\ref{th:raagstabpres} gives us a finite presentation, in an effective way, for the stabilizer $(A_\Delta)_W$, and this stabilizer maps to the automorphism group of the coordinate group of the system of equations. Casals-Ruiz and Kazachkov have informed the author that the finite presentability of these groups could be used to make improvements to their algorithm. This connection is not pursued in the present paper. \end{remark}
\subsection{Organization of the paper} We postpone the proofs of the more technical results to later sections in the paper. Section~\ref{se:preliminaries} contains preliminary facts about right-angled Artin groups. We reduce Propositions~\ref{pr:insamewhorbit} and~\ref{pr:whstab} to propositions about linear groups in Section~\ref{se:linearreduction}, and we prove our two main application theorems in Section~\ref{se:applications}. Section~\ref{se:linearproblems} proves the propositions about linear groups and we finally prove the peak reduction theorem in Section~\ref{se:peakreduction}.
\section{Preliminaries}\label{se:preliminaries} \subsection{Combinatorial group theory of right-angled Artin groups}\label{ss:combinatorial} The survey by Charney~\cite{Charney} is a good general reference for right-angled Artin groups. We recall a few facts here that are important for understanding the paper. As mentioned above, the \emph{length} of an element $u$ of $A_\Gamma$ is always the minimal length of an expression for $u$ as a product of generators from $\Gamma$ and their inverses. Servatius~\cite{Servatius} points out that a word in the generators representing $u$ is of minimal length if and only if it is \emph{graphically reduced}, meaning that there is no subword $xvx^{-1}$ where $x$ is a generator or inverse generator and $v$ is a word in those generators that commute with $x$. Further, Servatius shows that it is possible to get between any two minimal-length representatives for a word by a sequence of commutation moves. Normal forms found by VanWyk~\cite{VanWyk} and by Hermiller--Meier~\cite{HermillerMeier} are a convenient way to check whether two words represent the same element. Algorithms for checking conjugacy have been described by Liu, Wrathall and Zeger~\cite{LiuWrathallZeger}, and by Wrathall~\cite{Wrathall}.
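Servatius' criterion is directly checkable by machine. Below is an illustrative sketch (not from the paper; the encoding is our own: a word is a list of (generator, sign) pairs with sign $\pm1$, and the graph is an adjacency dict). From each letter $x$ it scans forward past letters that commute with $x$, looking for $x^{-1}$:

```python
def is_graphically_reduced(word, adj):
    """True iff no subword x v x^{-1} exists where every letter of v
    uses a generator adjacent to (i.e. commuting with) that of x."""
    n = len(word)
    for i in range(n):
        g, s = word[i]
        for j in range(i + 1, n):
            h, t = word[j]
            if h == g and t == -s:
                return False   # found x ... x^{-1} with only commuting letters between
            if h == g or h not in adj[g]:
                break          # this letter blocks any cancellation with word[i]
    return True

# Path graph a-b-c-d: a commutes with b only.
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
```

For example, over the path graph, $a c a^{-1}$ is graphically reduced (since $a$ and $c$ do not commute), while $a b a^{-1}$ is not, because $b$ commutes past $a$ and the pair cancels.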
One important result is the centralizer theorem of Servatius, from~\cite{Servatius}. It states that for any $u\in A_\Gamma$, we can find an element $v$ with $u'=vuv^{-1}$ cyclically reduced (minimal length in its conjugacy class), and for $u'$ cyclically reduced, its centralizer is exactly what one would guess. Specifically, $u'$ can be written as a graphically reduced product $t_1^{k_1}\dotsm t_m^{k_m}$ where each $t_i$ is not a proper power and every vertex of $\Gamma$ used in writing $t_i$ commutes with every vertex of $\Gamma$ used in writing $t_j$, for each $i\neq j$. Then the centralizer of $u'$ is generated by the $\{t_i\}_i$ together with each vertex in $\Gamma$ that commutes with all the vertices appearing in $u'$. The centralizer of the original $u$ is of course the conjugate of the centralizer of $u'$ by the same conjugating element $v$.
\subsection{Generating Automorphisms} If $a$ and $b$ are vertices in $\Gamma$ and every vertex that is adjacent to $b$ commutes with $a$, we say that $a$ \emph{dominates} $b$. This can happen whether or not $a$ is adjacent to $b$. Domination is important because it determines whether certain maps defined on generators actually extend to automorphisms. The following definitions are originally from Servatius~\cite{Servatius}: \begin{itemize} \item If $a$ and $b$ are distinct vertices in $\Gamma$ and $a$ dominates $b$, then there is an automorphism in $\Aut(A_\Gamma)$ sending $b$ to $ba$ and fixing all other vertices in $\Gamma$. This is a \emph{dominated transvection}. If $a$ is not adjacent to $b$, then there is a different automorphism sending $b$ to $ab$ and fixing all other generators; this is also a dominated transvection. When we need to distinguish between these, we refer to \emph{right} and \emph{left} dominated transvections. \item If $a$ is a vertex in $\Gamma$ and $Y$ is a connected component of $\Gamma\setminus\st(a)$, then there is an automorphism in $\Aut(A_\Gamma)$ sending each $c$ in $Y$ to $aca^{-1}$ and fixing all other vertices in $\Gamma$. This is a \emph{partial conjugation}. \item If $\pi$ is a symmetry of the graph $\Gamma$, then there is an automorphism in $\Aut(A_\Gamma)$ sending each generator $x$ to its image $\pi(x)$. This is called a \emph{graphic automorphism}. \item For each vertex $a$ in $\Gamma$, there is an automorphism sending $a$ to $a^{-1}$ and fixing all other generators. This is the \emph{inversion} in $a$. \end{itemize} If $a$ is adjacent to $b$ then the right and left dominated transvections are the same. Dominated transvections generalize Nielsen moves and also those elementary matrices with a single nonzero off-diagonal entry of one. 
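The domination condition is likewise a one-line check. A sketch (illustrative, not from the paper), again assuming the graph is an adjacency dict mapping each vertex to the set of its neighbors:

```python
def dominates(adj, a, b):
    """a dominates b: every vertex adjacent to b commutes with a,
    i.e. is adjacent to a or is a itself."""
    return all(v == a or v in adj[a] for v in adj[b])

# Path graph a-b-c-d.
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
```

On the path $a$--$b$--$c$--$d$, the vertex $b$ dominates the leaf $a$ (so the dominated transvection sending $a\mapsto ab$ exists), while $a$ does not dominate $b$, since $c$ is adjacent to $b$ but does not commute with $a$.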
It is well known that transvections and inversions generate automorphism groups of free groups and general linear groups over the integers, but for general right-angled Artin groups, there are nontrivial partial conjugations and graphic automorphisms that cannot be expressed as products of inversions and transvections. However, these four kinds of automorphisms generate: \begin{theorem}[Laurence~\cite{Laurence}]\label{th:Laurence} For any $\Gamma$, the group $\Aut(A_\Gamma)$ is generated by the finite set consisting of all dominated transvections, partial conjugations, graphic automorphisms, and inversions. \end{theorem}
\subsection{Labeled directed graphs}
A \emph{labeled directed graph} is a directed multigraph, with self-loops allowed, so that each directed edge carries a label from a pre-specified label set. In this paper, the label sets are always subsets of groups, and we use the convention that a directed edge from a vertex $v_1$ to a vertex $v_2$ with label $g$ is also a directed edge from $v_2$ to $v_1$ with label $g^{-1}$. Since all directed edges are considered directed in both directions (with different labels), this means that edge paths are the same as in the underlying undirected graph. In particular, the connected components of a labeled directed graph are the same as the connected components of the underlying undirected graph.
Again we emphasize that in this paper, edge labels are always elements of a pre-specified group. By the \emph{composition of edge labels} of an edge path $p$ in a labeled directed graph, we mean the following: for each edge $e$ on $p$, we record the label of $e$ if the orientations of $e$ and $p$ agree, and if the orientations disagree, we record the inverse of the label of $e$; the composition of edge labels is then the composition of these labels and inverses that we recorded, in the order that $p$ traverses them. Or since we consider reversed edges to be labeled with the inverse group element, we can take the composition of edge labels of $p$ simply to be the composition of the labels on the edges in $p$, interpreted with orientation agreeing with $p$.
One note: we compose automorphisms like functions, but we compose edge labels on paths in the opposite order, like the usual convention for fundamental groups. This means that if $\Delta$ is a directed graph with labels in an automorphism group, then the composition of edge labels along a path labeled with $\alpha_1$--$\alpha_2$--$\dotsm$--$\alpha_k$ in $\Delta$ is $\alpha_k\dotsm\alpha_2\alpha_1$.
Let $S$ be a set of elements of an arbitrary group $G$ and let $H$ be a subgroup of $G$. The \emph{Schreier graph} of $H$ in $G$ with respect to $S$ is the labeled, directed multigraph whose vertex set is the set of left cosets of $H$ in $G$ and with an edge labeled by $c$ in $S$ from coset $aH$ to $bH$ if and only if $caH=bH$. Note that since we are not supposing that $S\cup H$ generates $G$, it will not generally be the case that these Schreier graphs are connected.
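As a concrete check of the definition, here is a sketch (not from the paper) building the Schreier graph in a small finite example: $G=S_3$ with permutations encoded as tuples, $H$ generated by a transposition, and $S$ containing a single $3$-cycle:

```python
from itertools import permutations

def compose(p, q):
    """Permutation composition (p o q)(i) = p[q[i]], tuples as maps."""
    return tuple(p[q[i]] for i in range(len(q)))

def left_cosets(G, H):
    cosets = []
    for g in G:
        c = frozenset(compose(g, h) for h in H)
        if c not in cosets:
            cosets.append(c)
    return cosets

def schreier_graph(G, H, S):
    """Vertices: left cosets aH.  An edge labeled c from aH to bH iff c.aH = bH."""
    cosets = left_cosets(G, H)
    edges = [(aH, c, frozenset(compose(c, x) for x in aH))
             for c in S for aH in cosets]
    return cosets, edges

# G = S3; H = <(0 1)> = {identity, the transposition swapping 0 and 1}.
G = list(permutations(range(3)))
H = [(0, 1, 2), (1, 0, 2)]
threecycle = (1, 2, 0)
cosets, edges = schreier_graph(G, H, [threecycle])
```

Here $[S_3:H]=3$, and the single label cycles the three cosets around, illustrating Lemma~\ref{le:travelSchreier}: composing the edge labels along a path carries the starting coset to the ending coset.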
\begin{lemma}\label{le:travelSchreier} Suppose $S$ is a set of elements of $G$ and $\Delta$ is the Schreier graph of $H$ in $G$ with respect to $S$. Suppose $p$ is an edge path in $\Delta$, starting at a vertex $aH$ and ending at a vertex $bH$. Let $c$ be the composition of edge labels along $p$. Then $caH=bH$. \end{lemma} The proof is an induction argument using the definitions, and is omitted.
In our arguments we need to be able to promote a presentation from a finite-index subgroup of a group to the entire group. Although it is common knowledge that this is possible, we could not find a reference on how to do it. So for completeness, we provide an argument. \begin{lemma}\label{le:presfromfinidx}
Suppose $H$ is a finite-index subgroup of a group $G$ and we are given a finite presentation $H=\genby{S_H|R_H}$ and a finite set $S_G\subset G$ such that the Schreier graph $\Delta$ of $H$ in $G$ with respect to $S_H\cup S_G$ is connected. Suppose we are also given an explicit description of $\Delta$ as a labeled directed multigraph. Then we can write down a finite presentation for $G$. \end{lemma}
\begin{proof}
Let $p_1,\dotsc,p_k$ be generators for the fundamental group of $\Delta$ based at $H$. Since $\Delta$ is a finite graph, this is a finite set (generators can be found by selecting a maximal subtree of $\Delta$). Let $u_i$ be the composition of edge labels along $p_i$ for each $i$; each $u_i$ is a word in $S_H\cup S_G$. By Lemma~\ref{le:travelSchreier}, we know that $u_i\in H$. Let $w_i$ be a word in $S_H$ representing the same element as $u_i$. Let $R_G$ be the set of relations $\{u_1w_1^{-1},\dotsc,u_kw_k^{-1}\}$. Let $G'=\genby{S_H\cup S_G|R_H\cup R_G}$.
Let $\phi\co G'\to G$ send each generator to the element of the same name. Since $\phi$ sends relators to the identity, it is a well defined group homomorphism. We claim that it is an isomorphism. The group $G'$ acts on $\Delta$ through $\phi$; since $S_G\subset G$ is in the image of $\phi$, this action is transitive. Let $K$ be the stabilizer of the vertex $H$ of $\Delta$ in $G'$. We note that $\Delta$ is then isomorphic to the Schreier graph of $K$ in $G'$, and therefore the fundamental group of $\Delta$ based at $H$ surjects to $K$ by reading off edge labels. In particular, $K$ is generated by the elements $u_1,\dotsc,u_k$. Since $G'$ contains relations $R_G$ turning the generators $u_1,\dotsc,u_k$ into words in $S_H$, it follows that $K$ is the subgroup of $G'$ generated by $S_H$. The map $\phi$ restricts to a map $K\to G$; since $K$ fixes the vertex $H$ in $\Delta$, the image of this map is in $H$. We have a map $H\to G'$ sending each generator to the generator of the same name; since $H$ fixes the vertex $H$ in $\Delta$, the image of this map lies in $K$. So we have natural maps $H\to K$ and $K\to H$; since these are defined by sending generators to generators of the same name, they are inverses of each other. This implies that $\phi$ is injective: anything in the kernel acts trivially on $\Delta$, hence fixes the vertex $H$ and is therefore in $K$; but $G'\to G$ restricts to an isomorphism on $K$.
Now we claim that $\phi$ is surjective. If not, there is some $g\in G$ not in $\phi(G')$. But $G'$ acts transitively on $\Delta$, so there is $g'\in G'$ with $g'\cdot H=gH$. Then $\phi(g')g^{-1}$ is not in $\phi(G')$, however, it fixes $H$ and is therefore in $H$. But $\phi$ maps $K$ surjectively to $H$, a contradiction. \end{proof}
\section{Reduction to linear problems}\label{se:linearreduction} \subsection{Structure of generalized Whitehead automorphisms}\label{ss:genwharelinear} In this section we fix a vertex $a$ and consider the set $\whset{a}$ of generalized Whitehead automorphisms with multiplier set $[a]$. As defined in the introduction, this is the subgroup of $\Aut(A_\Gamma)$ consisting of automorphisms that multiply elements not in $[a]$ on the right and left by elements of $[a]$, and send elements of $[a]$ to products of elements of $[a]$. Let $\dom(a)$ denote the set of vertices that $a$ dominates, including $a$ itself.
We define $Z_{[a]}$ to be the free abelian group generated by the following symbols: \begin{itemize} \item for each vertex $b$ in $\st(a)\cap\dom(a)$, a generator $r_b$, \item for each vertex $b$ in $\dom(a)\setminus\st(a)$, two generators $r_b$ and $l_b$, and \item for each connected component $Y$ of $\Gamma\setminus\st(a)$ with at least two vertices, a generator $r_Y$. \end{itemize}
We define a map $\eta\co\whset{a}\to\Aut(Z_{[a]})$ as follows, given $\alpha\in\whset{a}$: \begin{itemize} \item For $b$ in $\st(a)\cap\dom(a)$ and $c$ in $[a]$, the coefficient of $r_c$ in $\eta(\alpha)(r_b)$ is the sum exponent of $c$ in $\alpha(b)$; if $b\notin [a]$, then the coefficient of $r_b$ in $\eta(\alpha)(r_b)$ is one, and the coefficients of all other generators in $\eta(\alpha)(r_b)$ are zero. \item For $b$ in $\dom(a)\setminus\st(a)$ and $c$ in $[a]$, the coefficient of $r_c$ in $\eta(\alpha)(r_b)$ is the sum exponent of $c$ in $v$ and the coefficient of $r_c$ in $\eta(\alpha)(l_b)$ is the sum exponent of $c$ in $u$, where $\alpha(b)=ubv$; the coefficient of $r_b$ in $\eta(\alpha)(r_b)$ is one, the coefficient of $l_b$ in $\eta(\alpha)(l_b)$ is one, and coefficients of all other basis elements in $\eta(\alpha)(r_b)$ and $\eta(\alpha)(l_b)$ are zero. \item For $Y$ a connected component of $\Gamma\setminus\st(a)$ with at least two vertices and $c$ in $[a]$, the coefficient of $r_c$ in $\eta(\alpha)(r_Y)$ is the sum exponent of $c$ in $u$ where $\alpha(x)=u^{-1}xu$ for some $x$ in $Y$; the coefficient of $r_Y$ in $\eta(\alpha)(r_Y)$ is one and the coefficients of all other basis elements in $\eta(\alpha)(r_Y)$ are zero. \end{itemize}
Our next goal is to show that $\eta$ is well defined and describe its image. Let $n=\abs{[a]}$ and let $k=\dim(Z_{[a]})-n$. To describe this image precisely, we pick an isomorphism between $Z_{[a]}$ and $\Z^{n+k}$ to identify $\Aut(Z_{[a]})$ with a matrix group. Let $[a]=\{a_1,\dotsc,a_n\}$. We map the basis elements $r_{a_1},\dotsc,r_{a_n}$ to the first $n$ basis elements of $\Z^{n+k}$, and we map the remaining basis elements of $Z_{[a]}$ to the remaining basis elements of $\Z^{n+k}$ arbitrarily.
Throughout the paper, we use $\GL(n,R)$ to denote the general linear group of invertible $n\times n$ matrices with entries in a ring $R$, and we use $M_{n,k}(A)$ to denote the abelian group of $n\times k$ matrices with entries in an abelian group $A$. \begin{lemma}\label{le:etainjective} The map $\eta$ is a well defined injective homomorphism.
Further, we can describe its image. Under the identification of $Z_{[a]}$ with $\Z^{n+k}$ above, the image of $\eta$ is
the set of block-upper-triangular matrices of the form \[ \left( \begin{array}{cc} A & B \\ O & I \end{array}\right), \] where $A$ is in $\GL(n,\Z)$, $B$ is in $M_{n,k}(\Z)$, $O$ is the zero matrix and $I$ is the $k\times k$ identity matrix. Here the matrix $A$ records the coefficients of $r_b$ for $b\in [a]$ and the matrix $B$ records the remaining coefficients. \end{lemma}
\begin{proof} Since $\alpha\in\whset{a}$ sends each element $b$ of $[a]$ to some $\alpha(b)\in\genby{[a]}$, and the elements of $[a]$ commute, the element $\eta(\alpha)(r_b)$ is well defined for each $b\in[a]$. For each $b\in\dom(a)\cap\st(a)\setminus[a]$, we know that $\alpha(b)$ is $bu$ for some $u\in\genby{[a]}$, since we can commute elements of $[a]$ to the right side of $b$. Since elements of $[a]$ commute, again $\eta(\alpha)(r_b)$ is well defined. For each $b\in\dom(a)\setminus\st(a)$, we know $\alpha(b)=ubv$ for some $u,v\in\genby{[a]}$. These elements $u$ and $v$ are well defined because we cannot commute elements of $[a]$ across this $b$. Again the coefficients are well defined because $\genby{[a]}$ is abelian.
Now consider $Y$ a connected component of $\Gamma\setminus\st(a)$ with at least two vertices. For any $b$ in $Y$ there is some $c$ in $Y$ such that $b$ commutes with $c$. We know $\alpha(b)=u_1bv_1$ and $\alpha(c)=u_2cv_2$, with $u_i,v_i\in \genby{[a]}$. It must be the case that $\alpha(b)$ commutes with $\alpha(c)$. We consider the centralizer of $\alpha(b)$, as indicated by the Servatius centralizer theorem (see Section~\ref{ss:combinatorial}). If $u_1\neq v_1^{-1}$, then the cyclically reduced form of $\alpha(b)$ contains $b$ and elements of $[a]$. This means that there is an element of $\genby{[a]}$ that conjugates $\alpha(c)$ into $\genby{\st(b)\cap\st(a)}$. However, this is impossible---any such conjugate of $\alpha(c)$ would have $c$ in it, which is not in $\st(a)$. So this implies that $u_1=v_1^{-1}$, and of course a parallel argument implies that $u_2=v_2^{-1}$. Further, the centralizer theorem then implies that $u_1=u_2$. Repeating this argument on all adjacent pairs in $Y$, we see that there is a fixed $u\in \genby{[a]}$, depending only on $\alpha$, such that $\alpha(b)=u^{-1}bu$ for any $b$ in $Y$. Then since $\genby{[a]}$ is abelian, the image $\eta(\alpha)(r_Y)$ is well defined.
It is a straightforward computation to check that $\eta$ is a homomorphism.
To see that $\eta$ is injective and has the specified image, we construct an inverse map $\theta$. Really there is only one way to do this---given a matrix $M$ in the specified image, we read off the first $n$ entries in each column and multiply together elements of $[a]$ with these exponents to get $n+k$ elements of $\genby{[a]}$. For each basis element, let $u_b$ be the element determined by the column for $r_b$, let $v_b$ be determined by the column for $l_b$, and let $u_Y$ be determined by the column for $r_Y$. We construct an automorphism $\theta(M)$ that sends $b\in[a]$ to $u_b$; $\theta(M)$ sends $b\in\dom(a)\cap\st(a)\setminus[a]$ to $bu_b$; $\theta(M)$ sends $b\in\dom(a)\setminus\st(a)$ to $v_bbu_b$; and $\theta(M)$ sends each $c$ in a connected component $Y$ with at least two vertices to $u_Y^{-1}cu_Y$. In fact this completely specifies $\theta(M)$ on generators; $\theta(M)$ extends to an endomorphism of $A_\Gamma$ because the images of generators still satisfy the defining relations for $A_\Gamma$, and $\theta(M)$ specifies an automorphism of $A_\Gamma$ because $\theta(M^{-1})$ is its inverse. Since these maps are clearly inverses of each other, this finishes the proof. \end{proof}
Let $m,n,k$ be integers with $k\geq 0$ and $n,m\geq 1$. In light of Lemma~\ref{le:etainjective}, we define the group $G_1$ to be $\GL(n,\Z)\ltimes M_{n,k}(\Z)$ and we state versions of Propositions~\ref{pr:insamewhorbit} and~\ref{pr:whstab} in this setting. The reason for the subscript $1$ will be clear later; we will consider other versions of this group where the subscript denotes a common denominator for certain rational matrix entries. \begin{proposition}\label{pr:matrixorbitalgorithm} There is an algorithm that takes in matrices $A$ and $B$ in $M_{n+k,m}(\Z)$, and returns a matrix $D$ in $G_1$ with $DA=B$, or determines that there is no such matrix. \end{proposition} \begin{proposition}\label{pr:matrixstabpres} Suppose $A$ is in $M_{n+k,m}(\Z)$. Then there is an algorithm that produces a finite presentation for the stabilizer of $A$ in $G_1$. \end{proposition} The proofs of these propositions are postponed to Section~\ref{se:linearproblems}.
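The semidirect product structure of $G_1$ can be read off from block multiplication: $(A_1,B_1)(A_2,B_2)=(A_1A_2,\,A_1B_2+B_1)$. A quick sketch (our own, with illustrative small integer matrices) verifying this against the block form of Lemma~\ref{le:etainjective}:

```python
def matmul(X, Y):
    """Integer matrix product, matrices as lists of rows."""
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def matadd(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(len(X[0]))] for i in range(len(X))]

def block(A, B, k):
    """Assemble the block matrix [[A, B], [0, I_k]] from n x n A and n x k B."""
    n = len(A)
    top = [A[i] + B[i] for i in range(n)]
    bot = [[0] * n + [1 if j == i else 0 for j in range(k)] for i in range(k)]
    return top + bot

# Multiplying two block matrices realizes the semidirect product law
# (A1, B1) * (A2, B2) = (A1*A2, A1*B2 + B1).
A1, B1 = [[1, 1], [0, 1]], [[2], [3]]
A2, B2 = [[1, 0], [1, 1]], [[5], [7]]
product = matmul(block(A1, B1, 1), block(A2, B2, 1))
assert product == block(matmul(A1, A2), matadd(matmul(A1, B2), B1), 1)
```

In particular the matrices of this block shape are closed under multiplication, with the $\GL(n,\Z)$ part acting on the $M_{n,k}(\Z)$ part by left multiplication.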
\subsection{Algorithms for groups of generalized Whitehead automorphisms} In this section we prove Propositions~\ref{pr:insamewhorbit} and~\ref{pr:whstab} modulo Propositions~\ref{pr:matrixorbitalgorithm} and~\ref{pr:matrixstabpres}, which we prove later. We again fix $a\in \Gamma$. The following definition is not only central to this section but is also important for much of the rest of the paper. \begin{definition}\label{de:syllable} A \emph{syllable} in $A_\Gamma$ with respect to $a$ is a graphically reduced product of the form $c u d\in A_\Gamma$, where $u$ is an element of $\genby{\st(a)}$, and $c$ and $d$ are elements of $(\Gamma\setminus\st(a))^{\pm1}$ or $c=d=1$. It is a \emph{linear syllable} if $c\neq 1$ and $d\neq 1$, and a \emph{cyclic syllable} if $c=d=1$. If $cud$ is a linear syllable, then $c$ and $d$ are the \emph{endpoints}.
If $w$ is a conjugacy class in $A_\Gamma$, a \emph{decomposition of $w$ into syllables with respect to $a$} is either of the following: \begin{itemize} \item if the conjugacy class $w$ has a representative element $u$ in $\genby{\st(a)}$, then $u$ itself is a cyclic syllable, and the singleton $(u)$ is a decomposition of $w$ into syllables; \item if $w$ does not have a representative in $\genby{\st(a)}$, then a decomposition of $w$ into syllables is a $k$-tuple of linear syllables for some $k$: \[(c_1u_1c_2, c_2u_2c_3,\dotsc,c_ku_kc_1),\] such that the product \[c_1u_1c_2u_2c_3u_3\dotsm c_ku_k\in A_\Gamma\] is a graphically reduced and cyclically reduced product representing the class $w$. \end{itemize} The products $u$ or $c_1u_1\dotsm c_ku_k$ above are the \emph{representatives associated with the decomposition}.
If $W=(w_1,\dotsc,w_k)$ is a $k$-tuple of conjugacy classes in $A_\Gamma$, a \emph{decomposition of $W$ into syllables with respect to $a$} is the concatenation, into a single tuple, of some choice of decompositions of the $w_i$ into syllables. Given a decomposition of $W$ into syllables, the \emph{representative associated to the decomposition} is the tuple of elements of $A_\Gamma$ consisting of the representatives associated with the decompositions of the $w_i$. \end{definition}
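A syllable decomposition can be computed by cutting a cyclically and graphically reduced word at its letters outside $\st(a)$, with the last syllable wrapping around to the first endpoint. A sketch (not from the paper; hypothetical encoding with words as lists of (generator, sign) pairs and the graph as an adjacency dict):

```python
def syllable_decomposition(word, adj, a):
    """Decompose a cyclically+graphically reduced word into syllables
    with respect to a.  Letters with generator in st(a) form the u parts;
    letters outside st(a) are the shared endpoints c_i."""
    st_a = {a} | set(adj[a])
    cuts = [i for i, (g, _) in enumerate(word) if g not in st_a]
    if not cuts:
        return [word]                       # a single cyclic syllable
    syllables = []
    for t in range(len(cuts)):
        i = cuts[t]
        if t + 1 < len(cuts):
            syllables.append(word[i:cuts[t + 1] + 1])
        else:                               # wrap around to the first endpoint
            syllables.append(word[i:] + word[:cuts[0] + 1])
    return syllables

# Path graph a-b-c-d, decomposing with respect to a (st(a) = {a, b}).
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
```

For instance, over the path graph the word $cad$ decomposes as $(cad,\,dc)$, matching the form $(c_1u_1c_2,\,c_2u_2c_1)$ of the definition, and $ca$ decomposes as the single syllable $(cac)$.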
We give some specific examples of syllable decompositions in Example~\ref{ex:syllablealgorithm} below. To clarify the definition, we note a couple of unusual cases. If $c$ and $d$ are distinct elements not in $\st(a)$, and $u\in\genby{\st(a)}$, then $(cd,duc)$ is a syllable decomposition for the conjugacy class of $cdu$. Likewise, $(cuc)$ is a syllable decomposition for the conjugacy class of $cu$. Generally, syllable decompositions are far from being unique; however, it is not hard to see that certain aspects of syllable decompositions are determined by the tuples being decomposed. First of all, for a given conjugacy class, it is either the case that it has a unique decomposition as a cyclic syllable, or that all of its syllable decompositions are tuples of linear syllables. Second, if a conjugacy class decomposes into linear syllables, then the number of linear syllables in the decomposition is the number of instances of letters from $X\setminus\st(a)$ appearing in any cyclically and graphically reduced representative. In particular, the number of syllables in a decomposition is the same for all decompositions.
If $w$ is a conjugacy class in $A_\Gamma$, there are two possible kinds of ambiguities that come up in decomposing $w$ into syllables. First of all, if $T$ is a decomposition of $w$ into syllables, then any cyclic permutation of $T$ is also a decomposition of $w$ into syllables. Second, if we take the representative associated with a decomposition and get a different graphically reduced product by commuting some letters across syllable boundaries, the resulting product will be associated with a different decomposition of $w$. These two kinds of ambiguities also come up in decomposing tuples of conjugacy classes into syllables. As we will see below, this second kind of ambiguity is not very important.
The purpose of decomposing things into syllables with respect to $a$ is that it gives us a convenient way to encode the action of $\whset{a}$ and to keep track of its effect on lengths. We recall the free abelian group $Z_{[a]}$ from Section~\ref{ss:genwharelinear}. We define a map $\nu$ to $Z_{[a]}$ from the set of syllables in $A_\Gamma$ with respect to $a$: \begin{itemize} \item If $u$ is a cyclic syllable then $\nu(u)$ is $\sum_{b\in\st(a)\cap \dom(a)} n_b r_b$, where $n_b$ is the sum exponent of $b$ in $u$ for each $b\in\st(a)\cap\dom(a)$. \item If $cud$ is a linear syllable then $\nu(cud)$ is $v_c+v_d+\sum_{b\in\st(a)\cap\dom(a)} n_b r_b$, where $n_b$ is the sum exponent of $b$ in $u$ for each $b\in\st(a)\cap\dom(a)$, and \begin{itemize} \item if $c$ is a positive generator and $c\in \dom(a)$, then $v_c=r_c$, \item if $c^{-1}$ is a positive generator and $c^{-1}\in\dom(a)$, then $v_c=-l_c$, \item otherwise $c$ or $c^{-1}$ is in $Y$, a connected component of $\Gamma\setminus\st(a)$ with two or more vertices, and $v_c=r_Y$; \item if $d$ is a positive generator and $d\in \dom(a)$, then $v_d=l_d$, \item if $d^{-1}$ is a positive generator and $d^{-1}\in\dom(a)$, then $v_d=-r_d$, and \item otherwise $d$ or $d^{-1}$ is in $Y$, a connected component of $\Gamma\setminus\st(a)$ with two or more vertices, and $v_d=-r_Y$. \end{itemize} \end{itemize} The map $\nu$ extends diagonally to define a map $\nu$ from tuples of syllables to tuples of vectors of $Z_{[a]}$. Note that $\nu$ makes no record of the letters in $cud$ from $\st(a)\setminus\dom(a)$. In fact, the map $\nu$ gets rid of one of the ambiguities possible in decomposing a class into syllables.
\begin{lemma}\label{le:decompwelldef} Suppose $w$ is a conjugacy class in $A_\Gamma$ and $T$ and $T'$ are decompositions of $w$ into syllables with respect to $a$. Then the tuple $\nu(T)$ is a cyclic permutation of the entries of the tuple $\nu(T')$. Further, if $W$ is a tuple of conjugacy classes and $T$ and $T'$ are decompositions of $W$, then $\nu(T)$ is a permutation of $\nu(T')$. \end{lemma} \begin{proof} As mentioned in Section~\ref{ss:combinatorial}, any graphically reduced word representing a given element can be transformed into any other by a sequence of commutations. Further, any graphically and cyclically reduced word representing a conjugacy class can be transformed into any other by a sequence of commutations and cyclic permutations. This observation has a corollary for decompositions of $w$ into syllables. Any decomposition $T=(c_1u_1c_2,\dotsc,c_ku_kc_1)$ of $w$ into syllables can be turned into any other by a sequence of the following moves: \begin{itemize} \item If some $u_q=u'_qu''_q$ (possibly with $u'_q=1$ or $u''_q=1$) and for some $p\leq q$ we have that $c_p$ commutes with $c_q$, $u'_q$, $u_p$, and all $c_i$ and $u_i$ for $i=p+1,\dotsc,q-1$, and none of the intervening $c_i$ are equal to $c_p^{\pm1}$, then we can replace the syllable $c_{p-1}u_{p-1}c_p$ with $c_{p-1}u_{p-1}u_pc_{p+1}$, delete $c_pu_pc_{p+1}$ from the list, and break $c_qu_qc_{q+1}$ into two adjacent syllables $c_qu'_qc_p$ and $c_pu''_qc_{q+1}$. In the case that $p=q$, we simply replace $c_{p-1}u_{p-1}c_p$ with $c_{p-1}u_{p-1}u'_pc_p$ and replace $c_pu_pc_{p+1}$ with $c_pu''_pc_{p+1}$. \item The previous move can be done in reverse. 
\item If some $u_p=u'_pxu''_p$ and for some $q>p$ we have $u_q=u'_qu''_q$ (possibly with any of $u'_p,u''_p,u'_q,u''_q$ equal to $1$) such that $x$ commutes with $u''_p$, $c_q$, $u'_q$ and all $c_i$ and $u_i$ for $i=p+1,\dotsc, q-1$, then we can replace the syllable $c_pu_pc_{p+1}$ with $c_pu'_pu''_pc_{p+1}$ and replace $c_qu_qc_{q+1}$ with $c_qu'_qxu''_qc_{q+1}$. \item The previous move can be done in reverse. \item We can cyclically permute the entries of $T$. \end{itemize} One issue that needs to be addressed is whether these moves send a well-formed syllable decomposition to another well-formed syllable decomposition. They do, because syllables are graphically reduced products: if one of the replacements above resulted in a syllable that is not graphically reduced, for example one in which $c_i$ cancels with $c_{i+1}$, then the original syllable was not graphically reduced either. Since things are moved only across things they commute with, any cancellation that is possible after the replacement was also possible in the first place.
We consider the effect the first move has on $\nu(T)$ in the case that $p<q$. This replaces $c_{p-1}u_{p-1}c_p$ with $c_{p-1}u_{p-1}u_pc_{p+1}$. Since $c_p$ commutes with $u_p$, but $c_p$ is not adjacent to $a$, no generator from $\st(a)\cap\dom(a)$ appears in $u_p$. Since $c_p$ commutes with $c_{p+1}$, and neither is adjacent to $a$, this means $a$ dominates neither one and they are in the same component $Y$ of $\Gamma\setminus\st(a)$. These facts imply that $\nu(c_{p-1}u_{p-1}c_p)=\nu(c_{p-1}u_{p-1}u_pc_{p+1})$. The syllable $c_pu_pc_{p+1}$ is deleted, but since $c_p$ and $c_{p+1}$ are both in $Y$ and since no generator from $\st(a)\cap\dom(a)$ appears in $u_p$, we have $\nu(c_pu_pc_{p+1})=0$. The intervening syllables $c_iu_ic_{i+1}$ for $i=p+1,\dotsc,q-1$ commute entirely with $c_p$; this implies for each $i$ that $u_i$ contains no generators from $\st(a)\cap\dom(a)$ and that both $c_i$ and $c_{i+1}$ are in $Y$; this implies that for each $i$, $\nu(c_iu_ic_{i+1})=0$. So although a syllable is deleted before this sequence, thus shifting this sequence forward, there is no effect because this is a sequence of zeros. The syllable $c_qu_qc_{q+1}$ is broken into $c_qu'_qc_p$ and $c_pu''_qc_{q+1}$. For the same reason as the intervening syllables, we see $\nu(c_qu'_qc_p)=0$. Similarly, $c_q$ and $c_p$ must both be in $Y$, and $u'_q$ makes no contribution, so $\nu(c_qu_qc_{q+1})=\nu(c_pu''_qc_{q+1})$. So the effect of the move on $\nu(T)$ is to replace the entry in position $p$ with the same entry, replace zero entries from position $p+1$ to position $q-1$ with zero entries, and replace the entry in position $q$ with an identical entry. Therefore the first move does not affect $\nu(T)$ if $p<q$.
If $p=q$, the first move replaces $c_{p-1}u_{p-1}c_p$ with $c_{p-1}u_{p-1}u'_pc_p$ and replaces $c_pu_pc_{p+1}$ with $c_pu''_pc_{p+1}$. Since $u'_p$ commutes with $c_p$ it contains no generators from $\st(a)\cap\dom(a)$, and we deduce that $\nu(c_{p-1}u_{p-1}c_p)=\nu(c_{p-1}u_{p-1}u'_pc_p)$ and $\nu(c_pu_pc_{p+1})=\nu(c_pu''_pc_{p+1})$. So the move has no effect on $\nu(T)$ in this case as well.
Next we consider the second kind of move. We replace $c_pu_pc_{p+1}=c_pu_p'xu_p''c_{p+1}$ with $c_pu_p'u_p''c_{p+1}$; since $x$ commutes with $c_{p+1}$, it is not in $\st(a)\cap\dom(a)$, and therefore $\nu(c_pu_pc_{p+1})=\nu(c_pu_p'u_p''c_{p+1})$. We also replace $c_qu_qc_{q+1}=c_qu'_qu''_qc_{q+1}$ with $c_qu'_qxu''_qc_{q+1}$; for the same reason, $\nu(c_qu_qc_{q+1})=\nu(c_qu'_qxu''_qc_{q+1})$. Therefore $\nu(T)$ is unchanged for this kind of move. Of course, if a kind of move leaves $\nu(T)$ unchanged, then the same kind of move in reverse will also leave it unchanged. So the moves above can only change $\nu(T)$ by cyclically permuting its entries.
If $W$ is a tuple of cyclic words and $T$ is a decomposition of $W$ into syllables, then of course, we can cyclically permute the decompositions of the entries in $W$. Since $T$ is a concatenation of these decompositions of entries, it is possible that different decompositions of $W$ will differ by more complicated permutations---ones that cyclically permute the segments coming from different entries of $W$. \end{proof}
\begin{definition}\label{de:syllableaction} Fix a vertex $a$ in $\Gamma$. We define an action of $\whset{a}$ on the set of syllables with respect to $a$ as follows: if $\alpha\in\whset{a}$ and $s=cud$ where $u\in\genby{\st(a)}$ and $c,d\in(\Gamma\setminus\st(a))^{\pm1}$ and $\alpha(c)=w_1cw_2$ and $\alpha(d)=w_3dw_4$, then define \[\alpha\cdot s=cw_2\alpha(u)w_3d.\] We extend this action diagonally to the set of $k$-tuples of syllables for each $k$. \end{definition} This action discards the element that $\alpha$ places on the left of the left endpoint of the syllable, and the element that $\alpha$ places on the right of the right endpoint of the syllable. We recall the map $\eta\co\whset{a}\to \Aut(Z_{[a]})$ from Section~\ref{ss:genwharelinear}. Note that the action of $\whset{a}$ on syllables just described corresponds to the action of $\eta(\whset{a})$ on $Z_{[a]}$, in that the map $\nu$ intertwines the two actions (with the acting groups identified via the injective map $\eta$).
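For concreteness, consider again a graph with vertices $\{a,b,c,d\}$ and edges joining $a$ to $b$ and $c$ to $d$ (the graph of Example~\ref{ex:syllablealgorithm} below), and let $\alpha$ be the partial conjugation with $\alpha(c)=aca^{-1}$ and $\alpha(d)=ada^{-1}$, fixing $a$ and $b$ (assuming, as in the discussion of long-range automorphisms below, that this partial conjugation belongs to $\whset{a}$). For the syllable $s=cbc$ we have $w_2=a^{-1}$ at the left endpoint and $w_3=a$ at the right endpoint, so \[\alpha\cdot s=ca^{-1}\alpha(b)ac=ca^{-1}bac.\] Computing with the definition of $\nu$ above gives $\nu(\alpha\cdot s)=r_Y+(-1+1)r_a+r_b-r_Y=r_b=\nu(s)$: the conjugating letters contribute nothing because their sum exponent is zero.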
Let $\{a_1,\dotsc,a_n\}=[a]$. We note that any syllable with respect to $a$ may, without loss of generality, be written in the form $ca_1^{p_1}\dotsm a_n^{p_n}ud$, where $u\in\genby{\st(a)\setminus[a]}$. This is because in a general syllable $cu'd$, everything from $[a]$ in $u'$ can be commuted to the left, since $u'\in\genby{\st(a)}$. The following result is a kind of equivariance between the actions of $\whset{a}$ on syllables and on $Z_{[a]}$.
\begin{lemma}\label{le:syllableequivariance} Suppose $ca_1^{p_1}\dotsm a_n^{p_n}ud$ is a syllable of $A_\Gamma$ with respect to $a$, and suppose $\alpha\in\whset{a}$. Let $p'_i$ be the coefficient of $r_{a_i}$ in $\eta(\alpha)\cdot \nu(ca_1^{p_1}\dotsm a_n^{p_n}ud)$. Then there are elements $v_1,v_2\in\genby{[a]}$ with \[\alpha(ca_1^{p_1}\dotsm a_n^{p_n}ud)=v_1ca_1^{p'_1}\dotsm a_n^{p'_n}udv_2.\] \end{lemma}
\begin{proof} This is a computational exercise in the definitions. \end{proof}
\begin{lemma}\label{le:decompositionequivariance} Suppose $W$ is a tuple of cyclic words and \[T=(c_1a_1^{p_{1,1}}\dotsm a_n^{p_{1,n}}u_1d_1,\dotsc,c_ma_1^{p_{m,1}}\dotsm a_n^{p_{m,n}}u_md_m)\]
is a decomposition of $W$ into syllables, with each $u_i\in\genby{\st(a)\setminus[a]}$. Suppose $\alpha\in\whset{a}$. Let $p'_{i,j}$ be the coefficient on $r_{a_j}$ in the $i$th coordinate of $\eta(\alpha)\cdot \nu(T)$, for $i=1,\dotsc,m$ and $j=1,\dotsc, n$. Then \[\alpha\cdot T=(c_1a_1^{p'_{1,1}}\dotsm a_n^{p'_{1,n}}u_1d_1,\dotsc,c_ma_1^{p'_{m,1}}\dotsm a_n^{p'_{m,n}}u_md_m),\] and $\alpha\cdot T$ is a syllable decomposition of $\alpha\cdot W$. \end{lemma}
\begin{proof} This follows from repeated application of Lemma~\ref{le:syllableequivariance}. In the case that $w$ is a cyclic word and $(c_1u_1c_2,\dotsc,c_mu_mc_1)$ is its decomposition, we rewrite the representative associated to this decomposition as \[c_1u_1c_2 \cdot c_2^{-1}\cdot c_2u_2c_3 \cdot c_3^{-1} \dotsm c_mu_mc_1\cdot c_1^{-1}.\] We apply $\alpha$ to each factor in this product separately. Lemma~\ref{le:syllableequivariance} applies to each syllable factor in the product. When we act on a standalone $c_i^{-1}$ factor in the product, we cancel away the leading and trailing elements from $\genby{[a]}$ from the preceding and following syllables (the $v_1$ and $v_2$ elements from Lemma~\ref{le:syllableequivariance}). From the resulting product, in which the exponents on the $a_i$--factors inside the syllables come from the action of $\eta(\alpha)$ (as in Lemma~\ref{le:syllableequivariance}), it is easy to see that the corresponding syllable decomposition $\alpha\cdot T$ from the lemma statement is a decomposition of $\alpha\cdot w$. The case that $W$ is a tuple of cyclic words follows by applying the same argument to each entry in the tuple separately. \end{proof}
\begin{example}\label{ex:syllablealgorithm} This example illustrates why we need to take a little care with the algorithms for Proposition~\ref{pr:insamewhorbit} and Proposition~\ref{pr:whstab}. Suppose for this example that $\Gamma$ is the graph with four vertices $\{a,b,c,d\}$, with an edge from $a$ to $b$ and an edge from $c$ to $d$ (so $A_\Gamma$ is $\Z^2 * \Z^2$). Consider the conjugacy classes $u$ and $v$ represented by $cacbcb$ and $cbcabcb$ respectively. Choosing syllable decompositions with respect to $a$ arbitrarily, we might choose $T=(cac,cbc,cbc)$ for $u$ and $T'=(cbc,cabc,cbc)$ for $v$. The group $Z_{[a]}$ is generated by $r_a$, $r_b$ and $r_Y$, where $Y=\{c,d\}$, and $\nu(T)=(r_a,r_b,r_b)$ and $\nu(T')=(r_b,r_b+r_a,r_b)$. To check whether $\nu(T)$ and $\nu(T')$ are in the same orbit, we apply the algorithm from Proposition~\ref{pr:matrixorbitalgorithm}, after choosing an appropriate identification between $Z_{[a]}$ and $\Z^3$. We find that $\nu(T)$ and $\nu(T')$ are not in the same orbit. However, $T''=(cabc,cbc,cbc)$ is also a syllable decomposition for $v$ with respect to $a$, and it is not hard to see that $\nu(T)$ and $\nu(T'')$ are in the same orbit under $\eta(\whset{a})$: the automorphism sending $a$ to $ab^{-1}$ and fixing the other generators sends one to the other. This automorphism also sends $u$ to $v$. This example illustrates the need to consider permutations of a syllable decomposition, instead of only considering a single arbitrary decomposition.
Now consider the conjugacy class $u$ represented by $cacb$ in the same group. One syllable decomposition for the conjugacy class is $T=(cac,cbc)$. The automorphism $\alpha$ sending $a$ to $b$ and $b$ to $a$ and fixing the other generators is in $\whset{a}$, and $\alpha\cdot u=u$. However, $\eta(\alpha)\cdot \nu(T)\neq\nu(T)$. This illustrates the possibility of automorphisms fixing a conjugacy class but not a particular syllable decomposition of that class. \end{example}
For our finite presentation result, we need refined versions of Propositions~\ref{pr:insamewhorbit} and~\ref{pr:whstab}. Specifically, we need to perform the algorithms in these propositions while respecting certain restrictions on the support of automorphisms, which we now define. \begin{definition}\label{de:support} The \emph{support} of a generalized Whitehead automorphism $\alpha\in\whset{a}$ is the subset of $X^{\pm1}$ with \begin{itemize} \item for $b$ adjacent to or equal to $a$, $b$ and $b^{-1}$ are both in $\supp(\alpha)$ if $\alpha(b)\neq b$ and neither $b$ nor $b^{-1}$ is in $\supp(\alpha)$ if $\alpha(b)=b$, and \item for $b$ not adjacent to $a$ with $\alpha(b)=ubv$, $b\in\supp(\alpha)$ if and only if $v\neq 1$ and $b^{-1}\in\supp(\alpha)$ if and only if $u\neq 1$. \end{itemize} For $a\in X$ and $S\subset (X\setminus\st(a))^{\pm1}$, we define $\whsetsr{a}{S}$ to be the subset of $\whset{a}$ consisting of automorphisms $\alpha$ with $\supp(\alpha)\cap S=\varnothing$. \end{definition}
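To illustrate the definition in the graph with vertices $\{a,b,c,d\}$ and edges $a$--$b$ and $c$--$d$: the partial conjugation $\beta$ with $\beta(c)=aca^{-1}$ and $\beta(d)=ada^{-1}$, fixing $a$ and $b$ (assuming as before that it lies in $\whset{a}$), has $\beta(c)=ucv$ with $u=a\neq 1$ and $v=a^{-1}\neq 1$, and similarly for $d$, so \[\supp(\beta)=\{c,c^{-1},d,d^{-1}\}.\] Consequently $\beta\in\whsetsr{a}{S}$ only for $S=\varnothing$. By contrast, the automorphism sending $a$ to $ab^{-1}$ and fixing the other generators (from Example~\ref{ex:syllablealgorithm}) has support $\{a,a^{-1}\}$, and so lies in $\whsetsr{a}{S}$ for every $S\subset(X\setminus\st(a))^{\pm1}$.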
Suppose $a\in X$ and $S\subset(X\setminus\st(a))^{\pm1}$. Now we consider the image of $\whsetsr{a}{S}$ under $\eta$. Say that a basis element of $Z_{[a]}$ does not intersect $S$ if it is of the form $r_b$ or $l_b$ with $b\notin S\cup [a]$, or of the form $r_Y$ with $Y\cap S=\varnothing$. Suppose $|[a]|=n$, and there are $k$ basis elements of $Z_{[a]}$ not intersecting $S$, and $l$ remaining basis elements for $Z_{[a]}$. Suppose we identify $Z_{[a]}$ with $\Z^{n+k+l}$ so that the basis elements of the form $r_b$ with $b\in[a]$ map to the first $n$ basis elements, the basis elements not intersecting $S$ map to the next $k$ basis elements, and the remaining basis elements of $Z_{[a]}$ map to the last $l$ basis elements.
\begin{lemma}\label{le:supportrestrictedsubgroup} With $a$, $S$ as above, $\whsetsr{a}{S}$ is a subgroup of $\whset{a}$. Identifying $\Aut(Z_{[a]})$ with $\GL(n+k+l,\Z)$ using the identification of $Z_{[a]}$ with $\Z^{n+k+l}$ above, the image of $\whsetsr{a}{S}$ under $\eta$ is the set of matrices of the form \[ \begin{pmatrix} A & B & O_{n,l} \\ O_{k,n} & I_k & O_{k,l} \\ O_{l,n} & O_{l,k} & I_l \end{pmatrix} \] where $A\in\GL(n,\Z)$, $B\in M_{n,k}(\Z)$, and the $O$'s and $I$'s represent zero and identity blocks of the indicated dimensions. \end{lemma}
\begin{proof} The assertion that this subset is a subgroup is left as an exercise for the reader.
For $\alpha\in\whset{a}$, the definition of $\eta$ tells us that $\eta$ counts the sum exponent of elements of $[a]$ on the right and left sides of elements of $X$. If $\supp(\alpha)\cap S=\varnothing$, then for any element of $S$, all of these counts are zero. As explained in the proof of Lemma~\ref{le:etainjective}, the sum exponents are the same for both sides of all elements in the same connected component of $X\setminus\st(a)$, if that component has at least two vertices. So if a basis element intersects $S$, then our counts of sum exponents are all zero for $\alpha\in\whsetsr{a}{S}$, which explains the shape of the matrix.
To see that any matrix of this shape is in the image, we use the same argument as in Lemma~\ref{le:etainjective}. \end{proof}
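In the smallest nontrivial case $n=k=l=1$ (a hypothetical instance chosen only to display the shape), the image of $\whsetsr{a}{S}$ under $\eta$ is the set of matrices \[\begin{pmatrix} \epsilon & m & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad \epsilon\in\GL(1,\Z)=\{\pm1\},\quad m\in M_{1,1}(\Z)=\Z.\]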
\begin{lemma}\label{le:supportrestrictedmatrixalgs} There is an algorithm to check whether two matrices in $M_{n+k+l,m}(\Z)$ are in the same orbit under the group $G$ of block matrices of the form \[ \left( \begin{array}{ccc} A & B & O_{n,l} \\ O_{k,n} & I_k & O_{k,l} \\ O_{l,n} & O_{l,k} & I_l \end{array}\right), \] where $A\in\GL(n,\Z)$, $B\in M_{n,k}(\Z)$, and the $O$'s and $I$'s represent zero and identity blocks of the indicated dimensions.
Further, there is an algorithm that returns a presentation for the stabilizer of a matrix in $M_{n+k+l,m}(\Z)$ under the action of $G$. \end{lemma}
\begin{proof} Suppose $C$ and $D$ are two matrices in $M_{n+k+l,m}(\Z)$. If the last $l$ rows of $C$ do not match the last $l$ rows of $D$, then they cannot be in the same orbit. So we suppose these last $l$ rows match. Next we consider $C'$ and $D'$ in $M_{n+k,m}(\Z)$, where each is $C$ or $D$ respectively with the last $l$ rows omitted. The group $G$ above is isomorphic to the group $G_1$ of Proposition~\ref{pr:matrixorbitalgorithm}, by the mapping that omits the last $l$ rows and the last $l$ columns, and $C$ is in $G\cdot D$ if and only if $C'$ is in $G_1\cdot D'$. Of course, this is exactly what Proposition~\ref{pr:matrixorbitalgorithm} checks.
Similarly, the stabilizer of $C$ in $G$ will be isomorphic to the stabilizer of $C'$ in $G_1$, and Proposition~\ref{pr:matrixstabpres} provides a presentation for this. \end{proof}
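As a toy instance of this reduction (with hypothetical numbers, taking $n=k=l=1$ and $m=1$): for \[C=\begin{pmatrix}1\\2\\3\end{pmatrix},\qquad D=\begin{pmatrix}1\\2\\4\end{pmatrix},\] the last $l=1$ rows disagree, so $C$ and $D$ are not in the same $G$--orbit and no further computation is needed. If instead $D=\begin{pmatrix}5&2&3\end{pmatrix}^{\mathrm{T}}$, we delete the matching last row from each and ask whether $C'=\begin{pmatrix}1&2\end{pmatrix}^{\mathrm{T}}$ lies in $G_1\cdot D'$ for $D'=\begin{pmatrix}5&2\end{pmatrix}^{\mathrm{T}}$; here it does, since \[\begin{pmatrix}1 & -2\\ 0 & 1\end{pmatrix}\begin{pmatrix}5\\2\end{pmatrix}=\begin{pmatrix}1\\2\end{pmatrix}.\]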
So instead of proving Proposition~\ref{pr:insamewhorbit}, we prove the following proposition. Proposition~\ref{pr:insamewhorbit} is the special case where the set $S$ is empty. \begin{proposition}\label{pr:insamewhorbitsupportrestricted} There is an algorithm that takes in two tuples $U$ and $V$ of conjugacy classes from $A_\Gamma$, a vertex $a$ of $\Gamma$ and a subset $S$ of $(X\setminus\st(a))^{\pm1}$, and produces an automorphism $\alpha\in \whsetsr{a}{S}$ with $\alpha\cdot U=V$, or determines that no such automorphism exists. \end{proposition}
\begin{proof} Suppose $U$ and $V$ are two tuples of conjugacy classes of $A_\Gamma$. The first step in the algorithm is to form syllable decompositions $T$ of $U$ and $T'$ of $V$. If the syllable decompositions $T$ and $T'$ do not have the same number of entries, then $U$ and $V$ are not in the same orbit (this follows from Lemma~\ref{le:decompositionequivariance}, since the decomposition of $\alpha\cdot W$ is the same length as the decomposition of $W$). So we suppose that $T$ and $T'$ have the same number of entries. We consider all the permutations of the entries of $T'$ and select from these the ones $T'_1,\dotsc, T'_m$ that are also syllable decompositions of $V$.
Suppose that \[T=(c_1a_1^{p_{1,1}}\dotsm a_k^{p_{1,k}}u_1d_1,\dotsc,c_ma_1^{p_{m,1}}\dotsm a_k^{p_{m,k}}u_md_m).\] We fix an $r$ from $1$ through $m$, and suppose \[T'_r=(c'_1a_1^{p'_{1,1}}\dotsm a_k^{p'_{1,k}}u'_1d'_1,\dotsc,c'_ma_1^{p'_{m,1}}\dotsm a_k^{p'_{m,k}}u'_md'_m).\] We define a tuple $\hat T_r$ to be $T$ with the exponents of the $a_i$ replaced by those from $T'_r$: \[\hat T_r=(c_1a_1^{p'_{1,1}}\dotsm a_k^{p'_{1,k}}u_1d_1,\dotsc,c_ma_1^{p'_{m,1}}\dotsm a_k^{p'_{m,k}}u_md_m).\] At this point, we check whether $\hat T_r$ is a decomposition of $V$ (this amounts to finding the representative associated to $\hat T_r$ and checking whether it represents $V$).
If the answer is yes, we use the algorithm from Proposition~\ref{pr:matrixorbitalgorithm} to check whether there is $A\in \eta(\whset{a})$ with $A\cdot \nu(T)=\nu(\hat T_r)$ and with $A$ in the image of $\whsetsr{a}{S}$. By Lemma~\ref{le:supportrestrictedsubgroup} we know that, after reordering basis elements if necessary, $\eta(\whsetsr{a}{S})$ is the subgroup of $G_1$ consisting of matrices whose upper right block has its last $l$ columns zeroed out (for some $l$), and we use the modification of the algorithm from Proposition~\ref{pr:matrixorbitalgorithm} described in Lemma~\ref{le:supportrestrictedmatrixalgs} to check for such automorphisms.
If our algorithm finds such a matrix $A$, let $\alpha\in\whsetsr{a}{S}$ map to $A$. By Lemma~\ref{le:decompositionequivariance}, we have that $\hat T_r$ is a representative for $\alpha\cdot U$. Of course, in that case, we have $\alpha\cdot U=V$ and the algorithm returns $\alpha$.
If we get negative answers (either there is no matrix $A$ as above or $\hat T_r$ does not represent $V$), we increment $r$ and check the next $T'_r$. If we try this for each $T'_r$ and none of them yields a matrix $A$ as above, we declare that $U$ and $V$ are in different orbits.
To show the correctness of the algorithm, we suppose that we have $\alpha\in\whsetsr{a}{S}$ with $\alpha\cdot U=V$. We want to show that the algorithm finds an automorphism sending one tuple to the other. We take the coefficients from $\eta(\alpha)\cdot \nu(T)$ and substitute them into the exponents of $T$ to get a tuple $T''$; by Lemma~\ref{le:decompositionequivariance}, $T''$ is a syllable decomposition for $V$. By the argument in Lemma~\ref{le:decompwelldef}, $T''$ will differ from other syllable decompositions for $V$ by a sequence of permutations and commutation of elements not in $\dom(a)$ across each other. This means that one of the $T'_r$ will differ from $T''$ only by the positions of elements not in $\dom(a)$. In particular, $\hat T_r=T''$. This means that the algorithm will catch that $\hat T_r$ is a representative for $V$, and then will catch that there is some $\beta\in\whsetsr{a}{S}$ with $\eta(\beta)\cdot \nu(T)=\nu(\hat T_r)$ (of course this is true for $\beta=\alpha$, but there is no guarantee that the algorithm will catch the same automorphism). So by the contrapositive, if the algorithm does not catch any such automorphism, then no such automorphism exists. \end{proof}
Our next goal is to prove Proposition~\ref{pr:whstab}. Again, we need a slight refinement of the proposition for our argument for finite presentability of stabilizers. So we prove the following, which implies Proposition~\ref{pr:whstab} as a special case. \begin{proposition}\label{pr:whstabsupportrestricted} There is an algorithm that takes in a tuple $U$ of conjugacy classes from $A_\Gamma$, and an element $a\in X$ and a subset $S\subset(X\setminus\st(a))^{\pm1}$ and returns a presentation for the stabilizer $(\whsetsr{a}{S})_U$. \end{proposition}
\begin{proof} Let $U$ be a tuple of conjugacy classes in $A_\Gamma$. Let $T_1$ be a syllable decomposition of $U$ with respect to $a$, so that $\nu(T_1)\in Z_{[a]}^M$ for some $M$. Let $T_1,\dotsc, T_m$ be all the permutations of $T_1$ that are also syllable decompositions of $U$. The group $\whsetsr{a}{S}$ acts on $Z_{[a]}^M$ and a given $\nu(T_i)$ may or may not be in the orbit of $\nu(T_1)$ under this action. We reorder $T_1,\dotsc,T_m$ so that the intersection of the $\{\nu(T_i)\}_i$ with the orbit of $\nu(T_1)$ is $\nu(T_1),\dotsc,\nu(T_k)$ for some $k$. If $\alpha\in\whsetsr{a}{S}$ with $\alpha\cdot U=U$ and $i=1,\dotsc,k$, then by Lemma~\ref{le:decompositionequivariance}, the element $\alpha\cdot\nu(T_i)$ is the image under $\nu$ of a syllable decomposition of $U$. Then by Lemma~\ref{le:decompwelldef}, and since $\alpha\cdot\nu(T_i)$ lies in the orbit of $\nu(T_1)$, the element $\alpha\cdot\nu(T_i)$ is one of $\nu(T_1),\dotsc,\nu(T_k)$. Therefore $(\whsetsr{a}{S})_U$ acts on the finite set $\{\nu(T_1),\dotsc,\nu(T_k)\}$, and by construction, it acts transitively. Then $(\whsetsr{a}{S})_{\nu(T_1)}$ is a finite index subgroup of $(\whsetsr{a}{S})_U$.
The observations we just made make it possible to see the correctness of the following algorithm. First we find a syllable decomposition $T_1$ of $U$. Then we enumerate all the permutations of $T_1$ that are also syllable decompositions of $U$, say $T_1,\dotsc, T_m$. Then we use the algorithm from Proposition~\ref{pr:matrixorbitalgorithm} and its modification in Lemma~\ref{le:supportrestrictedmatrixalgs} to go through the list $\nu(T_1),\dotsc,\nu(T_m)$ to determine which of these are in the same $\whsetsr{a}{S}$--orbit as $\nu(T_1)$. By relabeling the list, we assume that these are $\nu(T_1),\dotsc, \nu(T_k)$. Also, as we use the algorithm to check which $\nu(T_i)$ are in the same orbit, we record an example automorphism in $\whsetsr{a}{S}$ that sends $\nu(T_1)$ to $\nu(T_i)$, for each $i$ from $1$ to $k$. Let $S_1$ denote the set of these automorphisms.
Next we use the algorithm from Proposition~\ref{pr:matrixstabpres} and its modification from Lemma~\ref{le:supportrestrictedmatrixalgs} to find a presentation for $(\whsetsr{a}{S})_{\nu(T_1)}$. Let $S_2$ denote the generating set for $(\whsetsr{a}{S})_{\nu(T_1)}$ from this presentation. For each $\nu(T_i)$ for $i$ from $1$ to $k$, we check where each element of $S_1\cup S_2$ sends it (since $(\whsetsr{a}{S})_U$ acts on $\{\nu(T_1),\dotsc,\nu(T_k)\}$, the image will be one of these). We construct the finite graph whose vertices are $\nu(T_1),\dotsc,\nu(T_k)$ and whose edges are labeled by the elements of $S_1\cup S_2$, with an edge labeled by $\alpha\in S_1\cup S_2$ from $\nu(T_i)$ to $\nu(T_j)$ if and only if $\alpha\cdot \nu(T_i)=\nu(T_j)$. It is not hard to see that this graph is isomorphic to the Schreier graph of $(\whsetsr{a}{S})_{\nu(T_1)}$ in $(\whsetsr{a}{S})_U$ with respect to $S_1\cup S_2$. Note that by the choice of $S_1$, this Schreier graph is connected. Using this Schreier graph together with the presentation for $(\whsetsr{a}{S})_{\nu(T_1)}$, we then construct a finite presentation for $(\whsetsr{a}{S})_U$ using the procedure in Lemma~\ref{le:presfromfinidx}. \end{proof}
\section{Applications}\label{se:applications} \subsection{A useful finite graph}\label{ss:usefulgraph} In this section we prove our two applications, Theorem~\ref{th:raagcheckorbit} and Theorem~\ref{th:raagstabpres}, modulo the technical results that we prove in the later sections.
Suppose $W$ is an $M$--tuple of cyclic words in $A_\Gamma$ and $W$ is of minimal length in its $\Aut(A_\Gamma)$--orbit. We construct a directed, labeled multigraph $\Delta$ associated to $W$ as follows. \begin{itemize} \item The vertices of $\Delta$ are the set of $M$--tuples of cyclic words of $A_\Gamma$ of the same length as $W$. \item For each pair of vertices $W_1$ and $W_2$, possibly with $W_1=W_2$, and for each permutation automorphism $\alpha\in P$ with $\alpha\cdot W_1=W_2$, there is a directed edge from $W_1$ to $W_2$ labeled by $\alpha$. \item For each pair of distinct vertices $W_1$ and $W_2$ and each generator $a\in X$, if there is an automorphism in $\whset{a}$ sending $W_1$ to $W_2$, then there is a directed edge from $W_1$ to $W_2$ labeled by some $\alpha\in\whset{a}$ with $\alpha \cdot W_1=W_2$ (this involves a choice). \item For each vertex $W_1$ and each generator $a\in X$, there are edges from $W_1$ to $W_1$ labeled by a finite generating set for the stabilizer $(\whset{a})_{W_1}$ (this also involves choices). \end{itemize}
\begin{lemma} The graph $\Delta$ associated to the minimal tuple $W$ is finite and can be effectively constructed. \end{lemma}
\begin{proof} First of all, $\Delta$ has finitely many vertices because there are finitely many tuples of a given length. We construct $\Delta$ by finding the required edges and attaching them to the $0$--skeleton. There are finitely many permutation automorphisms, so we can explicitly check which ones send which vertices to which vertices. For each generator $a\in X$ and each pair of vertices, we use Proposition~\ref{pr:insamewhorbit} to check whether there is an automorphism in $\whset{a}$ sending one to the other, and if there is, we add an edge labeled by such an automorphism (Proposition~\ref{pr:insamewhorbit} gives us one if one exists). For each generator $a$ and each vertex $W_1$, we use Proposition~\ref{pr:whstab} to get a finite generating set for the stabilizer $(\whset{a})_{W_1}$. These last two steps are effective since there are only finitely many generators in $X$ and vertices in $\Delta$. \end{proof}
\begin{lemma}\label{le:composedelta} If $\alpha$ is the composition of edge labels on a path from a vertex $W_1$ to a vertex $W_2$ in $\Delta$, then $\alpha\cdot W_1=W_2$. \end{lemma}
\begin{proof} This is true for paths of length one by construction and true in general by Lemma~\ref{le:travelSchreier}. \end{proof}
\begin{lemma}\label{le:peakredongraph} Suppose $W_0$ is a vertex in $\Delta$ and $W_0$ is minimal length in its automorphism orbit. If $W'$ is also a vertex in $\Delta$ and $\alpha\in\Aut(A_\Gamma)$ with $\alpha\cdot W_0=W'$, then there is a path $p$ in $\Delta$ from $W_0$ to $W'$ such that the composition of edge labels along $p$ is $\alpha$. \end{lemma}
\begin{proof} We peak reduce $\alpha$ with respect to $W_0$ by elements of $\Omega$, which is possible by Theorem~\ref{th:fullfeaturedpeakreduction}. Suppose $\alpha=\alpha_k\dotsm\alpha_1$ is the resulting factorization. Let $W_i=\alpha_i\dotsm\alpha_1\cdot W_0$ for $i=1,\dotsc,k$, so that $W_k=W'$. Since $W_0$ is minimal length, the factorization being peak reduced means that $\abs{W_i}=\abs{W_0}$ for $i=0,\dotsc,k$. Then each $W_i$ is a vertex in $\Delta$. If $\alpha_i$ is a permutation automorphism, then there is an edge from $W_{i-1}$ to $W_i$ labeled by $\alpha_i$ by construction. If $\alpha_i\in\whset{a}$ for some $a$, then there is an edge from $W_{i-1}$ to $W_i$ labeled by $\beta_i$ for some $\beta_i\in\whset{a}$. Then $\beta_i^{-1}\alpha_i$ stabilizes $W_{i-1}$. By construction, the edge loops at $W_{i-1}$ contain generators for that stabilizer, and there is a path $p_i$ in the loops at $W_{i-1}$ whose edge labels compose to be $\beta_i^{-1}\alpha_i$. So following $p_i$ and then the edge labeled by $\beta_i$ gives a path $p'_i$ from $W_{i-1}$ to $W_i$ such that the composition of labels on $p'_i$ is $\alpha_i$. Composing these paths as $i$ goes from $1$ to $k$ gives a path from $W_0$ to $W_k$ whose edge label composition is $\alpha$. \end{proof}
\subsection{Orbit membership and finite generation} \begin{lemma}\label{le:checkminlength} Suppose $W$ is a tuple of conjugacy classes from $A_\Gamma$. Then $W$ is minimal length in its $\Aut(A_\Gamma)$--orbit if and only if it cannot be shortened by any element of $\whset{a}$ for any $a\in X$. \end{lemma} \begin{proof} It is clear that a minimal-length tuple cannot be shortened, so we prove the other direction. Suppose for contradiction that there is some $W'$ with $\abs{W'}<\abs{W}$ and some $\alpha\in\Aut(A_\Gamma)$ with $\alpha\cdot W= W'$. We peak reduce $\alpha$ with respect to $W$ by Theorem~\ref{th:fullfeaturedpeakreduction}. Then $\alpha=\beta_k\dotsm\beta_1$ with $\beta_i\in\Omega$ for all $i$, where $i\mapsto\abs{\beta_i\dotsm\beta_1\cdot W}$ is a sequence of lengths that decreases, stays level, and then increases (with any of these phases possibly omitted). Since $\abs{\alpha\cdot W}<\abs{W}$, the decreasing phase cannot be omitted, and therefore $\abs{\beta_1\cdot W}<\abs{W}$. Since permutation automorphisms preserve length, $\beta_1$ lies in $\whset{a}$ for some $a\in X$ and shortens $W$. This contradiction proves the lemma. \end{proof}
\begin{proof}[Proof of Theorem~\ref{th:raagcheckorbit}] Let $U$ and $V$ be two $M$--tuples of conjugacy classes from $A_\Gamma$; we want to check whether they are in the same $\Aut(A_\Gamma)$--orbit. We start by enumerating the tuples of conjugacy classes from $A_\Gamma$ that are strictly shorter than $\abs{U}$. Of course there are only finitely many such tuples (even so, this step is a disappointing bottleneck in the algorithm). For each $U'$ strictly shorter than $U$, and each class $[a]$ with $a\in\Gamma$, we use Proposition~\ref{pr:insamewhorbit} to check whether $U'$ is in the same orbit as $U$ under $\whset{a}$. If it is, we replace $U$ by $U'$ and repeat the previous step, checking whether one of the $\{\whset{a}\}_a$ can shorten the new $U$. We stop when we have verified that none of these sets of automorphisms can shorten $U$. By Lemma~\ref{le:checkminlength}, the resulting $U$ is of minimal length.
After we shorten $U$ as much as possible, we do the same to $V$. If the minimal lengths are different, we declare that the orbits are different. Now we suppose that $U$ and $V$ are both minimal length in their automorphism orbits with $\abs{U}=\abs{V}$; let $\Delta$ be the graph from Section~\ref{ss:usefulgraph} above, constructed using $U$ (the vertices of $\Delta$ are tuples of length $\abs{U}$). At this point, we check whether $U$ and $V$ are in the same connected component of $\Delta$ (this is doable since $\Delta$ is a finite graph). By Lemma~\ref{le:composedelta}, if $U$ and $V$ are in the same component, then there is an automorphism $\alpha\in \Aut(A_\Gamma)$ with $\alpha\cdot U=V$, and we can find such an $\alpha$ by composing the edge labels on a path from $U$ to $V$. Conversely, if there is an automorphism $\alpha$ sending $U$ to $V$, then there is a path from $U$ to $V$ in $\Delta$ and both are in the same connected component. This is true by Lemma~\ref{le:peakredongraph}, which uses Theorem~\ref{th:fullfeaturedpeakreduction}. \end{proof}
At this point we can quickly prove an intermediate result. By $\pi_1(\Delta,W)$ we mean the fundamental group of $\Delta$ based at $W$; this may be interpreted either combinatorially or topologically. \begin{proposition}\label{pr:stabfg} Suppose $W$ is a tuple of cyclic words in $A_\Gamma$. Then there is an algorithm to find a finite generating set for the stabilizer $\Aut(A_\Gamma)_W$. \end{proposition} \begin{proof} Altering $W$ by an automorphism sends $\Aut(A_\Gamma)_W$ to a conjugate, so we are free to replace $W$ with a representative of its orbit of minimal length. We find such an element using Lemma~\ref{le:checkminlength} and Proposition~\ref{pr:insamewhorbit}. Now assuming that $W$ has minimal length in its orbit, we construct the graph $\Delta$ as above. Then composition of edge labels defines a homomorphism $\pi_1(\Delta,W)\to \Aut(A_\Gamma)$ (this cannot fail because the domain is free). By Lemma~\ref{le:composedelta} (with $W_1=W_2=W$), the image of this homomorphism lies in $\Aut(A_\Gamma)_W$. By Lemma~\ref{le:peakredongraph} (with $W_0=W'=W$), this homomorphism surjects onto $\Aut(A_\Gamma)_W$. Since $\Delta$ is a finite graph, $\pi_1(\Delta,W)$ is a finitely generated free group, and the images of a free basis under this homomorphism give the desired finite generating set. \end{proof}
\subsection{Relations}\label{ss:relations} Our next goal is to show that stabilizers of tuples of conjugacy classes are finitely presentable. Before we start, we need to record some relations that hold among the generalized Whitehead automorphisms. First we mention relations between classic Whitehead automorphisms. In our terminology, a \emph{classic Whitehead automorphism} is either \begin{itemize} \item a permutation automorphism from $P$ (an automorphism that restricts to a permutation of $X^{\pm1}$) or \item an automorphism $\alpha$ with a special element $a$ of $X$, called its \emph{multiplier}, such that $\alpha$ sends $a$ to $a$ and sends each $b\in \Gamma\setminus \{a\}$ to one of $b$, $ba$, $a^{-1}b$ or $a^{-1}ba$. \end{itemize} The relations between classic Whitehead automorphisms from Definition~2.6 of Day~\cite{Day1} play a limited role in the current paper. We can essentially treat these as a black box.
We also need peak reduction of long-range Whitehead automorphisms, which we likewise can treat as a black box. A generalized Whitehead automorphism $\alpha$ in $\whset{a}$ is \emph{short-range} if $\alpha(b)=b$ for all $b$ not adjacent to $a$ (but $\alpha$ may do anything to $\st(a)$). It is \emph{long-range} if the restriction of $\alpha$ to $\st(a)^{\pm1}$ is a permutation of that set (but $\alpha$ may do anything to $X\setminus\st(a)$). More generally, an automorphism in $\Aut(A_\Gamma)$ is \emph{long-range} if and only if it can be factored as a product of long-range dominated transvections, partial conjugations, inversions and graphic automorphisms. The important fact about long-range automorphisms is: \begin{theorem}[Day, from~\cite{Day1}, Theorem~A]\label{th:longrangepeakreduction} If $W$ is a tuple of conjugacy classes in $A_\Gamma$ and $\alpha\in\Aut(A_\Gamma)$ is a long-range automorphism, then $\alpha$ has a factorization by classic long-range Whitehead automorphisms and permutation automorphisms that is peak reduced with respect to $W$ (that is, $\alpha$ has a factorization by these kinds of automorphisms that satisfies the conclusions of Theorem~\ref{th:fullfeaturedpeakreduction}). \end{theorem} We need the following relations. These are like the Steinberg relations from algebraic K-theory. \begin{lemma}\label{le:steinbergrel} Suppose $a,b\in\Gamma$ with $[a]\neq[b]$, and we have $\alpha\in\whset{a}$, and $\beta\in\whset{b}$. Further suppose $\alpha$ restricts to the identity on $[b]$. Then $\alpha\beta\alpha^{-1}\in\whset{b}$ and $\alpha\cdot \beta\cdot \alpha^{-1}=(\alpha\beta\alpha^{-1})$ is an identity among generalized Whitehead automorphisms if either of the following is true: \begin{itemize} \item $a$ is adjacent to $b$, or \item $\supp(\alpha)\cap\supp(\beta)=\varnothing$ and $\beta$ restricts to the identity on $[a]$. \end{itemize} \end{lemma}
We further note that if $\beta$ restricts to the identity on $[a]$, then in fact $\alpha\beta\alpha^{-1}=\beta$.
\begin{proof}[Proof of Lemma~\ref{le:steinbergrel}] This follows by straightforward computations, which we describe in broad strokes. Let $\gamma$ denote $\alpha\beta\alpha^{-1}$ and suppose that $c$ is in $X$; we need to show that $\gamma(c)$ is in $\genby{[b]}$ if $c\in[b]$, and that $\gamma(c)=u_1cu_2$ with $u_1,u_2\in\genby{[b]}$ if $c\notin[b]$. If $c\in[b]$, then it is clear that $\gamma(c)=\beta(c)$. If $c\in[a]$, then in both cases it is straightforward to show that $\gamma(c)=ucv$ for some $u,v$ in $\genby{[b]}$. The reasons for this are different in the two cases above. Now suppose that $c\notin[a]\cup[b]$. Of course, there are $u_1$ and $u_2$ in $\genby{[a]}$ with $\alpha(c)=u_1cu_2$ and $v_1$ and $v_2$ in $\genby{[b]}$ with $\beta(c)=v_1cv_2$. Then $\alpha\beta\alpha^{-1}$ sends $c$ to \[\alpha\beta\alpha^{-1}(u_1^{-1})v_1u_1cu_2v_2\alpha\beta\alpha^{-1}(u_2^{-1}).\] In the first case, $\alpha\beta\alpha^{-1}(u_1)$ differs from $u_1$ by an element of $\genby{[b]}$ and $u_1$ and $v_1$ commute, so the result follows. In the second case, either $u_i$ or $v_i$ is trivial for $i=1,2$ and the result follows. \end{proof}
\subsection{The stabilizer presentation complex}\label{ss:stabpres} Let $W$ be an $M$--tuple of cyclic words in $A_\Gamma$ that is minimal length in its automorphism orbit. To prove Theorem~\ref{th:raagstabpres}, we build a finite cellular $2$--complex $Z$ whose fundamental group is the stabilizer $\Aut(A_\Gamma)_W$. The $1$--skeleton $Z^1$ is like the graph $\Delta$ defined earlier, but with some extra edges. In order to define a map $\pi_1(Z,W)\to \Aut(A_\Gamma)$, we give $Z^1$ the structure of a labeled multigraph. \begin{itemize} \item The vertices $Z^0$ are the $M$--tuples of cyclic words in $A_\Gamma$ of the same length as $W$ and in the same orbit. \item For each pair of vertices (not necessarily distinct) $W_1$ and $W_2$ and each classic Whitehead automorphism $\alpha$, we add an edge from $W_1$ to $W_2$ labeled by $\alpha$ if $\alpha\cdot W_1=W_2$. This includes the cases where $\alpha$ is a permutation automorphism. \item For each pair of distinct vertices $W_1$ and $W_2$, each generator $a\in X$ and each subset $S\subset (X\setminus\st(a))^{\pm1}$, if there is an element of $\whsetsr{a}{S}$ sending $W_1$ to $W_2$, then we make sure there is an edge from $W_1$ to $W_2$ labeled by some such element. \item For each vertex $W_1$, each generator $a\in X$ and each subset $S\subset(X\setminus\st(a))^{\pm1}$, the labels on edges from $W_1$ to itself must include a generating set from a presentation for the stabilizer $(\whsetsr{a}{S})_{W_1}$. \end{itemize} Like $\Delta$, the graph $Z^1$ can be effectively constructed using Propositions~\ref{pr:insamewhorbitsupportrestricted} and~\ref{pr:whstabsupportrestricted}. Instead of checking whether a tuple has the same length as $W$ to decide whether to use it as a vertex, it may be more efficient to construct $\Delta$ as above, discard other connected components, and then add extra edges to form $Z^1$.
Since we only consider vertices in the same orbit as $W$, $Z^1$ is automatically connected and each vertex $W_1$ of $Z$ is minimal length in its orbit.
Next we define the several situations where we add $2$--cells to $Z$. When we say a $2$--cell ``reads off" a word starting at a given vertex, we mean that we glue in the $2$--cell so that its boundary follows the path whose edge labels form that word. \begin{itemize} \item[(C1)] For each vertex $W_1$, each generator $a\in X$ and each subset $S\subset(X\setminus\st(a))^{\pm1}$, the self-edges at $W_1$ labeled by elements of $\whsetsr{a}{S}$ give a generating set for $(\whsetsr{a}{S})_{W_1}$ by construction. We add $2$--cells reading off the relations between these elements, and we add enough $2$--cells so that the subcomplex of $Z$ spanned by these edges forms a presentation complex for $(\whsetsr{a}{S})_{W_1}$. This is possible by Proposition~\ref{pr:whstabsupportrestricted}. \item[(C2)] Suppose there is an edge starting at the vertex $W_1$ with label $\alpha$, where $\alpha$ is a long-range generalized Whitehead automorphism but not a classic Whitehead automorphism. We find a path from $W_1$ to $\alpha\cdot W_1$ with label sequence $\gamma_1$--$\gamma_2$--$\dotsm$--$\gamma_k$, where each $\gamma_i$ is a classic long-range Whitehead automorphism, and glue in a $2$--cell reading off the difference between these two paths. This is possible by Theorem~\ref{th:longrangepeakreduction}: we peak-reduce $\alpha$ with respect to $W_1$ to get the factorization $\alpha=\gamma_k\dotsm\gamma_1$; since the factorization is peak reduced and $W_1$ is minimal length, each intermediate image $\gamma_i\dotsm\gamma_1\cdot W_1$ is also a vertex of $Z$ and this word defines an edge path. \item[(C3)] Whenever we find an edge loop in $Z$ whose labels read off one of the relations between classic Whitehead automorphisms from Day~\cite[Definition~2.6]{Day1}, we add a $2$--cell bounding this edge loop. These relations fall into ten classes, are easily recognizable, and each such relation has length at most five. 
\item[(C4)] Suppose $W_1$ is in $Z^0$, $\alpha$ is a generalized Whitehead automorphism labeling an edge starting at $W_1$, and $\beta$ is an inner automorphism that is also a classic Whitehead automorphism. Then there is a factorization $\alpha\beta\alpha^{-1}=\gamma_k\dotsm\gamma_1$, where the $\gamma_i$ are also inner classic Whitehead automorphisms. The inner classic Whitehead automorphisms label loops in $Z^1$ and we glue in a $2$--cell reading off the difference between these two factorizations starting at $\alpha\cdot W_1$. We repeat this for each such $W_1$, $\alpha$ and $\beta$. \item[(C5)] Suppose $W_1$ is in $Z^0$ and there is a closed edge path $p$ starting at $W_1$ whose edge labels $\alpha,\beta,\gamma$ are in $\whset{a}$ for some $a\in X$. Then $\gamma\beta\alpha\in(\whset{a})_{W_1}$. Since the labels on the loops at $W_1$ include generators for $(\whset{a})_{W_1}$, we know that there is an edge path $w$ consisting of loops at $W_1$ whose composition represents the same automorphism as $\gamma\beta\alpha$. Then we add a $2$--cell to $Z$ whose boundary follows $p$ and then follows $w$ backwards. We add such a cell for each vertex $W_1$ on each such path $p$ involving at least two vertices. \item[(C6)] Suppose $W_1$ is in $Z^0$, $\alpha\in P$ and $\beta\in\whset{b}$ for some $b$, and both $\alpha$ and $\beta$ are edge labels on edges starting at $W_1$. Then since $\alpha$ is a permutation automorphism, $\alpha\beta\cdot W_1$ is also a vertex of $Z$. It is easy to see that the element $\alpha\beta\alpha^{-1}$ is in $\whset{\alpha(b)}$ and sends $\alpha\cdot W_1$ to $\alpha\beta\cdot W_1$. By construction, there is an edge in $Z^1$ labeled by some $\gamma\in\whset{\alpha(b)}$ with $\gamma\alpha\cdot W_1=\alpha\beta\cdot W_1$, and therefore $\alpha\beta^{-1}\alpha^{-1}\gamma$ is in $(\whset{\alpha(b)})_{\alpha\cdot W_1}$. 
Since the loop edge labels at $\alpha\cdot W_1$ include a generating set for $(\whset{\alpha(b)})_{\alpha\cdot W_1}$, we have a path $w$ in these loops where the composition of edge labels represents $\alpha\beta^{-1}\alpha^{-1}\gamma$. Then we add a $2$--cell to $Z$ whose boundary, starting at $\alpha\cdot W_1$, follows $\alpha^{-1}$ then $\beta$ then $\alpha$ then $\gamma^{-1}$, and then $w$. We repeat this for each vertex $W_1$ and each such pair $\alpha$ and $\beta$. \item[(C7)] Suppose $W_1$ is in $Z^0$ and $a,b\in X$ and $\alpha\in\whset{a}$ and $\beta\in\whset{b}$ are edge labels on edges starting at $W_1$
and $\alpha$ and $\beta$ satisfy the hypotheses of Lemma~\ref{le:steinbergrel} (so $\alpha|_{[b]}$ is the identity, and either $a$ is adjacent to $b$, or $\beta|_{[a]}$ is also the identity and $\supp(\alpha)\cap\supp(\beta)=\varnothing$). Then Proposition~\ref{pr:steinberglengthchange} below implies that $\alpha\beta\cdot W_1$ is the same length as $W_1$ and therefore is a vertex in $Z^0$. So by construction, there is an automorphism $\gamma\in\whset{a}$ labeling an edge from $\beta\cdot W_1$ to $\alpha\beta\cdot W_1$ ($\alpha$ is such an automorphism, but there is no guarantee that our construction of $Z^1$ found this particular automorphism.) Further, $\alpha\beta\alpha^{-1}$ is in $\whset{b}$ by Lemma~\ref{le:steinbergrel}, and $\alpha\beta\alpha^{-1}$ sends $\alpha\cdot W_1$ to $\alpha\beta\cdot W_1$. By construction of $Z^1$, there is $\delta\in\whset{b}$ sending $\alpha\cdot W_1$ to $\alpha\beta\cdot W_1$, and there is an edge in $Z^1$ labeled by $\delta$ from $\alpha\cdot W_1$ to $\alpha\beta\cdot W_1$. We note that $\alpha\gamma^{-1}$ fixes $\alpha\beta\cdot W_1$, and therefore there is a path $w_1$ in the edge loops at $\alpha\beta\cdot W_1$ where the composition of the edge labels represents $\alpha\gamma^{-1}$. Similarly, $\delta\alpha\beta^{-1}\alpha^{-1}$ fixes $\alpha\beta\cdot W_1$ and there is a path $w_2$ in the edge loops at $\alpha\beta\cdot W_1$ where the composition of these edge loops represents $\delta\alpha\beta^{-1}\alpha^{-1}$. We glue a $2$--cell into $Z$ whose boundary, starting at $W_1$, follows $\beta$, then $\gamma$, then $w_1$, then $w_2$, then $\delta^{-1}$, and finally $\alpha^{-1}$. We repeat this for each vertex $W_1$ and each pair $\alpha,\beta$ at $W_1$ satisfying the hypotheses of Lemma~\ref{le:steinbergrel}. \end{itemize} This completes the construction of $Z$. 
We note that in principle, $Z$ can be effectively constructed from $\Gamma$, since in each of the cases (C1) through (C7), there are only finitely many cases in which we may have to add a $2$--cell.
\begin{lemma}\label{le:surjhomZ} Composition of edge labels defines a surjective homomorphism $\pi_1(Z,W)\to \Aut(A_\Gamma)_W$. \end{lemma} \begin{proof} The proof of Lemma~\ref{le:composedelta} goes through with $\Delta$ replaced by $Z^1$, since the extra edges in $Z^1$ still indicate the action by their labels. Then we have a well-defined homomorphism $\pi_1(Z^1,W)\to \Aut(A_\Gamma)_W$ for the same reasons as in the proof of Proposition~\ref{pr:stabfg}. By Lemma~\ref{le:peakredongraph} (with $W_0=W'=W$), this homomorphism surjects onto $\Aut(A_\Gamma)_W$. By the Seifert--Van Kampen theorem, the kernel of the natural map $\pi_1(Z^1,W)\to\pi_1(Z,W)$ is normally generated by the boundary loops of the $2$--cells. By construction, each of these boundary loops maps to the trivial automorphism. Then the homomorphism descends to a homomorphism $\pi_1(Z,W)\to\Aut(A_\Gamma)_W$, which is necessarily surjective. \end{proof}
The following proposition is the key to Theorem~\ref{th:raagstabpres}. \begin{proposition}\label{pr:homotopeinZ} Suppose $p$ is an edge loop in $Z$ based at $W$ that maps to the trivial automorphism. Then $p$ can be homotoped relative to $W$ to an edge loop whose edge labels consist entirely of permutation automorphisms and inner automorphisms. \end{proposition} Since the proof of Proposition~\ref{pr:homotopeinZ} uses the structure of the proof of Theorem~\ref{th:fullfeaturedpeakreduction}, we postpone it to Section~\ref{se:peakreduction}. This statement is all we need to prove the finite presentation result.
\begin{proof}[Proof of Theorem~\ref{th:raagstabpres}] Suppose $W$ is a tuple of conjugacy classes in $A_\Gamma$. First we find a minimal-length representative of the orbit of $W$ using Proposition~\ref{pr:insamewhorbit} and Lemma~\ref{le:checkminlength}. Of course, replacing $W$ with an image of itself under an automorphism will not change the isomorphism type of $\Aut(A_\Gamma)_W$---it will only replace it with a corresponding conjugate of itself in $\Aut(A_\Gamma)$. So we replace $W$ with a minimal representative of its orbit.
We construct the complex $Z$ with respect to $W$ as described above, and consider the map $\pi_1(Z,W)\to \Aut(A_\Gamma)_W$ from Lemma~\ref{le:surjhomZ}. If this map is an isomorphism, then the Seifert--Van Kampen theorem implies that the stabilizer is finitely presented, since $Z$ is a finite complex. To prove the theorem, it is enough to show that the map is injective, since it is already surjective by Lemma~\ref{le:surjhomZ}.
To show injectivity, we assume we have an edge loop $p$ based at $W$, such that the composition of the edge labels of $p$ yields the trivial automorphism. By Proposition~\ref{pr:homotopeinZ}, we may assume we have homotoped $p$ to an edge loop whose edge labels are permutation automorphisms and inner automorphisms. We use the $2$--cells of type (C6) to slide these inner automorphisms past the permutation automorphisms. Then the inner automorphisms label loops at the base vertex $W$. The multiplication table of the group $P$ is included in the relations that the (C3) cells bound, so we can eliminate all the permutation automorphisms from $p$ by homotoping across these cells. Then $p$ reads off a product of inner automorphisms representing the trivial automorphism. We can rewrite any inner automorphism in $\Omega$ as a product of inner automorphisms that are also classic Whitehead automorphisms by homotoping across (C1) cells. The group of inner automorphisms of $A_\Gamma$ is isomorphic to another right-angled Artin group ($A_{\Gamma'}$, where $\Gamma'$ is $\Gamma$ with all the vertices representing central generators deleted). Further, this isomorphism carries the inner classic Whitehead automorphisms to the standard generating set of the right-angled Artin group. So any word in the inner classic Whitehead automorphisms that represents the trivial automorphism can be eliminated by applying commutation relations. These commutation relations are given by $2$--cells in $Z$ (redundantly as (C3) or (C7) cells), so we can homotope $p$ to the trivial edge path at $W$. \end{proof}
\section{Orbits of matrices} \label{se:linearproblems} In this section we prove Propositions~\ref{pr:matrixorbitalgorithm} and~\ref{pr:matrixstabpres}. \subsection{Partly rational linear problems} We fix $n\geq 1$ and $k\geq 0$ and consider block-upper triangular matrices of the form \[ \left( \begin{array}{cc} A & B \\ O & I \end{array}\right), \] with $A$ in $\GL(n,\Z)$ and $B$ in $M_{n,k}(\Q)$. This is the semidirect product $\GL(n,\Z)\ltimes M_{n,k}(\Q)$, where $\GL(n,\Z)$ acts on $M_{n,k}(\Q)$ on the left by multiplication. We will vary the kinds of entries we wish to consider in the upper-right block, so we let $G_{\Q}$ denote $\GL(n,\Z)\ltimes M_{n,k}(\Q)$, and for a positive integer $d$, we let $G_d$ denote $\GL(n,\Z)\ltimes M_{n,k}(\frac{1}{d}\Z)$. We deliberately restrict the coefficients in the upper-left block to $\Z$, so that $G_1$ will always be a finite-index subgroup of $G_d$ (we discuss this more below). A key tool in our discussion will be the following modified version of the Hermite normal form for integer row reduction. For more on the Hermite normal form, see Cohen~\cite{Cohen}, Section~2.4.2. Note that Cohen describes Hermite normal form for column reduction, whereas we use Hermite normal form for row reduction.
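For reference, the group law in this block form is
\[ \left( \begin{array}{cc} A & B \\ O & I \end{array}\right) \left( \begin{array}{cc} A' & B' \\ O & I \end{array}\right) = \left( \begin{array}{cc} AA' & AB'+B \\ O & I \end{array}\right), \]
so the upper-left blocks multiply in $\GL(n,\Z)$ while the upper-right blocks combine through the left multiplication action of $\GL(n,\Z)$ on $M_{n,k}(\Q)$, as the semidirect product description indicates.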
\begin{definition} Suppose $A$ is a matrix in $M_{n+k,m}(\Q)$. Roughly speaking, $A$ is in \emph{$G_{\Q}$--normal form} if \begin{itemize} \item $\Q$-linear combinations of rows $n+1$ through $n+k$ have been added to rows $1$ through $n$ to reduce them as much as possible, and \item there is a multiple of the block of rows $1$ through $n$ that is a matrix in Hermite normal form for integer row reduction. \end{itemize} Precisely, $A=(a_{ij})$ is in $G_{\Q}$--normal form if \begin{itemize} \item for each $j$, $1\leq j\leq m$, if there is a linear combination of rows $n+1$ through $n+k$ that is nonzero in column $j$ but is zero in all previous columns, then every entry in column $j$ from row $1$ through row $n$ is zero, \item there is an increasing sequence $p_1,\dotsc,p_l$ of pivot column positions, for some $l$ with $0\leq l\leq n$, such that for each $i$ from $1$ through $l$: \begin{itemize} \item the entry $a_{i,p_i}$ is positive, \item the entries $a_{i,1},\dotsc,a_{i,p_i-1}$ in row $i$ preceding column $p_i$ are all $0$, \item for each $i'$, $i'=1,\dotsc, i-1$, the entry $a_{i',p_i}$ satisfies $0\leq a_{i',p_i} < a_{i,p_i}$ (the entries above the pivot position are nonnegative and less than the pivot), and \end{itemize} \item rows from $l+1$ through $n$ contain only zero entries. \end{itemize} \end{definition}
We want to show that every matrix is equivalent to a unique one in this form, and to understand the stabilizer of a matrix in this form. First we prove uniqueness. \begin{lemma}\label{le:normalformunique} Suppose $A$ and $B$ are matrices in $M_{n+k,m}(\Q)$ in $G_{\Q}$--normal form and $Q$ is in $G_{\Q}$ with $B=QA$. Let $d$ be the smallest positive integer with $Q\in G_d$. Then $Q=Q_1Q_2$ with $Q_2\in \{I\}\ltimes M_{n,k}(\frac{1}{d}\Z)$ and $Q_1\in\GL(n,\Z)\ltimes \{O\}$, such that $Q_2A=A$ and such that the first $l$ columns of $Q_1$ are the same as those of the identity matrix, where $l$ is the number of pivots in $A$. In particular, $A=B$. \end{lemma}
\begin{proof} First of all, since $G_{\Q}$ cannot alter rows $n+1$ through $n+k$ by multiplication on the left, we see that the bottom $k$ rows of $B$ and $A$ must be identical. In particular, for each position $j$, if there is a linear combination of the bottom $k$ rows in $A$ that is trivial in columns $1$ through $j-1$ but nontrivial in column $j$ then there is one for $B$ as well, and vice versa. Then the set of columns that are forced to be trivial by the first condition in the definition of $G_{\Q}$--normal form is the same in both $A$ and $B$. The matrix $Q$ can be factored as $Q_1Q_2$, where $Q_2\in \{I\}\ltimes M_{n,k}(\frac{1}{d}\Z)$ and $Q_1\in\GL(n,\Z)\ltimes \{O\}$, because $G_d$ is an internal semidirect product of these two subgroups. Then $Q_2$ fixes $A$, because $Q_2$ can only change an entry in $A$ that is forced to be zero by the definition, and $Q_1$ cannot change the set of columns that only have zero entries in their first $n$ rows.
So we have $B=Q_1A$, with $Q_1\in\GL(n,\Z)\ltimes\{O\}$. The uniqueness of Hermite normal form implies that $A=B$, since $Q_1$ only changes the top $n$ rows of $A$ and the top $n\times m$ blocks of $A$ and $B$ are already rational multiples of matrices in Hermite normal form. However, our statement about the form of $Q_1$ is a little stronger, so we prove the lemma as stated.
Let $p_1,\dotsc,p_l$ be the pivots of $A$. We induct on $i$, for $0\leq i\leq l$, with the hypothesis that columns $1$ through $p_{i+1}-1$ of $A$ and $B$ match and columns $1$ through $i$ of $Q_1$ are the same as those of the identity matrix (when $i=l$, we read $p_{l+1}-1$ as $m$). The hypothesis is true for $i=0$ since columns $1$ through $p_1-1$ of $A$ are trivial and the equation $B=Q_1A$ is only possible if the corresponding columns of $B$ are also trivial. Now we fix an $i$ with $1\leq i\leq l$ and consider column $p_i$ of $A$ and $B$. Certainly entries $i+1$ through $n$ of column $p_i$ of $B$ are zero, since $B$ is in $G_{\Q}$--normal form and column $p_i$ is left of the $(i+1)$st pivot column of $B$ (if it exists). Consider $i'$ with $i<i'\leq n$. By the inductive hypothesis, the first $i-1$ entries of the $i'$th row in $Q_1$ are zero. We dot this row $i'$ with the column $p_i$ in $A$ to get an entry in $B$ that we know to be zero. Since entries $i+1$ through $n$ of column $p_i$ in $A$ are zero, the only possibly nonzero term in this dot product is the product of the $i',i$ entry of $Q_1$ with the $i,p_i$ entry of $A$. So entry $i',i$ of $Q_1$ is zero, and varying $i'$, we see that every below-diagonal entry in $Q_1$ in column $i$ is zero. Next we note that if the diagonal entry $i,i$ of $Q_1$ were zero, then $Q_1$ would have determinant zero. So this entry is nonzero, and therefore position $i,p_i$ in $B$ has a nonzero entry and is the pivot there. Since the pivot entries are positive, the fact that $Q_1$ has determinant $\pm1$ implies that the $i,i$ entry of $Q_1$ is $1$. Then position $i,p_i$ matches in $A$ and $B$. If any above-diagonal entry in column $i$ of $Q_1$ is nonzero, then an entry in column $p_i$ of $B$ above the pivot will not be reduced modulo the pivot entry. Since $B$ is in $G_{\Q}$--normal form, its entries above the pivot are reduced, so every above-diagonal entry in column $i$ of $Q_1$ is zero and column $i$ of $Q_1$ matches the corresponding column of the identity matrix. This then implies that all the columns before $p_{i+1}$ of $A$ and $B$ match, since these columns have only zero entries in positions $i+1$ through $n$.
The induction continues until we reach the last pivot position of $A$, showing that the first $l$ columns of $Q_1$ match those of the identity matrix. Since rows $l+1$ through $n$ of $A$ are zero rows, this is enough to deduce that $B=A$. \end{proof}
Now we show existence of matrices in normal form. \begin{proposition}\label{pr:normalformexists} Every matrix $A$ in $M_{n+k,m}(\Q)$ is associated to a matrix $B$ in $M_{n+k,m}(\Q)$ in $G_{\Q}$--normal form, with $A=QB$ for some $Q\in G_{\Q}$. The matrix $B$ is unique. \end{proposition}
\begin{proof} We prove existence by supplying an algorithm; of course uniqueness is then the result of Lemma~\ref{le:normalformunique}. The algorithm is a row reduction algorithm. Multiplication on the left by elements of $G_{\Q}$ allows us to replace any of the top $n$ rows of $A$ by itself plus a rational linear combination of the bottom $k$ rows, or to replace any row in the top $n$ by itself plus an integer linear combination of the other top $n$ rows, or to permute the top $n$ rows, or to multiply any of the top $n$ rows by $-1$.
The first part of the algorithm is to use the bottom $k$ rows of $A$ to simplify $A$ as much as possible; this is step 1 below. The second part is to perform integer row reduction and reduce the entries above the pivots as much as possible.
\textbf{Step 1:} We start by setting $j=1$; $j$ is the position of the column we are trying to simplify. We consider the map $\Q^k\to \Q^{j-1}$ that sends a $k$-tuple of coefficients to the corresponding linear combination of the bottom $k$ rows of $A$, restricted to their first $j-1$ columns. We find generators for the kernel of this map. Each generator gives us a linear combination of the bottom $k$ rows of $A$ that is zero in its first $j-1$ entries; if some generator's linear combination is nonzero in the $j$th column, we add rational multiples of this linear combination to the top $n$ rows of $A$ to zero out their $j$th column entries. Of course this leaves the previous columns unaffected. (In the case that $j=1$, we simply check whether some row in the bottom $k$ rows has a nonzero entry in its first column, and if so, we use its multiples to zero out the top $n$ entries of the first column.) So we replace $A$ with an equivalent matrix with the first $n$ entries of column $j$ zeroed out if possible. We then replace $j$ with $j+1$ and repeat this step. We do this until we have tried it for all columns of $A$.
After completing the previous step, we perform the procedure to turn the top $n\times m$ block of $A$ into a rational multiple of a matrix in Hermite normal form for integer row reduction. Although this is standard, we include it here for completeness.
\textbf{Step 2:} If the top $n\times m$ block of $A$ is now the zero matrix, then $A$ is in $G_{\Q}$--normal form and we are done. Otherwise we start the second part by setting $j$ to be the first column with a nonzero entry in its first $n$ rows. We initialize our sequence of pivots by setting $l=0$, so that there are no pivots in the pivot sequence $p_1,\dotsc,p_l$.
\textbf{Step 3:} By construction, $j>l$ and all entries in row $l+1$ through row $n$ in columns $1$ through $j-1$ are zero. We look at the entries of column $j$ from row $l+1$ through row $n$, and choose a row $i$ whose column-$j$ entry has minimal nonzero absolute value among these (if all these entries were zero, we would increment $j$ and loop, but the choice of $j$ should prevent this). We add integer multiples of row $i$ to the other rows from $l+1$ through $n$ in order to diminish the sum of the absolute values of the entries in column $j$. We continue this until either another row's $j$th entry becomes smaller in absolute value than that of row $i$, or else row $i$ holds the only nonzero entry in column $j$ among rows $l+1$ through $n$. If another row's $j$th entry becomes smaller in absolute value than that of row $i$, then we replace $i$ with the position of that row and repeat this step. If row $i$ holds the only nonzero entry in column $j$ among rows $l+1$ through $n$, then we proceed to the next step.
\textbf{Step 4:} The entry in row $i$ is the unique nonzero entry in column $j$ among rows $l+1$ through $n$. We permute rows $l+1$ through $n$ of $A$ so that this nonzero entry is now in row $l+1$. We replace $l$ with $l+1$ and set the new pivot position $p_l$ to be $j$. We replace row $l$ with its multiple by $\pm1$ to ensure that the pivot entry in position $l,p_l$ is positive. We then add integer multiples of row $l$ to rows $1$ through $l-1$ to make the entries above the pivot nonnegative and strictly less than the pivot entry. Since all entries to the left of the pivot entry are zero, this does not affect the previous columns of the matrix. We then set $j$ to be the next column with a nonzero entry in rows $l+1$ through $n$ and return to step~3. If no such column exists, we are done. This finishes the algorithm.
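For illustration, the integer part of this procedure (Steps 2 through 4) can be sketched in Python. This is our own reimplementation under the stated conventions, not code from the text, and it omits Step 1's rational reduction using the bottom $k$ rows:

```python
def hermite_row_reduce(A):
    """Row-style Hermite reduction of an integer matrix A (list of rows),
    following Steps 2-4: pivot entries positive, each entry above a pivot
    reduced into [0, pivot), and zero rows left at the bottom."""
    A = [row[:] for row in A]        # work on a copy
    n = len(A)
    m = len(A[0]) if n else 0
    l = 0                            # number of pivots found so far
    for j in range(m):
        # Step 3: clear column j among rows l..n-1 down to one nonzero entry.
        while True:
            rows = [r for r in range(l, n) if A[r][j] != 0]
            if len(rows) <= 1:
                break
            i = min(rows, key=lambda r: abs(A[r][j]))
            for r in rows:
                if r != i:
                    q = A[r][j] // A[i][j]
                    A[r] = [x - q * y for x, y in zip(A[r], A[i])]
        if not rows:
            continue                 # no pivot in this column
        # Step 4: promote the surviving row, fix its sign, reduce above it.
        A[l], A[rows[0]] = A[rows[0]], A[l]
        if A[l][j] < 0:
            A[l] = [-x for x in A[l]]
        for r in range(l):           # entries above the pivot into [0, pivot)
            q = A[r][j] // A[l][j]
            A[r] = [x - q * y for x, y in zip(A[r], A[l])]
        l += 1
    return A
```

The function returns a reduced copy of its input; as in the text, the loop in the Step 3 block terminates because each pass strictly decreases the minimal nonzero absolute value in the column or the number of nonzero entries below the pivots.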
It is clear that this procedure terminates. The loop in step 1 repeats once for each column. After step 1, the least common denominator of the entries of $A$ does not change; call this number $d$. The loop in step 3 always terminates because we decrease the sum of the absolute values of the entries of column $j$ of $A$ by at least $1/d$ with each iteration. The loop going back from step 4 to step 3 can be repeated at most $m$ times since it requires a new column each time.
It is also clear that the output of this procedure is a matrix in $G_{\Q}$--normal form. The matrix coming out of step 1 satisfies the first property in the definition: if there were a way to use the bottom $k$ rows to zero out the top $n$ in the $j$th column without disturbing the previous columns, we would have used it already. This is not disturbed by the remainder of the algorithm, which never changes a column with only zeros in its first $n$ rows to one with a nonzero entry there. The output of the algorithm has its list of pivot columns, and by construction, the pivots satisfy the conditions in the definition. Of course, we only stop producing pivots when all the remaining rows in the first $n$ are zero rows, so we satisfy the last condition in the definition.
Let $A$ denote the matrix input to the procedure and $B$ the output, which is in $G_{\Q}$--normal form. Keeping track of the row moves performed in this algorithm and composing them gives us a matrix $Q$ in $G_{\Q}$ with $B=QA$. \end{proof}
Now we turn our attention to stabilizers in $G_d$. We need the following. \begin{lemma}\label{le:sdpres}
Suppose the group $G$ acts on the group $H$, $G$ has the presentation $\genby{S_G | R_G}$, $H$ has the presentation $\genby{S_H|R_H}$, and the set $R_C$ consists of words $gh^{-1}g^{-1}w_{g,h}$ for all $g\in S_G$ and $h\in S_H$, where $w_{g,h}$ is a word in $S_H$ representing $ghg^{-1}$. Then
\[\genby{S_G\cup S_H | R_G\cup R_H\cup R_C}\] is a presentation for the semidirect product $G\ltimes H$. \end{lemma} The proof is left as an exercise for the reader.
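As a small example of the lemma, take $G=\Z/2\Z=\genby{g\mid g^2}$ acting on $H=\Z=\genby{h\mid\;}$ by inversion, so that $ghg^{-1}=h^{-1}$ and we may take $w_{g,h}=h^{-1}$. The lemma then produces the presentation \[\genby{g,h\mid g^2,\ gh^{-1}g^{-1}h^{-1}}\] for $\Z/2\Z\ltimes\Z$, the infinite dihedral group.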
\begin{proposition}\label{pr:normalformQstabpres} Suppose $d$ is a positive integer and $A$ is a matrix in $M_{n+k,m}(\frac{1}{d}\Z)$ in $G_{\Q}$--normal form. Then there is an effective procedure to give a finite presentation for the stabilizer $(G_d)_A$. \end{proposition}
\begin{proof} First we pin down the stabilizer; then we will find a presentation for it. Let $l$ be the number of pivot rows of $A$. Suppose $Q\in G_d$ and $A=QA$. By Lemma~\ref{le:normalformunique}, we have $Q=Q_1Q_2$ where $Q_2\in \{I\}\ltimes M_{n,k}(\frac{1}{d}\Z)$, $Q_1\in\GL(n,\Z)\ltimes \{O\}$, $Q_2A=A$ and the first $l$ columns of $Q_1$ are the same as those of the identity matrix. This tells us that the nontrivial block of $Q_1$ is in $M_{l,n-l}(\Z)\rtimes \GL(n-l,\Z)$, in other words the nontrivial block of $Q_1$ is itself a block-upper-triangular matrix of the form \[ \left( \begin{array}{cc} I & B \\ O & C \end{array} \right) \] where $B\in M_{l,n-l}(\Z)$, $C\in \GL(n-l,\Z)$, $I$ is the $l\times l$ identity matrix and $O$ is the $(n-l)\times l$ zero matrix. The configuration of blocks implies that $\GL(n-l,\Z)$ acts on $M_{l,n-l}(\Z)$ on the right, as the semidirect product notation reflects.
The matrix $Q_2$ is in the stabilizer of $A$ in $\{I\}\ltimes M_{n,k}(\frac{1}{d}\Z)$. Each row of $Q_2$ acts by adding a rational linear combination of rows $n+1$ through $n+k$ of $A$ to some row of $A$ in the top $n$, and being in the stabilizer means that each such rational linear combination has trivial value. So each row of the upper-right $n\times k$ block of $Q_2$ is an element of the kernel of the group homomorphism $(\frac{1}{d}\Z)^k\to \Q^m$ that sends a $k$-tuple of coefficients to its corresponding linear combination of rows $n+1$ through $n+k$ of $A$. Letting $K\subset (\frac{1}{d}\Z)^k$ denote this kernel, we see that the stabilizer of $A$ in $\{I\}\ltimes M_{n,k}(\frac{1}{d}\Z)$ is isomorphic to $K^n$.
This makes it easy to see that the stabilizer of $A$ in $G_d$ is \[\big(M_{l,n-l}(\Z)\rtimes \GL(n-l,\Z)\big)\ltimes K^n,\] since each element of this group stabilizes $A$ and any $Q$ stabilizing $A$ is certainly in this group by the above argument.
Since $K$ is the kernel of a map $(\frac{1}{d}\Z)^k\to \Q^m$, it is a finite-rank free abelian group and we can find a basis for $K$. This in turn yields a basis for $K^n$, which is also free abelian. Likewise $M_{l,n-l}(\Z)$ is a free abelian group with an obvious basis. Of course, this means that $K^n$ and $M_{l,n-l}(\Z)$ have obvious finite presentations, where the generators are the given bases and the relations state that all pairs of basis elements commute. The group $\GL(n-l,\Z)$ has a generating set given by transvections (elementary matrices with a single nonzero off-diagonal entry of $1$) and inversions (matrices sending a single basis element to its inverse and fixing the others). The finite presentation for $\SL(n-l,\Z)$ from Milnor~\cite[Chapter~10]{Milnor} can easily be modified to give a finite presentation for $\GL(n-l,\Z)$. The conjugate of a generator of $M_{l,n-l}(\Z)$ by a generator of $\GL(n-l,\Z)$ can easily be written down as a product of generators of $M_{l,n-l}(\Z)$. We can tabulate this data for all choices of pairs of generators. Using Lemma~\ref{le:sdpres}, this data together with the presentations for $\GL(n-l,\Z)$ and $M_{l,n-l}(\Z)$ can be combined into a finite presentation for $M_{l,n-l}(\Z)\rtimes \GL(n-l,\Z)$. Finally, we can tabulate the action of generators of $M_{l,n-l}(\Z)\rtimes \GL(n-l,\Z)$ on generators of $K^n$ (again in terms of generators of $K^n$). This data, together with the obvious presentation for $K^n$ and the presentation for $M_{l,n-l}(\Z)\rtimes \GL(n-l,\Z)$, can be combined to give a presentation for the stabilizer of $A$ in $G_d$, again using Lemma~\ref{le:sdpres}. \end{proof}
\subsection{Integer linear problems} Let $d$ be a fixed positive integer. To exploit our rational results in the previous section, we use a crossed homomorphism to keep track of the cosets of $G_d$ in $G_1$. Let $\rho\co G_d\to M_{n,k}(\Z/d\Z)$ be the following composition \[G_d = \GL(n,\Z)\ltimes M_{n,k}(\frac{1}{d}\Z)\to M_{n,k}(\frac{1}{d}\Z) \to M_{n,k}(\Z) \to M_{n,k}(\Z/d\Z),\] where the maps are the second coordinate projection from the semidirect product, then multiplication by $d$, then reduction modulo $d$.
\begin{lemma}\label{le:crohomcosets} The map $\rho$ is a crossed homomorphism: for $A,B\in G_d$, we have \[\rho(AB)=A\cdot \rho(B) + \rho(A),\] where $G_d$ acts on $M_{n,k}(\Z/d\Z)$ via the projection $G_d\to \GL(n,\Z)$ and the standard left action of $\GL(n,\Z)$ on $M_{n,k}(\Z/d\Z)$.
The kernel of $\rho$ is $G_1$ and further, the set of preimages of elements of $M_{n,k}(\Z/d\Z)$ under $\rho$ is precisely the set of left cosets of $G_1$ in $G_d$. \end{lemma}
\begin{proof} We consider the definition of the product operation in $\GL(n,\Z)\ltimes M_{n,k}(\frac{1}{d}\Z)$: \[(A,B)\cdot (C,D)=(AC,AD+B).\] Of course this means that the projection $G_d\to M_{n,k}(\frac{1}{d}\Z)$ is a crossed homomorphism with respect to the action via the canonical left action of $\GL(n,\Z)$. Since the map $M_{n,k}(\frac{1}{d}\Z)\to M_{n,k}(\Z/d\Z)$ is equivariant with respect to this action, the map $\rho$ is a crossed homomorphism.
If $A\in G_d$ is in the kernel of $\rho$, this means that the entries in its upper-right block are divisible by $d$ after being multiplied by $d$, in other words that they are integers. So the kernel of $\rho$ is $G_1$. Now suppose $B\in M_{n,k}(\Z/d\Z)$. We can pick representatives for the residue classes of the entries of $B$ to get an element $\tilde B$ in $M_{n,k}(\Z)$ mapping to $B$. Then $\rho$ maps $(I,\frac{1}{d}\tilde B)$ in $G_d$ to $B$. We claim that $\rho^{-1}(B)$ is the coset $(I,\frac{1}{d}\tilde B)\cdot G_1$. If $(C,D)$ is in $G_1$, then \[\rho\big( (I,\tfrac{1}{d}\tilde B)(C,D)\big)=I\cdot \rho((C,D))+\rho\big((I,\tfrac{1}{d}\tilde B)\big)=0+B.\] This implies that the coset is a subset of the preimage. On the other hand, if $\rho((C,D))=B$, then $D-\frac{1}{d}\tilde B$ is in $M_{n,k}(\Z)$, and $(C, D-\frac{1}{d}\tilde B)\in G_1$ with $(C,D)=(I,\frac{1}{d}\tilde B)(C,D-\frac{1}{d}\tilde B)$. In other words, the preimage is a subset of the coset. This shows that every preimage is a coset.
Now suppose that $(A,B)G_1$ is a coset. Let $\overline B=\rho((A,B))$. If $(C,D)\in G_1$, then $\rho((A,B)(C,D))=A\cdot \rho((C,D))+\rho((A,B))=0+\overline B$, so this coset is a subset of this preimage. If $(C,D)\in G_d$ and $\rho((C,D))=\overline B$, then $(C,D)=(A,B)(A^{-1}C, A^{-1}(D-B))$ with $(A^{-1}C,A^{-1}(D-B))\in G_1$, so this preimage is a subset of this coset. So every coset is a preimage. \end{proof}
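The crossed-homomorphism identity is concrete enough to check by direct computation. The following Python sketch (the helper functions and the two sample elements are ours, taking $n=2$, $k=1$, $d=2$) verifies $\rho(gh)=g\cdot\rho(h)+\rho(g)$ for a pair of elements of $G_2$, using exact rational arithmetic:

```python
from fractions import Fraction as F

n, k, d = 2, 1, 2  # G_d = GL(n,Z) semidirect M_{n,k}((1/d)Z)

def matmul(X, Y):
    return [[sum(X[i][l] * Y[l][j] for l in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def compose(g, h):
    # Semidirect product rule: (A,B)(C,D) = (AC, AD + B)
    A, B = g
    C, D = h
    AD = matmul(A, D)
    return (matmul(A, C),
            [[AD[i][j] + B[i][j] for j in range(k)] for i in range(n)])

def rho(g):
    # Multiply the translation block by d, then reduce mod d
    _, B = g
    return [[int(d * B[i][j]) % d for j in range(k)] for i in range(n)]

def act(A, Mbar):
    # Standard left action of GL(n,Z) on M_{n,k}(Z/dZ)
    return [[sum(A[i][l] * Mbar[l][j] for l in range(n)) % d
             for j in range(k)] for i in range(n)]

# Two sample elements of G_2
g = ([[1, 1], [0, 1]], [[F(1, 2)], [F(0)]])
h = ([[0, 1], [1, 0]], [[F(3, 2)], [F(1, 2)]])

lhs = rho(compose(g, h))
rhs = [[(act(g[0], rho(h))[i][j] + rho(g)[i][j]) % d
        for j in range(k)] for i in range(n)]
assert lhs == rhs  # rho(gh) = g . rho(h) + rho(g)
```

The same check passes for any sample pair, since it is just the semidirect product rule read off modulo $d$.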
Now we can prove our proposition on determining orbit membership under the action of $G_1$ on $M_{n+k,m}(\Z)$. \begin{proof}[Proof of Proposition~\ref{pr:matrixorbitalgorithm}] Let $A$ and $B$ be in $M_{n+k,m}(\Z)$; we wish to find a matrix $D\in G_1$ with $DA=B$, or show that no such matrix exists. We start by computing the $G_{\Q}$--normal forms of $A$ and $B$, using the algorithm in Proposition~\ref{pr:normalformexists}. If these normal forms are different, then $A$ and $B$ are in distinct $G_{\Q}$--orbits (by uniqueness of the normal form, Lemma~\ref{le:normalformunique}). Certainly if $A$ and $B$ are in different $G_{\Q}$--orbits, then they are also in different $G_1$--orbits. So we suppose that $A$ and $B$ have the same $G_{\Q}$--normal form $N\in M_{n+k,m}(\Q)$. Suppose $Q,R\in G_{\Q}$ with $N=QA$ and $N=RB$. Let $d$ be the least common denominator of the entries of $N$, $Q$ and $R$. Let $S$ be the generating set from the presentation for the stabilizer of $N$ in $G_d$ from Proposition~\ref{pr:normalformQstabpres}. Let $\Delta$ be the Schreier graph of $G_1$ in $G_d$ with respect to $S$. \begin{claim*} $A$ and $B$ are in the same $G_1$--orbit if and only if the vertices $QG_1$ and $RG_1$ are in the same connected component of $\Delta$. \end{claim*}
First we suppose that $QG_1$ and $RG_1$ are in the same connected component of $\Delta$. Let $C\in G_d$ be the composition of edge labels on an edge path from $QG_1$ to $RG_1$. Then $CQG_1=RG_1$ by Lemma~\ref{le:travelSchreier}. In particular, $R^{-1}CQ$ is in $G_1$, and $R^{-1}CQ$ sends $A$ to $B$.
Conversely, we suppose that there is $D$ in $G_1$ with $DA=B$. Then $RDQ^{-1}$ fixes $N$. By Proposition~\ref{pr:normalformQstabpres}, $RDQ^{-1}$ is a word in $S$. Starting at $QG_1$, we form an edge path in $\Delta$ by following the edges labeled by this expression for $RDQ^{-1}$. This is possible and unambiguous since $\Delta$, being a Schreier graph, has exactly one edge with each label entering and leaving each vertex. Then by Lemma~\ref{le:travelSchreier}, the terminus of the path is $RDQ^{-1}QG_1=RG_1$. This proves the claim.
The algorithm is now clear. First we compute the $G_{\Q}$--normal forms of $A$ and $B$ and report that $A$ and $B$ are in different $G_1$--orbits if these $G_{\Q}$--normal forms differ. If the normal forms are the same matrix $N$, we find matrices $Q$ and $R$ with $QA=N$ and $RB=N$ and find the least common denominator $d$ of the entries of $Q$, $R$ and $N$. Then we find a generating set $S$ for the stabilizer of $N$ in $G_d$ and construct the Schreier graph $\Delta$ of $G_1$ in $G_d$ with respect to $S$. This is possible since $\Delta$ is finite by Lemma~\ref{le:crohomcosets}. The crossed homomorphism $\rho$ gives a convenient way to construct $\Delta$: $CDG_1=EG_1$ if and only if $\rho(CD)=\rho(E)$, if and only if $C\cdot\rho(D)+\rho(C)=\rho(E)$, for any $D,E\in G_d$ and $C\in S$. Next in the algorithm, we check whether $QG_1$ and $RG_1$ are in the same connected component. If not, we report that $A$ and $B$ are in different $G_1$--orbits. If they are in the same connected component, we take $C$ to be the composition of edge labels along a path from $QG_1$ to $RG_1$, and report that $R^{-1}CQ$ is a matrix in $G_1$ sending $A$ to $B$. \end{proof}
Now we find a presentation for the stabilizer in $G_1$ of a matrix $A$ in $M_{n+k,m}(\Z)$. \begin{proof}[Proof of Proposition~\ref{pr:matrixstabpres}]
Let $N$ be the $G_{\Q}$--normal form of $A$ and let $Q\in G_{\Q}$ be an element with $N=QA$, as found using Proposition~\ref{pr:normalformexists}. Let $d$ be the least common denominator of the entries of $N$ and $Q$. Let $S$ be a generating set for the stabilizer of $N$ in $G_d$, as given by Proposition~\ref{pr:normalformQstabpres}. Let $\Delta$ be the Schreier graph of $G_1$ in $G_d$ with respect to $S$. Let $S'$ be $\{ Q^{-1}CQ \mid C\in S\}$; note that $S'\subset (G_d)_A$. Finally, let $\Delta'$ be the Schreier graph of $(G_1)_A$ in $(G_d)_A$ with respect to $S'$. The proof of the proposition will follow from the claim:
\begin{claim*} As a directed multigraph, $\Delta'$ is isomorphic to the connected component of $Q^{-1}G_1$ in $\Delta$, by an isomorphism that sends edge labels in $S'$ to their conjugates by $Q$ in $S$. \end{claim*}
To prove the claim, we start by defining a map on vertices from $\Delta'$ to $\Delta$. Let $B\in (G_d)_A$. We send the vertex $B\cdot (G_1)_A$ of $\Delta'$ to the vertex $Q^{-1}BG_1$ of $\Delta$. This map is well defined: if $C$ is also in $(G_d)_A$ with $B\cdot (G_1)_A=C\cdot (G_1)_A$, then $B^{-1}C\in (G_1)_A\subset G_1$ and therefore $Q^{-1}BG_1=Q^{-1}CG_1$. Conversely, the map is injective: given $B,C\in (G_d)_A$ with $Q^{-1}BG_1=Q^{-1}CG_1$, we see that $B^{-1}C\in G_1$; since $(G_1)_A=(G_d)_A\cap G_1$, this means that $B^{-1}C\in (G_1)_A$ and therefore that $B\cdot (G_1)_A=C\cdot(G_1)_A$. Suppose there is an edge labeled by $C$ in $S'$ from $B\cdot (G_1)_A$ to $B'\cdot (G_1)_A$ in $\Delta'$. Then there is an edge labeled by $QCQ^{-1}$ in $S$ from $Q^{-1}BG_1$ to $Q^{-1}B'G_1$. This is immediate from the definition of the Schreier graph. The reverse implication also holds, so the map is an isomorphism of directed multigraphs and respects labels as described.
All that is left in the claim is to show that $\Delta'$ is connected. Suppose $B\cdot (G_1)_A$ is a vertex of $\Delta'$. Of course this means that $B\in (G_d)_A$. Then $QBQ^{-1}\in (G_d)_N$, and by the definition of $S$, $QBQ^{-1}$ can be expressed as a product of elements of $S$. Of course, this means that $B$ can be expressed as a product of elements of $S'$. Since $\Delta'$ is a Schreier graph,
we can trace out a unique edge path starting at $(G_1)_A$ using the labels from the given expression for $B$ as a product of elements of $S'$. By Lemma~\ref{le:travelSchreier}, the terminus of this path is $B\cdot (G_1)_A$. Since $B$ was arbitrary, this means that $\Delta'$ is connected. This proves the claim.
Now we use the claim to finish the proposition. Let $Z$ be the presentation $2$--complex for $(G_d)_N$ using the finite presentation from Proposition~\ref{pr:normalformQstabpres}. Of course, the generating set for this presentation is $S$. The inner automorphism of $G_d$ given by conjugating by $Q^{-1}$ sends $(G_d)_N$ to $(G_d)_A$ and sends $S$ to $S'$. Therefore we can also view $Z$ as a presentation $2$--complex for $(G_d)_A$ with generators $S'$. Let $\widetilde Z$ be the cover of $Z$ with fundamental group $(G_1)_A$. Since $\Delta'$ is a finite graph, $\widetilde Z$ is a finite-sheeted cover and is therefore a finite complex. Then we use the Seifert--Van Kampen theorem to write down a finite presentation for the fundamental group of $\widetilde Z$, which is $(G_1)_A$. Since every step in this construction can be done effectively, this is an algorithm to produce a finite presentation. \end{proof}
\begin{example} We consider a concrete example to illustrate the above algorithms. It also happens that this is an example where the Schreier graph we describe is disconnected, and where a pair of matrices are in the same $G_d$--orbit but in different $G_1$--orbits. Consider the following matrices: \[ A= \left( \begin{array}{c} 1 \\ 0 \\ 2 \end{array} \right) , \quad N= \left( \begin{array}{c} 0 \\ 0 \\ 2 \end{array} \right) ,\quad Q= \left( \begin{array}{ccc} 1 & 0 & -\frac{1}{2} \\ 0 & 1 & 0\\ 0 & 0 & 1 \end{array} \right). \] Then $A,N\in M_{3,1}(\Z)$ and $Q\in G_2=\GL(2,\Z)\ltimes M_{2,1}(\frac{1}{2}\Z)$. We see that $A$ is not in $G_{\Q}$--normal form, but $N$ is, and $N=QA$. The stabilizer of $N$ in $G_2$ is a copy of $\GL(2,\Z)$ generated by $S=\{a,b,c\}$ with \[ a= \left( \begin{array}{ccc} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right), \quad b= \left( \begin{array}{ccc} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right), \quad c= \left( \begin{array}{ccc} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right).\] Using the fact that $\SL(2,\Z)$ is the amalgamated free product $(\Z/4\Z) *_{\Z/2\Z} (\Z/6\Z)$ (see for example, Serre~\cite[page~35]{Serre}) with the elements of order $4$ and $6$ given by $ba^{-1}b$ and $a^{-1}b$ respectively, it is straightforward to derive a presentation for $\GL(2,\Z)=(G_2)_N$ as follows: \[
\genby{ a,b,c | c^2=1, (a^{-1}b)^3=(ba^{-1}b)^2, (a^{-1}b)^6=1, cac=a^{-1}, cbc=b^{-1}}. \]
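These order and conjugation claims can be verified by direct matrix computation. A small Python sketch (helper names ours) checks, on the top-left $2\times 2$ blocks, that $ba^{-1}b$ and $a^{-1}b$ have orders $4$ and $6$ and that the relations involving $c$ hold:

```python
def mul(X, Y):
    # 2x2 integer matrix product
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def power(X, m):
    P = ((1, 0), (0, 1))
    for _ in range(m):
        P = mul(P, X)
    return P

I2 = ((1, 0), (0, 1))
a = ((1, 1), (0, 1)); a_inv = ((1, -1), (0, 1))
b = ((1, 0), (1, 1)); b_inv = ((1, 0), (-1, 1))
c = ((-1, 0), (0, 1))

S = mul(b, mul(a_inv, b))   # ba^{-1}b, claimed order 4
T = mul(a_inv, b)           # a^{-1}b, claimed order 6

assert power(S, 4) == I2 and power(S, 2) != I2
assert power(T, 6) == I2 and power(T, 3) != I2
assert power(T, 3) == power(S, 2)     # (a^{-1}b)^3 = (ba^{-1}b)^2
assert mul(c, c) == I2                # c^2 = 1
assert mul(c, mul(a, c)) == a_inv     # cac = a^{-1}
assert mul(c, mul(b, c)) == b_inv     # cbc = b^{-1}
```

Here $ba^{-1}b$ and $a^{-1}b$ come out to the familiar matrices $\bigl(\begin{smallmatrix}0&-1\\1&0\end{smallmatrix}\bigr)$ and $\bigl(\begin{smallmatrix}0&-1\\1&1\end{smallmatrix}\bigr)$.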
\begin{figure}
\caption{The Schreier graph of $\GL(2,\Z)\ltimes M_{2,1}(\Z)$ in $\GL(2,\Z)\ltimes M_{2,1}(\frac{1}{2}\Z)$ with respect to $\{a,b,c\}$. Each vertex is labeled by the image of its coset under $\rho$. The vertex $Q^{-1}G_1$ is labeled $\binom{1}{0}$. }
\label{fig:Schreiereg}
\end{figure}
As usual, $G_1=\GL(2,\Z)\ltimes M_{2,1}(\Z)$. The Schreier graph $\Delta$ of $G_1$ in $G_2$ with respect to $S$ is displayed in Figure~\ref{fig:Schreiereg}. By inspecting $\Delta$, we learn that the fundamental group of the connected component of $Q^{-1}G_1$, based at $Q^{-1}G_1$, is generated by the following seven elements: \[\{a,c,b^2,bcb^{-1}, ba^2b^{-1}, baca^{-1}b^{-1}, baba^{-1}b^{-1}\}.\] Therefore the stabilizer $(G_1)_A$ is generated by the conjugates of these seven elements by $Q^{-1}$ (note that conjugation by $Q^{-1}$ fixes $a$, but moves $c$): \begin{multline*} \Bigg\{ a, \left( \begin{array}{ccc} -1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right), \left( \begin{array}{ccc} 1 & 0 & 0 \\ 2 & 1 & -1 \\ 0 & 0 & 1 \\ \end{array} \right), \left( \begin{array}{ccc} -1 & 0 & 1 \\ -2 & 1 & 1 \\ 0 & 0 & 1 \\ \end{array} \right), \left( \begin{array}{ccc} -1 & 2 & 1 \\ -2 & 3 & 1 \\ 0 & 0 & 1 \\ \end{array} \right), \\ \left( \begin{array}{ccc} -3 & 2 & 2 \\ -4 & 3 & 2 \\ 0 & 0 & 1 \\ \end{array} \right), \left( \begin{array}{ccc} 3 & -1 & -1 \\ 4 & -1 & -2 \\ 0 & 0 & 1 \\ \end{array} \right) \Bigg\}. \end{multline*}
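A quick sanity check of this computation with exact rational arithmetic: each of the seven loop elements $w$ fixes $N$, and its conjugate $Q^{-1}wQ$ has integer entries (hence lies in $G_1$) and fixes $A$. A Python sketch (helper names ours):

```python
from fractions import Fraction as F

def mul(X, Y):
    return tuple(tuple(sum(X[i][l] * Y[l][j] for l in range(3))
                       for j in range(len(Y[0]))) for i in range(len(X)))

def prod(*Ms):
    out = Ms[0]
    for M in Ms[1:]:
        out = mul(out, M)
    return out

def mat(rows):
    return tuple(tuple(F(x) for x in row) for row in rows)

a = mat([[1, 1, 0], [0, 1, 0], [0, 0, 1]])
b = mat([[1, 0, 0], [1, 1, 0], [0, 0, 1]])
c = mat([[-1, 0, 0], [0, 1, 0], [0, 0, 1]])
a_inv = mat([[1, -1, 0], [0, 1, 0], [0, 0, 1]])
b_inv = mat([[1, 0, 0], [-1, 1, 0], [0, 0, 1]])
Q     = mat([[1, 0, F(-1, 2)], [0, 1, 0], [0, 0, 1]])
Q_inv = mat([[1, 0, F(1, 2)], [0, 1, 0], [0, 0, 1]])
A = mat([[1], [0], [2]])
N = mat([[0], [0], [2]])

assert mul(Q, A) == N   # N = QA

loops = [a, c, prod(b, b), prod(b, c, b_inv), prod(b, a, a, b_inv),
         prod(b, a, c, a_inv, b_inv), prod(b, a, b, a_inv, b_inv)]
for w in loops:
    assert mul(w, N) == N                      # w stabilizes N in G_2
    g = prod(Q_inv, w, Q)                      # conjugate by Q^{-1}
    assert all(x.denominator == 1 for row in g for x in row)  # g lies in G_1
    assert mul(g, A) == A                      # g stabilizes A
```

In particular the conjugate $Q^{-1}aQ$ equals $a$ itself, while $Q^{-1}cQ$ picks up a nontrivial translation part.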
There is a presentation $2$--complex for $(G_1)_A$ that has the connected component of $Q^{-1}G_1$ in $\Delta$ as its $1$--skeleton, and which is a three-sheeted cover of the presentation $2$--complex for $(G_2)_N$ corresponding to the presentation given above. In particular, $(G_1)_A$ has a finite presentation with seven generators and fifteen relators.
There is another interesting observation we can make by looking at $\Delta$. Even though $A$ and $N$ are both in $M_{3,1}(\Z)$ and $A$ and $N$ are in the same orbit under $G_2$, the vertices $G_1$ and $QG_1$ are in different connected components of $\Delta$ and therefore $A$ and $N$ are in different orbits under the action of $G_1$. This shows that it is not enough simply to check whether matrices are in the same $G_\Q$--orbit. \end{example}
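The component structure of $\Delta$ can be recomputed from $\rho$ alone, following the recipe in the proof of Proposition~\ref{pr:matrixorbitalgorithm}: since $\rho(a)=\rho(b)=\rho(c)=0$, the generator $C$ carries the vertex $\rho(D)$ to $C\cdot\rho(D)$ modulo $2$. A Python sketch (helper names ours) confirming that $G_1$, labeled $\binom{0}{0}$, and $QG_1$, labeled $\binom{1}{0}$, lie in different components:

```python
# Vertices of the Schreier graph: M_{2,1}(Z/2Z), i.e. pairs (x, y) mod 2.
# Each generator C in S = {a, b, c} has rho(C) = 0, so it sends the
# vertex v = rho(D) to C.v + rho(C) = C.v (mod 2), acting by its 2x2 block.
blocks = {
    'a': ((1, 1), (0, 1)),
    'b': ((1, 0), (1, 1)),
    'c': ((-1, 0), (0, 1)),
}

def step(C, v):
    return tuple(sum(C[i][j] * v[j] for j in range(2)) % 2 for i in range(2))

def component(v0):
    # Breadth-unspecified search over the edges out of v0
    seen, stack = {v0}, [v0]
    while stack:
        v = stack.pop()
        for C in blocks.values():
            w = step(C, v)
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

assert component((0, 0)) == {(0, 0)}
assert component((1, 0)) == {(1, 0), (0, 1), (1, 1)}  # three vertices: the
                                                      # three-sheeted cover
assert (1, 0) not in component((0, 0))  # A and N in different G_1-orbits
```

This matches the figure: one singleton component for the vertex $\binom{0}{0}$, and one component containing the three nonzero labels.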
\section{Peak reduction}\label{se:peakreduction} \subsection{Preliminary notions} Throughout this section we use the generalized Whitehead automorphisms $\Omega$ defined in the introduction. The definition of peak here does not exactly match the definition used in Day~\cite{Day1}, but the current definition is more symmetric. \begin{definition} A \emph{peak} is a triple $(W,\alpha,\beta)$, where $W$ is a tuple of conjugacy classes and $\alpha$ and $\beta$ are automorphisms of $A_\Gamma$ and \[\abs{\alpha\cdot W}\leq \abs{W}\quad\text{and}\quad\abs{\beta\cdot W}\leq\abs{W},\] with at least one of these inequalities being strict. The \emph{height} of a peak is $\abs{W}$. In this paper, the automorphisms $\alpha$ and $\beta$ in a peak are assumed to be in $\Omega$.
A \emph{lowering} of a peak $(W,\alpha,\beta)$ is a factorization \[\beta\alpha^{-1}=\gamma_k\dotsm\gamma_1\] of $\beta\alpha^{-1}$ by automorphisms $\gamma_1,\dotsc,\gamma_k$, such that all the lengths of the intermediate images of $\alpha\cdot W$ are strictly lower than that of $W$: \[\abs{\gamma_i\dotsm\gamma_1\alpha\cdot W}<\abs{W}\] for $i=1,\dotsc,k-1$. In this paper, the automorphisms in a lowering factorization of a peak are always elements of $\Omega$. \end{definition}
The goal of this section is to prove the following lemma: \begin{mainlemma}\label{mle:peaklowering} Suppose $(W,\alpha,\beta)$ is a peak and $\alpha$ and $\beta$ are generalized Whitehead automorphisms. Then this peak can be lowered using a factorization of $\beta\alpha^{-1}$ by generalized Whitehead automorphisms. \end{mainlemma} Theorem~\ref{th:fullfeaturedpeakreduction} follows immediately from this lemma: \begin{proof}[Proof of Theorem~\ref{th:fullfeaturedpeakreduction} from Main Lemma~\ref{mle:peaklowering}] This is a standard argument; it appears in detail as the ``Proof of part (3) of Theorem~B" on pages~836--837 of~\cite{Day1}. In short, we express $\alpha$ as a product of generalized Whitehead automorphisms (for example, as a product of Laurence--Servatius generators using Laurence's theorem) and then repeatedly alter the factorization using Main Lemma~\ref{mle:peaklowering}. Whenever we see a peak of maximal height, we replace it with a peak-lowering factorization. Each application of the Main Lemma either reduces the number of peaks of maximal height or reduces the maximum of the heights of the peaks in the factorization. The procedure will terminate with a factorization of $\alpha$ as a product of automorphisms from $\Omega$ such that no peaks appear in the sequence of lengths of the intermediate images of $W$. Such a factorization satisfies the conclusion of the theorem. \end{proof}
In our algorithm to lower peaks, we break into cases depending on the properties of the automorphisms $\alpha$ and $\beta$ in the given peak $(W,\alpha,\beta)$. We often consider the support of automorphisms, defined in Definition~\ref{de:support}. Using this, we can explain the structure of a generalized Whitehead automorphism. \begin{lemma}\label{le:actoncomps} Suppose $\alpha\in\whset{a}$ for some vertex $a$. Then $\alpha$ is a product of dominated transvections and partial conjugations with multipliers in $[a]$ and inversions in elements of $[a]$. In particular: \begin{itemize} \item if $b$ is adjacent to $a$ with $[b]\neq[a]$ and $\alpha$ does not fix $b$, then $a$ dominates $b$. \item if $Y$ is a connected component of $\Gamma\setminus\st(a)$ with at least two elements, then either $Y^{\pm1}\cap\supp(\alpha)=\varnothing$ or $Y^{\pm1}\subset\supp(\alpha)$. \end{itemize} \end{lemma} \begin{proof} We know that $\whset{a}$ is isomorphic to a semidirect product $\GL(k,\Z)\ltimes M_{k,l}(\Z)$, where the short-range dominated transvections between elements of $[a]$ and inversions in $[a]$ generate the $\GL(k,\Z)$--factor and the other dominated transvections and partial conjugations with multipliers in $[a]$ generate the $M_{k,l}(\Z)$--factor. This $M_{k,l}(\Z)$ is a free abelian group. In particular, we can express $\alpha$ as $\alpha_2\alpha_1$, where $\alpha_1$ acts trivially on $\Gamma\setminus[a]$ (it is in the $\GL(k,\Z)$--factor) and $\alpha_2$ acts trivially on $[a]$ (it is in the $M_{k,l}(\Z)$--factor).
If $b$ is adjacent to $a$ with $[b]\neq[a]$, then the only possible generators of $\whset{a}$ that can alter $b$ are transvections, if they exist. So if $\alpha$ does not fix $b$, there is a dominated transvection with multiplier $a$ acting on $b$, and therefore $a$ dominates $b$.
For $Y$ a connected component of $\Gamma\setminus\st(a)$, the only possible generators of $\whset{a}$ that do not leave $Y$ pointwise fixed are the partial conjugations that conjugate $Y$ by an element of $[a]$. If one of these appears in a minimal-length factorization of $\alpha_2$, then $Y^{\pm1}\subset\supp(\alpha)$. If none do, then $Y^{\pm1}\cap\supp(\alpha)=\varnothing$. \end{proof}
We need some basic observations about the interactions of Whitehead automorphisms, which we gather together in the following lemma. \begin{lemma}\label{le:basicobservations} Suppose $a$ and $b$ are vertices of $\Gamma$ and $\alpha\in\whset{a}$ and $\beta\in\whset{b}$. Then: \begin{itemize}
\item If $a$ is adjacent to $b$ and $\alpha|_{[b]}$ is not the identity and $\beta|_{[a]}$ is not the identity, then $[a]=[b]$. \item If $a$ is not adjacent to $b$ and $\alpha$ fixes each element of $[b]$ then $\alpha$ fixes each element of $\st(b)$. \item If $a$ is not adjacent to $b$ then $\alpha$ and $\beta$ both fix all of $\st(a)\cap\st(b)$. \item If $a$ is not adjacent to $b$ and $a$ dominates $b$, then $\beta$ is a long-range automorphism.
\item If $a$ is not adjacent to $b$ and $\alpha|_{[b]}$ is not the identity and $\supp(\alpha)\cap \supp(\beta)=\varnothing$ then $\beta$ is a long-range automorphism. \end{itemize} \end{lemma} \begin{proof}
For the first item, suppose for contradiction that $[a]\neq[b]$. We factor $\alpha$ and $\beta$ as products of Laurence generators. Since $\alpha|_{[b]}$ is not the identity, some factor of $\alpha$ is a dominated transvection that replaces some $c\in[b]$ with $cd$ for some $d\in[a]$. In particular, this means that $b$ dominates $a$. Reversing the roles of $\alpha$ and $\beta$, we see that $a$ dominates $b$ as well. Then by definition $[a]=[b]$, a contradiction.
For the second item, we write $\alpha=\alpha_1\alpha_2$ where $\alpha_1$ is a product of partial conjugations and $\alpha_2$ is a product of transvections and of inversions of elements of $[a]$. Let $x\in\st(b)\setminus[b]$. If $\alpha_1$ does not fix $x$, then $x$ is not adjacent to $a$ and $\alpha_1$ conjugates the entire connected component of $x$ in $\Gamma\setminus\st(a)$ by the same nontrivial element of $\genby{[a]}$. Necessarily $b$ is in the same connected component and is therefore not fixed by $\alpha_1$. But since $x$ is adjacent to $b$ but not to $a$, we know that $a$ does not dominate $b$, and therefore $\alpha_2$ fixes $b$. This implies that $\alpha$ does not fix $b$, a contradiction. If $\alpha_2$ does not fix $x$, then there is a transvection multiplying an element of $[a]$ onto $x$. Then $a$ dominates $x$. But since $x$ is adjacent to $b$, this implies $a$ is adjacent to $b$, a contradiction. So neither $\alpha_1$ nor $\alpha_2$ can alter $x$, which was arbitrary, and therefore $\alpha$ fixes $\st(b)$.
The third item is similar. If $\alpha$ does not fix an element $x$ of $\st(a)\cap\st(b)$, then there must be a transvection multiplying an element of $[a]$ onto $x$, so that $a$ dominates $x$. But if $a$ dominates $x$ and $x$ is adjacent to $b$, then $a$ is adjacent to $b$, a contradiction. Similarly $\beta$ must fix every element of $\st(a)\cap\st(b)$.
To show the fourth item, we suppose that $a$ dominates $b$. If $x$ is adjacent to $b$ and $\beta$ does not fix $x$, then there is a transvection factor of $\beta$ that multiplies an element of $[b]$ onto $x$. Then $b$ dominates $x$. Since $a$ dominates $b$, this implies that $a$ dominates $x$, and therefore that $a$ is adjacent to $b$, because $x$ is. This is a contradiction, so therefore $\beta$ fixes every vertex adjacent to $b$.
Now we explain the fifth item. If $\alpha$ does not fix $b$, then either $a$ dominates $b$, or the component $Y$ of $a$ in $\Gamma\setminus\st(b)$ has at least two elements and $\alpha$ conjugates every element of $Y$ by the same nontrivial word in $[a]$. If $a$ dominates $b$, then we have already shown that $\beta$ must be long-range. So we suppose $Y$ has at least two elements. Suppose $x$ is adjacent to $b$. If $\beta$ does not fix $x$, then $b$ dominates $x$ (as explained in the previous item). If $x$ were adjacent to $a$, then $b$ would be adjacent to $a$, which it is not. So $x$ is not adjacent to $a$, and therefore $x$ is an element of $Y$. However, if $x$ is an element of $Y$, then $x$ is in $\supp(\beta)$, a contradiction since $\supp(\beta)\cap\supp(\alpha)=\varnothing$ and $x\in\supp(\alpha)$. So $\beta$ fixes each vertex adjacent to $b$. \end{proof}
The following observation is easy and needs no proof. It greatly reduces the cases we need to consider. \begin{lemma} If $(W,\alpha,\beta)$ is a peak, then so is $(W,\beta,\alpha)$. If one of these peaks can be lowered, then so can the other. \end{lemma}
Now we outline the proof of Main Lemma~\ref{mle:peaklowering}, which is broken into lemmas filling out the rest of this section. \begin{proof}[Proof of Main Lemma~\ref{mle:peaklowering}] Since $(W,\alpha,\beta)$ is a peak, at least one of $\alpha$ or $\beta$ shortens $W$. Since permutation automorphisms do not shorten $W$, at least one of $\alpha$ or $\beta$ is not a permutation automorphism. Since we may swap $\alpha$ and $\beta$, we assume that $\alpha$ is not a permutation, so $\alpha\in\whset{a}$ for some vertex $a$ of $\Gamma$. If $\beta$ is a permutation automorphism, then conjugate $\alpha$ by $\beta$ to lower the peak as described in Lemma~\ref{le:permlower}.
So we assume that neither $\alpha$ nor $\beta$ is a permutation automorphism; then $\beta\in\whset{b}$ for some vertex $b$ in $\Gamma$. If $[a]=[b]$, then we lower the peak by replacing the length-two factorization $\beta\alpha^{-1}$ with the length-one factorization given by its product, as explained in Lemma~\ref{le:samemultset}.
Next we suppose that $a$ and $b$ are adjacent to each other in $\Gamma$, but that $[a]\neq[b]$. By Lemma~\ref{le:basicobservations}, this implies that $\alpha|_{[b]}$ is the identity or $\beta|_{[a]}$ is the identity. We suppose that $\alpha|_{[b]}$ is the identity; then Lemma~\ref{le:adjsteinberg} shows how to lower the peak. This uses a Steinberg relation explained in Lemma~\ref{le:steinbergrel}.
So we assume that $a$ and $b$ are not adjacent in $\Gamma$. If $a$ dominates $b$ and $b$ dominates $a$, then by Lemma~\ref{le:basicobservations}, both $\alpha$ and $\beta$ are long-range automorphisms. In this case, we can lower the peak (in fact, fully reduce it) using Theorem~\ref{th:longrangepeakreduction}. So we suppose that $b$ dominates $a$ but $a$ does not dominate $b$. Again by Lemma~\ref{le:basicobservations}, $\alpha$ is a long-range automorphism. Proposition~\ref{pr:nonadjacentasymmetric} explains how to lower such a peak. Finally, we suppose that $b$ does not dominate $a$ and $a$ does not dominate $b$. Then Proposition~\ref{pr:nonadjacentnodom} shows that the peak can be lowered. \end{proof}
\subsection{Change of length under generalized Whitehead automorphisms} To show that the algorithm for lowering peaks works as desired, we need to show that certain factorizations really are peak lowering. To do this we need a good understanding of the effect that generalized Whitehead automorphisms have on lengths of tuples of cyclic words. Next we prove some results about this.
We recall the action of $\whset{a}$ on syllables with respect to $a$ given in Definition~\ref{de:syllableaction}. \begin{lemma}\label{le:syllablesandlengthchange} Suppose $W$ is a tuple of cyclic words and $T$ is a syllable decomposition of $W$ with respect to $a$ and $\alpha\in\whset{a}$. Then \[\abs{\alpha\cdot W}-\abs{W}=\abs{\alpha\cdot T}-\abs{T}.\] In particular, we can compute the difference $\abs{\alpha\cdot W}-\abs{W}$ by computing the differences $\abs{\alpha\cdot t}-\abs{t}$ over all syllables $t$ of $T$ and summing. \end{lemma}
\begin{proof} This is an easy corollary of Lemma~\ref{le:decompositionequivariance}; the details are left to the reader. \end{proof}
In the following, when we talk about an endpoint of a syllable $cud$ being in the support of an automorphism, we are only talking about whether $c$ and $d^{-1}$ are in the support, but not about $c^{-1}$ or $d$. This is because the action of automorphisms on syllables has nothing to do with the action on the far sides of the endpoints. The following proposition is used to show that factorizations coming from the relations in Lemma~\ref{le:steinbergrel} can lower peaks. \begin{proposition}\label{pr:steinberglengthchange} Let $a$ and $b$ be vertices of $\Gamma$ with $[a]\neq[b]$. Suppose $\alpha\in\whset{a}$ and $\beta\in\whset{b}$ satisfy the hypotheses of Lemma~\ref{le:steinbergrel} ($\alpha$ fixes $[b]$, and either $a$ is adjacent to $b$, or $\supp(\alpha)\cap\supp(\beta)=\varnothing$ and $\beta$ fixes $[a]$). Let $\gamma=\alpha\beta\alpha^{-1}$, which is in $\whset{b}$ by Lemma~\ref{le:steinbergrel}. Suppose $W$ is a tuple of conjugacy classes. Then \[\abs{W}-\abs{\beta\cdot W}=\abs{\alpha\cdot W}-\abs{\alpha\beta\cdot W}.\] Further, if $(W,\alpha,\beta)$ is a peak, then $\abs{\gamma\alpha\cdot W} < \abs{W}$. \end{proposition}
Since $\gamma\alpha=\alpha\beta$, we can also express $\alpha\beta\cdot W$ as $\gamma\alpha\cdot W$ in the equation above.
\begin{proof}[Proof of Proposition~\ref{pr:steinberglengthchange}] Let $T=(t_1,\dotsc,t_N)$ be a syllable decomposition of $W$ with respect to $[b]$. We form a syllable decomposition $T'$ of $\alpha\cdot W$ with respect to $[b]$ by the following process: first we form the representative of $W$ associated to $T$, then we apply $\alpha$ to this representative by inserting elements from $[a]$ where appropriate, then we delete inverse pairs of elements of $[a]$ that commute with all intervening letters, and then we split the resulting representative into syllables. We note that no collapsing can happen in this process: if $c\notin[a]$ cancels with its inverse after we apply $\alpha$, then it could have canceled before applying $\alpha$, thus contradicting the stipulation from the definition of syllable decomposition that the associated representative be graphically reduced. We prove the proposition by defining a relation $f$ between the syllables in $T$ and the syllables in $T'$. This $f$ will be an injective partial function.
Consider syllable $t_i$ from $T$. Either $t_i$ is a linear syllable $t_i=c_iv_i u_i d_i$ or a cyclic syllable $t_i=v_i u_i$, with $v_i\in\genby{[b]}$, $u_i\in\genby{\st(b)\setminus[b]}$ and $c_i,d_i\in(\Gamma\setminus\st(b))^{\pm1}$. We consider what happens to $t_i$ in passing from $W$ to $\alpha\cdot W$. The hypotheses of Lemma~\ref{le:steinbergrel} give us two cases for $\alpha$ and $\beta$. In the first case, $a$ is adjacent to $b$, so acting by $\alpha$ cannot alter syllable boundaries, and we define $f$ to be the map that sends each syllable in $T$ to the corresponding syllable with the same boundaries in $T'$.
In the other case, $a$ is not adjacent to $b$ and $\supp(\alpha)\cap\supp(\beta)=\varnothing$. So by Lemma~\ref{le:basicobservations}, $\alpha$ fixes $v_iu_i$. In this case, $\beta|_{[a]}$ is the identity. Also, if $c_i$ or $d_i^{-1}$ is in $\supp(\beta)$, it is not in $[a]$ and is not in $\supp(\alpha)$, so action by $\alpha$ cannot cancel it away or insert an element of $[a]$ between it and $v_iu_i$. So in the second case, if $c_i$ or $d_i^{-1}$ or any element of $u_i$ is in $\supp(\beta)$ then all of these land in the same syllable of $T'$; we declare $f$ to send $t_i$ to this syllable.
In the second case, if $c_i$ and $d_i^{-1}$ are not in $\supp(\beta)$ and $u_i$ does not contain any elements of $\supp(\beta)$, then we leave $t_i$ out of the domain of $f$. This completes the definition of $f$. We note that $f$ is injective. This is obvious in the first case; in the second case, if two or more syllables merge to form the syllable $t'_j$, then at most one of them can be in the domain of $f$. This is because $\supp(\alpha)\cap\supp(\beta)=\varnothing$ and because elements of $\st(a)\cap\st(b)$ cannot be in $\supp(\beta)$: to break a syllable boundary, the far endpoint must be in $\supp(\alpha)$, and everything from $\st(b)$ on one side of the boundary must be in $\st(b)\cap\st(a)$.
\begin{claim*} If $t_i$ is in the domain of $f$, then $\abs{t_i}-\abs{\beta\cdot t_i}=\abs{f(t_i)}-\abs{\gamma\cdot f(t_i)}$. \end{claim*}
First we consider the case where $a$ is adjacent to $b$. Then $f(t_i)$ differs from $t_i$ in that some elements of $[a]$ have been added to and deleted from $u_i$. We use $u'_i$ to denote the part of $f(t_i)$ that is a word from $\st(b)\setminus[b]$. The syllable $\beta\cdot t_i$ differs from $t_i$ in that elements of $[b]$ have been added to and deleted from $v_i$. Let $v'_i$ denote the part of $\beta\cdot t_i$ that is a word from $[b]$. Since $\gamma\alpha=\alpha\beta$, the syllable $\gamma\cdot f(t_i)$ can be computed by applying $\alpha$ to the syllable $\beta\cdot t_i$ (in the same manner that we obtained $f(t_i)$ by applying $\alpha$ to $t_i$). Since the elements of $\supp(\alpha)$ in $t_i$ are in $c_i$, $d_i^{-1}$ and $u_i$ (since $\alpha$ fixes $[b]$), these are unchanged in $\beta\cdot t_i$, and therefore $\gamma\cdot f(t_i)$ differs from $\beta\cdot t_i$ in that $u_i$ is replaced by $u'_i$ (the same $u'_i$ as above). So the $\st(b)$-parts of $t_i$, $\beta\cdot t_i$, $f(t_i)$ and $\gamma\cdot f(t_i)$ are $v_iu_i$, $v'_iu_i$, $v_iu'_i$ and $v'_iu'_i$ respectively. Since these are reduced products, this proves the claim in this case.
Next we consider the case where $a$ is not adjacent to $b$, the supports of $\alpha$ and $\beta$ are disjoint, and $\beta$ fixes $[a]$. In this case, $\gamma=\beta$. The elements of $f(t_i)$ in $\supp(\beta)$ are exactly the same as those in $t_i$, since, as noted above, any elements merged in from other syllables cannot contain anything from $\supp(\beta)$. Similarly, parts merged in from other syllables cannot contain elements of $[b]$. By Lemma~\ref{le:basicobservations}, $\alpha$ fixes $v_iu_i$. Of course, $\beta\cdot t_i$ differs from $t_i$ in that $v_i$ is replaced by $v'_i$, where $v'_i$ is determined by $v_i$ and the elements of $t_i$ in $\supp(\beta)$. Since $f(t_i)$ contains the same $[b]$-part $v_i$ and the same elements of $\supp(\beta)$, it follows that $\gamma\cdot f(t_i)=\beta\cdot f(t_i)$ will be the same as $f(t_i)$, but with $v_i$ replaced by the same $v'_i$ just mentioned. In particular, the claim follows in this case.
\begin{claim*} If $t_i$ is not in the domain of $f$, then $t_i=\beta\cdot t_i$, and if $t'_j$ is not in the range of $f$, then $t'_j=\gamma\cdot t'_j$. \end{claim*}
In the first case, $f$ is a bijective total function and there is nothing to show. In the second case, the statement about the domain of $f$ is obvious: if $t_i$ is not in the domain, then it contains nothing from $\supp(\beta)$, and therefore $\beta$ does nothing to it. In the second case, if $t'_j$ is some syllable in $T'$ not in the range of $f$, then no part of it is in $\supp(\beta)$. This is because $\alpha$ did not create new elements of $\supp(\beta)$ in acting on $W$, and if some $t_i$ has some elements of $\supp(\beta)$, all these elements of $\supp(\beta)$ end up in its correspondent under $f$. The action of $\gamma$ is to place cancelling copies of $v$ and $v^{-1}$ in the syllable, proving the claim.
Now we finish the proof of the proposition. Using Lemma~\ref{le:syllablesandlengthchange}, we have shown that \[\abs{W}-\abs{\beta\cdot W}=\abs{\alpha\cdot W}-\abs{\gamma\alpha\cdot W},\] since we have matched up all the syllables that change length on one side with the syllables that change length on the other side, and shown that they change length by the same amount. If we suppose that $(W,\alpha,\beta)$ is a peak, then by definition, we know that $2\abs{W}>\abs{\beta\cdot W}+\abs{\alpha\cdot W}$ (this is from summing the two inequalities in the definition of a peak and using the stipulation that one of them is strict). Combining these two statements, we obtain
$\abs{W}>\abs{\gamma\alpha\cdot W}$. \end{proof}
For the next proposition, we recall the technique from Day~\cite{Day1} for computing the change in length of classic long-range Whitehead automorphisms. We defined classic Whitehead automorphisms in Section~\ref{ss:relations}. Fix a tuple of conjugacy classes $W$. For $c$ a vertex of $\Gamma$ and $A$ and $B$ subsets of $X^{\pm1}$, we use the notation $\pcount{A}{B}{W,c}$ to denote the number of instances of subwords of the forms $due^{-1}$ or $eud^{-1}$ in a graphically reduced representative for $W$, where $d\in A\setminus\st(c)$, $e\in B\setminus\st(c)$, and $u$ is a word in $\st(c)$. This number is nonnegative and does not depend on the choices involved. As a function of $A$ and $B$, it is additive over disjoint sets in both inputs. Generally we will work with a specific $W$ and suppress $W$ from the notation. Let $\gamma$ be a long-range classic Whitehead automorphism with multiplier $c$ and write $C=\supp(\gamma)+c$. We write $c$ for $\{c\}$, ``$-$'' for differences of sets, ``$+$'' for disjoint unions and $C'$ for $X^{\pm1}\setminus C$ in the following computations. This bracket has the following useful property (see Day~\cite[Lemma~3.16, Lemma~3.17]{Day1}): \[\abs{W}-\abs{\gamma\cdot W}=\pcount{c}{C-c}{c}-\pcount{C'}{C-c}{c}.\] Since $\pcount{c}{c}{c}=0$ ($W$ is graphically reduced), we can rewrite this as \[\abs{W}-\abs{\gamma\cdot W}=\pcount{c}{X^{\pm1}}{c}-\pcount{C'}{C}{c}.\]
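Indeed, the two forms of the identity agree: decomposing $X^{\pm1}=c+(C-c)+C'$ and using additivity, the symmetry $\pcount{A}{B}{c}=\pcount{B}{A}{c}$ (immediate from the definition, since exchanging $A$ and $B$ exchanges the two subword forms), and $\pcount{c}{c}{c}=0$, we compute
\[\begin{split}
\pcount{c}{X^{\pm1}}{c}-\pcount{C'}{C}{c}&=\Bigl(\pcount{c}{c}{c}+\pcount{c}{C-c}{c}+\pcount{c}{C'}{c}\Bigr)-\Bigl(\pcount{C'}{C-c}{c}+\pcount{C'}{c}{c}\Bigr)\\
&=\pcount{c}{C-c}{c}-\pcount{C'}{C-c}{c}.
\end{split}\]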
We use this bracket to detect sets of vertices for constructing Whitehead automorphisms for factorizations used in the peak reduction algorithm. The next result helps us do this. \begin{proposition}\label{pr:shorterfactors} Suppose $\alpha$ and $\beta$ are classic long-range Whitehead automorphisms with multipliers $a$ and $b$ respectively. Suppose that $a$ is not adjacent to $b$ and $b$ dominates $a$. Finally, we suppose that $\alpha(b)=b$ and $a$ is in $\supp(\beta)$. Let $W$ be a tuple of conjugacy classes. Let $A$ denote $\supp(\alpha)+a$ and let $B$ denote $\supp(\beta)+b$. Then: \begin{itemize} \item there is a classic long-range Whitehead automorphism $\beta_1$ with multiplier $b^{-1}$ and support $\supp(\beta_1)=A'\cap B'-b^{-1}$; \item there is a classic long-range Whitehead automorphism $\alpha_1$ with multiplier $a$ and support $\supp(\alpha_1)=A\cap B-a$; and \item we have the following inequality for the changes in length of $W$ under these automorphisms: \[(\abs{W}-\abs{\beta_1\cdot W})+(\abs{W}-\abs{\alpha_1\cdot W})\geq (\abs{W}-\abs{\beta\cdot W})+(\abs{W}-\abs{\alpha\cdot W}).\] \end{itemize} \end{proposition}
\begin{proof} First we explain why the automorphisms $\beta_1$ and $\alpha_1$ exist. Since $b$ dominates $a$, $b$ dominates every vertex that $a$ dominates, and if $Y$ is a connected component of $\Gamma\setminus\st(a)$, $Y$ is a union of vertices adjacent to $b$, vertices dominated by $b$, and connected components of $\Gamma\setminus\st(b)$. Therefore $\beta_1$ can be expressed as a product of Laurence generators and is a well defined automorphism.
If $a$ dominates $b$, then $a$ and $b$ dominate the same vertices non-adjacently and $\Gamma\setminus\st(a)$ and $\Gamma\setminus\st(b)$ have the same connected components with two or more vertices. Then $A\cap B-a$ is a union of vertices that $a$ dominates non-adjacently and components of $\Gamma\setminus\st(a)$ with two or more vertices, and therefore $\alpha_1$ can be expressed as a product of Laurence generators with multiplier $a$ and is well defined. Now suppose that $a$ does not dominate $b$. Then since $\alpha$ fixes $b$, $\alpha$ must fix the entire connected component of $b$ in $\Gamma\setminus\st(a)$. If $c$ is a vertex in $A\cap B$ and $a$ does not dominate $c$, then the entire connected component $Y$ of $c$ in $\Gamma\setminus\st(a)$ is in $A$. If $b$ dominates $c$, then there is a path from $c$ to $b$ outside of $\st(a)$, and therefore $b$ and $c$ are both in $Y$. However, this is a contradiction, since $\alpha$ must conjugate every element of $Y$ by $a$. So $b$ does not dominate $c$, which means that the entire connected component $Z$ of $c$ in $\Gamma\setminus\st(b)$ is in $B$. Since $Y\subset Z$ (since $b$ dominates $a$), this means that $Y\subset A\cap B$. In particular, $\alpha_1$ can be expressed as a product of Laurence generators and is well defined.
Now we consider lengths of images of $W$. From the comments preceding the statement of Proposition~\ref{pr:shorterfactors}, we know \[\abs{W}-\abs{\alpha\cdot W}=\pcount{a}{X^{\pm1}}{a}-\pcount{A'}{A}{a},\] \[\abs{W}-\abs{\beta\cdot W}=\pcount{b}{X^{\pm1}}{b}-\pcount{B'}{B}{b},\] and \[\abs{W}-\abs{\beta_1\cdot W}=\pcount{b^{-1}}{X^{\pm1}}{b}-\pcount{A'\cap B'}{(A'\cap B')'}{b}.\] We expand: \[ \begin{split} \pcount{A'\cap B'}{(A'\cap B')'}{b}=&\pcount{A'\cap B'}{A\cap B}{b}+\pcount{A'\cap B'}{A'\cap B}{b}\\ &+\pcount{A'\cap B'}{A\cap B'}{b} \end{split} \] and: \[ \begin{split} \pcount{B'}{B}{b}=&\pcount{A\cap B'}{A\cap B}{b}+\pcount{A\cap B'}{A'\cap B}{b}+\pcount{A'\cap B'}{A\cap B}{b}\\&+\pcount{A'\cap B'}{A'\cap B}{b}. \end{split} \] In particular, we see \[ \begin{split} (\abs{W}-\abs{\beta_1\cdot W})-(\abs{W}-\abs{\beta\cdot W})=&\pcount{A\cap B}{A\cap B'}{b}-\pcount{A'\cap B'}{A\cap B'}{b}\\ &+\pcount{A\cap B'}{A'\cap B}{b}. \end{split} \] Let $C_W$ be the number of subwords of our graphically reduced representative for $W$ of the form $auc^{-1}$, where $c$ is in $A-a$ and $u$ is in $\genby{\st(b)}$ but not in $\genby{\st(a)}$ (so $u$ contains some letter not commuting with $a$). Then \[\pcount{A\cap B}{A\cap B'}{b}=\pcount{A\cap B}{A\cap B'}{a}+C_W.\] On the other hand, if $cud^{-1}$ is counted by $\pcount{A'\cap B'}{A\cap B'}{b}$, meaning that $c\in A'\cap B'$, $u\in\genby{\st(b)}$ and $d\in A-a$, then either $u\in\genby{\st(a)}$ and $cud^{-1}$ is counted by $\pcount{A'\cap B'}{A\cap B'}{a}$, or else $u=u_1eu_2$ with $u_1\in\genby{\st(a)}$ and $e\in\st(b)\setminus\st(a)$. In the latter case, $cu_1e$ is counted by $\pcount{A'\cap B'}{A\cap B'}{a}$, since $e$ is in $A'\cap B'$ ($e$ is fixed by $\beta$ because $\beta$ is long range and by $\alpha$ because otherwise $\alpha$ would not fix $b$). 
So each subword counted by $\pcount{A'\cap B'}{A\cap B'}{b}$ is counted exactly once by $\pcount{A'\cap B'}{A\cap B'}{a}$, so \[\pcount{A'\cap B'}{A\cap B'}{b}=\pcount{A'\cap B'}{A\cap B'}{a}.\] Then \[\begin{split} (\abs{W}-\abs{\beta_1\cdot W})-(\abs{W}-\abs{\beta\cdot W})=&\pcount{A\cap B}{A\cap B'}{a}-\pcount{A'\cap B'}{A\cap B'}{a}\\&+C_W+\pcount{A\cap B'}{A'\cap B}{b}. \end{split}\] By an argument parallel to the one for $\beta_1$, we deduce \[\begin{split} (\abs{W}-\abs{\alpha_1\cdot W})-(\abs{W}-\abs{\alpha\cdot W})=&\pcount{A\cap B'}{A'\cap B'}{a}-\pcount{A\cap B}{A\cap B'}{a}\\&+\pcount{A'\cap B}{A\cap B'}{a}. \end{split}\] Summing the two equations (the terms $\pcount{A\cap B}{A\cap B'}{a}$ cancel in pairs, and $\pcount{A\cap B'}{A'\cap B'}{a}=\pcount{A'\cap B'}{A\cap B'}{a}$ by the symmetry of the bracket in its two set arguments), we see that \[(\abs{W}-\abs{\beta_1\cdot W})-(\abs{W}-\abs{\beta\cdot W})+(\abs{W}-\abs{\alpha_1\cdot W})-(\abs{W}-\abs{\alpha\cdot W})\] is \[C_W+\pcount{A\cap B'}{A'\cap B}{b}+\pcount{A\cap B'}{A'\cap B}{a},\] a nonnegative number. \end{proof}
In the following, when we write $\abso{\alpha}$, we mean the minimum length of a representative of the class of $\alpha$ in $\Out(A_\Gamma)$ as a product of Laurence generators. \begin{corollary}\label{co:shorterfactors} Suppose $(W,\alpha,\beta)$ is a peak with $\alpha\in \whset{a}$, $\beta\in\whset{b}$ classic long-range Whitehead automorphisms, with $a$ not adjacent to $b$, and $b$ dominating $a$, but $a$ not dominating $b$. Suppose $a\in\supp(\beta)$ and $\alpha(b)=b$. Then either \begin{itemize} \item there is a classic long-range Whitehead automorphism $\beta_1$ with multiplier $b^{-1}$, with $\abso{\beta_1^{-1}\beta}<\abso{\beta}$ as a product of Laurence generators, with $\beta_1(a)$ equal to $ba$ or $a$, and with $\abs{\beta_1\cdot W}<\abs{W}$, or \item there is a classic long-range Whitehead automorphism $\alpha_1$ with multiplier $a$, with $\supp(\alpha_1)\subset\supp(\beta)$, and with $\abs{\alpha_1\cdot W}<\abs{W}$. \end{itemize} Further, if $\supp(\alpha)\cap\supp(\beta)=\varnothing$, then the first option holds. \end{corollary}
\begin{proof} We apply Proposition~\ref{pr:shorterfactors}. Using the definition of a peak, \[(\abs{W}-\abs{\beta_1\cdot W})+(\abs{W}-\abs{\alpha_1\cdot W})\geq (\abs{W}-\abs{\beta\cdot W})+(\abs{W}-\abs{\alpha\cdot W}) > 0.\] So one of $\alpha_1$ or $\beta_1$ strictly shortens $W$. Since $\alpha_1$ and $\beta_1$ satisfy the other properties in the corollary, this proves the main statement of the corollary. If $\supp(\alpha)\cap\supp(\beta)=\varnothing$, then $\alpha_1$ is defined to be the trivial automorphism and cannot shorten $W$. Therefore in this case, it is $\beta_1$ that shortens $W$. \end{proof}
\subsection{The algorithm: basic cases}\label{ss:algbasic}
Now we begin considering cases for lowering peaks. For the rest of this section, we suppose we have peak $(W,\alpha,\beta)$ with $\alpha$ and $\beta$ generalized Whitehead automorphisms. \begin{lemma}\label{le:permlower} If $\alpha$ is a permutation automorphism, then the peak $(W,\alpha,\beta)$ can be lowered. \end{lemma} \begin{proof} Since $\alpha$ is a permutation automorphism, $\alpha\cdot W$ is the same length as $W$. Then the definition of a peak demands that $\abs{\beta\cdot W}<\abs{W}$, and therefore $\beta\in\whset{b}$ for some vertex $b$ of $\Gamma$. Then $\beta\alpha^{-1}=\alpha^{-1}\cdot \alpha\beta\alpha^{-1}$ is a peak-lowering factorization: $\alpha\beta\alpha^{-1}$ is in $\whset{\alpha(b)}$ and $\abs{\alpha\beta\cdot W}=\abs{\beta\cdot W}<\abs{W}$ (again because $\alpha$ is a permutation automorphism). \end{proof}
\begin{lemma}\label{le:samemultset} If $\alpha$ and $\beta$ are both in $\whset{a}$ for a vertex $a$ of $\Gamma$, then $(W,\alpha,\beta)$ can be lowered. \end{lemma} \begin{proof} Set $\gamma=\beta\alpha^{-1}$. Then $\beta\alpha^{-1}=\gamma$ is a peak-lowering factorization: $\gamma\in\whset{a}$ and the condition on intermediate lengths of images of $W$ is vacuous for factorizations of length one. \end{proof}
For the rest of this section we assume that $(W,\alpha,\beta)$ is a peak with $\alpha\in\whset{a}$ and $\beta\in\whset{b}$ for distinct vertices $a$ and $b$ in $\Gamma$. \begin{lemma}\label{le:adjsteinberg} Suppose $a$ is adjacent to $b$ and $\alpha|_{[b]}$ is the identity. Then the peak $(W,\alpha,\beta)$ can be lowered. \end{lemma} \begin{proof} By Lemma~\ref{le:steinbergrel}, we know that the element $\gamma=\alpha\beta\alpha^{-1}$ is in $\whset{b}$. We use the factorization $\beta\alpha^{-1}=\alpha^{-1}\gamma$. To show that this is peak-lowering, it is enough to show that $\abs{\gamma\alpha\cdot W}<\abs{W}$. However, we have already done this in Proposition~\ref{pr:steinberglengthchange}. \end{proof}
\begin{lemma}\label{le:nonadjsteinbergidentity}
Suppose $a$ is not adjacent to $b$, $\supp(\alpha)\cap\supp(\beta)=\varnothing$ and $\alpha|_{[b]}$ and $\beta|_{[a]}$ are both identity maps. Then the peak $(W,\alpha,\beta)$ can be lowered. \end{lemma} \begin{proof} Again we use $\gamma=\alpha\beta\alpha^{-1}$, which is in $\whset{b}$ by Lemma~\ref{le:steinbergrel}. Our hypotheses imply that $\alpha$ and $\beta$ commute, so that $\gamma=\beta$. Our peak-lowering factorization is \[\beta\alpha^{-1}=\alpha^{-1}\beta.\] To show that this factorization is peak-lowering, we need to show that $\abs{\beta\alpha\cdot W}<\abs{W}$. We have already shown this in Proposition~\ref{pr:steinberglengthchange}. \end{proof}
\subsection{General cases with non-adjacent multipliers}\label{ss:alggen} \begin{lemma}\label{le:lowerconj} Suppose $(W,\alpha,\beta)$ is a peak with $\alpha\in\whset{a}$ and $\beta\in\Omega$, and $\gamma\in\whset{a}$ is an inner automorphism of $A_\Gamma$. Then $\gamma\alpha$ and $\alpha\gamma$ are both in $\whset{a}$ and $(W,\gamma\alpha,\beta)$ and $(W,\alpha\gamma,\beta)$ are both peaks and can be lowered if and only if $(W,\alpha,\beta)$ can be lowered. \end{lemma} \begin{proof} The claim that $\gamma\alpha$ and $\alpha\gamma$ are in $\whset{a}$ is obvious. We note that $\gamma\alpha=\alpha\gamma'$ for a possibly different inner automorphism $\gamma'\in\whset{a}$, so it is enough to prove the lemma for $(W,\gamma\alpha,\beta)$. Also, it is enough to show only the ``if'' direction of the statement.
First of all, if $(W,\alpha,\beta)$ is a peak, then $(W,\gamma\alpha,\beta)$ is as well, since action by inner automorphisms has no effect on cyclic words. Now we suppose that $\delta_1,\dotsc,\delta_k\in\Omega$ and \[\beta\alpha^{-1}=\delta_k\dotsm\delta_1\] is a lowering of $(W,\alpha,\beta)$. Let $l$ be one of $1,\dotsc, k$ such that $\delta_l\dotsm\delta_1\alpha\cdot W$ is minimal length among all $\delta_i\dotsm\delta_1\alpha\cdot W$. Since a conjugate of an inner automorphism by another automorphism is still an inner automorphism, and since every inner automorphism is a product of inner automorphisms from $\Omega$, we can find inner automorphisms $\gamma_1,\dotsc,\gamma_m$ in $\Omega$ such that \[\beta\alpha^{-1}\gamma^{-1}=\delta_k\dotsm\delta_{l+1}\gamma_m\dotsm\gamma_1\delta_l\dotsm\delta_1.\] Since inner automorphisms do not change the length of cyclic words, this is a lowering of the peak $(W,\gamma\alpha,\beta)$. \end{proof}
If $\alpha$ is a classic long-range Whitehead automorphism with multiplier $a$, we will sometimes consider the \emph{complement} of $\alpha$. This is the classic long-range Whitehead automorphism with multiplier $a$, with the property that the union of the support of $\alpha$ and the support of its complement is $(\Gamma\setminus\st(a))^{\pm1}$. It follows that the product of $\alpha$ and its complement is inner, so Lemma~\ref{le:lowerconj} implies that we may freely replace an automorphism in a peak with its complement.
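To illustrate with a minimal example (ours, using one standard convention for classic Whitehead automorphisms): let $\Gamma$ be the discrete graph on the vertices $a$ and $x$, so that $A_\Gamma$ is free of rank~$2$ and $\st(a)=\{a\}$. If $\alpha$ has multiplier $a$, $\supp(\alpha)=\{x\}$ and $\alpha(x)=xa$, then its complement $\bar\alpha$ has $\supp(\bar\alpha)=\{x^{-1}\}$ and $\bar\alpha(x)=a^{-1}x$, so that $\supp(\alpha)+\supp(\bar\alpha)=(\Gamma\setminus\st(a))^{\pm1}$, and
\[\bar\alpha\alpha(x)=\bar\alpha(xa)=a^{-1}xa,\qquad \bar\alpha\alpha(a)=a,\]
so the product is conjugation by $a$, as asserted.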
\begin{proposition} \label{pr:nonadjacentnodom} Suppose $a$ is not adjacent to $b$, $b$ does not dominate $a$ and $a$ does not dominate $b$. Then the peak $(W,\alpha,\beta)$ can be lowered. \end{proposition}
\begin{proof} We prove the proposition by proving it in successively more general cases, with each case building on the last. First we prove: \begin{case*}
The proposition is true if $\beta$ is a classic long-range Whitehead automorphism with multiplier $b$, $\beta|_{[a]}$ is the identity, $\alpha|_{[b]}$ is not the identity and $\supp(\alpha)\cap\supp(\beta)=\varnothing$. \end{case*} Since $a$ does not dominate $b$, there is some nontrivial element $v\in\genby{[a]}$ such that $\alpha(b)=vbv^{-1}$. We define a new automorphism $\alpha_1\in\whset{a}$ by $\alpha_1(c)=v^{-1}\alpha(c)v$ for every $c\in\Gamma$. Then $\alpha_1$ and $\alpha$ differ by an inner automorphism, and to prove this case, it is enough to show that the peak $(W,\alpha_1,\beta)$ can be lowered.
\begin{claim*} If $c^{\pm1}\in\supp(\beta)$, then either $a$ dominates $c$ or $Y^{\pm1}\subset\supp(\beta)$, where $Y$ is the connected component of $c$ in $\Gamma\setminus\st(a)$. \end{claim*} Suppose that $c$ is a vertex of $\Gamma$ with $c$ or $c^{-1}\in\supp(\beta)$ and $a$ does not dominate $c$. Let $Y$ be the connected component of $c$ in $\Gamma\setminus\st(a)$. We want to show that $Y$ is a subset of a connected component of $\Gamma\setminus\st(b)$ with at least two vertices. This is enough, since by Lemma~\ref{le:actoncomps} this implies that $Y^{\pm1}\subset\supp(\beta)$. Then it is enough to show that $b$ does not dominate $c$ and that $Y$ does not contain an element of $\st(b)$.
If $b$ dominates $c$, then since $a$ does not dominate $c$, either $b$ is adjacent to $c$ or there is a vertex adjacent to $b$ and $c$ but not adjacent to $a$. If $Y$ contains an element of $\st(b)$, then there is a path from $c$ to $b$ outside of $\st(a)$. In any of these three cases, it follows that $b$ and $c$ are in the same component of $\Gamma\setminus\st(a)$. However, this contradicts Lemma~\ref{le:actoncomps}: $\alpha$ fixes $c$ and $c^{-1}$ since $\beta$ does not and $\supp(\alpha)\cap\supp(\beta)=\varnothing$, but $\alpha$ does not fix $b$. This proves the claim.
By the claim, $\supp(\beta)$ is a union of elements $c^{\pm1}$ where $a$ dominates $c$, and subsets $Y^{\pm1}$ where $Y$ is an entire connected component of $\Gamma\setminus\st(a)$. In particular, the following automorphisms are products of Laurence generators and are well defined. Let $\alpha_2(c)$ be in $\{c,cv^{-1},vc,vcv^{-1}\}$ for each vertex $c\in \Gamma$, with $\supp(\alpha_2)=\supp(\beta)$, and let $\alpha_3=\alpha_1\alpha_2^{-1}$.
Now our goal is to show that $\abs{\alpha_3\cdot W}<\abs{W}$. If this is true, then we will have that $(W,\alpha_3,\beta)$ is a peak with $\beta|_{[a]}$ and $\alpha_3|_{[b]}$ being identity maps and $\supp(\alpha_3)\cap\supp(\beta)=\varnothing$. In particular, Lemma~\ref{le:nonadjsteinbergidentity} applies to lower $(W,\alpha_3,\beta)$. Appending $\alpha_2^{-1}$ to this factorization will give us a peak-lowering factorization for $\beta\alpha_1^{-1}$.
In fact we show that \[\abs{\alpha_1\cdot W}-\abs{\alpha_3\cdot W}\geq \abs{v}(\abs{W}-\abs{\beta\cdot W}).\] Let $T$ be a syllable decomposition of $W$ with respect to $[b]$. For each syllable $t_i$ in $T$, either $\beta\cdot t_i$ is longer than $t_i$, shorter than $t_i$, or the same length. Let $L_1$ be the sum over syllables $t_i$ that $\beta$ lengthens of $\abs{\beta\cdot t_i}-\abs{t_i}$, and let $S_1$ be the sum over syllables $t_i$ that $\beta$ shortens of $\abs{t_i}-\abs{\beta\cdot t_i}$. Then \[\abs{W}-\abs{\beta\cdot W}= S_1-L_1.\] Let $T'$ be a syllable decomposition of $\alpha_1\cdot W$ with respect to $[a]$, and similarly let $L_2$ be the total increase in length of syllables that $\alpha_2^{-1}$ lengthens and let $S_2$ be the total decrease in length of syllables that $\alpha_2^{-1}$ shortens. Then \[\abs{\alpha_1\cdot W}-\abs{\alpha_3\cdot W}=S_2-L_2.\] We assume without loss of generality that $\alpha_1\cdot T$ and $T'$ have the same associated representative of $\alpha_1\cdot W$.
Suppose $t_i$ is a syllable in $T$ that decreases in length under $\beta$. Then $t_i$ contains $b$ to a nonzero power and exactly one endpoint of $t_i$ is in $\supp(\beta)$. Suppose $t_i=cb^kud$, where $u\in\genby{\st(b)\setminus\{b\}}$, and without loss of generality assume $c$ is in $\supp(\beta)$. Then the initial part $cb^{\pm1}$ of $t_i$ is a syllable of $W$ with respect to $[a]$. The image of this syllable under $\alpha_1$ is a syllable of $\alpha_1\cdot W$ with respect to $[a]$; it is of the form $c\alpha_1(b)^{\pm1}$. Clearly $\alpha_3\cdot cb^{\pm1}$ (acting on the syllable) is $cv^{-1}\alpha_1(b)^{\pm1}$. This is a syllable of $\alpha_1\cdot W$ in $T'$ that shortens by $\abs{v}$ under the action of $\alpha_2^{-1}$. In particular, summing over all syllables in $T$ that decrease in length under $\beta$, we have \[\abs{v} S_1\leq S_2.\]
Suppose $t'_j$ is a syllable in $T'$ that increases in length under $\alpha_2^{-1}$. Suppose $t'_j=cwud$ where $u\in\genby{\st(a)\setminus[a]}$ and $w\in\genby{[a]}$. We note that $\alpha_2$ is like a classic Whitehead automorphism in that $\alpha_2(c)$ must be one of $\{c,cv^{-1},vc,vcv^{-1}\}$. In particular, since $\alpha_2^{-1}\cdot t'_j$ is longer than $t'_j$, we must have exactly one of $c$ and $d^{-1}$ in $\supp(\alpha_2)$, and it must be the case that, if $w\neq 1$, the $v$ or $v^{-1}$ that is inserted by $\alpha_2^{-1}$ does not cancel away completely into $w$. We assume without loss of generality that $c$ is in $\supp(\alpha_2)=\supp(\beta)$ but $d^{-1}$ is not. Clearly $t'_j$ increases in length by at most $\abs{v}$ under $\alpha_2^{-1}$. We consider $\alpha_1^{-1}\cdot t'_j$, which is a syllable of $W$ with respect to $[a]$. Then $\alpha_1^{-1}\cdot t'_j=cw'ud$, where $w'\in\genby{[a]}$. Since $c\in\supp(\beta)$, we know $c\neq b^{\pm1}$. We note that if $d$ is $b^{\pm1}$, then it must be that $w'u$ contains a generator not in $\st(b)$. If $w'$ is trivial, then $u$ contains elements adjacently dominated by $a$, which cannot be in $\st(b)$ since $a$ is not adjacent to $b$. If $w'$ is nontrivial, then $w'$ contains elements of $[a]$ which are not adjacent to $b$. Further $w'u$ contains no elements of $\supp(\beta)$ because $\beta$ fixes $\st(a)$. So whether $d=b^{\pm1}$ or not, the initial segment of $cw'ud$ contains a syllable with respect to $[b]$ starting with $c$ (in $\supp(\beta)$) and ending with an element not in $\supp(\beta)$. Such a syllable increases in length by one under $\beta$.
Then summing over all syllables in $T'$ that increase in length under $\alpha_2^{-1}$, we have \[L_2\leq \abs{v} L_1.\] Summing these gives us \[\abs{v}S_1+L_2\leq \abs{v}L_1+S_2,\] in other words \[\abs{v}(\abs{W}-\abs{\beta\cdot W})=\abs{v}(S_1-L_1)\leq S_2-L_2 = \abs{\alpha_1\cdot W}-\abs{\alpha_3\cdot W}.\] We can rewrite this as \[\abs{W}-\abs{\alpha_3\cdot W} \geq \abs{v}(\abs{W}-\abs{\beta\cdot W}) + \abs{W}-\abs{\alpha_1\cdot W}.\] Since $(W,\alpha_1,\beta)$ is a peak, we know $\abs{W}-\abs{\beta\cdot W}\geq 0$ and $\abs{W}-\abs{\alpha_1\cdot W} \geq0$ with at least one of these inequalities strict. It follows that $\abs{\alpha_3\cdot W}<\abs{W}$. As explained above, this means $(W,\alpha_3,\beta)$ is a peak that Lemma~\ref{le:nonadjsteinbergidentity} lowers, and by attaching $\alpha_2^{-1}$ to the lowering factorization, we get a peak-lowering factorization for the peak $(W,\alpha_1,\beta)$.
This proves the first case. Now we move to a slightly more general case. \begin{case*} The proposition is true for general $\alpha\in\whset{a}$, $\beta\in\whset{b}$ with $\supp(\alpha)\cap\supp(\beta)=\varnothing$. (We are still assuming $a$ does not dominate $b$ and $b$ does not dominate $a$.) \end{case*}
If $\alpha|_{[b]}$ and $\beta|_{[a]}$ are both the identity, then Lemma~\ref{le:nonadjsteinbergidentity} lowers the peak. So we assume without loss of generality that $\alpha|_{[b]}$ is not the identity. Lemma~\ref{le:basicobservations} implies that $\beta$ is a long-range automorphism in this case. If we also assume that $\beta|_{[a]}$ is not the identity, then $\alpha$ is also a long-range automorphism, and therefore Theorem~\ref{th:longrangepeakreduction} applies to lower the peak. So we assume that $\beta|_{[a]}$ is the identity.
Since $\beta$ is a long-range automorphism, we peak-reduce $\beta$ with respect to $W$ to get a factorization $\beta=\beta_k\dotsm\beta_1$ by classic long-range Whitehead automorphisms. It is possible that $\abs{\beta_i\dotsm\beta_1\cdot W}<\abs{W}$ for some $i$ in $1,\dotsc,k$. If this is the case, we lower the peak $(W,\alpha,\beta_1)$ by the method above and concatenate $\beta_k\dotsm\beta_2$ to this factorization to get a peak-lowering factorization for $\beta\alpha^{-1}$.
Otherwise, since $\beta_k\dotsm\beta_1$ is peak reduced, we have $\abs{\beta\cdot W}=\abs{W}$, $\abs{\alpha\cdot W}<\abs{W}$ and $\abs{\beta_i\dotsm\beta_1\cdot W}=\abs{W}$ for $i=1,\dotsc,k$. We prove the claim by induction on the length $k$ of the factorization. We let $\alpha_1,\alpha_2$ and $\alpha_3$ have the same meanings as above, with $\beta_1$ taking the role of $\beta$. Then $\beta_1\alpha_1^{-1}=\alpha_3^{-1}\beta_1\alpha_2^{-1}$ is a peak-lowering factorization for the peak $(W,\alpha_1,\beta_1)$, using Lemma~\ref{le:nonadjsteinbergidentity} as explained above. However, if $k>1$, then we have a new peak $(\beta_1\cdot W, \alpha_3, \beta_k\dotsm\beta_2)$. The inductive hypothesis implies that this new peak can be lowered. Concatenating $\alpha_3^{-1}\beta_1$ onto the peak-lowering factorization for $(\beta_1\cdot W,\alpha_3,\beta_k\dotsm\beta_2)$ gives us a peak-lowering factorization for $(W,\alpha_1,\beta)$, which gives us a peak-lowering factorization for the original peak using Lemma~\ref{le:lowerconj}. This proves the second case.
\paragraph{General case.} Finally we prove the proposition as stated. We have $\alpha\in\whset{a}$ and $\beta\in\whset{b}$. If $\alpha$ does not fix $b$, then since $a$ does not dominate $b$, we know that $\alpha$ conjugates $b$ by a nontrivial element of $\genby{[a]}$. Then by Lemma~\ref{le:lowerconj}, we may replace $\alpha$ by its product with an inner automorphism and assume that $\alpha$ fixes $b$. Similarly, we assume that $\beta$ fixes $a$.
\begin{claim*} If $c^{\pm1}$ is in $\supp(\alpha)\cap\supp(\beta)$, then either both $a$ and $b$ non-adjacently dominate $c$, or else the connected component $Y$ of $c$ in $\Gamma\setminus\st(a)$ is also a connected component of $\Gamma\setminus\st(b)$ and $Y\cup Y^{-1}\subset\supp(\alpha)\cap\supp(\beta)$. \end{claim*}
If $c^{\pm1}$ is in $\supp(\alpha)\cap\supp(\beta)$, then one possibility is that $a$ dominates $c$. If $b$ were adjacent to $c$, it would imply that $a$ is adjacent to $b$, which is not the case. So $b$ is not adjacent to $c$. If $b$ does not also dominate $c$, then there is a vertex adjacent to $a$ and $c$ but not adjacent to $b$. This means that $a$ and $c$ are in the same component of $\Gamma\setminus\st(b)$; then $\beta(a)=a$ implies that $c,c^{-1}\notin\supp(\beta)$ by Lemma~\ref{le:actoncomps}. This is a contradiction, so we see that both $a$ and $b$ dominate $c$. Then if $c$ were adjacent to $a$, it would imply that $a$ and $b$ are adjacent, so $c$ is also not adjacent to $a$.
Next we suppose that $c$ in $\supp(\alpha)\cap\supp(\beta)$ is in a connected component $Y$ of $\Gamma\setminus\st(a)$ with at least two vertices. First of all, $b$ is not adjacent to $c$, since otherwise $\alpha$ would not fix $b$ (by Lemma~\ref{le:actoncomps}, since $b$ would be in $Y$). Further, it cannot be the case that $b$ dominates $c$---if it did, $b$ would be adjacent to a vertex of $Y$ other than $c$ and therefore $b$ would be in $Y$ and $\alpha$ would not fix $b$. So let $Y'$ be the component of $c$ in $\Gamma\setminus\st(b)$. Suppose $d\in Y$. Then there is a path from $d$ to $c$ outside of $\st(a)$. If this path intersects $\st(b)$, then $b$ would be in $Y$, which is impossible. So the path from $d$ to $c$ is outside $\st(b)$, meaning that $d$ is in $Y'$. Since $d$ was arbitrary, $Y\subset Y'$. By a parallel argument, $Y'\subset Y$, and therefore they are equal. This proves the claim.
We prove the proposition by induction on the cardinality of $\supp(\alpha)\cap\supp(\beta)$. The base case is that $\supp(\alpha)\cap\supp(\beta)=\varnothing$, which we covered in a previous case.
Now we work the inductive step. Since $\supp(\alpha)\cap\supp(\beta)$ is nonempty, we select an element $c$ from it. First we assume that both $a$ and $b$ non-adjacently dominate $c$. Fix a graphically reduced representative for $W$. Let $n_a$ denote the number of instances of $c$ or $c^{-1}$ in the representative in a subword $cud$ or $duc^{-1}$ with $u\in\genby{\st(a)}$ and $d\in[a]^{\pm1}$ and let $n_b$ be defined similarly with $a$ replaced by $b$. Let $n'$ count the number of remaining instances of $c$ or $c^{-1}$ in the representative (those that are counted by neither $n_a$ nor $n_b$). Let $v_1$ and $v_2$ be in $\genby{[a]}$ with $\alpha(c)=v_1cv_2$ and let $u_1$ and $u_2$ be in $\genby{[b]}$ with $\beta(c)=u_1cu_2$. Let $\alpha_c$ be the element of $\whset{a}$ with $\alpha_c(d)=\alpha(d)$ for all $d\neq c$, and $\alpha_c(c)=v_1c$. Let $\beta_c$ be defined similarly, with $\beta_c(c)=u_1c$. We note that $\alpha_c$ and $\beta_c$ can be expressed as products of Laurence generators and are well-defined automorphisms.
The decomposition of $W$ into syllables with respect to $[a]$ coming from our representative, combined with Lemma~\ref{le:syllablesandlengthchange}, tells us the following: at each instance of $c$ counted by $n_b$ or $n'$, $\alpha$ increases the length of $W$ by $\abs{v_2}$ more than $\alpha_c$ does, and at each instance of $c$ counted by $n_a$, $\alpha$ may decrease the length of $W$ by up to $\abs{v_2}$ more than $\alpha_c$ does or increase it by up to $\abs{v_2}$ more. Specifically, \[\abs{v_2}(n'+n_b-n_a)\leq \abs{\alpha\cdot W}-\abs{\alpha_c\cdot W}\leq \abs{v_2}(n'+n_b+n_a).\] Similarly for $\beta$: \[\abs{u_2}(n'+n_a-n_b)\leq \abs{\beta\cdot W}-\abs{\beta_c\cdot W}\leq \abs{u_2}(n'+n_a+n_b).\] Of course, since $c\in\supp(\alpha)\cap\supp(\beta)$, we know $\abs{v_2}>0$ and $\abs{u_2}>0$.
Suppose both $\abs{\alpha_c\cdot W}\geq\abs{\alpha\cdot W}$ and $\abs{\beta_c\cdot W}\geq\abs{\beta\cdot W}$. Then $n'+n_b-n_a\leq 0$ and $n'+n_a-n_b\leq 0$. Summing these, we see that $n'\leq 0$, but since $n'$ is a nonnegative integer, $n'=0$. Then $n_b=n_a$, and in this case we have both $\abs{\alpha_c\cdot W}=\abs{\alpha\cdot W}$ and $\abs{\beta_c\cdot W}=\abs{\beta\cdot W}$. By the definition of a peak, either $\abs{\alpha\cdot W}<\abs{W}$ or $\abs{\beta\cdot W}<\abs{W}$. So in this case we have either $\abs{\alpha_c\cdot W}<\abs{W}$ or $\abs{\beta_c\cdot W}<\abs{W}$.
Otherwise, we have either $\abs{\alpha_c\cdot W}<\abs{\alpha\cdot W}$ or $\abs{\beta_c\cdot W}<\abs{\beta\cdot W}$. Since the definition of a peak ensures that both $\abs{\alpha\cdot W}\leq \abs{W}$ and $\abs{\beta\cdot W}\leq \abs{W}$, in this case we also have either $\abs{\alpha_c\cdot W}<\abs{W}$ or $\abs{\beta_c\cdot W}<\abs{W}$.
Without loss of generality we suppose $\abs{\alpha_c\cdot W}<\abs{W}$. Then $(W,\alpha_c,\beta)$ is a peak that can be lowered by the inductive hypothesis. So we append the element $(\alpha_c\alpha^{-1})\in\whset{a}$ onto a peak-lowering factorization for $(W,\alpha_c,\beta)$ to get a peak-lowering factorization for $(W,\alpha,\beta)$.
The case where $c\in\supp(\alpha)\cap\supp(\beta)$ is in a connected component of $\Gamma\setminus\st(a)$ is similar, with analogous definitions of $n_a$, $n_b$ and $n'$. \end{proof}
\begin{lemma}\label{le:classicsteinberg} Suppose $a$ and $b$ are vertices of $\Gamma$ with $[a]\neq[b]$ and $\alpha\in\whset{a}$ and $\beta\in\whset{b}$ are classic long-range Whitehead automorphisms such that: $\alpha(b)=b$, $\supp(\alpha)\cap\supp(\beta)=\varnothing$ and $\beta(a)$ is $a$ or $ab$. Then $\alpha\beta\alpha^{-1}$ is a classic long-range Whitehead automorphism in $\whset{b}$ and for any tuple $W$ of cyclic words, we have \[\abs{W}-\abs{\beta\cdot W}=\abs{\alpha\cdot W}-\abs{\alpha\beta\cdot W}.\] \end{lemma}
\begin{proof} The fact that $\alpha\beta\alpha^{-1}\in\whset{b}$ is relation R4 from Section~2.4 in Day~\cite{Day1}, with $\alpha$ being $(A,a)^{-1}$ and $\beta$ being $(B,b)$ in that statement. The equation of differences of lengths is essentially Sublemma~3.21 from Day~\cite{Day1}, applied twice. Near the end of the proof of that statement, the equation above appears as an inequality ($\alpha$ and $\alpha^{-1}$ are switched and the terms rearranged, but it is the same assertion). The same inequality with $W$ replaced by $\alpha\cdot W$ and $\alpha$ replaced by $\alpha^{-1}$ is the inequality in the reverse direction, proving the equation. \end{proof}
\begin{proposition}\label{pr:nonadjacentasymmetric} Suppose $a$ is not adjacent to $b$, $b$ dominates $a$ and $a$ does not dominate $b$. Then the peak $(W,\alpha,\beta)$ can be lowered. \end{proposition}
\begin{proof} More precisely, we prove: \begin{claim*} Suppose $a$ is not adjacent to $b$, $b$ dominates $a$ and $a$ does not dominate $b$. Then the peak $(W,\alpha,\beta)$ has a peak-lowering factorization such that each automorphism in the factorization is in $\whset{b}$ or is a long-range automorphism in $\whset{a}$. \end{claim*} Since $a$ does not dominate $b$, we know that $\alpha$ acts on $[b]$ by conjugating it by a fixed element of $\genby{[a]}$. Then by Lemma~\ref{le:lowerconj}, we can replace $\alpha$ by its composition with an inner automorphism from $\whset{a}$ and assume that $\alpha$ fixes $[b]$. We also note that since $b$ non-adjacently dominates $a$, $a$ does not adjacently dominate anything and therefore $\alpha$ is long-range and $[a]=\{a\}$ (see Lemma~\ref{le:basicobservations}).
We induct on $\abso{\alpha}$, the length of the outer class of $\alpha$ as a product of Laurence generators. We use as a base case the situation in which $\alpha$ is a classic long-range Whitehead automorphism, which certainly holds if $\abso{\alpha}$ is one. Before proving this case, we prove the inductive step. We peak-reduce $\alpha$ with respect to $W$ by Theorem~\ref{th:longrangepeakreduction}. We select the first automorphism $\alpha_1$ out of such a peak-reducing factorization. Let $\alpha_2=\alpha\alpha_1^{-1}$. Then $\alpha_1$ will be a classic long-range Whitehead automorphism with multiplier $a^{\pm1}$, $\abso{\alpha_2}<\abso{\alpha}$, and $\abs{\alpha_1\cdot W}\leq \abs{W}$, with the inequality being strict if $\abs{\alpha\cdot W}<\abs{W}$. (Specifically, we factor $\alpha$ as a product of transvections and partial conjugations with multipliers $a^{\pm1}$ and apply the algorithm from Day~\cite{Day1}; at each step, the algorithm merges, commutes or splits the automorphisms in the factorization, and the resulting peak-reduced factorization consists entirely of long-range automorphisms with multipliers in $\{a,a^{-1}\}$.) Then $(W,\alpha_1,\beta)$ is a peak. We apply the claim and get a peak-lowering factorization of $\beta\alpha_1^{-1}$. If the last term in the factorization is a long-range automorphism $\alpha'\in\whset{a}$, then $\abs{\alpha'^{-1}\alpha_1\cdot W}<\abs{W}$ by the definition of a peak-lowering factorization. We peak-reduce $\alpha_2\alpha'$ with respect to $\alpha'^{-1}\alpha_1\cdot W$ using Theorem~\ref{th:longrangepeakreduction}. If we replace the element $\alpha'$ in our factorization of $\beta\alpha_1^{-1}$ with this peak-reduced factorization of $\alpha_2\alpha'$, then the result is a peak-lowering factorization of $\beta\alpha^{-1}$. Otherwise the last term in the factorization of $\beta\alpha_1^{-1}$ is an automorphism $\beta'\in\whset{b}$. 
Then $(\alpha_1\cdot W,\alpha_2,\beta')$ is a peak satisfying the hypotheses of the claim, but with $\abso{\alpha_2}<\abso{\alpha}$. Then by induction, we apply the claim and get a peak-lowering factorization of $\beta'\alpha_2^{-1}$. We replace $\beta'$ in the factorization of $\beta\alpha_1^{-1}$ with the entire factorization of $\beta'\alpha_2^{-1}$ to get a peak-lowering factorization of $\beta\alpha^{-1}$.
Now we prove the claim in the base case: with the additional hypothesis that $\alpha$ is a classic long-range Whitehead automorphism with multiplier $a$ (the case where the multiplier is $a^{-1}$ is similar). Let $\beta''$ be the short-range part of $\beta$; in other words let $\beta''$ equal $\beta$ on $\st(b)$ and let $\beta''$ fix all other vertices. Then $\beta''$ will be a product of short-range transvections and inversions and will be a well-defined automorphism. Let $\beta'$ be the difference, the long-range part, so that $\beta=\beta'\beta''$. Everything in $\supp(\beta'')$ is adjacent to $b$ and dominated by $b$, or equal to $b$. Since $a$ is not adjacent to $b$, $a$ cannot be adjacent to anything in $\supp(\beta'')$ (otherwise domination would force $a$ to be adjacent to $b$). Since $\alpha$ fixes $b$, $\alpha$ fixes the entire connected component of $b$ in $\Gamma\setminus\st(a)$. Therefore $\supp(\beta'')\cap\supp(\alpha)=\varnothing$. Since $a$ is not adjacent to $b$, by definition, $\beta''$ fixes $[a]$. Then by Lemma~\ref{le:steinbergrel} and Proposition~\ref{pr:steinberglengthchange}, we know $\alpha$ commutes with $\beta''$ and \[\abs{W}-\abs{\alpha\cdot W}=\abs{\beta''\cdot W}-\abs{\alpha\beta''\cdot W}.\] So $\beta\alpha^{-1}=\beta'\alpha^{-1}\beta''$.
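In detail, this last identity uses only the factorization $\beta=\beta'\beta''$ together with the commutation just established:
\[\beta\alpha^{-1}=\beta'\beta''\alpha^{-1}=\beta'\alpha^{-1}\beta''.\]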
Since we are about to start a step that we will repeat, we relabel $\alpha$ as $\alpha'$ and $\beta'^{-1}\beta\cdot W$ as $W'$. Let $\alpha''$ denote the trivial automorphism at first. Then we have $\beta\alpha^{-1}=\beta'\alpha'^{-1}\beta''\alpha''^{-1}$. We process this factorization into a better one using an algorithm with a loop. If $\abs{W'}<\abs{W}$, then we do not enter the loop and instead skip ahead to the next step. Otherwise $(W',\alpha',\beta')$ is a peak: \[\abs{\beta'\cdot W'}=\abs{\beta\cdot W}\leq \abs{W}\leq\abs{W'}\] and \[\abs{W'}-\abs{\alpha'\cdot W'}=\abs{\beta''\cdot W}-\abs{\alpha'\beta''\cdot W}=\abs{W}-\abs{\alpha'\cdot W}\geq 0,\] with one of these inequalities strict because $(W,\alpha,\beta)$ is a peak.
\begin{claim*} There is an algorithm to iteratively process the factorization $\beta\alpha^{-1}=\alpha''^{-1}\beta''\alpha'^{-1}\beta'$, such that at the beginning of each loop we have \begin{itemize} \item $(W',\alpha',\beta')$ is a peak, where $W'=\beta'^{-1}\beta\cdot W$, \item $\alpha'$ is a non-inner long-range classic Whitehead automorphism, \item $\beta'$ is a non-inner automorphism, \item $\alpha'^{-1}\beta''\alpha'$ is in $\whset{b}$ and \[\abs{W'}-\abs{\alpha'\cdot W'}=\abs{\beta''^{-1}\cdot W'}-\abs{\beta''^{-1}\alpha'\cdot W'}.\] \end{itemize} With each repeat of the loop, we strictly decrease $\abs{W'}$, or we do not increase $\abs{W'}$ but strictly decrease one of $\abso{\alpha'}$ or $\abso{\beta'}$ while leaving the other fixed. At the end of an iteration, either the new $\abs{W'}$ satisfies $\abs{W'}<\abs{W}$ and we exit the loop, or we repeat the loop (and satisfy the conditions for beginning the loop with the relabeled terms). \end{claim*}
We have already shown that if we enter the loop at all, then the beginning conditions are satisfied for the first iteration. Now we explain the algorithm. By construction, $\beta'$ is a long-range automorphism. Using Theorem~\ref{th:longrangepeakreduction}, we peak-reduce $\beta'$ with respect to $W'$ and consider the first automorphism $\beta_0$ in the resulting peak-reduced factorization of $\beta'$. This $\beta_0$ is a classic long-range Whitehead automorphism with multiplier $d^{\epsilon}$ for some $d\in[b]$ and $\epsilon=\pm1$. Since $\abs{\beta'\cdot W'}\leq\abs{W'}$ (because $\abs{\beta'\cdot W'}=\abs{\beta\cdot W}\leq\abs{W}$), it follows from the definition of a peak-reduced factorization that $\abs{\beta_0\cdot W'}\leq\abs{W'}$. Then $(W',\alpha',\beta_0)$ is a peak.
If $\supp(\alpha')\cap\supp(\beta_0)=\varnothing$ and $a\notin\supp(\beta_0)$, then Lemma~\ref{le:classicsteinberg} applies. In particular, $\alpha'\beta_0\alpha'^{-1}\in\whset{b}$ and \[\abs{W'}-\abs{\alpha'\cdot W'}=\abs{\beta_0\cdot W'}-\abs{\alpha'\beta_0\cdot W'}.\] We replace $\beta'$ with $\beta_0^{-1}\beta'$ and replace $\beta''$ with $\beta''\alpha'\beta_0\alpha'^{-1}$. The new $W'$ is $\beta_0\cdot W'$. We go back to the beginning of the loop since the conditions to continue the loop are satisfied. If $\supp(\alpha')\subset\supp(\beta_0)$ and $a\in\supp(\beta_0)$, we replace $\beta_0$ with its complement so that we have $\supp(\alpha')\cap\supp(\beta_0)=\varnothing$ and $a\notin\supp(\beta_0)$. Then we apply the previous case. In these cases we have not increased $\abs{W'}$ or the length of $\alpha'$, but we have decreased the length of $\beta'$. If the new $\beta'$ is inner, then the changes in length imply that the new $W'$ is shorter than $\beta\cdot W$ and is therefore shorter than $W$.
Now we want $a\in\supp(\beta_0)$; if this is not the case, we replace $\beta_0$ with its complement. This replacement does not change $\beta_0\cdot W'$, so $(W',\alpha',\beta_0)$ is still a peak. After performing such a replacement if necessary, we definitely have $a\in\supp(\beta_0)$. Further, since we have just considered this case separately, we can assume that $\supp(\alpha')\not\subset\supp(\beta_0)$. Then we apply Corollary~\ref{co:shorterfactors} to $(W',\alpha',\beta_0)$ and one of the following holds: \begin{itemize} \item there is a classic long-range Whitehead automorphism $\beta_1$ with multiplier $d^{-\epsilon}$ such that $\abs{\beta_1\cdot W'}<\abs{W'}$ and $\supp(\beta_1)\cap\supp(\alpha')=\varnothing$ and $\beta_1$ fixes $[a]$, or \item there is a classic long-range Whitehead automorphism $\alpha_1$ with multiplier $a$ such that $\abs{\alpha_1\cdot W'}<\abs{W'}$ and $\supp(\alpha_1)\subset\supp(\alpha')\cap\supp(\beta_0)$. \end{itemize} If $\supp(\alpha')\cap\supp(\beta_0)=\varnothing$, then the first case holds.
If the second case holds, $\alpha'\alpha_1^{-1}$ commutes with $\beta''$ since $\supp(\alpha'\alpha_1^{-1})\subset\supp(\alpha')$ and $\supp(\alpha')\cap\supp(\beta'')=\varnothing$, and since $\alpha'\alpha_1^{-1}$ fixes $[b]$ and $\beta''$ fixes $[a]$. Note that $\alpha_1\neq\alpha'$ since we assumed $\supp(\alpha')\not\subset\supp(\beta_0)$. So we relabel $\alpha_1$ as $\alpha'$ and relabel $\alpha''$ as $\alpha''\alpha'\alpha_1^{-1}$. Then we still have the conditions we expect at the beginning of the loop. The new $\alpha'$ is nontrivial because $\alpha_1\cdot W'\neq W'$ and $(W',\alpha',\beta')$ is still a peak because $\abs{W'}$ is now strictly greater than $\abs{\alpha'\cdot W'}$ and $\abs{W'}-\abs{\beta'\cdot W'}$ is unchanged. Then we repeat the loop. In this case we have not increased $\abs{W'}$ or the length of $\beta'$ but we have decreased the length of $\alpha'$.
In the first case, Lemma~\ref{le:steinbergrel} and Proposition~\ref{pr:steinberglengthchange} imply that $\alpha'$ and $\beta_1$ commute and that \[\abs{W'}-\abs{\alpha'\cdot W'}=\abs{\beta_1\cdot W'}-\abs{\alpha'\beta_1\cdot W'}.\] We relabel $\beta_1^{-1}\beta'$ as $\beta'$, and relabel $\beta_1\beta''$ as $\beta''$. Note that we still have $\beta\alpha^{-1}=\beta'\alpha'^{-1}\beta''\alpha''^{-1}$. Further, we still have $\beta''$ fixing $[a]$ and $\supp(\beta'')\cap\supp(\alpha')=\varnothing$, since this was true of $\beta_1$ and the old $\beta''$. We relabel $W'$ as $\beta'^{-1}\beta\cdot W=\beta_1\cdot W'$. We also note that the new $\abs{W'}$ is strictly less than the old $\abs{W'}$, since it is the image of the old $W'$ under $\beta_1$. If $\abs{W'}<\abs{W}$, then we exit the loop. Otherwise $(W',\alpha',\beta')$ is still a peak since $\alpha'$ shortens the new and old $W'$ by the same amount. We repeat the loop with the newly relabeled automorphisms. If the new $\beta'$ is inner, then the changes in length imply that the new $W'$ is shorter than $\beta\cdot W$ and is therefore shorter than $W$.
So we leave the loop when $\abs{W'}<\abs{W}$. The sum of differences \[\abs{\alpha''^{-1}\alpha\cdot W}-\abs{\alpha\cdot W}+\abs{W'}-\abs{\alpha'\cdot W'}\] equals $\abs{W}-\abs{\alpha\cdot W}$. These two facts together are enough to deduce that $\beta\alpha^{-1}=\alpha''^{-1}\beta''\alpha'^{-1}\beta'$ is a peak-lowering factorization. \end{proof}
\subsection{Peak reduction and stabilizer presentations} Finally, we use peak reduction to homotope a path in the complex $Z$ from Section~\ref{ss:stabpres} in order to prove Proposition~\ref{pr:homotopeinZ}. We need the following. \begin{lemma}\label{le:mastertuple} Consider the set of conjugacy classes of $A_\Gamma$ consisting of all conjugacy classes of length one represented by a positive generator and all conjugacy classes of length two represented by a product of two non-commuting generators. Let $U$ be a tuple of conjugacy classes whose entries are the elements of this set, in some order. Then $U$ is minimal length in its automorphism orbit.
Further, if $\alpha$ is a generalized Whitehead automorphism sending one minimal-length representative of the orbit of $U$ to another, then $\alpha$ is a permutation automorphism or an inner automorphism. \end{lemma}
\begin{proof} Without loss of generality, we suppose that $U$ is ordered with the conjugacy classes of length one first and the classes of length two following. By peak reduction, to show that $U$ is minimal length in its orbit, it is enough to show that no element of $\whset{a}$ can shorten it, for any $a$ in $\Gamma$. Let $T$ be a syllable decomposition of $U$; we claim that the matrix $\nu(T)$ can be put in $G_{\Q}$--normal form by permuting the rows. To see this we examine the matrix $\nu(T)$.
In the following, $\epsilon$ and $\delta$ are always in $\{1,-1\}$. For each $b$ adjacent to or equal to $a$, the conjugacy class represented by $b^\epsilon$ maps to $\epsilon r_b$ under $\nu$. For each $b$ not adjacent to $a$, with $a$ dominating $b$, the conjugacy class represented by $b^\epsilon$ is a single syllable that maps to $\epsilon r_b+\epsilon l_b$ under $\nu$. For $b$ not adjacent to $a$, with $a$ not dominating $b$, the conjugacy class represented by $b^\epsilon$ is a single syllable that maps to $0$ under $\nu$ (it maps to $r_Y-r_Y$ where $Y$ is the component of $b$ in $\Gamma\setminus\st(a)$). For $b$ and $c$ both adjacent to or equal to $a$ but not adjacent to each other, the conjugacy class of $b^\epsilon c^\delta$ maps to $\epsilon r_b+\delta r_c$. For $b$ adjacent to or equal to $a$ and $c$ not adjacent to $a$ or $b$, but with $a$ dominating $c$,
the conjugacy class of $b^\epsilon c^\delta$ maps to $\epsilon r_b +\delta r_c+\delta l_c$. For $b$ adjacent to or equal to $a$ and $c$ in the component $Y$ of $\Gamma\setminus\st(a)$ (which has at least two vertices),
the conjugacy class of $b^\epsilon c^\delta$ maps to $\epsilon r_b$. For $b$ and $c$ not adjacent to $a$, but with $a$ dominating both $b$ and $c$, the conjugacy class of $(b c)^\epsilon$ splits into two syllables, which map to $\epsilon(r_b+l_c)$ and $\epsilon(r_c+l_b)$, and the conjugacy class of $(bc^{-1})^\epsilon$ splits into two syllables, which map to $\epsilon(r_b-r_c)$ and $\epsilon(l_b-l_c)$. If $b$ is not adjacent to $a$, but $a$ dominates $b$, and $c$ is in the connected component $Y$ of $\Gamma\setminus\st(a)$ (which has at least two elements), then the conjugacy class of $(b c^\delta)^\epsilon$ splits into two syllables, which map to $\epsilon(r_b -r_Y)$ and $\epsilon(l_b+r_Y)$. Finally, if $b$ is in the component $Y$ of $\Gamma\setminus\st(a)$ and $c$ is in the component $Z$ of $\Gamma\setminus\st(a)$ (with both having at least two elements), then the conjugacy class of $b^\epsilon c^\delta$ splits into two syllables that map to $r_Y-r_Z$ and $r_Z-r_Y$.
Let $n$ denote the cardinality of $[a]$ and let $k$ denote the number of basis elements of $Z_{[a]}$ other than the $r_b$ for $b\in[a]$. First we verify that adding any linear combination of the last $k$ rows of $\nu(T)$ to any of the first $n$ does not simplify the matrix. If $c$ does not commute with $a$ and $b$ is in $[a]$, then adding combinations of the row for $r_c$ and $l_c$ to the row for $r_b$ may simplify the columns for $b^\epsilon c^\delta$, but any such action will make the column for $c$ more complicated. By our hypothesis on $U$, the column for $c$ precedes all the $b^\epsilon c^\delta$ columns of $\nu(T)$, and therefore none of these row moves can simplify $\nu(T)$. Moreover, none of the columns for the other kinds of conjugacy classes in $U$ (enumerated above) have a nonzero element in the first $n$ rows and a nonzero element in the last $k$ rows. This implies that adding linear combinations of the last $k$ rows to the first $n$ rows cannot simplify any column of $\nu(T)$ without making a previous column worse. Since the first $\abs{X}$ columns of $\nu(T)$ include the $n$ columns coming from the classes $b$ for $b$ in $[a]$, which map to $r_b$ in each case, and since all other columns with a nonzero entry in the first $n$ rows appear after the first $\abs{X}$ columns, the matrix $\nu(T)$ can be put in $G_{\Q}$--normal form by a permutation of the first $n$ rows.
If an element of $\whset{a}$ shortens $U$, then all it can do is to delete some elements of $[a]$ from somewhere in $U$. Such a deletion would give us a tuple $U'$ with a syllable decomposition $T'$ differing from $T$ only in some deletions of elements of $[a]$. The corresponding matrix $\nu(T')$ would also essentially already be in $G_{\Q}$--normal form (up to a permutation), and therefore $U$ and $U'$ cannot be in the same orbit under $\whset{a}$ by Proposition~\ref{pr:normalformexists}.
To see the second part of the statement, we suppose that $\alpha$ is a generalized Whitehead automorphism sending $U$ to a tuple of the same length. Of course, it is possible that $\alpha$ is a permutation automorphism. We suppose that it is not, so that $\alpha$ is in $\whset{a}$ for some $a$. By the form of $\nu(T)$, we know that the only way $\alpha$ can send $\nu(T)$ to itself is to add a linear combination of the bottom $k$ rows of $\nu(T)$ to some rows in the top, where that linear combination has a trivial value. However, the only such linear combination that is trivial is the one corresponding to conjugation by some element: we add $0$ times each $r_b$ with $b$ commuting with $a$, $+1$ times each $r_c$ and $-1$ times each $l_c$ with $c$ not commuting with $a$ and with $a$ dominating $c$, and $+1$ times each $r_Y$ row. This can be seen from the enumeration of columns above. \end{proof}
\begin{proof}[Proof of Proposition~\ref{pr:homotopeinZ}] Recall that $Z$ is our alleged presentation complex for $\Aut(A_\Gamma)_W$ and we have an edge-loop $p$ based at $W$ that we wish to homotope so that its edge labels are only permutations and inner automorphisms.
We define a non-locally finite graph $\widehat Z$ that is related to $Z$. The vertices of $\widehat Z$ are the same as the vertices of $Z$. The edges are as follows: if $W'$ is a vertex in $\widehat Z$ and $\alpha$ is a generalized Whitehead automorphism in $P$ or in $\whset{a}$ for some $a$ with $\abs{\alpha\cdot W'}=\abs{W'}$, then there is an edge labeled by $\alpha$ from $W'$ to $\alpha\cdot W'$. Of course $Z^1$ is a finite subgraph of $\widehat Z$.
Now we define a map $\phi$ from edge paths in $\widehat Z$ to edge paths in $Z^1$. For edges in $Z^1\subset\widehat Z$, we define $\phi$ to be the identity. Suppose we have an edge in $\widehat Z$ that is not in $Z^1$, starting at a vertex $W'$. Then this edge is labeled by an element $\alpha$ in $\whset{a}$ for some $a\in X$. Let $S$ denote $(X\setminus[a])^{\pm1}\setminus \supp(\alpha)$. Then $\alpha\in\whsetsr{a}{S}$. By the construction of $Z$, there is an edge starting at $W'$ in $Z^1$ labeled by some $\gamma\in\whsetsr{a}{S}$. Since $\alpha\gamma^{-1}\in(\whsetsr{a}{S})_{W'}$, there is an edge path $w$ in the loops at $W'$ representing $\alpha\gamma^{-1}$ (this step involves a choice, but we make these choices once and for all and forget about them). Then $\phi$ sends the edge in $\widehat Z$ starting at $W'$ labeled with $\alpha$ to the concatenation of the path $w$ with the edge labeled by $\gamma$. We extend $\phi$ to all edge paths in $\widehat Z$ by concatenation.
For any edge path in $\widehat Z$, the composition of edge labels gives us the same automorphism before and after applying $\phi$. Further, the initial and terminal vertices of an edge path are the same before and after applying $\phi$.
Our path $p$ corresponds to a factorization $1=\alpha_k\dotsm\alpha_1$ such that the intermediate images $\alpha_i\dotsm\alpha_1\cdot W$ have the same length as $W$ and there is an edge labeled by $\alpha_{i+1}$ from $\alpha_i\dotsm\alpha_1\cdot W$ to $\alpha_{i+1}\dotsm\alpha_1\cdot W$ for each $i$. By Lemma~\ref{le:mastertuple}, there is a tuple $U$ of cyclic words, minimal length in its orbit, with the following property: any generalized Whitehead automorphism sending one minimal length image of $U$ to another must be an inner automorphism or a permutation automorphism. We consider the intermediate images $\alpha_i\dotsm\alpha_1\cdot U$ of $U$ and let $m$ denote the maximum length of any of these. Let $V$ be the concatenation of $U$ with $m$ copies of $W$. Our plan to prove the proposition is to peak-reduce the factorization $\alpha_k\dotsm\alpha_1$ with respect to $V$. Peak reduction proceeds by finding peak-lowering substitutions to apply to peaks (subwords of length two) in the factorization. Since we have $m$ copies of $W$ in $V$, a peak-lowering substitution can never produce a new factorization that has an intermediate image of $W$ longer than the original $W$---any increase in length in an image of $W$ would have to be countered by a decrease in length in an image of $U$ that is greater than the length of the longest intermediate image of $U$. Then if we peak-reduce $\alpha_k\dotsm\alpha_1$ with respect to $V$, at each step the factorization remains the composition of labels on an edge path in $Z$. So to prove the proposition, it is enough to explain how the peak-lowering substitutions from the proof of Theorem~\ref{th:fullfeaturedpeakreduction} correspond to homotopies of the path $p$.
By inspecting Sections~\ref{ss:algbasic} and~\ref{ss:alggen} above, we can see that the peak-lowering substitutions we use are from the following list: \begin{itemize} \item Factorization by classic Whitehead automorphisms: we take a long-range Whitehead automorphism from $\Omega$ and replace it with a product of classic long-range Whitehead automorphisms that follow an edge path in $Z$. \item Applying classic peak reduction to a peak between two classic long-range Whitehead automorphisms. \item Conjugating an automorphism across an inner automorphism as in Lemma~\ref{le:lowerconj}. \item Applying a relation of length three between Whitehead automorphisms with the same multiplier, as in Lemma~\ref{le:samemultset}. \item Conjugating a permutation automorphism across another automorphism, as in Lemma~\ref{le:permlower}. \item Applying a Steinberg relation from Lemma~\ref{le:steinbergrel}. \end{itemize}
We proceed to show how applying any of the above rules to $\alpha_k\dotsm\alpha_1$ corresponds to homotoping the path given by $\phi(\alpha_k\dotsm\alpha_1)$ across some of the $2$--cells in $Z$. In each of the following items, we indicate which cells we need. \begin{itemize} \item First we suppose we replace a long-range automorphism by a peak-reduced product of classic Whitehead automorphisms. We suppose we have a vertex $W'$ in $\widehat Z$ and a long-range generalized Whitehead automorphism $\gamma$ labeling an edge in $\widehat Z$ starting at $W'$. We know $\phi$ sends $\gamma$ to a product $\gamma_2\gamma_1$ labeling edges starting at $W'$ in $Z$, where $\gamma_i$ has the same multiplier and support as $\gamma$, for $i=1,2$, and where $\gamma_1$ fixes $W'$ and $\gamma_2\cdot W'=\gamma\cdot W'$. Both $\gamma_1$ and $\gamma_2$ are long-range automorphisms, so we can drag the path they follow across $2$--cells in $Z$ of type (C2) in order to replace $\gamma_2\gamma_1$ with a product $\delta_l\dotsm\delta_1$ of long-range classic Whitehead automorphisms tracing out an edge path in $Z$. We substitute this factorization back into $\alpha_k\dotsm\alpha_1$, replacing $\gamma$, and homotope the image under $\phi$ across cells of type (C2). \item Now we suppose we must perform classic peak reduction. Suppose $\beta\gamma$ is a subword of length two of $\alpha_k\dotsm\alpha_1$ and $\beta$ and $\gamma$ are classic long-range Whitehead automorphisms that we must peak-reduce with respect to $V'$, an intermediate image of $V$. Since $V'$ is an intermediate image of $V$, it is the concatenation of $U'$ with several copies of $W'$, where $U'$ is an intermediate image of $U$ and $W'$ is an intermediate image of $W$. 
Since $\beta$ and $\gamma$ are classic, $\phi$ sends the edges with these labels in $\widehat Z$ to edges with the same labels in $Z$, and $W'$ is the vertex of $Z$ that these edges are based at. Lowering this peak with respect to $V'$ will produce a factorization that is peak-lowering with respect to $U'$, but which never increases the lengths of the intermediate images of $W$. Therefore lowering this peak with respect to $V'$ will give us a factorization $\beta\gamma=\delta_l\dotsm\delta_1$ that follows a path in $Z$. By Lemma~5.1 of Day~\cite{Day1}, the substitution lowering this peak is an application of a relation between classic Whitehead automorphisms. Since these relations are all witnessed by $2$--cells of type (C3), we can replace $\beta\gamma$ by $\delta_l\dotsm\delta_1$ in $\alpha_k\dotsm\alpha_1$ and homotope $\phi(\alpha_k\dotsm\alpha_1)$ across cells of type (C3). \item Next we suppose that we have a vertex $W'$ in $\widehat Z$, a generalized Whitehead automorphism $\beta\in\whset{b}$ for some $b\in X$ labeling an edge starting at $W'$, and an inner classic Whitehead automorphism $\gamma$, and that the next step in the peak reduction of $\alpha_k\dotsm\alpha_1$ requires us to conjugate $\beta$ across $\gamma$. We know that $\phi(\beta)$ is $\beta_2\beta_1$, where $\beta_1\in(\whset{b})_{W'}$ and $\beta_2\in\whset{b}$ labels an edge starting at $W'$ in $Z^1$. We conjugate $\beta_1$ and $\beta_2$ across $\gamma$, which labels a loop in $Z^1$. This amounts to a homotopy of $\phi(\alpha_k\dotsm\alpha_1)$ across $2$--cells of type (C4). \item Now we suppose $\alpha_k\dotsm\alpha_1$ has a subword $\gamma\beta$ with $\beta,\gamma\in\whset{b}$ for some $b\in X$, and we need to replace $\gamma\beta$ by its product $\delta=\gamma\beta\in\whset{b}$ in the factorization as the next step in the peak reduction. Here $\beta$ is an edge label on an edge originating at a vertex $W'$ in $\widehat Z$. 
Then $\phi(\beta)$ is $\beta_2\beta_1$, $\phi(\gamma)$ is $\gamma_2\gamma_1$, and $\phi(\delta)$ is $\delta_2\delta_1$, where all these new automorphisms are in $\whset{b}$, $\beta_1$, $\gamma_1$ and $\delta_1$ stabilize the vertices they originate at, and $\beta_2$, $\gamma_2$ and $\delta_2$ label edges in $Z^1$. So $\phi(\alpha_k\dotsm\alpha_1)$ will differ before and after the replacement as follows: before we have $\gamma_2\gamma_1\beta_2\beta_1$ and after we have $\delta_2\delta_1$. Pulling $\gamma_1\beta_2$ across cells of type (C5) will replace it with $\beta_2\gamma_1$, where this new $\gamma_1$ is a product of edge labels at $W'$ instead of at $\beta_2\cdot W'$. Then we can homotope $\gamma_2\beta_2$ to $\delta_2\delta_3$ using a (C5) cell, where $\delta_3$ is some product of edge labels at $W'$. Then we have homotoped $\gamma_2\gamma_1\beta_2\beta_1$ to a product of $\delta_2$ with a product of edge labels at $W'$. At this point we only need to use some cells of type (C1) to finish this case. \item For this item we suppose $\alpha_k\dotsm\alpha_1$ has a subword $\gamma\beta$ with $\gamma\in\whset{a}$ for some $a\in X$ and $\beta$ a permutation automorphism. Suppose we need to replace this $\gamma\beta$ with $\beta\delta$, where $\delta=\beta^{-1}\gamma\beta\in\whset{\beta^{-1}(a)}$. Suppose $\gamma$ in this sequence is the label on an edge originating at a vertex $W'$. We know $\phi(\gamma)$ is some $\gamma_2\gamma_1$ where $\gamma_1\in(\whset{a})_{W'}$ and $\gamma_2$ is the label on an edge in $Z$ originating at $W'$, and $\phi(\delta)$ is some $\delta_2\delta_1$ with $\delta_1\in(\whset{\beta^{-1}(a)})_{\beta^{-1}\gamma\cdot W'}$ and $\delta_2$ is the label on an edge in $Z$ originating at $\beta^{-1}\gamma\cdot W'$. We know $\gamma_1$ is represented by some edge path in the loops at $W'$. For each edge in the sequence, we can conjugate $\beta$ across the given edge label, thereby pushing our edge path across a $2$--cell of type (C6). 
Then we conjugate $\beta$ across $\gamma_2$, getting $\beta\delta_2$ together with some product of loops at $\beta^{-1}\gamma\cdot W'$. We use disks of type (C1) to rewrite the remaining sequence as $\beta\delta_2\delta_1$. This homotopes the image of $\alpha_k\dotsm\alpha_1$ under $\phi$ from before the replacement to the image after the replacement. \item Now we suppose we have $\beta\gamma$ in $\alpha_k\dotsm\alpha_1$ with $\beta\in\whset{b}$ and $\gamma\in\whset{a}$ satisfying the hypotheses of Lemma~\ref{le:steinbergrel}. Suppose $\beta$ is an edge label originating at the vertex $W'$, and suppose that to peak-reduce $\alpha_k\dotsm\alpha_1$ we need to conjugate $\gamma$ across $\beta$. Let $\delta=\gamma^{-1}\beta\gamma$, which is in $\whset{b}$ by Lemma~\ref{le:steinbergrel}. Then $\phi$ sends the $\gamma$ originating at $\gamma^{-1}\cdot W'$ to some $\gamma_2\gamma_1$ and sends the $\gamma$ originating at $\delta\gamma^{-1}\cdot W'$ to some $\gamma_4\gamma_3$, where these automorphisms are in $\whset{a}$, we know $\gamma_2$ and $\gamma_4$ label edges leaving $\gamma^{-1}\cdot W'$ and $\delta\gamma^{-1}\cdot W'$ in $Z$, and $\gamma_1$ and $\gamma_3$ fix $\gamma^{-1}\cdot W'$ and $\delta\gamma^{-1}\cdot W'$ respectively. Let $S=(X\setminus[a])^{\pm1}\setminus\supp(\gamma)$. Then $\gamma\in\whsetsr{a}{S}$, and by the construction of $\phi$, we know $\gamma_i$ is as well, for $i=1,2,3,4$. Similarly, $\phi$ sends $\beta$ to $\beta_2\beta_1$ where $\beta_2$ labels an edge of $Z$ starting at $W'$, $\beta_1$ fixes $W'$, $\beta_i\in\whset{b}$ and $\supp(\beta_i)\subset\supp(\beta)$ for $i=1,2$, and $\phi$ sends $\delta$ to $\delta_2\delta_1$ where $\delta_2$ labels an edge of $Z$ starting at $\gamma^{-1}\cdot W'$, $\delta_1$ fixes $\gamma^{-1}\cdot W'$, $\delta_i\in\whset{b}$ and $\supp(\delta_i)\subset\supp(\delta)$ for $i=1,2$. 
These conditions mean that the edges labeled with $\beta_2$ and $\gamma_2^{-1}$ leaving $W'$ in $Z$ satisfy the hypotheses for Lemma~\ref{le:steinbergrel}, and therefore there is a $2$--cell of type (C7) for this relation. We can move $\beta_1$ out of the way using cells of type (C5). Then we slide the edge path across our cell of type (C7), and replace $\gamma_2\beta_2$ with $\gamma_2w_2w_1\delta_2$, where $w_2$ is an edge loop label sequence representing an element of $(\whset{a})_{\delta\gamma^{-1}\cdot W'}$ and $w_1$ is an edge loop label sequence representing an element of $(\whset{b})_{\delta\gamma^{-1}\cdot W'}$. Again we use $2$--cells of type (C5) to move $w_1$ to precede $\delta_2$. We note that the edge labels appearing in $w_1$ and in $\gamma_1$ satisfy the hypotheses of Lemma~\ref{le:steinbergrel}, and so by a sequence of slides across $2$--cells of type (C7) we can move one past the other. Similarly, the labels in $\gamma_1$ together with $\delta_2$ satisfy the hypotheses of Lemma~\ref{le:steinbergrel}, and we can slide $\gamma_1$ across $\delta_2$. Then $\delta_1$ and $w_1$ both represent the same element of $(\whset{b})_{\delta\gamma^{-1}\cdot W'}$ and we can use $2$--cells of type (C1) to homotope one to the other. Similarly we homotope $w_2$ to $\gamma_3\gamma_1$. So we have homotoped $\beta_2\beta_1\gamma_2\gamma_1$ to $\gamma_4\gamma_3\delta_2\delta_1$ relative to endpoints, so we have homotoped the image of $\alpha_k\dotsm\alpha_1$ under $\phi$ before this move to the image after this move. \end{itemize}
So we peak-reduce $\alpha_k\dotsm\alpha_1$ with respect to $V$ and homotope $\phi(\alpha_k\dotsm\alpha_1)$ at each step, so our edge path describes an edge loop in $\widehat Z$ that sends each intermediate image of $U$ to one of the same length as $U$. Then each $\alpha_i$ is an inner automorphism or a permutation automorphism by Lemma~\ref{le:mastertuple}. \end{proof}
\end{document}
\begin{document}
\centerline{\bf Int. Journal of Math. Analysis, Vol. 4, 2010, no. 15, 713 - 720}
\centerline{}
\centerline{}
\centerline {\Large{\bf Proximinal and \u{C}EBY\u{S}EV Sets }}
\centerline{}
\centerline{\Large{\bf in Normed Linear Spaces}}
\centerline{}
\centerline{\bf {Hadi Haghshenas}}
\centerline{}
\centerline{Department of Mathematics, Birjand University, Birjand, Iran}
\centerline{h$_{-}$haghshenas60@yahoo.com}
\centerline{}
\centerline{}
{\bf Abstract.} In this paper, we study the part of approximation theory concerned with conditions under which a closed set in a normed linear space is proximinal or \u{C}eby\u{s}ev.
\centerline{}
{\bf Mathematics Subject Classification:} 46B20. \\
{\bf Keywords:} Best approximation; Proximinal set; \u{C}eby\u{s}ev set; Strictly convex space; Uniformly convex space; Gateaux differentiability; Fr\'{e}chet differentiability.
\section{Basic Definitions and Preliminaries}
In this section, we collect some definitions which will help us to describe our results in detail. As the first step, let us fix our notation. Throughout this paper, $K$ denotes a non-empty subset of a real normed linear space $(X,\|.\|)$ with topological dual space $X^{*}$, $S(X)=\{x\in X;\|x\|=1\}$, $B[x;r]=\{y\in X;
\|y-x\| \leq r\}$ and $B(X)=B[0;1]$.\\For an element $x \in X$, we define the distance function $d_{K}: X \rightarrow \Bbb{R}$ by
$d_{K}(x)= \inf \{\|y-x\| ; y\in K \}$. It is easy to see that the value of $d_{K}(x)$ is zero if and only if $x$ belongs to $\overline{K}$, the closure of $K$. The subset $K$ is called proximinal (resp. \u{C}eby\u{s}ev) if for each $x \in X \setminus K$, the set of best approximations to $x$ from $K$
$$P_{K}(x)= \{y \in K; \|y-x\|=d_{K}(x)\},$$is nonempty (resp. a singleton). This concept was introduced by S. B. Stechkin and named after the founder of best approximation theory, \u{C}eby\u{s}ev.\\It is interesting to know the sufficient conditions for a subset $K$ of a given normed linear space to be a proximinal or a \u{C}eby\u{s}ev set, and this is what we want to consider in this paper.\\ It is not difficult to show that every proximinal subset $K$ of $X$ is also closed. Now, we state and prove a sufficient condition for proximinality:\\If $K$ is a closed subset of a finite-dimensional space $X$, then $K$ is proximinal. To see this, suppose that $x_{0} \in X \backslash K$
and $r_{0}=d_{K}(x_{0})$. If $r > r_{0}$, then there exists $y \in K$ such that $\|x_{0}-y\|<r$. Therefore $y \in B[x_{0};r] \cap K$, so $B[x_{0};r] \cap K \neq \emptyset$. If $B_{n}=B[x_{0};r_{0}+\frac{1}{n}] \cap K$, then $B_{n}$ is a non-empty compact subset of $X$ and $B_{n+1} \subseteq B_{n}$ for all $n \geq 1$. Hence, by Cantor's intersection theorem, there exists $y_{0} \in K$ such that
$y_{0} \in \displaystyle{\ \bigcap_{n=1}^{\infty}B_{n}}$. Now, we have $\|y_{0}-x_{0}\| \leq r_{0}+\frac{1}{n}$ for all $n \geq 1$. Since $r_{0}=d_{K}(x_{0})$ we have
$\|y_{0}-x_{0}\|=r_{0}=d_{K}(x_{0})$. Thus $y_{0}$ is a best approximation for $x_{0}$ and therefore $K$ is a proximinal set.\\In general, since the functional $e_{x}:
K\rightarrow\Bbb{R}$ with $e_{x}(y)=\|y-x\|$ is continuous, each compact subset of $X$ is proximinal.\\It is easy to see that in a reflexive space, every weakly closed set is proximinal.\\ \textbf{Question. }Is there a closed nonempty subset $K$ of a reflexive Banach space $X$ with the property that no point outside $K$ admits a best approximation in $K$? Is this possible in an equivalent renorm of a Hilbert space? The Lau-Konjagin theorem (see \cite{2}) states that in a reflexive Banach space $X$, for every closed set $K$ there is a dense set in $X \setminus K$ which admits best approximations if and only if the norm has the Kadec-Klee property (i.e., for each sequence $(x_{n})_{n=1}^{\infty}$ in $X$ which converges weakly to $x$ with
$\displaystyle{\lim_{n\rightarrow \infty}\|x_{n}\|= \|x\|}$, we have $\displaystyle{\lim_{n\rightarrow \infty}\|x_{n}-x\|= 0}$). \\Every closed convex set in a reflexive space is proximinal \cite{2}. However, this theorem is not true in the absence of reflexivity; that is, the reflexivity assumption cannot be dropped, as the following example shows. Let $X=l^{1}$. It is known that $l^{1}$ is a non-reflexive Banach space with dual space $l^{\infty}$. For any positive integer $n$, let $e_{n} \in l^{1}$ be such that its $n$th entry is $\frac{n+1}{n}$ and all other entries are $0$. Let $K=\overline{co}\{e_{1},e_{2}, \dots, e_{n}, \dots \}$. Then $K$ is a closed convex subset of $l^{1}$ which is not proximinal.\\Another important notion in this paper is the metric projection. The metric projection mapping has been used in many areas of mathematics, such as the theory of optimization and approximation, and fixed point theory. It is a set-valued mapping $P_{K}: X \rightarrow K$ which associates to each $x$ in $X$ the set of all its best approximations, namely $P_{K}(x)$. The sequence $(y_{n})_{n=1}^{\infty} \subseteq K$ is called minimizing for $x \in X \backslash K$ if $\displaystyle{\lim_{n\rightarrow
\infty}\|x-y_{n}\|= d_{K}(x)}$, and we say that the metric projection $P_{K}$ is continuous at $x \in X \backslash K$ provided that whenever $y_{n} \in P_{K}(x_{n})$ for each $n \in \Bbb{N}$ and $\displaystyle{\lim_{n\rightarrow \infty}x_{n}=x}$, the sequence $(y_{n})_{n=1}^{\infty}$ converges to some $y_{0} \in P_{K}(x)$. It is clear that $P_{K}$ is continuous at $x$ if every minimizing sequence for $x \in X \backslash K$ converges \cite{10}. The continuity properties of $P_{K}$ are a natural object of study in understanding the nature of some problems in approximation theory. In the linear case, many results show the connection between the continuity properties and the geometry of the Banach space (see \cite{12}). We use this property to prove our main result.\\In order to give sufficient conditions for a set to be proximinal, N. V. Efimov and S. B. Stechkin introduced the concept of approximatively compact sets. The set $K$ is said to be approximatively compact if, for any $x \in X$, each minimizing sequence $(y_{n})_{n=1}^{\infty} \subseteq K$ for $x$ has a subsequence converging to an element of $K$. It is proved in \cite{12} that every approximatively compact set is proximinal. But a proximinal set need not be approximatively compact \cite{13}. \\We say that $K$ is boundedly compact provided that $K \cap B[0;r]$ is compact in $X$ for every $r\geq 0$. Every boundedly compact set is approximatively compact, although the converse is false. Thus, every boundedly compact set is proximinal, too. It is easy to verify that if $K$ is a boundedly compact \u{C}eby\u{s}ev set in $X$, then the metric projection of $X$ onto $K$ is continuous. Hence, in a finite dimensional space, every \u{C}eby\u{s}ev set has a continuous metric projection.\\ Let $f:X\rightarrow \Bbb{R}$ be a function and $x,y\in X$. Then $f$ is said to be Gateaux differentiable at $x$ if there exists
$A\in X^{*}$ such that $A(y)=\displaystyle{\lim_{t\rightarrow 0}\frac{f(x+ty)-f(x)}{t}}$. In this case $A$ is called the Gateaux derivative of $f$ and is denoted by $f'(x)$; also, $A(y)$ is usually denoted by $\langle f'(x),y\rangle$. If the above limit exists uniformly in $y\in S(X)$, then $f$ is said to be Fr\'{e}chet differentiable at $x$ with Fr\'{e}chet derivative $A$. Similarly, the norm $\|.\|$ is Gateaux (Fr\'{e}chet) differentiable at $0 \neq x \in X$ if the function $f(x)=\|x\|$ is Gateaux (Fr\'{e}chet) differentiable at $x$.\\It is well known that if $f:X\rightarrow \Bbb{R}$ is Fr\'{e}chet differentiable at $x \in X$, then for given $\varepsilon > 0$ there exists $\delta=\delta_{(x,\varepsilon)}> 0$ such that
$|f(x+y)-f(x)-\langle f'(x),y\rangle|\leq\varepsilon\|y\|$, for each $y \in X
$ with $\|y\|< \delta$.\section{Main Results}We start our work with the following lemma:\\ \textbf{Lemma 1. }Suppose $K$ is closed and the distance function $d_{K}$ is Gateaux differentiable at $x \in X \backslash K$. Then for every $y \in P_{K}(x)$ we have $\langle d'_{K}(x),\frac{x-y}{\|x-y\|}\rangle=1$.\begin{proof}At first, from Gateaux differentiability of $d_{K}$, the limit $$\displaystyle{\liminf_{t\rightarrow 0^{+}}\frac{d_{K}(x+tz)-d_{K}(x)}{t}},$$ exists for every $z \in X$. But for each $t > 0$$$d_{K}(x+t(x-y))-d_{K}(x)\leq t d_{K}(x).$$Hence, in particular, for $z=x-y$$$\displaystyle{\liminf_{t\rightarrow 0^{+}}\frac{d_{K}(x+tz)-d_{K}(x)}{t}=d_{K}(x)}.$$Now if $t'=\frac{t}{d_{K}(x)}$ (notice that $d_{K}(x)>0$ ) then
$$\displaystyle{\liminf_{t'\rightarrow 0^{+}}\frac{d_{K}(x+t'(x-y))-d_{K}(x)}{t'}=d_{K}(x)},$$ and consequently$$\displaystyle{\liminf_{t\rightarrow 0^{+}}\frac{d_{K}(x+t\frac{x-y}{\|x-y\|})-d_{K}(x)}{t}=1}.$$On the other hand, since distance functions are Lipschitz (with constant 1) we have$$\displaystyle{\limsup_{t\rightarrow 0^{+}}\frac{d_{K}(x+t\frac{x-y}{\|x-y\|})-d_{K}(x)}{t}\leq1},$$as required.\end{proof}We say that a non-zero element $x^{*} \in X^{*}$ strongly exposes $B(X)$ at $x \in S(X)$, provided a sequence $(z_{n})_{n=1}^{\infty}$ in $B(X)$ converges to $x$ whenever $(\langle x^{*},z_{n}\rangle)_{n=1}^{\infty}$ converges to $\langle x^{*},x\rangle$.\\The following theorem is the same as Theorem 2.6 in \cite{10}, but with some manipulation, and plays a key role in our work:\\\textbf{Theorem 2. }Suppose $K$ is closed in $X$ and $d_{K}$ is Fr\'{e}chet differentiable at $x \in X
\backslash K$. Moreover $y \in P_{K}(x)$ and $d'_{K}(x)$ strongly exposes $B(X)$ at $\|x-y\|^{-1}(x-y)$. Then every minimizing sequence $(y_{n})_{n=1}^{\infty}$ in $K$ for $x$ converges to $y$.\begin{proof}We can choose a sequence $(a_{n})_{n=1}^{\infty}$ of positive numbers such that $\displaystyle{\lim_{n\rightarrow \infty}a_{n}=0}$ and
$$a_{n}^{2}>\|x-y_{n}\|-d_{K}(x) \hspace{2 cm}(n \in \Bbb{N}).$$Hence, if $0 < t < 1$ then for each $n \in \Bbb{N}$ \begin{eqnarray*}
d_{K}(x+t(y_{n}-x))& \leq &\|x+t(y_{n}-x)-y_{n}\| \\& = &(1-t)\|x-y_{n}\|\\
& < & (1-t)(a_{n}^{2}+d_{K}(x)). \end{eqnarray*} Therefore $$d_{K}(x)-d_{K}(x+t(y_{n}-x))\geq td_{K}(x)-2a_{n}^{2}.$$Fix $\varepsilon > 0$. By Fr\'{e}chet differentiability of $d_{K}$, there is $\delta > 0$ such that if
$\|y\| < \delta$ then $$|d_{K}(x+y)-d_{K}(x)-\langle d'_{K}(x),y\rangle|\leq \varepsilon \|y\|\hspace{1 cm}(*).$$Let
$t_{n}=\frac{a_{n}}{\|x-y_{n}\|}$; then $\|t_{n}(y_{n}-x)\|=a_{n}< \delta$ for large $n$. Replacing $y$ by $t_{n}(y_{n}-x)$ in $(*)$ we get, for such $n$, \begin{eqnarray*}
\varepsilon t_{n} \|x-y_{n}\| - \langle d'_{K}(x),t_{n}(y_{n}-x)\rangle & \geq & d_{K}(x)-d_{K}(x+t_{n}(y_{n}-x))\\& \geq & t_{n}d_{K}(x)- 2 a_{n}^{2},
\end{eqnarray*}whence$$\langle d'_{K}(x),t_{n}(x-y_{n})\rangle \geq - \varepsilon a_{n} - 2 a_{n}^{2}+t_{n} d_{K}(x), $$therefore$$\langle d'_{K}(x),\|x-y_{n}\|^{-1}(x-y_{n})\rangle \geq - \varepsilon - 2 a_{n} + \frac{d_{K}(x)}{\|x-y_{n}\|}.$$Since $\varepsilon > 0$ is arbitrary, $\displaystyle{\lim_{n\rightarrow \infty}a_{n}=0}$,
$\displaystyle{\lim_{n\rightarrow \infty} \|x-y_{n}\|= d_{K}(x)}$, we will have$$1 \geq \displaystyle{\liminf_{n\rightarrow \infty}
\langle d'_{K}(x),\|x-y_{n}\|^{-1}(x-y_{n}) \rangle}\geq \displaystyle{\liminf_{n\rightarrow
\infty}\frac{d_{K}(x)}{\|x-y_{n}\|}}=1,$$and therefore, by Lemma 1,
$$\displaystyle{\lim_{n\rightarrow \infty} \langle d'_{K}(x),\|x-y_{n}\|^{-1}(x-y_{n})\rangle}=1= \langle d'_{K}(x),\|x-y\|^{-1}(x-y)\rangle.$$Since $d'_{K}(x)$ strongly exposes $B(X)$ at $\|x-y\|^{-1}(x-y)$, we deduce that $$\displaystyle{\lim_{n\rightarrow \infty}
\|x-y_{n}\|^{-1}(x-y_{n})}=\|x-y\|^{-1}(x-y),$$ which yields $\displaystyle{\lim_{n\rightarrow \infty}y_{n}=y}$.\end{proof}It is interesting to know that if $K$ is closed in $X$, $x \in X
\backslash K$ and $(y_{n})_{n=1}^{\infty}$ is a minimizing sequence in $K$ for $x$ with the weak limit $y \in K$, then $y$ is a best approximation for $x$ in $K$. This is because the norm is a lower-semi-continuous function with respect to weak topology and we have$$d_{K}(x)\leq\|x-y\|\leq
\displaystyle{\liminf_{n\rightarrow \infty}\|x-y_{n}\|}\leq \displaystyle{\lim_{n\rightarrow
\infty}\|x-y_{n}\|}=d_{K}(x).$$\textbf{Theorem 3. }\cite{6} The dual norm of $X^{*}$ is Fr\'{e}chet differentiable at $x^{*} \in X^{*}$ if and only if $x^{*}$ strongly exposes $B(X)$.\\\textbf{Corollary 4. } Let $K$ be closed in $X$, let the distance function $d_{K}$ be Fr\'{e}chet differentiable at $x \in X \backslash K$, and let the dual norm of $X^{*}$ be Fr\'{e}chet differentiable. Then each minimizing sequence in $K$ for $x$ is convergent.\begin{proof}Combine Theorem 2 with Theorem 3.\end{proof}\textbf{Corollary 5. } Let $K$ be closed in $X$, $x \in X \backslash K$, and let the distance function $d_{K}$ be Fr\'{e}chet differentiable at $x$. Also, assume that the dual norm of $X^{*}$ is Fr\'{e}chet differentiable. Then the metric projection $P_{K}$ is continuous at $x$.\\
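As a concrete finite-dimensional illustration of these corollaries (a numerical sketch of our own; the set $K$, the point $x$, and the minimizing sequence below are assumptions made only for the example), take $K$ to be the closed unit disk in the Euclidean plane, which is uniformly convex: every minimizing sequence converges to the unique best approximation.

```python
import math

# K = closed unit disk in R^2 (closed and convex; the Euclidean plane is
# uniformly convex).  For x outside K the unique best approximation is x/||x||.
x = (3.0, 4.0)
norm_x = math.hypot(x[0], x[1])            # ||x|| = 5
d_K = norm_x - 1.0                         # d_K(x) = 4
p = (x[0] / norm_x, x[1] / norm_x)         # P_K(x) = (0.6, 0.8)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# A minimizing sequence y_n in K: unit vectors whose angle tends to that of x.
theta = math.atan2(x[1], x[0])
def y(n):
    t = theta + 1.0 / n
    return (math.cos(t), math.sin(t))

# ||x - y_n|| decreases to d_K(x) and y_n converges to P_K(x).
for n in (1, 10, 100, 1000):
    print(n, dist(x, y(n)) - d_K, dist(y(n), p))
```

Replacing the disk by a non-convex closed set (say, the unit circle) keeps proximinality but destroys uniqueness of best approximations at some points, which is why convexity enters the \u{C}eby\u{s}ev discussion later in this section.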
We say that the space $X$ is strictly convex (rotund) if $x=y$
whenever $\|x\|= \|y\|=\|\frac{x+y}{2}\|=1$, and $X$ is called uniformly convex if for all sequences $(x_{n})_{n=1}^{\infty}, (y_{n})_{n=1}^{\infty} \subseteq X$ with $$\displaystyle{\lim_{n\rightarrow
\infty}2\|x_{n}\|^{2}+2\|y_{n}\|^{2}-\|x_{n}+y_{n}\|^{2}}=0,$$we have $$\displaystyle{\lim_{n\rightarrow
\infty}\|x_{n}-y_{n}\|}=0.$$Obviously, uniformly convex Banach spaces are strictly convex and, moreover, reflexive (Milman-Pettis).\\\textbf{Remark 6. }It is a well known theorem that the dual norm of $X^{*}$ is Fr\'{e}chet differentiable if and only if $X$ is uniformly convex. Therefore, we have the following corollary.\\\textbf{Corollary 7. }Suppose that $K$ is closed in a uniformly convex space $X$, $x \in X \backslash K$, and that $d_{K}$ is Fr\'{e}chet differentiable at $x$. Then the metric projection $P_{K}$ is continuous at $x$.\\Concerning weakly closed sets, each weakly closed set in a uniformly convex Banach space has a continuous metric projection \cite{8}.\\It is proved that closed convex sets in strictly convex reflexive Banach spaces (and consequently in uniformly convex Banach spaces) are \u{C}eby\u{s}ev (see \cite{6}). Can we prove that in some Banach spaces, a nonempty subset is a \u{C}eby\u{s}ev set if and only if it is closed and convex? This is an open problem, even in the special case of infinite-dimensional Hilbert space (see \cite{4}). In 1934, L. N. H. Bunt proved that each \u{C}eby\u{s}ev set in a finite-dimensional Hilbert space must be convex. From this result, we see that in a finite-dimensional Hilbert space, a nonempty subset is a \u{C}eby\u{s}ev set if and only if it is closed and convex. In \cite{11}, G. G. Johnson gave an example: there exists an incomplete inner product space which possesses a non-convex \u{C}eby\u{s}ev set (M. Jiang completed the proof in 1993). Is there an infinite-dimensional Hilbert space possessing a non-convex \u{C}eby\u{s}ev set? As noted above, this is unknown. Now, in the last part of the paper, we present a condition under which a closed subset is \u{C}eby\u{s}ev. \\It can be seen in \cite{8} that if the dual norm of $X^{*}$ is Fr\'{e}chet differentiable, then the closed sets in $X$ with a continuous metric projection are \u{C}eby\u{s}ev.\\ Finally, the following is immediate from Corollary 7.
\\\textbf{Corollary 8. }Let $K$ be closed in a uniformly convex Banach space $X$ and let $d_{K}$ be Fr\'{e}chet differentiable at each $x \in X \backslash K$. Then $K$ is \u{C}eby\u{s}ev in $X$.\\Corollary 8 is also valid in infinite-dimensional Hilbert spaces. \section{Acknowledgments} The author is indebted to his supervisor, Professor Amanollah Assadi, for useful remarks while this work was in progress. Helpful comments by Mr. H. Hosseini Guive are also gratefully appreciated.
\centerline{}
{\bf Received: September, 2009}
\end{document}
\begin{document}
\title{On Characterizations of Metric Regularity\\ of Multi-valued Maps \footnote{Research supported by the Scientific Fund of Sofia University under grant 80-10-133/25.04.2018.}}
\author{M. Ivanov and N. Zlateva} \date{\today}
\maketitle
\centerline{\textsl{Dedicated to Professor Alexander D. Ioffe}}
\begin{abstract}
We provide a new proof, along the lines of the recent book of A. Ioffe, of a result of H. Frankowska from the 1990s showing that metric regularity of a multi-valued map can be characterized in terms of its contingent variation -- a notion extending the contingent derivative. \end{abstract}
\textbf{Keywords:} surjectivity, metric regularity, multi-valued map.
\emph{AMS Subject Classification}: 49J53, 47H04, 54H25.
\section{Introduction} Metric regularity, as well as, the equivalent to it linear openness and pseudo-Lipschitz property of the inverse, are very important concepts in Variational Analysis. They have been intensively studied as it can be seen in a number of recent monographs, e.g. \cite{borwein-zhu-book, Penot-book, DR_book, Mordukhovich-book} and the references therein. A very rich and instructive survey on metric regularity is the book of A. Ioffe~\cite{Ioffe_book}.
In Chapter V of \cite{Ioffe_book} the modulus of regularity of a multi-valued map between Banach spaces is estimated in terms of the tangential cones to its graph. The estimates are precise, but they are not characteristic: in infinite dimensions a map may well be regular while the tangential cones to its graph are insufficiently informative; for details see \cite{franqui}.
In \cite{Frankowska} H. Frankowska introduced the notion of the contingent variation of a multi-valued map, which extends the Bouligand tangential cone. This notion precisely characterizes metric regularity.
Let $(X,d)$ and $(Y,d)$ be metric spaces and let
$$
F:X\rightrightarrows Y
$$
be a multi-valued map. If $V\subset Y$ the restriction $F^V$ is defined by
$$
F^V(x) := F(x)\cap V,\quad\forall x\in X,
$$
see \cite[p.54]{Ioffe_book}. The properties related to the so restricted map are called \textit{restricted}.
For example, the multi-valued map $F:X\rightrightarrows Y$ is called \textit{restrictedly Milyutin regular} on $(U,V)$, where $U\subset X$ and $V\subset Y$, if there exists a number $r>0$ such that
$$
B(v,rt)\cap V\subset F(B(x,t))
$$
whenever $(x,v)\in \mathrm{Gr}\, F\cap (U\times V)$ and $B(x,t)\subset U$, where $B(x,t)$ is the closed ball with center $x$ and radius $t$: $B(x,t):=\{ u\in X:\ d(u,x)\le t \}$, and $\mathrm{Gr}\, F = \{ (x,v):\ v\in F(x) \}$.
The supremum of all such $r$ is called \textit{modulus of surjection}, denoted by
$$
\mathrm{sur}_mF^V(U|V).
$$
By convention, $\mathrm{sur}_mF^V(U|V) = 0$ means that $F$ is not restrictedly Milyutin regular on $(U,V)$.
This notion taken from \cite{Ioffe_book} is explained in great detail in Section~\ref{sec:milutin} below.
In the literature, e.g. \cite[Section~5.2]{Ioffe_book}, there are various estimates of $\mathrm{sur}_mF^V(U|V)$ and related moduli in terms of derivative-like objects. Unlike the so called \textit{co-derivative criterion}, see \cite[Section~5.2.3]{Ioffe_book}, most of the \textit{primal} estimates are not characteristic in general. Here we re-establish one primal criterion which complements \cite[Section~5.2.2]{Ioffe_book} and is, moreover, characteristic. It is essentially done by H. Frankowska in \cite{Frankowska}, see also \cite{franqui}. There a new derivative-like object is defined as follows.
Let $(X,d)$ be a metric space, $(Y, \|\cdot \|)$ be a Banach space,
$
F:X\rightrightarrows Y
$
be a multi-valued map.
For $(x,y)\in \mathrm{Gr}\, F$ the \textit{contingent variation} of $F$ at $(x,y)$ is the closed set
$$
F ^{(1)} (x,y):= \limsup _{t\to 0^+} \frac{F(B(x,t))-y}{t},
$$ where $\limsup$ stands for the Kuratowski limit superior of sets.
Equivalently, $v\in F ^{(1)} (x,y)$ exactly when there exist a sequence of reals $t_n\downarrow 0$ and a sequence $(x_n,y_n)\in \mathrm{Gr}\, F$ such that $d(x,x_n)\le t_n$ and \[
\left \| v-\frac {y_n-y}{t_n}\right \| \to 0, \mbox { when } n\to \infty. \]
This notion extends the so-called contingent, or graphical, derivative usually denoted by $DF(x,y)$, e.g. \cite[pp.163, 202]{Ioffe_book}.
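As a sketch (our own illustration, with the assumed single-valued map $F(x)=\{2x\}$ on the real line), the contingent variation can be sampled numerically: for every $t>0$ the difference quotient set $(F(B(x,t))-y)/t$ is the whole interval $[-2,2]$, so $F^{(1)}(x,2x)=[-2,2]$ records all directions reachable from the ball, not just those along a fixed direction.

```python
# Sample (F(B(x,t)) - y) / t for the single-valued map F(u) = {2u} on the real
# line at the graph point (x, y) = (1, 2).  The set equals [-2, 2] for every
# t > 0, so the contingent variation F^(1)(1, 2) is the full interval [-2, 2].
x, y = 1.0, 2.0
F = lambda u: 2.0 * u

def variation_sample(t, steps=1000):
    pts = [x - t + 2.0 * t * k / steps for k in range(steps + 1)]   # B(x, t)
    return [(F(u) - y) / t for u in pts]

for t in (1.0, 0.1, 0.001):
    s = variation_sample(t)
    print(t, min(s), max(s))   # endpoints stay near -2 and 2 as t -> 0+
```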
Our main result can now be stated. As usual, $B_Y$ denotes the closed unit ball of the Banach space $(Y,\| \cdot \|)$.
\begin{thm}\label{F_result}
Let $(X,d)$ be a metric space and $(Y,\| \cdot \|)$ be a Banach space, let $U\subset X$ and $V\subset Y$ be non-empty open sets. Let
$$
F:X\rightrightarrows Y
$$
be a multi-valued map with complete graph.
$F$ is restrictedly Milyutin regular on $(U,V)$ with $\mathrm{sur} _m F^V(U\vert V)\ge r>0$ if and only if
\begin{equation}\label{eq:fr-1}
F^{(1)}(x,v)\supset r B_Y\ \mbox{ for all }(x,v)\in \mathrm{Gr}\, F \cap (U\times V).
\end{equation} \end{thm}
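A numerical sanity check of the theorem on a deliberately simple example of our own choosing: for the linear map $F(x)=\{2x\}$ on the real line, $F(B(x,t))=[2x-2t,\,2x+2t]$, so the covering $B(v,rt)\subset F(B(x,t))$ holds exactly for rates $r\le 2$, matching $F^{(1)}(x,2x)=[-2,2]\supset rB_Y$ for $r\le 2$.

```python
# For F(u) = {2u} on the real line, check B(v, r t) subset of F(B(x, t))
# by sampling the ball B(v, r t): the inclusion holds iff r <= 2.
def covered(r, x=0.3, t=0.25, samples=101):
    lo, hi = 2 * x - 2 * t, 2 * x + 2 * t          # F(B(x, t))
    v = 2 * x                                      # v in F(x)
    ys = [v - r * t + 2 * r * t * k / (samples - 1) for k in range(samples)]
    return all(lo - 1e-12 <= z <= hi + 1e-12 for z in ys)

print(covered(2.0))   # True: the rate r = 2 is achieved
print(covered(2.5))   # False: no rate better than 2
```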
This result is essentially established by H. Frankowska in \cite[Theorem 6.1 and Corollary 6.2]{Frankowska}. However, there it is presented as a characterization of the local modulus of regularity in terms of the local variant of the condition~\eqref{eq:fr-1}. Here we render the characterization global. The technique in \cite{Frankowska} is different, but it likewise depends on the Ekeland Variational Principle.
The rest of the article is organized as follows. In Section~\ref{sec:milutin} we provide for reader's convenience the relevant material from \cite{Ioffe_book}. We also present in another form the first criterion for Milyutin regularity from \cite{Ioffe_book}. In Section~\ref{sec:main} we prove Theorem~\ref{F_result}.
\section{Milyutin regularity}\label{sec:milutin}
Let $(X,d)$ and $(Y,d)$ be metric spaces. Let $U\subset X$ and $V\subset Y$, let $F:X \rightrightarrows Y$ be a multi-valued map and let $\gamma (\cdot)$ be an extended real-valued function on $X$ assuming positive (possibly infinite) values on $U$.
\begin{dfn}(\textbf{linear openness}, \cite[Definition 2.21]{Ioffe_book}) $F$ is said to be $\gamma$-open at linear rate on $(U,V)$ if there is an $r>0$ such that \[ B(F(x),rt)\cap V\subset F(B(x,t)), \] if $x\in U$ and $t<\gamma (x)$, i.e. \[ B(v,rt)\cap V\subset F(B(x,t)), \] whenever $(x,v)\in \mathrm{Gr}\, F$, $x\in U$ and $t<\gamma (x)$. \end{dfn}
Denote by $\mathrm{sur} _\gamma F(U|V)$ the upper bound of all such $r>0$ and call it \emph{modulus of $\gamma$-surjection} of $F$ on $(U,V)$. If no such $r$ exists, set $\mathrm{sur} _\gamma F(U|V){=}0$.
\begin{dfn}(\textbf{metric regularity}, \cite[Definition 2.22]{Ioffe_book}) $F$ is said to be $\gamma$-metrically regular on $(U,V)$ if there is $\kappa >0$ such that \[ d(x,F^{-1}(y))\le \kappa d(y,F(x)), \] provided $x\in U$, $y\in V$ and $\kappa d(y,F(x))<\gamma (x)$. \end{dfn}
Denote by $\mathrm{reg} _\gamma F(U|V)$ the lower bound of all such $\kappa >0$ and call it \emph{modulus of $\gamma$-metric regularity} of $F$ on $(U,V)$. If no such $\kappa$ exists, set $\mathrm{reg} _\gamma F(U|V)=\infty $.
\begin{thm}(\textbf{equivalence theorem}, \cite[Theorem 2.25]{Ioffe_book})\label{Theorem 2.25 Ioffe} The following are equivalent for any metric spaces $X$, $Y$, any $F:X\rightrightarrows Y$, any $U\subset X$, $V\subset Y$ and any extended real-valued function $\gamma (\cdot)$ which is positive on $U$:
a) $F$ is $\gamma$-open at linear rate on $(U,V)$;
b) $F$ is $\gamma$-metrically regular on $(U,V)$.
Moreover (under the convention $0\cdot\infty =1$), \[
\mathrm{sur} _\gamma F(U|V)\cdot\mathrm{reg} _\gamma F(U|V)=1. \] \end{thm}
\begin{dfn}(\textbf{regularity}, \cite[Definition 2.26]{Ioffe_book}) We say that $F:X\rightrightarrows Y$ is $\gamma$-regular on $(U,V)$ if the equivalent properties of Theorem~\ref{Theorem 2.25 Ioffe} are satisfied. \end{dfn}
\begin{dfn}(\textbf{Milyutin regularity}, \cite[Definition 2.28]{Ioffe_book}) Set \[ m_U(x):=d(x, X\setminus U). \] We shall say that $F$ is Milyutin regular on $(U, V)$ if it is $\gamma$-regular on $(U,V)$ with $\gamma (x)=m_U(x)$. \end{dfn}
We will need also \textbf{Ekeland Variational Principle} (see \cite[p.45]{Phelps}): Let $(M,d)$ be a complete metric space, and $f:M\to \mathbb{R}\cup\{+\infty\}$ be a proper, lower semicontinuous and bounded from below function. Assume that $f(\overline x)\le \inf f+\lambda \varepsilon$ for some $\overline x\in M$ and $\lambda \varepsilon >0$. Then there is $\overline y\in M$ such that
(i) $f(\overline y)\le f(\overline x)-\lambda d(\overline x,\overline y)$;
(ii) $d(\overline x,\overline y)\le \varepsilon$;
(iii) $f(x)+\lambda d(x,\overline y)\ge f(\overline y)$, for all $x\in M$.
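The principle can be checked by brute force on a finite (hence complete) metric space; the grid $M$, the function $f$, and the parameters $\lambda$, $\varepsilon$ below are assumptions made only for this illustration.

```python
# Brute-force check of Ekeland's principle: M is a grid in [0, 6] (a finite,
# hence complete, metric space), f(u) = |u - 3|, xbar = 0, lambda = 1/2, eps = 6.
M = [k * 0.01 for k in range(601)]
f = lambda u: abs(u - 3.0)
xbar, lam, eps = 0.0, 0.5, 6.0
assert f(xbar) <= min(f(u) for u in M) + lam * eps   # f(xbar) <= inf f + lam*eps

def ekeland_points():
    """All ybar in M satisfying conclusions (i)-(iii) of the principle."""
    out = []
    for yb in M:
        if (f(yb) <= f(xbar) - lam * abs(xbar - yb)           # (i)
                and abs(xbar - yb) <= eps                     # (ii)
                and all(f(u) + lam * abs(u - yb) >= f(yb) - 1e-12
                        for u in M)):                         # (iii)
            out.append(yb)
    return out

pts = ekeland_points()
print(len(pts) > 0)   # True: an Ekeland point exists (near the minimizer u = 3)
```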
The following characterization of Milyutin regularity is very similar in form (in fact equivalent) to the so-called \textbf{first criterion for Milyutin regularity}, see \cite[Theorem 2.47]{Ioffe_book}. It is also similar to \cite[Proposition~2.2]{cibul}, but there it is stated in local form. We present a proof here for the reader's convenience.
Following \cite[p.35]{Ioffe_book} for $\xi>0$ we denote by $d_\xi$ the product metric \begin{equation}\label{def:d:xi}
d_\xi ((x_1,y_1),(x_2,y_2)):=\max \{ d(x_1,x_2),\xi d(y_1,y_2)\}, \end{equation} where $x_i\in X$, $y_i\in Y$, $i=1,2$, and $(X,d)$ and $(Y,d)$ are metric spaces.
\begin{thm}\label{new}
Let $(X,d)$, $(Y,d)$ be metric spaces. Let $F:X\rightrightarrows Y$ be a multi-valued map with complete graph. Let $U\subset X$ and $V\subset Y$.
Then
\[ \mathrm{sur} _mF(U|V) = \sup \{ r\ge 0: \exists\ \xi>0\mbox{ such that } \] \[\forall (x,v)\in \mathrm{Gr}\, F,\ x\in U,\ y\in V\mbox{ satisfying } 0<d(y,v)<rm_U(x) \] \begin{equation}\label{star} \exists (u,w)\in \mathrm{Gr}\, F \mbox{ such that } d(y,w)<d(y,v)-rd_\xi((x,v),(u,w))\}. \end{equation} \end{thm}
\begin{proof}
Let us denote by $s_1$ the left hand side of the above equation, i.e. $s_1:=\mathrm{sur} _mF(U|V)$. In other words, \[s_1=\sup \{ r\ge 0: B(v,rt)\cap V\subset F(B(x,t)),\ \forall (x,v)\in \mathrm{Gr}\, F,\ x\in U, t<m_U (x)\}.\]
Denote by $s_2$ the right hand side of the equation.
We need to show that $s_1 = s_2$.
First, we will show that $s_1\le s_2$.
If $s_1=0$ we have nothing to prove.
Let $s_1>0$. Take $0<r<r'<s_1$. Let $x\in U$, $v\in F(x)$ be fixed. Let $y\in V$ be such that $0<d(y,v)<rm_U(x)$. In particular $0<d(y,v)<r'm_U(x)$. Set $\displaystyle t:=\frac{d(y,v)}{r'}$. Then $t<m_U(x)$. By $r'<s_1= \mathrm{sur}_mF(U|V)$ and by the definition of $\mathrm{sur}_mF(U|V)$ it holds that $y\in B(v,r't)\cap V\subset F(B(x,t))$, i.e. $y\in F(B(x,t))$. So, there exists $u\in B(x,t)$ such that $y\in F(u)$.
Fix $\xi$ such that $0<\xi r'<1$. Then
\[
d_\xi((x,v),(u,y))=\max \{d(x,u),\xi d(v,y)\}\le\max \{t,\xi r't\}=t\max \{1,\xi r'\}=t,
\]
so
\[
r'd_\xi((x,v),(u,y))\le r't=d(y,v).
\]
Observe that $d_\xi((x,v),(u,y))>0$ since $d(v,y)>0$. The latter and $r'>r$ yield
\[
rd_\xi((x,v),(u,y))< r'd_\xi((x,v),(u,y))\le r't=d(y,v),
\]
or
\[
0<d(y,v)-rd_\xi((x,v),(u,y)).
\]
Since $0=d(y,y)$ we get that
\[
d(y,y)<d(y,v)-rd_\xi((x,v),(u,y))
\]
and \eqref{star} holds with $w=y$ as $(u,y)\in \mathrm{Gr}\, F$.
This means that $r\le s_2$. Finally, $s_1\le s_2$.
Second, we will prove that $s_2\le s_1$.
If $s_2=0$ we have nothing to prove.
Let now $s_2>0$. Let $0<r<s_2$. Let us fix $x_0\in U$, $v_0\in F(x_0)$ and $0<t<m_U(x_0)$.
Fix $y\in V$ such that $d(y,v_0)\le rt$, i.e. $y\in B(v_0,rt)\cap V$. Let $M:= \mathrm{Gr}\, F$, and let $\xi>0$ correspond to $r$ in the definition of $s_2$. It is clear that $(M,d_\xi)$ is a complete metric space.
Consider the function $f:M\to \mathbb{R}$ defined as $f(u,w):=d(w,y)$.
Then $f\ge 0$ and it is continuous on $M$. Since $f(x_0,v_0)=d(v_0,y)\le rt$, by Ekeland Variational Principle there exists $(x_1,v_1)\in M$ such that
(i) $f(x_1,v_1)\le f(x_0,v_0)-r d_\xi ((x_1,v_1),(x_0,v_0))$;
(ii) $d_\xi ((x_1,v_1),(x_0,v_0))\le t$;
(iii) $f(u,w)+rd_\xi ((u,w),(x_1,v_1))\ge f(x_1,v_1)$, for all $(u,w)\in M$.
Or, equivalently
(i) $d(v_1,y)\le d(v_0,y)-r d_\xi ((x_1,v_1),(x_0,v_0))\le rt-r d_\xi ((x_1,v_1),(x_0,v_0))$;
(ii) $d(x_1,x_0)\le t,\quad \xi d(v_1,v_0)\le t$;
(iii) $d(w,y)+rd_\xi ((u,w),(x_1,v_1))\ge d(v_1,y)$, for all $(u,w)\in M$.
Set $p:=d(v_1,y)$.
Assume that $p>0$. Take $t'$ such that $t<t'<m_U(x_0)$. For $\displaystyle x\in B\left (x_1,\frac{p}{r}+t'-t\right)$ we have that
\begin{eqnarray*}
d(x,x_0)&\le &d(x,x_1)+d(x_1,x_0)\\
&\le &\frac{p}{r}+t'-t+d(x_1,x_0)\\
\mbox{(using (i))} &\le &\frac{rt-rd(x_1,x_0)}{r}+t'-t+d(x_1,x_0)\\
&=&t-d(x_1,x_0)+t'-t+ d(x_1,x_0)\\
&=&t'.
\end{eqnarray*}
Hence $\displaystyle B\left(x_1,\frac{p}{r}+t'-t\right)\subset B(x_0,t')\subset U$. Then $\displaystyle \frac{p}{r}+t'-t\le m_U(x_1)$, and $\displaystyle \frac{p}{r}< m_U(x_1)$ because $t'-t>0$. Hence, $0< d(v_1,y)<rm_U(x_1)$. But now \eqref{star} contradicts (iii).
Therefore, $p=0$ and then $y=v_1\in F(x_1)$. Since by (ii) $x_1\in B(x_0,t)$, we have $y\in F(B(x_0,t))\cap V$.
Since $x_0\in U$, $v_0\in F(x_0)$, $y\in B(v_0,rt)\cap V$ and $0<t<m_U(x_0)$ were arbitrary, this means that $r\le s_1$. Since $0<r<s_2$ was arbitrary, $s_2\le s_1$, and the proof is completed. \end{proof}
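As a sketch with an assumed map and parameters (not part of the proof above), the characterizing condition~\eqref{star} can be verified numerically for $F(x)=\{2x\}$ on the real line with $U=(-1,1)$, the graph point $(x,v)=(0,0)$, $r=1<\mathrm{sur}_mF=2$ and $\xi=0.4$: the graph point $(u,w)=(y/2,\,y)$ always witnesses the required decrease.

```python
# Condition (star) for F(u) = {2u} on the real line, with U = (-1, 1),
# graph point (x, v) = (0, 0), rate r = 1 and xi = 0.4 (so xi * r < 1).
r, xi = 1.0, 0.4
x, v = 0.0, 0.0
m_U = 1.0                       # distance from x = 0 to the complement of U

def d_xi(p, q):                 # the product metric d_xi
    return max(abs(p[0] - q[0]), xi * abs(p[1] - q[1]))

def star_holds(y):
    """(u, w) = (y/2, y) lies on Gr F and |y - w| < |y - v| - r*d_xi(...)."""
    assert 0 < abs(y - v) < r * m_U
    u, w = y / 2.0, y
    return abs(y - w) < abs(y - v) - r * d_xi((x, v), (u, w))

print(all(star_holds(y) for y in (0.9, 0.5, -0.3, 1e-4)))   # True
```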
In the definitions of regularity properties it is not required that $F(x)\subset V$. Such requirements can be included in the definitions as follows.
\begin{dfn}(\textbf{restricted regularity}, \cite[Definition 2.35]{Ioffe_book}) Set $F^V(x):=F(x)\cap V$. We define restricted $\gamma$-openness at linear rate and restricted $\gamma$-metric regularity on $(U,V)$ by replacing $F$ by $F^V$. \end{dfn}
The equivalence in Theorem~\ref{Theorem 2.25 Ioffe} also holds for the restricted versions of the properties. The same is true of Theorem~\ref{new}, whose proof needs only minor adjustments when working with $F^V$ instead of $F$.
\section{Proof of the main result}\label{sec:main}
The proof of our main result relies on the following Lemma.
\begin{lem}\label{lem_for_criterion}
Let $(X,d)$ be a metric space and $(Y,\| \cdot \|)$ be a Banach space, let $U\subset X$ and $V\subset Y$ be non-empty sets and let
$$
F:X\rightrightarrows Y
$$
be a multi-valued map.
If for some $r>0$ it holds that
\[
F^{(1)}(x,v)\supset r B_Y\ \mbox{ for all }(x,v)\in \mathrm{Gr}\, F \cap (U\times V),
\]
then for any $\displaystyle 0< r' <r$ and any $\xi \in (r^{-1}, (r')^{-1})$ the following holds: for every $x\in U$, $v\in F^V(x)$ and $y\in V\setminus\{v\}$
there is $(u,w)\in \mathrm{Gr}\, F$ such that
\[
\| y- w\|<\| y-v\| -r'd_\xi((x,v),(u,w)).
\]
\end{lem}
\begin{proof}
Let $\displaystyle r' \in (0,r)$ be fixed.
Fix $\xi >0$ such that $\displaystyle (r')^{-1}> \xi > r^{-1}$.
Take $(x,v)$ such that $(x,v)\in \mathrm{Gr}\, F\cap (U\times V)$.
Fix $ y\in V$ such that $0<\| y-v\| $.
Set $\displaystyle \bar v:= r\frac{y-v}{\| y-v\|}$. Obviously $\| \bar v \| =r$. By assumption,
$F^{(1)}(x,v)\ni \bar v$. By definition of the contingent variation there exist $t_n\downarrow 0$,
$u_n\in X$ as well as $w_n\in Y$ and $z_n \in Y$ such that $w_n\in F(u_n) $, $d(x,u_n)\le t_n$, $\| z_n\| \to 0$ and
\begin{equation}\label{eq:u}
v+t_n\bar v=w_n+ t_nz_n.
\end{equation}
Note first that for $n$ large enough
\begin{equation}\label{eq:xy}
\xi\|w_n-v\| > t_n \ge d(x,u_n)
\Rightarrow
d_\xi((x,v),(u_n,w_n))=\xi \|w_n-v\|.
\end{equation}
Indeed, $\|w_n-v\| = t_n\|\bar v - z_n\| \ge t_n (r-\|z_n\|)$ and, since $\xi(r-\|z_n\|)\to\xi r > 1$ as $n\to\infty$, we have $\xi\|w_n-v\| > t_n$ for $n$ large enough.
From \eqref{eq:u} we have \begin{equation}\label{nnn} y-w_n=y-v-t_n\overline v+t_nz_n. \end{equation}
Since $$
y-v-t_n\overline v = \left(1-t_nr\| y-v\| ^{-1}\right)(y-v), $$
and since $1-t_nr\| y-v\| ^{-1}>0$ for $n$ large enough, we have for such $n$ that $$
\|y-v-t_n\overline v \|= \left(1-t_nr\| y-v\| ^{-1}\right)\| y-v\| = \| y-v\| - t_nr. $$ Combining the latter with \eqref{nnn} we get for $n$ large enough \begin{eqnarray}\label{trn}
\|y-w_n\|&=&\|y-v-t_n\bar v +t_nz_n\| \nonumber\\
&\le &\|y-v-t_n\bar v\| +t_n\|z_n\| \nonumber\\
&=&\| y-v\| -t_n(r-\| z_n\|). \end{eqnarray}
On the other hand, \eqref{eq:u} can be rewritten as $w_n -v = t_n\bar v - t_nz_n$, hence
$$
\| w_n-v\| =t_n\| \overline v-z_n\| \le t_n (r+\| z_n\| ),
$$ and using this estimate we obtain that
$$
\liminf_{n\to\infty} \frac{t_n(r-\|z_n\|)}{r'\xi\|v-w_n\|} \ge \liminf_{n\to\infty} \frac{t_n(r-\|z_n\|)}{r'\xi t_n(r+\| z_n\|)} = \frac{1}{r'\xi} > 1.
$$ From this and \eqref{trn} we have that for large $n$
$$
\|y-w_n\| < \|y-v\| - r'\xi\|v-w_n\| .
$$ Using \eqref{eq:xy} we finally obtain that for all $n$ large enough
$$
\|y-w_n\| < \|y-v\| -r'd_\xi((x,v),(u_n,w_n))
$$
and the claim follows. \end{proof}
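Let us record explicitly how the two endpoints of the interval $(r^{-1},(r')^{-1})$ from which $\xi$ was chosen enter the proof above; this merely collects the two estimates already used:
\begin{align*}
\xi r>1 &\;\Longrightarrow\; \xi\|w_n-v\|\ge \xi t_n(r-\|z_n\|)>t_n \ \mbox{ for large } n, \ \mbox{ so } d_\xi((x,v),(u_n,w_n))=\xi\|w_n-v\|,\\
r'\xi <1 &\;\Longrightarrow\; \frac{1}{r'\xi}>1, \ \mbox{ so } t_n(r-\|z_n\|)>r'\xi\|v-w_n\| \ \mbox{ for large } n.
\end{align*}
Both conditions are simultaneously available precisely because $0<r'<r$ makes the interval $(r^{-1},(r')^{-1})$ non-empty.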
Proving our main result is now straightforward.
\setcounter{thm}{0}
\begin{thm}
Let $(X,d)$ be a metric space and $(Y,\| \cdot \|)$ be a Banach space, let $U\subset X$ and $V\subset Y$ be non-empty open sets. Let
$$
F:X\rightrightarrows Y
$$
be a multi-valued map with complete graph.
$F$ is restrictedly Milyutin regular on $(U,V)$ with $\mathrm{sur} _m F^V(U\vert V)\ge r>0$ if and only if
\[
F^{(1)}(x,v)\supset r B_Y\ \mbox{ for all }(x,v)\in \mathrm{Gr}\, F \cap (U\times V).
\] \end{thm}
\begin{proof} Let \[
F^{(1)}(x,v)\supset r B_Y\ \mbox{ for all }(x,v)\in \mathrm{Gr}\, F \cap (U\times V).
\] From Lemma~\ref{lem_for_criterion} and Theorem~\ref{new} it follows that $\mathrm{sur} _m F^V(U\vert V)\ge r$.
Conversely, let $\mathrm{sur} _m F^V(U\vert V)\ge r>0$. This means that \[ B(v,rt)\cap V \subset F(B(x,t)) \] whenever $(x,v)\in \mathrm{Gr}\, F$, $x\in U$, $v\in V$ and $t<m_U(x)$.
Take arbitrary $(x,v)\in \mathrm{Gr}\, F^V$, $x\in U$ and note that $m_U(x)>0$ because $U$ is open. Take positive $t$ such that $t<m_U(x)$.
For any $y\in rB_Y $ it holds that $v+ty\in B(v,rt)$. Moreover, $v+ty\in V$ holds for all sufficiently small $t$, because $V$ is open. Then, by assumption, $v+ty \in F(B(x,t))$, so $\displaystyle y\in \frac{F(B(x,t))-v}{t}$ which means that $y\in F^{(1)}(x,v)$. Hence, $F^{(1)}(x,v)\supset rB_Y$. \end{proof}
\noindent\textbf{Acknowledgements.} We wish to express our gratitude for the interesting discussions we have had with Professor Ioffe in the summer of 2018 on some topics in his recent monograph \cite{Ioffe_book}, and for his kind attention.
\end{document}
\begin{document}
\title[Tangent bundle of a manifold of K$3^{[2]}$-type is rigid]{Tangent bundle of a manifold of K$3^{[2]}$-type is rigid} \author{Volodymyr Gavran} \email{vlgvrn@gmail.com} \maketitle
\begin{abstract} We prove that the tangent bundle of a manifold of K$3^{[2]}$-type is rigid. \end{abstract} \section{Introduction} M. Verbitsky \cite{verb} showed that for a hyperholomorphic vector bundle $F$ on a hyperk\"ahler manifold $X$ there are no obstructions to stable deformations of $F$ besides the Yoneda pairing on $\Ext^1(F, F)$. Moreover, he proved the existence of a canonical hyperk\"ahler structure on the reduction of the coarse moduli space of stable deformations of $F$. If $S$ is a K3 surface then it is known that the Hilbert scheme $S^{[n]}$ is a hyperk\"ahler manifold and the tangent bundle $T_{S^{[n]}}$ is a hyperholomorphic bundle on $S^{[n]}$. Thus, the investigation of the deformation space of the bundle $T_{S^{[n]}}$ is a very natural and interesting question from the point of view of hyperk\"ahler geometry. This question also appeared in \cite{charles} in the context of the Lefschetz standard conjecture for hyperk\"ahler manifolds. It was mentioned there without a proof that for $n = 2$ the tangent bundle might actually be rigid. In the present note we confirm this statement by proving the following theorem. \begin{theorem} \label{main} Let $X$ be a manifold of K$3^{[2]}$-type. Then the tangent bundle $T_X$ is infinitesimally rigid, i.e. $H^1(X, {\mathcal End}(T_X))=0$. \end{theorem} The proof of this statement is given in Sections 3 and 4. It follows from explicit computations in the case when $X$ is the second Hilbert scheme of a K3 surface and then from an application of general results from the theory of hyperholomorphic bundles \cite{verb}. For $n>2$ the question about deformations of $T_{S^{[n]}}$ seems to be much more difficult due to the more complicated geometry of the corresponding Hilbert scheme and thus should be considered separately.
\noindent\textbf{Acknowledgements.} The author is grateful to Christopher Brav and Misha Verbitsky for helpful discussions and to Fran\c{c}ois Charles for suggestions. \section{Hilbert square} For a smooth projective surface $S$ the Hilbert scheme of length-2 subschemes of $S$ is denoted by $S^{[2]}$. Let $\Delta:S\hookrightarrow S\times S$ be the diagonal embedding, $p_1,p_1':S\times S\to S$ be the projections onto the first and the second component and $\sigma:Z \stackrel{\mathsf{def}}= \mathsf{Bl}_\Delta(S\times S)\to S\times S$ be the blowup of $S\times S$ in $\Delta$. The natural action of the symmetric group $\mathfrak{S}_2$ on $S\times S$ extends to an action on $Z$ and the Hilbert square $S^{[2]}$ is the quotient of $Z$ by this action. By $q_2$ we denote the corresponding quotient map $Z\to S^{[2]}$. Let $j:E\hookrightarrow Z$ be the exceptional divisor of $\sigma$. Recall that $E\cong\mathbb{P}(T_S)$ is a projective bundle over $S$ and we have the relative Euler exact sequence \begin{equation}\label{eul} 0\to\Omega_{E/S}\to\pi^*\Omega_S(-1)\to\mathcal{O}_E\to0. \end{equation} Also, by $\iota:D\hookrightarrow S^{[2]}$ we denote the isomorphic image of $E$ by $q_2$. The divisor $D$ is precisely the locus parametrizing non-reduced subschemes of $S$ of length two.
Put $q_1 := p_1\circ\sigma$ and $q_1' := p_1'\circ\sigma$. The following diagram depicts the relationship among all the natural maps between the varieties that we mentioned: \begin{equation} \label{equation:notationHilb2} \begin{tikzcd} & E\cong\mathbb{P}(T_S) \arrow[r, hook, "j"] \arrow[dl, swap, "\pi"] & Z \arrow[dl, swap, "\sigma"] \arrow[dr, "q_2"] \arrow[d, "q_1"]\\ S \arrow[r, hook, "\Delta"] & S\times S \arrow[d, "p_1'"] \arrow[r, "p_1"] & S & S^{[2]} & D. \arrow[l, hook', swap, "\iota"]\\ & S & \end{tikzcd} \end{equation} Here $\pi:E\to S$ is the projective bundle map and the equality $\sigma\circ j = \Delta\circ\pi$ yields $q_1\circ j = \pi$.
Note that $Z$ is isomorphic to the universal closed subscheme $\mathcal{Z}: = \{(x, \xi)\, |\, x\in\mathsf{Supp}(\xi)\}$ in $S\times S^{[2]}$ and for any coherent sheaf $F$ over $S$ there is the incidence exact sequence \begin{equation}\label{incidence} 0\to q_1^*F(-E)\to q_2^*F^{[2]}\to q_1'^{*}F\to0, \end{equation} where $F^{[2]}$ is the image of $F$ under the tautological functor $q_{2*}q_1^*:\mathsf{Coh}(S)\to\mathsf{Coh}(S^{[2]})$ (see \cite[p. 193]{lehn}).
Recall that $q_{2*}\mathcal{O}_{Z}\cong\mathcal{O}_{S^{[2]}}\oplus L^{-1}$, where the line bundle $L^{-1}$ is the eigenspace to the eigenvalue $-1$ of the cover involution. Moreover, $L^{\otimes2}\cong\mathcal{O}_{S^{[2]}}(D)$ and $q_2^* L\cong\mathcal{O}_Z(E)$. Note that $q_{2*}j_*\mathcal{O}_E\cong\iota_*\mathcal{O}_D$ and $q_2^*\iota_*\mathcal{O}_D\cong\mathcal{O}_{2E}$.
There is an exact sequence \begin{equation} \label{pullbackCotangent} 0\to q_2^*\Omega_{S^{[2]}}(E)\to\Omega_Z(E)\to j_*\mathcal{O}_E\to0 \end{equation} and an isomorphism \[\Omega_{S^{[2]}}\cong q_{2*}(\mathcal{N}^\vee_{Z/S\times S^{[2]}}(E)).\] The sequence \eqref{pullbackCotangent} implies that $\omega_{q_2}\cong\mathcal{O}_Z(E)$, hence the right adjoint functor to $q_{2*}$ is $q_2^!(-)=q_2^*(-)\otimes\mathcal{O}_Z(E)$.
Now we write down the exact sequence defining the cotangent bundle on $S^{[2]}$. Putting the left non-zero arrow of the sequence \eqref{pullbackCotangent} together with the conormal exact sequence of the embedding $Z\hookrightarrow S\times S^{[2]}$ twisted by $E$ into a commutative diagram \begin{equation} \label{cotangentHilb2} \begin{tikzcd} & 0 \arrow[d] \arrow[r] & q_2^*\Omega_{S^{[2]}}(E) \arrow[r, "\sim"] \arrow[d] & q_2^*\Omega_{S^{[2]}}(E) \arrow[d]\\ 0 \arrow[r] & \mathcal{N}^\vee_{Z/S\times S^{[2]}}(E) \arrow[r] & q_1^*\Omega_S(E)\oplus q_2^*\Omega_{S^{[2]}}(E) \arrow[r] & \Omega_Z(E) \end{tikzcd} \end{equation} and applying the snake lemma, we obtain the exact sequence \begin{equation} \label{snake} 0 \longrightarrow \mathcal{N}^\vee_{Z/S\times S^{[2]}}(E) \longrightarrow q_1^*\Omega_S(E)\longrightarrow j_*\mathcal{O}_E\longrightarrow 0. \end{equation} After pushing forward \eqref{snake} along $q_2$ we obtain the exact sequence \begin{equation}\label{cotangent} 0\to\Omega_{S^{[2]}}\to\Omega_S^{[2]}\otimes L\to\iota_*\mathcal{O}_D\to0. \end{equation} \section{Computation for the Hilbert square of a K3 surface} From now on we assume that $S$ is a K3 surface. We fix some isomorphism $T_S\stackrel{\simeq}\longrightarrow\Omega_S$. The isomorphism $\omega_S\cong\mathcal{O}_S$ yields $\omega_Z\cong\mathcal{O}_Z(E)$. From the Euler sequence \eqref{eul} it follows that $\Omega_{E/S}\cong\mathcal{O}_E(-2)$. From stability of $\Omega_S$ we have that $\Hom(\Omega_S, \Omega_S)\cong\mathbb{C}$. The latter implies that $H^0(S, \Sym^2(\Omega_S)) = 0$. Also, we will use the equality $H^0(S, \Omega_S) = 0$ which by \cite[Remark 3.19]{krug} and by stability of $\Omega_S$ implies that $\Hom(\Omega_S^{[2]}, \Omega_S^{[2]})\cong\mathbb{C}$.
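The isomorphism $\Omega_{E/S}\cong\mathcal{O}_E(-2)$ can be checked by taking determinants in the relative Euler sequence \eqref{eul}: since $E=\mathbb{P}(T_S)$ is a $\mathbb{P}^1$-bundle over $S$, the sheaf $\Omega_{E/S}$ is a line bundle, and by multiplicativity of determinants in short exact sequences
\begin{equation*}
\Omega_{E/S}\cong\det\bigl(\pi^*\Omega_S(-1)\bigr)\cong\pi^*\omega_S\otimes\mathcal{O}_E(-2)\cong\mathcal{O}_E(-2),
\end{equation*}
where the last isomorphism uses $\omega_S\cong\mathcal{O}_S$.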
To prove Theorem \ref{main} in this case it is enough to show the following two equalities: \begin{equation} \label{firstVanishing} \Ext^1(\Omega_S^{[2]}\otimes L, \Omega_{S^{[2]}})=0, \end{equation} \begin{equation} \label{secondVanishing} \Ext^2(\iota_*\mathcal{O}_D, \Omega_{S^{[2]}})=0. \end{equation}
\begin{lemma}\label{usefulEqualities} The following equalities hold \begin{enumerate}[(i)]
\item $\Hom(q_1^*\Omega_S(E), j_*\mathcal{O}_E) \cong \Hom(q_1^*\Omega_S(E), q_1^*\Omega_S(E)|_E) \cong \Hom(q_1^*\Omega_S(E)|_E, j_*\mathcal{O}_E) \cong\mathbb{C},$
\item $\Ext^1(\iota_*\mathcal{O}_D, \iota_*\mathcal{O}_D) = 0,$
\item $\Hom(\Omega_S^{[2]}\otimes L, \iota_*\mathcal{O}_D)\cong \mathbb{C},$
\item $\Ext^2(\mathcal{O}_{2E}, q_1^*\Omega_S) = 0,$
\item $\Ext^k(q_2^*\Omega_S^{[2]}, q_1^*\Omega_S(-E))=0$ for $k=0,1$,
\item $\Ext^k(q_2^*\Omega_S^{[2]}, j_*\Omega_{E/S}(-E))=0$ for $k=0,1$,
\item $\Ext^2(\mathcal{O}_{2E}, j_*\Omega_{E/S}) = 0$. \end{enumerate} \end{lemma} \begin{proof} All listed equalities are straightforward consequences of standard adjunctions, properties of the blow-up and projective bundle map $E\to S$, so we only give sketched proofs.
\noindent (i) We have \begin{align*}\Hom(q_1^*\Omega_S(E), j_*\mathcal{O}_E)&\cong\Hom(p_1^*\Omega_S, \sigma_*j_*\mathcal{O}_E(-E))\\ &\cong\Hom(p_1^*\Omega_S, \Delta_*\pi_*\mathcal{O}_E(1))\\ &\cong\Hom(p_1^*\Omega_S, \Delta_*T_S)\\ &\cong\Hom(\Omega_S, T_S)\\ &\cong\mathbb{C}. \end{align*}
By the projection formula we have that $\Hom(q_1^*\Omega_S(E), q_1^*\Omega_S(E)|_E)\cong\Hom(\Omega_S, \Omega_S)\cong\mathbb{C}$. Finally, $\Hom(\pi^*\Omega_S(E), \mathcal{O}_E)\cong\Hom(\Omega_S, T_S)\cong\mathbb{C}$. From the fact that $q_1\circ j = \pi$ and since the functor $j_*$ is fully faithful on the level of abelian categories, we get $\Hom(q_1^*\Omega_S(E)|_E, j_*\mathcal{O}_E)\cong\mathbb{C}$.
\noindent (ii) Using that $\cL\!j^*\mathcal{O}_{2E} \cong \mathcal{O}_E\oplus\mathcal{O}_E(-2E)[1]$ and $\cR\!\pi_*\mathcal{O}_E(-2)\cong\omega^\vee_S[-1]$, by the adjunction and the projection formula we have \begin{align*}\Ext^k(\iota_*\mathcal{O}_D,\iota_*\mathcal{O}_D)&\cong\Ext^k(\iota_*\mathcal{O}_D, q_{2*}j_*\mathcal{O}_E)\\ &\cong\Ext^k(q_2^*\iota_*\mathcal{O}_D, j_*\mathcal{O}_E)\\ &\cong\Ext^k(\mathcal{O}_{2E}, j_*\mathcal{O}_E)\\ &\cong\Ext^k(\mathcal{O}_E, \mathcal{O}_E)\oplus\Ext^{k-1}(\mathcal{O}_E, \mathcal{O}_E(2E))\\ &\cong H^k(S,\mathcal{O}_S)\oplus H^{k-2}(S, \omega^\vee_S). \end{align*} Hence $\Ext^1(\iota_*\mathcal{O}_D,\iota_*\mathcal{O}_D) = 0$.
\noindent (iii) Since $\Hom(q_1^*\Omega_S, j_*\mathcal{O}_E)= 0$ and $\Hom(q_1'^*\Omega_S, j_*\mathcal{O}_E(-E))\cong\Hom(\Omega_S, T_S)\cong\mathbb{C}$, from the exact sequence \eqref{incidence} with $F=\Omega_S$ we have that $\Hom(q_2^*\Omega_S^{[2]}, j_*\mathcal{O}_E(-E))\cong\mathbb{C}$. Then $\Hom(\Omega_S^{[2]}\otimes L, \iota_*\mathcal{O}_D)\cong\Hom(q_2^*\Omega_S^{[2]}, j_*\mathcal{O}_E(-E))\cong\mathbb{C}$.
\noindent (iv) Applying $\sigma_*$ to the exact sequence \begin{equation} \label{exactSequenceO2E} 0\to j_*\mathcal{O}_E\to\mathcal{O}_{2E}(E)\to j_*\mathcal{O}_{E}(E)\to0 \end{equation} and using the equality $\cR\!\pi_*\mathcal{O}_E(-1)=0$ we obtain that $\cR\!\sigma_*\mathcal{O}_{2E}(E)\cong\mathcal{O}_\Delta$. Thus $\Ext^2(q_1^*\Omega_S, \mathcal{O}_{2E}(E))\cong H^2(S, T_S) = 0$. The assertion then follows from the Serre duality.
\noindent (v) Applying adjunctions $q_1^*\dashv q_{1*}$ and $q_{2*}\dashv q_2^!$ we obtain $\Ext^k(q_2^*\Omega_S^{[2]}, q_1^*\Omega_S(-E))\cong\Ext^k(q_1^*\Omega_S, q_2^*\Omega_S^{[2]}) $. Since $\cR\!q_{1*}q_1'^{*}\Omega_S\cong H^1(S, \Omega_S)\otimes\mathcal{O}_S[-1]$ we have that $\Ext^k(q_1^*\Omega_S,q_1'^*\Omega_S)=0$ for $k=0,1$. From the exact sequence \[0\to I_\Delta\to\mathcal{O}_{S\times S}\to\mathcal{O}_\Delta\to0\] and the condition $H^1(S, \mathcal{O}_S) = 0$ we obtain $\cR\!p_{1*}I_\Delta=\mathcal{O}_S[-2]$. This implies that $\Ext^k(q_1^*\Omega_S,q_1^*\Omega_S(-E))\cong\Ext^k(\Omega_S,\Omega_S\otimes\cR\!p_{1*}I_\Delta)=0$ for $k = 0,1$. Now, applying $\Hom(q_1^*\Omega_S, -)$ to the incidence exact sequence \eqref{incidence} with $F = \Omega_S$, we obtain the desired statement.
\noindent (vi) Applying $\sigma_*$ to the sequence \eqref{exactSequenceO2E} twisted by $E$, we obtain the isomorphism $\cR\!\sigma_*\mathcal{O}_{2E}(2E)\cong\mathcal{O}_\Delta[-1]$. Together with the isomorphism $\Omega_{E/S}\cong\mathcal{O}_E(-2)$ it gives \[\Ext^k(q_2^*\Omega_S^{[2]}, j_*\Omega_{E/S}(-E))\cong\Ext^k(\Omega_S^{[2]}, \iota_*\mathcal{O}_D\otimes L)\] \[\cong\Ext^k(q_1^*\Omega_S, \mathcal{O}_{2E}(2E))\cong\Ext^{k - 1}(\Omega_S, \mathcal{O}_S) = 0\] for $k = 0, 1$.
\noindent (vii) We have that $\Ext^2(\mathcal{O}_{2E}, j_*\Omega_{E/S})\cong H^1(S, \omega_S^\vee)\oplus H^0(S, \Sym^2(T_S)) = 0.$ \end{proof} From Lemma \ref{usefulEqualities}(i) we have that the map $q_1^*\Omega_S(E)\to j_*\mathcal{O}_E$ in the exact sequence \eqref{snake} factors as the composition of natural maps \begin{equation}\label{composition}
q_1^*\Omega_S(E)\longrightarrow q_1^*\Omega_S(E)|_E\longrightarrow j_*\mathcal{O}_E, \end{equation} where the second map is the pushforward along $j$ of the quotient map $\pi^*\Omega_S(-1)\to\mathcal{O}_E$ in the Euler exact sequence.
Consider the maps \begin{equation}\label{firstMap} \alpha_k:\Ext^k(\Omega_S^{[2]}, \Omega_S^{[2]})\longrightarrow\Ext^k(\Omega_S^{[2]}\otimes L, \iota_*\mathcal{O}_D), \,\,\, k = 0, 1, \end{equation} \begin{equation}\label{secondMap} \beta_2:\Ext^2(\iota_*\mathcal{O}_D, \Omega_S^{[2]}\otimes L)\longrightarrow\Ext^2(\iota_*\mathcal{O}_D,\iota_*\mathcal{O}_D), \end{equation} coming from the exact sequence \eqref{cotangent}. By the adjunction $q_2^*\dashv q_{2*}$ and factorization \eqref{composition} the map $\alpha_k$ can be written as the composition \begin{equation}\label{extifact}
\Ext^k(q_2^*\Omega_S^{[2]}, q_1^*\Omega_S)\to\Ext^k(q_2^*\Omega_S^{[2]}, q_1^*\Omega_S|_E)\to\Ext^k(q_2^*\Omega_S^{[2]}, j_*\mathcal{O}_E(-E)). \end{equation} From assertions (v) and (vi) of Lemma \ref{usefulEqualities} it follows that both maps in \eqref{extifact} are injective, thus $\alpha_0$ and $\alpha_1$ are injective as well. Moreover, by Lemma \ref{usefulEqualities}(iii) we get that $\alpha_0$ is an isomorphism since it is a map between one-dimensional vector spaces. This implies the vanishing \eqref{firstVanishing}.
Similarly, we now decompose $\beta_2$ as \begin{equation}\label{ext2factor}
\Ext^2(\mathcal{O}_{2E}, q_1^*\Omega_S(E))\to\Ext^2(\mathcal{O}_{2E}, q_1^*\Omega_S(E)|_E)\to\Ext^2(\mathcal{O}_{2E}, j_*\mathcal{O}_E). \end{equation} Lemma \ref{usefulEqualities}(iv) implies the injectivity of the first map in \eqref{ext2factor}. The injectivity of the second map follows from Lemma \ref{usefulEqualities}(vii). This shows that $\beta_2$ is injective, which together with Lemma \ref{usefulEqualities}(ii) gives the vanishing \eqref{secondVanishing}.
\section{General case} Let $X$ be an irreducible holomorphic symplectic manifold and $\mathcal{H} = (I, J, K)$ be the corresponding hyperk\"ahler structure. For any triple $a,b,c\in\mathbb{R}$ such that $a^2 + b^2 + c^2 = 1$ the operator $L := aI + bJ + cK$ defines a complex structure on $X$. Such a complex structure $L$ is called \emph{induced by the hyperk\"ahler structure}. The space $Q_{\mathcal{H}}$ of all induced complex structures of $\mathcal{H}$ is isomorphic to $\mathbb{C}P^1$ and is called \emph{the twistor line} of $\mathcal{H}$. Denote by $\mathsf{Comp}_X$ the coarse moduli space of complex structures on $X$. Then for each hyperk\"ahler structure we have an embedding $Q_{\mathcal{H}}\subset\mathsf{Comp}_X$. \begin{definition} \emph{A twistor path} in $\mathsf{Comp}_X$ is a collection of consecutively intersecting twistor lines $Q_0,...,Q_n\subset\mathsf{Comp}_X$. Two points $I, I'\in\mathsf{Comp}_X$ are called \emph{equivalent} if there exists a twistor path $\gamma = Q_0,...,Q_n$ such that $I\in Q_0$ and $I'\in Q_n$. The path $\gamma$ is then called \emph{a connecting path} of $I$ and $I'$. \end{definition}
\begin{theorem}\cite[Theorem 3.2]{verb2}\label{connectinPath} Any two points $I, I'\in\mathsf{Comp}_X$ are equivalent. \end{theorem} Now we recall the definition of a hyperholomorphic bundle over $X$. \begin{definition} Let $F$ be a holomorphic vector bundle over $(X, L)$ with a Hermitian connection $\nabla$ on $F$. The connection $\nabla$ is called \emph{compatible with a holomorphic structure} if $\nabla_v(\xi) = 0$ for any holomorphic section $\xi\in F$ and any antiholomorphic tangent vector $v$. If there exists a holomorphic structure compatible with the given Hermitian connection $\nabla$, then this connection is called \emph{integrable}. The connection $\nabla$ is called \emph{hyperholomorphic} if it is integrable for any complex structure induced by the hyperk\"ahler structure. Then $F$ is called a \emph{hyperholomorphic bundle}. \end{definition} For an induced complex structure $L$ denote by $H^*_L(X, F)$ the holomorphic cohomologies of $F$ with respect to $L$. We mention the following important property of hyperholomorphic bundles. \begin{theorem}\cite[Corollary 8.1]{verb} \label{dimensionCohomology} Let $F$ be a hyperholomorphic vector bundle. Then for any $i\geqslant0$ the dimension of the space $H^i_L(X, \mathcal{E}nd(F))$ is independent of an induced complex structure $L$. \end{theorem}
Note that the tangent bundle $T_X$ equipped with the Levi-Civita connection is always hyperholomorphic (see \cite[Example 2.9(i)]{verb3}). By Theorem \ref{connectinPath}, for any deformation $X' = (X, I')$, $I'\in\mathsf{Comp}_X$ of $(X, I)$ there exists a twistor path $\gamma$ connecting $I'$ and $I$. Since $T_X$ is hyperholomorphic, the dimension of the cohomology space $H^1(X, \mathcal{E}nd(T_X))$ is constant along $\gamma$ by Theorem \ref{dimensionCohomology}. In the case when $X$ is a manifold of K3$^{[2]}$-type this dimension is equal to zero by the result of Section 3. This proves Theorem \ref{main}.
\renewcommand\refname{}
\end{document}
\begin{document}
\title{Stronger Schr\"odinger-like Uncertainty Relations}
\author{Qiu-Cheng Song}
\email{songqiucheng12@mails.ucas.ac.cn} \author{Cong-Feng Qiao$^{1,2}$\footnote{Corresponding author, qiaocf@ucas.ac.cn}} \affiliation{School of Physics, University of Chinese Academy of Sciences, YuQuan Road 19A, Beijing 100049, China\\ $^2$CAS Center for Excellence in Particle Physics, Beijing 100049, China}
\begin{abstract} Uncertainty relation is one of the fundamental building blocks of quantum theory. Nevertheless, the traditional uncertainty relations do not fully capture the concept of incompatible observables. Here we present a stronger Schr\"odinger-like uncertainty relation, which is stronger than the relation recently derived by L. Maccone and A. K. Pati [Phys. Rev. Lett. 113 (2014) 260401]. Furthermore, we give an additive uncertainty relation which holds for three incompatible observables, which is stronger than the relation newly obtained by S. Kechrimparis and S. Weigert [Phys. Rev. A 90 (2014) 062118] and the simple extension of the Schr\"odinger uncertainty relation. \end{abstract}
\pacs{03.65.Ta, 42.50.Lc, 03.67.-a}
\maketitle
\section{Introduction} Uncertainty is one of the distinct features of quantum theory. The concept of the uncertainty principle was first introduced by Heisenberg \cite{heis}. The original form of the uncertainty relation was derived by Kennard \cite{Kennard} and Weyl \cite{Weyl}. Indeed, the uncertainty relation is a mathematical description of the trade-off in the measurement statistics of two incompatible observables. It refers to the preparation of the system, which has intrinsic spreads in the measurement outcomes for independent measurements. Notably, it does not mean that two incompatible observables cannot be measured simultaneously on a quantum system \cite{Peres}. The best-known form of the uncertainty relation is the Heisenberg-Robertson uncertainty relation, which bounds the product of a pair of variances through the expectation value of their commutator \cite{Robertson}. It reads
\begin{eqnarray}\label{ineqr1} \Delta A^2\Delta B^2\geq
\left|\frac{1}{2}\langle\psi|[A,B]|\psi\rangle\right|^2, \end{eqnarray}
for arbitrary observables $A$, $B$, and any state $|\psi\rangle$, where the variance of an observable $X$ in the state $|\psi\rangle$ is defined as
$\Delta X^2=\langle\psi|X^2|\psi\rangle-\langle\psi|X|\psi\rangle^2$ and the commutator is defined by $[A,B]=AB-BA$. A stronger extension of the uncertainty relation (\ref{ineqr1}) was made by Schr\"odinger \cite{schroedinger}, namely
\begin{eqnarray}\label{ineqr2}
\Delta A^2\Delta B^2\geq\left|\frac 12 \langle[A,B]\rangle\right|^2
+\left|\frac{1}{2}\langle\{A,B\}\rangle - \langle A\rangle\langle B\rangle\right|^2, \end{eqnarray}
where the anti-commutator is defined by $\{A,B\}=AB+BA$, and $\langle X\rangle$ denotes the expectation value of $X$.
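For completeness, we recall the standard derivation of (\ref{ineqr2}). With $\bar{A}=A-\langle A\rangle$ and $\bar{B}=B-\langle B\rangle$, the Cauchy-Schwarz inequality gives
\begin{eqnarray*}
\Delta A^2\Delta B^2=\langle\psi|\bar{A}^2|\psi\rangle\langle\psi|\bar{B}^2|\psi\rangle\geq
\left|\langle\psi|\bar{A}\bar{B}|\psi\rangle\right|^2,
\end{eqnarray*}
and the decomposition $\bar{A}\bar{B}=\frac 12\{\bar{A},\bar{B}\}+\frac 12[\bar{A},\bar{B}]$ splits this modulus into real and imaginary parts, since the anti-commutator of Hermitian operators has a real expectation value while the commutator has a purely imaginary one. Using $[\bar{A},\bar{B}]=[A,B]$ and $\langle\{\bar{A},\bar{B}\}\rangle=\langle\{A,B\}\rangle-2\langle A\rangle\langle B\rangle$, we arrive at
\begin{eqnarray*}
\left|\langle\psi|\bar{A}\bar{B}|\psi\rangle\right|^2=\left|\frac 12\langle[A,B]\rangle\right|^2
+\left|\frac 12\langle\{A,B\}\rangle-\langle A\rangle\langle B\rangle\right|^2,
\end{eqnarray*}
which is exactly the right-hand side of (\ref{ineqr2}).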
Uncertainty relations are significant in physics, e.g. quantum mechanics and quantum information \cite{PBusch,HHofmann,OGuhne,CAFuchs}. Traditionally the uncertainty relations try to quantitatively express the impossibility of joint sharp preparation of incompatible observables. However, in practice, they do not always capture the notion of incompatible observables since they become trivial in some cases. Recently, Maccone and Pati derived two stronger uncertainty relations based on the sum of $\Delta A^2$ and $\Delta B^2$ \cite{mp}, which to a large extent can avoid the triviality problem and provide more stringent bounds for incompatible observables on the quantum state. The first inequality is
\begin{eqnarray}\label{ineqr3} \Delta A^2 + \Delta B^2\geq
\pm i\langle[A,B]\rangle+|\langle\psi|A \pm iB|\psi^\perp\rangle|^2, \end{eqnarray}
where $|\psi^\perp\rangle$ is an arbitrary state orthogonal to the state $|\psi\rangle$, and the sign on the right-hand side of the inequality is $+$ ($-$) when $i\langle[A,B]\rangle$ is positive (negative). The second inequality is
\begin{eqnarray}\label{ineqr4} \Delta A^2 + \Delta B^2\geq
\frac 12|\langle\psi^\perp_{A+B}|A+B|\psi\rangle|^2, \end{eqnarray}
where $|\psi^\perp_{A+B}\rangle\propto(A+B-\langle A + B\rangle)|\psi\rangle$ is a state orthogonal to $|\psi\rangle$. Maccone and Pati also derived an amended Heisenberg-Robertson uncertainty relation, i.e.
\begin{eqnarray}\label{ineqr6} \Delta A\Delta B\geq\frac{\pm i\frac 12\langle[A,B]\rangle}
{1-\frac 12|\langle\psi|\frac{A}{\Delta A}\pm i\frac {B}{\Delta B}|\psi^\perp\rangle|^2}\ , \end{eqnarray}
which is stronger than the Heisenberg-Robertson uncertainty relation.
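To see why (\ref{ineqr6}) is stronger than (\ref{ineqr1}), note that since $\langle\psi|\psi^\perp\rangle=0$ the operators in the matrix element may be replaced by their centered versions, and a Cauchy-Schwarz estimate then gives $\frac 12|\langle\psi|\frac{A}{\Delta A}\pm i\frac {B}{\Delta B}|\psi^\perp\rangle|^2\leq1\mp\frac{i\langle[A,B]\rangle}{2\Delta A\Delta B}$, so with the sign convention of (\ref{ineqr6}) the denominator is positive and at most $1$. Hence
\begin{eqnarray*}
\Delta A\Delta B\geq\frac{\pm i\frac 12\langle[A,B]\rangle}
{1-\frac 12|\langle\psi|\frac{A}{\Delta A}\pm i\frac {B}{\Delta B}|\psi^\perp\rangle|^2}\geq\pm i\frac 12\langle[A,B]\rangle\ ,
\end{eqnarray*}
with a strict improvement whenever the matrix element in the denominator is nonzero.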
Two noncommutative sharp observables as well as three pairwise noncommutative sharp observables are incompatible, whatever the state of the system might be. Recently, two Heisenberg uncertainty relations for three canonical observables were obtained by Kechrimparis and Weigert \cite{Kechrimparis}. The multiplicative uncertainty relation reads
\begin{eqnarray}\label{ineqr7} \Delta p\Delta q\Delta r\geq(\frac{\hbar}{\sqrt{3}})^{\frac{3}{2}}\ , \end{eqnarray}
where the Schr\"{o}dinger triple $(p,q,r)$ satisfies the commutation relations
\begin{eqnarray}\label{ineqr8} [q,p]=[p,r]=[r,q] = i \hbar\ . \end{eqnarray}
Here, the observable $r=-q-p$. They also gave an additive uncertainty relation for the Schr\"{o}dinger triple $(q,p,r)$, which reads
\begin{eqnarray}\label{ineqr9} \Delta p^2+\Delta q^2+\Delta r^2\geq \sqrt{3} \hbar\ . \end{eqnarray}
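The consistency of the choice $r=-q-p$ with the commutation relations (\ref{ineqr8}) can be verified directly, using only $[q,p]=i\hbar$ together with the linearity and antisymmetry of the commutator:
\begin{eqnarray*}
[p,r]=[p,-q-p]=-[p,q]=[q,p]=i\hbar\ ,\qquad
[r,q]=[-q-p,q]=-[p,q]=i\hbar\ .
\end{eqnarray*}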
In this work, we obtain two new Schr\"odinger-like uncertainty relations for the sum and product of variances of two observables by extending the Schr\"odinger uncertainty relation (\ref{ineqr2}). We also give an uncertainty relation for three observables, which is stronger than the one obtained by Kechrimparis and Weigert, and we exhibit its properties in the case of a spin-1 system.
\section{ Schr\"odinger-like Uncertainty Relation} The first Schr\"odinger-like uncertainty relation reads
\begin{align}\label{ineqa} \Delta A^2 + \Delta B^2\geq&
|\langle[A,B]\rangle + \langle\{A,B\}\rangle-2\langle A\rangle\langle B\rangle|\notag\\
&+|\langle\psi|A-e^{i\alpha}B|\psi^\perp\rangle|^2\ , \end{align}
which is valid for arbitrary states $|\psi^\perp\rangle$ orthogonal to the state of the system $|\psi\rangle$ and stronger than Maccone and Pati's uncertainty relation (\ref{ineqr3}) (Fig. \ref{two}), where $\alpha$ is a real constant. If $\langle \{A,B\} \rangle-2\langle A\rangle\langle B\rangle >0$, then $\alpha = \arctan \tfrac{-i\langle[A,B]\rangle} {\langle\{A,B\} \rangle- 2\langle A\rangle\langle B\rangle}$; if $\langle\{A,B\}\rangle-2\langle A\rangle \langle B\rangle<0$, then $\alpha = \pi+\arctan \tfrac{-i\langle[A,B] \rangle} {\langle\{A,B\}\rangle-2\langle A\rangle\langle B\rangle}$; and while $\langle\{A,B\} \rangle-2\langle A\rangle\langle B\rangle=0$, it reduces to (\ref{ineqr3}). Removing the last term of (\ref{ineqa}), it then turns into
\begin{eqnarray}\label{ineqc} \Delta A^2 + \Delta B^2\geq
|\langle[A,B]\rangle+\langle\{A,B\}\rangle-2\langle A\rangle\langle B\rangle|\ , \end{eqnarray}
which is implied by the Schr\"odinger uncertainty relation (\ref{ineqr2}).
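Indeed, combining the elementary bound $\Delta A^2+\Delta B^2\geq2\Delta A\Delta B$ with (\ref{ineqr2}), and noting that $\langle[A,B]\rangle$ is purely imaginary while $\langle\{A,B\}\rangle-2\langle A\rangle\langle B\rangle$ is real, we get
\begin{eqnarray*}
\Delta A^2+\Delta B^2&\geq&2\sqrt{\left|\frac 12\langle[A,B]\rangle\right|^2
+\left|\frac 12\langle\{A,B\}\rangle-\langle A\rangle\langle B\rangle\right|^2}\\
&=&\left|\langle[A,B]\rangle+\langle\{A,B\}\rangle-2\langle A\rangle\langle B\rangle\right|\ .
\end{eqnarray*}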
The second Schr\"odinger-like uncertainty relation is
\begin{eqnarray}\label{ineqb}
\Delta A^2\Delta B^2\geq\frac{\left|\tfrac 12\langle[A,B]\rangle\right|^2
+ \left|\tfrac{1}{2}\langle\{A,B\}\rangle-\langle A\rangle\langle B\rangle\right|^2}
{(1-\tfrac 12|\langle\psi|\tfrac{A} {\Delta A}-e^{i\alpha}\tfrac {B}{\Delta B}|\psi^\perp\rangle|^2)^2}, \end{eqnarray}
which is stronger than the Schr\"odinger uncertainty relation (\ref{ineqr2}) and reduces to (\ref{ineqr6}) when $\langle\{A,B\} \rangle-2\langle A\rangle\langle B\rangle=0$.
\begin{figure}
\caption{Example of comparison between the Maccone-Pati uncertainty relation (MP) (\ref{ineqr3}) and the new uncertainty relation (NEW) (\ref{ineqa}). Note that the new uncertainty relation (\ref{ineqa}) is stronger than the relation (\ref{ineqr3}). We choose two components of the angular momentum $A=J_x$ and $B=J_y$ for a spin-1 particle, and a family of states parameterized by $\theta$ and $\phi$ as $|\psi\rangle=\cos\theta|1\rangle+\sin\theta e^{i\phi}|-1\rangle$, with $|\pm1\rangle$ being eigenstates of $J_z$ corresponding to the eigenvalues $\pm1$. The upper red line denotes the sum of variances $\Delta J_x^2+\Delta J_y^2$ (SV). The blue points exhibit domains of (\ref{ineqr3}) in ({\bf a}) and of (\ref{ineqa}) in ({\bf b}: $\phi=\pi/6$) and ({\bf c}: $\phi=\pi/4$) with 20 randomly chosen states $|\psi^\perp\rangle$ for each of the 200 values of the phase $\theta$. The green curve is the lower bound given by the Schr\"odinger uncertainty relation (SC) (\ref{ineqc}). The black curve is the lower bound set by the Heisenberg-Robertson uncertainty relation (HR) (\ref{ineqr1}). The relation (\ref{ineqr3}) gives the same results for any value of $\phi$ ({\bf a}). If $\phi$ is not equal to $0$ or $\pi$, the new uncertainty relation (\ref{ineqa}) always gives a nontrivial bound ({\bf b}) and ({\bf c}). When $\phi$ is equal to $0$ or $\pi$, the relation (\ref{ineqa}) reduces to the relation (\ref{ineqr3}) and gives the same results as ({\bf a}).}
\label{two}
\end{figure}
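As a quick numerical cross-check of the quantities compared in Fig.~\ref{two}, the following Python/NumPy sketch (illustrative only; the function names are ours) evaluates the sum of variances together with the Heisenberg-Robertson and Schr\"odinger lower bounds for the family of states in the caption:

```python
import numpy as np

# Spin-1 angular momentum matrices in the {|1>, |0>, |-1>} basis.
Jx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
Jy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)

def bounds(theta, phi):
    """Sum of variances and the Schrodinger / Heisenberg-Robertson bounds."""
    psi = np.array([np.cos(theta), 0.0, np.sin(theta) * np.exp(1j * phi)])
    ev = lambda op: psi.conj() @ op @ psi
    vx = ev(Jx @ Jx).real - ev(Jx).real ** 2
    vy = ev(Jy @ Jy).real - ev(Jy).real ** 2
    comm = ev(Jx @ Jy - Jy @ Jx)                 # <[Jx, Jy]>, purely imaginary
    cov = ev(Jx @ Jy + Jy @ Jx).real / 2 - ev(Jx).real * ev(Jy).real
    hr = abs(comm) ** 2 / 4                      # (Dx Dy)^2 >= |<[A,B]>|^2 / 4
    sc = hr + cov ** 2                           # Schrodinger adds the covariance term
    # Product bounds imply sum bounds via Dx^2 + Dy^2 >= 2 Dx Dy.
    return vx + vy, 2 * np.sqrt(sc), 2 * np.sqrt(hr)
```

For every $\theta$ and $\phi$ the sum of variances dominates both bounds, with the Schr\"odinger bound at least as large as the Heisenberg-Robertson one.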
\emph{Proof}: To prove the uncertainty relation (\ref{ineqa}), we start by introducing a general inequality \begin{eqnarray}
\|c_A\bar{A}|\psi\rangle
-c_B e^{i\tau}\bar{B}|\psi\rangle
+c(|\psi\rangle-|\phi\rangle)\|^2\geq 0\ ,
\end{eqnarray}
with $\bar{A}=A-\langle\psi|A|\psi\rangle$, $\bar{B} = B- \langle \psi|B| \psi \rangle$; $c_A$, $c_B$, $c$ and $\tau$ being real numbers, and $|\phi\rangle$ being an arbitrary state. Calculating the modulus squared, we have \begin{eqnarray}\label{ineq0} c_A^2\Delta A^2+c_B^2\Delta B^2\geq-\lambda c^2-c_A c_B c\beta + c_A c_B\delta\ . \end{eqnarray} Here, $\Delta A^2$ and $\Delta B^2$ are the variances of $A$ and $B$
calculated on $|\psi\rangle$, respectively. $\lambda\equiv 2(1-\text{Re} [\langle\psi|\phi \rangle])$, $\beta\equiv 2\text{Re} [\langle\psi| (-\bar{A}/c_B+e^{-i\tau}\bar{B}/c_A)| \phi\rangle]$, and $\delta\equiv2\text{Re}[e^{i\tau}\langle\psi|\bar{A}\bar{B}|\psi\rangle]$. Choosing the value of $c$ that maximizes the right-hand side of (\ref{ineq0}), namely $c=-c_A c_B\beta/2\lambda$, we then get \begin{eqnarray}\label{ineq1} c_A^2\Delta A^2+c_B^2\Delta B^2\geq \frac{(c_A c_B\beta)^2}{4\lambda} + c_A c_B\delta\ . \end{eqnarray} Choosing further $c_A=c_B=1$, we obtain
\begin{align}\label{ineq2} \Delta A^2 + \Delta B^2\geq&
\frac{\{\text{Re}[\langle\psi|-\bar{A}+e^{-i\tau}\bar{B}|\phi\rangle]\}^2}
{2(1-\text{Re}[\langle\psi|\phi\rangle])}\notag\\
&+2\text{Re}[e^{i\tau}\langle\psi|\bar{A}\bar{B}|\psi\rangle]\ . \end{align}
Suppose $|\phi\rangle = \cos \theta| \psi\rangle + e^{i\phi} \sin\theta |\psi^{\perp}\rangle$, where $| \psi^{\perp}\rangle$ is orthogonal to
$|\psi\rangle$. Taking the limit $\theta\rightarrow 0$, so that
$|\phi\rangle \rightarrow |\psi\rangle$, the inequality (\ref{ineq2}) yields
\begin{align}\label{ineq3} \Delta A^2 + \Delta B^2\geq&
\{\text{Re}[e^{i\phi}\langle\psi|
-A+e^{-i\tau}B|\psi^\perp\rangle]\}^2\notag\\
&+2\text{Re}[e^{i\tau}\langle\psi|\bar{A}\bar{B}|\psi\rangle]\ . \end{align}
There exists $\tau=-\alpha$ such that $e^{i\tau}\langle\psi|\bar{A}\bar{B}|\psi\rangle$ is real and can be written as $|\langle\bar{A}\bar{B}\rangle|$; the first term of (\ref{ineq3}) then becomes $\{\text{Re}[e^{i\phi}\langle\psi|
-A+e^{i\alpha}B|\psi^\perp\rangle]\}^2$. Choosing a proper phase $\phi$ which makes the term in the square brackets real, it can then be expressed as $|\langle\psi|A-e^{i\alpha}B|\psi^\perp\rangle|^2 $. In the end, the inequality (\ref{ineq3}) turns into \begin{align}\label{ineq4}
\Delta A^2 + \Delta B^2\geq 2|\langle\bar{A}\bar{B}\rangle|+|\langle \psi|A-e^{i\alpha}B|\psi^\perp\rangle|^2\ . \end{align}
For the quantity $\langle\bar{A}\bar{B}\rangle$, it is easy to see that \begin{align}\label{ineq5} \langle\bar{A}\bar{B}\rangle=&\frac{1}{2} \langle[\bar{A}, \bar{B}]\rangle+\frac{1}{2}\langle\{\bar{A}, \bar{B}\}\rangle\notag\\ =&\frac{1}{2}\langle[A,B]\rangle+\frac{1}{2} \langle\{A,B\}\rangle -\langle A\rangle\langle B\rangle\ . \end{align} Substituting (\ref{ineq5}) into (\ref{ineq4}), one obtains the uncertainty relation (\ref{ineqa}).
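Since (\ref{ineq4}) holds for an arbitrary $|\psi^\perp\rangle$ orthogonal to $|\psi\rangle$, it can be stress-tested numerically. The sketch below (Python/NumPy, illustrative; all names are ours) draws random Hermitian observables and random state pairs and evaluates both sides of (\ref{ineq4}) with $\alpha=\arg\langle\bar{A}\bar{B}\rangle$:

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_herm(d):
    """A random d x d Hermitian matrix."""
    m = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (m + m.conj().T) / 2

def check_ineq4(d=4):
    """Both sides of (ineq4) for random observables and random |psi>, |psi_perp>."""
    A, B = rand_herm(d), rand_herm(d)
    psi = rng.normal(size=d) + 1j * rng.normal(size=d)
    psi /= np.linalg.norm(psi)
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    perp = v - (psi.conj() @ v) * psi            # Gram-Schmidt against psi
    perp /= np.linalg.norm(perp)
    ev = lambda op: psi.conj() @ op @ psi
    corr = ev(A @ B) - ev(A) * ev(B)             # <A_bar B_bar>
    alpha = np.angle(corr)                       # e^{-i alpha} <A_bar B_bar> = |<A_bar B_bar>|
    varsum = ev(A @ A).real - ev(A).real ** 2 + ev(B @ B).real - ev(B).real ** 2
    extra = abs(psi.conj() @ (A - np.exp(1j * alpha) * B) @ perp) ** 2
    return varsum, 2 * abs(corr) + extra
```

In every trial the left-hand side dominates the right-hand side, up to rounding.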
To prove the improved Schr\"odinger uncertainty relation (\ref{ineqb}), we can choose $c_A=\Delta B$ and $c_B=\Delta A$ in (\ref{ineq1}), which then becomes
\begin{eqnarray}\label{ineq6}
\Delta A\Delta B \geq&\frac{\Delta A \Delta B\{\text{Re}[\langle\psi|
-\frac{\bar{A}}{\Delta A}+e^{-i\tau}\frac{\bar{B}}{\Delta B}|\phi\rangle]\}^2}
{4(1-\text{Re}[\langle\psi|\phi\rangle])}\notag\\
&+\text{Re}[e^{i\tau}\langle\psi|\bar{A}\bar{B}|\psi\rangle]\ . \end{eqnarray}
Taking the $|\phi\rangle\rightarrow|\psi\rangle$ limit and using the same procedure described above, the inequality (\ref{ineq6}) becomes
\begin{eqnarray}\label{ineq7}
&\Delta A\Delta B\geq\frac{\Delta A\Delta B}{2}\left|\langle\psi|\frac{A}{\Delta A}-e^{i\alpha}\frac {B}{\Delta B}|\psi^\perp\rangle\right|^2\notag\\
&+\left|\frac 12\langle[A,B]\rangle+\frac 12\langle\{A,B\}\rangle-\langle A\rangle\langle B\rangle\right|, \end{eqnarray}
which tells
\begin{eqnarray}\label{ineq8}
\Delta A\Delta B\geq\frac{\left| \frac 12\langle[A,B]\rangle
+\frac 12\langle\{A,B\}\rangle-\langle A \rangle \langle B\rangle\right|}
{1-\frac 12\left|\langle\psi| \frac{A}{\Delta A}-e^{i\alpha}\frac {B}{\Delta B}|\psi^\perp\rangle\right|^2}\ . \end{eqnarray}
From (\ref{ineq8}) one can simply obtain the improved Schr\"odinger-like uncertainty relation (\ref{ineqb}).
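The relation (\ref{ineqb}) follows from (\ref{ineq7}), so it suffices to check (\ref{ineq7}) in the equivalent product form $\Delta A\Delta B\,(1-\tfrac12|\cdot|^2)\geq|\langle\bar{A}\bar{B}\rangle|$, which avoids dividing by a possibly small denominator. An illustrative Python/NumPy sketch (the names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_herm(d):
    """A random d x d Hermitian matrix."""
    m = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (m + m.conj().T) / 2

def check_ineq7(d=4):
    """Both sides of (ineq7): DA*DB*(1 - |...|^2/2) >= |<A_bar B_bar>|."""
    A, B = rand_herm(d), rand_herm(d)
    psi = rng.normal(size=d) + 1j * rng.normal(size=d)
    psi /= np.linalg.norm(psi)
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    perp = v - (psi.conj() @ v) * psi            # Gram-Schmidt against psi
    perp /= np.linalg.norm(perp)
    ev = lambda op: psi.conj() @ op @ psi
    dA = np.sqrt(ev(A @ A).real - ev(A).real ** 2)
    dB = np.sqrt(ev(B @ B).real - ev(B).real ** 2)
    corr = ev(A @ B) - ev(A) * ev(B)             # <A_bar B_bar>
    alpha = np.angle(corr)
    over = abs(psi.conj() @ (A / dA - np.exp(1j * alpha) * B / dB) @ perp) ** 2
    return dA * dB * (1 - over / 2), abs(corr)
```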
Note that as this work was being finished, several papers relating to Maccone and Pati's work \cite{mp} appeared. Eq. (4) of Ref. \cite{ba} and Eq. (55) of Ref. \cite{sun} are similar to our uncertainty relation (\ref{ineqa}). Ref. \cite{ba} mentions that Eq. (3) of Ref. \cite{mp} may still suffer from the triviality problem in the special cases $|\psi^\perp\rangle =\frac{(A-\langle A\rangle)|\psi\rangle}{\Delta A}$ or $|\psi^\perp\rangle =\frac{(B-\langle B\rangle)|\psi\rangle}{\Delta B}$, which means that the uncertainty relations in this work also have such a drawback. In practice, if $|\psi^\perp\rangle$ is chosen properly, one can certainly avoid this triviality problem, e.g., by taking $|\psi^\perp\rangle$ orthogonal to $|\psi\rangle$ but not orthogonal to $(A-e^{-i\alpha}B)|\psi\rangle$. Moreover, it is worth mentioning that certain kinds of uncertainty relations, e.g. \cite{li,huang}, are quantum-state independent and hence immune to the triviality problem.
\section{Uncertainty Relation for Three Observables} \subsection{New uncertainty relation}
One may generalize the Schr\"odinger uncertainty relation (\ref{ineqr2}) to three observables trivially, that is
\begin{align}\label{ineqsch} &\Delta A^2 + \Delta B^2+\Delta C^2\geq\notag\\
&\frac 12\{\left|\langle[A,B]\rangle+\langle\{A,B\}\rangle-2\langle A\rangle\langle B\rangle\right|\notag\\
&+\left|\langle[B,C]\rangle+\langle\{B,C\}\rangle-2\langle B\rangle\langle C\rangle\right|\notag\\
&+\left|\langle[C,A]\rangle+\langle\{C,A\}\rangle-2\langle C\rangle\langle A\rangle\right|\}\ , \end{align} which is simply the sum of the inequality (\ref{ineqc}). However, we will prove that the following more stringent inequality exists:
\begin{align}\label{th1} &\Delta A^2+\Delta B^2+\Delta C^2\geq
\frac{1}{3}\left|\langle\psi^\perp_{ABC}|A+B+C|\psi\rangle\right|^2 \notag\\
&+\frac{\sqrt3}{3}\left|i\langle[A,B,C]\rangle\right|
+\frac{2}{3}\left|\langle\psi|A+e^{\pm i \frac{2\pi}{3}}B
+e^{\pm i \frac{4\pi}{3}}C|\psi^\perp\rangle\right|^2, \end{align}
which is valid for arbitrary states $|\psi^\perp\rangle$ orthogonal to the state of the system $|\psi\rangle$, where $|\psi^\perp_{ABC} \rangle \propto(A+B+C-\langle A+B+C\rangle)|\psi\rangle$, $\langle[A,B,C]\rangle\equiv\langle[A,B]\rangle+ \langle[B,C]\rangle+\langle[C,A]\rangle$, the sign in the last term of (\ref{th1}) is $+(-)$ when $ i\langle[A,B,C]\rangle$ is positive (negative).
Applying Schr\"odinger triple $(p,q,r)$ to uncertainty relation (\ref{th1}), the Kechrimparis and Weigert's relation (\ref{ineqr9}) can be readily obtained. Choosing an arbitrary state $|\psi\rangle$ and letting $A=q$, $B=p$ and $C=r$, the uncertainty relation (\ref{th1}) then goes like
\begin{align}\label{tha} &\Delta q^2+\Delta p^2+\Delta r^2\geq\sqrt{3}\hbar\notag\\
&+\frac{2}{3}\left|\langle\psi|q+e^{\pm i \frac{2\pi}{3}}p
+e^{\pm i \frac{4\pi}{3}}r|\psi^\perp\rangle\right|^2. \end{align}
Discarding the last term on the right-hand side of the inequality, one obtains Kechrimparis and Weigert's relation (\ref{ineqr9}). Since the extra term in (\ref{tha}) is nonnegative, the uncertainty relation (\ref{th1}) is manifestly stronger than Kechrimparis and Weigert's relation (\ref{ineqr9}).
As suggested to us by an anonymous Referee, most uncertainty relations do not depend on the order in which one labels the operators, but the three terms on the right-hand side of inequality (\ref{th1}) are not invariant under a reordering of the three operators or under sign flips (e.g., substituting $A$ with $-A$). To obtain a symmetric uncertainty relation for three incompatible observables, we start from the identity \begin{align}\label{thb} \Delta A^2+\Delta B^2+\Delta C^2=\Delta (\pm A)^2+\Delta (\pm B)^2+\Delta (\pm C)^2. \end{align}
Applying inequality (\ref{th1}) to the right-hand side of (\ref{thb}) with the same $|\psi^\perp\rangle$ gives, in general, four different lower bounds ${\cal L}_{i}$ $(i=1,2,3,4)$; among them there is always one for which $|\langle [A,B,C]\rangle|=|\langle [A,B]\rangle| + |\langle [B,C]\rangle| + |\langle[C,A]\rangle|$. Dropping the remaining nonnegative terms, we must have the inequality \begin{align}\label{thc} &\Delta A^2+\Delta B^2+\Delta C^2\notag\\
\geq&\frac{\sqrt3}{3}\{|\langle [A,B]\rangle| + |\langle [B,C]\rangle| + |\langle[C,A]\rangle|\} \end{align} which does not depend on the order in which one labels the operators and serves as a suitable generalisation of the Heisenberg-Robertson inequality (\ref{ineqr1}). The relation (\ref{ineqr9}) can also be derived from the relation (\ref{thc}).
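Because the derivation of (\ref{thc}) nowhere uses special properties of angular momenta, it should hold for arbitrary Hermitian $A$, $B$, $C$; the following illustrative Python/NumPy sketch (the names are ours) checks this on random observables and states:

```python
import numpy as np

rng = np.random.default_rng(2)

def rand_herm(d):
    """A random d x d Hermitian matrix."""
    m = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (m + m.conj().T) / 2

def check_thc(d=3):
    """Both sides of (thc) for random observables A, B, C and a random state."""
    A, B, C = rand_herm(d), rand_herm(d), rand_herm(d)
    psi = rng.normal(size=d) + 1j * rng.normal(size=d)
    psi /= np.linalg.norm(psi)
    ev = lambda op: psi.conj() @ op @ psi
    var = lambda op: ev(op @ op).real - ev(op).real ** 2
    cabs = lambda X, Y: abs(ev(X @ Y - Y @ X))   # |<[X, Y]>|
    lhs = var(A) + var(B) + var(C)
    rhs = (cabs(A, B) + cabs(B, C) + cabs(C, A)) / np.sqrt(3)
    return lhs, rhs
```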
\emph{Proof}: To prove the uncertainty relation (\ref{th1}), we start by introducing a general inequality \begin{align}\label{th}
\|\bar{A}|\psi\rangle
+ e^{i\rho}\bar{B}|\psi\rangle+ e^{i\sigma}\bar{C}|\psi\rangle
+c(|\psi\rangle-|\phi\rangle)\|^2 \geq 0,
\end{align}
with $\bar{A}=A-\langle\psi|A|\psi\rangle$, $\bar{B}=B-\langle\psi|B|\psi\rangle$,
$\bar{C}=C-\langle\psi|C|\psi\rangle$, where $\rho$, $\sigma$ and $c$ are real constants and $|\phi\rangle$ is an arbitrary state. This inequality yields the strongest results when $\rho$ and $\sigma$ are equal to $\pm\frac{2\pi}{3}$ and $\pm\frac{4\pi}{3}$, respectively (see Appendix). Expanding the modulus squared, we find \begin{align}\label{th2} \Delta A^2+\Delta B^2+\Delta C^2\geq-\lambda c^2+\beta c-\delta, \end{align} by defining \begin{align}
\beta=&2\text{Re}(\langle\psi|\bar{A}+e^{\mp i\frac{2\pi}{3}}\bar{B}
+e^{\mp i\frac{4\pi}{3}}\bar{C}|\phi\rangle),\\
\lambda=&2(1-\text{Re}(\langle\psi|\phi\rangle)),\\ \delta=&-\frac12\langle\{A,B,C\}\rangle\pm i\frac{\sqrt{3}}{2}\langle[A,B,C]\rangle\notag\\
&+\langle A\rangle\langle B\rangle+\langle A\rangle\langle C\rangle+\langle B\rangle\langle C\rangle\ .\label{pi} \end{align} Here, $\langle\{A,B,C\}\rangle\equiv\langle\{A,B\}\rangle + \langle\{A,C\}\rangle+\langle\{B,C\}\rangle$. Noticing \begin{align}\label{th6} &\Delta(A+B+C)^2\notag\\ =&\Delta A^2 + \Delta B^2+ \Delta C^2+\langle\{A,B,C\}\rangle\notag\\
&-2(\langle A\rangle\langle B\rangle+\langle A\rangle\langle C\rangle+\langle B\rangle\langle C\rangle)\ , \end{align} the equality (\ref{pi}) can then be reexpressed as \begin{align}\label{th7} \delta=&-\frac12[\Delta(A+B+C)^2-(\Delta A^2+\Delta B^2+\Delta C^2)]\notag\\ &\pm i\frac{\sqrt{3}}{2}\langle[A,B,C]\rangle\ . \end{align}
Assuming $|\phi\rangle =\cos\theta|\psi\rangle +e^{i\phi}\sin\theta|\psi^{\perp}\rangle$ and using the same techniques employed in deriving (\ref{ineqa}), we obtain \begin{align} &\Delta A^2 + \Delta B^2+ \Delta C^2\geq \frac{1}{3}\Delta (A+B+C)^2\notag\\ &\mp i\frac{\sqrt3}{3}\langle[A,B,C]\rangle
+\frac{2}{3}\left|\langle\psi|A+e^{\mp i \frac{2\pi}{3}}B+
e^{\mp i \frac{4\pi}{3}}C|\psi^\perp\rangle\right|^2\ , \end{align}
which is equivalent to the uncertainty relation (\ref{th1}) since $\Delta (A+B+C)^2=\left|\langle\psi^\perp_{ABC}|A+B+C|\psi\rangle\right|^2$. Here the sign should be chosen properly so that $\mp i\frac{\sqrt3}{3}\langle[A,B,C]\rangle$ (a real quantity) is positive.
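The completed relation (\ref{th1}), including the sign rule, can likewise be checked numerically. In the sketch below (Python/NumPy, illustrative; the names are ours) the first term on the right-hand side is computed as $\frac{1}{3}\Delta(A+B+C)^2$:

```python
import numpy as np

rng = np.random.default_rng(3)

def rand_herm(d):
    """A random d x d Hermitian matrix."""
    m = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (m + m.conj().T) / 2

def check_th1(d=3):
    """Both sides of (th1), using Delta(A+B+C)^2 for the first right-hand term."""
    A, B, C = rand_herm(d), rand_herm(d), rand_herm(d)
    psi = rng.normal(size=d) + 1j * rng.normal(size=d)
    psi /= np.linalg.norm(psi)
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    perp = v - (psi.conj() @ v) * psi            # Gram-Schmidt against psi
    perp /= np.linalg.norm(perp)
    ev = lambda op: psi.conj() @ op @ psi
    var = lambda op: ev(op @ op).real - ev(op).real ** 2
    triple = ev(A @ B - B @ A + B @ C - C @ B + C @ A - A @ C)  # <[A,B,C]>
    k = (1j * triple).real                       # i<[A,B,C]>, a real number
    s = 1.0 if k >= 0 else -1.0                  # sign rule of (th1)
    w = np.exp(s * 2j * np.pi / 3)
    M = A + w * B + w ** 2 * C
    lhs = var(A) + var(B) + var(C)
    rhs = var(A + B + C) / 3 + abs(k) / np.sqrt(3) \
        + 2 * abs(psi.conj() @ M @ perp) ** 2 / 3
    return lhs, rhs
```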
Recently, Ref.~\cite{sun} gave variance-based uncertainty equalities for any pair of incompatible observables $A$ and $B$. When applied to three observables, the uncertainty equality reads \begin{align}\label{threeequ} &\Delta A^2+\Delta B^2+\Delta C^2\notag\\
=&\frac{1}{3}\left|\langle\psi^\perp_{ABC}|A+B+C|\psi\rangle\right|^2
+\frac{\sqrt3}{3}\left|i\langle[A,B,C]\rangle\right|\notag\\
&+\frac{2}{3}\sum^{d-1}_{n=1}\left|\langle\psi|A+e^{\pm i \frac{2\pi}{3}}B
+e^{\pm i \frac{4\pi}{3}}C|\psi^\perp_n\rangle\right|^2, \end{align}
where $\{|\psi\rangle\}\cup\{|\psi_n^{\perp}\rangle\}_{n=1}^{d-1}$ forms an orthonormal and complete basis of the $d$-dimensional Hilbert space, and the sign in the last term of (\ref{threeequ}) is $+(-)$ when $i\langle[A,B,C]\rangle$ is positive (negative). If we retain only one term associated with some $|\psi^{\perp}\rangle\in\{|\psi^\perp_{n}\rangle\}_{n=1}^{d-1}$ in the summation and discard the others, it reduces to the uncertainty inequality (\ref{th1}).
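The equality (\ref{threeequ}) can be verified without constructing the basis explicitly, since completeness gives $\sum_n|\langle\psi|M|\psi^\perp_n\rangle|^2=\langle\psi|MM^\dagger|\psi\rangle-|\langle\psi|M|\psi\rangle|^2$ with $M=A+e^{\pm i2\pi/3}B+e^{\pm i4\pi/3}C$. An illustrative Python/NumPy sketch (the names are ours):

```python
import numpy as np

rng = np.random.default_rng(4)

def rand_herm(d):
    """A random d x d Hermitian matrix."""
    m = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (m + m.conj().T) / 2

def check_equality(d=4):
    """Both sides of (threeequ); the basis sum is done in closed form."""
    A, B, C = rand_herm(d), rand_herm(d), rand_herm(d)
    psi = rng.normal(size=d) + 1j * rng.normal(size=d)
    psi /= np.linalg.norm(psi)
    ev = lambda op: psi.conj() @ op @ psi
    var = lambda op: ev(op @ op).real - ev(op).real ** 2
    triple = ev(A @ B - B @ A + B @ C - C @ B + C @ A - A @ C)  # <[A,B,C]>
    k = (1j * triple).real
    s = 1.0 if k >= 0 else -1.0                  # sign rule of (threeequ)
    w = np.exp(s * 2j * np.pi / 3)
    M = A + w * B + w ** 2 * C
    # sum_n |<psi|M|psi_n_perp>|^2 over the orthogonal complement
    # equals <psi|M M^dag|psi> - |<psi|M|psi>|^2 by completeness.
    tail = (ev(M @ M.conj().T) - abs(ev(M)) ** 2).real
    lhs = var(A) + var(B) + var(C)
    rhs = var(A + B + C) / 3 + abs(k) / np.sqrt(3) + 2 * tail / 3
    return lhs, rhs
```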
\begin{figure}
\caption{Uncertainty relation for three components of the angular momentum $A=J_x$, $B=J_y$ and $C=J_z$ for a spin-1 particle. The state chosen is parameterized by $\theta$ and $\phi$ as $|\psi\rangle=\sin\theta\cos \phi|1\rangle+\sin\theta \sin\phi|0\rangle+\cos\theta|-1\rangle$. Here, $|\pm1\rangle$ and $|0\rangle$ are eigenstates of $J_z$ corresponding to eigenvalues $\pm1$ and $0$. The diagrams illustrate how the different uncertainty relations (\ref{ineqsch}) and (\ref{th1}) restrict the possible values of the sum of variances for different values of $\phi$ ($\phi=0$ in ({\bf a}) and $\phi=\pi/4$ in ({\bf b})). The upper red curve shows $\Delta J_x^2+\Delta J_y^2+\Delta J_z^2$. The blue points exhibit domains of (\ref{th1}) with 15 randomly chosen states $|\psi^\perp\rangle$ for each of the 200 values of the parameter $\theta$. The dash-dotted green curve is the bound given by the trivially generalized Schr\"odinger uncertainty relation (\ref{ineqsch}) for three observables.}
\label{three}
\end{figure}
\subsection{Application to spin-1 particle state}
As an illustration of the uncertainty relation (\ref{th1}), we consider a simple case of a spin-1 particle state. Let $A=J_x$, $B=J_y$ and $C=J_z$ be the three components of the angular momentum, and take \begin{eqnarray}
|\psi\rangle=\sin\theta\cos \phi|1\rangle+\sin\theta \sin\phi|0\rangle+\cos\theta|-1\rangle\ \end{eqnarray} and \begin{align}
|\psi^\perp\rangle=&(\cos\theta\cos\phi\cos\beta e^{i\gamma} -\sin\phi\sin\beta)|1\rangle\notag\\
&+(\cos\theta\sin\phi\cos\beta e^{i\gamma}+\cos\phi\sin\beta )|0\rangle\notag\\
&-(\sin\theta\cos\beta e^{i\gamma})|-1\rangle\ \end{align}
as the states of the system, with $|\pm1\rangle$ and $|0\rangle$ the eigenstates of $J_z$ corresponding to eigenvalues $\pm1$ and $0$. The $\beta$ and $\gamma$ in the orthogonal state are free parameters.
In Fig. \ref{three}, we compare numerically the uncertainty relation obtained in this work, the relation (\ref{th1}), with the simply generalized Schr\"odinger uncertainty relation (\ref{ineqsch}) for three observables.
When $\phi=0$, the relation (\ref{ineqsch}) changes to \begin{eqnarray}
\frac{1}{2}(3-\cos(4\theta))\geq \frac{1}{2}\left|\cos(2\theta)\right|. \end{eqnarray} Discarding the last term in relation (\ref{th1}), it then reads \begin{eqnarray}
\frac{1}{2}(3-\cos(4\theta))\geq\frac{1}{6}\left(2\sqrt{3} \left|\cos(2\theta)\right|-\cos(4\theta)+3\right). \end{eqnarray}
Since the state $|\psi^\perp\rangle$ in (\ref{th1}) is an arbitrary state orthogonal to $|\psi\rangle$, the blue points in Fig. \ref{three} illustrate the domain of (\ref{th1}) with 15 randomly chosen states $|\psi^\perp\rangle$ for each of the 200 values of the parameter $\theta$. We find that the uncertainty relation (\ref{th1}) is nontrivial for all values of $\theta$ and stronger than the simply generalized Schr\"odinger uncertainty relation (\ref{ineqsch}).
When $\phi=\pi/4$ and $\theta\in(0,0.3067)\cup(0.6991,\pi)$, the uncertainty relation (\ref{th1}) is also stronger than the generalized Schr\"odinger uncertainty relation (\ref{ineqsch}). In fact, if one chooses $|\psi^\perp\rangle$ properly, the uncertainty relation (\ref{th1}) is always stronger than (\ref{ineqsch}) for any value of $\theta$. This means that the whole incompatible nature of three observables cannot be simply represented by three independent pairwise incompatible observables.
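The closed form quoted above for $\phi=0$ can be reproduced by a direct numerical evaluation; in this illustrative Python/NumPy sketch (the names are ours) the sum of variances is computed from the spin-1 matrices:

```python
import numpy as np

# Spin-1 matrices in the {|1>, |0>, |-1>} basis.
Jx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
Jy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)
Jz = np.diag([1.0, 0.0, -1.0]).astype(complex)

def sum_of_variances(theta, phi):
    """Delta Jx^2 + Delta Jy^2 + Delta Jz^2 for the parameterized state above."""
    psi = np.array([np.sin(theta) * np.cos(phi),
                    np.sin(theta) * np.sin(phi),
                    np.cos(theta)], dtype=complex)
    ev = lambda op: (psi.conj() @ op @ psi).real
    return sum(ev(J @ J) - ev(J) ** 2 for J in (Jx, Jy, Jz))
```

At $\phi=0$ the result reduces to $\frac{1}{2}(3-\cos(4\theta))$, in agreement with the display above.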
\section{Conclusions}
In this work, we have obtained a Schr\"odinger-like uncertainty relation (\ref{ineqa}) based on the sum of variances of two observables, which is stronger than the uncertainty relation (\ref{ineqr3}) given by Maccone and Pati. Meanwhile, we have also developed an improved Schr\"odinger-like uncertainty relation (\ref{ineqb}) which is stronger than the Schr\"odinger uncertainty relation (\ref{ineqr2}). Furthermore, we have obtained an uncertainty relation which holds for three observables, and it is proven to be stronger than the uncertainty relation (\ref{ineqr9}) given by Kechrimparis and Weigert. Finally, as an illustration, we have taken a spin-1 particle system as an example to show that the uncertainty relation (\ref{th1}) obtained in this work is stronger than the simply generalized Schr\"odinger uncertainty relation, which means that the whole incompatible nature of three observables cannot be simply represented by the natures of three independent pairwise incompatible observables.
{\bf Acknowledgments} We are grateful to Junli Li and Zhiyong Bao for helpful discussions, and to Rui Xu for reading through the manuscript and for suggestions. This work was supported in part by the Ministry of Science and Technology of the People's Republic of China (2015CB856703), and by the National Natural Science Foundation of China (NSFC) under the grants 11175249 and 11375200.
\onecolumngrid \section*{Appendix}
To illustrate why we choose $\rho=\pm\frac{2\pi}{3}$ and $\sigma=\pm\frac{4\pi}{3}$ in the inequality (\ref{th}), we start from the following three inequalities
\begin{subequations}\label{eq:a} \begin{align}
\|\bar{A}|\psi\rangle+e^{i\rho}\bar{B}|\psi\rangle+e^{i\sigma}\bar{C}|\psi\rangle\|^2 &\geq 0\ ,\label{eq:a1}\\
\|\bar{B}|\psi\rangle+e^{i\rho}\bar{C}|\psi\rangle+e^{i\sigma}\bar{A}|\psi\rangle\|^2 &\geq 0\ ,\label{eq:a2}\\
\|\bar{C}|\psi\rangle+e^{i\rho}\bar{A}|\psi\rangle+e^{i\sigma}\bar{B}|\psi\rangle\|^2 &\geq 0\ ,\label{eq:a3} \end{align} \end{subequations}
where $\rho,\sigma\in(0,2\pi)$, $\bar{A}=A-\langle\psi|A|\psi\rangle$, $\bar{B}=B-\langle\psi|B|\psi\rangle$,
$\bar{C}=C-\langle\psi|C|\psi\rangle$. By expanding the square modulus, we have
\begin{subequations}\label{eq:b} \begin{align} \Delta A^2+\Delta B^2+\Delta C^2 +2\text{Re}(e^{i\rho}\langle\bar{A}\bar{B}\rangle) +2\text{Re}(e^{i\sigma}\langle\bar{A}\bar{C}\rangle) +2\text{Re}(e^{i(\sigma-\rho)}\langle\bar{B}\bar{C}\rangle) \geq 0\ ,\label{b1}\\ \Delta A^2+\Delta B^2+\Delta C^2 +2\text{Re}(e^{i\rho}\langle\bar{B}\bar{C}\rangle) +2\text{Re}(e^{i\sigma}\langle\bar{B}\bar{A}\rangle) +2\text{Re}(e^{i(\sigma-\rho)}\langle\bar{C}\bar{A}\rangle) \geq 0\ ,\label{b2}\\ \Delta A^2+\Delta B^2+\Delta C^2 +2\text{Re}(e^{i\rho}\langle\bar{C}\bar{A}\rangle) +2\text{Re}(e^{i\sigma}\langle\bar{C}\bar{B}\rangle) +2\text{Re}(e^{i(\sigma-\rho)}\langle\bar{A}\bar{B}\rangle) \geq 0\ .\label{b3} \end{align} \end{subequations} To evaluate the inequalities (\ref{eq:b}), we notice
\begin{align}\label{c} &2\text{Re}(e^{i\rho}\langle\bar{E}\bar{F}\rangle) =\cos\rho(\langle\{E,F\}\rangle-2\langle E\rangle\langle F\rangle)+i\sin\rho\langle[E,F]\rangle\ ,\\ &\Delta(A+B+C)^2 =\Delta A^2 + \Delta B^2+ \Delta C^2+\langle\{A,B,C\}\rangle -2(\langle A\rangle\langle B\rangle+\langle A\rangle\langle C\rangle+\langle B\rangle\langle C\rangle)\ , \end{align} where $E$ and $F$ are arbitrary observables, and we define $\langle\{A,B,C\}\rangle\equiv\langle\{A,B\}\rangle + \langle\{A,C\}\rangle+\langle\{B,C\}\rangle$.
Calculating (\ref{b1})+(\ref{b2})+(\ref{b3}), we obtain
\begin{align}\label{d} \Delta A^2+\Delta B^2+\Delta C^2 \geq \mu\Delta(A+B+C)^2+i\nu\langle[A,B,C]\rangle\ , \end{align}
where we define \begin{align}\label{e} &\langle[A,B,C]\rangle\equiv\langle[A,B]\rangle+ \langle[B,C]\rangle+\langle[C,A]\rangle\ ,\\ &\mu=\frac{\cos{\rho}+\cos{\sigma} + \cos(\sigma-\rho)}{\cos{\rho}+\cos{\sigma}+\cos(\sigma-\rho)-3}\ ,\\ &\nu=\frac{\sin{\rho}-\sin{\sigma}+ \sin(\sigma-\rho)}{\cos{\rho}+ \cos{\sigma}+\cos(\sigma-\rho)-3}\ . \end{align}
When $\rho=\pm\frac{2\pi}{3}$ and $\sigma=\pm\frac{4\pi}{3}$, $|\mu|$ and $|\nu|$ both attain their maximum values, namely \begin{align}\label{f} \mu(\rho &=\frac{2\pi}{3}\ ,\sigma=\frac{4\pi}{3})=\frac{1}{3}\ , \hspace{1.1cm} \nu(\rho =\frac{2\pi}{3}\ ,\sigma=\frac{4\pi}{3})=-\frac{1}{\sqrt{3}}\ ;\\ \mu(\rho &=-\frac{2\pi}{3}\ ,\sigma=-\frac{4\pi}{3})=\frac{1}{3}\ ,\hspace{0.5cm} \nu(\rho =-\frac{2\pi}{3}\ ,\sigma=-\frac{4\pi}{3})=\frac{1}{\sqrt{3}}\ . \end{align}
Hence, the inequality (\ref{d}) becomes \begin{align}\label{g} \Delta A^2+\Delta B^2+\Delta C^2 \geq \frac{1}{3}\Delta(A+B+C)^2\pm \frac{i}{\sqrt{3}}\langle[A,B,C]\rangle\ . \end{align}
To attain the maximum value of the right-hand side of (\ref{g}), we choose the sign so that $\pm\frac{i}{\sqrt{3}}\langle[A,B,C]\rangle$ (a real quantity) is positive.
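The step from (\ref{eq:b}) to (\ref{d}) can be checked numerically before any choice of $\rho$ and $\sigma$ is made: summing (\ref{b1})--(\ref{b3}) is equivalent to $(3-P)(\Delta A^2+\Delta B^2+\Delta C^2)+P\,\Delta(A+B+C)^2+iQ\langle[A,B,C]\rangle\geq0$ with $P=\cos\rho+\cos\sigma+\cos(\sigma-\rho)$ and $Q=\sin\rho-\sin\sigma+\sin(\sigma-\rho)$. An illustrative Python/NumPy sketch (the names are ours):

```python
import numpy as np

rng = np.random.default_rng(5)

def rand_herm(d):
    """A random d x d Hermitian matrix."""
    m = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (m + m.conj().T) / 2

def summed_inequality(d=3):
    """Left-hand side of (3-P) Sum(Var) + P Var(A+B+C) + i Q <[A,B,C]> >= 0."""
    A, B, C = rand_herm(d), rand_herm(d), rand_herm(d)
    psi = rng.normal(size=d) + 1j * rng.normal(size=d)
    psi /= np.linalg.norm(psi)
    rho, sigma = rng.uniform(0, 2 * np.pi, size=2)   # arbitrary phases
    ev = lambda op: psi.conj() @ op @ psi
    var = lambda op: ev(op @ op).real - ev(op).real ** 2
    P = np.cos(rho) + np.cos(sigma) + np.cos(sigma - rho)
    Q = np.sin(rho) - np.sin(sigma) + np.sin(sigma - rho)
    triple = ev(A @ B - B @ A + B @ C - C @ B + C @ A - A @ C)  # <[A,B,C]>
    return (3 - P) * (var(A) + var(B) + var(C)) + P * var(A + B + C) \
        + (1j * Q * triple).real
```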
\twocolumngrid
\begin{references}
\bibitem{heis} W. Heisenberg, Z. Phys. 43 (1927) 172.
\bibitem{Kennard} E. Kennard, Z. Phys. 44 (1927) 326.
\bibitem{Weyl} H. Weyl, Gruppentheorie und Quantenmechanik (Hirzel, Leipzig) 1928.
\bibitem{Peres} A. Peres, Quantum Theory: Concepts and Methods (Kluwer Academic, Dordrecht) 1993.
\bibitem{Robertson} H. Robertson, Phys. Rev. 34 (1929) 163.
\bibitem{schroedinger} E. Schr\"odinger, Sitzungsber. Preuss. Akad. Wiss. Berl., Math. Phys. 19 (1930) 296. (An English translation can be found at arXiv:quant-ph/9903100)
\bibitem{PBusch} P. Busch, T. Heinonen and P.J. Lahti, Phys. Rep. 452 (2007) 155.
\bibitem{HHofmann} H.F. Hofmann and S. Takeuchi, Phys. Rev. A 68 (2003) 032103.
\bibitem{OGuhne} O. G\"{u}hne, Phys. Rev. Lett. 92 (2004) 117903.
\bibitem{CAFuchs} C.A. Fuchs and A. Peres,
Phys. Rev. A 53 (1996) 2038.
\bibitem{mp} L. Maccone and A.K. Pati, Phys. Rev. Lett. 113 (2014) 260401.
\bibitem{Kechrimparis} S. Kechrimparis and S. Weigert, Phys. Rev. A 90 (2014) 062118.
\bibitem{ba} V.M. Bannur, arXiv:1502.04853 (2015).
\bibitem{sun} Y. Yao, X. Xiao, X. Wang and C.P. Sun, Phys. Rev. A 91 (2015) 062113.
\bibitem{li} J.L. Li and C.F. Qiao, Scientific Reports 5 (2015) 12708.
\bibitem{huang} Y. Huang, Phys. Rev. A 86 (2012) 024101.
\end{references}
\end{document}
"id": "1504.01137.tex",
"language_detection_score": 0.6055268049240112,
"char_num_after_normalized": null,
"contain_at_least_two_stop_words": null,
"ellipsis_line_ratio": null,
"idx": null,
"lines_start_with_bullet_point_ratio": null,
"mean_length_of_alpha_words": null,
"non_alphabetical_char_ratio": null,
"path": null,
"symbols_to_words_ratio": null,
"uppercase_word_ratio": null,
"type": null,
"book_name": null,
"mimetype": null,
"page_index": null,
"page_path": null,
"page_title": null
} | arXiv/math_arXiv_v0.2.jsonl | null | null |
\begin{document}
\title{Product set estimates for non-commutative groups}
\author{Terence Tao} \address{Department of Mathematics, UCLA, Los Angeles CA 90095-1555} \email{tao@@math.ucla.edu} \thanks{T. Tao is supported by a grant from the Packard Foundation.}
\begin{abstract} We develop the Pl\"unnecke-Ruzsa and Balog-Szemer\'edi-Gowers theory of sum set estimates in the non-commutative setting, with discrete, continuous, and metric entropy formulations of these estimates. We also develop a Freiman-type inverse theorem for a special class of $2$-step nilpotent groups, namely the Heisenberg groups with no $2$-torsion in their centre. \end{abstract}
\maketitle
\section{Introduction}
The field of \emph{additive combinatorics} is concerned with the structure and size properties of sum sets such as $A+B := \{ a+b: a \in A, b \in B \}$ for various sets $A$ and $B$ (in some additive group $G$). One also considers partial sum sets such as $A \stackrel{E}{+} B := \{ a+b: (a,b) \in E \}$ for some\footnote{We use $E$ here instead of the more traditional $G$, as we are reserving $G$ for the ambient group.} $E \subset A \times B$. There are many deep and important results in this theory, but we shall mention three particularly important ones. Firstly, there are the \emph{Pl\"unnecke-Ruzsa sum-set estimates}, which roughly speaking assert that if one sum-set such as $A+B$ is small, then other sum-sets such as $A-B$, $A+B+B$, $A+A$, etc. are also small; see e.g. \cite{plun}, \cite{ruzsa}, \cite{nathanson}, \cite{tao-vu}. Then there is the \emph{Balog-Szemer\'edi-Gowers theorem} \cite{balog}, \cite{gowers-4}, which roughly speaking asserts that if a partial sum-set $A \stackrel{E}{+} B$ is small for some dense subset $E$ of $A \times B$, then there are large subsets $A', B'$ of $A$, $B$ respectively whose \emph{complete} sum-set $A' + B'$ is also small. Finally, there are \emph{inverse sum set theorems}, of which \emph{Freiman's theorem} \cite{frei} (see also \cite{bilu-freiman}, \cite{ruzsa-freiman}, \cite{chang}, \cite{nathanson}, \cite{tao-vu}) is the most famous: it asserts that if $A$ is a finite non-empty subset of a torsion-free abelian group (such as ${{\mathbf Z}}^d$) with $A+A$ small, then $A$ can be efficiently contained in the sum of $O(1)$ arithmetic progressions. These three families of results have had many applications, perhaps most strikingly to the work of Gowers \cite{gowers-4}, \cite{gowers} on quantitative bounds for Szemer\'edi's theorem and to the work of Bourgain and co-authors \cite{bourgain-diffie}, \cite{bourgain-mordell}, \cite{bkt}, \cite{konyagin} on exponential sum estimates in finite fields.
We refer the reader to \cite{tao-vu} for a more detailed treatment of these topics.
The above results are usually phrased in the discrete setting, with $A$ and $B$ being a finite subset of an abelian group such as the lattice ${{\mathbf Z}}^d$, and with the cardinality $|A|$ of a set $A$ used as a measure of size. However, it is easy to transfer these discrete results to a continuous setting, for instance when $A$ and $B$ are open bounded subsets of a Euclidean space ${{\mathbf R}}^d$, and using Lebesgue measure $\mu(A)$ rather than cardinality to measure the size of a set. Indeed one can pass from the continuous case to the discrete case (possibly losing constants which are exponential in the dimension $d$) by discretizing ${{\mathbf R}}^d$ to a fine lattice such as $\varepsilon \cdot {{\mathbf Z}}^d$, applying the discrete sum-set theory, and then taking limits as $\varepsilon \to 0$. For similar reasons, there is little difficulty in transferring the sum-set theory to a metric entropy setting, in which the size of a set $A$ in ${{\mathbf R}}^d$ is measured using a covering number ${\mathcal N}_\varepsilon(A)$. See for instance Propositions \ref{frei-cts}, \ref{frei-entropy} below for examples of these transference techniques.
In this paper we present analogues of the Pl\"unnecke-Ruzsa and Balog-Szemer\'edi theorems in the non-commutative setting, in which $G$ is now a multiplicative group and one studies the size of product sets $A \cdot B := \{ a \cdot b: a \in A, b \in B \}$ and partial product sets $A \stackrel{E}{\cdot} B := \{ a \cdot b: (a,b) \in E \}$; the theory for inverse sumset estimates is significantly more complicated and does not seem to easily extend to the non-commutative setting (as the Fourier-analytic techniques are substantially less effective in this case), though we are able to obtain an inverse theorem for a class of Heisenberg groups, see Theorem \ref{heisen} below. The other two results, however, are more elementary in nature, and much of the theory carries over surprisingly easily to this setting. The one result which fails utterly is the Pl\"unnecke magnification inequality \cite{plun}, which is valid only for commutative graphs and has no known counterpart in the non-commutative setting. Fortunately, one can use other, more elementary combinatorial arguments as a substitute for the Pl\"unnecke inequalities (as was done in \cite{tao-vu}), albeit at the cost of degrading the exponents in the estimates slightly. Another significant issue is that there is no obvious relationship between the size of $A \cdot B$ and of $B \cdot A$ in the non-commutative setting; consider for instance the case when $A$ is a subgroup of $G$, and $B$ is a right coset of $A$. Fortunately, there is a residual relationship between the sets $A \cdot A^{-1}$ and $A^{-1} \cdot A$ in that if one product set is small then a large portion of the other product set is small (see Lemma \ref{eaa} below). This key observation allows us to get around the obstruction of non-commutativity and recover almost all the standard sum-set theory, though in some cases one has to throw out a small exceptional set in order to proceed. 
For instance, if $A$ is the union of a subgroup and a point, then $A \cdot A$ will be small, but higher products such as $A \cdot A \cdot A$ can be large; however, by throwing out this exceptional point one can control all products.
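The asymmetry between $A \cdot B$ and $B \cdot A$ mentioned above — with $A$ a subgroup and $B$ a right coset of $A$, the product $A \cdot B$ is just $B$, while $B \cdot A$ can be strictly larger — can be seen concretely in the symmetric group $S_3$. The following short Python sketch (permutations composed as functions; all names are ours) exhibits the smallest such example:

```python
from itertools import product

def compose(p, q):
    """(p * q)(i) = p(q(i)): composition in the symmetric group S_3."""
    return tuple(p[q[i]] for i in range(3))

def product_set(X, Y):
    """The product set X.Y = {x * y : x in X, y in Y}."""
    return {compose(x, y) for x, y in product(X, Y)}

e, s = (0, 1, 2), (1, 0, 2)   # identity and a transposition
g = (0, 2, 1)                 # an element outside A
A = {e, s}                    # a two-element subgroup
B = product_set(A, {g})       # the right coset A.g

# A.B = A.A.g = A.g = B has 2 elements, while B.A = A.g.A has 4.
sizes = (len(product_set(A, B)), len(product_set(B, A)))
```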
Finally, the passage between the discrete setting, the continuous setting, and the metric entropy setting is not as automatic in the non-commutative setting (such as for the group $SU(2)$) as it is in the case of Euclidean spaces ${{\mathbf R}}^d$, because there are usually no good analogues of the discrete subgroups $\varepsilon \cdot {{\mathbf Z}}^d$ in the general setting. Fortunately, the continuous and discrete theories are almost identical, so much so that we shall treat the two in a unified manner. One can then pass from the continuous setting to a metric entropy setting by standard volume packing arguments, provided that the metric structure is sufficiently compatible with the group structure. Ideally one wants the metric to be bi-invariant, but this is usually only possible when the group $G$ is compact. For non-compact groups such as $SL(2,{{\mathbf R}})$, the metric entropy results that we present here are only satisfactory when all the sets under consideration are contained in a fixed bounded set, in which case the metric structure will be approximately bi-invariant (or more precisely, the group operations are Lipschitz) and the metric balls will obey a doubling property, in which case the volume packing arguments go through without difficulty.
Our main results are as follows. In both the continuous and discrete setting, we classify sets of small tripling, showing that such sets are nothing more than dense subsets of a type of set that we call an \emph{approximate group}; see Theorem \ref{tripling-classify}. As for sets of small doubling (or pairs of sets with small product set), we have a slightly different classification, showing that such sets can be covered efficiently by left or right-translates of an approximate group (Theorem \ref{energy-gleam}). For pairs of sets with small \emph{partial} product set, we show that such sets have large \emph{intersection} with translates of an approximate group of comparable size (Theorem \ref{energy-gleam-2}). In Section \ref{entropy-sec} we extend these results to the metric entropy setting, given some mild hypotheses on the metric. Finally, in Section \ref{inverse-sec} we discuss the inverse product set problem (the noncommutative generalisation of the inverse sum set problem) and present a new theorem in this direction in the context of Heisenberg groups in the absence of $2$-torsion. All of these results are \emph{polynomially reversible} in the sense that we can pass from one class of sets to an equivalent class and then back to the original class, losing only polynomial factors in the parameter $K$ (which should be thought of as a type of doubling constant).
The author thanks Jean Bourgain for encouragement, and for raising the issue of the metric entropy case in the non-commutative setting. He also thanks Imre Ruzsa for very detailed comments and suggestions, and Emmanuel Kowalski for corrections. This work developed from some earlier unpublished notes of the author \cite{tao-noncom}, as well as from portions of the author's book with Van Vu \cite{tao-vu}. In particular, the discrete versions of the results here can largely be found in \cite[\S 2.7]{tao-vu}, although in some cases the proofs are assigned as exercises rather than given in full. The differences between the discrete and the continuous arguments are mostly notational in nature.
\section{Setup and notation}
We now give the unified framework in which to present the discrete and continuous non-commutative sum-set (or more precisely product-set) theory.
\begin{definition}[Multiplicative groups] A \emph{multiplicative group} will be a topological group $G$ (thus the group operation $(x,y) \mapsto x \cdot y$ and the inversion operation $x \mapsto x^{-1}$ are continuous), equipped with a \emph{Haar measure} $\mu$, which for us will be a non-negative Radon measure on $G$ which is invariant under left and right translation and inversion, thus $\mu(x \cdot A) = \mu(A \cdot x) = \mu(A^{-1}) = \mu(A)$ for all measurable $A$ in $G$, where $x \cdot A := \{ x \cdot y: y \in A \}$, $A \cdot x := \{ y \cdot x: y \in A \}$, and $A^{-1} := \{ y^{-1}: y \in A \}$. We denote the multiplicative identity by $1 = 1_G$. We also make the mild non-degeneracy assumption that every non-empty open set has non-zero measure. A \emph{multiplicative set} will be any non-empty open precompact set $A$ in $G$; note that we necessarily have $0 < \mu(A) < \infty$. Given two multiplicative sets $A$ and $B$ we define their product $A \cdot B := \{a \cdot b: a \in A, b \in B \}$; observe that this is also a multiplicative set, as is the inverse set $A^{-1}$. \end{definition}
\begin{remarks} The hypotheses that a multiplicative set is open and precompact (and that $\mu$ is Radon) will allow us to avoid many technical issues concerning measurability and integrability, and we shall in fact not discuss these issues here. Note that we are implicitly assuming that $G$ is locally compact, since otherwise there will be no multiplicative sets to consider. One could weaken the translation and inversion invariance properties of the measure somewhat (so that the group operations only preserve the measure \emph{approximately}) but this would introduce a number of measure-dependent constants into the estimates below and we will not do so here. However, such a generalisation would be useful for studying non-unimodular Lie groups. \end{remarks}
We now give the two main examples of multiplicative groups.
\begin{example}[Discrete case]\label{discrete-ex} Let $G$ be an abstract group (not necessarily abelian). Then we can equip this group with the discrete topology and counting measure $\mu(A)=|A|$ to obtain a multiplicative group. In this case, the multiplicative sets are simply the finite non-empty sets. \end{example}
\begin{example}[Unimodular Lie group case] Let $G$ be a finite-dimensional unimodular Lie group. This is a finite-dimensional manifold and thus comes with a standard topology, and a standard Haar measure (defined up to a normalizing scalar). The multiplicative sets in this case are the non-empty bounded open sets. \end{example}
\begin{remark} In the commutative setting one can pass between the discrete and continuous cases above by standard discretisation arguments, but the connection between the two is less clear in the non-commutative setting. Nevertheless we shall be able to treat both of these cases in a completely unified manner. \end{remark}
\begin{remark} Observe that the hypotheses on the measure $\mu$ are preserved if we multiply the measure $\mu$ by a positive constant. Thus all the estimates we present in this paper will be invariant under this symmetry; roughly speaking, this means that the number of times $\mu$ appears on the left-hand side of an equality will always equal the number of times $\mu$ appears on the right-hand side. (Certain quantities such as the Ruzsa distance $d(A,B)$ and the doubling constant $K$ will be dimensionless, whereas the multiplicative energy ${{\operatorname{E}}}(A,B)$, which we define below, has the units of $\mu^3$.) \end{remark}
Henceforth we fix the multiplicative group $G$ (and the measure $\mu$). In the next few sections we study how the measure of various products such as $\mu(A \cdot B)$, $\mu(A \cdot A)$, $\mu(A \cdot A \cdot A)$, etc. of multiplicative sets are related.
We shall use the notation $X = O(Y)$, $Y = \Omega(X)$, $X \lesssim Y$ or $Y \gtrsim X$ to denote the statement that $X \leq CY$ for an absolute constant $C$ (not depending on the choice of group $G$ or on any other parameters). We also use $X \sim Y$ to denote the estimates $X \lesssim Y \lesssim X$. If we wish to indicate dependence of the constant on an additional parameter, we will subscript the notation appropriately, thus for instance $X \sim_n Y$ denotes that $X \leq C_n Y$ and $Y \leq C_n X$ for some $C_n$ depending on $n$.
\section{Ruzsa distance, and tripling sets}
To measure the multiplicative structure inherent in a multiplicative set $A$, or a pair $A,B$ of multiplicative sets, it is convenient to introduce two measurements, the \emph{Ruzsa distance} and the \emph{multiplicative energy}. In this section we focus on the Ruzsa distance and applications to sets of small tripling.
\begin{definition}[Ruzsa distance] Let $A$ and $B$ be multiplicative sets. We define the \emph{(left-invariant) Ruzsa distance} $d(A,B)$ to be the quantity $$ d(A,B) := \log \frac{\mu(A \cdot B^{-1})}{\mu(A)^{1/2} \mu(B)^{1/2}}.$$ \end{definition}
We now justify the terminology ``left-invariant\footnote{One could also define a right-invariant Ruzsa distance $\tilde d(A,B) := d(A^{-1},B^{-1}) = \log \frac{\mu(B^{-1} \cdot A)}{\mu(A)^{1/2} \mu(B)^{1/2}}$, but we will not need that notion here.} Ruzsa distance''.
\begin{lemma}[Ruzsa triangle inequality]\label{Ruzsa-triangle} Let $A,B,C$ be multiplicative sets. Then we have $d(A,B) \geq 0$, $d(A,B) = d(B,A)$, and $d(A,C) \leq d(A,B) + d(B,C)$. Also we have $d(x \cdot A, x \cdot B) = d(A,B)$ for all $x \in G$. \end{lemma}
\begin{remark} This inequality was first established in the discrete case in \cite{ruzsa} (initially in the commutative case, but the argument extends easily to non-commutative settings). \end{remark}
\begin{proof} From translation invariance we have $\mu(A \cdot B^{-1}) \geq \mu(A \cdot b^{-1}) =\mu(A)$ for any $b \in B$. Similarly $\mu(A \cdot B^{-1}) \geq \mu(B^{-1}) = \mu(B)$. Taking geometric means we obtain $\mu(A \cdot B^{-1}) \geq \mu(A)^{1/2} \mu(B)^{1/2}$ and hence $d(A,B) \geq 0$. The symmetry property $d(A,B) = d(B,A)$ follows from the fact that $B \cdot A^{-1}$ is the inverse of $A \cdot B^{-1}$. Finally, to show the triangle inequality $d(A,C) \leq d(A,B) + d(B,C)$, it will suffice to show the inequality $$ \mu(A \cdot C^{-1}) \mu(B) \leq \mu(A \cdot B^{-1}) \mu(B \cdot C^{-1}).$$ To prove this, we rewrite the right-hand side as a double integral $$ \int_G \int_G 1_{A \cdot B^{-1}}(x) 1_{B \cdot C^{-1}}(y)\ d\mu(x) d\mu(y),$$ where $1_A$ denotes the indicator function of $A$. Making the substitution $x = z \cdot y^{-1}$ and using the translation invariance and Fubini's theorem, we can rewrite this as $$ \int_G \left[ \int_G 1_{A \cdot B^{-1}}(z \cdot y^{-1}) 1_{B \cdot C^{-1}}(y)\ d\mu(y)\right] d\mu(z).$$ Now if $z$ lies in $A \cdot C^{-1}$, then we have $z = a \cdot c^{-1}$ for some $a \in A$ and $c \in C$, and then $1_{A \cdot B^{-1}}(z\cdot y^{-1}) 1_{B \cdot C^{-1}}(y) = 1$ whenever $y \in B \cdot c^{-1}$. Since $B \cdot c^{-1}$ has the same measure as $B$, the above integral is at least as large as $$ \int_{A \cdot C^{-1}} \mu( B )\ d\mu(z) = \mu(A \cdot C^{-1}) \mu(B)$$ and the claim follows. \end{proof}
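As a quick sanity check of Lemma \ref{Ruzsa-triangle}, one can compute the Ruzsa distance in the discrete setting of Example \ref{discrete-ex}, taking $G = (\mathbb{Z},+)$ written additively so that $A \cdot B^{-1}$ becomes the difference set $A - B$. The following Python snippet (an illustration of ours, not part of the formal development) verifies non-negativity, symmetry, and the triangle inequality on random finite sets.

```python
import math
import random

def ruzsa_dist(A, B):
    """Discrete Ruzsa distance in (Z, +): d(A, B) = log(|A - B| / (|A|^(1/2) |B|^(1/2)))."""
    diff = {a - b for a in A for b in B}
    return math.log(len(diff) / math.sqrt(len(A) * len(B)))

random.seed(0)
A, B, C = (frozenset(random.sample(range(-50, 50), 12)) for _ in range(3))
assert ruzsa_dist(A, B) >= 0                                            # non-negativity
assert abs(ruzsa_dist(A, B) - ruzsa_dist(B, A)) < 1e-12                 # symmetry: B - A = -(A - B)
assert ruzsa_dist(A, C) <= ruzsa_dist(A, B) + ruzsa_dist(B, C) + 1e-12  # triangle inequality
```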
We caution that $d(A,A) = \log \frac{\mu(A \cdot A^{-1})}{\mu(A)}$ will usually not be zero. Also, $d(A,B) \neq d(A^{-1}, B^{-1})$ and $d(A,B) \neq d(A \cdot x, B \cdot x)$ in general.
From the Ruzsa triangle inequality we see in particular that \begin{equation}\label{daa} d(A,A) \leq 2d(A,B) \end{equation} for all multiplicative sets $A,B$.
For any integer $n \geq 1$ and any multiplicative set $A$, let $A^n$ denote the $n$-fold product set $$ A^n = A \cdot \ldots \cdot A = \{ a_1 \ldots a_n: a_1,\ldots,a_n \in A \}.$$ In the commutative setting, the Pl\"unnecke-Ruzsa inequalities show that if $A^2$ is comparable in size to $A$, then $A^n$ is also comparable in size to $A$ for any fixed $n$. The same statement is not necessarily true in the non-commutative setting: for instance, in the discrete setting if we take $A = H \cup \{x\}$, where $H$ is a finite subgroup of $G$ and $x$ lies outside of the normalizer of $H$, then $A^2$ has size comparable to $A$, but $A^3$ can be much larger. However, (as was observed in \cite{Helf} in the discrete non-commutative case, although the basic argument is essentially in \cite{rt}) it turns out that once $A^3$ is under control, then so are all other combinations of $A$ and $A^{-1}$:
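The failure of small doubling to control tripling is easy to observe numerically. The following Python experiment (ours; the specific choices of $H$ and $x$ are purely for illustration) realises the construction $A = H \cup \{x\}$ inside $G = S_5$, with $H$ the stabiliser of the last point and $x$ a transposition moving that point: the doubling constant stays below $3$, while $A^3$ is already all of $S_5$.

```python
from itertools import permutations

n = 5
def compose(p, q):
    # permutations as tuples: (p . q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(n))

def product_set(A, B):
    return {compose(a, b) for a in A for b in B}

H = {p for p in permutations(range(n)) if p[n - 1] == n - 1}  # stabiliser of the last point; |H| = 24
x = (0, 1, 2, 4, 3)                                           # transposition swapping the last two points
A = H | {x}

A2 = product_set(A, A)
A3 = product_set(A2, A)
# small doubling, but A^3 contains the double coset H.x.H, which here is all of S_5 outside H
print(len(A), len(A2), len(A3))  # 25 66 120
```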
\begin{lemma}\label{tripling-ok} Let $A$ be a multiplicative set such that $\mu(A^3) \leq K \mu(A)$. Then for any signs $\epsilon_1,\ldots,\epsilon_n \in \{-1,1\}$ we have $\mu( A^{\epsilon_1} \ldots A^{\epsilon_n} ) \leq K^{O_n(1)} \mu(A)$. \end{lemma}
\begin{proof} Let us first observe from hypothesis that $\mu(A) \leq \mu(A^2) \leq \mu(A^3) \leq K \mu(A)$. This implies that $d(A, A^{-1}) \leq \log K$ and $d(A^2, A^{-1}) \leq \log K$. By the triangle inequality we thus have $d(A^2,A) = O(\log K)$, and thus $\mu(A \cdot A \cdot A^{-1}) \leq K^{O(1)} \mu(A)$, which implies that $d(A, A \cdot A^{-1}) = O(\log K)$. By the triangle inequality again this implies $d(A \cdot A^{-1}, A^{-1}) = O(\log K)$, hence $\mu(A \cdot A^{-1} \cdot A) \leq K^{O(1)} \mu(A)$. In particular $d(A, A^{-1} \cdot A) = O( \log K)$, so again by the triangle inequality $d(A^{-1}, A^{-1} \cdot A) = O(\log K)$, and hence $\mu(A^{-1} \cdot A \cdot A) \leq K^{O(1)} \mu(A)$. With all these bounds (and taking inverses) we can already establish the lemma when $n=3$, which also implies the lemma when $n < 3$.
Now we assume inductively that the lemma is already proven for all $n < n_0$ for some $n_0 \geq 4$, and wish to prove it for $n = n_0$. To establish the bound on $A^{\epsilon_1} \ldots A^{\epsilon_n}$ it suffices to establish the bound $$ d( A^{\epsilon_1} \ldots A^{\epsilon_{n-2}}, A^{-\epsilon_n} \cdot A^{-\epsilon_{n-1}} ) = O_n(\log K).$$ But since the lemma is already proven for $n-1$, we have $$ d( A^{\epsilon_1} \ldots A^{\epsilon_{n-2}}, A ) = O_n(\log K)$$ and since the lemma is already proven for $3$, we have $$ d( A, A^{-\epsilon_n} \cdot A^{-\epsilon_{n-1}} ) = O(\log K)$$ and so the claim follows from the triangle inequality. \end{proof}
\begin{remark} One can weaken the condition $\mu(A^3) \leq K\mu(A)$ to the pair of conditions $\sup_{a \in A} \mu(A \cdot a \cdot A) \leq K \mu(A)$ and $\mu(A^2) \leq K \mu(A)$; see Corollary \ref{corruz}. \end{remark}
One can analyze the behavior of tripling sets further using the following covering lemma.
\begin{lemma}[Ruzsa covering lemma]\label{cover} Let $A, B$ be multiplicative sets such that $\mu(A \cdot B) \leq K \mu(A)$ (resp. $\mu(B \cdot A) \leq K \mu(A)$). Then there exists a finite set $X$ contained inside $B$ of cardinality at most $K$ such that $B \subseteq A^{-1} \cdot A \cdot X$ (resp. $B \subseteq X \cdot A \cdot A^{-1}$). \end{lemma}
\begin{remark} For the commutative version of this lemma (in discrete or continuous settings), see \cite{ruzsa-group}, \cite{milman}, \cite{tao-vu}. \end{remark}
\begin{proof} By the reflection invariance of $\mu$ it will suffice to prove the claim when $\mu(A \cdot B) \leq K \mu(A)$. Let $X$ be a subset of $B$, maximal with respect to set inclusion (such a set can be constructed for instance using the greedy algorithm), with the property that the sets $A \cdot x$ for $x \in X$ are pairwise disjoint. Since these translates are disjoint subsets of $A \cdot B$, each of measure $\mu(A)$, the hypothesis $\mu(A \cdot B) \leq K \mu(A)$ forces $X$ to have cardinality at most $K$. Now for any $b \in B$, maximality implies that $A \cdot b$ intersects $A \cdot x$ for some $x \in X$, which implies that $b \in A^{-1} \cdot A \cdot X$. The claim follows. \end{proof}
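The greedy construction in this proof can be run directly. A minimal illustrative sketch (ours) in the discrete abelian setting of Example \ref{discrete-ex}, written additively so that $A^{-1} \cdot A \cdot X$ becomes $-A + A + X$:

```python
import random

def ruzsa_cover(A, B):
    """Greedy step from the covering lemma: build X inside B, maximal with the
    translates A + x (x in X) pairwise disjoint; then B lies in (-A) + A + X."""
    X, used = [], set()
    for b in sorted(B):
        translate = {a + b for a in A}
        if not translate & used:
            X.append(b)
            used |= translate
    return X

random.seed(1)
A = frozenset(random.sample(range(100), 20))
B = frozenset(random.sample(range(100), 20))
K = len({a + b for a in A for b in B}) / len(A)   # chosen so that |A + B| = K |A|
X = ruzsa_cover(A, B)
cover = {-a + a2 + x for a in A for a2 in A for x in X}
assert len(X) <= K          # the disjoint translates A + x all sit inside A + B
assert B <= cover           # maximality gives the covering B <= (-A) + A + X
```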
We can now give a classification of sets of small tripling.
\begin{definition}[Approximate groups]\label{agdef} A multiplicative set $H$ is said to be a \emph{$K$-approximate group} if it is symmetric (so $H^{-1} = H$) and there exists a finite symmetric set $X \subset H^2$ of cardinality at most $K$ such that $H \cdot H \subseteq X \cdot H$. \end{definition}
\begin{remark} In \cite{tao-vu}, the additional condition $X \cdot H \subset H \cdot X \cdot X$ was also imposed. Our methods for constructing approximate groups also yield this additional property (see Theorem \ref{tripling-classify} below), though we will not use this additional hypothesis in our arguments and thus omit it from the definition of an approximate group. \end{remark}
Note from the symmetry assumptions that if $H \cdot H \subset X \cdot H$, then $H \cdot H \subset H \cdot X$ also. Iterating this we see that $H^n \subseteq X^{n-1} \cdot H, H \cdot X^{n-1}$ for all $n \geq 1$. Thus approximate groups have small tripling. It turns out that this is essentially the only way that a multiplicative set can have small tripling:
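For a concrete discrete abelian instance of Definition \ref{agdef}: the interval $H = \{-N,\ldots,N\}$ in $(\mathbb{Z},+)$ is a $2$-approximate group with $X = \{-N,N\}$, and the iterated inclusion $H^n \subseteq X^{n-1} \cdot H$ can be checked directly. The following snippet is an illustration of ours, not part of the formal development:

```python
N = 10
H = set(range(-N, N + 1))            # symmetric interval: a 2-approximate group
X = {-N, N}                          # symmetric, |X| = 2
H2 = {a + b for a in H for b in H}   # H + H = {-2N, ..., 2N}
assert X <= H2                                # X lies in H^2
assert H2 <= {x + h for x in X for h in H}    # H + H <= X + H
# iterating the inclusion: H + H + H <= X + X + H
H3 = {a + b for a in H2 for b in H}
assert H3 <= {x + y + h for x in X for y in X for h in H}
```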
\begin{theorem}\label{tripling-classify}
Let $K \geq 1$, and let $A$ be a multiplicative set. Then the following three statements are equivalent, in the sense that if one of them holds for one choice of implied constant in the $O()$ and $\lesssim$ notation, then the other statements hold for a different choice of implied constant in the $O()$ and $\lesssim$ notation: \begin{itemize} \item[(i)] We have the tripling bound $\mu(A^3) \lesssim K^{O(1)} \mu(A)$. \item[(ii)] We have $\mu( A^{\epsilon_1} \ldots A^{\epsilon_n} ) \sim_n K^{O_n(1)} \mu(A)$ for all $n \geq 1$ and all signs $\epsilon_1, \ldots, \epsilon_n \in \{-1,+1\}$. \item[(iii)] There exists a $O(K^{O(1)})$-approximate group $H$ of size $\mu(H) \sim K^{O(1)} \mu(A)$ which contains $A$. \end{itemize} \end{theorem}
\begin{proof} The implication (i) $\implies$ (ii) is Lemma \ref{tripling-ok}, while the reverse implication (ii) $\implies$ (i) is trivial. The implication (iii) $\implies$ (i) is also trivial: $$ \mu(A^3) \leq \mu(H^3) \lesssim K^{O(1)} \mu(H) \lesssim K^{O(1)} \mu(A).$$ It remains to show that (i) implies (iii). Set $H_0 := A \cup \{1\} \cup A^{-1}$ and $H := H_0^3$, then from Lemma \ref{tripling-ok} we see that $\mu(H) \sim K^{O(1)} \mu(A)$. Clearly $H$ also contains $A$, so it remains to show that $H$ is a $O(K^{O(1)})$-approximate group. Certainly $H$ is symmetric. From Lemma \ref{tripling-ok} we have $\mu( H_0 \cdot H^2 ) \lesssim K^{O(1)} \mu(A)$, and hence from Lemma \ref{cover} we can find a finite set $Y$ in $H^2$ of cardinality $O(K^{O(1)})$ such that $$H^2 \subseteq H_0^{-1} \cdot H_0 \cdot Y \subseteq H \cdot Y.$$ If we set $X := Y \cup Y^{-1}$ then $X$ is symmetric and (from symmetry of $H$) we conclude that $H^2 \subseteq H \cdot X, X \cdot H$. But since $X$ is contained in $H^2$, we also have $$ H \cdot X \subseteq H \cdot H^2 = H^2 \cdot H \subseteq X \cdot H \cdot H \subseteq X \cdot X \cdot H$$ and similarly $X \cdot H \subseteq H \cdot X \cdot X$. The claim follows. \end{proof}
An inspection of the above proof reveals the following more precise form of the implication (i) $\implies$ (iii):
\begin{corollary}\label{tripling-better} Let $A$ be a multiplicative set such that $\mu(A^3) \leq K \mu(A)$. Then the set $H := (A \cup \{1\} \cup A^{-1})^3$ is a $O(K^{O(1)})$-approximate group. In particular, if $A$ is symmetric and contains $1$, then $A^3$ is a $O(K^{O(1)})$-approximate group. \end{corollary}
\section{Convolution and multiplicative energy}
To study sets of small doubling, rather than small tripling, it is convenient to introduce another measure of multiplicative structure between two multiplicative sets, namely the \emph{multiplicative energy}. Given two absolutely integrable functions $f, g$ on $G$, we define their \emph{convolution} $f*g$ in the usual manner as $$ f*g(x) := \int_G f(y) g(y^{-1} x)\ d\mu(y) = \int_G f(x y^{-1}) g(y)\ d\mu(y).$$ As is well-known, convolution is bilinear and associative (though not necessarily commutative), and the convolution of two bounded, compactly supported integrable functions (such as the indicator functions considered below) is bounded and continuous. Convolution is not commutative in general, but we do have the identity \begin{equation}\label{fg0} f*g(1) = g*f(1), \end{equation} which reflects the fact that $xx^{-1} = x^{-1} x = 1$. If $f$ is supported on $A$ and $g$ is supported on $B$ then $f*g$ is supported on $A \cdot B$. If we use $\tilde f(x) := f(x^{-1})$ to denote the reflection of $f$, we observe the reflection property \begin{equation}\label{reflect} \widetilde{f*g} = \tilde g * \tilde f \end{equation} (which reflects the fact that $(x \cdot y)^{-1} = y^{-1} \cdot x^{-1}$ for $x,y \in G$) and the trace formula \begin{equation}\label{fg} \int_G f(x) g(x)\ d\mu(x) = f * \tilde g(1) = \tilde g * f(1) = \tilde f * g(1) = g * \tilde f(1). \end{equation} If $A$ is a multiplicative set, we use $1_A$ to denote the indicator function of $A$; note that this is an absolutely integrable function.
\begin{definition} Let $A, B$ be multiplicative sets. We define the \emph{multiplicative energy} ${{\operatorname{E}}}(A,B)$ between these two sets to be the quantity $$ {{\operatorname{E}}}(A,B) := \int_G [1_A * 1_B(x)]^2\ d\mu(x).$$ \end{definition}
\begin{remark} In the discrete setting (Example \ref{discrete-ex}), we have \begin{equation}\label{energy-discrete}
{{\operatorname{E}}}(A,B) = |\{ (a,b,a',b') \in A \times B \times A \times B: a \cdot b = a' \cdot b' \}|. \end{equation} In the notation of Gowers \cite{gowers-4}, the quantity ${{\operatorname{E}}}(A,B)$ thus counts the number of \emph{multiplicative quadruples} in $A \times B \times A \times B$. In the commutative case, this quantity has a useful representation in terms of the Fourier transform (see e.g. \cite{gowers-4}, \cite{tao-vu}), which has a number of applications, for instance in proving Freiman's inverse sumset theorem. In the non-commutative case it is also possible to use the non-commutative Fourier transform to represent this energy, but the resulting formulae are not as tractable. In particular, no analogue of Freiman's theorem is currently known in the general non-commutative setting. Fortunately, we will be able to use the properties of the convolution algebra (most notably its associativity, the reflection property \eqref{reflect}, and the trace property \eqref{fg}) to compensate for the lack of a convenient Fourier-analytic description of the energy; in particular, we will not need to understand the representation theory of the underlying group $G$. \end{remark}
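In the discrete setting one can confirm numerically that the convolution definition of the energy agrees with the quadruple count \eqref{energy-discrete}, and that the general bounds \eqref{eab} and \eqref{eab-lower} hold. The following check in the non-abelian group $S_4$ is an illustration of ours only:

```python
from itertools import permutations
from collections import Counter
import random

n = 4
def compose(p, q):
    # permutations as tuples: (p . q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(n))

def energy(A, B):
    """E(A, B) = sum over x of (1_A * 1_B(x))^2, via the representation counts r(x)."""
    r = Counter(compose(a, b) for a in A for b in B)
    return sum(c * c for c in r.values())

random.seed(2)
G = list(permutations(range(n)))                  # the group S_4
A = set(random.sample(G, 8))
B = set(random.sample(G, 8))

quadruples = sum(1 for a in A for b in B for a2 in A for b2 in B
                 if compose(a, b) == compose(a2, b2))
AB = {compose(a, b) for a in A for b in B}
assert energy(A, B) == quadruples                           # the quadruple-count formula
assert energy(A, B) <= (len(A) * len(B)) ** 1.5             # upper bound
assert energy(A, B) >= len(A) ** 2 * len(B) ** 2 / len(AB)  # Cauchy-Schwarz lower bound
```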
A simple application of Fubini's theorem and change of variables shows that \begin{equation}\label{fubini}
\int_G 1_A * 1_B(x)\ d\mu(x) = \mu(A) \mu(B) \end{equation} and \begin{equation}\label{1ab}
1_A * 1_B(x) \leq \min( \mu(A), \mu(B) ) \leq \mu(A)^{1/2} \mu(B)^{1/2}
\end{equation} and hence by H\"older's inequality we have the upper bound \begin{equation}\label{eab} {{\operatorname{E}}}(A,B) \leq \mu(A)^{3/2} \mu(B)^{3/2}. \end{equation} Also, since $1_A * 1_B$ is supported on $A \cdot B$, we have a lower bound from \eqref{fubini} and Cauchy-Schwarz: \begin{equation}\label{eab-lower}
{{\operatorname{E}}}(A,B) \geq \frac{\mu(A)^2 \mu(B)^2}{\mu(A \cdot B)}. \end{equation} In general, we do not have ${{\operatorname{E}}}(A,B) = {{\operatorname{E}}}(B,A)$ (although one can use \eqref{reflect} to show that ${{\operatorname{E}}}(A,B) = {{\operatorname{E}}}(B^{-1},A^{-1})$). On the other hand, we do have the following important identity, which can be viewed as a weak form of commutativity.
\begin{lemma}\label{eaa} For any multiplicative set $A$, we have ${{\operatorname{E}}}(A,A^{-1}) = {{\operatorname{E}}}(A^{-1},A)$. \end{lemma}
\begin{remark}\label{aai} This identity is especially striking since there is no relation between the size of $A \cdot A^{-1}$ and $A^{-1} \cdot A$ in general. For instance, if $H$ is a multiplicative set which is also a subgroup of $G$, and $A := (x \cdot H) \cup H$ for some $x$ not in the normaliser of $H$, then $A \cdot A^{-1}$ has about the same size as $H$, but $A^{-1} \cdot A$ can be much larger. In the discrete case (Example \ref{discrete-ex}) one can prove this lemma using the identity \eqref{energy-discrete} and the observation that $a \cdot b = a' \cdot b'$ if and only if $b \cdot (b')^{-1} = a^{-1} \cdot a'$. \end{remark}
\begin{proof} From \eqref{fg}, \eqref{reflect} (and associativity) we have $$ {{\operatorname{E}}}(A,A^{-1}) = 1_A * 1_{A^{-1}} * \widetilde{1_A * 1_{A^{-1}}}(1) = 1_A * 1_{A^{-1}} * 1_A * 1_{A^{-1}}(1).$$ Similarly we have ${{\operatorname{E}}}(A^{-1},A) = 1_{A^{-1}} * 1_A * 1_{A^{-1}} * 1_A(1)$. The claim then follows from \eqref{fg0}. \end{proof}
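Lemma \ref{eaa} and Remark \ref{aai} can both be observed numerically. In the following illustrative check (ours; in $S_5$, with $H$ a point stabiliser, $x$ a transposition moving that point, and $A = (x \cdot H) \cup H$ as in Remark \ref{aai}), the product sets $A \cdot A^{-1}$ and $A^{-1} \cdot A$ have different sizes, yet the two energies agree exactly:

```python
from itertools import permutations
from collections import Counter

n = 5
def compose(p, q):
    # permutations as tuples: (p . q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(n))

def inverse(p):
    q = [0] * n
    for i, v in enumerate(p):
        q[v] = i
    return tuple(q)

def energy(A, B):
    r = Counter(compose(a, b) for a in A for b in B)
    return sum(c * c for c in r.values())

H = {p for p in permutations(range(n)) if p[n - 1] == n - 1}  # a point stabiliser in S_5
x = (0, 1, 2, 4, 3)                                           # a transposition moving that point
A = {compose(x, h) for h in H} | H                            # A = (x . H) u H, as in the remark
Ainv = {inverse(a) for a in A}

left = {compose(a, b) for a in A for b in Ainv}    # A . A^{-1}
right = {compose(a, b) for a in Ainv for b in A}   # A^{-1} . A
print(len(left), len(right))                       # 84 120: quite different sizes...
assert energy(A, Ainv) == energy(Ainv, A)          # ...yet the two energies agree exactly
```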
This lemma has the following useful consequence.
\begin{proposition}\label{musprop} Let $A$ be a multiplicative set such that $\mu(A \cdot A^{-1}) \leq K \mu(A)$. Then there exists a symmetric multiplicative set $S$ such that $\mu(S) \geq \mu(A)/2K$ and \begin{equation}\label{mus} \mu( A \cdot S^n \cdot A^{-1}) \leq 2^n K^{2n+1} \mu(A) \end{equation} for all integers $n \geq 1$. \end{proposition}
\begin{proof} From Lemma \ref{eaa} and \eqref{eab-lower} we have \begin{align*} \int_G \mu( A \cap (A \cdot x) )^2\ d\mu(x) &= \int_G (1_{A^{-1}} * 1_A(x))^2\ d\mu(x)\\ &= {{\operatorname{E}}}(A^{-1},A) \\ &= {{\operatorname{E}}}(A,A^{-1}) \\ &\geq \mu(A)^4 / \mu(A \cdot A^{-1}) \\ &\geq \mu(A)^3/K. \end{align*} Now we define $S$ as $$ S := \{ x \in G: \mu( A \cap (A \cdot x) ) > \mu(A)/2K \}.$$ It is easy to see that $S$ is a symmetric multiplicative set. From \eqref{fubini} we see that $$ \int_G \mu(A \cap (A \cdot x))\ d\mu(x) = \mu(A)^2$$ and thus $$ \int_{G \backslash S} \mu(A \cap (A \cdot x))^2\ d\mu(x) \leq \mu(A)^3/2K.$$ Subtracting this from the preceding estimate, we conclude $$ \int_S \mu(A \cap (A \cdot x))^2\ d\mu(x) \geq \mu(A)^3/2K.$$ Bounding $\mu(A \cap (A \cdot x))$ by $\mu(A)$, we conclude in particular that $\mu(S) \geq \mu(A)/2K$.
It remains to prove \eqref{mus}. Let us consider the quantity \begin{equation}\label{amu}
\int_{(A \cdot A^{-1})^{n+1}} 1_{A \cdot S^n \cdot A^{-1}}(y_0 \ldots y_n)\ d\mu(y_0) \ldots d\mu(y_n). \end{equation} On the one hand, this quantity is clearly bounded above by $\mu(A \cdot A^{-1})^{n+1} \leq K^{n+1} \mu(A)^{n+1}$. Now let us obtain a lower bound. We rewrite this quantity as $$ \int_{A \cdot S^n \cdot A^{-1}} \int_{(A \cdot A^{-1})^n} 1_{A \cdot A^{-1}}(y_{n-1}^{-1} \ldots y_0^{-1} x)\ d\mu(y_0) \ldots d\mu(y_{n-1}) d\mu(x).$$ Suppose that we can show that \begin{equation}\label{xyx} \int_{(A \cdot A^{-1})^n} 1_{A \cdot A^{-1}}(y_{n-1}^{-1} \ldots y_0^{-1} x)\ d\mu(y_0) \ldots d\mu(y_{n-1}) \geq (\mu(A)/2K)^n \end{equation} for all $x \in A \cdot S^n \cdot A^{-1}$. Then we can bound \eqref{amu} from below by $\mu(A \cdot S^n \cdot A^{-1}) (\mu(A)/2K)^n$, which will establish \eqref{mus}.
It remains to show \eqref{xyx}. Let $x \in A \cdot S^n \cdot A^{-1}$ be arbitrary. We can write $x = a_0 s_1 \ldots s_n b_{n+1}^{-1}$ where $a_0, b_{n+1} \in A$ and $s_1,\ldots,s_n \in S$. If we make the successive change of variables $$ y_0 = a_0 b_1^{-1}; \quad y_1 = b_1 s_1 b_2^{-1}; \quad \ldots \quad; y_{n-1} = b_{n-1} s_{n-1} b_n^{-1}$$ then we observe that $$ y_{n-1}^{-1} \ldots y_0^{-1} x = b_n s_n b_{n+1}^{-1}$$ and we can rewrite the left-hand side of \eqref{xyx} as $$\int_{G^n} 1_{A \cdot A^{-1}}( a_0 b_1^{-1} ) \prod_{i=1}^{n} 1_{A \cdot A^{-1}}(b_i s_i b_{i+1}^{-1})\ d\mu(b_1) \ldots d\mu(b_n).$$ Note that if $b_1,\ldots,b_n \in A$ and $b_1 s_1, \ldots, b_n s_n \in A$ then the integrand here is equal to $1$. Hence we can bound \eqref{xyx} from below by $$ \int_{G^n} \prod_{i=1}^n 1_A(b_i) 1_A(b_i s_i)\ d\mu(b_1) \ldots d\mu(b_n) = \prod_{i=1}^n \mu( A \cap (A \cdot s_i^{-1}) ).$$ The claim now follows from the definition of $S$ and its symmetry. \end{proof}
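In the discrete abelian setting the set $S$ constructed in this proof can be computed explicitly. The following snippet (an illustration of ours) takes a random $A \subset \mathbb{Z}$ with $K := |A-A|/|A|$, so that the hypothesis $\mu(A \cdot A^{-1}) \leq K \mu(A)$ holds with equality, and checks that $S$ is symmetric, contains the identity, and satisfies the lower bound $\mu(S) \geq \mu(A)/2K$:

```python
import random

random.seed(3)
A = frozenset(random.sample(range(60), 15))
diff = {a - b for a in A for b in A}
K = len(diff) / len(A)                 # chosen so that |A - A| = K |A|
# the set S from the proof: shifts x of large overlap |A n (A + x)| > |A| / 2K
S = {x for x in diff
     if len(A & {a + x for a in A}) > len(A) / (2 * K)}
assert S == {-x for x in S}            # S is symmetric
assert 0 in S                          # S contains the identity
assert len(S) >= len(A) / (2 * K)      # the measure lower bound from the proof
```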
Now we can classify sets of small doubling using approximate groups.
\begin{theorem}\label{energy-gleam} Let $K \geq 1$, and let $A,B$ be multiplicative sets. Then the following two statements are equivalent, in the sense that if one of them holds for one choice of implied constant in the $O()$ and $\lesssim$ notation, then the other statement holds for a different choice of implied constant in the $O()$ and $\lesssim$ notation: \begin{itemize} \item[(i)] We have the product bound $\mu(A \cdot B) \lesssim K^{O(1)} \mu(A)^{1/2} \mu(B)^{1/2}$ (or equivalently, $d(A,B^{-1}) \lesssim 1 + \log K$). \item[(ii)] There exists a $O(K^{O(1)})$-approximate group $H$ of size $\mu(H) \lesssim K^{O(1)} \mu(A)^{1/2} \mu(B)^{1/2}$ and a finite set $X$ of cardinality $O(K^{O(1)})$ such that $A \subset X \cdot H$ and $B \subset H \cdot X$. \end{itemize} \end{theorem}
\begin{proof} The implication (ii) $\implies$ (i) is trivial:
$$ \mu(A \cdot B) \leq \mu( X \cdot H \cdot H \cdot X ) \leq |X|^2 \mu(H^2) \lesssim K^{O(1)} \mu( H ) \lesssim K^{O(1)} \mu(A)^{1/2} \mu(B)^{1/2}.$$ Now we show that (i) implies (ii). From \eqref{daa} we have $d(A,A) \lesssim 1 + \log K$, thus $\mu(A \cdot A^{-1}) \lesssim K^{O(1)} \mu(A)$. Applying Proposition \ref{musprop}, we obtain a symmetric multiplicative set $S$ with $\mu(S) \gtrsim K^{-O(1)} \mu(A)$ such that \begin{equation}\label{muscle} \mu( A \cdot S^3 \cdot A^{-1}) \lesssim K^{O(1)} \mu(A). \end{equation} In particular we have $\mu(S), \mu(A \cdot S) \lesssim K^{O(1)} \mu(A)$, which implies that $d(A,S) \lesssim 1 + \log K$. We also see that $\mu(S^3) \lesssim K^{O(1)} \mu(S)$, so by Theorem \ref{tripling-classify} we can find a $O(K^{O(1)})$-approximate group $H$ of size $O(K^{O(1)}) \mu(A)$ which contains $S$. In particular we have $\mu(S \cdot H) \leq \mu(H^2) \lesssim K^{O(1)} \mu(A)$, and hence $d(S,H) \lesssim 1 + \log K$. From the triangle inequality we conclude $d(A,H) \lesssim 1 + \log K$, thus $\mu(A \cdot H) \lesssim K^{O(1)} \mu(A)$. Applying Lemma \ref{cover}, there exists a finite set $Y$ of cardinality $O(K^{O(1)})$ such that $A \subset Y \cdot H \cdot H$; since $H$ is a $O(K^{O(1)})$-approximate group, we can thus find another finite set $Z$ of cardinality $O(K^{O(1)})$ such that $A \subset Z \cdot H$. Now since $d(A,B^{-1}) \lesssim 1 + \log K$, the triangle inequality also gives $d(B^{-1}, H) \lesssim 1 + \log K$, so by arguing as before we can find a finite set $W$ of cardinality $O(K^{O(1)})$ such that $B^{-1} \subseteq W \cdot H$. The claim now follows by taking $X := Z \cup W^{-1}$. \end{proof}
One can of course specialize this theorem to the case $A=B$, to characterize sets of small doubling:
\begin{corollary}\label{energy-cor} Let $K \geq 1$, and let $A$ be a multiplicative set. Then the following two statements are equivalent, in the sense that if one of them holds for one choice of implied constant in the $O()$ and $\lesssim$ notation, then the other statement holds for a different choice of implied constant in the $O()$ and $\lesssim$ notation: \begin{itemize} \item[(i)] We have the product bound $\mu(A \cdot A) \lesssim K^{O(1)} \mu(A)$ (or equivalently, $d(A,A^{-1}) \lesssim 1 + \log K$). \item[(ii)] There exists a $O(K^{O(1)})$-approximate group $H$ of size $\mu(H) \lesssim K^{O(1)} \mu(A)$ and a finite set $X$ of cardinality $O(K^{O(1)})$ such that $A \subset (X \cdot H) \cap (H \cdot X)$. \end{itemize} \end{corollary}
As one consequence of this corollary, we can obtain the following strengthening of Lemma \ref{tripling-ok} which was conjectured to us by Imre Ruzsa (private communication):
\begin{corollary}\label{corruz} Let $A$ be a multiplicative set such that $\mu(A \cdot a \cdot A) \leq K\mu(A)$ for all $a \in A$, and such that $\mu(A^2) \leq K\mu(A)$. Then $\mu(A^3) \lesssim K^{O(1)} \mu(A)$, and in particular the conclusions of Lemma \ref{tripling-ok} hold (with slightly worse implied constants). \end{corollary}
\begin{proof} By Corollary \ref{energy-cor}, we may find a $O(K^{O(1)})$-approximate group $H$ with $\mu(H) \lesssim K^{O(1)} \mu(A)$ and a finite set $X$ of cardinality $O(K^{O(1)})$ such that $A \subset X \cdot H, H \cdot X$. By removing useless elements of $X$ if necessary we may assume that $X \subset (A \cdot H) \cup (H \cdot A)$. Then $$ \mu(A^3) \leq \mu( X \cdot H \cdot H \cdot X \cdot H \cdot X ) \lesssim K^{O(1)} \mu( H \cdot H \cdot X \cdot H ).$$ But by Definition \ref{agdef}, $H \cdot H$ is covered by $O(K^{O(1)})$ left-translates of $H$, thus $$ \mu(A^3) \lesssim K^{O(1)} \mu(H \cdot X \cdot H) \lesssim K^{O(1)} \sup_{x \in (A \cdot H) \cup (H \cdot A)} \mu( H \cdot x \cdot H)$$ where the last inequality follows from the properties of $X$. Thus it suffices to show that $$ \mu( H \cdot x \cdot H) \lesssim K^{O(1)} \mu(A)$$ for all $x \in (A \cdot H) \cup (H \cdot A)$. Splitting $x$ into factors in $H$ and $A$ and noting once again that $H \cdot H$ can be covered by $O(K^{O(1)})$ left-translates (and hence right-translates, by symmetry) of $H$, we reduce to showing that \begin{equation}\label{mah} \mu( H \cdot a \cdot H) \lesssim K^{O(1)} \mu(A) \end{equation} for all $a \in A$.
Fix $a$. We already know that $\mu(A \cdot a \cdot A) \leq K \mu(A)$, thus $$ d( A, A^{-1} \cdot a^{-1} ) \leq \log K.$$ On the other hand, since $$ \mu( A \cdot H ) \leq \mu( X \cdot H \cdot H ) \lesssim K^{O(1)} \mu(H) $$ and $\mu(H) \lesssim K^{O(1)} \mu(A)$ we see that $$ d( A, H ) \lesssim 1+\log K.$$ By the triangle inequality we thus have $$ d( H, A^{-1} \cdot a^{-1} ) \lesssim 1 + \log K$$ or equivalently $$ d( H \cdot a, A^{-1} ) \lesssim 1 + \log K.$$ Now $$ \mu( H \cdot A ) \leq \mu( H \cdot H \cdot X ) \lesssim K^{O(1)} \mu(H) $$ so by arguing as before we have $$ d( H, A^{-1}) \lesssim 1+\log K$$ and thus by the triangle inequality $$ d( H \cdot a, H ) \lesssim 1 + \log K$$ and the claim \eqref{mah} follows. \end{proof}
\section{The Balog-Szemer\'edi-Gowers theorem}
In this section we develop the non-commutative, continuous analogue of Balog-Szemer\'edi-Gowers theory. We first give a preliminary version of the theorem, in which we start with $1/K$ of a product set $A \cdot B$ being under control, and end up with a $1-\varepsilon$ fraction of another product set $(A') \cdot (A')^{-1}$ being under control.
\begin{lemma}[Weak Balog-Szemer\'edi-Gowers theorem]\label{weak-bsg} Let $A, B, C$ be multiplicative sets such that $$ \mu(C) \leq K' \mu(A)^{1/2} \mu(B)^{1/2}$$ and $$ \mu \otimes \mu( \{ (a,b) \in A \times B: a \cdot b \in C \} ) \geq \mu(A) \mu(B) / K$$ for some $K, K' \geq 1$, where $\mu \otimes \mu$ denotes product measure on $G \times G$. Let $0 < \varepsilon < 1$. Then there exists a multiplicative set $A'$ contained in $A$, and a multiplicative set $D$, such that \begin{equation}\label{muap}
\mu(A') \geq \frac{\mu(A)}{\sqrt{2} K} \end{equation} and \begin{equation}\label{mud} \mu(D) \leq \frac{2(KK')^2}{\varepsilon} \mu(A) \end{equation} and \begin{equation}\label{mumua} \mu \otimes \mu( \{ (a,a') \in A' \times A': a \cdot (a')^{-1} \in D \} ) \geq (1-\varepsilon) \mu(A')^2. \end{equation} \end{lemma}
\begin{proof} By hypothesis on $C$ we have $$ \int_B (\int_A 1_C(a \cdot b)\ d\mu(a)) d\mu(b) \geq \mu(A) \mu(B) / K.$$ By Cauchy-Schwarz we conclude that $$ \int_B (\int_A 1_C(a \cdot b)\ d\mu(a))^2 d\mu(b) \geq \mu(A)^2 \mu(B) / K^2$$ which we rearrange as $$ \int_A \int_A (\int_B 1_C(a \cdot b) 1_C(a' \cdot b)\ d\mu(b))\ d\mu(a) d\mu(a') \geq \mu(A)^2 \mu(B) / K^2.$$ Let $\Omega \subset A \times A$ be the set of all $(a,a')$ such that $$ \int_B 1_C(a \cdot b) 1_C(a' \cdot b)\ d\mu(b) \leq \frac{\varepsilon}{2K^2} \mu(B)$$ then we clearly have $$ \int_A \int_A 1_\Omega(a,a') (\int_B 1_C(a \cdot b) 1_C(a' \cdot b)\ d\mu(b))\ d\mu(a) d\mu(a') \leq \varepsilon \mu(A)^2 \mu(B) / 2K^2$$ and hence $$ \int_A \int_A (1 - \frac{1}{\varepsilon} 1_\Omega(a,a')) (\int_B 1_C(a \cdot b) 1_C(a' \cdot b)\ d\mu(b))\ d\mu(a) d\mu(a') \geq \mu(A)^2 \mu(B) / 2K^2.$$ We rewrite this as $$ \int_B (\int_A \int_A (1 - \frac{1}{\varepsilon} 1_\Omega(a,a')) 1_C(a \cdot b) 1_C(a' \cdot b)\ d\mu(a) d\mu(a')) d\mu(b) \geq \mu(A)^2 \mu(B) / 2K^2$$ and hence by the pigeonhole principle there exists $b \in B$ such that $$ \int_A \int_A (1 - \frac{1}{\varepsilon} 1_\Omega(a,a')) 1_C(a \cdot b) 1_C(a' \cdot b)\ d\mu(a) d\mu(a') \geq \mu(A)^2 / 2K^2.$$ If we fix this $b$ and set $A' := \{ a \in A: a \cdot b \in C \}$, we conclude that $$ \mu(A')^2 \geq \int_{A'} \int_{A'} (1 - \frac{1}{\varepsilon} 1_\Omega(a,a'))\ d\mu(a) d\mu(a') \geq \mu(A)^2 / 2K^2 \geq 0$$ which in particular implies \eqref{muap}. Also we see from the above inequality that $$ \mu \otimes \mu( (A' \times A') \cap \Omega ) \leq \varepsilon \mu(A')^2.$$ Thus if we define $$ D := \{ a \cdot (a')^{-1}: a, a' \in A'; (a,a') \not \in \Omega \}$$ then we have \eqref{mumua}. Now suppose that $d = a \cdot (a')^{-1}$ lies in $D$ for some $a, a' \in A'$ and $(a,a') \not \in \Omega$.
From the definition of $\Omega$ we have $$ \int_B 1_C(a \cdot b) 1_C(a' \cdot b)\ d\mu(b) > \frac{\varepsilon}{2K^2} \mu(B)$$ and hence by the substitution $c := a' \cdot b$ $$ \int_G 1_C(d \cdot c) 1_C(c)\ d\mu(c) > \frac{\varepsilon}{2K^2} \mu(B).$$ Integrating this over all $d \in D$, we obtain $$ \int_D \int_G 1_C(d \cdot c) 1_C(c)\ d\mu(c) d\mu(d) \geq \frac{\varepsilon}{2K^2} \mu(B) \mu(D).$$ Using Fubini's theorem and making the change of variables $c' = d \cdot c$, we see that the left-hand side is at most $\mu(C)^2 \leq (K')^2 \mu(A) \mu(B)$, and \eqref{mud} follows. \end{proof}
Now we extend the Balog-Szemer\'edi-Gowers theorem to pairs $A,B$ of multiplicative sets (of comparable size).
\begin{theorem}[Balog-Szemer\'edi-Gowers theorem]\label{bsg} Let $A, B$ be multiplicative sets such that ${{\operatorname{E}}}(A,B) \geq \mu(A)^{3/2} \mu(B)^{3/2} / K$. Then there exist multiplicative sets $A''', B'''$ contained in $A$, $B$ respectively such that $\mu(A''') \geq \frac{\mu(A)}{8 \sqrt{2} K}$, $\mu(B''') \geq \frac{\mu(B)}{8K}$, and $\mu(A''' \cdot B''') \lesssim K^8 \mu(A)^{1/2} \mu(B)^{1/2}$. \end{theorem}
\begin{proof} By hypothesis we have $$ \int_G (1_A * 1_B(x))^2\ d\mu(x) \geq \mu(A)^{3/2} \mu(B)^{3/2} / K.$$ If we let $C$ denote the (open, precompact) set $$ C := \{ x \in G: 1_A * 1_B(x) > \mu(A)^{1/2} \mu(B)^{1/2} / 2K\}$$ then we see from \eqref{fubini} that $$ \int_{G \backslash C} (1_A * 1_B(x))^2\ d\mu(x) \leq \mu(A)^{3/2} \mu(B)^{3/2} / 2K$$ and hence \begin{equation}\label{intc} \int_C (1_A * 1_B(x))^2\ d\mu(x) \geq \mu(A)^{3/2} \mu(B)^{3/2} / 2K. \end{equation} In particular, $C$ is non-empty (and is thus a multiplicative set), while from \eqref{fubini} and Markov's inequality we have \begin{equation}\label{muc} \mu(C) \leq 2K \mu(A)^{1/2} \mu(B)^{1/2}. \end{equation} Also, from \eqref{intc} and \eqref{1ab} we have $$ \int_C 1_A * 1_B(x)\ d\mu(x) \geq \mu(A) \mu(B) / 2K.$$ By Fubini's theorem and a change of variables, the left-hand side can be rearranged as $$ \int_A (\int_B 1_C(ab)\ d\mu(b)) d\mu(a).$$ If we thus let $A'$ be the (open precompact) subset of $A$ defined by $$ A' := \{ a \in A: \int_B 1_C(ab)\ d\mu(b) > \mu(B) / 4K \}$$ then $$ \int_{A \backslash A'} (\int_B 1_C(ab)\ d\mu(b)) d\mu(a) \leq \mu(A) \mu(B) / 4K$$ and hence \begin{equation}\label{apb} \int_{A'} (\int_B 1_C(ab)\ d\mu(b)) d\mu(a) \geq \mu(A) \mu(B) / 4K. \end{equation} In particular, $A'$ is non-empty (and is thus a multiplicative set). Using the trivial bound $\int_B 1_C(ab)\ d\mu(b) \leq \mu(B)$, we also see that $$ \mu(A') \geq \mu(A) / 4K.$$ Let us thus write $\mu(A) = L \mu(A')$ for some $1 \leq L \leq 4K$. From \eqref{muc} we then have $$ \mu(C) \leq 2K L^{1/2} \mu(A')^{1/2} \mu(B)^{1/2} $$ while from \eqref{apb} we have $$ \mu \otimes \mu( \{ (a,b) \in A' \times B: a \cdot b \in C \} ) \geq \mu(A') \mu(B) L / 4K. 
$$ We can thus apply Lemma \ref{weak-bsg} (with $\varepsilon := 1/32K$, and with $A, K, K'$ replaced by $A', 4K/L, 2KL^{1/2}$) to find a multiplicative set $A''$ contained in $A'$ (and hence in $A$), and a multiplicative set $D$, such that $$ \mu(A'') \geq \frac{\mu(A') L}{4\sqrt{2} K} = \frac{\mu(A)}{4 \sqrt{2} K}$$ and $$ \mu(D) \leq \frac{2(4K/L)^2(2KL^{1/2})^2}{1/32K} \mu(A') \lesssim K^5 \mu(A') / L.$$ In particular, we have \begin{equation}\label{mudd}
\mu(D) \lesssim K^6 \mu(A'') / L^2 \lesssim K^6 \mu(A''). \end{equation} Also we have $$ \mu \otimes \mu( \{ (a,a') \in A'' \times A'': a \cdot (a')^{-1} \in D \} ) \geq (1-\frac{1}{32K}) \mu(A'')^2. $$ We can rewrite the latter estimate as $$ \int_{A''} \mu( \{ a' \in A'': a \cdot (a')^{-1} \not \in D \} )\ d\mu(a) \leq \frac{1}{32K} \mu(A'')^2$$ so if we set $$ A''' := \{ a \in A'': \mu( \{ a' \in A'': a \cdot (a')^{-1} \not \in D \} ) \leq \frac{1}{16K} \mu(A'') \}$$ then by Markov's inequality we have $$ \mu(A''') \geq \mu(A'')/2 \geq \frac{\mu(A)}{8 \sqrt{2} K}.$$ Since $A''$ is a subset of $A'$, we have $$ \int_B 1_C(ab)\ d\mu(b) > \mu(B) / 4K \hbox{ for all } a \in A''$$ and hence upon integrating in $a$ and using Fubini's theorem $$ \int_B (\int_{A''} 1_C(ab)\ d\mu(a)) d\mu(b) \geq \mu(A'') \mu(B) / 4K.$$ Hence if we define the (open precompact) subset $B'''$ of $B$ by $$ B''' := \{ b \in B: \int_{A''} 1_C(ab)\ d\mu(a) > \mu(A'') / 8K \}$$ then we have by similar arguments to before that $$ \int_{B'''} (\int_{A''} 1_C(ab)\ d\mu(a)) d\mu(b) \geq \mu(A'') \mu(B) / 8K;$$ since $\int_{A''} 1_C(ab)\ d\mu(a) \leq \mu(A'')$, we have in particular that $$ \mu(B''') \geq \mu(B) / 8K.$$ In particular $B'''$ is non-empty and is hence a multiplicative set.
Now let $c = a b$ for some $a \in A'''$ and $b \in B'''$. From the definition of $B'''$ we have $$ \mu( \{ a' \in A'': a'b \in C \} ) > \mu(A'')/8K$$ while from the definition of $A'''$ we have $$ \mu( \{ a' \in A'': a \cdot (a')^{-1} \not \in D \} ) \leq \frac{1}{16K} \mu(A'').$$ Thus $$ \mu( \{ a' \in A'': a'b \in C, a \cdot (a')^{-1} \in D \} ) > \mu(A'')/16K$$ so by setting $x := a' b$ (so that $a \cdot (a')^{-1} = cx^{-1}$) we have $$ \int_G 1_C(x) 1_D(cx^{-1})\ d\mu(x) > \mu(A'')/ 16 K.$$ Integrating this over all $c \in A''' \cdot B'''$ we conclude $$ \int_{A''' \cdot B'''} \int_G 1_C(x) 1_D(cx^{-1})\ d\mu(x) d\mu(c) \geq \mu(A'') \mu(A''' \cdot B''')/ 16 K.$$ But the left-hand side is at most $\mu(C) \mu(D)$, so we have $$ \mu(A''' \cdot B''') \lesssim \frac{K \mu(C) \mu(D)}{\mu(A'')}.$$ Applying \eqref{muc}, \eqref{mudd}, we conclude $$ \mu(A''' \cdot B''') \lesssim K^8 \mu(A)^{1/2} \mu(B)^{1/2} $$ as desired. \end{proof}
\begin{remark} There are a number of variants of this theorem; for instance, one could replace the hypothesis that ${{\operatorname{E}}}(A,B)$ is large by the hypothesis that a partial product set $A \stackrel{E}{\cdot} B$ is small (with a suitable largeness hypothesis on $E$). One can then refine the above theorem by requiring the additional conclusion that $E$ has large intersection with $A' \times B'$; see for instance \cite{lruzsa}, \cite{borg:high-dim} for some examples of this type of refinement. Other variants of the lemma and its proof can be found in \cite{ssv}, \cite{chang-er}. The power of $8$ can probably be lowered further, but we shall not attempt to do so here. \end{remark}
This gives us a characterisation of pairs of multiplicative sets of large multiplicative energy.
\begin{theorem}\label{energy-gleam-2} Let $K \geq 1$, and let $A,B$ be multiplicative sets. Then the following four statements are equivalent, in the sense that if one of them holds for one choice of implied constant in the $O()$ and $\lesssim$ notation, then the other statements hold for a different choice of implied constant in the $O()$ and $\lesssim$ notation: \begin{itemize} \item[(i)] We have the energy bound ${{\operatorname{E}}}(A,B) \gtrsim K^{O(1)} \mu(A)^{3/2} \mu(B)^{3/2}$. \item[(ii)] There exists an open subset $E \subset A \times B$ of measure $\mu \otimes \mu(E) \gtrsim K^{O(1)} \mu(A) \mu(B)$ such that $\mu( \{ a \cdot b: (a,b) \in E \} ) \lesssim K^{O(1)} \mu(A)^{1/2} \mu(B)^{1/2}$. \item[(iii)] There exist multiplicative sets $A'$, $B'$ contained in $A, B$ respectively such that $\mu(A') \sim K^{O(1)} \mu(A)$, $\mu(B') \sim K^{O(1)} \mu(B)$, and $\mu(A' \cdot B') \sim K^{O(1)} \mu(A)^{1/2} \mu(B)^{1/2}$. \item[(iv)] There exists a $O(K^{O(1)})$-approximate group $H$ of size $\mu(H) \sim K^{O(1)} \mu(A)^{1/2} \mu(B)^{1/2}$, and elements $x, y \in G$ such that $\mu( A \cap (x \cdot H) ) \sim K^{O(1)} \mu(A)$ and $\mu( B \cap (H \cdot y) ) \sim K^{O(1)} \mu(B)$. \end{itemize} \end{theorem}
\begin{proof} The implication (i) $\implies$ (iii) follows from Theorem \ref{bsg} and the trivial bounds $\mu(A') \leq \mu(A)$, $\mu(B') \leq \mu(B)$, and $\mu(A' \cdot B') \geq \mu(A')^{1/2} \mu(B')^{1/2}$. Now we show that (iii) implies (iv). Note that the trivial bound $\mu(A' \cdot B') \geq \max(\mu(A'), \mu(B'))$ already implies that $\mu(B) \sim K^{O(1)} \mu(A)$. Using Theorem \ref{energy-gleam}, we can find a $O(K^{O(1)})$-approximate group $H$ of measure $\mu(H) \sim K^{O(1)} \mu(A)$ and a finite set $X$ of cardinality $O(K^{O(1)})$ such that $A' \subseteq X \cdot H$ and $B' \subseteq H \cdot X$. By the pigeonhole principle we can thus find $x, y \in X$ such that $\mu(A' \cap (x \cdot H) ) \gtrsim K^{O(1)} \mu(A)$ and $\mu(B' \cap (H \cdot y) ) \gtrsim K^{O(1)} \mu(B)$. Since we have the trivial bounds $\mu(A' \cap (x \cdot H)) \leq \mu(A)$ and $\mu(B' \cap (H \cdot y)) \leq \mu(B)$, the claim (iv) follows.
Next, we show that (iv) implies (ii). If we set $E := (A \cap (x \cdot H)) \times (B \cap (H \cdot y))$, then we have the desired lower bound on $\mu \otimes \mu(E)$, and we have the upper bound $$ \mu( \{ a \cdot b: (a,b) \in E \} ) \leq \mu( (x \cdot H) \cdot (H \cdot y) ) = \mu(H^2) \sim K^{O(1)} \mu(A)^{1/2} \mu(B)^{1/2}$$ as desired.
Finally, we show that (ii) implies (i). If we let $C := \{ a \cdot b: (a,b) \in E \}$ then we have $$ \int_C 1_A * 1_B(x)\ d\mu(x) \gtrsim K^{O(1)} \mu(A) \mu(B)$$ and (i) easily follows from the Cauchy-Schwarz inequality. \end{proof}
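The equivalence in Theorem \ref{energy-gleam-2} can be illustrated by a simple commutative sanity check; the specific sets and approximate group chosen below are of course just one convenient example.

\begin{remark} As a simple sanity check of Theorem \ref{energy-gleam-2}, consider (in additive notation) the globally reasonable group $({{\mathbf R}},+)$ with Lebesgue measure, and take $A = B = [0,1]$. Then $1_A * 1_B$ is the triangle function $x \mapsto \min(x,2-x)$ on $[0,2]$, so $$ {{\operatorname{E}}}(A,B) = \int_0^2 \min(x,2-x)^2\ dx = \frac{2}{3} \mu(A)^{3/2} \mu(B)^{3/2}$$ and (i) holds with $K = O(1)$. Correspondingly, (iv) holds with $x = y = 0$ and $H := (-1,1)$, which is an $O(1)$-approximate group (as $H + H = (-2,2) \subseteq \{-1,0,1\} + H$) of size $\mu(H) = 2 \sim \mu(A)^{1/2} \mu(B)^{1/2}$, and with $\mu(A \cap H) = 1 = \mu(A)$. \end{remark}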
\section{Metric entropy analogues}\label{entropy-sec}
In some applications involving non-discrete groups (e.g. Lie groups), it is not the measure or cardinality of a set which is of interest, but rather its entropy with respect to a metric.
\begin{definition}[Metric entropy] Let $X$ be a metric space and $\varepsilon > 0$. The \emph{metric entropy} (or \emph{Kolmogorov entropy}) ${\mathcal N}_\varepsilon(X)$ is defined to be the least number of open balls of radius $\varepsilon$ needed to cover $X$. \end{definition}
\begin{remark}\label{entropy-equiv} There are several other formulations of metric entropy which are essentially equivalent to each other. For instance, it is easy to see that the largest $\varepsilon$-separated subset of $X$ has cardinality between ${\mathcal N}_{\varepsilon}(X)$ and ${\mathcal N}_{\varepsilon/2}(X)$. Similarly, if $X$ is a subspace of a larger metric space $Y$, one can easily check that the number of open balls of radius $\varepsilon$ in $Y$ needed to cover $X$ lies between ${\mathcal N}_{2\varepsilon}(X)$ and ${\mathcal N}_\varepsilon(X)$. We shall shortly impose a volume doubling condition which will imply that ${\mathcal N}_\varepsilon(X)$ and ${\mathcal N}_{2\varepsilon}(X)$ are comparable in magnitude, and so we will not need to distinguish between these slightly different concepts of entropy. \end{remark}
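Before proceeding, it may help to record the basic Euclidean example; the constants below are only claimed up to factors depending on the dimension.

\begin{remark} As a basic example, for the unit cube $[0,1]^d$ in Euclidean space ${{\mathbf R}}^d$ one has ${\mathcal N}_\varepsilon([0,1]^d) \sim_d \varepsilon^{-d}$ for all $0 < \varepsilon < 1$: at least $c_d \varepsilon^{-d}$ balls are needed by volume considerations, while a covering by $O_d(\varepsilon^{-d})$ balls of radius $\varepsilon$ can be obtained by centring them at the points of the lattice $\frac{\varepsilon}{\sqrt{d}} {{\mathbf Z}}^d$ lying within distance $\varepsilon$ of the cube. Thus metric entropy can be viewed as a substitute for volume, normalized to the spatial scale $\varepsilon$. \end{remark}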
In order for the sum set theory to extend to metric entropy, we need some mild compatibility conditions between the metric structure, the group structure, and the measure structure. We axiomatize these as follows.
\begin{definition}[Reasonable metrics] We say that a multiplicative group $G$ equipped with a metric $d_G$ is a \emph{locally reasonable metric group} if the following properties hold: \begin{itemize} \item[(i)] The topology on $G$ is compatible with the metric $d_G$ (thus the open balls in $d_G$ generate the topology). Also, we assume that all closed balls are compact (thus $G$ is locally compact). \item[(ii)] The group operations are locally Lipschitz continuous. More precisely, for every compact set ${\mathbf{K}} \subseteq G$ we have the estimates $$ d_G(g \cdot x, g \cdot y), d_G(x \cdot g, y \cdot g), d_G(x^{-1},y^{-1}) \sim_{{\mathbf{K}},G} d_G(x,y) $$ for all $x,y,g \in {\mathbf{K}}$. \item[(iii)] We have local volume doubling. More precisely, for any $R > 0$ we have $$ \mu({\mathbf{B}}(1,2r)) \sim_{R,G} \mu({\mathbf{B}}(1,r))$$ for all $0 < r < R$, where ${\mathbf{B}}(x,r)$ is the open metric ball of radius $r$ centred at $x$. \end{itemize} If the implied constants in the $\sim_{{\mathbf{K}},G}$ and $\sim_{R,G}$ notation can be chosen to be independent of ${\mathbf{K}}$ and $R$ (but still dependent on the group $G$), we say that the group is \emph{globally reasonable}. \end{definition}
\begin{examples} The Euclidean space ${{\mathbf R}}^d$ with the usual metric and additive group structure is globally reasonable. Any compact Lie group with a smooth Riemannian metric will also be globally reasonable. If one metric is locally (resp. globally) reasonable, then any other metric bilipschitz equivalent to it will also be locally (resp. globally) reasonable. If a locally reasonable metric group is compact, then it is automatically globally reasonable. If $G$ is a group of linear transformations on a finite-dimensional normed vector space (with the usual topology),
then the operator norm metric $d_G(x,y) := \|x-y\|_{\operatorname{op}}$ is locally reasonable (and thus globally reasonable, if $G$ is compact). On the other hand, groups such as $SL_2({{\mathbf R}})$ will not support any globally reasonable metric, due to the non-compact nature of the conjugacy classes. \end{examples}
As we shall shortly see, when the metric is locally reasonable, all bounded sets have finite metric entropy for each $\varepsilon > 0$. In this paper we shall be concerned with the bounded-dimensional regime, in which we allow all constants to depend on the implied constants in the $\sim_{{\mathbf{K}},G}$ and $\sim_{R,G}$ notation appearing in the above definition. The issue of precise behaviour of constants on the dimension (and on other characteristics of the group) in the high-dimensional regime is an interesting one, but we will not pursue it here.
With the above assumptions on the metric, the metric entropy can be estimated accurately by the measure of various sets.
\begin{lemma}[Multiplicative structure of balls]\label{ball} Let $G$ be a locally reasonable metric group. Let ${\mathbf{K}}$ be a compact subset of $G$, and let $R > 0$. \begin{itemize} \item[(i)] (Approximate normality of balls) There exist constants $0 < c_{{\mathbf{K}},R,G} < C_{{\mathbf{K}},R,G} < \infty$ such that we have the inclusions $$ X \cdot {\mathbf{B}}(1, c_{{\mathbf{K}},R,G} \varepsilon) \subseteq {\mathbf{B}}(1,\varepsilon) \cdot X \subseteq X \cdot {\mathbf{B}}(1,C_{{\mathbf{K}},R,G}\varepsilon)$$ and $$ {\mathbf{B}}(1,c_{{\mathbf{K}},R,G}\varepsilon) \cdot X \subseteq X \cdot {\mathbf{B}}(1,\varepsilon) \subseteq {\mathbf{B}}(1,C_{{\mathbf{K}},R,G}\varepsilon) \cdot X$$ for any $X \subseteq {\mathbf{K}}$ and $0 < \varepsilon < R$.
\item[(ii)] (Doubling property) For any $x \in {\mathbf{K}}$, $0 < \varepsilon < R$, and $A > 0$, we have $$ \mu( {\mathbf{B}}(x,A\varepsilon) ) \sim_{A,{\mathbf{K}},R,G} \mu( {\mathbf{B}}(1, \varepsilon) )$$
\item[(iii)] (Self-covering property) For any $x \in {\mathbf{K}}$, $0 < \varepsilon < R$, and $A > 0$, we can cover ${\mathbf{B}}(x,A\varepsilon)$ by $O_{A,{\mathbf{K}},R,G}(1)$ balls of radius $\varepsilon$. \end{itemize} If $G$ is globally reasonable, then we can omit the dependence on ${\mathbf{K}}$ and $R$ in the above estimates. \end{lemma}
\begin{proof} If $x \in X \subseteq {\mathbf{K}}$ and $0 < \varepsilon < R$, then ${\mathbf{B}}(x,\varepsilon)$ is contained in a compact set $\tilde {\mathbf{K}} = \tilde {\mathbf{K}}_{{\mathbf{K}},R}$ which is independent of $x$ and $\varepsilon$. From the locally Lipschitz property we then have $d_G(x,y) \sim_{{\mathbf{K}},R,G} d_G(1, x^{-1} \cdot y) \sim_{{\mathbf{K}},R,G} d_G(1, y \cdot x^{-1})$ for all $x \in X$ and $y \in {\mathbf{B}}(x,\varepsilon)$, and the claim (i) follows.
In view of (i), we see that to prove (ii) it suffices to do so when $x=1$. But then this follows by iterating the volume doubling property.
Finally, we prove (iii). Let $S$ be a maximal $\varepsilon$-separated subset of ${\mathbf{B}}(x,A\varepsilon)$. Clearly the balls ${\mathbf{B}}(s,\varepsilon)$ with $s \in S$ cover ${\mathbf{B}}(x,A\varepsilon)$. Also, the balls ${\mathbf{B}}(s,\varepsilon/2)$ with $s \in S$ are disjoint subsets of ${\mathbf{B}}(x,(A+1/2)\varepsilon)$, and thus $$ \sum_{s \in S} \mu( {\mathbf{B}}(s,\varepsilon/2) ) \leq \mu( {\mathbf{B}}(x,(A+1/2)\varepsilon) ).$$
Applying (ii) we obtain $|S| = O_{A,{\mathbf{K}},R,G}(1)$, and the claim follows. (One could also have proceeded using Lemma \ref{cover}.)
If $d_G$ is globally reasonable, then an inspection of the above arguments shows that the constants which depended on ${\mathbf{K}}$ and $R$ are now uniform in those parameters. \end{proof}
\begin{lemma}[Relationship between entropy and measure]\label{entropy-lemma} Let $G$ be a locally reasonable metric group. Let ${\mathbf{K}}$ be a compact subset of $G$, and let $R > 0$. Then for every $0 < \varepsilon < R$ and every $X \subseteq {\mathbf{K}}$ we have \begin{equation}\label{neps}
{\mathcal N}_\varepsilon(X) \sim_{{\mathbf{K}},R,G} \frac{\mu( X \cdot {\mathbf{B}}(1,\varepsilon) )}{\mu( {\mathbf{B}}(1,\varepsilon) )} \end{equation} and \begin{equation}\label{neps2}
{\mathcal N}_\varepsilon(X) \sim_{{\mathbf{K}},R,G} {\mathcal N}_{2\varepsilon}(X).
\end{equation} In particular \begin{equation}\label{boxbox}
\mu(X \cdot {\mathbf{B}}(1,\varepsilon)) \sim_{{\mathbf{K}},R,G} \mu(X \cdot {\mathbf{B}}(1,2\varepsilon) ).
\end{equation} Also, the ball ${\mathbf{B}}(1,\varepsilon)$ is approximately normal in the sense that \begin{equation}\label{boxy}
\mu( {\mathbf{B}}(1,\varepsilon) \cdot X \cdot Y ) \sim_{{\mathbf{K}},R,G} \mu( X \cdot {\mathbf{B}}(1,\varepsilon) \cdot Y ) \sim_{{\mathbf{K}},R,G} \mu( X \cdot Y \cdot {\mathbf{B}}(1,\varepsilon) )
\end{equation} for any $X,Y \subseteq {\mathbf{K}}$. In particular we have $\mu( X \cdot {\mathbf{B}}(1,\varepsilon) ) \sim_{{\mathbf{K}},R,G} \mu( {\mathbf{B}}(1,\varepsilon) \cdot X )$.
If $G$ is globally reasonable, then we can replace $\sim_{{\mathbf{K}},R,G}$ by $\sim_G$ in the above estimates. \end{lemma}
\begin{remark} Note that no measurability conditions are required on $X$ and $Y$, since sets such as $X \cdot {\mathbf{B}}(1,\varepsilon)$ are automatically open and precompact. These types of inequalities are well known for Euclidean space, and the assumptions we have placed on the metric will allow us to extend the Euclidean space arguments to this more general setting without difficulty. \end{remark}
\begin{proof} Fix $X, \varepsilon$. Let $0 < c = c_{{\mathbf{K}},R,G} \leq 1$ be a small constant to be chosen later. Let $S$ be a maximal $c\varepsilon$-separated subset of $X$; then the balls ${\mathbf{B}}(s,c\varepsilon/2)$ for $s \in S$ are disjoint, and the balls ${\mathbf{B}}(s,c\varepsilon)$ cover $X$. In particular
${\mathcal N}_\varepsilon(X) \leq |S|$. By Lemma \ref{ball}(i), the balls ${\mathbf{B}}(s,c\varepsilon/2)$ are all contained in $X \cdot {\mathbf{B}}(1,\varepsilon)$ if $c$ is sufficiently small, and thus $$ \sum_{s \in S} \mu( {\mathbf{B}}(s,c\varepsilon/2) ) \leq \mu(X \cdot {\mathbf{B}}(1,\varepsilon) ).$$ Applying Lemma \ref{ball}(ii) we have
$$ \sum_{s \in S} \mu( {\mathbf{B}}(s,c\varepsilon/2) ) \sim_{{\mathbf{K}},R,G} |S| \mu({\mathbf{B}}(1,\varepsilon)) \geq {\mathcal N}_\varepsilon(X) \mu({\mathbf{B}}(1,\varepsilon))$$ thus obtaining the upper bound in \eqref{neps}.
Now we obtain the lower bound in \eqref{neps}. Let $\{ {\mathbf{B}}(s,\varepsilon): s \in S \}$ be any covering of $X$ by $\varepsilon$-balls with $S \subseteq X$.
Our task is to show that $\mu(X \cdot {\mathbf{B}}(1,\varepsilon)) = O_{{\mathbf{K}},R,G}( |S| \mu({\mathbf{B}}(1,\varepsilon)) )$. Since $X \cdot {\mathbf{B}}(1,\varepsilon)$ is covered by the sets ${\mathbf{B}}(s,\varepsilon) \cdot {\mathbf{B}}(1,\varepsilon)$ with $s \in S$, it thus suffices by the union bound to establish that $$ \mu( {\mathbf{B}}(s,\varepsilon) \cdot {\mathbf{B}}(1,\varepsilon) ) = O_{{\mathbf{K}},R,G}( \mu({\mathbf{B}}(1,\varepsilon)) ).$$ But from Lemma \ref{ball}(i) we have ${\mathbf{B}}(s,\varepsilon) \cdot {\mathbf{B}}(1,\varepsilon) \subseteq {\mathbf{B}}(s,C\varepsilon)$ for some $C = O_{{\mathbf{K}},R,G}(1)$, and the claim then follows from Lemma \ref{ball}(ii).
Now we establish \eqref{neps2}. The bound ${\mathcal N}_{2\varepsilon}(X) \leq {\mathcal N}_{\varepsilon}(X)$ is trivial, so it suffices to establish the reverse bound ${\mathcal N}_\varepsilon(X) = O_{{\mathbf{K}},R,G}( {\mathcal N}_{2\varepsilon}(X) )$. Let $\{ {\mathbf{B}}(s,2\varepsilon): s \in S \}$ be a covering of $X$ by balls of radius $2\varepsilon$ for some $S \subseteq X$; we may take $|S| = {\mathcal N}_{2\varepsilon}(X)$. It will suffice to show that $X$ can be covered by $O_{{\mathbf{K}},R,G}(|S|)$ balls of radius $\varepsilon$ with centres in $X$. By Remark \ref{entropy-equiv}, this will follow if we can cover $X$ by $O_{{\mathbf{K}},R,G}(|S|)$ balls of radius $\varepsilon/2$ whose centres do not necessarily lie in $X$. But this follows from Lemma \ref{ball}(iii).
Finally, the claim \eqref{boxy} follows easily from Lemma \ref{ball}(i) and \eqref{boxbox}.
If $G$ is globally reasonable, then an inspection of the above arguments shows that the constants which depended on ${\mathbf{K}}$ and $R$ are now uniform in those parameters. \end{proof}
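The estimate \eqref{neps} can be checked directly in the simplest Euclidean case; the particular set $[0,1]$ below is of course just one convenient choice.

\begin{remark} It is instructive to verify \eqref{neps} in the model case $G = ({{\mathbf R}},+)$ (so that, in additive notation, $X \cdot {\mathbf{B}}(1,\varepsilon)$ becomes $X + (-\varepsilon,\varepsilon)$) with $X = [0,1]$ and $0 < \varepsilon < 1$. Here $X + (-\varepsilon,\varepsilon) = (-\varepsilon,1+\varepsilon)$, and so the right-hand side of \eqref{neps} is $\frac{1+2\varepsilon}{2\varepsilon} \sim \varepsilon^{-1}$, which is indeed comparable to the number ${\mathcal N}_\varepsilon([0,1]) \sim \varepsilon^{-1}$ of open intervals of length $2\varepsilon$ needed to cover $[0,1]$. \end{remark}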
Lemma \ref{entropy-lemma} allows us to pass back and forth between entropies and measures, after paying various normalizing factors of $\mu({\mathbf{B}}(1,\varepsilon))$. Using this lemma, one can transfer\footnote{An alternate approach would be to repeat the \emph{proofs} of the previous estimates in the metric entropy setting. That approach also works, and in fact leads to slightly better implied constants in the $O()$ notation, however the repetition of the arguments would be rather boring and we have elected instead to illustrate the transference approach.} most of the continuous estimates of preceding sections to entropy ones, though if the metric $d_G$ is merely locally reasonable instead of globally reasonable, then one has to restrict the sets in question to a fixed compact region. We shall focus attention on the three main results of previous sections, namely Theorems \ref{tripling-classify}, \ref{energy-gleam}, \ref{energy-gleam-2}. We state these results for locally reasonable metric groups, but there is an obvious variant for globally reasonable metric groups in which the dependencies of the constants on ${\mathbf{K}}$ and $R$ are dropped.
\begin{theorem}\label{tripling-classify-metric} Let $G$ be a locally reasonable metric group. Let ${\mathbf{K}}$ be a compact set in $G$, let $R > 0$, let $0 < \varepsilon < R$, let $K \geq 1$, and let $A \subseteq {\mathbf{K}}$ be non-empty. Then the following three statements are equivalent, in the sense that if one of them holds for one choice of implied constant in the $O()$ and $\lesssim$ notation, then the other statements hold for a different choice of implied constant in the $O()$ and $\lesssim$ notation: \begin{itemize} \item[(i)] We have the tripling bound ${\mathcal N}_\varepsilon(A^3) \lesssim_{{\mathbf{K}},R,G} K^{O(1)} {\mathcal N}_\varepsilon(A)$. \item[(ii)] We have ${\mathcal N}_\varepsilon(A^{\epsilon_1} \ldots A^{\epsilon_n} ) \sim_{{\mathbf{K}},R,G,n} K^{O_n(1)} {\mathcal N}_\varepsilon(A)$ for all $n \geq 1$ and all signs $\epsilon_1, \ldots, \epsilon_n \in \{-1,1\}$. \item[(iii)] There exists a $O_{{\mathbf{K}},R,G}(K^{O(1)})$-approximate group $H$ with ${\mathcal N}_\varepsilon(H) \sim_{{\mathbf{K}},R,G} K^{O(1)} {\mathcal N}_\varepsilon(A)$ which contains $A$, and is contained in a compact set $\tilde {\mathbf{K}} = \tilde {\mathbf{K}}({\mathbf{K}},R)$ depending only on ${\mathbf{K}}$ and $R$. \end{itemize} \end{theorem}
\begin{proof} Let us first prove that (iii) implies (i). By Lemma \ref{ball}(i) we have $$ A^3 \subseteq H^3 \subseteq X \cdot X \cdot H$$ where $X$ is the set of cardinality at most $O_{{\mathbf{K}},R,G}(K^{O(1)})$ associated to $H$. Using Lemma \ref{entropy-lemma} we conclude that \begin{align*} {\mathcal N}_\varepsilon(A^3) &\lesssim_{{\mathbf{K}},R,G} \mu( X \cdot X \cdot H \cdot {\mathbf{B}}(1,\varepsilon) ) / \mu({\mathbf{B}}(1,\varepsilon)) \\
&\lesssim_{{\mathbf{K}},R,G} |X|^2 \mu(H \cdot {\mathbf{B}}(1,\varepsilon)) / \mu({\mathbf{B}}(1,\varepsilon)) \\ &\lesssim_{{\mathbf{K}},R,G} K^{O(1)} {\mathcal N}_\varepsilon(H) \\ &\sim_{{\mathbf{K}},R,G} K^{O(1)} {\mathcal N}_\varepsilon(A) \end{align*} which is (i).
A similar argument shows that (iii) implies (ii) and is left to the reader. Since (ii) trivially implies (i), it remains to show that (i) implies (iii). From the hypothesis on $A$ and Lemma \ref{entropy-lemma} we have $$ \mu( A^3 \cdot {\mathbf{B}}(1,\varepsilon) ) \lesssim_{{\mathbf{K}},R,G} K^{O(1)} \mu(A \cdot {\mathbf{B}}(1,\varepsilon) ).$$ By repeated application of \eqref{boxy} and \eqref{boxbox} we conclude that $$ \mu( (A \cdot {\mathbf{B}}(1,\varepsilon))^3 ) \lesssim_{{\mathbf{K}},R,G} K^{O(1)} \mu(A \cdot {\mathbf{B}}(1,c\varepsilon) ).$$ By Theorem \ref{tripling-classify} there exists a $O_{{\mathbf{K}},R,G}(K^{O(1)})$-approximate group $H$ which contains $A \cdot {\mathbf{B}}(1,c\varepsilon)$, and which obeys the estimate $$ \mu(H) \lesssim_{{\mathbf{K}},R,G} K^{O(1)} \mu(A \cdot {\mathbf{B}}(1,\varepsilon)) \sim_{{\mathbf{K}},R,G} K^{O(1)} {\mathcal N}_\varepsilon(A) \mu({\mathbf{B}}(1,\varepsilon)).$$ From the proof of Theorem \ref{tripling-classify}, and the hypothesis that $A \subseteq {\mathbf{K}}$ and $0 < \varepsilon < R$, we also see that $H \subseteq \tilde {\mathbf{K}}$ for some compact $\tilde {\mathbf{K}} = \tilde {\mathbf{K}}({\mathbf{K}},R)$. Then by Lemma \ref{entropy-lemma} \begin{align*} {\mathcal N}_\varepsilon(H) &\lesssim_{{\mathbf{K}},R,G} \mu( {\mathbf{B}}(1,\varepsilon) \cdot H ) / \mu({\mathbf{B}}(1,\varepsilon))\\ &\leq \mu(H \cdot H) / \mu({\mathbf{B}}(1,\varepsilon)) \\ &\lesssim_{{\mathbf{K}},R,G} K^{O(1)} \mu(H) / \mu({\mathbf{B}}(1,\varepsilon)) \\ &\lesssim_{{\mathbf{K}},R,G} K^{O(1)} {\mathcal N}_\varepsilon(A) \end{align*} and (iii) follows. \end{proof}
Now we give the metric entropy analogue of Theorem \ref{energy-gleam}.
\begin{theorem}\label{energy-gleam-metric} Let $G$ be a locally reasonable metric group. Let ${\mathbf{K}}$ be a compact set in $G$, let $R > 0$, let $0 < \varepsilon < R$, let $K \geq 1$, and let $A,B \subseteq {\mathbf{K}}$ be non-empty. Then the following two statements are equivalent, in the sense that if one of them holds for one choice of implied constant in the $O()$ and $\lesssim$ notation, then the other statement holds for a different choice of implied constant in the $O()$ and $\lesssim$ notation: \begin{itemize} \item[(i)] We have the product bound ${\mathcal N}_\varepsilon(A \cdot B) \lesssim_{{\mathbf{K}},R,G} K^{O(1)} {\mathcal N}_\varepsilon(A)^{1/2} {\mathcal N}_\varepsilon(B)^{1/2}$. \item[(ii)] There exists a $O_{{\mathbf{K}},R,G}(K^{O(1)})$-approximate group $H$ with ${\mathcal N}_\varepsilon(H) \lesssim_{{\mathbf{K}},R,G} K^{O(1)} {\mathcal N}_\varepsilon(A)^{1/2} {\mathcal N}_\varepsilon(B)^{1/2}$ and a finite set $X$ of cardinality $O_{{\mathbf{K}},R,G}(K^{O(1)})$ such that $A \subset X \cdot H$ and $B \subset H \cdot X$. Furthermore, $H$ and $X$ lie in a compact set $\tilde {\mathbf{K}} = \tilde {\mathbf{K}}({\mathbf{K}},R)$ depending only on ${\mathbf{K}}$ and $R$. \end{itemize} \end{theorem}
\begin{proof} First we show that (ii) implies (i). We have a set $Y \subset H^2$ of cardinality $O_{{\mathbf{K}},R,G}(K^{O(1)})$ such that $H \cdot H \subseteq H \cdot Y$, which implies that $A \cdot B \subseteq X \cdot H \cdot Y \cdot X$. From Lemma \ref{entropy-lemma} we then have \begin{align*} {\mathcal N}_\varepsilon( A \cdot B ) &\leq {\mathcal N}_\varepsilon( X \cdot H \cdot Y \cdot X )\\ &\lesssim_{{\mathbf{K}},R,G} \mu( X \cdot H \cdot Y \cdot X \cdot {\mathbf{B}}(1,\varepsilon) ) / \mu({\mathbf{B}}(1,\varepsilon)) \\
&\lesssim_{{\mathbf{K}},R,G} |X|^2 |Y| \mu( H \cdot {\mathbf{B}}(1,\varepsilon) ) / \mu({\mathbf{B}}(1,\varepsilon)) \\ &\lesssim_{{\mathbf{K}},R,G} K^{O(1)} {\mathcal N}_\varepsilon(H) \\ &\lesssim_{{\mathbf{K}},R,G} K^{O(1)} {\mathcal N}_\varepsilon(A)^{1/2} {\mathcal N}_\varepsilon(B)^{1/2} \end{align*} which is (i).
Now we show that (i) implies (ii). From the hypothesis and Lemma \ref{entropy-lemma} we have \begin{align*} \mu( A \cdot {\mathbf{B}}(1,\varepsilon) \cdot B \cdot {\mathbf{B}}(1,\varepsilon) ) &\lesssim_{{\mathbf{K}},R,G} \mu(A \cdot B \cdot {\mathbf{B}}(1,\varepsilon) ) \\ &\lesssim_{{\mathbf{K}},R,G} {\mathcal N}_\varepsilon(A \cdot B) \mu({\mathbf{B}}(1,\varepsilon))\\ &\lesssim_{{\mathbf{K}},R,G} K^{O(1)} {\mathcal N}_\varepsilon(A)^{1/2} {\mathcal N}_\varepsilon(B)^{1/2} \mu({\mathbf{B}}(1,\varepsilon)) \\ &\sim_{{\mathbf{K}},R,G} K^{O(1)} \mu(A \cdot {\mathbf{B}}(1,\varepsilon))^{1/2} \mu(B \cdot {\mathbf{B}}(1,\varepsilon))^{1/2}. \end{align*} Applying Theorem \ref{energy-gleam}, we can find a $O_{{\mathbf{K}},R,G}(K^{O(1)})$-approximate group $H$ and a set $X$ such that $A \cdot {\mathbf{B}}(1,\varepsilon) \subset X \cdot H$ and $B \cdot {\mathbf{B}}(1,\varepsilon) \subset H \cdot X$, with the bounds \begin{align*} \mu(H) &\lesssim_{{\mathbf{K}},R,G} K^{O(1)} \mu(A \cdot {\mathbf{B}}(1,\varepsilon))^{1/2} \mu(B \cdot {\mathbf{B}}(1,\varepsilon))^{1/2} \\ &\lesssim_{{\mathbf{K}},R,G} K^{O(1)} {\mathcal N}_\varepsilon(A)^{1/2} {\mathcal N}_\varepsilon(B)^{1/2} \mu({\mathbf{B}}(1,\varepsilon)) \end{align*}
and $|X| \lesssim_{{\mathbf{K}},R,G} K^{O(1)}$. We then have \begin{align*} {\mathcal N}_\varepsilon(H) &\lesssim_{{\mathbf{K}},R,G} \mu({\mathbf{B}}(1,\varepsilon) \cdot H) / \mu({\mathbf{B}}(1,\varepsilon)) \\ &\leq \mu( A \cdot {\mathbf{B}}(1,\varepsilon) \cdot H ) / \mu({\mathbf{B}}(1,\varepsilon)) \\ &\leq \mu( X \cdot H \cdot H ) / \mu({\mathbf{B}}(1,\varepsilon)) \\
&\lesssim_{{\mathbf{K}},R,G} |X| K^{O(1)} \mu(H) / \mu({\mathbf{B}}(1,\varepsilon)) \\ &\lesssim_{{\mathbf{K}},R,G} K^{O(1)} {\mathcal N}_\varepsilon(A)^{1/2} {\mathcal N}_\varepsilon(B)^{1/2} \end{align*} and (ii) follows. \end{proof}
Now we turn to developing a metric entropy analogue of Theorem \ref{energy-gleam-2}. This will be a bit trickier, as we shall need an ``$\varepsilon$-approximate'' version of the multiplicative energy ${{\operatorname{E}}}(A,B)$. There are a number of essentially equivalent ways to do so, each of which is at least somewhat artificial; for the sake of concreteness we shall fix one as follows. Given any $A, B \subset G$ and $\varepsilon > 0$, the set $$ Q_\varepsilon(A,B) := \{ (a,b,a',b') \in A \times B \times A \times B: d_G( a \cdot b, a' \cdot b' ) \leq \varepsilon \}$$ of approximately multiplicative quadruples is a subset of $G^4$, which we view as a metric space with the metric $$ d_{G^4}((x_1,x_2,x_3,x_4),(y_1,y_2,y_3,y_4)) := \sum_{i=1}^4 d_G(x_i,y_i).$$ We then define the $\varepsilon$-approximate multiplicative energy ${{\operatorname{E}}}_\varepsilon(A,B)$ to be the quantity ${\mathcal N}_\varepsilon(Q_\varepsilon(A,B))$. Note that if $A,B$ are finite sets, then this quantity will equal the usual (discrete) multiplicative energy \eqref{energy-discrete} for $\varepsilon$ sufficiently small.
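The following model computation may help motivate the normalization in this definition; the interval $[0,1]$ is of course just one convenient test case.

\begin{remark} To illustrate this definition, consider the model case $A = B = [0,1]$ in $({{\mathbf R}},+)$ with $0 < \varepsilon < 1$. Then $Q_\varepsilon(A,B) = \{ (a,b,a',b') \in [0,1]^4: |a+b-a'-b'| \leq \varepsilon \}$ is comparable to an $\varepsilon$-neighbourhood of a three-dimensional slice of the unit cube, and so $$ {{\operatorname{E}}}_\varepsilon(A,B) = {\mathcal N}_\varepsilon(Q_\varepsilon(A,B)) \sim \varepsilon^{-3} \sim {\mathcal N}_\varepsilon(A)^{3/2} {\mathcal N}_\varepsilon(B)^{3/2},$$ so that statement (i) of Theorem \ref{energy-gleam-2-metric} below holds with $K = O(1)$, as one expects for an interval. \end{remark}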
\begin{theorem}\label{energy-gleam-2-metric} Let $G$ be a locally reasonable metric group. Let ${\mathbf{K}}$ be a compact set in $G$, let $R > 0$, let $0 < \varepsilon < R$, let $K \geq 1$, and let $A,B \subseteq {\mathbf{K}}$ be non-empty. Then the following four statements are equivalent, in the sense that if one of them holds for one choice of implied constant in the $O()$ and $\lesssim$ notation, then the other statements hold for a different choice of implied constant in the $O()$ and $\lesssim$ notation: \begin{itemize} \item[(i)] We have the energy bound ${{\operatorname{E}}}_\varepsilon(A,B) \gtrsim_{{\mathbf{K}},R,G} K^{O(1)} {\mathcal N}_\varepsilon(A)^{3/2} {\mathcal N}_\varepsilon(B)^{3/2}$. \item[(ii)] There exists a subset $E \subset A \times B$ of entropy ${\mathcal N}_\varepsilon(E) \gtrsim_{{\mathbf{K}},R,G} K^{O(1)} {\mathcal N}_\varepsilon(A) {\mathcal N}_\varepsilon(B)$ such that ${\mathcal N}_\varepsilon( \{ a \cdot b: (a,b) \in E \} ) \lesssim_{{\mathbf{K}},R,G} K^{O(1)} {\mathcal N}_\varepsilon(A)^{1/2} {\mathcal N}_\varepsilon(B)^{1/2}$. (Of course, we measure the entropy of $E$ using the product metric on $G^2$.) \item[(iii)] There exist subsets $A'$, $B'$ of $A, B$ respectively such that ${\mathcal N}_\varepsilon(A') \sim_{{\mathbf{K}},R,G} K^{O(1)} {\mathcal N}_\varepsilon(A)$, ${\mathcal N}_\varepsilon(B') \sim_{{\mathbf{K}},R,G} K^{O(1)} {\mathcal N}_\varepsilon(B)$, and ${\mathcal N}_\varepsilon(A' \cdot B') \sim_{{\mathbf{K}},R,G} K^{O(1)} {\mathcal N}_\varepsilon(A)^{1/2} {\mathcal N}_\varepsilon(B)^{1/2}$.
\item[(iv)] There exists a $O(K^{O(1)})$-approximate group $H$ with ${\mathcal N}_\varepsilon(H) \sim_{{\mathbf{K}},R,G} K^{O(1)} {\mathcal N}_\varepsilon(A)^{1/2} {\mathcal N}_\varepsilon(B)^{1/2}$, and elements $x, y \in G$ such that ${\mathcal N}_\varepsilon( A \cap (x \cdot H) ) \sim_{{\mathbf{K}},R,G} K^{O(1)} {\mathcal N}_\varepsilon(A)$ and ${\mathcal N}_\varepsilon( B \cap (H \cdot y) ) \sim_{{\mathbf{K}},R,G} K^{O(1)} {\mathcal N}_\varepsilon(B)$. Furthermore, $H$, $x$, $y$ lie in a compact set $\tilde {\mathbf{K}} = \tilde {\mathbf{K}}({\mathbf{K}},R)$ that depends only on ${\mathbf{K}}$ and $R$. \end{itemize} \end{theorem}
\begin{proof} Let us first show that (iv) implies (iii). We set $A' := A \cap (x \cdot H)$ and $B' := B \cap (H \cdot y)$. The bounds on ${\mathcal N}_\varepsilon(A')$ and ${\mathcal N}_\varepsilon(B')$ are obvious. From Lemma \ref{entropy-lemma} and the trivial estimate $\mu( X \cdot Y ) \geq \mu(X)^{1/2} \mu(Y)^{1/2}$ we see that ${\mathcal N}_\varepsilon(A' \cdot B') \gtrsim_{{\mathbf{K}},R,G} {\mathcal N}_\varepsilon(A')^{1/2} {\mathcal N}_\varepsilon(B')^{1/2}$, which gives the lower bound on ${\mathcal N}_\varepsilon(A' \cdot B')$. To obtain the upper bound, we use Lemma \ref{entropy-lemma} to compute \begin{align*} {\mathcal N}_\varepsilon(A' \cdot B') &\leq {\mathcal N}_\varepsilon( x \cdot H \cdot H \cdot y ) \\ &\lesssim_{{\mathbf{K}},R,G} \mu( x \cdot H \cdot H \cdot y \cdot {\mathbf{B}}(1,\varepsilon) ) / \mu({\mathbf{B}}(1,\varepsilon)) \\ &\lesssim_{{\mathbf{K}},R,G} \mu( {\mathbf{B}}(1,\varepsilon) \cdot H \cdot H ) / \mu({\mathbf{B}}(1,\varepsilon)). \end{align*} But $H \cdot H$ is covered by $O_{{\mathbf{K}},R,G}(K^{O(1)})$ right-translates of $H$, and so $$ {\mathcal N}_\varepsilon(A' \cdot B') \lesssim_{{\mathbf{K}},R,G} K^{O(1)} {\mathcal N}_\varepsilon(H) \sim_{{\mathbf{K}},R,G} K^{O(1)} {\mathcal N}_\varepsilon(A)^{1/2} {\mathcal N}_\varepsilon(B)^{1/2}$$ as desired.
Now we show that (iii) implies (ii). We take $E := A' \times B'$. The bound on ${\mathcal N}_\varepsilon( \{ a \cdot b: (a,b) \in E \} )$ is obvious, while by considering products of $\varepsilon$-separated sets it is easy to establish a bound of the form $${\mathcal N}_{\varepsilon/100}(E) \gtrsim_{{\mathbf{K}},R,G} {\mathcal N}_\varepsilon(A') {\mathcal N}_\varepsilon(B') \gtrsim_{{\mathbf{K}},R,G} K^{O(1)} {\mathcal N}_\varepsilon(A) {\mathcal N}_\varepsilon(B).$$ The claim then follows from \eqref{neps2} (note that if $G$ is locally reasonable then so is $G^2$).
Now we show that (ii) implies (i). Let $E'$ be a maximal $100\varepsilon$-separated subset of $E$; then by \eqref{neps2}
$$ |E'| \geq {\mathcal N}_{100\varepsilon}(E) \sim_{{\mathbf{K}},R,G} {\mathcal N}_\varepsilon(E) \gtrsim_{{\mathbf{K}},R,G} K^{O(1)} {\mathcal N}_\varepsilon(A) {\mathcal N}_\varepsilon(B).$$ Let $D$ be a maximal $\varepsilon/2$-separated subset of $\{ a \cdot b: (a,b) \in E \}$, thus
$$ |D| \leq {\mathcal N}_{\varepsilon/4}( \{ a \cdot b: (a,b) \in E \} ) \sim_{{\mathbf{K}},R,G} {\mathcal N}_\varepsilon( \{ a \cdot b: (a,b) \in E \} ) \lesssim_{{\mathbf{K}},R,G} K^{O(1)} {\mathcal N}_\varepsilon(A)^{1/2} {\mathcal N}_\varepsilon(B)^{1/2}.$$ Observe that for every $(a,b) \in E'$, the product $a \cdot b$ lies within $\varepsilon/2$ of an element of $D$, thus
$$ \sum_{x \in D} | \{ (a,b) \in E': d_G( a \cdot b, x ) \leq \varepsilon/2 \} | \geq |E'| \gtrsim_{{\mathbf{K}},R,G} K^{O(1)} {\mathcal N}_\varepsilon(A) {\mathcal N}_\varepsilon(B).$$ Applying Cauchy-Schwarz we conclude that
$$ \sum_{x \in D} | \{ (a,b) \in E': d_G( a \cdot b, x ) \leq \varepsilon/2 \} |^2 \gtrsim_{{\mathbf{K}},R,G} K^{O(1)} {\mathcal N}_\varepsilon(A)^{3/2} {\mathcal N}_\varepsilon(B)^{3/2}.$$ Observe that if $(a,b), (a',b') \in E'$ and $x \in D$ are such that $d_G(a \cdot b,x),d_G(a' \cdot b',x) \leq \varepsilon/2$ then $(a,b,a',b') \in Q_\varepsilon(A,B)$. Thus
$$ |Q_\varepsilon(A,B) \cap (E' \times E')| \gtrsim_{{\mathbf{K}},R,G} K^{O(1)} {\mathcal N}_\varepsilon(A)^{3/2} {\mathcal N}_\varepsilon(B)^{3/2}.$$ But $E' \times E'$ is clearly $\varepsilon$-separated, thus $$ {{\operatorname{E}}}_\varepsilon(A,B) = {\mathcal N}_\varepsilon( Q_\varepsilon(A,B) ) \gtrsim_{{\mathbf{K}},R,G} K^{O(1)} {\mathcal N}_\varepsilon(A)^{3/2} {\mathcal N}_\varepsilon(B)^{3/2}$$ as desired.
Finally, we show that (i) implies (iv), which is the most difficult implication. Let $C = C_{{\mathbf{K}},R,G}$ be a large constant to be chosen later. Let $\overline{A} := A \cdot {\mathbf{B}}(1,C\varepsilon)$ and $\overline{B} := B \cdot {\mathbf{B}}(1,C\varepsilon)$, thus from Lemma \ref{entropy-lemma} we see that $\overline{A}$, $\overline{B}$ are multiplicative sets with \begin{equation}\label{overbite} \mu(\overline{A}) \sim_{{\mathbf{K}},R,G,C} {\mathcal N}_\varepsilon(A) \mu({\mathbf{B}}(1,\varepsilon)); \quad \mu(\overline{B}) \sim_{{\mathbf{K}},R,G,C} {\mathcal N}_\varepsilon(B) \mu({\mathbf{B}}(1,\varepsilon)). \end{equation}
Now consider the quantity ${{\operatorname{E}}}( \overline{A}, \overline{B} )$. We can rewrite this as \begin{align*} {{\operatorname{E}}}( \overline{A}, \overline{B} ) &= \int_G (1_{\overline{A}} * 1_{\overline{B}})(x)^2\ d\mu(x) \\ &= \int_{\overline{A}} \int_{\overline{B}} 1_{\overline{A}} * 1_{\overline{B}}( a \cdot b)\ d\mu(a) d\mu(b) \\ &= \int_{\overline{A}} \int_{\overline{B}} \int_{\overline{A}} 1_{\overline{B}}( (a')^{-1} \cdot a \cdot b)\ d\mu(a') d\mu(a) d\mu(b). \end{align*} Now observe that if $C$ is large enough, then for any $x \in G$, the set $B \cdot {\mathbf{B}}(1,\sqrt{C}\varepsilon)$ intersects ${\mathbf{B}}(x,\sqrt{C}\varepsilon)$ only when $x \in \overline{B}$. This (and Lemma \ref{ball}(ii)) leads to the pointwise estimate $$ 1_{\overline{B}}( x ) \gtrsim_{{\mathbf{K}},R,G} \frac{1}{\mu({\mathbf{B}}(1,\varepsilon))} \int_{B \cdot {\mathbf{B}}(1,\sqrt{C}\varepsilon)}\ 1_{{\mathbf{B}}(x,\sqrt{C}\varepsilon)}(b) d\mu(b)$$ and hence $$ {{\operatorname{E}}}( \overline{A}, \overline{B} ) \gtrsim_{{\mathbf{K}},R,G} \frac{1}{\mu({\mathbf{B}}(1,\varepsilon))} \int_{\overline{A}} \int_{\overline{B}} \int_{\overline{A}} \int_{B \cdot {\mathbf{B}}(1,\sqrt{C}\varepsilon)} 1_{{\mathbf{B}}((a')^{-1} \cdot a \cdot b,\sqrt{C}\varepsilon)}(b')\ d\mu(b') d\mu(a') d\mu(a) d\mu(b).$$ Now observe (from the local Lipschitz property) that if $(a,b,a',b') \in Q_\varepsilon(A,B) \cdot B_{G^4}(1,\varepsilon)$, where $B_{G^4}(1,\varepsilon)$ denotes the ball in $G^4$, then $d( a \cdot b, a' \cdot b' ) \lesssim_{{\mathbf{K}},R,G} \varepsilon$, and hence (if $C$ is large enough) $b' \in {\mathbf{B}}((a')^{-1} \cdot a \cdot b,\sqrt{C}\varepsilon)$. 
Thus we have $$ {{\operatorname{E}}}( \overline{A}, \overline{B} ) \gtrsim_{{\mathbf{K}},R,G} \frac{1}{\mu({\mathbf{B}}(1,\varepsilon))} \mu^{\otimes 4}( Q_\varepsilon(A,B) \cdot B_{G^4}(1,\varepsilon) )$$ and hence (by the analogue\footnote{Here we need the easily verified fact that the direct product of finitely many locally reasonable metric groups is still locally reasonable.} of Lemma \ref{entropy-lemma} for $G^4$) $$ {{\operatorname{E}}}( \overline{A}, \overline{B} ) \gtrsim_{{\mathbf{K}},R,G} \frac{\mu^{\otimes 4}(B_{G^4}(1,\varepsilon))}{\mu({\mathbf{B}}(1,\varepsilon))} {\mathcal N}_\varepsilon( Q_\varepsilon(A,B) ) \gtrsim_{{\mathbf{K}},R,G} \mu({\mathbf{B}}(1,\varepsilon))^3 {{\operatorname{E}}}_\varepsilon(A,B).$$ We henceforth fix $C$ to be a suitably large quantity depending on ${\mathbf{K}},R,G$, and thus can omit the dependence of $C$ in the estimates which follow. By hypothesis on ${{\operatorname{E}}}_\varepsilon(A,B)$ and \eqref{overbite}, we thus have $${{\operatorname{E}}}( \overline{A}, \overline{B} ) \gtrsim_{{\mathbf{K}},R,G} K^{O(1)} \mu(\overline{A})^{3/2} \mu(\overline{B})^{3/2}.$$ Applying Proposition \ref{energy-gleam-2} we can thus locate a $O_{{\mathbf{K}},R,G}(K^{O(1)})$-approximate group $H$ of size $$\mu(H) \sim_{{\mathbf{K}},R,G} K^{O(1)} \mu(\overline{A})^{1/2} \mu(\overline{B})^{1/2}$$ and elements $x, y \in G$ such that $\mu( \overline{A} \cap (x \cdot H) ) \sim_{{\mathbf{K}},R,G} K^{O(1)} \mu(\overline{A})$ and $\mu( \overline{B} \cap (H \cdot y) ) \sim_{{\mathbf{K}},R,G} K^{O(1)} \mu(\overline{B})$. An inspection of the proof of Proposition \ref{energy-gleam-2} reveals that $H$, $x$, $y$ are also contained in a compact set that depends only on ${\mathbf{K}}$ and $R$. 
Note that from the trivial bounds $\mu( \overline{A} \cap (x \cdot H)) \leq \mu(x \cdot H)$ and $\mu( \overline{B} \cap (H \cdot y) ) \leq \mu(H \cdot y)$ we can conclude that $\overline{A}$ and $\overline{B}$ are comparable in size: $$ \mu( \overline{A} ) \sim_{{\mathbf{K}},R,G} K^{O(1)} \mu(\overline{B}).$$ From \eqref{overbite} we thus have entropy comparability also: \begin{equation}\label{entropy-compare} {\mathcal N}_\varepsilon(A) \sim_{{\mathbf{K}},R,G} K^{O(1)} {\mathcal N}_\varepsilon(B). \end{equation} From \eqref{overbite} again, we have a good bound on the measure of $H$: \begin{equation}\label{muh} \mu(H) \sim_{{\mathbf{K}},R,G} K^{O(1)} {\mathcal N}_\varepsilon(A)^{1/2} {\mathcal N}_\varepsilon(B)^{1/2} \mu({\mathbf{B}}(1,\varepsilon)). \end{equation} However to get a good bound on the \emph{entropy} of $H$ we need to estimate $\mu(H \cdot B(1,\varepsilon))$. This we shall do by means of the Ruzsa triangle inequality. Observe that $$ \mu( [\overline{A} \cap (x \cdot H)] \cdot H ) \leq \mu( x \cdot H \cdot H ) \lesssim_{{\mathbf{K}},R,G} K^{O(1)} \mu(H)$$ since $H$ is an approximate group. 
From our bounds on $\mu(H)$ and $\mu(\overline{A} \cap (x \cdot H))$ we conclude that $$ d( \overline{A} \cap (x \cdot H), H^{-1} ) \leq O(\log K) + O_{{\mathbf{K}},R,G}(1).$$ Next, we observe that $$ \mu( [\overline{A} \cap (x \cdot H)] \cdot {\mathbf{B}}(1,\varepsilon) ) \leq \mu( \overline{A} \cdot {\mathbf{B}}(1,\varepsilon) ) = \mu( A \cdot {\mathbf{B}}(1,C\varepsilon) \cdot {\mathbf{B}}(1,\varepsilon) )$$ and so from Lemma \ref{entropy-lemma} we have $$ \mu( [\overline{A} \cap (x \cdot H)] \cdot {\mathbf{B}}(1,\varepsilon) ) \lesssim_{{\mathbf{K}},R,G} K^{O(1)} {\mathcal N}_\varepsilon(A) \mu({\mathbf{B}}(1,\varepsilon)).$$ This gives a bound on the Ruzsa distance: $$ d( \overline{A} \cap (x \cdot H), {\mathbf{B}}(1,\varepsilon)^{-1} ) \leq \log {\mathcal N}_\varepsilon(A) + O(\log K) + O_{{\mathbf{K}},R,G}(1).$$ By the triangle inequality we conclude that $$ d( H^{-1}, {\mathbf{B}}(1,\varepsilon)^{-1} ) \leq \log {\mathcal N}_\varepsilon(A) + O(\log K) + O_{{\mathbf{K}},R,G}(1)$$ which implies that \begin{equation}\label{buh} \mu(H \cdot {\mathbf{B}}(1,\varepsilon)) \lesssim_{{\mathbf{K}},R,G} K^{O(1)} {\mathcal N}_\varepsilon(A) \mu({\mathbf{B}}(1,\varepsilon)). \end{equation} Combining this with Lemma \ref{entropy-lemma}, \eqref{muh}, \eqref{entropy-compare}, and the trivial lower bound $\mu(H \cdot {\mathbf{B}}(1,\varepsilon)) \geq \mu(H)$ we conclude that $$ {\mathcal N}_\varepsilon(H) \sim_{{\mathbf{K}},R,G} K^{O(1)} {\mathcal N}_\varepsilon(A)^{1/2} {\mathcal N}_\varepsilon(B)^{1/2}.$$ We are nearly done with establishing (iv), but there is one slight problem: we have shown that $\overline{A}$ and $\overline{B}$ have large intersection (in the measure sense) with translates of $H$, but we need $A$ and $B$ to have large intersection (in the entropy sense) with translates of $H$. There are a number of ways to resolve this; one is as follows. 
Observe that $$ {\mathcal N}_\varepsilon( \overline{A} \cap (x \cdot H) ) \gtrsim_{{\mathbf{K}},R,G} \mu( \overline{A} \cap (x \cdot H ) ) / \mu({\mathbf{B}}(1,\varepsilon)) \gtrsim_{{\mathbf{K}},R,G} K^{O(1)} {\mathcal N}_\varepsilon(A).$$ Thus there exists an $\varepsilon$-separated subset $\overline{A'}$ of $\overline{A} \cap (x \cdot H)$ of cardinality $\gtrsim_{{\mathbf{K}},R,G} K^{O(1)} {\mathcal N}_\varepsilon(A)$. Using \eqref{neps2} one can refine this subset to be $C\varepsilon$-separated for any fixed $C = O_{{\mathbf{K}},R,G}(1)$ without degrading the cardinality of $\overline{A'}$ significantly. By construction of $\overline{A'}$, each element of $\overline{A'}$ is at a distance $O_{{\mathbf{K}},R,G}(\varepsilon)$ from an element of $A$. This shows that there exists an $\varepsilon$-separated subset $A'$ of $A$ of cardinality $\gtrsim_{{\mathbf{K}},R,G} K^{O(1)} {\mathcal N}_\varepsilon(A)$, with each element of $A'$ at a distance $O_{{\mathbf{K}},R,G}(\varepsilon)$ from an element of $x \cdot H$. If we then set $\tilde H := H \cdot {\mathbf{B}}(1,C\varepsilon)$ for some sufficiently large $C = O_{{\mathbf{K}},R,G}(1)$, we see that $A'$ is contained in $x \cdot \tilde H$ and so ${\mathcal N}_\varepsilon(A \cap (x \cdot \tilde H)) \gtrsim_{{\mathbf{K}},R,G} K^{O(1)} {\mathcal N}_\varepsilon(A)$. A similar argument yields ${\mathcal N}_\varepsilon(B \cap (\tilde H \cdot y)) \gtrsim_{{\mathbf{K}},R,G} K^{O(1)} {\mathcal N}_\varepsilon(B)$. Now we need to pass from $\tilde H$ back to $H$. Recall that as $H$ is an approximate group, $H \cdot H$ can be covered by $O_{{\mathbf{K}},R,G}(K^{O(1)})$ left-translates of $H$, hence $H \cdot \tilde H$ can be covered by $O_{{\mathbf{K}},R,G}(K^{O(1)})$ left-translates of $\tilde H$. 
Combining this with \eqref{muh}, \eqref{buh}, \eqref{boxbox} we have $$ \mu(H \cdot \tilde H) \lesssim_{{\mathbf{K}},R,G} K^{O(1)} \mu(H).$$ Applying Lemma \ref{cover} we can thus cover $\tilde H$ by $O_{{\mathbf{K}},R,G}(K^{O(1)})$ left-translates of $H$. In particular we can cover $x \cdot \tilde H$ by $O_{{\mathbf{K}},R,G}(K^{O(1)})$ sets of the form $x' \cdot H$, and so by the pigeonhole principle we have ${\mathcal N}_\varepsilon(A \cap (x' \cdot H)) \gtrsim_{{\mathbf{K}},R,G} K^{O(1)} {\mathcal N}_\varepsilon(A)$ for some $x'$, which one can easily verify is contained in a compact set $\tilde {\mathbf{K}}({\mathbf{K}},R)$ depending only on ${\mathbf{K}}$ and $R$. A similar argument (using \eqref{boxy} to move the ${\mathbf{B}}(1,\varepsilon)$ factors around as necessary) gives ${\mathcal N}_\varepsilon(B \cap (H \cdot y')) \gtrsim_{{\mathbf{K}},R,G} K^{O(1)} {\mathcal N}_\varepsilon(B)$ for some $y' \in \tilde {\mathbf{K}}({\mathbf{K}},R)$, and (iv) follows. \end{proof}
One can of course develop metric entropy analogues of many of the other estimates from previous sections (such as the Ruzsa triangle inequality). We leave the details to the reader.
\section{Inverse theorems}\label{inverse-sec}
The above theory reduces the study of sets of small doubling or tripling (or pairs of sets with small product set, small partial product set, or large multiplicative energy) to that of studying approximate groups, at least if one is prepared to lose polynomial factors in the constants and (in the locally reasonable metric entropy setting) one restricts all sets to a compact region. There remains of course the question of how to effectively classify these approximate groups; we refer to this as the \emph{inverse product set problem} (or the \emph{inverse sum set problem}, in the abelian additive setting). At present, there is not even a reasonable conjecture as to what such objects should look like; there are obvious examples of approximate groups, such as genuine groups, geometric progressions\footnote{In the abelian case, the group $G$ is usually written additively, and it is then the \emph{arithmetic} progressions which are relevant here. However as we are considering the non-commutative setting we are forced to depart from the usual additive notation and work instead with geometric progressions.}, and (given sufficient commutativity) the direct sum of other approximate groups, but it is not clear in general what the statement should be\footnote{Indeed, the problem can be viewed as a robust version of the problem of classifying all the subgroups of a given group $G$, which is already quite a difficult problem, especially for highly non-abelian groups such as the permutation group $S_n$. In some cases it seems that the class of approximate subgroups of $G$ is not much ``richer'' than the class of genuine subgroups of $G$, in the sense that one can express approximate subgroups as dense subsets of combinations of genuine subgroups and related objects such as geometric progressions, but this might not be true for sufficiently complicated groups $G$.}.
There are however a number of special cases which are well understood. If $G$ is a discrete abelian $r$-torsion group for some small $r > 1$ (thus $x^r = 1$ for all $x \in G$) and $A$ is a finite non-empty subset of $G$
then it is known that $|A \cdot A| = O(|A|)$ if and only if $A$ can be contained in a finite subgroup $H$ of $G$ of size $O(|A|)$; see \cite{ruzsa-group}. If $G$ is instead a discrete abelian torsion-free group, and $A$ is a finite non-empty subset of $G$, then a famous theorem of Freiman \cite{frei} (see also \cite{bilu-freiman}, \cite{ruzsa-freiman}, \cite{chang}) shows that $|A \cdot A| = O(|A|)$ if and only if $A$ is contained in the product $P$ of $O(1)$ geometric progressions, whose total cardinality is $O(|A|)$. These results were unified in \cite{gr-4}, in which $A$ was now a finite non-empty subset of an arbitrary abelian group $G$, with the result now being that $|A \cdot A| = O(|A|)$ if and only if $A$ is contained in the product $P$ of $O(1)$ geometric progressions and a finite subgroup of $G$, whose total cardinality is $O(|A|)$. See \cite{tao-vu} for a presentation of all of these abelian results. Apart from the (important) issue of quantifying the dependence of constants here, this is a satisfactory resolution of the inverse sum set problem in the discrete setting.
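The dichotomy behind these theorems is easy to observe numerically: a multi-dimensional progression has bounded doubling, while a dissociated set does not. The following Python sketch (written additively, as in the abelian setting; the particular progressions and parameters are ours, chosen purely for illustration) makes the comparison in ${{\mathbf Z}}$.

```python
def sumset(X, Y):
    return {x + y for x in X for y in Y}

# A two-dimensional progression ("GAP") in Z: doubling stays bounded.
P = {a + 1000 * b for a in range(10) for b in range(10)}   # |P| = 100
# A lacunary (dissociated) set of the same size: doubling is near-maximal.
L = {2 ** k for k in range(100)}                           # |L| = 100

doubling_P = len(sumset(P, P)) / len(P)   # = 361/100, bounded as |P| grows
doubling_L = len(sumset(L, L)) / len(L)   # = 5050/100, of order |L|/2
```

Here $|P+P| = 19^2 = 361$ since the two "directions" of the progression simply double in length, while all pairwise sums of distinct powers of two are distinct, giving $|L+L| = \binom{100}{2} + 100 = 5050$.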
In the abelian setting it is also easy to pass to the continuous setting and the metric entropy setting. For instance, we have
\begin{proposition}[Continuous version of Freiman's theorem]\label{frei-cts} Let $d \geq 1$. Let $A$ be an open bounded non-empty subset of ${{\mathbf R}}^d$ such that $\mu(A + A) \leq K \mu(A)$ for some $K \geq 2^d$, where $\mu$ denotes Lebesgue measure. Then there exists an $\varepsilon > 0$ and a set $P$ which is the sum of $O_K(1)$ arithmetic progressions in ${{\mathbf R}}^d$ such that $A \subseteq P + {\mathbf{B}}(0,\varepsilon)$ and $\mu(P + {\mathbf{B}}(0,\varepsilon)) \sim_K \mu(A)$. \end{proposition}
\begin{remark}\label{cfr} Note that the trivial inclusion $A+A \supset 2 \cdot A$ (or the Brunn-Minkowski inequality) shows that $K$ cannot be less than $2^d$. In the converse direction, it is easy to see that if $A \subseteq P + {\mathbf{B}}(0,\varepsilon)$ and $\mu(P + {\mathbf{B}}(0,\varepsilon)) \sim_K \mu(A)$, where $P$ is the sum of $O(1)$ arithmetic progressions, then $\mu(A+A) \sim_K \mu(A)$. One can certainly use the arguments in \cite{bilu-freiman}, \cite{ruzsa-freiman}, \cite{gt-freimanbilu} to quantify the exact dependence on $K$ in the above proposition but we will not attempt to do so here. It is also not difficult to modify the above proposition to replace the Euclidean space ${{\mathbf R}}^d$ with a torus such as ${{\mathbf R}}^d/{{\mathbf Z}}^d$ by a lifting argument; we omit the details. \end{remark}
\begin{proof} Since $A+A$ is open, we have $A+A = \bigcap_{\varepsilon > 0} (A+A + {\mathbf{B}}(0,\varepsilon))$. By the monotone convergence theorem, we can thus find an $\varepsilon > 0$ such that $\mu( A + A + {\mathbf{B}}(0,\varepsilon) ) \sim \mu( A + A ) \sim_K \mu( A )$.
Now let $\tilde A := (A + {\mathbf{B}}(0,\varepsilon/2)) \cap ( \frac{\varepsilon}{10d} \cdot {{\mathbf Z}}^d )$, thus $\tilde A$ is a finite non-empty set. From the inclusions $$ A \subseteq \tilde A + {\mathbf{B}}(0,\varepsilon) \hbox{ and } \tilde A + \tilde A + {\mathbf{B}}(0, \frac{\varepsilon}{100d} ) \subseteq A + A + {\mathbf{B}}(0,\varepsilon)$$ one easily verifies the estimate
$$ \mu(A) \lesssim_K |\tilde A| \varepsilon^d \leq |\tilde A + \tilde A| \varepsilon^d \lesssim_{K} \mu(A).$$
Note that any dependencies on $d$ of the implied constant can be converted to a dependency on $K$ since $K \geq 2^d$. In particular we have $|\tilde A + \tilde A| \sim_K |\tilde A|$. Applying Freiman's theorem (see e.g. \cite{frei}, \cite{bilu-freiman}, \cite{ruzsa-freiman}, \cite{chang}, \cite{nathanson}, \cite{tao-vu}) we can thus place $\tilde A$ inside a set $P \subset {{\mathbf R}}^d$ of cardinality
$|P| \sim_K |\tilde A|$ which is the sum of $O_K(1)$ arithmetic progressions. Since $A \subseteq \tilde A + B(0,\varepsilon) \subseteq P + B(0,\varepsilon)$, we have
$$ \mu(A) \leq \mu( P + B(0,\varepsilon) ) \lesssim_d |P| \varepsilon^d \sim_K |\tilde A| \varepsilon^d \sim_K \mu(A)$$ and the claim follows. \end{proof}
\begin{proposition}[Entropy version of Freiman's theorem]\label{frei-entropy} Let $d \geq 1$ and $\varepsilon > 0$. Let $A$ be a bounded non-empty subset of ${{\mathbf R}}^d$ such that ${\mathcal N}_\varepsilon(A + A) \leq K {\mathcal N}_\varepsilon(A)$ for some $K \geq 1$. Then there exists a set $P$ which is the sum of $O_{K,d}(1)$ arithmetic progressions in ${{\mathbf R}}^d$ such that $A \subseteq P + B(0,\varepsilon)$ and $|P| \sim_{K,d} {\mathcal N}_\varepsilon(A)$. \end{proposition}
\begin{remark} The commentary in Remark \ref{cfr} also applies in this setting. For instance, if $A \subseteq P + B(0,\varepsilon)$ and $|P| \sim_{K,d} {\mathcal N}_\varepsilon(A)$ and $P$ is the sum of $O_{K,d}(1)$ arithmetic progressions then it is easy to see that ${\mathcal N}_\varepsilon(A+A) \sim_{K,d} {\mathcal N}_\varepsilon(A)$. \end{remark}
\begin{proof} Again, we take $\tilde A := (A + B(0,\varepsilon/2)) \cap (\frac{\varepsilon}{10d} \cdot {{\mathbf Z}}^d)$. From Lemma \ref{entropy-lemma} (and the global reasonableness of ${{\mathbf R}}^d$) we
have $|\tilde A| \sim_d {\mathcal N}_\varepsilon(A)$ and $|\tilde A + \tilde A| \lesssim_d {\mathcal N}_\varepsilon(A+A)$, and thus
$|\tilde A + \tilde A| \sim_{K,d} |\tilde A|$. We now argue as in the proof of Proposition \ref{frei-cts}. \end{proof}
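The discretization $\tilde A := (A + {\mathbf{B}}(0,\varepsilon/2)) \cap (\frac{\varepsilon}{10d} \cdot {{\mathbf Z}}^d)$ used in both proofs is easy to visualize in one dimension. The following Python toy model (the helper \texttt{discretize}, the intervals, and the parameters are all ours, for illustration only) discretizes a union of intervals and checks that $|\tilde A|$ times the lattice spacing recovers the measure of $A$ up to an $O(\varepsilon)$ error.

```python
import math

def discretize(intervals, eps, d=1):
    """A 1-D toy of the proofs' discretization:
    A~ := (A + B(0, eps/2))  intersected with  (eps/(10 d)) Z^d."""
    step = eps / (10 * d)
    pts = set()
    for lo, hi in intervals:
        k_lo = math.ceil((lo - eps / 2) / step)
        k_hi = math.floor((hi + eps / 2) / step)
        pts.update(round(k * step, 12) for k in range(k_lo, k_hi + 1))
    return pts

A = [(0.0, 1.0), (2.0, 2.5)]   # a union of two intervals, total measure 1.5
eps = 0.01
tilde_A = discretize(A, eps)

# |A~| * (lattice spacing) recovers mu(A) up to an O(eps) error.
measure_estimate = len(tilde_A) * eps / 10
```

The count scales like $\mu(A)/\mathrm{step}$, which is the one-dimensional case of the comparison $|\tilde A|\, \varepsilon^d \sim_d \mu(A)$ used in both proofs.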
We now turn to the noncommutative setting. Here our understanding is only satisfactory for a few special noncommutative groups; in the general case it is not even clear what the correct statement of an inverse product set theorem should be, let alone how to prove it. We shall restrict our attention to the discrete setting, in other words to understanding those finite non-empty sets $A$ for which $|A \cdot A| = O(|A|)$ (or $|A \cdot A \cdot A| = O(|A|)$), or to classifying finite non-empty $O(1)$-approximate groups; in view of the preceding results it seems likely that the transferral of the discrete results to a continuous or metric entropy setting will not be too difficult.
Inverse product set theorems for groups of affine or projective mappings on the real or complex line or projective line, and hence for groups such as $SL_2({{\mathbf R}})$ or $SL_2({{\mathbf C}})$, were studied in \cite{el1,el2,el3,elk,elru}. A typical result here is that if $A \subset SL_2({{\mathbf C}})$
is a finite non-empty set such that $|A \cdot A^{-1}| = O(|A|)$, then $A$ is contained inside $O(1)$ left-cosets of an abelian subgroup of $SL_2({{\mathbf C}})$.
The case $G = SL_2({{\mathbf Z}}/p{{\mathbf Z}})$, with $p$ a large prime, was studied in \cite{Helf}. In particular it was shown that if
$A \subset G$ had size $p^\varepsilon \leq |A| \leq p^{-\varepsilon} |G|$ for some $\varepsilon > 0$, and $A$ was not contained in any proper subgroup of $G$, then one had the tripling estimate $|A \cdot A \cdot A| \geq p^{\delta} |A|$ for some $\delta = \delta(\varepsilon) > 0$. Thus the only sets of small tripling are those sets which are very small, very large, or are contained in a proper subgroup (e.g. a geometric progression containing the identity). Using the machinery in this section one can also obtain a classification of sets of small doubling, which we leave as an exercise to the reader.
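One can watch this tripling growth directly for a small generating set. The following Python sketch (the prime $p = 101$ and the two elementary unipotent generators are arbitrary illustrative choices; together they generate all of $SL_2({{\mathbf Z}}/p{{\mathbf Z}})$, so the set is not trapped in a proper subgroup) computes $|A \cdot A \cdot A|$ for a five-element symmetric set containing the identity, and verifies that all products stay in $SL_2$.

```python
def mat_mul(m, n, p):
    """Multiply 2x2 matrices over Z/pZ, stored as tuples (a, b, c, d)."""
    a, b, c, d = m
    e, f, g, h = n
    return ((a * e + b * g) % p, (a * f + b * h) % p,
            (c * e + d * g) % p, (c * f + d * h) % p)

def product_set(X, Y, p):
    return {mat_mul(x, y, p) for x in X for y in Y}

p = 101
I = (1, 0, 0, 1)
g = (1, 1, 0, 1)          # elementary unipotents; together they
h = (1, 0, 1, 1)          # generate all of SL_2(Z/pZ)
A = {I, g, (1, p - 1, 0, 1), h, (1, 0, p - 1, 1)}   # symmetric, contains I

A2 = product_set(A, A, p)
A3 = product_set(A2, A, p)          # A . A . A
ratio = len(A3) / len(A)            # tripling ratio, already well above 1
```

Even at word length three the set has grown by a factor comparable to $|A|$ itself, in the spirit of the $|A \cdot A \cdot A| \geq p^\delta |A|$ bound quoted above.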
Another interesting example arises in the work of Lindenstrauss \cite{linden}, in which $G$ is now the \emph{lamplighter group} ${{\mathbf Z}} \times ({{\mathbf Z}}/2{{\mathbf Z}})^{{\mathbf Z}}$ with group law $(i,a) \cdot (j,b) = (i+j, \sigma^j a + b)$, where $\sigma$ is the standard shift on $({{\mathbf Z}}/2{{\mathbf Z}})^{{\mathbf Z}}$. There it was shown that the group $G$ contains no F{\o}lner sequence of sets of small doubling constant, despite $G$ being amenable (and solvable).
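The lamplighter group law quoted above is easy to implement and sanity-check. In the following Python sketch a configuration in $({{\mathbf Z}}/2{{\mathbf Z}})^{{\mathbf Z}}$ is stored as the set of positions where the lamp is on (so only finitely supported configurations are represented in this toy model), and the particular elements tested are arbitrary.

```python
def lamp_mul(x, y):
    """(i, a) . (j, b) = (i + j, sigma^j a + b), where sigma^j shifts the
    configuration a by j; addition in (Z/2Z)^Z is symmetric difference."""
    i, a = x
    j, b = y
    return (i + j, frozenset(s + j for s in a) ^ b)

def lamp_inv(x):
    """(i, a)^{-1} = (-i, sigma^{-i} a)."""
    i, a = x
    return (-i, frozenset(s - i for s in a))

e = (0, frozenset())
x = (1, frozenset({0}))
y = (-2, frozenset({3}))
z = (5, frozenset({-1, 4}))

assoc = lamp_mul(lamp_mul(x, y), z) == lamp_mul(x, lamp_mul(y, z))
```

The checks below confirm associativity, the identity, and the inverse formula for these sample elements.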
The case of very small doubling, e.g. $|A \cdot A| \leq 2|A|$, was treated in \cite{kemperman}, \cite{bf}, \cite{hls} in the torsion-free non-commutative case. In this case one has $|A \cdot A| \geq 2|A|-1$, with equality only holding when $A$ is a geometric progression.
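The additive analogue of this extremal statement in ${{\mathbf Z}}$ is easy to verify experimentally: $|A+A| \geq 2|A|-1$, with equality exactly for arithmetic progressions. The Python sketch below (both eight-element sets are arbitrary illustrative choices) compares the extremal case with a generic one.

```python
def sumset(A):
    return {a + b for a in A for b in A}

ap = set(range(0, 50, 7))                 # arithmetic progression, |ap| = 8
generic = {0, 1, 3, 7, 12, 20, 30, 44}    # a less structured set, also size 8

eq_case = len(sumset(ap))        # hits the minimum 2|A| - 1 = 15
gen_case = len(sumset(generic))  # strictly larger
```

For the progression the sums again form a progression of length $2|A|-1$, while the unstructured set produces many more distinct sums.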
We were unable to say anything new about the inverse product set problem for general groups. However for discrete groups $G$ which have a normal subgroup $H$, it turns out that one can exploit the short exact sequence $$ \{1\} \to H \to G \to G/H \to \{1\}$$ to split the inverse product set problem for $G$ into the inverse product set problem for $H$ and $G/H$ separately, together with the problem\footnote{This problem seems to be somewhat difficult, however it does appear to be fractionally simpler than the original inverse product set problem on $G$, so the reduction is not entirely trivial. It is somewhat analogous to the reduction of the (open) ``polynomial Freiman-Ruzsa conjecture'' to a conjecture concerning approximate homomorphisms in \cite{green-survey}.} of classifying a certain type of ``approximate group homomorphism'' from an approximate subgroup of $G/H$ into $G$. To motivate matters, let us first see how \emph{genuine} subgroups of $G$ (as opposed to approximate groups) split under this short exact sequence.
\begin{lemma}[Splitting lemma, group case] Let $H$ be a normal subgroup of a group $G$, and let $A \subset G$. Let $\pi: G \to G/H$ be the canonical projection. Then the following are equivalent. \begin{itemize} \item[(i)] $A$ is a subgroup of $G$. \item[(ii)] There exists a subgroup $B$ of $H$, a subgroup $C$ of $G/H$, and a partial inverse $\phi: C \to G$ to $\pi$ (i.e. $\pi(\phi(x)) = x$ for all $x \in C$) with the property \begin{equation}\label{phib}
\phi(x) B = B \phi(x) \hbox{ for all } x \in C
\end{equation} (thus $\phi$ takes values in the normaliser of $B$) and the quotiented homomorphism property \begin{equation}\label{phib-2} \phi(xy) \in \phi(x) \phi(y) B \hbox{ for all } x,y \in C \end{equation} and such that $A$ has the representation \begin{equation}\label{adef}
A = \bigcup_{x \in C} \phi(x) B.
\end{equation}
In particular, $|A| = |C| |B|$. \end{itemize} \end{lemma}
\begin{remark} One way to view this lemma is to think of $G$ as a principal $H$-bundle over $G/H$. Then $A$ is a principal $B$-bundle over $C$ that takes values in the normaliser of the structure group $B$, and which collapses to a group homomorphism from $C$ to $G$ when quotiented out by $B$. \end{remark}
\begin{proof} Let us first verify that (ii) implies (i). It is easy to verify from \eqref{phib}, \eqref{phib-2} that $\phi(1) \in B$ and $\phi(x^{-1}) \in \phi(x) B$ for all $x \in C$; from these facts and \eqref{phib}, \eqref{phib-2}, \eqref{adef} we quickly see that $A$ contains the identity, that $A^{-1} = A$, and that $A \cdot A = A$; in other words, $A$ is a subgroup of $G$.
Now let us verify that (i) implies (ii). Let $C := \pi(A)$ and $B := A \cap H$; it is easy to see that $B$ and $C$ are subgroups of $H$ and $G/H$ respectively. Let $\phi: C \to A$ be an arbitrary partial inverse to the map $\pi: A \to C$ (which exists thanks to the axiom of choice), then we have \eqref{adef}. Since $A \cdot A = A$ we conclude \eqref{phib-2}. To verify \eqref{phib}, observe that $\phi(x) B \phi(x)^{-1}$ lies in $A \cdot A \cdot A^{-1} = A$, but also lies in the normal subgroup $H$, and must therefore lie in $A \cap H = B$. This shows the inclusion $\phi(x) B \subseteq B \phi(x)$, and the other inclusion is proven similarly. \end{proof}
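The construction in the proof can be checked mechanically in a small example. The following Python sketch (the choices $G = S_3$, $H = A_3$, $A = G$, and the particular section $\phi$ are all illustrative) carries out the steps: $B = A \cap H$, $C = \pi(A)$ (realized via the sign homomorphism), a choice function $\phi$, and the representation \eqref{adef}.

```python
from itertools import permutations

def compose(p, q):
    """(p . q)(i) = p(q(i)) for permutations stored as tuples."""
    return tuple(p[q[i]] for i in range(len(q)))

def sign(p):
    s, q = 1, list(p)
    for i in range(len(q)):          # sort by transpositions, tracking parity
        while q[i] != i:
            j = q[i]
            q[i], q[j] = q[j], q[i]
            s = -s
    return s

G = set(permutations(range(3)))      # G = S_3
H = {p for p in G if sign(p) == 1}   # H = A_3, a normal subgroup
A = G                                 # take the subgroup A = G for illustration

B = A & H                             # B = A intersected with H
C = {sign(p) for p in A}              # C = pi(A) inside G/H = {+1, -1}
phi = {c: min(p for p in A if sign(p) == c) for c in C}   # a section of pi

rebuilt = {compose(phi[c], b) for c in C for b in B}      # union of phi(x) B
```

The assertions below verify \eqref{adef}, the count $|A| = |C||B|$, and the normalization property \eqref{phib} ($\phi(x) B = B \phi(x)$) for this example.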
We now present an analogue of the above lemma for approximate groups.
\begin{lemma}[Splitting lemma, approximate group case]\label{split-approx} Let $H$ be a normal subgroup of a discrete multiplicative group $G$, and let $\pi: G \to G/H$ be the canonical homomorphism. Note that $H$ and $G/H$ are also discrete multiplicative groups. Let $A \subset G$, and let $K \geq 1$. Then the following three statements are equivalent, in the sense that if one of them holds for one choice of implied constant in the $O()$ and $\lesssim$ notation, then the other statements hold for a different choice of implied constant in the $O()$ and $\lesssim$ notation: \begin{itemize}
\item[(i)] We have $|A \cdot A \cdot A| \lesssim K^{O(1)} |A|$.
\item[(ii)] There exists a $O(K^{O(1)})$-approximate group $\tilde A$ of size $|\tilde A| \sim K^{O(1)} |A|$ which contains $A$. \item[(iii)] There exist $O(K^{O(1)})$-approximate groups $B_1 \subseteq B_2 \subseteq B_3 \subset H$ and $C \subset G/H$ with \begin{equation}\label{b-compare}
|B_3| \lesssim K^{O(1)} |B_1|, \end{equation} together with a partial inverse $\phi: C^3 \to G$ to $\pi$, with $\phi(1) = 1$ and $\phi(x^{-1}) = \phi(x)^{-1}$ for all $x \in C^3$, such that \begin{equation}\label{phib-approx}
\phi(x) B_i \subseteq B_{i+1} \phi(x); \quad
B_i \phi(x) \subseteq \phi(x) B_{i+1} \quad
\hbox{ for all } x \in C, i = 1,2 \end{equation} and \begin{equation}\label{phib-2-approx} \phi(x) \phi(y) \phi(z) \in \phi(xyz) B_3 \hbox{ for all } x,y,z \in C \end{equation} with the containment \begin{equation}\label{adef-approx}
A \subseteq \bigcup_{x \in C} \phi(x) B_1 \end{equation} and the cardinality bound \begin{equation}\label{acard-lower}
|A| \gtrsim K^{-O(1)} |B_1| |C|. \end{equation} \end{itemize} \end{lemma}
\begin{remark} One can extend this lemma to the continuous setting provided that the topology of $H$ is well-behaved (e.g. the projection map $\pi$ should be continuous and open, and in particular $H$ should be closed) and one can ``disintegrate'' the measure $\mu$ on $G$ into the measures on $H$-cosets of $G$, integrated against the measures on $G/H$; this can for instance be done if $G$ is a finite-dimensional Lie group and $H$ is a closed Lie subgroup. We omit the details. There is likely to also be an entropy analogue of this lemma under reasonable assumptions on the metric but we will not describe these here. The approximate homomorphism $\phi$ is only defined on $C^3$, but an inspection of the proof below shows that one could in fact extend it to $C^n$ for any fixed $n$, and have a sequence $B_1 \subseteq \ldots \subseteq B_n$ of nested approximate groups of comparable size, with suitable modifications to \eqref{b-compare}, \eqref{phib-approx}, \eqref{phib-2-approx}; again, we omit the details. \end{remark}
\begin{proof} The equivalence of (i) and (ii) follows from Theorem \ref{tripling-classify}. Now let us see that (iii) implies (i). From \eqref{adef-approx} we have $$ A \cdot A \cdot A \subseteq \bigcup_{x,y,z \in C} \phi(x) B_1 \phi(y) B_1 \phi(z) B_1.$$ From repeated application of \eqref{phib-approx} we have $$ \phi(x) B_1 \phi(y) B_1 \phi(z) B_1 \subseteq \phi(x) \phi(y) \phi(z) B_3 B_2 B_1$$ and hence by \eqref{phib-2-approx} $$ A \cdot A \cdot A \subseteq \bigcup_{x,y,z \in C} \phi(xyz) B_3 B_3 B_2 B_1 \subseteq \bigcup_{w \in C^3} \phi(w) B_3^4$$
and thus $|A^3| \leq |C^3| |B_3^4|$. The claim now follows from \eqref{acard-lower}, \eqref{b-compare}, and the hypothesis that $C$ and $B_3$ are $O(K^{O(1)})$-approximate groups.
It remains to show that (ii) implies (iii). By replacing $A$ by $\tilde A$ if necessary we may assume that $A$ is itself a $O(K^{O(1)})$-approximate group; in particular, $A$ is symmetric, contains $1$, and (by Theorem \ref{tripling-classify})
we have $|A^n| \lesssim_n K^{O_n(1)} |A|$ for all $n \geq 1$. Since $\pi$ is a homomorphism, we easily see that $\pi(A)$ is also a $O(K^{O(1)})$-approximate group. Thus we shall set $C := \pi(A)$. Now we construct the $B_i$ by the formulae $$ B_1 := (A^2 \cap H)^3; \quad B_2 := (A^8 \cap H)^3; \quad B_3 := (A^{26} \cap H)^3.$$
Observe that if $a, a' \in A$ lie in the same fiber of $\pi$ (i.e. in the same coset of $H$), then $a' \in (A^2 \cap H) a$. Since $A$ intersects exactly $|C|$ fibers of $\pi$, we conclude that $|A| \leq |C| |A^2 \cap H|$. On the other hand, observe that if $a \in A$, then the set $(A^{2n} \cap H) a$ lies in $A^{2n+1}$, and is also contained in the same fiber of $\pi$ as $a$. This implies that $|A^{2n+1}| \geq |C| |A^{2n} \cap H|$ for all $n \geq 1$. Since $|A^{2n+1}| \lesssim_n K^{O_n(1)} |A|$, we conclude that $|A^{2n} \cap H| \sim_n K^{O_n(1)} |A^2 \cap H|$ for all $n \geq 1$. Since $(A^{2n} \cap H)^3 \subseteq A^{6n} \cap H$, we conclude that $A^{2n} \cap H$ has a tripling constant of $O_n(K^{O_n(1)})$. Applying Corollary \ref{tripling-better} (observing that $A^{2n} \cap H$ is symmetric and contains $1$) we thus see that $B_1, B_2, B_3$ are all $O(K^{O(1)})$-approximate groups. Note that the estimates established here also give \eqref{acard-lower} and \eqref{b-compare}.
Since $C = \pi(A)$, we have $C^3 = \pi(A^3)$. Using the axiom of choice (which is actually unnecessary here, since $A^3$ and $C^3$ are finite sets), we may select a partial inverse $\phi: C^3 \to A^3$ to $\pi$, which takes values in $A$ on $C$; since $C$ and $A$ are both symmetric and contain the identity, there is no difficulty requiring $\phi(1) = 1$ and $\phi(x^{-1}) = \phi(x)^{-1}$ for all $x \in C^3$. If $a$ and $\phi(x)$ lie in the same fiber of $\pi$, then as mentioned before we have $a \in \phi(x) (A^2 \cap H)$, which implies \eqref{adef-approx}.
Now observe that if $x \in C$, then $\phi(x) (A^{2n} \cap H)^3 \phi(x)^{-1}$ lies in $A^{6n+2} \cap H$, and hence $\phi(x) B_i \phi(x)^{-1}$ lies in $B_{i+1}$ for $i=1,2$. This proves the inclusions $\phi(x) B_i \subseteq B_{i+1} \phi(x)$; the reverse inclusions in \eqref{phib-approx} are proven similarly.
Finally, for $x,y,z \in C$, we observe that $\phi(x) \phi(y) \phi(z) \phi(xyz)^{-1} \in A^4 \cap H$, and so \eqref{phib-2-approx} follows. This concludes the deduction of (iii) from (ii). \end{proof}
In principle, this splitting lemma should allow one to deduce inverse product set estimates for nilpotent (and perhaps even solvable) groups from the abelian theory. To do this in full generality appears to be rather difficult however, and we shall only demonstrate the situation with a particularly simple nilpotent group, namely a Heisenberg group.
\begin{definition}[Heisenberg group] Let $Z, W$ be additive abelian groups, and let $\{,\}: Z \times Z \to W$ be an antisymmetric mapping (thus $\{x,y\} = -\{y,x\}$) which is a homomorphism in each of the two variables separately (thus $\{ x+y, z \} = \{x,z\} + \{y,z\}$ and $\{x,y+z\} = \{x,y\} + \{x,z\}$). We define the \emph{Heisenberg group} associated to this antisymmetric mapping to be the set $G := Z \times W$ endowed with the group law $$ (z, w) \cdot (z',w') := (z + z', w + w' + \{ z, z' \} ).$$ \end{definition}
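To make the definition concrete, here is a minimal numerical sketch (our own illustration, not part of the paper): take $Z = \mathbb{Z}^2$, $W = \mathbb{Z}$, and $\{x,y\} = x_1 y_2 - x_2 y_1$, the standard antisymmetric biadditive form, and check the group axioms of the resulting Heisenberg group on a few elements.

```python
# Illustrative toy model (our choice, not from the text): Z = Z^2, W = Z,
# with {x, y} = x1*y2 - x2*y1, an antisymmetric biadditive form.
def bracket(x, y):
    return x[0] * y[1] - x[1] * y[0]

def mul(g, h):
    # group law (z, w) . (z', w') = (z + z', w + w' + {z, z'})
    (z, w), (zp, wp) = g, h
    return ((z[0] + zp[0], z[1] + zp[1]), w + wp + bracket(z, zp))

def inv(g):
    # inverse (z, w)^{-1} = (-z, -w), valid here since {z, z} = 0
    (z, w) = g
    return ((-z[0], -z[1]), -w)

IDENTITY = ((0, 0), 0)

# Spot-check identity, inverses and associativity on a few elements.
elements = [((1, 0), 0), ((0, 1), 0), ((2, -1), 3)]
for a in elements:
    assert mul(a, IDENTITY) == mul(IDENTITY, a) == a
    assert mul(a, inv(a)) == mul(inv(a), a) == IDENTITY
    for b in elements:
        for c in elements:
            assert mul(mul(a, b), c) == mul(a, mul(b, c))
```

Note that the group is genuinely noncommutative: the two products of $((1,0),0)$ and $((0,1),0)$ differ in the $W$ coordinate.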
One easily verifies that $G$ is a discrete multiplicative group with the ``vertical'' group $\{0\} \times W$ (which we identify with $W$) as a normal subgroup (indeed, it lies in the centre of $G$) and with identity $1_G = (0,0)$ and inverse $(z,w)^{-1} = (-z,-w)$; the quotient $G/W$ is canonically identified with the ``horizontal'' group $Z$. Thus we have the short exact sequence $$ 0 \to W \to G \to Z \to 0.$$ Since $W$ and $Z$ are abelian, we thus see that $G$ is a $2$-step nilpotent group.
We write $Z \times W$ for the additive group which is the product of $Z$ and $W$, and $\iota: G \to Z \times W$ for the identity map from $G$ to $Z \times W$. We caution that while $\iota$ is a bijection, it is \emph{not} a group homomorphism from the multiplicative group $G$ to the additive group $Z \times W$. Nevertheless, $\iota$ does relate the subgroups of $G$ with the subgroups of $Z \times W$ (except for a ``$2$-torsion'' issue) as follows. Let $\pi: Z \times W \to Z$ be the canonical projection, and for any $A, B \subset Z$ let $\{A,B\} \subset W \equiv \{0\} \times W$ denote the set $\{A,B\} := \{ \{a,b\}: a \in A, b \in B \}$. We also let $\langle \{A,B\} \rangle$ denote the subgroup of $\{0\} \times W$ generated by $\{A,B\}$. Finally, given any $A \subset Z \times W$ we write $2 \cdot A := \{ 2x: x \in A \} = \{ (z+z, w+w): (z,w) \in A \}$.
\begin{proposition}[Subgroups of the Heisenberg group] Let $G$ be a Heisenberg group arising from an antisymmetric mapping $\{,\}: Z \times Z \to W$, and let $A \subset G$ be a multiplicative subgroup of $G$. Then there exists an additive subgroup $\tilde A$ of $Z \times W$ such that \begin{equation}\label{binclude}
2 \cdot(\tilde A + \langle \{ \pi(\tilde A), \pi(\tilde A) \} \rangle ) \subseteq \iota(A) \subseteq \tilde A + \langle \{ \pi(\tilde A), \pi(\tilde A) \} \rangle
\end{equation} \end{proposition}
\begin{remark} In the converse direction, it is easy to verify that for any additive subgroup $\tilde A$ of $Z \times W$, the set $\iota^{-1}( \tilde A + \langle \{ \pi(\tilde A), \pi(\tilde A) \} \rangle )$ is a multiplicative subgroup of $G$. Thus the above proposition classifies multiplicative subgroups of $G$ in terms of additive subgroups of $Z \times W$, except for the ``$2$-torsion'' issue of having to distinguish the additive group $A' := \tilde A + \langle \{ \pi(\tilde A), \pi(\tilde A) \} \rangle$ from its dilate $2 \cdot A'$. If $Z \times W$ is finitely generated, then the quotient of the additive group $A'$ by $2 \cdot A'$ will be bounded, and so in some sense the above classification of subgroups of $G$ only ``loses'' a bounded amount of information. \end{remark}
\begin{proof} First observe that if $(z,w)$ and $(z',w')$ are in $A$, then $$ (z,w) \cdot (z',w') \cdot (z,w)^{-1} \cdot (z',w')^{-1} = (0, 2 \{z,z'\} ).$$ Thus if we set $C := \pi(\iota(A)) \subset Z$, we see that $2 \cdot \{C,C\} \subset A$, and hence (since the Heisenberg group law is additive on $\{0\} \times W$) we also have $2 \cdot \langle \{C,C\} \rangle \subset A$. If we now set $\tilde A := \iota(A) + \langle \{C,C\} \rangle$, we easily verify that $\tilde A$ is indeed an additive group and obeys the inclusions \eqref{binclude}. \end{proof}
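The commutator identity at the heart of the proof can be checked numerically in the toy model above (our illustration, assuming $Z = \mathbb{Z}^2$, $W = \mathbb{Z}$, $\{x,y\} = x_1 y_2 - x_2 y_1$):

```python
# Illustrative numerical check of the commutator identity
# (z,w)(z',w')(z,w)^{-1}(z',w')^{-1} = (0, 2{z,z'}) used in the proof.
def bracket(x, y):
    return x[0] * y[1] - x[1] * y[0]

def mul(g, h):
    (z, w), (zp, wp) = g, h
    return ((z[0] + zp[0], z[1] + zp[1]), w + wp + bracket(z, zp))

def inv(g):
    (z, w) = g
    return ((-z[0], -z[1]), -w)

def commutator(g, h):
    return mul(mul(g, h), mul(inv(g), inv(h)))

for g in [((1, 2), 5), ((0, 1), -4)]:
    for h in [((3, -1), 7), ((2, 2), 0)]:
        assert commutator(g, h) == ((0, 0), 2 * bracket(g[0], h[0]))
```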
We now extend this proposition to approximate groups, though to deal with the $2$-torsion issue we shall need to make an additional assumption on the ``vertical'' group $W$.
\begin{theorem}[Approximate subgroups of the Heisenberg group]\label{heisen}
Let $G$ be a Heisenberg group arising from an antisymmetric mapping $\{,\}: Z \times Z \to W$ such that $W$ has no $2$-torsion (thus if $w \in W$ and $2 w = 0$, then $w=0$), and let $A \subset G$ be a finite nonempty subset of $G$ such that $|A \cdot A \cdot A| \leq K |A|$ for some $K \geq 1$. Then there exists a $O(K^{O(1)})$-approximate additive subgroup $\tilde A$ of $Z \times W$, such that \begin{equation}\label{pi-include} \{ \pi( \tilde A ), \pi(\tilde A) \} \subseteq \tilde A \end{equation} and $$ \iota(A) \subseteq \tilde A.$$ Furthermore we have
$$ |A| \gtrsim K^{-O(1)} |\tilde A|.$$ \end{theorem}
\begin{remark} In the converse direction, if $\tilde A$ obeys all the above properties, then a comparison of the multiplicative group law for $G$ and the additive group law for $Z \times W$ reveals that $$ \iota(A \cdot A \cdot A) \subseteq 3 \tilde A + 3 \{ \pi(\tilde A), \pi(\tilde A) \} \subseteq 6 \tilde A$$ and hence by the approximate group properties of $\tilde A$
$$ |A \cdot A \cdot A| \leq |6 \tilde A| \lesssim K^{O(1)} |\tilde A| \lesssim K^{O(1)} |A|.$$ Thus we have a sharp characterisation of the sets of small tripling in the Heisenberg group $G$, in the case when no $2$-torsion is present in the centre. In principle, one can make this characterisation more explicit by using a version of Freiman's theorem (such as the one in \cite{gr-4}) in the abelian group $Z \times W$ to classify $\tilde A$, and then to work with the concrete description of $\tilde A$ given by that theorem to determine which approximate groups $\tilde A$ obey the constraint \eqref{pi-include}. Of course once one characterises sets of small tripling, one can use the results of earlier sections to characterise sets of small doubling, or with small partial product set, etc. These fully explicit descriptions are however rather lengthy to state and we will leave them to the reader. \end{remark}
\begin{proof} We apply Lemma \ref{split-approx} with $H := \{0\} \times W \equiv W$ to obtain $O(K^{O(1)})$-approximate groups\footnote{It is a somewhat unfortunate circumstance that we will be regarding $Z$ and $W$ both as additive groups and as multiplicative groups (with the same group operation); thus for instance if $B \subset W$ then $B+B = B \cdot B$. We hope the reader will not be unduly confused by this.}
$B_1 \subseteq B_2 \subseteq B_3 \subset W$ and $C \subset Z$, and a partial inverse $\phi: C^3 \to G$ to the projection map $\pi: G \to Z$ with $\phi(0) = (0,0)$ and $\phi(-x) = -\phi(x)$ that obeys \eqref{b-compare}, \eqref{phib-2-approx}, \eqref{adef-approx}, \eqref{acard-lower}. (Because $W$ is abelian, the containments \eqref{phib-approx} become trivial and will not be needed here.) Since $\phi$ is a partial inverse to $\pi$, we may
write $\phi(z) = (z, f(z))$ for some odd function $f: C^3 \to W$. Thus for instance \eqref{adef-approx} becomes the assertion that
$w \in f(z) + B_1$ for all $(z,w) \in A$. From \eqref{phib-2-approx} (setting the third element of $C$ to be the identity) we see that
for all $z_1,z_2 \in C$ we have $$ (z_1, f(z_1)) \cdot (z_2, f(z_2)) \in (z_1+z_2, f(z_1+z_2)) \cdot B_3$$ and hence (expanding out the group multiplication law in coordinates) $$ f(z_1) + f(z_2) + \{ z_1, z_2 \} \in f(z_1+z_2) + B_3.$$ Swapping $z_1$ and $z_2$ and then subtracting, we see that $$ 2 \cdot \{ z_1, z_2 \} \in B_3 - B_3 \hbox{ for all } z_1, z_2 \in C.$$ Let $\tilde B := (B_3 - B_3) \cap (2 \cdot W)$. The set $2 \cdot W$ is a subgroup of the abelian group $W$, and so $\tilde B + \tilde B + \tilde B \subseteq (3B_3 - 3B_3) \cap (2 \cdot W)$. Since $B_3$ is a $O(K^{O(1)})$-approximate group, we can cover $3B_3 - 3B_3$ by $O(K^{O(1)})$ translates of $B_3$, and hence $\tilde B + \tilde B + \tilde B$ can be covered by $O(K^{O(1)})$ translates of $B_3$, intersected with $2 \cdot W$. Any one of these translates is itself contained in a translate of $(B_3-B_3) \cap (2 \cdot W)$, and so we see that
$|3 \tilde B| \lesssim K^{O(1)} |\tilde B|$. Applying Lemma \ref{tripling-better} we see that $3\tilde B$ is a $O(K^{O(1)})$-approximate group. If we then let $$ B' := \{ b \in W: 2 \cdot b \in 3 \tilde B \}$$ we conclude (since the map $b \mapsto 2 \cdot b$ is a group isomorphism between $W$ and $2 \cdot W$) that $B'$ is also a $O(K^{O(1)})$-approximate group. By construction we have \begin{equation}\label{fuzz}
\{ C, C \} \subseteq B'. \end{equation} Next, observe that
$$ |B' + 3\tilde B| = |B' + 2 \cdot B'| \leq |3 B'| \lesssim K^{O(1)} |B'| = K^{O(1)} |3\tilde B|$$ and hence by Lemma \ref{cover}, we can cover $B'$ by $O(K^{O(1)})$ translates of $3\tilde B - 3\tilde B$, which is contained in $12 B_3$. Since $B_3$ is itself a $O(K^{O(1)})$-approximate group, we can conclude that \begin{equation}\label{namby}
|nB' + mB_3| \lesssim_{n,m} K^{O_{n,m}(1)} |B_3| \end{equation} for any $n,m \geq 1$. To use this, define the set $A' \subset Z \times W$ by $$ A' := \{ (z, w): z \in C, w \in f(z) + 9B' + B_1 \}.$$ From \eqref{adef-approx} we see that $A'$ contains $A$. Also we have \begin{align*} 3\iota(A') &= \bigcup_{z_1,z_2,z_3 \in C} (z_1,f(z_1) + 9B' + B_1) + (z_2,f(z_2) + 9B' + B_1) + (z_3,f(z_3) + 9B' + B_1) \\ &\subseteq \bigcup_{z_1,z_2,z_3 \in C} (z_1+z_2+z_3, f(z_1)+f(z_2)+f(z_3)+ 27B' + 3B_3). \end{align*} On the other hand, from \eqref{phib-2-approx} we have $$ f(z_1)+f(z_2)+f(z_3) + \{ z_1, z_2 \} + \{ z_1, z_3 \} + \{ z_2, z_3 \} \in f(z_1+z_2+z_3) + B_3$$ and hence by \eqref{fuzz} $$ f(z_1)+f(z_2)+f(z_3) \in f(z_1+z_2+z_3) + B_3 + 3B'.$$ Thus we have $$ 3 \iota(A') \subseteq \bigcup_{z_1,z_2,z_3 \in C} (z_1+z_2+z_3, f(z_1+z_2+z_3)+ 30B' + 4B_3)$$ and hence
$$ |3 \iota(A')| \leq |C^3| |30B'+4B_3|.$$ Since $C$ is a $O(K^{O(1)})$-approximate group, we thus see from \eqref{namby} and \eqref{acard-lower} that
$$ |3\iota(A')| \lesssim K^{O(1)} |A|.$$
Since $|\iota(A')| \geq |A|$, we thus see from Lemma \ref{tripling-better} that the set $\tilde A := 3 (\iota(A') \cup 0 \cup - \iota(A'))$ is a $O(K^{O(1)})$-approximate group with
$$ |\tilde A| \sim K^{O(1)} |A|.$$ Finally, we see from construction that $\pi(\tilde A) \subseteq 3C$, and hence $\{ \pi(\tilde A), \pi(\tilde A) \} \subseteq 9 \{C,C\} \subseteq 9B'$ by \eqref{fuzz}; since $f(0) = 0$, we obtain \eqref{pi-include} as desired. \end{proof}
\end{document}
\begin{document}
\title{\Large \bf Alternating Path and Coloured Clustering}
\begin{abstract} In the {\sc Coloured Clustering} problem, we wish to colour vertices of an edge coloured graph to produce as many stable edges as possible, i.e., edges with the same colour as their ends. In this paper, we reveal that the problem is in fact a maximum subgraph problem concerning monochromatic subgraphs and alternating paths, and demonstrate the usefulness of such connection in studying these problems.
We obtain a faster algorithm to solve the problem for edge-bicoloured graphs by reducing the problem to a minimum cut problem. On the other hand, we push the NP-completeness of the problem to edge-tricoloured planar bipartite graphs of maximum degree four. Furthermore, we also give FPT algorithms for the problem when we take the numbers of stable edges and unstable edges, respectively, as parameters.\\ \end{abstract}
\section{Introduction}
The following {\sc Coloured Clustering} problem has been proposed recently by Angel et al.~\cite{Angel} in connection with the classical correlation clustering problem~\cite{Bansal}: Compute a vertex colouring of an edge-coloured graph $G$ to produce as many {\em stable edges} as possible, i.e., edges with the same colour as their ends. As observed by Ageev and Kononov~\cite{Ageev}, the problem contains the classical maximum matching problem as a special case as the two problems coincide when all edges have different colours in $G$.
In this paper we will reveal that {\sc Coloured Clustering}, despite its definition by vertex partition, is in fact the following maximum subgraph problem in disguise: find a largest subgraph where every vertex has one colour for its incident edges ({\sc Vertex-Monochromatic Subgraph}), or, equivalently, delete fewest edges to destroy all alternating paths ({\sc Alternating Path Removal}). These multiple points of view give us a better understanding of these problems, and are quite useful in studying them.
We are mainly interested in algorithmic issues of {\sc Coloured Clustering}, and will consider polynomial-time algorithms, NP-completeness, and also FPT algorithms for the two natural parameters from the subgraph point of view: numbers of edges inside ({\em stable edges}) and outside ({\em unstable edges}), respectively, the solution subgraph.
\subsection{Main results}
We now summarize our main results for {\sc Coloured Clustering}, where $m$ and $n$, respectively, are numbers of edges and vertices in $G$. These results can be translated directly into corresponding results for {\sc Vertex-Monochromatic Subgraph} and {\sc Alternating Path Removal}.
\begin{itemize} \item We obtain an $O(m^{3/2}\log n)$-time algorithm for edge-bicoloured graphs $G$ by a reduction to the classical minimum cut problem, which improves the $O(m^{3/2}n)$-time algorithm of Angel et al.~\cite{Angel} based on independent sets in bipartite graphs. We also give linear-time algorithms for the special case when $G$ is a complete graph (see \S4).
\item We push the NP-completeness of the problem to edge-tricoloured planar bipartite graphs of maximum degree four (see \S5).
\item We derive FPT algorithms for the problem when we take the numbers of stable edges and unstable edges, respectively, as parameter $k$, which is uncommon among problems parameterized in this way. Furthermore, we obtain a kernel with at most $4k$ vertices and $2k^2+k$ edges for the latter problem (see \S6). \end{itemize}
\subsection{Related work}
Both monochromatic subgraphs and alternating paths are at least half-century old, and there is a huge number of papers in the literature dealing with them graph theoretically~\cite{Bang,Kano}. However, we are not aware of any work on these two subjects that is directly related to the algorithmic problems we study in this paper.
In the literature, papers by Angel et al.~\cite{Angel} and Ageev and Kononov~\cite{Ageev} seem to be the only work that directly study {\sc Coloured Clustering}. Angel et al. obtain an LP-based $1/e^2$-approximation algorithm for the problem in general, which is improved to a 7/23-approximation algorithm by Ageev and Kononov. Angel et al. also give a polynomial-time algorithm for the problem on edge-bicoloured graphs by a reduction to the maximum independent set problem on bipartite graphs, but show the NP-completeness of the problem for edge-tricoloured bipartite graphs.
\section{Definitions}
An {\em edge-coloured graph} $G$ is a simple graph where each edge $e$ has a colour $\psi(e) \in \{1, \dots, t\}$ for some positive integer $t$. We say that $G$ is {\em edge-bicoloured} if $t = 2$, and {\em edge-tricoloured} if $t =3$. Unless specified otherwise, we use $m$ and $n$, respectively, for the numbers of edges and vertices of $G$.
A vertex $v$ is {\em colourful} if its incident edges have at least two different colours, and {\em monochromatic} otherwise. A subgraph $H$ of $G$ is {\em vertex-monochromatic} if all vertices in $H$ are monochromatic vertices of $H$, and {\em edge-monochromatic} if all edges in $H$ have the same colour.
A {\em conflict pair} is a pair of adjacent edges of different colours, and an {\em alternating path} is a simple path where every pair of consecutive edges forms a conflict pair. The {\em edge-conflict graph} of $G$, denoted $X(G)$, is an uncoloured graph where each vertex represents an edge of $G$ and each edge corresponds to a conflict pair in $G$.
A {\em vertex colouring} $f$ of $G$ assigns to each vertex $v$ of $G$ a colour $f(v) \in \{1, \dots, t\}$ for some positive integer $t$. For a vertex colouring $f$ of $G$, an edge $uv$ is {\em stable} if its colour $\psi(uv) = f(u) = f(v)$, and {\em unstable} otherwise.
Angel et al.~\cite{Angel} have recently proposed the following problem, which is in fact, as we will see shortly, the problem of finding the largest vertex-monochromatic subgraph in $G$ in disguise (see \refl{equal}).
\begin{quote} {\sc Coloured Clustering} \\ {\sc Input}: Edge coloured graph $G$ and positive integer $k$. \\ {\sc Question}: Is there a vertex colouring of $G$ that produces at least $k$ stable edges? \end{quote}
The following problem is concerned with purging conflict-pairs (equivalently, alternating paths) by edge deletion, and is the complementary problem of {\sc Coloured Clustering} (see \refc{same}).
\begin{quote} {\sc Conflict-Pair Removal} \\ {\sc Input}: Edge coloured graph $G$ and positive integer $k$. \\ {\sc Question}: Does $G$ contain at most $k$ edges $E'$ such that $G - E'$ contains no conflict pair? \end{quote}
\section{Basic properties}
Although {\sc Coloured Clustering} is defined by vertex partition (i.e., vertex colouring), it is in fact a maximum subgraph problem in disguise. To see this, we first observe the following equivalent properties for edge-coloured graphs.
\lem{equal}{The following statements are equivalent for any edge-coloured graph $G$:\\ {\rm (a).} $G$ is vertex-monochromatic.\\ {\rm (b).} Every component of $G$ is edge-monochromatic.\\ {\rm (c).} $G$ has no alternating path. \\ {\rm (d).} $G$ has no conflict pair. } \pf{ The equivalence between (a) and (b) is obvious, and so is the equivalence between (c) and (d) as a conflict pair is itself an alternating path. Furthermore, it is again obvious that $G$ contains a conflict pair if and only if $G$ has a colourful vertex, and therefore (a) and (d) are equivalent. It follows that the four statements are indeed equivalent. }
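The equivalence (a) $\Leftrightarrow$ (d) is easy to confirm mechanically on small examples; the following sketch (our own illustration; the dict encoding $\{$edge: colour$\}$ with \texttt{frozenset} edges is our convention, not the paper's) checks both directions:

```python
# Illustrative check of (a) <=> (d): an edge-coloured graph is
# vertex-monochromatic exactly when it contains no conflict pair.
from itertools import combinations

def vertex_monochromatic(graph):                 # statement (a)
    vertices = set().union(*graph)
    return all(len({c for e, c in graph.items() if v in e}) <= 1
               for v in vertices)

def has_conflict_pair(graph):                    # negation of statement (d)
    return any(e & f and graph[e] != graph[f]
               for e, f in combinations(graph, 2))

# An alternating P3 violates both (a) and (d); two monochromatic
# components satisfy both.
alternating = {frozenset({1, 2}): 'red', frozenset({2, 3}): 'blue'}
monochrome_parts = {frozenset({1, 2}): 'red', frozenset({3, 4}): 'blue'}
assert not vertex_monochromatic(alternating) and has_conflict_pair(alternating)
assert vertex_monochromatic(monochrome_parts)
assert not has_conflict_pair(monochrome_parts)
```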
Observe that for any vertex colouring of $G$, the subgraph formed by stable edges is vertex-monochromatic, and hence {\sc Coloured Clustering} is actually equivalent to finding a largest vertex-monochromatic subgraph, which in turn is equivalent to deleting fewest edges to destroy all conflict pairs. This gives us the following complementary relation between {\sc Coloured Clustering} and {\sc Conflict-Pair Removal}.
\crl{same}{ There is a vertex colouring for $G$ that produces at least $k$ stable edges if and only if $G$ contains at most $m-k$ edges $E'$ such that $G-E'$ has no conflict pair. }
The bilateral relation between {\sc Coloured Clustering} and {\sc Conflict-Pair Removal} is akin to that between the classical {\sc Independent Set} and {\sc Vertex Cover}. In fact, the former two problems become exactly the latter two in the edge-conflict graph $X(G)$ of $G$ (see \reft{conflict-graph}).
It is very useful to view {\sc Coloured Clustering} as an edge deletion problem, instead of a vertex partition problem, which often makes things easier. For instance, it becomes straightforward to obtain the following result of Angel et al.~\cite{Angel} for $X(G)$.
\thm{conflict-graph}{{\rm [Angel et al.\cite{Angel}]} An edge-coloured graph $G$ admits a vertex colouring that produces at least $k$ stable edges if and only if the edge-conflict graph $X(G)$ of $G$ has an independent set of size at least $k$. } \pf{ By \refc{same}, the former statement is equivalent to deleting at most $m-k$ edges to obtain a graph without conflict pairs, which is the same as saying that $X(G)$ has a vertex cover of size at most $m-k$, and hence that $X(G)$ has an independent set of size at least $k$. }
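The theorem can be confirmed by brute force on a small instance; the sketch below (our own illustration, with our dict-based graph encoding) builds $X(G)$ for an alternating path and compares the two optima directly:

```python
# Illustrative sketch: build the edge-conflict graph X(G) of a tiny
# edge-coloured graph and confirm the theorem by exhaustive search.
from itertools import combinations, product

G = {frozenset({1, 2}): 'r', frozenset({2, 3}): 'b', frozenset({3, 4}): 'r'}

# Vertices of X(G) are edges of G; edges of X(G) are conflict pairs.
XG_nodes = list(G)
XG_edges = [(e, f) for e, f in combinations(XG_nodes, 2)
            if e & f and G[e] != G[f]]

def max_independent_set_size(nodes, edges):
    for r in range(len(nodes), -1, -1):          # brute force, small inputs only
        for S in combinations(nodes, r):
            S = set(S)
            if all(not (u in S and v in S) for u, v in edges):
                return r

def max_stable_edges(G):
    # try every vertex colouring with the colours appearing on edges
    vertices = sorted(set().union(*G))
    colours = sorted(set(G.values()))
    best = 0
    for assignment in product(colours, repeat=len(vertices)):
        f = dict(zip(vertices, assignment))
        best = max(best, sum(1 for e, c in G.items()
                             if all(f[v] == c for v in e)))
    return best

assert max_stable_edges(G) == max_independent_set_size(XG_nodes, XG_edges) == 2
```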
\section{Algorithms for edge-bicoloured graphs}
Although {\sc Coloured Clustering} is NP-complete for edge-tricoloured graphs~\cite{Angel}, Angel et al.~\cite{Angel} have obtained an $O(m^{3/2}n)$-time algorithm for the problem on edge-bicoloured graphs $G$ by reducing it to the maximum independent set problem on bipartite graphs $X(G)$. In this section, we will give a faster $O(m^{3/2}\log n)$-time algorithm by considering {\sc Conflict-Pair Removal}, which leads us to a simple reduction to a minimum cut problem. We also give a linear-time algorithm for the problem on edge-bicoloured complete graphs.
\subsection{Faster algorithm}
One bottleneck of the algorithm of Angel et al. lies in the size of the edge-conflict graph $X(G)$ which contains $O(m)$ vertices and $O(mn)$ edges. Here we use a different approach of reduction to construct a digraph $G'$ with only $O(m)$ vertices and edges, and then solve an equivalent minimum cut problem on $G'$ to solve our problem.
Let $G = (V, E)$ be an edge-bicoloured graph with colours $\{1,2\}$, and consider {\sc Conflict-Pair Removal}. Our idea is to transform every conflict pair in $G$ into an $(s, t)$-path in a digraph $G'$ with source $s$ and sink $t$. For this purpose, we construct digraph $G'$ from $G$ as follows (see \reff{mincut} for an example of the construction): \begin{enumerate} \item Take graph $G$ and add two new vertices --- source $s$ and sink $t$. \item For each edge $v_iv_j$ of $G$, create a new vertex $v_{ij}$ to represent edge $v_iv_j$. If $v_iv_j$ has colour 1 then replace it by two edges $v_{ij}v_i$ and $v_{ij}v_j$ and add edge $sv_{ij}$. Otherwise replace the edge by two edges $v_iv_{ij}$ and $v_jv_{ij}$ and add edge $v_{ij}t$. \end{enumerate}
\fig{mincut}{2.1}{Digraph $G'$ from graph $G$, where shaded vertices in $G'$ correspond to edges in $G$ and thick edges indicate corresponding solution edges.}{mincut.fig}
An {\em $(s, t)$-cut} in $G'$ is a set of edges whose deletion disconnects sink $t$ from source $s$. Let $v_iv_j$ and $v_iv_{j'}$ be an arbitrary conflict pair of $G$. Without loss of generality, we may assume that $v_iv_j$ has colour 1 and $v_iv_{j'}$ has colour 2. By the construction of $G'$, there is a unique $(s, t)$-path \[ P(j,i,j') = sv_{ij}, v_{ij}v_i, v_iv_{ij'}, v_{ij'}t \] in $G'$ that goes through vertices $v_{ij}$ and $v_{ij'}$. For convenience, we refer to edges $sv_{ij}$ and $v_{ij'}t$ as {\em external edges} and the other two edges as {\em middle edges}. Edges $v_iv_j$ and $v_iv_{j'}$ of $G$ correspond to external edges $sv_{ij}$ and $v_{ij'}t$, respectively, in $G'$. We also call an $(s, t)$-cut a {\em normal cut} if the cut contains no middle edge of any $P(j,i,j')$.
\lem{mincut}{ Let $E'$ be a set of edges in $G$. Then $G - E'$ contains no conflict pair if and only if corresponding edges of $E'$ in $G'$ form a normal $(s, t)$-cut of $G'$. } \pf{ There is a one-to-one correspondence between conflict pair $v_iv_j$ and $v_iv_{j'}$ in $G$ and external edges $sv_{ij}$ and $v_{ij'}t$ in $G'$ in such a way that the conflict pair is destroyed if and only if the $(s, t)$-path $P(j,i,j')$ is disconnected. This clearly implies the lemma. }
The above lemma enables us to solve {\sc Conflict-Pair Removal} on edge-bicoloured graphs by reducing it to the minimum cut problem, which yields a faster algorithm.
\thm{bicoloured}{{\sc Conflict-Pair Removal} for edge-bicoloured graphs $G$ can be solved in $O(m^{3/2}\log n)$ time. } \pf{ By \refl{mincut}, we can reduce our problem on $G$ to the minimum normal $(s, t)$-cut problem on digraph $G'$. Observe that for any $(s, t)$-cut, we can always replace a middle edge by an external edge without increasing the size of the cut. Therefore we need only solve the minimum $(s, t)$-cut problem on digraph $G'$, which can be accomplished by the maximum flow algorithm of Goldberg and Rao~\cite{Gold}.
For the running time of the algorithm, we first note that $G'$ contains $N = m + n + 2$ vertices and $M = 3m$ edges, and can be constructed in $O(m + n)$ time. Since every edge of $G'$ has capacity 1, Goldberg and Rao's algorithm takes $O(\min(N^{2/3}, M^{1/2})\,M \log (N^2/M))$ time, which gives us $O(m^{3/2}\log n)$ time as $M, N = O(m)$. }
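The reduction can be sketched in a few lines of code. The following is our own illustration: it follows the construction of $G'$ above, but gives middle edges capacity $m+1$ so that a minimum cut is automatically normal, and uses a simple unit-augmenting Edmonds-Karp max-flow rather than the Goldberg-Rao algorithm cited in the proof (so it does not achieve the stated running time).

```python
# Illustrative sketch of the reduction: build G' from an edge-bicoloured
# graph and solve Conflict-Pair Removal as a minimum (s,t)-cut.
from collections import defaultdict, deque

def min_conflict_pair_removal(edges):
    """edges: list of (u, v, colour) with colour in {1, 2}."""
    cap = defaultdict(lambda: defaultdict(int))
    big = len(edges) + 1                # middle edges: effectively uncuttable
    S, T = 's', 't'
    for i, (u, v, c) in enumerate(edges):
        mid = ('e', i)                  # vertex v_ij representing edge uv of G
        if c == 1:
            cap[S][mid] = 1             # external edge s -> v_ij
            cap[mid][u] = big           # middle edges v_ij -> u, v_ij -> v
            cap[mid][v] = big
        else:
            cap[mid][T] = 1             # external edge v_ij -> t
            cap[u][mid] = big
            cap[v][mid] = big
    flow = 0
    while True:                         # BFS augmenting paths, one unit each
        parent = {S: None}
        queue = deque([S])
        while queue and T not in parent:
            x = queue.popleft()
            for y in list(cap[x]):
                if cap[x][y] > 0 and y not in parent:
                    parent[y] = x
                    queue.append(y)
        if T not in parent:
            return flow                 # max-flow = size of a minimum normal cut
        y = T
        while parent[y] is not None:
            x = parent[y]
            cap[x][y] -= 1
            cap[y][x] += 1
            y = x
        flow += 1

# A conflict pair at b needs one deletion; two disjoint edges need none.
assert min_conflict_pair_removal([('a', 'b', 1), ('b', 'c', 2)]) == 1
assert min_conflict_pair_removal([('a', 'b', 1), ('c', 'd', 2)]) == 0
```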
\crl{bicoloured}{{\sc Coloured Clustering} for edge-bicoloured graphs $G$ can be solved in $O(m^{3/2}\log n)$ time. }
\subsection{Complete graphs}
We now turn to the special case of {\sc Coloured Clustering} when $G = (V, E)$ is an edge-bicoloured complete graph, and present a linear-time algorithm. Let $f$ be a vertex-2-colouring of $G$ that colours vertices $V_1$ by colour 1 and vertices $V_2$ by colour 2. For a vertex $v$, let $d_1(v)$ be the number of edges of colour 1 incident with $v$. Let $m_1$ be the number of edges with colour 1. We can completely determine the number of stable edges produced by $f$ as follows.
\lem{clique}{ For a vertex-$2$-colouring $f$ of an edge-bicoloured complete graph $G$, the number $S_f$ of stable edges produced by $f$ equals
\[\sum_{v \in V_1} d_1(v) + {|V_2| \choose 2} - m_1.\] } \pf{Let $A$ and $B$ be numbers of edges of colour 1 in $G[V_1]$ and $G[V_2]$ respectively. By the definition of stable edges, we have
\[ S_f = A + ({|V_2| \choose 2} - B) \] as $G[V_2]$ is a complete graph. On the other hand, $B = m_1 - C$, where $C$ is the number of edges of colour 1 covered by vertices $V_1$. Therefore
\[ S_f = A + {|V_2| \choose 2} + C - m_1, \] and the lemma follows from the fact that $A + C = \sum_{v \in V_1} d_1(v)$. }
With the formula in the above lemma, we can easily and efficiently solve {\sc Coloured Clustering} for edge-bicoloured complete graphs.
\crl{clique}{ {\sc Coloured Clustering} can be solved in $O(n^2)$ time for edge-bicoloured complete graphs. } \pf{ From \refl{clique}, we see that once we fix the size of $V_1$ to be $k$, $S_f$ is maximized when we choose $k$ vertices $v$ with largest $d_1(v)$ as vertices in $V_1$. Therefore we can compute the maximum value of $S_f$ for each $0 \le k \le n$, and find an optimal vertex-2-colouring for $G$. The whole process clearly takes $O(n^2)$ time as we can first sort vertices according to $d_1(v)$. }
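The procedure in the corollary can be sketched directly from the formula in the lemma (our own illustration; the encoding \texttt{colour[(i, j)]} for $i < j$ on $K_n$ is our convention):

```python
# Illustrative sketch of the O(n^2) procedure: sort vertices by d_1(v),
# try every size k for V_1, and evaluate the lemma's formula
# S_f = sum_{v in V_1} d_1(v) + C(|V_2|, 2) - m_1.
def max_stable_complete(n, colour):
    d1 = [sum(1 for j in range(n)
              if j != i and colour[(min(i, j), max(i, j))] == 1)
          for i in range(n)]
    m1 = sum(1 for e in colour if colour[e] == 1)
    order = sorted(range(n), key=lambda v: -d1[v])
    best = 0
    for k in range(n + 1):              # |V_1| = k, greedy choice of V_1
        s = sum(d1[v] for v in order[:k])
        n2 = n - k
        best = max(best, s + n2 * (n2 - 1) // 2 - m1)
    return best

# K_3 with one colour-1 edge: colouring every vertex 2 stabilises 2 edges.
assert max_stable_complete(3, {(0, 1): 1, (0, 2): 2, (1, 2): 2}) == 2
```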
We can also use a similar idea to solve {\sc Coloured Clustering} in $O(n^2)$ time for edge-bicoloured complete bipartite graphs, which will appear in our full paper.
\section{NP-completeness}
Angel et al.~\cite{Angel} have shown the NP-completeness of {\sc Coloured Clustering} for edge-tricoloured bipartite graphs. In this section, we further push the intractability of the problem to edge-tricoloured planar bipartite graphs of bounded degree. Recall that a vertex colouring is {\em proper} if the two ends of every edge receive different colours.
\thm{planar}{ {\sc Coloured Clustering} is NP-complete for edge-tricoloured planar bipartite graphs of maximum degree four. } \pf{ Garey, Johnson and Stockmeyer~\cite{Garey} proved the NP-completeness of {\sc Independent Set} on cubic planar graphs, and we give a reduction from this restricted case of {\sc Independent Set} to our problem. For an arbitrary cubic planar graph $G = (V, E)$ with $V = \{ v_1, \dots, v_n \}$, we construct an edge-tricoloured planar bipartite graph $G' = (V', E')$ of maximum degree four as follows:
\begin{enumerate} \item Compute a proper vertex 3-colouring $\psi$ of $G$.
\item For each edge $v_iv_j \in E$, subdivide it by a new vertex $v_{ij}$ (i.e., replace edge $v_iv_j$ by two edges $v_iv_{ij}$ and $v_{ij}v_j$), and colour edges $v_iv_{ij}$ and $v_{ij}v_j$ by $\psi(v_i)$ and $\psi(v_j)$ respectively.
\item For each vertex $v_i \in V$, add a new vertex $v^*_i$ and edge $v_iv_i^*$, and colour edge $v_iv_i^*$ by a colour in $\{1,2,3\}$ different from $\psi(v_i)$. \end{enumerate}
It is clear that $G'$ is an edge-tricoloured planar bipartite graph of maximum degree four. By Brooks' Theorem, every connected cubic graph other than $K_4$ admits a proper vertex 3-colouring, and we can use an algorithm of Lov\'{a}sz~\cite{Lovasz} to compute a proper vertex 3-colouring of a cubic graph in linear time. Therefore, the above construction of $G'$ takes polynomial time. We claim that $G$ has an independent set of size $k$ if and only if $G'$
admits a vertex colouring that produces $k + |E|$ stable edges.
Suppose that $G$ contains an independent set $I$ of size $k$. We define a vertex-3-colouring $f$ of $G'$ as follows: \begin{enumerate} \item For each vertex $v_i^* \in V^*$, set $f(v_i^*)$ to be the colour of edge $v_iv_i^*$.
\item For each vertex $v_i \in V$, set $f(v_i)$ to be the colour of edge $v_iv^*_i$ if $v_i \in I$ and $f(v_i) = \psi(v_i)$ otherwise.
\item For each vertex $v_{ij}$, set $f(v_{ij})$ to be $\psi(v_i)$ if $v_i \not\in I$ and $\psi(v_j)$ otherwise. \end{enumerate}
Clearly, $f$ produces $k$ stable edges $v_iv_i^*$ after Step 2.
For any vertex $v_{ij}$, since $I$ contains at most one of $v_i$ and $v_j$, exactly one of $v_iv_{ij}$ and $v_{ij}v_j$ becomes a stable edge after Step 3. Therefore $f$ produces $k+ |E|$ stable edges for $G'$.
Conversely, call each edge $v_iv_i^*$ an {\em outside edge}, and let $f'$ be a vertex 3-colouring of $G'$ that produces $k + |E|$ stable edges and also minimizes the number of outside edges among these stable edges. Let \[ I = \{ v_i: \mbox{edge $v_iv^*_i$ is stable} \}.\] For every vertex $v_{ij}$ in $G'$, since edges $v_iv_{ij}$ and $v_{ij}v_j$ have different colours, at most one of these two edges is a stable edge for any vertex colouring of $G'$. It follows that at least $k$ stable edges are formed by outside edges and hence $I$ contains at least $k$ vertices.
We claim that $I$ is an independent set of $G$. Suppose to the contrary that for some vertices $v_i, v_j \in I$, $v_iv_j$ is an edge of $G$. First we note that amongst all edges incident with $v_i$, $v_iv^*_i$ is the only stable edge under $f'$ as $f'(v_i) \not= \psi(v_i)$, and similar situation holds for all edges incident with $v_j$. In particular, neither $v_iv_{ij}$ nor $v_{ij}v_j$ is a stable edge. We now recolour both vertices $v_i$ and $v_{ij}$ by the colour of edge $v_iv_{ij}$ (note that $v_{ij}$ may have received that colour already under $f'$) to obtain a new vertex 3-colouring $f''$ (see \reff{npc} for an example of the situation).
\fig{npc}{1.8}{(a) Situation under vertex colouring $f'$. (b) Situation after recolouring vertices $v_i$ and $v_{ij}$. Stable edges are indicated by thick edges.}{npc.fig}
Compared with colouring $f'$, this new colouring $f''$ loses one stable edge (namely, edge $v_iv_i^*$), but produces a new stable edge $v_iv_{ij}$
(and possibly other new stable edges). Therefore $f''$ produces at least $k + |E|$ stable edges with one fewer outside edge than $f'$, contradicting the choice of $f'$. This contradiction implies that $I$ is indeed an independent set of $G$ with at least $k$ vertices, and hence the theorem holds. }
\crl{planar}{ {\sc Conflict-Pair Removal} is NP-complete for edge-tricoloured planar bipartite graphs of maximum degree four. }
\section{FPT algorithms}
We now turn to the parameterized complexity of {\sc Coloured Clustering}, and give FPT algorithms for the problem with respect to both the number of stable edges and the number of unstable edges as parameter $k$. This is quite interesting as it is uncommon for a problem to admit FPT algorithms both ways when parameterized in this manner.
\subsection{Stable edges}
First we take the number of stable edges produced by a vertex colouring of $G$ as parameter $k$, and use {\sc Coloured Cluster}$[k]$ to denote this parameterized problem. We will give an FPT algorithm that uses random partition in the spirit of the colour coding method of Alon, Yuster and Zwick~\cite{Alon}, which implies an FPT algorithm for {\sc Independent Set}$[k]$ in edge-conflict graphs. Note that if the number $t$ of colours in $G$ is a constant, then the problem is trivially solved in FPT time as it contains a trivial kernel with at most $kt$ edges and hence $2kt$ vertices. Also note that the problem is not as easy as it looks, for it contains the maximum matching problem as a special case when all edges have different colours.
Our idea is to randomly partition the vertices of $G$ into $k$ parts $V_1, \dots, V_k$ in the hope that a $k$-solution consists of $k_i$ stable edges in each $G[V_i]$, where $\sum_{i=1}^k k_i = k$. Indeed, we have a good chance to succeed in this way.
\begin{quote} {\bf Algorithm} Coloured-Clustering$[k]$\\ Randomly partition the vertex set $V$ of $G$ into $V_1,\cdots,V_k$.\\ Compute the most frequent edge colour $c_i$ in each $G[V_i]$. \\ Colour all vertices in $V_i$ by $c_i$.\\ \end{quote}
\lem{chance}{For any yes-instance of \/ {\sc Coloured Cluster}$[k]$, the vertex colouring constructed by {\bf Algorithm} {\rm Coloured-Clustering}$[k]$ has probability at least $k^{-2k}$ to produce at least $k$ stable edges. }
\pf{Consider a vertex colouring of $G$ that produces at least $k$ stable edges $E'$, which clearly have at most $k$ different colours. Let $E'_i$ be edges in $E'$ of colour $c_i$ and let $k_i = |E'_i|$ for $1 \le i \le k$. We estimate the probability that all edges of $E'_i$ lie in $G[V_i]$. A vertex has probability $k^{-1}$ to be in $V_i$, and hence the above event happens with probability at least $k^{-2k_i}$ as the edges of $E'_i$ span at most $2k_i$ vertices. It follows that, with probability at least \[ k^{- \sum_{i=1}^k 2k_i} = k^{-2k}, \] all edges of each $E'_i$ lie entirely inside $G[V_i]$. Therefore each $G[V_i]$ contains at least $k_i$ edges of the same colour $c_i$, which can be made stable by colouring all vertices in $G[V_i]$ by colour $c_i$. It follows that the algorithm produces at least $k$ stable edges with probability at least $k^{-2k}$. }
The algorithm runs in $O(k^{2k}(m+n))$ expected time, and can be made into a deterministic FPT algorithm by standard derandomization with a family of perfect hashing functions.
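One randomized round of the procedure above can be sketched as follows. This is a minimal Python sketch under our own data representation (edges as `(u, v, colour)` triples; the function name is ours, not from the paper):

```python
import random
from collections import Counter

def coloured_clustering_k(vertices, edges, k):
    """One randomized round for Coloured Clustering[k].

    `edges` is a list of (u, v, colour) triples.  Returns the constructed
    vertex colouring (a dict) and the number of stable edges it produces.
    """
    # Randomly partition the vertices into k parts V_1, ..., V_k.
    part = {v: random.randrange(k) for v in vertices}

    # Collect, for each part, the colours of its internal edges.
    internal = [[] for _ in range(k)]
    for u, v, c in edges:
        if part[u] == part[v]:
            internal[part[u]].append(c)

    # c_i = most frequent edge colour inside G[V_i] (None if no internal edge).
    best = [Counter(cs).most_common(1)[0][0] if cs else None for cs in internal]

    # Colour every vertex of V_i by c_i.
    colouring = {v: best[part[v]] for v in vertices}

    # An edge is stable when both of its endpoints carry its colour.
    stable = sum(1 for u, v, c in edges
                 if colouring[u] == c and colouring[v] == c)
    return colouring, stable
```

By \refl{chance}, repeating this round $O(k^{2k})$ times and keeping the best colouring succeeds with constant probability on any yes-instance.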
\thm{stable}{ {\sc Coloured Cluster}$[k]$ can be solved in {\rm FPT} time. }
\subsection{Unstable edges}
Now we take the number of unstable edges in a vertex colouring as parameter $k$, and use {\sc Conflict-Pair Removal}$[k]$ to denote this parameterized problem. By a result of Angel et al.~\cite{Angel}, the problem is equivalent to finding a vertex cover of size $k$ in the edge-conflict graph $X(G)$ of $G$, and hence admits an FPT algorithm by transforming it to the $k$-vertex cover problem in $X(G)$. However, the transformation takes $O(mn)$ time as $X(G)$ contains $O(m)$ vertices and $O(mn)$ edges, so the total running time of this approach is $O(mn + 1.2783^k)$. Here we combine kernelization with weighted vertex cover to obtain an improved algorithm with running time $O(m+n+1.2783^k)$.
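The reduction above hinges on the edge-conflict graph $X(G)$: one node per edge of $G$, with two nodes adjacent exactly when the corresponding edges form a conflict pair. Its construction can be sketched as follows (a Python sketch under our own representation, with edges as `(u, v, colour)` triples identified by their index):

```python
from collections import defaultdict

def edge_conflict_graph(edges):
    """Edge set of the edge-conflict graph X(G): pairs of edge indices
    whose edges share an endpoint but have different colours."""
    incident = defaultdict(list)
    for i, (u, v, _) in enumerate(edges):
        incident[u].append(i)
        incident[v].append(i)
    conflicts = set()
    for ids in incident.values():
        for a in range(len(ids)):
            for b in range(a + 1, len(ids)):
                i, j = ids[a], ids[b]
                if edges[i][2] != edges[j][2]:
                    conflicts.add((min(i, j), max(i, j)))
    return conflicts
```

A set of edges of $G$ whose removal destroys all conflict pairs corresponds exactly to a vertex cover of this graph.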
To start with, we construct in linear time the following edge-coloured weighted graph $G^*$, called {\em condensed graph}, by representing monochromatic vertices of one colour by a single vertex, and then parallel edges between two vertices by a single weighted edge. See \reff{condense} for an example of the construction.
{\bf Step 1.} For each colour $c$, contract all monochromatic vertices of colour $c$ into a single vertex $v_c$.
{\bf Step 2.} For each pair of adjacent vertices, if there is only one edge between them, then set the weight of the edge to 1, otherwise replace all parallel edges between them by a single edge of the same colour\footnote{All such parallel edges have the same colour as they correspond to edges between a vertex and monochromatic vertices of the same colour.} and set its weight to be the number of replaced parallel edges.
\fig{condense}{2}{(a) Edge-coloured graph $G$ where each dashed ellipse indicates monochromatic vertices of same colour. (b) The condensed graph $G^*$ of $G$ where an edge of weight more than 1 has its weight as the superscript of its colour.}{condense.fig}
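Steps 1 and 2 can be sketched as follows. This is a minimal Python sketch under our own representation: monochromatic vertices are listed in a colour map, colourful vertices are absent from it, and the contracted vertex $v_c$ is represented by the pair `('v', c)` (the helper name is ours):

```python
from collections import defaultdict

def condense(vertex_colours, edges):
    """Build the weighted edge set of the condensed graph G*.

    `vertex_colours` maps each monochromatic vertex to its colour.
    `edges` is a list of (u, v, colour) triples.
    Returns a dict {(endpoint pair, colour): weight}.
    """
    # Step 1: contract all monochromatic vertices of colour c into v_c,
    # represented here by the pair ('v', c).
    def rep(v):
        c = vertex_colours.get(v)
        return ('v', c) if c is not None else v

    # Step 2: replace parallel edges by a single weighted edge whose
    # weight is the number of parallel edges it replaces.
    weight = defaultdict(int)
    for u, v, c in edges:
        weight[(frozenset({rep(u), rep(v)}), c)] += 1
    return dict(weight)
```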
It turns out that the clustering problem on $G$ is equivalent to a weighted version of the problem on the condensed graph $G^*$.
\lem{condense}{ Graph $G$ has at most $k$ unstable edges if and only if $G^*$ has unstable edges of total weight at most $k$. } \pf{By the construction of $G^*$, we have the following correspondence between edges in $G$ and $G^*$: every edge in $G$ between two colourful vertices remains so in $G^*$, and for any colourful vertex $v$, all edges between $v$ and monochromatic vertices of colour $c$ correspond to edge $vv_c$ in $G^*$. Also all monochromatic vertices in $G^*$ form an independent set.
Now suppose that $G$ has a vertex colouring $f$ that produces $k$ unstable edges. Without loss of generality, we may assume that every monochromatic vertex $v$ in $G$ has its own colour as $f(v)$ since this will not increase unstable edges. For this $f$, we have a natural vertex colouring $f^*$ for $G^*$: every colourful vertex retains its colour under $f$, and each contracted vertex $v_c$ receives colour $c$. It is obvious that an edge in $G^*$ between two colourful vertices is an unstable edge under $f^*$ if and only if it is an unstable edge in $G$ under $f$. For edges $E_c(v)$ in $G$ between a colourful vertex $v$ and monochromatic vertices with colour $c$, either all edges in $E_c(v)$ are stable or all are unstable as $f$ colours all these monochromatic vertices by colour $c$, implying that all edges in $E_c(v)$ are unstable under $f$ if and only if $vv_c$ is unstable under $f^*$. Therefore $f^*$ produces unstable edges of total weight $k$ in $G^*$.
Conversely, suppose that $G^*$ contains a set $U$ of unstable edges of total weight $k$, and let $U'$ be the corresponding $k$ edges in $G$. Clearly $G - U'$ contains no conflict pair, and hence $G$ has at most $k$ unstable edges. }
Further to the above lemma, $G^*$ can be regarded as a kernel as its size is bounded by a function of $k$ whenever $G$ has at most $k$ unstable edges. Note that the bounds in the following lemma are tight.
\lem{kernel}{ If $G$ has at most $k$ unstable edges, then $G^*$ has at most $4k$ vertices and $2k^2+k$ edges. } \pf{Let $[C, M]$ be the cut that partitions the vertices of $G$ into colourful vertices $C$ and monochromatic vertices $M$. Let $A$ be the set of unstable edges inside $G[C]$, and $B$ the set of unstable edges across the cut. Observe that each edge of $A$ is incident with at most two vertices of $C$,
and each edge of $B$ is incident with one vertex of $C$. Furthermore, every colourful vertex is incident with at least one unstable edge. Therefore $|C| \le 2|A|+|B| \le 2k$.
Now consider the condensed graph $G^*$, and note that the cut $[C, M]$ corresponds to the cut $[C, M^*]$ for monochromatic vertices $M^*$ of $G^*$. Furthermore, $A$ consists of unstable edges inside $G^*[C]$, and
$B$ corresponds to unstable edges $B^*$ across $[C, M^*]$ and $|B^*| \le |B|$.
In $G^*$, every vertex in $C$ is incident with at most one stable edge in $[C, M^*]$. Therefore $[C, M^*]$ contains at most
\[|C| + |B^*| \le (2|A|+|B|)+|B^*| \le 2(|A|+|B|) \le 2k\] edges, and hence $M^*$ contains at most $2k$ vertices. It follows that $G^*$ contains at most $4k$ vertices, and at most ${2k \choose 2}+2k = 2k^2+k$ edges as $G^*[M^*]$ is edgeless. }
With \refl{condense} and \refl{kernel} in hand, we obtain the following FPT algorithm for {\sc Conflict-Pair Removal}$[k]$.
\begin{tabbing} {\bf Algorithm} Conflict-Pair-Removal$[k]$ \\
Construct the condensed graph $G^*$ from $G$; \\ {\bf if} $G^*$ \= contains more than $4k$ vertices or $2k^2+k$ edges \\
\> {\bf then return} ``No'' and {\bf halt}; \\ Construct the edge-conflict graph $X(G^*)$ of $G^*$;\\ {\bf if} $X(G^*)$ has a vertex cover of weight at most $k$ \\
\> {\bf then return} ``Yes''\\
\> {\bf else return} ``No''.\\ \end{tabbing}
\thm{}{{\sc Conflict-Pair Removal}$[k]$ can be solved in $O(m+n+1.2783^k)$ time. } \pf{The correctness of the algorithm follows from \refl{condense} and \refl{kernel}, and we analyze the running time of the algorithm. The construction of the condensed graph $G^*$ clearly takes $O(m+n)$ time, and the construction of the edge-conflict graph $X(G^*)$ takes $O(k^3)$ time as $G^*$ contains $O(k)$ vertices and $O(k^2)$ edges. Note that $X(G^*)$ contains $O(k^2)$ vertices and $O(k^3)$ edges. Since it takes $O(kn + 1.2783^k)$ time to solve the weighted vertex cover problem~\cite{Chen, Nied}, it takes $O(k^3 + 1.2783^k) = O(1.2783^k)$ time to solve the problem for $G^*$, and hence the overall time is $O(m + n + 1.2783^k)$. }
\section{Concluding remarks}
We have revealed that {\sc Coloured Clustering}, a vertex partition problem, is in fact the subgraph problems {\sc Vertex-Monochromatic Subgraph} and {\sc Alternating Path Removal} in disguise, and demonstrated the usefulness of these multiple points of view in studying these problems. Indeed, our improved algorithm for edge-bicoloured graphs and FPT algorithms for general edge-coloured graphs have benefited a lot from the perspective of {\sc Conflict-Pair Removal}. We now briefly discuss a few open problems in the language of monochromatic subgraphs and alternating paths for readers to ponder. \\
\noindent {\bf Question 1.} {\em For edge-bicoloured graphs, is there a faster algorithm for deleting fewest edges to obtain a vertex-monochromatic subgraph? }
There seems to be a good chance to solve the problem faster than our algorithm, and one possible approach is to reduce the number of vertices in the reduction to minimum cut from the current $O(m)$ to $O(n)$.\\
\noindent {\bf Question 2.} {\em For {\sc Conflict-Pair Removal} on general edge-coloured graphs, is there an $r$-approximation algorithm for some constant $r < 2$?}
The problem admits a simple 2-approximation algorithm through its connection with {\sc Vertex Cover}, and seems easier than the latter problem. It is possible that we can do better for the problem, perhaps through an LP relaxation. \\
\noindent {\bf Question 3.} {\em For edge-coloured graphs, does the problem of finding a vertex-monochromatic subgraph with at least $k$ edges admit a polynomial kernel? }
The above problem is appealing for its connection with the classical maximum matching problem. On one hand, we may use a maximum matching of $G$ as a starting point for a polynomial kernel; and on the other hand if we can obtain a polynomial kernel of $G$ in $o(m\sqrt{n})$ time, we may use the kernel to speed up maximum matching algorithms. Of course, it may be the case that the problem admits no polynomial kernel unless NP $\subseteq$ coNP/poly.\\
\noindent {\bf Question 4.} {\em For edge-coloured graphs, is there an FPT algorithm for the problem of destroying all alternating cycles by deleting at most $k$ edges? }
Although the definition of the problem resembles that of {\sc Alternating Path Removal}, the problem seems much more difficult as the problem does not have a finite forbidden structure like conflict pair for the latter problem. We note that the problem is NP-complete by a simple reduction from {\sc Vertex Cover}.
\end{document}
\begin{document}
\title[Data Driven Cost for DRO]{Data-driven Optimal Transport Cost Selection for Distributionally Robust Optimization} \author{Blanchet, J.} \address{Columbia University, Department of Statistics and Department of Industrial Engineering \& Operations Research, New York, NY 10027, United States.} \email{jose.blanchet@columbia.edu} \author{Kang, Y.} \address{Columbia University, Department of Statistics. New York, NY 10027, United States.} \email{yang.kang@columbia.edu} \author{Zhang, F.} \address{Columbia University, Department of Industrial Engineering \& Operations Research. New York, NY 10027,
United States.} \email{fz2222@columbia.edu }
\author{Murthy, K.} \address{Columbia University, Department of Industrial Engineering \& Operations Research,Mudd Building, 500 W. 120 Street, New York, NY 10027, United States.} \email{karthyek.murthy@columbia.edu} \keywords{} \date{\today }
\begin{abstract}
Recently, \citeapos{blanchet2016robust} showed that several machine
learning algorithms, such as square-root Lasso, Support Vector
Machines, and regularized logistic regression, among many others,
can be represented exactly as distributionally robust optimization
(DRO) problems. The distributional uncertainty is defined as a
neighborhood centered at the empirical distribution. We
propose a methodology which learns such a neighborhood in a natural
data-driven way. We show rigorously that our framework encompasses
adaptive regularization as a particular case. Moreover, we
demonstrate empirically that our proposed methodology is able to
improve upon a wide range of popular machine learning estimators.
\end{abstract} \maketitle
\section{Introduction}
A Distributionally Robust Optimization (DRO) problem takes the general form
\begin{equation}
\min_{\beta }\max_{P\in \mathcal{U}_{\delta }}\mathbb{E}_{P}\left[ l\left( X,Y,\beta
\right) \right] , \label{Eqn-DRO_origin}
\end{equation}
where $\beta $ is a decision variable, $(X,Y)$ is a random element, and
$l(x,y,\beta) $ measures a suitable loss or cost incurred when $(X,Y)=(x,y)$ and
the decision $\beta $ is taken. The expectation $\mathbb{E}_{P}[ \cdot ]$ is
taken under the probability model $P$. The set $\mathcal{U}_{\delta }$
is called the distributional uncertainty set and it is indexed by the
parameter $\delta >0$, which measures the size of the distributional
uncertainty.
\newline
The DRO problem is said to be \textit{data-driven} if the uncertainty
set $
\mathcal{U}_{\delta }$ is informed by empirical observations. One
natural way to supply this information is by letting the
\textquotedblleft center\textquotedblright\ of the uncertainty region
be placed at the empirical measure, $P_{n}$, induced by a data set
$\{X_{i},Y_{i}\}_{i=1}^{n}$, which represents an empirical sample of
realizations of $(X,Y)$
. In order to emphasize the data-driven nature of a DRO
formulation such as (
\ref{Eqn-DRO_origin}), when the uncertainty region is informed by an
empirical sample, we write
$\mathcal{U}_{\delta }=\mathcal{U}_{\delta }( P_{n}) $. To the best of
our knowledge, the available data is utilized in the DRO literature
only by defining the center of the uncertainty region
$\mathcal{U}_\delta(P_n)$ as the empirical measure $P_n.$
\newline
Our goal in this paper is to discuss a data-driven framework to inform
the \textit{shape} of $\mathcal{U}_{\delta }( P_{n})
$. Throughout this paper, we assume that the class of functions to
fit, indexed by $\beta $
, is given and that a sensible loss function
$l\left( x,y,\beta \right) $ has been selected for the problem at
hand. Our contribution concerns the construction of the uncertainty
region in a fully data-driven way and the implications of this design
in machine learning applications. Before providing our construction,
let us discuss the significance of data-driven DRO\ in the context of
machine learning.
\newline
Recently, \citeapos{blanchet2016robust} showed that many prevailing machine
learning estimators can be represented exactly as a data-driven DRO
formulation in (\ref{Eqn-DRO_origin}). For example, suppose that
$X\in \mathbb{R}^{d}$ and $Y\in \{-1,1\}$. Further, let
$l( x,y,\beta) =\log ( 1+\exp ( -y\beta ^{T}x))$ be the
log-exponential loss associated to a logistic regression model where
$Y\sim Ber\left( 1/(1+\exp ( -\beta _{\ast }^{T}x))\right),$ and
$\beta _{\ast }$ is the underlying parameter to learn. Then, given a set of
empirical samples
$\mathcal{D}_{n}=\left\{ \left( X_{i},Y_{i}\right) \right\}
_{i=1}^{n}$, and a judicious choice of the distributional uncertainty set
$
\mathcal{U}_{\delta }\left( P_{n}\right) $, \citeapos{blanchet2016robust}
shows that
\begin{equation}
\min_{\beta }\max_{P\in \mathcal{U}_{\delta }\left( P_{n}\right)
}\mathbb{E}_{P}[l( X,Y,\beta) ]=\min_{\beta }\left( \mathbb{E}_{P_{n}}[l(X,Y,\beta)
]+\delta \left\Vert \beta \right\Vert _{p}\right),
\label{DR_Las}
\end{equation}
where $\left\Vert \cdot \right\Vert _{p}$ is the $\ell_{p}-$norm in
$\mathbb{R}^{d}$ for $p\in \lbrack 1,\infty )$ and
$\mathbb{E}_{P_{n}}[l(X,Y,\beta) ]=n^{-1}\sum_{i=1}^{n}l(
X_{i},Y_{i},\beta ).$\newline
\newline
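The right-hand side of (\ref{DR_Las}) can be evaluated directly. The following is our own illustrative Python helper (not from the paper), computing the empirical log-exponential loss plus the $\ell_p$ regularizer:

```python
import numpy as np

def dro_logistic_objective(beta, X, Y, delta, p=1):
    """Regularized objective on the right-hand side of the DRO
    representation: empirical log-exponential loss plus delta * ||beta||_p.
    Y takes values in {-1, +1}."""
    margins = -Y * (X @ beta)                      # -y_i * beta^T x_i
    emp_loss = np.mean(np.log1p(np.exp(margins)))  # mean log(1 + exp(.))
    return emp_loss + delta * np.linalg.norm(beta, ord=p)
```

Here $\delta$ plays exactly the role of the regularization parameter, as the representation (\ref{DR_Las}) indicates.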
The definition of $\mathcal{U}_{\delta }\left( P_{n}\right) $ turns
out to be informed by the dual norm $\Vert \cdot \Vert _{q}$ with
$1/p+1/q=1$. If $p=1$ we see that (\ref{DR_Las}) recovers
$\ell_1$-regularized logistic regression (see
\citeapos{friedman2001elements}). Other estimators such as Support Vector
Machines and sqrt-Lasso are shown in
\citeapos{blanchet2016robust} to admit DRO representations analogous to
(\ref{DR_Las}) -- provided that the loss function and the uncertainty
region are judiciously chosen. Note that the parameter $\delta$ in
$\mathcal{U}_{\delta}(P_n)$ is precisely the regularization parameter
in the right hand side of (\ref{DR_Las}). So, the data-driven DRO
representation (\ref {DR_Las}) provides a direct interpretation of the
regularization parameter as the size of the probabilistic uncertainty
around the empirical evidence.
\newline
An important element to all of the DRO representations obtained in
\citeapos{blanchet2016robust} is that the design of the uncertainty region
$\mathcal{U}_{\delta}( P_{n}) $ is based on optimal transport
theory. In particular, we have that
\begin{equation}
\mathcal{U}_{\delta }\left( P_{n}\right) =\{P:D_{c}( P,P_{n})
\leq \delta \}, \label{USet}
\end{equation}
and $D_{c}( P,P_{n}) $ is the minimal cost of rearranging (i.e.
transporting the mass of) the distribution $P_{n}$ into the
distribution $P$. The rearrangement mechanism has a transportation
cost $c( u,w) \geq 0$ for moving a unit of mass from location $u$ in
the support of $P_{n}$ to location $w$ in the support of $P$. For
instance, in the setting of (\ref{DR_Las}) we have that
\begin{equation}
c\big( ( x,y) ,( x^{\prime },y^{\prime }) \big)
=\left\Vert x-x^{\prime }\right\Vert _{q}^{2}I\left( y=y^{\prime }\right)
+\infty \cdot I\left( y\neq y^{\prime }\right) . \label{Cost}
\end{equation}
In the end, as we discuss in Section \ref{Sec_OT}, $D_{c}( P,P_{n}) $
can be easily computed as the solution of a linear programming (LP)
problem which is known as Kantorovich's problem (see
\citeapos{villani2008optimal}).
\newline
Other discrepancy notions between probability models have been
considered, typically using the Kullback-Leibler divergence and other
divergence based notions \citeapos{hu2013kullback}. Using divergence (or
likelihood ratio) based discrepancies to characterize the uncertainty
region $\mathcal{U}_{\delta}( P_{n}) $ forces the models
$P\in \mathcal{U}_{\delta }( P_{n}) $ to share the same support with
$P_{n}$, which may restrict the generalization properties of a DRO-based
estimator;
such a restriction may induce overfitting (see the
discussions in \citeapos{esfahani2015data} and \citeapos{blanchet2016robust}).
\newline
In summary, data-driven DRO via optimal transport has been shown to
encompass a wide range of prevailing machine learning
estimators. However, so far the cost function $c\left( \cdot \right) $
has been taken as a given, and not chosen in a data-driven way.
\newline
Our main contribution in this paper is to propose a comprehensive
approach for designing the uncertainty region
$\mathcal{U}_{\delta }( P_{n}) $ in a fully data-driven way, using the
convenient role of $c(\cdot) $ in the definition of the optimal
transport discrepancy $D_{c}( P,P_{n}) $. Our modeling approach
further underscores, beyond the existence of representations such as
(\ref{DR_Las}), the convenience of working with an optimal transport
discrepancy for the design of data-driven DRO machine learning
estimators. In other words, because one can select $c( \cdot ) $ in a
data driven way, it is sensible to use our data-driven DRO formulation
even if one is not able to simplify the inner optimization in order to
achieve a representation such as (\ref{DR_Las}).
\newline
Our idea is to apply metric-learning procedures to estimate
$c( \cdot) $ from the training data. Then, use such data-driven
$c( \cdot ) $ in the definition of $D_{c}( P,P_{n}) $ and the
construction $\mathcal{U}_{\delta }( P_{n}) $ in (\ref{USet}).
Finally, solve the DRO problem (\ref{Eqn-DRO_origin}), using
cross-validation to choose $\delta $.
\newline
The intuition behind our proposal is the following. By using a metric learning
procedure we are able to calibrate a cost function $c\left( \cdot \right) $
which attaches relatively high transportation costs to $\left( u,w\right) $
if transporting mass between these locations substantially impacts
performance (e.g. in the response variable, so increasing the expected
risk). In turn, the adversary, with a given budget $\delta $, will carefully
choose the data which is to be transported. The mechanism will then induce
enhanced out-of-sample performance focusing precisely on regions of
relevance, while improving generalization error.
\newline
One of the challenges for the implementation of our idea is to
efficiently solve (\ref{Eqn-DRO_origin}). We address this challenge by
proposing a stochastic gradient descent algorithm which takes
advantage of a duality representation and fully exploits the nature of
the LP structure embedded in the definition of $D_{c}( P,P_{n}) $,
together with a smoothing technique.
\newline Another challenge
consists in selecting the type of cost $c( \cdot ) $ to be used in
practice, and the methodology to fit such cost. To cope with this
challenge, we rely on metric-learning procedures. We do not
contribute any novel metric learning methodology; rather, we discuss
various parametric cost functions and methods developed in the
metric-learning literature, and we discuss their use in the context of
fully data-driven DRO formulations for machine learning problems --
which is what we propose in this paper. The choice of $c( \cdot ) $
ultimately will be influenced by the nature of the data and the
application at hand. For example, in the setting of image recognition,
it might be natural to use a cost function related to similarity
notions.
\newline
In addition to discussing intuitively the benefits of our approach in
Section \ref{Sec_Intuit}, we also show that our methodology provides a
way to naturally estimate various parameters in the setting of
adaptive regularization. For example, Theorem
\ref{Thm-DRO-Rep-Adaptive-Reg} below, shows that choosing
$c( \cdot ) $ using a suitable weighted norm, allows us to
recover an adaptive regularized ridge regression
estimator \citeapos{ishwaran2014geometry}. In turn, using standard
techniques from metric learning we can estimate
$c( \cdot ) $. Hence, our technique connects metric-learning
tools to the estimation of the parameters of adaptively regularized
estimators.
\newline
More broadly, we compare the performance of our procedure with a number of
alternatives in the setting of various data sets and show that our approach
exhibits consistently superior performance.
\section{Data-Driven DRO: Intuition and Interpretations\label{Sec_Intuit}}
One of the main benefits of DRO formulations such as (\ref{Eqn-DRO_origin})
and (\ref{DR_Las}) is their interpretability. For example, we can readily
see from the left hand side of (\ref{DR_Las}) that the regularization
parameter corresponds precisely to the size of the \textit{data-driven}
distributional uncertainty.
\newline
The data-driven aspect is important because we can employ statistical
thinking to optimally characterize the size of the uncertainty,
$\delta $. This readily implies an optimal choice of the
regularization parameter, as explained in \citeapos{blanchet2016robust},
in settings such as (\ref{DR_Las}). Elaborating, we can interpret
$\mathcal{U}_{\delta }\left( P_{n}\right) $ as the set of plausible
variations of the empirical data, $P_{n}$. Consequently, for instance,
in the linear regression setting leading to (\ref{DR_Las}), the
estimate
$\beta _{P}=\arg \min_{\beta }\mathbb{E}_{P}\left( l\left( X,Y,\beta \right)
\right) $ is a plausible estimate of the regression parameter
$\beta _{\ast } $ as long as
$P\in \mathcal{U}_{\delta }\left( P_{n}\right) $. Hence, the set
\begin{equation*}
\Lambda _{\delta }\left( P_{n}\right) =\{\beta _{P}:P\in \mathcal{U}_{\delta
}\left( P_{n}\right) \}
\end{equation*}
is a natural confidence region for $\beta _{\ast }$ which is non-decreasing
in $\delta $. Thus, a statistically minded approach for choosing $\delta $
is to fix a confidence level, say $\left( 1-\alpha \right) $, and choose an
optimal $\delta $ ($\delta _{\ast }\left( n\right) $) via
\begin{equation}
\delta _{\ast }\left( n\right) :=\inf \{\delta :P\left( \beta _{\ast }\in
\Lambda _{\delta }\left( P_{n}\right) \right) \geq 1-\alpha \}.
\label{Opt_delta}
\end{equation}
Note that the random element in
$P\left( \beta _{\ast }\in \Lambda _{\delta }\left( P_{n}\right)
\right) $ is given by $P_{n}$. In \citeapos{blanchet2016robust} this
optimization problem is solved asymptotically as
$n\rightarrow \infty $ under standard assumptions on the data
generating process. If the underlying model is correct, one would
typically obtain, as in \citeapos{blanchet2016robust}, that
$\delta _{\ast }( n) \rightarrow 0$ at a suitable rate. For instance,
in the linear regression setting corresponding to (\ref{DR_Las}), if
the data is i.i.d. with finite variance and the linear regression
model holds then $
\delta _{\ast }(n) =\chi _{_{1-\alpha} }\left( 1+o\left( 1\right)
\right) /n$ as $n\rightarrow \infty $ (where $\chi _{_\alpha }$ is the
$
\alpha $ quantile of an explicitly characterized distribution).
\newline
In practice, one can also choose $\delta $ by
cross-validation. The work of \citeapos{blanchet2016robust} compares the
asymptotically optimal choice $\delta_\ast(n)$ against
cross-validation, concluding that the performance is comparable in the
experiments performed. In this paper, we use cross validation to
choose $ \delta $, but the insights behind the limiting behavior of
(\ref{Opt_delta}) are useful, as we shall see, to inform the design of
our algorithms.
\newline
More generally, the DRO\ formulation (\ref{Eqn-DRO_origin}) is
appealing because the distributional uncertainty endows the estimation
of $\beta $ directly with a mechanism to enhance generalization
properties. To wit, we can interpret (\ref{Eqn-DRO_origin}) as a game
in which we (the outer player) choose a decision $\beta $, while the
adversary (the inner player) selects a model which is a perturbation,
$P$, of the data (encoded by $P_{n}$
). The amount of the perturbation is dictated by the size of $\delta $
which, as discussed earlier, is data driven. But the type of
perturbation and how the perturbation is measured is dictated by
$D_{c}(P,P_{n}) $. It makes sense to inform the design of
$D_{c}( \cdot ) $ using a data-driven mechanism, which is our goal in
this paper. { The intuition is to allow the types of
perturbations which focus the effort and budget of the adversary
mostly on out-of-sample exploration over regions of relevance.}
\newline
The type of benefit that is obtained by informing
$D_{c}\left( P,P_{n}\right) $ with data is illustrated in Figure 1(a)
below.
\begin{figure}
\caption{Stylized examples illustrating the need for data-driven cost
function.
}
\end{figure}
Figure 1(a) illustrates a classification task. The data roughly lies
on a lower dimensional non-linear manifold. Some data which is
classified with a negative label is seen to be \textquotedblleft
close\textquotedblright\ to data which is classified with a positive
label when seeing the whole space (i.e. $\mathbb{R}^{2}$) as the
natural ambient domain of the data. However, if we use a distance
similar to the geodesic distance intrinsic to the manifold we would
see that the negative instances are actually far from the positive
instances.
So, the generalization properties of the algorithm would be enhanced
relative to using a standard metric in the ambient space, because with
a given budget $\delta $ the adversarial player would be
allowed perturbations mostly along the intrinsic manifold where the
data lies and instances which are surrounded (in the intrinsic metric)
by similarly classified examples will naturally carry significant
impact in testing performance. A quantitative example to illustrate
this point will be discussed in the sequel.
\section{Background on Optimal Transport and Metric Learning
Procedures\label{Sec_OT}}
In this section we quickly review basic notions on
optimal transport and metric learning methods so that we can define
$D_{c}( P,P_{n}) $ and explain how to calibrate the function
$c( \cdot ).$
\subsection{Defining Optimal Transport Distances and Discrepancies}
Assume that the cost function
$c:\mathbb{R}^{d+1}\times \mathbb{R}
^{d+1}\rightarrow \lbrack 0,\infty ]$ is lower semicontinuous. We also
assume that $c(u,v)=0$ if and only if $u=v$. Given two distributions
$P$ and $Q$, with supports $\mathcal{S}_{P}$ and $
\mathcal{S}_{Q}$, respectively, we define the optimal transport
discrepancy, $D_{c}$, via
\begin{equation}\label{Discrepancy_Def}
D_{c}\left( P,Q\right) =\inf \big\{\mathbb{E}_{\pi }\left[ c( U,V) \right]
:\pi \in \mathcal{P}\left( \mathcal{S}_{_P}\times \mathcal{S}_{_Q}\right) ,
\text{ }\pi_{_U}=P,\text{ }\pi_{_V}=Q \big\},
\end{equation}
where $\mathcal{P}( \mathcal{S}_{_P}\times \mathcal{S}_{_Q}) $ is the
set of probability distributions $\pi $ supported on $\mathcal{S}
_{P}\times \mathcal{S}_{_Q}$, and $\pi_{_U}$ and $\pi_{_V}$ denote the
marginals of $U$ and $V$ under $\pi $, respectively. Because
$c( \cdot ) $ is non-negative we have that $D_{c}( P,Q) \geq 0$.
Moreover, requiring that $c( u,v) =0$ if and only if $u=v$ guarantees
that $D_{c}( P,Q) =0$ if and only if $P=Q$. If, in addition,
$c( \cdot ) $ is symmetric (i.e. $c( u,v) =c( v,u) $), and there
exists $\varrho \geq 1$ such that
$c^{1/\varrho }( u,w) \leq c^{1/\varrho }( u,v) +c^{1/\varrho }(v,w) $
(i.e. $c^{1/\varrho }( \cdot) $ satisfies the triangle inequality)
then it can be easily verified (see \citeapos{villani2008optimal}) that $
D_{c}^{1/\varrho }\left( P,Q\right) $ is a metric. For example, if
$c( u,v) =\Vert u-v\Vert _{q}^{\varrho }$ for $q\geq 1$ (where $
\Vert u-v\Vert _{q}$ denotes the $l_{q}$ norm in $\mathbb{R}
^{d+1} $) then $D_{c}( \cdot ) $ is known as the Wasserstein distance
of order $\varrho $. Observe that (\ref{Discrepancy_Def}) is a linear
program in the variable $\pi.$
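Since (\ref{Discrepancy_Def}) is a linear program when $P$ and $Q$ are finitely supported, $D_{c}(P,Q)$ can be computed with an off-the-shelf LP solver. The following sketch (our own helper, assuming SciPy is available) flattens the coupling $\pi$ row-major and imposes the two marginal constraints:

```python
import numpy as np
from scipy.optimize import linprog

def transport_cost(p, q, C):
    """Optimal transport cost D_c(P, Q) between two finitely supported
    distributions via Kantorovich's LP.

    p, q : probability vectors over the supports of P and Q.
    C    : cost matrix with C[i, j] = c(u_i, w_j).
    """
    n, m = C.shape
    # Row-sum constraints: sum_j pi_ij = p_i.
    A_rows = np.zeros((n, n * m))
    for i in range(n):
        A_rows[i, i * m:(i + 1) * m] = 1.0
    # Column-sum constraints: sum_i pi_ij = q_j.
    A_cols = np.zeros((m, n * m))
    for j in range(m):
        A_cols[j, j::m] = 1.0
    A_eq = np.vstack([A_rows, A_cols])
    b_eq = np.concatenate([p, q])
    # Minimize sum_{ij} C_ij pi_ij over couplings pi >= 0.
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.fun
```

This generic LP is adequate for small supports; specialized solvers scale better for large instances.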
\subsection{On Metric Learning Procedures}
In order to keep the discussion focused, we use a few metric learning
procedures, but we emphasize that our approach can be used in combination
with virtually any method in the metric learning literature, see the survey
paper \citeapos{bellet2013survey} that contains additional discussion on metric learning
procedures. The procedures that we consider, as we shall see, can be seen to
already improve significantly upon natural benchmarks. Moreover, as we shall
see, these metric families can be related to adaptive regularization. This
connection will be useful to further enhance the intuition of our procedure.
\subsubsection{The Mahalanobis Distance\label{Subsec_Mahala}}
The Mahalanobis metric is defined as
\begin{equation*}
d_{\Lambda }\left( x,x^{\prime }\right) =\left( \left( x-x^{\prime }\right)
^{T}\Lambda \left( x-x^{\prime }\right) \right) ^{1/2},
\end{equation*}
where $\Lambda$ is symmetric and positive semi-definite and we write
$\Lambda \in PSD$. Note that $d_{\Lambda }( x,x^{\prime }) $ is the
metric induced by the norm
$\Vert x\Vert _{\Lambda }=\sqrt{x^{T}\Lambda x
}$.
\newline
For a discussion, assume that our data is of the form
$\mathcal{D}_{n}=\{ (X_{i},Y_{i})\} _{i=1}^{n}$ and
$Y_{i}\in \{-1,+1\}$. The prediction variables are assumed to be
standardized. Motivated by applications such as social networks, in
which there is a natural graph which can be used to connect instances
in the data, we assume that one is given sets $\mathcal{M}$ and
$\mathcal{N}$, where $\mathcal{M}$ is the set of pairs that should
be close to each other (so that we can connect them) and
$\mathcal{N}$, on the contrary, characterizes the pairs that
should be far apart (not connected). We define them as
\begin{eqnarray*}
\mathcal{M} := \left\{ \left( X_{i},X_{j}\right) \text{ }|\text{ }X_{i}\text{
and }X_{j}\text{ must connect}\right\}, \quad \\
\mathcal{N} :=\left\{ \left( X_{i},X_{j}\right) \text{ }|\text{ }X_{i}\text{
and }X_{j}\text{ should not connect}\right\} .
\end{eqnarray*}
While it is typically assumed that $\mathcal{M}$ and $\mathcal{N}$ are
given, one may always resort to $k$-Nearest-Neighbor ($k$-NN) method for the
generation of these sets. This is the approach that we follow in our
numerical experiments. But we emphasize that choosing any criterion for the
definition of $\mathcal{M}$ and $\mathcal{N}$ should be influenced by the
learning task in order to retain both interpretability and performance.
\newline
In our experiments we let $\left( X_{i},X_{j}\right) $ belong to $\mathcal{M}
$ if, in addition to being sufficiently close (i.e. in the $k$-NN
criterion), $Y_{i}=Y_{j}$. If $Y_{i}\neq Y_{j}$, then we have that $\left(
X_{i},X_{j}\right) \in \mathcal{N}$.
\newline
The work of \citeapos{xing2002distance}, one
of the earlier references on the subject, suggests considering
\begin{align}
\min_{\Lambda\in PSD}& \sum_{\left( X_{i},X_{j}\right) \in \mathcal{M}
}d_{\Lambda}^{2}\left( X_{i},X_{j}\right) \\
\quad s.t.& \quad \sum_{\left( X_{i},X_{j}\right) \in \mathcal{N}}d_{\Lambda}^{2}\left(
X_{i},X_{j}\right) \geq \bar{\lambda}.
\label{Eqn-Metric-Learn-Opt}
\end{align}
In words, the previous optimization problem minimizes the total distance
between pairs that should be connected, while keeping the pairs that should
not be connected well separated. The constant $\bar{\lambda}>0$ is somewhat
arbitrary (given that $\Lambda $ can be rescaled by $\bar{\lambda}$, we
can choose $\bar{\lambda}=1$).
\newline
The optimization problem (\ref{Eqn-Metric-Learn-Opt}) is a linear program over
the convex cone $PSD$ (i.e., a semidefinite program) and it has been widely studied. Since $\Lambda
\in PSD,$ we can always write $\Lambda =L^{T}L$, and therefore
$d_{\Lambda}( X_{i},X_{j}) =\left\Vert
X_{i}-X_{j}\right\Vert_{\Lambda}:=\left\Vert LX_{i}-LX_{j}\right\Vert
_{2}.$
There are various techniques which can be used to exploit the \textit{PSD}
structure to efficiently solve (\ref{Eqn-Metric-Learn-Opt}); see, for
example, \citeapos{xing2002distance} for a
projection-based algorithm; or \citeapos{schultz2004learning}, which uses a factorization-based procedure;
or the survey paper \citeapos{bellet2013survey} for the discussion of a wide range of
techniques.
\newline
We have chosen formulation (\ref{Eqn-Metric-Learn-Opt}) to estimate $\Lambda
$ because it is intuitive and easy to state, but the topic of learning
Mahalanobis distances is an active area of research and there are different
algorithms which can be implemented (see \citeapos{li2016mahalanobis}).
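As an illustration of how (\ref{Eqn-Metric-Learn-Opt}) can be approached numerically, the sketch below runs a penalized projected subgradient method: the constraint is handled through an exterior penalty (the multiplier \texttt{mu}, step size, and iteration count are our illustrative choices, not taken from the cited references), and each iterate is projected back onto the $PSD$ cone by clipping negative eigenvalues.

```python
import numpy as np

def learn_mahalanobis(M_pairs, N_pairs, n_iter=500, lr=0.05, mu=10.0):
    """Penalized projected subgradient sketch for
    min_{Lambda in PSD} sum_M d_Lambda^2  s.t.  sum_N d_Lambda^2 >= 1."""
    d = M_pairs[0][0].shape[0]
    Lam = np.eye(d)
    SM = sum(np.outer(x - y, x - y) for x, y in M_pairs)  # gradient of objective
    SN = sum(np.outer(x - y, x - y) for x, y in N_pairs)  # gradient of constraint
    for _ in range(n_iter):
        # Exterior penalty: push back only when the constraint is violated.
        grad = SM - mu * SN * (np.trace(SN @ Lam) < 1.0)
        Lam = Lam - lr * grad
        # Project onto the PSD cone by clipping negative eigenvalues.
        w, V = np.linalg.eigh((Lam + Lam.T) / 2.0)
        Lam = (V * np.clip(w, 0.0, None)) @ V.T
    return Lam

# Toy example: M-pairs differ along coordinate 0, N-pairs along coordinate 1,
# so the learned metric discounts coordinate 0 and keeps coordinate 1.
M = [(np.array([0.0, 0.0]), np.array([1.0, 0.0]))]
N = [(np.array([0.0, 0.0]), np.array([0.0, 1.0]))]
Lam = learn_mahalanobis(M, N)
```

In the toy example the learned $\Lambda$ assigns (near-)zero weight to the direction along which connected pairs differ, in line with the intuition of the formulation.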
\subsubsection{Using Mahalanobis Distance in Data-Driven DRO
\label{Sec-Mahab-DRO}}
Let us assume that the underlying data takes the form
$\mathcal{D}
_{n}=\{ ( X_{i},Y_{i}) \} _{i=1}^{n}$, where $X_{i}\in \mathbb{R}^{d}$ and
$Y_{i}\in \mathbb{R}$, and that the loss function, depending on a decision variable
$\beta \in \mathbb{R}^{m}$, is given by $l( x,y,\beta) $. Note that we are not
imposing any linear structure on the underlying model or in the loss
function. Then, motivated by the cost function (\ref{Cost}), we may
consider
\begin{equation}
c_{_\Lambda }\big( ( x,y) ,( x^{\prime },y^{\prime })
\big) =d_{\Lambda }^{2}\left( x,x^{\prime }\right) I\left( y=y^{\prime
}\right) +\infty I\left( y\neq y^{\prime }\right) , \label{Cost_CA}
\end{equation}
for $\Lambda \in PSD$. The infinite contribution in the definition of
$c_{_\Lambda }$ (i.e.
$\infty \cdot I\left( y\neq y^{\prime }\right) $) indicates that the
adversarial player in the DRO formulation is not allowed to perturb
the response variable.
Even so, since the definitions of $\mathcal{M}$ and
$\mathcal{N}$ depend on $W_{i}=\left( X_{i},Y_{i}\right) $ (in
particular, on the response variable), the cost function
$c_{_\Lambda }( \cdot) $ (once $\Lambda $ is calibrated using, for
example, the method discussed in the previous subsection) will be
informed by the $Y_{i}$'s.
Then, we estimate $\beta $ via
\begin{equation}
\min_{\beta }\sup_{P:D_{c_{\Lambda }}\left( P,P_{n}\right) \leq \delta
}\mathbb{E}[l( X,Y,\beta) ]. \label{Linear}
\end{equation}
It is important to note that $\Lambda $ has been applied only to the
definition of the cost function.
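A direct transcription of the cost (\ref{Cost_CA}) is straightforward; the sketch below (the function name is ours) returns $d_\Lambda^{2}$ on the predictors and $+\infty$ whenever the response is perturbed:

```python
import numpy as np

def cost_lambda(x, y, x_prime, y_prime, Lam):
    """The cost (Cost_CA): Mahalanobis-squared transport cost on the
    predictors, and an infinite cost for perturbing the response."""
    if y != y_prime:
        return np.inf              # the adversary may not change the label
    diff = np.asarray(x, dtype=float) - np.asarray(x_prime, dtype=float)
    return float(diff @ Lam @ diff)

# Moving x is charged through d_Lambda^2; flipping the label costs infinity.
c_move = cost_lambda([0.0, 0.0], 1, [1.0, 0.0], 1, np.diag([2.0, 1.0]))  # 2.0
c_flip = cost_lambda([0.0, 0.0], 1, [0.0, 0.0], -1, np.diag([2.0, 1.0]))
```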
\newline
The intuition behind the formulation can be gained in the context
of a logistic regression setting, see the example in Figure
1(b): Suppose that $d=2$, and that $Y$ depends only on $X(1) $
(i.e. the first coordinate of $X$). Then, the metric
learning procedure in (\ref{Eqn-Metric-Learn-Opt}) will
induce a relatively low transportation cost across the $X( 2) $
direction and a relatively high transportation cost in the $X( 1) $
direction, whose contribution, being highly informative, is reasonably
captured by the metric learning mechanism. Since the $X(1) $ direction
is most impactful, from the standpoint of expected loss estimation,
the adversarial player will reach a compromise, between transporting
(which is relatively expensive) and increasing the expected loss
(which is the adversary's objective). Out of this
compromise the DRO procedure localizes the out-of-sample
enhancement, and yet improves generalization.
\subsubsection{Mahalanobis Metrics on a Non-Linear Feature Space}
In this section, we consider the case in which the cost
function is defined after applying a non-linear transformation,
$\Phi :\mathbb{R}^{d}\rightarrow \mathbb{R}^{l}$, to the data. Assume that the data
takes the form
$\mathcal{D}_{n}=\left\{ \left( X_{i},Y_{i}\right) \right\}
_{i=1}^{n}$, where $X_{i}\in \mathbb{R}^{d}$ and $Y_{i}\in \mathbb{R}
$, and the loss
function, depending on decision variable $\beta \in \mathbb{R}^{m}$, is given
by $l\left( x,y,\beta \right) $.
Once again, motivated by the cost function (\ref{Cost}), we may define
\begin{equation}
c_{_\Lambda }^{\Phi }\big( ( x,y) ,( x^{\prime },y^{\prime
}) \big) =d_{\Lambda }^{2}\left( \Phi \left( x\right) ,\Phi \left(
x^{\prime }\right) \right) I\left( y=y^{\prime }\right) +\infty I\left(
y\neq y^{\prime }\right) , \label{Cost_C_Phi}
\end{equation}
for $\Lambda \in PSD$. To preserve the properties of a cost function
(i.e. non-negativity, lower semicontinuity and
$c_{\Lambda }^{\Phi }\left( u,w\right) =0$ implies $u=w$), we assume
that $\Phi \left( \cdot \right) $ is continuous and that
$\Phi \left( w\right) =\Phi \left( u\right) $ implies that $w=u$. Then
we can apply a metric learning procedure, such as the one described in
(\ref
{Eqn-Metric-Learn-Opt}), to calibrate $\Lambda $.
The intuition is
the same as the one provided in the linear case in Section
\ref{Sec-Mahab-DRO}.
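The cost (\ref{Cost_C_Phi}) is equally easy to transcribe; the sketch below uses an assumed quadratic feature map (our illustrative choice, which is continuous and injective as required):

```python
import numpy as np

def phi_map(x):
    """An illustrative feature map with linear and quadratic monomials;
    it is continuous and injective (its first block is x itself)."""
    x = np.asarray(x, dtype=float)
    return np.concatenate([x, x ** 2])

def cost_lambda_phi(x, y, x_prime, y_prime, Lam):
    """The cost (Cost_C_Phi): Mahalanobis-squared distance in feature
    space, infinite if the response is perturbed."""
    if y != y_prime:
        return np.inf
    diff = phi_map(x) - phi_map(x_prime)
    return float(diff @ Lam @ diff)

# Moving x from 1 to 2 moves (x, x^2) from (1, 1) to (2, 4): cost 1 + 9 = 10.
v_move = cost_lambda_phi([1.0], 1, [2.0], 1, np.eye(2))
```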
\section{Data Driven Cost Selection and Adaptive Regularization\label
{Sec-Data-Driven-Cost}}
In this section we establish a direct connection between our fully
data-driven DRO procedure and adaptive regularization. Moreover, our main
result here, together with our discussion from the previous section,
provides a direct connection between the metric learning literature and
adaptive regularized estimators. As a consequence, the methods from the
metric learning literature can be used to estimate the parameter of
adaptively regularized estimators.
\newline
Throughout this section we consider again a data set of the form
$\mathcal{D}
_{n}=\left\{ \left( X_{i},Y_{i}\right) \right\} _{i=1}^{n}$ with
$X_{i}\in \mathbb{R}^{d}$ and $Y_{i}\in \mathbb{R}$. Motivated by the cost function
(\ref{Cost}) we define the cost function $c_{_\Lambda}(\cdot)$ as in
(\ref{Cost_CA}).
Using
(\ref{Cost_CA}) we obtain the following result, which is proved in the appendix.
\begin{theorem}[DRO Representation for Generalized Adaptive Regularization]
\label{Thm-DRO-Rep-Adaptive-Reg} Assume that $\Lambda \in \mathbb{R}^{d\times d}$ in (
\ref{Cost_CA}) is positive definite. Given the data set $\mathcal{D}_{n}$,
we obtain the following representation:
\begin{align}
\min_{\beta }\max_{P:D_{c_{\Lambda }}\left( P,P_{n}\right) \leq \delta }
\mathbb{E}_{P}^{1/2}\left[ \left( Y-X^{T}\beta \right) ^{2}\right]
=\min_{\beta } \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(
Y_{i}-X_{i}^{T}\beta\right)^{2}}+\sqrt{\delta }\left\Vert \beta
\right\Vert _{\Lambda ^{-1}} . \label{GAR_1}
\end{align}
Moreover, if $Y\in \left\{ -1,+1\right\} $ in the context of adaptive
regularized logistic regression, we obtain the following representation
\begin{eqnarray}
\min_{\beta }\max_{P:D_{c_{\Lambda }}\left( P,P_{n}\right)\leq \delta }\mathbb{E}
\left[ \log \left( 1+e^{ -Y(X^{T}\beta )} \right) \right]
=\min_{\beta }\frac{1}{n}\sum_{i=1}^{n}\log \left( 1+e^{
-Y_{i}(X_{i}^{T}\beta )} \right) +\delta \left\Vert \beta \right\Vert
_{\Lambda ^{-1}}. \label{GAR_2}
\end{eqnarray}
\end{theorem}
In order to recover a more familiar setting in adaptive
regularization, assume that $\Lambda $ is a diagonal positive definite
matrix, in which case we obtain, in the setting of (\ref{GAR_1}),
\begin{eqnarray}
\min_{\beta }\max_{P:D_{c_{\Lambda }}\left( P,P_{n}\right) \leq \delta }
\mathbb{E}_{P}^{1/2}\left[ \left( Y-X^{T}\beta \right) ^{2}\right]
=\min_{\beta } \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(
Y_{i}-X_{i}^{T}\beta \right) ^{2}}+\sqrt{\delta }\sqrt{\sum_{i=1}^{d}\beta
_{i}^{2}/\Lambda _{ii}}. \label{DRO_RA}
\end{eqnarray}
The adaptive regularization method was first derived as a generalization of
ridge regression in \citeapos{hoerl1970ridge}
and \citeapos{hoerl1970ridge2}. Recent work
shows that adaptive regularization can improve the predictive power of its
non-adaptive counterpart, especially in high-dimensional settings (see \citeapos{zou2006adaptive} and
\citeapos{ishwaran2014geometry}).
\newline
In view of (\ref{DRO_RA}), our discussion in Section
\ref{Subsec_Mahala} uncovers tools which can be used to
estimate the coefficients $
\{1/\Lambda _{ii}:1\leq i\leq d\}$ using the connection to metric learning
procedures. To complement the intuition given in Figure 1(b), note that
in the adaptive regularization literature one often chooses
$\Lambda _{ii}\approx 0$ to induce $\beta _{i}\approx 0$ (i.e., there
is a high penalty on variables with low explanatory power). This, in
our setting, corresponds to transport costs which are low in
such low-explanatory directions.
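For a diagonal $\Lambda$, the right-hand side of (\ref{DRO_RA}) is an ordinary adaptively regularized objective and can be evaluated directly; a minimal sketch (the function name is ours):

```python
import numpy as np

def adaptive_ridge_objective(beta, X, Y, Lam, delta):
    """Right-hand side of (DRO_RA): root mean squared residual plus the
    adaptively weighted penalty sqrt(delta) * ||beta||_{Lambda^{-1}}."""
    resid = Y - X @ beta
    rmse = np.sqrt(np.mean(resid ** 2))
    penalty = np.sqrt(delta) * np.sqrt(beta @ np.linalg.inv(Lam) @ beta)
    return rmse + penalty

# With a perfect fit the residual term vanishes and only the penalty remains.
val = adaptive_ridge_objective(np.ones(2), np.eye(2), np.ones(2),
                               np.diag([1.0, 4.0]), 0.25)
```

In the example the residual is zero, so the value reduces to $\sqrt{\delta}\sqrt{\beta_1^2/\Lambda_{11}+\beta_2^2/\Lambda_{22}}$.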
\section{Solving Data Driven DRO Based on Optimal Transport Discrepancies
\label{Sec-Solving-DRO}}
In order to take full
advantage of the synergies between metric learning
methodology and our DRO formulation, it is crucial to have a
methodology which allows us to efficiently estimate $\beta $ in DRO
problems such as (\ref{Eqn-DRO_origin}). In the presence of a
simplified representation such as (\ref{DR_Las}) or (
\ref{DRO_RA}), we can apply standard stochastic optimization results
(see \citeapos{lei2016less}).
\newline Our objective in this section is
to study algorithms which can be applied for more general loss and
cost functions, for which a simplified representation might not be
accessible.
\newline
Throughout this section, once again we assume that the data is given
in the form
$\mathcal{D}_{n}=\left\{ \left( X_{i},Y_{i}\right) \right\}
_{i=1}^{n}\subset \mathbb{R}^{d+1}$. The loss function is written as
$
\{l\left( x,y,\beta \right) :\left( x,y\right) \in
\mathbb{R}^{d+1},\beta \in \mathbb{R}^{m}\}$. We assume that for each
$\left( x,y\right) $, the function $l\left( x,y,\cdot \right) $ is
convex and continuously differentiable. Further, we shall consider
cost functions of the form
\begin{equation*}
\bar{c}\left( \left( x,y\right) ,\left( x^{\prime },y^{\prime }\right)
\right) =c\left( x,x^{\prime }\right) I\left( y=y^{\prime }\right) +\infty
I\left( y\neq y^{\prime }\right) ,
\end{equation*}
as this will simplify the form of the dual representation in the inner
optimization of our DRO formulation. To ensure boundedness of
our DRO formulation, we impose the following assumption.\\
\textbf{Assumption 1.} There exists
$\Gamma ( \beta ,y) \in (0,\infty)$ such that
$l( u,y,\beta) \leq \Gamma ( \beta ,y) \cdot (1+c(u,x) )$ for all
$u\in\mathbb{R}^{d}$ and all $(x,y) \in \mathcal{D}_{n}$.
Under Assumption 1, we can guarantee that
\begin{equation*}
\max_{P:D_{c}\left( P,P_{n}\right) \leq \delta }\mathbb{E}_{P}\left[ l\left(
X,Y,\beta \right) \right] \leq \left( 1+\delta \right) \max_{i=1,\ldots,n}\Gamma
\left( \beta ,Y_{i}\right) <\infty .
\end{equation*}
Using the strong duality theorem for semi-infinite linear programming
problems in Appendix B of \citeapos{blanchet2016robust}, we have
\begin{equation}
\max_{P:D_{c}\left( P,P_{n}\right) \leq \delta }\mathbb{E}_{P}\left[ l\left(
X,Y,\beta \right) \right] =\min_{\lambda \geq 0} \frac{1}{n}
\sum_{i=1}^{n}\phi \left( X_{i},Y_{i},\beta ,\lambda \right) ,
\label{Eqn-Worst-Loss}
\end{equation}
where
$\psi ( u,X,Y,\beta ,\lambda ) :=l( u,Y,\beta ) -\lambda (c( u,X)-
\delta),$
$ \phi \left( X,Y,\beta ,\lambda \right) :=\max_{u\in \mathbb{R}^{d}}
\psi ( u,X,Y,\beta ,\lambda ).$ Therefore,
\begin{equation}
\min_{\beta
}\max_{P:D_{c_{\Lambda}}\left( P,P_{n}\right) \leq
\delta }\mathbb{E}_{P}\left[ l\left( X,Y,\beta \right) \right]
=\min_{\lambda \geq 0,\beta }\left\{ \mathbb{E}_{P_{n}}\left[ \phi \left(
X,Y,\beta ,\lambda \right) \right] \right\} . \label{Eqn-DRO-Dual}
\end{equation}
The optimization in \eqref{Eqn-DRO-Dual} is a minimization over $\beta $ and
$\lambda $, for which we may consider a stochastic approximation algorithm,
provided the gradients of $\phi \left( \cdot \right) $ with respect to $\beta $
and $\lambda $ exist. However, $\phi \left( \cdot \right) $ is given as
the value function of a maximization problem, whose gradient is not easily
accessible. We
discuss the detailed algorithm and the validity of the smoothing
approximation below.
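For a one-dimensional toy loss and cost, $\phi$ in (\ref{Eqn-Worst-Loss}) can be approximated by brute-force maximization of $\psi$ over a grid; the sketch below (all names, the toy squared loss, and the squared-distance cost are our illustrative choices) shows the idea:

```python
import numpy as np

def phi_grid(x, y, beta, lam, delta, loss, cost, grid):
    """Approximate phi(X, Y, beta, lambda) = max_u psi(u, X, Y, beta, lambda)
    by exhaustive search over a grid of candidate u's (1-d illustration)."""
    psi = np.array([loss(u, y, beta) - lam * (cost(u, x) - delta)
                    for u in grid])
    return psi.max()

# Toy ingredients: squared loss and squared-distance cost in one dimension;
# lam is taken large enough that psi is concave, so the maximum is finite.
loss = lambda u, y, b: (y - b * u) ** 2
cost = lambda u, x: (u - x) ** 2
grid = np.linspace(-10.0, 10.0, 20001)
val = phi_grid(1.0, 0.0, 1.0, 5.0, 0.1, loss, cost, grid)
# Here psi(u) = -4u^2 + 10u - 4.5, maximized at u = 1.25 with value 1.75.
```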
\newline
We consider a smoothing approximation technique that replaces the maximization
problem defining $\phi \left( \cdot \right) $ with its soft-max counterpart, $\phi
_{\epsilon ,f }\left( \cdot \right) $. Soft-max smoothing
has been explored and applied to approximately solve the DRO
problem in the discrete case, in which the distributional
uncertainty set is restricted to probability measures supported on a finite set
(i.e., labeled training data and unlabeled training data with pseudo-labels);
we refer to \citeapos{blanchet2017distributionally} for further details.
\newline
However, because the support here is continuous (and hence infinite), the soft-max
approximation is a non-trivial generalization of the finite-discrete
analogue. The smoothing approximation for $\phi \left( \cdot \right) $ is
defined as
\begin{equation*}
\phi _{\epsilon ,f }\left( X,Y,\beta ,\lambda \right) =\epsilon \log
\left( \int_{\mathbb{R}^{d}}\exp \left( \left[ \psi \left( u,X,Y,\beta
,\lambda \right) \right] /\epsilon \right) f\left( u\right) du\right) ,
\end{equation*}
where $f\left( \cdot \right) $ is a probability density on $\mathbb{R}^{d}$
(for example, a multivariate normal density) and $
\epsilon $ is a small positive number regarded as a smoothing parameter.
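The quantity $\phi_{\epsilon,f}$ can be evaluated by Monte Carlo as a log-sum-exp; the sketch below uses a centered normal density for $f$ and a toy one-dimensional squared loss and squared-distance cost (all illustrative choices of ours):

```python
import numpy as np
from scipy.special import logsumexp

def phi_smooth(x, y, beta, lam, delta, eps, n_samples=200000, sigma=3.0, seed=0):
    """Monte Carlo evaluation of the soft-max smoothing
    phi_{eps,f} = eps * log E_{U~f}[exp(psi(U, ...)/eps)], with f a centered
    normal density and a toy 1-d squared loss and squared-distance cost."""
    rng = np.random.default_rng(seed)
    U = rng.normal(0.0, sigma, size=n_samples)
    psi = (y - beta * U) ** 2 - lam * ((U - x) ** 2 - delta)
    # eps * log of the empirical mean of exp(psi/eps), computed stably.
    return eps * (logsumexp(psi / eps) - np.log(n_samples))

# The smoothed value never exceeds phi = max_u psi, as in the theorem below.
val = phi_smooth(1.0, 0.0, 1.0, 5.0, 0.1, eps=0.05)
```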
\newline
Theorem \ref{Thm-Smooth-Approx} below allows us to quantify the error due to the smoothing approximation.
\begin{theorem}\label{Thm-Smooth-Approx}
Under mild technical assumptions (see Assumption 1-4 in Appendix \ref{appendix-B}), there exists $\epsilon_0>0$ such that
for every $\epsilon<\epsilon_0$, we have
\begin{eqnarray*}
\phi (X,Y,\beta ,\lambda ) \geq \phi _{\epsilon ,f }(X,Y,\beta ,\lambda )
\geq \phi (X,Y,\beta ,\lambda )-d\epsilon \log (1/\epsilon ).
\end{eqnarray*}
\end{theorem}
The proof of Theorem \ref{Thm-Smooth-Approx} is given in Appendix \ref{appendix-B}.
\newline
After applying the smoothing approximation, the optimization problem becomes a
standard stochastic optimization problem, and we can apply a mini-batch based
stochastic approximation (SA) algorithm to solve it. As a
function of $\beta $ and $\lambda $, the gradient of $\phi _{\epsilon ,f
}\left( \cdot \right) $ satisfies
\begin{align*}
\nabla _{\beta }\phi _{\epsilon ,f }\left( X,Y,\beta ,\lambda \right) & =
\frac{\mathbb{E}_{U\sim f }\left[ \exp \left( \psi \left( U,X,Y,\beta
,\lambda \right) /\epsilon \right) \nabla _{\beta }l\left( U,Y,\beta
\right) \right] }{\mathbb{E}_{U\sim f }\left[ \exp \left( \psi \left(
U,X,Y,\beta ,\lambda \right) /\epsilon \right) \right] }, \\
\nabla _{\lambda }\phi _{\epsilon ,f }\left( X,Y,\beta ,\lambda \right) & =
\frac{\mathbb{E}_{U\sim f }\left[ \exp \left( \psi \left( U,X,Y,\beta
,\lambda \right) /\epsilon \right) \left( \delta -c\left(
U,X\right) \right) \right] }{\mathbb{E}_{U\sim f }\left[ \exp \left( \psi \left(
U,X,Y,\beta ,\lambda \right) /\epsilon \right) \right] }.
\end{align*}
Since the gradients are still given in the form of
expectations, we can apply a simple Monte Carlo sampling algorithm to
approximate them: we sample $U_{i}$'s from
$f (\cdot )$ and evaluate the numerators and denominators of the
gradients separately using Monte Carlo. For more details of the SA
algorithm, see Algorithm \ref{Algo-Cont}.
\begin{algorithm}
\caption{Stochastic Gradient Descent with Continuous State}\label{Algo-Cont}
\begin{algorithmic}[1]
\State \textbf{Initialize} $\lambda = 0$, $\beta$ as the
empirical risk minimizer, $\epsilon = 0.5,$ and the tracking error $Error = 100$.
\While{$Error>10^{-3}$} \State \textbf{Sample} a mini-batch
uniformly from observations
$\left\{X_{(j)},Y_{(j)}\right\}_{j=1}^{M}$, with $M\leq n$.
\State For each $j = 1,\ldots,M$, sample
i.i.d. $\{U_{k}^{(j)}\}_{k=1}^{L}$ from
$\mathcal{N}\left(0,\sigma^{2}I_{d\times d}\right)$. \State
We denote by $f_L^{j}$ the empirical distribution of the
$U_{k}^{(j)}$'s, and estimate the batch gradients as
\begin{align*}
\nabla_{\beta} \phi_{\epsilon,f}
= \frac{1}{M}\sum_{j=1}^{M}
\nabla_{\beta}\phi_{\epsilon,f_L^j}\left(X_{(j)},Y_{(j)},\beta,\lambda\right),
\nabla_{\lambda} \phi_{\epsilon,f}
= \frac{1}{M}\sum_{j=1}^{M}
\nabla_{\lambda}\phi_{\epsilon,f_L^j}\left(X_{(j)},Y_{(j)},\beta,\lambda\right).
\end{align*}
\State Update $\beta$ and $\lambda$ via $\beta =
\beta - \alpha_{\beta} \nabla_{\beta}
\phi_{\epsilon,f}$ and $\lambda = \max\left\{\lambda - \alpha_{\lambda} \nabla_{\lambda} \phi_{\epsilon,f},\, 0\right\}.$
\State Update the tracking error $Error$ as the norm of the difference between the latest parameters and their average over the last $50$ iterations.
\EndWhile
\State \textbf{Output} $\beta$.
\end{algorithmic}
\end{algorithm}
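The gradient estimation step inside Algorithm \ref{Algo-Cont} amounts to self-normalized importance weighting; a minimal sketch for a toy one-dimensional squared loss and squared-distance cost (all names and the toy ingredients are our illustrative choices):

```python
import numpy as np

def mc_gradients(x, y, beta, lam, delta, eps, n_samples=5000, sigma=1.0, seed=0):
    """Self-normalized Monte Carlo estimators of the gradients of
    phi_{eps,f} with respect to beta and lambda, for the toy 1-d squared
    loss l(u, y, beta) = (y - beta*u)^2 and cost c(u, x) = (u - x)^2."""
    rng = np.random.default_rng(seed)
    U = rng.normal(0.0, sigma, size=n_samples)
    psi = (y - beta * U) ** 2 - lam * ((U - x) ** 2 - delta)
    w = np.exp((psi - psi.max()) / eps)        # stabilized soft-max weights
    w /= w.sum()
    grad_beta = np.sum(w * (-2.0 * U * (y - beta * U)))  # weighted d l / d beta
    grad_lam = np.sum(w * (delta - (U - x) ** 2))        # weighted d psi / d lambda
    return grad_beta, grad_lam

grad_beta, grad_lam = mc_gradients(1.0, 0.0, 1.0, 5.0, 0.1, eps=0.1)
```

Since the weights are non-negative and sum to one, the $\lambda$-gradient is always bounded above by $\delta$, reflecting that the adversary's budget constraint tightens as $\lambda$ grows.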
\section{Numerical Experiments \label{Sec-Numerical}}
We validate our data-driven cost function based DRO using six real data sets from the UCI machine learning
repository \citeapos{Lichman:2013}. We focus on a DRO
formulation based on the log-exponential loss for a linear model. We
use the linear metric learning framework explained in equation (\ref
{Eqn-Metric-Learn-Opt}), which we then feed into the cost function, $
c_{\Lambda }$, as in (\ref{Cost_CA}); the resulting procedure is denoted DRO-L. In addition, we also fit a cost function
$c_{\Lambda }^{\Phi }$, as explained in (\ref{Cost_C_Phi}), using
linear and quadratic transformations of the data; the outcome is
denoted DRO-NL. We compare our DRO-L and DRO-NL with logistic regression (LR) and regularized
logistic regression (LRL1). For
each iteration and each data set, the data is split randomly into
training and test sets. We fit the models on the training set and
evaluate the performance on the test set. The regularization parameter is
chosen via $5$-fold cross-validation for LRL1, DRO-L and DRO-NL. We
report the mean and standard deviation of the training and testing
log-exponential errors
and the testing accuracy over $200$
independent experiments for each data set. The details of the
numerical results and basic information about the data sets are summarized in
Table \ref{Tab-Reals}.
\begin{table}[ht]
\centering
\begin{tabular}{cc|c|c|c|c|c|c}
& & BC & BN & QSAR & Magic & MB & SB \\ \hline
\multicolumn{1}{c|}{\multirow{3}{*}{LR}} & Train & $0\pm0$ & $.008\pm.003$ & $.026\pm.008$ & $.213\pm.153$ & $0\pm 0$ & $0 \pm 0$ \\
\multicolumn{1}{c|}{} & Test & $8.75\pm 4.75$ & $2.80\pm1.44$ & $35.5\pm 12.8$ & $17.8\pm 6.77$ & $18.2\pm 10.0$ & $14.5\pm 9.04$ \\
\multicolumn{1}{c|}{} & Accur & $.762\pm.061$ & $.926\pm.048$ & $.701\pm .040$ & $.668\pm.042$ & $.678\pm.059$ & $.789 \pm .035$ \\ \hline
\multicolumn{1}{c|}{\multirow{3}{*}{LRL1}} & Train & $.185\pm.123$ & $.080\pm.030$ & $.614\pm.038$ & $.548\pm.087$ & $.401\pm .167$ & $.470 \pm .040$ \\
\multicolumn{1}{c|}{} & Test & $.428\pm.338$ & $.340\pm.228$ & $.755\pm.019$ & $.610\pm.050$ & $.910\pm.131$ & $.588 \pm .140$ \\
\multicolumn{1}{c|}{} & Accur & $.929\pm.023$ & $.930\pm.042$ & $.646\pm .036$ & $.665\pm.045$ & $.717\pm.041$ & $.811 \pm .034$ \\ \hline
\multicolumn{1}{c|}{\multirow{3}{*}{DRO-L}} & Train & $.022\pm.019$ & $.197\pm.112$ & $.402\pm.039$ & $.469\pm.064$ & $.294\pm.046$ & $.166 \pm .031$ \\
\multicolumn{1}{c|}{} & Test & $.126\pm.034$ & $.275\pm .093$ & $.557\pm .023$ & $.571\pm .043$ & $.613\pm.053$ & $.333 \pm .018$ \\
\multicolumn{1}{c|}{} & Accur & $.954\pm.015$ & $.919\pm.050$ & $.733\pm.026$ & $.727\pm.039$ & $.714 \pm .032$ & $.887 \pm .011$ \\ \hline
\multicolumn{1}{c|}{\multirow{3}{*}{DRO-NL}} & Train & $.032\pm.015$ & $.113\pm.035$ & $.339\pm.044$ & $.381\pm.084$ & $.287\pm.049$ & $.195 \pm .034$ \\
\multicolumn{1}{c|}{} & Test & $.119\pm.044$ & $.194\pm .067$ & $.554\pm.032$ & $.576\pm.049$ & $.607\pm.060$ & $.332 \pm .015$ \\
\multicolumn{1}{c|}{} & Accur & $.955\pm.016$ & $.931\pm.036$ & $.736\pm.027$ & $.730\pm.043$ & $.716\pm.054$ & $.889 \pm .009$ \\ \hline
\multicolumn{2}{c|}{Num Predictors} & $30$ & $4$ & $30$ & $10$ & $20$ & $56$ \\
\multicolumn{2}{c|}{Train Size} & $40$ & $20$ & $80$ & $30$ & $30$ & $150$ \\
\multicolumn{2}{c|}{Test Size} & $329$ & $752$ & $475$ & $9990$ & $125034$ & $2951$
\end{tabular}
\caption{Numerical results for real data sets.}
\label{Tab-Reals}
\end{table}
\section{Conclusion and Discussion \label{Sec-Conclusion}}
Our fully data-driven DRO\ procedure combines a
semiparametric approach (i.e. the metric learning procedure) with a
parametric procedure (expected loss minimization) to enhance the
generalization performance of the underlying parametric model. We
emphasize that our approach is applicable to any DRO formulation and
is not restricted to classification tasks. An interesting research
avenue that might be considered is the development of a
semisupervised framework as in \citeapos{blanchet2017distributionally}, in
which unlabeled data is used to inform the support of the elements in
$\mathcal{U}_{\delta }(P_{n})$.
\appendix
\section{Proof of Theorem \ref{Thm-DRO-Rep-Adaptive-Reg}}
We first state and prove Lemma \ref{Lemma-M-Norm} which will be useful
in proving Theorem \ref{Thm-DRO-Rep-Adaptive-Reg}.
\begin{lemma}
\label{Lemma-M-Norm} If $\Lambda $ is a positive definite matrix and we
define $\left\Vert x\right\Vert _{\Lambda }=\left( x^{T}\Lambda x\right)
^{1/2}$, then $\left\Vert \cdot \right\Vert _{\Lambda ^{-1}}$ is the dual
norm of $\left\Vert \cdot \right\Vert _{\Lambda }$. Furthermore, we have
\begin{equation*}
u^{T}w\leq \left\Vert u\right\Vert _{\Lambda }\left\Vert w\right\Vert
_{\Lambda ^{-1}},
\end{equation*}
where equality holds if and only if there exists a non-negative
constant $\tau $ such that $\tau \Lambda u=w$ or $\tau
w=\Lambda u$.
\end{lemma}
\begin{proof}[Proof for Lemma \protect\ref{Lemma-M-Norm}]
This result is a direct generalization of the $l_{2}$ norm case in Euclidean space.
Since $\Lambda $ is positive definite, it has a unique positive definite
square root $\Lambda ^{1/2}$. Note that
\begin{equation}
u^{T}w=( \Lambda ^{1/2}u) ^{T}(\Lambda ^{-1/2}w)\leq \left\Vert \Lambda^{1/2}
u\right\Vert _{2}\left\Vert \Lambda ^{-1/2}w\right\Vert _{2}=\left\Vert
u\right\Vert _{\Lambda }\left\Vert w\right\Vert _{\Lambda ^{-1}}.
\label{CSI}
\end{equation}
The inequality above is the Cauchy-Schwarz inequality in $\mathbb{R}
^{d} $ applied to $\Lambda ^{1/2}u$ and $\Lambda ^{-1/2}w$, and equality holds
if and only if there exists a non-negative $\tau $ such that $\tau \Lambda^{1/2}
u=\Lambda ^{-1/2}w $ or $\tau \Lambda ^{-1/2}w=\Lambda ^{1/2}u$, i.e.,
$\tau \Lambda u=w$ or $\tau w=\Lambda u$. Now, by the
definition of the dual norm,
\begin{equation*}
\left\Vert w\right\Vert _{\Lambda }^{\ast }=\sup_{u:\left\Vert u\right\Vert
_{\Lambda }\leq 1}u^{T}w=\sup_{u:\left\Vert u\right\Vert _{\Lambda }\leq
1}\left\Vert u\right\Vert _{\Lambda }\left\Vert w\right\Vert _{\Lambda
^{-1}}=\left\Vert w\right\Vert _{\Lambda ^{-1}}.
\end{equation*}
The first equality follows from the definition of the dual
norm; the second follows from the Cauchy-Schwarz inequality
(\ref{CSI}) together with its equality condition; and the last
equality is immediate after maximizing.
\end{proof}
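The inequality in Lemma \ref{Lemma-M-Norm} and its equality condition are easy to check numerically; a small sketch (the matrix construction is just an arbitrary way of generating a positive definite $\Lambda$):

```python
import numpy as np

# Numerical illustration of u^T w <= ||u||_Lambda * ||w||_{Lambda^{-1}},
# with equality at w = Lambda u (tau = 1 in the equality condition).
rng = np.random.default_rng(42)
A = rng.normal(size=(4, 4))
Lam = A @ A.T + 4.0 * np.eye(4)        # an arbitrary positive definite matrix
Lam_inv = np.linalg.inv(Lam)
norm_L = lambda v: float(np.sqrt(v @ Lam @ v))
norm_Linv = lambda v: float(np.sqrt(v @ Lam_inv @ v))

u = rng.normal(size=4)
w = rng.normal(size=4)
lhs = float(u @ w)
rhs = norm_L(u) * norm_Linv(w)               # lhs <= rhs for any u, w
tight_lhs = float(u @ (Lam @ u))             # u^T w with w = Lambda u
tight_rhs = norm_L(u) * norm_Linv(Lam @ u)   # equals tight_lhs
```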
\begin{proof}[Proof for Theorem \protect\ref{Thm-DRO-Rep-Adaptive-Reg}]
The technique is a generalization of the method used in proving
Theorem 1 in
\citeapos{blanchet2016robust}. We can apply the strong duality result,
see Proposition 6 in the Appendix of
\citeapos{blanchet2016robust}, to the worst-case expected loss function,
which is a semi-infinite linear programming problem, to obtain
\begin{align*}
\sup_{P:D_{c_{\Lambda}}\left( P,P_{n}\right) \leq \delta }\mathbb{E}
_{P}\left[ \left( Y-X^{T}\beta \right) ^{2}\right] =\min_{\gamma \geq
0}\left\{ \gamma \delta +\frac{1}{n}\sum_{i=1}^{n}\sup_{u}\left\{ \left(
Y_{i}-u^{T}\beta \right) ^{2}-\gamma \left\Vert X_{i}-u\right\Vert _{\Lambda
}^{2}\right\} \right\} .
\end{align*}
For the inner supremum, let us write $\Delta =u-X_{i}$
and $e_{i}=Y_{i}-X_{i}^{T}\beta $ for notational simplicity. The inner
optimization problem associated with $\left( X_{i},Y_{i}\right) $ becomes
\begin{align*}
& \sup_{u}\left\{ \left( Y_{i}-u^{T}\beta \right) ^{2}-\gamma \left\Vert
X_{i}-u\right\Vert _{\Lambda }^{2}\right\} \\
&\quad= e_{i}^{2}+\sup_{\Delta }\left\{ \left( \Delta ^{T}\beta \right)
^{2}-2e_{i}\Delta ^{T}\beta -\gamma \left\Vert \Delta \right\Vert _{\Lambda
}^{2}\right\} , \\
&\quad= e_{i}^{2}+\sup_{\Delta }\left\{ \left\vert \Delta ^{T}\beta
\right\vert ^{2}+2\left\vert
e_{i}\right\vert \left\vert \Delta ^{T}\beta \right\vert -\gamma
\left\Vert \Delta \right\Vert _{\Lambda }^{2}\right\} , \\
&\quad= e_{i}^{2}+\sup_{\left\Vert \Delta \right\Vert _{\Lambda }\geq 0}\left\{
\left\Vert \Delta \right\Vert _{\Lambda }^{2}\left\Vert \beta \right\Vert
_{\Lambda ^{-1}}^{2}+2\left\vert e_{i}\right\vert \left\Vert \Delta
\right\Vert _{\Lambda }\left\Vert \beta \right\Vert _{\Lambda ^{-1}}-\gamma
\left\Vert \Delta \right\Vert _{\Lambda }^{2}\right\} , \\
&\quad= \left\{
\begin{array}{rcl}
e_{i}^{2}\frac{\gamma }{\gamma -\left\Vert \beta \right\Vert _{\Lambda
^{-1}}^{2}} & \text{ if }\gamma >\left\Vert \beta \right\Vert _{\Lambda
^{-1}}^{2}, & \\
+\infty \text{ } & \text{ if }\gamma \leq \left\Vert \beta \right\Vert
_{\Lambda ^{-1}}^{2}. &
\end{array}
\right.
\end{align*}
The first equality is due to the change of variables. The
second holds because replacing $\Delta $ by $-\Delta $ leaves
$( \Delta ^{T}\beta ) ^{2}$ and $\left\Vert \Delta \right\Vert
_{\Lambda }$ unchanged, so the maximization can always choose
the sign of $\Delta $ for which $-2e_{i}\Delta ^{T}\beta
=2\left\vert e_{i}\right\vert \left\vert \Delta ^{T}\beta
\right\vert $. For the third equality, we apply the
Cauchy-Schwarz inequality in Lemma \ref{Lemma-M-Norm}; since we
are maximizing, the optimal $\Delta $ attains equality there.
For the last equality, if
$\gamma \leq \left\Vert \beta \right\Vert _{\Lambda
^{-1}}^{2}$ the optimization problem is unbounded, while if
$\gamma >\left\Vert \beta \right\Vert _{\Lambda ^{-1}}^{2}$
we can solve the resulting quadratic optimization problem,
which leads to the stated value.
For the outer minimization over $\gamma $, since the
inner suprema are infinite when
$\gamma \leq \left\Vert \beta \right\Vert _{\Lambda
^{-1}}^{2}$, the worst-case expected loss becomes
\begin{align}
& \sup_{P:D_{c_{\Lambda }}\left( P,P_{n}\right) \leq \delta }\mathbb{E
}_{P}\left[ \left( Y-X^{T}\beta \right) ^{2}\right] \label{AD} \\
&\quad= \min_{\gamma >\left\Vert \beta \right\Vert _{\Lambda
^{-1}}^{2}}\left\{ \gamma \delta +\frac{1}{n}\sum_{i=1}^{n}\left(
Y_{i}-X_{i}^{T}\beta \right) ^{2}\frac{\gamma }{\gamma -\left\Vert \beta
\right\Vert _{\Lambda ^{-1}}^{2}}\right\} , \notag \\
&\quad= \left( \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left( Y_{i}-X_{i}^{T}\beta \right)^{2}
}+\sqrt{\delta }\left\Vert \beta \right\Vert _{\Lambda ^{-1}}\right) ^{2}.
\notag
\end{align}
The first equality follows from the discussion above restricting $\gamma
>\left\Vert \beta \right\Vert _{\Lambda ^{-1}}^{2}$. The
objective function on the right-hand side of (\ref{AD}) is convex and
differentiable, and as $\gamma \rightarrow \infty $ or $\gamma \rightarrow
\left\Vert \beta \right\Vert _{\Lambda ^{-1}}^{2}$ its value tends to
infinity, so the optimizer is uniquely characterized via the first-order
optimality condition. Solving for $\gamma $ in this way (through first-order
optimality), it is straightforward to obtain the last equality in (\ref
{AD}). Taking the square root on both sides proves the claim for linear
regression. \newline
For the log-exponential loss function, the proof follows a similar strategy.
By applying the strong duality result for semi-infinite linear programming
problems in \citeapos{blanchet2016robust},
we can write the worst-case expected loss function as
\begin{align*}
& \quad \sup_{P:D_{c_{\Lambda }}\left( P,P_{n}\right) \leq \delta }
\mathbb{E}_{P}\left[ \log \left( 1+\exp \left( -Y\beta ^{T}X\right) \right)
\right] \\
& =\min_{\gamma \geq 0}\left\{ \gamma \delta +\frac{1}{n}\sum_{i=1}^{n}
\sup_{u}\left\{ \log \left( 1+\exp \left( -Y_{i}\beta ^{T}u\right) \right)
-\gamma \left\Vert X_{i}-u\right\Vert _{\Lambda }\right\} \right\} .
\end{align*}
For each $i$, we can apply Lemma 1 in
\citeapos{shafieezadeh2015distributionally} and the dual-norm result in Lemma
\ref{Lemma-M-Norm} to handle the inner optimization problem. This gives
\begin{eqnarray*}
\sup_{u}\left\{ \log \left( 1+\exp \left( -Y_{i}\beta ^{T}u\right) \right)
-\gamma \left\Vert X_{i}-u\right\Vert _{\Lambda }\right\}
= \left\{
\begin{array}{ccc}
\log \left( 1+\exp \left( -Y_{i}\beta ^{T}X_{i}\right) \right) & \text{if} &
\left\Vert \beta \right\Vert _{\Lambda ^{-1}}\leq \gamma , \\
\infty & \text{if} & \left\Vert \beta \right\Vert _{\Lambda ^{-1}}>\gamma .
\end{array}
\right.
\end{eqnarray*}
Moreover, since the outer optimization is a minimization, following the
same discussion as in the proof for the linear regression case, we can plug in
the result above, which leads to the first equality below:
\begin{align*}
&\min_{\gamma \geq 0}\left\{ \gamma \delta +\frac{1}{n}\sum_{i=1}^{n}
\sup_{u}\left\{ \log \left( 1+\exp \left( -Y_{i}\beta ^{T}u\right) \right)
-\gamma \left\Vert X_{i}-u\right\Vert _{\Lambda }\right\} \right\} \\
&\quad=\min_{\gamma \geq \left\Vert \beta \right\Vert _{\Lambda ^{-1}}}\left\{
\delta \gamma +\frac{1}{n}\sum_{i=1}^{n}\log \left( 1+\exp \left(
-Y_{i}\beta ^{T}X_{i}\right) \right) \right\} \\
&\quad=\frac{1}{n}\sum_{i=1}^{n}\log \left( 1+\exp \left( -Y_{i}\beta
^{T}X_{i}\right) \right) +\delta \left\Vert \beta \right\Vert _{\Lambda
^{-1}}.
\end{align*}
The objective is continuous and monotonically increasing in $\gamma $, so
the minimum is attained at $\gamma =\left\Vert \beta \right\Vert _{\Lambda
^{-1}}$, which yields the second equality above.
This proves the claim for logistic regression in the statement
of the theorem.
\end{proof}
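As a numerical sanity check of the case analysis in the proof above, one can verify the behavior of the inner supremum directly. The sketch below takes $\Lambda = I$, so that both $\left\Vert \cdot \right\Vert _{\Lambda }$ and $\left\Vert \cdot \right\Vert _{\Lambda ^{-1}}$ reduce to the Euclidean norm; the data and variable names are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
X = rng.normal(size=d)       # a single data point X_i
Y = 1.0                      # its label
beta = rng.normal(size=d)    # regression coefficients

def logistic_loss(u):
    # log(1 + exp(-Y * beta^T u)), evaluated stably
    return np.logaddexp(0.0, -Y * (beta @ u))

def inner(u, gamma):
    # the inner objective: loss minus the transport penalty
    return logistic_loss(u) - gamma * np.linalg.norm(u - X)

beta_norm = np.linalg.norm(beta)   # equals ||beta||_{Lambda^{-1}} for Lambda = I

# Case gamma >= ||beta||: the logistic loss is ||beta||-Lipschitz in u,
# so the norm penalty dominates and the supremum is attained at u = X.
gamma_hi = beta_norm + 0.5
U = X + 5.0 * rng.normal(size=(2000, d))
sup_sampled = max(inner(u, gamma_hi) for u in U)

# Case gamma < ||beta||: the objective grows without bound along -Y * beta.
gamma_lo = 0.5 * beta_norm
u_far = X - 1e3 * Y * beta / beta_norm
```

The first case reflects the Lipschitz bound $|\log(1+e^{-Y\beta^{T}u}) - \log(1+e^{-Y\beta^{T}X})| \leq \left\Vert \beta \right\Vert \left\Vert u - X \right\Vert$, while the second exhibits the divergent direction that makes the supremum infinite.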
\section{Proof of Theorem \ref{Thm-Smooth-Approx}\label{appendix-B}}
Let us begin by listing the assumptions required to prove Theorem
\ref{Thm-Smooth-Approx}. First, we recall Assumption 1
from Section \ref{Sec-Solving-DRO}.
\textbf{Assumption 1.} There exists
$\Gamma ( \beta ,y) \in (0,\infty)$ such that
$l( u,y,\beta) \leq \Gamma ( \beta ,y) \cdot (1+c(u,x) ),$ for all
$(x,y) \in \mathcal{D}_{n}$.
We now introduce Assumptions 2--4 below.
\textbf{Assumption 2.} $\psi \left( \cdot ,X,Y,\beta ,\lambda \right) $ is
twice continuously differentiable and the Hessian of $\psi \left( \cdot
,X,Y,\beta ,\lambda \right) $ evaluated at $u^{\ast }$, $D_{u}^{2}\psi
\left( u^{\ast },X,Y,\beta ,\lambda \right) $, is positive definite. In
particular, we can find $\theta >0$ and $\eta >0$, such that
\begin{equation*}
\psi (u,X,Y,\beta ,\lambda )\geq \psi \left( u^{\ast },X,Y,\beta ,\lambda
\right) -\frac{\theta }{2}\Vert u-u^{\ast }\Vert _{2}^{2},\quad \forall u
\text{ with }\left\Vert u-u^{\ast }\right\Vert _{2 }\leq \eta .
\end{equation*}
\textbf{Assumption 3.} For a constant $\lambda _{0}>0$ such that $\phi
(X,Y,\beta ,\lambda _{0})<\infty $, let $K=K\left( X,Y,\beta ,\lambda
_{0}\right) $ be any upper bound for $\phi (X,Y,\beta ,\lambda _{0})$.
\textbf{Assumption 4. }In addition to the lower semicontinuity of $c\left(
\cdot \right) \geq 0$, we assume that $c\left( \cdot ,X\right) $ is coercive
in the sense that $c\left( u,X\right) \rightarrow \infty $ as $\left\Vert
u\right\Vert _{2}\rightarrow \infty $.
For any set $S$, the $r$-neighborhood of $S$
is defined as the set of all points in $\mathbb{R}^{d}$ that are at distance
at most $r$ from $S$, i.e. $S_{r}=\cup _{u\in S}\{\bar{u}:\left\Vert \bar{u}-u\right\Vert _{2}\leq r\}$.
\begin{proof}[Proof of Theorem \protect\ref{Thm-Smooth-Approx}]
The first part of the inequality is easy to derive. For the second
part, we proceed as follows: Under Assumptions 3 and 4, we can
define the compact set
\begin{equation*}
\mathcal{C}=\mathcal{C}(X,Y,\beta ,\lambda )=\{u:c(u,X)\leq l(X,Y,\beta
)-K+\lambda _{0}/(\lambda -\lambda _{0})\}.
\end{equation*}
It is easy to check that $\arg \max_{u} \{\psi \left( u,X,Y,\beta ,\lambda \right)
\}\subset \mathcal{C}$.
Owing to the optimality of $u^\ast$ and the bound $K\geq\phi(X,Y,\beta,\lambda_0)$ from Assumption 3, we can see that
\begin{align*}
l(X,Y)&\leq l(u^\ast,Y)-\lambda c(u^\ast,X) \\
&= l(u^\ast,Y)-\lambda_0 c(u^\ast,X)-(\lambda-\lambda_0) c(u^\ast,X)\\
&\leq K-\lambda_0 -(\lambda-\lambda_0) c(u^\ast,X).
\end{align*}
Thus by definition of $\mathcal{C} =
\mathcal{C}(X,Y,\beta,\lambda)$, it follows easily that
$u^\ast\in \mathcal{C}$, which further implies
$\{u:\|u-u^{\ast}\|_2\leq \eta\}\subset \mathcal{C}_\eta$.
Then, combining the strong convexity condition in Assumption 2 with the
definition of
$\phi_{\epsilon,f}(X,Y,\beta,\lambda)$, we obtain
\begin{align*}
\phi_{\epsilon,f}\left(X,Y,\beta,\lambda\right) &\geq \epsilon \log\left(
\int_{\|u-u^{\ast}\|_2\leq \eta} \exp\left(\left[\phi\left(X,Y,\beta,
\lambda\right) - \frac{\theta}{2}\|u-u^{\ast}\|_2^2\right]
/\epsilon\right)f(u)du\right) \\
&= \epsilon \log\left( \exp\left(\phi\left(X,Y,\beta,\lambda\right)
/\epsilon\right)\int_{\|u-u^{\ast}\|_2\leq \eta} \exp\left( -
\frac{\theta}{2}\|u-u^{\ast}\|_2^2/\epsilon\right)f(u)du\right) \\
& = \phi\left(X,Y,\beta,\lambda\right) + \epsilon\log
\int_{\|u-u^{\ast}\|_2\leq \eta}
\exp\left(-\frac{\theta\|u-u^{\ast}\|_2^2}{2\epsilon}\right)f(u)du.
\end{align*}
As
$\{ u: \Vert u - u^\ast\Vert_2 \leq \eta\} \subset
\mathcal{C}_\eta$, we can use the lower bound of $
f(\cdot)$ to deduce that
\begin{align*}
\int_{\|u-u^{\ast}\|_2\leq \eta} \exp\left(-\frac{\theta\|u-u^{\ast}\|_2^2
}{2\epsilon}\right)f(u)du &\geq \inf_{u\in \mathcal{C}_\eta}f(u)\times
\int_{\|u-u^{\ast}\|_2\leq \eta} \exp\left(-\frac{\theta\|u-u^{\ast}\|_2^2
}{2\epsilon}\right)du \\
&= \inf_{u\in\mathcal{C}_\eta}f(u) \times
\left(2\pi\epsilon/\theta\right)^{d/2}P(Z_{d}\leq\eta^2\theta/\epsilon),
\end{align*}
where $Z_d$ denotes a chi-squared random variable with $d$ degrees of freedom.
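The Gaussian-ball identity used in this step can be verified numerically. The sketch below checks it in $d=2$, where $P(Z_2 \leq x) = 1 - e^{-x/2}$ is available in closed form; the constants are arbitrary choices for illustration.

```python
import numpy as np

theta, eps, eta = 1.7, 0.05, 0.9

# Left side: integral of exp(-theta ||u - u*||^2 / (2 eps)) over the ball
# ||u - u*||_2 <= eta, reduced to one dimension via polar coordinates (d = 2).
r = np.linspace(0.0, eta, 200_001)
g = 2.0 * np.pi * r * np.exp(-theta * r**2 / (2.0 * eps))
lhs = np.sum((g[1:] + g[:-1]) / 2.0) * (r[1] - r[0])   # trapezoid rule

# Right side: (2 pi eps / theta)^{d/2} * P(Z_d <= eta^2 theta / eps), d = 2,
# using the closed-form chi-squared CDF P(Z_2 <= x) = 1 - exp(-x / 2).
rhs = (2.0 * np.pi * eps / theta) * (1.0 - np.exp(-eta**2 * theta / (2.0 * eps)))
```

Both sides agree to within the quadrature error of the trapezoid rule.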
To conclude, recall that $\epsilon \in(0, \eta^2\theta \chi_\alpha)$; the lower
bound of $\phi_{\epsilon,f}(\cdot)$ can then be written as
\begin{equation*}
\phi_{\epsilon,f}(X,Y,\beta,\lambda)\geq \phi(X,Y,\beta,\lambda) - \frac{d
}{2} \epsilon\log(1/\epsilon) + \frac{d}{2}\epsilon\log\left(\left(2\pi
\alpha/\theta\right)\inf_{u\in \mathcal{C}_\eta}f(u) \right).
\end{equation*}
This completes the proof of Theorem \ref{Thm-Smooth-Approx}.
\end{proof}
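To see the smoothed objective at work, consider a one-dimensional toy example (our own construction, assuming as in the proof that $\phi_{\epsilon,f} = \epsilon \log \int \exp(\psi(u)/\epsilon) f(u) du$): with $\psi(u) = -(u-1)^2$, so $u^{\ast}=1$ and $\phi = 0$, and a uniform reference density $f$, the soft-max value increases toward the true supremum as $\epsilon \downarrow 0$, consistent with the $O(\epsilon \log (1/\epsilon))$ gap in the theorem.

```python
import numpy as np

def psi(u):
    # psi(u) = -(u - 1)^2: maximized at u* = 1 with phi = psi(u*) = 0
    return -(u - 1.0) ** 2

u = np.linspace(-4.0, 6.0, 200_001)
du = u[1] - u[0]
f = np.full_like(u, 1.0 / (u[-1] - u[0]))   # uniform reference density on [-4, 6]

def phi_eps(eps):
    # soft-max smoothing, with the usual log-sum-exp shift for stability
    m = psi(u).max()
    return m + eps * np.log(np.sum(np.exp((psi(u) - m) / eps) * f * du))

vals = [phi_eps(e) for e in (1.0, 0.1, 0.01)]
```

The computed values increase monotonically toward $\phi = 0$ as $\epsilon$ shrinks, while always remaining below it, matching the two-sided bound of Theorem \ref{Thm-Smooth-Approx}.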
\end{document}
\begin{document}
\title{\ptitle}
\begin{abstract} How well do reward functions learned with inverse reinforcement learning (IRL) generalize? We illustrate that state-of-the-art IRL algorithms, which maximize a maximum-entropy objective, learn rewards that overfit to the demonstrations. Such rewards struggle to provide meaningful rewards for states not covered by the demonstrations, a major detriment when using the reward to learn policies in new situations. We introduce BC-IRL\xspace, a new inverse reinforcement learning method that learns reward functions that generalize better when compared to maximum-entropy IRL approaches. In contrast to the MaxEnt framework, which learns to maximize rewards around demonstrations, BC-IRL\xspace updates reward parameters such that the policy trained with the new reward matches the expert demonstrations better. We show that BC-IRL\xspace learns rewards that generalize better on an illustrative simple task and two continuous robotic control tasks, achieving over twice the success rate of baselines in challenging generalization settings. \end{abstract}
\begin{figure}
\caption{Demonstrations}
\label{fig:teaser:demo}
\caption{Max-Ent IRL Reward}
\label{fig:teaser:me}
\caption{BC-IRL\xspace Reward}
\label{fig:teaser:mirl}
\caption{\small
A visualization of learned rewards on a task where a 2D agent must navigate to the goal at the center. \Cref{fig:teaser:demo}: Four trajectories are provided as demonstrations and the demonstrated states are visualized as points.
Rewards learned via Maximum Entropy are in \Cref{fig:teaser:me} and BC-IRL\xspace in \Cref{fig:teaser:mirl}. Lighter colors represent larger predicted rewards.
The MaxEnt objective overfits to the demonstrations, giving high rewards only close to the expert states, preventing the reward from providing meaningful learning signals in new states.
}
\label{fig:teaser}
\end{figure}
\section{Introduction}\label{sec:intro} Reinforcement learning has demonstrated success on a broad range of tasks, including navigation \cite{wijmans2019dd}, locomotion \cite{kumar2021rma, iscen2018policies}, and manipulation \cite{qt-opt}. However, this success depends on specifying an accurate and informative reward signal to guide the agent towards solving the task. For instance, imagine designing a reward function for a robot window cleaning task. The reward should tell the robot how to grasp the cleaning rag, how to use the rag to clean the window, and to wipe hard enough to remove dirt, but not hard enough to break the window. Manually shaping such reward functions is difficult, non-intuitive, and time-consuming. Furthermore, the need for an expert to design a reward function for every new skill limits the ability of agents to autonomously acquire new skills.
Inverse reinforcement learning (IRL) \citep{abbeel2004apprenticeship, ziebart2008maximum, osa2018algorithmic} is one way of addressing the challenge of acquiring rewards by learning reward functions from demonstrations and then using the learned rewards to learn policies via reinforcement learning. When compared to direct imitation learning, which learns policies from demonstrations directly, potential benefits of IRL are at least two-fold: first, IRL does not suffer from the compounding error problem that is often observed with policies directly learned from demonstrations~\citep{ross2011reduction, barde2020adversarial}; and second, a reward function could be a more abstract and parsimonious description of the observed task that generalizes better to unseen task settings \citep{ng2000algorithms, osa2018algorithmic}. This second potential benefit is appealing as it allows the agent to learn a reward function to train policies not only for the demonstrated task setting (e.g. specific start-goal configurations in a reaching task) but also for unseen settings (e.g. unseen start-goal configurations), autonomously without additional expert supervision.
However, thus far the generalization properties of reward functions learned via IRL are poorly understood. Here, we study the generalization of learned reward functions and find that prior IRL methods fail to learn generalizable rewards and instead overfit to the demonstrations. \Cref{fig:teaser} demonstrates this on a task where a point mass agent must navigate in a 2D space to a goal location at the center. An important reward characteristic for this task is that an agent, located anywhere in the state-space, should receive increasing rewards as it gets closer to the goal. Most recent prior work \cite{fu2017learning,ni2020f,finn2016guidedirl} developed IRL algorithms that optimize the maximum entropy objective \citep{ziebart2008maximum} (\Cref{fig:teaser:me}), which fails to capture goal distance in the reward. Instead, the MaxEnt objective leads to rewards that separate non-expert from expert behavior by maximizing reward values along the expert demonstration. While useful for imitating the experts, the MaxEnt objective prevents the IRL algorithms from learning to assign meaningful rewards to other parts of the state space, thus limiting generalization of the reward function.
As a remedy to the reward generalization challenge in the maximum entropy IRL framework, we propose a new IRL framework called \textbf{Behavioral Cloning Inverse Reinforcement Learning (BC-IRL\xspace)}. In contrast to the MaxEnt framework, which learns to maximize rewards around demonstrations, the BC-IRL\xspace framework updates reward parameters such that the policy trained with the new reward matches the expert demonstrations better. This is akin to the model-agnostic meta-learning \citep{finn2017model} and loss learning \citep{bechtle2019meta} frameworks where model or loss function parameters are learned such that the downstream task performs well when utilizing the meta-learned parameters. By using gradient-based bi-level optimization \cite{higher}, BC-IRL\xspace can optimize the behavior cloning loss to learn the reward, rather than a separation objective like the maximum entropy objective. Importantly, to learn the reward, BC-IRL\xspace differentiates through the reinforcement learning policy optimization, which incorporates exploration and requires the reward to provide a meaningful reward throughout the state space to guide the policy to better match the expert. We find BC-IRL\xspace learns more generalizable rewards (\Cref{fig:teaser:mirl}), and achieves over twice the success rate of baseline IRL methods in challenging generalization settings.
Our contributions are as follows: 1) The general BC-IRL\xspace framework for learning more generalizable rewards from demonstrations, and a specific BC-IRL-PPO\xspace variant that uses PPO as the RL algorithm. 2) A quantitative and qualitative analysis of reward functions learned with BC-IRL\xspace and Maximum-Entropy IRL variants on a simple task for easy analysis. 3) An evaluation of our novel BC-IRL\xspace algorithm on two continuous control tasks against state-of-the-art IRL and IL methods. Our method learns rewards that transfer better to novel task settings.
\section{Background and Related Work} \label{sec:related_work}
We begin by reviewing Inverse Reinforcement Learning through the lens of bi-level optimization. We assume access to a rewardless Markov decision process (MDP) defined through the tuple $ \mathcal{M} = ( \mathcal{S}, \mathcal{A}, \mathcal{P}, \rho_0, \gamma, H)$ for state-space $ \mathcal{S}$, action space $ \mathcal{A}$, transition distribution $ \mathcal{P}(s' | s,a)$, initial state distribution $\rho_0$, discount factor $\gamma$, and episode horizon $H$. We also have access to a set of expert demonstration trajectories $ \mathcal{D}^e = \left\{ \tau^e_i \right\}_{i=1}^{N}$ where each trajectory is a sequence of state-action tuples.
IRL learns a parameterized reward function $R_\psi(\tau_i)$ which assigns a trajectory a scalar reward.
Given the reward, a policy $\pi_\theta(a | s)$ is learned which maps from states to a distribution over actions. The goal of IRL is to produce a reward $R_\psi$, such that a policy trained to maximize the sum of (discounted) rewards under this reward function matches the behavior of the expert. This is captured through the following bi-level optimization problem: \begin{subequations}
\begin{align}
\label{eq:irl:outer_gen}
\min_{\psi} &\mathcal{L}_\text{IRL}(R_\psi; \pi_\theta) &\text{\bf{(outer obj.)} }\\
\label{eq:irl:inner_gen}\st &\theta \in \argmax_{\theta} g(R_\psi, \theta) &\text{\bf{(inner obj.)} }
\end{align} \end{subequations} where $\mathcal{L}_\text{IRL}(R_\psi; \pi_\theta)$ denotes the IRL loss and measures the performance of the learned reward $R_\psi$ and policy $\pi_\theta$; $g(R_\psi, \theta)$ is the reinforcement learning objective used to optimize policy parameters $\theta$. Algorithms for this bi-level optimization consist of an outer loop (\eqref{eq:irl:outer_gen}) that optimizes the reward and an inner loop (\eqref{eq:irl:inner_gen}) that optimizes the policy given the current reward.
\textbf{Maximum Entropy IRL:} Early work on IRL learns rewards by separating non-expert from expert trajectories \citep{ng2000algorithms,abbeel2004apprenticeship, abbeel2010autonomous}. A primary challenge of these early IRL algorithms was the ambiguous nature of learning reward functions from demonstrations: many possible policies exist for a given demonstration, and thus many possible rewards exist. The Maximum Entropy (MaxEnt) IRL framework \citep{ziebart2008maximum} seeks to address this ambiguity, by learning a reward (and policy) that is as non-committal (uncertain) as possible, while still explaining the demonstrations. More concretely, given reward parameters $\psi$, MaxEnt IRL optimizes the log probability of the expert trajectories $\tau^e$ from demonstration dataset $ \mathcal{D}^e$ through the following loss, \begin{align*}
\maxentobj(R_\psi) &= -\mathbb{E}_{\tau^e \sim \mathcal{D}^e} \left[\log p(\tau^e | \psi)\right]
= -\mathbb{E}_{\tau^e \sim \mathcal{D}^e} \left[\log \frac{\exp \left( R_{\psi} (\tau^e) \right)}{Z(\psi)} \right]\\
&= -\mathbb{E}_{\tau^e \sim \mathcal{D}^e}\left[R_\psi(\tau^e)\right] + \log Z(\psi). \end{align*} A key challenge of MaxEnt IRL is estimating the partition function $Z(\psi) = \int \exp R_\psi d\tau$. \cite{ziebart2008maximum} approximate $Z$ in small discrete state spaces with dynamic programming.
\textbf{MaxEnt from the Bi-Level perspective:} However, computing the partition functions becomes intractable for high-dimensional and continuous state spaces. Thus algorithms approximate $Z$ using samples from a policy optimized via the current reward. This results in the partition function estimate being a function of the current policy $\log \hat{Z}(\psi ; \pi_\theta)$. As a result, MaxEnt approaches end up following the bi-level optimization template by iterating between: 1) updating reward function parameters given current policy samples via the outer objective (\eqref{eq:irl:outer_gen}); and 2) optimizing the policy parameters with the current reward parameters via an inner policy optimization objective and algorithm \eqref{eq:irl:inner_gen}. For instance, model-based IRL methods such as \cite{wulfmeier2017large, levine2012continuous, englert2017inverseRL} use model-based RL (or optimal control) methods to optimize a policy (or trajectory), while model-free IRL methods such as \cite{kalakrishnan_2013_irl, boularias2011relativeirl,finn2016guided,finn2016connection} learn policies via model-free RL in the inner loop. All of these methods use policy rollouts to approximate either the partition function of the maximum-entropy IRL objective or its gradient with respect to reward parameters in various ways (outer loop). For instance \cite{finn2016guided} learn a stochastic policy $q(\tau)$, and sample from that to estimate $Z(\psi) \approx \frac{1}{M} \sum_{\tau_i \sim q(\tau)}\frac{\exp R_{\psi}(\tau_i)}{q(\tau_i)}$ with $M$ samples from $q(\tau)$.
\cite{fu2017learning} with adversarial IRL (AIRL) follow this idea and view the problem as an adversarial training process between policy $\pi_\theta(a | s)$ and discriminator $D(s,a) = \frac{\exp R_\psi(s)}{\exp R_{\psi}(s) + \pi_{\theta}(a|s)}$. \cite{ni2020f} analytically compute the gradient of the $f$-divergence between the expert state density and the MaxEnt state distribution, circumventing the need to directly compute the partition function.
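The sampling-based estimate of $Z(\psi)$ described above can be illustrated in a one-dimensional toy problem (our own construction, not from the cited works): here a ``trajectory'' $\tau$ is a single real number, the learned reward is $R_\psi(\tau) = -\psi \tau^2$, and the sampler $q$ is a standard normal density, so the estimate can be compared against the closed-form partition function.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup (illustrative assumption): reward R_psi(tau) = -psi * tau^2,
# sampler q = N(0, 1), so Z(psi) = integral of exp(R_psi(tau)) d tau.
psi = 1.0
M = 200_000
tau = rng.normal(size=M)                            # tau_i ~ q
q = np.exp(-tau**2 / 2.0) / np.sqrt(2.0 * np.pi)    # q(tau_i)
Z_hat = np.mean(np.exp(-psi * tau**2) / q)          # importance-sampling estimate

Z_exact = np.sqrt(np.pi / psi)   # closed form of int exp(-psi tau^2) d tau
```

With enough samples the estimate concentrates around the exact value; in practice the variance of this estimator depends heavily on how well $q$ matches $\exp(R_\psi)$, which is why adaptive samplers are used.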
\textbf{Meta-Learning and IRL:} Like some prior work \citep{xu2019learning, yu2019meta, wang2021meta, gleave2018multi,seyed2019smile}, BC-IRL\xspace combines meta-learning and inverse reinforcement learning. However, these works focus on fast adaptation of reward functions to new tasks for MaxEnt IRL through meta-learning. These works require demonstrations of the new task to adapt the reward function. BC-IRL\xspace algorithm is a fundamentally new way to learn reward functions and does not require demonstrations for new test settings. Most related to our work is \cite{das2020model}, which also uses gradient-based bi-level optimization to match the expert. However, this approach requires a pre-trained dynamics model. Our work generalizes this idea since BC-IRL\xspace can optimize general policies, allowing any objective that is a function of the policy and any differentiable RL algorithm. We show our method, without an accurate dynamics model, outperforms \cite{das2020model} and scales to more complex tasks where \cite{das2020model} fails to learn.
\textbf{Generalization in IRL:} Some prior works have explored how learned rewards can generalize to training policies in new situations. For instance, \cite{fu2017learning} explored how rewards can generalize to training policies under changing dynamics. However, most prior work focuses on improving policy generalization to unseen task settings by addressing challenges introduced by the adversarial training objective of GAIL \citep{xu2019positive,zolna2020combating,zolna2019task,lee2021generalizable, barde2020adversarial,jaegle2021imitation,dadashi2020primal}. Finally, in contrast to most related work on generalization, our work focuses on analyzing and improving reward function transfer to new task settings.
\section{Learning Rewards via Behavioral Cloning Inverse Reinforcement Learning (BC-IRL\xspace)} \label{sec:main_method}
We now present our algorithm for learning reward functions via behavioral cloning inverse reinforcement learning. We start by contrasting the maximum entropy and imitation loss objectives for inverse reinforcement learning in \Cref{sec:method:irl-objectives}. We then introduce a general formulation for BC-IRL\xspace in \Cref{sec:method:meta-irl}, and present an algorithmic instantiation that optimizes a BC objective to update the \emph{reward} parameters via gradient-based bi-level optimization with a model-free RL algorithm in the inner loop in \Cref{sec:method:ppo_mirl}.
\subsection{Outer Objectives: Max-Ent vs Behavior Cloning }\label{sec:method:irl-objectives} In this work, we study an alternative IRL objective from the maximum entropy objective. While this maximum entropy IRL objective has led to impressive results, it is unclear how well this objective is suited for learning reward functions that generalize to new task settings, such as new start and goal distributions. Intuitively, assigning a high reward to demonstrated states (without task-specific hand-designed feature engineering) makes sense when you want to learn a reward function that can recover exactly the expert behavior, but it leads to reward landscapes that do not necessarily capture the essence of the task (e.g. to reach a goal, see \Cref{fig:teaser:me}).
Instead of specifying an IRL objective that is directly a function of reward parameters (like maximum entropy), we aim to measure the reward function's performance through the policy that results from optimizing the reward. With such an objective, we can optimize reward parameters for what we care about: for the resulting policy to match the behavior of the expert. The behavioral cloning (BC) loss measures how well the policy and expert actions match, defined for continuous actions as $\mathbb{E}_{(s_t, a_t) \sim \tau^e} \left( \pi_\theta(s_t) - a_t \right)^2 $ where $\tau^e$ is an expert demonstration trajectory. Policy parameters $\theta$ are a result of using the current reward parameters $\psi$, which we can make explicit by making $\theta$ a function of $\psi$ in the objective: $\bcirl = \mathbb{E}_{(s_t, a_t) \sim \tau^e} (\pi_{\theta(\psi)}(s_t) - a_t)^2 $. The IRL objective is now formulated in terms of the policy rollout ``matching" the expert demonstration through the BC loss.
We use the chain-rule to decompose the gradient of $\bcirl$ with respect to reward parameters $\psi$. We also expand how the policy parameters $\theta(\psi)$ are updated via a REINFORCE update with learning rate $ \alpha$ to optimize the current reward $R_\psi$ (but any differentiable policy update applies). \begin{align}
\label{eq:bi_level_opt}
\frac{\partial}{\partial \psi} \bcirl
&= \frac{\partial}{\partial \psi} \left[
\E_{(s_t, a_t) \sim \tau^e}
\left[\left( \pi_{\theta(\psi)}(s_t) - a_{t} \right)^{2} \right]
\right]
= \E_{(s_t, a_t) \sim \tau^e} \left[ 2 \left( \pi_{\theta(\psi)}(s_t) - a_{t} \right) \frac{\partial}{\partial \psi} \pi_{\theta(\psi)}(s_t) \right] \nonumber \\
&\text{where } \theta(\psi) = \theta_{\text{old}} + \alpha \E_{(s_t, a_t) \sim \pi_{\theta_{\text{old}}}} \left[
\left( \sum_{k=t+1}^{T} \gamma^{k-t-1} R_\psi(s_k) \right) \nabla \ln \pi_{\theta_{\text{old}}} (a_t | s_t)
\right] \end{align} Computing the gradient for the reward update in \Cref{eq:bi_level_opt} includes samples from $ \pi$ collected in the reinforcement learning (RL) inner loop. This means the reward is trained on diverse states beyond the expert demonstrations through data collected via exploration in RL. As the agent explores during training, BC-IRL\xspace must provide a meaningful reward signal throughout the state-space to guide the policy to better match the expert. Note that this is a fundamentally different reward update rule as compared to current state-of-the-art methods that maximize a maximum entropy objective. We show in our experiments that this results in twice as high success rates compared to state-of-the-art MaxEnt IRL baselines in challenging generalization settings, demonstrating that BC-IRL\xspace learns more generalizable rewards that provide meaningful rewards beyond the expert demonstrations.
The BC loss updates only the reward, as opposed to updating the policy as typical BC for imitation learning does \cite{bain1995framework}. BC-IRL\xspace is an IRL method that produces a reward, unlike regular BC, which learns only a policy. Since BC-IRL\xspace uses RL, not BC, to update the policy, it avoids the pitfalls of BC for policy optimization such as compounding errors. Our experiments show that policies trained with rewards from BC-IRL\xspace generalize over twice as well to new settings as those trained with BC. In the following section, we show how to optimize this objective via bi-level optimization.
\subsection{BC-IRL}\label{sec:method:meta-irl}
We formulate the IRL problem as a gradient-based bi-level optimization problem, where the outer objective is optimized by differentiating through the optimization of the inner objective. We first describe how the policy is updated with a fixed reward, then how the reward is updated for the policy to better match the expert.
\begin{wrapfigure}{R}{0.55\textwidth}
\begin{minipage}{0.55\textwidth}
\begin{algorithm}[H]
\begin{algorithmic}[1]
\footnotesize{
\STATE{Initial reward $R_\psi$, policy $\pi_\theta$}
\STATE{Policy updater $\popt(R, \pi)$}
\STATE{Expert demonstrations $ \mathcal{D}^e$}
\FOR{each epoch}
\STATE{Policy Update:}
\STATE{$ \theta' \gets \popt(R_\psi, \pi_\theta) $}
\STATE{Sample demo batch $\tau^e \sim \mathcal{D}^e$}
\STATE{Compute IRL loss}
\STATE{$ \bcirl = \mathbb{E}_{(s_t, a_t) \sim \tau^e} \left( \pi_{\theta'}(s_t) - a_t \right)^2 $}
\STATE{Compute gradient of IRL loss wrt reward}
\STATE{$\nabla_\psi \bcirl = \frac{\partial \bcirl}{\partial \theta'} \frac{\partial \popt(R_\psi, \pi_\theta)}{\partial \psi}$}
\STATE{$\psi \gets \psi - \nabla_{\psi} \bcirl$}
\ENDFOR
}
\end{algorithmic}
\caption{BC-IRL\xspace (general framework)}
\label{algo:method:mirl}
\end{algorithm} \end{minipage}
\end{wrapfigure}
\textbf{Inner loop (policy optimization):} The inner loop optimizes policy parameters $\theta$ given current reward function $R_\psi$. The inner loop takes $K$ gradient steps to optimize the policy given the current reward. Since the reward update will differentiate through this policy update, we require the policy update to be differentiable with respect to the reward function parameters. Thus, any reinforcement learning algorithm which is differentiable with respect to the reward function parameters can be plugged in here, which is the case for many policy gradient and model-based methods. However, this does not include value-based methods such as DDPG \cite{lillicrap2015continuous} or SAC \cite{haarnoja2018soft} that directly optimize value estimates since the reward function is not directly used in the policy update.
\textbf{Outer loop (reward optimization)}: The outer loop optimization updates the reward parameters $\psi$ via gradient descent. More concretely: after the inner loop, we compute the gradient of the outer loop objective $\nabla_\psi \higherobj$ with respect to reward parameters $\psi$ by propagating through the inner loop. Intuitively, the new policy is a function of the reward parameters since the old policy was updated to better maximize the reward. The gradient update on $\psi$ tries to adjust reward function parameters such that the policy trained with this reward produces trajectories that match the demonstrations more closely. We use \citet{higher} for this higher-order optimization.
BC-IRL\xspace is summarized in \Cref{algo:method:mirl}. Line 5 describes the inner loop update, where we update the policy $\pi_\theta$ to maximize the current reward $R_\psi$. Lines 6-7 compute the BC loss between the updated policy $\pi_{\theta'}$ and expert actions sampled from expert dataset $\mathcal{D}^e$. The BC loss is then used in the outer loop to perform a gradient step on reward parameters in lines 8-9, where the gradient computation requires differentiating through the policy update in line 5.
\subsection{BC-IRL-PPO\xspace}
\label{sec:method:ppo_mirl}
We now instantiate a specific version of the BC-IRL\xspace framework that uses proximal policy optimization (PPO) \cite{schulman2017proximal} to optimize the policy in the inner loop. This specific version, called BC-IRL-PPO\xspace, is summarized in \Cref{algo:method:ppo_mirl}.
\begin{wrapfigure}{R}{0.55\textwidth}
\begin{minipage}{0.53\textwidth}
\begin{algorithm}[H]
\begin{algorithmic}[1]
\footnotesize{
\STATE{Initial reward $R_\psi$, policy $\pi_\theta$, value function $V_\nu$}
\STATE{Expert demonstrations $ \mathcal{D}^e$}
\FOR{each epoch}
\FOR{$k=1 \to K$}
\STATE{Run policy $\pi_{\theta}$ in environment for $T$ timesteps}
\STATE{Compute rewards $\hat{r}_t^{\psi}$ for rollout with current $R_\psi$}
\STATE{Compute advantages $\hat{A}^{\psi}$ using $\hat{r}^{\psi}$ and $V_\nu$}
\STATE{Compute $ \mathcal{L}_{\text{PPO}}$ using $\hat{A}^{\psi}$}
\STATE{Update $\pi_\theta$ with $\nabla_\theta \mathcal{L}_{\text{PPO}}$}
\ENDFOR
\STATE{Sample demo batch $\tau^e \sim \mathcal{D}^e$}
\STATE{Compute $ \bcirl = \mathbb{E}_{(s_t, a_t) \sim \tau^e} \left( \pi_\theta(s_t) - a_t \right)^2 $}
\STATE{Update reward $R_\psi$ with $ \nabla_{\psi} \bcirl$}
\ENDFOR
}
\end{algorithmic}
\caption{BC-IRL-PPO\xspace}
\label{algo:method:ppo_mirl}
\end{algorithm}
\end{minipage}
\end{wrapfigure}
BC-IRL-PPO\xspace learns a state-only parameterized reward function $R_{\psi}(s)$, which assigns a state $s \in \mathcal{S}$ a scalar reward. The state-only reward has been shown to lead to rewards that generalize better \cite{fu2017learning}. BC-IRL-PPO\xspace begins by collecting a batch of rollouts in the environment from the current policy (line 5 of \Cref{algo:method:ppo_mirl}). For each state $s$ in this batch we evaluate the learned reward function $R_\psi(s)$ (line 6). From this sequence of rewards, we compute the advantage estimates $\hat{A}_t$ for each state (line 7). As is typical in PPO, we also utilize a learned value function $V_\nu(s_t)$ to predict the value of the starting and ending state for partial episodes in the rollouts. This learned value function $V_\nu$ is trained to predict the sum of future discounted rewards for the current reward function $R_\psi$ and policy $\pi_\theta$ (part of $ \mathcal{L}_{\text{PPO}}$ in line 8). Using the advantages, we then compute the PPO update (line 9 of \Cref{algo:method:ppo_mirl}) using the standard PPO loss in equation 8 of \citet{schulman2017proximal}. Note the advantages are a function of the reward function parameters used to compute the rewards, so PPO is differentiable with respect to the reward function. Next, in the outer loop update, we update the reward parameters, by sampling a batch of demonstration transitions (line 11), computing the behavior cloning IRL objective $\mathcal{L}_\text{BC-IRL}$ (line 12), and updating the reward parameters $\psi$ via gradient descent on $\mathcal{L}_\text{BC-IRL}$ (line 13). Finally, in this work, we perform one policy optimization step ($K=1$) per reward function update. Furthermore, rather than re-train a policy from scratch for every reward function iteration, we initialize each inner loop from the previous $\pi_{\theta}$. This initialization is important in more complex domains where $K$ would otherwise have to be large to acquire a good policy from scratch.
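The advantage computation in the inner loop can be sketched as follows (a simplified version using plain discounted returns with a bootstrapped final value, rather than the generalized advantage estimation typically used with PPO; all numbers are arbitrary):

```python
import numpy as np

gamma = 0.99
rewards = np.array([0.0, 0.0, 1.0])       # R_psi(s_t) along a length-3 rollout
values = np.array([0.2, 0.4, 0.8, 0.0])   # V_nu(s_t); last entry bootstraps
                                          # the value of the state after the rollout

returns = np.zeros(3)
G = values[-1]                            # bootstrap from the final state
for t in reversed(range(3)):
    G = rewards[t] + gamma * G
    returns[t] = G
advantages = returns - values[:3]         # A_t = return_t - V_nu(s_t)
```

Because the rewards enter the returns linearly, the advantages (and hence the PPO loss) remain differentiable with respect to the reward parameters, which is what the outer-loop update exploits.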
\section{Illustration \& Qualitative Analysis of Learned Rewards}
\label{sec:reward_analysis}
\input{sections/figures/pm}
We first analyze the rewards learned by different IRL methods in a 2D point mass navigation task. The purpose of this analysis is to test our hypothesis that our method learns more generalizable rewards compared to maximum entropy baselines in simple low-dimensional settings amenable to intuitive visualizations. Specifically, we compare BC-IRL-PPO\xspace to the following baselines.
\textbf{Exact MaxEntIRL (MaxEnt)} \cite{ziebart2008maximum}: The exact MaxEntIRL method where the partition function is exactly computed by discretizing the state space.
\textbf{Guided Cost Learning (GCL)} \cite{finn2016guided}: Uses the maximum-entropy objective to update the reward. The partition function is approximated via adaptive sampling.
\textbf{Adversarial IRL (AIRL)} \cite{fu2017learning}: An IRL method that uses a learned discriminator to distinguish expert and agent states. As described in \cite{fu2017learning} we also use a shaping network $h$ during reward training, but only visualize and transfer the reward approximator $g$.
\textbf{f-IRL} \cite{ni2021f}: Another MaxEntIRL-based method, f-IRL computes the analytic gradient of the f-divergence between the agent and expert state distributions. We use the JS divergence version.
Our method does not require demonstrations at test time; instead, we transfer our learned rewards zero-shot. Thus we forgo comparisons to other meta-learning methods, such as \cite{xu2019learning}, which require test-time demonstrations. While a direct comparison with \cite{das2020model} is not possible because their method assumes access to a pre-trained dynamics model, we conduct a separate study comparing their method with an oracle dynamics model against BC-IRL\xspace in \Cref{14355}. All baselines use PPO \cite{schulman2017proximal} for policy optimization, as is commonly done in prior work \cite{orsini2021matters}. All methods learn a state-dependent reward $r_\psi(s)$ and a policy $\pi(s)$, both parametrized as neural networks. Further details are described in \Cref{sec:pm_hyperparams}.
\looseness=-1 The 2D point navigation task consists of a point agent whose policy outputs a desired change in $(x,y)$ position (a velocity) $(\Delta x, \Delta y)$ at every time step. The task has a trajectory length of $T=5$ time steps with 4 demonstrations. \Cref{fig:qual_results:pm_demo} visualizes the expert demonstrations, where darker points are earlier time steps. The agent starting state distribution is centered around the starting state of each demonstration.
\Cref{fig:main_qual_results}b,c visualize the rewards learned by BC-IRL\xspace and the AIRL baseline. Lighter regions indicate higher rewards. In \Cref{fig:qual_results:pm_mirl}, BC-IRL\xspace learns a reward that looks like a quadratic bowl centered at the origin, which models the distance to the goal across the entire state space. AIRL, the maximum entropy baseline, visualized in \Cref{fig:qual_results:pm_airl}, learns a reward function where high rewards are placed on the demonstrations and low rewards elsewhere. Other baselines are visualized in Appendix~\Cref{fig:all_qual_pm}.
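Heatmaps like these can be produced by evaluating a learned state-only reward on a dense $(x, y)$ grid. The sketch below is our own illustration (the helper names are not from the paper); it uses the quadratic bowl $-\lVert s \rVert^2$, the shape BC-IRL\xspace recovers, as a stand-in for a trained $R_\psi$.

```python
import numpy as np

# Evaluate a state-only reward R(s) on an (x, y) grid; rendering the result
# (e.g. with matplotlib's imshow) gives heatmaps where lighter = higher reward.
def reward_grid(reward_fn, lim=1.5, n=64):
    xs = np.linspace(-lim, lim, n)
    grid = np.stack(np.meshgrid(xs, xs), axis=-1)      # (n, n, 2) states
    return np.apply_along_axis(reward_fn, -1, grid)    # (n, n) reward values

# Stand-in for a trained reward: a quadratic bowl centered at the goal (0, 0).
heat = reward_grid(lambda s: -np.sum(s ** 2))
```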
To analyze the generalization capabilities of the learned rewards, we use them to train policies on a new starting state distribution (visualized in Appendix~\Cref{fig:pm_nav_start_state}). Concretely, a newly initialized policy is trained from scratch to maximize the learned reward from the testing start state distribution. \rv{The policy is trained with 5 million environment steps, the same number of steps as used for learning the reward.} The testing starting state distribution has no overlap with the training start state distribution. Policy optimization at test time is also done with PPO. \Cref{fig:main_qual_results}d,e display trajectories from the trained policies, where darker points again correspond to earlier time steps.
This qualitative evaluation shows that BC-IRL\xspace learns a meaningful reward for states not covered by the demonstrations. Thus, at test time, agent trajectories are guided towards the goal, with the terminal states (lightest points) close to the goal. The X-shaped rewards learned by the baselines do not provide meaningful rewards in the testing setting, as they assign uniformly low rewards to states not covered by the demonstrations. \rv{This provides poor reward shaping, which prevents the agent from reaching the goal within the 5M training interactions with the environment.} This results in agent trajectories that do not end close to the goal \rv{by the end of training}.
\begin{table}[t]
\centering
\resizebox{0.75\textwidth}{!}{
\input{sections/tables/pm}
}
\caption{\small
Distance to the goal for the point mass navigation task where numbers are mean and standard error for 3 seeds and 100 evaluation episodes per seed.
Train\xspace is the policy trained during reward learning.
MaxEnt does not learn a policy during reward learning, so its performance is ``NA''.
Eval (Train)\xspace uses the learned reward to train a policy from scratch on the same distribution used to train the reward.
Eval (Test)\xspace measures the ability of the learned reward to generalize to a new starting state distribution.
}
\label{table:toy_quant_results}
\end{table}
Next, we report quantitative results in \Cref{table:toy_quant_results}. We evaluate the performance of the policy trained at test time by reporting the distance from the policy's final trajectory state $s_T$ to the goal $g$: $ \lVert s_{T} - g \rVert_2^2$. We report the final train performance of the algorithm (``Train''), along with the performance of the policy trained from scratch with the learned reward in the train distribution ``Eval (Train)'' and testing distribution ``Eval (Test)''. These results confirm that BC-IRL\xspace learns more generalizable rewards than baselines. Specifically, BC-IRL\xspace achieves a lower distance on the testing starting state distribution at 0.04, compared to 0.53, 1.6, and 0.36 for AIRL, GCL, and MaxEnt respectively. Surprisingly, BC-IRL\xspace even performs better than exact MaxEnt, which uses privileged information about the state space to estimate the partition function. This fits with our hypothesis that our method learns more generalizable rewards than MaxEnt, even when the MaxEnt objective is exactly computed. We repeat this analysis for a version of the task with an obstacle blocking the path to the goal in \Cref{sec:pmo_nav} and reach the same findings even when BC-IRL\xspace must learn an asymmetric reward function. We also compare learned rewards to manually defined rewards in \Cref{sec:manual_rewards}.
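The table's metric can be stated in a few lines of code. This is our own sketch of the aggregation (per-episode squared distance to the goal, summarized as mean and standard error over evaluation episodes), not the authors' evaluation script.

```python
import numpy as np

def goal_distance(final_states, goal):
    """Per-episode ||s_T - g||_2^2, returned as (mean, standard error)."""
    d = np.sum((final_states - goal) ** 2, axis=-1)
    return d.mean(), d.std(ddof=1) / np.sqrt(len(d))

# Toy final states for three evaluation episodes, goal at the origin.
finals = np.array([[0.1, -0.1], [0.0, 0.2], [-0.2, 0.1]])
mean_d, stderr_d = goal_distance(finals, goal=np.zeros(2))
```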
\rv{
Despite baselines learning rewards that do not generalize beyond the demonstrations, with enough environment interactions, policies trained under these rewards will eventually reach the high-rewards along the expert demonstrations.
Since all demonstrations reach the goal in the point mass task, the X-shaped rewards the baselines learn have high reward at the center.
Despite the X-shaped reward providing little shaping off the X, with enough environment interactions, the agent eventually discovers the high-reward point at the goal.
After training AIRL for 15M steps, 3x the number of steps used for reward learning in the experiments in \Cref{table:toy_quant_results} and \Cref{fig:main_qual_results}, the policy eventually reaches $ 0.08 \pm 0.01$ distance to the goal.
In the same setting, BC-IRL\xspace achieves $ 0.04 \pm 0.01$ distance to the goal in under 5M steps.
The remaining performance gap is due to BC-IRL\xspace learning a reward whose maximum is closer to the center ($ 0.02$ from the center) compared to AIRL ($ 0.04$ from the center). }
\section{Experiments}
\label{sec:main_experiments}
\begin{figure}
\caption{\centering Habitat: Reach Task}
\label{fig:hab-eval:pick-task}
\caption{\centering TriFinger: Reach Task}
\label{fig:trf-eval:reach-task}
\caption{\small \centering Test Distribution}
\label{fig:hab-eval:gen-type-1}
\caption{IRL Training}
\label{fig:rl_curves:train}
\caption{\small
(a, b) Visualization of the Fetch and TriFinger reach tasks.
(c) 2D cross-section of the end-effector goal sampling regions in the reaching task.
The reward function is trained on goals from the blue points; the learned reward must train policies to accomplish goals from the Easy, Medium, and Hard test distributions (orange, green, and red points, respectively).
(d) Training curves during reward learning on the Habitat task; all methods succeed in training.
}
\label{fig:hab-overview}
\end{figure}
In our experiments, we aim to answer the following questions: (1) Can BC-IRL\xspace learn reward functions that can train policies from scratch? (2) Does BC-IRL\xspace learn rewards that generalize to unseen states and goals better than IRL baselines in complex environments? (3) Can learned rewards transfer better than policies learned directly with imitation learning? We show the first in \Cref{sec:exps:train_reward} and the next two in \Cref{sec:exps:eval_reward}. We evaluate on two continuous control tasks: 1) the Fetch reaching task \cite{szot2021habitat} (Fig~\ref{fig:hab-eval:pick-task}), and 2) the TriFinger reaching task \cite{ahmed2021causalworld} (Fig~\ref{fig:trf-eval:reach-task}).
\subsection{Reward Training phase: Learning Rewards to Match the Expert} \label{sec:exps:train_reward}
\paragraph{Experimental Setup and Evaluation Metrics}
In the Fetch reaching task, set up in the Habitat 2.0 simulator \cite{szot2021habitat}, the robot must move its end-effector to a 3D goal location $g$ which changes between episodes. The action space of the agent is the desired velocities for each of the 7 joints on the robot arm. The robot succeeds if the end-effector is within 0.1m of the target position within the maximum episode length of $20$ time steps. During reward learning, the goal $g$ is sampled uniformly from a cube with side length $0.2$ meters in front of the robot, $g \sim \mathcal{U}([0]^3, [0.2]^3)$. We provide 100 demonstrations.
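As a sketch, the goal sampling and success criterion just described amount to the following (function names are ours for illustration, not the simulator's API):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_train_goal(g_max=0.2):
    """Training goals: g ~ U([0]^3, [g_max]^3), a 0.2 m cube before the robot."""
    return rng.uniform(low=0.0, high=g_max, size=3)

def is_success(ee_pos, goal, threshold=0.1):
    """Success: end-effector within 0.1 m of the goal by the episode's end."""
    return np.linalg.norm(ee_pos - goal) < threshold

g = sample_train_goal()
```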
\begin{wraptable}{L}{0.6\textwidth}
\begin{minipage}{0.6\textwidth}
\resizebox{1.0\textwidth}{!}{
\input{sections/tables/exp_train_smaller.tex}
}
\caption{\small
Success rates for Fetch Reach and distance to goal for Trifinger Reach when training policies to achieve the goal under the same start state and goal distributions as the expert demonstrations.
Averages and standard deviations are from 3 seeds on Fetch Reach, and 5 seeds on Trifinger Reach with 100 episodes per seed.
}
\label{table:rewards_train} \end{minipage}
\end{wraptable}
For the Trifinger reaching task, each finger must move its fingertip to a 3D goal position. Each finger must travel a different distance and avoid being blocked by the other fingers. Each finger has 3 joints, creating a 9D action and state space. The fingers are joint position controlled. We use a time horizon of $T=5$ time steps. We provide a single demonstration. We report the final distance to the demonstrated goal, $\lVert g - g^\text{demo} \rVert_2$, in meters.
\paragraph{Evaluation and Baselines} We evaluate BC-IRL-PPO\xspace by how well its learned reward can train new policies from scratch in the same start state and goal distribution as the demonstrations. Given the point mass results in \Cref{sec:reward_analysis}, we compare BC-IRL-PPO\xspace to AIRL, the best performing baseline for reward learning. More details on baseline choice, policy and reward representation, and hyperparameters are described in the Appendix (\ref{app:reach-details}).
\paragraph{Results and Analysis} As \Cref{table:rewards_train} confirms, our method and baselines are able to imitate the demonstrations when policies are evaluated in the same task setting as the expert. All methods achieve a near 100\% success rate and low distance to goal. Methods also learn with similar sample efficiency, as shown in the learning curves in \Cref{fig:rl_curves:train}. These high success rates indicate BC-IRL-PPO\xspace and AIRL learn rewards that capture the expert behavior and train policies to mimic the expert. When training policies in the same state/goal distribution as the expert, rewards from BC-IRL-PPO\xspace respect any constraints the experts follow, just like the IRL baselines.
\subsection{Test Phase: Evaluating Reward and Policy Generalization} \label{sec:exps:eval_reward} In this section, we evaluate how learned rewards and policies can generalize to new task settings with increased starting state and goal sampling noise. We evaluate the generalization ability of rewards by evaluating how well they can train new policies to reach the goal in new start and goal distributions not seen in the demonstrations. This evaluation captures the reality that it is infeasible to collect demonstrations for every possible start/goal configuration. We thus aim to learn rewards from demonstrations that can generalize beyond the start/goal configurations present in those demonstrations. We quantify reward generalization ability by whether the reward can train a policy to perform the task in the new start/goal configurations.
For the Fetch Reach task, we evaluate on three wider test goal sampling distributions $g \sim \mathcal{U}([0]^3, [g_{\text{max}}]^3)$: Easy $(g_\text{max}=0.25)$\xspace, Medium $(g_\text{max}=0.4)$\xspace, and Hard $(g_\text{max}=0.55)$\xspace, all visualized in \Cref{fig:hab-eval:gen-type-1}. Similarly, we evaluate on new state regions, which widen the initial state and goal distributions but exclude the regions seen during training, exposing the reward only to unseen initial states and goals. In Trifinger, we sample start configurations around the start joint position in the demonstrations, with increasingly wider distributions ($s_0 \sim \mathcal{N}(s_0^\text{demo}, \delta)$, with $\delta =0.01, 0.03, 0.05$).
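The evaluation distributions above can be sketched as follows (our own illustrative names, not the benchmarks' APIs): wider goal cubes per difficulty for Fetch, and Gaussian perturbations of the demonstrated start joint positions for TriFinger.

```python
import numpy as np

rng = np.random.default_rng(0)
TEST_G_MAX = {"easy": 0.25, "medium": 0.4, "hard": 0.55}

def sample_test_goal(difficulty):
    """Fetch test goals: g ~ U([0]^3, [g_max]^3) with difficulty-specific g_max."""
    return rng.uniform(0.0, TEST_G_MAX[difficulty], size=3)

def sample_trifinger_start(s0_demo, delta):
    """TriFinger test starts: s_0 ~ N(s_0_demo, delta), delta in {0.01, 0.03, 0.05}."""
    return rng.normal(loc=s0_demo, scale=delta)

g_hard = sample_test_goal("hard")
s0 = sample_trifinger_start(np.zeros(9), delta=0.03)   # 9-D joint start state
```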
\begin{table*}[t!]
\centering
\resizebox{1\textwidth}{!}{
\input{sections/tables/reach_eval.tex}
}
\caption{\small
Success rates for the reaching task comparing the generalization capabilities of IRL and imitation learning methods.
``(Reward\xspace)'' transfers the learned reward from IRL methods and trains a newly initialized policy in the test setting.
``(Policy\xspace)'' transfers the policy without training in the new setting.
Easy, Medium, and Hard indicate the generalization difficulty, where the end-effector goal is sampled from $g \sim \mathcal{U}([0]^3, [g_{\text{max}}]^3)$.
}
\label{table:reach_eval} \end{table*}
We evaluate reward function performance by how well the reward function can train new policies from scratch. However, the reward must now generalize to providing rewards in the new start state and goal distributions. We additionally compare to two imitation learning baselines: Generative Adversarial Imitation Learning (GAIL) \cite{ho2016generative} and Behavior Cloning (BC). We compare different methods of transferring the learned reward and policy to the test setting:
\textbf{1) Reward}: Transfer only the reward from the above training phase and train a newly initialized policy in the test setting.
\looseness=-1 \textbf{2) Policy}: Transfer only the policy from the above training phase and immediately evaluate it without further training in the test setting. This setting compares transferring learned rewards against transferring learned policies. We use this transfer strategy to compare against direct imitation learning methods.
\textbf{3) Reward+Policy}: Transfer the reward and policy and then fine-tune the policy using the learned reward in the test setting. Results for this setting are in \Cref{sec:adapt}.
\paragraph{Results and Analysis} The results in \Cref{table:reach_eval} show BC-IRL-PPO\xspace learns rewards that generalize better than IRL baselines to new settings. In the hardest generalization setting, BC-IRL-PPO\xspace achieves over twice the success rate of AIRL. AIRL struggles to transfer its learned reward to harder generalization settings, with performance decreasing as the goal sampling distribution becomes larger and has less overlap with the training goal distribution. In the ``Hard'' start region generalization setting, the performance of AIRL degrades to a 34\% success rate. On the other hand, BC-IRL-PPO\xspace learns a generalizable reward and performs well even in the ``Hard'' generalization setting, achieving 76\% success. This trend holds both for generalization to new start state distributions and for new start state regions. The results for Trifinger Reach in \Cref{table:trf_quant_results} support these findings, with rewards learned via BC-IRL-PPO\xspace generalizing better to training policies from scratch in all three test distributions. All training curves for training policies from scratch with learned rewards are in \Cref{sec:rl_training}.
\begin{wraptable}{R}{0.42\textwidth} \begin{minipage}{0.42\textwidth}
\resizebox{1\textwidth}{!}{
\begin{tabular}{lcc}
\toprule
\footnotesize & \textbf{BC-IRL-PPO\xspace} & \textbf{AIRL} \\ \midrule
\textbf{Test $\delta=0.01$} & \textbf{0.0065 {\scriptsize $\pm$ 0.002 }} & 0.012 {\scriptsize $\pm$ 0.0017 } \\
\textbf{Test $\delta=0.03$} & \textbf{0.0061 {\scriptsize $\pm$ 0.002 }} & 0.012 {\scriptsize $\pm$ 0.0008 } \\
\textbf{Test $\delta=0.05$} & \textbf{0.0061 {\scriptsize $\pm$ 0.001 }} & 0.0117 {\scriptsize $\pm$ 0.0015 } \\
\bottomrule
\end{tabular} }
\caption{\small
Distance to the goal for Trifinger reach, evaluating how rewards generalize to training policies in new start/goal distributions.
}
\label{table:trf_quant_results}
\end{minipage} \end{wraptable}
\looseness=-1 Furthermore, the results in \Cref{table:reach_eval} also demonstrate that transferring rewards ``(Reward)'' is more effective for generalization than transferring policies ``(Policy)''. Transferring the reward to train new policies typically outperforms transferring only the policy for all IRL approaches. Additionally, training from scratch with rewards learned via IRL outperforms imitation learning methods that do not learn a reward and thus only permit transferring the policy zero-shot. The policies learned by GAIL and BC generalize worse than training new policies from scratch with the reward learned by BC-IRL-PPO\xspace, with BC and GAIL achieving 35\% and 37\% success rates in the ``Hard'' generalization setting while our method achieves 76\% success. The superior performance of BC-IRL-PPO\xspace over BC highlights the important differences between the two methods, with our method learning a reward and training the policy with PPO on the learned reward.
In \Cref{sec:adapt}, we also show the ``Policy+Reward''\xspace transfer setting and demonstrate BC-IRL-PPO\xspace also outperforms baselines in this setting. In \Cref{sec:supp:reach} we also analyze performance as a function of the number of demos, different inner and outer loop learning rates, and the number of inner loop updates.
\section{Discussion and Future Work}
\label{sec:discussion}
We propose a new IRL framework for learning generalizable rewards with bi-level gradient-based optimization. By meta-learning rewards, our framework can optimize alternative outer-level objectives instead of the maximum entropy objective commonly used in prior work. We propose BC-IRL-PPO\xspace, an instantiation of our new framework, which uses PPO for policy optimization in the inner loop and an action matching objective in the outer loop. We demonstrate that BC-IRL-PPO\xspace learns rewards that generalize better than baselines. A potential negative social impact of this work is that learning reward functions from data could result in less interpretable rewards, leading to more opaque behaviors from agents that optimize the learned reward.
Future work will explore alternative instantiations of the BC-IRL\xspace framework, such as utilizing sample efficient off-policy methods like SAC or model-based methods in the inner loop. Model-based methods are especially appealing because a single dynamics model could be shared between tasks and learning reward functions for new tasks could be achieved purely using the model. Finally, other outer loop objectives rather than action matching are also possible.
\section{Acknowledgments} The Georgia Tech effort was supported in part by NSF, ONR YIP, and ARO PECASE. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the U.S. Government, or any sponsor.
\appendix
\section{Further Point Mass Navigation Results}
\subsection{Qualitative Results for all Methods in Point Mass Navigation} \label{sec:pm_all_qual}
Visualizations of the reward functions from all methods for the regular pointmass task are displayed in \Cref{fig:all_qual_pm}.
\begin{figure*}
\caption{BC-IRL\xspace Reward}
\caption{MaxEnt Reward}
\caption{AIRL Reward}
\caption{GCL Reward}
\caption{BC-IRL\xspace Train}
\caption{MaxEnt Train}
\caption{AIRL Train}
\caption{GCL Train}
\caption{BC-IRL\xspace Test}
\caption{MaxEnt Test}
\caption{AIRL Test}
\caption{GCL Test}
\caption{\small
Qualitative results for all methods on the point mass navigation task without the obstacle.
}
\label{fig:all_qual_pm}
\end{figure*}
\subsection{Obstacle Point Mass Navigation} \label{sec:pmo_nav}
\input{sections/figures/pmo}
The obstacle point mass navigation task incorporates asymmetric dynamics with an off-centered obstacle. This environment is the same as the point mass navigation task from \Cref{sec:reward_analysis}, except there is an obstacle blocking the path to the center and the agent only spawns in the top-right corner. This task has a trajectory length of $T=50$ time steps with 100 demonstrations. \Cref{fig:qual_results:pmo_demo} visualizes the expert demonstrations, where darker points are earlier time steps.
\begin{table}
\centering
\resizebox{0.75\textwidth}{!}{
\input{sections/tables/pmo}
}
\caption{\small
Distance to the goal for the point mass navigation task where numbers are mean and standard error for 3 seeds and 100 evaluation episodes per seed.
Train\xspace is the policy trained during reward learning.
MaxEnt does not learn a policy during reward learning, so its performance is ``NA''.
Eval (Train)\xspace uses the learned reward to train a policy from scratch on the same distribution used to train the reward.
Eval (Test)\xspace measures the ability of the learned reward to generalize to a new starting state distribution.
}
\label{table:pmo_quant_results}
\end{table}
The results in \Cref{table:pmo_quant_results} are consistent with the non-obstacle point mass task: BC-IRL\xspace generalizes better than a variety of MaxEnt IRL baselines. In the train setting, BC-IRL\xspace learns rewards that match the expert behavior, avoiding the obstacle, and even achieves better performance than baselines in this task, with 0.08 distance to the goal versus 0.41 for the best performing baseline in the train setting, f-IRL. BC-IRL\xspace also generalizes better than baselines, achieving 0.79 distance to goal, outperforming the best performing baseline, MaxEnt, which also has access to oracle information. The reward learned by BC-IRL\xspace, visualized in \Cref{fig:qual_results:pmo_mirl}, shows BC-IRL\xspace learns a complex reward to account for the obstacle. \Cref{fig:all_qual_pmo} visualizes the rewards for all methods.
\begin{figure*}
\caption{BC-IRL\xspace Reward}
\caption{MaxEnt Reward}
\caption{AIRL Reward}
\caption{GCL Reward}
\caption{BC-IRL\xspace Train}
\caption{MaxEnt Train}
\caption{AIRL Train}
\caption{GCL Train}
\caption{BC-IRL\xspace Test}
\caption{MaxEnt Test}
\caption{AIRL Test}
\caption{GCL Test}
\caption{\small
Qualitative results for all methods on the point mass navigation task with the obstacle.
}
\label{fig:all_qual_pmo}
\end{figure*}
\subsection{Comparison to Manually Defined Rewards} \label{sec:manual_rewards}
We compare the rewards learned by BC-IRL\xspace to two hand-coded rewards. We visualize how well the learned rewards can train policies from scratch in the evaluation distribution in the point navigation with obstacle task. The reward learned by BC-IRL\xspace therefore must generalize. On the other hand, the hand-coded rewards do not require any learning. We include a sparse reward for achieving the goal, which does not require domain knowledge when implementing the reward. We also implement a dense reward, defined as the change in Euclidean distance to the goal where $r_t = d_{t-1} - d_{t}$ where $d_{t}$ is the distance of the agent to the goal at time $t$.
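The two hand-coded baselines above can be sketched directly from their definitions (function names are ours for illustration):

```python
import numpy as np

def sparse_reward(pos, goal, radius=0.1):
    """Sparse baseline: reward only for being within a success radius of the goal."""
    return 1.0 if np.linalg.norm(pos - goal) < radius else 0.0

def dense_reward(prev_pos, pos, goal):
    """Dense baseline: r_t = d_{t-1} - d_t, the per-step decrease in
    Euclidean distance to the goal (positive when the step moved closer)."""
    d_prev = np.linalg.norm(prev_pos - goal)
    d_now = np.linalg.norm(pos - goal)
    return d_prev - d_now

goal = np.zeros(2)
r = dense_reward(np.array([1.0, 0.0]), np.array([0.5, 0.0]), goal)  # 0.5
```

The dense reward encodes progress toward the goal at every step, which is exactly the domain knowledge the sparse reward lacks.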
\Cref{fig:reward_cmp} shows policy training curves for the learned and hand-defined rewards. The sparse reward performs poorly and the policy fails to get closer to the goal. On the other hand, the rewards learned by BC-IRL\xspace guide the policy closer to the goal. The dense reward, which incorporates more domain knowledge about the task, performs better than the learned reward.
\subsection{Analyzing Number of Inner Loop Updates} \label{sec:mirl_n_inner_iters}
As described in \Cref{sec:method:ppo_mirl}, a hyperparameter in BC-IRL-PPO\xspace is the number of inner loop policy optimization steps $K$ per reward function update. In our experiments, we selected $K=1$. In \Cref{fig:n_inner} we examine the training performance of BC-IRL-PPO\xspace in the point navigation task with no obstacle for various choices of $K$. We find that a wide variety of $K$ values perform similarly. We therefore selected $K=1$, since it runs the fastest and there is no need to track multiple policy updates in the meta-optimization.
\begin{figure}
\caption{Hand-coded rewards vs. BC-IRL\xspace.}
\caption{\# inner loop updates in BC-IRL\xspace.}
\caption{
Left: Comparing the reward learned from BC-IRL\xspace to two manually hand-coded rewards. Right: Comparing different number of inner loop steps in BC-IRL\xspace.
}
\label{fig:reward_cmp}
\label{fig:n_inner}
\end{figure}
\subsection{BC-IRL\xspace with Model-Based Policy Optimization} \label{14355} We compare BC-IRL-PPO\xspace to a version of BC-IRL\xspace that uses model-based RL in the inner loop, inspired by \cite{das2020model}. A direct comparison to \cite{das2020model} is not possible because their method assumes access to a pre-trained dynamics model, while in our work, we do not assume access to a ground truth or pre-trained dynamics model. However, we compare to a version of \cite{das2020model} in the point mass navigation task with a ground truth dynamics model. Specifically, we use gradient-based MPC in the inner loop optimization as in \cite{das2020model}, but with the BC-IRL\xspace outer loop objective. With the BC outer loop objective, it also learns generalizable rewards in the point mass navigation task, achieving $0.06 \pm 0.03$ distance to goal in ``Eval (Train)'' and $0.07 \pm 0.03$ in ``Eval (Test)''. However, in the point mass navigation task with the obstacle, this method fails to learn a reward and struggles to minimize the outer loop objective. We hypothesize that in longer horizon tasks, the MPC inner loop optimization in \cite{das2020model} easily gets stuck in local minima and struggles to differentiate through the entire MPC optimization.
\section{Reach Task: Further Experiment Results} \label{sec:supp:reach}
\subsection{RL-Training Curves} \label{sec:rl_training} In \Cref{fig:rl_curves} we visualize the training curves for the RL training used in \Cref{table:reach_eval}. \Cref{fig:supp_rl_curves:train} shows policy learning progress during the IRL training phase. In each setting, performance is measured by using the current reward to train a policy and computing the success rate of that policy. \Cref{fig:rl_curves:x25} to \Cref{fig:rl_curves:x100} show the policy learning curves at test time, in the generalization settings, where the reward is frozen and must generalize to learn new policies on new goals (the ``Reward\xspace'' transfer strategy). These plots show that all methods learn similarly during IRL training (\Cref{fig:supp_rl_curves:train}). When transferring the learned rewards to test settings, we see that BC-IRL-PPO\xspace is better at training successful policies as the generalization difficulty increases, with the most difficult generalization in \Cref{fig:rl_curves:x100}.
\begin{figure*}
\caption{IRL Training}
\label{fig:supp_rl_curves:train}
\caption{Start Distrib: $(g_\text{max}=0.25)$\xspace}
\label{fig:rl_curves:x25}
\caption{Start Distrib: $(g_\text{max}=0.4)$\xspace}
\label{fig:rl_curves:x75}
\caption{Start Distrib: $(g_\text{max}=0.55)$\xspace}
\label{fig:rl_curves:x100}
\caption{
Learning curves for the training setting and the ``Reward''\xspace transfer strategy from \Cref{table:reach_eval}.
All results are for 3 seeds and error regions show the standard deviation in success rate between seeds.
}
\label{fig:rl_curves}
\end{figure*}
\subsection{Transfer Reward+Policy Setting} \label{sec:adapt}
\begin{table*}[h]
\centering
\resizebox{0.95\textwidth}{!}{
\input{sections/tables/reach_table_adapt.tex}
}
\caption{
Results for the ``Policy+Reward''\xspace transfer strategy where the trained policy and reward are transferred to the test setting and the policy is fine-tuned.
}
\label{table:reach_adapt} \end{table*}
Here, we evaluate the \textbf{``Policy+Reward''\xspace} transfer strategy to new environment settings, where both the reward and policy are transferred. In the new setting, ``Policy+Reward''\xspace uses the transferred reward to fine-tune the pre-trained transferred policy with RL. We show results in \Cref{table:reach_adapt} for the ``Policy+Reward''\xspace transfer strategy alongside the ``Reward''\xspace transfer strategy from \Cref{table:reach_eval}. We find that ``Policy+Reward''\xspace performs slightly better than ``Reward''\xspace in the Hard setting of generalization to new starting state distributions but otherwise performs similarly. Even in the ``Policy+Reward''\xspace setting, AIRL struggles to learn a good policy in the Medium and Hard settings, achieving $38\%$ and $81\%$ success rate respectively.
\subsection{Analyzing the Number Demonstrations}\label{sec:num_demos}
\begin{table*}[h!]
\centering
\resizebox{0.95\textwidth}{!}{
\input{sections/tables/demo_ablate.tex}
}
\caption{
Comparing the number of demonstrations for BC-IRL-PPO\xspace and AIRL across the train, medium, and hard settings.
}
\label{table:ablate_num_demos} \end{table*}
We analyze the effect of the number of demonstrations used for reward learning in \Cref{table:ablate_num_demos}. We find that using fewer demonstrations does not affect the training performance of BC-IRL-PPO\xspace and AIRL. We also find our method does just as well with 5 demos as with 100 in the +75\% noise setting, with any number of demonstrations achieving near-perfect success rates. On the other hand, the performance of AIRL degrades from a 93\% success rate with 100 demonstrations to 84\% in the +75\% noise setting. In the +100\% noise setting, fewer demonstrations hurt the performance of our method, which drops from 76\% to 69\% success, while AIRL goes from 38\% to 42\% success.
\subsection{BC-IRL\xspace Hyperparameter Analysis} \label{sec:mirl_ablate} BC-IRL-PPO\xspace requires a learning rate for the policy optimization and a learning rate for the reward optimization. We compare the performance of our algorithm for various choices of policy and reward learning rates in \Cref{table:lr_ablate}. We find that across many different learning rate settings our method achieves high rates of success, but high policy learning rates have a detrimental effect. High reward learning rates have a slight negative impact but are not as severe.
\begin{table}
\centering
\input{sections/tables/lr_ablate.tex}
\caption{
Comparing choice of learning rate for the inner and outer loops for BC-IRL-PPO\xspace on the train setting.
Numbers display success rate.
}
\label{table:lr_ablate} \end{table}
\section{Further 2D Point Navigation Details} \label{sec:pm_hyperparams}
The start state distributions for the 2D point navigation task are illustrated in \Cref{fig:pm_nav_start_state}. The reward is learned using the start distribution shown in red, consisting of 4 points equally spaced around the center. Four demonstrations are also provided in this train start state distribution, one from each of the four corners. The reward is then transferred and a new policy is trained with the start state distribution shown in magenta. This start state distribution has no overlap with the train distribution and is also equally spaced. The reward must therefore generalize to providing rewards in this new state distribution.
\begin{figure}
\caption{
The starting state distribution for the 2D point navigation task with the demonstrations and negative distance to the goal overlaid.
The training start state distribution where the reward is learned is in \textcolor[rgb]{1.0, 0.03, 0.11}{red}.
The test start state distribution where the reward is transferred is in \textcolor[rgb]{1.0,0.23,0.98}{magenta}.
}
\label{fig:pm_nav_start_state}
\end{figure}
The hyperparameters for the methods from the 2D point navigation task in \Cref{sec:reward_analysis} are detailed in \Cref{tab:pm_hyperparam} for the no-obstacle version and \Cref{tab:pmo_hyperparam} for the obstacle version of the task. The reward function / discriminator for all methods was a neural network with 1 hidden layer of 128 units and $\tanh$ activations between the layers. Adam \cite{kingma2014adam} was used for policy and reward optimization. All RL training used 5M steps of experience in the training and testing settings for the no-obstacle navigation task. f-IRL uses the same optimization and neural network hyperparameters for the discriminator and reward function. As in \cite{ni2020f}, we clamp the output of the reward function to the range $[-10, 10]$ and found this beneficial for learning. In the navigation-with-obstacle task, training used 15M steps of experience and testing used 5M steps of experience. All experiments were run on an Intel(R) Core(TM) i9-9900X CPU @ 3.50GHz.
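The reward/discriminator architecture described above (one hidden layer of 128 units, $\tanh$ activation, output clamped to $[-10, 10]$) can be sketched in plain NumPy. The initialization scale and names here are illustrative assumptions, not taken from the released code:

```python
import numpy as np

rng = np.random.default_rng(0)

class RewardNet:
    """1-hidden-layer MLP with tanh activation; output clamped to [-10, 10]."""
    def __init__(self, obs_dim, hidden=128):
        # Small random init (illustrative; the paper does not specify the scheme).
        self.W1 = rng.normal(0.0, 0.1, (obs_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, 1))
        self.b2 = np.zeros(1)

    def __call__(self, obs):
        h = np.tanh(obs @ self.W1 + self.b1)      # hidden layer, tanh activation
        r = h @ self.W2 + self.b2                 # scalar reward head
        return np.clip(r, -10.0, 10.0)            # output clamp as in the f-IRL setup

net = RewardNet(obs_dim=4)
r = net(rng.normal(size=(32, 4)))                 # batch of 32 observations
```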
\begin{table}
\centering
\begin{tabular}{cccccc}
\toprule
Hyperparameter & BC-IRL-PPO\xspace & AIRL & GCL & MaxEnt & f-IRL\\
\midrule
Reward Learning Rate & 1e-4 & 1e-3 & 3e-4 & 1e-3 & 3e-4\\
Reward Batch Size & 20 & 20 & 20 & 20 & 20\\
Policy Learning Rate & 1e-4 & 1e-4 & 3e-4 & 3e-4 & 3e-4 \\
Policy Learning Rate Decay & True & True & True & False & False\\
Policy \# Mini-batches & 4 & 4 & 4 & 4 & 4 \\
Policy \# Epochs per Update & 4 & 4 & 4 & 4 & 4 \\
Policy Entropy Coefficient & 1e-4 & 1e-4 & 1e-4 & 1e-4 & 1e-4 \\
Discount Factor $\gamma$ & 0.99 & 0.99 & 0.99 & 0.99 & 0.99 \\
Policy Batch Size & 1280 & 1280 & 1280 & 1280 & 1280 \\
\bottomrule
\end{tabular}
\caption{
2D navigation without obstacle method hyperparameters.
These hyperparameters were used both in the training and reward transfer settings.
}
\label{tab:pm_hyperparam} \end{table}
\begin{table}
\centering
\begin{tabular}{ccccc}
\toprule
Hyperparameter & BC-IRL-PPO\xspace & AIRL & GCL & MaxEnt \\
\midrule
Reward Learning Rate & 1e-4 & 1e-3 & 3e-4 & 1e-3 \\
Reward Batch Size & 256 & 256 & 256 & 256 \\
Policy Learning Rate & 3e-4 & 3e-4 & 3e-4 & 3e-4 \\
Policy Learning Rate Decay & True & True & True & False \\
Policy \# Mini-batches & 4 & 4 & 4 & 4 \\
Policy \# Epochs per Update & 4 & 4 & 4 & 4 \\
Policy Entropy Coefficient & 1e-4 & 1e-4 & 1e-4 & 1e-4 \\
Discount Factor $\gamma$ & 0.99 & 0.99 & 0.99 & 0.99 \\
Policy Batch Size & 6400 & 6400 & 6400 & 6400 \\
\bottomrule
\end{tabular}
\caption{
2D navigation with obstacle method hyperparameters.
These hyperparameters were used both in the training and reward transfer settings.
}
\label{tab:pmo_hyperparam} \end{table}
\section{Further Reach Task Details} \label{app:reach-details}
\subsection{Choice of Baselines} The ``Exact MaxEntIRL'' approach is excluded because it cannot be computed exactly for high-dimensional state spaces. GCL is excluded because of its poor performance on the toy task relative to other methods. We also compare to the following imitation learning methods, which learn only policies and no transferable reward: \begin{itemize} \item \textbf{Behavioral Cloning (BC)} \cite{bain1995framework}: Trains a policy using supervised learning to match the actions in the expert dataset. \item \textbf{Generative Adversarial Imitation Learning (GAIL)} \cite{ho2016generative}: Trains a discriminator to distinguish expert from agent transitions and then uses the discriminator confusion score as the reward. This reward is coupled with the current policy \cite{finn2016connection} (referred to as a ``pseudo-reward'') and therefore cannot train policies from scratch. \end{itemize}
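As a concrete illustration of the BC baseline's objective (supervised regression of expert actions), here is a minimal sketch with a linear policy and synthetic expert data; the BC baseline in the paper uses a neural network policy trained by gradient descent rather than this closed-form fit:

```python
import numpy as np

def behavioral_cloning(states, actions):
    """Fit a linear policy a = s @ W by least squares on expert (state, action) pairs."""
    W, *_ = np.linalg.lstsq(states, actions, rcond=None)
    return W

# Synthetic expert data: actions generated by a known linear policy.
rng = np.random.default_rng(1)
S = rng.normal(size=(200, 4))        # 200 expert states, 4-dim observations
true_W = rng.normal(size=(4, 2))     # ground-truth linear policy (2-dim actions)
A = S @ true_W                       # expert actions
W = behavioral_cloning(S, A)         # recovered policy parameters
```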
\subsection{Policy and Reward Network Representation} All methods use a neural network with 1 hidden layer, 128 hidden units, and $\tanh$ activation functions between the layers to represent the policy and the reward. We use PPO as the policy optimization method for all methods. All methods in all tasks use demonstrations obtained from a policy trained with PPO using a manually engineered reward.
\subsection{Hyperparameters} \label{sec:hyperparams} The hyperparameters for all methods from the Reaching task are described in \Cref{tab:hyperparam}. The Adam optimizer \cite{kingma2014adam} was used for policy and reward optimization. All RL training used 1M steps of experience for the training and testing settings. The ``Reward"\xspace and ``Policy+Reward"\xspace transfer strategies trained policies with the same set of hyperparameters.
\begin{table}
\centering
\begin{tabular}{ccccc}
\toprule
Hyperparameter & BC-IRL-PPO\xspace & AIRL & GAIL & BC \\
\midrule
Reward Learning Rate & 3e-4 & 1e-4 & 1e-4 & NA \\
Reward Batch Size & 128 & 128 & 128 & NA \\
Policy Learning Rate & 3e-4 & 1e-4 & 1e-4 & 1e-4 \\
Policy Learning Rate Decay & False & True & True & False \\
Policy \# Mini-batches & 4 & 4 & 4 & NA \\
Policy \# Epochs per Update & 4 & 4 & 4 & NA \\
Policy Entropy Coefficient & 0.0 & 0.0 & 0.0 & NA \\
Discount Factor $\gamma$ & 0.99 & 0.99 & 0.99 & 0.99 \\
Policy Batch Size & 4096 & 4096 & 4096 & NA \\
\bottomrule
\end{tabular}
\caption{
Method hyperparameters for the Fetch reaching task.
These hyperparameters were used both in the training and reward transfer settings.
}
\label{tab:hyperparam} \end{table}
\section{Trifinger experiment details} \label{sec:trifinger_details}
\subsection{Policy and Reward Network Representation} All methods use a neural network with 1 hidden layer, 128 hidden units, and $\tanh$ activation functions between the layers to represent the policy and the reward. We use PPO as the policy optimization method for all methods. All methods in all tasks use demonstrations obtained from a policy trained with PPO using a manually engineered reward.
\subsection{Hyperparameters}
\begin{table}
\centering
\begin{tabular}{ccc}
\toprule
Hyperparameter & BC-IRL-PPO\xspace & AIRL \\
\midrule
Reward Learning Rate & 1e-3 & 1e-3 \\
Reward Batch Size & 6 & 6 \\
Policy Learning Rate & 1e-4 & 1e-3 \\
Policy Learning Rate Decay & False & False \\
Policy \# Mini-batches & 4 & 4 \\
Policy \# Epochs per Update & 2 & 2 \\
Policy Entropy Coefficient & 0.005 & 0.005 \\
Discount Factor $\gamma$ & 0.99 & 0.99 \\
Policy Batch Size & 40 & 40 \\
\bottomrule
\end{tabular}
\caption{
Method hyperparameters for the Trifinger reaching task.
These hyperparameters were used both in the training and reward transfer settings.
}
\label{tab:hyperparam-trf} \end{table}
The hyperparameters for all methods for the Trifinger reaching task are described in \Cref{tab:hyperparam-trf}. The Adam optimizer \cite{kingma2014adam} was used for policy and reward optimization. All RL training used 500k steps of experience for the reward training phase and 100k steps of experience for policy optimization in test settings.
\end{document}
\begin{document}
\global\long\def\ket#1{\left|#1\right\rangle }
\global\long\def\bra#1{\left\langle #1\right|}
\global\long\def\braket#1#2{\left\langle #1\left|#2\right.\right\rangle }
\global\long\def\ketbra#1#2{\left|#1\right\rangle \left\langle #2\right|}
\global\long\def\braOket#1#2#3{\left\langle #1\left|#2\right|#3\right\rangle }
\global\long\def\mc#1{\mathcal{#1}}
\global\long\def\nrm#1{\left\Vert #1\right\Vert }
\title{Quantum Equivalence and Quantum Signatures in Heat Engines}
\author{Raam Uzdin}
\author{Amikam Levy}
\author{Ronnie Kosloff}
\email{raam@mail.huji.ac.il}
\selectlanguage{english}
\affiliation{Fritz Haber Research Center for Molecular Dynamics, Hebrew University of Jerusalem, Jerusalem 91904, Israel}
\maketitle Quantum heat engines (QHE) are thermal machines whose working substance is quantum. In the extreme case the working medium can be a single particle or a few-level quantum system. The study of QHE has shown a remarkable similarity to standard thermodynamical models, raising the question of what is quantum in quantum thermodynamics. Our main result is the thermodynamical equivalence of all engine types in the quantum regime of small action. They have the same power, the same heat, the same efficiency, and they even have the same relaxation rates and relaxation modes. Furthermore, it is shown that QHE have a quantum-thermodynamic signature, i.e., thermodynamic measurements can confirm the presence of quantum coherence in the device. The coherent work extraction mechanism enables power outputs that greatly exceed the power of stochastic (dephased) engines.
\section{Introduction}
Thermodynamics emerged as a practical theory for evaluating the performance of steam engines. Since then the theory has proliferated and been applied to countless systems and applications. Eventually, thermodynamics became one of the pillars of theoretical physics. Amazingly, it survived great scientific revolutions such as quantum mechanics and general relativity. In fact, thermodynamics played a crucial role in the development of these theories. Black-body radiation led Planck and Einstein to introduce energy quantization, and the second law of thermodynamics led Bekenstein to discover the relation between the event horizon area and black hole entropy and temperature.
The Carnot efficiency is a manifestation of the second law in heat engines, and it is universally valid. In addition, all reversible engines must operate at the Carnot efficiency. Though very profound, these principles have limited practical value. First, real engines produce finite nonzero power and therefore cannot be reversible. Second, the performance of real engines is more severely limited by heat leaks, friction, and heat transport. This led to the study of efficiency at maximal power \cite{curzon75,novikov1958efficiency} and finite-time thermodynamics \cite{salamon01,AndresenFiniteTimeThermo2011}. In finite-time thermodynamics, time was introduced classically using empirical heat transport models. For microscopic systems, the natural avenue for introducing time dynamics is the quantum mechanics of open systems. The full quantum dynamics may lead to new mechanisms for extracting work or cooling. Alternatively, it may lead to new microscopic heat leaks and friction-like mechanisms.
Quantum thermodynamics is the study of thermodynamic quantities such as temperature, heat, work, and entropy in microscopic quantum systems, or even for a single particle. This includes the dynamical analysis of engines and refrigerators in the quantum regime \cite{alicki79,k24,k85,k152,k221,levy14,rahav12,allmahler10,linden10,correa14,mahler07b,skrzypczyk2014work,gelbwaser13,kolar13,alicki2014quantum,Nori2007QHE,lutz14,dorner2013extracting,dorner2012emergent,binder2014operational,DelCampo2014moreBang,gelbwaser2015Review,malabarba2014clock}, theoretical frameworks that take into account single-shot events \cite{horodecki2013fundamental,verdal13}, and the study of thermalization mechanisms \cite{mahlerbook,Eiset2012thermalizationInNature,Eisert2012probingTherm}.
It was natural to expect that new thermodynamic effects would surface in the quantum regime. However, quantum thermodynamic systems (even those with a single particle) show a remarkable similarity to macroscopic systems described by classical thermodynamics. When the baths are thermal, the Carnot efficiency limit is equally applicable to a small quantum system \cite{alicki79,spohn78}. Even classical fluctuation theorems hold without any alteration \cite{campisi09,campisi11,quan2008quantumFluctTheorem}.
Is there really nothing new and profound in the thermodynamics of small quantum systems? In this work, we present thermodynamic behavior that is purely quantum in its essence and has no classical counterpart.
Recently some progress on the role of quantum coherence in quantum thermodynamics has been made \cite{RudolphPRX_15,LostaglioRudolphCohConstraint}. In addition, quantum coherence has been shown to quantitatively affect the performance of heat machines \cite{mukamel12,scully03,scully2011quantum}. In this work we associate coherence with a specific \textit{thermodynamic effect} and relate it to a thermodynamic \textit{work extraction mechanism}.
Heat engines can be classified by their scheduling of the interactions with the baths and the work repository. The types include the four-stroke, the two-stroke, and the continuous engine (these engine types are described in more detail later on). The choice of engine type is usually guided by convenience of analysis or ease of implementation. Nevertheless, from a theoretical point of view, the fundamental differences or similarities between the various engine types are still uncharted. This is particularly true in the microscopic quantum regime. For brevity we discuss engines, but all our results apply equally to other heat machines such as refrigerators and heaters.
Our first result is that in the limit of small engine action (weak thermalization, and a weak driving field), all three engine types are thermodynamically equivalent. The equivalence holds also for transients and for states that are very far from thermal equilibrium. On top of providing a thermodynamic unification limit for the various engine types, this finding also establishes a connection to quantum mechanics as it crucially depends on phase coherence and quantum interference. In particular, the validity regime of the equivalence is expressed in terms of $\hbar$.
Our second result concerns quantum-thermodynamic signatures. Let us define a \textit{quantum signature} as a signal extracted from measurements that unambiguously indicates the presence of quantum effects (e.g. entanglement or interference). The Bell inequality for the EPR experiment is a good example. A \textit{quantum-thermodynamic signature} is a quantum signature obtained from measuring thermodynamic quantities. We show that it is possible to set an upper bound on the work output of a stochastic, coherence-free engine. Any engine that surpasses this bound must have some level of coherence. Hence, work exceeding the stochastic bound constitutes a quantum-thermodynamic signature. Furthermore, we distinguish between a coherent work extraction mechanism and a stochastic work extraction mechanism. This explains why in the equivalence regime, coherent engines produce significantly more power compared to the corresponding stochastic engine.
The equivalence derivation is based on three ingredients. First, we introduce a multilevel embedding framework that enables the analysis of all three types of engines in the same physical setup. Next, a ``norm action'' smallness parameter, $s$, is defined for engines using Liouville space. The third ingredient is the symmetric rearrangement theorem that is used to show why all three engine types have the same thermodynamic properties despite the fact that they exhibit very different density matrix dynamics.
In section two we describe the main engine types and introduce the multilevel embedding framework. Next, in section three, the multilevel embedding and the symmetric rearrangement theorem are used to derive the equivalence relations between the different engine types. After discussing the two fundamental work extraction mechanisms, in section four we present and study the over-thermalization effect in coherent heat engines. In section five we present a quantum-thermodynamic signature that separates quantum engines from stochastic engines. Finally, in section six we conclude and discuss extensions and future prospects.
\section{Heat engines types and the multilevel embedding scheme}
Heat engines are either discrete, such as two-stroke engines or the four-stroke Otto engine, or continuous, as in turbines. Quantum analogues of all these engine types have been studied\footnote{Other types consist of small variations and combinations of these types.}. Here we present a theoretical framework in which all three types of engines can be embedded in a unified physical setup. This framework, termed ``multilevel embedding'', is an essential ingredient in our theory, as it enables a meaningful comparison between different engine types.
\subsection{Heat and work}
A heat engine is a device that uses at least two thermal baths at different temperatures to extract work. Loosely speaking, work is the transfer of energy to a single degree of freedom. For example, increasing the excitation number of an oscillator, increasing the photon number in a specific optical mode (lasing), or increasing the kinetic energy in a single predefined direction. ``Battery'' and ``flywheel'' are terms often used in this context of work storage \cite{alicki13,hovhannisyan13}. We shall use the more general term ``work repository''. Heat, on the other hand, is energy spread over multiple degrees of freedom. Close to equilibrium and in a quasistatic process, heat is related to the temperature and to the effective number of degrees of freedom (entropy $S$) via the well-known relation $dQ=TdS$.
In elementary quantum heat engines the working substance comprises a single particle (or a few at most). Thus the working substance cannot reach equilibrium on its own. Furthermore, excluding a few non-generic cases, it is not possible to assign an equation of state that establishes a relation between thermodynamic quantities when the substance is in equilibrium. Nevertheless, QHE satisfy the second law and are therefore also bounded by the Carnot efficiency limit \cite{alicki79}.
Work strokes are characterized by zero contact with the baths and an inherently time-dependent Hamiltonian. The unitary evolution generated by this Hamiltonian can change the energy of the system. On the other hand, the von Neumann entropy and the purity remain fixed (the evolution at this stage is unitary). Hence the energy change of the system in this case constitutes pure work. The system's energy change is actually an energy exchange with the work repository.
When the system is coupled to a thermal bath and the Hamiltonian is fixed in time, the bath can change the populations of the energy levels. In steady state the system reaches a Gibbs state, in which the density matrix has no coherences in the energy basis and the populations of the levels are given by $p_{n,b}=e^{-\frac{E_{n}}{T_{b}}}/\sum_{n=1}^{N}e^{-\frac{E_{n}}{T_{b}}}$, where $N$ is the number of levels and $b$ stands for $c$ (cold) or $h$ (hot). In physical models where the system thermalizes via collisions with bath particles, full thermalization can be achieved in finite time \cite{gennaro2008entanglement,GennaroQuDit,rybar2012simulation,ziman2005description,RUswap}. However, it is not necessary for the baths to bring the system close to a Gibbs state for the proper operation of the engine. In particular, maximal efficiency (e.g. in Otto engines) can be achieved without full thermalization. Maximal power (work per cycle time) is also associated with partial thermalization \cite{curzon75,esposito09}. The defining property of a thermal bath is its aspiration to bring the system to a predefined temperature regardless of the system's initial state. The evolution in this stage does not conserve the eigenvalues of the system's density matrix, and therefore not only energy but also entropy is exchanged with the bath. The energy exchange in this stage is therefore considered heat.
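As a concrete (illustrative) special case of the Gibbs populations above, consider a two-level system with energies $E_{1}=0$ and $E_{2}=\Delta E$ coupled to bath $b$:

```latex
\begin{align*}
p_{1,b}=\frac{1}{1+e^{-\Delta E/T_{b}}},\qquad
p_{2,b}=\frac{e^{-\Delta E/T_{b}}}{1+e^{-\Delta E/T_{b}}},\qquad
\frac{p_{2,b}}{p_{1,b}}=e^{-\Delta E/T_{b}}.
\end{align*}
```

The population ratio depends only on the gap and the bath temperature, which is why even partial thermalization pulls the populations toward this fixed point.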
In contrast to definitions of heat and work that are based on the derivative of the internal energy \cite{alicki79,k281,anders2013thermodynamics}, our definitions are obtained from the energy balance when only one element (bath or external field) is coupled at a time. As we shall see later, in some engine types several agents change the internal energy simultaneously. Even in this case, this point of view will still be useful for obtaining consistent and physical definitions of heat and work.
\subsection{The three engine types}
There are three core engine types that operate with two thermal baths: the four-stroke engine, the two-stroke engine, and the continuous engine. A stroke is a time segment in which a certain operation takes place, for example thermalization or work extraction. Each stroke is a completely positive ($CP$) map, and therefore the one-cycle evolution operator of the engine is also a $CP$ map (since it is a product of $CP$ maps). For the extraction of work it is imperative that some of the stroke propagators do not commute \cite{k258}.
Otto engines and Carnot engines are examples of four-stroke engines. The simplest quantum four-stroke engine is the two-level Otto engine shown in Fig. 1a. In the first stroke only the cold bath is connected; thus, the internal energy changes are associated with heat exchange with the cold bath. The expansion and compression of the levels are fully described by a time-dependent Hamiltonian of the form $H(t)=f(t)\sigma_{z}$ (the baths are disconnected at this stage). In the second stroke, work is consumed in order to expand the levels, and in the fourth stroke work is produced when the levels revert to their original values. There is net work extraction since the populations in stages II and IV are different. \begin{figure}
\caption{(a) A two-level scheme of a four-stroke engine. (b) A two-particle scheme of a two-stroke engine. (c) A three-level scheme of a continuous engine.}
\label{fig1}
\end{figure} As we shall see later, other unitary operations are more relevant for the quantum equivalence of heat engines. Nevertheless, this particular operation resembles the classical expansion and compression of classical engines. The work is the energy exchanged with the system during the unitary stages: $W=W_{II}+W_{IV}=(\left\langle E_{3}\right\rangle -\left\langle E_{2}\right\rangle )+(\left\langle E_{5}\right\rangle -\left\langle E_{4}\right\rangle )$. We consider only energy expectation values for two main reasons. First, investigations of work fluctuations have revealed that quantum heat engines follow classical fluctuation laws \cite{campisi14}, and we search for quantum signatures in heat engines. The second reason is that, in our view, the engine should not be measured during operation. The measurement protocol used in quantum fluctuation theorems \cite{campisi09,campisi11,campisi14} eliminates the density matrix coherences. These coherences are a critical component of the equivalence and the quantum signature we study in this paper. Thus, although we frequently calculate work per cycle, the measured quantity is the cumulative work, and it is measured only at the end of the process. The averaged quantities are obtained by repeating the full experiment many times. Engines are designed to perform a task, and we assume that this completed task is the subject of measurement. The engine's internal state is not measured.
The heat per cycle taken from the cold bath is $Q_{c}=\left\langle E_{2}\right\rangle -\left\langle E_{1}\right\rangle $ and the heat taken from the hot bath is $Q_{h}=\left\langle E_{4}\right\rangle -\left\langle E_{3}\right\rangle $. In steady state the average energy of the \textit{system} returns to its initial value after one cycle\footnote{This is, of course, not true for the work repository.} so that $\left\langle E_{5}\right\rangle =\left\langle E_{1}\right\rangle $. From this it follows immediately that $Q_{c}+Q_{h}+W=0$, i.e. the first law of thermodynamics is obeyed. There is no instantaneous energy conservation of \textit{internal} energy, as energy may be temporarily stored in the interaction field or in the work repository.
In the two-stroke engine shown in Fig. 1b, the engine consists of two parts (e.g., two qubits) \cite{AllahverdyanOptDualStroke}. One part may couple only to the hot bath and the other only to the cold bath. In the first stroke, both parts interact with their respective baths (but do not necessarily reach equilibrium). In the second, unitary stroke the two engine parts are disconnected from the baths and coupled to each other. They undergo a mutual unitary evolution, and work is extracted in the process.
In the continuous engine shown in Fig. 1c, the two baths and the external interaction field are connected continuously. For example, in the three-level laser system shown in Fig. 1c, the laser light represented by $\mc H_{w}(t)$ generates stimulated emission that extracts work from the system. This system was first studied in a thermodynamic context in \cite{scovil59}, while a more comprehensive analysis was given in \cite{k122}. It is imperative that the external field be time-dependent. If it is time-independent, the problem becomes a pure heat transport problem where $Q_{h}=-Q_{c}\neq0$. In heat transport the interaction field merely ``dresses'' the levels so that the baths see a slightly modified system. The Lindblad generators are modified accordingly, and heat flows without work being extracted or consumed \cite{levy214}. Variations on these engine types may emerge due to realization constraints. For example, in the two-stroke engine the baths may be continuously connected. This variation and others can still be analyzed using the tools presented in this paper.
\subsection{Efficiency vs. work and heat}
Since the early days of Carnot, efficiency has received considerable attention for two main reasons. First, this quantity is of great interest from both theoretical and practical points of view. Second, efficiency satisfies a universal bound that is independent of the engine details. The Carnot efficiency bound is a manifestation of the second law of thermodynamics. Indeed, for Markovian bath dynamics it was shown that quantum heat engines cannot exceed the Carnot efficiency \cite{alicki79}. Recently, a more general approach based on a fluctuation theorem for QHE showed that the Carnot bound still holds for quantum engines \cite{campisi14}. Studies in which efficiencies higher than Carnot are reported \cite{scully03} are interesting, but they use non-thermal baths and therefore, not surprisingly, deviate from results derived in the thermodynamic framework that deals with thermal baths. For example, an electric engine is not limited to the Carnot efficiency since its power source is not thermal. Although the present work has an impact on efficiency as well, we focus on work and heat separately in order to unravel quantum effects. As will be exemplified later, in some elementary cases these quantum effects do not influence the efficiency.
\subsection{Bath description and Liouville space}
The dynamics of the working fluid (system) interacting with the heat baths is described by the Lindblad-Gorini-Kossakowski-Sudarshan (LGKS) master equation for the density matrix \cite{breuer,lindblad76,gorini276}: \begin{equation} d_{t}\rho=L(\rho)=-i[H_{s},\rho]+\sum_{k}A_{k}\rho A_{k}^{\dagger}-\frac{1}{2}A_{k}^{\dagger}A_{k}\rho-\frac{1}{2}\rho A_{k}^{\dagger}A_{k},\label{eq: Lind Hil} \end{equation} where the $A_{k}$ operators depend on the temperature, the relaxation time of the bath, the system-bath coupling, and also on the system Hamiltonian $H_{s}$ \cite{breuer}. This form already encapsulates the Markovian assumption of no memory. The justification for these equations arises from a ``microscopic derivation'' in the weak system-bath coupling limit \cite{davies74}. In this derivation a weak interaction field couples the system of interest to a large system (the bath) at temperature $T$. This interaction brings the system into a Gibbs state at temperature $T$. The Lindblad thermalization operators $A_{k}$ used for the baths are described in the next section.
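For orientation, a standard textbook choice of thermalization operators for a two-level system of gap $\omega$ (given here purely for illustration; the operators actually used in this work are specified later) is

```latex
\begin{align*}
A_{\downarrow}=\sqrt{\gamma(\bar{n}+1)}\,\sigma_{-},\qquad
A_{\uparrow}=\sqrt{\gamma\bar{n}}\,\sigma_{+},\qquad
\bar{n}=\frac{1}{e^{\omega/T}-1},
\end{align*}
```

where the rate ratio $\bar{n}/(\bar{n}+1)=e^{-\omega/T}$ enforces detailed balance and drives the system to the Gibbs state at temperature $T$.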
Equation (\ref{eq: Lind Hil}) is a linear equation, so it can always be rearranged into a vector equation. Given an index mapping $\rho_{N\times N}\to\ket{\rho}_{1\times N^{2}}$, the Lindblad equation reads: \begin{equation} id_{t}\ket{\rho}=(\mc H_{s}+\mc L)\ket{\rho},\label{eq: Lind Lio} \end{equation} where $\mc H_{s}$ is a Hermitian $N^{2}\times N^{2}$ matrix that originates from $H_{s}$, and $\mc L$ is a non-Hermitian $N^{2}\times N^{2}$ matrix that originates from the Lindblad evolution generators $A_{k}$. This extended space is called Liouville space \cite{mukamel1995principles}. In this paper we use calligraphic letters for operators in Liouville space and ordinary letters for operators in Hilbert space. For states, however, $\ket A$ will denote a vector in Liouville space formed from $A_{N\times N}$ by ``vec-ing'' $A$ into a column, in the same procedure by which $\rho$ is converted into $\ket{\rho}$. A short review of Liouville space and some of its properties is given in Appendix II.
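Explicitly, in one common column-stacking convention one has $\mathrm{vec}(A\rho B)=(B^{T}\otimes A)\ket{\rho}$, which gives (the exact form depends on the stacking convention adopted):

```latex
\begin{align*}
\mc H_{s} &= I\otimes H_{s}-H_{s}^{T}\otimes I,\\
\mc L &= i\sum_{k}\left[\bar{A}_{k}\otimes A_{k}
-\tfrac{1}{2}\,I\otimes A_{k}^{\dagger}A_{k}
-\tfrac{1}{2}\,(A_{k}^{\dagger}A_{k})^{T}\otimes I\right],
\end{align*}
```

where the bar denotes complex conjugation; the first expression is manifestly Hermitian while $\mc L$ in general is not.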
While the Lindblad description works very well for sufficiently long times, it fails at very short times, where some of the underlying approximations break down. On time scales where the bath still retains a memory of the system's past states, the semigroup property of the Lindblad equation no longer holds: $\ket{\rho(t'+t)}\neq e^{-i(\mc H_{s}+\mc L)t}\ket{\rho(t')}$. This sets a cutoff for the validity of the engine-type equivalence in the Markovian approximation.
Next we introduce the multilevel embedding scheme that enables us to discuss various heat engines in the same physical setup.
\begin{figure}
\caption{In the standard two-level Otto engine there are two levels $E_{g,e}$ (purple) that change in time to $E_{g,e}'$. In the multilevel embedding framework, the levels ($E_{1-4}$) are fixed in time (black dashed lines), but a time-dependent field ($\pi$ pulse, swap operation) transfers the population (green) to the other levels. For a swap operation the two schemes lead to the same final state and are therefore associated with the same work. Nonetheless, the multilevel scheme is more general, since for weaker unitary transformations (instead of the $\pi$ pulse) coherences are generated. We show that this type of coherence can significantly boost the power output of the engine.}
\label{fig2}
\end{figure}
\subsection{Multilevel embedding}
Let the working substance of the quantum engine be an $N$-level system. These levels are fixed in time (i.e., they do not change as in Fig. 1a). For simplicity, the levels are non-degenerate. We divide the energy levels into a cold manifold and a hot manifold. During the operation of the engine, the levels in the cold manifold interact only with the cold bath, and the levels in the hot manifold interact only with the hot bath. Each thermal coupling can be turned on and off as a function of time, but the assignment of a level to a manifold does not change in time.
If the manifolds do not overlap, the hot and cold thermal operations commute, and they can be applied at the same time or one after the other; the end result is the same. Nevertheless, our scheme also includes the possibility that one level appears in both manifolds. This is the case for the three-level continuous engine shown in Fig. 1c. For simplicity, we exclude the possibility of more than one mutual level. If there are two or more overlapping levels, there is an inevitable steady-state heat transport from the hot bath to the cold bath, even in the absence of an external field that extracts work. In the context of heat engines this can be interpreted as a heat leak. This ``no field -- no transport'' condition holds for many engines studied in the literature. Nonetheless, it is not a necessary condition for the validity of our results.
This manifold division seems sensible for the continuous engine and even for the two-stroke engine in Fig. 1b, but how can it be applied to the four-stroke engine shown in Fig. 1a? The two levels interact with both baths and also change their energy values in time, contrary to the assumption of fixed energy levels. Nevertheless, this engine is also incorporated in the multilevel embedding framework. Instead of two levels as in Fig. 1a, consider the four-level system shown in the dashed green lines in Fig. 2.
\begin{figure}
\caption{Representation of the three types of engines in the multilevel embedding framework. In this scheme the different engine types differ only in the order of coupling to the baths and work repository. Since the interactions and energy levels are the same for all engine types, a meaningful comparison of performance becomes possible. }
\label{fig3}
\end{figure} Initially, only levels 2 and 3 are populated and coupled to the cold bath (2 \& 3 are in the cold manifold). In the unitary stage an interaction Hamiltonian $H_{swap}$ generates a full swap of populations and coherences according to the rule $1\leftrightarrow2,3\leftrightarrow4$. Now levels 1 and 4 are populated and 2 and 3 are empty. Therefore, this system fully simulates the expanding-level engine shown in Fig. 1a. At the same time, it satisfies the separation into well-defined time-independent manifolds as required by the multilevel embedding scheme.
The full swap used to embed the traditional four-stroke Otto engine is not mandatory, and other unitary operations can be applied. This extension of the four-stroke scheme is critical for our work, since the equivalence of engines appears when the unitary operation is fairly close to the identity transformation.
Figure 3 shows how the three engine types are represented in the multilevel embedding scheme. The advantage of the multilevel scheme now becomes clear. All three engine types can be described in the same physical system with the same baths and the same coupling to external fields (work extraction). The engine types differ only in the order of the coupling to the baths and to the work repository. While the thermal operations commute if the manifolds do not overlap, the unitary operation never commutes with the thermal strokes.
On the right of Fig. 3, we plotted a ``brick'' diagram for the evolution operator. Black stands for a unitary transformation generated by some external field, while blue and red stand for hot and cold thermal coupling, respectively. When the bricks are on top of each other they operate simultaneously. Now we are in a position to derive the first main result of this paper: the thermodynamic equivalence of engine types in the quantum regime.
\section{Continuous and stroke engine equivalence}
We first discuss the equivalence of continuous and four-stroke engines. Nevertheless, all the arguments are valid for two-stroke engines as well, as explained later on. Although our results are not limited to a specific engine model, it will be useful to consider the simple engine shown in Fig. 4. We will use this model to highlight a few points and also for numerical simulations. The Hamiltonian part of the system is: \begin{equation} H_{0}+\cos(\omega t)H_{w},\label{eq: H(t) num} \end{equation} where $H_{0}=-\frac{\Delta E_{h}}{2}\ketbra 11-\frac{\Delta E_{c}}{2}\ketbra 22+\frac{\Delta E_{c}}{2}\ketbra 33+\frac{\Delta E_{h}}{2}\ketbra 44$, $H_{w}=\epsilon(t)\ketbra 12+\epsilon(t)\ketbra 34+h.c.$ and $\omega=\frac{\Delta E_{h}-\Delta E_{c}}{2}$.
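Concretely, this Hamiltonian can be assembled in a few lines. The following sketch (Python/NumPy) uses the energy parameters of the numerical examples below ($\Delta E_{h}=4$, $\Delta E_{c}=1$); the drive amplitude `eps` is a sample value chosen for illustration.

```python
import numpy as np

# Sketch of the four-level Hamiltonian H0 + cos(omega t) Hw.
# dE_h, dE_c follow the text; eps is a sample drive amplitude.
dE_h, dE_c, eps = 4.0, 1.0, 1e-4
E = np.eye(4)
op = lambda i, j: np.outer(E[i], E[j])   # |i+1><j+1| with 0-based indices

# H0 = -dE_h/2 |1><1| - dE_c/2 |2><2| + dE_c/2 |3><3| + dE_h/2 |4><4|
H0 = np.diag([-dE_h/2, -dE_c/2, dE_c/2, dE_h/2])

# Hw = eps (|1><2| + |3><4|) + h.c. couples the two resonant gaps
Hw = eps*(op(0, 1) + op(2, 3))
Hw = Hw + Hw.conj().T

omega = (dE_h - dE_c)/2                  # resonant drive frequency

def H(t):
    """Full Hamiltonian at time t."""
    return H0 + np.cos(omega*t)*Hw
```

With these values both gaps $E_{2}-E_{1}$ and $E_{4}-E_{3}$ equal $\omega=3/2$, which is why a single driving frequency suffices.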
The driving frequency that couples the system to the work repository is in resonance with the top and bottom energy gaps. The specific partitioning into hot and cold manifolds was chosen so that only one driving frequency (e.g. a single laser) is needed to implement the system instead of two.
\begin{figure}
\caption{Illustration of the engine used in the numerical simulation. By changing the time order of the coupling to $H_{w}$ and to the thermal baths, all three types of engines can be realized in this model. }
\end{figure}
We assume that the Rabi frequency of the drive $\epsilon$ is small compared to the decay rates of the baths, $\epsilon\ll\gamma_{c},\gamma_{h}$. Under this assumption the dressing effect of the driving field on the system-bath interaction can be ignored. It is justified, then, to use ``local'' Lindblad operators obtained in the absence of a driving field \cite{plenio10,levy214}. For plotting purposes (reasonable duty cycle) in the numerical examples we will often use $\epsilon=\gamma_{c}=\gamma_{h}$. While this poses no problem for stroke engine realizations, for an experimental demonstration of equivalence with continuous engines one has to increase the duty cycle so that $\epsilon\ll\gamma_{c},\gamma_{h}$. That is, the unitary stage should be made longer but with a weaker driving field.
The Lindblad equation is given by (\ref{eq: Lind Hil}) with the Hamiltonian (\ref{eq: H(t) num}) and with the following Lindblad operators: \begin{eqnarray*} A_{1} & = & \sqrt{\gamma_{h}}e^{-\frac{\Delta E_{h}}{2T_{h}}}\ketbra 41,\\ A_{2} & = & \sqrt{\gamma_{h}}\ketbra 14,\\ A_{3} & = & \sqrt{\gamma_{c}}e^{-\frac{\Delta E_{c}}{2T_{c}}}\ketbra 32,\\ A_{4} & = & \sqrt{\gamma_{c}}\ketbra 23. \end{eqnarray*} In all the numerical simulation we use $\Delta E_{h}=4$, $\Delta E_{c}=1$, $T_{h}=5$, $T_{c}=1$. The interaction with the baths or with work repository can be turned on and off at will.
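A minimal sketch of how these jump operators assemble into a Liouville-space dissipator, using the column-stacking convention $\mathrm{vec}(AXB)=(B^{T}\otimes A)\mathrm{vec}(X)$. The bath parameters follow the text; the rates $\gamma_{h}=\gamma_{c}=10^{-4}$ are sample values chosen for illustration.

```python
import numpy as np

# Jump operators A1..A4 of the text and their vectorized dissipator.
dE_h, dE_c, T_h, T_c = 4.0, 1.0, 5.0, 1.0
g_h = g_c = 1e-4                       # sample rates
E = np.eye(4)
op = lambda i, j: np.outer(E[i], E[j])  # |i+1><j+1|, 0-based indices

A = [np.sqrt(g_h)*np.exp(-dE_h/(2*T_h))*op(3, 0),  # A1 ~ |4><1|
     np.sqrt(g_h)*op(0, 3),                        # A2 ~ |1><4|
     np.sqrt(g_c)*np.exp(-dE_c/(2*T_c))*op(2, 1),  # A3 ~ |3><2|
     np.sqrt(g_c)*op(1, 2)]                        # A4 ~ |2><3|

def dissipator(ops):
    """Liouville matrix of sum_k A_k . A_k^+ - (1/2){A_k^+ A_k, .},
    with column-stacking vec(A X B) = kron(B.T, A) vec(X)."""
    I = np.eye(4)
    D = np.zeros((16, 16), dtype=complex)
    for a in ops:
        aa = a.conj().T @ a
        D += np.kron(a.conj(), a) - 0.5*(np.kron(I, aa) + np.kron(aa.T, I))
    return D

L_diss = dissipator(A)
```

Trace preservation corresponds to $\mathrm{vec}(I)^{T}\mathcal{D}=0$, which holds for the matrix above.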
Starting with the continuous engine, we choose a unit cell that contains exactly $6m$ ($m$ is an integer) complete cycles of the drive ($\tau_{d}=2\pi/\omega$) so that $\tau_{cyc}=6m\tau_{d}$. The difference between the engine cycle time and the cycles of the external drive will become clear in stroke engines (the factor of six will also be clarified).
For the validity of the secular approximation used in the Lindblad microscopic derivation \cite{breuer}, the evolution time scale must satisfy $\tau\gg\frac{2\pi}{min(\Delta E_{h},\Delta E_{c})}$. Therefore $m$ must satisfy $m\gg\frac{\omega}{min(\Delta E_{h},\Delta E_{c})}$. Note that if the Lindblad description is obtained from a different physical mechanism (e.g. thermalizing collisions) then this condition is not required.
Next we transform to the interaction picture (denoted by a tilde) using the transformation $\mc U=e^{-i\mc H_{0}t}$, and perform the rotating wave approximation (RWA) by dropping terms oscillating at frequency $2\omega$. For the RWA to be valid the amplitude of the field must satisfy $\epsilon\ll\omega$. The resulting Liouville space super Hamiltonian is:
\begin{equation} \tilde{\mc H}=\mc L_{c}+\mc L_{h}+\frac{1}{2}\mc H_{w}.\label{eq: H ip-1} \end{equation} Note that $\mc L_{h,c}$ are not modified by the transformation to the rotating frame, since $[\mc L_{h,c},\mc H_{0}]=0$ in the microscopic derivation\footnote{This can be seen by following the derivation in \cite{breuer} and using the formalism introduced in \cite{machnes14}.}. Now that we have established a regime of validity and the super Hamiltonian that governs the system, we can turn to the task of transforming from one engine type to the others and studying what properties change in this transformation. The engine type transformation is based on the Strang decomposition \cite{StrangDecompError2000jahnke} for two non-commuting operators $\mc A$ and $\mc B$ (the operators need not be Hermitian): \begin{equation} e^{(\mc A+\mc B)dt}=e^{\frac{1}{2}\mc Adt}e^{\mc Bdt}e^{\frac{1}{2}\mc Adt}+O(s^{3})\cong e^{\frac{1}{2}\mc Adt}e^{\mc Bdt}e^{\frac{1}{2}\mc Adt},\label{eq: Strang} \end{equation} where $s=(\nrm{\mc A}+\nrm{\mc B})dt$ must be small for the expansion to be valid. $\nrm{\mc A}$ is the spectral norm (or operator norm)
of $\mc A$, i.e. the largest singular value of $\mc A$, $\nrm{\mc A}=\max\sqrt{eig(\mc A\mc A^{\dagger})}$ \cite{roger1994topics}. For Hermitian operators with eigenvalues $\lambda_{\mc A,i}$ the spectral norm is $\max(\left|\lambda_{\mc A,i}\right|)$. In Appendix I we derive the condition $s\ll\frac{1}{2}\hbar$ for the validity of (\ref{eq: Strang}). We will use the symbol $\cong$ to denote equality up to an $O(s^{3})$ correction. Let the evolution operator of the continuous engine over the chosen cycle time $\tau_{cyc}=6m\tau_{d}$ be: \begin{equation} \tilde{\mc K}^{\text{cont}}=e^{-i\tilde{\mc H}\tau_{cyc}}. \end{equation} By first splitting $\mc L_{c}$ and then splitting $\mc L_{h}$ we get: \begin{eqnarray} \tilde{\mc K}^{\text{4 stroke}} & = & e^{-i(3\mc L_{c})\frac{\tau_{cyc}}{6}}e^{-i(\frac{3}{2}\mc H_{w})\frac{\tau_{cyc}}{6}}e^{-i(3\mc L_{h})\frac{\tau_{cyc}}{3}}\nonumber \\
& \times & e^{-i(\frac{3}{2}\mc H_{w})\frac{\tau_{cyc}}{6}}e^{-i(3\mc L_{c})\frac{\tau_{cyc}}{6}}.\label{eq: K 4s} \end{eqnarray} \begin{figure}
\caption{(color online) Illustration of the splitting of the evolution operator of the continuous engine (a), into a four-stroke engine (b), a two-stroke engine (c), and a two-field four-stroke engine (d). The horizontal axis corresponds to time as before. The size of the brick corresponds to the strength of coupling to the work repository or to the baths. The symmetric rearrangement theorem ensures that in the limit of small action, any rearrangement that is symmetric with respect to the center, and conserves the area of each color does not change the total power and heat. }
\label{fig5}
\end{figure} Note that the system is periodic, so the first and last stages are two parts of the same thermal stroke. Consequently (\ref{eq: K 4s}) describes an evolution operator of a four-stroke engine whose unit cell is symmetric. This splitting is illustrated in Figs. 5a and 5b. There are two thermal strokes and two work strokes that together constitute an evolution operator describing a four-stroke engine. The cumulative evolution time as written above is $(m+m+2m+m+m)\tau_{d}=6m\tau_{d}=\tau_{cyc}$. Yet, to maintain the same cycle time as chosen for the continuous engine, the couplings to the baths and to the field were multiplied by three. In this four-stroke engine each thermal or work stroke operates in total only a third of the cycle time compared to the continuous engine. Hence, the coupling must be three times larger in order to generate the same evolution.
By virtue of the Strang decomposition, $\tilde{\mc K}^{\text{4 stroke}}\cong\tilde{\mc K}^{\text{cont}}$ if $s\ll1$. The \textit{action parameter} $s$ of the engine is defined as $s=\intop_{-\tau_{cyc}/2}^{\tau_{cyc}/2}\nrm{\tilde{\mc H}}dt=(\frac{1}{2}\nrm{\mc H_{w}}+\nrm{\mc L_{h}}+\nrm{\mc L_{c}})\tau_{cyc}$. Although we use $\hbar=1$ units, note that $s$ has dimensions of $\hbar$, and therefore the relation $\tilde{\mc K}^{\text{4 stroke}}\cong\tilde{\mc K}^{\text{cont}}$ holds only when the engine action is small compared to $\hbar$. This first appearance of a quantum scale will be discussed later on.
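The third-order nature of the Strang splitting error can be checked numerically. The sketch below uses two random Hermitian matrices as stand-ins for $\mc A$ and $\mc B$ (illustrative matrices, not the engine's generators): halving the step should reduce the error by roughly a factor of $2^{3}=8$.

```python
import numpy as np
from scipy.linalg import expm

# Verify the O(s^3) scaling of the Strang splitting error on two
# random non-commuting Hermitian matrices (illustration only).
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)); A = (M + M.T)/2
M = rng.standard_normal((4, 4)); B = (M + M.T)/2

def strang_err(dt):
    exact = expm((A + B)*dt)
    split = expm(A*dt/2) @ expm(B*dt) @ expm(A*dt/2)
    return np.linalg.norm(exact - split, 2)   # spectral norm

# Third-order error: halving dt shrinks the error by about 8.
ratio = strang_err(1e-2) / strang_err(5e-3)
```

The observed ratio is close to 8, consistent with the $O(s^{3})$ correction in (\ref{eq: Strang}).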
\subsection{Dynamical aspect of the equivalence}
Before discussing the thermodynamic properties of the engine we point out that $\tilde{\mc K}^{\text{4 stroke}}\cong\tilde{\mc K}^{\text{cont}}$ has two immediate important consequences. First, both engines have the same steady state solution over one cycle, $\ket{\tilde{\rho}_{s}}$: \begin{eqnarray} \tilde{\mc K}^{\text{4 stroke}}(\tau_{cyc})\ket{\tilde{\rho}_{s}}\cong\tilde{\mc K}^{\text{cont}}(\tau_{cyc})\ket{\tilde{\rho}_{s}} & = & \ket{\tilde{\rho}_{s}},\\ (\mc L_{c}+\mc L_{h}+\frac{1}{2}\mc H_{w})\ket{\tilde{\rho}_{s}} & = & 0.\label{eq: steady state} \end{eqnarray} At time instances that are not integer multiples of $\tau_{cyc}$, the states of the engines differ significantly ($O(s^{1})$) since $\tilde{\mc K}^{\text{4 stroke}}(t<\tau_{cyc})\neq\tilde{\mc K}^{\text{cont}}(t<\tau_{cyc})$. That is, inside the cycle the engines are still significantly different from each other. The second consequence is that the two engines have the same transient modes as well. When monitored at multiples of $\tau_{cyc}$, both engines will show the same relaxation dynamics to steady state if they start from the same initial condition. In the remainder of the paper, when the evolution operator is written without a time tag, it denotes the evolution operator of a complete cycle.
\subsection{\label{sub: Thermo equiv}Thermodynamic aspect of the equivalence}
The equivalence of the one-cycle evolution operators of the two engines does not immediately imply that the engines are thermodynamically equivalent. Generally, in stroke engines the heat and work depend on the dynamics of the state inside the cycle, which is very different ($O(s^{1})$) from the constant state of the continuous engine. However, in this section we show that all thermodynamic properties are equivalent in both engines up to $O(s^{3})$ corrections, similarly to the evolution operator. We start by evaluating the work and heat in the continuous engine. By considering infinitesimal time elements where $\mc L_{c},\mc L_{h}$ and $\mc H_{w}$ operate separately, one obtains that the heat and work currents are\footnote{Because of the property $\bra{H_{0}}\mc H_{0}=0$, $\bra{H_{0}}$ is not modified by the transformation to the interaction picture. } $j_{c(h)}=\braOket{H_{0}}{\mc L_{c(h)}}{\tilde{\rho}_{s}(t)}$ and $j_{w}=\braOket{H_{0}}{\frac{1}{2}\mc H_{w}}{\tilde{\rho}_{s}(t)}$. In the continuous engine the steady state satisfies $\ket{\tilde{\rho}_{s}(t)}=\ket{\tilde{\rho}_{s}}$, so the total heat and work in steady state in one cycle are: \begin{eqnarray} W{}^{\text{cont}} & = & \braOket{H_{0}}{\frac{1}{2}\mc H_{w}}{\tilde{\rho}_{s}}\tau_{cyc},\label{eq: Wcont}\\ Q_{c(h)}^{\text{cont}} & = & \braOket{H_{0}}{\mc L_{c(h)}}{\tilde{\rho}_{s}}\tau_{cyc}. \end{eqnarray} These quantities should be compared to the work and heat in the four-stroke engine. Instead of carrying out the explicit calculation for this specific four-stroke splitting, we use the symmetric rearrangement theorem (SRT) derived in Appendix III. A symmetric rearrangement of a Hamiltonian is a change in the order of the couplings $\epsilon(t),\gamma_{c}(t),\gamma_{h}(t)$ that satisfies $\intop\epsilon(t)dt=\text{const},\:\intop\gamma_{c}(t)dt=\text{const},\:\intop\gamma_{h}(t)dt=\text{const}$ and $\epsilon(t)=\epsilon(-t),\:\gamma_{c}(t)=\gamma_{c}(-t),\:\gamma_{h}(t)=\gamma_{h}(-t)$.
$\mc H^{II\:stroke}(t)$, $\mc H^{IV\:stroke}(t)$ and any other super Hamiltonian obtained from the Strang splitting of the continuous engine are examples of symmetric rearrangements. The SRT exploits the symmetry of the Hamiltonian to show that a symmetric rearrangement changes heat and work only to $O(s^{3})$. In Appendix III we show that \begin{eqnarray} W^{\text{4 stroke}} & \cong & W{}^{\text{cont}},\label{eq: W4 Wc}\\ Q_{c(h)}^{\text{4 stroke}} & \cong & Q_{c(h)}^{\text{cont}}.\label{eq: Q4 Qc} \end{eqnarray} Thus, we conclude that up to $s^{3}$ corrections the engines are thermodynamically equivalent. When $s\ll1$, work, power, heat and efficiency converge to the same values for all engine types. Clearly, inside the cycle the work and heat in the two engines are significantly different ($O(s^{1})$), but after a complete cycle they become equivalent. The symmetry makes this equivalence more accurate, as it holds up to $s^{3}$ (rather than $s^{2}$). Interestingly, the work done in the first half of the cycle is $\frac{1}{2}W{}^{\text{cont}}+O(s^{2})$. However, when the contribution of the other half is added, the $s^{2}$ correction cancels out and (\ref{eq: W4 Wc}) is obtained (see Appendix III).
\begin{figure}
\caption{The equivalence of heat engine types in transient evolution when the engine action is small compared to $\hbar$. (a) The cumulative power transferred to the work repository is plotted as a function of time. All engines start in the excited state $\protect\ket 4$, which is very far from the steady state of the system. At complete engine cycles (vertical lines) the power in all engines is the same. (b) Once the action is increased (here the field $\epsilon$ was increased), the equivalence no longer holds. }
\label{fig6}
\end{figure}
We emphasize that the SRT and its implications (\ref{eq: W4 Wc}),(\ref{eq: Q4 Qc}) are valid for transients and for any initial state - not just for steady state operation. In Fig. 6a we show the cumulative work as a function of time for a four-stroke engine and a continuous engine. The vertical lines indicate complete cycles of the four-stroke engine. In addition to the parameters common to all examples specified before, we used $\epsilon=\gamma_{c}=\gamma_{h}=10^{-4}$, and the equivalence of work at the vertical lines is apparent. In Fig. 6b the field and thermal couplings were increased to $\epsilon=\gamma_{c}=\gamma_{h}=5\times10^{-3}$. Now the engines perform differently even at the end of each cycle. This example is a somewhat extreme situation where the system changes quite rapidly (a consequence of the initial state we chose). In other cases, such as steady state operation, the equivalence can be observed for much larger action values.
The splitting used in (\ref{eq: K 4s}) was based on first splitting $\mc L_{c}$ and then $\mc H_{w}$. Other engines can be obtained by different splitting of $\tilde{\mc K}^{\text{cont}}$. For example, consider the two-stroke engine obtained by splitting $\mc L_{c}+\mc L_{h}$:
\begin{eqnarray} \tilde{\mc K}^{\text{2 stroke}} & = & e^{-i\frac{3}{2}(\mc L_{c}+\mc L_{h})\frac{\tau_{cyc}}{3}}e^{-i(\frac{3}{2}\mc H_{w})\frac{\tau_{cyc}}{3}}e^{-i\frac{3}{2}(\mc L_{c}+\mc L_{h})\frac{\tau_{cyc}}{3}}.\nonumber \\ \label{eq: K 2stroke} \end{eqnarray} Note that in the two-stroke engine the thermal coupling has to be $\frac{3}{2}$ times stronger than in the continuous case in order to provide the same action. Using the SRT we obtain the complete equivalence relations of the three main engine types\footnote{Note that since $\mc K=e^{-i\mc H_{0}\tau_{cyc}}\tilde{\mc K}$, the equivalence of the evolution operators holds also in the original frame, not just in the interaction frame.}: \begin{eqnarray} W^{\text{2 stroke}} & \cong W^{\text{4 stroke}} & \cong W{}^{\text{cont}},\\ Q_{c(h)}^{\text{2 stroke}}\cong & Q_{c(h)}^{\text{4 stroke}}\cong & Q_{c(h)}^{\text{cont}},\\ \tilde{\mc K}^{\text{2 stroke}} & \cong\tilde{\mc K}^{\text{4 stroke}} & \cong\tilde{\mc K}^{\text{cont}}. \end{eqnarray}
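The operator identities behind (\ref{eq: K 4s}) and (\ref{eq: K 2stroke}) can be checked numerically. In the sketch below, `Gc`, `Gh`, `Gw` are random stand-ins for $-i\mc L_{c}$, $-i\mc L_{h}$ and $-i\frac{1}{2}\mc H_{w}$ (an illustration of the splitting identity, not the physical model); the amplification factors 3 and 3/2 are exactly those of the text.

```python
import numpy as np
from scipy.linalg import expm

# Random stand-ins for the generators -i*Lc, -i*Lh, -i*(1/2)Hw.
rng = np.random.default_rng(1)
Gc, Gh, Gw = (rng.standard_normal((6, 6)) for _ in range(3))
tau = 1e-3   # chosen so the action s = (|Gc|+|Gh|+|Gw|) tau is small

K_cont = expm((Gc + Gh + Gw)*tau)

# Four-stroke splitting: bath/field couplings amplified by 3 (or 3/2)
K_4s = (expm(3*Gc*tau/6) @ expm(3*Gw*tau/6) @ expm(3*Gh*tau/3)
        @ expm(3*Gw*tau/6) @ expm(3*Gc*tau/6))

# Two-stroke splitting: thermal coupling amplified by 3/2
K_2s = (expm(1.5*(Gc + Gh)*tau/3) @ expm(3*Gw*tau/3)
        @ expm(1.5*(Gc + Gh)*tau/3))

err4 = np.linalg.norm(K_cont - K_4s, 2)
err2 = np.linalg.norm(K_cont - K_2s, 2)
```

Both errors are of order $s^{3}$, far below the deviation of $K^{\text{cont}}$ itself from the identity.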
Another type of engine exists when the interaction with the work repository is carried out by two physically distinct couplings. This happens naturally if $E_{4}-E_{3}\neq E_{2}-E_{1}$, so that two different driving lasers have to be used and the Hamiltonian is $H_{0}+\cos((E_{2}-E_{1})t)H_{w1}+\cos((E_{4}-E_{3})t)H_{w2}$. In such cases, one can make the splitting shown in Fig. 5d. In this numerical example we use $H_{w1}=\epsilon(t)\ketbra 12+h.c.$ and $H_{w2}=\epsilon(t)\ketbra 34+h.c.$ Since there are two different work strokes in addition to the thermal strokes, this engine constitutes a different type of four-stroke engine.
\subsection{Power and energy flow balance}
The average power and heat flow in the equivalence regime are independent of the cycle time: \begin{eqnarray} P_{W} & = & \frac{W}{\tau_{cyc}}=\braOket{H_{0}}{\frac{1}{2}\mc H_{w}}{\tilde{\rho}_{s}},\\ J_{c(h)} & = & \frac{Q_{c(h)}}{\tau_{cyc}}=\braOket{H_{0}}{\mc L_{c(h)}}{\tilde{\rho}_{s}}. \end{eqnarray} Using the steady state definition (\ref{eq: steady state}) one obtains the steady state energy balance equation: \begin{equation} P_{w}+J_{c}+J_{h}=0.\label{eq: steady state energy} \end{equation} Equation (\ref{eq: steady state energy}) does not necessarily hold if the system is not in steady state as energy may be temporarily stored in the baths or in the work repository.
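The balance (\ref{eq: steady state energy}) can be verified directly on the four-level model. The sketch below builds the RWA generator $-i(\frac{1}{2}\mc H_{w})+\mc L_{c}+\mc L_{h}$ in Liouville space, extracts the steady state as its null vector, and checks $P_{w}+J_{c}+J_{h}=0$; the couplings $\epsilon=\gamma_{h}=\gamma_{c}=5\times10^{-4}$ are the sample values of Fig. 7.

```python
import numpy as np

# Steady-state energy balance for the four-level engine model.
dE_h, dE_c, T_h, T_c = 4.0, 1.0, 5.0, 1.0
eps = g_h = g_c = 5e-4
E = np.eye(4)
op = lambda i, j: np.outer(E[i], E[j])
vec = lambda X: X.reshape(-1, order='F')       # column stacking
unvec = lambda v: v.reshape(4, 4, order='F')

H0 = np.diag([-dE_h/2, -dE_c/2, dE_c/2, dE_h/2])
Hw = eps*(op(0, 1) + op(2, 3)); Hw = Hw + Hw.T

def diss(ops):
    """Liouville matrix of sum_k A_k . A_k^+ - (1/2){A_k^+ A_k, .}."""
    I = np.eye(4); D = np.zeros((16, 16), dtype=complex)
    for a in ops:
        aa = a.conj().T @ a
        D += np.kron(a.conj(), a) - 0.5*(np.kron(I, aa) + np.kron(aa.T, I))
    return D

def comm(Hm):
    """Liouville matrix of -i[Hm, .]."""
    I = np.eye(4)
    return -1j*(np.kron(I, Hm) - np.kron(Hm.T, I))

L_h = diss([np.sqrt(g_h)*np.exp(-dE_h/(2*T_h))*op(3, 0), np.sqrt(g_h)*op(0, 3)])
L_c = diss([np.sqrt(g_c)*np.exp(-dE_c/(2*T_c))*op(2, 1), np.sqrt(g_c)*op(1, 2)])
L_full = comm(Hw/2) + L_h + L_c        # RWA: the drive enters as Hw/2

# Steady state = null vector of the generator, normalized to unit trace
w, V = np.linalg.eig(L_full)
rho = unvec(V[:, np.argmin(abs(w))])
rho = rho/np.trace(rho)

Pw = np.real(np.trace(H0 @ unvec(comm(Hw/2) @ vec(rho))))
Jh = np.real(np.trace(H0 @ unvec(L_h @ vec(rho))))
Jc = np.real(np.trace(H0 @ unvec(L_c @ vec(rho))))
```

The three currents sum to zero to numerical precision, while each of them is individually nonzero.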
\begin{figure}
\caption{Power as a function of action for various engine types in steady state. The four-stroke variant (green) is described at the end of Sec. \ref{sub: Thermo equiv}. The action is increased by increasing the stroke durations (top illustration). (a) For action large with respect to $\hbar$, the engines differ significantly in performance. In this example all engines have the same efficiency, but they extract different amounts of heat from the hot bath. (b) In the equivalence regime, where the action is small, all engine types exhibit the same power and also the same heat flows. The condition $s<\hbar/2$ that follows from the Strang decomposition agrees with the observed regime of equivalence. The time-symmetric structure of the engines causes the deviation from equivalence to be quadratic in the action.}
\label{fig7}
\end{figure}
Figure 7 shows the power in steady state as a function of the action. The action is increased by increasing the time duration of each stroke (see the top illustration in Fig. 7). The field and thermal couplings are $\epsilon=\gamma_{h}=\gamma_{c}=5\times10^{-4}$. The coupling strengths to the baths and work repository are not changed. When the engine action is large compared to $\hbar$, the engines behave very differently (Fig. 7a). On the other hand, in the equivalence regime, where $s$ is small with respect to $\hbar$, the power of all engine types converges to the same value. In the equivalence regime the power rises quadratically with the action, since the correction to the power is $s^{3}/\tau_{cyc}\propto\tau_{cyc}^{2}$. This power plateau in the equivalence regime is a manifestation of quantum interference effects (coherence in the density matrix), as will be further discussed in the next section.
The behavior of the different engines at large action with respect to $\hbar$ is very rich and strongly depends on the ratio between the field and bath coupling strengths. For example, if the field is amplified, then for some parameters the four-stroke engine can produce more power than the continuous and two-stroke engines. Some features of this diverse dynamics will be discussed elsewhere.
Finally, we comment that the same formalism and results can be extended to the case where the drive is slightly detuned from the gap.
\subsection{Lasing condition via the equivalence to two-stroke engine}
A laser medium can be thought of as a continuous engine whose power output is light amplification. It is well known that lasing requires population inversion. Scovil et al. \cite{scovil59} were the first to show the relation between the population inversion lasing condition and the Carnot efficiency.
Using the equivalence principle presented here, the most general form of the lasing condition can be obtained without any reference to light-matter interaction.
Let us start by decomposing the continuous engine into an equivalent two-stroke engine. For simplicity, it is assumed that the hot and cold manifolds have some overlap, so that in the absence of the driving field the baths lead the system to a unique steady state $\rho_{0}$. If the driving field is weak with respect to the thermalization rates, then in steady state the system will be very close to $\rho_{0}$.
To see when $\rho_{0}$ can be used for work extraction we need to discuss passive states. A passive state is a state that is diagonal in the energy basis, with populations that decrease monotonically with the energy \cite{alahverdyan04}. The energy of a passive state cannot be decreased (i.e. work cannot be extracted from the system) by applying any unitary transformation (the Hamiltonian after the transformation is the same as it was before the transformation) \cite{alahverdyan04,alicki13}. Thus, if $\rho_{0}$ is passive, work cannot be extracted from the device regardless of the details of the driving field (as long as it is weak and the equivalence holds). \\ A combination of thermal baths will lead to an energy-diagonal $\rho_{0}$. Consequently, to enable work extraction, passivity must be broken by population inversion, and we recover the standard population inversion condition. Note that the derivation requires neither the Einstein rate equations nor any information on the processes of emission and absorption of photons.
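Passivity is easy to illustrate on a two-level toy example (sample numbers, not the engine's state): an inverted state is non-passive, and sorting its populations in decreasing order of energy gives the passive state; the energy difference is the maximal unitarily extractable work (the ergotropy).

```python
import numpy as np

# Two-level illustration of passivity: inverted populations allow
# unitary work extraction equal to tr(H rho) - tr(H rho_passive).
H = np.diag([0.0, 1.0])
rho = np.diag([0.3, 0.7])              # population inversion: not passive

p_sorted = np.sort(np.diag(rho))[::-1]  # populations decreasing with energy
rho_passive = np.diag(p_sorted)

W_extractable = np.trace(H @ rho) - np.trace(H @ rho_passive)  # ergotropy
```

Here a full population swap extracts $W=0.7-0.3=0.4$; for the already-passive state the same construction yields zero.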
Furthermore, it now becomes clear that if ``coherent baths'' are used \cite{scully03} so that $\rho_{0}$ is no longer diagonal in the energy basis (and therefore no longer passive) it is possible to extract work even without population inversion.
In conclusion, using the equivalence principle it is possible to import known results from work extraction in stroke schemes to continuous machines.
\section{Quantum thermodynamic signature}
Can a thermodynamic measurement reveal quantum effects in the engine? To answer this we first need to define the corresponding classical engine.
The term ``classical'' engine is rather ambiguous. There are different protocols for modifying the system so that it behaves classically. To make a fair comparison to the fully quantum engine we look for the minimal modification that satisfies the following conditions: \begin{enumerate} \item The dynamics of the device should be fully described using population dynamics (no coherences, no entanglement). \item The modification should not alter the energy levels of the system, the couplings to the baths, or the coupling to the work repository. \item The modification should not introduce a new source of heat or work. \end{enumerate} To satisfy the first requirement we introduce a dephasing operator that eliminates the coherences\footnote{For simplicity we think of a single-particle engine. Thus entanglement and spin statistics are irrelevant quantum effects. In addition, in the weak system-bath coupling limit the entanglement to the baths is negligible.} and leads to a stochastic description of the engine. Clearly, a dephasing operator satisfies the second requirement. To satisfy the third requirement we require ``pure dephasing'': dephasing in the energy basis. The populations in the energy basis are invariant under this dephasing operation. Such a natural source of energy-basis dephasing emerges if there is some scheduling noise \cite{k215}, that is, some error in the switching times of the strokes.
Let us define a ``quantum-thermodynamic signature'' as a signal that is impossible to produce by the corresponding classical engine as defined above.
Our goal is to derive a threshold for power output that a stochastic engine cannot exceed but a coherent quantum engine can.
\begin{figure}
\caption{(a) Dephasing operations (slanted line, operator $\protect\mc D$) commute with the thermal baths, so the dephased engine on the left of (a) is equivalent to the one on the right. In the new engine the unitary evolution is replaced by $\protect\mc{DUD}$. If $\protect\mc D$ eliminates all coherences, the effect of $\protect\mc{DUD}$ on the populations can always be written as a doubly stochastic operator. (b) Any Hermitian Hamiltonian in Liouville space has the structure shown in (b). Thus, first-order changes in populations critically depend on the existence of coherence. }
\label{fig8}
\end{figure}
Before analyzing the effect of decoherence it is instructive to distinguish between two different work extraction mechanisms in stroke engines.
\subsection{Coherent and stochastic work extraction mechanisms}
Let us consider the work done in the work stroke of a two-stroke engine (as in Fig. 5c):
\[ W=\braOket{H_{0}}{e^{-i\frac{1}{2}\mc H_{w}\tau_{w}}}{\tilde{\rho}}. \] Writing the state as a sum of populations and coherences $\ket{\tilde{\rho}}=\ket{\tilde{\rho}_{pop}}+\ket{\tilde{\rho}_{coh}}$ we get: \begin{eqnarray} W & = & \braOket{H_{0}}{\sum_{n=1}\frac{(-i\frac{1}{2}\mc H_{w}\tau_{w})^{2n-1}}{(2n-1)!}}{\tilde{\rho}_{coh}}\nonumber \\
& + & \braOket{H_{0}}{\sum_{n=1}\frac{(-i\frac{1}{2}\mc H_{w}\tau_{w})^{2n}}{(2n)!}}{\tilde{\rho}_{pop}}.\label{eq: P 2 mechanisms} \end{eqnarray} This result follows from the generic structure of Hamiltonians in Liouville space. Any $\mc H$ that originates from a Hermitian Hamiltonian in Hilbert space (in contrast to Lindblad operators as a source) has the structure shown in Fig. 8b (see Appendix II for a Liouville space derivation of this property). That is, it connects only populations to coherences and vice versa; it cannot connect populations to populations directly\footnote{This is very well known in the context of the Zeno effect.}. In addition, since $\bra{H_{0}}$ acts as a projection on the population space, odd powers of $\mc H_{w}$ can only operate on coherences and even powers can only operate on populations. Thus, work can be extracted via two different mechanisms: a coherent mechanism that operates on coherences and a stochastic mechanism that operates on populations.
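The block structure of Fig. 8b can be checked explicitly: for a commutator superoperator $-i[H,\cdot]$ built from any Hermitian $H$, the population-to-population block vanishes identically. The sketch below uses the model's drive with unit amplitude as an example.

```python
import numpy as np

# The superoperator -i[H, .] of a Hermitian H has no
# population-to-population matrix elements (Fig. 8b).
n = 4
E = np.eye(n)
Hw = np.outer(E[0], E[1]) + np.outer(E[2], E[3]); Hw = Hw + Hw.T

# Column-stacking Liouville matrix of -i[Hw, .]
C = -1j*(np.kron(np.eye(n), Hw) - np.kron(Hw.T, np.eye(n)))

pop = [i + n*i for i in range(n)]              # indices of diagonal entries
off = [k for k in range(n*n) if k not in pop]  # coherence indices

pop_block = C[np.ix_(pop, pop)]   # population -> population: identically zero
pop_coh = C[np.ix_(pop, off)]     # population <-> coherence: nonzero
```

The same computation with any Hermitian matrix in place of `Hw` yields a vanishing `pop_block`, since the diagonal contributions of $H\rho$ and $\rho H$ cancel.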
The effect of the ``stochastic'' terms $\sum_{n=1}\frac{(-i\frac{1}{2}\mc H_{w}\tau_{w})^{2n}}{(2n)!}$ on the populations is equivalently described by a single doubly stochastic operator. If there are no coherences (next section), this leads to a simple interpretation in terms of full swap events that take place with some probability.
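This doubly stochastic action is easy to see numerically: for a fully dephased unitary $U$, the populations transform as $p_{i}\to\sum_{j}|U_{ij}|^{2}p_{j}$, and the matrix $|U_{ij}|^{2}$ has unit row and column sums. The pulse strength below is a sample value.

```python
import numpy as np
from scipy.linalg import expm

# A dephased unitary acts on populations via the doubly stochastic
# (unistochastic) matrix M_ij = |U_ij|^2.
E = np.eye(4)
Hw = np.outer(E[0], E[1]) + np.outer(E[2], E[3]); Hw = Hw + Hw.T
U = expm(-0.5j*Hw*0.7)       # sample partial-swap work pulse

M = np.abs(U)**2             # population transfer matrix
```

A full $\pi$ pulse makes `M` a permutation matrix (a deterministic swap); weaker pulses give partial swaps with some probability.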
Continuous engines, on the other hand, have only a coherent work extraction mechanism. This can be seen from the expression for their power output \begin{equation} P{}^{\text{cont}}=\braOket{H_{0}}{\frac{1}{2}\mc H_{w}}{\tilde{\rho}}=\braOket{H_{0}}{\frac{1}{2}\mc H_{w}}{\tilde{\rho}_{coh}},\label{eq: Pcont} \end{equation} where again we used the population projection property of $\bra{H_{0}}$ and the structure of $\mc H_{w}$ (Fig. 8b). We conclude that in contrast to stroke engines, continuous engines have no stochastic work extraction mechanism. This difference stems from the fact that in continuous engines the steady state is stationary, so there are no higher order terms that can give rise to a population-population stochastic work extraction mechanism. This is a fundamental difference between stroke engines and continuous engines. The effect is pronounced outside the equivalence regime, where the stochastic terms become important (see Sec. \ref{sub: Optimal-thermal-coupling}).
\subsection{Engines subjected to pure dephasing}
Consider the engine shown in Fig. 8a. The slanted lines on the baths indicate that there is an additional dephasing mechanism that takes place in parallel to the thermalization\footnote{In the Lindblad framework any thermalization is intrinsically associated with some dephasing. Yet, here we assume an additional controllable dephasing mechanism.}. Let us denote the evolution operator of the pure dephasing by $\mc D$. In principle, to analyze the deviation from the coherent quantum engine, first the steady state has to be solved and then work and heat can be compared. Even for simple systems this is a difficult task. Hence, we shall take a different approach and derive a power upper bound for stochastic engines. It is important that the bound contain only quantities that are unaffected by the level of coherence in the system. For example, average energy or dipole expectation values do contain information on the coherence. We therefore construct a bound in terms of the parameters of the system (e.g. the energy levels, coupling strengths, etc.) that is independent of the state of the system. In the pure dephasing stage the energy does not change. Hence, the total energy change in the $\mc{DUD}$ stage is associated with work.
Let $\mc D_{comp}=\ketbra{pop}{pop}$ be a projection operator on the population space. This operator generates a complete dephasing that eliminates all coherences. In such case, the leading order in the work expression becomes \begin{eqnarray} W & = & \braOket{H_{0}}{\mc D_{comp}e^{-i\frac{1}{2}\mc H_{w}\tau_{w}}\mc D_{comp}}{\tilde{\rho}}\nonumber \\
& = & \frac{\tau_{w}^{2}}{8}\braOket{H_{0}}{\mc H_{w}^{2}}{\tilde{\rho}_{pop}}+O(s^{4}), \end{eqnarray} where we used $\bra{H_{0}}\mc D=\bra{H_{0}}$ and $\mc D_{comp}\ket{\tilde{\rho}}=\ket{\tilde{\rho}_{pop}}$. Since $\mc D_{comp}$ eliminates coherences, $W$ does not contain a term linear in time. Next, by using the relations $\braOket{H_{0}}{B}{\rho}\le\sqrt{\braket{H_{0}}{H_{0}}\braket{\rho}{\rho}}\nrm B$ and $\sqrt{\braket{H_{0}}{H_{0}}}=\sqrt{tr(H_{0}^{2})}$, we find that for $s\ll\hbar$ the power of a stochastic engine satisfies: \begin{eqnarray} P_{stoch} & \le & \frac{z}{8}\sqrt{tr(H_{0}^{2})-tr(H_{0})^{2}}\Delta_{w}^{2}d^{2}\tau_{cyc},\label{eq: power signature}\\ z=1 & & \text{two-stroke}\nonumber \\ z=1/2 & & \text{four-stroke}\nonumber \end{eqnarray} where $\Delta_{w}$ is the gap of the interaction Hamiltonian (the maximal eigenvalue minus the minimal eigenvalue of $H_{w}$), and $d$ is the duty cycle - the fraction of the cycle time dedicated to work extraction (e.g. $d=1/3$ in all the examples in this paper). We also used the fact that $\braket{\rho_{pop}}{\rho_{pop}}$ is always smaller than the purity $\braket{\rho}{\rho}$ and therefore smaller than one. Note that, as we required, this bound is state-independent, and the right-hand side of (\ref{eq: power signature}) contains no information on the coherences in the system. As shown earlier, in coherent quantum engines (in the equivalence regime) the work scales linearly with $\tau_{cyc}$ (see (\ref{eq: Wcont}) and (\ref{eq: W4 Wc})), and therefore the power is constant as a function of $\tau_{cyc}$. When there are no coherences, the power scales linearly with $\tau_{cyc}$.
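Evaluating the bound (\ref{eq: power signature}) as written is straightforward. The sketch below uses the model's parameters; the choices $\epsilon=5\times10^{-4}$, $m=1$ (so $\tau_{cyc}=6\tau_{d}$) and $z=1$ (two-stroke) are sample values for illustration.

```python
import numpy as np

# Numerical evaluation of the stochastic power bound for the model.
dE_h, dE_c, eps, d = 4.0, 1.0, 5e-4, 1.0/3.0
omega = (dE_h - dE_c)/2
tau_cyc = 6*2*np.pi/omega               # m = 1 unit cell (sample choice)

H0 = np.diag([-dE_h/2, -dE_c/2, dE_c/2, dE_h/2])
E = np.eye(4)
Hw = eps*(np.outer(E[0], E[1]) + np.outer(E[2], E[3])); Hw = Hw + Hw.T

ev = np.linalg.eigvalsh(Hw)
Delta_w = ev[-1] - ev[0]                # gap of the interaction Hamiltonian

z = 1.0                                  # two-stroke engine
bound = (z/8)*np.sqrt(np.trace(H0 @ H0) - np.trace(H0)**2) \
        * Delta_w**2 * d**2 * tau_cyc
```

For this $H_{w}$ the gap is $\Delta_{w}=2\epsilon$, and the resulting threshold is of order $10^{-6}$ in the units of the text, far below the coherent power plateau for the same parameters.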
Numerical results for the power as a function of cycle time are shown in Fig. \ref{fig9}. The power is not plotted as a function of the action as before, because at the same cycle time the coherent engine and the dephased engine have different actions. The action of the dephased engine is \begin{equation} s_{deph}=(\nrm{\mc L_{c}}+\nrm{\mc L_{h}}+\nrm{\frac{1}{2}\mc H_{w}}+\nrm{\mc L_{dephasing}})\tau_{cyc}. \end{equation} If the dephasing is significant the action is large and equivalence cannot be observed. That is, a fully stochastic engine in a quantum system has a large action and cannot satisfy $s\ll\hbar$.
The chosen coherence time is $100\tau_{d}$. When the cycle time is small with respect to the coherence time, equivalence is observed. Yet, the power is significantly smaller compared to the fully coherent case. For longer cycles the decoherence starts to take effect, and the expected linear power growth is observed. The stochastic power bounds for a two-stroke engine (dashed blue) and for a four-stroke engine (dashed red) define a power regime (shaded area) that is inaccessible to fully stochastic engines. Thus, any power measurement in this regime unequivocally indicates the presence of quantum coherences in the engine. Note that to measure the power it is the work repository that is measured, not the engine. Furthermore, the engine must operate for many cycles to reduce fluctuations in the accumulated work. To calculate the average power, the accumulated work is divided by the total operation time and compared to the stochastic power threshold (\ref{eq: power signature}).
Note that had we chosen complete dephasing then the power output of the continuous engine would have been zero as expected from (\ref{eq: Pcont}).
In summary, a quantum thermodynamic signature of stroke engines can be observed in the weak-action limit.
\section{\label{sub: Optimal-thermal-coupling}The over-thermalization effect in coherent quantum heat engines}
In all the numerical examples studied so far, the unitary action and the thermal action were roughly comparable, for reasons that will soon become clear. In this section we study some generic features that appear when the thermal action takes over.
\begin{figure}
\caption{The output of the three types of engines (two-stroke blue, four-stroke red, continuous black) with and without dephasing (same as Fig. 7b). The dephasing time is $100\tau_{drive}$. Well above the dephasing time ($\sim200\tau_{drive}$) the power grows linearly, as expected from stochastic engines (three bottom solid lines). Below the dephasing time ($\sim20\tau_{drive}$) equivalence is observed. Yet the power is significantly lower compared to the coherent engine (top three solid lines). The dashed lines show the stochastic upper bound on the power for two-stroke (dashed blue) and four-stroke (dashed red) engines. Any power measurement in the shaded area of each engine indicates the presence of quantum interference in the engine. This plot also demonstrates that for weak couplings (low action) coherent engines produce much more power than stochastic dephased engines. }
\label{fig9}
\end{figure}
Let us now consider the case where the unitary contribution to the action, $\nrm{\mc H_{w}}\tau$, is small with respect to $\hbar$. All the time intervals are fixed, but we can control the thermalization rate $\gamma$ (for simplicity, we assume it has the same value for both baths). Common sense suggests that increasing $\gamma$ should increase the power output. At some stage this increase must stop, since the system will have reached thermal equilibrium with the bath (or baths, in two-stroke engines). Yet, Fig. \ref{fig10} shows that there is a very distinctive peak where an optimal coupling takes place. That is, in some cases less thermalization leads to more power. We call this effect over-thermalization. This effect is generic and not unique to the specific model used in the numerical simulations. The parameters used for the plot are $\epsilon=\gamma_{c}=\gamma_{h}=2\times10^{-4}$ and the number of drive cycles per engine cycle is $m=600$.
The peak is a consequence of the interplay between the two different work extraction mechanisms (see Sec. 4.1). For low $\gamma$ the coherences in the system are significant and the leading term in the power is $\braOket{H_{0}}{-i\frac{1}{2}\mc H_{w}}{\tilde{\rho}_{coh}}d$ (where $d$ is the duty cycle). In principle, all Lindblad thermalization processes are associated with some level of decoherence. This decoherence generates an exponential decay of $\ket{\tilde{\rho}_{coh}}$ that explains the decay on the right-hand side of the peak. At a certain stage the linear term becomes so small that the stochastic second-order term $-\frac{1}{8}\braOket{H_{0}}{\mc H_{w}^{2}}{\tilde{\rho}_{pop}}d^{2}\tau_{cyc}$ dominates the power. $\ket{\tilde{\rho}_{pop}}$ eventually saturates for large $\gamma$, and therefore the stochastic second-order term leads to a power saturation. Interestingly, in the example shown in Fig. \ref{fig10} we observe that the peak is obtained when $\gamma$ and $\epsilon$ are roughly equal. Of course, what really matters is the thermal action relative to the unitary action, and not just the values of the parameters $\gamma$ and $\epsilon$.
If thermalization occurs faster, the thermal stroke can be shortened, which increases the power. However, this effect is small with respect to the exponential decay of the coherences. We conclude that even without additional dephasing as in the previous section, excessive thermal coupling turns the engine into a stochastic machine. For small unitary action this effect severely degrades the power output. The arguments presented here are valid for any small-action coherent quantum engine.
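The qualitative shape of the power curve can be reproduced by a deliberately simple toy model (our own construction, not the Lindblad simulation used for the figures; the functional forms and coefficients are hypothetical): the coherent contribution needs some thermal coupling to repopulate the working levels but is suppressed exponentially by the accompanying decoherence, while the stochastic contribution merely saturates with $\gamma$.

```python
import numpy as np

gammas = np.linspace(0.01, 10, 1000)

# Hypothetical functional forms chosen only to mimic the two mechanisms
# described in the text (units are arbitrary):
coherent   = gammas * np.exp(-gammas)      # aided, then killed, by thermal coupling
stochastic = 0.05 * (1 - np.exp(-gammas))  # saturates at strong coupling

power = coherent + stochastic
g_peak = gammas[np.argmax(power)]
print(g_peak)  # a distinct optimum at moderate coupling, then saturation
```

The model reproduces the over-thermalization peak followed by saturation at large $\gamma$; it is an illustration of the mechanism, not a substitute for the full simulation.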
\begin{figure}
\caption{The over-thermalization effect is the decrease of power when the thermalization rate is increased. Over-thermalization degrades the coherent work extraction mechanism without affecting the stochastic work extraction mechanism. When the coherent mechanism becomes weak enough, the power is dominated by the stochastic work extraction mechanisms and power saturation is observed (dashed lines). The continuous engine has no stochastic work extraction mechanism, and therefore its power decays to zero without reaching saturation. }
\label{fig10}
\end{figure}
\section{Concluding remarks}
We identified coherent and stochastic work extraction mechanisms in quantum heat engines. While stroke engines have both, continuous engines have only the coherent mechanism. We introduced the ``norm action'' of the engine using Liouville space and showed that when this action is small compared to $\hbar$, all three engine types are equivalent. This equivalence emerges because for small action only the coherent mechanism is important. Despite the equivalence, before the engine cycle is completed the states of the different engine types differ by $O(s)$. The same holds for work and heat. Remarkably, at the end of each engine cycle a much more accurate $O(s^{3})$ equivalence emerges. Furthermore, the equivalence holds also for transient dynamics, even when the initial state is very far from the steady state of the engine. It was shown that for small action the coherent work extraction is considerably stronger than the stochastic work extraction mechanism. This enabled us to derive a power bound for stochastic engines that constitutes a quantum thermodynamic signature. Any power measurement that exceeds this bound indicates the presence of quantum coherence and the operation of the coherent work extraction mechanism.
The present derivation makes no assumption on the direction of the heat flows or the sign of the work. Thus our results are equally applicable to refrigerators and heaters.
It is interesting to apply these concepts of equivalence and quantum thermodynamic signatures to more general scenarios: non-Markovian baths, engines with a non-symmetric unit cell, and engines with correlations between different particles (entanglement and quantum discord). We conjecture that in multiple-particle engines entanglement will play a role similar to that of coherence in single-particle engines.
This work was supported by the Israel Science Foundation. Part of this work was supported by the COST Action MP1209 'Thermodynamics in the quantum regime'.
\section*{Appendix I - Strang decomposition validity}
Let $\mc K$ be an operator generated by two non-commuting operators $\mc A$ and $\mc B$: \[ \mc K=e^{(\mc A+\mc B)dt}. \] The split operator is
\[ \mc K_{s}=e^{\frac{1}{2}\mc Adt}e^{\mc Bdt}e^{\frac{1}{2}\mc Adt}. \] Our goal is to quantify the difference between $\mc K$ and $\mc K_{s}$, $\nrm{\mc K_{s}-\mc K}$, where $\nrm{\cdot}$ stands for the spectral norm. In principle, other sub-multiplicative matrix norms can be used (such as the Hilbert-Schmidt norm). However, the spectral norm captures more accurately aspects of quantum dynamics \cite{ruPuritySpeed,uzdin100evoSpeed,uzdinEmbbeding,uzdinNHresources}. $\mc K$ can be expanded as: \begin{equation} \mc K=\sum_{n=0}^{\infty}\frac{(\mc A+\mc B)^{n}dt^{n}}{n!}. \end{equation} $\mc K_{s}$, on the other hand, is: \begin{eqnarray} \mc K_{s} & = & \sum_{k,l,m=0}^{\infty}\frac{(\mc A/2)^{k}dt^{k}}{k!}\frac{\mc B^{l}dt^{l}}{l!}\frac{(\mc A/2)^{m}dt^{m}}{m!}\nonumber \\
& = & \sum_{n=0}^{\infty}\sum_{l=0}^{n}\sum_{k=0}^{n-l}\frac{(\mc A/2)^{k}}{k!}\frac{\mc B^{l}}{l!}\frac{(\mc A/2)^{n-l-k}}{(n-l-k)!}dt^{n}. \end{eqnarray} Due to the symmetric splitting, the terms up to and including $n=2$ are identical for both operators. Therefore the difference can be written as \begin{eqnarray} \nrm{\mc K_{s}-\mc K} & =\nonumber \\
& & \left\Vert \sum_{n=3}^{\infty}\sum_{l=0}^{n}\sum_{k=0}^{n-l}\frac{(\mc A/2)^{k}}{k!}\frac{\mc B^{l}}{l!}\frac{(\mc A/2)^{n-l-k}}{(n-l-k)!}dt^{n}\right.\nonumber \\
& - & \left.\sum_{n=3}^{\infty}\frac{(\mc A+\mc B)^{n}dt^{n}}{n!}\right\Vert . \end{eqnarray} Next we apply the triangle inequality and the sub-multiplicativity property to get: \begin{eqnarray} \nrm{\mc K_{s}-\mc K} & \le & \sum_{n=3}^{\infty}\sum_{l=0}^{n}\sum_{k=0}^{n-l}\frac{\nrm{\mc A/2}^{k}}{k!}\frac{\nrm{\mc B}^{l}}{l!}\frac{\nrm{\mc A/2}^{n-l-k}}{(n-l-k)!}dt^{n}\nonumber \\
& & +\sum_{n=3}^{\infty}\frac{(\nrm{\mc A}+\nrm{\mc B})^{n}dt^{n}}{n!}. \end{eqnarray} Applying the binomial formula twice one finds \begin{eqnarray} \sum_{n=3}^{\infty}\sum_{l=0}^{n}\sum_{k=0}^{n-l}\frac{\nrm{\mc A/2}^{k}}{k!}\frac{\nrm{\mc B}^{l}}{l!}\frac{\nrm{\mc A/2}^{n-l-k}}{(n-l-k)!}dt^{n} & = & \sum_{n=3}^{\infty}\frac{(\nrm{\mc A}+\nrm{\mc B})^{n}dt^{n}}{n!},\nonumber \\ \end{eqnarray} and therefore \begin{eqnarray} \nrm{\mc K_{s}-\mc K} & \le & 2\sum_{n=3}^{\infty}\frac{(\nrm{\mc A}+\nrm{\mc B})^{n}dt^{n}}{n!}\nonumber \\
& = & 2R_{2}[(\nrm{\mc A}+\nrm{\mc B})dt]. \end{eqnarray}
The right-hand side is the Taylor remainder of the power series of an exponential with argument $s=(\nrm{\mc A}+\nrm{\mc B})dt$. The Lagrange remainder formula for the exponential function is $R_{k}(s)=e^{\xi}\frac{\left|s\right|^{k+1}}{(k+1)!}$ where $0\le\xi\le s$ (for now we assume $s\le1$). Setting $k=2$ and $\xi=1$ (worst case), we finally obtain \begin{eqnarray} \nrm{\mc K_{s}-\mc K} & \le & \frac{e}{3}[(\nrm{\mc A}+\nrm{\mc B})dt]^{3}\le s^{3},\\ s & = & (\nrm{\mc A}+\nrm{\mc B})dt. \end{eqnarray}
To estimate where the leading non-neglected term of $\mc K$, $(\mc A+\mc B)^{2}dt^{2}/2$, is larger than the remainder, we require that
\begin{eqnarray} \nrm{\mc A+\mc B}^{2}dt^{2}/2 & \ge & s^{3}. \end{eqnarray} Using the triangle inequality we get the estimated validity condition for the Strang decomposition: \begin{equation} s\le1/2. \end{equation} This condition explains why it was legitimate to limit the range of $s$ to 1 in the remainder formula.
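The bound derived above is easy to check numerically. The sketch below (our own sanity check, with randomly drawn matrices standing in for $\mc A$ and $\mc B$) verifies that the spectral-norm error of the split operator indeed stays below $s^{3}$.

```python
import numpy as np

def expm_taylor(M, terms=40):
    """Matrix exponential via its Taylor series (adequate for small norms)."""
    out = np.eye(M.shape[0], dtype=complex)
    term = out.copy()
    for n in range(1, terms):
        term = term @ M / n
        out = out + term
    return out

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

dt = 1e-2
K  = expm_taylor((A + B) * dt)                                       # exact
Ks = expm_taylor(A * dt / 2) @ expm_taylor(B * dt) @ expm_taylor(A * dt / 2)

spec = lambda M: np.linalg.norm(M, 2)  # spectral norm (largest singular value)
s = (spec(A) + spec(B)) * dt
print(spec(Ks - K) <= s**3)            # the bound derived above
```

In practice the actual error is far below the worst-case bound, since it is governed by commutators rather than by products of norms.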
\section*{Appendix II - Liouville space formulation of quantum dynamics}
Quantum dynamics is traditionally described in Hilbert space. However, it is convenient, in particular for open quantum systems, to introduce an extended space where density operators are vectors and time evolution is generated by a Schrödinger-like equation. This space is usually referred to as Liouville space \cite{mukamel1995principles}. We denote the \textquotedbl{}density vector\textquotedbl{} by $\ket{\rho}\in\mathbb{C}^{N^{2}\times1}$. It is obtained by reshaping the density matrix $\rho$ into a single larger vector with index $\alpha\in\{1,2,\ldots,N^{2}\}$. The one-to-one mapping of the two matrix indices into a single vector index, $\{i,j\}\to\alpha$, is arbitrary, but has to be used consistently. The vector $\ket{\rho}$ is not normalized to unity in general. Its squared norm is equal to the purity, $\mc P=\text{tr}(\rho^{2})=\braket{\rho}{\rho}$, where $\bra{\rho}=\ket{\rho}^{\dagger}$ as usual. The equation of motion of the density vector in Liouville space follows from $d_{t}\rho_{\alpha}=\sum_{\beta}\rho_{\beta}\partial(d_{t}\rho_{\alpha})/\partial\rho_{\beta}$. Using this equation, one can verify that the dynamics of the density vector $\ket{\rho}$ is governed by a Schrödinger-like equation in the new space, \begin{equation} i\partial_{t}\ket{\rho}=\mc H\ket{\rho},\label{eq: schrodinger eq} \end{equation} where the super-Hamiltonian $\mc H\in\mathbb{C}^{N^{2}\times N^{2}}$ is given by, \begin{equation} \mc H_{\alpha\beta}=i\frac{\partial(d_{t}\rho_{\alpha})}{\partial\rho_{\beta}}.\label{eq: Hr form} \end{equation}
A particularly useful index mapping is described in \cite{machnes14} and in \cite{roger1994topics}. With this mapping, $\mc H$ takes a very simple form in terms of the original Hilbert space Hamiltonian and Lindblad operators.
$\mc H=\mc H^{H}+\mc L$ is non-Hermitian for open quantum systems. $\mc H^{H}$ originates from the Hilbert space Hamiltonian $H$, and $\mc L$ from the Lindblad terms.\begin{comment} Often in Hilbert space the definition of L differs by an $i$ factor. However, to keep the same notation for all operators we use $\mc L$ that is consistent with the Schrödinger equation (\ref{eq: schrodinger eq}) and not the Hilbert space Lindblad form $\frac{d}{dt}\rho=L(\rho)$. \end{comment} $\mc H^{H}$ is always Hermitian. The skew-Hermitian part $(\mc L-\mc L^{\dagger})/2$ is responsible for purity changes. Yet, in Liouville space, the Lindblad operators $A_{k}$ (\ref{eq: Lind Hil}) may also generate a Hermitian term $(\mc L+\mc L^{\dagger})/2$. Though Hermitian in Liouville space, this term cannot be associated with a Hamiltonian in Hilbert space.
For time-independent $\mc H$ the evolution operator in Liouville space is: \begin{equation} \ket{\rho(t)}=\mc K\ket{\rho(t')}=e^{-i\mc H(t-t')}\ket{\rho(t')}. \end{equation} If $\mc L=0$, $\mc K$ is unitary.\begin{comment} Any unitary in Hilbert space $U$ has a corresponding unitary $\mc K$. Yet not every unitary $\mc K$ in Liouville space has a corresponding unitary $U$ in Hilbert space. \end{comment} The fact that the evolution operator can be written as the exponential of a matrix, without any commutators as in Hilbert space, is a very significant advantage (see for example \cite{ruPuritySpeed}). It is important to note that not all vectors in Liouville space can be populated exclusively. This is due to the fact that only positive $\rho$ with unit trace are legitimate density matrices. The states that can be populated exclusively describe steady states, while the others correspond to transient changes. We remind the reader that in this paper we use calligraphic letters to describe operators in Liouville space and ordinary letters for operators in Hilbert space. For states, however, $\ket A$ will denote a vector in Liouville space formed from $A_{N\times N}$ by ``vec-ing'' $A$ into a column using the same procedure by which $\rho$ is converted into $\ket{\rho}$.
\subsection*{Useful relations in Liouville space.}
In Liouville space, the standard inner product of two operators in Hilbert space, $\text{tr}(A^{\dagger}B)$, reads \[ \text{tr}(A^{\dagger}B)=\braket AB. \] In particular, the purity $\mc P=\braket{\rho}{\rho}$ is just the squared distance from the origin in Liouville space.
A useful relation for $\mc H^{H}$ is: \begin{equation} \mc H^{H}\ket H=\bra H\mc H^{H}=0.\label{eq: vec op herm identity} \end{equation} The proof is as follows:
\begin{eqnarray} \mc H_{ij,mn}^{H} & = & H_{im}\delta_{jn}-H_{nj}\delta_{im}.\label{eq: Hherm Lio} \end{eqnarray} Therefore, using (\ref{eq: Hherm Lio}) we get: \begin{eqnarray} \mc H^{H}\ket H=\sum_{\beta}\mc H_{\alpha\beta}^{H}H_{\beta}=\sum_{mn}\mc H_{ij,mn}^{H}H_{mn} & = & [H,H]=0.\nonumber \\ \end{eqnarray} This property is highly useful. We stress that (\ref{eq: vec op herm identity}) is a property of Hermitian operators in Hilbert space, where both $H$ and $\mc H$ are well defined. A general Hermitian operator in Liouville space may not have a corresponding $H$ in Hilbert space.
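These identities are easy to verify numerically. The sketch below (our own illustration) uses the row-major index pairing $\{i,j\}\to\alpha$, for which the index formula (\ref{eq: Hherm Lio}) corresponds to the super-Hamiltonian $\mc H^{H}=H\otimes I-I\otimes H^{T}$ of a closed system.

```python
import numpy as np

N = 3
rng = np.random.default_rng(1)
M = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
H = (M + M.conj().T) / 2          # a random Hermitian Hamiltonian
I = np.eye(N)

vec = lambda X: X.reshape(-1)     # row-major "vec-ing", {i,j} -> alpha

# Super-Hamiltonian: i d|rho>/dt = Hs|rho>  <=>  d(rho)/dt = -i [H, rho]
Hs = np.kron(H, I) - np.kron(I, H.T)

rho = rng.standard_normal((N, N))
rho = rho @ rho.T                 # positive semi-definite
rho /= np.trace(rho)              # unit trace

print(np.allclose(Hs @ vec(rho), vec(H @ rho - rho @ H)))  # commutator check
print(np.allclose(Hs @ vec(H), 0))                         # Hs|H> = 0
```

The second print confirms (\ref{eq: vec op herm identity}); checking the sub-block $\mc H_{ii,kk}^{H}$ of `Hs` confirms that its diagonal-to-diagonal elements vanish as well.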
Another property that immediately follows from (\ref{eq: Hherm Lio}) is \begin{equation} \mc H_{ii,kk}^{H}=0. \end{equation} This corresponds to a well-known property of unitary operations. If the system starts from a diagonal density matrix, then for short times the evolution generated by $\mc H^{H}$, $e^{-i\mc H^{H}dt}=I-i\mc H^{H}dt+O(dt^{2})$, does not change the populations to leading order.
\subsection*{Expectation values and their time evolution in Liouville space}
The expectation value of an operator in Hilbert space is $\left\langle A\right\rangle =tr(\rho A)$. Since $\rho$ is Hermitian, the expectation value is equal to the inner product of $A$ and $\rho$, and therefore: \[ \left\langle A\right\rangle =tr(\rho A)=\braket{\rho}{A}. \] The dynamics of $\left\langle A\right\rangle $ under the Lindblad evolution is: \begin{equation} \frac{d}{dt}\left\langle A\right\rangle =-i\braOket{A}{\mc H}{\rho}+\braket{\rho}{\frac{d}{dt}A}.\label{eq: Lio expect change} \end{equation} \textit{Note that in Liouville space there is no commutator term}, since $\mc H$ operates on $\ket{\rho}$ only from the left. If the total Hamiltonian is Hermitian and time-independent, conservation of energy follows immediately from applying (\ref{eq: Lio expect change}) and (\ref{eq: vec op herm identity}) with $A=H$.
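A minimal numerical illustration of the inner-product form of the expectation value (our own sketch, using the same row-major vectorization convention as above):

```python
import numpy as np

N = 2
rng = np.random.default_rng(3)
M = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
A = (M + M.conj().T) / 2                                  # Hermitian observable
rho = np.array([[0.6, 0.1 - 0.2j], [0.1 + 0.2j, 0.4]])    # a valid density matrix

vec = lambda X: X.reshape(-1)                             # row-major vectorization
expect_hilbert = np.trace(rho @ A)                        # tr(rho A)
expect_liouville = vec(rho).conj() @ vec(A)               # <rho|A>
print(np.isclose(expect_hilbert, expect_liouville))
```

The agreement relies on $\rho$ being Hermitian, exactly as stated in the text.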
\section*{Appendix III - The symmetric rearrangement theorem (SRT)}
The goal of this Appendix is to explain why the equivalence of evolution operators leads to equivalence of work and equivalence of heat. In addition, it is shown why the equivalence is valid also for transients. For the equivalence of evolution operators we have required that the super-Hamiltonian is symmetric and that the action is small: \begin{eqnarray} \mc H(t) & = & \mc H(-t),\\ s & = & \int_{-\tau/2}^{+\tau/2}\nrm{\mc H}dt\ll\hbar. \end{eqnarray} Let the initial state at time $t=-\tau/2$ be \begin{equation} \ket{\tilde{\rho}_{i}}=\ket{\tilde{\rho}(-\tau/2)}. \end{equation} This state leads to a final state at $\tau/2$ \begin{equation} \ket{\tilde{\rho}_{f}}=\ket{\tilde{\rho}(\tau/2)}. \end{equation} Our goal is to evaluate a symmetric expectation value difference of the form: \begin{eqnarray} dA_{tot} & = & [\left\langle A(t_{2})\right\rangle -\left\langle A(t_{1})\right\rangle ]+[\left\langle A(-t_{1})\right\rangle -\left\langle A(-t_{2})\right\rangle ]\nonumber \\
& = & [\braket A{\tilde{\rho}(t_{2})}-\braket A{\tilde{\rho}(t_{1})}]\nonumber \\
& & +[\braket A{\tilde{\rho}(-t_{1})}-\braket A{\tilde{\rho}(-t_{2})}],\\ t_{2},t_{1} & \ge & 0\nonumber \end{eqnarray} that is, the change in the expectation value of $A$ in the segment $[t_{1},t_{2}]$ and its symmetric counterpart in negative time (e.g. the green areas in Fig. 11a). When $A$ is equal to $H_{0}$ this difference will translate into work or heat. We start with the expansion: \begin{eqnarray}
[\left\langle A(t_{2})\right\rangle -\left\langle A(t_{1})\right\rangle ]=\braket A{\mc K_{t_{1}\to t_{2}}-I|\tilde{\rho}(t_{1})} & =\nonumber \\
\braket A{-i\mc H(t_{1})\delta t-\frac{1}{2}\mc H(t_{1})^{2}\delta t^{2}|\tilde{\rho}(t_{1})}+O(s^{3}).\nonumber \\ \end{eqnarray} For the negative side we get: \begin{eqnarray}
[\left\langle A(-t_{1})\right\rangle -\left\langle A(-t_{2})\right\rangle ]=\braket A{I-\mc K_{-t_{1}\to-t_{2}}|\tilde{\rho}(-t_{1})} & =\nonumber \\
\braket A{-i\mc H(-t_{1})\delta t+\frac{1}{2}\mc H(-t_{1})^{2}\delta t^{2}|\tilde{\rho}(-t_{1})}+O(s^{3}).\nonumber \\ \end{eqnarray} Next we use the fact that: \begin{eqnarray} \ket{\tilde{\rho}(t_{1})} & = & \ket{\tilde{\rho}(0)}-i\int_{0}^{t_{1}}\mc H(t)dt\ket{\tilde{\rho}(0)}+O(s^{2}),\nonumber \\ \label{eq: r(t1) r(t0)}\\ \ket{\tilde{\rho}(-t_{1})} & = & \ket{\tilde{\rho}(0)}+i\int_{0}^{t_{1}}\mc H(t)dt\ket{\tilde{\rho}(0)}+O(s^{2}).\nonumber \\ \label{eq: r(-t1) r(t0)} \end{eqnarray} When adding the two segments the second order cancels out and we get: \begin{equation} \delta A_{tot}=-2i\braOket A{\mc H(t_{1})}{\tilde{\rho}(0)}\delta t+O(s^{3}).\label{eq: dAtot r(0)} \end{equation} Note that this result is expressed using $\ket{\tilde{\rho}(0)}$, which is not given explicitly. To correctly relate it to $\ket{\tilde{\rho}(-\tau/2)}$ we have to use the symmetric rearrangement properties of the evolution operator.
\subsection*{Symmetric rearrangement}
\begin{figure}
\caption{The Hamiltonians in (a) and (b) are related by symmetric rearrangement of the time segments. Up to a small correction $O(s^{3})$ the change in expectation values of an observable $A$ that takes place during the green segments is the same in both cases. This effect explains why work and heat are the same in various types of engines when $s$ is small compared to $\hbar$ (equivalence regime).}
\end{figure}
In Fig. 11a there is an illustration of some time-dependent Hamiltonian with reflection symmetry $\mc H(t)=\mc H(-t)$. We use $\mc H$ to denote a Liouville space operator, which may generate any unitary operation or Markovian Lindblad operation. Assume that in addition to the symmetric bins of interest (green bins) the remainder of the time is also divided into bins in a symmetric way, so that there is still a reflection symmetry in the bin partitioning. Now, we permute the bins on the positive side as desired, and then apply the opposite ordering on the negative side so that the reflection symmetry is kept. An example of such an operation is shown in Fig. 11b. Due to the Strang decomposition we know that the total evolution operator stays the same under this rearrangement up to third order: \begin{equation} \mc K_{-\frac{\tau}{2}\to\frac{\tau}{2}}=\mc T_{sym}[\mc K]_{-\frac{\tau}{2}\to\frac{\tau}{2}}+O(s^{3}),\label{eq: Ueq} \end{equation} where $\mc T_{sym}[x]$ stands for the evaluation of $x$ after a symmetric reordering.
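A quick numerical illustration of (\ref{eq: Ueq}) with randomly drawn generators (our own sketch; the matrices stand in for the Liouville space operators in a symmetric bin sequence):

```python
import numpy as np

def expm_taylor(M, terms=40):
    """Matrix exponential via its Taylor series (adequate for small norms)."""
    out = np.eye(M.shape[0], dtype=complex)
    term = out.copy()
    for n in range(1, terms):
        term = term @ M / n
        out = out + term
    return out

rng = np.random.default_rng(2)
dt = 5e-3
A, B, C = (dt * (rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))
           for _ in range(3))

# Original symmetric sequence A,B,C | C,B,A (operators applied right to left):
K1 = (expm_taylor(A) @ expm_taylor(B) @ expm_taylor(C)
      @ expm_taylor(C) @ expm_taylor(B) @ expm_taylor(A))
# Symmetrically rearranged sequence B,A,C | C,A,B:
K2 = (expm_taylor(B) @ expm_taylor(A) @ expm_taylor(C)
      @ expm_taylor(C) @ expm_taylor(A) @ expm_taylor(B))

spec = lambda M: np.linalg.norm(M, 2)   # spectral norm
s = 2 * (spec(A) + spec(B) + spec(C))   # total action of the sequence
print(spec(K1 - K2) <= s**3)            # difference is O(s^3)
```

Both orderings keep the reflection symmetry of the sequence, so their difference stays at third order in the action, as claimed.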
\subsection*{The symmetric rearrangement theorem (SRT)}
From (\ref{eq: Ueq}) we get that if the initial state is the same for a system described by $\mc K$ and for a system described by $\mc T_{sym}[\mc K]$, the final state at $t=\tau/2$ is the same for both systems up to a third-order correction: \begin{equation} \ket{\tilde{\rho}(\frac{\tau}{2})}=\mc T_{sym}[\ket{\tilde{\rho}(\frac{\tau}{2})}]+O(s^{3}).\label{eq: rf eq} \end{equation} Using (\ref{eq: r(t1) r(t0)}),(\ref{eq: r(-t1) r(t0)}) we see that: \begin{equation} \ket{\tilde{\rho}(0)}=\frac{\ket{\tilde{\rho}(\frac{\tau}{2})}+\ket{\tilde{\rho}(-\frac{\tau}{2})}}{2}+O(s^{2}),\label{eq: r(0)} \end{equation} and because of (\ref{eq: rf eq}) it also holds that: \begin{equation} \mc T_{sym}[\ket{\tilde{\rho}(0)}]=\ket{\tilde{\rho}(0)}+O(s^{2})=\frac{\ket{\tilde{\rho}(\frac{\tau}{2})}+\ket{\tilde{\rho}(-\frac{\tau}{2})}}{2}+O(s^{2}). \end{equation} Using this in (\ref{eq: dAtot r(0)}) we get: \begin{equation} \delta A_{tot}=-2i\bra A\mc H(t_{1})\frac{\ket{\tilde{\rho}(\frac{\tau}{2})}+\ket{\tilde{\rho}(-\frac{\tau}{2})}}{2}\delta t+O(s^{3}).\label{eq: dA -t/2 t/2} \end{equation} Expression (\ref{eq: dA -t/2 t/2}) no longer depends on the position of the time segment, but only on its duration and on the value of $\mc H$. Thus, the SRT states that the expression above also holds for any symmetric rearrangement: \begin{equation} \delta A_{tot}=\mc T_{sym}[\delta A_{tot}]+O(s^{3}). \end{equation}
If we replace $A$ by $H_{0}$ and $\mc H(t_{1})$ by $\mc L_{c},\mc L_{h}$ or $\mc H_{w}$, we immediately get the invariance of heat and work to symmetric rearrangement (up to $s^{3}$). If $\ket{\tilde{\rho}(-\frac{\tau}{2})}$ is the same for all engines, then $\ket{\tilde{\rho}(\frac{\tau}{2})}$ is also the same for all engine types up to $O(s^{3})$. Consequently, for all stroke engines the expressions for work and heat are: \begin{eqnarray} W & = & -2i\bra{H_{0}}\intop_{t\in t_{w}}\mc H_{w}(t)dt\frac{\ket{\tilde{\rho}(\frac{\tau}{2})}+\ket{\tilde{\rho}(-\frac{\tau}{2})}}{2}+O(s^{3}),\nonumber \\ \label{eq: W srt}\\ Q_{c(h)} & = & -2i\bra{H_{0}}\intop_{t\in t_{c(h)}}\mc L_{c(h)}(t)dt\frac{\ket{\tilde{\rho}(\frac{\tau}{2})}+\ket{\tilde{\rho}(-\frac{\tau}{2})}}{2}+O(s^{3}).\nonumber \\ \label{eq: Q srt} \end{eqnarray}
Using the identity $\ket{\tilde{\rho}(\frac{\tau}{2})}+\ket{\tilde{\rho}(-\frac{\tau}{2})}=\ket{\tilde{\rho}(t)}+\ket{\tilde{\rho}(-t)}+O(s^{2})$, which follows from (\ref{eq: r(0)}), the integration over time of the energy flows $j_{w}=\braOket{H_{0}}{\frac{1}{2}\mc H_{w}}{\tilde{\rho}(t)}$ and $j_{c(h)}=\braOket{H_{0}}{\mc L_{c(h)}}{\tilde{\rho}(t)}$ for continuous engines yields expressions (\ref{eq: W srt}) and (\ref{eq: Q srt}) once more. This implies that the SRT results (\ref{eq: W srt}) and (\ref{eq: Q srt}) hold even if the different operations $\mc L_{c},\mc L_{h}$ and $\mc H_{w}$ overlap with each other.
We emphasize that all the above relations hold for any initial state and not only in steady state where $\ket{\tilde{\rho}(\frac{\tau}{2})}=\ket{\tilde{\rho}(-\frac{\tau}{2})}$. The physical implication is that in the equivalence regime different engines are thermodynamically indistinguishable when monitored at the end of each cycle, even when the system is not in its steady state.
\end{document}
"id": "1502.06592.tex",
"language_detection_score": 0.8424967527389526,
"char_num_after_normalized": null,
"contain_at_least_two_stop_words": null,
"ellipsis_line_ratio": null,
"idx": null,
"lines_start_with_bullet_point_ratio": null,
"mean_length_of_alpha_words": null,
"non_alphabetical_char_ratio": null,
"path": null,
"symbols_to_words_ratio": null,
"uppercase_word_ratio": null,
"type": null,
"book_name": null,
"mimetype": null,
"page_index": null,
"page_path": null,
"page_title": null
} | arXiv/math_arXiv_v0.2.jsonl | null | null |
\begin{document}
\title{Photonic architecture for scalable quantum information processing in NV-diamond}
\author{Kae Nemoto$^{1}$, Michael Trupke$^{2}$, Simon J. Devitt$^{1}$, Ashley M. Stephens$^{1}$, Kathrin Buczak$^{2}$, Tobias N\"obauer$^{2}$, Mark S. Everitt$^{1}$, J\"org Schmiedmayer$^{2}$ \& William J. Munro $^{3,1}$}
\affiliation{$^1$ National Institute of Informatics, 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo 101-8430, Japan \\ $^2$ Vienna Center for Quantum Science and Technology, Atominstitut, TU Wien, 1020 Vienna, Austria \\ $^3$ NTT Basic Research Laboratories, NTT Corporation, 3-1 Morinosato-Wakamiya, Atsugi, Kanagawa 243-0198, Japan}
\begin{abstract} Physics and information are intimately connected, and the ultimate information processing devices will be those that harness the principles of quantum mechanics. Many physical systems have been identified as candidates for quantum information processing, but none of them are immune from errors. The challenge remains to find a path from the experiments of today to a reliable and scalable quantum computer. Here, we develop an architecture based on a simple module comprising an optical cavity containing a single negatively-charged nitrogen vacancy centre in diamond. Modules are connected by photons propagating in a fiber-optical network and collectively used to generate a topological cluster state, a robust substrate for quantum information processing. In principle, all processes in the architecture can be deterministic, but current limitations lead to processes that are probabilistic but heralded. We find that the architecture enables large-scale quantum information processing with existing technology. \end{abstract} \date{\today}
\maketitle
\section{Introduction} Quantum computers promise to surpass even the fastest classical computers, but the task of building a quantum computer presents a significant challenge. Even if they are precisely engineered, quantum systems will inevitably suffer from decoherence and other errors. If these errors are sufficiently rare and not too strongly correlated, then they can be suppressed with quantum error correction \cite{Devitt2009}. The role of quantum computer architecture is to integrate quantum error correction with feasible experimental technology, to find a path to a reliable and scalable quantum computer. In this context, of the many physical systems identified as candidates for quantum information processing \cite{Ladd2010}, the negatively-charged nitrogen vacancy (NV$^-$) centre in diamond \cite{Davies1976,Harley1984,Reddy1987} features a number of desirable properties \cite{Childress2005,Childress2006,Benjamin2006,Jiang2007,YJG12}. For example, the NV$^-$ centre possesses both a nuclear spin and an electron spin---the nuclear spin can serve as a memory to store quantum information for relatively long times \cite{Maurer2012}, and the electron spin can be coupled to a photon to serve as a flexible interface with other NV$^-$ centres \cite{Togan2010}. The experimental feasibility of this system has been well established in recent years. Experiments have demonstrated individual electron and nuclear spin initialisation, manipulation, and measurement \cite{Jelezko2004,Dutt2007,Neumann2008,Hanson2008,Jiang2009,Neumann2010,Neumann2010b,Buckley2010,Robledo2011,vanderSar2012,Dolde2013}, long-lived nuclear memories \cite{Maurer2012}, a coherent interface between an electron spin and an optical field \cite{Togan2010}, and optical cavities containing NV$^-$ centres \cite{Park2006,Englund2010,Faraori2011}. State-dependent reflectivity has been demonstrated with atoms \cite{Englund2007}, though not yet with NV$^-$ centres. 
At the same time, new techniques for quantum error correction have lessened experimental requirements \cite{Knill2005,Bacon2006,Raussendorf2007}.
Here, we develop a quantum computer architecture based on a simple module comprising an optical cavity containing a single NV$^-$ centre in diamond. Modules are connected by photons propagating in a fiber-optical network. The cavities mediate interactions between the photons and the electron spins, enabling entanglement distribution and readout. The electron spins are coupled to nuclear spins, which constitute long-lived quantum memories where quantum information is stored and processed. Aside from modules connected by optical fibers, other elements of the architecture are single-photon detection devices and classical control lines. These elements are laid out in a regular two-dimensional array, with sufficient connectivity between modules to enable topological cluster-state error correction \cite{RHG07, FG09, Barrett2010}. This arrangement is independent of the size of the network. At a circuit level, we find the maximum tolerable error per elementary quantum gate to be approximately 0.73\%. However, by analysing the architecture at the physical level, we also estimate how well each component of the module must operate for the system to meet this threshold and be truly scalable. The results of this analysis indicate that the architecture is consistent with present technology and might be achievable in the near future.
\section{Fundamental building blocks} Our approach can be adapted to a variety of promising physical systems, such as ions, neutral atoms, and quantum dots \cite{Togan2010,Stute2013,Greve2012,Schaibley2013,Gao2012}, and for this reason, we begin with a general description of the fundamental module. However, to show that the module can form the basis of a truly scalable architecture, we focus on a concrete implementation using NV$^-$ centres.
\begin{figure*}
\caption{Schematic representation illustrating the module and the entanglement distribution scheme. The module contains an optical cavity with a four-level system. The entanglement distribution scheme is based on a Michelson interferometer where two modules are connected via an optical fiber. A single photon comes in from the right port and is conditionally reflected at each module depending on the state of the emitter. Erasing the path information at the beam splitter followed by detection at the dark port projects the system to the singlet Bell state. }
\label{fig1}
\end{figure*}
We begin our description of the architecture with an entanglement scheme based on the state-dependent reflectivity of a module consisting of an emitter-cavity system \cite{Waks2006,Hu2008}, as depicted in Fig.~\ref{fig1}. We can describe the emitter as a four-level system with transitions $|0\rangle \rightarrow |0_E\rangle$ and $|1\rangle \rightarrow |1_E\rangle$, with frequencies $\omega_0$ and $\omega_1=\omega_0+\delta$, respectively. The probability for a photon to be reflected by a module with cooperativity $C$ and the cavity tuned to the interrogation frequency $\omega_0$ is given by \cite{Kimble1998} \begin{equation}\label{eqn:reflection} P_R=1-\frac{1+4 C+(\delta/\gamma)^2}{1+4 C+4 C^2+(\delta/\gamma)^2}. \end{equation}
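As a quick illustration of Eq.~(\ref{eqn:reflection}), the sketch below evaluates $P_R$ for the two emitter states. The pairing of numbers is our illustrative assumption, taken from values quoted later in the text: cooperativity $C\simeq 50$ and the NV$^-$ values $\delta=2\pi\times 2.71$ GHz, $\gamma=2\pi\times 11$ MHz.

```python
# Conditional reflection probability of the emitter-cavity module,
# P_R = 1 - (1 + 4C + (delta/gamma)^2) / (1 + 4C + 4C^2 + (delta/gamma)^2).
def reflection_probability(C, delta_over_gamma):
    d2 = delta_over_gamma ** 2
    return 1.0 - (1 + 4 * C + d2) / (1 + 4 * C + 4 * C ** 2 + d2)

C = 50.0            # cooperativity (value discussed later in the text)
x = 2710.0 / 11.0   # delta/gamma for the NV- transitions quoted later

p_resonant = reflection_probability(C, 0.0)  # emitter in |0>: transition resonant
p_detuned = reflection_probability(C, x)     # emitter in |1>: detuned by delta

print(f"P_R(|0>) = {p_resonant:.3f}, P_R(|1>) = {p_detuned:.3f}")
```

With these numbers a resonant photon is reflected with probability $\approx 0.98$, while the detuned reflection is already much smaller; the limit $P_R\rightarrow 0$ is approached once $(\delta/\gamma)^2$ also dominates $4C^2$, as stated in the text.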
We have assumed a cavity with matched mirrors, in which case an impinging photon will be reflected by the module with high probability if the emitter is in the ground state $|0\rangle$ and the cooperativity is $C\gg 1$. In the case of large detuning, $(\delta/\gamma)^2\gg C^2\gg C\gg 1$, the cavity is effectively empty and the reflection probability approaches $P_R\rightarrow 0$. In the simplest variant of our entanglement scheme (Fig.~\ref{fig1}), we place two such modules at the output ports of a $50:50$ beamsplitter and prepare each emitter in an equal superposition of the ground states $|0\rangle$ and $|1\rangle$. A single photon is then sent onto the beamsplitter. If it is subsequently detected at the ``dark'' port of the beamsplitter, the emitters are projected onto the maximally entangled state \begin{equation}
|S\rangle=\frac{1}{\sqrt{2}}(|1\rangle |0\rangle-|0\rangle|1\rangle ) \end{equation} with success probability $p=\eta^2/ 8$, where the collection efficiency $\eta^2$ includes the effects of inefficient sources and detectors and transmission losses. This probability may appear low; however, the generated entangled state has extremely high fidelity ($>$99\%) and is robust to imperfections (see supplementary material). For instance, an imbalance in the cavity reflection coefficients slightly reduces the success probability but does not degrade the fidelity of the resulting state.
The low success probability of the implementation can be simply overcome using a repeat until success approach to establish an entanglement link with high probability \cite{Barrett2005,Munro2010}. We will show in the following sections that the scheme not only exhibits high fidelity in the presence of physical imperfections, but also, unlike other approaches, does not involve any catastrophic errors.
In addition, the module enables (near) perfect non-demolition measurement of the qubit state. For a quantum computing architecture, we require a second qubit in the cavity to act as a quantum memory. Ideally, the coupling between our four-level system and this memory qubit can be switched on and off as required. This allows the four-level system to be reused for entanglement creation, now with a third module. By repeating this process with additional modules we can generate a cluster state suitable for fault-tolerant quantum computation. In the following we detail this architecture by describing a full implementation using single NV$^-$ centres in microcavities connected in a photonic network.
\section{The diamond module}
Let us now turn our attention to a concrete implementation: a fiber-connected optical cavity containing a single NV$^-$ centre, of which the energy levels are depicted in Fig.~\ref{fig2}a. \begin{figure*}
\caption{The NV$^-$ centre is shown as a concrete example of the artificial atom realising the module. Its energy level structure for a low-temperature, low-strain sample \cite{Tamarat2008,Togan2010} is illustrated in a). A static magnetic field of approximately 20 mT is used to separate the $m_s=\pm 1$ levels. The NV$^-$ centre possesses both an electron spin and a $^{15}$N nuclear spin, which will be used to store and grow a cluster state for quantum information processing. b) illustrates how the storage of entanglement in the nuclear spins is achieved. The nuclear spin needs to be prepared in the superposition state $|n_+\rangle= \frac{1}{\sqrt 2}(|0\rangle +|1\rangle)$ before the protocol starts. During this operation, the electron spin is in the polarised state $|0\rangle$, so the hyperfine coupling is effectively turned off. When the electron spin is rotated to $\frac{1}{\sqrt 2}(|0\rangle +|+1\rangle)$ for the entanglement distribution scheme, the clock associated with the hyperfine coupling starts. A spin-echo sequence can be used to decouple the electron and nuclear spins where necessary---for instance, when the entanglement distribution fails, we need to decouple the electron and nuclear spins before re-initialising the electron spin and attempting the protocol again until success. }
\label{fig2}
\end{figure*}
The lowest three electron spin states, $|m_s=0,\pm 1\rangle\equiv |0,\pm 1\rangle$, form the spin-1 $^3A_2$ manifold, which has a zero-field splitting of $2.87$ GHz. With an externally applied magnetic field $B\sim 20$ mT, our electron spin qubit levels $|0\rangle$ and $|+1\rangle$ are far detuned from the $|-1\rangle$ energy level and so form an excellent qubit. The isotope $^{15}$N will be utilised as a spin-\nicefrac{1}{2} nuclear memory. Next, the optical transitions between one of the $^3A_2$ magnetic sub-levels $|i\rangle$ and the $^3E$ levels $|M_i\rangle$ coupled to the cavity field can be represented by $\hbar \sum_{i=1}^{6} g_{m_s,i} \left[ a^\dagger |i \rangle \langle M_i | + a | M_i\rangle \langle i | \right]$, where $g_{m_s,i}$ are the coupling constants between the transitions and the field, $a^\dagger$ ($a$) are the field's creation (annihilation) operators, and $M_i$ are the energy eigenstates, in order of ascending energy, within the $^3E$ manifold. At zero strain they are given by the basis states $\{M_{1...6}\}=\{E_2,E_1,E_x,E_y,A_1,A_2\}$, neglecting a small mixture of the $E_{x,y}$ and $E_{1,2}$ states due to spin-spin interaction. The basis states $E_x$ and $E_y$ have electronic spin zero, while the others ($A_{1,2}$ and $E_{1,2}$) are equal superpositions of spin $\pm 1$ \cite{Togan2010,Maze2011}. For our scheme, we apply an electric field in the $x$-direction (${\cal E}_x$) to lift the degeneracy of the spin-zero states in the excited-state manifold. This greatly reduces the sensitivity to stray strain or electric field influences in the $y$-direction and thus makes the system more robust. ${\cal E}_x$ can be adjusted at each site to bring different NV$^-$ centres to the same resonance frequency. We choose $| 0_E\rangle=| E_x\rangle+\epsilon$ and $| 1_E\rangle =| M_5\rangle=0.98| A_1\rangle+0.17| A_2\rangle+\epsilon$, where $\epsilon$ represents negligible contributions from other basis states.
For this setting, we find $\delta=2\pi\times 2.71\,$GHz, which is far greater than the homogeneous optical half-width of the chosen transitions, $\gamma=2\pi\times 11\,$MHz. We note that although the NV$^-$ is not a simple four-level system (Fig.~\ref{fig2}a), all other allowed transitions are detuned even further from the excitation frequency $\omega$ and can be neglected. Thus we have the properties required for entanglement distribution based on state-selective reflection using the NV$^-$ centre electron spin states $|0\rangle$ and $|+1\rangle$.
\subsection{Quantum non-demolition detection}
The conditional reflection of a photon from a module allows us to perform a quantum non-demolition measurement of the NV$^-$ state \cite{Volz2011} (see supplementary material). The measurement sequence consists of a photon measurement followed by a qubit flip and a second photon measurement. For the photon measurement, a single photon is sent to a module, and will be reflected and detected if the NV$^-$ centre is in the state $|0\rangle$, and lost otherwise. The qubit flip is achieved by a microwave $\pi-$pulse. Under ideal conditions a photon detection would be expected with certainty for one of the two photon measurements, while the absence of a detection event would indicate leakage of the NV$^-$ centre from the qubit subspace to the $|-1\rangle$ state. The destructiveness of this measurement depends on the probability of exciting the NV$^-$ centre and the subsequent spin-flip probability. The measurement must be repeated several times to make up for the finite photon collection efficiency, which in turn increases the spin-flip probability. Nonetheless, we find that it is possible to achieve a measurement error rate of $\epsilon_{QND}=0.1\%$ even for a finite collection efficiency of $\eta^2=0.3$, which is sufficient for fault-tolerant computation (see Section \ref{sec:benchmarking}). For unity detection efficiency, $\epsilon_{QND}$ reaches $0.01\%$, but cannot be reduced further in our current scheme due to the non-zero spin-flip probability for each measurement. This is due in part to off-resonant excitations of the NV$^-$ centre in the cavity, the probability of which increases with cooperativity. This leads to a working cooperativity of $C\simeq 50$, which is realistically achievable with currently available microcavity technology.
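The repetition overhead can be estimated with a toy calculation. Assuming each photon measurement independently yields a click with probability $\eta^2$, and ignoring for simplicity the spin-flip back-action that ultimately limits $\epsilon_{QND}$, the number of repetitions needed before the no-detection probability falls below a target is:

```python
import math

def repetitions_needed(collection_eff, target_miss):
    # Smallest k with (1 - collection_eff)^k <= target_miss: after k
    # repetitions, the chance of never having detected a photon is below target.
    return math.ceil(math.log(target_miss) / math.log(1.0 - collection_eff))

# eta^2 = 0.3 as quoted in the text; a miss probability of 1e-3 is an
# illustrative target, not a figure from the text.
print(repetitions_needed(0.3, 1e-3))  # -> 20 repetitions
```

In practice each repetition also carries a small spin-flip probability, which is why $\epsilon_{QND}$ saturates at $0.01\%$ even for unity detection efficiency.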
\subsection{Remote entanglement}
We begin by initialising each electron spin to $|0\rangle$, followed by a rotation to $\frac{1}{\sqrt{2}}(|0\rangle+|+1\rangle)$ in a few nanoseconds using a polarised driving field. A single-photon pulse is then sent onto the interferometer (Fig.~\ref{fig2}b) and the dark port monitored. We repeat this procedure until entanglement is heralded by the successful detection of a photon at the dark port. This is made possible by the good cycling properties of the NV$^-$ transition $|0\rangle \rightarrow |E_x\rangle$ \cite{Togan2010}. We note furthermore that the de-ionisation process NV$^-\rightarrow$ NV$^0$, and the resulting dynamical spectral diffusion, is rendered impossible by using only one single-photon excitation in the interferometer at a time \cite{Togan2010}.
In the NV$^-$ implementation of our module, the nuclear spin is a long-lived quantum memory which will in our architecture be designated to store one node of a cluster state \cite{RB2001}. Our scheme creates entanglement between the electrons of the two NV$^-$ centres. The transfer of the entanglement to the nuclear spin memories is done through the Ising component of the hyperfine coupling ($A_\parallel \sim 3.03$ MHz \cite{Felton2009}), which is tuned by the external magnetic field of $B\sim 20$ mT to give a conditional phase on the state of the two spins. The amount of entanglement oscillates in time between zero and its maximum. At times $\tau$ corresponding to the $\pi$ points of this oscillation, the effective gate is a controlled-phase gate, while at the $2\pi$ points it is the identity. The hyperfine coupling is always present but is effectively turned off while the electron spin is in the polarised state $|0\rangle$.
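The $\pi$ and $2\pi$ points can be made concrete with a small numerical check. This is a minimal sketch, under the assumption that the Ising part of the hyperfine coupling acts as the diagonal phase evolution $U(\phi)=\mathrm{diag}(e^{-i\phi},e^{i\phi},e^{i\phi},e^{-i\phi})$ on the electron-nuclear pair (so the accumulated conditional phase is $4\phi$): the Schmidt coefficients of $U(\phi)\,|+\rangle|+\rangle$ show the entanglement oscillating between maximal at the $\pi$ point ($\phi=\pi/4$) and zero at the $2\pi$ point ($\phi=\pi/2$).

```python
import numpy as np

def ising_evolution(phi):
    # ZZ phase evolution on (electron, nucleus); the conditional
    # (two-body) phase accumulated between the branches is 4*phi.
    return np.diag(np.exp(-1j * phi * np.array([1.0, -1.0, -1.0, 1.0])))

def schmidt_coefficients(state):
    # Singular values of the 2x2 coefficient matrix = Schmidt coefficients.
    return np.linalg.svd(state.reshape(2, 2), compute_uv=False)

plus_plus = np.full(4, 0.5)  # |+>|+> = (|00>+|01>+|10>+|11>)/2

sv_pi = schmidt_coefficients(ising_evolution(np.pi / 4) @ plus_plus)   # "pi point"
sv_2pi = schmidt_coefficients(ising_evolution(np.pi / 2) @ plus_plus)  # "2pi point"

print(np.round(sv_pi, 3))   # maximally entangled: both coefficients 1/sqrt(2)
print(np.round(sv_2pi, 3))  # product state: coefficients 1 and 0
```

At the $\pi$ point the evolution is a controlled-phase gate up to local $Z$ rotations; at the $2\pi$ point it reduces to a product of local operations, leaving the nuclear spin decoupled.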
Putting this together, the complete nuclear spin entanglement protocol begins with both electron spins and both nuclear spins polarised in their ground states (achieved via the quantum non-demolition measurement). The electron spin is then rapidly rotated to the $|+\rangle$ state via a $\pi/4$ $Y$ rotation, an effective controlled-NOT operation is performed between the nucleus and electron via the hyperfine interaction, at which point the electron is again rotated by $\pi/4$ around the $Y$-axis and measured in the computational basis. This initialises the nuclear spin into the $\ket{n_+}$ state. We then rotate the electron back into the $\ket{+}$ state to attempt an electron-electron bond via the optical transitions. The hyperfine coupling turns on when the photonic entangling protocol is initiated by the electron spin rotation, but a spin-echo-like sequence can be used to disentangle the electron and nuclear spins at any time we require. If the gate has succeeded, we perform a $\pi/4$ $Y$-rotation on one of the two electron spins, and wait until the hyperfine interaction maximally entangles the electron and nuclear spins within each node. A $\pi/4$ $Y$-rotation is then performed on the electron spin of each module, followed by its measurement in the computational basis. This completes the transfer of the entangled link to the nuclear spins. If the entanglement distribution has failed, the protocol is repeated until a success is heralded, as illustrated in Fig.~\ref{fig2}b. We note that it is not necessary to reinitialise the nuclear spin prior to each attempt.
\section{Sharing entangled states between three modules}
The next step is to extend our cluster from two nuclear spin qubits to three. We begin with an entangled pair stored in the nuclear spins of modules A and B, as shown in Fig.~\ref{fig3}. A new entanglement bond between the electron spins in modules B and C is created using the same repeat until success protocol, though only the nuclear spin in C needs to be initialised. Once the entanglement between the electronic qubits is created, it is transferred to the nuclear spins using the hyperfine coupling described previously.
\begin{figure*}
\caption{The repeat until success protocol is accurately time-sequenced. This is required by the nature of the coupling, as the entanglement between the electron spin and nuclear spin oscillates. Upon failure, we wait until the $2\pi$ point in the entangling cycle, where the nucleus and electron are decoupled. The nuclear spin is consequently protected from feedback errors through the hyperfine coupling by accurately timing the re-initialisation of the electron spins. When the distribution of entanglement between two electrons succeeds, the entanglement bond is transferred to the nuclear spins by waiting until a $\pi$ point, where the electron and nuclear spins are maximally entangled.}
\label{fig3}
\end{figure*}
This time, the nuclear spin in module B is in use, carrying information established at the beginning of the protocol. Photon loss may feed back via the permanent hyperfine coupling, introducing catastrophic errors in the states stored in the nuclear spins of modules A and B. For the protocol to be useful, we must be able to preserve with high fidelity the existing entangled states stored in the nuclear spins of A and B, while using the electron spin in B to create new entanglement with module C.
By introducing a time-sequenced entangling procedure we can avoid decoherence caused by photon loss. Furthermore, by using spin-echo-like sequences to decouple the electron spins from their surrounding environment we may extend their coherence time. The clock for the hyperfine coupling sequence starts when the photonic entangling protocol is initiated (that is, when the electron spin is rotated out of the polarised $|0\rangle$ state). If the entangling protocol fails, the system waits until the spin-echo sequence decouples the electron and nuclear spins. At this point the nuclear system recovers its coherence and the information stored on the nuclear spin remains untouched until the protocol succeeds. This process is illustrated in Fig.~\ref{fig3}. Once the new entangling bond is established, indicated by a heralding signal, we again wait until the spin echo decouples the electron and nuclear spins. We then perform a single $\pi/4$ $Y$-rotation on one of the two electron spins, and wait until the hyperfine interaction maximally entangles the electron and nuclear spins within each of the nodes. An $X$-basis measurement is performed on each electron (via a $\pi/4$ $Y$-rotation and computational-basis readout) to transfer the new bond to the nuclear system.
Repeating this with additional modules, we can generate an arbitrary cluster state. We are particularly interested in generating the three-dimensional topological cluster state (illustrated in Fig.~\ref{fig4}a) capable of supporting fault-tolerant quantum computation \cite{RHG07,FG09}. Topological models of error correction \cite{Kitaev2003,Dennis2002} exhibit relatively high tolerance to errors and are particularly well suited to distributed architectures due to their simple underlying structure \cite{DFSG08,SJ09,JMFMKLY10,YJG12,Nickerson2013}. The topological cluster state is particularly useful in the context of our repeat until success protocol, as it is inherently robust against missing bonds, which are heralded. These missing bonds can be handled in the classical processing of the measurement results, without any modification to the quantum circuit \cite{Barrett2010}. To prepare the topological cluster state, each physical qubit is entangled with its four nearest neighbours; hence a dagger-shaped cluster state is the fundamental unit, independent of the size of the network, highlighted by the blue bonds in Fig.~\ref{fig4}b. Four entangling steps are required to create this fundamental state with five modules.
\begin{figure*}
\caption{Three-dimensional topological cluster state and module connectivity in a two-dimensional plane. a) The topological cluster state is a resource for fault-tolerant quantum computation. However, the whole state is not required at all times during the computation. Instead, only two layers of the cluster state need to be prepared and stored at any given time. b) The physical unit cell composed of two layers. The back layer contains eight connected qubits arranged in a square (orange), while the front layer has five qubits arranged in a cross (blue). The two layers are connected by controlled-phase gates (green). Measurement of the front layer of the cluster teleports the current state of the computer to the back layer, at which point the physical qubits just measured can be reconnected in accordance with the geometry of the cluster state, and the information can be teleported back again. In this way, the two physical layers execute the even and odd temporal steps of the computation, allowing an arbitrarily deep computation to be performed with a fixed number of physical qubits. c) A compact layout of modules on a two-dimensional plane. }
\label{fig4}
\end{figure*}
\section{Benchmarking the photonic architecture} \label{sec:benchmarking}
To process quantum information with a three-dimensional topological cluster state, the state is consumed by measurements on physical qubits in sequential two-dimensional layers, where one axis is defined as the temporal axis. These measurements create and manipulate encoded qubits defined by defects \cite{RHG07,Devitt2012}. As the computation proceeds by measuring one layer at a time, the whole topological cluster state does not need to be constructed at the outset. Only two successive layers need to be prepared and stored at any given time, allowing us to concentrate on only two physical layers of modules. The current state of the computer is teleported back and forth between these two layers, which are refreshed and recycled to generate the entire topological cluster. Taking the centre of each cell (in Fig.~\ref{fig4}b), we initiate a sequence of gates to generate the dagger-shaped cluster state throughout the lattice, which generates one layer of the topological cluster state (see supplementary material). The two layers of the module network are flattened to a two-dimensional plane, as shown in Fig.~\ref{fig4}c. This pattern repeats to an arbitrarily large cross section.
\begin{figure*}
\caption{Fault-tolerant thresholds and required component error rates. a) Numerical simulation of topological error correction. The logical error rate is plotted as a function of the physical error rate for various code distances $d$, where we have assumed that all gates and measurements operate at the same error rate. Each point corresponds to at least $10^4$ trials. The value of the physical error rate at the intersection gives the threshold (in this case, approximately $7.3\times10^{-3}$). For physical error rates below this threshold, the logical error rate can be reduced arbitrarily by increasing the code distance. b) The required fidelity for each physical parameter. The dots on the lines show the current best accuracy reported, all of which already meet the required threshold accuracy of 99.27\%. For a realistic implementation, the gate fidelity should be above 99.9\%, corresponding to the green-coloured regime in the plot. Electronic and nuclear spin coherence times are already in this regime, and the remaining parameters may soon meet the desired accuracy given the rapid development of quantum control in such systems. The fidelity does not converge to unity due to imperfections in the NV$^-$ centre.}
\label{fig5}
\end{figure*}
At a circuit level, we are interested in the threshold error rate, below which the architecture becomes fault tolerant \cite{RHG07}. The projective measurement of the nucleus (via the electron-nucleus hyperfine interaction) allows us to combine measurement and reinitialisation of the nuclear qubit in a single step. Therefore, the depth of the quantum circuit to prepare the topological cluster state is reduced from six steps to five. We find that this reduction increases the error threshold to 0.73\% (see Fig.~\ref{fig5}a). Given this threshold, our target error rate for the five relevant gates is $\sim 0.1$\%, as this is sufficiently far below the threshold to allow significant suppression of errors using a practical number of modules \cite{Devitt2012}.
The target error rate does not tell us much until it is decomposed into the individual physical components. Each gate consists of several physical steps and involves several sources of error. In our case, these sources are parameterised by the nuclear and electron spin decoherence times, the electron measurement efficiency, the electron rotation efficiency, and timing errors. As described, the sequence to generate an entangling bond is probabilistic, and the protocol repeats until success. Given that we require bonds to succeed with probability $P=99.9$\%, if the success probability of a single attempt is $p_c$, the number of attempts we require is $s = \log(1-P)/\log(1-p_c)$. For $p_c = 0.0625$, $s = 107$. We consider the error rate for each gate in the worst-case scenario, as the heralded failure rate can be significantly higher than the rate of unheralded errors \cite{Barrett2010}.
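The attempt count quoted above follows directly from the geometric success law $1-(1-p_c)^s \geq P$; a short check, rounding to the nearest attempt as in the figures quoted in the text:

```python
import math

def attempts_for_success(P, p_c):
    # Solve 1 - (1 - p_c)^s = P for s, rounding to the nearest
    # attempt (the convention that reproduces the figures in the text).
    return round(math.log(1.0 - P) / math.log(1.0 - p_c))

# p_c = 6.25% would correspond to eta^2 = 0.5 via p = eta^2 / 8.
print(attempts_for_success(0.999, 0.0625))  # -> 107 attempts for P = 99.9%
print(attempts_for_success(0.990, 0.0625))  # -> 71 attempts for P = 99.0%
```

The second value shows how quickly the overhead drops when a slightly lower bond success probability is acceptable, which the Discussion exploits for long-distance links.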
The required fidelity for each physical parameter is shown in Fig.~\ref{fig5}b. Each curve is plotted assuming it is the \emph{only} non-zero error, except for possible errors arising from the absorption of photons by the NV$^-$ node (see supplementary material). The coloured green region in Fig.~\ref{fig5}b is the target for each parameter for an operational computer (though parameters in the yellow region still lead to gates below the threshold). For the architecture to be fault tolerant, these errors need to be combined (see supplementary material). Electron and nuclear decoherence is already sufficiently low \cite{Balasubramanian2009,Ishikawa2012,Fang2013,Maurer2012}, while the other parameters still need improvement. However, it is important to note that the required improvements are less than one order of magnitude, and are not limited by any currently known fundamental limitations of the NV$^-$ system itself.
Assuming that the threshold condition is met, performance is mostly determined by the computational cycle time, which is limited by the time taken to establish all the electron-electron connections. For bond connections with $P=99.9$\%, the total time required to create a nuclear-nuclear bond is $3.5\;\mu$s, assuming $p_c=6.25$\%. This time could be reduced by lowering the required connection efficiency and exploiting the robustness of the topological code to missing bonds \cite{Barrett2010}. The quantum circuit takes five steps to construct each cross-sectional layer of the topological cluster state. Hence, a unit cell of the cluster is prepared every $\sim 30\;\mu$s. To implement an algorithm on the computer, we create pairs of defects in the cluster. The volume of cluster allocated to a pair of defects represents the degree of error correction, parametrised by the distance between the defects, $d$. For a logical error rate $p_L \leq 10^{-18}$, $d \geq 32$ is required \cite{Devitt2012}. Therefore, a logical cell requires $V=\left(\frac{5d}{4}\right)^3 = 40^3$ cluster cells. To perform a logical CNOT gate requires a cluster volume $2\times 2$ in cross section and 2 logical cells in temporal depth. Hence, it takes $3.4$ ms for $p_c=6.25$\% (a clock frequency of $\sim 295$ Hz). This rate can be further improved by better optical efficiencies, but is ultimately limited by the hyperfine interaction of the NV$^-$ node used for nuclear spin operations. If we assume a deterministic electron-electron connection, a logical CNOT gate would take approximately $960\;\mu$s ($\sim$1 kHz), as the system becomes rate limited by nuclear measurement (see supplementary material).
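The resource arithmetic in this paragraph can be checked directly; the sketch below only re-derives the numbers quoted above (the clock frequency follows from the quoted 3.4 ms logical CNOT time):

```python
d = 32                   # code distance for a logical error rate <= 1e-18
cell_side = 5 * d // 4   # linear extent of a logical cell, in cluster cells
V = cell_side ** 3       # cluster-cell volume per logical cell

t_cnot = 3.4e-3          # quoted logical CNOT time at p_c = 6.25%
clock_hz = 1.0 / t_cnot  # ~294 Hz, i.e. the ~295 Hz quoted in the text

print(cell_side, V)      # -> 40 64000
print(round(clock_hz))   # -> 294
```

This makes explicit that the $40^3$ figure is just $(5d/4)^3$ evaluated at $d=32$, and that the quoted clock rate is simply the inverse of the logical CNOT duration.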
\section{Discussion} As we have seen, a simple module can form the basis of a scalable quantum computer architecture. The architecture is naturally distributed, and hence is applicable to quantum communication \cite{Stephens2012}. Such a network may be local or global, with local networks connected by quantum communication channels. In this case, the distance between the modules may become orders of magnitude larger. The time delay due to the communication distance may be mitigated by the long-lived memory inside the module. With increased distance between modules, photon loss would increase, reducing the success probability of the entangling protocol. However, long-distance communication does not necessarily require $P=99.9$\%. Instead, with $P=99.0$\%, the number of attempts is reduced to $s=71$ for $p_c=6.25$\%.
We have found that the physical requirements of our architecture are broadly consistent with present technology, although improvements are still required, in particular to the measurement efficiency. While technological developments should help to meet these requirements, the physical requirements may also prove less stringent under a more sophisticated adaptive error analysis.
\vskip 1 truecm \emph{Acknowledgements---} We thank Austin Fowler, Andrew Greentree and Burkhard Scharfenberger for valuable discussions. We acknowledge partial support from FIRST, NTT and NICT in Japan, the Austrian Science Fund (FWF) through the Wittgenstein Prize and the EU through the project DIAMANT. KB and TN acknowledge support from the FWF Doctoral Programme CoQuS (\textit{W1210}).
\begin{thebibliography}{99}
\bibitem{Devitt2009} S.~J.~Devitt, W.~J.~Munro and K.~Nemoto. Quantum Error Correction for Beginners. {\it Rep. Prog. Phys.} {\bf 76}, 076001 (2013).
\bibitem{Ladd2010} T.~D.~Ladd, F.~Jelezko, R.~Laflamme, Y.~Nakamura, C.~Monroe, and J.~L.~O'Brien. Quantum computers. {\it Nature} {\bf 464}, 45-53 (2010).
\bibitem{Davies1976} G.~Davies and M.~F.~Hamer. Optical Studies of the 1.945 eV Vibronic Band in Diamond. {\it Proc. R. Soc. Lond. A} {\bf 348}, 285 (1976).
\bibitem{Harley1984} R.~T.~Harley, M.~J.~Henderson, and R.~M.~Macfarlane. Persistent spectral hole burning of colour centres in diamond. {\it J. Phys. C} {\bf 17}, L233 (1984).
\bibitem{Reddy1987} N.~R.~S.~Reddy, N.~B.~Manson, and E.~R.~Krausz. Two-laser spectral hole burning in a colour centre in diamond. {\it J. Luminescence} {\bf 38}, 46 (1987).
\bibitem{Childress2005} L.~Childress, J.~M.~Taylor, A.~S.~S\o{}rensen, and M.~D.~Lukin. Fault-tolerant quantum repeaters with minimal physical resources and implementations based on single-photon emitters. {\it Phys. Rev. A} {\bf 72}, 052330 (2005).
\bibitem{Childress2006} L.~Childress, J.~M.~Taylor, A.~S.~S\o{}rensen, and M.~D.~Lukin. Fault-Tolerant Quantum Communication Based on Solid-State Photon Emitters. {\it Phys. Rev. Lett.} {\bf 96}, 070504 (2006).
\bibitem{Benjamin2006} S.~C.~Benjamin, D.~E.~Browne, J.~Fitzsimons, and J.~J.~L.~Morton. Brokered graph-state quantum computation. {\it New J. Phys.} {\bf 8}, 141 (2006).
\bibitem{Jiang2007} L.~Jiang, J.~M.~Taylor, A.~S.~S\o{}rensen, and M.~D.~Lukin. Distributed quantum computation based on small quantum registers. {\it Phys. Rev. A} {\bf 76}, 062323 (2007).
\bibitem{YJG12} N.~Yao, L.~Jiang, A.~Gorshkov, P.~Maurer, G.~Giedke, J.~Cirac, and M.~Lukin. Scalable Architecture for a Room Temperature Solid-State Quantum Information Processor. {\it Nature Comm.} {\bf 3}, 800 (2012).
\bibitem{Maurer2012} P.~C.~Maurer, G.~Kucsko, C.~Latta, L.~Jiang, N.~Y.~Yao, S.~D.~Bennett, F.~Pastawsk, D.~Hunger, N.~Chisholm, M.~Markham, D.~J.~Twitchen, J.~I.~Cirac, and M.~D.~Lukin. Room-Temperature Quantum Bit Memory Exceeding One Second. {\it Science} {\bf 336}, 1283-1286 (2012).
\bibitem{Togan2010} E.~Togan, Y.~Chu, A.~S.~Trifonov, L.~Jiang, J.~Maze, L.~Childress, M.~V.~G.~Dutt, A.~S.~S\o{}rensen, P.~R.~Hemmer, A.~S.~Zibrov, and M.~D.~Lukin. Quantum entanglement between an optical photon and a solid-state spin qubit. {\it Nature} {\bf 466}, 730-734 (2010).
\bibitem{Jelezko2004} F.~Jelezko, T.~Gaebel, I.~Popa, A.~Gruber, and J.~Wrachtrup. Observation of coherent oscillations in a single electron spin. {\it Phys. Rev. Lett.} {\bf 92}, 076401 (2004).
\bibitem{Dutt2007} M.~V.~G. Dutt, L.~Childress, L.~Jiang, E.~Togan, J.~Maze, F.~Jelezko, A.~S.~Zibrov, P.~R.~Hemmer, and M.~D.~Lukin. Quantum Register Based on Individual Electronic and Nuclear Spin Qubits in Diamond. {\it Science} {\bf 316}, 1312-1316 (2007).
\bibitem{Neumann2008} P.~Neumann, N.~Mizuochi, F.~Rempp, P.~Hemmer, H.~Watanabe, S.~Yamasaki, V.~Jacques, T.~Gaebel, F.~Jelezko, and J.~Wrachtrup. Multipartite Entanglement Among Single Spins in Diamond. {\it Science} {\bf 320}, 1326-1329 (2008).
\bibitem{Hanson2008} R.~Hanson, V.~V.~Dobrovitski, A.~E.~Feiguin, O.~Gywat, and D.~D.~Awschalom. Coherent Dynamics of a Single Spin Interacting with an Adjustable Spin Bath. {\it Science} {\bf 320}, 352-355 (2008).
\bibitem{Jiang2009} L.~Jiang, J.~S.~Hodges, J.~R.~Maze, P.~Maurer, J.~M.~Taylor, D.~G.~Cory, P.~R.~Hemmer, R.~L.~Walsworth, A.~Yacoby, A.~S.~Zibrov, and M.~D.~Lukin. Repetitive Readout of a Single Electronic Spin via Quantum Logic with Nuclear Spin Ancillae. {\it Science} {\bf 326}, 267-272 (2009).
\bibitem{Neumann2010} P.~Neumann, J.~Beck, M.~Steine, F.~Rempp, H.~Fedder, P.~R.~Hemmer. J.~Wrachtrup, and F.~Jelezko. Single-Shot Readout of a Single Nuclear Spin. {\it Science} {\bf 329}, 542-544 (2010).
\bibitem{Neumann2010b} P.~Neumann, R.~Kolesov, B.~Naydenov, J.~Beck, F.~Rempp, M.~Steiner, V.~Jacques, G.~Balasubramanian, M.~L.~Markham, D.~J.~Twitchen, S.~Pezzagna, J.~Meijer, J.~Twamley, F.~Jelezko, and J.~Wrachtrup. Quantum register based on coupled electron spins in a room-temperature solid. {\it Nature Physics} {\bf 6}, 249-253 (2010).
\bibitem{Buckley2010} B.~B.~Buckley, G.~D.~Fuchs, L.~C.~Bassett, and D.~D.~Awschalom. Spin-Light Coherence for Single-Spin Measurement and Control in Diamond. {\it Science} {\bf 26}, 1212-1215 (2010).
\bibitem{Robledo2011} L.~Robledo, L.~Childress, H.~Bernien, B.~Hensen, P.~F.~A.~Alkemade, and R.~Hanson. High-fidelity projective read-out of a solid-state spin quantum register. {\it Nature} {\bf 477}, 574 (2011).
\bibitem{vanderSar2012} T.~van der Sar, Z.~H.~Wang, M.~S.~Blok, H.~Bernien, T.~H.~Taminiau, D.~M.~Toyli, D.~A.~Lidar, D.~D.~Awschalom, R.~Hanson, and V.~V.~Dobrovitski. Decoherence-protected quantum gates for a hybrid solid-state spin register. {\it Nature} {\bf 484}, 82-86 (2012).
\bibitem{Dolde2013} F.~Dolde, I.~Jakobi, B.~Naydenov, N.~Zhao, S.~Pezzagna, C.~Trautmann, J.~Meijer, P.~Neumann, F.~Jelezko, and J.~Wrachtrup. Room-temperature entanglement between single defect spins in diamond. {\it Nature Phys.} {\bf 9}, 139-143 (2013).
\bibitem{Park2006} Y.-S.~Park, A.~K.~Cook, and H.~Wang. Cavity QED with diamond nanocrystals and silica microspheres. {\it Nano Lett.} {\bf 6}, 2075-2079 (2006).
\bibitem{Englund2010} D.~Englund, B.~J.~Shields, K.~Rivoire, F.~Hatami, J.~Vuckovic, H.~Park, and M.~D.~Lukin. Deterministic coupling of a single nitrogen vacancy center to a photonic crystal cavity. {\it Nano Lett.} {\bf 10}, 3922-3926 (2010).
\bibitem{Faraori2011} A.~Faraon, P.~E.~Barclay, C.~Santori, K.-M.~C.~Fu, and R.~G.~Beausoleil. Resonant enhancement of the zero-phonon emission from a colour centre in a diamond cavity. {\it Nature Photon.} {\bf 5}, 301-305 (2011).
\bibitem{Englund2007} D.~Englund, A.~Faraon, I.~Fushman, N.~Stoltz, J.~Vuckovic, and P.~Petroff. Controlling cavity reflectivity with a single quantum dot. {\it Nature} {\bf 450}, 857-861 (2007).
\bibitem{Knill2005} E.~Knill. Quantum computing with realistically noisy devices. {\it Nature} {\bf 434}, 39-44 (2005).
\bibitem{Bacon2006} D.~Bacon. Operator quantum error correcting subsystems for self-correcting quantum memories. {\it Phys. Rev. A} {\bf 73}, 012340 (2006).
\bibitem{Raussendorf2007} R.~Raussendorf and J.~Harrington. Fault-tolerant quantum computation with high threshold in two dimensions. {\it Phys. Rev. Lett.} {\bf 98}, 190504 (2007).
\bibitem{RHG07} R.~Raussendorf, J.~Harrington, and K.~Goyal. Topological fault-tolerance in cluster state quantum computation. {\it New J. Phys.} {\bf 9}, 199 (2007).
\bibitem{FG09} A.~Fowler and K.~Goyal. Topological cluster state quantum computing. {\it Quant. Inf. Comput.} {\bf 9}, 721, (2009).
\bibitem{Barrett2010} S.~D.~Barrett and T.~M.~Stace. Fault Tolerant Quantum Computation with Very High Threshold for Loss Errors. {\it Phys. Rev. Lett.} {\bf 105}, 200502 (2010).
\bibitem{Stute2013} A.~Stute, B.~Casabone, B.~Brandst\"atter, K.~Friebe, T.~E.~Northup, and R.~Blatt. Quantum-state transfer from an ion to a photon. {\it Nature Photon.} {\bf 7}, 219-222 (2013).
\bibitem{Greve2012} K.~De Greve, L.~Yu, P.~L.~McMahon, J.~S.~Pelc, C.~M.~Natarajan, N.~Y.~Kim, E.~Abe, S.~Maier, C.~Schneider, M.~Kamp, S.~H\"ofling, R.~H.~Hadfield, A.~Forchel, M.~M.~Fejer, and Y.~Yamamoto. Quantum-dot spin-photon entanglement via frequency downconversion to telecom wavelength. {\it Nature} {\bf 491}, 421 (2012).
\bibitem{Schaibley2013} W.~B.~Gao, P.~Fallahi, E.~Togan, J.~Miguel-Sanchez, and A.~Imamoglu. Observation of entanglement between a quantum dot spin and a single photon. {\it Nature} {\bf 491}, 426 (2012).
\bibitem{Gao2012} J.~R.~Schaibley, A.~P.~Burgers, G.~A.~McCracken, L.~M.~Duan, P.~R.~Berman, D.~G.~Steel, A.~S.~Bracker, D.~Gammon, and L.~J.~Sham. Demonstration of Quantum Entanglement between a Single Electron Spin Confined to an InAs Quantum Dot and a Photon. {\it Phys. Rev. Lett.} {\bf 110}, 167401 (2013).
\bibitem{Waks2006} E.~Waks and J.~Vuckovic. Dipole Induced Transparency in Drop-Filter Cavity-Waveguide Systems. {\it Phys. Rev. Lett.} {\bf 96}, 153601 (2006).
\bibitem{Hu2008} C.~Y.~Hu, A.~Young, J.~L.~O'Brien, W.~J.~Munro, and J.~G.~Rarity. Giant optical Faraday rotation induced by a single-electron spin in a quantum dot: Applications to entangling remote spins via a single photon. {\it Phys. Rev. B} {\bf 78}, 085307 (2008).
\bibitem{Kimble1998} H.~J.~Kimble. Strong interactions of single atoms and photons in cavity QED. {\it Physica Scripta} {\bf T76}, 127 (1998).
\bibitem{Barrett2005} S.~D.~Barrett and P.~Kok. Efficient high-fidelity quantum computation using matter qubits and linear optics. {\it Phys. Rev. A} {\bf 71}, 060310(R) (2005).
\bibitem{Munro2010} W.~J.~Munro, K.~A.~Harrison, A.~M.~Stephens, S.~J.~Devitt, and K.~Nemoto. From quantum multiplexing to high-performance quantum networking. {\it Nature Photon.} {\bf 4}, 792-796 (2010).
\bibitem{Tamarat2008} P.~Tamarat, N.~B.~Manson, J.~P.~Harrison, R.~L.~McMurtrie, A.~Nizovtsev, C.~Santori, R.~G.~Beausoleil, P.~Neumann, T.~Gaebel, F.~Jelezko, P.~Hemmer, and J.~Wrachtrup. Spin-flip and spin-conserving optical transitions of the nitrogen-vacancy centre in diamond. {\it New J. Phys.} {\bf 10}, 045004 (2008).
\bibitem{Maze2011} J.~R.~Maze, A.~Gali, E.~Togan, Y.~Chu, A.~Trifonov, E.~Kaxiras, and M.~D.~Lukin. Properties of nitrogen-vacancy centers in diamond: the group theoretic approach. {\it New J. Phys.} {\bf 13}, 025025 (2011).
\bibitem{Volz2011} J.~Volz, R.~Gehr, G.~Dubois, J.~Est\`eve, and J.~Reichel. Measurement of the internal state of a single atom without energy exchange. {\it Nature} {\bf 475}, 210-213 (2011).
\bibitem{Felton2009} S.~Felton, A.~M.~Edmonds, M.~E.~Newton, P.~M.~Martineau, D.~Fisher, D.~J.~Twitchen, and J.~M.~Baker. Hyperfine interaction in the ground state of the negatively charged nitrogen vacancy center in diamond. {\it Phys. Rev. B} {\bf 79}, 075203 (2009).
\bibitem{RB2001} R.~Raussendorf and H.~J.~Briegel. A one-way quantum computer. {\it Phys. Rev. Lett.} {\bf 86}, 5188-5191 (2001).
\bibitem{Kitaev2003} A.~Y.~Kitaev. Fault-tolerant quantum computation by anyons. {\it Ann. Phys.} {\bf 303}, 2 (2003).
\bibitem{Dennis2002} E.~Dennis, A.~Kitaev, A.~Landahl, and J.~Preskill. Topological quantum memory. {\it J. Math. Phys.} {\bf 43}, 4452 (2002).
\bibitem{SJ09} R.~Stock and D.~F.~V.~James. Scalable, High-Speed Measurement-Based Quantum Computer Using Trapped Ions. {\it Phys. Rev. Lett.} {\bf 102}, 170501 (2009).
\bibitem{DFSG08} S.~J.~Devitt, A.~G.~Fowler, A.~M.~Stephens, A.~D.~Greentree, L.~C.~L.~Hollenberg, W.~J.~Munro, and K.~Nemoto. Architectural design for a topological cluster state quantum computer. {\it New J. Phys.} {\bf 11}, 083032 (2009).
\bibitem{JMFMKLY10} N.~C.~Jones, R.~V.~Meter, A.~Fowler, P.~McMahon, J.~Kim, T.~Ladd, and Y.~Yamamoto. A Layered Architecture for Quantum Computing Using Quantum Dots. {\it Phys. Rev. X} {\bf 2}, 031007 (2012).
\bibitem{Nickerson2013} N.~H.~Nickerson, Y.~Li, and S.~C.~Benjamin. Topological quantum computing with a very noisy network and local error rates approaching one percent. {\it Nature Comm.} {\bf 4}, 1756 (2013).
\bibitem{Devitt2012} S.~J.~Devitt, A.~M~Stephens, W.~J.~Munro, and K.~Nemoto. Requirements for fault-tolerant factoring on an atom-optics quantum computer. arXiv:1212.4934 (2012).
\bibitem{Balasubramanian2009} G.~Balasubramanian, P.~Neumann, D.~Twitchen, M.~Markham, R.~Kolesov, N.~Mizuochi, J.~Isoya, J.~Achard, J.~Beck, J.~Tissler, V.~Jacques, P.~R.~Hemmer, F.~Jelezko, and J.~Wrachtrup. Ultralong spin coherence time in isotopically engineered diamond. {\it Nature Mater.} {\bf 8}, 383-387 (2009).
\bibitem{Ishikawa2012} T.~Ishikawa, K.-M.~C.~Fu, C.~Santori, V.~M.~Acosta, R.~G.~Beausoleil, H.~Watanabe, S.~Shikata, and K.~M.~Itoh. Optical and Spin Coherence Properties of Nitrogen-Vacancy Centers Placed in a 100 nm Thick Isotopically Purified Diamond Layer. {\it Nano Lett.} {\bf 12}, 2083-2087 (2012).
\bibitem{Fang2013} K.~Fang, V.~M.~Acosta, C.~Santori, Z.~Huang, K.~M.~Itoh, H.~Watanabe, S.~Shikata, and R.~G.~Beausoleil. High-Sensitivity Magnetometry Based on Quantum Beats in Diamond Nitrogen-Vacancy Centers. {\it Phys. Rev. Lett.} {\bf 110}, 130802 (2013).
\bibitem{Stephens2012} A.~M.~Stephens, J.~Huang, K.~Nemoto, and W.~J.~Munro. Hybrid-systems approach to fault-tolerant quantum communication. {\it Phys. Rev. A} {\bf 87}, 052333 (2013).
\bibitem{He1993} X.~F.~He, N.~B.~Manson, and P.~T.~H.~Fisk. Paramagnetic resonance of photoexcited N-V defects in diamond. II. Hyperfine interaction with the 14N nucleus. {\it Phys. Rev. B} {\bf 47}, 8816 (1993).
\bibitem{Manson2006} N.~B.~Manson, J.~P.~Harrison, and M.~J.~Sellars. Nitrogen-vacancy center in diamond: Model of the electronic structure and associated dynamics. {\it Phys. Rev. B} {\bf 74}, 104303 (2006).
\bibitem{Young2009} A.~Young, C.~Y.~Hu, L.~Marseglia, J.~P.~Harrison, J.~L.~O'Brien, and J.~G.~Rarity. Cavity enhanced spin measurement of the ground state spin of an NV center in diamond. {\it New J. Phys.} {\bf 11}, 013007 (2009).
\bibitem{Santori2010} C.~Santori, D.~Fattal, and Y.~Yamamoto. {\it Single-photon Devices and Applications}. Wiley-VCH (2010).
\bibitem{Maze2008} J.~R.~Maze, P.~L.~Stanwix, J.~S.~Hodges, S.~Hong, J.~M.~Taylor, P.~Cappellaro, L.~Jiang, M.~V.~G.~Dutt, E.~Togan, A.~S.~Zibrov, A.~Yacoby, R.~L.~Walsworth, and M.~D.~Lukin, Nanoscale magnetic sensing with an individual electronic spin in diamond, {\it Nature} {\bf 455}, 644--647 (2008).
\bibitem{Jarmola2012} A.~Jarmola, V.~M.~Acosta, K.~Jensen, S.~Chemerisov, and D.~Budker, Temperature and magnetic field dependent longitudinal spin relaxation in nitrogen-vacancy ensembles in diamond, {\it Phys. Rev. Lett.} {\bf 108}, 197601 (2012).
\bibitem{Robledo2011a} L.~Robledo, H.~Bernien, T.~van der Sar, and R.~Hanson, Spin dynamics in the optical cycle of single nitrogen-vacancy centres in diamond, {\it New J. Phys.} {\bf 13}, 025013 (2011).
\bibitem{Imoto} N.~Imoto, H.~A.~Haus, and Y.~Yamamoto, Quantum nondemolition measurement of the photon number via the optical Kerr effect, {\it Phys. Rev. A} {\bf 32}, 2287 (1985).
\bibitem{Reichel2012} J.~Volz, R.~Gehr, G.~Dubois, J.~Est\`eve, and J.~Reichel, Measurement of the internal state of a single atom without energy exchange, {\it Nature} {\bf 475}, 210--213 (2011).
\bibitem{DohertyReview} M.~W.~Doherty, B.~B.~Manson, P.~Delaney, F.~Jelezko, J.~Wrachtrup, and L.~C.~L.~Hollenberg, The nitrogen-vacancy colour centre in diamond, {\it Phys. Rep.} {\bf 528}, 145 (2013).
\bibitem{Dorenbos} S.~N.~Dorenbos, E.~M.~Reiger, U.~Perinetti, V.~Zwiller, T.~Zijlstra, and T.~M.~Klapwijk, Low noise superconducting single photon detectors on silicon, {\it Appl. Phys. Lett.} {\bf 93}, 131101 (2008).
\bibitem{dlcz} L.-M.~Duan, M.~D.~Lukin, J.~I.~Cirac, and P.~Zoller, Long-distance quantum communication with atomic ensembles and linear optics, {\it Nature} {\bf 414}, 413--418 (2001).
\bibitem{barrett2005} S.~D.~Barrett and P.~Kok, Efficient high-fidelity quantum computation using matter qubits and linear optics, {\it Phys. Rev. A} {\bf 71}, 060310(R) (2005).
\bibitem{Raussendorf2001} R.~Raussendorf and H.~J.~Briegel, A one-way quantum computer, {\it Phys. Rev. Lett.} {\bf 86}, 5188 (2001).
\bibitem{kok2007} P. ~Kok, W.~J.~Munro, K.~Nemoto, T.~C.~Ralph, J.~P.~Dowling, and G.~J.~Milburn, Linear optical quantum computing with photonic qubits, {\it Rev. Mod. Phys.} {\bf 79}, 135-174 (2007).
\bibitem{everitt2012} M.~S.~Everitt, S.~J.~Devitt, W.~J.~Munro, and K.~Nemoto, High fidelity gate operations with the coupled nuclear and electron spins of a nitrogen vacancy center in diamond, arXiv:1309.3107.
\bibitem{footnote1}
This is a fast and clean operation so long as a $\sigma_+$ polarized field is used. The polarized field prevents population of the $| -1\rangle$ level.
\bibitem{RH07} R.~Raussendorf and J.~Harrington, Fault-tolerant quantum computation with high threshold in two dimensions, {\em Phys. Rev. Lett.} \textbf{98}, 190504 (2007).
\bibitem{RHG06} R.~Raussendorf, J.~Harrington, and K.~Goyal, A fault-tolerant one-way quantum computer, {\it Ann. Phys.} \textbf{321}, 2242 (2006).
\bibitem{WFSH10} D.~S.~Wang, A.~G.~Fowler, A.~M.~Stephens, and L.~C.~L.~Hollenberg, Threshold error rates for the toric and planar codes, {\it Quant. Inf. Comput.} \textbf{10}, 456--469 (2010).
\bibitem{BS10} S.~D.~Barrett and T.~M.~Stace, Fault tolerant quantum computation with very high threshold for loss errors, {\it Phys. Rev. Lett.} \textbf{105}, 200502 (2010).
\bibitem{barrettstace10} T.~M.~Stace and S.~D.~Barrett, Error correction and degeneracy in surface codes suffering loss, {\it Phys. Rev. A.} \textbf{81}, 022317 (2010).
\bibitem{fowler2011} A.~G.~Fowler, A.~C.~Whiteside, and L.~C.~L.~Hollenberg, Towards practical classical processing for the surface code, {\it Phys. Rev. Lett.} {\bf 108}, 180501 (2012).
\end{thebibliography}
\appendix \section{Description of the NV$^-$ centre} The dynamics of the NV$^-$ centre, consisting of the electron spin-1 $^3A_2$ manifold and the nuclear spin-1/2 system, can be described by the Hamiltonian $H= H_{\rm e}+H_{\rm n}+H_{\rm e-n}$. The electron spin's ground state Hamiltonian is given by \cite{He1993,Manson2006} $$H_{\rm e} = \hbar ( D S_z^2 +E \left[S_x^2 - S_y^2\right] + g_e \mu_B B S_z),$$
which represents a zero-field splitting ($D/2 \pi = 2.87$ GHz), a strain induced splitting ($E/ 2 \pi \sim 1$-$10$ MHz), and a magnetic field induced splitting ($g_e \mu_B B$), where $\mu_B$ is the Bohr magneton and $g_e=2.0$ is the g-factor. In this Hamiltonian, $S_z, S_x, S_y$ are the usual spin-1 operators. With an externally applied magnetic field $B\sim 20$ mT, the levels $|0\rangle$ and $|+1\rangle$ are far detuned from the $|-1\rangle$ energy level, isolating our electron spin qubit. The nuclear spin Hamiltonian $H_{\rm n} = -\hbar g_n \mu_n B I_z$ represents a magnetic field induced splitting of the $^{15}$N nuclear spin, where $\mu_n$ is the nuclear magneton and $g_n=-0.566$ the nuclear g-factor. $I_z$ is the usual Pauli $Z$ spin-1/2 operator.
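The resulting ground-state level structure can be cross-checked numerically; the sketch below is a minimal calculation assuming $E=0$ (the MHz-scale strain term is negligible here) and $\mu_B/h \approx 13.996$ GHz/T:

```python
D = 2.87e9          # zero-field splitting, Hz
g_e = 2.0           # electron g-factor
muB_h = 13.996e9    # Bohr magneton over Planck's constant, Hz/T
B = 20e-3           # applied axial magnetic field, T

# Eigenvalues of H_e/h = D Sz^2 + g_e (mu_B/h) B Sz on the Sz eigenbasis,
# labelled by m_s in {+1, 0, -1}; the strain term E (~MHz) is neglected
level = {m: D * m**2 + g_e * muB_h * B * m for m in (+1, 0, -1)}

print((level[+1] - level[0]) / 1e9)   # ~3.43 GHz: |0> <-> |+1> qubit splitting
print((level[+1] - level[-1]) / 1e9)  # ~1.1 GHz: |-1> below |+1>
print((level[-1] - level[0]) / 1e9)   # ~2.3 GHz: |-1> above |0>
```

The three printed gaps reproduce the splittings quoted later in the text for $B\sim 20$ mT.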
The hyperfine coupling between the electron and the nuclear spins is given by \cite{Felton2009} $$H_{\rm e-n} = \hbar A_{\parallel} S_z I_z+ \frac{\hbar A_{\perp}}{2} \left(S_+ I_- + S_- I_+ \right),$$
where $S_\pm$ ($I_\pm$) are the electron spin (nuclear spin) raising and lowering operators respectively. This coupling includes an Ising part with coupling strength $A_{\parallel}/2 \pi \sim 3.03$ MHz and an exchange part with coupling constant $A_{\perp}/2 \pi \sim 3.65$ MHz \cite{Felton2009}. With $B\sim 20$ mT the exchange coupling is far off-resonance resulting only in a small dispersive phase shift. This results in a natural controlled-phase gate that operates on a time scale $\tau \sim \pi/ \left[ A_{\parallel}+\frac{A_{\perp}^2}{2 \lambda}\right] \sim 165$ ns, where $\lambda$ is the frequency difference between the electron and nuclear spin levels.
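The quoted gate time follows directly from these coupling constants; below is a quick numerical check. The electron-nuclear frequency difference $\lambda$ is not given explicitly in the text, so a representative value of a few GHz is assumed, which makes the $A_{\perp}$ correction negligible:

```python
import math

# Hyperfine constants from the text, converted to angular frequencies
A_par = 2 * math.pi * 3.03e6    # Ising coupling, rad/s
A_perp = 2 * math.pi * 3.65e6   # exchange coupling, rad/s
lam = 2 * math.pi * 3e9         # assumed electron-nuclear frequency difference, rad/s

# Controlled-phase gate time tau = pi / (A_par + A_perp^2 / (2*lam))
tau = math.pi / (A_par + A_perp**2 / (2 * lam))
print(tau * 1e9)  # close to the quoted 165 ns
```

The dispersive correction $A_{\perp}^2/(2\lambda)$ contributes only at the $10^{-3}$ level, so the gate time is set almost entirely by $A_{\parallel}$.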
An external microwave driving of amplitude $ \Omega_0$ is used to perform the electron and nuclear spin rotations. The driving Hamiltonian can be expressed as $$H_{\rm d}= \hbar \Omega_0 \cos \left(\omega_d t + \phi \right) \left( S_x - \frac{g_n \mu_n}{ g_e \mu_B} I_x\right),$$ where the frequency $\omega_d$ is chosen appropriately to determine whether we drive the electron or nuclear spin, with $\phi $ representing an initial phase offset. By using a polarised field, electron spin rotations can be achieved with high fidelity in at most a few nanoseconds. The nuclear spin operations are much slower due to the weak gyromagnetic ratio but can be achieved (with high fidelity) in a few microseconds by using the hyperfine coupling to enhance the natural nuclear spin splitting.
Next, the NV$^-$ centre also has a $^3E$ energy level manifold with optical transitions to the $^3A_2$ manifold. The optical transitions between one of the $^3A_2$ magnetic sub-levels and the $^3E$ levels coupled to the cavity field can be represented by
$$H_{\rm e-f} = \hbar \sum_{i=1}^{6} g_{m_s,i} \left[ a^\dagger |i \rangle \langle M_i | + a | M_i\rangle \langle i | \right],$$ where $M_i$ are the energy eigenstates, in order of ascending energy, within the $^3E$ manifold. At zero strain and magnetic field, the $^3E$ manifold is represented by the basis states $\{M_{1...6}\}=\{E_2,E_1,E_x,E_y,A_1,A_2\}$, neglecting a small mixture of the $E_{x,y}$ and $E_{1,2}$ states due to spin-spin interaction. The optical field of frequency $\omega$ can be described by $H_{\rm f} = \hbar \omega a^\dagger a$ with $a^\dagger$ $(a)$ being the field's creation (annihilation) operators. The cavity coupling rate for a given transition is given by $g_{m_s,i}$. The basis states $E_x$ and $E_y$ have electronic spin zero, while the others ($A_{1,2}$ and $E_{1,2}$) are equal superpositions of spin $\pm 1$ \cite{Togan2010,Maze2011}. For our scheme, we apply an electric field of $E_x=1\,$GHz in the $x$-direction so as to lift the degeneracy of the spin-zero states in the excited-state manifold. This greatly reduces the sensitivity to rogue strain or electric field in the $y$-direction, making the system more robust.
\subsection{Coherence Properties} It is important to mention the coherence properties of our electron and nuclear spins, as these can vary significantly between samples. Here we assume a single $^{15}$NV$^-$ centre created in an isotopically purified (99.9\%+ $^{12}$C) diamond substrate \cite{Balasubramanian2009} and that our module will operate at low temperature (4-20 K) rather than room temperature. In such a case it has been reported that $T_1$ of the electron spin is greater than 1 s, while $T_2^* \sim 90$ $\mu$s \cite{Ishikawa2012,Fang2013} with $T_2$ much longer \cite{Balasubramanian2009}. The nuclear spin $T_1$ and $T_2$ are at least 0.2 s at present \cite{Maurer2012}. The limiting coherence parameter in this design is the $T_2^*$ of the electron spin during the 165 ns controlled-phase gate. However, with Gaussian decay of the form $\exp \left[ - \left(2 t / T_2^*\right)^2\right]$, the error associated with this is small in principle ($<10^{-5}$) \cite{Balasubramanian2009}.
\section{The diamond module}
At the centre of our approach is a quantum module in which an NV$^-$ centre is embedded in an optical cavity (Fig.~\ref{main-fig}). The NV$^-$ centre is composed of a spin-one electronic spin and a spin-half $^{15}$N nuclear spin. Our module is an interface between the optical, microwave and radio frequency regimes allowing information to be transferred between them. It works as follows: state dependent reflection allows the creation of entanglement between an external optical field \cite{Waks2006,Hu2008,Young2009,Santori2010} and the electron spin, while the hyperfine interaction allows the transfer of the electron spin state to the long-lived nuclear spin. It also allows the nuclear spin to be measured via the electron spin, thus completing the interface. While this is conceptually simple, the details of the physical system lead to a number of complications which we will address in this supplementary material.
\begin{figure}
\caption{Schematic representation of a repeater node containing an optical cavity with an embedded NV$^-$ centre. The NV$^-$ centre possesses both an electron spin and a $^{15}$N nuclear spin. A static magnetic field of approximately 50 mT is used to separate the $m_s=\pm 1$ levels. }
\label{main-fig}
\end{figure}
To understand exactly how this module operates we must examine the interactions between the three components of our hybrid system (optical field, electron spin, nuclear spin) as a whole. The overall system including couplings and driving fields can be described by the Hamiltonian \begin{eqnarray} H= H_{\rm f}&+& H_{\rm e}+H_{\rm n}+H_{\rm d} +H_{\rm e-n}+H_{\rm e-f}, \end{eqnarray} where $H_{\rm f} = \hbar \omega a^\dagger a$ is the Hamiltonian for the optical field detuned from the cavity resonance frequency $\omega_c$ by $\Delta=\omega_c-\omega$ with $a$ $ (a^\dagger)$ being the field annihilation (creation) operator.
The second term $H_{\rm e} = \hbar ( D S_z^2 + E \left[S_x^2 - S_y^2\right] + g_e \mu_B B S_z)$ represents a zero field splitting ($D/2 \pi = 2.87$ GHz), a strain induced splitting ($E/2 \pi < 10\,$MHz) and a magnetic field induced splitting ($g_e \mu_B B$) for the NV$^-$ centre's electron spin \cite{Felton2009}. In this spin-one system, $S_{x,y,z}$ represents the generalised Pauli $X$,$Y$,$Z$ operators with $S_+$ ($S_-$) being the raising (lowering) operator. Further, $\mu_B$ is the Bohr magneton and $g_e=2.0$ the g-factor. For an externally applied magnetic field of $B\sim 20$ mT, the $|0\rangle$ and $|+1\rangle$ levels are separated by approximately $3.43$ GHz. The $|m_s=-1\rangle$ energy level is detuned approximately $1.1$ GHz below the $|m_s=+1\rangle$ level and $\sim2.3$ GHz above the $|m_s=0\rangle$ level.
The third term $H_{\rm n} = -\hbar g_n \mu_n B I_z$ represents a magnetic field induced splitting of the nuclear spin with $I_z$ being the Pauli $Z$ spin-half operator. Here, $\mu_n$ is the nuclear magneton and $g_n=-0.566$ the nuclear g-factor. The computational basis states of the nuclear spin are $|\downarrow\rangle$ and $|\uparrow\rangle$.
Next, $H_{\rm d} = \hbar \Omega_0 \cos \left(\omega_d t + \phi \right) \left( S_x - \frac{g_n \mu_n}{ g_e \mu_B} I_x\right)$ represents an electromagnetic field driving whose magnitude on the electron (nuclei) is determined by both the amplitude $\Omega_0$ of the applied field and the ratio of $g_n \mu_n / g_e \mu_B$. The frequency $\omega_d$ is chosen appropriately to determine whether we drive the electron or nuclear spin while $\phi $ specifies the phase.
The first of the coupling terms $H_{\rm e-n} = \hbar A_{\parallel} S_z I_z+ \hbar \frac{A_{\perp}}{2} \left(S_+ I_- + S_- I_+ \right)$ represents a hyperfine interaction between the electron and nuclear spin. This coupling contains both an Ising part with coupling strength $A_{\parallel}$ and an exchange part with coupling constant $A_{\perp}$. For a $^{15}$N nucleus, $A_{\parallel}/2 \pi \sim 3.03$ MHz and $A_{\perp}/2 \pi \sim 3.65$ MHz \cite{Felton2009}. The second coupling term $H_{\rm e-f}$ is between the optical field and the electronic spin. It is detailed in the methods section of the main text and will be discussed in the next several sections.
Before proceeding it is also useful to consider the coherence parameters of our NV$^-$ centre. With isotopically purified CVD diamond \cite{Maze2008,Ishikawa2012,Fang2013} we can expect electronic spin coherence times $T_2^*$ of 90 $\mu$s and $T_2 > 1.8$ ms while the relaxation $T_1$ can be over 1 second when the sample operates in the 4-80 K regime \cite{Jarmola2012}. The coherence times of nuclear spins have been shown to exceed 1 s \cite{Maurer2012}. We now explore in detail measurement and entanglement of two NV$^-$ centres.
\subsection{Level structure}
In this section we consider the main features of an NV$^-$ centre in a microcavity to ascertain how well the state of an NV$^-$ centre can be coupled to an external optical field and detected, and how two NV$^-$ centres can be entangled by detection. We apply a magnetic field $B_z=20\,$mT to separate the ground state levels $\vert \text{+1}\rangle$ and $\vert-1\rangle$. We aim to use resonant light tuned to the $\vert 0\rangle \leftrightarrow \vert M_{3}\rangle\equiv(0.998\vert E_{y}\rangle+0.07 \vert E_{1}\rangle)\simeq \vert E_{y}\rangle$ transition with almost pure $x$-polarisation. We apply an electric field of $1\,$GHz to lift the degeneracy between the $\vert E_{x}\rangle$ and $\vert E_{y}\rangle$ states, and also to increase the detuning between $\vert 0\rangle \leftrightarrow \vert M_{5}\rangle$ and other transitions. The electric field has a negligible effect on the ground state triplet, leading to an amplitude mixing of the $\vert \text{+1}\rangle$ and $\vert-1\rangle$ levels on the order of $2\times 10^{-5}$. The closest strongly allowed transition to $\vert 0\rangle \leftrightarrow \vert M_{3}\rangle$ is the $\vert \text{+1}\rangle \leftrightarrow \vert M_{5}\rangle\equiv(0.98\vert A_{1}\rangle+0.17\vert A_{2}\rangle)$ transition, with a detuning of $\delta_{\omega}=2\pi\times 2.71\,$GHz. Furthermore, this transition is almost purely circularly polarised. Assuming transform-limited linewidths, at low temperatures (2 K) the excited-state decay transitions have amplitude decay rates of $\gamma(M_3)=2\pi\,\times\,6\,$MHz and $\gamma(M_5)=2\pi\,\times\,11\,$MHz \cite{Robledo2011a} so that in both cases $\delta_{\omega}\gg \gamma$. All other significantly allowed transitions are detuned even further and can be neglected. \subsection{Quantum non-demolition measurement of the electron spin state}
We now consider an NV$^-$ centre placed at the antinode of a cavity resonant with the $\vert 0\rangle \leftrightarrow \vert M_{3}\rangle$ transition. The natural entanglement we can generate between the electron spin and optical field allows us to perform a quantum non-demolition (QND) measurement \cite{Imoto} of the electron spin (and thus also its initialisation). We can use a single photon to probe the electron spin a number of times and from the measurement patterns (clicks or no clicks) determine with high probability the state of the electron spin, that is whether it is in the $|0\rangle$ or $|+1\rangle$ state. In the following, we assume the cavity to have no losses other than the transmission through the mirrors, and a spatially perfectly mode-matched input beam. The core of the proposal is the different effect that an NV$^-$ centre in the ground state $\vert 0\rangle$, rather than in state $\vert \text{+1}\rangle$, has on light impinging on the cavity. The resonator is assumed to have a finesse $\mathcal{F}$ and a $1/e^2$ mode intensity radius $w_C$, leading to a cooperativity of \begin{figure}
\caption{ Ground (black) and excited state energy eigenstates. Left: effect of $B_z$ up to the chosen field of $200\,$G, at an electric field $E_x=1\,$GHz. Middle: Level structure at the chosen fields, showing the laser frequency $\omega$ and its detuning from selected transitions. Right: leading terms of the energy eigenstates.}
\label{fig:states}
\end{figure} \begin{equation}\label{eq:coop} C=\frac{2}{\pi}\frac{\sigma_E}{\sigma_C} \mathcal{F}\eta_{BR}\,. \end{equation} Here, $\sigma_E=3\lambda^2/(2\pi)$ is the emitter scattering cross-section, while $\sigma_{C}=\pi w_C^2$ is the cavity mode area. $\eta_{BR}$ is the branching ratio of the transition in question, which is $\eta_{BR}(M_3\leftrightarrow 0)=4\%$ and $\eta_{BR}(M_5 \leftrightarrow \pm 1)=2\%$. The resonator amplitude decay rate depends on the resonator length $L$ with $\kappa=\pi c/(2L \mathcal{F})$. The amplitudes for a photon with linewidth $\gamma_{phot}\ll\kappa,\gamma$ to be reflected or transmitted by a cavity containing an NV$^-$ centre are given by \begin{eqnarray} \label{eq:amps} A_r &=& 1-\frac{1-A}{(1-i\Delta_C)-2C/(1-i\Delta_E)},\nonumber\\ A_t &=& \frac{\sqrt{1-A^2}}{(1-i\Delta_C)-2C/(1-i\Delta_E)}, \end{eqnarray} with $\Delta_C=(\omega_{\text{laser}}-\omega_{\text{cavity}})/\kappa$, $\Delta_E=(\omega_{\text{laser}}-\omega_{\text{NV}})/\gamma$, and where $A=(r_1-r_2)/(1-r_1 r_2)$ is the amplitude of the reflected light for an empty cavity on resonance, with amplitude reflectance coefficients $r_{1,2}$ for the input and output mirrors, respectively. Then the probabilities for reflection and transmission are \begin{eqnarray} \label{eq:probs} P_R &=& 1-\frac{(1-A)\left(1+A+4C+(1+A)\Delta_E^2\right)}{4C^2+4C(1-\Delta_E\Delta_C)+(1+\Delta_E^2)(1+\Delta_C^2)},\nonumber\\ P_T &=& \frac{(1-A^2)\left( 1+\Delta_E^2\right)}{4C^2+4C(1-\Delta_E\Delta_C)+(1+\Delta_E^2)(1+\Delta_C^2)}. \end{eqnarray} By energy conservation, the incoherent scattering probability is $P_S\left(\vert 0\rangle\right)=1-(P_R+P_T)$. For emitter, cavity and probe light tuned to resonance, the expressions reduce to \begin{equation}\label{eq:probsRes} P_R^{res.}=\left( \frac{2C+A}{2C+1}\right) ^2,\, P_T^{res.}=\frac{1-A^2}{(2C+1)^2}. \end{equation}
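Eqs.~(\ref{eq:probs}) and (\ref{eq:probsRes}) can be checked against each other numerically; a minimal sketch (the function name is illustrative, not part of the model):

```python
def reflect_transmit(C, A, dE, dC):
    # P_R and P_T as in Eq. (probs): C is the cooperativity, A the empty-cavity
    # on-resonance reflection amplitude, dE and dC the normalised emitter and
    # cavity detunings Delta_E and Delta_C.
    denom = 4*C**2 + 4*C*(1 - dE*dC) + (1 + dE**2)*(1 + dC**2)
    P_R = 1 - (1 - A)*(1 + A + 4*C + (1 + A)*dE**2) / denom
    P_T = (1 - A**2)*(1 + dE**2) / denom
    return P_R, P_T

# On resonance (dE = dC = 0) the general formulas must reduce to the closed
# forms of Eq. (probsRes): P_R = ((2C+A)/(2C+1))^2, P_T = (1-A^2)/(2C+1)^2
for C, A in [(0.5, -1.0), (1.0, 0.0), (50.0, 0.0)]:
    P_R, P_T = reflect_transmit(C, A, 0.0, 0.0)
    assert abs(P_R - ((2*C + A)/(2*C + 1))**2) < 1e-12
    assert abs(P_T - (1 - A**2)/(2*C + 1)**2) < 1e-12

# High-cooperativity limit with A = 0: near-unit reflection with emitter present
print(reflect_transmit(50.0, 0.0, 0.0, 0.0))  # P_R close to 1, P_T small
```

The loop confirms the algebraic reduction term by term for representative values of $C$ and $A$.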
We now need to maximise the difference in reflected signal caused by an NV$^-$ centre in the ground $\vert 0\rangle$ state, which can be done in two ways:\\
\begin{itemize} \item \textbf{High-cooperativity implementation:}\;\; In this approach, we minimise $A$ so that $A\simeq0$ and maximise $C$. Then the signal for the empty cavity results in $P_R\simeq0$ while the signal for a cavity containing an NV$^-$ centre in the ground $\vert 0\rangle$ state tends to $P_R\simeq1$ for $C\gg 1$. In this limit, the emitter excitation decreases with cooperativity as $P_S\rightarrow1/C$ \cite{Reichel2012}. However, there is a small off-resonant excitation of NV$^-$ centres in the $\vert \text{+1}\rangle$ ground state which remains even for large cooperativity, and limits the performance of the device.
Excitation of the NV$^-$ centre can be significantly decreased for either the $\vert 0\rangle$ or $\vert +1\rangle$ ground states by using an appropriately polarised optical field. In principle, the excitation for one of these states can be entirely turned off. In our situation we select a polarised field to suppress excitation of the $|+1\rangle \leftrightarrow |M_5\rangle$ transition such that $P_S(|+1\rangle) \rightarrow 0$.
This approach also requires careful matching of mirror reflectivities.\\
\item \textbf{Low-cooperativity implementation:}\;\; In this approach, we select a large negative value for $A$ ($A\simeq -1$) and tune $C$ such that $2C+A=0$. This can be arranged by choosing $r_2\gg r_1$. This approach is both more flexible and more readily achievable as we only require an initial cooperativity of $C(\vert 0\rangle \leftrightarrow \vert M_{3}\rangle)\geq0.5$. The cooperativity can then be reduced to $0.5$ by rotating the polarisation of the incoming photons away from the $x-$direction. Alternatively, the interaction strength between light and emitter can be decreased by detuning. Assuming a value of $A=-1$, a detuning of $\Delta_E=\Delta_C=\sqrt{2C-1}$ leads to vanishing reflection probability. Conversely, the reflection probability approaches $A^2$ when the NV$^-$ centre is in the state $|+1\rangle$, as can be seen from Eqn.~(\ref{eq:probs}), so detecting a photon projects the NV$^-$ centre onto this state. However, as this implementation is based on the conditional absorption of a photon, the performance is limited by spin-flip-inducing transitions. Experimentally, these have been observed to be on the order of $1\%$, which excludes this implementation for our purposes unless this issue can be addressed. It will nonetheless be suitable for initial demonstrations of the entangling mechanism. For our work we therefore focus on the high-cooperativity implementation.
\end{itemize}
{\bf Measurement sequence and sources of error}: We aim for near-perfect contrast of the empty cavity and maximum reflectivity---that is, $A_r(|+1\rangle)\sim A\rightarrow 0$ and $A_r(|0\rangle)\rightarrow 1$. Our state detection is based on a measurement -- spin rotation -- measurement sequence where we assume that negligible errors occur during the spin rotation. Furthermore, we assume that this sequence will be repeated many times, as photon loss will be unavoidable in a realistic device. If the electronic spin is in the $|0\rangle$ state, the probability that our single-photon detector clicks at least once in $s$ attempts is \begin{eqnarray}
P_{click, 0}(s)&=& 1-(1-|\eta A_r(|0\rangle)|^2)^s \nonumber \\
&=& 1 - \sum_{i=0}^s \binom{s}{i} (-1)^i |\eta A_r(|0\rangle)|^{ 2 i}. \end{eqnarray} Then, the probability of at least one click in $s$ attempts with no spin flips is \begin{eqnarray}
\sum_{i=1}^{s } |\eta A_r(|0\rangle)|^{2 i} \left[1-|\eta A_r(|0\rangle)|^2-P_S(|0\rangle) P_{\text{flip}, 0} \right]^{s-i} \nonumber,
\end{eqnarray}
where $P_{\text{flip}, 0}$ and $P_{\text{flip}, +1}$ are the probabilities of a single measurement inducing a spin flip upon excitation when the NV$^-$ centre is in one of the qubit states, due to resonant and off-resonant excitation, respectively. Conversely, the probability for the detector never to click in $s$ attempts and not spin flip when the NV$^-$ centre is in the $|+1\rangle$ state, is
$$P_{click, +1}(s)= \left[1-|\eta A_r(|+1\rangle)|^2- P_S(|+1\rangle) P_{\text{flip}, +1}\right]^s$$
where $\eta^2$ is the single photon detection efficiency, including all losses along the channel. The error probability for our entire sequence is then \begin{align}\label{eq:QNDErr}
P_{err}^{QND}\sim &\; 1 - \sum_{i=1}^{s } |\eta A_r(|0\rangle)|^{2 i} \nonumber \\
\times& \left[1-|\eta A_r(|0\rangle)|^2-P_S(|0\rangle) P_{\text{flip}, 0} \right]^{s-i} \\
\times& \left[1-|\eta A_r(|+1\rangle)|^2- P_S(|+1\rangle) P_{\text{flip}, +1}\right]^s \nonumber . \end{align}
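Eq.~(\ref{eq:QNDErr}) is straightforward to evaluate numerically. The sketch below also verifies the binomial expansion used for $P_{click,0}(s)$; the parameter values in the example are illustrative assumptions, not values taken from the figures:

```python
from math import comb

def qnd_error(s, eta2, Ar0_sq, Ar1_sq, Ps0, Ps1, pflip0=0.003, pflip1=0.35):
    # Error probability of the repeated QND readout sequence, Eq. (QNDErr);
    # eta2 = eta^2 is the total photon detection efficiency, Ar0_sq and Ar1_sq
    # are |A_r|^2 for the spin in |0> and |+1> respectively.
    p0 = eta2 * Ar0_sq   # per-attempt click probability, spin in |0>
    p1 = eta2 * Ar1_sq   # per-attempt click probability, spin in |+1>
    good0 = sum(p0**i * (1 - p0 - Ps0 * pflip0)**(s - i) for i in range(1, s + 1))
    good1 = (1 - p1 - Ps1 * pflip1)**s
    return 1 - good0 * good1

# Sanity check of the binomial expansion: 1-(1-x)^s = 1 - sum_i C(s,i)(-x)^i
s, x = 5, 0.3
assert abs((1 - (1 - x)**s) - (1 - sum(comb(s, i) * (-x)**i for i in range(s + 1)))) < 1e-12

# Illustrative parameters in the high-cooperativity regime: near-unit
# reflection for |0>, near-zero for |+1>, modest scattering probabilities
err = qnd_error(s=3, eta2=0.3, Ar0_sq=0.98, Ar1_sq=1e-3, Ps0=0.02, Ps1=1e-3)
assert 0.0 <= err <= 1.0
print(err)
```

This transcribes the expression in the text as written; the operating points actually used in the scheme are those plotted in Fig.~\ref{fig:qndError}.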
One can immediately see the advantage of having $ |A_{r}(|+1\rangle)|^2 \sim A = 0$. A detector click strongly indicates that the NV$^-$ centre is in the $|0\rangle$ state. We assume $P_{\text{flip}, 0}=0.003$ and $P_{\text{flip}, +1}=0.35$ (see Fig.~\ref{fig:mixing}), but we note that there is no current consensus on these values in the literature \cite{DohertyReview}. A key advantage of our measurement sequence is that it allows us to determine whether the NV$^-$ centre exits the qubit subspace into the state $|-1\rangle$.
\begin{figure}
\caption{ Simplified energy level diagram with key transitions through the metastable states. Approximate probabilities for these transitions are indicated.}
\label{fig:mixing}
\end{figure}
The scheme can easily be modified to use weak coherent pulses instead of single photons. We do not pursue this approach, however, to avoid errors due to de-ionization of the NV$^-$ centre, which are possible for coherent states given their non-zero overlap with Fock states with $n>1$. While the probability of this occurring may be small, it may be difficult to detect explicitly.
\begin{figure}\label{fig:qndError}
\end{figure}
{\bf QND measurement performance}: The QND measurement performance is depicted in Fig.~\ref{fig:qndError} for both the low-cooperativity and high-cooperativity approaches. The low-cooperativity approach ($A\approx -1$) is limited by the spin-flip probability per measurement for the resonant excitation situation. The numerically optimised points for the case of negative $A$ are closely matched by choosing $\Delta_E=\Delta_C=\sqrt{2C-1}$ (red lines in Fig.~\ref{fig:qndError}), while for $A=0$, $\Delta_E=0$ and $\Delta_C=C\gamma(M_5)/\delta_{\omega}$ (blue lines in Fig.~\ref{fig:qndError}). The error rate of a single QND measurement is on the order of $1\%$. For our scheme, we require an error rate of less than $7.3 \times 10^{-3}$ (see main text). To overcome this limitation, we consider the high-cooperativity approach where we perform multiple measurements, shown in the middle panel of Fig.~\ref{fig:qndError}. The error rate as a function of detection efficiency is shown in the lower panel of Fig.~\ref{fig:qndError}. We require optical efficiency (including detectors) of $\eta^2\gtrsim 30\%$ to meet the requirements of our scheme (see main text). A useful working cooperativity is of the order of $C\sim 50$.
Before outlining how to generate remote entanglement, we will briefly discuss detection errors, namely photon loss and dark counts: \begin{itemize} \item {\bf Photon loss}: This is the most common error, which can arise from a number of sources including absorption or scattering in the channel, coupling inefficiencies between the cavity and channel, and inefficient single-photon detection. This error simply decreases the probability that we successfully measure a photon at the detector. We can model it by a parameter $\eta^2$ which ranges over $[0,1]$, with $\eta^2=1$ corresponding to no loss. The probability of successfully measuring the photon is the ideal success probability multiplied by $\eta^2$.
\item {\bf Dark counts}: This error is where the detector clicks when no photon was incident. In principle, with current gated APDs, this dark count probability could be less than $10^{-5}$ per time window \cite{Dorenbos}. \end{itemize}
\subsection{Entanglement}
The creation of an entangled state between two remote electron states can be described in a straightforward manner given the previous discussion. Our scheme, which we depict in Fig.~\ref{spin-entangler}, is comparable to the protocol of Duan, Lukin, Cirac and Zoller \cite{dlcz}. We place two microcavities, each containing a single NV$^-$ centre, at the output ports of a 50:50 beamsplitter in a Michelson interferometer configuration. For simplicity, we set $A_{r,i}(|+1\rangle)=A_i$ and $A_{r,i}(|0\rangle)=A_{r,i}$ with $i=(a,b)$ the indices of the two cavities.
\begin{figure*}
\caption{Schematic representation of the entanglement of two individual NV$^-$ centres located in remote cavities based on a single photon conditioning measurement. A single photon is split into two modes on a 50:50 beamsplitter with the bottom mode directed to the first cavity. This mode, now containing a superposition of no photon and one photon, interacts with the electron spin prepared as $\frac{1}{\sqrt 2} \left[|0\rangle+|+1\rangle\right]$. The change of transmission coefficient dependent on the electron spin state entangles our two subsystems. Then the reflected mode from the cavity and the top mode from the 50:50 beamsplitter are temporally multiplexed into the same fiber and transmitted to the second module containing an NV$^-$ centre. The temporally multiplexed photonic signal is then separated back into its original two modes and the upper mode interacts with the NV$^-$ centre in the cavity. The two modes are then recombined on a 50:50 beamsplitter and the dark port monitored. A photon detected at this port projects the two electron spins into the maximally entangled singlet state.}
\label{spin-entangler}
\end{figure*}
In the most general case, we start our sequence by first preparing the NV$^-$ centres in the superpositions $\alpha_a\vert \text{+1}\rangle _a+ \beta_a\vert 0\rangle _a $ and $\alpha_b\vert \text{+1}\rangle _b+ \beta_b\vert 0\rangle _b $. A single photon then impinges on the beamsplitter, resulting in a path-entangled state being sent to the two cavities. The photon interacts with the cavities and returns to the beamsplitter. A detection event in the dark port projects the NV$^-$ centres into the state \begin{eqnarray} \psi_d&=&(A_{r,b} -A_{r,a})\alpha_a \alpha_b/\sqrt{2}\vert \text{+1},\text{+1}\rangle _{a,b} \nonumber \\ &\;& +(A_{b} -A_{a})\beta_a \beta_b/\sqrt{2}\vert 0,0\rangle _{a,b} \nonumber \\ &\;&+ (A_{b} -A_{r,a})\alpha_a \beta_b/\sqrt{2}\vert \text{+1},0\rangle _{a,b} \nonumber \\ &\;&+ (A_{r,b} -A_{a})\beta_a\alpha_b /\sqrt{2}\vert 0,\text{+1}\rangle _{a,b}. \end{eqnarray} Non-zero values of $1-A_{r,i}$ and of $A_i$ will only lead to a decrease in the state amplitude, while differences between the two cavities will generally lead to a loss in fidelity, as can be seen from the first two terms in the state $\psi_d$. Assuming perfect state preparation with $\alpha_i =\beta_i =1/\sqrt{2}$, $A_a=A_b=0$ and $A_{r,a}=A_{r,b}=A_r$, our expression for $\psi_d$ simplifies to $A_r/\sqrt{8}(\vert 0,\text{+1}\rangle _{a,b}-\vert \text{+1},0\rangle _{a,b})$. The probability of projecting the two NV$^-$ centres onto our desired entangled state is $p_c=\eta^2 A_r^2/8$. This probability may seem quite low; however, the Bell state is generated with extremely high fidelity, even with imperfect transmission and reflection coefficients. Instead of impacting the fidelity of the resulting singlet state, $A$ and $A_r$ impact the probability of detecting a single photon in the dark port.
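The heralded state $\psi_d$ can be tabulated numerically; a minimal sketch (the function name and argument packing are ours, not the paper's):

```python
import numpy as np

def dark_port_state(alpha, beta, Ar, A):
    """Unnormalised two-spin state heralded by a dark-port click.
    Each argument is a pair of values for cavities (a, b)."""
    (aa, ab), (ba, bb) = alpha, beta
    (Ara, Arb), (Aa, Ab) = Ar, A
    # Basis order: |+1,+1>, |0,0>, |+1,0>, |0,+1>
    return np.array([
        (Arb - Ara) * aa * ab,
        (Ab - Aa) * ba * bb,
        (Ab - Ara) * aa * bb,
        (Arb - Aa) * ba * ab,
    ]) / np.sqrt(2)

# Matched cavities with A = 0: only the antisymmetric pair survives.
s = 1 / np.sqrt(2)
psi = dark_port_state((s, s), (s, s), (0.9, 0.9), (0.0, 0.0))
```

For matched cavities the only surviving amplitudes are $\mp A_r/\sqrt{8}$ on $\vert \text{+1},0\rangle$ and $\vert 0,\text{+1}\rangle$, reproducing the singlet form quoted above.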
No photon detection leaves the electron in an indeterminate state, as the photons could have been lost in the channel, scattered from the NV$^-$ centres, or lost due to imperfect coupling, inefficient detection, or through the unmonitored $b_{{out}\;a,b}$ cavity ports. To address the low success probability, we repeat the process a number of times to establish a link with high probability \cite{barrett2005}.
So far we have assumed the transmission and reflection coefficients of the two cavities have been matched. This may not be the case in practice, and we will likely have $A_{b} \sim A_{a}$ as $A_{a}, A_{b} \sim0$ but $A_{r,b} \neq A_{r,a}$. In this case, we can introduce a small loss element into the reflected path of the photon with the greater $A_{r,i}$ coefficient to effectively decrease its amplitude. Hence our resulting state is $\psi_d \propto \beta_a \alpha_b \vert 0,+1\rangle _{a,b} - \alpha_a \beta_b\vert \text{+1},0\rangle _{a,b}$, as required.
\subsection{A little determinism: adding a $^{15}$N nuclear spin}
With electron-spin initialisation and readout, the ability to generate remote entanglement, and a microwave driving field to perform electron-spin rotations, we essentially have all the operations required for distributed quantum computation and communication, particularly via the preparation and measurement of cluster states \cite{Raussendorf2001}. However, unsuccessful attempts to introduce additional qubits to the cluster state may destroy entanglement that has already been established, significantly increasing the resource overhead for low success probabilities \cite{kok2007}. Adding a little determinism will decrease these requirements.
An NV$^-$ centre in diamond possesses an electron spin and also a nuclear spin from the $^{15}$N atom. These couple naturally via the hyperfine interaction given by $H_{\rm e-n}$, which may allow us to add another qubit to the module. With a 20 mT field, the exchange interaction component is far off-resonance and so in an appropriate rotating frame we can write the effective interaction Hamiltonian as \begin{eqnarray}
H_{\rm eff} =\hbar A_{\rm net} |+1\rangle \langle +1 | \otimes | \uparrow\rangle \langle \uparrow| , \label{cphase} \end{eqnarray} where $A_{\rm net}= A_{\parallel}+\frac{A_{\perp}^2}{2 \Lambda}$ with $\Lambda=D+ g_e \mu_B B-g_n \mu_n B$ being the detuning between the electron and nuclear spin levels. This interaction gives a fast and natural controlled-phase (CPHASE) gate, where the time to create a maximally entangled state is $t_{\rm max}= \pi/ A_{\rm net} \sim 165$ ns.
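A quick numerical check of the gate time, assuming an effective coupling of $A_{\rm net}/2\pi \approx 3.03$ MHz (an illustrative value chosen to match the quoted $t_{\rm max}$; the text does not state $A_{\rm net}$ explicitly):

```python
from math import pi

# Assumed effective hyperfine coupling (illustrative; the text quotes only t_max).
A_net = 2 * pi * 3.03e6          # rad/s

t_max = pi / A_net               # time to a maximally entangling CPHASE
print(f"{t_max * 1e9:.0f} ns")   # ~165 ns
```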
To transfer quantum information between these systems, we require single-qubit operations on both the electron and nuclear spins. The electron spin rotations can be achieved using a $\sigma_+$ polarised microwave driving field of the form $H^{rf}_{\rm Driving}= \hbar \Omega_0 \left[ e^{i \phi} | +1\rangle \langle 0 | +e^{-i \phi} | 0\rangle \langle +1|\right]$ (in our rotating frame). With $\phi=\pi/2$, a $-\pi/4$ Y-rotation transforms $|0\rangle \rightarrow \frac{1}{\sqrt 2} \left[ |0\rangle+|+1\rangle\right] $ (a Hadamard-like operation) in approximately 2 ns \cite{everitt2012,footnote1}. The nuclear spin rotation operation could similarly be achieved through driving the exchange part of the hyperfine coupling in 1 $\mu$s \cite{everitt2012}. We hence have the operations required to construct gates that transfer the state of the electron spin to the nuclear spin and vice versa. These gates may also be used to initialise and measure the nuclear spin, via a projective measurement of the coupled electron--nuclear system. We will discuss the error channels in the electron--nuclear spin system once we have integrated all the elements.
\section{A hybrid interface}
Next, we combine the basic operations between the optical, electron-spin, and nuclear-spin components in a protocol for generating entanglement between two remote nuclear spins. Care is required to ensure that the operations work as intended. For instance, coupling between the electron and nuclear spin is always on, meaning that failed attempts at electron--electron coupling could cause errors on the nuclear spins.
We begin by preparing the electron (nuclear) spin in the $|0\rangle$ ($|n_+\rangle=\frac{1}{\sqrt{2}}(|\downarrow\rangle+|\uparrow\rangle)$) state. An accurate (sub-nanosecond) clock is started in the first module and the electron spin rotated to $|+\rangle=\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)$. Two independent operations occur at this time: \begin{itemize}
\item First, as soon as the electron spin is rotated from $|0\rangle$ to $|+\rangle$, the hyperfine interaction begins coupling the electron spin with the nuclear spin, according to $|+\rangle|n_+\rangle \rightarrow \frac{1}{\sqrt{2}} |0\rangle|n_+\rangle+\frac{1}{2}|1\rangle \left[ |\downarrow\rangle+e^{iA_{\rm net} t} |\uparrow\rangle\right]= |\Psi(t)\rangle$. The resulting entanglement is periodic and oscillates between separable and maximally entangled with period $2\pi/ A_{\rm net}\sim 330$ ns. The oscillation stops when the electron spin is returned to a polarised state. Alternatively we can use a spin-echo technique to disentangle the electron and nuclear spins at any time. We know that after a time $t$ the state $|+\rangle|n_+\rangle$ has evolved to $|\Psi(t)\rangle$. Performing a spin-echo pulse and waiting a further time $t$ evolves our combined state to $\frac{1}{\sqrt{2}} |+\rangle \left[ |\downarrow\rangle+e^{iA_{\rm net} t} |\uparrow\rangle\right]$. The electron and nuclear spins are disentangled with the electron spin returning to the original state and the nuclear spin evolving to $\frac{1}{\sqrt{2}} \left[ |\downarrow\rangle+e^{iA_{\rm net} t} |\uparrow\rangle\right]$.
\item Second, a single photon is split on a 50:50 beamsplitter into two modes. The bottom mode in Fig.~\ref{spin-entangler} interacts via dipole-induced transparency with the NV$^-$ centre in the first module, where it becomes entangled with the electron spin state. Both modes, the one reflected from the cavity and the top mode, are temporally multiplexed and transmitted over a fiber to the second module where the multiplexing is reversed. In the second module, the clock is started and the electron spin in the cavity is rotated to $|+\rangle$ where it interacts with the top mode from the original beamsplitter. The two modes are recombined on the beamsplitter and the dark port of the interferometer is monitored. \end{itemize} Two possible outcomes, which we refer to as unsuccessful and successful, are distinguished by the measurement result: \begin{itemize} \item The unsuccessful case is where no photon is detected at the dark port, which occurs if a photon is detected at the bright port or not at all (it may have been lost in the cavity, during coupling, or in the channel, or the detector may not have detected it due to error). In this case, we are unsure of the exact state of the remote electron spins and must assume it is maximally mixed. Consequently (assuming that $A_{\rm net}$ is identical for both NV$^-$ centres), the density matrix of the combined nuclear--electron system is \begin{eqnarray}\label{eq:density}
\rho &=& \frac{1}{4}\Big(|00\rangle\langle00|_e \otimes \rho_n \nonumber \\
&+& |01\rangle\langle01|_e \otimes e^{-iA_{net}tZ_{n_2}}\rho_ne^{iA_{net}tZ_{n_2}} \nonumber \\
&+& |10\rangle\langle10|_e \otimes e^{-iA_{net}tZ_{n_1}}\rho_ne^{iA_{net}tZ_{n_1}} \nonumber \\
&+& |11\rangle\langle11|_e \otimes e^{-iA_{net}tZ_{n_1}Z_{n_2}}\rho_ne^{iA_{net}tZ_{n_1}Z_{n_2}}\Big), \end{eqnarray}
where $e$ and $n$ denote the electron and nuclear subsystems respectively. The hyperfine coupling combined with the fact that photon loss completely mixes the state of the electrons implies that either one or two phase errors can be back-propagated to each nucleus. However, the nuclear component of this mixed state ``re-purifies'' itself with the periodicity $A_{\rm net}t = 2\pi m$ of the hyperfine coupling or via a spin-echo pulse (the spin-echo pulse is preferred as it is potentially much faster). After such a pulse the electron and nucleus become decoupled and the state of the nuclear qubits is simply $\frac{1}{\sqrt{2}} \left[ |\downarrow\rangle+e^{iA_{\rm net} t} |\uparrow\rangle\right]$. This slight phase rotation $e^{iA_{\rm net} t}$, where $t$ is the time at which the spin-echo pulse is applied, can be corrected later.
\item The successful case is where a photon is detected at the dark port and the remote electron spins are projected into a singlet state with a high fidelity, as discussed in Section IIB. A spin-echo pulse is also performed on each module to decouple the electron and nuclear spins. \end{itemize}
At this point, the electron and nuclear spins are decoupled. What to do next depends on the measurement result: \begin{itemize}
\item In the unsuccessful case, we measure the electron spin at time $2t$ of the spin echo and initialise the electron spin into $|0\rangle$, which collapses the overall density matrix to one of the four terms in Eqn.~(\ref{eq:density}). We then restart the procedure from the point at which we started the clock on the first attempt. Although the electron spin states are completely mixed, this gate sequence allows the nuclear spins to avoid decoherence and be preserved for the next attempt. This can be repeated until success. Errors may propagate to the nuclear spins due to poor control of the time when electrons are reinitialised to the $\ket{+}$ state prior to each attempt.
\item In the successful case, we perform a single-qubit $\pi/4$ $Y$-rotation on one of the two electron spins at time $2t$ of the spin echo (the $\pi/4$ $Y$-rotation is an effective Hadamard gate necessary to convert the electron--electron singlet state into the appropriate two-qubit cluster state, $(|0+\rangle - |1-\rangle)/\sqrt{2}$). We then wait until the hyperfine interaction maximally entangles the electron and nuclear spins within the node (at a time $t=m \pi/A_{\rm net}$). A second $\pi/4$ $Y$-rotation is performed on the electron spin of each module followed by measurement in the computational basis (an effective $X$-basis measurement). \end{itemize}
Upon success, we have transferred newly established entanglement between the electron spins in two remote modules to the nuclear spins in those same modules (by effectively teleporting a CPHASE gate), which is where we are storing and processing our quantum information. Importantly, the protocol circumvents photon-loss induced decoherence via the hyperfine interaction on the nuclear spin.
\subsection{Timescales}
The timescales for the various processes in the protocol can be grouped into three categories: short (1--30 ns), medium (100 ns -- 1 $\mu$s) and long ($>1$ $\mu$s). Short timescales are associated with electron spin operations (initialisation, detection, and rotations), medium timescales are associated with hyperfine coupling operations (entanglement, nuclear spin initialisation, and measurement in the $Z$-basis), and long timescales are associated with nuclear spin rotations (via the hyperfine interaction \cite{everitt2012}). Nuclear spin rotations are generally required only for initialisation, and the number of nuclear rotations is independent of the number of attempts to create an electron--electron bond. Similarly, measurement of the nuclear spin is required only for operations that consume the prepared entanglement. Transmission of a single photon between the modules is our last operation of interest, and its timescale depends on the task at hand. In quantum communication, remote modules may be separated by up to 40 km, in which case it takes approximately $0.4$ ms to transmit a photon between modules and receive a classical return signal; the duration of each attempt is then set by this round-trip time. By contrast, for modules separated by 1 m, the transmission time is $\sim 10$ ns, which is shorter than the timescale associated with hyperfine coupling operations.
The overall duration of the protocol is the product of the per-attempt duration and the number of attempts. The number of attempts is related to the probability of success of each attempt, which depends on the efficiency of the optical components. We define the total efficiency of the optical components, $p_o$, to be the combined efficiency of all factors that influence the success probability of the optical gate, besides the theoretical upper bound of $0.125$. If $p_o=0.5$, then for each attempt the probability of success is $0.125\times0.5=0.0625$. After approximately 107 attempts the probability of success is $P=0.999$, which for our purpose is effectively deterministic.
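The attempt count quoted above follows from the geometric success law; a minimal check with $p_o=0.5$:

```python
from math import log

p = 0.125 * 0.5        # per-attempt success: ideal bound x optical efficiency p_o = 0.5

# Attempts n needed so that 1 - (1 - p)**n >= P:
n = log(1 - 0.999) / log(1 - p)
print(round(n))        # ~107
```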
\subsection{NV$^-$ module} Let us now return to the issue of errors in the module. Errors can be divided into two categories: \begin{itemize} \item Accumulation errors are those that depend on the number of attempts taken to establish entanglement between remote electron spins. These errors only affect the error rate of the nuclear--nuclear CPHASE gate, not the error rate of nuclear measurement and initialisation. To tolerate a low success probability (which necessitates a large number of attempts) these errors need to be heavily suppressed. \item Non-accumulation errors are those that are independent of the total number of attempts, and depend only on the final successful attempt to establish entanglement between remote electron spins. \end{itemize} In Sections III and IV we will break down errors into several parameters that determine the overall error rates of nuclear spin measurement and initialisation, and of CPHASE gates between remote nuclear spins. These error rates will then allow us to determine the performance of the architecture.
\section{Nuclear Spin measurement and initialisation} Both measurement and initialisation of the nuclear qubit are performed via projective collapse of the electron spin and the hyperfine interaction. As described in \cite{everitt2012} we can generate multiple types of controlled operations (where the electron acts as the control) on the electron--nuclear system. The most basic is the natural hyperfine generated CPHASE gate. Combining this with $Y$ rotations on the electron, we are able to perform an effective $Z$-basis measurement on the nucleus with a total time of approximately $2\times 5+165+100 = 275$ ns, assuming single-qubit gates take less than 5 ns and single-attempt initialisation and measurement of the electron takes 100 ns (see Fig.~\ref{fig:meas}a). This measurement circuit also initialises the nucleus in a known state. Therefore, measurement and initialisation in this model are achieved with a combined gate.
Similarly, we can drive the hyperfine interaction to generate a controlled rotation around a different axis (rather than the $Z$ axis). Two examples are a controlled-not (CNOT) operation and a controlled-$Y$ operation, which can be used to measure the nucleus in the $X$ basis and $Y$ basis respectively (see Figs.~\ref{fig:meas}b and \ref{fig:meas}c). Driving of the hyperfine interaction necessitates a longer time for these controlled operations (approximately 1 $\mu$s \cite{everitt2012}) and these are therefore classified as long timescale operations. Errors associated with nuclear spin initialisation and measurement do not depend on the number of attempts to establish entanglement between remote electron spins and may occur only when nuclear spins are measured.
\begin{figure}
\caption{Projective measurement of the nuclear spin, mediated by the electron--nuclear hyperfine interaction. The natural hyperfine interaction enables fast $Z$-basis measurements, while a driven hyperfine interaction enables $X$ and $Y$ -basis measurements \cite{everitt2012}. (a) Measurement in the $Z$ basis and initialisation in $\ket{+}$ consists of measurement via the natural hyperfine interaction and then a controlled-$Y$ gate on the nuclear spin with the electron spin polarised in the $\ket{1}$ state, which effectively rotates the nuclear spin into the $\ket{+}$ state. (b) Measurement and initialisation in the $X$ basis. (c) The two types of $Y$-basis measurements required by our scheme are performed by driving the hyperfine interaction. After measurement, a controlled $Z$-rotation with a polarised electron spin will reinitialise the nuclear spin in the $\ket{+}$ state.}
\label{fig:meas}
\end{figure} Gate times for the combined measurement and initialisation operation are approximately 1 $\mu$s. The error rate associated with measurement and initialisation in the $Z$ and $Y$ bases is higher than in the $X$ basis, as a second rotation is required to reinitialise the nuclear spin in the $X$ basis (to prepare cluster states, qubits should be initialised in $\ket{\pm}$). Although the natural CPHASE gate of the hyperfine interaction is much faster than the driven CNOT (or controlled-$Y$) gate, the timescale associated with measurement and initialisation in the $Z$-basis is approximately the same as in the $X$-basis, since both sequences are dominated by a single driven gate.
\subsection{Intrinsic decoherence in diamond} For both the nucleus and electron, intrinsic decoherence can be induced through spin relaxation (thermalisation) and through dephasing. For the nuclear spin we can model both processes as a Markovian process where the errors induced are approximately given by \begin{eqnarray} p^{(1)}_{n}(t) &\approx& \left(1-e^{-t/T_{1n}}\right) \nonumber \\ p^{(2)}_{n}(t) &\approx&\frac{1}{2} \left(1-e^{-t/T_{2n}}\right) \nonumber, \end{eqnarray} where $t$ is the length of time considered and $T_{in}$ are the decoherence times ($i=1$ for relaxation and $i=2$ for dephasing). Dephasing (from $T_2^*$ processes) results in $Z$ errors while relaxation (thermalisation)
results in $X$ and $Y$ errors. We assume here that spin-echo techniques are being used on the electron spin to effectively decouple the electron and nuclear spins. If this is not the case, the coherence times of the nuclear spin will be much shorter.
For the electron spin, relaxation can be modelled again as a simple Markovian process $p^{(1)}_{e}(t) \approx 1-e^{-t/T_{1e}}$, which results in $X$ and $Y$ errors. Dephasing is non-Markovian but gives $Z$ errors with probability $p^{(2)}_{e}(t) \approx \left(1-e^{-t^2/2T_{2e}^2}\right)/2$. Generally the relaxation times are very long (seconds) compared to the electron-spin gate times (nanoseconds to microseconds) and can be neglected, leaving only $Z$ errors as our intrinsic error. We use approximate expressions for $p_{n,e}(t)$ to simplify our estimates and to find an upper bound for our error probabilities. The master equation used for each process will give slightly different expressions for $X$, $Y$, and $Z$ errors.
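These error models can be collected into a small sketch (the function names are ours, not the paper's):

```python
from math import exp

def p_relax(t, T1):
    """Relaxation (X/Y errors), modelled as Markovian."""
    return 1.0 - exp(-t / T1)

def p_dephase_nuclear(t, T2n):
    """Nuclear dephasing (Z errors), Markovian."""
    return 0.5 * (1.0 - exp(-t / T2n))

def p_dephase_electron(t, T2e):
    """Electron dephasing (Z errors), non-Markovian (Gaussian decay)."""
    return 0.5 * (1.0 - exp(-t**2 / (2.0 * T2e**2)))
```

For example, with an assumed $T_{2e} = 2$ $\mu$s (illustrative), a 200 ns window gives p_dephase_electron(200e-9, 2e-6) $\approx 2.5\times10^{-3}$.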
Control errors can be modelled by an error $\epsilon$, which is defined as the over- or under-rotation caused by imprecise control of the Hamiltonian of the electron spin. Over- or under-rotation simply produces an error of the same type as the rotation axis, whereas axis misalignment may cause an arbitrary error. We assume that this error affects the rotation angle and not the rotation axis. In either case, given a rotation error of $\epsilon$, the error induced is given by $\sin^2(\epsilon) \approx \epsilon^2$, for $\epsilon \ll 1$.
The total error for electronic rotations (the combination of decoherence and control errors) is given by \begin{equation} p_e(5\,\textrm{ns})=\frac{1}{2} \left(1-e^{-(5\,\textrm{ns})^2 /2T^2_{2e}}\right)+ \epsilon^2. \end{equation} A similar expression can be derived for $p_n$ for pure decoherence over time $t$, \begin{equation} p_n(t) = 1-e^{-t/T_{n}}. \end{equation} Note that we do not include an $\epsilon^2$ term for the nuclear spin error. This is because all nuclear rotations are achieved by driving the hyperfine interaction, where the associated error is given by the coupled error terms $p_{2z}$ and $p_{2x}$. These intrinsic errors associated with the hyperfine control may introduce correlated errors. A more detailed analysis of these processes can be found in \cite{everitt2012} and the total probability of error during these rotations will encapsulate both the systematic errors and the intrinsic decoherence on both the electron and nucleus over the relevant time scales. These can be modelled by a general two-qubit depolarising map with probabilities $p_{2z}$, $p_{2x}$ and $p_{2y}$. Each of these expressions can now be used to bound the error rate associated with nuclear measurement and initialisation, \begin{equation} \begin{aligned} &p_{M_Z} = 2p_e+2p_M+p_{2x}+p_{2y}\\ &p_{M_{X}} = 2p_e + p_M +p_{2x}\\ &p_{M_{Y}} = 2p_e+2p_M+p_{2x}+p_{2y}, \end{aligned} \end{equation} where $p_e$ is the electronic rotation error, $p_M$ is the electronic measurement (and initialisation) error and $p_{2(x,y,z)}$ are the errors associated with the hyperfine coupling for natural ($z$) or driven $(x,y)$ evolution. The timescale of each measurement is approximately 1 $\mu$s and the errors in $p_{M_Z}$ will be dominant.
\section{Electron--electron connection}
Errors that accumulate as we attempt to establish entanglement between remote electron spins arise from three sources: \begin{enumerate} \item {\it Hyperfine interaction timing errors}. After an attempt to entangle two remote electron spins, the hyperfine interaction must be allowed to evolve (including spin-echo sequences) to the $2\pi$ point so that the electron and nuclear spins are disentangled prior to the next attempt. If there is an associated timing error, $\nu$, a $Z$ error will propagate back to the nucleus with a probability of $\sin^2(\nu/165\;\textrm{ns}) \approx (\nu/165\;\textrm{ns})^2$. In Table \ref{tab:numbers} we give the required accuracy for a successful connection probability of $P=0.99$ (accumulated nuclear spin error of 1\%) and $P=0.999$ (accumulated nuclear spin error of 0.1\%) for various optical component efficiencies, $p_o$. The probability of the connection being successful using a single-sided cavity protocol is given by $p_c = 0.125p_o$.
\begin{table} \begin{center} \vspace*{4pt}
\begin{tabular}{|c|c|c|}
Optical efficiency & \multicolumn{2}{|c|}{Timing accuracy (connection attempts)} \\
$p_o$ & $P=0.99$ & $P=0.999$ \\
\hline
100\% & 2.81 ns (35) & 725 ps (52) \\
80\% & 2.5 ns (44) & 644 ps (66) \\
50\% & 1.95 ns (71) & 504 ps (107) \\
20\% & 1.22 ns (182) & 315 ps (273) \\
10\% & 0.86 ns (366) & 222 ps (549) \\
\end{tabular} \caption{Timing error and number of attempts required to establish entanglement with probability (and fidelity) $P=0.99$ and $P=0.999$. Since the timing error accumulates with each attempt, there is a tradeoff between optical efficiency and timing accuracy.} \label{tab:numbers} \end{center} \end{table}
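The entries of Table~\ref{tab:numbers} follow from the success law $p_c = 0.125 p_o$ together with the per-attempt error budget $(\nu/165\;\textrm{ns})^2$; a sketch that reproduces them to within rounding:

```python
from math import log, sqrt

T_HF = 165.0  # ns: 2*pi-point time of the electron-nuclear hyperfine evolution

def attempts(P, p_o):
    """Expected attempts for overall connection probability P, with p_c = 0.125*p_o."""
    return log(1 - P) / log(1 - 0.125 * p_o)

def timing_accuracy(P, p_o):
    """Per-attempt timing error nu (ns) keeping the accumulated Z error at 1 - P."""
    n = attempts(P, p_o)
    return T_HF * sqrt((1 - P) / n)

for p_o in (1.0, 0.8, 0.5, 0.2, 0.1):
    print(f"{p_o:4.0%}  {timing_accuracy(0.99, p_o):.2f} ns  "
          f"{1000 * timing_accuracy(0.999, p_o):.0f} ps")
```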
\item {\it Nuclear decoherence.} As entanglement is established between remote electron spins over a series of attempts, decoherence will accumulate on the nuclear spins. Long nuclear decoherence times are required to accommodate the low success probability. We assume that the physical separation between NV$^-$ centres is short enough that the optical protocol can be confirmed to have succeeded or failed within the 165 ns required for the electron--nuclear hyperfine gate. An unsuccessful attempt takes approximately $2\times 45+100+5 \sim 200$ ns (initialisation of the electron via measurement, rotation of the electron, and spin-echo to disentangle the electron and nuclear spins prior to the next attempt). Therefore, the nuclear decoherence will be $p_n(200\;\textrm{ns}) = (1-e^{-2.00\times 10^{-7}/T_{n}})$ per attempt. For $s$ attempts, this becomes $p_n(s\times 200\;\textrm{ns})$.
\item {\it Excitation of the electronic system.} When attempting to entangle remote electron spins, or when measuring and initialising the electronic spin via an optical photon, we may accidentally excite the electronic system. When this occurs, the attempt is automatically unsuccessful as the photon has been absorbed. With high probability, the excited system will relax to its original state with no error induced on the nuclear spin. However, due to level mixing in the upper manifold, there is a possibility of a series of non-spin conserving transitions back to the ground state. As soon as the spin state of the electron changes, the timing control that we use to prevent errors back-propagating to the nucleus becomes unreliable. This error channel is active not only during every connection attempt, but when measuring and initialising the electron spin.
Experiments to precisely determine the relevant branching ratios for the decay of the electron have not been performed, but we can approximate these values using a theoretical model. Consider the basic level structure of the NV$^-$ centre shown in Fig.~\ref{fig:mixing}. The probability of a photon being absorbed by the NV$^-$ centre can be calculated using Eqn.~(\ref{eq:probs}). Given the parameters we assume for our system, \begin{eqnarray} &P_S(\ket{0}) = 0.0098, \; P_S(\ket{+1}) \sim 0 \\ &P_R(\ket{0}) = 0.980, \; P_R(\ket{+1}) = 1.7\times 10^{-7}, \end{eqnarray} where $P_R$ is the probability of reflection for each state and $P_S$ is the probability of absorption for each state. The probability of error on the nuclear spin depends on the state of the electron spin, and the worst case is when the electron is in the $\ket{0}$ state (as the probability of excitation is higher). The probability of error on the nuclear spin also depends on the likelihood of an excitation causing a spin flip in the NV$^-$ centre. The general error mapping, in the worst case, is given by \begin{eqnarray} \begin{aligned}
\rho &= \frac{P_0}{2}(|0\rangle\langle0|_e +|1\rangle\langle1|_e )\otimes\rho_n\\
&+P_1\rho_n|0\rangle\langle0|_e \\
&+ P_2Z_n\rho_nZ_n|1\rangle\langle1|_e, \end{aligned} \end{eqnarray} where $P_0$ is the probability that no absorption takes place and the photon is lost through other mechanisms. $P_1$ is the probability that the NV$^-$ centre relaxes to the $\ket{0}$ state via a series of spin-0 levels and $P_2$ is the probability that it relaxes to the $\ket{+1}$ state when initially in the $\ket{0}$ state. When the system relaxes to the $\ket{+1}$ state, the probability of an error on the nuclear spin is related to exactly when the electron decays from the meta-stable state back to the $\ket{+1}$ state with respect to the 165 ns $\pi$ point of the hyperfine coupling. Reliable estimates for this decay are not experimentally available, so we will make a deliberately large overestimate. If this decay pathway occurs, we assume that a full $Z$-error occurs on the nuclear spin. Each probability can be estimated from the probability of absorption, $P_S$, and the relative probabilities of each of the transitions, \begin{eqnarray} \begin{aligned} P_0 &= 0.9902, \\ P_1 &= P_S(\ket{0})\times (0.99+0.01\times 0.7) = 0.0098, \\ P_2 &= P_S(\ket{0})\times (0.01\times 0.3) = 2.9\times 10^{-5}. \end{aligned} \end{eqnarray} Therefore, $P_2$ is our estimate of the probability that an error occurs on the nuclear spin due to excitation of the electron spin. This is likely to be a significant overestimate as we have not accounted for the timing of the electron relaxation relative to the $\pi$ point of the hyperfine interaction. This estimate was done assuming a cooperativity of $C=50$. By doubling this cooperativity, the probability of error halves. \end{enumerate}
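The branching arithmetic above can be checked directly (the $0.99/0.01$ and $0.7/0.3$ splits are the approximate transition probabilities indicated in Fig.~\ref{fig:mixing}):

```python
P_S0 = 0.0098   # absorption probability with the electron in |0> (C = 50)

# Approximate branching: 99% direct spin-conserving decay; of the 1% entering
# the metastable states, 70% return to |0> and 30% end in |+1>.
P_0 = 1 - P_S0                      # photon not absorbed
P_1 = P_S0 * (0.99 + 0.01 * 0.7)    # relaxes back to |0>
P_2 = P_S0 * (0.01 * 0.3)           # ends in |+1>: worst-case nuclear Z error
print(round(P_0, 4), round(P_1, 4), P_2)
```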
\section{Topological cluster states}
We will now outline how our protocol to establish entanglement between remote nuclear spins enables scalable quantum information processing. In particular, we will outline how to prepare cluster states appropriate for universal quantum computation and quantum communication. A common way to prepare cluster states involves two-qubit CPHASE gates between neighbouring qubits in some geometry \cite{kok2007}, and our protocol is effectively a CPHASE gate between remote nuclear spins. Therefore, with a cluster state stored in the states of the nuclear spins, our protocol can be applied with additional modules to introduce additional qubits to the cluster. In this way, we can prepare an arbitrary cluster state by repeating the protocol as required with a sufficient number of modules.
\subsection{Topological cluster-state error correction}
For scalable quantum information processing, some form of error correction will be essential. Of the many schemes for error correction, the two-dimensional surface code and the closely related scheme based on three-dimensional topological cluster states are the strictly local schemes with the highest tolerance to errors (above $0.5\%$ per gate in both cases) \cite{RH07,RHG06,RHG07,WFSH10,BS10}. In both cases, each qubit is only required to interact with its four nearest neighbours. Typically, the surface code is thought to be appropriate for matter-based qubits, while topological cluster-state error correction is thought to be appropriate for photonic qubits. Although our nuclear spin qubits are immobile, which would suggest the surface code, topological cluster-state error correction features a natural mechanism to tolerate missing bonds in the cluster state, which might arise in our scheme due to the probabilistic nature of the CPHASE gate. Missing bonds can be accommodated through a clever interpretation of the measurement results during computation, at the cost of a reduced tolerance to other errors \cite{BS10}. This is not possible with the surface code \cite{barrettstace10}. As such, we will focus on topological cluster-state error correction, which requires us to prepare the three-dimensional topological cluster state illustrated in Fig.~\ref{fig:cluster}a.
\begin{figure}
\caption{Schematic representation of a three-dimensional topological cluster state. a) A $2\times2\times2$ region of the topological cluster state. b) A physical unit cell comprising two layers of qubits. c) The projection of the physical unit cell to a two-dimensional plane, requiring nearest- and next-nearest neighbours interactions.}
\label{fig:cluster}
\end{figure}
In topological cluster-state error correction, two dimensions of the topological cluster state are reserved for the spatial distribution of protected logical qubits. The third dimension is identified with the temporal axis of the computation. As such, we are not required to prepare the entire topological cluster state before the computation can begin. Instead, only two adjacent layers of the topological cluster state are required at a given time. In Fig.~\ref{fig:cluster}b we illustrate the physical unit cell of the topological cluster state, comprising two layers. The back layer contains eight qubits connected in a square (orange), while the front layer contains five qubits connected in a cross (blue). The two layers are connected in the temporal direction (green). This pattern is repeated over the entire topological cluster state. Then, measurement of the front layer will teleport the current state of the computer to the back layer, at which point the front qubits can be reconnected in accordance with the geometry of the topological cluster state and the information can be teleported back again. In this way, the two physical layers function as even and odd layers in the temporal direction, allowing an arbitrarily deep computation to be performed with a fixed number of physical qubits.
\subsection{Mapping to a two-dimensional geometry}
Because we are using matter qubits, it may be useful for the array of NV$^-$ modules to be strictly two-dimensional. In Fig.~\ref{fig:cluster}c we illustrate the physical unit cell (comprising two layers in the temporal direction) projected to a two-dimensional plane (where colour coding has been preserved). Each NV$^-$ module is no longer connected to only its nearest neighbours, and several next-nearest neighbour connections are required. However, as these connections are optically mediated, this is compatible with our scheme. In principle, the array can be distributed, where neighbouring NV$^-$ modules are separated by an arbitrary distance (subject to photon loss and communication time) and the relevant integrated (or bulk) optics are positioned between connected modules.
\subsection{Connection circuits}
The circuit in Figs.~\ref{fig:sequence} and \ref{fig:circuit} is used to prepare the topological cluster state, layer by layer, with the array of NV$^-$ modules. Creating an optimal five-step circuit is not possible given only two layers of modules. Instead, we use a six-step circuit, where NV$^-$ modules are idle for one step after measurement. In Fig.~\ref{fig:sequence}, the star notation denotes the subsequent six-step circuit that occurs at a later time (for example, $1^*$ denotes step $7$). Figure~\ref{fig:circuit} illustrates the circuit to prepare a topological cluster state with cross section equal to $1\times 1$ and arbitrary depth. As discussed in the main text, our calculation of the threshold assumes a five-step circuit. This is a reasonable approximation to the six-step circuit, as the error that accumulates while a module is idle is restricted to pure nuclear decoherence, which is negligible over the timescale of a successful electron--electron connection.
\begin{figure}
\caption{Sequence of NV$^-$ node connections for a unit cell of the physical cluster. Each number represents the time-step for bonding, while a number inside each node represents measurement/initialisation. This sequence is time optimal given the physical constraints on the system. The star notation denotes equivalent time steps in the circuit which occur at later physical times, i.e.\ $1^*$ would occur at time step seven in real time, and after the measurement a node remains idle for one step in the cluster.}
\label{fig:sequence}
\end{figure} \begin{figure}
\caption{Quantum circuit required for the creation of two layers of the Raussendorf cluster. Also shown are the circuits for nuclear initialisation and readout, utilising a QND measurement via the electron/nuclear hyperfine interaction.}
\label{fig:circuit}
\end{figure}
Simultaneous connections are grouped into a single step. In order to maintain synchronicity over the entire computer, all connections in a given step should be established before moving on to the next one. As the connections are probabilistic, this may require some modules to wait while other modules are still being connected. However, as nuclear decoherence rates are orders of magnitude less than the time required to attempt an electron--electron connection, this waiting period will not adversely affect the error performance of the computer provided electronic errors propagating through the hyperfine interaction are handled carefully. This requirement determines the number of connection attempts, $g$, required for each module at a given success probability. The number of attempts to ensure a bond is established with probability $P$ is given by $g=\log(1-P)/\log(1-p_c)$, where $p_c$ is the probability that a given connection attempt is successful. Assuming that $p_c = 6.25\%$, $g=107$ for $P=99.9\%$. However, in the main text we assumed that $g$ is equal to the average number of connection attempts, given by $1/p_c$, which increases the rate of operation of the computer. In this case, we must ensure that the topological cluster state is synchronised over the entire computer. On average, each node will synchronise with its neighbours. In extreme cases, some modules will have to wait for $g$ attempts to be connected. Our estimates will consider both the synchronous and asynchronous modes of operation.
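The attempt counts quoted here follow directly from $g=\log(1-P)/\log(1-p_c)$; a minimal check, using the $p_c = 6.25\%$ success probability and the $99.9\%$ target assumed above:

```python
import math

p_c = 0.0625               # success probability per connection attempt (6.25%)
P = 0.999                  # target probability that a bond is established

# Attempts needed so that at least one success occurs with probability >= P
g_sync = math.log(1 - P) / math.log(1 - p_c)

# Average number of attempts (geometric distribution with parameter p_c)
g_avg = 1 / p_c

print(g_sync, g_avg)   # approx. 107 attempts vs an average of 16
```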
Because failed connections are heralded, we can exploit the tolerance of the topological cluster state to missing bonds \cite{BS10}. For example, we may reduce $P$ to 95\% to reduce the number of attempts. However, as the proportion of missing bonds is increased, the threshold error rate for all other errors is reduced. In our calculations, we do not exploit this potential robustness; the tradeoff between missing bonds and other error rates will be studied in future work. \section{Experimental requirements and expected performance}
We now estimate the experimental requirements for a scalable quantum computer based on our architecture. We have determined the threshold error rate for topological cluster-state error correction with a five-step circuit to be $\approx$ 0.73\%, as shown in the main text. This threshold is the maximum tolerable error rate for measurement/initialisation and CPHASE gates during preparation of the topological cluster state.
\begin{table*} \begin{center} \vspace*{4pt} \begin{tabular}{ccc} \hline\hline \hspace{0.2cm} Gate \hspace{0.2cm} & \hspace{0.2cm} Timescale \hspace{0.2cm} & \hspace{0.2cm} Error rate \hspace{0.2cm} \\ \hline Single-qubit gate: electron & 5 ns & error, $p_e$ \\
Initialisation/measurement: electron & 100 ns & incorrect state, $p_M$ \\
Initialisation/measurement: Nuclear ($Z$) & $\approx 1.3$ $\mu$s & projective circuit, $p_{I,M_z}$ \\
Initialisation/measurement: Nuclear ($X$) & $\approx 1.1$ $\mu$s & projective circuit, $p_{I,M_x}$\\
Initialisation/measurement: Nuclear ($Y$) & $\approx 1.2$ $\mu$s & projective circuit, $p_{I,M_y}$\\ Electron--electron C-$\sigma_z$ error & $g(200\;\textrm{ns})$& $3p_{2z}+3p+p_e(275\;\textrm{ns})+p_M$ \\ Timing error: hyperfine interaction & & $(\nu/165\;\textrm{ns})^2$ \\ Nuclear--nuclear C-$\sigma_z$ & $g(200\;\textrm{ns})+110+165=(200g+275)$ ns & $p_{CZ}$\\ \hline\hline \end{tabular} \caption{Estimate of physical parameters and basic module operations.} \label{tab:volumes} \end{center} \end{table*}
To quantify the experimental requirements of the system, we specify several parameters in Table \ref{tab:volumes}, assuming that the time required for all connections is determined by the slowest connection, given by $g=\log(1-P)/\log(1-p_c)$. The average number of attempts for a successful connection is $1/p_c$. We assume that the accumulated error per connection is given by $E/p_c$, where $E$ is the accumulated error per attempt [excluding nuclear decoherence, which will induce an error of $gp_n(200\;\textrm{ns})$ as all of the nuclear spins are waiting until all connections have been made]. Then, the probability of error for measurement and initialisation of the nuclear spins and CPHASE gates between nuclear spins is \begin{eqnarray} p_{I,M_z} &=& 2p_e+p_M+p_{2z}+p_{2y} + 2sP_{2}\\ p_{I,M_{x}} &=& 2p_e + p_M +p_{2x} + 2sP_{2}\\ p_{I,M_{y}} &=& 2p_e+p_M+p_{2z}+p_{2y} + 4sP_{2}\\ p_{CZ} &=& \frac{((\nu/165\;\textrm{ns})^2+2sP_{2})}{p_c}+gp_n(200\;\textrm{ns}) \\ &+&3p_{2z}+3p_e+p_{e}(275\;\textrm{ns})+p_M+2sP_{2}. \nonumber \end{eqnarray} These expressions include errors associated with electronic rotations and hyperfine gates, and an additional term, $P_{2}$, which represents the probability that one of the $2s$ photons that are used in the QND measurement is absorbed by the NV$^-$ electron.
For $p_{CZ}$, the first three terms correspond to the errors accumulated during $g$ connection attempts, while the remaining five terms are associated with the final successful connection. We assume that for the vast majority of nodes, the number of attempts is $1/p_c$ and hence the accumulated errors from photon absorption ($P_2$) and the hyperfine interaction ($\nu$) are amplified by $1/p_c$. However, the nuclear error is amplified by a factor of $g$ under the assumption that a given node will have to wait for the entire computer to synchronise. The $p_e(275\;\textrm{ns})$ term is included because, after a $90$ ns spin-echo pulse, the successful connection will undergo a $Y$-rotation and be stored in the electrons for the $\pi = 165$ ns cycle time of the hyperfine interaction before the electron is measured and the connection is transferred to the nucleus (taking 105 ns). We use a single set of errors for measurement/initialisation, as the timescales for these gates are commensurate and the majority of measurements in topological cluster-state error correction (where measurement errors are relevant) are $X$-basis measurements.
These probabilities must satisfy the threshold condition of topological cluster-state error correction for the architecture to be scalable. For a useful device, they should be approximately an order of magnitude lower than the threshold; otherwise, the resources required for error correction become prohibitively large. As the threshold is estimated to be 0.73\%, we will require that $p_{I,M}$ and $p_{CZ}$ are both $\leq 0.1\%$.
Our expressions for $p_{I,M}$ and $p_{CZ}$ upper bound the total error rate for each operation. In the main text, we outlined the individual requirements for each physical parameter. A detailed simulation is required to examine the full parameter space and find the optimal set of physical parameters such that each operation satisfies the threshold condition with the lowest possible error rate. In our simulations, we assume that $P_2 \neq 0$, hence the fidelity asymptotes to a value below unity as each individual error approaches zero. We can provide a set of parameters that satisfies the threshold condition, but is not necessarily optimal. Assuming a success probability per attempt, $p_c$, of 6.25\%, the following set of error parameters meets the threshold condition: \begin{equation} \begin{aligned} \nu &= 0.05, \quad (\text{50 ps timing error}) \\ \epsilon &= 5\times 10^{-3}\\ T_{2e}^* &= 90\;\mu\text{s}\\ T_{n} &= 1\;\text{s}\\ P_{2} &= 1.2\times 10^{-5}\\ p_{2z} &= p_{2x} = p_{2y} = 10^{-4}\\ p_M &= 10^{-4}\\ s &= 5 \quad (\text{10 photons are used for measurement}). \end{aligned} \end{equation} With these parameters, we have \begin{equation} \begin{aligned} p_{I,M_z} &= 6.4\times 10^{-4}\\ p_{I,M_x} &= 5.4\times 10^{-4}\\ p_{I,M_y} &= 6.4\times 10^{-4}\\ p_{CZ} &= 5.4\times 10^{-3}. \end{aligned} \end{equation} Error rates for measurement and initialisation are below our target error rate (0.1\%), but the CPHASE error rate is above our target, though still below the actual threshold (0.73\%). The primary cause of this is the value of $P_2$. Recall that we assumed that a spin flip of the NV$^-$ electron always induces a phase flip on the nucleus. In practice, the probability of an error depends quadratically on the exact fraction of time (relative to the 165 ns $\pi$ point of the hyperfine coupling) the system spends in the $\ket{+1}$ state, $\sin^2(t_{decay}\pi/165\;\textrm{ns})$.
The potential exists to engineer a much lower value of $P_2$ if the intrinsic branching ratios are similar to Fig.~\ref{fig:mixing}. For example, we could tune the cooperativity to decrease the scattering probability $P_S(\ket{0})$ for the $\ket{0} \leftrightarrow \ket{M_5}$ transition. Also, photons used in electron measurement could be sent at appropriate times to ensure that, in the case of absorption, a decay to the $\ket{+1}$ state occurs close to the $2\pi$ point of the hyperfine coupling. Reducing $P_2$ would be sufficient to reduce all error rates to below our target error rate.
\subsection{Expected performance}
Lastly, we estimate the performance of our architecture. The rate-limiting process is the connection of all electron--electron pairs in each step of the circuit to prepare the topological cluster state. As discussed, we can operate the architecture in a synchronous or asynchronous manner, and the mode of operation will affect the performance. The simplest is the synchronous mode, where 99.9\% of all connections are established before moving to the next step (connections that are not established introduce errors, which can be corrected). In this case, approximately 107 attempts per step are required for $p_c=6.25\%$. This leads to a time per step of $(200\times 107+275)\;\textrm{ns} \approx 22$ $\mu$s. In asynchronous mode, we take the average number of attempts for connections to be established ($1/p_c$). This implicitly assumes that different parts of the NV$^-$ array may be at different temporal stages of the computation, but classical control will be used to keep track of the entire topological cluster state, which is generated at a constant rate on average. In this case $\approx 16$ connection attempts are needed at $p_c = 6.25\%$, requiring a time of $3.5$ $\mu$s. Initialisation and measurement takes approximately 1 $\mu$s, so this is not the rate-limiting process. The quantum circuit illustrated in Fig.~\ref{fig:circuit} takes six steps to construct a layer of the topological cluster state. Hence, a temporal layer of the topological cluster state is prepared every $\approx 132$ $\mu$s in synchronous mode and every $\approx 21$ $\mu$s in asynchronous mode, with a unit cell prepared every $264$ $\mu$s and $42$ $\mu$s, respectively.
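The step, layer, and unit-cell times follow from simple arithmetic on the 200 ns attempt time and 275 ns final-connection overhead; a minimal check using the attempt counts quoted above:

```python
# Step, layer, and unit-cell times from the attempt counts quoted above.
attempt_ns, overhead_ns = 200, 275   # per attempt, plus final-connection overhead
steps_per_layer, layers_per_cell = 6, 2

for mode, g in [("synchronous", 107), ("asynchronous", 16)]:
    step_us = (attempt_ns * g + overhead_ns) / 1000
    layer_us = steps_per_layer * step_us
    cell_us = layers_per_cell * layer_us
    print(f"{mode}: step {step_us:.1f} us, layer {layer_us:.0f} us, "
          f"cell {cell_us:.0f} us")
```

The small differences from the quoted 132 $\mu$s and 264 $\mu$s arise because the text rounds the step time to 22 $\mu$s before multiplying by the number of steps.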
To estimate the size of the topological cluster state and the speed of performing logical gate operations, we estimate the failure rate of a logical cell and the number of logical cells required for a logical gate \cite{RHG07}. The failure rate of a logical cell can be approximated as $p_L \approx C_1(C_2p/p_{th})^{(d+1)/2}$, where $d$ is the distance of the topological code, $p$ is the physical error rate, $p_{th}$ is the threshold error rate (estimated to be approximately 0.73\%), $C_1 \approx 0.13$, and $C_2 \approx 0.61$ \cite{fowler2011}. We assume $p = 0.1\%$ is our average error rate for all gates, as the CPHASE gate has a slightly higher error and the measurement gates have slightly lower errors. For a large computation, we are likely to require $p_L \leq 10^{-18}$, implying $d \geq 32$. Then, a logical cell is a cube of unit cells measuring $5d/4 = 40$ cells in edge length. A logical qubit is defined as a cross section of the cluster, measuring $2\times 1$ logical cells, requiring $80\times 40$ unit cells. To perform a logical CNOT gate we require a cluster volume $2\times 2$ in cross section, requiring 9841 physical qubits and 2 logical cells in temporal depth. Hence, the time for a logical CNOT is $2\times 40\times 264\;\mu\textrm{s} \approx 21.1$ ms for the synchronous mode and 3.4 ms for the asynchronous mode.
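A quick sanity check of the logical failure-rate estimate, using only the constants quoted above, confirms that $d = 32$ meets the $p_L \leq 10^{-18}$ target:

```python
# Logical-cell failure rate p_L ~ C1 * (C2 * p / p_th)^((d+1)/2), with the
# constants quoted above, checked against the p_L <= 1e-18 target at d = 32.
C1, C2 = 0.13, 0.61
p, p_th = 0.001, 0.0073   # average physical error rate and threshold

def logical_failure(d):
    return C1 * (C2 * p / p_th) ** ((d + 1) / 2)

print(logical_failure(32))   # well below the 1e-18 target
```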
\end{document}
"id": "1309.4277.tex",
"language_detection_score": 0.8079732656478882,
"char_num_after_normalized": null,
"contain_at_least_two_stop_words": null,
"ellipsis_line_ratio": null,
"idx": null,
"lines_start_with_bullet_point_ratio": null,
"mean_length_of_alpha_words": null,
"non_alphabetical_char_ratio": null,
"path": null,
"symbols_to_words_ratio": null,
"uppercase_word_ratio": null,
"type": null,
"book_name": null,
"mimetype": null,
"page_index": null,
"page_path": null,
"page_title": null
} | arXiv/math_arXiv_v0.2.jsonl | null | null |
\begin{document}
\title{Positroids are 3-colorable} \begin{center}
{\footnotesize
Fakult\"at f\"ur Mathematik und Informatik, \\
FernUniversit\"at in Hagen, Germany,\\
\text{\{lamar.chidiac,winfried.hochstaettler\}@fernuni-hagen.de}
\\\ \\
} \end{center}
\begin{abstract}
We show that every positroid of rank $r \geq 3$ has a positive
coline. Using the definition of the chromatic number of oriented
matroids introduced by J.\ Ne\v{s}et\v{r}il, R.\ Nickel, and W.~Hochst\"{a}ttler, this shows that every orientation of a positroid
is 3-colorable. \end{abstract}
\section{Introduction}
A classical algebraic way to analyze the chromatic number of a graph is to study Nowhere-Zero (NZ) tensions, which we will call NZ-coflows, as they form a concept dual to NZ-flows. Flows and coflows immediately generalize to regular oriented matroids, and Hochst\"{a}ttler \cite{HadwigerforHyperplanes} presented a generalization to regular oriented matroids of Hadwiger's famous conjecture that any graph which is not $k$-colorable must have a $K_{k+1}$-minor. Goddyn and Hochst\"attler \cite{Boergerband} proved that this conjecture is equivalent to Tutte's 4-flow conjecture for $k=4$, Tutte's 5-flow conjecture for $k=5$, and Hadwiger's conjecture for $k\ge 6$.
A matroid is regular if and only if it does not have $U_{2,4}$, the four-point line, as a minor. A first idea to generalize the theory of NZ-coflows to general oriented matroids would be to require that summing the coflow values along any cycle, taking the signs of the elements into account, evaluates to zero. Alas, one has to cope with the fact that under this definition no orientation of $U_{2,4}$ admits a non-trivial coflow; in particular, there is no NZ-coflow. Therefore, one needs to take another approach to define NZ-coflows for general oriented matroids. The coflow space may dually be defined as the linear hull of signed characteristic vectors of cocircuits. Using this and ideas of Hochst\"attler and Ne\v{s}et\v{r}il \cite{honese}, Hochst\"attler and Nickel \cite{theflowlatticeofOM, onthechromaticnumber} initiated a chromatic theory of oriented matroids. Lacking the total unimodularity of the matrix defining the cocircuit space, they considered the integer lattice (chain group) generated by the signed characteristic vectors of the cocircuits instead. It seems possible that the generalization of Hadwiger's conjecture to regular oriented matroids might be true even for general orientable matroids.
While the case $k=2$ is still trivial for general oriented matroids, already the case $k=3$, which was proven by Hadwiger in the graphic (and regular) case, is open in general. The graphic case is easily proved by observing that a simple graph without a $K_4$-minor always has a vertex of degree at most $2$. Therefore Goddyn, Hochst\"{a}ttler and Neudauer~\cite{goddynetal} introduced the class of generalized series-parallel ($GSP$) oriented matroids, an $M(K_4)$-free class, requiring that the coflow lattice of every minor contains a vector with at most two non-zero entries, which must be $+1$ or $-1$. This class is easily seen to be 3-colorable in the sense of Hochst\"{a}ttler and Nickel if the matroid is loopless. In general, the class of $M(K_4)$-free matroids is not well understood~\cite{geelenblog}. If every such orientable matroid could be proven to be $GSP$, then Hadwiger's conjecture would hold for oriented matroids in the case $k=3$. A large class of orientable matroids without an $M(K_4)$-minor is formed by the gammoids. Unfortunately, they are only slightly better understood.
One possibility to prove membership in the class $GSP$ is to find two compatible cocircuits in the oriented matroid with symmetric difference of cardinality at most two. The existence of such a pair is guaranteed for any orientation if there exists a coline with more simple than multiple copoints. Here, a copoint $H$ (a hyperplane in the underlying matroid) is simple with respect to the coline $L$ if
$|H \setminus L|=1$. Call such a coline a positive coline. Goddyn et al.\ showed in \cite{goddynetal} that every simple bicircular matroid of rank at least $2$ has a positive coline. Bicircular matroids are transversal matroids, and the smallest minor-closed class that contains the transversal matroids is the class of gammoids. Therefore they conjectured in the same work that the same holds for every simple gammoid of rank at least $2$. Albrecht and Hochst\"{a}ttler \cite{Immanuelandwinfried, immanueldiss} later proved this to be true for rank $3$. However, Guzman-Pro and Hochst\"{a}ttler recently proved in \cite{santiago} that not every simple gammoid of rank at least $2$ has a positive coline, thereby disproving the conjecture of Goddyn et al.\ \cite{goddynetal}.
Every loopless oriented matroid of rank $3$ not containing $M(K_4)$ is 3-colorable \cite{onthechromaticnumber}. In addition, bicircular matroids of rank $\geq 2$ \cite{goddynetal} and simple lattice path matroids of rank $\geq 2$ \cite{Immanuelandwinfried1} have been shown to be $3$-colorable by proving the existence of a positive coline in each of them. In this paper, we add simple positroids of rank $\geq 3$ to this list in the same way.
We assume basic familiarity with matroid theory and oriented matroids. The standard references are~\cite{oxleybook,ombibel}.
\section{Preliminaries}
In this work we consider matroids to be pairs $M=(E,\mathcal{B})$ where $E=[n]=\{1,2,\dots,n\}$ and present them by their set of bases. For simplicity's sake, we omit curly brackets and commas when writing subsets. So rather than $\mathcal{B}=\{\{1,2\},\{1,3\}\}$, we write $\mathcal{B}=\{12,13\}$.
A \emph{path} in $D=(V,A)$ is a non-empty and non-repeating sequence $p_{1}p_{2}\dots p_{n}$ of vertices $p_i \in V$ such that for each $1 \leq i <n$, $(p_{i},p_{i+1}) \in A$. By convention, we shall denote $p_n$ by $p_{-1}$. Furthermore, the set of vertices traversed by a path $p$ shall be denoted by $p=\{p_{1},p_{2}, \dots,p_{n}\}$.
As mentioned in the introduction, gammoids are the main motivation behind this work, however they are not very well understood. In this section we define gammoids and positroids, and show that positroids are actually gammoids. For that we are going to need the following definition.
\begin{defn}
Let $D=(V,A)$ be a digraph, and $X,Y \subseteq V$. A \emph{routing} from $X$ to $Y$ in $D$ is a family of paths $R$, such that:
\begin{itemize}
\item for each $x$ in $X$ there is some $p \in R$ with $p_{1}=x$,
\item for all $p \in R$ the end vertex $p_{-1} \in Y$, and
\item for all $p,q \in R$, either $p=q$ or $p\cap q=\emptyset$.
\end{itemize}
A routing $R$ is called a \emph{linking} from $X$ to $Y$ if it is a routing onto $Y$, i.e.\ whenever $Y =\{p_{-1} \:| \: p \in R \}$. We write $X \rightrightarrows Y$ whenever there is a routing from $X$ to $Y$. Furthermore, we say that $v$ is linked to $w$ in a directed graph whenever there exists a directed path from $v$ to $w$ or from $w$ to $v$. \end{defn}
\begin{defn}[Gammoids]
Let $D=(V,A)$ be a digraph, $E\subseteq V$, and $T \subseteq V$. The \emph{gammoid} represented by $(D,T,E)$ is defined to be the matroid $\Gamma(D,T,E)=(E,\mathcal{I})$ where
\begin{equation*}
\mathcal{I}=\{X \subseteq E \; | \; \text{there is a routing} \; X \rightrightarrows T \; \text{in D} \}.
\end{equation*} \end{defn}
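To make the definition concrete, the following is a small brute-force sketch in Python (the digraph and target set are toy examples of our own, not taken from the paper): a set $X$ is independent in the gammoid precisely when there is a family of pairwise vertex-disjoint paths routing $X$ into $T$.

```python
# Brute-force check of gammoid independence on a toy digraph: X is
# independent iff some family of pairwise vertex-disjoint paths routes X
# into the target set T. Exponential; for tiny illustrative examples only.
from itertools import product

def simple_paths(adj, start, targets):
    """All simple directed paths from `start` that end in `targets`."""
    stack = [[start]]
    while stack:
        path = stack.pop()
        if path[-1] in targets:
            yield path
        for nxt in adj.get(path[-1], []):
            if nxt not in path:
                stack.append(path + [nxt])

def is_independent(adj, X, T):
    """Is there a routing X =>> T by pairwise vertex-disjoint paths?"""
    choices = [list(simple_paths(adj, x, T)) for x in X]
    if any(not c for c in choices):
        return False                      # some x cannot reach T at all
    for combo in product(*choices):
        verts = [v for p in combo for v in p]
        if len(verts) == len(set(verts)):  # pairwise vertex-disjoint
            return True
    return False

# Toy digraph: sources a, b; targets T = {c, d}
adj = {"a": ["c"], "b": ["c", "d"]}
print(is_independent(adj, {"a", "b"}, {"c", "d"}))  # True: a->c and b->d
print(is_independent(adj, {"a", "b"}, {"c"}))       # False: both need c
```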
Positroids were originally defined in \cite{postnikov} as the matroids whose bases are the column sets of nonzero maximal minors of a real matrix all of whose maximal minors are nonnegative. Thus positroids are matroids representable over the reals and therefore orientable. In this paper, however, we will use an equivalent definition via \reflectbox{L}-graphs.
\begin{defn}[\reflectbox{L}-diagram]
A \reflectbox{L}-\emph{diagram} is a lattice path with a finite collection of boxes lying above it, arranged in left-justified rows, with the row lengths in non-increasing order, where some of the boxes are filled with dots in such a way that the \reflectbox{L}-\emph{property} is satisfied: There is no empty box which has a dot above it in the same column and a dot to its left in the same row. \end{defn}
\begin{defn}[\reflectbox{L}-graph]
Starting from a \reflectbox{L}-diagram we apply the following steps:
\begin{itemize}
\item place nodes in the middle of each edge of this lattice path and label them from $1$ to $n$ starting at the Northeast corner of the path and ending at the Southwest corner.
\item Add edges directed to the right between any two consecutive nodes that lie in the
same row.
\item Add edges directed upwards between any two consecutive nodes that lie in the same column.
\end{itemize}
A \reflectbox{L}-\emph{graph} $G=(V,A)$ is the graph obtained from a \reflectbox{L}-diagram following the above steps, where $V$ is the set of vertices formed by the internal vertices (nodes inside boxes) and the external vertices (nodes placed on the lattice path), and $A$ is the set of directed edges added to the \reflectbox{L}-diagram as described above. Edges between external vertices are not in $A$. The \reflectbox{L}-property implies that $G$ is always planar. \end{defn}
The following provides an example of a \reflectbox{L}-diagram and a \reflectbox{L}-graph.
\begin{center}
\begin{figure}
\caption{A \reflectbox{L}-diagram $D$}
\label{L-diagram}
\caption{The corresponding \reflectbox{L}-graph obtained from $D$}
\label{le-graph}
\end{figure} \end{center}
According to \cite{postnikov}, the following definition of positroids is equivalent to all of the definitions of positroids found there. \begin{defn}[Positroids]
Let $B$ be the set of sinks of a \reflectbox{L}-graph, that is the set of external vertices labeling vertical edges of the lattice path and let $L$ be the set of all external vertices (sources and sinks of the \reflectbox{L}-graph). Let $\mathcal{P}\subseteq$ $L \choose r$ consist of $B$ together with all sets $I \in $ $L \choose r$ such that there exists a linking from $I \backslash B$ to $B \backslash I$ in the \reflectbox{L}-graph. This collection of subsets is a \emph{positroid}. All positroids may be realized uniquely in this way. \end{defn}
For example, the positroid built from the previous \reflectbox{L}-graph is \begin{equation*}
\mathcal{P} = \{235,236,245,246,256,257,267,356,357,367,456,457,467\}. \end{equation*}
It is easy to see that positroids are gammoids. Since there is a linking from $I \backslash B$ to $B \backslash I$ for every basis $I$, and $I$ and $B$ have the same size, there is also a linking from $I$ to $B$, and thus a routing from every $X \subseteq I$ to $B$. Therefore a positroid is a gammoid with representation $(D,B,E)$, i.e.\ $\Gamma(D,B,E)=(E,\mathcal{I})$ where \begin{equation*}
\mathcal{I}=\{X \subseteq E \; | \; \text{there is a routing} \; X \rightrightarrows B \; \text{in}\; D \}. \end{equation*}
\begin{rem}[Rank of a subset of a positroid]
In order to understand the proof of our main theorem, it is crucial to understand how to compute the rank of an arbitrary subset of a positroid using the \reflectbox{L}-graph. Recall that the rank of a set $I$
is the size of a largest independent subset of $I$; since independent sets are subsets of bases, we find them by the same method used to find a basis of the positroid, i.e.\ by searching for vertex-disjoint paths and counting them. An arbitrary subset $I$ of a positroid that contains $x$ vertical edges of the lattice path ($x$ sinks) has rank at least $x$. The rank of $I$ is $x+t$, where $t$ is the maximum number of pairwise vertex-disjoint paths from $I\backslash B$ to $B\backslash I$. \end{rem}
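As an illustration of this recipe, the following brute-force sketch (on an assumed toy digraph, not the paper's figure) computes $x+t$ directly from the definition; the maximum-disjoint-paths search is exponential and only meant for tiny examples.

```python
# Rank of a subset I of a positroid as x + t: x sinks of the lattice path
# contained in I, plus the maximum number t of pairwise vertex-disjoint
# paths from I \ B to B \ I. Brute force, for tiny toy examples only.
from itertools import combinations, product

def simple_paths(adj, start, targets):
    """All simple directed paths from `start` that end in `targets`."""
    stack = [[start]]
    while stack:
        path = stack.pop()
        if path[-1] in targets:
            yield path
        for nxt in adj.get(path[-1], []):
            if nxt not in path:
                stack.append(path + [nxt])

def max_disjoint_paths(adj, sources, targets):
    """Largest t such that t of the sources admit pairwise vertex-disjoint
    paths into the targets."""
    for t in range(len(sources), 0, -1):
        for subset in combinations(sources, t):
            choices = [list(simple_paths(adj, s, targets)) for s in subset]
            if any(not c for c in choices):
                continue
            for combo in product(*choices):
                verts = [v for p in combo for v in p]
                if len(verts) == len(set(verts)):
                    return t
    return 0

def rank(adj, I, B):
    x = len(I & B)                               # sinks of B contained in I
    t = max_disjoint_paths(adj, I - B, B - I)    # disjoint paths I\B -> B\I
    return x + t

# Toy digraph (assumed): sources a, b; sinks B = {c, d}
adj = {"a": ["c"], "b": ["c", "d"]}
print(rank(adj, {"a", "b", "c"}, {"c", "d"}))  # 2: sink c, plus path b->d
```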
In this work we show that simple positroids are 3-colorable, by proving the existence of a positive coline. For that, we recall the following definitions.
\begin{defn}[Copoints and Colines]
Let $M$ be a matroid. A \emph{copoint} of $M$ is a flat of codimension $1$, that is a hyperplane. A \emph{coline} is a flat of codimension 2. If $L$ is a coline of $M$, $H$ a copoint of $M$ and $L \subseteq H$, then we
say that $H$ is a \emph{copoint on} $L$. The copoint is \emph{simple} (with respect to $L$) if $|H \backslash L| = 1$, otherwise it is \emph{multiple}. A coline is \emph{positive} if there are more simple than multiple copoints on $L$. \end{defn}
\section{Loops, coloops and parallel elements in \reflectbox{L}-graphs}
In the \reflectbox{L}-graph, a loop of the positroid is represented by a source (an external vertex labelling a horizontal edge of the lattice path) that is not linked to anything. A coloop is represented by a sink (an external vertex labelling a vertical edge of the lattice path) that is not linked to anything. Figure \ref{loopcoloop} below presents the \reflectbox{L}-graph of a positroid containing loops and a coloop.
Since the set of all sinks in the \reflectbox{L}-graph is a base of the positroid, a pair of elements can be parallel if both elements are sources or if one is a source and the other is a sink. Let $h_i$ and $h_j$ be two sources in the \reflectbox{L}-graph of a positroid $\mathcal{P}$ such that $h_i$ is to the right of $h_j$, and let $w$ be the first internal vertex above $h_i$.
\begin{center}
\begin{figure}
\caption{The \reflectbox{L}-graph of a positroid in which the sources 1 and 4 are loops, and the sink 5 is a coloop.}
\label{loopcoloop}
\caption{The \reflectbox{L}-graph of a positroid in which \{3,4\} and \{6,7\} are two parallel pairs.}
\label{parallel}
\end{figure} \end{center}
\textbullet \; $h_i$ and $h_j$ are parallel if and only if any directed path starting from $h_j$ must go through $w$.\\
\textbullet \; A source $h$ and a sink $v$ are parallel if and only if $h$ is not linked to any sink other than $v$. \\
Figure \ref{parallel} above presents the \reflectbox{L}-graph of a positroid with parallel elements. In the rest of this work, we consider only simple positroids, i.e.\ positroids with no loops and no parallel elements. Therefore we assume that each source in the \reflectbox{L}-graph is linked to at least two sinks and that no two sources are parallel.
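The criteria above can be phrased purely in terms of the linking relation. The following Python sketch is our own illustration (not part of the paper): it represents an \reflectbox{L}-graph abstractly as a map from each source to the set of sinks it is linked to, a representation we assume for convenience. This captures loops, coloops, and source--sink parallel pairs; source--source parallelism depends on the internal paths through the vertex $w$ and is not checked here.

```python
def loops(linking):
    """Sources linked to no sink are loops."""
    return {h for h, sinks in linking.items() if not sinks}

def coloops(linking, all_sinks):
    """Sinks linked from no source are coloops."""
    linked = set().union(*linking.values()) if linking else set()
    return set(all_sinks) - linked

def source_sink_parallel_pairs(linking):
    """A source h and a sink v are parallel iff h is linked only to v."""
    return {(h, next(iter(sinks)))
            for h, sinks in linking.items() if len(sinks) == 1}
```

For instance, with `linking = {'h1': set(), 'h2': {'v1'}, 'h3': {'v1', 'v2'}}` and sinks `{'v1', 'v2', 'v3'}`, the source `h1` is a loop, the sink `v3` is a coloop, and `('h2', 'v1')` is a parallel pair, so this positroid is not simple.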
\section{Connectivity of positroids}
In order to characterise connectivity of positroids in \reflectbox{L}-graphs, we need the following definitions.
\begin{defn}
Let $\mathcal{P}$ be a positroid and $D$ its \reflectbox{L}-diagram. A \emph{level} in $D$ is a collection of consecutive vertical edges of the lattice path together with the consecutive horizontal edges that follow them. \end{defn} For example, in Figure \ref{parallel}, the edges of the lattice path labelled by $2,3$ and $4$ form a level, and the edges of the lattice path labelled by $5,6$ and $7$ form another level.
\begin{defn}
Let $\mathcal{P}$ be a positroid and $D$ its \reflectbox{L}-diagram. An \emph{isolated block} in $D$ is a level or a collection of levels in which the sources of this block are only connected to the sinks in it.
We can write this formally in the following way.
\begin{equation*}
B \text{ is an isolated block in }D \iff \begin{cases}
\forall v \text{ a sink} \in B, \forall h \text{ a source} \notin B, h\text{ is not linked to }v, \text{ and}\\
\forall h \text{ a source} \in B, \forall v \text{ a sink} \notin B, h \text{ is not linked to }v.
\end{cases}
\end{equation*} \end{defn}
The following figures are examples of \reflectbox{L}-diagrams with isolated blocks.
\begin{center}
\begin{minipage}{.45\textwidth}
\centering
\begin{tikzpicture}[scale=0.8]
\draw[help lines,line width=.4pt,step=1] (-1,-2) grid (0,3);
\draw[help lines,line width=.4pt,step=1] (0,-1) grid (2,3);
\draw[help lines,line width=.4pt,step=1] (2,2) grid (3,3);
\draw[line width=0.5mm] (3,3)--(3,1)--(2,1)--(2,-1)--(0,-1)--(0,-2)--(-1,-2);
\draw[thick,->](2.5,2.5) -- (3,2.5);
\draw[thick,->](2.5,1.5) -- (3,1.5);
\draw[thick,->](1.5,0.5) -- (2,0.5);
\draw[thick,->](1.5,-0.5) -- (1.5,0.5);
\draw[thick,->](0.5,0.5) -- (1.5,0.5);
\draw[thick,->](2.5,1) -- (2.5,1.5);
\draw[thick,->](2.5,1.5) -- (2.5,2.5);
\draw[thick,->](0.5,-0.5) -- (0.5,0.5);
\draw[thick,->](0.5,-1) -- (0.5,-0.5);
\draw[thick,->](0.5,-0.5) -- (1.5,-0.5);
\draw[thick,->](1.5,-1) -- (1.5,-0.5);
\draw[thick,->](1.5,-0.5) -- (2,-0.5);
\draw[thick,->](-0.5,-2) -- (-0.5,-1.5);
\draw[thick,->](-0.5,-1.5) -- (0,-1.5);
\draw[thick,->](-0.5,-1.5) -- (-0.5,1.5);
\draw[thick,->](-0.5,1.5) -- (2.5,1.5);
\draw(3,2.5) node[right] {1};
\draw(3,1.5) node[right] {2};
\draw(2.5,1) node[below] {3};
\draw(2,0.5) node[anchor=north west] {4};
\draw(2,-0.5) node[right] {5};
\draw(0,-1.5) node[anchor=north west] {8};
\draw(1.5,-1) node[below] {6};
\draw(0.5,-1) node[below] {7};
\draw(-0.5,-2) node[below] {9};
\foreach \Point in {(2,-0.5),(0.5,-0.5),(1.5,-0.5),(-0.5,-1.5),(-0.5,1.5),(-0.5,-2),(0.5,-1),(1.5,-1),(2,0.5),(2.5,2.5),(3,2.5),(3,1.5),(2.5,1),(2.5,1.5),(1.5,0.5),(0.5,0.5),(0,-1.5)}{
\node at \Point {\textbullet};}
\end{tikzpicture}
\captionof{figure}{A \reflectbox{L}-diagram with an isolated block formed by one level: edges labelled by 4,5,6 and 7. The other isolated block consists of edges labelled by 1,2,3,8 and 9.}
\end{minipage} \hspace{1cm}
\begin{minipage}{.45\textwidth}
\centering
\begin{tikzpicture}[scale=0.8]
\draw[help lines,line width=.4pt,step=1] (-1,-2) grid (0,3);
\draw[help lines,line width=.4pt,step=1] (0,-1) grid (1,3);
\draw[help lines,line width=.4pt,step=1] (1,0) grid (2,3);
\draw[help lines,line width=.4pt,step=1] (2,2) grid (3,3);
\draw[line width=0.5mm] (3,3)--(3,2)--(2,2)--(2,0)--(1,0)--(1,-1)--(0,-1)--(0,-2)--(-1,-2);
\draw[thick,->](2.5,2.5) -- (3,2.5);
\draw[thick,->](2.5,2) -- (2.5,2.5);
\draw[thick,->](1.5,0.5) -- (2,0.5);
\draw[thick,->](1.5,0) -- (1.5,0.5);
\draw[thick,->](0.5,0.5) -- (1.5,0.5);
\draw[thick,->](1.5,0.5) -- (1.5,1.5);
\draw[thick,->](1.5,1.5) -- (2,1.5);
\draw[thick,->](0.5,-0.5) -- (0.5,0.5);
\draw[thick,->](0.5,-1) -- (0.5,-0.5);
\draw[thick,->](0.5,-0.5) -- (1,-0.5);
\draw[thick,->](-0.5,-2) -- (-0.5,-1.5);
\draw[thick,->](-0.5,-1.5) -- (0,-1.5);
\draw[thick,->](-0.5,-1.5) -- (-0.5,2.5);
\draw[thick,->](-0.5,2.5) -- (2.5,2.5);
\draw(3,2.5) node[right] {1};
\draw(2.5,2) node[below] {2};
\draw(2,1.5) node[anchor=north west] {3};
\draw(2,0.5) node[right] {4};
\draw(1.5,0) node[below] {5};
\draw(1,-0.5) node[anchor=north west] {6};
\draw(0.5,-1) node[below] {7};
\draw(0,-1.5) node[anchor=north west] {8};
\draw(-0.5,-2) node[below] {9};
\foreach \Point in {(2,1.5),(1.5,0),(1.5,1.5),(0.5,-0.5),(-0.5,-1.5),(-0.5,2.5),(-0.5,-2),(0.5,-1),(1,-0.5),(2,0.5),(2.5,2.5),(3,2.5),(2.5,2),(1.5,0.5),(0.5,0.5),(0,-1.5)}{
\node at \Point {\textbullet};}
\end{tikzpicture}
\captionof{figure}{A \reflectbox{L}-diagram with two isolated blocks. Block 1: edges labelled by 3,4,5,6,7. Block 2: edges labelled by 1,2,8,9.}
\end{minipage} \end{center}
\begin{rem}
Note that a coloop, just like a loop, is by itself an isolated block. \end{rem}
We next prove that isolated blocks are strongly related to the connectivity of a positroid. \begin{lemma}
A positroid $\mathcal{P}$ is connected if and only if there exist no isolated blocks in its \reflectbox{L}-diagram. Equivalently, a positroid $\mathcal{P}$ is not connected if and only if there exist isolated blocks in its \reflectbox{L}-diagram $D$. \end{lemma}
\begin{proof}
In the following we prove that a positroid is not connected if and only if there are isolated blocks. We start with the necessary condition. Let $B_1, \dots, B_k$ be the isolated blocks in $D$. These blocks have disjoint sets of elements and therefore partition the ground set of $P$, so $P$ can be written as the direct sum $P=B_{1} \oplus \dots \oplus B_{k}$. Thus, $P$ is not connected.
For the sufficient condition, we now assume that $P$ is not connected. This means that $P=P_{1} \oplus \dots \oplus P_k$. We now show that the $P_i$'s are the isolated blocks of $D$. Let $B(P_i)$ (resp.\ $H(P_i)$) be the set of sinks (resp.\ sources) of the block $P_i$. It is well known (\cite{oxleybook}) that a matroid is connected if and only if every pair of elements belongs to a common circuit. This implies that a positroid is not connected if and only if there exists a pair of elements that do not belong to a common circuit. It is also known that a circuit of a disconnected matroid is a circuit in only one of its connected components \cite{oxleybook}. Therefore, for all $e \in P_i$ and $f \in P_j$ with $i \neq j$, $e$ and $f$ do not belong to a common circuit. We now consider two cases. If $e \in B(P_i)$, then for all $f \in H(P_j)$ ($i \neq j$), $f$ is not linked to $e$, because otherwise $e,f$ and all sinks linked to $f$ would form a circuit, contradicting the fact that $e$ and $f$ do not belong to a common circuit. Similarly, if $e \in H(P_i)$, then for all $f \in B(P_j)$ ($i \neq j$), $e$ is not linked to $f$, because otherwise $e,f$ and all sinks linked to $e$ would form a circuit. Thus $P_i$ is an isolated block in $D$ for all $1 \leq i \leq k$.
\end{proof}
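As a hedged computational illustration of the lemma (our own sketch, not from the paper): abstracting an \reflectbox{L}-graph by its source-to-sinks linking map, the lemma says that, in the absence of loops and coloops, the positroid is connected exactly when the bipartite graph of the linking relation has a single connected component. All names below are our own.

```python
from collections import defaultdict

def is_connected_positroid(linking, all_sinks):
    """Connectivity test suggested by the lemma: build the bipartite
    graph on sources and sinks given by the linking relation and check
    that it has one connected component.  Assumes no loops or coloops,
    i.e. every element is linked to at least one other element."""
    graph = defaultdict(set)
    for h, sinks in linking.items():
        for v in sinks:
            graph[h].add(v)
            graph[v].add(h)
    elements = set(linking) | set(all_sinks)
    if not elements:
        return True
    # depth-first search from an arbitrary element
    start = next(iter(elements))
    seen, stack = {start}, [start]
    while stack:
        for y in graph[stack.pop()] - seen:
            seen.add(y)
            stack.append(y)
    return seen == elements
```

For example, `{'a': {'x'}, 'b': {'x', 'y'}}` is connected (both sources meet at the sink `x`), while `{'a': {'x'}, 'b': {'y'}}` splits into the two isolated blocks `{a, x}` and `{b, y}`.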
\begin{rem}
In \cite{Bonin2010}, J. Bonin proved that a lattice path matroid $L$ of rank $rk(L)$ is connected if and only if it has a spanning circuit, that is, a circuit of rank $rk(L)$. Unlike lattice path matroids, a positroid can be connected without having a spanning circuit. In the following, we present a counterexample (Figure \ref{counterex}) showing a connected positroid and then prove the absence of a spanning circuit in it.
\begin{center}
\begin{figure}
\caption{A connected positroid with no spanning circuit.}
\label{counterex}
\end{figure}
\end{center}
The positroid in Figure \ref{counterex} is connected since there are no isolated blocks in its \reflectbox{L}-diagram. Suppose this positroid has a spanning circuit $C$. Then $C$ must contain both $6$ and $7$: if only one of them were in $C$, then its removal would prevent us from reaching full rank, so the rank of $C$ would decrease after its removal. Similarly, both $4$ and $5$ must be in $C$. So far, $C$ contains the set $\{4,5,6,7\}$. However, this set is not independent: it has $4$ elements but rank $3$. Therefore it cannot be properly contained in any circuit, which contradicts our assumption that $C$ is a spanning circuit. \end{rem}
\section{Main theorem and proof} Before stating our main theorem and its proof, we mention the following propositions, which explain why we may assume connectivity of the positroids in the proof.
\begin{prop}[\cite{ardila} Proposition 3.5]\label{ardila}
Positroids are closed under taking minors and duals. \end{prop}
\begin{prop} \label{prop}
If $\mathcal{M}$ is a class of matroids, closed under taking connected components, such that all its connected members of rank at least $2$ have a positive coline, then every element of $\mathcal{M}$ has a positive coline. \end{prop}
\begin{proof}
Let $M$ be a matroid of rank $r\ge 2$ in $\mathcal{M}$ such that $M$
is not connected. If $M$ consists solely of coloops, all copoints are
simple. Thus we may assume that $M$ can be written as the direct sum
$M=M_{1}\oplus M_{2}$ of two matroids $M_1$ of rank $r_1$ and $M_2$
of rank $r_2$, where $M_1$ is connected and $r_1 \ge 2$. Thus, $M_1$ has
a positive coline $L$. It is easy to see that $L \cup M_2$ is a positive
coline of $M$, because $L \cup M_2$ is a flat of rank $r-2$ and the
copoints on it correspond to the copoints of $M_1$ on $L$, preserving
simplicity and multiplicity. Namely, if $C_1$ resp.\ $C_2$ is a simple resp.\ multiple
copoint on $L$, then $C_{1} \cup M_2$ resp.\ $C_{2} \cup M_2$ is a simple resp.\ a
multiple copoint on $L \cup M_2$. \end{proof}
Now, we present and prove our main theorem.
\begin{thm} \label{positivecoline}
Every simple positroid of rank $r \geq 3$ has a positive coline. \end{thm}
\begin{proof}
Due to Proposition \ref{prop}, we may assume that the positroid is connected, i.e.\
there exist no isolated blocks (in particular, no coloops or loops). Moreover, since we only consider simple
positroids, i.e.\ positroids with no
loops and no parallel elements, each source must be linked to at least two sinks; otherwise the source and the unique sink linked to it would form a parallel pair. Therefore, the first two edges of the lattice path in the \reflectbox{L}-diagram of the positroid have to be vertical.
During our search for a positive coline we will only be interested in pairs of consecutive sinks followed by one or more sources. The closure of the set of all sinks except for these two consecutive sinks is almost always the positive coline we are looking for. We will prove this shortly and give the positive coline in the remaining case. Notice that at least one such pair exists, namely the first two sinks. Let $V=\{v_{1},...,v_{r}\}$ be the set of sinks and $H=\{h_{1},...,h_{n-r}\}$ be the set of sources, and let $v_i$ and $v_{i+1}$ be the last two consecutive sinks of the positroid.
Let us first look at the case where $v_i$ and $v_{i+1}$ are $v_{r-1}$ and $v_{r}$, that is, when $v_i$ and $v_{i+1}$ are the last two sinks of the positroid.
\begin{center}
\begin{minipage}{.25\textwidth}
\begin{tikzpicture}[scale=0.8]
\draw[help lines,line width=.4pt,step=1] (-4,-2) grid (0,0);
\draw[line width=0.5mm] (0,0)--(0,-2)--(-4,-2);
\draw[thick,->](-0.5,-2) -- (-0.5,-1.5);
\draw[thick,->](-0.5,-1.5) -- (0,-1.5);
\draw[thick,->](-0.5,-1.5) -- (-0.5,-0.5);
\draw[thick,->](-0.5,-0.5) -- (0,-0.5);
\draw[thick,->](-1.5,-2) -- (-1.5,-1.5);
\draw[thick,->](-1.5,-1.5) -- (-0.5,-1.5);
\draw[thick,->](-1.5,-1.5) -- (-1.5,-0.5);
\draw[thick,->](-1.5,-0.5) -- (-0.5,-0.5);
\draw[thick,->](-2.5,-2) -- (-2.5,-1.5);
\draw[thick,->](-3.5,-2) -- (-3.5,-1.5);
\draw[thick,->](-2.5,-1.5) -- (-1.5,-1.5);
\draw[thick,->](-3.5,-1.5) -- (-2.5,-1.5);
\draw[thick](-2.5,-1.5) -- (-2.5,1);
\draw[thick](-3.5,-1.5) -- (-3.5,1);
\draw[thick, dashed](-1.5,-0.5) -- (-1.5,1);
\draw[thick, dashed](-0.5,-0.5) -- (-0.5,1);
\draw(0,-0.5) node[right] {$v_{r-1}$};
\draw(0,-1.5) node[right] {$v_{r}$};
\draw(-0.5,-2) node[below] {$h_{i}$};
\draw(-1.5,-2) node[below] {$h_{j}$};
\draw(-2.5,-2) node[below] {$h_{j+1}$};
\draw(-3.5,-2) node[below] {$h_{n-r}$};
\foreach \Point in {(-3.5,-2),(-2.5,-2),(-1.5,-1.5),(-0.5,-0.5),(-0.5,-1.5),(-1.5,-2),(-0.5,-2),(0,-1.5),(0,-0.5),(-2.5,-1.5),(-3.5,-1.5),(-1.5,-0.5)}{
\node at \Point {\textbullet};}
\end{tikzpicture}
\captionof{figure}{}
\label{vr}
\end{minipage}\hspace{2.5cm}
\begin{minipage}{.25\textwidth}
\begin{tikzpicture}[scale=0.8]
\draw[help lines,line width=.4pt,step=1] (-4,-2) grid (0,0);
\draw[line width=0.5mm] (0,0)--(0,-2)--(-4,-2);
\draw[thick,->](-0.5,-2) -- (-0.5,-1.5);
\draw[thick,->](-0.5,-1.5) -- (0,-1.5);
\draw[thick,->](-0.5,-1.5) -- (-0.5,-0.5);
\draw[thick,->](-0.5,-0.5) -- (0,-0.5);
\draw[thick,->](-1.5,-2) -- (-1.5,-1.5);
\draw[thick,->](-1.5,-1.5) -- (-0.5,-1.5);
\draw[thick,->](-1.5,-1.5) -- (-1.5,-0.5);
\draw[thick,->](-1.5,-0.5) -- (-0.5,-0.5);
\draw[thick,->](-2.5,-2) -- (-2.5,-0.5);
\draw[thick,->](-3.5,-2) -- (-3.5,-0.5);
\draw[thick,->](-2.5,-0.5) -- (-1.5,-0.5);
\draw[thick,->](-3.5,-0.5) -- (-2.5,-0.5);
\draw[thick](-2.5,-1.5) -- (-2.5,1);
\draw[thick](-3.5,-1.5) -- (-3.5,1);
\draw[thick, dashed](-1.5,-0.5) -- (-1.5,1);
\draw[thick, dashed](-0.5,-0.5) -- (-0.5,1);
\draw(0,-0.5) node[right] {$v_{r-1}$};
\draw(0,-1.5) node[right] {$v_{r}$};
\draw(-0.5,-2) node[below] {$h_{i}$};
\draw(-1.5,-2) node[below] {$h_{j}$};
\draw(-2.5,-2) node[below] {$h_{j+1}$};
\draw(-3.5,-2) node[below] {$h_{n-r}$};
\foreach \Point in {(-3.5,-2),(-2.5,-2),(-1.5,-1.5),(-0.5,-0.5),(-0.5,-1.5),(-1.5,-2),(-0.5,-2),(0,-1.5),(0,-0.5),(-2.5,-0.5),(-3.5,-0.5),(-1.5,-0.5)}{
\node at \Point {\textbullet};}
\end{tikzpicture}
\captionof{figure}{}
\label{vr-1}
\end{minipage}
\end{center}
In this case, we show that
$L=cl(v_{1},v_{2},...,v_{r-2})$, the closure of the first
$r-2$ sinks, is a positive coline. Note that $v_{r-1}$ and $v_r$ must be followed by one or several sources $h_{i}, \dots, h_{n-r}$, since otherwise they are coloops. As we can see we
have different possibilities for the way the last sources
can be linked. However, there must be a linking from $h_i$
to $v_r$, otherwise $v_r$ is a coloop, as well as a linking from $h_i$ to $v_{r-1}$: indeed, $h_i$ must be linked to another sink (otherwise $h_i$ and $v_r$ are parallel elements), and if this sink were not $v_{r-1}$, then $v_{r-1}$ would be a coloop. To the left of $h_i$ we can have other sources similar to $h_i$, meaning sources
linked to both $v_r$ and $v_{r-1}$ in the same way $h_i$ is. Let $h_j$ ($j \geq i$) be the last such source. Notice that if a
source after $h_j$ is linked to neither $v_r$ nor $v_{r-1}$, but
only to higher sinks, then it is included in
$L$, and therefore we may ignore such sources and assume
they do not exist. Thus, after $h_j$ we have 3 possibilities: sources linked to $v_r$ (Figure \ref{vr}), sources
linked only to $v_{r-1}$ and not to $v_r$ (Figure \ref{vr-1}), or no further sources at all, i.e.\ $h_j$ is the last source. The important thing to notice here is that we cannot have some sources linked to $v_r$ and others linked only to $v_{r-1}$ as in Figure \ref{wrong}, since then $h_{j}$ and $h_{j+1}$ would be parallel elements.
\begin{center}
\begin{tikzpicture}[scale=0.7]
\draw[help lines,line width=.4pt,step=1] (-4,-2) grid (0,0);
\draw[line width=0.5mm] (0,0)--(0,-2)--(-4,-2);
\draw[thick,->](-0.5,-2) -- (-0.5,-1.5);
\draw[thick,->](-0.5,-1.5) -- (0,-1.5);
\draw[thick,->](-0.5,-1.5) -- (-0.5,-0.5);
\draw[thick,->](-0.5,-0.5) -- (0,-0.5);
\draw[thick,->](-1.5,-2) -- (-1.5,-1.5);
\draw[thick,->](-1.5,-1.5) -- (-0.5,-1.5);
\draw[thick,->](-1.5,-1.5) -- (-1.5,-0.5);
\draw[thick,->](-1.5,-0.5) -- (-0.5,-0.5);
\draw[thick,->](-2.5,-2) -- (-2.5,-1.5);
\draw[thick,->](-3.5,-2) -- (-3.5,-0.5);
\draw[thick,->](-2.5,-1.5) -- (-1.5,-1.5);
\draw[thick,->](-3.5,-0.5) -- (-1.5,-0.5);
\draw[thick, dashed](-3.5,-0.5) -- (-3.5,1);
\draw[thick, dashed](-1.5,-0.5) -- (-1.5,1);
\draw[thick, dashed](-0.5,-0.5) -- (-0.5,1);
\draw(0,-0.5) node[right] {$v_{r-1}$};
\draw(0,-1.5) node[right] {$v_{r}$};
\draw(-0.5,-2) node[below] {$h_{i}$};
\draw(-1.5,-2) node[below] {$h_{j}$};
\draw(-2.5,-2) node[below] {$h_{j+1}$};
\draw(-3.5,-2) node[below] {$h_{n-r}$};
\foreach \Point in {(-3.5,-2),(-2.5,-2),(-1.5,-1.5),(-0.5,-0.5),(-0.5,-1.5),(-1.5,-2),(-0.5,-2),(0,-1.5),(0,-0.5),(-2.5,-1.5),(-3.5,-0.5),(-1.5,-0.5)}{
\node at \Point {\textbullet};}
\end{tikzpicture}
\captionof{figure}{}
\label{wrong}
\end{center}
We now discuss the 3 possibilities. $L$ is a flat of rank $r-2$, so it
is a coline; it contains all sinks except for
$v_{r-1}$ and $v_{r}$, together with all sources that are linked to
neither $v_{r-1}$ nor $v_{r}$. Let us now look at the copoints
on $L$.
Possibility 1: All sources are linked to $v_r$.\\
In this case (Figure \ref{vr}), $L \cup v_{r-1}$ is of rank $r-1$
and a flat because adding anything to it will increase its
rank since all sources are linked to $v_r$. Therefore
$L \cup v_{r-1}$ is a simple copoint. Similarly, $L \cup v_{r}$ is a
simple copoint. Additionally, if $j \neq i$, then $L \cup h_i$ is a simple copoint as
well since all other sources can get to $v_{r-1}$ with no
intersection with the path from $h_i$ to $v_r$. The only
possible multiple copoint is $L \cup h_{j} \cup \dots \cup h_{n-r}$ since
$h_{j}, h_{j+1},\dots,h_{n-r}$ cannot be linked to
$v_{r-1}$ and $v_{r}$ without intersection. If $j=i$, which means that $h_i$ is the last source linked to both $v_r$ and $v_{r-1}$ as shown above, then $L \cup h_i$ would not be simple and we would only have two simple copoints. Note that the more sources similar to $h_i$ there are, in other words the bigger $j$ is, the more simple copoints we get. So $L$ is a positive coline since it has at least two simple copoints and at most one
multiple copoint.
Possibility 2: There are sources which are only linked to $v_{r-1}$.\\
In this case (Figure \ref{vr-1}), $L \cup v_{r}$ is of rank $r-1$
and a flat because adding anything to it will increase its
rank, since all sources are linked to $v_{r-1}$.
Therefore $L \cup v_{r}$ is a simple copoint. Similarly, $L \cup h_{i}$
and $L \cup h_{j}$ are simple copoints. The only possible
multiple copoint is
$L \cup v_{r-1} \cup h_{j+1} \cup \dots \cup h_{n-r}$, where $h_{j+1}$ is the first source which is not linked to $v_r$. So $L$ is a positive coline in this case as well.
Possibility 3: $h_j$ is the last source.\\
If $h_j$ is the last source, then all sources after $v_r$ are linked to both $v_r$ and $v_{r-1}$ in the same way $h_i$ is. By calculations similar to the previous two cases, it is easy to deduce that in this case we would only have simple copoints, namely $L \cup v_{r-1}$, $L \cup v_{r}$, $L \cup h_{i},\dots, L \cup h_{n-r}$, and no multiple copoint.\\
In fact, $L$ can still be a positive coline of the positroid, even if $v_i$ and $v_{i+1}$ were not the last two sinks of the positroid. We next explain when and why.
Suppose $v_{i}$ and $v_{i+1}$ are not the last two sinks of the positroid.
\begin{center}
\begin{minipage}{.45\textwidth}
\centering
\begin{tikzpicture}[scale=0.8]
\draw[help lines,line width=.4pt,step=1] (-1,-1) grid (0,0);
\draw[help lines,line width=.4pt,step=1] (-1,0) grid (2,3);
\draw[help lines,line width=.4pt,step=1] (2,1) grid (3,3);
\draw[line width=0.5mm] (3,3)--(3,1)--(2,1)--(2,0)--(0,0)--(0,-1)--(-1,-1);
\draw[thick,->](2.5,2.5) -- (3,2.5);
\draw[thick,->](2.5,1.5) -- (3,1.5);
\draw[thick,->](1.5,0.5) -- (2,0.5);
\draw[thick,->](1.5,0.5) -- (1.5,1.5);
\draw[thick,->](1.5,0) -- (1.5,0.5);
\draw[thick,->](0.5,0.5) -- (1.5,0.5);
\draw[thick,->](2.5,1) -- (2.5,1.5);
\draw[thick,->](2.5,1.5) -- (2.5,2.5);
\draw[thick,->](1.5,1.5) -- (2.5,1.5);
\draw[thick,->](0.5,0) -- (0.5,0.5);
\draw[thick,->](0.5,0.5) -- (0.5,1.5);
\draw[thick,->](0.5,1.5) -- (1.5,1.5);
\draw[thick,->](-0.5,-1) -- (-0.5,-0.5);
\draw[thick,->](-0.5,-0.5) -- (0,-0.5);
\draw[thick,->](-0.5,-0.5) -- (-0.5,2.5);
\draw[thick,->](-0.5,2.5) -- (2.5,2.5);
\draw(3,2.5) node[right] {$v_{i}$};
\draw(3,1.5) node[right] {$v_{i+1}$};
\draw(2.5,1) node[below] {$h_{j}$};
\draw(2,0.5) node[anchor=north west] {$v_{i+2}$};
\draw(0,-0.5) node[anchor=north west] {$v_{i+3}$};
\draw(1.5,0) node[below] {$h_{j+1}$};
\draw(-0.5,-1) node[below] {$h_{j+3}$};
\foreach \Point in {(-0.5,-0.5),(0.5,1.5),(1.5,1.5),(-0.5,-1),(0.5,0),(0,-0.5),(1.5,0),(2,0.5),(2.5,2.5),(3,2.5),(3,1.5),(2.5,1),(2.5,1.5),(1.5,0.5),(0.5,0.5),(-0.5,2.5)}{
\node at \Point {\textbullet};}
\end{tikzpicture}
\captionof{figure}{}
\label{case21}
\end{minipage}
\begin{minipage}{.45\textwidth}
\centering
\begin{tikzpicture}[scale=0.8]
\draw[help lines,line width=.4pt,step=1] (-1,-1) grid (0,0);
\draw[help lines,line width=.4pt,step=1] (-1,0) grid (2,3);
\draw[help lines,line width=.4pt,step=1] (2,1) grid (5,3);
\draw[line width=0.5mm] (5,3)--(5,1)--(2,1)--(2,0)--(0,0)--(0,-1)--(-1,-1);
\draw[thick,->](2.5,2.5) -- (3.5,2.5);
\draw[thick,->](2.5,1.5) -- (3.5,1.5);
\draw[thick,->](1.5,0.5) -- (2,0.5);
\draw[thick,->](1.5,0.5) -- (1.5,1.5);
\draw[thick,->](1.5,0) -- (1.5,0.5);
\draw[thick,->](0.5,0.5) -- (1.5,0.5);
\draw[thick,->](2.5,1) -- (2.5,1.5);
\draw[thick,->](2.5,1.5) -- (2.5,2.5);
\draw[thick,->](1.5,1.5) -- (2.5,1.5);
\draw[thick,->](0.5,0) -- (0.5,0.5);
\draw[thick,->](0.5,0.5) -- (0.5,2.5);
\draw[thick,->](-0.5,-1) -- (-0.5,-0.5);
\draw[thick,->](-0.5,-0.5) -- (0,-0.5);
\draw[thick,->](-0.5,-0.5) -- (-0.5,2.5);
\draw[thick,->](-0.5,2.5) -- (0.5,2.5);
\draw[thick,->](0.5,2.5) -- (2.5,2.5);
\draw[thick,->](3.5,1) -- (3.5,1.5);
\draw[thick,->](3.5,1.5) -- (3.5,2.5);
\draw[thick,->](3.5,1.5) -- (4.5,1.5);
\draw[thick,->](3.5,2.5) -- (4.5,2.5);
\draw[thick,->](4.5,1) -- (4.5,1.5);
\draw[thick,->](4.5,1.5) -- (4.5,2.5);
\draw[thick,->](4.5,1.5) -- (5,1.5);
\draw[thick,->](4.5,2.5) -- (5,2.5);
\draw(5,2.5) node[right] {$v_{i}$};
\draw(5,1.5) node[right] {$v_{i+1}$};
\draw(4.5,1) node[below] {$h_{j}$};
\draw(2,0.5) node[anchor=north west] {$v_{i+2}$};
\draw(0,-0.5) node[anchor=north west] {$v_{i+3}$};
\draw(1.5,0) node[below] {$h_{j+3}$};
\draw(-0.5,-1) node[below] {$h_{j+5}$};
\foreach \Point in {(0.5,2.5),(3.5,1.5),(3.5,2.5),(4.5,1.5),(4.5,2.5),(3.5,1),(4.5,1),(-0.5,-0.5),(1.5,1.5),(-0.5,-1),(0.5,0),(0,-0.5),(1.5,0),(2,0.5),(2.5,2.5),(5,2.5),(5,1.5),(2.5,1),(2.5,1.5),(1.5,0.5),(0.5,0.5),(-0.5,2.5)}{
\node at \Point {\textbullet};}
\end{tikzpicture}
\captionof{figure}{}
\label{case22}
\end{minipage}
\end{center}
As we saw previously, no matter how the sources directly following $v_r$ are linked, the closure of all sinks except for $v_i$ and $v_{i+1}$ is always a positive coline. Problems only arise when something similar to Figure \ref{wrong} happens. As mentioned before, we cannot have, at the same time, sources linked to $v_r$ and others linked only to $v_{r-1}$ if these sources are on the same level, as in Figure \ref{wrong}. However, this case can happen when such sources are on different levels. We can notice in Figure \ref{case21} that $h_{j+1}$ is linked to $v_{i+1}$ (and intersects the path from $h_j$ to $v_{i+1}$) and $h_{j+3}$ is linked only to $v_{i}$. This does not allow the closure $cl(V-\{v_{i},v_{i+1}\})$ to be a positive coline since
$L \cup v_{i+1}$ is the only simple copoint and
$L \cup v_{i} \cup h_{j+3}$ and $L \cup h_{j} \cup h_{j+1} \cup h_{j+2}$ are the multiple copoints. Nonetheless, we prove that in this case the closure $L=cl(V-\{v_{i},v_{i+2}\})$ is a positive coline. We notice here that any source linked to $v_{i+2}$ is also linked to $v_i$, which makes $L \cup v_{i+2}$ a simple copoint. Moreover, any other source next to $h_{j+1}$ and on the same level (if it exists) will also be linked to $v_i$ without intersecting the path from $h_{j+1}$ to $v_{i+2}$, no matter how the linking is, because otherwise it would form a parallel pair with $h_{j+1}$. This leads to $L \cup h_{j+1}$ and $L \cup h_{j+2}$ being simple copoints as well. The only multiple copoint in this case is $L \cup v_{i} \cup h_{j} \cup h_{j+3}$, which, in a more general case, will include all sources linked to $v_i$ and not to $v_{i+2}$. This may also include $h_{j+2}$, if it is not linked to $v_{i+2}$. Note here that if $h_{j+1}$ is not linked to $v_{i+1}$, then we are back in the first case, where $L=cl(V-\{v_{i},v_{i+1}\})$ is the positive coline.
It is important to mention as well that if 3 or more sources following $v_{i+1}$ on the same level are linked to $v_{i+1}$ and $v_{i}$ the same way $h_{j}$ is (Figure \ref{case22}), then $L=cl(V-\{v_{i},v_{i+1}\})$ would also be a positive coline. \end{proof}
\begin{cor}
Some positive coline of a simple positroid of rank $\geq 3$ is either $L=cl(V-\{v_{i},v_{i+1}\})$ or $L=cl(V-\{v_{i},v_{i+2}\})$, or both, where $v_{i}$ and $v_{i+1}$ are the last two consecutive sinks of the positroid. \end{cor}
We put this result in the context of the results of Goddyn et al.~\cite{goddynetal} and recall the definition of a GSP oriented matroid that was introduced there. We say that an oriented matroid $\mathcal{O}$ is \emph{generalized series-parallel} (GSP) if every simple minor of $\mathcal{O}$ has a $\{0, \pm 1\}$-valued coflow with at most two nonzero entries. In the same work, the authors show that every GSP-oriented matroid with no loops has an NZ-3 coflow, i.e.\ is 3-colorable. Additionally, they proved the following lemma.
\begin{lemma}[\cite{goddynetal} Lemma 6]\label{goddyn1}
If an orientable matroid $M$ has a positive coline, then every orientation $\mathcal{O}$ of $M$ has a $\{0, \pm 1\}$-valued coflow which has exactly two nonzero entries. \end{lemma}
Since positroids are closed under taking minors, Lemma \ref{goddyn1} along with Theorem \ref{positivecoline} imply that the class of simple positroids is a subclass of the class of GSP-oriented matroids. Thus, the main result follows immediately.
\begin{thm}
Simple positroids are 3-colorable. \end{thm}
\section{Conclusion} Originally our plan was to prove that every simple gammoid of rank at least $2$ is 3-colorable by proving the existence of a positive coline, but, as mentioned in the introduction, this was proven false by Guzman-Pro and Hochst\"{a}ttler in \cite{santiago}. Nevertheless, the authors showed in the same work that one does not necessarily need a positive coline to prove the 3-colorability of an oriented matroid; a coline with at least two simple copoints suffices, and in this way they proved that cobicircular matroids are 3-colorable. This leads to the following question which, if answered positively, would prove the 3-colorability of gammoids, since gammoids are closed under minors.
\begin{que}
Does every simple gammoid have a coline with at least two simple copoints? \end{que}
\end{document}
\begin{document}
\title{Star-finite coverings of Banach spaces}
\author[C.A.~De~Bernardi]{Carlo Alberto De Bernardi} \author[J.~Somaglia]{Jacopo Somaglia} \author[L.~Vesel\'y]{Libor Vesel\'y}
\address{Dipartimento di Matematica per le Scienze economiche, finanziarie ed attuariali, Universit\`a Cattolica del Sacro Cuore, 20123 Milano, Italy}
\email{carloalberto.debernardi@unicatt.it}
\address{Dipartimento di Matematica ``F. Enriques'', Universit\`a degli Studi di Milano, via Cesare Saldini 50, 20133 Milano, Italy. }
\email{jacopo.somaglia@unimi.it} \email{libor.vesely@unimi.it}
\subjclass[2010]{Primary 46B20 ; Secondary 52A99, 46A45}
\keywords{covering of normed space, Fr\'echet smooth body, locally uniformly rotund norm}
\thanks{ All the authors are members of GNAMPA (INdAM). The research of the first and the third author was partially supported by GNAMPA (INdAM), Project 2018. The research of the second and the third author was partially supported by the University of Milan (Universit\`a degli Studi di Milano), Research Support Plan 2019.}
\begin{abstract} We study star-finite coverings of infinite-dimensional normed spaces. A family of sets is called star-finite if each of its members intersects only finitely many other members of the family. It follows by our results that an LUR or a uniformly Fr\'echet smooth infinite-dimensional Banach space does not admit star-finite coverings by closed balls. On the other hand, we present a quite involved construction proving existence of a star-finite covering of $c_0(\Gamma)$ by Fr\'echet smooth centrally symmetric bounded convex bodies. A similar but simpler construction shows that every normed space of countable dimension (and hence incomplete) has a star-finite covering by closed balls.
\end{abstract}
\maketitle
\section{Introduction}
A family of subsets of a real normed space $X$ is called a {\em covering} if the union of all its members coincides with $X$. One of the earliest results concerning coverings of infinite-dimensional spaces is Corson's theorem \cite{Cor61}, stating that if $X$ is a reflexive infinite-dimensional Banach space and $\mathcal{F}$ is a covering of $X$ by bounded convex sets then $\mathcal{F}$ is not locally finite (see Definition~\ref{D: finiteness}). V.P.~Fonf and C.~Zanco \cite{FZ06} improved this result by proving that {\em if a Banach space $X$ contains an infinite-dimensional closed subspace not containing $c_0$ then $X$ does not admit any locally finite covering by bounded closed convex bodies}. The same authors proved in \cite{FZJMMA09} that {\em if $X$ contains a separable infinite-dimensional dual space and if $\tau$ is a covering by bounded closed convex sets then there exists a finite-dimensional compact set $C$ that meets infinitely many members of $\tau$}. Moreover, they proved in \cite{FZForum09} that, in the above result, if the members of $\tau$ are rotund or smooth then $C$ can be taken 1-dimensional. Let us recall that the prototype of a locally finite covering of an infinite-dimensional Banach space by closed convex bounded sets is the covering (actually a tiling) of $c_0$ by translates of its unit ball, see \cite{lup88} for the details. (Recall that a {\em tiling} is a covering by bodies whose nonempty interiors are pairwise disjoint.)
The existing theory of point-finite coverings (see Definition~\ref{D: finiteness}) of infinite-dimen\-sional normed spaces is less developed and mainly concerns coverings by balls. A surprising construction discovered in 1981 by V.~Klee \cite{Klee1} shows existence of a simple (that is, disjoint, and hence point-finite) covering of $\ell_1(\Gamma)$ by closed balls of radius $1$, whenever $\Gamma$ is a suitable uncountable set. Though the question about existence of point-finite coverings by balls of $\ell_p(\Gamma)$ spaces
was already considered by V.~Klee in the same paper, this problem was partially solved only recently for $\ell_2$ by
V.P.~Fonf and C.~Zanco in \cite{FZHilbert}, in which they proved that {\em the infinite-dimensional separable Hilbert space does not admit point-finite coverings by closed balls of positive radius}.
Then V.P.~Fonf, M.~Levin and C.~Zanco \cite{FonfLevZan14} extended this result to separable Banach spaces that are both uniformly smooth and uniformly rotund. We point out that Klee's problem about coverings by closed balls seems to be open in the non-separable case, even for Hilbert spaces.
In the present paper, we consider a particular class of coverings of infinite-dimensional normed spaces, given by the property that each member intersects at most finitely many other members. Such coverings are known in the literature as {\em star-finite coverings} (see \cite[p.~317]{Engel}), and singular points of star-finite (not necessarily convex) tilings of topological vector spaces were first studied in \cite{Breen85}, then generalized in \cite{NielsenStarfinite}. It is clear that each simple covering is star-finite and each star-finite covering is point-finite. Moreover, the above-mentioned coverings by balls of $c_0$ and $\ell_1(\Gamma)$ easily show that there are no implications between star-finiteness and local finiteness of a covering.
Roughly speaking, all the results mentioned above concerning non-existence of point-finite or locally finite coverings are in some sense inspired by the following general principle.
\noindent {\em Coverings in ``good'' (separable, reflexive, \ldots) infinite-dimensional Banach spaces whose members enjoy ``nice properties'' (smoothness, rotundity, \ldots) cannot satisfy ``finiteness properties'' (local finiteness, point finiteness, \ldots).}
\noindent
Hence, the first step in our study is to determine to what extent we can apply the same principle to star-finite coverings. A careful reading of the proof of a result by A.~Marchese and C.~Zanco \cite{MarZan}, stating that each Banach space has a 2-finite (see Definition~\ref{D: finiteness}) covering (actually a tiling) by closed convex bounded bodies, reveals that the same argument actually proves that {\em each Banach space admits a covering by closed convex bounded bodies such that each of its members intersects at most two other members}. However, as noted by the authors, the elements of such a covering are far from being balls. This fact, together with Klee's construction in $\ell_1(\Gamma)$, suggests that, in order to obtain non-existence results, we should first restrict our attention to star-finite coverings by closed balls satisfying some rotundity or smoothness property. After some preliminaries and some general facts (Section~\ref{section:preliminaries}), we prove the main results in this direction in Section~\ref{section:prohibitive}: our Corollary~\ref{C: suffcond} implies that an infinite-dimensional Banach space $X$ does not admit any star-finite covering by closed balls whenever $X$ is uniformly Fr\'echet smooth or LUR. The techniques used in some of these proofs are inspired by the paper \cite{DEVETIL}. We also prove non-existence of {\em countable} star-finite coverings by closed balls for a class of (subspaces of) spaces of continuous functions, which includes, e.g., all infinite-dimensional $\ell_\infty(\Gamma)$ spaces. In the particular case of $c_0(\Gamma)$ ($\Gamma$ infinite), we show that it admits no star-finite covering by closed balls, countable or not.
In Section~\ref{section:existenceresult}, we obtain a result in the opposite direction: we present a rather involved construction of a star-finite covering of every $c_0(\Gamma)$ space by Fr\'echet smooth centrally symmetric bounded bodies. The starting point of our construction is the existence of an equivalent Fr\'echet smooth norm on $c_0(\Gamma)$ whose unit sphere contains many ``flat faces'' (see Proposition~\ref{P: bollapiatta}). We point out that a similar but simpler construction, contained in Section~\ref{section:preliminaries}, shows that every normed (necessarily incomplete) space of countable dimension has a star-finite covering by closed balls. Proofs of some needed auxiliary facts are contained in the Appendix (Section~\ref{section:appendix}).
\section{Preliminaries and some general facts}\label{section:preliminaries}
Throughout the paper, $\mathbb{N}$ denotes the set of strictly positive integers, while $\mathbb{N}_0:=\mathbb{N}\cup\{0\}$ is the set of nonnegative integers. Given a set $\Gamma$ and $n\in\mathbb{N}_0$, by $[\Gamma]^n$ we mean the set of all $n$-element subsets of $\Gamma$, and by $[\Gamma]^{<\infty}$ the set of all finite subsets of $\Gamma$. Thus $[\Gamma]^{<\infty}=\bigcup_{n\ge0}[\Gamma]^n$.
We consider only nontrivial real normed spaces. If $X$ is a normed space then $X^*$ is its dual Banach space, and $B_X$ and $S_X$ are the closed unit ball and the unit sphere of $X$. Moreover, we denote by $B(x,\varepsilon)$ and $U(x,\varepsilon)$ the closed and the open ball with radius $\varepsilon$ and center $x$, respectively. By a {\em ball} in $X$ we mean a closed or open ball of positive radius in $X$. If $B\subset X$ is a ball then $c(B)$ and $r(B)$ denote its center and radius, respectively. Other notation is standard, and various topological notions refer to the norm topology of $X$, if not specified otherwise. A set $B\subset X$ will be called a {\em body} if it is closed, convex and has nonempty interior. For $x,y\in X$, $[x,y]$ denotes the closed segment in $X$ with endpoints $x$ and $y$, and $(x,y)=[x,y]\setminus\{x,y\}$ is the corresponding ``open'' segment.
Let $\mathcal{F}$ be a family of nonempty sets in a normed space $X$. By $\bigcup \mathcal F$ we mean the union of all members of $\mathcal F$. A point $x\in X$ is a {\em regular point} for $\mathcal F$ if it has a neighborhood that meets at most finitely many members of $\mathcal F$. Points that are not regular are called {\em singular}. Notice that the set of regular points is an open set. For any $x\in X$ we denote $$ \mathcal{F}(x):=\{F\in\mathcal{F}: x\in F\}. $$ Thus $\mathcal F$ is a covering of $X$ if and only if $\mathcal{F}(x)\ne\emptyset$ for each $x\in X$.
\begin{definition}\label{D: finiteness} The family $\mathcal F$ is called: \begin{enumerate}[(a)] \item {\em star-finite} if each of its members intersects only finitely many other members of $\mathcal F$ (cf. \cite[p. 317]{Engel}); \item {\em simple} if its members are pairwise disjoint; \item {\em point-finite} ({\em point-countable}) [{\em $n$-finite} ($n\in\mathbb{N}$)] if each $x\in X$ is contained in at most finitely many (countably many) [$n$] members of $\mathcal F$; \item {\em locally finite} if each $x\in X$ is a regular point for $\mathcal F$. \end{enumerate} \end{definition}
It is evident that simple families are star-finite, and star-finite families are point-finite (and hence point-countable).
A {\em minimal covering} is a covering no proper subfamily of which is a covering. Notice that a covering need not contain any minimal subcovering (consider e.g.\ the covering consisting of the balls $nB_X$, $n\in\mathbb{N}$). However, it is easy to see that the intersection of a decreasing chain of subcoverings of a point-finite covering is again a covering. Thus by Zorn's lemma {\em every point-finite (hence every star-finite) covering contains a minimal subcovering.}
\subsection{Cardinality properties} The next results describe relations between the cardinality of certain coverings of a topological space $T$ and its density character $\mathrm{dens}(T)$ (i.e., the smallest cardinality of a dense subset of $T$).
\begin{lemma}
Let $T$ be an infinite Hausdorff topological space, and $\mathcal{F}$ a point-countable family of nonempty open subsets of $T$. Then $|\mathcal{F}|\le\mathrm{dens}(T)$. \end{lemma}
\begin{proof} Fix a dense (necessarily infinite) set $D\subset T$. For each $A\in\mathcal{F}$ choose some $f(A)\in D\cap A$, obtaining in this way a function $f\colon\mathcal{F}\to D$ such that the subfamilies $f^{-1}(d)\subset\mathcal F$, $d\in D$, are all at most countable. It is clear that these subfamilies are pairwise disjoint. Now we obtain $$
|\mathcal{F}|=|\bigcup_{d\in D}f^{-1}(d)\,|\le |D\times\mathbb{N}|=|D|, $$ which completes the proof by arbitrariness of the dense set $D$. \end{proof}
\begin{observation}\label{O: card upper bound} The above cardinality estimate applies whenever $\mathcal{F}$ is a point-finite family of sets with nonempty interior (and $T$ as above). Indeed, it suffices to consider the family $\mathcal{F}'=\{\mathrm{int}\,F: F\in\mathcal{F}\}$. \end{observation}
Since we are interested in star-finite coverings of normed spaces by bodies, the cardinality of such a covering will always be at most the density character of the space.
\begin{lemma}
Let $X$ be an infinite-dimensional normed space, $r>0$, and $\mathcal{B}$ a covering of $X$ by balls of radius at most $r$. Then $|\mathcal{B}|\ge \mathrm{dens}(X)$. \end{lemma}
\begin{proof}
Let $E\subset X$ be a maximal $3r$-dispersed set, that is: $\|y-z\|\ge 3r$ for any distinct $y,z\in E$, and for each $x\in X$ there is $y\in E$ such that $\|x-y\|<3r$. Then the set $D:=\bigcup_{n\in\mathbb{N}}(1/n)E$
is dense: given $x\in X$ and $n\in\mathbb{N}$, maximality of $E$ yields $y\in E$ with $\|nx-y\|<3r$, whence $\|x-y/n\|<3r/n$. Hence $|E|=|D|\ge\mathrm{dens}(X)$. Since $E\subset\bigcup\mathcal{B}$ and each member of $\mathcal{B}$
contains at most one element of $E$, we conclude that $|\mathcal{B}|\ge|E|\ge\mathrm{dens}(X)$. \end{proof}
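The dispersed-set argument can be visualized numerically. The following sketch (a hypothetical finite sample in the plane; the greedy construction is standard and not part of the proof above) extracts a maximal $3r$-dispersed subset $E$ of a grid and checks that $E$ is $3r$-separated and $3r$-dense in the sample, so that any ball of radius at most $r$, having diameter $2r<3r$, meets $E$ in at most one point.

```python
import itertools
import math

def maximal_dispersed(points, delta):
    # Greedy maximal delta-dispersed subset: keep a point iff it lies
    # at distance >= delta from every point kept so far.
    kept = []
    for p in points:
        if all(math.dist(p, q) >= delta for q in kept):
            kept.append(p)
    return kept

r = 0.5  # hypothetical radius bound
grid = [(i / 4, j / 4) for i in range(-8, 9) for j in range(-8, 9)]
E = maximal_dispersed(grid, 3 * r)

# Maximality: every sample point lies within 3r of the set E.
assert all(any(math.dist(p, q) < 3 * r for q in E) for p in grid)

# Separation: distinct points of E are at least 3r apart, so a ball
# of radius <= r (diameter 2r < 3r) contains at most one point of E.
assert all(math.dist(p, q) >= 3 * r
           for p, q in itertools.combinations(E, 2))
```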
Let $X$ be a normed space. Recall that a set $A\subset X^*$ is {\em total} if $^\perp A:=\bigcap_{x^*\in A}\mathrm{Ker}(x^*)=\{0\}$. Thus if $A$ is total then $\overline{\mathrm{span}}^{w^*}\,A=(^\perp A)^\perp=X^*$. It follows that if $A$ is total and infinite then $$
w^*\text{-}\mathrm{dens}(X^*):=\mathrm{dens}(X^*,w^*)\le|\mathrm{span}_\mathbb{Q}\,A|=|A|\,, $$ where $\mathrm{span}_\mathbb{Q}\,A$ is the ``rational span'' of $A$.
\begin{proposition}\label{p: wstardensityCardinality}
Let $X$ be an infinite-dimensional normed space. Suppose that $X$ admits a covering $\mathcal{B}$ by closed bounded convex sets such that some $x_0\in X$ belongs to only finitely many elements of $\mathcal{B}$. Then $w^*\text{\rm-dens}(X^*)\leq|\mathcal{B}|$. \end{proposition}
\begin{proof} By translation we may (and do) assume that $x_0=0$. Define $\mathcal{B}':=\mathcal{B}\setminus\mathcal{B}(0)$. Since $\bigcup\mathcal{B}(0)$ is bounded, by homogeneity we can (and do) assume that $S_X\subset \bigcup\mathcal{B}'$. Set $\mathcal{B}^{''}\coloneqq\{B\in \mathcal{B}': B\cap S_X\neq\emptyset\}$. By the Hahn-Banach theorem, for each $B\in\mathcal{B}''$ there exists $x^*_B\in S_{X^*}$ such that $0=x^*_B(0)<\inf x^*_B(B)$. Since $S_X\subset \bigcup \mathcal{B}''$, the family $\{x^*_B\}_{B\in\mathcal{B}''}$ is total and hence infinite. Consequently,
$w^*\text{\rm-dens}(X^*)\leq|\mathcal{B}''|\le|\mathcal{B}|$. \end{proof}
From the previous result we deduce the exact size of any point-finite (in particular, star-finite) covering for a wide class of Banach spaces, namely the class of weakly Lindel\"of determined (WLD) Banach spaces. The class of WLD Banach spaces, which generalizes the class of WCG Banach spaces, was first studied in \cite{ArgMer} (see also \cite{Hajeketal} for more details).
\begin{corollary}
Let $X$ be a WLD Banach space. Suppose that $\mathcal{B}$ is a point-finite covering by bodies of $X$. Then $\dens(X)=|\mathcal{B}|$. \end{corollary}
\begin{proof}
By Observation~\ref{O: card upper bound} we have $|\mathcal{B}|\leq \dens(X)$. The other inequality follows combining Proposition \ref{p: wstardensityCardinality} with the fact that $\mathrm{dens}(X)=w^*\text{\rm-dens}(X^*)$ (see \cite[Proposition 5.40]{Hajeketal}). \end{proof}
\subsection{Structure properties} Let us state some simple properties of star-finite coverings, which will be used in the sequel.
\begin{observation}\label{O: basic} Let $\mathcal F$ be a star-finite covering by closed sets of a normed space $X$. Then it has the following properties. \begin{enumerate}[(a)] \item The set $D:=\bigcup_{F\in\mathcal{F}}\partial F$ is closed. \item A point $x\in X$ is regular for $\mathcal{F}$ if and only if $x\in\mathrm{int}\,[\bigcup \mathcal{F}(x)]$. \item If $x$ is a singular point of $\mathcal F$ then $x\in\bigcap_{F\in\mathcal{F}(x)}\partial F$.
\item If $\mathcal F$ is countable then $H:=\{x\in D: |\mathcal{F}(x)|=1\}$ is a $G_\delta$ set. \end{enumerate} \end{observation}
\begin{proof} (a) If $x\notin D$ then $x\in U:=\bigcap_{F\in\mathcal{F}(x)}\mathrm{int}\,F$. Since $\mathcal{F}(x)$ contains only finitely many sets each of which intersects only finitely many members of $\mathcal{F}\setminus\mathcal{F}(x)$, it follows that the set $U\setminus \bigcup[\mathcal{F}\setminus\mathcal{F}(x)]$ is an open neighborhood of $x$ which is disjoint from $D$. This proves that $X\setminus D$ is open.
(b) The implication ``$\Leftarrow$'' follows in a similar way to (a), now starting from the set $U:=\mathrm{int}\,[\bigcup \mathcal{F}(x)]$. To show the other implication, assume that $x$ is a regular point for $\mathcal F$, that is, there exists an open neighborhood $V$ of $x$ for which the subfamily $\{F\in\mathcal{F}: F\cap V\ne\emptyset\}$ is finite. Now star-finiteness of $\mathcal F$ easily implies that there exists a neighborhood $U\subset V$ of $x$ such that $U$ is contained in $\bigcup\mathcal{F}(x)$.
(c) If $x$ is singular then $x\notin\mathrm{int}[\bigcup\mathcal{F}(x)]$ by (b), and hence $x\notin\bigcup_{F\in\mathcal{F}(x)}(\mathrm{int}\,F)$. Thus $x\in\bigcap_{F\in\mathcal{F}(x)}\partial F$.
(d) Write $\mathcal{F}=\{F_n\}_{n\in\mathbb{N}}$. Then $D\setminus H=D\cap\bigcup_{m\ne n}(F_m\cap F_n)$ is an $F_\sigma$ set in $D$, hence $H$ is $G_\delta$ in $D$. Since $D$ is $G_\delta$ in $X$, it follows that $H$ is $G_\delta$ in $X$. \end{proof}
\begin{lemma}\label{L: one ball} Let $C_1,\dots, C_n$ and $B$ be closed convex sets in an infinite-dimen\-sional normed space $X$. If $B$ is bounded and $\{C_i\}_1^n$ does not cover $B$ then $\partial B\setminus\bigcup_{i=1}^n C_i$ is weakly dense in $B\setminus\bigcup_{i=1}^n C_i$. In particular, $\{C_i\}_1^n$ does not cover $\partial B$. \end{lemma}
\begin{proof} Let $B$ have interior points (otherwise there is nothing to prove). Proceeding by contradiction, assume there exists $x\in B\setminus\bigcup_{i=1}^n C_i$ which does not belong to $\overline{\partial B\setminus\bigcup_{i=1}^n C_i}^{w}$. We have $$\textstyle \partial B\subset \overline{\partial B\setminus\bigcup_{i=1}^n C_i}^{w}\,\cup \bigcup_{i=1}^n C_i\,=:E, $$ where $E$ is a weakly closed set that does not contain $x$. Let $W$ be a weak neighborhood of $x$ which is disjoint from $E$. But then $W\cap \partial B=\emptyset$, which is impossible since $W$ contains a line. This contradiction completes the proof. \end{proof}
\begin{corollary}\label{C: one ball} Let $\mathcal{B}$ be a minimal star-finite covering by bounded closed convex sets of an infinite-dimensional normed space $X$. Then the boundary of each $B\in\mathcal{B}$ contains a nonempty relatively open set which does not meet other members of $\mathcal B$. \end{corollary}
\begin{proof} Given $B$, let $C_1,\dots, C_n$ be the members of $\mathcal{B} \setminus\{B\}$ that intersect $B$. By minimality, $\{C_i\}_1^n$ does not cover $B$. By Lemma~\ref{L: one ball}, $\partial B\setminus\bigcup_{i=1}^n C_i\ne\emptyset$. \end{proof}
\subsection{Covering normed spaces of countable dimension} In the rest of this section we will show that each normed space with countable dimension can be covered by a star-finite family of closed balls. This result is achieved by covering inductively a nested sequence of finite-dimensional subspaces.\\ Let $A$ be a set in a metric space $(X,d)$, and let $\delta>0$. Recall that a set $E\subset X$ is a {\em $\delta$-net} for $A$ if $\mathrm{dist}(a,E)<\delta$ for each $a\in A$.
In what follows, we shall use several times the following simple fact.
\begin{observation}\label{O: ball reduction} Let $Z$ be a convex subset of a normed space $X$, and let $B_1$ and $B_2$ be two closed balls in $X$ such that $c(B_1),c(B_2)\in Z$. Then $B_1\cap B_2=\emptyset$ if and only if $Z\cap B_1\cap B_2=\emptyset$. \end{observation}
\begin{proof} Two closed balls intersect if and only if the distance between their centers is at most the sum of their radii, and this happens if and only if the balls intersect on the segment connecting their centers; by convexity, this segment is contained in $Z$. \end{proof}
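In coordinates, the witness point can be exhibited explicitly. The following sketch (a hypothetical numerical example in $\mathbb{R}^2$; the function names are ours) checks that, when two balls meet, the point dividing the segment between their centers in the ratio $r(B_1):r(B_2)$ lies in both balls, and hence in every convex set containing the two centers.

```python
import math

def balls_intersect(c1, r1, c2, r2):
    # Closed balls intersect iff the distance between their centers
    # is at most the sum of their radii.
    return math.dist(c1, c2) <= r1 + r2

def segment_witness(c1, r1, c2, r2):
    # Point dividing [c1, c2] in the ratio r1 : r2.  If the balls
    # intersect, it belongs to both of them, and it lies in every
    # convex set containing the two centers.
    t = r1 / (r1 + r2)
    return tuple(a + t * (b - a) for a, b in zip(c1, c2))

c1, r1 = (0.0, 0.0), 1.0   # hypothetical data
c2, r2 = (1.5, 0.0), 0.75

assert balls_intersect(c1, r1, c2, r2)
w = segment_witness(c1, r1, c2, r2)
assert math.dist(w, c1) <= r1 + 1e-12 and math.dist(w, c2) <= r2 + 1e-12
```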
The key step in the proof of Theorem \ref{T: star-finite countable dimension} is the next lemma, which proves that each open subset $A$ of a finite-dimensional normed space admits a star-finite covering by closed balls whose singular points accumulate on the boundary of $A$.
\begin{lemma}\label{L: totallybounded} Let $X$ be a normed space, and $Y\subset X$ a finite-dimensional subspace. Let $C\subset Y$ be a closed set such that $Y\setminus C\ne\emptyset$. Then there exists a star-finite family $\mathcal{B}$ of closed balls of $X$ such that: \begin{enumerate}[(a)] \item $c(B)\in Y$ and $B\cap C=\emptyset$ for each $B\in\mathcal{B}$; \item $Y\setminus C\subset\bigcup\mathcal{B}$; \item the singular points of $\mathcal B$ are contained in $C$. \end{enumerate} \end{lemma}
\begin{proof} Let us define \begin{align*}
A_{h,k}&\textstyle :=\{y\in Y\setminus C: \frac1{k+1}<\mathrm{dist}(y,C)\leq\frac1k,h\leq\|y\|< h+1\} \ \ (h\in\mathbb{N}_0, k\in\mathbb{N}),\\
A_{h,0}&:=\{y\in Y\setminus C: {1}<\mathrm{dist}(y,C),h\leq\|y\|< h+1\} \ \ (h\in\mathbb{N}_0), \end{align*} where for $C=\emptyset$ we put $\mathrm{dist}(y,C):=\infty$. For each $h,k\in\mathbb{N}_0$, the bounded set $A_{h,k}\subset Y$ admits a finite $\frac1{2(k+1)}$-net $E_{h,k}\subset A_{h,k}$. Consider the family $$\textstyle \mathcal{B}:=\left\{z+\frac1{2(k+1)}B_X:\;z\in E_{h,k},\;k,h\in\mathbb{N}_0\right\} $$ which clearly satisfies (a). Since $Y\setminus C\subset\bigcup_{h,k\in\mathbb{N}_0}A_{h,k}\,$, the condition (b) easily follows by the choice of the sets $E_{h,k}$.
Now let us show (c). Let $x\in X$ be a singular point of $\mathcal{B}$. Then $\mathcal{B}$ contains a sequence $\{B_n\}$ of pairwise distinct closed balls such that $\mathrm{dist}(x,B_n)\to0$. For each $n\in\mathbb{N}$ there are $h_n,k_n\in\mathbb{N}_0$ such that $$\textstyle c(B_n)\in E_{h_n,k_n}\quad\text{and}\quad r(B_n)=\frac1{2(k_n+1)}\le\frac12\,. $$ It is easy to see that $\{h_n\}$ is necessarily bounded and $\{k_n\}$ is unbounded. So we can (and do) assume that $k_n\to\infty$. But then we obtain \begin{align*}
\mathrm{dist}(x,C)&\le\|x-c(B_n)\|+\mathrm{dist}(c(B_n),C) \\ &\textstyle\le \mathrm{dist}(x,B_n) + r(B_n)+\frac1{k_n}\to0\,, \end{align*} and hence $x\in C$.
Finally, let us show that $\mathcal{B}$ is star-finite. Proceeding by contradiction, assume it is not; then there exists an infinite subfamily $\{B_n\}_{n\in\mathbb{N}}\subset \mathcal{B}$ such that $B_1\cap B_n\neq \emptyset$ for each $n\ge2$. By Observation~\ref{O: ball reduction}, $B_1\cap B_n\cap Y\neq \emptyset$ for each $n\ge2$. Fix arbitrarily $y_n\in B_1\cap B_n\cap Y$. Since $B_1\cap Y$ is compact, there exists a subsequence $\{y_{n_k}\}$ converging to some $y\in B_1$. But then $y$ is a singular point of $\mathcal{B}$ which, by (a), does not belong to $C$. This contradicts (c), and we are done. \end{proof}
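The shell construction above can be instantiated in the simplest nontrivial case $X=Y=\mathbb{R}$ and $C=\{0\}$, restricted to $|y|\le1$ (so that only the shells $A_{0,k}$, $k\ge1$, occur). The following sketch (our own illustration, with hypothetical sampling parameters) builds finite nets for the shells $(\frac1{k+1},\frac1k]$ and their mirror images, and checks properties (a) and (b) on a grid of sample points.

```python
def shell_net(k):
    # Finite net for the shell A_{0,k} = (1/(k+1), 1/k] (positive part):
    # centers stepped by the ball radius r = 1/(2(k+1)); mirror images
    # handle the negative part of the shell.
    r = 1 / (2 * (k + 1))
    lo, hi = 1 / (k + 1), 1 / k
    centers, c = [], lo + r
    while c - r < hi:
        centers.append(c)
        c += r
    return [(x, r) for x in centers] + [(-x, r) for x in centers]

balls = [b for k in range(1, 201) for b in shell_net(k)]

# (a): every ball is centered in Y = R and is disjoint from C = {0}.
assert all(abs(c) > r for c, r in balls)

# (b): sample points of [-1, 1] \ {0} are covered; the radii shrink
# only as the balls approach the singular point 0, as in part (c).
samples = [i / 200 for i in range(-200, 201) if i != 0]
assert all(any(abs(y - c) <= r for c, r in balls) for y in samples)
```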
Finally let us prove the main result of the present section.
\begin{theorem}\label{T: star-finite countable dimension} Let $X$ be a normed space such that $\dim X=\aleph_0$. Then $X$ has a star-finite covering $\mathcal{B}$ by closed balls. \end{theorem}
\begin{proof} Let $\{e_n\}_{n\in\mathbb{N}}$ be a Hamel basis of $X$. We set $Y_0:=\{0\}$, and $Y_n\coloneqq\mathrm{span}\{e_1,...,e_n\}$ for $n\in\mathbb{N}$. We will inductively define families $\mathcal{B}_n$ ($n\in\mathbb{N}_0$) of closed balls, satisfying for each $n\in\mathbb{N}_0$ the following conditions: \begin{enumerate} \item[($\mathrm P^1_n$)] $\mathcal{B}_n$ is star-finite; \item[($\mathrm P^2_n$)] $Y_n\subset C^n:=\bigcup(\mathcal{B}_0\cup\dots\cup \mathcal{B}_n)$; \item[($\mathrm P^3_n$)] $C^{n}$ is closed; \item[($\mathrm P^4_n$)] $\bigcup\mathcal{B}_n$ is disjoint from $\bigcup(\bigcup_{k<n}\mathcal{B}_k)$. \end{enumerate}
\noindent To start, put $\mathcal{B}_0:=\{B_X\}$ and notice that the conditions ($\mathrm P^1_0$)-($\mathrm P^4_0$) are trivially satisfied. Now, take $n\in\mathbb{N}$ and assume we have already defined $\mathcal{B}_k$ for $k\le n-1$. Since $C:=C^{n-1}\cap Y_n$ is closed, by Lemma~\ref{L: totallybounded} there exists a star-finite family $\mathcal{B}_n$ of closed balls of $X$, all centered in $Y_n$, such that $Y_n\cap C^{n-1}\cap\bigcup\mathcal{B}_n=C\cap\bigcup\mathcal{B}_n=\emptyset$, $Y_n\setminus C\subset \bigcup \mathcal{B}_n$, and all singular points of $\mathcal{B}_n$ belong to $C$. Since both $C^{n-1}$ and $\bigcup \mathcal{B}_n$ are unions of closed balls centered in $Y_n$, we can apply Observation~\ref{O: ball reduction} to obtain that $C^{n-1}\cap\bigcup\mathcal{B}_n=\emptyset$, which shows ($\mathrm P^4_n$). Moreover, $Y_n=C\cup(Y_n\setminus C)\subset C^{n-1}\cup\bigcup\mathcal{B}_n$, which is ($\mathrm P^2_n$). It remains to verify ($\mathrm P^3_n$). To this end, consider $x\in\overline{\bigcup\mathcal{B}_n}\,\setminus\bigcup\mathcal{B}_n\,$ and notice that $x$ is a singular point of $\mathcal{B}_n$, which implies that $x\in C\subset C^{n-1}$. Consequently, $\overline{C^n}= C^{n-1}\cup\overline{\bigcup\mathcal{B}_n}\,\subset C^{n-1}\cup\bigcup\mathcal{B}_n=C^n$ which means that $C^n$ is closed.\\ Finally, let $\mathcal{B}=\bigcup_{n\in\mathbb{N}_0}\mathcal{B}_n$. By property ($\mathrm P^2_n$), we easily get that $\mathcal{B}$ is a covering. Since the sets $\bigcup \mathcal{B}_n$ ($n\in\mathbb{N}_0$) are pairwise disjoint, we immediately obtain star-finiteness of $\mathcal{B}$. The proof is complete. \end{proof}
\section{Prohibitive conditions for coverings by closed balls}\label{section:prohibitive}
In the present section, we provide results on the non-existence of star-finite or simple coverings of certain Banach spaces. The main results are contained in Corollary~\ref{C: suffcond}, Corollary~\ref{C: no countable}, Theorem~\ref{T: c_0dirbuona}, and Corollary~\ref{C: no simple c0Gamma}.
\subsection{Rotundity and differentiability conditions}
\begin{definition}\label{D: propertyI}
Let $X$ be a normed space, and $\alpha$ a cardinal.
\begin{enumerate}
\item Given $\varepsilon>0$, we say that a point $x\in S_X$ has property $(\mathcal I_{\alpha,\varepsilon})$ if, whenever $\mathcal{B}$ is a family of pairwise disjoint closed balls of radius 1 not intersecting $B_X$ and such that $|\mathcal{B}|=\alpha$, we have
$$\textstyle \sup_{B\in\mathcal{B}}\mathrm{dist}(x,B)>\varepsilon.$$
\item We say that $X$ has property $(\mathcal I_{\alpha})$ if,
for each $x\in S_X$ there exists $\varepsilon>0$ such that $x$ has property $(\mathcal I_{\alpha,\varepsilon})$.
\item We say that $X$ has property $(U\mathcal I_{\alpha})$ if there exists $\varepsilon>0$ such that each $x\in S_X$ has property $(\mathcal I_{\alpha,\varepsilon})$.
\item We denote
$$
\mathcal{K}(X,\alpha):=\sup\bigl\{\mathrm{sep}\,A:\, A\subset S_X,\ |A|=\alpha\bigr\},
$$
where\ \ $\mathrm{sep}\,A:=\inf\{\|a-b\|:\, a,b\in A,\ a\neq b\}$.
\end{enumerate} \end{definition}
\begin{remark}\label{R:I}
Let $\alpha,\beta$ be cardinals such that $\alpha<\beta$, $x\in S_X$, and $\varepsilon>0$.
\begin{enumerate}[(a)]
\item If $x$ has property $(\mathcal I_{\alpha,\varepsilon})$ then $x$ has property $(\mathcal I_{\beta,\varepsilon})$.
\item If $X$ has property $(U\mathcal I_{\alpha})$ then $X$ has property $(\mathcal I_{\alpha})$.
\item It is clear that if $B$ is a closed ball in $X$ and $u\in \partial B$, then for each $r\in(0,r(B))$ there exists a closed ball $B'\subset B$ such that $r(B')=r$ and $u\in\partial B'$. This simple observation easily implies that:
{the point $x$ has property $(\mathcal{I}_{\alpha,\varepsilon})$ if and only if,
whenever $\mathcal{B}$ is a family of pairwise disjoint closed balls not intersecting $B_X$ such that $|\mathcal{B}|=\alpha$
and $\inf_{B\in\mathcal{B}}r(B)\ge\rho>0$, we have\ \
$\textstyle \sup_{B\in\mathcal{B}}\mathrm{dist}(x,B)>\rho\varepsilon.$}
\item Notice also that if $\alpha$ is an infinite cardinal then: $X$ has property $(U\mathcal{I}_\alpha)$ if and only if
there exists $\varepsilon>0$ such that if $\mathcal{B}$ is a disjoint family of closed balls of radius $1$ with $|\mathcal{B}|=\alpha$, and $x_B\in \partial B$ ($B\in\mathcal{B}$), then\ \ $\mathrm{diam}\{x_B\}_{B\in\mathcal{B}}>\varepsilon$.
\item We clearly always have
$ \mathcal{K}(X,\alpha)\le 2$. Moreover, $\mathcal{K}(X,{\aleph_0})$ coincides with $\mathcal{K}(X)$, the {\em Kottman's (separation) constant} of a Banach space $X$; see \cite{Kott}.
\end{enumerate} \end{remark}
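Two standard examples may help to fix ideas (these are well-known facts, recalled here only for orientation, and are not results of the present paper). For the canonical basis $\{e_n\}$ of $\ell_1$ and an orthonormal basis $\{e_n\}$ of $\ell_2$ one has, for $m\ne n$,

```latex
\[
\|e_m-e_n\|_{\ell_1}=2 \quad\Longrightarrow\quad \mathcal{K}(\ell_1,\aleph_0)=2,
\qquad\qquad
\|e_m-e_n\|_{\ell_2}=\sqrt2\,.
\]
Moreover $\mathcal{K}(\ell_2,\aleph_0)=\sqrt2$: if $A\subset S_{\ell_2}$ is infinite with
$\mathrm{sep}\,A\ge\sqrt{2+2\delta}$ for some $\delta>0$, then
$\langle a,b\rangle\le-\delta$ for distinct $a,b\in A$, so that
\[
0\le\Bigl\|\sum_{i=1}^n a_i\Bigr\|^2\le n-n(n-1)\delta<0
\quad\text{for } n>1+\tfrac1\delta,
\]
a contradiction.
```

Hence, by Lemma~\ref{L: costante di Kottman} below, $\ell_2$ enjoys property $(U\mathcal{I}_{\aleph_0})$, while $\ell_1$ fails it.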
\noindent The next lemma provides a characterization of property $(U\mathcal {I}_\alpha)$ in terms of $\mathcal{K}(X,\alpha)$.
\begin{lemma}\label{L: costante di Kottman}
Let $X$ be an infinite-dimensional normed space and let $\alpha$ be an infinite cardinal. Then $\mathcal{K}(X,\alpha)<2$ if and only if $X$ has $(U\mathcal {I}_\alpha)$. \end{lemma}
\begin{proof}
First assume that $\mathcal{K}(X,\alpha)=2$, and fix an arbitrary $\varepsilon\in(0,2)$. There exists a set $A\subset S_X$
with $\mathrm{sep}\, A>2-\varepsilon$ and $|A|=\alpha$. Then the balls $B_a:=B(a,1-\varepsilon/2)$, $a\in A$, are pairwise disjoint, and moreover $y_a:=(\varepsilon/2)a\in B_a$. Clearly, $\mathrm{diam}\{y_a\}_{a\in A}\le\varepsilon$. By multiplying everything by $r:=(1-\varepsilon/2)^{-1}$
we obtain pairwise disjoint balls $r B_a$ ($a\in A$) of radius $1$, and points $z_a:= r y_a\in r B_a$ such that
$\mathrm{diam}\{z_a\}_{a\in A}\le r\varepsilon=2\varepsilon/(2-\varepsilon)$. Since $\varepsilon$ can be arbitrarily small, $X$ fails $(U\mathcal I_\alpha)$ by Remark~\ref{R:I}(d).
Now, assume that $X$ fails $(U\mathcal I_\alpha)$, and fix an arbitrary $\varepsilon\in(0,1)$.
By Remark~\ref{R:I}(d), there exist pairwise disjoint balls $B_\gamma:=B(c_\gamma,1)$ ($\gamma<\alpha$) and points $y_\gamma\in B_\gamma$ with
$\mathrm{diam}\{y_\gamma\}_{\gamma<\alpha}\le\varepsilon/2$. By translation, we can (and do) assume that
$\{y_\gamma\}_{\gamma<\alpha}\subset \varepsilon B_X$. Since the origin belongs to at most one of the balls $B_\gamma$,
by excluding such a ball we can (and do) assume that $0\notin B_\gamma$ ($\gamma<\alpha$).
Then $1<\|c_\gamma\|\le\|c_\gamma-y_\gamma\|+\|y_\gamma\|\le 1+\varepsilon$ for each $\gamma<\alpha$, and
$\|c_\gamma-c_\beta\|>2$ whenever $\gamma \ne \beta$. Consider the set $A$ of all the points $x_\gamma:=c_\gamma/\|c_\gamma\|$ ($\gamma<\alpha$).
Then $\|x_\gamma-c_\gamma\|=\|c_\gamma\|-1\le\varepsilon$ and hence
for $\gamma\ne \beta$ we have $\norm{x_\gamma-x_\beta}\ge\norm{c_\gamma-c_\beta}-\norm{x_\gamma-c_\gamma}-\norm{x_\beta-c_\beta}
>2-2\varepsilon$. Since $\mathrm{sep}\, A\ge 2-2\varepsilon$ and $\varepsilon$ can be arbitrarily small, we conclude that $\mathcal{K}(X,\alpha)=2$. \end{proof}
The next theorem shows that Banach spaces satisfying condition $(\mathcal I_{{\aleph_0}})$ do not admit any star-finite covering by closed balls. In order to prove this result we need a simple lemma.
\begin{lemma}\label{L: uncountable star.finite} Let $X$ be a normed space, and $Y$ its separable subspace. Suppose that $\mathcal{B}$ is a star-finite covering of $X$ by closed balls such that uncountably many elements of $\mathcal{B}$ intersect $Y$. Then $X$ fails property $(\mathcal I_{\aleph_1})$. \end{lemma}
\begin{proof}
Let us consider the uncountable family $\mathcal{B}':=\{B\in\mathcal{B}: B\cap Y\neq\emptyset\}$ and,
for each $C\in\mathcal{B}'$, let us consider $y_C\in Y\cap C$.
By Zorn's lemma, there exists a maximal simple subfamily $\mathcal{C}'$ of $\mathcal{B}'$. Notice that, since the family $\mathcal{B}'$ is uncountable and star-finite, $\mathcal{C}'$ must be uncountable.
If we denote $\mathcal C'_m:=\{C\in\mathcal C' : r(C)\geq \frac1m\}$
($m\in\mathbb{N}$), it is clear that there exists $n\in\mathbb{N}$ such that $\mathcal C'_n$ is uncountable.
Since $Y$ is separable, there exists a condensation point $\overline y\in Y$ for the set
$U:=\{y_C : C\in\mathcal C'_n\}$. Moreover, there exists $\widetilde B\in \mathcal{B}'$ such that $\overline y\in \widetilde B$; since $\mathcal{B}'$ is star-finite, we have $\overline y\in\partial \widetilde B$, moreover, only finitely many elements of $\mathcal C'_n$ intersect $\widetilde B$. It easily follows that $X$ fails property $(\mathcal I_{\aleph_1})$. \end{proof}
\begin{theorem}\label{T: trebolle}
Let $X$ be an infinite-dimensional Banach space satisfying property $(\mathcal I_{{\aleph_0}})$. Then $X$ does not admit star-finite coverings by closed balls. \end{theorem}
\begin{proof}
Proceeding by contradiction, assume that such a covering $\mathcal{B}$ exists. Let $Y$ be a closed separable infinite-dimensional subspace of $X$.
By Lemma~\ref{L: uncountable star.finite} and since $X$ has property $(\mathcal I_{{\aleph_0}})$ (and hence property $(\mathcal I_{\aleph_1})$), the family $\mathcal{B}':=\{B\cap Y : B\in\mathcal{B}, B\cap Y\neq\emptyset\}$ must be
countable. Moreover, we can (and do) assume that $\mathcal B'$ is a minimal covering of $Y$, and denote
$$\textstyle
D:=\bigcup_{B\in\mathcal{B}'} \partial B\,,\quad
H:=\{x\in D: |\mathcal{B'}(x)|=1\}.
$$
Observe that since $Y$ is infinite-dimensional and $\mathcal{B}'$ is minimal, $H$ is nonempty by Corollary~\ref{C: one ball}.
By Observation~\ref{O: basic}(d), $H=\bigcup_{B\in\mathcal{B}'}(\partial B\cap H)$ is a Baire space.
Therefore there exists $B_0\in\mathcal{B}'$ such that $\partial B_0\cap H$ is not nowhere dense in $H$.
Using the fact that $\partial B_0\cap H$ is a relatively open set in $\partial B_0$
(see Corollary~\ref{C: one ball}), it easily follows that there exist $x_0\in\partial B_0$ and $\varepsilon>0$ so that
\begin{equation}\label{E: UH1}
U(x_0,\varepsilon)\cap H\subset \partial B_0\cap H.
\end{equation}
Clearly, $x_0$ is a singular point for $\mathcal{B}'$. Since $\mathcal{B}'$ is star-finite, there exists a sequence $\{y_n\}\subset Y$ such that $y_n\to x_0$, $y_n\in C_n\in\mathcal{B}'$ and the sets $C_n$ ($n\in\mathbb{N}$) are pairwise distinct. Now, for each $n\in\mathbb{N}$, there exists $B_n\in\mathcal{B}$ such that $C_n=B_n\cap Y$. Let $ r(B_n)$ be the radii of the balls $B_n$ ($n\in\mathbb{N}$) and consider the following two cases.
\begin{enumerate}
\item $ r(B_n)\not\to0$. Let $D_0\in \mathcal{B}$ be such that $B_0=D_0\cap Y$. By considering a suitable subsequence we can suppose without any loss of generality that: (a) there exists $\alpha>0$ such that $ r(B_n)>\alpha$, whenever $n\in\mathbb{N}$, and such that $ r(D_0)>\alpha$; (b) the sets $B_n$ ($n\in\mathbb{N}$) and $D_0$ are pairwise disjoint.
\item $ r(B_n)\to0$. Since $Y$ is infinite-dimensional and $\mathcal{B}'$ is minimal, by Corollary~\ref{C: one ball}, for each $n\in\mathbb{N}$ there exists $z_n\in H\cap C_n$. In particular, $z_n\to x_0$ and hence, since $(x_0+\varepsilon B_Y)\cap H\subset\partial B_0$, we have that eventually $z_n\in\partial B_0$. Hence, eventually $C_n\cap B_0\neq\emptyset$.
\end{enumerate}
We have a contradiction, in the first case since $X$ has property $(\mathcal I_{\aleph_0})$, and in the latter case since $\mathcal{B}'$ is star-finite. This concludes the proof. \end{proof}
The rest of the present subsection is devoted to finding sufficient conditions for a Banach space to satisfy property $(\mathcal I_{\aleph_0})$. For this purpose let us recall the following definition from \cite{DEVETIL}.
\begin{definition}[see {\cite[Definition~4.6]{DEVETIL}}]
We shall say that $x\in S_X$ is a {\em locally non-D2} (or {\em LND2}) {\em point} of $B_X$ if there exists $\delta>0$ such that
\begin{equation*}\label{nd2}
\textstyle
\text{diam}\bigl\{y\in S_X:\;\|\frac{x+y}2\|\geq1-\delta\bigr\}<2\,.
\end{equation*} \end{definition}
\noindent The following lemma follows immediately from \cite[Lemma~4.5]{DEVETIL}.
\begin{lemma}\label{L: trebollelur}
Let $X$ be a normed space, $\varepsilon\ge0$, and
$B_0,B_1,B_2\subset X$ three closed balls of radius one whose interiors are pairwise disjoint.
Consider three points $y_i\in\partial B_i$, $i=0,1,2$,
and denote $x_0=y_0-d_0$ where $d_0$ is the center of $B_0$.
If $\mathrm{diam}\{y_0,y_1,y_2\}\leq\varepsilon\,$ then
\begin{equation}\label{E: 3b}
\textstyle
\mathrm{diam}\bigl\{y\in S_X:\|x_0+y\|\geq2-\varepsilon\bigr\}\geq2-2\varepsilon\,.
\end{equation} \end{lemma}
\noindent For $f\in S_{X^*}$ and $\alpha\in[0,1)$, we consider the closed convex cone $$
\mathrm{C}(\alpha, f)=\{x\in X: f(x)\ge\alpha\|x\|\}. $$ The following observation is an analogue of \cite[Observation 2.1]{DEVETIL} for uniformly Fr\'echet smooth norms.
\begin{observation}\label{O: Fr}
Suppose that $X$ is a Banach space with uniformly Fr\'echet smooth norm. Then
for each $\alpha\in(0,1)$ there exists $\varepsilon>0$ such that for each $x\in S_X$ there exists $f_x\in S_{X^*}$ with the following property:
\begin{equation}\label{E: fr}
[x-\mathrm{C}(\alpha,f_x)]\cap[x+\varepsilon B_X]\subset B_X\,.
\end{equation}
\rm \end{observation}
\begin{proof}
For each $x\in S_X$, let $f_x\in S_{X^*}$ be the Fr\'echet derivative of $\|\cdot\|$ at $x$. Since the norm of $X$ is uniformly Fr\'echet smooth, for each $\alpha\in(0,1)$ there exists $\varepsilon>0$ such that, for each $x\in S_X$, we have
$\bigl|\|x+h\|-1-f_x(h)\bigr|\leq\alpha\|h\|$, whenever $h\in\varepsilon B_X$.
Thus, for $h\in [-\mathrm{C}(\alpha,f_x)]\cap\varepsilon B_X$, we obtain
$\|x+h\|\leq 1+f_x(h)+\alpha\|h\|\le 1$,
and hence $x+h\in B_X$. This completes the proof. \end{proof}
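\noindent As a concrete illustration of the estimate used in the preceding proof, consider the special case where $X$ is a Hilbert space $H$ (this computation is only an illustration and is not needed in the sequel). For $x\in S_H$ the Fr\'echet derivative of the norm at $x$ is $f_x=\langle x,\cdot\rangle$ and, for every $h\in H$ with $\|h\|\le1$,
$$0\le\|x+h\|-1-\langle x,h\rangle\le\tfrac12\|h\|^2,$$
since $\|x+h\|=\sqrt{1+2\langle x,h\rangle+\|h\|^2}$, $\sqrt{1+t}\le1+\frac t2$ for $t\ge-1$, and $1+2\langle x,h\rangle+\|h\|^2\ge(1+\langle x,h\rangle)^2$ by the Cauchy--Schwarz inequality. Hence, given $\alpha\in(0,1)$, the choice $\varepsilon=\min\{2\alpha,1\}$ yields $\bigl|\|x+h\|-1-f_x(h)\bigr|\leq\alpha\|h\|$ whenever $h\in\varepsilon B_H$, as required.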
\begin{definition}[{see \cite[Definition~2.2]{DEVETIL}}]\label{D: cs} Let $x\in S_X$ and $\varepsilon>0$.
We say that $x$ is an $\varepsilon$-{\em cone smooth} point of $B_X$
if there exists $f_x\in S_{X^*}$ such that
$$\textstyle[x-\mathrm{C}(\frac17,f_x)]\cap[x+\varepsilon B_X]\subset B_X\,,$$
that is, \eqref{E: fr} holds for $\alpha=1/7$. \end{definition}
Observe that, if the norm of $X$ is uniformly Fr\'echet smooth, then, by Observation~\ref{O: Fr}, there exists $\varepsilon>0$ such that each $x\in S_X$ is an $\varepsilon$-cone smooth\ point of $B_X$.
\begin{proposition}\label{P: suffcond}
Let $X$ be a Banach space and $x\in S_X$. Let us consider the following conditions:
\begin{enumerate}[(i)]
\item $X$ is uniformly Fr\'echet smooth;
\item there exists $\varepsilon>0$ such that the set of all $\varepsilon$-cone smooth points of $B_X$ is dense in $S_X$;
\item $\mathcal{K}(X)\equiv \mathcal{K}(X,\aleph_0)$, the Kottman constant of $X$, satisfies $\mathcal{K}(X)<2$;
\item $x$ is an LUR point;
\item $x$ is an LND2 point;
\item $x$ is a Fr\'echet smooth and strongly exposed point of $B_X$;
\item $x$ is a Fr\'echet smooth point and the unique norm-one functional $f_x\in X^*$ that supports $B_X$ at $x$ determines a slice $\Sigma$ of $B_X$ such that
$\mathrm{diam}(\Sigma)<2$.
\end{enumerate} Then the following implications hold. \begin{enumerate}[(a)] \item If $(i)$ or $(ii)$ is satisfied then $X$ has property $(U\mathcal I_2)$. \item If (iii) is satisfied then $X$ has property $(U\mathcal I_{\aleph_0})$. \item If at least one of the conditions $(iv)$--$(vii)$ is satisfied then the point $x$ has property $(\mathcal I_{2,\varepsilon})$ for some $\varepsilon>0$. \end{enumerate} \end{proposition}
\begin{proof} {\em(a)} By the observation immediately after Definition~\ref{D: cs}, (i) implies (ii). Moreover, if (ii) is satisfied, \cite[Lemma~4.1]{DEVETIL} easily implies that $X$ has property $(U\mathcal I_2)$.
\noindent {\em(b)} It follows immediately from Lemma~\ref{L: costante di Kottman}.
\noindent {\em (c)}
It is clear that (iv) implies (v). Moreover, if (v) is satisfied it follows by Lemma~\ref{L: trebollelur} that $x$ has property $(\mathcal I_{2,\varepsilon})$ for some $\varepsilon>0$.
Finally, it is clear that (vi) implies (vii). Let us prove that if (vii) is satisfied then $x$ has property $(\mathcal I_{2,\varepsilon})$ for some $\varepsilon>0$. We proceed as in the last part of the proof of \cite[Theorem~4.9]{DEVETIL}. Suppose on the contrary that, for each $\varepsilon>0$, $x$ fails property $(\mathcal I_{2,\varepsilon})$. Then there exist sequences $\{w_n\}$, $\{u_n\}$ in $X$ such that
\begin{itemize}
\item for each $n\in\mathbb{N}$, there exist $B_n,C_n$, closed balls of radius 1, such that $B_X,B_n,C_n$ are pairwise disjoint and $w_n\in \partial B_n,u_n\in \partial C_n$;
\item $\mathrm{diam}\{x,w_n,u_n\}\to0$.
\end{itemize}
By Lemma~\ref{L: trebollelur}, for each $\delta>0$, we have that
$\textstyle\mathrm{diam}\{y\in S_X:\|\frac{x+y}2\|\geq1-\delta\}=2$.
This easily implies the existence of a sequence $\{y_n\}\subset S_X$ such that
$\|\frac{x+y_n}2\|\to1$, and $\mathrm{diam}(\{y_n\}_{n\geq n_0})=2$ for each $n_0\in\mathbb{N}$.
By convexity of the norm,
for each $n\in\mathbb{N}$ there exists $z_n\in(x,y_n)$ such that
$\|z_n\|=\min \{\|z\|: z\in[x,y_n]\}$. It is not difficult to see that
$$\|z_n\|\geq\|x+y_n\|-1$$
(indeed, if $z_n'\in(x,y_n)$ is such that $\frac{z_n+z_n'}2=\frac{x+y_n}2$, then
$\|x+y_n\|=\|z_n+z_n'\|\le\|z_n\|+1$). For each $n\in\mathbb{N}$, let
$f_n\in X^*$ be
a norm-one functional that separates $\|z_n\|B_X$ and $[x,y_n]$;
clearly, $$f_n(z_n)=\|z_n\|=f_n(x)=f_n(y_n).$$
Notice that $\|z_n\|\to1$, that is, $f_n(x)\to1$. Since $x$ is a
Fr\'echet smooth point of $B_X$, we have that $f_n\to f_x$ in the
norm topology (see, e.g., \cite[Corollary~7.22]{FHHMZ}). It
follows that $f_x(y_n)\to1$. In particular, $y_n$ belongs to
$\Sigma$ for each sufficiently large $n$, and hence
$\mathrm{diam}(\Sigma)\geq2$. This contradiction concludes the proof. \end{proof}
\noindent By Proposition~\ref{P: suffcond} and Theorem~\ref{T: trebolle}, we obtain the following corollary.
\begin{corollary}\label{C: suffcond}
Let $X$ be a Banach space satisfying at least one of the following conditions:
\begin{enumerate}
\item $X$ is uniformly Fr\'echet smooth;
\item there exists $\varepsilon>0$ such that the set of all $\varepsilon$-cone smooth points of $B_X$ is dense in $S_X$;
\item $\mathcal{K}(X)$, the Kottman constant of $X$, satisfies $\mathcal{K}(X)<2$;
\item for each $x\in S_X$, at least one of the following conditions is satisfied:\begin{itemize}
\item $x$ is an LUR point;
\item $x$ is an LND2 point;
\item $x$ is a Fr\'echet smooth and strongly exposed point of $B_X$;
\item $x$ is a Fr\'echet smooth point and the unique norm-one functional $f_x\in X^*$ that supports $B_X$ at $x$ determines a slice $\Sigma$ of $B_X$
with $\mathrm{diam}(\Sigma)<2$.
\end{itemize}
\end{enumerate}
Then $X$ does not admit star-finite coverings by closed balls. \end{corollary}
\subsection{Prohibitive conditions in spaces of continuous functions} We shall use the following standard notation. Given a Hausdorff topological space $T$, by $C_b(T)$ we mean the Banach space of all bounded continuous real-valued functions on $T$, equipped with the supremum norm
$\|x\|_\infty:=\sup_{t\in T}|x(t)|$. In the case $T$ is compact, we simply write $C(T)$ instead of $C_b(T)$. If $T$ is a locally compact Hausdorff space, we denote by $C_0(T)$ the Banach space of all elements of $C_b(T)$ that vanish at infinity.
\begin{definition}
Let $X$ be a normed space. We shall say that:
\begin{enumerate}[(a)]
\item a direction $v\in S_X$ is {\em important} if there exists $\alpha_v>0$ such that
for each straight line $L\subset X$ which is parallel to $v$ and intersects $B_X$,
one has
$\diam (L\cap B_X)\ge\alpha_v$;
\item a point $x\in S_X$ is {\em ``good''} if there exists an important direction $v\in S_X$ such that $\|x+tv\|>1$ for each $t>0$.
\end{enumerate} \end{definition}
\begin{theorem}\label{T: good pts}
Let $X$ be an infinite-dimensional Banach space such that its ``good'' points are dense in $S_X$.
Then $X$ has no countable star-finite covering by closed balls. \end{theorem}
\begin{proof}
Proceeding by contradiction, let $\mathcal B=\{B_n\}_{n\in\mathbb{N}}$ be a countable star-finite covering of $X$ by closed balls. We can (and do) assume that $\mathcal B$ is minimal, and denote
$$\textstyle
D:=\bigcup_{n\in\mathbb{N}} \partial B_n\,,\quad
H:=\{x\in D: |\mathcal{B}(x)|=1\}.
$$
By Observation~\ref{O: basic}(d), $H=\bigcup_{n\in\mathbb{N}}(\partial B_n\cap H)$ is a Baire space.
Therefore there exists $m\in\mathbb{N}$ such that $\partial B_m\cap H$ is not nowhere dense in $H$.
Using the fact that $\partial B_m\cap H$ is a relatively open set in $\partial B_m$
(see Corollary~\ref{C: one ball}), it easily follows that there exist $x_0\in\partial B_m$ and $\varepsilon>0$ so that
\begin{equation}\label{E: UH}
U(x_0,\varepsilon)\cap H\subset \partial B_m\cap H.
\end{equation}
We can (and do) clearly assume that $B_m=B_X$ and $x_0$ is a ``good'' point.
Let $v\in S_X$ be an important direction such that the half-line
$$
L:=\{x_0+tv: t>0\}
$$
is disjoint from $B_X$. Notice that the subfamily
$\mathcal{B}':=\{B\in\mathcal{B}: B\cap L\ne\emptyset\}$ covers $L$,
and that $x_0\notin\bigcup\mathcal{B}'$; hence $x_0$ is necessarily a singular point for $\mathcal{B}'$. By star-finiteness, there exists an infinite disjoint subfamily $\mathcal{B}''\subset\mathcal{B}'$ whose elements are disjoint from $B_X$ and such that
$$
\inf_{B''\in\mathcal{B}''}d(x_0,B'')=0.
$$
Write $\mathcal{B}''=\{B''_k\}_{k\in\mathbb{N}}$ and notice that we can assume that $d(x_0,B''_k\cap L)\to0$. Then
$\diam(B''_k\cap L)\to0$. Since the direction $v$ is important, we obtain that
$$
r(B''_k)\le(1/\alpha_v)\diam(B''_k\cap L)\to0.
$$
For each sufficiently large $k$, $B''_k\subset U(x_0,\varepsilon)$, and since $B''_k$ is disjoint from $B_X=B_m$
we obtain from \eqref{E: UH} that $B''_k\cap H=\emptyset$ for such $k$. But this contradicts Corollary~\ref{C: one ball}. We are done. \end{proof}
\begin{theorem}
Let $T$ be an infinite Hausdorff topological space whose isolated points form a dense subset. Let $X$ be a closed subspace of $C_b(T)$ such that $X$ contains the characteristic function
$\mathds{1}_{\{t\}}$ for each isolated point $t\in T$. Then the Banach space $X$ has no countable star-finite covering by closed balls. \end{theorem}
\begin{proof}
By Theorem~\ref{T: good pts} it suffices to show that ``good'' points of $X$ are dense in $S_X$.
Fix arbitrary $x\in S_X$ and $\varepsilon>0$. At least one of the open sets $\{t\in T: x(t)>1-\varepsilon\}$
and $\{t\in T: x(t)<-1+\varepsilon\}$ is nonempty, say it is the first one (the other case is done in a similar way).
So there exists an isolated point $t_0\in T$ such that $x(t_0)>1-\varepsilon$.
We claim that $v:=\mathds{1}_{\{t_0\}}\in S_X$ is an important direction for $X$. To this end, consider the line
$L:=\{z+\lambda v: \lambda\in\mathbb{R}\}$ where $z\in X$, $\|z\|_\infty\le1$, and denote
$$
\beta:=\min\{\lambda\in\mathbb{R}: \|z+\lambda v\|_\infty\le1\}
\ \text{and}\
\gamma:=\max\{\lambda\in\mathbb{R}: \|z+\lambda v\|_\infty\le1\}.
$$
For each $\eta>0$ we have
$$
1<\|z+(\beta-\eta)v\|_\infty=\max\left\{\sup_{t\ne t_0}|z(t)|\,,\,|z(t_0)+\beta-\eta|\right\}
=|z(t_0)+\beta-\eta|,
$$
which implies that $z(t_0)+\beta=-1$. Analogously, we obtain that
$|z(t_0)+\gamma+\eta|>1$ ($\eta>0$), and hence $z(t_0)+\gamma=1$. It follows that
$\diam(L\cap B_X)=\gamma-\beta=2$, and our claim is proved.
Now, by the choice of $t_0$, for each $\lambda\ge\varepsilon$ we have
$(x+\lambda v)(t_0)>(1-\varepsilon)+\varepsilon=1$ and hence $\|x+\lambda v\|_\infty>1$. Put
$\lambda_0:=\max\{\lambda\ge0: \|x+\lambda v\|_\infty\le1\}$ and notice that $\lambda_0<\varepsilon$ and
$\|x+\lambda_0 v\|_\infty=1$. The point $y:=x+\lambda_0 v\in S_X$ is ``good'' since $v$ is an important direction
and $\|y+\eta v\|_\infty=\|x+(\lambda_0+\eta)v\|_\infty>1$ ($\eta>0$).
Moreover, $\|y-x\|_\infty=\lambda_0<\varepsilon$.
This completes the proof. \end{proof}
\begin{corollary}\label{C: no countable}
Let $T$ be an infinite Hausdorff topological space whose isolated points are dense, and let $\Gamma$ be an infinite set.
Let $X$ be one of the following spaces:
\begin{enumerate}[(a)]
\item $C(T)$ where $T$ is compact;
\item $C_0(T)$ where $T$ is locally compact;
\item $\ell_\infty(\Gamma)$ or $c_0(\Gamma)$.
\end{enumerate}
Then $X$ has no countable star-finite covering by closed balls. \end{corollary}
The following result shows that $c_0(\Gamma)$ ($\Gamma$ infinite) has no (not necessarily countable) star-finite covering by closed balls, and that property $(\mathcal I_{\aleph_0})$ is only a sufficient condition for $X$ not to admit star-finite coverings by closed balls.
\begin{theorem}\label{T: c_0dirbuona}
Let $\Gamma$ be an infinite set. Then $c_0(\Gamma)$ does not admit any star-finite covering by closed balls,
and it fails $(\mathcal I_{\aleph_0})$. \end{theorem}
\begin{proof} Proceeding by contradiction, assume that such a covering exists.
Fix an infinite countable set $\Gamma_0\subset\Gamma$ and consider the separable subspace
$Y:=\{x\in c_0(\Gamma): x(\gamma)=0 \text{ for each } \gamma\in\Gamma\setminus\Gamma_0\}$.
The family $\mathcal{B}':=\{B\cap Y:\, B\in\mathcal{B},\ B\cap Y\ne\emptyset\}$ is a star-finite covering of $Y$.
It is an easy exercise to see that each member of $\mathcal{B}'$ is in fact a closed ball in $Y$.
By Observation~\ref{O: card upper bound}, $\mathcal{B}'$ is countable, but this contradicts
Corollary~\ref{C: no countable}(c) since $Y$ is isometric to $c_0(\Gamma_0)$.
For the second part, let the sequence $\{u_n\}\subset 2S_{c_0(\Gamma)}$ be defined by $$u_1=2e_1-e_2,\ \ \ u_n=2e_1+e_2+\ldots+e_n-e_{n+1}\ \ \ \ (n>1).$$ We claim that the point $x=e_1\in S_{c_0(\Gamma)}$ fails property $(\mathcal I_{{\aleph_0},\varepsilon})$, whenever $\varepsilon>0$.
Indeed, for each $\delta>0$, we can consider the family
$$\mathcal D:=\bigl\{(1+\delta)u_n+B_{c_0(\Gamma)};\,n\in\mathbb{N}\bigr\},$$
and observe that $\mathcal D$ is a family of pairwise disjoint closed balls of radius 1 not intersecting $B_X$ and such that $|\mathcal D|={\aleph_0}$. Moreover, we have
$\mathrm{dist}(x,B)=2\delta$, whenever $B\in\mathcal D$. This clearly implies that $c_0(\Gamma)$ does not have property $(\mathcal I_{\aleph_0})$. \end{proof}
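\noindent For the reader's convenience, let us spell out the elementary computations (all in the supremum norm) behind the last proof. If $m<n$, then the $(m+1)$-st coordinates of $u_m$ and $u_n$ equal $-1$ and $1$ respectively, so that
$$\|u_m-u_n\|_\infty=2\,;$$
hence the centers of the balls in $\mathcal D$ are $2(1+\delta)$-separated and, since each radius equals $1$, the balls are pairwise disjoint. Moreover, $\|(1+\delta)u_n\|_\infty=2(1+\delta)$, so each ball of $\mathcal D$ has distance $2\delta$ from $B_X$; finally, $\|e_1-(1+\delta)u_n\|_\infty=1+2\delta$, whence $\mathrm{dist}(x,B)=2\delta$ for each $B\in\mathcal D$.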
\subsection{Simple coverings by closed balls} Recall that a simple covering is a covering by pairwise disjoint sets. It is a well-known fact that each simple covering of $\mathbb{R}$ by at least two nonempty closed subsets of $\mathbb{R}$ is uncountable (see e.g.~\cite[Fact~3.2]{DEVETIL}). Hence if a (nontrivial) normed space $X$ admits a simple covering by closed balls, then necessarily $X$ is nonseparable and the covering is uncountable. Moreover, from this result we can easily deduce that certain non-separable $C(K)$ spaces do not admit simple covering by closed balls.
\begin{proposition}\label{P: C(K) no simple coverings}
Let $K$ be a compact space. If $K$ contains an isolated point, then $C(K)$ does not admit simple coverings by closed balls. \end{proposition}
\begin{proof}
Let $k\in K$ be an isolated point, then the characteristic function $\mathds{1}_{\{k\}}$ is a continuous function on $K$. Let $B=B(f,r)$ be a closed ball in $C(K)$ intersecting the straight line $l=\{t\mathds{1}_{\{k\}}:t\in\mathbb{R}\}$.
We claim that $B\cap l$ is a non-degenerate closed interval. Indeed, since $B\cap l\neq \emptyset$, we have $|f(x)|\leq r$ for each $x\in K\setminus \{k\}$. It follows that
$t\mathds{1}_{\{k\}}\in B$ if and only if $|t-f(k)|\le r$, proving our claim.
Now it is clear that $C(K)$ cannot be covered by a simple family of closed balls, since otherwise we would get a simple covering of $\mathbb{R}$ by non-degenerate closed intervals, which is impossible. \end{proof}
The next theorem shows the relation between separated families of vectors and simple coverings by closed balls. For the convenience of the reader, we state the following known lemma.
\begin{lemma}\label{L: renorming}
Let $(X,\|\cdot\|_X)$ and $(Y,\|\cdot\|_Y)$ be normed spaces and $\theta\in[1,2]$. Let $A\subset S_X$ be such that $\mathrm{sep}\, A\geq\theta$. Let $M\in[1,\theta]$ and let $T:X\to Y$ be an isomorphic embedding such that $\|T\|\cdot\|T^{-1}\|\leq M$. Then the set $B:=\{\frac {Tx}{\|Tx\|_Y};\,x\in A\}\subset S_{Y}$ satisfies
$$\textstyle \mathrm{sep}\, B\geq \frac\theta M.$$ \end{lemma}
\begin{proof}
See the proof of \cite[Lemma~2.2]{HajekKaniaRusso}. \end{proof}
\begin{theorem}\label{T: no simple stable}
Let $Y$ be a Banach space such that $K:=\mathcal{K}(Y,\aleph_1)<2$. Let $T\colon X\to Y$ be an isomorphic embedding satisfying $\|T\|\cdot\|T^{-1}\|<\frac2K$.
Then $X$ does not admit any simple covering by closed balls. \end{theorem}
\begin{proof}
Denote $M:=\|T\|\cdot\|T^{-1}\|$. Proceeding by contradiction,
suppose that $X$ admits a simple covering $\mathcal{B}$ by closed balls.
Let $\ell$ be a line in $X$, and observe that uncountably many elements of $\mathcal{B}$ intersect $\ell$. By Lemma~\ref{L: uncountable star.finite}, $X$ fails property $(\mathcal I_{\aleph_1})$ (and hence it fails property $(U\mathcal I_{\aleph_1})$). By Lemma~\ref{L: costante di Kottman}, we have $\mathcal{K}(X,\aleph_1)=2$ and hence there exists $A\subset S_X$ such that $\mathrm{sep}\, A> MK$ and $|A|=\aleph_1$. Lemma~\ref{L: renorming} implies the existence of a set $B\subset S_{Y}$ satisfying
$\mathrm{sep}\, B> \frac{MK}{M}=K$ and $|B|=\aleph_1$. But this contradicts the definition of $K$. \end{proof}
\noindent A famous result of Elton and Odell \cite{EltonOdell} states that if $\Gamma$ is an uncountable set then $c_0(\Gamma)$ contains no $(1+\varepsilon)$-separated uncountable family of unit vectors, for any $\varepsilon>0$. That is, $\mathcal{K}(c_0(\Gamma),\aleph_1)\leq1$ (observe that this inequality trivially holds even if $\Gamma$ is countable). Hence, we get the following corollary.
\begin{corollary}\label{C: no simple c0Gamma}
Let $\Gamma$ be a nonempty set, and $X$ a Banach space. If there exists an isomorphic embedding $T\colon X\to c_0(\Gamma)$ such that $\|T\|\cdot\|T^{-1}\|<2$, then
$X$ does not admit simple coverings by closed balls. \end{corollary}
Finally we observe that P. Koszmider in \cite{Kosz18} defined, under an additional set-theoretic assumption consistent with the usual axioms of ZFC, a connected compact space $K$ for which the Banach space $C(K)$ has no uncountable $(1+\varepsilon)$-separated set in the unit ball for any $\varepsilon>0$, hence $\mathcal{K}(C(K),\aleph_1)<2$. Therefore, by Theorem \ref{T: no simple stable}, $C(K)$ does not admit any simple covering by closed balls.
\subsection{Some open problems} We have already mentioned that separable normed spaces do not admit a simple covering, while Theorem \ref{T: star-finite countable dimension} shows that normed spaces with countable dimension admit a star-finite covering by closed balls. On the other hand, in the present section we have provided various conditions for a Banach space not to have a star-finite covering by closed balls. Among the spaces covered by these conditions there is $c_0$, which nevertheless admits a point-finite covering by closed balls. The following question naturally arises from these facts. \begin{problem} Does there exist a separable Banach space admitting a star-finite covering by closed balls? \end{problem}
The following two problems should be compared with Corollary \ref{C: suffcond} and Corollary \ref{C: no countable}, respectively.
\begin{problem} Does there exist an infinite-dimensional Fr\'echet smooth Banach space admitting a star-finite covering by closed balls? \end{problem}
\begin{problem} Does there exist an infinite compact space $K$ for which $C(K)$ admits a star-finite (or even simple) covering by closed balls? \end{problem}
\section{A star-finite covering by Fr\'echet smooth bodies of $c_0(\Gamma)$}\label{section:existenceresult}
The purpose of this section is to show that, for any nonempty set $\Gamma$, the Banach space $c_0(\Gamma)$ admits a star-finite covering by Fr\'echet smooth bounded bodies. This is clearly trivial for any finite $\Gamma$; therefore, from now on $\Gamma$ will be an infinite set.
In order to define the desired bodies, we introduce suitable Fr\'echet smooth renormings of $c_0(\Gamma)$ whose balls, roughly speaking, have many flat faces. Given $M>2$, let us consider the equivalent norm on $c_0(\Gamma)$ defined for $x\in c_0(\Gamma)$ by
$$\|x\|^2_M=\inf\{{\|x_1\|^2_\infty+M\|x_2\|^2_2};\, x_1\in c_0(\Gamma),x_2\in\ell_2(\Gamma), x_1+x_2=x\}.$$ Thanks to Proposition \ref{P: norma Frechet}, we have: \begin{enumerate}[(i)]
\item $\|x\|_M\leq\|x\|_\infty\leq \sqrt{1+\frac{1}{M}}\,\|x\|_M$;
\item the dual norm of $\|\cdot\|_M$ is given by:
$$\textstyle {\|f\|}_M^*=\sqrt{\|f\|^2_1+\frac1M\|f\|^2_2}\ \ \ \ \ \ \ (f\in \ell_1(\Gamma)).$$
\item $\|\cdot\|_M$ is Fr\'echet smooth (since its dual norm is LUR; this is quite standard);
\item $\|\cdot\|_M$ is a lattice norm. \end{enumerate}
We will use $B_M$ to denote the closed unit ball of $(c_0(\Gamma), \|\cdot\|_M)$, and $B_{c_0(\Gamma)}$ to denote that of $(c_0(\Gamma),\|\cdot\|_\infty)$. Observe that (i) is equivalent to
$$\textstyle{B_{c_0(\Gamma)}\subset B_M \subset \sqrt{1+\frac{1}{M}}\,B_{c_0(\Gamma)}}.$$ Let $\{e_\gamma\}_{\gamma\in\Gamma}$ be the canonical basis of $c_0(\Gamma)$ and, for each finite set $\Gamma_0\subset \Gamma$, let us define $$\textstyle Y_{\Gamma_0}={\mathrm{span}}\{e_\gamma;\, \gamma\in\Gamma_0\}\ \ \ \text{and}\ \ \ Z_{\Gamma_0}=\overline{\mathrm{span}}\{e_\gamma;\, \gamma\in\Gamma\setminus\Gamma_0\}.$$ We denote by $P_{\Gamma_0}$ the canonical projection of $c_0(\Gamma)$ onto $Y_{\Gamma_0}$. Moreover, for $x\in c_0(\Gamma)$, we denote by $\mathrm{supp}(x)$ the support of $x$.
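\noindent As a simple illustration of the norm $\|\cdot\|_M$ (not needed in the sequel), let us compute $\|e_\gamma\|_M$ for $\gamma\in\Gamma$. An optimal decomposition $e_\gamma=x_1+x_2$ may clearly be taken with both $x_1$ and $x_2$ supported on $\{\gamma\}$ (moving any mass of $x_2$ off $\gamma$ increases $M\|x_2\|_2^2$ without decreasing $\|x_1\|_\infty$), say $x_1=(1-s)e_\gamma$ and $x_2=s\,e_\gamma$. Minimizing $(1-s)^2+Ms^2$ over $s\in\mathbb{R}$ gives $s=\frac1{M+1}$ and
$$\textstyle \|e_\gamma\|^2_M=\frac{M}{M+1}\,,\quad\text{that is,}\quad \|e_\gamma\|_\infty=\sqrt{1+\frac1M}\,\|e_\gamma\|_M\,,$$
so the second inequality in (i) is attained on the basis vectors.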
Let us start by quantifying how flat the norm $\|\cdot\|_M$ is.
\begin{proposition}\label{P: bollapiatta}
Suppose that $x\in c_0(\Gamma)$ is such that $\|x\|_M=1$ and let $y\in c_0(\Gamma)$ be such that:
\begin{enumerate}
\item[(a)] $\|y\|_\infty\leq1-\sqrt\frac2M\,$;
\item[(b)] $\mathrm{supp} (x)\cap\mathrm{supp}(y)=\emptyset$.
\end{enumerate}
Then $\|x+y\|_M=1$. \end{proposition}
\begin{proof}
Let $\varepsilon\in(0,1)$ and let $x_1\in c_0(\Gamma)$ and $x_2\in\ell_2(\Gamma)$ be such that $x_1+x_2=x$, $\mathrm{supp}(x_1)\subset \mathrm{supp}(x)$ and $\|x_1\|_\infty^2+M\|x_2\|_2^2\leq1+\varepsilon$. Observe that $\|x_2\|_2^2\leq\frac{1+\varepsilon}M\leq\frac2M$ and hence that $\|x_2\|_\infty\leq\|x_2\|_2<\sqrt\frac2M\,$. Hence
$$\textstyle \|x_1\|_\infty=\|x-x_2\|_\infty\geq\|x\|_\infty-\|x_2\|_\infty\geq1-\sqrt\frac2M\,.$$
By (a) and (b), it follows that $\|x_1+y\|_\infty=\|x_1\|_\infty$. Since $x+y=(x_1+y)+x_2$, we have that
$$\|x+y\|_{M}^{2}\leq\|x_1+y\|^2_\infty+M\|x_2\|^2_2=\|x_1\|^2_\infty+M\|x_2\|^2_2\leq1+\varepsilon.$$
Since $\varepsilon\in(0,1)$ was arbitrary, we have $\|x+y\|_M\leq1$. Moreover, by (b) and since $\|\cdot\|_M$ is a lattice norm, we clearly have $\|x+y\|_M=1$. \end{proof}
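\noindent For instance (only as an illustration), take $M=8$, so that $1-\sqrt{2/M}=\frac12$. Then Proposition~\ref{P: bollapiatta} says that, whenever $\|x\|_{8}=1$, the unit sphere of $\|\cdot\|_{8}$ contains $x+y$ for {\em every} $y$ with
$$\textstyle\|y\|_\infty\le\frac12\quad\text{and}\quad\mathrm{supp}(y)\cap\mathrm{supp}(x)=\emptyset\,;$$
in this sense the ball $B_{8}$ has large flat faces in the directions off the support of $x$.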
Let $M>2$, $q\in(0,\infty)$ and $\Gamma_0\in [\Gamma]^{<\infty}$. Let us consider the continuous linear operator $T_{\Gamma_0,q}:c_0(\Gamma)\to c_0(\Gamma)$ given by $$\textstyle (T_{\Gamma_0,q}x)(\gamma)=\begin{cases} \frac{x(\gamma)}q &\text{if}\ \gamma\in\Gamma_0; \\ x(\gamma) &\text{if}\ \gamma\in\Gamma\setminus\Gamma_0. \end{cases}$$
Let us consider the equivalent norm $\|\cdot\|_{M,\Gamma_0,q}$ on $c_0(\Gamma)$ given by
$\|x\|_{M,\Gamma_0,q}=\|T_{\Gamma_0,q} x\|_M$ ($x\in c_0(\Gamma)$). We observe that the mapping $T_{\Gamma_0,q}$ defines an isometry from $(c_0(\Gamma),\|\cdot\|_{M,\Gamma_0,q})$ onto $(c_0(\Gamma),\|\cdot\|_{M})$. The following lemma follows easily from the definition of the norm $\|\cdot\|_{M,\Gamma_0,q}$ and from Proposition~\ref{P: bollapiatta}.
\begin{lemma}\label{L: corpi}
Let $\|\cdot\|_{M,\Gamma_0,q}$ be defined as above, and let $B_{M,\Gamma_0,q}$ be the corresponding unit ball. Then:
\begin{enumerate}[(i)]
\item $B_{M,\Gamma_0,q}$ is a Fr\'echet smooth body;
\item $q B_M\cap Y_{\Gamma_0}=B_{M,{\Gamma_0},q}\cap Y_{\Gamma_0}$;
\item if $\Gamma_0\subset\Gamma_1\subset\Gamma$, $x\in B_{M,\Gamma_0,q}\cap Y_{\Gamma_1}$, and $y\in(1-\sqrt\frac2M\,)B_{c_0(\Gamma)}\cap Z_{\Gamma_1}$, then $x+y\in B_{M,\Gamma_0,q}$;
\item $\|\cdot\|_{M,\Gamma_0,q}$ is a lattice norm;
\item $B_{M,\Gamma_0,q}\subset q B_M\cap Y_{\Gamma_0}+\sqrt{1+\frac1M}\,B_{c_0(\Gamma)}\cap Z_{\Gamma_0}$.
\end{enumerate} \end{lemma}
\begin{proof}
(i) It follows from the fact that the bijection $T_{\Gamma_0,q}$ is an isometry.\\
(ii) If $x\in Y_{\Gamma_0}$ then $T_{\Gamma_0,q}(x)=\frac{x}{q}$. Therefore we have $\|T_{\Gamma_0,q}(x)\|_{M}=\|\frac{x}{q}\|_{M}\leq 1$ if and only if $\|x\|_{M}\leq q$.\\
(iii) We start by proving the following assertion:
\begin{equation}\label{E: terzo item}
\begin{split}
&\textstyle\text{if $\Gamma_0\subset \Gamma_1\subset \Gamma$, $x\in B_{M}\cap Y_{\Gamma_1}$, and $y\in(1-\sqrt\frac2M\,)B_{c_0(\Gamma)}\cap Z_{\Gamma_1}$,}\\ &\text{then $x+y\in B_{M}$.}
\end{split} \end{equation}
If $x=0$, then \eqref{E: terzo item} follows by the inclusion $B_{c_0(\Gamma)}\subset B_M$. Let $x\in (B_M\cap Y_{\Gamma_1})\setminus\{0\}$ and $y\in(1-\sqrt\frac2M\,)B_{c_0(\Gamma)}\cap Z_{\Gamma_1}$. We have $\mathrm{supp} (\frac{x}{\|x\|_M})\cap\mathrm{supp}(y)=\emptyset$, hence by Proposition \ref{P: bollapiatta}, we obtain $\|\frac{x}{\|x\|_M}+y\|_{M}=1$. Since $\|\cdot\|_{M}$ is a lattice norm and $|x+y|\leq |\frac{x}{\|x\|_M}+y|$, we have $\|x+y\|_M\leq \|\frac{x}{\|x\|_M}+y\|_M=1$, hence \eqref{E: terzo item} is proved.\\
Let $x\in B_{M,\Gamma_0,q}\cap Y_{\Gamma_1}$ and $y\in(1-\sqrt\frac2M\,)B_{c_0(\Gamma)}\cap Z_{\Gamma_1}$. Since $T_{\Gamma_0,q}$ is an isometry and $x\in B_{M,\Gamma_0,q}\cap Y_{\Gamma_1}$, we have $T_{\Gamma_0,q}(x)\in B_{M}\cap Y_{\Gamma_1}$. Furthermore, since $y\in Z_{\Gamma_1}$, we have $T_{\Gamma_0,q}(y)=y$. Hence, applying \eqref{E: terzo item} to $T_{\Gamma_0,q}(x)$ and $y$, we obtain $\|x+y\|_{M,\Gamma_0,q}=\|T_{\Gamma_0,q}(x)+T_{\Gamma_0,q}(y)\|_M=\|T_{\Gamma_0,q}(x)+y\|_M\leq 1$.\\
(iv)
holds since $T_{\Gamma_0,q}$ is a positive operator, $\|\cdot\|_M$ is a lattice norm, and
$\|\cdot\|_{M,\Gamma_0,q}=\|\cdot\|_M\circ T_{\Gamma_0,q}$.
\\
(v) Let $x\in B_{M,\Gamma_0,q}$. We set $x_1\coloneqq P_{\Gamma_0} (x)$ and $x_2\coloneqq (I-P_{\Gamma_0})(x)$. Let us prove that $x_1\in q B_M\cap Y_{\Gamma_0}$ and $x_2\in \sqrt{1+\frac1M}\,B_{c_0(\Gamma)}\cap Z_{\Gamma_0}$. Since $x\in B_{M,\Gamma_0,q}$, the norm $\|\cdot\|_{M,\Gamma_0,q}$ is a lattice norm and $|x_1|\leq |x|$, we have $x_1\in B_{M,\Gamma_0,q}$. Therefore by (ii), we have $x_1\in qB_M\cap Y_{\Gamma_0}$. Since $x_2\in Z_{\Gamma_0}$, we have $T_{\Gamma_0,q}(x_2)=x_2$, therefore we obtain $\|x_2\|_{M}=\|T_{\Gamma_0,q}(x_2)\|_{M}=\|x_2\|_{M,\Gamma_0,q}\leq 1$. Finally, since $\|x\|_{\infty}\leq \sqrt{1+\frac1M}\, \|x\|_M$ holds for any $x\in c_0(\Gamma)$, we have $x_2\in \sqrt{1+\frac1M}\, B_{c_0(\Gamma)}\cap Z_{\Gamma_0}$. This completes the proof. \end{proof}
\begin{theorem} For every infinite set $\Gamma$, the space $c_0(\Gamma)$ admits a star-finite covering $\mathcal B$ by Fr\'echet smooth centrally symmetric bounded bodies. \end{theorem}
\begin{proof} Let us consider sequences $\{M_n\}_{n=0}^\infty\subset(2,\infty)$ and $\{\alpha_n\}_{n=0}^{\infty}\subset (0,1)$ such that $$\textstyle \theta:=\prod_{i=0}^{\infty}(1-\sqrt\frac2{M_i})\frac{\alpha_i}{\sqrt{1+M_i^{-1}}}>0.$$ Put $\theta_0=1$ and for each $n\in\mathbb{N}$ define $$\textstyle \theta_n:=\prod_{i=0}^{n-1}(1-\sqrt\frac2{M_i}\,)\,\prod_{j=1}^{n}\frac{\alpha_j}{\sqrt{1+M_j^{-1}}}\,.$$ We shall inductively construct families $\mathcal B_n$ ($n\in\mathbb{N}_0$) of bodies such that: \begin{enumerate}
\item[($\mathrm P^1_n$)] if $B\in\mathcal B_n$ and $C\in\bigcup_{k<n}\mathcal B_k$, then $B\cap C=\emptyset$;
\item[$(\mathrm P^2_n)$] $\mathcal B_n$ is star-finite;
\item[($\mathrm P^3_n$)]
$C^n:=\bigcup_{B\in \mathcal B_0\cup\ldots\cup \mathcal B_n} B$ is closed;
\item[($\mathrm P^4_n$)]
for each $\Gamma_1\subset\Gamma$ such that $|\Gamma_1|\geq n$, we have
$$\textstyle Y_{\Gamma_1}\cap C^n+\theta_n(1-\sqrt\frac2{M_n})B_{c_0(\Gamma)}\cap Z_{\Gamma_1}\subset C^n\subset Y_{\Gamma_1}\cap C^n+Z_{\Gamma_1};$$
\item[($\mathrm P^5_n$)]
for each $\Gamma_0\subset \Gamma$, with $|\Gamma_0|=n$, we have $$\textstyle Y_{\Gamma_0}+\theta_n \left( 1-\sqrt{\frac{2}{M_n}}\right)B_{c_0(\Gamma)}\subset C^n.$$ \end{enumerate}
Let us show that this is possible.
We put $\mathcal B_0=\{B_{M_0}\}$ and claim that the above conditions hold for $n=0$.
Indeed, conditions ($\mathrm P^1_0$) and ($\mathrm P^2_0$) are trivially true, while observing that $B_{c_0(\Gamma)}\subset B_{M_0}=C^0$, we obtain ($\mathrm P^3_0$) and ($\mathrm P^5_0$). In order to prove condition ($\mathrm P^4_0$), we verify both inclusions. By (iii) in Lemma \ref{L: corpi}, we have $Y_{\Gamma_1}\cap B_{M_0}+(1-\sqrt{\frac{2}{M_0}}\,)B_{c_0(\Gamma)}\cap Z_{\Gamma_1}\subset B_{M_0}$ for any $\Gamma_1\subset \Gamma$ such that $|\Gamma_1|\geq 0$. On the other hand, let $x\in B_{M_0}$ and $\Gamma_1\subset \Gamma$ such that $|\Gamma_1|\geq 0$. Since $P_{\Gamma_1}(x)\in Y_{\Gamma_1}$ and $|P_{\Gamma_1}(x)|\leq |x|$, we have $\|P_{\Gamma_1}(x)\|_{M_0}\leq\|x\|_{M_0}\leq 1$. Therefore it follows that $B_{M_0}\subset Y_{\Gamma_1}\cap B_{M_0}+Z_{\Gamma_1}$. Hence ($\mathrm P^4_0$) is established.
Let $n\in\mathbb{N}$ and suppose we have already defined $\mathcal B_0,\ldots,\mathcal B_{n-1}$
such that conditions ($\mathrm P^3_{n-1}$), ($\mathrm P^4_{n-1}$) and ($\mathrm P^5_{n-1}$) hold. Let $\Gamma_0\in [\Gamma]^n$. We have that the set $C^{n-1}\cap Y_{\Gamma_0}$ is a closed subset of $Y_{\Gamma_0}$. By Lemma~\ref{L: totallybounded}, there exist sequences $\{x_k\}_k\subset Y_{\Gamma_0}$, and $\{\widetilde q_k\}_k\subset(0,\infty)$ such that:
\begin{enumerate}
\item[(a)] the family $\{x_k+\widetilde q_k B_{M_n}\cap Y_{\Gamma_0}\}_k$ is star-finite;
\item[(b)] $\bigcup_{k}(x_k+\widetilde q_k B_{M_n}\cap Y_{\Gamma_0})=Y_{\Gamma_0}\setminus C^{n-1}$;
\item[(c)] the singular points of $\{x_k+\widetilde q_k B_{M_n}\cap Y_{\Gamma_0}\}_k$ are contained in $C^{n-1}\cap Y_{\Gamma_0}$.
\end{enumerate} Now, for each $k\in\mathbb{N}$, define $q_k=\frac{\widetilde q_k}{\theta_{n} }$ and put $\mathcal B_{\Gamma_0}=\{B_k\}_k$, where $B_k\coloneqq x_k+\theta_{n} B_{M_n,\Gamma_0,q_k}$. Observe that, for each $k\in\mathbb{N}$,
\begin{equation}\label{E: uguaglianza}
B_k\cap Y_{\Gamma_0}=x_k + \theta_{n} B_{M_n,\Gamma_0,q_k}\cap Y_{\Gamma_0}=x_k + \widetilde q_k B_{M_n}\cap Y_{\Gamma_0}
\end{equation}
holds. Moreover, by (v) in Lemma \ref{L: corpi}, we have
\begin{equation}\label{E: contenuto}
\textstyle x_k + \theta_n B_{M_n,\Gamma_0,q_k}\subset x_k + \widetilde q_k B_{M_n}\cap Y_{\Gamma_0} + \theta_n\sqrt{1+\frac{1}{M_n}}\,B_{c_0(\Gamma)}\cap Z_{\Gamma_0}
\end{equation}
for each $k\in\mathbb{N}$. Now, we are going to prove that the family $\mathcal{B}_{\Gamma_0}$ satisfies the following conditions: \begin{enumerate}
\item[(a')] the family $\mathcal B_{\Gamma_0}$ is star-finite;
\item[(b')] $\bigcup_{B\in \mathcal B_{\Gamma_0}}B\cap Y_{\Gamma_0}=Y_{\Gamma_0}\setminus C^{n-1}$;
\item[(c')] the singular points of $\mathcal B_{\Gamma_0}$ are contained in
$$\textstyle C^{n-1}\cap Y_{\Gamma_0}+\sqrt{1+M_n^{-1}}\,\theta_{n} B_{c_0(\Gamma)}\cap Z_{\Gamma_0}.$$ \end{enumerate}
If $\mathcal{B}_{\Gamma_0}$ is not star-finite, then there exists a subfamily $\{B_{k_j}\}_{j\in \mathbb{N}}\subset \mathcal{B}_{\Gamma_0}$ such that $B_{k_1}\cap B_{k_j}\neq \emptyset$, for each $j\in \mathbb{N}$. Let $y_j\in B_{k_1}\cap B_{k_j}$, for each $j\in \mathbb{N}$. By \eqref{E: contenuto}, for each $j\in \mathbb{N}$, we have $$P_{\Gamma_0}(y_{j})\in [x_{k_j}+\widetilde q_{k_j} B_{M_n}\cap Y_{\Gamma_{0}}]\cap[ x_{k_1}+\widetilde q_{k_1} B_{M_n}\cap Y_{\Gamma_0}],$$ which contradicts $(a)$. Hence $(a')$ is proved.
$(b')$ follows by combining \eqref{E: uguaglianza} with (b).
Let $x\in c_0(\Gamma)$ be a singular point for $\mathcal{B}_{\Gamma_0}$. Then $P_{\Gamma_0}(x)$ is a singular point for the family $\{x_k+\widetilde q_k B_{M_n}\cap Y_{\Gamma_0}\}_k$, hence by (c), we have $P_{\Gamma_0}(x)\in C^{n-1}\cap Y_{\Gamma_0}$. Moreover we have $\|(I-P_{\Gamma_0})(x)\|_{\infty}\leq \sqrt{1+M_n^{-1}}\,\theta_{n}$ since otherwise, by \eqref{E: contenuto}, there would exist $\varepsilon>0$ for which $(x+ \varepsilon B_{c_0(\Gamma)})\cap B=\emptyset$ for each $B\in \mathcal{B}_{\Gamma_0}$, contradicting the fact that $x$ is singular. Therefore (c') is established.
Now, let us denote $$\textstyle \mathcal{B}_n\coloneqq \bigcup_{\Gamma_0\in [\Gamma]^n}\mathcal{B}_{\Gamma_0}\,, \quad D^{n}_{\Gamma_0}:=\bigcup_{B\in \mathcal B_{\Gamma_0}}B \quad\text{and}\quad D^n:=\bigcup_{\Gamma_0\in[\Gamma]^n}D_{\Gamma_0}^n. $$ \textbf{Claim:} there exists $\beta_n>0$ such that for every $B_0\in\mathcal{B}_{\Delta_0}$ and $B_1\in\mathcal{B}_{\Delta_1}$ with $\Delta_0,\Delta_1\in [\Gamma]^n$, $\Delta_0\neq \Delta_1$, we have $\mathrm{dist}(B_0,B_1)\geq \beta_n$, where the distance refers to the supremum norm.
In order to prove the claim, let $\Delta_0, \Delta_1\in [\Gamma]^n$ such that $\Delta_0\neq \Delta_1$ and $B_0\in \mathcal{B}_{\Delta_0}$, $B_1\in \mathcal{B}_{\Delta_1}$. Since $\Delta_0$ and $\Delta_1$ are different and they have the same cardinality, there exists $\gamma_0\in \Delta_0\setminus \Delta_1$. We observe that $$B_0 \subset Y_{\Delta_0}\cap B_0 + Z_{\Delta_0}\subset Y_{\Delta_0}\cap B_0 + Z_{\{\gamma_0\}}. $$ Hence we have: \begin{equation}\label{E: claim1} \begin{split} \mathrm{dist}(B_0,Y_{\Delta_1})&\geq \mathrm{dist}(Y_{\Delta_0}\cap B_0+Z_{\{\gamma_0\}},Y_{\Delta_1})\\
&=\inf\{\|x_0+z_0-y\|_{\infty}:x_0\in Y_{\Delta_0}\cap B_0,z_0\in Z_{\{\gamma_0\}},y\in Y_{\Delta_1}\}\\
&=\inf\{\|x_0+z_0\|_{\infty}:x_0\in Y_{\Delta_0}\cap B_0,z_0\in Z_{\{\gamma_0\}}\}\\
&\geq \inf\{|(x_0+z_0)(\gamma_0)|:x_0\in Y_{\Delta_0}\cap B_0,z_0\in Z_{\{\gamma_0\}}\}\\
&=\inf\{|x_0(\gamma_0)|:x_0\in Y_{\Delta_0}\cap B_0\}\\ &\geq \theta_{n-1}\left(\textstyle{1-\sqrt{\frac{2}{M_{n-1}}}}\right), \end{split} \end{equation} where in the last inequality we have used property ($\mathrm{P_{n-1}^{5}}$) with $\Delta_0\setminus\{\gamma_0\}$. Moreover, by \eqref{E: contenuto} we have \begin{equation}\label{E: claim2} \textstyle{B_1 \subset Y_{\Delta_1} + \theta_{n}\sqrt{1+\frac{1}{M_n}}B_{c_0(\Gamma)}\cap Z_{\Delta_1}} \end{equation} Hence, combining \eqref{E: claim1} with \eqref{E: claim2} we obtain
\begin{eqnarray*} \mathrm{dist}(B_0,B_1)&\geq & \textstyle{\theta_{n-1} \left(1- \sqrt{\frac{2}{M_{n-1}}}\right) - \theta_n\sqrt{1+\frac{1}{M_n}}}\\ &=&\textstyle{\theta_{n-1} \left(1 - \sqrt{\frac{2}{M_{n-1}}}\right)(1-\alpha_n) >0.} \end{eqnarray*} Letting $\beta_n=\theta_{n-1} \left(1 - \sqrt{\frac{2}{M_{n-1}}}\right)(1-\alpha_n) >0$ we obtain the claim.
Let us prove that conditions ($\mathrm P^1_n$)-($\mathrm P^5_n$) hold. \begin{itemize}
\item In order to prove that condition ($\mathrm P^1_n$) holds, it suffices to prove that the sets $C^{n-1}$ and $D^{n}_{\Gamma_0}$ are disjoint for each $\Gamma_0\in[\Gamma]^n$. Let $\Gamma_0 \in[\Gamma]^n$, $B\in\mathcal{B}_{\Gamma_0}$ and $x\in B$. By ($\mathrm P^4_{n-1}$) we have $C^{n-1}\subset Y_{\Gamma_0}\cap C^{n-1}+Z_{\Gamma_0}$. Suppose, by contradiction, that $x\in C^{n-1}$; then $P_{\Gamma_0}(x)\in Y_{\Gamma_0}\cap C^{n-1}$. This is impossible since, by (b'), we have $P_{\Gamma_0}(x)\in B\cap Y_{\Gamma_0}\subset Y_{\Gamma_0}\setminus C^{n-1}$.
\item ($\mathrm P^2_{n}$) follows combining (a') with our claim.
\item Let $\{z_{k}\}_{k\in\mathbb{N}}\subset D^n$ be such that $z_k\to z$. If there exists $B\in \mathcal{B}_n$ such that $z_k\in B$ for infinitely many $k\in \mathbb{N}$, by closedness of $B$, we have $z\in B\subset D^n$. If, on the other hand, each $B\in \mathcal{B}_n$ contains finitely many elements of the sequence $\{z_k\}_{k\in \mathbb{N}}$, by our claim, there exists $\Gamma_0\in[\Gamma]^n$ such that $z$ is a singular point of $\mathcal B_{\Gamma_0}$. By (c'), ($\mathrm P^4_{n-1}$) and the definition of $\theta_n$, we have
$$\textstyle z\in C^{n-1}\cap Y_{\Gamma_0}+\theta_{n-1}(1-\sqrt\frac2{M_{n-1}}\,) B_{c_0(\Gamma)}\cap Z_{\Gamma_0}\subset C^{n-1}.$$
In any case, the closure of the set $D^n$ is contained in $C^n=C^{n-1}\cup D^n$. Since, by ($\mathrm P^3_{n-1}$), $C^{n-1}$ is closed, condition ($\mathrm P^3_{n}$) holds.
\item Since $\theta_n(1-\sqrt\frac2{M_n}\,)<\theta_{n-1}(1-\sqrt\frac2{M_{n-1}}\,)$ and since ($\mathrm P^4_{n-1}$) holds,
in order to prove condition ($\mathrm P^4_{n}$), it suffices to show that, for each $\Gamma_1\subset \Gamma$ such that $|\Gamma_1|\geq n$, we have that
\begin{equation}\label{E: bollepiatte}
\textstyle Y_{\Gamma_1}\cap D^{n}_{\Gamma_0}+\theta_n(1-\sqrt\frac2{M_n}\,)B_{c_0(\Gamma)}\cap Z_{\Gamma_1}\subset D^{n}_{\Gamma_0}\subset Y_{\Gamma_1}\cap D^{n}_{\Gamma_0}+Z_{\Gamma_1}
\end{equation}
for each $\Gamma_0\in[\Gamma]^n$. It is easy to see that \eqref{E: bollepiatte} follows by the definition of $\mathcal B_{\Gamma_0}$ and Lemma~\ref{L: corpi}, (iii) and (iv).
\item By (b') we have $Y_{\Gamma_0} \subset C^{n}$. Since ($\mathrm P^4_{n}$) holds we have
$$
\textstyle Y_{\Gamma_0}\cap C^n+\theta_n(1-\sqrt\frac2{M_n}\,)B_{c_0(\Gamma)}\cap Z_{\Gamma_0}\subset C^n.$$
Hence we obtain ($\mathrm P^5_{n}$). \end{itemize}
To complete the proof, let us consider the family $\mathcal B:=\bigcup_{n\in\mathbb{N}} \mathcal B_n$. By ($\mathrm P^1_{n}$) and ($\mathrm P^2_{n}$), $\mathcal B$ is clearly star-finite. Moreover, for each $n\geq 0$ and each $\Gamma_0\in[\Gamma]^{n}$, by condition ($\mathrm P^5_{n}$) we have that: $$\textstyle Y_{\Gamma_0}+\theta B_{c_0(\Gamma)}\subset Y_{\Gamma_0}+\theta_n\left(1-\sqrt\frac2{M_n}\right)B_{c_0(\Gamma)}\subset C^n.$$ By arbitrariness of $n\geq 0$ and $\Gamma_0\in[\Gamma]^{n}$ (and since $\theta>0$), $\mathcal B$ is a covering of $c_0(\Gamma)$. The fact that the elements of $\mathcal B$ are Fr\'echet smooth centrally symmetric bounded bodies follows by our construction and Lemma~\ref{L: corpi}. \end{proof}
\section{Appendix}\label{section:appendix}
In what follows, $(X,\|\cdot\|)$ and $(Y,|\cdot|)$ are Banach spaces whose dual norms will be denoted by $\|\cdot\|_*$ and $|\cdot|_*$, respectively.
Given an arbitrary function $f\colon X\to(-\infty,+\infty]$ which is {\em proper}, that is, finite in at least one point, one can define its {\em Fenchel conjugate} $f^*\colon X^*\to (-\infty,+\infty]$ by $$ f^*(x^*)=\sup_{x\in X}\{x^*(x)-f(x)\}. $$ Let us collect some useful properties, which are more or less known.
\begin{lemma}\label{L: Fenchel} Let $X,Y$ be as above, $f\colon X\to(-\infty,+\infty]$, $g\colon Y\to(-\infty,+\infty]$. Let $\mathcal{C}$ denote the set of all convex, proper, lower semicontinuous functions on $X$ with values in $(-\infty,+\infty]$, and $\mathcal{C}^*$ the set of all convex, proper, weak$^*$-lower semicontinuous functions on $X^*$ with values in $(-\infty,+\infty]$. \begin{enumerate}[(a)] \item $f^*$ is convex and weak$^*$-lower semicontinuous. \item $f^*$ is proper if and only if $f\ge a$ for some continuous affine $a\colon X\to\mathbb{R}$. \item For any $\alpha>0$, $(\alpha f)^*(x^*)=\alpha\, f^*(x^*/\alpha)$, $x^*\in X^*$.
\item $(\|\cdot\|^2)^*=(1/4)\|\cdot\|_*^2$. \item Let $T\colon Y\to X$ be a bounded linear operator, and assume that $$ h(x):=\inf\bigl\{f(u)+g(y): u\in X, y\in Y, x=u+Ty\bigr\}>-\infty,\quad x\in X. $$ Then the function $h$ is proper, and its Fenchel conjugate is $h^*=f^*+g^*\circ T^*$. \item The Fenchel conjugation $\varphi\mapsto \varphi^*$ gives a bijection between $\mathcal{C}$ and $\mathcal{C}^*$. \end{enumerate} \end{lemma}
\begin{proof}[Sketch of proof] (a), (b) and (c) are easy exercises. Part (d) can be easily proved via (b) from the known equality
$(\frac12\|\cdot\|^2)^*=\frac12\|\cdot\|_*^2$ (see \cite[Example~6.1.6]{Nic-Per} for a more general fact). Part (f) is a well-known result (sometimes called the Fenchel-Moreau theorem); see e.g.\ \cite[Proposition~4.4.2]{Bor-Van}, \cite[Theorem~6.1.2]{Nic-Per} or \cite[Theorem~5.2.8]{Lucc}.
Let us show (e). In what follows, $x,u\in X$, $y\in Y$ and $x^*\in X^*$. \begin{align*} h^*(x^*)&= \sup_x\bigl\{x^*(x)-\!\!\inf_{x=u+Ty}[f(u)+g(y)]\bigr\} =\sup_{u,y}\bigl\{ x^*(u+Ty)-f(u)-g(y)\bigr\}\\ &=\sup_u\{x^*(u)-f(u)\}+\sup_y\{(T^*x^*)(y)-g(y)\}=f^*(x^*)+g^*(T^* x^*). \end{align*} \end{proof}
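The identity in part (e) can be sanity-checked numerically in one dimension. The sketch below is ours, not part of the argument: it takes $X=Y=\mathbb{R}$ with $T$ the identity, uses the hypothetical test functions $f(x)=x^2$ and $g(y)=2y^2$, and approximates the conjugates and the infimal convolution on grids.

```python
import numpy as np

# One-dimensional numerical check of (e) with X = Y = R and T = identity.
# f and g below are hypothetical test functions, not from the text.
xs = np.linspace(-10.0, 10.0, 4001)   # primal grid
ps = np.linspace(-3.0, 3.0, 61)       # dual (slope) grid

f = xs ** 2          # f(x) = x^2, so f*(p) = p^2/4 by (d)
g = 2.0 * xs ** 2    # g(y) = 2 y^2

def conj(vals):
    """Fenchel conjugate sup_x { p x - vals(x) }, approximated on the grids."""
    return np.array([np.max(p * xs - vals) for p in ps])

# h(x) = inf { f(u) + g(y) : x = u + y }  (infimal convolution, T = I)
h = np.array([np.min(xs ** 2 + 2.0 * (x - xs) ** 2) for x in xs])

# (e) predicts h* = f* + g* (here T* is also the identity)
assert np.allclose(conj(h), conj(f) + conj(g), atol=1e-3)
```

Here $h(x)=\tfrac23 x^2$ in closed form, so both sides equal $\tfrac38 p^2$ up to grid error.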
\begin{proposition}\label{P: norma Frechet} Let $X,Y$ be as above, $M>0$, and
$T\colon Y\to X$ a bounded linear operator. For $x\in X$ define $|\!|\!|x|\!|\!|\ge0$ by the formula $$
|\!|\!|x|\!|\!|^2:=\inf\bigl\{\|u\|^2+M|y|^2: u\in X, y\in Y, x=u+Ty\bigr\}. $$ Then: \begin{enumerate}[(a)]
\item $|\!|\!|\cdot|\!|\!|$ is an equivalent norm on $X$ which satisfies the estimates $$\textstyle
|\!|\!|x|\!|\!| \le \|x\| \le \sqrt{1+\frac{\|T\|^2}{M}}\ |\!|\!|x|\!|\!|\,; $$ \item the corresponding dual norm is given by\ \
$|\!|\!|x^*|\!|\!|_*^2=\|x^*\|_*^2+\frac1M |T^* x^*|_*^2\,;$ \item if moreover $X,Y$ are Banach lattices and $T$ is a positive operator then $\norma{\cdot}$ is a lattice norm. \end{enumerate} \end{proposition}
\begin{proof}
It is easy to see that $|\!|\!|\cdot|\!|\!|>0$ outside the origin, $|\!|\!|\cdot|\!|\!|\le\|\cdot\|$, and $|\!|\!|\lambda x|\!|\!| = |\lambda| |\!|\!|x|\!|\!|$ whenever $\lambda\in\mathbb{R}$, $x\in X$. Given $x_1,x_2\in X$ and $\varepsilon>0$, for $i=1,2$ fix $u_i\in X$ and $y_i\in Y$ so that
$x_i=u_i+Ty_i$ and $\|u_i\|^2+M|y_i|^2\le \norma{x_i}^2+\varepsilon$. Then clearly
$\norma{x_1+ x_2}^2\le\|u_1+u_2\|^2 +M|y_1+y_2|^2$, from which we obtain \begin{align*}
\norma{x_1+x_2}&\le\sqrt{(\norm{u_1}+\norm{u_2})^2+\left(\sqrt{M}\,|y_1|+\sqrt{M}\,|y_2|\right)^2}\\
&\le \sqrt{\norm{u_1}^2+M|y_1|^2} + \sqrt{\norm{u_2}^2+M|y_2|^2} \\ &\le\sqrt{\norma{x_1}^2+\varepsilon}\, + \sqrt{\norma{x_2}^2+\varepsilon}\,. \end{align*} Letting $\varepsilon\to0^+$ we obtain the triangle inequality for $\norma{\cdot}$.
Consequently, $\norma{\cdot}$ is a norm on $X$, which is equivalent to $\|\cdot\|$ by the Open Mapping Theorem. Using Lemma~\ref{L: Fenchel}(c,d,e), it is not difficult to calculate that its dual norm is given by $\norma{x^*}^2_*=\|x^*\|^2_*+(1/M)|T^*x^*|^2_*$. Thus
$\norma{\cdot}_* \le \sqrt{1+\frac{\|T\|^2}{M}}\,\norm{\cdot}_*$. It follows that
$\norm{\cdot}\le \sqrt{1+\frac{\|T\|^2}{M}}\,\norma{\cdot}$, which completes the proof of (a) and (b).
Now assume that $X,Y$ are Banach lattices and $T$ is positive. Then it is clear from (b) that $(X^*,\norma{\cdot}_*)$ is a Banach lattice, and hence its dual $(X^{**},\norma{\cdot}_{**})$ is a Banach lattice as well. Consequently $\norma{\cdot}$, which is the restriction of $\norma{\cdot}_{**}$ to $X$ (considered as a subspace of $X^{**}$), is a lattice norm. \end{proof}
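Parts (a) and (b) of the proposition can be illustrated in finite dimensions. The following sketch is ours: it takes $X=\mathbb{R}^2$, $Y=\mathbb{R}$ with Euclidean norms (so $X^*$ is identified with $X$), a rank-one $Ty=ty$, and illustrative values of $t$ and $M$; minimizing over $y$ in closed form turns both norms into quadratic forms, and (b) becomes a matrix inversion identity.

```python
import numpy as np

# Finite-dimensional sketch of (a)-(b): X = R^2, Y = R with Euclidean norms,
# T y = t y for a fixed vector t, and M > 0 (t and M are illustrative choices).
t = np.array([1.0, 2.0])
M = 3.0
s = float(t @ t)          # s = ||T||^2 for this rank-one T

# Minimizing ||x - t y||^2 + M y^2 over y in closed form gives
# |||x|||^2 = x^T A x with
A = np.eye(2) - np.outer(t, t) / (M + s)
# while (b) predicts the dual quadratic form ||p||^2 + (1/M)(t.p)^2 = p^T B p with
B = np.eye(2) + np.outer(t, t) / M
# The dual of x -> sqrt(x^T A x) is p -> sqrt(p^T A^{-1} p), so (b) says A^{-1} = B:
assert np.allclose(A @ B, np.eye(2))

# The eigenvalues of A are M/(M+s) and 1, matching the two-sided estimate in (a):
# sqrt(M/(M+s)) ||x|| <= |||x||| <= ||x||.
assert np.allclose(np.linalg.eigvalsh(A), [M / (M + s), 1.0])
```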
\end{document}
\begin{document}
\title{Complexes of not $i$-Connected Graphs}
\author[E. Babson]{Eric Babson} \thanks{Babson, Bj\"orner, Linusson and Welker were partially supported by MSRI. Babson was supported by a National Science Foundation postdoctoral fellowship. Linusson was supported by a Swedish Natural Sciences Research Council (NFR) postdoctoral fellowship. Welker was supported by Deutsche Forschungsgemeinschaft (DFG). Research at MSRI is supported in part by NSF grant DMS-9022140. } \address{\hskip-\parindent Eric Babson, Mathematical Sciences Research Institute, 1000 Centennial Drive, Berkeley, CA 94720-5070, USA.} \email{babson@msri.org} \author[A. Bj\"orner]{Anders Bj\"orner} \address{\hskip-\parindent Anders Bj\"orner, Department of Mathematics, Royal Institute of Technology, S-100 44 Stockholm, Sweden} \email{bjorner@math.kth.se} \author[S. Linusson]{Svante Linusson} \address{\hskip-\parindent Svante Linusson, Department of Mathematics, Stockholms Universitet, S-106 91 Stockholm, Sweden} \email{linusson@matematik.su.se} \author[J. Shareshian]{John Shareshian} \address{\hskip-\parindent John Shareshian, Mathematical Sciences Research Institute, 1000 Centennial Drive, Berkeley, CA 94720-5070, USA.} \email{shareshi@msri.org} \author[V. Welker]{Volkmar Welker} \address{\hskip-\parindent Volkmar Welker, Fachbereich 6, Mathematik, Universit\"at GH-Essen, D-45117 Essen, Germany} \email{welker@exp-math.uni-essen.de}
\keywords{Connected graphs, complexes of graphs, matching, homotopy type, simplicial resolution, knot invariant, $S_n$-character}
\date{}
\begin{abstract} Complexes of (not) connected graphs, hypergraphs and their homology appear in the construction of knot invariants given by V. Vassiliev \cite{Vas93-1,Vas94,Vas97}. In this paper we study the complexes of not $i$-connected $k$-hypergraphs on $n$ vertices. We show that the complex of not $2$-connected graphs has the homotopy type of a wedge of $(n-2)!$ spheres of dimension $2n-5$. This answers one of the questions raised by Vassiliev \cite{Vas97} in connection with knot invariants. For this case the $S_n$-action on the homology of the complex is also determined. For complexes of not $2$-connected $k$-hypergraphs we provide a formula for the generating function of the Euler characteristic, and we introduce certain lattices of graphs that encode their topology. We also present partial results for some other cases. In particular, we show that the complex of not $(n-2)$-connected graphs is Alexander dual to the complex of partial matchings of the complete graph.
For not $(n-3)$-connected graphs we provide a formula for the generating function of the Euler characteristic. \end{abstract}
\maketitle
\section{Introduction}
In this paper we study the homotopy type and homology of simplicial complexes whose simplices are the edge sets of not $i$-connected graphs and hypergraphs on $n$ vertices. The case $i=1$ is already well understood (see Proposition \ref{discon}), and here we begin the examination of the topological structure of such complexes for $i \geq 2$.
Although our point of view is mainly combinatorial, our original motivation for studying these complexes comes from the theory of Vassiliev invariants in knot theory. By determining the homotopy type of the complex of not $2$-connected graphs on $n$ vertices we answer a question posed by V. Vassiliev in \cite{Vas97}, where he presents a new approach to Vassiliev knot invariants using a filtration of the simplicial resolution of the space of not-knots as in \cite{Vas94}. More precisely, he studies the space $\Sigma$ of maps $f : S^1 \rightarrow {\mathbb R}^3$ such that $f(S^1)$ has multiple points or cusps. The simplicial resolution $\widetilde{\Sigma}$ of $\Sigma$ is obtained roughly speaking as follows: singular knots are resolved by blowing up each $r$-fold self-intersection to an $({r\choose 2}-1)$-simplex, and similarly for the set of cusps. A suitable filtration (see \cite{Vas97}) of $\widetilde{\Sigma}$, combinatorially defined in terms of these simplices, gives rise to a spectral sequence that contains the homology of the complex of not $2$-connected graphs on $n$ vertices as a basic ingredient.
Our work continues the already fruitful interaction between the theory of Vassiliev invariants and questions in topological and homological combinatorics of graph complexes (see \cite{Vas93-1}). The study of complexes of not $i$-connected graphs has intriguing combinatorial and algebraic aspects as well. For example, such aspects become apparent when considering the complex of not $(n-2)$-connected graphs on $n$ vertices. In Section \ref{match} this complex is shown to be Alexander dual to the complex of partial matchings of the complete graph on $n$ vertices. These matching complexes, along with complexes of partial matchings of bipartite graphs, have previously been studied for other reasons, see \cite{BLVZ92}. In each case for which we calculate the Betti numbers, we detect nontrivial homology. For $(n-3)$-connected graphs (see Section \ref{notn3}) and for most complexes of not $2$-connected hypergraphs (see Section \ref{notconnhyper}) we have been unable to compute the Betti numbers explicitly, but we do determine the generating function of their reduced Euler characteristics. The homology is seen to be nontrivial in almost all of these cases.
Surprisingly, these non-vanishing phenomena are suggested by a result motivated by a conjecture in complexity theory. The conjecture states that complexes of graphs on $n$ vertices having some non-trivial monotone graph property -- like being not $i$-connected -- are evasive (see for example \cite{KahSakStu84}). Kahn, Saks \& Sturtevant \cite{KahSakStu84} showed that non-evasive complexes are contractible. In many naturally arising cases, including those examined here, the converse is true and evasive complexes in fact have non-vanishing reduced Euler characteristics.
In Section \ref{rep} we study the action of the symmetric group on the complex of not $2$-connected graphs induced by its natural action on the vertices. This action induces a representation of $S_n$ on the homology groups of the complex, which we determine. Using the representation, we deduce upper bounds on the number of Vassiliev invariants of a given bi-order. This representation coincides with a recently well studied representation which appears in the work of Robinson \& Whitehouse \cite{RobWhi94,Whi94-2}, Kontsevich \cite{Kon93}, Getzler \& Kapranov \cite{GetKap95}, Mathieu \cite{Mat96}, Hanlon \& Stanley \cite{Han96,HanSta95} and Sundaram \cite{Sun96}.
\noindent {\sf Acknowledgment:} We are grateful to V. Vassiliev for inspiring discussions and hints, which sparked our interest and initiated this research. All computer calculations presented in this paper were performed using a Mathematica$^{\text{\copyright}}$ package designed by Vic Reiner and a C-Program by Frank Heckenbach.
\section{Preliminaries} \label{prel}
We now introduce the basic concepts used in this paper. By a graph $G = (V(G),E(G))$ we mean a loopless graph without multiple edges on the vertex set $V(G)$ and with edge set $E(G) \subseteq \binom{V(G)}{2}$. Our standard vertex set will be the set $[n]:=\{1,2,\dots,n\}$. A graph $G$ is called {\it connected} if for any two distinct vertices $v, v' \in V(G)$ there is a path from $v$ to $v'$ in $G$, that is, a sequence of edges $\{ v_1, v_2 \}$, $\{ v_2, v_3 \}$, $\ldots$, $\{v_{l-1},v_l\}$ $\in E(G)$ such that $v = v_1$ and $v' = v_{l}$. Such a path will sometimes be denoted by $v_1,v_2,\ldots,v_l$. The
{\it size} of a graph $G$ is $|V(G)|$.
A graph $G$ is called {\it $i$-connected}, for a number $i$
such that $0<i<|V(G)|$, if for any $j$ vertices $v_1, \ldots, v_j \in V(G)$, $j<i,$ the graph $G'$ that is obtained from $G$ by deleting the vertices $v_1, \ldots, v_j$ and their adjacent edges is connected. Equivalently, $G$ is $i$-connected if and only if for every pair $v, v'$ of non-adjacent vertices there are at least $i$ paths from $v$ to $v'$ that are pairwise vertex disjoint except at their endpoints.
A graph with at least $i+1$ vertices which is not $i$-connected is also called {\it $(i-1)$-separable}, and a $1$-separable (that is, not $2$-connected) graph will often be called just separable. Of course, if $G = (V(G),E(G))$ is a graph that is not $i$-connected for some $i \geq 1$ then for any subset $E' \subseteq E(G)$ the graph $G' = (V(G),E')$ on the same vertex set is not $i$-connected either. Hence if we fix an $n$-element vertex set $V$ and identify a graph with the set of its edges, then we may regard the set of not $i$-connected graphs on $V$ as a simplicial complex. \\
\begin{definition} $\Delta_n^i$ is the complex of not $i$-connected graphs on $n$ vertices. \end{definition}
For a graph $G$ and a vertex $v$ we denote by $G-v$ the graph that is obtained from $G$ by deleting the vertex $v$ from its set of vertices and deleting all edges emerging from $v$ from the set of edges. If $v$ and $w$ are two distinct vertices of $G$ then we denote by $vw$ the two-element set $\{v,w\}$, by $G \setminus vw$ the graph $(V(G),E(G) \setminus \{vw\})$, and by $G + vw$ the graph $(V(G),E(G) \cup \{vw\})$. Note that (by definition) if $xy \in E(G)$ then $G+xy=G$ and if $xy \not\in E(G)$ then $G \setminus xy=G$. A subset $V' \subseteq V(G)$ of the vertex-set of a graph $G = (V(G),E(G))$ is called a {\it cutset} if the graph obtained from $G$ by deleting the vertices in $V'$ and all adjacent edges is not connected. In particular, a graph is $i$-separable if and only if there is a cutset of cardinality $i$. A cutset of cardinality $1$ is also called a {\it cutpoint}.
More generally, one may consider complexes of not $i$-connected $k$-uniform hypergraphs. Recall that a {\it $k$-uniform hypergraph} on a vertex set $V$ is a subset $E$ of the set of $k$-element subsets $\binom{V}{k}$ of $V$. We will call the $k$-uniform hypergraphs {\it $k$-graphs} for short. Note that a $2$-graph is just a graph. A $k$-graph is called {\it $i$-connected} if its underlying $2$-graph is $i$-connected. The underlying $2$-graph of a $k$-graph $E$ is the graph on $V$ whose edge set contains a $k$-clique on $\lp v_1,\ldots,v_k \rp$ for each hyperedge $\lp v_1,\ldots,v_k \rp \in E$. \\
\begin{definition} ${\Delta}^{i}_{n,k}$ is the complex of all not $i$-connected $k$-graphs on $n$ vertices. \end{definition}
Cutsets and cutpoints are defined analogously for $k$-graphs as they were for graphs.
For the notation related to simplicial complexes and partially ordered sets -- posets for short -- used in this paper, we refer the reader to Section \ref{notation}. \\ \\
Let us now review some known results. For $i=1$ we have that $\Delta_n^1$ and $\Delta_{n,k}^1$ are the complexes of disconnected graphs, resp., disconnected $k$-graphs. The topology of $\Delta_{n,k}^1$ is well understood up to homotopy type.
\begin{proposition} \label{discon} Let $n \geq 2$. Then \begin{itemize} \item[(i)] The complex $\Delta_n^1$ is homotopy equivalent to a wedge of $(n-1)!$ spheres of dimension $n-3$. In particular, $\widetilde{H}_{i} (\Delta_n^1) = 0$ for $i \neq n-3$ and $\widetilde{H}_{n-3}(\Delta_n^1) \cong {\mathbb Z}^{(n-1)!}$. \item[(ii)] The complex $\Delta_{n,k}^1$ is homotopy equivalent to a wedge of spheres of dimensions $n-(k-2)\cdot t -3$, $1 \leq t \leq \lfloor \frac{n}{k} \rfloor$. In particular, the homology of $\Delta_{n,k}^1$ is free and concentrated in dimensions $n-(k-2)\cdot t -3$, $1 \leq t \leq \lfloor \frac{n}{k} \rfloor$. \end{itemize} \end{proposition}
Part (i) follows from well-known properties of partition lattices (see \cite{Bjo91,BjoWal83,Sta82}) together with the crosscut theorem (see \cite{Bjo91}). An alternative proof is provided in \cite{Vas93-1}. Part (ii) was established by Bj\"orner and Welker in \cite{BjoWel92}. See Theorem 4.5 and Section 7.8 of \cite{BjoWel92} for exact numerical information on the homology of $\Delta_{n,k}^1$.
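Part (i) can be confirmed on small instances by brute force. The sketch below is our illustration (function names are ours): it enumerates all graphs on $n$ labeled vertices and sums $(-1)^{|F|-1}$ over the disconnected ones, empty graph included; a wedge of $(n-1)!$ spheres of dimension $n-3$ has reduced Euler characteristic $(-1)^{n-3}(n-1)!$.

```python
from itertools import combinations

def is_connected(vertices, edges):
    """BFS connectivity of the graph on `vertices` with the given edge set."""
    vs = list(vertices)
    if not vs:
        return True
    adj = {v: set() for v in vs}
    for a, b in edges:
        if a in adj and b in adj:
            adj[a].add(b)
            adj[b].add(a)
    seen, stack = {vs[0]}, [vs[0]]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(vs)

def chi_tilde_delta1(n):
    """Reduced Euler characteristic of Delta_n^1 by direct enumeration."""
    E = list(combinations(range(n), 2))
    total = 0  # running sum of (-1)^{|F|} over faces F, empty face included
    for r in range(len(E) + 1):
        for F in combinations(E, r):
            if not is_connected(range(n), F):
                total += (-1) ** r
    return -total  # chi~ = sum over faces F of (-1)^{|F|-1}

# Proposition (i) predicts chi~ = (-1)^(n-3) (n-1)!
assert [chi_tilde_delta1(n) for n in (3, 4, 5)] == [2, -6, 24]
```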
The character of the symmetric group for the representation on $\widetilde{H}_{n-3}(\Delta_n^1)$ was determined by Stanley in \cite{Sta82} in terms of the character of $S_n$ on the homology of the partition lattice. These two characters are equal by an equivariant version of the crosscut theorem. The character of the symmetric group on the homology of $\Delta_{n,k}^1$ was given by Sundaram \& Wachs \cite{SunWac94}.
Unless otherwise explicitly stated, all homology groups in this paper have integer coefficients.
\section{Homology and homotopy type of $\Delta_n^2$} \label{homnot2}
The main theorem of this section gives a complete description of the homotopy type of $\Delta_n^2$.
\begin{theorem} Let $n \geq 3$. Then $\dee{n}{2}$ has the homotopy type of a wedge of $(n-2)!$ spheres of dimension $2n-5$. \label{main} \end{theorem}
\noindent {\sf Remark:} This result was circulated for several months as a conjecture. During that time, the Euler characteristic of $\dee{n}{2}$ was calculated by Rodica Simion \cite{Si}. The theorem was proved independently and simultaneously, almost to the day, by V. Turchin in Moscow, in a homology version \cite{Vas97} that is equivalent to our result by some general arguments from homotopy theory.
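The Euler characteristic predicted by the theorem can be checked by brute force for small $n$. The sketch below is our illustration (not the paper's Mathematica or C computations): a wedge of $(n-2)!$ spheres of dimension $2n-5$ has reduced Euler characteristic $(-1)^{2n-5}(n-2)!=-(n-2)!$, and we recover this by enumerating the not $2$-connected graphs directly.

```python
from itertools import combinations

def is_connected(vertices, edges):
    """BFS connectivity of the graph on `vertices` with the given edge set."""
    vs = list(vertices)
    if not vs:
        return True
    adj = {v: set() for v in vs}
    for a, b in edges:
        if a in adj and b in adj:
            adj[a].add(b)
            adj[b].add(a)
    seen, stack = {vs[0]}, [vs[0]]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(vs)

def is_two_connected(n, edges):
    """Per Section 2: deleting any j < 2 vertices must leave a connected graph."""
    if n < 3 or not is_connected(range(n), edges):
        return False
    return all(is_connected([u for u in range(n) if u != v], edges)
               for v in range(n))

def chi_tilde_delta2(n):
    """Reduced Euler characteristic of Delta_n^2 by direct enumeration."""
    E = list(combinations(range(n), 2))
    total = 0  # sum of (-1)^{|F|} over faces F, empty face included
    for r in range(len(E) + 1):
        for F in combinations(E, r):
            if not is_two_connected(n, F):
                total += (-1) ** r
    return -total

# Theorem: chi~ = -(n-2)!
assert [chi_tilde_delta2(n) for n in (3, 4, 5)] == [-1, -2, -6]
```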
For any natural number $k$, let $B_k$ be the Boolean algebra on $k$ elements (i.e., the lattice of subsets of a $k$-element set) and let $\Pi_k$ be the lattice of partitions of a $k$-set into subsets, ordered by refinement. It is well-known that $\Delta(\ov{B_k})$ --- being the barycentric subdivision of a simplex boundary --- is homeomorphic to a $(k-2)$-sphere, and that $\Delta(\ov{\Pi_k}) \simeq \dee{k}{1}$ has the homotopy type of a wedge of $(k-1)!$ spheres of dimension $k-3$ (see Proposition \ref{discon} (i) and its references). These facts imply the following.
\begin{lemma} \label{boolxpart} $\Delta(\ov{B_k \times \Pi_k})$ has the homotopy type of a wedge of $(k-1)!$ spheres of dimension $2k-3$. \end{lemma}
\Proof Let $\emptyset$ and $[k]$ be the least element and top element of $B_k$, and let $1|\cdots |k$ and $|1\cdots k|$ be the least and top elements of $\Pi_k$. Apply the Homotopy Complementation Formula \ref{homcom} (ii) to
$p = (\emptyset,|1\cdots k |)$. The set of complements of $p$ in
$B_k \times \Pi_k$ consists of the single element $q = ([k],1| \cdots|k)$. Obviously, $\Delta((\hat{0},q)) \cong \Delta(\ov{B_k})$ and $\Delta((q, \hat{1})) \cong \Delta(\ov{\Pi_k})$. Then by Formula \ref{homcom} (i) we have $$\Delta(\ov{B_k \times \Pi_k}) \simeq \Sigma \big(\Delta(\ov{B_k}) * \Delta(\ov{\Pi_k})\big).$$ Since the join of a wedge of $n$ spheres of dimension $i$ with a wedge of $m$ spheres of dimension $j$ is homotopy equivalent to a wedge of $nm$ spheres of dimension $i+j+1$ (see for example \cite[Lemma 2.5 (ii)]{BjoWel92}) the assertion follows. Recall that suspension can be regarded as a join with a $0$-sphere and that the join operation is associative. \qed
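The lemma can be cross-checked via Philip Hall's theorem, which identifies the reduced Euler characteristic of $\Delta(\ov{P})$ with the M\"obius number $\mu_P(\hat 0,\hat 1)$. The brute-force sketch below is ours (helper names are ours): it computes $\mu(\hat 0,\hat 1)$ of $B_k \times \Pi_k$ for small $k$; a wedge of $(k-1)!$ spheres of dimension $2k-3$ has $\widetilde{\chi}=-(k-1)!$.

```python
from itertools import combinations

def set_partitions(xs):
    """All partitions of the list xs, as lists of blocks."""
    if not xs:
        yield []
        return
    first, rest = xs[0], xs[1:]
    for p in set_partitions(rest):
        for i in range(len(p)):
            yield p[:i] + [p[i] + [first]] + p[i + 1:]
        yield [[first]] + p

def chi_tilde_product(k):
    """mu(0,1) of B_k x Pi_k, i.e. the reduced Euler characteristic of the
    order complex of its proper part (Philip Hall's theorem)."""
    ground = list(range(k))
    subsets = [frozenset(c) for r in range(k + 1) for c in combinations(ground, r)]
    parts = [frozenset(frozenset(b) for b in p) for p in set_partitions(ground)]
    elems = [(S, p) for S in subsets for p in parts]
    def refines(p1, p2):
        return all(any(b1 <= b2 for b2 in p2) for b1 in p1)
    def leq(a, b):
        return a[0] <= b[0] and refines(a[1], b[1])
    bottom = (frozenset(), frozenset(frozenset([v]) for v in ground))
    top = (frozenset(ground), frozenset([frozenset(ground)]))
    # linear extension: by |S|, then by decreasing number of blocks
    elems.sort(key=lambda e: (len(e[0]), -len(e[1])))
    mu = {}
    for x in elems:
        mu[x] = 1 if x == bottom else -sum(mu[y] for y in mu if leq(y, x))
    return mu[top]

# Lemma: wedge of (k-1)! spheres of dimension 2k-3, so chi~ = -(k-1)!
assert [chi_tilde_product(k) for k in (3, 4)] == [-2, -6]
```

This agrees with the product formula $\mu(B_k\times\Pi_k)=\mu(B_k)\,\mu(\Pi_k)=(-1)^k\cdot(-1)^{k-1}(k-1)!=-(k-1)!$.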
Thus, in order to prove Theorem \ref{main} it suffices to demonstrate that $\dee{n}{2}$ is homotopy equivalent to $\Delta(\ov{B_{n-1} \times \Pi_{n-1}})$. In order to state more precisely what we will prove, we make the following definitions.
\begin{definition} For $x \in \left[ n \right]$ and any graph $G$ on $[n]$, $N_G(x)$ is the {\it neighborhood} of $x$ in $G$, i.e. $N_G(x)=\{y\in [n]: xy\in E(G)\}$,
and $\pi(x,G)$ is the partition of the set $\left[ n \right] \setminus \lp x \rp$ determined by the connected components of $G-x$. \end{definition}
\begin{definition} $\phi:\ov{{\mathcal Lat}(\dee{n}{2})} \rightarrow \ov{B_{n-1} \times \Pi_{n-1}}$ is the map of posets given by $G \mapsto (N_G(1),\pi(1,G))$, and $\phi^\ast:\Delta(\ov{{\mathcal Lat}(\dee{n}{2})}) \rightarrow \Delta(\ov{B_{n-1} \times \Pi_{n-1}})$ is the simplicial map induced by $\phi$. \end{definition}
\noindent Note that if $G$ is a graph on $[n]$ such that $N_G(1)=\lp 2,\ldots,n \rp$ and $G-1$ is connected, then $G$ is $2$-connected. On the other hand, if $N_G(1)=
\emptyset$ and $\pi(1,G)=2|3|\ldots|n$ then $G$ is the empty graph. Thus $\phi$ is well-defined. It is clear that $\phi$ is order preserving, so $\phi^\ast$ is well-defined. We can now state the key technical result, from which (in view of Lemma \ref{boolxpart}) Theorem \ref{main} follows.
\begin{lemma} \label{hom:equiv} The simplicial map $\phi^\ast$ is a homotopy equivalence. \label{homeq} \end{lemma}
To prove Lemma \ref{homeq} we use Quillen's Fiber Lemma (see Proposition \ref{fibre}). In our situation this says that if for each $(S,\pi) \in \ov{B_{n-1} \times \Pi_{n-1}}$ the order complex of the poset $\phi_{\leq}^{-1}(S,\pi)=\lp G \in \ov{{\mathcal Lat}(\dee{n}{2})}:\phi(G) \leq (S,\pi) \rp$ is contractible, then $\phi^\ast$ is a homotopy equivalence.
If $\pi \neq |2 \cdots n|$ then $\phi^{-1}_{\leq}((S,\pi))$ has a top element, namely the graph $G$ such that $1t$ is an edge of $G$ for $t \in S$ and $G$ induces the complete graph on each
block of $\pi$. So assume that $\pi = |2 \cdots n|$. If $|S| \leq 1$ then there is also a top element in $\phi^{-1}_{\leq}((S,\pi))$, namely the graph $G$ which induces a clique on $\lp 2,\ldots,n \rp$ and has $N_G(1)=S$. If $S = \{ 2, \ldots ,n \}$ then $(S, \pi)$ does not lie in the proper part of $B_{n-1} \times \Pi_{n-1}$. In summary, it remains to consider the fibers $\phi_{\leq}^{-1}(S,\pi)$
for pairs $(S,\pi)$ such that $\pi = |2 \cdots n|$ and $S \subseteq
\{ 2, \ldots ,n \}$ with $2 \leq |S| \leq n-2$. To handle these remaining cases, we make the following definitions.
\begin{definition} \begin{itemize} \item[(1)] For $2 \leq k \leq n-1$, $\Delta(k)=\lp G \in \dee{n}{2}:N_G(1) \subseteq \lp 2,\ldots,k \rp \rp$. \item[(2)] For $3 \leq k \leq n-1$, $\Delta(k-1,k)=\lp G \in \Delta(k-1):G+1k \in \Delta(k) \rp$. \end{itemize} \end{definition}
\noindent Note that if $(S,\pi)=(\lp 2,\ldots,k \rp,|2 \cdots n|)$ then $\Delta(k)=\phi_{\leq}^{-1}(S,\pi)$. Also, $\Delta(k-1,k)$ consists of those graphs in $\Delta(k-1)$ which do not become $2$-connected when the edge $1k$ is added.
By the above discussion and the fact that the natural action of $S_n$ on ${\mathcal Lat}(\dee{n}{2})$ is order preserving, Lemma \ref{homeq} follows immediately from the next lemma.
\begin{lemma} For $2 \leq k \leq n-1$, $\Delta(k)$ is contractible. \label{dkcon} \end{lemma}
\noindent The proof of Lemma \ref{dkcon} proceeds by induction on $k$, the case $k=2$ having been handled above.
The inductive proof is therefore achieved by the combination of the following two lemmas.
\begin{lemma} Let $3 \leq k \leq n-1$. If $\Delta(k-1)$ and $\Delta(k-1,k)$ are contractible, then so is $\Delta(k)$. \label{redk} \end{lemma}
\noindent \begin{Proof}
Let $\star(1k)$ be the subcomplex of $\Delta(k)$ consisting of graphs that either contain the edge $1k$ or else can be extended within $\Delta(k)$ to contain $1k$. Then $\star(1k)$ is a cone with base $\Delta(k-1,k)$ and apex $1k$, and we have $$\Delta(k) = \Delta(k-1) \cup \star(1k),$$ $$\Delta(k-1,k) = \Delta(k-1) \cap \star(1k).$$ Thus, $\Delta(k)$ is a union of two contractible complexes with contractible intersection, and hence $\Delta(k)$ is itself contractible (see e.g. \cite[Lemma 10.3]{Bjo91}). \end{Proof}
\begin{lemma} For $3 \leq k \leq n-1$, $\Delta(k-1,k)$ is contractible. \label{dkkcon} \end{lemma}
To prove Lemma \ref{dkkcon} we will use a special case of Forman's discrete Morse theory (see \cite{fo}, and for this case also \cite{ch}). The following works for regular cell complexes, but we will only need the simplicial case.
\noindent \begin{definition} Let $\Sigma$ be a simplicial complex.
\begin{itemize} \item[(1)] $D(\Sigma)$ is the digraph whose vertex set is $\Sigma$ and whose edges are the edges in the Hasse diagram of ${\mathcal Lat}(\Sigma)\setminus \{\hat{1}\}$, all directed downward. \item[(2)] For any set $X$ of edges in $D(\Sigma)$, $D_X(\Sigma)$ is the digraph obtained from $D(\Sigma)$ by reversing the direction of the edges in $X$, so these edges are directed upward while the remaining edges are directed downward. \end{itemize} \end{definition}
Before we can formulate the following proposition we have to recall some basic facts about collapsibility (see for example \cite{Bjo91}). Given a simplicial complex $\Sigma$, a face $\sigma \in \Sigma$ is called {\it free} if $\sigma$ is not maximal and is contained in a unique maximal face of $\Sigma$. If $\sigma$ is free in $\Sigma$ then passing from $\Sigma$ to the complex $\Sigma \setminus \{ \tau:\tau \supseteq \sigma \}$ is called an {\it elementary collapse} of $\Sigma$. If we can obtain a single vertex by applying a sequence of elementary collapses to a complex $\Sigma$, then $\Sigma$ is called {\it collapsible}. Since an elementary collapse of $\Sigma$ is easily seen to induce a strong deformation retraction, it follows that collapsible complexes are contractible.
\begin{proposition} Let $\Sigma$ be a simplicial complex. If $D(\Sigma)$ contains a perfect matching $M$ such that $D_M(\Sigma)$ is acyclic, then $\Sigma$ is collapsible. \label{forman} \end{proposition}
\noindent \begin{Proof}
This is a special case of Corollary 3.5 of \cite{fo}, and this case is easily proved by induction on $|\Sigma|$. If $\Sigma=\lp \emptyset,\lp x
\rp \rp$ then the claim is clearly true. If $|\Sigma|>2$, let $x$ be a source in $D_M(\Sigma)$, which must exist since $D_M(\Sigma)$ contains no directed cycle. It is easy to see that $x$ must be a free face of $\Sigma$ which is properly contained in a unique face $y \in \Sigma$. Now $\Sigma$ is collapsible to the complex obtained by removing $x$ and $y$, and we can apply the inductive hypothesis. \end{Proof}
We call a perfect matching of the type described in Proposition \ref{forman} an {\it acyclic perfect matching} on $D(\Sigma)$. Our goal is to produce an acyclic perfect matching on $D(\Delta(k-1,k))$. The following easy result will be useful.
\begin{lemma} Let $\Sigma$ be a simplicial complex, let $M$ be a matching on $D(\Sigma)$ and let $F_0 \rightarrow F_1 \rightarrow \ldots \rightarrow F_r \rightarrow F_0$ be a directed cycle in $D_M(\Sigma)$. Then there is some dimension $d$ such that $\dim(F_i) \in \lp d,d+1 \rp$ for all $0 \leq i \leq r$. \label{twolev} \end{lemma} \begin{Proof} If the $F_i$ had more than two distinct dimensions then some $F_i$ would be incident to two upward directed edges. This contradicts the fact that $M$ is a matching, and the result follows immediately. \end{Proof}
Before proceeding with the proof of Lemma \ref{dkkcon} we make some technical definitions.
\begin{definition} Consider separable graphs on the vertex set $[n]$. \begin{itemize} \item[(1)] We denote the set of cutpoints of such a graph $G$ by ${\rm Cut}(G)$. \item[(2)] For fixed $k \in \lp 3,\ldots,n-1 \rp$, let \begin{itemize}
\item[(a)] $I(k):=\lp G \in \Delta(k-1,k)~|~N_G(1)=\emptyset \rp$.
\item[(b)] $J(k):=\lp G \in \Delta(k-1,k)~|~N_G(1) \neq \emptyset \mbox{ and } {\rm Cut}(G+1k) \neq \lp 1 \rp \rp$.
\item[(c)] $F(k):=\lp G \in \Delta(k-1,k)~|~{\rm Cut}(G+1k)=\lp 1 \rp \rp$. \end{itemize} \end{itemize} \end{definition}
\noindent Note that $\Delta(k-1,k)$ is the disjoint union of $I(k)$, $J(k)$ and $F(k)$, and that both $I(k)$ and $I(k) \cup J(k)$ are subcomplexes of $\Delta(k-1,k)$.
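The cutpoint sets ${\rm Cut}(\cdot)$ appearing above are cheap to compute for small graphs. A minimal brute-force Python sketch (our own naming, not taken from the paper): a vertex is a cutpoint exactly when deleting it increases the number of connected components.

```python
def components(vertices, edges):
    """Connected components of the graph (vertices, edges), via DFS."""
    adj = {v: set() for v in vertices}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, comps = set(), []
    for v in vertices:
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def cutpoints(vertices, edges):
    """Vertices whose removal increases the number of components."""
    base = len(components(vertices, edges))
    cut = set()
    for v in vertices:
        rest_v = [u for u in vertices if u != v]
        rest_e = [e for e in edges if v not in e]
        if len(components(rest_v, rest_e)) > base:
            cut.add(v)
    return cut

# Two triangles glued at vertex 3: the only cutpoint is 3.
V = [1, 2, 3, 4, 5]
E = [(1, 2), (1, 3), (2, 3), (3, 4), (3, 5), (4, 5)]
print(cutpoints(V, E))  # {3}
```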
The following lemma implies Lemma \ref{dkkcon}, and therefore completes the proof of Theorem \ref{main}.
\begin{lemma} For any $k \in \lp 3,\ldots,n-1 \rp$, $D(\Delta(k-1,k))$ admits an acyclic perfect matching. \label{apm} \end{lemma} \noindent \begin{Proof} This proof will be carried out in three steps. We will construct an acyclic perfect matching first for $D(I(k))$, then for $D(I(k) \cup J(k))$, and finally for $D(\Delta(k-1,k))$.
\noindent {\sf Step 1:} $D(I(k))$ admits an acyclic perfect matching.
Note that $I(k)$ contains a unique maximal face, namely the complete graph on $\lp 2,\ldots,n \rp$. Thus $I(k)$ is a simplex and it is easy to see that
the matching $M=\lp G+23 \rightarrow G \setminus 23 ~|~ G \in I(k) \rp$ is an acyclic perfect matching on $D(I(k))$.
\noindent {\sf Step 2:} $D(I(k) \cup J(k))$ admits an acyclic perfect matching.
It suffices to show that there exists a matching $M^\ast$ consisting of edges between elements of $J(k)$ which covers all the elements of $J(k)$, and such that $D_{M^\ast}(I(k) \cup J(k))$ is acyclic. If $M^\ast$ is such a matching, let $M^\circ$ be an acyclic perfect matching on $D(I(k))$ and set $M=M^\ast \cup M^\circ$. Then $M$ is a perfect matching on $D(I(k) \cup J(k))$ which contains no edges between $I(k)$ and $J(k)$, so that any directed cycle in $D_M(I(k) \cup J(k))$ cannot cover points from both $I(k)$ and $J(k)$. It follows immediately that $M$ is acyclic.
Now let $G \in J(k)$ and let $c \in {\rm Cut}(G+1k)$, $c\neq 1$. Let $x=\min\lp N_G(1) \rp$. If $xk \in E(G)$ then clearly $G\setminus xk \in J(k)$. Now assume $xk \not\in E(G)$. If $c \not\in \lp x,k \rp$ then since $1k$ and $1x$ are edges of $G+1k$, $x$ and $k$ lie in the same connected component of $(G+1k)-c$, so $c$ remains a cutpoint after the edge $xk$ is added. If $c \in \lp x,k \rp$ then clearly $c$ is a cutpoint of $G+xk+1k$. In any case, $c \in {\rm Cut}(G+xk+1k)$ and $G+xk \in J(k)$.
Let $M^\ast$ consist of all edges $G+xk \rightarrow G \setminus xk$, where $x$ is determined as above. Clearly $M^\ast$ is a matching which covers all points in $J(k)$. Assume for contradiction that $A_1 \rightarrow B_1 \rightarrow A_2 \rightarrow B_2 \rightarrow \ldots \rightarrow B_r \rightarrow A_1$ is a directed cycle in $D_{M^\ast}(I(k) \cup J(k))$. Clearly all the $A_i$ and all the $B_i$ are in $J(k)$, and by Lemma \ref{twolev} we may assume that for each $i$ there are edges $\alpha_i$ and $\beta_i$ such that $B_i=A_i+\alpha_i$ and $A_{i+1}=B_i \setminus \beta_i$. Thus $A_1=A_1+\alpha_1 \setminus \beta_1+\ldots +\alpha_r\setminus \beta_r$, so $\lp \alpha_1,\ldots,\alpha_r \rp=\lp \beta_1,\ldots,\beta_r \rp$ as multisets. By the definition of $M^\ast$, no $\alpha_i =x_i k$ contains $1$, so no $\beta_i$ contains $1$. It follows that $N_{A_1}(1)=N_{B_1}(1)=N_{A_2}(1)=\ldots= N_{B_r}(1)$. By the choice of the $x_i$'s this forces $\alpha_1=\alpha_2=\ldots=\alpha_r$, which is clearly impossible.
\noindent {\sf Step 3:} $D(\Delta(k-1,k))$ admits an acyclic perfect matching.
As in Step 2, it suffices to produce a matching $M^\ast$ on edges connecting elements of $F(k)$ which covers all points in $F(k)$ and such that $D_{M^\ast}(\Delta(k-1,k))$ is acyclic.
Let $G \in F(k)$. Then $(G+1k)-1$ splits into connected components $C_1, \ldots,C_s$ such that for each $i \in \left[ s \right]$ the subgraph of $G+1k$ induced on $V(C_i) \cup \lp 1 \rp$ is $2$-connected. We may assume that $n \in V(C_1)$. Note that since $k<n$, $1n \not\in G+1k$. Define $S(G)$ to be the set of all $x \in V(C_1) \cap N_{G+1k}(1)$ such that there is a path $P=1,x,\ldots,n$ in $G+1k$ with $P \cap N_{G+1k}(1)=\lp x \rp$.
We claim that $|S(G)|>1$. Indeed, let $1,x,\ldots,n$ be a shortest path from $1$ to $n$ in $G+1k$. Clearly $x \in S(G)$. Since the subgraph of $G+1k$ induced on $V(C_1) \cup \lp 1 \rp$ is $2$-connected and $x \neq n$, there exists a path from $1$ to $n$ in this graph which does not contain $x$. Let $1,y,\ldots,n$ be a shortest such path. Then $y \in S(G)$.
Let $x,y$ be the two smallest elements of $S(G)$. If $xy \not\in G$ then clearly $G+xy \in F(k)$ and $S(G+xy)=S(G)$. Now assume $xy \in G$ and let $H$ be the subgraph of $G \setminus xy+1k$ induced on $V(C_1) \cup \lp 1 \rp$. If $d$ is a cutpoint of $H$ then $x$ and $y$ are in different components of $H-d$ (otherwise $d$ is a cutpoint of the subgraph of $G+1k$ induced on $V(C_1) \cup \lp 1 \rp$). However, there is a cycle $1,x,\ldots,y,1$ in $H$. Thus there is no such cutpoint $d$ and $H$ is $2$-connected. It follows that $G \setminus xy \in F(k)$ and $S(G \setminus xy)=S(G)$.
Now, let $M^\ast$ consist of the edges $G+xy \rightarrow G \setminus xy$ where $x,y$ are determined as above. Then $M^\ast$ is a matching which consists of edges connecting points in $F(k)$ and covers all points in $F(k)$. It remains to show that $D_{M^\ast}(\Delta(k-1,k))$ is acyclic.
Assume for contradiction that $A_1 \rightarrow B_1 \rightarrow A_2 \rightarrow \ldots \rightarrow B_r \rightarrow A_1$ is a directed cycle in $D_{M^\ast}(\Delta(k-1,k))$. As in Step 2, we may assume that there are edges $\alpha_i$ and $\beta_i$ such that $B_i=A_i+\alpha_i$, $A_{i+1}=B_i \setminus \beta_i$ and $\lp \alpha_1,\ldots,\alpha_r \rp=\lp \beta_1,\ldots,\beta_r \rp$ as multisets.
By the definition of $M^\ast$, each $\alpha_i$ connects two elements of $N_{B_i+1k}(1)=N_{A_i+1k}(1)$, so no $\beta_i$ contains $1$. Thus $N_{A_i}(1) =N_{B_j}(1)$ for all $i,j$, and each $\beta_i$ connects two elements of $N_{B_i+1k}(1)=N_{A_{i+1}+1k}(1)$. Write $\alpha_1=xy$. Then $\beta_1 \neq xy$, and in $A_2+1k$, $x$ and $y$ are still the two smallest neighbors of $1$ which are contained in paths from $1$ to $n$ which intersect $N_{A_2+1k}(1)$ exactly once. Thus $\alpha_2=xy=\alpha_1$, giving the desired contradiction. \end{Proof}
\section{The character for the action of $S_n$ on $\ti{H}_{2n-5}(\dee{n}{2})$} \label{rep} \label{homrep}
In view of Theorem \ref{main} it is natural to investigate the representation of the symmetric group $S_n$ on the only non-zero homology group of $\dee{n}{2}$, induced by the obvious action. In this section we consider homology with complex coefficients, hence all representations are over ${\mathbb C}$. In many of the computations below, we actually determine character values for the representation of $S_n$ on the only non-zero homology group of $\Delta(\ov{{\mathcal Lat}(\dee{n}{2})})$, which is easily seen to be the same as the representation described above.
\begin{definition} \begin{itemize} \item[(i)] We denote by $\omega_n^2$ the character of $S_n$ given by $g \mapsto Trace(g,\ti{H}_{2n-5}(\dee{n}{2}))$. \item[(ii)] Let $C_n$ be a cyclic subgroup of $S_n$ generated by a full $n$-cycle. We denote by $\ell ie_n$ the character of $S_n$ induced from the character on $C_n$ which takes the value $e^{\frac{2 \pi i}{n}}$ on a fixed generator. It is well known (see e.g. \cite[Chapter 8]{Reu93}) that $\ell ie_n$ is the character of $S_n$ on the multilinear piece of the free Lie algebra on $n$ generators. \end{itemize} \end{definition}
For the rest of this section we let $S_{n-1}$ be the stabilizer of the point $1$ in the natural action of $S_n$ on the set $[n]$.
\begin{theorem} \label{char} The character $\omega_n^2$ is given by $$\omega_n^2 = \ell ie_{n-1} \uparrow_{S_{n-1}}^{S_n} - \ell ie_{n}.$$ \end{theorem} The proof will follow a sequence of lemmas establishing the main steps.
\begin{lemma} \label{first} If $g \in S_{n-1}$ then $\omega_n^2(g)=\ell ie_{n-1}(g).$ \label{resg1} \end{lemma} \begin{Proof} It is easily seen that the map $\phi:\ov{{\mathcal Lat}(\dee{n}{2})} \rightarrow \ov{B_{n-1} \times \Pi_{n-1}}$, defined in the previous section, commutes with the actions of $S_{n-1}$ on the two posets. Thus the induced map on homology is $S_{n-1}$-equivariant and is an $S_{n-1}$-module isomorphism by Lemma \ref{hom:equiv}. Thus, the characters of $S_{n-1}$ on the homology of $\Delta_n^2$ and on the homology of $\Delta(\ov{B_{n-1} \times \Pi_{n-1}})$ coincide. By an equivariant version of Proposition \ref{homcom} (see \cite{Wel90-1}), $\Delta(\ov{B_{n-1} \times \Pi_{n-1}})$ has the $S_{n-1}$-homotopy type of $\Sigma ( \Delta(\ov{B_{n-1}}) * \Delta(\ov{\Pi_{n-1}})),$ where the group $S_{n-1}$ acts diagonally on $\Delta(\ov{B_{n-1}}) * \Delta(\ov{\Pi_{n-1}})$. Thus the character of $S_{n-1}$ on the homology of $\Delta(\ov{B_{n-1} \times \Pi_{n-1}})$ is given by the product of the characters of $S_{n-1}$ on $\widetilde{H}_*(\Delta(\ov{B_{n-1}}))$ and $\widetilde{H}_*(\Delta(\ov{\Pi_{n-1}}))$. The character of $S_{n-1}$ on $\widetilde{H}_*(\Delta(\ov{B_{n-1}}))$ is rather easily seen to be the sign-character of $S_{n-1}$ (see \cite{Sta82}). The character of $S_{n-1}$ on $\widetilde{H}_*(\Delta(\ov{\Pi_{n-1}}))$ was determined in \cite{Sta82} as $sign_{n-1} \cdot \ell ie_{n-1}$. This implies the assertion. \end{Proof}
\noindent Since every element of $S_n$ which has a fixed point is conjugate to an element of $S_{n-1}$, it remains to determine $\omega_n^2(g)$ for all fixed-point-free $g \in S_n$.
\begin{definition} Let $g \in S_n$. We denote by $L^g$ the poset of faces of $\dee{n}{2}$ which are fixed by $g$, and by $g^\ast$ the element of $S_{n+1}$ which fixes $n+1$ and acts as $g$ does on $\left[ n \right]$. \end{definition}
\noindent Write $\hat{0}$ for the empty graph in $L^g$, which is the unique minimum element of $L^g$, and for any poset $P$ let $\mu_{P}$ be the M\"obius function on $P$.
\begin{lemma} For $g \in S_n$, $\omega_n^2(g)=\displaystyle{\sum_{G \in L^g}\mu_{L^g}(\hat{0},G)}$. \label{hall} \end{lemma}
\noindent \begin{Proof} It is well-known (see e.g. \cite[(13.5)]{Bjo91}) that if a group acts on a bounded poset $P$ then for any group element $g$ we have \[ \mb{P^g}=\sum_i(-1)^iTr(g,\ti{H}_i(\Delta(\ov{P}))). \] In the case under consideration, the only nonzero reduced homology group is the one in dimension $2n-5$, so the lemma follows immediately from the definition of the M\"obius function. \end{Proof}
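For small posets the M\"obius function can be evaluated directly from its defining recursion $\mu(x,x)=1$ and $\mu(x,y)=-\sum_{x \leq z < y}\mu(x,z)$. A brute-force Python sketch (our own naming, purely illustrative), checked on the Boolean algebra $B_3$, where $\mu(\hat{0},\hat{1})=(-1)^3$:

```python
from itertools import combinations
from functools import lru_cache

def mobius(elements, leq):
    """Return a function mu(x, y) for a finite poset given as a list of
    elements together with a <=-predicate."""
    idx = {e: i for i, e in enumerate(elements)}

    @lru_cache(maxsize=None)
    def mu(i, j):
        x, y = elements[i], elements[j]
        if x == y:
            return 1
        # mu(x, y) = -sum of mu(x, z) over all z with x <= z < y
        return -sum(mu(i, idx[z]) for z in elements
                    if leq(x, z) and leq(z, y) and z != y)

    return lambda x, y: mu(idx[x], idx[y])

# Boolean algebra B_3: subsets of {1, 2, 3} ordered by inclusion.
B3 = [frozenset(c) for r in range(4) for c in combinations((1, 2, 3), r)]
mu = mobius(B3, lambda a, b: a <= b)
print(mu(frozenset(), frozenset({1, 2, 3})))  # -1
```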
The next two lemmas will be used to determine $\omega_n^2(g)$ when $g$ is fixed-point-free.
\begin{lemma} Let $G$ be a graph whose automorphism group acts transitively on $V(G)$. If $G$ is connected then $G$ is $2$-connected. \label{cyctrans} \end{lemma}
\begin{Proof}
Let $v$ be a leaf of some spanning tree in the connected graph $G$. Then $v$ is not a cutpoint. Since Aut($G$) is transitive on vertices there cannot be any other cutpoints. Hence $G$ is $2$-connected. \end{Proof}
\begin{lemma} Let $g \in S_n$ be fixed-point-free. Write $g$ as a product of disjoint cycles, $g=g_1 \ldots g_r$. Let $V_i=supp(g_i)$. Let $G \in L^g$ be connected and let $x \in {\rm Cut}(G)$ with $x \in V_j$. Then there exists some connected component $C$ of $G-x$ such that $V_j \setminus \lp x \rp \subseteq C$ and $C \cap V_i \neq \emptyset$ for all $i \in \left[ r \right]$. \label{bigcomp} \end{lemma}
\noindent \begin{Proof} Let $G_j$ be the graph on $V_j$ such that an edge $yz$ is in $E(G_j)$ if $yz \in E(G)$ or if there is a path $P$ from $y$ to $z$ in $G$ such that $P \cap V_j=\lp y,z \rp$. Since $G$ is connected, so is $G_j$. Also, the group generated by $g_j$ is a group of automorphisms of $G_j$ which acts transitively on $V_j$. By Lemma \ref{cyctrans}, $G_j$ is $2$-connected. It follows that all elements of $V_j \setminus \lp x \rp$ are in the same connected component of $G-x$. Now for $i \neq j$, let $P$ be a path of shortest length connecting some $y \in V_i$ with some $z \in V_j$. If $z=x$, replace $P$ with $g(P)$. Now $P$ contains no vertices from $V_i \cup V_j$ other than $y$ and $z$, and $z \neq x$. Thus $P$ is a path in $G-x$ and $y$ lies in the component of $G-x$ containing $V_j \setminus \lp x \rp$. \end{Proof}
We can now determine the values of $\omega_n^2$ on fixed-point-free elements of $S_n$.
\begin{lemma} \label{second} Let $g \in S_n$ be fixed-point-free. Then $\omega_n^2(g)=-\omega_{n+1}^2(g^\ast)$. \label{wfpf} \end{lemma} \begin{Proof} As usual we write $\hat{0}$ for the empty graph. By Lemma \ref{hall} we have \[ \omega_{n+1}^2(g^\ast)=\sum_{G \in L^{g^\ast}}\mega. \] Let $M^g$ be the poset of all graphs on $\left[ n \right]$ which are fixed by $g$. Note that if $G \in L^{g^\ast}$ then $G - (n+1) \in M^g$. For $F \in M^g$ let $D(F)$ be the set of all $G \in L^{g^\ast}$ such that $G-(n+1)=F$. We have \[ \omega_{n+1}^2(g^\ast)=\sum_{F \in M^g}\sum_{G \in D(F)}\mega. \] Any $G \in L^{g^\ast}$ is a union of $\langle g^\ast \rangle$-orbits on ${{\left[ n+1 \right]} \choose {2}}$. Let $o(G)$ be the number of such orbits. It is easy to see that $\mega=(-1)^{o(G)}$. Let $p(G)$ be the number of such orbits containing edges covering the point $n+1$. Applying the previous argument to $M^g$, we get for any $F \in M^g$ \[ \sum_{G \in D(F)}\mega=\meg\sum_{G \in D(F)}(-1)^{p(G)}. \]
We will examine this sum for each $F \in M^g$, looking separately at the cases where $F$ is disconnected, connected but not $2$-connected, and $2$-connected. Write $g$ as a product of disjoint cycles, $g=g_1 \ldots g_r$ and let $V_i=supp(g_i)$. Note that if $v \in V_i$ and $G \in L^{g^\ast}$ with $\lp v,n+1 \rp \in G$, then the $\langle g^\ast \rangle$-orbit containing $\{v,n+1\}$ consists of the edges $\{w,n+1\}$ for all $w \in V_i$, and is contained in $E(G)$. Also, $p(G)$ is simply the number of such orbits. Let $O(g)$ be the set of all such orbits, and for $S \subseteq O(g)$ let $G(S)$ be the graph induced on the edges which are contained in elements of $S$. For $F \in M^g$ define \[ \Sigma(F):=\lp S \subseteq O(g): F \cup G(S) \in L^{g^\ast} \rp. \] Note that $\Sigma(F)$ is a simplicial complex on $O(g)$. Let $P(F)={\mathcal Lat}(\Sigma(F)) \setminus \lp \hat{1} \rp$. By the above arguments we have \[ \sum_{G \in D(F)}\mega=\meg\sum_{S \in P(F)}\mu_{P(F)}(\hat{0},S). \]
We now examine the three cases.
\noindent {\sc Case 1:} $F$ is not connected.
Then $P(F)$ is the Boolean algebra on $O(g)$, since $n+1$ is a cutpoint of $F \cup G(S)$ for all $S \subseteq O(g)$. It follows immediately that \[ \sum_{G \in D(F)}\mega=0. \]
\noindent {\sf Case 2:} $F$ is connected but not $2$-connected.
We will use the block decomposition described in Proposition \ref{sep} of the following section. Given a connected but not $2$-connected graph $F \in M^g$, let $T(F)$ be the bipartite graph whose vertices are the vertices of $F$ and the blocks of $F$, with $\{ v,W_i\}$ an edge if and only if $v \in W_i$. It is easy to see that $T(F)$ is a tree and that $\langle g \rangle$ is a group of automorphisms of $T(F)$ which preserves each part of the given bipartition. It follows that $g$ fixes a vertex of $T(F)$ (see \cite{Lov93}). Since $g$ fixes no vertex of $F$, $g$ must fix some block $B$ of $F$. This means that there is some nonempty $J \subset \left[ r \right]$ such that $B=\cup_{j \in J}V_j$. Let $S$ be the set of all orbits in $O(g)$ which contain edges that include vertices in $B$. We will show that every maximal element of $P(F)$ contains $S$, from which it follows immediately that \[ \sum_{G \in D(F)}\mega=0. \]
Let $G \in D(F)$ and let $c \in {\rm Cut}(G)$. Since $F$ is connected, $c \neq n+1$. Also, if $N_G(n+1) \neq \emptyset$ then since $g$ is fixed-point-free $c$ must be a cutpoint of $F$. If $N_G(n+1)=\emptyset$ then every $x \in \left[ n \right]$ cuts $G$, so in any case we may assume $c \in {\rm Cut}(F)$. Let $c \in V_i$. By Lemma \ref{bigcomp}, there is some connected component $C$ of $F-c$ which contains $V_i \setminus \lp c \rp$ and at least one element of each $V_j$. Since $B$ is $2$-connected and $B \cap C \neq \emptyset$, we must have $B \subseteq C$.
We will now show that $c$ must be a cutpoint of $G \cup G(S)$. If $N_G(n+1)=\emptyset$ then adding $G(S)$ to $G$ simply moves the previously isolated point $n+1$ into the connected component of $G-c$ which contains $C$. However, there is a component of $F-c$ besides $C$, which remains separated from $C$ in $G-c$. Now, assume that $N_G(n+1) \neq \emptyset$. Then there exists some set $I$ such that $N_G(n+1)=\cup_{i \in I}V_i$. The component of $G-c$ containing $C$ contains elements of each $V_i$, and it follows that $n+1$ must also be in this component. Thus, adding $G(S)$ to $G$ does not reduce the number of components of $G-c$.
\noindent {\sf Case 3:} $F$ is $2$-connected.
In this case the only $G \in D(F)$ is the one with $N_G(n+1)=\emptyset$. Indeed, since each $V_i$ has at least two elements we cannot have $|N_G(n+1)|=1$, while if $|N_G(n+1)| \geq 2$ then $G$ is $2$-connected, and the claim follows. Thus \[ \sum_{G \in D(F)}\mega=\mu_{M^g}(\hat{0},F). \]
Let $K^g$ be the set of $2$-connected graphs in $M^g$. Combining the information from the three cases we have shown that \[ \omega_{n+1}^2(g^\ast)=\sum_{F \in K^g}\mu_{M^g}(\hat{0},F). \] By definition of the M\"obius function and the fact that $M^g$ has a maximum element, we have \[ \sum_{F \in K^g}\mu_{M^g}(\hat{0},F)=-\sum_{F \in L^g}\mu_{L^g}(\hat{0},F) =-\omega_n^2(g), \] and the proof is complete. \end{Proof}
\noindent {\sf Proof of Theorem \ref{char}:} Set $\rho_n = \ell ie_{n-1} \uparrow_{S_{n-1}}^{S_n} - \ell ie_n$. We must show that $\rho_n(g) = \omega_n^2(g)$ for all $g \in S_n$.
By the definition of induced characters, if $g \in S_n$ is not a product of disjoint cycles of the same length then $\ell ie_n(g) = 0$. Since all the characters involved are class functions, we will assume from now on that any $g \in S_n$ which fixes a point is contained in $S_{n-1}$ (so by our convention it fixes the point $1$).
By the definition of induced characters and Theorem \ref{main}, we have $$\rho_n({\mbox{\rm id}}) = (n-2)!~ [S_n:S_{n-1}] - (n-1)! = (n-2)! = \omega_n^2({\mbox{\rm id}}).$$
If $g \neq {\mbox{\rm id}}$ and $g$ has at least two fixed points, then $\ell ie_{n-1}(g) = \ell ie_n(g) = 0$, so $\rho_n(g) = \omega_n^2(g)$ by Lemma \ref{first}.
If $g \neq {\mbox{\rm id}}$ has exactly one fixed point, then $\ell ie_n(g) = 0$. For $h \in S_n$ we have $g^h := h^{-1}gh \in S_{n-1}$ if and only if $h \in S_{n-1}$. By the definition of induced characters and Lemma \ref{first}, $$\rho_n(g) = \ell ie_{n-1} \uparrow_{S_{n-1}}^{S_n}(g) = \frac{1}{(n-1)!} \sum_{h \in S_{n-1}} \ell ie_{n-1}(g^h) = \ell ie_{n-1}(g) = \omega_n^2(g).$$
If $g \in S_n$ has no fixed points then $\ell ie_{n-1} \uparrow_{S_{n-1}}^{S_n} (g) = 0$ and $\rho_n(g) = - \ell ie_n(g)$. As before, let $g^*$ be the element of $S_{n+1}$ which fixes $n+1$ and acts as $g$ does on $[n]$. Since $\omega_{n+1}^2$ is a class function, Lemma \ref{first} (applied with $n+1$ in place of $n$) gives $\omega_{n+1}^2(g^*) = \ell ie_n(g)$. Hence, by Lemma \ref{second}, $\rho_n(g) = \omega_n^2(g)$. \qed
According to Vassiliev, the number of linearly independent knot invariants of bi-order $(n,n-1)$, modulo lower bi-order invariants, is bounded from above by the multiplicity of the trivial representation in the restriction of $\omega_n^2$ to the cyclic group $C_n$ generated by $(12 \cdots n)$. See \cite{Vas97} for all details. As a corollary of Theorem \ref{char} we obtain a formula for this multiplicity. We write $\langle \xi,1 \rangle$ for the multiplicity of the trivial character in any character $\xi$ of $C_n$.
\begin{corollary} \label{trivmult} \[ \langle \omega_n^2 \downarrow^{S_n}_{C_n},1 \rangle=(n-2)! -
\frac{1}{n}\sum_{d|n}\mu(d)\phi(d) \left(\frac{n}{d}-1\right)! \hskip2pt d^{\frac{n}{d}-1}. \] \end{corollary} \begin{Proof} As a consequence of a result by Hanlon \cite{Han82} (see also \cite{Sta82}), it is straightforward to show that \[ \langle \ell ie_n\downarrow^{S_n}_{C_n},1 \rangle=
\frac{1}{n}\sum_{d|n}\mu(d)\phi(d) \left(\frac{n}{d}-1\right)! \hskip2pt d^{\frac{n}{d}-1}, \] where $\mu$ is the usual number-theoretic M\"obius function and $\phi$ is Euler's totient function. On the other hand, $$\langle \ell ie_{n-1}\uparrow_{S_{n-1}}^{S_n} \downarrow_{C_n}^{S_n},1 \rangle = \frac{1}{n} \sum_{g \in C_n} \ell ie_{n-1} \uparrow_{S_{n-1}}^{S_n}(g) = \frac{1}{n} \hskip2pt \ell ie_{n-1} \uparrow_{S_{n-1}}^{S_n} ({\mbox{\rm id}}) = (n-2)!.$$ Now the assertion follows immediately from Theorem \ref{char}. \end{Proof}
The values of $w_n = \langle \omega_n^2\downarrow^{S_n}_{C_n},1 \rangle$ for small $n$ are given in the table below.
\begin{center}
$\begin{array}{|c||c|c|c|c|c|c|c|c|c|} \hline n & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 \\ \hline \hline w_n & 1 & 1 & 2 & 6 & 18 & 96 & 564 & 4,072 & 32,990\\ \hline \end{array}$ \end{center}
\centerline{{\sf Table 1:} Multiplicity $w_n$ of the trivial character in $\omega_n^2 \downarrow_{C_n}^{S_n}$}
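The entries of Table 1 can be reproduced mechanically from the formula of Corollary \ref{trivmult}. The following self-contained Python check (function names are ours) recovers the whole table:

```python
from math import factorial, gcd

def mobius_nt(n):
    """Number-theoretic Moebius function mu(n), by trial division."""
    result, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:      # repeated prime factor
                return 0
            result = -result
        p += 1
    if m > 1:                   # one remaining prime factor
        result = -result
    return result

def phi(n):
    """Euler's totient function."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def w(n):
    """Multiplicity of the trivial character of C_n in omega_n^2,
    via the formula of the corollary."""
    total = sum(mobius_nt(d) * phi(d) * factorial(n // d - 1) * d ** (n // d - 1)
                for d in range(1, n + 1) if n % d == 0)
    assert total % n == 0       # the multiplicity is an integer
    return factorial(n - 2) - total // n

print([w(n) for n in range(3, 12)])
# [1, 1, 2, 6, 18, 96, 564, 4072, 32990]
```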
The character $\omega_n^2$ and the tensor product of $\omega_n^2$ with the sign character have recently appeared in various different settings, see Section \ref{questionrep}.
\section{The lattice of block-closed graphs}
\label{closed}
In this section we will obtain information on the topology of $\Delta_{n,k}^2$ by producing a lattice $\Sigma_{n,k}$ such that $\Delta(\ov{\Sigma_{n,k}})$ is homotopy equivalent to $\Delta_{n,k}^2$ and examining the structure of $\Sigma_{n,k}$. For lattice and poset terminology not explained in Section \ref{notation} we refer to \cite{Sta86}.
We begin by recalling some elements of the well known structure theory of separable graphs, which appears e.g. in \cite{Lov93}.
\begin{definition} Let $G$ be any graph. A {\em block} of $G$ is a subset $W$ of $V(G)$ such that the subgraph of $G$ induced on $W$ is $2$-connected or $W$ is a singleton or a pair of points connected by an edge, and the subgraph of $G$ induced on any proper superset of $W$ is separable. We will say that $G$ is {\em block-closed} if the subgraph induced on each block is a clique. \end{definition}
Given a graph $G$, say that $e\equiv e'$ for two of its edges $e$ and $e'$ if $e=e'$ or both lie on some common cycle of $G$. This is easily seen to be an equivalence relation on $E(G)$. If $W$ is the set of vertices underlying an equivalence class then $W$ is a block, and all non-singleton blocks correspond to equivalence classes of edges in this way. From this it is easy to derive the following basic facts about the ``block decomposition'' of $G$; see \cite{Lov93} for more details.
\begin{proposition} Let $G$ be a graph. Then there exists a unique decomposition of $V(G)$ into blocks $W_1,\ldots,W_r$, and if $i \neq j$ we have
$|W_i \cap W_j| \leq 1$. Moreover, if $B_G$ is the graph with vertex set $\lp w_1,\ldots,w_r \rp$ such that $\lp w_i,w_j \rp \in E(B_G)$ if and only if $|W_i \cap W_j|=1$, then $B_G$ is a forest (that is, $B_G$ contains no cycles). \label{sep} \end{proposition}
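For small graphs the block decomposition can be computed naively, straight from the definition: a block is a maximal vertex set inducing a $2$-connected subgraph, a single edge, or a single vertex. A brute-force Python sketch (our own naming, exponential in the number of vertices, so only for illustration):

```python
from itertools import combinations

def is_connected(vertices, edges):
    if not vertices:
        return True
    adj = {v: set() for v in vertices}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, stack = set(), [next(iter(vertices))]
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        stack.extend(adj[u] - seen)
    return seen == set(vertices)

def induced(W, edges):
    return [e for e in edges if set(e) <= W]

def qualifies(W, edges):
    """W induces a 2-connected graph, a single edge, or a single vertex."""
    E = induced(W, edges)
    if len(W) == 1:
        return True
    if len(W) == 2:
        return len(E) == 1
    return is_connected(W, E) and all(
        is_connected(W - {v}, [e for e in E if v not in e]) for v in W)

def blocks(vertices, edges):
    """Maximal qualifying vertex sets (brute force over all subsets)."""
    cand = [set(c) for r in range(1, len(vertices) + 1)
            for c in combinations(vertices, r) if qualifies(set(c), edges)]
    return [W for W in cand if not any(W < X for X in cand)]

# Two triangles sharing vertex 3, plus a pendant edge 5-6.
V = [1, 2, 3, 4, 5, 6]
E = [(1, 2), (1, 3), (2, 3), (3, 4), (3, 5), (4, 5), (5, 6)]
print(sorted(map(sorted, blocks(V, E))))
# [[1, 2, 3], [3, 4, 5], [5, 6]]
```

Note that distinct blocks overlap in at most one vertex, as in the proposition above.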
Note that if $K$ is a $k$-graph with underlying graph $G$, then every block of $G$ has size at least $k$ or is a single vertex.
\begin{definition} Let $K$ be a $k$-graph with underlying graph $G$, and let $W_1,\ldots,W_r$ be the blocks of $G$. We define $K^*$ to be the $k$-graph which induces the complete $k$-graph on each $W_i$ and contains no other hyperedges. We also define $\Sigma_{n,k}$ to be the poset of all graphs on vertex set $\left[ n \right]$ in which every block is either an isolated vertex or a clique of size at least $k$, ordered by inclusion. \end{definition}
The first part of the following lemma is immediate from the definition, and the second follows via a standard argument for closure operators on lattices.
\begin{lemma} \begin{itemize} \item[(i)] The map $K \mapsto K^*$ defines a closure operator on ${{\mathcal Lat}(\Delta_{n,k}^2)}$ whose image is isomorphic to $\Sigma_{n,k}$. \item[(ii)] $\Sigma_{n,k}$ is a lattice. \label{clop} \end{itemize} \end{lemma}
The meet operation in the lattice $\Sigma_{n,k}$ is intersection of edge-sets followed by deletion of the edges in all blocks of size smaller than $k$. Note that the elements of $\Sigma_{n,2}$ are the block-closed graphs, and that we have a tower of embeddings as subposets (not sublattices): $$\Sigma_{n,k}\subseteq \cdots \subseteq \Sigma_{n,3} \subseteq \Sigma_{n,2}.$$ Hence, in view of the following result, the topology of all the complexes $\Delta_{n,k}^2$ is encoded in the lattice $\Sigma_{n,2}$ of block-closed graphs.
\begin{theorem} The complexes $\Delta_{n,k}^2$ and $\Delta(\ov{\Sigma_{n,k}})$ are homotopy equivalent. \label{dnksnk} \end{theorem}
\begin{Proof} $K^*$ is the complete graph (the top element of $\Sigma_{n,k}$) if and only if $K$ is $2$-connected. Hence, the map $K \mapsto K^*$ restricts to a closure operator on $\ov{{\mathcal Lat}(\Delta_{n,k}^2)}$ whose image is isomorphic to $\ov{\Sigma}_{n,k}$. The theorem then follows from Corollary \ref{closure}. \end{Proof}
We will now investigate the structure of $\Sigma_{n,k}$. The next two lemmas follow immediately from the definition of $\Sigma_{n,k}$. We write $\hat{0}$ for the empty graph, which is the minimum element of $\Sigma_{n,k}$, and $\hat{1}$ for the complete graph, which is its maximum.
\begin{lemma} Let $G,H \in \Sigma_{n,k}$. Then $G$ covers $H$ if and only if one of the following conditions holds: \begin{itemize} \item[(i)] $E(G) \setminus E(H)$ is a clique on $k$ vertices belonging to $k$ pairwise different components of $H$. \item[(ii)] $E(G) \setminus E(H)$ is a complete bipartite graph on parts $A$ and $B$, and there is a vertex $v$ such that $A \cup \lp v \rp$ and $B \cup \lp v \rp$ are blocks in $H$. \item[(iii)] Only if $k>2$: $E(G) \setminus E(H)$ is a star (that is, a connected graph with at most one vertex of degree more than one), and the vertices of degree one in this star form a block in $H$ belonging to a component of $H$ distinct from that of the center of the star. \end{itemize} \label{covsnk} \end{lemma}
The three types of coverings can informally be described as follows: \begin{itemize} \item[(i)] select a vertex from each of $k$ pairwise distinct components of $H$ and then create a $k$-clique on these vertices; \item[(ii)] complete the union of two overlapping blocks of $H$ to a clique; \item[(iii)] for $k>2$: select a block and a vertex from different components of $H$ and complete their union to a clique. \end{itemize}
The lattices $\Sigma_{n,k}$ are neither upper nor lower semimodular. However, they exhibit a recursive structure on lower intervals, and certain upper intervals are upper semimodular, as the following lemma shows.
\begin{lemma} Let $G\in \Sigma_{n,k}$. \begin{itemize} \item[(i)] If $G$ has $r$ non-singleton blocks of sizes $m_1, \dots , m_r$ then the interval $[\hat{0},G]$ is isomorphic to the direct product $\Sigma_{m_1,k} \times \cdots \times \Sigma_{m_r,k}$. \item[(ii)] If $G$ is connected then the interval $[G,\hat{1}]$ is isomorphic to a direct product of partition lattices. More precisely, suppose that $G$ has $s$ cutpoints and that the $i$-th cutpoint lies in $t_i \geq 2$ blocks. Then, $[G,\hat{1}] \cong \Pi_{t_1} \times \cdots \times \Pi_{t_s}$. \end{itemize} \label{interv} \end{lemma}
The following description of the coatoms of $\Sigma_{n,k}$, that is, the elements which are covered by $\hat{1}$, follows immediately from the two preceding lemmas.
\begin{lemma} Let $M$ be a coatom of $\Sigma_{n,k}$. Then one of the following conditions holds: \begin{itemize} \item[(i)] $M$ is connected and has two blocks of size $l,m$ with $k \leq l \leq m \leq n-k+1$ and $l+m=n+1$. In this case, the interval $\left[ \hat{0},M \right]$ is isomorphic to $\Sigma_{l,k} \times \Sigma_{m,k}$. \item[(ii)] $M$ consists of an $(n-1)$-clique and an isolated vertex. In this case, $k>2$ and the interval $\left[ \hat{0},M \right]$ is isomorphic to $\Sigma_{n-1,k}$. \end{itemize} \label{cosnk} \end{lemma}
For any graph $G$ let $c(G)$ be the number of connected components, and $b(G)$ the number of blocks of size $\geq 2$.
\begin{theorem} \begin{itemize} \item[(i)] The lattice $\Sigma_{n,2}$ is graded with rank function $$\rho(G)=2n-2c(G)-b(G).$$ In particular, its length is $\rho(\hat{1})=2n-3$. \item[(ii)] The lattice $\Sigma_{n,3}$ is graded with rank function $$\rho(G)=n-c(G)-b(G).$$ In particular, its length is $\rho(\hat{1})=n-2$. \item[(iii)] If $k>3$ and $n<2k-1$, then $\Sigma_{n,k}$ is isomorphic to the lower-truncated Boolean algebra
$\{A\subseteq [n]: |A|\geq k\}\cup \{ \emptyset \}.$ In particular, $\Sigma_{n,k}$ is graded of length $n-k+1$. \item[(iv)] If $k>3$ and $n \geq 2k-1$, then $\ell$ is the length of a maximal chain of $\Sigma_{n,k}$ if and only if $$\ell =(n-2)-t(k-3), \mbox{ for some } 1\leq t\leq \lfloor \frac{n-1}{k-1} \rfloor .$$ In particular, $\Sigma_{n,k}$ is of length $n-k+1$ and is not graded. \item[(v)] If $k>3$ then $G \in \ov{\Sigma_{n,k}}$ is contained in a chain of length $n-k+1$ if and only if $G$ consists of a clique of size $l \geq k$ and $n-l$ isolated vertices. \end{itemize} \label{chsnk} \end{theorem}
\begin{Proof} For claims (i) and (ii) it suffices to check that the given rank functions increase by $1$ for each type of covering given in Lemma \ref{covsnk} and take value zero at the empty graph. Claim (iii) is clear from the definition.
Claims (iv) and (v) are implied by the following description of the maximal chains in $\Sigma_{n,k}$. We will here view $\Sigma_{n,k}$ as a subposet of $\Sigma_{n,3}$, and we let $\rho$ denote the restriction of the rank function of claim (ii) from $\Sigma_{n,3}$ to $\Sigma_{n,k}$.
A maximal chain from $\hat{0}$ to $\hat{1}$ in $\Sigma_{n,k}$ is a sequence of covering steps. By Lemma \ref{covsnk} there are three possibilities for each step. The rank function $\rho$ will increase by $1$ for coverings of types (ii) or (iii), and by $k-2$ for coverings of type (i). Hence, the length of a maximal chain must be $n-2-t(k-3)$, where $t$ is the number of covering steps of type (i). Note that $t\geq 1$ since the first covering in the chain must be of type (i), and that $t\leq \lfloor \frac{n-1}{k-1} \rfloor$ since each step of type (i) reduces the number of connected components by $k-1$ and the total reduction of components along the whole chain is $n-1$.
Now, suppose that $1\leq t\leq \lfloor \frac{n-1}{k-1} \rfloor$. A maximal chain of length $n-2-t(k-3)$ is constructed as follows. First perform a sequence of $t$ covering steps of type (i) producing the graph with $k$-cliques on the sets $\{1,\dots,k\}$, $\{k,\dots,2k-1\}$, $\dots$ $\{(t-1)k-(t-2),\dots,tk-(t-1)\}$. Then continue from there via a sequence of $t-1$ covering steps of type (ii) leading to the graph with a $(tk-(t-1))$-clique on the set $[tk-(t-1)]$. Finally, $n-(tk-(t-1))$ covering steps of type (iii) will lead to the complete graph. The total number of steps taken, i.e. the length of the constructed chain, is $t+(t-1)+n-(tk-(t-1))=n-2-t(k-3)$.
\end{Proof}
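The chain lengths in claims (iv) and (v) can be checked by brute force in the smallest nontrivial case $n=7$, $k=4$. The sketch below assumes the description of $\Sigma_{7,4}$ as the empty graph together with all graphs on $[7]$ whose blocks are cliques of size at least $4$; on $7$ vertices these are the single cliques of size $4,\dots,7$ and the pairs of $4$-cliques sharing one vertex.

```python
from itertools import combinations
from functools import lru_cache

# Brute-force check of the chain-length formula for n = 7, k = 4.
n, k = 7, 4

def clique(S):
    # edge set of the complete graph on the vertex set S
    return frozenset(frozenset(e) for e in combinations(S, 2))

elements = {frozenset()}                       # \hat{0}: the empty graph
for size in range(k, n + 1):                   # single cliques of size k..n
    for S in combinations(range(n), size):
        elements.add(clique(S))
for S in combinations(range(n), k):            # two k-cliques sharing a vertex
    for T in combinations(range(n), k):
        if len(set(S) & set(T)) == 1:
            elements.add(clique(S) | clique(T))
elements = frozenset(elements)
top = clique(range(n))                         # \hat{1}: the complete graph

@lru_cache(maxsize=None)
def covers(G):
    above = [H for H in elements if G < H]
    return tuple(H for H in above if not any(G < M < H for M in above))

@lru_cache(maxsize=None)
def chain_lengths(G):
    # lengths of all maximal chains from G up to \hat{1}
    if G == top:
        return frozenset({0})
    return frozenset(1 + l for H in covers(G) for l in chain_lengths(H))

lengths = chain_lengths(frozenset())
expected = {(n - 2) - t * (k - 3) for t in range(1, (n - 1) // (k - 1) + 1)}
assert lengths == expected
print(sorted(lengths))  # [3, 4]
```

The two lengths correspond to $t=1$ (the all-cliques chain) and $t=2$ (a chain passing through a pair of glued $4$-cliques), matching $(n-2)-t(k-3)=5-t$.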
The above result yields some nontrivial information about the topology of $\Delta_{n,k}^2$. For instance, part (i) shows that the order complex of $\ov{\Sigma_{n,2}}$ is pure of dimension $2n-5$. With Theorem \ref{dnksnk} this implies that the homology of $\Delta_{n,2}^2$ vanishes in dimensions greater than $2n-5$ and is free in dimension $2n-5$. Of course, in this case we already have more precise knowledge from Theorem \ref{main}. By similar reasoning we can conclude the following new information about the $k=3$ case from part (ii) of Theorem \ref{chsnk}.
\begin{theorem} $\widetilde{H}_i(\Delta_{n,3}^2)=0$ for all $i>n-4$, and $\widetilde{H}_{n-4}(\Delta_{n,3}^2)$ is free. \label{snk3} \end{theorem}
In the remaining cases the following can be deduced.
\begin{theorem} Assume that $k>3$. \begin{itemize} \item[(i)] $\widetilde{H}_i(\Delta_{n,k}^2)=0$ if $i>n-k-1$ or $n-k-1>i>n-2k+2$. \item[(ii)] $\widetilde{H}_{n-k-1}(\Delta_{n,k}^2)$ is free of dimension ${{n-1} \choose {k-1}}$. \item[(iii)] If $n<2k-1$ then $\Delta_{n,k}^2$ has the homotopy type of a wedge of ${{n-1} \choose {k-1}}$ spheres of dimension $n-k-1$. \item[(iv)] If $n=2k-1$ then $\Delta_{n,k}^2$ has the homotopy type of a wedge of spheres. This wedge consists of ${{n-1} \choose {k-1}}$ $(n-k-1)$-spheres and $\frac{1}{2}n{{n-1} \choose {k-1}}$ $1$-spheres. \end{itemize} \label{snk4} \end{theorem}
\begin{Proof} We use Theorem \ref{dnksnk} without reference throughout the proof. Claim (i) follows immediately from Theorem \ref{chsnk}(iii),(iv),(v). By Theorem \ref{chsnk}(v), the subposet of $\Sigma_{n,k}$ generated by chains of length $n-k+1$ is isomorphic to the poset obtained by removing all sets of sizes $1,2,\ldots,k-1$ from the Boolean algebra $B_n$. Claims (ii) and (iii) now follow immediately from the rank selection results in \cite{Bjo91,Sta82}, along with Theorem \ref{chsnk}(iii),(iv).
If $n=2k-1$, let $W$ be the set of vertices in $\Delta(\ov{\Sigma_{n,k}})$ corresponding to graphs which consist of two $k$-cliques intersecting in a single vertex, and let $\Delta_0$ be the complex obtained by removing all simplices containing an element of $W$ from $\Delta(\ov{\Sigma_{n,k}})$. Then $\Delta_0$ is the order complex of the subposet of $\ov{\Sigma_{n,k}}$ generated by chains of length $n-k+1$, and is therefore homotopy equivalent to a wedge of ${{n-1} \choose {k-1}}$ $(n-k-1)$-spheres, as above. If $G \in \Sigma_{n,k}$ corresponds to an element $w \in W$, then by Lemma \ref{covsnk}, $(\ov{\Sigma_{n,k}})_{<G}$ consists of two graphs which contain a $k$-clique and $n-k$ isolated vertices. It follows that $link_{\Delta(\ov{\Sigma_{n,k}})}(w)$ consists of two vertices in $\Delta_0$. There is a homotopy equivalence between $\Delta_0$ and a wedge of ${{n-1} \choose {k-1}}$ $(n-k-1)$-spheres which maps
$\cup_{w \in W}link_{\Delta(\ov{\Sigma_{n,k}})}(w)$ to the wedge point. It is easy to see that $|W|=\frac{1}{2}n{{n-1} \choose {k-1}}$, and claim (iv) follows. \end{Proof}
The homology of ${\Delta}^{2}_{n,3}$ has been computed for $4 \leq n \leq 7$. It is concentrated in dimension $n-4$, see Table 2.
$$
\begin{array}{|c||c|c|c|c|} \hline n \backslash i & 0 & 1 & 2 & 3 \\ \hline \hline 2 & 0 & 0 & 0 & 0 \\ \hline 3 & 0 & 0 & 0 & 0 \\ \hline 4 & {\mathbb Z}^3 & 0 & 0 & 0 \\ \hline 5 & 0 & {\mathbb Z}^{21} & 0 & 0 \\ \hline 6 & 0 & 0 & {\mathbb Z}^{180} & 0 \\ \hline 7 & 0 & 0 & 0 & {\mathbb Z}^{2010} \\ \hline \end{array} $$
\centerline{{\sf Table 2:} Homology groups $\widetilde{H}_i(\Delta_{n,3}^2)$}
We believe that the concentration of homology in dimension $n-4$ is true in general, see the discussion in Section 9.2. One approach to proving this could be via the following lemma.
Recall that a graph is called a {\em forest} if it is free of circuits. This is equivalent to saying that every block in its block decomposition has at most two vertices.
\begin{lemma}
Suppose that the order complex of the open interval $(G,\hat{1})$ in $\Sigma_{n,2}$ is topologically $(n-5)$-connected for every forest $G$. Then ${\Delta}_{n,3}^2$ is homotopy equivalent to a wedge of $(n-4)$-spheres. \label{sn2sn3} \end{lemma}
\begin{Proof} By Theorem \ref{dnksnk} we may replace ${\Delta}_{n,3}^2$ by $\Delta(\ov{\Sigma_{n,3}})$, which by Theorem \ref{chsnk}(ii) is $(n-4)$-dimensional. Hence by known reductions (see \cite[(9.19)]{Bjo91}) it suffices to prove that $\Delta(\ov{\Sigma_{n,3}})$ is $(n-5)$-connected. By Theorem \ref{main} we know that $\ov{\Sigma_{n,2}}$ is $(n-5)$-connected, and we will show how to transfer this connectivity to the subposet $\ov{\Sigma_{n,3}}$ under the given hypothesis.
Let $P_n$ be the subposet of $\ov{\Sigma_{n,2}}$ consisting of all elements which contain at least one block of size greater than two. The elements in $\ov{\Sigma_{n,2}} \setminus P_n$ are the forests $G$, so a version of Quillen's fiber lemma (see \cite[Lemma 11.12]{Bjo91}) together with our hypothesis about the intervals $(G,\hat{1})$ shows that $P_n$ is $(n-5)$-connected.
Now, note that $\ov{\Sigma_{n,3}} \subseteq P_n$. Let $\rho: P_n \rightarrow \ov{\Sigma_{n,3}}$ be the map which sends $H \in P_n$ to the subgraph obtained by removing from $H$ all edges which are not contained in a block of size at least three. Then $\rho$ is a lower closure operator on $P_n$ (that is, a closure operator on $P_n$ with the opposite order) whose image is $\ov{\Sigma_{n,3}}$. Hence, by Corollary \ref{closure} $\ov{\Sigma_{n,3}}$ is $(n-5)$-connected also.
\end{Proof}
We end this section with an easy result which shows that the homology of ${\Delta}_{n,k}^2$ vanishes in all sufficiently low dimensions. For this the posets $\Sigma_{n,k}$ are not used.
\begin{lemma} Let $E$ be a $k$-graph on $n$ vertices. If $E$ is $2$-connected, then $E$ contains at least $\lceil\frac{n}{k-1} \rceil$ hyperedges. \label{skellem} \end{lemma}
\begin{Proof} If $E$ is $2$-connected then for each $k$-edge $X=\lp v_1,\ldots,v_k \rp \in E$ there exist at least two $v_i$ which are contained in some $k$-edge of $E$ other than $X$. It follows easily that \[
n \leq |E|(k-1). \] \end{Proof}
\begin{corollary} The complex ${\Delta}_{n,k}^2$ is topologically $(\lceil \frac{n}{k-1} \rceil -3)$-connected, implying that $\widetilde{H}_i(\Delta_{n,k}^2)=0$ for $i=0,1,\dots, \lceil \frac{n}{k-1} \rceil -3.$ \label{skelcor} \end{corollary}
\begin{Proof} Let $m=\lceil \frac{n}{k-1} \rceil -2$. By Lemma \ref{skellem} ${\Delta}_{n,k}^2$ contains the full $m$-skeleton on its vertex set. The corollary follows immediately. \end{Proof}
\section{The Euler characteristic of the complex ${\Delta}_{n,k}^2$} \label{notconnhyper}
In Section \ref{closed} we were able to determine the homotopy type of ${\Delta}_{n,k}^2$ for $k>3$ when $n \leq 2k-1$, but not for $k=3$, nor for $k>3$ and $n>2k-1$. Indeed, other than the connectivity result given in Corollary \ref{skelcor}, in the case $k=3$ our only information on the topology of ${\Delta}_{n,k}^2$ is given by Theorem \ref{snk3} (unless $n$ is very small), and in the case $k>3$ and $n>2k-1$ we were only able to determine the homology group $\widetilde{H}_{n-k-1}({\Delta}_{n,k}^2)$.
In this section, we investigate the reduced Euler characteristic of ${\Delta}_{n,k}^2$. We will determine a formula for the exponential generating function
$$M_k(x):=\sum_{n=1}^\infty \tilde{\chi}({\Delta}^{2}_{n,k})\frac{x^n}{n!},$$ for all $k \geq 2$. That formula is stated in the following theorem.
\begin{theorem}\label{th:mobius} For $k\ge 2$, we have \[M_k'\left(x\frac{p_{k-1}(x)}{p_k(x)}\right)= \ln\left(\frac{p_{k-1}(x)}{p_k(x)}\right), \] where $p_k(x):=1+x+\frac{x^2}{2!}+\cdots +\frac{x^{k-1}}{(k-1)!}$. \end{theorem}
Theorem \ref{th:mobius} gives another proof that $\tilde{\chi}({\Delta}^{2}_{n,2})=-(n-2)!$. It also implies \[M_3'(x)=\ln\left(\frac{-x(x-2)}{(x-1)+\sqrt{2-(x-1)^2}}\right), \] which gives the sequence $0,0,-1,3,-21,180,-2010,27090,-430290,\ldots$ for $\tilde \chi({\Delta}^2_{n,3})$, cf. Table 2. To obtain these corollaries set $y:=x\frac{p_{k-1}(x)}{p_k(x)}$ and solve for $x$ to get $x=\frac{y}{1-y}$ and $x=\frac{y-1+\sqrt{2-(y-1)^2}}{2-y},$ when $k=2$ and $3$ respectively.
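This expansion is easy to verify with exact rational arithmetic. The sketch below builds a small truncated-power-series toolkit; since numerator and denominator of the logarithm's argument both vanish at $x=0$, a common factor of $x$ is cancelled before dividing.

```python
from fractions import Fraction
from math import factorial

N = 10  # work with series truncated at x^N

def div(a, b):
    # series division a/b, requires b[0] != 0
    c = [Fraction(0)] * N
    for n in range(N):
        c[n] = (a[n] - sum(c[i] * b[n - i] for i in range(n))) / b[0]
    return c

def sqrt(a):
    # series square root, requires a[0] == 1
    b = [Fraction(0)] * N
    b[0] = Fraction(1)
    for n in range(1, N):
        b[n] = (a[n] - sum(b[i] * b[n - i] for i in range(1, n))) / 2
    return b

def log(u):
    # log of a series with u[0] == 1, by integrating u'/u
    du = [(i + 1) * u[i + 1] for i in range(N - 1)] + [Fraction(0)]
    q = div(du, u)
    return [Fraction(0)] + [q[n - 1] / n for n in range(1, N)]

pad = [Fraction(0)] * (N - 3)
num = [Fraction(0), Fraction(2), Fraction(-1)] + pad       # -x(x-2) = 2x - x^2
s = sqrt([Fraction(1), Fraction(2), Fraction(-1)] + pad)   # sqrt(2-(x-1)^2)
den = [s[0] - 1, s[1] + 1] + s[2:]                         # (x-1) + sqrt(...)
m3p = log(div(num[1:] + [Fraction(0)], den[1:] + [Fraction(0)]))

# chi~(Delta^2_{n,3}) = (n-1)! * [x^{n-1}] M_3'(x)
chi = [factorial(n - 1) * m3p[n - 1] for n in range(1, N)]
print(chi)  # 0, 0, -1, 3, -21, 180, -2010, 27090, -430290
```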
To prove Theorem \ref{th:mobius} we will use the posets $\Sigma_{n,k}$ defined in Section \ref{closed}. Note that $\tilde\chi({\Delta}^2_{n,k})=\mu_{\Sigma_{n,k}}(\hat 0,\hat 1)$. We will write $\mu_k(G)$ for $\mu_{\Sigma_{n,k}}(\hat 0,G)$ and $\mu_k(n)$ for $\mu_{\Sigma_{n,k}}(\hat 0,\hat 1)$.
Let $\Pi_{n,k}$ be the $k$-equal lattice, which is the lattice of partitions of $\left[ n \right]$ into subsets such that each subset has size one or at least $k$. Let $\tau_k(n):=\mu_{\Pi_{n,k}}(\hat 0,\hat 1)$ for $n\ge k$ and $\tau_k(2)=\cdots=\tau_k(k-1)=0$, but $\tau_k(1)=1$. The exponential generating function, $T_k(x):=\sum_{n=1}^\infty \tau_k(n)\frac{x^n}{n!}$, for the M\"obius function of $\Pi_{n,k}$ is known to be \[ T_k(x)=\ln(p_k(x)), \] where $p_k(x)$ is as above. It was first calculated in \cite{BL}.
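For $k=3$ the formula $T_3(x)=\ln(1+x+\frac{x^2}{2})$ can be tested against a direct computation of the M\"obius function of $\Pi_{n,3}$ for small $n$. A sketch (the brute force here builds the $3$-equal lattice explicitly from set partitions; it is an illustration, not part of the proof):

```python
from fractions import Fraction
from math import factorial

def set_partitions(elems):
    # all set partitions of the list elems
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for part in set_partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [part[i] | {first}] + part[i + 1:]
        yield part + [{first}]

def tau3_poset(n):
    # mu(0,1) of Pi_{n,3}, computed from the definition of the Moebius function
    P = [frozenset(frozenset(b) for b in p)
         for p in set_partitions(list(range(n)))
         if all(len(b) == 1 or len(b) >= 3 for b in p)]
    refines = lambda a, b: all(any(blk <= B for B in b) for blk in a)
    mu = {}
    for y in sorted(P, key=len, reverse=True):        # finer partitions first
        mu[y] = 1 if len(y) == n else -sum(
            mu[x] for x in mu if x != y and refines(x, y))
    return mu[frozenset({frozenset(range(n))})]

def tau3_series(n):
    # n! * [x^n] log(p_3(x)), with p_3(x) = 1 + x + x^2/2
    v = ([Fraction(0), Fraction(1), Fraction(1, 2)] + [Fraction(0)] * n)[:n + 1]
    powv = [Fraction(1)] + [Fraction(0)] * n          # v^0
    total = [Fraction(0)] * (n + 1)
    for j in range(1, n + 1):   # log(1+v) = sum_{j>=1} (-1)^(j-1) v^j / j
        powv = [sum(powv[i] * v[m - i] for i in range(m + 1))
                for m in range(n + 1)]
        total = [t + Fraction((-1) ** (j - 1), j) * c
                 for t, c in zip(total, powv)]
    return factorial(n) * total[n]

vals = [(n, tau3_poset(n), tau3_series(n)) for n in range(3, 7)]
print(vals)  # tau_3(n) = -1, 3, -6, 0 for n = 3..6, computed both ways
assert all(a == b for _, a, b in vals)
```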
Let ${\mathcal C}$ be the set of connected graphs in $\Sigma_{n,k}$. Now let \[ \sigma: \Sigma_{n,k}\setminus {\mathcal C} \longrightarrow \Pi_{n,k} \] be the function which maps a disconnected graph in $\Sigma_{n,k}$ to the partition determined by its connected components. It is easily seen that for each $x \in \Pi_{n,k}$, $\sigma^{-1}_{\leq}(x)$ has a unique maximum element and therefore has a contractible order complex. Thus by Proposition \ref{quillen} and the definition of the M\"obius function, we have \[\tau_k(n)=-\sum_{G\in \Sigma_{n,k}\setminus {\mathcal C}} \mu_k(G)= \sum_{G\in {\mathcal C}} \mu_k(G). \] Thus it suffices to concentrate on the connected but not $2$-connected graphs. First we need a simple lemma. \begin{lemma}
If $G \in {\mathcal C}$ has blocks $W_1,\ldots,W_r$ with $|W_i|=w_i$, then \[\mu_k(G)=\prod_{i=1}^{r}\mu_k(w_i). \] \label{mult} \end{lemma} \begin{Proof} By Lemma \ref{interv} the interval $\left[ \hat{0},G \right]$ is isomorphic to the product poset $\Sigma_{w_1,k} \times \ldots \times \Sigma_{w_r,k}$. The lemma now follows from the well known multiplicativity of the M\"obius function. \end{Proof}
Now define $\displaystyle{\alpha_k(n):=\sum_{G\in{\mathcal C}_1} \mu_k(G)}$, where ${\mathcal C}_1:=\{G\in {\mathcal C}: \mbox{$n$ is not a cutpoint of $G$}\}$. Also, set $\alpha_k(1)=\cdots=\alpha_k(k-1)=0$ and $A_k(x):=\displaystyle{\sum_{n=1}^\infty \alpha_k(n)\frac{x^n}{n!}}$.
\begin{lemma}\label{lm:Dk} We have \[ A_k'(x)=\ln\left(\frac{p_{k-1}(x)}{p_k(x)}\right). \] \end{lemma}
\begin{Proof} If $n$ is a cutpoint of $G \in {\mathcal C}$, let $P_1,\dots,P_t$ be the connected components of $G-n$. Then for each $i \in \left[ t \right]$, $n$ is not a cutpoint of the connected subgraph of $G$ induced on $P_i \cup \lp n \rp$. Using Lemma \ref{mult}, we get the recursive formula \[\tau_k(n)=\alpha_k(n)+\sum_{t=2}^{\left\lfloor\frac{n-1}{k-1}\right\rfloor}
\sum_{P_1|\ldots|P_t \in \Pi_{n-1}}\alpha_k(|P_1|+1)\cdots\alpha_k(|P_t|+1), \] where each summand in the double sum on the right counts the M\"obius functions of all elements $G \in {\mathcal C}$ such that $G-n$ has connected components $P_1,\ldots,P_t$. By the definition of $\alpha_k(n)$ we can rewrite this formula as \[
\tau_k(n)=\sum_{P_1|\ldots|P_t \in \Pi_{n-1}}
\alpha_k(|P_1|+1)\cdots\alpha_k(|P_t|+1). \] The exponential formula (Proposition \ref{exp}) and easy power series manipulations then give \[T_k'(x)=e^{A_k'(x)}. \] \end{Proof}
\noindent {\sf Proof of Theorem \ref{th:mobius}:} We will establish a recurrence relation for $\alpha_k$ involving $\mu_k$. Let $G \in {\mathcal C}_1$ and let $W$ be the block of $G$ containing $n$. Then $W$ is the unique maximal clique in $G$ which contains $n$. Let $S=W \setminus \lp n \rp$. Let $B_G$ be the graph on the blocks of $G$ defined in Proposition \ref{sep}. Let $T_1,\ldots,T_t$ be the connected components of $B_G-W$ and for each $T_i$ let $V_i$ be the set of vertices of $G$ which are contained in a block that is contained in $T_i$. For each $T_i$ there is a unique $j_i \in S$ such that the subgraph of $G$ induced on $V_i \cup \lp j_i \rp$ is a connected union of blocks of $G$. Conversely, any $G \in {\mathcal C}_1$ can be obtained by choosing $S$, $T_i$ and $j_i$ as above and then choosing graphs $H_i$ on $T_i \cup \lp j_i \rp$ such that each $H_i$ is either a clique of size at least $k$ or
isomorphic to a connected element of $\ov{\Sigma}_{|T_i|+1,k}$. These choices can all be made independently, so we get the recurrence relation \[
\alpha_k(n)=\sum_{S\uplus T=[n-1]}\mu_k(|S|+1)\sum_{T_1\uplus\ldots\uplus T_t=T}
(\alpha_k(|T_1|+1)|S|)\cdots(\alpha_k(|T_t|+1)|S|). \] We cannot apply the exponential formula directly at this point due to the factors
$|S|$ which appear on the right hand side of the above equation. However, we get
\[A_k'(x)=\sum_{n=1}^\infty \frac{\alpha_k(n)x^{n-1}}{(n-1)!}
\] \[=\sum_{n=1}^\infty\sum_{i=1}^{n-1}\binom{n-1}{i}\frac{\mu_k(i+1)x^i}{(n-1)!}
\sum_{T_1|\ldots|T_t \in \Pi_{n-i-1}}(\alpha_k(|T_1|+1)i)\cdots(\alpha_k(|T_t|+1)i) x^{n-i-1} \] \[=\sum_{i=1}^\infty\frac{\mu_k(i+1)x^i}{i!}\sum_{n=i+1}^\infty
\sum_{T_1|\ldots|T_t \in \Pi_{n-i-1}}
(\alpha_k(|T_1|+1)i)\cdots(\alpha_k(|T_t|+1)i)\frac{x^{n-i-1}}{(n-i-1)!} \] Applying the exponential formula, for each $i$ we get \[ \sum_{n=i+1}^\infty
\sum_{T_1|\ldots|T_t \in \Pi_{n-i-1}}
(\alpha_k(|T_1|+1)i)\cdots(\alpha_k(|T_t|+1)i)\frac{x^{n-i-1}}{(n-i-1)!} =e^{iA_k'(x)}. \] Thus \[A_k'(x)=\sum_{i=1}^\infty\frac{\mu_k(i+1)x^i}{i!}e^{iA_k'(x)}= M_k'\left(xe^{A_k'(x)}\right). \] The theorem now follows from Lemma \ref{lm:Dk}. \qed
\section{$(n-2)$-connected graphs and matching complexes} \label{match}
Before we proceed to consider $(n-2)$-connected graphs, let us state some simple but useful facts about the general situation. What do maximal $(i-1)$-separable graphs on the $n$ element set $[n]$ look like? It is clear that each such graph is described by an $(i-1)$-set $A$ and a partition $B \uplus C$ of $[n] \setminus A$ into two non-empty blocks $B$, $C$. The corresponding maximal $(i-1)$-separable graph is the complete graph on $[n]$ with all edges connecting $B$ and $C$ removed.
Now let $G$ be an $(n-2)$-connected graph on $n$ vertices, so $G \not\in \Delta_{n}^{n-2}$. Then by the above description of maximal $(n-3)$-separable graphs the induced subgraph on any three vertices must contain at least two edges. Thus the complementary graph (i.e., the graph containing precisely the edges that are not in $G$) is a matching. The graphs on $n$ vertices that are matchings form a simplicial complex, that we denote by $M_n$. We conclude the following.
\begin{proposition} The matching complex $M_n$ is Alexander dual (in the sense of Proposition \ref{alexander}) to the complex $\Delta_n^{n-2}$. In particular, there is an isomorphism $$\widetilde{H}_i(M_n) \cong \widetilde{H}^{\binom{n}{2}-i-3}(\Delta_n^{n-2}).$$ \end{proposition}
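The equivalence behind this duality -- a graph on $n$ vertices is $(n-2)$-connected precisely when its complementary graph is a matching -- can be verified exhaustively for small $n$. A sketch for $n=5$, using the convention that a graph is $k$-connected when it has more than $k$ vertices and no cutset of fewer than $k$ vertices:

```python
from itertools import combinations

n = 5
verts = list(range(n))
all_edges = list(combinations(verts, 2))

def is_connected(vs, edges):
    # connectivity of the graph (vs, edges) by depth-first search;
    # edges is assumed to lie within vs
    vs = set(vs)
    start = next(iter(vs))
    seen, stack = {start}, [start]
    while stack:
        v = stack.pop()
        for a, b in edges:
            if v in (a, b):
                w = b if v == a else a
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
    return seen == vs

def is_k_connected(edges, k):
    # more than k vertices, and removing any fewer than k vertices
    # leaves a connected graph
    if n <= k:
        return False
    return all(
        is_connected(set(verts) - set(S),
                     [e for e in edges if not set(e) & set(S)])
        for r in range(k) for S in combinations(verts, r))

matchings = 0
for bits in range(1 << len(all_edges)):
    edges = [e for i, e in enumerate(all_edges) if bits >> i & 1]
    comp = [e for e in all_edges if e not in edges]
    comp_is_matching = all(
        not set(e) & set(f) for e, f in combinations(comp, 2))
    assert is_k_connected(edges, n - 2) == comp_is_matching
    matchings += comp_is_matching
print(matchings)  # 26 of the 1024 graphs on 5 vertices are 3-connected
```

The count $26$ equals the number of matchings on $5$ labeled vertices, as the duality predicts.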
The matching complexes $M_n$ have attracted attention for various reasons. In \cite[Theorem 4.1]{BLVZ92} the matching complex $M_n$ is shown to be topologically $( \lfloor \frac{n+1}{3} \rfloor-2)$-connected, which implies that $\widetilde {H}_i(M_n)=0$, for $i=0,\dots, \lfloor \frac{n+1}{3} \rfloor -2$. We thus get the following corollary.
\begin{corollary} The cohomology of $\Delta_n^{n-2}$ vanishes in dimensions $i \geq \binom{n}{2}-\lfloor \frac{n+1}{3} \rfloor -1$. \end{corollary}
The following table shows what we know about the homology groups $\widetilde{H}_i(M_n)$, based on the results of \cite{BLVZ92} for $n\leq 6$ and $n=8$, and our own computations.
$$
\begin{array}{|c||c|c|c|c|c|c|} \hline n \backslash i & 0 & 1 & 2 & 3 & 4 & 5 \\ \hline \hline 2 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline 3 & {\mathbb Z}^2 & 0 & 0 & 0 & 0 & 0 \\ \hline 4 & {\mathbb Z}^2 & 0 & 0 & 0 & 0 & 0 \\ \hline 5 & 0 & {\mathbb Z}^6 & 0 & 0 & 0 & 0 \\ \hline 6 & 0 & {\mathbb Z}^{16} & 0 & 0 & 0 & 0 \\ \hline 7 & 0 & \text{torsion}^1& {\mathbb Z}^{20} & 0 & 0 & 0 \\ \hline 8 & 0 & 0 & {\mathbb Z}^{132} & 0 & 0 & 0 \\ \hline 9 & 0 & 0 & {\mathbb Z}^{42} \oplus \text{torsion}^2 & {\mathbb Z}^{70} & 0 & 0 \\ \hline 10 & 0 & 0 & \text{torsion}^3 & {\mathbb Z}^{1216} & 0 & 0\\ \hline 11 & 0 & 0 & 0 & {\mathbb Z}^{1188} \oplus \text{torsion}^4 & {\mathbb Z}^{252} & 0\\ \hline 12 & 0 & 0 & 0 & \text{torsion}^5 & {\mathbb Z}^{12440} & 0\\ \hline \end{array} $$
\centerline{{\sf Table 3:} Homology groups $\widetilde{H}_i(M_n)$ of matching complexes}
\footnotetext[1]{There is ${\mathbb Z}_3$-torsion of rank $1$. No ${\mathbb Z}_p$-torsion for $p=2$, $5 \leq p \leq 17$.} \footnotetext[2]{There is ${\mathbb Z}_3$-torsion of rank $8$. No ${\mathbb Z}_p$-torsion for $p=2$, $5 \leq p \leq 17$.} \footnotetext[3]{There is ${\mathbb Z}_3$-torsion of rank $1$. No ${\mathbb Z}_p$-torsion for $p=2, 5, 7$.} \footnotetext[4]{There is ${\mathbb Z}_3$-torsion of rank $35$. No ${\mathbb Z}_p$-torsion for $p=2, 5, 7$.} \footnotetext[5]{There is ${\mathbb Z}_3$-torsion of rank $56$. No ${\mathbb Z}_p$-torsion for $p=2, 5, 7$.} We see that the complexes $\Delta_n^{n-2}$ can have torsion, and that this phenomenon begins with $\Delta_{7}^{5}$.
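Torsion does not affect the Euler characteristic, so the free ranks in Table 3 can be cross-checked against a direct signed count: a matching with $j$ edges is a $(j-1)$-dimensional face of $M_n$, and there are $\binom{n}{2j}(2j-1)!!$ of them. A sketch:

```python
from math import comb

def double_fact(m):
    # odd double factorial: (-1)!! = 1, (2j-1)!! = 1*3*...*(2j-1)
    out = 1
    for t in range(1, m + 1, 2):
        out *= t
    return out

def chi_matching(n):
    # reduced Euler characteristic of M_n from the face numbers
    return -sum((-1) ** j * comb(n, 2 * j) * double_fact(2 * j - 1)
                for j in range(n // 2 + 1))

# alternating sums of the free ranks listed in Table 3
from_table = {2: 0, 3: 2, 4: 2, 5: -6, 6: -16, 7: 20,
              8: 132, 9: 42 - 70, 10: -1216, 11: -1188 + 252, 12: 12440}
assert all(chi_matching(n) == val for n, val in from_table.items())
print("Table 3 is consistent with the signed matching count")
```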
\section{The Euler characteristic of the complex ${\Delta}_{n}^{n-3}$} \label{notn3}
Consider the complex $(\Delta_n^{n-3})^*$, the Alexander dual of $\Delta_n^{n-3}$, and the exponential generating function of its reduced Euler characteristic $$F_n^{n-3} (x) := \displaystyle{\sum_{n \geq 0} \widetilde{\chi}((\Delta_n^{n-3})^*) \frac{x^n}{n!}}.$$ The values of $\widetilde{\chi}((\Delta_n^{n-3})^*)$ in the degenerate cases $n\leq 3$ can be read off from an explicitly calculated expansion below. We will then express the reduced Euler characteristic of $\Delta_n^{n-3}$ in terms of this series.
\begin{theorem} We have that: $$F_n^{n-3}(x)= -\frac{{\rm exp}\left(\frac{x}{2(1+x)}+x-\frac{1}{4}x^2-\frac{1}{8}x^4\right)}{\sqrt{1+x}}.$$ The exponential generating function of the reduced Euler characteristic of $\Delta_n^{n-3}$ is then the sum of the real and imaginary parts of $-F_n^{n-3}(ix)$. \end{theorem} \Proof We will argue as we did for $\Delta_n^{n-2}$ in Section \ref{match}. If a graph $G$ is $(n-3)$-connected then the induced subgraph on any $4$ of its vertices contains either a vertex of degree $3$ or a path of length $3$. Thus in the complementary graph the induced subgraph on any $4$ vertices is either contained in a $3$-cycle or in a path of length $3$. In particular, there are no $4$-cycles and no vertices of degree $3$ in the complementary graph. Thus the connected components of the complementary graph are paths of any length and cycles of length different from $4$. Moreover, any graph in which every connected component is a cycle not of length $4$ or a path is the complement of an $(n-3)$-connected graph, so $(\Delta_{n}^{n-3})^*$ consists of all such graphs. There are exactly $n!/2$ different paths on $n\geq 2$ vertices and $(n-1)!/2$ different $n$-cycles (for $n \geq 3$) on an $n$-element vertex set. Now a direct application of the Exponential Formula \ref{exp}, keeping track of the sign $(-1)^{\#\mathrm{edges}}$ of each component, gives the result for the generating function of $(\Delta_n^{n-3})^*$. The remaining assertion follows from the fact that when passing from $\Delta_n^i$ to its Alexander dual the Euler characteristic changes by a factor of $-(-1)^{n(n-1)/2}$. \qed
The complex $(\Delta_{n}^{n-3})^*$ has maximal simplexes of dimensions $n-1$ and $n-2$ only. It is easily collapsible to a pure complex of dimension $n-2$. A Maple computation (see below) shows that neither the Euler characteristic of $(\Delta_n^{n-3})^*$ nor that of $\Delta_n^{n-3}$ alternates in sign, so the pure complexes are certainly not all Cohen-Macaulay. The calculation shows that
$$F_n^{n-3}(x)=-1-x+\frac{1}{4}x^4+\frac{1}{20}x^5+\frac{1}{20}x^6 -\frac{1}{28}x^7- \frac{1}{224}x^8-\frac{1}{480}x^9+O(x^{10}).$$
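This expansion can be reproduced with exact rational arithmetic. The sketch below expands the generating function with the exponential applied to the full sum $\frac{x}{2(1+x)}+x-\frac14 x^2-\frac18 x^4$, and multiplies the coefficient of $x^n$ by $n!$ to recover the reduced Euler characteristics themselves:

```python
from fractions import Fraction
from math import factorial

N = 8  # keep coefficients of x^0 .. x^7

def mul(a, b):
    # truncated product of two series
    c = [Fraction(0)] * N
    for i in range(N):
        for j in range(N - i):
            c[i + j] += a[i] * b[j]
    return c

def series_exp(w):
    # exp of a series with w[0] == 0, via E' = w'E
    e = [Fraction(1)] + [Fraction(0)] * (N - 1)
    for m in range(1, N):
        e[m] = sum(j * w[j] * e[m - j] for j in range(1, m + 1)) / m
    return e

def inv_sqrt_1px():
    # (1+x)^(-1/2) by the binomial series
    b = [Fraction(1)] + [Fraction(0)] * (N - 1)
    for i in range(1, N):
        b[i] = b[i - 1] * (Fraction(-1, 2) - (i - 1)) / i
    return b

# inner = x/(2(1+x)) + x - x^2/4 - x^4/8
inner = [Fraction(0)] + [Fraction((-1) ** (m - 1), 2) for m in range(1, N)]
inner[1] += 1
inner[2] += Fraction(-1, 4)
inner[4] += Fraction(-1, 8)

F = [-c for c in mul(series_exp(inner), inv_sqrt_1px())]
chi = [factorial(m) * F[m] for m in range(N)]
print(chi)  # [-1, -1, 0, 0, 6, 6, 36, -180]
```

In particular the coefficient $-\frac{1}{28}x^7$ corresponds to $\widetilde{\chi}((\Delta_7^{4})^*) = -7!/28 = -180$.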
We have also studied the slightly larger complex of graphs which are the disjoint union of cycles and paths of any lengths (i.e., graphs with maximum vertex degree at most $2$). This is also a reasonable generalization of the matching complex, which is the complex of all graphs with maximum vertex degree at most $1$. The Euler characteristic of the corresponding Alexander dual has almost the same generating function as for $(\Delta_n^{n-3})^*$. That generating function is
$$-\frac{{\rm exp}\left(\frac{x}{2(1+x)}+x-\frac{1}{4}x^2\right)}{\sqrt{1+x}} = $$
$$-1-x+\frac{1}{8}x^4-\frac{3}{40}x^5+\frac{1}{20}x^6-\frac{1}{28}x^7- \frac{17}{896}x^8-\frac{7}{1920}x^9-\frac{23}{2400}x^{10}+ O(x^{11}).$$
The maximal simplices in the cycles-and-paths complex have dimension $n-1$ or $n-2$, and the complex can be collapsed to a pure $(n-2)$-dimensional complex. The generating function for the Euler characteristic shows that these collapsed complexes are not all Cohen-Macaulay.
\section{Final Remarks}
\subsection{Homology and Topology of $\Delta_{n}^i$}
The results and computations presented in this paper suggest that there is probably no uniform statement that covers the topology of all complexes $\Delta_n^i$. However, for $i \leq 2$ the homotopy type calculations for $\Delta_n^i$ give very nice answers. This is consistent with the graph theoretical study of not $i$-connected graphs, where there is a good structure theory only when $i \leq 3$ (see for example Chapter 6 of Lov\'asz' book \cite{Lov93}, or the survey article by Oxley \cite{Oxl96} and the references therein).
As mentioned, there is a structure theory for $3$-connected graphs. The $3$-connected graphs on $n$ vertices for which neither the deletion nor the contraction of an edge leads to a $3$-connected graph were classified by Tutte (see Theorem 2.3 in \cite{Oxl96}) as ``wheels'' and ``whirls,'' both having $2n-2$ edges. Note that this does not provide a characterization of the deletion-minimally $3$-connected graphs (the minimal non-faces of $\Delta_n^3$); however, it does show that no graph with fewer than $2n-2$ edges can be $3$-connected. Hence, $\Delta_n^3$ has a complete $(2n-4)$-skeleton, which shows that $\widetilde{H}_i(\Delta_n^3)=0$ for $i<2n-4$. This fact together with the following table leads to an interesting conjecture.
$$
\begin{array}{|c||c|c|c|c|c|c|c|c|c|c|c|} \hline n \backslash i & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline \hline 3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ \hline 4 & 0 & 0 & 0 & 0 & {\mathbb Z}^1 & 0 & 0 & 0 & 0 & 0 & 0\\ \hline 5 & 0 & 0 & 0 & 0 & 0 & 0 & {\mathbb Z}^6 & 0 & 0 & 0 & 0\\ \hline 6 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & {\mathbb Z}^{36} & 0 & 0\\ \hline 7 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & {\mathbb Z}^{240} \\ \hline \end{array} $$
\centerline{{\sf Table 4:} Homology groups $\widetilde{H}_i(\Delta_n^3)$}
\noindent {\sf Conjecture 9.1:} ${\Delta}_n^3$ has the homotopy type of a wedge of $\frac{(n-3)(n-2)!}{2}$ spheres of dimension $2n-4$.
For general $i$ the situation (concerning ${\Delta}_n^i$) seems to be far more complicated. However, we would like to remark that for $i=2,3$ the known Betti numbers are (up to sign) the Lah-numbers $L_{n-2,1}$ and $L_{n-2,2}$ (see \cite[p.165-166]{Com70}). This coincidence unfortunately fails for $i=4$, which is easily seen by comparing $L_{n-2,3}$ with $\widetilde{\chi}(\Delta_n^4)$ for $n=6$. For $i>3$ no good structure theory for $i$-connected graphs is known, and the results of Section \ref{match} on $(n-2)$-connected graphs indicate that the topology of $\Delta_n^i$ will not behave nicely for all $i$. Nevertheless, the Alexander Duality with the matching complexes $M_n$ encourages a closer look at the complexes $\Delta_n^{n-2}$. Surprisingly, the prime $3$ seems to play a special role in the topology of these complexes. We have not detected $p$-torsion in the homology of $\Delta_n^{n-2}$ for any prime $p \neq 3$.
\noindent {\sf Question 9.2:} Does the homology of $M_n$ have $p$-torsion for any prime $p \neq 3$?
Even more surprising is the fact that the same prime $3$ seems to play an analogous role for the matching complexes $M_{n,n}$ on complete bipartite graphs $K_{n,n}$, also called chessboard complexes (see \cite{BLVZ92}).
$$
\begin{array}{|c||c|c|c|c|c|c|c|} \hline n \backslash i & 0 & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline \hline 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline 2 & {\mathbb Z} & 0 & 0 & 0 & 0 & 0 & 0\\ \hline 3 & 0 & {\mathbb Z}^4 & 0 & 0 & 0 & 0 & 0 \\ \hline 4 & 0 & 0 & {\mathbb Z}^{15} & 0 & 0 & 0 & 0 \\ \hline 5 & 0 & 0 & {\mathbb Z}_3 & {\mathbb Z}^{56} & 0 & 0 & 0 \\ \hline 6 & 0 & 0 & 0 & {\mathbb Z}^{25}\oplus \text{torsion}^1 & {\mathbb Z}^{210} & 0 &0 \\ \hline 7 & 0 & 0 & 0 & 0 & {\mathbb Z}^{588} \oplus \text{torsion}^2 & {\mathbb Z}^{792} & 0 \\ \hline 8 & 0 & 0 & 0 & 0 & \text{torsion}^3 & ? & ? \\ \hline \end{array} $$
\footnotetext[1]{There is ${\mathbb Z}_3$-torsion of rank $10$. No ${\mathbb Z}_p$-torsion for $p=2,5,7$} \footnotetext[2]{There is ${\mathbb Z}_3$-torsion of rank $66$. No ${\mathbb Z}_p$-torsion for $p=2,5,7$} \footnotetext[3]{There is ${\mathbb Z}_3$-torsion of rank $1$. This group is finite according to \cite{FriHan96}.}
\centerline{{\sf Table 5:} Homology groups $\widetilde{H}_i(M_{n,n})$ for the bipartite matching complexes}
It is easy to see that $M_{n,n}$ collapses to an $(n-2)$-dimensional complex. Hence homology is free in dimension $n-2$ and vanishes in higher dimensions. Looking at the table and the footnotes, the following question naturally arises.
\noindent {\sf Question 9.3:} Is the homology of $M_{n,n}$ free except for $3$-torsion ?
\subsection{Homology and Topology of $\Delta_{n,k}^2$} For $k$-graph complexes the problem of determining the topology of the complex of separable $k$-graphs (the $i=2$ case) seems to be the most important. The complexes $\Delta_{n,k}^2$ play the same role in the study of spaces of ``knots'' for which $k$-fold self-intersections are forbidden as the complexes $\Delta_n^2$ play for ordinary knots.
\noindent {\sf Question 9.4:} What is the homology and homotopy type of $\Delta_{n,k}^2$ ?
The evidence from Section \ref{closed} leads us to anticipate the following answer for $k=3$.
\noindent {\sf Conjecture 9.5:} ${\Delta}_{n,3}^2$ is homotopy equivalent to a wedge of $(n-4)$-spheres.
A natural approach to this question is through further combinatorial study of the lattices $\Sigma_{n,k}$ defined in Section \ref{closed}.
\noindent {\sf Conjecture 9.6:} The lattice $\Sigma_{n,k}$ is shellable.
This is open in all cases, except for the somewhat degenerate cases $n<2k-1$ when $\Sigma_{n,k}$ is a truncated Boolean algebra. If Conjecture 9.6 were verified for $k=2$ it would reprove Theorem \ref{main}, and it would via Lemma \ref{sn2sn3} imply the truth of Conjecture 9.5. If Conjecture 9.6 were verified for $k=3$ it would also imply the truth of Conjecture 9.5. If Conjecture 9.6 were verified for $k>3$ it would via Theorem \ref{chsnk}(iv) and the results of \cite{BjoWac96} imply the truth of the following.
\noindent {\sf Conjecture 9.7:} If $k>3$ and $n \geq 2k-1$, then ${\Delta}_{n,k}^2$ is homotopy equivalent to a wedge of spheres. Furthermore, the dimensions of the spheres are precisely $n-4-t(k-3)$ for $1\leq t\leq \lfloor \frac{n-1}{k-1} \rfloor$.
\subsection{Generating series of Euler characteristics}
Let $F_i(x) = \displaystyle{\sum_{n \geq 0} \widetilde{\chi}(\Delta_{n}^{i}) \frac{x^n}{n!}}$ and $G_i(x) = \displaystyle{\sum_{n \geq 0} \widetilde{\chi}((\Delta_{n}^{n-i})^*) \frac{x^n}{n!}}$. By the results presented in this paper we get the following table:
\begin{center}
$\begin{array}{|c||c|c|} \hline i & F_i(x) & G_i(x) \\ \hline \hline 1 & {\rm log}(1+x) & -1 \\ \hline 2 & -(1-x){\rm log}(1-x)-1-x & -{\rm exp}(x-\frac{x^2}{2}) \\ \hline 3 & & -\frac{{\rm exp}(\frac{x}{2(1+x)}+x-\frac{1}{4}x^2-\frac{1}{8}x^4)}{\sqrt{1+x}} \\ \hline \end{array}$ \end{center}
\centerline{{\sf Table 6:} Generating functions of the Euler characteristics of $\Delta_n^i$ and $(\Delta_n^{n-i})^*$}
We cannot formulate a conjecture about the entries in this table for $i>3$. Nevertheless, even though the actual homology computation may be too difficult, the generating series may be computable for a few more cases. Assuming a positive answer to Conjecture 9.1, we get
$$F_3(x) = (x-\frac{3}{2}){\rm log}(1-x)+1-\frac{3}{2}x+\frac{1}{4}x^2.$$
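Under Conjecture 9.1 we must have $n!\,[x^n]F_3(x)=(n-3)(n-2)!/2$ for all $n\geq 3$; a quick sketch checking the displayed closed form:

```python
from fractions import Fraction
from math import factorial

N = 11
log1mx = [Fraction(0)] + [Fraction(-1, m) for m in range(1, N)]  # log(1-x)

# F_3(x) = (x - 3/2) log(1-x) + 1 - (3/2)x + (1/4)x^2
F3 = [Fraction(0)] * N
for m in range(N):
    F3[m] += Fraction(-3, 2) * log1mx[m]     # -3/2 * log(1-x)
    if m + 1 < N:
        F3[m + 1] += log1mx[m]               # x * log(1-x)
F3[0] += 1
F3[1] += Fraction(-3, 2)
F3[2] += Fraction(1, 4)

chis = [factorial(n) * F3[n] for n in range(3, N)]
print(chis)  # [0, 1, 6, 36, 240, 1800, 15120, 141120]
assert chis == [(n - 3) * factorial(n - 2) // 2 for n in range(3, N)]
```

The values $1, 6, 36, 240$ for $n=4,\dots,7$ agree with the ranks in Table 4.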
\subsection{The representation of the symmetric group} \label{questionrep}
All complexes $\Delta_{n,k}^i$ are invariant under the action of the symmetric group $S_n$. This action determines a linear representation of $S_n$ on each homology group $\widetilde{H}_j(\Delta_{n,k}^i)$. For fixed $n,k$ and $i$, the alternating sum of the characters of the given representations is a virtual character of $S_n$
that we denote by $\omega_{n,k}^i$.
For $k=2$ and $i=1,2$ this is an actual character (up to sign) and satisfies $$(-1)^{n+1}\omega_{n+1,k}^2 = (sign_n \omega_{n,k}^1) \uparrow_{S_n}^{S_{n+1}} + sign_{n+1} \omega_{n+1,k}^1.$$
From looking at the dimensions of the homology modules it is clear that the analogous formula for $k \geq 3$ does not hold. We have also seen that the homology of $\Delta_{n,k}^i$ is not torsion-free in general. On the other hand, \cite{ReiRob97} has demonstrated that for the closely related matching complexes it is possible to determine the representations on the rational homology. Thus it is reasonable to ask:
\noindent {\sf Question 9.8:} What is the character of $S_n$ on each non-vanishing rational homology group of $\Delta_{n,k}^i$ ?
The character $\omega_n^2 = \omega_{n,2}^2$ determined in Section \ref{homrep} has recently appeared in several different areas of mathematics. First in the work of C. A. Robinson \& S. Whitehouse \cite{RobWhi94} and S. Whitehouse \cite{Whi94-2} on gamma-homology of algebras and later in work of E. Getzler \& M. Kapranov \cite{GetKap95} on operads, O. Mathieu \cite{Mat96} on hyperplane arrangements and symplectic geometry, in the work of M. Kontsevich \cite{Kon93} on Lie algebras and symplectic geometry, and in the work of P. Hanlon \cite{Han96}, P. Hanlon \& R.P. Stanley \cite{HanSta95} and S. Sundaram \cite{Sun96} in a combinatorial and representation-theoretic context. It seems mysterious that the same character pops up in so many seemingly unrelated places.
\noindent {\sf Question 9.9:} What are the deeper connections between the various contexts where the character $\omega_n^2$ appears ?
The analogous question for the character $\omega_{n,2}^1= sign_n \cdot \ell ie_n$ has been studied quite extensively (see for example \cite{Bar90,BarBer90,Reu93}), and for that case much detailed information is known. An important aspect of the work in \cite{Bar90} and \cite{BarBer90} is the construction of explicit bases for the modules under consideration. Thus a first step towards an answer to Question 9.9 could be a solution of the following problem.
\noindent {\sf Problem 9.10:} Describe a combinatorial basis for the homology of $\Delta_n^2$.
A positive answer to the shellability conjecture 9.6 for $k=2$ could via the induced shelling basis (see \cite{BjoWac96}) lead to progress on Problem 9.10.
\section{Notation and Tools}\label{notation}
In this short section we will summarize the main tools that we use in the study of the complexes $\Delta_{n,k}^i$. We refer the reader to the survey paper \cite{Bjo91} for more details and references.
Let $P$ be a finite partially ordered set -- {\it poset} for short. If $P$ has a unique minimum element $\hat{0}$ and a unique maximum element $\hat{1}$, we denote by $\ov{P}$ the {\it proper part} of $P$, that is the poset obtained by removing from $P$ the elements $\hat{0}$ and $\hat{1}$. By $\Delta(P)$ we denote the simplicial complex of all chains in ${P}$. The complex $\Delta(P)$ is called the {\it order complex} of $P$.
By convention we include the empty set $\emptyset$ in every simplicial complex. For any simplicial complex $\Delta$, the {\it face lattice} ${\mathcal Lat}(\Delta)$ is the poset of faces of $\Delta$, ordered by inclusion and enlarged by an additional greatest element $\hat{1}$. Then the order complex ${\Delta}(\ov{{\mathcal Lat}({\Delta})})$ of the proper part of ${\mathcal Lat}(\Delta)$ is homeomorphic to $\Delta$. Indeed, $\Delta(\ov{{\mathcal Lat}(\Delta)})$ is the barycentric subdivision of $\Delta$.
For a poset $P$ and $p \in P$ we denote by $P_{\leq p}$ the sub-poset
$\{ p'~|~p' \in P;~p' \leq p~\}$. The posets $P_{\geq p}$, $P_{< p}$ and $P_{> p}$ are analogously defined. For $p \leq p'$ in $P$ we denote by $[p,p']$ the {\it closed interval} $P_{\geq p} \cap P_{\leq p'}$ in $P$, and by $(p,p')$ the {\it open interval} $P_{>p} \cap P_{<p'}$.
For a poset $P$ we denote by $\mu_P$ the ${\mathbb Z}$-valued {\it M\"obius function} (see \cite{Sta86}), defined recursively on the intervals of $P$ by $\mu_P(x,x) = 1$ and $\mu_P(x,y) = \displaystyle{-\sum_{x \leq z < y} \mu_P(x,z)}$ if $x < y$.
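The defining recursion translates directly into code. As an illustrative sketch (in Python; the order relation is supplied as a function {\tt leq}, an assumed input not used elsewhere in this paper), one can tabulate $\mu_P$ on all intervals of a small poset and check it on the Boolean lattice $B_2$, where $\mu(x,y)=(-1)^{|y\setminus x|}$:

```python
from itertools import combinations

def mobius(elements, leq):
    """Mobius function of a finite poset, via the defining recursion:
    mu(x, x) = 1 and mu(x, y) = -sum_{x <= z < y} mu(x, z) for x < y."""
    mu = {}
    def mu_xy(x, y):
        if (x, y) not in mu:
            if x == y:
                mu[(x, y)] = 1
            else:
                mu[(x, y)] = -sum(mu_xy(x, z) for z in elements
                                  if leq(x, z) and leq(z, y) and z != y)
        return mu[(x, y)]
    for x in elements:
        for y in elements:
            if leq(x, y):
                mu_xy(x, y)
    return mu

# Boolean lattice B_2: subsets of {1, 2} ordered by inclusion
B2 = [frozenset(c) for k in range(3) for c in combinations((1, 2), k)]
mu = mobius(B2, lambda a, b: a <= b)
```

On $B_2$ this reproduces, for instance, $\mu(\emptyset,\{1\})=-1$ and $\mu(\emptyset,\{1,2\})=1$.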
By a map $f : P \rightarrow Q$ of posets we always mean a poset homomorphism (i.e., $x \leq y$ implies $f(x) \leq f(y)$). For an element $q \in Q$ we denote by $f^{-1}_{\leq}(q)$ the preimage of $Q_{\leq q}$ under $f$. The poset $f^{-1}_{\geq}(q)$ is analogously defined.
\begin{proposition}[Quillen Fiber Lemma {\rm \cite{Qui78}}] \label{fibre} Let $f : P \rightarrow Q$ be a map of posets. If $\Delta(f^{-1}_{\leq}(q))$ is contractible for all $q \in Q$ then $\Delta(P)$ and $\Delta(Q)$ are homotopy equivalent. \label{quillen} \end{proposition}
A map $f : P \rightarrow P$ from a poset to itself is called a {\it closure operator} if $f(x) \geq x$ and $f(f(x)) = f(x)$ for all $x\in P$. The Quillen Fiber Lemma immediately implies the fact that closure operators preserve the homotopy type.
\begin{corollary}[Closure Lemma]
\label{closure}Let $f : P \rightarrow P$ be a closure operator on the partially ordered set $P$. Then $\Delta(P)$ and $\Delta(f(P))$ are homotopy equivalent. \end{corollary}
If the poset $P$ is a {\it lattice} (i.e., suprema, denoted by ``$\vee$'', and infima, denoted by ``$\wedge$'', exist) then there is another tool for computing the homotopy type. Note that if $P$ is a finite lattice then there is a least element $\hat{0}$ and a largest element $\hat{1}$ in $P$. For an arbitrary element $p \in P$ we say that $a \in P$ is a {\it complement} of $p$ if $p \wedge a = \hat{0}$ and $p \vee a = \hat{1}$.
\begin{proposition}[Homotopy Complementation Formula {\rm \cite{BjoWal83}}~] \label{homcom} {\ } \begin{itemize} \item[(i)] Let $P$ be a poset and $A \subseteq P$ an antichain. Assume $\Delta(P \setminus A)$ is contractible. Then $\Delta(P)$ is homotopy equivalent to $$\bigvee_{x \in A} \Sigma \Bigl(\Delta(P_{<x}) * \Delta(P_{>x}) \Bigr).$$ \item[(ii)] Let $P$ be the proper part of a lattice and let $Co$ be the set of complements of some element $p \neq \hat{0}, \hat{1}$. Then $\Delta(P \setminus Co)$ is contractible. \end{itemize} \end{proposition}
In the formulation of the proposition $\bigvee$ denotes the wedge sum, $\Sigma$ denotes the suspension and $*$ denotes the join of topological spaces.
Our next tool is the combinatorial version of a standard duality theorem from
algebraic topology.
\begin{proposition}[Combinatorial Alexander Duality]
\label{alexander} Let $\Delta$ be a finite simplicial complex on vertex set $V$ and define
$$\Delta^* = \{ B \subseteq V~|~V \setminus B \not\in \Delta \}.$$ Then
$$\widetilde{H}_i(\Delta) \cong \widetilde{H}^{|V|-i-3}(\Delta^*).$$ \end{proposition}
This is derived as follows. The usual Alexander duality theorem (see e.g. Munkres \cite{Mun84}) says that $$\widetilde{H}_i(A) \cong \widetilde{H}^{n-i-1} (S^n \setminus A)$$ for any compact subset $A$ of the $n$-sphere $S^n$. In our situation, let $P = 2^V \setminus \{ \emptyset, V \}$. This truncated Boolean algebra is the proper part of the
face lattice of the boundary complex of a simplex, so $\Delta(P) \cong S^{|V|-2}$. Now let $A$ be the realization of ${\Delta}(\ov{{\mathcal Lat}({\Delta})})$ as a subspace of $\Delta(P)$. It is easy to see that
${\Delta}(P\setminus \ov{{\mathcal Lat}({\Delta})})$ is a strong deformation retract of $S^{|V|-2} \setminus A$, and since $P\setminus \ov{{\mathcal Lat}({\Delta})} \cong \ov{{\mathcal Lat}({\Delta}^*)}$ the result follows.
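For a small example of the duality, let $V = \{1,2,3\}$ and $\Delta = \{\emptyset, \{1\}, \{2\}\}$, a pair of points. A direct check of the definition gives $\Delta^* = \{\emptyset, \{1\}, \{2\}, \{3\}, \{1,2\}\}$, the edge $\{1,2\}$ together with the isolated vertex $3$. Both complexes have exactly two connected components, so $\widetilde{H}_0(\Delta) \cong {\mathbb Z} \cong \widetilde{H}^{0}(\Delta^*)$, in agreement with the proposition for $i=0$, since $|V|-i-3=0$.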
Finally, we recall a result from enumerative combinatorics. For a number sequence $(a_n)_{n \geq 0}$ the formal power series $\displaystyle{\sum_{n\geq 0} a_n \frac{x^n}{n!}}$ is called its {\it exponential generating function}.
\begin{proposition}[Exponential formula] \label{exp} Suppose that two functions $a,b:{\mathbb N} \longrightarrow {\mathbb Z}$ are given such that
\[b(n)=\sum_{S_1|\cdots| S_t \in \Pi_n}a(|S_1|)\cdots a(|S_t|),\quad n\ge 1, \] where the sum ranges over all set partitions of $[n]$ and $a(0)=0$, $b(0)=1$. Then the exponential generating functions $A(x):=\sum_{n=0}^\infty \frac{a(n)x^n}{n!}$ and $B(x):=\sum_{n=0}^\infty \frac{b(n)x^n}{n!}$ satisfy \[B(x)=e^{A(x)}. \] \end{proposition}
\noindent For the proof see \cite{Sta78,Sta98}.
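For instance, taking $a(n)=1$ for all $n\ge1$ gives $A(x)=e^x-1$, so $B(x)=e^{e^x-1}$ and $b(n)$ is the $n$th Bell number, the number of set partitions of $[n]$. A brute-force check of this instance (an illustrative Python sketch):

```python
from math import prod

def set_partitions(items):
    """Yield all set partitions of the list `items` as lists of blocks."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in set_partitions(rest):
        yield [[first]] + part               # `first` in its own block
        for i in range(len(part)):           # or joined to an existing block
            yield part[:i] + [[first] + part[i]] + part[i + 1:]

def b(n, a):
    """Right-hand side of the proposition: sum over set partitions of [n]
    of the product of a(|block|); b(0) = 1 by convention."""
    if n == 0:
        return 1
    return sum(prod(a(len(S)) for S in part)
               for part in set_partitions(list(range(1, n + 1))))

# a(n) = 1 for n >= 1 gives the Bell numbers 1, 1, 2, 5, 15, 52, ...
bell = [b(n, lambda k: 1) for n in range(6)]
```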
\end{document}
\begin{document}
\normalem \title{From local equilibrium to numerical PDE: Metropolis crystal surface dynamics in the rough scaling limit\thanks{\textbf{Funding}}}
\begin{abstract} We derive the PDE governing the hydrodynamic limit of a Metropolis rate crystal surface height process in the ``rough scaling'' regime introduced by Marzuola and Weare. The PDE takes the form of a continuity equation, and the expression for the current involves a numerically computed multiplicative correction term similar to a mobility. The correction accounts for the fact that, unusually, the local equilibrium distribution of the process is not a local Gibbs measure even though the global equilibrium distribution \emph{is} Gibbs. We give definitive numerical evidence of this fact, originally suggested in Gao et al., \emph{Pure and Applied Analysis} (2021). In that paper, an approximate PDE --- our PDE, but without the correction term --- was derived for the limit of the Metropolis rate process under the assumption of a local Gibbs distribution. Our main contribution is to present a numerical method to compute the corrected macroscopic current, which is given by a function of the third spatial derivative of the height profile. Our method exploits properties of the local equilibrium (LE) state of the third order finite difference process. We find that the LE state of this process is not only useful for deriving the PDE; it also enjoys nonstandard properties which are interesting in their own right. Namely, we demonstrate that the LE state is a ``rough LE'', a novel kind of LE state discovered in our recent work on an Arrhenius rate crystal surface process. \end{abstract}
\section{Introduction}
Consider a microscopic particle system in global equilibrium. Taking the viewpoint of equilibrium statistical mechanics, we can describe the system by its ensemble, a probability distribution over particle configurations. Typically, physical principles dictate that the distribution belong to some family of distributions, and a few average statistics of the system (e.g. the mean) determine the particular distribution in this family. For example, the speeds of particles in an idealized gas follow a distribution in the one-parameter family of Maxwell-Boltzmann distributions~\cite{statphys}. The average speed of the particles in the gas determines the parameter. Now consider an out-of-equilibrium particle system, evolving in time toward its global equilibrium state. If the system is \emph{locally} equilibrated, then an analogous principle applies. There is a single family of probability distributions governing the particle configurations in each space-time region of \emph{mesoscopic} extent, an intermediate scale between micro- and macroscopic. For example, the speeds of particles in a mesoscopic region of a \emph{locally} equilibrated gas can still be expected to be Maxwell-Boltzmann distributed. But unlike globally equilibrated systems, the average statistics determining the parameter now vary among these mesoscopic regions, and they also vary in time.
If we rescale time and space appropriately, a macroscopic equation of motion --- a partial differential equation --- emerges from the microscopic particle dynamics. The PDE governs the evolution in time of a limit of these local mesoscopic statistics, as they tend toward their single constant value in global equilibrium. Typically, knowing the parameterized family of local equilibrium (LE) distributions is sufficient to determine the PDE.
In this paper, we derive the PDE limit of a stochastic microscopic dynamics modeling particle diffusion on a crystal surface. The global equilibrium (GE) family for this particle system is the family of Gibbs measures. Unusually, however, the LE family is \emph{not} made up of local Gibbs measures, as we show numerically. In other words, the LE family is \emph{not} the same as the GE family. Moreover, the LE family is not known explicitly at all. As such, it is impossible to obtain an exact analytic expression for the PDE, and we opt for a numerical approach instead. Our approach to derive the PDE exploits fundamental properties of LE states without needing to know the LE family explicitly. Although understanding why the LE family is not local Gibbs is an interesting and important problem, it is beyond the scope of this paper.
\subsection{Background and Main Contribution} We now give some background on the crystal surface model and on related works. We model the crystal surface as a collection of particles arranged in a height profile on a one-dimensional periodic lattice. At lattice site $i$, the height $h_i$ represents the number of particles which are stacked in a column above (positive ``height'') or below (negative ``height'') the lattice, which represents height zero.
The particle dynamics is governed by a Markov jump process $\mathbf h_N(t)\in\mathbb Z^N$ (where $N$ is the lattice size), in which the topmost particles jump to neighboring columns with certain jump rates. Jumps which lower the surface energy have higher rates than jumps which increase it, where the energetically optimal configuration is a flat surface. In this context, the micro-to-macro limit is known as a ``hydrodynamic" limit, obtained by scaling height, time, and lattice width with $N$ according to a certain scaling regime, and taking $N\to\infty$. The limit is a macroscopic height profile $h(t,x)$, where the spatial domain is the unit torus.
Here, we assume the microscopic dynamics evolves under ``Metropolis-type'' jump rates, which are functions only of the difference in the surface energy before and after the jump. We study the macroscopic limit in a nonstandard, so-called ``rough'' scaling regime.
The rough scaling regime was introduced in~\cite{mw-krug} to study the limit of the better-known, \emph{Arrhenius} jump rate crystal surface dynamics. Marzuola and Weare show that the PDE of the Arrhenius process in this rough scaling limit takes the form $h_t = \partial_{xx}\,\mathrm{exp}(-h_{xx})$. Meanwhile, the PDE governing the more standard scaling limit (which the authors call the ``smooth scaling regime'') is essentially given by linearizing the exponential in the rough PDE. Thus, the rough PDE describes surfaces $h$ in which $|h_{xx}|$ is large (and cannot be linearized), so that $h$ is ``rapidly varying'' and hence ``rough''. For a discussion of the physical relevance of the rough scaling regime, see e.g.~\cite{gao2020_arrPDE,asymmetry,mw-krug}.
Similarly, the rough scaling limit for the Metropolis rate process is the limit which leads to a PDE with exponential nonlinearity. The Metropolis rough scaling limit was first studied in~\cite{gao2020}. In this work, the authors assume the LE distribution can be approximated by a local Gibbs measure to derive the approximate PDE \begin{equation}\label{introPDE-gibbs}h_t = -\partial_x\left(\sinh(Kh_{xxx})\right),\end{equation} where $K$ is inverse temperature. That the PDE takes the form of a continuity equation naturally follows from the microscopic dynamics, which preserves total sum of heights (see~\eqref{evoln-h} for the corresponding microscopic continuity equation). However, the current $\hat J_\gibbs(h_{xxx}):=\sinh(Kh_{xxx})$ is not the true macroscopic current. Indeed, the authors of~\cite{gao2020} observe a discrepancy between the solution to the PDE~\eqref{introPDE-gibbs} and the microscopic process $\mathbf h_N$, which does not vanish as one increases $N$.
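To illustrate the continuity-equation structure of~\eqref{introPDE-gibbs}, the following sketch (our minimal illustration, not the numerical scheme of~\cite{gao2020}; the grid size, time step, and initial datum are arbitrary choices) integrates the approximate PDE with an explicit conservative scheme on a periodic grid, so the total mass $\sum_j h_j$ is preserved up to floating-point error:

```python
import math

def step(h, K, dt, dx):
    """One explicit Euler step of h_t = -d/dx sinh(K h_xxx), written in
    conservative (flux) form on a periodic grid, so mass is conserved:
    h_j <- h_j - (dt/dx) * (F_{j+1/2} - F_{j-1/2})."""
    N = len(h)
    # discrete third derivative at the interface j+1/2, stencil {j-1,...,j+2}
    w = [(h[(j + 2) % N] - 3 * h[(j + 1) % N] + 3 * h[j] - h[(j - 1) % N]) / dx**3
         for j in range(N)]
    F = [math.sinh(K * wj) for wj in w]        # interface flux F_{j+1/2}
    return [h[j] - dt / dx * (F[j] - F[j - 1]) for j in range(N)]

N, K = 32, 1.0
dx = 1.0 / N
dt = 5e-9          # far below the ~dx^4 explicit stability threshold
h = [0.01 * math.sin(2 * math.pi * j * dx) for j in range(N)]
mass0 = sum(h)
for _ in range(200):
    h = step(h, K, dt, dx)
```

Because the flux is evaluated at cell interfaces and differenced, conservation holds by telescoping regardless of the nonlinearity.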
Our main contribution in this paper is a numerical method which removes this discrepancy. Namely, we compute a multiplicative correction $\sigma$ to the current, to obtain the true current $\hat J=\sigma\times\hat J_\gibbs$, and the true PDE \begin{equation}\label{introPDE}h_t = -\partial_x\left(\sigma(h_{xxx})\sinh(Kh_{xxx})\right).\end{equation} The method uses sample runs of the microscopic process generated from only a single initial datum, and requires evolving the process in time only sufficiently long to reach local, rather than global, equilibrium. The function $\sigma$ is $K$-dependent and converges to $1$ as $K\downarrow0$. This shows that the local Gibbs approximation becomes accurate in the small $K$ limit. Using observed qualitative properties of $\sigma$ (e.g. that it is even and increasing when $h_{xxx}>0$), we also extend the results of~\cite{gao2020} on properties of the PDE~\eqref{introPDE-gibbs}. Namely, we show that strong solutions of~\eqref{introPDE} exist, are unique, and enjoy the same regularity properties as those shown for solutions to~\eqref{introPDE-gibbs}.
The way $\sigma$ appears in~\eqref{introPDE} bears some resemblance to a mobility: a medium-dependent constant of proportionality determining the current in a diffusion. The similarity is somewhat superficial, however, because the PDE~\eqref{introPDE} is not a standard diffusion. Indeed, the current does not follow Fick's law, since it is not proportional to the gradient of an appropriate potential. Nevertheless, we mention this similarity because like our correction $\sigma$, mobilities arising in the continuum limit of microscopic processes often cannot be computed explicitly. There is a vast body of work on computing mobilities numerically, and we will not attempt to review it here. The closest such work to ours that we are aware of, in terms of similarity of the physical model, is~\cite{krug1995adatom}. The authors study the smooth scaling PDE limit of a microscopic crystal surface jump process, in which the rates are also of Metropolis type. The PDE limit takes the form of a standard diffusion, which allows the authors to use linear response theory to compute the slope-dependent mobility. In addition, we mention the work~\cite{Embacher2018} (see also the references therein). This work is similar to ours in that the authors' approach to compute the mobility only requires simulating the process until local equilibrium.
Since the PDE~\eqref{introPDE} is not a standard diffusion, methods for computing mobilities such as linear response are not available to us in computing the factor $\sigma$. Instead we develop an alternative numerical approach. It is borne out of the LE framework of our recent work~\cite{kat21} on rough-scaled processes, as we now explain.
\subsection{Companion Finite Difference Process and Rough LE} Note that the macroscopic current in the PDE~\eqref{introPDE} is a function of $h_{xxx}$. This is a reflection of the fact that (1) the jump rates are functions of the third order finite difference $w_i:=h_{i+2}-3h_{i+1}+3h_i-h_{i-1}$, and (2) in the rough scaling regime, $w_i$ has order $O(1)$ as $N\to\infty$. This consideration motivates us to consider the companion process $\mathbf w_N(t)=(w_1(t),\dots, w_N(t))$ --- in particular, the distribution ${\mathrm{Law}}(\mathbf w_N(t))$ in local equilibrium --- as the central object of study.
A finite difference (FD) process also plays a central role in our previous work~\cite{kat21}, in which we take a closer look at the Arrhenius rate process in the rough scaling regime. There, we show that the PDE governing the hydrodynamic limit $h$ is determined by the LE distribution of the second order FDs of the heights. We will call this second order FD process $\mathbf w_N^{\text{arr}}$, for comparison with the third order FD $\mathbf w_N$ of the Metropolis height process.
We show in~\cite{kat21} that $\mathbf w_N^{\text{arr}}$ has a novel, ``rough'' LE state. The defining characteristic of the rough LE state is that the expected profile $(\mathbb E\, w_i^{\text{arr}})_{i=1}^N$ is rough in the sense that $|\mathbb E\, w_{i+1}^{\text{arr}} - \mathbb E\, w_i^{\text{arr}}|$ does not go to zero as $N$ increases. (The discovery of this rough profile retroactively lends a second meaning to the name ``rough scaling regime'', which was coined earlier for different reasons.) Moreover, the distributions ${\mathrm{Law}}(w_i^{\text{arr}})$ lack a crucially important feature of more standard particle systems: belonging to a mean-parameterized measure family. However, we show that the probability distributions given by mesoscopic window averages of the single site marginals \emph{do} have this property, and their means \emph{do} vary smoothly across space.
For the Metropolis process, we do not have explicit access to ${\mathrm{Law}}(\mathbf w_N)$. However, we will show empirically that $\mathbf w_N$ also has a rough LE state, confirming that this new kind of LE is not an isolated phenomenon. Building off the work in~\cite{kat21}, our numerical method for computing $\sigma$ exploits the crucial fact that upon mesoscopic window averaging, the LE state is described by \emph{some} mean-parameterized family. We will not need to know \emph{which} family this is.
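Operationally, the mesoscopic window average of the single-site marginals is simple to form from sample profiles. The sketch below is illustrative only: it runs on synthetic integer data with a slowly varying mean, standing in for draws of $\mathbf w_N$ in local equilibrium:

```python
import random
from collections import Counter

def window_pmf(samples, site, M):
    """Empirical pmf of the window average: pool the values seen at the
    2M+1 sites centered at `site` (periodic lattice), across all sample
    profiles, and normalize."""
    N = len(samples[0])
    counts = Counter()
    for profile in samples:
        for j in range(site - M, site + M + 1):
            counts[profile[j % N]] += 1
    total = sum(counts.values())
    return {v: c / total for v, c in counts.items()}

# synthetic stand-in data (NOT output of the crystal process): integer
# profiles whose mean varies slowly across the lattice
random.seed(2)
N, M = 100, 5
samples = [[random.randint(-2, 2) + (1 if j < N // 2 else -1) for j in range(N)]
           for _ in range(200)]
pmf = window_pmf(samples, 25, M)
mean = sum(v * p for v, p in pmf.items())
```

The window width $M$ should be mesoscopic, $1 \ll M \ll N$, so that the pooled pmf varies smoothly with the window location.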
The function $\hat J=\sigma\times\hat J_\gibbs$ will be defined in terms of properties of the LE state of $\mathbf w_N$. To show this same $\hat J$ is the macroscopic current arising in the $h$ PDE, we take two more steps. First, we prove that if $\mathbf w_N$ converges to a macroscopic $w$ in an appropriate scaling regime, then $w$ must be the solution to $w_t=-\partial_{xxxx}\hat J(w)$. Second, we prove that $\mathbf h_N$ then has a unique limit $h$ in the rough scaling regime, where $h_{xxx}=w$ and $h$ is the solution to $h_t=-\partial_x\hat J(h_{xxx})$. Our proofs rely on two boundedness conditions which we confirm numerically, but are otherwise rigorous. These two steps were also informally described in~\cite{kat21} (in particular we did not check the boundedness conditions), but they served only as motivation for studying the LE properties in that paper.
\subsection*{Organization}The rest of the paper is organized as follows. In Section~\ref{sec:micro}, we introduce the Metropolis height process as well as the companion finite difference processes. In Section~\ref{sec:hydro}, we define the hydrodynamic limit in the rough scaling regime and motivate studying the limit of $\mathbf h_N$ via the limit of the third order FDs $\mathbf w_N$. In Section~\ref{sec:theory}, we formalize this approach, proving that the limit $h$ of $\mathbf h_N$ and the PDE governing it follow from the limit $w$ of $\mathbf w_N$ and the corresponding PDE. We also introduce a key property of rough LE states which makes our numerical method possible. In Section~\ref{sec:LE}, we review the concept of LE states, show $\mathbf w_N$ has a rough LE state, and explain how the macroscopic current $\hat J$ arises from LE properties. We also show the local Gibbs measure is not correct, so that we do not know the explicit form of the LE state and cannot compute $\hat J$ analytically. In Section~\ref{sec:num} we present our numerical method and confirm that we have derived the correct PDE for $h$. Finally, we analyze the PDE in Section~\ref{sec:PDE}, and make a few concluding remarks in Section~\ref{sec:conclude}.
\subsection*{Notation} For a sequence of vectors ${\bf v}_N\in\mathbb R^N$, $N=1,2,\dots$, we let the entries of ${\bf v}_N$ be ${\bf v}_N=(v_1,v_2,\dots, v_N)$, omitting the dependence of each $v_i$ on $N$ for brevity. We let ${\mathbb T}$ denote the unit interval with periodic boundary conditions (the unit torus). Next, let $m_1(\rho)$ denote the first moment of a probability mass function (pmf) $\rho$ on $\mathbb Z$, i.e. $m_1(\rho)=\sum_{n=-\infty}^\infty n\rho(n)$, and we write $$\rho(f) :=\sum_{n=-\infty}^\infty f(n)\rho(n)$$ to denote the expectation of the observable $f$ under $\rho$. We use the notation $$\{\rho[\lambda]\mid\lambda\in\mathbb R\}$$ to denote a family of pmfs on $\mathbb Z$, parameterized by $\lambda$; so that $\rho[\lambda](n)$ denotes the probability of $n$ under $\rho[\lambda]$, and $\rho[\lambda](f)$ denotes the expectation of $f$ under $\rho[\lambda]$. \subsection*{Acknowledgments} Thanks to Yuan Gao, Jian-Guo Liu, Jianfeng Lu, and Jeremy Marzuola, with whom the author discussed the possibility of generalizing the analysis of~\eqref{introPDE-gibbs} to that of the PDE~\eqref{introPDE}. Thanks also to Jonathan Weare and Jeremy Marzuola for their guidance and insights throughout the last five years, in which this project came to fruition. Finally, thank you to NYU High Performance Computing for access to computing resources.
\section{Preliminaries: Metropolis Rate Model}\label{sec:micro} In this section we will introduce the Metropolis rate crystal height process $\mathbf h_N$ as well as two companion processes. We will then explain the role of the companion processes in deriving the PDE governing the hydrodynamic limit of $\mathbf h_N$. Finally, we will briefly mention key features of the Arrhenius rate process studied in~\cite{kat21}. This process will repeatedly serve as a point of comparison to the Metropolis process. \subsection{Microscopic Dynamics}\label{subsec:micro} Let $\mathbf h_N(t) = (h_1(t),\dots, h_N(t))\in\mathbb Z^N$ be a Markov jump process, with $h_j(t)$ representing the discrete height at lattice site $j$ of the crystal, relative to some fixed arbitrarily chosen zero height level. Note that each $h_i$ depends on $N$ as well as $i$. The lattice is periodic, so that we identify $j$ with $j+mN$, $m\in\mathbb Z$. As we will soon see, the dynamics of the height process $\mathbf h_N(t)$ will be determined entirely by the companion \emph{slope} process $\mathbf z_N(t)= (z_1(t),\dots, z_N(t))$, where $$z_i(t) = h_{i+1}(t) - h_i(t).$$ We will use the letters ${\bf h}$ and ${\bf z}$, with no subscript, to denote an arbitrary height and slope configuration in $\mathbb Z^N$, respectively. The surface energy, or Hamiltonian, of a configuration ${\bf h}$ is given by
\begin{equation}\label{H} H({\bf h}) = \sum_{i=1}^N(h_{i+1}-h_i)^2 =\sum_{i=1}^Nz_i^2=H(\mathbf z).\end{equation}
Although the absolute value potential is the most physically relevant choice for modeling crystal surface energies, we choose a quadratic interaction potential because certain calculations can be done explicitly in this case. In the mathematical study of hydrodynamic limits of interfaces, it is standard to consider energies of the form $\sum_iV(|h_{i+1}-h_i|)$ for general convex $V$. See e.g.~\cite{Nishikawa,funaki1997motion}. We see that the energy of a height profile ${\bf h}$ is actually a function of the corresponding slope profile ${\bf z}$.
The process $\mathbf h_N(t)$ evolves through particle jumps between neighboring lattice sites which, on average, lower the surface energy. We represent a jump from lattice site $i$ to site $j$ by $\mathbf h\mapsto \mathbf h^{i,j}$, $|i-j|=1$. Here, $\mathbf h^{i,j}$ is the height profile such that \begin{equation}\label{hij} (h^{i,j})_k = \begin{cases} h_i-1,\quad &k=i,\\ h_j+1,\quad &k=j,\\ h_k,\quad&\text{otherwise.} \end{cases} \end{equation} Now, suppose $\mathbf h_N(t)={\bf h}$, so that $\mathbf z_N(t)={\bf z}$, the corresponding slope profile. If $\mathbf h_N(t)$ undergoes the transition ${\bf h}\mapsto{\bf h}^{i,j}$, then $\mathbf z_N(t)$ undergoes the transition ${\bf z}\mapsto{\bf z}^{i,j}$, where ${\bf z}^{i,j}$ is the slope profile corresponding to ${\bf h}^{i,j}$. Explicitly, we compute \beqs\label{zij} &\mathbf z^{i,i+1} = \mathbf z -{\bf d}^{(2)},\quad\mathbf z^{i+1,i}={\bf z}+{\bf d}^{(2)},\\ &{\bf d}^{(2)}=\mathbf e^{i-1} - 2\mathbf e^i + \mathbf e^{i+1}
\eeqs where $\mathbf e^j$ denotes the $j$th unit vector. The jumps $\mathbf h\mapsto \mathbf h^{i,j}$ occur at certain \emph{rates} $r^{i,j}_N(\mathbf h)$. The rates indicate the probability of a jump in an infinitesimal time $dt$: if the process is in state ${\bf h}$ at time $t$, then the probability that the jump ${\bf h}\mapsto{\bf h}^{i,j}$ occurs in the interval $(t,t+dt]$ is $r^{i,j}_N(\mathbf h)\,dt + o(dt)$. Equivalently, if $R_N({\bf h})$ denotes the sum of the rates of all possible jumps from ${\bf h}$, then the process holds in state ${\bf h}$ for an exponential time with rate $R_N({\bf h})$, after which the jump ${\bf h}\mapsto{\bf h}^{i,j}$ occurs with probability $r^{i,j}_N(\mathbf h)/R_N({\bf h})$.
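As a concrete check of the bookkeeping in~\eqref{hij} and~\eqref{zij}, the following minimal Python sketch (illustrative; indices are taken modulo $N$ for the periodic lattice) verifies that the slope profile of ${\bf h}^{i,i+1}$ is ${\bf z}-{\bf d}^{(2)}$ and that of ${\bf h}^{i+1,i}$ is ${\bf z}+{\bf d}^{(2)}$:

```python
import random

def jump(h, i, j):
    """The particle jump h -> h^{i,j}: one particle leaves column i
    and lands on neighboring column j (periodic lattice)."""
    N = len(h)
    h2 = list(h)
    h2[i % N] -= 1
    h2[j % N] += 1
    return h2

def slopes(h):
    """Slope profile z_k = h_{k+1} - h_k, periodic indices."""
    N = len(h)
    return [h[(k + 1) % N] - h[k] for k in range(N)]

random.seed(0)
N, i = 8, 3
h = [random.randint(-3, 3) for _ in range(N)]
z = slopes(h)
d2 = [0] * N
d2[(i - 1) % N], d2[i % N], d2[(i + 1) % N] = 1, -2, 1   # d2 = e^{i-1} - 2e^i + e^{i+1}
```

The identities hold for any height profile, since the jump changes only three neighboring slopes.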
In this paper, we will consider rates of the form $r^{i,j}_N({\bf h})=N^4r^{i,j}(\mathbf h)$. Here, $N^4$ is the appropriate time scaling to take a hydrodynamic limit, as we will explain in Section~\ref{sec:hydro}. The unscaled rates $r^{i,j}(\mathbf h)$ only depend on the local configuration of heights, and not on $N$.
Formally, the rates determine the dynamics of $\mathbf h_N$ through the generator $\mathcal L_N$:
\begin{equation}\label{gen-gen}\left(\mathcal L_N f\right)(\mathbf h) = N^4\sum_{|i-j|=1}r^{i,j}(\mathbf h)\left[f(\mathbf h^{i,j}) - f(\mathbf h)\right].\end{equation} Consider applying $\mathcal L_N$ to $f=\pi_i$, where $\pi_i(\mathbf h) = h_i$. Note that $h_i$ decreases by 1 if a particle at $i$ jumps to $i\pm 1$, and $h_i$ increases by 1 if a particle at $i\pm1$ jumps to $i$. As a result, \beqs\label{LN-pi-h}(\mathcal L_N \pi_i)(\mathbf h) &= N^4\left[\left(r^{i-1,i}-r^{i,i-1}\right)-\left(r^{i,i+1}-r^{i+1,i}\right)\right](\mathbf h)\\&=N^4(J^{i-1,i}-J^{i,i+1})(\mathbf h),\eeqs where $J^{i,i+1}(\mathbf h)= (r^{i,i+1}-r^{i+1,i})(\mathbf h)$ is the \emph{current}: the net, expected amount of mass flowing from $i$ to $i+1$ per unit of unscaled time if the process is in state $\mathbf h$. By definition of the generator, we then have \beqs\label{evoln-h} \partial_t\mathbb E\,[h_i(t)] = \mathbb E\,\left[(\mathcal L_N\pi_i)(\mathbf h_N(t))\right]= N^4\mathbb E\,\left[(J^{i-1,i}-J^{i,i+1})(\mathbf h_N(t))\right]. \eeqs This equation is valid regardless of the specific form of $r^{i,j}(\mathbf h)$. It can be thought of as a microscopic continuity equation: the change in mass (height) is given by the divergence (finite difference) of a current. We now specify the ``Metropolis-type'' rates considered in this paper: \begin{equation}\label{rates-H}
r^{i,j}(\mathbf h) = \,\mathrm{exp}\left(-\frac K2\left[H(\mathbf z^{i,j}) - H(\mathbf z)\right]\right), \quad |i-j|=1. \end{equation} Here, $K=1/(k_B T)$, where $k_B$ is the Boltzmann constant and $T$ is the ambient temperature, held constant over time. See Remark~\ref{rk:metrop} for an explanation of the name ``Metropolis''. Using the formulas~\eqref{H} for the Hamiltonian,~\eqref{zij} for the transitions ${\bf z}\mapsto{\bf z}^{i,j}$ and~\eqref{rates-H} for the Metropolis rates, we obtain the following explicit expression for the rates: \beqs\label{met-rates-general} r^{i,i+1}(\mathbf h) &= r^{i,i+1}(\mathbf z)=\,\mathrm{exp}\left(-3K+K(z_{i+1}-2z_i+z_{i-1})\right),\\ r^{i+1,i}(\mathbf h) &= r^{i+1,i}(\mathbf z)= \,\mathrm{exp}\left(-3K-K(z_{i+1}-2z_i+z_{i-1})\right). \eeqs Note that these rates depend on ${\bf h}$ only through ${\bf z}$; in fact, only through a further finite difference (FD). This implies that $\mathbf z_N(t)$ is also a Markov jump process which can exist independently of a height process: it is the process which takes jumps ${\bf z}\mapsto{\bf z}^{i,j}$ with rates $N^4r^{i,j}({\bf z})$. The form of the rates~\eqref{met-rates-general} motivates us to also introduce the third order FD process $\mathbf w_N(t) = (w_1(t),\dots, w_N(t))$, with $$w_i(t) = (z_{i-1}-2 z_i + z_{i+1})(t) = (h_{i+2}-3h_{i+1}+3h_{i} - h_{i-1})(t).$$ We let ${\bf w}$ denote a generic third order FD profile corresponding to the generic height profile ${\bf h}$. The process $\mathbf w_N(t)$ is also a Markov jump process which can be independently defined. It undergoes jumps ${\bf w}\mapsto {\bf w}^{i,i+1}$, ${\bf w}\mapsto {\bf w}^{i+1,i}$ with rates $N^4r^+(w_i)$ and $N^4r^-(w_i)$, respectively, where $r^\pm(w) = e^{-3K\pm Kw}$ and
\beqsn &\mathbf w^{i,i+1} = \mathbf w -{\bf d}^{(4)},\quad\mathbf w^{i+1,i}={\bf w}+{\bf d}^{(4)},\\ &{\bf d}^{(4)}=\mathbf e^{i-2} - 4\mathbf e^{i-1} + 6\mathbf e^{i} - 4\mathbf e^{i+1}+\mathbf e^{i+2}. \eeqsn \subsection{Role of Height, Slope, and Third Order FD Process}\label{subsec:role} Now that we have introduced the three processes $\mathbf h_N$, $\mathbf z_N$, and $\mathbf w_N$, let us explain their roles in this paper. The original $\mathbf h_N(t)$ is the physically meaningful process, and our main goal is to derive the PDE governing its hydrodynamic limit $h$. However, it will be more convenient to first study the hydrodynamic limit $w$ of $\mathbf w_N$, and to deduce the PDE governing $h$ from the PDE governing $w$. As an indication of why this is more convenient, recall the evolution equation~\eqref{evoln-h} for $\mathbb E\,[h_i(t)]$. Using~\eqref{met-rates-general} and the definition of $w_i$, we can now write the current $J^{i,i+1}$ as $ J^{i,i+1}({\bf h}) = J(w_i),$ where \begin{equation} J(w)=(r^+-r^-)(w) = 2e^{-3K}\sinh(Kw).\end{equation} Therefore, with the Metropolis rates~\eqref{met-rates-general}, the evolution equation~\eqref{evoln-h} takes the form \begin{equation}\label{evoln-h-II} \partial_t\mathbb E\,[h_i(t)] = -N^4\mathbb E\,\left[J(w_{i}(t))-J(w_{i-1}(t))\right]. \end{equation} Thus, the evolution of $\mathbb E\,[h_i]$ depends nonlinearly on the FDs $w_i$, $w_{i-1}$. In hydrodynamic limit derivations, such dependence on finite differences is inconvenient. But if we take the third order FD of both sides of~\eqref{evoln-h-II}, we get \begin{equation}\label{evoln-w} \partial_t\mathbb E\,[w_i(t)] = -N^4\mathbb E\,\left[J(w_{i-2})-4J(w_{i-1}) +6J(w_i)- 4J(w_{i+1}) + J(w_{i+2})\right].\end{equation} We see that the evolution of $\mathbb E\,[w_i]$ can be written in terms of $w_j$'s alone.
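The five-point stencil in~\eqref{evoln-w} mirrors the fact that a single particle jump shifts the third order FD profile by $\mp{\bf d}^{(4)}$. A quick check of this identity (an illustrative Python sketch with periodic indices):

```python
def third_fd(h):
    """w_i = h_{i+2} - 3 h_{i+1} + 3 h_i - h_{i-1}, periodic indices."""
    N = len(h)
    return [h[(i + 2) % N] - 3 * h[(i + 1) % N] + 3 * h[i] - h[(i - 1) % N]
            for i in range(N)]

def jump(h, i, j):
    """One particle moves from column i to neighboring column j."""
    h2 = list(h)
    h2[i % len(h)] -= 1
    h2[j % len(h)] += 1
    return h2

N, i = 10, 4
h = [(3 * k * k - 7 * k) % 5 for k in range(N)]   # arbitrary height profile
w = third_fd(h)
d4 = [0] * N
# d4 = e^{i-2} - 4 e^{i-1} + 6 e^i - 4 e^{i+1} + e^{i+2}
for offset, c in zip(range(-2, 3), (1, -4, 6, -4, 1)):
    d4[(i + offset) % N] = c
```

The identity is purely algebraic, so it holds for any height profile.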
Now, let us address the role of $\mathbf z_N$. Roughly speaking, the PDE governing the limit $w$ comes from replacing $\mathbb E\,[J(w_i)]$ in the righthand side of~\eqref{evoln-w} by $\hat J(w(t, i/N))$ for some function $\hat J$. Showing such a replacement is possible and determining $\hat J$ will require us to have some knowledge of ${\mathrm{Law}}(\mathbf w_N(t))$. Since the distribution ${\mathrm{Law}}(\mathbf z_N(t))$ determines the distribution ${\mathrm{Law}}(\mathbf w_N(t))$, we could study the former to understand the latter. As a helpful starting point, it turns out that $\mathbf z_N$ has the special property that it is reversible with respect to the standard Gibbs measure $$\Phi_N(\mathbf z)\propto \,\mathrm{exp}(-KH(\mathbf z)) = \,\mathrm{exp}(-K\sum_{i=1}^N z_i^2),\quad {\bf z}\in\mathbb Z^N.$$ This is a result of detailed balance, i.e. $$r^{i,i+1}(\mathbf z)\Phi_N(\mathbf z) = r^{i+1,i}(\mathbf z^{i,i+1})\Phi_N(\mathbf z^{i,i+1})$$ for all $i$, which is easy to see using the original formulation of the rates~\eqref{rates-H}. \begin{remark}\label{rk:metrop} Any rates of the form $r^{i,j}({\bf z})=\psi(\Delta H)$ which satisfy $\psi(-\Delta H) = \psi(\Delta H)e^{K\Delta H}$ are in detailed balance with the Gibbs measure $\Phi_N$. Another example of rates in this family is $\psi(\Delta H) = e^{-K\Delta H}\wedge1$, which is the acceptance probability in a Metropolis-Hastings scheme to sample from $\Phi_N$. This is where the name ``Metropolis'' comes from. \end{remark} Reversibility of $\mathbf z_N$ with respect to $\Phi_N$ suggests that for $N$ large, ${\mathrm{Law}}(\mathbf z_N(t))$ is a \emph{local} Gibbs product measure of the form \begin{equation}\label{gibbs-measure} \mathbb P(\mathbf z_N(t)=\mathbf z) \approx \rho[\bml](\mathbf z)\propto \,\mathrm{exp}\left(-K\sum_{i=1}^Nz_i^2 + 2K\sum_{i=1}^N\lambda_iz_i\right) \end{equation} for some $\lambda_i=\lambda_i(t)$. 
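The detailed balance identity above is easy to verify numerically. The following sketch (illustrative; unnormalized Gibbs weights suffice, since the normalization cancels on both sides) checks it for the explicit rates~\eqref{met-rates-general}:

```python
import math
import random

def H(z):                        # Hamiltonian H(z) = sum_i z_i^2
    return sum(zk * zk for zk in z)

def w_at(z, i):                  # z_{i-1} - 2 z_i + z_{i+1}, periodic
    N = len(z)
    return z[(i - 1) % N] - 2 * z[i % N] + z[(i + 1) % N]

def r_plus(z, i, K):             # r^{i,i+1}(z) = exp(-3K + K w_i)
    return math.exp(-3 * K + K * w_at(z, i))

def r_minus(z, i, K):            # r^{i+1,i}(z) = exp(-3K - K w_i)
    return math.exp(-3 * K - K * w_at(z, i))

def jump_plus(z, i):             # z -> z^{i,i+1} = z - d^{(2)}
    N = len(z)
    z2 = list(z)
    z2[(i - 1) % N] -= 1
    z2[i % N] += 2
    z2[(i + 1) % N] -= 1
    return z2

random.seed(1)
K = 0.7
z = [random.randint(-2, 2) for _ in range(10)]
i = 4
z2 = jump_plus(z, i)
# detailed balance: r^{i,i+1}(z) Phi(z) = r^{i+1,i}(z^{i,i+1}) Phi(z^{i,i+1})
lhs = r_plus(z, i, K) * math.exp(-K * H(z))
rhs = r_minus(z2, i, K) * math.exp(-K * H(z2))
```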
There are deeper and more technical reasons why the local Gibbs measure typically arises, which we will not get into here. See e.g.~\cite{varadhanI,kipnisbook} for a rigorous probabilistic treatment of this topic and~\cite{spohnbook} for a more physical treatment.
The paper~\cite{gao2020} assumed that ${\mathrm{Law}}(\mathbf z_N(t))$ is a local Gibbs distribution to carry out the aforementioned replacement and to determine the function $\hat J$. However, the authors gave preliminary evidence that, interestingly, the local Gibbs distribution is not accurate for all $K$. Indeed, we will show definitively in Section~\ref{subsec:gibbs} that ${\mathrm{Law}}(\mathbf z_N(t))$ \emph{cannot} be approximated by a local Gibbs distribution as $N\to\infty$. Why the local Gibbs distribution is not the correct form of ${\mathrm{Law}}(\mathbf z_N(t))$ is an interesting question worthy of further investigation, but we do not pursue it here. We will see that, despite being incorrect, the local Gibbs approximation~\eqref{gibbs-measure} to ${\mathrm{Law}}(\mathbf z_N(t))$ leads to a crude but numerically useful approximation to the true function $\hat J$, the computation of which is the main goal of this paper. Beyond this approximation, however, the $\mathbf z_N$ process will play no role in our PDE derivation.
\subsection{A Close Cousin: Arrhenius Rate Dynamics}\label{subsec:arr} Throughout the paper, it will often be helpful to compare the Metropolis rate process and its hydrodynamic limit to the Arrhenius rate process and its limit, which were studied in~\cite{kat21}. For the sake of a self-contained paper, let us review the key features of the Arrhenius process. We will let $\mathbf h_N^{\text{arr}}$ denote the height process, $\mathbf z_N^{\text{arr}}$ denote the first order FD (slope) process, and $\mathbf w_N^{\text{arr}}$ denote the \emph{second} order FD process. The Arrhenius rates $r_N^{i,j}$ can also be written $r_N^{i,j}({\bf h})=N^4r^{i,j}({\bf h})$, where $r^{i,j}({\bf h})=r^{i,j}({\bf z})$. They are symmetric with respect to jumping left and right, with $r^{i,i\pm1}({\bf z})=r(z_i-z_{i-1})$ for $r(w)=e^{-2K-2Kw}$. For the physical interpretation of these rates, see~\cite{kat21} and the references therein. Like the Metropolis rates, the Arrhenius rates are reversible with respect to the Gibbs measure $\Phi_N({\bf z})\propto\,\mathrm{exp}(-KH({\bf z}))$. But unlike ${\mathrm{Law}}(\mathbf z_N(t))$ for the Metropolis process, the distribution ${\mathrm{Law}}(\mathbf z_N^{\text{arr}}(t))$ \emph{does} converge to a local Gibbs measure as $N\to\infty$. This is the key difference between these two otherwise very similar processes. Another similarity is that the evolution of $\mathbb E\,[w_i^{{\text{arr}}}(t)]$ takes the exact same form as the evolution~\eqref{evoln-w} of $\mathbb E\,[w_i(t)]$, except that the function $J$ is replaced with the function $r$.
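The contrast between the two nonlinearities can be made concrete: the Metropolis current $J(w)=2e^{-3K}\sinh(Kw)$ is odd and vanishes at $w=0$, whereas the Arrhenius rate $r(w)=e^{-2K-2Kw}$ is strictly positive there. A minimal check (with $K=1$, our choice):

```python
import math

K = 1.0

def J(w):
    # Metropolis current: J(w) = 2 e^{-3K} sinh(K w)
    return 2.0 * math.exp(-3.0 * K) * math.sinh(K * w)

def r(w):
    # Arrhenius rate: r(w) = e^{-2K - 2K w}
    return math.exp(-2.0 * K - 2.0 * K * w)

j0, r0 = J(0.0), r(0.0)                      # J vanishes at 0; r does not
odd_err = max(abs(J(w) + J(-w)) for w in (0.5, 1.0, 2.0))
```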
\section{Hydrodynamic Limit in the Rough Scaling Regime}\label{sec:hydro} In this section, we define the hydrodynamic limit of a Markov jump process $\mathbf v_N(t)\in\mathbb Z^N$ under a given scaling regime. We then specify the rough scaling regime for the Metropolis $\mathbf h_N$ process, and motivate recasting the limit of $\mathbf h_N$ in terms of the limit of $\mathbf w_N$.
Let $\mathbf v_N(t)\in\mathbb Z^N$, $N=1,2,\dots$ be a sequence of Markov jump processes on the periodic lattice $\{1,2,\dots,N\}$ with transitions ${\bf v}\mapsto{\bf v}^{i,j}$ occurring at rates $$r_N^{i,j}({\bf v})= N^\alpha r^{i,j}({\bf v})$$ for some $\alpha$. In order for a hydrodynamic limit to exist, the rates and transition rules should satisfy certain conditions. We will content ourselves with taking $\mathbf v_N$ to be one of $\mathbf h_N$ or $\mathbf w_N$, for which these conditions are satisfied.
The hydrodynamic limit of $\mathbf v_N$ arises by rescaling three characteristic scales: time, space, and ``amplitude". The $N^\alpha$ time rescaling has already been incorporated into the transition rates $r_N^{i,j}$. The spatial scaling occurs by identifying the $N$ lattice sites with points on the periodic unit interval (torus), denoted ${\mathbb T}$. Specifically, we identify $\mathbf v_N(t) = (v_1(t), \dots, v_N(t))$ with a random measure on the unit interval: $$\mathbf v_N(t)\quad\leftrightarrow \quad v_N(t,dx) = \frac1N\sum_{i=1}^Nv_i(t)\delta\left(x-\frac iN\right).$$ Another equivalent way to think of $\mathbf v_N(t)$ is as a step function, with value $v_i(t)$ in the interval $[i/N, (i+1)/N)$. For the amplitude rescaling, we assume that the $v_i$ grow with $N$, so that to obtain a finite macroscopic limit, the $v_i$ must be scaled down. We will incorporate the amplitude rescaling into the following definition of a hydrodynamic limit: \begin{definition}[Hydrodynamic Limit,~\cite{kat21}]\label{def:hydro} Suppose $\mathbf v_N(0)$ is initialized in a random configuration for which there exists $v_0:{\mathbb T}\to\mathbb R$ such that \begin{align*}\label{init}\frac1N\sum_{i=1}^N\phi\left(\frac iN\right)(N^{-\beta}v_i(0)) \probto\int_{\mathbb T}\phi(x)v_0(x)dx,\quad N\to\infty.\tag{init}\end{align*} We say $\mathbf v_N$ converges hydrodynamically to $v:(0,T]\times{\mathbb T}\to\mathbb R$ under amplitude scaling $N^{\beta}$ and implied time scaling $N^\alpha$ if for each $t\in(0,T]$, $\phi\in C(\unit)$, we have \beqs\label{hydro-lim}\frac1N\sum_{i=1}^N\phi\left(\frac iN\right)(N^{-\beta}v_i(t)) \probto\int_{\mathbb T}\phi(x)v(t,x)dx,\quad N\to\infty.\eeqs \end{definition} Here, the notation $\probto$ denotes convergence in probability. The probability distribution of $\int \phi(x)N^{-\beta}v_N(t,dx)$ is induced by ${\mathrm{Law}}(\mathbf v_N(t))={\mathrm{Law}}(\mathbf v_N(0))\,\mathrm{exp}(\mathcal L_N t)$, where $\mathcal L_N$ is the generator of $\mathbf v_N$.
Note that the lefthand side of~\eqref{hydro-lim} equals $\int \phi(x)N^{-\beta}v_N(t,dx)$, so that~\eqref{hydro-lim} expresses that the random measure $N^{-\beta}v_N(t,dx)$ converges to the measure $v(t,x)dx$. \begin{definition}[Rough Scaling Regime] Let $\mathbf h_N$ be governed by the Metropolis dynamics specified in Section~\ref{subsec:micro}. We say $\mathbf h_N$ converges to $h:[0,T]\times{\mathbb T}\to\mathbb R$ in the \textbf{\emph{rough scaling regime}} if $\mathbf h_N$ converges hydrodynamically to $h$ under amplitude scaling $N^{3}$ and implied time scaling $N^4$.\end{definition} We will explain the choice $\alpha=4$ and $\beta=3$ below.
As an example of a distribution on $\mathbf h_N(0)$ satisfying~\eqref{init} with $\beta=3$, consider a product measure with marginals $$h_i\;\sim\;\lfloor N^3 h_0(i/N)\rfloor + \xi_i,$$ where $\lfloor q\rfloor$ denotes the integer part of $q$ and $\xi_i$, $i=1,\dots, N$ are i.i.d. integer-valued random variables with bounded support. In fact,~\eqref{init} is satisfied as long as $|\xi_i|<B_N$ where $B_N = o(N^3)$. To summarize the rough scaling regime in simple terms, start with an $O(1)$ height profile $h_i(0) = h_0(i/N)$. Then, multiply it by $N^3$ to get $\mathbf h_N(0)$, and evolve it forward according to the time-rescaled Metropolis rate dynamics. To get a hydrodynamic limit, divide $\mathbf h_N(t)$ by $N^{3}$ and take $N\to\infty$. It may seem like multiplying and dividing by $N^3$ should have no effect. But this is not so, because scaling has a nonlinear effect on the dynamics, so that different choices of $\beta$ for the amplitude scaling lead to different hydrodynamic limits. A straightforward way to see this is to note that the rates $r^{i,i+1}$ and $r^{i+1,i}$ are exponential in $w_i$. Scaling $w_i$ by a constant multiple will affect the rates nonlinearly, and thus have a nonlinear effect on the evolution of $h_i$. To understand the effect of different $\beta$ in more detail, suppose for the moment that we have $\mathbb E\,[h_{i}(t)]\approx N^{\beta}h(t,i/N) $ for $N\gg1$. (This of course does not follow from hydrodynamic convergence.) Using~\eqref{evoln-h-II}, we should then have \begin{equation}\label{ab-scaling}\partial_th(t,i/N) \approx -N^{4-\beta}\left(\mathbb E\, J\left(w_{i}\right) - \mathbb E\, J\left(w_{i-1}\right)\right).\end{equation} Now, if $h_i$ has order $N^\beta$ then $w_i$ should have order $N^{\beta-3}$, since it is the third order FD of $h_i$ (this statement is purely formal; see below). If $\beta<3$, then we expect that in the $N\to\infty$ limit, the nonlinear function $J$ will become linearized around $0$. 
Taking $\beta=3$ as in the rough scaling limit, the nonlinear function $J$ is in some sense ``preserved" as $N\to\infty$, leading to a very different PDE. The reason for the $N^{4}$ time scaling is that it ensures that the total power of $N$ is $4-3=1$ in~\eqref{ab-scaling}, which balances the order of the finite difference $J(w_{i}) - J(w_{i-1})$. In sum, the rough scaling regime is the unique choice of $\alpha,\beta$ which leads to a nontrivial and non-exploding limit $h$ governed by a PDE which ``preserves" the nonlinear function $J$ (we use quotation marks because the PDE will involve not $J$ but a related $\hat J$, also with exponential nonlinearity). Note that this choice is tailored to the Metropolis dynamics. For the Arrhenius dynamics, for example, $\alpha=4,\beta=2$ gives the PDE with exponential nonlinearity.
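The product-measure initialization described above is straightforward to sketch in code. The profile $h_0$ and test function $\phi$ below are illustrative choices of ours; the paired averages in~\eqref{init} with $\beta=3$ should then approach $\int_{\mathbb T}\phi(x)h_0(x)dx$, which is $0$ for this pair:

```python
import numpy as np

rng = np.random.default_rng(0)
h0 = lambda x: 0.0075 * np.sin(2 * np.pi * x)   # macroscopic initial profile
phi = lambda x: np.cos(2 * np.pi * x)           # a test function (illustrative)

def paired_average(N):
    # (1/N) sum_i phi(i/N) * N^{-3} h_i(0)  for  h_i(0) = floor(N^3 h0(i/N)) + xi_i
    x = np.arange(1, N + 1) / N
    xi = rng.integers(-3, 4, size=N)            # bounded i.i.d. perturbations
    h = np.floor(N ** 3 * h0(x)) + xi
    return np.mean(phi(x) * h) / N ** 3

vals = [paired_average(N) for N in (200, 2000)]
```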
Of course, if $h_i=O(N^3)$ then in general we cannot infer $w_i=O(1)$, since taking finite differences is unstable. This motivates us to take $\mathbf w_N$ as our ``original" process and study its hydrodynamic limit under $N^0$ amplitude scaling. We then expect to obtain the hydrodynamic limit of $\mathbf h_N$ under $N^3$ amplitude scaling by doing three cumulative sum operations. We will carry out this program formally in the next section. In particular, we will see that the function $\hat J$ of the macroscopic current $\hat J(h_{xxx})$ in the $h$ PDE is intrinsically linked to the $\mathbf w_N$ process.
\section{PDE for $h$ via Third Order Finite Differences}\label{sec:theory} This section explains our approach to deriving the PDE governing the hydrodynamic limit of $\mathbf h_N$ in the rough scaling regime, via the hydrodynamic limit of $\mathbf w_N$. We begin the section with an overview of this approach. First, we will show that \begin{equation}\label{w-to-h-logic} \mathbf w_N\stackrel{\substack{\text{ptwise}\\\text{meso}}}{\to} w\quad\Rightarrow\quad\mathbf w_N\stackrel{\text{hydro}}{\to} w\quad\Rightarrow\quad \mathbf h_N\stackrel{\text{hydro}}{\to} h.
\end{equation} The rightmost limit denotes hydrodynamic convergence of $\mathbf h_N$ in the rough scaling regime: amplitude scaling $N^3$, time scaling $N^4$. We will show this follows from the middle limit: hydrodynamic convergence of $\mathbf w_N$ under amplitude scaling $N^0$ and time scaling $N^4$. The limiting function $h$ will be uniquely determined from the function $w$, the initial macroscopic condition $h_0$, and the periodic boundary. The leftmost limit denotes ``pointwise mesoscopic" convergence of $\mathbf w_N$, which is nonstandard but physically intuitive, and was used in our study of rough local equilibria in~\cite{kat21}. We will show that pointwise mesoscopic convergence implies hydrodynamic convergence.
Next, consider the following key approximation: \begin{equation}\label{Ef-intro} \frac{1}{2N\epsilon}\sum_{i\in{N(x\pm\epsilon)}}\mathbb E\, J(w_i(t)) \approx \hat J\left(\frac{1}{2N\epsilon}\sum_{i\in{N(x\pm\epsilon)}} \mathbb E\, w_i(t)\right),\quad N\gg1, \epsilon\ll1.\end{equation} The existence of $\hat J$ such that~\eqref{Ef-intro} holds is a property of locally equilibrated processes, as we will explain in Section~\ref{subsec:LE}. It is important to note that~\eqref{Ef-intro} is \emph{not} an assumption. In rigorous hydrodynamic limit arguments, proving the so-called ``Replacement Lemma", which is analogous to~\eqref{Ef-intro}, is typically the central and most difficult part (for more on this, see the discussion and references in~\cite{kat21}). We will show numerically that~\eqref{Ef-intro} is satisfied for the Metropolis process. The equation is the key ingredient to derive the PDE since, as we will show in Claim~\ref{claim:PDE}, \beqs\label{EplusEfimplyPDE} \bigg(\mathbf w_N\stackrel{\substack{\text{ptwise}\\\text{meso}}}{\to} w\bigg)\quad &+\quad \bigg(\exists\,\hat J\text{ s.t. \eqref{Ef-intro} holds }\forall\;t,x\bigg)\\ &\Rightarrow\quad w\text{ solves }\partial_tw = -\partial_{xxxx}\hat J(w)\text{ weakly.} \eeqs From here, we will be able to conclude that $h$, the hydrodynamic limit of $\mathbf h_N$ in the rough scaling regime, is the weak solution to the PDE $$\partial_th= -\partial_{x}\hat J(h_{xxx}).$$
Thus, if we can verify the conditions in~\eqref{EplusEfimplyPDE} --- that $\mathbf w_N$ converges pointwise mesoscopically to $w$, and a function $\hat J$ exists such that~\eqref{Ef-intro} holds for all $t,x$ --- then the chain of logic just described will lead us to the PDE for $h$, our original goal. More specifically, this logic establishes the \emph{form} of the PDE, but it remains to compute $\hat J$. Doing so numerically will be the focus of Section~\ref{sec:num}.
The assertions~\eqref{w-to-h-logic} and~\eqref{EplusEfimplyPDE} will be formalized in Section~\ref{subsec:theory} and proved rigorously in Appendix~\ref{app:claims}. The rigorous proofs rely on the following supplementary boundedness assumptions:
\begin{align*}\label{bd1}\sup_{N}\sup_{i=1,\dots, N}\mathbb E\,|w_i(t)| < \infty\quad\forall t\geq0,\tag{w-bd}\end{align*} \begin{align*}\label{bd2}
\sup_{N}\sup_{s\in(0,t]}\sup_{i=1,\dots,N}|\mathbb E\, J(w_i(s))|<\infty\quad\forall t\geq0.\tag{J-bd}\end{align*}
These assumptions are extremely strong (most likely unnecessarily so), but our primary aim in presenting the proofs is to put our numerical method on firm footing. We will numerically check both the boundedness assumptions and the two conditions of~\eqref{EplusEfimplyPDE} in Section~\ref{subsec:theory-numeric}.
Later in Section~\ref{sec:num}, we will verify numerically that the end goal has been achieved: that $\mathbf h_N$ does in fact converge to the solution of the PDE we obtain. Given this, the numerical and theoretical verifications of this section may seem unnecessary. Their purpose is to confirm that we have obtained the correct PDE \emph{for the correct reason}. This is important because hydrodynamic limit derivations can be delicate. For example, in~\cite{mw-krug} the authors used heuristic arguments to derive the PDE limit of the Arrhenius dynamics in the rough scaling regime. They confirmed numerically that the PDE they obtained is correct. However, we show numerically in~\cite{kat21} that some of the assumptions in~\cite{mw-krug} were incorrect, which obscured the true reason the PDE takes the form it does (see the discussion in Section 6.4 of~\cite{kat21}).
\subsection{From $w$ to $h$: Limit and PDE}\label{subsec:theory}
We start by showing that the hydrodynamic limit of $\mathbf h_N$ is determined from the hydrodynamic limit of $\mathbf w_N$.
The following claim corresponds to the second (righthand) implication in~\eqref{w-to-h-logic}. \begin{claim}\label{claim:htow} Let $\mathbf h_N(t)$, $N=1,2,\dots$ be a sequence of Metropolis rate height processes, and $\mathbf w_N(t)$ be the corresponding third order FD processes. Suppose $$N^{-1}\sum_{i=1}^Nh_i(0)\probto M,\quad N\to\infty,$$ for some deterministic $M$ and that~\eqref{init} is satisfied for $\mathbf w_N$ with $\beta=0$ and some $v_0=w_0$. If $\mathbf w_N$ converges hydrodynamically to a function $w$ under amplitude scaling $N^0$, and if~\eqref{bd1} holds, then $\mathbf h_N$ converges hydrodynamically under amplitude scaling $N^3$. The limit $h$ is the unique periodic function such that $\int_0^1h(t,x)dx = M$, $h_{xxx}=w$, and such that $h_x$, $h_{xx}$ are also periodic. \end{claim} For the proof of the claim see Appendix~\ref{app:claims}.
The claim implies in particular that~\eqref{init} is satisfied for $\mathbf h_N$ with $\beta=3$, where $h_0$ is uniquely determined from the function $w_0$, the constant $M$, and the periodic boundary.
We will now recall from~\cite{kat21} the notion of pointwise mesoscopic convergence, which will be very convenient to study from a numerical perspective. We begin with some notation. For a vector $\bf w$ and a function $f:\mathbb R\to\mathbb R$, define \beqs\label{wfbar} \varbar w = \frac{1}{2N\epsilon}\sum_{i\in{N(x\pm\epsilon)}} w_i,\qquad \fxnbar f w= \frac{1}{2N\epsilon}\sum_{i\in{N(x\pm\epsilon)}}f(w_i).\eeqs
Here, $i\in N(x\pm\epsilon)$ denotes $i\in 1,2,\dots, N$ such that $|(i/N-x)\bmod1|\leq\epsilon$. \begin{remark}\label{rk:w-meas} Let $(\phi\ast\mu)(x)=\int_{\mathbb T}\phi(x-y)\mu(dy)$ for a function $\phi:{\mathbb T}\to\mathbb R$ and a signed measure $\mu$ on ${\mathbb T}$. Using the interpretation of $\mathbf w_N(t)=\mathbf w_N(t,dx)$ as a signed, random measure, note that $\varbar w(t)$ can be written in the following way: \begin{equation} \varbar w(t) = (\phi_\epsilon\ast\mathbf w_N(t))(x),\quad\text{where}\quad \phi_\epsilon(x)=\mathbbm{1}_{(-\epsilon,\epsilon)}(x)/2\epsilon. \end{equation} \end{remark} \begin{definition}[Pointwise Mesoscopic Convergence] We say $\mathbf w_N$ converges pointwise mesoscopically if there exists a continuous function $w$ such that
\begin{equation}\label{convergence}\lim_{\epsilon\to0}\lim_{N\to\infty}\mathbb E\,\bigg|\varbar w(t) - w(t,x)\bigg|^2=0,\quad\forall t\geq0,x\in{\mathbb T}.\end{equation} \end{definition}
The reason we think of~\eqref{convergence} as a ``pointwise" convergence is that it holds for each $x$, and the \emph{limit} is the pointwise quantity $w(t,x)$. However, $\varbar w(t)$ is not itself a ``pointwise" quantity, but rather an average over the \emph{mesoscopic}, or \emph{local} set ${N(x\pm\epsilon)}$: for each fixed $\epsilon\ll1$, this set contains a number of microscopic lattice sites that grows to infinity with $N$. At the same time, it corresponds to the small macroscopic interval $(x-\epsilon,x+\epsilon)$.
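For concreteness, the window average~\eqref{wfbar} can be implemented directly. In this sketch we normalize by the exact window size rather than $2N\epsilon$; the two agree up to $O(1/N)$ corrections:

```python
import numpy as np

def window_average(w, x, eps, f=lambda u: u):
    # Average of f(w_i) over i in N(x +/- eps), i.e. |(i/N - x) mod 1| <= eps.
    # (Normalized by the exact window size rather than 2*N*eps.)
    N = len(w)
    i = np.arange(1, N + 1)
    dist = np.abs((i / N - x + 0.5) % 1.0 - 0.5)   # periodic distance on the torus
    return np.mean(f(np.asarray(w)[dist <= eps]))

w = np.full(100, 2.5)                               # constant profile
avg = window_average(w, x=0.5, eps=0.1)
avg_sq = window_average(w, x=0.0, eps=0.05, f=lambda u: u ** 2)  # window wraps
```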
A practical reason to consider pointwise mesoscopic convergence is that it connects with our numerical method to compute $\hat J$, which uses the local quantities $\varbar w$ and $\fxnbar J w$. As such, it will be more straightforward to prove that $w$ solves $w_t=-\partial_{xxxx}\hat J(w)$ if we know that $w$ is the pointwise mesoscopic, rather than hydrodynamic, limit of $\mathbf w_N$. Moreover, as noted in~\cite{kat21}, it is convenient that convergence in $L^2$ (with respect to randomness) can be separated into convergence of expectations and vanishing variance. Namely,~\eqref{convergence} is equivalent to
\begin{align*} \label{V}&\lim_{\epsilon\to0}\lim_{N\to\infty}\mathrm{Var}\left(\varbar w(t)\right)= 0\quad\forall x\in{\mathbb T},\tag{V}\\ \label{E}&\text{There exists a continuous $w(t,\cdot):{\mathbb T}\to\mathbb R$ such that }\tag{E}\\ &\lim_{\epsilon\to0}\lim_{N\to\infty}\mathbb E\,\varbar w(t)= w(t,x), \quad \forall x\in{\mathbb T} \end{align*} for all $t\geq0$.
We now formalize the first (lefthand) implication in~\eqref{w-to-h-logic}. \begin{claim}\label{claim:meso} Assume that
$\mathbf w_N$ converges pointwise mesoscopically to $w$, and that~\eqref{bd1} holds. Then \begin{equation}\int \phi(x)w_N(t,dx) \probto\int_{\mathbb T}\phi(x)w(t,x)dx,\quad N\to\infty\end{equation} for all $\phi\in C(\unit)$ and $t\geq0$. In other words, $\mathbf w_N$ converges hydrodynamically to $w$ under amplitude scaling $N^0$. \end{claim} The proof is given in Appendix~\ref{app:claims}. Now that we have discussed convergence of $\mathbf w_N$, we turn to the problem of deriving the PDE governing its limit $w$. To do so, we will exploit the following crucial property of any process with a ``rough" local equilibrium state (defined in Section~\ref{subsec:LE}):
\begin{align*}\label{Ef}&\text{For all ``suitable" }f,\text{ there exists continuous }\hat f\text{ such that }\tag{Ef}\\ &\mathbb E\,\bar {f}({\bf w}_{{N(x\pm\epsilon)} }(t))\stackrel{N,\epsilon}{\approx}\hat f\left(\mathbb E\,\varbar w(t)\right), \quad \forall x\in{\mathbb T}.\end{align*} We use the notation $A\stackrel{N,\epsilon}{\approx}B$ to mean that $|A-B|$ converges to zero as $N\to\infty$ and then $\epsilon\to0$. Note that if $\eqref{E}$ and $\eqref{Ef}$ both hold then \begin{equation}\label{eEfeE} \lim_{\epsilon\to0}\lim_{N\to\infty}\mathbb E\,\bar {f}({\bf w}_{{N(x\pm\epsilon)} }(t)) = \hat f(w(t,x)). \end{equation} The ``suitable" functions $f$ are discussed in Section~\ref{subsec:LE}. For the Metropolis process, we confirm $\eqref{Ef}$ for both $f(w)=w^2$ and $f(w)=J(w)$ in Section~\ref{subsec:theory-numeric}. However, to derive the Metropolis PDE we will only use that $\eqref{Ef}$ holds for $f=J$. The following claim formalizes the assertion~\eqref{EplusEfimplyPDE} in the introduction to this section. \begin{claim}\label{claim:PDE} Suppose $\mathbf w_N$ converges pointwise mesoscopically to $w$, and that $\eqref{Ef}$ holds with $f=J$ for all $t>0$. Also, assume the bounds~\eqref{bd1} and~\eqref{bd2}. Then $w$ is a weak solution to \begin{equation}\label{w-PDE} \begin{cases} \partial_tw(t,x) &= -\partial_{xxxx}\hat J(w),\quad t>0,x\in {\mathbb T},\\ w(0,x)&=w_0(x),\quad x\in{\mathbb T}\end{cases}\end{equation} in the sense that \begin{equation}\label{weak-def}\int_0^1\psi(x)\left[w(t,x)-w_0(x)\right]dx = -\int_0^t\int_0^1\psi^{(4)}(x)\hat J(w(s,x))dxds,\end{equation} $\forall t>0,\psi\in C^4({\mathbb T})$. \end{claim} \begin{proof} First, we substitute $w(t,x)-w_0(x)$ on the lefthand side of~\eqref{weak-def} by the limit of $\mathbb E\,\left[\varbar w(t)-\varbar w(0)\right]$. Thanks to~\eqref{bd1}, we can pull the limit outside of the integral. 
Thus, the lefthand side is the limit of $\int_{\mathbb T}\psi(x)\mathbb E\,\left[\varbar w(t)-\varbar w(0)\right]dx$ as $N\to\infty,\,\epsilon\to0$. Now, as in Remark~\ref{rk:w-meas}, note that we can write $$\mathbb E\,\varbar w(t)=(\phi_\epsilon\ast\mathbb E\,[\mathbf w_N(t)])(x),$$ where $\mathbb E\,[\mathbf w_N(t)]$ is the signed measure which assigns weight $\mathbb E\,[w_i(t)]$ to $x=i/N$. We can then use the identity $\int_{\mathbb T} \psi(x)(\phi\ast\mu)(x)dx = \int_{\mathbb T} (\psi\ast\phi)(x)\mu(dx)$ which holds for even functions $\phi$. Thus, we get that \beqs\label{PDE-derivation-i} \int_{\mathbb T}\psi(x)\mathbb E\,&\left[\varbar w(t)-\varbar w(0)\right]dx \\&= \int_{\mathbb T}(\psi\ast\phi_\epsilon)(x)\mathbb E\,\left[ w_N(t,dx)-w_N(0,dx)\right]\\&= \frac1N\sum_{i=1}^N(\psi\ast\phi_\epsilon)\left(\frac iN\right)\mathbb E\,[w_i(t) - w_i(0)]\\ &= -\frac1N\sum_{i=1}^N(\psi\ast\phi_\epsilon)\left(\frac iN\right) \int_0^t\mathbb E\,[N^4D_N^4J(w_i(s))]ds,
\eeqs where $D_N^4J(w_i) = J(w_{i-2})-4J(w_{i-1}) +6J(w_i)- 4J(w_{i+1}) + J(w_{i+2})$. We can now move $N^4D_N^4$ onto $(\psi\ast\phi_\epsilon)(i/N)$. For $N\gg1$, the result is approximately $(\psi^{(4)}\ast\phi_\epsilon)(i/N)$. Then we move $\phi_\epsilon$ back onto $\mathbb E\,[J(w_i(s))]$, to arrive at $$\int_{\mathbb T}\psi(x)\mathbb E\,\left[\varbar w(t)-\varbar w(0)\right]dx \approx -\int_{\mathbb T}\psi^{(4)}(x)\int_0^t\mathbb E\,\bar J({\bf w}_{N(x\pm\epsilon)}(s))dsdx.$$ We now apply~\eqref{eEfeE} with $f=J$, and the bound~\eqref{bd2}, to conclude by Dominated Convergence. The remaining details of the proof are given in Appendix~\ref{app:claims}. \end{proof}
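The step in the proof that moves $N^4D_N^4$ onto the smooth test function can be sanity-checked numerically: for smooth periodic $\psi$, the rescaled fourth central difference converges to $\psi^{(4)}$. A sketch with a test function of our choosing:

```python
import numpy as np

N = 400
x = np.arange(1, N + 1) / N
psi = np.sin(2 * np.pi * x)        # a smooth periodic test function (our choice)

# psi_{i-2} - 4 psi_{i-1} + 6 psi_i - 4 psi_{i+1} + psi_{i+2}, periodic wrap
D4 = (np.roll(psi, 2) - 4 * np.roll(psi, 1) + 6 * psi
      - 4 * np.roll(psi, -1) + np.roll(psi, -2))
approx = N ** 4 * D4                               # should approach psi''''
exact = (2 * np.pi) ** 4 * np.sin(2 * np.pi * x)   # psi'''' for this psi
rel_err = np.max(np.abs(approx - exact)) / np.max(np.abs(exact))
```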
Finally, we return to our original goal to derive the PDE governing the rough scaling limit of $\mathbf h_N$. \begin{corollary} Let $\mathbf h_N(t)$ be a Metropolis rate process such that $N^{-1}\sum_{i=1}^Nh_i(0)$ converges to some constant $M$ in probability, and assume the conditions of Claim~\ref{claim:PDE}. Then $\mathbf h_N$ has a hydrodynamic limit $h$ in the rough scaling regime which is three times continuously differentiable in $x$, and which is the weak solution to \begin{equation}\label{h-PDE} \begin{cases} \partial_th(t,x) &= -\partial_{x}\hat J(h_{xxx}),\quad t>0,x\in {\mathbb T},\\ h(0,x)&=h_0(x),\quad x\in{\mathbb T}\end{cases}\end{equation} in the sense that \begin{equation}\label{weak-def-ii}\int_0^1\psi(x)\left[h(t,x)-h_0(x)\right]dx = \int_0^t\int_0^1\psi'(x)\hat J(h_{xxx}(s,x))dxds\end{equation} for all $t>0$ and $\psi\in C^1({\mathbb T})$. \end{corollary} \begin{proof} By Claim~\ref{claim:meso}, $\mathbf w_N$ converges hydrodynamically to $w$, and by Claim~\ref{claim:htow}, $\mathbf h_N$ then converges hydrodynamically to the unique periodic $h$ such that $\int h(t,x)dx=M$ for all $t$ and $h_{xxx}=w$. Also, Claim~\ref{claim:PDE} gives that $w$ is the weak solution to $w_t=-\partial_{xxxx}\hat J(w)$. Now, note that~\eqref{weak-def-ii} is clearly satisfied for $\psi\equiv1$, so it suffices to show~\eqref{weak-def-ii} for all $\psi\in C^1({\mathbb T})$ which integrate to zero. For such $\psi$, there exists a function $\phi\in C^4({\mathbb T})$ such that $\phi^{(k)}$, $k=0,1,2$ are all periodic and such that $\phi_{xxx}=\psi$. We substitute $\phi_{xxx}=\psi$ into the lefthand side of~\eqref{weak-def-ii}, integrate by parts, and use the fact that $w=h_{xxx}$ satisfies~\eqref{weak-def}.
\end{proof}
So far, we have only established the \emph{form} of the PDEs governing $w$ and $h$. We must now actually compute the function $\hat J$. Note that according to $\eqref{Ef}$ the points $(\mathbb E\,\varbar{w}(t), \mathbb E\, \bar J\left({\bf w}_{N(x\pm\epsilon)}(t)\right))$ should lie on the curve $\{(\omega,\hat J(\omega))\mid\omega\in\mathbb R\}$. Thus, we can compute $\hat J$ numerically simply by interpolating these points! This is the essence of our numerical method, described in full in Section~\ref{sec:num}.
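The interpolation step can be sketched as follows. Here the pairs are synthetic, generated from a stand-in curve (we reuse the formula for $J$ purely as an example of an exponential nonlinearity), since the real input would be the simulation estimates of $(\mathbb E\,\varbar{w}, \mathbb E\,\bar J)$:

```python
import numpy as np

K = 1.0
J_hat_true = lambda w: 2.0 * np.exp(-3.0 * K) * np.sinh(K * w)  # stand-in curve

# Synthetic (E wbar, E Jbar) pairs, standing in for simulation estimates.
rng = np.random.default_rng(1)
wbar = rng.uniform(-2.0, 2.0, size=200)
Jbar = J_hat_true(wbar) + 1e-4 * rng.standard_normal(200)

# Interpolate the scattered points to obtain a numerical J-hat.
order = np.argsort(wbar)
J_hat = lambda w: np.interp(w, wbar[order], Jbar[order])

grid = np.linspace(-1.5, 1.5, 50)
max_err = np.max(np.abs(J_hat(grid) - J_hat_true(grid)))
```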
But is it possible to compute $\hat J$ analytically instead of resorting to numerics? To address this question, we need to explain why we expect a function $\hat J$ satisfying $\eqref{Ef}$ to exist in the first place. This has to do with the form of the \emph{local equilibrium} (LE) distribution of $\mathbf w_N$, discussed in Section~\ref{sec:LE}.
\subsection{Numerical Verification of Claim Assumptions}\label{subsec:theory-numeric} Let us now check numerically the assumptions of Claims~\ref{claim:meso} and~\ref{claim:PDE}. Namely, we need to check $\eqref{E}$, $\eqref{V}$,~\eqref{bd1}, and~\eqref{bd2}. Each of these conditions can be written in terms of expectations of the form $\mathbb E\,[f(\mathbf w_N(t))]$. For details on how we estimate such expectations numerically, see Section~\ref{subsec:setup}. \begin{figure}
\caption{In (a), we confirm $\eqref{E}$. The figure shows that $\mathbb E\,\bar{\bf w}_{N(x\pm\epsilon(N))}(t)$ converges as $N\to\infty$ to some $w$. We choose $\epsilon=\epsilon(N)$ as a proxy for taking a double limit; see the text for details. In (b), we confirm $\eqref{V}$. As expected, the variance of the window average goes to zero as $N\to\infty$ for each fixed $\epsilon$.}
\label{fig:E}
\label{fig:V}
\end{figure}
Figures~\ref{fig:E} and~\ref{fig:V} confirm that the two limits $\eqref{E}$ and $\eqref{V}$ hold. In both figures, $\mathbf w_N(t)$ is computed as the third order finite difference of $\mathbf h_N(t)$, generated from an initial condition $\mathbf h_N(0)$ satisfying~\eqref{init}, $\beta=3$, with $h_0(x) = 0.0075\sin(2\pi x)$, and such that $\mathbf w_N(0)$ satisfies~\eqref{init}, $\beta=0$, with $w_0=h_0'''$.
The double limit $\eqref{E}$ as $N\to\infty$ and $\epsilon\to0$ is delicate. This is because, as we show later in Figure~\ref{fig:gazon}, the profile $i/N\mapsto \mathbb E\, w_i$ varies roughly, but we want to show that its sliding window averages converge to a smooth limit. We cannot take $N\to\infty$ numerically, and for every finite $N$, if $\epsilon$ is small enough (e.g. smaller than $1/N$), the window average $\mathbb E\,\varbar w(t)$ will revert to being roughly varying. We therefore cannot take $\epsilon$ too small. We circumvent this problem with the following heuristic. For each $N$, we choose a ``good'' $\epsilon(N)$: for $\epsilon>\epsilon(N)$, $\mathbb E\,\varbar w(t)$ is smooth but biased, whereas for $\epsilon<\epsilon(N)$, it is unbiased but rough. We then check that $\mathbb E\,\overline{\bf w}_{N(x\pm\epsilon(N))}$ converges as $N\to\infty$, as shown in Figure~\ref{fig:E}. Figure~\ref{fig:V} shows that the variance of the window average decays as $N\to\infty$ for each fixed $\epsilon$, i.e. as the window size increases. This suggests that pairs $w_i, w_j$, $i\neq j$ are uncorrelated or have low correlation.
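The variance decay underlying $\eqref{V}$ is what one expects when the $w_i$ have low correlation: averaging over a window of $\approx2N\epsilon$ sites shrinks the variance roughly like the reciprocal of the window size. A toy sketch with i.i.d. entries (a strong simplification of the actual field):

```python
import numpy as np

rng = np.random.default_rng(2)
N, trials = 10_000, 200

def var_of_window_mean(eps):
    # Empirical variance of an average over a window of ~2*N*eps i.i.d. entries,
    # mimicking the variance of the mesoscopic window average under low correlation.
    k = max(1, int(2 * N * eps))
    return rng.standard_normal((trials, k)).mean(axis=1).var()

v_small_window = var_of_window_mean(0.001)   # ~20 sites per window
v_large_window = var_of_window_mean(0.05)    # ~1000 sites per window
```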
Figure~\ref{fig:Ef} shows that $\eqref{Ef}$ is satisfied for $f=J$ and $f(w)=w^2$, using the same sinusoidal initial condition. \begin{figure}
\caption{Here we confirm $\eqref{Ef}$ for the two functions $f(w)=J(w) = 2e^{-3K}\sinh(Kw)$ (left) and $f(w)=w^2$ (right). The existence of $\hat J$ (i.e. $\hat f$ for $f=J$) is the key reason a macroscopic dynamics emerges in the hydrodynamic limit. Estimation of $\hat J$ will enable us to determine the PDE numerically.}
\label{fig:Ef}
\end{figure} In the figure, we plot the points $(\mathbb E\,\overline{{\bf w}}_{i\pm N\epsilon}(t), \mathbb E\,\bar f({\bf w}_{i\pm N\epsilon}(t)))$, $i=1,\dots, N$, and confirm that they lie on a fixed curve in the $N,\epsilon$ limit. We take the double limit $N\to\infty,\,\epsilon\to0$ using the same heuristic as with $\eqref{E}$: for each $N$, we choose a ``good" $\epsilon(N)$ for the length of the averaging interval.
We now turn to the boundedness conditions~\eqref{bd1} and~\eqref{bd2}. The top middle panel of Figure~\ref{fig:gazon} depicts $\mathbb E\,[w_i(t)^2]$, $i=1,\dots, N$ at three points in time for $N=400$. This is evidence that $\mathbb E\,|w_i(t)|$ remains bounded over time and over $i=1,\dots, N$, since $\mathbb E\,|w_i|\leq \sqrt{\mathbb E\,[w_i^2]}$ and we see that $\max_i\mathbb E\,[w_i(t)^2]$ is decreasing in time. Meanwhile, the bottom panel shows that $\max_{i=1,\dots,N}\mathbb E\,|w_i(t)|$ remains bounded as $N$ increases. Similarly, the top right panel of Figure~\ref{fig:gazon} shows that $\mathbb E\,[J(w_i(t))]$ remains bounded over time and over $i=1,\dots, N$, while the bottom right panel shows it remains bounded as $N$ grows. \section{Local Equilibrium, but no Local Gibbs} \label{sec:LE}
Let us return to the questions posed at the end of Section~\ref{subsec:theory}: can we compute $\hat J$ analytically, and why should we expect $\hat J$ satisfying $\mathbb E\,\bar J({{\bf w}}_{{N(x\pm\epsilon)}})\approx \hat J(\mathbb E\,\overline{{\bf w}}_{N(x\pm\epsilon)})$ to exist at all? To address these questions, we first review the key ideas in~\cite{kat21} on ``smooth" and ``rough" local equilibrium (LE) states. We then show that ${\mathrm{Law}}(\mathbf z_N)$ is not a local Gibbs measure. \subsection{Local Equilibrium}\label{subsec:LE} This section reviews ideas from~\cite{kat21} and is primarily for the reader's convenience. Informally, a Markov jump process $\mathbf v_N$ has an LE state if there is an $M$-parameter family of distributions (where $M$ is fixed as $N\to\infty$) such that for each $t$ and $x$, the PDE-relevant information contained in the joint law of $\{v_i(t)\}_{i\in{N(x\pm\epsilon)}}$ is fully determined by a single measure in this family via some parameters $\lambda_1(t,x),\dots, \lambda_M(t,x)$ specifying this measure. What we mean by ``PDE-relevant" will become clear at the end of the section.
Here we will only discuss LE states which can be described by an $M=1$ parameter, \emph{mean-parameterized} family $\{\mu[\lambda]\mid\lambda\in\mathbb R\}$. Each $\mu[\lambda]$ is a probability mass function (pmf) on $\mathbb Z$, and ``mean-parameterized" means $\lambda = m_1(\mu[\lambda])$. The prototypical LE state takes the form \begin{equation}\label{smooth-LE-typical}{\mathrm{Law}}(\mathbf v_N(t))= \bigotimes_{i=1}^N\mu[v(t,i/N)]\end{equation} for a continuous function $v(t,\cdot):{\mathbb T}\to\mathbb R$, where $\otimes$ denotes taking a product of measures. Thus, the joint distribution $\{v_i(t)\}_{i\in{N(x\pm\epsilon)}}$ is fully determined by $\mu[v(t,x)]$, since the random variables $v_i(t)$, $i\in{N(x\pm\epsilon)}$ are independent and approximately distributed according to $\mu[v(t,x)]$ when $N\gg1$, $\epsilon\ll1$. Now, define $\hat f(v):=\mu[v](f)$, the expectation of $f$ under $\mu[v]$. Note that under~\eqref{smooth-LE-typical}, we have \begin{equation}\label{proto-Ef}\mathbb E\,[v_i]= v(t,i/N),\quad \mathbb E\, f(v_i) = \hat f(\mathbb E\, v_i).\end{equation} If there is a pmf $p$ dominating the measure family $\mu[\cdot]$ (see~\cite{kat21} for the details), then $\hat f$ is finite and continuous for any $f\in L^1(p)$. Moreover, by continuity of $v(t,\cdot)$ and $\hat f$, we can take mesoscopic averages of the equality $\mathbb E\, f(v_i)=\hat f(\mathbb E\, v_i)$ to conclude that $\eqref{Ef}$ is satisfied. Thus, for prototypical LE states, $\hat f$ exists thanks to the fact that the marginals ${\mathrm{Law}}(v_i)$ belong to a single mean-parameterized measure family.
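As a toy instance of a mean-parameterized family (not the family relevant to our process), take $\mu[\lambda]$ to be Poisson with mean $\lambda$. Then for $f(w)=w^2$ one has $\hat f(\lambda)=\mu[\lambda](f)=\lambda+\lambda^2$, which the following sketch confirms by direct summation of the pmf:

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def f_hat(lam, f, kmax=60):
    # hat f(lam) = mu[lam](f), the expectation of f under mu[lam] = Poisson(lam).
    # Truncation at kmax is harmless for moderate lam (the tail is negligible).
    return sum(f(k) * poisson_pmf(k, lam) for k in range(kmax))

lam = 3.7
val = f_hat(lam, lambda w: w ** 2)   # for f(w) = w^2: hat f(lam) = lam + lam^2
```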
Of course,~\eqref{smooth-LE-typical} is an idealized situation, and for general interacting particle systems we should not expect ${\mathrm{Law}}(\mathbf v_N(t))$ to be an exact prototypical LE state. But the prototypical LE state --- in particular the equalities~\eqref{proto-Ef} --- serves as inspiration for our definition of \emph{smooth} LE states: \begin{definition}[Smooth LE State~\cite{kat21}] We say a process $\mathbf v_N$ has a \emph{\textbf{smooth}} LE state if~\eqref{V} and the following hold for each $t>0$ (dependence on $t$ omitted below): \begin{align*} \label{Ep}&\text{There exists a continuous $v:{\mathbb T}\to\mathbb R$ such that}\tag{E$\,'$}\\ &\lim_{N\to\infty}\mathbb E\,[ v_{Nx+k}]= v(t,x),\quad\forall \,x\in{\mathbb T}, \,k=0,1,2,\dots\text{ fixed}\\ \label{Efp}&\text{For all ``suitable" }f,\text{ there exists a continuous }\hat f\text{ such that }\tag{Ef$\,'$}\\ &\mathbb E\, f( v_{Nx+k})\stackrel{N}{\approx}\hat f\left(\mathbb E\, v_{Nx+k}\right), \quad \forall \,x\in{\mathbb T},\,k=0,1,2,\dots\text{ fixed}. \end{align*}
\end{definition}
The notation $A\stackrel{N}{\approx}B$ means $|A-B|\to0$ as $N\to\infty$. \begin{remark} The class of ``suitable" functions $f$ for which $\eqref{Efp}$ is satisfied will depend on the LE state. For the prototypical LE state~\eqref{smooth-LE-typical} with dominating pmf $p$, this class consists of the functions $f\in L^1(p)$. \end{remark} By contrast, \begin{definition}[Rough LE State~\cite{kat21}] We say $ \mathbf v_N$ has a \emph{\textbf{rough}} LE state if $\eqref{V}$, $\eqref{E}$, and $\eqref{Ef}$ hold for each $t>0$, while
$\eqref{Ep}$ and $\eqref{Efp}$ do \emph{\textbf{not}}. \end{definition} To explain the reason for the names ``smooth" and ``rough", we define ``smoothly" and ``roughly" varying as follows. \begin{definition}[\cite{kat21}] We say a sequence of vectors $\mathbf u_N\in\mathbb R^N$, $N=1,2,\dots$ is \emph{\textbf{smoothly}} varying in a neighborhood of $x$ if for any finite $R\geq 0$ we have
$$\lim_{N\to\infty}\max_{|i-Nx|\leq R}|u_{i+1}-u_i| = 0.$$ Otherwise, ${\mathbf u}_N $ is \emph{\textbf{roughly}} varying. \end{definition} It is straightforward to see that if $\mathbf v_N$ has a smooth LE, then the expectations $(\mathbb E\, v_i)_{i=1}^N$ and $(\mathbb E\, f(v_i))_{i=1}^N$ are smoothly varying in the neighborhood of each $x$.
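The dichotomy can be illustrated numerically. In this hypothetical sketch, $u_i=\cos(2\pi i/N)$ is smoothly varying near every $x$, while adding a parity oscillation produces a roughly varying sequence:

```python
import numpy as np

def max_local_increment(u, x, R):
    """max |u_{i+1} - u_i| over indices i with |i - N x| <= R, where N = len(u)."""
    N = len(u)
    c = int(round(N * x))
    lo, hi = max(c - R, 0), min(c + R, N - 2)
    return max(abs(u[i + 1] - u[i]) for i in range(lo, hi + 1))

x, R = 0.25, 5
for N in (100, 1000, 10000):
    i = np.arange(N)
    smooth = np.cos(2 * np.pi * i / N)       # u_i = v(i/N) for a continuous v
    rough = smooth + 0.5 * (-1.0) ** i       # parity oscillation added

# smooth increments vanish as N grows; the rough ones stay order one (N = 10000 here)
assert max_local_increment(smooth, x, R) < 0.01
assert max_local_increment(rough, x, R) > 0.5
```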
We have already shown in Figures~\ref{fig:E},~\ref{fig:V}, and~\ref{fig:Ef} that $\eqref{V}$, $\eqref{E}$, and $\eqref{Ef}$ are satisfied for the Metropolis $\mathbf w_N$ process. Let us now show that $\eqref{Ep}$ and $\eqref{Efp}$ are not satisfied. It is sufficient to show that $(\mathbb E\, w_i)_{i=1}^N$ and $(\mathbb E\, f(w_i))_{i=1}^N$, for some $f$, are roughly varying. Consider Figure~\ref{fig:gazon}, which depicts the observable expectations $\mathbb E\, w_i$, $\mathbb E\, w_i^2$, and $\mathbb E\, J(w_i)$. We see that $\mathbb E\, w_i$ and $\mathbb E\, w_i^2$ are roughly varying, since the rough variation persists as $N$ increases (bottom panel). It also persists as time evolves (top panel). Thus, we have confirmed that $\mathbf w_N$ has a rough LE state. This is itself an interesting fact; it shows that the rough LE state discovered in~\cite{kat21} is not an isolated phenomenon.
\begin{figure}
\caption{Here we show the expectations of three observables of the $\mathbf w_N$ process at different points in time, with $N=400$. In the left panel, the thin white line is the initial expected profile, taken to be $\mathbb E\,[w_i(0)]=\mathrm{const.}\times\cos(2\pi i/N)$ exactly. Despite this smooth initial condition, we see that the $\mathbb E\,[w_i(t)]$ profile roughens over time. The $\mathbb E\,[w_i^2]$ profile is also roughly varying, although interestingly enough, $\mathbb E\,[J(w_i)]$ is smoothly varying.}
\label{fig:gazon-time-evoln}
\caption{The rough variation observed in (a) persists also as $N$ increases, confirming that $\mathbf w_N$ does \emph{not} have a smooth LE state. }
\label{fig:gazon-N}
\label{fig:gazon}
\end{figure}
Based on these observables, we see that the qualitative properties of the local equilibrium state of the Metropolis process $\mathbf w_N$ are very similar to those of the Arrhenius $\mathbf w_N^{\text{arr}}$. For the Arrhenius process, the points $(i/N, \mathbb E\, w_i^{\text{arr}})$ also form a ``cloud'' with well-defined boundaries, which narrows near integer values of the range. In addition, despite the fact that both processes have a rough LE, the key function $f$ (whose corresponding $\hat f$ arises in the PDE) has the property that $(\mathbb E\, f(w_i))_i$ is smoothly varying in both cases. These qualitative similarities between the Arrhenius and Metropolis LE are interesting because, as we will show at the end of this section, there is a key difference between the local equilibrium measures of the two processes: for the Arrhenius process, ${\mathrm{Law}}(\mathbf w_N^{\text{arr}})$ is induced by a local Gibbs distribution on $\mathbf z_N^{\text{arr}}$, whereas for the Metropolis process, $\mathbf z_N$ does \emph{not} follow a local Gibbs distribution. \begin{remark}The phenomenon of narrowing near the integers is explained in~\cite{kat21} for the Arrhenius process, but the explanation relies on the local Gibbs assumption, which is not valid for the Metropolis process.\end{remark}
Let us return to our main goal: computing $\hat f$ for $f=J$. Why might we expect $\hat f$ to exist for a rough LE state like that of the Metropolis $\mathbf w_N$? There cannot possibly be a mean-parameterized measure family describing each ${\mathrm{Law}}(w_i)$, because this would imply $\mathbb E\,[J(w_i)]$ can be expressed as a function of $\mathbb E\,[w_i]$. Plotting the former against the latter confirms this is not the case (figure not shown). To answer the question, it is insightful to return to $\mathbf w_N^{\text{arr}}$, observed in~\cite{kat21} to have a rough LE state. In that paper, we first confirmed that the distributions ${\mathrm{Law}}(w_i^{\text{arr}})$ are the pmfs induced by ${\mathrm{Law}}(\mathbf z_N^{\text{arr}})=\rho[{\bm\lambda}]$, the local Gibbs measure defined in~\eqref{gibbs-measure}. We then used this explicit knowledge to show that while ${\mathrm{Law}}(w_i^{\text{arr}})$ is \emph{not} mean-parameterized, we \emph{do} have that \begin{equation}\label{meso-measure-av}\bar \mu_{N(x\pm\epsilon)}:=\frac{1}{2N\epsilon}\sum_{\argidxsetx i}{\mathrm{Law}}(w_i^{\text{arr}}) \approx \mu[\mathbb E\,\overline{\bf w}^{\text{arr}}_{N(x\pm\epsilon)}]\end{equation} for some mean-parameterized family $\mu[\cdot]$. As a result, defining $\hat f(\omega)=\mu[\omega](f)$, we see that $$\mathbb E\, \bar f({\mathbf w}^{\text{arr}}_{N(x\pm\epsilon)})=\bar\mu_{N(x\pm\epsilon)}(f) \approx \mu[\mathbb E\,\overline{\bf w}^{\text{arr}}_{N(x\pm\epsilon)}](f)=\hat f(\mathbb E\,\overline{\bf w}^{\text{arr}}_{N(x\pm\epsilon)}).$$ The first equality uses linearity of expectation with respect to measures; e.g. if $2N\epsilon=2$ and $\mu_i={\mathrm{Law}}(w_i^{\text{arr}})$, we are using that $(\int fd\mu_1 + \int fd\mu_2 )/2 = \int fd(\mu_1+\mu_2)/2$.
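The mixture identity invoked in the first equality is elementary; the following toy check, with two arbitrary pmfs on five points, makes it explicit:

```python
import numpy as np

# Two pmfs on {0,...,4} and a test function f; checks
# (mu1(f) + mu2(f))/2 == ((mu1 + mu2)/2)(f), the mixture identity used above.
mu1 = np.array([0.1, 0.2, 0.3, 0.2, 0.2])
mu2 = np.array([0.4, 0.1, 0.1, 0.1, 0.3])
f = np.array([1.0, -2.0, 0.5, 3.0, 0.0])

lhs = 0.5 * (mu1 @ f + mu2 @ f)        # average of expectations
rhs = (0.5 * (mu1 + mu2)) @ f          # expectation under the averaged measure
assert abs(lhs - rhs) < 1e-12
```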
\begin{remark} The Arrhenius LE state shows that the PDE-relevant information contained in the joint law of $\{v_i(t)\}_{i\in{N(x\pm\epsilon)}}$ is the measure $\bar \mu_{N(x\pm\epsilon)}$. \end{remark} Due to the qualitative similarity between the LE state of the Arrhenius $\mathbf w_N^{\text{arr}}$ and the Metropolis $\mathbf w_N$, we speculate that the reason for the existence of $\hat J$ is the same for the two LE states: there is some parameterized measure family to which mesoscopic averages of ${\mathrm{Law}}(w_i)$ all belong. This is supported by Figure~\ref{fig:Ef}, which confirms $\eqref{Ef}$ both for $f(w)=J(w)$ and $f(w)=w^2$. To compute $\hat J$ explicitly, however, we would need to know this measure family. But our only guess is the family induced by a local Gibbs product measure on ${\mathrm{Law}}(\mathbf z_N)$, and we will now show that this guess is incorrect. \subsection{Local Gibbs Approximation: False but Numerically Useful} \label{subsec:gibbs}
Recall from~\eqref{gibbs-measure} the form of the local Gibbs product measure $\rho[\bml]$. By completing the square in the exponent, we can also write $$\rho[\bml] = \otimes_i\rho[\lambda_i],\quad\text{where }\quad\rho[\lambda](n) = \frac{e^{-K(n-\lambda)^2}}{\mathcal Z(\lambda)},\quad n\in\mathbb Z.$$ Here, $\mathcal Z(\lambda)=\sum_{m=-\infty}^\infty \,\mathrm{exp}(-K(m-\lambda)^2).$ To show that ${\mathrm{Law}}(\mathbf z_N(t))$ is not a local Gibbs distribution for any ${\bm\lambda}$, consider the following specially chosen observables: $$f_i^\pm(\mathbf z) = \,\mathrm{exp}\left(\pm 2K(z_{i-1}-2z_i + z_{i+1})\right).$$ We will compare the expectation of the $f_i^\pm$ under the local Gibbs measure and under the true measure.
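Before turning to the exact computation, the relevant facts about $\rho[\lambda]$ can be verified numerically by truncated summation. The sketch below (with arbitrary $\lambda_{i-1},\lambda_i,\lambda_{i+1}$) confirms the period-1 property of $\mathcal Z$ and the product identity $\rho[{\bm\lambda}](f_i^+)\,\rho[{\bm\lambda}](f_i^-)=e^{12K}$ derived in the computation that follows:

```python
import numpy as np

K = 1.0
n = np.arange(-60, 61)  # truncation of Z; e^{-K n^2} decays extremely fast

def Z(lam):
    return np.sum(np.exp(-K * (n - lam) ** 2))

def rho_exp(lam, c):
    """rho[lam](e^{cKz}) by direct truncated summation (exponents combined
    to avoid overflow)."""
    return np.sum(np.exp(c * K * n - K * (n - lam) ** 2)) / Z(lam)

rng = np.random.default_rng(1)
lam = rng.normal(size=3)  # (lam_{i-1}, lam_i, lam_{i+1}), arbitrary

def F(sign):
    # rho[lam](f_i^{+/-}) factors over the three sites by independence
    return rho_exp(lam[0], sign * 2) * rho_exp(lam[1], -sign * 4) * rho_exp(lam[2], sign * 2)

assert abs(Z(0.3) - Z(1.3)) < 1e-10                       # Z has period 1
assert abs(F(+1) * F(-1) / np.exp(12 * K) - 1) < 1e-8     # product identity
```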
Now, we showed in Section 6.1 of~\cite{kat21} that \begin{equation}\label{exp-expect-gibbs}\rho[\lambda](e^{cKz}) = \,\mathrm{exp}(c^2K/4 + cK\lambda)\frac{\mathcal Z(\lambda+c/2)}{\mathcal Z(\lambda)}.\end{equation} Using this formula, the fact that $\mathcal Z$ is a period-1 function, and the independence of the $z_i$ under the product measure $\rho[\bml]$, we compute \beqs\label{f-expect-gibbs} \rho[{\bm\lambda}]&(f_i^\pm) = \,\mathrm{exp}\left(6K\pm 2K(\lambda_{i+1}-2\lambda_i + \lambda_{i-1})\right),\\ &\implies\rho[{\bm\lambda}]\left( f_i^+\right)\times\rho[{\bm\lambda}]\left(f_i^-\right) \equiv e^{12K}\eeqs for all $i$ and regardless of ${\bm\lambda}$. Thus, we can confirm that ${\mathrm{Law}}(\mathbf z_N(t))$ is not a local Gibbs measure by showing that under the true measure, \begin{equation}\label{log-prod}\log\big(\mathbb E\,\left[ f_i^+(\mathbf z_N(t))\right]\times\mathbb E\, \left[f_i^-(\mathbf z_N(t))\right]\big)\neq 12K.\end{equation} This is shown in Figure~\ref{fig:no-gibbs}, with $K=1$. Interestingly, the constant $12K$ seems to be a tight lower bound for the left-hand side of~\eqref{log-prod}. \\ \begin{wrapfigure}{r}{0.46\textwidth}
\centering \includegraphics[width=0.46\textwidth]{Met_Figures/met_noGibbs.png} \caption{This figure confirms~\eqref{log-prod}, which implies that for the Metropolis process, ${\mathrm{Law}}(\mathbf z_N(t))$ is not given by a local Gibbs measure. Here, $K=1$, and we see that $12K=12$ is a lower bound for the quantity on the left-hand side of~\eqref{log-prod}.} \label{fig:no-gibbs}
\end{wrapfigure} \noindent\textbf{Useful Numerical Estimate.} As we will explain in Section~\ref{subsec:hatJ}, the estimate $\hat J_{\text{gibbs}}$ of $\hat J$ obtained by assuming ${\mathrm{Law}}(\mathbf z_N)=\rho[{\bm\lambda}]$ is a useful baseline estimate. To obtain $\hat J_{\text{gibbs}}$ we must be able to write $\rho[{\bm\lambda}](\fxnbar J w)$ as a function of $\mathbb E\,\varbar w$. To do so, we first note that $\lambda_i$ must be given by $\lambda_i=\lambda(\mathbb E\, z_i)$, where the function $\lambda$ is the inverse of $\lambda\mapsto m_1(\rho[\lambda])$. Now, $\rho[{\bm\lambda}](\fxnbar J w)$ is a function of $\lambda_i$, $i\in{N(x\pm\epsilon)}$ and therefore it is some further function of the $\mathbb E\,[z_i]$.
For general $K$, it is unclear whether this further function depends on the $\mathbb E\,[z_i]$ only through $\mathbb E\,\varbar w$, i.e. whether a function sending $\mathbb E\,\varbar w$ to $\rho[{\bm\lambda}](\fxnbar J w)$ exists at all. This is because the function $\lambda$ depends nonlinearly on $\mathbb E\,[z_i]$. However, when $K$ is small, simplifying approximations make this possible, and one obtains \begin{equation}\label{Jgibbs}\hat J_\gibbs(\omega) = 2e^{-3K/2}\sinh(K\omega).\end{equation} See~\cite{gao2020} for the computation of this function. We do not bother obtaining a more exact estimate of $\hat J_{\text{gibbs}}$ for larger $K$ since the local Gibbs distribution is incorrect. We only need a baseline estimate which will help us compute the true $\hat J$, and it will turn out that the estimate~\eqref{Jgibbs} suits our needs, even for larger $K$.
We note that the discrepancy observed in~\cite{gao2020} between $\mathbf h_N$, $N\to\infty$ and the solution to the PDE $h_t=-\partial_x\hat J_\gibbs(h_{xxx})$, is \emph{not} caused by small $K$ approximations. The simulations in that paper take $K=0.25$, for which $\hat J_\gibbs$ is a very accurate estimate of the local Gibbs expectation. Rather, the discrepancy is caused by the fact that the local Gibbs distribution is not the correct LE state.
\section{Numerical Implementation}\label{sec:num}
We begin in Section~\ref{subsec:setup} by describing how we simulate the Metropolis dynamics and compute expectations of observables. In Section~\ref{subsec:hatJ}, we explain in more detail how we compute the function $\hat J$. Finally, in Section~\ref{subsec:PDE-confirm}, we confirm that we computed the function $\hat J$ correctly: we show that the microscopic processes $\mathbf w_N$ and $\mathbf h_N$ converge to the solutions of the PDEs~\eqref{w-PDE} and~\eqref{h-PDE}, respectively, and \emph{not} to the corresponding PDEs with $\hat J_\gibbs$.
\subsection{Set Up}\label{subsec:setup} Since the microscopic dynamics is a Markov jump process, the path $\{\mathbf h_N(t)\}_{t\geq0}$ is a step function, with $\mathbf h_N(t) = \mathbf h_{k}$ when $t\in [t_k, t_{k+1})$. Therefore, simulating the process in a time interval $[0,T]$ amounts to drawing the pairs $(\mathbf h_k, t_k)$, $t_k\leq T$, according to the law of the process $\mathbf h_N$. We do so using the Kinetic Monte Carlo algorithm (KMC)~\cite{KMC}, presented in Algorithm~\ref{alg:KMC} below. Note that the algorithm uses rates $r^{i,j}$ and rescales time, which is equivalent to using rates $N^4r^{i,j}$. \begin{algorithm2e} \caption{KMC algorithm to simulate Markov jump processes}\label{alg:KMC} \KwInput{$N$, initial value ${\bf h}\in \mathbb Z^N$, macroscopic end time $t$, rates $r^{i,j}$.} \KwOutput{Jump times $t_1, t_2, t_3,\dots$ and states ${\bf h}_1,{\bf h}_2,{\bf h}_3,\dots$} $s \gets 0$, $k \gets 0$, ${\bf h}_k\gets {\bf h}$\; \While{$s \leq N^4t$}{
$R\gets \sum_{|i-j|=1}r^{i,j}({\bf h}_k)$\;
Draw $T\sim\mathrm{Exp}(R)$\;
Draw ${\bf h}_{k+1}$ from the pdf $p({\bf h}^{i,j}) = r^{i,j}({\bf h}_k)/R$\;
$s \gets s + T$, $t_k\gets N^{-4}s$, $k \gets k+1$ } \end{algorithm2e} Given a smooth macroscopic initial condition $h_0(x)$, we initialize KMC with a microscopic height profile $\mathbf h_N(0)$ drawn from \begin{equation}\label{init:num}\mathbf h_N(0) \sim \left(\left\lfloor N^3h_0\left(\frac iN\right)\right\rfloor + \xi_i\right)_{i=1}^{N},\quad \xi_i\sim \text{Bernoulli}(p_i),\end{equation} where the $\xi_i$ are independent and $p_i$ is the fractional part of $N^3h_0(i/N)$. Thus $\mathbb E\,[h_i(0)]=N^3h_0(i/N)$ exactly. This fact and the independence of the $\xi_i$ ensure that~\eqref{init}, $\beta=0$, is satisfied for $\mathbf w_N$ and that $N^{-1}\sum_{i=1}^Nh_i(0)$ converges to a constant.
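For concreteness, here is a compact Python sketch of Algorithm~\ref{alg:KMC} together with the initialization~\eqref{init:num}. The rate function below is a uniform stand-in (the actual Metropolis rates $r^{i,j}$ are defined earlier in the paper), so only the KMC skeleton and the Bernoulli rounding are faithful to the text.

```python
import numpy as np

rng = np.random.default_rng(2)

def init_profile(h0, N):
    """Draw h_N(0) as in the initialization scheme: floor(N^3 h0(i/N)) plus an
    independent Bernoulli with the fractional part as success probability."""
    v = N**3 * h0(np.arange(1, N + 1) / N)
    fl = np.floor(v)
    return (fl + (rng.random(N) < v - fl)).astype(int)

def rates(h):
    """Stand-in rates, one per (site, direction) move; NOT the paper's r^{i,j}."""
    return np.ones(2 * len(h))

def kmc(h, t_end, N):
    """Kinetic Monte Carlo: jump times and states of the time-rescaled process."""
    h, s, path = h.copy(), 0.0, []
    while s <= N**4 * t_end:
        r = rates(h)
        R = r.sum()
        s += rng.exponential(1.0 / R)        # T ~ Exp(R)
        k = rng.choice(len(r), p=r / R)      # pick move k with probability r_k / R
        i, d = k % N, 1 if k < N else -1
        h[i] -= 1; h[(i + d) % N] += 1       # transfer one unit to a neighbor
        path.append((s / N**4, h.copy()))    # macroscopic time t_k = N^{-4} s
    return path

N = 16
h_init = init_profile(lambda x: 1e-3 * np.cos(2 * np.pi * x), N)
path = kmc(h_init, t_end=1e-4, N=N)
assert len(path) > 0
assert all(hk.sum() == h_init.sum() for _, hk in path)  # moves conserve sum(h)
```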
Most of the quantities we need to compute are observables $\mathbb E\,[f(\mathbf w_N(t))]$ of the third order FD process $\mathbf w_N(t)$. We estimate these by drawing $M$ independent initial conditions $\mathbf h_N^{(k)}(0)$, $k=1,\dots, M$ from the distribution~\eqref{init:num}, evolving them forward using KMC, and then estimating \begin{equation}\label{sample-av} \mathbb E\,[f(\mathbf w_N(t))]\approx \frac 1M\sum_{k=1}^Mf(\mathbf w^{(k)}_{N}(t)), \end{equation} where $\mathbf w_N^{(k)}$ is the third order FD of $\mathbf h_N^{(k)}$. Some observables $f$ we need to compute (such as the current observable $f(w) = \sinh(Kw)$) have extremely high variance, and the number of samples $M$ needed to sufficiently reduce the sample variance of~\eqref{sample-av} is intractable. For such observables, we can reduce the variance further by integrating over a small time window: \begin{equation}\label{sample-time-av} \mathbb E\,[f(\mathbf w_N(t))]\approx \frac{1}{\Delta}\int_{I_{t,\Delta}}\mathbb E\,[f(\mathbf w_N(s))]ds \approx \frac 1M\sum_{k=1}^M\frac{1}{\Delta}\int_{I_{t,\Delta}}f(\mathbf w^{(k)}_{N}(s))ds,\quad \Delta\ll1.\end{equation} Here, $I_{t,\Delta}$ is some time interval of length $\Delta$ containing $t$.
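Because the paths are step functions, the inner time integral in~\eqref{sample-time-av} reduces to a finite sum over jump intervals. A minimal sketch, with hypothetical jump data:

```python
import numpy as np

def time_average(ts, vals, f, a, b):
    """Exact (1/(b-a)) * int_a^b f(w(s)) ds for a step path with w(s) = vals[k]
    on [ts[k], ts[k+1]); ts must cover [a, b]."""
    total = 0.0
    for k in range(len(ts) - 1):
        lo, hi = max(ts[k], a), min(ts[k + 1], b)
        if hi > lo:
            total += (hi - lo) * f(vals[k])
    return total / (b - a)

# step path: w = 1 on [0, 0.5), w = 3 on [0.5, 1.0)
ts = np.array([0.0, 0.5, 1.0])
vals = np.array([1.0, 3.0])
avg = time_average(ts, vals, lambda w: w**2, 0.0, 1.0)
assert abs(avg - (0.5 * 1 + 0.5 * 9)) < 1e-12  # exact, no quadrature error
```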
We can compute the time integrals in~\eqref{sample-time-av} exactly since the paths of the process are step functions. See Appendix~\ref{app:num} for justification of the time average approximation to the expectations $\mathbb E\,[f(\mathbf w_N(t))]$. \begin{figure}
\caption{Initial Conditions. We take $c=0.0075$ in the first (leftmost) column, $c=0.001$ in the second column, and $c=0.003$ in the third column. The initial condition $h_0(x)=c\sin^2(2\pi x)$ in the third column is reserved for computing $\hat J$.}
\label{fig:init}
\end{figure}
Figure~\ref{fig:init} depicts the initial conditions (ICs) we used in our simulations. The rightmost IC is reserved for computing the function $\hat J$ in the PDE. We use the other two ICs to confirm the resulting $\hat J$ gives the correct PDE.
\subsection{Computation of Current}\label{subsec:hatJ} In this section, we will describe our procedure to compute $\hat J$, for which we will use statistics collected from a process with initial condition given by the third column in Figure~\ref{fig:init}, and $K=2$.
As we mentioned in Section~\ref{subsec:theory}, our general strategy for computing $\hat J$ is to interpolate the points $(\mathbb E\,\varbar w(t),\, \mathbb E\, \bar J({\bf w}_{N(x\pm\epsilon)}(t)))$ computed at multiple $(t,x)$. However, we will need to refine this strategy slightly to incorporate new information and to make numerical estimation more convenient. The new information we have is that, as seen in Figure~\ref{fig:gazon}, the expectations $\mathbb E\, J(w_i)$ vary smoothly with $i$. Therefore we will not average the expected current over space. The next modification is to replace expectations at a single time $t$ with expectations integrated over $s\in I_{t,\Delta}$. As explained above, sample average estimates of these time-integrated quantities have much lower variance. In sum, we replace $(\mathbb E\,\varbar w(t),\, \mathbb E\, \bar J({\bf w}_{N(x\pm\epsilon)}(t)))$ with $(\omega_{\epsilon,\Delta}(t,x), \mathcal J_\Delta(t,x))$, where
\beqs\label{bar-omega-J} \omega_{\epsilon,\Delta}(t,x) := \frac1\Delta\int_{I_{t,\Delta}}\mathbb E\,\varbar w(s)ds, \eeqs and \beqs\mathcal J_{\Delta}(t,x) := \frac1\Delta\int_{I_{t,\Delta}}\mathbb E\, J(w_{\lfloor Nx\rfloor}(s))ds.\eeqs In what follows, we will write $\omega_{\epsilon,\Delta}(t,x)$ and $\mathcal J_\Delta(t,x)$ to denote our \emph{sample average estimates} of these quantities, computed as in~\eqref{sample-time-av}. We also sometimes abbreviate notation by writing $\omega_{\epsilon,\Delta}=\omega_{\epsilon,\Delta}(t,x)$ and $\mathcal J_{\Delta}=\mathcal J_{\Delta}(t,x)$ to denote these quantities at a generic point $(t,x)$.
Next, we need to address an important issue with the strategy of interpolating $(\omega_{\epsilon,\Delta}, \mathcal J_\Delta)$ to compute the function $\hat J$: namely, we need the $x$-coordinates $\omega_{\epsilon,\Delta}$ to span all of $\mathbb R$ in order to accurately estimate the value of $\hat J(\omega)$ for $\omega$ ranging over all of $\mathbb R$. This is where the ``baseline estimate" $\hat J_\gibbs$ comes in handy. Conveniently, $\hat J(\omega)/\hat J_\gibbs(\omega)$ \emph{rapidly levels out to a constant asymptote as} $|\omega|\to\infty$! Therefore, it will be much simpler to estimate \begin{equation}\label{sigmaK}\sigma(\omega):=\hat J(\omega)/\hat J_\gibbs(\omega),\end{equation} and then obtain $\hat J$ as $\hat J=\sigma\hat J_\gibbs$. We can estimate $\sigma(\omega)$ by \emph{interpolating} the points
\begin{equation}\label{epsDelt-points}\bigg\{\bigg(\omega_{\epsilon,\Delta}(t,x),\;\frac{\mathcal J_\Delta(t,x)}{\hat J_\gibbs(\omega_{\epsilon,\Delta}(t,x))}\bigg)\;\bigg\vert\; x=\frac1N,\frac2N,\dots, 1\bigg\}\end{equation} inside a bounded domain, and \emph{extrapolating} to a constant outside of it.
This strategy raises a new issue, which is that both $\mathcal J_{\Delta}$ and $\hat J_\gibbs(\omega_{\epsilon,\Delta})$ approach zero when $\omega_{\epsilon,\Delta}\to0$. We will address this issue shortly. First let us choose suitable parameters $t=t_N$, $\epsilon=\epsilon_N$, $\Delta=\Delta_N$ for each of $N=1000,2000,4000$. We will then plot three curves of points $\{(\omega_{\epsilon,\Delta},\;\mathcal J_\Delta/\hat J_\gibbs(\omega_{\epsilon,\Delta}))\}$ corresponding to these three values of $N$, to ensure that the curves have converged.
\textbf{Step 1: Choose $t$.} We observe that $\sup_i|\mathbb E\, w_i(t)|$ decreases with time, so for the purpose of generating points $\omega_{\epsilon,\Delta}(t,x)$ which span a large interval, it is better to take $t$ small. On the other hand, $t$ must be sufficiently large that the process has had time to \emph{locally equilibrate}. As $N\to\infty$, the ``burn-in" time $t_N$ until local equilibration --- i.e. until we can expect the crucial condition $\eqref{Ef}$ of a rough LE state to be satisfied --- should go to zero. In other words, equilibration occurs instantaneously when $N\to\infty$. However, for finite $N$ we must be careful to wait sufficiently long so as to avoid collecting statistics $(\omega_{\epsilon,\Delta}(t,x), \mathcal J_{\Delta}(t,x))$ from a pre-LE distribution. In order to determine whether a given time $t$ is past burn-in for a fixed $N$, we do the following heuristic test: first, we check that the points $(\omega_{\epsilon,\Delta}(t,x), \mathcal J_{\Delta}(t,x))$, $x=i/N$, all lie on a single curve rather than forming a scattered cloud. Second, we check that for $t'>t$, the corresponding points lie on the same curve as at time $t$.
\textbf{Step 2: Choose $\epsilon,\Delta$}. We choose appropriate values $\epsilon=\epsilon_N$ and $\Delta=\Delta_N$ as follows: we plot the points~\eqref{epsDelt-points} (with $t=t_N$) for a range of $\epsilon$ and $\Delta$, and look for $\epsilon_N$, $\Delta_N$ which lead to curves which are neither too biased compared to the curve corresponding to the smallest $\epsilon,\Delta$, nor too noisy. Figure~\ref{fig:sig-eps-Delt} depicts these curves for fixed $N=4000$ and several values of $\epsilon,\Delta$. We omit points in the set~\eqref{epsDelt-points} for which $|\omega_{\epsilon,\Delta}|<\delta_0$ for a $\delta_0\ll1$. We see in the figure that the effect of varying $\Delta$ is much less significant than the effect of varying $\epsilon$. For $N=4000$, we take $\epsilon=0.0015$ and $\Delta=4\times 10^{-10}$. \begin{figure}
\caption{Effect of varying $\epsilon$ and $\Delta$ on the curve of points $(\omega_{\epsilon,\Delta}, \mathcal J_\Delta/\hat J_\gibbs(\omega_{\epsilon,\Delta}))$ at fixed $N=4000$, and $K=2$. We see that varying $\epsilon$ has a more significant effect than does varying $\Delta$. Based on this plot, we choose $\epsilon_N=0.0015$ and $\Delta_N=4\times10^{-10}$.}
\label{fig:sig-eps-Delt}
\end{figure} Using this procedure for $N=1000,2000$, we take $\epsilon_N=0.003,0.002$, respectively, and $\Delta_N=4\times10^{-10}$ for both.
\textbf{Step 3: ``Fill in" the curve near zero.} Having identified $\epsilon_N$ and $\Delta_N$, we next ``fill in'' the curve of points~\eqref{epsDelt-points} in the neighborhood $|\omega|<\delta_0$. We do so using the numerical observation that $\sigma$ has a local (and global) minimum at zero. This implies that for small values of $\omega$ we should have $\sigma(\omega)\approx a+ b\omega^2$ for some values $a,b$. We find optimal $a=a_N$, $b=b_N$ for each $N$ by solving \begin{equation}\label{ab}
(a_N, b_N) = \arg\min_{(a,b)}\sum_{|\omega_{\epsilon,\Delta}|<\delta_1}\bigg(\mathcal J_{\Delta} - (a + b\,\omega_{\epsilon,\Delta}^2)\hat J_\gibbs(\omega_{\epsilon,\Delta})\bigg)^2,
\end{equation} where $\epsilon=\epsilon_N$, $\Delta=\Delta_N$, and the sum is over all points in the set~\eqref{epsDelt-points} such that $|\omega_{\epsilon,\Delta}|<\delta_1$. Here, $\delta_1$ is small but greater than $\delta_0$. We take $\delta_1>\delta_0$ in order to obtain a smoother transition between the quadratic approximation near the origin and the remaining curve.
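The minimization~\eqref{ab} is an ordinary linear least-squares problem, since the residual is linear in $(a,b)$. A sketch with synthetic data generated from known coefficients (the data here are hypothetical, not simulation output):

```python
import numpy as np

def fit_ab(omega, J, Jg):
    """Solve min_{a,b} sum (J - (a + b omega^2) * Jg)^2: linear least squares
    with design matrix [Jg, omega^2 * Jg]."""
    A = np.column_stack([Jg, omega**2 * Jg])
    (a, b), *_ = np.linalg.lstsq(A, J, rcond=None)
    return a, b

# synthetic check: data generated with known (a, b) are recovered
K = 2.0
Jg = lambda w: 2 * np.exp(-1.5 * K) * np.sinh(K * w)
omega = np.linspace(-0.5, 0.5, 41)
omega = omega[np.abs(omega) > 0.01]       # drop the 0/0 region near zero
J = (1.3 + 0.7 * omega**2) * Jg(omega)
a, b = fit_ab(omega, J, Jg(omega))
assert abs(a - 1.3) < 1e-8 and abs(b - 0.7) < 1e-8
```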
Next, we visually confirm that the filled in curves converge as $N$ increases. This is shown in Figure~\ref{fig:converge}. \begin{figure}
\caption{Convergence with $N$ of the curve of points $(\omega_{\epsilon,\Delta}, \mathcal J_\Delta/\hat J_\gibbs(\omega_{\epsilon,\Delta}))$, where $\epsilon=\epsilon_N$, $\Delta=\Delta_N$ are chosen using the procedure described in the text. Near the origin, we substitute $ \mathcal J_\Delta/\hat J_\gibbs(\omega_{\epsilon,\Delta})$ by $a_N+b_N\omega_{\epsilon,\Delta}^2$, where $a_N,b_N$ minimize~\eqref{ab}.}
\label{fig:converge}
\end{figure} Finally, we use the $N=4000$ curve to compute the function $\sigma$, by fitting a smoothing spline to it. We fit the spline inside a bounded interval (e.g. $[-2.5,2.5]$ for $K=2$) by calling MATLAB's \texttt{fit} routine with the option ``smoothingspline". This routine implements the following minimization: \beqs\label{spline}
\sigma= \arg\min_{\text{splines } s}&\;\;\lambda\!\!\sum_{|\omega_{\epsilon,\Delta}|>\delta_0}\left(\mathcal J_\Delta/\hat J_\gibbs(\omega_{\epsilon,\Delta})- s(\omega_{\epsilon,\Delta})\right)^2 \\&+ \lambda\!\!\sum_{|\omega_{\epsilon,\Delta}|<\delta_0}\left(a+b\omega_{\epsilon,\Delta}^2 - s(\omega_{\epsilon,\Delta})\right)^2 + (1-\lambda)\int s''(x)^2dx.\eeqs The coefficient $\lambda\in (0,1)$ is a smoothing parameter. We then extrapolate the spline to be constant outside the bounded interval using MATLAB's \texttt{fnxtr}.
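A rough Python analogue of this fit-and-extend step (not the MATLAB calls used for the paper's figures) uses \texttt{scipy}'s \texttt{UnivariateSpline}, whose smoothing parameter \texttt{s} plays a role similar to, but is parameterized differently from, $\lambda$ in~\eqref{spline}, and whose option \texttt{ext=3} extends the spline by its boundary value, in place of \texttt{fnxtr}. The target curve below is synthetic:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(3)

# noisy samples of a sigma-like curve: even, flat at the origin, leveling off
x = np.linspace(-2.5, 2.5, 200)
y_true = 1.2 + 0.5 * np.tanh(x**2)
y = y_true + 0.02 * rng.normal(size=x.size)

# s controls the smoothing/fidelity trade-off; ext=3 extrapolates with the
# boundary value, i.e. a constant extension outside the fitting interval
sigma = UnivariateSpline(x, y, s=0.2, ext=3)

assert np.max(np.abs(sigma(x) - y_true)) < 0.2
assert abs(sigma(10.0) - sigma(2.5)) < 1e-12  # constant outside [-2.5, 2.5]
```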
Figure~\ref{fig:allUFOs} depicts the result of implementing this procedure for a range of $K$ values. For each $K$, we plot the points~\eqref{epsDelt-points} generated from the $N=4000$ process and using the chosen $\epsilon=\epsilon_N$, $\Delta=\Delta_N$ as described above. These curves are shown in color. They appear smoother near the origin because for $|\omega_{\epsilon,\Delta}|\ll1$, we replace $(\omega_{\epsilon,\Delta}, \mathcal J_\Delta/\hat J_\gibbs(\omega_{\epsilon,\Delta}))$ with $(\omega_{\epsilon,\Delta}, a+b\omega_{\epsilon,\Delta}^2)$. The curves are overlaid with their spline approximations in black. \begin{figure}
\caption{Function $\sigma=\sigma_K$ for different values of $K$. The black dotted lines are the smoothing spline interpolations computed as in~\eqref{ab},~\eqref{spline}. Near $\omega=0$, we plot $(\omega, a+b\omega^2)$ rather than $(\omega_{\epsilon,\Delta}, \mathcal J_\Delta/\hat J_\gibbs(\omega_{\epsilon,\Delta}))$. Note that the functions $\sigma$ are all even, nondecreasing on $\mathbb R^{+}$, and bounded above and below by positive constants. Note also that $\sigma$ approaches the constant 1 as $K$ decreases.}
\label{fig:allUFOs}
\end{figure}
There are two notable features of this family of corrections $\sigma=\sigma_K$. First, the corrections approach the constant 1 as $K$ decreases, in line with the observation in~\cite{gao2020} that the PDE derived under the local Gibbs assumption is very nearly accurate at low $K$. Second, we note that $\sigma$ is even, nondecreasing on $\mathbb R^{+}$, and bounded above and below by positive constants for all $K$. These qualitative observations will enable us to extend the analysis of the PDE~\eqref{introPDE-gibbs} done in~\cite{gao2020}, to the analysis of the corrected PDE~\eqref{introPDE}.
\subsection{Convergence to PDE Solution}\label{subsec:PDE-confirm} We will take $K=2$ in our verification of the PDE, and the initial height profiles $h_0$ depicted in the first two columns of Figure~\ref{fig:init}. We numerically solve the PDE \begin{equation}\label{J-PDE}\begin{cases} h_t = -\partial_x\left(\sigma(h_{xxx})2{e}^{\frac{-3K}{2}}\sinh(Kh_{xxx})\right),&\quad t>0,x\in (0,1)\\ h(0,x)=h_0(x),&\quad x\in (0,1)\end{cases},\end{equation} with $\sigma$ computed as described in Section~\ref{subsec:hatJ}. For comparison, we also numerically solve the PDE without the correction, letting $\tilde h$ denote the solution to \begin{equation}\label{tilde-J-PDE}\begin{cases} \tilde h_t = -\partial_x\left(2{e}^{\frac{-3K}{2}}\sinh(K\tilde h_{xxx})\right),&\quad t>0,x\in (0,1)\\ \tilde h(0,x)=h_0(x),&\quad x\in (0,1).\end{cases}\end{equation} We solved the PDEs by discretizing the spatial differential operators, and evolving the resulting ODE forward using MATLAB's \texttt{ode15s}, which is designed for stiff differential equations. Our primary interest is to confirm that the PDE~\eqref{J-PDE} is the correct limit of the microscopic dynamics. We will therefore study pointwise convergence of $\mathbf h_N$ rather than the hydrodynamic convergence of Definition~\ref{def:hydro}. We will see that $$\mathbb E\,[h_N(t,x)]:=\mathbb E\, h_{\lfloor Nx\rfloor}(t)\to h(t,x),\quad N\to\infty,\;\forall \; (x,t),$$ \emph{with no spatial averaging required}. This is in stark contrast to the $\mathbf w_N$ process, for which $\mathbb E\, w_{\lfloor Nx\rfloor}(t)$ does not converge at all. We will also confirm that $$\mathbb E\,\varbar w(t)\to h_{xxx}(t,x), \quad N\to\infty,\;\forall \; (x,t),$$where $\epsilon=\epsilon(N)$ is chosen using the procedure described in the verification of $\eqref{E}$ in Section~\ref{subsec:theory-numeric}. 
Note that $\eqref{E}$ only verified $\mathbb E\,\varbar w(t)$ has \emph{some} limit, whereas now we verify this limit is the third derivative of the solution to~\eqref{J-PDE}.
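A sketch of the discretize-then-integrate approach described above: periodic centered differences for the spatial operators, and \texttt{scipy}'s BDF integrator as a stiff solver in the spirit of \texttt{ode15s}. The correction $\sigma$ is replaced by the constant $1$ here, since the spline-computed $\sigma$ is not reproduced in this snippet; with $\sigma\equiv1$ the sketch integrates~\eqref{tilde-J-PDE} rather than~\eqref{J-PDE}.

```python
import numpy as np
from scipy.integrate import solve_ivp

K, N = 2.0, 32
dx = 1.0 / N

def d_dx(u):
    # centered first derivative with periodic boundary conditions
    return (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)

def rhs(t, h):
    # h_t = -d/dx[ sigma(h_xxx) * 2 e^{-3K/2} sinh(K h_xxx) ], with sigma == 1
    w = d_dx(d_dx(d_dx(h)))                        # discrete h_xxx
    flux = 1.0 * 2 * np.exp(-1.5 * K) * np.sinh(K * w)
    return -d_dx(flux)

x = np.arange(N) / N
h0 = 1e-3 * np.sin(2 * np.pi * x)
sol = solve_ivp(rhs, (0.0, 1e-4), h0, method="BDF", rtol=1e-6, atol=1e-10)
assert sol.success and np.all(np.isfinite(sol.y))
# the conservation (flux) form preserves the spatial mean of h
assert abs(sol.y[:, -1].mean() - h0.mean()) < 1e-6
```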
We start with the initial condition $h_0(x)=c(1-e^{-\sin(2\pi x)})$. The left panel of Figure~\ref{fig:exp-evoln} depicts the evolution of $\mathbb E\, [h_N(t,\cdot)]$ in time for $N=500$, as well as the evolution of $h(t,\cdot)$ and $\tilde h(t,\cdot)$, where $h$, $\tilde h$ are solutions to~\eqref{J-PDE} and~\eqref{tilde-J-PDE}, respectively. The right panel shows the evolution of $\mathbb E\,\varbar w(t)$ in comparison to $h_{xxx}$ and $\tilde h_{xxx}$. We see that there is a nontrivial qualitative difference between the two macroscopic evolutions, and that the evolution of the microscopic process clearly follows the PDE~\eqref{J-PDE} rather than the PDE~\eqref{tilde-J-PDE}. This shows that the correction $\sigma$ is necessary to capture the correct dynamics.
The left panel of Figure~\ref{fig:exp-converge} depicts $\mathbb E\,[h_N(t,\cdot)-h_N(0,\cdot)]$ for $t=2\times 10^{-8}$ and $N=250,500,1000$, where the $N=\infty$ curve is $h(t,\cdot)-h_0$. We plot the time increment of $\mathbb E\, h_N$ rather than $\mathbb E\, h_N$ itself, in order to better see the convergence (at this $t$, the process $h_N(t)$ is still very close to $h_N(0)$). The right panel of the figure depicts $\mathbb E\,\varbar w(t)$ for increasing $N$, with $N=\infty$ representing $h_{xxx}(t,\cdot)$. The panels confirm that $\mathbb E\,[h_N(t,\cdot)]$ is converging to $h(t,\cdot)$, since $\mathbb E\,[h_N(0)]$ converges to $h_0$ by design, and that $\mathbb E\,\varbar w(t)$ is converging to $h_{xxx}(t,\cdot)$. \begin{figure}
\caption{Exponential initial condition. Left: evolution of $\mathbb E\, h_N$ in time, for $N=500$. The evolution is compared to that of $ h(t,\cdot)$ and $\tilde h(t,\cdot)$. It is clear that the evolution of $\mathbb E\, h_N$ follows that of $h$ rather than $\tilde h$. Right: evolution of $\mathbb E\,\varbar w(t)$ in time for $N=500$, compared to the evolution of $h_{xxx}$ and $\tilde h_{xxx}$. We see that $\mathbb E\,\varbar w(t)$ follows $h_{xxx}$ rather than $\tilde h_{xxx}$. Note the qualitative difference between the two macroscopic profiles.}
\caption{Exponential initial condition. Convergence of $\mathbb E\,[h_N(t,\cdot)-h_N(0,\cdot)]$ to $h(t,\cdot)-h_0$, and $\mathbb E\,\varbar w$ to $h_{xxx}$, as $N\to\infty$.}
\label{fig:exp-evoln}
\label{fig:exp-converge}
\end{figure}
Figures~\ref{fig:sin-evoln} and~\ref{fig:sin-converge} are analogous, but for the sinusoidal initial condition. For this IC, the qualitative differences between the two macroscopic evolutions~\eqref{J-PDE} and~\eqref{tilde-J-PDE} are not as significant but again, it is clear that the microscopic process converges to the solution of~\eqref{J-PDE}. \begin{figure}
\caption{Sinusoidal initial condition. Left: evolution of $\mathbb E\,[h_N(t,\cdot)-h_N(0,\cdot)]$ in time, for $N=500$. The evolution is compared to that of $h(t,\cdot)-h_0$ and $\tilde h(t,\cdot)-h_0$, where $h,\tilde h$ are the solutions to~\eqref{J-PDE} and~\eqref{tilde-J-PDE}, respectively. The discrepancy between $h(t,\cdot)-h_0$ and $\tilde h(t,\cdot)-h_0$ is clearest at $t=10^{-5}$. It is clear that $\mathbb E\, h_N$ follows $h$ rather than $\tilde h$. Right: evolution of $\mathbb E\,\varbar w(t)$ in time for $N=500$, compared to the evolution of $h_{xxx}$ and $\tilde h_{xxx}$. On the $O(1)$ scale of $h_{xxx}$, the difference between $h_{xxx}$ and $\tilde h_{xxx}$ is imperceptible.}
\caption{Sinusoidal initial condition. Convergence of $\mathbb E\,[h_N(t,\cdot)-h_N(0,\cdot)]$ to $h(t,\cdot)-h_0$, and $\mathbb E\,\varbar w$ to $h_{xxx}$, as $N\to\infty$.}
\label{fig:sin-evoln}
\label{fig:sin-converge}
\end{figure} \section{PDE Analysis}\label{sec:PDE} We conclude the paper by generalizing the PDE results in~\cite{gao2020}. Setting all constants equal to 1, consider the PDE
\begin{equation}\label{hPDE} \begin{cases} h_t = -\partial_{x}\left(\sigma(h_{xxx})\sinh(h_{xxx})\right),\quad &t>0, x\in{\mathbb T}\\ h(0,x) = h_0(x),\quad &x\in{\mathbb T} \end{cases} \end{equation}
where $\sigma\in C^2(\mathbb R)$ is even, nondecreasing on $\mathbb R^{+}$, and bounded above and below by constants $0<c<\sigma(\omega)<C$ for all $\omega\in\mathbb R$. These properties are all confirmed in Figure~\ref{fig:allUFOs}. From~\eqref{hPDE}, we get the following PDE for the slope $z=h_x$:
\begin{equation}\label{zPDE} \begin{cases} z_t = -\partial_{xx}\left(\sigma(z_{xx})\sinh(z_{xx})\right),\quad &t>0, x\in{\mathbb T}\\ z(0,x) = z_0(x)=h_0'(x),\quad &x\in{\mathbb T} \end{cases} \end{equation}
Before stating the main result, we introduce some notation. Let $$H=\{u\in L^2({\mathbb T});\; \int_{\mathbb T} udx=0\},\quad V=\{u\in H^2({\mathbb T});\; \int_{\mathbb T} udx=0\}.$$ Define $\psi:\mathbb R\to\mathbb R$ by \begin{equation} \psi(u) = c + \int_0^u\sigma(q)\sinh(q)dq, \end{equation} and $\phi:H\to [0,+\infty]$ by \begin{equation}\label{phi} \phi(z) = \begin{cases}\int_{\mathbb T} \psi(z_{xx})dx,\quad &z\in V,\\ +\infty,\quad&\text{otherwise}.\end{cases} \end{equation} Note that we have $$\frac{\delta\phi}{\delta z} = \left(\psi'(z_{xx})\right)_{xx}= \partial_{xx}\left(\sigma(z_{xx})\sinh(z_{xx})\right)$$ so that~\eqref{zPDE} can be written as $$z_t = -\frac{\delta\phi}{\delta z}.$$ This motivates writing solutions of~\eqref{zPDE} as the limit of a discretized gradient flow in the metric space $H$ with $L^2$ distance.
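Because~\eqref{hPDE} is in flux (continuity-equation) form, its periodic spatial discretization conserves $\sum_i h_i$ exactly, mirroring the conservation of $\sum_i h_i(t)$ under the microscopic dynamics. A minimal numerical sketch of this (the grid size, time step, and the choice $\sigma\equiv1$ are illustrative assumptions, and grid-spacing factors are omitted):

```python
import numpy as np

def third_diff(h):
    # periodic third finite difference: w_i = h_{i+2} - 3 h_{i+1} + 3 h_i - h_{i-1}
    return np.roll(h, -2) - 3*np.roll(h, -1) + 3*h - np.roll(h, 1)

def rhs(h, sigma=lambda w: np.ones_like(w)):
    # discrete analogue of -d/dx [ sigma(h_xxx) sinh(h_xxx) ], grid factors omitted
    w = third_diff(h)
    flux = sigma(w) * np.sinh(w)
    return -(np.roll(flux, -1) - flux)   # periodic forward difference of the flux

N = 64
rng = np.random.default_rng(0)
h = rng.normal(scale=0.1, size=N)

dh = rhs(h)
print(abs(dh.sum()))          # essentially zero: the flux difference telescopes

h_new = h + 1e-3 * dh         # one explicit Euler step preserves the total height
print(abs(h_new.sum() - h.sum()))
```

Any consistent discretization of the divergence form shares this property, which is why the simulated $\frac1N\sum_i h_i(t)$ stays pinned at its initial value.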
In preparation for doing so, we state the following lemma. It is the same as Proposition 3.2 in~\cite{gao2020}, but applies to the more general functional $\phi$ in~\eqref{phi}: \begin{lemma}\label{lma:phi-prop}
The functional $\phi:H\to[0,\infty]$ is $\lambda$-convex for $\lambda = c/\kappa^2$, where $\kappa$ is the best Poincar\'e constant for the domain ${\mathbb T}$. The functional $\phi$ is also proper, lower semicontinuous in $H$, and satisfies coercivity, meaning that there exists a ball $B(u^*,r^*)=\{v\in H: \|v-u^*\|_{L^2}\leq r^*\}$ such that $\phi(u^*)<\infty$ and the infimum of $\phi$ over $B(u^*,r^*)$ is finite. \end{lemma} See the end of this section for the proof. Now, define the proximal operator
$$\mathcal J_\tau[u] = \arg\min_{v\in H}\left\{\phi(v) + \frac{1}{2\tau}\|v-u\|^2\right\}.$$ The proximal operator is the variational formulation of an implicit (backward Euler) gradient-descent update on $\phi$ with step size $\tau$. The convexity and lower semicontinuity of $\phi$ ensure that the minimizer of the above objective exists and is unique. Using $\mathcal J_\tau$, we form the approximate solution $$z_n(t) := \left(\mathcal J_{t/n}\right)^n[z_0].$$ Using Lemma~\ref{lma:phi-prop} and the theory of gradient flows in metric spaces (see~\cite{gao2020} and the citations therein, in particular~\cite{AGS}), one can show that given $z_0\in H$, the sequence $z_n(t)$ converges in $H$ to $z(t)$, which is the unique evolution variational inequality (EVI) solution to the PDE~\eqref{zPDE}. See~\cite{gao2020} and~\cite{AGS} for the definition of the EVI solution. Finally, if $z_0$ enjoys more regularity, then the EVI solution $z(t)$ is a strong solution. We have the following theorem, which is analogous to Theorem 3.6 in~\cite{gao2020}. \begin{theorem} Let $$D = \{z\in V\mid \left(\sigma(z_{xx})\sinh(z_{xx})\right)_{xx}\in H\}.$$ Take $T>0$ and $z_0\in D$ such that $\phi(z_0)<\infty$. Then~\eqref{zPDE} has a unique global strong solution $z$ in the sense that $\partial_tz = -\partial_{xx}\left(\sigma(z_{xx})\sinh(z_{xx})\right)$ for all $t\geq0$, and such that $$z\in C([0,T]; D)\cap C^1([0,T]; H).$$ Moreover, we have the following decay: \beqs
\|z(t)\|_{L^2}&\leq \|z_0\|_{L^2}\;\forall t\geq 0,\\
\|\partial_tz(t)\|_{L^2} &\leq e^{-\lambda t}\left\|\partial_{xx}\left(\sigma\left(\partial_{xx}z_0\right)\sinh(\partial_{xx}z_0)\right)\right\|_{L^2}\;\forall t\geq0, \eeqs where $\lambda$ is as in Lemma~\ref{lma:phi-prop}. \end{theorem} Let us now present the proof of Lemma~\ref{lma:phi-prop}. \begin{proof}[Proof of Lemma~\ref{lma:phi-prop}] $\phi$ is proper since $u=0$ satisfies $\phi(u)<\infty$, so $\{\phi<\infty\}$ is nonempty. Since $\phi\geq0$, it is obviously coercive. Now we show $\phi$ is $\lambda$-convex with $\lambda = c/\kappa^2$, where $\kappa$ is the best Poincar\'e constant for the domain ${\mathbb T}$. First, note that $$\psi''(w) = \sigma(w)\cosh(w) + \sigma'(w)\sinh(w) \geq c\cosh(w)\geq c.$$ Now, analogously to~\cite{gao2020}, define
\begin{equation} I(t) = \int_{\mathbb T} (1-t)\psi(u_{xx}) + t\psi(v_{xx}) - \frac\lambda2t(1-t)\|u-v\|_{L^2}^2-\psi((1-t)u_{xx}+tv_{xx})dx. \end{equation} Note that $I(0)=I(1)=0$, so $I(t)\geq0$ provided $I''(t)\leq 0$. We compute $I''(t)$ below, substituting $\lambda=c/\kappa^2$ in the second line: \beqs I''(t) &= \lambda\int_{\mathbb T}(u-v)^2dx -\int_{\mathbb T}(u_{xx}-v_{xx})^2\psi''((1-t)u_{xx}+tv_{xx})dx \\ &\leq \frac{c}{\kappa^2}\int_{\mathbb T}(u-v)^2dx - c\int_{\mathbb T}(u_{xx}-v_{xx})^2dx\leq 0, \eeqs applying the Poincar\'e inequality twice. Hence $\phi$ is $\lambda$-convex. The lower semicontinuity of $\phi$ will follow from the convexity of $\psi$ and the bound~\eqref{L2bd} below; for the details, see~\cite{gao2020}. For $z\in V$, we have \beqs
\frac c2\int_{\mathbb T}\left|(z_{xx})^+\right|^2dx &\leq c\int_{{\mathbb T}\cap \{z_{xx}>0\}} e^{(z_{xx})^+}dx \leq 2c\int_{{\mathbb T}\cap\{z_{xx}>0\}}\cosh((z_{xx})^+)dx \\ &\leq 2c\int_{{\mathbb T}}\cosh(z_{xx})dx\leq 2\phi(z). \eeqs Applying an analogous inequality with the negative part of $z_{xx}$, we conclude that \begin{equation}\label{L2bd}
\|z_{xx}\|_{L^2}^2 \leq \frac8c\phi(z). \end{equation} \end{proof}
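To make the minimizing-movement construction of this section concrete, here is a toy numerical sketch of one proximal step $\mathcal J_\tau[u]$ on a periodic grid, taking $\sigma\equiv1$ (so $\psi$ is $\cosh$ up to an additive constant) and minimizing the proximal objective by plain gradient descent. The grid size, test data, step sizes, and iteration count are illustrative assumptions, and grid-spacing factors are omitted:

```python
import numpy as np

def second_diff(v):
    # periodic second finite difference (grid factors omitted)
    return np.roll(v, -1) - 2*v + np.roll(v, 1)

def phi(v):
    # discrete analogue of phi(z) = int cosh(z_xx) dx  (sigma == 1)
    return np.mean(np.cosh(second_diff(v)))

def prox_objective(v, u, tau):
    return phi(v) + np.mean((v - u)**2) / (2 * tau)

def prox_step(u, tau, lr=0.05, iters=1000):
    # gradient descent on v -> phi(v) + |v - u|^2 / (2 tau);
    # the periodic second difference is self-adjoint, so
    # grad phi(v) = D^2 sinh(D^2 v) / len(v)
    v = u.copy()
    for _ in range(iters):
        grad = second_diff(np.sinh(second_diff(v))) / len(v) + (v - u) / (tau * len(v))
        v -= lr * grad
    return v

N = 32
rng = np.random.default_rng(1)
u = rng.normal(scale=0.1, size=N)
u -= u.mean()                      # work in the mean-zero space H

tau = 0.5
v = prox_step(u, tau)
print(prox_objective(v, u, tau) < prox_objective(u, u, tau))  # descent achieved
```

Iterating such steps with $\tau=t/n$ gives the discrete trajectory $z_n(t)$ whose convergence is guaranteed by the lemma; a production implementation would of course solve each minimization to higher accuracy.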
\section{Conclusion}\label{sec:conclude} We have derived the continuity equation $h_t = -\partial_x\hat J(h_{xxx})$ governing the hydrodynamic limit of a Metropolis rate jump process in the rough scaling regime. Due to the surprising fact that the local equilibrium (LE) state of this process is not local Gibbs, and is unknown, we opted for a numerical approach to compute the current $\hat J$. We conclude with an observation about this approach. Although it took into account some specific properties of the model, the basic principle underlying our approach is quite general. Namely, if a system is in LE, then the expectations of a local nonlinear observable $f$ in different mesoscopic regions depend in the same way (through a universal function $\hat f$) on a finite number of usually linear statistics in these regions. In our case, this statistic is the first moment $\mathbb E\,\bar w_{{N(x\pm\epsilon)}}$, which is essentially the local value of $h_{xxx}$. We can infer the function $\hat f$ by plotting the $f$ expectations against the linear statistics, collected from sample runs of the process. We believe our numerical approach can be useful to derive the PDE limit of interacting particle systems in which an explicit expression for the LE distribution is not available, provided the PDE derivation reduces to computing the expectation of an observable $f$ in LE. \appendix \section{Proofs of Claims in Section~\ref{subsec:theory}}\label{app:claims} The proof of Claim~\ref{claim:htow} relies on the following lemma. \begin{lemma}\label{app:lemma:htow} Under the conditions of the claim, there exists $R(t)>0$ such that
$$\mathbb P\left(\frac1N\sum_{i=1}^NN^{-3}|h_i(t)|>R(t)\right)\to0,\quad N\to\infty.$$ \end{lemma}
\begin{proof} Let $C(t)=\sup_{N}\max_{i=1,\dots,N}\mathbb E\,|w_i(t)|$, which is finite thanks to~\eqref{bd1}. Let $h_i=h_i(t)$, $w_i=w_i(t)$. Write \begin{equation}\label{hfromw} h_i = \sum_{j=0}^{i-2}\sum_{k=0}^j\sum_{\ell=0}^kw_\ell + a_Ni^2 + b_Ni + c_N,\quad i=0,\dots, N-1, \end{equation} and note that we have the bound
\beqs N^{-4}\sum_{i=1}^N|h_i| &\leq DN^{-4}\left[N^3\sum_{i=1}^N|w_i| + N^3|a_N| + N^2|b_N| + N|c_N|\right]\\
&= D\left[N^{-1}\sum_{i=1}^N|w_i| + N^{-1}|a_N| + N^{-2}|b_N| + N^{-3}|c_N|\right]\eeqs for some constant $D$. Thus it suffices to show there exists a constant $R(t)$ such that $\mathbb P(N^{-1}\sum_{i=1}^N|w_i|>R(t)/4)$, $\mathbb P(N^{-1}|a_N|>R(t)/4)$, $\mathbb P(N^{-2}|b_N|>R(t)/4)$, and $\mathbb P(N^{-3}|c_N|>R(t)/4)$ all go to zero as $N\to\infty$. The first probability goes to zero by~\eqref{bd1}.
We will now solve for $a_N$, $b_N$, $c_N$. Note that taking $h_i$ as in~\eqref{hfromw}, we immediately get that $w_i=h_{i+2}-3h_{i+1}+3h_i - h_{i-1}$ for $i=1,\dots, N-3$, but we must also ensure that $w_i=h_{i+2}-3h_{i+1}+3h_i - h_{i-1}$ for $i=0,N-2,N-1$. This is equivalent to extending the definition of $h_i$ to $i=N, N+1, N+2$ and setting $h_0 =h_N$, $h_1=h_{N+1}$, $h_2=h_{N+2}$. One can show that the equality $h_2=h_{N+2}$ will follow from the other two equalities. Setting $h_0$ equal to $h_N$ gives $$c_N = S_{N,3} + N^2a_N + Nb_N + c_N,\quad S_{N,3} = \sum_{j=0}^{N-2}\sum_{k=0}^j\sum_{\ell=0}^kw_\ell.$$ Setting $h_1$ equal to $h_{N+1}$ gives $$a_N + b_N + c_N = S_{N,3} + S_{N,2} + (N+1)^2a_N + (N+1)b_N + c_N,\quad S_{N,2} = \sum_{k=0}^{N-1}\sum_{\ell=0}^k w_\ell.$$
These two equations give $a_N = -S_{N,2}/(2N)$ and $b_N = S_{N,2} /2 - S_{N,3}/N$. It is straightforward to see that we have the bound $\mathbb E\,|S_{N,2}| \leq DC(t)N^2$ for some constant $D$, so $N^{-1}\mathbb E\,|a_N|= N^{-2}\mathbb E\,|S_{N,2}|/2 \leq DC(t)$. A similar argument gives $N^{-2}\mathbb E\,|b_N|\leq DC(t)$.
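The reconstruction~\eqref{hfromw}, with $a_N$ and $b_N$ as just computed, can be checked numerically: the resulting $h$ has $w$ as its periodic third finite difference at every site, including the wrap-around indices. A sketch (the lattice size and test data are illustrative; $c_N$ is left arbitrary here since it cancels in the third difference):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 50
w = rng.normal(size=N)
w -= w.mean()            # sum_i w_i = 0, as for any periodic third difference

# iterated cumulative sums: c3[m] = sum_{j<=m} sum_{k<=j} sum_{l<=k} w_l
c1 = np.cumsum(w)
c2 = np.cumsum(c1)
c3 = np.cumsum(c2)

T = np.zeros(N)
T[2:] = c3[:N-2]         # T_i = sum_{j<=i-2} sum_{k<=j} sum_{l<=k} w_l
S2 = c2[-1]              # S_{N,2}
S3 = c3[-2]              # S_{N,3}

a = -S2 / (2 * N)
b = S2 / 2 - S3 / N
c = 0.0                  # c_N is fixed separately by the conserved mean
i = np.arange(N)
h = T + a * i**2 + b * i + c

# periodic third finite difference recovers w at every site
w_rec = np.roll(h, -2) - 3*np.roll(h, -1) + 3*h - np.roll(h, 1)
print(np.max(np.abs(w_rec - w)))   # tiny (floating-point round-off)
```

The zero-sum condition on $w$ is what makes the remaining wrap-around equality hold automatically, as asserted above.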
We can estimate $N^{-3}|c_N|$ by recalling that $N^{-4}\sum_ih_i=:M_N$ converges to $M$ in probability. This gives $$N^4M_N = S_{N,4} + a_N\sum_{i=0}^{N-1}i^2 + b_N\sum_{i=0}^{N-1}i + Nc_N,\quad S_{N,4} = \sum_{i=0}^{N-1}\sum_{j=0}^{i-2}\sum_{k=0}^j\sum_{\ell=0}^kw_\ell$$ so that
$$N^{-3}|c_N|\leq |M_N| + N^{-4}|S_{N,4}| + DN^{-1}|a_N| + DN^{-2}|b_N|.$$ The first summand on the right is bounded in probability, and the second, third, and fourth summands are bounded in expectation, so it follows that $N^{-3}|c_N|$ is bounded in probability. \end{proof} \begin{proof}[Proof of Claim~\ref{claim:htow}] Let us show a unique function $h=h(t,\cdot)$ exists such that $\int_0^1hdx=M$, $h_{xxx}=w$, and $h,h_x,h_{xx}$ are all periodic. Such a function necessarily takes the form $$h(t,x) = \int_0^x\int_0^y\int_0^zw(t,u)dudzdy + a(t)x^2 + b(t)x + c(t),$$ so we show there is a unique choice of $a(t),b(t),c(t)$. First note that by definition of $\mathbf w_N$ as the third order FD of some process, we have $\sum_i w_i(t) =0$ for all $t$ (recall that lattice site indexing is periodic). Therefore, taking $\phi\equiv 1$, we get that $\int w(t,x)dx=0$ for all $t$. Now, we have $h_{xx} = \int_0^xw(t,u)du + 2a(t)$, which is periodic for any $a(t)$, since $\int w(t,x)dx=0$. Equating $h(t,0)$ and $h(t,1)$, we get the condition $$c(t) =h(t,0)=h(t,1)= \int_0^1\int_0^y\int_0^zw(t,u)dudzdy + a(t) + b(t)+c(t).$$ Equating $h_x(t,0)$ with $h_x(t,1)$, we get the condition $$b(t) = h_x(t,0)=h_x(t,1)=\int_0^1\int_0^zw(t,u)dudz + 2a(t) + b(t).$$ Finally, integrating $h$, we get the condition $$M = \int_0^1h(t,x)dx=\int_0^1\int_0^x\int_0^y\int_0^zw(t,u)dudzdydx + a(t)/3 + b(t)/2+c(t).$$ It is clear that this system of equations has a unique solution $a(t),b(t),c(t)$, so a unique $h$ satisfying the conditions exists. Now, we need to show that for all $\phi\in C(\unit)$, we have \begin{equation}\label{needtoshow}N^{-1}\sum_i\phi(i/N)N^{-3}h_i(t)\probto\int_0^1\phi(x)h(t,x)dx\end{equation} for this $h$. Since $\sum_ih_i(t)$ stays fixed under the crystal surface dynamics, we already know this is true for $\phi\equiv 1$. 
Indeed, we have $$\frac1N\sum_{i=1}^NN^{-3}h_i(t) = \frac1N\sum_{i=1}^NN^{-3}h_i(0) \to M=\int_0^1h(t,x)dx.$$ Thus, it suffices to show~\eqref{needtoshow} for continuous $\phi$ which integrate to $0$. For such a $\phi$, there exists a $C^3$, periodic $\psi$ such that $\psi'''=\phi$ and $\psi'$, $\psi''$ are also periodic. This is true by the same argument as above. Now, let $\psi_i = \psi(i/N)$, and \begin{equation}\psi_i^1= \psi_{i-1}-\psi_{i-2},\quad\psi_i^2 = \psi_{i+1}^1 - \psi_i^1,\quad \psi_i^3 = \psi_{i+1}^2 - \psi_i^2.\end{equation} Note that by continuity of $\phi=\psi'''$,
$$C_N := \max_i\left|\psi'''(i/N) - N^3\psi_i^3\right| \to0,\quad N\to\infty.$$ We then have
\beqs\label{by-parts}\bigg|\frac1N\sum_{i=1}^N&\psi'''(i/N)N^{-3}h_i-\frac1N\sum_{i=1}^N\psi_i^3h_i\bigg|
\\&\leq C_N \frac1N\sum_{i=1}^NN^{-3}|h_i|,\eeqs omitting the $t$ for brevity. Therefore, \beqs
\mathbb P\bigg(\bigg|\frac1N\sum_{i=1}^N\psi'''&(i/N)N^{-3}h_i-\frac1N\sum_{i=1}^N\psi_i^3h_i\bigg|>\delta\bigg)\\
&\leq \mathbb P\bigg(\frac1N\sum_{i=1}^NN^{-3}|h_i|>\delta/C_N\bigg)\to0,\quad N\to\infty,\eeqs using the Lemma. Thus it suffices to prove $\frac1N\sum_{i=1}^N\psi_i^3h_i$ converges in probability to $\int\phi(x)h(t,x)dx$. Define \begin{equation} h_i^1 = h_i - h_{i-1},\quad h_i^2 = h_i^1 - h_{i-1}^1,\quad h_i^3 = h_i^2 - h_{i-1}^2\end{equation} Note that $h_i^3= h_{i}-3h_{i-1} + 3h_{i-2}-h_{i-3} = w_{i-2}.$ Now, for arbitrary $N$-periodic sequences $\{f_k\}_{k\in\mathbb Z}$, $\{g_k\}_{k\in\mathbb Z}$, we have by the summation by parts formula, $$\sum_{k=1}^{N}f_k(g_{k+1}-g_k) = f_{N}g_{N+1} - f_1g_1 - \sum_{k=2}^{N}g_k(f_k - f_{k-1}) = -\sum_{k=1}^{N}g_k(f_k - f_{k-1}),$$ so there are no boundary terms thanks to the periodicity. We now apply summation by parts three times to get \beqs \sum_{i=1}^N\psi_i^3h_i &= -\sum \psi_i^2h_i^1 = \sum_{i=1}^N\psi_i^1h_i^2 = \sum_{i=1}^N(\psi_{i-1}-\psi_{i-2})h_i^2 \\ &= -\sum_{i=1}^N\psi_{i-2}h_i^3 = -\sum_{i=1}^N\psi_{i-2}w_{i-2} = -\sum_{i=1}^N\psi(i/N)w_i \eeqs Thus $$\frac1N\sum_{i=1}^N\psi_i^3h_i = -\frac1N\sum_{i=1}^N\psi(i/N)w_i\probto - \int_0^1\psi(x)w(t,x)dx = \int_0^1\psi'''(x)h(t,x)dx.$$ The last equality is by three applications of integration by parts. There are no boundary terms because $\psi$, $h$, and their first three spatial derivatives, are all periodic.
\end{proof} For the proof of Claims~\ref{claim:meso},~\ref{claim:PDE}, recall that to a vector $\mathbf v = (v_1,\dots, v_N)$ we associate a signed measure on the unit interval, as follows: \begin{equation}\label{v-meas}\mathbf v \quad\leftrightarrow\quad v(dx) = \frac1N\sum_{i=1}^Nv_i\delta\left(x-\frac iN\right).\end{equation}
Also, recall from Remark~\ref{rk:w-meas} that $(\phi\ast\mu)(x) = \int_0^1 \phi(x-y)\mu(dy)$ for a signed measure $\mu$ defined on the unit torus and a function $\phi\in L^1(\mu)$, and that $(\phi_\epsilon\ast w_N(t,\cdot))(x) = \varbar w(t),$ where $\phi_\epsilon(x)= \frac{1}{2\epsilon}\mathbbm{1}_{(-\epsilon,\epsilon)}(x)$. Further, note that if $\phi$ is even, and the function $(x,y)\mapsto \psi(x)\phi(x-y)$ is integrable with respect to $\mu(dy)dx$ on ${\mathbb T}\times{\mathbb T}$, then we have the identity \beqs\label{conv-meas} \int_0^1 \psi(x)(\phi\ast\mu)(x)dx &= \int_0^1\int_0^1\psi(x)\phi(x-y)\mu(dy)dx \\&= \int_0^1\mu(dy)\int_0^1\psi(x)\phi(y-x)dx = \int_0^1(\psi\ast\phi)(y)\mu(dy). \eeqs
\begin{proof}[Proof of Claim~\ref{claim:meso}]
Since $\psi$ is continuous and hence uniformly continuous on $[0,1]$, we have $\sup_{x\in{\mathbb T}}|\psi(x) -(\psi\ast\phi_\epsilon)(x)|\to0$ as $\epsilon\to0$, with $\phi_\epsilon$ as above. Using this and~\eqref{bd1}, we have that \beqs
\mathbb E\,\big|\int \psi(x)w_N(t,dx) &- \int(\psi\ast\phi_\epsilon)(x)w_N(t,dx)\big| \\
&\leq \max_i|(\psi-\psi\ast\phi_\epsilon)(i/N)|\max_i\mathbb E\,|w_i| \eeqs goes to zero as $N\to\infty$ and then $\epsilon\to0$. Therefore, it suffices to show $\int(\psi\ast\phi_\epsilon)(x)w_N(t,dx)$ converges in $L^1$ (with respect to randomness) to $\int\psi(x)w(t,x)dx$. Now, $\int(\psi\ast\phi_\epsilon)(x)w_N(t,dx) = \int \psi(x)\varbar w(t)dx$ by~\eqref{conv-meas}, and \begin{equation}
\mathbb E\,\bigg|\int \psi(x)\varbar w(t)dx - \int\psi(x)w(t,x)dx\bigg|\leq \|\psi\|_{\infty}\int\mathbb E\,|\varbar w(t) - w(t,x)|dx, \end{equation} which goes to zero by the definition of pointwise mesoscopic convergence, combined with~\eqref{bd1} and the continuity of $w$ which allows us to apply Lebesgue Dominated Convergence. \end{proof} \begin{proof}[Proof of Claim~\ref{claim:PDE}] As argued in the main text, the lefthand side of~\eqref{weak-def} is the limit of $\int_{\mathbb T}\psi(x)\mathbb E\,\left[\varbar w(t)-\varbar w(0)\right]dx$ and \beqsn \int_0^1\psi(x)&\mathbb E\,[\varbar w(t)-\varbar w(0)]dx\\&= N^4\int_0^t\frac1N\sum_{i=1}^N(\psi\ast\phi_\epsilon)\left(\frac iN\right)\mathbb E\,[D_N^4J(w_i(s))]ds, \eeqsn where $D_N^4J(w_i) = J(w_{i-2})-4J(w_{i-1}) +6J(w_i)- 4J(w_{i+1}) + J(w_{i+2})$. We can now use that summation by parts yields no boundary terms when the sequences are periodic, as above. Thus we can move $D_N^4$ onto $(\psi\ast\phi_\epsilon)(i/N)$ provided we define $D_N^4(\psi\ast\phi_\epsilon)(i/N)$ as the appropriately shifted fourth order finite difference obtained in the summation by parts, rather than the centered fourth order FD. Thus we can write \beqsn \int_0^1\psi(x)\mathbb E\,[&\varbar w(t)-\varbar w(0)]dx\\&= N^4\int_0^t\frac1N\sum_{i=1}^ND_N^4(\psi\ast\phi_\epsilon)\left(\frac iN\right)\mathbb E\,[J(w_i(s))]ds. \eeqsn Since $D_N^4$ is only shifted by a finite number of indices, we still have by the smoothness of $\psi$ that
$$\left|N^4D_N^4(\psi\ast\phi_\epsilon)(i/N)-(\psi^{(4)}\ast\phi_\epsilon)(i/N)\right|\leq \frac CN\|\psi^{(5)}\|_\infty\|\phi_\epsilon\|_{L^1} = \frac CN\|\psi^{(5)}\|_\infty$$ for some constant $C$. Thus, \beqs
\bigg|\int_0^t\frac1N\sum_{i=1}^N&\left[N^4D_N^4(\psi\ast\phi_\epsilon)-(\psi^{(4)}\ast\phi_\epsilon)\right]\left(\frac iN\right)\mathbb E\, J(w_i(s))ds\bigg|\\
&\leq \frac CN\int_0^t\max_i|\mathbb E\, J(w_i(s))|ds,\eeqs which goes to zero as $N\to\infty$ by the boundedness assumption~\eqref{bd2}. Next, we have \beqs \int_0^t&\frac1N\sum_{i=1}^N(\psi^{(4)}\ast\phi_\epsilon)(i/N)\mathbb E\, J(w_i(s))ds \\&= \int_0^t\int_0^1(\psi^{(4)}\ast\phi_\epsilon)(x)\mathbb E\, J(\mathbf w_N(s))(dx)ds=\int_0^t\int_0^1\psi^{(4)}(x)\mathbb E\,\bar J({\bf w}_{{N(x\pm\epsilon)}}(s)) dxds, \eeqs using identity~\eqref{conv-meas} with $\mu(dx) = \mathbb E\, J(\mathbf w_N(s))(dx)$ $=$ $\frac1N\sum_{i=1}^N\mathbb E\, J(w_i(s))\delta(x-i/N)$ and $\phi=\phi_\epsilon$. By the pointwise convergence of $\mathbb E\,\bar J({\bf w}_{{N(x\pm\epsilon)}}(s))$ to $\hat J(w(s,x))$ and boundedness~\eqref{bd2}, we conclude by applying dominated convergence. \end{proof} \section{Justification of Time Averaging}\label{app:num} We now justify using a sample average estimate of the time average $\mathbb E\,_\Delta[f(w_i(t))]:=\frac1\Delta\int_{I_{t,\Delta}}\mathbb E\,[f(w_i(s))]ds$ in place of a sample average estimate of $\mathbb E\,[f(w_i(t))]$. Let $\mathbb E\,^n$ and $\mathbb E\,^n_\Delta$ denote the $n$-sample estimates of $\mathbb E\,$ and $\mathbb E\,_\Delta$, respectively (see~\eqref{sample-av} and~\eqref{sample-time-av}). We first show that by taking $\Delta$ small enough, decreasing $\Delta$ further has no effect on $\mathbb E\,^n_\Delta[f( w_i)]$, except perhaps to increase its variance. This is shown in the left panels in Figure~\ref{fig:sample-v-time} (a), (b) for $f(w)=J(w)$ and $f(w)=w$, respectively. Fixing $\Delta=2\times 10^{-9}$, we now show that as we increase $n$, the estimate $\mathbb E\,^n[f(w_i)]$ approaches $\mathbb E\,^n_\Delta[f(w_i)]$. See the righthand panels in Figure~\ref{fig:sample-v-time} (a), (b). \begin{figure}
\caption{Left: we choose $\Delta$ small enough, so that the effect of decreasing $\Delta$ on the estimator $\mathbb E\,^n_\Delta[J( w_i)]$ is negligible. Right: As $n$ increases, the instantaneous-time estimator $\mathbb E\,^n[J( w_i)]$ approaches $\mathbb E\,_\Delta[J( w_i)]$.}
\caption{Same as above, but now with the observable $\mathbf w_N\mapsto w_i$.}
\label{fig:sample-v-time}
\end{figure}
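The comparison behind Figure~\ref{fig:sample-v-time} can be illustrated on a toy stationary process (an AR(1) chain standing in for $w_i(t)$; the chain, its parameters, sample count, and window length are illustrative assumptions, not the actual surface dynamics):

```python
import numpy as np

rng = np.random.default_rng(4)
n_samples, n_window = 2000, 50
rho = 0.9
noise = np.sqrt(1 - rho**2)        # stationary AR(1) with unit variance

# n_samples independent runs, each observed at n_window consecutive times
w = rng.normal(size=n_samples)     # start in stationarity
window = np.empty((n_samples, n_window))
for t in range(n_window):
    w = rho * w + noise * rng.normal(size=n_samples)
    window[:, t] = w

f = lambda x: x                    # observable; plays the role of J(w) in the paper

inst_est = f(window[:, -1]).mean() # sample average at a single time
time_est = f(window).mean()        # sample average combined with a time-window average

print(abs(inst_est - time_est))    # both estimate the stationary mean 0
```

For a stationary process the window average changes the estimator's variance but not its target, which is the justification for trading instantaneous sample averages for time averages in the simulations.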
\end{document}
\begin{document}
\title[Limits of vector lattices]{Limits of vector lattices}
\author{Walt van Amstel}
\author{Jan Harm van der Walt}
\address{Department of Mathematics and Applied Mathematics, University of Pretoria, Cor\-ner of Lynnwood Road and Roper Street, Hatfield 0083, Pretoria, South Africa and DSI-NRF Centre of Excellence in Mathematical and Statistical Sciences (CoE-MaSS), South Africa} \email{sjvdwvanamstel@gmail.com}
\address{Department of Mathematics and Applied Mathematics, University of Pretoria, Cor\-ner of Lynnwood Road and Roper Street, Hatfield 0083, Pretoria, South Africa} \email{janharm.vanderwalt@up.ac.za}
\thanks{The first author was supported by a grant from the DSI-NRF Centre of Excellence in Mathematical and Statistical Sciences (CoE-MaSS), South Africa. Opinions expressed and conclusions arrived at are those of the authors and are not necessarily to be attributed to the CoE-MaSS. The second author was supported by the NRF of South Africa, grant number 115047. The results in this paper were obtained, in part, while both authors visited Leiden University from September 2021 to January 2022. This visit was funded by the European Union Erasmus+ ICM programme. The authors thank Prof. Marcel de Jeu and the Mathematical Institute at Leiden University for their hospitality. The authors thank the reviewer for a meticulous reading of the article along with a number of helpful suggestions.}
\subjclass[2010]{Primary 46M40; Secondary 46A40, 46E05}
\date{\tt {\today}}
\keywords{Vector lattices, direct limits, inverse limits, dual spaces, perfect spaces}
\begin{abstract} If $K$ is a compact Hausdorff space so that the Banach lattice ${\mathrm C }(K)$ is isometrically lattice isomorphic to a dual of some Banach lattice, then ${\mathrm C }(K)$ can be decomposed as the $\ell^\infty$-direct sum of the carriers of a maximal singular family of order continuous functionals on ${\mathrm C }(K)$. In order to generalise this result to the vector lattice $\cont(X)$ of continuous, real valued functions on a realcompact space $X$, we consider direct and inverse limits in suitable categories of vector lattices. We develop a duality theory for such limits and apply this theory to show that $\cont(X)$ is lattice isomorphic to the order dual of some vector lattice $\vlat{F}$ if and only if $\cont(X)$ can be decomposed as the inverse limit of the carriers of all order continuous functionals on $\cont(X)$. In fact, we obtain a more general result: A Dedekind complete vector lattice $\vlat{E}$ is perfect if and only if it is lattice isomorphic to the inverse limit of the carriers of a suitable family of order continuous functionals on $\vlat{E}$. A number of other applications are presented, including a decomposition theorem for order dual spaces in terms of spaces of Radon measures. \end{abstract}
\maketitle
\section{Introduction}\label{Section: Introduction}
Let $K$ be a compact Hausdorff space. A basic question concerning the Banach lattice ${\mathrm C }(K)$ is the following: Does there exist a Banach space (lattice) $\vlat{E}$ so that ${\mathrm C }(K)$ is isometrically (lattice) isomorphic to the dual $\vlat{E}^\ast$ of $\vlat{E}$? That is, does ${\mathrm C }(K)$ have a Banach space (lattice) predual? In general, the answer to this question is `no'. The unit ball of ${\mathrm C }[0,1]$ has only two extreme points, but the unit ball of the dual of an infinite dimensional Banach space has infinitely many extreme points. Hence ${\mathrm C }[0,1]$ is not the dual of any Banach space; hence also not of any Banach lattice. On the other hand, ${\mathrm C }(\beta \mathbb{N})$ is the dual of $\ell^1$. The problem is therefore to characterise those spaces $K$ for which ${\mathrm C }(K)$ is a dual Banach space (lattice). Combining two classic results of Dixmier \cite{Dixmier1951} and Grothendieck \cite{Grothendieck1955}, respectively, gives an answer to this question in the setting of Banach spaces, see also \cite{DalesDashiellLauStrass2016} for a recent presentation. The Banach lattice case is treated in \cite{Schaefer1974}.
In order to formulate this result we recall the following. A Radon measure $\mu$ on $K$ is called \emph{normal} if $|\mu|(B)=0$ for every closed nowhere dense subset $B$ of $K$. The space of all normal measures on $K$ is denoted $\vlat{N}(K)$. The space $K$ is called \emph{Stonean} if it is extremally disconnected; that is, the closure of every open set is open. $K$ is \emph{hyper-Stonean}\footnote{We feel obligated to recall Kelley's remark \cite{Kelley1959}: `In spite of my affection and admiration for Marshall Stone, I find the notion of a Hyper-Stone downright appalling.'} if it is Stonean and the union of the supports of the normal measures on $K$ is dense in $K$.
\begin{thm}\label{Thm: C(K) Dual space char} Let $K$ be a compact Hausdorff space. Consider the following statements. \begin{itemize}
\item[(i)] ${\mathrm C }(K)$ has a Banach lattice predual.
\item[(ii)] ${\mathrm C }(K)$ has a Banach space predual.
\item[(iii)] $K$ is hyper-Stonean.
\item[(iv)] Let $\cal{F}$ be a maximal singular family of normal probability measures on $K$, and for each $\mu\in \cal{F}$ let $S_\mu$ denote its support. Then
\[
{\mathrm C }(K)\ni u\longmapsto \left( \left.u\right|_{S_\mu} \right)_{\mu\in \cal{F}}\in \bigoplus_{\infty} {\mathrm C }(S_\mu)
\]
is an isometric lattice isomorphism. \end{itemize} Statements (i), (ii) and (iii) are equivalent, and each implies (iv). If $K$ is Stonean, then all four statements are equivalent.
Furthermore, in case ${\mathrm C }(K)$ has a Banach space predual $\vlat{E}$, this predual is also a Banach lattice predual and is unique up to isometric lattice isomorphism. In particular, $\vlat{E}$ is isometrically lattice isomorphic to $\vlat{N}(K)$. \end{thm}
This result can be reformulated by identifying $\vlat{N}(K)$ with the order continuous dual of ${\mathrm C }(K)$, via the isometric lattice isomorphism between the dual of ${\mathrm C }(K)$ and the space of Radon measures on $K$, and ${\mathrm C }(S_\mu)$ with the carrier of the corresponding functional on ${\mathrm C }(K)$.
\begin{thm}\label{Thm: C(K) Dual space char order dual version} Let $K$ be a compact Hausdorff space. Consider the following statements. \begin{itemize}
\item[(i)] ${\mathrm C }(K)$ has a Banach lattice predual.
\item[(ii)] ${\mathrm C }(K)$ has a Banach space predual.
\item[(iii)] ${\mathrm C }(K)$ is Dedekind complete and has a separating order continuous dual.
\item[(iv)] Let $\cal{F}$ be a maximal singular family of order continuous functionals on ${\mathrm C }(K)$, and for each $\varphi\in\cal{F}$ let $\vlat{C}_\varphi$ denote its carrier and $P_\varphi$ the band projection onto $\vlat{C}_\varphi$. Then
\[
{\mathrm C }(K)\ni u \longmapsto (P_\varphi u)_{\varphi\in \cal{F}}\in \bigoplus_{\infty} \vlat{C}_\varphi
\]
is an isometric lattice isomorphism. \end{itemize} Statements (i), (ii) and (iii) are equivalent, and each implies (iv). If $K$ is Stonean, then all four statements are equivalent.
Furthermore, in case ${\mathrm C }(K)$ has a Banach space predual $\vlat{E}$, this predual is also a Banach lattice predual and is unique up to isometric lattice isomorphism. In particular, $\vlat{E}$ is isometrically lattice isomorphic to the order continuous dual $\ordercontn{{\mathrm C }(K)}$ of ${\mathrm C }(K)$. \end{thm}
The above problem may be generalised to the class of realcompact spaces. Recall that a \emph{realcompact} space is a Tychonoff space $X$ which is homeomorphic to a closed subspace of some product of copies of $\mathbb{R}$. Equivalently, $X$ is realcompact if it is a Tychonoff space and for every point $x \in \beta X\setminus X$ (where $\beta X$ denotes the Stone-\v{C}ech compactification of $X$) there exists a real-valued, continuous function $u$ on $X$ which does not extend to a continuous, real-valued function on $X\cup\{x\}$. For every Tychonoff space $X$ there exists a unique (up to homeomorphism) realcompact space $\upsilon X$ so that $\cont(X)$ and ${\mathrm C }(\upsilon X)$ are isomorphic vector lattices, see for instance \cite{Hewitt1948}, \cite[Chapter 8]{GillmanJerison1960} and \cite[\S 3.11]{Engelking1989}. The realcompact space $\upsilon X$ is called the \emph{realcompactification} of $X$.\label{RC-ification}
Let $X$ be a realcompact space. Then ${\mathrm C }(X)$ is a vector lattice but, in general, not a Banach lattice. Hence we ask the following question: Does there exist a vector lattice $\vlat{E}$ so that $\orderdual{\vlat{E}}$ is lattice isomorphic to ${\mathrm C }(X)$? That is, does $\cont(X)$ have an \emph{order predual}? Xiong \cite{Xiong1983} obtained the following answer to this question.
\begin{thm} Let $X$ be a realcompact space. Denote by $S$ the union of the supports of all compactly supported normal measures\footnote{See Section \ref{Subsection: Realcompact spaces preliminaries}.} on $X$. The following statements are equivalent. \begin{itemize}
\item[(i)] There exists a vector lattice $\vlat{E}$ so that $\orderdual{\vlat{E}}$ is lattice isomorphic to ${\mathrm C }(X)$.
\item[(ii)] $\cont(X)$ is lattice isomorphic to $\orderdual{(\ordercontn{{\mathrm C }(X)})}$.
\item[(iii)] $X$ is extremally disconnected and $\upsilon S=X$. \end{itemize} \end{thm}
This result differs from the corresponding result for compact spaces in the following respects. Firstly, unlike in the Banach lattice setting, ${\mathrm C }(X)$ may have more than one order predual, see \cite{Xiong1983}. Secondly, the condition that $\cont(X)$ is Dedekind complete and has a separating order continuous dual does not imply that $\cont(X)$ has an order predual. Indeed, in \cite[p. 620]{Mazon1986} an example is provided of a realcompact space $X$ so that $\cont(X)$ is Dedekind complete and has a separating order continuous dual, but is not the order dual of any vector lattice. Furthermore, we have no counterpart of the decomposition \[ {\mathrm C }(K)\ni u \longmapsto (P_\varphi u)_{\varphi\in \cal{F}}\in \bigoplus_{\infty} \vlat{C}_\varphi. \] The naive extension of this decomposition to the class of extremally disconnected realcompact spaces does not provide a characterisation of those spaces $\cont(X)$ which admit an order predual. It will be shown in Section \ref{Subsection: Structure theorems for C(X) as a dual space}, Proposition \ref{Prop: Partial decomposition result for order dual C(X)}, that if $X$ is an extremally disconnected realcompact space and $\cal{F}$ is a maximal singular family in $\ordercontn{\cont(X)}$ so that \[ {\mathrm C }(X)\ni u \longmapsto (P_\varphi u)_{\varphi\in \cal{F}}\in \prod_{\varphi\in\cal{F}} \vlat{C}_\varphi \] is a lattice isomorphism, then $\ordercontn{\cont(X)}$ is an order predual for $\cont(X)$. The converse, however, is false, see Example \ref{Exm: C(X) decomponsition counterexample}.
In view of the above, we formulate the following problem. Let $X$ be an extremally disconnected realcompact space. Can the property `$\cont(X)$ admits an order predual' be characterised in terms of a suitable decomposition of $\cont(X)$ in terms of the carriers of order continuous functionals on $\cont(X)$? We solve this problem using direct and inverse limits in suitable categories of vector lattices.\footnote{In the literature, direct and inverse limits are also referred to as \emph{inductive} and \emph{projective} limits, respectively.}
Such limits are common in analysis, see for instance \cite{BeattieButzmann2002}, \cite[Chapter IV, \S 5]{Conway1990}, \cite[Chapter 5]{Bochner1955} and \cite{Choksi1958}. Direct limits of vector lattices were introduced by Filter \cite{Filter1988} and inverse limits of vector lattices have appeared sporadically in the literature, see for instance \cite{Dettweiler1979,Kuller1958}, but no systematic study of this construction has been undertaken in the context of vector lattices. We therefore take the opportunity to clarify the question of existence of inverse limits in certain categories of vector lattices. We also establish the permanence of a number of vector lattice properties under the inverse limit construction. Our treatment of direct and inverse limits of vector lattices is found in Sections \ref{Section: Inductive limits} and \ref{Section: Projective limits} respectively. Inspired by results in the theory of convergence spaces \cite{BeattieButzmann2002} we obtain duality results for direct and inverse limits of vector lattices, see Section \ref{Section: Dual spaces}. These results are roughly of the following form: If a vector lattice $\vlat{E}$ can be expressed as the direct (inverse) limit of some system of vector lattices, then the order (continuous) dual of $\vlat{E}$ can be expressed in a natural way as the inverse (direct) limit of a system of order (continuous) duals. In addition to a solution of the mentioned decomposition problem, a number of applications of the general theory of direct and inverse limits of vector lattices are presented in Section \ref{Section: Applications}. These include the computations of order (continuous) duals of function spaces and a structural characterisation of order dual spaces in terms of spaces of Borel measures.
In the next section, we state some preliminary definitions and results which are used in the rest of the paper.
\section{Preliminaries}\label{Section: Preliminaries}
\subsection{Vector lattices}\label{Subsection: Vector lattice preliminaries}
In order to make the paper reasonably self-contained we recall a few concepts and facts from the theory of vector lattices. For undeclared terms and notation we refer the reader to any of the standard texts in the field, for instance \cite{AliprantisBurkinshaw78,AliprantisBurkinshaw2006,LuxemburgZaanen1971RSI,Zaanen1983RSII}. Let $\vlat{E}$ and $\vlat{F}$ be real vector lattices. For $u,v\in \vlat{E}$ we write $u<v$ if $u\leq v$ and $u\neq v$. In particular, $0 < u$ means $u$ is positive but not zero. We note that if $\vlat{E}$ is a space of real-valued functions on a set $X$, then $0 < v$ does not mean that $0 < v(x)$ for every $x\in X$.
For sets $A,B\subseteq \vlat{E}$ let $A\vee B \ensuremath{\mathop{:}\!\!=} \{u\vee v ~:~ u\in A,~ v\in B\}$. The sets $A\wedge B$, $A^+$, $A^-$ and $|A|$ are defined similarly. Lastly, $A^d \ensuremath{\mathop{:}\!\!=} \{u\in \vlat{E} ~:~ |u|\wedge |v|=0 \text{ for all } v\in A\}$. We write $A\downarrow u$ if $A$ is downward directed and $\inf A=u$. Similarly, we write $B\uparrow u$ if $B$ is upward directed and $\sup B = u$.
Let $T:\vlat{E}\to \vlat{F}$ be a linear operator. Recall that $T$ is \emph{positive} if $T\left[ \vlat{E}^+ \right] \subseteq \vlat{F}^+$, and \emph{regular} if $T$ is the difference of two positive operators. $T$ is \emph{order bounded} if $T$ maps order bounded sets in $\vlat{E}$ to order bounded sets in $\vlat{F}$. If $\vlat{F}$ is Dedekind complete, $T$ is order bounded if and only if $T$ is regular \cite[Theorem 20.2]{Zaanen1997Introduction}. Further, $T$ is \emph{order continuous} if $\inf |T[A]|=0$ whenever $A\downarrow 0$ in $\vlat{E}$. Every order continuous operator is necessarily order bounded \cite[Theorem 1.54]{AliprantisBurkinshaw2006}. $T$ is a \emph{lattice homomorphism} if it preserves suprema and infima of finite sets, and a \emph{normal lattice homomorphism} if it preserves suprema and infima of arbitrary sets; equivalently, if it is an order continuous lattice homomorphism, see \cite[p. 103]{LuxemburgZaanen1971RSI}. A \emph{lattice isomorphism} is a bijective lattice homomorphism $T:\vlat{E}\to\vlat{F}$. An operator $T$ is a lattice isomorphism if and only if it is bijective and both $T$ and $T^{-1}$ are positive \cite[Theorem 19.3]{Zaanen1997Introduction}. We say that $T$ is \emph{interval preserving} if for all $0\leq u\in\vlat{E}$, $T[[0,u]]=[0,T(u)]$. An interval preserving map need not be a lattice homomorphism, nor is a (normal) lattice homomorphism in general interval preserving, see for instance \cite[p. 95]{AliprantisBurkinshaw2006}. However, the following holds. We have not found this result in the literature, and therefore we include the simple proof.
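The following elementary examples, included only for illustration and not used in the sequel, separate the two notions in the simplest possible setting.

```latex
% Illustration only; $\mathbb{R}^{2}$ carries the coordinate-wise order.
% $S$ is interval preserving but not a lattice homomorphism: for
% $(a,b)\geq 0$ one checks $S[[0,(a,b)]]=[0,a+b]$, yet
\[
S:\mathbb{R}^{2}\to\mathbb{R},~ S(a,b)\ensuremath{\mathop{:}\!\!=} a+b, \qquad
S\bigl((1,0)\wedge(0,1)\bigr)=S(0,0)=0\neq 1=S(1,0)\wedge S(0,1).
\]
% $T$ is a normal lattice homomorphism but not interval preserving:
% $T[[0,1]]$ is the diagonal segment, a proper subset of $[0,(1,1)]$.
\[
T:\mathbb{R}\to\mathbb{R}^{2},~ T(a)\ensuremath{\mathop{:}\!\!=}(a,a), \qquad
T[[0,1]]=\{(a,a) ~:~ 0\leq a\leq 1\}\subsetneq[0,(1,1)].
\]
```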
\begin{prop}\label{Prop: Interval Preserving vs Lattice Homomorphism} Let $\vlat{E}$ and $\vlat{F}$ be vector lattices and $T:\vlat{E}\to\vlat{F}$ a positive operator. The following statements are true. \begin{itemize}
\item[(i)] If $T$ is injective and interval preserving then $T$ is a lattice isomorphism onto an ideal in $\vlat{F}$, hence a normal lattice homomorphism into $\vlat{F}$.
\item[(ii)] If $T$ is a lattice homomorphism and $T[\vlat{E}]$ is an ideal in $\vlat{F}$ then $T$ is interval preserving. \end{itemize} \end{prop}
\begin{proof}[Proof of (i)] Assume that $T$ is injective and interval preserving. $T[\vlat{E}]$ is an ideal in $\vlat{F}$ by \cite[Proposition 14.7]{Kaplan1985}. Therefore, because $T$ is injective, it suffices to show that $T$ is a lattice homomorphism. To this end, consider $u,v\in \vlat{E}^+$. Then $0\leq T(u) \wedge T(v) \leq T(u)$ and $0\leq T(u)\wedge T(v) \leq T(v)$. Since $T$ is interval preserving and injective there exists $w\in [0,u]\cap[0,v] = [0, u\wedge v]$ so that $T(w)=T(u)\wedge T(v)$. We have \[ T(w)\leq T(u\wedge v)\leq T(u) \text{ and } T(w)\leq T(u\wedge v)\leq T(v). \] Hence $T(u)\wedge T(v) = T(w) \leq T(u\wedge v) \leq T(u)\wedge T(v)$ so that $T(u\wedge v) = T(w) = T(u)\wedge T(v)$.
To see that $T$ is a normal lattice homomorphism, let $A\downarrow 0$ in $\vlat{E}$. Then $T[A]\downarrow 0$ in $T[\vlat{E}]$ because $T$ is a lattice isomorphism onto $T[\vlat{E}]$. But $T[\vlat{E}]$ is an ideal in $\vlat{F}$, so $T[A]\downarrow 0$ in $\vlat{F}$. \end{proof}
\begin{proof}[Proof of (ii)] Assume that $T$ is a lattice homomorphism and $T[\vlat{E}]$ is an ideal in $\vlat{F}$. Let $0\leq u \in \vlat{E}$ and $0\leq v\leq T(u)$. Because $T[\vlat{E}]$ is an ideal in $\vlat{F}$ there exists $w\in \vlat{E}$ so that $T(w) = v$. Let $w'= (w\vee 0)\wedge u$. Then $0\leq w'\leq u$ and $T(w') = (v\vee 0)\wedge T(u) = v$. \end{proof}
\begin{prop}\label{Prop: Properties of band projections} Let $\vlat{E}$ be a vector lattice, $\vlat{A}$ and $\vlat{B}$ projection bands in $\vlat{E}$, $P_\vlat{A}$ and $P_{\vlat{B}}$ the band projections of $\vlat{E}$ onto $\vlat{A}$ and $\vlat{B}$, respectively, and $I_{\vlat{E}}$ the identity operator on $\vlat{E}$. Assume that $\vlat{A}\subseteq \vlat{B}$. The following statements are true. \begin{enumerate}
\item[(i)] $P_\vlat{A}$ is an order continuous lattice homomorphism.
\item[(ii)] $P_\vlat{A} \leq I_{\vlat{E}}$.
\item[(iii)] $P_\vlat{A} P_\vlat{B} = P_\vlat{B} P_\vlat{A} = P_\vlat{A}$.
\item[(iv)] $P_\vlat{A}$ is interval preserving. \end{enumerate} \end{prop}
\begin{proof} For (i), see \cite[Theorem 24.6 and Exercise 24.11]{LuxemburgZaanen1971RSI}. For (ii) and (iii), see \cite[Theorem~24.5~(ii)]{LuxemburgZaanen1971RSI} and \cite[Theorem~30.1~(i)]{LuxemburgZaanen1971RSI} respectively. Lastly, (iv) follows from Proposition \ref{Prop: Interval Preserving vs Lattice Homomorphism} (ii), since $P_{\vlat{A}}[\vlat{E}]=\vlat{A}$ is a band, hence an ideal, in $\vlat{E}$. \end{proof}
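As a minimal illustration of Proposition \ref{Prop: Properties of band projections}, consider the following; the space and bands are chosen only for this example.

```latex
% In $\vlat{E}=\mathbb{R}^{3}$ with the coordinate-wise order, take the
% projection bands $\vlat{A}=\{(a,0,0)\}\subseteq\vlat{B}=\{(a,b,0)\}$. Then
\[
P_{\vlat{A}}(a,b,c)=(a,0,0),\qquad P_{\vlat{B}}(a,b,c)=(a,b,0),
\]
% and one verifies directly that
\[
P_{\vlat{A}}P_{\vlat{B}}=P_{\vlat{B}}P_{\vlat{A}}=P_{\vlat{A}},\qquad
0\leq P_{\vlat{A}}(u)\leq u \text{ for all } u\in\vlat{E}^{+}.
\]
```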
The \emph{order dual} of $\vlat{E}$ is $\orderdual{\vlat{E}}\ensuremath{\mathop{:}\!\!=} \{\varphi: \vlat{E}\to\mathbb{R} ~:~ \varphi \text{ is order bounded}\}$, and the \emph{order continuous dual} of $\vlat{E}$ is $\ordercontn{\vlat{E}} \ensuremath{\mathop{:}\!\!=} \{\varphi\in \orderdual{\vlat{E}} ~:~ \varphi \text{ is order continuous}\}$. If $A\subseteq \vlat{E}$ and $B\subseteq \orderdual{\vlat{E}}$ we set \[ \ann{A} \ensuremath{\mathop{:}\!\!=} \{\varphi\in \orderdual{\vlat{E}} ~:~ \varphi(u)=0,~u\in A\},~~ \preann{B} \ensuremath{\mathop{:}\!\!=} \{u\in\vlat{E} ~:~ \varphi(u) = 0,~ \varphi\in B\}. \] For $\varphi\in \orderdual{\vlat{E}}$ the \emph{null ideal} (or absolute kernel) of $\varphi$ is \[
\nullid{\varphi} \ensuremath{\mathop{:}\!\!=} \{u\in \vlat{E} ~:~ |\varphi|(|u|)=0\}. \] The \emph{carrier} of $\varphi$ is $\carrier{\varphi} \ensuremath{\mathop{:}\!\!=} \nullid{\varphi}^d$. The null ideal $\nullid{\varphi}$ of $\varphi$ is an ideal in $\vlat{E}$ and its carrier $\carrier{\varphi}$ is a band; if $\varphi$ is order continuous then $\nullid{\varphi}$ is also a band in $\vlat{E}$, see for instance \cite[\S 90]{Zaanen1983RSII}.
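For a concrete, purely illustrative instance of these notions, take $\vlat{E}=\mathbb{R}^{2}$ with the coordinate-wise order and $\varphi(a,b)\ensuremath{\mathop{:}\!\!=} a$:

```latex
\[
\nullid{\varphi}=\{(0,b) ~:~ b\in\mathbb{R}\},\qquad
\carrier{\varphi}=\nullid{\varphi}^{d}=\{(a,0) ~:~ a\in\mathbb{R}\}.
\]
% Here $\varphi$ is order continuous, and indeed $\nullid{\varphi}$ is a band.
```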
Define $\sigma:\vlat{E}\ni u\mapsto \Psi_u\in \ordercontnbidual{\vlat{E}}$ by setting $\Psi_u(\varphi)\ensuremath{\mathop{:}\!\!=} \varphi(u)$ for all $u\in\vlat{E}$ and $\varphi\in \ordercontn{\vlat{E}}$. Then $\sigma$ is a lattice homomorphism, and, if $\preann{\ordercontn{\vlat{E}}}=\{0\}$, $\sigma$ is injective, see \cite[p.~404~-~405]{Zaanen1983RSII}. We call $\vlat{E}$ \emph{perfect} if $\sigma[\vlat{E}]=\ordercontnbidual{\vlat{E}}$.
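Two classical examples, recorded here only for orientation (the identifications below are standard facts and are not used in our proofs):

```latex
% $\ell^{1}$ is perfect:
\[
\ordercontn{(\ell^{1})}=\ell^{\infty},\qquad
\ordercontn{(\ell^{\infty})}=\ell^{1},\qquad\text{so}\qquad
\sigma[\ell^{1}]=\ordercontnbidual{(\ell^{1})}.
\]
% $c_{0}$ is not perfect:
\[
\ordercontn{(c_{0})}=\ell^{1},\qquad
\ordercontnbidual{(c_{0})}=\ell^{\infty}\supsetneq c_{0}.
\]
```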
In the following theorem, we briefly recall some basic facts concerning the order adjoint of a positive operator $T:\vlat{E}\to\vlat{F}$ which we make use of in the sequel.
\begin{thm}\label{Thm: Adjoints of interval preserving vs lattice homomorphisms} Let $\vlat{E}$ and $\vlat{F}$ be vector lattices and $T:\vlat{E}\to\vlat{F}$ a positive operator. Denote by $T^\sim :\vlat{F}^\sim \to \vlat{E}^\sim$ its order adjoint, $\varphi\mapsto \varphi\circ T$. The following statements are true. \begin{itemize}
\item[(i)] $T^\sim$ is positive and order continuous.
\item[(ii)] If $T$ is order continuous then $T^\sim[\ordercontn{\vlat{F}}]\subseteq \ordercontn{\vlat{E}}$.
\item[(iii)] If $T$ is interval preserving then $T^\sim$ is a lattice homomorphism.
\item[(iv)] If $T$ is a lattice homomorphism then $T^\sim$ is interval preserving. The converse is true if $\preann{\orderdual{\vlat{F}}}=\{0\}$. \end{itemize} \end{thm}
\begin{proof} For (i), see \cite[14.2 \& 14.5]{Kaplan1985}. The statement in (ii) follows directly from the fact that composition of order continuous operators is order continuous. For (iii), see \cite[14.13]{Kaplan1985}. The first statement in (iv) is proven in \cite[Theorem~2.16~(1)]{AliprantisBurkinshaw2006}. The second statement is proven in \cite[Theorem~2.20]{AliprantisBurkinshaw2006}. We note that although \cite{AliprantisBurkinshaw2006} declares a blanket assumption at the start of the book that all vector lattices under consideration in \cite{AliprantisBurkinshaw2006} are Archimedean, the proofs of \cite[Theorems~2.16, 2.20]{AliprantisBurkinshaw2006} do not make use of this assumption. \end{proof}
\begin{prop}\label{Prop: Image of adjoint of lattice homomorphism} Let $\vlat{E}$ and $\vlat{F}$ be vector lattices and $T:\vlat{E}\to\vlat{F}$ a linear lattice homomorphism onto $\vlat{F}$. The following statements are true. \begin{itemize}
\item[(i)] $T^\sim[\vlat{F}^\sim] = \ann{\ker(T)}$.
\item[(ii)] If $\vlat{E}$ is Archimedean and $T$ is order continuous then $T^\sim[\ordercontn{\vlat{F}}] = \ann{\ker(T)}\cap\ordercontn{\vlat{E}}$. \end{itemize} \end{prop}
\begin{proof}[Proof of (i)] Let $\varphi\in\vlat{F}^\sim$. If $u\in\ker(T)$ then $T^\sim(\varphi) (u) = \varphi (T(u)) = \varphi(0) = 0$. Hence $\varphi\in\ann{\ker(T)}$.
Let $\psi\in \ann{\ker(T)}$. Define $\varphi:\vlat{F}\to\mathbb{R}$ by setting $\varphi(v) = \psi(u)$ if $v=T(u)$. Since $T$ is surjective and $\psi$ vanishes on $\ker(T)$, $\varphi$ is a well-defined linear functional. Moreover, $\varphi\in \vlat{F}^\sim$ and $T^\sim(\varphi) = \psi$. \end{proof}
\begin{proof}[Proof of (ii)] It follows from (i) and Theorem \ref{Thm: Adjoints of interval preserving vs lattice homomorphisms} (ii) that $T^\sim[\ordercontn{\vlat{F}}] \subseteq \ann{\ker(T)}\cap\ordercontn{\vlat{E}}$. We show that if $T^\sim(\varphi) \in \ordercontn{\vlat{E}}$ for some $\varphi\in \vlat{F}^\sim$ then $\varphi\in \ordercontn{\vlat{F}}$. From this and (i) it follows that $T^\sim[\ordercontn{\vlat{F}}] = \ann{\ker(T)}\cap\ordercontn{\vlat{E}}$. We observe that it suffices to consider positive $\varphi\in\vlat{F}^\sim$. Indeed, $T$ is a surjective lattice homomorphism and therefore also interval preserving. Hence by Theorem \ref{Thm: Adjoints of interval preserving vs lattice homomorphisms} (iii), $T^\sim$ is a lattice homomorphism.
Suppose that $0\leq \varphi\in\vlat{F}^\sim$ and that $T^\sim(\varphi) \in \ordercontn{\vlat{E}}$. Let $A\downarrow 0$ in $\vlat{F}$. Define $B\ensuremath{\mathop{:}\!\!=} T^{-1}[A]\cap \vlat{E}^+$. Then $B$ is downward directed and $T[B]=A$. In particular, $\varphi[A] = T^\sim(\varphi)[B]$. Let $C\ensuremath{\mathop{:}\!\!=} \{w\in \vlat{E} ~:~ 0\leq w\leq v \text{ for all } v\in B\}$. If $w\in C$ then $0\leq T(w) \leq u$ for all $u\in A$ so that $T(w) = 0$. Hence $C\subseteq \ker(T)$. Since $\vlat{E}$ is Archimedean, we have $B-C\downarrow 0$ in $\vlat{E}$, see \cite[Theorem~$22.5$]{LuxemburgZaanen1971RSI}. Since $T^\sim(\varphi)$ is order continuous, $T^\sim(\varphi)[B-C]\downarrow 0$; that is, for every $\epsilon>0$ there exist $v\in B$ and $w\in C$ so that $\varphi(T(v)) = \varphi (T(v-w)) = T^\sim(\varphi)(v-w)<\epsilon$. Hence, for every $\epsilon>0$ there exists $u\in A$ so that $\varphi(u)<\epsilon$. This shows that $\varphi[A]\downarrow 0$ so that $\varphi\in \ordercontn{\vlat{F}}$ as required. \end{proof}

Let $I$ be a non-empty set and let $\vlat{E}_\alpha$ be a vector lattice for every $\alpha\in I$. Then $\displaystyle\prod_{\alpha\in I} \vlat{E}_\alpha$ is a vector lattice with respect to the coordinate-wise operations. If the index set is clear from the context, we omit it and write $\displaystyle\prod \vlat{E}_\alpha$. For $\beta\in I$ let $\pi_\beta:\displaystyle\prod\vlat{E}_\alpha\to \vlat{E}_\beta$ be the coordinate projection onto $\vlat{E}_\beta$ and $\iota_\beta:\vlat{E}_\beta\to\displaystyle\prod \vlat{E}_\alpha$ the right inverse of $\pi_\beta$ given by \[ \pi_\alpha(\iota_\beta(u))= \left\{\begin{array}{lll} u & \text{if} & \alpha=\beta
\\ 0 & \text{if} & \alpha\neq \beta.\\ \end{array}\right. \] We denote by $\displaystyle\bigoplus \vlat{E}_\alpha$ the ideal in $\displaystyle\prod \vlat{E}_\alpha$ consisting of $u\in \displaystyle\prod \vlat{E}_\alpha$ for which $\pi_\alpha(u)\neq 0$ for only finitely many $\alpha\in I$. The following properties of $\displaystyle\prod \vlat{E}_\alpha$ and $\displaystyle \bigoplus \vlat{E}_\alpha$ are used frequently in the sequel.
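The simplest instance of these constructions, given here only for orientation, takes every factor equal to $\mathbb{R}$.

```latex
% With $I=\mathbb{N}$ and $\vlat{E}_n=\mathbb{R}$ for every $n$:
\[
\prod_{n\in\mathbb{N}}\mathbb{R}=\mathbb{R}^{\mathbb{N}}
\quad\text{(all real sequences, coordinate-wise order)},
\]
\[
\bigoplus_{n\in\mathbb{N}}\mathbb{R}
=\{u\in\mathbb{R}^{\mathbb{N}} ~:~ u_n\neq 0 \text{ for only finitely many } n\},
\]
% with $\pi_k(u)=u_k$, and $\iota_k(a)$ the sequence with $a$ in position $k$
% and $0$ elsewhere.
```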
\begin{thm}\label{Thm: Properties of product of vector lattices.} Let $I$ be a non-empty set and $\vlat{E}_\alpha$ a vector lattice for every $\alpha\in I$. The following statements are true. \begin{itemize}
\item[(i)] The coordinate projections $\pi_\beta$ and their right inverses $\iota_\beta$ are normal, interval preserving lattice homomorphisms.
\item[(ii)] $\displaystyle\prod \vlat{E}_\alpha$ is Archimedean if and only if each $\vlat{E}_\alpha$ is Archimedean.
\item[(iii)] $\displaystyle\prod \vlat{E}_\alpha$ is Dedekind complete if and only if each $\vlat{E}_\alpha$ is Dedekind complete.
\item[(iv)] If $I$ has non-measurable cardinal, then the order dual of $\displaystyle\prod \vlat{E}_\alpha$ is $\displaystyle \bigoplus \vlat{E}_\alpha^\sim$.
\item[(v)] The order continuous dual of $\displaystyle\prod\vlat{E}_\alpha$ is $\displaystyle \bigoplus \ordercontn{(\vlat{E}_\alpha)}$.
\item[(vi)] The order dual of $\displaystyle \bigoplus \vlat{E}_\alpha$ is $\displaystyle\prod \vlat{E}_\alpha^\sim$.
\item[(vii)] The order continuous dual of $\displaystyle \bigoplus \vlat{E}_\alpha$ is $\displaystyle\prod \ordercontn{\left( \vlat{E}_\alpha \right)}$. \end{itemize} \end{thm}
We leave the straightforward proofs of (i), (ii), (iii), (vi) and (vii) to the reader.
\begin{proof}[Proof of (iv)] Assume that $I$ has non-measurable cardinal. By (i) of this theorem and Theorem \ref{Thm: Adjoints of interval preserving vs lattice homomorphisms} (iii) and (iv), $\iota_\beta^\sim : \orderdual{\left(\displaystyle\prod \vlat{E}_\alpha\right)}\to \orderdual{\vlat{E}}_\beta$ is an interval preserving normal lattice homomorphism for every $\beta\in I$. Because each $\varphi\in \orderdual{\left(\displaystyle\prod \vlat{E}_\alpha\right)}$ is linear and order bounded, the set $I_\varphi \ensuremath{\mathop{:}\!\!=} \{\beta \in I ~:~ \iota_\beta^\sim(\varphi)\neq 0 \}$ is finite. Define $S:\orderdual{\left(\displaystyle\prod \vlat{E}_\alpha\right)} \to \displaystyle \bigoplus \orderdual{\vlat{E}}_\alpha$ by setting \[ S(\varphi) \ensuremath{\mathop{:}\!\!=} (\iota_\alpha^\sim(\varphi))_{\alpha\in I},~~ \varphi\in \orderdual{\left(\displaystyle\prod \vlat{E}_\alpha\right)}. \]Then $S$ is a lattice homomorphism. It remains to verify that $S$ is bijective.
We show that $S$ is injective. Let $0\neq \varphi \in \orderdual{\left(\displaystyle\prod \vlat{E}_\alpha\right)}$. Fix $0\leq u\in\displaystyle\prod\vlat{E}_\alpha$ so that $\varphi(u)\neq 0$. For $f\in \mathbb{R}^I$ let $fu\in\displaystyle\prod \vlat{E}_\alpha$ be defined by $\pi_\alpha (fu) = f(\alpha)\pi_\alpha (u)$, $\alpha\in I$. Define $\hat\varphi : \mathbb{R}^I\to \mathbb{R}$ by setting \[ \hat \varphi (f) \ensuremath{\mathop{:}\!\!=} \varphi(fu),~~ f\in \mathbb{R}^I. \] Then $\hat\varphi$ is a non-zero order bounded linear functional on $\mathbb{R}^I$. Because $I$ has non-measurable cardinal, $I$ with the discrete topology is realcompact, see \cite[\S 12.2]{GillmanJerison1960}. Therefore there exists a non-zero finitely supported and countably additive measure $\mu$ on the powerset $2^I$ of $I$ so that \[ \hat\varphi (f) = \int_I f \thinspace d\mu = \sum_{\alpha \in I} f(\alpha)\mu(\alpha),~~ f\in \mathbb{R}^I, \] see \cite[Theorem 4.5]{GouldMahowald1962}. Let $\alpha$ be in the support of $\mu$, and let $g$ be the indicator function of $\{\alpha\}$. Then $0\neq \mu(\alpha)=\hat \varphi(g) = \varphi(gu) = \iota_\alpha^\sim(\varphi)(\pi_\alpha(u))$. Therefore $S(\varphi)\neq 0$ so that $S$ is injective.
To see that $S$ is surjective, observe that for every $\beta\in I$, $\pi_\beta^\sim :\orderdual{\vlat{E}}_\beta\to \orderdual{\left(\displaystyle\prod \vlat{E}_\alpha\right)}$ is an interval preserving normal lattice homomorphism by (i) of this theorem and Theorem \ref{Thm: Adjoints of interval preserving vs lattice homomorphisms} (iii) and (iv). Define $T: \displaystyle \bigoplus \orderdual{\vlat{E}}_\alpha\to \orderdual{\left(\displaystyle\prod \vlat{E}_\alpha\right)}$ by setting \[ T(\psi) \ensuremath{\mathop{:}\!\!=} \sum \pi_\alpha^\sim(\psi_\alpha),~~ \psi=(\psi_\alpha)\in \bigoplus \orderdual{\vlat{E}}_\alpha. \] Then $T$ is a positive operator. We claim that $S\circ T$ is the identity on $\displaystyle \bigoplus \orderdual{\vlat{E}}_\alpha$. Indeed, for any $\psi\in \displaystyle \bigoplus \orderdual{\vlat{E}}_\alpha$ we have \[ S(T(\psi)) = \sum_{\alpha\in I}(\iota_\beta^\sim(\pi_\alpha^\sim(\psi_\alpha)))_{\beta\in I} = \sum_{\alpha\in I}(\psi_\alpha\circ \pi_\alpha\circ\iota_\beta )_{\beta\in I}. \]By definition of the $\iota_\beta$ it follows that $S(T(\psi)) = \psi$ which verifies our claim. Therefore $S$ is a lattice isomorphism. \end{proof}
\begin{proof}[Proof of (v)] Define $S:\orderdual{\left(\displaystyle\prod \vlat{E}_\alpha\right)} \to \displaystyle \bigoplus \orderdual{\vlat{E}}_\alpha$ as in the proof of (iv). By Theorem \ref{Thm: Adjoints of interval preserving vs lattice homomorphisms} (ii), $S$ maps $\ordercontn{\left(\displaystyle\prod \vlat{E}_\alpha\right)}$ into $\displaystyle \bigoplus \ordercontn{(\vlat{E}_\alpha)}$. A similar argument to that given in the proof of (iv) shows that $S$ is a surjective lattice homomorphism. Hence it remains to show that $S$ is injective.
Let $0\leq \varphi \in \ordercontn{\left(\displaystyle\prod \vlat{E}_\alpha\right)}$ and suppose that $S(\varphi) = 0$. Then $\iota_\beta^\sim(\varphi) = 0$ for every $\beta\in I$. But for any $0\leq u \in \displaystyle\prod\vlat{E}_\alpha$, \[ u = \sup\left\lbrace \sum_{\alpha \in F}\iota_\alpha(u) ~:~F\subseteq I \text{ is finite} \right\rbrace. \] Therefore by the order continuity of $\varphi$, \[ \varphi(u) = \sup \left\lbrace \sum_{\alpha \in F}\iota_\alpha^\sim (\varphi)(u) ~:~ F\subseteq I \text{ is finite}\right\rbrace = 0 \] for all $0\leq u \in \displaystyle\prod\vlat{E}_\alpha$; hence $\varphi=0$. Because $S$ is a lattice homomorphism it follows that, for all $\varphi\in \ordercontn{\left(\displaystyle\prod \vlat{E}_\alpha\right)}$, if $S(\varphi) = 0$ then $\varphi=0$; that is, $S$ is injective. \end{proof}
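For orientation, we note what Theorem \ref{Thm: Properties of product of vector lattices.} yields in the scalar case $\vlat{E}_n=\mathbb{R}$, $I=\mathbb{N}$ (a set of non-measurable cardinal); the identifications below are standard.

```latex
\[
\orderdual{\left(\mathbb{R}^{\mathbb{N}}\right)}
=\ordercontn{\left(\mathbb{R}^{\mathbb{N}}\right)}
=\bigoplus_{n\in\mathbb{N}}\mathbb{R},
\qquad
\orderdual{\left(\bigoplus_{n\in\mathbb{N}}\mathbb{R}\right)}
=\ordercontn{\left(\bigoplus_{n\in\mathbb{N}}\mathbb{R}\right)}
=\mathbb{R}^{\mathbb{N}}.
\]
% In particular, both $\mathbb{R}^{\mathbb{N}}$ and $\bigoplus\mathbb{R}$
% are perfect.
```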
\begin{remark} We note that the statement in Theorem \ref{Thm: Properties of product of vector lattices.} (iv) is not true if $I$ has measurable cardinal: In this case the map $S$ in the proof of Theorem \ref{Thm: Properties of product of vector lattices.} (iv) is not injective. To see this, suppose that $I$ has measurable cardinal. Then $I$ with the discrete topology is not realcompact. We identify $\mathbb{R}^I$ with ${\mathrm C } (\upsilon I)$. Let $x\in \upsilon I\setminus I$. Then $\delta_x :\mathbb{R}^I \ni u\mapsto u(x)\in \mathbb{R}$ is a non-zero, positive linear functional on $\mathbb{R}^I$, but $S(\delta_x)=0$. \end{remark}
We now define the categories which are the setting of this paper. It is readily verified that these are indeed categories.
\begin{table}[H]
\begin{tabular}{ |l|l|l| } \hline ${}$ & \quad\textsc{Objects}\quad & \quad\textsc{Morphisms}\quad \\ \hline ${\bf VL}$ & Vector lattices & Lattice homomorphisms\\ \hline ${\bf NVL}$ & Vector lattices & Normal lattice homomorphisms\\ \hline ${\bf IVL}$ & Vector lattices & Interval preserving lattice homomorphisms\\ \hline ${\bf NIVL}$ & Vector lattices & Normal, interval preserving lattice homomorphisms\\ \hline \end{tabular}
\end{table}
We refer to these four categories as \emph{categories of vector lattices}. If ${\bf C}$ is a category of vector lattices, then a ${\bf C}$-morphism is a morphism within the category ${\bf C}$. Below we depict the subcategory relationships between the categories of vector lattices under consideration.
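One natural depiction, reconstructed here, is the following; an arrow from ${\bf C}$ to ${\bf D}$ indicates that ${\bf C}$ is a subcategory of ${\bf D}$ (same objects, a smaller class of morphisms).

```latex
\[
\begin{tikzcd}[cramped]
 & {\bf VL} & \\
{\bf NVL} \arrow[ru, hook] & & {\bf IVL} \arrow[lu, hook'] \\
 & {\bf NIVL} \arrow[lu, hook'] \arrow[ru, hook] &
\end{tikzcd}
\]
```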
\subsection{Measures on topological spaces}\label{Subsection: Realcompact spaces preliminaries}
Because the terminology related to measures on topological spaces varies across the literature, we declare our conventions. Let $X$ be a Hausdorff topological space. For a function $u:X\to\mathbb{R}$ we denote by $\zeroset{u}$ the \emph{zero set} of $u$ and by $\cozeroset{u}$ its \emph{co-zero set}, that is, the complement of $\zeroset{u}$. If $A\subseteq X$ then ${\mathbf 1}_A$ denotes the indicator function of $A$.
Denote by $\borel{X}$ the Borel $\sigma$-algebra generated by the open sets in $X$. A \emph{(signed) Borel measure} on $X$ is a real-valued and $\sigma$-additive function on $\borel{X}$. We denote the space of all signed Borel measures on $X$ by $\vlat{M}_\sigma(X)$. This space is a Dedekind complete vector lattice with respect to the pointwise operations and order \cite[Theorem 27.3]{Zaanen1997Introduction}. In particular, for $\mu,\nu\in \vlat{M}_\sigma(X)$, \[ (\mu\vee \nu)(B) = \sup\left\lbrace \mu(A)+\nu(B\setminus A) ~:~ A\subseteq B,~~ A \in \borel{X} \right\rbrace,~~ B \in \borel{X}. \]For any upward directed set $D\subseteq \vlat{M}_\sigma(X)^+$ with $\sup D=\nu$ in $\vlat{M}_\sigma(X)$, \[ \nu(B) = \sup\{\mu(B) ~:~ \mu\in D\},~~ B \in \borel{X}. \]Following Bogachev \cite{Bogachev2007}, we call a Borel measure $\mu$ on $X$ a \emph{Radon measure} if for every $B \in \borel{X}$, \[
|\mu|(B)=\sup\{|\mu|(K) ~:~ K\subseteq B \text{ is compact}\}. \]
Equivalently, $\mu$ is Radon if for every $B \in \borel{X}$ and every $\epsilon>0$ there exists a compact set $K\subseteq B$ so that $|\mu|(B\setminus K)<\epsilon$. Observe that if $\mu$ is Radon, then also \[
|\mu|(B)=\inf\{ |\mu|(U) ~:~ U\supseteq B \text{ is open}\}. \] Denote the space of Radon measures on $X$ by $\vlat{M}(X)$.
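Two familiar examples, included only to fix ideas: every Dirac measure is Radon, as is Lebesgue measure on $[0,1]$.

```latex
% For $x\in X$, the Dirac measure
\[
\delta_x(B)\ensuremath{\mathop{:}\!\!=}
\left\{\begin{array}{lll} 1 & \text{if} & x\in B\\
0 & \text{if} & x\notin B,\\ \end{array}\right.
\qquad B\in\borel{X},
\]
% is Radon: if $x\in B$ then the compact set $K=\{x\}\subseteq B$ satisfies
% $\delta_x(B\setminus K)=0$. Likewise, Lebesgue measure on the Borel subsets
% of $[0,1]$ is Radon, by inner regularity with respect to compact sets.
```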
Recall that the \emph{support} of a Borel measure $\mu$ on $X$ is defined as \[
S_\mu \ensuremath{\mathop{:}\!\!=} \{x\in X ~:~ |\mu|(U)>0 \text{ for all } U\ni x \text{ open}\}. \]
A non-zero Borel measure $\mu$ may have empty support, and even if $S_\mu\neq \emptyset$, it may have measure zero \cite[Vol. II, Example 7.1.3]{Bogachev2007}. However, if $\mu$ is a non-zero Radon measure, then $S_\mu \neq \emptyset$ and $|\mu|(S_\mu)=|\mu|(X)$; in fact, for every $B \in \borel{X}$, $|\mu|(B)=|\mu|(B\cap S_\mu)$. We list the following useful properties of the support of a measure; the proofs are straightforward and therefore omitted.
\begin{prop}\label{Prop: Properties of support of a measure} Let $\mu$ and $\nu$ be Radon measures on $X$. The following statements are true. \begin{enumerate}
\item[(i)] If $|\mu|\leq |\nu|$ then $S_\mu\subseteq S_\nu$.
\item[(ii)] $S_{\mu+\nu}\subseteq S_{|\mu|+|\nu|}$.
\item[(iii)] $S_{|\mu|+|\nu|} = S_\mu \cup S_\nu$. \end{enumerate} \end{prop}
A Radon measure $\mu$ is called \emph{compactly supported} if $S_\mu$ is compact. We denote the space of all compactly supported Radon measures on $X$ by $\vlat{M}_c(X)$. Further, a Radon measure $\mu$ on $X$ is called a \emph{normal measure} if $|\mu|(L)=0$ for all closed nowhere dense sets $L$ in $X$. The space of all normal Radon measures on $X$ is denoted by $\vlat{N}(X)$, and the space of compactly supported normal Radon measures by $\vlat{N}_c(X)$.
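Not every Radon measure is normal; the following standard examples, given only for illustration, show the distinction.

```latex
% Illustration only; not used in the sequel.
On $X=\mathbb{R}$ the Dirac measure $\delta_{0}$ is a compactly supported
Radon measure which is not normal: $\{0\}$ is closed and nowhere dense, yet
$\delta_{0}(\{0\})=1$. Lebesgue measure on $[0,1]$ is likewise not normal,
since a ``fat'' Cantor set is closed and nowhere dense but has positive
Lebesgue measure. By contrast, if $X$ is discrete then the only nowhere dense
subset of $X$ is $\emptyset$, and hence every Radon measure on $X$ is normal.
```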
\begin{thm} The following statements are true. \begin{enumerate}
\item[(i)] $\vlat{M}(X)$ is a band in $\vlat{M}_\sigma(X)$.
\item[(ii)] $\vlat{M}_c(X)$ is an ideal in $\vlat{M}(X)$.
\item[(iii)] $\vlat{N}(X)$ is a band in $\vlat{M}(X)$.
\item[(iv)] $\vlat{N}_c(X)$ is a band in $\vlat{M}_c(X)$. \end{enumerate} \end{thm} \begin{proof}
For the proof of (i), let $\mu,\nu\in\vlat{M}(X)$. Consider a Borel set $B$ and a real number $\epsilon>0$. There exists a compact set $K\subseteq B$ so that $|\mu|(B\setminus K)<\epsilon/2$ and $|\nu|(B\setminus K)<\epsilon/2$. We have $|\mu+\nu|(B\setminus K)\leq |\mu|(B\setminus K) + |\nu|(B\setminus K)<\epsilon$. Therefore $\mu+\nu \in \vlat{M}(X)$. A similar argument shows that $a\mu\in \vlat{M}(X)$ for all $a\in\mathbb{R}$. It also follows in this way that for all $\nu\in\vlat{M}_\sigma(X)$ and $\mu\in \vlat{M}(X)$, if $|\nu|\leq |\mu|$ then $\nu\in \vlat{M}(X)$. By definition of a Radon measure, $|\mu|\in\vlat{M}(X)$ whenever $\mu\in\vlat{M}(X)$. Therefore $\vlat{M}(X)$ is an ideal in $\vlat{M}_\sigma(X)$.
To see that $\vlat{M}(X)$ is a band in $\vlat{M}_\sigma(X)$, consider an upward directed subset $D$ of $\vlat{M}(X)^+$ so that $\sup D=\nu$ in $\vlat{M}_\sigma(X)$. Fix a Borel set $B$ and a real number $\epsilon>0$. There exists $\mu\in D$ so that $\nu(B)-\epsilon/2<\mu(B)$. But $\mu$ is a Radon measure, so there exists a compact subset $K$ of $B$ so that $\mu(K)>\mu(B)-\epsilon/2$. Therefore $\nu(K)\geq \mu(K)>\mu(B)-\epsilon/2> \nu(B)-\epsilon$. Therefore $\nu \in \vlat{M}(X)$ so that $\vlat{M}(X)$ is a band in $\vlat{M}_\sigma(X)$.
The statement in (ii) follows immediately from the definition of the support of a measure and Proposition \ref{Prop: Properties of support of a measure}. It is clear that $\vlat{N}(X)$ is an ideal in $\vlat{M}(X)$, and that it is a band follows from the expression for suprema in $\vlat{M}_\sigma(X)$. Hence (iii) is true. That (iv) is true follows immediately from (iii). \end{proof}
Unsurprisingly, there is a close connection between Radon measures on $X$ and order bounded linear functionals on $\cont(X)$. Theorem \ref{Thm: Riesz Representation Theorem for C(X)} to follow is implicit in \cite[Corollary 1, p. 106; Theorems 4.2 \& 4.5]{GouldMahowald1962}, see also \cite{Hewitt1950} where a treatment is given in terms of Baire measures. In order to facilitate the discussion of order continuous functionals, we include a short proof.
\begin{thm}\label{Thm: Riesz Representation Theorem for C(X)} Let $X$ be a realcompact space. There is a lattice isomorphism $\orderdual{\cont(X)}\ni \varphi \longmapsto \mu_\varphi \in \vlat{M}_c(X)$ so that for every $\varphi\in \orderdual{\cont(X)}$, \[ \varphi(u) = \int_X u \thinspace d\mu_\varphi,~~ u\in\cont(X). \] \end{thm}
\begin{proof} We identify the space $\contb(X)$ with ${\mathrm C }\left(\beta X\right)$. Because $\contb(X)$ is an ideal in $\cont(X)$, the restriction map from $\orderdual{\cont(X)}$ to $\orderdual{\contb(X)}$ is a lattice homomorphism \cite[Section 1.3, Exercise 1]{AliprantisBurkinshaw2006}. It follows from \cite[Theorem 1]{Hewitt1950} that this map is injective. Thus by the Riesz Representation Theorem \cite[Theorem 18.4.1]{Semadeni1971}, for every $\varphi\in\orderdual{\cont(X)}$ there exists a unique Radon measure $\nu_\varphi$ on $\beta X$ so that \[ \varphi(u) = \int_{\beta X}u \thinspace d\nu_\varphi,~~ u\in\contb(X). \] Furthermore, the map $\varphi\mapsto \nu_\varphi$ is a lattice isomorphism onto its range.
We claim that the range of this map is $\vlat{M}_0(\beta X)\ensuremath{\mathop{:}\!\!=} \{\nu \in \vlat{M}(\beta X) ~:~ S_\nu \subseteq X\}$. According to \cite[Theorem 4.4]{GouldMahowald1962}, $S_{\nu_\varphi}\subseteq X$ for every $\varphi\in \orderdual{\cont(X)}$. Hence $\nu_\varphi\in \vlat{M}_0(\beta X)$. Conversely, let $\nu\in \vlat{M}_0(\beta X)$. Since $S_\nu \subseteq X$ is compact in $\beta X$, hence also in $X$, \[ \psi(u) \ensuremath{\mathop{:}\!\!=} \int_{S_{\nu}} u \thinspace d\nu,~~ u\in\cont(X) \] defines an order bounded functional on $\cont(X)$. For every $u\in \contb(X)$ we have \[ \psi(u) = \int_{S_{\nu}} u \thinspace d\nu = \int_{\beta X} u d\nu. \] Therefore $\nu_\psi = \nu$ which establishes our claim.
We have shown that $\orderdual{\cont(X)}\ni \varphi\mapsto \nu_\varphi \in \vlat{M}_0(\beta X)$ is a lattice isomorphism. We now show that $\vlat{M}_0(\beta X)$ is isomorphic to $\vlat{M}_c(X)$.
Let $\nu\in \vlat{M}_0(\beta X)$. The Borel sets in $X$ are precisely the traces on $X$ of Borel sets in $\beta X$ \cite[p. 108]{GouldMahowald1962}. Furthermore, if $B', B'' \in \borel{\beta X}$ so that $B'\cap X = B''\cap X$ then $\nu(B')=\nu(B'\cap S_\nu)=\nu(B''\cap S_\nu)=\nu(B'')$. For $B \in \borel{X}$ define \[ \nu^\ast (B) \ensuremath{\mathop{:}\!\!=} \nu(B') \text{ with }B'\in \borel{\beta X} \text{ so that }B'\cap X=B. \] It follows from the previous observation that $\nu^\ast$ is well-defined. It follows easily that $\nu^\ast\in \vlat{M}_c(X)$, and that the map $\vlat{M}_0(\beta X)\ni \nu\mapsto \nu^\ast\in \vlat{M}_c(X)$ is injective, linear, and bipositive. Let $\mu\in \vlat{M}_c(X)$. For every $B\in \borel{\beta X}$ let $\nu(B) \ensuremath{\mathop{:}\!\!=} \mu(B\cap X)$. Then $\nu\in \vlat{M}_0(\beta X)$ and $\nu^\ast = \mu$. Therefore $\vlat{M}_0(\beta X)\ni \nu\mapsto \nu^\ast\in \vlat{M}_c(X)$ is a lattice isomorphism.
For $\varphi \in \orderdual{\cont(X)}$ let $\mu_\varphi \ensuremath{\mathop{:}\!\!=} (\nu_\varphi)^\ast$. Then $\orderdual{\cont(X)}\ni \varphi\mapsto \mu_\varphi \in \vlat{M}_c(X)$ is a lattice isomorphism. It remains to show that, for every $\varphi \in \orderdual{\cont(X)}$, \[ \varphi(u) = \int_{X} u \thinspace d\mu_\varphi,~~ u\in\cont(X). \] Fix $0\leq \varphi \in \orderdual{\cont(X)}$ and $u\in\cont(X)^+$. A minor modification of the proof of \cite[Theorem 3.1]{GouldMahowald1962} shows that there exists a natural number $N$ so that $\varphi( u )=\varphi( u \wedge n{\mathbf 1}_X)$ for every $n\geq N$. But \[ \int_X u \thinspace d\mu_\varphi = \sup_{n\in\mathbb{N}} \int_X u \wedge n{\mathbf 1}_X \thinspace d\mu_\varphi, \] and, for every $n\in\mathbb{N}$, \[ \int_X u \wedge n{\mathbf 1}_X \thinspace d\mu_\varphi = \int_{\beta X} u \wedge n{\mathbf 1}_X \thinspace d\nu_\varphi = \varphi(u\wedge n{\mathbf 1}_X) \] Therefore \[ \varphi(u) = \int_X u \thinspace d\mu_\varphi, \] as desired. \end{proof}
\begin{thm}\label{Thm: Order continuous functionals on C(X) are normal measures} Let $X$ be a realcompact space. Let $\varphi$ be an order bounded functional on $\cont(X)$. Then $\varphi$ is order continuous if and only if $\mu_\varphi$ is a normal measure. The map \[ \ordercontn{\cont(X)}\ni \varphi \longmapsto \mu_{\varphi} \in \vlat{N}_c(X) \]is a lattice isomorphism onto $\vlat{N}_c(X)$. \end{thm} \begin{proof} We make use of the notation introduced in the proof of Theorem \ref{Thm: Riesz Representation Theorem for C(X)}. It suffices to show that for any $0\leq \varphi\in \orderdual{\cont(X)}$, $\varphi$ is order continuous if and only if $\mu_\varphi$ is normal. Let $0\leq \varphi \in \ordercontn{\cont(X)}$. Because $\contb(X)$ is an ideal in $\cont(X)$ the restriction of $\varphi$ to ${\mathrm C}_{\mathrm b}(X)$ is order continuous. Hence the measure $\nu_\varphi\in\vlat{M}_0(\beta X)$ so that \[ \varphi(u)=\int_{\beta X}u \thinspace d\nu_\varphi,~~ u\in{\mathrm C}_{\mathrm b}(X) \] is a normal measure on $\beta X$, see for instance \cite[Definition 4.7.1, Theorem 4.7.4]{DalesDashiellLauStrass2016}. It therefore follows that the measure $\mu_\varphi = (\nu_\varphi)^\ast \in\vlat{M}_c(X)$ is a normal measure on $X$.
Conversely, let $0\leq \varphi \in \orderdual{\cont(X)}$ be such that $\mu_\varphi$ is a normal measure on $X$. Then the Borel measure $\nu$ on $\beta X$ given by \[ \nu(B) = \mu_\varphi(B\cap X),~~ B\in \borel{\beta X}
\]is a normal measure on $\beta X$. Hence $S_{\nu}$ is regular-closed in $\beta X$, see \cite[Proposition 4.7.9]{DalesDashiellLauStrass2016}. But $S_\nu = S_{\mu_\varphi}\subseteq X$ so that $S_{\mu_\varphi}$ is regular-closed in $X$. Therefore, if $D\downarrow 0$ in $\cont(X)$ then $\left.D\right|_{S_{\mu_\varphi}} = \{ \left. u \right|_{S_{\mu_\varphi}} ~:~ u\in D \} \downarrow 0$ in ${\mathrm C }(S_{\mu_\varphi})$, see \cite[Theorem 3.4]{KandicVavpeticPositivity2019}. Also, $\mu_\varphi$ restricted to the Borel sets in $S_{\mu_\varphi}$ is a normal measure on $S_{\mu_\varphi}$. Hence \[ \inf_{u\in D} \varphi(u) = \inf_{u\in D}\int_{S_{\mu_\varphi}}u \thinspace d\mu_\varphi = 0. \] Therefore $\varphi$ is order continuous. \end{proof}
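The correspondence in Theorem \ref{Thm: Order continuous functionals on C(X) are normal measures} can be illustrated by a simple, well-known example. Let $X\ensuremath{\mathop{:}\!\!=}\mathbb{N}$ with the discrete topology, a realcompact space, so that $\cont(X)=\mathbb{R}^{\mathbb{N}}$. The order bounded functionals on $\mathbb{R}^{\mathbb{N}}$ are precisely the maps \[ \varphi(u) = \sum_{n\in\mathbb{N}} a_n u(n),~~ u\in\mathbb{R}^{\mathbb{N}}, \] with $(a_n)$ a real sequence having only finitely many non-zero terms, and every such $\varphi$ is order continuous. The corresponding measure is $\mu_\varphi = \sum_{n\in\mathbb{N}} a_n \delta_n$, a finitely supported linear combination of point masses; these are exactly the compactly supported normal measures on $\mathbb{N}$.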
\section{Direct limits}\label{Section: Inductive limits}
We recall the definitions of a direct system in a category of vector lattices, and of the direct limit of such a system. These definitions are specializations of the corresponding definitions in general categories, see for instance \cite[Chapter 5]{Awodey2010} and \cite[Chapter III]{MacLane1998} where direct limits are referred to as \emph{colimits}. We summarise some existence results and list vector lattice properties that have permanence under the direct limit construction. Additional results are found in \cite{Filter1988}. Lastly, we give a number of examples of direct limits which we will make use of later.
\begin{defn}\label{Defn: Inductive system} Let ${\bf C}$ be a category of vector lattices, $I$ a directed set, $\vlat{E}_\alpha$ a vector lattice for each $\alpha\in I$, and $e_{\alpha, \beta}: \vlat{E}_\alpha\to \vlat{E}_\beta$ a ${\bf C}$-morphism for all $\alpha \preccurlyeq \beta$ in $I$. The ordered pair $\cal{D} \ensuremath{\mathop{:}\!\!=} \left( (\vlat{E}_\alpha)_{\alpha\in I}, (e_{\alpha, \beta})_{\alpha\preccurlyeq\beta}\right)$ is called a \emph{direct system} in ${\bf C}$ if, for all $\alpha \preccurlyeq \beta \preccurlyeq \gamma$ in $I$, the diagram \[ \begin{tikzcd}[cramped] \vlat{E}_\alpha \arrow[rd, "e_{\alpha, \beta}"'] \arrow[rr, "e_{\alpha, \gamma}"] & & \vlat{E}_\gamma\\ & \vlat{E}_\beta\arrow[ru, "e_{\beta, \gamma}"'] \end{tikzcd} \] commutes in ${\bf C}$. \end{defn}
\begin{defn}\label{Defn: Compatible system for inductive limits} Let ${\bf C}$ be a category of vector lattices and $\cal{D} \ensuremath{\mathop{:}\!\!=} \left( (\vlat{E}_\alpha)_{\alpha\in I}, (e_{\alpha, \beta})_{\alpha\preccurlyeq\beta}\right)$ a direct system in ${\bf C}$. Let $\vlat{E}$ be a vector lattice and for every $\alpha\in I$, let $e_\alpha: \vlat{E}_\alpha \to \vlat{E}$ be a ${\bf C}$-morphism. The ordered pair $\cal{S} \ensuremath{\mathop{:}\!\!=} (\vlat{E}, (e_\alpha)_{\alpha\in I})$ is a \emph{compatible system} of $\cal{D}$ in ${\bf C}$ if, for all $\alpha\preccurlyeq\beta$ in $I$, the diagram \[ \begin{tikzcd}[cramped] \vlat{E}_\alpha \arrow[rd, "e_{\alpha, \beta}"'] \arrow[rr, "e_{\alpha}"] & & \vlat{E}\\ & \vlat{E}_\beta\arrow[ru, "e_{\beta}"'] \end{tikzcd} \] commutes in ${\bf C}$. \end{defn}
\begin{defn}\label{Defn: Inductive limit} Let ${\bf C}$ be a category of vector lattices and $\cal{D} \ensuremath{\mathop{:}\!\!=} \left( (\vlat{E}_\alpha)_{\alpha\in I}, (e_{\alpha, \beta})_{\alpha\preccurlyeq\beta}\right)$ a direct system in ${\bf C}$. The \emph{direct limit} of $\cal{D}$ in ${\bf C}$ is a compatible system $\cal{S}\ensuremath{\mathop{:}\!\!=}(\vlat{E}, (e_\alpha)_{\alpha\in I})$ of $\cal{D}$ in ${\bf C}$ so that for any compatible system $\tilde{\cal{S}} \ensuremath{\mathop{:}\!\!=} (\tilde{\vlat{E}}, (\tilde{e}_\alpha)_{\alpha\in I})$ of $\cal{D}$ in ${\bf C}$ there exists a unique ${\bf C}$-morphism $r: \vlat{E}\to \tilde{\vlat{E}}$ so that, for every $\alpha\in I$, the diagram \[ \begin{tikzcd}[cramped] \vlat{E} \arrow[rr, "r"] & & \tilde{\vlat{E}}\\ & \vlat{E}_\alpha \arrow[lu, "e_{\alpha}"] \arrow[ru, "\tilde{e}_{\alpha}"'] \end{tikzcd} \] commutes in ${\bf C}$. We denote the direct limit of a direct system $\cal{D}$ by $\ind{\cal{D}}$ or $\ind{\vlat{E}_\alpha}$. \end{defn}
Since the direct limit of a direct system is in fact an initial object in a certain derived category, it follows that the direct limit, when it exists, is unique up to a unique isomorphism, see for instance \cite[p. 54]{BucurDeleanu1968}.
\subsection{Existence and permanence properties of direct limits}\label{Subsection: Existence and permanence properties of inductive limits}
Filter \cite{Filter1988} shows that any direct system $\cal{D} \ensuremath{\mathop{:}\!\!=} ((\vlat{E}_\alpha)_{\alpha\in I},(e_{\alpha , \beta})_{\alpha \preccurlyeq\beta})$ in ${\bf VL}$ has a direct limit in ${\bf VL}$.\footnote{It is not formulated in exactly these terms.} In particular, the set-theoretic direct limit \cite[Chapter III, $\S7.5$]{Bourbakie_Theory_of_sets} of $\cal{D}$ equipped with suitable vector space and order structures is also the direct limit of $\cal{D}$ in ${\bf VL}$. We briefly recall the details.
For $u$ in the disjoint union $\biguplus \vlat{E}_\alpha$ of the collection $\left( \vlat{E}_\alpha \right)_{\alpha\in I}$, denote by $\alpha(u)$ that element of $I$ so that $u\in \vlat{E}_{\alpha(u)}$. Define an equivalence relation on $\biguplus \vlat{E}_\alpha$ by setting $u\sim v$ if and only if there exists $\beta \succcurlyeq \alpha (u),\alpha (v)$ in $I$ so that $e_{\alpha(u) , \beta}(u) = e_{\alpha(v) , \beta}(v)$. Let $\vlat{E} \ensuremath{\mathop{:}\!\!=} \biguplus \vlat{E}_\alpha / \sim$ and denote the equivalence class generated by $u\in \biguplus\vlat{E}_\alpha$ by $\dot u$.
Let $\dot u,\dot v\in \vlat{E}$. We set $\dot u\leq \dot v$ if and only if there exists $\beta \succcurlyeq \alpha (u),\alpha(v)$ in $I$ so that $e_{\alpha(u) , \beta}(u) \leq e_{\alpha(v) , \beta}(v)$. Further, for $a,b\in\mathbb{R}$ define \[ a\dot u + b\dot v \ensuremath{\mathop{:}\!\!=} \dot{\overbrace{a e_{\alpha(u) , \beta}(u) + b e_{\alpha(v) , \beta}(v)}}, \] where $\beta \succcurlyeq \alpha(u),\alpha(v)$ in $I$ is arbitrary. With addition, scalar multiplication and the partial order so defined, $\vlat{E}$ is a vector lattice. The lattice operations are given by \[
\dot u\wedge\dot v\ = \ \dot{\overbrace{e_{\alpha(u) , \beta}(u) \wedge e_{\alpha(v) , \beta}(v)}} \] and \[
\dot u\vee\dot v\ = \ \dot{\overbrace{e_{\alpha(u) , \beta}(u) \vee e_{\alpha(v) , \beta}(v)}}, \]with $\beta \succcurlyeq \alpha(u),\alpha(v)$ in $I$ arbitrary.
For each $\alpha\in I$ define $e_\alpha : \vlat{E}_\alpha \to \vlat{E}$ by setting $e_\alpha (u) \ensuremath{\mathop{:}\!\!=} \dot u$ for $u\in \vlat{E}_\alpha$. Each $e_\alpha$ is a lattice homomorphism and the diagram \[ \begin{tikzcd}[cramped] \vlat{E}_\alpha \arrow[rd, "e_{\alpha, \beta}"'] \arrow[rr, "e_{\alpha}"] & & \vlat{E}\\ & \vlat{E}_\beta\arrow[ru, "e_{\beta}"'] \end{tikzcd} \] commutes in ${\bf VL}$ for all $\alpha \preccurlyeq\beta$ in $I$ so that $\cal{S}\ensuremath{\mathop{:}\!\!=} (\vlat{E},(e_\alpha)_{\alpha\in I})$ is a compatible system of $\cal{D}$ in ${\bf VL}$. Further, if $\tilde{\cal{S}} = (\tilde{\vlat{E}},(\tilde{e}_\alpha)_{\alpha\in I})$ is another compatible system of $\cal{D}$ in ${\bf VL}$ then \[ r: \vlat{E}\ni \dot u \longmapsto \tilde{e}_{\alpha(u)}(u)\in \tilde{\vlat{E}} \] is the unique lattice homomorphism so that the diagram \[ \begin{tikzcd}[cramped] \vlat{E} \arrow[rr, "r"] & & \tilde{\vlat{E}}\\ & \vlat{E}_\alpha \arrow[lu, "e_{\alpha}"] \arrow[ru, "\tilde{e}_{\alpha}"'] \end{tikzcd} \] commutes for every $\alpha\in I$. Hence $\cal{S}$ is indeed the direct limit of $\cal{D}$ in ${\bf VL}$.
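A familiar concrete instance of this construction is the vector lattice of eventually zero sequences. For $n\leq m$ in $\mathbb{N}$ let $e_{n,m}:\mathbb{R}^n\to\mathbb{R}^m$ extend a vector by zero in the last $m-n$ coordinates. Then $\left((\mathbb{R}^n)_{n\in\mathbb{N}},(e_{n,m})_{n\leq m}\right)$ is a direct system in ${\bf VL}$, and the equivalence relation above identifies each $u\in\mathbb{R}^n$ with all of its zero-extended copies. The direct limit is lattice isomorphic to \[ c_{00} \ensuremath{\mathop{:}\!\!=} \left\lbrace u\in\mathbb{R}^{\mathbb{N}} ~:~ u(n)=0 \text{ for all but finitely many } n\in\mathbb{N} \right\rbrace, \] with $e_n:\mathbb{R}^n\to c_{00}$ the map extending a vector by zero.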
We give two further existence results for direct limits of direct systems in other categories of vector lattices.
\begin{thm}\label{Thm: Existence of Inductive Limits in VLI} Let $\cal{D} \ensuremath{\mathop{:}\!\!=} \left( (\vlat{E}_\alpha)_{\alpha\in I}, (e_{\alpha, \beta})_{\alpha\preccurlyeq\beta}\right)$ be a direct system in ${\bf IVL}$, and let $\cal{S} \ensuremath{\mathop{:}\!\!=} (\vlat{E},(e_\alpha)_{\alpha \in I})$ be the direct limit of $\cal{D}$ in ${\bf VL}$. Then $\cal{S}$ is the direct limit of $\cal{D}$ in ${\bf IVL}$. \end{thm}
\begin{proof} We show that each $e_\alpha$ is interval preserving. To this end, fix $\alpha\in I$ and $0<u\in \vlat{E}_\alpha$. Suppose that $\dot 0 \leq \dot v \leq e_\alpha (u)=\dot u$. Then there exists a $\beta \succcurlyeq \alpha , \alpha(v)$ in $I$ so that $0\leq e_{\alpha(v) , \beta}(v) \leq e_{\alpha , \beta} (u)$. But $e_{\alpha , \beta}$ is interval preserving, so there exists $0\leq w\leq u$ in $\vlat{E}_\alpha$ so that $e_{\alpha , \beta}(w)=e_{\alpha(v) , \beta}(v)$. Therefore $e_\alpha(w) = \dot w = \dot v$. Hence $e_\alpha$ is interval preserving. Therefore $\cal{S}$ is a compatible system of $\cal{D}$ in ${\bf IVL}$.
Let $\tilde{\cal{S}}=(\tilde{\vlat{E}},(\tilde{e}_\alpha)_{\alpha\in I})$ be a compatible system of $\cal{D}$ in ${\bf IVL}$, thus also in ${\bf VL}$. We show that the unique linear lattice homomorphism $r:\vlat{E}\to \tilde{\vlat{E}}$ is interval preserving. Consider $\dot u \in \vlat{E}^+$. Let $0\leq v \leq r(\dot u)$ in $\tilde{\vlat{E}}$, that is, $0\leq v\leq \tilde{e}_{\alpha(u)}(u)$. But $\tilde{e}_{\alpha(u)}$ is interval preserving so there exists $0\leq w\leq u$ in $\vlat{E}_{\alpha(u)}$ so that $v=\tilde{e}_{\alpha(u)}(w)$. Thus $\dot 0 \leq \dot w \leq \dot u$ in $\vlat{E}$ and $r(\dot w) = v$. Therefore $r$ is interval preserving. \end{proof}
\begin{thm}\label{Thm: Existence of Inductive Limits in NVLI} Let $\cal{D} \ensuremath{\mathop{:}\!\!=} \left( (\vlat{E}_\alpha)_{\alpha\in I}, (e_{\alpha, \beta})_{\alpha\preccurlyeq\beta}\right)$ be a direct system in ${\bf NIVL}$, and let $\cal{S} \ensuremath{\mathop{:}\!\!=} (\vlat{E},(e_\alpha)_{\alpha \in I})$ be the direct limit of $\cal{D}$ in ${\bf VL}$. Assume that $e_{\alpha , \beta}$ is injective for all $\alpha \preccurlyeq \beta$ in $I$. Then $\cal{S}$ is the direct limit of $\cal{D}$ in ${\bf NIVL}$. \end{thm}
\begin{proof} We start by proving that $e_\alpha :\vlat{E}_\alpha\to \vlat{E}$ is injective for every $\alpha\in I$. Fix $\alpha \in I$ and $u\in \vlat{E}_\alpha$ so that $e_\alpha (u ) = \dot 0$ in $\vlat{E}$. Then there exists $\beta \succcurlyeq \alpha$ in $I$ so that $e_{\alpha , \beta} (u)=0$. But $e_{\alpha , \beta}$ is injective, so $u=0$. Hence $e_\alpha$ is injective.
By Theorem \ref{Thm: Existence of Inductive Limits in VLI}, $e_\alpha :\vlat{E}_\alpha\to \vlat{E}$ is an injective interval preserving lattice homomorphism for every $\alpha\in I$. It follows from Proposition \ref{Prop: Interval Preserving vs Lattice Homomorphism} (i) that $e_\alpha$ is a ${\bf NIVL}$-morphism for every $\alpha\in I$. Therefore $\cal{S}$ is a compatible system of $\cal{D}$ in ${\bf NIVL}$.
Let $\tilde{\cal{S}} = (\tilde{\vlat{E}},(\tilde{e}_\alpha)_{\alpha\in I})$ be a compatible system of $\cal{D}$ in ${\bf NIVL}$. By Theorem \ref{Thm: Existence of Inductive Limits in VLI} the canonical map $r:\vlat{E}\to \tilde{\vlat{E}}$ is an interval preserving lattice homomorphism. We claim that $r$ is a normal lattice homomorphism. To this end, let $A\downarrow \dot 0$ in $\vlat{E}$. Without loss of generality we may suppose that $A$ is bounded from above in $\vlat{E}$, say by $\dot u_0$. There exists $\alpha \in I$ and $u_0 \in \vlat{E}_\alpha$ so that $\dot u_0 = e_\alpha (u_0)$. Because $e_\alpha$ is injective and interval preserving, there exists for every $\dot u\in A$ a unique $u\in [0,u_0]\subseteq \vlat{E}_\alpha$ so that $e_\alpha (u) = \dot u$. In particular, $e_\alpha^{-1}[A]\subseteq [0,u_0]$. We claim that $\inf e_{\alpha}^{-1}[A] = 0$ in $\vlat{E}_\alpha$. Let $0\leq v\in \vlat{E}_\alpha$ be a lower bound for $e_{\alpha}^{-1}[A]$. Then $e_{\alpha}(v)\geq 0$ is a lower bound for $A$ in $\vlat{E}$, hence $e_{\alpha}(v)=0$. But $e_\alpha$ is injective, so $v=0$. This verifies our claim. By definition, $r[A]=\tilde{e}_\alpha[e_{\alpha}^{-1}[A]]$. Because $\tilde{e}_{\alpha}$ is a normal lattice homomorphism, it follows that $\inf r[A] = 0$ in $\tilde{\vlat{E}}$. \end{proof}
We recall the following result on permanence of vector lattice properties under the direct limit construction from \cite{Filter1988}.
\begin{thm}\label{Thm: Inductive Limit Permanence} Let $\cal{D} \ensuremath{\mathop{:}\!\!=} \left( (\vlat{E}_\alpha)_{\alpha\in I}, (e_{\alpha, \beta})_{\alpha\preccurlyeq\beta}\right)$ be a direct system in a category ${\bf C}$ of vector lattices. Assume that $e_{\alpha , \beta}$ is injective for all $\alpha \preccurlyeq \beta$ in $I$. Let $\cal{S} \ensuremath{\mathop{:}\!\!=} (\vlat{E},(e_\alpha)_{\alpha \in I})$ be the direct limit of $\cal{D}$ in ${\bf VL}$. Then the following statements are true. \begin{itemize}
\item[(i)] $\vlat{E}$ is Archimedean if and only if $\vlat{E}_\alpha$ is Archimedean for all $\alpha\in I$.
\item[(ii)] If ${\bf C}$ is ${\bf IVL}$ then $\vlat{E}$ is order separable if and only if $\vlat{E}_\alpha$ is order separable for every $\alpha\in I$.
\item[(iii)] If ${\bf C}$ is ${\bf IVL}$ then $\vlat{E}$ has the (principal) projection property if and only if $\vlat{E}_\alpha$ has the (principal) projection property for every $\alpha\in I$.
\item[(iv)] If ${\bf C}$ is ${\bf IVL}$ then $\vlat{E}$ is ($\sigma$-)Dedekind complete if and only if $\vlat{E}_\alpha$ is ($\sigma$-)Dedekind complete for every $\alpha\in I$.
\item[(v)] If ${\bf C}$ is ${\bf IVL}$ then $\vlat{E}$ is relatively uniformly complete if and only if $\vlat{E}_\alpha$ is relatively uniformly complete for every $\alpha\in I$. \end{itemize} \end{thm}
Before we proceed to discuss examples of direct limits we make some clarifying remarks about the structure of the direct limit of vector lattices.
\begin{remark}\label{Remark: Inductive limit notation} Let $\cal{D} \ensuremath{\mathop{:}\!\!=} ((\vlat{E}_\alpha)_{\alpha\in I},(e_{\alpha , \beta})_{\alpha \preccurlyeq\beta})$ be a direct system in ${\bf VL}$ and let $\cal{S} \ensuremath{\mathop{:}\!\!=} (\vlat{E},(e_\alpha)_{\alpha \in I})$ be the direct limit of $\cal{D}$ in ${\bf VL}$. \begin{itemize}
\item[(i)] Unless clarity demands it, we henceforth cease to explicitly express elements of $\vlat{E}$ as equivalence classes; that is, we write $u\in\vlat{E}$ instead of $\dot u\in\vlat{E}$.
\item[(ii)] For every $u\in \vlat{E}$ there exists at least one $\alpha\in I$ and $u_\alpha \in\vlat{E}_\alpha$ so that $u = e_\alpha (u_\alpha)$. If $u = e_\beta (u_\beta)$ for some other $\beta\in I$ and $u_\beta\in \vlat{E}_\beta$ then there exists $\gamma \succcurlyeq \alpha,\beta$ in $I$ so that $e_{\alpha , \gamma}(u_\alpha) = e_{\beta , \gamma}(u_\beta)$, and hence
\[
e_\gamma ( e_{\alpha , \gamma}(u_\alpha)) = u = e_\gamma ( e_{\beta , \gamma}(u_\beta)).
\]
\item[(iii)] It is proven in Theorem \ref{Thm: Existence of Inductive Limits in NVLI} that if $e_{\alpha , \beta}$ is injective for all $\alpha \preccurlyeq \beta$ in $I$ then $e_\alpha$ is injective for all $\alpha\in I$. In this case we identify $\vlat{E}_\alpha$ with the sublattice $e_\alpha[\vlat{E}_\alpha]$ of $\vlat{E}$.
\item[(iv)] An element $u\in \vlat{E}$ is positive if and only if there exist $\alpha \preccurlyeq \beta$ in $I$ and $u_\alpha\in\vlat{E}_\alpha$ so that $e_\alpha(u_\alpha) = u$ and $e_{\alpha , \beta}(u_\alpha)\geq 0$ in $\vlat{E}_\beta$. Combining this observation with (ii) we see that $u\geq 0$ if and only if there exist $\alpha\in I$ and $0\leq u_\alpha \in\vlat{E}_\alpha$ so that $u=e_\alpha(u_\alpha)$. \end{itemize} \end{remark}
\subsection{Examples of direct limits}\label{Subsection: Examples of inductive limits}
In \cite{Filter1988} a number of examples are presented of naturally occurring vector lattices which can be expressed as direct limits of vector lattices. We provide further examples which will be used in Section \ref{Section: Applications}.
\begin{example}\label{Exm: Inductive limit main example} Let $\vlat{E}$ be a vector lattice. Let $(\vlat{E}_\alpha)_{\alpha\in I}$ be an upward directed collection of ideals in $\vlat{E}$ such that $\vlat{E}_\alpha \subseteq \vlat{E}_\beta$ if and only if $\alpha \preccurlyeq \beta$. Assume that $\displaystyle\bigcup \vlat{E}_\alpha=\vlat{E}$. For all $\alpha\preccurlyeq \beta$ in $I$, let $e_{\alpha , \beta}:\vlat{E}_\alpha\to \vlat{E}_\beta$ and $e_\alpha:\vlat{E}_\alpha \to \vlat{E}$ be the inclusion mappings. Then $\cal{D} \ensuremath{\mathop{:}\!\!=} ((\vlat{E}_{\alpha})_{\alpha\in I},(e_{\alpha , \beta})_{\alpha\preccurlyeq \beta})$ is a direct system in ${\bf NIVL}$ and $\cal{S}\ensuremath{\mathop{:}\!\!=} (\vlat{E},(e_{\alpha})_{\alpha\in I})$ is the direct limit of $\cal{D}$ in ${\bf NIVL}$. \end{example}
\begin{proof} It is clear that $\cal{D}$ is a direct system in ${\bf NIVL}$ and that $\cal{S}$ is a compatible system of $\cal{D}$ in ${\bf NIVL}$. Let $\tilde{\cal{S}} = (\tilde{\vlat{E}},(\tilde{e}_{\alpha})_{\alpha\in I})$ be any compatible system of $\cal{D}$ in ${\bf NIVL}$. We show that there exists a unique ${\bf NIVL}$-morphism $r:\vlat{E}\to\tilde{\vlat{E}}$ so that for all $\alpha\in I$, the diagram \[ \begin{tikzcd}[cramped] \vlat{E} \arrow[rr, "r"] & & \tilde{\vlat{E}}\\ & \vlat{E}_\alpha \arrow[lu, "e_{\alpha}"] \arrow[ru, "\tilde{e}_{\alpha}"'] \end{tikzcd} \] commutes.
If $u\in\vlat{E}$ and $\alpha,\beta\in I$ are such that $u\in \vlat{E}_\alpha\cap\vlat{E}_\beta$, then $\tilde{e}_\alpha (u) = \tilde{e}_\beta(u)$. Indeed, for any $\gamma \succcurlyeq \alpha,\beta$ in $I$, \[ \tilde{e}_{\gamma}(u) = \tilde{e}_{\gamma}(e_{\alpha, \gamma}(u)) = \tilde{e}_{\alpha}(u) \] and \[ \tilde{e}_{\gamma}(u) = \tilde{e}_{\gamma}(e_{\beta, \gamma}(u)) = \tilde{e}_{\beta}(u). \] Therefore the map $r:\vlat{E}\to \tilde{\vlat{E}}$ given by \[ r(u) = \tilde{e}_{\alpha}(u) ~\text{if}~ u \in \vlat{E}_\alpha \]is well-defined. It is clear that this map makes the diagram above commute. Further, if $u,v\in \vlat{E}$ then there exists $\alpha\in I$ so that $u,v \in \vlat{E}_\alpha$. Then for all $a,b\in\mathbb{R}$ we have $au+bv,u\vee v \in\vlat{E}_\alpha$ so that \[ r(au+bv) = \tilde{e}_{\alpha}(au+bv) = a \tilde{e}_{\alpha}(u) + b \tilde{e}_{\alpha}(v) = a\thinspace r(u)+b\thinspace r(v) \] and \[ r(u\vee v) = \tilde{e}_{\alpha}(u\vee v) = \tilde{e}_{\alpha}(u)\vee \tilde{e}_{\alpha}(v) = r(u)\vee r(v). \] Hence $r$ is a lattice homomorphism. A similar argument shows that $r$ is interval preserving. To see that $r$ is a normal lattice homomorphism, let $A\downarrow 0$ in $\vlat{E}$. Without loss of generality, assume that there exists $0<u_0 \in \vlat{E}$ so that $u\leq u_0$ for all $u\in A$. Then, because each $\vlat{E}_\alpha$ is an ideal, $A\subseteq \vlat{E}_\alpha$ for any $\alpha\in I$ with $u_0\in\vlat{E}_\alpha$, so that $r[A]=\tilde{e}_\alpha [A]$. Hence, because $\tilde{e}_\alpha$ is a normal lattice homomorphism, $\inf r[A] = 0$. Therefore $r$ is a ${\bf NIVL}$-morphism.
It remains to show that $r$ is the unique ${\bf NIVL}$-morphism making the diagram above commute. Suppose that $\tilde{r}$ is another such morphism. Let $u\in \vlat{E}$. There exists $\alpha\in I$ so that $u\in \vlat{E}_\alpha$. We have $\tilde{r}(u) = \tilde{r}(e_\alpha(u)) = \tilde{e}_\alpha(u) = r(u)$, which completes the proof. \end{proof}
The remaining examples in this section may readily be seen to be special cases of Example \ref{Exm: Inductive limit main example}. Therefore we omit the proofs.
\begin{example}\label{Exm: Inductive limit of principle ideals} Let $\vlat{E}$ be a vector lattice. For every $0<u\in \vlat{E}$ let $\vlat{E}_u$ be the ideal generated by $u$ in $\vlat{E}$. For all $0<u\leq v$ let $e_{u , v}:\vlat{E}_u\to \vlat{E}_v$ and $e_u:\vlat{E}_u\to \vlat{E}$ be the inclusion mappings. Let $I$ be an upward directed subset of $\vlat{E}^+\mysetminus\{0\}$ so that $\vlat{E}=\displaystyle \bigcup \vlat{E}_u$. Then $\cal{D} \ensuremath{\mathop{:}\!\!=} ((\vlat{E}_{u})_{u\in I},(e_{u , v})_{u\leq v})$ is a direct system in ${\bf NIVL}$ and $\cal{S}\ensuremath{\mathop{:}\!\!=} (\vlat{E},(e_{u})_{u\in I})$ is the direct limit of $\cal{D}$ in ${\bf NIVL}$. \end{example}
\begin{example}\label{Exm: Locally supported p-summable functions as inductive limit} Let $(X,\Sigma,\mu)$ be a complete $\sigma$-finite measure space. Let $\Xi \ensuremath{\mathop{:}\!\!=} (X_n)$ be an increasing sequence (w.r.t. inclusion) of measurable sets with positive measure so that $X =\displaystyle \bigcup X_n$. For $n\leq m$ in $\mathbb{N}$ let $e_{n,m}:\vlat{L}^p(X_n)\to \vlat{L}^p(X_m)$ be defined (a.e.) by setting \[
e_{n,m}(u)(t)\ensuremath{\mathop{:}\!\!=} \left\{ \begin{array}{lll} u(t) & \text{ if } & t\in X_n
\\ 0 & \text{ if } & t\in X_m\!\setminus X_n \\ \end{array}\right. \] for each $u\in\vlat{L}^p(X_n)$. Further, define \[ \vlat{L}^p_{\Xi-c}(X) \ensuremath{\mathop{:}\!\!=} \left\lbrace u\in\vlat{L}^p(X) ~:~ u=0 \text{ a.e. on } X\setminus X_n \text{ for some } n\in\mathbb{N} \right\rbrace. \] For $n\in\mathbb{N}$ let $e_n:\vlat{L}^p(X_n)\to \vlat{L}^p_{\Xi-c}(X)$ be given by \[
e_{n}(u)(t)\ensuremath{\mathop{:}\!\!=} \left\{ \begin{array}{lll} u(t) & \text{ if } & t\in X_n
\\ 0 & \text{ if } & t\in X\setminus X_n \\ \end{array}\right. \] The following statements are true. \begin{itemize}
\item[(i)] $\cal{D}^p_{\Xi-c}\ensuremath{\mathop{:}\!\!=} \left((\vlat{L}^p(X_n))_{n\in\mathbb{N}}, (e_{n , m})_{n\leq m}\right)$ is a direct system in ${\bf NIVL}$.
\item[(ii)] $\cal{S}^p_{\Xi-c}\ensuremath{\mathop{:}\!\!=} \left(\vlat{L}^p_{\Xi-c}(X),(e_n)_{n\in\mathbb{N}}\right)$ is the direct limit of $\cal{D}^p_{\Xi-c}$ in ${\bf NIVL}$. \end{itemize} \end{example}
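As a concrete instance of Example \ref{Exm: Locally supported p-summable functions as inductive limit}, let $X\ensuremath{\mathop{:}\!\!=}\mathbb{R}$ with Lebesgue measure and $X_n \ensuremath{\mathop{:}\!\!=} [-n,n]$ for $n\in\mathbb{N}$. Then \[ \vlat{L}^p_{\Xi-c}(\mathbb{R}) = \left\lbrace u\in\vlat{L}^p(\mathbb{R}) ~:~ u=0 \text{ a.e. outside } [-n,n] \text{ for some } n\in\mathbb{N} \right\rbrace \] is the space of $p$-integrable functions with essentially compact support, realised as the direct limit of the spaces $\vlat{L}^p([-n,n])$ in ${\bf NIVL}$.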
\begin{example}\label{Exm: Compactly supported measures as inductive limit} Let $X$ be a locally compact Hausdorff space. Let $\Gamma \ensuremath{\mathop{:}\!\!=} (X_\alpha)_{\alpha\in I}$ be an upward directed (with respect to inclusion) collection of non-empty open precompact subsets of $X$ so that $\displaystyle\bigcup X_\alpha = X$. For each $\alpha\in I$, let $\vlat{M}(\bar X_\alpha)$ be the space of Radon measures on $\bar X_\alpha$ and $\vlat{M}_c(X)$ the space of compactly supported Radon measures on $X$. For all $\alpha \preccurlyeq \beta$ in $I$, let $e_{\alpha , \beta}:\vlat{M}(\bar X_\alpha)\to \vlat{M}(\bar X_\beta)$ be defined by setting \[
e_{\alpha , \beta}(\mu)(B) \ensuremath{\mathop{:}\!\!=} \mu(B\cap \bar X_\alpha) \text{ for all } \mu\in \vlat{M}(\bar X_\alpha) \text{ and } B\in\borel{\bar X_{\beta}}. \] Likewise, for $\alpha\in I$, define $e_\alpha:\vlat{M}(\bar X_\alpha)\to\vlat{M}_c(X)$ by setting \[ e_{\alpha}(\mu)(B) \ensuremath{\mathop{:}\!\!=} \mu(B\cap \bar X_\alpha) \text{ for all } \mu\in \vlat{M}(\bar X_\alpha) \text{ and } B\in\borel{X}. \] The following statements are true. \begin{itemize}
\item[(i)] $\cal{D}_{\Gamma}\ensuremath{\mathop{:}\!\!=} \left((\vlat{M}(\bar X_\alpha))_{\alpha\in I},(e_{\alpha , \beta})_{\alpha\preccurlyeq \beta}\right)$ is a direct system in ${\bf NIVL}$ and $e_{\alpha , \beta}$ is injective for all $\alpha\preccurlyeq \beta$ in $I$.
\item[(ii)] $\cal{S}_{\Gamma}\ensuremath{\mathop{:}\!\!=} \left(\vlat{M}_c(X),(e_\alpha)_{\alpha \in I}\right)$ is the direct limit of $\cal{D}_{\Gamma}$ in ${\bf NIVL}$. \end{itemize} \end{example}
\begin{example}\label{Exm: Compactly supported NORMAL measures as inductive limit} Let $X$ be a locally compact Hausdorff space. Let $\Gamma \ensuremath{\mathop{:}\!\!=} (X_\alpha)_{\alpha\in I}$ be an upward directed (with respect to inclusion) collection of open precompact subsets of $X$ so that $\displaystyle\bigcup X_\alpha =X$. For each $\alpha\in I$, let $\vlat{N}(\bar X_\alpha)$ be the space of normal Radon measures on $\bar X_\alpha$ and $\vlat{N}_c(X)$ the space of compactly supported normal Radon measures on $X$. For all $\alpha\preccurlyeq \beta$ in $I$, let $e_{\alpha , \beta}: \vlat{N}(\bar X_\alpha)\to \vlat{N}(\bar X_\beta)$ be defined by setting \[
e_{\alpha , \beta}(\mu)(B) \ensuremath{\mathop{:}\!\!=} \mu(B\cap \bar X_\alpha) \text{ for all } \mu\in \vlat{N}(\bar X_\alpha) \text{ and } B\in\borel{\bar X_\beta}. \]Likewise, for $\alpha\in I$, define $e_\alpha:\vlat{N}(\bar X_\alpha)\to\vlat{N}_c(X)$ by setting \[
e_{\alpha}(\mu)(B) \ensuremath{\mathop{:}\!\!=} \mu(B\cap \bar X_\alpha) \text{ for all } \mu\in \vlat{N}(\bar X_\alpha) \text{ and } B\in\borel{X}. \] The following statements are true. \begin{itemize}
\item[(i)] $\cal{E}_{\Gamma} \ensuremath{\mathop{:}\!\!=} \left((\vlat{N}(\bar X_\alpha))_{\alpha\in I},(e_{\alpha , \beta})_{\alpha\preccurlyeq \beta}\right)$ is a direct system in ${\bf NIVL}$ and $e_{\alpha , \beta}$ is injective for all $\alpha\preccurlyeq \beta$ in $I$.
\item[(ii)] $\cal{T}_{\Gamma}\ensuremath{\mathop{:}\!\!=} \left(\vlat{N}_c(X),(e_\alpha)_{\alpha \in I}\right)$ is the direct limit of $\cal{E}_{\Gamma}$ in ${\bf NIVL}$. \end{itemize} \end{example}
\section{Inverse limits}\label{Section: Projective limits}
In this section we discuss inverse systems and inverse limits in categories of vector lattices, which are the categorical dual concepts of direct systems and direct limits. Below we present the definitions of inverse systems and inverse limits in these categories. As is the case in the previous section, these definitions are specializations of the corresponding definitions in general categories, see for instance \cite[Chapter 5]{Awodey2010} or \cite[Chapter III]{MacLane1998}.
\begin{defn}\label{Defn: Projective system} Let ${\bf C}$ be a category of vector lattices, $I$ a directed set, $\vlat{E}_\alpha$ a vector lattice for each $\alpha\in I$, and $p_{\beta , \alpha}:\vlat{E}_\beta\to \vlat{E}_\alpha$ a ${\bf C}$-morphism for all $\beta \succcurlyeq \alpha$ in $I$. The ordered pair $\cal{I} \ensuremath{\mathop{:}\!\!=} \left((\vlat{E}_\alpha)_{\alpha \in I},(p_{\beta , \alpha})_{\beta \succcurlyeq \alpha}\right)$ is an \emph{inverse system} in ${\bf C}$ if, for all $\alpha \preccurlyeq\beta\preccurlyeq\gamma$ in $I$, the diagram \[ \begin{tikzcd}[cramped] \vlat{E}_\gamma \arrow[rd, "p_{\gamma, \beta}"'] \arrow[rr, "p_{\gamma, \alpha}"] & & \vlat{E}_\alpha\\ & \vlat{E}_\beta\arrow[ru, "p_{\beta, \alpha}"'] \end{tikzcd} \] commutes in ${\bf C}$. \end{defn}
\begin{defn}\label{Defn: Compatible system for projective limit} Let ${\bf C}$ be a category of vector lattices and $\cal{I} \ensuremath{\mathop{:}\!\!=} \left((\vlat{E}_\alpha)_{\alpha \in I},(p_{\beta , \alpha})_{\beta \succcurlyeq \alpha}\right)$ an inverse system in ${\bf C}$. Let $\vlat{E}$ be a vector lattice and for every $\alpha\in I$, let $p_\alpha: \vlat{E} \to \vlat{E}_\alpha$ be a ${\bf C}$-morphism. The ordered pair $\cal{S} \ensuremath{\mathop{:}\!\!=} (\vlat{E}, (p_\alpha)_{\alpha\in I})$ is a \emph{compatible system} of $\cal{I}$ in ${\bf C}$ if, for all $\alpha\preccurlyeq\beta$ in $I$, the diagram \[ \begin{tikzcd}[cramped] \vlat{E} \arrow[rd, "p_\beta"'] \arrow[rr, "p_{\alpha}"] & & \vlat{E}_\alpha\\ & \vlat{E}_\beta\arrow[ru, "p_{\beta, \alpha}"'] \end{tikzcd} \] commutes in ${\bf C}$. \end{defn}
\begin{defn}\label{Defn: Projective limit} Let ${\bf C}$ be a category of vector lattices and $\cal{I} \ensuremath{\mathop{:}\!\!=} \left((\vlat{E}_\alpha)_{\alpha \in I},(p_{\beta , \alpha})_{\beta \succcurlyeq \alpha}\right)$ an inverse system in ${\bf C}$. The \emph{inverse limit} of $\cal{I}$ in ${\bf C}$ is a compatible system $\cal{S}\ensuremath{\mathop{:}\!\!=} (\vlat{E}, (p_\alpha)_{\alpha\in I})$ so that for any compatible system $\tilde{\cal{S}}\ensuremath{\mathop{:}\!\!=} (\tilde{\vlat{E}}, (\tilde{p}_\alpha)_{\alpha\in I})$ in ${\bf C}$ there exists a unique ${\bf C}$-morphism $s: \tilde{\vlat{E}}\to \vlat{E}$ so that, for all $\alpha\in I$, the diagram \[ \begin{tikzcd}[cramped] \tilde{\vlat{E}} \arrow[rd, "\tilde{p}_{\alpha}"']\arrow[rr, "s"] & & \vlat{E}\arrow[ld, "p_{\alpha}"]\\ & \vlat{E}_\alpha \end{tikzcd} \] commutes in ${\bf C}$. The inverse limit of $\cal{I}$ is denoted by $\proj{\cal{I}}$ or simply $\proj \vlat{E}_\alpha$. \end{defn}
Since inverse limits are terminal objects in a certain derived category, they are unique up to a unique isomorphism when they exist, see for instance \cite[Corollary 3.2]{BucurDeleanu1968}.
\subsection{Existence of inverse limits}\label{Subsection: Existence of projective limits}
Our first task is to establish the existence of inverse limits in various categories of vector lattices. The basic result, akin to Filter's result for direct systems, is the following.
\begin{thm}\label{Thm: Existence of Projective Limit} Let $\cal{I} \ensuremath{\mathop{:}\!\!=} \left( (\vlat{E}_\alpha)_{\alpha\in I}, (p_{\beta, \alpha})_{\beta \succcurlyeq\alpha}\right)$ be an inverse system in ${\bf VL}$. Define the set \[ \vlat{E} \ensuremath{\mathop{:}\!\!=} \left\lbrace u \in \displaystyle\prod \vlat{E}_\alpha ~:~ \pi_\alpha(u) = p_{\beta,\alpha}(\pi_\beta(u)) \text{ for all } \alpha\preccurlyeq \beta \text{ in } I \right\rbrace. \]
For every $\alpha\in I$ define $p_\alpha \ensuremath{\mathop{:}\!\!=} \left.\pi_\alpha\right|_{\vlat{E}}$. The following statements are true. \begin{enumerate}
\item[(i)] $\vlat{E}$ is a vector sublattice of $\displaystyle\prod \vlat{E}_\alpha$.
\item[(ii)] The pair $\cal{S}\ensuremath{\mathop{:}\!\!=} ( \vlat{E} , (p_\alpha)_{\alpha\in I})$ is the inverse limit of $\cal{I}$ in ${\bf VL}$. \end{enumerate} \end{thm}
\begin{proof}[Proof of (i).] We verify that $\vlat{E}$ is a sublattice of $\displaystyle\prod \vlat{E}_\alpha$; that it is a linear subspace follows by a similar argument, as the reader may readily verify. Consider $u$ and $v$ in $\vlat{E}$. Then $\pi_\alpha(u \vee v) = \pi_\alpha(u) \vee \pi_\alpha(v)$ for all $\alpha\in I$. Fix any $\alpha,\beta \in I$ so that $\beta \succcurlyeq \alpha$. Then \[ p_{\beta , \alpha}(\pi_\beta(u \vee v)) = p_{\beta , \alpha}(\pi_\beta(u))\vee p_{\beta , \alpha}(\pi_\beta(v)) = \pi_\alpha(u) \vee \pi_\alpha(v)=\pi_\alpha(u\vee v). \] Therefore $u \vee v \in \vlat{E}$. Similarly, $u\wedge v \in \vlat{E}$ so that $\vlat{E}$ is a sublattice of $\displaystyle\prod \vlat{E}_\alpha$. \end{proof}
\begin{proof}[Proof of (ii).] From the definitions of $\vlat{E}$ and the $p_\alpha$ it is clear that $\cal{S}$ is a compatible system of $\cal{I}$ in ${\bf VL}$. Let $\tilde{\cal{S}} = (\tilde{\vlat{E}},(\tilde{p}_\alpha)_{\alpha\in I})$ be any compatible system of $\cal{I}$ in ${\bf VL}$. Define $s:\tilde{\vlat{E}}\to \vlat{E}$ by setting $s(u) \ensuremath{\mathop{:}\!\!=} (\tilde{p}_{\alpha}(u))_{\alpha\in I}$. Let $\beta \succcurlyeq \alpha$ in $I$. Because $\tilde{\cal{S}}$ is a compatible system \[ p_{\beta , \alpha} (\tilde{p}_\beta (u)) = \tilde{p}_\alpha (u), ~ u\in \tilde{\vlat{E}}. \] Therefore $s(u)\in \vlat{E}$ for all $u\in \tilde{\vlat{E}}$. Because each $\tilde{p}_\alpha$ is a lattice homomorphism, so is $s$. By the definitions of $s$ and the $p_\alpha$, respectively, it follows that $p_\alpha \circ s = \tilde{p}_\alpha$ for every $\alpha \in I$. We show that $s$ is the unique lattice homomorphism with this property. To this end, let $\tilde{s}:\tilde{\vlat{E}}\to \vlat{E}$ be a lattice homomorphism so that $p_\alpha \circ \tilde{s}=\tilde{p}_\alpha$ for every $\alpha\in I$. Fix $u\in \tilde{\vlat{E}}$. Then for every $\alpha\in I$, \[ \pi_\alpha (\tilde{s}(u)) = p_\alpha (\tilde{s}(u)) = \tilde{p}_\alpha (u) = \pi_\alpha (s(u)). \] Hence $s=\tilde{s}$ and therefore $\proj{\cal{I}} = (\vlat{E},(p_\alpha)_{\alpha \in I})$ in ${\bf VL}$. \end{proof}
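We illustrate the construction in Theorem \ref{Thm: Existence of Projective Limit} with a simple example. For $n\leq m$ in $\mathbb{N}$ let $p_{m,n}:\mathbb{R}^m\to\mathbb{R}^n$ be the projection onto the first $n$ coordinates. Then $\cal{I}\ensuremath{\mathop{:}\!\!=}\left((\mathbb{R}^n)_{n\in\mathbb{N}},(p_{m,n})_{m\geq n}\right)$ is an inverse system in ${\bf VL}$, and the vector lattice $\vlat{E}$ of Theorem \ref{Thm: Existence of Projective Limit} consists of those $u\in\displaystyle\prod\mathbb{R}^n$ whose coordinates are consistent under the projections. The map \[ \vlat{E}\ni u \longmapsto \left(\pi_n(u)(n)\right)_{n\in\mathbb{N}}\in\mathbb{R}^{\mathbb{N}} \] is a lattice isomorphism onto $\mathbb{R}^{\mathbb{N}}$ with the coordinatewise order, so that $\proj \mathbb{R}^n$ is the vector lattice of all real sequences.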
\begin{thm}\label{Thm: Existence of Projective Limit NVL} Let $\cal{I} \ensuremath{\mathop{:}\!\!=} \left( (\vlat{E}_\alpha)_{\alpha\in I}, (p_{\beta, \alpha})_{\beta\succcurlyeq\alpha}\right)$ be an inverse system in ${\bf NVL}$ and $\cal{S} \ensuremath{\mathop{:}\!\!=} ( \vlat{E}, (p_\alpha)_{\alpha\in I} )$ its inverse limit in ${\bf VL}$. The following statements are true. \begin{enumerate}
\item[(i)] If $A\subseteq \vlat{E}$ and $\inf A = u$ or $\sup A = u$ in $\displaystyle\prod \vlat{E}_\alpha$, then $u\in \vlat{E}$.
\item[(ii)] If $\vlat{E}_\alpha$ is Dedekind complete for every $\alpha\in I$ then $\cal{S}$ is the inverse limit of $\cal{I}$ in ${\bf NVL}$. \end{enumerate} \end{thm}
\begin{proof}[Proof of (i)] It is sufficient to consider infima of downward directed subsets of $\vlat{E}$. Let $A\subseteq \vlat{E}$ and assume that $A\downarrow u$ in $\displaystyle\prod\vlat{E}_\alpha$. By Theorem~\ref{Thm: Properties of product of vector lattices.}~(i), for every $\alpha\in I$, $p_\alpha [A]=\pi_\alpha[A]\downarrow \pi_\alpha(u)$ in $\vlat{E}_\alpha$. For $\beta \succcurlyeq \alpha$ in $I$, \[ \pi_\alpha(u) = \inf p_\alpha [A] = \inf p_{\beta , \alpha}[p_\beta[A]] = p_{\beta , \alpha}(\inf p_\beta[A]) = p_{\beta , \alpha}(\pi_\beta(u)); \] the second to last identity follows from the fact that $p_{\beta , \alpha}$ is a normal lattice homomorphism. Therefore $u\in \vlat{E}$. \end{proof}
\begin{proof}[Proof of (ii)] First, we prove that the $p_\alpha$ are normal lattice homomorphisms. Let $A\downarrow 0$ in $\vlat{E}$. Since $\vlat{E}_\alpha$ is Dedekind complete for every $\alpha\in I$, so is $\displaystyle\prod \vlat{E}_\alpha$. Therefore $A\downarrow u$ in $\displaystyle\prod \vlat{E}_\alpha$ for some $u\in \displaystyle\prod \vlat{E}_\alpha$. By (i), $u\in \vlat{E}$ so that $A\downarrow u$ in $\vlat{E}$. But $A\downarrow 0$ in $\vlat{E}$, hence $u = 0$. Therefore $\inf p_\alpha [A] = \pi_\alpha(u) = 0$ for every $\alpha \in I$.
From the above it follows that $\cal{S}$ is a compatible system in ${\bf NVL}$. It remains to show that $\cal{S}$ satisfies Definition \ref{Defn: Projective limit} in ${\bf NVL}$. Let $\tilde{\cal{S}} = (\tilde{\vlat{E}},(\tilde{p}_\alpha)_{\alpha\in I})$ be a compatible system in ${\bf NVL}$. Based on Theorem \ref{Thm: Existence of Projective Limit} we need only show that $s:\tilde{\vlat{E}}\to \vlat{E}$ defined by setting $s(u) \ensuremath{\mathop{:}\!\!=} (\tilde{p}_{\alpha}(u))_{\alpha\in I}$ for every $u \in \tilde{\vlat{E}}$ is a normal lattice homomorphism.
Let $A\downarrow 0$ in $\tilde{\vlat{E}}$. Then, since each $\tilde{p}_\alpha$ is a normal lattice homomorphism, $p_{\alpha}[s[A]] = \pi_\alpha[s[A]] = \tilde{p}_\alpha [A]\downarrow 0$ in $\vlat{E}_\alpha$ for every $\alpha \in I$. Hence $s[A]\downarrow 0$ in $\displaystyle\prod \vlat{E}_\alpha$, and therefore also in $\vlat{E}$. Thus $s$ is a normal lattice homomorphism, hence an ${\bf NVL}$-morphism, so that $\proj{\cal{I}}=(\vlat{E},(p_\alpha)_{\alpha \in I})$ in ${\bf NVL}$. \end{proof}
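To keep a concrete instance of Theorem \ref{Thm: Existence of Projective Limit NVL} in mind, the following elementary example (our illustration; the identification is routine and not part of the development above) may be helpful.

```latex
\begin{example}
Let $I=\mathbb{N}$, let $\vlat{E}_n \ensuremath{\mathop{:}\!\!=} \mathbb{R}^n$, which is
Dedekind complete, and for $m\geq n$ let
$p_{m,n}:\mathbb{R}^m\to\mathbb{R}^n$ be the truncation
\[
  p_{m,n}(x_1,\dots,x_m) \ensuremath{\mathop{:}\!\!=} (x_1,\dots,x_n),
\]
a normal lattice homomorphism. A thread $(u_n)$ in
$\displaystyle\prod \mathbb{R}^n$ satisfies $p_{m,n}(u_m)=u_n$ for $m\geq n$,
hence is determined by the sequence of its last coordinates. Therefore the
inverse limit may be identified with $\mathbb{R}^{\omega}$, all real sequences
with the coordinatewise order, the $p_n$ becoming coordinate truncations; by
Theorem~\ref{Thm: Existence of Projective Limit NVL}~(ii) this identification
holds in ${\bf NVL}$.
\end{example}
```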
\begin{remark} Let $\cal{I} \ensuremath{\mathop{:}\!\!=} \left( (\vlat{E}_\alpha)_{\alpha\in I}, (p_{\beta, \alpha})_{\beta\succcurlyeq \alpha}\right)$ be an inverse system in a category of vector lattices, and $\cal{S} \ensuremath{\mathop{:}\!\!=} \left(\vlat{E},(p_\alpha)_{\alpha\in I}\right)$ its inverse limit in ${\bf VL}$. We occasionally suppress the projections $p_\alpha$ and simply write $\vlat{E}=\proj{\cal{I}}$ or `$\vlat{E}$ is the inverse limit of $\cal{I}$'. \end{remark}
\subsection{Permanence properties}\label{Subsection: Permanence properties}
In this section we establish some permanence properties for inverse limits, in the same vein as those for direct limits given in Theorem \ref{Thm: Inductive Limit Permanence}. These follow easily from the construction of inverse limits given in Theorem \ref{Thm: Existence of Projective Limit} and the properties of products of vector lattices given in Theorem \ref{Thm: Properties of product of vector lattices.}.
\begin{thm}\label{Thm: Proj Limits Archimedean and RU Complete} Let $\cal{I} \ensuremath{\mathop{:}\!\!=} \left( (\vlat{E}_\alpha)_{\alpha\in I}, (p_{\beta, \alpha})_{\beta \succcurlyeq \alpha}\right)$ be an inverse system in ${\bf VL}$ and $\cal{S} \ensuremath{\mathop{:}\!\!=} \left(\vlat{E},(p_\alpha)_{\alpha\in I}\right)$ its inverse limit in ${\bf VL}$. The following statements are true. \begin{enumerate}
\item[(i)] If $\vlat{E}_\alpha$ is Archimedean for every $\alpha\in I$ then so is $\vlat{E}$.
\item[(ii)] If $\vlat{E}_\alpha$ is Archimedean and relatively uniformly complete for every $\alpha\in I$ then $\vlat{E}$ is relatively uniformly complete. \end{enumerate} \end{thm}
\begin{proof} We note that (i) follows immediately from Theorem \ref{Thm: Properties of product of vector lattices.}~(ii) and the construction of an inverse limit in ${\bf VL}$.
For (ii), assume that $\vlat{E}_\alpha$ is Archimedean and relatively uniformly complete for every $\alpha \in I$. We show that every relatively uniformly Cauchy sequence in $\vlat{E}$ is relatively uniformly convergent. Because $\vlat{E}$ is Archimedean by (i), it follows from \cite[Theorem 39.4]{LuxemburgZaanen1971RSI} that it suffices to consider increasing sequences. Let $(u_n)$ be an increasing, relatively uniformly Cauchy sequence in $\vlat{E}$. Then for every $\alpha\in I$, $(p_\alpha(u_n))$ is an increasing sequence in $\vlat{E}_\alpha$. According to \cite[Theorem 59.3]{LuxemburgZaanen1971RSI}, $(p_\alpha(u_n))$ is relatively uniformly Cauchy in $\vlat{E}_\alpha$. Because each $\vlat{E}_\alpha$ is relatively uniformly complete, there exists $u_\alpha\in \vlat{E}_\alpha$ so that $(p_\alpha(u_n))$ converges relatively uniformly to $u_\alpha$. In fact, because $(p_\alpha (u_n))$ is increasing, $u_\alpha = \sup\{p_\alpha(u_n) ~:~n\in\mathbb{N}\}$. Therefore $u=(u_\alpha)=\sup \{u_n ~:~n\in\mathbb{N}\}$ in $\displaystyle\prod \vlat{E}_\alpha$. By Theorem \ref{Thm: Existence of Projective Limit NVL}~(i), $u\in \vlat{E}$ so that $u=\displaystyle\sup\{ u_n ~:~ n\in \mathbb{N}\}$ in $\vlat{E}$. Therefore $(u_n)$ converges relatively uniformly to $u$ by \cite[Lemma 39.2]{LuxemburgZaanen1971RSI}. We conclude that $\vlat{E}$ is relatively uniformly complete. \end{proof}
\begin{thm}\label{Thm: Projective limit permanence properties} Let $\cal{I} \ensuremath{\mathop{:}\!\!=} \left( (\vlat{E}_\alpha)_{\alpha\in I}, (p_{\beta, \alpha})_{\beta\succcurlyeq\alpha}\right)$ be an inverse system in ${\bf NVL}$ and $\cal{S} \ensuremath{\mathop{:}\!\!=} \left(\vlat{E},(p_\alpha)_{\alpha\in I}\right)$ its inverse limit in ${\bf VL}$. The following statements are true. \begin{enumerate}
\item[(i)] If $\vlat{E}_\alpha$ is $\sigma$-Dedekind complete for every $\alpha\in I$ then so is $\vlat{E}$.
\item[(ii)] If $\vlat{E}_\alpha$ is Dedekind complete for every $\alpha\in I$ then so is $\vlat{E}$.
\item[(iii)] If $\vlat{E}_\alpha$ is laterally complete for every $\alpha\in I$ then so is $\vlat{E}$.
\item[(iv)] If $\vlat{E}_\alpha$ is universally complete for every $\alpha\in I$ then so is $\vlat{E}$. \end{enumerate} \end{thm}
\begin{proof} We prove (ii). The statements in (i) and (iii) follow by almost identical arguments, and (iv) follows immediately from (ii) and (iii).
Let $D\subseteq \vlat{E}$ be an upwards directed set bounded above by $u \in \vlat{E}$. For every $\alpha\in I$ the set $D_\alpha \ensuremath{\mathop{:}\!\!=} p_\alpha\left[ D \right]$ is bounded above in $\vlat{E}_\alpha$ by $\pi_\alpha(u) \in \vlat{E}_\alpha$. Since $\vlat{E}_\alpha$ is Dedekind complete for every $\alpha\in I$, $v_\alpha \ensuremath{\mathop{:}\!\!=} \sup D_\alpha$ exists in $\vlat{E}_\alpha$ for all $\alpha\in I$. We have that $\sup D = \left( v_\alpha \right)$ in $\displaystyle\prod \vlat{E}_\alpha$. By Theorem \ref{Thm: Existence of Projective Limit NVL}~(i), $v\ensuremath{\mathop{:}\!\!=} \left( v_\alpha \right) \in \vlat{E}$. Because $\vlat{E}$ is a sublattice of $\displaystyle\prod \vlat{E}_\alpha$ it follows that $v=\sup D$ in $\vlat{E}$. \end{proof}
\subsection{Examples of inverse limits}\label{Subsection: Examples of projective limits}
In this section we present a number of examples of inverse systems in categories of vector lattices and their limits. These will be used in Section \ref{Section: Applications}. Our first example is related to Example \ref{Exm: Locally supported p-summable functions as inductive limit}.
\begin{example}\label{Exm: Lploc projective limit} Let $(X,\Sigma,\mu)$ be a complete $\sigma$-finite measure space. Let $\Xi \ensuremath{\mathop{:}\!\!=} (X_n)$ be an increasing sequence (w.r.t. inclusion) of measurable sets with positive measure so that $X = \displaystyle \bigcup X_n$. For $1 \leq p \leq \infty$ let $\vlat{L}^p_{\Xi-\ell oc}(X)$ denote the set of (equivalence classes of) measurable functions $u:X\to\mathbb{R}$ so that $u{\mathbf 1}_{X_n}\in \vlat{L}^p(X)$ for every $n\in\mathbb{N}$. For $m \geq n$ in $\mathbb{N}$ let $r_{m , n}:\vlat{L}^p(X_m)\to \vlat{L}^p(X_n)$ and $r_n:\vlat{L}^p_{\Xi-\ell oc}(X)\to \vlat{L}^p(X_n)$ be the restriction maps. The following statements are true.\begin{enumerate}
\item[(i)] $\cal{I}^p_{\Xi-\ell oc} \ensuremath{\mathop{:}\!\!=} ((\vlat{L}^p(X_n))_{n\in\mathbb{N}},(r_{m , n})_{m \geq n})$ is an inverse system in ${\bf NVL}$.
\item[(ii)] $\cal{S}^p_{\Xi-\ell oc}\ensuremath{\mathop{:}\!\!=} (\vlat{L}^p_{\Xi-\ell oc}(X),(r_n)_{n\in \mathbb{N}})$ is a compatible system of $\cal{I}^p_{\Xi-\ell oc}$ in ${\bf NVL}$.
\item[(iii)] $\cal{S}^p_{\Xi-\ell oc}$ is the inverse limit of $\cal{I}^p_{\Xi-\ell oc}$ in ${\bf NVL}$. \end{enumerate} \end{example}
\begin{proof} That (i) and (ii) are true is clear. We prove (iii).
Because $\vlat{L}^p(X_n)$ is Dedekind complete for every $n\in\mathbb{N}$, $\proj{\cal{I}^p_{\Xi-\ell oc}}\ensuremath{\mathop{:}\!\!=} (\vlat{F},(p_n)_{n\in\mathbb{N}})$ exists in ${\bf NVL}$ by Theorem \ref{Thm: Existence of Projective Limit NVL}~(ii). Since $\cal{S}^p_{\Xi-\ell oc}$ is a compatible system of $\cal{I}^p_{\Xi-\ell oc}$ in ${\bf NVL}$ there exists a unique normal lattice homomorphism $s:\vlat{L}^p_{\Xi-\ell oc}(X)\to \vlat{F}$ so that the diagram \[ \begin{tikzcd}[cramped] \vlat{L}^p_{\Xi-\ell oc}(X) \arrow[rd, "r_{n}"']\arrow[rr, "s"] & & \vlat{F}\arrow[ld, "p_{n}"]\\ & \vlat{L}^p(X_n) \end{tikzcd} \] commutes for every $n\in\mathbb{N}$. We show that $s$ is bijective. To see that $s$ is injective, suppose that $s(u)=0$ for some $u\in \vlat{L}^p_{\Xi-\ell oc}(X)$. Then $r_n(u)=0$ for every $n\in\mathbb{N}$; that is, the restriction of $u$ to each set $X_n$ is $0$. Since $\displaystyle\bigcup X_n = X$ it follows that $u=0$. To see that $s$ is surjective, consider $u\in\vlat{F}$. If $m \geq n$ then $p_n(u)=r_{m,n}(p_m(u))$; that is, $p_n(u)=p_m(u)$ a.e. on $X_n$. Therefore $v:X\to\mathbb{R}$ given by \[ v(x) \ensuremath{\mathop{:}\!\!=} p_n(u)(x) \text{ if } x\in X_n \] is a.e. well-defined on $X=\displaystyle \bigcup X_n$. For $n\in\mathbb{N}$, $v$ restricted to $X_n$ is $p_n(u)\in \vlat{L}^p(X_n)$. Therefore $v\in\vlat{L}^p_{\Xi-\ell oc}(X)$. Furthermore, $p_n(s(v)) = r_n(v) = p_n(u)$ for all $n\in\mathbb{N}$ so that $s(v)=u$. We conclude that $s$ is a lattice isomorphism. \end{proof}
Our second example is a companion result for Examples \ref{Exm: Compactly supported measures as inductive limit} and \ref{Exm: Compactly supported NORMAL measures as inductive limit}.
\begin{example}\label{Exm: Continuous functions projective limit} Let $X$ be a topological space and $\cal{O}\ensuremath{\mathop{:}\!\!=} \{O_\alpha ~:~ \alpha\in I\}$ a collection of non-empty open subsets of $X$ which is upward directed with respect to inclusion; that is, $\alpha \preccurlyeq \beta$ if and only if $O_\alpha \subseteq O_\beta$. Assume that $\bigcup O_\alpha$ is dense and ${\mathrm C }$-embedded in $X$. For $\beta\succcurlyeq \alpha$, denote by $r_{\beta , \alpha}:{\mathrm C }(\bar O_\beta)\to {\mathrm C }(\bar O_\alpha)$ and $r_\alpha:{\mathrm C }(X)\to{\mathrm C }(\bar O_\alpha)$ the restriction maps. The following statements are true. \begin{enumerate}
\item[(i)] $\cal{I}_\cal{O} \ensuremath{\mathop{:}\!\!=} (({\mathrm C }(\bar O_\alpha))_{\alpha\in I},(r_{\beta , \alpha})_{\beta \succcurlyeq \alpha})$ is an inverse system in ${\bf VL}$.
\item[(ii)] $\cal{S}_\cal{O} \ensuremath{\mathop{:}\!\!=} ({\mathrm C }(X),(r_\alpha)_{\alpha\in I})$ is a compatible system of $\cal{I}_{\cal{O}}$ in ${\bf VL}$.
\item[(iii)] $\cal{S}_\cal{O}$ is the inverse limit of $\cal{I}_{\cal{O}}$ in ${\bf VL}$.
\item[(iv)] If $X$ is a Tychonoff space and $O_\alpha$ is precompact for every $\alpha\in I$ then $\cal{I}_\cal{O}$ is an inverse system in ${\bf NIVL}$, and $\cal{S}_\cal{O}$ is a compatible system of $\cal{I}_{\cal{O}}$ in ${\bf NIVL}$. \end{enumerate} \end{example}
\begin{proof} That (i), (ii) and (iii) are true follows from arguments similar to those used in the proof of Example \ref{Exm: Lploc projective limit}. We therefore omit the proofs of these statements. We only note that for (iii), we use the fact that every $u\in{\mathrm C }(\displaystyle\bigcup O_\alpha)$ has a unique continuous and real-valued extension to $X$; that is, restriction from $X$ to $\displaystyle\bigcup O_\alpha$ defines a lattice isomorphism from ${\mathrm C }(\displaystyle\bigcup O_\alpha)$ onto $\cont(X)$.
To verify (iv) it is sufficient to show that the $r_\alpha$ and $r_{\beta , \alpha}$ are order continuous and interval preserving. That these maps are order continuous follows from \cite[Theorem 3.4]{KandicVavpeticPositivity2019}. That they are interval preserving follows from the fact that every compact subset of a Tychonoff space is ${\mathrm C }^\ast$-embedded. We show that the $r_\alpha$ are interval preserving, the proof for the $r_{\beta , \alpha}$ being identical. Consider $\alpha\in I$, $u\in {\mathrm C }(X)^+$ and $v\in{\mathrm C }(\bar O_\alpha)$ so that $0\leq v\leq r_\alpha(u)$. Because $\bar O_\alpha$ is ${\mathrm C }^\ast$-embedded in $X$ there exists a continuous function $v'\in {\mathrm C } (X)$ so that $r_\alpha (v')=v$. Let $w\ensuremath{\mathop{:}\!\!=} (0\vee v')\wedge u$. Then $0\leq w\leq u$ and, because $r_\alpha$ is a lattice homomorphism, $r_\alpha(w)=v$. Therefore $[0,r_\alpha (u)] = r_\alpha [[0,u]]$. \end{proof}
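A simple special case of Example \ref{Exm: Continuous functions projective limit} (our illustration, with the obvious identifications):

```latex
\begin{example}
Let $X=\mathbb{R}$ and $O_n \ensuremath{\mathop{:}\!\!=} (-n,n)$ for $n\in\mathbb{N}$. Each
$O_n$ is precompact, $\bigcup O_n = \mathbb{R}$ is trivially dense and
${\mathrm C}$-embedded in $X$, and for $m\geq n$ the maps
$r_{m,n}:{\mathrm C}([-m,m])\to{\mathrm C}([-n,n])$ are restrictions. By
Example~\ref{Exm: Continuous functions projective limit}~(iii),
\[
  {\mathrm C}(\mathbb{R})
  = \proj{\left( ({\mathrm C}([-n,n]))_{n\in\mathbb{N}}, (r_{m,n})_{m\geq n} \right)}
\]
in ${\bf VL}$: a continuous function on $\mathbb{R}$ is precisely a coherent
thread of continuous functions on the intervals $[-n,n]$.
\end{example}
```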
Our next example is of a more general nature. It is an essential ingredient in our solution of the decomposition problem for ${\mathrm C }(X)$ mentioned in Section \ref{Section: Introduction}.
\begin{example}\label{Exm: Projective limits of bands} Let $\vlat{E}$ be an Archimedean vector lattice. Denote by $\bands{\vlat{E}}$ the Boolean algebra of projection bands in $\vlat{E}$.\footnote{$\bands{\vlat{E}}$ is ordered by inclusion.} Let $\mathrm{M}$ be a non-trivial ideal in $\bands{\vlat{E}}$; that is, $\mathrm{M}\subset \bands{\vlat{E}}$ is downward closed, upward directed and does not consist of the trivial band $\{0\}$ only. For notational convenience we express $\mathrm{M}$ as indexed by a directed set $I$, $\mathrm{M} = \{\vlat{B}_\alpha ~:~ \alpha \in I\}$, so that $\alpha \preccurlyeq \beta$ if and only if $\vlat{B}_{\alpha}\subseteq \vlat{B}_{\beta}$.
For $\vlat{B}_\alpha \subseteq \vlat{B}_\beta$ in $\mathrm{M}$, denote by $P_\alpha$ the band projection of $\vlat{E}$ onto $\vlat{B}_\alpha$ and by $P_{\beta , \alpha}$ the band projection of $\vlat{B}_\beta$ onto $\vlat{B}_\alpha$; that is, $P_{\beta , \alpha} = \left.P_{\alpha}\right|_{\vlat{B}_\beta}$. The following statements are true. \begin{enumerate}
\item[(i)] $\cal{I}_{\mathrm{M}} \ensuremath{\mathop{:}\!\!=} (\mathrm{M} , (P_{\beta,\alpha})_{\beta\succcurlyeq \alpha})$ is an inverse system in ${\bf NIVL}$ and $\tilde{\cal{S}} \ensuremath{\mathop{:}\!\!=} (\vlat{E},(P_\alpha)_{\alpha\in I})$ is a compatible system of $\cal{I}_{\mathrm{M}}$ in ${\bf NIVL}$.
\item[(ii)] $\proj{\cal{I}_{\mathrm{M}}}\ensuremath{\mathop{:}\!\!=} \left(\vlat{F},(p_\alpha)_{\alpha \in I}\right)$ exists in ${\bf VL}$. If $\vlat{E}$ is Dedekind complete then $\left(\vlat{F},(p_\alpha)_{\alpha \in I}\right)$ is the inverse limit of $\cal{I}_{\mathrm{M}}$ in ${\bf NVL}$.
\item[(iii)] $P_{\mathrm{M}}:\vlat{E}\ni u \mapsto (P_\alpha(u))_{\alpha\in I}\in \vlat{F}$ is the unique lattice homomorphism so that the diagram
\[
\begin{tikzcd}[cramped]
\vlat{E} \arrow[rd, "P_{\alpha}"']\arrow[rr, "P_{\mathrm{M}}"] & & \vlat{F}\arrow[ld, "p_{\alpha}"]\\
& \vlat{B}_\alpha
\end{tikzcd}
\]
commutes for every $\alpha\in I$. Furthermore, $P_{\mathrm{M}}[\vlat{E}]$ is an order dense sublattice of $\vlat{F}$. If $\vlat{E}$ is Dedekind complete then $P_{\mathrm{M}}[\vlat{E}]$ is an ideal in $\vlat{F}$.
\item[(iv)] $P_{\mathrm{M}}$ is injective if and only if $\{P_\alpha : \alpha\in I\}$ separates the points of $\vlat{E}$. In this case, $P_{\mathrm{M}}$ is a lattice isomorphism onto an order dense sublattice of $\vlat{F}$.
\end{enumerate} \end{example}
\begin{proof} Since band projections are both interval preserving and order continuous, (i) follows immediately from Proposition \ref{Prop: Properties of band projections}. The statement in (ii) follows immediately from (i) and Theorems \ref{Thm: Existence of Projective Limit} and \ref{Thm: Existence of Projective Limit NVL}~(ii). That (iv) is true is a direct consequence of the definition of $P_{\mathrm{M}}$.
We proceed to prove (iii). Since $P_\alpha$ is a lattice homomorphism for every $\alpha\in I$, $P_\vlat{M}$ is a lattice homomorphism into $\displaystyle\prod \vlat{B}_\alpha$. If $u\in\vlat{E}$ and $\alpha \preccurlyeq \beta$ then $P_{\beta,\alpha}(P_\beta ( u)) = P_\alpha (u)$ by Proposition \ref{Prop: Properties of band projections} (iii). Hence $P_\vlat{M}[\vlat{E}]$ is a sublattice of $\vlat{F}$. It follows from the construction of $\vlat{F}$ as a sublattice of $\displaystyle\prod \vlat{B}_\alpha$ given in Theorem \ref{Thm: Existence of Projective Limit} that $p_\alpha \circ P_{\vlat{M}}=P_\alpha$ for all $\alpha\in I$.
Let $0 < u=(u_\alpha)\in \vlat{F}$. There exists $\alpha_0\in I$ so that $u_{\alpha_0}>0$ in $\vlat{B}_{\alpha_0}\subseteq \vlat{E}$. Then $0<P_\vlat{M}(u_{\alpha_0}) \leq u$ in $\vlat{F}$. Hence $P_\vlat{M}[\vlat{E}]$ is order dense in $\vlat{F}$.
Assume that $\vlat{E}$ is Dedekind complete. We show that $P_\vlat{M}[\vlat{E}]$ is an ideal in $\vlat{F}$. Consider $v\in \vlat{E}^+$ and $u=(u_\alpha)\in \vlat{F}^+$ so that $0\leq u \leq P_\vlat{M}(v)$. Then $u_\alpha \leq P_\alpha(v) \leq v$ for all $\alpha\in I$. Let $w=\sup \{u_\alpha ~:~ \alpha \in I\}$ in $\vlat{E}$. We claim that $P_\vlat{M}(w)=u$. Because $u_\alpha \leq w$ for all $\alpha \in I$, $u_\alpha = P_\alpha(u_\alpha) \leq P_\alpha(w)$. Therefore $u\leq P_\vlat{M}(w)$. For the reverse inequality we note that for all $\beta\in I$, \[ P_\beta(w) = \sup\{ P_\beta u_\alpha ~:~ \alpha \in I\}. \] We claim that $P_{\beta}(u_\alpha) \leq u_\beta$ for all $\alpha,\beta\in I$. It follows from this claim that $P_\beta(w) \leq u_\beta$ so that $P_\vlat{M}(w) \leq u$. Thus we need only verify that, indeed, $P_{\beta}(u_\alpha) \leq u_\beta$ for all $\alpha,\beta \in I$. To this end, fix $\alpha,\beta\in I$. Let $\gamma\in I$ be a common upper bound for $\alpha$ and $\beta$. Because $u=(u_\alpha) \in\vlat{F}$, $\tilde{\cal{S}}$ is compatible with $\cal{I}_\vlat{M}$ and $u_\gamma,u_\alpha\in \vlat{E}$ we have \[ P_\beta(u_\alpha) = P_\beta (P_{\gamma, \alpha}(u_\gamma)) \leq P_\beta(u_\gamma) = P_{\gamma , \beta} (P_\gamma(u_\gamma)) = P_{\gamma , \beta}(u_\gamma) = u_\beta. \] This completes the proof. \end{proof}
\begin{remark}\label{Remark: Vector lattice as projective limit of bands} Let $\vlat{E}$, $\bands{\vlat{E}}$, $\mathrm{M}$, $\cal{I}_{\mathrm{M}}$, $P_{\mathrm M}$ and $\tilde{\cal{S}}$ be as in Example \ref{Exm: Projective limits of bands}. Assume that $\{P_\alpha ~:~ \alpha\in I\}$ separates the points of $\vlat{E}$. It may happen that $P_{\mathrm M}$ maps $\vlat{E}$ onto $\proj{\cal{I}_\vlat{M}}$, but this is not always the case. If this is the case, then $\proj{\cal{I}_\vlat{M}}=\tilde{\cal{S}}$. A sufficient, but not necessary, condition for $P_\mathrm{M}$ to map $\vlat{E}$ onto $\vlat{F}$ is that $\vlat{E}\in\mathrm{M}$. \begin{enumerate}
\item[(i)] Consider the vector lattice $\mathbb{R}^\omega$ of all functions from $\mathbb{N}$ to $\mathbb{R}$. For $F\subseteq \mathbb{N}$ let
\[
\vlat{B}_F \ensuremath{\mathop{:}\!\!=} \{u\in \mathbb{R}^\omega ~:~ \supp (u) \subseteq F\}.
\]
Then $\mathrm{M} \ensuremath{\mathop{:}\!\!=} \{\vlat{B}_F ~:~ \emptyset\neq F\subseteq \mathbb{N} \text{ finite} \}$ is a proper, non-trivial ideal in $\bands{\mathbb{R}^\omega}$ and $\{P_F : \emptyset\neq F\subseteq \mathbb{N} \text{ finite} \}$ separates the points of $\mathbb{R}^\omega$. It is easy to see that $P_{\mathrm{M}}$ maps $\mathbb{R}^\omega$ onto $\proj{\cal{I}_{\mathrm{M}}}$.
\item[(ii)] Consider the vector lattice $\ell^1$. As in (i), for $F\subseteq \mathbb{N}$ define
\[
\vlat{B}_F \ensuremath{\mathop{:}\!\!=} \{u\in \ell^1 ~:~ \supp (u) \subseteq F\}.
\]
Then $\mathrm{M} \ensuremath{\mathop{:}\!\!=} \{\vlat{B}_F ~:~ \emptyset\neq F\subseteq \mathbb{N} \text{ finite} \}$ is a proper, non-trivial ideal in $\bands{\ell^1}$ and $\proj{\cal{I}_\mathrm{M}}$ is $\mathbb{R}^\omega$. In this case, $P_{\mathrm{M}}[\ell^1]$ is a proper subspace of $\proj{\cal{I}_\mathrm{M}}$. \end{enumerate} \end{remark}
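The discrepancy in (ii) can be made explicit by a short computation (supplementary to the remark above; $P_{G,F}$ denotes the band projection of $\vlat{B}_G$ onto $\vlat{B}_F$, in the notation of Example \ref{Exm: Projective limits of bands}):

```latex
For each non-empty finite $F\subseteq\mathbb{N}$ let
$u_F \ensuremath{\mathop{:}\!\!=} {\mathbf 1}_F \in \vlat{B}_F$. Since
\[
  P_{G,F}(u_G) = {\mathbf 1}_G\,{\mathbf 1}_F = {\mathbf 1}_F = u_F
  \quad \text{whenever } F\subseteq G,
\]
the family $(u_F)$ is a thread, i.e.\ an element of
$\proj{\cal{I}_\mathrm{M}}$, corresponding to the constant sequence
$(1,1,1,\dots)\in\mathbb{R}^\omega$. Any $v\in\ell^1$ with
$P_{\mathrm{M}}(v)=(u_F)$ would satisfy $v(k)=1$ for every $k\in\mathbb{N}$,
contradicting $v\in\ell^1$. Hence
$(u_F)\in\proj{\cal{I}_\mathrm{M}}\setminus P_{\mathrm{M}}[\ell^1]$.
```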
Based on Remark \ref{Remark: Vector lattice as projective limit of bands} we ask the following question: Given a Dedekind complete vector lattice $\vlat{E}$, does there exist a proper ideal $\mathrm{M}$ in $\bands{\vlat{E}}$ so that $P_\mathrm{M}:\vlat{E}\to \proj{\cal{I}_\mathrm{M}}$ is an isomorphism onto $\proj{\cal{I}_\mathrm{M}}$? We do not pursue this question any further here, except to note the following example.
\begin{example} Let $X$ be an extremally disconnected Tychonoff space. Let $\cal{O} \ensuremath{\mathop{:}\!\!=} \{O_\alpha : \alpha \in I\}$ be a proper, non-trivial ideal in the Boolean algebra ${\bf R}_X$ of clopen subsets of $X$. Assume that $\bigcup O_\alpha$ is dense and ${\mathrm C }$-embedded in $X$. Then $\vlat{M}\ensuremath{\mathop{:}\!\!=} \{{\mathrm C } (O_\alpha) ~:~ \alpha\in I\}$ is a proper, non-trivial ideal in $\bands{\cont(X)}$ and $P_\vlat{M}:\cont(X) \to \proj{\cal{I}_\vlat{M}}$ is a lattice isomorphism onto $\proj{\cal{I}_\vlat{M}}$. \end{example}
\begin{proof} The Boolean algebras ${\bf R}_X$ and $\bands{\cont(X)}$ are isomorphic. In particular, the isomorphism is given by \[ {\bf R}_X\ni O \longmapsto \vlat{B}_O=\{u\in \cont(X) ~:~ \supp(u)\subseteq O\}, \] see \cite[Theorem 12.9]{deJongevanRooijRieszSpaces1977}. Moreover, for $O\in{\bf R}_X$ the band projection onto $\vlat{B}_O$ is given by restriction to $O$. Finally, we note that for $O\in{\bf R}_X$ the band $\vlat{B}_O$ may be identified with ${\mathrm C }(O)$. Therefore $\vlat{M}$ is a proper, non-trivial ideal in $\bands{\cont(X)}$. It follows from Example \ref{Exm: Continuous functions projective limit} that $\proj{\cal{I}_\vlat{M}}=\cont(X)$; that is, $P_\vlat{M}:\cont(X) \to \proj{\cal{I}_\vlat{M}}$ is a lattice isomorphism onto $\proj{\cal{I}_\vlat{M}}$. \end{proof}
\section{Dual spaces}\label{Section: Dual spaces}
The results presented in this section form the technical heart of the paper. Roughly speaking, we will show, under fairly general assumptions, that the order (continuous) dual of a direct limit is an inverse limit. On the other hand, more restrictive conditions are needed to show that the order (continuous) dual of an inverse limit is a direct limit. These results form the basis of the applications given in Section \ref{Section: Applications}.
\subsection{Duals of direct limits}\label{Subsection: Duals of inductive limits}
\begin{defn}\label{Defn: Dual system of inductive system} Let $\cal{D}\ensuremath{\mathop{:}\!\!=} \left( (\vlat{E}_\alpha)_{\alpha\in I}, (e_{\alpha, \beta})_{\alpha\preccurlyeq\beta}\right)$ be a direct system in ${\bf IVL}$. The \emph{dual system} of $\cal{D}$ is the pair $\cal{D}^\sim \ensuremath{\mathop{:}\!\!=} \left( (\vlat{E}^\sim_\alpha)_{\alpha\in I}, (e^\sim_{\alpha, \beta})_{\alpha\preccurlyeq\beta}\right)$.
If $\cal{D}$ is a direct system in ${\bf NIVL}$, define the \emph{order continuous dual system} of $\cal{D}$ as the pair $\ordercontn{\cal{D}} \ensuremath{\mathop{:}\!\!=} \left( (\ordercontn{(\vlat{E}_\alpha)})_{\alpha\in I}, (e^\sim_{\alpha, \beta})_{\alpha\preccurlyeq\beta}\right)$, where now $e^\sim_{\alpha, \beta}: \ordercontn{(\vlat{E}_\beta)} \to \ordercontn{(\vlat{E}_\alpha)}$. \end{defn}
\begin{prop}\label{Prop: Dual system of inductive system is projective system} Let $\cal{D}\ensuremath{\mathop{:}\!\!=} \left( (\vlat{E}_\alpha)_{\alpha\in I}, (e_{\alpha, \beta})_{\alpha\preccurlyeq\beta}\right)$ be a direct system in ${\bf VL}$. The following statements are true. \begin{enumerate}
\item[(i)] If $\cal{D}$ is a direct system in ${\bf IVL}$ then the dual system $\cal{D}^\sim$ is an inverse system in ${\bf NIVL}$.
\item[(ii)] If $\cal{D}$ is a direct system in ${\bf NIVL}$ then the order continuous dual system $\ordercontn{\cal{D}}$ is an inverse system in ${\bf NIVL}$. \end{enumerate} \end{prop}
\begin{proof} We present the proof of (i). That (ii) is true follows by a similar argument, so we omit the proof.
Assume that $\cal{D}$ is a direct system in ${\bf IVL}$. Then the maps $e_{\alpha, \beta}: \vlat{E}_\alpha\to \vlat{E}_\beta$ are interval preserving lattice homomorphisms for all $\alpha \preccurlyeq \beta$. By Theorem \ref{Thm: Adjoints of interval preserving vs lattice homomorphisms} the adjoint maps $e^\sim_{\alpha, \beta}:\vlat{E}_\beta^\sim \to \vlat{E}_\alpha^\sim$ are all normal interval preserving lattice homomorphisms. Fix $\alpha, \beta, \gamma\in I$ such that $\alpha \preccurlyeq \beta \preccurlyeq \gamma$. Since $\cal{D}$ is a direct system in ${\bf IVL}$, $e_{\alpha , \gamma} = e_{\beta , \gamma } \circ e_{\alpha , \beta}$ so that $e_{\alpha , \gamma}^\sim = e_{\alpha , \beta}^\sim \circ e_{\beta , \gamma }^\sim$. Thus the dual system $\cal{D}^\sim = \left( (\vlat{E}^\sim_\alpha)_{\alpha\in I}, (e^\sim_{\alpha, \beta})_{\alpha\preccurlyeq\beta}\right)$ is an inverse system in ${\bf NIVL}$. \end{proof}
\begin{prop}\label{Prop: Dual compactible systems inductive limit} Let $\cal{D} \ensuremath{\mathop{:}\!\!=} \left( (\vlat{E}_\alpha)_{\alpha\in I}, (e_{\alpha, \beta})_{\alpha\preccurlyeq\beta} \right)$ be a direct system in ${\bf IVL}$ and $\cal{S}\ensuremath{\mathop{:}\!\!=} \left( \vlat{E}, (e_\alpha)_{\alpha\in I} \right)$ a compatible system of $\cal{D}$ in ${\bf IVL}$. The following statements are true. \begin{enumerate}
\item[(i)] $\cal{S}^\sim \ensuremath{\mathop{:}\!\!=} \left( \vlat{E}^\sim, (e^\sim_\alpha)_{\alpha\in I} \right)$ is a compatible system for the inverse system $\cal{D}^\sim$ in ${\bf NIVL}$.
\item[(ii)] If $\cal{D}$ is a direct system in ${\bf NIVL}$ then $\ordercontn{\cal{S}} \ensuremath{\mathop{:}\!\!=} \left( \ordercontn{\vlat{E}}, (e^\sim_\alpha)_{\alpha\in I} \right)$ is a compatible system for the inverse system $\ordercontn{\cal{D}}$ in ${\bf NIVL}$. \end{enumerate} \end{prop}
\begin{proof} Again, we only prove (i) as the proof of (ii) is similar. By Theorem \ref{Thm: Adjoints of interval preserving vs lattice homomorphisms}, $e_{\alpha}^\sim: \vlat{E}^\sim \to \vlat{E}^\sim_\alpha$ is a normal interval preserving lattice homomorphism for every $\alpha\in I$. Furthermore, if $\alpha \preccurlyeq \beta $ then $e_\alpha = e_\beta \circ e_{\alpha , \beta}$ so that $e_\alpha^\sim = e_{\alpha , \beta}^\sim \circ e_\beta^\sim$. Therefore $\cal{S}^\sim$ is a compatible system of $\cal{D}^\sim$ in ${\bf NIVL}$. \end{proof}
The main results of this section are the following.
\begin{thm}\label{Thm: Dual of ind sys in VLIC is proj of duals in NVL} Let $\cal{D}\ensuremath{\mathop{:}\!\!=} \left( (\vlat{E}_\alpha)_{\alpha\in I}, (e_{\alpha, \beta})_{\alpha\preccurlyeq\beta}\right)$ be a direct system in ${\bf IVL}$, and let $\cal{S} \ensuremath{\mathop{:}\!\!=} \left( \vlat{E},(e_\alpha)_{\alpha\in I}\right)$ be the direct limit of $\cal{D}$ in ${\bf IVL}$. The following statements are true. \begin{itemize} \item[(i)] $\proj{\cal{D}^\sim}\ensuremath{\mathop{:}\!\!=} \left(\vlat{F},(p_\alpha)_{\alpha\in I}\right)$ exists in ${\bf NVL}$. \item[(ii)] $\left(\ind{\cal{D}}\right)^\sim \cong \proj{\cal{D}^\sim}$ in ${\bf NVL}$; that is, there exists a lattice isomorphism $T: \vlat{E}^\sim \to \vlat{F}$ such that the following diagram commutes for all $\alpha\in I$. \end{itemize} \begin{eqnarray}\label{EQ: Thm-Dual of ind sys in VLIC is proj of duals in NVL diagram} \begin{tikzcd}[cramped] \vlat{E}^\sim \arrow[rd, "e^\sim_\alpha"'] \arrow[rr, "T"] & & \vlat{F} \arrow[dl, "p_\alpha"]\\ & \orderdual{\vlat{E}}_\alpha \end{tikzcd} \end{eqnarray} \end{thm}
\begin{proof} That (i) is true follows from Proposition \ref{Prop: Dual system of inductive system is projective system} and Theorem \ref{Thm: Existence of Projective Limit NVL}~(ii) because $\orderdual{\vlat{E}}_\alpha$ is Dedekind complete for every $\alpha\in I$.
We prove (ii). By Proposition \ref{Prop: Dual compactible systems inductive limit}, $\cal{S}^\sim \ensuremath{\mathop{:}\!\!=} \left( \orderdual{\vlat{E}},(e^\sim_\alpha)_{\alpha\in I}\right)$ is a compatible system for $\cal{D}^\sim$ in ${\bf NIVL}$, hence also in ${\bf NVL}$. Therefore there exists a unique normal lattice homomorphism $T:\orderdual{\vlat{E}}\to \vlat{F}$ so that the diagram (\ref{EQ: Thm-Dual of ind sys in VLIC is proj of duals in NVL diagram}) commutes. We show that $T$ is bijective.
To see that $T$ is injective, let $\psi\in \vlat{E}^\sim$ and suppose that $T(\psi)=0$. Consider any $u\in \vlat{E}$. There exist $\alpha\in I$ and $u_\alpha\in \vlat{E}_\alpha$ so that $u=e_\alpha(u_\alpha)$, see Remark \ref{Remark: Inductive limit notation}. Then $\psi(u) = \psi(e_\alpha(u_\alpha)) = e^\sim_\alpha(\psi)(u_\alpha) = p_\alpha(T(\psi))(u_\alpha) = 0$. This holds for all $u\in\vlat{E}$ so that $\psi=0$. Therefore $T$ is injective.
It remains to show that $T$ maps $\vlat{E}^\sim$ onto $\vlat{F}$. To this end, consider $(\varphi_\alpha) \in \vlat{F}^+$. We construct a functional $0 \leq \varphi\in \vlat{E}^\sim$ so that $T(\varphi) = (\varphi_\alpha)$.
Let $u\in\vlat{E}$. Consider any $\alpha,\beta\in I$, $u_\alpha\in\vlat{E}_\alpha$ and $u_\beta\in\vlat{E}_\beta$ so that $e_\alpha(u_\alpha)=u=e_\beta(u_\beta)$, see Remark \ref{Remark: Inductive limit notation}. We claim that $\varphi_\alpha(u_\alpha) = \varphi_\beta(u_\beta)$. Indeed, there exists $\gamma \succcurlyeq \alpha,\beta$ in $I$ so that $e_{\alpha , \gamma}(u_\alpha) = e_{\beta , \gamma}(u_\beta)$. Furthermore, $e_{\gamma}(e_{\alpha , \gamma}(u_\alpha))=u=e_{\gamma}(e_{\beta , \gamma}(u_\beta))$. Because $(\varphi_\alpha) \in\vlat{F}$ we have $\varphi_\alpha = e_{\alpha , \gamma}^\sim (\varphi_\gamma)$ and $\varphi_\beta = e_{\beta , \gamma}^\sim (\varphi_\gamma)$; that is, \[ \varphi_\alpha (u_\alpha) = \varphi_\gamma (e_{\alpha , \gamma}(u_\alpha)) = \varphi_\gamma (e_{\beta , \gamma}(u_\beta)) = \varphi_\beta (u_\beta). \] Thus our claim is verified.
For $u\in\vlat{E}$ define $\varphi(u) \ensuremath{\mathop{:}\!\!=} \varphi_\alpha(u_\alpha)$ if $u=e_\alpha (u_\alpha)$. By our above claim, $\varphi$ is a well-defined map from $\vlat{E}$ into $\mathbb{R}$. To see that $\varphi$ is linear, consider $u,v\in \vlat{E}$ and $a,b\in\mathbb{R}$. Let $u=e_\alpha (u_\alpha)$ and $v=e_\beta (v_\beta)$ where $\alpha,\beta\in I$, $u_\alpha\in\vlat{E}_\alpha$ and $v_\beta\in \vlat{E}_\beta$. There exists $\gamma \succcurlyeq \alpha,\beta$ in $I$ so that \[ au+bv = e_{\gamma}(a e_{\alpha , \gamma}(u_\alpha) + b e_{\beta , \gamma}(v_\beta)). \] Then \[ \varphi(au+bv) = \varphi_{\gamma}(a e_{\alpha , \gamma}(u_\alpha) + b e_{\beta , \gamma}(v_\beta)) = a\varphi_\gamma(e_{\alpha , \gamma}(u_\alpha)) + b \varphi_\gamma(e_{\beta , \gamma}(v_\beta)). \] But $e_\gamma (e_{\alpha , \gamma}(u_\alpha)) = e_\alpha(u_\alpha)=u$ and $e_\gamma (e_{\beta , \gamma}(v_\beta)) = e_\beta(v_\beta)=v$. Hence $\varphi_\gamma(e_{\alpha , \gamma}(u_\alpha))= \varphi(u)$ and $\varphi_\gamma(e_{\beta , \gamma}(v_\beta)) = \varphi(v)$. Therefore $\varphi(au+bv) = a\varphi(u) + b\varphi(v)$.
We show that $\varphi$ is positive. If $0\leq u\in \vlat{E}$ then there exist $\alpha\in I$ and $0\leq u_\alpha \in\vlat{E}_\alpha$ so that $u=e_\alpha(u_\alpha)$, see Remark \ref{Remark: Inductive limit notation}. Then $\varphi(u) = \varphi_\alpha( u_\alpha )\geq 0$, the final inequality following from the fact that $(\varphi_\alpha)\in\vlat{F}^+$.
It follows from the definition of $\varphi$ and the commutativity of the diagram (\ref{EQ: Thm-Dual of ind sys in VLIC is proj of duals in NVL diagram}) that $p_{\alpha}(T(\varphi))= e_\alpha^\sim(\varphi)=\varphi_\alpha$ for every $\alpha\in I$. Hence $T(\varphi) = (\varphi_\alpha)$ so that $T$ is surjective. \end{proof}
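As a concrete illustration of Theorem \ref{Thm: Dual of ind sys in VLIC is proj of duals in NVL}, with the standard identifications, let $\vlat{E}_n = \mathbb{R}^n$ and let $e_{n , m}:\mathbb{R}^n\to\mathbb{R}^m$, $n\leq m$, be the inclusion onto the first $n$ coordinates; each $e_{n , m}$ is an injective interval preserving lattice homomorphism. The direct limit is the space of finitely supported sequences, the adjoints $e_{n , m}^\sim:\mathbb{R}^m\to\mathbb{R}^n$ are the coordinate projections, and therefore
\[
\proj{\cal{D}^\sim} \cong \mathbb{R}^{\mathbb{N}}.
\]
The theorem thus identifies the order dual of the space of finitely supported sequences with $\mathbb{R}^{\mathbb{N}}$, as can also be seen directly: every order interval of the direct limit lies in a finite-dimensional sublattice, so every linear functional on it is order bounded and is determined by its values on the standard unit vectors.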
\begin{thm}\label{Thm: Order continuous dual of ind sys in NRIP is proj of duals in NRiesz} Let $\cal{D}\ensuremath{\mathop{:}\!\!=} \left( (\vlat{E}_\alpha)_{\alpha\in I}, (e_{\alpha, \beta})_{\alpha\preccurlyeq\beta}\right)$ be a direct system in ${\bf NIVL}$, and let $\cal{S} \ensuremath{\mathop{:}\!\!=} \left( \vlat{E},(e_\alpha)_{\alpha\in I}\right)$ be the direct limit of $\cal{D}$ in ${\bf IVL}$. The following statements are true. \begin{itemize} \item[(i)] $\proj{\ordercontn{\cal{D}}}\ensuremath{\mathop{:}\!\!=} \left(\vlat{G},(p_\alpha)_{\alpha\in I}\right)$ exists in ${\bf NVL}$. \item[(ii)] If $e_{\alpha , \beta}$ is injective for all $\alpha \preccurlyeq \beta$ in $I$ then $\ordercontn{\left(\ind{\cal{D}}\right)} \cong \proj{\ordercontn{\cal{D}}}$ in ${\bf NVL}$; that is, there exists a lattice isomorphism $S: \ordercontn{\vlat{E}} \to \vlat{G}$ such that the following diagram commutes for all $\alpha\in I$. \end{itemize} \begin{eqnarray}\label{EQ: Thm-Order continuous dual of ind sys in NRIP is proj of duals in NRiesz diagram} \begin{tikzcd}[cramped] \ordercontn{\vlat{E}} \arrow[rd, "e^\sim_\alpha"'] \arrow[rr, "S"] & & \vlat{G} \arrow[dl, "p_\alpha"]\\ & \ordercontn{(\vlat{E}_\alpha)} \end{tikzcd} \end{eqnarray} \end{thm}
\begin{proof} The proof proceeds in a similar fashion to that of Theorem \ref{Thm: Dual of ind sys in VLIC is proj of duals in NVL}. That (i) is true follows from Proposition \ref{Prop: Dual system of inductive system is projective system} and Theorem \ref{Thm: Existence of Projective Limit NVL}~(ii), because $\ordercontn{(\vlat{E}_\alpha)}$, being a band in $\orderdual{(\vlat{E}_\alpha)}$, is Dedekind complete for every $\alpha\in I$.
For the proof of (ii), assume that $e_{\alpha , \beta}$ is injective for all $\alpha \preccurlyeq \beta$ in $I$. By Proposition \ref{Prop: Dual compactible systems inductive limit}, $\ordercontn{\cal{S}}$ is a compatible system of $\ordercontn{\cal{D}}$ in ${\bf NIVL}$, hence in ${\bf NVL}$. Therefore there exists a unique normal lattice homomorphism $S:\ordercontn{\vlat{E}}\to \vlat{G}$ so that the diagram (\ref{EQ: Thm-Order continuous dual of ind sys in NRIP is proj of duals in NRiesz diagram}) commutes.
It follows by exactly the same reasoning as employed in the proof of Theorem \ref{Thm: Dual of ind sys in VLIC is proj of duals in NVL} that $S$ is injective. It remains to verify that $S$ maps $\ordercontn{\vlat{E}}$ onto $\vlat{G}$. Let $(\varphi_\alpha)\in\vlat{G}^+$. As in the proof of Theorem \ref{Thm: Dual of ind sys in VLIC is proj of duals in NVL} we define a positive functional $\varphi \in \vlat{E}^\sim$ by setting, for each $u\in \vlat{E}$, \[ \varphi(u)\ensuremath{\mathop{:}\!\!=} \varphi_\alpha (u_\alpha) \text{ if } u = e_\alpha (u_\alpha). \] We claim that $\varphi$ is order continuous. To see that this is so, let $A\downarrow 0$ in $\vlat{E}$. Without loss of generality, we may assume that $A$ is bounded above by some $0\leq w\in \vlat{E}$. By Remark \ref{Remark: Inductive limit notation} (ii) there exist an $\alpha\in I$ and a $0\leq w_\alpha\in \vlat{E}_\alpha$ so that $e_\alpha(w_\alpha)=w$, and, by Remark \ref{Remark: Inductive limit notation} (iii), $e_\alpha$ is injective. Because $e_\alpha$ is also interval preserving, there exists for every $u\in A$ a unique $0\leq u_\alpha \leq w_\alpha$ in $\vlat{E}_\alpha$ so that $e_\alpha (u_\alpha)=u$. Let $A_\alpha \ensuremath{\mathop{:}\!\!=} \{u_\alpha ~:~ u\in A\}$. Then $A_\alpha \downarrow 0$ in $\vlat{E}_\alpha$. Indeed, let $0\leq v\in\vlat{E}_\alpha$ be a lower bound for $A_\alpha$. Then $0\leq e_\alpha(v) \leq e_\alpha(u_\alpha)=u$ for all $u\in A$. Because $A\downarrow 0$ in $\vlat{E}$ it follows that $e_\alpha (v)=0$, hence $v=0$. By definition of $\varphi$ and the order continuity of $\varphi_\alpha$ we now have $\varphi[A]=\varphi_\alpha[A_\alpha]\downarrow 0$. Hence $\varphi\in \ordercontn{\vlat{E}}$.
By definition of $\varphi$ and the commutativity of the diagram (\ref{EQ: Thm-Order continuous dual of ind sys in NRIP is proj of duals in NRiesz diagram}) it follows that $S(\varphi)=(\varphi_\alpha)$. Therefore $S$ is surjective. \end{proof}
\begin{remark}\label{Remark: Direct limit with trivial dual} Let $\cal{D}\ensuremath{\mathop{:}\!\!=} \left( (\vlat{E}_\alpha)_{\alpha\in I}, (e_{\alpha, \beta})_{\alpha\preccurlyeq\beta}\right)$ be a direct system in ${\bf IVL}$, and let $\cal{S} \ensuremath{\mathop{:}\!\!=} \left( \vlat{E},(e_\alpha)_{\alpha\in I}\right)$ be the direct limit of $\cal{D}$ in ${\bf IVL}$. In general, it does not follow from $\preann{\orderdual{(\vlat{E}_\alpha)}}=\{0\}$ for all $\alpha\in I$ that $\preann{\orderdual{\vlat{E}}}=\{0\}$, even if all the $\vlat{E}_\alpha$ are non-trivial and the $e_\alpha$ injective. Indeed, it is well known that $\vlat{L}^0[0,1]$, the space of Lebesgue measurable functions on the unit interval $[0,1]$, has trivial order dual, see for instance \cite[Example 85.1]{Zaanen1983RSII}. However, by Example \ref{Exm: Inductive limit of principle ideals}, $\vlat{L}^0[0,1]$ can be expressed as the direct limit of its principal ideals, each of which has a separating order dual. \end{remark}
In view of the above remark, the following proposition is of interest.
\begin{prop}\label{Prop: Separating order dual of direct limit} Let $\cal{D}\ensuremath{\mathop{:}\!\!=} \left( (\vlat{E}_\alpha)_{\alpha\in I}, (e_{\alpha, \beta})_{\alpha\preccurlyeq\beta}\right)$ be a direct system in ${\bf IVL}$, and let $\cal{S} \ensuremath{\mathop{:}\!\!=} \left( \vlat{E},(e_\alpha)_{\alpha\in I}\right)$ be the direct limit of $\cal{D}$ in ${\bf IVL}$. Assume that for every $\alpha\in I$, $e_\alpha$ is injective and $e_\alpha[\vlat{E}_\alpha]$ is a projection band in $\vlat{E}$. The following statements are true. \begin{enumerate}
\item[(i)] If $\preann{\orderdual{(\vlat{E}_\alpha)}}=\{0\}$ for every $\alpha\in I$ then $\preann{\orderdual{\vlat{E}}}=\{0\}$.
\item[(ii)] If $\preann{\ordercontn{(\vlat{E}_\alpha)}}=\{0\}$ for every $\alpha\in I$ then $\preann{\ordercontn{\vlat{E}}}=\{0\}$. \end{enumerate} \end{prop}
\begin{proof} The proofs of (i) and (ii) are identical, except that for (ii) we note that for all $\alpha\in I$, $e_\alpha$ and $e_\alpha^{-1}$ are order continuous by Proposition \ref{Prop: Interval Preserving vs Lattice Homomorphism} (i). We therefore omit the proof of (ii).
Assume that $\preann{\orderdual{(\vlat{E}_\alpha)}}=\{0\}$ for every $\alpha\in I$. Let $u\in\vlat{E}$ be non-zero. Then there exist $\alpha\in I$ and a non-zero $u_\alpha\in \vlat{E}_\alpha$ so that $e_\alpha(u_\alpha)=u$, see Remark \ref{Remark: Inductive limit notation}. By assumption there exists $\varphi_\alpha \in\orderdual{\vlat{E}_\alpha}$ so that $\varphi_\alpha(u_\alpha)\neq 0$. Denote by $P_\alpha:\vlat{E}\to e_\alpha[\vlat{E}_\alpha]$ the projection onto $e_\alpha[\vlat{E}_\alpha]$. We note that $e_\alpha$ is an isomorphism onto $e_\alpha[\vlat{E}_\alpha]$. Let $\varphi \ensuremath{\mathop{:}\!\!=} (e_\alpha^{-1} \circ P_\alpha)^\sim(\varphi_\alpha)$. Then $\varphi\in\orderdual{\vlat{E}}$ and, because $u \in e_\alpha[\vlat{E}_\alpha]$, $\varphi(u) = \varphi_\alpha(e_\alpha^{-1}(P_\alpha(u)))=\varphi_\alpha(u_\alpha)\neq 0$. Hence $\preann{\orderdual{\vlat{E}}}=\{0\}$. \end{proof}
\subsection{Duals of inverse limits}\label{Subsection: Duals of projective limits}
We now turn to duals of inverse limits. For inverse systems over $\mathbb{N}$, we prove results analogous to those of Theorems \ref{Thm: Dual of ind sys in VLIC is proj of duals in NVL} and \ref{Thm: Order continuous dual of ind sys in NRIP is proj of duals in NRiesz}. We identify the main obstacle to more general results for inverse systems over arbitrary index sets: Positive (order continuous) functionals defined on a proper sublattice of a vector lattice $\vlat{E}$ do not necessarily extend to $\vlat{E}$.
\begin{defn}\label{Defn: Dual system of projective system} Let $\cal{I}\ensuremath{\mathop{:}\!\!=} \left( (\vlat{E}_\alpha)_{\alpha\in I}, (p_{\beta, \alpha})_{\beta\succcurlyeq \alpha}\right)$ be an inverse system in ${\bf IVL}$. The \emph{dual system} of $\cal{I}$ is the pair $\cal{I}^\sim \ensuremath{\mathop{:}\!\!=} \left( (\vlat{E}^\sim_\alpha)_{\alpha\in I}, (p^\sim_{\beta , \alpha})_{\beta\succcurlyeq \alpha}\right)$.
If $\cal{I}$ is an inverse system in ${\bf NVL}$, define the \emph{order continuous dual system} of $\cal{I}$ as the pair $\ordercontn{\cal{I}} \ensuremath{\mathop{:}\!\!=} \left( (\ordercontn{(\vlat{E}_\alpha)})_{\alpha\in I}, (p^\sim_{\beta , \alpha})_{\beta\succcurlyeq \alpha}\right)$ with $p^\sim_{\beta , \alpha}: \ordercontn{(\vlat{E}_\alpha)} \to \ordercontn{(\vlat{E}_\beta)}$. \end{defn}
The following preliminary results, analogous to Propositions \ref{Prop: Dual system of inductive system is projective system} and \ref{Prop: Dual compactible systems inductive limit}, are proven in the same way as the corresponding results for direct limits. As such, we omit the proofs.
\begin{prop}\label{Prop: Dual system of projective system is inductive system} Let $\cal{I}\ensuremath{\mathop{:}\!\!=} \left( (\vlat{E}_\alpha)_{\alpha\in I}, (p_{\beta, \alpha})_{\beta\succcurlyeq \alpha}\right)$ be an inverse system in ${\bf VL}$. The following statements are true. \begin{enumerate}
\item[(i)] If $\cal{I}$ is an inverse system in ${\bf IVL}$ then the dual system $\cal{I}^\sim$ is a direct system in ${\bf NIVL}$.
\item[(ii)] If $\cal{I}$ is an inverse system in ${\bf NIVL}$ then the order continuous dual system $\ordercontn{\cal{I}}$ is a direct system in ${\bf NIVL}$. \end{enumerate} \end{prop}
\begin{prop}\label{Prop: Dual compactible systems projective limit} Let $\cal{I}\ensuremath{\mathop{:}\!\!=} \left( (\vlat{E}_\alpha)_{\alpha\in I}, (p_{\beta, \alpha})_{\beta\succcurlyeq \alpha}\right)$ be an inverse system in ${\bf IVL}$ and $\cal{S}\ensuremath{\mathop{:}\!\!=} \left( \vlat{E}, (p_\alpha)_{\alpha\in I} \right)$ a compatible system of $\cal{I}$ in ${\bf IVL}$. The following statements are true. \begin{enumerate}
\item[(i)] $\cal{S}^\sim \ensuremath{\mathop{:}\!\!=} \left( \vlat{E}^\sim, (p^\sim_\alpha)_{\alpha\in I} \right)$ is a compatible system for the direct system $\cal{I}^\sim$ in ${\bf NIVL}$.
\item[(ii)] If $\cal{I}$ is an inverse system in ${\bf NIVL}$ then $\ordercontn{\cal{S}} \ensuremath{\mathop{:}\!\!=} \left( \ordercontn{\vlat{E}}, (p^\sim_\alpha)_{\alpha\in I} \right)$ is a compatible system for the direct system $\ordercontn{\cal{I}}$ in ${\bf NIVL}$. \end{enumerate} \end{prop}
\begin{lem}\label{Lem: Countable projective system with surjective projections} Let $\cal{I}\ensuremath{\mathop{:}\!\!=} \left( (\vlat{E}_n)_{n\in \mathbb{N}}, (p_{m, n})_{m\geq n}\right)$ be an inverse system in ${\bf IVL}$ and let $\cal{S} \ensuremath{\mathop{:}\!\!=} \left( \vlat{E}, (p_n)_{n\in \mathbb{N}} \right)$ be the inverse limit of $\cal{I}$ in ${\bf VL}$. Assume that $p_{m , n}$ is a surjection for all $m \geq n$ in $\mathbb{N}$. Then $p_n$ is surjective and interval preserving for every $n\in\mathbb{N}$. In particular, $\cal{S}$ is a compatible system of $\cal{I}$ in ${\bf IVL}$. \end{lem}
\begin{proof} Fix $n_0\in \mathbb{N}$. Consider any $u_{n_0}\in \vlat{E}_{n_0}$. For $n<n_0$ let $u_n = p_{n_0 , n}(u_{n_0})$. Because $p_{n_0+1 , n_0}$ is a surjection, there exists $u_{n_0+1}\in \vlat{E}_{n_0+1}$ so that $p_{n_0+1 , n_0}(u_{n_0+1}) = u_{n_0}$. Inductively, for each $n>n_0$ there exists $u_n\in\vlat{E}_n$ so that $p_{n , n-1}(u_n) = u_{n-1}$.
We show that $(u_n)\in\vlat{E}$. Let $n<m$ in $\mathbb{N}$. By the definition of an inverse system it follows that $p_{m , n} = p_{n+1 , n} \circ p_{n+2 , n+1}\circ \cdots \circ p_{m-1 , m-2}\circ p_{m,m-1}$. It thus follows that $p_{m , n}(u_m) = u_n$ so that $(u_n)\in \vlat{E}$. We have $p_{n_0}((u_n))=u_{n_0}$ so that $p_{n_0}$ is a surjection. It follows from Proposition \ref{Prop: Interval Preserving vs Lattice Homomorphism} that $p_{n_0}$ is interval preserving. Since $\cal{S}$ is a compatible system of $\cal{I}$ in ${\bf VL}$ and the $p_n$ are interval preserving, we conclude that $\cal{S}$ is a compatible system of $\cal{I}$ in ${\bf IVL}$. \end{proof}
\begin{thm}\label{Thm: Dual of proj sys in VLIC is ind of duals in NVLIC} Let $\cal{I}\ensuremath{\mathop{:}\!\!=} \left( (\vlat{E}_n)_{n\in \mathbb{N}}, (p_{m, n})_{m\geq n}\right)$ be an inverse system in ${\bf IVL}$, and let $\cal{S} \ensuremath{\mathop{:}\!\!=} \left( \vlat{E}, (p_n)_{n\in \mathbb{N}} \right)$ be the inverse limit of $\cal{I}$ in ${\bf VL}$. Assume that $p_{m , n}$ is a surjection for all $m \geq n$ in $\mathbb{N}$. Then the following statements are true. \begin{enumerate}
\item[(i)] $\ind{\cal{I}^\sim}\ensuremath{\mathop{:}\!\!=} \left(\vlat{F},(e_n)_{n\in \mathbb{N}}\right)$ exists in ${\bf NIVL}$.
\item[(ii)] $\left(\proj{\cal{I}}\right)^\sim \cong \ind{\cal{I}^\sim}$ in ${\bf NIVL}$; that is, there exists a lattice isomorphism $T: \vlat{F} \to \vlat{E}^\sim$ such that the following diagram commutes for all $n\in \mathbb{N}$. \end{enumerate} \[ \begin{tikzcd}[cramped] \vlat{F} \arrow[rr, "T"] & & \vlat{E}^\sim\\ & \vlat{E}_n^\sim \arrow[lu, "e_{n}"] \arrow[ru, "p_n^\sim"'] \end{tikzcd} \] \end{thm}
\begin{proof} By Proposition \ref{Prop: Dual system of projective system is inductive system}, $\cal{I}^\sim$ is a direct system in ${\bf NIVL}$. Because the $p_{m , n}$ are surjections their adjoints are injective. Thus by Theorem \ref{Thm: Existence of Inductive Limits in NVLI}, $\ind{\cal{I}^\sim}$ exists in ${\bf NIVL}$.
We proceed to prove (ii). Because the $p_{m , n}^\sim: \left(\vlat{E}_n\right)^\sim \to \left(\vlat{E}_m\right)^\sim$ are injective, so are the $e_n:\left(\vlat{E}_n\right)^\sim \to \vlat{F}$, see Remark \ref{Remark: Inductive limit notation}. By Lemma \ref{Lem: Countable projective system with surjective projections}, each $p_n:\vlat{E}\to\vlat{E}_n$ is surjective and interval preserving, and $\cal{S}$ is a compatible system of $\cal{I}$ in ${\bf IVL}$. Therefore $p_n^\sim:\left(\vlat{E}_n\right)^\sim \to\vlat{E}^\sim$ is an injection for every $n$ in $\mathbb{N}$.
By Proposition \ref{Prop: Dual compactible systems projective limit}, $\cal{S}^\sim = (\vlat{E}^\sim , (p_n^\sim)_{n\in\mathbb{N}})$ is a compatible system of $\cal{I}^\sim$ in ${\bf NIVL}$. Therefore there exists a unique interval preserving normal lattice homomorphism $T : \vlat{F}\to \vlat{E}^\sim$ so that the diagram \[ \begin{tikzcd}[cramped] \vlat{F} \arrow[rr, "T"] & & \vlat{E}^\sim\\ & \vlat{E}_n^\sim \arrow[lu, "e_{n}"] \arrow[ru, "p_n^\sim"'] \end{tikzcd} \] commutes for all $n\in\mathbb{N}$. We show that $T$ is a lattice isomorphism.
Our first goal is to establish that $T$ is injective. Consider $\varphi\in \vlat{F}$ so that $T(\varphi)=0$. There exist an $n\in\mathbb{N}$ and a unique $\varphi_{n}\in \orderdual{\vlat{E}}_n$ so that $e_{n}(\varphi_n) = \varphi$. Then $p_n^\sim (\varphi_n) = T (e_n(\varphi_n)) = T(\varphi)=0$. But $p_n^\sim$ is injective so that $\varphi_n=0$, hence $\varphi=e_n(\varphi_n)=0$.
It remains to show that $T$ maps $\vlat{F}$ onto $\vlat{E}^\sim$. This follows from \[
\vlat{E}^\sim = \displaystyle\bigcup_{n\in\mathbb{N}} p_n^\sim\left[ \left(\vlat{E}_n\right)^\sim \right], \] a fact which we now establish. Suppose that $\vlat{E}^\sim \neq \displaystyle\bigcup_{n\in\mathbb{N}} p_n^\sim[\vlat{E}^\sim_n]$. Because $p_n^\sim$ is an interval preserving lattice homomorphism for every $n\in\mathbb{N}$, each $p_n^\sim[\vlat{E}_n^\sim]$ and hence $\displaystyle\bigcup_{n\in\mathbb{N}} p_n^\sim[\vlat{E}^\sim_n]$ is a solid subset of $\vlat{E}^\sim$. Therefore there exists $0\leq \psi\in \vlat{E}^\sim \setminus \displaystyle\bigcup_{n\in\mathbb{N}} p_n^\sim[\vlat{E}^\sim_n]$. By Proposition \ref{Prop: Image of adjoint of lattice homomorphism} (i), $p_n^\sim[\vlat{E}^\sim_n]=\ann{\ker(p_n)}$ for every $n\in \mathbb{N}$ so that $\psi\notin \ann{\ker(p_n)}$ for every $n\in\mathbb{N}$. Hence, for every $n\in\mathbb{N}$, there exists $v^{(n)}\in\ker(p_n)$ with $\psi(v^{(n)})\neq 0$; because $\ker(p_n)$ is an ideal and $\psi\geq 0$ we have $\psi(|v^{(n)}|)\geq |\psi(v^{(n)})|>0$, so after rescaling there exists $0\leq u^{(n)}\in\ker(p_n)$ so that $\psi(u^{(n)})=1$. We claim that there exists $w\in\vlat{E}$ so that $w \geq u^{(1)}+\cdots + u^{(n)}$ for all $n\in\mathbb{N}$. This claim leads to $\psi(w) \geq \psi(u^{(1)}+\cdots + u^{(n)}) = n$ for every $n\in\mathbb{N}$, which is impossible, so that $\vlat{E}^\sim = \displaystyle\bigcup_{n\in\mathbb{N}} p_n^\sim\left[ \left(\vlat{E}_n\right)^\sim \right]$.
Write $u^{(n)} = (u_m^{(n)})\in\vlat{E} \subseteq \displaystyle\prod\vlat{E}_m$. Fix $m\in \mathbb{N}$. If $n>m$ then $u_m^{(n)} = p_{n , m}(p_n(u^{(n)})) = 0$ because $u^{(n)}\in\ker(p_n)$. Let $w_m \ensuremath{\mathop{:}\!\!=} u_m^{(1)} + \cdots + u_m^{(m)}$ and $w\ensuremath{\mathop{:}\!\!=} (w_m)$. Then $w \geq u^{(1)}+ \cdots + u^{(n)}$ for every $n\in\mathbb{N}$ because $u_{m}^{(n)}\geq 0$ for all $m,n\in\mathbb{N}$. To see that $w\in \vlat{E}$ consider $m_1\geq m_0$ in $\mathbb{N}$. Then \[ p_{m_1 , m_0}(w_{m_1}) = p_{m_1 , m_0}(u_{m_1}^{(1)}) + \cdots + p_{m_1 , m_0}(u_{m_1}^{(m_1)}). \] But $u^{(n)}=(u_{m}^{(n)})\in\vlat{E}$ for all $n\in\mathbb{N}$, so \[ p_{m_1 , m_0}(w_{m_1}) = u_{m_0}^{(1)} + \cdots + u_{m_0}^{(m_1)}. \] Finally, because $u_m^{(n)}=0$ for all $n>m$ in $\mathbb{N}$ we have \[ p_{m_1 , m_0}(w_{m_1}) = u_{m_0}^{(1)} + \cdots + u_{m_0}^{(m_0)} = w_{m_0}. \] Hence $w\in \vlat{E}$, which verifies our claim. This completes the proof. \end{proof}
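As a concrete illustration of Theorem \ref{Thm: Dual of proj sys in VLIC is ind of duals in NVLIC}, with the standard identifications, let $\vlat{E}_n = \mathbb{R}^n$ and let $p_{m , n}:\mathbb{R}^m\to\mathbb{R}^n$, $m\geq n$, be the (surjective) coordinate projections. Then $\proj{\cal{I}}\cong\mathbb{R}^{\mathbb{N}}$, each $\left(\mathbb{R}^n\right)^\sim\cong\mathbb{R}^n$, and $\ind{\cal{I}^\sim}$ is the space of finitely supported sequences. The theorem thus yields
\[
\orderdual{\left(\mathbb{R}^{\mathbb{N}}\right)} \cong \left\{ \varphi\in\mathbb{R}^{\mathbb{N}} ~:~ \varphi_k = 0 \text{ for all but finitely many } k \right\};
\]
that is, every order bounded linear functional on $\mathbb{R}^{\mathbb{N}}$ depends on finitely many coordinates only. Indeed, a positive functional not of this form would be unbounded on an order interval $[0,w]$, with $w$ constructed exactly as in the proof above.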
\begin{thm}\label{Thm: Order continuous dual of proj sys in VLIC is ind of duals in NVLIC} Let $\cal{I}\ensuremath{\mathop{:}\!\!=} \left( (\vlat{E}_n)_{n\in \mathbb{N}}, (p_{m, n})_{m\geq n}\right)$ be an inverse system in ${\bf NIVL}$, and let $\cal{S} \ensuremath{\mathop{:}\!\!=} \left( \vlat{E}, (p_n)_{n\in \mathbb{N}} \right)$ be the inverse limit of $\cal{I}$ in ${\bf VL}$. Assume that $\vlat{E}_n$ is Archimedean for each $n\in \mathbb{N}$ and that $p_{m , n}$ is a surjection for all $m \geq n$ in $\mathbb{N}$. The following statements are true. \begin{enumerate}
\item[(i)] $\ind{\ordercontn{\cal{I}}}\ensuremath{\mathop{:}\!\!=} \left(\vlat{G},(e_n)_{n\in \mathbb{N}}\right)$ exists in ${\bf NIVL}$.
\item[(ii)] $\ordercontn{\left(\proj{\cal{I}}\right)} \cong \ind{\ordercontn{\cal{I}}}$ in ${\bf NIVL}$; that is, there exists a lattice isomorphism $S: \vlat{G} \to \ordercontn{\vlat{E}}$ such that the following diagram commutes for all $n\in \mathbb{N}$. \end{enumerate} \[ \begin{tikzcd}[cramped] \vlat{G} \arrow[rr, "S"] & & \ordercontn{\vlat{E}}\\ & \ordercontn{(\vlat{E}_n)} \arrow[lu, "e_{n}"] \arrow[ru, "p_n^\sim"'] \end{tikzcd} \] \end{thm}
\begin{proof} The existence of $\ind{\ordercontn{\cal{I}}}$ in ${\bf NIVL}$ follows by the same reasoning as given in Theorem \ref{Thm: Dual of proj sys in VLIC is ind of duals in NVLIC}.
For (ii), as in the proof of Theorem \ref{Thm: Dual of proj sys in VLIC is ind of duals in NVLIC}, we see that $e_n:\ordercontn{(\vlat{E}_n)} \to \vlat{G}$ and $p_n^\sim:\ordercontn{(\vlat{E}_n)}\to\ordercontn{\vlat{E}}$ are injective interval preserving maps for all $n\in \mathbb{N}$. In addition, $\cal{S}$ is a compatible system for $\cal{I}$ in ${\bf IVL}$.
By Proposition \ref{Prop: Dual compactible systems projective limit}, $\ordercontn{\cal{S}} = (\ordercontn{\vlat{E}} , (p_n^\sim)_{n\in\mathbb{N}})$ is a compatible system of $\ordercontn{\cal{I}}$ in ${\bf NIVL}$. Therefore there exists a unique interval preserving normal lattice homomorphism $S : \vlat{G}\to \ordercontn{\vlat{E}}$ so that the diagram \[ \begin{tikzcd}[cramped] \vlat{G} \arrow[rr, "S"] & & \ordercontn{\vlat{E}}\\ & \ordercontn{(\vlat{E}_n)} \arrow[lu, "e_{n}"] \arrow[ru, "p_n^\sim"'] \end{tikzcd} \] commutes for all $n\in\mathbb{N}$. Exactly the same argument as used in the proof of Theorem \ref{Thm: Dual of proj sys in VLIC is ind of duals in NVLIC} shows that $S$ is a lattice isomorphism, this time making use of Proposition \ref{Prop: Image of adjoint of lattice homomorphism} (ii). \end{proof}
Theorems \ref{Thm: Dual of proj sys in VLIC is ind of duals in NVLIC} and \ref{Thm: Order continuous dual of proj sys in VLIC is ind of duals in NVLIC} cannot be generalised to systems over an arbitrary directed set $I$. Indeed, the assumption that the inverse system $\cal{I}$ is indexed by the natural numbers is used in essential ways to show that the mappings $T$ and $S$ in Theorems \ref{Thm: Dual of proj sys in VLIC is ind of duals in NVLIC} and \ref{Thm: Order continuous dual of proj sys in VLIC is ind of duals in NVLIC}, respectively, are both injective and surjective. The injectivity of $S$ and $T$ follows from the surjectivity of the maps $p_n$, which in turn follows from Lemma \ref{Lem: Countable projective system with surjective projections}, whose proof uses the total ordering of $\mathbb{N}$ explicitly. We are not aware of any conditions on a general inverse system $\cal{I}$ in ${\bf VL}$, indexed over an arbitrary directed set, which imply that the projections from $\proj{\cal{I}}$ into the component spaces are surjective. Furthermore, the method of proof for the surjectivity of $S$ and $T$ cannot be generalised to systems over arbitrary directed sets. As we show next, this issue is related to the extension of positive linear functionals.
\begin{thm}\label{Thm: Characterisation dual of projective limit is inductive limit of duals} Let $\cal{I}\ensuremath{\mathop{:}\!\!=} \left( (\vlat{E}_\alpha)_{\alpha\in I}, (p_{\beta, \alpha})_{\beta \succcurlyeq \alpha}\right)$ be an inverse system in ${\bf IVL}$ and $\cal{S} \ensuremath{\mathop{:}\!\!=} \left( \vlat{E}, (p_\alpha)_{\alpha\in I} \right)$ its inverse limit in ${\bf VL}$. Assume that $p_{\beta , \alpha}$ and $p_\alpha$ are surjections for all $\beta\succcurlyeq \alpha$ in $I$. Then the following statements are true. \begin{enumerate}
\item[(i)] $\ind{\cal{I}^\sim}\ensuremath{\mathop{:}\!\!=} \left(\vlat{F},(e_\alpha)_{\alpha\in I}\right)$ exists in ${\bf NIVL}$.
\item[(ii)] There exists an injective interval preserving normal lattice homomorphism $T:\vlat{F}\to \vlat{E}^\sim$ so that the diagram
\[
\begin{tikzcd}[cramped]
\vlat{F} \arrow[rr, "T"] & & \vlat{E}^\sim\\
& \vlat{E}_\alpha^\sim \arrow[lu, "e_{\alpha}"] \arrow[ru, "p_\alpha^\sim"']
\end{tikzcd}
\]
commutes for every $\alpha \in I$.
\item[(iii)] If $T$ is a bijection, hence a lattice isomorphism, then every order bounded linear functional on $\vlat{E}$ has a positive linear extension to $\displaystyle\prod \vlat{E}_\alpha$. The converse is true if $I$ has non-measurable cardinal. \end{enumerate} \end{thm}
\begin{proof} That (i) and (ii) are true follows as in the proof of Theorem \ref{Thm: Dual of proj sys in VLIC is ind of duals in NVLIC}. We verify (iii).
Let $\iota:\vlat{E}\to\displaystyle\prod \vlat{E}_\alpha$ be the inclusion map. The diagram \[ \begin{tikzcd}[cramped] \vlat{E} \arrow[rd, "p_\alpha"'] \arrow[rr, "\iota"] & & \displaystyle\prod\vlat{E}_\alpha \arrow[dl, "\pi_\alpha"]\\ & \vlat{E}_\alpha \end{tikzcd} \] commutes for every $\alpha\in I$, and therefore the diagram \[ \begin{tikzcd}[cramped] \left(\displaystyle\prod\vlat{E}_\alpha\right)^\sim \arrow[rr, "\iota^\sim"] & & \vlat{E}^\sim \\ & \vlat{E}_\alpha^\sim \arrow[lu, "\pi_{\alpha}^\sim"] \arrow[ru, "p_\alpha^\sim"'] \end{tikzcd} \] also commutes for each $\alpha\in I$. Hence, for all $\alpha\in I$, the diagram \[ \begin{tikzcd}[cramped] \left(\displaystyle\prod\vlat{E}_\alpha\right)^\sim \arrow[rr, "\iota^\sim"] & & \vlat{E}^\sim &\\ & \vlat{E}^\sim_\alpha \arrow[ul, "\pi_\alpha^\sim"] \arrow[ur, "p_\alpha^\sim"'] \arrow[rr, "e_\alpha"'] & & \vlat{F} \arrow[ul, "T"'] \end{tikzcd} \] commutes.
Assume that $T$ is a lattice isomorphism, and therefore a surjection. Let $\varphi\in \vlat{E}^\sim$. There exists a $\psi\in \vlat{F}$ so that $T(\psi)=\varphi$. By Remark \ref{Remark: Inductive limit notation}, there exist $\alpha\in I$ and $\psi_\alpha\in\vlat{E}^\sim_\alpha$ so that $e_\alpha(\psi_\alpha)=\psi$. Then \[ \iota^\sim (\pi_\alpha^\sim(\psi_\alpha)) = p_\alpha^\sim (\psi_\alpha) = T(e_\alpha(\psi_\alpha)) = \varphi. \] Therefore $\iota^\sim$ is a surjection; that is, every $\varphi\in \vlat{E}^\sim$ has an order bounded linear extension to $\displaystyle\prod \vlat{E}_\alpha$.
Conversely, assume that $I$ has non-measurable cardinal and that every order bounded linear functional on $\vlat{E}$ extends to an order bounded linear functional on $\displaystyle\prod \vlat{E}_\alpha$. Then $\iota^\sim$, which acts as restriction of functionals on $\displaystyle\prod \vlat{E}_\alpha$ to $\vlat{E}$, is a surjection. Fix $\varphi\in \vlat{E}^\sim$. There exists $\psi \in \left(\displaystyle\prod\vlat{E}_\alpha\right)^\sim$ so that $\varphi = \iota^\sim (\psi)$. By Theorem \ref{Thm: Properties of product of vector lattices.} (iv) there exist $\alpha_1,\ldots,\alpha_n\in I$ and $\psi_1\in \vlat{E}_{\alpha_1}^\sim,\ldots,\psi_n\in \vlat{E}_{\alpha_n}^\sim$ so that $\psi = \pi_{\alpha_1}^\sim (\psi_1) + \cdots + \pi_{\alpha_n}^\sim (\psi_n)$. Then \[ \varphi =\iota^\sim \left( \sum_{i=1}^n \pi_{\alpha_i}^\sim(\psi_i)\right) = \sum_{i=1}^n \iota^\sim(\pi_{\alpha_i}^\sim(\psi_i)) = \sum_{i=1}^n p_{\alpha_i}^\sim(\psi_i) = \sum_{i=1}^n T(e_{\alpha_i}(\psi_i)) = T\left( \sum_{i=1}^n e_{\alpha_i}(\psi_i)\right). \] Therefore $T$ is surjective, and hence a lattice isomorphism. \end{proof}
A similar result holds for the order continuous dual of an inverse limit. We omit the proof of the next theorem, which is virtually identical to that of Theorem \ref{Thm: Characterisation dual of projective limit is inductive limit of duals}. Note, however, that unlike in Theorem \ref{Thm: Characterisation dual of projective limit is inductive limit of duals}, we make no assumption on the cardinality of $I$.
\begin{thm}\label{Thm: Characterisation order continuous dual of projective limit is inductive limit of duals} Let $\cal{I}\ensuremath{\mathop{:}\!\!=} \left( (\vlat{E}_\alpha)_{\alpha\in I}, (p_{\beta, \alpha})_{\beta \succcurlyeq \alpha}\right)$ be an inverse system in ${\bf NIVL}$ and $\cal{S} \ensuremath{\mathop{:}\!\!=} \left( \vlat{E}, (p_\alpha)_{\alpha\in I} \right)$ its inverse limit in ${\bf VL}$. Assume that $p_{\beta , \alpha}$ and $p_\alpha$ are surjections for all $\beta \succcurlyeq \alpha$ in $I$. Then the following statements are true. \begin{enumerate}
\item[(i)] $\ind{\left(\ordercontn{\cal{I}}\right)}\ensuremath{\mathop{:}\!\!=} \left( \vlat{G}, (e_\alpha)_{\alpha \in I} \right)$ exists in ${\bf NIVL}$.
\item[(ii)] There exists an injective and interval preserving normal lattice homomorphism $S:\vlat{G}\to \ordercontn{\vlat{E}}$ so that the diagram
\[
\begin{tikzcd}[cramped]
\vlat{G} \arrow[rr, "S"] & & \ordercontn{\vlat{E}}\\
& \ordercontn{(\vlat{E}_\alpha)} \arrow[lu, "e_{\alpha}"] \arrow[ru, "p_\alpha^\sim"']
\end{tikzcd}
\]
commutes for every $\alpha \in I$.
\item[(iii)] $S$ is a lattice isomorphism if and only if every order continuous linear functional on $\vlat{E}$ has an order continuous linear extension to $\displaystyle\prod \vlat{E}_\alpha$. \end{enumerate} \end{thm}
The following two results are immediate consequences of Theorems \ref{Thm: Characterisation dual of projective limit is inductive limit of duals} and \ref{Thm: Characterisation order continuous dual of projective limit is inductive limit of duals}, respectively.
\begin{cor}\label{Cor: Dual of projectime limit is inductive limit} Let $\cal{I}\ensuremath{\mathop{:}\!\!=} \left( (\vlat{E}_\alpha)_{\alpha\in I}, (p_{\beta, \alpha})_{\alpha\preccurlyeq \beta}\right)$ be an inverse system in ${\bf IVL}$, $\cal{S} \ensuremath{\mathop{:}\!\!=} \left( \vlat{E}, (p_\alpha)_{\alpha\in I} \right)$ its inverse limit in ${\bf VL}$ and $(\vlat{F},(e_\alpha)_{\alpha\in I})$ the direct limit of $\cal{I}^\sim$ in ${\bf NIVL}$. Assume that $p_{\beta , \alpha}$ and $p_\alpha$ are surjections for all $\beta\succcurlyeq\alpha$ in $I$. If $\vlat{E}$ is majorising in $\displaystyle\prod \vlat{E}_\alpha$ then $\left(\proj{\cal{I}}\right)^\sim \cong \ind{\cal{I}^\sim}$ in ${\bf NIVL}$; that is, there exists a lattice isomorphism $T: \vlat{F} \to \vlat{E}^\sim$ such that the diagram \[ \begin{tikzcd}[cramped] \vlat{F} \arrow[rr, "T"] & & \vlat{E}^\sim\\ & \vlat{E}_\alpha^\sim \arrow[lu, "e_{\alpha}"] \arrow[ru, "p_\alpha^\sim"'] \end{tikzcd} \] commutes for all $\alpha\in I$. \end{cor}
\begin{proof} This follows immediately from \cite[Theorem 1.32]{AliprantisBurkinshaw2006} and Theorem \ref{Thm: Characterisation dual of projective limit is inductive limit of duals}. \end{proof}
\begin{cor}\label{Cor: Order continuous dual of projective limit is inductive limit} Let $\cal{I}\ensuremath{\mathop{:}\!\!=} \left( (\vlat{E}_\alpha)_{\alpha\in I}, (p_{\beta, \alpha})_{\alpha\preccurlyeq \beta}\right)$ be an inverse system in ${\bf NIVL}$, $\cal{S} \ensuremath{\mathop{:}\!\!=} \left( \vlat{E}, (p_\alpha)_{\alpha\in I} \right)$ its inverse limit in ${\bf VL}$ and $(\vlat{F},(e_\alpha)_{\alpha\in I})$ the direct limit of $\ordercontn{\cal{I}}$ in ${\bf NIVL}$. Assume that $p_{\beta , \alpha}$ and $p_\alpha$ are surjections for all $\beta\succcurlyeq\alpha$ in $I$. If $\vlat{E}$ is majorising and order dense in $\displaystyle\prod \vlat{E}_\alpha$ then $\ordercontn{\left(\proj{\cal{I}}\right)} \cong \ind{\ordercontn{\cal{I}}}$ in ${\bf NIVL}$; that is, there exists a lattice isomorphism $S: \vlat{F} \to \ordercontn{\vlat{E}}$ such that the diagram \[ \begin{tikzcd}[cramped] \vlat{F} \arrow[rr, "S"] & & \ordercontn{\vlat{E}}\\ & \ordercontn{\left(\vlat{E}_\alpha\right)} \arrow[lu, "e_{\alpha}"] \arrow[ru, "p_\alpha^\sim"'] \end{tikzcd} \] commutes for all $\alpha\in I$. \end{cor}
\begin{proof} This follows immediately from \cite[Theorem 1.65]{AliprantisBurkinshaw2006} and Theorem \ref{Thm: Characterisation order continuous dual of projective limit is inductive limit of duals}. \end{proof}
In contradistinction to direct limits, the inverse limit construction always preserves the property of having a separating order (continuous) dual.
\begin{prop}\label{Prop: Separating order dual of inverse limit} Let $\cal{I}\ensuremath{\mathop{:}\!\!=} \left( (\vlat{E}_\alpha)_{\alpha\in I}, (p_{\beta, \alpha})_{\beta \succcurlyeq \alpha}\right)$ be an inverse system in ${\bf VL}$ and $\cal{S} \ensuremath{\mathop{:}\!\!=} \left( \vlat{E}, (p_\alpha)_{\alpha\in I} \right)$ its inverse limit in ${\bf VL}$. Then the following statements are true. \begin{enumerate}
\item[(i)] If $\preann{\orderdual{(\vlat{E}_\alpha)}}=\{0\}$ for every $\alpha \in I$ then $\preann{\orderdual{\vlat{E}}}=\{0\}$.
\item[(ii)] If $\preann{\ordercontn{(\vlat{E}_\alpha)}}=\{0\}$ and $p_\alpha$ is order continuous for every $\alpha \in I$ then $\preann{\ordercontn{\vlat{E}}}=\{0\}$. \end{enumerate} \end{prop}
\begin{proof} The proofs of (i) and (ii) are identical. Hence we omit the proof of (ii).
Assume that $\preann{\orderdual{(\vlat{E}_\alpha)}}=\{0\}$ for every $\alpha \in I$. Let $u\in\vlat{E}$ be non-zero. Then there exists $\alpha \in I$ so that $p_\alpha(u)\neq 0$. Since $\preann{\orderdual{(\vlat{E}_\alpha)}}=\{0\}$, there exists $\varphi\in \orderdual{(\vlat{E}_\alpha)}$ so that $\varphi(p_\alpha (u))\neq 0$; that is, $p_\alpha^\sim(\varphi)(u)\neq 0$. Hence $\preann{\orderdual{\vlat{E}}}=\{0\}$. \end{proof}
\section{Applications}\label{Section: Applications}
In this section we apply the duality results for direct and inverse limits obtained in Section \ref{Section: Dual spaces}. In particular, we consider order (continuous) duals of some of the function spaces which are expressed as direct and inverse limits in Sections \ref{Subsection: Examples of inductive limits} and \ref{Subsection: Examples of projective limits}, respectively. This is followed by an investigation of perfect spaces. We show that, under certain conditions, the direct and inverse limits of perfect spaces are perfect. We then specialise these results to the case of ${\mathrm C }(X)$ and obtain a solution to the decomposition problem mentioned in the introduction. Finally, we show that an Archimedean vector lattice has a relatively uniformly complete predual if and only if it can be expressed, in a suitable way, as an inverse limit of spaces of Radon measures on compact Hausdorff spaces. The following two simple propositions are used repeatedly. These results are proved in \cite[pp.~193, 205]{Bourbakie_Theory_of_sets} in the context of direct and inverse systems of sets. The arguments in \cite{Bourbakie_Theory_of_sets} suffice to verify the results in the vector lattice context, so we do not repeat them here.
\begin{prop}\label{Prop: Morphisms between inductive limits} Let $\cal{D}\ensuremath{\mathop{:}\!\!=} \left( (\vlat{E}_\alpha)_{\alpha\in I}, (e_{\alpha , \beta})_{\alpha\preccurlyeq \beta}\right)$ and $\cal{D}'\ensuremath{\mathop{:}\!\!=} \left( (\vlat{E}_\alpha')_{\alpha\in I}, (e_{\alpha , \beta}')_{\alpha\preccurlyeq \beta}\right)$ be direct systems in ${\bf VL}$ with direct limits $\cal{S} \ensuremath{\mathop{:}\!\!=} \left( \vlat{E}, (e_\alpha)_{\alpha\in I} \right)$ and $\cal{S}' \ensuremath{\mathop{:}\!\!=} \left( \vlat{E}', (e_\alpha')_{\alpha\in I} \right)$ in ${\bf VL}$. Assume that for every $\alpha\in I$ there exists a lattice homomorphism ${T_\alpha :\vlat{E}_\alpha \to \vlat{E}_\alpha'}$ so that the diagram \begin{eqnarray} \begin{tikzcd}[cramped] \vlat{E}_\alpha \arrow[rr, "T_\alpha"] \arrow[dd, "e_{\alpha , \beta}"'] & & \vlat{E}_\alpha' \arrow[dd, "e_{\alpha , \beta}' "]\\
& & \\ \vlat{E}_\beta \arrow[rr, "T_\beta"'] & & \vlat{E}_\beta' \end{tikzcd}\label{EQ: Prop-Morphisms between inductive limits 1} \end{eqnarray} commutes for all $\alpha\preccurlyeq \beta$ in $I$. The following statements are true. \begin{enumerate}
\item[(i)] There exists a unique lattice homomorphism $T:\vlat{E}\to\vlat{E}'$ so that the diagram
\begin{eqnarray}
\begin{tikzcd}[cramped]
\vlat{E}_\alpha \arrow[rr, "T_\alpha"] \arrow[dd, "e_{\alpha}"'] & & \vlat{E}_\alpha' \arrow[dd, "e_{\alpha}' "]\\
& & \\
\vlat{E} \arrow[rr, "T"'] & & \vlat{E}'
\end{tikzcd}\label{EQ: Prop-Morphisms between inductive limits 2}
\end{eqnarray}
commutes for every $\alpha\in I$.
\item[(ii)] If $T_\alpha$ is a lattice isomorphism for every $\alpha\in I$, then so is $T$. \end{enumerate} \end{prop}
\begin{comment} \begin{proof}[Proof of (i)] Let $u\in\vlat{E}$. There exists $\alpha\in I$ and $u_\alpha\in\vlat{E}_\alpha$ so that $u=e_\alpha(u_\alpha)$. Define $T(u)\ensuremath{\mathop{:}\!\!=} e_\alpha'(T_\alpha (u_\alpha))$. It follows from Remark \ref{Remark: Inductive limit notation} and the commutative diagram (\ref{EQ: Prop-Morphisms between inductive limits 1}) that $T(u)$ is well defined. Indeed, if also $u=e_{\gamma}(u_\gamma)$ for some $\gamma\neq \alpha$ in $I$ then there exists $\beta\succcurlyeq\alpha,\gamma$ in $I$ so that $e_{\alpha,\beta}(u_\alpha)=e_{\gamma,\beta}(u_\gamma)$. Then \[ e_{\alpha,\beta}'(T_\alpha(u_\alpha)) = T_\beta(e_{\alpha,\beta}(u_\alpha)) = T_\beta(e_{\gamma,\beta}(u_\gamma))=e_{\gamma,\beta}'(T_\gamma(u_\gamma)). \] Therefore $e_\alpha'(T_\alpha(u_\alpha))=e_\gamma'(T_\gamma(u_\gamma))$.
With $T$ so defined, it is clear that the diagram (\ref{EQ: Prop-Morphisms between inductive limits 2}) commutes. It follows from the definitions of the vector space and lattice operations on $\vlat{E}$ and $\vlat{E}'$ that $T$ is a linear lattice homomorphism. It follows from the diagram (\ref{EQ: Prop-Morphisms between inductive limits 1}) and the fact that $\cal{S}'$ is the direct limit, and hence a compatible system, of $\cal{D}'$ in ${\bf VL}$ that $\left(\vlat{E}',(e_\alpha'\circ T_\alpha)_{\alpha\in I}\right)$ is a compatible system of $\cal{D}$ in ${\bf VL}$. The uniqueness of $T$ now follows from the definition of a direct limit. \end{proof}
\begin{proof}[Proof of (ii)] Assume that $T_\alpha$ is a lattice isomorphism for every $\alpha\in I$. Then, following the same argument as in the proof of (i), there exists a unique lattice homomorphism $S:\vlat{E}'\to\vlat{E}$ so that the diagram \[ \begin{tikzcd}[cramped] \vlat{E}_\alpha' \arrow[rr, "T_\alpha^{-1}"] \arrow[dd, "e_{\alpha}'"'] & & \vlat{E}_\alpha \arrow[dd, "e_{\alpha} "]\\ & & \\ \vlat{E}' \arrow[rr, "S"'] & & \vlat{E} \end{tikzcd} \] We have that $S\circ T$ is the identity on $\vlat{E}$ and $T\circ S$ is the identity on $\vlat{E}'$. Therefore $T$ is a lattice isomorphism onto $\vlat{E}'$. \end{proof} \end{comment}
\begin{prop}\label{Prop: Morphisms between projective limits} Let $\cal{I}\ensuremath{\mathop{:}\!\!=} \left( (\vlat{E}_\alpha)_{\alpha\in I}, (p_{\beta, \alpha})_{\beta \succcurlyeq \alpha}\right)$ and $\cal{I}'\ensuremath{\mathop{:}\!\!=} \left( (\vlat{E}_\alpha')_{\alpha\in I}, (p_{\beta, \alpha}')_{\beta \succcurlyeq \alpha}\right)$ be inverse systems in ${\bf VL}$ with inverse limits $\cal{S} \ensuremath{\mathop{:}\!\!=} \left( \vlat{E}, (p_\alpha)_{\alpha\in I} \right)$ and $\cal{S}' \ensuremath{\mathop{:}\!\!=} \left( \vlat{E}', (p_\alpha')_{\alpha\in I} \right)$ in ${\bf VL}$. Assume that for every $\alpha\in I$ there exists a lattice homomorphism $T_\alpha :\vlat{E}_\alpha \to \vlat{E}_\alpha'$ so that the diagram \begin{eqnarray} \begin{tikzcd}[cramped] \vlat{E}_\beta \arrow[rr, "T_\beta"] \arrow[dd, "p_{\beta, \alpha}"'] & & \vlat{E}_\beta' \arrow[dd, "p_{\beta,\alpha}' "]\\
& & \\ \vlat{E}_\alpha \arrow[rr, "T_\alpha"'] & & \vlat{E}_\alpha' \end{tikzcd}\label{EQ: Prop-Morphisms between projective limits 1} \end{eqnarray} commutes for all $\alpha\preccurlyeq \beta$ in $I$. The following statements are true. \begin{enumerate}
\item[(i)] There exists a unique lattice homomorphism $T:\vlat{E}\to\vlat{E}'$ so that the diagram
\begin{eqnarray}
\begin{tikzcd}[cramped]
\vlat{E} \arrow[rr, "T"] \arrow[dd, "p_{\alpha}"'] & & \vlat{E}' \arrow[dd, "p_{\alpha}' "]\\
& & \\
\vlat{E}_\alpha \arrow[rr, "T_\alpha"'] & & \vlat{E}_\alpha'
\end{tikzcd}\label{EQ: Prop-Morphisms between projective limits 2}
\end{eqnarray}
commutes for every $\alpha\in I$.
\item[(ii)] If $T_\alpha$ is a lattice isomorphism for every $\alpha\in I$, then so is $T$. \end{enumerate} \end{prop}
\begin{comment} \begin{proof}[Proof of (i)] It is easy to see that $T:\vlat{E}\ni u\mapsto (T_\alpha(p_\alpha (u)))\in \vlat{E}'$ is a lattice homomorphism making the diagram (\ref{EQ: Prop-Morphisms between projective limits 2}) commute. It follows from the diagram (\ref{EQ: Prop-Morphisms between projective limits 1}) and the fact that $\cal{S}$ is the inverse limit, and hence a compatible system, of $\cal{I}$ that $\left(\vlat{E},(T_\alpha\circ p_\alpha)_{\alpha\in I}\right)$ is a compatible system of $\cal{I}'$ in ${\bf VL}$. The uniqueness of $T$ now follows from the definition of an inverse limit. \end{proof}
\begin{proof}[Proof of (ii)] If each $T_\alpha$ is an isomorphism, then the diagram \[ \begin{tikzcd}[cramped] \vlat{E}_\beta' \arrow[rr, "T_\beta^{-1}"] \arrow[dd, "p_{\beta, \alpha} '"'] & & \vlat{E}_\beta \arrow[dd, "p_{\beta,\alpha} "]\\
& & \\ \vlat{E}_\alpha' \arrow[rr, "T_\alpha^{-1}"'] & & \vlat{E}_\alpha \end{tikzcd} \] commutes for all $\alpha\preccurlyeq \beta$ in $I$. As in the proof of (i), $S:\vlat{E}'\ni u\mapsto (T_\alpha^{-1}(p_\alpha' (u)))\in \vlat{E}$ is a lattice homomorphism making the diagram \[ \begin{tikzcd}[cramped] \vlat{E}' \arrow[rr, "S"] \arrow[dd, "p_{\alpha}' "'] & & \vlat{E} \arrow[dd, "p_{\alpha} "]\\ & & \\ \vlat{E}_\alpha \arrow[rr, "T_\alpha^{-1}"'] & & \vlat{E}_\alpha' \end{tikzcd} \] commute for every $\alpha\in I$. We have that $S\circ T$ is the identity on $\vlat{E}$ and $T\circ S$ is the identity on $\vlat{E}'$. Therefore $T$ is a lattice isomorphism onto $\vlat{E}'$. \end{proof} \end{comment}
\subsection{Duals of function spaces}\label{Subsection: Duals of function spaces}
In this section we apply the duality results in Section \ref{Section: Dual spaces} to the examples in Sections \ref{Subsection: Examples of inductive limits} and \ref{Subsection: Examples of projective limits} to obtain characterisations of the order and order continuous duals of some function spaces. All of these results follow immediately from the corresponding examples and the appropriate duality result.
\begin{thm} Let $(X,\Sigma,\mu)$ be a complete $\sigma$-finite measure space. Let $\Xi \ensuremath{\mathop{:}\!\!=} (X_n)$ be an increasing sequence (w.r.t. inclusion) of measurable sets with positive measure so that $X=\displaystyle \bigcup X_n$. Let $1\leq p <\infty$ and $1\leq q\leq \infty$ satisfy $\frac{1}{p}+\frac{1}{q}=1$. For $n\in\mathbb{N}$ let $e_n$ and $r_n$ be as in Examples \ref{Exm: Locally supported p-summable functions as inductive limit} and \ref{Exm: Lploc projective limit}, respectively.
For every $n\in\mathbb{N}$, let $T_n:\vlat{L}^q(X_n)\to \vlat{L}^p(X_n)^\sim$ be the usual (isometric) lattice isomorphism, \[ T_n(u)(v) = \int_{X_n} uv\thinspace d\mu ,~~ u\in \vlat{L}^q(X_n),~ v\in \vlat{L}^p(X_n). \] There exists a unique lattice isomorphism $T:\vlat{L}^q_{\Xi-\ell oc}(X)\to \vlat{L}^p_{\Xi-c}(X)^\sim$ so that the diagram \[ \begin{tikzcd}[cramped] \vlat{L}^q_{\Xi-\ell oc}(X) \arrow[rr, "T"] \arrow[dd, "r_n"'] & & \vlat{L}^p_{\Xi-c}(X)^\sim \arrow[dd, "e_n^\sim"]\\
& & \\ \vlat{L}^q(X_n) \arrow[rr, "T_n"'] & & \vlat{L}^p(X_n)^\sim \end{tikzcd} \] commutes for every $n\in\mathbb{N}$. \end{thm}
\begin{proof} The result follows immediately from Examples \ref{Exm: Locally supported p-summable functions as inductive limit} and \ref{Exm: Lploc projective limit}, Theorem \ref{Thm: Dual of ind sys in VLIC is proj of duals in NVL} and Proposition \ref{Prop: Morphisms between projective limits}. \end{proof}
\begin{thm} Let $(X,\Sigma,\mu)$ be a complete $\sigma$-finite measure space. Let $\Xi \ensuremath{\mathop{:}\!\!=} (X_n)$ be an increasing sequence (w.r.t. inclusion) of measurable sets with positive measure so that $X = \displaystyle \bigcup X_n$. Let $1\leq p\leq \infty$ and $1\leq q\leq \infty$ satisfy $\frac{1}{p}+\frac{1}{q}=1$. For $n\in\mathbb{N}$ let $e_n$ and $r_n$ be as in Examples \ref{Exm: Locally supported p-summable functions as inductive limit} and \ref{Exm: Lploc projective limit}, respectively.
For every $n\in\mathbb{N}$, let $S_n:\vlat{L}^q(X_n)\to \ordercontn{\vlat{L}^p(X_n)}$ be the usual (isometric) lattice isomorphism, \[ S_n(u)(v) = \int_{X_n}uv\thinspace d\mu,~~ u\in \vlat{L}^q(X_n),~ v\in \vlat{L}^p(X_n). \] There exists a unique lattice isomorphism $S:\vlat{L}^q_{\Xi-\ell oc}(X)\to \ordercontn{\vlat{L}^p_{\Xi-c}(X)}$ so that the diagram \[ \begin{tikzcd}[cramped] \vlat{L}^q_{\Xi-\ell oc}(X) \arrow[rr, "S"] \arrow[dd, "r_n"'] & & \ordercontn{\vlat{L}^p_{\Xi-c}(X)} \arrow[dd, "e_n^\sim"]\\
& & \\ \vlat{L}^q(X_n) \arrow[rr, "S_n"'] & & \ordercontn{\vlat{L}^p(X_n)} \end{tikzcd} \] commutes for every $n\in\mathbb{N}$. \end{thm}
\begin{proof} We observe that the mappings $e_{n,m}$ in Example \ref{Exm: Locally supported p-summable functions as inductive limit} are injective for all $n\leq m$ in $\mathbb{N}$. Therefore the result follows immediately from Examples \ref{Exm: Locally supported p-summable functions as inductive limit} and \ref{Exm: Lploc projective limit}, Theorem \ref{Thm: Order continuous dual of ind sys in NRIP is proj of duals in NRiesz} and Proposition \ref{Prop: Morphisms between projective limits}. \end{proof}
\begin{thm} Let $(X,\Sigma,\mu)$ be a complete $\sigma$-finite measure space. Let $\Xi \ensuremath{\mathop{:}\!\!=} (X_n)$ be an increasing sequence (w.r.t. inclusion) of measurable sets with positive measure so that $X=\displaystyle \bigcup X_n$. Let $1\leq p<\infty$ and $1\leq q\leq \infty$ satisfy $\frac{1}{p}+\frac{1}{q}=1$. For $n\in\mathbb{N}$ let $e_n$ and $r_n$ be as in Examples \ref{Exm: Locally supported p-summable functions as inductive limit} and \ref{Exm: Lploc projective limit}, respectively.
For every $n\in\mathbb{N}$, let $T_n:\vlat{L}^q(X_n)\to \vlat{L}^p(X_n)^\sim$ be the usual (isometric) lattice isomorphism, \[ T_n(u)(v) = \int_{X_n}uv\thinspace d\mu, ~~ u\in \vlat{L}^q(X_n),~ v\in \vlat{L}^p(X_n). \] There exists a unique lattice isomorphism $R:\vlat{L}_{\Xi-c}^q(X)\to \vlat{L}^p_{\Xi-\ell oc}(X)^\sim$ so that the diagram \[ \begin{tikzcd}[cramped] \vlat{L}^q(X_n) \arrow[rr, "T_n"] \arrow[dd, "e_n"'] & & \vlat{L}^p(X_n)^\sim \arrow[dd, "r_n^\sim"]\\
& & \\ \vlat{L}^q_{\Xi-c}(X) \arrow[rr, "R"'] & & \vlat{L}^p_{\Xi-\ell oc}(X)^\sim \end{tikzcd} \] commutes for every $n\in\mathbb{N}$. \end{thm}
\begin{proof} We note that the mappings $p_{m,n}$ in Example \ref{Exm: Lploc projective limit} are surjective for all $m\geq n$ in $\mathbb{N}$. Therefore the result follows immediately from Examples \ref{Exm: Locally supported p-summable functions as inductive limit} and \ref{Exm: Lploc projective limit}, Theorem \ref{Thm: Dual of proj sys in VLIC is ind of duals in NVLIC} and Proposition \ref{Prop: Morphisms between inductive limits}. \end{proof}
\begin{thm} Let $(X,\Sigma,\mu)$ be a complete $\sigma$-finite measure space. Let $\Xi \ensuremath{\mathop{:}\!\!=} (X_n)$ be an increasing sequence (w.r.t. inclusion) of measurable sets with positive measure so that $X=\displaystyle \bigcup X_n$. Let $1\leq p\leq \infty$ and $1\leq q\leq \infty$ satisfy $\frac{1}{p}+\frac{1}{q}=1$. For $n\in\mathbb{N}$ let $e_n$ and $r_n$ be as in Examples \ref{Exm: Locally supported p-summable functions as inductive limit} and \ref{Exm: Lploc projective limit}, respectively.
For every $n\in\mathbb{N}$, let $S_n:\vlat{L}^q(X_n)\to \ordercontn{\vlat{L}^p(X_n)}$ be the usual (isometric) lattice isomorphism, \[ S_n(u)(v) = \int_{X_n}uv\thinspace d\mu, ~~ u\in \vlat{L}^q(X_n),~ v\in \vlat{L}^p(X_n). \] There exists a unique lattice isomorphism $Q:\vlat{L}^q_{\Xi-c}(X) \to \ordercontn{\vlat{L}^p_{\Xi-\ell oc}(X)}$ so that the diagram \[ \begin{tikzcd}[cramped] \vlat{L}^q(X_n) \arrow[rr, "S_n"] \arrow[dd, "e_n"'] & & \ordercontn{\vlat{L}^p(X_n)} \arrow[dd, "r_n^\sim"]\\
& & \\ \vlat{L}^q_{\Xi-c}(X) \arrow[rr, "Q"'] & & \ordercontn{\vlat{L}^p_{\Xi-\ell oc}(X)} \end{tikzcd} \] commutes for every $n\in\mathbb{N}$. \end{thm}
\begin{proof} Because the mappings $p_{m,n}$ in Example \ref{Exm: Lploc projective limit} are surjective for all $m\geq n$ in $\mathbb{N}$, the result follows immediately from Examples \ref{Exm: Locally supported p-summable functions as inductive limit} and \ref{Exm: Lploc projective limit}, Theorem \ref{Thm: Order continuous dual of proj sys in VLIC is ind of duals in NVLIC} and Proposition \ref{Prop: Morphisms between inductive limits}. \end{proof}
The next two results are special cases of Theorems \ref{Thm: Riesz Representation Theorem for C(X)} and \ref{Thm: Order continuous functionals on C(X) are normal measures}, respectively.
\begin{thm} Let $X$ be a locally compact and $\sigma$-compact Hausdorff space. Let $\Gamma \ensuremath{\mathop{:}\!\!=} (X_n)$ be an increasing sequence (with respect to inclusion) of open precompact sets in $X$ so that $X = \displaystyle\bigcup X_n$. For $n\in\mathbb{N}$ let $e_n$ and $r_n$ be as in Examples \ref{Exm: Compactly supported measures as inductive limit} and \ref{Exm: Continuous functions projective limit}, respectively.
For every $n\in \mathbb{N}$, let $T_n:\vlat{M}(\bar X_n)\to {\mathrm C }(\bar X_n)^\sim$ denote the usual (isometric) lattice isomorphism, \[ T_n(\mu)(u) = \int_{X_n} u\thinspace d\mu, ~~ \mu\in\vlat{M}(\bar X_n),~ u\in {\mathrm C }(\bar X_n). \] There exists a unique lattice isomorphism $T:\vlat{M}_c(X)\to {\mathrm C }(X)^\sim$ so that the diagram \[ \begin{tikzcd}[cramped] \vlat{M}(\bar X_n) \arrow[rr, "T_n"] \arrow[dd, "e_n"'] & & {\mathrm C }(\bar X_n)^\sim \arrow[dd, "r_n^\sim"]\\
& & \\ \vlat{M}_c(X) \arrow[rr, "T"'] & & {\mathrm C }(X)^\sim \end{tikzcd} \] commutes for every $n\in\mathbb{N}$. \end{thm}
\begin{proof} The result follows immediately from Examples \ref{Exm: Compactly supported measures as inductive limit} and \ref{Exm: Continuous functions projective limit}, Theorem \ref{Thm: Dual of proj sys in VLIC is ind of duals in NVLIC} and Proposition \ref{Prop: Morphisms between inductive limits}. \end{proof}
\begin{thm} Let $X$ be a locally compact and $\sigma$-compact Hausdorff space. Let $\Gamma \ensuremath{\mathop{:}\!\!=} (X_n)$ be an increasing sequence (with respect to inclusion) of open precompact sets in $X$ so that $X = \displaystyle\bigcup X_n$. For $n\in\mathbb{N}$ let $e_n$ and $r_n$ be as in Examples \ref{Exm: Compactly supported NORMAL measures as inductive limit} and \ref{Exm: Continuous functions projective limit}, respectively.
For every $n\in \mathbb{N}$, let $S_n:\vlat{N}(\bar X_n)\to \ordercontn{{\mathrm C }(\bar X_n)}$ denote the (isometric) lattice isomorphism, \[
S_n(\mu)(u) = \int_{X_n} u\thinspace d\mu, ~~\mu\in\vlat{N}(\bar X_n),~ u\in {\mathrm C }(\bar X_n). \] There exists a unique lattice isomorphism $S:\vlat{N}_c(X)\to \ordercontn{{\mathrm C }(X)}$ so that the diagram \[ \begin{tikzcd}[cramped] \vlat{N}(\bar X_n) \arrow[rr, "S_n"] \arrow[dd, "e_n"'] & & \ordercontn{{\mathrm C }(\bar X_n)} \arrow[dd, "r_n^\sim"]\\
& & \\ \vlat{N}_c(X) \arrow[rr, "S"'] & & \ordercontn{{\mathrm C }(X)} \end{tikzcd} \] commutes for every $n\in\mathbb{N}$. \end{thm}
\begin{proof} The result follows immediately from Examples \ref{Exm: Compactly supported NORMAL measures as inductive limit} and \ref{Exm: Continuous functions projective limit}, Theorem \ref{Thm: Order continuous dual of proj sys in VLIC is ind of duals in NVLIC} and Proposition \ref{Prop: Morphisms between inductive limits}. \end{proof}
\subsection{Perfect spaces}\label{Subsection: Perfect spaces}
Recall that a vector lattice $\vlat{E}$ is \emph{perfect} if the canonical embedding $\vlat{E}\ni u\longmapsto \Psi_u \in \ordercontnbidual{\vlat{E}}$ is a lattice isomorphism \cite[p.~409]{Zaanen1983RSII}. We say that a vector lattice $\vlat{E}$ is \emph{an order continuous dual}, or has an \emph{order continuous predual}, if there exists a vector lattice $\vlat{F}$ so that $\vlat{E}$ and $\ordercontn{\vlat{F}}$ are isomorphic vector lattices. From the definition it is clear that every perfect vector lattice has an order continuous predual. On the other hand, by \cite[Theorem 110.3]{Zaanen1983RSII}, $\ordercontn{\vlat{F}}$ is perfect for every vector lattice $\vlat{F}$. Therefore, if $\vlat{E}$ has an order continuous predual then $\vlat{E}$ is perfect; that is, $\vlat{E}$ is perfect if and only if $\vlat{E}$ has an order continuous predual.
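For orientation we recall a standard example, not used in the sequel. For a $\sigma$-finite measure space $(X,\Sigma,\mu)$, the space $\vlat{L}^1(\mu)$ is perfect, since $\vlat{L}^\infty(\mu)$ is an order continuous predual of it: \[ \ordercontn{\left(\vlat{L}^\infty(\mu)\right)} \cong \vlat{L}^1(\mu) \quad\text{and}\quad \ordercontn{\left(\vlat{L}^1(\mu)\right)} \cong \vlat{L}^\infty(\mu), \] so that the canonical embedding of $\vlat{L}^1(\mu)$ into $\ordercontnbidual{\left(\vlat{L}^1(\mu)\right)}$ is surjective.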
This section is mainly concerned with obtaining a decomposition theorem for perfect vector lattices, i.e. for vector lattices with an order continuous predual, akin to Theorem \ref{Thm: C(K) Dual space char order dual version}. This result follows as an application of Example \ref{Exm: Projective limits of bands} and the duality results in Section \ref{Section: Dual spaces}.
\begin{lem}\label{Lem: Disjointification of order cts functionals} Let $\vlat{E}$ be a vector lattice and $0 \leq \varphi, \psi \in \ordercontn{\vlat{E}}$. The following statements are true. \begin{enumerate}
\item[(i)] There exist functionals $0\leq \varphi_1,\psi_1 \in\ordercontn{\vlat{E}}$ so that $\varphi_1\wedge \psi_1=0$, $\varphi_1\leq \varphi$, $\psi_1\leq \psi$ and $\varphi\vee \psi = \varphi_1\vee \psi_1$.
\item[(ii)] If $\vlat{E}$ has the principal projection property and $\varphi$ is strictly positive, then for all $u\in\vlat{E}$, if $\eta(u) = 0$ for all functionals $0 \leq \eta \leq \varphi$ then $u = 0$. \end{enumerate} \end{lem}
\begin{proof} The statement in (i) follows from \cite[Lemma 1.28 (ii) \& Exercise 1.2.E1]{Meyer-Nieberg1991}.
We prove the contrapositive of (ii). Let $u\neq 0$ in $\vlat{E}$. Without loss of generality assume that $u^+\neq 0$. Denote by $\vlat{B}$ the band generated by $u^+$ in $\vlat{E}$. Define $\eta \ensuremath{\mathop{:}\!\!=} \varphi \circ P_{\vlat{B}}$. Then $\eta$ is order continuous and $0 \leq \eta \leq \varphi$. Moreover, since $u^-\perp u^+$ we have $P_{\vlat{B}}(u^-)=0$, so that $\eta(u) = \varphi(u^+) \neq 0$. \end{proof}
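A finite-dimensional illustration of (i): take $\vlat{E}=\mathbb{R}^2$ and identify $\ordercontn{\vlat{E}}$ with $\mathbb{R}^2$ in the usual way. For $\varphi=(1,1)$ and $\psi=(1,2)$ we have $\varphi\wedge\psi=(1,1)\neq 0$, but the functionals $\varphi_1\ensuremath{\mathop{:}\!\!=}(1,0)$ and $\psi_1\ensuremath{\mathop{:}\!\!=}(0,2)$ satisfy \[ \varphi_1\wedge\psi_1=0, \quad \varphi_1\leq\varphi, \quad \psi_1\leq\psi \quad\text{and}\quad \varphi_1\vee\psi_1=(1,2)=\varphi\vee\psi. \]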
\begin{thm}\label{Thm: Projective limit of perfect spaces is perfect} Let $\cal{I}\ensuremath{\mathop{:}\!\!=} \left( (\vlat{E}_\alpha)_{\alpha\in I}, (p_{\beta, \alpha})_{\beta \succcurlyeq \alpha}\right)$ be an inverse system in ${\bf NIVL}$, and let $\left( \vlat{E}, (p_\alpha)_{\alpha\in I} \right)$ be its inverse limit in ${\bf VL}$. Assume that $p_{\beta , \alpha}$ is surjective for all $\beta \succcurlyeq \alpha$ in $I$. If $\vlat{E}_\alpha$ is perfect for every $\alpha\in I$ then so is $\vlat{E}$. \end{thm}
\begin{proof} By Proposition \ref{Prop: Dual system of projective system is inductive system} the pair $\ordercontn{\cal{I}}\ensuremath{\mathop{:}\!\!=} \left( (\ordercontn{(\vlat{E}_\alpha)})_{\alpha\in I}, (p_{\beta, \alpha}^\sim)_{\alpha\preccurlyeq \beta}\right)$ is a direct system in ${\bf NIVL}$. Because every $p_{\beta,\alpha}$ is surjective, each $p_{\beta,\alpha}^\sim$ is injective. Hence, by Theorem \ref{Thm: Existence of Inductive Limits in NVLI}, the direct limit of $\ordercontn{\cal{I}}$ exists in ${\bf NIVL}$. Let $\cal{S}\ensuremath{\mathop{:}\!\!=} \left( \vlat{F},(e_\alpha)_{\alpha \in I}\right)$ be the direct limit of $\ordercontn{\cal{I}}$ in ${\bf NIVL}$.
By Proposition \ref{Prop: Dual system of inductive system is projective system} the pair $\ordercontnbidual{\cal{I}}\ensuremath{\mathop{:}\!\!=} \left( (\ordercontnbidual{(\vlat{E}_\alpha)})_{\alpha\in I}, (p_{\beta, \alpha}^{\sim \sim})_{\alpha\preccurlyeq \beta}\right)$ is an inverse system in ${\bf NIVL}$, and $\ordercontn{\cal{S}}\ensuremath{\mathop{:}\!\!=} \left( \ordercontn{\vlat{F}},(e_\alpha^\sim)_{\alpha \in I}\right)$ is the inverse limit of $\ordercontnbidual{\cal{I}}$ in ${\bf NVL}$ by Theorem \ref{Thm: Order continuous dual of ind sys in NRIP is proj of duals in NRiesz}. For every $\alpha\in I$, let $\sigma_\alpha: \vlat{E}_\alpha \to \ordercontnbidual{(\vlat{E}_\alpha)}$ denote the canonical lattice isomorphism. We observe that the diagram \[ \begin{tikzcd}[cramped] \vlat{E}_\beta \arrow[rr, "\sigma_\beta"] \arrow[dd, "p_{\beta, \alpha}"'] & & \ordercontnbidual{(\vlat{E}_\beta)} \arrow[dd, "p^{\sim\sim}_{\beta, \alpha}"]\\
& & \\ \vlat{E}_\alpha \arrow[rr, "\sigma_\alpha"'] & & \ordercontnbidual{(\vlat{E}_\alpha)} \end{tikzcd} \] commutes for all $\beta\succcurlyeq\alpha$ in $I$. By Proposition \ref{Prop: Morphisms between projective limits}, there exists a unique lattice isomorphism $\Sigma:\vlat{E}\to \ordercontn{\vlat{F}}$ so that the diagram \[ \begin{tikzcd}[cramped] \vlat{E} \arrow[rr, "\Sigma"] \arrow[dd, "p_{\alpha}"'] & & \ordercontn{\vlat{F}} \arrow[dd, "e^{\sim}_{\alpha}"]\\
& & \\ \vlat{E}_\alpha \arrow[rr, "\sigma_\alpha"'] & & \ordercontnbidual{(\vlat{E}_\alpha)} \end{tikzcd} \] commutes for every $\alpha \in I$. Since $\ordercontn{\vlat{F}}$ is perfect, we conclude that $\vlat{E}$ is also perfect. \end{proof}
We now come to the main results of this section, namely, decomposition theorems for perfect vector lattices. Recall the terminology and notation introduced in Example \ref{Exm: Projective limits of bands}.
\begin{thm}\label{Thm: Perfect spaces as projective limits of carriers} Let $\vlat{E}$ be a Dedekind complete vector lattice. Let $\vlat{M}_{\mathrm n}\subseteq \bands{\vlat{E}}$ consist of the carriers of all positive, order continuous functionals on $\vlat{E}$; that is, \[ \vlat{M}_{\mathrm n} \ensuremath{\mathop{:}\!\!=} \{\carrier{\varphi} ~:~ 0\leq \varphi\in\ordercontn{\vlat{E}}\}. \] For $\carrier{\varphi} \subseteq \carrier{\psi}$ in $\vlat{M}_{\mathrm n}$, denote by $P_{\varphi}$ the band projection of $\vlat{E}$ onto $\carrier{\varphi}$ and by $P_{\psi , \varphi}$ the band projection of $\carrier{\psi}$ onto $\carrier{\varphi}$. The following statements are true. \begin{enumerate}
\item[(i)] $\vlat{M}_{\mathrm n}$ is an ideal in $\bands{\vlat{E}}$.
\item[(ii)] $\vlat{M}_{\mathrm n}$ is a non-trivial ideal in $\bands{\vlat{E}}$ if and only if $\vlat{E}$ admits a non-zero order continuous functional.
\item[(iii)] $\vlat{M}_{\mathrm n}$ is a proper ideal in $\bands{\vlat{E}}$ if and only if $\vlat{E}$ does not admit a strictly positive order continuous functional.
\item[(iv)] $P_{\vlat{M}_{\mathrm n}}$ is injective if and only if $\preann{\ordercontn{\vlat{E}}}=\{0\}$.
\item[(v)] If $\vlat{E}$ is perfect then $P_{\vlat{M}_{\mathrm n}}$ is a lattice isomorphism. \end{enumerate} \end{thm}
\begin{proof}[Proof of (i)] For $0\leq \psi,\varphi\in\ordercontn{\vlat{E}}$, we have $\carrier{\psi},\carrier{\varphi}\subseteq \carrier{\varphi\vee \psi}\in \vlat{M}_{\mathrm n}$ and therefore $\vlat{M}_{\mathrm n}$ is upwards directed.
Let $\vlat{B}\in\bands{\vlat{E}}$ and $0\leq \varphi\in\ordercontn{\vlat{E}}$ such that $\vlat{B}\subseteq \carrier{\varphi}$. Define $\psi \ensuremath{\mathop{:}\!\!=} \varphi \circ P_{\vlat{B}}$. Then $\psi \geq 0$ and by the order continuity of band projections, $\psi\in \ordercontn{\vlat{E}}$. We show that $\nullid{\psi} = \vlat{B}^d$. For $u\in \vlat{B}^d$, $P_\vlat{B}(|u|) = 0$ so that $\psi(|u|) = \varphi\left( P_{\vlat{B}}(|u|) \right) = 0$. Therefore $\vlat{B}^d\subseteq \nullid{\psi}$. For the reverse inclusion, let $v\in \nullid{\psi}$. Then $\varphi\left( P_{\vlat{B}}(|v|) \right) = 0$ so that $P_{\vlat{B}}(|v|) \in \nullid{\varphi} \subseteq \vlat{B}^d$. Hence $P_{\vlat{B}}(|v|)=0$ so that $v\in \vlat{B}^d$. We conclude that $\vlat{B}=\carrier{\psi}$. Therefore $\vlat{B}\in\vlat{M}_{\mathrm n}$ so that $\vlat{M}_{\mathrm n}$ is downward closed, hence an ideal in $\bands{\vlat{E}}$. \end{proof}
\begin{proof}[Proof of (ii)] This is clear. \end{proof}
\begin{proof}[Proof of (iii)] A functional $0\leq \varphi\in\ordercontn{\vlat{E}}$ is strictly positive if and only if $\nullid{\varphi}=\{0\}$, if and only if $\carrier{\varphi} = \vlat{E}$; hence the result follows. \end{proof}
\begin{proof}[Proof of (iv)] According to Example \ref{Exm: Projective limits of bands} (iii), $P_{\vlat{M}_{\mathrm n}}$ is injective if and only if $\{P_\varphi ~:~ 0\leq \varphi\in \ordercontn{\vlat{E}}\}$ separates the points of $\vlat{E}$. It therefore suffices to prove that $\preann{\ordercontn{\vlat{E}}}=\{0\}$ if and only if $\{P_{\varphi} : 0\leq \varphi\in\ordercontn{\vlat{E}}\}$ separates the points of $\vlat{E}$.
Assume that $\preann{\ordercontn{\vlat{E}}}=\{0\}$. Fix $u\in \vlat{E}$ with $u\neq 0$. Then there exists $\varphi\in \ordercontn{\vlat{E}}$ such that $\varphi(u) \neq 0$. Therefore $0 < |\varphi(u)| \leq |\varphi|(|u|)$. Hence $u\not\in \nullid{|\varphi|}$ and thus $P_{|\varphi|}(u) \neq 0$.
Conversely, assume that $\{P_{\varphi} : 0\leq \varphi\in\ordercontn{\vlat{E}}\}$ separates the points of $\vlat{E}$. Let $0<v\in \vlat{E}^+$. There exists $0 \leq \varphi \in \ordercontn{\vlat{E}}$ such that $P_{\varphi}(v) > 0$. Since every positive functional is strictly positive on its carrier, it follows that $\varphi(v) \geq \varphi\left( P_{\varphi}(v) \right) > 0$. Now consider any non-zero $w\in \vlat{E}$; replacing $w$ by $-w$ if necessary, we may assume that $w^+\neq 0$. By the above, there exists $0 \leq \varphi \in \ordercontn{\vlat{E}}$ such that $\varphi(w^+) \neq 0$. Let $\vlat{B}$ denote the band generated by $w^+$ in $\vlat{E}$ and define the functional $\psi \ensuremath{\mathop{:}\!\!=} \varphi\circ P_{\vlat{B}}$. Then $0\leq \psi\in \ordercontn{\vlat{E}}$ and, since $P_{\vlat{B}}(w^-)=0$, $\psi(w) = \varphi(w^+) \neq 0$. \end{proof}
\begin{proof}[Proof of (v)] It follows from Example \ref{Exm: Projective limits of bands} (ii) that $P_{\vlat{M}_{\mathrm n}}$ is a lattice homomorphism. Since $\vlat{E}$ is perfect, $\preann{\ordercontn{\vlat{E}}}=\{0\}$ by \cite[Theorem 110.1]{Zaanen1983RSII} and so by (iv), $P_{\vlat{M}_{\mathrm n}}$ is injective. We show that $P_{\vlat{M}_{\mathrm n}}$ is surjective.
Let $0\leq u = \left( u_\varphi \right) \in \proj{\cal{I}_{\vlat{M}_{\mathrm n}}}$. Define the map $\Upsilon: (\ordercontn{\vlat{E}})^+ \to \mathbb{R}$ by setting $\Upsilon\left(\varphi\right) \ensuremath{\mathop{:}\!\!=} \varphi\left( u_{\varphi} \right)$ for every $\varphi \in (\ordercontn{\vlat{E}})^+$. We claim that $\Upsilon$ is additive. Let $0 \leq \varphi, \psi \in \ordercontn{\vlat{E}}$. Then \[ \begin{array}{lll} \Upsilon\left( \varphi + \psi \right) & = & \left(\varphi + \psi\right)\left( u_{\varphi + \psi} \right)
\\ & = & \varphi\left( u_{\varphi + \psi} \right) + \psi\left( u_{\varphi + \psi} \right)
\\ & = &\varphi\circ P_{\varphi}\left(u_{\varphi + \psi} \right) + \psi\circ P_{\psi}\left( u_{\varphi + \psi} \right). \end{array} \] Because $(u_{\varphi})\in \proj{\cal{I}_{\vlat{M}_{\mathrm n}}}$, $u_{\varphi+\psi}\in \carrier{\varphi+\psi}$ so that $P_{\varphi}\left(u_{\varphi + \psi} \right)= P_{\varphi+\psi , \varphi}\left(u_{\varphi + \psi} \right) = u_\varphi$ and $P_{\psi}\left(u_{\varphi + \psi} \right) = P_{\varphi+\psi , \psi}\left(u_{\varphi + \psi} \right) = u_\psi$. Hence \[ \Upsilon\left( \varphi + \psi \right) = \varphi\left( u_\varphi \right) + \psi\left( u_\psi \right) = \Upsilon\left( \varphi\right) + \Upsilon\left( \psi \right). \] By \cite[Theorem 1.10]{AliprantisBorder2006} $\Upsilon$ extends to a positive linear functional on $\ordercontn{\vlat{E}}$, which we denote by $\Upsilon$ as well.
We claim that $\Upsilon$ is order continuous. To see this, consider any $D\downarrow 0$ in $\ordercontn{\vlat{E}}$. Fix $\epsilon > 0$ and $\varphi\in D$. By \cite[Theorem 1.18]{AliprantisBurkinshaw2006} there exists $\psi_0\leq \varphi$ in $D$ so that $0\leq \psi(u_{\varphi})<\epsilon$ for all $\psi\leq \psi_0$ in $D$. Consider $\psi\leq \psi_0$. Since $u\in \proj{\cal{I}_{\vlat{M}_{\mathrm n}}}$ we have $u_\psi = P_{\varphi,\psi} (u_\varphi)\leq u_\varphi$ so that $0\leq \psi(u_\psi) \leq \psi(u_{\varphi})<\epsilon$; that is, $0\leq \Upsilon(\psi)<\epsilon$ for all $\psi\leq \psi_0$. Therefore $\Upsilon[D]\downarrow 0$ in $\mathbb{R}$ so that $\Upsilon$ is order continuous, as claimed.
Since $\vlat{E}$ is perfect, there exists $v\in \vlat{E}^+$ so that $\Upsilon\left( \varphi \right) = \varphi\left( v \right)$ for all $\varphi\in \ordercontn{\vlat{E}}$. We claim that $P_{\vlat{M}_{\mathrm n}}(v)=u$; that is, $P_{\varphi}(v)=u_\varphi$ for every $0\leq \varphi\in \ordercontn{\vlat{E}}$. For each $0 \leq \varphi\in \ordercontn{\vlat{E}}$ we have $\varphi(u_\varphi) = \Upsilon\left(\varphi\right) = \varphi(v) = \varphi\left( P_{\varphi}(v) \right)$. Let $0 \leq \eta \leq \varphi$ in $\ordercontn{\vlat{E}}$. Then \[
\eta\left( u_\varphi \right) = \eta\left( P_{\eta}(u_\varphi) \right) = \eta(P_{\varphi,\eta}(u_\varphi)) = \eta\left( u_\eta \right) = \Upsilon(\eta) = \eta(v), \] and, \[ \eta\left( P_{\varphi} (v) \right) = \eta\left( P_{\eta}P_{\varphi} (v) \right) = \eta\left( P_{\eta}(v) \right) = \eta(v). \] Thus $\eta\left( u_\varphi- P_{\varphi } (v) \right)=0$. By Lemma \ref{Lem: Disjointification of order cts functionals} (ii), applied on $C_\varphi$, we conclude that $P_{\varphi}(v) = u_\varphi$. This verifies our claim. Therefore $P_{\vlat{M}_{\mathrm n}}$ maps $\vlat{E}^+$ onto $\left(\proj{\cal{I}_{\vlat{M}_{\mathrm n}}}\right)^+$ which shows that $P_{\vlat{M}_{\mathrm n}}$ is surjective. \end{proof}
\begin{remark}\label{Remark: Inverse limit of carriers not always perfect} We observe that the converse of Theorem \ref{Thm: Perfect spaces as projective limits of carriers} (v) is false. Indeed, $\ordercontnbidual{{\mathrm c}_0}=\ell^\infty$ so that ${\mathrm c}_0$ is not perfect. However, there exists a strictly positive functional $\varphi\in \ordercontn{{\mathrm c}_0}$. Therefore ${\mathrm c}_0 = \carrier{\varphi}\in \vlat{M}_{\mathrm n}$ so that $P_{\vlat{M}_{\mathrm n}}$ maps ${\mathrm c}_0$ lattice isomorphically onto $\proj{\cal{I}_{\vlat{M}_{\mathrm n}}}$, see Remark \ref{Remark: Vector lattice as projective limit of bands}. \end{remark}
\begin{cor}\label{Cor: Perfect spaces as inverse limit of perfect carriers} Let $\vlat{E}$ be a Dedekind complete vector lattice. Let $\vlat{M}_{p}\subseteq \bands{\vlat{E}}$ consist of the carriers of all positive, order continuous functionals on $\vlat{E}$ which are perfect; that is, \[ \vlat{M}_{p} \ensuremath{\mathop{:}\!\!=} \{\carrier{\varphi} ~:~ 0\leq \varphi\in\ordercontn{\vlat{E}} \text{ and } \carrier{\varphi} \text{ is perfect}\}. \] The following statements are true. \begin{enumerate}
\item[(i)] $\vlat{M}_{p}$ is an ideal in $\bands{\vlat{E}}$.
\item[(ii)] $P_{\vlat{M}_p}$ is a lattice isomorphism if and only if $\vlat{E}$ is perfect. \end{enumerate} \end{cor}
\begin{proof}[Proof of (i)] It follows from Theorem \ref{Thm: Perfect spaces as projective limits of carriers} (i) and the fact that bands in a perfect vector lattice are themselves perfect that $\vlat{M}_{p}$ is downwards closed in $\bands{\vlat{E}}$. To see that $\vlat{M}_{p}$ is upwards directed, fix $C_\varphi,C_\psi\in\vlat{M}_p$. By Lemma \ref{Lem: Disjointification of order cts functionals} (i) there exist functionals $0 \leq \varphi_1\leq \varphi$ and $0\leq \psi_1 \leq \psi$ in $\ordercontn{\vlat{E}}$ such that $\varphi_1 \wedge \psi_1=0$ and $\varphi_1 \vee \psi_1 = \varphi \vee \psi$. Because $0\leq \varphi_1\leq \varphi$ and $0\leq \psi_1\leq \psi$ it follows that $\carrier{\varphi_1}\subseteq \carrier{\varphi}$ and $\carrier{\psi_1}\subseteq \carrier{\psi}$. Therefore $\carrier{\varphi_1}$ and $\carrier{\psi_1}$ are perfect. By \cite[Theorem 90.7]{Zaanen1983RSII} we have \[ \carrier{\varphi_1 \vee \psi_1} = \left( \carrier{\varphi_1} + \carrier{\psi_1} \right)^{dd} = \carrier{\varphi_1} + \carrier{\psi_1}. \] By \cite[Theorem 90.6]{Zaanen1983RSII}, since $\varphi_1 \wedge \psi_1 = 0$, we have $\carrier{\varphi_1} \perp \carrier{\psi_1}$. Thus $\carrier{\varphi_1} \cap \carrier{\psi_1} =\{ 0 \}$ which implies $\carrier{\varphi_1 \vee \psi_1} = \carrier{\varphi_1} \oplus \carrier{\psi_1}$. Hence it follows from Theorem \ref{Thm: Properties of product of vector lattices.} (v) and (vii) that $\ordercontnbidual{\left(\carrier{\varphi_1 \vee \psi_1}\right)} \cong \carrier{\varphi_1 \vee \psi_1}$; that is, $\carrier{\varphi \vee \psi}=\carrier{\varphi_1 \vee \psi_1}$ is perfect. Since $\carrier{\varphi},\carrier{\psi}\subseteq \carrier{\varphi \vee \psi}$ it follows that $\vlat{M}_p$ is upward directed, hence an ideal in $\bands{\vlat{E}}$. \end{proof}
\begin{proof}[Proof of (ii)] If $\vlat{E}$ is perfect then $\vlat{M}_p=\vlat{M}_{\mathrm n}$, and so the result follows from Theorem \ref{Thm: Perfect spaces as projective limits of carriers} (v). Conversely, if $P_{\vlat{M}_p}$ is an isomorphism then Theorem \ref{Thm: Projective limit of perfect spaces is perfect} implies that $\vlat{E}$ is perfect. \end{proof}
We now consider direct limits of perfect spaces. Due to the inherent limitations of the duality theorems for inverse limits, the results we obtain are less general than the corresponding results for inverse limits.
\begin{thm}\label{Thm: Direct limit of perfect spaces is perfect.} Let $\cal{D} \ensuremath{\mathop{:}\!\!=} \left( (\vlat{E}_n)_{n \in \mathbb{N}}, (e_{n, m})_{n \leq m} \right)$ be a direct system in ${\bf NIVL}$, and let $\cal{S}\ensuremath{\mathop{:}\!\!=} \left( \vlat{E}, (e_n)_{n \in \mathbb{N}} \right)$ be the direct limit of $\cal{D}$ in ${\bf IVL}$. Assume that $e^\sim_{n , m}$ is surjective for all $n \leq m$ in $\mathbb{N}$. If $\vlat{E}_n$ is perfect for every $n \in \mathbb{N}$ then so is $\vlat{E}$. \end{thm}
\begin{proof} By Proposition \ref{Prop: Dual system of inductive system is projective system}, the pair $\ordercontn{\cal{D}} \ensuremath{\mathop{:}\!\!=} \left( ( \ordercontn{( \vlat{E}_n )} )_{n \in \mathbb{N}}, (e^\sim_{n, m})_{n \leq m} \right)$ is an inverse system in ${\bf NIVL}$, and by Theorem \ref{Thm: Existence of Projective Limit NVL}~(ii) the inverse limit of $\ordercontn{\cal{D}}$ exists in ${\bf NVL}$. Write $\cal{S}_0 \ensuremath{\mathop{:}\!\!=} \left( \vlat{F},(p_n)_{n \in \mathbb{N}}\right)$ for this inverse limit $\proj{\ordercontn{\cal{D}}}$.
By Proposition \ref{Prop: Dual system of projective system is inductive system}, the pair $\ordercontnbidual{\cal{D}} \ensuremath{\mathop{:}\!\!=} \left( \left( \ordercontnbidual{( \vlat{E}_n )} \right)_{n \in \mathbb{N}}, (e^{\sim\sim}_{n, m})_{n \leq m} \right)$ is a direct system in ${\bf NIVL}$. Since we assumed that the $e^\sim_{n , m}$ are surjective, it follows by Theorem \ref{Thm: Order continuous dual of proj sys in VLIC is ind of duals in NVLIC} that $\ordercontn{\left( \cal{S}_0 \right)}$ is the direct limit of $\ordercontnbidual{\cal{D}}$ in ${\bf NIVL}$. For every $n \in \mathbb{N}$, let $\sigma_n: \vlat{E}_n \to \ordercontnbidual{(\vlat{E}_n)}$ denote the canonical lattice isomorphism. The diagram \[ \begin{tikzcd}[cramped] \vlat{E}_n \arrow[rr, "\sigma_n"] \arrow[dd, "e_{n,m}"'] & & \ordercontnbidual{( \vlat{E}_n )} \arrow[dd, "e^{\sim\sim}_{n,m} "]\\
& & \\ \vlat{E}_m \arrow[rr, "\sigma_m"'] & & \ordercontnbidual{( \vlat{E}_m )} \end{tikzcd} \] commutes for all $n\leq m$ in $\mathbb{N}$. By Proposition \ref{Prop: Morphisms between inductive limits} there exists a unique lattice isomorphism $\Sigma: \vlat{E}\to \ordercontn{\vlat{F}}$ so that the diagram \[
\begin{tikzcd}[cramped]
\vlat{E}_n \arrow[rr, "\sigma_n"] \arrow[dd, "e_n"'] & & \ordercontnbidual{( \vlat{E}_n )} \arrow[dd, "p^{\sim}_{n} "]\\
& & \\
\vlat{E} \arrow[rr, "\Sigma"'] & & \ordercontn{\vlat{F}}
\end{tikzcd} \] commutes for every $n \in \mathbb{N}$. Since $\ordercontn{\vlat{F}}$, being the order continuous dual of a vector lattice, is perfect, we conclude that $\vlat{E}$ is also perfect. \end{proof}
\begin{cor}\label{Cor: Inductive limit of sequence of perfects is perfect} Let $\cal{D}\ensuremath{\mathop{:}\!\!=} \left((\vlat{E}_{n})_{n\in\mathbb{N}},(e_{n,m})_{n\leq m}\right)$ be a direct system in ${\bf NIVL}$, and let $\cal{S} \ensuremath{\mathop{:}\!\!=} \left( \vlat{E}, (e_n)_{n\in \mathbb{N}} \right)$ be the direct limit of $\cal{D}$ in ${\bf IVL}$. Assume that $e_{n,m}$ is injective and $e_{n,m}[\vlat{E}_n]$ is a band in $\vlat{E}_m$ for all $n\leq m$ in $\mathbb{N}$. If $\vlat{E}_n$ is perfect for every $n\in\mathbb{N}$ then so is $\vlat{E}$. \end{cor}
\begin{proof}
It suffices to show that $e_{n,m}^\sim$ is surjective for all $n\leq m$ in $\mathbb{N}$; the result then follows directly from Theorem~\ref{Thm: Direct limit of perfect spaces is perfect.}. We observe that each $\vlat{E}_n$ is Dedekind complete and thus has the projection property. Fix $n\leq m$ in $\mathbb{N}$. Let $P_{m,n}:\vlat{E}_m\to e_{n,m}[\vlat{E}_n]$ be the band projection onto $e_{n,m}[\vlat{E}_n]$. The diagram \[ \begin{tikzcd}[cramped] \vlat{E}_n \arrow[rd, "e_{n,m}"'] \arrow[rr, "e_{n,m}"] & & \vlat{E}_m \arrow[dl, "P_{m,n}"]\\ & e_{n,m}[\vlat{E}_n] \end{tikzcd} \] commutes. Therefore \[ \begin{tikzcd}[cramped] \ordercontn{(\vlat{E}_{m})} \arrow[rr, "e_{n,m}^\sim"] & & \ordercontn{(\vlat{E}_n)}\\ & \ordercontn{(e_{n,m}[\vlat{E}_n])} \arrow[lu, "P_{m,n}^\sim"] \arrow[ru, "e_{n,m}^\sim"'] \end{tikzcd} \] commutes as well. Since $e_{n,m}:\vlat{E}_n\to e_{n,m}[\vlat{E}_n]$ is an isomorphism, so is $e_{n,m}^\sim: \ordercontn{(e_{n,m}[\vlat{E}_n])}\to \ordercontn{\left( \vlat{E}_n \right)}$. It follows from the above diagram that $e_{n,m}^\sim:\ordercontn{(\vlat{E}_m)}\to \ordercontn{(\vlat{E}_n)}$ is a surjection. \end{proof}
\begin{cor} Let $\vlat{E}$ be a vector lattice. Assume that there exists an increasing sequence $(\varphi_n)$ of positive order continuous functionals on $\vlat{E}$ such that $\displaystyle\bigcup \carrier{\varphi_n}=\vlat{E}$ and, for every $n\in\mathbb{N}$, $\carrier{\varphi_n}$ is perfect. Then $\vlat{E}$ is perfect. \end{cor}
\begin{proof} For all $n\leq m$ denote by $e_{n,m}:\carrier{\varphi_n}\to\carrier{\varphi_m}$ and $e_n:\carrier{\varphi_n}\to\vlat{E}$ the inclusion maps. By Example \ref{Exm: Inductive limit main example}, $\cal{D}\ensuremath{\mathop{:}\!\!=} \left( (\carrier{\varphi_n})_{n\in \mathbb{N}},(e_{n,m})_{n\leq m}\right)$ is a direct system in ${\bf NIVL}$, and $\cal{S} \ensuremath{\mathop{:}\!\!=} \left(\vlat{E},(e_n)_{n\in\mathbb{N}}\right)$ is the direct limit of $\cal{D}$ in ${\bf NIVL}$. By Corollary \ref{Cor: Inductive limit of sequence of perfects is perfect}, $\vlat{E}$ is perfect. \end{proof}
\subsection{Decomposition theorems for $\cont(X)$ as a dual space}\label{Subsection: Structure theorems for C(X) as a dual space}
This section deals with decomposition theorems for spaces $\cont(X)$ of continuous real-valued functions which are order dual spaces. In particular, we show that the naive generalisation of Theorems \ref{Thm: C(K) Dual space char} and \ref{Thm: C(K) Dual space char order dual version} to the non-compact case fails, and present an alternative approach via inverse limits. Specialising Corollary \ref{Cor: Perfect spaces as inverse limit of perfect carriers} to $\cont(X)$ yields the desired decomposition theorem. In order to facilitate the discussion to follow, we recall some basic facts concerning the structure of the carriers of positive functionals on $\cont(X)$. Throughout the section, $X$ denotes a realcompact space. Recall from Section~\ref{Section: Introduction} that the realcompactification of a Tychonoff space $Y$ is denoted by $\upsilon Y$.
Let $0\leq \varphi\in \orderdual{\cont(X)}$. According to Theorem \ref{Thm: Riesz Representation Theorem for C(X)} there exists a measure $\mu_\varphi \in\vlat{M}_c(X)^+$ so that \[ \varphi(u) = \int u \thinspace d\mu_\varphi,~~ u\in\cont(X). \] Denote by $S_\varphi$ the support of the measure $\mu_\varphi$. The null ideal of $\varphi$ is given by \[ \nullid{\varphi} = \{u\in\cont(X) ~:~ u(x) = 0 \text{ for all } x\in S_{\varphi}\}. \]
Indeed, the inclusion $\{u\in\cont(X) ~:~ u(x) = 0 \text{ for all } x\in S_{\varphi}\}\subseteq \nullid{\varphi}$ is clear. For the reverse inclusion, consider $u\in\cont(X)$ so that $u(x_0)\neq 0$ for some $x_0\in S_\varphi$. Then there exist a neighbourhood $V$ of $x_0$ and a number $\epsilon>0$ so that $|u|(x)>\epsilon$ for all $x\in V$. Because $x_0\in S_\varphi$, $\mu_\varphi(V)>0$. Therefore \[
\varphi(|u|) \geq \int_V |u| \thinspace d\mu_\varphi \geq \epsilon\mu_\varphi(V)>0 \] so that $u\notin \nullid{\varphi}$. It therefore follows that \[ \carrier{\varphi}=\{u\in\cont(X) ~:~ u(x) = 0 \text{ for all } x\in X\setminus S_\varphi\}. \] The band $\carrier{\varphi}$ is a projection band if and only if $S_\varphi$ is open, hence compact and open, see \cite[Theorem 6.3]{KandicVavpeticPositivity2019}. In this case we identify $\carrier{\varphi}$ with ${\mathrm C }(S_\varphi)$ and the band projection $P_\varphi:\cont(X)\to \carrier{\varphi}$ is given by restriction of $u\in\cont(X)$ to $S_\varphi$.
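The following example, which we supply purely for illustration, makes the preceding description concrete.

\begin{example} Let $X=\mathbb{N}$ with the discrete topology, so that $X$ is realcompact and $\cont(\mathbb{N})=\mathbb{R}^{\mathbb{N}}$. Fix $N\in\mathbb{N}$ and define \[ \varphi(u) \ensuremath{\mathop{:}\!\!=} \sum_{n=1}^{N} 2^{-n}u(n),~~ u\in\cont(\mathbb{N}), \] so that $\mu_\varphi = \sum_{n=1}^{N} 2^{-n}\delta_n$ and $S_\varphi = \{1,\ldots,N\}$. Then \[ \nullid{\varphi}=\{u\in\cont(\mathbb{N}) ~:~ u(1)=\cdots=u(N)=0\} \quad\text{and}\quad \carrier{\varphi}=\{u\in\cont(\mathbb{N}) ~:~ u(n)=0 \text{ for all } n>N\}\cong\mathbb{R}^{N}. \] Since $S_\varphi$ is compact and open, $\carrier{\varphi}$ is a projection band and $P_\varphi$ is restriction to $\{1,\ldots,N\}$. \end{example}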
\begin{prop}\label{Prop: Carriers in C(X) are perfect} Let $X$ be extremally disconnected. Then $\carrier{\varphi}$ is perfect for every $0\neq \varphi\in\ordercontn{\cont(X)}$. \end{prop}
\begin{proof}
Let $0\neq \varphi\in\ordercontn{\cont(X)}$. Since $\cont(X)$ is Dedekind complete, so is $\carrier{\varphi}$. Furthermore, $|\varphi|$ is strictly positive and order continuous on $\carrier{\varphi}$. Thus $\carrier{\varphi}$ has a separating order continuous dual. By Theorem~\ref{Thm: C(K) Dual space char order dual version}, $\carrier{\varphi} = {\mathrm C }(S_\varphi)$ has a Banach lattice predual; that is, $\carrier{\varphi}$ is an order dual space. Therefore $\carrier{\varphi}$ is perfect by \cite[Theorem~110.2]{Zaanen1983RSII}. \end{proof}
\begin{thm}\label{Thm: C(X) perfect iff order dual iff x is Xion} Let $X$ be a realcompact space. Denote by $S$ the union of the supports of all order continuous functionals\footnote{Equivalently, all compactly supported normal measures on $X$.} on $\cont(X)$. The following statements are equivalent. \begin{enumerate}
\item[(i)] There exists a vector lattice $\vlat{E}$ so that ${\mathrm C }(X)$ is lattice isomorphic to $\orderdual{\vlat{E}}$.
\item[(ii)] $\cont(X)$ is perfect.
\item[(iii)] $X$ is extremally disconnected and $\upsilon S = X$; that is,
\[
\cont(X) \ni u \longmapsto \left.u\right|_{S} \in {\mathrm C } (S)
\]is a lattice isomorphism. \end{enumerate} \end{thm}
\begin{proof} That (i) implies (ii) follows from \cite[Theorem~110.2]{Zaanen1983RSII}. The argument in the proof of \cite[Theorem 2]{Xiong1983} shows that (ii) implies (iii), and \cite[Theorem 1]{Xiong1983} shows that (iii) implies (i). Thus the statements (i)--(iii) are equivalent. \end{proof}
A naive attempt to generalise Theorem \ref{Thm: C(K) Dual space char order dual version} (iv) is to replace the $\ell^\infty$-direct sum in that result with the Cartesian product of the carriers of a maximal singular family in $\ordercontn{{\mathrm C }(X)}$. In the next result and the example that follows, we show that this approach fails.
\begin{prop}\label{Prop: Partial decomposition result for order dual C(X)} Let $X$ be an extremally disconnected realcompact space, and let $\cal{F}$ be a maximal (with respect to inclusion) singular family of positive order continuous linear functionals on $\cont(X)$. Consider the following statements. \begin{enumerate}
\item [(i)] The map
\[
\cont(X)\ni u \longmapsto (P_{\varphi}u)\in \displaystyle\prod_{\varphi\in\cal{F}}\carrier{\varphi}
\]
is a lattice isomorphism.
\item [(ii)] $\cont(X)$ is perfect.
\item[(iii)] There exists a vector lattice $\vlat{E}$ so that ${\mathrm C }(X)$ is lattice isomorphic to $\orderdual{\vlat{E}}$. \end{enumerate} Then (i) implies (ii), and (ii) and (iii) are equivalent. \end{prop}
\begin{proof} By Theorem \ref{Thm: C(X) perfect iff order dual iff x is Xion}, (ii) and (iii) are equivalent. Assume that (i) is true. By Theorem \ref{Thm: Properties of product of vector lattices.} (v) and (vii), $\ordercontnbidual{\cont(X)}$ is isomorphic to $\displaystyle\prod_{\varphi\in\cal{F}} \ordercontnbidual{(\carrier{\varphi})}$. But each $\carrier{\varphi}$ is perfect by Proposition \ref{Prop: Carriers in C(X) are perfect}, so that $\displaystyle\prod_{\varphi\in\cal{F}} \ordercontnbidual{(\carrier{\varphi})}$ is isomorphic to $\displaystyle\prod_{\varphi\in\cal{F}} \carrier{\varphi}$; hence $\cont(X)$ is isomorphic to $\ordercontnbidual{\cont(X)}$. \end{proof}
\begin{example}\label{Exm: C(X) decomponsition counterexample} As is well known, ${\mathrm C }(\beta\mathbb{N})=\ell^\infty$ is perfect, hence an order dual space. For every $x\in\mathbb{N}$, denote by $\delta_x:{\mathrm C }(\beta\mathbb{N})\to\mathbb{R}$ the point mass centred at $x$. Then $\cal{F}=\{\delta_x ~:~ x\in\mathbb{N}\}$ is a maximal singular family in $\ordercontn{{\mathrm C }(\beta\mathbb{N})} \cong \ell^1$. Since $\carrier{\delta_x}=\mathbb{R}$ for every $x\in\mathbb{N}$, it follows that $\displaystyle\prod\carrier{\delta_x}=\mathbb{R}^\omega$. Therefore $\displaystyle\prod\carrier{\delta_x}$ does not have a strong order unit. Since ${\mathrm C }(\beta\mathbb{N})$ contains a strong order unit, and lattice isomorphisms preserve strong order units, \[ {\mathrm C }(\beta\mathbb{N})\ni u \longmapsto (P_{\delta_x}u)\in\displaystyle\prod \carrier{\delta_x} \] is not an isomorphism. \end{example}
The final result of this section offers a solution to the decomposition problem for a space $\cont(X)$ which is an order dual space. We refer the reader to the notation used in Example \ref{Exm: Projective limits of bands} and Theorem \ref{Thm: Projective limit of perfect spaces is perfect}.
\begin{thm}\label{Thm: Structure theorem for C(X) a dual space} Let $X$ be an extremally disconnected realcompact space. Denote by $S$ the union of the supports of all order continuous functionals on $\cont(X)$. The following statements are equivalent. \begin{enumerate}
\item[(i)] There exists a vector lattice $\vlat{E}$ so that ${\mathrm C }(X)$ is lattice isomorphic to $\orderdual{\vlat{E}}$.
\item[(ii)] $\cont(X)$ is perfect.
\item[(iii)] $\upsilon S = X$.
\item[(iv)] $P_{\vlat{M}_{\rm n}}:\cont(X)\to \proj{\cal{I}_{\vlat{M}_{\rm n}}}$ is a lattice isomorphism. \end{enumerate} \end{thm}
\begin{proof} By Theorem~\ref{Thm: C(X) perfect iff order dual iff x is Xion}, it suffices to show that (ii) and (iv) are equivalent. Since $\carrier{\varphi}$ is perfect for every $0\leq \varphi \in \ordercontn{\cont(X)}$ by Proposition \ref{Prop: Carriers in C(X) are perfect}, this follows immediately from Corollary \ref{Cor: Perfect spaces as inverse limit of perfect carriers}. \end{proof}
\subsection{Structure theorems}\label{Subsection: Structure theorems}
Let $\vlat{E}$ be an Archimedean vector lattice. In Example \ref{Exm: Inductive limit of principle ideals} it is shown that the principal ideals of $\vlat{E}$ form a direct system in ${\bf NIVL}$, and that $\vlat{E}$ can be expressed as the direct limit of this system. In this section we exploit this result and the duality results in Section \ref{Section: Dual spaces} to obtain structure theorems for vector lattices and their order duals.
A frequently used technique in the theory of vector lattices is to reduce a problem to one confined to a fixed principal ideal $\vlat{E}_u$ of a space $\vlat{E}$. Once this is achieved, the problem becomes equivalent to one in a space ${\mathrm C }(K)$ of continuous functions on some compact Hausdorff space $K$ via the Kakutani Representation Theorem, see \cite{Kakutani1941} or \cite[Theorem 2.1.3]{Meyer-Nieberg1991}. For instance, this technique is used in \cite[Theorem 3.8.6]{Meyer-Nieberg1991} to study tensor products of Banach lattices. The following result is essentially a formalization of this method in the language of direct limits.
\begin{thm}\label{Thm: Structure theorem} Let $\vlat{E}$ be an Archimedean, relatively uniformly complete vector lattice. For all $0<u\leq v$ in $\vlat{E}$ there exist compact Hausdorff spaces $K_u$ and $K_v$ and injective, interval preserving normal lattice homomorphisms $e_{u,v}:{\mathrm C }(K_u)\to {\mathrm C }(K_v)$ and $e_u:{\mathrm C }(K_u)\to \vlat{E}$ so that the following is true. \begin{enumerate}
\item[(i)] $\vlat{E}_u$ is lattice isomorphic to ${\mathrm C }(K_u)$ for every $0<u\in\vlat{E}$.
\item[(ii)] $\cal{D}_\vlat{E}\ensuremath{\mathop{:}\!\!=} \left(({\mathrm C }(K_u))_{0<u\in\vlat{E}},(e_{u,v})_{u\leq v}\right)$ is a direct system in ${\bf NIVL}$.
\item [(iii)] $\cal{S}_\vlat{E}\ensuremath{\mathop{:}\!\!=} \left(\vlat{E},(e_u)_{0<u\in\vlat{E}}\right)$ is the direct limit of $\cal{D}_{\vlat{E}}$ in ${\bf NIVL}$.
\item[(iv)] $\vlat{E}$ is Dedekind complete if and only if $K_u$ is Stonean for every $0<u\in\vlat{E}$.
\item[(v)] If $\vlat{E}$ is perfect then $K_u$ is hyper-Stonean for every $0<u\in\vlat{E}$. \end{enumerate} \end{thm}
\begin{proof} According to \cite[Proposition 1.2.13]{Meyer-Nieberg1991} every principal ideal in $\vlat{E}$ is a unital $AM$-space. Therefore the statements in (i), (ii) and (iii) follow immediately from Example \ref{Exm: Inductive limit of principle ideals} and Kakutani's Representation Theorem for $AM$-spaces \cite{Kakutani1941}. The proof of (iv) follows immediately from Theorem \ref{Thm: Inductive Limit Permanence} and \cite[Proposition 2.1.4]{Meyer-Nieberg1991}.
For the proof of (v), assume that $\vlat{E}$ is perfect. Then, in particular, $\vlat{E}$ is Dedekind complete and has a separating order continuous dual. Therefore the same is true for each $\vlat{E}_u$. By (i), ${\mathrm C }(K_u)$ is Dedekind complete and has a separating order continuous dual, i.e. $K_u$ is hyper-Stonean. \end{proof}
\begin{cor}\label{Cor: Structure of duals ito spaces of measures} Let $\vlat{E}$ be an Archimedean, relatively uniformly complete vector lattice. There exist an inverse system $\cal{I}\ensuremath{\mathop{:}\!\!=} \left((\vlat{M}(K_\alpha))_{\alpha\in I},(p_{\beta,\alpha})_{\beta \succcurlyeq \alpha}\right)$ in ${\bf NIVL}$, with each $K_\alpha$ a compact Hausdorff space, and normal lattice homomorphisms $p_\alpha :\orderdual{\vlat{E}}\to \vlat{M}(K_\alpha)$, so that $\cal{S}\ensuremath{\mathop{:}\!\!=} \left(\orderdual{\vlat{E}},(p_\alpha)_{\alpha \in I}\right)$ is the inverse limit of $\cal{I}$ in ${\bf NVL}$. \end{cor}
\begin{proof} The result follows immediately from Theorems \ref{Thm: Structure theorem} and \ref{Thm: Dual of ind sys in VLIC is proj of duals in NVL}, and the Riesz Representation Theorem. \end{proof}
In order to obtain a converse of Corollary \ref{Cor: Structure of duals ito spaces of measures} we require a more detailed description of the interval preserving normal lattice homomorphisms $e_{u,v}:{\mathrm C }(K_u)\to {\mathrm C }(K_v)$ in Theorem \ref{Thm: Structure theorem}. Let $X$ and $Y$ be topological spaces and $p: X\to Y$ a continuous function. Recall from \cite[p.~21]{Bilokopytov2021} that $p$ is \emph{almost open} if for every non-empty open subset $U$ of $X$, $\text{int}\left( \overline{p\left[ U \right]} \right) \neq \emptyset$. It is clear that all open maps are almost open and thus every homeomorphism is almost open.
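The following two simple examples, which we include purely for illustration, separate these notions.

\begin{example} The map $q:\mathbb{R}\to\mathbb{R}$ given by $q(x)=x^2$ is not open, since $q[(-1,1)]=[0,1)$ is not open, but it is almost open: for every non-empty open $U\subseteq \mathbb{R}$ the image $q[U]$ contains a non-degenerate interval, so that $\text{int}\left(\overline{q[U]}\right)\neq\emptyset$. By contrast, any constant map $c:\mathbb{R}\to\mathbb{R}$ is continuous but not almost open, since $\overline{c[U]}$ is a singleton and therefore has empty interior. \end{example}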
\begin{prop}\label{Prop: Chacterization of injective interval preserving lattice homomorphisms on C(K)} Let $K$ and $L$ be compact Hausdorff spaces and $T:{\mathrm C }(K)\to{\mathrm C }(L)$ a positive linear map. Then $T$ is a lattice homomorphism if and only if there exist a unique $0<w\in{\mathrm C }(L)$ and a unique continuous function $p:\cozeroset{w}\to K$ so that \begin{eqnarray} T(u)(x) = \left\{ \begin{array}{lll} w(x)u(p(x)) & \text{if} & x\in \cozeroset{w}
\\ 0 & \text{if} & x\in\zeroset{w} \\ \end{array} \right.\label{EQ: Prop-Chacterization of injective interval preserving lattice homomorphisms on C(K)} \end{eqnarray} for all $u\in{\mathrm C }(K)$. In particular, $w=T({\mathbf 1}_K)$.
Assume that $T$ is a lattice homomorphism. Then the following statements are true.\begin{enumerate}
\item[(i)] $T$ is order continuous if and only if $p$ is almost open.
\item[(ii)] $T$ is injective if and only if $p[\cozeroset{w}]$ is dense in $K$.
\item[(iii)] $T$ is interval preserving if and only if $p[\cozeroset{w}]$ is ${\mathrm C }^\ast$-embedded in $K$ and $p$ is a homeomorphism onto $p[\cozeroset{w}]$. \end{enumerate} \end{prop}
\begin{proof} The first part of the result is well known, see for instance \cite[Theorem 4.25]{AbramovichAliprantis2002}. Now suppose that $T$ is a lattice homomorphism. Statement (i) follows from \cite[Theorem 4.4]{vanImhoff2018} or from \cite[Theorem 7.1 (iii)]{Bilokopytov2021}.
We prove (ii). Assume that $p[\cozeroset{w}]$ is dense in $K$. Let $u\in{\mathrm C }(K)$ satisfy $T(u)=0$. Then $w(x)u(p(x))=0$ for all $x\in \cozeroset{w}$. Hence $u(z)=0$ for all $z\in p[\cozeroset{w}]$. Since $p[\cozeroset{w}]$ is dense in $K$ it follows that $u=0$. Thus $T$ is injective. Conversely, suppose that $p[\cozeroset{w}]$ is not dense in $K$. Then there exists $0<u\in{\mathrm C }(K)$ so that $u(z)=0$ for all $z\in p[\cozeroset{w}]$; that is, $u(p(x))=0$ for all $x\in \cozeroset{w}$. Hence $T(u)(x) = w(x)u(p(x))=0$ for all $x\in \cozeroset{w}$. By definition $T(u)(x)=0$ for all $x\in \zeroset{w}$ so that $T(u)=0$. Therefore $T$ is not injective. Thus (ii) is proved.
Lastly we verify (iii). Suppose that $T$ is interval preserving. We first show that $p[\cozeroset{w}]$ is ${\mathrm C }^\ast$-embedded in $K$. Consider $0\leq f\in {\mathrm C}_{\mathrm b}(p[\cozeroset{w}])$. We must show that there exists a function $g\in{\mathrm C }(K)$ so that $g(z)=f(z)$ for all $z\in p[\cozeroset{w}]$. We may assume that $f\leq {\mathbf 1}_{p[\cozeroset{w}]}$. Define $v:L\to \mathbb{R}$ by setting \[ v(x)\ensuremath{\mathop{:}\!\!=} \left\{ \begin{array}{lll} w(x)f(p(x)) & \text{if} & x\in \cozeroset{w}
\\ 0 & \text{if} & x\in\zeroset{w} \\ \end{array} \right. \] for every $x\in L$. It is clear that $v$ is continuous on $\cozeroset{w}$ and on the interior of $\zeroset{w}$. At every other point $x\in L$, continuity of $v$ follows from the inequality $0\leq v \leq w$. From this last inequality and the fact that $T$ is interval preserving it follows that there exists $0\leq g\leq {\mathbf 1}_K$ so that $T(g)=v$. If $x\in \cozeroset{w}$ then $w(x)f(p(x)) = v(x) = (T(g))(x) = w(x)g(p(x))$ so that $f(p(x))=g(p(x))$; that is, $g(z)=f(z)$ for all $z\in p[\cozeroset{w}]$.
Next we show that $p$ is a homeomorphism onto $p[\cozeroset{w}]$. First we show that $p$ is injective. Consider distinct $x_0,x_1\in \cozeroset{w}$ and suppose that $p(x_0)=p(x_1)$. There exists $v \in {\mathrm C }(L)$ with $0 < v \leq w = T({\mathbf 1}_K)$ such that $v(x_0)=0$ and $v(x_1)>0$. Because $T$ is interval preserving there exists $0<u\leq {\mathbf 1}_K$ in ${\mathrm C }(K)$ so that $T(u)=v$. Then $u(p(x_0))=0$ and $u(p(x_1))>0$, contradicting the assumption that $p(x_0)=p(x_1)$. Therefore $p$ is injective.
It remains to verify that $p^{-1}$ is continuous. Let $(x_i)$ be a net in $\cozeroset{w}$ and $x\in\cozeroset{w}$ so that $(p(x_i))$ converges to $p(x)$ in $K$. Suppose that $(x_i)$ does not converge to $x$. Passing to a subnet of $(x_i)$ if necessary, we obtain a neighbourhood $V$ of $x$ so that $x_i\notin V$ for all $i$. Therefore there exists a function $0<v\leq w$ in ${\mathrm C }(L)$ so that $v(x)>0$ and $v(x_i)=0$ for all $i$. Because $T$ is interval preserving there exists a function $u\in{\mathrm C }(K)$ so that $T(u)=v$. In particular, $w(x)u(p(x))=v(x)>0$ so that $u(p(x))>0$, but $w(x_i)u(p(x_i))=v(x_i)=0$ so that $u(p(x_i))=0$ for all $i$. Therefore $(u(p(x_i)))$ does not converge to $u(p(x))$, contradicting the continuity of $u$. Hence $(x_i)$ converges to $x$ so that $p^{-1}$ is continuous.
Conversely, suppose that $p$ is a homeomorphism onto $p[\cozeroset{w}]$, and that $p[\cozeroset{w}]$ is ${\mathrm C }^\ast$-embedded in $K$. Let $0<u\in{\mathrm C }(K)$ and $0\leq v\leq T(u)$ in ${\mathrm C }(L)$. Define $f:p[\cozeroset{w}]\to\mathbb{R}$ by setting \[ f(z)\ensuremath{\mathop{:}\!\!=} \frac{1}{w(p^{-1}(z))}v(p^{-1}(z)),~~ z\in p[\cozeroset{w}]. \] Because $p^{-1}:p[\cozeroset{w}] \to \cozeroset{w}$ is continuous, $f$ is continuous. Furthermore, $0\leq f(z)\leq u(z)$ for all $z\in p[\cozeroset{w}]$. Therefore $f$ is a bounded continuous function on $p[\cozeroset{w}]$. By assumption there exists a continuous function $g:K\to \mathbb{R}$ so that $g(z)=f(z)$ for all $z\in p[\cozeroset{w}]$. Since $0\leq f\leq u$ on $p[\cozeroset{w}]$, the function $g$ may be chosen so that $0\leq g\leq u$. For $x\in\cozeroset{w}$ we have \[ T(g)(x)=w(x)g(p(x))=w(x)f(p(x))=\frac{w(x)v(x)}{w(x)}=v(x), \] and for $x\in\zeroset{w}$ we have $v(x)=0=T(g)(x)$. Therefore $T(g)=v$ so that $T$ is interval preserving. \end{proof}
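The following example, which we include purely for illustration, shows how the conditions in the preceding proposition may be applied.

\begin{example} Let $K=L=[0,1]$, $w(x)=x$, and let $p$ be the identity on $\cozeroset{w}=(0,1]$, so that $T(u)(x)=xu(x)$ for all $u\in{\mathrm C }([0,1])$. Then $T$ is a lattice homomorphism and, since $p[\cozeroset{w}]=(0,1]$ is dense in $[0,1]$, $T$ is injective by (ii). However, $T$ is not interval preserving: although $p$ is a homeomorphism onto $p[\cozeroset{w}]$, the set $(0,1]$ is not ${\mathrm C }^\ast$-embedded in $[0,1]$. Indeed, the function \[ v(x)\ensuremath{\mathop{:}\!\!=} \frac{x}{2}\left(1+\sin\frac{1}{x}\right) \text{ for } x\in(0,1], \qquad v(0)\ensuremath{\mathop{:}\!\!=} 0, \] satisfies $0\leq v\leq T({\mathbf 1}_K)=w$ in ${\mathrm C }(L)$, but the only candidate for a preimage, $g(x)=\frac{1}{2}\left(1+\sin\frac{1}{x}\right)$, has no continuous extension to $x=0$. Hence $v\notin T\left[\left[0,{\mathbf 1}_K\right]\right]$. \end{example}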
\begin{thm}\label{Thm: Structure theorem for order duals} Let $\vlat{E}$ be a vector lattice. The following statements are equivalent. \begin{enumerate}
\item[(i)] There exists a relatively uniformly complete Archimedean vector lattice $\vlat{F}$ so that $\vlat{E}$ is lattice isomorphic to $\vlat{F}^\sim$.
\item[(ii)] There exists an inverse system $\cal{I}\ensuremath{\mathop{:}\!\!=} \left((\vlat{M}(K_\alpha))_{\alpha \in I},(p_{\beta,\alpha})_{\beta \succcurlyeq \alpha}\right)$ in ${\bf NIVL}$, with each $K_\alpha$ a compact Hausdorff space, such that the following holds. \begin{enumerate}
\item[(a)] For each $\beta \succcurlyeq \alpha$ in $I$ there exist a function $w\in {\mathrm C }(K_\beta)^+$ and a homeomorphism $t:\cozeroset{w}\to t[\cozeroset{w}]\subseteq K_\alpha$ onto a dense ${\mathrm C }^\ast$-embedded subspace of $K_\alpha$ so that for every $\mu\in \vlat{M}(K_\beta)$,
\[
p_{\beta , \alpha}(\mu)(A) = \int_{t^{-1}[A]} w \thinspace d\mu,~~ A\in\borel{K_\alpha}.
\]
\item[(b)] For every $\alpha\in I$ there exists a normal lattice homomorphism ${p_\alpha :\vlat{E}\to\vlat{M}(K_\alpha)}$ such that $\proj{\cal{I}} = \left( \vlat{E},(p_\alpha)_{\alpha \in I} \right)$.
\end{enumerate} \end{enumerate} \end{thm}
\begin{proof}[Proof that (i) implies (ii)] By Theorem \ref{Thm: Structure theorem} there exist a direct system \linebreak $\cal{D} \ensuremath{\mathop{:}\!\!=} \left( ({\mathrm C }(K_\alpha))_{\alpha\in I}, (e_{\alpha,\beta})_{\alpha \preccurlyeq \beta} \right)$ in ${\bf NIVL}$, with each $K_\alpha$ a compact Hausdorff space, and interval preserving normal lattice homomorphisms ${e_\alpha: {\mathrm C }(K_\alpha)\to \vlat{F}}$ so that $\cal{S}\ensuremath{\mathop{:}\!\!=} \left(\vlat{F},(e_\alpha)_{\alpha\in I}\right)$ is the direct limit of $\cal{D}$ in ${\bf NIVL}$. By Theorem \ref{Thm: Dual of ind sys in VLIC is proj of duals in NVL} and the Riesz Representation Theorem \cite[Theorem 18.4.1]{Semadeni1971}, $\orderdual{\cal{S}} \ensuremath{\mathop{:}\!\!=} \left( \vlat{E},(e_\alpha^\sim)_{\alpha \in I}\right)$ is the inverse limit of the inverse system $\orderdual{\cal{D}} \ensuremath{\mathop{:}\!\!=} \left(\vlat{M}(K_\alpha),(e_{\alpha,\beta}^\sim)_{\alpha \preccurlyeq \beta} \right)$ in ${\bf NVL}$. Thus the claim in (b) holds.
Fix $\beta \succcurlyeq \alpha$ in $I$. We show that $e_{\alpha,\beta}^\sim$ is of the form given in (a). By Proposition \ref{Prop: Chacterization of injective interval preserving lattice homomorphisms on C(K)} there exist $w\in {\mathrm C }(K_\beta)^+$ and a homeomorphism $t:\cozeroset{w}\to t[\cozeroset{w}]\subseteq K_\alpha$ onto a dense ${\mathrm C }^\ast$-embedded subspace of $K_\alpha$ so that \[
e_{\alpha,\beta}(u) (x) = \left\{ \begin{array}{lll} w(x)u(t(x)) & \text{if} & x\in \cozeroset{w}
\\ 0 & \text{if} & x\in\zeroset{w} \\ \end{array} \right. \] for all $u\in {\mathrm C }(K_\alpha)$. Let $T:{\mathrm C }(K_\alpha)\to {\mathrm C}_{\mathrm b}(\cozeroset{w})$ and $M_w:{\mathrm C}_{\mathrm b}(\cozeroset{w})\to {\mathrm C }(K_\beta)$ be given by $T(u)=u\circ t$ and $M_w (v) = wv$ for all $u\in{\mathrm C }(K_\alpha)$ and $v\in {\mathrm C}_{\mathrm b}(\cozeroset{w})$, with $wv$ defined as identically zero outside $\cozeroset{w}$. Then $T$ and $M_w$ are positive operators and $e_{\alpha , \beta} = M_w\circ T$; hence $e_{\alpha,\beta}^\sim = T^\sim\circ M_w^\sim$. It follows from \cite[Theorems 3.6.1 \& 9.1.1]{Bogachev2007} that $T^\sim( \mu)(A) = \mu(t^{-1}[A])$ for every $\mu\in \vlat{M}(\cozeroset{w})$ and $A\in\borel{K_\alpha}$. The Riesz Representation Theorem shows that, for each $\nu \in \vlat{M}(K_\beta)$ and every Borel set $B$ in $\cozeroset{w}$, \[ M_w^\sim(\nu)(B) = \int_Bw \thinspace d\nu. \] Hence for $\mu\in \vlat{M}(K_\beta)$ and $A\in\borel{K_\alpha}$, \[ e_{\alpha,\beta}^\sim (\mu) (A) = \int_{t^{-1}[A]} w \thinspace d\mu \] as claimed. \end{proof}
\begin{proof}[Proof that (ii) implies (i)] Fix $\beta \succcurlyeq \alpha$ in $I$ and consider the function $w\in {\mathrm C }(K_\beta)^+$ and the homeomorphism $t:\cozeroset{w}\to t[\cozeroset{w}]\subseteq K_\alpha$ given in (b). Define the map $e_{\alpha,\beta}:{\mathrm C }(K_\alpha)\to {\mathrm C }(K_\beta)$ as \[ e_{\alpha,\beta}(u)(x) = \left\{ \begin{array}{lll} w(x)u(t(x)) & \text{if} & x\in \cozeroset{w}
\\ 0 & \text{if} & x\in\zeroset{w} \\ \end{array} \right. \] for all $u\in {\mathrm C }(K_\alpha)$. We show that $\cal{D}\ensuremath{\mathop{:}\!\!=} \left(({\mathrm C }(K_\alpha))_{\alpha\in I},(e_{\alpha,\beta})_{\alpha \preccurlyeq \beta}\right)$ is a direct system in ${\bf NIVL}$.
It follows by Proposition \ref{Prop: Chacterization of injective interval preserving lattice homomorphisms on C(K)} that each $e_{\alpha,\beta}$ is an injective interval preserving normal lattice homomorphism. It remains to show that $e_{\alpha,\gamma} = e_{\beta,\gamma}\circ e_{\alpha,\beta}$ for all $\alpha\preccurlyeq \beta \preccurlyeq \gamma$ in $I$. An argument similar to that in the proof that (i) implies (ii) shows that $e_{\alpha,\beta}^\sim = p_{\beta,\alpha}$ for all $\alpha \preccurlyeq\beta$; hence $e_{\alpha,\beta}^{\sim\sim} = p_{\beta,\alpha}^\sim$. By Proposition \ref{Prop: Dual system of projective system is inductive system}, $\cal{I}^\sim \ensuremath{\mathop{:}\!\!=} \left( (\vlat{M}(K_\alpha)^\sim)_{\alpha\in I}, (p^\sim_{\beta , \alpha})_{\beta\succcurlyeq \alpha}\right)$ is a direct system in ${\bf NIVL}$ and therefore $e_{\alpha,\gamma}^{\sim\sim}=e_{\beta,\gamma}^{\sim\sim}\circ e_{\alpha,\beta}^{\sim\sim}$ for all $\alpha \preccurlyeq \beta \preccurlyeq \gamma$ in $I$. Since ${\mathrm C }(K_\alpha)$ has a separating order dual for every $\alpha\in I$, it follows that $e_{\alpha,\gamma}=e_{\beta,\gamma}\circ e_{\alpha,\beta}$. Hence $\cal{D}$ is a direct system in ${\bf NIVL}$.
Since each $e_{\alpha,\beta}$ is injective, $\ind{\cal{D}}\ensuremath{\mathop{:}\!\!=} \left(\vlat{F},(e_\alpha)_{\alpha\in I}\right)$ exists in ${\bf NIVL}$ by Theorem \ref{Thm: Existence of Inductive Limits in NVLI}. Since ${\mathrm C }(K_\alpha)$ is Archimedean and relatively uniformly complete for each $\alpha\in I$ it follows from Theorem \ref{Thm: Inductive Limit Permanence} (i) and (v) that $\vlat{F}$ is also Archimedean and relatively uniformly complete. Because $e_{\alpha,\beta}^\sim = p_{\beta,\alpha}$ for all $\alpha\preccurlyeq\beta$ in $I$, $\orderdual{\cal{D}}=\cal{I}$. Therefore, by Theorem \ref{Thm: Dual of ind sys in VLIC is proj of duals in NVL}, there exists a lattice isomorphism $T: \vlat{F}^\sim \to \vlat{E}$ such that the diagram \[ \begin{tikzcd}[cramped] \vlat{F}^\sim \arrow[rd, "e^\sim_\alpha"'] \arrow[rr, "T"] & & \vlat{E} \arrow[dl, "p_\alpha"]\\ & \vlat{M}(K_\alpha) \end{tikzcd} \] commutes for all $\alpha\in I$. This completes the proof. \end{proof}
\end{document}
\begin{document}
\begin{abstract}
For any admissible subcategory of the bounded derived category of coherent sheaves on a smooth proper variety, we prove that sections of the canonical bundle impose a strong constraint on the supports of the objects of the subcategory or its semiorthogonal complement. We also show that admissible subcategories are rigid under the actions of topologically trivial autoequivalences. As applications of these results, we prove that the derived category of various minimal varieties in the sense of the minimal model program admit no non-trivial semiorthogonal decompositions, generalizing the result for curves due to the second author to higher dimensions. The case of minimal surfaces is further investigated in detail.
\end{abstract}
\maketitle \tableofcontents
\section{Introduction} Let $X$ be a smooth proper variety, or more generally a Deligne-Mumford stack (DM stack for short) defined over a field $ \bfk $. The bounded derived category of coherent sheaves on $X$, which will be denoted by $
\bfD^{b} \lb \coh X \rb $ in this paper, has been acquiring considerable attention from both mathematical and physical points of view.
Sometimes $
\bfD^{b} \lb \coh X \rb $, as a triangulated $\bfk$-linear category, admits a \emph{semiorthogonal decomposition} (\emph{SOD} for short) into smaller triangulated subcategories. Since an SOD is a fundamental structure of triangulated categories and often arises as the categorical incarnation of the deep geometry of $X$, it is natural to ask if one can understand the general nature of SODs of $
\bfD^{b} \lb \coh X \rb $.
If the Kodaira dimension of $X$ is non-negative, the most important source of SODs is the \emph{minimal model program} (\emph{MMP} for short). It is conjectured in general and has been verified in some cases that each step of MMP induces a non-trivial SOD of $\bfD ^{ b } \lb \coh X \rb$ (see \cite{MR2483950} and references therein). Note that the conjecture, in particular, would imply: \begin{conjecture}\label{cj:MMP_implies_SOD_minimal_case} If $\bfD ^{ b } \lb \coh X \rb$ does not admit any SOD, then $X$ should be minimal. \end{conjecture}
When $\dim X = 1$, it is shown in \cite[Theorem 1.1]{MR2838062} that $\bfD ^{ b } \lb \coh X \rb$ admits no SOD if and only if $X$ is minimal in the sense of minimal model program ($\iff g ( X ) \ge 1$). This confirms \pref{cj:MMP_implies_SOD_minimal_case} in dimension 1, and also its converse. Starting from dimension 2, however, the converse is not correct anymore.
Suppose that $X$ satisfies the following condition. \begin{equation}\label{eq:acyclicity}
\bR \Gamma (X, \cO_X) \simeq \bfk \in
\bfD ^{ b } ( \Spec \bfk ). \end{equation} Then any line bundle $L$ on $X$ is an \emph{exceptional object} (see, e.g., \cite[Definition 1.57]{MR2244106}), so that it induces a non-trivial SOD $
\bfD ^{ b } \lb \coh X \rb = \la \la L\ra^{\perp}, \la L\ra \ra $. Here $
\la L\ra $ denotes the smallest triangulated subcategory containing $ L $, and $ \la L\ra^{\perp} $ is its right orthogonal complement (\cite[Definition 1.42]{MR2244106}). Although the condition \pref{eq:acyclicity} is typically satisfied by varieties of negative Kodaira dimension, such as varieties of Fano type in characteristic zero, there are minimal varieties which also satisfy \pref{eq:acyclicity}. Already in dimension 2, classical Enriques surfaces and Godeaux surfaces are such examples. Therefore, even for varieties of non-negative Kodaira dimensions, there are more SODs than MMPs\footnote{Recently several groups worked on exceptional objects of the derived categories of surfaces of general type satisfying \pref{eq:acyclicity}, and discovered several examples of (quasi-)phantom categories (\cite{MR3096524}, \cite{MR3062745}, \cite{MR3361723}, and \cite{MR3077896}, to name a few). Another remarkable discovery was that one can also produce a counter-example for the Jordan-H\"older property of SODs on such a surface (see \cite{MR3177299}; an easier example was found on a rational threefold in \cite{kuznetsov2013simple}).}.
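For completeness, here is the one-line computation behind the claim that $L$ is exceptional, using only \pref{eq:acyclicity} and the invertibility of $L$:

```latex
\begin{equation*}
  \Hom _{ \bfD ^{ b } \lb \coh X \rb } \lb L, L [ i ] \rb
  \simeq H ^{ i } \lb X, L ^{ - 1 } \otimes L \rb
  \simeq H ^{ i } \lb X, \cO _X \rb
  \simeq
  \begin{cases}
    \bfk & \text{if } i = 0, \\
    0 & \text{otherwise},
  \end{cases}
\end{equation*}
```

which is precisely the definition of an exceptional object.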
The purpose of this paper is to carry out the initial step toward understanding/classifying, in arbitrary dimension, the SODs of $
\bfD^{b} \lb \coh X \rb $ for $X$ with non-negative Kodaira dimension. We prove two kinds of constraints which should be fulfilled by \emph{any} SOD of $
\bfD^{b} \lb \coh X \rb $. As an application we prove the non-existence of non-trivial SODs for many $X$ which are (as expected) minimal.
One of the constraints on SODs is provided by global/local sections of the canonical bundle $\omega _{ X }$. In this paper, the base locus of the complete canonical linear system will be denoted by
$ \Bs| \omega_X |$.
\begin{theorem}[{$=$ special case of \pref{th:pg positive}}] Let $X$ be a smooth proper variety and $
\bfD ^{ b } \lb \coh X \rb = \la \cA, \cB \ra $ an SOD. Then at least one of the following holds.
\begin{enumerate}[(a)]
\item
the support of any object in $ \cB $
is contained in
$ \Bs| \omega_X |$.
\item
the support of any object in $ \cA $
is contained in
$ \Bs| \omega_X |$. \end{enumerate} \end{theorem}
This will be used to show the non-existence of SOD for some $X$. In this paper we always use the following general criterion to show the non-existence of SODs.
\begin{proposition}[{$=$special case of \pref{cr:iff}}]\label{pr:iff_intro} Let $X$ be a smooth proper variety. Consider an SOD $
\bfD ^{ b } \lb \coh X \rb = \la \cA,\cB\ra. $ Then it is trivial, i.e. $\cA=0$ or $\cB=0$, if and only if $
\cA \otimes \omega_X \subset \cA \subset \bfD ^{ b } \lb \coh X \rb $ holds. \end{proposition}
As an immediate corollary, we obtain the following theorem. \begin{theorem}
Let $X$ be a smooth proper variety such that each connected component of $\Bs| \omega_X |$ is contained in an open subset of $X$ on which $\omega_X$ is trivial. Then $\bfD ^{ b } \lb \coh X \rb$ admits no non-trivial SOD. \end{theorem}
A special case of this is
\begin{corollary}\label{cr:main_cor} Let $X$ be a smooth proper variety such that
$\Bs| \omega_X |$ is a finite set (possibly empty). Then $\bfD ^{ b } \lb \coh X \rb$ has no non-trivial SOD. \end{corollary}
\noindent In particular, global generation of the canonical bundle implies the non-existence of non-trivial SODs. Examples of such varieties are submanifolds of abelian varieties (\cite[Lemma 3.11]{MR0360582}) and complete intersections with non-negative Kodaira dimensions in projective spaces. This is a far-reaching generalization of \cite[Theorem 1.1]{MR2838062}.
The other constraint on SODs is the rigidity under the action of topologically trivial autoequivalences.
\begin{theorem}[{$=$ a special case of \pref{th:rigidity}}] Let $X$ be a smooth projective variety over $\bfk$ and $ \bfD ^{ b } \lb \coh X \rb = \la \cA, \cB \ra $ an SOD. Then for any line bundle $
L $ satisfying $
[ L ] \in \PPic _{ X / \bfk }^0 $, we have the equality of subcategories
$
\cA \otimes L = \cA \subset \bfD ^{ b } \lb \coh X \rb. $
\end{theorem}
\noindent Immediately we obtain \begin{corollary} [$=$ \pref{cr:trivial_chern_class}] Let $X$ be a smooth projective variety satisfying $
[ \omega_X ] \in \PPic _{ X / \bfk }^0 $. Then $\bfD ^{ b } \lb \coh X \rb$ admits no SOD. \end{corollary}
\begin{corollary}\label{cr:the_converse_does_not_hold} Let $X$ be the product of a bielliptic surface with an abelian variety. Then $\bfD ^{ b } \lb \coh X \rb$ admits no SOD. \end{corollary}
\noindent Since the canonical bundle of the variety $X$ in \pref{cr:the_converse_does_not_hold} admits no global section (in other words $
\Bs | \omega_X | = X $), the converse of \pref{cr:main_cor} does not hold at all in dimensions at least two.
In \pref{sc:Surfaces} we closely study SODs of minimal surfaces. We obtain satisfactory results for the cases $
\kappa ( X )
=
0 $ and $1$, where $
\kappa ( X ) $ is the Kodaira dimension of $X$. Below is the summary.
\begin{theorem} Let $X$ be a smooth projective minimal surface. \begin{enumerate}
\item If $\kappa(X)=0$, then $\bfD ^{ b } \lb \coh X \rb$ admits a non-trivial SOD if and only if $X$ is a classical Enriques surface.
\item If $\kappa(X)=1$ and $p_g (X) > 0$, then $\bfD ^{ b } \lb \coh X \rb$ admits no SOD. \end{enumerate} \end{theorem}
\noindent In the study of SODs of (quasi-)elliptic fibrations (\pref{sc:kappa=1}), \pref{th:rigidity} will be effectively used on multiple fibers.
For the case $\kappa(X)=2$ we have to impose a rather strong assumption to prove the non-existence of SODs. We believe that this assumption is technical and should eventually be removed.
\begin{theorem}[$=$ \pref{th:negative_definite}] Let $X$ be a minimal smooth projective surface of general type with $ \dim_{\bfk} H^0(X, \omega_X) > 1 $. Assume the following condition (*). \begin{quote} (*)
For any one-dimensional connected component $Z \subset \Bs| \omega_X |$, its intersection matrix is negative definite. \end{quote} Then $\bfD ^{ b } \lb \coh X \rb$ admits no SOD. \end{theorem}
\noindent If there exists a local section of $\omega_X$ defined on an infinitesimal neighborhood of $\Bs| \omega_X |$, arguments similar to those in the proof of \pref{th:pg positive} work. See \pref{sc:Local_situation} and \pref{cr:induction} for the precise statements. This will be effectively used in the proof of \pref{th:negative_definite}.
Our results for minimal surfaces apparently indicate that the existence/non-existence of SOD corresponds to the conditions $p_g>0/=0$. This resembles the (conjectural) dichotomy of infinite/finite generation of the Chow group of zero-cycles on surfaces; i.e., a theorem of Mumford \cite{MR0249428} and the Bloch conjecture \cite{MR0371891}.
In \pref{sc:Toward_higher_dimensions}, we briefly treat varieties of dimensions greater than $2$. There is nothing new in the case $\kappa=0$, and we discuss the case $\kappa=1$. A new difficulty, which does not appear in dimension $2$, then shows up. We illustrate the issue by an example due to Keiji Oguiso.
Finally, in \pref{sc:twisted_sheaves} we generalize our results to the category of twisted sheaves. It turns out that our arguments go through without essential change, though there are a couple of technical issues to be settled.
Our results so far seem to imply the following general principle: \begin{quote} For ``most'' minimal $X$ with non-negative Kodaira dimension, there is no SOD of $
\bfD ^{ b } \lb \coh X \rb $; i.e., the converse of \pref{cj:MMP_implies_SOD_minimal_case} is true. \end{quote}
The precise meaning of ``most'', in particular in higher dimensions, is very vague at this moment and remains to be made precise. Note that the principle can be regarded as a special case of the following assertion: \begin{quote} For ``most'' $X$ with non-negative Kodaira dimension, there are no SODs of $
\bfD^{b} \lb \coh X \rb $ other than those originating from the MMPs starting from $X$. \end{quote} This generalized assertion can be checked for some non-minimal surfaces, and it will be discussed in future work(s).
\begin{not_and_conv} The base field $\bfk$ will be assumed to be algebraically closed, without loss of generality. In fact, if $\bfk$ is not algebraically closed, any SOD of $
\bfD ^{ b } \lb \coh X \rb $ induces an SOD of $
\bfD \lb X \otimes_{\bfk} \overline{\bfk} \rb $, where $\overline{\bfk}$ is the algebraic closure of $\bfk$, which is invariant under the action of the absolute Galois group $
\Aut \lb \overline{\bfk} / \bfk \rb $ (\cite[Proposition 5.1]{MR2801403}). Since $X$ is connected, the absolute Galois group acts transitively on the set of connected components $
\pi _{ 0 } \lb X \otimes _{ \bfk } \overline{\bfk} \rb $. Therefore in order to show the non-existence of SOD for $
\bfD ^{ b } \lb \coh X \rb $, it is enough to show it for $
\bfD ^{ b } ( \Xbar ) $ for a connected component $
\Xbar \subset X \otimes_{\bfk} \overline{\bfk} $.
Any Deligne-Mumford stack in this paper will be assumed to be connected and smooth over $\bfk$, unless otherwise stated.
The following standard symbols will be used. \begin{itemize}
\item (geometric genus) $
p _{ g } ( X )
= \dim H^0 ( X, \omega _X )
= \dim \Hom _{ \bfD ^{ b } \lb \coh X \rb } ( \cO_X, \omega_X ) $
\item (irregularity) $
q ( X ) = \dim H^1 ( X, \cO _X ) $
\item (canonical sheaf) $
\omega _{ X } $
\end{itemize} \end{not_and_conv}
\begin{acknowledgements*} The authors would like to thank Yujiro Kawamata and Mihnea Popa for their corrections and comments about stacks, and Keiji Oguiso for useful discussions and providing them with the example. They are indebted to Marcello Bernardara for suggesting generalization to twisted sheaves, and Kazuhiro Konno for informing them of folklore conjectures on the canonical linear systems of surfaces of general type and for many valuable discussions. They would also like to thank Alexey Bondal, Andrei C\u{a}ld\u{a}raru, Daniel Huybrechts, and Alexander Kuznetsov for stimulating discussions about support of objects in the derived category. They would also like to thank Pawel Sosna for carefully reading the first draft and providing them with many corrections and suggestions, David Favero for communicating to them the paper \cite{rosay2009some}, and Hiroyuki Minamoto for informing them of the concise proof of \pref{lm:classical_generator} which is reproduced below in this paper.
This work was initiated while the second author was visiting the University of Michigan. He would like to thank Mircea Musta\c{t}\u{a} for his support and hospitality. The first author was partially supported by Grant-in-Aid for Scientific Research (S) (No. 22224001), and the second author was partially supported by JSPS fellowship (22-849), Grant-in-Aid for Scientific Research ( 60646909, 16H05994, 16K13746, 16H02141, 16K13743, 16K13755, 16H06337) and the Inamori Foundation. \end{acknowledgements*}
\section{Preliminaries}
\subsection{Deligne-Mumford stack}
We begin with a couple of notions about Deligne-Mumford (DM for short) stacks.
\begin{definition} Let $X$ be a DM stack. A \emph{closed point} of $X$ is an irreducible reduced closed substack of dimension zero. For a closed point $
\iota \colon x \hto X $, the sheaf $\iota_{*}\cO_x$ will be denoted by $\bfk ( x )$. This situation will be simply denoted by $x\in X$. \end{definition}
\begin{definition}\label{df:base_locus} Let $X$ be a smooth DM stack. The \emph{base locus of the complete canonical linear system} $
\Bs | \omega_X | $ is the closed substack of $X$ defined by the ideal \begin{align}
\Image \lb \Hom ( \omega _{ X } ^{ - 1 }, \cO _X ) \otimes \omega _{ X } ^{ - 1 }
\xto[]{\ev}
\cO _X \rb \subset \cO _X. \end{align} \end{definition} See \cite[Application (14.2.7)]{MR1771927} for the correspondence between closed substacks and coherent ideal sheaves.
\begin{definition}\label{df:stacky_locus} Let $X$ be a smooth proper DM stack. By \cite[Corollary 1.3.(1)]{MR1432041}, $X$ admits a coarse moduli algebraic space $
\pi \colon X \to |X| $. The \emph{stacky locus}, which will be denoted by $\cS\subset X$, is the complement of the maximal open substack of $X$ on which $\pi$ is an isomorphism to its image. A closed point $
\iota \colon x \hookrightarrow X $ is said to be \emph{non-stacky} if $
\iota $ factors through $
X \setminus \cS $. \end{definition}
\begin{definition}\label{df:support} Let $X$ be a smooth DM stack, and $E\in \bfD ^{ b } \lb \coh X \rb$ a bounded complex of coherent sheaves on $X$. The \emph{support of $E$} is the union of the supports of its cohomology sheaves: i.e., $
\Supp{E} = \bigcup_i \Supp{ \cH ^i(E)} _{ \mathrm{red} }. $ By definition, $
\Supp{ E } $ is a closed substack of $
X $. If $
\Supp E $ is contained in a closed substack $
Z \subset X $, we say that\emph{ $
E $ is supported in $
Z $}. \end{definition}
\begin{remark}\label{rm:rouquier} The support of $E$ can be alternatively defined as the complement of the maximal open substack of $X$ on which $E$ is zero.
If $E$ is a coherent sheaf, we can introduce the natural stack structure on $
\Supp E $ (which is not necessarily reduced) in such a way that $E$ is isomorphic to the pushforward of a coherent sheaf on $\Supp E$ (see \cite[Section 1.1]{Huybrechts-Lehn}). In general, let $Z\subset X$ be a reduced closed substack and $
E \in \bfD ^{ b } \lb \coh X \rb $ a bounded complex of coherent sheaves supported in $Z$. Then by \cite[Lemma 7.40]{MR2434186} there exists a positive integer $n>0$ and a bounded complex of coherent sheaves $
E ' \in \bfD ^{ b } \lb \coh n Z \rb $ such that $
E \simeq \iota_*E' $, where $
\iota \colon n Z \hto X $ is the natural closed immersion. The proof uses the Artin-Rees lemma in an essential way, so we do not have control over the value of $n$. If we could introduce a sufficiently thin stack structure on the support of complexes, that would be useful to improve the results of this paper. \end{remark}
\begin{lemma}\label{lm:invariant} Let $X$, $E$, and $Z$ be as in \pref{rm:rouquier}. Suppose that $L$ is a line bundle on $X$ which is trivial on an open neighborhood of each connected component of $Z$. Then $
E \simeq E\otimes L. $ \end{lemma}
\begin{proof} If $
Z _{ 1 }, \dots, Z _{ m } $ are the connected components of $Z$, then there are objects $
E _{ i } $ whose supports are contained in $Z _{ i }$ and $
E \simeq \oplus _{ i = 1 } ^{ m } E _{ i } $. Therefore we may and will assume that $Z$ itself is connected in the rest of the proof.
Let $
\iota \colon U \hto X $ be an open neighborhood of $Z$ on which $L$ is trivial. Since $E$ is supported in $Z$, the natural morphism $
E \to \iota_*\iota^*E $ is an isomorphism. The same holds for $ E \otimes L $, and hence \begin{equation}
E
\simeq \iota_*\iota^*E
\simeq \iota_*(\iota^*E \otimes L|_U)
\simeq \iota_*\iota^*(E \otimes L)
\simeq E \otimes L. \end{equation} \end{proof}
Next we recall the Serre duality for tame DM stacks. Recall that a DM stack over a field is \emph{tame} if the orders of the stabilizer groups are not divisible by the characteristic of the base field. In this paper the dualizing sheaf of $X$ will be denoted by $\omega_X$.
\begin{fact} Let $X\to\Spec \bfk$ be a tame smooth proper DM stack of dimension $n$ over a field $\bfk$. Then $- \otimes _{ X } \omega_X [ n ]$ is a Serre functor of $
\bfD ^{ b } \lb \coh X \rb $. \end{fact}
We will use the following useful lemma from \cite[Proposition 1.5]{Bondal-Orlov_semiorthogonal}. \begin{lemma}\label{lm:useful} Let $X$ be a DM stack and $x\in X$ a non-stacky closed point. Take an upper bounded complex of coherent sheaves $
E \in \bfD ^{ - } \lb \coh X \rb $. If $
x \in \Supp E $, then $
\Hom ( E, \bfk ( x ) [ i ] ) \neq 0 $ for some $
i \in \bZ $. Moreover if $X$ is tame, smooth, and proper, then $
\Hom ( \bfk ( x ) [ j ] , E ) \neq 0 $ also holds for some $
j \in \bZ $. \end{lemma} \begin{proof} Since the support of an object is invariant under tensoring with an invertible sheaf, one can reduce the second assertion to the first by applying the Serre functor to the object $E$.
Suppose that the closed point $x$ is contained in $\Supp E$. Set $
m = \max \lc j \mid x \in \Supp H ^{ j } ( E ) \rc. $ We have a distinguished triangle $E^{\leq m} \to E \to E^{>m} \to E^{\leq m}[1]$, where $E^{>m}$ (resp. $E^{\leq m}$) is the upper (resp. lower) truncation of $E$. Since $
x \nin \Supp \lb E^{>m} \rb $, we have $\Hom(E^{>m}, \bfk(x)[p] ) =0$ for all $p \in \bZ$. Hence we see $\Hom(E, \bfk(x)[-m] ) \simeq \Hom(E ^{\leq m}, \bfk(x)[-m])$. Moreover we have \begin{equation}\label{eq:support_closed_point} \Hom(E ^{\leq m}, \bfk(x)[-m]) \simeq \Hom (H^m(E)[-m], \bfk(x)[-m]), \end{equation} since $
\Hom \lb (E^{\leq m})^{\leq m-1}, \bfk(x)[-m] \rb
=
\Hom \lb E^{\leq m - 1}, \bfk(x)[-m] \rb
=
0 $ by the degree reason. Since the right hand side of \eqref{eq:support_closed_point} is non-zero, we conclude the proof.
\end{proof}
For the sake of completeness, here we include the following lemma.
\begin{lemma}[Nakayama-Azumaya-Krull]\label{lm:Derived_NAK} Let $X$ be a scheme, $
\iota \colon x \hto X $ a closed point, and $
E \in \bfD ^{ - } ( \coh X ) $ a complex of coherent sheaves on $X$ bounded above. If $
\bL \iota^*E = 0 $, then $
x \nin \Supp{E} $. \end{lemma} \begin{proof} See \cite[Lemma 3.29 and Exercise 3.30]{MR2244106}. \end{proof}
\subsection{(Semi)orthogonal decomposition}
We recall the notion of (semi)orthogonal decompositions and show the non-existence of orthogonal decompositions for stacks which admit a non-stacky closed point. This is well known for varieties (see, e.g., \cite[Proposition 3.10]{MR2244106}). The proof given below was suggested by Yujiro Kawamata. Recall that a full subcategory $
\cA \subset \cC $ is said to be \emph{strict} if any object in $
\cC $ which is isomorphic to an object in $
\cA $ is already contained in $
\cA $.
\begin{definition}\label{df:SOD} A pair of strictly full triangulated subcategories $\cA,\cB$ of a triangulated category $\cT$ is a \emph{semiorthogonal decomposition} if the following conditions are satisfied: \begin{itemize} \item $\Hom_{\cT}(b,a)=0$ for all $a\in\cA$ and $b\in\cB$. \item Any object $x\in\cT$ is decomposed into a pair of objects $a\in\cA$ and $b\in\cB$ by a distinguished triangle \begin{equation}\label{eq:decomposing_triangle}
b \to x \to a \to b [ 1 ]. \end{equation} \end{itemize} This situation will be denoted by the symbol $
\cT = \la \cA, \cB \ra $. If $\Hom_{\cT}(\cA, \cB)=0$ also holds, the decomposition is called an \emph{orthogonal decomposition} (OD for short) and denoted by $
\cT = \cA \oplus \cB $. In this case the triangle \eqref{eq:decomposing_triangle} splits and we obtain the direct sum decomposition $
x \simeq a \oplus b $. \end{definition}
\begin{remark} \begin{enumerate} \item If $
\cT = \la \cA,\cB \ra $ is an SOD of $\cT$, then $
\cA = \cB^{\perp} $ and $
\cB ={}^{\perp} \hspace{-1mm}\cA $. Here $
\bullet^\perp $ (resp. $
{}^\perp \bullet $) denotes the right (resp. left) orthogonal complement of the subcategory $\bullet$ (\cite[Definition 1.42]{MR2244106}).
\item For a strictly full triangulated subcategory $
\iota \colon \cC \subset \cT $, the pair $
\cC, {}^{\perp} \cC $ (resp. $\cC^{\perp}, \cC$) gives an SOD of $\cT$ if and only if the inclusion functor $\iota$ admits a left (resp. right) adjoint (see \cite[Lemma 3.1]{Bondal_RAACS}). \end{enumerate} \end{remark}
\begin{lemma}\label{lm:no_OD} Let $X$ be a connected locally separated DM stack which admits a non-stacky point. Assume that $X$ satisfies the resolution property. Then $
\bfD ^{ - } \lb \coh X \rb,
\bfD ^{ b } \lb \coh X \rb $, and $
\bfD ^{ \perf } \lb X \rb $ admit no non-trivial OD. \end{lemma}
In the statement above, $
\bfD ^{ - } \lb \coh X \rb $ and $
\bfD ^{ \perf } \lb X \rb $ denote the category of upper bounded complexes of coherent sheaves and those of perfect complexes, respectively.
\begin{proof} Let $
\bfD ^{ b } \lb \coh X \rb = \cA \oplus \cB $ be an OD of $\bfD ^{ b } \lb \coh X \rb$. Take a non-stacky point $x\in X$. Since $\End(\bfk ( x ))$ is a field, $\bfk ( x )$ is indecomposable and hence is contained in either $\cA$ or $\cB$. Let us assume it is contained in $\cA$.
Let $E$ be any locally free sheaf, and consider the decomposition $
E = E_{\cA} \oplus E_{\cB} $ provided by the OD. If $E_{\cB} \neq 0$, then $
E_{\cB} \otimes \bfk ( x ) $ is isomorphic to $
\bfk ( x )^{\oplus r} $ with $
r $ the rank of $E_{\cB}$. This follows from the connectedness of $X$ and the fact that the closed substack $x$ is a scheme \cite[Chapter II, Proposition 5.9]{MR0302647}. Thus we get a surjective morphism $
E_{\cB}\to E _{ \cB } \otimes \bfk ( x )
\to \bfk ( x ) $, which is a contradiction. Hence $E$ belongs to $\cA$. Now locally free sheaves form a spanning class of $\bfD ^{ b } \lb \coh X \rb$ because of the resolution property. To see this, take any non-trivial object $
F ^{ \bullet } = \lb F ^{ i } \rb _{ i \in \bZ } $ and set $
M \coloneqq \max \lc i \in \bZ \mid \cH ^{ i } \lb F ^{ \bullet } \rb \neq 0 \rc $. By replacing $F ^{ \bullet }$ with its canonical truncation at $M$, we may assume that $
F ^{ i } = 0 $ for $
i > M $. Then cover $F ^{ M }$ by a locally free sheaf $E ^{ M }$ and extend it to a locally free sheaf resolution $
E ^{ \bullet } = \lb E ^{ i } \rb _{ i \in \bZ } $ of $
F ^{ \bullet } $. Then one obtains a non-trivial map $
E ^{ M } [ - M ] \to E ^{ \bullet } $. Therefore $\cB$ should be trivial by the orthogonality.
Finally, note that the arguments above work perfectly well for $
\bfD ^{ - }\lb \coh X \rb $ as well. This, in turn, implies the claim for $
\bfD ^{ \perf }\lb X \rb $. In fact, suppose that there is an OD $
\bfD ^{ \perf }\lb X \rb = \cA \oplus \cB $. By \cite[Proposition 4.3]{MR2801403}, there is a unique SOD $
\bfD ^{ - } \lb \coh X \rb= \la \cA ^{ - }, \cB ^{ - } \ra $ which is compatible with the SOD $
\bfD ^{ \perf } \lb X \rb= \la \cA, \cB \ra $. Closely looking on the explicit description of the subcategory $
\cA ^{ - } $ (respectively $
\cB ^{ - } $) (see the proof of \cite[Proposition 4.3]{MR2801403}), one finds that these categories depend only on the subcategory $\cA$ (resp. $\cB$). Hence it follows that $
\bfD ^{ - }\lb \coh X \rb = \la \cB ^{ - }, \cA ^{ - } \ra $ is also an SOD which is compatible with the SOD $
\bfD ^{ \perf } \lb X \rb= \la \cB, \cA \ra $. Thus we see $
\bfD ^{ - } \lb \coh X \rb= \cA ^{ - } \oplus \cB ^{ - } $, so that either $
\cA ^{ - } = 0 $ or $
\cB ^{ - } = 0 $. Since $
\cA ^{ \perf } $ (resp. $
\cB ^{ \perf } $) is a full subcategory of $
\cA ^{ - } $ (resp. $
\cB ^{ - } $), we conclude the proof. \end{proof}
\begin{remark} If there is no non-stacky point on $X$, $\bfD ^{ b } \lb \coh X \rb$ may have an OD even if it is smooth and irreducible. For example consider the quotient stack $
X = [ \Spec{ \bC } / ( \bZ / 2 ) ] $. A coherent sheaf on $X$ is nothing but a finite dimensional representation of $\bZ / 2$ over $
\bC $. The derived category $\bfD ^{ b } \lb \coh X \rb$ is orthogonally decomposed by the trivial representation and the non-trivial character of $\bZ / 2$. \end{remark}
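Explicitly, writing $\chi_0$ for the trivial character and $\chi_1$ for the sign character of $\bZ / 2$, the decomposition reads

```latex
\begin{equation*}
  \bfD ^{ b } \lb \coh X \rb
  \simeq \bfD ^{ b } \lb \bC [ \bZ / 2 ] \text{-mod} \rb
  = \la \chi_0 \ra \oplus \la \chi_1 \ra,
\end{equation*}
```

where the orthogonality $\Hom ( \chi_i, \chi_j [ n ] ) = 0$ for $i \neq j$ and all $n \in \bZ$ follows from the semisimplicity of the group algebra $\bC [ \bZ / 2 ]$.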
\pref{lm:no_OD} provides us with the following useful criterion for the triviality of an SOD. \begin{corollary}\label{cr:iff} Let $X$ be a smooth proper DM stack which has a non-stacky point and satisfies the resolution property. Consider an SOD $
\bfD ^{ b } \lb \coh X \rb = \la \cA,\cB\ra. $ Then it is trivial, i.e. $\cA=0$ or $\cB=0$, if and only if $
\cA \otimes \omega_X \subset \cA \subset \bfD ^{ b } \lb \coh X \rb $ holds. \end{corollary}
\noindent The following argument is well known to experts (see \cite[Proposition 3.6]{Bondal-Kapranov_Serre}), but we include it here because of its importance in this paper.
\begin{proof} By \pref{lm:no_OD}, it is enough to show that it is an OD. Given $a\in\cA$ and $b\in\cB$, by applying the Serre duality and the assumption $
a \otimes \omega_X [ \dim X ] \in \cA $, we see $
\Hom ( a, b ) \simeq \Hom ( b, a \otimes \omega_X [ \dim X ] )^{\vee} = 0. $ Hence we see that $\cA$ is also the right orthogonal of $\cB$, concluding the proof. \end{proof}
\subsection{Picard scheme}
We recall basics of Picard schemes from \cite[Chapter 9]{MR2222646}. Let $X\to S$ be a morphism of finite type between locally Noetherian schemes. The relative Picard functor, which will be denoted by $\Pic_{X/S}$, is a contravariant functor from the category of locally Noetherian $S$-schemes to the category of abelian groups defined by \begin{equation}
\Pic_{ X / S } ( T ) = \Pic ( X \times _{ S } T ) / \Pic ( T ), \end{equation} where $T$ is a locally Noetherian $S$-scheme. The functor $
\Pic _{ X / S } $ is a presheaf, and the associated sheaf on the fppf site will be denoted by $
\Pic _{ ( X / S ) ( \mathrm{ fppf } )} $. If $
\Pic _{ ( X / S )( \mathrm{fppf} ) } $ is represented by a scheme, it will be denoted by $
\PPic _{ X / S } $. A line bundle $L$ on $X$ naturally defines an $S$-valued point $
[ L ] \in \PPic _{ X / S } ( S ). $
The following existence result for Picard schemes is enough for us.
\begin{theorem}[{$=$\cite[Corollary 9.4.18.3]{MR2222646}}]\label{th:existence} Let $S$ be the spectrum of a field and $X$ a proper scheme over $S$. Then $
\PPic _{ X / S } $ exists and is a disjoint union of open quasi-projective subschemes. \end{theorem}
If $\PPic_{X/S}$ exists and $S$ is the spectrum of a field, its identity component (i.e., the connected component containing the identity) will be denoted by $\PPic^0_{X/S}$. The following theorem characterizes the $\bfk$-valued points of $\PPic^0_{X/\bfk}$.
\begin{theorem}[{$=$\cite[Corollary 9.5.10]{MR2222646}}]\label{th:characterization_of_points_of_Pic0} Assume that $S$ is the spectrum of a field and $
\PPic _{ X / S } $ exists. Let $L$ be an invertible sheaf on $X$. Then $L$ is algebraically equivalent to $\cO_X$ if and only if $
[ L ] \in \PPic^0 _{ X / S } ( S ). $ \end{theorem}
The notion of algebraic equivalence is defined as follows. For simplicity, we assume that the base field is algebraically closed.
\begin{definition}[{$=$\cite[Definition 9.5.9]{MR2222646}}]\label{df:algebraic_equivalence} Assume $S$ is the spectrum of an algebraically closed field $\bfk$. Let $L$ and $N$ be invertible sheaves on $X$. Then $L$ is said to be \emph{algebraically equivalent to} $N$ if, for some $n$ and all $i$ with $1\le i\le n$, there exist connected $\bfk$-schemes of finite type $T_i$, closed points $
s_i, t_i \in T_i $, and an invertible sheaf $M_i$ on $
X \times _{ \bfk } T_i $ such that \begin{equation}
L
\simeq M _{ 1, s_1 },
M _{ 1, t_1 }
\simeq M _{ 2, s_2 },
\dots,
M _{ n-1, t _{n-1} }
\simeq M_{ n, s_n },
M_{n, t_n}
\simeq N. \end{equation} \end{definition}
\section{Results in arbitrary dimensions} \subsection{Constraints on SOD by canonical base loci}
\begin{theorem}\label{th:pg positive} Let $X$ be a smooth proper DM stack and $ \bfD ^{ b } \lb \coh X \rb = \la \cA, \cB \ra $ an SOD. Then
\begin{enumerate}
\item\label{it:closed points}
At least one of the following holds.
\begin{enumerate}
\item \label{it:closed points in A}
$
\bfk ( x ) \in \cA
$
for any closed point
$
x \nin \Bs| \omega_X | \cup \cS
$.
\item \label{it:closed points in B}
$
\bfk (x) \in \cB
$
for any closed point
$
x \nin \Bs| \omega_X | \cup \cS
$.
\end{enumerate}
\item \label{it:small supports}
When (\pref{it:closed points in A}) (resp. (\pref{it:closed points in B}))
is satisfied, the support of any object in $ \cB $ (resp. $ \cA $)
is contained in
$ \Bs| \omega_X | \cup \cS $.
\end{enumerate} \end{theorem}
\noindent In the statement above, $\Bs| \omega_X |$ denotes the canonical base locus (see \pref{df:base_locus}) and $ \cS \subset X $ the locus of stacky points (see \pref{df:stacky_locus}).
\begin{proof} Take an arbitrary closed point $
x \in X \setminus \lb \Bs | \omega _X | \cup \cS \rb $. We first show that $
\bfk ( x ) $ is contained in either $\cA$ or $\cB$. Consider the triangle for $\bfk ( x )$ provided by the SOD: \begin{equation}
b \to \bfk ( x ) \to a \xto[]{f} b [ 1 ]. \end{equation}
\noindent Take a global section $
s \in H^0 ( X, \omega _X ) $ which does not vanish at $x$, and set $
U = X \setminus Z ( s ) $. If $
f| _U \neq 0 $, we see $
( \otimes s \circ f ) | _U
= \otimes s| _U \circ f|_ U $ is also non-trivial. This contradicts \begin{equation}
\Hom ( a, b \otimes \omega _X [ 1 ] )
\simeq \Hom ( b, a [ \dim X - 1 ] ) ^{\vee}
= 0. \end{equation} Thus we see $
f| _U = 0 $. This implies the decomposition $
\bfk ( x )
\simeq a| _U \oplus b| _U $ and hence we obtain either $
a| _U = 0 $ or $
b| _U = 0 $. If $
a| _U = 0 $, the morphism $
\bfk ( x ) \to a $ is zero. Then we obtain the decomposition $
b \simeq \bfk ( x ) \oplus a [ - 1 ] $. By the semiorthogonality we see $
a = 0 $ and hence $
\bfk ( x ) \in \cB. $ If we instead assume $
b | _{ U } = 0, $ similarly we obtain $
\bfk ( x ) \in \cA. $
If $\bfk ( x ) \in \cB$ (respectively $\bfk ( x ) \in \cA$) holds for some closed point $
x \nin \cS $, then by \pref{lm:useful} any object $E\in\cA$ should satisfy $
x \nin \Supp{E} $ (resp. any $ E \in \cB $). This in particular implies that the closed substack $
\Supp{ E } \subset X $ is strictly smaller than $X$.
Finally assume for a contradiction that $\cA$ and $\cB$ both contain non-stacky closed points. As noted in the previous paragraph, the support of any object in $\cA$ or $\cB$ is a strictly smaller closed subset of $X$. On the other hand, consider the decomposition of the structure sheaf \begin{equation}
b \to \cO_X \to a \to b [ 1 ]. \end{equation} From this triangle we obtain the equality $
X = \Supp { \cO_X } = \Supp { a } \cup \Supp { b }, $
which contradicts the irreducibility of $X$. Hence all the non-stacky closed points outside of $\Bs| \omega_X |$ are simultaneously contained in $\cA$, or else all in $\cB$. This proves (\pref{it:closed points}) of \pref{th:pg positive}. In the former case, the support of any object in
$\cB$ is contained in $\cS\cup\Bs| \omega_X |$ as we saw above, concluding the proof of (\pref{it:small supports}). \end{proof}
\begin{example} Let $Y$ be a smooth projective surface such that $
\omega _{ Y } $ is globally generated. Let $
f \colon X \to Y $ be the blow-up of $Y$ at a closed point $y$, with the exceptional divisor $
E \subset X $. Then we obtain the SOD $
\bfD ^{ b } \lb \coh X \rb = \la \la \cO _E ( E ) \ra, \bL f^{*} \bfD ( Y ) \ra $ (see \cite{Orlov_PB}). Observe that the objects in $
\la \cO _E ( E ) \ra $ are supported in $
E = \Bs | \omega_X | $, and that all the closed points $
x \nin \Bs | \omega_X | $ are contained in $
\bL f^{*} \bfD ( Y ) $. \end{example}
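The equality $
E = \Bs | \omega_X | $ in this example can be seen from the standard blow-up formula \begin{equation}
\omega_X \simeq f^* \omega_Y \otimes \cO_X ( E ), \end{equation} which identifies $H^0 ( X, \omega_X )$ with $H^0 ( Y, \omega_Y )$: every canonical section of $X$ vanishes along $E$, while away from $E$ the linear system $| \omega_X |$ is base point free since $\omega_Y$ is globally generated.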
As an immediate consequence of \pref{th:pg positive}, we obtain the following general result for the non-existence of SODs.
\begin{theorem}\label{th:locally free} Let $X$ be a smooth proper DM stack satisfying the following properties: \begin{enumerate} \item There exists a non-stacky closed point.
\item $X$ satisfies the resolution property: i.e., every coherent sheaf on $X$ admits a surjective morphism from a locally free sheaf. \label{it:resolution_property}
\item Each connected component of $\cS\cup\Bs| \omega_X |$ is contained in an open substack of $X$ on which $\omega_X$ is trivial. \end{enumerate} Then $\bfD ^{ b } \lb \coh X \rb$ admits no non-trivial SOD. \end{theorem}
\noindent If the coarse moduli space of $X$ is projective, then the condition \eqref{it:resolution_property} is always satisfied by \cite[Theorem 4.2]{Kawamata_EDCSSS}. Actually, it is enough to assume that the coarse moduli space is a scheme with affine diagonal (\cite[Theorem 1.2]{MR2108211}).
In order to establish the correspondence between MMP and SOD for varieties with quotient singularities, one should think of the derived category of the smooth DM stack which is obtained by replacing the quotient singularity with the corresponding stacky quotient (see \cite{Kawamata_LCBMDC}). This is one of the reasons why we work with stacks, not only schemes.
\begin{proof} Take an SOD $
\bfD ^{ b } \lb \coh X \rb = \la \cA, \cB \ra $. Write $
U = X \setminus \lb \cS \cup \Bs| \omega _X | \rb $. By \pref{th:pg positive}, closed points of $U$ are simultaneously contained in either $\cA$ or $\cB$. Let us assume they are in $
\cB. $
Then, by \pref{th:pg positive}, the support of any object $
a \in \cA $ is contained in $
\cS \cup \Bs | \omega _X | $. Since $
\omega _X $ is trivial on an open neighborhood of each connected component of this set, we see $
a \otimes \omega_X \simeq a $ by \pref{lm:invariant}. Therefore we can apply \pref{cr:iff} to conclude the proof. \end{proof}
\begin{example} Let $X$ be the surface (of general type) discussed in \cite[Proposition 3]{MR1980618}. We can easily check that $
\Bs| \omega_X | $ consists of $4$ points. Hence \pref{cr:main_cor} tells us that the derived category $\bfD ^{ b } \lb \coh X \rb$ admits no SOD. \end{example}
\subsection{Local situation}\label{sc:Local_situation}
We refine the arguments in the proof of \pref{th:pg positive}, so as to make them applicable to local situations. This will be applied later to surfaces of general type. For simplicity we restrict ourselves to varieties.
Let $X$ be a variety and $Z$ a closed subset. Consider the strict full subcategory \begin{equation}\label{eq:category_supported_on_Z}
\cC_Z
= \{ E \in \bfD ^{ b } \lb \coh X \rb | \Supp{E} \subset Z \} \subset \bfD ^{ b } \lb \coh X \rb. \end{equation}
\noindent An immediate corollary of \pref{th:pg positive} is \begin{lemma}\label{lm:restriction} Let $X$ be a smooth proper variety with $
p_g(X) > 0. $
Suppose that for any connected component $Z$ of $\Bs| \omega_X |$, the category $\cC_Z$ admits no SOD. Then $\bfD ^{ b } \lb \coh X \rb$ has no SOD. \end{lemma}
\begin{proof} Take an SOD $
\bfD ^{ b } \lb \coh X \rb = \la \cA, \cB \ra $. By \pref{cr:iff}, it is enough to show $
\cA \otimes \omega_X
=
\cA $. Assume that the conclusion \eqref{it:closed points in B} of \pref{th:pg positive} holds, so that $\cA$ is a triangulated subcategory of $
\cC _{ \Bs | \omega _X | } $. For any connected component $
Z \subset \Bs | \omega_X | $, set $
\cA_Z = \{ a \in \cA | \Supp{a} \subset Z \} $ so as to obtain the orthogonal decomposition $
\cA = \bigoplus _{Z} \cA_Z $ by the triangulated subcategories $
\cA_Z \subset \cC_{Z} $.
Let $p\colon \bfD ^{ b } \lb \coh X \rb\to\cA$ be the left adjoint of the inclusion functor $
\cA \hto \bfD ^{ b } \lb \coh X \rb $. Composing $p$ with the obvious functors, we get the left adjoint of the inclusion $
\cA_Z\hto \cC_Z $ and hence obtain the SOD $
\cC_Z = \la \cA_Z, {}^\perp \hspace{-1mm} \cA_Z \ra. $ Since $
\cC_Z $ admits no SOD by the assumption, $
\cA_Z $ is either $
0 $ or $
\cC _Z $ itself. In any case we obtain the equality $
\cA_Z\otimes \omega_X = \cA_Z \subset \bfD ^{ b } \lb \coh X \rb. $ Summing up over $Z$, we obtain the conclusion. \end{proof}
Now we show the local version of \pref{th:pg positive} for $\cC_Z$.
\begin{proposition}\label{pr:closed_points_local} Let $X$ be a smooth proper variety and $Z \subset X$ a closed subscheme. Assume that for each $m \ge 1 $ we have a section $
s_m \in H^0 ( mZ, \omega _{ X } |_{ mZ } ) $ such that $
s _{ m + 1 } | _{ m Z } = s _{ m } $ and $
s_1 $ is generically non-vanishing on an irreducible component $
W \subset Z $. Write the projective limit as $
s = ( s_m )_{ m \ge 1 }
\in
\varprojlim H^0 ( mZ, \omega _X | _{ m Z } ). $
Then for any SOD $\cC_Z=\la \cA, \cB \ra$, one and only one of the following holds. \begin{enumerate}[(a)] \item For any closed point $x\in W$ at which $s$ does not vanish, $\bfk ( x )\in\cA$. \item For any closed point $x\in W$ at which $s$ does not vanish, $\bfk ( x )\in\cB$. \end{enumerate} \end{proposition}
\begin{proof} We follow essentially the same line as that of the proof of \pref{th:pg positive}. Take any closed point $x\in W$ at which $s$ does not vanish. Consider the decomposition \begin{equation}
b \to \bfk ( x ) \to a \xto[]{f} b [ 1 ]. \end{equation}
\noindent Since $b$ is supported in $Z$, by \pref{rm:rouquier} there exist $ m \ge 1 $ and $
b' \in D ( m Z ) $ together with an isomorphism $
b \simto \iota_* b', $ where $
\iota \colon mZ \hto X $ is the natural immersion. Hence we can define the ``multiplication by $s$'' as the composition of morphisms \begin{equation}
b \simto \iota_* b'
\xto[]{ \iota_* \otimes s_m }
\iota _* ( b' \otimes \omega _{ X } | _{ m Z } )
\simto
\iota_* b' \otimes \omega_X
\simto
b \otimes \omega_X. \end{equation} Arguing as in the proof of \pref{th:pg positive}, we can show that the morphism $f$ vanishes at $x$. Thus we see that $\bfk ( x )$ is contained in $\cA$ or $\cB$.
Finally, by looking at the decomposition of $\cO_W$ instead of $\cO_X$, we see that all such closed points are simultaneously contained in $\cA$ or $\cB$. \end{proof}
The next statement provides us with an inductive way of proving the non-existence of SODs.
\begin{corollary}\label{cr:induction} Let $X$ be a smooth proper variety and $Z$ a connected closed subscheme. Define the closed subset $B\subset Z$ by \begin{equation}
B
= \bigcap _{ s \in \varprojlim H^0 ( mZ, \omega _{ X } | _{ mZ })} V ( s _1 ). \end{equation} Assume $
B \neq Z $, and take an irreducible component $Z_1\subset Z$ which is not contained in $B$. Then $\cC_Z$ admits no SOD if the same holds for all $\cC_W$, where $W$ runs through all the connected components of $\overline{(Z\setminus Z_1)}\cup (B\cap{Z_1})$. \end{corollary}
\begin{proof} Note first that $\cC_Z$ admits no OD, since $Z$ is connected; the proof is essentially the same as that of \pref{lm:no_OD}, once one replaces `locally free sheaves' with `locally free sheaves on thickenings of $Z$'. Hence it is enough to show that any SOD of $\cC_Z$ is in fact an OD.
Take an SOD $
\cC _Z = \la \cA, \cB \ra. $ By \pref{pr:closed_points_local}, one and only one of the following holds: \begin{enumerate}[(a)] \item For any closed point $x\in Z_1 \setminus B$, $\bfk ( x )\in\cA$. \item For any closed point $x\in Z_1 \setminus B$, $\bfk ( x )\in\cB$. \end{enumerate} Let us assume (a) holds. Then, as before, for any $
b \in \cB $ we see $
\Supp b \subset
\overline{ \lb Z \setminus ( Z_1 \setminus B ) \rb }
= \overline{ ( Z \setminus Z_1 ) } \cup ( B \cap Z_1 ) $. The rest of the proof is completely analogous to that of \pref{lm:restriction}. \end{proof}
\begin{remark} In fact, the arguments above work under weaker assumptions. It is enough to find infinitely many integers $
m > 0 $ such that for each $
m $ one can find $
s _m \in H^0 ( m Z, \omega_X | _{ m Z } ) $ which does not vanish at the generic point of an irreducible component of $
Z. $ \end{remark}
\subsection{Rigidity of semiorthogonal decomposition} \label{sc:Rigidity_of_Semiorthogonal_decomposition} We show that SODs are rigid under the actions of topologically trivial autoequivalences. \begin{theorem}\label{th:rigidity} Let $X$ be a projective scheme over a field $
\bfk $, and $
\bfD ^{ \perf } \lb X \rb
=
\la \cA, \cB \ra $ be an SOD. Then for any line bundle $L$ such that $
[ L ] \in \PPic_{X/\bfk}^0 $, the equality of subcategories $
\cA \otimes L= \cA \subset \bfD ^{ \perf } \lb X \rb $ holds. \end{theorem}
We immediately obtain \begin{corollary}\label{cr:trivial_chern_class} Let $X$ be a smooth projective variety whose canonical bundle is contained in $\PPic_{X/\bfk}^0$. Then $\bfD ^{ b } \lb \coh X \rb$ has no SOD. \end{corollary} \begin{proof} By \pref{cr:iff}, it is enough to show that any SOD $\bfD ^{ b } \lb \coh X \rb=\la \cA, \cB \ra$ satisfies $
\cA \otimes \omega_X = \cA $. By \pref{th:rigidity}, this follows from the assumption $[\omega_X]\in\PPic^0_{X/\bfk}$. \end{proof}
\begin{lemma}\label{lm:classical_generator} Let $
X $ be a quasi-compact and separated scheme and $
\cC \subset \bfD ^{ \perf } \lb X \rb $ be a right or left admissible subcategory. Then $ \cC $ admits a classical generator. \end{lemma}
See \cite[p. 2]{Bondal-van_den_Bergh} for the definition of a classical generator.
\begin{proof} We first show the existence of classical generator for $
\bfD ^{ \perf } \lb X \rb $. By \cite[Theorem 3.1.1 (2)]{Bondal-van_den_Bergh}, the derived category $
\bfD _{ \Qcoh } \lb X \rb $ of unbounded complexes of $
\cO _{ X } $-modules with quasi-coherent cohomology is generated by a single compact object $
T $. Hence by the Ravenel-Neeman theorem \cite[Theorem 2.1.2]{Bondal-van_den_Bergh}
(see also \cite[\href{https://stacks.math.columbia.edu/tag/09SM}{Tag 09SM}]{stacks-project} for a self-contained proof), the subcategory $
\bfD _{ \Qcoh } \lb X \rb ^{ c } $ of compact objects is classically generated by $T$.
On the other hand, it follows from \cite[Corollary 5.5]{MR1214458} that the canonical exact functor $
\bfD \lb \Qcoh X \rb \to \bfD _{ \Qcoh } \lb X \rb $, where $
\bfD \lb \Qcoh X \rb $ is the derived category of unbounded complexes of quasi-coherent sheaves on $X$, is an equivalence of triangulated categories. Since $
\bfD \lb \Qcoh X \rb ^{ c } = \bfD ^{ \perf } \lb X \rb $ by \cite[Theorem 3.1.1 (1)]{Bondal-van_den_Bergh}, we obtain the assertion for $\cC = \bfD ^{ \perf } \lb X \rb$.
To obtain a classical generator of a general $
\cC $, apply the projection functor to a classical generator of $
\bfD ^{ \perf } \lb X \rb $. In fact, fix a classical generator $
T $ of $
\bfD ^{ \perf } \lb X \rb $ so that for each object $
a \in \cA $ there is a sequence \begin{align} \xymatrix{
0 \ar[rr] & & E ^{ 1 } \ar[rr] \ar[dl] & & E ^{ 2 } \ar[dl] \ar[r] &\cdots \ar[r] & E ^{ n - 1 }
\ar[rr] \ar[dl] & & E ^{ n } = \iota _{ \cA } \lb a \rb \oplus \exists R \ar[dl]\\
& S ^{ 1 } \ar@{-->}[ul] & & S ^{ 2 } \ar@{-->}[ul] & \cdots &
& & S ^{ n } \ar@{-->}[ul]} \end{align} such that \begin{itemize}
\item the lower triangles are distinguished triangles,
\item each $
S ^{ i } $ is a direct sum of shifts of $T$. \end{itemize} By applying the projection functor $
p _{ \cA } $ to this diagram, we immediately see that every $
a \in \cA $ is classically generated by $
p _{ \cA } \lb T \rb $. \end{proof}
\begin{lemma}\label{lm:semi-continuity} Let $X$ be a projective scheme, $
a \in \bfD ^{ b } \lb \coh X \rb $ a bounded complex of coherent sheaves, and $
b \in \bfD ^{ \perf } \lb X \rb $ a perfect complex. Let $T$ be a scheme of finite type over $\bfk$ with a point $0 \in T(\bfk)$, and $M$ a line bundle on $X\times _{ \bfk }T$ such that $
\RHom(b,a\otimes M_0)=0. $ Then there exists an open neighborhood $
0 \in U \subset T $ such that for any $t\in U(\bfk)$ we have $
\RHom ( b, a \otimes M_t ) = 0. $ \end{lemma}
\begin{proof} It follows from the assumptions on $
a $ and $
b $ that $
\bR\cHom (p_X^*b, p_X^*a \otimes M) $ is a bounded complex of coherent sheaves on $
X \times _{ \bfk } T $. Since $
p _{ T } $ is projective, it then follows that \begin{equation}
S = \Supp { ( \bR p_{T*}\bR\cHom (p_X^*b, p_X^*a \otimes M))} \subset T \end{equation} is a closed subset of $
T $.
Next consider the sequence of isomorphisms \begin{equation}
\begin{split}
\bL \iota_t^* \bR p_{T*} \bR\cHom ( p_X^*b, p_X^*a \otimes M)
\simto \bL \iota_t^* \bR p_{T*} \lb p_X^* \bR\cHom ( b, a ) \otimes M \rb\\
\simto \bR \Gamma \lb X, \bR\cHom ( b, a ) \otimes M_t \rb
\simto \RHom ( b, a \otimes M_t ), \end{split} \end{equation} where $
\iota_t \colon \{ t \} \hto T $ is the natural inclusion. The second isomorphism follows from the base change theorem for flat morphisms (\cite[Corollary 2.23]{MR2238172}). From this and \pref{lm:Derived_NAK}, we see that the subset $S$ does not contain $0$. Now we can define $U$ as the complement of $S$. \end{proof}
\begin{lemma}\label{lm:open} Let $X$ be a projective scheme over a field $
\bfk $, and $
\bfD ^{ \perf } \lb X \rb
=
\la \cA, \cB \ra
=
\la \cA', \cB' \ra $ be two SODs. Let $T$ and $M$ be as in \pref{lm:semi-continuity}. Then the subset $
U(\cA')
=
\lc t \in T ( \bfk ) \mid \cA \otimes M_t = \cA' \rc \subset T(\bfk) $ is open. \end{lemma}
\begin{proof} The assertion is trivial when $
U \lb \cA ' \rb = \emptyset $, so we assume $
U \lb \cA ' \rb \neq \emptyset $. It is enough to check the following two claims separately for each point $0\in U(\cA')$. \begin{enumerate} \item There exists an open neighborhood $
0 \in U $ such that $
\cA \otimes M_t\subset\cA' $ holds for any $
t \in U $. \item There exists an open neighborhood $
0 \in U $ such that $
\cA \otimes M_t^{-1}\subset\cA' $ holds for any $
t \in U $. \end{enumerate} We give a proof only for the first one; the second follows from this by replacing $
M $ with $
M ^{ - 1 } $.
By \pref{lm:classical_generator}, we can take classical generators $
a \in \cA $ and $
b ' \in \cB ' $. Then we have the useful criterion \begin{equation}
\cA \otimes M_t \subset \cA' = \cB'^{\perp}
\iff
\RHom ( b', a \otimes M_t )
= 0. \end{equation} Since the latter condition on $
t $ is known to be open by \pref{lm:semi-continuity}, we are done. \end{proof}
\begin{proof}[Proof of \pref{th:rigidity}] By \pref{th:characterization_of_points_of_Pic0} and \pref{df:algebraic_equivalence}, it is enough to show the following
\begin{claim} Under the same assumptions as in \pref{th:rigidity}, let $T$ be a connected scheme of finite type over $\bfk$, and $M$ a line bundle on $X\times_{\bfk}T$. If $\cA\otimes M_0=\cA$ holds for $0\in T(\bfk)$, then $\cA\otimes M_t=\cA$ holds for all $t\in T(\bfk)$. \end{claim}
In order to show the claim, set $
S
=
\{ \cA \otimes M_t \mid t \in T ( \bfk ) \}.
$
By \pref{lm:open}, we obtain a decomposition $
T(\bfk)=\coprod_{\cA'\in S}U(\cA') $ of $T(\bfk)$ into disjoint open subsets. Since $T(\bfk)$ is connected by the assumption $\bfk=\overline{\bfk}$, this implies that $U(\cA)=T(\bfk)$. \end{proof}
Let $X$ be a smooth projective variety over $\bfk$. Using similar arguments, we can also show $
g^*\cA = \cA \subset \bfD ^{ b } \lb \coh X \rb $ for any automorphism $
g \in \AAut^0 _{ X / \bfk } $. Here $
\AAut^0 _{ X / \bfk } $ is the identity component of the group scheme $
\AAut _{ X / \bfk } $ (see \cite[p.133 Exercise]{MR2222646} for the definition and the existence of $\AAut_{X/\bfk}$). Thus we obtain
\begin{corollary}\label{cr:rigidity_with_repsect_to_toplologically_trivial_autoequivalences} For any SOD $
\bfD ^{ b } \lb \coh X \rb = \la \cA, \cB \ra $ and $
\Phi \in \PPic^0 _{ X / \bfk } \rtimes \AAut^0 _{ X / \bfk } $, we have $
\Phi \cA = \cA \subset \bfD ^{ b } \lb \coh X \rb. $ \end{corollary}
\noindent As proven in \cite[Theorem 2.12]{rosay2009some}, the group scheme $
\PPic^0 _{ X / \bfk } \rtimes \AAut^0 _{ X / \bfk } $ coincides with the identity component of the group scheme of autoequivalences of $\bfD ^{ b } \lb \coh X \rb$.
\section{More results for surfaces}\label{sc:Surfaces}
In this section we closely investigate the non-existence of SODs for minimal projective surfaces. For $\kappa = 0$, we completely determine when there is a non-trivial SOD. For $\kappa = 1$, we have a rather satisfactory answer: there is no SOD if $
p _{ g } > 0 $. For $\kappa = 2$, we show the non-existence of SODs under a strong assumption. The assumption seems to be a technical one, and removing it remains a task for future work.
Readers can refer to \cite{MR986969} for various notions for surfaces in positive characteristics, such as non-classical Enriques surfaces, quasi-elliptic fibrations, wild fibers, and quasi-bielliptic surfaces. The following result plays a central role for the cases $
\kappa = 0, 1 $.
Let $f\colon X\to C$ be a relatively minimal (quasi-)elliptic surface with multiple fibers $
X _{ c_i } = m_i F_i \quad (i = 1, \dots, k) $. Then we have the Kodaira-Bombieri-Mumford canonical bundle formula (see \cite[Theorem 2]{MR0491719}) \begin{equation}\label{eq:Kodaira} \omega_X \simeq f^*(\omega_C\otimes \bR ^1f_*\cO_X/T)\otimes \cO_X \lb \sum_i a_i F_i \rb, \end{equation} where $
T \subset
\bR ^1 f_* \cO_X $ is the torsion part and $
0 < a_i \le m_i - 1 $ are some integers. It is known that $
\omega_C \otimes \bR ^1 f_*\cO_X / T $ is a line bundle of degree $
2g(C) - 2 + \chi ( \cO_X ) + \mathop {\textrm{length}} { T } $.
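For instance, for a rational elliptic surface $
f \colon X \to \mathbb{P}^1 $ without multiple fibers we have $
g ( C ) = 0 $, $
\chi ( \cO_X ) = 1 $, and $
T = 0 $, so that \eqref{eq:Kodaira} reads \begin{equation}
\omega_X \simeq f^* \cO _{ \mathbb{P}^1 } ( - 1 ), \end{equation} recovering the standard fact that $- K_X$ is the class of a fiber.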
\subsection{$\kappa=0$}
Since a classical Enriques surface satisfies $p_g=q=0$, any line bundle on it is exceptional. Hence the derived category always admits a non-trivial SOD (see \cite{MR3302614} and \cite{hosono2015derived}
for further results on this topic). For non-classical Enriques, abelian, and K3 surfaces we have no SOD by \pref{cr:iff}, since their canonical bundles are trivial.
The most non-trivial case is the following \begin{proposition} Let $
X $ be a (quasi-)bielliptic surface. Then $
\bfD ^{ b } \lb \coh X \rb $ admits no SOD. \end{proposition}
\begin{proof} The idea is to use \pref{cr:trivial_chern_class}. In order to check its assumption, take a (quasi-)elliptic fibration $
f \colon X \to C $ without multiple fibers. By \eqref{eq:Kodaira}, there exists a line bundle $L$ on $C$ such that $
\omega_X \simeq f^* L $. Since $
f^* \colon \PPic _{ C / \bfk } \to \PPic _{ X / \bfk } $ preserves the identity components, it is enough to show that $
[L] \in \PPic^{0} _{ C / \bfk } $.
By the assumption there is a positive integer $N$ for which $
\omega_X ^{ \otimes N } \simeq \cO _{ X } $. Together with the projection formula, this implies \begin{align}
L ^{ \otimes N }
\simeq
f _{ * } f ^{ * } L ^{ \otimes N }
\simeq
f _{ * } \cO _{ X }
\simeq
\cO _{ C }. \end{align} Since $C$ is a smooth projective curve, this implies $
[L] \in \PPic^{0} _{ C / \bfk } $. Thus we conclude the proof. \end{proof}
\subsection{$\kappa=1$}\label{sc:kappa=1}
\begin{theorem}\label{th:elliptic_surface} Let $f\colon X\to C$ be a relatively minimal elliptic or quasi-elliptic surface. If $p_g(X)>0$, then $\bfD ^{ b } \lb \coh X \rb$ has no SOD. In particular, this applies to the case when $X$ is a minimal surface of Kodaira dimension $1$ with $p_g ( X ) >0$. \end{theorem}
\begin{proof}
Since the contribution from the multiple fibers in the RHS of \eqref{eq:Kodaira} is fixed as a linear system and $f$ is an algebraic fiber space, if $p_g(X)>0$, then $\Bs| \omega_X |$ is a union of finitely many fibers of $
f $. The finite set $
f ( \Bs | \omega _X | ) \subset C $ will be denoted by $
S $.
Take any SOD $
\bfD ^{ b } \lb \coh X \rb = \la \cA, \cB \ra $. By \pref{th:pg positive}, either $
\cA $ or $
\cB $ is supported in $
f ^{ - 1 } ( S ) $. This implies that the SOD under consideration is $
C $-linear in the sense of \cite{MR2801403}; to see this, note that the pull-back of any locally free sheaf on $C$ is trivial on an open neighborhood of $f ^{ - 1 } ( S )$. Also since $f$ is flat, we can apply \cite[Theorem 5.6]{MR2801403} to any base change of $
f $. In particular, for any closed point $
s \in S $ we obtain the SOD $
\bfD ^{ \perf } \lb X _{ s } \rb = \la \cA _{ X_s } ^{ \perf }, \cB _{ X_s } ^{ \perf } \ra $ (following the notation of \cite{MR2801403}). On the other hand, since $\omega_{ X _s }$ is torsion and any torsion line bundle on a complete curve over a field is contained in its $
\PPic^0 $ (see \cite[Chapter 0 Section 7]{MR986969}), we can use \pref{th:rigidity} to see $
\cA _{ X_s }^{ \perf } \otimes \omega _{ X_s } = \cA _{ X_s } ^{ \perf } \subset \bfD ^{ \perf } \lb X_s \rb $. Since the Serre duality works in $
\bfD ^{ \perf } \lb X_s \rb $ (see, say, \cite[Proposition 7.47 and Remark 7.48]{MR2434186}), this implies that the SOD is actually an OD.
Now since there is no OD of $
\bfD ^{ \perf } \lb X _{ s } \rb $
by \pref{lm:no_OD}, it follows that either $
\cA _{ X_s } ^{ \perf } = 0 $ or $
\cB _{ X_s } ^{ \perf } = 0 $ should hold for any $
s \in S $.
Without loss of generality, let us assume that $
\cA $ is supported in $
f ^{ - 1 } ( S ) $ for the rest of the proof. Assume for a contradiction that $
\cA \neq 0 $. By the construction of $
\cA _{ X_s } ^{ \perf } $ given in \cite[Section 5.4]{MR2801403}, we see that for any object $
a \in \cA, $ its base change $
a_s = a {}^{ \bL }\!\!\otimes _{ \cO _C } \cO _s $ is contained in $
\cA _{ X_s } ^{ \perf } $ (note that $
\bfD ^b \lb \coh X \rb = \bfD ^{ \perf } \lb X \rb $). Therefore, by \pref{lm:Derived_NAK}, there should be a point $
s \in S $ for which $
\cA _{ X_s } \neq 0 $. Then we obtain $
\cB _{ X_s } = 0 $, so that any object $
b \in \cB $ satisfies $
\Supp b \cap f ^{ - 1 } ( s ) = \emptyset $ by \pref{lm:Derived_NAK} again. Since $
f $ is proper, this implies that any object $
b \in \cB $ is supported in a union of finitely many fibers. Thus we conclude that any object in either $
\cA $ or $
\cB $ should be supported in a union of finitely many fibers, and it clearly contradicts the assumption that $
\bfD ^{ b } \lb \coh X \rb $ is generated by $
\cA $ and $
\cB $. \end{proof}
\begin{remark} \begin{enumerate} \item Our method is also applicable to other situations in which we have a sufficiently nice canonical bundle formula (see \pref{sc:Toward_higher_dimensions}).
\item (Assume $\bfk = \bC$ for simplicity.) Let $X$ be a minimal projective surface with $\kappa{(X)}=1$. If $p_g(X)=0$, $h^1(\cO_X)$ should be either $0$ or $1$ since $\chi{(\cO_X)}\ge 0$ holds (see \cite[Chapter V, \S 12]{Barth-Hulek-Peters-Van_de_Ven}). If $h^1(\cO_X)=0$, as we saw before, any line bundle is exceptional. If $h^1(\cO_X)=1$ (and hence $\chi{(\cO_X)}=0$), although we can restrict the nature of the fibration as follows, we do not know if $\bfD ^{ b } \lb \coh X \rb$ can admit an SOD or not. \begin{itemize} \item $g(C)=0$ or $1$ by the canonical bundle formula and the assumption $p_g(X)=0$. \item Smooth fibers are all isomorphic to one another and the multiple fibers are of type $_mI_0$ for some $m>0$ (see \cite[Chapter III, \S 18]{Barth-Hulek-Peters-Van_de_Ven}). \end{itemize} \end{enumerate} \end{remark}
\subsection{$\kappa=2$} We apply the results of \pref{sc:Local_situation} to smooth projective minimal surfaces with $
\kappa = 2 $ (i.e., of general type). There are some examples of minimal surfaces of general type on which the connected components of the fixed part of the canonical linear system can be birationally contracted to points (in the category of algebraic spaces). This property turns out to ensure the non-existence of SODs.
\begin{theorem}\label{th:negative_definite} Let $X$ be a smooth projective minimal surface of general type with $p_g\ge 2$. Assume the following condition (*). \begin{quote} (*)
For any one-dimensional connected component $Z$ of $\Bs| \omega_X |$, its intersection matrix is negative definite. \end{quote}
Then $\bfD ^{ b } \lb \coh X \rb$ has no SOD. \end{theorem}
\begin{proof} By \pref{cr:induction}, it is enough to show that for any one-dimensional connected component $Z$ of
$\Bs| \omega_X |$, the category $
\cC_Z $ has no SOD. We prove this for more general $
\cC_W $, where $
W $
is any reduced connected one-cycle contained in $\Bs| \omega_X |$, by induction on the number of irreducible components of $W$. If $W$ is empty, there is nothing to show. In general we can use the following
\begin{lemma}\label{lm:non-vanishing}
Under the assumptions of \pref{th:negative_definite}, let $W$ be a reduced connected one-cycle which is contained in $\Bs| \omega_X |$. Then there exists $
s = ( s_m ) _{ m \ge 1 } \in \varprojlim H^0 ( m W, \omega _{ X } | _{ m W } ) $ which does not vanish at the generic point of an irreducible component of $W$. \end{lemma}
\noindent This implies the strict inequality $
B \subsetneq W $ (see \pref{cr:induction} for notation). Since $W$ has pure dimension $1$, we can pick an irreducible curve $
Z_1 \subset W $ which is not contained in $B$. By \pref{cr:induction} it is then enough to show the non-existence of SOD for $\cC_{W'}$, where $
W' $ are connected components of $
\overline{ W \setminus Z_1 } \cup ( B \cap Z_1 ). $ Since $W'$ is either a point or a connected one-cycle whose number of irreducible components is strictly less than that of $W$, we can apply the induction hypothesis. \end{proof}
\begin{proof}[Proof of \pref{lm:non-vanishing}] By the Riemann-Roch (\cite[Chapter II, Theorem 3.1.]{Barth-Hulek-Peters-Van_de_Ven}) and the adjunction formula, we see $
h^0 ( \omega _X | _{W} ) = h^1(\omega_X|_{W}) + \frac{1}{2} W \cdot ( K_X - W ). $ Since we assumed $
p_g ( X ) > 1, $ $
K_X - W $ is linearly equivalent to a non-zero effective divisor. Hence by the $2$-connectedness of the canonical divisor of minimal surfaces of general type (\cite[Chapter VII, Proposition 6.2. (ii)]{Barth-Hulek-Peters-Van_de_Ven}), we see $
W \cdot (K_X - W) \ge 2
$. Thus we obtain $h^0(\omega_X|_{W})>0$.
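Explicitly, the Riemann-Roch computation used above unwinds as follows, where $
\deg ( \omega_X | _{W} ) = K_X \cdot W $ and $
\chi ( \cO_W ) = - \frac{1}{2} W \cdot ( W + K_X ) $ by the adjunction formula: \begin{equation}
h^0 ( \omega_X | _{W} ) - h^1 ( \omega_X | _{W} )
= K_X \cdot W - \frac{1}{2} W \cdot ( W + K_X )
= \frac{1}{2} W \cdot ( K_X - W ). \end{equation}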
Take any non-zero global section $s_1$ of $\omega_X|_{W}$. Since $W$ is reduced, $s_1$ is generically non-vanishing on at least one irreducible component of $W$. For each $m>1$ we show that the global section $s_{m-1}$ of $
\omega _X | _{ ( m - 1 ) W } $ lifts to a global section $s_m$ of $
\omega_X | _{ mW } $, so as to obtain the desired $
s = ( s_m ) _{ m \ge 1 } \in \varprojlim H^0 ( m W, \omega _{ X } | _{ m W } ) $. Consider the exact sequence \begin{equation}
0
\to
\cO_{W}( K_X - ( m - 1 ) W )
\to
\cO_{mW} ( K_X )
\to
\cO _{ ( m - 1 ) W } ( K_X )
\to 0 \end{equation} and the associated cohomology long exact sequence. This yields an exact sequence \begin{equation}
H^0(mW, \cO_{mW}(K_X))\to H^0((m-1)W, \cO_{(m-1)W}(K_X))
\to
H^1(W, \cO_{W}(K_X-(m-1)W)), \end{equation} and hence it is enough to show the vanishing of the third term. By the adjunction formula and the Serre duality for embedded curves (see \cite[Chapter II, Section 1]{Barth-Hulek-Peters-Van_de_Ven}), its dimension can be rewritten as $
h^1(W, \cO_{W}(K_X-(m-1)W))= h^0(W, \cO_W(mW)) $. Finally, the vanishing of the RHS follows from the assumption $W^2<0$. \end{proof}
\begin{example}\label{eg:Horikawa} Minimal surfaces $X$ of general type with $p_g=K_X^2=2$ and $q=0$ were investigated in \cite{MR517773}. Among them, those of type III (see \cite[page 104]{MR517773}) satisfy the assumption of \pref{th:negative_definite}. In fact the fixed part consists of a ($-2$)-curve. In this example the moving part of the canonical linear system is base point free and defines a genus two pencil over the projective line (\cite[Theorem 1.3]{MR517773}). \end{example}
\section{$\kappa = 1$ in higher dimensions}\label{sc:Toward_higher_dimensions}
Most of the arguments of \pref{sc:kappa=1} can be generalized to higher dimensions, except for one point.
\begin{theorem}\label{th:when kappa=1} Let $X$ be a non-singular projective $n$-fold defined over $\bfk$ such that $X$ is a minimal model with $\kappa{(X)}=1$ and $p_g(X)>0$. Suppose $
\omega _X $ is semi-ample so that the canonical morphism $
f \colon X \to C $ exists. Suppose that for any scheme-theoretic fiber $X_c$ of $f$ we have $
\omega _{ X } | _{ X_c } \in \PPic^0 _{ X_c }. $ Then $\bfD ^{ b } \lb \coh X \rb$ admits no SOD. \end{theorem}
\begin{proof} Below we prove that $
\Bs | \omega_X | $ is contained in the union of finitely many fibers of $f$. Once it is shown, with the extra assumption $
\omega_X|_{X_c} \in \PPic^{0} _{X_c/\bfk} $, which was automatically fulfilled in the case of \pref{sc:kappa=1}, the arguments of \pref{sc:kappa=1} work without change.
The assumption $p_g(X)>0$ implies that $
h^0 ( \omega _{ X_c } ) > 0 $ holds for a general fiber $X_c$. Since $X$ is irreducible and $C$ is a non-singular curve, the morphism $f$ is flat (\cite[Chapter III, Proposition 9.7]{Hartshorne}). Combined with the torsion-freeness \cite[Theorem 2.1]{MR825838} and the theory of cohomology and base change \cite[Chapter III, Theorem 12.11]{Hartshorne}, we see that the direct image $f_*\omega_X$ is an invertible sheaf. The natural injective morphism $
f^* f_* \omega_X \to \omega_X $ provides us with an effective divisor $E$ on $X$ which fits in the canonical bundle formula \begin{equation}
\omega_X\simeq f^*f_*\omega_X \otimes \cO _X(E). \end{equation} This formula is a generalization of \eqref{eq:Kodaira}.
Arguing as in \cite[Proof of Theorem 12.1, Chapter V]{Barth-Hulek-Peters-Van_de_Ven}, we see that the morphism $f^*f_*\omega_X \to \omega_X$ is an isomorphism on smooth fibers: in fact, for a smooth fiber $X_c$ we can find a section $\stilde\in (f_*\omega_X)_c$ which, under the isomorphism $
f_*\omega_X\otimes_{\cO_C}\bfk(c)
\simto
H^0(X_c,\omega_X|_{X_c}), $
corresponds to the global trivialization of $\omega_X|_{X_c} \simeq \omega_{X_c}$. Since $f$ is projective, there exists an open neighborhood $U\ni c$ such that $\stilde$ is well-defined and vanishes nowhere on $f^{-1}(U)$. This shows that the morphism $f^*f_*\omega_X\to\omega_X$ is surjective (and hence is an isomorphism) on $f^{-1}(U)$. Thus we see that $E$ is contained in the union of the singular fibers of $f$.
Now since \begin{align}
H ^{ 0 } \lb X, f ^{ * } f _{ * } \omega _{ X } \rb
\simeq
H ^{ 0 } \lb C, f _{ * } \omega _{ X } \rb
\simeq
H ^{ 0 } \lb X, \omega _{ X } \rb \neq 0 \end{align} by the assumption, it follows from the sequence of maps \begin{align}
H ^{ 0 } \lb C, f _{ * } \omega _{ X } \rb
\xto[\simeq]{f ^{ * }}
H ^{ 0 } \lb X, f ^{ * } f _{ * } \omega _{ X } \rb
\xto[]{ \otimes s _{ E } }
H ^{ 0 } \lb X, \omega _{ X } \rb, \end{align} where $
s _{ E } $ is the section of the line bundle $
\cO _{ X } ( E ) $ which tautologically corresponds to $E$, that there exists an effective canonical divisor of $X$ whose support is contained in the union of finitely many fibers and the support of $E$. Thus we conclude the proof. \end{proof}
Let $
\PPic^{\tau} \subset \PPic $ be the subscheme of numerically trivial line bundles (see \cite[Section 9.6]{MR2222646}). If $
\PPic^0_{X_c}=\PPic^{\tau}_{X_c} $ holds for any singular fiber $
X_c $, then, since the morphism $f$ is defined by some multiple of $\omega_X$, the last assumption of \pref{th:when kappa=1} is automatically satisfied. This is always the case if $\dim X_c = 1$, but not in general. Actually, even worse, the following example due to Keiji Oguiso satisfies all the assumptions of \pref{th:when kappa=1} except the last one. The authors are not sure whether its derived category admits a non-trivial SOD.
\begin{example} In this example we assume $
\bfk = \bC $ for simplicity. Fix an integer $n\ge 4$ such that $n+1$ is a prime number. We construct a minimal $n$-fold $X$ of $\kappa{(X)}=1$ and $p_g(X)>0$ such that the canonical morphism $
f \colon X \to C $ has a singular fiber $
X_c $ which satisfies $
\omega _{ X } | _{ X_c } \nin \PPic^0 _{ X_c } $.
Consider the Fermat hypersurface
$ Y = \lb \sum _{ i = 0 } ^{ n } X _i ^{ n + 1 } = 0 \rb \subset \bP ^n$. It is a smooth projective Calabi-Yau $(n-1)$-fold in the strict sense; i.e., $
H ^{ i } ( \cO_Y )
=
0 $ for $
i = 1, 2, \dots, n - 2 $ and $
\omega_Y \simeq \cO_Y $. Note that one can describe the unique (up to constant) global section of $
\omega _{ Y } $ as the residue top form \begin{equation}
\psi = \Res_{Y} \frac {\sum_{i=0}^{n} (-1)^i X_i d X_0 \wedge \cdots \wedge \widehat{d X_i} \wedge \cdots \wedge dX_n} {\sum_iX_i^{n+1}}. \end{equation}
Pick a primitive $
( n + 1 ) $th root of unity $
\zeta $ to define the action of the cyclic group $
G = \bZ / ( n + 1 ) \bZ $ on $Y$ given as $
X_i \mapsto \zeta^i X_i $. Since we assumed that $ n+1 $ is a prime number, this action is free and hence we obtain the non-singular quotient $
Z = Y / G $. Since the top form $\psi$ is easily seen to be $
G $-invariant, it follows that $Z$ is also a Calabi-Yau $(n-1)$-fold.
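The $G$-invariance of $\psi$ can be checked directly; the following computation is our own sketch, writing $g$ for the generator acting by $X_i \mapsto \zeta^i X_i$. Each $X_i^{n+1}$ is $g$-invariant since $\zeta^{n+1} = 1$, while every term of the numerator of the residue representation is multiplied by $\zeta^{i} \cdot \zeta^{(0 + 1 + \cdots + n) - i} = \zeta^{n(n+1)/2}$, so that \begin{align}
g^* \psi = \zeta^{\frac{n(n+1)}{2}} \psi.
\end{align} Since $n+1$ is an odd prime, $n$ is even and $\frac{n(n+1)}{2} = \frac{n}{2}(n+1) \equiv 0 \pmod{n+1}$, whence $g^* \psi = \psi$.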
Let $
C' $ be the smooth projective model of the affine curve $
( y^2-x^{4(n+1)}+1 = 0 ) \subset \bA ^2, $ and let $G$ act on $C'$ by $
( x, y ) \mapsto ( \zeta x, y ) $. This action is effective but not free. As can easily be seen, $
g ( C' ) = 2 ( n + 1 ) $ and $
\gamma = ( y ^{ - 1 } x ^n ) d x $ defines a $G$-invariant regular $1$-form on $C'$.
Now consider the \'etale quotient $
\pi \colon Y \times C' \to ( Y \times C' ) / G =: X $. Since $
\psi \boxtimes \gamma \in
H^0 ( Y \times C', \omega_{Y\times C'})^G \simeq H^0 ( X, \omega_X ), $ we see $p_g(X)>0$. Also since $
\omega _{ Y \times C' } \simeq \pi^* \omega _X $, $X$ is minimal and $\omega_X$ is not trivial. Combined with the inequalities $
0 \le \kappa { ( X ) } \le \kappa { ( Y \times C' ) }
= 1 $, we see $
\kappa { ( X ) }
= 1 $. Also it is easily seen that the algebraic fiber space $
f \colon X \to C'/G =: C $ is induced by the pluri-canonical linear system of $
X $.
Let $c\in C$ be a branch point of $C'\to C$. By an abuse of notation we write $
(X_c) _{\mathrm{red}} = Z $, so that $
X_c = (n+1) Z $. Here we claim that $
\cO_{X_c} ( K_X ) \nin \PPic^0_{X_c/\bfk} $. To see this we prove $
\cO_Z ( K_X ) \nin \PPic^0_{Z/\bfk} $, which in turn is equivalent to $
\cO_Z ( Z ) \not \simeq \cO_Z $.
Take a Stein open neighborhood $
c \in V \subset C $ such that $
X_c = (n+1)Z $ is a deformation retract of $
U =f^{-1} ( V ) \subset X $ (see \cite[Chapter I, Theorem 8.8]{Barth-Hulek-Peters-Van_de_Ven}). Consider the following commutative diagram with exact rows.
\begin{equation}\label{eq:diagram}
\xymatrix{ H^1(U,\cO_U) \ar[r] \ar[d] & H^1(U,\cO^{*}_U) \ar[r]^{c_1} \ar[d] & H^2(U,\bZ) \ar[r] \ar[d]_{\simeq} & H^2(U,\cO_U) \ar[d] \\ H^1(X_c,\cO_{X_c}) \ar[r] \ar[d] & H^1(X_c,\cO^*_{X_c}) \ar[r]^{c_1} \ar[d] & H^2(X_c,\bZ) \ar[r] \ar[d]_{\simeq} & H^2(X_c,\cO_{X_c}) \ar[d] \\ H^1(Z,\cO_Z) \ar[r] & H^1(Z,\cO^*_Z) \ar[r]^{c_1} & H^2(Z,\bZ) \ar[r] & H^2(Z,\cO_Z)\\} \end{equation}
We check that the terms in the 1st and the 4th columns in the diagram \eqref{eq:diagram} all vanish. First, since $Z$ is a Calabi-Yau $(n-1)$-fold with $n\ge 4$, $H^i(Z,\cO_Z)=0$ for $i=1,2$. For the remaining four terms, note first that the higher direct image sheaves $\bR ^if_*\cO_X$ are locally free for all $i\ge 0$ due to the torsion-freeness theorem \cite[Theorem 2.1]{MR825838}, relative duality, and the fact that the base space $C$ is a non-singular curve. From this we obtain the isomorphisms $
\bR ^i f_* \cO_X \otimes \bC ( t ) \simeq H^i ( X_t, \cO _{ X_t } ) $ (\cite[Chapter III, Theorem 12.11]{Hartshorne}). Since $H^i(X_t,\cO_{X_t})=0$ for $i=1,2$ and general $t$, we obtain $\bR ^if_*\cO_X=0$ for $i=1,2$ and hence the desired vanishings.
As a result, it turns out that the six terms in the 2nd and the 3rd columns in \eqref{eq:diagram} are isomorphic to one another. Since we can easily check that $\cO_U(Z)\in H^1(U,\cO^*_U)$ is an $(n+1)$-torsion non-trivial line bundle, so is $\cO_{Z}(Z)$. \end{example}
\begin{remark} The structure sheaf $\cO_Z$ is \emph{not} an exceptional object. Actually one can easily see from the calculations above that $
\Ext^{n-1}_X ( \cO_Z, \cO_Z ) \neq 0 $. \end{remark}
\section{Generalization to twisted sheaves} \label{sc:twisted_sheaves}
Most of the results we have established so far can be generalized to derived categories of \emph{twisted} coherent sheaves without essential change.
\begin{definition} A \emph{cohomological Brauer class} of a scheme $
X $ is an element $
\alpha \in
\Br' ( X ) :=
H^2 _{ et } ( X, \cO_X^* ) $. A pair $
( X, \alpha ) $ will be called a \emph{cohomological Brauer pair}. \end{definition}
Given such a pair $
( X, \alpha ) $, we can define the abelian category $
\coh ( X, \alpha ) $ of $
\alpha $-twisted coherent sheaves. When $
X $ is defined over a field $
\bfk $, it comes with the structure of a $
\bfk $-linear category.
Fix an \'etale cover $
\cU = ( U_i ) _{ i \in I } $ of $
X $ on which the cohomology class $
\alpha $ is represented by a \v{C}ech cocycle \begin{equation}
\alpha = ( \alpha _{ i j k } ) _{ i, j, k \in I }
\in
\Zv^2 ( \cU, \cO ^{ * } )
=
\prod _{ i, j, k \in I } H^0 ( U _{ i j k }, \cO ^{ * } ), \end{equation} where $
U _{ i j k }
=
U_i \times _{ X } U_j \times _{ X } U_k $ (by an abuse of notation, we used the same symbol $
\alpha $ to describe its representative). Then an $
\alpha $-twisted coherent sheaf $
F $ is a collection of coherent sheaves $
F_i \in \coh U_i $ and isomorphisms $
\varphi _{ i j } \colon F_j | _{ U _{ i j } } \simto F_i | _{ U _{ i j } } $ which satisfy the $
\alpha $-twisted cocycle conditions \begin{equation}
\varphi _{ i j } \varphi _{ j k } \varphi _{ k i } = \alpha _{ i j k } \cdot \id _{ U _{ i j k } }
\colon
F _{ i } |_{ U _{ i j k } } \simto F _{ i } |_{ U _{ i j k } }. \end{equation} A morphism between such data is a collection of $
\cO _{ U_i } $-homomorphisms which satisfy the obvious compatibility conditions. Then one can check that the category thus obtained is abelian and independent of the choice of a representative of $
\alpha $ (see \cite[Lemma 1.2.3]{caldararu2000derived}). We write $
\bfD ( X, \alpha ) = \bfD ^b \coh ( X, \alpha ) $, so that $
\bfD ( X, 0 ) = \bfD ^{ b } \lb \coh X \rb $.
\begin{definition}\label{df:support_for_twisted_sheaves} For an $
\alpha $-twisted coherent sheaf $
F \in \coh ( X, \alpha ) $, its support $
\Supp F $ is defined as the closed subscheme $
\Spec _{ X } \lb \Image ( \cO_X \to \cEnd ( F ) ) \rb
\subset
X $. For an object $
F \in \bfD ( X, \alpha ) $, its support is defined as $
\Supp F = \bigcup _{ i } \Supp \cH^i ( F ) _{ \mathrm{red} } $. \end{definition}
For $
\alpha $-twisted sheaves $
F $ and $
G $, we can define the (untwisted!) coherent sheaf of homomorphisms $
\cHom ( F, G ) \in \coh ( X ) $. The following fact is an easy consequence of this observation.
\begin{lemma}\label{lm:Serre_functor_for_twisted_sheaves} Let $
( X, \alpha ) $ be a smooth proper cohomological Brauer pair. Then $
\otimes \omega _{ X } [ \dim X ] $ is the Serre functor of $
\bfD ( X, \alpha ) $. \end{lemma}
\begin{proof} See \cite[Example 1.4.3]{navas2010fourier}. \end{proof}
The next lemma is a direct consequence of the definition of twisted coherent sheaves.
\begin{lemma}\label{lm:point_module} For any closed point $
x \in X $, its structure sheaf $
\bfk ( x ) $ gives rise to the \emph{``$\alpha$-twisted skyscraper sheaf at $
x $''} $
\in \bfD ( X, \alpha ) $ for any cohomological Brauer class $
\alpha $. \end{lemma}
It is sometimes convenient to restrict ourselves to \emph{Brauer classes}. Brauer classes form a subgroup $
\Br ( X ) \subset \Br' ( X ) $, and they are characterized by either of the following properties (see \cite[Theorem 1.3.5]{caldararu2000derived}).
\begin{itemize}
\item There exists a sheaf of Azumaya algebras which represents the class $\alpha$.
\item There exists a non-zero locally free $\alpha$-twisted sheaf of finite rank. \end{itemize}
\noindent The difference between these two notions is very subtle. In fact, it is shown in \cite[Theorem 1.1]{de2003result} that $
\Br = \Br' $ holds on any projective scheme.
\begin{comment}
\begin{proposition}\label{pr:no_OD_for_twisted_sheaves} Let $
X $ be a smooth projective variety, and $
\alpha $ a cohomological Brauer class. Then $
D ( X, \alpha ) $ does not admit any OD. \end{proposition}
\begin{proof} Let $
D ( X, \alpha ) = \cA \oplus \cB $ be an OD. Take a generic closed point $
x \in X $. Since $
\bfk ( x ) $ is indecomposable, it should be contained in either $
\cA $ or $
\cB $. Suppose $
\bfk ( x ) \in \cA $. By the Bertini's theorem \cite[Chapter II, Theorem 8.18]{Hartshorne}, for any generic closed point $
y \in X $ there exists a smooth projective curve $
C $ embedded in $
X $ which connects $
y $ and $
x $. Since $
\alpha |_C $ is trivial by the obvious reason $
H^2 _{ et } ( C, \cO_C^* ) = 0 $, the sheaf $
\cO_C $ is an $
\alpha $-twisted sheaf on $
X $. Since $
\Hom ( \cO_C, \bfk ( x ) ) \neq 0 $ and $
\cO_C $ is indecomposable, we see $
\cO_C \in \cA $ and hence $
\bfk ( y ) \in \cA $. This implies that generic closed points are contained in $
\cA $.
If a closed point is contained in $
\cB $, then generic closed points are contained in $
\cB $ as well, which contradicts the irreducibility of $
X $. Hence we see all the closed points are contained in $
\cA $. Since they form a spanning class of $
D ( X, \alpha ) $ (see \cite[Proposition 1.4.5]{navas2010fourier}), we see $
\cB = 0 $. \end{proof}
\begin{remark} If $
\alpha $ is a genuine Brauer class, meaning that it comes from a sheaf of Azumaya algebras, then any $
\alpha $-twisted coherent sheaf on $
X $ is the quotient of a locally free $
\alpha $-twisted coherent sheaf (\cite[Lemma 2.1.4]{caldararu2000derived}). In this case, we can drop the smoothness assumption in the previous proposition and use locally free twisted sheaves as in the proof of \pref{lm:no_OD}. In fact, it is shown in \cite[Theorem 1.1]{de2003result} that any cohomological Brauer class on a projective scheme comes from a sheaf of Azumaya algebras. \end{remark}
\end{comment}
Now that we have prepared the basic lemmas, we can show the following.
\begin{theorem} Let $
( X, \alpha ) $ be a smooth proper Brauer pair. Assume that $
\omega_X $ is trivial on an open neighborhood of the canonical base locus $
\Bs | K_X | $. Then $
\bfD ( X, \alpha ) $ admits no SOD. \end{theorem} \begin{proof} Since we assumed $
X $ is proper, it satisfies the resolution property by \cite[Theorem 1.2]{MR2108211}. Combined with the assumption $
\alpha \in \Br ( X ) $, \cite[Lemma 2.1.4]{caldararu2000derived} implies that any coherent $
\alpha $-twisted sheaf on $
X $ is a quotient of a locally free $
\alpha $-twisted sheaf of finite rank. Hence those sheaves form a spanning class of $
\bfD ( X, \alpha ) $, and the original proof of \pref{lm:no_OD} can be used without change to show the non-existence of OD. Then the original proof of \pref{th:locally free} works with a minor modification, by replacing the corresponding lemmas with \pref{lm:Serre_functor_for_twisted_sheaves} and \pref{lm:point_module}. In order to show that both of the SOD summands cannot contain closed points at the same time, one can use a locally free twisted sheaf instead of $
\cO_X $. \end{proof}
Similarly, by using \cite[Lemma 2.1.4]{caldararu2000derived}, the original proof of \pref{th:negative_definite} works without change and we obtain
\begin{theorem} Let $
( X, \alpha ) $ be a smooth projective Brauer pair such that $
X $ is a minimal surface of general type satisfying $ \dim_{\bfk} H^0(X, \omega_X) > 1 $ and the condition (*). Then $
\bfD ( X, \alpha ) $ admits no SOD. \end{theorem}
\begin{remark} We expect that the other results can be generalized as well with a bit more effort. For \pref{th:rigidity}, all we need is the existence of classical generators in the derived category of twisted coherent sheaves. Similarly, for \pref{th:when kappa=1}, we have to check that the base change theorem \cite[Theorem 5.6]{MR2801403} works for Brauer pairs as well. \end{remark}
\end{document}
"id": "1508.00682.tex",
"language_detection_score": 0.7527658939361572,
"char_num_after_normalized": null,
"contain_at_least_two_stop_words": null,
"ellipsis_line_ratio": null,
"idx": null,
"lines_start_with_bullet_point_ratio": null,
"mean_length_of_alpha_words": null,
"non_alphabetical_char_ratio": null,
"path": null,
"symbols_to_words_ratio": null,
"uppercase_word_ratio": null,
"type": null,
"book_name": null,
"mimetype": null,
"page_index": null,
"page_path": null,
"page_title": null
} | arXiv/math_arXiv_v0.2.jsonl | null | null |
\begin{document}
\large
\title[]{ AN ALGORITHM TO CLASSIFY THE ASYMPTOTIC SET \\ ASSOCIATED TO A POLYNOMIAL MAPPING}
\makeatother
\author[Nguy\~{\^e}n Th\d{i} B\'ich Th\h{u}y]{Nguy\~{\^e}n Th\d{i} B\'ich Th\h{u}y} \address[{Nguy\~{\^e}n Th\d{i} B\'ich Th\h{u}y}]{UNESP, Universidade Estadual Paulista, ``J\'ulio de Mesquita Filho'', S\~ao Jos\'e do Rio Preto, Brasil} \email{bichthuy@ibilce.unesp.br} \maketitle \thispagestyle{empty} \begin{abstract} We provide an algorithm to classify the asymptotic sets of the dominant polynomial mappings $F: \mathbb{C}^3 \to \mathbb{C}^3$ of degree 2, using the definition of the so-called ``{\it fa{\c c}ons}'' in \cite{Thuy}. We obtain a classification theorem for the asymptotic sets of dominant polynomial mappings $F: \mathbb{C}^3 \to \mathbb{C}^3$ of degree 2. This algorithm can be generalized for the dominant polynomial mappings $F: \mathbb{C}^n \to \mathbb{C}^n$ of degree $d$, with any $(n, d) \in {( \mathbb{N}^*)}^2$. \end{abstract}
\section{Introduction} Let $F : \mathbb{C}^n \to \mathbb{C}^n$ be a polynomial mapping. Let us denote by $S_F$ the set of points at which $F$ is non proper, {\it i.e.}, $$S_F = \{ a \in \mathbb{C}^n \text{ such that } \exists \{ \xi_k\}_{k \in \mathbb{N}} \subset \mathbb{C}^n, \vert \xi_k \vert \text{ tends to infinity and } F(\xi_k) \text{ tends to } a\},$$ where $ \vert \xi_k \vert$ is the Euclidean norm of $\xi_k$ in $ \mathbb{C}^n$.
The set $S_F$ is called the asymptotic set of $F$. In the 1990s, Jelonek studied this set in depth
and described its principal properties. One of the most important results is that if $F$ is dominant, {\it i.e.}, $\overline{F( \mathbb{C}^n)} = \mathbb{C}^n$, then $S_F$ is either empty or a hypersurface \cite{Jelonek1}.
Notice that it is sufficient to define $S_F$ by considering sequences $\{\xi_k\}$ tending to infinity in the following sense: each coordinate of these sequences either tends to infinity or converges. In \cite{Thuy}, the sequences tending to infinity such that their images tend to the points in $S_F$ are labeled in terms of ``{\it fa{\c c}ons}'', as follows: for each point $a$ of $S_F$, there exists a sequence $\{\xi_k^a\}$ in the source space $ \mathbb{C}^n$, $\, \xi_k^a = (x_{k,1}^a, \ldots , x_{k,n}^a)$ tending to infinity such that
$F(\xi_k^a)$ tends to $a$.
Then there exists at least one index $i \in \mathbb{N}$, $1 \leq i \leq n$ such that $x_{k,i}^a$ tends to infinity. We define a ``{\it fa{\c c}on''} of the point $a \in S_F$
as a $(p,q)$-tuple $(i_1, \ldots , i_p)[j_1, \ldots, j_q]$ of integers where $x_{k,i_r}^{a}$ tends to infinity for $r = 1, \ldots , p$ and, for $s = 1, \ldots, q$, the sequence $ x_{k,j_s}^a$ tends to a complex number independently of the point $a$
as $a$ varies locally in $S_F$ (definition \ref{definitionXi}).
The aim of this paper is to provide an algorithm to classify the asymptotic sets of dominant polynomial mappings $F: \mathbb{C}^3 \to \mathbb{C}^3$ of degree 2, using the definition of ``{\it fa{\c c}ons}'' in \cite{Thuy}, and then generalize this algorithm for the general case.
One important tool of the algorithm is the notion of {\it pertinent variables}. The idea of the notion of pertinent variables is the following: Let $F=(F_1, F_2 , F_3): \mathbb{C}^3 \to \mathbb{C}^3$ be a dominant polynomial mapping of degree 2 such that $S_F \neq \emptyset$. We fix a {\it fa{\c c}on} $\kappa$ of $F$ and assume that $ \{ \xi_k \} $ is a sequence tending to infinity with the {\it fa{\c c}on} $\kappa$ such that $F(\xi_k)$ tends to a point of $S_F$. Since the degree of $F$ is 2, each coordinate $F_1, F_2$ and $F_3$ of $F$ is a linear combination of $f_1 = x_1$, $f_2 = x_2$, $f_3 = x_3$, $f_4 = x_1x_2$, $f_5 = x_2x_3$ and $f_6 = x_3x_1$. We call a {\it pertinent variable of $F$ with respect to the {\it fa{\c c}on} $\kappa$} a {\it minimal} linear combination of $f_1, \ldots, f_6$ such that the image of the sequence $\{ \xi_k \}$ by this combination does not tend to infinity (see definition \ref{pertinentvar}).
Moreover, if $F$ is dominant then by Jelonek, the set $S_F$ has pure dimension $2$ (see theorem \ref{theoremjelonek1}). With this observation and with the idea of pertinent variables, we:
\begin{enumerate} \item[$\bullet$] Make the list $(\mathcal{L})$ of all possible {\it fa{\c c}ons}
for a polynomial mapping $F: \mathbb{C}^3 \to \mathbb{C}^3$. This list is finite. In fact, there are 19 possible {\it fa{\c c}ons} (see the list (\ref{group})). \item[$\bullet$] Assume that a 2-dimensional irreducible stratum $S$ of $S_F$ admits $l$ fixed {\it fa{\c c}ons} in the list $(\mathcal{L})$, where $ 1 \leq l \leq 19$. \item[$\bullet$] Determine the pertinent variables of $F$ with respect to these $l$ {\it fa{\c c}ons}. \item[$\bullet$] Restrict the above pertinent variables using the dominancy of $F$ and the fact $\dim S = 2$. We get the form of $F$ in terms of these pertinent variables. \item[$\bullet$] Determine the geometry of $S$ in terms of the form of $F$. \item[$\bullet$] Let $l$ run over the list $(\mathcal{L})$ for $ 1 \leq l \leq 19$. We get all the possible $2$-dimensional irreducible strata of $S_F$. Since the dimension of $S_F$ is $2$, we get the list of all possible asymptotic sets $S_F$. \end{enumerate}
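The count of 19 {\it fa{\c c}ons} in the list $(\mathcal{L})$ can be recovered by a direct enumeration. The sketch below is our own illustration (not part of the paper's algorithm); it assumes the natural encoding of a {\it fa{\c c}on} as a pair of disjoint subsets $(P)[Q]$ of $\{1,2,3\}$, where $P \neq \emptyset$ collects the indices tending to infinity, $Q$ collects those with fixed finite limits, and the remaining indices have point-dependent limits.

```python
from itertools import product

n = 3
# A facon is encoded as a pair of disjoint subsets (P)[Q] of {1,...,n}:
# indices in P tend to infinity (P must be non-empty), indices in Q tend
# to fixed complex numbers, the remaining indices have point-dependent limits.
facons = []
for roles in product(('P', 'Q', 'free'), repeat=n):
    P = tuple(i + 1 for i, r in enumerate(roles) if r == 'P')
    Q = tuple(i + 1 for i, r in enumerate(roles) if r == 'Q')
    if P:  # at least one coordinate must tend to infinity
        facons.append((P, Q))

print(len(facons))  # 19 = 3**3 - 2**3, consistent with the list (L)
```

The count $3^3 - 2^3 = 19$ simply removes, from all assignments of the three roles to the three indices, those in which no index tends to infinity.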
With this idea, we provide the algorithm \ref{algorithmordre} to classify the asymptotic sets
of dominant polynomial mappings $F: \mathbb{C}^3 \to \mathbb{C}^3$ of degree 2, and we obtain the classification theorem \ref{theothuyb}. This algorithm can be generalized for the general case of polynomial mappings $F: \mathbb{C}^n \to \mathbb{C}^n$ of degree $d$, where $n \geq 3$ and $d \geq 2$ (algorithm \ref{algorithmordregeneral}).
\section{Dominancy, asymptotic set and ``fa{\c c}ons''} \subsection {Dominant polynomial mapping} \begin{definition} \label{defdominant} {\rm Let $F: \mathbb{C}^n \to \mathbb{C}^n$ be a polynomial mapping. Let $\overline{F( \mathbb{C}^n)}$ be the closure of $F( \mathbb{C}^n)$ in $ \mathbb{C}^n$. Then $F$ is called {\it dominant} if $\overline{F( \mathbb{C}^n)} = \mathbb{C}^n$, {\it i.e.}, if $F( \mathbb{C}^n)$ is dense in $ \mathbb{C}^n$. } \end{definition}
We provide here a lemma on the dominancy of a polynomial mapping $F: \mathbb{C}^n \to \mathbb{C}^n$ that we will use later on.
\begin{lemma} \label{lemmaindependant} Let $F =(F_1,\ldots, F_n) : \mathbb{C}^n \to \mathbb{C}^n$ be a dominant polynomial mapping. Then the coordinate polynomials $F_1, \ldots, F_n$ are independent. That means, there does not exist any coordinate polynomial $F_\eta$, where $\eta \in \{1, \ldots, n \}$, such that $F_\eta$ is a polynomial in the variables $F_1, \ldots, F_{\eta-1}, F_{\eta+1}, \ldots, F_n$. \end{lemma} \begin{preuve}
Assume that $F_\eta=\varphi (F_1, \ldots, F_{\eta-1}, F_{\eta+1}, \ldots, F_n)$ where $\eta \in \{1, \ldots, n \}$ and
$\varphi$ is a polynomial. Then the dimension of $F( \mathbb{C}^n)$ is less than $n$. Consequently, the dimension of $\overline{F( \mathbb{C}^n)}$ is less than $n$, which contradicts the fact that $F$ is dominant. \end{preuve}
\subsection{Asymptotic set} \begin{definition} \label{ensembleJelonek} {\rm Let $F: \mathbb{C}^n \to \mathbb{C}^n$ be a polynomial mapping. Let us denote by $S_F$ the set of points at which $F$ is non-proper, {\it i.e.}, \begin{equation*} \label{defSF} S_F = \{ a \in \mathbb{C}^n \text{ such that } \exists \{ \xi_k\}_{k \in \mathbb{N}} \subset \mathbb{C}^n, \vert \xi_k \vert \to \infty \text{ and } F(\xi_k) \to a\}, \end{equation*} where $ \vert \xi_k \vert$ is the Euclidean norm of $\xi_k$ in $ \mathbb{C}^n$. The set $S_F$ is called the asymptotic set of $F$. } \end{definition}
Recall that it is sufficient to define $S_F$ by considering sequences $\{\xi_k\}$ tending to infinity in the following sense: each coordinate of these sequences either tends to infinity or converges to a finite number.
\begin{theorem} \cite{Jelonek1} \label{theoremjelonek1} Let $F: \mathbb{C}^n \rightarrow \mathbb{C}^n$ be a polynomial mapping.
If $F$ is dominant, then $S_F$ is either an empty set or a hypersurface. \end{theorem}
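As a quick illustration of theorem \ref{theoremjelonek1}, consider the following toy example of our own, in dimension $n = 2$ (the paper itself treats $n = 3$): for the dominant mapping $F(x_1, x_2) = (x_1, x_1 x_2)$, the sequences $\xi_k = (1/k, k\beta)$ tend to infinity while $F(\xi_k) \to (0, \beta)$, so the hyperplane $(\alpha_1 = 0)$ lies in $S_F$. The limit can be checked symbolically:

```python
import sympy as sp

k, beta = sp.symbols('k beta', positive=True)

# Toy dominant map F(x1, x2) = (x1, x1*x2); the sequence xi_k = (1/k, k*beta)
# tends to infinity, yet its image stays bounded.
x1, x2 = 1/k, k*beta
F = (x1, x1*x2)

limits = tuple(sp.limit(c, k, sp.oo) for c in F)
print(limits)  # (0, beta): every point of the hyperplane (alpha_1 = 0) is reached
```

Since every point with $\alpha_1 \neq 0$ has the single proper preimage $(\alpha_1, \alpha_2/\alpha_1)$, here $S_F$ is exactly the hypersurface $(\alpha_1 = 0)$, as the theorem predicts.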
\subsection{``Fa{\c c}ons''.}
In this section, let us recall the definition of {\it fa{\c c}ons} as it appears in \cite{Thuy}.
For a better understanding of the definition of {\it fa{\c c}ons}, let us start by giving an example. \begin{example} \cite{Thuy} \label{exfacon} {\rm Let $F = (F_1, F_2, F_3): \mathbb{C}^3_{(x_1, x_2, x_3)} \to \mathbb{C}^3_{(\alpha_1, \alpha_2, \alpha_3)}$ be the polynomial mapping such that $$F_1:=x_1, \qquad F_2:= x_2, \qquad F_3:=x_1x_2x_3.$$ Notice that by the notations $ \mathbb{C}^3_{(x_1, x_2, x_3)}$ and $ \mathbb{C}^3_{(\alpha_1, \alpha_2, \alpha_3)}$, we want to distinguish the source space and the target space. That means, if we take $x = (x_1, x_2, x_3)$ then $x$ belongs to the source space $ \mathbb{C}^3_{(x_1, x_2, x_3)}$; if we take $\alpha = (\alpha_1, \alpha_2, \alpha_3)$ then $\alpha$ belongs to the target space $ \mathbb{C}^3_{(\alpha_1, \alpha_2, \alpha_3)}$. We now determine the asymptotic set $S_F$ using definition \ref{ensembleJelonek}.
Assume that there exists a sequence $\{ \xi_k = ( x_{1, k}, x_{2, k}, x_{3,k}) \}$ in the source space $ \mathbb{C}^3_{(x_1, x_2, x_3)}$ tending to infinity such that its image $\{ F(\xi_k) = ( x_{1, k}, x_{2, k}, x_{1, k} x_{2, k} x_{3,k}) \}$ does not tend to infinity. Then $x_{1, k}$ and $x_{2, k}$ cannot tend to infinity. Since the sequence $\{\xi_k\}$ tends to infinity, then $x_{3, k}$ must tend to infinity. Hence, we have the three following cases:
1) $x_{1, k}$ tends to 0, $x_{2, k}$ tends to a complex number $\alpha_2 \in \mathbb{C}$ and $x_{3, k}$
tends to infinity. In order to determine the biggest possible subset of $S_F$, we choose the sequences $x_{1, k}$ tending to 0 and $x_{3, k}$ tending to infinity in such a way that the product $x_{1, k}x_{3, k}$ tends to a complex number $\alpha_3$.
Let us choose, for example, $\xi_k = \left( \frac{1}{k}, \alpha_2, \frac{k\alpha_3}{\alpha_2} \right)$ with $\alpha_2 \neq 0$; then $F(\xi_k)$ tends to a point $a = (0, \alpha_2, \alpha_3)$ in $S_F$.
We get a 2-dimensional stratum $S_1$ of $S_F$, where $S_1 = (\alpha_1 = 0) \setminus 0\alpha_3 \subset \mathbb{C}^3_{(\alpha_1, \alpha_2, \alpha_3)}$. We say that a {\it ``fa{\c c}on''} of $S_1$ is $(3)[1]$. The symbol ``(3)'' in the {\it fa{\c c}on} $(3)[1]$ means that the {\it third} coordinate $ x_{3, k}$ of the sequence $\{ \xi_k \}$ tends to infinity. The symbol ``[1]'' in the {\it fa{\c c}on} $(3)[1]$ means that the {\it first} coordinate $x_{1, k}$ of the sequence $\{ \xi_k \}$ tends to 0, which is a fixed complex number not depending on the point $a = (0, \alpha_2, \alpha_3)$ when $a$ describes $S_1$. Notice that the second coordinate of the sequence $\{\xi_k\}$ tends to a complex number $\alpha_2$ depending on the point $a = (0, \alpha_2, \alpha_3)$ when $a$ varies, so the index ``2'' does not appear in the {\it fa{\c c}on} $(3)[1]$. Moreover, all the sequences tending to infinity such that their images tend to a point of $S_1$ admit only the {\it fa{\c c}on} $(3)[1]$.
The two following cases are similar to the case 1):
2) $x_{1, k}$ tends to a complex number $\alpha_1 \in \mathbb{C}$, $x_{2, k}$ tends to 0 and $x_{3, k}$
tends to infinity: then the {\it fa{\c c}on} $(3)[2]$ determines a 2-dimensional stratum $S_2$ of $S_F$, where $S_2 = (\alpha_2 = 0) \setminus 0\alpha_3 \subset \mathbb{C}^3_{(\alpha_1, \alpha_2, \alpha_3)}$.
3) $x_{1, k}$ and $x_{2, k}$ tend to 0, and $x_{3, k}$
tends to infinity: then the {\it fa{\c c}on} $(3)[1, 2]$ determines the 1-dimensional stratum $S_3$ where $S_3$ is the axis $0\alpha_3$ in $ \mathbb{C}^3_{(\alpha_1, \alpha_2, \alpha_3)}$.
In conclusion, we get \begin{enumerate} \item[$\bullet$] the asymptotic set $S_F$ of the given polynomial mapping $F$ as the union of the two planes $(\alpha_1 = 0)$ and $(\alpha_2 = 0)$ in $ \mathbb{C}^3_{(\alpha_1, \alpha_2, \alpha_3)}$, \item[$\bullet$] all the {\it fa{\c c}ons} of $S_F$ of the given polynomial mapping $F$: there are three fa{\c c}ons, namely $(3)[1]$, $(3)[2]$ and $(3)[1, 2]$. \end{enumerate}
\begin{remark} {\rm The chosen sequence $\left\{ \xi_k = \left( \frac{1}{k}, \alpha_2, \frac{k\alpha_3}{\alpha_2} \right) \right\}$ in 1) of the above example is called a {\it generic sequence} of the 2-dimensional irreducible component $(\alpha_1 = 0)$ (a plane) of $S_F$, since the image of any sequence of this type (with different $\alpha_2 \neq 0$ and $\alpha_3$) tends to a generic point of the plane $(\alpha_1 = 0)$. That means the images of all the sequences $\{\xi_k\}$ when $\alpha_2$ runs in $ \mathbb{C} \setminus \{ 0 \}$ and $\alpha_3$ runs in $ \mathbb{C}$ cover $S_1 = (\alpha_1 = 0) \setminus 0\alpha_3$ and $S_1$ is dense in the plane $(\alpha_1 = 0)$. We can easily see that a generic sequence of the 2-dimensional irreducible component $(\alpha_2 = 0)$ of $S_F$ is $\left( \alpha_1, \frac{1}{k}, \frac{k\alpha_3}{\alpha_1} \right)$ where $\alpha_1 \neq 0$. More generally, any sequence $\left\{ \left( \frac{1}{k^r}, \alpha_2, \frac{k^s\alpha_3}{\alpha_2} \right) \right \}$, where $r = s$ and $\alpha_2 \neq 0$, is a generic sequence of $(\alpha_1 = 0) \subset S_F$. Any sequence $\left \{ \left( \alpha_1, \frac{1}{k^r}, \frac{k^s\alpha_3}{\alpha_1} \right) \right \}$, where $r = s$ and $\alpha_1 \neq 0$, is a generic sequence of $(\alpha_2 = 0) \subset S_F$. } \end{remark}
} \end{example}
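The limits computed in case 1) of the example above can also be checked symbolically. The following sketch is our own illustration: it takes the generic sequence $\xi_k = \left( \frac{1}{k}, \alpha_2, \frac{k\alpha_3}{\alpha_2} \right)$ and verifies that $F(\xi_k) \to (0, \alpha_2, \alpha_3)$ (for simplicity the symbols are declared positive, which in particular enforces $\alpha_2 \neq 0$):

```python
import sympy as sp

k = sp.symbols('k', positive=True)
a2, a3 = sp.symbols('alpha2 alpha3', positive=True)  # alpha2 != 0 assumed

# Generic sequence for the stratum S_1 = (alpha_1 = 0), facon (3)[1]:
# x1 -> 0, x2 -> alpha2, x3 -> infinity, with x1*x3 balanced.
x1, x2, x3 = 1/k, a2, k*a3/a2
F = (x1, x2, x1*x2*x3)  # F = (x1, x2, x1*x2*x3) as in the example

limits = tuple(sp.limit(c, k, sp.oo) for c in F)
print(limits)  # (0, alpha2, alpha3)
```

Note that the third coordinate simplifies exactly to $\alpha_3$ before the limit is taken: the factor $x_1$ tending to $0$ balances the factor $x_3$ tending to infinity.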
In the light of this example, we recall here the definition of {\it fa{\c c}ons } in \cite{Thuy}.
\begin{definition} \cite{Thuy} \label{definitionXi} {\rm Let $F: \mathbb{C}^n \to \mathbb{C}^n$ be a dominant polynomial mapping such that $S_F \neq \emptyset$.
For each point $a$ of $S_F$, there exists a sequence $\{ \xi_k^a \} \subset \mathbb{C}^n$, $\, \xi_k^a = (x_{k,1}^a, \ldots , x_{k,n}^a)$ tending to infinity such that
$F(\xi_k^a)$ tends to $a$.
Then, there exists at least one index $i \in \mathbb{N}$, $1 \leq i \leq n$ such that $x_{k,i}^a$ tends to infinity when $k$ tends to infinity.
We define ``{\it a fa{\c c}on of tending to infinity of the sequence $\{\xi_k^a\}$''},
as a maximum $(p,q)$-tuple $\kappa = (i_1, \ldots , i_p)[j_1, \ldots, j_q]$ of different integers in $\{1, \ldots, n\}$, such that: \begin{enumerate} \item[i)] $x_{k,i_r}^{a}$ tends to infinity for all $r = 1, \ldots , p$,
\item[ii)] for all $s = 1, \ldots , q$, the sequence $x_{k,j_s}^a$ tends to a complex number independently of the point $a$
when $a$ varies locally, that means: \begin{enumerate} \item[ii.1)] either there exists in $S_F$ a subvariety $U_a$ containing $a$ such that for any point $a'$ in $U_a$,
there exists a sequence $\{ \xi_k^{a'} \} \subset \mathbb{C}^n$, $\, \xi_k^{a'} = (x_{k,1}^{a'}, \ldots , x_{k,n}^{a'})$ tending to infinity such that \begin{enumerate} \item[a)] $F(\xi_k^{a'}) $ tends to $a',$ \item[b)] $x_{k,i_r}^{a'}$ tends to infinity for all $r = 1, \ldots , p$, \item[c)] for all $s = 1, \ldots , q$, ${\displaystyle \lim_{k \to \infty}} x_{k, j_s}^{a'} = {\displaystyle \lim_{k \to \infty}} x_{k,j_s}^{a}$ and this limit is finite. \end{enumerate}
\item[ii.2)] or there does not exist such a subvariety, then we define $$\kappa = (i_1, \ldots , i_p)[j_1, \ldots, j_{n-p}],$$ where $x_{k,i_r}^{a}$ tends to infinity for all $r = 1, \ldots , p$ and $\{i_1, \ldots , i_p\} \cup \{j_1, \ldots, j_{n-p}\} = \{1, \ldots, n\}$. In this case, the set of points $a$ is a subvariety of dimension 0 of $S_F$. \end{enumerate} \end{enumerate} We call a {\it fa{\c c}on} of tending to infinity of the sequence $\{\xi_k^a\}$ also a
{\it fa{\c c}on} of $S_F$. If the image of a sequence corresponding to a {\it fa{\c c}on} $\kappa$ tends to a point of a stratum $S$ of $S_F$, we also call $\kappa$ a {\it fa{\c c}on} of the stratum $S$.
} \end{definition}
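To fix ideas, here is a minimal illustration of the definition; the mapping and the sequence below are our own example, not taken from the text.

```latex
% Illustration (not from the text): the dominant degree-2 mapping
\[
F(x_1, x_2, x_3) = (x_1,\; x_1 x_2,\; x_3).
\]
% For a point a = (0, \lambda, \mu), take the sequence
\[
\xi_k^a = \Bigl( \tfrac{\lambda}{k},\; k,\; \mu \Bigr), \qquad
F(\xi_k^a) = \Bigl( \tfrac{\lambda}{k},\; \lambda,\; \mu \Bigr) \to a .
\]
% Here x_{k,2}^a = k tends to infinity; x_{k,1}^a tends to 0 independently of a;
% and x_{k,3}^a tends to \mu, which depends on a.  Hence the fa\c con is
% \kappa = (2)[1].  For this F, every escape to infinity with bounded image
% forces x_2 \to \infty and x_1 \to 0, so S_F = \{\alpha_1 = 0\}.
```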
\section{An algorithm to stratify the asymptotic sets of the dominant polynomial mappings $F: \mathbb{C}^3 \to \mathbb{C}^3$ of degree 2}
In this section we provide an algorithm to stratify the asymptotic sets associated to dominant polynomial mappings $F: \mathbb{C}^3 \to \mathbb{C}^3$ of degree 2. In the last section, we show that this algorithm can be generalized to dominant polynomial mappings $F: \mathbb{C}^n \to \mathbb{C}^n$ of degree $d$, where $n \geq 3$ and $d \geq 2$. Let us recall that by the degree of a polynomial mapping $F = (F_1, \ldots, F_n): \mathbb{C}^n \to \mathbb{C}^n$, we mean the highest of the degrees of the coordinate polynomials $F_1, \ldots , F_n$.
Let us consider now a dominant polynomial mapping $F=(F_1, F_2 , F_3): \mathbb{C}^3_{(x_1, x_2, x_3)} \to \mathbb{C}^3_{(\alpha_1, \alpha_2, \alpha_3)}$ of degree 2 such that $S_F \neq \emptyset$. An important step of this section is to define the notion of {\it ``pertinent'' variables} of $F$. \subsection{Pertinent variables}
Let us explain at first the idea of the notion of {\it pertinent variables} of $F$: let $ \{ \xi_k \} = \{ (x_{1, k}, x_{2, k}, x_{3, k}) \}$ be a sequence in the source space $ \mathbb{C}^3_{(x_1, x_2, x_3)}$ tending to infinity such that $F(\xi_k)$ tends to a point of $S_F$ in the target space $ \mathbb{C}^3_{(\alpha_1, \alpha_2, \alpha_3)}$. Then the image of $\xi_k$ by any coordinate polynomial $F_\eta$, where $\eta = 1, 2, 3$, cannot tend to infinity. Notice that $F_\eta$ can be written as the sum of elements of the form $F_\eta^1 F_\eta^2$ such that if
$F_\eta^1(\xi_k)$ tends to infinity, then $F_\eta^2(\xi_k)$ must tend to 0. In other words, if one element of the above sum has a factor tending to infinity with respect to the sequence $\{\xi_k\}$, then this element must be ``balanced'' with another factor tending to zero with respect to the sequence $\{\xi_k\}$.
For example, assume that the coordinate sequences $x_{1, k}$ and $x_{2, k}$ of the sequence $ \{ \xi_k \}$ tend to infinity. Then $F_\eta$ can admit neither $x_1$ nor $x_2$ {\it alone} as an element of the above sum, but $F_\eta$ can admit $( x_1 - \nu x_2)$, $(x_1 - \nu x_2) x_1$, $(x_1 - \nu x_2) x_2$
as elements of this sum, where $\nu \in \mathbb{C} \setminus \{ 0 \}$. So we define
\begin{definition} \label{pertinentvar} {\rm Let $F=(F_1, F_2 , F_3): \mathbb{C}^3_{(x_1, x_2, x_3)} \to \mathbb{C}^3_{(\alpha_1, \alpha_2, \alpha_3)}$ be a polynomial mapping of degree 2 such that $S_F \neq \emptyset$.
Let us fix a {\it fa{\c c}on} $\kappa$ of $S_F$. Then there exists a sequence $\{ \xi_k \} \subset \mathbb{C}^3_{(x_1, x_2, x_3)}$ tending to infinity with the
{\it fa{\c c}on} $\kappa$ such that its image tends to a point in $S_F$. An element in the list \begin{equationth} \label{vairablepertinent} \begin{cases}
X_{h_i} = x_i, \text{ where } i = 1, 2, 3, \cr X_{h_j} = x_i + \nu_{h_j} x_j, \text{ where } i \neq j \text{ and } i, j = 1, 2, 3, \cr
X_{h_r} = (x_i + \nu_{h_r} x_j) x_l, \text{ where } i \neq j \text{ and } i, j, l = 1, 2, 3, \cr
X_{h_s} = x_i + \nu_{h_s} x_j x_l, \text{ where } i \neq j, \ j \neq l, \ l \neq i \text{ and } i, j, l = 1, 2, 3, \end{cases} \end{equationth} ($ \nu_{h_i}, \nu_{h_j}, \nu_{h_r}, \nu_{h_s} \in \mathbb{C} \setminus \{0\}$)
is called {\it a pertinent variable of $F$ with respect to the fa{\c c}on $\kappa$} if the image of the sequence $\{\xi_k\}$ by this element does not tend to infinity. } \end{definition}
\begin{remark} {\rm From now on, we will denote by $X_1, \ldots, X_h$ the pertinent variables of $F$ with respect to a {\it fa{\c c}on} and we write $$F = \tilde{F}(X_1, \ldots, X_h).$$
Notice that we can also determine the pertinent variables of $F$ with respect to a set of {\it fa{\c c}ons} when there is more than one {\it fa{\c c}on}. } \end{remark}
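For illustration (our own example, not from the text; note that monomials such as $x_1x_2$ are also used as pertinent variables later, in the proof of lemma \ref{cothuy2}):

```latex
% Illustration (not from the text): for the dominant degree-2 mapping
\[
F(x_1, x_2, x_3) = (x_1,\; x_1 x_2,\; x_3)
\]
% and the fa\c con \kappa = (2)[1] (sequences with x_{2,k} \to \infty and
% x_{1,k} \to 0), the quantities
\[
X_1 = x_1, \qquad X_2 = x_1 x_2, \qquad X_3 = x_3
\]
% stay bounded along every such sequence, so they are pertinent variables of F
% with respect to \kappa, and F = \tilde{F}(X_1, X_2, X_3) with \tilde{F}
% the identity mapping.
```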
\subsection{Idea of the algorithm}
The aim of the algorithm that we present in this section is to describe the list of all possible asymptotic sets $S_F$ for the dominant polynomial mappings $F: \mathbb{C}^3 \to \mathbb{C}^3$ of degree 2. In order to do that, we observe firstly that
\begin{enumerate} \item[$\bullet$] The list of all the possible {\it fa{\c c}ons} of $S_F$ for a polynomial mapping $F: \mathbb{C}^3 \to \mathbb{C}^3$ is \begin{equationth} \label{group} \begin{cases} \text{ 1) Group I}: (1, 2, 3), \cr \text{ 2) Group II}: (1,2), (2,3) \text{ and } (3,1), \cr \text{ 3) Group III}: (1), (2) \text{ and }(3), \cr \text{ 4) Group IV}: (1,2)[3], (1,3)[2] \text{ and } (2,3)[1], \cr \text{ 5) Group V}: (1)[2], (1)[3], (2)[1], (2)[3], (3)[1] \text{ and } (3)[2], \cr \text{ 6) Group VI}: (1)[2,3], (2)[1,3] \text{ and } (3)[1,2]. \end{cases} \end{equationth} This list has $1 + 3 + 3 + 3 + 6 + 3 = 19$ {\it fa{\c c}ons}.
\item [$\bullet$] Since $F$ is dominant, by theorem \ref{theoremjelonek1} the set $S_F$ has pure dimension $2$. \end{enumerate}
With these observations, we will:
\begin{enumerate} \item[$\bullet$] assume that a 2-dimensional irreducible stratum $S$ of $S_F$ admits $l$ fixed {\it fa{\c c}ons} in the list (\ref{group}), where $ 1 \leq l \leq 19$, \item[$\bullet$] determine the pertinent variables of $F$ with respect to these $l$ {\it fa{\c c}ons}, \item[$\bullet$] restrict the above pertinent variables using the dominance of $F$ and the fact $\dim S = 2$. We get the form of $F$ in terms of these pertinent variables, \item[$\bullet$] determine the geometry of $S$ in terms of the form of $F$, \item[$\bullet$] let $l$ run in the list (\ref{group}) for $ 1 \leq l \leq 19$. We get all the possible $2$-dimensional irreducible strata $S$ of $S_F$. Since the dimension of $S_F$ is $2$, we get the list of all the possible asymptotic sets $S_F$ of $F$. \end{enumerate}
The following example explains the process of the algorithm, {\it i.e.} how we can determine the geometry of a $2$-dimensional irreducible stratum $S$ of $S_F$ admitting some fixed {\it fa{\c c}ons}.
\subsection{Example}
\begin{example} \label{exal} {\rm
Let $F: \mathbb{C}^3 \to \mathbb{C}^3$ be a dominant polynomial mapping of degree 2. Assume that a 2-dimensional stratum $S$ of $S_F$ admits the two {\it fa{\c c}ons} $\kappa = (1, 2, 3)$ and $\kappa'= (1,2)[3]$. That means that all the sequences tending to infinity in the source space such that their images tend to the points of $S$ admit either the {\it fa{\c c}on} $\kappa = (1, 2, 3)$ or the {\it fa{\c c}on} $\kappa'= (1,2)[3]$. In order to describe the geometry of $S$, we perform the following steps:
{\bf Step 1:} Determine the pertinent variables of $F$ with respect to the {\it fa{\c c}ons} $\kappa = (1, 2, 3)$ and $\kappa'= (1,2)[3]$:
\begin{enumerate} \item[$\bullet$] With the {\it fa{\c c}on} $\kappa = (1, 2, 3)$, up to a suitable linear change of coordinates, the mapping $F$ admits the pertinent variables: $x_1-x_2$, $(x_1 - x_2)x_1$, $(x_1-x_2)x_2$, $(x_1 - x_2)x_3$, $x_1-x_3$, $(x_1 - x_3)x_1$, $(x_1-x_3)x_2$, $(x_1 - x_3)x_3$, $x_2-x_3$, $(x_2 - x_3)x_1$, $(x_2-x_3)x_2$, $(x_2 - x_3)x_3$, $x_1 - x_2x_3$, $x_2 - x_1x_3$ and $x_3 - x_1x_2$ (see definition \ref{pertinentvar}).
\item[$\bullet$] With the {\it fa{\c c}on} $\kappa' = (1,2)[3]$, as we refer to the same mapping $F$, then up to the {\it same} suitable linear change of coordinates, the mapping $F$ admits the pertinent variables: $x_3$, $x_1x_3$, $x_2x_3$, $x_1-x_2$, $(x_1 - x_2)x_1$, $(x_1-x_2)x_2$, $(x_1 - x_2)x_3$ and $(x_1 - x_3)x_3$. \end{enumerate}
Since $S$ admits both {\it fa{\c c}ons} $\kappa$ and $\kappa'$, the mapping $F$ admits only the common pertinent variables $x_1-x_2$, $(x_1 - x_2)x_1$, $(x_1-x_2)x_2$, $(x_1 - x_2)x_3$ and $(x_1 - x_3)x_3$. Let us denote $$X_1 =x_1-x_2, \quad X_2 = (x_1 - x_2)x_1, \quad X_3 = (x_1-x_2)x_2, \quad X_4 = (x_1 - x_2)x_3, \quad X_5 = (x_1 - x_3)x_3.$$ We can write \begin{equationth} \label{equaordre3} F = \tilde{F}(X_1, X_2, X_3, X_4, X_5). \end{equationth}
{\bf Step 2:} Assume that $\{ \xi_k = (x_{1,k}, x_{2,k}, x_{3, k})\}$ and $\{\xi'_k = (x'_{1,k}, x'_{2,k}, x'_{3,k})\}$ are two sequences tending to infinity with the {\it fa{\c c}ons} $\kappa$ and $\kappa'$, respectively.
A) Let us consider the fa{\c c}on $\kappa = (1, 2, 3)$ and its corresponding generic sequence $\{ \xi_k = (x_{1,k}, x_{2,k}, x_{3, k})\}$:
\begin{enumerate} \item[$\bullet$] Assume that $X_1(\xi_k) = (x_{1,k} - x_{2,k})$ tends to a non-zero complex number. Since $\kappa = (1, 2, 3)$, all three coordinate sequences $x_{1,k}, x_{2,k}$ and $x_{3 , k}$ tend to infinity. Hence $X_2 (\xi_k) = (x_{1,k} - x_{2,k})x_{1,k}$, $X_3(\xi_k) = (x_{1,k}-x_{2,k})x_{2,k}$ and $X_4 (\xi_k) = (x_{1,k} - x_{2,k})x_{3,k}$ tend to infinity. In this case, $X_2$, $X_3$ and $X_4$ cannot be pertinent variables of $F$ anymore. Then $F$ admits only two pertinent variables $X_1$ and $X_5$, that is, $F = \tilde{F}(X_1, X_5)$. We can see that the dimension of $S$ in this case is 1, which contradicts the fact that the dimension of $S$ is 2. Consequently, $(x_{1,k} - x_{2,k})$ tends to 0.
\item[$\bullet$] Assume that $(x_{1,k} - x_{3,k})$ tends to a non-zero complex number. Then $X_5(\xi_k) = (x_{1,k} - x_{3,k}) x_{3,k}$ tends to infinity, hence $X_{5}$ cannot be a pertinent variable of $F$ anymore, that is, $F = \tilde{F}(X_1, X_2, X_3, X_4)$. We choose a {\it generic} sequence $\{\xi_k\}$ satisfying the conditions: $X_1(\xi_k) = (x_{1,k} - x_{2,k})$ tends to zero and $(x_{1,k} - x_{3,k})$ tends to a non-zero complex number, for example, $\xi_k = (k + \alpha/k, k + \beta/k, k + \gamma)$ with $\gamma \neq 0$. Then $X_2(\xi_k) = (x_{1,k} - x_{2,k})x_{1,k}$, $X_3(\xi_k) = (x_{1,k}-x_{2,k})x_{2,k}$ and $X_4(\xi_k) = (x_{1,k} - x_{2,k})x_{3,k}$ tend to the same complex number $\alpha - \beta$. Combining this with the fact that $X_1(\xi_k) = (x_{1,k} - x_{2,k})$ tends to zero, we conclude that the dimension of $S$ in this case is 1, which contradicts the fact that the dimension of $S$ is 2. Consequently, $(x_{1,k} - x_{3,k})$ tends to 0.
\end{enumerate} Then, with the {\it fa{\c c}on} $\kappa$, both $(x_{1,k} - x_{2,k})$ and $(x_{1,k} - x_{3,k})$ tend to 0. Hence $(x_{2,k} - x_{3,k})$ also tends to 0. Let us choose a {\it generic} sequence $\{\xi_k\}$ satisfying these conditions, for example, the sequence $\left \{ \xi_k = (k + \alpha/k, k + \beta/k, k + \gamma/k) \right \}$. We see that $X_{2, k} = (x_{1,k} - x_{2,k})x_{1,k}$, $X_{3, k} =(x_{1,k}-x_{2,k})x_{2,k}$ and $X_{4, k} = (x_{1,k} - x_{2,k})x_{3,k}$ tend to the same complex number $\lambda = \alpha - \beta$. Moreover, $X_{5, k} = (x_{1,k} - x_{3,k})x_{3,k}$ tends to $\mu = \alpha- \gamma$. So we have \begin{equationth} \label{faconkappa} \lim_{k \to \infty} F(\xi_k) = \tilde{F}(0, \lambda, \lambda, \lambda, \mu). \end{equationth}
B) Let us consider now the fa{\c c}on $\kappa' = (1, 2)[ 3]$ and its corresponding generic sequence $\{ \xi'_k = (x'_{1,k}, x'_{2,k}, x'_{3, k})\}$, we have two cases:
\begin{enumerate}
\item[$\bullet$] If $X_1(\xi'_k) = (x'_{1,k} - x'_{2,k})$ tends to 0: then
$X_4(\xi'_k) = (x'_{1,k} - x'_{2,k})x'_{3,k}$ tends to 0. We have that $X_2(\xi'_k) = (x'_{1,k} - x'_{2,k})x'_{1,k}$ and $X_3(\xi'_k) = (x'_{1,k}-x'_{2,k})x'_{2,k}$ tend to the same complex number $\lambda'$ and $X_5(\xi'_k)= (x'_{1,k} - x'_{3,k}) x'_{3,k} $ tends to an arbitrary complex number $\mu'$. Then in this case, we have \begin{equationth} \label{faconkappa'1} F(\xi'_k) = \tilde{F}(0, \lambda', \lambda', 0, \mu'). \end{equationth}
\item[$\bullet$] If $X_1(\xi'_k) = (x'_{1,k} - x'_{2,k})$ tends to a non-zero complex number $\lambda' \in \mathbb{C}$: then
$X_2(\xi'_k)$ and $X_3(\xi'_k)$ tend to infinity, thus $X_2$ and $X_3$ cannot be pertinent variables of $F$ anymore. Moreover, $X_4(\xi'_k)$ tends to 0 and $X_5(\xi'_k)$ tends to an arbitrary complex number $\mu'$. Then in this case, we have \begin{equationth} \label{faconkappa'2} {\begin{matrix} F & = & \tilde{F}(X_1, & X_4, & X_5) \cr F(\xi'_k) & = & \tilde{F}( \lambda', & 0, &\mu'). \end{matrix}} \end{equationth}
\end{enumerate}
In conclusion, we have two cases: \begin{enumerate} \item[1)] From (\ref{equaordre3}), (\ref{faconkappa}) and (\ref{faconkappa'1}), we have
$${\begin{matrix} F & = & \tilde{F}(X_1, & X_2, & X_3, & X_4, & X_5) \cr
{\displaystyle \lim_{k \to \infty}} F(\xi_k) & = & \tilde{F}(0, & \lambda, & \lambda, & \lambda, & \mu) \cr
{\displaystyle \lim_{k \to \infty}} F(\xi'_k) & = & \tilde{F}(0, & \lambda', & \lambda', & 0, & \mu'). \end{matrix}} \eqno (*)$$
\item[2)] From (\ref{equaordre3}), (\ref{faconkappa}) and (\ref{faconkappa'2}), we have
$${\begin{matrix} F & = & \tilde{F}(X_1, & X_4, & X_5) \cr
{\displaystyle \lim_{k \to \infty}} F(\xi_k) & = & \tilde{F}(0, & \lambda, & \mu) \cr
{\displaystyle \lim_{k \to \infty}} F(\xi'_k) & = & \tilde{F}( \lambda', & 0, & \mu'). \end{matrix}} \eqno (**)$$
\end{enumerate}
{\bf Step 3:} We restrict the pertinent variables in step 2 using the three following facts: \begin{enumerate} \item[$\bullet$] $\kappa$ and $\kappa'$
are two {\it fa{\c c}ons} of the same stratum $S$,
\item[$\bullet$] $\dim S = 2$, \item[$\bullet$] $F$ is dominant. \end{enumerate}
Let us consider the two cases (*) and (**) determined in step 2: \begin{enumerate} \item[1)] $F$ is of the form (*): \begin{enumerate} \item[$\bullet$] At first, we use the fact that $\kappa$ and $\kappa'$
are two {\it fa{\c c}ons} of the same stratum $S$, then
if $X_i$ is a pertinent variable of $F$ then both $X_i(\xi_k)$ and $X_i(\xi'_k)$ must tend to either an arbitrary complex number or zero.
\item[$\bullet$] Since the dimension of $S$ is 2 then $F$ must have at least two pertinent variables $X_i$ and $X_j$ such that the images of the sequences $\xi_k$ and $\xi'_k$ by $X_i$ and $X_j$, respectively,
tend independently to two complex numbers. In this case: \begin{enumerate} \item[$+$] $F$ must admit either $X_2 = (x_1 - x_2) x_1$ or $X_3 = (x_1 - x_2) x_2$ as a pertinent variable, \item[$+$] $F$ must admit $X_5 = (x_1 - x_3)x_3$ as a pertinent variable. \end{enumerate}
\item[$\bullet$] Since $F$ is dominant, $F$ must admit at least 3 independent pertinent variables (see lemma \ref{lemmaindependant}). Then in this case, $F$ must also admit $X_1 = x_1 - x_2$ as a pertinent variable. We see that $X_1(\xi_k)$ and $X_1(\xi'_k)$ tend to 0. We can say that this variable is a ``free'' pertinent variable. The role of this variable is to guarantee that $F( \mathbb{C}^3)$ is dense in the target space $ \mathbb{C}^3$.
\end{enumerate}
\item[2)] $F$ is of the form (**): Similarly to the case 1), we can see easily that in this case $F$ can admit only $X_5$ as a pertinent variable. Then the dimension of $S$ is 1, which contradicts the fact that the dimension of $S$ is 2. \end{enumerate}
In conclusion, $F$ has the following form:
$${\begin{matrix} F & = & \tilde{F}(X_1, & X_2, & X_3, & X_5) \cr
{\displaystyle \lim_{k \to \infty}} F(\xi_k) & = & \tilde{F}(0, & \lambda, & \lambda, & \mu) \cr
{\displaystyle \lim_{k \to \infty}} F(\xi'_k) & = & \tilde{F}(0, & \lambda', & \lambda', & \mu'). \end{matrix}}$$
{\bf Step 4: Describe the geometry of the 2-dimensional stratum $S$}: On the one hand, the pertinent variables $X_2$ (or $X_3$) and $X_5$, which tend independently to two complex numbers, have degree 2; on the other hand, the degree of $F$ is 2. Hence the degree of the surface $S$ with respect to the variables $\lambda$ and $\mu$ (or $\lambda'$ and $\mu'$) is 1 (notice that by the degree of $S$, we mean the degree of the equation defining $S$). We conclude that $S$ is a plane.
} \end{example}
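The conclusion of this example can be checked on a concrete mapping; the following is our own illustration, not taken from the text.

```latex
% Illustration (not from the text): the dominant degree-2 mapping
\[
F(x_1, x_2, x_3) = \bigl( x_1 - x_2,\; (x_1 - x_2) x_1,\; (x_1 - x_3) x_3 \bigr)
\]
% admits X_1 = x_1 - x_2, X_2 = (x_1 - x_2)x_1 and X_5 = (x_1 - x_3)x_3 as
% pertinent variables.  The sequences
\[
\xi_k = \Bigl( k + \tfrac{\lambda}{k},\; k,\; k + \tfrac{\lambda - \mu}{k} \Bigr)
\qquad \text{and} \qquad
\xi'_k = \Bigl( k + \tfrac{\lambda'}{k},\; k,\; \tfrac{\mu'}{k} \Bigr)
\]
% tend to infinity with the fa\c cons (1,2,3) and (1,2)[3] respectively, with
\[
F(\xi_k) \to (0, \lambda, \mu), \qquad F(\xi'_k) \to (0, \lambda', \mu').
\]
% Both fa\c cons produce the same stratum: the plane S_F = \{\alpha_1 = 0\}.
```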
In light of example \ref{exal}, we now make explicit the algorithm for classifying the asymptotic sets of the non-proper dominant polynomial mappings $F=(F_1, F_2 , F_3): \mathbb{C}^3 \to \mathbb{C}^3$ of degree 2.
\subsection{Algorithm}
\begin{algorithm} \label{algorithmordre} {\rm We have the five following steps:
{\bf Step 1}: \begin{enumerate} \item[$\bullet$] Fix $l$ {\it fa{\c c}ons} $\kappa_1, \ldots, \kappa_l$ in the list (\ref{group}), where $1 \leq l \leq 19$. \item[$\bullet$] Determine the pertinent variables with respect to these $l$ {\it fa{\c c}ons} (knowing that they must all refer to the same mapping $F$). \end{enumerate}
{\bf Step 2}: \begin{enumerate} \item[$\bullet$] Assume that $S$ is a 2-dimensional stratum of $S_F$ admitting {\it only} the $l$ {\it fa{\c c}ons} $\kappa_1, \ldots, \kappa_l$ in step 1. \item[$\bullet$] Take {\it generic} sequences $\xi_k^1, \ldots, \xi_k^l$ corresponding to $\kappa_1, \ldots, \kappa_l$, respectively. \item[$\bullet$] Compute the limit of the images of the sequences $\xi_k^1, \ldots, \xi_k^l$ by the pertinent variables defined in step 1. \item[$\bullet$] Restrict the pertinent variables in step 1 using the fact $\dim S = 2$. \end{enumerate}
{\bf Step 3}: Restrict again the pertinent variables in step 2 using the three following facts:
\begin{enumerate} \item[$\bullet$] the {\it fa{\c c}ons} $\kappa_1, \ldots, \kappa_l$ belong to $S$: then the images of the generic sequences $\xi_k^1, \ldots, \xi^l_k$ by the pertinent variables defined in step 2 must tend to either an arbitrary complex number or zero,
\item[$\bullet$] $\dim S =2$: then there are at least two pertinent variables $X_i$ and $X_j$ such that the images of the generic sequences by $X_i$ and $X_j$, respectively, tend independently to two complex numbers,
\item[$\bullet$] $F$ is dominant: then there are at least 3 independent pertinent variables (see lemma \ref{lemmaindependant}). \end{enumerate}
{\bf Step 4}: Describe the geometry of the 2-dimensional irreducible stratum $S$ of $S_F$ in terms of the pertinent variables obtained in step 3.
{\bf Step 5}: Let $l$ run from 1 to $19$ in the list (\ref{group}).
} \end{algorithm}
\begin{theorem} {\rm With the algorithm \ref{algorithmordre}, we obtain the list of all possible asymptotic sets $S_F$ of non-proper dominant polynomial mappings $F: \mathbb{C}^3 \to \mathbb{C}^3$ of degree 2. } \end{theorem} \begin{preuve} On the one hand, the process of the algorithm \ref{algorithmordre} is possible, since the number of the {\it fa{\c c}ons} in the list (\ref{group}) is finite (19 {\it fa{\c c}ons}). On the other hand, by steps 2, 4 and 5, we consider all the possible cases for all 2-dimensional irreducible strata of $S_F$. Since the dimension of $S_F$ is 2 (see theorem \ref{theoremjelonek1}),
we get all the possible asymptotic sets $S_F$ of non-proper dominant polynomial mappings $F: \mathbb{C}^3 \to \mathbb{C}^3$ of degree 2. \end{preuve}
\section{Results} In this section, we use the algorithm \ref{algorithmordre} to prove the following theorem.
\begin{theorem} \label{theothuyb}
The asymptotic set of a non-proper dominant polynomial mapping $F: \mathbb{C}^3 \to \mathbb{C}^3$ of degree 2 is one of the five elements in the following list ${\mathcal{L}}_{S_F}^{(3,2)}$. Moreover, any element of this list can be realized as the asymptotic set of a dominant polynomial mapping $F: \mathbb{C}^3 \to \mathbb{C}^3$ of degree 2.
{\bf The list ${\mathcal{L}}_{S_F}^{(3,2)}$: } \\ 1) A plane. \\ 2) A paraboloid.\\ 3) The union of a plane $(\mathscr{P}): \quad r_1x_1 + r_2x_2 + r_3x_3 + r_4 = 0$ and a plane of the form $(\mathscr{P'}): \quad r'_1x_1 + r'_2x_2 + r'_3x_3 + r'_4 = 0,$ where two of the three coefficients $r'_1, r'_2, r'_3$ may be chosen freely; the remaining one and the fourth coefficient $r'_4$ are then determined. \\ 4) The union of a plane $(\mathscr{P}): \quad r_1x_1 + r_2x_2 + r_3x_3 + r_4 = 0$ and a paraboloid of the form $(\mathscr{H}): \quad r'_ix_i^2 + r'_jx_j + r'_lx_l + r'_4 = 0, \quad \{i, j, l \} = \{1, 2, 3 \},$ where two of the three coefficients $r'_1, r'_2, r'_3$ may be chosen freely; the remaining one and the fourth coefficient $r'_4$ are then determined. \\ 5) The union of three planes $$(\mathscr{P}): \quad r_1x_1 + r_2x_2 + r_3x_3 + r_4 = 0,$$ $$(\mathscr{P'}): \quad r'_1x_1 + r'_2x_2 + r'_3x_3 + r'_4 = 0,$$ $$(\mathscr{P''}): \quad r''_1x_1 + r''_2x_2 + r''_3x_3 + r''_4 = 0,$$ where:
a) for $(\mathscr{P'})$, two of the three coefficients $r'_1, r'_2, r'_3$ may be chosen freely; the remaining one and the fourth coefficient $r'_4$ are then determined,
b) for $(\mathscr{P''})$, two of the three coefficients $r''_1, r''_2, r''_3$ may be chosen freely; the remaining one and the fourth coefficient $r''_4$ are then determined.
\end{theorem}
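For illustration, here are two mappings (our own hedged examples, not taken from the text) that appear to realize items 2) and 3) of the list ${\mathcal{L}}_{S_F}^{(3,2)}$, written in the target coordinates $(\alpha_1, \alpha_2, \alpha_3)$:

```latex
% Illustrations (not from the text).
% Item 2 (a paraboloid): for the dominant degree-2 mapping
\[
F(x_1, x_2, x_3) = (x_1,\; x_2 x_3,\; x_1^2 + x_2),
\]
% any unbounded sequence with bounded image forces x_3 \to \infty and
% x_2 \to 0 (the fa\c con (3)[2]), so that
\[
S_F = \{ \alpha_3 - \alpha_1^2 = 0 \}.
\]
% Item 3 (a union of two planes): the dominant degree-2 mapping
\[
F(x_1, x_2, x_3) = (x_1,\; x_1 x_2,\; x_2 x_3)
\]
% admits the two fa\c cons (2)[1,3] and (3)[2], giving
\[
S_F = \{ \alpha_1 = 0 \} \cup \{ \alpha_2 = 0 \}.
\]
```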
In order to prove this theorem, we need the two following lemmas.
\begin{lemma} \label{cothuy2} Let $F = (F_1, F_2, F_3): \mathbb{C}^3 \to \mathbb{C}^3$ be a non-proper dominant polynomial mapping of degree 2. If $S_F$ contains a surface of degree higher than 1, then either $S_F$ is a paraboloid, or $S_F$ is the union of a paraboloid and a plane. \end{lemma}
\begin{preuve} Assume that $S_F$ contains a surface $(\mathscr{H})$. Since $\deg F = 2$, we have $1 \leq \deg (\mathscr{H}) \leq 2$.
A) We prove firstly that if $S_F$ contains a surface $(\mathscr{H})$ where $\deg (\mathscr{H}) = 2$ then $(\mathscr{H})$ is a paraboloid. Since $\deg (\mathscr{H}) = 2$ and $\deg F = 2$ then
$S_F$ admits one {\it fa{\c c}on} $\kappa$ in such a way that among the pertinent variables of $F$ with respect to the {\it fa{\c c}on} $\kappa$, there exists only one {\it free} pertinent variable. That means, one of $x_1$, $x_2$ and $x_3$ is a pertinent variable of $F$ with respect to $\kappa$. Without loss of generality, we assume that $S_F$ admits $\kappa = (3)[2]$ as a {\it fa{\c c}on}.
Assume that $ \{\xi_k = (x_{1,k}, x_{2,k}, x_{3,k})\}$ is a generic sequence tending to infinity with the
{\it fa{\c c}on} $\kappa$ and
i) $x_{3,k}$ tends to infinity and $x_{2,k}$ tends to 0 in such a way that $x_{2,k}x_{3,k}$ tends to an arbitrary complex number $\lambda$,
ii) $x_{1,k}$ tends to an arbitrary complex number $\mu$.
We see that $x^2_{1,k}$ and $(x_{1,k} + x_{2,k}) x_{1,k}$ tend to $\mu^2$.
Since $\deg F = 2$ and $\deg (\mathscr{H}) = 2$, then
i) one coordinate polynomial $F_{\eta}$, where $\eta \in \{1, 2, 3 \}$, must contain $x_1$ as an element of degree 1,
ii) another coordinate polynomial $F_{\eta'}$, where $\eta' \in \{1, 2, 3 \}$ and $\eta' \neq \eta$, must contain $x_1^2$ or $(x_1 + x_{2}) x_{1}$ as a pertinent variable.
Assume that the equation of the surface $(\mathscr{H})$ is
$r_1 \alpha_1^{p_1} + r_2 \alpha_2^{p_2} + r_3 \alpha_3^{p_3} + r_4 = 0.$ Since $x^2_{1,k}$ and $(x_{1,k} + x_{2,k}) x_{1,k}$ tend to the same complex number $\mu^2$, and $\deg (\mathscr{H}) = 2$, then there exists a unique index $i \in \{ 1, 2, 3\}$ such that
$r_i \neq 0$ and $p_i = 2$. If $r_j = 0$ or $p_j = 0$ for all $j \neq i$, $j \in \{1, 2, 3\}$, then $(\mathscr{H})$ is the union of two planes, each of degree 1. That contradicts the fact that $\deg (\mathscr{H}) = 2$. So, there exists $j \neq i$, $j \in \{ 1, 2, 3 \}$ such that $r_j \neq 0$ and $p_j = 1.$ Consequently, the surface $(\mathscr{H})$ is a paraboloid.
B) We prove now that if $S_F$ contains a paraboloid then the biggest possible $S_F$ is the union of this paraboloid and a plane.
Since $S_F$ contains a paraboloid, then with the same choice of the {\it fa{\c c}on} $\kappa=(3)[2]$ as in A), the mapping $F$ must be considered as a dominant polynomial mapping of pertinent variables $x_1, x_2, x_1x_2$ and $x_2x_3$, that means: $$F = \widetilde{F}( x_1, x_2, x_1x_2, x_2x_3).$$ We can see easily that if $x_2$ is a pertinent variable of $F$, then $S_F$ admits only the {\it fa{\c c}on} $\kappa$ and $S_F$ is a paraboloid.
Assume that $S_F$ contains another irreducible surface
$(\mathscr{H'})$ which is different from $(\mathscr{H})$. Then $F$ must be considered as a polynomial mapping of pertinent variables $x_1, x_1x_2$ and $x_2x_3$, that means: \begin{equationth} \label{obseverkappa} F = \widetilde{F}( x_1, x_1x_2, x_2x_3). \end{equationth} Let us consider now one {\it fa{\c c}on} $\kappa'$ of $(\mathscr{H'})$ such that $\kappa' \neq \kappa$ and let $ \{\xi'_k = (x'_{1,k}, x'_{2,k}, x'_{3,k})\}$ be a corresponding generic sequence of $\kappa'$. Notice that one coordinate of $F$ admits $x_1$ as a pertinent variable. Let us show that $x'_{1, k}$ tends to 0. Assume that
$x'_{1, k}$ tends to a non-zero complex number. As one coordinate of $F$ admits $x_1x_2$ as a pertinent variable, then $x'_{2, k}$ does not tend to infinity. We have two cases:
+ If $x'_{2, k}$ tends to 0, then in order for $ \xi'_k$ to tend to infinity, $x'_{3, k}$ must tend to infinity. Hence, the {\it fa{\c c}on} $\kappa'$ is $(3)[2]$. That contradicts the fact that $\kappa' \neq \kappa$.
+ If $x'_{2, k}$ tends to a non-zero finite complex number, since one coordinate of $F$ admits $x_2x_3$ as a factor, then $x'_{3, k}$ does not tend to infinity. That contradicts the fact that $ \xi'_k$ tends to infinity.
Therefore, $x'_{1, k}$ tends to 0. We have the following possible cases:
1) $\kappa' = (2)[1]$: then $F$ is a polynomial mapping of the form $F = \widetilde{F}(x_1, x_3, x_1x_2, x_1x_3).$
Combining with (\ref{obseverkappa}), then
$F = \widetilde{F}(x_1, x_1x_2).$ Therefore, $F$ is not dominant, a contradiction.
2) $\kappa' = (3)[1]$: then $F$ is a polynomial mapping of the form $F = \widetilde{F}(x_1, x_2, x_1x_2, x_1x_3).$ Combining with (\ref{obseverkappa}), then $F = \widetilde{F}(x_1, x_1x_2).$ Therefore, $F$ is not dominant, a contradiction.
3) $\kappa' = (2,3)[1]$: then $F$ is a polynomial mapping of the form $F = \widetilde{F}(x_1, x_1x_2, x_1x_3).$ Combining with (\ref{obseverkappa}), then $F = \widetilde{F}(x_1, x_1x_2).$ Therefore, $F$ is not dominant, a contradiction.
4) $\kappa' = (3)[1,2]$: then $F = \widetilde{F}(x_1, x_2, x_1x_2, x_2x_3, x_3x_1).$ Combining with (\ref{obseverkappa}), we have $F = \widetilde{F}(x_1, x_1x_2, x_2x_3).$ Since $x'_{1,k}x'_{2,k}$ and $x'_{1,k}$ tend to 0, then $\dim (\mathscr{H'}) \leq 1$, a contradiction.
5) $\kappa' = (2)[1,3]$: in this case, $F$ is a polynomial mapping admitting the form $$F = \widetilde{F}(x_1, x_3, x_1x_2, x_2x_3, x_3x_1).$$ Combining with (\ref{obseverkappa}), then $$F = \widetilde{F}(x_1, x_1x_2, x_2x_3).$$
We know that $x'_{1,k}$ tends to 0. Assume that $x'_{1,k} x'_{2,k}$ tends to a complex number $\lambda$ and $x'_{2,k}x'_{3,k}$ tends to a complex number $\mu$, we have $$(\mathscr{H'}) = \{ (\widetilde{F}_1(0, \lambda, \mu), \widetilde{F}_2(0, \lambda, \mu), \widetilde{F}_3(0, \lambda, \mu)): \lambda, \mu \in \mathbb{C}\},$$ where $\widetilde{F} = (\widetilde{F}_1, \widetilde{F}_2, \widetilde{F}_3)$.
Since $\deg F = 2$, then the degree of $\widetilde{F}_i$ with respect to the variables $\lambda$ and $\mu$ must be 1, for all $i \in \{ 1, 2, 3\}$. Consequently, the surface $(\mathscr{H'})$ is a plane.
\end{preuve}
\begin{lemma} \label{lemmeordren=3}
Let $F: \mathbb{C}^3 \to \mathbb{C}^3$ be a non-proper dominant polynomial mapping of degree 2. Assume that $S$ is a 2-dimensional irreducible stratum of $S_F$. Then $S$ admits at most two {\it fa{\c c}ons}. Moreover, if $S$ admits two {\it fa{\c c}ons}, then $S_F$ is a plane. \end{lemma}
\begin{preuve} Let $F: \mathbb{C}^3 \to \mathbb{C}^3$ be a non-proper dominant polynomial mapping of degree 2. Assume that $S$ is a 2-dimensional irreducible stratum of $S_F$.
A) We provide firstly the list of pairs of {\it fa{\c c}ons} that $S$ can admit and we write $F$ in terms of pertinent variables in each of these cases.
Let us fix a pair of {\it fa{\c c}ons} $(\kappa, \kappa')$ in the list (\ref{group}) and
assume that $S$ admits these two {\it fa{\c c}ons}. We use steps 1, 2, 3 and 4 of the algorithm \ref{algorithmordre}. In the same way as in example \ref{exal}, we can determine the form of $F$ in terms of its pertinent variables with respect to the two fixed {\it fa{\c c}ons}, after using the conditions on the dimension of $S$ and the dominance of $F$. Letting the two {\it fa{\c c}ons} $\kappa, \kappa'$ run in the list (\ref{group}), we get the following possibilities:
\noindent 1) $(\kappa, \, \kappa') = ((1, 2, 3), (i_1, i_2)[j])$, where $\{ i_1, i_2,j \} = \{ 1,2,3\}$ and $$F = \tilde{F}(x_{i_1}-x_{i_2}, (x_{i_1} - x_{i_2})x_{i_1}, (x_{i_1}-x_{i_2})x_{i_2}, (x_{i_1} - x_{i_2})x_j, (x_{i_1} - x_j)x_j).$$
\noindent 2) $(\kappa, \kappa') = \left((1, 2, 3), \, (i)[j_1, j_2]\right)$, where $\{ i, j_1, j_2 \} = \{ 1,2,3\}$ and \begin{align*} F = \tilde{F}((x_i-x_{j_1})x_{j_1}, (x_i-x_{j_1})x_{j_2}, (x_i - x_{j_2})x_{j_1}, (x_i-x_{j_2})x_{j_2}, (x_{j_1} - x_{j_2}), (x_{j_1} - x_{j_2}) x_i). \end{align*}
\noindent 3) $(\kappa, \kappa') = ((1, 2) [3], \, (i)[3, j])$, where $\{ i, j\} = \{ 1,2\}$, and $$F = \tilde{F}(x_3, x_jx_3, x_i x_3, (x_i-x_j)x_j).$$
\noindent 4) $(\kappa, \kappa')= ((1, 2)[3], \, (3)[1,2])$ and $$F = \tilde{F}(x_1x_3, x_2x_3, x_1 - x_2, r_1x_1x_3 + r_2x_2x_3 + r_3(x_1-x_2)x_1 + r_4(x_1 - x_2) x_2),$$ where $r_l \in \mathbb{C}$, for $l = 1, \ldots, 4,$
such that $(r_1 \neq 0, r_2 \neq 0, (r_3, r_4) \neq (0, 0))$, or $((r_1, r_2) \neq (0, 0), (r_3, r_4) \neq (0, 0)).$
\noindent 5) $(\kappa, \, \kappa') = ((1, 3) [2], \,(i)[2, j])$, where $\{i, j\} =\{1, 3\}$, and $$F = \tilde{F}(x_2, x_2x_j, x_i x_2, (x_i-x_j)x_j).$$
\noindent 6) $(\kappa, \kappa') = ((1, 3)[2], \, (2)[1,3])$ and $$F = \tilde{F}(x_1x_2, x_2x_3, x_1 - x_3, r_1x_1x_2 + r_2x_2x_3 + r_3(x_1-x_3)x_1 + r_4(x_1 - x_3) x_3),$$ where $r_l \in \mathbb{C}$, for $l = 1, \ldots, 4,$ such that $(r_1 \neq 0, r_2 \neq 0)$, or $(r_1, r_2) \neq (0, 0)$ and $(r_3, r_4) \neq (0, 0)$.
\noindent 7) $(\kappa, \kappa')= ((2, 3) [1], \, (i)[1, j])$, where $\{ i, j\} = \{ 2, 3\}$, and $$F = \tilde{F}(x_1, x_1x_i, x_1 x_j, (x_i-x_j)x_j).$$
\noindent 8) $(\kappa, \kappa') = ((2, 3)[1], \, (1)[2,3])$ and $$F = \tilde{F}(x_1x_3, x_1x_2, x_3 - x_2, r_1x_1x_3 + r_2x_1x_2 + r_3(x_3-x_2)x_3 + r_4(x_3 - x_2) x_2),$$ where $r_l \in \mathbb{C}$, for $l = 1, \ldots, 4,$ such that $(r_1 \neq 0, r_2 \neq 0, (r_3, r_4) \neq (0, 0))$ or $((r_1, r_2) \neq (0, 0), (r_3, r_4) \neq (0, 0))$.
\noindent 9) $(\kappa, \, \kappa') = ((1)[2,3], \, (i)[1, j])$, where $\{ i, j \} = \{ 2, 3\}$, and $$F = \tilde{F}(x_{j}, x_1x_{i}, r_1x_{i}x_{j} + r_2x_{1}x_j),$$ where $r_1$ and $r_2$ are non-zero complex numbers.
B) We prove now that $S$ admits at most two {\it fa{\c c}ons}. We prove the result for the first case of the above possibilities: $(\kappa, \, \kappa') = ((1, 2, 3), (i_1, i_2)[j])$, where $\{ i_1, i_2,j \} = \{ 1,2,3\}$. The other cases are proved similarly. For example, assume that $S$ admits two {\it fa{\c c}ons} $\kappa = (1,2,3)$ and $\kappa' = (1,2)[3]$. We prove that $S$ cannot admit the third {\it fa{\c c}on} $\kappa''$ different from $\kappa$ and $\kappa'$.
Let $\kappa''$ be a {\it fa{\c c}on} of $S_F$. Let us denote by $\{\xi_k'' = (x''_{1, k}, x''_{2, k}, x''_{3, k})\}$ a generic sequence corresponding to $\kappa''$. By example \ref{exal}, the mapping $F$ admits $X_1$, $X_2$ (or $X_3$), and $X_5$ as pertinent variables, where $$X_1 = x_1 - x_2, \quad X_2 = (x_1 - x_2)x_1, \quad X_3 = (x_1 - x_2)x_2, \quad X_5 = (x_1 - x_3)x_3.$$ Without loss of generality, we may assume that $X_2$ is a pertinent variable of $F$.
We prove that $X_1(\xi''_k) = (x''_{1, k} - x''_{2, k})$ tends to 0.
Assume that $X_1(\xi''_k) = (x''_{1, k} - x''_{2, k})$ tends to a non-zero complex number.
Then: \begin{enumerate} \item[$+ $] If $x''_{1,k}$ tends to infinity, then $X_2(\xi''_k)$ tends to infinity, which contradicts the fact that $X_2$ is a pertinent variable of $F$. \item[$+ $] If $x''_{2,k}$ tends to infinity, then $x''_{1,k}$ also tends to infinity since $(x''_{1, k} - x''_{2, k})$ tends to a non-zero complex number. That implies that $X_2(\xi''_k)$ tends to infinity, which contradicts the fact that $X_2$ is a pertinent variable of $F$. \end{enumerate} Hence, $x''_{1, k}$ and $x''_{2, k}$ cannot tend to infinity. Consequently,
$x''_{3, k}$ must tend to infinity. Therefore, $X_5(\xi''_k) = (x''_{1, k} - x''_{3, k}) x''_{3, k}$ tends to infinity, which contradicts the fact that $X_5$ is a pertinent variable of $F$. We conclude that $(x''_{1, k} - x''_{2, k})$ tends to 0.
Then we have two possibilities:
\begin{enumerate} \item[a)] either both of $x''_{1, k}$ and $x''_{2, k}$ tend to 0: then $X_1(\xi''_k) = (x''_{1, k} - x''_{2, k})$, $X_2(\xi''_k) = (x''_{1, k} - x''_{2, k}) x''_{1, k}$ and $X_3(\xi''_k) =(x''_{1, k} - x''_{2, k}) x''_{2, k}$ tend to 0, which provides the contradiction with the fact that the dimension of $S$ is 2,
\item[b)] or both of $x''_{1, k}$ and $x''_{2, k}$ tend to infinity: since $X_5$ is a pertinent variable of $F$, the coordinate $x''_{3, k}$ tends to 0 or to infinity. We conclude that the {\it fa{\c c}on} $\kappa''$ is $(1,2,3)$ or $(1,2)[3]$.
\end{enumerate}
In conclusion, $S$ admits only the two {\it fa{\c c}ons} $\kappa = (1,2,3)$ and $\kappa' = (1,2)[3]$.
C) We now prove that if there exists a 2-dimensional irreducible stratum $S$ of $S_F$ admitting two {\it fa{\c c}ons}, then $S_F$ is a plane. As in B), we prove this fact for the first case of the possibilities in A), that is, the case $(\kappa, \, \kappa') = ((1, 2, 3), (i_1, i_2)[j])$, where $\{ i_1, i_2,j \} = \{ 1,2,3\}$; the other cases are proved similarly. For example, assume that $S$ admits the two {\it fa{\c c}ons} $\kappa = (1,2,3)$ and $\kappa' = (1,2)[3]$. By the same arguments as in the example \ref{exal}, the stratum $S$ is a plane. By B), the asymptotic set $S_F$ also admits {\it only} the two {\it fa{\c c}ons} $\kappa = (1,2,3)$ and $\kappa' = (1,2)[3]$. In other words, $S_F$ and $S$ coincide. We conclude that $S_F$ is a plane.
\end{preuve}
We now prove the theorem \ref{theothuyb}.
\begin{preuve}({\it Proof of theorem \ref{theothuyb}}). The cases 1) and 2) follow easily from the lemmas \ref{lemmeordren=3} and \ref{cothuy2}, respectively.
Let us prove the cases 3), 4) and 5). In these cases, on the one hand, since $S_F$ contains at least two irreducible surfaces, $S_F$ admits at least two {\it fa{\c c}ons}; on the other hand, by the lemma \ref{lemmeordren=3}, each irreducible surface of $S_F$ admits only one {\it fa{\c c}on.} Assume that $\kappa$, $\kappa'$ are two different {\it fa{\c c}ons} of $S_F$ and $ \{\xi_k\} $, $\{\xi'_k \}$ are two corresponding generic sequences, respectively.
We use the algorithm \ref{algorithmordre} and, in the same way as in the proofs of the lemmas \ref{cothuy2} and \ref{lemmeordren=3}, we can easily see that the pair of {\it fa{\c c}ons} $(\kappa, \kappa')$ must belong to one of the following pairs of groups: (I, IV), (I, V), (I, VI), (II, VI), (IV, V), (IV, VI), (V, VI) and (VI, VI) in the list (\ref{group}).
i) If $\kappa$ belongs to the group I and $\kappa'$ belongs to the group IV, for example $\kappa = (1, 2, 3)$ and $\kappa'= (1,2)[3]$. From the example \ref{exal}, $F$ is a dominant polynomial mapping which can be written in terms of pertinent variables:
$${\begin{matrix} F &= &\tilde{F}(x_1-x_2, & (x_1 - x_2)x_1, &(x_1-x_2)x_2, &(x_1 - x_2)x_3, &(x_1 - x_3)x_3) \cr
{\displaystyle \lim_{k \to \infty}} F(\xi_k) & = & \tilde{F}(0, & \lambda, & \lambda, & \lambda, & \mu), \cr
{\displaystyle \lim_{k \to \infty}} F(\xi'_k) & = & \tilde{F}(0, & \lambda', & \lambda', & 0, & \mu'), \end{matrix}}$$ where $\lambda, \mu, \lambda', \mu' \in \mathbb{C}$ (see (\ref{equaordre3}), (\ref{faconkappa}) and (\ref{faconkappa'1})). We see that, with the sequence $\{\xi_k\}$,
the pertinent variables tending to arbitrary complex numbers have degree 2; hence the {\it fa{\c c}on} $\kappa$ provides a plane, since the degree of $F$ is 2.
In the same way, the {\it fa{\c c}on} $\kappa'$ provides a plane. Furthermore, it is easy to check that these two planes must have the form of the case 3) of the theorem and $S_F$ is the union of these two planes.
ii) If $\kappa$ belongs to the group I and $\kappa'$ belongs to the group V, for example $\kappa = (1, 2, 3)$ and $\kappa'= (1)[2]$. Then, on the one hand, $F$ is a dominant polynomial mapping which can be written in terms of pertinent variables:
\begin{equation*} \label{etyna1}
F = \tilde{F}(x_2-x_3, (x_2 - x_3)x_2, (x_2-x_3)x_3, (x_1 - x_2)x_2).
\end{equation*} On the other hand, by the same arguments as in the example \ref{exal}, and for suitable generic sequences $\{\xi_k\}$ and $\{\xi'_k\}$, we obtain:
$${\begin{matrix}
{\displaystyle \lim_{k \to \infty}} F(\xi_k) & = & \tilde{F}(0, & \lambda, & \lambda, & \mu), \cr
{\displaystyle \lim_{k \to \infty}} F(\xi'_k) & = & \tilde{F}(\lambda', & 0, & \lambda'^2, & \mu'), \end{matrix}}$$ where $\lambda, \mu, \lambda', \mu' \in \mathbb{C}.$ By the same arguments as in the case i), we have:
a) either the {\it fa{\c c}ons} $\kappa$ and $\kappa'$ provide two planes of the form of the case 3) of our theorem,
b) or the {\it fa{\c c}on} $\kappa$ provides the plane $(\mathscr{P})$ and, by the lemma \ref{cothuy2}, the {\it fa{\c c}on} $\kappa'$ provides the paraboloid $(\mathscr{H})$ of the form of the case 4) of our theorem.
\noindent By an easy calculation, we see that if $S_F$ admits another {\it fa{\c c}on} $\kappa''$ which is different from the {\it fa{\c c}ons} $\kappa$ and $\kappa'$, then this {\it fa{\c c}on} provides a 1-dimensional stratum contained in $(\mathscr{P})$ or contained in $(\mathscr{H}).$
iii) Proceeding in the same way for the cases where $(\kappa, \kappa')$ is a pair of {\it fa{\c c}ons} belonging to the pairs of groups: (I, VI), (II, VI), (IV, V), (IV, VI) and (V, VI), we obtain the case 3) or the case 4) of the theorem.
iv) Consider now the case where $\kappa$ and $\kappa'$ belong to the group VI, for example $\kappa = (1)[2,3]$ and $\kappa' =(2)[1,3]$. Then $F$ is a dominant polynomial mapping which can be written in terms of pertinent variables: \begin{equation*} \label{etyna5} F = \tilde{F}(x_3, x_1x_2, x_2x_3, x_3x_1). \end{equation*} By the same arguments as in the example \ref{exal}, and for suitable generic sequences $\{\xi_k\}$ and $\{\xi'_k\}$, we obtain: $${\begin{matrix}
{\displaystyle \lim_{k \to \infty}} F(\xi_k) & = & \tilde{F}( 0, & \lambda, & 0, & \mu), \cr
{\displaystyle \lim_{k \to \infty}} F(\xi'_k) & = & \tilde{F}(0, & \lambda', & \mu', & 0), \end{matrix}}$$
where $\lambda, \mu, \lambda', \mu' \in \mathbb{C}.$ In this case, we have two possibilities:
a) either $F$ admits $x_3$ as a free variable: This case is similar to the case i) and we have the case 3) of the theorem,
b) or $F$ does not admit $x_3$ as a free variable, that means \begin{equation} \label{etyna6} F = \tilde{F}(x_1x_2, x_2x_3, x_3x_1). \end{equation} In this case, $S_F$ admits one more {\it fa{\c c}on} $\kappa'' = (3)[1,2]$ such that with a corresponding suitable generic sequence $ \{\xi''_k \}$ of $\kappa''$, we have $${\begin{matrix}
{\displaystyle \lim_{k \to \infty}} F(\xi_k) & = & \tilde{F}( \lambda, & 0, & \mu), \cr
{\displaystyle \lim_{k \to \infty}} F(\xi'_k) & = & \tilde{F}(\lambda', & \mu', & 0), \cr
{\displaystyle \lim_{k \to \infty}} F(\xi''_k) & = & \tilde{F}(0, & \lambda'', &\mu''),
\end{matrix}}$$
where $\lambda'', \mu'' \in \mathbb{C}.$
In this case, $S_F$ is the union of three planes whose forms are as in the case 5) of the theorem. \end{preuve}
\section{The general case}
The algorithm \ref{algorithmordre} can be generalized to classify the asymptotic sets of non-proper dominant polynomial mappings $F: \mathbb{C}^n \to \mathbb{C}^n$ of degree $d$, where $n \geq 3$ and $d \geq 2$, as follows. \begin{algorithm} \label{algorithmordregeneral} {\rm We have the following six steps:
{\bf Step 1}: Determine the list ${\mathcal{L}}_F^{(n,d)}$ of all the possible {\it fa{\c c}ons} of $S_F$.
{\bf Step 2}: Fix $l$ {\it fa{\c c}ons} $\kappa_1, \ldots, \kappa_l$ in the list ${\mathcal{L}}_F^{(n,d)}$ obtained in step 1.
Determine the pertinent variables with respect to these $l$ {\it fa{\c c}ons} (in a similar way to the definition \ref{pertinentvar}).
{\bf Step 3}:
\begin{enumerate} \item[$\bullet$] Assume that $S$ is an $(n-1)$-dimensional stratum of $S_F$ admitting {\it only} the $l$ {\it fa{\c c}ons} $\kappa_1, \ldots, \kappa_l$ fixed in step 2. \item[$\bullet$] Take {\it generic} sequences $\xi_k^1, \ldots, \xi_k^l$ corresponding to $\kappa_1, \ldots, \kappa_l$, respectively. \item[$\bullet$] Compute the limits of the images of the sequences $\xi_k^1, \ldots, \xi_k^l$ under the pertinent variables determined in step 2. \item[$\bullet$] Restrict the pertinent variables determined in step 2 using the fact that $\dim S = n-1 $. \end{enumerate}
{\bf Step 4}: Restrict again the pertinent variables from step 3, using the following three facts:
\begin{enumerate} \item[$\bullet$] all the {\it fa{\c c}ons} $\kappa_1, \ldots, \kappa_l$ belong to $S$: then the images of the generic sequences $\xi_k^1, \ldots, \xi^l_k$ by the pertinent variables defined in the step 2 must tend to either an arbitrary complex number or zero,
\item[$\bullet$] $\dim S =n-1$: then
there are at least $n-1$ pertinent variables $X_{i_1}, \ldots, X_{i_{n-1}}$ such that the images of the sequences $\{\xi_k^1\}, \ldots, \{\xi_k^l\}$ by $X_{i_1}, \ldots, X_{i_{n-1}}$, respectively,
tend independently to $(n-1)$ complex numbers,
\item[$\bullet$] F is dominant: then there are at least $n$ independent pertinent variables (see lemma \ref{lemmaindependant}).
\end{enumerate}
{\bf Step 5}: Describe the geometry of the $(n-1)$-dimensional irreducible stratum $S$ in terms of the pertinent variables obtained in step 4.
{\bf Step 6}: Let the choice of the $l$ {\it fa{\c c}ons} run over the list obtained in step 1.
} \end{algorithm}
\begin{theorem} {\rm With the algorithm \ref{algorithmordregeneral}, we obtain all possible asymptotic sets of non-proper dominant polynomial mappings $F: \mathbb{C}^n \to \mathbb{C}^n$ of degree $d$. } \end{theorem}
\begin{preuve} On the one hand, by the theorem \ref{theoremjelonek1}, the dimension of $S_F$ is $n-1$. By steps 3, 5 and 6, we consider all the possible cases of the $(n-1)$-dimensional irreducible strata of $S_F$; since $\dim S_F = n - 1$, we thus obtain all the possible asymptotic sets $S_F$ of non-proper dominant polynomial mappings $F: \mathbb{C}^n \to \mathbb{C}^n$ of degree $d$.
On the other hand, the number of all the possible {\it fa{\c c}ons} of a polynomial mapping $F: \mathbb{C}^n \to \mathbb{C}^n$ is finite, as shown in the following lemma: \end{preuve}
\begin{lemma} {\cite{Thuy}} \label{pro nombre} {\rm Let $F: \mathbb{C}^n \to \mathbb{C}^n$ be a polynomial mapping such that $S_F \neq \emptyset$. Then, the number of all possible {\it fa{\c c}ons} of $S_F$ is finite. More precisely, the maximum number of {\it fa{\c c}ons} of $S_F$ is equal to $$ \sum_{t=1}^n C_t^n + {\displaystyle \sum_{t=1}^{n-1}} C_t^n + {\displaystyle \sum_{t=2}^{n-1}} A_t^{n}, $$ where $$C_t^n = \frac{n!}{t!(n-t)!}, \quad \quad \quad A_t^n = \frac{n!}{(n-t)!}.$$ } \end{lemma} \begin{preuve} {\cite{Thuy}} Assume that $\kappa = (i_1, \ldots , i_p)[j_1, \ldots, j_q]$ is a {\it fa{\c c}on} of $S_F$. We have the following cases:
i) If $\{ i_1, \ldots, i_p\} \cup \{ j_1, \ldots, j_q\} = \{ 1, \ldots , n\}$: we have ${\displaystyle \sum_{t=1}^n } C_t^n$ possible {\it fa{\c c}ons}.
ii) If $\{ i_1, \ldots, i_p\} \cup \{ j_1, \ldots, j_q\} \neq \{ 1, \ldots , n\}$ and $\{ j_1, \ldots, j_q\} = \emptyset$: we have ${\displaystyle \sum_{t=1}^{n-1}} C_t^n$ possible {\it fa{\c c}ons}.
iii) If $\{ i_1, \ldots, i_p\} \cup \{ j_1, \ldots, j_q\} \neq \{ 1, \ldots , n\}$ and $\{ j_1, \ldots, j_q\} \neq \emptyset$: We have ${\displaystyle \sum_{t=2}^{n-1}} A_t^{n}$ possible {\it fa{\c c}ons}.
Since the three cases are mutually exclusive, the maximum number of {\it fa{\c c}ons} of $S_F$ is equal to $${\displaystyle \sum_{t=1}^n } C_t^n + {\displaystyle \sum_{t=1}^{n-1}} C_t^n + {\displaystyle \sum_{t=2}^{n-1}} A_t^{n}.$$ \end{preuve}
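As a quick arithmetic check of the bound in the lemma, the following minimal Python sketch (the function name `max_facons` is ours, not from the paper) evaluates the formula $\sum_{t=1}^n C_t^n + \sum_{t=1}^{n-1} C_t^n + \sum_{t=2}^{n-1} A_t^n$:

```python
from math import comb, perm

def max_facons(n):
    """Bound of the lemma on the number of facons of S_F:
    sum C_t^n (t=1..n) + sum C_t^n (t=1..n-1) + sum A_t^n (t=2..n-1)."""
    case_i   = sum(comb(n, t) for t in range(1, n + 1))  # union = {1,...,n}
    case_ii  = sum(comb(n, t) for t in range(1, n))      # proper union, empty bracket
    case_iii = sum(perm(n, t) for t in range(2, n))      # proper union, non-empty bracket
    return case_i + case_ii + case_iii

# For n = 3: (3 + 3 + 1) + (3 + 3) + 6 = 19
print(max_facons(3))
```

For the case $n=3$ treated in the previous section this gives $7+6+6=19$ possible {\it fa{\c c}ons}.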
\begin{remark} {\rm In the example \ref{exal} and in the proofs of the lemmas \ref{cothuy2} and \ref{lemmeordren=3}, we use a linear change of variables to simplify the pertinent variables (so that we can work without coefficients, which simplifies the calculations).
This change of variables does not modify the results of the theorem \ref{theothuyb}.
However, in the algorithms \ref{algorithmordre} and \ref{algorithmordregeneral}, the step of a linear change of variables is not needed, since a computer can work with the coefficients of the pertinent variables without making the problem significantly harder.
} \end{remark}
\end{document}
\begin{document}
\baselineskip=17pt
\title[Parallel Almost Paracontact Structures...]{Parallel Almost Paracontact Structures on Affine Hypersurfaces}
\author[Z. Szancer]{Zuzanna Szancer} \address{Department of Applied Mathematics \\ University of Agriculture in Krakow\\ 253c Balicka Street\\ 30-198 Krakow, Poland} \email{Zuzanna.Szancer@urk.edu.pl}
\date{}
\begin{abstract} Let $\widetilde{J}$ be the canonical para-complex structure on $\mathbb{R}^{2n+2}\simeq\widetilde{\mathbb{C}}^{n+1}$. We study real affine hypersurfaces $f\colon M\rightarrow \widetilde{\mathbb{C}}^{n+1}$ with a $\widetilde{J}$-tangent transversal vector field. Such a vector field induces in a natural way an almost paracontact structure ${(\varphi,\xi,\eta)}$ on $M$ as well as an affine connection $\nabla$. In this paper we give the classification of hypersurfaces with the property that $\varphi$ or $\eta$ is parallel relative to the connection $\nabla$. Moreover, we show that if $\nabla\varphi=0$ (respectively $\nabla\eta=0$) then around each point of $M$ there exists a parallel almost paracontact structure. We illustrate the results with examples. \end{abstract}
\subjclass[2010]{53A15, 53D15}
\keywords{para-complex structure, affine hypersurface, almost paracontact structure, parallel structure}
\maketitle
\section{Introduction} \par Para-complex and paracontact structures have been studied by many authors over the past decades. These structures play an important role in pseudo-Riemannian geometry as well as in modern mathematical physics. In particular, some recent results related to paracontact geometry can be found in \cite{Z,CKM,KM}. Moreover, recently some relations between para-complex and affine differential geometry were also studied (see \cite{LS,CLS} and \cite{SZ}). \par If we denote by $\widetilde{J}$ the canonical para-complex structure on $\mathbb{R}^{2n+2}\simeq\widetilde{\mathbb{C}}^{n+1}$ then, in a similar way as in the complex case (\cite{Cruceanu,SzanSzan,SZ2}), one may consider affine hypersurfaces $f\colon M\rightarrow \widetilde{\mathbb{C}}^{n+1}$ with a $\widetilde{J}$-tangent transversal vector field. Some recent results for affine hypersurfaces with a $\widetilde{J}$-tangent transversal vector field can be found in \cite{S1,S3}. \par In \cite{S4} the author studied real affine hypersurfaces of the complex space $\mathbb{C}^{n+1}$ with a $J$-tangent transversal vector field and an induced almost contact structure ${(\varphi,\xi,\eta)}$ such that $\varphi$ or $\eta$ is parallel relative to the induced affine connection. Now, it is natural to ask what happens in the para-complex situation. It is worth highlighting that in this case we do not have a canonical $\widetilde{J}$-tangent transversal vector field (the Riemannian normal field is in general not $\widetilde{J}$-tangent), so the situation is more involved. \par In Section 2 we briefly recall basic formulas of affine differential geometry. \par In Section 3 we recall the definition of an almost paracontact structure, introduced for the first time in \cite{KW}. We recall the notion of a $\widetilde{J}$-tangent transversal vector field and a $\widetilde{J}$-invariant distribution.
We also recall some results obtained in \cite{S1,S3} for induced almost paracontact structures which we will use in the next section. \par Section 4 contains the main results of this paper. First we show some basic relations which hold among the induced objects under the additional condition
that either $\varphi$ or $\eta$ is $\nabla$-parallel.
Later we show that if any of the above is satisfied for some $\widetilde{J}$-tangent transversal vector field then we can
always find (at least locally) another $\widetilde{J}$-tangent transversal vector field such that the induced almost paracontact structure is parallel.
Finally, we provide the full local classification of the above mentioned hypersurfaces. In order to illustrate the results some examples are also given.
In particular we show that (contrary to the case when $\nabla\eta=0$ or $\nabla\varphi=0$) the condition $\nabla\xi=0$ is much weaker. \section{Preliminaries} We briefly recall the basic formulas of affine differential geometry. For more details, we refer to \cite{NS}. Let $f\colon M\rightarrow\mathbb{R}^{n+1}$ be an orientable, connected, differentiable $n$-dimensional hypersurface immersed in the affine space $\mathbb{R}^{n+1}$ equipped with its usual flat connection $\operatorname{D}$. Then, for any transversal vector field $C$ we have \begin{equation}\label{eq::FormulaGaussa} \operatorname{D}_Xf_\ast Y=f_\ast(\nabla_XY)+h(X,Y)C \end{equation} and \begin{equation}\label{eq::FormulaWeingartena} \operatorname{D}_XC=-f_\ast(SX)+\tau(X)C, \end{equation} where $X,Y$ are vector fields tangent to $M$. The formulas (\ref{eq::FormulaGaussa}) and (\ref{eq::FormulaWeingartena}) are called the formula of Gauss and the formula of Weingarten, respectively. For any transversal vector field $C$, $\nabla$ is a torsion-free connection, $h$ is a symmetric bilinear form on $M$, called the second fundamental form, $S$ is a tensor of type $(1,1)$, called the shape operator, and $\tau$ is a 1-form, called the transversal connection form. \par We shall now consider the change of a transversal vector field for a given immersion $f$.
\begin{thm}[\cite{NS}]\label{tw::ChangeOfTransversalField} Suppose we change a transversal vector field $C$ to $$ \overline{C}=\Phi C+f_\ast(Z), $$ where $Z$ is a tangent vector field on $M$ and $\Phi$ is a nowhere vanishing function on $M$. Then the affine fundamental form, the induced connection, the transversal connection form, and the affine shape operator change as follows: \begin{align*} & \overline{h}=\frac{1}{\Phi}h,\\ & \overline{\nabla}_XY=\nabla_XY-\frac{1}{\Phi}h(X,Y)Z,\\
& \overline{\tau}=\tau+\frac{1}{\Phi}h(Z,\cdot)+d\ln|\Phi|,\\ & \overline{S}=\Phi S-\nabla_{\cdot}Z+\overline{\tau}(\cdot)Z. \end{align*} \end{thm} \par If $h$ is nondegenerate, then we say that the hypersurface or the hypersurface immersion is \emph{nondegenerate}. We have the following \begin{thm}[\cite{NS}, Fundamental equations]\label{tw::FundamentalEquations} For an arbitrary transversal vector field $C$ the induced connection $\nabla$, the second fundamental form $h$, the shape operator $S$, and the 1-form $\tau$ satisfy the following equations: \begin{align} \label{eq::Gauss}&R(X,Y)Z=h(Y,Z)SX-h(X,Z)SY,\\ \label{eq::Codazzih}&(\nabla_X h)(Y,Z)+\tau(X)h(Y,Z)=(\nabla_Y h)(X,Z)+\tau(Y)h(X,Z),\\ \label{eq::CodazziS}&(\nabla_X S)(Y)-\tau(X)SY=(\nabla_Y S)(X)-\tau(Y)SX,\\ \label{eq::Ricci}&h(X,SY)-h(SX,Y)=2d\tau(X,Y). \end{align} \end{thm} The equations (\ref{eq::Gauss}), (\ref{eq::Codazzih}), (\ref{eq::CodazziS}), and (\ref{eq::Ricci}) are called the equation of Gauss, Codazzi for $h$, Codazzi for $S$ and Ricci, respectively. \par For a hypersurface immersion $f\colon M\rightarrow \mathbb{R}^{n+1}$ a transversal vector field $C$ is said to be \emph{equiaffine} (resp. \emph{locally equiaffine}) if $\tau=0$ (resp. $d\tau=0$).
\section{Almost paracontact structures} \par A $(2n+1)$-dimensional manifold $M$ is said to have an \emph{almost paracontact structure} if there exist on $M$ a tensor field $\varphi$ of type (1,1), a vector field $\xi$ and a 1-form $\eta$ which satisfy \begin{align} \varphi^2(X)&=X-\eta(X)\xi,\\ \eta(\xi)&=1 \end{align} for every $X\in TM$, and the tensor field $\varphi$ induces an almost para-complex structure on the distribution $\mathcal{D}=\operatorname{ker}\eta$. That is, the eigendistributions $\mathcal{D} ^{+},\mathcal{D} ^{-}$ corresponding to the eigenvalues $1,-1$ of $\varphi$ have equal dimension $n$. Let $\nabla$ be a connection on $M$. We say that an almost paracontact structure ${(\varphi,\xi,\eta)}$ is \emph{$\nabla$-parallel} if $\nabla\varphi=0$, $\nabla\eta=0$ and $\nabla\xi=0$. \par We denote by $\widetilde{\mathbb{C}}$ the real algebra of para-complex numbers; then the free $\widetilde{\mathbb{C}}$-module $\widetilde{\mathbb{C}}^{n+1}$ is a para-complex vector space. We always assume that $\mathbb{R}^{2n+2}\cong\widetilde{\mathbb{C}}^{n+1}$ is endowed with the standard para-complex structure $\widetilde{J}$ given by: $$ \widetilde{J}(x_1,\ldots,x_{n+1},y_1,\ldots,y_{n+1})=(y_1,\ldots,y_{n+1},x_1,\ldots,x_{n+1}). $$ For more details on para-complex geometry we refer to \cite{CFG,LS}. \par Let $\dim M=2n+1$ and $f\colon M\rightarrow \mathbb{R}^{2n+2}$ be an affine hypersurface. Let $C$ be a transversal vector field on $M$. We say that $C$ is \emph{$\widetilde{J}$-tangent} if $\widetilde{J}C_x\in f_\ast(T_xM)$ for every $x\in M$. We also define a distribution $\mathcal{D}$ on $M$ as the biggest $\widetilde{J}$-invariant distribution on $M$, that is $$ \mathcal{D}_x:=f_\ast^{-1}(f_\ast(T_xM)\cap \widetilde{J}(f_\ast(T_xM))) $$ for every $x\in M$. We have that $\dim\mathcal{D} _x\geq 2n$.
If $\dim\mathcal{D}_x=2n+1$ for some $x$, then $\mathcal{D} _x=T_xM$ and it is not possible to find a $\widetilde{J}$-tangent transversal vector field in a neighbourhood of $x$. Since we only study hypersurfaces with a $\widetilde{J}$-tangent transversal vector field, we always have $\dim\mathcal{D}=2n$. The distribution $\mathcal{D}$ is smooth, since it is an intersection of two smooth distributions and $\dim \mathcal{D}$ is constant. A vector field $X$ is called a \emph{$\mathcal{D}$-field} if $X_x\in\mathcal{D}_x$ for every $x\in M$. We use the notation $X\in\mathcal{D}$ for vectors as well as for $\mathcal{D}$-fields.
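As a sanity check of the canonical para-complex structure $\widetilde{J}$ defined above, here is a minimal pure-Python sketch (the helper name `J` is ours, introduced only for illustration) verifying $\widetilde{J}^2=\operatorname{Id}$ and exhibiting the $\pm 1$ eigendistributions:

```python
def J(v):
    # canonical para-complex structure on R^{2m}:
    # J(x_1,...,x_m, y_1,...,y_m) = (y_1,...,y_m, x_1,...,x_m)
    m = len(v) // 2
    return v[m:] + v[:m]

x = [1.0, -2.0, 3.0]
v = x + [4.0, 0.5, -1.0]
plus  = x + x                    # vectors of the form (x, x): eigenvalue +1
minus = x + [-c for c in x]      # vectors of the form (x, -x): eigenvalue -1

assert J(J(v)) == v                         # J^2 = Id
assert J(plus) == plus                      # J acts as +1 on (x, x)
assert J(minus) == [-c for c in minus]      # J acts as -1 on (x, -x)
```

In particular, both eigendistributions have the same dimension, as required of an almost para-complex structure.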
\par To simplify notation, we will omit $f_\ast$ in front of vector fields in most cases. \par Let $f\colon M\rightarrow \mathbb{R}^{2n+2}$ be an affine hypersurface with a $\widetilde{J}$-tangent transversal vector field $C$. Then we can define a vector field $\xi$, a 1-form $\eta$ and a tensor field $\varphi$ of type (1,1) as follows: \begin{align} &\xi:=\widetilde{J}C;\\
\label{etanaD::eq::0}&\eta|_\mathcal{D}=0 \text{ and } \eta(\xi)=1; \\
&\varphi|_\mathcal{D}=\widetilde{J}|_\mathcal{D} \text{ and } \varphi(\xi)=0. \end{align} It is easy to see that $(\varphi,\xi,\eta)$ is an almost paracontact structure on $M$. This structure is called the \emph{induced almost paracontact structure}. Using Theorem \ref{tw::ChangeOfTransversalField} one may prove the following:
\begin{lem}[\cite{S1}]\label{lm::ZmianaStrukturyPrawieparakontaktowej} Let $C$ be a $\widetilde{J}$-tangent transversal vector field. Then any other $\widetilde{J}$-tangent transversal vector field $\overline{C}$ has the form $$ \overline{C}=\Phi C+f_\ast Z, $$ where $\Phi\neq 0$ and $Z\in\mathcal{D}$. Moreover, if $(\varphi,\xi,\eta)$ is an almost paracontact structure induced by $C$, then $\overline{C}$ induces an almost paracontact structure $(\overline\varphi,\overline\xi,\overline\eta)$, where $$ \begin{cases} \overline\xi=\Phi\xi+\varphi Z,\\ \overline\eta=\frac{1}{\Phi}\eta,\\ \overline\varphi=\varphi-\eta(\cdot)\frac{1}{\Phi}Z. \end{cases} $$ \end{lem}
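The transformation rules of the lemma can be checked on a toy model. Below is a hedged pure-Python sketch (the concrete structure $\varphi=\operatorname{diag}(1,-1,0)$, $\xi=e_3$, $\eta=dx_3$ and the choices $\Phi=2$, $Z=e_1$ are ours, picked only for illustration) verifying that the barred triple again satisfies the almost paracontact identities:

```python
# toy almost paracontact structure on R^3: D = span(e1, e2),
# phi = diag(1, -1, 0), xi = e3, eta = dx3
def phi(v):  return [v[0], -v[1], 0.0]
def eta(v):  return v[2]
xi = [0.0, 0.0, 1.0]

# change of transversal field: Phi = 2, Z = e1 (a D-field)
Phi = 2.0
Z = [1.0, 0.0, 0.0]

def add(u, w):   return [a + b for a, b in zip(u, w)]
def scal(t, u):  return [t * a for a in u]

# barred structure, as given by the lemma
xi_bar = add(scal(Phi, xi), phi(Z))                    # Phi*xi + phi(Z)
def eta_bar(v):  return eta(v) / Phi                   # eta / Phi
def phi_bar(v):  return add(phi(v), scal(-eta(v) / Phi, Z))

# verify the almost paracontact identities for the barred triple
assert abs(eta_bar(xi_bar) - 1.0) < 1e-12              # eta_bar(xi_bar) = 1
assert all(abs(c) < 1e-12 for c in phi_bar(xi_bar))    # phi_bar(xi_bar) = 0
for e in ([1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]):
    lhs = phi_bar(phi_bar(e))
    rhs = add(e, scal(-eta_bar(e), xi_bar))            # X - eta_bar(X) xi_bar
    assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))
```

Of course this checks only the algebraic identities at a point, not the geometric content of the lemma.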
For an induced almost paracontact structure we have the following theorem \begin{thm}[\cite{S3}]\label{tw::Wzory} Let $f\colon M\rightarrow \mathbb{R}^{2n+2}$ be an affine hypersurface with a $\widetilde{J}$-tangent transversal vector field $C$. If $(\varphi,\xi,\eta)$ is an induced almost paracontact structure on $M$ then the following equations hold: \begin{align} \label{Wzory::eq::1}&\eta(\nabla_XY)=h(X,\varphi Y)+X(\eta(Y))+\eta(Y)\tau(X),\\ \label{Wzory::eq::2}&\varphi(\nabla_XY)=\nabla_X\varphi Y-\eta(Y)SX-h(X,Y)\xi,\\ \label{Wzory::eq::3}&\eta([X,Y])=h(X,\varphi Y)-h(Y,\varphi X)+X(\eta(Y))-Y(\eta(X))\\ \nonumber &\qquad\qquad\quad+\eta(Y)\tau(X)-\eta(X)\tau(Y),\\ \label{Wzory::eq::4}&\varphi([X,Y])=\nabla_X\varphi Y-\nabla_Y\varphi X+\eta(X)SY-\eta(Y)SX,\\ \label{Wzory::eq::5}&\eta(\nabla_X\xi)=\tau(X),\\ \label{Wzory::eq::6}&\eta(SX)=-h(X,\xi) \end{align} for every $X,Y\in \mathcal{X}(M)$. \end{thm}
\section{Parallel induced almost paracontact structures} In this section we always assume that $(\varphi,\xi,\eta)$ is an almost paracontact structure induced by a $\widetilde{J}$-tangent transversal vector field $C$ and that $\nabla, h, S, \tau$ are the affine objects induced by $C$ as well. Sometimes we denote a transversal vector field by $\overline{C}$ or even $\overline{\overline{C}}$; in such cases we use the corresponding (i.e. barred) notation for the induced objects. \par When $\varphi$ is $\nabla$-parallel we have the following \begin{lem}\label{lm::NablaPhiRowneZero} Let $({\varphi}, \xi ,\eta)$ be an induced almost paracontact structure such that $\nabla{\varphi}=0$. Then \begin{align}
\label{eq::fi1}&h|_{\mathcal{D}\times\mathcal{D}}=0,\\ \label{eq::fi2}&h(\xi,X)=h(X,\xi)=0\qquad\text{ for all $X\in\mathcal{D}$},\\
\label{eq::fi3}&S|_\mathcal{D}=0,\\ \label{eq::fi4}&S\xi=-h(\xi,\xi)\xi,\\ \label{eq::fi5}&d\tau=0,\\ \label{eq::fi6}&R=0. \end{align} \end{lem} \begin{proof} From the formula (\ref{Wzory::eq::2}) we have $$ (\nabla_X\varphi)(Y)=\eta(Y)SX+h(X,Y)\xi $$ for all $X,Y\in\X(M)$. Since $\nabla\varphi=0$ we get $h(X,Y)=0$ and $h(\xi,Y)=0$ for every $X,Y\in\mathcal{D}$. Now, taking
$X\in\mathcal{D}$ and~$Y=\xi$ we have $SX=0$. Taking $X=Y=\xi$ we easily get $S\xi=-h(\xi,\xi)\xi$. The equation (\ref{eq::fi5}) follows immediately from the Ricci equation (\ref{eq::Ricci}). The last equation is an immediate consequence of the Gauss equation and (\ref{eq::fi1})--(\ref{eq::fi3}). \end{proof} By Lemma \ref{lm::NablaPhiRowneZero}, the transversal vector field $C$ is locally equiaffine, that is, there exists (at least locally) a non-vanishing function $\Phi$ such that $\overline{C}=\Phi C$ is equiaffine. Of course $\overline{C}$ is $\widetilde{J}$-tangent. Now, using Theorem \ref{tw::ChangeOfTransversalField} and Lemma \ref{lm::ZmianaStrukturyPrawieparakontaktowej} we get the following corollary \begin{cor}\label{wn::ONablaPhi}
Let $C$ be a $\widetilde{J}$-tangent transversal vector field such that $\nabla{\varphi}=0$ and let $\Phi$ be a nowhere vanishing function on $M$. If we denote by $\overline C$ the transversal vector field $\Phi C$, then $\overline\nabla\overline{\varphi}=0$ (in fact $\overline\nabla=\nabla$ and $\overline\varphi=\varphi$). This means that the condition $\nabla\varphi=0$ depends only on the direction of the transversal vector field. In particular, locally we can always choose $C$ to be equiaffine. \end{cor} We shall prove
\begin{lem}\label{lm::NablaEtaRowneZero} Let $({\varphi}, \xi ,\eta)$ be an induced almost paracontact structure such that $\nabla \eta =0$. Then \begin{align}
\label{eq::eta1} &h|_{\mathcal{D} \times \mathcal{D}}=0, &\\ \label{eq::eta2} &h(\xi,X)=h(X,\xi)=0 &\qquad\text{for every $X\in \mathcal{D} $},\\ \label{eq::eta3} &\tau =0,&\\ \label{eq::eta4} &\nabla _{X}Y\in \mathcal{D} &\qquad\text{for every $X,Y\in \mathcal{D} $},\\ \label{eq::eta5} &\nabla _{X}\xi\in\mathcal{D} &\qquad\text{for every $X\in \X(M) $},\\ \label{eq::eta6} &\nabla _{\xi}X\in\mathcal{D} &\qquad\text{for every $X\in \mathcal{D} $},\\
\label{eq::eta9} &X(h(\xi,\xi))=0 &\qquad\text{for every $X\in \mathcal{D} $}.
\end{align} \end{lem} \begin{proof} Since $\nabla\eta=0$ we have \begin{equation}\label{eq::NabEtaZero1} \eta(\nabla_XY)=X(\eta(Y)) \end{equation} for every $X,Y\in \X(M)$. Now, using the formula (\ref{Wzory::eq::1}) we get \begin{equation}\label{eq::NabEtaZero2} h(X,{\varphi} Y)=-\eta(Y)\tau(X) \end{equation} for every $X,Y\in \X(M)$. Hence, if $X,Y\in\mathcal{D}$, then $h(X,{\varphi} Y)=0$, which proves (\ref{eq::eta1}). Taking $X=\xi$ and $Y\in\mathcal{D}$ in (\ref{eq::NabEtaZero2}) we easily get (\ref{eq::eta2}). On the other hand, taking
$Y=\xi$ we have $\tau(X)=0$, that is (\ref{eq::eta3}). The formulas (\ref{eq::eta4})--(\ref{eq::eta6}) can be directly obtained from (\ref{eq::NabEtaZero1}). To prove (\ref{eq::eta9}) let us note that from the Codazzi equation for $h$ (and using (\ref{eq::eta3})) we have \begin{align*} (\nabla_Xh)(\xi,\xi)=(\nabla_\xi h)(X,\xi)=\xi(h(X,\xi))-h(\nabla_\xi X,\xi)-h(X,\nabla_\xi\xi). \end{align*} Now, if we take $X\in\mathcal{D}$ and use (\ref{eq::eta1})--(\ref{eq::eta2}) we get $h(X,\xi)=0$ and $h(X,\nabla_\xi\xi)=0$, whereas (\ref{eq::eta6}) implies that we also have $h(\nabla_\xi X,\xi)=0$. Thus, we obtain $$ 0=(\nabla_Xh)(\xi,\xi)=X(h(\xi,\xi))-2h(\nabla_X\xi,\xi) $$ for every $X\in\mathcal{D}$. Now, applying (\ref{eq::eta5}) in the above formula we get $$ X(h(\xi,\xi))=0 $$ for every $X\in\mathcal{D}$. This finishes the proof of (\ref{eq::eta9}). \end{proof}
\begin{rem}\label{rem::1} If $\nabla \varphi =0$ then $(\nabla _X\eta)Y=-\eta (Y)\tau (X)$. \end{rem} \begin{proof} If $\nabla \varphi =0$ from (\ref{Wzory::eq::2}) we get that $h(X,\varphi Y)=0$ for all $X,Y\in \X(M)$. So from (\ref{Wzory::eq::1}) we obtain that $(\nabla _X\eta)Y=-\eta (Y)\tau (X)$. \end{proof}
\begin{rem}\label{rem::2} If $\nabla \varphi =0$ or $\nabla \eta =0$ then: \begin{align} \label{eq::rankh} \operatorname{rank}h\leq 1\\ \label{DDplusDminusnablaparallel} \mathcal{D}, \mathcal{D}^+, \mathcal{D}^- \quad \text{are $\nabla$-parallel}\\ \label{DDplusDminusinvolutive} \mathcal{D}, \mathcal{D}^+, \mathcal{D}^- \quad \text{are involutive}. \end{align} \end{rem} \begin{proof} Property (\ref{eq::rankh}) follows immediately from Lemma \ref{lm::NablaPhiRowneZero} or Lemma \ref{lm::NablaEtaRowneZero}. By (\ref{Wzory::eq::1}) we have \begin{align*} \eta(\nabla_XY)=h(X,\varphi Y) \end{align*} for every $X\in\X(M)$ and $Y\in\mathcal{D}$. Now (\ref{eq::fi1})--(\ref{eq::fi2}) (or (\ref{eq::eta1})--(\ref{eq::eta2})) imply that $\eta(\nabla_XY)=0$ for $X\in\X(M)$ and $Y\in\mathcal{D}$, that is, $\mathcal{D}$ is $\nabla$-parallel. Using the formula (\ref{Wzory::eq::2}) we get \begin{align*} \varphi(\nabla_XY)=\nabla_X\varphi Y=\nabla_XY \end{align*} for $X\in\X(M)$ and $Y\in\mathcal{D}^+$, so $\nabla_XY\in\mathcal{D}^+$. Similarly we obtain that $\nabla_XY\in\mathcal{D}^-$ for $X\in\X(M)$ and $Y\in\mathcal{D}^-$, that is, both $\mathcal{D}^+$ and $\mathcal{D}^-$ are $\nabla$-parallel. Now (\ref{DDplusDminusinvolutive}) easily follows from (\ref{DDplusDminusnablaparallel}) and the fact that $\nabla$ is torsion-free. \end{proof}
\begin{exa} Let $f\colon \mathbb{R}^3\rightarrow \mathbb{R}^4$ be given by the formula $$ f(x,y,z):= \left[ \begin{matrix} x+y\\ \sinh z\\ x-y\\ \cosh z \end{matrix} \right]. $$ Let $\{\partial_{x},\partial_{y},\partial_z\}$ be the canonical basis on $\mathbb{R}^3$ generated by the coordinate system $(x,y,z)$. It easily follows that $f$ is an immersion and $\widetilde{J} f_x=f_x$, $\widetilde{J} f_y=-f_y$. Moreover it is easy to see that $$ C\colon \mathbb{R}^3\ni (x,y,z)\mapsto \left[ \begin{matrix} x\\ \sinh z\\ x\\ \cosh z \end{matrix} \right] \in\mathbb{R}^4 $$ is a transversal $\widetilde{J}$-tangent vector field, since $C=\widetilde{J} f_z+xf_x$.
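The tangency claims in this example are easy to verify numerically. The following is a small Python sketch (all helper names are ours) checking $\widetilde{J} f_x=f_x$, $\widetilde{J} f_y=-f_y$ and $C=\widetilde{J} f_z+xf_x$ at a sample point:

```python
import math

def J(v):
    # canonical para-complex structure on R^4: swap the two 2-blocks
    return (v[2], v[3], v[0], v[1])

# partial derivatives of f(x,y,z) = (x+y, sinh z, x-y, cosh z)
def f_x(x, y, z): return (1.0, 0.0, 1.0, 0.0)
def f_y(x, y, z): return (1.0, 0.0, -1.0, 0.0)
def f_z(x, y, z): return (0.0, math.cosh(z), 0.0, math.sinh(z))

def C(x, y, z):   return (x, math.sinh(z), x, math.cosh(z))

x, y, z = 0.7, -1.2, 0.3
assert J(f_x(x, y, z)) == f_x(x, y, z)                      # J f_x = f_x
assert J(f_y(x, y, z)) == tuple(-c for c in f_y(x, y, z))   # J f_y = -f_y
want = tuple(a + x * b for a, b in zip(J(f_z(x, y, z)), f_x(x, y, z)))
assert all(abs(p - q) < 1e-12 for p, q in zip(C(x, y, z), want))
```

In particular $C$ is $\widetilde{J}$-tangent, since $\widetilde{J}C=\widetilde{J}^2 f_z+x\widetilde{J}f_x=f_z+xf_x$ is tangent.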
Moreover, we have the following equalities $$ \tau=0,\qquad S(\partial_x) =-\partial_x,\qquad S(\partial_y)=0,\qquad S(\partial_z)=-\partial_z $$ and $$ h=\left[ \begin{matrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{matrix} \right]. $$ Now using the formula (\ref{Wzory::eq::1}) and the fact that $\partial_x\in\mathcal{D}^+$, $\partial_y\in\mathcal{D}^-$ and $\xi=\partial_z+x\partial_x$ we obtain that $\nabla\eta=0$, since $\varphi (\partial_z)=-x\partial_x$. However $\nabla \varphi \neq 0$. Indeed, from (\ref{Wzory::eq::2}) we get \begin{align*} (\nabla _{\partial_z}\varphi )\partial_z&=\eta (\partial_z)S\partial_z+h(\partial_z,\partial_z)\xi =-\partial_z+\xi=x\partial_x \end{align*} We also have $$ R(\partial_x,\partial_z)\partial_z=h(\partial_z,\partial_z)S\partial_x-h(\partial_x,\partial_z)S\partial_z=-\partial_x, $$ so $\nabla$ is not flat. Now, let us consider $f$ with the transversal vector field $$ \overline C\colon \mathbb{R}^3\ni (x,y,z)\mapsto \left[ \begin{matrix} 0\\ \sinh z\\ 0\\ \cosh z \end{matrix} \right] \in\mathbb{R}^4. $$
Of course $\overline{C}$ is $\widetilde{J}$-tangent. In a similar way as above we compute that the almost paracontact structure $(\overline\varphi,\overline\xi,\overline\eta)$ induced by $\overline C$ is $\overline\nabla$-parallel. In particular $\overline\nabla$ is flat.\\
Finally let us define $$ N\colon \mathbb{R}^3\ni (x,y,z)\mapsto \left[ \begin{matrix} 0\\ \frac{\sinh z}{\sqrt{\cosh^2z+\sinh^2z}}\\ 0\\ -\frac{\cosh z}{\sqrt{\cosh^2z+\sinh^2z}} \end{matrix} \right] \in\mathbb{R}^4. $$ It is not difficult to see that $N$ is the normal field (in the classical Riemannian sense) for $f$, since it is orthogonal to $f_x$, $f_y$ and $f_z$ and $<N,N>=1$. Moreover $\widetilde{J} N$ is not tangent (as long as $z\neq 0$), that is, $N$ is not $\widetilde{J}$-tangent. \end{exa} The above example shows that in general the condition $\nabla \eta=0$ is weaker than $\nabla \varphi =0$; however, we shall show that if $\nabla \eta=0$ for some $\widetilde{J}$-tangent transversal vector field $C$, then we can (at least locally) find another equiaffine $\widetilde{J}$-tangent transversal vector field $\overline{C}$ such that the whole structure $(\overline{\varphi},\overline{\xi},\overline{\eta})$ is $\overline{\nabla}$-parallel. In order to prove this we will need the following lemmas.
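As a sanity check, the computations in the example can be reproduced numerically. The sketch below assumes, as the displayed identities $\widetilde{J} f_x=f_x$, $\widetilde{J} f_y=-f_y$ suggest, that $\widetilde{J}$ is the block-swap paracomplex structure $J(a,b,c,d)=(c,d,a,b)$ on $\mathbb{R}^4$; this choice is an assumption of the sketch, not stated in the text.

```python
# Numeric check of the example f(x,y,z) = (x+y, sinh z, x-y, cosh z),
# with J~ modeled (assumption) as the block swap J(a,b,c,d) = (c,d,a,b).
import numpy as np

J = np.array([[0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)

def f(x, y, z):
    return np.array([x + y, np.sinh(z), x - y, np.cosh(z)])

x, y, z, eps = 0.7, -0.3, 1.1, 1e-6
fx = (f(x + eps, y, z) - f(x - eps, y, z)) / (2 * eps)
fy = (f(x, y + eps, z) - f(x, y - eps, z)) / (2 * eps)
fz = (f(x, y, z + eps) - f(x, y, z - eps)) / (2 * eps)

assert np.allclose(J @ fx, fx, atol=1e-5)    # f_x lies in D^+
assert np.allclose(J @ fy, -fy, atol=1e-5)   # f_y lies in D^-
C = J @ fz + x * fx                          # the transversal field C
assert np.allclose(C, [x, np.sinh(z), x, np.cosh(z)], atol=1e-5)

# N is a unit normal orthogonal to f_x, f_y, f_z, yet J N is not tangent:
s = np.sqrt(np.cosh(z) ** 2 + np.sinh(z) ** 2)
N = np.array([0.0, np.sinh(z) / s, 0.0, -np.cosh(z) / s])
assert abs(N @ fx) < 1e-5 and abs(N @ fy) < 1e-5 and abs(N @ fz) < 1e-5
assert abs(N @ N - 1.0) < 1e-5
assert np.linalg.matrix_rank(np.column_stack([fx, fy, fz, J @ N])) == 4
print("example verified")
```

The final rank test confirms that $\widetilde{J} N$ lies outside the tangent space at a point with $z\neq 0$, so $N$ is indeed not $\widetilde{J}$-tangent there.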
\begin{lem}\label{lm::FieldW} If $\nabla\eta=0$ then there exists a vector field $W\in\mathcal{X}(M)$ such that the connection $\overline\nabla$ defined by $$ \overline\nabla_XY:=\nabla_XY+h(X,Y)W $$ is flat. \end{lem} \begin{proof} From Theorem \ref{tw::ChangeOfTransversalField} there exist a non-vanishing function $\Phi$ and a vector field $Z_0\in\mathcal{X}(M)$ such that the transversal vector field $\overline C$ is given by $$ \overline C:=\Phi C+f_\ast Z_0. $$ Let $\overline\nabla$, $\overline h$, $\overline S$ and $\overline \tau\equiv 0$ be the affine objects induced by $\overline C$ and let $g$ be the first fundamental form on $M$ (i.e. the Riemannian metric on $M$ induced from the canonical inner product $<\cdot,\cdot>$ on $\mathbb{R}^{2n+2}$). We have $$ g(\overline SX,Y)=\overline h(X,Y)=\frac{1}{\Phi}h(X,Y) $$ for all $X,Y\in\X(M)$. Since $h(X,Y)=0$ for all $X\in\mathcal{D}$ and $Y\in\X(M)$, and $g$ is a Riemannian metric on $M$, the above implies that $\overline S=0$ on $\mathcal{D}$. Now the Gauss equation $$ \overline R(X,Y)Z=\overline h(Y,Z)\overline SX -\overline h(X,Z)\overline SY $$ implies that $\overline R=0$, that is, $\overline \nabla$ is flat. Since $\nabla$ and $\overline\nabla$ are related (see Th. \ref{tw::ChangeOfTransversalField}) by $$ \overline\nabla_XY=\nabla_XY-\frac{1}{\Phi}h(X,Y)Z_0 $$ it is enough to take $W:=-\frac{1}{\Phi}Z_0$. \end{proof}
\begin{lem}\label{lm::mapL1} Let $f\colon M\rightarrow \mathbb{R}^{2n+2}$ be an affine hypersurface with a $\widetilde{J}$-tangent transversal vector field $C$ and $(\varphi,\xi,\eta)$ be an induced almost paracontact structure on $M$. If $\nabla \eta =0$ then for every $p\in M$ there exist a neighbourhood $U$ of $p$ and a local basis $\{\partial_{1},\ldots ,\partial_{{2n+1}}\}$ on $U$ such that $\{\partial_{1},\ldots ,\partial_{{n}}\}$ span the distribution $\mathcal{D}^+$, $\{\partial_{{n+1}},\ldots ,\partial_{{2n}}\}$ span the distribution $\mathcal{D}^-$ and $\nabla _{\partial_{i}}{\partial _{j}}=0$ for $i,j=1,\ldots,2n$, $\nabla _{\partial_{i}}{\partial _{2n+1}}=\nabla _{\partial_{2n+1}}{\partial _{i}}=0$ for $i=1,\ldots ,2n$. \end{lem} \begin{proof} Let $p\in M$ and let $\overline\nabla$ be the connection from Lemma \ref{lm::FieldW}. Since $\overline\nabla$ is flat, in some neighbourhood $U$ of $p$ there exists a basis $\partial_1,\ldots,\partial_{2n+1}$ such that $\overline\nabla_{\partial_i}\partial_j=0$ for $i,j=1,\ldots,2n+1$. In particular we have $\overline\nabla_X\partial_i=0$ for $i=1,\ldots,2n+1$ and $X\in\mathcal{X}(U)$. Without loss of generality (shrinking the neighbourhood $U$ if needed) we may assume that $\partial_{2n+1}\notin\mathcal{D}$. Then for $i=1,\ldots,2n$ we have the decomposition $$ \partial_i=\partial_i^++\partial_i^-+\alpha_i\partial_{2n+1} $$ where $\partial_i^+\in\mathcal{D}^+$, $\partial_i^-\in\mathcal{D}^-$ and $\alpha_i$ are some smooth functions on $U$. 
Now for any $X\in \X(U)$ we have \begin{align*} 0&=\overline\nabla_X{\partial_i}=\overline\nabla_X{\partial_i^+}+\overline\nabla_X{\partial_i^-}+\overline\nabla_X(\alpha_i\partial_{2n+1})\\ &=\overline\nabla_X{\partial_i^+}+\overline\nabla_X{\partial_i^-}+X(\alpha_i)\partial_{2n+1}\\ &=\nabla_X{\partial_i^+}+\nabla_X{\partial_i^-}+X(\alpha_i)\partial_{2n+1}, \end{align*} where the last equality is an immediate consequence of the fact that $\overline\nabla_XY=\nabla_XY$ for all $X\in\X(M)$, $Y\in\mathcal{D}$. Since $\mathcal{D}^+$ and $\mathcal{D}^-$ are $\nabla$-parallel we obtain that $\nabla_X{\partial_i^+}=0$ and $\nabla_X{\partial_i^-}=0$ for any $X\in\X(U)$ and $\alpha_i=\operatorname{const}$ for $i=1,\ldots,2n$. In particular we have \begin{align*} \nabla _{\partial_i^+}{\partial_j^+}=0,\quad \nabla _{\partial_i^-}{\partial_j^-}=0,\quad \nabla _{\partial_i^+}{\partial_j^-}=0,\quad \nabla _{\partial_i^-}{\partial_j^+}=0 \end{align*} for $i,j=1,\ldots,2n$. Let us consider $$ \operatorname{span}\{\partial_1^+,\ldots,\partial_{2n}^+\}\subset \mathcal{D}^+ \quad\text{and}\quad \operatorname{span}\{\partial_1^-,\ldots,\partial_{2n}^-\}\subset\mathcal{D}^-. $$ Since every $\partial_i$ is a linear combination of elements from $\{\partial_1^+,\ldots,\partial_{2n}^+\}$, $\{\partial_1^-,\ldots,\partial_{2n}^-\}$ and $\partial_{2n+1}$ we have $$ \operatorname{span}\{\partial_1^+,\ldots,\partial_{2n}^+\}\oplus \operatorname{span}\{\partial_1^-,\ldots,\partial_{2n}^-\}\oplus \operatorname{span}\{\partial_{2n+1}\}=TM. $$ In particular $\dim \operatorname{span}\{\partial_1^+,\ldots,\partial_{2n}^+\}=\dim \operatorname{span}\{\partial_1^-,\ldots,\partial_{2n}^-\}=n$.
Now (around $p$) we can choose $2n$ linearly independent vector fields $\{\partial_1',\ldots,\partial_{2n}'\}$ such that $\partial_i'\in \{\partial_1^+,\ldots,\partial_{2n}^+\}$ and $\partial_{i+n}'\in \{\partial_1^-,\ldots,\partial_{2n}^-\}$ for $i=1,\ldots,n$. We have $$ \nabla_{\partial_i'}{\partial_j'}=0 $$ for $i,j=1,\ldots,2n$. Note also that $\nabla_{\partial_i'}\partial_{2n+1}=\overline\nabla_{\partial_i'}\partial_{2n+1}=0$ and $\nabla_{\partial_{2n+1}}{\partial_i'}=0$ for $i=1,\ldots,2n$. Finally we see that the basis $\{\partial_1',\ldots,\partial_{2n}',\partial_{2n+1}\}$ has the required properties. \end{proof}
\begin{lem}\label{lm::mapL2} Let $f\colon M\rightarrow \mathbb{R}^{2n+2}$ be an affine hypersurface with a $\widetilde{J}$-tangent transversal vector field $C$ and let $(\varphi,\xi,\eta)$ be an induced almost paracontact structure on $M$. If $\nabla \eta =0$ then for every $p\in M$ there exist a neighbourhood $U$ of $p$, a $\widetilde{J}$-tangent transversal vector field $\overline{C}$ defined on $U$ and a local coordinate system $(x_1,\ldots ,x_{2n},y)$ on $U$ such that $\partial_{x_1},\ldots,\partial_{x_n}\in \mathcal{D}^+, \partial_{x_{n+1}},\ldots,\partial_{x_{2n}}\in \mathcal{D}^-$ and the following conditions are satisfied: \begin{align} \label{eq::bar1}&\overline{\xi}=\partial _{y}; &\\ \label{eq::bar2}&\overline{\nabla}\overline{\eta}=0; &\\ \label{eq::bar3}&\overline{\nabla}_{\partial_{x_i}}\partial_{x_j}=0 &\text{for }i,j=1,\ldots,2n;\\ \label{eq::bar4}&\overline{\nabla}_{\partial_{x_i}}\partial_{y}=\overline{\nabla}_{\partial_{y}}\partial_{x_i}=0 &\text{for }i=1,\ldots,2n, \end{align} where $(\overline{\varphi},\overline{\xi},\overline{\eta})$ and $\overline{\nabla}$ are the almost paracontact structure and the affine connection induced by $\overline{C}$, respectively. \end{lem}
\begin{proof} By Lemma \ref{lm::mapL1} for any $p\in M$ there exist a neighbourhood $U$ of $p$ and a local basis $\{\partial_{1},\ldots ,\partial_{{2n+1}}\}$ on $U$ with the properties described in Lemma \ref{lm::mapL1}. Of course $\eta (\partial_{2n+1})\neq 0$. Since $\nabla _{\partial_{i}}{\partial _{2n+1}}=0$ for $i=1,\ldots,2n$, using (\ref{Wzory::eq::1}) we obtain \begin{align}\label{eq::etaNablaxiy} 0=\eta (\nabla _{\partial_{i}}{\partial _{2n+1}})=\partial_{i}(\eta (\partial_{2n+1})) \end{align} for $i=1,\ldots,2n$. Let us define $$ Y:=\frac{1}{\eta (\partial_{2n+1})}\cdot\partial_{2n+1}. $$ Thanks to (\ref{eq::etaNablaxiy}) we get \begin{align*} \nabla _{\partial_{i}}Y=\nabla_Y\partial_{i}=0 \end{align*} for $i=1,\ldots,2n$. In particular $[\partial_{i},Y]=0$ for $i=1,\ldots,2n$ and there exists a local coordinate system $(x_1,\ldots,x_{2n},y)$ around $p$ such that $\partial_i=\partial_{x_{i}}$ for $i=1,\ldots,2n$ and $Y=\partial_{y}$. We also have $\partial_{x_{i}}\in \mathcal{D}^+$ and $\partial_{x_{n+i}}\in \mathcal{D}^-$ for $i=1,\ldots,n$. Now, we have \begin{align*} \eta (\partial_{y})=\eta (Y)=1=\eta (\xi), \end{align*} that is, there exists $Z\in \mathcal{D}$ such that $\partial_{y}=\xi +Z$. Let us define \begin{align*} \overline{C}:=C+f_{\ast}(\varphi Z). \end{align*} By Theorem \ref{tw::ChangeOfTransversalField} and Lemma \ref{lm::ZmianaStrukturyPrawieparakontaktowej} we have \begin{align*} \overline{\xi}=\xi+Z=\partial_{y}, \quad \overline{\eta}=\eta \end{align*} and \begin{align*} \overline{\nabla}_XY=\nabla_XY-h(X,Y)\varphi Z \quad \text{for} \quad X,Y\in\X(M). \end{align*} In consequence \begin{align*} \overline{\nabla}_{\partial_{x_i}}\partial_{x_j}=\nabla_{\partial_{x_i}}\partial_{x_j}=0 \end{align*} and \begin{align*} \overline{\nabla}_{\partial_{x_i}}\partial_{y}=\overline{\nabla}_{\partial_{y}}\partial_{x_i}=\nabla_{\partial_{y}}\partial_{x_i}=\nabla_{\partial_{x_i}}\partial_{y}=0 \end{align*} for $i,j=1,\ldots,2n$. 
Finally we obtain \begin{align*} (\overline{\nabla}_X\overline{\eta})(Y)&=X(\overline{\eta}(Y))-\overline{\eta}(\overline{\nabla}_XY)=X(\eta (Y))-\eta (\nabla_XY-h(X,Y)\varphi Z)\\ &=(\nabla_X\eta)(Y)+h(X,Y)\eta(\varphi Z)=(\nabla_X\eta)(Y)=0. \end{align*} \end{proof}
Now we can prove the following
\begin{thm}\label{tw::mapL3} Let $f\colon M\rightarrow \mathbb{R}^{2n+2}$ be an affine hypersurface with a $\widetilde{J}$-tangent transversal vector field $C$ and $(\varphi,\xi,\eta)$ be an induced almost paracontact structure on $M$. If $\nabla \eta =0$ then for every $p\in M$ there exist a neighbourhood $U$ of $p$ and a $\widetilde{J}$-tangent transversal vector field $\overline{\overline{C}}$ defined on $U$ such that the induced almost paracontact structure $(\overline{\overline{\varphi}},\overline{\overline{\xi}},\overline{\overline{\eta}})$ is $\overline{\overline{\nabla}}$-parallel. Moreover around $p$ there exists a local coordinate system $(x_1,\ldots,x_{2n},x_{2n+1})$ such that $\partial_{x_1},\ldots,\partial_{x_n}\in \mathcal{D}^+, \partial_{x_{n+1}},\ldots,\partial_{x_{2n}}\in \mathcal{D}^-$, $\nabla_{\partial_{x_i}}\partial_{x_j}=0$ for $i,j=1,\ldots,2n+1$ and $\partial_{x_{2n+1}}=\overline{\overline{\xi}}$. \end{thm} \begin{proof} By Lemma \ref{lm::mapL2} for any $p\in M$ there exist a neighbourhood $U$ of $p$, local coordinates $(x_1,\ldots ,x_{2n},y)$ on $U$ and a $\widetilde{J}$-tangent transversal vector field $\overline{C}$ defined on $U$ such that (\ref{eq::bar1})--(\ref{eq::bar4}) are satisfied. Using formula (\ref{Wzory::eq::2}), for $i=1,\ldots,2n$ we have \begin{align*} \overline{S}\partial_{x_i}=-\overline{\varphi}(\overline{\nabla}_{\partial_{x_i}}\overline{\xi})=-\overline{\varphi}(\overline{\nabla}_{\partial_{x_i}}\partial_y)=0, \end{align*}
where $\overline{S}$ is the shape operator induced by $\overline{C}$. In particular $\overline{S}|\mathcal{D}=0$. From (\ref{Wzory::eq::1}) we get $\overline{\eta}(\overline{\nabla}_{\overline{\xi}}\overline{\xi})=0$ that is $\overline{\nabla}_{\overline{\xi}}\overline{\xi}\in\mathcal{D}$. Therefore there exist smooth functions $p_i$ ($i=1,\ldots,2n$) such that \begin{align*} \overline{\nabla}_{\overline{\xi}}\overline{\xi}=\sum_{i=1}^{2n}p_i\partial_{x_i}. \end{align*} Now from the Gauss equation we obtain that for every $i=1,\ldots,2n$ \begin{align*} 0=\overline{R}(\partial_{x_i},\partial_y)\partial_y =\overline{\nabla}_{\partial_{x_i}}\overline{\nabla}_{\partial_y}\partial_y =\overline{\nabla}_{\partial_{x_i}}\sum_{j=1}^{2n}p_j\partial_{x_j} =\sum_{j=1}^{2n}\partial_{x_i}(p_j)\partial_{x_j} \end{align*} therefore $p_j$ depends only on $y$ for $j=1,\ldots,2n$. Let us define \begin{align*} Z:=\sum_{i=1}^{2n}a_i\partial_{x_i}, \end{align*} where $a_i$ are smooth functions defined by \begin{align} \label{eq::ai} a_i&:=-e^{\int \overline{h}(\partial_y,\partial_y)dy}\cdot \int p_ie^{-\int \overline{h}(\partial_y,\partial_y)dy}dy\\ \label{eq::ain}a_{i+n}&:=-e^{-\int \overline{h}(\partial_y,\partial_y)dy}\cdot \int p_{i+n}e^{\int \overline{h}(\partial_y,\partial_y)dy}dy \end{align} for $i=1,\ldots,n$. By (\ref{eq::eta9}) we have $\partial_{x_i}(\overline{h}(\partial_y,\partial_y))=0$ for $i=1,\ldots,2n$, that is $\overline{h}(\partial_y,\partial_y)$ depends only on $y$ and in consequence $a_i$ depends only on $y$ for $i=1,\ldots,2n$. Now we can define another transversal vector field: \begin{align*} \overline{\overline{C}}:=\overline{C}+f_{\ast}\varphi Z. \end{align*} Since $Z\in\mathcal{D}$ we see that $\overline{\overline{C}}$ is $\widetilde{J}$-tangent. 
First note that \begin{align*} \overline{\overline{\nabla}}_{\overline{\overline{\xi}}}\partial_{x_i}=0,\\ \overline{\overline{\nabla}}_{\partial_{x_i}}\overline{\overline{\xi}}=0,\\ \overline{\overline{\nabla}}_{\partial_{x_i}}\partial_{x_j}=0 \end{align*} for $i,j=1,\ldots,2n$. Indeed, the above follows immediately from the fact that $\overline{\overline{\xi}}=\overline{\xi}+Z=\partial_y+Z$,
$a_i$ depend only on $y$ and $\overline{\overline{\nabla}}_XY=\overline{\nabla}_XY$ whenever $X\in\mathcal{D}$ or $Y\in \mathcal{D}$.
Let us denote $a_i':=\frac{\partial a_i}{\partial y}$. One may compute \begin{align*} \overline{\overline{\nabla}}_{\overline{\overline{\xi}}}\overline{\overline{\xi}}&=\overline{\nabla}_{\overline{\overline{\xi}}}\overline{\overline{\xi}}-\overline{h}(\overline{\overline{\xi}},\overline{\overline{\xi}})\varphi Z=\overline{\nabla}_{\overline{\xi}+Z}{(\overline{\xi}+Z)}-\overline{h}(\overline{\xi}+Z,\overline{\xi}+Z)\varphi Z\\&=\overline{\nabla}_{\overline{\xi}}\overline{\xi}+\overline{\nabla}_{\overline{\xi}}Z+\overline{\nabla}_{Z}\overline{\xi}+\overline{\nabla}_{Z}Z-\overline{h}(\overline{\xi},\overline{\xi})\varphi Z\\&=\overline{\nabla}_{\partial_y}\partial_y+\overline{\nabla}_{\partial_y}Z+\overline{\nabla}_ZZ-\overline{h}(\partial_y,\partial_y)\varphi Z\\ &=\sum_{i=1}^{2n}p_i\partial_{x_i}+\overline{\nabla}_{\partial_y}(\sum_{i=1}^{2n}a_i\partial_{x_i})-\overline{h}(\partial_y,\partial_y)\varphi (\sum_{i=1}^{2n}a_i\partial_{x_i})\\ &=\sum_{i=1}^{2n}p_i\partial_{x_i}+\sum_{i=1}^{2n}a_i'\partial_{x_i}-\overline{h}(\partial_y,\partial_y)\sum_{i=1}^na_i\partial_{x_i}+\overline{h}(\partial_y,\partial_y)\sum_{i=1}^na_{i+n}\partial_{x_{i+n}}\\ &=\sum_{i=1}^{n}p_i\partial_{x_i}+\sum_{i=1}^{n}a_i'\partial_{x_i}-\overline{h}(\partial_y,\partial_y)\sum_{i=1}^na_i\partial_{x_i}\\ &+\sum_{i=1}^{n}p_{i+n}\partial_{x_{i+n}}+\sum_{i=1}^{n}a_{i+n}'\partial_{x_{i+n}}+\overline{h}(\partial_y,\partial_y)\sum_{i=1}^na_{i+n}\partial_{x_{i+n}}=0, \end{align*} where the last equality easily follows from (\ref{eq::ai}) and (\ref{eq::ain}). 
Now, using the above we have \begin{align*} (\overline{\overline{\nabla}}_{\overline{\overline{\xi}}}\overline{\overline{\varphi}})\overline{\overline{\xi}}=-\overline{\overline{\varphi}}(\overline{\overline{\nabla}}_{\overline{\overline{\xi}}}\overline{\overline{\xi}}) =0,\\ (\overline{\overline{\nabla}}_{\overline{\overline{\xi}}}\overline{\overline{\varphi}})(\partial_{x_i})=\overline{\overline{\nabla}}_{\overline{\overline{\xi}}}(\overline{\overline{\varphi}}(\partial_{x_i})) -\overline{\overline{\varphi}}(\overline{\overline{\nabla}}_{\overline{\overline{\xi}}}\partial_{x_i})=0,\\
(\overline{\overline{\nabla}}_{\partial_{x_i}}\overline{\overline{\varphi}})(\overline{\overline{\xi}})=-\overline{\overline{\varphi}}(\overline{\overline{\nabla}}_{\partial_{x_i}}\overline{\overline{\xi}})=0,\\ (\overline{\overline{\nabla}}_{\partial_{x_i}}\overline{\overline{\varphi}})(\partial_{x_j})=0, \end{align*} that is, $\overline{\overline{\nabla}}\overline{\overline{\varphi}}=0$. Since $\overline{\overline{\nabla}}_{\overline{\overline{\xi}}}\overline{\overline{\xi}}=0$ and $\overline{\overline{\nabla}}_{\partial_{x_i}}\overline{\overline{\xi}}=0$ for $i=1,\ldots,2n$ we get that $\overline{\overline{\nabla}}_{X}\overline{\overline{\xi}}=0$ for $X\in\mathcal{X}(U)$, that is, $\overline{\overline{\nabla}}\overline{\overline{\xi}}=0$. Since $\overline\tau=0$, Theorem \ref{tw::ChangeOfTransversalField} implies that $\overline{\overline{\tau}}=0$ and now Remark \ref{rem::1} implies that $\overline{\overline{\nabla}}\overline{\overline{\eta}}=0$. \par Finally note that for $\partial_{x_1},\ldots,\partial_{x_{2n}},\partial_{x_{2n+1}}:=\overline{\overline{\xi}}$ we have $[\partial_{x_i},\partial_{x_j}]=0$ for $i,j=1,\ldots,2n+1$, so there exists a local coordinate system $(x_1,\ldots,x_{2n},x_{2n+1})$ with the desired properties. The proof is complete. \end{proof}
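The cancellation at the end of the preceding computation is exactly the integrating-factor identity behind (\ref{eq::ai})--(\ref{eq::ain}): with $H(y):=\overline h(\partial_y,\partial_y)$, the coefficients satisfy $a_i'=Ha_i-p_i$ and $a_{i+n}'=-Ha_{i+n}-p_{i+n}$. A quick numeric check of the first identity, with the purely illustrative (hypothetical) choice $H\equiv 1$ and $p(y)=y$:

```python
# Check a' = H a - p for a = -exp(int H dy) * int(p exp(-int H dy)) dy.
# Illustrative data (assumption, chosen only to test the formula): H = 1,
# p(y) = y; then int(y e^{-y}) dy = -(y+1) e^{-y}, so a(y) = y + 1.
import math

def a(y):
    return -math.exp(y) * (-(y + 1.0) * math.exp(-y))  # simplifies to y + 1

for y in (0.0, 0.5, 2.0):
    eps = 1e-6
    da = (a(y + eps) - a(y - eps)) / (2 * eps)   # numeric derivative a'(y)
    assert abs(da - (a(y) - y)) < 1e-4           # a' = H a - p with H = 1
print("integrating-factor identity verified")
```

The second family $a_{i+n}$ is checked the same way after flipping the sign of $H$.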
Now we can state the following classification theorem.
\begin{thm}\label{thm::opostacif} Let $f\colon M\rightarrow \mathbb{R}^{2n+2}$ be an affine hypersurface with a $\widetilde{J}$-tangent transversal vector field $C$ and $(\varphi,\xi,\eta)$ be an induced almost paracontact structure on $M$. If $\varphi$ or $\eta$ is $\nabla$-parallel then for every $p\in M$ there exists a neighbourhood $U$ of $p$ such that $f$ can be locally expressed in the form \begin{align}\label{eq::WzorNafNew} f(x_1,\ldots,x_{2n},y)=x_1b_1+\cdots+x_{2n}b_{2n}+\widetilde{J} v\int\cosh {\alpha}(y)\,dy\\\nonumber+v\int\sinh {\alpha}(y) \,dy, \end{align} where $b_1,\ldots,b_{2n}, v\in\mathbb{R}^{2n+2}$, $b_1,\ldots,b_{2n}, v, \widetilde{J} v$ are linearly independent and such that $\widetilde{J} b_i=b_i$ for $i=1,\ldots, n$, $\widetilde{J} b_i=-b_i$ for $i=n+1,\ldots, 2n$ and $\alpha$ is some smooth function. Moreover, the converse is true in the sense that for the function {\rm{(\ref{eq::WzorNafNew})}} there exists a global $\widetilde{J}$-tangent equiaffine transversal vector field $C$ such that $(\varphi,\xi,\eta)$ is $\nabla$-parallel. \end{thm} \begin{proof} Let $p\in M$. Thanks to Theorem \ref{tw::mapL3}, Corollary \ref{wn::ONablaPhi} and Remark \ref{rem::1}, without loss of generality, we may assume that $(\varphi,\xi,\eta)$ is $\nabla$-parallel around $p$. Let $(x_1,\ldots,x_{2n},x_{2n+1})$ be the local coordinate system from the Theorem \ref{tw::mapL3}. Let us denote $x_{2n+1}=y$. Now, by the Gauss formula we have the following system of differential equations: \begin{align} \label{eq::dffxixj}f_{x_ix_j}=0,\\ \label{eq::dffxiy}f_{x_iy}=0,\\ \label{eq::dffyy}f_{yy}=h(\xi,\xi)C=h(\xi,\xi)\widetilde{J} f_y \end{align} for $i=1,\ldots,2n$. 
Solving (\ref{eq::dffxixj}) and (\ref{eq::dffxiy}) we obtain \begin{align*} f(x_1,\ldots,x_{2n},y)=\sum_{i=1}^{2n}x_ib_i+A(y), \end{align*} where $b_1,\ldots,b_{2n}\in\mathbb{R}^{2n+2}$, $b_1,\ldots,b_{2n}$ are linearly independent and such that $\widetilde{J} b_i=b_i$ for $i=1,\ldots, n$, $\widetilde{J} b_i=-b_i$ for $i=n+1,\ldots, 2n$ and $A$ is some smooth function depending only on the variable $y$ with values in $\mathbb{R}^{2n+2}$. Now (\ref{eq::dffyy}) takes the form \begin{align}\label{eq::DEwithA} A''=\beta\widetilde{J} A', \end{align} where $\beta:=h(\xi,\xi)$. Substituting $G:=A'$ we get \begin{align}\label{eq::DEwithG} G'=\beta\widetilde{J} G. \end{align} Let $\overline\beta$ be any antiderivative of $\beta$. First note that for every $v\in\mathbb{R}^{2n+2}$ the function \begin{align}\label{eq::DEwithG-Solution} G(y)= \widetilde{J} v\cosh \overline{\beta}(y)+v\sinh \overline{\beta}(y) \end{align} is a solution of (\ref{eq::DEwithG}). On the other hand, since (\ref{eq::DEwithG}) is a linear first order ordinary differential equation, its solutions are uniquely determined by their initial values, and every initial value is realised by (\ref{eq::DEwithG-Solution}) for a suitable $v$; hence all its solutions have the form (\ref{eq::DEwithG-Solution}). The above implies that the solutions of (\ref{eq::DEwithA}) have the form $$ A(y)= \widetilde{J} v\int \cosh \overline{\beta}(y)dy+v\int \sinh \overline{\beta}(y)dy. $$ Since $f$ is an immersion and $C$ is transversal we have that $b_1,\ldots,b_{2n}$, $f_y=G(y)$ and $C=\widetilde{J} f_y$ are linearly independent. In particular, we obtain that $b_1,\ldots,b_{2n},v,\widetilde{J} v$ are linearly independent too. Indeed, if we assume that $$ \sum_{i=1}^{2n}a_ib_i+a_{2n+1}v+a_{2n+2}\widetilde{J} v=0 $$ for some constants $a_1,\ldots,a_{2n+2}$ we get \begin{align*} 0&=\sum_{i=1}^{2n}a_ib_i+a_{2n+1}v+a_{2n+2}\widetilde{J} v \\ &=\sum_{i=1}^{2n}a_ib_i+(a_{2n+2}\cosh \overline{\beta}(y)-a_{2n+1}\sinh \overline{\beta}(y))f_y \\ &\phantom{=}+(a_{2n+1}\cosh \overline{\beta}(y)-a_{2n+2}\sinh \overline{\beta}(y))\widetilde{J} f_y. 
\end{align*} Now, since $\{b_1,\ldots,b_{2n},f_y,\widetilde{J} f_y \}$ are linearly independent we obtain $a_1=\cdots =a_{2n}=0$ and $$a_{2n+2}\cosh \overline{\beta}(y)-a_{2n+1}\sinh \overline{\beta}(y)=a_{2n+1}\cosh \overline{\beta}(y)-a_{2n+2}\sinh \overline{\beta}(y)=0.$$ The above implies that $a_{2n+1}=a_{2n+2}=0$. Summarising, $f$ can be locally expressed in the form: \begin{align*} f(x_1,\ldots,x_{2n},y)=x_1b_1+\cdots+x_{2n}b_{2n}+\widetilde{J} v\int\cosh \overline{\beta}(y)\,dy\\\nonumber+v\int\sinh \overline{\beta}(y) \,dy. \end{align*} Now denoting $\alpha:=\overline{\beta}$ we get the assertion. \par In order to prove the last part of the theorem let us note that since $b_1,\ldots,b_{2n},v,\widetilde{J} v$ are linearly independent the function $f$
given by (\ref{eq::WzorNafNew}) is an immersion and $C:=\widetilde{J} f_y$ is transversal and $\widetilde{J}$-tangent. Let $(\varphi,\xi,\eta)$ be the almost paracontact structure induced by $C$. Since $C_{x_i}=0$ and $C_{y}=\alpha'f_y$ we have $\tau=0$, $S|_\mathcal{D}=0$ and $S\xi=-\alpha'\xi$. Since $f_{x_ix_j}=0$, $f_{x_iy}=0$ and $f_{yy}=\alpha'C$ we have $h(X,Y)=0$ whenever $X\in\mathcal{D}$ or $Y\in\mathcal{D}$, and $h(\xi,\xi)=\alpha'$. Now using (\ref{Wzory::eq::2}) we easily obtain that $\nabla\varphi =0$. Since $C$ is equiaffine we also have $\nabla\eta=0$ and $\nabla\xi=0$. \end{proof}
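The key step of the proof, that $G(y)=\widetilde{J} v\cosh\overline{\beta}(y)+v\sinh\overline{\beta}(y)$ solves $G'=\beta\widetilde{J} G$, can be verified numerically. The sketch below again models $\widetilde{J}$ as the block-swap structure $J(a,b,c,d)=(c,d,a,b)$ (so $J^2=\mathrm{id}$) and uses the hypothetical data $\beta(y)=\cos y$, $\overline{\beta}(y)=\sin y$ and an arbitrary $v$; these choices are illustrative assumptions only.

```python
# Verify G' = beta * J G for G(y) = J v cosh(B(y)) + v sinh(B(y)),
# where B is an antiderivative of beta. The identity works because
# J^2 = id: differentiating swaps cosh and sinh, and applying J swaps
# the roles of v and J v.
import numpy as np

J = np.array([[0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
v = np.array([1.0, -2.0, 0.5, 3.0])   # arbitrary initial data

def G(y):
    B = np.sin(y)                      # antiderivative of beta(y) = cos(y)
    return J @ v * np.cosh(B) + v * np.sinh(B)

for y in (0.0, 0.8, 2.5):
    eps = 1e-6
    dG = (G(y + eps) - G(y - eps)) / (2 * eps)      # numeric G'(y)
    assert np.allclose(dG, np.cos(y) * (J @ G(y)), atol=1e-4)
print("G' = beta J G verified")
```

Since $G(y_0)=\widetilde{J} v$ at a point where $\overline{\beta}(y_0)=0$ and $v\mapsto\widetilde{J} v$ is a bijection, every initial value is indeed realised by some $v$, as used in the uniqueness argument.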
\begin{cor}\label{wn::hyperplane} If $\nabla\varphi=0$ or $\nabla\eta=0$, and $\operatorname{rank}h=0$ on $M$, then $f$ is a piece of a hyperplane. \end{cor} \begin{proof} By Theorem \ref{thm::opostacif} $f$ can be locally expressed in the form (\ref{eq::WzorNafNew}). In particular \begin{align*} f_{yy}=\alpha'(y) \cdot (\widetilde{J} v\sinh \alpha(y)+v\cosh \alpha (y))=\alpha'(y)\widetilde{J} f_y. \end{align*} On the other hand $f_{yy}$ is tangent, since $h=0$. Since $\widetilde{J} f_y$ is transversal we obtain $\alpha '=0$, that is, $\alpha=\operatorname{const}$ and, in consequence, $f$ is a piece of a hyperplane. \end{proof}
We conclude this section with an example showing that the condition $\nabla\xi=0$ is much weaker than $\nabla\varphi=0$ or $\nabla\eta=0$.
\begin{exa} Let us consider an affine immersion $f\colon (0,\infty)^2\times\mathbb{R}\rightarrow \mathbb{R}^4$ given by the formula $$ f(x,y,z):= \left[ \begin{matrix} \frac{1}{2}(x^2+y^2)\\ \sinh z\\ \frac{1}{2}(x^2-y^2)\\ \frac{1}{3}(x^3+y^3)+\cosh z \end{matrix} \right]. $$ It is easy to verify that $$ C\colon (0,\infty)^2\times\mathbb{R}\ni (x,y,z)\mapsto \left[ \begin{matrix} 0\\ \sinh z\\ 0\\ \cosh z \end{matrix} \right] \in\mathbb{R}^4 $$ is a $\widetilde{J}$-tangent transversal vector field for $f$. Let $\{\partial_{x},\partial_{y},\partial_z\}$ be the canonical basis on $(0,\infty)^2\times\mathbb{R}$ generated by the coordinate system $(x,y,z)$ and let $(\varphi,\xi,\eta)$ be an almost paracontact structure induced by $C$. Note that we have $\xi=\partial_z$. By straightforward computations we obtain $$ h=\left[ \begin{matrix} x\cosh z & 0 & 0 \\ 0 & y\cosh z & 0 \\ 0 & 0 & 1 \end{matrix} \right], $$ so in particular $f$ is nondegenerate, that is $\operatorname{rank} h=3$. Now by Remark \ref{rem::2} it is not possible to find a $\widetilde{J}$-tangent transversal vector field such that $\nabla\varphi=0$ or $\nabla\eta=0$. However, by the Gauss formula we have $$ \nabla_{\partial_x}\partial_z=\nabla_{\partial_y}\partial_z=\nabla_{\partial_z}\partial_z=0, $$ that is $\nabla\xi=0$. \end{exa}
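The entries of $h$ in this example can be double-checked numerically: by the Gauss formula, the $C$-component of each pure second partial of $f$ in the frame $\{f_x,f_y,f_z,C\}$ is the corresponding diagonal entry of $h$. A small sketch (the sample point is chosen arbitrarily in $(0,\infty)^2\times\mathbb{R}$):

```python
# Decompose f_xx, f_yy, f_zz in the frame {f_x, f_y, f_z, C}; the coefficient
# of C is the corresponding diagonal entry of the second fundamental form h.
import numpy as np

def f(x, y, z):
    return np.array([0.5 * (x * x + y * y), np.sinh(z),
                     0.5 * (x * x - y * y), (x**3 + y**3) / 3.0 + np.cosh(z)])

x, y, z, eps = 1.3, 0.6, 0.4, 1e-4

def D(i):            # first partial in direction i (central difference)
    e = np.zeros(3); e[i] = eps
    return (f(x + e[0], y + e[1], z + e[2]) - f(x - e[0], y - e[1], z - e[2])) / (2 * eps)

def D2(i):           # pure second partial in direction i
    e = np.zeros(3); e[i] = eps
    p, m = (x + e[0], y + e[1], z + e[2]), (x - e[0], y - e[1], z - e[2])
    return (f(*p) - 2 * f(x, y, z) + f(*m)) / eps**2

fx, fy, fz = D(0), D(1), D(2)
C = np.array([0.0, np.sinh(z), 0.0, np.cosh(z)])
frame = np.column_stack([fx, fy, fz, C])
coeff = lambda w: np.linalg.solve(frame, w)   # coordinates in the frame

assert abs(coeff(D2(0))[3] - x * np.cosh(z)) < 1e-3   # h(d_x, d_x) = x cosh z
assert abs(coeff(D2(1))[3] - y * np.cosh(z)) < 1e-3   # h(d_y, d_y) = y cosh z
assert abs(coeff(D2(2))[3] - 1.0) < 1e-3              # h(d_z, d_z) = 1
print("h entries verified")
```

All three diagonal entries match the matrix of $h$ displayed above, confirming in particular that $\operatorname{rank} h=3$ at points with $x,y>0$.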
\emph{This research was financed by the Ministry of Science and Higher Education of the Republic of Poland.}
\end{document}
\begin{document}
\title{Accommodating New Flights \\ into an Existing Airline Flight Schedule}
\author{\"{O}zge \c{S}afak \\ Department of Industrial Engineering, Bilkent University, 06800 Ankara, Turkey, ozge.safak@bilkent.edu.tr \\ \\ Alper Atamt{\"u}rk \\ Industrial Engineering \& Operations Research, University of California, Berkeley, California 94720, USA, atamturk@berkeley.edu \\ \\ M. Selim Akt{\"u}rk \\ Department of Industrial Engineering, Bilkent University, 06800 Ankara, Turkey, akturk@bilkent.edu.tr }
\maketitle
\begin{abstract} \ignore{We present two novel approaches for airline rescheduling to respond to increasing passenger demand. In both approaches, we alter an existing flight schedule to accommodate new flights while maximizing the airline's profit.} We present two novel approaches to alter a flight network for introducing new flights while maximizing airline's profit. A key feature of the first approach is to adjust the aircraft cruise speed to compensate for the block times of the new flights, trading off flying time and fuel burn. In the second approach, we introduce aircraft swapping as an additional mechanism to provide a greater flexibility in reducing the incremental fuel cost and adjusting the capacity. The nonlinear fuel-burn function and the binary aircraft swap and assignment decisions complicate the optimization problem significantly. We propose strong mixed-integer conic quadratic formulations to overcome the computational difficulties. The reformulations enable solving instances with 300 flights from a major U.S. airline optimally within reasonable compute times. \\
\noindent \textbf{Keywords:} Airline rescheduling, cruise speed control, aircraft swapping, CO$_2$ emissions, passenger spill, mixed-integer conic quadratic optimization, McCormick inequalities.
\end{abstract}
\begin{center} Jun 2018, Nov 2018, Apr 2019 \end{center}
\BCOLReport{18.02}
\pagestyle{plain}
\section{Introduction}
\ignore{The U.S. Federal Aviation Administration (FAA) Aerospace Forecast 2017--2037 \cite{FAAreport} reports that the demand for air travel in 2016 grew at the fastest pace since 2005 despite the modest economic growth in the U.S.}
\ignore{For reasons such as tourism, new job opportunities or even a natural disaster, to respond to increasing number of passengers, an airline needs to make immediate changes on its existing flight schedule to introduce new flights in a particular day. For such a short time period, leasing an aircraft just to serve these new flights may not be a feasible option. Therefore,} While operating a daily flight schedule, several events, such as tourism, business conferences or even a natural disaster, might necessitate introducing new flights on a particular day. In a relatively short time period, leasing or buying an aircraft just to serve these new flights may not be a feasible option. Therefore, an airline aims to accommodate new flights with minimum disruption on the existing schedule. In near real time, to accommodate these new flights into an existing flight schedule, the airline can only make some operational changes: either it can use the idle times, if any, in the existing schedule, or it can change the departure times of the existing flights, or it can increase the aircraft cruise speed to shorten the flight times. The airline can utilize any one of these alternatives or any combination of them to open up enough time to accommodate the new flights. Increasing the cruise speed, however, has an adverse effect on the fuel burn, which in turn increases the fuel and carbon emission costs, i.e., the most significant component of an airline's operational costs. Since aircraft types have different fuel efficiencies, changing the aircraft assignments
may be beneficial in reducing the fuel burn, and consequently decreasing the operational costs.
In this paper, we propose two approaches to accommodate new flights into an existing schedule. The
first approach carefully adjusts flight departure times as well as aircraft cruise speeds to allow the required time for operating the new flights.
Increasing the cruise speed of a flight directly reduces its block time, thereby opening up space to accommodate new flights in the flight schedule. Although cruise time reduction provides a great opportunity to add new flights, increasing the speed of an aircraft comes with a significant additional cost of fuel burn and CO$_2$ emissions. To keep the cost of fuel manageable, \ignore{the decision of fleet type assignment to new flight may be critical. A solution can be improved through assigning} the new flights may be assigned to a fuel-efficient but smaller aircraft.
However, such an assignment may spill some of the passengers due to insufficient seat capacity, resulting in a loss of revenue. Therefore,
in order to address this trade-off,
we propose a second approach, which incorporates an explicit aircraft swapping mechanism together with cruise time controllability. Aircraft swapping provides a greater opportunity in reducing the fuel burn and capturing passenger demand of new flights. Through flight timing and assignment decisions, we trade off the incremental fuel cost associated with cruise time compression against the revenue from the passengers. Although the second approach may provide substantial improvements in the airline's profit over the first one, the additional binary swapping decisions and the nonlinear fuel burn function make the optimization problem significantly more difficult to solve. Our aim is to provide a set of alternative schedules with increasing profit at the cost of additional compute time. A decision maker can interactively specify her preferences (or restrictions) and analyze their effect on the airline's profit when introducing new flights into an existing schedule.
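The trade-off described above can be made concrete with a toy computation. All numbers below (fares, seat counts, fuel costs and the convex fuel-burn exponent) are hypothetical and serve only to illustrate the tension between compressing cruise time on a large aircraft and swapping in a smaller, fuel-efficient one; they are not the paper's actual model.

```python
# Toy illustration (hypothetical data): cruise-time compression raises fuel
# burn superlinearly, while swapping to a smaller aircraft saves fuel but
# spills passengers above its seat capacity.
def fuel_cost(cruise_hours, base_hours, base_fuel, exponent=2.5):
    # convex penalty for flying faster than the fuel-optimal block time
    return base_fuel * (base_hours / cruise_hours) ** exponent

def profit(aircraft, cruise_hours, demand, fare=150.0):
    seats, base_fuel = aircraft
    revenue = fare * min(demand, seats)          # spilled passengers earn nothing
    return revenue - fuel_cost(cruise_hours, base_hours=2.0, base_fuel=base_fuel)

large, small = (180, 8000.0), (120, 5000.0)      # (seats, fuel cost at 2.0 h)
demand = 150
# option 1: compress the large aircraft's cruise time by 15 min for a new flight
p_large = profit(large, 1.75, demand)
# option 2: swap in the small aircraft flying at its fuel-optimal speed
p_small = profit(small, 2.0, demand)
print(f"large, compressed: {p_large:.0f}; small, swapped: {p_small:.0f}")
```

With these particular numbers the swap wins despite spilling 30 passengers; changing the fare or the compression requirement can reverse the ranking, which is precisely why the timing and assignment decisions must be optimized jointly.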
Aircraft swapping is a practical way to adjust the capacity based on the demand changes during the booking period. Sherali et al. \cite{Sherali2005} develop a demand-driven re-fleeting model that dynamically re-assigns the aircraft in response to improved passenger demand forecasts. They only allow aircraft re-assignment within the same aircraft family to keep the crew assignments unchanged. Jarrah et al. \cite{Jarrah} also re-assign fleet types by limiting the number of changes on the original fleet assignment. Wang and Regan \cite{Wang} examine a dynamic yield management problem when the assigned capacities are subject to a swap. These studies show that re-assignment of aircraft to reflect changing demand yields substantial savings.
In addition to adjusting the capacity, most airlines make use of swap opportunities to build robust aircraft routings or reduce delays in the recovery plans. Ageeva \cite{Ageeva} adds a reward for each opportunity to swap aircraft in an aircraft routing model and encourages overlapping routes to have more swap opportunities in the case of an operational disruption. If two aircraft routings meet at more than one airport, aircraft can be swapped, and then returned to their original routings at the next meeting point. Therefore, if a flight is delayed, swapping the aircraft provides robustness by allowing a flight with high demand to be flown. Akt{\"u}rk et al. \cite{akturk} use the idea of swapping aircraft between flights to reduce the effect of a disruption on the schedule. They report approximately 30\% cost savings compared to the delay propagation recovery approach. More recently, Arikan et al. \cite{Arikan} use both flight re-timing and aircraft swapping approaches to find minimum cost passenger and aircraft recovery plans. Based on an investigation of over 240,000 domestic routings of 13 major U.S. airlines, Lonzius and Lange \cite{Lonzius} confirm the delay-reducing effect of swap opportunities.
The re-timing approach has also been used to minimize delay propagation across the entire airline flight network. Chiraphadnakul and Barnhart \cite{virot} and Dunbar \cite{Dunbar} adjust flight departure times to provide slack across connections so as to minimize delay propagation. Lan et al. \cite{lans} implement a re-timing approach to minimize the number of misconnected passengers. A novel model to increase the robustness of aircraft routing is presented by Aloulou et al. \cite{aloulou}, who judiciously distribute slack to the connections where it is most needed. Ahmed et al. \cite{ahmed} also adjust flight departure times in their robust weekly aircraft routing problem. Cadarso and de Celis \cite{Cadar} propose a two-stage stochastic programming formulation that updates base schedules in terms of timetables and fleet assignments under stochastic demand, and builds robust itineraries to reduce the number of misconnected passengers. Although aircraft swapping and re-timing have been widely used in the literature, the novelty of this paper lies in applying them to introduce new flights into an existing schedule.
In the airline industry, there is a growing realization that cruise speed selection has a significant impact on the airline's profit. Sherali et al. \cite{Sherali2006} state that airline optimization models are quite sensitive to fuel burn. Cook et al. \cite{Cook09} discuss the options of flying faster to ensure the minimum time requirement for passenger connections and flying slower to conserve fuel.
In recent years, aircraft speed control has been studied in several contexts, including air traffic management (ATM), airline disruption management, aircraft recovery, and robust schedule design. The joint work of FAA/Eurocontrol \cite{FAAEurocontrol} emphasizes the importance of speed control for ATM to manage fuel burn and terminal congestion. While the methodology proposed in \cite{FAAEurocontrol} aims to save fuel by reducing cruise speed, once terminal congestion is anticipated, it suggests increasing the speed of aircraft at the beginning of a rush period to avoid creating congestion and to reduce the overall delay and fuel burn. Kang and Hansen \cite{Kang} emphasize the importance of accurate flight fuel burn prediction to reduce an airline's cost. They show how ensemble learning techniques can be used to improve flight trip fuel burn prediction. In their study, a novel discretionary fuel estimation approach is proposed to assist dispatchers with better discretionary fuel loading decisions. Kohl et al. \cite{Kohl} discuss the ability to reduce passenger delay costs by accelerating the aircraft in their overview of airline disruption management processes. Marla et al. \cite{Marla} integrate disruption management with flight planning, which enables changes in the flight speed. Using a time-space network, they make multiple copies of flights representing different discrete departure times and cruise speeds. However, in the context of airline operations, this representation leads to a large number of flight copies to be evaluated in the model. Arikan et al. \cite{Arikan} and Akt{\"u}rk et al. \cite{akturk} express cruise speed as a continuous variable and find an optimal trade-off between increased fuel cost and disruption costs such as delay and spilled passenger costs. To manage disruptions in a less costly manner, airlines are also interested in building robust schedules. More recently, Duran et al. 
\cite{Duran} and \c{S}afak et al. \cite{Safak} consider the fuel burn and CO$_2$ emission costs associated with aircraft cruise speed adjustments to ensure passenger connections with desired probabilities. G{\"u}rkan et al. \cite{Gurkan} also include aircraft cruise speed decisions in an integrated airline scheduling, aircraft fleeting and routing problem. Different from these studies, our key feature is to include cruise time controllability decisions to open up enough time to accommodate new flights into the existing flight schedule. The major difficulty of including controllable cruise time decisions in the model is the nonlinearity of the fuel burn and carbon emission cost functions. We handle these nonlinearities using the formulation strengthening techniques of Akt{\"u}rk et al. \cite{akturk2009strong}. See Atamt\"urk and Narayanan \cite{AN:cmir} for more on strengthening conic mixed-integer programs.
The main contributions of the current paper are as follows: \begin{itemize} \item{We propose a new problem of accommodating new flights into an existing flight schedule of a particular day. In this context, for the first time, we introduce the options of flight re-timing, aircraft cruise speed control and aircraft swapping to open up enough time for new flights in the schedule.} \item{We propose strong mixed-integer conic quadratic (MICQ) formulations to overcome the computational difficulties of the nonlinear fuel burn and emission functions as well as the penalty functions of arrival tardiness.} \item{We improve and strengthen the MICQ formulations by adding McCormick inequalities. The new formulation with the McCormick inequalities enables solving test instances with 300 flights from a major U.S. airline to optimality within reasonable compute times.} \end{itemize}
The remainder of the paper is organized as follows. In Section~\ref{Sec:2}, we briefly describe the framework of the problem and then present a numerical example illustrating the benefits of cruise time controllability and the proposed aircraft swapping mechanism. Section~\ref{Sec:3} introduces the mixed-integer nonlinear programming formulations for the two proposed approaches. In Section~\ref{sec:reformulation} we present stronger reformulations of the models to improve their solvability.
We computationally test the proposed mathematical models using real-world data of a major U.S. airline in Section~\ref{Computation} and conclude with a few final remarks in Section~\ref{Sec:6}.
\section{Problem Definition} \label{Sec:2} In this section, we briefly describe the problem setting. Consider a set of new flight pairs (i.e., consecutive flights, specifically a flight from the hub to a new demand point and its return flight to the hub) to be accommodated into the existing flight schedule in near real time. An airline needs to accommodate new flights without excessively disrupting the existing schedule. Therefore, we only shift the departure times of existing flights within the intervals already determined by the airline. Moreover, any arrival tardiness of the existing flights due to inserting new flights is penalized in the objective. In hub-and-spoke networks, connecting passengers represent a non-negligible percentage of the total number of passengers. Therefore, we also respect the passenger connections at the hub airport while optimizing the departure and cruise times of flights. There are two cases for a new flight: (1) if the new flight is assigned to a larger aircraft, then this assignment may capture more passengers, providing a greater revenue, but compressing the cruise times of flights to accommodate new flights may increase the fuel cost significantly; (2) if the new flight is assigned to a smaller, fuel-efficient aircraft to reduce the fuel burn, then there may be an additional cost of spilled passengers with a decrease in revenue. To increase the profit for an airline, we introduce a second model which additionally includes aircraft swapping decisions.
In the following sections, we first define the fuel burn as a function of cruise time and the penalty function for the arrival tardiness of flights, and then provide a numerical example showing how cruise time controllability and aircraft swapping can be used to open up enough space for the block times of new flights.
\subsection{Fuel and CO$_2$ emission cost function} One of the main contributions of this study is to increase the aircraft cruise speed to compensate for the time required to operate new flights. However, we need to consider the adverse effect of increasing cruise speed on fuel and carbon emissions costs. To estimate the fuel burn, we use the cruise stage fuel flow model developed by the Base of Aircraft Data (BADA) project of EUROCONTROL \cite{europaram}. This model has been widely used in the literature. The fuel burn (kg) of a flight as a function of its cruise time $f$ (minutes) and its aircraft type $t$ can be calculated as follows:
\begin{equation} F^t\left(f\right)= \alpha^{t} \frac{1}{f} + \beta^{t} \frac{1}{{f}^2} + \gamma^t {f}^3 + \nu^t {f}^2. \label{eq:cost} \nonumber \end{equation} The coefficients $\alpha^{t}, \beta^{t}, \gamma^t, \nu^t > 0$
are expressed in terms of aircraft-specific fuel consumption coefficients as well as the mass of the aircraft, air density and gravitational acceleration, as provided in \c{S}afak et al. \cite{Safak}. It is important to note that $F^t$ is convex for $f>0$. The minimizer of the fuel consumption function $F^t$ is denoted by $u^t$; it is the ideal cruise time when an aircraft flies at its most fuel-efficient speed, referred to as the Maximum Range Cruise (MRC) speed. Although the fuel burn is minimized at the MRC speed, airlines may set a higher cruise speed due to scheduling constraints.
EUROCONTROL \cite{euro} states that each kg of fuel burned produces approximately 3.15 kg of CO$_2$ emissions. Therefore, we can express the fuel and CO$_2$ emission costs as a function of cruise time as follows: \begin{equation} c^t(f) = c_o F^t\left(f\right)
\end{equation} \noindent where $c_o$ is the total cost of fuel and CO$_2$ emitted by an aircraft per kg of fuel burned.
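To make the cruise-time trade-off concrete, the following sketch evaluates a fuel burn function of this form and locates the MRC cruise time $u^t$ by ternary search. The coefficient values are hypothetical placeholders, not actual BADA parameters.

```python
# Sketch of the BADA-style cruise fuel burn model from the text:
# F(f) = alpha/f + beta/f^2 + gamma*f^3 + nu*f^2, convex for f > 0.
# Coefficients below are HYPOTHETICAL placeholders, not real BADA values.

def fuel_burn(f, alpha=5.0e5, beta=1.0e6, gamma=1.0e-3, nu=1.0e-2):
    """Fuel burn (kg) for a cruise time f (minutes)."""
    return alpha / f + beta / f**2 + gamma * f**3 + nu * f**2

def mrc_cruise_time(lo=30.0, hi=600.0, tol=1e-6):
    """Ternary search for the minimizer u^t of the convex function F."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if fuel_burn(m1) < fuel_burn(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

u = mrc_cruise_time()
# Flying faster (shorter cruise time) than u burns more fuel:
assert fuel_burn(u - 10) > fuel_burn(u) and fuel_burn(u + 10) > fuel_burn(u)
```

By convexity, compressing the cruise time below $u^t$ strictly increases the fuel burn, which is the cost the CTC model trades off against revenue.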
\subsection{Penalty function for arrival tardiness} In this paper, we aim to accommodate new flights with minimal disruptions to the existing flight schedule. To this end, we penalize deviations from the original arrival times of the existing flights. Hoffman and Ball \cite{hoffman} suggest using a nonlinear delay cost function to better reflect reality, since flight delay costs tend to grow faster than linearly with time. Moreover, EUROCONTROL \cite{delay} performs a detailed investigation of airline cost functions and reports that a power curve provides a good fit to passenger costs as a function of delay duration. Therefore, we penalize arrival tardiness with a convex increasing function of the tardiness $b$ as
\begin{equation} P(b) = \rho b^{\zeta} \label{eq:penaltyfunction} \end{equation}
\noindent where $\rho \geq 0$. As in Akt{\"u}rk et al. \cite{akturk}, we let $\zeta = 1.5$ in our computational experiments. \REV{{It is important to note that US costs might differ from those provided by EUROCONTROL \cite{delay}.}}
Arrival tardiness can be reduced by compressing the cruise times of flights at an additional fuel burn cost. Therefore, the assignment of fuel-efficient aircraft to new flights becomes more critical in order to reduce both delay and fuel burn costs.
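A minimal sketch of this penalty structure, including the 15-minute grace period the paper applies before penalizing tardiness (described with the model in Section 3):

```python
# Sketch of the arrival-tardiness penalty P(b) = rho * b**zeta from the text,
# with the 15-minute grace period applied before any penalty accrues.

def tardiness_penalty(arrival_delay, rho=5.0, zeta=1.5, grace=15.0):
    """Penalty for arriving `arrival_delay` minutes after the original time."""
    b = max(0.0, arrival_delay - grace)  # only tardiness beyond the grace period
    return rho * b ** zeta

assert tardiness_penalty(10) == 0.0            # within the grace period
assert tardiness_penalty(19) == 5 * 4 ** 1.5   # b = 4 minutes penalized
```

Because $\zeta > 1$, doubling the tardiness more than doubles the penalty, which is what pushes the model toward compressing cruise times instead of accepting long delays.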
\subsection{Numerical example} In this section, we first provide a numerical example showing how cruise time controllability can be utilized to accommodate new flights into an existing airline flight schedule. Then, we extend the example to show how aircraft swapping together with cruise speed control can be used to achieve a more profitable schedule.
We give a sample schedule for two aircraft in Table \ref{schedule1}. Tail numbers of the aircraft and the flight numbers, along with the origin and destination airports, planned departure times in local ORD time, planned block times and demand for the flights, are listed in the table. Each aircraft visits ORD at least once a day. Let us now introduce new round-trip flights between ORD and MSP.
\begin{table}[htbp]\scriptsize
\centering
\caption{Original schedule.}
\label{schedule1}
\begin{tabular}{l|cccccccc}
\hline
\hline
& Tail \# & Flight \# & \multicolumn{1}{c}{From} & \multicolumn{1}{c}{To} & \multicolumn{1}{c}{Plan. Dep.} & \multicolumn{1}{c}{Plan. Dur.} & \multicolumn{1}{c}{Plan. Arr.} & \multicolumn{1}{c}{Demand} \\
\hline
&\multirow{4}*{N53442} &1586 &ORD &MCO &08:00 &03:04 &11:04 &200 \\ Existing & &633 &MCO &ORD &12:00 &03:07 &15:07 &180\\ flights & &451 &ORD &IAH &18:10 &03:08 &21:18 &190\\
& &584 &IAH &ORD &22:30 &02:50 &01:20 &186\\ \hline
& \multirow{4}*{N45425} &527 &ORD &IAH &08:45 &03:02 &11:47 &151\\ Existing & &521 &IAH &ORD &12:32 &03:03 &15:35 &154\\ flights & &623 &ORD &MCO &17:00 &03:02 &20:02 &160\\ & &679 &MCO &ORD &21:10 &03:10 &00:20 &163\\
\hline
New & &1842 &ORD &MSP &13:15 &01:40 &14:55 &183\\
flights & &430 &MSP &ORD &16:10 &01:45 &17:55 &168\\
\hline
\hline
\end{tabular} \end{table}
Figure \ref{fig:origschedule} gives the time-space network representation of the original schedule. The red and blue arcs in Figure \ref{fig:origschedule} represent the routes of aircraft N53442 and N45425, respectively. The flight arcs originate from the departure airport at the planned departure time and end at the destination airport after the planned block time. Ground arcs represent the aircraft turnaround times needed to prepare the aircraft for the next flight.
\begin{figure}
\caption{Original time-space network.}
\label{fig:origschedule}
\end{figure}
In this example, we assume that aircraft N53442 is a Boeing 767-300 and N45425 is an Airbus 320-212, with 218 and 180 seats, respectively. In the original schedule, we assume that the aircraft fly at the most fuel-efficient (MRC) speed and estimate the fuel burn rates as 87 kg/min and 40 kg/min for aircraft N53442 and N45425, respectively. The fuel burn rate is calculated using the fuel flow model of BADA as mentioned above. We assume that for each flight, the non-cruise stages take 30 minutes; the cruise stages then take 30 minutes less than the flight block times given in Table \ref{schedule1}. Assuming $c_{fuel}$ = 1.2 \$/kg and $c_{CO_2}$ = 0.2 \$/kg, the total cruise-stage fuel and carbon cost for the original schedule is \$100,593.
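The combined unit cost per kg of fuel implied by these assumptions can be checked directly, using the 3.15 kg CO$_2$ per kg of fuel factor cited above:

```python
# Combined cost per kg of fuel burned, using the paper's assumptions:
# c_fuel = 1.2 $/kg, c_CO2 = 0.2 $/kg, 3.15 kg CO2 per kg of fuel.
c_fuel, c_co2, emission_factor = 1.2, 0.2, 3.15
c_o = c_fuel + emission_factor * c_co2  # = 1.83 $ per kg of fuel burned

# Cruise cost rates at MRC speed for the two aircraft in the example:
rate_n53442 = 87 * c_o  # $/min for the Boeing 767-300
rate_n45425 = 40 * c_o  # $/min for the Airbus 320-212
```

The large gap between the two rates (roughly \$159/min versus \$73/min) is what makes assigning the new flights to the smaller aircraft attractive in the CTC-AS example below.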
Let us assume that the airline wants to operate two new flights, 1842 and 430. One way to open up sufficient time to accommodate these new flights is to compress cruise times and adjust departure times simultaneously. We refer to this approach with cruise time controllability as CTC. If the airline wants to meet the passenger demand of 183 for the new flight 1842, the only option is to assign the new flights to aircraft N53442. These new flights can be placed between flights 633 and 451. The necessary block times for the new flights can be made available by left-shifting flights 1586 and 633 and right-shifting flights 451 and 584. However, we assume that the airline wishes to keep the new schedule close to the original one. Thus, we only allow departure times to deviate by at most 90 minutes from the planned departure times in the original schedule. In addition, if a flight arrives more than 15 minutes later than its original planned arrival time, we penalize the tardiness with the nonlinear function in \eqref{eq:penaltyfunction}, letting $\rho=5$ and $\zeta=1.5$. Because of these scheduling limitations, the new flights cannot be accommodated by only shifting the flight departure times. We also need to compress the cruise times of the existing flights 633, 451 and 584 by 17, 17 and 3 minutes, respectively. Moreover, the cruise times of the new flights 1842 and 430 are compressed by 8 minutes to satisfy the scheduling restrictions. However, flight 451 arrives 37 minutes later than its original planned arrival time; thus 22 minutes of this tardiness are penalized by \$552.
In Figure \ref{fig:schedule1}, we give the time-space network representation of the resulting schedule. In this figure, the dotted arcs represent flights with their original block times, whereas the solid arcs represent flights with compressed cruise times.
\begin{figure}
\caption{Time-space network with CTC.}
\label{fig:schedule1}
\end{figure}
Compressing the cruise times of the flights incurs additional fuel burn and CO$_2$ emission costs. The total fuel burn and CO$_2$ emission cost of the existing flights increases to \$101,286. Moreover, for the new flights, the total fuel burn and CO$_2$ emission cost is \$16,044. Therefore, the fuel and emission cost increment compared to the original schedule is \$16,737 = \$101,286 + \$16,044 - \$100,593. To reduce the fuel burn by re-assigning aircraft, we also propose an aircraft swapping mechanism together with cruise time controllability, referred to as CTC-AS.
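The cost increment of the CTC schedule reduces to simple bookkeeping over the totals reported above:

```python
# Fuel and CO2 emission cost increment of the CTC schedule relative to the
# original schedule, using the dollar totals reported in the text.
original_total = 100_593  # $, existing flights at MRC speed
ctc_existing   = 101_286  # $, existing flights after cruise compression
ctc_new        = 16_044   # $, the two new flights under CTC

increment = ctc_existing + ctc_new - original_total
assert increment == 16_737  # matches the $16,737 reported in the text
```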
\begin{figure}
\caption{Time-space network with CTC-AS.}
\label{fig:schedule2}
\end{figure}
In the CTC-AS approach, we swap the aircraft of flights 451 and 623 before departure. In the new schedule, aircraft N53442 operates flights 1586, 633, 623, and 679, whereas aircraft N45425 operates flights 527, 521, 451, and 584. We provide the time-space network representation of the resulting schedule in Figure \ref{fig:schedule2}. To reduce the fuel expenses, the new flights are assigned to the fuel-efficient aircraft N45425. However, 10, 6 and 3 passengers of flights 451, 584 and the new flight 1842, respectively, are spilled due to the low capacity of aircraft N45425, resulting in a cost of \$2,007.
Similarly, the passenger revenue obtained from ticket sales of the new flights is reduced from \$49,923 to \$49,548, since 3 passengers of the new flights are not served due to the low seat capacity. An additional \$500 swapping cost is incurred in the new schedule. On the other hand, the savings in fuel burn and CO$_2$ emission costs may compensate for these additional costs of spilled passengers, revenue loss and swapping. Indeed, the fuel and CO$_2$ cost of the new flights is reduced to \$8,118, almost half of the fuel expenses under the CTC approach. Even though the total fuel burn and CO$_2$ emission cost of the existing flights slightly increases to \$101,590, the fuel and emission cost increment significantly decreases from \$16,737 to \$9,115. The penalty for the arrival tardiness of flight 451 also decreases to \$115. Therefore, the resulting schedule improves the airline's profit from \$30,234 to \$35,412.
We give the operational cost components and revenues of two schedules achieved by CTC and CTC-AS approaches in Table \ref{eq:CostPublished}. Airline's profit is calculated as follows: \begin{eqnarray} \text{Profit } & = & \text{Revenue} - \left(\text{Fuel \& Emiss. Cost Increment}\right) - \text{Spilled Cost} \nonumber \\ & & - \text{Penalty Cost} - \text{Swap Cost} - \left(\text{Crew \& Service Costs}\right). \nonumber \end{eqnarray}
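The profit formula above can be sketched as a plain function; the input values in the usage line are illustrative only, not taken from Table \ref{eq:CostPublished}.

```python
# Sketch of the airline profit formula from the text: revenue minus the
# fuel/emission increment, spilled-passenger, penalty, swap, and
# crew & service cost terms.

def profit(revenue, fuel_increment, spilled, penalty, swap, crew_service):
    return revenue - fuel_increment - spilled - penalty - swap - crew_service

# Illustrative (hypothetical) cost components:
assert profit(50_000, 16_000, 0, 500, 0, 4_400) == 29_100
```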
\begin{table}[ht]
\setlength{\tabcolsep}{1pt}
\centering
\caption{Cost comparison.}
\label{eq:CostPublished}
\scalebox{0.9}{ \begin{tabular}{c r r r r r } \hline \hline \multicolumn{1}{r}{} &\multicolumn{1}{r}{Fuel \& Emiss.} &\multicolumn{1}{r}{Spilled}& \multicolumn{1}{r}{Deviation}& \multicolumn{1}{r}{Swap} \\ \multicolumn{1}{r}{} &\multicolumn{1}{r}{Cost Increment ($\$$)}&\multicolumn{1}{r}{ Cost ($\$$)} &\multicolumn{1}{r}{ Penalty ($\$$)} & \multicolumn{1}{r}{Cost ($\$$)} \\ \hline
\multicolumn{1}{l}{CTC} &16,044 &0 &552.0 &0 \\
\multicolumn{1}{l}{CTC-AS} &8,118 &2,007 &115.0 &500 \\
\hline \hline & \multicolumn{1}{r}{Crew \& Service} & \multicolumn{1}{r}{Passengers}&\multicolumn{1}{r}{} \\ & \multicolumn{1}{r}{Cost ($\$$)} & \multicolumn{1}{r}{Revenue ($\$$)}&\multicolumn{1}{r}{Profit ($\$$)}\\
\hline \multicolumn{1}{l}{CTC}&4,400 &49,923 &30,234\\
\multicolumn{1}{l}{CTC-AS} &4,400 &49,548 &35,412\\
\hline \hline
\end{tabular} } \end{table}
\section{Mathematical Formulations} \label{Sec:3} In this section, we present the mathematical formulations of the two approaches described in the previous section. We start with the simpler CTC model that adjusts the departure times and controls the cruise time, and then extend it to CTC-AS by incorporating aircraft swapping as well.
\subsection{Formulation with cruise time controllability} \label{FirstFormulation} We first give a list of sets, parameters and decision variables used in the model.\\
\noindent \textbf{Sets:} \begin{longtable}{p{.07\textwidth}p{.9\textwidth}p{.18\textwidth}} $E$&set of existing flights in the schedule\\ $E_{O}$&set of existing outbound flights from the hub \\ $E_{I}$& set of existing inbound flights arriving to the hub \\ $N$&set of new flights \\ $N_{O}$&set of new outbound flights from the hub to a new demand point\\ $N_{I}$&set of new inbound flights from a new demand point to the hub\\ $T$&set of aircraft types\\ $C_E$&set of pairs of existing consecutive flights of the same aircraft, $(i,j), \ i \in E, j \in E$ \\ $C_N$&set of pairs of new consecutive flights of the same aircraft, $(i,j), \ i \in N_{O}, j \in N_{I}$\\ $U_i$& set of flights that can follow flight $i$, $i \in E \cup N_{I}$\\ $G_i$&set of existing outbound flights which have passenger connections from flight $i \in E_{I}$ \\ \end{longtable}
\noindent \textbf{Parameters:} \begin{longtable}{p{.07\textwidth}p{.9\textwidth}p{.18\textwidth}} $\chi_i^t$& 1 if aircraft type $t \in{T}$ is originally assigned to existing flight $i \in{E}$, and 0 o.w.\\ $\left[\ell_{i}^{t},u_{i}^{t}\right] $&time window for the cruise time of flight $i \in{E \cup N}$ with aircraft type $t \in{T}$\\ $\left[d_i^{\ell},d_i^{u}\right]$&time window for the departure of flight $i \in{E \cup N}$ \\ $\eta_i$&non-cruise time of flight $i \in{E \cup N}$ \\ $\tau_{i}^t$&turnaround time needed to prepare aircraft type $t \in{T}$ for the connection after flight $i \in E \cup N$ \\ $\lambda_{ij}$&time needed for the connection of passengers from flight $i \in E_{I}$ to flight $j \in G_{i}$ \\ $\kappa^t$&number of seats of aircraft type $t \in{T}$\\ $\mu_i$&passenger demand of new flight $i \in{N}$ \\ $t(i)$&aircraft type assignment of existing flight $i \in E$ \\ $\pi_i$ &ticket price of new flight $i \in{N}$ \\ $\sigma_{i}$&cost of spilled passengers of new flight $i \in{N}$\\ $\phi$&crew and service cost for new flights\\
$a^{o}_i$&original arrival time of flight $i \in E$ \end{longtable}
\noindent \textbf{Decision variables:} \begin{longtable}{p{.07\textwidth}p{.9\textwidth}p{.48\textwidth}} $f_i^t$&cruise time of flight $i \in E \cup N$ with aircraft type $t \in T$\\ $d_i$& departure time of flight $i \in E \cup N $ \\ $a_i$&arrival time of flight $i \in E \cup N$ at its destination\\ $b_i$&deviation from the original arrival time of flight $i \in E$\\ $z_{i}^t$& 1 if aircraft type $t \in{T}$ is assigned to flight $i \in{N}$, and 0 o.w.\\ $y_{ij}$& 1 if flight $i \in E \cup N_{I}$ is followed by flight $j \in U_i$, and 0 o.w. \end{longtable}
In the new schedule, an existing flight $i \in E$ can be followed by a new outbound flight $j \in N_{O}$. Similarly, each new inbound flight $i \in N_{I}$ can be followed by an existing flight $j \in E$.
For each $i \in E \cup N, t \in T$, we redefine the fuel and CO$_2$ emission cost function as \begin{equation*} c_i^t(f_i^t) = \begin{cases} c_o \bigg (\alpha_{i}^{t} \frac{1}{f_i^t}+ \beta_{i}^{t} \frac{1}{{(f_i^t)}^2}+\gamma_i^t {(f_i^t)}^3+\nu_i^t {(f_i^t)}^2 \bigg) & \text{if } z_i^t=1 \\ 0 & \text{if }z_i^t=0, \end{cases} \end{equation*} so that if aircraft type $t$ is not assigned to flight $i$, then $c_i^t(f_i^t) = 0$.
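The assignment-dependent cost can be sketched as follows; the coefficients are hypothetical placeholders, and the point is only that the cost vanishes whenever $z_i^t=0$, mirroring the piecewise definition above.

```python
# Sketch of the assignment-dependent fuel/emission cost c_i^t(f_i^t):
# the cost is incurred only when aircraft type t is assigned (z = 1).
# Coefficients are HYPOTHETICAL placeholders, not BADA values.

def assigned_cruise_cost(f, z, c_o=1.83,
                         alpha=5.0e5, beta=1.0e6, gamma=1.0e-3, nu=1.0e-2):
    if z == 0:
        return 0.0  # aircraft type not assigned: no cost regardless of f
    return c_o * (alpha / f + beta / f**2 + gamma * f**3 + nu * f**2)

assert assigned_cruise_cost(120.0, z=0) == 0.0
assert assigned_cruise_cost(120.0, z=1) > 0.0
```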
Using the notation above, we now provide a mathematical model of the problem (CTC): \label{mathmodel}
\allowdisplaybreaks
\begin{align} \text{max} \ & \sum_{t \in T} \sum_{i \in N} \pi_i \min \left(\mu_i, \kappa^t \right) z_i^t -\sum_{i \in E} \left(c_i^{t(i)}(f_i^{t(i)}) - c_i^{t(i)}(u_i^{t(i)}) \right) \nonumber \\ & - \sum_{t \in T} \sum_{i \in N} c_i^{t}(f_{i}^{t}) - \sum_{t \in T} \sum_{i \in N} \sigma_{i} \max (0, \mu_i - \kappa^t) z_i^t -\sum_{i \in E} \rho b_i^{\zeta}- \phi \nonumber \end{align} \begin{align} \text{s.t.} & \sum_{i \in E_{I}} y_{in} \leq 1, && n \in N_O \label{eq:4}\\ & \sum_{i \in E_{O}} y_{ni} \leq 1, && n \in N_I \label{eq:5} \\ & \sum_{i \in E_{I}} y_{in} + \sum_{i \in E_{O}} y_{mi} \geq 1, && (n, m) \in C_N \label{eq:6} \\ & \sum_{n \in N_{O}} y_{in} \leq 1, && i \in E_{I} \label{eq:4a} \\ & \sum_{m \in N_{I}} y_{mi} \leq 1, && i \in E_{O} \label{eq:4b} \\ & y_{in} = y_{mj}, && (i, j) \in C_E, (n, m) \in C_N \label{eq:28}\\
& | z_n^t - \chi_i^t | \leq (1 - y_{in} ), && i \in E_{I}, n \in{N_O}, t \in T \label{eq:7} \\
& | z_{n}^t - \chi_i^t | \leq (1 - y_{ni} ), && i \in E_{O}, n \in N_I, t \in T \label{eq:9} \\ & z_{n}^t = z_m^t, && (n, m) \in C_N, t \in T \label{eq:8} \\ & \sum_{t \in T} z_{n}^t = 1, && n \in N_O \label{eq:27} \\ & d_i + f_i^{t(i)} + \eta_i = a_i, && i \in E \label{eq:1}\\ & d_n + \sum_{t \in T} f_n^t + \eta_n = a_n, && n \in N \label{eq:2}\\ & a_i + \lambda_{ij} \leq d_j, && i \in E_{I}, j \in G_{i} \label{eq:3}\\ & a_n + \sum_{t \in T} \tau_{n}^t z_n^t \leq d_{m}, && (n, m) \in C_N \label{eq:10}\\ & \text{If } y_{in} =1, \text{ then } a_i +\tau_{i}^{t(i)} \leq d_n, && i \in E_{I}, n \in N_O \label{eq:16} \\ & \text{If } y_{ni} =1, \text{ then } a_n + \tau_{n}^{t(i)} \leq d_i, && i \in E_{O}, n \in N_I \label{eq:15} \\ & \text{If }\sum_{n \in N_O} y_{in} =0, \text{ then } a_i + \tau_{i}^{t(i)} \leq d_{j}, && (i, j) \in C_E \label{eq:17}\\ &a_i - \left(a^{o}_i +15\right) \leq b_i, && i \in E \label{eq:deviation}\\ & \ell_i^{t(i)} \leq f_i^{t(i)} \leq u_i^{t(i)}, && i \in E \label{eq:18} \\ & \ell_i^{t} z_i^t \leq f_{i}^{t} \leq u_i^{t} z_i^t, && i \in N, t \in T \label{eq:19} \\ & d_i^{\ell} \leq d_i \leq d_i^{u}, && i \in E \cup N \label{eq:26} \\
& z_i^t \in \{0,1\}, && i \in N, t \in T \label{eq:23}\\ & y_{ij}\in \{0,1\}, && i \in E \cup N_{I}, j \in U_i \label{eq:24}\\ & b_i \geq 0, && i \in E \label{eq:deviation1} \end{align}
For given aircraft routes, we aim to generate a new flight schedule that introduces the new flights while maximizing the airline's profit. The first term of the objective is the revenue from ticket sales. The number of tickets sold is determined as the minimum of the passenger demand and the seat capacity of the assigned aircraft. The remaining terms in the objective function represent the operational costs. The second term is the incremental fuel burn cost associated with speeding up the aircraft of existing flights. Similarly, the third term represents the total fuel burn and carbon emission cost of the new flights. The fourth term is the cost of passengers spilled due to insufficient seat capacity of the aircraft assigned to new flights. The fifth term is the penalty for arrival tardiness of the existing flights. Finally, the sixth term is the total crew and service cost for the new flights, including landing fees, ground handling fees, insurance fees, onboard service costs, etc.
Constraint \eqref{eq:4} ensures that new outbound flight $n$ follows at most one existing flight $i$ arriving at the hub airport. Similarly, constraint \eqref{eq:5} guarantees that new inbound flight $n$ is followed by at most one existing flight $i$ departing from the hub airport. Constraint \eqref{eq:6} assures that new flight pair $(n, m)$ is covered by an aircraft route. Constraints \eqref{eq:4a}--\eqref{eq:4b} ensure that each existing flight is immediately followed by at most one new flight and immediately follows at most one new flight. Constraint \eqref{eq:28} keeps the sequence of existing flights as in the original schedule. If a new flight pair $(n,m)$ is operated between an existing flight pair $(i,j)$, then the model ensures that $y_{in}=y_{mj}=1$. Otherwise, $y_{in}=y_{mj}=0$ for $(i,j) \in C_E$.
Constraints \eqref{eq:7}--\eqref{eq:27} determine the aircraft type assignment to a new flight pair $(n, m)$. If $y_{in} = 1$ or $y_{ni} = 1$, then the aircraft of the corresponding existing flight $i \in E$ is assigned to the new flight pair $(n, m)$. In the schedule, we only allow new flight $n$ to depart from the hub and be immediately followed by its return flight $m$, so the same fleet assignment is made to flights $n$ and $m$ through constraint \eqref{eq:8}. Constraint \eqref{eq:27} assigns each new flight to exactly one fleet type.
Constraints \eqref{eq:1}--\eqref{eq:2} define the arrival times of the flights at their destination airports. Constraint \eqref{eq:3} ensures the minimum time requirement for passenger connections. Similarly, constraints \eqref{eq:10}--\eqref{eq:17} maintain the precedence relations among the flights assigned to the same aircraft in the new schedule. For a new flight pair $(n,m)$, constraint \eqref{eq:10} guarantees the minimum time requirement for the aircraft connection between flights $n$ and $m$. If a new flight $n$ follows an existing flight $i$, then constraint \eqref{eq:16} ensures that flight $n$ does not depart before the arrival time of flight $i$ plus its turnaround time. If an existing flight $i$ follows a new flight $n$, then constraint \eqref{eq:15} ensures that flight $i$ does not depart before the aircraft arriving with flight $n$ is ready. On the other hand, if no new flight is scheduled between existing flights $i$ and $j$, then constraint \eqref{eq:17} keeps the minimum aircraft turnaround time between flights $i$ and $j$ as in the original schedule. Constraint \eqref{eq:deviation} determines the arrival tardiness of the existing flights. We do not penalize the first 15 minutes of tardiness, as is common in practice.
Constraints \eqref{eq:18}--\eqref{eq:19} impose lower and upper bounds on the cruise time of each flight. Constraint \eqref{eq:26} defines the time intervals for the departures of both existing and new flights. For instance, due to the time sensitivity of business trips, one can allow departures in the morning only within certain time intervals already determined by the airline. The remaining constraints \eqref{eq:23}--\eqref{eq:deviation1} define the domains of the decision variables.
An important feature of the proposed formulation is that the problem is modeled without keeping track of individual aircraft. The decision variable $y$ indicates which flights are operated immediately before and after the new flights. Since we also keep the sequence of existing flights operated by the same aircraft as in the original schedule, the indices of the $y$ variables determine the route of each aircraft. In the model, the aircraft tails of the existing flights are given and not changed. This is a great advantage, since the proposed model determines the aircraft tail assignment with less computational effort. Note that the aircraft type information is also necessary for the cost calculations in the objective function.
The proposed formulation is a mixed-integer optimization model with nonlinear cost terms in the objective function and the logical constraints \eqref{eq:16}, \eqref{eq:15}, and \eqref{eq:17}. \REV{{In Section \ref{sec:reformulation}, we express the logical constraints mathematically using the big-M method and McCormick inequalities. These reformulations enable us to solve relatively large instances to optimality very efficiently.}}
\subsection{Formulation with cruise time controllability and aircraft swapping} \label{SecondFormulation} In this section, we additionally include an option of swapping aircraft in the model. Although the ability to swap aircraft provides greater flexibility to create time slots for new flights with an increased profit, several challenges arise. First, additional binary aircraft assignment decisions for the existing flights are required. Second, there exists a trade-off between the fuel burn and the number of spilled passengers of existing flights, since swapping the aircraft of a flight with a more fuel-efficient but smaller aircraft not only decreases the fuel burn but may also spill some of the passengers. Third, the cruise time decisions of the existing flights depend on the aircraft assignments, since fuel burn is a function of aircraft type.
In order to include an option of aircraft swapping, we first define additional sets and parameters, and redefine the decision variables.\\
\noindent \textbf{Sets \& Parameters:} \begin{longtable}{p{.07\textwidth}p{.9\textwidth}p{.48\textwidth}} $R$& set of aircraft routes in the original schedule \\ $E_r$& set of flights in each aircraft route $r \in R$ \\ $S(i)$& set of possible flights whose aircraft can be swapped with the aircraft of flight $i \in E$ \\ $p(i)$&predecessor of flight $i \in E$\\ $\sigma_{i}$&cost of spilled passengers of flight $i \in{E \cup N}$ \\ $\psi$ &cost of swapping an aircraft \end{longtable}
\noindent \textbf{Decision variables:} \begin{longtable}{p{.07\textwidth}p{.9\textwidth}p{.48\textwidth}} $z_i^t$& 1 if aircraft type $t \in{T}$ is assigned to flight $i \in{E \cup N}$, and 0 o.w.\\ $s_{ij}$& 1 if the aircraft of flight $i \in E$ and flight $j \in S(i)$ are swapped at their destination and 0 o.w. \end{longtable}
Then, the mathematical formulation that includes the option of the aircraft swapping (CTC-AS) is stated as follows: \allowdisplaybreaks \ignore{ \begin{align}\text{max} \quad & \sum_{t \in T} \sum_{i \in N} \left( min \left(\mu_i, \kappa^t \right) \right) z_{i}^t p_i \tag{CTC-AS}\nonumber \\
& -\sum_{i \in E} \left( \sum_{t \in T} c_i^{t}(f_i^t) - c_i^{t(i)}(u_i^{t(i)}) \right) \nonumber - \sum_{t \in T} \sum_{i \in N} c_i^{t}(f_i^t) \nonumber \\ & - \sum_{t \in T} \sum_{i \in E \cup N} \left( max \left(0, \mu_i - \kappa^t \right) \right) z_{i}^t \sigma_{i} - \sum_{i \in E} \sum_{j \in S(i)} \left(\psi s_{ij}\right)/2 \label{eq:00.1} \\
\text{subject to} \nonumber \end{align} } \begin{align} \text{max} & \sum_{t \in T} \sum_{i \in N} \pi_i \text{ min} \left(\mu_i, \kappa^t \right) z_{i}^t \nonumber
-\sum_{i \in E} \left( \sum_{t \in T} c_i^{t}(f_i^t) - c_i^{t(i)}(u_i^{t(i)}) \right) \\ &\nonumber - \sum_{t \in T} \sum_{i \in N} c_i^{t}(f_i^t) \nonumber
- \sum_{t \in T} \sum_{i \in E \cup N} \sigma_{i} \text{ max} \left(0, \mu_i - \kappa^t \right) z_{i}^t \nonumber \\
& -\sum_{i \in E} \rho b_i^{\zeta} - \sum_{i \in E} \sum_{j \in S(i)} \left(\psi/2\right) s_{ij} - \phi \nonumber
\end{align} \begin{align} & \text{s.t.} \quad \text{Constraints} \quad \eqref{eq:4}-\eqref{eq:4b} && \nonumber \\
& | z_n^t - z_i^t | \leq (1 - y_{in} ), && i \in E_{I}, n \in{N_{O}}, t \in T \label{eq:5.1} \\
& | z_{n}^t - z_i^t | \leq (1 - y_{ni} ), && i \in E_{O}, n \in N_{I}, t \in T \label{eq:6.1} \\ & z_{n}^t = z_{m}^t, && (n, m) \in C_N, t \in T \label{eq:7.1} \\ & \sum_{t \in T} z_{n}^t = 1, && n \in N_{O} \label{eq:8.1} \\
& | z_{p(i)}^t - z_i^t | \leq \sum_{j \in S(i)} s_{ij}, && t \in T, i \in E \label{eq:9.1} \\ & z_j^{t(p(i))} \geq s_{ij}, && i \in E, j \in S(i) \label{eq:11.1} \\ & \sum_{i \in E_r} \sum_{j \in S(i)} s_{ij} \leq 1, && r \in R \label{eq:12.1} \\ & s_{ij} = s_{ji}, && i \in E, j \in S(i) \label{eq:13.1} \\
& | y_{in} - y_{mj} | \leq \sum_{k \in S(j)} s_{jk}, && (i, j) \in C_E, (n, m) \in C_N \label{eq:14.1}\\
& |y_{in}- y_{mk} | \leq 1 - s_{jk}, && k \in S(j), (i, j) \in C_E, \nonumber \\ & && (n, m) \in C_N \label{eq:15.1}\\ & d_i + \sum_{t \in T} f_i^t + \eta_i = a_i, && i \in E \cup N \label{eq:4.1}\\ & a_n + \sum_{t \in T} \tau_{n}^t z_n^t \leq d_{m}, && (n, m) \in C_N \label{eq:17.1}\\ & \text{If } y_{in} =1, \text{then } a_i + \sum_{t \in T} \tau_{i}^t z_i^t \leq d_n, && i \in E_{I}, n \in N_{O} \label{eq:16.1} \\ & \text{If } y_{ni} =1, \text{then } a_n \! + \!\sum_{t \in T} \tau_{n}^t z_{n}^t \leq d_i, && i \in E_{O}, n \in N_{I} \label{eq:18.1} \\ & \text{If } \sum_{k \in S(j)} \! s_{jk} = 0, \text{then } a_i \! + \! \sum_{t \in T} \tau_{i}^t z_i^t \leq d_{j}, && (i, j) \in C_E \label{eq:19.1}\\
& \text{If } s_{jk} \! = 1, \text{then } a_i \! + \! \sum_{t \in T} \tau_{i}^t z_i^t \leq d_{k}, && k \in S(j), (i, j) \in C_E \label{eq:20.1} \\
& a_i + \lambda_{ij} \leq d_j, && i \in E_{I}, j \in G_{i} \label{eq:3.2}\\ &a_i - \left(a^{o}_i +15\right) \leq b_i && i \in E \label{eq:newdeviation}\\ & \ell_i^{t} z_i^t \leq f_i^t \leq u_i^{t} z_i^t, && i \in E \cup N, t \in T \label{eq:21.1} \\ & d_i^{\ell} \leq d_i \leq d_i^{u}, && i \in E \cup N \label{eq:22.1} \\
& z_i^t \in \{0,1\}, && i \in E \cup N, t \in T \label{eq:24.1}\\ & y_{in}\in \{0,1\}, && i \in E \cup N_{I}, n \in U_i \label{eq:25.1}\\ & s_{ij} \in \{0,1\}, && i \in E, j \in S(i) \label{eq:27.1}\\ & b_i \geq 0, && i \in E \label{eq:newdeviation1} \end{align}
The objective function of the second model is slightly different from the objective of the first model. If the aircraft of a flight is swapped with a smaller aircraft, then some of the passengers of the subsequent flights might be spilled. Therefore, we include an additional cost term for spilled passengers of the existing flights. We also add a new swap cost term to cover the cost of changes caused by swapping the aircraft. The rest of the objective terms are the same as in the first model. Despite the additional cost of spilled passengers and swapped aircraft, CTC-AS introduces the potential for greater profit by reducing the fuel burn.
We use the same constraints \eqref{eq:4}--\eqref{eq:4b} of the first formulation to accommodate new flight pairs in an aircraft route. However, the aircraft type assignment constraints \eqref{eq:5.1}--\eqref{eq:8.1} are slightly different.
If a new flight $n$ follows an existing flight $i$, then the aircraft type assignments of flight $i$ and its immediate successor $n$ will be the same per constraint \eqref{eq:5.1}. Similarly, if a new flight $n$ is followed by an existing flight $i$, then constraint \eqref{eq:6.1} assigns the same aircraft type to flights $n$ and $i$. Constraint \eqref{eq:7.1} ensures that the aircraft type assignments of consecutive new flights are the same. Constraint \eqref{eq:8.1} assigns exactly one aircraft type to each new flight pair.
Constraints \eqref{eq:9.1}--\eqref{eq:11.1} relate the aircraft swap decisions to the assignment decisions. If the aircraft of flight $i$ is not swapped with another one before its departure, then the aircraft type assignments of flight $i$ and its predecessor flight $p(i)$ will be the same. In other words, if there is no swap before the departure of flight $i$, then $s_{ij}=0$ for all flights $j \in S(i)$, and therefore $z_i^t=z_{p(i)}^t$ for each aircraft type $t$ per constraint \eqref{eq:9.1}. Otherwise, i.e., if $s_{ij}=1$, then the aircraft type assignments of flight $j$ and the predecessor flight $p(i)$ of flight $i$ will be the same. That is, flight $j$ is taken over by the aircraft of the predecessor of flight $i$ in the original schedule. The aircraft type assignment of flight $j$ is modeled in constraint \eqref{eq:11.1}. Constraint \eqref{eq:12.1} limits the number of swaps on each aircraft route to at most one. Constraint \eqref{eq:13.1} guarantees the symmetry of the swap decisions between flights.
If an aircraft is not swapped with another, we keep the same sequence of flights in its route as in the original schedule. If the aircraft of flight pair $(i,j)$ is not swapped, a new flight pair $(n,m)$ may be accommodated between flights $i$ and $j$. That is, if the aircraft of flight $j$ is not swapped before its departure, i.e., $\sum_{k \in S(j)} s_{jk}=0$, then constraint \eqref{eq:14.1} ensures that $y_{in}=y_{mj}$ for the new flight pair $(n,m)$, as in constraint \eqref{eq:28} of the first formulation. Otherwise, if the aircraft of flight $j$ is swapped with the aircraft of some flight $k$, i.e., $s_{jk}=1$, then constraint \eqref{eq:15.1} guarantees that $y_{in}=y_{mk}$ for the new flight pair $(n,m)$.
Constraints \eqref{eq:4.1} define the arrival times of flights at their destination airports. To model flight departure times and cruise times, we need to ensure that the departure of the successor of flight $i$ is later than the arrival time of flight $i$ plus its aircraft turn time. We first identify the successor of flight $i$ in the new schedule. There are three cases for the precedence relations of flight $i$. Case 1: a new flight $n$ follows flight $i$, i.e., $y_{in}=1$, in which case constraint \eqref{eq:16.1} enables the incoming aircraft of flight $i$ to catch new flight $n$. Case 2: no swap is made after flight $i$ (i.e., $\sum_{k \in S(j)} s_{jk}=0$), in which case flight $i$ is followed by either a new flight or its original successor. If it is followed by a new outbound flight, then the inbound flight of the new trip will precede the successor of flight $i$ as well. Therefore, in both situations, the departure time of the successor of flight $i$ will be later than the arrival time of flight $i$ plus its turnaround time, as guaranteed by constraint \eqref{eq:19.1}. Case 3: the aircraft of flight $i$ is swapped with the aircraft of flight $k$, in which case flight $k$ follows flight $i$ in the new schedule. Therefore, constraint \eqref{eq:20.1} guarantees the minimum aircraft turn time between flights $i$ and $k$. We ensure the minimum time requirements for the aircraft connections of a new flight pair with constraint \eqref{eq:17.1}, as well as the aircraft connections between the new flight and its successor with constraint \eqref{eq:18.1}. Similarly, we guarantee the minimum connection time for passengers with constraints \eqref{eq:3.2}. Constraints \eqref{eq:newdeviation} determine the deviation from the original arrival times for existing flights.
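The three precedence cases above can be illustrated with a small sketch; the flight identifiers and dictionaries below are toy data for illustration only, not part of the model.

```python
# A toy helper illustrating the three precedence cases for an existing
# flight i in the new CTC-AS schedule (all flight names are hypothetical).
def successor(i, new_after, swapped_with, orig_succ):
    """Return the flight whose departure is tied to i's arrival plus turn time."""
    if i in new_after:               # Case 1: a new flight n follows i (y_in = 1)
        return new_after[i]
    j = orig_succ[i]                 # successor of i in the original schedule
    if j in swapped_with:            # Case 3: aircraft of j is swapped with k's
        return swapped_with[j]       # so flight k now follows i
    return j                         # Case 2: original sequence is kept

new_after = {"F101": "N1"}           # new flight N1 inserted after F101
swapped_with = {"F205": "F307"}      # aircraft of F205 swapped with F307's
orig_succ = {"F101": "F102", "F204": "F205", "F301": "F306"}

assert successor("F101", new_after, swapped_with, orig_succ) == "N1"
assert successor("F204", new_after, swapped_with, orig_succ) == "F307"
assert successor("F301", new_after, swapped_with, orig_succ) == "F306"
```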
The mathematical formulation for the CTC-AS approach also provides the aircraft tail assignment. Following the indices of the decisions $y$ and $x$ determines the sequence of flights operated by the same aircraft. Then, the aircraft tail information of the existing flights helps to identify the tail assignment for each route developed. \REV{{The logical constraints \eqref{eq:16.1}--\eqref{eq:20.1} are also represented using the Big-M and McCormick inequalities in Section \ref{sec:reformulation}.}}
The mathematical formulations above include nonlinear (convex) fuel and CO$_2$ emission cost and penalty cost terms in the objective function. To efficiently handle the nonlinearity, we use convexification results from mixed-integer conic quadratic optimization. To simplify the presentation, we drop the indices of the variables and parameters as follows:
\begin{equation*} c(f) = \begin{cases} c_o(\alpha \frac{1}{f}+\beta \frac{1}{f^2}+\gamma f^3+\nu f^2) & \text{if } z=1 \\ 0 & \text{if }z=0. \end{cases} \end{equation*} The function $c\left(f\right)$ with the indicator variable $z$ is discontinuous and its epigraph $E_F=\left\{(z,f,t)\in{\{0,1\} \times \mathbb{R}^2_+} : c(f) \leq t, \ell z \le f \le u z \right\}$ is nonconvex. The next proposition describes the convex hull of $E_F$. The convexification of convex functions with indicator variables is discussed in detail in Akt{\"u}rk et al. \cite{akturk2009strong} and G{\"u}nl{\"u}k and Linderoth \cite{gunluk}.
\begin{proposition} \label{conicfuel} {[\c{S}afak et al. \cite{Safak}]} The convex hull of the set $E_F$ can be expressed as \begin{align} t & \geq c_o(\alpha p+\beta q +\gamma r +\nu h )\\ {z}^2 & \leq p f \label{eq:con1}\\ {z}^4 & \leq {f}^2 q z \label{eq:con2} \\ {f}^4 & \leq {z}^2 r f \label{eq:con3}\\ {f}^2 & \leq h z \label{eq:con4} \end{align}
Moreover, each inequality \eqref{eq:con1}--\eqref{eq:con4} can be represented by conic quadratic inequalities. \end{proposition}
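As a sanity check of Proposition \ref{conicfuel}, the following sketch (with illustrative, hypothetical cost coefficients) verifies that at $z=1$ the auxiliary variables can sit at their bounds implied by \eqref{eq:con1}--\eqref{eq:con4}, namely $p = 1/f$, $q = 1/f^2$, $r = f^3$, $h = f^2$, so that the linearized objective term recovers the original cost $c(f)$ exactly.

```python
# Numerical sanity check of the convex-hull description (illustrative
# coefficient values, not airline data). At z = 1, the inequalities
# z^2 <= p*f, z^4 <= f^2*q*z, f^4 <= z^2*r*f, f^2 <= h*z force
# p >= 1/f, q >= 1/f^2, r >= f^3, h >= f^2.

def fuel_cost(f, c_o=1.0, alpha=2.0, beta=3.0, gamma=0.5, nu=0.1):
    """Original nonlinear fuel/emission cost for cruise time f > 0."""
    return c_o * (alpha / f + beta / f**2 + gamma * f**3 + nu * f**2)

def hull_lower_bound(f, z=1, c_o=1.0, alpha=2.0, beta=3.0, gamma=0.5, nu=0.1):
    """Smallest t feasible for the hull inequalities at a fixed (z, f)."""
    if z == 0:
        return 0.0
    # each auxiliary variable sits exactly at its bound
    p, q, r, h = z**2 / f, z**3 / f**2, f**3 / z**2, f**2 / z
    return c_o * (alpha * p + beta * q + gamma * r + nu * h)

for f in (0.5, 1.0, 2.0, 3.7):
    assert abs(hull_lower_bound(f) - fuel_cost(f)) < 1e-9
```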
Proposition \ref{conicdelay} below shows that the nonlinear penalty function is also conic quadratic representable.
\begin{proposition} The epigraph of the penalty function, $E_P=\left\{(b,g) \in{\mathbb{R}^2_+} : b^{1.5} \leq g \right\}$, is conic quadratic representable. \label{conicdelay} \end{proposition}
\begin{proof} Since $b^{1.5}$ is a convex function for $b \geq 0$, its epigraph $E_P$ is a convex set. Let us restate $b^{1.5} \leq g$ as
\begin{equation} b^{4} \leq g^{2} \, b \cdot 1. \label{con1} \end{equation}
\noindent Observe that \eqref{con1} can be rewritten as the two hyperbolic inequalities
\begin{eqnarray} b^2 \leq x g \label{eq:hyp1} \\ x^2 \leq b \cdot 1 \label{eq:hyp2} \end{eqnarray}
\noindent where $x \geq 0$ is an auxiliary variable. According to Ben-Tal and Nemirovski \cite{nemirovski}, hyperbolic inequality \eqref{eq:hyp1} can be represented by the conic quadratic inequality
\begin{equation}
|| (2b, x-g) || \leq x+g. \nonumber \end{equation}
\noindent Similarly, hyperbolic inequality \eqref{eq:hyp2} can be represented by the conic quadratic inequality $\left\| (2x, b-1) \right\| \leq b+1$. This concludes the proof.
\end{proof}
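The decomposition used in the proof can be checked numerically; the grid of points and the choice $x=\sqrt{b}$ below are illustrative.

```python
# Grid check of the epigraph decomposition b^{1.5} <= g  <=>
# exists x >= 0 with b^2 <= x*g and x^2 <= b (take x = sqrt(b)),
# and of the conic form of each hyperbolic inequality.
import math

TOL = 1e-9

def soc_holds(w, u, v):
    """||(2w, u - v)|| <= u + v, the conic form of w^2 <= u*v with u, v >= 0."""
    return math.hypot(2 * w, u - v) <= u + v + TOL

for b in (0.0, 0.3, 1.0, 2.5, 4.0):
    for g in (0.0, 0.5, 1.0, 3.0, 9.0):
        epigraph = b ** 1.5 <= g + TOL
        x = math.sqrt(b)                     # auxiliary variable of the proof
        hyperbolic = (b * b <= x * g + TOL) and (x * x <= b + TOL)
        # the two hyperbolic inequalities certify membership exactly
        assert epigraph == hyperbolic
        # and each hyperbolic inequality matches its conic representation
        assert (b * b <= x * g + TOL) == soc_holds(b, x, g)
        assert (x * x <= b + TOL) == soc_holds(x, b, 1.0)
```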
Moreover, the mathematical formulations involve logical constraints. In the next section, we first replace the logical constraints by Big-M constraints. Then, we strengthen the formulation by replacing the logical constraints with stronger McCormick inequalities.
\section{Stronger Reformulations} \label{sec:reformulation}
The nonlinear fuel and emission costs, binary aircraft assignment and swapping decisions, and the logical ``if-then" constraints in the models of the preceding section increase the computational burden of solving the problem significantly. In order to solve the problem with less computational effort, in this section we give alternative, stronger reformulations. We first introduce a simple linearization of the logical constraints using the well-known Big-M method with carefully computed constants. Then we improve this formulation using McCormick estimators.
\subsection{Reformulations with Big-M constraints} \label{Sec:4}
Formulations that state logical constraints as conditional ``if-then" statements can be numerically more robust than Big-M formulations when the latter use large constants to express the constraints linearly. Solvers may exploit the explicit conditional statements to improve the preprocessing and branching algorithms. Details on logical constraints can be found in \cite{cplex}. However, for our formulations, we are able to carefully tighten the Big-M constants using the implied upper and lower bounds on the variables, leading to more effective formulations. In the following, we present linear reformulations of the logical constraints of CTC and CTC-AS with the corresponding Big-M constraints.
The formulation CTC involves logical constraints \eqref{eq:16}, \eqref{eq:15}, and \eqref{eq:17}. We introduce below three linear constraints to replace these logical constraints, respectively.
\begin{proposition}
For $i \in E_{I}$, $n \in N_O$, inequality \allowdisplaybreaks \begin{eqnarray} d_n - d_i - f_i^{t(i)} \geq \tau_{i}^{t(i)} + \eta_i - \delta^1_{in} \left(1 - y_{in} \right), \label{eq:bigm1} \end{eqnarray} where $\delta^1_{in}:= \max \left(d_i^u + u_i^{t(i)} + \tau_{i}^{t(i)} + \eta_i - d_n^\ell,\, 0\right)$, is equivalent to \eqref{eq:16}. \end{proposition}
\begin{proof} For any $i \in E_{I}$, $n \in N_O$, if $ y_{in} =1$, then constraint \eqref{eq:bigm1} is the same as \eqref{eq:16}. Otherwise, \eqref{eq:bigm1} reduces to the redundant inequality $d_i^u - d_i + u_i^{t(i)} - f_i^{t(i)} + d_n - d_n^\ell \geq 0$, since $u_i^{t(i)}$ is an upper bound on $f_i^{t(i)}$, $d_n^\ell$ is a lower bound on $d_n$, and $d_i^u$ is an upper bound on $d_i$. \end{proof}
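The role of the constant $\delta^1_{in}$ can be illustrated numerically; the flight-time bounds used below are hypothetical, not airline data. Since the constraint is linear, checking the corners of the variable box suffices for the redundancy claim.

```python
# Illustrative check that delta^1 makes the Big-M constraint redundant
# when y_in = 0 (all bounds below are hypothetical, in minutes).
import itertools

d_i_l, d_i_u = 480.0, 540.0      # departure window of existing flight i
d_n_l, d_n_u = 600.0, 660.0      # departure window of new flight n
l_i, u_i = 102.0, 120.0          # cruise time bounds of flight i
tau_i, eta_i = 35.0, 30.0        # turnaround and non-cruise time of flight i

delta1 = max(d_i_u + u_i + tau_i + eta_i - d_n_l, 0.0)

def bigm_ok(d_i, d_n, f_i, y):
    """Big-M constraint: d_n - d_i - f_i >= tau_i + eta_i - delta1 * (1 - y)."""
    return d_n - d_i - f_i >= tau_i + eta_i - delta1 * (1 - y) - 1e-9

# With y = 0 the constraint holds at every corner of the box (redundant).
for d_i, d_n, f_i in itertools.product(
        (d_i_l, d_i_u), (d_n_l, d_n_u), (l_i, u_i)):
    assert bigm_ok(d_i, d_n, f_i, y=0)

# With y = 1 it is exactly the aircraft-connection requirement.
assert bigm_ok(480.0, 660.0, 102.0, y=1)       # 78 min slack >= 65 needed
assert not bigm_ok(540.0, 600.0, 120.0, y=1)   # connection impossible
```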
The rest of the inequalities are stated without proof as they are similar to the one above. \begin{proposition} For all $ i \in E_{O}, n \in N_I$, inequality \allowdisplaybreaks \begin{eqnarray} d_i - d_n - \sum_{t \in T} f_{n}^{t} \geq \tau_{n}^{t(i)} + \eta_{n} - \delta^2_{ni} \left(1 - y_{ni} \right), \label{eq:bigm2} \end{eqnarray} where $\delta^2_{ni} := \max \left(d_n^u + \max_{t \in T} u_n^{t} + \tau_{n}^{t(i)} + \eta_{n} - d_i^\ell,0\right)$, is equivalent to \eqref{eq:15}.
\end{proposition}
\begin{proposition}
For $(i, j) \in C_E$, inequality \allowdisplaybreaks \begin{eqnarray} d_{j} - d_i - f_i^{t(i)} \geq \tau_{i}^{t(i)}+ \eta_i - \delta^3_{ij} \left( \sum_{n \in N_O} y_{in} \right), \label{eq:bigm3} \end{eqnarray} where $\delta^3_{ij} := \max\left(d_i^u + u_i^{t(i)} + \tau_{i}^{t(i)} + \eta_i - d_j^\ell,0 \right)$, is equivalent to \eqref{eq:17}. \end{proposition}
The logical constraints \eqref{eq:16.1}--\eqref{eq:20.1} of formulation CTC-AS are replaced with the following linear inequalities, respectively.
\allowdisplaybreaks
\begin{proposition} For $i \in E_{I}, n \in N_O$, inequality \begin{eqnarray} d_n - d_i - \sum_{t \in T} f_i^t \geq \sum_{t \in T} \tau_{i}^t \ z_i^t + \eta_i - \delta^4_{in} \left(1 - y_{in} \right), \label{eq:bigm4} \end{eqnarray} where $\delta^4_{in}:= \max\left( d_i^u + \max_{t \in T }u_i^{t} + \max_{t \in T} \tau_{i}^t + \eta_i -d_n^\ell, 0 \right)$, is equivalent to \eqref{eq:16.1}. \end{proposition}
\begin{proposition} For $i \in E_{O}, n \in N_I$, inequality
\begin{eqnarray} d_i - d_n - \sum_{t \in T} f_{n}^t \geq \sum_{t \in T} \tau_{n}^t z_{n}^t + \eta_{n} - \delta^5_{ni} \left(1 - y_{ni} \right), \label{eq:26.1} \end{eqnarray} where $\delta^5_{ni}:=\max\left(d_n^u + \max_{t \in T }u_n^{t} + \max_{t \in T } \tau_{n}^t + \eta_{n} - d_i^\ell, 0 \right)$, is equivalent to \eqref{eq:18.1}. \end{proposition}
\ignore{ We restate the constraints \ref{eq:19.1} and \ref{eq:20.1} in Proposition \ref{proof} and Proposition \ref{proof2}, respectively, then we linearize them using the Big-M method.
\begin{proposition}
\label{proof}
For $(i, j) \in C_E$,
inequalities \eqref{eq:19.1} can be restated as
\begin{eqnarray}
\text{If } \sum_{k \in S(j)} \! s_{jk} = 0, \text{then } d_i \! + \! \sum_{t \in T} f_i^t \! + \! \eta_i \! + \! \sum_{t \in T} \tau_{i}^t z_i^t \leq d_{j}. \label{eq:nonlinear}
\end{eqnarray} \end{proposition} \begin{proof} For any $(i,j) \in C_E$, if $\sum_{k \in S(j)} s_{jk}=0$, there are two cases: if $\sum_{n \in N_{O}} y_{in} =0$, then constraint \eqref{eq:nonlinear} is same as the constraint \eqref{eq:19.1}; otherwise, if $\sum_{n \in N_{O}} y_{in} =1$, then there exists $n^* \in N_O$ such that $y_{in^*}=1$. For feasibility, from constraint \eqref{eq:16.1}, we need $d_i + \sum_{t \in T} f_i^t + \eta_i + \sum_{t \in T} \tau_{i}^t z_i^t \leq d_{n^*}$. Similarly, from constraint \eqref{eq:17.1}, we need $d_{n^*}+ \sum_{t \in T} f_{n^*}^t + \eta_{n^*} + \sum_{t \in T} \tau_{n^*}^t z_{n*}^t \leq d_{m^*}$ for consecutive new flights $(n^*, m^*)$. Since $\sum_{k \in S(j)} s_{jk} = 0$, constraint \eqref{eq:14.1} implies that $y_{in^*}=y_{m^*j}=1$. Therefore, from constraint \eqref{eq:18.1}, we need $d_{m^*} + \sum_{t \in T} f_{m^*}^t + \eta_{m^*} +\sum_{t \in T} \tau_{m^*}^t z_{m}^t \leq d_j$, which implies that $d_i + \sum_{t \in T} f_i^t + \eta_i + \sum_{t \in T} \tau_{i}^t z_i^t \leq d_{j}$.
\end{proof}
The following restatement is provided without proof as it is similar to the one above.
\begin{proposition} \label{proof2} For $k \in S(j), (i, j) \in C_E$, inequalities \eqref{eq:20.1} can be restated as \begin{eqnarray} \text{If } s_{jk} = 1, \text{then } d_i \! + \! \sum_{t \in T} f_i^t \! + \! \eta_i \! + \! \sum_{t \in T} \tau_{i}^t z_i^t \leq d_{k}. \label{eq:nonlinear2} \end{eqnarray} \end{proposition}
Using the Proposition \ref{proof} and Proposition \ref{proof2}, the logical constraints \eqref{eq:19.1} and \eqref{eq:20.1} of formulation CTC-AS are replaced with the following linear inequalities, respectively. }
\begin{proposition} For $(i, j) \in C_E$, inequality \begin{eqnarray} d_{j} - d_i - \sum_{t \in T} f_i^t \geq \sum_{t \in T} \tau_{i}^t z_i^t + \eta_i - \delta^6_{ij} \left( \sum_{k \in S(j)} s_{jk} \right), \label{eq:27.11} \end{eqnarray} where $ \delta^6_{ij}:= \max\left(d_i^u + \max_{t \in T }u_i^{t} + \max_{t \in T } \tau_{i}^t + \eta_i - d_j^\ell, 0 \right)$, is equivalent to \eqref{eq:19.1}. \end{proposition}
\begin{proposition} For $(i, j) \in C_E, k \in S(j)$, inequality \begin{eqnarray} d_k - d_i - \sum_{t \in T} f_i^t \geq \sum_{t \in T} \tau_{i}^t z_i^t + \eta_i - \delta^7_{ik} \left(1 - s_{jk} \right), \label{eq:28.1} \end{eqnarray} where $\delta^7_{ik}:= \max\left(d_i^u + \max_{t \in T }u_i^{t} + \max_{t \in T } \tau_{i}^t + \eta_i - d_k^\ell, 0 \right)$, is equivalent to \eqref{eq:20.1}. \end{proposition}
\REV{{We provide the reformulation with Big-M and the hyperbolic inequalities, which can be written as conic quadratic inequalities, below.}}
\allowdisplaybreaks
\begin{eqnarray} \text{max}&& \sum_{t \in T} \sum_{i \in N} \pi_i \text{ min} \left(\mu_i, \kappa^t \right) z_i^t -\sum_{i \in E} \rho g_i - \phi \nonumber \\%\tag{$(MICQ+MC)$}\nonumber \\ && -\sum_{i \in E} \left( \sum_{t \in T} c_o \left(\alpha_i^t p_i^t + \beta_i^t q_i^t + \gamma_i^t r_i^t + \nu_i^t h_i^t\right) - c_i^{t(i)}(u_i^{t(i)}) \right) \nonumber \\ && - \sum_{t \in T} \sum_{i \in N} c_o \left(\alpha_i^t p_i^t + \beta_i^t q_i^t + \gamma_i^t r_i^t + \nu_i^t h_i^t\right) \nonumber \\ && - \sum_{t \in T} \sum_{i \in E \cup N} \sigma_{i} \text{ max} \left(0, \mu_i - \kappa^t \right) z_i^t - \sum_{i \in E} \sum_{j \in S(i)} \left(\psi/2\right) s_{ij} \nonumber \end{eqnarray} \begin{align} & \text{s.t.} \quad \text{Const.} \quad \eqref{eq:4}-\eqref{eq:4b}, \eqref{eq:5.1}-\eqref{eq:17.1}, \eqref{eq:3.2}-\eqref{eq:newdeviation1} && \nonumber \\ & {\left(z_i^t\right)}^2 \leq p_i^t f_i^t && i \in {E \cup N}, t \in T \label{eq:34}\\ &{\left(z_i^t\right)}^4 \leq {\left(f_i^t\right)}^2 q_i^t z_{i}^t && i \in {E \cup N}, t \in T \label{eq:35}\\ &{\left(f_i^t\right)}^4 \leq {\left(z_i^t\right)}^2 r_i^t f_i^t && i \in {E \cup N}, t \in T \label{eq:36}\\ &{\left(f_i^t\right)}^2 \leq h_i^t z_i^t && i \in {E \cup N}, t \in T \label{eq:37}\\ &b_{i}^{4} \leq g_{i}^{2} b_{i} && i \in E \label{eq:conicdelay}\\ &d_n - d_i - \sum_{t \in T} f_i^t \geq \sum_{t \in T} \tau_{i}^t \ z_i^t + \eta_i - \delta^4_{in} \left(1 - y_{in} \right) && i \in E_{I}, n \in N_O \label{eq:500}\\ &d_i - d_n - \sum_{t \in T} f_{n}^t \geq \sum_{t \in T} \tau_{n}^t z_{n}^t + \eta_{n} - \delta^5_{ni} \left(1 - y_{ni} \right) && i \in E_{O}, n \in N_I \label{eq:501}\\ &d_{j} - d_i - \sum_{t \in T} f_i^t \geq \sum_{t \in T} \tau_{i}^t z_i^t + \eta_i - \delta^6_{ij} ( \sum_{k \in S(j)} s_{jk} ) && (i, j) \in C_E \label{eq:502}\\ &d_k - d_i - \sum_{t \in T} f_i^t \geq \sum_{t \in T} \tau_{i}^t z_i^t + \eta_i - \delta^7_{ik} \left(1 - s_{jk} \right) &&(i, j) \in C_E, k \in S(j) \label{eq:503} \end{align}
\REV{{The objective function is slightly different from the original objective function of the proposed model for CTC-AS in Section 3.2. The original objective is represented by the new objective and constraints \eqref{eq:34}--\eqref{eq:conicdelay}, which can be restated as conic quadratic inequalities. Moreover, the logical constraints \eqref{eq:16.1}--\eqref{eq:20.1} are represented using the Big-M inequalities \eqref{eq:500}--\eqref{eq:503}. The remaining constraints are the same as the original constraints of the proposed model for CTC-AS.}}
As an alternative to the Big-M method, Mannino et al. \cite{mannino2018hotspot} use the Path\&Cycle algorithm. In their experiments, the Big-M formulation is slowed down due to its weak optimality bounds compared to the Path\&Cycle formulation. As another alternative to the Big-M formulation, we introduce stronger McCormick estimators in the next section.
\subsection{Improved reformulation with McCormick inequalities} \label{McCormick} In this section, we improve and strengthen the formulations by using McCormick estimators to represent the logical constraints.
We will demonstrate the construction of McCormick inequalities only for constraints \eqref{eq:19.1}.
Let us define auxiliary variables $v_i = d_i + \sum_{t \in T} f_i^t + \eta_i + \sum_{t \in T} \tau_{i}^t z_i^t$. We can state inequality \eqref{eq:19.1} as \[ d_j \ge v_i \bigg (1 - \sum_{k \in S(j)} s_{jk} \bigg), \ \ (i,j) \in C_E. \]
The problem formulation can be strengthened using linear inequalities based on McCormick estimators for the bilinear terms $w_{ij} = v_i (1 - \sum_{k \in S(j)} s_{jk} ),$ $(i,j) \in C_E.$ To do so, note the valid upper and lower bounds on $v_i$: \begin{eqnarray} v_i^{u} := d_i^{u} + \max_{t \in T} u_i^{t} + \max_{t \in T} \tau_i^{t} + \eta_i \geq v_i \nonumber\\ v_i^{\ell} := d_i^{\ell} + \min_{t \in T} \ell_i^{t} + \min_{t \in T} \tau_i^{t} + \eta_i \leq v_i. \nonumber \end{eqnarray}
\noindent Using these bounds on $v_i$, $i \in E$, the following McCormick inequalities \cite{McCormick} are valid for each bilinear term $\omega_{ij} = v_i \left(1 - \sum_{k \in S(j)} s_{jk} \right):$ \begin{eqnarray} &&\omega_{ij} \leq v_i^{u} \bigg (1 - \sum_{k \in S(j)} s_{jk} \bigg)\quad\quad\qquad\qquad(i,j) \in C_E \label{MC1.1}\\ &&\omega_{ij} \geq v_i^{\ell} \bigg (1 - \sum_{k \in S(j)} s_{jk} \bigg) \quad\quad\qquad\qquad\, (i,j) \in C_E \\ &&\omega_{ij} \leq v_i - v_i^{\ell} \sum_{k \in S(j)} s_{jk} \quad\quad\quad\qquad\qquad\, (i,j) \in C_E \label{redundant}\\ &&\omega_{ij} \geq v_i - v_i^{u} \sum_{k \in S(j)} s_{jk} \quad\quad\quad\qquad\qquad\, (i,j) \in C_E. \label{MC4.1} \end{eqnarray}
\noindent Therefore, constraints \eqref{eq:19.1} can be replaced with the following constraints \begin{eqnarray} d_j \geq \omega_{ij} \nonumber \\ \eqref{MC1.1} - \eqref{MC4.1}. \nonumber \end{eqnarray}
Observe that constraints \eqref{redundant} do not need to be added. If $ \sum_{k \in S(j)} s_{jk} =1$, then constraints \eqref{redundant} and \eqref{MC4.1} become redundant. On the other hand, if $ \sum_{k \in S(j)} s_{jk} =0$, then constraints \eqref{redundant} and \eqref{MC4.1} together ensure that $d_j \geq \omega_{ij}=v_i$. If constraints \eqref{redundant} are not included in the model, then we still obtain $d_j \geq \omega_{ij} \geq v_i$. In this way, we still ensure that flight $j$ cannot depart before the arrival time of flight $i$ plus its turnaround time. Therefore, the optimal solution is the same as that of the original formulation.
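This observation can be verified with a small sketch; the bounds $v^{\ell}$ and $v^{u}$ below are hypothetical. With a binary multiplier the McCormick estimators are exact, and dropping \eqref{redundant} still forces $d_j \geq v_i$ whenever no swap occurs.

```python
# Sketch: McCormick estimators for omega = v * (1 - s) with binary s
# (hypothetical bounds v_l <= v <= v_u, in minutes).
v_l, v_u = 500.0, 700.0

def omega_interval(v, s, include_redundant):
    """Feasible interval for omega given v_i = v and sum_k s_jk = s in {0,1}."""
    lo = max(v_l * (1 - s), v - v_u * s)       # two McCormick lower estimators
    hi = v_u * (1 - s)                          # upper estimator (MC1.1)
    if include_redundant:
        hi = min(hi, v - v_l * s)               # the droppable constraint
    return lo, hi

for v in (500.0, 613.0, 700.0):
    # s = 0: both variants force omega >= v, hence d_j >= omega >= v.
    lo, hi = omega_interval(v, 0, include_redundant=False)
    assert lo == v and hi >= lo
    # with the redundant constraint the envelope is exact: omega = v.
    assert omega_interval(v, 0, include_redundant=True) == (v, v)
    # s = 1: omega may be 0, so d_j >= omega imposes nothing.
    lo, hi = omega_interval(v, 1, include_redundant=False)
    assert lo <= 0.0 <= hi
```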
Similarly, logical constraints \eqref{eq:16.1}, \eqref{eq:18.1}, and \eqref{eq:20.1} can be replaced by the stronger McCormick inequalities. We provide the improved reformulation with the McCormick inequalities and hyperbolic inequalities, which can be written as conic quadratic inequalities, below.
\allowdisplaybreaks \ignore{ \begin{align}\text{max} \quad & \sum_{t \in T} \sum_{i \in N} p_i \left( min \left(\mu_i, \kappa^t \right) \right) z_i^t \\%\tag{$(MICQ+MC)$}\nonumber \\
& -\sum_{i \in E} \left( \sum_{t \in T} c_o \left(\alpha_i^t q_i^t + \beta_i^t \delta_i^t + \gamma_i^t \varphi_i^t + \nu_i^t \vartheta_i^t\right) - c_i^{t(i)}(u_i^{t(i)}) \right) \nonumber \\
& - \sum_{t \in T} \sum_{i \in N} c_o \left(\alpha_i^t q_i^t + \beta_i^t \delta_i^t + \gamma_i^t \varphi_i^t + \nu_i^t \vartheta_i^t\right) \nonumber \\ & - \sum_{t \in T} \sum_{i \in E \cup N} \sigma_{i} \left( max \left(0, \mu_i - \kappa^t \right) \right) z_i^t - \sum_{i \in E} \sum_{j \in S(i)} \left(\psi s_{ij}\right)/2 \label{eq:0.1} \\
\text{subject to} \nonumber \end{align} }
\begin{eqnarray} \text{max}&& \sum_{t \in T} \sum_{i \in N} \pi_i \text{ min} \left(\mu_i, \kappa^t \right) z_i^t -\sum_{i \in E} \rho g_i - \phi \nonumber \\%\tag{$(MICQ+MC)$}\nonumber \\ && -\sum_{i \in E} \left( \sum_{t \in T} c_o \left(\alpha_i^t p_i^t + \beta_i^t q_i^t + \gamma_i^t r_i^t + \nu_i^t h_i^t\right) - c_i^{t(i)}(u_i^{t(i)}) \right) \nonumber \\ && - \sum_{t \in T} \sum_{i \in N} c_o \left(\alpha_i^t p_i^t + \beta_i^t q_i^t + \gamma_i^t r_i^t + \nu_i^t h_i^t\right) \nonumber \\ && - \sum_{t \in T} \sum_{i \in E \cup N} \sigma_{i} \text{ max} \left(0, \mu_i - \kappa^t \right) z_i^t - \sum_{i \in E} \sum_{j \in S(i)} \left(\psi/2\right) s_{ij} \nonumber
\end{eqnarray} \begin{align} &\text{s.t. Const.} \quad \eqref{eq:4}-\eqref{eq:4b}, \eqref{eq:5.1}-\eqref{eq:17.1}, \eqref{eq:3.2}-\eqref{eq:newdeviation1} \nonumber \\ & {\left(z_i^t\right)}^2 \leq p_i^t f_i^t && i \in {E \cup N}, t \in T \label{eq:34.1}\\ &{\left(z_i^t\right)}^4 \leq {\left(f_i^t\right)}^2 q_i^t z_{i}^t && i \in {E \cup N}, t \in T \label{eq:35.1}\\ &{\left(f_i^t\right)}^4 \leq {\left(z_i^t\right)}^2 r_i^t f_i^t && i \in {E \cup N}, t \in T \label{eq:36.1}\\ &{\left(f_i^t\right)}^2 \leq h_i^t z_i^t && i \in {E \cup N}, t \in T \label{eq:37.1}\\ &b_{i}^{4} \leq g_{i}^{2} b_{i} && i \in E \label{eq:conicdelay.1}\\ &d_n \geq \omega_{in}^1 && i \in E_{I}, n \in N_O \label{MCbas}\\ &\omega_{in}^1 \leq v_i^{u} y_{in} && i \in E_{I}, n \in N_O \label{MC1u}\\ &\omega_{in}^1 \geq v_i^{\ell} y_{in} && i \in E_{I}, n \in N_O \label{MC1l}\\ &\omega_{in}^1 \leq v_i - v_i^{\ell} \left(1 - y_{in} \right) && i \in E_{I}, n \in N_O \label{MC1l.1}\\ &\omega_{in}^1 \geq v_i - v_i^{u} \left(1 - y_{in} \right) && i \in E_{I}, n \in N_O \label{MC1u.1} \\
&d_i \geq \omega_{ni}^2 && i \in E_{O}, n \in N_I \\ &\omega_{ni}^2 \leq v_n^{u} y_{ni} && i \in E_{O}, n \in N_I \label{MC2u}\\ &\omega_{ni}^2 \geq v_n^{\ell} y_{ni} && i \in E_{O}, n \in N_I \label{MC2l}\\ &\omega_{ni}^2 \leq v_n - v_n^{\ell} \left(1 - y_{ni} \right) && i \in E_{O}, n \in N_I \label{MC2l.1}\\ &\omega_{ni}^2 \geq v_n - v_n^{u} \left(1 - y_{ni} \right) && i \in E_{O}, n \in N_I \label{MC2u.1} \\ & d_j \geq \omega_{ij}^3 && (i,j) \in C_E \\ & \omega_{ij}^3 \leq v_i^{u} \bigg(1 - \sum_{k \in S(j)} s_{jk} \bigg) && (i,j) \in C_E \label{MC3u}\\ &\omega_{ij}^3 \geq v_i^{\ell} \bigg (1 - \sum_{k \in S(j)} s_{jk} \bigg) && (i,j) \in C_E\label{MC3l}\\ &\omega_{ij}^3 \leq v_i - v_i^{\ell} \sum_{k \in S(j)} s_{jk} && (i,j) \in C_E \label{MC3l.1}\\ &\omega_{ij}^3 \geq v_i - v_i^{u} \sum_{k \in S(j)} s_{jk} && (i,j) \in C_E \label{MC3u.1}\\ &d_k \geq \omega_{ik}^4 && k \in S(j), (i,j) \in C_E \\ &\omega_{ik}^4 \leq v_i^{u} s_{jk} && k \in S(j), (i,j) \in C_E \label{MC4u}\\ &\omega_{ik}^4 \geq v_i^{\ell} s_{jk} && k \in S(j), (i,j) \in C_E \label{MC4l}\\ &\omega_{ik}^4 \leq v_i - v_i^{\ell} \left(1 - s_{jk}\right) && k \in S(j), (i,j) \in C_E \label{MC4l.1}\\ &\omega_{ik}^4 \geq v_i - v_i^{u} \left(1 - s_{jk} \right) && k \in S(j), (i,j) \in C_E \label{MCson} \end{align}
\section{Computational Study} \label{Computation}
In this section, we first test and compare the performance of the two approaches proposed in the paper, CTC and CTC-AS, in terms of their effectiveness in improving the airline's profitability through a full-factorial experimental design. Then we test and compare the effectiveness of the stronger reformulations described in Section~\ref{sec:reformulation} in solving the computationally intensive approach CTC-AS with aircraft swapping.
In the experimental study, we test the performance of the MICQ reformulations with Big-M constraints and McCormick inequalities, respectively. All experiments are performed on a workstation with a 3.60 GHz Intel\textregistered{} Xeon\textregistered{} CPU E5-1650 and 32 GB of main memory. The mixed-integer conic quadratic reformulations are implemented in the Java programming language with a connection to IBM ILOG CPLEX Optimization Studio 12.7.1.
We use a sample schedule extracted from the database ``Airline On-Time Performance Data," provided by the Bureau of Transportation Statistics of the US Department of Transportation, BTS \cite{BTSperformance}, and query the planned departure and arrival times of all United Airlines (UA) domestic flights for the date of 03/01/2018 from the database. In our computations, the departure time intervals ($[d^{\ell}, d^{u}]$) are determined by adding/subtracting ninety minutes to/from the planned departure times of the sample schedule. The flight block times in the sample schedule are calculated by taking the difference between the scheduled arrival and departure times. Then, we assume that 30 minutes of each flight block time is non-cruise time ($\eta$) and the remainder is cruise time ($u$). In the experiments, we allow the cruise times to be compressed by at most 15\%, hence $\ell = 0.85 u$. Moreover, in the proposed models, we keep each aircraft route, i.e., the sequence of flights, as provided in the sample schedule.
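The preprocessing steps above can be sketched as follows; times are in minutes after midnight, and the sample flight used in the check is hypothetical.

```python
# Derive departure windows and cruise-time bounds from a scheduled flight,
# following the preprocessing described in the text.
NON_CRUISE = 30          # assumed non-cruise portion of block time (min)
WINDOW = 90              # departure window half-width (min)
MAX_COMPRESSION = 0.15   # cruise times compressed by at most 15%

def preprocess(sched_dep, sched_arr):
    """Return (d_l, d_u, l, u) for one flight of the sample schedule."""
    block = sched_arr - sched_dep        # scheduled block time
    u = block - NON_CRUISE               # planned cruise time
    l = (1 - MAX_COMPRESSION) * u        # lower cruise-time bound: 0.85 u
    d_l, d_u = sched_dep - WINDOW, sched_dep + WINDOW
    return d_l, d_u, l, u

# hypothetical flight scheduled 08:00 -> 10:25
d_l, d_u, l, u = preprocess(sched_dep=480, sched_arr=625)
assert (d_l, d_u) == (390, 570)
assert u == 115 and abs(l - 97.75) < 1e-9
```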
In this study, new flights are connected to the existing schedule at the hub airport ORD. Therefore, we remove from the sample schedule the routes of aircraft that do not visit ORD on this particular day. This is reasonable, since the flights of aircraft that do not visit ORD will not be affected by introducing new flights. The resulting sample schedule includes 300 flights operated by 81 aircraft.
\subsection{Description of the data for the experimental study}
In order to analyze the effects of problem parameters on the airline's profit, we conduct a $2^k$ full-factorial experimental design. The experimental factors and their levels are given in Table \ref{factor}.
\setcounter{table}{2} \begin{table}[ht]\footnotesize
\centering
\caption{Factor values. }
\label{factor}
\begin{tabular}{ r| c| c }
\hline \hline
\multicolumn{1}{c}{ }&\multicolumn{2}{|c}{Levels}\\
\hline
\multicolumn{1}{c|}{Factor Description} & \multicolumn{1}{c|}{Low} & \multicolumn{1}{c}{High} \\
\hline
$c_{fuel}$ (\$/kg) & 0.6 & 1.2 \\
$\sigma_b$ (\$/passenger) & 60 & 200 \\
$\psi$ (\$/swap) & 500 & 1000 \\
\hline \hline
\end{tabular}
\end{table}
The fuel prices for the lower and higher settings, respectively, are estimates based on the history of fuel prices obtained from the IATA fuel price monitor \cite{fuelprice}, which shows a fluctuation between \$0.6/kg and \$1.2/kg during the years 2008--2018. In the table, $\sigma_b$ is a base value for the opportunity cost of each passenger spilled due to insufficient seat capacity of the aircraft. \c{S}afak et al. \cite{Safak} express the cost of a spilled passenger for each flight using airport congestion coefficients, e.g., favoring the more populated markets, as follows: \begin{equation} \sigma_i = \sigma_b {\left(e_{O_i}\right)} {\left(e_{D_i}\right)},\quad i\in E \cup N, \label{eq:spilledcost} \end{equation} \noindent where $e_{O_i}$ and $e_{D_i}$ represent the congestion coefficients for the origin and destination airports of flight $i \in E \cup N$. These coefficient values are provided in \c{S}afak et al. \cite{Safak}. $\psi$ is the cost of the changes caused by swapping the aircraft of two flights. For the low and high values of the swap cost, we use \$500 (proposed by Marla et al. \cite{Marla}) and \$1000, respectively.
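The spill cost rule \eqref{eq:spilledcost} is a simple product; the sketch below uses hypothetical congestion coefficients, since the actual values are tabulated in \c{S}afak et al. \cite{Safak}:

```python
def spill_cost(sigma_b, e_origin, e_dest):
    """Opportunity cost of one spilled passenger on flight i:
    sigma_i = sigma_b * e_{O_i} * e_{D_i}."""
    return sigma_b * e_origin * e_dest

# Hypothetical coefficients: congested origin (1.25), mid-size destination (1.10),
# at the low base spill cost of $60 per passenger.
cost = spill_cost(60, 1.25, 1.10)
assert abs(cost - 82.5) < 1e-6
```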
We consider six aircraft types and list the fuel burn related parameters, the corresponding maximum range cruise (MRC) speed, and the seat capacity in Table \ref{aircraftParam}. The coefficients of the fuel burn function \eqref{eq:cost}, $\alpha_i^t, \beta_i^t, \gamma_i^t, \nu_i^t$, are calculated as specified in \c{S}afak et al. \cite{Safak} using the corresponding values of fuel burn related parameters in Table \ref{aircraftParam}.
\begin{table}[ht]\scriptsize
\centering
\caption{Aircraft parameters. }
\label{aircraftParam}
\begin{tabular}{r| c c c c c c}
\hline \hline
Aircraft type & B727 228 & B737 500 &MD 83 & A320 111 & A320 212 & B767 300\\
\hline
Capacity &134 & 122 &148 & 172 & 180 & 218 \\
Mass (kgs) & 74000 & 50000 &61200 & 62000 & 64000 & 135000\\
Surface($m^2$) & 157.9 & 105.4 &118 & 122.4 & 122.6 & 283.3\\
$C_{D0,CR}$ & 0.018 & 0.018 &0.0211 & 0.024 & 0.024 & 0.021\\
$C_{D2,CR}$ & 0.06 & 0.055 &0.0468 & 0.0375 & 0.0375 & 0.049\\
$Cf_1$ &0.53178 & 0.46 &0.7462 & 0.94 & 0.94 & 0.763\\
$Cf_2$ &276.72 & 300 &638.59 & 50000 & 100000 & 1430\\
$cf_{CR}$ &0.954 & 1.079 &0.9505 & 1.095 & 1.06 & 1.0347\\
MRC speed (km/h) & 867.6 & 859.2 &867.6 & 855.15 & 868.79 & 876.70 \\
$\tau_{b}^t$ (min) & 32 &36 & 26 & 28 & 30 & 40 \\
\hline \hline
\end{tabular}
\end{table}
For a flight $i$, the aircraft turnaround time ($\tau_{i}^{t}$) needed to prepare the aircraft for the next flight is estimated using the expression \begin{equation} \tau_{i}^t = \tau_{b}^t \cdot{e_{D_i}}, \quad t \in T, \label{tab:turntime} \end{equation} where $\tau_{b}^t$ is a base value for the aircraft turnaround time. Therefore, the turnaround of an aircraft visiting a congested airport takes longer. The calculated aircraft turnaround times match the aircraft turnaround times given in Ar{\i}kan et al. \cite{arikanDes}.
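The turnaround rule \eqref{tab:turntime} can be sketched likewise (the congestion coefficient here is hypothetical):

```python
def turnaround_time(tau_base, e_dest):
    """tau_i^t = tau_b^t * e_{D_i}: turnarounds lengthen at congested airports."""
    return tau_base * e_dest

# Hypothetical: an MD 83 (base turn time 26 min) at a congested hub (e = 1.3).
assert abs(turnaround_time(26, 1.3) - 33.8) < 1e-6
```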
The passenger demand for the existing flights is generated uniformly between 110 and 134, 110 and 122, 110 and 148, 150 and 172, 160 and 180, and 160 and 218 for aircraft types B727 228, B737 500, MD 83, A320 111, A320 212 and B767 300, respectively. Without loss of generality, we assume that the original aircraft assignments meet all passenger demand. Under this experimental setting, we can analyze the performance of aircraft swapping while trading off the cost of fuel burn against the cost of spilled passengers.
\subsection{Performance analysis of CTC and CTC-AS} While aircraft swapping, in addition to re-timing departures and cruise speed control, provides greater flexibility to accommodate the new flights, it is of interest to study whether the incremental increase in the airline's profit justifies the heavy computational burden of solving CTC-AS.
In this experimental study, we use the 300 flights operated by 81 aircraft of the sample schedule. We consider adding the round trips \{ORD-IAH\}, \{ORD-BOS\} and \{ORD-MSP\}, so that there are six new flights to be added to the new schedule. The demand for the new flights is generated uniformly between 120 and 200. The estimated demand ranges are defined based on the seat capacity of the aircraft types commonly used to operate these trips. The fares for the new flights are generated uniformly between USD 120 and USD 350, based on an analysis of the ticket prices of the flights on these trips for United Airlines. According to Eurocontrol's report \cite{eurocrew} on dynamic cost indexing, total crew cost per block hour varies between \$280 and \$800. In this study, we set the crew cost for each new flight to \$400/hr. Using an expected block time for each new flight, the total crew cost for the six new flights is calculated as \$6500. For each new flight, we also consider a \$1500 service cost including landing fees, ground handling fees, insurance fees, onboard service costs, etc.
We design a $2^3$ experimental study with two levels for each experimental factor. For each combination of the factor levels, we solve 10 randomly generated instances with the approaches CTC and CTC-AS, respectively. In both formulations, we replace the logical constraints with McCormick inequalities and handle the nonlinear fuel and emission costs using the mixed-integer conic quadratic programming approach described in Proposition \ref{conicfuel}. Each instance is then solved in less than \REV{\textbf{sixty seconds}} by the CTC approach. On the other hand, with formulation CTC-AS, the average CPU time is 6,100 seconds due to the increased number of conic constraints with binary assignment and binary swapping decisions. Despite the additional computational complexity, the CTC-AS approach provides a substantial profit improvement over the simpler cruise time controllability approach CTC, calculated as
\[ \text{Profit improvement ($\%$)} = 100 \times \frac{\text{Optval (CTC-AS)} - \text{Optval (CTC)}} {\text{Optval (CTC)}} \cdot \]
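In code, this measure is simply (a minimal sketch):

```python
def profit_improvement(optval_ctc_as, optval_ctc):
    """Percentage profit improvement of CTC-AS over CTC."""
    return 100 * (optval_ctc_as - optval_ctc) / optval_ctc

# E.g., a CTC-AS optimal profit of 153 against a CTC optimal profit of 100
# corresponds to a 53% improvement.
assert profit_improvement(153, 100) == 53.0
```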
Table \ref{comparison} summarizes the results for 80 instances. Observe that the profit improvement increases significantly as the fuel price increases. As fuel expenses are a major cost component for airlines, this cost component becomes more important at a higher fuel price. To reduce fuel burn, CTC-AS has the advantage of reassigning flights to more fuel-efficient aircraft. For the high fuel price, the profit improvement can reach 131\% over CTC. On the other hand, if the spill cost is high, then the profit improvement decreases. Similarly, the profit improvement decreases as the swapping cost parameter increases. This is because swapping the aircraft of a flight with a smaller aircraft may spill some of the passengers, and such swaps incur an additional spilled passenger cost for the CTC-AS approach. Overall, CTC-AS yields approximately a 53\% improvement in the airline's profit compared to CTC for the factor values analyzed in this study.
\begin{table}[ht]\footnotesize
\centering
\caption{Profit improvement of CTC-AS over CTC. }
\label{comparison}
\begin{tabular}{ l l r r r}
\hline
\hline
\multicolumn{2}{c}{}& \multicolumn{3}{c}{Profit Impr. (\%) } \\
\multicolumn{2}{c}{}&\multicolumn{1}{c}{min}&\multicolumn{1}{c}{avg}&\multicolumn{1}{c}{max} \\ \hline
$c_{fuel}$& Low & 8 &25 &44 \\
&High &42 &81 &131 \\
$\sigma_b$& Low &25 &66 &131 \\
&High&8 &39 &107 \\
$\psi$ &Low &10 &55 &131\\
&High &8 &51 &125\\
\hline
& avg & &53 & \\
\hline
\hline
\end{tabular}
\end{table}
\subsubsection{What-if analysis on the number of aircraft swaps} The CTC-AS approach utilizes the aircraft swapping mechanism to reduce fuel burn. On the other hand, if the aircraft of a flight is swapped with a smaller aircraft, then some of the passengers of the subsequent flights might be spilled. Therefore, the CTC-AS approach trades off the cost of fuel burn against the cost of spilled passengers. To see the effect of the number of swaps, we restrict the maximum number of swaps with the following constraint: \begin{equation} \sum_{i \in J} \sum_{j \in AS(i)} x_{ij} \leq max\_swap. \end{equation}
\noindent A schedule planner can specify and modify the maximum number of swaps and analyze the influence of the number of swaps on the airline's profit. In Figure \ref{fig:whatif1}, we provide the efficient frontier for a problem instance with 300 flights and 81 aircraft, solved with the different levels of the factors in Table \ref{level}. If the fuel price is high, then the airline's profit increases significantly as the number of swaps increases. Since fuel cost is the major cost component of airlines, fuel burn has a greater influence on the airline's profit in this case. The total fuel burn can be reduced by reassigning the fuel-efficient aircraft to longer trips and using the less efficient aircraft for shorter trips. \ignore{This reassignment can be achieved by swapping the aircraft among flights. } On the other hand, if the fuel price is low and the spill parameter is high, the profit improves only slightly as the number of swaps increases, and it remains constant after seven swaps. In this case, spilling passengers is not preferred due to the high spill cost. As shown in Figure \ref{fig:whatif2}, the percentage of spilled passengers is lowest for factor combinations 3 and 7, as expected. The diminishing rate of return in the profit increase allows the airline to gain most of the benefit while limiting itself to a small number of swaps.
\begin{table}[ht]\footnotesize
\centering
\caption{Combination of factor levels.}
\label{level}
\begin{tabular}{ c r r r}
\hline
\hline
\multicolumn{1}{c}{ } & \multicolumn{3}{c}{Levels of}\\
Factor combination & \multicolumn{1}{c}{$\psi$ }& \multicolumn{1}{c}{$\sigma_b$}& \multicolumn{1}{c}{$c_{fuel}$}\\
\hline
1& Low & Low & Low \\
2 & Low & Low & High \\
3& Low & High & Low \\
4 & Low & High & High \\
5& High & Low & Low \\
6 & High & Low & High \\
7& High & High & Low \\
8 & High & High & High \\
\hline
\hline
\end{tabular}
\end{table}
\begin{figure}
\caption{Efficient frontier of aircraft swaps.}
\label{fig:whatif1}
\end{figure}
\begin{figure}
\caption{The effect of swap decisions on spilled passengers.}
\label{fig:whatif2}
\end{figure}
The computational results clearly demonstrate the benefit of CTC-AS over CTC and highlight the need to address the computational difficulty of solving the CTC-AS model. In the next section, we test the performance of the reformulations of CTC-AS.
\subsubsection{What-if analysis for aircraft-dependent swap costs} An airline may consider its passengers' satisfaction when the aircraft of a passenger's flight is swapped. While passengers would be satisfied when the aircraft of their flight is swapped with a larger one, they may not feel comfortable with a smaller aircraft. Therefore, in this section, we consider the aircraft-dependent swapping cost parameters provided in Table \ref{swapDependent}. If the aircraft type ($t$) in the first row is swapped with another aircraft type ($t^{'}$) in one of the columns, we incur an additional penalty cost $\Phi^{t t^{'}}$ on top of the original swap cost $\psi$ for this swap.
\begin{table}[ht]\scriptsize
\centering
\caption{Aircraft dependent swap cost. }
\label{swapDependent}
\begin{tabular}{r| c c c c c c}
\hline \hline
Aircraft type & B727 228 & B737 500 &MD 83 & A320 111 & A320 212 & B767 300\\
\hline
B727 228 & 0 & 0 &0 &0 &0 &0\\
B737 500 &0 & 0 &0 &0 &0 &0\\
MD 83 &100 &100 &0 &0 &0 &0\\
A320 111 &150 &150 &100 &0 &0 &0\\
A320 212 &150 &150 &100 &0 &0 &0\\
B767 300 &200 &200 &150 &100 &100 &0\\
\hline \hline
\end{tabular}
\end{table}
We redefine the cost of a swap ($\phi_{ij}$) between the aircraft of flights $i$ and $j$ and calculate it as:
\begin{equation} \phi_{ij} = \psi + \Phi^{t(i), t(j)}. \nonumber \end{equation}
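A lookup sketch of this aircraft-dependent cost follows, with the penalty matrix $\Phi$ transcribed from Table \ref{swapDependent} (row = original type, column = replacement type, per one reading of the table):

```python
TYPES = ["B727 228", "B737 500", "MD 83", "A320 111", "A320 212", "B767 300"]
PHI = {  # additional penalty when the row type is swapped with the column type
    "B727 228": [0, 0, 0, 0, 0, 0],
    "B737 500": [0, 0, 0, 0, 0, 0],
    "MD 83":    [100, 100, 0, 0, 0, 0],
    "A320 111": [150, 150, 100, 0, 0, 0],
    "A320 212": [150, 150, 100, 0, 0, 0],
    "B767 300": [200, 200, 150, 100, 100, 0],
}

def swap_cost(psi, type_i, type_j):
    """phi_ij = psi + Phi^{t(i), t(j)}."""
    return psi + PHI[type_i][TYPES.index(type_j)]

# Swapping a B767 300 with an MD 83 at the low base swap cost of $500:
assert swap_cost(500, "B767 300", "MD 83") == 650
```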
\noindent $\psi$ is still set to \$500 and \$1000 for its low and high values, respectively. For each combination of the fuel price, base spill cost and swap cost levels, we solve ten instances. We then report the minimum, average and maximum profit improvements of the CTC-AS approach over the CTC approach in Table \ref{profitSwap}.
\begin{table}[ht]\footnotesize
\centering
\caption{Profit improvement of CTC-AS over CTC. }
\label{profitSwap}
\begin{tabular}{ l l r r r}
\hline
\hline
\multicolumn{2}{c}{}& \multicolumn{3}{c}{Profit Impr. (\%) } \\
\multicolumn{2}{c}{}&\multicolumn{1}{c}{min}&\multicolumn{1}{c}{avg}&\multicolumn{1}{c}{max} \\ \hline
$c_{fuel}$& Low & 8 &24 &43 \\
&High &40 &78 &127 \\
$\sigma_b$& Low &24 &64 &127 \\
&High&8 &38 &103 \\
$\psi$ &Low &10 &53 &127\\
&High &8 &49 &121\\
\hline
& avg & &51 & \\
\hline
\hline
\end{tabular}
\end{table}
As expected, if there is an additional aircraft-dependent swap cost, then the profit improvement decreases slightly compared to the single swap cost case reported in Table \ref{comparison}. On the other hand, the CTC-AS approach with aircraft-dependent swap costs still significantly increases the profit compared to the CTC approach as the fuel price increases. This is because the CTC-AS approach has the advantage of reducing fuel burn by reassigning the fuel-efficient aircraft among a subset of flights. However, Table \ref{percentageSpill} shows that these reassignments spill more passengers as the fuel price increases. If the spill cost parameter increases, then the number of swaps decreases, so that the number of spilled passengers decreases. The results indicate that an average of 0.9\% of passengers are spilled while maximizing the airline's profit to capture the additional demand of the new flights.
\begin{table}[ht]\footnotesize
\centering
\caption{Percentage of the spilled passengers}
\label{percentageSpill}
\begin{tabular}{ l l r r r}
\hline
\hline
\multicolumn{2}{c}{}& \multicolumn{3}{c}{Percentage of } \\
\multicolumn{2}{c}{}& \multicolumn{3}{c}{spilled pax. (\%) } \\
\multicolumn{2}{c}{}&\multicolumn{1}{c}{min}&\multicolumn{1}{c}{avg}&\multicolumn{1}{c}{max} \\ \hline
$c_{fuel}$& Low & 0.2 &0.6 &1.3 \\
&High &0.3 &1.2 &2.3 \\
$\sigma_b$& Low &0.7 &1.4 &2.3 \\
&High&0.2 &0.4 &0.7 \\
$\psi$ &Low &0.2 &0.9 &2.3\\
&High &0.2 &0.9 &2.3\\
\hline
& avg & &0.9 & \\
\hline
\hline
\end{tabular}
\end{table}
\subsection{Analysis of the reformulations} The original formulation in Section \ref{SecondFormulation} has nonlinear fuel burn and carbon emission functions in the objective. To handle them efficiently, we propose a mixed-integer conic quadratic reformulation. Here, we compare two alternative mixed-integer conic quadratic reformulations, referred to as MICQ1 and MICQ2. The second one, MICQ2, is formulated using the strengthened inequalities shown in Table \ref{alternative}. The inequalities of MICQ2 are valid, since for $z \in \{0,1\}$ they reduce to the inequalities of MICQ1. Recall the additional constraints $\ell z \leq f \leq u z$, which force $f$ to zero whenever $z$ is zero.
With each of these formulations, we compare two different ways of enforcing the logical constraints. The first, ``MICQ1/2+BigM", corresponds to the mixed-integer conic quadratic reformulation in which the logical constraints are replaced with the Big-M constraints \eqref{eq:500}-\eqref{eq:503}. The second, ``MICQ1/2+MC", replaces the logical constraints with the McCormick inequalities \eqref{MCbas}--\eqref{MCson} described in Section \ref{McCormick}. As discussed in Section \ref{McCormick}, inequalities \eqref{MC1l.1}, \eqref{MC2l.1}, \eqref{MC3l.1} and \eqref{MC4l.1} are not needed, and therefore we do not include them in the experiments.
We perform a $2^3$ experimental design with two levels for each experimental factor in Table \ref{factor}. For each of these eight factor combinations, we generate 10 instances, resulting in a total of 80 instances. A time limit of 18,000 seconds is imposed on each run.
\begin{table}[ht]\footnotesize
\centering
\caption{Alternative conic formulations.}
\label{alternative}
\begin{tabular}{ c l l }
\hline
\hline
\multicolumn{1}{c}{}& \multicolumn{1}{l}{MICQ1} & \multicolumn{1}{l}{MICQ2} \\
\hline
& \(\displaystyle z^2 \leq p \cdot f \) &\(\displaystyle z^2 \leq p \cdot f \) \\
Hyperbolic & \(\displaystyle z^4 \leq f^2 \cdot q \cdot 1 \) & \(\displaystyle z^4 \leq f^2 \cdot q \cdot z \) \\
inequalities & \(\displaystyle {f}^4 \leq {1}^2 \cdot r \cdot f \) & \(\displaystyle {f}^4 \leq {z}^2 \cdot r \cdot f \) \\
&\(\displaystyle {f}^2 \leq h \cdot 1 \) & \(\displaystyle {f}^2 \leq h \cdot z \) \\
\hline
\hline
\end{tabular} \end{table}
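The relationship between the two systems in Table \ref{alternative} can be spot-checked numerically. In the illustrative sketch below, the variable values are arbitrary, chosen only to show that at $z = 1$ the systems coincide while a fractional $z$ can satisfy MICQ1 yet violate MICQ2:

```python
def micq1_feasible(z, f, p, q, r, h):
    """Hyperbolic inequalities of MICQ1 (left column of the table)."""
    return (z**2 <= p * f and z**4 <= f**2 * q * 1
            and f**4 <= 1**2 * r * f and f**2 <= h * 1)

def micq2_feasible(z, f, p, q, r, h):
    """Strengthened hyperbolic inequalities of MICQ2 (right column)."""
    return (z**2 <= p * f and z**4 <= f**2 * q * z
            and f**4 <= z**2 * r * f and f**2 <= h * z)

vals = dict(f=2.0, p=1.0, q=1.0, r=8.0, h=4.0)
# At z = 1 the two systems agree ...
assert micq1_feasible(1, **vals) and micq2_feasible(1, **vals)
# ... while a fractional z satisfies MICQ1 but violates MICQ2,
# i.e., MICQ2 cuts off part of the continuous relaxation.
assert micq1_feasible(0.5, **vals) and not micq2_feasible(0.5, **vals)
```

This tightening of the continuous relaxation is what drives the node-count reductions reported below.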
The computational performance results for the alternative formulations are summarized in Tables~\ref{comparison2}--\ref{comparison3}. Table \ref{comparison2} displays the results for the conic formulation MICQ1, whereas Table \ref{comparison3} presents the improved results for the strengthened conic formulation MICQ2. For each combination of the fuel price, base spill cost, and swap cost parameters, the first column ``\# nodes" reports the average number of nodes explored in the branch-and-bound algorithm. The second column ``time" reports the average CPU time (in seconds) over the instances that could be solved to optimality within the time limit, with the number of such instances in parentheses. The symbol (-) indicates that none of the instances could be solved to optimality within the time limit. The third column ``gap" reports the average percentage optimality gap between the best bound at termination and the integer objective, with the number of instances that could not be solved to optimality in parentheses.
Table \ref{comparison2} shows that none of the instances could be solved to optimality using MICQ1 with either the Big-M or the McCormick inequalities within the time limit. On the other hand, the computational performance of the strong conic formulations MICQ2+BigM/MC is significantly better. The formulation MICQ2+BigM solves most of the instances to optimality within the time limit, as indicated in Table \ref{comparison3}. Moreover, for each instance that could not be solved to optimality by formulation MICQ1+BigM, the stronger formulation MICQ2+BigM achieves a lower optimality gap. When the McCormick valid inequalities are added to the strong conic formulation MICQ2, the computational performance improves further: the McCormick inequalities help to solve all instances within the time limit. For some instances, the strong conic programming formulation MICQ2 with McCormick estimators is solved more than twice as fast as the best alternative.
\begin{table}[ht]\footnotesize
\setlength{\tabcolsep}{1pt}
\centering
\caption{Comparison of conic formulations.}
\label{comparison2}
\scalebox{0.9}{
\begin{tabular}{ r r r | r c r| r c r }
\hline
\hline
\multicolumn{3}{c}{} &\multicolumn{3}{c}{MICQ1+Big-M} &\multicolumn{3}{c}{MICQ1+MC} \\
\multicolumn{1}{c}{$\psi$}& \multicolumn{1}{c}{$\sigma_b$}& \multicolumn{1}{c}{$c_{fuel}$ }&\multicolumn{1}{c}{\# nodes}& \multicolumn{1}{c}{cpu (sec)}& \multicolumn{1}{c}{gap (\%)}& \multicolumn{1}{c}{\# nodes}& \multicolumn{1}{c}{cpu (sec)} & \multicolumn{1}{c}{gap (\%)} \\
\hline
500 &60 &0.6 &55726 &- &11.69(10) &35416 &- &7.78(10) \\
& &1.2 &51729 &- &13.71(10) &33419 &- &9.38(10) \\
&200 &0.6 &101801 &- &7.09(10) &62260 &- &4.83(10) \\
& &1.2 &70944 &- &8.55(10) &39552 &- &4.74(10) \\ \hline
1000 &60 &0.6 &56412 &- &11.99(10) &36445 &- &7.68(10) \\
& &1.2 &56506 &- &13.45(10) &35584 &- &9.01(10) \\
&200 &0.6 &102414 &- &7.00(10) &55232 &- &4.32(10) \\
& &1.2 &71287 &- &7.74(10) &37564 &- &5.25(10) \\
\hline
\hline
\end{tabular} } \end{table}
\begin{table}[ht]\footnotesize
\setlength{\tabcolsep}{1pt}
\centering
\caption{Comparison of the strengthened conic formulations.}
\label{comparison3}
\scalebox{0.9}{
\begin{tabular}{ r r r | r r r| r r r }
\hline
\hline
\multicolumn{3}{c}{} &\multicolumn{3}{c}{MICQ2+Big-M} &\multicolumn{3}{c}{MICQ2+MC} \\
\multicolumn{1}{c}{$\psi$}& \multicolumn{1}{c}{$\sigma_b$}& \multicolumn{1}{c}{$c_{fuel}$ }&\multicolumn{1}{c}{\# nodes}& \multicolumn{1}{c}{cpu (sec)}& \multicolumn{1}{c}{gap (\%)}& \multicolumn{1}{c}{\# nodes}& \multicolumn{1}{c}{cpu (sec)} & \multicolumn{1}{c}{gap (\%)} \\
\hline
500 &60 &0.6 &13511 &12603(9) &0.07(1) &3538 &8206(10) &0(0) \\
& &1.2 &25209 &12988(6) &1.46(4) &6677 &8688(10) &0(0) \\
&200 &0.6 &19908 &5798(9) &3.48(1) &7640 &3436(10) &0(0) \\
& &1.2 &12981 &7304(9) &2.35(1) &3783 &3033(10) &0(0) \\ \hline
1000 &60 &0.6 &9900 &10601(9) &1.00(1) &3641 &8074(10) &0(0) \\
& &1.2 &17262 &12930(8) &0.47(2) &6161 &8803(10) &0(0) \\
&200 &0.6 &13969 &5085(9) &3.06(1) &6560 &3380(10) &0(0) \\
& &1.2 &10958 &7043(9) &0.08(1) &8935 &5608(10) &0(0) \\
\hline
\hline
\end{tabular} }
\end{table}
In Table \ref{avgPerforms}, we summarize the results for the 80 instances. The conic formulation MICQ1 with BigM inequalities could not solve any instance to optimality within the time limit. If we replace the BigM inequalities with McCormick inequalities in formulation MICQ1, the average optimality gap decreases from 10.15\% to 6.61\%. If we use the strong conic formulation MICQ2 with BigM inequalities, the average optimality gap decreases further, to 1.40\% over the 12 instances that are not solved to optimality. When we reformulate the logical constraints with McCormick inequalities in the strong conic formulation MICQ2, the computational performance is the best: all instances are solved to optimality with a dramatic reduction in the number of nodes explored.
\begin{table}[ht]\footnotesize
\centering
\caption{Average performances of conic formulations.}
\label{avgPerforms}
\begin{tabular}{ r| r c r}
\hline
\hline
\multicolumn{1}{c}{ } & \multicolumn{1}{c}{\# nodes} & \multicolumn{1}{c}{cpu (sec)} & \multicolumn{1}{c}{gap (\%)}\\
\hline
MICQ2+MC& 5867 & 6153(80) & 0(0) \\
MICQ2+Big-M & 15462 & 9294(68) & 1.40(12) \\
MICQ1+MC& 41934 & - & 6.61(80) \\
MICQ1+Big-M & 70852 & - & 10.15(80) \\
\hline
\hline
\end{tabular}
\end{table}
Figure \ref{fig:nodes} analyzes the average number of nodes explored in the branch-and-bound algorithm for each reformulation. For each factor combination, the conic formulation MICQ1 with Big-M constraints explores a large number of nodes within the time limit. Adding the McCormick inequalities to MICQ1 reduces the average number of nodes by half, but it is still more than three times that required by the conic formulation MICQ2. With the strengthened inequalities of the conic formulation MICQ2+Big-M, a large number of nodes are fathomed and optimal solutions are found for many instances within the time limit. The conic formulation MICQ2 can be strengthened further by adding the McCormick inequalities, which leads to a further significant reduction in the number of nodes.
\begin{figure}
\caption{Analysis on number of nodes.}
\label{fig:nodes}
\end{figure}
\section{Conclusion} \label{Sec:6} We propose two approaches to accommodate new flights into an existing flight schedule for a particular day. Both approaches make use of re-timing of the flight departure times and cruise time controllability to reduce the block times of the existing flights, thereby making time to operate the new flights in the schedule. The second approach additionally takes advantage of the flexibility offered by aircraft swapping among flights, providing substantial cost savings in fuel burn by reassigning flights to fuel-efficient aircraft. However, the nonlinear fuel and emission costs, together with the additional binary swapping and assignment decisions, significantly complicate the problem. To overcome the computational difficulty, we present strong conic quadratic reformulations. The experiments show the superiority of the strong conic formulations over an alternative conic formulation: the alternative conic formulation times out on all test instances, whereas the Big-M reformulation of the logical constraints with the strengthened conic inequalities solves most of the test instances to optimality. For only 12 of the 80 instances, the Big-M reformulation terminates with an average optimality gap of 1.40\%. As an alternative to the Big-M method, when we add McCormick inequalities to the strong conic formulations, all test instances can be solved to optimality with a dramatic reduction in the number of nodes.
In conclusion, we provide an airline with two alternative approaches, CTC and CTC-AS, with different solution quality and computational difficulty. While CTC-AS provides an average of 53\% profit improvement over CTC, the average CPU time to solve CTC-AS to optimality by the strong conic reformulation together with McCormick inequalities is 6,100 seconds. On the other hand, the CTC approach can be solved within \REV{\textbf{sixty seconds}}.
This study may lead to several potential research directions. The computational advantages of the strong conic quadratic models may pave the way for researchers to integrate the consideration of crew itineraries while determining the flight departures. Another extension of this study would be addressing a strategic planning problem that aims at introducing new flights to new demand points for the next season. Many potential demand scenarios considering the competitor's flights could be analyzed. Moreover, leasing an aircraft to hedge for the demand uncertainties may be an additional mechanism to introduce new flights.
\end{document}
\begin{document}
\begin{opening} \title{The Uses of Argument in Mathematics} \author{ANDREW \surname{ABERDEIN}} \institute{Humanities and Communication,\\ Florida Institute of Technology,\\ 150 West University Blvd, \\ Melbourne, Florida 32901-6975, U.S.A.\\ aberdein@fit.edu} \begin{abstract} Stephen Toulmin once observed that `it has never been customary for philosophers to pay much attention to the rhetoric of mathematical debate' \protect\cite[p.~89]{Toulmin+}. Might the application of Toulmin's layout of arguments to mathematics remedy this oversight?
Toulmin's critics fault the layout as requiring so much abstraction as to permit incompatible reconstructions. Mathematical proofs may indeed be represented by fundamentally distinct layouts. However, cases of genuine conflict characteristically reflect an underlying disagreement about the nature of the proof in question.
\end{abstract}
\keywords{Euclid, mathematical argumentation, proof, rebuttal, Stephen Toulmin, undercutter} \end{opening}
\setlength{\parindent}{.5in} \pagestyle{fancy} \chead{\sc Andrew Aberdein, The Uses of Argument in Mathematics} \makeatletter \renewcommand{\@copyrightfoot}{} \renewcommand{\@copyrighthead}{} \renewcommand{\idline}{} \makeatother
\noindent Modern formal logic's earliest successes were in the analysis of mathematical argument. However, formal logic has long been recognized as useful elsewhere, and is now the dominant means of logical analysis in all fields of discourse. Informal logicians claim that this mathematical success has been bought at the cost of broader application. They have proposed methods of argument analysis complementary to that of formal logic, providing for the pragmatic treatment of features of argumentation which cannot be reduced to logical form. Characteristically, informal logicians have conceded mathematics to formal logic, while stressing the superiority of their own systems as analyses of argumentation in natural language. My contention is that this is an unnecessary concession: both systems have lessons for mathematics, just as they do for natural language argumentation.
In this paper I exhibit some aspects of mathematical argumentation which can best be captured by informal logic. Specifically I investigate the applicability of Toulmin's layout of arguments to mathematics. I demonstrate how the layout may be used to represent the structure of both `regular' and `critical' arguments in mathematics. Mathematical proof is typically regular argumentation, and it is this which I address most closely.
\subsection{Toulmin's Layout}
Toulmin's \textit{The Uses of Argument} is perhaps the single most influential work in modern argumentation theory \cite{Toulmin58}. Especially widely cited is the general account it offers of the structure of arguments. Toulmin begins with the thought that an argument is a claim ($C$) derived from data ($D$) in accordance with a warrant ($W$). While this is superficially similar to the treatment of arguments in deductive logic, greater generality is achieved through additional components of the layout. The argument may have a modal qualifier ($Q$), such as `necessarily' or `presumably', which explicates the force of the warrant. If the warrant does not provide necessity, its conditions of exception or rebuttal ($R$) may be noted. We may also keep track of the backing ($B$) which supports the warrant. The overall layout is often set out graphically, as in Figure~\ref{fig:Toulmin}.
\begin{figure}
\caption{Toulmin's layout \protect\cite[p.~104]{Toulmin58}.}
\label{fig:Toulmin}
\end{figure}
While philosophers attacked \textit{The Uses of Argument} as `Toulmin's anti-logic book', rhetoricians, communication theorists and subsequently computer scientists have found its account of argumentation deeply insightful \cite{Shapin}. Although it traduces Toulmin to characterize him as `anti-logic', his motivation was indeed to critique formal logic, specifically its claim to represent all the significant content of \textit{non}-mathematical argument. Mathematics, as the discourse for which formal logic is best suited, was exempt from this critique. Nevertheless, Toulmin's layout is intended to encompass all forms of argument, mathematics included, as he makes clear in a later work \protect\cite[p.~89]{Toulmin+}. In the remainder of this paper we will see how well it accomplishes this task, and whether it has advantages over formal logic alone in this discourse too.
One significant problem which critics of Toulmin have raised is that the degree of abstraction necessary to use the layout at all can make different, incompatible, reconstructions possible \cite{Willard}. This would seem to imply that use of the layout must risk distorting the original argument to an unacceptable degree. I shall explore whether this criticism undermines the application of Toulmin's layout to mathematics below, but first I must address an important distinction.
\subsection{Regular and Critical Arguments} \label{sec:Reg} In discussing scientific arguments, Toulmin distinguishes between arguments that are conducted within, or as applications of, a scientific theory and arguments which challenge a prevailing theory or seek to motivate an alternative \cite[p.~247]{Toulmin+}. The former he terms `regular arguments', the latter `critical arguments'. This distinction echoes the familiar one between normal and revolutionary science promoted by Thomas Kuhn, although Toulmin rejects Kuhn's hard and fast distinction between these two modes \cite{Toulmin70}. Hence critical arguments are always available to scientists, not just in revolutionary phases, although they may not always have cause to use them. Moreover, critical arguments are more broadly conceived than scientific revolutions, since they do not necessarily require the overthrow of the prior theory in order to succeed. As Toulmin observes, `[c]omplete immutability in our rational procedures and standards of judgment is not to be found even in \dots pure mathematics. The standards of rigour relied on in the judging of mathematical arguments have had their own history' \cite[p.~133]{Toulmin+}. Even commentators who deny that there can be revolutions in mathematics on the grounds that it is a purely cumulative discipline concede that there can be revolutions in mathematical rigour.\footnote{For example, \cite[p.~19]{Crowe}. Positions for and against mathematical revolutions are explored at length in \cite{Gillies}.} Thus even the most conservative history of mathematics must consider mathematical critical arguments. Furthermore, when we apply Toulmin's layout to regular mathematical argumentation, we shall see how it may be used to keep track of the changing standards of rigour to which critical argumentation can give rise.
\begin{figure}
\caption{Alcolea's analysis of Zermelo's argument for the adoption of the axiom of choice \protect\cite[p.~143]{Alcolea}.}
\label{fig:Alcolea}
\end{figure}
A good example of a mathematical critical argument is that offered by Ernst Zermelo and others for admitting the Axiom of Choice as one of the axioms of set theory. This is discussed by Jes\'{u}s Alcolea Banegas, one of the few people to specifically address the application of informal logic to mathematics \cite{Alcolea}.\footnote{I am grateful to Miguel Gimenez of the University of Edinburgh for translating this paper from the original Catalan.} I have reproduced his reconstruction of the layout of this argument as Figure~\ref{fig:Alcolea}. However, mathematical critical arguments are arguments \textit{about} mathematics, not arguments \textit{in} mathematics. For this reason, mathematical critical arguments are much like the critical arguments of any other discipline: there is nothing specifically mathematical about the warrant invoked, or any other aspect of their structure. Indeed, there cannot be, since they are appealing to the extra-mathematical in order to settle a dispute which cannot be resolved by purely mathematical means.
The characteristic content of mathematics is mathematical proof. Proofs are regular arguments---they may inspire criticism of underlying principles, but the criticism must take place outside the proof itself. In the next section we will examine how closely Toulmin's layout models the work of mathematical proof.
\subsection{Proofs}
Toulmin's own example applying his layout to mathematics is Theaetetus's proof that there are exactly five platonic solids. This result is recorded by Euclid as the final proposition of Book XIII of his \textit{Elements}, where the proof reads:
\begin{quotation} I say next that \textit{no other figure, besides the said five figures, can be constructed which is contained by equilateral and equiangular figures equal to one another}.
For a solid angle cannot be constructed with two triangles, or indeed planes.
With three triangles the angle of the pyramid is constructed, with four the angle of the octahedron, and with five the angle of the icosahedron; but a solid angle cannot be formed by six equilateral and equiangular triangles placed together at one point, for, the angle of the equilateral triangle being two thirds of a right angle, the six will be equal to four right angles: which is impossible, for any solid angle is contained by angles less than four right angles [XI.21].
For the same reason, neither can a solid angle be constructed by more than six plane angles.
By three squares the angle of the cube is contained, but by four it is impossible for a solid angle to be contained, for they will again be four right angles.
By three equilateral and equiangular pentagons the angle of the dodecahedron is contained; but by four such it is impossible for any solid angle to be contained, for, the angle of the equilateral pentagon being a right angle and a fifth, the four angles will be greater than the four right angles: which is impossible.
Neither again will a solid angle be contained by other polygonal figures by reason of the same absurdity.
\raggedleft{\sc q.e.d.} \cite[Vol.~3, pp.~507 f.]{Heath} \end{quotation}
\begin{figure}
\caption{Toulmin's analysis of Theaetetus's proof that the platonic solids are exactly five in number \protect\cite[Fig.~7.4, p.~89]{Toulmin+}.}
\label{fig:Theaetetus}
\end{figure}
Toulmin's layout of this proof, slightly adapted for consistency of notation, is reproduced as Figure~\ref{fig:Theaetetus}. We can see that it is an essentially faithful reproduction of Euclid's argument, but it will be profitable to compare the two more closely. Firstly, there is a harmless simplification, but a slight loss of rigour, in Toulmin's omission of the statement that a solid angle cannot be constructed from two planes (a consequence of Euclid's Def.\ 11, Book XI). Most of the remainder of Euclid's proof becomes Toulmin's data, $D$, the claim, $C$, is essentially the same in both presentations, and Toulmin's warrant, $W$, combines the definition of a regular convex polyhedron with the essential prior result on which the proof depends, Euclid's Prop.\ 21, Book XI. Other parts of the layout are more novel: $Q$, a characterization of the rigour of the proof, makes explicit something implicit in Euclid, and $R$ is not there at all, since on Toulmin's account the proof cannot be rebutted. All of these identifications seem reasonable, and there does not seem to be any scope for pernicious ambiguity. However, this is an elementary proof---problems may yet arise in more complicated cases.
One obvious way in which Theaetetus's proof is untypical of mathematical proofs is that it has only one step. Most proofs, indeed most proofs in Euclid, constitute a sequence of steps, from the initial premisses, through one intermediate result after another to the eventual conclusion. How might Toulmin's layout be applied to these multi-step proofs? The most thorough method would be to diagram each step of the proof separately, perhaps linking them together in a sort of sorites. If the fine detail of the proof is of particular interest this would be the best approach, but typically we would prefer something more coarse-grained. A useful maxim that applies to any modeling process is that a model should be as detailed as we need it to be, \textit{but no more so}. In most multi-step mathematical proofs the qualifier (and rebuttal) will be the same at every step. This allows us to merge the layouts for the separate steps into one, by combining the given components of the data, the warrants (and if necessary the backing) of each step to produce a single layout for the whole proof.\footnote{This is a simplification, but more technical details are not required to establish this point.} Where different steps have different qualifiers, the qualifier for the whole proof would represent the degree of certainty of the least certain step.
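The merging rule just described can be made concrete in a small sketch. All of the names below are hypothetical illustrations rather than part of Toulmin's apparatus, and the numeric ranking of qualifiers is an assumption introduced purely for the example:

```python
# Toy sketch of merging per-step Toulmin layouts into a single layout for a
# multi-step proof. Qualifier strength is modelled by a hypothetical numeric
# ranking; the merged qualifier is that of the least certain step.
QUALIFIER_RANK = {"presumably": 1, "classically": 2, "necessarily": 3}

def merge_layouts(steps):
    """Each step is a dict with 'data', 'warrant' and 'qualifier' entries."""
    return {
        "data": [d for s in steps for d in s["data"]],
        "warrant": [w for s in steps for w in s["warrant"]],
        # The whole proof is only as certain as its weakest step.
        "qualifier": min((s["qualifier"] for s in steps),
                         key=QUALIFIER_RANK.__getitem__),
    }

steps = [
    {"data": ["D1"], "warrant": ["LEM"], "qualifier": "classically"},
    {"data": [],     "warrant": ["CD"],  "qualifier": "necessarily"},
]
merged = merge_layouts(steps)
```

On this toy model, a proof with one merely classical step is classified as classical overall, exactly as the text suggests.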
\begin{figure}
\caption{Classical proof that there are irrational numbers $\alpha$ and $\beta$ such that $\alpha ^{\beta}$ is rational.}
\label{fig:root2}
\end{figure}
One source of examples of this phenomenon is the class of constructively invalid classical proofs. Characteristically, most of the steps of these proofs \textit{are} constructively valid, making it important to identify the ones that are not. Here a fine-grained application of Toulmin layouts can make the guilty steps explicit, unlike a single layout in which the qualifier for the whole proof would merely indicate that the result is classically, but not constructively, valid. For instance, Figure~\ref{fig:root2} lays out a familiar classical proof that there are irrational numbers $\alpha$ and $\beta$ such that $\alpha ^{\beta}$ is rational. (The rebuttal components have been omitted for simplicity.) This decomposition of the proof into its separate steps clearly exhibits the dependencies of each step: whereas the second step relies on the constructively acceptable inference rule of constructive dilemma (CD), the first step employs the non-constructive law of excluded middle (LEM). We can see immediately that the proof could be transformed into a constructive proof if a constructive derivation of $C_{1}$ from $D_{1}$ could be found to replace the first step. This can be done, since there is a constructive proof that $\surd 2^{\surd 2}$ is irrational: substituting this statement for $W_{1}$, and its proof for $B_{1}$, yields a wholly constructive proof.\footnote{Albeit a rather cumbersome proof, since the constructive proof that $\surd 2^{\surd 2}$ is irrational relies on the deep Gelfond-Schneider Theorem. There are much simpler constructive proofs that $\alpha ^{\beta}$ may be rational for irrational $\alpha$, $\beta$.} The Intermediate Value Theorem, which states that if $f$ is a continuous real-valued function, $u < v$ and $f(u) < m < f(v)$, then there must be a number $w$ such that $u < w < v$ and $f(w) = m$, provides a more protracted example of a non-constructive classical proof. 
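Returning to the proof laid out in Figure~\ref{fig:root2}, the arithmetic on which it turns can be exhibited numerically. This is only an illustration of the identity $(\surd 2^{\surd 2})^{\surd 2} = 2$, not a substitute for the proof:

```python
import math

# Illustration of the classical argument: let alpha = sqrt(2)^sqrt(2).
# Either alpha is rational (and we are done, with alpha = beta = sqrt(2)),
# or alpha is irrational, in which case alpha^sqrt(2) = sqrt(2)^2 = 2
# is rational. The floating-point computation merely exhibits the identity.
r2 = math.sqrt(2)
alpha = r2 ** r2      # about 1.63253; irrational by Gelfond-Schneider
result = alpha ** r2  # equals 2 in exact arithmetic
```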
The sketch of this proof laid out in Figure~\ref{fig:ivtfull} has four steps, of which three are constructive. However, the last step, that of Trichotomy, is irredeemably classical. The constructivist has to choose between strengthening the hypotheses of the proof to rule out Brouwerian counterexamples to Trichotomy, and accepting a weaker result, that $f(w)$ and $m$ are arbitrarily close, rather than equal \cite[p.~140 f.]{George+}. These examples suggest that, providing individual proof steps may be represented unambiguously, longer sequences of steps, whether represented separately or in combination, should also be unambiguous.
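The weaker constructive conclusion has a familiar computational face: bisection produces a $w$ with $f(w)$ arbitrarily close to $m$. The sketch below is illustrative only; its floating-point comparisons are decidable, whereas the constructive difficulty concerns exact real numbers, for which Trichotomy fails:

```python
def bisect(f, u, v, m, eps=1e-12):
    """Return w with f(w) close to m, given f continuous and f(u) < m < f(v).
    Mirrors the constructive content of the IVT: closeness, not equality,
    is what the method delivers."""
    assert f(u) < m < f(v)
    while v - u > eps:
        w = (u + v) / 2
        if f(w) < m:      # decidable for floats; not so for exact reals
            u = w         # the invariant f(u) < m <= f(v) is maintained
        else:
            v = w
    return (u + v) / 2

# Locate the cube root of 5 as the w with w^3 = 5 on [0, 2].
w = bisect(lambda x: x ** 3, 0.0, 2.0, 5.0)
```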
\begin{figure}
\caption{Classical proof of the Intermediate Value Theorem.}
\label{fig:ivtfull}
\end{figure}
However, there are circumstances where ambiguity of layout would seem to be a genuine risk. To begin with we shall look at another example from Euclid. In 1879, Charles Dodgson, better known as Lewis Carroll, produced a diagram exhibiting the logical interdependency of the propositions of Book I \cite[frontispiece]{Carroll}. A similarly motivated but distinct diagram for Book I (and diagrams for the other books, all inspired by the work of Ian Mueller \cite{Mueller}) is provided in the most recent scholarly edition of the \textit{Elements} \cite[p.\ 518]{Vitrac}. Both diagrams omit some detail to maintain readability, and most of the differences between the two can be explained as contrasted editing choices. However, there are some points at which they are explicitly at odds. For example, for Mueller and Vitrac Proposition I.12 follows from I.8 and I.10, whereas for Carroll I.12 follows from I.9 alone. Following Toulmin's practice of identifying the previously established propositions employed in a proof as its warrant, two separate layouts for the proof of I.12 could be constructed, with distinct warrants. One possible explanation for the difference would be that Carroll was in error, or was working from a corrupt text: Johan Ludvig Heiberg's much improved edition of the \textit{Elements} only became available in Greek in 1883 and in English in 1908. This text supports Mueller/Vitrac over Carroll \cite[Vol.~1, pp.\ 270 ff.]{Heath}. If the difference can be resolved in this way it is obvious which layout is correct.
However, there are more persistent types of disagreement, one of which may provide an alternative account of this difference. Any modeling process will make explicit ambiguities which had been ignored in the original context. For example, propositional logic forces us to choose between \mbox{$A \wedge (B \vee C)$} and \mbox{$(A \wedge B) \vee C$} as formalizations of English sentences of the form ``$A$ and $B$ or $C$''. We should be astonished to find an ambiguity of this kind in the \textit{statements} of a mathematical proof, since it would compromise the proof in a fashion any mathematician should be expected to spot. Much less attention has been paid to the \textit{dialectic} of mathematical proofs. In rendering it explicit through the formalism of the Toulmin layout we should not be surprised if we occasionally uncover a dialectical ambiguity: an argument which bears reconstruction in two distinct ways. Where such ambiguities have gone uncorrected, it is likely that the argument is sound on both reconstructions. This difference would be interesting, but benign. If one or even both of the resolutions of a dialectical ambiguity is unsound, then the close reading required for the application of the Toulmin layout has performed a genuine service in demonstrating that the proof is at best poorly phrased and at worst fallacious.
A more profound sort of disagreement may arise where there is no dispute over what the proof says, but there is a dispute over what it ought to say. Some of the best known propositions of Euclid, such as I.5, the \textit{pons asinorum}, and I.47, Pythagoras's theorem, have many independent proofs, of which Euclid's is not necessarily the most rigorous, or the clearest. Presumably, the different proofs will have different layouts, but there is no inconsistency in this. A more fundamental example from the \textit{Elements}, is the proof of Proposition I.4, that two triangles are equal if they have two sides and the enclosed angle equal, which makes use of the principle of superposition. This principle, expressed in one of Euclid's axioms, Common Notion 4, as `things which coincide with one another are equal to one another', is used sparingly by Euclid, and has long been suspected as too empirical in character for geometrical proof \cite[Vol.~1, pp.\ 224 ff.]{Heath}. Many modern axiomatizations of geometry take I.4 as an axiom, not a theorem \cite[Vol.~1, p.\ 249]{Heath}. \label{sec:I.4}
All of the sources of conflict considered so far reflect an ambiguity in the proof under analysis, either because the text of the proof is in dispute, or is ambiguously expressed, or because there are multiple distinct proofs of the same proposition. All of these cases may give rise to a choice of different layouts, but in so doing they are faithfully reproducing an ambiguity in the source material. This is a useful service, especially if a concealed ambiguity is brought to light. Could a mathematical proof be represented by fundamentally distinct layouts in some other, harmful way?
\subsection{The Four Colour Theorem}
Alcolea, whose analysis of a critical mathematical argument I reproduced in Section \ref{sec:Reg}, also has a case study of a regular mathematical argument, Kenneth Appel and Wolfgang Haken's computer assisted proof of the four colour conjecture. The conjecture states that four colours may be assigned to the regions of any planar map in such a way that no adjacent regions receive the same colour. A partial proof published by Alfred Kempe in 1879 was taken as decisive for a decade until the discovery of counterexamples demonstrating its limitations. The conjecture has proved resistant to straightforward methods of proof, and in 1976 was confirmed by a computer assisted proof, which remains controversial in some quarters since the full details are too protracted for human inspection. Alcolea reconstructs the central argument of the proof as a derivation from the data $D_{1}$--$D_{3}$ \begin{quotation} \noindent ($D_{1}$) Any planar map can be coloured with five colours.\\ ($D_{2}$) There are some maps for which three colours are insufficient.\\ ($D_{3}$) A computer has analysed every type of planar map and verified that each of them is 4-colorable. \end{quotation} of the claim $C$, that \begin{quotation} \noindent ($C$) Four colours suffice to colour any planar map. \end{quotation} by employment of the warrant $W$, \begin{quotation} \noindent ($W$) The computer has been properly programmed and its hardware has no defects. \end{quotation} which has backing $B$ \begin{quotation} \noindent ($B$) Technology and computer programming are sufficiently reliable. \cite[pp.~142f.]{Alcolea} \end{quotation} We can see at a glance that the warrant and backing of this proof are very different from those of Theaetetus's proof. The dependence on apparently extra-mathematical methods is made explicit. Alcolea draws the moral that the proof lacks mathematical rigour, and may even hide an unforeseen counterexample \cite[p.~143]{Alcolea}.
However, as I have argued elsewhere, this is not the only way of representing Appel and Haken's proof within Toulmin's layout \cite{Aberdein}. We might also represent it as: \begin{quotation} Given that ($D$) the elements of the set $U$ are reducible, we can ($Q$) almost certainly claim that ($C$) four colours suffice to colour any planar map, since ($W$) $U$ is an unavoidable set (on account of ($B$) conventional mathematical techniques), unless ($R$) there has been an error in either (i) our mathematical reasoning, or (ii) the hardware or firmware of all the computers on which the algorithm establishing $D$ has been run. \end{quotation} This analysis requires some unpacking. I have gone further into the details of the proof than Alcolea, in the hope of making the nature of its dependency on the computer precise. The concepts of unavoidability and reducibility originate with Kempe's first, unsuccessful attempt to prove the conjecture. An unavoidable set is a set of configurations---regions or clusters of neighbouring regions---such that every planar map must contain at least one member. A configuration is reducible if it can be shown that all planar maps containing that configuration are four-colourable. Kempe correctly demonstrated that the two-sided, three-sided, four-sided and five-sided regions together constitute an unavoidable set. He also believed that he had shown all of these configurations to be reducible, which would have proved the conjecture. Unfortunately, the five-sided region is not reducible. Appel and Haken's proof is similar in nature to Kempe's, but much greater in scale: they derived an unavoidable set of reducible configurations with 1,482 members. Although the set was found by a computer search, its unavoidability could be verified by hand. However, confirmation of the reducibility of each of its members is too formidable a task for human proof checking. 
Our confidence in the proof rests on the reliability of the methods programmed into the computer.
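For a concrete sense of the claim $C$, a toy backtracking colourer suffices for small maps. This is purely illustrative: Appel and Haken's proof does not colour maps one at a time, but establishes the reducibility of an unavoidable set of configurations:

```python
def four_colour(adjacency):
    """Backtracking search for a proper four-colouring of a graph given as
    an adjacency dict. Feasible only for tiny maps; shown purely to
    illustrate what the claim C asserts."""
    nodes = list(adjacency)
    colours = {}

    def extend(i):
        if i == len(nodes):
            return True
        v = nodes[i]
        for c in range(4):
            # try colour c if no already-coloured neighbour uses it
            if all(colours.get(u) != c for u in adjacency[v]):
                colours[v] = c
                if extend(i + 1):
                    return True
                del colours[v]
        return False

    return dict(colours) if extend(0) else None

# K4 (four mutually adjacent regions) is planar and needs all four colours.
k4 = {v: [u for u in range(4) if u != v] for v in range(4)}
colouring = four_colour(k4)
```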
\subsection{Refutations}
The fundamental difference between these two layouts of the Four Colour Theorem lies in the rebuttal component, $R$. Alcolea does not offer one, in accordance with Toulmin's injunction that `a mathematical inference \dots\ leaves no room for ``exceptions'' or ``rebuttals''; and indeed to raise the question of possible rebuttals would be to challenge the status of the entire argument' \cite[p.~254]{Toulmin+}. On my reconstruction, the rebuttal forms the largest part of the layout. What has happened here?
As the second part of this quotation from Toulmin makes clear, his account of rebuttal is not intended to permit challenges to the soundness of the argument, but rather to index possible exceptions allowed for by the choice of qualifier. This distinction between the different ways in which an argument may be defeated has been developed at length in various places, often as a distinction between rebuttals and undercutters.\footnote{What follows is loosely derived from John Pollock's account. He offers a general definition of a defeater as ``If $P$ is a reason for $S$ to believe $Q$, $R$ is a defeater for this reason if and only if $R$ is logically consistent with $P$ and $(P \wedge R)$ is not a reason for $S$ to believe $Q$'', and then defines rebutting and undercutting defeaters as: \begin{quotation} If $P$ is a \textit{prima facie} reason for $S$ to believe $Q$, $R$ is a \textit{rebutting} defeater for this reason if and only if $R$ is a defeater (for $P$ as a reason for $S$ to believe $Q$) and $R$ is a reason for $S$ to believe not-$Q$;
If $P$ is a \textit{prima facie} reason for $S$ to believe $Q$, $R$ is an \textit{undercutting} defeater for this reason if and only if $R$ is a defeater (for $P$ as a reason for $S$ to believe $Q$) and $R$ is a reason to deny that $P$ would not be true unless $Q$ were true \cite[pp.\ 38 f.]{Pollock}. \end{quotation}} Intuitively, an argument may be rebutted by offering independent reasons to disbelieve its conclusion, or undercut by challenging its soundness. Most species of argument may be rebutted, undercut, or both. In empirical science both defeaters occur, but as the result of very different work: an empirical study could be rebutted by an independent study, or undercut by a challenge to its methodology. A paper which does the former might also do the latter, but need not. Mathematics is different: proofs may be rebutted and undercut or just undercut, but not just rebutted. Kempe's failed proof of the Four Colour Conjecture was undercut when counterexamples to it were discovered, but not rebutted since the counterexamples did not challenge the conjecture, which eventually turned out to be true. Other failed proofs have been undercut and rebutted, when it has transpired that the `theorem' in question is not merely unproven but false. But this is the only way a proof can be rebutted. While we could accept that two empirical studies had conflicting results, despite both having sound methodologies, we could not accept that a mathematical conjecture could be independently proven to be both true and false: at least one of the `proofs' must fail.
Toulmin, however, does not admit undercutters into his layout. Hence, the only sort of defeater he can accept is rebuttal without undercutting, which is precisely the sort of defeater which mathematical proofs cannot have. In the light of his own definitions, Toulmin is right to say that mathematical proofs cannot be rebutted. But should we accept these definitions? Toulmin's motivation seems to be the thought that an undercutter cannot be part of a regular argument, but must instead reopen the issue `from a new, critical standpoint' \cite[p.~254]{Toulmin+}. Is this what happens when a proof is undercut?
Undercutters for mathematical proofs have been described as `inferential gaps' \cite[p.\ 51]{Fallis}. In terms of my application of Toulmin's layout to multi-step proofs, an inferential gap would be a step in the proof in which the data and warrant do not support the claim. The data represents the point the proof had reached when the (first) gap arises, the claim the point from which the remainder of the proof proceeds, and the warrant contains the mathematical results and definitions used to derive the claim from the data. The failure of this inferential step might be attributable to the failure of the warrant. Something like this happens when standards of rigour change, as in the criticism of Euclid's proposition I.4 discussed in Section \ref{sec:I.4}. This would indeed open up a new critical argument. However, what is far more typical of inferential gaps is that there is nothing wrong with the warrant---it just does not warrant the claim on the basis of the data. It may be that the warrant needs to be supplemented, or that the claim is false, because the proof has been rebutted \textit{and} undercut, or has simply taken a wrong turning. These are not grounds for a critical argument `challeng[ing] the credentials of current ideas', but evidence of human error in the conduct of a regular argument. In the light of this observation, it seems more helpful to treat all defeaters alike, and admit them all into Toulmin's layout, generalizing his conception of `rebuttal'.\footnote{Several contemporary applications of Toulmin's work take this step.}
We can now understand the difference between the two reconstructions of the four colour theorem. Alcolea sticks to Toulmin's narrow account of rebuttal, and produces a reconstruction with an emphatically non-mathematical warrant. I offered instead a layout with a mathematical warrant, but a slightly more cautious qualifier than normal, admitting of two sources of rebuttal: human error and computer error. I have argued elsewhere that in this case the likelihood of the former is orders of magnitude greater than the likelihood of the latter, and that this is why mathematicians are right to regard the proof with as much confidence as they do more orthodox proofs \cite{Aberdein}.
The underlying problem here is that of rational reconstruction, a familiar one in the history and philosophy of mathematics. Mathematical textbooks unselfconsciously rewrite the proofs of past mathematicians, happily introducing anachronistic notation and standards of rigour. Historians try to be more sensitive, but there is often a tension between doing justice to the context of discovery, by reproducing the twists and turns which led to the result, and doing justice to the context of justification by providing a sound proof \cite[p.\ 159]{Polya}. This tension provides sufficient room for substantive disagreement about how best to reconstruct contentious proofs. Toulmin's layout may bring these disagreements into sharper focus, but it can hardly be blamed for them. As with the simpler sources of ambiguity discussed in Section \ref{sec:I.4}, the layout's capacity to represent proofs in different ways turns out not to be a handicap, but perhaps its most valuable feature.
\def\rm REFERENCES{\rm REFERENCES} \setlength{\bibhang}{\parindent}
\end{document}
\begin{document}
\title{\textbf{Approximation Strategies for Generalized Binary Search in Weighted Trees}}
\begin{abstract} We consider the following generalization of the binary search problem. A search strategy is required to locate an unknown target node $t$ in a given tree $T$. Upon querying a node $v$ of the tree, the strategy receives as a reply an indication of the connected component of $T\setminus\{v\}$ containing the target $t$. The cost of querying each node is given by a known non-negative weight function, and the considered objective is to minimize the total query cost for a worst-case choice of the target.
Designing an optimal strategy for a weighted tree search instance is known to be strongly NP-hard, in contrast to the unweighted variant of the problem which can be solved optimally in linear time. Here, we show that weighted tree search admits a quasi-polynomial time approximation scheme (QPTAS): for any $0 < \varepsilon < 1$, there exists a $(1+\varepsilon)$-approximation strategy with a computation time of $n^{O(\log n / \varepsilon^2)}$. Thus, the problem is not APX-hard, unless $NP \subseteq DTIME(n^{O(\log n)})$. By applying a generic reduction, we obtain as a corollary that the studied problem admits a polynomial-time $O(\sqrt{\log n})$-approximation. This improves previous $\hat O(\log n)$-approximation approaches, where the $\hat O$-notation disregards $O(\mathrm{poly}\log\log n)$-factors. \end{abstract}
\noindent \textbf{Key Words:} Approximation Algorithm; Adaptive Algorithm; Graph Search; Binary Search; Vertex Ranking; Trees
\section{Introduction}
In this work we consider a generalization of the fundamental problem of searching for an element in a sorted array. This problem can be seen, using graph-theoretic terms, as a problem of searching for a target node in a path, where each query reveals on which `side' of the queried node the target node lies. The generalization we study is two-fold: a more general structure of a tree is considered, and we assume non-uniform query times. Thus, our problem can be stated as follows. Given a node-weighted input tree $T$ (in which the query time of a node is provided as its weight), design a search strategy (sometimes called a decision tree) that locates a hidden \emph{target node} $x$ by asking \emph{queries}. Each query selects a node $v$ in $T$ and, after time equal to the weight of the selected node, a reply is given: the reply is either `yes', which implies that $v$ is the target node and thus the search terminates, or it is `no', in which case the search strategy receives the edge outgoing from $v$ that belongs to the path between $v$ and the target node $x$. The goal is to design a search strategy that locates the target node and minimizes the search time in the worst case.
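The model can be made concrete in a short sketch. The helper names below are hypothetical, and for simplicity queries are restricted to the current candidate set, which guarantees progress but is a simplification of the general model; the brute-force recursion is feasible only for tiny trees:

```python
from collections import deque

def reply(adj, v, target):
    """None if v is the target; otherwise the neighbour of v lying on the
    path from v towards the target (identifying the component of T - v)."""
    if v == target:
        return None
    parent = {v: None}
    q = deque([v])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in parent:
                parent[w] = u
                q.append(w)
    node = target
    while parent[node] != v:      # walk up from the target to just below v
        node = parent[node]
    return node

def opt_cost(adj, weight, candidates):
    """Worst-case cost of an optimal strategy over a candidate set
    (exponential brute force, queries restricted to candidates)."""
    if len(candidates) <= 1:
        return 0
    best = float("inf")
    for v in candidates:
        groups = {}               # partition candidates by the reply to v
        for t in candidates:
            r = reply(adj, v, t)
            if r is not None:
                groups.setdefault(r, []).append(t)
        worst = max((opt_cost(adj, weight, g) for g in groups.values()),
                    default=0)
        best = min(best, weight[v] + worst)
    return best

path = {0: [1], 1: [0, 2], 2: [1]}
cost = opt_cost(path, {0: 1, 1: 1, 2: 1}, [0, 1, 2])
```

On a three-node path with unit weights a single query of the middle node suffices, while heavy middle weights push the optimum towards querying the endpoints.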
The vertex search problem is more general than its `edge variant' that has been more extensively studied. In the latter problem one selects an edge $e$ of an edge-weighted tree $T=(V,E,w)$ in a query and learns in which of the two components of $T-e$ the target node is located. Indeed, this edge variant can be reduced to our problem as follows: first assign a `large' weight to each node of $T$ (for example, one plus the sum of the weights of all edges in the graph) and then subdivide each edge $e$ of $T$ giving to the new node the weight of the original edge, $w(e)$. It is apparent that an optimal search strategy for the new node-weighted tree should never query the nodes with large weights, thus immediately providing a search strategy for the edge variant of $T$.
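The reduction just described can be sketched directly; the function name below is hypothetical:

```python
def edge_to_node_instance(n, edges):
    """Reduce the edge variant to the node variant: subdivide each edge
    (u, v, w), giving the new node weight w, and give every original node
    a prohibitively large weight (one plus the sum of all edge weights).
    edges: list of (u, v, w) tuples over nodes 0..n-1."""
    big = 1 + sum(w for _, _, w in edges)
    adj = {v: [] for v in range(n)}
    weight = {v: big for v in range(n)}
    nxt = n                       # fresh labels for subdivision nodes
    for u, v, w in edges:
        adj[nxt] = [u, v]
        adj[u].append(nxt)
        adj[v].append(nxt)
        weight[nxt] = w
        nxt += 1
    return adj, weight

# Path 0-1-2 with edge weights 2 and 5 becomes a 5-node tree in which only
# the two subdivision nodes are worth querying.
adj, weight = edge_to_node_instance(3, [(0, 1, 2), (1, 2, 5)])
```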
We also point out that the considered problem, as well as the edge variant, being quite fundamental, were historically introduced several times under different names: minimum height elimination trees \cite{Pothen88}, ordered colourings \cite{KatchalskiMS95}, node and edge rankings \cite{IyerRV88}, tree-depth \cite{NesetrilM06} or LIFO-search \cite{GiannopoulouHT12}.
Table~\ref{tab:res} summarizes the complexity status of the node-query model (in case of unweighted paths in both cases the solution is the classical binary search algorithm) and places our result in the general context.
\setlength\dashlinedash{0.2pt} \setlength\dashlinegap{1.5pt} \setlength\arrayrulewidth{0.3pt} \bgroup\def\arraystretch{1.5}
\begin{table}[htb] \caption{Computational complexity of the search problem in different graph classes, including our results for weighted trees. Completeness results refer to the decision version of the problem.} \label{tab:res}
{ \small \centering \begin{tabular}{rcc} \toprule \emph{Graph class} & \emph{Unweighted} & \emph{Weighted} \\ \midrule Paths: & exact in $O(n)$ time & exact in $O(n^2)$ time \cite{CicaleseJLV12} \\ \cdashline{2-3}
\multirow{3}{*}{Trees:} & \multirow{3}{*}{exact in $O(n)$ time \cite{OnakP06,Schaffer89}} & strongly NP-complete \cite{DereniowskiN06} \\
& &$(1+\varepsilon)$-approx. in $n^{O(\log n / \varepsilon^2)}$ time (Thm.~\ref{thm:qptas}) \\
& & $O(\sqrt{\log n})$-approx. in poly-time (Thm.~\ref{thm:recursive-algo})\\ \cdashline{2-3}
\multirow{2}{*}{Undirected:} & exact in $n^{O(\log n)}$ time \cite{Emamjomeh-ZadehKS16} & PSPACE-complete \cite{Emamjomeh-ZadehKS16} \\ & $O(\log n)$-approx. in poly-time \cite{Emamjomeh-ZadehKS16} & $O(\log n)$-approx. in poly-time \cite{Emamjomeh-ZadehKS16}\\
\cdashline{2-3} Directed: & PSPACE-complete \cite{Emamjomeh-ZadehKS16} & PSPACE-complete \cite{Emamjomeh-ZadehKS16}\\ \bottomrule \end{tabular} } \end{table}
\subsection {State-of-the-Art}
In this work we focus on the worst-case search time for a given input graph; we only remark that other optimization criteria have also been considered \cite{CicaleseJLM11,LaberMP02,LaberM11,SzwarcfiterNBOCZ03}. For other closely related models and corresponding results see e.g. \cite{ArkinMMRS98,HeeringaIT11,LaberN04,LinialS85,Steiner87}.
\subparagraph{The node-query model.} An optimal search strategy can be computed in linear time for an unweighted tree \cite{OnakP06,Schaffer89}. The number of queries performed in the worst case may vary from constant (for a star one query is enough) to at most $\log_2 n$ for any tree \cite{OnakP06} (by always querying a node that halves the search space). Several further results have been obtained in \cite{Emamjomeh-ZadehKS16}. First, it turns out that $\log_2 n$ queries are always sufficient for general simple graphs, which implies an $O(m^{\log_2 n}n^2\log n)$-time optimal algorithm for arbitrary unweighted graphs. The algorithm which performs $\log_2 n$ queries also serves as an $O(\log n)$-approximation algorithm, also for the weighted version of the problem. (We remark that in the weighted case, the algorithms in~\cite{Emamjomeh-ZadehKS16} sometimes have an approximation ratio of $\Theta(\log n)$, even in the tree scenario we study in this work.) On the other hand, it is shown in the same work that an optimal algorithm (for the unweighted case) with a running time of $O(n^{o(\log n)})$ would contradict the Exponential Time Hypothesis\InJournal{, and for $\varepsilon>0$, a running time of $O(m^{(1-\varepsilon)\log n})$ would contradict the Strong Exponential Time Hypothesis}. When weighted graphs are considered, the problem becomes PSPACE-complete. \InJournal{A generalization to directed graphs also turns out to be PSPACE-complete.}
\InJournal{ We also refer the interested reader to further works that consider a probabilistic version of the problem, where the answer to a query is correct with some probability $p>\frac{1}{2}$ \cite{Ben-OrH08,Emamjomeh-ZadehKS16,FeigeRPU94,KarpK07}. In particular, for any $p>\frac{1}{2}$ and any undirected unweighted graph, a search strategy can be computed that finds the target node with probability $1-\delta$ using $(1-\delta)\frac{\log_2n}{1-H(p)}+o(\log n)+O(\log^2\frac{1}{\delta})$ queries in expectation, where $H(p)=-p\log_2p-(1-p)\log_2(1-p)$ is the entropy function. See \cite{RivestMKWS80} for a model in which a fixed number of queries can be answered incorrectly during a binary search. }
\subparagraph{The edge-query model.} In the case of unweighted trees, an optimal search strategy can be computed in linear time \cite{LamY01,MozesOW08}. (See~\cite{Dereniowski08} for a correspondence between edge rankings and the searching problem.) The computational complexity of the problem for weighted trees has attracted a lot of attention. On the negative side, it has been proved that computing an optimal search strategy is strongly NP-hard for bounded-diameter trees \cite{Dereniowski06}, which has been strengthened by showing hardness for several specific topologies: trees of diameter at most 6, trees of degree at most 3 \cite{CicaleseJLV12}, and spiders \cite{CicaleseKLPV14} (trees having at most one node of degree greater than two). On the other hand, polynomial-time algorithms exist for weighted trees of diameter at most 5 and for weighted paths \cite{CicaleseJLV12}. We note that for weighted paths there exists a linear-time but approximate solution given in \cite{LaberMP02}. As for approximate polynomial-time solutions, a simple $O(\log n)$-approximation has been given in \cite{Dereniowski06} and an $O(\log n/\log \log \log n)$-approximate solution in \cite{CicaleseJLV12}. The best known approximation ratio has since been improved to $O(\log n/\log\log n)$ in \cite{CicaleseKLPV14}.
Some bounds on the number of queries for unweighted trees have been developed. Observe that an optimal search strategy needs to perform at least $\log_2 n$ queries in the worst case, since each query reveals at most one bit of information. However, there exist trees of maximum degree $\Delta$ that require $\Delta\log_{\Delta+1}n$ queries \cite{Ben-AsherF97}. On the other hand, $O(\Delta\log n)$ queries are always sufficient for each tree \cite{Ben-AsherF97}; this bound has been successively improved to $(\Delta+1)\log_{\Delta}n$ \cite{LaberN01}, $\Delta\log_{\Delta} n$ \cite{DereniowskiK06} and $1+\frac{\Delta-1}{\log_2(\Delta+1)-1}\log_2 n$ \cite{Emamjomeh-ZadehKS16}.
\InJournal{\subparagraph{Searching partial orders.} The problem of searching a partial order with uniform query times is NP-complete even for partial orders with maximum element and bounded height Hasse diagram \cite{CarmoDKL04,Dereniowski08}. For some algorithmic solutions for random partial orders see \cite{CarmoDKL04}. For a given partial order $P$ with maximum element, an optimal solution can be obtained by computing a branching $B$ (a directed spanning tree with one target) of the directed graph representing $P$ and then finding a search strategy for the branching, as any search strategy for $B$ also provides a feasible search for $P$ \cite{Dereniowski08}. Since computing an optimal search strategy for $B$ can be done efficiently (through the equivalence to the edge-query model), finding the right branching is a challenge. This approach has been used in \cite{Dereniowski08} to obtain an $O(\log n/\log\log n)$-approximation polynomial time algorithm for partial orders with a maximum element.
We remark that searching a partial order with a maximum element or with a minimum element are essentially quite different. For the latter case a linear-time algorithm with additive error of 1 has been given in \cite{OnakP06}. As observed in \cite{Dereniowski08}, the problem of searching in tree-like partial orders with a maximum element (which corresponds to the edge-query model in trees) is equivalent to the edge ranking problem.}
\subsection{Organization of the Paper} The aim of Section~\ref{sec:preliminaries} is to give the necessary notation and a formal statement of the problem (Sections~\ref{sec:querymodel} and~\ref{sec:strategydef}) and to provide two different but equivalent problem formulations that will be more convenient for our analysis. As opposed to the classical problem formulation in which a strategy is seen as a \emph{decision tree}, Section~\ref{sec:querysequences} restates the problem in such a way that with each vertex $v$ of the input tree we associate a sequence of vertices that need to be iteratively queried when $v$ is the root of the current subtree that contains the target node. In Section~\ref{sec:schedules} we extend this approach by associating with each vertex a sequence of not only vertices to be queried but also time points of the queries.
The latter problem formulation is suitable for a dynamic programming algorithm provided in Section~\ref{sec:qptas}. In this section we introduce an auxiliary, slightly modified measure of the cost of a search strategy. First we provide a quasi-polynomial time dynamic programming scheme that provides an arbitrarily good approximation of the output search strategy with respect to this modified cost (the analysis is deferred to Section~\ref{sec:dp}), and then we prove that the new measure is sufficiently close to the original one (the analysis is deferred to Section~\ref{sec:blackbox}). These two facts provide the quasi-polynomial time scheme for the tree search problem, achieving a $(1+\varepsilon)$-approximation with a computation time of $n^{O(\log n / \varepsilon^2)}$, for any $0 < \varepsilon < 1$.
In Section~\ref{sec:approx} we observe how to use the above algorithm to derive a polynomial-time $O(\sqrt{\log n})$-approximation algorithm for the tree search problem. This is done by a divide and conquer approach: a sufficiently small subtree $T^*$ of the input tree $T$ is first computed so that the quasi-polynomial time algorithm runs in polynomial (in the size of $T$) time for $T^*$. This decomposes the problem: having a search strategy for $T^*$, the search strategies for $T-T^*$ are computed recursively. \InJournal{Details of the approach are provided in Section~\ref{sec:algosqrt}.}
\section{Preliminaries} \label{sec:preliminaries}
\subsection{Notation and Query Model} \label{sec:querymodel}
We now recall the problem of searching for an unknown target node $x$ by performing queries on the vertices of a given node-weighted rooted tree $T=(V,E,w)$ with weight function $w\colon V\to\mathbb{R}_+$. Each \emph{query} selects one vertex $v$ of $T$ and after $w(v)$ time units receives an answer: either the query returns \emph{true}, meaning that $x=v$, or it returns the neighbor $u$ of $v$ which lies closer to the target $x$ than $v$. Since the queried graph $T$ is a tree, such a neighbor $u$ is unique; equivalently, $u$ is the unique neighbor of $v$ belonging to the same connected component of $T\setminus \{v\}$ as $x$.
All trees we consider are rooted. Given a tree $T$, the root is denoted by $\root{T}$. For a node $v\in V$, we denote by $T_v$ the subtree of $T$ rooted at $v$. For any subset $V' \subseteq V$ (respectively, $E'\subseteq E$) we denote by $T[V']$ (resp., $T[E']$) the minimal subtree of $T$ containing all nodes from $V'$ (resp., all edges from $E'$). For $v\in V$, $N(v)$ is the set of neighbors of $v$ in $T$.
For $U\subseteq V$ and a target node $x\notin U$, there exists a unique maximal subtree of $T \setminus U$ that contains $x$; we will denote this subtree by $\queriedSubtree{T}{U}{x}$.
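As an illustration, the vertex set of $\queriedSubtree{T}{U}{x}$ can be computed by a traversal from $x$ that never enters a queried vertex. The following minimal Python sketch is our own (the names and the adjacency-dict encoding of the tree are assumptions for illustration only):

```python
def remaining_subtree(adj, U, x):
    # vertex set of the component of T - U that contains the target x
    seen, stack = {x}, [x]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in U and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen
```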
We denote $|V|=n$. We will assume w.l.o.g.\ that the maximum weight of a vertex is normalized to $1$. (This normalization is immediately obtained by a proportional scaling of all units of cost.) We will also assume w.l.o.g.\ that the weight function satisfies the following \emph{star condition}: \ifConferenceVersion $ \else $$\fi \text{for all $v\in V$, } w(v) \leq \sum_{u \in N(v)} w(u). \ifConferenceVersion $ \else $$\fi Observe that if this condition is not fulfilled, i.e., $w(v) > \sum_{u \in N(v)} w(u)$ for some vertex $v$, then $v$ is never queried by any optimal strategy, since a query to $v$ can be replaced by a sequence of queries to all neighbors of $v$, obtaining no less information at strictly smaller cost. In general, given an instance which does not satisfy the star condition, we enforce it by performing all necessary weight replacements $w(v) \gets \min \{w(v), \sum_{u \in N(v)} w(u)\}$, for $v \in V$.
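The enforcement of the star condition can be sketched as a simple fixpoint iteration (a hypothetical Python helper of our own; since weights only ever decrease, repeated passes over the vertices suffice):

```python
def enforce_star_condition(adj, w):
    # repeatedly apply w(v) <- min(w(v), sum of neighbours' weights)
    changed = True
    while changed:
        changed = False
        for v in adj:
            cap = sum(w[u] for u in adj[v])
            if w[v] > cap:
                w[v] = cap
                changed = True
    return w
```

For example, a star with a center of weight 10 and three unit-weight leaves gets its center weight reduced to 3.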
For $a, \omega \in \mathbb R_{\geq 0}$, we denote the rounding of $a$ down (up) to the nearest multiple of $\omega$ as $\lfloor a \rfloor_{\omega} = \omega\lfloor a/\omega\rfloor$ and $\lceil a \rceil_{\omega} = \omega\lceil a/\omega\rceil$, respectively.
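In code, these rounding operators amount to the following trivial Python sketch (function names are ours):

```python
import math

def round_down(a, omega):   # \lfloor a \rfloor_omega
    return omega * math.floor(a / omega)

def round_up(a, omega):     # \lceil a \rceil_omega
    return omega * math.ceil(a / omega)
```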
\subsection{Definition of a Search Strategy} \label{sec:strategydef}
\emph{A search strategy} $\mathcal{A}$ for a rooted tree $T=(V,E,w)$ is an adaptive algorithm which defines successive queries to the tree, based on responses to previous queries, with the objective of locating the target vertex in a finite number of steps. Note that search strategies can be seen as decision trees in which each node represents a subset of vertices of $T$ that contains $x$, with leaves representing singletons consisting of $x$.
Let $\textup{\texttt{Q}}_\mathcal{A}(T,x)$ be the time-ordering (sequence) of queries performed by strategy $\mathcal{A}$ on tree $T$ to find a target vertex $x$, with $\textup{\texttt{Q}}_{\mathcal{A},i}(T,x)$ denoting the $i$-th queried vertex in this time ordering, $1\leq i \leq |\textup{\texttt{Q}}_\mathcal{A}(T,x)|$.
We denote by \ifConferenceVersion $ \else $$\fi
\textup{\texttt{COST}}_\mathcal{A}(T,x) = \sum_{i=1}^{|\textup{\texttt{Q}}_\mathcal{A}(T,x)|} w(\textup{\texttt{Q}}_{\mathcal{A},i}(T,x)) \ifConferenceVersion $ \else $$\fi the sum of weights of all vertices queried by $\mathcal{A}$ with $x$ being the target node, i.e., the time after which $\mathcal{A}$ finishes. Let \ifConferenceVersion $ \else $$\fi \textup{\texttt{COST}}_\mathcal{A}(T)=\max_{x\in V}\textup{\texttt{COST}}_\mathcal{A}(T,x) \ifConferenceVersion $ \else $$\fi be the \emph{cost of $\mathcal{A}$}. We define the \emph{cost of $T$} to be \ifConferenceVersion $ \else $$\fi \opt{T}=\min\{\textup{\texttt{COST}}_\mathcal{A}(T)\hspace{0.1cm}\bigl|\bigr.\hspace{0.1cm} \mathcal{A}\textup{ is a search strategy for }T\}. \ifConferenceVersion $ \else $$\fi We say that a search strategy is \emph{optimal} for $T$ if its cost equals $\opt{T}$.
As a consequence of normalization and the star condition, we have the following bound.
\begin{observation} \label{obs:opt-bounds} For any tree $T$, we have $1\leq\opt{T}\leq \lceil \log_2 n \rceil$. \end{observation} \InJournal{ \begin{proof} By the star condition, considering any vertex $v \in V$ as the target, we trivially have \[\opt T \geq \inf_\mathcal{A} \textup{\texttt{COST}}_{\mathcal{A}} (T,v) \geq \inf_\mathcal{A} \textup{\texttt{COST}}_{\mathcal{A}} (T [\{v\} \cup N(v)],v) \geq w(v).\]
Thus, $\opt T \geq \max_{v\in V} w(v) = 1$, which gives the first inequality.
For the second inequality, we observe that applying to tree $T$ the optimal search strategy for unweighted trees, we can locate the target in at most $\lceil \log_2 n \rceil$ queries (cf.~e.g.~\cite{KatchalskiMS95,OnakP06}). Since the cost of each query is at most $1$, the claim follows. \end{proof} } \InConference{All omitted proofs are provided in the Appendix.}
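As a sanity check of the upper bound in Observation~\ref{obs:opt-bounds}, the following Python sketch (our own illustration; trees are encoded as adjacency dicts) simulates the classical halving strategy on an unweighted tree: repeatedly query a centroid of the current search subtree, so that each reply at least halves the search space.

```python
def component(adj, removed, start):
    # vertices reachable from `start` without passing through `removed`
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v != removed and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def halving_search(adj, x):
    # repeatedly query a centroid of the current search subtree; each reply
    # at least halves the search space, giving at most ceil(log2 n) queries
    cand, queries = set(adj), 0
    while len(cand) > 1:
        sub = {u: [v for v in adj[u] if v in cand] for u in cand}
        v = min(cand, key=lambda u: max(len(component(sub, u, n)) for n in sub[u]))
        queries += 1
        if v == x:
            return queries
        nb = next(n for n in sub[v] if x in component(sub, v, n))
        cand = component(sub, v, nb)
    return queries
```

On a path with $8$ vertices, every target is located with at most $\log_2 8 = 3$ queries.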
We also introduce the following notation. If the first $\card{U}$ vertices queried by a search strategy $\mathcal{A}$ are exactly the vertices in $U$, i.e., $U = \{\textup{\texttt{Q}}_{\mathcal{A},i}(T,x) : 1\leq i \leq |U|\}$, then we say that $\mathcal{A}$ \emph{reaches $\queriedSubtree{T}{U}{x}$ through $U$}, and $w(U)$ is the \emph{cost of reaching $\queriedSubtree{T}{U}{x}$ by $\mathcal{A}$}. We also say that we receive an `up' reply to a query to a vertex $v$ if the root of the tree remaining to be searched is unchanged by the query, i.e., $r(\queriedSubtree{T}{U}{x}) = r(\queriedSubtree{T}{U \cup \{v\}}{x})$, and we call the reply a `down' reply when the root of the remaining tree changes, i.e., $r(\queriedSubtree{T}{U}{x}) \neq r(\queriedSubtree{T}{U \cup \{v\}}{x})$. Without loss of generality, after having performed a sequence of queries $U$, we can assume that the tree $\queriedSubtree{T}{U}{x}$ is known to the strategy.
\subsection{Query Sequences and Stable Strategies} \label{sec:querysequences}
By a slight abuse of notation, we will call a search strategy \emph{polynomial-time} if it can be implemented using a dynamic (adaptive) algorithm which computes the next queried vertex in polynomial time.
We give most of our attention herein to search strategies in trees which admit a natural (non-adaptive, polynomial-space) representation called a \emph{query sequence assignment}. Formally, for a rooted tree $T$, the \emph{query sequence assignment} $S$ is a function $S : V \to V^*$, which assigns to each vertex $v\in V$ an ordered sequence of vertices $S(v)$, known as the \emph{query sequence} of $v$. The query sequence assignment directly induces a strategy $\mathcal{A}_{S}$, presented as Algorithm~\ref{alg:As}. Intuitively, the strategy processes successive queries from the sequence $S(v)$, where $v$ is the root vertex of the current search tree, $v = r(\queriedSubtree{T}{U}{x})$, where $U$ is the set of queries performed so far. This processing is performed in such a way that the strategy iteratively takes the first vertex in $S(v)$ that belongs to $\queriedSubtree{T}{U}{x}$ and queries it. As soon as the root of the search tree changes, the procedure starts processing queries from the sequence of the new root, which belong to the remaining search tree. The procedure terminates as soon as $\queriedSubtree{T}{U}{x}$ has been reduced to a single vertex, which is necessarily the target $x$.
\begin{algorithm} \small \caption{Search strategy $\mathcal{A}_S$ for a query sequence assignment $S$} \label{alg:As} \begin{algorithmic}[1]
\State $v \gets r(T)$ \quad // stores current root \State $U \gets \emptyset$
\While{$|\queriedSubtree{T}{U}{x}|>1$} \For {$u\in S(v)$} \If {$u\in \queriedSubtree{T}{U}{x}$} \quad // $u$ is the first vertex in $S(v)$ that belongs to $\queriedSubtree{T}{U}{x}$ \State \Call{QueryVertex}{$u$} \State $U \gets U\cup \{u\}$ \If{$v\neq r(\queriedSubtree{T}{U}{x})$} \quad // query reply is `down' \State $v\gets r(\queriedSubtree{T}{U}{x})$\label{alg:As:linerootchange} \State \textbf{break} \quad // for loop \EndIf \EndIf \EndFor \EndWhile
\end{algorithmic} \end{algorithm}
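For concreteness, the behaviour of Algorithm~\ref{alg:As} can be simulated directly. The following Python sketch is our own illustration (trees given as adjacency dicts, $S$ as a dict of lists); it mimics the pseudocode, including the root change on a `down' reply:

```python
def restrict(adj, cand):
    # adjacency of the subtree induced on the current candidate vertex set
    return {u: [v for v in adj[u] if v in cand] for u in cand}

def component(adj, removed, start):
    # vertices reachable from `start` without passing through `removed`
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v != removed and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def run_strategy(adj, root, S, x):
    # simulate A_S with target x; returns the list of queried vertices
    cand, v, queried = set(adj), root, []
    while len(cand) > 1:
        advanced = False
        for u in S[v]:
            if u not in cand:
                continue
            queried.append(u)
            if u == x:                          # the query returns true
                return queried
            sub = restrict(adj, cand)
            nb = next(n for n in sub[u] if x in component(sub, u, n))
            cand = component(sub, u, nb)        # remaining search subtree
            advanced = True
            if v not in cand:                   # 'down' reply: the root changes
                v = nb
                break
        if not advanced:
            raise RuntimeError("invalid sequence assignment")
    return queried
```

On the path $0$--$1$--$2$ rooted at $0$ with $S(0)=(1)$, every target is located after the single query to vertex $1$.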
In what follows, in order to show that our approximation strategies are polynomial-time, we will confine ourselves to presenting a polynomial-time algorithm which outputs an appropriate sequence assignment.
A sequence assignment is called \emph{stable} if replacing line~\ref{alg:As:linerootchange} in Algorithm~\ref{alg:As} by any assignment of the form $v \gets v''$, where $v''$ is an arbitrary vertex promised to lie on the path from $\root{\queriedSubtree{T}{U}{x}}$ to the target $x$, always results in a strategy which performs a (not necessarily strict) subsequence of the queries performed by the original strategy $\mathcal{A}_S$. Sequence assignments computed on trees in a bottom-up fashion usually have the stability property; we provide a proof of stability for one of our main routines in Section~\ref{sec:dp}.
Without loss of generality, we will also assume that if $v \in S(v)$, then $v$ is the last element of $S(v)$. Indeed, when considering a subtree rooted at $v$, after a query to $v$, if $v$ was not the target, then the root of the considered subtree will change to one of the children of $v$, hence any subsequent elements of $S(v)$ may be removed without changing the strategy.
\subsection{Strategies Based on Consistent Schedules} \label{sec:schedules}
Intuitively, we may represent search strategies by a schedule consisting of some number of jobs, with each job associated with querying a node in the tree (cf.\ e.g.\ \cite{IyerRV88b,Liu86,Liu90}). Each job has a fixed processing time, equal to the weight of the corresponding node. Formally, in this work we will refer to the schedule $\hat S$ only in the very precise context of search strategies $\mathcal{A}_S$ based on some query sequence assignment $S$. The \emph{schedule assignment} $\hat S$ is the following extension of the sequence assignment $S$, which additionally encodes the starting time of each query job. If the query sequence $S$ of a node $v$ is of the form $S(v) = (v_1, \ldots, v_k)$, $k = |S(v)|$, then the corresponding schedule for $v$ will be given as $\hat S(v) = ((v_1, t_1), \ldots, (v_k, t_k))$, with $t_i \in \mathbb{R}_{\geq 0}$ denoting the starting time of the query to $v_i$. We will call $\hat S(v)$ the \emph{schedule of node} $v$. We will call a schedule assignment $\hat S$ \emph{consistent} with respect to search in a given tree $T$ if the following conditions are fulfilled: \begin{enumerate}
\item[(i)] No two jobs in the schedule of a node overlap: for all $v \in V$, for two distinct jobs $(u_1,t_1), (u_2,t_2) \in \hat S(v)$, we have $|[t_1, t_1+w(u_1)] \cap [t_2, t_2+w(u_2)]| = 0$. \item[(ii)] If $v$ is the parent of $v'$ in $T$ and $(u,t) \in \hat S(v')$, then we either also have $(u,t) \in \hat S(v)$, or the job $(v,t_v) \in \hat S(v)$ completes before the start of job $(u,t)$: $t_v + w(v) \leq t$.
\end{enumerate} It follows directly from the definition that a consistent schedule assignment (and the underlying query sequence assignment) is uniquely determined by the collection of jobs $\{(v, t_v) : (v, t_v) \in \hat S(u), u \in V\}$. Note that not every vertex has to contain a query to itself in its schedule; we will occasionally write $t_v = \perp$ to denote that such a job is missing. \InJournal{ In this case, the jobs of all children of $v$ have to be contained in the schedule of node $v$.}
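Conditions (i) and (ii) can be checked mechanically. A minimal Python sketch of such a verifier follows (the data encodings and names are ours: `parent` maps each node to its parent or `None`, and `Shat` maps each node to its list of `(vertex, start_time)` jobs):

```python
from itertools import combinations

def is_consistent(parent, w, Shat):
    # (i) no two jobs in one node's schedule overlap (touching endpoints are fine)
    for jobs in Shat.values():
        for (u1, t1), (u2, t2) in combinations(jobs, 2):
            if min(t1 + w[u1], t2 + w[u2]) > max(t1, t2):
                return False
    # (ii) a child's job is either shared with the parent or starts after
    #      the parent's own query job completes
    for child, par in parent.items():
        if par is None:
            continue
        pjobs = set(Shat[par])
        t_par = next((t for (u, t) in Shat[par] if u == par), None)
        for (u, t) in Shat[child]:
            if (u, t) in pjobs:
                continue
            if t_par is None or t_par + w[par] > t:
                return False
    return True
```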
By extension of notation for sequence assignments, we will denote a strategy following a consistent schedule assignment $\hat S$ (i.e., executing the query jobs of schedule $\hat S$ at the prescribed times) as $\mathcal{A}_{\hat S}$. We will then have: \ifConferenceVersion $ \else $$\fi
\textup{\texttt{COST}}_{\mathcal{A}_{\hat S}}(T) = |\hat S|,
\ifConferenceVersion $ \else $$\fi where $|\hat S|$ is the \emph{duration} of schedule assignment $\hat S$, given as: \ifConferenceVersion $ \else $$\fi
|\hat S| = \max_{v\in V} |\hat S(v)|, \ifConferenceVersion $ \else $$\fi with: \ifConferenceVersion $ \else $$\fi
|\hat S(v)| = \max_{(u,t)\in \hat S(v)} (t + w(u)). \ifConferenceVersion $ \else $$\fi
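In code, the duration of a schedule assignment is simply the latest finishing time over all jobs (a one-function Python sketch under the same hypothetical encoding of $\hat S$ as a dict of job lists):

```python
def duration(Shat, w):
    # |S-hat| = max over all schedules of the finish time of their last job
    return max(t + w[u] for jobs in Shat.values() for (u, t) in jobs)
```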
We remark that there always exists an optimal search strategy which is based on a consistent schedule. By a well-known characterization (cf.\ e.g.~\cite{Dereniowski06}), tree $T$ satisfies $\opt{T}=\tau \in \mathbb R$ if and only if there exists an assignment $I : V \to \mathcal I_\tau$ of intervals of time to nodes before deadline $\tau$, $\mathcal I_\tau =\{[a,b] : 0 \leq a < b \leq \tau\}$, such that $|I(v)|=w(v)$ and if $|I(u) \cap I(v)| > 0$ for any pair of nodes $u, v \in V$, then the $u-v$ path in $T$ contains a separating vertex $z$ such that $\max I(z) \leq \min(I(u)\cup I(v))$. The corresponding schedule assignment of duration $\tau$ is obtained by adding, for each node $u \in V$, the job $(u, \min I(u))$ to the schedule of all nodes on the path from $u$ towards the root, until a node $v$ such that $\max I(v) \leq \min I(u)$ is encountered on this path. The consistency and correctness of the obtained schedule is immediate to verify.
\begin{observation}
For any tree $T$, there exists a query sequence assignment $S$ and a corresponding consistent schedule $\hat S$ on $T$ such that $|\hat S| = \opt T$. \eop \end{observation}
\section{The Results} \subsection{\texorpdfstring{$(1+\varepsilon)$}{(1+eps)}-Approximation in \texorpdfstring{$n^{O(\log n/\varepsilon^2)}$}{n\^{}O(log n/eps\^{}2)} Time} \label{sec:qptas} We first present an approximation scheme for the weighted tree search problem with $n^{O(\log n)}$ running time. The main difficulty consists in obtaining a constant approximation ratio for the problem with this running time; we at once present this approximation scheme with tuned parameters, so as to achieve $(1+\varepsilon)$-approximation in $n^{O(\log n/\varepsilon^2)}$ time.
Our construction consists of two main building blocks. First, we design an algorithm based on a bottom-up (dynamic programming) approach, which exhaustively considers feasible sequence assignments and query schedules over a carefully restricted state space of size $n^{O(\log n)}$ for each node. The output of the algorithm provides us both with a lower bound on $\opt T$ and with a sequence assignment-based strategy $\mathcal{A}_S$ for solving the tree search problem. The performance of this strategy $\mathcal{A}_S$ is closely linked to $\opt T$; however, there is one type of query, namely a query on a vertex of small weight leading to a `down' response, whose repeated occurrence may make the difference between $\textup{\texttt{COST}}_{\mathcal{A}_S}(T)$ and $\opt T$ arbitrarily large. To alleviate this difficulty, we introduce an alternative measure of cost which compensates for the appearance of this disadvantageous type of query.
We start by introducing some additional notation. Let $\omega \in \mathbb R_+$ be an arbitrarily fixed weight value and let $c\in\mathbb{N}$. The choice of the constant $c$ will correspond to an approximation ratio of $(1+\varepsilon)$ for the designed scheme, with $\varepsilon = 168/c$.
We say that a query to a vertex $v$ is a \emph{light down query} in some strategy if $w(v)<c\omega$ and $x\in V(T_v)$, i.e., it is also a `down' query, where $x$ is the target vertex.
For any strategy $\mathcal{A}$, we denote by $\cost^{( \omega,c)}_\mathcal{A}(T,x)$ its modified cost of finding target $x$, defined as follows. Let $d_x$ be the number of light down queries when searching for $x$:
\ifConferenceVersion $ \else $$\fi d_x=\left|\{i : w(\textup{\texttt{Q}}_{\mathcal{A},i}(T,x))< c\omega \textnormal{ and } x \in V(T_{\textup{\texttt{Q}}_{\mathcal{A},i}(T,x)})\}\right|. \ifConferenceVersion $ \else $$\fi
Then, the modified cost $\cost^{( \omega,c)}_\mathcal{A}(T,x)$ is: \begin{equation} \label{eq:costo-def} \cost^{( \omega,c)}_\mathcal{A}(T,x) = \textup{\texttt{COST}}_\mathcal{A}(T,x) - (2c+1)\omega d_x, \end{equation} and, by a natural extension of notation: \ifConferenceVersion $ \else $$\fi \cost^{( \omega,c)}_\mathcal{A}(T)=\max_{x\in V}\cost^{( \omega,c)}_\mathcal{A}(T,x). \ifConferenceVersion $ \else $$\fi
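The modified cost can be computed directly from a query transcript. The following hypothetical Python sketch is ours (in particular, the encoding of the `down'-reply information as a dict `target_in_subtree` is an assumption for illustration):

```python
def modified_cost(w, queries, target_in_subtree, c, omega):
    # queries: list of vertices queried while locating a fixed target x
    # target_in_subtree[v]: True iff x lies in T_v, i.e. the reply at v is 'down'
    total = sum(w[v] for v in queries)
    d_x = sum(1 for v in queries if w[v] < c * omega and target_in_subtree[v])
    return total - (2 * c + 1) * omega * d_x
```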
The technical result which we will obtain in Section~\ref{sec:dp} may now be stated as follows.
\begin{proposition}\label{pro:dp} For any $c \in \mathbb{N}$, $L \in \mathbb{N}$, there exists an algorithm running in time $(cn)^{O(L)}$, which for any tree $T$ constructs a stable sequence assignment $S$ and computes a value of $\omega$ such that $\omega \leq \frac {1}{L}\cost^{( \omega,c)}_{\mathcal{A}_S}(T)$ and: \ifConferenceVersion $ \else $$\fi \cost^{( \omega,c)}_{\mathcal{A}_S}(T) \leq \left(1+\frac{12}{c} \right) \opt {T}. \ifConferenceVersion $ \else $$\fi \end{proposition}
In order to convert the obtained strategy $\mathcal{A}_S$ with a small value of $\cost^{( \omega,c)}$ into a strategy with small $\textup{\texttt{COST}}$, we describe in Section~\ref{sec:blackbox} an appropriate strategy conversion mechanism. The approach we adopt is applicable to any strategy based on a stable sequence assignment and consists in concatenating, for each vertex $v\in V$, a prefix to the query sequence $S(v)$ in the form of a separately computed sequence $R(v)$, which does not depend on $S(v)$. The considered query sequences are thus of the form $R(v) \circ S(v)$, where the symbol ``$\circ$'' represents sequence concatenation. Intuitively, the sequences $R$, taken over the whole tree, reflect the structure of a specific solution to the unweighted tree search problem on a contraction of tree $T$, in which each edge connecting a node to a child with weight at least $c\omega$ is contracted. We recall that the optimal number of queries to reach a target in an unweighted tree is $O(\log n)$, and the goal of this conversion is to reduce the number of light down queries in the combined strategy to at most $O(\log n)$.
\begin{proposition} \label{prop:cost-prime}\label{pro:cost-prime} For any fixed $\omega > 0$ there exists a polynomial-time algorithm which for a tree $T$ computes a sequence assignment $R : V \to V^*$, such that, for any strategy $\mathcal{A}_S$ based on a stable sequence assignment $S$, the sequence assignment $S^+$, given by $S^+(v) = R(v) \circ S(v)$ for each $v\in V$, has the following property: $$\textup{\texttt{COST}}_{\mathcal{A}_{S^+}}(T) \leq \cost^{( \omega,c)}_{\mathcal{A}_{S}}(T) + 4(2c+1)\omega \log_2 n.$$ \end{proposition}
The proof of Proposition~\ref{pro:cost-prime} is provided in Section~\ref{sec:blackbox}.
We are now ready to put together the two bounds. Combining the claims of Proposition~\ref{pro:dp} for $L = \lceil c^2 \log_2 n \rceil$ (with $\omega \leq \frac {1}{L}\cost^{( \omega,c)}_{\mathcal{A}_S}(T) \leq \frac{\cost^{( \omega,c)}_{\mathcal{A}_{S}}(T)}{c^2 \log_2 n}$) and Proposition~\ref{pro:cost-prime}, we obtain:
\begin{align*} \textup{\texttt{COST}}_{\mathcal{A}_{S^+}}(T) &\leq \cost^{( \omega,c)}_{\mathcal{A}_{S}}(T) + 4(2c+1)\omega \log_2 n \leq \cost^{( \omega,c)}_{\mathcal{A}_{S}}(T) + 12 c\omega \log_2 n \ \leq\\&\leq \cost^{( \omega,c)}_{\mathcal{A}_{S}}(T) + 12 c \log_2 n \frac{\cost^{( \omega,c)}_{\mathcal{A}_{S}}(T)}{c^2 \log_2 n} \leq \left(1+\frac{12}{c}\right) \cost^{( \omega,c)}_{\mathcal{A}_{S}}(T) \leq\\ &\leq \left(1+\frac{12}{c}\right)^2 \opt {T} \leq \left(1+\frac{168}{c}\right) \opt {T}. \end{align*}
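The two elementary inequalities used in this chain, $4(2c+1) \leq 12c$ and $(1+12/c)^2 \leq 1+168/c$, both reduce to $c \geq 1$; a quick numeric sanity check in Python:

```python
def chain_holds(c):
    # 4(2c+1) <= 12c reduces to c >= 1; (1+12/c)^2 <= 1+168/c likewise,
    # since 24/c + 144/c^2 <= 168/c iff 144/c <= 144
    step1 = 4 * (2 * c + 1) <= 12 * c
    step2 = (1 + 12 / c) ** 2 <= 1 + 168 / c
    return step1 and step2
```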
After setting $\varepsilon = \frac{168}{c}$ and noting that in stating our result we can safely assume $c = O(\mathrm{poly}(n))$ (beyond this range, the tree search problem can be trivially solved optimally in $O(n^n)$ time using exhaustive search), we obtain the main theorem of this section.
\begin{theorem}\label{thm:qptas} There exists an algorithm running in $n^{O\left(\frac{\log n}{\varepsilon^2}\right)}$ time, providing a $(1+\varepsilon)$-approximation solution to the weighted tree search problem for any $0 < \varepsilon < 1$.\eop \end{theorem}
\subsection{Extension: A Poly-Time \texorpdfstring{$O(\sqrt{\log n})$}{O(sqrt(log n))}-Approximation Algorithm} \label{sec:approx}
We now present the second main result of this work. By recursively applying the previously designed QPTAS (Theorem~\ref{thm:qptas}), we obtain a polynomial-time $O(\sqrt{\log n})$-approximation algorithm for finding a search strategy for an arbitrary weighted tree. We start by informally sketching the algorithm --- we follow here the general outline of the idea from \cite{CicaleseKLPV14}. The algorithm is recursive and starts by finding a minimal subtree $T^*$ of the input tree $T$ whose removal disconnects $T$ into subtrees, each of size bounded by $n/2^{\sqrt{\log n}}$. The tree $T^*$ is processed by the approximation scheme described in Section~\ref{sec:qptas}. This results either in locating the target node, if it belongs to $T^*$, or in identifying the component of $T-T^*$ containing the target, in which case the search continues recursively in that component. However, for the final algorithm to have polynomial running time, the tree $T^*$ needs to be of size $2^{O({\sqrt{\log n}})}$. This is achieved by contracting paths in $T^*$ (each vertex of such a path has at most two neighbors in $T^*$) into single nodes with appropriately chosen weights. Since $T^*$ has $2^{O({\sqrt{\log n}})}$ leaves, this reduces the size of $T^*$ to the required level, and we argue that an optimal search strategy for the `contracted' $T^*$ provides a search strategy for the original $T^*$ whose cost is within a constant factor of the cost of $T^*$.
A formal exposition and analysis of the obtained algorithm are provided in \InJournal{Section~\ref{sec:algosqrt}.}\InConference{the Appendix.} \begin{theorem} \label{thm:recursive-algo} There is an $O(\sqrt{\log n})$-approximation polynomial-time algorithm for the weighted tree search problem. \end{theorem}
\section{\InJournal{Proof of Proposition~\ref{pro:dp}: }Quasi-Polynomial Computation of Strategies with Small \texorpdfstring{$\cost^{( \omega,c)}$}{Modified Cost}}\label{sec:dp}
\subsection{Preprocessing: Time Alignment in Schedules}
We adopt here a method similar to, but arguably more refined than, standard rounding techniques used in combinatorial scheduling: we show that the starting and finishing times of jobs, as well as the weights of vertices, can be discretized so as to restrict the size of the state space of each node to $n^{O(\log n)}$ without introducing much error.
Fix $c \in \mathbb{N}$ and $\omega = \frac{a}{cn}$ for some $a\in \mathbb{N}$. (In subsequent considerations, we will have $c = \Theta(1/\varepsilon)$, $a=O(\frac{n}{\log n})$ and $\omega = \Omega(\varepsilon/\log n)$.) Given a tree $T = (V,E,w)$, let $T' = (V,E,w')$ be a tree with the same topology as $T$ but with weights rounded up as follows: \begin{equation} \label{eq:wprime-def} w'(v)= \begin{cases} \roundup{w(v)}{\omega}, &\text{if }w(v)>c\omega,\\ \roundup{w(v)}{\frac{1}{cn}}, &\text{otherwise.} \end{cases} \end{equation} We will informally refer to vertices with $w(v)>c\omega$ (equivalently $w'(v)>c\omega$) as \emph{heavy vertices} and vertices with $w(v) \leq c\omega$ (equivalently $w'(v)\leq c \omega$) as \emph{light vertices}. (Note that $w(v)\leq c\omega$ if and only if $w'(v)\leq c\omega$.)
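The rounding of~\eqref{eq:wprime-def} can be sketched as follows (a hypothetical Python helper of our own; heavy weights are rounded up to a multiple of $\omega$, light weights to a multiple of $\frac{1}{cn}$):

```python
import math

def rounded_weight(wv, c, omega, n):
    # heavy vertices: round up to a multiple of omega (a full box)
    if wv > c * omega:
        return omega * math.ceil(wv / omega)
    # light vertices: round up to a multiple of 1/(c n) (a slot)
    return math.ceil(wv * c * n) / (c * n)
```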
When designing schedules, we consider time divided into \emph{boxes} of duration $\omega$, with the $i$-th box equal to $[i\omega, (i+1) \omega]$. Each box is divided into $a$ identical \emph{slots} of length $\frac 1 {cn}$.
In the tree $T'$, the duration of a query to a heavy vertex is an integer number of boxes, and the duration of a query to a light vertex is an integer number of slots. We next show that, without significantly affecting the approximation ratio of the strategy, we can align each query to a heavy vertex in the schedule so that it occupies an interval of full adjacent boxes, and each query to a light vertex in the schedule so that it occupies an interval of full adjacent slots (possibly contained in more than one box).
We start by showing the relationship between the costs of optimal solutions for trees $T$ and $T'$. \begin{lemma} \label{lem:costTTprime} $\opt T \leq \opt {T'}\leq (1+\frac{2}{c})\opt T$. \end{lemma} \InJournal{ \begin{proof}
The inequality $\opt{T}\leq \opt{T'}$ follows directly from the monotonicity of the cost of the solution with respect to vertex weights, since we have $w'(v) \geq w(v)$, for all $v\in V$.
To show the second inequality, we note that by the definition of weights~\eqref{eq:wprime-def}, for any vertex $v$, $w'(v) \leq (1+\frac{1}{c})w(v) + \frac{1}{cn}$.
Consider an optimal strategy $\mathcal{O}$ for tree $T$ and let $\textup{\texttt{Q}}_{\mathcal{O}}(T,x) = (v_1, \ldots, v_k)$ be the time-ordering of queries performed by strategy $\mathcal{O}$ on tree $T$ to find a target vertex $x$. Let $\mathcal{O}'$ be the strategy which follows the same time-ordering of queries when locating target $x$ in $T'$. Since $k \leq n$, we have: \begin{align*} \textup{\texttt{COST}}_{\mathcal{O}'}(T', x) & = \sum_{i=1}^k w'(v_i) \leq \sum_{i=1}^k \left(\left(1+\frac{1}{c}\right)w(v_i) + \frac{1}{cn}\right) \leq \frac{1}{c} + \left(1+\frac{1}{c}\right) \sum_{i=1}^k w(v_i) \leq \\ & \leq \left(1+\frac{2}{c}\right) \opt T, \end{align*} where we used the fact that, by Observation~\ref{obs:opt-bounds}, $\opt T \geq 1$. Since $\opt {T'} \leq \max_{x\in V}\textup{\texttt{COST}}_{\mathcal{O}'}(T', x) $, the claim follows. \end{proof} }
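The per-vertex estimate $w'(v) \leq (1+\frac{1}{c})w(v) + \frac{1}{cn}$ used in the proof can be checked mechanically; the following sketch (our helper names, assuming a query sequence of length $k \leq n$) verifies the summed form of the inequality on sample rational weights:

```python
import random
from fractions import Fraction

def ceil_to(x, unit):
    # smallest integer multiple of `unit` that is >= x
    return unit * -(-x // unit)

def w_prime(w, omega, c, n):
    # rounding rule of Eq. (eq:wprime-def)
    slot = Fraction(1, c * n)
    return ceil_to(w, omega) if w > c * omega else ceil_to(w, slot)

def bound_holds(weights, omega, c, n):
    # sum_i w'(v_i) <= (1 + 1/c) sum_i w(v_i) + k/(c n) for k = len(weights)
    k = len(weights)
    lhs = sum(w_prime(w, omega, c, n) for w in weights)
    rhs = (1 + Fraction(1, c)) * sum(weights) + Fraction(k, c * n)
    return lhs <= rhs
```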
\begin{lemma} \label{lem:rounding-eps} There exists a consistent schedule assignment $\hat S$ for tree $T'$ such that $ \textup{\texttt{COST}}_{\mathcal{A}_{\hat S}} (T')\leq (1+\frac{3}{c})\opt{T'}$ and for all $v\in V$ we have that \begin{itemize} \item if $w'(v)> c\omega$, ($v$ is heavy), then the starting time $t$ of any job $(v,t)$ in the schedule $\hat S(u)$ of any $u \in V$ is an integer multiple of $\omega$ (aligned to a box), \item if $w'(v)\leq c\omega$, ($v$ is light), then the starting time $t$ of any query $(v,t)$ in the schedule $\hat S(u)$ of any $u \in V$ is an integer multiple of $\frac 1{cn}$ (aligned to a slot). \end{itemize} \end{lemma} \InJournal{ \begin{proof}
We consider an optimal consistent schedule assignment $\hat \Sigma$ for tree $T'$, $|\hat \Sigma|=\opt{T'}$. Fix $u \in V$ arbitrarily, and let $(v_{u,i}, t_{u,i})$ be the $i$-th query job in $\hat \Sigma (u)$. Consider now the schedule $\hat \Sigma^* (u)$ for $T$ based on the same sequence assignment, in which the job $(v_{u,i}, t_{u,i})$ is replaced by the job $(v_{u,i}, t^*_{u,i})$ with $t^*_{u,i} = (1+\frac{2}{c}) t_{u,i}$. We have for any two consecutive jobs at $u$: \begin{equation} \label{eq:tstar-diff} t^*_{u,{i+1}} - t^*_{u,i} = \left(1+\frac{2}{c}\right) (t_{u,{i+1}} - t_{u,{i}}) \geq \left(1+\frac{2}{c}\right)w(v_{u,i}), \end{equation}
where we assume by convention that for the last job index $i_{max}$, $t_{u,{i_{max}+1}} = |\hat \Sigma(u)|$. We now observe that schedule assignment $\hat\Sigma^*$ on tree $T$ can be directly converted into schedule assignment $\hat S$ on tree $T'$ as follows. The query sequence of each vertex is preserved unchanged. If $v_{u,i}$ is a heavy vertex, then within time interval $[t^*_{u,{i}}, t^*_{u,{i+1}}]$ we allocate to vertex $v_{u,i}$ an interval of full boxes, starting at time $\lceil t^*_{u,i} \rceil_{\omega}$. Indeed, by \eqref{eq:tstar-diff} we have: $$ t^*_{u,{i+1}} - \lceil t^*_{u,i} \rceil_{\omega} > t^*_{u,{i+1}} - t^*_{u,i} - \omega \geq \left(1+\frac{2}{c}\right)w(v_{u,i}) - \omega > w(v_{u,i}) + \omega \geq w'(v_{u,i}), $$ where the third inequality uses $w(v_{u,i}) > c\omega$, as $v_{u,i}$ is heavy.
Since no two jobs overlap and the time transformation is performed identically for all vertices, the validity and consistency of schedule assignment $\hat S$ for tree $T'$ follows. We also have $|\hat S| \leq (1+\frac{2}{c}) |\hat \Sigma| = (1+\frac{2}{c}) \opt {T'}$.
To obtain the second part of the claim (alignment for light vertices) it suffices to round up the starting time of query times of all (light) vertices to an integer multiple of $\frac{1}{cn}$. Since all weights in $T'$ are integer multiples of $\frac{1}{cn}$, and so are the starting times of queries to heavy vertices in $\hat S$, the correctness and consistency of the obtained schedule again follows directly. This final transformation increases the duration by at most $\frac{1}{c} \leq \frac{1}{c} \opt {T'}$, and combining the bounds for both the transformations finally gives the claim. \end{proof} }
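The stretch-and-snap argument for heavy vertices in the proof of Lemma~\ref{lem:rounding-eps} can also be verified numerically. The sketch below (our naming; it assumes two consecutive starting times at distance at least $w$, with $w > c\omega$) stretches both times by $(1+\frac{2}{c})$, snaps the first to a box boundary, and checks that the rounded weight still fits:

```python
from fractions import Fraction

def ceil_to(x, unit):
    # smallest integer multiple of `unit` at least x
    return unit * -(-x // unit)

def aligned_start_fits(t_i, t_next, w, omega, c):
    # Assumes t_next - t_i >= w and w > c*omega (a heavy vertex).  Stretch both
    # starting times by (1 + 2/c), snap the first up to a box boundary, and
    # check that the rounded weight w' = ceil_omega(w) fits before the next
    # stretched starting time, as in the proof of Lemma (rounding-eps).
    stretch = 1 + Fraction(2, c)
    s_i, s_next = stretch * t_i, stretch * t_next
    start = ceil_to(s_i, omega)       # box-aligned start of the query
    w_rounded = ceil_to(w, omega)     # rounded weight of the heavy vertex
    return start + w_rounded <= s_next
```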
A schedule on tree $T'$ satisfying the conditions of Lemma~\ref{lem:rounding-eps}, and the resulting search strategy, are called \emph{aligned}. Subsequently, we will design an aligned strategy on tree $T'$, and compare the quality of the obtained solution to the best aligned strategy for $T'$.
The intuition behind the separate treatment of heavy vertices (aligned to boxes) and light vertices (aligned to slots) in aligned schedules is the following. Whereas the time ordering of boxes is essential in the design of a correct strategy, in our dynamic programming approach we will not be concerned with the order of slots within a single box (i.e., the order of queries to light vertices placed in a single box). This allows us to reduce the state space of a node. Whereas the ordering of slots within a box will eventually have to be repaired to provide a correct strategy, this will not affect the quality of the overall solution too much (except for the issue of light down queries pointed out earlier, which are handled separately in Section~\ref{sec:blackbox}).
\subsection{Dynamic Programming Routine for Fixed Box Size}
Let the values of parameter $c$ and box size $\omega$ be fixed as before. Additionally, let $L \in \mathbb{N}$ be a parameter representing the time limit for the duration of the considered vertex schedules when measured in boxes, i.e., the longest schedule considered by the procedure will be of length $L \omega$ (we will eventually choose an appropriate value of $L = O(\log n)$ as required when showing Theorem~\ref{thm:qptas}).
\InConference{ In order to lower-bound the duration of the consistent aligned schedule assignment with minimum cost, we perform an exhaustive bottom-up evaluation of aligned schedules which satisfy constraints on the occupancy of slots. However, instead of considering individual slots of a schedule which may be empty or full, for reasons of efficiency we consider the \emph{load} $s_v[p]$ of each box, $0\leq p < L$, in the same schedule, defined informally as the proportion of the duration of the occupied slots within the box to the duration $\omega$ of the box. In the Appendix, we formally show the following claim.
\begin{lemma}\label{lem:merge-insert-box} Assume that the data structure $(s_v,t_v)_{v\in V}$ corresponds to a consistent schedule. Let $v \in V$ be an arbitrarily chosen node with set of children $\{v_1, \ldots, v_l\}$. Then the set of queried nodes forms a vertex cover of the tree: \begin{equation} \label{con:vertexcover} \text{If $t_v = \perp$, then $t_{v_j} \neq \perp$, for all $1\leq j \leq l$.} \end{equation} Moreover, let the completion time $\tinserted{v}$ of the query to $v$ be given as: $$ \tinserted{v} = \begin{cases} t_v + w'(v), & \text{if $t_v \neq \perp$,}\\ +\infty, & \text{if $t_v = \perp$.}\\ \end{cases} $$ Let $a_p$ be the contribution to the load of the $p$-th time box of the query job for vertex $v$, i.e. $$ a_p = \begin{cases}
\frac{1}{\omega}|[t_v,\tinserted{v}] \cap [p\omega, (p+1)\omega]| & \text{if $t_v \neq \perp$,}\\ 0 & \text{if $t_v = \perp$.} \end{cases} $$ Then, for any box $[p\omega, (p+1)\omega]$, $0 \leq p < L$, we have the following bounds on the amount of load which can be packed into the box: \begin{equation} \label{con:merge-insert-box} \begin{rcases} & s_v[p] = a_p + \sum_{j=1}^l s_{v_j}[p] \in [0,1], && \text{when $\tinserted{v}\geq (p+1)\omega$ ,}\\ & s_v[p] \geq a_p, && \text{when $p\omega < \tinserted{v} < (p+1)\omega$ , }\\ & s_v[p] = 0, && \text{when $\tinserted{v} \leq p\omega$.} \end{rcases} \end{equation} Moreover, for any box $[p\omega, (p+1)\omega]$, $0 \leq p < L$, we have that the total load of a query to $v$ and queries propagated from any of the subtrees cannot exceed $1$: \begin{equation} \label{con:fullboxsafe} \text{For all $1\leq j \leq l$, the following bound holds: $s_{v_j}[p] + a_p \leq 1$.} \end{equation}
\end{lemma} } \InJournal{ Before presenting formally the considered quasi-polynomial time procedure, we start by outlining an (exponential time) algorithm which verifies if there exists an aligned schedule assignment $\hat \Sigma$ for $T'$ whose duration is at most $L\omega$. Notice that since all weights in $T'$ are integer multiples of $\frac{1}{cn}$, the optimal aligned schedule assignment will start and complete the execution of all queries at times which are integer multiples of $\frac{1}{cn}$; thus, we may restrict the considered class of schedules to those having this property. Any possible schedule of length at most $L \omega$ at a vertex $v$, which may appear in $\hat \Sigma$, will be represented in the form of the pair $(\sigma_v,t_v)$, where: \begin{itemize} \item $\sigma_v$ is a Boolean array with $L \omega cn$ entries, where $\sigma_v[i]=1$ when time slot $[\frac{i}{cn}, \frac{i+1}{cn}]$ is occupied in the schedule at $v$, and $\sigma_v[i]=0$ otherwise. \item $t_v \in \mathbb R$ represents the start time of the query to $v$ in the schedule of $v$ (we put $t_v=\perp$ if such a query does not appear in the schedule). \end{itemize} We now state some necessary conditions for a consistent schedule, known from the analysis of the unweighted search problem (cf.~e.g.~\cite{IyerRV88,OnakP06,Schaffer89}). The first observation expresses formally the constraint that the same time slot cannot be used in the schedules of two children of a node $v$, unless it is separated by an (earlier) query to node $v$ itself. In particular, a time slot before the starting time $t_v$ of the job $(v,t_v)$ is free in the schedule of $v$ if and only if it is free in the schedules of all of the children of $v$.
\begin{observation}\label{obs:merge-insert} Assume that the tuple $(\sigma_v,t_v)_{v\in V}$ corresponds to a consistent schedule. Let $v \in V$ be an arbitrarily chosen node with set of children $\{v_1, \ldots, v_l\}$. Let the completion time $\tinserted{v}$ of the query to $v$ in the schedule of $v$ be given as: $$ \tinserted{v} = \begin{cases} t_v + w'(v), & \text{if $t_v \neq \perp$,}\\ +\infty, & \text{if $t_v = \perp$.}\\ \end{cases} $$ Then, for any time slot $[\frac{i}{cn}, \frac{i+1}{cn}]$, we have: \begin{equation} \label{con:merge-insert} \begin{rcases} & \sigma_v[i] = \sum_{j=1}^l \sigma_{v_j}[i], && \text{\quad when $\frac{i+1}{cn} \leq t_v$,}\\ & \sigma_v[i] = 1 \text{ and } \sum_{j=1}^l \sigma_{v_j}[i] = 0, && \text{\quad when $t_v < \frac{i+1}{cn} \leq \tinserted{v}$,}\\ & \sigma_v[i] = 0, && \text{\quad when $\frac{i+1}{cn} > \tinserted{v}$.} \end{rcases} \end{equation}
\end{observation}
We remark that the last of the above conditions~\eqref{con:merge-insert} follows from the w.l.o.g.\ assumption we made when defining sequence assignments that whenever node $v$ appears in the schedule of $v$, it is the last node in the query sequence for $v$.
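Condition~\eqref{con:merge-insert} can be exercised on small examples. The sketch below uses a hypothetical representation: Boolean slot arrays for the children, `None` for $t_v = \perp$; it returns `None` when the constraints are violated:

```python
from fractions import Fraction

def merge_slots(children, t_v, w_v, n_slots, slot):
    # Condition (con:merge-insert): before the query to v starts, slot i of v is
    # the sum over the children (at most one child may occupy it); during the
    # query to v the slot is occupied and all children must be free; after the
    # query to v completes, the slot is empty in v's schedule.
    INF = float('inf')
    start = INF if t_v is None else t_v
    end = INF if t_v is None else t_v + w_v
    sigma = []
    for i in range(n_slots):
        hi = (i + 1) * slot
        total = sum(ch[i] for ch in children)
        if hi <= start:
            if total > 1:
                return None   # the same slot used by two children
            sigma.append(total)
        elif hi <= end:
            if total > 0:
                return None   # slot is needed for the query to v itself
            sigma.append(1)
        else:
            sigma.append(0)   # nothing is scheduled at v after its own query
    return sigma
```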
Moreover, any valid search strategy which locates a target vertex must eventually query at least one of the endpoints of every edge of the tree $T'$, since otherwise, it will not be able to distinguish targets located at these two endpoints. We thus make the following observation.
\begin{observation}\label{obs:vertexcover} Assume that the tuple $(\sigma_v,t_v)_{v\in V}$ represents a consistent schedule. Let $v \in V$ be an arbitrarily chosen node with set of children $\{v_1, \ldots, v_l\}$. Then: \begin{equation} \label{con:vertexcover} \text{If $t_v = \perp$, then $t_{v_j} \neq \perp$, for all $1\leq j \leq l$.} \end{equation}
\end{observation}
Conditions~\eqref{con:merge-insert} and~\eqref{con:vertexcover} provide us with necessary conditions which must be satisfied by any consistent aligned schedule assignment.
In order to lower-bound the duration of the consistent aligned schedule assignment with minimum cost, we perform an exhaustive bottom-up evaluation of aligned schedules which satisfy the constraints of~\eqref{con:vertexcover}, and a slightly weaker form of the constraints of~\eqref{con:merge-insert}. These weaker constraints are introduced to reduce the running time of the algorithm. Instead of considering individual slots of a schedule which may be empty or full, $\sigma_v [i] \in \{0,1\}$, we consider the load of each box in the same schedule, defined as the proportion of occupied slots within the box. Formally, for the $p$-th box, $0\leq p < L$, the \emph{load} $s_v[p]$ is given as: $$ s_v [p] = \frac{1}{\omega cn}\sum_{i = p \cdot\omega cn}^{(p+1)\omega cn -1} \sigma_v [i],\quad \quad s_v[p]\in \left\{0, \frac{1}{\omega cn}, \frac{2}{\omega cn}, \ldots, 1 \right\}, $$ where we recall that $\omega cn$ is an integer by the choice of $\omega$. We will call a box with load $s_v [p] = 0$ an \emph{empty box}, a box with load $s_v [p] = 1$ a \emph{full box}, and a box with load $0 < s_v [p] < 1$ a \emph{partially full box} in the schedule of $v$.
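Computing the load $s_v[p]$ from a slot array $\sigma_v$ is immediate; a minimal sketch in exact rationals (our names):

```python
from fractions import Fraction

def box_loads(sigma, omega, c, n, L):
    # Load s_v[p] of the p-th box: the fraction of its omega*c*n slots that are
    # occupied in the Boolean slot array sigma (of length L*omega*c*n).
    per_box = omega * c * n
    assert per_box.denominator == 1   # integral by the choice of omega = a/(c n)
    per_box = int(per_box)
    return [Fraction(sum(sigma[p * per_box:(p + 1) * per_box]), per_box)
            for p in range(L)]
```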
By summing over all slots within each box, we obtain the following corollary directly from Observation~\ref{obs:merge-insert}.
\begin{corollary}\label{cor:merge-insert-box} Assume that the tuple $(s_v,t_v)_{v\in V}$ corresponds to a consistent schedule. Let $v \in V$ be an arbitrarily chosen node with set of children $\{v_1, \ldots, v_l\}$ and completion time $\tinserted{v}$ of the query to $v$ given as in Observation~\ref{obs:merge-insert}. Let $a_p$ be the contribution to the load of the $p$-th box of the query job for vertex $v$, i.e. $$ a_p = \begin{cases}
\frac{1}{\omega}|[t_v,\tinserted{v}] \cap [p\omega, (p+1)\omega]| & \text{if $t_v \neq \perp$,}\\ 0 & \text{if $t_v = \perp$.} \end{cases} $$ Then, for any box $[p\omega, (p+1)\omega]$, $0 \leq p < L$, we have: \begin{equation} \label{con:merge-insert-box} \begin{rcases} & s_v[p] = a_p + \sum_{j=1}^l s_{v_j}[p] \in [0,1], && \text{when $\tinserted{v}\geq (p+1)\omega$ ,}\\ & s_v[p] \geq a_p, && \text{when $p\omega < \tinserted{v} < (p+1)\omega$ , }\\ & s_v[p] = 0, && \text{when $\tinserted{v} \leq p\omega$.} \end{rcases} \end{equation} Moreover, for any box $[p\omega, (p+1)\omega]$, $0 \leq p < L$, we have: \begin{equation} \label{con:fullboxsafe} \text{For all $1\leq j \leq l$, the following bound holds: $s_{v_j}[p] + a_p \leq 1$.} \end{equation}
\end{corollary} We remark that the statement of Corollary~\ref{cor:merge-insert-box} treats one box specially, namely the one which contains the time moment $\tinserted{v}$ strictly in its interior. For this box, we are unable to make a precise statement about $s_v[p]$ based on the description of the schedules of its children, and content ourselves with a (potentially) weak lower bound $s_v[p] \geq a_p = \frac{1}{\omega}(\tinserted{v} - p\omega)$. This is the direct reason for the slackness in our subsequent estimation, which loses $\omega$ time per down query. However, we note that by the definition of an aligned schedule, a query to a heavy vertex never begins or ends strictly inside a box, so this issue does not arise for heavy vertices. We remark that condition~\eqref{con:fullboxsafe} additionally stipulates that within any box, it must be possible to schedule the contribution of the query to $v$ and the contribution of any child $v_j$ to the load of the box in a non-overlapping way. } We now show that the shortest schedule assignments satisfying the set of constraints~\eqref{con:vertexcover}, \eqref{con:merge-insert-box}, and~\eqref{con:fullboxsafe} can be found in $n^{O(\log n)}$ time. This is achieved by using the procedure $\textsc{BuildStrategy}$, presented in Algorithm~\ref{alg:dp}, which returns for a node $v$ a non-empty set of schedules $\mathcal{\hat S}[v]$, such that each $s_v \in \mathcal{\hat S}[v]$ can be extended into the sought assignment of schedules in its subtree, $(s_u, t_u)_{u\in V(T_v)}$. In the statement of Algorithm~\ref{alg:dp}, we recall that, given a tree $T = (V,E,w)$, tree $T' = (V,E,w')$ is the tree with weights rounded up to the nearest multiple of the length of a slot (see Equation~\eqref{eq:wprime-def}).
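The per-box contribution $a_p$ of a query interval, as it appears in Corollary~\ref{cor:merge-insert-box}, can be sketched as follows (our naming; `None` encodes $t_v = \perp$):

```python
from fractions import Fraction

def contributions(t_v, w_v, omega, L):
    # a_p: the fraction of box [p*omega, (p+1)*omega] covered by the query
    # interval [t_v, t_v + w'(v)]; identically zero when t_v = ⊥ (None).
    if t_v is None:
        return [Fraction(0)] * L
    t_end = t_v + w_v
    return [max(Fraction(0),
                min(t_end, (p + 1) * omega) - max(t_v, p * omega)) / omega
            for p in range(L)]
```

Note that the contributions always sum to $w'(v)/\omega$, so a query interval straddling a box boundary splits its load across the two boxes.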
The subsequent steps taken in procedure $\textsc{BuildStrategy}$ can be informally sketched as follows. The input tree $T'$ is processed in a bottom-up manner and hence, for an input vertex $v$, the recursive calls for its children $v_1,\ldots,v_l$ are first made, providing schedule assignments for the children (see lines \ref{l:rec-beg}-\ref{l:rec-end}). Then, the rest of the pseudocode is responsible for using these schedule assignments to obtain all valid schedule assignments for $v$. Lines \ref{l:merge-beg}-\ref{l:merge-end} merge the schedules of the children in such a way that the set ${\hat S}^*_{i}$, $i\in\{1,\ldots,l\}$, contains all schedule assignments computed on the basis of the schedules for the children $v_1,\ldots,v_i$. Thus, the set ${\hat S}^*_{l}$ is the final product of this part of the procedure and is used in the remaining part. Note that a schedule assignment in ${\hat S}^*_{l}$ may not yet be valid, since a query to $v$ is not accommodated in it --- the rest of the pseudocode is responsible for taking each schedule $s\in{\hat S}^*_{l}$ and inserting a query to $v$ into $s$. More precisely, the subroutine $\textsc{InsertVertex}$ is used to place the query to $v$ at all possible time points (depending on whether $v$ is heavy or light). We note that the subroutine \textsc{MergeSchedules}, for each schedule $s$ it produces, sets a Boolean `flag' $s.must\_contain\_v$ which, whenever it equals $false$, indicates that querying $v$ is not necessary in $s$ to obtain a valid schedule for $v$ (this happens if $s$ queries all children of $v$). A detailed analysis of procedure \textsc{BuildStrategy} can be found in \InJournal{the proof of Lemma~\ref{lem:dplower1}}\InConference{the Appendix}.
\begin{algorithm}[t] \small \caption{Dynamic programming routine \textsc{BuildStrategy} for a tree $T'$. $L, c\in \mathbb{N}$ are global parameters. Subroutines \textsc{MergeSchedules} and \textsc{InsertVertex} are provided \InJournal{further on.}\InConference{in the Appendix.}}\label{alg:dp} \begin{algorithmic}[1] \Procedure{BuildStrategy}{vertex $v$, box size $\omega \in \mathbb R$} \State $l \gets$ number of children of $v$ in $T'$ \quad// Denote by $v_1,\ldots,v_l$ the children of $v$. \For{$i=1..l$ \label{l:rec-beg}}
\State $\mathcal{\hat S}[v_i] \gets$ \Call{BuildStrategy}{$v_i$, $\omega$}; \label{l:rec-end} \EndFor \State $s \gets 0^L$ \State $s.max\_child\_load \gets 0^L$ \State $s.must\_contain\_v \gets false$ \State $\mathcal{\hat S}_{0} \gets \{ s \}$ \quad// $\mathcal{\hat S}_{0}$ contains the schedule with no queries. \State // Inductively, $\mathcal{\hat S}^*_{i}$ is based on merging schedules at $v_1,\ldots,v_{i}$. \For{$i=1..l$ \label{l:merge-beg}}
\State $\mathcal{\hat S}^*_i \gets \emptyset$
\For{each schedule $s \in \mathcal{\hat S}^*_{i-1}$}
\For{each schedule $s_{add} \in \mathcal {\hat S} [v_i]$}
\State $\mathcal{\hat S}^*_i \gets \mathcal{\hat S}^*_i \cup $ \Call{MergeSchedules}{$s$, $s_{add}$, $\omega$}; \label{l:merge-end}
\EndFor
\EndFor \EndFor \State $\mathcal{\hat S}[v] \gets \emptyset$ \For{each $s \in \mathcal{\hat S}^*_l$}
\If{$w'(v) > c \omega$} \quad // $v$ is heavy
\For{$p = 0..L-1$} \quad //attempt to insert (into $s$) query to $v$ starting from \mbox{time-box $p$}
\State $\mathcal{\hat S}[v] \gets \mathcal{\hat S}[v] \cup$ \Call{InsertVertex}{$s, v, \omega, p\cdot \omega$}
\EndFor
\Else \quad //$v$ is light
\For{real $t = 0..L\cdot \omega$ \textbf{step} $\frac{1}{cn}$}
\State //attempt to insert (into $s$) query to $v$ at a slot from time $t$
\State $\mathcal{\hat S}[v] \gets \mathcal{\hat S}[v] \cup$ \Call{InsertVertex}{$s, v, \omega, t$}
\EndFor
\EndIf
\If{$s.must\_contain\_v = false$}
\State $\mathcal{\hat S}[v] \gets \mathcal{\hat S}[v] \cup$ \Call{InsertVertex}{$s, v, \omega, \perp$}
\EndIf \EndFor
\State \textbf{return} $\mathcal{\hat S}[v]$ \EndProcedure \end{algorithmic} \end{algorithm}
\InJournal{\begin{algorithm*}[t] \small \caption{Subroutines \textsc{MergeSchedules} and \textsc{InsertVertex} of procedure \textsc{BuildStrategy} from Algorithm~\ref{alg:dp}.}\label{alg:dp-sub} \begin{algorithmic}[1] \Procedure{MergeSchedules}{schedule $s_{orig}$, schedule $s_{add}$, box size $\omega \in \mathbb R$}
\State $s \gets s_{orig}$ \quad // copy schedule and its properties to answer
\For{$p = 0..L-1$} \quad // for each time-box add load of $s_{orig}$ and $s_{add}$
\State $s[p] \gets s_{orig}[p] + s_{add}[p]$
\If{$s[p] > 1$}
\State {$s[p] \gets +\infty$}
\EndIf
\State $s.max\_child\_load[p] \gets \max\{s.max\_child\_load[p], s_{add} [p]\}$
\EndFor
\If{$s_{add}.t_v = \perp$}
\State $s.must\_contain\_v \gets true$
\EndIf
\State \textbf{return} $s$ \EndProcedure \\
\Procedure{InsertVertex}{schedule $s_{orig}$, vertex $v$, box size $\omega \in \mathbb R$, time $t\in \mathbb R \cup \{\perp\}$}
\State $s \gets 0^L$ \quad // initialize empty schedule for answer
\If{$t \neq \perp$}
\State $I \gets [t, t+w'(v)]$ \quad // time interval into which query to $v$ is being inserted
\State $s.t_v \gets t$
\State $\tinserted{v} \gets t + w'(v)$
\Else
\State $I \gets \emptyset$
\State $s.t_v \gets \perp$
\State $\tinserted{v} \gets +\infty$
\EndIf
\For{$p = 0..L-1$} \quad // for each time-box
\State $a_p \gets \frac{1}{\omega}|I \cap [p \cdot \omega, (p+1)\cdot \omega]|$ \quad // contribution of query to $v$ to load of box $p$
\If {$s.max\_child\_load [p] + a_p>1$}
\State \textbf{return} $\emptyset$
\EndIf
\If {$\tinserted{v}\geq(p+1)\omega$
} \quad
\State $s[p] \gets s_{orig}[p] + a_p$ // add load from children in box $p$
\If {$s[p] > 1$} \quad //insertion failed
\State \textbf{return} $\emptyset$
\EndIf
\Else
\State $s[p] \gets a_p$
\EndIf
\EndFor
\State \textbf{return} $\{s\}$ \EndProcedure \end{algorithmic} \end{algorithm*} }
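A runnable sketch of the two subroutines is given below. This is a simplified re-implementation for illustration, not the authors' code: a schedule is a dictionary with fields mirroring $s[p]$, $s.max\_child\_load[p]$, $s.t_v$ and $s.must\_contain\_v$, and `None` encodes $\perp$:

```python
from fractions import Fraction

INF = float('inf')

def merge_schedules(s_orig, s_add):
    # Sketch of MergeSchedules: box loads are added, overflow above 1 is capped
    # at +inf, and the largest per-box child load is tracked for (con:fullboxsafe).
    L = len(s_orig['load'])
    s = {'load': [], 'max_child': [], 't_v': s_orig['t_v'],
         'must_contain_v': s_orig['must_contain_v']}
    for p in range(L):
        load = s_orig['load'][p] + s_add['load'][p]
        s['load'].append(INF if load > 1 else load)
        s['max_child'].append(max(s_orig['max_child'][p], s_add['load'][p]))
    if s_add['t_v'] is None:          # the merged child is itself unqueried
        s['must_contain_v'] = True
    return s

def insert_vertex(s_orig, w_v, omega, L, t):
    # Sketch of InsertVertex: place the query job [t, t + w'(v)] on top of the
    # merged child loads, box by box; None signals a failed insertion.
    t_end = t + w_v if t is not None else INF
    s = {'load': [0] * L, 'max_child': [0] * L, 't_v': t, 'must_contain_v': False}
    for p in range(L):
        lo, hi = p * omega, (p + 1) * omega
        a_p = max(0, min(t_end, hi) - max(t, lo)) / omega if t is not None else 0
        if s_orig['max_child'][p] + a_p > 1:
            return None               # violates (con:fullboxsafe)
        if t_end >= hi:
            if s_orig['load'][p] + a_p > 1:
                return None           # box overflows
            s['load'][p] = s_orig['load'][p] + a_p
        else:
            s['load'][p] = a_p        # boxes at/after the completion of v's query
    return s
```

On a star with two leaves whose queries fill both boxes, inserting a query to the root fails, while omitting it (allowed, since both children are queried) succeeds.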
\begin{lemma}\label{lem:dplower1} For fixed constants $L, c\in \mathbb{N}$, calling procedure $\textsc{BuildStrategy}(r(T),\omega)$, where $r(T)$ is the root of the tree, returns a non-empty set if there exists a tuple $(s_v,t_v)_{v\in V}$ which satisfies constraints~\eqref{con:vertexcover}, \eqref{con:merge-insert-box}, and~\eqref{con:fullboxsafe}, and returns an empty set otherwise. \end{lemma} \InJournal{ \begin{proof} The formulation of procedure $\textsc{BuildStrategy}$ directly enforces that the constraints~\eqref{con:vertexcover}, \eqref{con:merge-insert-box}, and~\eqref{con:fullboxsafe} are fulfilled at each level of the tree, in a bottom-up manner.
For each vertex $v \in V$, we show by induction on the tree size that upon termination of procedure $\textsc{BuildStrategy}(v,\omega)$, the returned variable $\mathcal{\hat S} [v]$ is the set of all \emph{minimal} schedules $(s_v, t_v) \in \mathcal{\hat S} [v]$ which can be extended within the subtree $T_v$ to a data structure $(s_u, t_u)_{u \in V(T_v)}$, for some $ (s_u, t_u)\in \mathcal {\hat S} [u]$, $u \in V(T_v)$, in such a way that the conditions~\eqref{con:vertexcover}, \eqref{con:merge-insert-box}, and~\eqref{con:fullboxsafe} hold within subtree $T_v$. Here, minimality of a schedule is a trivial technical assumption, understood in the sense of the following very restrictive partial order: we say $(s_v, t_v) \leq (s'_v, t'_v)$ if $s_v[p] \leq s'_v[p]$ for all $0 \leq p \leq L-1$ and $t_v = t'_v$. (In the pseudocode, rather than write $(s_v, t_v)$ as a pair variable, we include $t_v$ within the structure $s_v$ as its special field $s_v.t_v$.)
The algorithm proceeds to merge together exhaustively all possible choices of schedules $ (s_{v_i}, t_{v_i})\in \mathcal {\hat S} [v_i]$ of all children $v_i$ of $v$, $1\leq i \leq l$. The merge is performed by computing, for any fixed choice $(s_{v_i}, t_{v_i})_{1\leq i \leq l}$, the combined load of each box in the resultant schedule $s$: \begin{equation}\label{eq:sumload} s[p] \gets \sum_{i=1}^l s_{v_i} [p], \end{equation} where, as a technicality, we also put $s[p] \gets +\infty$ whenever we obtain excessive load in a box ($s[p]>1$), so as to avoid inflating the size of the state space and consequently, the running time of the algorithm. In Algorithm~\ref{alg:dp}, the computation of $s[p]$ through the sum~\eqref{eq:sumload} proceeds by processing successive children $v_i$, $1\leq i \leq l$, so that a schedule $s$ stored in the data structure $\mathcal{\hat S}^*_i$ represents $s[p] = \sum_{j=1}^i s_{v_j} [p]$. The summation of load is performed within the subroutine $\textsc{MergeSchedules}$, which merges a schedule $s_{orig} \in \mathcal{\hat S}^*_{i-1}$ with a schedule $s_{add} \in \mathcal{\hat S} [v_i]$ to obtain the new schedule $s \in \mathcal{\hat S}^*_{i}$.
Eventually, the set of schedules $\mathcal{\hat S}^*_{l}$, obtained after merging the schedules of all children of $v$, contains an element $s$ satisfying~\eqref{eq:sumload}. Next, we test all possible values of $t_v \in \mathbb R \cup \{\perp\}$, which are feasible for an aligned schedule. These values depend on whether vertex $v$ is heavy or light, for which $t_v$ should represent the starting time of a box or slot, respectively. Using procedure $\textsc{InsertVertex}$, we then set the load of each box following~\eqref{con:merge-insert-box}: \begin{equation}\label{eq:insertmerge} s_v[p] \gets\begin{cases} a_p + \sum_{j=1}^l s_{v_j}[p], & \text{when $\tinserted{v}\geq(p+1)\omega$ ,}\\ a_p, & \text{when $p\omega < \tinserted{v} < (p+1)\omega$ , }\\ 0, & \text{when $\tinserted{v} \leq p\omega$,} \end{cases} \end{equation} where $a_p$ is defined as in Corollary~\ref{cor:merge-insert-box}. In the pseudocode of function $\textsc{InsertVertex}$, for compactness we replace the second and third condition by equivalently setting $s_v[p] \gets a_p$ when the first condition does not hold. We additionally constrain in procedures $\textsc{MergeSchedules}$ and $\textsc{InsertVertex}$ the possibility of the condition $t_v = \perp$ occurring by enforcing the constraints of~\eqref{con:vertexcover} ($t_v = \perp$ is allowed only when the parameter $s.must\_contain\_v$ equals $false$). Condition~\eqref{con:fullboxsafe} is enforced through procedures $\textsc{MergeSchedules}$ and $\textsc{InsertVertex}$ using the auxiliary array $s.max\_child\_load[p]$, $0\leq p \leq L-1$, defined so that $s.max\_child\_load[p] \gets \max_{1\leq j \leq l} s_{v_j}[p]$.
Since $\mathcal {\hat S} [v_i]$, for all $1\leq i \leq l$, contains all minimal schedules satisfying~\eqref{con:vertexcover}, \eqref{con:merge-insert-box}, and~\eqref{con:fullboxsafe}, the same holds for $\mathcal {\hat S} [v]$, which was constructed by enforcing only the required constraints. We remark that we obtain only the set of minimal (and not all) schedules due to the slight difference between~\eqref{eq:insertmerge} and~\eqref{con:merge-insert-box} in the second condition: instead of requiring $s_v[p] \geq a_p$, we put $s_v[p] \gets a_p$, thus setting the $p$-th coordinate of the schedule at its minimum possible value. \end{proof} }
It follows directly from Lemma~\ref{lem:dplower1} that, for any value $\omega^*$, tree $T$ may only admit an aligned schedule assignment of duration at most $\omega^* L$ if a call to procedure $\textsc{BuildStrategy}\allowbreak(r(T),\omega^*)$ returns a non-empty set. Taking into account Lemmas~\ref{lem:costTTprime} and~\ref{lem:rounding-eps}, we directly obtain the following lower bound on the length of the shortest aligned schedule in tree $T'$.
\begin{lemma}\label{lem:lb} If $\textsc{BuildStrategy}(r(T),\omega^*) = \emptyset$, then:\InConference{\\} \ifConferenceVersion $ \else $$\fi \omega^* L < \left (1+\frac{3}{c}\right) \opt {T'} \leq \left(1+\frac{3}{c}\right)\left(1+\frac{2}{c}\right) \opt T \leq \left(1+\frac{11}{c}\right) \opt T. \ifConferenceVersion $ \else $$\fi
\eop \end{lemma}
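The arithmetic in the last step of Lemma~\ref{lem:lb} can be verified directly: $(1+\frac{3}{c})(1+\frac{2}{c}) = 1 + \frac{5}{c} + \frac{6}{c^2} \leq 1 + \frac{11}{c}$ for every integer $c \geq 1$, with equality at $c=1$. A one-line check (our naming):

```python
from fractions import Fraction

def lb_constant_ok(c):
    # (1 + 3/c)(1 + 2/c) = 1 + 5/c + 6/c^2 <= 1 + 11/c, since 6/c^2 <= 6/c for c >= 1
    return (1 + Fraction(3, c)) * (1 + Fraction(2, c)) <= 1 + Fraction(11, c)
```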
\InJournal{ Finally, we bound the running time of procedure $\textsc{BuildStrategy}$.} \begin{lemma} The running time of procedure $\textsc{BuildStrategy} (r(T), \omega)$ is at most $O((cn)^{\gamma L})$, for some absolute constant $\gamma = O(1)$, for any $\omega \leq n$. \end{lemma} \InJournal{ \begin{proof}
The procedure $\textsc{BuildStrategy}$ is run recursively, and is executed once for each node of the tree. The time of each execution is upper-bounded, up to multiplicative factors polynomial in $n$, by the size of the largest of the schedule sets named $\mathcal{\hat S}[u]$, $u\in V$, or $\mathcal{\hat S}^*_i$, appearing in the procedure. We further focus only on bounding the size $|\mathcal{\hat S}|$ of the state space of distinct possible schedules in the $(s_v, t_v)$ representation. The array $s_v$ has size $L$, with each entry $s_v [p]$, $0\leq p \leq L-1$, taking one of the values $s_v [p] \in \{0, \frac{1}{\omega cn}, \frac{2}{\omega cn},\ldots, 1 \}$, where the size of the set of possible values is $\omega cn+1 \in \mathbb{N}$. Additionally, in some of the auxiliary schedules, the additional array field $s_v.max\_child\_load$ has length $L$, with each entry $s_v.max\_child\_load [p]$, $0\leq p \leq L-1$, likewise taking one of the values from the set $\{0, \frac{1}{\omega cn}, \frac{2}{\omega cn},\ldots, 1 \}$. Finally, the starting time $t_v$ is either $\perp$ or an integer multiple of $\omega$ (for a heavy vertex) or of $\frac{1}{cn}$ (for a light vertex) in the interval $[0, L\omega)$, so the number of its possible values is at most $L\omega cn+2$.
Overall, we obtain: $$
|\mathcal{\hat S}| \leq (L\omega cn+2) \left(\omega cn+1\right)^{L} \left(\omega cn+1\right)^{L} \leq (L+1) \left(cn^2+1\right)^{2L+1} < (cn)^{L \gamma'}, $$ where $\gamma'>0$ is a suitably chosen absolute constant. Accommodating the earlier omitted multiplicative $O(\mathrm{poly}(n))$ factors in the running time of the algorithm, we get the claim for some suitably chosen absolute constant $\gamma > \gamma'$. \end{proof} }
\InJournal{ \subsection{Sequence Assignment Algorithm with Small \texorpdfstring{$\cost^{( \omega,c)}$}{Modified Cost}} }
\InJournal{ The procedure for computing a sequence assignment $S$ which achieves a small value of $\cost^{( \omega,c)}$ is given in Algorithm~\ref{alg:S}.} \InConference{To complete the proof of Proposition~\ref{pro:dp}, we can now provide a strategy which achieves a small value of $\cost^{( \omega,c)}$. } This relies on procedure $\textsc{BuildStrategy}(r(T), \omega)$ as an essential subroutine, first determining the minimum value of $\omega = \frac{i}{cn}$, $i\in \mathbb{N}$, for which $\textsc{BuildStrategy}$ produces a schedule. \InJournal{Since the schedule of a parent node $v$ is based on an insertion of a query to $v$ into the schedules of its children, a standard backtracking procedure allows us to determine the representation $(s_v, t_v)_{v \in V}$ of the schedules of all nodes of the tree.} \InConference{Details of the approach are provided in the Appendix.}
\InJournal{ \begin{algorithm} \small \caption{Construction of sequence assignment $S$}\label{alg:S} \begin{algorithmic}[1] \State $\omega \gets \frac{1}{cn}$ \While{\Call{BuildStrategy}{$r(T)$, $\omega$} $= \emptyset$}
\State $ \omega \gets \omega + \frac{1}{cn}$ \EndWhile \State $(s_v, t_v)_{v \in V} \gets $ schedule assignment of duration at most $L \omega$, satisfying constraints~\eqref{con:vertexcover}, \eqref{con:merge-insert-box}, and~\eqref{con:fullboxsafe}, \StatexIndent[1] reconstructed by backtracking through the sets $(\mathcal {\hat S}[v])_{v\in V}$ computed in the last call \StatexIndent[1] to procedure \Call{BuildStrategy}{$r(T)$, $\omega$}. \For {$v \in V$}
\State $C(v) \gets \emptyset$
\For {$u \in V(T_v)$}
\If{there is no vertex $z\neq u$ on the path from $v$ to $u$ s.t.\ $t_z < \lfloor t_u + w'(u) \rfloor_\omega + \omega$ \label{ln:Cvcond}}
\State\label{ln:Cv} $C(v) \gets C(v) \cup \{(\lfloor t_u \rfloor_{\omega}, \lceil t_u + w'(u) \rceil_{\omega}, u)\}$
\EndIf
\EndFor
\State\label{ln:Cvsort} $S(v) \gets$ sequence of vertices (third field) of $C(v)$ sorted in non-decreasing
\StatexIndent[2] order, with tuples compared by first field, then second field, then third field. \EndFor \State \textbf{return} $(S(v))_{v\in V}$ \end{algorithmic} \end{algorithm}
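For concreteness, the rounding-and-sorting step of Algorithm~\ref{alg:S} can be sketched as follows. This is our own minimal Python illustration, not code from the paper: the helper names, the job representation $(u, t_u, w'(u))$, and the toy input are assumptions, and the path condition of line~\ref{ln:Cvcond} is taken to be already enforced.

```python
import math

def floor_box(t, omega):
    """Largest multiple of omega not exceeding t (floor rounding to a box boundary)."""
    return math.floor(t / omega) * omega

def ceil_box(t, omega):
    """Smallest multiple of omega not below t (ceiling rounding to a box boundary)."""
    return math.ceil(t / omega) * omega

def sequence_from_schedule(jobs, omega):
    """jobs: iterable of (u, t_u, w_u), assumed already filtered by the path
    condition; builds the set C(v) of rounded triples and returns S(v)."""
    C = {(floor_box(t, omega), ceil_box(t + w, omega), u) for (u, t, w) in jobs}
    # sort by first field, then second field, then third field
    return [u for (_, _, u) in sorted(C)]

# toy schedule with omega = 0.5
seq = sequence_from_schedule([('b', 1.2, 0.3), ('a', 0.1, 0.4), ('c', 1.2, 0.9)], 0.5)
```

On this toy input the two jobs starting in the same box are ordered by their rounded completion times, so the resulting sequence is $(a,b,c)$.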
We start by observing that, in Algorithm~\ref{alg:S}, if a node $v$ is not queried ($t_v = \perp$), then, by condition~\eqref{con:vertexcover}, all children of $v$ belong to the schedules produced by procedure $\textsc{BuildStrategy}$, and thus they also appear in $S(v)$. This guarantees the validity of the solution.
\begin{lemma} \label{lem:finalalg-correct} Algorithm~\ref{alg:S} returns a correct query sequence assignment $S$ for tree $T$. \eop \end{lemma}
For the purposes of analysis, we extend the backtracking through procedure $\textsc{BuildStrategy}$ in a natural way, so that, for every node $v\in V$ and box $0 \leq p \leq L-1$, we describe precisely the contribution $c_v[p,u]$ of each vertex $u \in V(T_v)$ to the load $s_v[p]$. \InJournal{(See Fig.~\ref{fig:group-1} for an illustration.)} \InJournal{ \begin{figure}
\caption{Illustration of Algorithms~\protect\ref{alg:dp} and~\protect\ref{alg:dp-sub}. The depicted tree $T'$ has vertex set $V=\{a,b,c,d,e,f,g,h\}$ and the vertex weights indicated in the figure.}
\label{fig:1a}
\label{fig:1b}
\label{fig:1c}
\label{fig:1d}
\label{fig:1}
\label{fig:group-1}
\end{figure} }
Formally, for $u=v$ we have $c_v[p,v] \gets a_p = |[t_v, t_v + w'(v)] \cap [p\omega, (p+1)\omega]|$ if $t_v\neq \perp$, and $c_v[p,v] \gets 0$, otherwise. Next, if $u \neq v$ and $u$ belongs to the subtree of child $v_i$ of $v$, we put: $$ c_v[p,u]\gets\begin{cases} c_{v_i}[p,u], & \text{if $\tinserted{v} > p \omega$,}\\ 0, & \text{otherwise,} \end{cases} $$ where the insertion time $\tinserted{v}$ for $v$ is defined as in Observation~\ref{obs:merge-insert}. Comparing with~\eqref{eq:insertmerge}, we have directly for all $0 \leq p < L$: $$ s_v [p] = \sum_{u \in V(T_v)} c_{v} [p,u]. $$
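The per-box contribution $a_p$ can be illustrated with a short sketch (again ours, with hypothetical helper names): for a single job, the overlaps with consecutive boxes sum back to the job weight, and the touched boxes determine exactly the starting and final box indices $p_s(u)$ and $p_f(u)$ defined next.

```python
def box_overlap(t, w, p, omega):
    """Length of the intersection of the job interval [t, t + w] with box p,
    i.e. with [p * omega, (p + 1) * omega]."""
    lo = max(t, p * omega)
    hi = min(t + w, (p + 1) * omega)
    return max(0.0, hi - lo)

def box_profile(t, w, omega, L):
    """Contributions of one job to all L boxes, together with the first and
    last touched box (the p_s and p_f of the text)."""
    contrib = [box_overlap(t, w, p, omega) for p in range(L)]
    touched = [p for p in range(L) if contrib[p] > 0]
    return contrib, touched[0], touched[-1]

# a job of weight 1.0 starting at 0.75 with omega = 0.5 touches boxes 1..3
contrib, p_s, p_f = box_profile(t=0.75, w=1.0, omega=0.5, L=4)
```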
Let $p_s(u)$ and $p_f(u)$ be the indices of the starting and final box, respectively, to which vertex $u$ adds load, formally $p_s(u) = \min P_u$ and $p_f(u) = \max P_u$, where $P_u = \{p: |[t_u, t_u + w'(u)]\cap [p\omega, (p+1)\omega]| >0\}$. From the statement of Algorithm~\ref{alg:S}, we obtain immediately, by an inductive bottom-up argument, that if $u \in S(v)$, then $\omega \sum_{p = p_s(u)}^{p_f(u)} c_{v} [p,u] = w'(u).$
\begin{lemma} \label{lem:heavy_zero}
Let $(s_v,t_v)_{v\in V}$ be a schedule assignment computed by $\textsc{BuildStrategy}$. For any node $v$ and vertices $u$ and $z$ such that $(u,t_u)$ and $(z,t_z)$ belong to the schedule $s_v$, if either $u$ or $z$ is heavy, then $|[t_u,t_u+w'(u)]\cap[t_z,t_z+w'(z)]|=0$. \end{lemma} \begin{proof} Note that procedure $\textsc{InsertVertex}$ is called for a heavy input vertex with its last parameter (insertion time) being a multiple of $\omega$, and the weight $w'(u)$ is a multiple of $\omega$ by definition. Thus, the interval $[t_u,t_u+w'(u)]$ starts and ends at the beginning and end of a box, respectively. Hence, Constraint~\eqref{con:fullboxsafe} gives the lemma. \end{proof}
As a consequence of Lemma~\ref{lem:heavy_zero}, if two jobs $(u,t_u)$ and $(z,t_z)$ overlap, where $u$ and $z$ belong to the sequence assignment $S(v)$, then both of the vertices $u$ and $z$ must be light, thus: $$ t_u > t_z - w'(u) - \omega = t_z + w'(z) - w'(z) - w'(u) - \omega \geq t_z + w'(z) - (2c+1)\omega. $$ We now define the measure of progress $M(x,i)$ of strategy $\mathcal{A}_S$ when searching for target $x$ after $i$ queries as follows. Let $Q_i$ be the set of the first $i$ queried vertices. Let $v_i$ be the current root of the tree, $v_i=\root{\queriedSubtree{T}{Q_i}{x}}$. Let $S_i(v) \subseteq S(v)$ be the subsequence (suffix) of $S(v)$ consisting of those vertices which have not yet been queried. Now, we define: $$ M(x,i) = \begin{cases} \min_{u \in S_i(v_i)} p_s(u), & \textup{if }S_i(v_i)\neq\emptyset, \\ L, & \textup{if }S_i(v_i)=\emptyset. \end{cases} $$ By definition, $M(x,i) \in \{0, 1, \ldots, L-1, L\}$. The next lemma follows from a straightforward analysis of the measure of progress: whenever, while following sequence $S(v)$, we successively complete queries with an `up' result of total duration at least $a$ boxes, the minimum of $p_s(u)$ over the vertices $u \in S(v)$ remaining to be queried advances by at least $a$ boxes, since the queried vertices are ordered first by minimum query time and second by query duration.
\begin{lemma} \label{lem:progress} The measure of progress $M(x,i)$ has the following properties: \begin{enumerate} \item If the $(i+1)$-st query returns an `up' result, then $M(x, i+1) \geq M(x,i)$. \item If the $(i+1)$-st query returns a `down' result, then $M(x, i+1) \geq M(x,i) - (2c+1)$. \item Suppose that between some two steps of the strategy, $i_2 > i_1$, each of the queries $(q_{i_1+1}, \ldots, q_{i_2})$ returns an `up' result, and moreover, the total cost of queries performed was at least $a\omega$, for some $a\in \mathbb{N}$: $$ \sum_{j=i_1 + 1}^{i_2} w'(q_j) \geq a\omega, $$ where $q_j = \textup{\texttt{Q}}_{\mathcal{A}_S,j}(T,x)$. Then, $M(x, i_2) \geq M(x, i_1) +a$. \end{enumerate} \qed \end{lemma} Since the value of $M(x,i)$ is bounded from above by $L$, we obtain from Lemma~\ref{lem:progress} that the strategy $\mathcal{A}_S$ necessarily terminates when looking for target $x$ with cost at most $L\omega + (2c+1) \omega d_x$, \[\textup{\texttt{COST}}_{\mathcal{A}_S}(T',x) \leq L\omega + (2c+1) \omega d_x.\] Thus, due to the definition of $\cost^{( \omega,c)}$ in \eqref{eq:costo-def} and the monotonicity of the cost of a strategy with respect to vertex weights, we obtain the following: \begin{corollary} \label{cor:progress-concl} For the sequence assignment computed by Algorithm~\ref{alg:S} it holds \[\cost^{( \omega,c)}_{\mathcal{A}_S}(T) \leq \cost^{( \omega,c)}_{\mathcal{A}_S}(T') \leq \omega L.\] \qed \end{corollary}
To prove Proposition~\ref{pro:dp}, it remains to show only the stability of the sequence assignment $S$.
\begin{lemma} \label{lem:stable} The query sequence assignment $S$ obtained by Algorithm~\ref{alg:S} is stable. \end{lemma} \begin{proof}
We perform the proof by induction. Following the definition of stability, consider an arbitrary subtree remaining at some moment of executing $\mathcal{A}_S$ on $T'$ (the base case of a single leaf is trivial), let $v$ be its root, and let $u$ be the child of $v$ lying on the path from $v$ to the target $x$. We will show that following $S(u)$ always results in a subsequence of the sequence of queries performed by following $S(v)$.
Let $S^+(v)$ be the subsequence of vertices of $S(v)$ which lie in $T_u$, and let $S^-(v)$ be the subsequence of all remaining vertices of $S(v)$.
Note that $x$ belongs to $T_u$ and hence any query to a node in $S^-(v)$ gives an `up' reply.
We now consider the first (leftmost) difference $v'$ between the sequences $S^+(v)$ and $S(u)$. Suppose that before such a difference occurs, the common fragment of the sequences contains a query to some vertex $y$ on the path from $u$ to $x$. Then, the root of both trees moves to the same child of $y$, and the process continues identically regardless of the initial root of the tree. Thus, such a vertex $y$ cannot occur prior to the difference in sequences $S^+(v)$ and $S(u)$.
Next, suppose that the first difference between the two sequences consists in the appearance of vertex $v$ in sequence $S^+(v)$, i.e., $v'=v$. Then, the root of the tree moves from $v$ to $u$, and the two processes proceed identically as required. This also implies that $t_v>t_u$.
Finally, we observe that no other first difference between the sequences $S^+(v)$ and $S(u)$ is possible by the formulation of Algorithm~\ref{alg:S}. In particular, if a triple $(\lfloor t_z\rfloor_{\omega},\lceil t_z+w'(z)\rceil_{\omega},z)$ is added to $C(u)$ in line~\ref{ln:Cv}, then the condition in line~\ref{ln:Cvcond} and $t_v>t_u$ imply that the triple $(\lfloor t_z\rfloor_{\omega},\lceil t_z+w'(z)\rceil_{\omega},z)$ is added also to the set $C(v)$. Similarly, an insertion of a triple $(\lfloor t_z\rfloor_{\omega},\lceil t_z+w'(z)\rceil_{\omega},z)$ for $z\in V(T_u)$ into $C(v)$ implies that this triple also belongs to $C(u)$. Due to the sorting performed in line~\ref{ln:Cvsort} of Algorithm~\ref{alg:S}, $S^+(v)=S(u)$.
The eventual deterministic coupling, which is obtained in all cases for the strategies starting at $v$ and $u$, extends by induction to the execution of $\mathcal{A}_S$ for trees rooted at a vertex $v$ and its arbitrary descendant $u'$ lying on the path from $v$ to $x$, hence the claim holds. \end{proof}
For the chosen value $\omega$, we can apply Lemma~\ref{lem:lb} with $\omega^* = \omega - \frac{1}{cn}$, obtaining: $$ \left(\omega - \frac{1}{cn}\right) L = \omega^* L < \left(1+\frac{11}{c}\right) \opt T, $$ thus, by Corollary~\ref{cor:progress-concl}, $$ \cost^{( \omega,c)}_{\mathcal{A}_S}(T) \leq \left(1+\frac{11}{c}\right) \opt T + \frac{L}{cn} \leq \left(1+\frac{12}{c}\right) \opt T, $$ where we took into account that trivially $L\leq n$ and $\opt T \geq 1$. Thus, by Lemmas~\ref{lem:finalalg-correct} and~\ref{lem:stable}, we obtain the claim of Proposition~\ref{pro:dp}. } \InConference{\enlargethispage{0.9cm}}
\section{\InJournal{Proof of Proposition~\ref{pro:cost-prime}: }Reducing the Number of Down-Queries}\label{sec:blackbox}
We start with defining a function $\ell\colon V\to \{1,\ldots,\lceil\log_2 n\rceil\}$ which in the following will be called a \emph{labeling of $T$}; the value $\ell(v)$ is called the \emph{label of $v$}. We say that a subset of nodes $H\subseteq V$ is an \emph{extended heavy part} in $T$ if $H=\{v\}\cup H'$, where all nodes in $H'$ are heavy, no node in $H'$ has a heavy neighbor in $T$ that does not belong to $H'$, and $v$ is the parent of some node in $H'$. Let $H_1,\ldots,H_l$ be all extended heavy parts in $T$. Obtain a tree $T_C=(V_C,E_C)$ by contracting, in $T$, the subgraph $H_i$ into a node denoted by $h_i$ for each $i\in\{1,\ldots,l\}$. In the tree $T_C$, we want to find a labeling $\ell'\colon V_C\to \{1,\ldots,\lceil\log_2 |V_C|\rceil\}$ that satisfies the following condition: for each two nodes $u$ and $v$ in $V_C$ with $\ell'(u)=\ell'(v)$, the path between $u$ and $v$ has a node $z$ satisfying $\ell'(z)<\ell'(u)$. One can obtain such a labeling by the following procedure that takes a subtree $T_C'$ of $T_C$ and an integer $i$ as an input. Find a central node $v$ in $T_C'$, set $\ell'(v)=i$ and call the procedure for each subtree $T_C''$ of $T_C'-v$ with input $T_C''$ and $i+1$. The procedure is initially called with input $T_C$ and $i=1$. We also remark that, alternatively, such a labeling can be obtained via vertex rankings \cite{IyerRV88,Schaffer89}.
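The central-node recursion can be sketched in a few lines of Python. This is our own illustration, not the paper's code: we use a centroid as the central node, and the helper names are ours. On a path with five vertices, the middle vertex receives label 1, and any two equally labeled vertices are separated by a vertex with a smaller label.

```python
from collections import deque

def components(nodes, adj):
    """Connected components of the subgraph induced by `nodes`."""
    nodes, comps = set(nodes), []
    while nodes:
        start = next(iter(nodes))
        comp, queue = {start}, deque([start])
        while queue:
            for u in adj[queue.popleft()]:
                if u in nodes and u not in comp:
                    comp.add(u)
                    queue.append(u)
        nodes -= comp
        comps.append(comp)
    return comps

def centroid(nodes, adj):
    """A central node: its removal minimizes the largest remaining component."""
    return min(nodes,
               key=lambda v: max((len(c) for c in components(set(nodes) - {v}, adj)),
                                 default=0))

def label_tree(nodes, adj, i=1, labels=None):
    """Recursive labeling: the centroid gets label i, its components get i+1, ..."""
    if labels is None:
        labels = {}
    if nodes:
        v = centroid(nodes, adj)
        labels[v] = i
        for comp in components(set(nodes) - {v}, adj):
            label_tree(comp, adj, i + 1, labels)
    return labels

# path a-b-c-d-e: the middle vertex gets label 1
adj = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['c', 'e'], 'e': ['d']}
labels = label_tree(set(adj), adj)
```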
Once the labeling $\ell'$ of $T_C$ is constructed, we extend it to a labeling $\ell$ of $T$ in such a way that for each node $v$ of $T$ we set $\ell(v)=\ell'(v)$ if $v\notin H_1\cup\cdots\cup H_l$ and $\ell(v)=\ell'(h_i)$ if $v\in H_i$, $i\in\{1,\ldots,l\}$.
Having the labeling $\ell$ of $T$, we are ready to define a query sequence $R(v)$ for each node $v\in V$. The sequence $R(v)$ contains all nodes $u$ from $T_v$ such that $\ell(u)<\ell(v)$ and each internal node $z$ of the path connecting $v$ and $u$ in $T$ satisfies $\ell(z)>\ell(u)$. Additionally, the nodes in $R(v)$ are ordered by increasing values of their labels. See Figure~\ref{fig:labeling} for an example.\InConference{\enlargethispage{1.4cm}} \begin{figure}
\caption{A tree $T$ (on the left) has light vertices (marked as white nodes) and heavy ones (dark circles); also heavy extended parts are marked. The tree $T_C$ (in the middle) is used together with its labeling (integers are the labels) to obtain the sequence assignment $R$ (on the right); here we skip the sequence assignment for each node $v$ for which $R(v)=\emptyset$.}
\label{fig:labeling}
\end{figure}
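A direct (if naive) way to extract $R(v)$ from the labeling is sketched below; this is our own illustration with hypothetical helper names, tracking along each root-to-descendant walk the minimum label seen strictly between $v$ and the current node.

```python
def R(v, children, label):
    """Sequence R(v): proper descendants u of v with label[u] < label[v] such
    that every internal node z of the v-u path has label[z] > label[u],
    ordered by increasing label."""
    found = []

    def walk(u, internal_min):
        # internal_min = smallest label strictly between v and u (inf if none)
        if u != v and label[u] < label[v] and label[u] < internal_min:
            found.append(u)
        for c in children.get(u, []):
            walk(c, internal_min if u == v else min(internal_min, label[u]))

    walk(v, float('inf'))
    return sorted(found, key=lambda u: label[u])

# path a-b-c-d-e rooted at a, labeled as the central-node recursion would label it
children = {'a': ['b'], 'b': ['c'], 'c': ['d'], 'd': ['e'], 'e': []}
label = {'a': 3, 'b': 2, 'c': 1, 'd': 2, 'e': 3}
```

For instance, $R(a)$ contains $c$ and $b$ (in this order), while $d$ is excluded because the internal vertex $c$ of the $a$--$d$ path has a smaller label than $d$.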
\InJournal{ We start by making some simple observations regarding the sequence assignment $R$.
\begin{observation} \label{obs:light-in-R} For each $v\in V$ and for each $u\in R(v)$, $w(u)\leq c\omega$. \eop \end{observation} \begin{observation} \label{obs:visibility} For each $v\in V$, any two nodes in $R(v)$ have different labels. \eop \end{observation} \begin{observation} \label{obs:R-linear} The sequence assignment $R$ can be computed in time $O(n\log n)$. \eop \end{observation} }
By $x$ we refer to the target node in $T$. Fix $S$ to be a stable sequence assignment in the remaining part of this section and by $R$ we refer to the sequence assignment constructed above. Then, we fix $S^+$ to be $S^+(v)=R(v)\circ S(v)$ for each $v\in V$. \InJournal{Denote by $U_i$ the first $i$ nodes queried by $\mathcal{A}_{S^+}$ and let $C_i=\min \ell(\queriedSubtree{T}{U_{i-1}}{x})$ for each $i\geq 1$. For brevity we denote $U_0=\emptyset$ and $C_0=0$; we also denote by $u_i$ the node in $U_i\setminus U_{i-1}$, $i\geq 1$.} A query made by $\mathcal{A}_{S^+}$ to a node that belongs to $R(v)$ for some $v\in V$ is called an \emph{$R$-query}; otherwise it is an \emph{$S$-query}. \InJournal{ \begin{lemma} \label{lem:min-label-conn} For each $i\geq 0$, the nodes in $\queriedSubtree{T}{U_i}{x}$ with minimum label induce a connected subtree. \eop \end{lemma}
The next two lemmas will be used to conclude that the number of $R$-queries performed by $\mathcal{A}_{S^+}$ is bounded by $2\log_2 n$ (see Lemma~\ref{lem:R-logn}). \begin{lemma} \label{lem:R-up} If the $i$-th query of $\mathcal{A}_{S^+}$ is an $R$-query resulting in an `up' reply, then $C_{i+1}\geq C_{i}+1$. \end{lemma} \begin{proof} By construction, $u_i$ has the minimum label among all nodes in $\queriedSubtree{T}{U_{i-1}}{x}$. By Lemma~\ref{lem:min-label-conn}, either $u_i$ is the unique node with label $\ell(u_i)$ in the tree $\queriedSubtree{T}{U_{i-1}}{x}$ or there are more nodes with this label and they all belong to a single extended heavy part in $\queriedSubtree{T}{U_{i-1}}{x}$ with $u_i$ being closest to the root of $\queriedSubtree{T}{U_{i-1}}{x}$. In both cases, since the reply is `up', we obtain that $\queriedSubtree{T}{U_{i}}{x}$ has no node with label $\ell(u_i)$, which proves the lemma. \end{proof}
\begin{lemma} \label{lem:R-down} If the $i$-th query of $\mathcal{A}_{S^+}$ is an $R$-query that results in a `down' reply, then one of the two cases holds: \begin{enumerate}[label={\normalfont{(\roman*)}},ref={(\roman*)}]
\item\label{it:heavy-child1} if $u_i$ has no heavy child that belongs to $\queriedSubtree{T}{U_{i}}{x}$, then $C_{i+1}\geq C_{i}+1$,
\item\label{it:heavy-child2} if $u_i$ has a heavy child that belongs to $\queriedSubtree{T}{U_{i}}{x}$, then $C_{i+1}=C_{i}=\ell(u_i)$ and for each $j>i$ such that $C_i=C_{j+1}$, all queries $i+1,\ldots,j$ are $S$-queries. \end{enumerate} \end{lemma} \begin{proof} By Lemma~\ref{lem:min-label-conn}, the nodes with label $C_i$ induce a connected subtree in $\queriedSubtree{T}{U_{i-1}}{x}$. This immediately implies \ref{it:heavy-child1}. By construction, if $u_i$ has a heavy child $u'$ that is in $\queriedSubtree{T}{U_{i}}{x}$, then $u'=\root{\queriedSubtree{T}{U_{i}}{x}}$ and the labels of $u_i$ and $u'$ are the same. The latter is due to the fact that both $u_i$ and $u'$ belong to the same extended heavy part in $T$. Suppose for a contradiction that the $j$-th query (performed say on a node $z$) is an $R$-query and $C_{j+1}=C_i$, $j>i$. This in particular implies that $z\in R(\root{\queriedSubtree{T}{U_{j-1}}{x}})$. Due to Lemma~\ref{lem:R-up}, the reply to this query is `down'. By \ref{it:heavy-child1}, $z$ has a heavy child that belongs to $\queriedSubtree{T}{U_j}{x}$. By Observation~\ref{obs:light-in-R}, $z$ is a light node and therefore $z$ along with some of its descendants and $u_i$ with some of its descendants form two different extended heavy parts in $T$. Since $z$ and $u_i$ have the same label, there exists a light node $u_i'$ in $T$ on the path between $u_i$ and $z$ with label smaller than $\ell(u_i)$. Assume without loss of generality that no other node of this path that lies between $u_i$ and $u_i'$ has label smaller than $\ell(u_i')$. The above-mentioned path is contained in $\queriedSubtree{T}{U_{i-1}}{x}$ since both $u_i$ and $z$ belong to this subtree. This however implies that $u_i'\in R(v)$ for $v=\root{\queriedSubtree{T}{U_{i-1}}{x}}$, because $\ell(u_i')<\ell(u_i)$ and no node on the path between $u_i$ and $u_i'$ has label smaller than $\ell(u_i')$.
Moreover, $u_i'$ precedes $u_i$ in $R(v)$, meaning that $u_i'$ must have been queried among the first $i$ queries --- a contradiction with the fact that $u_i'$ belongs to $\queriedSubtree{T}{U_i}{x}$. \end{proof} \begin{lemma} \label{lem:R-logn} For each target node, the total number of $R$-queries made by $\mathcal{A}_{S^+}$ is at most $2\log_2 n$. \end{lemma} \begin{proof} It follows from Lemmas~\ref{lem:R-up} and~\ref{lem:R-down} that after any two subsequent $R$-queries the value of parameter $C_i$ increases by at least $1$. Since $C_i$ never exceeds $\lceil\log_2 n\rceil$, the bound follows. \end{proof}
The next two lemmas will be used to bound the number of $S$-queries in $S^+$ receiving a `down' reply by at most $2\log_2 n$. \begin{lemma} \label{lem:last-in-R} If all nodes in $R(v)$ have been queried by $\mathcal{A}_{S^+}$ after the $i$-th query for some $v\in V$ and $v$ is the root of $\queriedSubtree{T}{U_{i}}{x}$, then $\ell(v)=C_{i+1}$. \end{lemma} \begin{proof} Suppose for a contradiction that $\ell(v)\neq C_{i+1}$. Since $v$ belongs to $\queriedSubtree{T}{U_i}{x}$, we have that $\ell(v)>C_{i+1}$. Thus, by construction, there exists a light node $u$ in $\queriedSubtree{T}{U_i}{x}$ with $\ell(u)=C_{i+1}$ such that all internal nodes on the path between $v$ and $u$ have labels larger than $\ell(u)$. Therefore, $u$ belongs to $R(v)$ because $v$ is the root of $\queriedSubtree{T}{U_i}{x}$. This implies that $u$ has been already queried --- a contradiction with $u$ being in $\queriedSubtree{T}{U_i}{x}$. \end{proof}
\begin{lemma} \label{lem:S-down} If the $i$-th query of $\mathcal{A}_{S^+}$ is an $S$-query performed on a light node and the reply is `down', then $\queriedSubtree{T}{U_i}{x}$ has no light node with label $C_i$. \end{lemma} \begin{proof} Suppose that the $i$-th query is performed on a node $u$ in $S(v)$ for some $v\in V$. Clearly, $v$ is the root of $\queriedSubtree{T}{U_{i-1}}{x}$. Since the considered query is an $S$-query, all vertices in $R(v)$ have been already queried. Thus, by Lemma~\ref{lem:last-in-R}, $\ell(v)=C_{i}$. By construction, $v$ is the only light node in this subtree having label $C_i$. Since the reply to the $i$-th query is `down', $v$ does not belong to $\queriedSubtree{T}{U_i}{x}$. \end{proof}
We are now ready to prove Proposition~\ref{pro:cost-prime}.
By Lemma~\ref{lem:R-logn},} \InConference{In the Appendix we show that,}
in $\mathcal{A}_{S^+}$, the total number of $R$-queries does not exceed $2\log_2 n$. \InJournal{Note that since}\InConference{We also show that} $S$ is stable, \InConference{and so} for each target node $x$, the $S$-queries performed by $\mathcal{A}_{S^+}$ are a subsequence of the queries performed by $\mathcal{A}_{S}$. Therefore, the potentially additional queries made by $\mathcal{A}_{S^+}$ with respect to $\mathcal{A}_{S}$ are $R$-queries. \InJournal{By Observation~\ref{obs:light-in-R}, each $R$-query is made on a light node. By definition of function $\cost^{( \omega,c)}$ and Observation~\ref{obs:light-in-R}, any $R$-query increases the value of $\cost^{( \omega,c)}$ of $\mathcal{A}_{S^+}$ with respect to the value of $\cost^{( \omega,c)}$ of $\mathcal{A}_{S}$ by at most $(2c+1)\omega$.} \InConference{We then formally show that each $R$-query is made on a light node and that any $R$-query increases the value of $\cost^{( \omega,c)}$ of $\mathcal{A}_{S^+}$ with respect to the value of $\cost^{( \omega,c)}$ of $\mathcal{A}_{S}$ by at most $(2c+1)\omega$.} Hence we have: \ifConferenceVersion $ \else $$\fi \cost^{( \omega,c)}_{\mathcal{A}_{S^+}}(T) \leq \cost^{( \omega,c)}_{\mathcal{A}_{S}}(T) + 2(2c+1)\omega \log_2 n. \ifConferenceVersion $ \else $$\fi
\InJournal{By Lemmas~\ref{lem:R-down} and~\ref{lem:S-down},} \InConference{Moreover, we show in the Appendix that} the total number of queries in strategy $\mathcal{A}_{S^+}$ to light nodes receiving `down' replies can be likewise bounded by $2\log_2 n$. Since each such query introduces a rounding difference of at most $(2c+1)\omega$ when comparing cost functions $\textup{\texttt{COST}}$ and $\cost^{( \omega,c)}$, we thus obtain: \ifConferenceVersion $ \else $$\fi \textup{\texttt{COST}}_{\mathcal{A}_{S^+}}(T) \leq \cost^{( \omega,c)}_{\mathcal{A}_{S^+}}(T) + 2(2c+1)\omega \log_2 n. \ifConferenceVersion $ \else $$\fi
Combining the above observations gives the claim of the Proposition.
\InJournal{ \section{Proof of Theorem~\ref{thm:recursive-algo}: A \texorpdfstring{$O(\sqrt{\log n})$}{O(sqrt(log n))}-Approximation Algorithm} \label{sec:algosqrt} We start with some notation. Given a tree $T=(V,E,w)$ and a fixed value of parameter $\alpha$, we find a subtree $T^*=(V^*,E^*)$ of the input tree $T$, called an \emph{$\alpha$-separating tree}, that satisfies: $\root{T^*}=\root{T}$ and each connected component of $T\setminus V^*$ has at most $\alpha$ vertices.
An $\alpha$-separating tree $T^*$ is \emph{minimal} if the removal of any leaf from $T^*$ gives an induced tree that is not an $\alpha$-separating tree. Then, for a target node $x\in V$, we introduce a recursive strategy $\mathcal{R}$ that takes the following steps: \begin{enumerate} \item $\mathcal{R}$ first applies strategy $\mathcal{A}^*$ restricted to tree $T^*$ to locate the node $x'$ of $T^*$ which is closest to the target $x$. \item Then, $\mathcal{R}$ queries $x'$, which either completes the search in case when $x'$ is the target or provides a neighbor $x''$ of $x'$ that is closer to the target than $x'$. \item If $x'$ is not the target, then the strategy calls itself recursively on the subtree $T_{x''}$ of $T\setminus\{x'\}$ containing $x$. The latter strategy for $T_{x''}$ is denoted by $\mathcal{R}_{x''}$. (Note that $T_{x''}$ is a connected component in $T\setminus V^*$.) \end{enumerate} Such a search strategy $\mathcal{R}$ obtained from $\mathcal{A}^*$ and strategies $\mathcal{R}_{\root{T'}}$ (constructed recursively) for subtrees $T'$ in $T\setminus V^*$ is called a \emph{$(\mathcal{A}^*,\{\mathcal{R}_{\root{T'}}\hspace{0.1cm}\bigl|\bigr.\hspace{0.1cm} T' \in \mathcal{C}(T \setminus V^*)\})$-strategy}, where $\mathcal{C}(T \setminus V^*)$ is the set of connected components (subtrees) in $T \setminus V^*$.
The following bound on the cost of the strategy $\mathcal{R}$ follows directly from the construction: \begin{lemma} \label{lem:recursion-cost} For a $(\mathcal{A}^*,\{\mathcal{R}_{\root{T'}}\hspace{0.1cm}\bigl|\bigr.\hspace{0.1cm} T' \in \mathcal{C}(T \setminus V^*)\})$-strategy $\mathcal{R}$ for $T$ it holds $$ \textup{\texttt{COST}}_\mathcal{R} (T) \leq \textup{\texttt{COST}}_{\mathcal{A}^*} (T^*) + \max_{x' \in V^*} w(x') + \max_{T' \in \mathcal{C}(T \setminus V^*)} \textup{\texttt{COST}}_{\mathcal{R}_{\root{T'}}} (T'). $$ \qed \end{lemma}
We now formally describe and analyze the aforementioned contractions of subpaths in a tree. A maximal path with more than one node in a tree $T$ that consists only of vertices that have degree two in $T$ is called a \emph{long chain in $T$}. For each long chain $P$, contract it into a single node $v_P$ with weight $\min_{u\in V(P)}w(u)$, obtaining a tree $\longchains{T}$. In what follows, the tree $\longchains{T}$ is called a \emph{chain-contraction of} $T$.
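The chain-contraction can be sketched as follows (our illustration, not the paper's code): long chains are exactly the connected components, of size greater than one, of the set of degree-2 vertices, and each contracted node $v_P$ inherits the minimum weight on its chain.

```python
from collections import deque

def long_chains(adj, w):
    """Maximal long chains of T (connected sets of more than one degree-2
    vertex), each paired with the weight min_{u in P} w(u) that the
    contracted node v_P would carry."""
    deg2 = {v for v in adj if len(adj[v]) == 2}
    chains, seen = [], set()
    for start in deg2:
        if start in seen:
            continue
        comp, queue = {start}, deque([start])
        while queue:
            for u in adj[queue.popleft()]:
                if u in deg2 and u not in comp:
                    comp.add(u)
                    queue.append(u)
        seen |= comp
        if len(comp) > 1:  # a single degree-2 vertex is not a long chain
            chains.append((comp, min(w[u] for u in comp)))
    return chains

# a spider r (degree 3) followed by the path a-b-c and the leaf d
adj = {'r': ['a', 'x', 'y'], 'x': ['r'], 'y': ['r'],
       'a': ['r', 'b'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['c']}
w = {'r': 5, 'x': 1, 'y': 1, 'a': 3, 'b': 2, 'c': 4, 'd': 1}
chains = long_chains(adj, w)
```

Here the only long chain is $\{a,b,c\}$, and its contracted node carries weight $2$.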
Our first step is a remark that, at the cost of losing a multiplicative constant in the final approximation ratio, we may restrict ourselves to trees that have no long chains. This is due to the following observation. \begin{lemma} \label{lem:no-long-chains} Let $T$ be a tree. Given a $p$-approximate search strategy for $\longchains{T}$, a $(p+1)$-approximate search strategy for $T$ can be computed in polynomial time. \end{lemma} \begin{proof} Let $\mathcal{A}'$ be a search strategy for $\longchains{T}$. We obtain a search strategy $\mathcal{A}$ for $T$ in two stages. In the first stage we `mimic' the behavior of $\mathcal{A}'$: (i) if $\mathcal{A}'$ queries a node $v$ that also belongs to $T$, then $\mathcal{A}$ also queries $v$; (ii) if $\mathcal{A}'$ queries a node $v_P$ that corresponds to some long chain $P$ in $T$, then $\mathcal{A}$ queries, in $T$, a node with minimum weight in $P$. Note that after the first stage, the search strategy either located the target or determined that the target belongs to a subpath $P'$ of some long chain $P$ of $T$. Moreover, the total cost of all queries performed in the first stage is at most $\textup{\texttt{COST}}_{\mathcal{A}'}(\longchains{T})$.
Then, in the second stage we compute (in $O(n^2)$-time) an optimal search strategy $\mathcal{A}_{P'}$ for $P'$ \cite{CicaleseJLV12}. Due to the monotonicity of the cost over taking subgraphs, $\textup{\texttt{COST}}_{\mathcal{A}_{P'}}(P')=\opt{P'}\leq\opt{T}$.
Both stages provide us with a search strategy for $T$ with cost at most $\textup{\texttt{COST}}_{\mathcal{A}'}(\longchains{T})+\opt{T}$. Since $\opt{\longchains{T}}\leq\opt{T}$ and $\textup{\texttt{COST}}_{\mathcal{A}'}(\longchains{T})\leq p\cdot\opt{\longchains{T}}$, the lemma follows. \end{proof} Note that it is straightforward to verify whether any vertex $v$ of $T$ is a leaf in the $\alpha$-separating tree of $T$ and hence we obtain the following. \begin{observation} \label{obs:seperating-set} Given a tree $T$ with no long chain and $\alpha$, a minimal $\alpha$-separating tree of $T$ can be computed in polynomial time. \qed \end{observation}
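One simple way to realize Observation~\ref{obs:seperating-set}, sketched below in Python under our own naming conventions, is to keep exactly the vertices whose subtree contains more than $\alpha$ nodes: the kept set is connected and contains the root, every hanging component has at most $\alpha$ vertices, and dropping any kept leaf exposes a component larger than $\alpha$.

```python
def subtree_sizes(root, children):
    """Number of nodes in the subtree rooted at each vertex."""
    sizes = {}

    def rec(v):
        sizes[v] = 1 + sum(rec(c) for c in children.get(v, []))
        return sizes[v]

    rec(root)
    return sizes

def minimal_separating_vertices(root, children, alpha):
    """Vertex set of a minimal alpha-separating tree: all v with |T_v| > alpha."""
    sizes = subtree_sizes(root, children)
    return {v for v in sizes if sizes[v] > alpha}

children = {'r': ['a', 'b'], 'a': ['c', 'd'], 'b': [], 'c': [],
            'd': ['e', 'f'], 'e': [], 'f': []}
vs = minimal_separating_vertices('r', children, alpha=2)
```

On this example the kept set is $\{r,a,d\}$; each component of $T\setminus V^*$ is a single leaf, and removing the leaf $d$ of $T^*$ would expose the component $T_d$ with $3>\alpha$ nodes.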
Using Lemma~\ref{lem:no-long-chains} and choosing appropriately the value of $\alpha$, one can obtain an $\alpha$-separating tree of $T$ having at most $t = 2^{O(\sqrt{\log n})}$ vertices.
\begin{lemma} \label{lem:minimal-separating} Let $T$ be any tree and let $\alpha$ be selected arbitrarily. If $T^*$ is a minimal $\alpha$-separating tree of $T$, then $\longchains{T^*}$ has at most $4\left\lceil\frac{n}{\alpha}\right\rceil$ vertices. \end{lemma} \begin{proof} By definition, for each leaf $v$ of $T^*$, the subtree $T_v$ has more than $\alpha$ nodes. Since these trees are node-disjoint, we obtain that there are at most $\lceil \frac{n}{\alpha} \rceil$ leaves in $T^*$. We denote the leaves of $T^*$ by $v_1,v_2,\ldots,v_{l}$, $l\leq \left\lceil \frac{n}{\alpha} \right\rceil$; note that $\longchains{T^*}$ has the same leaves as $T^*$.
Let $V(\longchains{T^*})$ be the vertex set of $\longchains{T^*}$. Then, we claim that $\card{V(\longchains{T^*})}=O(\lceil \frac{n}{\alpha} \rceil)$ by counting the number of nodes with different degrees in $\longchains{T^*}$. Clearly, we have $\card{\{v\in V(\longchains{T^*})\hspace{0.1cm}\bigl|\bigr.\hspace{0.1cm} \deg(v)>2\}} \leq \lceil \frac{n}{\alpha} \rceil$. Since the tree $\longchains{T^*}$ contains no long chains, the parent (if exists) of every node with degree exactly $2$ must have degree at least $3$. Thus, $$\card{\{v\in V(\longchains{T^*})\hspace{0.1cm}\bigl|\bigr.\hspace{0.1cm} \deg(v)=2\}} \leq \card{\{v\in V(\longchains{T^*})\hspace{0.1cm}\bigl|\bigr.\hspace{0.1cm} \deg(v)>2\}}+1 \leq \left\lceil \frac{n}{\alpha} \right\rceil+1.$$ Hence we get $\card{V(\longchains{T^*})}\leq 4 \left\lceil \frac{n}{\alpha} \right\rceil$. \end{proof}
With Lemmas~\ref{lem:no-long-chains},~\ref{lem:minimal-separating} and Observation~\ref{obs:seperating-set} we are now ready to obtain the efficient recursive decomposition of the problem: \begin{lemma} \label{lem:recursive-algo} If there is a $O(1)$-approximation algorithm running in $n^{O(\log n)}$ time for any input tree, then one can obtain a $O(\sqrt{\log n})$-approximation algorithm with polynomial running time for any input tree. \end{lemma}
\begin{proof} Suppose $\textsc{Solve}$ is a given constant-factor approximation algorithm running in time $n^{O(\log n)}$ that, for any input tree $T$, outputs a search strategy for $T$. We then design a polynomial-time procedure $\textsc{Rec}$ as shown in Algorithm~\ref{alg:rec}, which outputs a search strategy $\mathcal{R}$ for an input tree $T$.
\begin{algorithm} \caption{$O(\sqrt{\log n})$-approximation procedure $\textsc{Rec}$ based on the $n^{O(\log n)}$-time constant-factor approximation algorithm $\textsc{Solve}$}\label{alg:rec} \begin{algorithmic}[1]
\Procedure{Rec}{tree $T=(V,E,w)$} \State $n\gets \card{V}$ \If {$n\leq 2^{\sqrt{\log n}}$}
\State \textbf{return} \Call{Solve}{$T$} \label{line:solve} \Else \State $\alpha \gets n/2^{\sqrt{\log n}}$ \State $T^* \gets$ a minimal $\alpha$-separating tree of $T$ with vertex set $V^*$ \State $\mathcal{A}^* \gets$ \Call{Solve}{$\longchains{T^*}$} \label{line:solve2} \State $\mathcal{A}_{T^*} \gets$ search strategy for $T^*$ obtained from $\mathcal{A}^*$ as described in proof of Lemma~\ref{lem:no-long-chains} \For{\textbf{each} $T'$ in $\mathcal{C}(T \setminus V^*)$} \State $\mathcal{R}_{\root{T'}} \gets$ \Call{Rec}{$T'$}; \EndFor \State \textbf{return} $(\mathcal{A}_{T^*},\{\mathcal{R}_{\root{T'}}\hspace{0.1cm}\bigl|\bigr.\hspace{0.1cm} T' \in \mathcal{C}(T \setminus V^*)\})$-strategy for $T$ \EndIf \EndProcedure \end{algorithmic} \end{algorithm}
Each call to $\textsc{Solve}$ in line~\ref{line:solve} has running time $(2^{\sqrt{\log n}})^{O(\log (2^{\sqrt{\log n}}))}$, which is a polynomial in $n$. The same holds for the call in line~\ref{line:solve2} because, by Lemma~\ref{lem:minimal-separating}, $\longchains{T^*}$ has at most $4 \left\lceil \frac{n}{\alpha} \right\rceil = O(2^{\sqrt{\log n}})$ vertices. Thus, procedure $\textsc{Rec}$ has polynomial running time and it remains to bound the cost of the search strategy $\mathcal{R}$ computed by $\textsc{Rec}$.
To bound the recursion depth of $\textsc{Rec}$, note that each time a recursive call is made, the size of the instance (the input tree) decreases by a factor of $2^{\sqrt{\log n}}$. Thus, the depth is bounded by $\log_{(2^{\sqrt{\log n}})}n=\sqrt{\log n}$. In the search strategy computed by procedure $\textsc{Rec}$, at each level of the recursion we execute the search strategy computed by one call to $\textsc{Solve}$ and one vertex of the $(n/2^{\sqrt{\log n}})$-separating tree is queried. This follows from the definition of the $(\mathcal{A}_{T^*},\{\mathcal{R}_{\root{T'}}\hspace{0.1cm}\bigl|\bigr.\hspace{0.1cm} T' \in \mathcal{C}(T \setminus V^*)\})$-strategy. By Lemma~\ref{lem:no-long-chains}, \[\textup{\texttt{COST}}_{\mathcal{A}_{T^*}}(T^*)\leq c'\cdot\opt{T^*}\] for some constant $c'$. By Lemma~\ref{lem:recursion-cost} and since $\opt{T^*}\leq\opt{T}$, the cost of $\mathcal{R}$ at each recursion level is bounded by $(c'+1)\opt{T}$. This gives $\textup{\texttt{COST}}_{\mathcal{R}}(T)\leq (c'+1)\sqrt{\log n}\cdot\opt{T}$ as required.
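As an aside, the depth bound can be sanity-checked numerically. The sketch below (illustrative only, not part of the proof) iterates $n \gets n/2^{\sqrt{\log n}}$, recomputing the shrink factor from the current $n$ exactly as $\textsc{Rec}$ does. The literal depth comes out slightly above $\sqrt{\log n}$, because the factor shrinks along with $n$, but it stays within $2\sqrt{\log n}$, which the $O(\sqrt{\log n})$ guarantee absorbs into its constant.

```python
import math

def recursion_depth(n):
    """Depth of the recursion if, at every level, the current instance of
    size n shrinks by the current factor 2^sqrt(log n), stopping once
    n <= 2^sqrt(log n) (the base case handled by Solve)."""
    depth = 0
    while n > 2 ** math.sqrt(math.log2(n)):
        n = n / 2 ** math.sqrt(math.log2(n))
        depth += 1
    return depth

# For n = 2^64, sqrt(log n) = 8; the literal depth is a bit larger but
# remains below 2 * sqrt(log n) = 16.
assert math.sqrt(64) <= recursion_depth(2 ** 64) <= 2 * math.sqrt(64)
```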
\end{proof}
Noting that the existence of a constant-factor approximation procedure with $n^{O(\log n)}$ running time follows from Theorem~\ref{thm:qptas} (by taking $\varepsilon=1$), the claim of Theorem~\ref{thm:recursive-algo} follows directly from Lemma~\ref{lem:recursive-algo}. }
\InJournal{ \section*{Acknowledgment}
The authors thank Jakub \L{}ącki for preliminary discussions on the studied problem. }
\InConference{
}
\end{document}
\begin{document}
\begin{abstract} We review the recent approach to Markov chains using the Karnofsky--Rhodes and McCammond expansions in semigroup theory by the authors and illustrate it with two examples. \end{abstract}
\maketitle
\section{Introduction}
A Markov chain is a model that describes transitions between states in a state space according to certain probabilistic rules. The defining characteristic of a Markov chain is that the transition from one state to another only depends on the current state and the elapsed time, but not on how it arrived there. In other words, a Markov chain is ``memoryless''. Markov chains have many applications, from stock performance and population dynamics to traffic models.
In this paper, we consider Markov chains with a finite state space $\Omega$. The Markov chain can be described pictorially by its \defn{transition diagram}, where the vertices are the states in $\Omega$ and the labelled directed arrows between the vertices indicate the transitions. For example \begin{equation} \label{equation.transition diagram} \raisebox{-2.2cm}{ \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=3cm,
semithick]
\tikzstyle{every state}=[fill=red,draw=none,text=white]
\node[state] (I) {$\mathbf{1}$};
\node[state] (A) [right of=I] {$\mathbf{a}$};
\node[state] (B) [below of=I] {$\mathbf{b}$};
\node[state] (C) [right of =B] {$\mathbf{ab}$};
\path (I) edge [bend left] node {$a$} (A)
edge [bend left] node {$b$} (B)
(A) edge[bend left] node {$a$} (I)
edge[bend left] node {$b$} (C)
(B) edge[bend left] node {$b$} (I)
edge[bend left] node {$a$} (C)
(C) edge[bend left] node {$a$} (B)
edge[bend left] node {$b$} (A); \end{tikzpicture}} \end{equation} is the transition diagram for a Markov chain with state space $\Omega = \{\mathbf{1},\mathbf{a},\mathbf{b},\mathbf{ab}\}$.
Associating the probability $0\leqslant x_a \leqslant 1$ to the transition arrows labeled $a$ and $0\leqslant x_b \leqslant 1$ to the transition arrows labeled $b$ yields a Markov chain assuming that $x_a+x_b=1$. As the picture indicates, if the Markov chain is in state $\mathbf{1}$, then with probability $x_a$ it transitions to state $\mathbf{a}$ and with probability $x_b$ it transitions to state $\mathbf{b}$, and so on. Note that every vertex has one outgoing arrow labeled $a$ and one outgoing arrow labeled $b$.
The \defn{transition matrix} $T$ of a Markov chain is a matrix of dimension $|\Omega|\times |\Omega|$. Let $A$ be the set of all edge labels in the transition diagram. Then the entry $T_{s',s}$ in row $s'\in \Omega$ and column $s\in \Omega$ in $T$ is \[
T_{s',s} = \sum_{\substack{a\in A\\ s \stackrel{a}{\longrightarrow} s'}} x_a, \] where the sum is over all $a\in A$ such that $s \stackrel{a}{\longrightarrow} s'$ is an edge in the transition diagram. For the transition diagram in~\eqref{equation.transition diagram}, the transition matrix is given by (ordering the states as $(\mathbf{1},\mathbf{a},\mathbf{b},\mathbf{ab})$) \[
T = \begin{pmatrix}
0&x_a&x_b&0\\ x_a&0&0&x_b\\ x_b&0&0&x_a\\ 0&x_b&x_a&0
\end{pmatrix}. \] Note that, since every vertex has precisely one outgoing edge labeled $a$ for each $a\in A$ and since $\sum_{a\in A} x_a=1$, the column sums of $T$ are equal to one, namely $\sum_{s'\in \Omega} T_{s',s}=1$. Starting with a distribution $\nu$ (where state $s\in \Omega$ occurs with probability $\nu_s$ and $\sum_{s\in \Omega} \nu_s=1$), the distribution of states after $t$ steps in the Markov chain is $T^t \nu$.
Two fundamental questions are to find the \defn{stationary distribution} and the \defn{mixing time} of the Markov chain. Intuitively speaking, the stationary distribution is the distribution of states that the Markov chain will tend to when the chain runs for a long time. For ergodic Markov chains (to be defined later), the stationary distribution is unique and is the right eigenvector of $T$ of eigenvalue one. The mixing time measures how quickly the distribution approaches the stationary distribution.
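As a concrete illustration (our own helper code, not from the paper), the transition matrix of the diagram in~\eqref{equation.transition diagram} can be assembled directly from its labeled edge list; the probabilities $x_a=0.3$, $x_b=0.7$ are arbitrary choices satisfying $x_a+x_b=1$.

```python
# Transition matrix of the example diagram, built from (state, label, next state)
# triples; x_a = 0.3 and x_b = 0.7 are arbitrary choices with x_a + x_b = 1.
states = ["1", "a", "b", "ab"]
edges = [("1", "a", "a"), ("1", "b", "b"), ("a", "a", "1"), ("a", "b", "ab"),
         ("b", "b", "1"), ("b", "a", "ab"), ("ab", "a", "b"), ("ab", "b", "a")]
x = {"a": 0.3, "b": 0.7}

idx = {s: i for i, s in enumerate(states)}
T = [[0.0] * 4 for _ in range(4)]
for s, lab, s2 in edges:            # T[s'][s] sums x_a over edges s --a--> s'
    T[idx[s2]][idx[s]] += x[lab]

# Column sums equal one, as noted in the text.
col_sums = [sum(T[i][j] for i in range(4)) for j in range(4)]
assert all(abs(c - 1.0) < 1e-12 for c in col_sums)

def step(nu):                       # one step of the chain: nu -> T nu
    return [sum(T[i][j] * nu[j] for j in range(4)) for i in range(4)]

# Starting in state 1, one step moves to a or b with probabilities x_a, x_b.
assert step([1.0, 0.0, 0.0, 0.0]) == [0.0, 0.3, 0.7, 0.0]
```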
In this paper, we review the approach of~\cite{RhodesSchilling.2019a,RhodesSchilling.2019b} to compute the stationary distribution of a finite Markov chain using expansions of semigroups. For example, the Markov chain with transition diagram~\eqref{equation.transition diagram} can be described using the dihedral group \[
D_2 = \langle a,b \mid a^2=b^2=1, (ab)^2=1\rangle, \] generated by two reflections $a,b$. This group is isomorphic to $Z_2 \times Z_2$. State $\mathbf{1}$ corresponds to the identity, state $\mathbf{a}$ corresponds to $a$, state $\mathbf{b}$ to $b$, and state $\mathbf{ab}$ corresponds to $ab=ba$. The transition from state $s$ to $s'$ is given by left multiplication by one of the generators in $A=\{a,b\}$. In general, it is always possible to describe a finite state Markov chain via a semigroup~\cite{LevinPeres.2017, ASST.2015} by the \defn{random letter representation}.
Our focus in this paper is to compute the stationary distribution from the McCammond and Karnofsky--Rhodes expansion of the right Cayley graph of the underlying semigroup $S$ with generators in $A$. The right Cayley graph $\mathsf{RCay}(S,A)$ of the dihedral group $S=D_2$ with generators $A=\{a,b\}$ is depicted in Figure~\ref{figure.right cayley}. In Section~\ref{section.stationary}, we will define the right Cayley graph of a semigroup, introduce its Karnofsky--Rhodes and McCammond expansion, and review the main results from~\cite{RhodesSchilling.2019a,RhodesSchilling.2019b} on how to compute the stationary distribution of the Markov chain from these. We will illustrate the results in terms of two (running) examples.
\begin{figure}
\caption{The right Cayley graph $\mathsf{RCay}(D_2,\{a,b\})$ of the dihedral group
with generators $A=\{a,b\}$. Transition edges are indicated in blue. Double edges mean that
right multiplication by the label for either vertex yields the other vertex.}
\label{figure.right cayley}
\end{figure}
\subsection*{Acknowledgments} We would like to thank Igor Pak for discussions.
The last author was partially supported by NSF grants DMS--1760329 and DMS--1764153. This material is based upon work supported by the Swedish Research Council under grant no. 2016-06596 while the last author was in residence at Institut Mittag--Leffler in Djursholm, Sweden during Spring~2020. The authors thank the organizers of the International conference on semigroups and applications held at Cochin University of Science and Technology in India December 9-12, 2019, where this work was presented.
\section{Stationary distribution from semigroup expansions} \label{section.stationary}
\subsection{Markov chains} \label{section.markov}
Let $\mathcal{M}$ be a finite Markov chain with state space $\Omega$ and transition matrix $T$. A Markov chain is \defn{irreducible} if the transition diagram of the Markov chain is strongly connected. It is \defn{aperiodic} if the greatest common divisor of the cycle lengths in the transition diagram of the Markov chain is one. Furthermore, a Markov chain is \defn{ergodic} if it is both irreducible and aperiodic. By the Perron--Frobenius Theorem, an ergodic Markov chain has a unique stationary distribution $\Psi$ and $T^t \nu$ converges to $\Psi$ as $t\to \infty$ for any initial state $\nu$. In fact, the \defn{stationary distribution} is the right eigenvector of eigenvalue one of $T$ \[
T \Psi = \Psi. \]
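These conditions can be checked mechanically for the diagram in~\eqref{equation.transition diagram}. The sketch below (our own code) verifies irreducibility as strong connectivity and computes the period as the gcd of $\mathrm{level}(u)+1-\mathrm{level}(v)$ over edges $u\to v$, with BFS levels from a fixed state, a standard device. Note that this particular example is irreducible but has period $2$, since every step changes the parity of the word length, so it is not aperiodic; the convergence statement requires breaking such periodicity.

```python
from math import gcd
from functools import reduce

# Directed edges of the example transition diagram, labels dropped.
edges = [("1", "a"), ("1", "b"), ("a", "1"), ("a", "ab"),
         ("b", "1"), ("b", "ab"), ("ab", "b"), ("ab", "a")]
nodes = ["1", "a", "b", "ab"]
adj = {u: [] for u in nodes}
for u, v in edges:
    adj[u].append(v)

def reachable(src):
    seen, stack = {src}, [src]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

# Irreducible = every state reaches every other state.
irreducible = all(reachable(u) == set(nodes) for u in nodes)

# Period = gcd of level[u] + 1 - level[v] over edges u -> v, with BFS levels.
level, frontier = {"1": 0}, ["1"]
while frontier:
    nxt = []
    for u in frontier:
        for v in adj[u]:
            if v not in level:
                level[v] = level[u] + 1
                nxt.append(v)
    frontier = nxt
period = reduce(gcd, (level[u] + 1 - level[v] for u, v in edges), 0)

assert irreducible and period == 2   # irreducible, but not aperiodic
```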
An important question is how quickly does the Markov chain converge to the stationary distribution. In Markov chain theory, distance is usually the total variation distance. The total variation distance between two probability distributions $\nu$ and $\mu$ is defined as \[
\|\nu - \mu\| = \max_{A \subseteq \Omega} |\nu(A) - \mu(A)|. \] For a given small $\epsilon>0$, the \defn{mixing time} $t_\mathsf{mix}$ is the smallest $t$ such that \[
\| T^t \nu - \Psi \| \leqslant \epsilon. \]
Brown and Diaconis~\cite{BrownDiaconis.1998,Brown.2000} analyzed Markov chains associated to left regular bands. In particular, they showed~\cite[Theorem 0]{Brown.2000} that the total variation distance from stationarity after $t$ steps is bounded above by the probability $\mathsf{Pr}(\tau> t)$, where $\tau$ is the first time that the walk hits a certain ideal. The arguments in Brown and Diaconis~\cite{BrownDiaconis.1998} were generalized to Markov chains for $\mathscr{R}$-trivial semigroups~\cite{ASST.2015}. A unified theory for Markov chains for any finite semigroup was developed in~\cite{RhodesSchilling.2019a,RhodesSchilling.2019b}.
As explained in~\cite[Proposition 1.5]{LevinPeres.2017} and~\cite[Theorem 2.3]{ASST.2015}, every finite state Markov chain $\mathcal{M}$ has a random letter representation, that is, a representation of a semigroup $S$ acting on the left on the state space $\Omega$. In this setting, we transition $s \stackrel{a}{\longrightarrow} s'$ with probability $0\leqslant x_a\leqslant 1$, where $s, s'\in \Omega$, $a\in S$ and $s'=a.s$ is the action of $a$ on the state $s$. It is enough to consider the semigroup $S$ generated by the elements $a\in A$ with $x_a>0$.
A two-sided \defn{ideal} $I$ (or ideal for short) is a subset $I \subseteq S$ such that $u I v \subseteq I$ for all $u,v \in S^{\mathbbm{1}}$, where $S^{\mathbbm{1}}$ is the semigroup $S$ with the identity $\mathbbm{1}$ added (even if $S$ already contains an identity). If $I,J$ are ideals of $S$, then $IJ \subseteq I \cap J$, so that $I \cap J \neq \emptyset$. Hence every finite semigroup has a unique \defn{minimal ideal} denoted $K(S)$. An ideal $K(S)$ is \defn{left zero} if $xy=x$ for all $x,y\in K(S)$.
We will determine the stationary distribution from certain expansions of the right Cayley graph $\mathsf{RCay}(S,A)$ of the underlying semigroup $S$ with generators $A$. The Markov chain itself is a random walk on the minimal ideal $K(S)$ by the left action.
\subsection{Right Cayley graphs} We begin with the definition of a graph.
\begin{definition}[Graph] A \defn{labeled directed graph} $\Gamma$ (or \defn{graph} for short) consists of a vertex set $V(\Gamma)$, an edge set $E(\Gamma)$, and a labelling set $A$. An edge $e\in E(\Gamma)$ is a tuple $e=(v,a,w) \in V(\Gamma)\times A \times V(\Gamma)$. We often also write $e\colon v \stackrel{a}{\longrightarrow} w$. \end{definition}
A \defn{path} $p$ from vertex $v$ to vertex $w$ in a graph $\Gamma$ is a sequence of edges \[
p = \left(v = v_0 \stackrel{a_1}{\longrightarrow} v_1 \stackrel{a_2}{\longrightarrow} \cdots
\stackrel{a_\ell}{\longrightarrow} v_\ell =w\right), \] where each tuple $(v_i,a_{i+1},v_{i+1}) \in E(\Gamma)$ for $0\leqslant i<\ell$. The initial (resp. terminal) vertex $v$ (resp. $w$) of $p$ is denoted by $\iota(p)$ (resp. $\tau(p)$). The \defn{length} of $p$ is $\ell(p):= \ell$ and $a_1 \ldots a_\ell$ is called the label of the path.
We can define a preorder $\prec$ on $V(\Gamma)$ by $v \prec w$ if there is a path from $v$ to $w$ in $\Gamma$. This induces an equivalence relation $\sim$ on $V(\Gamma)$, where $v \sim w$ if $v \prec w$ and $w \prec v$. A \defn{strongly connected component} of $\Gamma$ is a $\sim$-equivalence class.
\begin{definition}[Rooted graph] A \defn{rooted graph} is a pair $(\Gamma,r)$, where $\Gamma$ is a graph and $r \in V(\Gamma)$, such that $r \prec v$ for all $v \in V(\Gamma)$. \end{definition}
A path is called \defn{simple} if it visits no vertex twice. Empty (or trivial) paths are considered simple. For a rooted graph $(\Gamma,r)$, let $\mathsf{Simple}(\Gamma,r)$ be the set of simple paths of $\Gamma$ starting at $r$ (including the empty path).
\begin{definition}[Right Cayley graph] Let $(S,A)$ be a finite semigroup $S$ together with a set of generators $A$. The \defn{right Cayley graph} $\mathsf{RCay}(S,A)$ of $S$ with respect to $A$ is the rooted graph with vertex set $V(\mathsf{RCay}(S,A)) = S^{\mathbbm{1}}$, root $r=\mathbbm{1} \in S^{\mathbbm{1}}$, and edges $s \stackrel{a}{\longrightarrow} s'$ for all $(s,a,s') \in S^{\mathbbm{1}} \times A \times S^{\mathbbm{1}}$, such that $s'=sa$ in $S^{\mathbbm{1}}$. \end{definition}
An example of a right Cayley graph is given in Figure~\ref{figure.right cayley}.
For a semigroup $S$, two elements $s,s'\in S$ are in the same $\mathscr{R}$-class if the corresponding right ideals are equal, that is, $s S^{\mathbbm{1}} = s'S^{\mathbbm{1}}$. The strongly connected components of $\mathsf{RCay}(S,A)$ are precisely the $\mathscr{R}$-classes of $S^{\mathbbm{1}}$. In other words, the vertices of a strongly connected component are exactly the vertices that represent the elements in an $\mathscr{R}$-class of $S^{\mathbbm{1}}$. Edges that go between distinct strongly connected components will turn out to play an important role in the Karnofsky--Rhodes expansion.
\begin{definition}[Transition edges] Let $\Gamma$ be a graph. Then $e=(v,a,w) \in E(\Gamma)$ with $v,w\in V(\Gamma)$ and $a\in A$ is a \defn{transition edge} if $v \not \sim w$. In other words, there is no path from $w$ to $v$ in $\Gamma$. \end{definition}
In Figure~\ref{figure.right cayley}, the transition edges are indicated in blue. Note that the edges leaving $\mathbbm{1}$ in the right Cayley graph are always transitional. Other edges might or might not be transitional. In this example $K(S)$ consists of all vertices in $\mathsf{RCay}(S,A)$ except the root $\mathbbm{1}$.
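This example is small enough to verify by machine. The sketch below (our own encoding: $D_2 \cong Z_2 \times Z_2$ as bit pairs, with the adjoined identity $\mathbbm{1}$ written as \texttt{"I"}) builds $\mathsf{RCay}(D_2,\{a,b\})$ and confirms that the only transition edges are the two leaving $\mathbbm{1}$: since $D_2$ is a group, all of $S$ forms a single strongly connected component.

```python
from itertools import product

# D_2 ≅ Z_2 x Z_2: elements are bit pairs, generators a = (1,0), b = (0,1);
# "I" is the adjoined identity of S^1.
gens = {"a": (1, 0), "b": (0, 1)}

def mul(s, g):                       # componentwise addition mod 2
    return ((s[0] + g[0]) % 2, (s[1] + g[1]) % 2)

verts = ["I"] + list(product((0, 1), repeat=2))
edge = {}                            # (vertex, label) -> right multiplication target
for v in verts:
    for lab, g in gens.items():
        edge[(v, lab)] = g if v == "I" else mul(v, g)

def reach(src):                      # vertices reachable from src
    seen, stack = {src}, [src]
    while stack:
        u = stack.pop()
        for lab in gens:
            w = edge[(u, lab)]
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

R = {v: reach(v) for v in verts}
# An edge v --lab--> w is a transition edge iff there is no path from w back to v.
transition = [(v, lab, w) for (v, lab), w in edge.items() if v not in R[w]]
assert len(transition) == 2 and all(v == "I" for v, _, _ in transition)
```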
\subsection{The Karnofsky--Rhodes expansion} \label{section.KR}
To compute explicit expressions for the stationary distributions of Markov chains on finite semigroups, we need the \defn{Karnofsky--Rhodes expansion}~\cite{Elston.1999} of the right Cayley graph $\mathsf{RCay}(S,A)$. See also~\cite[Definition~4.15]{MRS.2011} and~\cite[Section 3.4]{MSS.2015}.
Denote by $(A^+,A)$ the free semigroup with generators in $A$. In other words, $A^+$ is the set of all words $a_1 \ldots a_\ell$ of length $\ell \geqslant 1$ over $A$ with multiplication given by concatenation. Furthermore, let $A^\star = A^+ \cup \{1\}$, so that $A^\star$ is $A^+$ with the identity added; it is the free monoid generated by $A$.
\begin{definition}[Karnofsky--Rhodes expansion] The \defn{Karnofsky--Rhodes expansion} $\mathsf{KR}(S,A)$ is obtained as follows. Start with the right Cayley graph $\mathsf{RCay}(A^+,A)$. Identify the endpoints of two paths in $\mathsf{RCay}(A^+,A)$ \begin{equation*}
p := \left( \mathbbm{1} \stackrel{a_1}{\longrightarrow} v_1 \stackrel{a_2}{\longrightarrow} \cdots \stackrel{a_\ell}
{\longrightarrow} v_\ell \right)
\quad \text{and} \quad
p' := \left( \mathbbm{1} \stackrel{a'_1}{\longrightarrow} v'_1 \stackrel{a'_2}{\longrightarrow} \cdots
\stackrel{a'_{\ell'}}{\longrightarrow} v'_{\ell'} \right) \end{equation*} in $\mathsf{KR}(S,A)$ if and only if the corresponding paths in $\mathsf{RCay}(S,A)$ \begin{equation*}
[p]_S := \left( \mathbbm{1} \stackrel{a_1}{\longrightarrow} [w_1]_S \stackrel{a_2}{\longrightarrow} \cdots
\stackrel{a_\ell}{\longrightarrow} [w_\ell]_S \right)
\quad \text{and} \quad
[p']_S := \left( \mathbbm{1} \stackrel{a'_1}{\longrightarrow} [w'_1]_S \stackrel{a'_2}{\longrightarrow} \cdots
\stackrel{a'_{\ell'}}{\longrightarrow} [w'_{\ell'}]_S \right), \end{equation*} where $w_i=a_1 a_2 \ldots a_i$ and $w_i' = a_1' a_2' \ldots a'_i$, end at the same vertex $[w_\ell]_S = [w'_{\ell'}]_S$ and in addition the set of transition edges of $[p]_S$ and $[p']_S$ in $\mathsf{RCay}(S,A)$ is equal. \end{definition}
An example for $\mathsf{KR}(S,A)$ is given in Figure~\ref{figure.KR}. In this figure, the paths $a^2b$ and $aba$ are equal because they end in the same vertex when projected onto $S$ and they share the same transition edge, which is the first $a$. On the other hand, the paths $ab$ and $ba$ are distinct even though $ab=ba$ in $D_2$ because for the first path the transition edge is the first $a$ and for the second path the transition edge is the first $b$.
\begin{figure}
\caption{ The Karnofsky--Rhodes expansion $\mathsf{KR}(S,A)$ of the right Cayley graph of Figure~\ref{figure.right cayley}.}
\label{figure.KR}
\end{figure}
\begin{proposition}\cite[Proposition 2.15]{RhodesSchilling.2019a} \label{proposition.KR Cayley} $\mathsf{KR}(S,A)$ is the right Cayley graph of a semigroup, also denoted by $\mathsf{KR}(S,A)$. \end{proposition}
\subsection{The McCammond expansion} \label{section.mccammond}
The McCammond expansion~\cite{MRS.2011} of a rooted graph is intimately related to the unique simple path property.
\begin{definition}[Unique simple path property] \label{definition.unique simple path} A rooted graph $(\Gamma,r)$ has the \defn{unique simple path property} if for each vertex $v\in V(\Gamma)$ there is a unique simple path from the root $r$ to $v$. \end{definition}
As proven in~\cite[Proposition 2.32]{MRS.2011}, the unique simple path property is equivalent to $(\Gamma,r)$ admitting a unique directed spanning tree $\mathsf{T}$. Note that the unique simple path property not only depends on the graph $\Gamma$, but also on the chosen root $r$. In this paper, we always choose $r= \mathbbm{1}$. It was established in~\cite[Section 2.7]{MRS.2011} that every rooted graph $(\Gamma,r)$ has a universal simple cover, which has the unique simple path property.
If $p$ and $q$ are paths, $\ell(q)=k \leqslant \ell(p)$, and the first $k+1$ vertices and $k$ edges of $p$ and $q$ agree, we say that $q$ is an \defn{initial segment} of $p$, written $q \subseteq p$.
\begin{definition}[McCammond expansion] \label{definition.mccammond} For a rooted graph $(\Gamma,r)$, define its \defn{McCammond expansion} $(\Gamma^{\mathsf{Mc}},r)$ as the graph with \begin{equation*} \begin{split}
V(\Gamma^{\mathsf{Mc}}) &= \mathsf{Simple}(\Gamma,r),\\
E(\Gamma^{\mathsf{Mc}}) &= \{(p,a,q) \in V(\Gamma^{\mathsf{Mc}}) \times A \times V(\Gamma^{\mathsf{Mc}}) \mid
(\tau(p), a, \tau(q)) \in E(\Gamma),\\
& \qquad \qquad \qquad \ell(q) = \ell(p)+1 \text{ or } (q\subseteq p \text{ and } \ell(q)\leqslant \ell(p))\}. \end{split} \end{equation*} \end{definition}
Note that by definition there are two types of edges $(p,a,q) \in E(\Gamma^{\mathsf{Mc}})$: either $\ell(q)=\ell(p)+1$ or $\ell(q) \leqslant \ell(p)$ as paths in $\mathsf{Simple}(\Gamma,r)$. The spanning tree $\mathsf{T}$ has vertex set $V(\Gamma^{\mathsf{Mc}})$ and only those edges $(p,a,q) \in E(\Gamma^{\mathsf{Mc}})$ such that $\ell(q)=\ell(p)+1$.
From now on choose $r = \mathbbm{1}$. The simple path \[
\mathbbm{1} \stackrel{a_1}{\longrightarrow} v_1 \stackrel{a_2}{\longrightarrow} \cdots \stackrel{a_\ell}{\longrightarrow}
v_\ell \] in $\mathsf{Simple}(\Gamma,\mathbbm{1})$ is naturally indexed by the word $a_1 a_2 \ldots a_\ell$. We will use this labeling for the McCammond expansion of $\mathsf{KR}(S,A)$. In particular, if $a_1 a_2 \ldots a_\ell \in \mathsf{Simple}(\Gamma,\mathbbm{1})$ and $a_1 a_2 \ldots a_\ell a \in \mathsf{Simple}(\Gamma,\mathbbm{1})$, then the edge $a_1 a_2 \ldots a_\ell \stackrel{a}{\longrightarrow} a_1 a_2 \ldots a_\ell a$ is in the spanning tree $\mathsf{T}$. Otherwise we have $a_1 a_2 \ldots a_\ell \stackrel{a}{\longrightarrow} a_1 a_2 \ldots a_k$ for some unique $1\leqslant k < \ell$. Thus under the right action of $a \in A$ on $a_1 a_2 \ldots a_\ell$, we either move forward in the spanning tree or fall backwards somewhere on the unique geodesic from $\mathbbm{1}$ to $a_1 a_2 \ldots a_\ell$, but staying in the same $\mathscr{R}$-class. An example of a McCammond expansion of a Karnofsky--Rhodes graph is given in Figure~\ref{figure.mccammond}.
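The mechanics just described can be sketched in code. The helper below is our own reading of Definition~\ref{definition.mccammond}, consistent with the description above: forward edges extend a simple path by one edge, and every other edge falls back to the unique prefix ending at the revisited vertex. The encoding of a simple path as its tuple of visited vertices assumes at most one edge per (source, label) pair.

```python
def mccammond(root, edges):
    """McCammond expansion of a rooted labeled digraph (a sketch):
    vertices are the simple paths from the root, encoded as tuples of
    visited vertices; edges are given as (source, label, target) triples."""
    out = {}
    for v, lab, w in edges:
        out.setdefault(v, []).append((lab, w))
    start = (root,)
    verts, mc_edges, stack = {start}, set(), [start]
    while stack:
        p = stack.pop()
        for lab, w in out.get(p[-1], []):
            if w not in p:              # forward edge: extending p stays simple
                q = p + (w,)
                if q not in verts:
                    verts.add(q)
                    stack.append(q)
            else:                       # back edge to the unique prefix ending at w
                q = p[: p.index(w) + 1]
            mc_edges.add((p, lab, q))
    return verts, mc_edges

# Tiny check: r --a--> u --b--> w --c--> u (a 2-cycle hanging off the root).
verts, E = mccammond("r", {("r", "a", "u"), ("u", "b", "w"), ("w", "c", "u")})
assert len(verts) == 3 and len(E) == 3
assert (("r", "u", "w"), "c", ("r", "u")) in E   # the unique back edge
```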
\begin{figure}
\caption{ The McCammond expansion $\mathsf{Mc} \circ \mathsf{KR}(D_2,\{a,b\})$ of Figure~\ref{figure.KR}.}
\label{figure.mccammond}
\end{figure}
For a non-simple path in $(\Gamma^{\mathsf{Mc}},\mathbbm{1})$, we can remove loops; it does not matter in which order these loops are removed. This is also known as the Church--Rosser property~\cite{Church.Rosser.1936}, as for a confluent Knuth--Bendix rewriting system. This is proved in~\cite{MRS.2011}.
We denote the McCammond expansion of a semigroup $(S,A)$ with generators in $A$ by $\mathsf{Mc}(S,A)$, which is the McCammond expansion of its right Cayley graph.
\subsection{Stationary distribution}
Denote by $\mathcal{M}(S,A)$ the Markov chain associated with the semigroup $(S,A)$. As mentioned before, this is the random walk on $K(S)$ by the left action. The probability $x_a$ is associated with generator $a\in A$. The stationary distribution for $\mathcal{M}(S,A)$ was computed in~\cite{RhodesSchilling.2019a} using $\mathsf{Mc}\circ \mathsf{KR}(S,A)$. The treatment depends on whether the minimal ideal $K(S)$ is left zero or not. We first start with the former case, stated in~\cite[Corollaries 2.23 \& 2.28]{RhodesSchilling.2019a}.
\begin{theorem} \label{theorem.stationary} If $K(S)$ is left zero, the stationary distribution of the Markov chain $\mathcal{M}(S,A)$ labeled by $w \in K(S)$ is given by \begin{equation*}
\Psi^{\mathcal{M}(S,A)}_w
= \sum_{p} \; \prod_{a\in p} x_a, \end{equation*} where the sum is over all paths $p$ in $\mathsf{Mc}\circ\mathsf{KR}(S,A)$ starting at $\mathbbm{1}$ and ending at a vertex $s$ such that $[s]_S=w$. \end{theorem}
The case when $K(S)$ is not left zero was treated in~\cite[Section 2.9]{RhodesSchilling.2019a} by adding a zero element $\square$ to the semigroup $S$ and the generators $A$. This new generator $\square$ has its own probability $x_\square$. The minimal ideal of the semigroup $(S\cup \{\square\}, A \cup \{\square\})$ is left zero and by taking the limit $x_\square \to 0$, the stationary distribution of the original Markov chain $\mathcal{M}(S,A)$ is retrieved as stated in~\cite[Corollary~2.33]{RhodesSchilling.2019a}.
\begin{theorem} \label{theorem.stationary general} If $K(S)$ is not left zero, the stationary distribution of the Markov chain $\mathcal{M}(S,A)$ labeled by $w \in K(S)$ is given by \begin{equation} \label{equation.psi limit}
\Psi^{\mathcal{M}(S,A)}_w
= \lim_{x_\square \to 0} \bigl(\sum_{p}\; \prod_{a\in p} x_a \bigr), \end{equation} where the sum is over all paths $p$ in $\mathsf{Mc}\circ\mathsf{KR}(S \cup \{\square\},A \cup \{\square\})$ starting at $\mathbbm{1}$ and ending at a vertex $s$ such that $[s]_{S\cup \{\square\}}=w \square$. \end{theorem}
In~\cite{RhodesSchilling.2019b}, we developed a strategy using \defn{loop graphs} to compute the expressions in Theorems~\ref{theorem.stationary} and~\ref{theorem.stationary general} as rational functions in the probabilities $x_a$ for $a\in A$. This is done in two steps: \begin{enumerate} \item Using McCammond's $\mathsf{Pict}$ map, we map the McCammond expansion together with a simple path from $\mathbbm{1}$ to the ideal to a loop graph. See Section~\ref{section.loop}. \item The set of all paths from $\mathbbm{1}$ to the element in the ideal in a loop graph can be written as a Kleene expression. The Kleene expression immediately yields a rational expression for the stationary distribution. See Section~\ref{section.kleene}. \end{enumerate}
\subsection{Loop graphs} \label{section.loop}
A \defn{loop} of size $\ell$ is a connected directed graph with $\ell$ vertices such that each vertex has exactly one incoming and one outgoing edge. In other words, a loop is a directed circle of $\ell$ vertices. A \defn{loop graph} can be defined recursively. Start with a directed straight line path. Recursively, attach a loop of an arbitrary finite size to any existing chosen vertex. Repeat or stop. The edges of the loop graph can be labeled. An example of a loop graph is given in Figure~\ref{figure.loop}.
\begin{figure}
\caption{Loop graph for $(\mathsf{Mc}\circ \mathsf{KR}(D_2 \cup \{\square\},\{a,b,\square\}), ab\square)$ }
\label{figure.loop}
\end{figure}
Recall that an important property of the McCammond expansion is the unique simple path property (see Definition~\ref{definition.unique simple path}). We now define the map $\mathsf{Pict}$ from the set of tuples $(\Gamma,p)$, where $\Gamma$ is a graph with the unique simple path property and $p$ is a simple path in $\Gamma$ starting at $\mathbbm{1}$, to the set of loop graphs. The straight line, that the loop graph is based on, corresponds to the chosen simple path $p$. We follow~\cite[Section 3.2]{RhodesSchilling.2019b}.
\begin{definition}[McCammond] \cite[Definition 3.5]{RhodesSchilling.2019b} \label{definition.pict} Let $\Gamma$ be a graph with the unique simple path property and $p$ a simple path in $\Gamma$ starting at $\mathbbm{1}$. Then $\mathsf{Pict}(\Gamma,p)$ is defined by induction.
\noindent \textbf{Induction basis:} Set $P=p$ and start at vertex $v_0=\mathbbm{1}$.
\noindent \textbf{Induction step:} Suppose one is at vertex $v_0 \neq \tau(p)$ on path $p$. Take the edge $e$ from $v_0$ to $v_1$ in $p$. \begin{enumerate} \item If there is no edge in $\Gamma$ coming into $v_1$ besides $e$, continue with the unique next vertex in $p$, now denoted $v_1$ (with the current vertex $v_1$ relabeled $v_0$), unless $v_1=\tau(p)$. If $v_1=\tau(p)$, then output $\mathsf{Pict}(\Gamma,p)=P$. \item Otherwise there is at least one edge $e' \neq e$ in $\Gamma$ going into $v_1$, given by $e'=\left(v' \stackrel{a}{\longrightarrow} v_1\right)$ for some $a\in A$. Since $\Gamma$ has the unique simple path property by assumption, there must be a unique simple path starting at $\mathbbm{1}$ going to $v_0$ along the path $p$ followed by the path $p'$ starting at $v_0$, going along $e$ to $v_1$, and ending at $v'$. \begin{enumerate} \item Run the induction on $p'$ in a subgraph $\Gamma'$ of $\Gamma$, consisting of all edges and vertices on circuits containing a vertex of $p'$. Note that $p'$ is simple in $\Gamma'$. The output is $P'=\mathsf{Pict}(\Gamma',p')$. \item Modify $P$ by attaching $P'$ disjointly except at $v_1$ and adding edge $e'$ from $v'$ in $P'$ back to~$v_1$. \end{enumerate} \item Repeat step (2) for each edge $e'\neq e$ at vertex $v_1$. \item Continue with the induction step unless $v_1=\tau(p)$. If $v_1=\tau(p)$, then output $\mathsf{Pict}(\Gamma,p)=P$. \end{enumerate} \end{definition}
\begin{remark} The map $\mathsf{Pict}$ has the property that the set of all paths in $\Gamma$ from $\mathbbm{1}$ to $\tau(p)$ is in bijection with the set of all paths in $\mathsf{Pict}(\Gamma,p)$ from $\mathbbm{1}$ to $\tau(p)$ such that the labels of the paths are preserved. \end{remark}
\begin{example} \label{example.loop} Let us compute the example $\mathsf{Pict}(\Gamma,ab\square)$, where $\Gamma = \mathsf{Mc}\circ \mathsf{KR}(D_2\cup \{\square\},\{a,b,\square\})$. The McCammond expansion is given in Figure~\ref{figure.mccammond} if we attach to each vertex an edge labelled $\square$ to the ideal $\{\square\}$. The straight path is $ab\square$. The vertex labelled $a$ in Figure~\ref{figure.mccammond} has four dashed edges coming in. Two are labelled $a$ and two are labelled $b$. By the algorithm described in Definition~\ref{definition.pict}, the long dashed arrows labelled $a$ and $b$ give rise to loops of length 4. The other two dashed arrows give rise to loops of length 2. Repeating the process yields the loop graph in Figure~\ref{figure.loop}. \end{example}
\subsection{Kleene expressions} \label{section.kleene}
Denote the set of all paths in a loop graph $G$ starting at $\mathbbm{1}$ and ending at $\tau(p)$, where $p$ is the straight line the loop graph is based on, by $\mathcal{P}_G$. We represent a path $q\in \mathcal{P}_G$ by \[
\mathbbm{1} \stackrel{a_1}{\longrightarrow} v_1 \stackrel{a_2}{\longrightarrow} \cdots \stackrel{a_k}
{\longrightarrow} v_k=\tau(p), \] where $v_i$ are vertices in $G$ and $a_i \in A$ are the labels on the edges.
There is a simple inductive way to describe $\mathcal{P}_G$ using \defn{Kleene expressions} (see~\cite[Section~1.3]{RhodesSchilling.2019b}). Given a set $L$, define $L^0 = \{\varepsilon \}$ containing only the empty string, $L^1 = L$, and recursively $L^{i+1} = \{wa \mid w \in L^i, a \in L\}$ for each integer $i>0$. Then the \defn{Kleene star} is \[
L^\star = \bigcup_{i\geqslant 0} L^i. \] A Kleene expression only involves letters in $A$, concatenation, unions, and $\star$. To obtain a Kleene expression for $\mathcal{P}_G$, perform the following doubly recursive procedure:
\noindent \textbf{Algorithm 1.} Assume that the straight line path corresponding to the loop graph $G$ is indexed as \begin{center} \begin{tikzpicture}[auto] \node (A) at (0, 0) {$\mathbbm{1}$}; \node (B) at (1.5,0) {$1$}; \node(C) at (3,0) {$2$}; \node(Cp) at (4.5,0) {}; \node(Dp) at (6,0) {}; \node(F) at (5.3,0) {$\cdots$}; \node(D) at (7.5,0) {$\tau(p)$}; \draw[edge,thick] (A) -- (B); \draw[edge,thick] (B) -- (C); \draw[edge,thick] (C) -- (Cp); \draw[edge,thick] (Dp) -- (D); \end{tikzpicture} \end{center}
\noindent \textbf{Induction basis:} Start at vertex $\mathbbm{1}$ and with the empty expression $E$.
\noindent \textbf{Induction step:} Suppose one is at vertex $i\neq \tau(p)$ (or $\mathbbm{1}$) on the straight line path underlying $G$. \begin{enumerate} \item Continue to the next vertex $i+1$ (or $1$) on the straight line path underlying $G$ and append the label $a$ on the edge from $i \stackrel{a}{\longrightarrow} i+1$ (or $\mathbbm{1} \stackrel{a}{\longrightarrow} 1$) to $E$. \item If there are loops $\ell_1,\ell_2,\ldots,\ell_k$ at vertex $i+1$ (or $1$), append the formal expression \[
\{\ell_1,\ell_2,\ldots,\ell_k\}^\star \] to $E$. The loops $\ell_1,\ell_2,\ldots,\ell_k$ are in one-to-one correspondence with the edges coming into vertex $i+1$. \item If $i+1\neq \tau(p)$, continue with the next induction step. Else stop and output $E$. \end{enumerate}
\noindent \textbf{Algorithm 2.} For each symbol $\ell_i$ in the expression for $E$, do the following: \begin{enumerate} \item Consider the loop $\ell_i = \left( v_0 \stackrel{a_1}{\longrightarrow} v_1 \stackrel{a_2}{\longrightarrow} \cdots \stackrel{a_k}{\longrightarrow} v_k=v_0 \right)$ from vertex $v_0$ to $v_0$ in $G$. Consider the subgraph of $G$ with straight line $v_1 \stackrel{a_2}{\longrightarrow} \cdots \stackrel{a_k}{\longrightarrow} v_k$ and all further loops that are attached to any of the vertices $v_i$ in $G$. Attach $\mathbbm{1}$ to $v_1$. The resulting graph $G^{(i)}$ is a new loop graph. Perform Algorithm 1 on $G^{(i)}$ to obtain a Kleene expression $E^{(i)}$. Replace the symbol $\ell_i$ in $E$ by $E^{(i)}$. \item Continue this process until $E$ does not contain any further expressions $\ell_i$ for some loop $\ell_i$, that is, $E$ only contains unions, $\star$ and elements in the alphabet $A$. Then the Kleene expression for $\mathcal{P}_G$ is $E$. \end{enumerate} The resulting expressions can be made into unionless expressions by using \defn{Zimin words} \begin{equation} \label{equation.zimin}
\{a\}^\star = a^\star \qquad \text{and} \qquad \{a, b\}^\star = (a^\star b)^\star a^\star \qquad \text{for $a,b\in A$.} \end{equation} Expressions for larger unions can be obtained by induction using~\eqref{equation.zimin}.
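The Zimin-word identity $\{a,b\}^\star = (a^\star b)^\star a^\star$ can be sanity-checked mechanically: read as a regular expression, $(a^\star b)^\star a^\star$ must match every word over $\{a,b\}$, including the empty word. A small Python check (illustrative only):

```python
import re
from itertools import product

# Union-free form of {a,b}*: the regex (a*b)*a* should accept every word
# over the alphabet {a, b}, the empty word included.
zimin = re.compile(r"(a*b)*a*")

words = ("".join(w) for n in range(8) for w in product("ab", repeat=n))
assert all(zimin.fullmatch(w) for w in words)
```

The decomposition behind the match is exactly the one used in the identity: any word splits into blocks $a^{k}b$ followed by a trailing run of $a$'s.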
\begin{example} \label{example.kleene} Let us continue Example~\ref{example.loop} and compute the Kleene expression for $\mathcal{P}_{\mathsf{Pict}(\Gamma,ab\square)}$ for $\Gamma = \mathsf{Mc}\circ \mathsf{KR}(D_2\cup \{\square\},\{a,b,\square\})$. By Algorithm 1, we obtain the expression \[
E = a \{\ell_1,\ell_2,\ell_3,\ell_4\}^\star b \ell_5^\star \square. \] Using Algorithm 2 repeatedly for $\ell_1,\ldots,\ell_5$, we obtain \[ \begin{split}
\ell_1 &= a (b(aa)^\star b)^\star b (aa)^\star ab,\\
\ell_2 &= a(b(aa)^\star b)^\star a,\\
\ell_3 &= b (a(bb)^\star a)^\star a (bb)^\star ba,\\
\ell_4 &= b(a(bb)^\star a)^\star b,\\
\ell_5 &= a(bb)^\star a. \end{split} \] \end{example}
\subsection{From Kleene expressions to rational functions} \label{section.rational}
Our aim is to evaluate the expressions for $\Psi_w^{\mathcal{M}(S,A)}$ in Theorems~\ref{theorem.stationary} and~\ref{theorem.stationary general}. Let $G$ be a loop graph with straight line path $p$. Define \begin{equation} \label{equation.psi G}
\Psi_G(x_1,\ldots,x_n) = \sum_{q} \prod_{a \in q} x_a, \end{equation} where the sum is over all paths $q$ from $\mathbbm{1}$ to $\tau(p)$ in $G$. In~\cite[Definition 1.3]{RhodesSchilling.2019b} this is also called the \defn{normal distribution} of the loop graph $G$. Note that $\Psi_w^{\mathcal{M}(S,A)}$ is the sum of $\Psi_G(x_1,\ldots,x_n)$ for various loop graphs with straight line paths $p$ such that $\tau(p)=w$ (see also~\cite[Theorem 1.4]{RhodesSchilling.2019b}). The Kleene expressions from Section~\ref{section.kleene} give us an expression for the set of relevant paths $q$ in $G$. Now we discuss how to get from the Kleene expressions to rational functions.
The main idea is that concatenation in Kleene expressions corresponds to products, unions correspond to sums, and $\star$ corresponds to the geometric series. More concretely, for a path $p=a_1\cdots a_k$ we obtain \[
\prod_{a\in p} x_a = x_{a_1} x_{a_2} \cdots x_{a_k}. \] For $\star$-expressions with a single letter $a$, we obtain \[
\sum_{s\in a^\star} \prod_{i \in s} x_i = \sum_{\ell=0}^\infty x_a^\ell = \frac{1}{1-x_a}. \] Similarly \[
\sum_{s \in \{a,b\}^\star} \prod_{i\in s} x_i = \sum_{s \in a^\star (ba^\star)^\star} \prod_{i\in s} x_i
= \frac{1}{1-x_a} \cdot \frac{1}{1-\frac{x_b}{1-x_a}}
= \frac{1}{1-x_a-x_b}. \] In general, using the recursion~\eqref{equation.zimin} we derive by induction \begin{equation} \label{equation.geometric}
\sum_{s \in \{a_1,a_2,\ldots,a_n\}^\star} \prod_{i\in s} x_i = \frac{1}{1-x_{a_1} - x_{a_2} - \cdots - x_{a_n}}. \end{equation}
\begin{example} \label{example.psi limit} Let us now compute \[
\Psi_{ab\square} = \sum_{p\in E} \prod_{a\in p} x_a \] for the Kleene expression $E$ of Example~\ref{example.kleene}. We find (see also~\cite[Example 3.8]{RhodesSchilling.2019b}) \begin{equation*} \begin{split}
\Psi_{ab\square} &= \frac{x_a x_b x_\square}
{\left(1-\frac{x_a^2x_b^2}{\left(1-\frac{x_b^2}{1-x_a^2}\right)(1-x_a^2)} -\frac{x_a^2}{1-\frac{x_b^2}{1-x_a^2}}
-\frac{x_a^2x_b^2}{\left(1-\frac{x_a^2}{1-x_b^2}\right)(1-x_b^2)} -\frac{x_b^2}{1-\frac{x_a^2}{1-x_b^2}}\right)
\left(1-\frac{x_a^2}{1-x_b^2}\right)}\\
&=\frac{x_a x_b x_\square(1-x_b^2)}
{\left(1-\frac{2x_a^2x_b^2}{1-x_a^2-x_b^2} -\frac{x_a^2(1-x_a^2)}{1-x_a^2-x_b^2}
-\frac{x_b^2(1-x_b^2)}{1-x_a^2-x_b^2}\right) (1-x_a^2-x_b^2)}\\
&=\frac{x_a x_b x_\square(1-x_b^2)}
{1-2x_a^2-2x_b^2+(x_a^2-x_b^2)^2}. \end{split} \end{equation*} In the limit as $x_\square \to 0$, we obtain \[
\lim_{x_\square \to 0} \Psi_{ab\square} = \frac{1-x_b^2}{8}. \] In~\cite[Example 3.8]{RhodesSchilling.2019b}, the remaining stationary distributions were computed using that $x_a+x_b+x_\square=1$ and by taking the limit $x_\square \to 0$ \begin{equation*} \begin{split}
\Psi_\square & = x_\square \qquad \qquad \qquad \qquad \qquad \qquad \;\;
\stackrel{x_\square\to 0}{\longrightarrow} \qquad 0\\
\Psi_{a\square} &= \frac{x_a(1-x_a^2-x_b^2)x_\square}{1-2x_a^2-2x_b^2+(x_a^2-x_b^2)^2}
\qquad \stackrel{x_\square\to 0}{\longrightarrow} \qquad \frac{x_a}{4}\\
\Psi_{ab\square} &= \frac{x_a x_b x_\square(1-x_b^2)} {1-2x_a^2-2x_b^2+(x_a^2-x_b^2)^2}
\qquad \stackrel{x_\square\to 0}{\longrightarrow} \qquad \frac{1-x_b^2}{8}\\
\Psi_{aba\square} &= \frac{x_a^2 x_b x_\square}{1-2x_a^2-2x_b^2+(x_a^2-x_b^2)^2}
\qquad \stackrel{x_\square\to 0}{\longrightarrow} \qquad \frac{x_a}{8}\\
\Psi_{abab\square} &= \frac{x_a^2 x^2_b x_\square}{1-2x_a^2-2x_b^2+(x_a^2-x_b^2)^2}
\qquad \stackrel{x_\square\to 0}{\longrightarrow} \qquad \frac{x_a x_b}{8}\\
\Psi_{a^2\square} &= \frac{x_a^2 (1-x_a^2) x_\square}{1-2x_a^2-2x_b^2+(x_a^2-x_b^2)^2}
\qquad \stackrel{x_\square\to 0}{\longrightarrow} \qquad \frac{x_a (1+x_a)}{8}\\
\Psi_{a^2b\square} &= \frac{x_a^2 x_b x_\square}{1-2x_a^2-2x_b^2+(x_a^2-x_b^2)^2}
\qquad \stackrel{x_\square\to 0}{\longrightarrow} \qquad \frac{x_a}{8}\\
\Psi_{a^2ba\square} &= \frac{x_a^3 x_b x_\square}{1-2x_a^2-2x_b^2+(x_a^2-x_b^2)^2}
\qquad \stackrel{x_\square\to 0}{\longrightarrow} \qquad \frac{x_a^2}{8} \end{split} \end{equation*} and similarly for the cases with $a$ and $b$ interchanged by symmetry.
For a word $w$ in $\{a,b\}$, denote by $[w]$ the corresponding element in $D_2$. For example, in $D_2$ we have $[a]=[bab]=[b^2a]$. Note that by Theorem~\ref{theorem.stationary general} \[
\Psi^{\mathcal{M}(D_2,\{a,b\})}_s=\frac{1}{4} \qquad \text{for all $s\in D_2$} \] by summing the appropriate results for $\lim_{x_\square \to 0} \Psi_{w\square}$ as above. For example, \[
\Psi^{\mathcal{M}(D_2,\{a,b\})}_{[ab]} = \lim_{x_\square \to 0} \left(\Psi_{ab\square} + \Psi_{ba\square}
+\Psi_{a^2ba\square} + \Psi_{b^2ab\square} \right)
= \frac{1-x_b^2}{8} + \frac{1-x_a^2}{8} + \frac{x_a^2}{8} + \frac{x_b^2}{8} = \frac{1}{4}. \] This shows that the stationary distribution is uniform. \end{example}
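Both the simplification of $\Psi_{ab\square}$ and the limits above can be double-checked with exact rational arithmetic. The key fact is that the denominator factors as $(1-x_a-x_b)(1+x_a+x_b)(1-x_a+x_b)(1+x_a-x_b)$, so the factor $x_\square = 1-x_a-x_b$ cancels before the limit is taken. A Python sketch using the standard library's exact fractions (the test points are arbitrary):

```python
from fractions import Fraction as F

def denom(xa, xb):
    # denominator of Psi_{ab,square}: 1 - 2x_a^2 - 2x_b^2 + (x_a^2 - x_b^2)^2
    return 1 - 2*xa**2 - 2*xb**2 + (xa**2 - xb**2)**2

# The denominator factors into four linear terms; one of them is
# x_square = 1 - x_a - x_b, which cancels against the numerator.
for xa, xb in [(F(1, 3), F(1, 5)), (F(2, 7), F(3, 8))]:
    assert denom(xa, xb) == (1-xa-xb)*(1+xa+xb)*(1-xa+xb)*(1+xa-xb)

# After the cancellation, setting x_a = 1 - x_b realises the limit
# x_square -> 0 and recovers (1 - x_b^2)/8.
xb = F(1, 4)
xa = 1 - xb
limit = xa*xb*(1-xb**2) / ((1+xa+xb)*(1-xa+xb)*(1+xa-xb))
assert limit == (1 - xb**2) / 8

# The four limits contributing to Psi_{[ab]} indeed sum to 1/4.
assert (1-xb**2)/8 + (1-xa**2)/8 + xa**2/F(8) + xb**2/F(8) == F(1, 4)
```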
\begin{remark} Note that the Markov chain in~\eqref{equation.transition diagram} is not ergodic since the greatest common divisor of the cycle lengths is 2 and not 1. We can make the Markov chain ergodic by introducing a new generator $c$, which acts as the identity. In other words, this would introduce loops at each vertex in~\eqref{equation.transition diagram} labeled $c$. In turn, this would introduce loops labeled $c$ at each vertex in the McCammond expansion in Figure~\ref{figure.mccammond} and the loop graph in Figure~\ref{figure.loop}. This would change the Kleene expressions in Example~\ref{example.kleene} to \[
E = a \{\ell_1,\ell_2,\ell_3,\ell_4,c\}^\star b \{\ell_5,c\}^\star \square \] with \[ \begin{split}
\ell_1 &= a \{b\{ac^\star a,c\}^\star b,c\}^\star b \{ac^\star a,c\}^\star a c^\star b,\\
\ell_2 &= a\{b\{ac^\star a,c\}^\star b,c\}^\star a,\\
\ell_3 &= b \{a\{b c^\star b,c\}^\star a,c\}^\star a \{bc^\star b,c\}^\star b c^\star a,\\
\ell_4 &= b\{a\{bc^\star b,c\}^\star a,c\}^\star b,\\
\ell_5 &= a\{bc^\star b,c\}^\star a. \end{split} \] In this setting, we find \begin{multline*}
\Psi_{ab\square} =
\frac{x_a x_b x_\square}
{\left(1
-\frac{x_a^2x_b^2}{\left(1-\frac{x_b^2}{1-\frac{x_a^2}{1-x_c}-x_c}-x_c\right)\left(1-\frac{x_a^2}{1-x_c}-x_c\right)(1-x_c)}
-\frac{x_a^2}{1-\frac{x_b^2}{1-\frac{x_a^2}{1-x_c}-x_c}-x_c}\right.}\\
\frac{1}{\left.
-\frac{x_a^2x_b^2}{\left(1-\frac{x_a^2}{1-\frac{x_b^2}{1-x_c}-x_c}-x_c\right)\left(1-\frac{x_b^2}{1-x_c}-x_c\right)(1-x_c)}
-\frac{x_b^2}{1-\frac{x_a^2}{1-\frac{x_b^2}{1-x_c}-x_c}-x_c}-x_c\right)
\left(1-\frac{x_a^2}{1-\frac{x_b^2}{1-x_c}-x_c}-x_c\right)}\\
= \frac{x_a x_b x_\square (1+x_b-x_c)(1-x_b-x_c)}{(1-x_a-x_b-x_c)(1+x_a+x_b-x_c)(1-x_a+x_b-x_c)(1+x_a-x_b-x_c)}. \end{multline*} Using $x_a+x_b+x_c+x_\square=1$, we see that the term $(1-x_a-x_b-x_c)$ in the denominator cancels with $x_\square$ in the numerator. Hence in the limit $x_\square \to 0$, we obtain \[
\lim_{x_\square \to 0} \Psi_{ab\square} = \frac{(1+x_b-x_c)(1-x_b-x_c)}{8(x_a+x_b)}. \] As $x_c\to 0$, we recover the result from Example~\ref{example.psi limit}. \end{remark}
\subsection{Mixing time} \label{section.mixing}
Recall from Section~\ref{section.markov} that for a given small $\epsilon>0$, the \defn{mixing time} $t_\mathsf{mix}$ is the smallest $t$ such that \[
\| T^t \nu - \Psi \| \leqslant \epsilon. \] Many references about mixing time can be found in~\cite{Diaconis.2011,LevinPeres.2017}.
Let $\tau$ be the first time that the Markov chain hits the ideal (when starting at $\mathbbm{1}$ in $\mathsf{RCay}(S,A)$ or $\mathsf{Mc}\circ \mathsf{KR}(S,A)$). Denote by $\mathsf{Pr}(\tau> t)$ the probability that $\tau$ is bigger than a given $t$. In~\cite{ASST.2015}, it was shown that $\mathsf{Pr}(\tau>t)$ gives an upper bound on the distance to stationarity, and hence on the mixing time.
\begin{theorem} \cite{ASST.2015} \label{theorem.ASST} Let $S$ be a finite semigroup whose minimal ideal $K(S)$ is a left zero semigroup and let $T$ be the transition matrix of the associated Markov chain. Then \[
\| T^t \nu - \Psi \| \leqslant \mathsf{Pr}(\tau>t). \] \end{theorem}
In~\cite{RhodesSchilling.2020}, we provide a way to compute $\mathsf{Pr}(\tau>t)$ from particular rational expressions for the stationary distribution. Let $G$ be a loop graph and recall $\Psi_G(x_1,\ldots,x_n)$ from~\eqref{equation.psi G}, which is a rational function in $x_1,\ldots,x_n$. Let $\mathsf{Pr}_G(\tau\geqslant t)$ be the probability that the length of the paths in the loop graph $G$ from $\mathbbm{1}$ to $s$ in the ideal is weakly bigger than $t$. Let $\Psi^{\geqslant t}_G(x_1,\ldots,x_n)$ and $\Psi_G^{<t}(x_1,\ldots,x_n)$ be the truncations of the formal power series associated to the rational function $\Psi_G(x_1,\ldots,x_n)$ to terms of degree weakly bigger than $t$ and strictly smaller than $t$, respectively. Note that \[
\Psi_G(x_1,\ldots,x_n) = \Psi_G^{<t}(x_1,\ldots,x_n) + \Psi^{\geqslant t}_G(x_1,\ldots,x_n). \]
\begin{theorem} \cite{RhodesSchilling.2020} \label{theorem.main} Suppose the Markov chain satisfies the conditions of Theorem~\ref{theorem.ASST}. If $\Psi_G(x_1,\ldots,x_n)$ is represented by a rational function such that each term of degree $\ell$ in its formal power series expansion corresponds to a path in $G$ of length $\ell$, we have \[
\mathsf{Pr}_G(\tau \geqslant t) = \frac{\Psi^{\geqslant t}_G(x_1,\ldots,x_n)}{\Psi_G(x_1,\ldots,x_n)}
= 1 - \frac{\Psi_G^{<t}(x_1,\ldots,x_n)}{\Psi_G(x_1,\ldots,x_n)}. \] \end{theorem}
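To illustrate Theorem~\ref{theorem.main} on a toy case, consider a hypothetical loop graph whose only paths are $a^k b$ (a single loop $a$ at the start vertex followed by an exit edge $b$), so that $\Psi_G = x_b/(1-x_a)$ and the path $a^k b$ has length $k+1$. The theorem then yields the closed form $\mathsf{Pr}_G(\tau\geqslant t) = x_a^{t-1}$, which the following sketch confirms numerically by truncating the power series:

```python
def pr_tau_geq(t, xa, xb, n_terms=200):
    """Pr_G(tau >= t) for the toy loop graph with paths a^k b (length k+1),
    where Psi_G = x_b / (1 - x_a).  The terms of degree >= t in the power
    series are x_a^k x_b with k >= t - 1."""
    psi = xb / (1 - xa)
    psi_geq_t = sum(xa**k * xb for k in range(max(t - 1, 0), n_terms))
    return psi_geq_t / psi

# Theorem "main" predicts the closed form x_a**(t-1) for this graph.
xa, xb = 0.6, 0.4
for t in range(1, 8):
    assert abs(pr_tau_geq(t, xa, xb) - xa**(t - 1)) < 1e-10
```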
By Markov's inequality (see for example~\cite{LevinPeres.2017,DevroyeLugosi.2001}), we have \begin{equation} \label{equation.Markov inequality}
\mathsf{Pr}(\tau>t) \leqslant \frac{E[\tau]}{t+1}, \end{equation} where $E[\tau]$ is the expected value for $\tau$, the first time the walk hits the ideal. Hence knowing $E[\tau]$ gives an upper bound on the mixing time. In~\cite{RhodesSchilling.2020}, we find a way to compute $E[\tau]$ from certain representations of the stationary distribution.
\begin{theorem} \cite{RhodesSchilling.2020} \label{theorem.main1} Suppose the Markov chain satisfies the conditions of Theorem~\ref{theorem.ASST}. If $\Psi_G(x_1,\ldots,x_n)$ is represented by a rational function such that each term of degree $\ell$ in its formal power series expansion corresponds to a path in $G$ of length $\ell$, we have \begin{equation} \label{equation.EG}
E_G[\tau] = \left( \sum_{i=1}^n x_i \frac{\partial}{\partial x_i} \right) \ln \Psi_G(x_1,\ldots,x_n). \end{equation} \end{theorem}
\subsection{Another example} \label{section.example}
Let us illustrate the concepts and algorithms in terms of another example. Consider the Markov chain with state space $\Omega=\{\mathbf{1},\mathbf{2}\}$ given by the transition diagram: \begin{equation} \label{equation.markov linear} \raisebox{-1cm}{ \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=3cm,
semithick]
\tikzstyle{every state}=[fill=red,draw=none,text=white]
\node[state] (I) {$\mathbf{1}$};
\node[state] (A) [right of=I] {$\mathbf{2}$};
\path (I) edge [bend left] node {$2,3$} (A)
edge [loop left] node {$1$} (I)
(A) edge[bend left] node {$1,3$} (I)
edge [loop right] node {$2$} (A); \end{tikzpicture}} \end{equation} The transition matrix of this Markov chain is given by \[
T = \begin{pmatrix}
x_1 & x_1+x_3\\
x_2+x_3 & x_2
\end{pmatrix}. \] Pick $A=\{1,2,3\}$ as the set of generators of the semigroup. Then the right Cayley graph of the semigroup that gives the above Markov chain by left multiplication is depicted in Figure~\ref{figure.right cayley linear}, where $K(S)=\{1,2\}$. Indeed, the left action gives $11=1$, $21=2$, and $31=2$, which yields all the edges out of $\mathbf{1}$ in~\eqref{equation.markov linear}. Similarly, the left action gives $12=1$, $22=2$, and $32=1$, which yields all the edges out of $\mathbf{2}$ in~\eqref{equation.markov linear}. \begin{figure}
\caption{ The right Cayley graph $\mathsf{RCay}(S,A)$ of the semigroup that gives the Markov chain in Section~\ref{section.example}.}
\label{figure.right cayley linear}
\end{figure}
The McCammond and Karnofsky--Rhodes expansion of the right Cayley graph is given in Figure~\ref{figure.mccammond linear}. \begin{figure}
\caption{ $\mathsf{Mc}\circ \mathsf{KR}(S,A)$ of $\mathsf{RCay}(S,A)$ in Figure~\ref{figure.right cayley linear} with loops on the ideal omitted.}
\label{figure.mccammond linear}
\end{figure} The Kleene expression for all paths from $\mathbbm{1}$ to $32 \in K(S)$ is $3(33)^\star 2$ and similarly for paths with other endpoints. From this we easily compute \[ \begin{aligned}
\Psi_1 &= x_1 & \qquad \Psi_2&=x_2,\\
\Psi_{32} &= \frac{x_2 x_3}{1-x_3^2} & \qquad \Psi_{31} &= \frac{x_1 x_3}{1-x_3^2},\\
\Psi_{331} &= \frac{x_1 x_3^2}{1-x_3^2} & \qquad \Psi_{332} &= \frac{x_2 x_3^2}{1-x_3^2}. \end{aligned} \] Furthermore, \[ \begin{split}
\Psi^{\mathcal{M}(S,A)}_1 &= \Psi_1 + \Psi_{32} + \Psi_{331} = \frac{x_1+x_2 x_3}{1-x_3^2},\\
\Psi^{\mathcal{M}(S,A)}_2 &= \Psi_2 + \Psi_{31} + \Psi_{332} = \frac{x_2+x_1 x_3}{1-x_3^2}. \end{split} \] Using that $x_1+x_2+x_3=1$, we find indeed that $\Psi^{\mathcal{M}(S,A)}_1+\Psi^{\mathcal{M}(S,A)}_2=1$.
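That $\Psi^{\mathcal{M}(S,A)}$ is indeed the stationary distribution can be verified directly with exact rational arithmetic: it should sum to one and be a fixed point of $T$. A Python sketch (the values of $x_1, x_2$ are arbitrary rational choices):

```python
from fractions import Fraction as F

x1, x2 = F(1, 2), F(1, 3)
x3 = 1 - x1 - x2                      # enforce x_1 + x_2 + x_3 = 1

T = [[x1, x1 + x3],                   # column-stochastic transition matrix
     [x2 + x3, x2]]

psi = [(x1 + x2*x3) / (1 - x3**2),    # Psi^{M(S,A)}_1
       (x2 + x1*x3) / (1 - x3**2)]    # Psi^{M(S,A)}_2

assert psi[0] + psi[1] == 1
for i in range(2):                    # stationarity: T psi = psi, exactly
    assert T[i][0]*psi[0] + T[i][1]*psi[1] == psi[i]
```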
Since the expressions for $\Psi^{\mathcal{M}(S,A)}_1$ and $\Psi^{\mathcal{M}(S,A)}_2$ were computed directly from $\mathsf{Mc} \circ \mathsf{KR}(S,A)$ (or the corresponding loop graphs) without using that $x_1+x_2+x_3=1$, each term of degree $\ell$ in the expansion of the rational function corresponds to a path in the graph. Hence we may use Theorem~\ref{theorem.main1} to give an upper bound on the mixing time \[
E_1[\tau] = \left( x_1 \frac{\partial}{\partial x_1} + x_2 \frac{\partial}{\partial x_2} + x_3 \frac{\partial}{\partial x_3} \right)
\ln \Psi^{\mathcal{M}(S,A)}_1
= \frac{x_1}{x_1+x_2 x_3} + \frac{2 x_2 x_3}{x_1+x_2 x_3} + \frac{2 x_3^2}{1-x_3^2}. \] Inserting $x_1=x_2=x_3=\frac{1}{3}$ yields $E_1[\tau]=E_2[\tau]=\frac{3}{2}$, so that $t_{\mathsf{mix}} \leqslant 3$ if $\epsilon = \frac{1}{2}$.
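The value $E_1[\tau]=\tfrac{3}{2}$ can be cross-checked by first-step analysis on the transient part of $\mathsf{Mc}\circ \mathsf{KR}(S,A)$: from the start vertex and from either vertex of the $(33)^\star$ loop, letters $1$ and $2$ absorb into the ideal, while letter $3$ advances $\mathbbm{1}\to A\to B\to A\to\cdots$ (the state names $A$, $B$ are ours). The sketch below solves the resulting linear equations exactly and compares with the Euler-operator formula:

```python
from fractions import Fraction as F

x1 = x2 = x3 = F(1, 3)

# First-step equations for the expected hitting time of the ideal:
#   e_1 = 1 + x3*e_A,   e_A = 1 + x3*e_B,   e_B = 1 + x3*e_A.
eA = (1 + x3) / (1 - x3**2)           # closed-form solution of the last two
eB = 1 + x3 * eA
e1 = 1 + x3 * eA
assert eA == eB

# Matches the Euler-operator closed form for E_1[tau] at x_i = 1/3.
euler = x1/(x1 + x2*x3) + 2*x2*x3/(x1 + x2*x3) + 2*x3**2/(1 - x3**2)
assert e1 == euler == F(3, 2)
```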
\end{document}
\begin{document}
\section{Introduction}\label{sec:intro} Longitudinal data are prevalent in various fields, including biomedical, behavioral, and social sciences. Two modeling families are commonly employed for analyzing longitudinal data: the mixed-effects modeling framework and the structural equation modeling (SEM) framework. These frameworks use latent variables (referred to as `growth factors' in the SEM literature or `random coefficients' in the mixed-effects modeling framework) based on specified functional forms for the change patterns. For instance, an intercept and a slope are two growth factors that represent linear change. Estimating the means and variances of these growth factors (random coefficients) enables the assessment of between-individual differences in within-individual change processes. The linear longitudinal model is frequently used to evaluate change processes and often serves as a reasonable model for understanding change within limited time spans. However, if the process under investigation is measured over an extended period or more frequently, the trajectory may exhibit nonlinearity with a nonconstant rate of change. In such cases, linear longitudinal models cannot capture the complexity of individual trajectories, necessitating the use of longitudinal models with nonlinear functions to assess between-individual differences in within-individual change.
According to \citet[Chapter~9]{Grimm2016growth}, there are three types of nonlinear longitudinal models: those nonlinear with respect to (1) time, (2) parameters, and (3) growth factors. The third type of nonlinear longitudinal model, often considered intrinsically nonlinear\footnote{The term `intrinsic nonlinearity' was initially used to describe a nonlinear regression function where at least one of the first derivatives of the function with respect to the parameters depends on one or more of the parameters \citep[Chapter~2]{Bates1988Nonlinear}. Subsequently, researchers extended this concept to encompass nonlinear longitudinal models \citep{Harring2014nonlinear}. According to the original definition, both Type II and Type III nonlinear longitudinal models can be considered intrinsically nonlinear. However, `intrinsic nonlinearity' in this project exclusively refers to Type III nonlinear longitudinal models, which necessitate multidimensional integration of the joint likelihood over the growth factors and do not possess a closed-form likelihood function, following \citet{Rohloff2022nonlinear}.}, is defined by having at least one of the derivatives of the growth function with respect to a growth factor depend on one or more of the remaining growth factors. All three types of nonlinear longitudinal models can be fitted in the mixed-effects modeling framework and the latent growth curve modeling (LGCM) framework, which belongs to the SEM family. These frameworks allow for the direct fitting of the first two types of nonlinear models and generate theoretically and empirically equivalent fits \citep{Bauer2003Estimating, Curran2003Multilevel}. Numerous computational tools have been developed to analyze the first two types of nonlinear longitudinal models. For instance, the packages \pkg{lme4} \citep{Bates2014lme4, Bates2015lme4} and \pkg{nlme} \citep{Pinheiro2000nlme, Pinheiro2023nlme} have been created in the mixed-effects modeling framework for analyzing longitudinal processes.
\citet{Proust2017lcmm} developed the package \pkg{lcmm} to fit mixture models, assuming that longitudinal processes originate from multiple latent classes within the mixed-effects modeling framework. Corresponding tools in the SEM framework leveraging package \pkg{OpenMx} \citep{OpenMx2016package, Pritikin2015OpenMx, Hunter2018OpenMx, User2020OpenMx} or software \pkg{Mplus} \citep{Muthen2017Mplus} are well-documented in \citet{Grimm2016growth}.
However, fitting the third type of nonlinear longitudinal model (i.e., an intrinsically nonlinear longitudinal model) necessitates an approximation approach. Specifically, the marginal maximum likelihood estimation (MLE) procedure \citep{Harring2006nonlinear, Toit2009NRC, Cudeck2010nonlinear} and Taylor series expansion \citep{Browne1991Taylor, Preacher2015repara} are employed for mixed-effects modeling and the SEM framework, respectively. Although intrinsically nonlinear longitudinal models have not received as much attention as other longitudinal models, some research has proposed and developed tools for constructing these models. For instance, \citet{Zopluoglu2014knot} developed the R routine \pkg{fitPMM} to implement the marginal MLE procedure described in \citet{Toit2009NRC} for longitudinal models with a linear-linear functional form and random knot, which is a Type III nonlinear longitudinal model, within the mixed-effects modeling framework. Despite the growing interest in intrinsically nonlinear longitudinal models, more comprehensive tools within the SEM framework to effectively analyze Type III nonlinear functions are still needed. In addition, Type III nonlinear models can be complicated and may lead to issues with convergence or the generation of improper solutions. As a result, it is essential to have parsimonious backup solutions available when working with these models. Addressing these gaps, \pkg{nlpsem} provides computational tools to fit intrinsically nonlinear LGCMs and their corresponding parsimonious backup. In addition to LGCMs, this package allows for investigating longitudinal processes in various scenarios within the SEM framework and empowers researchers to understand complex longitudinal processes better and make more informed decisions based on their findings:
\begin{itemize} \item {The research objective is to estimate the rate of change over time for a nonlinear process.} \item {The research goal is to evaluate how a baseline characteristic (i.e., time-invariant covariate, TIC) affects time-dependent states or time-dependent changes.} \item {Multiple longitudinal variables are collected, and the research interest lies in understanding the joint processes and how these processes and their changes are associated over time. There are three possible statistical models to analyze such joint processes:} \begin{itemize} \item {One process is considered a longitudinal outcome, while the other is regarded as a time-varying covariate (TVC) to examine the effect of the TVC on the outcome.} \item {All processes are treated as longitudinal outcomes, focusing on the interrelations of multiple distinct longitudinal processes.} \item {Assuming that a predictor affects a longitudinal outcome both directly and indirectly through a longitudinal mediator, the interest is in investigating direct and indirect effects and, subsequently, total effects.} \end{itemize} \item {Observable multiple classes exist, and the research interest involves assessing differences between groups over time.} \item {Unobserved heterogeneity is present, and the research objective is to identify latent classes and the reasons for between-class and within-class heterogeneity.} \end{itemize}
Another challenge in analyzing longitudinal datasets is unstructured measurement schedules, which widely exist in various research areas, such as biomedical, behavioral, and social sciences. For instance, in clinical trials, site visits may be scheduled on different study days for each patient\footnote{The `study day' is terminology from clinical trials, which is generally considered to start at either randomization or dosing and counted from Day 1. }. Unstructured measurement occasions may also occur when time is measured precisely or self-initiated responses are collected. One analytical method involves dividing the assessment period into multiple time windows, allowing individuals to have up to one response per window. However, earlier studies, such as \citet{Blozis2008coding, Coulombe2015ignoring}, have shown that approximations that ignore individual differences in time may lead to invalid estimations. The mixed-effects modeling framework addresses this challenge by incorporating measurement time as a continuous variable in the model, which is doable with popular packages like \pkg{nlme}, \pkg{lme4}, and \pkg{lcmm}. However, incorporating individual measurement occasions is not as straightforward in the SEM framework, where popular computational tools require wide-format longitudinal data. Fortunately, \citet{Mehta2000people, Mehta2005people} proposed a `definition variables' approach to tackle this challenge, in which definition variables are observed variables used to adjust model parameters to individual-specific values. \citet{Sterba2014individually} and \citet{Liu2022LCSM} have proposed SEM longitudinal models that treat measurement occasions and time intervals between adjacent measurement occasions as definition variables, respectively, which are incorporated into this package.
The goal of the \pkg{nlpsem} package, developed for R \citep{R_package}, is to provide estimations for evaluating longitudinal processes with intrinsically nonlinear functional forms in the SEM framework. The current version of \pkg{nlpsem} offers three commonly used intrinsically nonlinear functional forms: (1) the negative exponential function with an individual ratio of growth rate \citep{Sterba2014individually}, (2) the Jenss-Bayley function with an individual ratio of growth acceleration \citep[Chapter~12]{Grimm2016growth}, and (3) the bilinear spline function (also referred to as linear-linear piecewise function) with an individual knot \citep{Liu2019BLSGM}. These intrinsically nonlinear longitudinal models often have reduced versions, which are no longer intrinsically nonlinear and belong to Type II of nonlinear longitudinal models. The \pkg{nlpsem} package also enables the estimation of these parsimonious models and the models with the quadratic functional form, which are Type I of nonlinear longitudinal models. Although models for linear longitudinal processes are not the primary focus, we also provide functions for them when applicable to researchers interested in utilizing them. The \pkg{nlpsem} package begins with computational tools for univariate longitudinal processes, with or without TICs. It also aims to provide functions for evaluating multivariate longitudinal processes, including three estimations for (1) a longitudinal outcome with a TVC, (2) parallel processes and correlated growth models for multiple longitudinal outcomes, and (3) longitudinal mediation models. The computational tools for multiple group and mixture models, where the sub-model could be any of the models above, are also provided. 
All estimations are built within the R package \pkg{OpenMx} \citep{OpenMx2016package, Pritikin2015OpenMx, Hunter2018OpenMx, User2020OpenMx}, which allows for flexible model specification in the SEM framework and parameter estimation based on observed data using built-in optimizers. Using the definition variables approach, all computational tools in \pkg{nlpsem} accommodate unstructured time frames.
The remainder of this article is organized as follows. Section \ref{sec:spec} provides the specification for each model included in the package, and Section \ref{sec:est} details the full information maximum likelihood (FIML) technique process. Section \ref{sec:Implement} describes the implementation of the seven estimation functions and provides the details of the algorithm for initial values. Section \ref{sec:post} discusses computation details and post-fit analyses. Section \ref{sec:example} presents a series of examples based on the dataset available in the package, and Section \ref{sec:conclude} presents practical and methodological considerations, as well as potential extensions.
\section{Model Specification}\label{sec:spec} This section presents the specifications for each modeling family implemented in this package. The first two subsections describe the two modeling frameworks for univariate longitudinal processes—latent growth curve models (LGCMs) and latent change score models (LCSMs)—both with and without time-invariant covariates. Subsections \ref{spec:TVC}-\ref{spec:MED} concentrate on modeling frameworks for examining multivariate longitudinal processes, including LGCMs or LCSMs with a time-varying covariate (TVC), multivariate LGCMs or LCSMs, and longitudinal mediation models. Lastly, Subsections \ref{spec:Multi} and \ref{spec:GMMs} introduce multiple group and mixture models that relax the single population assumption inherent in the above modeling frameworks, detailing corresponding extensions for heterogeneous populations with observed and latent classes, respectively.
\subsection{Latent Growth Curve Models}\label{spec:LGCMs} The LGCM, a modeling family in the SEM framework, is often employed to analyze growth status over time. As introduced in Section \ref{sec:intro}, the linear and first two types of nonlinear LGCMs are mathematically and empirically equivalent to the corresponding models in the mixed-effects modeling framework. This subsection briefly describes the LGCM, providing an overview of three intrinsically nonlinear functional forms, their corresponding reduced non-intrinsically nonlinear functions, and the other two commonly used functional forms, linear and quadratic curves, with unstructured time frames. A general form of the LGCM with individual measurement occasions is \begin{equation}\label{eq:LGCM1} \boldsymbol{y}_{i}=\boldsymbol{\Lambda}_{i}\times\boldsymbol{\eta}_{i}+\boldsymbol{\epsilon}_{i}, \end{equation} where $\boldsymbol{y}_{i}$ is a $J\times1$ vector of the repeated measurements of individual $i$ (where $J$ is the number of measures). The elements in the vector $\boldsymbol{\eta}_{i}$ ($K\times1$) are often called growth factors, which are latent variables that constitute growth status in LGCMs for the $i^{th}$ individual (where $K$ is the number of growth factors). Furthermore, $\boldsymbol{\Lambda}_{i}$ is a $J\times K$ matrix of the corresponding factor loadings. Note that the subscript $i$ in $\boldsymbol{\Lambda}_{i}$ indicates that the factor loadings, which are functions of measurement times, are at the individual level. The vector $\boldsymbol{\epsilon}_{i}$ ($J\times1$) represents the residuals of individual $i$, usually assumed to be independently and identically normally distributed. That is, $\boldsymbol{\epsilon}_{i}\sim\text{MVN}\big(\boldsymbol{0},\theta_{\epsilon}\boldsymbol{I}\big)$, where $\theta_{\epsilon}$ is the residual variance and $\boldsymbol{I}$ is the $J\times J$ identity matrix.
The vector of growth factors can be further written as deviations from the corresponding mean values \begin{equation}\label{eq:LGCM2_noTIC} \boldsymbol{\eta}_{i}=\boldsymbol{\mu_{\eta}}+\boldsymbol{\zeta}_{i}, \end{equation} where $\boldsymbol{\mu_{\eta}}$ (a $K\times1$ vector) represents growth factor means, and $\boldsymbol{\zeta}_{i}$ (a $K\times1$ vector) denotes deviations of the $i^{th}$ individual from factor means. Suppose it is of interest to evaluate the effects of TICs on the heterogeneity of growth factors and, consequently, on growth curves. In that case, one may further regress growth factors on the covariates, \begin{equation}\label{eq:LGCM2_TIC} \boldsymbol{\eta}_{i}=\boldsymbol{\alpha}+\boldsymbol{B}_{\text{TIC}}\times\boldsymbol{X}_{i}+\boldsymbol{\zeta}_{i}, \end{equation} where $\boldsymbol{\alpha}$ is a $K\times1$ vector of growth factor intercepts (equivalent to the mean vector of growth factors if the TICs are centered), and $\boldsymbol{B}_{\text{TIC}}$ is a $K\times m$ matrix of regression coefficients (with $m$ representing the number of TICs) from TICs to growth factors. The vector $\boldsymbol{X}_{i}$ ($m\times1$) contains the TICs, which could be continuous or categorical, of individual $i$. In the current version of \pkg{nlpsem}, growth factors in Equation \ref{eq:LGCM1} are assumed to follow a (conditional) multivariate normal distribution, a common assumption in practice. That is, $\boldsymbol{\zeta}_{i}\sim \text{MVN}\big(\boldsymbol{0}, \boldsymbol{\Psi}_{\boldsymbol{\eta}}\big)$, where $\boldsymbol{\Psi}_{\boldsymbol{\eta}}$ is a $K\times K$ matrix that represents the growth factor covariance matrix in the scenario without TICs and the unexplained growth factor covariance matrix in the scenario with TICs.
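As a concrete illustration of Equations \ref{eq:LGCM1} and \ref{eq:LGCM2_noTIC}, the following sketch simulates a linear LGCM with individual measurement occasions (the role played by definition variables in the SEM implementation). This is a NumPy illustration of the data-generating model only, not of \pkg{nlpsem} itself, and all numeric values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
N, J, K = 500, 5, 2                            # individuals, occasions, factors

mu_eta = np.array([10.0, 2.0])                 # growth factor means (intercept, slope)
Psi = np.array([[4.0, 0.5],
                [0.5, 1.0]])                   # growth factor covariance Psi_eta
theta_eps = 0.25                               # residual variance

eta = rng.multivariate_normal(mu_eta, Psi, size=N)        # eta_i = mu_eta + zeta_i
t = np.sort(rng.uniform(0.0, 4.0, size=(N, J)), axis=1)   # individual occasions

# Individual factor loadings Lambda_i: a column of ones (intercept) and the
# individual's own measurement times (linear slope).
Lambda = np.stack([np.ones_like(t), t], axis=-1)          # shape (N, J, K)

# y_i = Lambda_i eta_i + eps_i
y = np.einsum('njk,nk->nj', Lambda, eta) \
    + rng.normal(0.0, np.sqrt(theta_eps), size=(N, J))
```

Because every row of `t` is person-specific, each individual gets their own loading matrix, which is exactly what the definition-variables approach encodes in the SEM framework.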
Table \ref{tbl:LGCM_summary} provides the model specification of the LGCM with multiple commonly used functional forms and the corresponding interpretation of the growth coefficients. For instance, consider the linear functional form, the most straightforward function. There are two free coefficients in an individual linear trajectory: the intercept ($\eta_{0i}$) and the linear slope ($\eta_{1i}$). The intercept and slope are allowed to vary from individual to individual. Examining individual differences in the intercept and slope, and thus the between-individual differences in within-individual changes, is one primary interest when exploring longitudinal records \citep{Biesanz2004Linear, Zhang2012LCSM, Grimm2013LCSM1}.
\tablehere{1}
Although the linear function's coefficients are straightforward to interpret, they cannot effectively describe more complex change patterns. When the trajectory of a longitudinal process exhibits a certain degree of nonlinearity with respect to time, particularly if the process is observed over a long period, more complex functional forms with additional growth coefficients are needed to depict change patterns. For instance, an individual quadratic functional form, which includes an acceleration growth factor ($\eta_{2i}$, representing the quadratic component of change) in addition to the intercept ($\eta_{0i}$) and the linear component of change ($\eta_{1i}$), serves as a candidate for describing nonlinear trajectories, particularly when estimating growth acceleration is of interest. An LGCM with a quadratic functional form belongs to Type I of nonlinear longitudinal models, as its factor loadings are functions that depend solely on measurement times. In practice, other nonlinear functions listed in Table \ref{tbl:LGCM_summary} can also provide valuable insights regarding growth status. For example, $\eta_{1i}$ in an individual negative exponential function indicates the individual's growth capacity.
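For the polynomial (Type I) forms, the individual factor-loading matrix is built directly from that individual's measurement times, which enter the model as `definition variables'. A minimal sketch, with illustrative unequally spaced occasions:

```python
import numpy as np

def loadings(t_i, degree):
    """Factor-loading matrix Lambda_i for a polynomial LGCM.

    t_i    : 1-D array of one individual's measurement times
    degree : 1 for linear (intercept + slope), 2 for quadratic (+ acceleration)
    """
    t_i = np.asarray(t_i, dtype=float)
    # Column k holds t^k, so row j is (1, t_ij, ..., t_ij^degree).
    return np.column_stack([t_i ** k for k in range(degree + 1)])

t = np.array([0.0, 1.0, 2.5, 4.0])  # unequal spacing around study waves
print(loadings(t, 1))               # columns: 1, t
print(loadings(t, 2))               # columns: 1, t, t^2
```

Because each individual supplies their own `t_i`, the loadings vary across individuals, which is how the framework of individual measurement occasions is accommodated.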
As illustrated in Table \ref{tbl:LGCM_summary}, the second term of the individual growth curve of the negative exponential function contains two growth factors, $\eta_{1i}$ and $b_{i}$. This implies that the derivatives of the growth curve with respect to $\eta_{1i}$ or $b_{i}$ would be functions of $b_{i}$ and $\eta_{1i}$, respectively (i.e., the model is intrinsically nonlinear). Consequently, the negative exponential function with individual coefficient $b$ cannot be modeled directly in the SEM framework, as the factor loadings matrix cannot depend on a growth factor. In practice, two ways to address this issue are: (1) performing Taylor series expansion to linearize the function \citep{Preacher2015repara}, and (2) using a reduced model assuming that the growth rate ratio is consistent across the entire population. These two approaches can also be employed to handle $c_{i}$ in the Jenss-Bayley function and $\gamma_{i}$ in the bilinear spline function with an unknown knot. After linearization, the factor-loading matrices of these intrinsically nonlinear models depend on the growth factor means and measurement occasions and can thus be fit in the SEM framework. Technical details of linearization via Taylor series expansion for the negative exponential function, Jenss-Bayley function, and bilinear spline function with an unknown knot are documented in \citet{Sterba2014individually}, \citet[Chapter~12]{Grimm2016growth}, and \citet{Liu2019BLSGM}, respectively. Note that the reduced versions of the three models belong to Type II of nonlinear longitudinal models, as their factor loading matrices depend on the population-level coefficient $b$, $c$, or $\gamma$ in addition to measurement times. The coefficient $b$, $c$, or $\gamma$ is estimated as an additional parameter in the corresponding parsimonious model.
The package \pkg{nlpsem} provides a function \code{getLGCM()} that enables the estimation of the three intrinsically nonlinear LGCMs, the corresponding reduced models, and LGCMs with linear or quadratic functional forms.
It is important to note that the bilinear spline function with an unknown knot is a piecewise function that must be unified when implemented in SEM software. Multiple reparameterization techniques are available to unify the expression pre- and post-knot (i.e., $\gamma_{i}$, or $\gamma$ in the reduced model). More technical details can be found in \citet{Harring2006nonlinear}, \citet[Chapter~11]{Grimm2016growth}, and \citet{Liu2019BLSGM}. The function \code{getLGCM()} implements the reparameterization approach developed in \citet{Liu2019BLSGM}, wherein the initial status, first, and second slopes are reparameterized as the measurement at the knot ($\eta^{'}_{0}$), the mean of the two slopes ($\eta^{'}_{1}$), and the half-difference of the two slopes ($\eta^{'}_{2}$). The function \code{getLGCM()} also allows for the transformation of the reparameterized growth factors $\eta^{'}_{0}$, $\eta^{'}_{1}$, and $\eta^{'}_{2}$ to their original forms to obtain coefficients that are interpretable and related to the developmental theory by implementing the (inverse) transformation functions and matrices developed by \citet{Liu2019BLSGM}.
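The reparameterization above can be sketched at the level of the growth coefficients. This is a simplified illustration that assumes the intercept is the status at time $0$ and the knot is at time $\gamma$; the full approach in \citet{Liu2019BLSGM} also transforms the mean vector and variance-covariance matrix of the growth factors, which this sketch omits.

```python
def to_reparam(eta0, s1, s2, gamma):
    """(initial status, slope 1, slope 2) -> (measurement at the knot,
    mean of the two slopes, half-difference of the two slopes)."""
    return eta0 + gamma * s1, (s1 + s2) / 2.0, (s2 - s1) / 2.0

def from_reparam(eta0p, eta1p, eta2p, gamma):
    """Inverse transformation back to the original, interpretable coefficients."""
    s1 = eta1p - eta2p          # first slope
    s2 = eta1p + eta2p          # second slope
    return eta0p - gamma * s1, s1, s2

# Round trip: intercept 10, segment slopes 2 and -1, knot at time 3
print(from_reparam(*to_reparam(10.0, 2.0, -1.0, 3.0), 3.0))  # (10.0, 2.0, -1.0)
```

The round trip illustrates why the transformation is useful: estimation can proceed in the reparameterized space, and the fitted values can be mapped back to coefficients that connect to developmental theory.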
\subsection{Latent Change Score Models}\label{spec:LCSMs} The primary focus of employing LGCMs is to characterize time-dependent status. However, when exploring nonlinear longitudinal processes, one might also be interested in assessing the growth rate (i.e., rate-of-change) \citep{Grimm2013LCSM1, Grimm2013LCSM2, Zhang2012LCSM} and the cumulative value of the growth rate over time \citep{Liu2022LCSM}. In such cases, one needs to turn to LCSMs, which emphasize time-dependent changes. LCSMs, also known as latent difference score models \citep{McArdle2001LCSM1, McArdle2001LCSM2, McArdle2009LCSM}, were developed to incorporate difference equations with discrete measurement occasions into the SEM framework. \citet{Liu2022JB_LCSM, Liu2021PLBGM, Liu2022LCSM} proposed a novel specification for LCSMs that allows for unequal study waves and individual measurement occasions around each wave through the `definition variables' approach. These three works form the theoretical foundation for the computational tool of LCSMs in \pkg{nlpsem}. The model specification of LCSMs begins with the concept of classical test theory \begin{equation}\label{eq:LCSM1} y_{ij}=y^{*}_{ij}+\epsilon_{ij}, \end{equation} where $y_{ij}$, $y^{*}_{ij}$, and $\epsilon_{ij}$ represent the observed score, latent true score, and residual for individual $i$ at time $j$, respectively. This specification suggests that the observed score for an individual at a specific occasion can be decomposed into a latent true score and a residual. At baseline (i.e., $j=1$), the true score corresponds to the growth factor indicating the initial status. 
For each post-baseline occasion (i.e., $j\ge2$), the true score at time $j$ is a linear combination of the score at the previous time point $j-1$ and the amount of true change that occurs between time $j-1$ and time $j$ \begin{equation}\label{eq:LCSM2} y^{*}_{ij}=\begin{cases} \eta_{0i}, & \text{if $j=1$}\\ y^{*}_{i(j-1)}+\delta y_{ij}, & \text{if $j=2, \dots, J$} \end{cases}, \end{equation} where $\delta y_{ij}$ is the amount of change that occurs during the $(j-1)^{th}$ time interval (i.e., from $j-1$ to $j$) for the $i^{th}$ individual. Such interval-specific changes can be further expressed as the product of the corresponding interval-specific slopes and the time interval. \citet{Liu2022LCSM} pointed out two ways to express the interval-specific slopes, depending on research interests when exploring a longitudinal process. If the interest is only in assessing the growth rate over time, one may view a process with $J$ measurements as a linear piecewise function with $J-1$ segments (i.e., a nonparametric functional form) \begin{align} &\delta y_{ij}=dy_{ij}\times(t_{ij}-t_{i(j-1)})\qquad (j=2, \dots, J), \label{eq:LCSM3_nonpara}\\ &dy_{ij}=\eta_{1i}\times\gamma_{j-1}\qquad (j=2, \dots, J), \label{eq:LCSM4_nonpara} \end{align} where $dy_{ij}$ is the slope during the $(j-1)^{th}$ time interval, which can be further expressed as the product of the shape factor indicating the slope in the first time interval and the corresponding relative rate. Suppose one also wants to capture other features of change patterns, such as growth acceleration or growth capacity, and resorts to a parametric nonlinear functional form, such as the quadratic, negative exponential, or Jenss-Bayley function. In this scenario, the slope within an interval is not constant.
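The recursion in Equations \ref{eq:LCSM2}--\ref{eq:LCSM4_nonpara} for the piecewise-linear (nonparametric) case can be sketched as follows; the parameter values are illustrative, and the first relative rate $\gamma_{1}$ is fixed to $1$ for identification, as is conventional.

```python
import numpy as np

def lcsm_true_scores(eta0, eta1, rel_rates, t):
    """Latent true scores under the piecewise-linear (nonparametric) LCSM.

    eta0      : initial status (true score at baseline), Equation eq:LCSM2
    eta1      : shape factor, the slope in the first time interval
    rel_rates : relative rates gamma_1..gamma_{J-1} (gamma_1 fixed to 1)
    t         : measurement times t_1..t_J
    """
    y = [eta0]
    for j in range(1, len(t)):
        dy = eta1 * rel_rates[j - 1]              # slope in interval j-1 -> j
        y.append(y[-1] + dy * (t[j] - t[j - 1]))  # accumulate interval change
    return np.array(y)

t = np.array([0.0, 1.0, 2.0, 4.0])
print(lcsm_true_scores(10.0, 2.0, [1.0, 0.5, 0.25], t))  # [10. 12. 13. 14.]
```

Each true score is the previous true score plus the interval-specific change, so change-from-baseline amounts fall out as cumulative sums of the interval changes.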
\citet{Liu2022LCSM} and \citet{Liu2022JB_LCSM} proposed approximating the interval-specific changes as the product of the instantaneous slope midway through the corresponding interval and the interval length \begin{equation}\label{eq:LCSM3_para} \delta y_{ij}\approx dy_{ij\_mid}\times(t_{ij}-t_{i(j-1)}), \end{equation} in which $dy_{ij\_mid}$ is the instantaneous slope at the midpoint of the $(j-1)^{th}$ time interval (i.e., from time $j-1$ to time $j$). Table \ref{tbl:LCSM_summary} provides the expression of $dy_{ij}$ for the LCSM with the linear piecewise function and $dy_{ij\_mid}$ for the LCSMs with quadratic, negative exponential, and Jenss-Bayley functional forms. Note that the LCSMs also allow for intrinsically nonlinear functional forms. In particular, \citet{Grimm2013LCSM1} and \citet{Liu2022JB_LCSM} first proposed the LCSMs for the negative exponential function with a random ratio of growth rate and the Jenss-Bayley function with a random ratio of growth acceleration, respectively. The package \pkg{nlpsem} provides the function \code{getLCSM()} to allow for the implementation of these two intrinsically nonlinear LCSMs, the corresponding reduced model with a fixed ratio of growth rate or growth acceleration, and the LCSMs with linear piecewise and quadratic functional forms.
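A quick numerical check of the midpoint approximation in Equation \ref{eq:LCSM3_para}, using an illustrative quadratic curve: because the derivative of a quadratic is linear in $t$, the midpoint slope times the interval length reproduces the interval change exactly (for other nonlinear forms it is only an approximation).

```python
# Quadratic growth curve y(t) = eta0 + eta1*t + eta2*t^2 and its derivative.
eta0, eta1, eta2 = 10.0, 2.0, -0.3
y  = lambda t: eta0 + eta1 * t + eta2 * t ** 2
dy = lambda t: eta1 + 2 * eta2 * t           # instantaneous slope

t0, t1 = 1.0, 2.5                            # one measurement interval
mid = (t0 + t1) / 2.0

exact  = y(t1) - y(t0)                       # true interval-specific change
approx = dy(mid) * (t1 - t0)                 # midpoint-slope approximation

print(exact, approx)  # equal up to floating-point rounding
```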
\tablehere{2}
Latent Change Score Models (LCSMs) can also be expressed in matrix form, with a general expression analogous to Equation \ref{eq:LGCM1}. Depending on the scenario, they can be further represented as Equations \ref{eq:LGCM2_noTIC} and \ref{eq:LGCM2_TIC} for cases without and with TICs, respectively. Commonly, the distribution assumptions for LCSMs parallel those made for LGCMs, as introduced in Subsection \ref{spec:LGCMs}. Table \ref{tbl:LCSM_summary} outlines the growth factors and corresponding factor loadings for each model, as provided by the \code{getLCSM()} function. The interpretation of growth factors for each parametric LCSM remains consistent with those of the corresponding LGCM. Notably, the first element in vector $\boldsymbol{\eta}_{i}$ symbolizes the latent variable representing the initial status, while the remaining elements compose the growth rate. Unlike the corresponding LGCM, the factor loadings of an LCSM are expressed differently since an LCSM is constructed from the perspective of accumulated change. Specifically, the factor-loading matrices for LCSMs are contingent upon factors related to interval-specific growth rates and time intervals. These time intervals function as `definition variables' in LCSMs with unstructured time frames.
One advantage of employing LCSMs is that this modeling framework facilitates the evaluation of interval-specific slopes, interval-specific changes, and the amounts of change from baseline. Two approaches exist for performing such evaluations. The first method involves deriving the expressions for the means and variances of interval-specific slopes, interval-specific changes, and the amounts of change from baseline, treating them as non-estimable parameters. Table \ref{tbl:LCSM_summary} provides these expressions, and the function \code{getLCSM()} is capable of generating estimates for these parameters. As an alternative, when applicable, interval-specific slopes, interval-specific changes, and the amounts of change from baseline can be defined as additional latent variables, allowing for the calculation of factor scores in a post-fit manner.
\subsection{Longitudinal Models with Time-varying Covariates}\label{spec:TVC} A fundamental interest in evaluating longitudinal processes involves assessing the impact of a covariate on between-individual differences in within-individual change. As introduced in Subsections \ref{spec:LGCMs} and \ref{spec:LCSMs}, time-invariant covariates (TICs) can be incorporated when analyzing longitudinal processes with LGCMs or LCSMs, allowing them to account for variability in growth factors and, in turn, in growth curves. However, covariates in longitudinal studies are not necessarily TICs. \citet{Grimm2007multi} proposed using LGCMs with a time-varying covariate (TVC) to simultaneously analyze bivariate longitudinal variables, wherein the primary process is treated as the longitudinal outcome, and the other variable is considered the TVC. Table \ref{tbl:TVC_summary} presents the model specification details. The advantages and drawbacks of this specification have been documented in recent research, such as \citet[Chapter~8]{Grimm2016growth} and \citet{Liu2022decompose}. To address some limitations of the model proposed by \citet{Grimm2007multi}, \citet{Liu2022decompose} developed three methods to decompose a TVC into an initial trait and a set of temporal states, enabling separate evaluation of the trait and temporal effects of the TVC on outcome trajectories. All three decomposition methods are based on the LCSM specification with a piecewise linear function introduced in Subsection \ref{spec:LCSMs}. Table \ref{tbl:TVC_summary} provides technical details for the three TVC decomposition methods and their corresponding model specifications.
\tablehere{3}
It is important to note that the initial trait remains consistent across all three decomposition methods, represented by the true score of the baseline value, which signifies the initial status of the TVC (i.e., $\eta^{[x]}_{0i}$ in Table \ref{tbl:TVC_summary}). In contrast, the temporal states differ among the methods. In particular, interval-specific slopes (i.e., $\boldsymbol{dx}_{i}$ in Table \ref{tbl:TVC_summary}, a $J\times1$ vector), interval-specific changes (i.e., $\boldsymbol{\delta x}_{i}$ in Table \ref{tbl:TVC_summary}, a $J\times1$ vector), and change-from-baseline amounts (i.e., $\boldsymbol{\Delta x}_{i}$ in Table \ref{tbl:TVC_summary}, a $J\times1$ vector) characterize the three approaches, respectively. Regressing the growth factors of the longitudinal outcome on the initial trait allows for the evaluation of the baseline effect (i.e., $\boldsymbol{\beta}_{\text{TVC}}$ in Table \ref{tbl:TVC_summary}), while regressing each post-baseline observed measurement of the longitudinal outcome on the corresponding temporal state enables the assessment of the temporal effect (i.e., $\kappa$ in Table \ref{tbl:TVC_summary}). \citet{Liu2022decompose}, which provides the technical details for the three decomposition approaches, together with \citet{Grimm2007multi}, serves as the theoretical foundation for the \code{getTVCmodel()} function, which enables the implementation of the LGCMs in Subsection \ref{spec:LGCMs} and the parametric LCSMs in Subsection \ref{spec:LCSMs} with the four possible ways to include a TVC.
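The three temporal-state definitions can be illustrated on a toy series. In the actual models these states are latent quantities derived from the true scores of the TVC; this sketch applies the same definitions to observed values purely to show how the three sets differ (each set here has one entry per post-baseline occasion).

```python
import numpy as np

def tvc_states(x, t):
    """Three candidate temporal-state sets for a TVC series of one individual.

    x : TVC values x_1..x_J (true scores in the model; observed here for illustration)
    t : the corresponding measurement times t_1..t_J
    Returns interval-specific slopes dx, interval-specific changes delta_x,
    and change-from-baseline amounts Delta_x, one entry per post-baseline occasion.
    """
    x, t = np.asarray(x, float), np.asarray(t, float)
    delta_x = np.diff(x)             # change within each interval
    dx      = delta_x / np.diff(t)   # slope within each interval
    Delta_x = x[1:] - x[0]           # accumulated change since baseline
    return dx, delta_x, Delta_x

dx, delta_x, Delta_x = tvc_states([1.0, 3.0, 4.0, 8.0], [0.0, 1.0, 2.0, 4.0])
print(dx, delta_x, Delta_x)  # [2. 1. 2.] [2. 1. 4.] [2. 3. 7.]
```

The decomposition choice matters substantively: regressing the outcome on `dx`, `delta_x`, or `Delta_x` tests different hypotheses about how the TVC's temporal fluctuations relate to the outcome.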
\subsection{Parallel Processes and Correlated Growth Models}\label{spec:MGMs} Alternatively, all processes under investigation can be viewed as longitudinal outcomes and analyzed using multivariate growth models (MGMs) \citep[Chapter~8]{Grimm2016growth}, also known as parallel process or correlated growth models \citep{McArdle1988Multi, Grimm2007multi}. MGMs allow for the examination of covariances among cross-process growth factors, thereby capturing the relationships among multiple longitudinal processes. A general model specification for a bivariate growth model can be expressed as \begin{equation}\label{eq:MGMs1} \begin{pmatrix} \boldsymbol{y}_{i} \\ \boldsymbol{z}_{i} \end{pmatrix}= \begin{pmatrix} \boldsymbol{\Lambda}_{i}^{[y]} & \boldsymbol{0} \\ \boldsymbol{0} & \boldsymbol{\Lambda}_{i}^{[z]} \end{pmatrix}\times \begin{pmatrix} \boldsymbol{\eta}^{[y]}_{i} \\ \boldsymbol{\eta}^{[z]}_{i} \end{pmatrix}+ \begin{pmatrix} \boldsymbol{\epsilon}^{[y]}_{i} \\ \boldsymbol{\epsilon}^{[z]}_{i} \end{pmatrix}, \end{equation} where $\boldsymbol{z}_{i}$ is a vector of repeated measurements for the second longitudinal outcome of individual $i$. In applications, $\boldsymbol{y}_{i}$ and $\boldsymbol{z}_{i}$ often assume the same functional forms when using MGMs; however, the time structures of $\boldsymbol{y}_{i}$ and $\boldsymbol{z}_{i}$ may differ. 
The growth factors $\begin{pmatrix} \boldsymbol{\eta}^{[y]}_{i} & \boldsymbol{\eta}^{[z]}_{i}\end{pmatrix}^{T}$ can be further expressed as deviations from the corresponding mean values \begin{equation}\label{eq:MGMs2} \begin{pmatrix} \boldsymbol{\eta}^{[y]}_{i} \\ \boldsymbol{\eta}^{[z]}_{i} \end{pmatrix}= \begin{pmatrix} \boldsymbol{\mu}^{[y]}_{\boldsymbol{\eta}} \\ \boldsymbol{\mu}^{[z]}_{\boldsymbol{\eta}} \end{pmatrix}+ \begin{pmatrix} \boldsymbol{\zeta}^{[y]}_{i} \\ \boldsymbol{\zeta}^{[z]}_{i} \end{pmatrix}, \end{equation} where $\boldsymbol{\mu}^{[y]}_{\boldsymbol{\eta}}$ and $\boldsymbol{\mu}^{[z]}_{\boldsymbol{\eta}}$ are $K\times1$ vectors of growth factor mean values, while $\boldsymbol{\zeta}^{[y]}_{i}$ and $\boldsymbol{\zeta}^{[z]}_{i}$ are $K\times1$ vectors of individual $i$ deviations from factor means.
In practice, $\begin{pmatrix} \boldsymbol{\zeta}^{[y]}_{i} & \boldsymbol{\zeta}^{[z]}_{i}\end{pmatrix}^{T}$ is often assumed to follow a multivariate normal distribution \begin{equation}\label{eq:MGMs3} \begin{pmatrix} \boldsymbol{\zeta}^{[y]}_{i} \\ \boldsymbol{\zeta}^{[z]}_{i} \end{pmatrix}\sim \text{MVN}\bigg(\boldsymbol{0}, \begin{pmatrix} \boldsymbol{\Psi}_{\boldsymbol{\eta}}^{[y]} & \boldsymbol{\Psi}_{\boldsymbol{\eta}}^{[yz]} \\ & \boldsymbol{\Psi}_{\boldsymbol{\eta}}^{[z]} \end{pmatrix}\bigg), \end{equation} where $\boldsymbol{\Psi}_{\boldsymbol{\eta}}^{[y]}$ and $\boldsymbol{\Psi}_{\boldsymbol{\eta}}^{[z]}$ are $K\times K$ variance-covariance matrices of outcome-specific growth factors, while $\boldsymbol{\Psi}_{\boldsymbol{\eta}}^{[yz]}$, also a $K\times K$ matrix, indicates the covariances between cross-outcome growth factors. Assuming both $\boldsymbol{y}_{i}$ and $\boldsymbol{z}_{i}$ take linear functions, $\boldsymbol{\Psi}_{\boldsymbol{\eta}}^{[yz]}$ allows for the estimation of intercept-intercept and slope-slope covariance (correlation) and the covariance between intercept and slope across outcomes. To simplify the model specification, the individual outcome-specific residuals, $\boldsymbol{\epsilon}^{[y]}_{i}$ and $\boldsymbol{\epsilon}^{[z]}_{i}$, are assumed to follow identical and independent normal distributions over time, and the residual covariances are assumed to be homogeneous over time.
That is, \begin{equation}\label{eq:MGMs4} \begin{pmatrix} \boldsymbol{\epsilon}^{[y]}_{i} \\ \boldsymbol{\epsilon}^{[z]}_{i} \end{pmatrix}\sim \text{MVN}\bigg(\boldsymbol{0}, \begin{pmatrix} \theta^{[y]}_{\epsilon}\boldsymbol{I} & \theta^{[yz]}_{\epsilon}\boldsymbol{I} \\ & \theta^{[z]}_{\epsilon}\boldsymbol{I} \end{pmatrix}\bigg), \end{equation} where $\theta^{[y]}_{\epsilon}$ and $\theta^{[z]}_{\epsilon}$ are the residual variances of the longitudinal outcomes $\boldsymbol{y}_{i}$ and $\boldsymbol{z}_{i}$, respectively, $\theta^{[yz]}_{\epsilon}$ is the residual covariance between the two processes, and $\boldsymbol{I}$ is a $J\times J$ identity matrix if both $\boldsymbol{y}_{i}$ and $\boldsymbol{z}_{i}$ have $J$ repeated measurements.
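The block structure of Equation \ref{eq:MGMs4} can be sketched as follows; the variance and covariance values are illustrative. The identity blocks encode both assumptions at once: homogeneity over time and cross-process residual covariance only at the same occasion.

```python
import numpy as np

def residual_cov(theta_y, theta_z, theta_yz, J):
    """2J x 2J residual covariance for a bivariate growth model:
    homogeneous variances over time, with cross-process covariance
    confined to same-occasion residual pairs."""
    I = np.eye(J)
    return np.block([[theta_y  * I, theta_yz * I],
                     [theta_yz * I, theta_z  * I]])

S = residual_cov(theta_y=1.0, theta_z=2.0, theta_yz=0.5, J=3)
print(S.shape)  # (6, 6)
```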
Earlier studies have proposed and developed MGMs using LGCMs or LCSMs to describe univariate longitudinal processes. For instance, a multivariate version of an intrinsically nonlinear LGCM, the bilinear spline growth curve model with an unknown knot, and its reduced model were developed and examined in recent work \citep{Liu2021PBLSGM}. Moreover, \citet{Blozis2004MGM} created MGMs with quadratic and negative exponential functions (the reduced version). \citet{Ferrer2003MGM} suggested employing multivariate LCSMs to analyze multivariate nonlinear developmental processes, while \citet{Liu2021PLBGM} developed a multivariate version of the LCSM with a piecewise linear functional form. However, some earlier works were limited to MGMs with structured time frames. The current work extends these previous models. Specifically, the package \pkg{nlpsem} provides the \code{getMGM()} function, which implements a multivariate version of each univariate longitudinal model defined in Tables \ref{tbl:LGCM_summary} and \ref{tbl:LCSM_summary}, within the framework of individual measurement occasions. Additionally, \code{getMGM()} accommodates multiple longitudinal outcomes with varying time structures, tackling a prevalent challenge in analyzing multivariate longitudinal processes by accounting for diverse assessment schedules across outcomes. In clinical trials, for instance, multiple longitudinal outcomes might be collected during specific site visits, with each outcome assessed at different frequencies.
\subsection{Longitudinal Mediation Models}\label{spec:MED} The third type of statistical model employed to investigate multivariate longitudinal processes is the longitudinal mediation model. These models were developed to assess two potential pathways through which a predictor affects an outcome: a direct path from the predictor to the outcome, and an indirect path through a mediator, enabling the simultaneous evaluation of the direct and indirect effects of the predictor on the outcome variable \citep{Baron1986indirect}. Since causal relationships often require time to unfold and causal effects may change over time, longitudinal data are preferred when testing mediation hypotheses. Previous studies, such as \citet{Hayes2009Mediate, MacKinnon2000Mediator, MacKinnon2002Mediator, Cheung2008Mediate, Shrout2002Mediate, Selig2009mediate, Gollob1987mediate, Cole2003mediate, Maxwell2007mediate, MacKinnon2008mediate, Cheong2003mediate, Soest2011mediate}, have detailed the drawbacks of using cross-sectional data and the advantages of employing longitudinal data for mediational analyses. Similar to MGMs, longitudinal mediation models can assess each univariate process. However, unlike MGMs, which evaluate the covariance (correlation) between cross-outcome growth factors, longitudinal mediation models estimate regression coefficients of the unidirectional paths between these cross-outcome growth factors.
\citet{Cheong2003mediate} proposed analyzing longitudinal mediation processes within the LGCM framework and developed a parallel linear growth model to examine how the baseline predictor affects the change in the outcome through the change in the mediator. \citet{Soest2011mediate} extended this model by incorporating additional regression paths. Moreover, \citet[Chapter~8]{MacKinnon2008mediate} noted that the baseline predictor could also be longitudinal. Recently, \citet{Liu2022mediate} developed two parallel bilinear growth models to investigate mediational relationships among multiple nonlinear processes. In the proposed models, the linear-linear piecewise function with an unknown knot is used to capture the underlying change patterns of each longitudinal process, where the slopes of the two linear segments represent the short- and long-term growth rate. Table \ref{tbl:Med_summary} presents the specifications of two parallel linear growth models and two parallel bilinear growth models with all possible paths within the framework of individual measurement occasions. Since the measurement at the knot conveys time-dependent effects and is more meaningful than the baseline value in a longitudinal mediation model, longitudinal mediation models with linear-linear functional forms employ an alternative reparameterization technique, proposed by \citet[Chapter~11]{Grimm2016growth}, to unify the pre- and post-knot expressions. Specifically, the initial status and two segment-specific slopes are reparameterized as the minimum value between $0$ and $t_{ij}-\gamma$, the measurement at the knot, and the maximum value between $0$ and $t_{ij}-\gamma$. The function \code{getMediation()} facilitates the implementation of these four longitudinal mediation models with all possible paths within the framework of individual measurement occasions. 
In addition to estimating the direct effects of the predictor on the outcome variable, the function \code{getMediation()} enables the evaluation of indirect effects on the outcome through the mediator, as well as the total effect. This comprehensive assessment captures the relationships between predictor, mediator, and outcome variables over time, providing a more nuanced understanding of the underlying processes.
\tablehere{4}
\subsection{Multiple-group Models for Longitudinal Processes}\label{spec:Multi} Subsections \ref{spec:LGCMs} and \ref{spec:LCSMs} introduce LGCMs and LCSMs with time-invariant covariates (TICs). When TICs are categorical variables, these models assist in evaluating differences in average growth trajectories between groups but have limitations in examining differences in other aspects of longitudinal processes, such as variability in growth curves, or differences in multivariate longitudinal processes across the groups defined by such manifest categorical TICs. This section introduces the multiple-group modeling framework as an alternative method to assess group differences in any aspect of a longitudinal process, providing additional insights into how and why individuals differ in their development. With a specification of the corresponding model for each manifest group, it is straightforward to extend a one-group model to the multiple-group modeling framework. The function \code{getMGroup()} in \pkg{nlpsem} allows for the implementation of a multiple-group model with each of the models introduced in Subsections \ref{spec:LGCMs}-\ref{spec:MED} as the group-specific model (or sub-model), under the assumption that the change pattern and model type are invariant across groups. A general expression for a multiple-group model with $G$ manifest groups (where $G$ is a finite number) can be expressed as \begin{equation}\label{eq:nultigrp}
p(\text{sub-model})=\sum_{g=1}^{G}p(mc_{i}=g)\times p(\text{sub-model}|mc_{i}=g), \end{equation} where $mc_{i}$ represents the manifest group label for the $i^{th}$ individual, and $p(mc_{i}=g)$ denotes the proportion of the $g^{th}$ manifest group; these proportions sum to $1$ across groups.
\subsection{Mixture Models for Longitudinal Processes}\label{spec:GMMs} Mixture models for analyzing longitudinal processes, often referred to as growth mixture models (GMMs) within the SEM framework, bear similarities to multiple-group models. Like multiple-group models, GMMs relax the single-population assumption that all individuals are homogeneous and can be captured by a single profile of any of the models introduced in Subsections \ref{spec:LGCMs}-\ref{spec:MED}; instead, the population is expressed as a linear combination (mixture) of group-specific sub-models. However, unlike multiple-group models, where group membership is a manifest indicator, the GMMs introduced in this section combine individuals from $G$ heterogeneous latent subpopulations (where $G$ is a finite number). In statistical terms, GMMs can also be viewed as a cluster analysis technique for longitudinal processes. In contrast to traditional clustering methods, such as K-means, GMMs employ a probability-based clustering approach, allowing each individual to belong to multiple latent classes with varying probabilities; an individual is then classified into the class with the highest probability. The vector of these probabilities follows a multinomial distribution and may be further regressed on TICs that inform class formation. For scenarios without such TICs, a general specification of a GMM with $G$ latent classes is given by \begin{equation}\label{eq:GMM1}
p(\text{sub-model})=\sum_{g=1}^{G}\pi(lc_{i}=g)\times p(\text{sub-model}|lc_{i}=g), \end{equation} where $lc_{i}$ is the latent class label for the $i^{th}$ individual, and $\pi(lc_{i}=g)$ is the mixing proportion that divides the sample into $G$ latent classes. Note that the mixing proportions need to satisfy two constraints: (1) $0\le\pi(lc_{i}=g)\le1$, and (2) $\sum_{g=1}^{G}\pi(lc_{i}=g)=1$.
For scenarios with TICs informing class formation, a general specification of a GMM with $G$ latent classes is \begin{align}
&p(\text{sub-model}|\boldsymbol{x}_{gi})=\sum_{g=1}^{G}\pi(lc_{i}=g|\boldsymbol{x}_{gi})\times p(\text{sub-model}|lc_{i}=g),\label{eq:GMM2_1}\\
&\pi(lc_{i}=g|\boldsymbol{x}_{gi})=\begin{cases} \frac{1}{1+\sum_{g=2}^{G}\exp(\beta_{0}^{(g)}+\boldsymbol{\beta}^{(g)T}\boldsymbol{x}_{gi})}& \text{reference group ($g=1$)}\\ \frac{\exp(\beta_{0}^{(g)}+\boldsymbol{\beta}^{(g)T}\boldsymbol{x}_{gi})} {1+\sum_{g=2}^{G}\exp(\beta_{0}^{(g)}+\boldsymbol{\beta}^{(g)T}\boldsymbol{x}_{gi})} & \text{other groups ($g=2,\dots, G$)} \end{cases},\label{eq:GMM2_2} \end{align} where $\boldsymbol{x}_{gi}$ is the TIC vector of the $i^{th}$ individual, and $\beta^{(g)}_{0}$ and $\boldsymbol{\beta}^{(g)}$ are the intercept and coefficient vector of the multinomial logistic function for the $g^{th}$ latent class. Numerous earlier works, such as \citet{Muthen1999GMM, Bouveyron2019GMM, Bauer2003GMM, Muthen2004GMM, Grimm2009FMM, Grimm2010FMM, Liu2019BLSGMM, Liu2021MoE, Liu2022PBLSGMM, Liu2023GMMTVC}, have developed GMMs with various sub-models (i.e., within-class models) and examined their pros and cons. The majority of developed GMMs are for assessing the heterogeneity in a univariate longitudinal process. However, some recent studies have also developed GMMs to explore the heterogeneity in a multivariate longitudinal process \citep{Liu2022PBLSGMM, Liu2023GMMTVC}. The package \pkg{nlpsem} enables the examination of heterogeneity for univariate and multivariate longitudinal processes. In particular, it provides the function \code{getMIX()} to allow for the implementation of a GMM with each of the models introduced in Subsections \ref{spec:LGCMs}-\ref{spec:MED} as the sub-model.
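The multinomial logistic link in Equation \ref{eq:GMM2_2} can be sketched as a softmax with the reference class's logit fixed to zero; the function name and all coefficient values below are illustrative, not part of the \pkg{nlpsem} API.

```python
import numpy as np

def mixing_proportions(x_i, beta0, B):
    """Class-membership probabilities pi(lc_i = g | x_i), class 1 as reference.

    x_i   : TIC vector for one individual (length m)
    beta0 : intercepts for classes 2..G (length G-1)
    B     : coefficient matrix for classes 2..G, shape (G-1, m)
    """
    logits = np.concatenate([[0.0], beta0 + B @ x_i])  # reference logit is 0
    expl = np.exp(logits - logits.max())               # numerically stable softmax
    return expl / expl.sum()

pi = mixing_proportions(x_i=np.array([0.5, -1.0]),
                        beta0=np.array([0.2, -0.4]),
                        B=np.array([[1.0, 0.0], [0.3, 0.8]]))
print(pi, pi.sum())  # three proportions over G = 3 classes, summing to 1
```

By construction, the output satisfies both constraints on the mixing proportions: each entry lies in $[0,1]$ and the entries sum to $1$.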
\section{Model Estimation}\label{sec:est} The \pkg{nlpsem} package is developed by interfacing with the \pkg{OpenMx} package \citep{OpenMx2016package, Pritikin2015OpenMx, Hunter2018OpenMx, User2020OpenMx}. This section briefly introduces the iterative algorithm implemented in \pkg{OpenMx}. Iterative estimation algorithms require initial values for the model parameters. Given a specified model and initialized parameters, the package calculates the expected model-implied mean vector $\boldsymbol{\mu}_{i}$ and variance-covariance structure $\boldsymbol{\Sigma}_{i}$. The expectation is then compared to the raw data to determine the goodness of fit for the model. This comparison is based on the discrepancy between the model-implied mean vector and variance-covariance structure and the observed data distribution, also known as the function value. \pkg{OpenMx} \citep{OpenMx2016package, Pritikin2015OpenMx, Hunter2018OpenMx, User2020OpenMx} supports multiple fit functions; however, the models implemented in \pkg{nlpsem} exclusively use the maximum likelihood fit function (known as full information maximum likelihood, or FIML, among SEM researchers). The FIML technique enables the specification of a separate likelihood calculation for each individual, accounting for individual heterogeneity in the likelihood contributions. This feature also allows for the analysis of dropout data under the missing at random assumption by leveraging available information.
The function value of the FIML technique is $-2\ln{L}$, where $L$ is the likelihood function across all $N$ individuals. This article denotes the parameter space as $\Theta$. For a single-group analysis (i.e., the models introduced in Subsections \ref{spec:LGCMs}-\ref{spec:MED}), assuming there are $p$ variables measured for the $i^{th}$ individual in a vector $\boldsymbol{\omega}_{i}$, $L$ is defined as follows \begin{equation}\label{eq:lik1}
L(\Theta)=\prod^{N}_{i=1}\frac{\exp{\big(-\frac{1}{2}(\boldsymbol{\omega}_{i}-\boldsymbol{\mu}_{i})^{T}\boldsymbol{\Sigma}^{-1}_{i}(\boldsymbol{\omega}_{i}-\boldsymbol{\mu}_{i})\big)}}{\sqrt{(2\pi)^{p}|\boldsymbol{\Sigma}_{i}|}}, \end{equation} where $\boldsymbol{\mu}_{i}$ and $\boldsymbol{\Sigma}_{i}$ are the model-implied mean vector and variance-covariance structure of the $i^{th}$ individual. For a multiple-group model with $G$ manifested groups (i.e., the model introduced in Subsection \ref{spec:Multi}), the likelihood function across all individuals is defined as \begin{equation}\label{eq:lik2}
L(\Theta)=\prod^{N}_{i=1}\bigg(\sum^{G}_{g=1}p(mc_{i}=g)\times\frac{\exp{\big(-\frac{1}{2}(\boldsymbol{\omega}^{(g)}_{i}-\boldsymbol{\mu}^{(g)}_{i})^{T}\boldsymbol{\Sigma}^{(g)-1}_{i}(\boldsymbol{\omega}^{(g)}_{i}-\boldsymbol{\mu}^{(g)}_{i})\big)}}{\sqrt{(2\pi)^{p}|\boldsymbol{\Sigma}^{(g)}_{i}|}}\bigg), \end{equation} where $\boldsymbol{\mu}^{(g)}_{i}$ and $\boldsymbol{\Sigma}^{(g)}_{i}$ are the submodel-implied mean vector and variance-covariance structure of the $i^{th}$ individual in the $g^{th}$ manifested group. For a growth mixture model with $G$ latent classes (i.e., the models introduced in Subsection \ref{spec:GMMs}), the likelihood function across all $N$ individuals is defined as \begin{equation}\label{eq:lik3}
L(\Theta)=\prod^{N}_{i=1}\bigg(\sum^{G}_{g=1}\pi(lc_{i}=g)\times\frac{\exp{\big(-\frac{1}{2}(\boldsymbol{\omega}_{i}-\boldsymbol{\mu}^{(g)}_{i})^{T}\boldsymbol{\Sigma}^{(g)-1}_{i}(\boldsymbol{\omega}_{i}-\boldsymbol{\mu}^{(g)}_{i})\big)}}{\sqrt{(2\pi)^{p}|\boldsymbol{\Sigma}^{(g)}_{i}|}}\bigg) \end{equation} and \begin{equation}\label{eq:lik4}
L(\Theta)=\prod^{N}_{i=1}\bigg(\sum^{G}_{g=1}\pi(lc_{i}=g|\boldsymbol{x}_{gi})\times\frac{\exp{\big(-\frac{1}{2}(\boldsymbol{\omega}_{i}-\boldsymbol{\mu}^{(g)}_{i})^{T}\boldsymbol{\Sigma}^{(g)-1}_{i}(\boldsymbol{\omega}_{i}-\boldsymbol{\mu}^{(g)}_{i})\big)}}{\sqrt{(2\pi)^{p}|\boldsymbol{\Sigma}^{(g)}_{i}|}}\bigg) \end{equation} for scenarios without and with the TICs informing cluster formation, in which $\boldsymbol{\mu}^{(g)}_{i}$ and $\boldsymbol{\Sigma}^{(g)}_{i}$ are the submodel-implied mean vector and variance-covariance structure of the $i^{th}$ individual in the $g^{th}$ latent class. Optimizers iteratively minimize the FIML function value until convergence is reached (i.e., the status code $0$ in \pkg{OpenMx} \citep{OpenMx2016package, Pritikin2015OpenMx, Hunter2018OpenMx, User2020OpenMx}). Three optimization engines are available in \pkg{OpenMx}: NPSOL, SLSQP, and CSOLNP \citep{OpenMx2016package, Pritikin2015OpenMx, Hunter2018OpenMx, User2020OpenMx}, all of which can be implemented in the \pkg{nlpsem} package.
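As a concrete illustration of Equation \ref{eq:lik1}, the single-group $-2\ln{L}$ can be accumulated individual by individual. The following is a minimal Python sketch (the package itself works in R through \pkg{OpenMx}), assuming complete data and using only NumPy:

```python
import numpy as np

def fiml_neg2_loglik(omega, mu, Sigma):
    """-2 ln L of Equation (eq:lik1): the sum of per-individual
    multivariate-normal log-densities, each with its own
    model-implied mean vector mu[i] and covariance Sigma[i]."""
    total = 0.0
    for w, m, S in zip(omega, mu, Sigma):
        p = len(w)
        diff = w - m
        _, logdet = np.linalg.slogdet(S)
        quad = diff @ np.linalg.solve(S, diff)
        total += -0.5 * (p * np.log(2.0 * np.pi) + logdet + quad)
    return -2.0 * total

# Toy check: two individuals, p = 2 repeated measures each,
# with standard-normal model-implied moments.
omega = [np.array([1.0, 2.0]), np.array([0.5, 1.5])]
mu = [np.zeros(2), np.zeros(2)]
Sigma = [np.eye(2), np.eye(2)]
val = fiml_neg2_loglik(omega, mu, Sigma)
```

Because each individual carries its own $\boldsymbol{\mu}_{i}$ and $\boldsymbol{\Sigma}_{i}$, the same loop accommodates individually varying measurement occasions, and under FIML a row with missing entries simply contributes its observed sub-vector.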
\section{Implementation}\label{sec:Implement} This package currently includes seven estimation functions: \begin{enumerate} \item {\code{getLGCM()} for latent growth curve models (LGCMs), without or with time-invariant covariates (TICs),} \item {\code{getLCSM()} for latent change score models (LCSMs), without or with TICs,} \item {\code{getTVCmodel()} for LGCMs or LCSMs with a time-varying covariate (TVC),} \item {\code{getMGM()} for the multivariate version of LGCMs or LCSMs,} \item {\code{getMediation()} for longitudinal mediation analysis,} \item {\code{getMGroup()} for the multiple-group version of models (1)-(5), and} \item {\code{getMIX()} for the mixture-model version of models (1)-(5).} \end{enumerate} The optimization for the seven estimation functions relies on the built-in optimizers of the \pkg{OpenMx} package. This section describes the calls of these estimation functions, presents the process of obtaining additional, non-estimable parameters, and details the initialization of the iterative optimizers.
\subsection{Function getLGCM()}\label{getLGSM} The call of \code{getLGCM()} is
\code{getLGCM(dat, t_var, y_var, curveFun, intrinsic = T, records, growth_TIC = NULL, starts = NULL, res_scale = NULL, tries = NULL, OKStatus = 0, jitterD = "runif", loc = 1, scale = 0.25, paramOut = F, names = NULL)}.
The \code{dat} argument represents the input dataset, which must be in a wide format (i.e., one row per individual) to satisfy the requirements of the \pkg{OpenMx} package \citep{OpenMx2016package, Pritikin2015OpenMx, Hunter2018OpenMx, User2020OpenMx}. This dataset typically comprises repeated measurements, corresponding measurement occasions, and time-invariant covariates (TICs) that account for the variability of growth factors, if present. The \code{y_var} and \code{t_var} arguments denote the prefixes of the column names for the measurement value and time at each point, respectively. Together with \code{records}, these arguments delineate the column name of the longitudinal outcome and the measurement occasions over time. For instance, if the dataset contains four repeated measurements ($Y_{1}$, $Y_{2}$, $Y_{3}$, and $Y_{4}$) recorded at $T_{1}$, $T_{2}$, $T_{3}$, and $T_{4}$, the user needs to specify \code{t_var = "T", y_var = "Y", records = 1:4}. The \code{curveFun} argument designates the functional form, chosen from any function listed in Table \ref{tbl:LGCM_summary}, to define the change patterns of the repeated measurements. The \code{intrinsic} argument is a boolean flag that indicates whether an intrinsically nonlinear longitudinal model (i.e., the third type of nonlinear longitudinal model in the current package) is constructed. The choice of \code{curveFun} and \code{intrinsic} often depends on the trajectory shapes of raw data and the research questions of interest.
The \code{growth_TIC} argument specifies the TICs added to the model, if any, to account for the variability of growth factors. By default, it is set to \code{NULL}, indicating that no growth TICs are incorporated into the fitted LGCM. The \code{starts} argument establishes the set of initial values for the iterative algorithms. If set to the default value of \code{NULL}, the estimation function will derive a set of initial values based on the raw data. Alternatively, the \code{getLGCM()} function allows for user-specified initial values. When \code{starts = NULL}, the \code{res_scale} argument is required, as it defines the proportion of variance unexplained by the growth factors and aids in determining the initial values of the residual variance $\theta_{\epsilon}$.
The \code{tries} argument establishes the number of additional attempts to fit a specified model in \pkg{OpenMx} (often referred to as a \code{mxModel} object). With its default value of \code{NULL} (indicating no additional attempts beyond the initial one), the \code{getLGCM()} function will call the \code{mxRun()} function to send a \code{mxModel} object to an optimizer and return the free parameters with their optimized values. Assigning a numerical value to \code{tries} permits that many additional attempts to fit a model until the optimizer produces an acceptable solution defined by the \code{OKStatus} argument or the maximum number of attempts is reached. When assigning a numeric value to \code{tries}, the user must also define the \code{jitterD}, \code{loc}, \code{scale}, and \code{OKStatus} arguments. The \code{jitterD} argument, in conjunction with \code{loc} and \code{scale}, defines the distribution, along with its location and scale parameters, from which random values are drawn to perturb initial values between attempts. The \code{jitterD} argument is a character string, with supported values including \code{"runif"} (uniform distribution), \code{"rnorm"} (normal distribution), or \code{"rcauchy"} (Cauchy distribution). The \code{loc} and \code{scale} arguments are numeric inputs. By default, the setup employs a uniform distribution with a location parameter of $1$ and a scale parameter of $0.25$, adhering to the default settings in the \pkg{OpenMx} package \citep{OpenMx2016package, Pritikin2015OpenMx, Hunter2018OpenMx, User2020OpenMx}. The \code{OKStatus} argument is an integer vector that defines the acceptable optimizer status codes, with its default value being $0$, indicating successful optimization without error returns. The \code{paramOut} argument specifies whether the point estimates and standard errors of the free and additional non-estimable parameters will be extracted from the optimized \code{mxModel}. 
If \code{paramOut = T}, parameter names must be supplied through the \code{names} argument.
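The retry-with-jitter loop controlled by \code{tries}, \code{jitterD}, \code{loc}, and \code{scale} can be pictured as follows. This Python sketch is a hypothetical reading of the mechanism, not \pkg{OpenMx}'s actual code: each start value is multiplied by a draw from the named distribution, so that the defaults (\code{"runif"}, \code{loc = 1}, \code{scale = 0.25}) leave starts within $\pm 25\%$ of their previous values.

```python
import math
import random

def jitter_starts(starts, jitterD="runif", loc=1.0, scale=0.25, rng=None):
    """Hypothetical jitter scheme: multiply each start value by a draw
    from the named distribution (uniform draws span loc +/- scale).
    The exact perturbation used by OpenMx's retry logic may differ."""
    rng = rng or random.Random(0)
    draw = {
        "runif": lambda: rng.uniform(loc - scale, loc + scale),
        "rnorm": lambda: rng.gauss(loc, scale),
        "rcauchy": lambda: loc + scale * math.tan(math.pi * (rng.random() - 0.5)),
    }[jitterD]
    return [s * draw() for s in starts]

# Each additional attempt perturbs the previous starts until the
# optimizer reports an acceptable status code (OKStatus).
new_starts = jitter_starts([1.0, -2.0, 0.5])
```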
\subsection{Function getLCSM()}\label{getLCSM} The call of \code{getLCSM()} is
\code{getLCSM(dat, t_var, y_var, curveFun, intrinsic = T, records, growth_TIC = NULL, starts = NULL, res_scale = NULL, tries = NULL, OKStatus = 0, jitterD = "runif", loc = 1, scale = 0.25, paramOut = F, names = NULL)}.
The arguments in \code{getLCSM()} mirror those in \code{getLGCM()}, with a notable distinction in the \code{curveFun} argument, which supports different functional forms for \code{getLCSM()}: nonparametric, quadratic, negative exponential, and Jenss-Bayley curves.
\subsection{Function getTVCmodel()}\label{getTVCmodel} The call of \code{getTVCmodel()} is
\code{getTVCmodel(dat, t_var, y_var, curveFun, intrinsic = T, records, y_model, TVC, decompose, growth_TIC = NULL, starts = NULL, res_scale = NULL, res_cor = NULL, tries = NULL, OKStatus = 0, jitterD = "runif", loc = 1, scale = 0.25, paramOut = F, names = NULL)}.
In comparison to the \code{getLGCM()} and \code{getLCSM()} functions, the \code{getTVCmodel()} function incorporates four additional arguments: \code{y_model}, \code{TVC}, \code{decompose}, and \code{res_cor}. Specifically, the \code{y_model} argument establishes the model describing longitudinal outcome change patterns, offering options `LGCM' and `LCSM'. For example, if \code{y_model = "LGCM"} is specified, an LGCM will be fit to capture these change patterns. The \code{TVC} and \code{records} arguments together define the column names for the TVC over time. For example, to specify four repeated TVC measurements, $x_{1}$, $x_{2}$, $x_{3}$, and $x_{4}$, one should set \code{TVC = "x"}. The \code{decompose} argument, an integer, indicates the decomposition option, with supported values of $0$ (adding a TVC directly), $1$ (adding a decomposed TVC with interval-specified slopes), $2$ (adding a decomposed TVC with interval-specified changes), and $3$ (adding a decomposed TVC with change from baseline values). When \code{starts = NULL} and fitting an LGCM or an LCSM with a decomposed TVC, the \code{res_cor} argument provides the initial value for the residual correlation between the longitudinal outcome and the TVC. Additionally, the \code{res_scale} argument demands a single numeric value for the initial value of the proportion of variance unexplained by the growth factors; alternatively, it requires a vector with two numeric values for the proportion of unexplained variance for both the TVC and the longitudinal outcome.
\subsection{Function getMGM()}\label{getMGM} The call of \code{getMGM()} is
\justifying{ \code{getMGM(dat, t_var, y_var, curveFun, intrinsic = T, records, y_model, starts =\\ NULL, res_scale = NULL, res_cor = NULL, tries = NULL, OKStatus = 0, jitterD =\\ "runif", loc = 1, scale = 0.25, paramOut = F, names = NULL)}.}
In the \code{getMGM()} function, parameters are akin to those found in the three preceding functions but have been adapted to accommodate multiple longitudinal outcomes. In contrast to the scenario of univariate longitudinal processes where \code{y_var} (\code{t_var}) and \code{records} are each represented by a single character string or a numeric vector, the analysis of multivariate longitudinal processes necessitates their specification as a vector of character strings and a list of numeric vectors, respectively. The length of \code{y_var}, \code{t_var}, and \code{records} must be identical, as it reflects the number of longitudinal processes being analyzed simultaneously within the multivariate model. Each element in \code{y_var}, in conjunction with the relevant index vector from the \code{records} argument, defines the repeated measurements for the associated longitudinal outcome. Similarly, each \code{t_var} element, when paired with the corresponding index vector from the \code{records} argument, establishes the measurement occasions for the respective longitudinal outcome. This configuration enables the \code{getMGM()} function to effectively analyze multiple longitudinal processes simultaneously while retaining flexibility in model specification. Suppose the input data contains two longitudinal processes: (1) $Y_{1}$, $Y_{2}$, $Y_{4}$, and $Y_{5}$ recorded at $t_{1}$, $t_{2}$, $t_{4}$, and $t_{5}$, and (2) $Z_{1}$, $Z_{3}$, $Z_{4}$, and $Z_{5}$ recorded at $t_{1}$, $t_{3}$, $t_{4}$, and $t_{5}$. To analyze this bivariate longitudinal process using all available data, one needs to set \code{y_var = c("Y", "Z")}, \code{t_var = c("T", "T")}, and \code{records = list(c(1:2, 4:5), c(1, 3:5))}. 
The \code{res_scale} argument in the \code{getMGM()} function should have the same length as \code{y_var}, \code{t_var}, and \code{records}: it is a numeric vector in which each value specifies the proportion of variance in the corresponding longitudinal process left unexplained by the growth factors, from which the initial value of that process's residual variance is derived. In addition, \code{res_cor} provides the initial value(s) of the residual correlation(s) between any two longitudinal processes under investigation.
\subsection{Function getMediation()}\label{getMed} The call of \code{getMediation()} is
\code{getMediation(dat, t_var, y_var, m_var, x_type, x_var, curveFun, records, starts = NULL, res_scale = NULL, res_cor = NULL, tries = NULL, OKStatus = 0, jitterD = "runif", loc = 1, scale = 0.25, paramOut = F, names = NULL)}.
In the \code{getMediation()} function, the \code{y_var} and \code{m_var} arguments, in conjunction with \code{records}, determine the column names for the longitudinal outcome and the longitudinal mediator, respectively. This function accommodates both baseline and longitudinal predictors, specified by \code{x_type = "baseline"} and \code{x_type = "longitudinal"}. For a longitudinal mediation model with a baseline predictor, a two-element character vector is required for the \code{t_var} argument, while a list containing two numeric vectors is needed for the \code{records} argument, specifying the measurement times for the outcome and mediator. In a model with a longitudinal predictor, an additional numeric vector for the \code{records} argument is necessary, allowing the \code{x_var} argument and the third numeric vector to define the column names of the predictors. The \code{t_var} argument then requires an extra element, along with the third numeric vector of the \code{records} argument, to delineate the column names of measurement occasions of the longitudinal predictor. Similarly, the \code{res_scale} argument should contain two elements for a model with a baseline predictor and three elements for a model with a longitudinal predictor.
\subsection{Function getMGroup()}\label{getMGroup} The call of \code{getMGroup()} is
\code{getMGroup(dat, grp_var, sub_Model, t_var, records, y_var, curveFun, intrinsic \\= NULL, y_model = NULL, m_var = NULL, x_var = NULL, x_type = NULL, TVC = NULL, decompose = NULL, growth_TIC = NULL, starts = NULL, res_scale = NULL, res_cor \\= NULL, tries = NULL, OKStatus = 0, jitterD = "runif", loc = 1, scale = 0.25, \\paramOut = F, names = NULL)}.
The \code{getMGroup()} function necessitates the inclusion of two supplementary arguments, \code{grp_var} and \code{sub_Model}. The \code{grp_var} argument specifies the column of the grouping variable that defines the manifested groups in the model. The \code{sub_Model} argument allows for the selection of a single-group model (i.e., LGCMs, LCSMs, TVC models, MGMs, or longitudinal mediation models) to be incorporated as a submodel. The remaining arguments are comparable to those detailed in Subsections \ref{getLGSM}-\ref{getMed}. To address group-specific specifications, the \code{getMGroup()} function requires that arguments associated with initial value derivation (i.e., \code{starts}, \code{res_scale}, and \code{res_cor}) be structured as a list of inputs corresponding to the relevant single-group model.
\subsection{Function getMIX()}\label{getMIX} The call of \code{getMIX()} is
\code{getMIX(dat, prop_starts, sub_Model, cluster_TIC = NULL, t_var, y_var, curveFun, intrinsic = NULL, records, y_model = NULL, m_var = NULL, x_var = NULL, x_type = NULL, TVC = NULL, decompose = NULL, growth_TIC = NULL, starts = NULL, res_scale = NULL, res_cor = NULL, tries = NULL, OKStatus = 0, jitterD = "runif", loc = 1, scale = 0.25, paramOut = F, names = NULL)}.
The implementation of \code{getMIX()} calls for three additional arguments: \code{prop_starts}, \code{sub_Model}, and \code{cluster_TIC}. The \code{prop_starts} argument specifies the initial proportions for each latent class in the mixture model; the length of this argument indicates the number of latent classes, and its elements must sum to $1$. The \code{sub_Model} argument allows users to select a single-group model (i.e., LGCMs, LCSMs, TVC models, MGMs, or longitudinal mediation models) for integration as a submodel within the mixture model. The \code{cluster_TIC} argument designates the TICs, if any, to be included in the model to inform cluster formation. By default, it is set to \code{NULL}, indicating that no cluster TICs are incorporated in the fitted mixture model. Analogous to the \code{getMGroup()} function, the remaining arguments are consistent with those outlined in Subsections \ref{getLGSM}-\ref{getMed}. The \code{getMIX()} function necessitates that arguments related to initial value derivation (i.e., \code{starts}, \code{res_scale}, and \code{res_cor}) be formatted as a list of inputs corresponding to the respective single-group model to accommodate class-specific specifications.
\subsection{Initial Values}\label{init} Iterative estimation algorithms necessitate initialization using a set of initial values for the parameter space $\Theta$. Selecting suitable initial values improves the likelihood of model convergence and reduces computational burdens. In each estimation function, the \code{starts} argument dictates the algorithm's initialization process. The estimation functions provided in the \pkg{nlpsem} package allow for user-specified initial values or derive them from raw data. As the longitudinal models in \pkg{nlpsem} are centered on `growth factors', which are latent variables, the most crucial step in deriving initial values involves calculating these growth factors for each individual from raw data. The derivation algorithm differs depending on whether the target is a single-group, multiple-group, or mixture model. For single-group models, `raw' growth factors of each longitudinal process are derived by fitting a linear regression using \code{lm()} for linear functions, or a nonlinear regression using \code{nls()} for nonlinear functions, to each individual's data.
Subsequent steps depend on the model specification. For models without covariates or mediators, estimation functions compute the mean values and variance-covariance matrix of these `raw' growth factors. When evaluating the impacts of TICs, estimation functions further initialize the path coefficients that quantify such impacts by regressing the `raw' growth factors on the TICs. In cases assessing the effect of a TVC, the longitudinal outcome is further regressed on the corresponding TVC measurement. When examining the effect of a decomposed TVC, trait and temporal effects are separately initialized by regressing the `raw' growth factors on the initial trait and regressing the longitudinal outcome on the corresponding temporal state. In longitudinal mediation models, path coefficients are initialized stepwise: first, the mediator's `raw' growth factors are regressed on the predictor-related variables; then, the outcome's `raw' growth factors are regressed on both the predictor- and mediator-related variables. Direct initialization of residual variances and covariances from raw data is infeasible, so estimation functions require user-provided information via the \code{res_scale} and \code{res_cor} arguments.
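The per-individual regression step for deriving `raw' growth factors can be sketched as follows. This Python analogue of the \code{lm()}-based derivation (linear functional form only, for brevity) fits an ordinary least squares line to each individual and then summarizes the resulting intercepts and slopes:

```python
import numpy as np

def raw_growth_factors(times, Y):
    """Per-individual OLS fits (counterpart of lm() per person):
    each fit yields a 'raw' intercept and slope; their mean vector
    and covariance matrix seed the growth-factor parameters."""
    factors = []
    for t, y in zip(times, Y):
        X = np.column_stack([np.ones_like(t), t])   # design: 1, t
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        factors.append(beta)
    F = np.array(factors)                            # N x 2
    return F.mean(axis=0), np.cov(F, rowvar=False)

# Toy data: individual i follows y = i + 2t exactly.
times = [np.array([0.0, 1.0, 2.0, 3.0])] * 3
Y = [i + 2.0 * times[0] for i in (1.0, 2.0, 3.0)]
mean_gf, cov_gf = raw_growth_factors(times, Y)
```

For nonlinear functional forms the same idea applies with a nonlinear least-squares fit per individual (the role \code{nls()} plays in the package).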
For multiple-group models, the procedure above is executed for each manifested group. In mixture models, the K-means algorithm is initially performed with a pre-specified number of latent classes, assigning individuals to clusters. The procedure is then conducted for each latent class using the assigned labels.
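The class-specific initialization for mixture models can be mimicked with a minimal Lloyd's-algorithm clustering. This sketch uses deterministic seeding for reproducibility; the actual K-means implementation and seeding used by the package may differ:

```python
import numpy as np

def kmeans_labels(X, k, iters=20):
    """Minimal Lloyd's algorithm: cluster the rows of X (one row of
    repeated measures per individual) so that the single-group
    initialization can then be run separately per latent class."""
    idx = np.linspace(0, len(X) - 1, k).astype(int)  # deterministic seeds
    centers = X[idx].astype(float)
    for _ in range(iters):
        # Assign each row to its nearest center, then recompute centers.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Two well-separated groups of individuals.
X = np.vstack([np.zeros((5, 2)), 10.0 + np.zeros((5, 2))])
labels = kmeans_labels(X, 2)
```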
\subsection{Additional Non-estimable Parameters}\label{non-est} The \pkg{nlpsem} package streamlines the derivation of additional, non-estimable parameters by utilizing the \code{mxAlgebra()} and \code{mxSE()} functions within the \pkg{OpenMx} package. Specifically, \code{mxAlgebra()} outlines the derivation of additional parameters from free parameters, computing point estimates, while \code{mxSE()} generates the corresponding standard errors based on the (multivariate) delta method \citep{OpenMx2016package, Pritikin2015OpenMx, Hunter2018OpenMx, User2020OpenMx}. The seven estimation functions introduced earlier accommodate four categories of non-estimable parameters. First, as discussed in Subsection \ref{spec:LGCMs}, when analyzing a bilinear spline growth curve model with an unknown knot, a reparameterization process is essential to unify the pre- and post-knot expressions. However, such reparameterized parameters are not directly tied to developmental theory and consequently lack meaningful interpretations. The \pkg{nlpsem} package integrates the inverse transformation functions and matrices developed in \citet{Liu2019BLSGM} and \citet{Liu2021PBLSGM} to derive parameters of the LGCM and its multivariate version with the bilinear spline functional form in their original form.
Second, as introduced in Subsection \ref{spec:LCSMs}, one advantage of employing the LCSM is that it enables the estimation of means and variances of interval-specific slopes, interval-specific changes, and change-from-baseline values for a longitudinal outcome. These means and variances can also be considered additional parameters. Third, the package incorporates parameters of conditional distributions for models involving covariates (and a mediator). For example, in an LGCM with a TIC, the estimated parameters related to the longitudinal outcome are the vector of growth factor intercepts and the unexplained variance-covariance structure of the growth factors, as indicated in Equation \ref{eq:LGCM2_TIC}. In practice, the mean vector of growth factors conditional on the TIC holds greater interest since these coefficients directly pertain to developmental theory. Therefore, \pkg{nlpsem} facilitates the derivation of parameters for such conditional distributions. Finally, the fourth category of additional parameters includes the indirect and total effects of the predictor on the outcome generated by longitudinal mediation models.
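The (multivariate) delta method that \code{mxSE()} automates has a compact form: for a derived parameter $g(\boldsymbol{\theta})$, $\mathrm{SE}\big(g(\hat{\boldsymbol{\theta}})\big) \approx \sqrt{\nabla g^{T}\,\widehat{\mathrm{ACOV}}(\hat{\boldsymbol{\theta}})\,\nabla g}$. A minimal sketch, using an indirect-effect-style product $g(a,b)=ab$ as a hypothetical example:

```python
import numpy as np

def delta_method_se(grad, acov):
    """Standard error of a derived parameter g(theta) via the
    multivariate delta method: SE ~ sqrt(grad' * ACOV * grad),
    where grad is the gradient of g at the point estimates and
    ACOV is the asymptotic covariance of the free parameters."""
    grad = np.asarray(grad, dtype=float)
    return float(np.sqrt(grad @ np.asarray(acov, dtype=float) @ grad))

# Example: g(a, b) = a * b evaluated at a = 2, b = 3,
# with a (hypothetical) diagonal asymptotic covariance.
grad = [3.0, 2.0]                 # dg/da = b, dg/db = a
acov = np.diag([0.04, 0.09])
se = delta_method_se(grad, acov)
```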
\section{Post-fit Computations}\label{sec:post} The \pkg{nlpsem} package provides a comprehensive set of post-fit computations, analyses, and evaluations applicable to all seven estimation functions. However, certain analyses, such as posterior classification, the enumeration process, and the computation of latent kappa statistics \citep{Dumenci2011kappa, Dumenci2019knee}, are specifically tailored to mixture models. This section presents an overview of these post-fit analyses and their applications.
\subsection{Statistical Significance: Wald p-values and Confidence Intervals}\label{fit:sig} It is essential to obtain p-values and confidence intervals to quantify the uncertainty of estimates. Similar to the \pkg{OpenMx} package, the \pkg{nlpsem} package produces point estimates and standard errors for each estimable or derived parameter by default. Additionally, \pkg{nlpsem} offers the \code{getEstimateStats()} function, allowing for the evaluation of Wald p-values \citep{Wald1943wald} and confidence intervals. While generating p-values is not the primary focus of \pkg{OpenMx}, the \code{getEstimateStats()} function provides an algorithm to derive the z-score and associated p-value for each parameter based on the corresponding point estimate and standard error, with the assumption that the sampling distribution of the estimate is approximately normal.
Furthermore, \code{getEstimateStats()} allows for three types of confidence intervals: Wald confidence intervals \citep{Casella2002wald}, likelihood-based confidence intervals (also known as `likelihood profile confidence intervals') \citep{Madansky1965likelihood, Matthews1988likelihood}, and bootstrap confidence intervals \citep[Chapter~12]{Efron1994bootstrap}. Wald confidence intervals require the assumption of asymptotic normal distribution, while likelihood-based and bootstrap confidence intervals relax this assumption. Specifically, likelihood-based confidence intervals rely on the asymptotic chi-square distribution of the log-likelihood ratio test statistic, and bootstrap confidence intervals depend on resampling distributions. The \code{getEstimateStats()} function produces likelihood-based and bootstrap confidence intervals using \pkg{OpenMx} built-in functions, \code{mxCI()} and \code{mxBootstrap()}, respectively \citep{OpenMx2016package, Pritikin2015OpenMx, Hunter2018OpenMx, User2020OpenMx}. Both functions require an optimized \code{mxModel} object as input, seamlessly integrating with the \pkg{nlpsem} package to provide comprehensive statistical information for users.
The call of \code{getEstimateStats()} is
\justifying{ \code{getEstimateStats(model = NULL, est_in, p_values = T, CI = T, CI_type = "Wald", rep = NA, conf.level = 0.95)}.}
The \code{getEstimateStats()} function takes several arguments, including \code{model}, an optimized \code{mxModel} object; \code{est_in}, a data frame comprising point estimates and standard errors of parameters for which users aim to derive p-values and/or confidence intervals; \code{p_values}, a logical value denoting whether to generate p-values (default is \code{TRUE}); \code{CI}, a logical value denoting whether to generate confidence intervals (default is \code{TRUE}); \code{CI_type}, a character string indicating the type of confidence interval (default is "Wald"); \code{conf.level}, a numeric value representing the desired probability for the confidence interval (default is 0.95); and \code{rep}, a numeric value specifying the number of resampling instances when constructing bootstrap CIs (default is \code{NA}).
By default, the function generates p-values and Wald confidence intervals for the parameters enumerated in the \code{est_in} argument without requiring an input of the \code{model} argument. The \code{CI_type} argument enables users to designate alternative confidence intervals, such as likelihood-based intervals (\code{CI_type = "likelihood"}) or bootstrap intervals (\code{CI_type = "bootstrap"}). When specifying these two types of confidence intervals, the \code{model} argument is mandatory, and the \code{getEstimateStats()} function generates the corresponding confidence intervals for all free parameters. Additionally, \code{rep} must be specified when constructing bootstrap CIs.
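The Wald machinery behind \code{getEstimateStats()} reduces to a few formulas: $z = \hat{\theta}/\mathrm{SE}(\hat{\theta})$, a two-sided p-value from the standard normal distribution, and the interval $\hat{\theta} \pm z_{1-\alpha/2}\,\mathrm{SE}(\hat{\theta})$. A dependency-free Python sketch:

```python
import math

def wald_stats(est, se, conf_level=0.95):
    """Wald z-score, two-sided p-value, and confidence interval from
    a point estimate and its standard error, assuming the sampling
    distribution of the estimate is approximately normal."""
    z = est / se
    # Two-sided p-value via the normal CDF (erf-based, no SciPy).
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

    def qnorm(q, lo=-10.0, hi=10.0):
        # Normal quantile by bisection on the CDF.
        for _ in range(200):
            mid = (lo + hi) / 2.0
            if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < q:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2.0

    crit = qnorm(1.0 - (1.0 - conf_level) / 2.0)
    return z, p, (est - crit * se, est + crit * se)

z, p, ci = wald_stats(1.96, 1.0)
```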
\subsection{Model Selection between Intrinsically Nonlinear Longitudinal Models and Their Parsimonious Alternatives}\label{fit:LRT} As detailed in Section \ref{sec:spec}, fitting intrinsically nonlinear longitudinal models within the SEM framework necessitates linearizing functional forms through Taylor series expansion. Insights obtained from individual growth rate parameters, individual growth acceleration parameters, or unknown individual knots are valuable in the context of developmental theory. Nevertheless, it is vital to weigh the advantages of fitting an intrinsically nonlinear longitudinal model against the feasibility and interpretability of its parsimonious counterpart. In other words, before interpreting parameters from such intricate models, it is crucial to demonstrate whether the corresponding parsimonious version, with fixed growth rate parameters, fixed growth acceleration parameters, or unknown fixed knots, is a suitable alternative. The choice often hinges on the research interest and statistical evidence. The \pkg{nlpsem} package provides tools to facilitate decision-making from a statistical standpoint. Specifically, an intrinsically nonlinear longitudinal model and its corresponding parsimonious version can be compared using the likelihood ratio test (LRT), as the parsimonious model is nested within the full model. The \code{getLRT()} function, built upon the \code{mxCompare()} function in \pkg{OpenMx} \citep{OpenMx2016package, Pritikin2015OpenMx, Hunter2018OpenMx, User2020OpenMx}, enables such comparisons.
The call of \code{getLRT()} is
\justifying{ \code{getLRT(full, reduced, boot = F, replications = NA)}.}
The \code{getLRT()} function is tailored to compare a full model (e.g., an intrinsically nonlinear longitudinal model) with its parsimonious counterpart and accepts the following arguments: \code{full}, the \code{mxModel} object of the full model; \code{reduced}, the \code{mxModel} object of the reduced model; \code{boot}, a logical value indicating whether to use the bootstrap resampling distribution to compute the p-value; and \code{replications}, an integer specifying the number of replications employed to approximate the bootstrap distribution. By default, the function does not utilize the bootstrap distribution to compute the p-value, but users can enable this by setting \code{boot = TRUE}. When employing the bootstrap distribution, the \code{replications} argument governs the number of replications for approximating the distribution, with a default value of \code{NA}.
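The LRT compares the optimized $-2\ln{L}$ values of the two models: the statistic is their difference, referred (when not bootstrapping) to a chi-square distribution whose degrees of freedom equal the difference in the number of free parameters. A self-contained sketch:

```python
import math

def lrt(neg2ll_reduced, neg2ll_full, df):
    """Likelihood ratio test of a reduced (nested) model against the
    full model: stat = difference in -2 ln L, p-value from the
    chi-square distribution with df degrees of freedom."""
    stat = neg2ll_reduced - neg2ll_full

    def chi2_sf(x, k):
        # Chi-square survival function via the regularized lower
        # incomplete gamma series, P(a, y) with a = k/2, y = x/2.
        a, y = k / 2.0, x / 2.0
        s, term = 0.0, 1.0 / math.gamma(a + 1.0)
        for n in range(200):
            s += term
            term *= y / (a + n + 1.0)
        return 1.0 - s * math.exp(-y) * y ** a

    return stat, chi2_sf(stat, df)

# Hypothetical fit values: reduced model -2lnL = 10, full = 6, df = 2.
stat, p = lrt(10.0, 6.0, 2)
```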
\subsection{Derivation of Individual Factor Scores}\label{fit:FSest} As previously introduced, one of the primary goals in analyzing longitudinal data is to explore between-individual differences in within-individual changes. In the context of longitudinal processes within the SEM framework, these between-individual differences are represented by the variance of growth factors. For instance, using the point estimate and associated standard error of a growth factor's variance, a researcher may perform a Wald test to determine whether the variability is significantly greater than $0$. If it is found to be significant, a natural follow-up question arises as to whether a value for each individual can be derived. Such individual values in the SEM framework are referred to as factor scores, indicating an individual's relative position or standing on a latent variable. The \code{getIndFS()} function, built upon the \pkg{OpenMx} function \code{mxFactorScores()} \citep{OpenMx2016package, Pritikin2015OpenMx, Hunter2018OpenMx, User2020OpenMx}, facilitates the estimation of factor scores for all specified latent variables. Note that this function is capable of calculating factor scores for both free latent variables (e.g., growth factors) and derived latent variables (e.g., interval-specific slopes, interval-specific changes, and change from baseline). The function \code{getIndFS()} generates factor scores and their standard errors for each latent variable.
The call of \code{getIndFS()} is
\justifying{ \code{getIndFS(model, FS_type = "Regression")}.}
The \code{getIndFS()} function is designed to calculate or estimate factor scores, along with their associated standard errors, for an optimized model obtained from the \pkg{nlpsem} package. This function offers three options for the type of factor scores to compute by interfacing with the \code{mxFactorScores()} function in the \pkg{OpenMx} package, including \code{ML} (maximum likelihood), \code{WeightedML} (weighted maximum likelihood), and \code{Regression} \citep{OpenMx2016package, Pritikin2015OpenMx, Hunter2018OpenMx, User2020OpenMx}.
The \code{Regression} method, which is the default option of the \code{getIndFS()} function, generates factor scores using a general linear prediction formula that predicts the factor scores from the non-missing manifest variables for each row of the raw data. In contrast, both \code{ML} and \code{WeightedML} methods create a new model for each data row and estimate factor scores as free parameters in a model with a single data row. The \code{ML} method optimizes the conditional likelihood of the data given the factor scores, while the \code{WeightedML} method optimizes the joint likelihood of the data and factor scores, taking the latent variable distribution into account during their estimation. More technical details regarding the pros and cons of each method of obtaining factor scores are well-documented in earlier studies, such as \citet{Priestley1975FS, Estabrook2013FS}.
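The regression method can be written in closed form: $\hat{\boldsymbol{\eta}}_{i} = \boldsymbol{\mu}_{\eta} + \boldsymbol{\Psi}\boldsymbol{\Lambda}^{T}\boldsymbol{\Sigma}_{y}^{-1}(\boldsymbol{y}_{i}-\boldsymbol{\mu}_{y})$ with $\boldsymbol{\Sigma}_{y}=\boldsymbol{\Lambda}\boldsymbol{\Psi}\boldsymbol{\Lambda}^{T}+\boldsymbol{\Theta}$. A minimal complete-data Python sketch (the package delegates this computation to \code{mxFactorScores()}):

```python
import numpy as np

def regression_factor_scores(Y, Lambda, mu_eta, Psi, Theta):
    """Regression-method factor scores: linear prediction of the
    latent scores from the manifest variables,
    eta_hat = mu_eta + Psi @ Lambda' @ inv(Sigma_y) @ (y - mu_y),
    with Sigma_y = Lambda @ Psi @ Lambda' + Theta (complete data)."""
    mu_y = Lambda @ mu_eta
    Sigma_y = Lambda @ Psi @ Lambda.T + Theta
    W = Psi @ Lambda.T @ np.linalg.inv(Sigma_y)
    return mu_eta + (Y - mu_y) @ W.T

# Linear-LGCM-style loadings: intercept and slope at t = 0..3.
Lambda = np.column_stack([np.ones(4), np.arange(4.0)])
Psi, Theta = np.eye(2), 1e-6 * np.eye(4)
eta = np.array([1.0, 0.5])            # true latent scores
Y = (Lambda @ eta)[None, :]           # one individual, noise-free
scores = regression_factor_scores(Y, Lambda, np.zeros(2), Psi, Theta)
```

With near-zero residual variances and noise-free data, the predicted scores recover the true growth factors; with realistic residuals the scores shrink toward the latent mean vector.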
\subsection{Posterior Classification}\label{fit:post} An essential post-fit evaluation of a mixture model involving latent classes is posterior classification, which computes, for each individual, a vector of probabilities describing the likelihood of belonging to each latent class. This posterior classification is grounded in Bayes' rule. Assuming a mixture model does not contain a time-invariant covariate (TIC) that indicates class components, the posterior probabilities can be calculated as follows \begin{equation}\label{eq:post1}
p(lc_{i}=g)=\frac{\pi(lc_{i}=g)\times p(\text{sub-model}|lc_{i}=g)}{\sum_{g'=1}^{G}\pi(lc_{i}=g')\times p(\text{sub-model}|lc_{i}=g')}. \end{equation}
On the other hand, if a mixture model contains TICs that indicate class components, the posterior probabilities can be computed as \begin{equation}\label{eq:post2}
p(lc_{i}=g)=\frac{\pi(lc_{i}=g|\boldsymbol{x}_{gi})\times p(\text{sub-model}|lc_{i}=g)}{\sum_{g'=1}^{G}\pi(lc_{i}=g'|\boldsymbol{x}_{g'i})\times p(\text{sub-model}|lc_{i}=g')}. \end{equation}
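This application of Bayes' rule can be sketched in a few lines of base R; the mixing proportions and class-conditional likelihoods below are hypothetical values for a single individual, not \pkg{nlpsem} output.

```r
# Posterior class probabilities by Bayes' rule for one individual:
# posterior_g is proportional to pi_g * p(data | class g)
pi_g <- c(0.6, 0.4)       # prior mixing proportions for G = 2 classes
L_g  <- c(0.002, 0.010)   # class-conditional likelihood of the data row
post <- (pi_g * L_g) / sum(pi_g * L_g)
post  # approx. 0.231 and 0.769: this individual most likely belongs to class 2
```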
The call of \code{getPosterior()} is
\justifying{ \code{getPosterior(model, nClass, label = F, cluster_TIC = NULL)}.}
The \code{getPosterior()} function calculates posterior probabilities for each individual's membership in the latent classes of an optimized mixture model fitted with \pkg{OpenMx}. It accepts four arguments: \code{model}, an optimized mixture model; \code{nClass}, the number of latent classes specified for the model; \code{label}, a logical flag indicating whether the dataset contains true labels (default \code{FALSE}); and \code{cluster_TIC}, the column indices of time-invariant covariates that inform class formation in the observed data, when applicable (default \code{NULL}, signifying that no TICs indicating class components are included in the mixture model).
Upon execution, the function returns a list comprising the following elements: (1) a matrix of posterior probabilities for each individual, (2) a vector of cluster memberships, (3) the entropy of the input model, and (4) the accuracy value of the input model if the true labels are available.
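For reference, one common normalization of the entropy of a posterior-probability matrix, where values near 1 indicate well-separated classes, can be sketched as follows; this is an illustrative definition with a hypothetical matrix \code{P}, not necessarily the exact formula used internally by \code{getPosterior()}.

```r
# Relative entropy of an n x G posterior-probability matrix P:
# E = 1 - sum(-P * log(P)) / (n * log(G)); E near 1 means individuals
# are assigned to classes with high certainty (hypothetical P below)
P <- rbind(c(0.95, 0.05),
           c(0.10, 0.90),
           c(0.85, 0.15))
n <- nrow(P); G <- ncol(P)
entropy <- 1 - sum(-P * log(P)) / (n * log(G))
entropy  # moderate separation for these three individuals
```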
\subsection{Model Selection and Enumeration Process with Summary Tables}\label{fit:summary} The \pkg{nlpsem} package provides the \code{getSummary()} function to facilitate model comparison and selection based on estimated likelihood and information criteria such as Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC). This function is an alternative to the likelihood ratio test provided by the \code{getLRT()} function, particularly when the models under comparison are not nested. The \code{getSummary()} function streamlines the model selection and comparison process across various scenarios.
The \code{getSummary()} function can generate a summary table for any optimized models fitted by the estimation functions provided by \code{nlpsem}. However, it is essential to ensure that comparisons are fair and meaningful by comparing models that differ in only one aspect. For instance, comparing a series of LGCMs without any covariates but with different functional forms is appropriate, as it helps in selecting the optimal functional form\footnote{Note that comparing an LGCM with a linear functional form and TICs to an LGCM with a quadratic functional form but no TICs is not meaningful, as the models differ in more than one aspect.}. In this context, the summary table generated by \code{getSummary()} presents a comprehensive overview of each model's performance, including the number of free parameters, estimated likelihood, AIC, BIC, and residual variances. Such information allows researchers to make informed decisions, for example, about the most suitable functional form for their data from a statistical perspective.
Another practical application of \code{getSummary()} is in the enumeration process. Within the SEM literature, the enumeration process for mixture models is typically performed in an exploratory fashion. This process involves fitting a series of candidate models, including a single-group model and the corresponding mixture models with different numbers of latent classes. The `best' model and the optimal number of latent classes are then determined using statistical criteria such as the BIC \citep{Nylund2007BIC}. Following the SEM literature convention, it is recommended to conduct the enumeration process without including any covariates to prevent potential confounding effects \citep{Diallo2017TICs, Nylund2016TICs}. When addressing multiple longitudinal variables, the enumeration process could be carried out independently for each process, enabling the determination of the optimal number of latent classes for each longitudinal outcome \citep{Liu2022PBLSGMM}.
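The AIC and BIC reported in these summary tables follow the usual definitions based on the deviance ($-2\log L$), the number of free parameters $k$, and the sample size $n$. As a quick check, the values below reproduce Model 1 of the \code{getLCSM()} summary table shown later in this paper ($k = 13$ free parameters, $n = 500$ individuals).

```r
# AIC = -2logL + 2k and BIC = -2logL + k*log(n), computed by hand
m2ll <- 32288.39                   # -2 log-likelihood
k    <- 13                         # number of free parameters
n    <- 500                        # sample size
AIC  <- m2ll + 2 * k
BIC  <- m2ll + k * log(n)
round(c(AIC = AIC, BIC = BIC), 2)  # 32314.39 and 32369.18
```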
The call of \code{getSummary()} is
\justifying{ \code{getSummary(model_list, HetModels = F)}.}
The \code{getSummary()} function facilitates model comparison and selection and takes two arguments: \code{model_list} and \code{HetModels}. The \code{model_list} argument is expected to be a list of optimized models from \code{nlpsem}; for meaningful comparisons, these models should differ in only one aspect. The \code{HetModels} argument, a logical value, denotes whether a model within the \code{model_list} incorporates heterogeneous classes. When \code{HetModels} is set to \code{TRUE}, the function outputs a summary table tailored to the enumeration process, which includes a comparison of single-group models and the respective mixture models with varying numbers of latent classes.
\subsection{Visualization of Estimated Growth Curves and Growth Changes Over Time} The \pkg{nlpsem} package provides the \code{getFigure()} function to facilitate the visualization of (class-specific) estimated growth status and changes over time.
The call of \code{getFigure()} is
\justifying{ \code{getFigure(model, nClass = NULL, cluster_TIC = NULL, grp_var = NULL, sub_Model, y_var, curveFun, y_model = NULL, t_var, records, m_var = NULL, x_type = NULL, \\x_var = NULL, xstarts, xlab = "Time", outcome = "Process")}.}
The \code{getFigure()} function shares most of its arguments with the estimation functions. For instance, the \code{sub_Model} argument lets the user select the submodel for each manifest group or latent class when handling multiple-group or mixture models, respectively. Note that \code{getFigure()} generates estimated growth curves when \code{sub_Model = "LGCM"} or \code{y_model = "LGCM"}, and estimated growth rates and changes from baseline when \code{sub_Model = "LCSM"} or \code{y_model = "LCSM"}.
In addition to the arguments shared with the estimation functions, \code{getFigure()} also includes three supplementary arguments: \code{xstarts}, \code{xlab}, and \code{outcome}. The \code{xstarts} argument signifies the onset of the longitudinal study, enabling the provision of interpretable estimates. On the other hand, \code{xlab} and \code{outcome} are used for labeling the generated figures. To elaborate, \code{xlab} represents the unit of the time structure, while \code{outcome} denotes the name(s) of the longitudinal processes. For example, should the data record time in months, then \code{xlab} should be assigned as \code{xlab = "Month"}. If the model scrutinizes two longitudinal processes, such as the development of reading and mathematics abilities, the \code{outcome} argument should be configured as \code{outcome = c("Reading Ability", "Math Ability")}. This ensures that the labels on the generated figures are pertinent, thereby aiding their interpretation.
\subsection{Latent Kappa Statistic}\label{fit:kappa} The latent kappa statistic \citep{Dumenci2011kappa} is a measure designed to quantify the agreement between latent groups. While the kappa statistic is commonly used to assess inter-rater reliability for manifested categorical variables, the latent kappa statistic focuses on evaluating agreement for latent categorical variables. Recent research has extended this metric to assess the agreement between latent class assignments across various clustering algorithms. Such applications include evaluating the agreement between cluster assignments based on different longitudinal outcomes \citep{Dumenci2019knee, Liu2022PBLSGMM}, assessing the impact of TICs on latent class assignments \citep{Liu2021MoE}, and examining the effects of considering multiple longitudinal outcomes in the clustering algorithm on clustering results \citep{Liu2022PBLSGMM}. The \pkg{nlpsem} package provides the \code{getLatentKappa()} function to facilitate comparisons between clustering algorithms. This function enables researchers to assess the consistency of latent class assignments, providing valuable insights into the robustness of clustering results and facilitating a more accurate evaluation and interpretation of heterogeneity among individuals.
The call of \code{getLatentKappa()} is
\justifying{ \code{getLatentKappa(label1, label2, conf.level = 0.95)}.}
The \code{getLatentKappa()} function accepts three arguments: \code{label1}, \code{label2}, and \code{conf.level}. The \code{label1} and \code{label2} arguments represent the latent class assignments determined by the first and second clustering algorithms, respectively. Meanwhile, the \code{conf.level} argument specifies the confidence level for the confidence interval associated with the kappa statistic.
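The manifest-variable analogue of this statistic is Cohen's kappa, which conveys the intuition behind the latent version: agreement beyond chance between two label vectors. A minimal base-R sketch with hypothetical labels follows; note that the latent kappa itself is estimated from a model rather than computed this way.

```r
# Cohen's kappa for two hypothetical class-assignment vectors:
# kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
# and p_e the agreement expected by chance
label1 <- c(1, 1, 2, 2, 1, 2, 1, 2)
label2 <- c(1, 1, 2, 1, 1, 2, 2, 2)
tab <- table(label1, label2)
p_o <- sum(diag(tab)) / sum(tab)                       # observed agreement
p_e <- sum(rowSums(tab) * colSums(tab)) / sum(tab)^2   # chance agreement
kappa <- (p_o - p_e) / (1 - p_e)
kappa  # 0.5: moderate agreement between the two assignments
```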
\section{Tutorial: Synthetic Examples}\label{sec:example} \subsection{Example Data} The \pkg{nlpsem} package incorporates a sample dataset, \code{RMS_dat}, obtained from the publicly accessible segment of the Early Childhood Longitudinal Study, Kindergarten Class of 2010-11 (ECLS-K:2011). This study, conducted by the National Center for Education Statistics (NCES), spanned nine rounds of data collection from the fall of 2010 to the spring of 2016. The ECLS-K:2011 dataset provides comprehensive information about children's early life experiences, encompassing aspects such as health, developmental milestones, educational progress, and experiences leading up to kindergarten\footnote{The dataset supplied in this package has been formatted only for demonstration purposes and does not include the original survey weights. Thus, researchers intending to perform in-depth analyses should resort to the entire dataset on the NCES data products page and ensure the appropriate use of complex survey weights.}.
The \code{RMS_dat} dataset consists of 500 observations spanning 49 variables, thereby capturing a broad spectrum of factors pertinent to early childhood education research. These include variables related to academic performance, such as reading, mathematics, and science scores across all study waves, as well as demographic and environmental variables like age-in-month at each study wave, sex, race, family income, and others. Additionally, the dataset incorporates teacher evaluations of children's behavioral and learning traits, such as approach to learning, self-control, interpersonal skills, and attention focus. The dataset underwent careful preprocessing for computation and interpretation purposes. We conducted three main preprocessing procedures. First, we set the initial time point to zero across all time-varying measurements. This manipulation enables the initial status estimates to be interpreted as the values corresponding to the onset of the study. We implemented this in R, modifying each measurement occasion (\code{T1} through \code{T9}) by subtracting the initial time point (\code{T1}). Second, we standardized the time-invariant covariates (TICs) to expedite computational processes. For this, we employed the \code{scale()} function in R, which centers these variables around a mean of zero and scales them to have a standard deviation of one. In addition, we standardized the time-varying covariate (TVC), in this case, the reading scores at each time point (\code{R1} through \code{R9}). Following the approach outlined by \citet{Liu2022decompose, Liu2023GMMTVC}, we calculated the mean and variance of the baseline reading scores (\code{R1}) and used these baseline parameters to standardize the reading scores across all time points.
\begin{Schunk}
\begin{Sinput}
R> library(nlpsem)
R> # Load RMS_dat (from ECLS-K (2011))
R> data("RMS_dat")
R> RMS_dat0 <- RMS_dat
R> # Re-baseline the data so that the estimated initial status is for the
R> # starting point of the study
R> baseT <- RMS_dat0$T1
R> RMS_dat0$T1 <- RMS_dat0$T1 - baseT
R> RMS_dat0$T2 <- RMS_dat0$T2 - baseT
R> RMS_dat0$T3 <- RMS_dat0$T3 - baseT
R> RMS_dat0$T4 <- RMS_dat0$T4 - baseT
R> RMS_dat0$T5 <- RMS_dat0$T5 - baseT
R> RMS_dat0$T6 <- RMS_dat0$T6 - baseT
R> RMS_dat0$T7 <- RMS_dat0$T7 - baseT
R> RMS_dat0$T8 <- RMS_dat0$T8 - baseT
R> RMS_dat0$T9 <- RMS_dat0$T9 - baseT
R> # Standardize time-invariant covariates (TICs)
R> ## ex1 and ex2 are standardized growth TICs in models
R> RMS_dat0$ex1 <- scale(RMS_dat0$Approach_to_Learning)
R> RMS_dat0$ex2 <- scale(RMS_dat0$Attention_focus)
R> ## gx1 and gx2 are standardized cluster TICs in models
R> RMS_dat0$gx1 <- scale(RMS_dat0$INCOME)
R> RMS_dat0$gx2 <- scale(RMS_dat0$EDU)
R> # Standardize time-varying covariate (TVC)
R> BL_mean <- mean(RMS_dat0[, "R1"])
R> BL_var <- var(RMS_dat0[, "R1"])
R> RMS_dat0$Rs1 <- (RMS_dat0$R1 - BL_mean)/sqrt(BL_var)
R> RMS_dat0$Rs2 <- (RMS_dat0$R2 - BL_mean)/sqrt(BL_var)
R> RMS_dat0$Rs3 <- (RMS_dat0$R3 - BL_mean)/sqrt(BL_var)
R> RMS_dat0$Rs4 <- (RMS_dat0$R4 - BL_mean)/sqrt(BL_var)
R> RMS_dat0$Rs5 <- (RMS_dat0$R5 - BL_mean)/sqrt(BL_var)
R> RMS_dat0$Rs6 <- (RMS_dat0$R6 - BL_mean)/sqrt(BL_var)
R> RMS_dat0$Rs7 <- (RMS_dat0$R7 - BL_mean)/sqrt(BL_var)
R> RMS_dat0$Rs8 <- (RMS_dat0$R8 - BL_mean)/sqrt(BL_var)
R> RMS_dat0$Rs9 <- (RMS_dat0$R9 - BL_mean)/sqrt(BL_var)
R> head(RMS_dat0)
\end{Sinput}
\begin{Soutput}
## ID R1 R2 R3 R4 R5 R6 R7
## 14 10000014 61.0533 71.0552 78.5028 109.9882 112.6338 119.0805 131.0204
## 29 10000029 58.7342 63.3107 87.4390 108.3098 117.8031 124.4843 123.0577
## 47 10000047 51.5844 66.6677 67.5425 81.5954 92.1793 104.7755 112.0417
## 66 10000066 53.6920 61.6393 74.8572 91.1721 105.4468 122.3648 126.2234
## 95 10000095 54.8889 73.2573 77.7501 107.0961 113.7871 108.4502 127.0300
## 129 10000129 91.2245 107.1388 120.3719 136.5753 137.1338 139.9153 140.1828
## R8 R9 M1 M2 M3 M4 M5 M6
## 14 143.4052 137.0652 51.2953 50.6399 57.3925 80.1414 85.2682 94.1731
## 29 126.5784 137.9527 44.1411 54.3557 64.7107 82.9958 83.2294 106.2275
## 47 121.5341 140.2891 30.9925 44.7628 47.0755 57.7788 66.8194 78.5502
## 66 141.0226 148.8337 24.6459 47.0937 56.6954 69.5438 76.1157 86.6673
## 95 132.5852 136.9246 29.6065 50.2221 61.0513 81.7005 77.4089 95.1059
## 129 152.3430 155.9560 60.4633 74.5698 82.6556 83.0306 110.9810 119.8762
## M7 M8 M9 S2 S3 S4 S5 S6 S7
## 14 106.1280 115.0474 125.8766 24.6497 30.4830 32.3496 40.9454 46.5182 65.5331
## 29 123.4263 132.7925 127.5096 35.8322 43.7028 40.6455 47.7815 53.1821 60.4607
## 47 94.3809 99.7010 115.3470 41.2768 46.7668 42.0820 43.3976 55.4341 73.3641
## 66 122.5060 118.6628 131.0051 43.2319 53.8406 59.3788 56.0984 61.3869 77.8254
## 95 109.7579 118.7727 120.9763 43.4005 46.8256 50.2495 47.4000 55.3191 68.1581
## 129 119.4172 122.1821 136.5360 47.8725 63.9814 66.4430 68.9873 73.0444 78.6502
## S8 S9 T1 T2 T3 T4 T5 T6 T7 T8 T9 SEX RACE
## 14 69.0916 75.9290 0 4.08 10.36 18.12 23.91 29.69 39.03 54.05 65.07 1 1
## 29 77.6913 68.3381 0 5.76 11.54 17.46 23.28 29.92 41.00 53.92 65.89 2 1
## 47 80.3503 82.6842 0 6.02 11.77 17.73 23.77 29.73 42.51 53.53 64.57 1 1
## 66 79.7523 86.9595 0 6.68 11.94 18.12 23.94 30.51 43.53 54.61 67.50 1 3
## 95 58.9861 82.4376 0 5.29 11.74 17.95 23.90 30.31 42.41 53.43 67.43 2 1
## 129 82.9082 89.5211 0 5.69 11.94 17.62 24.07 29.62 41.66 53.56 65.56 2 8
## LOCALE INCOME SCHOOL_TYPE Approach_to_Learning Self_control Interpersonal
## 14 2 17 1 3.1667 3.6667 3.2
## 29 1 8 1 2.8571 3.0000 3.2
## 47 1 17 1 2.5714 3.5000 4.0
## 66 1 18 2 2.2857 3.0000 3.2
## 95 2 16 1 3.5714 3.5000 3.4
## 129 4 18 1 3.7143 3.3333 3.2
## External_prob_Behavior Internal_prob_Behavior Attention_focus
## 14 1.0 1.00 5.5000
## 29 2.0 1.00 6.0000
## 47 1.6 1.00 4.5000
## 66 1.6 1.25 4.8333
## 95 1.0 1.00 5.5000
## 129 1.0 1.00 7.0000
## Inhibitory_Ctrl EDU ex1 ex2 gx1 gx2
## 14 6.8000 5 0.1827911 0.51616548 0.9808159 -0.1505668
## 29 4.3333 5 -0.3144371 0.92312846 -0.7372345 -0.1505668
## 47 2.6667 5 -0.7732811 -0.29776049 0.9808159 -0.1505668
## 66 3.5000 6 -1.2321251 -0.02647897 1.1717104 0.3513226
## 95 6.0000 6 0.8327532 0.51616548 0.7899214 0.3513226
## 129 6.0000 8 1.0622555 1.73705443 1.1717104 1.3551014
## Rs1 Rs2 Rs3 Rs4 Rs5 Rs6 Rs7 Rs8
## 14 0.48312698 1.3656884 2.022860 4.801112 5.034558 5.603411 6.656981 7.749808
## 29 0.27849104 0.6823186 2.811385 4.653011 5.490694 6.080239 5.954357 6.265021
## 47 -0.35240288 0.9785382 1.055730 2.295749 3.229666 4.341147 4.982312 5.819915
## 66 -0.16642957 0.5348353 1.701175 3.140791 4.400382 5.893216 6.233696 7.539568
## 95 -0.06081585 1.5600004 1.956442 4.545915 5.136325 4.665400 6.304870 6.795058
## 129 3.14541497 4.5496829 5.717363 7.147141 7.196423 7.441861 7.465465 8.538474
## Rs9
## 14 7.190370
## 29 7.268682
## 47 7.474845
## 66 8.228815
## 95 7.177963
## 129 8.857282 \end{Soutput} \end{Schunk}
\subsection{getLGCM() Examples} The function \code{getLGCM()} fits latent growth curve models (LGCMs); here we use it to model mathematics development with linear-linear (bilinear spline) trajectories without any covariates. In this section, we fit two distinct models: an intrinsically nonlinear one (i.e., a linear-linear function with a random knot) and its parsimonious counterpart (i.e., a linear-linear function with a fixed knot). We then performed a likelihood ratio test to ascertain whether the parsimonious model can substitute for the more complex model without a significant loss of fit. In addition, we visualized the estimated developmental status of mathematics ability for both models (Figures \ref{fig:LGCM1} and \ref{fig:LGCM2}). As the figures indicate, both LGCMs capture the underlying patterns of mathematics development. However, by AIC, BIC, and the likelihood ratio test, the intrinsically nonlinear model provides a superior fit for the development of mathematics ability.
\begin{Schunk}
\begin{Sinput}
R> Math_LGCM_BLS_f <- getLGCM(
+   dat = RMS_dat0, t_var = "T", y_var = "M", curveFun = "bilinear spline",
+   intrinsic = TRUE, records = 1:9, growth_TIC = NULL, res_scale = 0.1
+ )
R> Math_LGCM_BLS_r <- getLGCM(
+   dat = RMS_dat0, t_var = "T", y_var = "M", curveFun = "bilinear spline",
+   intrinsic = FALSE, records = 1:9, growth_TIC = NULL, res_scale = 0.1
+ )
R> getLRT(full = Math_LGCM_BLS_f, reduced = Math_LGCM_BLS_r, boot = FALSE,
+   replications = NA)
R> xstarts <- mean(baseT)
R> Figure1 <- getFigure(
+   model = Math_LGCM_BLS_f, nClass = NULL, cluster_TIC = NULL, sub_Model = "LGCM",
+   y_var = "M", curveFun = "BLS", y_model = "LGCM", t_var = "T", records = 1:9,
+   m_var = NULL, x_var = NULL, x_type = NULL, xstarts = xstarts, xlab = "Month",
+   outcome = "Mathematics"
+ )
R> print(Figure1)
R> Figure2 <- getFigure(
+   model = Math_LGCM_BLS_r, nClass = NULL, cluster_TIC = NULL, sub_Model = "LGCM",
+   y_var = "M", curveFun = "BLS", y_model = "LGCM", t_var = "T", records = 1:9,
+   m_var = NULL, x_var = NULL, x_type = NULL, xstarts = xstarts, xlab = "Month",
+   outcome = "Mathematics"
+ )
R> print(Figure2)
\end{Sinput}
\begin{Soutput}
## # of Free Param -2loglik Degree of Freedom Diff in loglik
## Full Model 15 31261.60 4485 NA
## Reduced Model 11 31347.39 4489 85.78891
## Diff in DoF p.values AIC BIC
## Full Model NA <NA> 31291.60 31354.82
## Reduced Model 4 <0.0001 31369.39 31415.75 \end{Soutput} \end{Schunk}
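The reported p-value ($<$0.0001) can be reproduced from the table above: the difference in $-2\log L$ between the reduced and full model is referred to a chi-square distribution with degrees of freedom equal to the difference in free parameters.

```r
# Likelihood-ratio test by hand, using the values from the table
lr_stat <- 31347.39 - 31261.60   # difference in -2 log-likelihood, 85.79
df_diff <- 15 - 11               # difference in free parameters, 4
p_value <- pchisq(lr_stat, df = df_diff, lower.tail = FALSE)
p_value < 0.0001                 # TRUE, matching the reported p-value
```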
\begin{figure}
\caption{Estimated Development Status of Mathematics Ability from Latent Growth Curve Models}
\label{fig:LGCM1}
\label{fig:LGCM2}
\label{fig:LGCM}
\end{figure}
\subsection{getLCSM() Examples} We demonstrate the \code{getLCSM()} function by modeling reading development with latent change score models (LCSMs) with a nonparametric functional form. In this section, we implement two models: one without any covariates and the other incorporating two standardized growth time-invariant covariates (TICs), specifically the baseline teacher-reported approach to learning and attentional focus. In addition to these model fits, we provide a summary table for both models and compute p-values and confidence intervals for the model incorporating growth TICs. Furthermore, we present visualizations of the estimated change-from-baseline and growth rate for the models without and with TICs in Figures \ref{fig:LCSM1} and \ref{fig:LCSM2}, respectively.
\begin{Schunk}
\begin{Sinput}
R> paraNonP_LCSM <- c(
+   "mueta0", "mueta1", paste0("psi", c("00", "01", "11")), paste0("rel_rate", 2:8),
+   "residuals", paste0("slp_val_est", 1:8), paste0("slp_var_est", 1:8),
+   paste0("chg_inv_val_est", 1:8), paste0("chg_inv_var_est", 1:8),
+   paste0("chg_bl_val_est", 1:8), paste0("chg_bl_var_est", 1:8)
+ )
R> Read_LCSM_NonP <- getLCSM(
+   dat = RMS_dat0, t_var = "T", y_var = "R", curveFun = "nonparametric",
+   intrinsic = FALSE, records = 1:9, growth_TIC = NULL, res_scale = 0.1,
+   paramOut = TRUE, names = paraNonP_LCSM
+ )
R> paraNonP_LCSM_TIC <- c(
+   "alpha0", "alpha1", paste0("psi", c("00", "01", "11")), paste0("rel_rate", 2:8),
+   "residuals", paste0("beta1", c(0:1)), paste0("beta2", c(0:1)),
+   paste0("mux", 1:2), paste0("phi", c("11", "12", "22")), "mueta0", "mueta1",
+   paste0("slp_val_est", 1:8), paste0("slp_var_est", 1:8),
+   paste0("chg_inv_val_est", 1:8), paste0("chg_inv_var_est", 1:8),
+   paste0("chg_bl_val_est", 1:8), paste0("chg_bl_var_est", 1:8)
+ )
R> Read_LCSM_NonP_TIC <- getLCSM(
+   dat = RMS_dat0, t_var = "T", y_var = "R", curveFun = "nonparametric",
+   intrinsic = FALSE, records = 1:9, growth_TIC = c("ex1", "ex2"), res_scale = 0.1,
+   paramOut = TRUE, names = paraNonP_LCSM_TIC
+ )
R> getSummary(model_list = list(Read_LCSM_NonP[[1]], Read_LCSM_NonP_TIC[[1]]))
R> Figure3 <- getFigure(
+   model = Read_LCSM_NonP[[1]], sub_Model = "LCSM", y_var = "R", curveFun = "NonP",
+   y_model = "LCSM", t_var = "T", records = 1:9, xstarts = xstarts, xlab = "Month",
+   outcome = "Reading"
+ )
R> print(Figure3[[1]])
R> print(Figure3[[2]])
R> Figure4 <- getFigure(
+   model = Read_LCSM_NonP_TIC[[1]], sub_Model = "LCSM", y_var = "R", curveFun = "NonP",
+   y_model = "LCSM", t_var = "T", records = 1:9, xstarts = xstarts, xlab = "Month",
+   outcome = "Reading"
+ )
R> print(Figure4[[1]])
R> print(Figure4[[2]])
R> getEstimateStats(
+   model = Read_LCSM_NonP_TIC[[1]], est_in = Read_LCSM_NonP_TIC[[2]],
+   CI_type = "all", rep = 1000
+ )
\end{Sinput}
\begin{Soutput}
## Model No_Params -2ll AIC BIC Y_residuals
## 1 Model1 13 32288.39 32314.39 32369.18 45.1379
## 2 Model2 22 34575.48 34619.48 34712.20 45.1409
## $wald
## Estimate SE p.value wald_lbound wald_ubound
## alpha0 55.5595 0.6005 <0.0001 54.3825 56.7365
## alpha1 2.3772 0.0724 <0.0001 2.2353 2.5191
## psi00 136.1661 9.8711 <0.0001 116.8191 155.5131
## psi01 -1.1696 0.2351 <0.0001 -1.6304 -0.7088
## psi11 0.1076 0.0113 <0.0001 0.0855 0.1297
## rel_rate2 0.7137 0.0435 <0.0001 0.6284 0.7990
## rel_rate3 1.2059 0.0459 <0.0001 1.1159 1.2959
## rel_rate4 0.5228 0.0339 <0.0001 0.4564 0.5892
## rel_rate5 0.6870 0.0339 <0.0001 0.6206 0.7534
## rel_rate6 0.2888 0.0174 <0.0001 0.2547 0.3229
## rel_rate7 0.2923 0.0170 <0.0001 0.2590 0.3256
## rel_rate8 0.2571 0.0166 <0.0001 0.2246 0.2896
## residuals 45.1409 1.0797 <0.0001 43.0247 47.2571
## beta10 2.8589 0.9090 0.0017 1.0773 4.6405
## beta11 -0.0307 0.0274 0.2625 -0.0844 0.0230
## beta20 2.1025 0.9027 0.0199 0.3332 3.8718
## beta21 0.0440 0.0274 0.1083 -0.0097 0.0977
## mux1 0.0000 0.0447 >0.9999 -0.0876 0.0876
## mux2 0.0000 0.0447 >0.9999 -0.0876 0.0876
## phi11 0.9980 0.0631 <0.0001 0.8743 1.1217
## phi12 0.7759 0.0565 <0.0001 0.6652 0.8866
## phi22 0.9980 0.0631 <0.0001 0.8743 1.1217
## mueta0 55.5595 0.6358 <0.0001 54.3134 56.8056
## mueta1 2.3772 0.0724 <0.0001 2.2353 2.5191
## slp_val_est1 2.3772 0.0724 <0.0001 2.2353 2.5191
## slp_val_est2 1.6966 0.0700 <0.0001 1.5594 1.8338
## slp_val_est3 2.8666 0.0684 <0.0001 2.7325 3.0007
## slp_val_est4 1.2427 0.0721 <0.0001 1.1014 1.3840
## slp_val_est5 1.6331 0.0651 <0.0001 1.5055 1.7607
## slp_val_est6 0.6866 0.0362 <0.0001 0.6156 0.7576
## slp_val_est7 0.6950 0.0349 <0.0001 0.6266 0.7634
## slp_val_est8 0.6111 0.0352 <0.0001 0.5421 0.6801
## slp_var_est1 0.1076 0.0113 <0.0001 0.0855 0.1297
## slp_var_est2 0.0548 0.0065 <0.0001 0.0421 0.0675
## slp_var_est3 0.1565 0.0156 <0.0001 0.1259 0.1871
## slp_var_est4 0.0294 0.0042 <0.0001 0.0212 0.0376
## slp_var_est5 0.0508 0.0060 <0.0001 0.0390 0.0626
## slp_var_est6 0.0090 0.0012 <0.0001 0.0066 0.0114
## slp_var_est7 0.0092 0.0012 <0.0001 0.0068 0.0116
## slp_var_est8 0.0071 0.0010 <0.0001 0.0051 0.0091
## chg_inv_val_est1 13.8240 0.4211 <0.0001 12.9987 14.6493
## chg_inv_val_est2 9.9338 0.4098 <0.0001 9.1306 10.7370
## chg_inv_val_est3 17.7668 0.4238 <0.0001 16.9362 18.5974
## chg_inv_val_est4 7.0613 0.4100 <0.0001 6.2577 7.8649
## chg_inv_val_est5 10.4670 0.4173 <0.0001 9.6491 11.2849
## chg_inv_val_est6 7.9308 0.4186 <0.0001 7.1104 8.7512
## chg_inv_val_est7 8.3602 0.4203 <0.0001 7.5364 9.1840
## chg_inv_val_est8 7.3259 0.4215 <0.0001 6.4998 8.1520
## chg_inv_var_est1 3.6401 0.3831 <0.0001 2.8892 4.3910
## chg_inv_var_est2 1.8796 0.2218 <0.0001 1.4449 2.3143
## chg_inv_var_est3 6.0126 0.5983 <0.0001 4.8400 7.1852
## chg_inv_var_est4 0.9498 0.1368 <0.0001 0.6817 1.2179
## chg_inv_var_est5 2.0868 0.2457 <0.0001 1.6052 2.5684
## chg_inv_var_est6 1.1981 0.1623 <0.0001 0.8800 1.5162
## chg_inv_var_est7 1.3313 0.1751 <0.0001 0.9881 1.6745
## chg_inv_var_est8 1.0223 0.1472 <0.0001 0.7338 1.3108
## chg_bl_val_est1 13.8240 0.4211 <0.0001 12.9987 14.6493
## chg_bl_val_est2 23.7578 0.4419 <0.0001 22.8917 24.6239
## chg_bl_val_est3 41.5246 0.4908 <0.0001 40.5626 42.4866
## chg_bl_val_est4 48.5859 0.5144 <0.0001 47.5777 49.5941
## chg_bl_val_est5 59.0529 0.5566 <0.0001 57.9620 60.1438
## chg_bl_val_est6 66.9837 0.5901 <0.0001 65.8271 68.1403
## chg_bl_val_est7 75.3439 0.6287 <0.0001 74.1117 76.5761
## chg_bl_val_est8 82.6698 0.6638 <0.0001 81.3688 83.9708
## chg_bl_var_est1 3.6401 0.3831 <0.0001 2.8892 4.3910
## chg_bl_var_est2 10.7512 0.9995 <0.0001 8.7922 12.7102
## chg_bl_var_est3 32.8438 2.9329 <0.0001 27.0954 38.5922
## chg_bl_var_est4 44.9638 3.9820 <0.0001 37.1592 52.7684
## chg_bl_var_est5 66.4240 5.8549 <0.0001 54.9486 77.8994
## chg_bl_var_est6 85.4636 7.4989 <0.0001 70.7660 100.1612
## chg_bl_var_est7 108.1282 9.4549 <0.0001 89.5969 126.6595
## chg_bl_var_est8 130.1778 11.3752 <0.0001 107.8828 152.4728
##
## $likelihood
## Estimate lik_lbound lik_ubound
## beta01 2.8589 1.1119 4.6009
## beta11 -0.0307 -0.0847 0.0230
## beta02 2.1025 0.3572 3.8526
## beta12 0.0440 -0.0092 0.0980
## Y_rel_rate2 0.7137 0.6320 0.8039
## Y_rel_rate3 1.2059 1.1197 1.3007
## Y_rel_rate4 0.5228 0.4581 0.5913
## Y_rel_rate5 0.6870 0.6227 0.7563
## Y_rel_rate6 0.2888 0.2555 0.3242
## Y_rel_rate7 0.2923 0.2599 0.3268
## Y_rel_rate8 0.2571 0.2253 0.2905
## Y_residuals 45.1409 43.0889 47.3247
## phi1 0.9980 0.8837 1.1329
## phi2 0.7759 0.6735 0.8963
## phi3 0.9980 0.8839 1.1336
## Y_psi00 136.1661 118.2425 157.2350
## Y_psi01 -1.1696 -1.5739 -0.7784
## Y_psi11 0.1076 0.0876 0.1318
## mux1 0.0000 -0.0879 0.0879
## mux2 0.0000 -0.0879 0.0879
## Y_mueta0 55.5595 54.3791 56.7430
## Y_mueta1 2.3772 2.2355 2.5201
##
## $bootstrap
## Estimate boot_lbound boot_ubound
## beta01 2.8589 1.1434 4.4157
## beta11 -0.0307 -0.0808 0.0210
## beta02 2.1025 0.4774 3.6940
## beta12 0.0440 -0.0148 0.1013
## Y_rel_rate2 0.7137 0.6466 0.7853
## Y_rel_rate3 1.2059 1.1273 1.2943
## Y_rel_rate4 0.5228 0.4666 0.5848
## Y_rel_rate5 0.6870 0.6322 0.7447
## Y_rel_rate6 0.2888 0.2571 0.3219
## Y_rel_rate7 0.2923 0.2637 0.3213
## Y_rel_rate8 0.2571 0.2297 0.2840
## Y_residuals 45.1409 42.3715 47.9546
## phi1 0.9980 0.8876 1.0993
## phi2 0.7759 0.6726 0.8789
## phi3 0.9980 0.8731 1.1160
## Y_psi00 136.1661 109.6972 162.7781
## Y_psi01 -1.1696 -1.7844 -0.5275
## Y_psi11 0.1076 0.0856 0.1307
## mux1 0.0000 -0.0870 0.0856
## mux2 0.0000 -0.0865 0.0850
## Y_mueta0 55.5595 54.6348 56.4455
## Y_mueta1 2.3772 2.2636 2.4940 \end{Soutput} \end{Schunk}
\begin{figure}
\caption{Latent Change Score Models with Nonparametric Function for Reading Ability (No Growth TICs)}
\label{fig:LCSM1_chg}
\label{fig:LCSM1_slp}
\label{fig:LCSM1}
\end{figure}
\begin{figure}
\caption{Latent Change Score Models with Nonparametric Function for Reading Ability (Teacher-reported Approach to Learning and Attentional Focus as Growth TICs)}
\label{fig:LCSM2_chg}
\label{fig:LCSM2_slp}
\label{fig:LCSM2}
\end{figure}
As described in Figures \ref{fig:LCSM1} and \ref{fig:LCSM2}, the estimated values for change-from-baseline and interval-specific growth rates, both derived from the growth factors, remain consistent across the two models. It is important to note that the estimated growth factors from the model integrating growth TICs are conditional on these TICs and thus differ from those of the model excluding growth TICs. Examining the summary table output reveals that Model 1 (the nonparametric LCSM excluding growth TICs) exhibits a smaller $-2$ log-likelihood, AIC, and BIC compared to Model 2 (the nonparametric LCSM including two growth TICs). However, this difference does not inherently signify that Model 1 outperforms Model 2, given that the data employed to fit the two models differ. When setting \code{CI_type = "all"} in the \code{getEstimateStats()} function, it generates three tables corresponding to Wald, likelihood-based, and bootstrap confidence intervals. Note that the function generates likelihood-based and bootstrap confidence intervals solely for free parameters, whereas it offers Wald confidence intervals for both free parameters and derived parameters that are functions of them. These derived parameters include the conditional mean values of the growth factors, the means and variances of interval-specific slopes, interval-specific changes, and the amounts of change from baseline.
\subsection{getTVCmodel() Examples} The function \code{getTVCmodel()} constructs models for a univariate longitudinal outcome with a time-varying covariate (TVC). Here it fits a latent growth curve model with an intrinsically nonlinear linear-linear functional form for mathematics development while simultaneously examining the impact of reading ability on mathematics development over time. In this section, we fit two distinct models. Both treat reading ability as the TVC and the baseline teacher-reported approach to learning as a time-invariant covariate (TIC). However, the first model incorporates the TVC directly, while the second introduces a decomposed TVC, partitioned into the baseline value and a set of interval-specific slopes. Alongside these model fits, we computed p-values and Wald confidence intervals for the model with the decomposed TVC. We provide plots of the estimated growth status of mathematics development for both models in Figures \ref{fig:TVC1} and \ref{fig:TVC2}.
\begin{Schunk}
\begin{Sinput}
R> set.seed(20191029)
R> Math_TVC_BLS_f <- getTVCmodel(
+   dat = RMS_dat0, t_var = "T", y_var = "M", curveFun = "BLS", intrinsic = TRUE,
+   records = 1:9, y_model = "LGCM", TVC = "Rs", decompose = 0, growth_TIC = "ex1",
+   res_scale = 0.1, tries = 10
+ )
R> paraBLS_TVC.f <- c(
+   "Y_alpha0", "Y_alpha1", "Y_alpha2", "Y_alphag",
+   paste0("Y_psi", c("00", "01", "02", "0g", "11", "12", "1g", "22", "2g", "gg")),
+   "Y_residuals", "X_mueta0", "X_mueta1", paste0("X_psi", c("00", "01", "11")),
+   paste0("X_rel_rate", 2:8), paste0("X_abs_rate", 1:8), "X_residuals",
+   paste0("betaTIC", c(0:2, "g")), paste0("betaTVC", c(0:2, "g")), "muTIC", "phiTIC",
+   "Y_mueta0", "Y_mueta1", "Y_mueta2", "Y_mu_knot", "covBL", "kappa", "Cov_XYres"
+ )
R> set.seed(20191029)
R> Math_TVCslp_BLS_f <- getTVCmodel(
+   dat = RMS_dat0, t_var = "T", y_var = "M", curveFun = "BLS", intrinsic = TRUE,
+   records = 1:9, y_model = "LGCM", TVC = "Rs", decompose = 1, growth_TIC = "ex1",
+   res_scale = c(0.1, 0.1), res_cor = 0.3, tries = 10, paramOut = TRUE,
+   names = paraBLS_TVC.f
+ )
R> getEstimateStats(est_in = Math_TVCslp_BLS_f[[2]], CI_type = "Wald")
R> Figure5 <- getFigure(
+   model = Math_TVC_BLS_f, sub_Model = "TVC", y_var = "M", curveFun = "BLS",
+   y_model = "LGCM", t_var = "T", records = 1:9, xstarts = xstarts, xlab = "Month",
+   outcome = "Mathematics"
+ )
R> Figure6 <- getFigure(
+   model = Math_TVCslp_BLS_f[[1]], sub_Model = "TVC", y_var = "M", curveFun = "BLS",
+   y_model = "LGCM", t_var = "T", records = 1:9, xstarts = xstarts, xlab = "Month",
+   outcome = "Mathematics"
+ )
R> print(Figure5)
R> print(Figure6)
\end{Sinput}
\begin{Soutput}
## $wald
## Estimate SE p.value wald_lbound wald_ubound
## Y_alpha0 37.1159 0.4057 <0.0001 36.3207 37.9111
## Y_alpha1 1.7073 0.0172 <0.0001 1.6736 1.7410
## Y_alpha2 0.6839 0.0157 <0.0001 0.6531 0.7147
## Y_alphag 37.9198 0.5015 <0.0001 36.9369 38.9027
## Y_psi00 47.2140 4.1851 <0.0001 39.0114 55.4166
## Y_psi01 -0.1004 0.1359 0.4600 -0.3668 0.1660
## Y_psi02 -0.1912 0.1247 0.1252 -0.4356 0.0532
## Y_psi0g -4.4619 3.6385 0.2201 -11.5932 2.6694
## Y_psi11 0.0845 0.0083 <0.0001 0.0682 0.1008
## Y_psi12 -0.0015 0.0055 0.7851 -0.0123 0.0093
## Y_psi1g -0.7959 0.1856 <0.0001 -1.1597 -0.4321
## Y_psi22 0.0231 0.0080 0.0039 0.0074 0.0388
## Y_psi2g -0.3249 0.1786 0.0689 -0.6749 0.0251
## Y_psigg 42.3163 6.8402 <0.0001 28.9098 55.7228
## Y_residuals 28.3921 0.8035 <0.0001 26.8173 29.9669
## X_mueta0 0.0093 0.0556 0.8672 -0.0997 0.1183
## X_mueta1 0.2086 0.0058 <0.0001 0.1972 0.2200
## X_psi00 1.2124 0.0857 <0.0001 1.0444 1.3804
## X_psi01 -0.0065 0.0017 0.0001 -0.0098 -0.0032
## X_psi11 0.0007 0.0001 <0.0001 0.0005 0.0009
## X_rel_rate2 0.6947 0.0384 <0.0001 0.6194 0.7700
## X_rel_rate3 1.2478 0.0431 <0.0001 1.1633 1.3323
## X_rel_rate4 0.4903 0.0303 <0.0001 0.4309 0.5497
## X_rel_rate5 0.7110 0.0330 <0.0001 0.6463 0.7757
## X_rel_rate6 0.2823 0.0169 <0.0001 0.2492 0.3154
## X_rel_rate7 0.3019 0.0168 <0.0001 0.2690 0.3348
## X_rel_rate8 0.2568 0.0165 <0.0001 0.2245 0.2891
## X_abs_rate1 0.2086 0.0058 <0.0001 0.1972 0.2200
## X_abs_rate2 0.1449 0.0055 <0.0001 0.1341 0.1557
## X_abs_rate3 0.2603 0.0055 <0.0001 0.2495 0.2711
## X_abs_rate4 0.1023 0.0057 <0.0001 0.0911 0.1135
## X_abs_rate5 0.1483 0.0055 <0.0001 0.1375 0.1591
## X_abs_rate6 0.0589 0.0032 <0.0001 0.0526 0.0652
## X_abs_rate7 0.0630 0.0031 <0.0001 0.0569 0.0691
## X_abs_rate8 0.0536 0.0031 <0.0001 0.0475 0.0597
## X_residuals 0.3587 0.0088 <0.0001 0.3415 0.3759
## betaTIC0 0.9665 0.3896 0.0131 0.2029 1.7301
## betaTIC1 0.0099 0.0159 0.5335 -0.0213 0.0411
## betaTIC2 -0.0175 0.0164 0.2859 -0.0496 0.0146
## betaTICg 0.0593 0.4359 0.8918 -0.7950 0.9136
## betaTVC0 7.6229 0.3793 <0.0001 6.8795 8.3663
## betaTVC1 0.0708 0.0166 <0.0001 0.0383 0.1033
## betaTVC2 -0.0289 0.0158 0.0674 -0.0599 0.0021
## betaTVCg -1.5263 0.4583 0.0009 -2.4246 -0.6280
## muTIC 0.0000 0.0447 >0.9999 -0.0876 0.0876
## phiTIC 0.9980 0.0631 <0.0001 0.8743 1.1217
## Y_mueta0 37.1867 0.5448 <0.0001 36.1189 38.2545
## Y_mueta1 1.7079 0.0172 <0.0001 1.6742 1.7416
## Y_mueta2 0.6836 0.0158 <0.0001 0.6526 0.7146
## Y_mu_knot 37.9056 0.5009 <0.0001 36.9239 38.8873
## covBL 0.4000 0.0523 <0.0001 0.2975 0.5025
## kappa 21.3464 1.2707 <0.0001 18.8559 23.8369
## Cov_XYres 0.5510 0.0619 <0.0001 0.4297 0.6723 \end{Soutput} \end{Schunk}
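For reference, the Wald intervals and p-values reported by \code{getEstimateStats()} follow the standard large-sample construction; as a quick check against the first row (\code{Y\_alpha0}) of the output above,
\begin{align*}
\hat{\theta} \pm 1.96\times\mathrm{SE}(\hat{\theta}) = 37.1159 \pm 1.96\times 0.4057 = (36.3207,\ 37.9111),
\end{align*}
and the two-sided p-value is $2\,\bigl(1-\Phi(|\hat{\theta}/\mathrm{SE}(\hat{\theta})|)\bigr)$, where $\Phi$ denotes the standard normal distribution function.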
\begin{figure}
\caption{Latent Growth Curve Models with Bilinear Spline Function (Random Knots) for Mathematics Ability (TVC: Standardized Reading Ability over Time; Growth TIC: Teacher-reported Approach)}
\label{fig:TVC1}
\label{fig:TVC2}
\label{fig:TVC}
\end{figure}
The model outputs highlight the merits of incorporating a decomposed time-varying covariate (TVC) into the analysis. First, the decomposed TVC enables examination of both the baseline and the temporal effects of reading ability on mathematics development. According to the model results, the baseline effect of reading ability on the early-stage development of mathematics ability is $0.0708$: for every standardized-unit increase in baseline reading ability, the growth rate of mathematics ability before Grade 3 increases by $0.0708$. The temporal effect of reading ability on mathematics ability is $21.3464$. This suggests, for instance, that the final mathematics score in the spring semester of Grade 1 increases by $21.3464$ for each standardized-unit increase in the reading-ability growth rate within that semester. Second, the decomposed model facilitates exploration of the relationship between the TVC baseline value and the TIC; for example, the covariance between baseline reading ability and the teacher-reported approach to learning was $0.4000$ at the inception of the ECLS-K:2011 study. More importantly, as displayed in Figure \ref{fig:TVC}, incorporating a TVC into a longitudinal model tends to yield underestimated growth factors and growth trajectories because the longitudinal outcome is regressed on the TVC (or its temporal states). However, the extent of underestimation is noticeably smaller in the model with a decomposed TVC (Figure \ref{fig:TVC2}) than in the model where the TVC is integrated directly (Figure \ref{fig:TVC1}).
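As described above, the decomposed TVC (requested with \code{decompose = 1}) replaces the raw covariate scores with a baseline value and interval-specific slopes. A sketch of this partition under individual measurement occasions, with $x_{ij}$ denoting the covariate score of individual $i$ at wave $j$, is
\begin{align*}
x_{i1}\ (\text{baseline value}), \qquad
\delta x_{ij} = \frac{x_{ij}-x_{i(j-1)}}{t_{ij}-t_{i(j-1)}}, \quad j = 2, \dots, J,
\end{align*}
so that the \code{betaTVC} coefficients capture the effects of the baseline value on the growth factors of the outcome, while $\kappa$ captures the effect of the interval-specific temporal states on the concurrent outcome measurements.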
\subsection{getMGM() Examples} The \code{getMGM()} function is designed to construct a multivariate growth model (MGM) to analyze the development of reading and mathematics abilities and the correlations between these two developmental processes over time. In addition to the model fitting, we compute p-values and Wald confidence intervals for this bivariate longitudinal model. We present visualizations of the estimated growth trajectories for both reading and mathematics abilities in Figures \ref{fig:MGM1} and \ref{fig:MGM2}. Apart from the growth factors associated with each univariate developmental process, this constructed model enables us to estimate the covariances between the growth factors of different outcomes and the residual covariance. As demonstrated in the output, the relationships between the developmental processes of the two abilities are positive, evidenced by positive intercept-intercept (YZ\_psi00, $\text{p-value}<0.0001$) and pre-knot slope-slope (YZ\_psi11, $\text{p-value}<0.0001$) covariances. This suggests that students who demonstrated a higher reading ability at the inception of the ECLS-K:2011 study also tended to exhibit a higher level of mathematics ability and vice versa. Similarly, students with more rapid growth in reading ability during the early stage were generally associated with more rapid development in mathematics ability, and vice versa.
\begin{Schunk}
\begin{Sinput}
R> paraBLS_PLGCM_f <- c(
+   "Y_mueta0", "Y_mueta1", "Y_mueta2", "Y_knot",
+   paste0("Y_psi", c("00", "01", "02", "0g", "11", "12", "1g", "22", "2g", "gg")),
+   "Y_res",
+   "Z_mueta0", "Z_mueta1", "Z_mueta2", "Z_knot",
+   paste0("Z_psi", c("00", "01", "02", "0g", "11", "12", "1g", "22", "2g", "gg")),
+   "Z_res",
+   paste0("YZ_psi", c("00", "10", "20", "g0", "01", "11", "21", "g1",
+                      "02", "12", "22", "g2", "0g", "1g", "2g", "gg")),
+   "YZ_res"
+ )
R> RM_PLGCM.f <- getMGM(
+   dat = RMS_dat0, t_var = c("T", "T"), y_var = c("R", "M"), curveFun = "BLS",
+   intrinsic = TRUE, records = list(1:9, 1:9), y_model = "LGCM",
+   res_scale = c(0.1, 0.1), res_cor = 0.3, paramOut = TRUE, names = paraBLS_PLGCM_f
+ )
R> Figure7 <- getFigure(
+   model = RM_PLGCM.f, sub_Model = "MGM", y_var = c("R", "M"), curveFun = "BLS",
+   y_model = "LGCM", t_var = c("T", "T"), records = list(1:9, 1:9),
+   xstarts = xstarts, xlab = "Month", outcome = c("Reading", "Mathematics")
+ )
R> print(Figure7[[1]])
R> print(Figure7[[2]])
\end{Sinput}
\begin{Soutput}
## $wald ...
## Estimate SE p.value wald_lbound wald_ubound
## YZ_psi00 88.5558 7.5042 <0.0001 73.8478 103.2638
## YZ_psi10 1.5594 0.3149 <0.0001 0.9422 2.1766
## YZ_psi20 -0.6429 0.1454 <0.0001 -0.9279 -0.3579
## YZ_psig0 -15.3384 3.4818 <0.0001 -22.1626 -8.5142
## YZ_psi01 0.6831 0.2057 0.0009 0.2799 1.0863
## YZ_psi11 0.0687 0.0108 <0.0001 0.0475 0.0899
## YZ_psi21 0.0018 0.0047 0.7017 -0.0074 0.0110
## YZ_psig1 -0.2231 0.1141 0.0505 -0.4467 0.0005
## YZ_psi02 -0.4524 0.1882 0.0162 -0.8213 -0.0835
## YZ_psi12 -0.0188 0.0094 0.0455 -0.0372 -0.0004
## YZ_psi22 0.0018 0.0046 0.6956 -0.0072 0.0108
## YZ_psig2 0.2468 0.1194 0.0387 0.0128 0.4808
## YZ_psi0g -18.8062 5.1876 0.0003 -28.9737 -8.6387
## YZ_psi1g 0.1887 0.2711 0.4864 -0.3426 0.7200
## YZ_psi2g 0.1961 0.1280 0.1255 -0.0548 0.4470
## YZ_psigg -1.6773 3.8826 0.6657 -9.2871 5.9325
## YZ_res 7.6825 0.7061 <0.0001 6.2986 9.0664 \end{Soutput} \end{Schunk}
\begin{figure}
\caption{Multivariate Latent Growth Curve Models with Bilinear Spline Function (Random Knots) for Reading Ability and Mathematics Ability}
\label{fig:MGM1}
\label{fig:MGM2}
\label{fig:MGM}
\end{figure}
\subsection{getMediation() Examples} The \code{getMediation()} function builds longitudinal mediation models. In this section, we set up two such models. The first, employing a linear-linear functional form and a baseline predictor, analyzes how the baseline approach to learning influences mathematics development via the mediation of reading development. The second, also adopting a linear-linear functional form but with a longitudinal predictor, examines how the developmental trajectory of reading ability influences the developmental process of science ability via the mediation of mathematics development. Both models are accompanied by p-values and Wald confidence intervals.
\begin{Schunk}
\begin{Sinput}
R> paraMed2_BLS <- c(
+   "muX", "phi11", "alphaM1", "alphaMr", "alphaM2", "mugM",
+   paste0("psi", c("M1M1", "M1Mr", "M1M2", "MrMr", "MrM2", "M2M2"), "_r"),
+   "alphaY1", "alphaYr", "alphaY2", "mugY",
+   paste0("psi", c("Y1Y1", "Y1Yr", "Y1Y2", "YrYr", "YrY2", "Y2Y2"), "_r"),
+   paste0("beta", rep(c("M", "Y"), each = 3), rep(c(1, "r", 2), 2)),
+   paste0("beta", c("M1Y1", "M1Yr", "M1Y2", "MrYr", "MrY2", "M2Y2")),
+   "muetaM1", "muetaMr", "muetaM2", "muetaY1", "muetaYr", "muetaY2",
+   paste0("Mediator", c("11", "1r", "12", "rr", "r2", "22")),
+   paste0("total", c("1", "r", "2")),
+   "residualsM", "residualsY", "residualsYM"
+ )
R> RM_BLS_LGCM <- getMediation(
+   dat = RMS_dat0, t_var = rep("T", 2), y_var = "M", m_var = "R",
+   x_type = "baseline", x_var = "ex1", curveFun = "BLS", records = list(1:9, 1:9),
+   res_scale = c(0.1, 0.1), res_cor = 0.3, paramOut = TRUE, names = paraMed2_BLS
+ )
R> getEstimateStats(est_in = RM_BLS_LGCM[[2]], CI_type = "Wald")
R> paraMed3_BLS <- c(
+   "muetaX1", "muetaXr", "muetaX2", "mugX",
+   paste0("psi", c("X1X1", "X1Xr", "X1X2", "XrXr", "XrX2", "X2X2")),
+   "alphaM1", "alphaMr", "alphaM2", "mugM",
+   paste0("psi", c("M1M1", "M1Mr", "M1M2", "MrMr", "MrM2", "M2M2"), "_r"),
+   "alphaY1", "alphaYr", "alphaY2", "mugY",
+   paste0("psi", c("Y1Y1", "Y1Yr", "Y1Y2", "YrYr", "YrY2", "Y2Y2"), "_r"),
+   paste0("beta", c("X1Y1", "X1Yr", "X1Y2", "XrYr", "XrY2", "X2Y2",
+                    "X1M1", "X1Mr", "X1M2", "XrMr", "XrM2", "X2M2",
+                    "M1Y1", "M1Yr", "M1Y2", "MrYr", "MrY2", "M2Y2")),
+   "muetaM1", "muetaMr", "muetaM2", "muetaY1", "muetaYr", "muetaY2",
+   paste0("mediator", c("111", "11r", "112", "1rr", "1r2", "122", "rr2",
+                        "r22", "rrr", "222")),
+   paste0("total", c("11", "1r", "12", "rr", "r2", "22")),
+   "residualsX", "residualsM", "residualsY",
+   "residualsMX", "residualsYX", "residualsYM"
+ )
R> set.seed(20191029)
R> RMS_BLS_LGCM <- getMediation(
+   dat = RMS_dat0, t_var = rep("T", 3), y_var = "S", m_var = "M",
+   x_type = "longitudinal", x_var = "R", curveFun = "bilinear spline",
+   records = list(2:9, 1:9, 1:9), res_scale = c(0.1, 0.1, 0.1),
+   res_cor = c(0.3, 0.3), tries = 10, paramOut = TRUE, names = paraMed3_BLS
+ )
R> getEstimateStats(est_in = RMS_BLS_LGCM[[2]], CI_type = "Wald")
\end{Sinput}
\begin{Soutput}
## $wald
## Estimate SE p.value wald_lbound wald_ubound ...
## betaM1 0.0623 0.0231 0.0070 0.0170 0.1076
## betaMr 5.5471 0.6945 <0.0001 4.1859 6.9083
## betaM2 -0.0468 0.0118 0.0001 -0.0699 -0.0237
## betaY1 0.0149 0.0139 0.2837 -0.0123 0.0421
## betaYr 1.2907 0.5133 0.0119 0.2847 2.2967
## betaY2 -0.0212 0.0135 0.1163 -0.0477 0.0053
## betaM1Y1 0.3807 0.0317 <0.0001 0.3186 0.4428
## betaM1Yr 0.1206 0.9362 0.8975 -1.7143 1.9555
## betaM1Y2 0.0548 0.0505 0.2779 -0.0442 0.1538
## betaMrYr 0.7277 0.0309 <0.0001 0.6671 0.7883
## betaMrY2 -0.0012 0.0020 0.5485 -0.0051 0.0027
## betaM2Y2 0.4813 0.1434 0.0008 0.2002 0.7624 ...
## Mediator11 0.0237 0.0090 0.0085 0.0061 0.0413
## Mediator1r 0.0075 0.0585 0.8980 -0.1072 0.1222
## Mediator12 0.0034 0.0034 0.3173 -0.0033 0.0101
## Mediatorrr 4.0368 0.5298 <0.0001 2.9984 5.0752
## Mediatorr2 -0.0066 0.0111 0.5521 -0.0284 0.0152
## Mediator22 -0.0225 0.0088 0.0106 -0.0397 -0.0053
## total1 0.0386 0.0156 0.0133 0.0080 0.0692
## totalr 5.3350 0.6953 <0.0001 3.9722 6.6978
## total2 -0.0469 0.0133 0.0004 -0.0730 -0.0208 ... \end{Soutput} \begin{Soutput}
## $wald
## Estimate SE p.value wald_lbound wald_ubound ...
## betaX1Y1 0.3987 0.0313 <0.0001 0.3374 0.4600
## betaX1Yr 0.6677 1.2151 0.5827 -1.7139 3.0493
## betaX1Y2 0.0653 0.0571 0.2528 -0.0466 0.1772
## betaXrYr 0.7445 0.0347 <0.0001 0.6765 0.8125
## betaXrY2 -0.0020 0.0023 0.3845 -0.0065 0.0025
## betaX2Y2 0.4755 0.1700 0.0052 0.1423 0.8087
## betaX1M1 0.1540 0.0371 <0.0001 0.0813 0.2267
## betaX1Mr 4.4495 1.2668 0.0004 1.9666 6.9324
## betaX1M2 -0.1999 0.0700 0.0043 -0.3371 -0.0627
## betaXrMr 0.1960 0.0348 <0.0001 0.1278 0.2642
## betaXrM2 0.0090 0.0028 0.0013 0.0035 0.0145
## betaX2M2 0.8529 0.1818 <0.0001 0.4966 1.2092
## betaM1Y1 0.2718 0.0541 <0.0001 0.1658 0.3778
## betaM1Yr -2.6244 1.5247 0.0852 -5.6128 0.3640
## betaM1Y2 0.0092 0.0928 0.9210 -0.1727 0.1911
## betaMrYr 0.2927 0.0367 <0.0001 0.2208 0.3646
## betaMrY2 0.0028 0.0024 0.2433 -0.0019 0.0075
## betaM2Y2 0.3738 0.1587 0.0185 0.0628 0.6848 ...
## mediator111 0.1084 0.0229 <0.0001 0.0635 0.1533
## mediator11r -1.0462 0.6148 0.0888 -2.2512 0.1588
## mediator112 0.0037 0.0370 0.9203 -0.0688 0.0762
## mediator1rr 0.1954 0.3497 0.5763 -0.4900 0.8808
## mediator1r2 0.0019 0.0039 0.6261 -0.0057 0.0095
## mediator122 0.0244 0.0269 0.3644 -0.0283 0.0771
## mediatorrr2 0.2179 0.0308 <0.0001 0.1575 0.2783
## mediatorr22 0.0021 0.0018 0.2433 -0.0014 0.0056
## mediatorrrr -0.0008 0.0010 0.4237 -0.0028 0.0012
## mediator222 0.1777 0.0713 0.0127 0.0380 0.3174
## total11 0.2624 0.0273 <0.0001 0.2089 0.3159
## total1r 3.5987 1.2804 0.0049 1.0892 6.1082
## total12 -0.1700 0.0633 0.0072 -0.2941 -0.0459
## totalrr 0.4140 0.0339 <0.0001 0.3476 0.4804
## totalr2 0.0103 0.0024 <0.0001 0.0056 0.0150
## total22 1.0306 0.1773 <0.0001 0.6831 1.3781 ... \end{Soutput} \end{Schunk}
Similar to the MGMs, these longitudinal mediation models estimate the growth trajectory of each univariate process and the relationships among these processes over time. Here, however, the relationships between processes are captured by the coefficients of unidirectional paths. For example, the output from the first model indicates that the baseline approach to learning positively influences the early-stage growth rate of reading ability (betaM1, $\text{p-value}=0.0070$), as well as reading ability (betaMr, $\text{p-value}<0.0001$) and mathematics ability (betaYr, $\text{p-value}=0.0119$) at the knot. Additionally, the effect of reading ability on mathematics ability during the early stage is also positive (betaM1Y1, $\text{p-value}<0.0001$).
By using these path coefficients, we can calculate both the indirect (mediation) effect of the baseline approach to learning on mathematics development through reading development and the corresponding total effect. For example, through the early-stage growth rate of reading ability, the indirect effect of the baseline approach to learning on the early-stage growth rate of mathematics ability was $0.0237$; adding the direct effect of $0.0149$ gives a total effect of $0.0386$ ($0.0386=0.0149+0.0237$). The path coefficients, indirect effects, and total effects for the second (longitudinal-predictor) model can be interpreted in a similar manner.
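The reported indirect effects are products of the constituent path coefficients, and each total effect adds the corresponding direct path. For the example above,
\begin{align*}
\text{Mediator11} &= \text{betaM1}\times\text{betaM1Y1} = 0.0623\times 0.3807 \approx 0.0237,\\
\text{total1} &= \text{betaY1} + \text{Mediator11} = 0.0149 + 0.0237 = 0.0386.
\end{align*}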
\subsection{getMGroup() Examples} The \code{getMGroup()} function builds multiple-group latent growth curve models. The model here employs a linear-linear functional form with a random knot, allowing an examination of sex differences in mathematics development from Grade K through Grade 5. As shown in Figure \ref{fig:MGroup}, the development of mathematical ability in boys slightly outpaces that in girls. However, this discrepancy is not statistically significant, as evidenced by the overlapping confidence intervals of the two manifest groups.
\begin{Schunk}
\begin{Sinput}
R> MGroup_Math_BLS_LGCM_f <- getMGroup(
+   dat = RMS_dat0, grp_var = "SEX", sub_Model = "LGCM", y_var = "M", t_var = "T",
+   records = 1:9, curveFun = "BLS", intrinsic = TRUE, res_scale = list(0.1, 0.1)
+ )
R> Figure8 <- getFigure(
+   model = MGroup_Math_BLS_LGCM_f, nClass = 2, cluster_TIC = NULL, grp_var = "SEX",
+   sub_Model = "LGCM", y_var = "M", curveFun = "BLS", y_model = "LGCM", t_var = "T",
+   records = 1:9, m_var = NULL, x_var = NULL, x_type = NULL, xstarts = xstarts,
+   xlab = "Month", outcome = "Mathematics"
+ )
R> print(Figure8)
\end{Sinput}
\end{Schunk}
\begin{figure}
\caption{Multiple Group and Mixture Latent Growth Curve Models with Bilinear Spline Function for Mathematics Ability}
\label{fig:MGroup}
\label{fig:MIX}
\label{fig:HetGroup}
\end{figure}
\subsection{getMIX() Examples} The \code{getMIX()} function constructs mixture latent growth curve models, here incorporating a linear-linear functional form and a fixed knot. We first performed an enumeration process, building a set of candidate models with one to three latent classes. Using the \code{getSummary()} function with \code{HetModels = TRUE}, we obtained the $-2$ log-likelihood, AIC, BIC, class-specific residual variances, and class proportions for each of the three models. Both the likelihood and the information criteria consistently pointed to the three-class model as optimal. We then plotted the estimated growth status of each latent class in Figure \ref{fig:MIX}. The figure reveals that students in the third latent class outperformed the other two classes in mathematics. Although the other two classes showed overlapping patterns of mathematics development during the early stage, the growth pace of the first class slowed earlier than that of the second.
\begin{Schunk}
\begin{Sinput}
R> Math_BLS_LGCM1 <- getLGCM(
+   dat = RMS_dat0, t_var = "T", y_var = "M", curveFun = "BLS", intrinsic = FALSE,
+   records = 1:9, res_scale = 0.1
+ )
R> Math_BLS_LGCM2 <- getMIX(
+   dat = RMS_dat0, prop_starts = c(0.45, 0.55), sub_Model = "LGCM", y_var = "M",
+   t_var = "T", records = 1:9, curveFun = "BLS", intrinsic = FALSE,
+   res_scale = list(0.3, 0.3)
+ )
R> set.seed(20191029)
R> Math_BLS_LGCM3 <- getMIX(
+   dat = RMS_dat0, prop_starts = c(0.33, 0.34, 0.33), sub_Model = "LGCM",
+   y_var = "M", t_var = "T", records = 1:9, curveFun = "BLS", intrinsic = FALSE,
+   res_scale = list(0.3, 0.3, 0.3), tries = 10
+ )
R> Figure9 <- getFigure(
+   model = Math_BLS_LGCM3, nClass = 3, cluster_TIC = NULL, sub_Model = "LGCM",
+   y_var = "M", curveFun = "BLS", y_model = "LGCM", t_var = "T", records = 1:9,
+   m_var = NULL, x_var = NULL, x_type = NULL, xstarts = xstarts, xlab = "Month",
+   outcome = "Mathematics"
+ )
R> print(Figure9)
R> getSummary(model_list = list(Math_BLS_LGCM1, Math_BLS_LGCM2, Math_BLS_LGCM3),
+             HetModels = TRUE)
\end{Sinput}
\begin{Soutput}
## Model No_Params -2ll AIC BIC Y_res_c1 Y_res_c2 Y_res_c3
## Model1 11 31347.39 31369.39 31415.75 34.0030 NA NA
## Model2 23 31134.61 31180.61 31277.55 34.2367 31.5802 NA
## Model3 35 31008.30 31078.30 31225.81 29.5689 31.0093 31.4662
##
## Model1 100
## Model2 67.4
## Model3 22.4
\end{Soutput} \end{Schunk}
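The information criteria in this summary follow the standard definitions $\mathrm{AIC}=-2\ell\ell+2k$ and $\mathrm{BIC}=-2\ell\ell+k\ln N$, where $k$ is the number of free parameters and $N$ is the sample size ($N=500$ is implied by the reported values). For Model 1, for instance,
\begin{align*}
\mathrm{AIC} &= 31347.39 + 2\times 11 = 31369.39, &
\mathrm{BIC} &= 31347.39 + 11\times\ln 500 \approx 31415.75.
\end{align*}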
\section{Concluding Remarks}\label{sec:conclude} The developed R package, \pkg{nlpsem}, aims to facilitate comprehensive evaluations of nonlinear longitudinal processes, including intrinsically nonlinear functional forms, within the SEM framework. It currently supports three commonly used intrinsically nonlinear functional forms: the negative exponential function with an individual growth-rate parameter, the Jenss-Bayley function with an individual growth-acceleration parameter, and the bilinear spline function, also known as the linear-linear piecewise function, with an individual knot. In addition, the package is versatile enough to handle parsimonious (reduced) models and models with a quadratic functional form, under both Type I and Type II nonlinear longitudinal models. Although not primarily focused on linear longitudinal processes, \pkg{nlpsem} incorporates functionalities for them as well, making it a comprehensive tool for researchers in the field. The package provides computational resources for univariate longitudinal processes, with the option to include or exclude time-invariant covariates. Further, it facilitates estimation for multivariate longitudinal processes, including a longitudinal outcome with time-varying covariates, correlated growth models for multiple outcomes, and longitudinal mediation models. Multiple-group and mixture models are also accommodated, where the sub-model can be any of the types above. Built on the \pkg{OpenMx} package, \pkg{nlpsem} enables flexible SEM specification and data-driven parameter estimation through built-in optimizers. Note that the package accommodates unstructured, individually varying measurement occasions by employing the definition-variables approach.
Despite its capabilities, \pkg{nlpsem} has limitations, which also pave the way for future developments. First, other nonlinear functional forms, such as logistic and Gompertz functions, are not currently supported. The inclusion of such additional forms could enhance the flexibility and applicability of the package. Second, formal statistical hypothesis testing needs to be developed for complex longitudinal models to evaluate the impact of removing or adding specific paths on the overall model. Third, although several nonlinear longitudinal models with certain functional forms have been well-documented and validated by simulation studies, others still need to be explored. Conducting further simulation studies to investigate the performance of these lesser-known models under different scenarios will provide essential insights and improve the robustness and validity of the \pkg{nlpsem} package.
\renewcommand\thetable{\arabic{table}} \setcounter{table}{0}
\begin{table}[!htbp] \centering \footnotesize \resizebox{1.15\columnwidth}{!}{\begin{threeparttable} \setlength\tabcolsep{2pt} \renewcommand{\arraystretch}{0.6} \caption{Model Specification for Commonly Used Latent Growth Curve Models with Individual Measurement Occasions} \begin{tabular}{p{4.3cm}p{8.1cm}p{7.5cm}} \hline \hline & \multicolumn{2}{c}{\textbf{Linear Function}} \\ \hline \textbf{Individual Growth Curve}\tnote{a} & $y_{ij}=\eta_{0i}+\eta_{1i}\times{t_{ij}}+\epsilon_{ij}$ & \\ \textbf{Growth Factors}\tnote{b} & $\boldsymbol{\eta}_{i}=\begin{pmatrix}\eta_{0i} & \eta_{1i}\end{pmatrix}$ & \\ \textbf{Factor Loadings}\tnote{b} & $\boldsymbol{\Lambda}_{i}=\begin{pmatrix}1 & t_{ij} \end{pmatrix}$ & \\ \multirow{2}{*}{\textbf{Interpretation of Growth Coef.}} & \multicolumn{2}{l}{$\eta_{0i}$: the individual initial status} \\ & \multicolumn{2}{l}{$\eta_{1i}$: the individual linear component of change} \\ \hline \hline & \multicolumn{2}{c}{\textbf{Quadratic Function}} \\ \hline \textbf{Individual Growth Curve}\tnote{a} & $y_{ij}=\eta_{0i}+\eta_{1i}\times{t_{ij}}+\eta_{2i}\times{t^{2}_{ij}}+\epsilon_{ij}$ & \\ \textbf{Growth Factors}\tnote{b} & $\boldsymbol{\eta}_{i}=\begin{pmatrix}\eta_{0i} & \eta_{1i} & \eta_{2i}\end{pmatrix}$ & \\ \textbf{Factor Loadings}\tnote{b} & $\boldsymbol{\Lambda}_{i}=\begin{pmatrix}1 & t_{ij} & t^{2}_{ij} \end{pmatrix}$ & \\ \multirow{3}{*}{\textbf{Interpretation of Growth Coef.}} & \multicolumn{2}{l}{$\eta_{0i}$: the individual initial status} \\ & \multicolumn{2}{l}{$\eta_{1i}$: the individual linear component of change} \\ & \multicolumn{2}{l}{$\eta_{2i}$: the individual quadratic component of change (i.e., the individual growth acceleration)} \\ \hline \hline & \multicolumn{2}{c}{\textbf{Negative Exponential Function}} \\ \hline & \textbf{Intrinsically Nonlinear Model} & \textbf{Reduced Non-intrinsically Nonlinear Model} \\ \hline \textbf{Individual Growth Curve}\tnote{a} & $y_{ij}=\eta_{0i}+\eta_{1i}\times(1-\exp(-b_{i}\times
{t_{ij}}))+\epsilon_{ij}$ & $y_{ij}=\eta_{0i}+\eta_{1i}\times(1-\exp(-b\times {t_{ij}}))+\epsilon_{ij}$ \\ \textbf{Growth Factors}\tnote{b} & $\boldsymbol{\eta}_{i}=\begin{pmatrix}\eta_{0i} & \eta_{1i} & b_{i}-\mu_{b} \end{pmatrix}$ & $\boldsymbol{\eta}_{i}=\begin{pmatrix}\eta_{0i} & \eta_{1i} \end{pmatrix}$ \\ \textbf{Factor Loadings}\tnote{b} & $\boldsymbol{\Lambda}_{i}\approx\begin{pmatrix}1 & 1-\exp(-\mu_{b}\times {t_{ij}}) & \mu_{\eta_{1}}\times\exp(-\mu_{b}t_{ij})\times t_{ij} \end{pmatrix}$ & $\boldsymbol{\Lambda}_{i}=\begin{pmatrix}1 & 1-\exp(-b\times {t_{ij}}) \end{pmatrix}$ \\ \multirow{3}{*}{\textbf{Interpretation of Growth Coef.}} & \multicolumn{2}{l}{$\eta_{0i}$: the individual initial status} \\ & \multicolumn{2}{l}{$\eta_{1i}$: the individual change from initial status to asymptotic level (i.e., the individual growth capacity)} \\ & \multicolumn{2}{l}{$b$ ($b_{i}$)\tnote{c}: a growth rate parameter that controls the curvature of the growth trajectory (for individual $i$)} \\ \hline \hline & \multicolumn{2}{c}{\textbf{Jenss-Bayley Function}} \\ \hline & \textbf{Intrinsically Nonlinear Model} & \textbf{Reduced Non-intrinsically Nonlinear Model} \\ \hline \textbf{Individual Growth Curve}\tnote{a} & $y_{ij}=\eta_{0i}+\eta_{1i}\times t_{ij}+\eta_{2i}\times(\exp(c_{i}\times t_{ij})-1)+\epsilon_{ij}$ & $y_{ij}=\eta_{0i}+\eta_{1i}\times t_{ij}+\eta_{2i}\times(\exp(c\times t_{ij})-1)+\epsilon_{ij}$ \\ \textbf{Growth Factors}\tnote{b} & $\boldsymbol{\eta}_{i}=\begin{pmatrix}\eta_{0i} & \eta_{1i} & \eta_{2i} & c_{i}-\mu_{c} \end{pmatrix}$ & $\boldsymbol{\eta}_{i}=\begin{pmatrix}\eta_{0i} & \eta_{1i} & \eta_{2i} \end{pmatrix}$ \\ \textbf{Factor Loadings}\tnote{b} & $\boldsymbol{\Lambda}_{i}\approx\begin{pmatrix}1 & t_{ij} & \exp(\mu_{c}\times {t_{ij}})-1 & \mu_{\eta_{2}}\times\exp(\mu_{c}t_{ij})\times t_{ij} \end{pmatrix}$ & $\boldsymbol{\Lambda}_{i}=\begin{pmatrix}1 & t_{ij} & \exp(c\times {t_{ij}})-1 \end{pmatrix}$ \\ \multirow{4}{*}{\textbf{Interpretation of
Growth Coef.}} & \multicolumn{2}{l}{$\eta_{0i}$: the individual initial status} \\ & \multicolumn{2}{l}{$\eta_{1i}$: the individual slope of linear asymptote with the assumption $c_{i}<0$ ($c<0$)\tnote{d}} \\ & \multicolumn{2}{l}{$\eta_{2i}$: the individual change from initial status to the linear asymptote intercept} \\ & \multicolumn{2}{l}{$c$ ($c_{i}$)\tnote{e}: a growth acceleration parameter that controls the rate of change of the growth trajectory's curvature (for individual $i$)} \\ \hline \hline & \multicolumn{2}{c}{\textbf{Bilinear Spline Function with an Unknown Knot}} \\ \hline & \textbf{Intrinsically Nonlinear Model} & \textbf{Reduced Non-intrinsically Nonlinear Model} \\ \hline \textbf{Individual Growth Curve}\tnote{a} & $y_{ij}=\begin{cases} \eta_{0i}+\eta_{1i}\times t_{ij}+\epsilon_{ij}, & t_{ij}<\gamma_{i} \\ \eta_{0i}+\eta_{1i}\times \gamma_{i}+\eta_{2i}\times(t_{ij}-\gamma_{i})+\epsilon_{ij}, & t_{ij}\ge\gamma_{i} \\ \end{cases}$ & $y_{ij}=\begin{cases} \eta_{0i}+\eta_{1i}\times t_{ij}+\epsilon_{ij}, & t_{ij}<\gamma \\ \eta_{0i}+\eta_{1i}\times \gamma+\eta_{2i}\times(t_{ij}-\gamma)+\epsilon_{ij}, & t_{ij}\ge\gamma \\ \end{cases}$ \\ \textbf{Growth Factors}\tnote{b} & $\boldsymbol{\eta}^{'}_{i}=\begin{pmatrix}\eta_{0i}+\gamma_{i}\eta_{1i} & \frac{\eta_{1i}+\eta_{2i}}{2} & \frac{\eta_{2i}-\eta_{1i}}{2} & \gamma_{i}-\mu_{\gamma} \end{pmatrix}$ & $\boldsymbol{\eta}^{'}_{i}=\begin{pmatrix}\eta_{0i}+\gamma\eta_{1i} & \frac{\eta_{1i}+\eta_{2i}}{2} & \frac{\eta_{2i}-\eta_{1i}}{2}\end{pmatrix}$ \\
\textbf{Factor Loadings}\tnote{b} & $\boldsymbol{\Lambda}^{'}_{i}\approx\begin{pmatrix}1 & t_{ij}-\mu_{\gamma} & |t_{ij}-\mu_{\gamma}| & -\mu^{'}_{\eta_{2}}-\frac{\mu^{'}_{\eta_{2}}(t_{ij}-\mu_{\gamma})}{|t_{ij}-\mu_{\gamma}|}
\end{pmatrix}$ & $\boldsymbol{\Lambda}^{'}_{i}=\begin{pmatrix}1 & t_{ij}-\gamma & |t_{ij}-\gamma| \end{pmatrix}$ \\ \multirow{4}{*}{\textbf{Interpretation of Growth Coef.}} & \multicolumn{2}{l}{$\eta_{0i}$: the individual initial status} \\ & \multicolumn{2}{l}{$\eta_{1i}$: the individual slope of the first linear piece} \\ & \multicolumn{2}{l}{$\eta_{2i}$: the individual slope of the second linear piece} \\ & \multicolumn{2}{l}{$\gamma$ ($\gamma_{i}$): the (individual) transition time from $1^{st}$ linear piece to $2^{nd}$ linear piece (i.e., knot)} \\ \hline \hline \end{tabular} \label{tbl:LGCM_summary} \begin{tablenotes} \small \item[a] {In the function of the individual growth curve, $y_{ij}$, $t_{ij}$, and $\epsilon_{ij}$ are the observed measurement, recorded time, and residual of the $i^{th}$ individual at the $j^{th}$ time point.} \item[b] {In the vector of growth factors and the corresponding factor loadings, $\mu_{b}$, $\mu_{c}$, and $\mu_{\gamma}$ are the mean values of the log-ratio of growth rate, of the log-ratio of growth acceleration, and of the knot for the negative exponential function, Jenss-Bayley function, and bilinear spline function with an unknown knot, respectively.} \item[c] {There are multiple interpretations for $b$ ($b_{i}$). For example, $\exp(b_{i}\times(t_{i(j+1)} - t_{ij}))$ represents the ratio of the instantaneous growth rates at time points $t_{i(j+1)}$ and $t_{ij}$. This value reflects how much the growth rate has changed between $t_{ij}$ and $t_{i(j+1)}$, depending on the growth rate parameter $b_{i}$. With the assumption that measurements are taken at equally-spaced waves with scaled intervals, $\exp(b_{i})$ represents the ratio of the instantaneous rates at any adjacent time points.} \item[d] {If $c_{i}>0$ ($c>0$), the Jenss-Bayley function does not have a linear asymptote as the nonlinear component continues to grow with time.} \item[e] {There are multiple interpretations for $c$ ($c_{i}$). 
For example, $\exp(c_{i}\times(t_{i(j+1)} - t_{ij}))$ represents the ratio of the instantaneous growth accelerations at time points $t_{i(j+1)}$ and $t_{ij}$. This value reflects how much the growth acceleration has changed between $t_{ij}$ and $t_{i(j+1)}$, depending on the growth acceleration parameter $c_{i}$. With the assumption that measurements are taken at equally-spaced waves with scaled intervals, $\exp(c_{i})$ represents the ratio of the instantaneous accelerations at any adjacent time points.} \end{tablenotes} \end{threeparttable}} \end{table}
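The approximate factor loadings of the intrinsically nonlinear models in Table \ref{tbl:LGCM_summary} follow from a first-order Taylor expansion of the individual growth curve around the mean values of the random coefficients. As a sketch for the negative exponential function, expanding around $b_{i}=\mu_{b}$ and evaluating the derivative at $\eta_{1i}\approx\mu_{\eta_{1}}$ gives
\begin{align*}
\eta_{0i}+\eta_{1i}\left(1-\exp(-b_{i}t_{ij})\right)
\approx \eta_{0i}+\eta_{1i}\left(1-\exp(-\mu_{b}t_{ij})\right)
+\mu_{\eta_{1}}\exp(-\mu_{b}t_{ij})\,t_{ij}\,(b_{i}-\mu_{b}),
\end{align*}
which reproduces the loadings on the growth factors $\begin{pmatrix}\eta_{0i} & \eta_{1i} & b_{i}-\mu_{b}\end{pmatrix}$ shown in the table; the expansions for the Jenss-Bayley and bilinear spline functions proceed analogously.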
\begin{table}[!htbp] \centering \footnotesize \resizebox{1.15\columnwidth}{!}{\begin{threeparttable} \setlength\tabcolsep{2pt} \renewcommand{\arraystretch}{0.6} \caption{Model Specification for Commonly Used Latent Change Score Models with Individual Measurement Occasions} \begin{tabular}{p{5cm}p{10.5cm}p{6cm}} \hline \hline & \multicolumn{2}{c}{\textbf{Quadratic Function}} \\ \hline \textbf{Individual Growth Rate}\tnote{b} & $dy_{ij\_\text{mid}}=\eta_{1i}+2\times\eta_{2i}\times t_{ij\_\text{mid}}$ & \\ \textbf{Growth Factors of Growth Rate}\tnote{c} & $\boldsymbol{\eta}_{di}=\begin{pmatrix}\eta_{1i} & \eta_{2i} \end{pmatrix}$ & \\ \textbf{Factor Loadings of Growth Rate}\tnote{c} & $\boldsymbol{\Lambda}_{di}=\begin{pmatrix} 1 & 2\times t_{ij\_\text{mid}} \end{pmatrix}$ & \\ \textbf{Growth Factors of Growth Status}\tnote{d} & $\boldsymbol{\eta}_{i}=\begin{pmatrix}\eta_{0i} & \eta_{1i} & \eta_{2i}\end{pmatrix}$ & \\ \textbf{Factor Loadings of Growth Status}\tnote{d} & $\boldsymbol{\Lambda}_{i}=\begin{pmatrix} \boldsymbol{1} & \boldsymbol{\Omega}_{i}\times\boldsymbol{\Lambda}_{di} \end{pmatrix}$ & \\ \hline \hline & \multicolumn{2}{c}{\textbf{Negative Exponential Function}} \\ \hline & \textbf{Intrinsically Nonlinear Model} & \textbf{Reduced Non-intrinsically Nonlinear Model} \\ \hline \textbf{Individual Growth Rate}\tnote{b} & $dy_{ij\_\text{mid}}=b_{i}\times\eta_{1i}\times\exp(-b_{i}\times t_{ij\_\text{mid}})$ & $dy_{ij\_\text{mid}}=b\times\eta_{1i}\times\exp(-b\times t_{ij\_\text{mid}})$ \\ \textbf{Growth Factors of Growth Rate}\tnote{c,e} & $\boldsymbol{\eta}_{di}=\begin{pmatrix}\eta_{1i} & b_{i}-\mu_{b} \end{pmatrix}$ & $\boldsymbol{\eta}_{di}=\eta_{1i}$ \\ \textbf{Factor Loadings of Growth Rate}\tnote{c,e} & $\boldsymbol{\Lambda}_{di}\approx\begin{pmatrix} \mu_{b}\times\exp(-\mu_{b}\times t_{ij\_\text{mid}}) & \mu_{\eta_{1}}\times\exp(-\mu_{b}t_{ij\_\text{mid}})\times(1-\mu_{b}t_{ij\_\text{mid}}) \end{pmatrix}$ & $\boldsymbol{\Lambda}_{di}=b\times\exp(-b\times
t_{ij\_\text{mid}})$ \\ \textbf{Growth Factors of Growth Status}\tnote{d,e} & $\boldsymbol{\eta}_{i}=\begin{pmatrix}\eta_{0i} & \eta_{1i} & b_{i}-\mu_{b} \end{pmatrix}$ & $\boldsymbol{\eta}_{i}=\begin{pmatrix}\eta_{0i} & \eta_{1i} \end{pmatrix}$ \\ \textbf{Factor Loadings of Growth Status}\tnote{d,e} & $\boldsymbol{\Lambda}_{i}\approx\begin{pmatrix}\boldsymbol{1} & \boldsymbol{\Omega}_{i}\times\boldsymbol{\Lambda}_{di} \end{pmatrix}$ & $\boldsymbol{\Lambda}_{i}=\begin{pmatrix}\boldsymbol{1} & \boldsymbol{\Omega}_{i}\times\boldsymbol{\Lambda}_{di} \end{pmatrix}$ \\ \hline \hline & \multicolumn{2}{c}{\textbf{Jenss-Bayley Function}} \\ \hline & \textbf{Intrinsically Nonlinear Model} & \textbf{Reduced Non-intrinsically Nonlinear Model} \\ \hline \textbf{Individual Growth Rate}\tnote{b} & $dy_{ij\_\text{mid}}=\eta_{1i}+c_{i}\times\eta_{2i}\times\exp(c_{i}\times t_{ij\_\text{mid}})$ & $dy_{ij\_\text{mid}}=\eta_{1i}+c\times\eta_{2i}\times\exp(c\times t_{ij\_\text{mid}})$ \\ \textbf{Growth Factors of Growth Rate}\tnote{c,e} & $\boldsymbol{\eta}_{di}=\begin{pmatrix}\eta_{1i} & \eta_{2i} & c_{i}-\mu_{c} \end{pmatrix}$ & $\boldsymbol{\eta}_{di}=\begin{pmatrix}\eta_{1i} & \eta_{2i} \end{pmatrix}$ \\ \textbf{Factor Loadings of Growth Rate}\tnote{c,e} & $\boldsymbol{\Lambda}_{di}\approx\begin{pmatrix} 1 & \mu_{c}\times\exp(\mu_{c}\times t_{ij\_\text{mid}}) & \mu_{\eta_{2}}\times\exp(\mu_{c}t_{ij\_\text{mid}})\times(1+\mu_{c}t_{ij\_\text{mid}}) \end{pmatrix}$ & $\boldsymbol{\Lambda}_{di}\approx\begin{pmatrix} 1 & c\times\exp(c\times t_{ij\_\text{mid}}) \end{pmatrix}$ \\ \textbf{Growth Factors of Growth Status}\tnote{d,e} & $\boldsymbol{\eta}_{i}=\begin{pmatrix}\eta_{0i} & \eta_{1i} & \eta_{2i} & c_{i}-\mu_{c} \end{pmatrix}$ & $\boldsymbol{\eta}_{i}=\begin{pmatrix}\eta_{0i} & \eta_{1i} & \eta_{2i} \end{pmatrix}$ \\ \textbf{Factor Loadings of Growth Status}\tnote{d,e} & $\boldsymbol{\Lambda}_{i}\approx\begin{pmatrix}\boldsymbol{1} & 
\boldsymbol{\Omega}_{i}\times\boldsymbol{\Lambda}_{di} \end{pmatrix}$ & $\boldsymbol{\Lambda}_{i}=\begin{pmatrix}\boldsymbol{1} & \boldsymbol{\Omega}_{i}\times\boldsymbol{\Lambda}_{di} \end{pmatrix}$ \\ \hline \hline & \multicolumn{2}{c}{\textbf{Nonparametric Function}} \\ \hline \textbf{Individual Growth Rate}\tnote{b} & $dy_{ij}=\eta_{1i}\times\gamma_{j-1}$ & \\ \textbf{Growth Factors of Growth Rate}\tnote{c} & $\boldsymbol{\eta}_{di}=\eta_{1i}$ & \\ \textbf{Factor Loadings of Growth Rate}\tnote{c} & $\boldsymbol{\Lambda}_{di}=\gamma_{j-1}$ & \\ \textbf{Growth Factors of Growth Status}\tnote{d} & $\boldsymbol{\eta}_{i}=\begin{pmatrix}\eta_{0i} & \eta_{1i} \end{pmatrix}$ & \\ \textbf{Factor Loadings of Growth Status}\tnote{d} & $\boldsymbol{\Lambda}_{i}=\begin{pmatrix} \boldsymbol{1} & \boldsymbol{\Omega}_{i}\times\boldsymbol{\Lambda}_{di} \end{pmatrix}$ & \\ \multirow{3}{*}{\textbf{Interpretation of Growth Coef.}} & \multicolumn{2}{l}{$\eta_{0i}$: the individual initial status} \\ & \multicolumn{2}{l}{$\eta_{1i}$: the individual slope during the first time interval} \\ & \multicolumn{2}{l}{$\gamma_{j}$: the relative growth rate of the $j^{th}$ interval} \\ \hline \hline \end{tabular} \label{tbl:LCSM_summary} \begin{tablenotes} \item[a] {This table does not include the specifications for LCSMs with linear and bilinear spline functions, as LGCMs with these two functional forms can estimate interval-specific slopes, eliminating the need for LCSMs to estimate growth rates. Additionally, this table presents the model specifications for the LCSM with a piecewise linear function. 
Note that the specification of this model serves as the foundation for the models with a decomposed TVC, which is introduced in Subsection \ref{spec:TVC}.} \item[b] {In the individual growth rate function, $dy_{ij\_\text{mid}}$ and $t_{ij\_\text{mid}}$ are the instantaneous slope midway through the $(j-1)^{th}$ time interval and the corresponding time.} \item[c] {The growth factors of the growth rate $\boldsymbol{\eta}_{di}$ consist of those associated with the growth rates, which are present in the respective growth rate function. The corresponding factor loadings are provided in the matrix $\boldsymbol{\Lambda}_{di}$ representing the factor loadings of the growth rate (where $j=2, 3, \dots, J$). The mean vector and variance-covariance matrix of growth factors of growth rate are $\boldsymbol{\mu_{\eta}}_{d}$ and $\boldsymbol{\Psi_{\eta}}_{d}$, respectively. With $\boldsymbol{\eta}_{di}$, $\boldsymbol{\Lambda}_{di}$, $\boldsymbol{\mu_{\eta}}_{d}$, and $\boldsymbol{\Psi_{\eta}}_{d}$, we are able to derive (1) mean and variance of interval-specific slopes: $\boldsymbol{\mu_{dy\_\text{mid}}}=\boldsymbol{\Lambda_{\eta}}_{d}\times\boldsymbol{\mu_{\eta}}_{d}$ and $\boldsymbol{\sigma^{2}_{dy\_\text{mid}}}=\boldsymbol{\Lambda_{\eta}}_{d}\times\boldsymbol{\Psi}_{d}\times\boldsymbol{\Lambda_{\eta}}^{T}_{d}$, and (2) mean and variance of interval-specific changes: $\boldsymbol{\mu_{\delta y_{ij}}}=\boldsymbol{\Lambda_{\eta}}_{d}\times\boldsymbol{\mu_{\eta}}_{d}\times(t_{ij}-t_{i(j-1)})$ and $\boldsymbol{\sigma^{2}_{\delta y_{ij}}}=\boldsymbol{\Lambda_{\eta}}_{d}\times\boldsymbol{\Psi}_{d}\times\boldsymbol{\Lambda_{\eta}}^{T}_{d}\times(t_{ij}-t_{i(j-1)})^2$. In the equations of means and variances of interval-specific slopes and interval-specific changes, $j=2, 3, \dots, J$.} \item[d] {The vector of growth factors of the growth status consists of growth factors associated with both the growth rates and the initial status, which together determine the growth status. 
The corresponding factor loadings are provided in the matrix representing the factor loadings of the growth status (where $j=1, 2, \dots, J$), in which $\boldsymbol{\Omega}_{i}=\begin{pmatrix}
0 & 0 & \cdots & \cdots & \cdots & 0 \\
t_{i2}-t_{i1} & 0 & 0 & \cdots & \cdots & 0 \\
t_{i2}-t_{i1} & t_{i3}-t_{i2} & 0 & 0 & \cdots & 0 \\
\cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\
t_{i2}-t_{i1} & t_{i3}-t_{i2} & t_{i4}-t_{i3} & \cdots & \cdots & t_{ij}-t_{i(j-1)} \\ \end{pmatrix}$ so that $\boldsymbol{\Omega}_{i}\times\boldsymbol{\Lambda}_{di}$ represents the accumulative value since the initial status of the corresponding factor loading of the growth rate. With $\boldsymbol{\Omega}_{i}$, we are able to derive the mean and variance of change from baseline: $\boldsymbol{\mu_{\Delta y_{ij}}}=\boldsymbol{\Omega}_{i}\times\boldsymbol{\Lambda}_{di}\times\boldsymbol{\mu_{\eta}}_{d}$ and $\boldsymbol{\sigma^2_{\Delta y_{ij}}}=\boldsymbol{\Omega}_{i}\times\boldsymbol{\Lambda_{\eta}}_{d}\times\boldsymbol{\Psi}_{d}\times\boldsymbol{\Lambda_{\eta}}^{T}_{d}\times\boldsymbol{\Omega}^{T}_{i}$. In the equations of means and variances of change from baseline, $j=2, 3, \dots, J$.} \item[e] {In the vector of growth factors and the corresponding factor loadings, $\mu_{b}$ and $\mu_{c}$ are the mean values of $b_{i}$ and of $c_{i}$ for the negative exponential function and Jenss-Bayley function, respectively. } \end{tablenotes} \end{threeparttable}} \end{table}
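The role of $\boldsymbol{\Omega}_{i}$ can be illustrated numerically. The sketch below (with hypothetical measurement times and relative rates) builds $\boldsymbol{\Omega}_{i}$ for the nonparametric function, where $\boldsymbol{\Lambda}_{di}$ collects the $\gamma$'s, and checks that $\boldsymbol{\Omega}_{i}\times\boldsymbol{\Lambda}_{di}\times\eta_{1i}$ accumulates into the change from baseline.

```python
import numpy as np

# Hypothetical individual measurement times and relative rates (gamma)
t = np.array([0.0, 1.0, 2.5, 3.0])          # t_{i1}, ..., t_{iJ}, J = 4
gamma = np.array([1.0, 0.6, 0.4])            # gamma_1, ..., gamma_{J-1}
J = len(t)

# Omega_i: row j accumulates the interval lengths up to time t_{ij}
dt = np.diff(t)                              # t_{ij} - t_{i(j-1)}
Omega = np.tril(np.tile(dt, (J - 1, 1)))     # rows j = 2, ..., J
Omega = np.vstack([np.zeros(J - 1), Omega])  # row for j = 1 is all zeros

# For the nonparametric function, Lambda_d collects the gamma's, and the
# change from baseline at wave j is eta1 * sum_k gamma_{k-1}*(t_k - t_{k-1})
eta1 = 2.0
delta = Omega @ gamma * eta1
assert np.isclose(delta[1], eta1 * gamma[0] * (t[1] - t[0]))
assert np.isclose(delta[-1], eta1 * np.sum(gamma * dt))
```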
\begin{table}[!htbp] \centering \footnotesize \resizebox{\columnwidth}{!}{\begin{threeparttable} \setlength\tabcolsep{2pt} \renewcommand{\arraystretch}{0.6} \caption{Model Specification for Four Possible Ways of Adding Time-varying Covariate with Individual Measurement Occasions} \begin{tabular}{p{5cm}p{10cm}} \hline \hline \multicolumn{2}{c}{\textbf{LGCM with a TVC and a TIC}} \\ \hline \multirow{3}{*}{\textbf{Model Specification}} & $\boldsymbol{y}_{i}=\boldsymbol{\Lambda}^{[y]}_{i}\times \boldsymbol{\eta}^{[y]}_{i}+\kappa\times\boldsymbol{x}_{i}+\boldsymbol{\epsilon}^{[y]}_{i}$ \\ & $\boldsymbol{\eta}^{[y]}_{i}=\boldsymbol{\alpha}^{[y]}+\boldsymbol{B}_{\text{TIC}}\times \boldsymbol{X}_{i}+ \boldsymbol{\zeta}^{[y]}_{i}$ \\ \hline \hline \multicolumn{2}{c}{\textbf{LGCM with a Decomposed TVC into Baseline and Interval-specific Slopes and a TIC}} \\ \hline \multirow{4}{*}{\textbf{Individual Function of TVC}} & $x_{ij}=x^{\ast}_{ij}+\epsilon^{[x]}_{ij}$ \\ & $x^{\ast}_{ij}=\begin{cases} \eta^{[x]}_{0i}, & \text{if $j=1$}\\ x^{\ast}_{i(j-1)}+dx_{ij}\times(t_{ij}-t_{i(j-1)}), & \text{if $j=2, \dots, J$} \end{cases}$ \\ & $dx_{ij}=\eta^{[x]}_{1i}\times\gamma_{j-1}\qquad (j=2, \dots, J)$ \\ \hline \multirow{6}{*}{\textbf{Model Specification}} & $\begin{pmatrix}\boldsymbol{x}_{i} \\ \boldsymbol{y}_{i} \end{pmatrix}=\begin{pmatrix} \boldsymbol{\Lambda}^{[x]}_{i} & \boldsymbol{0} \\ \boldsymbol{0} & \boldsymbol{\Lambda}^{[y]}_{i} \end{pmatrix}\times\begin{pmatrix} \boldsymbol{\eta}^{[x]}_{i} \\ \boldsymbol{\eta}^{[y]}_{i} \end{pmatrix}+\kappa\times\begin{pmatrix} \boldsymbol{0} \\ \boldsymbol{dx_{i}} \end{pmatrix}+\begin{pmatrix} \boldsymbol{\epsilon}^{[x]}_{i} \\ \boldsymbol{\epsilon}^{[y]}_{i} \end{pmatrix}$ \\ & $\boldsymbol{x}_{i}=\boldsymbol{\Lambda}^{[x]}_{i}\times\boldsymbol{\eta}^{[x]}_{i}+\boldsymbol{\epsilon}^{[x]}_{i}$ \\ & $\boldsymbol{\eta}^{[y]}_{i}=\boldsymbol{\alpha}^{[y]}+\begin{pmatrix}\boldsymbol{B}_{\text{TIC}} & 
\boldsymbol{\beta}_{\text{TVC}}\end{pmatrix}\times\begin{pmatrix}\boldsymbol{X}_{i} \\ \eta^{[x]}_{0i}\end{pmatrix} +\boldsymbol{\zeta}^{[y]}_{i}$ \\ & $\boldsymbol{dx_{i}}=\begin{pmatrix}0 & dx_{i2} & dx_{i3} & \dots & dx_{iJ}\end{pmatrix}^{T}$ \\ \hline \hline \multicolumn{2}{c}{\textbf{LGCM with a Decomposed TVC into Baseline and Interval-specific Changes and a TIC}} \\ \hline \multirow{5}{*}{\textbf{Individual Function of TVC}} & $x_{ij}=x^{\ast}_{ij}+\epsilon^{[x]}_{ij}$ \\ & $x^{\ast}_{ij}=\begin{cases} \eta^{[x]}_{0i}, & \text{if $j=1$}\\ x^{\ast}_{i(j-1)}+\delta x_{ij}, & \text{if $j=2, \dots, J$} \end{cases}$ \\ & $\delta x_{ij}=dx_{ij}\times(t_{ij}-t_{i(j-1)})\qquad (j=2, \dots, J)$ \\ & $dx_{ij}=\eta^{[x]}_{1i}\times\gamma_{j-1}\qquad (j=2, \dots, J)$ \\ \hline \multirow{6}{*}{\textbf{Model Specification}} & $\begin{pmatrix}\boldsymbol{x}_{i} \\ \boldsymbol{y}_{i} \end{pmatrix}=\begin{pmatrix} \boldsymbol{\Lambda}^{[x]}_{i} & \boldsymbol{0} \\ \boldsymbol{0} & \boldsymbol{\Lambda}^{[y]}_{i} \end{pmatrix}\times\begin{pmatrix} \boldsymbol{\eta}^{[x]}_{i} \\ \boldsymbol{\eta}^{[y]}_{i} \end{pmatrix}+\kappa\times\begin{pmatrix} \boldsymbol{0} \\ \boldsymbol{\delta x_{i}} \end{pmatrix}+\begin{pmatrix} \boldsymbol{\epsilon}^{[x]}_{i} \\ \boldsymbol{\epsilon}^{[y]}_{i} \end{pmatrix}$ \\ & $\boldsymbol{x}_{i}=\boldsymbol{\Lambda}^{[x]}_{i}\times\boldsymbol{\eta}^{[x]}_{i}+\boldsymbol{\epsilon}^{[x]}_{i}$ \\ & $\boldsymbol{\eta}^{[y]}_{i}=\boldsymbol{\alpha}^{[y]}+\begin{pmatrix}\boldsymbol{B}_{\text{TIC}} & \boldsymbol{\beta}_{\text{TVC}}\end{pmatrix}\times\begin{pmatrix}\boldsymbol{X}_{i} \\ \eta^{[x]}_{0i}\end{pmatrix} +\boldsymbol{\zeta}^{[y]}_{i}$ \\ & $\boldsymbol{\delta x_{i}}=\begin{pmatrix}0 & \delta x_{i2} & \delta x_{i3} & \dots & \delta x_{iJ}\end{pmatrix}^{T}$ \\ \hline \hline \multicolumn{2}{c}{\textbf{LGCM with a Decomposed TVC into Baseline and Change-from-baseline and a TIC}} \\ \hline \multirow{5}{*}{\textbf{Individual Function of TVC}} & 
$x_{ij}=x^{\ast}_{ij}+\epsilon^{[x]}_{ij}$ \\ & $x^{\ast}_{ij}=\begin{cases} \eta^{[x]}_{0i}, & \text{if $j=1$}\\ \eta^{[x]}_{0i}+\Delta x_{ij}, & \text{if $j=2, \dots, J$} \end{cases}$ \\ & $\Delta x_{ij}=\Delta x_{i(j-1)}+dx_{ij}\times(t_{ij}-t_{i(j-1)})$ \\ & $dx_{ij}=\eta^{[x]}_{1i}\times\gamma_{j-1}\qquad (j=2, \dots, J)$ \\ \hline \multirow{6}{*}{\textbf{Model Specification}} & $\begin{pmatrix}\boldsymbol{x}_{i} \\ \boldsymbol{y}_{i} \end{pmatrix}=\begin{pmatrix} \boldsymbol{\Lambda}^{[x]}_{i} & \boldsymbol{0} \\ \boldsymbol{0} & \boldsymbol{\Lambda}^{[y]}_{i} \end{pmatrix}\times\begin{pmatrix} \boldsymbol{\eta}^{[x]}_{i} \\ \boldsymbol{\eta}^{[y]}_{i} \end{pmatrix}+\kappa\times\begin{pmatrix} \boldsymbol{0} \\ \boldsymbol{\Delta x_{i}} \end{pmatrix}+\begin{pmatrix} \boldsymbol{\epsilon}^{[x]}_{i} \\ \boldsymbol{\epsilon}^{[y]}_{i} \end{pmatrix}$ \\ & $\boldsymbol{x}_{i}=\boldsymbol{\Lambda}^{[x]}_{i}\times\boldsymbol{\eta}^{[x]}_{i}+\boldsymbol{\epsilon}^{[x]}_{i}$ \\ & $\boldsymbol{\eta}^{[y]}_{i}=\boldsymbol{\alpha}^{[y]}+\begin{pmatrix}\boldsymbol{B}_{\text{TIC}} & \boldsymbol{\beta}_{\text{TVC}}\end{pmatrix}\times\begin{pmatrix}\boldsymbol{X}_{i} \\ \eta^{[x]}_{0i}\end{pmatrix} +\boldsymbol{\zeta}^{[y]}_{i}$ \\ & $\boldsymbol{\Delta x_{i}}=\begin{pmatrix}0 & \Delta x_{i2} & \Delta x_{i3} & \dots & \Delta x_{iJ}\end{pmatrix}^{T}$ \\ \hline \hline \end{tabular} \label{tbl:TVC_summary} \end{threeparttable}} \end{table}
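The three TVC decompositions in the table differ only in how the latent trajectory is parameterized. A short sketch (with hypothetical growth-factor scores and relative rates, not values from the paper) confirms that interval-specific slopes, interval-specific changes, and change-from-baseline all reconstruct the same latent TVC scores:

```python
import numpy as np

# Hypothetical TVC growth-factor scores and relative rates (gamma)
t = np.array([0.0, 1.0, 2.0, 3.5])   # individual measurement occasions
gamma = np.array([1.0, 0.7, 0.5])    # gamma_1, ..., gamma_{J-1}
eta0_x, eta1_x = 5.0, 1.2            # baseline and first-interval slope

dt = np.diff(t)
dx = eta1_x * gamma                  # interval-specific slopes, j = 2..J
delta_x = dx * dt                    # interval-specific changes
Delta_x = np.cumsum(delta_x)         # change from baseline

# All three decompositions reproduce the same latent TVC trajectory
x_from_slopes = eta0_x + np.concatenate([[0.0], np.cumsum(dx * dt)])
x_from_changes = eta0_x + np.concatenate([[0.0], np.cumsum(delta_x)])
x_from_baseline = eta0_x + np.concatenate([[0.0], Delta_x])
assert np.allclose(x_from_slopes, x_from_changes)
assert np.allclose(x_from_changes, x_from_baseline)
```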
\begin{table}[!htbp] \centering \footnotesize \resizebox{0.95\columnwidth}{!}{\begin{threeparttable} \setlength\tabcolsep{2pt} \renewcommand{\arraystretch}{0.6} \caption{Model Specification for Longitudinal Mediation Models with Individual Measurement Occasions} \begin{tabular}{p{3cm}p{3cm}p{12cm}} \hline \hline \multicolumn{3}{c}{\textbf{Baseline Covariate, Longitudinal Mediator, and Longitudinal Outcome}} \\ \hline \multirow{10}{*}{\textbf{Linear Function}} & \multirow{6}{*}{\textbf{Model Specification}} & $\begin{pmatrix} \boldsymbol{m}_{i} \\ \boldsymbol{y}_{i} \end{pmatrix}= \begin{pmatrix} \boldsymbol{\Lambda}_{i}^{[m]} & \boldsymbol{0} \\ \boldsymbol{0} & \boldsymbol{\Lambda}_{i}^{[y]} \end{pmatrix}\times \begin{pmatrix} \boldsymbol{\eta}^{[m]}_{i} \\ \boldsymbol{\eta}^{[y]}_{i} \end{pmatrix}+ \begin{pmatrix} \boldsymbol{\epsilon}^{[m]}_{i} \\ \boldsymbol{\epsilon}^{[y]}_{i} \end{pmatrix}$ \\ & & $\boldsymbol{\eta}^{[u]}_{i} = \begin{pmatrix} \eta^{[u]}_{0i} & \eta^{[u]}_{1i} \end{pmatrix}^{T}$ $(u=m,y)$ \\ & & $\boldsymbol{\Lambda}^{[u]}_{i} = \begin{pmatrix} 1 & t_{ij} \end{pmatrix}$ $(u=m,y; j=1,\cdots, J)$ \\ & & $\boldsymbol{\eta}^{[m]}_{i}=\boldsymbol{\alpha}^{[m]}+\boldsymbol{B}^{[x\rightarrow{m}]}\times x_{i}+\boldsymbol{\zeta}^{[m]}_{i}$ \\ & & $\boldsymbol{\eta}^{[y]}_{i}=\boldsymbol{\alpha}^{[y]}+\boldsymbol{B}^{[x\rightarrow{y}]}\times x_{i}+\boldsymbol{B}^{[m\rightarrow{y}]}\times\boldsymbol{\eta}^{[m]}_{i}+\boldsymbol{\zeta}^{[y]}_{i}$ \\ \cline{2-3} & \multirow{4}{*}{\textbf{Path Coef.}} & $\boldsymbol{B}^{[x\rightarrow{m}]}=\begin{pmatrix} \beta^{[x\rightarrow{m}]}_{0} & \beta^{[x\rightarrow{m}]}_{1} \end{pmatrix}^{T}$; $\boldsymbol{B}^{[x\rightarrow{y}]}=\begin{pmatrix} \beta^{[x\rightarrow{y}]}_{0} & \beta^{[x\rightarrow{y}]}_{1} \end{pmatrix}^{T}$ \\ & & $\boldsymbol{B}^{[m\rightarrow{y}]}=\begin{pmatrix} \beta^{[m\rightarrow{y}]}_{00} & 0 \\ \beta^{[m\rightarrow{y}]}_{01} & \beta^{[m\rightarrow{y}]}_{11} \\ \end{pmatrix}$ \\ \hline \hline 
\multirow{11}{*}{\textbf{Bilinear Function}} & \multirow{6}{*}{\textbf{Model Specification}} & $\begin{pmatrix} \boldsymbol{m}_{i} \\ \boldsymbol{y}_{i} \end{pmatrix}= \begin{pmatrix} \boldsymbol{\Lambda}_{i}^{[m]} & \boldsymbol{0} \\ \boldsymbol{0} & \boldsymbol{\Lambda}_{i}^{[y]} \end{pmatrix}\times \begin{pmatrix} \boldsymbol{\eta}^{[m]}_{i} \\ \boldsymbol{\eta}^{[y]}_{i} \end{pmatrix}+ \begin{pmatrix} \boldsymbol{\epsilon}^{[m]}_{i} \\ \boldsymbol{\epsilon}^{[y]}_{i} \end{pmatrix}$ \\ & & $\boldsymbol{\eta}^{[u]}_{i} = \begin{pmatrix} \eta^{[u]}_{1i} & \eta^{[u]}_{\gamma_{i}} & \eta^{[u]}_{2i} \end{pmatrix}^{T}$ $(u=m,y)$ \\ & & $\boldsymbol{\Lambda}^{[u]}_{i} = \begin{pmatrix} \min(0,t_{ij}-\gamma^{[u]}) & 1 & \max(0,t_{ij}-\gamma^{[u]}) \end{pmatrix}$ $(u=m,y; j=1,\cdots, J)$ \\ & & $\boldsymbol{\eta}^{[m]}_{i}=\boldsymbol{\alpha}^{[m]}+\boldsymbol{B}^{[x\rightarrow{m}]}\times x_{i}+\boldsymbol{\zeta}^{[m]}_{i}$ \\ & & $\boldsymbol{\eta}^{[y]}_{i}=\boldsymbol{\alpha}^{[y]}+\boldsymbol{B}^{[x\rightarrow{y}]}\times x_{i}+\boldsymbol{B}^{[m\rightarrow{y}]}\times\boldsymbol{\eta}^{[m]}_{i}+\boldsymbol{\zeta}^{[y]}_{i}$ \\ \cline{2-3} & \multirow{4}{*}{\textbf{Path Coef.}} & $\boldsymbol{B}^{[x\rightarrow{m}]}=\begin{pmatrix} \beta^{[x\rightarrow{m}]}_{1} & \beta^{[x\rightarrow{m}]}_{\gamma} & \beta^{[x\rightarrow{m}]}_{2} \end{pmatrix}^{T}$; $\boldsymbol{B}^{[x\rightarrow{y}]}=\begin{pmatrix} \beta^{[x\rightarrow{y}]}_{1} & \beta^{[x\rightarrow{y}]}_{\gamma} & \beta^{[x\rightarrow{y}]}_{2} \end{pmatrix}^{T}$ \\ & & $\boldsymbol{B}^{[m\rightarrow{y}]}=\begin{pmatrix} \beta^{[m\rightarrow{y}]}_{11} & 0 & 0 \\ \beta^{[m\rightarrow{y}]}_{1\gamma} & \beta^{[m\rightarrow{y}]}_{\gamma\gamma} & 0 \\ \beta^{[m\rightarrow{y}]}_{12} & \beta^{[m\rightarrow{y}]}_{\gamma2} & \beta^{[m\rightarrow{y}]}_{22} \\ \end{pmatrix}$ \\ \hline \hline \multicolumn{3}{c}{\textbf{Longitudinal Covariate, Longitudinal Mediator, and Longitudinal Outcome}} \\ \hline 
\multirow{17}{*}{\textbf{Linear Function}} & \multirow{8}{*}{\textbf{Model Specification}} & $\begin{pmatrix} \boldsymbol{x}_{i} \\ \boldsymbol{m}_{i} \\ \boldsymbol{y}_{i} \end{pmatrix}= \begin{pmatrix} \boldsymbol{\Lambda}_{i}^{[x]} & \boldsymbol{0} & \boldsymbol{0} \\ \boldsymbol{0} & \boldsymbol{\Lambda}_{i}^{[m]} & \boldsymbol{0} \\ \boldsymbol{0} & \boldsymbol{0} & \boldsymbol{\Lambda}_{i}^{[y]} \end{pmatrix}\times \begin{pmatrix} \boldsymbol{\eta}^{[x]}_{i} \\\boldsymbol{\eta}^{[m]}_{i} \\ \boldsymbol{\eta}^{[y]}_{i} \end{pmatrix}+ \begin{pmatrix} \boldsymbol{\epsilon}^{[x]}_{i} \\\boldsymbol{\epsilon}^{[m]}_{i} \\ \boldsymbol{\epsilon}^{[y]}_{i} \end{pmatrix}$ \\ & & $\boldsymbol{\eta}^{[u]}_{i} = \begin{pmatrix} \eta^{[u]}_{0i} & \eta^{[u]}_{1i} \end{pmatrix}^{T}$ $(u=x,m,y)$ \\ & & $\boldsymbol{\Lambda}^{[u]}_{i} = \begin{pmatrix} 1 & t_{ij} \end{pmatrix}$ $(u=x,m,y; j=1,\cdots, J)$ \\ & & $\boldsymbol{\eta}^{[x]}_{i}=\boldsymbol{\mu}^{[x]}_{\boldsymbol{\eta}}+\boldsymbol{\zeta}^{[x]}_{i}$ \\ & & $\boldsymbol{\eta}^{[m]}_{i}=\boldsymbol{\alpha}^{[m]}+\boldsymbol{B}^{[x\rightarrow{m}]}\times\boldsymbol{\eta}^{[x]}_{i}+\boldsymbol{\zeta}^{[m]}_{i}$ \\ & & $\boldsymbol{\eta}^{[y]}_{i}=\boldsymbol{\alpha}^{[y]}+\boldsymbol{B}^{[x\rightarrow{y}]}\times\boldsymbol{\eta}^{[x]}_{i}+\boldsymbol{B}^{[m\rightarrow{y}]}\times\boldsymbol{\eta}^{[m]}_{i}+\boldsymbol{\zeta}^{[y]}_{i}$ \\ \cline{2-3} & \multirow{4}{*}{\textbf{Path Coef.}} & $\boldsymbol{B}^{[x\rightarrow{m}]}=\begin{pmatrix} \beta^{[x\rightarrow{m}]}_{00} & 0 \\ \beta^{[x\rightarrow{m}]}_{01} & \beta^{[x\rightarrow{m}]}_{11} \end{pmatrix}$; $\boldsymbol{B}^{[x\rightarrow{y}]}=\begin{pmatrix} \beta^{[x\rightarrow{y}]}_{00} & 0 \\ \beta^{[x\rightarrow{y}]}_{01} & \beta^{[x\rightarrow{y}]}_{11} \\ \end{pmatrix}$ \\ & & $\boldsymbol{B}^{[m\rightarrow{y}]}=\begin{pmatrix} \beta^{[m\rightarrow{y}]}_{00} & 0 \\ \beta^{[m\rightarrow{y}]}_{01} & \beta^{[m\rightarrow{y}]}_{11} \end{pmatrix}$ \\ \hline \hline 
\multicolumn{3}{c}{\textbf{Longitudinal Covariate, Longitudinal Mediator, and Longitudinal Outcome}} \\ \hline \multirow{17}{*}{\textbf{Bilinear Function}} & \multirow{8}{*}{\textbf{Model Specification}} & $\begin{pmatrix} \boldsymbol{x}_{i} \\ \boldsymbol{m}_{i} \\ \boldsymbol{y}_{i} \end{pmatrix}= \begin{pmatrix} \boldsymbol{\Lambda}_{i}^{[x]} & \boldsymbol{0} & \boldsymbol{0} \\ \boldsymbol{0} & \boldsymbol{\Lambda}_{i}^{[m]} & \boldsymbol{0} \\ \boldsymbol{0} & \boldsymbol{0} & \boldsymbol{\Lambda}_{i}^{[y]} \end{pmatrix}\times \begin{pmatrix} \boldsymbol{\eta}^{[x]}_{i} \\\boldsymbol{\eta}^{[m]}_{i} \\ \boldsymbol{\eta}^{[y]}_{i} \end{pmatrix}+ \begin{pmatrix} \boldsymbol{\epsilon}^{[x]}_{i} \\\boldsymbol{\epsilon}^{[m]}_{i} \\ \boldsymbol{\epsilon}^{[y]}_{i} \end{pmatrix}$ \\ & & $\boldsymbol{\eta}^{[u]}_{i} = \begin{pmatrix} \eta^{[u]}_{1i} & \eta^{[u]}_{\gamma_{i}} & \eta^{[u]}_{2i} \end{pmatrix}^{T}$ $(u=x,m,y)$ \\ & & $\boldsymbol{\Lambda}^{[u]}_{i} = \begin{pmatrix} \min(0,t_{ij}-\gamma^{[u]}) & 1 & \max(0,t_{ij}-\gamma^{[u]}) \end{pmatrix}$ $(u=x,m,y; j=1,\cdots, J)$ \\ & & $\boldsymbol{\eta}^{[x]}_{i}=\boldsymbol{\mu}^{[x]}_{\boldsymbol{\eta}}+\boldsymbol{\zeta}^{[x]}_{i}$ \\ & & $\boldsymbol{\eta}^{[m]}_{i}=\boldsymbol{\alpha}^{[m]}+\boldsymbol{B}^{[x\rightarrow{m}]}\times\boldsymbol{\eta}^{[x]}_{i}+\boldsymbol{\zeta}^{[m]}_{i}$ \\ & & $\boldsymbol{\eta}^{[y]}_{i}=\boldsymbol{\alpha}^{[y]}+\boldsymbol{B}^{[x\rightarrow{y}]}\times\boldsymbol{\eta}^{[x]}_{i}+\boldsymbol{B}^{[m\rightarrow{y}]}\times\boldsymbol{\eta}^{[m]}_{i}+\boldsymbol{\zeta}^{[y]}_{i}$ \\ \cline{2-3} & \multirow{6}{*}{\textbf{Path Coef.}} & $\boldsymbol{B}^{[x\rightarrow{m}]}=\begin{pmatrix} \beta^{[x\rightarrow{m}]}_{11} & 0 & 0 \\ \beta^{[x\rightarrow{m}]}_{1\gamma} & \beta^{[x\rightarrow{m}]}_{\gamma\gamma} & 0 \\ \beta^{[x\rightarrow{m}]}_{12} & \beta^{[x\rightarrow{m}]}_{\gamma2} & \beta^{[x\rightarrow{m}]}_{22} \\ \end{pmatrix}$; 
$\boldsymbol{B}^{[x\rightarrow{y}]}=\begin{pmatrix} \beta^{[x\rightarrow{y}]}_{11} & 0 & 0 \\ \beta^{[x\rightarrow{y}]}_{1\gamma} & \beta^{[x\rightarrow{y}]}_{\gamma\gamma} & 0 \\ \beta^{[x\rightarrow{y}]}_{12} & \beta^{[x\rightarrow{y}]}_{\gamma2} & \beta^{[x\rightarrow{y}]}_{22} \\ \end{pmatrix}$ \\ & & $\boldsymbol{B}^{[m\rightarrow{y}]}=\begin{pmatrix} \beta^{[m\rightarrow{y}]}_{11} & 0 & 0 \\ \beta^{[m\rightarrow{y}]}_{1\gamma} & \beta^{[m\rightarrow{y}]}_{\gamma\gamma} & 0 \\ \beta^{[m\rightarrow{y}]}_{12} & \beta^{[m\rightarrow{y}]}_{\gamma2} & \beta^{[m\rightarrow{y}]}_{22} \end{pmatrix}$ \\ \hline \hline \end{tabular} \label{tbl:Med_summary} \end{threeparttable}} \end{table}
\end{document}
"id": "2302.03237.tex",
"language_detection_score": 0.6552684903144836,
"char_num_after_normalized": null,
"contain_at_least_two_stop_words": null,
"ellipsis_line_ratio": null,
"idx": null,
"lines_start_with_bullet_point_ratio": null,
"mean_length_of_alpha_words": null,
"non_alphabetical_char_ratio": null,
"path": null,
"symbols_to_words_ratio": null,
"uppercase_word_ratio": null,
"type": null,
"book_name": null,
"mimetype": null,
"page_index": null,
"page_path": null,
"page_title": null
} | arXiv/math_arXiv_v0.2.jsonl | null | null |
\begin{document}
\title{\LARGE \bf
Tight Linear Convergence Rate Bounds for\\Douglas-Rachford Splitting and ADMM} \thispagestyle{empty} \pagestyle{empty}
\begin{abstract}
Douglas-Rachford splitting and the alternating direction method of multipliers (ADMM) can be used to solve convex optimization problems that consist of a sum of two functions. Convergence rate estimates for these algorithms have received much attention lately. In particular, linear convergence rates have been shown by several authors under various assumptions. One such set of assumptions is strong convexity and smoothness of one of the functions in the minimization problem. The authors recently provided a linear convergence rate bound for such problems. In this paper, we show that this rate bound is tight for many algorithm parameter choices.
\end{abstract}
\section{Introduction}
Douglas-Rachford splitting is an optimization algorithm that can solve general convex composite optimization problems. The algorithm has its roots in the 1950's \cite{DouglasRachford,PeacemanRachford}. In the late 1970's, it was shown \cite{LionsMercier1979} how to use the algorithm to solve monotone operator inclusion problems and convex composite optimization problems. The alternating direction method of multipliers (ADMM) can also solve composite optimization problems. It was first presented in \cite{Glowinski1975,Gabay1976}. Soon thereafter, it was shown \cite{Gabay83} that ADMM is equivalent to Douglas-Rachford splitting applied to the dual problem.
General sublinear convergence rate estimates for these methods have just recently been presented in the literature, see \cite{DR_one_over_k_2012,even_lin_conv_2013,conv_split_schemes_Davis_2014}. Under various assumptions, linear convergence rates can also be established. In the paper by Lions and Mercier \cite{LionsMercier1979}, a linear convergence rate was provided for Douglas-Rachford splitting under (the equivalents of) strong convexity and smoothness assumptions. Until recently, further linear convergence rate results have been scarce. In the last couple of years, however, several linear convergence rate results for both Douglas-Rachford splitting and ADMM have been presented. These include \cite{linConvADMM,Davis_Yin_2014}, in which linear convergence rates for ADMM are presented under various assumptions. In \cite{lin_conv_DR_mult_block_2013}, linear convergence rates are established for multiple splitting ADMM. In \cite{Panos_acc_DR_2014}, it is shown that for a specific class of problems, the Douglas-Rachford algorithm can be interpreted as a gradient method applied to a function named the Douglas-Rachford envelope. By showing strong convexity and smoothness properties of the Douglas-Rachford envelope under similar assumptions on the underlying problem, a linear convergence rate is established based on gradient algorithm theory. Very recently, \cite{laurent_ADMM} appeared and showed linear convergence of ADMM under smoothness and strong convexity assumptions using the integral quadratic constraints (IQC) framework. The rate is obtained by solving a series of small semidefinite programs. Common to all these linear convergence rate bounds is that they are not tight for the class of problems under consideration, see \cite[Section~IV.B]{gisBoydTAC2014metric_select}.
In \cite{arvind_ADMM}, linear convergence of ADMM is established under more general assumptions than the above. However, the assumptions are more difficult to verify for a given problem. Tightness is verified for a 2-dimensional example in the Euclidean case. In \cite{GhadimiADMM}, linear convergence for ADMM on strongly convex quadratic optimization problems with inequality constraints is established. This rate improves on the rates presented in \cite{LionsMercier1979,linConvADMM,Davis_Yin_2014,lin_conv_DR_mult_block_2013,Panos_acc_DR_2014,laurent_ADMM}. In \cite{gisBoyd2014CDCprecondADMM}, the authors generalize, using completely different machinery, the results in \cite{GhadimiADMM}, and in \cite{gisBoydTAC2014metric_select} the results are further generalized. More specifically, \cite{gisBoydTAC2014metric_select} generalizes the results in \cite{GhadimiADMM} in the following three ways: (i) a wider class of problems is considered, (ii) rates for both Douglas-Rachford splitting and ADMM are provided, and (iii) the results in \cite{gisBoydTAC2014metric_select} hold for general real Hilbert spaces as opposed to only the Euclidean space in \cite{GhadimiADMM}. For the restricted class of problems considered in \cite{GhadimiADMM}, the convergence rate bounds in \cite{gisBoydTAC2014metric_select} and \cite{GhadimiADMM} coincide.
The contribution of this paper is that we show tightness of the convergence rate bounds presented in \cite{gisBoydTAC2014metric_select} for the class of problems under consideration and for many algorithm parameters. This is done by formulating examples, both for Douglas-Rachford splitting and ADMM, for which the linear convergence rate bounds are satisfied with equality. Similar lower convergence rate bounds have been presented in \cite{laurent_ADMM}. The bounds in this paper cover wider classes of problems and are less conservative.
\section{Notation}
We denote by $\mathbf{R}$ the set of real numbers and by $\mathbf{R}^n$ the set of real column vectors of length $n$. Further
$\overbar{\mathbf{R}}:=\mathbf{R}\cup\{\infty\}$ denotes the extended real line. Throughout this paper $\mathcal{H}$ denotes a real separable Hilbert space. Its inner product is denoted by $\langle\cdot,\cdot\rangle$, the induced norm by $\|\cdot\|$, and the identity operator by ${\rm{Id}}$. The indicator function for a set $\mathcal{X}$ is denoted by $\iota_{\mathcal{X}}$. Finally, the class of closed, proper, and convex functions $f~:~\mathcal{H}\to\overbar{\mathbf{R}}$ is denoted by $\Gamma_0(\mathcal{H})$.
\section{Preliminaries}
In this section we present well-known concepts, results, operators, and algorithms that will be used extensively in the paper. \begin{defin}[Orthonormal basis] An \emph{orthonormal basis} $\{\phi_i\}_{i=1}^K$ for a (separable) Hilbert space $\mathcal{H}$ is an orthogonal basis, i.e.
$\langle\phi_i,\phi_j\rangle=0$ if $i\neq j$, where each basis vector has unit length, i.e. $\|\phi_i\|=1$. \end{defin} From here on, $\phi_i$ will denote elements of an orthonormal basis. \begin{rem} The number of elements in the basis (the cardinality) $K$ is equal to the dimension of the corresponding Hilbert space, which might be $\infty$. Also, by definition of a basis, each element $x\in\mathcal{H}$ can be (uniquely) decomposed as $x=\sum_{i=1}^K \langle x,\phi_i\rangle\phi_i$, see \cite[Proposition~3.3.10]{Willem_fcn_analysis}. \end{rem}
The reason why we consider separable Hilbert spaces is the following proposition which can be found, e.g., in \cite[Proposition~3.3.12]{Willem_fcn_analysis}. \begin{prp} A Hilbert space is separable if and only if it has an orthonormal basis. \end{prp}
We will also make extensive use of the following two propositions that are proven, e.g., in \cite[Proposition~3.3.10]{Willem_fcn_analysis} and \cite[Proposition~3.3.14]{Willem_fcn_analysis}, respectively. \begin{prp}[Parseval's identity] In separable Hilbert spaces $\mathcal{H}$, the squared norm of each element $x\in\mathcal{H}$ satisfies \begin{align*}
\|x\|^2 = \sum_{i=1}^K|\langle x,\phi_i\rangle|^2. \end{align*} \end{prp} \begin{prp}[Riesz-Fischer] In separable Hilbert spaces $\mathcal{H}$, the sequence $\sum_{i=1}^\infty a_i\phi_i$ converges if and only if $\sum_{i=1}^\infty a_i^2<\infty$. Then \begin{align*}
\left\|\sum_{i=1}^K a_i\phi_i\right\|^2=\sum_{i=1}^K a_i^2. \end{align*} \end{prp}
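In finite dimensions these two propositions are easy to verify numerically. The following sketch (ours, not part of the paper) builds an orthonormal basis of $\mathbf{R}^5$ via a QR factorization and checks Parseval's identity together with the basis expansion:

```python
import numpy as np

rng = np.random.default_rng(0)

# Finite-dimensional illustration: an orthonormal basis of R^5 obtained
# from the QR factorization of a random matrix.
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
phi = Q.T                     # rows are the basis vectors phi_i

x = rng.standard_normal(5)
coeffs = phi @ x              # inner products <x, phi_i>

# Parseval's identity: ||x||^2 equals the sum of squared coefficients,
assert np.isclose(np.sum(coeffs**2), np.linalg.norm(x)**2)
# and x is recovered from its expansion sum_i <x, phi_i> phi_i.
assert np.allclose(coeffs @ phi, x)
```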
\begin{defin}[Strong convexity]
\label{def:strConv}
A function $f\in\Gamma_0(\mathcal{H})$ is $\sigma$-\emph{strongly
convex} if
\begin{align*}
f(x)\geq f(y)+\langle u,x-y\rangle+\tfrac{\sigma}{2}\|x-y\|^2
\end{align*}
holds for all $x,y\in\mathcal{H}$ and all $u\in\partial f(y)$. \end{defin} \begin{defin}[Smoothness]
A function $f\in\Gamma_0(\mathcal{H})$ is
$\beta$-smooth if it is differentiable and
\begin{align}
f(x)\leq f(y)+\langle \nabla f(y),x-y\rangle+\tfrac{\beta}{2}\|x-y\|^2
\label{eq:fSmooth_cvx}
\end{align}
holds for all $x,y\in\mathcal{H}$. \label{def:smoothness} \end{defin} \begin{defin}[Proximal operators]
The \emph{proximal operator} of a function
$f\in\Gamma_0(\mathcal{H})$ is defined as
\begin{align*}
{\rm{prox}}_{\gamma f}(y) :=
\argmin_x\left\{f(x)+\tfrac{1}{2\gamma}\|x-y\|^2\right\}.
\end{align*}
\label{def:proxOp} \end{defin} \begin{defin}[Reflected proximal operators]
The \emph{reflected proximal operator} to $f\in\Gamma_0(\mathcal{H})$ is defined as
\begin{align*}
R_{\gamma f} := 2{\rm{prox}}_{\gamma f}-{\rm{Id}}.
\end{align*}
\label{def:reflRes} \end{defin} \begin{defin}[Fixed-point] A point $y\in\mathcal{H}$ is a \emph{fixed-point} of the (single-valued) operator $A~:~\mathcal{H}\to\mathcal{H}$ if \begin{align*} y = Ay. \end{align*} The set of fixed-points of $A$ is denoted by ${\rm{fix}}A$. \end{defin} \begin{alg}[Generalized Douglas-Rachford splitting] The generalized Douglas-Rachford splitting algorithm is given by the iteration \begin{align} z^{k+1} = \left((1-\alpha){\rm{Id}}+\alpha R_{\gamma g}R_{\gamma f}\right)z^k \label{eq:DRsplitting} \end{align} where $\alpha\in(0,1)$ and $\gamma>0$ are algorithm parameters. \label{alg:DRsplitting} \end{alg} \begin{rem} In the general case, $\alpha$ is restricted to the interval $(0,1)$. Under the assumptions used in this paper, a larger $\alpha$ can be used as well, see \cite{gisBoydTAC2014metric_select}. \end{rem}
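As an illustration of Algorithm~\ref{alg:DRsplitting} (this example is ours, not taken from the cited works), consider minimizing the scalar problem $f(x)+g(x)$ with $f(x)=(x-1)^2$ and $g(x)=|x|$, whose proximal operators are available in closed form; the unique minimizer is $x^\star=0.5$.

```python
def prox_f(y, gamma, a=2.0):
    # prox of f(x) = (a/2)(x - 1)^2  (a-strongly convex and a-smooth)
    return (y + gamma * a) / (1.0 + gamma * a)

def prox_g(y, gamma):
    # prox of g(x) = |x|: soft thresholding
    return max(abs(y) - gamma, 0.0) * (1.0 if y > 0 else -1.0)

def reflect(prox, y, gamma):
    # reflected proximal operator: R = 2 prox - Id
    return 2.0 * prox(y, gamma) - y

# Generalized Douglas-Rachford iteration
gamma, alpha = 0.5, 0.9
z = 10.0
for _ in range(200):
    z = (1 - alpha) * z + alpha * reflect(prox_g, reflect(prox_f, z, gamma), gamma)

# The primal iterate is recovered as x = prox_{gamma f}(z);
# the minimizer of (x - 1)^2 + |x| is x* = 0.5
x = prox_f(z, gamma)
assert abs(x - 0.5) < 1e-8
```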
\section{Linear convergence rates}
In this section, we state the linear convergence rate results for Douglas-Rachford and ADMM in \cite{gisBoydTAC2014metric_select}. The paper \cite{gisBoydTAC2014metric_select} considers optimization problems of the form \begin{align} \begin{tabular}{ll} minimize & $f(x)+g(\mathcal{A}x)$ \end{tabular} \label{eq:prob} \end{align} where $x\in\mathcal{H}$, and $f$, $g$, and $\mathcal{A}$ satisfy the following assumptions: \begin{ass}~ \begin{enumerate}[(i)] \item The function $f\in\Gamma_0(\mathcal{H})$ is $\sigma$-strongly convex
and $\beta$-smooth. \item The function $g\in\Gamma_0(\mathcal{K})$. \item $\mathcal{A}~:~\mathcal{H}\to\mathcal{K}$ is a surjective bounded linear operator. \end{enumerate} \label{ass:prob} \end{ass} Under the additional assumption that $\mathcal{A} = {\rm{Id}}$ (which implies that $\mathcal{K}=\mathcal{H}$), Douglas-Rachford splitting can be applied to solve \eqref{eq:prob}. It enjoys a linear convergence rate, as shown in \cite[Theorem 1]{gisBoydTAC2014metric_select}. This result is restated here for convenience. \begin{thm}
Suppose that Assumption~\ref{ass:prob} holds and that $\mathcal{A}={\rm{Id}}$. Then
the generalized
Douglas-Rachford algorithm (Algorithm~\ref{alg:DRsplitting}) converges linearly
towards a fixed-point $\bar{z}\in{\rm{fix}}(R_{\gamma g}R_{\gamma f})$ with at
least rate
$|1-\alpha|+\alpha\max\left(\tfrac{\gamma\beta-1}{\gamma\beta+1},\tfrac{1-\gamma\sigma}{1+\gamma\sigma}\right)$, i.e.
\begin{align*}
\|z^{k+1}-\bar{z}\|\leq\left(|1-\alpha|+\alpha\max\left(\tfrac{\gamma\beta-1}{\gamma\beta+1},\tfrac{1-\gamma\sigma}{1+\gamma\sigma}\right)\right)^k\|z^{0}-\bar{z}\|
\end{align*} for any $\gamma > 0$ and $\alpha\in(0,\tfrac{2}{1+\max\left(\tfrac{1-\gamma\sigma}{1+\gamma\sigma},\tfrac{\gamma\beta-1}{1+\gamma\beta}\right)})$.
\label{thm:DR_lin_conv_subdiff} \end{thm} \begin{rem} The bound on the rate in Theorem~\ref{thm:DR_lin_conv_subdiff} can be optimized with respect to the algorithm parameters $\alpha$ and $\gamma$. The optimal parameters are given by $\alpha=1$ and $\gamma = \tfrac{1}{\sqrt{\beta\sigma}}$ which yields rate bound factor $\tfrac{\sqrt{\beta/\sigma}-1}{\sqrt{\beta/\sigma}+1}$, see \cite[Proposition 16]{gisBoydTAC2014metric_select}. \label{rem:opt_param} \end{rem}
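The optimality claim in Remark~\ref{rem:opt_param} can be checked numerically. The sketch below (an illustration only; the constants $\beta=10$, $\sigma=0.1$ and the grid are arbitrary choices made here) evaluates the rate factor of Theorem~\ref{thm:DR_lin_conv_subdiff} for $\alpha=1$ and confirms that $\gamma=1/\sqrt{\beta\sigma}$ minimizes it, with value $\tfrac{\sqrt{\beta/\sigma}-1}{\sqrt{\beta/\sigma}+1}$.

```python
import math

beta, sigma = 10.0, 0.1            # smoothness and strong convexity (example values)

def rate(gamma, alpha=1.0):
    # rate factor |1-alpha| + alpha * max((g*b-1)/(g*b+1), (1-g*s)/(1+g*s))
    m = max((gamma * beta - 1) / (gamma * beta + 1),
            (1 - gamma * sigma) / (1 + gamma * sigma))
    return abs(1 - alpha) + alpha * m

g_opt = 1.0 / math.sqrt(beta * sigma)
r_opt = (math.sqrt(beta / sigma) - 1) / (math.sqrt(beta / sigma) + 1)

# a coarse geometric grid around g_opt confirms that g_opt minimizes the bound
grid = [g_opt * 1.1 ** i for i in range(-40, 41)]
best = min(grid, key=rate)
```

At $\gamma=1/\sqrt{\beta\sigma}$ both arguments of the max coincide, which is exactly why this choice balances the two branches.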
In the case where $\mathcal{A}\neq{\rm{Id}}$, problem \eqref{eq:prob} can be solved by applying Douglas-Rachford splitting on the dual problem: \begin{align} \begin{tabular}{ll} minimize & $d(\mu)+g^*(\mu)$ \end{tabular} \label{eq:dual_prob} \end{align} where $g^*\in\Gamma_0(\mathcal{K})$, and $d\in\Gamma_0(\mathcal{K})$ is defined as \begin{align*} d := f^*\circ (-\mathcal{A}^*). \end{align*} If the dual problem \eqref{eq:dual_prob} satisfies Assumption~\ref{ass:prob} (with $d$ instead of $f$ and $g^*$ instead of $g$), Douglas-Rachford splitting can be applied to solve \eqref{eq:dual_prob}, and Theorem~\ref{thm:DR_lin_conv_subdiff} would guarantee a linear convergence rate. Since $g\in\Gamma_0(\mathcal{K})$, we have $g^*\in\Gamma_0(\mathcal{K})$ \cite[Theorem 12.2]{Rockafellar}, and we have $\mathcal{A}$ in Assumption~\ref{ass:prob}(iii) equal to ${\rm{Id}}$ in \eqref{eq:dual_prob}. The remaining assumption needed to apply Theorem~\ref{thm:DR_lin_conv_subdiff} is that $d\in\Gamma_0(\mathcal{K})$ is strongly convex and smooth. Indeed, this is the case as shown in \cite[Proposition 18]{gisBoydTAC2014metric_select}. This result is restated here for convenience of the reader. \begin{prp}
Suppose that Assumption~\ref{ass:prob} holds. Then
$d\in\Gamma_0(\mathcal{K})$ is $\tfrac{\|\mathcal{A}^*\|^2}{\sigma}$-smooth and
$\tfrac{\theta^2}{\beta}$-strongly convex, where $\theta>0$ always exists and satisfies
$\|\mathcal{A}^*\mu\|\geq \theta\|\mu\|$ for all
$\mu\in\mathcal{K}$. \label{prp:dual_fcn_prop_gen} \end{prp}
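As a quick numerical illustration of Proposition~\ref{prp:dual_fcn_prop_gen} (not from the paper; the diagonal operator and all constants below are example choices made here), take $\mathcal{H}=\mathbb{R}^2$ with $\mathcal{A}$ diagonal. For a diagonal quadratic $f$, the dual function $d(\mu)=f^*(-\mathcal{A}\mu)$ is again a diagonal quadratic with curvatures $\nu_i^2/\lambda_i$, which must lie between the proposition's strong-convexity and smoothness bounds.

```python
# example constants (chosen here, not from the paper)
sigma, beta = 0.5, 4.0               # strong convexity / smoothness of f
lams = (sigma, beta)                 # f(x) = sum lam_i/2 * x_i^2
nus = (1.0, 3.0)                     # A = diag(nus); theta = min(nus), ||A*|| = max(nus)

# for this diagonal case d(mu) = f*(-A mu) = sum nu_i^2/(2*lam_i) * mu_i^2,
# so the curvatures of d are nu_i^2 / lam_i
curv = [n * n / l for n, l in zip(nus, lams)]

smooth_bound = max(nus) ** 2 / sigma  # ||A*||^2 / sigma from the proposition
strong_bound = min(nus) ** 2 / beta   # theta^2 / beta from the proposition
```

The bounds need not be attained for a generic pairing of $\nu_i$ and $\lambda_i$; the tightness construction later in the paper pairs them so that they are.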
It is well known \cite{Gabay83} that Douglas-Rachford splitting applied to the dual problem \eqref{eq:dual_prob} is equivalent to ADMM applied to the primal problem \eqref{eq:prob}. Therefore, the linear convergence rate obtained by applying Douglas-Rachford splitting to the dual problem \eqref{eq:dual_prob} directly translates to a linear convergence rate for ADMM. This linear convergence rate bound is stated in \cite[Corollary 2]{gisBoydTAC2014metric_select}, and restated here for convenience.
\begin{prp}
Suppose that Assumption~\ref{ass:prob}
holds and that generalized Douglas-Rachford is applied to solve the
dual problem \eqref{eq:dual_prob}. Then the Douglas-Rachford
splitting algorithm converges linearly
towards a fixed-point $\bar{z}\in{\rm{fix}}(R_{\gamma g^*}R_{\gamma d})$ with at
least rate
$|1-\alpha|+\alpha\max\left(\tfrac{\gamma\hat{\beta}-1}{\gamma\hat{\beta}+1},\tfrac{1-\gamma\hat{\sigma}}{1+\gamma\hat{\sigma}}\right)$, i.e.
\begin{align*}
\|z^{k+1}-\bar{z}\|\leq\left(|1-\alpha|+\alpha\max\left(\tfrac{\gamma\hat{\beta}-1}{\gamma\hat{\beta}+1},\tfrac{1-\gamma\hat{\sigma}}{1+\gamma\hat{\sigma}}\right)\right)^k\|z^{0}-\bar{z}\|
\end{align*} for any $\gamma > 0$ and
$\alpha\in(0,\tfrac{2}{1+\max\left(\tfrac{1-\gamma\hat{\sigma}}{1+\gamma\hat{\sigma}},\tfrac{\gamma\hat{\beta}-1}{1+\gamma\hat{\beta}}\right)})$, where $\hat{\beta} = \tfrac{\|\mathcal{A}^*\|^2}{\sigma}$ and $\hat{\sigma} = \tfrac{\theta^2}{\beta}$. \label{prp:ADMM_lin_conv_subdiff} \end{prp} \begin{rem} The parameters that optimize
the convergence rate bound are $\alpha=1$ and $\gamma =
\tfrac{1}{\sqrt{\hat{\beta}\hat{\sigma}}}=\tfrac{\sqrt{\beta\sigma}}{\sqrt{\|\mathcal{A}^*\|^2\theta^2}}$ and the
linear convergence rate bound factor is
$\tfrac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}$, where $\kappa = \tfrac{\hat{\beta}}{\hat{\sigma}}=
\tfrac{\|\mathcal{A}^*\|^2\beta}{\theta^2\sigma}$, see \cite[Corollary 2]{gisBoydTAC2014metric_select}. \end{rem}
\section{Tightness of rate bounds}
In this section, we will state examples that show tightness of the linear convergence rate bounds in Theorem~\ref{thm:DR_lin_conv_subdiff} and Proposition~\ref{prp:ADMM_lin_conv_subdiff} for many choices of algorithm parameters.
\subsection{Primal Douglas-Rachford splitting}
To establish that the convergence rate bound provided in \cite[Theorem 1]{gisBoydTAC2014metric_select} and restated in Theorem~\ref{thm:DR_lin_conv_subdiff} is tight, we consider a problem of the form \eqref{eq:prob} with \begin{align} \label{eq:f_def} f(x) &=\sum_{i=1}^K\tfrac{\lambda_i}{2}\langle x,\phi_i\rangle^2,\\ \label{eq:g_def} g(x) &=0,\\ \label{eq:A_def} \mathcal{A}&={\rm{Id}}. \end{align} Here $\{\phi_i\}_{i=1}^K$ is an orthonormal basis for $\mathcal{H}$, $K$ is the dimension of the space $\mathcal{H}$ (possibly infinite), and $\lambda_i$ is either $\sigma$ or $\beta$. We denote the set of indices $i$ with $\lambda_i=\sigma$ by $\mathcal{I}_{\sigma}$ and the set of indices $i$ with $\lambda_i=\beta$ by $\mathcal{I}_{\beta}$. We require that $\mathcal{I}_{\sigma}\neq\emptyset$ and $\mathcal{I}_{\beta}\neq\emptyset$; by construction, $\mathcal{I}_{\sigma}\cap\mathcal{I}_{\beta}=\emptyset$ and $\mathcal{I}_{\sigma}\cup\mathcal{I}_{\beta} = \{1,\ldots,K\}$.
First, we show that $f$ in \eqref{eq:f_def} is defined (finite) for all $x\in\mathcal{H}$, even if $\mathcal{H}$ is infinite dimensional. Obviously $f(x)\geq 0$ for all $x\in\mathcal{H}$. We also have for arbitrary $x\in\mathcal{H}$ that \begin{align*}
f(x) = \sum_{i=1}^K\frac{\lambda_i}{2}\langle x,\phi_i\rangle^2 \leq \frac{\beta}{2}\sum_{i=1}^K\langle x,\phi_i\rangle^2=\frac{\beta}{2}\|x\|^2<\infty \end{align*} where the last equality follows from Parseval's identity. Therefore, the optimization problem \eqref{eq:prob} with $f$, $g$, and $\mathcal{A}$ as in \eqref{eq:f_def}, \eqref{eq:g_def}, and \eqref{eq:A_def} respectively is well defined also on infinite dimensional spaces.
Next, we show that $f\in\Gamma_0(\mathcal{H})$ satisfies Assumption~\ref{ass:prob}(i), i.e., that $f$ is $\beta$-smooth and $\sigma$-strongly convex. \begin{prp} The function $f$, as defined in \eqref{eq:f_def} with $\lambda_i=\sigma$ for $i\in\mathcal{I}_{\sigma}$ and $\lambda_i=\beta$ for $i\in\mathcal{I}_{\beta}$, is $\sigma$-strongly convex and $\beta$-smooth. \end{prp} \begin{pf} Since $\mathcal{H}$ has an orthonormal basis, each element $x\in\mathcal{H}$ may be decomposed as $x=\sum_{i=1}^K\langle x,\phi_i\rangle\phi_i$. For arbitrary $x,y\in\mathcal{H}$, we let $a_i = \langle x,\phi_i\rangle$ and $b_i = \langle y,\phi_i\rangle$, so that $x=\sum_{i=1}^K a_i\phi_i$ and $y=\sum_{i=1}^K b_i\phi_i$. Then \begin{align*}
&\frac{\beta}{2}\|x-y\|^2 = \frac{\beta}{2}\|\sum_{i=1}^K a_i\phi_i-\sum_{i=1}^K b_i\phi_i\|^2\\ &=\sum_{i=1}^K \frac{\beta}{2}( a_i- b_i)^2\geq \sum_{i=1}^K\frac{\lambda_i}{2}( a_i- b_i)^2\\ &=\sum_{i=1}^K\lambda_i\left(\frac{1}{2} a_i^2 -\frac{1}{2} b_i^2 -\langle
b_i\phi_i, a_i\phi_i- b_i\phi_i\rangle\right)\\ &=f(x)-f(y)-\sum_{i=1}^K\langle \lambda_i\langle y,\phi_i\rangle \phi_i,( a_i- b_i)\phi_i\rangle\\ &=f(x)-f(y)-\langle \sum_{i=1}^K\lambda_i\langle y,\phi_i\rangle \phi_i,\sum_{i=1}^K( a_i- b_i)\phi_i\rangle\\ &=f(x)-f(y)-\langle \nabla f(y),x-y\rangle \end{align*} where the second equality follows from Riesz-Fischer, the first inequality holds since $\beta\geq\lambda_i$ for all $i=1,\ldots,K$, the third equality follows by expanding the square and noting that $ a_i b_i=\langle a_i\phi_i, b_i\phi_i\rangle$ and $ b_i^2=\langle b_i\phi_i, b_i\phi_i\rangle$, the fourth equality follows by identifying the definition of $f$ in \eqref{eq:f_def} and using $ a_i=\langle x,\phi_i\rangle$ and $ b_i=\langle y,\phi_i\rangle$, the fifth equality holds since the added cross-terms vanish in the inner product expression due to orthogonality of basis vectors $\phi_i$, and the final equality holds by identifying $x=\sum_{i=1}^K
a_i\phi_i$, $y=\sum_{i=1}^K
b_i\phi_i$, and the gradient of $f$ in \eqref{eq:f_def}: \begin{align*} \nabla f(x) = \sum_i \lambda_i\langle x,\phi_i\rangle\phi_i. \end{align*} This is the definition of $\beta$-smoothness in Definition~\ref{def:smoothness}.
An equivalent derivation using $\sigma$ instead of $\beta$ and a reversed inequality, shows that $f$ is also $\sigma$-strongly convex. \end{pf}
To show that the provided example converges exactly with the rate given in Theorem~\ref{thm:DR_lin_conv_subdiff}, we need expressions for the proximal operators and reflected proximal operators of $f$ and $g$ in \eqref{eq:f_def} and \eqref{eq:g_def} respectively. \begin{prp} The proximal operator of $f$ in \eqref{eq:f_def} is \begin{align} &{\rm{prox}}_{\gamma
f}(y)=\sum_{i=1}^K\tfrac{1}{1+\gamma\lambda_i}\langle y,\phi_i\rangle\phi_i \label{eq:f_prox} \end{align} and the reflected proximal operator is \begin{align} R_{\gamma f}(y) &=\sum_{i=1}^K\tfrac{1-\gamma\lambda_i}{1+\gamma\lambda_i}\langle y,\phi_i\rangle\phi_i. \label{eq:f_refl_prox} \end{align}
\label{prp:f_refl_prox} \end{prp} \begin{pf} We decompose $x = \sum_{i=1}^K a_i\phi_i$ where $ a_i=\langle x,\phi_i\rangle$ and $y=\sum_{i=1}^K b_i\phi_i$ where $ b_i=\langle y,\phi_i\rangle$. Then, for general $\gamma>0$, the proximal operator of $f$ is given by: \begin{align*} &{\rm{prox}}_{\gamma f}(y)=\arg\min_x\left\{\gamma \left(\sum_{i=1}^K\tfrac{\lambda_i}{2}\langle
\phi_i,x\rangle^2\right)+\tfrac{1}{2}\|x-y\|^2\right\}\\ &=\arg\min_{x=\sum_{i=1}^K a_i\phi_i}\left\{
\left(\sum_{i=1}^K\tfrac{\gamma\lambda_i}{2} a_i^2\right)+\frac{1}{2}\left\|\sum_{i=1}^K(a_i-b_i)\phi_i\right\|^2\right\}\\ &=\arg\min_{x=\sum_{i=1}^K a_i\phi_i}\left\{
\frac{1}{2}\sum_{i=1}^K\left(\gamma\lambda_i a_i^2+\left( a_i- b_i\right)^2\right)\right\}\\ &=\sum_{i=1}^K\arg\min_{ a_i}\tfrac{1}{2}\left\{
\gamma\lambda_i a_i^2+( a_i- b_i)^2\right\}\phi_i\\ &=\sum_{i=1}^K\tfrac{1}{1+\gamma\lambda_i} b_i\phi_i=\sum_{i=1}^K\tfrac{1}{1+\gamma\lambda_i}\langle y,\phi_i\rangle\phi_i. \end{align*} The reflected resolvent for general $\gamma>0$ is given by: \begin{align*} R_{\gamma f}(y) &= 2{\rm{prox}}_{\gamma f}(y)-y\\ &=2\sum_{i=1}^K\tfrac{1}{1+\gamma\lambda_i} b_i\phi_i-\sum_{i=1}^K b_i\phi_i\\ &=\sum_{i=1}^K\tfrac{1-\gamma\lambda_i}{1+\gamma\lambda_i} b_i\phi_i=\sum_{i=1}^K\tfrac{1-\gamma\lambda_i}{1+\gamma\lambda_i}\langle y,\phi_i\rangle\phi_i. \end{align*}
\end{pf}
The proximal and reflected proximal operators of $g\equiv 0$ are trivially given by ${\rm{prox}}_{\gamma g} = R_{\gamma g} = {\rm{Id}}$.
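The closed-form expression \eqref{eq:f_prox} can be sanity-checked numerically. The sketch below is illustrative only (the step size, iteration count, and constants are ad hoc choices made here): it minimizes $\gamma f(x)+\tfrac{1}{2}\|x-y\|^2$ by gradient descent in $\mathbb{R}^2$ with the standard basis and compares the result with the formula.

```python
gamma, lams = 0.7, (0.5, 4.0)   # gamma and the two eigenvalues (sigma, beta)
y = (1.3, -2.1)

# closed form: prox_{gamma f}(y)_i = y_i / (1 + gamma * lambda_i)
prox_closed = tuple(yi / (1 + gamma * li) for yi, li in zip(y, lams))

# brute force: gradient descent on gamma*f(x) + 0.5*||x - y||^2
x = list(y)
for _ in range(5000):
    for i in range(2):
        grad = gamma * lams[i] * x[i] + (x[i] - y[i])
        x[i] -= 0.1 * grad
```

Since the objective is strongly convex, gradient descent converges to its unique minimizer, which agrees with the closed form coordinate by coordinate.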
Next, these results are used to show a lower bound on the convergence rate of Douglas-Rachford splitting for several choices of algorithm parameters $\alpha$ and $\gamma$. First, we state two help lemmas.
\begin{lem} The function $\psi(x)=\tfrac{1-x}{1+x}$ is a decreasing function for $x>-1$. \label{lem:psi_decreasing} \end{lem} \begin{pf} For $x,y>-1$ we have \begin{align*} &&(1-x)/(1+x)&< (1-y)/(1+y) \\ &\Leftrightarrow& (1-x)(1+y)&< (1-y)(1+x) \\ &\Leftrightarrow& 2y&< 2x. \end{align*} \end{pf} \begin{lem} For $x,y> -1$, the function $\psi(x)=\tfrac{1-x}{1+x}$ satisfies $\psi(x)\leq -\psi(y)$ if and only if $xy\geq 1$. \label{lem:psi_reciprocal} \end{lem} \begin{pf} We have \begin{align*} &&\psi(x) = (1-x)/(1+x) &\leq (y-1)/(1+y) = -\psi(y)\\ &\Leftrightarrow& (1-x)(1+y) &\leq (y-1)(1+x)\\ &\Leftrightarrow& 2 &\leq 2xy. \end{align*} \end{pf}
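Both help lemmas are easy to verify numerically. The following sketch (illustrative only; the grids are arbitrary choices made here) checks monotonicity of $\psi$ on a grid in $(-1,\infty)$, and the reciprocal condition, written equivalently as $xy\geq 1$, for positive arguments away from the boundary case $xy=1$.

```python
import itertools

def psi(x):
    return (1 - x) / (1 + x)

xs = [i / 10 for i in range(-9, 50)]      # grid inside (-1, 5)

# Lemma (monotonicity): psi is strictly decreasing on (-1, infinity)
decreasing = all(psi(a) > psi(b) for a, b in zip(xs, xs[1:]))

# Lemma (reciprocal): psi(x) <= -psi(y) iff x*y >= 1, checked away from x*y = 1
pos = [x for x in xs if x > 0]
reciprocal = all(
    (psi(x) <= -psi(y)) == (x * y >= 1)
    for x, y in itertools.product(pos, pos)
    if abs(x * y - 1) > 1e-9
)
```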
\begin{thm} The generalized Douglas-Rachford splitting algorithm (Algorithm~\ref{alg:DRsplitting}) when applied to solve \eqref{eq:prob} with $f$, $g$, and $\mathcal{A}$ in \eqref{eq:f_def}-\eqref{eq:A_def} converges exactly with rate \begin{align}
|1-\alpha|+\alpha\max\left(\tfrac{1-\gamma\sigma}{1+\gamma\sigma},\tfrac{\gamma
\beta-1}{1+\gamma\beta}\right) \label{eq:DRrate_tight} \end{align} in the following cases: (i) $\alpha = 1$ and $\gamma\in(0,\infty)$, (ii) $\alpha\in(0,1]$ and $\gamma\in(0, \tfrac{1}{\sqrt{\sigma \beta}}]$, (iii) $\alpha \in [1,\tfrac{2}{1+\max\left(\tfrac{1-\gamma\sigma}{1+\gamma\sigma},\tfrac{\gamma
\beta-1}{1+\gamma \beta}\right)})$ and $\gamma
\in[\tfrac{1}{\sqrt{\sigma \beta}},\infty)$, (iv) $\alpha\in(0,\tfrac{2}{1+\max\left(\tfrac{1-\gamma\sigma}{1+\gamma\sigma},\tfrac{\gamma
\beta-1}{1+\gamma \beta}\right)})$ and $\gamma = \tfrac{1}{\sqrt{\beta\sigma}}$. \label{thm:tight_bound} \end{thm} \begin{pf} For algorithm initial condition $z^0=\phi_i$ the Douglas-Rachford algorithm evolves according to \begin{align*} z^{k} = \left(1-\alpha+\alpha\tfrac{1-\gamma\lambda_i}{1+\gamma\lambda_i}\right)^k\phi_i \end{align*} where $\lambda_i$ is either $\sigma$ or $\beta$ depending on whether $i\in\mathcal{I}_{\sigma}$ or $i\in\mathcal{I}_{\beta}$. This follows immediately from Algorithm~\ref{alg:DRsplitting}, the expression of $R_{\gamma f}$ in Proposition~\ref{prp:f_refl_prox}, and since $R_{\gamma g}={\rm{Id}}$. Obviously, this converges with rate factor \begin{align*}
\left|1-\alpha+\alpha\tfrac{1-\gamma\lambda_i}{1+\gamma\lambda_i}\right|. \end{align*} Below, we show for each of the four cases that this rate coincides with the rate \eqref{eq:DRrate_tight}. \subsection*{Case (i): $\alpha = 1$ {\rm{and}} $\gamma\in(0,\infty)$} The rate in this case when $z^0 = \phi_i$, $i\in\mathcal{I}_\sigma$, is exactly
$\left|\tfrac{1-\gamma\sigma}{1+\gamma\sigma}\right|$. The rate when $z^0 = \phi_i$, $i\in\mathcal{I}_\beta$, is exactly
$\left|\tfrac{1-\gamma \beta}{1+\gamma \beta}\right|$. A lower bound on the convergence rate of the algorithm when $\alpha = 1$ is therefore \begin{align*}
\max\left(\left|\tfrac{1-\gamma\sigma}{1+\gamma\sigma}\right|,\left|\tfrac{1-\gamma
\beta}{1+\gamma \beta}\right|\right)&=\max\left(\tfrac{1-\gamma\sigma}{1+\gamma\sigma},\tfrac{\gamma
\beta-1}{1+\gamma \beta}\right)\\
&=|1-\alpha|+\alpha\max\left(\tfrac{1-\gamma\sigma}{1+\gamma\sigma},\tfrac{\gamma
\beta-1}{1+\gamma \beta}\right). \end{align*} where the first equality is due to Lemma~\ref{lem:psi_decreasing}, and the second holds since $\alpha=1$. This proves the first claim.
\subsection*{Case (ii): $\alpha\in(0,1]$ {\rm{and}} $\gamma\in(0,\tfrac{1}{\sqrt{\sigma
\beta}}]$} The rate when using initial condition $z^0 = \phi_i$, $i\in\mathcal{I}_\sigma$, is $r_{\sigma}:=1-\alpha+\alpha\tfrac{1-\gamma\sigma}{1+\gamma\sigma}$ (since $(1-\alpha)\geq 0$ and $\alpha\tfrac{1-\gamma\sigma}{1+\gamma\sigma}\geq 0$). For $z^0 = \phi_i$, $i\in\mathcal{I}_\beta$, and $\gamma \leq \tfrac{1}{\beta}$, we get \begin{align*}
|1-\alpha+\alpha\tfrac{1-\gamma \beta}{1+\gamma \beta}|&\leq
|1-\alpha|+|\alpha\tfrac{1-\gamma \beta}{1+\gamma \beta}|\\&= 1-\alpha+\alpha\tfrac{1-\gamma \beta}{1+\gamma \beta}\\ &\leq 1-\alpha+\alpha\tfrac{1-\gamma\sigma}{1+\gamma\sigma}=r_\sigma \end{align*} where the last inequality holds due to Lemma~\ref{lem:psi_decreasing}. For $z^0 = \phi_i$, $i\in\mathcal{I}_\beta$, and $\gamma\in[\tfrac{1}{\beta},\tfrac{1}{\sqrt{\sigma \beta}}]$, we get \begin{align*}
|1-\alpha+\alpha\tfrac{1-\gamma \beta}{1+\gamma \beta}|&\leq
|1-\alpha|+|\alpha\tfrac{1-\gamma \beta}{1+\gamma \beta}|\\&= 1-\alpha+\alpha\tfrac{\gamma \beta-1}{1+\gamma \beta} \\ &\leq 1-\alpha+\alpha\tfrac{1-\gamma\sigma}{1+\gamma\sigma} = r_\sigma \end{align*} where the last inequality follows from Lemma~\ref{lem:psi_reciprocal}. Thus, a lower bound on the rate for $\alpha \in(0,1]$ and $\gamma \in(0,\tfrac{1}{\sqrt{\sigma \beta}}]$ is \begin{align*} r_\sigma=1-\alpha+\alpha\tfrac{1-\gamma\sigma}{1+\gamma\sigma}=
|1-\alpha|+\alpha\max\left(\tfrac{1-\gamma\sigma}{1+\gamma\sigma},\tfrac{\gamma
\beta-1}{1+\gamma \beta}\right). \end{align*} This proves the second claim.
\subsection*{Case (iii): $\alpha\in[1,\tfrac{2}{1+\max\left(\tfrac{1-\gamma\sigma}{1+\gamma\sigma},\tfrac{\gamma
\beta-1}{1+\gamma \beta}\right)})$ {\rm{and}} $\gamma\in[\tfrac{1}{\sqrt{\sigma
\beta}},\infty)$} The rate when using $z^0 = \phi_i$, $i\in\mathcal{I}_\beta$, is $r_\beta:=\alpha-1+\alpha\tfrac{\gamma \beta-1}{1+\gamma \beta}$ (since $(1-\alpha)\leq 0$ and $\alpha\tfrac{1-\gamma \beta}{1+\gamma \beta}\leq 0$). For $z^0 = \phi_i$, $i\in\mathcal{I}_\sigma$, and $\gamma\in[\tfrac{1}{\sqrt{\sigma \beta}},\tfrac{1}{\sigma}]$ the rate is \begin{align*}
|1-\alpha+\alpha\tfrac{1-\gamma\sigma}{1+\gamma\sigma}|&\leq
|1-\alpha|+|\alpha\tfrac{1-\gamma \sigma}{1+\gamma \sigma}|\\&= \alpha-1+\alpha\tfrac{1-\gamma \sigma}{1+\gamma \sigma}\\ &\leq \alpha-1+\alpha\tfrac{\gamma \beta-1}{1+\gamma \beta}=r_\beta \end{align*} where the last inequality follows from Lemma~\ref{lem:psi_reciprocal}. For $z^0 = \phi_i$, $i\in\mathcal{I}_\sigma$, and $\gamma \geq \tfrac{1}{\sigma}$, we get \begin{align*}
|1-\alpha+\alpha\tfrac{1-\gamma \sigma}{1+\gamma \sigma}|&\leq
|1-\alpha|+|\alpha\tfrac{1-\gamma \sigma}{1+\gamma \sigma}|\\&= \alpha-1+\alpha\tfrac{\gamma \sigma-1}{1+\gamma \sigma}\\ &\leq \alpha-1+\alpha\tfrac{\gamma \beta-1}{1+\gamma \beta}=r_\beta \end{align*} where the last inequality is due to Lemma~\ref{lem:psi_decreasing}. This implies that a lower bound on the rate for $\alpha \in[1,\tfrac{2}{1+\max\left(\tfrac{1-\gamma\sigma}{1+\gamma\sigma},\tfrac{\gamma
\beta-1}{1+\gamma \beta}\right)})$ and $\gamma \in [\tfrac{1}{\sqrt{\sigma \beta}},\infty)$ is \begin{align*} r_\beta=\alpha-1+\alpha\tfrac{\gamma \beta-1}{1+\gamma \beta}=
|1-\alpha|+\alpha\max\left(\tfrac{1-\gamma\sigma}{1+\gamma\sigma},\tfrac{\gamma
\beta-1}{1+\gamma \beta}\right). \end{align*}
\subsection*{Case (iv): $\alpha\in(0,\tfrac{2}{1+\max\left(\tfrac{1-\gamma\sigma}{1+\gamma\sigma},\tfrac{\gamma
\beta-1}{1+\gamma\beta}\right)})$ {\rm{and}} $\gamma=\tfrac{1}{\sqrt{\sigma
\beta}}$} This case follows directly from Cases (ii) and (iii). \end{pf}
The convergence rate for the example given by $f$ and $g$ in \eqref{eq:f_def} and \eqref{eq:g_def} respectively coincides with the upper bound on the convergence rate in \cite[Theorem 1]{gisBoydTAC2014metric_select} (which is restated in Theorem~\ref{thm:DR_lin_conv_subdiff}). The bound in \cite[Theorem 1]{gisBoydTAC2014metric_select} is therefore tight for the class of problems under consideration and for the combination of algorithm parameters specified in Theorem~\ref{thm:tight_bound}.
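The exactness argument in the proof of Theorem~\ref{thm:tight_bound} is easy to reproduce numerically. In the sketch below (illustrative only; the constants are arbitrary choices made here), $g=0$ gives $R_{\gamma g}={\rm{Id}}$, so on the coordinate $\langle z,\phi_i\rangle$ the iteration is scalar and contracts by exactly $\bigl|1-\alpha+\alpha\tfrac{1-\gamma\lambda_i}{1+\gamma\lambda_i}\bigr|$ per step.

```python
sigma, beta, gamma, alpha = 0.5, 8.0, 0.25, 1.0

def factor(lam):
    # per-step multiplier on the coordinate of phi_i when lambda_i = lam
    return 1 - alpha + alpha * (1 - gamma * lam) / (1 + gamma * lam)

errors_match = True
for lam in (sigma, beta):
    z = 1.0                                  # z^0 = phi_i
    for k in range(1, 6):
        z = (1 - alpha) * z + alpha * (1 - gamma * lam) / (1 + gamma * lam) * z
        # the decay is exactly factor(lam)^k: the bound is attained, not just an upper bound
        errors_match &= abs(abs(z) - abs(factor(lam)) ** k) < 1e-12

# the observed worst-case rate equals the bound of Theorem 1 in the alpha = 1 case
rate_observed = max(abs(factor(sigma)), abs(factor(beta)))
```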
\begin{rem} The upper bound on the rate in \cite[Theorem 1]{gisBoydTAC2014metric_select}, relies on the triangle inequality between $(1-\alpha)(z^k-\bar{z})$ and $\alpha (R_{\gamma g}R_{\gamma
f}z^k-R_{\gamma g}R_{\gamma f}\bar{z})$. To get equality, we must find $\alpha$, $\gamma$ and $z^k$ such that $(1-\alpha)(z^k-\bar{z})$ and $\alpha (R_{\gamma g}R_{\gamma
f}z^k-R_{\gamma g}R_{\gamma f}\bar{z})$ are parallel. For the remaining combinations of $\gamma$ and $\alpha$, these vectors become anti-parallel, and the rate bound is not met exactly. Note, however, that for optimal choices of $\alpha$ and $\gamma$, the bound is tight. \end{rem}
\subsection{Dual Douglas-Rachford splitting (ADMM)}
This section concerns tightness of the rate bounds when Douglas-Rachford splitting is applied to the dual problem \eqref{eq:dual_prob}, or equivalently, when ADMM is applied to the primal problem \eqref{eq:prob}. To show tightness in this case, we consider the following problem \begin{align} \label{eq:f_def_dual} f(x) &=\sum_{i=1}^K\tfrac{\lambda_i}{2}\langle x,\phi_i\rangle^2\\ \label{eq:g_def_dual} g(x) &=\iota_{x=0}(x)\\ \label{eq:A_def_dual} \mathcal{A}(x) &= \sum_{i=1}^K\nu_i\langle x,\phi_i\rangle\phi_i \end{align} where $\lambda_i=\sigma$ and $\nu_i=\theta>0$ if $i\in\mathcal{I}_{\sigma}$ and $\lambda_i=\beta$ and $\nu_i=\zeta>\theta$ if $i\in\mathcal{I}_{\beta}$, where $\mathcal{I}_{\sigma}$ and $\mathcal{I}_{\beta}$ are the same as before. That $\mathcal{A}$ is linear follows trivially. That it is self-adjoint, bounded, and surjective is shown in the following proposition. \begin{prp} The linear operator $\mathcal{A}$ defined in \eqref{eq:A_def_dual} is self-adjoint, i.e. $\mathcal{A}=\mathcal{A}^*$, and for every $x\in\mathcal{H}$, we have \begin{align}
\theta\|x\|\leq\|\mathcal{A}(x)\|\leq\zeta\|x\|. \label{eq:A_bounds} \end{align}
Further $\|\mathcal{A}\|=\|\mathcal{A}^*\|=\zeta$. \end{prp} \begin{pf} We start by showing that $\mathcal{A}$ is self-adjoint. We have \begin{align*} \langle\mathcal{A}(x),\mu\rangle &=\left\langle \sum_{i=1}^K\nu_i\langle x,\phi_i\rangle\phi_i,\sum_{i=1}^K\langle\mu,\phi_i\rangle\phi_i\right\rangle\\ &=\sum_{i=1}^K\langle \nu_i\langle x,\phi_i\rangle\phi_i,\langle\mu,\phi_i\rangle\phi_i\rangle\\ &=\sum_{i=1}^K\langle \langle x,\phi_i\rangle\phi_i,\nu_i\langle\mu,\phi_i\rangle\phi_i\rangle\\ &=\left\langle \sum_{i=1}^K\langle x,\phi_i\rangle\phi_i,\sum_{i=1}^K\nu_i\langle\mu,\phi_i\rangle\phi_i\right\rangle\\ &=\langle x,\mathcal{A}(\mu)\rangle \end{align*} where the interchange of summation and inner product is justified by the orthogonality of the $\phi_i$. Next we show the first inequality in \eqref{eq:A_bounds}: \begin{align*}
\|\mathcal{A}(x)\| &=
\left\|\theta\sum_{i\in\mathcal{I}_{\sigma}}\langle
x,\phi_i\rangle\phi_i+\zeta\sum_{i\in\mathcal{I}_{\beta}}\langle
x,\phi_i\rangle\phi_i\right\|\\
&\geq\theta\left\|\sum_{i=1}^K\langle
x,\phi_i\rangle\phi_i\right\|=\theta\|x\| \end{align*} since $0<\theta\leq\zeta$. The second inequality in
\eqref{eq:A_bounds} is proven similarly. Finally, we show $\|\mathcal{A}\|=\zeta$. We have already shown that
$\|\mathcal{A}(x)\|\leq\zeta\|x\|$ for all $x\in\mathcal{H}$, i.e., that $\|\mathcal{A}\|\leq\zeta$. By definition of the operator norm, we also know that $\|\mathcal{A}\|\geq \|\mathcal{A}(x)\|$ for all $x\in\mathcal{H}$ with $\|x\|\leq 1$. Choosing $x=\phi_j$ (which satisfies $\|x\|=\|\phi_j\|=1$) for any $j\in\mathcal{I}_{\beta}$ (i.e. $j$ with $\nu_j=\zeta$) gives \begin{align*}
\|\mathcal{A}\|\geq \|\mathcal{A}(\phi_j)\| = \left\|\sum_{i=1}^K\nu_i\langle \phi_j,\phi_i\rangle\phi_i\right\|=\|\nu_j\phi_j\|=\zeta. \end{align*}
Thus, $\|\mathcal{A}\|=\zeta$ and the proof is complete. \end{pf}
This result implies that the assumptions in \cite[Corollary 2]{gisBoydTAC2014metric_select} (and Proposition~\ref{prp:ADMM_lin_conv_subdiff}) are met by $f$, $g$, and $\mathcal{A}$ in \eqref{eq:f_def_dual}, \eqref{eq:g_def_dual}, and \eqref{eq:A_def_dual} respectively. The bound on the convergence rate from \cite[Corollary 2]{gisBoydTAC2014metric_select} (and restated in Proposition~\ref{prp:ADMM_lin_conv_subdiff}) is therefore valid. To show that this bound is tight for the class of problems under consideration, we need the following explicit characterization of $d$: \begin{align*} d(\mu) &:= f^*(-\mathcal{A}^*\mu) = f^*(-\mathcal{A}\mu)\\ &=\sup_{x}\left\{\langle -\mathcal{A}\mu,x\rangle-f(x)\right\}\\ &=-\inf_{x}\left\{f(x)+\langle\mathcal{A}\mu,x\rangle\right\}\\ &=-\inf_{x}\left\{\sum_{i=1}^K\tfrac{\lambda_i}{2}\langle
x,\phi_i\rangle^2+\langle
\sum_{i=1}^K\nu_i\langle\mu,\phi_i\rangle\phi_i,x\rangle\right\}\\ &=-\inf_{ a_i}\Bigg\{\sum_{i=1}^K\tfrac{\lambda_i}{2}\langle
\sum_{i=1}^K a_i\phi_i,\phi_i\rangle^2\\ &\qquad\qquad\qquad\qquad+\langle \sum_{i=1}^K\nu_i\langle\mu,\phi_i\rangle\phi_i,\sum_{i=1}^K a_i\phi_i\rangle\Bigg\}\\ &=-\sum_{i=1}^K\inf_{ a_i}\left\{\tfrac{\lambda_i}{2}\langle
a_i\phi_i,\phi_i\rangle^2+\langle \nu_i\langle\mu,\phi_i\rangle\phi_i, a_i\phi_i\rangle\right\}\\ &=-\sum_{i=1}^K\inf_{ a_i}\left\{\tfrac{\lambda_i}{2}
a_i^2+\nu_i\langle\mu,\phi_i\rangle a_i\right\}\\ &=-\sum_{i=1}^K\left\{\frac{(\nu_i\langle\mu,\phi_i\rangle)^2}{2\lambda_i}
-\frac{(\nu_i\langle\mu,\phi_i\rangle)^2}{\lambda_i}\right\}\\ &=\sum_{i=1}^K\frac{(\nu_i\langle\mu,\phi_i\rangle)^2}{2\lambda_i}=\sum_{i=1}^K\tfrac{\nu_i^2}{2\lambda_i}\langle\mu,\phi_i\rangle^2 \end{align*} where the decomposition $x=\sum_{i=1}^K a_i\phi_i$ with $a_i=\langle x,\phi_i\rangle$ is used, and the optimal $a_i=-\nu_i\langle\mu,\phi_i\rangle/\lambda_i$. The function $d$ has exactly the same structure as the function $f$ but with $\lambda_i$ in $f$ in \eqref{eq:f_def} replaced by $\nu_i^2/\lambda_i$ in $d$. The function $g^*$ is, for all $\mu\in\mathcal{H}$, given by \begin{align*} g^*(\mu) &= \sup_{x\in\mathcal{H}}\left\{\langle\mu,x\rangle-\iota_{x=0}(x)\right\}=\langle\mu,0\rangle=0. \end{align*} This implies that the dual problem \eqref{eq:dual_prob} with $f$, $g$, and $\mathcal{A}$ specified in \eqref{eq:f_def_dual}, \eqref{eq:g_def_dual}, and \eqref{eq:A_def_dual} has exactly the same structure as the primal problem \eqref{eq:prob} with $f$ and $g$ specified in \eqref{eq:f_def} and \eqref{eq:g_def} respectively and with $\mathcal{A}={\rm{Id}}$. The only things that differ are the scalars that multiply the quadratic terms in the functions $f$ and $d$ respectively. Therefore, we can immediately state the following corollary to Theorem~\ref{thm:tight_bound}. \begin{cor} Let $f$ be given by \eqref{eq:f_def_dual}, $g$ be given by \eqref{eq:g_def_dual}, and $\mathcal{A}$ be given by \eqref{eq:A_def_dual}. Then the generalized Douglas-Rachford algorithm applied to solve the dual problem \eqref{eq:dual_prob} (or equivalently ADMM applied to solve \eqref{eq:prob}) converges as in Theorem~\ref{thm:tight_bound} with $\beta$ and $\sigma$ in Theorem~\ref{thm:tight_bound} replaced by
$\hat{\beta}=\tfrac{\|\mathcal{A}\|^2}{\sigma}$ and $\hat{\sigma}=\tfrac{\theta^2}{\beta}$ respectively. \label{cor:ADMM_lin_conv} \end{cor}
The exact rate provided in Corollary~\ref{cor:ADMM_lin_conv} coincides with the rate bound in \cite[Corollary 2]{gisBoydTAC2014metric_select} and Proposition~\ref{prp:ADMM_lin_conv_subdiff}. Therefore, we conclude that the rate bound in \cite[Corollary 2]{gisBoydTAC2014metric_select} for ADMM on the primal problem, or equivalently for Douglas-Rachford splitting on the dual problem, is tight for the class of problems under consideration for many algorithm parameter choices. In particular, the bound is tight for the optimal parameters $\alpha$ and $\gamma$, as in the primal Douglas-Rachford case.
\section{Conclusion}
Recent results in the literature have shown linear convergence of Douglas-Rachford splitting and ADMM under various assumptions. In this paper, we have shown that the linear convergence rate bounds presented in \cite{gisBoydTAC2014metric_select} are indeed tight for the class of problems under consideration.
\end{document}
\begin{document}
\title {
Searching for a Compressed Polyline\\
with a Minimum Number of Vertices } \author { \IEEEauthorblockN{Alexander Gribov} \IEEEauthorblockA {
Environmental Systems Research Institute\\
380 New York Street\\
Redlands, CA 92373\\
E-mail: agribov@esri.com} }
\maketitle
\begin{abstract} \boldmath
There are many practical applications that require simplification of polylines. Some of the goals are to reduce the amount of information necessary to store, improve processing time, or simplify editing. The simplification is usually done by removing some of the vertices, making the resultant polyline go through a subset of the source polyline vertices. However, such approaches do not necessarily produce a new polyline with the minimum number of vertices. This paper describes an approximate solution for finding a polyline, within a specified tolerance of the source polyline, with the minimum number of vertices. \end{abstract}
\begin{IEEEkeywords}
polyline compression; polyline approximation; orthogonality; circular arcs \end{IEEEkeywords}
\section{Introduction}
The task is to find a polyline, within a specified tolerance of the source polyline, with the minimum number of vertices. That polyline is called optimal. Usually, a subset of vertices of the source polyline is used to construct an optimal polyline~\cite{CompressionAlgorithm, CompressionReview}. However, an optimal polyline does not necessarily have vertices coincident with the source polyline vertices. One approach, to allow the resultant polyline to have flexibility in the locations of vertices, is to find the intersection between adjacent straight lines~\cite{PolylineGeneralizationCombinatorical} or geometrical primitives~\cite{PolylineGeneralization}. However, there are situations when such an approach does not work well, for example, when adjacent straight lines are almost parallel to each other or a circular arc is close to being tangent to a straight segment. The approach described in this paper evaluates a set of vertex locations (considered locations) while searching for a polyline with the minimum number of vertices.
\section{Algorithm}
\subsection {
Discretization of the Solution
\label{sec:Descretization} }
\begin{figure}\label{fig:ExampleSegment}
\end{figure}
Any compressed polyline must be within tolerance of the source polyline; therefore, the compressed polyline must have vertices within tolerance of the source polyline. It would be very difficult to consider all possible polylines and find one with the minimum number of vertices; therefore, as an approximation, only some locations around vertices of the source polyline are considered (see the black points around the vertices of the source polyline in Fig.~\ref{fig:ExampleSegment}).
The locations around vertices of the source polyline are chosen to be on an infinite equilateral triangular grid with the distance from vertices of the source polyline less than the specified tolerance. The equilateral triangular grid (see Fig.~\ref{fig:TriangularGrid}) has the lowest number of nodes versus other grids (square, hexagonal, etc.), satisfying that distance from any point to the closest node does not exceed the specified threshold.
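The construction above can be sketched in a few lines of Python (an illustration, not the paper's implementation; the function name and the row-by-row generation are choices made here). Grid nodes lie on an equilateral triangular lattice with side $q\sqrt{3}$ times the tolerance, so every point in the tolerance disk of a vertex is within $q$ times the tolerance of some node.

```python
import math

def candidate_locations(vx, vy, tol, q):
    """Triangular-lattice nodes within `tol` of vertex (vx, vy); side = q*sqrt(3)*tol."""
    s = q * math.sqrt(3) * tol          # lattice spacing (triangle side)
    pts = []
    n = int(tol / s) + 2                # enough rows/columns to cover the disk
    for j in range(-n, n + 1):
        y = vy + j * s * math.sqrt(3) / 2
        off = (j % 2) * s / 2           # every other row is shifted by half a side
        for i in range(-n, n + 1):
            x = vx + off + i * s
            if (x - vx) ** 2 + (y - vy) ** 2 <= tol ** 2:
                pts.append((x, y))
    return pts

# with q = 0.2 the estimate 1.2 / q^2 predicts roughly 30 locations per vertex
pts = candidate_locations(0.0, 0.0, 1.0, 0.2)
```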
\begin{figure}\label{fig:TriangularGrid}
\end{figure}
The side of the equilateral triangles in the grid is chosen based on the error it introduces. That error can be expressed as a proportion of the specified tolerance. For example, $q \in \left( 0, 1 \right)$ proportion of the specified tolerance means that the side of the equilateral triangle is equal to $q \sqrt{3}$ times the specified tolerance. This leads to about $\dfrac{2 \pi}{3 \sqrt{3} q^2} \approx \dfrac{1.2}{q^2}$ locations per each vertex. To decrease complexity, some locations might be skipped if they are considered in neighboring vertices of the source polyline; however, this should be done without breaking the combinatorial algorithm described in section \ref{sec:CombinatorialApproach}. If the tolerance is large, it is possible to consider locations around segments of the source polyline. In this paper, to support any tolerance, only locations around vertices of the source polyline are considered. Densification of the source polyline might be necessary to find the polyline with the minimum number of vertices.
\subsection {
Testing a Segment to Satisfy Tolerance
\label{sec:TestingSegmentTolerance} }
For a compressed polyline to be within tolerance, every segment of the compressed polyline must be within tolerance of the part of the source polyline it describes. To find the compressed polyline with the minimum number of vertices, this test has to be performed many times for all combinations of possible locations of vertices (see Fig.~\ref{fig:ExampleSegment}). \cite{OptimizedCompressionAlgorithm} describes an efficient approach to perform these tests based on the convex hull. If the convex hull is stored as a polygon, the complexity of this task is $O{\left( \log{n} \right)}$, where $n$ is the number of vertices in the convex hull~\cite{OptimizedCompressionAlgorithm}. The expected size of the convex hull of $N$ random points in a rectangle is $O{\left( \log{N} \right)}$, see~\cite{ConvexHullsComplexity}. If the source polyline has parts close to an arc, the size of the convex hull tends to increase. In the worst case, the number of vertices in the convex hull is equal to the number of vertices in the original set.
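The underlying geometric test is simply whether every source point lies within the tolerance of a candidate segment. A brute-force version is sketched below (illustrative only; the paper's cited method uses a convex-hull structure to answer this in $O(\log n)$, while this linear scan and its names are choices made here).

```python
import math

def within_tolerance(seg_a, seg_b, points, tol):
    """Brute-force check that every point lies within `tol` of segment a-b."""
    ax, ay = seg_a
    bx, by = seg_b
    dx, dy = bx - ax, by - ay
    length2 = dx * dx + dy * dy
    for px, py in points:
        # parameter of the closest point on the segment, clamped to [0, 1]
        t = 0.0 if length2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / length2))
        cx, cy = ax + t * dx, ay + t * dy
        if math.hypot(px - cx, py - cy) > tol:
            return False
    return True
```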
If no line with a thickness of two tolerances covers the convex hull completely, then one segment cannot describe this part of the source polyline. The complexity of this check is $O{\left( n \log{n} \right)}$.
A convex hull for any part of the source polyline is constructed in the same way as in~\cite{OptimizedCompressionAlgorithm}.
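A simplified version of the segment tolerance test can be sketched as follows. It scans all hull vertices in $O(n)$ instead of exploiting the hull's vertex ordering for $O(\log n)$ as in~\cite{OptimizedCompressionAlgorithm}; the function name is illustrative.

```python
import math

def segment_within_tolerance(seg_a, seg_b, hull_points, tolerance):
    """Simplified O(n) version of the hull-based test: every vertex of
    the convex hull of the described part must lie within `tolerance`
    of the infinite line through the candidate segment."""
    ax, ay = seg_a
    bx, by = seg_b
    dx, dy = bx - ax, by - ay
    length = math.hypot(dx, dy)
    if length == 0:
        # degenerate segment: fall back to point-to-point distance
        return all(math.hypot(px - ax, py - ay) <= tolerance
                   for px, py in hull_points)
    for px, py in hull_points:
        # perpendicular distance from hull vertex to the line (a, b)
        dist = abs(dy * (px - ax) - dx * (py - ay)) / length
        if dist > tolerance:
            return False
    return True
```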
\subsection {
Testing Segment End Points
\label{sec:TestingSegmentEndPoints} }
The test described in the previous section~\ref{sec:TestingSegmentTolerance} does not check the ends of the segment. The example in Fig.~\ref{fig:TopologycalTest} shows a source polyline that reverses direction several times (zigzag) before going up. Without checking end points and changes in direction, the compressed polyline might not describe some parts of the source polyline (Fig.~\ref{fig:TopologycalTest}a). Therefore, these tests are necessary to guarantee that the compressed polyline (Fig.~\ref{fig:TopologycalTest}b) describes the source polyline without missing any parts.
\begin{figure}\label{fig:TopologycalTest}
\end{figure}
Whether the segment end points are within the tolerance of the part of the source polyline is tested with the convex hull, in the same way as the segment tolerance test of section~\ref{sec:TestingSegmentTolerance}.
This is equivalent to testing whether the segment, extended by the tolerance in the parallel and perpendicular directions (see Fig.~\ref{fig:SegmentEndPoints}), contains the convex hull of the part of the source polyline it describes. If more directions are used, a better approximation of the curved polygon can be obtained. The complexity of the test is $O{\left( \log{n} \right)}$.
\begin{figure}\label{fig:SegmentEndPoints}
\end{figure}
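This containment test can be sketched as follows, using only the parallel and perpendicular directions, so the expanded region is a rectangle that over-approximates the exact curved polygon; names are illustrative.

```python
import math

def endpoints_cover_hull(seg_a, seg_b, hull_points, tolerance):
    """Sketch of the end-point test: the segment, expanded by the
    tolerance along and across its direction, must contain the convex
    hull of the described part of the source polyline."""
    ax, ay = seg_a
    bx, by = seg_b
    dx, dy = bx - ax, by - ay
    length = math.hypot(dx, dy)
    if length == 0:
        # degenerate segment: expanded region is a square around it
        return all(max(abs(px - ax), abs(py - ay)) <= tolerance
                   for px, py in hull_points)
    ux, uy = dx / length, dy / length  # unit vector along the segment
    for px, py in hull_points:
        along = (px - ax) * ux + (py - ay) * uy
        across = abs((px - ax) * -uy + (py - ay) * ux)
        if not (-tolerance <= along <= length + tolerance
                and across <= tolerance):
            return False
    return True
```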
\subsection {
Testing Polyline Direction
\label{sec:TestingZigZag} }
The test for the source polyline to have a zigzag is performed by checking whether the backward movement, projected onto the segment, exceeds two tolerances ($2 T$, where $T$ is the tolerance). Two tolerances are used because one vertex of the source polyline can shift forward by the tolerance and the vertex after it can shift backward by the tolerance. The algorithm is based on analyzing zigzags before the processed point. Let $p_i$, $i = \overline{0..N - 1}$, be the vertices of the polyline, where $N$ is the number of vertices. The next algorithm constructs a table for efficient testing. \begin{enumerate}[label={}]
\item Define a set of directions $\alpha_j = \dfrac{2 \pi}{N_d} j$,\\
where $j = \overline{0..N_d - 1}$, $N_d$ is the number of directions.
\item Cycle over each direction $\alpha_j$, $j = \overline{0..N_d - 1}$.
\begin{enumerate}[label={}]
\item Define the priority queue with requests containing two numbers. The first number is the real value, and the second number is the index. Priority of the request is equal to the first number.
\item Set $k = 0$.
\item Cycle over each point $p_i$ of the source polyline,\\
$i = \overline{0..N - 1}$.
\begin{enumerate}[label={}]
\item Calculate projection of $p_i$ to the direction $\alpha_j$ (scalar product between the point and the direction vector):
\begin{equation*}
d = p_i \cdot \left( \cos \left( \alpha_j \right), \sin \left( \alpha_j \right) \right).
\end{equation*}
\item Remove all requests from the priority queue with a priority of more than $d + 2 T$. If the largest index from removed requests is larger than $k$, set $k$ equal to that index.
\item Set $V_{j, i} = k$.
\item Add request $\left( d, i + 1 \right)$ to the priority queue.
\end{enumerate}
\end{enumerate} \end{enumerate}
To test whether the part of the source polyline between vertices $i_s$ and $i_e$ has a zigzag: \begin{enumerate}[label={}]
\item First, find the index $j^*$ of the direction $\alpha_{j^*}$ closest to the direction of the segment:
$
j^* = \round{\left( \dfrac{N_d}{2 \pi} \alpha \right)}
\!\!\!
\mod
N_d
$,
where $\alpha$ is the direction of the segment.
\item Second, if $V_{j^*, i_{e}} \leq i_{s}$, then there are no zigzags for the segment describing the part of the source polyline from vertex $i_{s}$ till $i_{e}$. \end{enumerate}
Let $W_i = \min_{0 \leq j \wedge j < N_d}{\left( V_{j, i} \right)}$. If $i_{s} < W_{i_{e}}$, then one segment cannot describe the part of the source polyline from vertex $i_{s}$ till $i_{e}$.
This test has some limitations: \begin{itemize}
\item The tested direction is approximated by the closest one, making the check approximate.
\item For some error models, a zigzag might pass the test. For example, if errors are limited by a circle, a zigzag by two tolerances is only possible if it happens directly on the segment. \end{itemize}
Nevertheless, it is an efficient test to avoid absurd results, like in Fig.~\ref{fig:TopologycalTest}a. The complexity of the algorithm is $O{\left( N_d N \log \left( N \right) \right)}$ and the complexity to test any segment is $O{\left( 1 \right)}$.
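The table construction above can be sketched with a binary-heap priority queue; the heap stores negated projections to emulate a max-priority queue, and the names are illustrative.

```python
import heapq
import math

def build_zigzag_table(points, tolerance, n_dirs):
    """Sketch of the zigzag-table construction. V[j][i] is the
    smallest start index such that the part of the polyline from
    V[j][i] to i contains no backward movement exceeding
    2*tolerance along direction j."""
    n = len(points)
    V = [[0] * n for _ in range(n_dirs)]
    for j in range(n_dirs):
        angle = 2 * math.pi * j / n_dirs
        ca, sa = math.cos(angle), math.sin(angle)
        heap = []  # max-heap via negated projections: (-d, index)
        k = 0
        for i, (px, py) in enumerate(points):
            d = px * ca + py * sa  # projection on direction j
            # remove requests with priority more than d + 2T; they
            # witness a backward move of more than two tolerances
            while heap and -heap[0][0] > d + 2 * tolerance:
                _, idx = heapq.heappop(heap)
                k = max(k, idx)
            V[j][i] = k
            heapq.heappush(heap, (-d, i + 1))
    return V
```

For a polyline that doubles back by more than $2T$ along direction $0$, the table entry for the last vertex exceeds the start index, so the segment fails the test.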
\subsection {
Combinatorial Approach to Find an Optimal Solution
\label{sec:CombinatorialApproach} }
The optimal solution is found by using the algorithm described in~\cite{PolylineGeneralizationCombinatorical}.
Let $p_{i, j}$ be considered locations for vertex $p_i$, where ${i = \overline{0..N - 1}}$, ${j = \overline{0..N_i - 1}}$, $N_i$ is the number of considered locations for the vertex $i$. Let pairs $\left( i_k, j_k \right)$, ${k = \overline{0..m}}$, divide the source polyline into $m$ straight segments $
\left( p_{i_k, j_k}, p_{i_{k + 1}, j_{k + 1}} \right) $ describing the source polyline from vertex $i_k$ till $i_{k + 1}$, $k = \overline{0..m - 1}$. Notice that neighboring segments are already connected at $p_{i_k, j_k}$, $k = \overline{1..m - 1}$, and this solution avoids problems in the algorithms of~\cite{PolylineGeneralizationCombinatorical, PolylineGeneralization} when the intersection of neighboring segments is far away from the source polyline.
The goal of this algorithm is to find the solution with the minimum number of vertices while satisfying tolerance restriction, and among them with the minimum integral square differences. Therefore, minimization is performed in two parts $
\left\{
\begin{aligned}
& T^{\#}\\
& T^{\epsilon}
\end{aligned}
\right\} $, where the first part $T^{\#}$ is the number of segments, and the second part $T^{\epsilon}$ is the integral of the square deviation between segments and the source polyline. The solutions are compared by the number of segments and, if they have the same number of segments, by square deviation between segments and the source polyline. The solution of this task, when the optimal polyline has vertices coincident with the source polyline, can be found in \cite{CombinatorialMinimumNumberSegments}.
Let $P_k$, $k = \overline{0..N - 1}$ be parts of the source polyline from vertex $0$ to $k$.
The optimal solution is found by induction. Define the optimal solution for polyline $P_0$ as $
\left\{
\begin{aligned}
& T_{0, j}^{\#}\\
& T_{0, j}^{\epsilon}
\end{aligned}
\right\}
=
\left\{
\begin{aligned}
0\\
0
\end{aligned}
\right\} $, $
{
j = \overline{0, N_0 - 1}
} $. For $k = \overline{1, N - 1}$, construct the optimal solution for $P_k$ from optimal solutions for $P_{k'}$, $k' = \overline{0..k-1}$. \begin{equation*}
\left\{
\begin{aligned}
& T_{k, j}^{\#}\\
& T_{k, j}^{\epsilon}
\end{aligned}
\right\}
=
\min
_
{
\begin{aligned}
0 \leq k' \wedge k' < k\\
0 \leq j' \wedge j' < N_{k'}\\
\functioncheck
{
\left(
\left( k', j' \right),
\left( k, j \right)
\right)
}
\end{aligned}
}
{
\left(
\left\{
\begin{aligned}
& T_{k', j'}^{\#} + 1\\
& T_{k', j'}^{\epsilon}
+
\epsilon
_
{
\left( k', j' \right),
\left( k, j \right)
}
\end{aligned}
\right\}
\right)
}
, \end{equation*} where $
\epsilon
_
{
\left( k', j' \right),
\left( k, j \right)
} $ is the integral square difference between segment $
\left( p_{k', j'}, p_{k, j} \right) $ and the source polyline from vertex $k'$ till $k$, $
\functioncheck
{
\left(
\left( k', j' \right),
\left( k, j \right)
\right)
} $ is a combination of checks described in the previous sections \ref{sec:TestingSegmentTolerance}, \ref{sec:TestingSegmentEndPoints}, and \ref{sec:TestingZigZag} to check if segment $
\left( p_{k', j'}, p_{k, j} \right) $ can describe the part of the source polyline from vertex $k'$ till $k$.
To reconstruct the optimal solution, it is necessary for $
\left\{
\begin{aligned}
& T_{k, j}^{\#}\\
& T_{k, j}^{\epsilon}
\end{aligned}
\right\} $ to store $\left\{ k', j' \right\}$ when the right-hand side is minimal.
The optimal solution is reconstructed from \begin{equation*}
\min
_
{
0 \leq j \wedge j < N_{N - 1}
}
{
\left\{
\begin{aligned}
& T_{N - 1, j}^{\#}\\
& T_{N - 1, j}^{\epsilon}
\end{aligned}
\right\}
} \end{equation*} by recurrently using stored $\left\{ k', j' \right\}$ values.
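The induction above can be sketched as a dynamic program. Here `check` and `eps` are caller-supplied stand-ins (assumptions) for the tolerance, end-point, and zigzag tests and for the integral square difference; the pair loop over all $(k', j')$ omits the optimization of section~\ref{sec:Optimization}.

```python
def optimal_compression(N, N_loc, check, eps):
    """Sketch of the combinatorial dynamic program. States are
    compared lexicographically: segment count first, then the
    accumulated square deviation."""
    INF = float('inf')
    # best[k][j] = (segments, error); prev[k][j] reconstructs the path
    best = [[(INF, INF)] * N_loc[k] for k in range(N)]
    prev = [[None] * N_loc[k] for k in range(N)]
    for j in range(N_loc[0]):
        best[0][j] = (0, 0.0)
    for k in range(1, N):
        for j in range(N_loc[k]):
            for k2 in range(k):
                for j2 in range(N_loc[k2]):
                    if best[k2][j2][0] == INF:
                        continue  # unreachable state
                    if not check((k2, j2), (k, j)):
                        continue  # candidate segment fails a test
                    cand = (best[k2][j2][0] + 1,
                            best[k2][j2][1] + eps((k2, j2), (k, j)))
                    if cand < best[k][j]:
                        best[k][j] = cand
                        prev[k][j] = (k2, j2)
    # choose the best final location, then walk back through `prev`
    j_end = min(range(N_loc[N - 1]), key=lambda j: best[N - 1][j])
    path, state = [], (N - 1, j_end)
    while state is not None:
        path.append(state)
        state = prev[state[0]][state[1]]
    return list(reversed(path))
```

When every check passes and the deviation is zero, the minimum-vertex solution is the single segment from the first to the last vertex.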
\subsection {
Optimization
\label{sec:Optimization} }
It is possible to significantly reduce the complexity of the algorithm described in the previous section~\ref{sec:CombinatorialApproach} by using the approach described in~\cite{PolylineGeneralizationCombinatorical}. \begin{equation}
\begin{aligned}
&
\min
_
{
\begin{aligned}
k_1 \leq k' \wedge k' \leq k_2\\
0 \leq j' \wedge j' < N_{k'}\\
\functioncheck
{
\left(
\left( k', j' \right),
\left( k, j \right)
\right)
}
\end{aligned}
}
{
\left(
\left\{
\begin{aligned}
& T_{k', j'}^{\#} + 1\\
& T_{k', j'}^{\epsilon}
+
\epsilon
_
{
\left( k', j' \right),
\left( k, j \right)
}
\end{aligned}
\right\}
\right)
}
\gtrapprox
\\
\gtrapprox
&
\min
_
{
\begin{aligned}
k_1 \leq k' \wedge k' \leq k_2\\
0 \leq j' \wedge j' < N_{k'}
\end{aligned}
}
{
\left(
\left\{
\begin{aligned}
& T_{k', j'}^{\#} + 1\\
& T_{k', j'}^{\epsilon}
+
\epsilon
_
{
\left( k', j' \right),
\left( k, j \right)
}
^
{
\left( k_2 \right)
}
\end{aligned}
\right\}
\right)
}
,
\end{aligned}
\label{eq:OptmizationBaseFormula} \end{equation} where \begin{multline*}
\epsilon
_
{
\left( k', j' \right),
\left( k, j \right)
}
^
{
\left( k_2 \right)
}
=
\\
=
\min
_
{
\begin{aligned}
0 \leq j_2 \wedge j_2 < N_{k_2}\\
\functioncheck
{
\left(
\left( k', j' \right),
\left( k_2, j_2 \right)
\right)
}\\
\functioncheck
{
\left(
\left( k_2, j_2 \right),
\left( k, j \right)
\right)
}
\end{aligned}
}
{
\left(
\epsilon
_
{
\left( k', j' \right),
\left( k_2, j_2 \right)
}
+
\epsilon
_
{
\left( k_2, j_2 \right),
\left( k, j \right)
}
\right)
}
. \end{multline*}
From \eqref{eq:OptmizationBaseFormula}, it follows that \begin{equation}
\begin{aligned}
&
\min
_
{
\begin{aligned}
k_1 \leq k' \wedge k' \leq k_2\\
0 \leq j' \wedge j' < N_{k'}\\
\functioncheck
{
\left(
\left( k', j' \right),
\left( k, j \right)
\right)
}
\end{aligned}
}
{
\left(
\left\{
\begin{aligned}
& T_{k', j'}^{\#} + 1\\
& T_{k', j'}^{\epsilon}
+
\epsilon
_
{
\left( k', j' \right),
\left( k, j \right)
}
\end{aligned}
\right\}
\right)
}
\gtrapprox
\\
\gtrapprox
&
\min
_
{
\begin{aligned}
0 \leq j_2 \wedge j_2 < N_{k_2}\\
\functioncheck
{
\left(
\left( k_2, j_2 \right),
\left( k, j \right)
\right)
}
\end{aligned}
}
{
\left(
\left\{
\begin{aligned}
& T_{k_2, j_2}^{\#}\\
& T_{k_2, j_2}^{\epsilon}
+
\epsilon
_
{
\left( k_2, j_2 \right),
\left( k, j \right)
}
\end{aligned}
\right\}
\right)
}
\end{aligned}
\label{eq:OptimizationFormula1} \end{equation} and \begin{equation}
\begin{aligned}
&
\min
_
{
\begin{aligned}
k_1 \leq k' \wedge k' \leq k_2\\
0 \leq j' \wedge j' < N_{k'}\\
\functioncheck
{
\left(
\left( k', j' \right),
\left( k, j \right)
\right)
}
\end{aligned}
}
{
\left(
\left\{
\begin{aligned}
& T_{k', j'}^{\#} + 1\\
& T_{k', j'}^{\epsilon}
+
\epsilon
_
{
\left( k', j' \right),
\left( k, j \right)
}
\end{aligned}
\right\}
\right)
}
\gtrapprox
\\
\gtrapprox
&
\min
_
{
0 \leq j_1 \wedge j_1 < N_{k_1}
}
{
\left(
\left\{
\begin{aligned}
& T_{k_1, j_1}^{\#}\\
& T_{k_1, j_1}^{\epsilon}
\end{aligned}
\right\}
\right)
}
+
\\
&
+
\left\{
\begin{aligned}
&
\: \: \: \: \: \: \: \: \: \: \: \: \: \: \: \: \: \: \: \: \: \: \: \: \: \: \: \: \:
1\\
&
\min
_
{
\begin{aligned}
0 \leq j_2 \wedge j_2 < N_{k_2}\\
\functioncheck
{
\left(
\left( k_2, j_2 \right),
\left( k, j \right)
\right)
}
\end{aligned}
}
{
\left(
\epsilon
_
{
\left( k_2, j_2 \right),
\left( k, j \right)
}
\right)
}
\end{aligned}
\right\}
.
\end{aligned}
\label{eq:OptimizationFormula2} \end{equation}
The maximum of \eqref{eq:OptimizationFormula1} and \eqref{eq:OptimizationFormula2} can be used to skip checking combinations between vertex $k_1$ and $k_2$.
The inequalities \eqref{eq:OptimizationFormula1} and \eqref{eq:OptimizationFormula2} are approximate due to the use of considered locations. However, this approach allows finding stricter limitations for the solution inside the interval while simultaneously finding the solution for breaking at vertex $k_2$.
It is possible to make \eqref{eq:OptimizationFormula1} and \eqref{eq:OptimizationFormula2} exact inequalities by constructing the optimal solution when the end point is not required to land on a considered location. Similarly, the part from vertex $k_2$ to $\left( k, j \right)$ should not be required to end in considered locations for vertex $k_2$. This is useful when the resultant polyline is required to go through the vertices of the source polyline. However, such an algorithm has a worse compression ratio than the one with flexibility in the joints.
See paper~\cite{PolylineGeneralizationCombinatorical} for further details of this algorithm.
\subsection{Optimal Compression of Closed Polylines}
To find the optimal compression of a closed polyline, it is necessary to know the starting vertex. It is also necessary that the resultant polyline starts and ends in the same vertex. The next algorithm will be used to find the starting vertex and construct a closed resultant polyline. \begin{enumerate}[label={\arabic*.}, ref={\arabic*}]
\item \label{enum:ConvexHull} Construct a convex hull for all vertices of the source polyline.
\item \label{enum:SmallesAngle} Find the smallest angle of the convex hull polygon.
\item \label{enum:Reorient} Take the vertex corresponding to the smallest angle as the starting vertex and reorient the closed polyline to start from that vertex.
\item Apply the algorithm.
\item \label{enum:ConstructFirstSolution} From the constructed solution, take one vertex in the middle as the new starting vertex and reorient the closed polyline to start from that vertex.
\item Apply the algorithm once more, while for the first and the last vertex consider only the location of the previous solution for the middle vertex. \end{enumerate}
Steps \ref{enum:ConvexHull}, \ref{enum:SmallesAngle}, and \ref{enum:Reorient} are important for a small closed polyline. For the small closed polyline, the resultant polyline is within tolerance of the source polyline, even with suboptimal orientation. As a consequence, without these steps, step \ref{enum:ConstructFirstSolution} may not find the optimal division of the source polyline, leading to a suboptimal solution.
\subsection{Optimal Compression by Straight Segments and Arcs}
This algorithm is extendible to support arcs. The arc passing through considered locations differs from the segment by the necessity to define the radius. Unfortunately, it adds significant complexity to the algorithm. Nevertheless, such an algorithm is possible. There are different ways to fit an arc to a polyline: minimum integral square differences of squares~\cite{ThomasReference2, IchokuReference3}, minimum integral square differences~\cite{RobinsonReference6, Landau4, PaperArcFitting, FittingOfCircularArcsWithO1Complexity, EfficientFittingOfCircularArcs}, minimum deviation, etc. Algorithms with complexity $O{\left( n \right)}$, where $n$ is the number of vertices in the fitted polyline, are not suitable due to the significant increase in complexity. The algorithms with acceptable complexity $O{\left( 1 \right)}$ are~\cite{ThomasReference2, IchokuReference3, FittingOfCircularArcsWithO1Complexity, EfficientFittingOfCircularArcs}; however, algorithms based on integral square differences of squares~\cite{ThomasReference2, IchokuReference3} might break for small arcs and, therefore, are not suitable. Checking that the part of the source polyline is within tolerance, end points, and zigzag will be time-consuming due to complexity $O{\left( n \right)}$.
\section{Analysis of the Algorithm Complexity}
The algorithm contains three steps: \begin{enumerate}[label={\arabic*.}]
\item Preprocessing: construction of convex hulls (section~\ref{sec:TestingSegmentTolerance}) and filling arrays for an efficient zigzag test (section~\ref{sec:TestingZigZag}).
\item Construction of the optimal solution (section~\ref{sec:CombinatorialApproach}).
\item Reconstruction of the optimal solution (section~\ref{sec:CombinatorialApproach}). \end{enumerate}
A significant amount of time is spent on constructing the optimal solution. It is difficult to evaluate the complexity of the optimized search described in section~\ref{sec:Optimization}; however, the worst-case complexity is \begin{equation}
O{\left( N^2 \cdot \max_{0 \leq i \wedge i < N}{\left( N_i^2 \right)} \cdot \log{\left( N \right)} \right)}
.
\label{eq:WorseComplexity} \end{equation}
The complexity of the algorithm depends on the type of polyline it processes, and it is difficult to state its practical complexity. If the optimal polyline does not have segments describing too many vertices of the source polyline, \eqref{eq:WorseComplexity} tends to \begin{equation}
O{\left( N \cdot \max_{0 \leq i \wedge i < N}{\left( N_i^2 \right)} \right)}
.
\label{eq:PracticalComplexity} \end{equation}
Fig.~\ref{fig:EstimationOfAlgorithmComplexity} shows how much time it takes to process a polyline depending on the number of vertices. The dependence is very close to linear, supporting \eqref{eq:PracticalComplexity}.
\begin{figure}\label{fig:EstimationOfAlgorithmComplexity}
\end{figure}
\section{Examples}
Fig.~\ref{fig:Comparison} shows an example of the algorithm described in this paper. If the source polyline is the noisy version of a ground truth polyline, where the noise does not exceed some threshold, and the algorithm is provided with a tolerance slightly greater than the threshold to account for approximations inside the algorithm, then the resultant polyline will never have more vertices than the ground truth polyline.
\begin{figure}\label{fig:Comparison}
\end{figure}
The effectiveness of the approach is shown in Fig.~\ref{fig:ExampleArc}. Nine segments are sufficient to represent the arc with specified precision. The algorithm not only optimizes the number of segments, it also finds the locations of the segments that minimize integral square differences. Therefore, as shown in Fig.~\ref{fig:ExampleArc}, the algorithm tends to construct segments similar in length.
\begin{figure}\label{fig:ExampleArc}
\end{figure}
Fig.~\ref{fig:GraphCompressionEfficiency} shows how the error introduced by the discrete set of considered locations (see section~\ref{sec:Descretization}) affects the efficiency of the compression. Flexibility in the places where neighboring segments connect to each other is very important to reach maximum compression, especially for noisy data.
\section{Optimal Compression by Orthogonal Directions}
The triangular grid for considered locations supports directions in increments of $30 \degree$. Reconstruction of orthogonal buildings requires support for $90 \degree$~\cite{ReconstructionOfOrthogonalPolygonalLines} and sometimes $45 \degree$. The square grid for considered locations is more appropriate for this task.
Notice that because only certain directions are allowed, only segments between pairs of considered locations aligned by these directions may be parts of the resultant polyline. Suppose that the resultant segment goes between vertex $i$ and $j$. Because it has to be within tolerance for all vertices between $i$ and $j$, it goes through their considered locations (with the exception of the segment deviating close to the tolerance due to discretization of considered locations).
The optimal solution is found by induction. Define the optimal solution for polyline $P_0$ as $
\left\{
\begin{aligned}
& T_{0, j, q}^{\#}\\
& T_{0, j, q}^{\epsilon}
\end{aligned}
\right\}
=
\left\{
\begin{aligned}
0\\
0
\end{aligned}
\right\} $, where $
j = \overline{0, N_0 - 1} $, $
q = \overline{0, M - 1} $, and $M$ is the number of different directions. For the orthogonal case $M = 4$, and for the $45 \degree$ case $M = 8$. Take directions as $\alpha_{i} = \dfrac{360 \degree}{M} \cdot i$, $i = \overline{0, M - 1}$. For $k = \overline{1, N - 1}$, construct the optimal solution for $P_k$ from the optimal solution for $P_{k - 1}$. \begin{multline*}
\left\{
\begin{aligned}
& T_{k, j, q}^{\#}\\
& T_{k, j, q}^{\epsilon}
\end{aligned}
\right\}
=
\\
\min
_
{
\begin{aligned}
0 \leq j' \wedge j' < N_{k - 1}\\
0 \leq q' \wedge q' < M\\
2 \left| q' - q \right| \neq M\\
\functionangle{\left( p_{k, j} - p_{k - 1, j'}, \alpha_{q'} \right)}
\end{aligned}
}
{
\left(
\left\{
\begin{aligned}
& T_{k - 1, j', q'}^{\#} + \delta_{q' \neq q}\\
& T_{k - 1, j', q'}^{\epsilon}
+
\epsilon
_
{
\left( k - 1, j' \right),
\left( k, j \right)
}
\end{aligned}
\right\}
\right)
}
, \end{multline*} where $
\delta_{q' \neq q}
=
\left\{
\begin{aligned}
1, & \text{ if } q' \neq q,\\
0, & \text{ otherwise};
\end{aligned}
\right. $ \\ $ \functionangle{ \left( v, \alpha \right)} $ is the check that the vector $v$ has angle $\alpha$ (zero length vectors are allowed).
\begin{figure}\label{fig:GraphCompressionEfficiency}
\end{figure}
The condition $2 \left| q' - q \right| \neq M$ corresponds to prohibiting changes in direction by $180 \degree$.
For the $45 \degree$ case, it is possible to restrict the resultant polyline from having sharp angles by not allowing a change of direction by $135 \degree$ ($\left| 4 - \left( \left( q' - q \right) \bmod 8 \right) \right| \neq 1$).
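The two direction-change restrictions can be sketched as a single predicate; the function name is illustrative.

```python
def turn_allowed(q_prev, q_new, M, forbid_sharp=False):
    """Direction-change rules from this section: a 180-degree
    reversal (2|q' - q| = M) is always prohibited; in the 45-degree
    case (M = 8) the sharp 135-degree turns can additionally be
    excluded via the modular condition."""
    if 2 * abs(q_prev - q_new) == M:
        return False
    if forbid_sharp and M == 8 and abs(4 - ((q_prev - q_new) % 8)) == 1:
        return False
    return True
```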
Notice that there are no checks for the tolerance, direction, and end points because they are satisfied during each induction step.
Analyzing the previous solution along the $M$ directions will further reduce the amount of calculation. The total complexity of the algorithm is \begin{equation*}
O{\left( N \cdot \max_{0 \leq i \wedge i < N}{\left( N_i \right)} \cdot M \right)}
. \end{equation*}
For some data, the algorithm may produce an improper result. This happens when the introduction of a zero-length segment lowers the penalty.
Because the correct orientation is not known in advance, it is necessary to rotate polylines by different angles and take the solution with the lowest penalty \cite[see section 6]{ReconstructionOfOrthogonalPolygonalLines}.
Fig.~\ref{fig:ExampleOrthogonalBuildings} shows an example for the reconstruction of orthogonal buildings.
\begin{figure}\label{fig:ExampleOrthogonalBuildings}
\end{figure}
The reconstruction of buildings with $45 \degree$ sides is shown in Fig.~\ref{fig:Example45DegreeBuildings}.
\begin{figure}\label{fig:Example45DegreeBuildings}
\end{figure}
The main difference between the algorithm described in this section and that of \cite{ReconstructionOfOrthogonalPolygonalLines} is in the parameters. Specifying the tolerance is easier than specifying the penalty $\Delta$ for each additional segment.
\section{Conclusion}
This paper describes an approximation algorithm that finds a polyline with the minimum number of vertices while satisfying tolerance restriction. The solution is optimal with the following limitations: \begin{itemize}
\item The vertices of the compressed polyline are limited to considered locations (section~\ref{sec:Descretization}).
\item The test that the vertex of the compressed polyline is located between some vertices of the source polyline is approximate due to the snapping of the breaking point (section~\ref{sec:Optimization}).
\item The tests for end points (section~\ref{sec:TestingSegmentEndPoints}) and zigzags are approximate (section~\ref{sec:TestingZigZag}). \end{itemize}
The performance of the algorithm can be greatly improved if the number of considered locations is decreased without losing quality. This requires further research.
\newcommand{\doi}[1]{\textsc{doi}: \href{http://dx.doi.org/#1}{\nolinkurl{#1}}}
\begingroup \raggedright
\endgroup
\end{document}
\begin{document}
\title{\bf Zero sets of Lie algebras of analytic vector fields on real
and complex 2-dimensional manifolds} \author{{\bf Morris
W. Hirsch}\thanks{I thank Paul Chernoff, David Eisenbud, Robert
Gompf, Henry King, Robion Kirby, Frank Raymond, Helena Reis and
Joel Robbin for helpful discussions.} \\ Mathematics
Department\\ University of Wisconsin at Madison\\ University of
California at Berkeley}
\maketitle
\begin{abstract}
Let $M$ denote a real or complex analytic manifold with empty boundary, having dimension $2$ over the ground field $\ensuremath{{\mathbb F}} =\ensuremath{{\mathbb R}}$ or $\ensuremath{{\mathbb C}}$. Let $Y, X$ denote analytic vector fields on $M$. Say that $Y$ {\em tracks} $X$ provided $[Y,X] = fX$ with $f\colon\thinspace M\to \ensuremath{{\mathbb F}}$ continuous. Let $K$ be a compact component of the zero set $\Z X$ whose Poincar\'e-Hopf index is nonzero.
\noindent {\bf Theorem.} If $Y$ tracks $X$ then $\Z Y \cap K\ne\varnothing$.
\noindent {\bf Theorem.} Let $\mcal G$ be a Lie algebra of analytic vector fields that track $X$, with $\mcal G$ finite-dimensional and supersolvable when $M$ is real. Then $\bigcap_{Y\in \mcal G}\Z Y$ meets $K$.
\end{abstract}
\tableofcontents
\section{Introduction} \mylabel{sec:intro}
Let $M$ denote a manifold with empty boundary $\ensuremath{\partial} M$. A fundamental
issue in Dynamical Systems is deciding whether a vector field has a
zero.
When $M$ is compact with Euler characteristic $\chi (M)\ne 0$, a positive answer is given by the celebrated {\sc Poincar\'e-Hopf} Theorem.
Determining whether two or more vector fields have a common zero is much more challenging. For two commuting analytic vector fields on a possibly noncompact manifold of dimension $\le 4$, Theorem \ref{th:bonatti}, due to {\sc C. Bonatti}, not only gives conditions sufficient for common zeros, it also locates specific compact sets. Our results establish common zeros for other sets of vector fields. The basic results are Theorems \ref{th:MAIN} and \ref{th:liealg}. Theorem \ref{th:liegroup} is an application to fixed points of transformation groups.
Denote the zero set of a vector field $X$ on $M$ by $\Z X$.
A {\em block} of zeros for $X$, or an {\em $X$-block}, is a compact
set $K\subset \Z X$ having an open neighborhood $U\subset M$ such
that $\ov U$ is compact and $ \Z X \cap \ov U=K$. Such a set $U$ is
{\em isolating} for $X$ and for $(X, K)$.
The {\em index} of the $X$-block $K$ is the integer $\msf i_K
(X):=\msf i (X,U)$ defined as the Poincar\'e-Hopf index of any
sufficiently close approximation to $X$ having only finitely many
zeros in $U$ (Definitions \ref{th:defph}, \ref{th:defindex}).\footnote{
Equivalently: $\msf i (X, U)$ is the intersection number of $X|U$ with the zero section of the tangent bundle ({\sc Bonatti} \cite{Bonatti92}).}
This number is independent of $U$, and is stable under perturbations of $X$ (Theorems \ref{th:stability}, \ref{th:approx}).
When $X$ is $C^1$ and generates the local flow $\phi$, for sufficiently small $t >0$ the index $\msf i (X, U)$ equals the fixed-point index $I (\phi_t|U)$ defined by {\sc Dold} \cite {Dold72}.
An $X$-block $K$ is {\em essential} if $\msf i_K (X)\ne 0$. This implies $\Z X\cap K\ne\varnothing$ because every isolating neighborhood of $K$ meets $\Z X$.
\begin{theorem} [{\sc Poincar\'e-Hopf}] \mylabel{th:PH}
If $M$ is compact, then $\msf i (X, M) =\chi (M)$ for all continuous vector fields $X$ on $M$.
\end{theorem}
For calculations of the index in more general settings see {\sc Morse
\cite{Morse29}, Pugh \cite{Pugh68}, Gottlieb \cite{Gottlieb86}, Jubin
\cite{Jubin09}}.
This paper was inspired by a remarkable result of C. Bonatti, which does not require compactness of $M$:\footnote{
``The demonstration of this result involves a beautiful and quite
difficult local study of the set of zeros of $X$, as an analytic
$Y$-invariant set.''\ ---{\sc P.\ Molino} \cite{Molino93}}
\begin{theorem} [{\sc Bonatti} \cite{Bonatti92}] \mylabel{th:bonatti}
Assume $M$ is a real manifold of dimension $\le 4$ and $X, Y$ are analytic vector fields on $M$ such that $[X, Y]= 0$. Then $\Z Y$ meets every essential $X$-block.\footnote{
In \cite{Bonatti92} this is stated for $\dim (M) = 3$ or $4$. If $\dim (M) =2$ the same conclusion is obtained by applying the 3-dimensional case to the vector fields $X \times t\pd t, \ Y\times t\pd {t}$\, on $M\times\ensuremath{{\mathbb R}}$.} \end{theorem}
\subsection{Statement of results} \mylabel{sec:results}
Theorems \ref{th:MAIN} and \ref{th:liealg} extend Bonatti's Theorem to certain pairs of noncommuting vector fields, and Lie algebras of vector fields.
$M$ always denotes a real or complex manifold without boundary, over the corresponding ground field $\ensuremath{{\mathbb F}}=\ensuremath{{\mathbb R}}$ (the real numbers) or $\ensuremath{{\mathbb C}}$ (the complex numbers).
In the main results $M$ has dimension $\dim_\ensuremath{{\mathbb F}} M=2$ over $\ensuremath{{\mathbb F}}$.
$\ensuremath{{\mcal V}} (M)$ is the vector space over $\ensuremath{{\mathbb F}}$ of continuous vector fields on $M$, with the compact-open topology. The subspace of $C^r$ vector fields is $\mcal V^r (M)$, where $r$ is a positive integer, $\infty$ (meaning $C^k$ for all finite $k$), or $\omega$ (analytic over $\ensuremath{{\mathbb F}}$). When $M$ is complex, $\mcal V^r (M)$ and $\mcal V^\omega (M)$ are identical as real vector spaces, and $\mcal V^\omega (M)$ is a complex Lie algebra. A linear subspace of $\mcal V^r (M)$ is called a Lie algebra if it is closed under Lie brackets. The set of common zeros of a set $\ensuremath{\mathfrak {s}}\subset \mcal V (M)$ is $\Z\ensuremath{\mathfrak {s}} :=\bigcap_{X\in\ensuremath{\mathfrak {s}}}\Z X$.
$X$ and $Y$ denote vector fields on $M$.
$Y$ {\em tracks $X$} provided $Y, X \in \ensuremath{{\mcal V}}^1 (M)$ and $[Y, X]=fX$ with $f\colon\thinspace M\to \ensuremath{{\mathbb F}}$ continuous. When $\ensuremath{{\mathbb F}}=\ensuremath{{\mathbb R}}$ this implies the local flow generated by $Y$ locally permutes orbits of $\Phi^X$ (Proposition \ref{th:orbits}). We say that a set of vector fields tracks $X$ when each of its elements tracks $X$.
\begin{example*}[\bf A]
Suppose $\mcal G$ is a Lie algebra of $C^1$ vector fields on $M$. If $X\in \mcal G$ spans an ideal then $ \mcal G$ tracks $X$, and the converse holds provided $\mcal G$ is finite dimensional. \end{example*}
Here are the main results. The manifold $M$ is real or complex with $\dim_\ensuremath{{\mathbb F}} M=2$.
\begin{theorem} \mylabel{th:MAIN}
Assume $X, Y\in \ensuremath{{\mcal V}}^\omega (M)$, $Y$ tracks $X$, and $K\subset \Z X$ is an essential $X$-block. Then $\Z Y\cap K\ne\varnothing$. \end{theorem}
A Lie algebra $\mcal G$ is {\em supersolvable} if it is real and faithfully represented by upper triangular matrices. If $\mcal G$ is the Lie algebra of Lie group $G$ we call $G$ supersolvable.
\begin{theorem} \mylabel{th:liealg}
Assume $X\in\ensuremath{{\mcal V}}^\omega (M)$, $K$ is an essential $X$-block, and
$\mcal G\subset \ensuremath{{\mcal V}}^\omega (M)$ is a Lie algebra that tracks $X$. Let one of the following conditions hold: \begin{description}
\item[(a)] $M$ is complex,
\item[(b)] $M$ is real and $\mcal G$ is supersolvable. \end{description} Then $\Z{\mcal G}\cap K\ne\varnothing$. \end{theorem}
\begin{example*}[\bf B]
Let $M, \mcal G, X$ be as in Theorem \ref{th:liealg} and assume the local flow $\Phi^X$ has a compact global attractor. It can be shown that $M$ is an isolating neighborhood for the $X$-block $\Z X$, $M$ has finitely generated homology, and $\msf i (X, M)=\chi (M)$.
Theorem \ref{th:liealg} thus implies:
\begin{itemize} \item {\em If $\chi (M)\ne 0$ then $\Z{\mcal G}\ne\varnothing$.} \end{itemize} For instance: \begin{itemize} \item {\em Let $\mcal G$ be a Lie algebra of holomorphic vector fields
on $\C 2$. If $X\in \mcal G$ spans an ideal and $\Phi^X$ has a global attractor,
then $\Z{\mcal G}\ne\varnothing$. } \end{itemize} \end{example*}
Now let $G$ denote a connected Lie group over the same ground field as $M$.
An {\em analytic action} of $G$ on $M$ is a homomorphism $\alpha\colon\thinspace g\mapsto g^\alpha$ from $G$ to the group of analytic diffeomorphisms of $M$; this action is also denoted by $(\alpha, G, M)$. The action is {\em effective} if its kernel is trivial. Its {\em fixed point set} is \[\Fix \alpha:=\{p\in M\colon\thinspace g^\alpha (p)=p, \quad (g\in G)\}.\]
\begin{theorem} \mylabel{th:liegroup}
Assume: \begin{itemize} \item $M$ is a compact complex 2-manifold and
$\chi (M)\ne 0$,
\item $(\alpha, G, M)$ is an effective analytic action,
\item $G$ contains a 1-dimensional normal subgroup. \end{itemize} Then $\Fix \alpha \ne\varnothing$.
\end{theorem}
\begin{proof} This follows from Theorem \ref{th:liealg}(a) (see Section \ref{sec:proofs}). \end{proof}
The analogous result for analytic actions of supersolvable Lie groups on real surfaces is due to {\sc Hirsch
\& Weinstein} \cite{HW2000}. But {\sc Lima} \cite {Lima64} and {\sc Plante} \cite{Plante86} have shown that every compact surface supports
a continuous fixed-point free
action by the 2-dimensional group whose Lie algebra has the
structure $[Y, X]= X$. Whether $X$ and $Y$ can be smooth is unknown.
\subsection{Terminology}
The closure of subset $\ensuremath{\Lambda}$ of a topological space $S$ is denoted by $\ov \ensuremath{\Lambda}$, the frontier by $\fr \ensuremath{\Lambda}:=\ov \ensuremath{\Lambda} \cap \ov{S \sm\ensuremath{\Lambda}}$, and the interior by $\ensuremath{\operatorname{\mathsf{Int}}} (\ensuremath{\Lambda})$.
Maps are continuous unless otherwise characterized. A map is {\em
null homotopic} if it is homotopic to a constant map.
If $\xi$ is a vector in $\R n$, or a tangent vector to a Riemannian manifold, its norm is $\|\xi\|$. The unit sphere in $\R {n+1}$ is $\S n$.
Let $\dim_\ensuremath{{\mathbb F}} (M)=n$. The tangent bundle $\tau (M)$ is an $\F n$-bundle, meaning a fibre bundle over $M$ with total space $T(M)$, projection $\pi_M\colon\thinspace T(M)\to M$, standard fibre $\F n$, and structure group $GL (n,\ensuremath{{\mathbb F}})$ (see Steenrod \cite{Steenrod51}). The fibre over $p\in M$ is the tangent space to $M$ at $p$, namely $T_p(M):=\pi_M^{-1}(p)$. When $M$ is an open set in $\F n$ we identify $T_p(M)$ with $\F n$, $\tau (M)$ with the trivial vector bundle $M\times \F n \to M$, and $X\in\ensuremath{{\mcal V}} (M)$ with the map \ $M\to \F n$, \,$p\mapsto X_p$.
Assume $X\in \ensuremath{{\mcal V}}^r (M)$ and $\ensuremath{\partial} M=\varnothing$. The local flow on $M$ whose trajectory through $p$ is the $X$-trajectory of $p$ is denoted by $\Phi^X:=\big\{\Phi^X_t\big\}_{t\in\ensuremath{{\mathbb R}}}$, referred to informally as the {\em
$X$-flow}. The maps $\Phi^X_t$ are $C^r$ diffeomorphisms between open subsets of $M$.
An {\em $X$-curve} is the image of an integral
curve $t\mapsto y(t)$ of $X$. This is either a singleton and hence
a zero of $X$, or an
interval. The maximal $X$-curve
through $p$ is the {\em orbit} of $p$ under $X$. A set $S\subset M$ is {\em $X$-invariant} if it contains the orbits under $X$ of its points. When this holds for all $X$ in a set $\mcal H \subset \ensuremath{{\mcal V}}^1 (M)$ then $S$ is {\em $\mcal H$-invariant}.
If $X, Y$ are vector fields on $M$, the alternating tensor field $X\wedge Y\in \ensuremath{\Lambda}^2 (M)$ may be denoted by $X\wedge_\ensuremath{{\mathbb F}} Y$ in order to emphasize the ground field. $X\wedge_\ensuremath{{\mathbb F}} Y=0$ means $X_p$ and $Y_p$ are linearly dependent over $\ensuremath{{\mathbb F}}$ at all $p\in M$.
\section{Consequences of tracking} \mylabel{sec:tracking}
Throughout this section we assume:
\begin{itemize}
\item {\em $M$ is a real or complex manifold, \,$\dim_\ensuremath{{\mathbb F}} (M)=n\ge 1$,
\ $\ensuremath{\partial} M=\varnothing$,}
\item {\em $X, Y \in \ensuremath{{\mcal V}}^1 (M)$,}
\end{itemize} where $\ensuremath{{\mathbb F}}$ denotes the ground field $\ensuremath{{\mathbb R}}$ or $\ensuremath{{\mathbb C}}$.
\begin{definition*} \mylabel{th:deftracks}
$Y$ {\em tracks $X$} provided $Y, X \in \ensuremath{{\mcal V}}^1 (M)$ and $[Y, X]=fX$
with $f\colon\thinspace M\to \ensuremath{{\mathbb F}}$ continuous.
\end{definition*}
\begin{proposition} \mylabel{th:backprop}
Suppose $Z \in \ensuremath{{\mcal V}}^1 (M)$. If $Y$ and $Z$ track $X$ and $[Y, Z]$ is $C^1$, then $[Y, Z]$ tracks $X$. \end{proposition}
\begin{proof} Follows from the Jacobi identity. \end{proof}
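In detail, suppose $[Y, X]=fX$ and $[Z, X]=gX$. Assuming for simplicity that $f$ and $g$ are differentiable, the Jacobi identity and the Leibniz rule give
\[
[[Y, Z], X] = [Y, [Z, X]] - [Z, [Y, X]]
= \big((Yg)X + gfX\big) - \big((Zf)X + fgX\big)
= (Yg - Zf)X,
\]
where $Yg$ denotes the derivative of $g$ along $Y$; so $[Y, Z]$ tracks $X$ with continuous tracking function $Yg - Zf$.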
\begin{definition} \mylabel{th:defdepend}
The {\em dependency set} of $X$ and $Y$ (over the ground field) is \[
\msf{Dep}_\ensuremath{{\mathbb F}} (X,Y):=\big\{p\in M\colon\thinspace X_p\wedge_\ensuremath{{\mathbb F}} Y_p=0\big\}. \] \end{definition}
\begin{proposition} \mylabel{th:ideal}
If $Y$ tracks $X$ then $\Z X$ and $\msf{Dep}_\ensuremath{{\mathbb F}} (X, Y)$ are $X$- and $Y$-invariant.
\end{proposition}
\begin{proof} As the statement is local, we assume $M$ is an open set in $\F n$.
{\em Invariance of $\Z X$: } Evidently $\Z X$ is $X$-invariant and $\Z X\cap \Z Y$ is $Y$-invariant. To show that $\Z X\sm\Z Y$ is $Y$-invariant, fix $p\in \Z X\sm\Z Y$. Let $(y_1,\dots,y_n)$ be flowbox coordinates in a neighborhood $V_p$
of $p$, representing $Y|V_p$ as $\pd{y_1}$ in a convex open subset of $\R n$, and the $Y$-trajectory of $p$ as
\[ t\mapsto y(t):=p + te_1\]
where $e_1,\dots, e_n \in\F n$ are the standard basis vectors.
Let $J_p\subset \ensuremath{{\mathbb R}}$ be an open interval around $0$ such that
\[y(t)\in V_p, \qquad (t\in J_p).\]
Then
\begin{equation} \label{eq:tpys}
\ode{~}t\big(T\Phi^Y_t (X_p)\big) = [Y, X]_{y(t)}, \qquad (t\in J_p). \end{equation}
Since $Y$ tracks $X$, there is a continuous $\ensuremath{{\mathbb F}}$-valued function $t\mapsto g(t)$ such that in the flowbox coordinates for $Y$, the vector-valued function $t\mapsto X_{y(t)}$ satisfies the linear initial value problem
\begin{equation} \label{eq:tphi1}
\ode{~}t X_{y(t)} = g(t) X_{y(t)}, \qquad X_p=0. \end{equation}
Therefore $X_{y(t)}$ vanishes identically in $t$.
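Explicitly, the unique solution of the linear initial value problem (\ref{eq:tphi1}) is
\[
X_{y(t)} = \exp\Big(\int_0^t g(s)\, ds\Big)\, X_p, \qquad (t\in J_p),
\]
which is identically zero because $X_p=0$.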
{\em Invariance of $\msf{Dep}_\ensuremath{{\mathbb F}}(X, Y)$: } We need to prove: for all $t\in J_p$,
\begin{equation} \label{eq:tphi2}
X_p\wedge_\ensuremath{{\mathbb F}} Y_p = 0 \implies T\Phi^Y_t (X_p)\wedge_\ensuremath{{\mathbb F}} X_{\Phi^Y_t(p)} =0.
\end{equation}
Assume $Y_p\ne 0$ and fix flowbox coordinates for $Y$ at $p$. It suffices to verify (\ref{eq:tphi2}) for all $t\in J_p$. Equations (\ref{eq:tpys}) and (\ref{eq:tphi1}) imply \[ \begin{split} \ode{~}t \big(T\Phi^Y_t (X_p)\wedge_\ensuremath{{\mathbb F}} X_{y(t)}\big)
&= \big([Y, X]\wedge_\ensuremath{{\mathbb F}} X\big)_{y(t)} +
T\Phi^Y_t (X_p)\wedge_\ensuremath{{\mathbb F}} g(t) X_{y(t)}\\
&= 0 \ \text{ identically in $t$.} \end{split} \] As (\ref{eq:tphi2}) holds for $t=0$, the proof is complete. \end{proof}
The proof of the following result is similar and left to the reader:
\begin{proposition} \mylabel{th:orbits}
If $M$ is real and $Y$ tracks $X$, each map $\Phi^Y_t$ sends orbits of
$X|\mcal D (\Phi^Y_t)$ (the domain of $\Phi^Y_t$) to orbits of $X|\mcal R (\Phi^Y_t)$ (its range).\qed \end{proposition}
When $M$ is complex
there is a similar result for the holomorphic local actions of $\ensuremath{{\mathbb C}}$ on $M$ generated by $X$ and $Y$.
\section{The index function} \mylabel{sec:indices}
In this section $M$ is a real manifold of dimension $n\ge 1$ with empty boundary. Assume $X\in \ensuremath{{\mcal V}} (M)$, $K$ is an $X$-block, and the precompact open set $U\subset M$ is isolating for $(X, K)$.
\begin{definition} \mylabel{th:defdeform}
A {\em deformation} from $X$ to $X'$ is a path in $\ensuremath{{\mcal V}} (M)$ of the form \[
t\mapsto X^t, \quad X^0:=X, \quad X^1:= X', \qquad (0\le t \le 1). \] The deformation is {\em nonsingular} in a set $S\subset M$ provided $\Z{X^t}\cap S=\varnothing$ for all $t\in[0,1]$. \end{definition}
\begin{proposition} \mylabel{th:convex}
$X$ has arbitrarily small convex open neighborhoods $\mcal B\subset\ensuremath{{\mcal V}}
(M)$ such that for all $Y, Z\in \mcal B$:
\begin{description}
\item[(i)] $U$ is isolating for $Y$,
\item[(ii)] the deformation $Y^t:= (1-t)Y + tZ,\ (0\le t\le 1)$ is
nonsingular in $\fr U$.
\end{description} These conditions imply:
\begin{description}
\item[(iii)] the set of $Y\in \mcal B$ such that $\Z Y\cap U$ is finite contains a dense open subset of $\mcal B$. \end{description}
\end{proposition}
\begin{proof} (i) and (ii) follow from the definition of the compact-open
topology on $\ensuremath{{\mcal V}} (M)$. Standard approximation theory gives (iii). \end{proof}
\begin{definition} \mylabel{th:defph}
When $K$ is finite, the {\em Poincar\'e-Hopf index} of $X$ at $K$, and in $U$, is the integer $\msf i^{PH}_K(X)=\msf i^{PH} (X, U)$ defined as follows.
For each $p\in K$ choose an open set $W\subset U$ meeting $K$ only at $p$, such that $W$ is the domain of a $C^1$ chart \[\phi\colon\thinspace W\approx W' \subset \R n, \quad \phi(p)=0. \]
The transform of
$X$ by $\phi$ is \[X':=T\phi\circ X \circ \phi^{-1} \in \ensuremath{{\mcal V}} (W'). \] There is a unique map of pairs \[
F_p\colon\thinspace (W', 0) \to (\R n, 0) \] that expresses $X'$ by the formula
\[
X'_x=\big(x, F_p(x)\big) \in \{x\}\times \R n, \qquad (x\in W'). \]
Noting that $F_p^{-1} (0)=\{0\}$, we define $\msf i^{PH}_p(X)\in\ensuremath{{\mathbb Z}}$ as the degree of the map defined for any sufficiently small $\ensuremath{\epsilon} >0$ as \[
\S{n-1}\to\S{n-1},\quad u\mapsto \frac{F_p (\ensuremath{\epsilon} u)}{\|F_p (\ensuremath{\epsilon} u) \|}\,. \]
This degree is independent of $\ensuremath{\epsilon}$ and the chart $\phi$, by standard properties of the degree function. Therefore the integer
\[\textstyle
\msf i^{PH}_K(X)=\msf i^{PH} (X, U):= \begin{cases}
\sum_{p\in K} \msf i^{PH}_p(X) \, &\text{if} \,K\ne\varnothing,\\
0\, &\text{if} \,K =\varnothing. \end{cases} \] is well defined and depends only on $X$ and $K$. \end{definition}
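Two standard plane examples may help fix signs (here $n=2$, $K=\{0\}$, and $U$ is a small disk about the origin):
\[
X= x_1\pd{x_1}+x_2\pd{x_2} \implies \msf i^{PH}_0(X)=1, \qquad
X= x_1\pd{x_1}-x_2\pd{x_2} \implies \msf i^{PH}_0(X)=-1,
\]
since in the first case the map of $\S 1$ in Definition \ref{th:defph} is the identity, and in the second it is a reflection.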
\begin{proposition} \mylabel{th:xtfru}
Let $\{X^t\}$ be a deformation that is nonsingular in $\fr U$. If both $\Z{X^0}\cap U$ and $\Z{X^1}\cap U$ are finite, then \[
\msf i^{PH} (X, U) = \msf i^{PH} (X^1, U). \] \end{proposition}
\begin{proof} The proof is similar to that of a standard result on homotopy
invariance of intersection numbers in oriented manifolds (compare {\sc Hirsch} \cite[Theorem 5.2.1]{Hirsch76}). \end{proof}
\begin{definition} \mylabel{th:supp}
The {\em support} of a deformation $\{X^t\}$ is the closed set \[
\mathsf {supp}\{X^t\}:=\ov{\big\{p\in M\colon\thinspace X^t_p \ne X^0_p \ \text{for some } t\in [0,1] \big\}}. \] The deformation is {\em compactly supported in $S$} provided $ \mathsf {supp}\{X^t\}$ is a compact subset of $S$. \end{definition}
\begin{definition} \mylabel{th:defindex}
The {\em index of $X$ in $U$} is
\begin{equation} \label{eq:defindex}
\msf i (X, U):=\msf i^{PH} (X', U) \end{equation}
where $X'$ is any vector field on $M$ such that $\Z {X'}\cap U$ is finite and there is a deformation from $X$ to $X'$ compactly supported in $\ensuremath{\operatorname{\mathsf{Int}}}(U)$. This integer is well defined because the right hand side of Equation (\ref{eq:defindex}) depends only on $X$ and $U$, by Proposition \ref{th:xtfru}.
The notation $\msf i (X,U)$ tacitly assumes $U$ is isolating for $X$. \end{definition}
\begin{lemma} \mylabel{th:isol}
If $U$ and $ U_1$ are isolating for $(X, K)$ then $\msf i (X, U)=\msf i (X, U_1)$. \end{lemma}
\begin{proof} Let $W$ be isolating for $(X, K)$, with $\ov W\subset U_1 \cap U$.
It suffices to show that $\msf i (X, U)=\msf i (X, W)$, for this also implies $\msf i (X, U_1)=\msf i (X, W)$.
By definition, $\msf i (X, W)=\msf i^{PH}(X', W)$, where $X'$ is any vector field such that $\Z{X'}\cap W$ is finite and some deformation $\{X^t\}$ from $X$ to $X'$ is compactly supported in $W$. Since $\ov W\subset U$, this deformation is also compactly supported in $U$. Moreover $X'$ agrees with $X$ on $U\sm W$, where $X$ has no zeros, so $\Z{X'}\cap U=\Z{X'}\cap W$ is finite. Therefore $\msf i (X, U)=\msf i^{PH}(X', U)=\msf i^{PH}(X', W)=\msf i (X, W)$. \end{proof}
It follows that $\msf i (X, U)$ depends only on $X$ and $K$. The {\em
index of $X$ at $K$} is \[ \msf i_K (X):=\msf i (X, U). \]
It is easy to see that the index function enjoys the following additivity:
\begin{proposition} \mylabel{th:add}
Let $K_1, K_2$ be disjoint $X$-blocks, with isolating neighborhoods $U_1, U_2$ respectively. Then \[\begin{split} \msf i_{K_1\cup K_2} (X) & = \msf i_{ K_1} (X) + \msf i_{K_2} (X), \\ \msf i (X, U_1 \cup U_2) & = \msf i (X, U_1) + \msf i (X, U_2). \end{split} \] \end{proposition}
The following property is crucial:
\begin{theorem}[{\sc Stability}] \mylabel{th:stability}
Let $U\subset M$ be isolating for $X$.
\begin{description}
\item[(a)] If $\msf i (X, U)\ne 0$ then $\Z X\cap U\ne\varnothing$.
\item[(b)] If $Y$ is sufficiently close to $X$ then $U$ is isolating for $Y$ and $\msf i (Y, U)=\msf i (X, U)$.
\item[(c)] Let $\{X^t\}$ be a deformation of $X$ that is nonsingular
in $\fr U$. Then \[\msf i (X^t, U)=\msf i (X, U), \qquad (0\le t\le 1). \] \end{description} \end{theorem}
\begin{proof} If $\msf i (X, U)\ne 0$, Definition \ref{th:defindex} shows that $X$ is the limit of a convergent sequence $\{X^n\}$ in $\ensuremath{{\mcal V}} (M)$ such that $\Z {X^n}\cap U\ne\varnothing$. Passing to a subsequence and using compactness of $\ov U$ shows that $\Z X\cap \ov U\ne\varnothing$, and (a) follows because $\Z X\cap \fr U=\varnothing$. Parts (b) and (c) are implied by Propositions \ref{th:convex} and \ref{th:xtfru}. \end{proof}
\begin{proposition} \mylabel{th:lindep}
Assume $X, Y\in\ensuremath{{\mcal V}} (M)$ and $U\subset M$ is isolating for both $X$ and $Y$. For each component $U'$ of $U$ that meets $\Z X \cup \Z Y$, let one of the following conditions hold: \begin{description}
\item[(a)] $X_p \ne \ensuremath{\lambda} Y_p, \quad (p\in \fr {U'}, \, \ensuremath{\lambda} <0)$, \end{description} or \begin{description} \item[(b)] $X_p \ne \ensuremath{\lambda} Y_p, \quad (p\in \fr {U'}, \, \ensuremath{\lambda} >0)$.
\end{description} Then $\msf i (X, U)=\msf i (Y, U)$. \end{proposition}
\begin{proof} $U\cap \big(\Z X\cup\Z Y\big)$ is the compact set $\ov U \cap \big(\Z X \cup \Z Y \big)$. This implies only finitely
many components of $U$ meet $\Z X\cup\Z Y$.
The union $U_1$ of these components is isolating for $X$ and $Y$. The index function is additive over disjoint unions, and both $X$ and $Y$ have index zero in the open set $U \sm U_1$, which is disjoint from $\Z X\cup\Z Y$. Therefore
\[\begin{split}\msf i (X, U) &=\msf i (X, U_1),\\
\msf i (Y, U) &=\msf i (Y, U_1). \end{split} \]
Replacing $U$ by $U_1$, we assume $U$ has only finitely many components. As it suffices to prove $X$ and $Y$ have the same index in each
component of $U$, we also assume $U$ is connected.
Let $p\in \fr U$ be arbitrary. If (a) holds, consider the deformation \[
X^t:= (1-t)X + tY,\quad (0\le t\le 1). \]
If $t= 0$ or $1$ then $X^t_p\ne 0$ because $U$ is isolating for $X$
and $Y$, while if $0<t<1$ then $X^t_p\ne 0$ by (a). Therefore the
conclusion follows from the Stability Theorem \ref{th:stability}.
If (b) holds the same argument works for the deformation $(1-t)X - tY$. \end{proof}
\begin{proposition} \mylabel{th:wedge}
Assume $U$ is an isolating neighborhood for both $X$ and $Y$, whose closure $N$ is a $C^1$ submanifold such that
\begin{equation} \label{eq:xwry}
X_p\wedge_\ensuremath{{\mathbb R}} Y_p =0, \quad (p\in \ensuremath{\partial} N) \end{equation}
and one of the following conditions holds:
\begin{description}
\item[(a)] $M$ is even-dimensional,
\item[(b)] $M$ is odd-dimensional and $X$ and $Y$ are tangent to
$\ensuremath{\partial} N$.
\end{description}
Then $\msf i (X,U)=\msf i(Y, U)$. \end{proposition}
\begin{proof} By the Stability Theorem \ref{th:stability} it suffices to find a deformation from $X$ to $Y$ that is nonsingular on $\ensuremath{\partial} N$. As $\ensuremath{\partial} N$ is a subcomplex of a smooth triangulation of $M$ ({\sc Whitehead}
\cite{Whitehead40}, {\sc Munkres} \cite{Munkres63}), the Homotopy Extension Theorem ({\sc Steenrod} \cite [Th. 34.9] {Steenrod51}) shows that this deformation exists provided $X|\ensuremath{\partial} N$ and $Y|\ensuremath{\partial} N$ are connected by a homotopy of nonsingular sections of $T_{\ensuremath{\partial} N}M$. Such a homotopy exists in case (a)
because the antipodal and identity maps of $\R n\sm\{0\}$
are homotopic, and in case (b) because these maps of
$\R{n-1}\sm\{0\}$ are homotopic. \end{proof}
Fix $U$ and $N=\ov U$ as in Proposition \ref{th:wedge}, so that $\msf i (X, U)=\msf i (X, N\sm\ensuremath{\partial} N)$. An orientation of $N$ corresponds to a generator \[
\nu_N\in H_n (N,\ensuremath{\partial} N)\cong \ensuremath{{\mathbb Z}}. \] Let $\nu^N \in H^n (N,\ensuremath{\partial} N)$ be the dual generator.
Evaluating cocycles on cycles defines the canonical dual pairing (the Kronecker Index): \[
H^n (N, \ensuremath{\partial} N) \times H_n (N,\ensuremath{\partial}
N)\to \ensuremath{{\mathbb Z}}, \qquad (c, \ensuremath{\lambda})\mapsto c\cdot\ensuremath{\lambda}. \]
Let $c_{X,N}\in H^n(N, \ensuremath{\partial} N)$ be the obstruction to extending $X|\ensuremath{\partial} N$ to a
nonsingular vector field on $N$. Unwinding definitions proves:
\begin{proposition} \mylabel{th:obstruction}
If $N$ is oriented, $\msf i (X, U) = c_{X,N}\cdot \nu_N$. \qed \end{proposition} A similar result holds for nonorientable manifolds, using homology with coefficients twisted by the orientation sheaf.
\begin{theorem} \mylabel{th:approx}
If $X$ can be approximated by vector fields $X'$ with no zeros in $\ov U$ then $\msf i (X, U)=0$, and the converse holds provided $U$ is connected. \end{theorem}
\begin{proof} If the approximation is possible then the index vanishes
by the Stability Theorem \ref{th:stability}. To prove the
converse, fix a Riemannian metric on $M$ and $\ensuremath{\epsilon} >0$. There exists
an isolating neighborhood $U'$
of $K$ whose closure is a compact submanifold $N\subset U$ and
\[\|X_p\|<\ensuremath{\epsilon}, \quad (p\in N). \] Define
\[E_\ensuremath{\epsilon}:=\big\{v\in T(N)\colon\thinspace 0< \|v\| <\ensuremath{\epsilon}\big\}. \] This is the total space of a fibre bundle $\eta$ over $N$ that is fibre homotopically equivalent to the sphere bundle associated to the tangent bundle of $N$.
$X|\ensuremath{\partial} N$ extends to a section $X''\colon\thinspace N\to E_{\ensuremath{\epsilon}}$ of $\eta$, by Proposition \ref{th:obstruction}. Let $X'\in \ensuremath{{\mcal V}} (M)$ be the extension of $X''$ that agrees with $X$ outside $N$. Then $X'$ is an $\ensuremath{\epsilon}$-approximation to $X$ with no zeros in $\ov U$. \end{proof}
Examination of the proof, together with standard approximation theory, yields the following addendum to Theorem \ref{th:approx}:
\begin{corollary} \mylabel{th:approxcor}
Assume $\msf i (X, U)=0$.
\begin{description}
\item[(i)] If $X$ is analytic, the approximations in {\em Theorem
\ref{th:approx}} can be chosen to be analytic.
\item[(ii)] If $X$ is $C^r$ and $0\le r\le \infty$, the
approximations can be chosen to be $C^r$ and to agree with $X$ in
$M\sm U$. \qed
\end{description} \end{corollary}
\begin{definition} \mylabel{th:deftriv}
Let $\eta$ denote a real or complex vector bundle with total space $E$ and $n$-dimensional fibres. A {\em trivialization} of $\eta$ is a map $\psi\colon\thinspace E \to \F n$ that restricts to linear isomorphisms on fibres. \end{definition}
\begin{proposition} \mylabel{th:obcor}
Assume $N\subset U$ is a compact, connected real $n$-manifold whose interior is isolating for $(X, K)$. Let $\psi$ be a trivialization of $\tau_{\ensuremath{\partial} N}(M)$. Then $\msf i (X, U)$ equals the degree $\msf {deg}(F_X)$ of the map \[
F_X\colon\thinspace \ensuremath{\partial} N\to \S {n-1}, \quad p\mapsto \frac{\psi
(X_p)} {\|\psi (X_p)\|}. \]
\end{proposition}
\begin{proof} Follows from Proposition \ref{th:obstruction}, because
$\msf {deg}(F_X)=c_{X,N}\cdot \nu_N$ by obstruction theory. \end{proof}
This result will be used in the proof of Theorem \ref{th:MAIN}:
\begin{proposition} \mylabel{th:fue}
Let $W\subset M$ be a connected isolating neighborhood for $(X, K)$. Assume the following data: \begin{itemize} \item $\Phi\colon\thinspace T(W\sm K)\to \R n$ is a trivialization of
$\tau (W\sm K)$,
\item $E\subset \R {n\times n}$\, is a linear space of
matrices,\, $\dim (E) < n$,
\item $A\colon\thinspace W\sm K\to E$ is a map such that
\begin{equation} \label{eq:xpap}
\Phi(X_q) = A(q)\cdot\Phi(Y_q), \qquad (q\in W\sm K). \end{equation}
\end{itemize}
If $W$ is isolating for $Y\in \ensuremath{{\mcal V}} (M)$, then
\ $
\msf i (Y, W)=0\implies \msf i (X, W)=0$. \end{proposition}
\begin{proof} Choose a compact connected $n$-manifold $N\subset W$ whose interior is isolating for both $(X, K)$ and $Y$. Consider the maps \[\begin{split}
& F_X\colon\thinspace \ensuremath{\partial} N\to \S {n-1}, \quad p\mapsto \frac{\Phi
(X_p)} {\|\Phi (X_p)\|}, \\
& F_Y\colon\thinspace \ensuremath{\partial} N\to \S {n-1}, \quad p\mapsto \frac{\Phi
(Y_p)} {\|\Phi (Y_p)\|}. \end{split} \]
Proposition \ref{th:obcor} implies $\msf {deg} (F_Y)=0$, hence $F_Y$ is null homotopic, and it suffices to prove $F_X$ null
homotopic.
Degree theory shows that $F_Y$ is homotopic to a constant map \[\tilde F_Y\colon\thinspace\ensuremath{\partial} N\to\S{n-1}, \quad \tilde F_Y(p)= c\in\S{n-1}. \] By Equation (\ref{eq:xpap}) there exists $\ensuremath{\lambda}\colon\thinspace \ensuremath{\partial} N\to \ensuremath{{\mathbb R}}$ such that \[
F_X (p) = \ensuremath{\lambda} (p) A (p) F_Y(p), \quad \ensuremath{\lambda} (p) >0. \] Consequently $F_X$ is homotopic to \[
\tilde F_X\colon\thinspace \ensuremath{\partial} N\to \S{n-1}, \qquad \tilde F_X(p)= \ensuremath{\lambda} (p)A (p)c.
\]
The map \[
H\colon\thinspace E\sm \{0\} \to \S{n-1}, \quad B\mapsto\frac{B(c)}{\|B(c)\|} \] satisfies:
\begin{equation} \label{eq:fxpn}
\tilde F_X (\ensuremath{\partial} N) \subset H ( E\sm\{0\})\subset\S {n-1}. \end{equation}
Since the unit sphere $\ensuremath{\Sigma} \subset E\sm\{0\}$ is a deformation retract of $ E\sm \{0\}$, Equation (\ref{eq:fxpn}) shows that $\tilde F_X $ is homotopic to a map \[ G\colon\thinspace \ensuremath{\partial} N \to H (\ensuremath{\Sigma}) \subset\S {n-1}. \] Now $\dim (\ensuremath{\Sigma})= \dim (E)-1 \le n-2$. As $H$ is Lipschitz, $\dim (H (\ensuremath{\Sigma})) \le n-2$. Therefore $H (\ensuremath{\Sigma})$ is a proper subset of $\S{n-1}$ containing $G(\ensuremath{\partial} N)$, implying $G$ is null homotopic. The conclusion follows because the homotopic maps \[F_X, \tilde F_X, G\colon\thinspace \ensuremath{\partial} N\to\S{n-1} \]
have the same degree. \end{proof}
\begin{example*}[\bf C] \mylabel{th:exalg}
Let $\mcal A$ denote a finite dimensional algebra over $\ensuremath{{\mathbb R}}$ with multiplication \ $ (a,b)\mapsto a\bullet b$. Let $X, Y$ be vector fields on a connected open set $U\subset \mcal A$, whose respective zero sets $K, L$ are compact.
Assume there is a map $A\colon\thinspace U\to \mcal A$ such that \[
X_p=A(p)\bullet Y_p, \qquad (p\in U). \] Then\, $\msf i (Y, U) = 0 \implies \msf i (X, U)= 0$, by Proposition \ref{th:fue}. \end{example*}
\section{Proofs of Theorems \ref{th:MAIN}, \ref{th:liealg},
\ref{th:liegroup}} \mylabel{sec:proofs}
Henceforth $M$ denotes a connected real or complex 2-manifold with $\ensuremath{\partial} M=\varnothing$.
\subsection {Proof of Theorem \ref{th:MAIN}}
The hypotheses are:
\begin{itemize}
\item $X$ and $Y$ are analytic vector fields on $M$,
\item $Y$ tracks $X$,
\item $K$ is an essential $X$-block.
\end{itemize}
The conclusion is that $\Z Y\cap K\ne\varnothing$. It suffices to prove: \begin{quote}{\em
$\Z Y$ meets every neighborhood of $K$.} \end{quote}
Many sets $S\subset M$ associated to analytic vector fields, including
zero sets and dependency sets, are {\em analytic
spaces}: Each point of $S$ has an open neighborhood $V\subset M$ such that $S\cap V$ is the zero set of an analytic map $V\to \ensuremath{{\mathbb F}}^k$. This implies $S$ is covered by a locally finite family of disjoint analytic submanifolds.
The local topology of analytic
spaces is rather simple, owing to the
theorem of {\sc {\L}ojasiewicz} \cite {Lo64}:
\begin{theorem} [{\sc Triangulation}] \mylabel{th:triang}
If $S$ is a locally finite collection of closed analytic spaces in $M$, there is a triangulation of $M$ such that each element of $S$ is a subcomplex. \end{theorem}
We justify three simplifying assumptions by showing that if any one
of them is violated the conclusion of Theorem \ref{th:MAIN} holds:
\begin{description}
\item [(A1)] {\em $K$ is connected, and $\chi (K)=0$.}
$K$ is compact and triangulable and hence has only finitely many components. As $K$ is an essential $X$-block, so is some component (Proposition \ref{th:add}), and we can assume $K$ is that component. If $\chi (K)\ne 0$, the flow induced by $Y$ on the triangulable space $K$ fixes a point $p\in \Z Y\cap K$ by Lefschetz's Fixed Point Theorem ({\sc Lefschetz} \cite {Lefschetz37}, {\sc Spanier} \cite{Spanier66}, {\sc Dold} \cite {Dold72}). This justifies (A1).
Note that (A1) implies $K$ has arbitrarily small connected neighborhoods $U$ that are isolating for $(X,K)$.
\item [(A2)] {\em $\dim_\ensuremath{{\mathbb F}} (K)=1$ and $K$ is an analytic submanifold.}
If $\dim_\ensuremath{{\mathbb F}} (K)=0$ then $K$ is a singleton by (A1) and the conclusion of the theorem is obvious.
If $\dim_\ensuremath{{\mathbb F}}(K)=2$ then $K=M$ because $X$ is analytic, $M$ is connected, and both are 2-dimensional. Therefore $\chi (M) \ne 0$ and the Poincar\'e-Hopf Theorem \ref{th:PH} implies $\Z Y\cap K =\Z Y\ne\varnothing$.
Let $\dim (K)=1$. Suppose $K$ is not an analytic submanifold. Its singular set is nonempty, finite and $Y$-invariant, and thus contained in $\Z Y\cap K$.
\item [(A3)] {\em $U$ is a connected isolating neighborhood for $(X,
K)$ and $\Z Y \cap \ov U\subset K$.}
If no such $U$ exists, (A1) implies there is a nested sequence
$\{U_j\}$ of connected isolating neighborhoods for
$(X, K)$ whose intersection is $K$, and each $U_j$
contains a $p_j\in \Z Y\cap\ov {U_j}$. A subsequence of $\{p_j\}$
tends to a point of $\Z Y \cap K$ by compactness of $K$.
Note that (A3) implies $U$ is isolating for $Y$.
\end{description}
Henceforth we assume (A1), (A2) and (A3).
It suffices to prove
\begin{equation} \label{eq:iyu0}
\msf i (Y, U)\ne 0, \end{equation}
because then (A3) implies $\Z Y\cap U$ meets $K$.
Both $K$ and the dependency set $\msf D:=\msf{Dep}_\ensuremath{{\mathbb F}} (X, Y)$ (Definition \ref{th:defdepend}) are $Y$-invariant analytic spaces (Proposition \ref{th:ideal}), and (A2) implies $\dim_\ensuremath{{\mathbb F}} (\msf D) = 1$ or $2$.
Because $\dim_\ensuremath{{\mathbb F}}(K)=1$ by (A2), one of the following conditions is satisfied:
\begin{description}
\item[(B1)] {\em $K$ is a component of $\msf D$},
\item[(B2)] {\em $\dim_\ensuremath{{\mathbb F}}(\msf D)=1$ and $K$ is not a component of $\msf D$},
\item[(B3)] {\em $\dim_\ensuremath{{\mathbb F}}(\msf D)=2$}.
\end{description}
{\em Assume} (B1): Choose the isolating neighborhood $U$ so small that $\fr U \cap \msf D=\varnothing$. Proposition \ref{th:lindep} implies $\msf i (Y, U)=\msf i (X, U) \ne 0$, yielding (\ref{eq:iyu0}).
{\em Assume} (B2): Because $\msf D$ and $K$ are 1-dimensional and $K\subset \msf D$, the frontier in $K$ of $K\cap \big(\ov{\msf D\sm K}\big)$ is $Y$-invariant and $0$-dimensional, hence a nonempty subset of $\Z Y\cap K$.
{\em Assume} (B3): In this case $\msf D=M$ because $X$ and $Y$ are analytic and $M$ is connected, hence $X\wedge_\ensuremath{{\mathbb F}} Y=0$.
If $M$ is real, Proposition \ref{th:wedge}(a) implies \[
\msf i (Y, U)=\msf i (X, U) \ne 0, \] whence $\Z Y\cap U\ne\varnothing$ and $\Z Y\cap K\ne\varnothing$ by (A3).
This completes the proof of Theorem \ref{th:MAIN} for real $M$.
Henceforth we assume $M$ is complex. Therefore
(A1) and (A2) imply \begin{description}
\item[(C1)] {\em $K$ is a compact connected Riemann surface of genus $1$, holomorphically embedded in $U$,}
\item[(C2)] {\em The tangent bundle $\tau (K)$ is a holomorphically trivial
complex line bundle.} \end{description}
Note that (B3) implies $ X_p$ and $Y_p$ are linearly dependent over $\ensuremath{{\mathbb C}}$ at all $p\in M$, because $X$ and $Y$ are analytic and $M$ is connected. Together with (A3) this implies:
\begin{description} \item[(C3)] {\em There is an open neighborhood $W\subset U$ of $K$ and a
holomorphic map $f\colon\thinspace W\to\ensuremath{{\mathbb C}}$ satisfying:}
\[p\in W \implies X_p =f(p)Y_p, \qquad f^{-1} (0)= K. \] \end{description}
Since $K$ is a compact, connected, complex submanifold of $M$ having codimension $1$, it can be viewed as a divisor of the complex analytic variety $M$ ({\sc Griffiths \& Harris} \cite{GrifHarris78}). This divisor determines a holomorphic line bundle $[K]$ over $M$, canonically associated to the pair $(M, K)$.
\begin{proposition} \mylabel{th:div}
{~} \begin{description}
\item[(i)] The restriction of $[K]$ to the
submanifold $K$ is holomorphically isomorphic to the algebraic normal bundle $\nu (K, M):=\tau_K(M)/\tau (K)$ of $K$.
\item[(ii)] $[K]$ is holomorphically trivial.
\item[(iii)] $\nu(K, M)$ is holomorphically trivial.
\end{description} \end{proposition}
\begin{proof}
Working through the definition of $[K]$ in
\cite{GrifHarris78} demonstrates (i). Part (ii) follows from (C3)
and the italicized statement on \cite[page 134]{GrifHarris78}, and (ii)
implies (iii).\footnote{
An elegant explanation was kindly supplied by {\sc D. Eisenbud} \cite{Eisenbud15}:
The ideal defining $[K]$ is the dual $\mcal K^*$ of the sheaf $\mcal K$ of ideals of the analytic space $K$.
Because $\mcal K$ is generated by the single function $f$ it is a product sheaf, and so also is $\mcal K^*$. Therefore
$[K]$ is a holomorphically trivial line bundle, as is its restriction to $K$, which is $\nu(K, M)$.} \end{proof}
From (C1), (C2) and Proposition \ref{th:div}(iii) we see that the complex vector bundle $\tau_K(M)\cong \tau (K) \oplus \nu (K, M)$ is holomorphically trivial.
As $K$ is triangulable we can choose $W$ in (C3) so that it admits
$K$ as a deformation retract. Therefore: \begin{description} \item[(C4)] {\em $\tau (W)$ is a trivial complex vector bundle} \end{description} by the Homotopy Extension Theorem ({\sc Steenrod}
\cite[Thm. 34.9] {Steenrod51}, {\sc Hirsch} \cite [Chap. 4,
Thm. 1.5] {Hirsch76}).
Define \[
\theta\colon\thinspace\ensuremath{{\mathbb C}}\to\R {2\times 2}, \quad a+b\sqrt{-1} \mapsto
\, \left[\begin{smallmatrix}
a & -b \\ b & \, a
\end{smallmatrix}\right], \qquad (a, b\in \ensuremath{{\mathbb R}}). \] Let \[\Theta\colon\thinspace \C {2\times 2} \to \R {4 \times 4} \] be the $\ensuremath{{\mathbb R}}$-linear isomorphism that replaces each matrix entry \,$z$\,
by the $2\times 2$ block $\theta(z)$.
Define \[
H\colon\thinspace \ensuremath{{\mathbb C}} \to \R {4\times 4}, \quad a+b\sqrt{-1} \mapsto \left[\begin{smallmatrix}
a & -b & 0 & 0 \\
b & \, a & 0 & 0 \\
0 & 0 & a & -b\\
0 & 0 & b & \, a
\end{smallmatrix}\right], \quad a, b \in \ensuremath{{\mathbb R}}. \] Note that $E:=H (\ensuremath{{\mathbb C}})$ is a 2-dimensional linear subspace of $\R {4\times 4}$.
Let \[\Psi\colon\thinspace T (W)\to\C 2 \] be a trivialization of the complex vector bundle $\tau (W)$ (Definition \ref{th:deftriv}). The real vector bundle $\tau (W^\ensuremath{{\mathbb R}})$ has the trivialization \[
\Phi\colon\thinspace T (W)\to \R 4, \] obtained from $\Psi$ by replacing each complex coordinate $a+b\sqrt{-1}$ with the pair $(a, b)\in\R 2$.
Let $f\colon\thinspace W \to \ensuremath{{\mathbb C}}$ be as in (C3) and set
$ A:= H\circ f\colon\thinspace W \to E$. Then \[
\Phi (X_q)=A(q)\cdot\Phi (Y_q), \qquad (q\in W). \] Since $\msf i (X,W)\ne 0$, Proposition \ref{th:fue} implies $\msf i (Y, W) =\msf i (X, W)\ne 0$. Therefore (\ref{eq:iyu0}) holds, completing the proof of Theorem \ref{th:MAIN}.
\subsection{Proof of Theorem \ref{th:liealg} } \mylabel{sec:liealg}
Recall the hypotheses: \begin{itemize} \item $M$ is a connected real or complex 2-manifold with empty boundary. \item $\mcal G\subset \ensuremath{{\mcal V}}^\omega (M)$ is a Lie algebra over the ground field $\ensuremath{{\mathbb F}}$
that tracks $X\in \ensuremath{{\mcal V}}^\omega (M)$. \item If $M$ is real, $\mcal G$ is supersolvable. \item $K$ is an essential $X$-block. \end{itemize} To be proved: $\Z{\mcal G}\cap K\ne\varnothing$.
Consider first the case that $M$ is complex. We can assume:
\begin{description}
\item[(A1$'$)] {\em $K$ is connected and $\chi (K)=0$.}
For otherwise the argument used above to justify (A1) shows that there is a point of $K$ fixed by the local flow of every $Y\in \ensuremath{{\mcal V}}^1 (M)$ that tracks $X$.
\end{description}
\begin{description}
\item [(A2$'$)] {\em $\dim_\ensuremath{{\mathbb F}} (K)=1$, and $K$ is an analytic submanifold.}
If $\dim_\ensuremath{{\mathbb F}} (K)=0$ then $K$ is a singleton by (A1$'$) and the conclusion of the theorem is obvious. If $\dim_\ensuremath{{\mathbb F}}(K)=2$ then $K=M$ because $X$ is analytic and $M$ is connected. But then $X=0$, contradicting $\msf i_K (X)\ne 0$.
Thus we can assume $\dim (K)=1$. If $K$ is not an analytic submanifold, its singular set is nonempty and $\mcal G$-invariant; being $0$-dimensional, it is finite, hence contained in $\Z {\mcal G}$. \end{description}
(A1$'$) and (A2$'$) imply:
\begin{description}
\item[(C1$'$)] {\em $K$ is a compact connected Riemann surface of genus $1$, holomorphically embedded in $M$.} \end{description}
As every $Y\in \mcal G$ tracks $X$, $K$ is $Y$-invariant (Proposition
\ref{th:ideal}). Therefore $Y|K$ is a holomorphic vector field on $K$, and $\Z Y \cap K\ne\varnothing$ (Theorem \ref{th:MAIN}). Since a nontrivial holomorphic vector field on a compact Riemann surface of genus $1$ has no zeros, $K\subset \Z Y$ for all $Y\in \mcal G$. This completes the proof of Theorem \ref{th:liealg} for complex manifolds.
Now assume: {\em $M$ is real and $\mcal G$ is supersolvable.}
Let $\dim\,\mcal G=d$, $1\le d <\infty$. Finite dimensionality and the assumption that $\mcal G$ tracks $X$ imply that $X$ spans a $1$-dimensional ideal $\mcal H_1$.
The conclusion of the theorem is trivial if $d=1$. If $d=2$ then $\mcal G$ has a basis $\{X, Y\}$ and Theorem \ref{th:MAIN} implies $\Z Y\cap K\ne\varnothing$, whence $\Z{\mcal G}\cap K\ne\varnothing$.
Proceeding inductively we assume $d >2$ and that the conclusion holds for smaller values of $d$. Supersolvability implies there is a chain of ideals \[
\mcal H_1\subset \dots \subset \mcal H_d=\mcal G \] such that $\dim\,\mcal H_k = k$. Note that \[
\Z{\mcal H_1} = K. \] The inductive hypothesis implies \[ L:=\Z {\mcal H_{d-1}}\cap K\ne\varnothing, \] and $L$ is $\mcal G$-invariant because the zero sets of the ideals $\mcal H_{d-1}$ and $\mcal H_1$ are $\mcal G$-invariant.
If $\dim\, L=0$ it is a nonempty finite $\mcal G$-invariant set, hence contained in $\Z{\mcal G} \cap K$. Suppose $\dim\,L=1$. If $L=K$ there is nothing more to prove, and if $L\ne K$ its frontier in $K$ is a nonempty finite $\mcal G$-invariant set, again contained in $\Z{\mcal G}\cap K$. The proof of Theorem \ref{th:liealg} is complete. \qed
\subsection{Proof of Theorem \ref{th:liegroup}}
The effective analytic action of $G$ on $M$ induces an isomorphism from the Lie algebra of $G$ onto a Lie algebra $\mcal G\subset \ensuremath{{\mcal V}}^\omega (M)$. Let $X\in \mcal G$ span the Lie algebra of a $1$-dimensional ideal. Because $\chi (M)\ne 0$, the set $K:=\Z X$ is an essential $X$-block by the Poincar\'e-Hopf Theorem \ref{th:PH}. Theorem \ref{th:liealg} shows that $\Z{\mcal G}\cap K\ne\varnothing$. Connectedness of $G$ implies $\Z{\mcal G}=\Fix \alpha$, implying the conclusion. \qed
\end{document}
"id": "1310.0081.tex",
"language_detection_score": 0.6853544116020203,
"char_num_after_normalized": null,
"contain_at_least_two_stop_words": null,
"ellipsis_line_ratio": null,
"idx": null,
"lines_start_with_bullet_point_ratio": null,
"mean_length_of_alpha_words": null,
"non_alphabetical_char_ratio": null,
"path": null,
"symbols_to_words_ratio": null,
"uppercase_word_ratio": null,
"type": null,
"book_name": null,
"mimetype": null,
"page_index": null,
"page_path": null,
"page_title": null
} | arXiv/math_arXiv_v0.2.jsonl | null | null |
\begin{document}
\title{Couniversal spaces which are equivariantly commutative ring spectra}
\author{J.P.C.Greenlees} \address{School of Mathematics and Statistics, Hicks Building, Sheffield S3 7RH. UK.} \email{j.greenlees@sheffield.ac.uk} \date{}
\begin{abstract} We identify which couniversal spaces have suspension spectra equivalent to commutative orthogonal ring $G$-spectra for a compact Lie group $G$. These are precisely those whose cofamily is closed under passage to finite index subgroups. Equivalently these are the couniversal spaces admitting an action of an
$E_{\infty}^G$-operad. \end{abstract}
\thanks{I am grateful to M.Hill and M.Kedziorek for the conversation
at EuroTalbot17 when we observed that we knew of no obstruction to
Corollary \ref{cor:main}. } \maketitle
\tableofcontents
\section{Introduction}
For a compact Lie group $G$, Theorem \ref{thm:main} shows that a number of simple $G$-equivariant homotopy types have suspension spectra which are commutative orthogonal ring $G$-spectra.
Because equivariant commutativity implies a large amount of additional structure, including norm maps, this has significant implications.
These homotopy types are naturally used for isotropic decompositions of the sphere, and as such they play a significant role in understanding the structure of $G$-equivariant spectra where $G$ is a torus in \cite{tnqcore}. That analysis involves constructing the model category of rational $G$-spectra for a torus $G$ from a diagram of much simpler model categories. The simplest way to do this is to construct the simpler model categories as categories of modules over commutative ring $G$-spectra, and for this diagram to arise from a diagram of commutative ring $G$-spectra.
The homotopy types of the ring $G$-spectra are apparent from the construction, and it remains to show that they are indeed commutative ring $G$-spectra. If the ambient category of $G$-spectra is the category of orthogonal spectra, the commutative monoids admit multiplicative norm maps, which is a substantial restriction on the homotopy type. Accordingly, \cite{tnqcore} works instead with the Blumberg-Hill category of orthogonal $\mathcal{L}$-spectra \cite{BlumbergHillL1}, where
many more $G$-spectra admit the structure of commutative rings.
The motivating application of the present note is to show that in fact the ring spectra required in the construction of \cite{tnqcore} can be represented by commutative rings in the category of orthogonal $G$-spectra. It follows that the argument of \cite{tnqcore} can be conducted directly in the category of orthogonal $G$-spectra rather than in the more elaborate category of spectra with an $\mathcal{L}$-action.
\section{Operadic preliminaries}
There are rare examples of spectra which are obviously strictly commutative rings, but it is much more usual to show that a spectrum admits the action of a suitable operad, and then use general results to show this means the homotopy type is represented by a ring spectrum.
\subsection{$N_{\infty}$-operads} In the equivariant world there is a range of essentially different operads governing commutative ring spectra: these are the $N_{\infty}$-operads of Blumberg-Hill \cite{BlumbergHillNorms}. These are operads $\cO$ in $G$-spaces whose $n$-th term $\cO(n)$ is a universal space for a family $\cF \cO (n)$ of subgroups of $G\times \Sigma_n$: it is essential that $\cO (n)$ is $G$-fixed and $\Sigma_n$-free, but within that class there is a wide range of options. We need only discuss the two extreme types of $N_{\infty}$ $G$-operads.
At one extreme we have the non-equivariant $E_{\infty}$-operads, which are as free as possible whilst being $G$-fixed. Equivalently, the $n$-th term is the universal space for the family $$\cF (n)=\{ H\times 1 \subseteq G\times \Sigma_n \; | \; H\subseteq G\}. $$ There are of course many $E_{\infty}$-operads, and we write ${\mathbb E}_{\infty}$ for a chosen one. For example we might use
the linear isometries operad on a $G$-fixed universe, but we will use no special properties of the operad.
At the other extreme we have the $E_{\infty}^G$-operads which are as fixed as possible whilst their $n$th term is $\Sigma_n$-free, so their $n$-th term is a universal space for the family $$\cF_G(n) =\{ \Gamma \; | \; \Gamma \cap \Sigma_n=1\}.$$ There are of course many $E_{\infty}^G$-operads, and we write ${\mathbb E}_{\infty}^G$ for a chosen one. For example we might use
the linear isometries operad on a complete $G$-universe, but we will use no special properties of the operad. We pause to recall that if $\Gamma \cap \Sigma_n=1$ then $\Gamma$ is a `graph subgroup' in the sense that we have $\Gamma =\Gamma(L, \alpha)$ for some subgroup $L$ of $G$ and some homomorphism $\alpha : L \longrightarrow \Sigma_n$, where $\Gamma (L, \alpha)=\{ (x , \alpha (x))\; | \; x \in L\}$.
\subsection{Commutative monoids and $E_{\infty}^G$-operads} The relevance of $E_{\infty}^G$-operads is the connection to the standard symmetric monoidal product of spectra.
\begin{lemma} The commutative monoids in the category of orthogonal $G$-spectra are the $E_{\infty}^G$-algebras. \end{lemma}
\begin{proof} This uses the traditional argument of \cite[15.5]{MMSS}, using \cite[B.117]{HHR}, which in turn corrects \cite[III.8.4]{MMorthogonal}. We note that the statement in \cite{HHR} is only given for finite groups, but the argument applies as written to arbitrary compact Lie groups, giving the full replacement for the statement in \cite{MMorthogonal}. \end{proof}
\subsection{Endomorphism operads} The other piece of standard material is to consider the endomorphism operad $\cE_Y$ on a based space $Y$, defined by $$\cE_Y(n)=Map_*(Y^{\sm n }, Y). $$ We automatically find $Y$ is an $\cE_Y$-algebra. Equally, if $Y$ is a based $G$-space $\cE_Y$ is an operad in $G$-spaces and $Y$ is an algebra over it.
\section{McClure's argument} McClure \cite{McClureTate} argued as follows to construct an $E_{\infty}$-operad acting on $\tilde{E} G$.
First we consider the endomorphism operad $\cE_{\tilde{E} G}$, and then note that passage to fixed points gives a map $$\phi(n): \cE_{\tilde{E} G}(n)^G=Map_*^G(\tilde{E} G^{\sm n}, \tilde{E} G)\longrightarrow Map_*(S^0, S^0). $$ We write $$D_{McC}(n)=\phi(n)^{-1}(id), $$ and note that this is also an operad acting on $\tilde{E} G$. Because $\phi (n)$ is a weak equivalence $D_{McC}(n)$ is contractible, so that ${\mathbb E}_{\infty} \times D_{McC}$ is an $E_{\infty}$-operad acting on $\tilde{E} G$ as required.
\section{Generalizing McClure's argument}
\subsection{Couniversal spaces} Given a group $G$ and a family $\cF$ of subgroups of $G$, we say that $\tilde{E} \cF=S^0*E\cF$ is {\em the couniversal space} for the complementary cofamily $All \setminus \cF$. Simplifying notation, for a cofamily $\cC$, we write simply $$E\cC =\tilde{E} (\cC^c). $$ This has two essential features: it has geometric isotropy $\cC$, and $(E\cC)^H=S^0$ whenever $H\in \cC$.
\subsection{The endomorphism operad of a cofamily} \label{subsec:counivoperad} We consider the endomorphism operad of $E\cC$: $$\cE_{E\cC}(n) = Map_*(E\cC^{\sm n},E\cC). $$ The following partial information about the homotopy type of this space will be useful later.
\begin{lemma} \label{lem:easyendo} Given cofamilies $\cC$ and $\mcD$ the space $$\mathrm{map}_* (E\cC ,E\mcD )$$ has the following properties \begin{itemize} \item It is $H$-contractible if $H\not \in \cC \cap \mcD$ \item It is $H$-couniversal if no subgroup of $H$ lies in $\mcD
\setminus \cC$ \end{itemize} \end{lemma}
\begin{proof} It is clear that if $H$ is not in $\cC \cap \mcD$ then $\mathrm{map}_*(E\cC, E\mcD)$ is $H$-contractible, since one or other of the spaces is.
If $H \in \cC\cap \mcD$ we wish to argue that the map $$\mathrm{map}_*(E\cC , E\mcD)^H \longrightarrow \mathrm{map}_*(S^0, S^0)=S^0$$ is an equivalence. In other words, that any $H$-map $f: E\cC \longrightarrow E\mcD$ is determined by the map from $S^0\longrightarrow E\mcD$. The obstruction to extension and uniqueness lie in $[E\cC^c_+\sm S^k, E\mcD]^H$, which vanishes unless $H$ has a subgroup $K \in \mcD \setminus \cC$. \end{proof}
\subsection{The couniversal operad of a cofamily}
There is a $G\times \Sigma_n$-map $i_n: S^0=(S^0)^{\sm n} \longrightarrow (E\cC)^{\sm n}$ inducing a $G\times \Sigma_n$-map $$i_n^*: \cE_{E\cC} (n) = Map_*(E\cC^{\sm n},E\cC)\longrightarrow Map_*(S^0,E\cC)=E\cC . $$ We take $$D\cC (n)=(i_n^*)^{-1}(i_1). $$ We note that when $\cC={\mathcal{NT}}$ consists of the non-trivial subgroups the fixed point set $D{\mathcal{NT}}^G=D_{McC}$ is McClure's operad.
\begin{lemma} \label{lem:DCactsonEC} $D\cC$ is an operad acting on $E\cC$. \qed \\[1ex] \end{lemma}
Using this, we will show that for suitable cofamilies $\cC$, the space $E\cC$ is an algebra over an $N_{\infty}$-operad with more highly structured algebras than ${\mathbb E}_{\infty}$.
\subsection{Permutation powers and cofamilies} Let us think of the symmetric group $\Sigma_n$ as the permutations of $\{ 1, 2, \ldots, n\}$. We consider the group $G\times \Sigma_n$ and let $p:G\times \Sigma_n \longrightarrow \Sigma_n$ and $\pi: G\times \Sigma_n \longrightarrow G$ be the projections.
If $\cC$ is a cofamily of subgroups of $G$, we view $E\cC$ as a trivial $\Sigma_{n-1}$-space and form the $n$th smash power $(E\cC)^{\sm n}$ and view it as a $G\times \Sigma_n$-space.
\begin{lemma} The $G\times \Sigma_n$-space $E\cC^{\sm n}$ is couniversal. \end{lemma}
\begin{proof} Consider any $G$-space $X$ and form the $G\times \Sigma_n$-space $X^{\sm n}$. We will consider fixed points under a subgroup
$\Delta \subseteq G\times \Sigma_n$.
Consider the orbits $o_1, \ldots , o_s$ of
$\{1, \ldots , n\}$ under $p(\Delta)$, and choose orbit representatives $d_i\in o_i$. Now write $\Delta_i=p^{-1}((\Sigma_n)_{d_i})\cap \Delta$ for the subgroup of $\Delta$ fixing $d_i$.
We then see that there is a homeomorphism $$h: \bigwedge_{i=1}^s X^{\pi (\Delta_i)}\stackrel{\cong}\longrightarrow (X^{\wedge n})^{\Delta}. $$ The $i$th factor in the domain gives the $d_i$th coordinate in $X^{\wedge n}$ and hence determines the coordinates in $o_i$. More precisely, if $m\in o_i$ we may choose $\delta \in \Delta$ with $p(\delta)(d_i)=m$, and then $$h(x_1\sm \cdots \sm x_s)_m=\pi (\delta ) x_i. $$ Since $x_i$ is fixed by $\Delta_i$ this is independent of the choice of $\delta$. The verification that $h$ is a homeomorphism is straightforward.
Applying this to $X=E\cC$ we see that $X^{\Delta}$ is always either $S^0$ or contractible. The collection of subgroups for which it is $S^0$ is obviously a cofamily.
\end{proof}
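The coordinate description of $h$ can be sanity-checked in the simplest finite setting: take $G$ trivial, so a subgroup $\Delta\subseteq\Sigma_n$ permutes coordinates, and the fixed tuples in the product power are exactly those constant on each orbit of $\{1,\dots,n\}$, giving $|X|^{s}$ fixed points where $s$ is the number of orbits. A minimal brute-force sketch (all names are ours, not from the text):

```python
from itertools import product

def orbits(perms, n):
    """Orbits of {0,...,n-1} under the group generated by the given permutations."""
    seen, out = set(), []
    for i in range(n):
        if i not in seen:
            orb, frontier = {i}, [i]
            while frontier:
                x = frontier.pop()
                for p in perms:
                    if p[x] not in orb:
                        orb.add(p[x])
                        frontier.append(p[x])
            seen |= orb
            out.append(orb)
    return out

n = 4
perms = [(1, 0, 2, 3), (0, 1, 3, 2)]  # generators of <(0 1), (2 3)>: two orbits
X = range(3)

# A tuple is fixed iff it is fixed by every generator of the subgroup.
fixed = [t for t in product(X, repeat=n)
         if all(tuple(t[p[i]] for i in range(n)) == t for p in perms)]

# |(X^n)^Delta| = |X|^(number of orbits), matching the decomposition h.
assert len(fixed) == len(X) ** len(orbits(perms, n))
```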
If we write $C(\cC, n)$ for the geometric isotropy of $E\cC^{\sm n}$, then by the lemma $E\cC^{\sm n}\simeq EC(\cC,n)$.
\begin{lemma} \label{lem:CCnC1} $$C(\cC,n) \subseteq \pi^* \cC$$ \end{lemma}
\begin{proof}
We show that if $\Delta$ is not in the right hand side it is not in the left hand side.
If $\pi (\Delta) $ does not lie in $\cC$, then $(E\cC)^{\pi
(\Delta)}\neq S^0$, so we may choose a point $x\in E\cC \setminus S^0$
fixed by $\pi (\Delta)$. The diagonal point $x\sm x \sm\cdots
\sm x \in (E\cC)^{\sm n}$ is fixed by $\pi (\Delta )\times \Sigma_n$ and hence by its
subgroup $\Delta$. Since this fixed point does not lie in $S^0$, we have $\Delta \not \in C(\cC , n)$. \end{proof}
\begin{lemma} \label{lem:CC1Cn} If $\cC$ is closed under passage to finite index subgroups then $$\pi^*\cC \cap \cF_G(n) \subseteq C(\cC ,n)$$ \end{lemma}
\begin{proof} Suppose $\Delta\subseteq G\times \Sigma_n$ lies in the intersection, which is to say $L:=\pi (\Delta )\in \cC$, and $\Delta =\Gamma (L, \alpha)$ is a graph subgroup. We will show that $\Delta \in C(\cC,n)$. Since $C(\cC , n)$ is a cofamily, it suffices to show that the subgroup
$\Delta'=\Gamma(L_e, \alpha|_{L_e})$ lies in $C(\cC,n)$, where $L_e$ is the identity component of $L$.
However, since $\Sigma_n$ is discrete, $\alpha|_{L_e}$ is trivial, so that $\Delta'=\Gamma (L_e, const)=L_e$. However $L=\pi(\Delta)$ lies in $\cC$, so its finite index subgroup $L_e$ also lies in $\cC$ by hypothesis: $$(E\cC^{\sm n})^{\Delta}\subseteq (E\cC^{\sm n})^{\Delta'}= ((E\cC)^{L_e})^{\sm n}=(S^0)^{\sm n}=S^0. $$ Hence $(E\cC^{\sm n})^{\Delta}=S^0$ as required. \end{proof}
\begin{lemma} \label{lem:inFGn} The map $$i_n^*: \cE_{E\cC}(n)=\mathrm{map}_*(E\cC^n , E\cC)\longrightarrow \mathrm{map}_*(S^0, E\cC)=E\cC$$ is an $\cF_G(n)$-equivalence. \end{lemma}
\begin{proof}
We observe that by Lemmas \ref{lem:CCnC1} and \ref{lem:CC1Cn}, if $H\in \cF_G(n)$ then $C(\cC, n)|_H=\pi_*\cC|_H$. The result follows from Lemma \ref{lem:easyendo}. \end{proof}
\subsection{McClure's argument extended}
We now apply the above to the operad $D\cC$ of Subsection \ref{subsec:counivoperad}.
\begin{thm} \label{thm:main} If $\cC$ is a cofamily then the space $E\cC$ is an $E_{\infty}^G$-algebra if and only if $\cC$ is closed under passage to finite index subgroups. \end{thm}
\begin{proof} If there is a finite index inclusion $K\subseteq H$ of subgroups with $H \in \cC$ and $K\not\in \cC$, then the assumption that $E\cC$ is $E_{\infty}^G$ leads to a contradiction. Indeed $\pi^K_0(E\cC)=0$ so that $1=0$ in that ring. On the other hand, by Segal-tom Dieck splitting, $\pi^H_0(E\cC)\neq 0$ so that $1\neq 0$ in $\pi^H_0(E\cC)$. The existence of a norm map then gives a contradiction since $\mathrm{norm}_K^H(1)=1$.
Now suppose $\cC$ is closed under passage to finite index subgroups. By Lemma \ref{lem:DCactsonEC} there is an action of $D\cC$ on $E\cC$, and hence also an action of ${\mathbb E}_{\infty}^G \times D\cC$. It remains to show that the $n$th term in this operad is universal for $\cF_G(n)$. In other words, we need to show that if $\Gamma\in \cF_G(n)$ is a graph subgroup then $D\cC(n)^{\Gamma}\simeq *$.
Now by Lemma \ref{lem:inFGn}, the map $$i_n^*: \cE_{E\cC}(n)=\mathrm{map}_*(E\cC^n , E\cC)\longrightarrow \mathrm{map}_*(S^0, E\cC)=E\cC$$ is an $\cF_G(n)$-equivalence, and hence $D\cC (n)$ is $\cF_G(n)$-contractible as required. \end{proof}
\begin{cor} \label{cor:main} If $G$ is a torus and $K$ is a connected subgroup then $S^{\infty
V(K)}=\bigcup_{V^K=0}S^V$ is an $E_{\infty}^G$-algebra. \end{cor}
\begin{proof} The space $S^{\infty V(K)}$ is couniversal for the cofamily $\mathsf{V} (K)=\{H \; | \; H\supseteq K\}$ of subgroups containing $K$. Since $K$ is connected $\mathsf{V} (K)$ is closed under passage to finite index subgroups. \end{proof}
\end{document}
"id": "1801.09766.tex",
"language_detection_score": 0.7513241767883301,
"char_num_after_normalized": null,
"contain_at_least_two_stop_words": null,
"ellipsis_line_ratio": null,
"idx": null,
"lines_start_with_bullet_point_ratio": null,
"mean_length_of_alpha_words": null,
"non_alphabetical_char_ratio": null,
"path": null,
"symbols_to_words_ratio": null,
"uppercase_word_ratio": null,
"type": null,
"book_name": null,
"mimetype": null,
"page_index": null,
"page_path": null,
"page_title": null
} | arXiv/math_arXiv_v0.2.jsonl | null | null |
\begin{document}
\title{Difference Covering Arrays and Pseudo-\\Orthogonal Latin Squares}
\author{Fatih Demirkale\\ \small\tt fatih.demirkale@uqconnect.edu.au\\ \small\tt Department of Mathematics,\\ \small\tt Ko\c{c} University, Sar{\i}yer, 34450, \.{I}stanbul, Turkey\\ \\ Diane M. Donovan \\ \small\tt dmd@maths.uq.edu.au\\ \small\tt Centre for Discrete Mathematics and Computing,\\ \small\tt University of Queensland, St Lucia 4072 Australia \\ \\ Joanne Hall\\ \small\tt j42.hall@qut.edu.au\\ \small Department of Mathematics\\ \small Queensland University of Technology\\ \small Qld, 4000\\ \\ Abdollah Khodkar\\ \small \tt akhodkar@westga.edu\\ \small \tt Department of Mathematics\\ \small University of West Georgia\\ \small Carrollton, GA 30118, USA\\ \\ Asha Rao\\ \small\tt asha@rmit.edu.au\\ \small School of Mathematical and Geospatial Sciences\\ \small RMIT University\\ \small Vic 3000, Australia } \date{January 19, 2015} \maketitle
\begin{abstract} Difference arrays are used in applications such as software testing, authentication codes and data compression. Pseudo-orthogonal Latin squares are used in experimental designs. A special class of pseudo-orthogonal Latin squares are the mutually nearly orthogonal Latin squares (MNOLS) first discussed in 2002, with general constructions given in 2007. In this paper we develop row complete MNOLS from difference covering arrays. We will use this connection to settle the spectrum question for sets of 3 mutually pseudo-orthogonal Latin squares of even order, for all but the order 146. \end{abstract}
\section{Introduction}
Difference matrices are a fundamental tool used in the construction of combinatorial objects, generating a significant body of research that has identified a number of existence constraints. These difference matrices have been used for diverse applications, for instance, in the construction of authentication codes without secrecy \cite{Stinson}, software testing \cite{CDFP1},\cite{CDPP2} and data compression \cite{Korner}. This diversity of applications, coupled with existence constraints, has motivated authors to generalise the definition to holey difference matrices, difference covering arrays and difference packing arrays, to mention just a few.
In the current paper we are interested in constructing subclasses of cyclic difference covering arrays and exploiting these structures to emphasize new connections with other combinatorial objects, such as pseudo-orthogonal Latin squares. We use this connection to settle
the existence spectrum for sets of 3 mutually pseudo-orthogonal Latin squares of even order, in all but one case. We begin with the formal definitions.
A {\em difference matrix} (DM) over an abelian group $(G,+)$ of order $n$ is defined to be an $n\times k$ matrix $Q=[q(i,j)]$ with entries from $G$ such that,
for all pairs of columns $0\leq j,j^\prime\leq k-1$, $j\neq j^\prime$, the difference set $$ \Delta_{j,j^\prime}=\{q(i,j)-q(i,j^\prime) \mid 0\leq i\leq n-1\} $$ contains every element of $G$ equally often, say $\lambda$ times. (See, for instance, \cite{colbourn}, \cite{Ge} and \cite{HSS}.) Note that we label the rows from $0$ to $n-1$ and the columns $0$ to $k-1$. Also to be consistent with later sections involving Latin squares and covering arrays our definition uses the transpose of the matrix given in \cite{colbourn} and \cite{Ge}. Since the addition of a constant vector, over $G$, to all rows and a constant vector to any column does not alter the set $\Delta_{j,j^\prime}$, we may assume that one row and one column contain only $0$, the identity element of $G$. More precisely, to simplify later calculations, we will assume that all entries in the last row and last column of $Q$ are $0$. A difference matrix will be denoted DM$(n,k;\lambda)$. If $(G,+)$ is the cyclic group we refer to a {\em cyclic difference matrix}.
\begin{theorem}\cite[Thm 17.5, p 411]{colbourn} A DM$(n,k;\lambda)$ does not exist if $k>\lambda n$. \end{theorem}
In the main, we will use difference matrices with $k= 4$, $\lambda=1$ and where possible we will work with cyclic difference matrices. In Section \ref{sc:spec} we list a number of existence results that will be relevant to the current paper.
A {\em holey difference matrix} (HDM) over an abelian group $(G,+)$ of order $n$ with a subgroup $H$ of order $h$ is defined to be an $(n-h)\times k$ matrix $Q=[q(i,j)]$ with entries from $G$ such that,
for all pairs of columns $0\leq j,j^\prime\leq k-1$, $j\neq j^\prime$, the difference set $$ \Delta_{j,j^\prime}=\{q(i,j)-q(i,j^\prime) \mid 0\leq i\leq n-h-1\} $$
contains every element of $G\setminus H$ equally often, say $\lambda$ times. A holey difference matrix will be denoted HDM$(k,n;h)$, where $|G|=n$ and $|H|=h$. If $G$ is the cyclic group then we refer to a {\em cyclic holey difference matrix}.
\begin{remark}\label{rem} As before a constant vector may be added to any column without affecting $\Delta_{j,j^\prime}$ so we may assume that all entries in the last column of $Q$ are equal to $0$. However since $H$ is a subgroup, $0$ belongs to the hole. Consequently $0$ does not occur in $\Delta_{j,j^\prime}$, and thus there will be no row containing two or more $0$'s. Further since $\Delta_{j,k-1}=G\setminus H$, $0\leq j\leq k-2$, the entries of $H$ do not occur in the first $k-1$ columns of $Q$. \end{remark}
A {\em difference covering (packing) array} over an abelian group $(G,+)$ of order $n$ is defined to be an $\eta\times k$ matrix $Q=[q(i,j)]$ with entries from $G$ such that,
for all pairs of distinct columns $0\leq j,j^\prime\leq k-1$, the difference set $$ \Delta_{j,j^\prime}=\{q(i,j)-q(i,j^\prime) \mid 0\leq i\leq \eta-1\} $$ contains every element of $G$ at least (at most) once. (See, for instance, \cite{Yin1} and \cite{Yin2}.) A difference covering array will be denoted DCA$(k,\eta;n)$ and a difference packing array will be denoted DPA$(k,\eta;n)$.
If $(G,+)$ is the cyclic group, then the difference covering (packing) array is said to be {\em cyclic}.
Difference covering arrays have been studied in their own right and are related to mutually orthogonal partial Latin squares and transversal coverings, with applications in information technology, see \cite{KAG} and \cite{SMM}.
As before we may assume that the last row and last column of a DCA$(k,\eta;n)$ contain only 0.
In the papers \cite{Yin1} and \cite{Yin2}, Yin constructs cyclic DCA$(4,n+1;n)$ for all even integers $n$, with similar results for cyclic difference packing arrays. Yin documents a number of product constructions for difference covering arrays, some of which will be reviewed in Section \ref{sc:spec} and then adapted to construct difference covering arrays with specific properties; properties that build connections with pseudo-orthogonal Latin squares.
The additional properties that we seek are that $0$ (the entry relating to identity element of $G$) occurs at least twice in each column of the DCA$(k,n+1;n)$ and for pairs of columns, not including the last column, the repeated difference is not the element $0$.
Formally we are interested in DCA$(k,n+1;n)$, $Q=[q(i,j)]$, ($0\leq i\leq n$, $0\leq j\leq k-1$) satisfying the properties: \begin{itemize} \item[{\bf P1.}] the entry $0\in G$ occurs at least twice in each column of $Q$, and \item[{\bf P2.}] for all pairs of distinct columns $j$ and $j^\prime$, $j\neq k-1\neq j^\prime$, $\Delta_{j,j^\prime}=\{q(i,j)-q(i,j^\prime)\mid 0\leq i\leq n-1\}=G\setminus\{0\}.$ \end{itemize} Note that since this set of $n$ differences covers the $n-1$ elements of $G\setminus\{0\}$, the last property implies that $\Delta_{j,j^\prime}$ contains exactly one repeated difference, and that it is not $0$.
The following example, of cyclic DCA$(4, 7;6)$ that satisfies P1 and P2, is taken from \cite{RSS}. $$ B^T=\left[\begin{array}{ccccccc} 0&1&2&3&4&5&0\\ 1&3&5&0&2&4&0\\ 3&0&4&1&5&2&0\\ 0&0&0&0&0&0&0 \end{array}\right] $$
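The example above can be verified mechanically. The following sketch (ours, not from \cite{RSS}) treats the rows of $B^T$ as the columns of $B$, checks the covering condition for all pairs of columns, and confirms Properties P1 and P2, including that the unique repeated difference is $n/2=3$.

```python
from collections import Counter

n, k = 6, 4
# Columns of the cyclic DCA(4,7;6) example (the rows of B^T above).
cols = [
    [0, 1, 2, 3, 4, 5, 0],
    [1, 3, 5, 0, 2, 4, 0],
    [3, 0, 4, 1, 5, 2, 0],
    [0, 0, 0, 0, 0, 0, 0],
]

# Covering property: each ordered pair of distinct columns realises
# every difference mod n at least once over all n+1 rows.
for j in range(k):
    for jp in range(k):
        if j != jp:
            diffs = [(a - b) % n for a, b in zip(cols[j], cols[jp])]
            assert set(diffs) == set(range(n))

# P1: the entry 0 occurs at least twice in each column.
assert all(col.count(0) >= 2 for col in cols)

# P2: for column pairs avoiding the last column, the differences over
# rows 0..n-1 omit 0, and the unique repeated difference equals n/2.
for j in range(k - 1):
    for jp in range(k - 1):
        if j != jp:
            diffs = Counter((cols[j][i] - cols[jp][i]) % n for i in range(n))
            assert 0 not in diffs
            (rep,) = [d for d, c in diffs.items() if c == 2]
            assert rep == n // 2
```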
In the next lemma we show that if $G$ is the cyclic group over ${\mathbb Z}_n$, then these conditions imply that for all distinct columns $j$ and $j^{\prime}$, $j\neq k-1\neq j^\prime$, $$ \Delta_{j,j^\prime}=\{0, 1, 2,\dots,n/2,n/2,\dots, n-1\} $$ with repetition retained.
\begin{lemma}\label{diff} If there exists a cyclic DCA$(k,n+1;n)$, $Q=[q(i,j)]$, ($0\leq i\leq n$, $0\leq j\leq k-1$) satisfying Properties P1 and P2,
then $n$ is even. Further, given $d_0$ such that $d_0=q(i,j)-q(i,j^\prime)=q(i^\prime,j)-q(i^\prime,j^\prime)$, for $i\neq i^\prime$ and $k-1\neq j\neq j^\prime\neq k-1$, then $d_0 = n/2$. \end{lemma}
\begin{proof} Let $Q=[q(i,j)]$ ($0\leq i\leq n$, $0\leq j\leq k-1$) represent the difference covering array. The definition requires that ${\mathbb Z}_{n}\subseteq \Delta_{j,j^\prime}$ and since column $k-1$ of $Q$ contains all zeros, Property P1 implies that the remaining columns are permutations of the multi-set $\{0,0, 1, 2,\dots, n-1\}$.
Let $d_0\in {\mathbb Z}_n\setminus \{0\}$ represent the repeated difference in $\Delta_{j,j^\prime}$, for distinct columns $j, j^\prime\neq k-1$. Since columns $j$ and $j^\prime$ are permutations of the same multiset, $$ \sum_{i=0}^{n}\left(q(i,j)-q(i,j^\prime)\right)\equiv 0 \mod n. $$ On the other hand, the differences $q(i,j)-q(i,j^\prime)$, $0\leq i\leq n$, consist of every element of ${\mathbb Z}_n$ together with the repeat $d_0$, so $$ \sum_{i=0}^{n}\left(q(i,j)-q(i,j^\prime)\right)\equiv \frac{(n-1)n}{2}+d_0 \mod n. $$ Consequently $\frac{(n-1)n}{2}+d_0\equiv 0 \mod n$. If $n$ is odd then $\frac{(n-1)n}{2}\equiv 0 \mod n$, which forces $d_0\equiv 0 \mod n$, contradicting Property P2. Thus $n=2p$ for some integer $p$, and then $\frac{(n-1)n}{2}\equiv p \mod n$, so $d_0=n-p=p=n/2$.
\end{proof}
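The congruence $\frac{(n-1)n}{2}+d_0\equiv 0 \pmod n$ appearing in the proof of Lemma \ref{diff} can also be checked exhaustively for small $n$; this sketch (ours) confirms that it has the unique nonzero solution $d_0=n/2$ when $n$ is even and no nonzero solution when $n$ is odd.

```python
# For each n, solve (n-1)*n/2 + d0 = 0 (mod n) for nonzero d0:
# even n admits exactly d0 = n/2; odd n admits none, so n must be even.
solutions = {}
for n in range(2, 201):
    sols = [d for d in range(1, n) if ((n - 1) * n // 2 + d) % n == 0]
    solutions[n] = sols
    if n % 2 == 0:
        assert sols == [n // 2]
    else:
        assert sols == []
```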
The remainder of this paper is organised as follows. In Section \ref{Latinsquare} we will draw the connection between DCA$(k,n+1;n)$ and sets of mutually pseudo-orthogonal Latin squares and for a subclass of squares settle the spectrum question for all but a single order, namely 146. In Section \ref{sc:spec} we review some of the general constructions for difference covering arrays and show that these constructions can be used to construct DCA$(k,n+1;n)$ that satisfy Properties P1 and P2. In Section \ref{constructions} we give three new constructions for DCA$(4,n+1;n)$'s and consequently new families of
mutually pseudo-orthogonal Latin squares.
The notation $[a,b]=\{a,a+1,\dots, b-1,b\}$ refers to the closed interval of integers from $a$ to $b$.
\section{Pseudo-orthogonal Latin squares and difference covering arrays}\label{Latinsquare}
In this section we verify that cyclic difference covering arrays can be used to construct pseudo-orthogonal Latin squares.
A {\em Latin square} of order $n$ is an $n\times n$ array in which each of the symbols of ${\mathbb Z}_n$ occurs once in every row and once in every column.
Two Latin squares $A=[a(i,j)]$ and $B=[b(i,j)]$, of order $n$, are said to be {\em orthogonal} if $$ O=\{(a(i,j),b(i,j))\mid 0\leq i,j\leq n-1\}={\mathbb Z}_n\times {\mathbb Z}_n. $$ A set of $t$ Latin squares is said to be {\em mutually orthogonal}, $t$-MOLS$(n)$, if they are pairwise orthogonal. A set of $t$ {\em idempotent} MOLS$(n)$, denoted $t$-IMOLS$(n)$, is a set of $t$-MOLS$(n)$ each of which is idempotent; that is, the cell $(i,i)$ contains the entry $i$, for all $0\leq i\leq n-1$.
It is well known that difference matrices can be used to construct sets of mutually orthogonal Latin squares, see for instance \cite[Lemma 6.12]{HSS}.
While the applications of orthogonal Latin squares are well documented, there are still many significant existence questions unanswered. For instance, it is known that there is no pair of MOLS(6), however it is not known if there exists a set of three MOLS(10),
or four MOLS(22), see \cite{colbourn}. The existence of a set of four MOLS(14) was established by Todorov \cite{todorov} in 2012, but it is not known if there exists a set of five MOLS(14). Many of the existence results have been obtained using quasi-difference matrices or difference matrices with holes, see \cite{colbourn}.
The importance and applicability of MOLSs combined with these difficult open questions has motivated authors, such as Raghavarao, Shrikhande and Shrikhande \cite{RSS} and Bate and Boxall \cite{BB}, to slightly vary the orthogonality condition to that of pseudo-orthogonal. A pair of Latin squares, $A=[a(i,j)]$ and $B=[b(i,j)]$, of order $n$, is said to be {\em pseudo-orthogonal} if given $O=\{(a(i,j),b(i,j))\mid 0\leq i,j\leq n-1\}$,
for all $a\in {\mathbb Z}_n$ $$
|\{(a,b(i,j))\mid (a,b(i,j))\in O\}|=n-1. $$ That is, each symbol in $A$ is paired with every symbol in $B$ precisely once, except for one symbol with which it is paired twice and one symbol with which it is not paired at all. A set of $t$ Latin squares, of order $n$, are said to be mutually pseudo-orthogonal if they are pairwise pseudo-orthogonal.
The value and applicability of pseudo-orthogonal Latin squares have been established through applications to multi-factor crossover designs in animal husbandry \cite{BB}, and strongly regular graphs \cite{BHS} (though the definition varies here). {\em Mutually nearly orthogonal Latin squares} (MNOLS) are a special class of pseudo-orthogonal Latin squares, in that the set $O$ does not contain the pair $(a,a)$, for any $a\in{\mathbb Z}_n$. MNOLS were first discussed in a paper by Raghavarao, Shrikhande and Shrikhande in 2002 \cite{RSS}.
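As an illustration (a sketch of ours, not taken from \cite{RSS}), the following Python snippet tests the pseudo-orthogonality and near-orthogonality conditions directly; the order-$6$ pair is built from two cyclic columns whose difference column covers ${\mathbb Z}_6\setminus\{0\}$ together with $3$ twice.

```python
from collections import Counter

def is_pseudo_orthogonal(A, B):
    # Each symbol of A is paired with every symbol of B exactly once,
    # except one symbol twice and one symbol not at all.
    n = len(A)
    for a in range(n):
        counts = Counter(B[i][j] for i in range(n) for j in range(n)
                         if A[i][j] == a)
        if sorted(counts.values()) != [1] * (n - 2) + [2]:
            return False
    return True

def is_nearly_orthogonal(A, B):
    n = len(A)
    pairs = {(A[i][j], B[i][j]) for i in range(n) for j in range(n)}
    return is_pseudo_orthogonal(A, B) and all((a, a) not in pairs for a in range(n))

n = 6
b = [1, 3, 5, 0, 2, 4]          # b(i) - i covers Z_6 minus {0}, with 3 twice
A = [[(i + j) % n for j in range(n)] for i in range(n)]
B = [[(b[i] + j) % n for j in range(n)] for i in range(n)]
print(is_pseudo_orthogonal(A, B), is_nearly_orthogonal(A, B))
```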
A natural question to ask is: Can we use difference techniques to construct mutually pseudo-orthogonal Latin squares?
Raghavarao, Shrikhande and Shrikhande did precisely this and constructed mutually pseudo-orthogonal Latin squares from cyclic DCA$(k,n+1;n)$, termed $(k,n)$-difference sets in \cite{RSS}. Their result is as follows.
\begin{theorem} If there exists a cyclic DCA$(t+1, 2p+1;2p)$, $Q^\prime=[q^\prime(i,j)]$, that satisfies P1 and P2, then there exists a set of $t$ pseudo-orthogonal Latin squares of order $2p$. \end{theorem}
\begin{proof} Recall that without loss of generality we may assume that the last row and column of $Q^\prime$ contain all zeros. Construct a new matrix $Q=[q(i,j)]$ by removing the last row and last column from $Q^\prime$ and define a set of $t$ arrays, $L_s=[l_s(i,j)]$, $0\leq s\leq t-1$, of order $2p$, by \begin{equation}\label{cyclic} l_s(i,j)=q(i,s)+j(\mbox{mod }2p),\ 0\leq i,j\leq 2p-1. \end{equation} It is easy to see that each row and each column of $L_s$ is a permutation of ${\mathbb Z}_{2p}$ and so $L_s$ is a Latin square. By Lemma \ref{diff} $$ \Delta_{j,j^\prime}=\{q^\prime(i,j)-q^\prime(i,j^\prime) \mid 1\leq i\leq 2p\}=({\mathbb Z}_{2p}\setminus \{0\})\cup \{p\} $$
implying that when any two Latin squares are superimposed we obtain the set of ordered pairs $(\{{\mathbb Z}_{2p}\times {\mathbb Z}_{2p}\}\setminus\{(x,x)\mid 0\leq x\leq 2p-1\}) \cup \{(x,x+p)\mid 0\leq x\leq 2p-1\}$ with repetition retained.
\end{proof}
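The construction in the proof can be exercised computationally. The following sketch (our own illustration) applies Eq.\ (\ref{cyclic}) to two columns with the required difference property for $2p=6$ and confirms the multiset of superimposed pairs described above.

```python
from collections import Counter

p = 3                            # order 2p = 6
q = [list(range(2 * p)),         # column 0 of a stripped difference covering array
     [1, 3, 5, 0, 2, 4]]         # column 1: differences cover Z_6 minus {0}, p twice

# l_s(i, j) = q(i, s) + j (mod 2p), as in the proof
L = [[[(q[s][i] + j) % (2 * p) for j in range(2 * p)] for i in range(2 * p)]
     for s in range(2)]

# Superimpose L_0 and L_1, retaining repetitions.
pairs = Counter((L[0][i][j], L[1][i][j]) for i in range(2 * p) for j in range(2 * p))

# Expected: all off-diagonal ordered pairs once, plus (x, x+p) once more.
expected = Counter()
for x in range(2 * p):
    for y in range(2 * p):
        if x != y:
            expected[(x, y)] += 1
    expected[(x, (x + p) % (2 * p))] += 1
print(pairs == expected)
```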
If there exists a pair of pseudo-orthogonal Latin squares generated from cyclic difference covering arrays satisfying P1 and P2, then there exists a pair of nearly orthogonal Latin squares. Conversely, a pair of nearly orthogonal Latin squares is necessarily a pair of pseudo-orthogonal Latin squares. Given this, and the strong connection with the papers \cite{LvR} and \cite{RSS}, we will state all results in terms of mutually nearly orthogonal Latin squares.
Raghavarao, Shrikhande and Shrikhande established bounds on the maximum number of Latin squares in a set of mutually nearly orthogonal Latin squares. This result provides bounds on $k$ for DCA$(k,n+1;n)$ that satisfy P1 and P2.
\begin{lemma} Let $p\geq 2$ be a positive integer. If there exists a cyclic DCA$(k+1, 2p+1;2p)$ that satisfies P1 and P2, then $k\leq p+1$. Further if $p$ is even and there exists a DCA$(k, 2p+1;2p)$, then $k<p+1$.
\end{lemma}
\begin{proof} If $k>p+1$ then there exists a set of more than $(p+1)$-MNOLS$(2p)$, which contradicts the bound of Raghavarao, Shrikhande and Shrikhande \cite{RSS}. The second statement follows similarly.
\end{proof}
\subsection{ Some interesting facts}
It is also interesting to note that the MNOLS$(2p)$ constructed from cyclic difference covering arrays are essentially copies of the cyclic group. Consequently, these Latin squares are all {\em bachelor squares}, in that they have no orthogonal mate.
We also note that these sets of Latin squares are row complete. A {\em row complete} Latin square, $L=[l(i,j)]$, is one in which the columns can be reordered in such a way that the set $\{(l(i,j),l(i,j+1))\mid i \in {\mathbb Z}_n, 0\leq j \leq n-2\}=\{(x,y)\mid x,y\in{\mathbb Z}_n,\ x\neq y\}$. So the set of entries obtained by taking pairs of adjacent cells in the same row, over all rows, gives every ordered pair of distinct elements of ${\mathbb Z}_n$.
Williams \cite{W} verified that the columns of the Latin square corresponding to the Cayley table of the cyclic group can be rearranged to obtain a row complete Latin square.
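Williams' ordering can be checked directly. The following sketch (ours; the helper names are illustrative) builds the Cayley table of ${\mathbb Z}_n$ with columns in the order $0,1,n-1,2,n-2,\dots$ and verifies row completeness for $n=8$.

```python
def williams_order(n):
    # 0, 1, n-1, 2, n-2, ... : successive differences are 1, n-2, 3, n-4, ...
    order, lo, hi = [0], 1, n - 1
    while lo <= hi:
        order.append(lo)
        if lo != hi:
            order.append(hi)
        lo, hi = lo + 1, hi - 1
    return order

def is_row_complete(L):
    n = len(L)
    # Pairs of horizontally adjacent entries, over all rows.
    adjacent = {(L[i][j], L[i][j + 1]) for i in range(n) for j in range(n - 1)}
    return adjacent == {(x, y) for x in range(n) for y in range(n) if x != y}

n = 8
cols = williams_order(n)
cayley = [[(i + c) % n for c in cols] for i in range(n)]
print(is_row_complete(cayley))
```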
Each of the $k$ MNOLS$(2p)$ constructed from cyclic DCA$(k+1, 2p+1;2p)$ can be obtained by reordering the rows of the Cayley table of the cyclic group, without touching the columns. Hence simultaneously reordering the columns of these nearly orthogonal Latin squares will also produce row complete pseudo-orthogonal Latin squares.
\section{The spectrum for sets of 3 mutually nearly orthogonal Latin squares}\label{sc:spec}
In 2007, Li and van Rees \cite{LvR} continued the study of $3$-MNOLS$(n)$ and conjectured that they exist for all even $n\geq 6$. As a partial answer to this question, Li and van Rees proved existence for small orders and for all orders greater than 356 (see also \cite{PR}).
\begin{theorem}\cite[Thm 4.8]{LvR} If $2p\geq 358$, then there exists a $3$-MNOLS$(2p)$. \end{theorem}
This work was extended in 2014, when Demirkale, Donovan and Khodkar \cite{DDK} developed further constructions for cyclic DCA$(4, 2p+1;2p)$ proving:
\begin{theorem}\cite{DDK} There exist $3$-MNOLS$(2p)$, where $2p\equiv $ {\rm 14, 22, 38, 46 mod 48}. \end{theorem}
The next result lists known values for cyclic DCA$(4, 2p+1;2p)$ satisfying P1 and P2, with $2p\leq 356$.
\begin{lemma} There exists cyclic DCA$(4, 2p+1;2p)$ for $2p= $ {\rm 6, 8,}$\dots$, {\rm 20, 22, 38, 46, 62, 70, 86, 94, 110, 118, 134, 142, 158, 166, 182, 190, 206, 214, 230, 238, 254, 262, 278, 286, 302, 310, 326, 334, 350}. \end{lemma}
\begin{proof} The existence of orders $6$ and $8$ was given in \cite{RSS} and orders 10, 12, 14, 16, 18, 20 in \cite{LvR}. All the remaining cases were shown to exist in \cite{DDK}.
\end{proof}
van Rees recently summarised these results and indicated that $3$-MNOLS$(2p)$ exist for all orders except possibly those given below.
\begin{lemma} \cite{priv-van-rees}\label{rem-spec} A set of $3$-MNOLS$(2p)$ exists except possibly when $2p=$ {\rm 24, 26, 28, 30, 34, 36, 42, 50, 52, 54, 58, 66, 74, 82, 92, 102, 106, 114, 116, 122, 124, 130, 138, 146, 148, 170, 172, 174, 178}. \end{lemma}
In this section we will settle the existence question for all even orders except $2p=146$. For completeness we list all values less than $2p=358$ and, given the connection with row complete Latin squares, where possible we will use cyclic difference covering arrays to construct the Latin squares. Therefore we begin this section by reviewing relevant results from \cite{Ge}, \cite{Yin1} and \cite{Yin2}, and adapting these to construct cyclic DCA$(4, 2m+1;2m)$ that satisfy P1 and P2. Where appropriate we will indicate how these results can be used to settle the spectrum question.
We begin with the following straightforward result, which is analogous to \cite[Lem 2.3]{Ge}.
\begin{lemma}\label{l:insert} Suppose that there exists an HDM$(k,n;h)$ over the group $(G,+)$ with a hole over the subgroup $H$. Further suppose there exists a DCA$(k,h+1;h)$ over $H$ satisfying P1 and P2. Then there exists a DCA$(k,n+1;n)$ over $G$ satisfying P1 and P2. Further suppose that the HDM$(k,n;h)$ and DCA$(k,h+1;h)$ are cyclic. Then there exists a cyclic DCA$(k,n+1;n)$ satisfying P1 and P2. \end{lemma}
\begin{proof} In the cyclic case, let $A=[a(i,j)]$ ($0\leq i\leq n-1-h$, $0\leq j\leq k-1$) represent the cyclic HDM$(k,n;h)$ and $B=[b(i,j)]$ ($0\leq i\leq h$, $0\leq j\leq k-1$) represent the cyclic DCA$(k,h+1;h)$. The definition of cyclic implies that $H=\{0,u, 2u, \dots,(h-1)u\},$ where $n=uh$, and the proof of Lemma \ref{diff} implies that $h$ is even and the repeated difference in $\Delta_{j,j^\prime}$, $j\neq k-1\neq j^\prime$, of $B$ is $hu/2=n/2$.
Set $Q=[q(i,j)]$ ($0\leq i\leq n$, $0\leq j\leq k-1$) to be the concatenation of $A$ with $B$; then $Q$ is a cyclic DCA$(k,n+1;n)$ that satisfies P1 and P2.
The non-cyclic case follows similarly.
\end{proof}
Next we give a general product type construction taken from \cite{Ge} and adapt it to construct cyclic difference covering arrays that satisfy P1 and P2.
\begin{lemma}\cite[Lem 2.6]{Ge}\label{l:prod-ge} If both a cyclic HDM$(k,n;h)$ and a cyclic DM$(n^\prime,k;1)$ exist, then so does a cyclic HDM$(k,nn^\prime;hn^\prime)$. In particular, if there exists a cyclic DM$(n,k;1)$ and a cyclic DM$(n^\prime,k;1)$ then there exists cyclic DM$(nn^\prime,k;1)$. \end{lemma}
The first statement of Lemma \ref{l:prod-ge} coupled with Lemma \ref{l:insert} leads to the following straightforward result.
\begin{corollary}\label{c:prod} Suppose that there exists a cyclic HDM$(k,n;h)$, a cyclic DM$(n^\prime,$ $k;1)$ and a cyclic DCA$(k,hn^\prime+1;hn^\prime)$ that satisfies P1 and P2. Then there exists a cyclic DCA$(k,nn^\prime+1;nn^\prime)$ that satisfies P1 and P2. \end{corollary}
The second statement of Lemma \ref{l:prod-ge} can also be adapted.
\begin{lemma}\label{l:prod} Suppose a cyclic DM$(n,k;1)$, a cyclic DM$(n^\prime,k;1)$ and a cyclic DCA$(k,n^\prime+1;n^\prime)$ satisfying P1 and P2 exist. Then there exists a cyclic DCA$(k,$ $nn^\prime+1;nn^\prime)$ that satisfies P1 and P2. \end{lemma}
\begin{proof} This result can be obtained by taking a hole of size 1 in Corollary \ref{c:prod} or as follows. Let $A=[a(i,j)]$ ($0\leq i\leq n-1,0\leq j\leq k-1$) represent the cyclic DM$(n,k;1)$, $B=[b(i,j)]$ ($0\leq i\leq n^\prime-1,0\leq j\leq k-1$) represent the cyclic DM$(n^\prime,k;1)$ and $C=[c(i,j)]$ ($0\leq i\leq n^\prime,0\leq j\leq k-1$) represent the cyclic DCA$(k,n^\prime+1;n^\prime)$. Recall that $a(n-1,j)=b(n^\prime-1,j)=c(n^\prime,j)=0$ for all $0\leq j\leq k-1$ and $a(i,k-1)=b(i^\prime,k-1)=c(i^{\prime\prime},k-1)=0$ for all $0\leq i\leq n-1$, $0\leq i^\prime\leq n^\prime-1$, and $0\leq i^{\prime\prime}\leq n^\prime$.
Construct a matrix $Q=[q(i,j)]$ where, for $0\leq i\leq n-1$, $0\leq i^\prime\leq n^\prime-1$, $0\leq i^{\prime\prime}\leq n^\prime$ and $0\leq j\leq k-1$, \begin{align*} q(i+i^\prime n,j)&=a(i,j)+b(i^\prime,j)n, \mbox{ when }i\neq n-1\\ q(n-1+ i^{\prime\prime}n,j)&=c(i^{\prime\prime},j)n. \end{align*} Then \begin{align*} \Delta_{j,j^\prime}&=\{a(i,j)+b(i^\prime,j)n-a(i,j^\prime)-b(i^\prime,j^\prime)n\mid 0\leq i\leq n-2,0\leq i^\prime \leq n^\prime -1\}\\ &\quad \cup \{c(i^{\prime\prime},j)n-c(i^{\prime\prime},j^\prime)n\mid 0\leq i^{\prime\prime}\leq n^\prime\},\\ &= ({\mathbb Z}_{nn^\prime}\setminus\{0,n, 2n,\dots, n(n^\prime-1)\})\cup \{0,n, 2n,\dots, nn^\prime/2,nn^\prime/2, \dots,\\ & \quad n(n^\prime-1)\}. \end{align*} Properties P1 and P2 follow as in the proof of Lemma \ref{l:insert}.
\end{proof}
This result can be generalised to construct non-cyclic difference covering arrays as in \cite{Yin1}.
We now combine these results with various results of \cite{colbourn}, \cite{Ge} and \cite{Yin2} to obtain results for cyclic DCA$(4, 2p+1;2p)$, satisfying P1 and P2, implying new existence results for row complete $3$-MNOLS$(2p)$, where $2p<358$. In the lists below $*$ indicates that the existence of $3$-MNOLS$(2p)$ was previously unknown.
In doing this we will settle the remaining cases in Lemma \ref{rem-spec}, with the exception of $2p=146$. Here we believe that there exists a DCA$(4, 147;146)$ satisfying P1 and P2 but have been unable to verify it.
\begin{theorem}\cite[Thm 17.6, p 411]{colbourn}\label{prime-cdm} If $n$ is a prime greater than or equal to $k$, then there exists a cyclic DM$(n,k;1)$. \end{theorem}
\begin{lemma}\cite[Lem 2.5]{Yin2}\label{hdm-2prime} Let $n\geq 5$ be prime. Then there exists a cyclic HDM$(4, 2n;2)$. \end{lemma}
\begin{lemma} There exists cyclic DCA$(4, 2p+1;2p)$ for $2p=50^*$, {\rm 98}, $170^*$, {\rm 242, 290, 338}. \end{lemma}
\begin{proof} Corollary \ref{c:prod} together with Theorem \ref{prime-cdm} and Lemma \ref{hdm-2prime} can be used first to construct a cyclic HDM$(4, 2p;h)$ with $2p(h)=$ {\rm 50(10), 98(14), 170(10), 242(22), 290(10), 338(26)} and then the required DCA$(4, 2p+1;2p)$.
\end{proof}
Lemmas \ref{spec:90}, \ref{spec:60} and \ref{spec:140} do not document any new existence results; however, they do verify that for the given orders cyclic DCA$(4, 2p+1;2p)$ satisfying P1 and P2 exist.
\begin{lemma}\cite[Thm 2.1]{Yin2}\label{hdm-23} Let $n$ be an odd positive integer satisfying gcd$(n, 9)\neq 3$. Then there exists a cyclic HDM$(4, 2n;2)$. \end{lemma}
\begin{lemma}\label{spec:90} There exists cyclic DCA$(4, 2p+1;2p)$ for $2p=$ {\rm 90, 126, 198, 234, 306, 342}. \end{lemma}
\begin{proof} Corollary \ref{c:prod} together with Theorem \ref{prime-cdm} and Lemma \ref{hdm-23} can be used first to construct a cyclic HDM$(4, 2p;h)$ with $2p(h)=$ {\rm 90(10), 126(14), 198(22), 234(26), 306(34), 342(38)} and then the required DCA$(4, 2p+1;2p)$.
\end{proof}
\begin{theorem}\cite[Thm 2.3]{Yin2}\label{hdm-4prime} Let $n\geq 4$ and $n=2^\alpha 3^\beta p_1^{\alpha_1}\dots p_t^{\alpha_t},$ where $(\alpha,\beta)\neq (1,0)$, $\alpha_i \geq 0$ and the prime factors $p_i\geq 5$ for $1 \leq i \leq t$. Then there exists a cyclic HDM$(4, 2n;2)$. \end{theorem}
\begin{lemma}\label{spec:60} There exists cyclic DCA$(4, 2p+1;2p)$ for $2p=$ {\rm 60, 80, 84, 100, 112, 120, 132, 156, 160, 168, 176, 180, 204, 208, 224, 228, 240, 252, 264, 272, 276, 300, 304, 312, 320, 336, 352}. \end{lemma}
\begin{proof} Corollary \ref{c:prod} together with Theorem \ref{prime-cdm} and Theorem \ref{hdm-4prime} can be used first to construct a cyclic HDM$(4, 2p;h)$ with $2p(h)=$ 60(10), 80(10), 84(14), 100(10), 112(14), 120(10), 132(22), 156(26), 160(10), 168(14), 176(22), 180(10), 204(34), 208(26), 224(14), 228(38), 240(10), 252(14), 264(22), 272(34), 276(46), 300(10), 304(38), 312(26), 320(10), 336(14), 352(22) and then the required \\ DCA$(4, 2p+1;2p)$.
\end{proof}
\begin{theorem}\cite[Thm 2.4]{Yin2}\label{hdm-4hole} Let $n= p_1^{\alpha_1}\dots p_t^{\alpha_t}$,
where $\alpha_i \geq 0$ and the prime factors $p_i\geq 5$ for $1 \leq i \leq t$. Then there exists a cyclic HDM$(4, 4n;4)$. \end{theorem}
\begin{lemma}\label{spec:140} There exists cyclic DCA$(4, 2p+1;2p)$ for $2p=$ {\rm 140, 196, 220, 260, 308, 340}. \end{lemma}
\begin{proof} Corollary \ref{c:prod} together with Theorem \ref{prime-cdm} and Theorem \ref{hdm-4hole} can be used first to construct a cyclic HDM$(4, 2p;h)$ with $2p(h)=$ {\rm 140(28), 196(28), 220(20), 260(20), 308(28), 340(20)} and then the required DCA$(4, 2p+1;2p)$.
\end{proof}
\begin{theorem}\cite[Thm 3.10]{Ge}\label{3-cdm} A cyclic DM$(3^i, 5;1)$ exists for all $i\geq 3$. \end{theorem}
\begin{lemma} There exists cyclic DCA$(4, 2p+1;2p)$ for $2p=$ {\rm 216, 270, 324}. \end{lemma}
\begin{proof} Corollary \ref{c:prod} together with Theorem \ref{3-cdm} and Theorem \ref{hdm-4prime} can be used to first construct a cyclic HDM$(4, 2p;h)$ with $2p(h)=$ 216(54), 270(54), 324(54) and then the required DCA$(4, 2p+1;2p)$.
\end{proof}
The next result from Yin's paper \cite{Yin2} is interesting in that it allows us to construct difference covering arrays and so nearly orthogonal Latin squares of order $6n$, and gives many values that were previously unresolved (the obstruction was the non-existence of MNOLS of order $6$).
\begin{theorem} \cite[Thm 2.2]{Yin2}\label{hdm-6} Let $n$ be a positive integer of the form $p_1^{\alpha_1}p_2^{\alpha_2}\dots p_t^{\alpha_t}$, where $\alpha_i \geq 0$ and the prime factors $p_i\geq 5$ for $1 \leq i \leq t$. Then there exists a cyclic HDM$(4, 6n;6)$. \end{theorem}
\begin{corollary}\label{ls:sixes} Let $n$ be an integer of the form $p_1^{\alpha_1}p_2^{\alpha_2}\dots p_t^{\alpha_t}$, where $\alpha_i \geq 0$ and the prime factors $p_i\geq 5$ for $1 \leq i \leq t$. Then there exists a cyclic DCA$(4, 6n+1;6n)$ satisfying P1 and P2. Consequently there exists cyclic DCA$(4, 2p+1;2p)$ for $2p=30^*, 42^*, 66^*, 78, 102^*, 114^*, 138^*, 150, 174^*,\\ 186, 210, 222, 246, 258, 282, 294, 318, 330, 354$. \end{corollary}
The following result can be verified using the direct constructions given later in Section \ref{constructions}.
\begin{lemma} There exists cyclic DCA$(4, 2p+1;2p)$ for $2p=26^*$, {\rm 266}, $2p=$ {\rm 40, 56, 88, 104, 136, 152, 184, 200, 232, 248, 280, 296, 328, 344} and $2p=34^*$, $58^*$, $82^*$, $106^*$, $130^*$, {\rm 154}, $178^*$, {\rm 202, 226, 250, 274, 298, 322, 346}. \end{lemma}
\begin{proof} Corollary \ref{cor-2m}, given in Section \ref{constructions}, verifies the existence of cyclic DCA$(4, 2p+1;2p)$ with $2p=26^*, 266.$
Corollary \ref{cor-4m+1}, given in Section \ref{constructions}, verifies the existence of cyclic DCA$(4, 2p+1;2p)$ with $2p=$ 40, 56, 88, 104, 136, 152, 184, 200, 232, 248, 280, 296, 328, 344.
Theorem \ref{thm-6mu+4}, given in Section \ref{constructions}, verifies the existence of cyclic DCA$(4, 2p+1;2p)$ with $2p=34^*, 58^*, 82^*, 106^*, 130^*$, 154, $178^*$, 202, 226, 250, 274, 298, 322, 346.
\end{proof}
\begin{lemma} There exists cyclic DCA$(4, 2p+1;2p)$ for $2p=24^*, 28^*$, {\rm 32}, $36^*$, {\rm 44, 48}, $52^*$, $54^*$. \end{lemma}
\begin{proof} These results have been verified by computer searches. With the all-zero row and column of the DCA$(4, 2p+1;2p)$ removed, the first column is given by $[0, 1, 2, \ldots, 2p-1]$, the second column by $[1, 3, \ldots, 2p-1, 0, 2, 4, \ldots, 2p-2]$ and the third column by
\noindent $2p=24$: [2, 0, 3, 1, 14, 21, 20, 19, 23, 15, 6, 18, 16, 10, 17, 8, 11,
22, 5, 13, 4, 9, 7, 12]
\noindent $2p=28$: [2, 0, 3, 1, 11, 16, 22, 25, 20, 23, 4, 8, 21, 5, 18, 10, 19,
13, 24, 27, 7, 26, 15, 9, 6, 14, 17, 12]
\noindent $2p=32$: [2, 0, 3, 6, 1, 13, 22, 30, 21, 25, 28, 26, 7, 5, 23, 20, 12,
10, 24, 17, 31, 15, 29, 27, 11, 14, 4, 9, 8, 19, 18, 16]
\noindent $2p=36$: [5, 35, 13, 20, 11, 9, 1, 31, 10, 2, 30, 33, 4, 34, 32, 25, 28, 16,
27, 22, 3, 29, 19, 24, 18, 15, 6, 23, 17, 7, 0, 8, 14, 12, 21, 26]
\noindent $2p=44$: [39, 13, 26, 21, 35, 3, 17, 16, 40, 28, 38, 25, 6, 10, 34, 5, 18, 30, 43, 15, 19, 36, 7, 24, 32, 14, 4, 0, 31, 12, 2, 9, 23, 37, 11, 42, 41, 29, 20, 1, 33, 27, 8, 22]
\noindent $2p=48$: [5, 41, 23, 40, 1, 39, 34, 25, 28, 8, 4, 9, 21, 30, 43, 18, 12, 2, 42, 45,
32, 37, 33, 0, 26, 15, 13, 22, 10, 35, 44, 7, 36, 16, 27, 19, 46, 38,
3, 47, 31, 29, 17, 14, 11, 24, 20, 6]
\noindent $2p=52$: [18, 12, 50, 37, 16, 6, 45, 4, 31, 34, 47, 21, 29, 2, 5, 22, 38, 3, 39,
27, 0, 15, 51, 7, 28, 24, 42, 40, 48, 32, 9, 26, 20, 11, 1, 41, 19,
35, 43, 13, 49, 33, 14, 17, 46, 8, 36, 23, 10, 30, 25, 44]
\noindent $2p=54$: [6, 5, 31, 27, 20, 38, 19, 4, 30, 51, 3, 52, 49, 14, 48, 23, 41, 12, 25,
0, 32, 40, 21, 50, 9, 45, 16, 1, 46, 11, 28, 42, 47, 35, 39, 2, 22, 13,
34, 33, 24, 44, 15, 53, 7, 17, 37, 36, 26, 18, 10, 43, 29, 8].
\end{proof}
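As a sanity check (a verification sketch of ours, not part of the original computer search), the following snippet re-verifies the stated properties for the columns of orders $2p=24$ and $28$, taking the second column as $[1,3,\dots,2p-1,0,2,\dots,2p-2]$.

```python
from collections import Counter

def check_stripped_dca(cols):
    # Each column a permutation of Z_{2p}; each pairwise difference column
    # covers Z_{2p} with 0 removed and p doubled (properties P1 and P2).
    n = len(cols[0])
    p = n // 2
    target = Counter(range(1, n))
    target[p] += 1
    if any(sorted(c) != list(range(n)) for c in cols):
        return False
    for s in range(3):
        for t in range(s + 1, 3):
            diffs = Counter((cols[s][i] - cols[t][i]) % n for i in range(n))
            if diffs != target:
                return False
    return True

third = {24: [2, 0, 3, 1, 14, 21, 20, 19, 23, 15, 6, 18, 16, 10, 17, 8, 11,
              22, 5, 13, 4, 9, 7, 12],
         28: [2, 0, 3, 1, 11, 16, 22, 25, 20, 23, 4, 8, 21, 5, 18, 10, 19,
              13, 24, 27, 7, 26, 15, 9, 6, 14, 17, 12]}
for n, c3 in third.items():
    c1 = list(range(n))
    c2 = list(range(1, n, 2)) + [0] + list(range(2, n, 2))
    print(n, check_stripped_dca([c1, c2, c3]))
```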
For the remaining values 64, 68, 72, 74, 76, 92, 96, 108, 116, 122, 124, 128, 144, 146, 148, 162, 164, 172, 188, 192, 194, 212, 218, 236, 244, 256, 268, 284, 288, 292, 314, 316, 332, 348, 356 we were unable to construct cyclic difference covering arrays; however, for completeness, and to answer questions about the spectrum, we give full details verifying existence.
It should be noted that it is possible to construct DCA$(4, 2p+1;2p)$ satisfying P1 and P2 for some of these orders; however, our construction does not give cyclic difference covering arrays, and so the details have been omitted here.
For the Li and van Rees conjecture \cite[Conjecture 5.1]{LvR} we require two more results from their paper. The second result uses group divisible designs: A ${\mathcal K}$-{\em group divisible design} of {\em type} $g_1^{a_1}g_2^{a_2}\dots g_s^{a_s}$ is a partition ${\mathcal G}$ of a finite set ${\mathcal V}$, of cardinality $v=\sum_{i=1}^s a_ig_i$, into $a_i$ {\em groups} of size $g_i$, $1\leq i\leq s$, together with a family of subsets ({\em blocks}) ${\mathcal B}$ of ${\mathcal V}$ such that: 1) if $B \in {\mathcal B}$, then $|B| \in {\mathcal K}$; 2) every pair of distinct elements of ${\mathcal V}$ occurs in exactly one block of ${\mathcal B}$ or exactly one group of ${\mathcal G}$, but not both; and 3) $|{\mathcal G}| > 1$.
\begin{theorem}\label{vanrees1}\cite[Thm 4.1]{LvR} Suppose there exists $k$-MNOLS$(2p)$, $k$-MOLS$(2p)$, and $k$-MOLS$(n)$. Then there exists $k$-MNOLS$(2pn)$. \end{theorem}
\begin{theorem}\label{vanrees2}\cite[Thm 4.5]{LvR}\label{LvR} Suppose there exists a ${\mathcal K}$-GDD of type $g_1^{a_1}\dots g_s^{a_s}$. Further suppose that for any group size $g_i$ there exists a $s$-MNOLS$(g_i)$ and for any block size $k\in {\mathcal K}$ there exists a $s$-IMOLS$(k)$. Then there are $s$-MNOLS$(\sum_{i=1}^s a_ig_i)$. \end{theorem}
\begin{lemma} There exists $3$-MNOLS$(2p)$ for $2p =$ {\rm 76}, $92^*$, {\rm 96, 108}, $116^*$, $124^*,$ {\rm 128, 144}, $148^*$, {\rm 164}, $172^*$, {\rm 188, 192, 212, 236, 244, 256, 268, 284, 288, 292, 316, 332, 348} and {\rm 356}. \end{lemma}
\begin{proof}
The $3$-MNOLS$(2p)$, $2p= 76(12^5 16^1), 92(20^4 12^1), 96(16^5 16^1), 108(20^5 8^1),$ \\ $ 116(20^5 16^1), 124(20^5 24^1), 128(24^5 8^1), 144(24^5 24^1), 148(24^5 28^1), 164(28^5 24^1),$\\ $ 172(32^5 12^1), 188(32^5 28^1), 192(32^5 36^1), 212(36^5 32^1), 236(40^5 36^1), 244(40^5 44^1),$ \\ $256(44^5 36^1), 268(44^5 48^1), 284(48^5 44^1), 288(48^5 48^1), 292(48^5 52^1), 316(52^5 56^1),$ \\$ 332(56^5 52^1), 348(56^5 68^1)$ and $356(60^5 56^1)$ can be constructed by applying $5$-GDDs, which exist by \cite[Thm 4.17, p 258]{colbourn}, in Theorem \ref{vanrees2}. Here the bracketed information gives the type of the GDD. \end{proof}
\begin{lemma}There exists $3$-MNOLS$(2p)$ for $2p =$ {\rm 64, 68, 72}, $74^*$, {\rm 122, 162, 194, 218} and {\rm 314}. \end{lemma}
\begin{proof} $3$-MNOLS$(2p)$ for $2p =$ 64, 68, 72, $74^*$, 122, 162, 194 and 218 can be constructed by applying $8$-GDD$(8^8)$, $\{7, 8, 9\}$-GDD$(8^7 6^2)$, $9$-GDD$(8^9)$, $\{7, 8, 9\}$-GDD$(8^7 6^3)$, $\{7, 8\}$-GDD$(16^7 10^1)$, $\{11, 12, 13\}$-GDD$(12^{12} 10^1 8^1)$,\\ $\{11, 12, 13\}$-GDD$(16^{11} 10^1 8^1)$,
$\{13, 14\}$-GDD$(16^{13} 10^1)$ and $\{8, 9, 10,$ $11\}$-\\GDD$(32^8 14^2 10^1)$, which exist by finite field constructions, in Theorem \ref{vanrees2}, respectively. \end{proof}
All the results of this section combine to the following theorem.
\begin{theorem} There exists a set of $3$-MNOLS$(2p)$ for each positive integer $p\geq 3$, except possibly $p=73$. \end{theorem}
\begin{conjecture} There exists a DCA$(4, 2p+1;2p)$ satisfying P1 and P2 for all positive integers $p\geq 3$. \end{conjecture}
\section{Construction of difference covering arrays DCA$(4, 2m+1;2m)$}\label{constructions}
This section is devoted to giving new constructions for families of cyclic DCA$(4,$ $2m+1;2m)$ when $m = 2k + 1$, $ m = 8k + 4$ and $ m = 3k + 2$, respectively. In each of these cases the difference covering arrays satisfy P1 and P2 and so they can be used to construct MNOLS$(2m)$.
When using a cyclic DCA$(4, 2m+1;2m)$ to construct nearly orthogonal Latin squares we strip off the last row and last column of zeros. Thus, to reduce the complexity of the notation and to avoid confusion, we will assume that we are constructing a $2m\times 3$ array $Q=[q(i,j)]$ that satisfies: \begin{itemize} \item each column is a permutation of ${\mathbb Z}_{2m}$ and
\item $\Delta_{j,j^\prime}=\{q(i,j)-q(i,j^\prime)\mid 0\leq i\leq 2m-1\}=\{1, 2,\dots, m,m,\dots, 2m-1\}$, with repetition retained. \end{itemize} Also we will use the following notation: $q(a,0)=a$ (or $q(\alpha,0)=a(\alpha)$), $q(a, 1)=b(a)$ (or $q(\alpha, 1)=b(\alpha)$), and
$q(a, 2)=c(a)$ (or $q(\alpha, 2)=c(\alpha)$).
The following lemmas document some well known results, stated without proof, which will be used extensively in the proof of subsequent results.
\begin{lemma}\label{basic0} For all integers $x,y,z$, ${\rm gcd}(x+yz,z)={\rm gcd}(x,z).$ \end{lemma}
\begin{lemma}\label{basic} Let $g$ and $p$ be positive integers and $h$ a non-negative integer. Working modulo $2p$, if ${\rm gcd}(g, 2p)=1$ then \begin{align*} \{gx+h\mid 0\leq x\leq 2p-1\}&={\mathbb Z}_{2p}, \end{align*} or if ${\rm gcd}(g, 2p)=r$ and $h\equiv s\mod r$ then \begin{align*} \{gx+h\mid 0\leq x\leq 2p/r-1\} &=\{rx+s\mid 0\leq x\leq 2p/r-1\}. \end{align*} \end{lemma}
\subsection{Construction for general families DCA$(4, 2m+1;2m)$ for some odd $m$}
In this subsection we give a general construction for a difference covering array DCA$(4, 2m+1;2m)$, for $m$ odd. In particular, for $i \not\equiv 2\; \rm{mod}\; 3$ a non-negative integer and $k = 2i^2 + 7i + 6$, we present an infinite family of DCA$(4, 2m+1;2m)$ for $m = 2k + 1$. The proof that such a difference covering array exists uses the results presented in the following lemma. Note that in this section, unless otherwise stated, all arithmetic is modulo $2m$.
\begin{lemma}\label{lem:gcd} Let $f$ and $m$ be integers such that ${\rm gcd}(f, 2m)=2$,
${\rm gcd}(f+2, 2m)=2$, and $f^2+f+1\equiv m\mod 2m$. Then \begin{align} {\rm gcd}(f,m)&=1,\label{gcd f}\\ {\rm gcd}(f+1,m)&=1,\label{gcd f+1}\\ {\rm gcd}(f-1,m)&=1,\label{gcd f-1}\\ {\rm gcd}(2f+1,m)&=1,\label{gcd 2f+1}\\ mf&\equiv 0\mod 2m.\label{mf=0} \end{align} \end{lemma} \begin{proof}
Eq \ref{gcd f}: \quad Any common divisor of $f$ and $m$ divides both $f^2+f$ and $f^2+f+1$ (the latter since $f^2+f+1\equiv m\mod 2m$), and hence divides $1$; thus ${\rm gcd}(f,m)=1$. Note also that since $f$ is even, $f^2+f+1$ is odd, implying $m$ is odd.
Eq \ref{gcd f+1}: \quad Since $f+1 \equiv -f^2\mod m$ and ${\rm gcd}(f,m)=1$ we have $1={\rm gcd}(f,m)={\rm gcd}(f^2,m)={\rm gcd}(f+1,m).$
Eq \ref{gcd f-1}: \quad Since $ f-1 = (-f^2-f-1)+f^2+2f \equiv f(f+2) \mod m$, ${\rm gcd}(f,m)=1$ and ${\rm gcd}(f+2,m)=1$, we have $1= {\rm gcd}(f(f+2),m)={\rm gcd}(f-1,m).$
Eq \ref{gcd 2f+1}: \quad Since $2f+1 \equiv -f(f-1)\mod m$, ${\rm gcd}(f,m)=1$ and ${\rm gcd}(f-1,m)=1$, $1={\rm gcd}(f(f-1),m)={\rm gcd}(2f+1,m).$
Eq \ref{mf=0}: This follows from the fact that $f$ is even. \end{proof}
For a suitable choice of $f$, we divide the domain of $a$ into the subintervals $[0,m+f]$, $[m+f+1,m-1]$, $[m,m-f-1]$ and $[m-f, 2m-1]$, read modulo $2m$ and with all endpoints included.
\begin{example} To aid understanding we begin with an example where $m=13$ and $f=16$ and give the transpose of the difference covering array DCA$(4, 27;26)$. The key to understanding the proof is to recognise that within the subintervals $I_1 =[0,\dots, 3], I_2 = [4, \dots, 12], I_3=[13, \dots, 22]$ and $I_4 = [23, \dots, 25]$, the values of $a$, $b(a)$ and $c(a)$ increase by a constant ``jump'', respectively $1$, $f=16$ and $-(f+1)=9$. This implies that the differences will also increase by a constant. By carefully choosing the start value on each subinterval it is possible to obtain the required values in ${\mathbb Z}_{2m}$. The value $m=13$ is boldfaced in the differences.
{\scriptsize $$
\begin{array}{r|cccc|ccccccccc|cccccccccc|ccc}
&\multicolumn{4}{|c|}{I_1}&\multicolumn{9}{|c|}{I_2}&\multicolumn{10}{|c|}{I_3}&\multicolumn{3}{|c}{I_4}\\ \hline
a & 0 & 1 & 2 &
3 & 4 & 5 &
6 & 7 & 8 &
9 & 10 & 11 &
12 & 13 & 14 &
15 & 16 & 17 &
18 & 19 & 20 &
21 & 22 & 23 &
24 & 25 \\
b(a) & 13 & 3 &
19 & 9 & 25 &
15 & 5 & 21 &
11 & 1 & 17 &
7 & 23 & 2 &
18 & 8 & 24 &
14 & 4 & 20 &
10 & 0 & 16 &
6 & 22 & 12 \\
c(a) & 15 & 24 & 7 &
16 & 12 & 21 &
4 & 13 & 22 &
5 & 14 & 23 &
6 & 0 & 9 &
18 & 1 & 10 &
19 & 2 & 11 &
20 & 3 & 25 &
8 & 17 \\
b(a)-a & {\bf 13} & 2 & 17 & 6 &
21 & 10 & 25 &
14 & 3 & 18 &
7 & 22 & 11 &
15 & 4 & 19 &
8 & 23 & 12 &
1 & 16 & 5 &
20 & 9 & 24 &
{\bf 13} \\
c(a)-a & 15 & 23 & 5 &
{\bf 13} & 8 & 16 & 24 &
6 & 14 & 22 &
4 & 12 & 20 & {\bf 13} & 21 & 3 &
11 & 19 & 1 &
9 & 17 & 25 &
7 & 2 & 10 &
18 \\
c(a)-b(a) & 2 & 21 & 14 &
7 & {\bf 13} & 6 & 25 & 18 &
11 & 4 & 23 &
16 & 9 & 24 &
17 & 10 & 3 &
22 & 15 & 8 &
1 & 20 & {\bf 13} & 19 & 12 &
5 \\ \end{array} $$ } \end{example}
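The example can also be checked mechanically. The following sketch (ours, for illustration) transcribes the rows $b(a)$ and $c(a)$ above and confirms that each is a permutation of ${\mathbb Z}_{26}$ and that all three pairwise difference multisets equal $({\mathbb Z}_{26}\setminus\{0\})\cup\{13\}$.

```python
from collections import Counter

m, f = 13, 16
n = 2 * m
# Rows b(a) and c(a) of the DCA(4, 27; 26) example, a = 0, ..., 25.
b = [13, 3, 19, 9, 25, 15, 5, 21, 11, 1, 17, 7, 23, 2, 18, 8, 24, 14, 4,
     20, 10, 0, 16, 6, 22, 12]
c = [15, 24, 7, 16, 12, 21, 4, 13, 22, 5, 14, 23, 6, 0, 9, 18, 1, 10, 19,
     2, 11, 20, 3, 25, 8, 17]

# Target multiset: Z_26 with 0 removed and m = 13 doubled.
target = Counter(range(1, n))
target[m] += 1
ok = (sorted(b) == list(range(n)) and sorted(c) == list(range(n))
      and Counter((b[a] - a) % n for a in range(n)) == target
      and Counter((c[a] - a) % n for a in range(n)) == target
      and Counter((c[a] - b[a]) % n for a in range(n)) == target)
print(ok)
```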
\begin{theorem} \label{QD(2m, 3)} Let $f$ and $m$ be natural numbers such that ${\rm gcd}(f, 2m)=2$, ${\rm gcd}(f+2, 2m)=2$, $f^2+f+1\equiv m\mod 2m$, and $m+3\leq f\leq 2m-4.$ Then a cyclic DCA$(4, 2m+1;2m)$ satisfying P1 and P2 exists. \end{theorem}
\begin{proof} The proof is by construction with the values of DCA$(4, 2m+1;2m)$ as given in Figure \ref{values-2m1}, with the 3 columns of $Q=[q(i,j)]$ given by $q(a,0)=a$, $q(a, 1)=b(a)$, $q(a, 2)=c(a)$. We will show that $Q$ has the required properties.
\begin{figure}
{\scriptsize
$$
\begin{array}{l|c|c|c|c|c}
 & b(a) & c(a) & b(a)-a & c(a)-a & c(a)-b(a)\\ \hline
a\in I_1 & af+m & -(a-1)(f+1)-2 & a(f-1)+m & -(a-1)(f+2)-3 & -a(2f+1)+f-m-1\\
a\in I_2 & af+m & -(a-m-f-1)(f+1)+m-1 & a(f-1)+m & -(a-m-f)(f+2) & -a(2f+1)+(m+f)(f+2)-m\\
a\in I_3 & (a+1)f+m-1 & -(a-m)(f+1) & (a+1)(f-1)+m & -(a-m)(f+2)+m & -a(2f+1)-f+1\\
a\in I_4 & (a+1)f+m-1 & -(a+2f)(f+1)-2 & (a+1)(f-1)+m & -(a+2f)(f+2)+2f-2 & -(a+f+1)(2f+1)-m
\end{array}
$$}
\caption{Entries are elements of ${\mathbb Z}_{2m}$, where $q(a,0)=a,$ $q(a, 1)=b(a)$ and $q(a, 2)=c(a)$ in the array $Q=[q(i,j)]$, on the subintervals $I_1=[0,m+f]$, $I_2=[m+f+1,m-1]$, $I_3=[m,m-f-1]$ and $I_4=[m-f,2m-1]$. The last three columns give the differences.}
\label{values-2m1}
\end{figure}
If $m+3\leq f\leq 2m-4$ and $f$ is even then, working modulo $2m$, $3\leq m+f\leq m-4$ and so the intervals $[0,m+f]$ and $[m+f+1,m-1]$ are non-empty. Further, $m+4\leq m-f \leq 2m-3$ and so the intervals $[m,m-f-1]$ and $[m-f, 2m-1]$ are non-empty.
Given that $f$ is even and $m$ is odd, by Lemma \ref{basic} $$ af+m\equiv 1\mod 2 \Longrightarrow \{b(a)\mid a\in I_1\cup I_2\}=\{2g+1\mid 0\leq g \leq m-1\},$$ $$(a+1)f+m-1\equiv 0\mod 2 \Longrightarrow \{b(a)\mid a\in I_3\cup I_4\}=\{2g\mid 0\leq g \leq m-1\},$$ and so $\{b(a)\mid 0\leq a\leq 2m-1\}=[2m]$, where $[2m]$ denotes $\{0,1,\dots,2m-1\}$.
On each of the subintervals $c(a)$ takes the form $ag+h$, where $g=-(f+1)$, so the ``jump'' size is $-(f+1)$ and $$\begin{array}{rcl} c(m)&=&0,\\ c(m+f+1)-c(m-f-1)&=& -m-f(f+1)+m-2+m\\ &&-f(f+1)-(f+1)-m\\ &=&-2f^2-2f-2-(f+1)=-(f+1),\\ c(0)-c(m-1)&=&
-(f+1),\\ c(m-f)-c(m+f)&=&
-(f+1),\\ c(m)-c(2m-1)&=& -(f+1). \end{array}$$ Thus by reordering the subintervals as $I_3,I_2,I_1,I_4$, and noting for instance, $c(m-f-1)-(f+1)=c(m+f+1)$, we get $\{c(a)\mid 0\leq a\leq 2m-1\}=[2m]$.
For $b(a)-a$, $c(a)-a$ and $c(a)-b(a)$ we are required to show that for $a\in[2m]$ the differences cover the multiset $\{1, 2,\dots,m-1,m,m,m+1,\dots, 2m-1\}=([2m]\setminus \{0\})\cup\{m\}$.
For $b(a)-a$, the subintervals are taken in natural order $I_1,I_2,I_3,I_4$. Starting at $a=0$ and finishing at $a=2m-1$, we have $b(0)-0=m=(2m-1+1)(f-1)+m= b(2m-1)-(2m-1)$, so the difference $m$ occurs twice. Further, the ${\rm gcd}(f-1, 2m)=1$ implies that $a(f-1)+m$, $0\leq a\leq m-1$, are all distinct, as are $(a+1)(f-1)+m$, $m\leq a\leq 2m-1$ and
\begin{align*}
b(m)-m&=(m+1)(f-1)+m=f-1,\\
b(m-1)-(m-1)&=(m-1)(f-1)+m=-(f-1). \end{align*} Thus there is a jump of $-2(f-1)$ between $a=m-1$ and $a=m$ and the difference 0 is omitted, implying $\{b(a)-a\mid0\leq a\leq 2m-1\}=([2m]\setminus \{0\})\cup\{m\}$.
For $c(a)-a$, since $f+2$ is even and ${\rm gcd}(f+2,m)=1$, these values are all distinct on each of the subintervals, $|I_3\cup I_1|=m+1$, $|I_4\cup I_2|=m-1$, and \begin{align*}\hspace*{-1cm} c(m)-m&=-m(f+2)+m=m,\\ c(m+f)-(m+f)&=-(m+f-1)(f+2)-3=m,\\
(c(0)-0)-(c(m-f-1)-(m-f-1))&= m-f(f+2)-3=-(f+2),\\ c(2m-1)-(2m-1)&=-(2m-1)(f+2)=f+2,\\ c(m+f+1)-(m+f+1)&= -f^2-2f+m-3 =-(f+2).
\end{align*} Thus $-(a-1)(f+2)-3,-a(f+2)+m\equiv 1\mod 2$, hence, $$\{c(a)-a\mid a\in I_3\cup I_1\}=\{2g+1\mid 0\leq g\leq m-1\}\cup\{m\}.$$ In addition, $-a(f+2)+m+f-1,-a(f+2)\equiv 0\mod 2$ implies that $$\{c(a)-a\mid a\in I_4\cup I_2\}=\{2g\mid 1\leq g\leq m-1\},$$ giving $\{c(a)-a\mid0\leq a\leq 2m-1\}=([2m]\setminus \{0\})\cup\{m\}$.
For $c(a)-b(a)$, since ${\rm gcd}(2f+1, 2m)=1$, these values are all distinct on the subintervals, $c(m+f+1)-b(m+f+1)=-2f^2-2f-2+m=m=m+2f^2+2f+2= c(m-f-1)-b(m-f-1)$, and \begin{align*} c(0)-b(0)-(c(m-1)-b(m-1))& =-(2f+1),\\ c(m+f)-b(m+f)&=
2f+1,\\ c(m-f)-b(m-f)&
=-(2f+1).
\end{align*} Thus when the subintervals are reordered to $I_2,I_1,I_4,I_3$ we may verify that
$\{c(a)-b(a)\mid 0\leq a\leq 2m-1\}=([2m]\setminus \{0\})\cup\{m\}$. Note that the values of $c(a)-b(a)$ start and finish on $m$ and the value $0$ is omitted between $a=m+f$ and $a=m-f$. \end{proof}
\begin{corollary}\label{cor-2m} Let $i \not\equiv 2\; \rm{mod}\; 3$ be a non-negative integer, $k = 2i^2 + 7i + 6$ and $m = 2k + 1$.
Then there exists a cyclic DCA$(4, 2m+1;2m)$ satisfying P1 and P2; as $i$ varies, this gives an infinite family.
\end{corollary}
\begin{proof} Take $f = m + 3 + 2i$; then $f$ is even. In addition $m + 3 \leq f \leq 2m -4 $, since $ i \geq 0$ and $2m - 4 = (m + 3) + (m - 7) = (m + 3) + ( 4i^2 + 14 i + 6 ) \geq m + 3 + 2i = f$. Now \begin{align*} f^2 + f + 1
& \equiv 4i^2 + 14i + 13 \; \rm{mod}\; 2m\\
& = 2(2i^2 + 7i + 6 ) + 1 = m \; \rm{mod}\; 2m. \end{align*} Further, applying Lemma \ref{basic0} repeatedly, \begin{align*} \gcd(f, 2m) &= 2(\gcd(2i^2+ 8i + 8, (2i^2+ 8i + 8) + 2i^2 + 6i + 5))\\
& = 2(\gcd(2i + 3, 2i^2+ 4i + 2 + (2i + 3) ))\\
& = 2(\gcd(2i + 3, 2(i + 1)^2))= 2(\gcd(2i + 3, i + 1))=2 \\
\end{align*} Also, \begin{align*} \gcd(f + 2, 2m) &= 2(\gcd(2i^2+ 8i + 9, (2i^2+ 8i + 9) + 2i^2 + 6i + 4))\\
& = 2(\gcd(2i + 5, 2(i + 2)(i + 1)))\\
& = 2(\gcd(2i + 5, (i + 2)(i + 1))). \end{align*} Now $\gcd(2i + 5, i + 2) = \gcd(2(i + 2) + 1, i + 2)=1.$ Whereas \begin{align*}
\gcd(2i + 5, i + 1) & = \gcd(2(i + 1) + 3, i + 1) = \gcd(3, i + 1)\\
& \neq 1 \;\;\mbox{when} \;\; i + 1 \equiv 0\; \rm{mod}\; 3 \; \mbox{or equivalently}\; i \equiv 2\; \rm{mod}\; 3. \end{align*}
Thus taking $i \not\equiv 2 \; \rm{mod}\; 3$ we can construct a DCA$(4, 2m+1;2m)$ as per Theorem \ref{QD(2m, 3)}. \end{proof}
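As a quick numerical sanity check of the corollary (a sketch, not part of the proof, using only the Python standard library), one can verify the conditions established above for the first few admissible values of $i$:

```python
from math import gcd

# Check of the corollary: for i not congruent to 2 (mod 3), with
# k = 2i^2 + 7i + 6, m = 2k + 1 and f = m + 3 + 2i, the proof asserts:
# f even, m + 3 <= f <= 2m - 4, gcd(f, 2m) = 2, gcd(f + 2, 2m) = 2,
# and f^2 + f + 1 congruent to m (mod 2m).
for i in [x for x in range(20) if x % 3 != 2]:
    k = 2 * i * i + 7 * i + 6
    m = 2 * k + 1
    f = m + 3 + 2 * i
    assert f % 2 == 0
    assert m + 3 <= f <= 2 * m - 4
    assert gcd(f, 2 * m) == 2
    assert gcd(f + 2, 2 * m) == 2
    assert (f * f + f + 1) % (2 * m) == m
print("all conditions verified")
```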
\subsection{Construction of difference covering arrays DCA$(4, 4m+1;4m)$}
In this subsection we give a general construction for a difference covering array DCA$(4, 4m+1;4m)$. It will be shown that for all non-negative integers $k$, such that $3 \!\nmid \!(2k + 1)$, this construction gives an infinite family of DCA$(4, 16k+9;16k+8)$. The proof that such a difference covering array exists uses the results presented in the following lemma. Note that in this section unless otherwise stated all arithmetic is modulo $4m$.
\begin{lemma}\label{lem:gcd2} Let $f$ and $m$ be natural numbers such that $m\equiv 2\mod 4$, ${\rm gcd}(f,$ \\$4m)=2$, ${\rm gcd}(f-1, 4m)=1$, and $f^2+f-2\equiv 2m\mod 4m$. Then \begin{align} {\rm gcd}(2m+2-f, 4m)&=4,\label{gcd 2m+2-f}\\ {\rm gcd}(2m-f+1, 4m)&=1,\label{gcd 2m-f+1}\\ {\rm gcd}(2m-2f+2, 4m)&=2\label{gcd 2m-2f+2},\\
mf&\equiv 2m\mod 4m.\label{gcd mf} \end{align} \end{lemma} \begin{proof}
Eq \ref{gcd 2m+2-f}: \quad Rewriting $2m+2-f \equiv f^2+f-2+2-f = f^2 \mod 4m$ and using the hypothesis ${\rm gcd}(f, 4m)=2$ gives ${\rm gcd}(f^2, 4m)=4$.
Eq \ref{gcd 2m-f+1}: \quad Since $2m-f+1$ is odd, the ${\rm gcd}(2m-f+1, 4m)$ is odd. Assume there exists an odd $x$ such that $x|4m$ and $x|(2m-f+1)$, then $x|m$ and so $x|(f-1)$. But the ${\rm gcd}(f-1, 4m)=1$, so $x=1$.
Eq \ref{gcd 2m-2f+2}: \quad Since $f$ is even, $m-f+1$ is odd and $2m-2f+2=2(m-f+1)$, so it suffices to show that ${\rm gcd}(m-f+1, 2m)=1$. Assume that there exists $x$ such that $x|(m-f+1)$ and $x|2m$. Since $m-f+1$ is odd, $x$ is odd and so $x|m$. Consequently $x|(f-1)$ and $x|4m$, implying $x=1$.
Eq \ref{gcd mf}: \quad It follows that $ mf\equiv m(2m-f^2+2)= 2m^2-mf^2+2m\equiv 2m\mod 4m.$ \end{proof}
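The lemma can also be checked by brute force over small parameters; the following sketch (illustrative only) searches for all pairs $(m,f)$ in a small range satisfying the hypotheses and verifies the four conclusions:

```python
from math import gcd

# Brute-force check of the lemma: for every pair (m, f) in a small range
# satisfying the hypotheses (m = 2 mod 4, gcd(f, 4m) = 2,
# gcd(f - 1, 4m) = 1, f^2 + f - 2 = 2m mod 4m), verify the conclusions.
found = 0
for m in range(2, 200, 4):          # m congruent to 2 (mod 4)
    for f in range(1, 4 * m):
        if (gcd(f, 4 * m) == 2 and gcd(f - 1, 4 * m) == 1
                and (f * f + f - 2) % (4 * m) == 2 * m):
            assert gcd((2 * m + 2 - f) % (4 * m), 4 * m) == 4
            assert gcd((2 * m - f + 1) % (4 * m), 4 * m) == 1
            assert gcd((2 * m - 2 * f + 2) % (4 * m), 4 * m) == 2
            assert (m * f) % (4 * m) == 2 * m
            found += 1
assert found > 0
print(f"lemma verified for {found} hypothesis-satisfying pairs (m, f)")
```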
\begin{theorem} \label{QD(4m, 3)} Let $f$ be a natural number and $m=4k+2$, where $k$ is a non-negative integer, such that ${\rm gcd}(f, 4m)=2$, ${\rm gcd}(f-1, 4m)=1$, and $f^2+f-2\equiv 2m\mod 4m$. Then a cyclic DCA$(4, 4m+1;4m)$ satisfying P1 and P2 exists. \end{theorem} \begin{proof}
The proof is by construction with the values of DCA$(4, 4m+1;4m)$ as given in Figure \ref{values-4m2}, with the 3 columns of $Q=[q(i,j)]$ given by $q(a,0)=a$, $q(a, 1)=b(a)$, $q(a, 2)=c(a)$. We will show that $Q$ has the required properties.
\begin{figure}
\caption{Entries are elements of ${\mathbb Z}_{4m}$, where $q(a,0)=a,$ $q(a, 1)=b(a)$ and $q(a, 2)=c(a)$ in the array $Q=[q(i,j)]$.}
\label{values-4m2}
\end{figure}
For $b(a)$, since $f$ is even, Lemma \ref{basic} implies that
\begin{align*} (a+1)f-1&\equiv 1\mod 2 \Longrightarrow \{b(a)\mid a\in I_1\cup I_2\}=\{2g+1\mid 0\leq g \leq 2m-1\},\\ af&\equiv 0\mod 2 \Longrightarrow \{b(a)\mid a\in I_3\cup I_4\}=\{2g\mid 0\leq g \leq 2m-1\}, \end{align*} and $\{b(a)\mid 0\leq a\leq 4m-1\} ={\mathbb Z}_{4m}$.
For $c(a)$, since ${\rm gcd}(2m-f+2, 4m)=4$, Lemma \ref{basic} implies that $$\begin{array}{rl} (a+1)(2m-f+2)+m -1\equiv 1 \mod 4 \Longrightarrow &\{c(a)\mid a\in I_3\}=\{4g+1\mid 0\leq \\ & g\leq m -1\},\\ a(2m-f+2)\equiv 0 \mod 4 \Longrightarrow &\{c(a)\mid a\in I_4\}=\{4g\mid 0\leq g\leq\\ & m -1\},\\ (a+1)(2m-f+2)-1\equiv 3 \mod 4 \Longrightarrow &\{c(a)\mid a\in I_1\}=\{4g+3\mid 0\leq \\ &g\leq m -1\},\\ a(2m-f+2)-m \equiv 2 \mod 4 \Longrightarrow &\{c(a)\mid a\in I_2\}=\{4g+2\mid 0\leq \\&g\leq m -1\}. \end{array}$$ Thus the set of values $\{c(a)\mid a\in {\mathbb Z}_{4m}\}={\mathbb Z}_{4m}$.
For $b(a)-a$, $c(a)-a$ and $c(a)-b(a)$ we are required to show that for $a\in{\mathbb Z}_{4m}$ the differences cover the multiset $\{1, 2,\dots, 2m-1, 2m, 2m, 2m+1,\dots, 4m-1\}=({\mathbb Z}_{4m}\setminus \{0\})\cup\{2m\}$.
For $b(a)-a$, the ${\rm gcd}(f-1, 4m)=1$, and
\begin{align*} b(2m)-2m & =2m(f-1)= 2m,\\ b(2m-1)-(2m-1) & =(2m-1+1)f-1-(2m-1)=(2m)(f-1)=2m,\\
b(4m-1)-(4m-1)&= -(f-1),\\
b(0)-0&=f-1.
\end{align*} So using a ``jump'' of $f-1$ and ordering the subintervals as $I_3,I_4,I_1,I_2$ we obtain the difference $2m$ twice and the difference 0 is omitted between $a=4m-1$ and $a=0$ implying that $\{b(a)-a\mid0\leq a\leq 4m-1\}=({\mathbb Z}_{4m}\setminus \{0\})\cup\{2m\}$.
For $c(a)-a$, the ${\rm gcd}(2m-f+1, 4m)=1$, and $$\begin{array}{rcl} c(m )-m & = & m (2m-f+1)-m =2m,\\ c(3m -1)-(3m -1) & = &(3m -1+1)(2m-f+1)+m\\& =&2m,\\ c(3m )-3m -(c(2m-1)-(2m-1))&=&(m +1)(2m-f+1)+m \\ &=& 2m-f+1,\\ c(0)-0&=&2m-f+1,\\ c(4m-1)-(4m-1)&=&(4m-1)(2m-f+1)\\ &=&-(2m-f+1),\\
c(2m)-2m-(c(m -1)-(m -1))&=&(m +1)(2m-f+1)+m \\ &=& 2m-f+1. \end{array}$$ So using a ``jump'' of $2m-f+1$ and ordering the subintervals as $I_2,I_4,I_1,I_3$ we obtain the difference $2m$ twice and the difference 0 is omitted between $a=4m-1$ and $a=0$, implying $\{c(a)-a\mid0\leq a\leq 4m-1\}=({\mathbb Z}_{4m}\setminus \{0\})\cup\{2m\}$.
For $c(a)-b(a)$, the ${\rm gcd}(2m-2f+2, 4m)=2$, and
\begin{align*} c(3m )-b(3m ) & =m(2m-2f+2)=2m,\\ c(m-1)-b(m-1) & = (m-1+1)(2m-2f+2)=2m,\\ c(4m-1)-b(4m-1)&=
-(2m-2f+2),\\ c(0)-b(0)&=2m-2f+2,\\ c(2m)-b(2m)-(c(2m-1)-b(2m-1))&=
2m-2f+2. \end{align*} Then since $$\begin{array}{rcl} 2m-2f+2\equiv 0\mod 2 \Longrightarrow \{c(a)-b(a)\mid a\in I_4\cup I_1\}&=&\{2g\mid 1\leq g \leq\\&& 2m-1\}\cup\{2m\}, \\ -f+1\equiv 1\mod 2 \Longrightarrow \{c(a)-b(a)\mid a\in I_2\cup I_3\}&=&\{2g+1\mid 0\leq g \leq \\&&2m-1\}, \end{array}$$ we conclude that $\{c(a)-b(a)\mid0\leq a\leq 4m-1\}=({\mathbb Z}_{4m}\setminus \{0\})\cup\{2m\}$. \end{proof}
\begin{corollary}\label{cor-4m+1} For $k\geq 0$ such that $k \not\equiv 1 \mod 3$ a cyclic DCA$(4, 4m+1;4m)$ satisfying P1 and P2 can be constructed as described in Theorem \ref{QD(4m, 3)}. \end{corollary} \begin{proof}
Given $m=4k+2$, take $f=2m-2$. Then $f=8k+2$, and \begin{align*}
\gcd(f, 4m) & = 2(\gcd(4k + 1, 8k + 4)) = 2(\gcd(4k + 1, 2(4k + 1) + 2))= 2. \end{align*} In addition \begin{align*} \gcd(f-1, 4m) & = \gcd(2m - 3, 4m) = \gcd(8k + 1, 16k + 8)\\
& = \gcd(8k + 1, 2k + 1) \;\;\mbox{since} \;\;2 \!\nmid\! (8k + 1)\\
& = \gcd(6k, 2k + 1)
= 1, \;\mbox{if}\; 3\!\nmid\! (2k + 1). \end{align*}
Also \begin{align*} f^2+f-2 & = 64k^2+32k+4+8k+2-2 = 4k(16k+8)+8k+4 \equiv 2m\; \rm{mod}\; 4m. \end{align*}
Hence, $f=2m-2$ satisfies the assumptions of Theorem \ref{QD(4m, 3)} and we can construct a DCA$(4, 16k + 9;16k+8)$ for $k$ such that $3 \!\nmid \!(2k + 1)$ as described in this theorem. \end{proof}
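A short numerical check of this corollary (a sketch, not part of the proof): for $k \not\equiv 1 \bmod 3$ the hypotheses of Theorem \ref{QD(4m, 3)} hold, and for $k \equiv 1 \bmod 3$ the condition ${\rm gcd}(f-1,4m)=1$ indeed fails:

```python
from math import gcd

# Check of the corollary: with m = 4k + 2 and f = 2m - 2.
for k in [x for x in range(30) if x % 3 != 1]:
    m = 4 * k + 2
    f = 2 * m - 2
    assert gcd(f, 4 * m) == 2
    assert gcd(f - 1, 4 * m) == 1
    assert (f * f + f - 2) % (4 * m) == 2 * m

# For the excluded residue class, 3 divides both f - 1 and 4m.
for k in range(1, 30, 3):
    m = 4 * k + 2
    f = 2 * m - 2
    assert gcd(f - 1, 4 * m) % 3 == 0
print("hypotheses verified")
```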
\subsection{Construction of difference covering arrays DCA$(4, 2m+1;2m)$, where $m=3\mu + 2$}
In this subsection we give a general construction for a difference covering array DCA$(4, 2m+1;2m)$, where $m=3\mu + 2$ and $\mu$ is odd. The construction is presented in the following theorem. Note that in this subsection, unless otherwise stated, all arithmetic is modulo $2m=6\mu+4$ (that is, modulo $12k+10$ where $\mu=2k+1$, $k\geq 0$).
\begin{theorem}\label{thm-6mu+4} Let $\mu$ be an odd positive integer. Then there exists a cyclic DCA$(4, 6\mu+5;6\mu+4)$ satisfying P1 and P2. \end{theorem}
\begin{proof} Since $\mu\geq 1$, $n=6\mu+4\geq 10$, and since $3\nmid 6\mu+4$, ${\rm gcd}(3, 6\mu+4)=1$.
Moreover, since $\mu$ is odd, $3\mu$ is odd, hence ${\rm gcd}(3\mu, 6\mu+4)={\rm gcd}(3\mu, 4)=1$ and ${\rm gcd}(3\mu+4, 6\mu+4)={\rm gcd}(3\mu+4, 3\mu)={\rm gcd}(4, 3\mu)=1$.
Let $\mu=2k+1$, $k\geq 0$, then \begin{align*} \mu(3\mu+2)&=(2k+1)(6k+5)=3\mu+2,\mbox{ and}\\ (3\mu+2)^2&=3\mu+2. \end{align*} That is, $(n/2)^2\equiv n/2 \mod n$.
The proof is by construction with the values for DCA$(4, 6\mu+5;6\mu+4)$ as given in Figure \ref{values-3}, with the three columns of $Q=[q(i,j)]$ given by $q(\alpha,0)=a(\alpha)$, $q(\alpha, 1)=b(\alpha)$ and $q(\alpha, 2)=c(\alpha)$.
\begin{figure}\caption{Entries are elements of ${\mathbb Z}_{6\mu+4}$, where $q(\alpha,0)=a(\alpha)$, $q(\alpha, 1)=b(\alpha)$ and $q(\alpha, 2)=c(\alpha)$ in the array $Q=[q(i,j)]$.}\label{values-3}
\end{figure}
Since ${\rm gcd}(3,n)=1$, $\{3\alpha\mid 0\leq \alpha \leq n-1\}={\mathbb Z}_n$, by Lemma \ref{basic}. Further $a(3\mu+2)=3(3\mu+2)+3\mu+3=1$ and $ a(2\mu)=6\mu+2$ and there is a ``jump'' of $3$ between $a(4\mu+2)$ and $a(0)$; $a(\mu-1)$ and $a(5\mu+3)$; $a(6\mu+3)$ and $a(2\mu+1)$; $a(3\mu+1)$ and $a(4\mu+3)$;
$a(5\mu+2)$ and $ a(\mu)$, respectively.
Thus reordering the subintervals as $I_4,I_1,I_6,I_3,I_5,I_2$ gives $\{a(\alpha)\mid 0\leq \alpha\leq n-1\}={\mathbb Z}_n$.
For $b(\alpha)$, $$\begin{array}{rl} 3\alpha(\mu+1)+2\mu+2\equiv 0 \mod 2 \Longrightarrow & \{b(\alpha)\mid \alpha \in I_1 \cup I_2 \cup I_3\}\\&=\{2g\mid 0\leq g\leq 3\mu+2\},\\ 3\alpha(\mu+1)+2\mu+1\equiv 1 \mod 2 \Longrightarrow & \{b(\alpha)\mid \alpha \in I_4 \cup I_5 \cup I_6\}\\&=\{2g+1\mid 0\leq g\leq 3\mu+1\}, \end{array}$$ so $\{b(\alpha)\mid 0\leq \alpha\leq 6\mu+3\}={\mathbb Z}_{6\mu+4}$.
For $c(\alpha)$, since ${\rm gcd}(3\mu+4, 6\mu+4)=1$ Lemma \ref{basic} implies that $\{c(\alpha)\mid 0\leq \alpha\leq 6\mu+3\}={\mathbb Z_{6\mu+4}}$.
For $b(\alpha)-a(\alpha)$, since ${\rm gcd}(3\mu, 6\mu+4)=1$,
\begin{align*} b(\mu)-a(\mu) & =3\mu+2,\\ b(4\mu+2)-a(4\mu+2) & = 3\mu+2,\\ b(5\mu+3)-a(5\mu+3)-(b(2\mu)-a(2\mu))&=
3\mu,\\
b(4\mu +3)-a(4\mu+3)-(b(\mu-1)-a(\mu-1))&=
6\mu, \\ b(2\mu+1)-a(2\mu+1)-(b(5\mu +2)-a(5\mu+2))&=
3\mu,\\ b(0) - a(0) -(b(6\mu + 3) - a(6\mu + 3)) & = 3\mu. \end{align*} So using a ``jump'' of $3\mu$ and ordering the subintervals as $I_2,I_6,I_1,I_5,I_3, I_4,$ we obtain the difference $3\mu+2$ twice, and since the jump between $b(\alpha)-a(\alpha)$ for $\alpha=\mu-1$ and $\alpha=4\mu+3$ is $6\mu$, the difference 0 is omitted, implying that $\{b(\alpha)-a(\alpha)\mid 0\leq \alpha \leq 6\mu+3\}=({\mathbb Z}_n\setminus \{0\})\cup\{n/2\}$.
For $c(\alpha)-a(\alpha)$, since ${\rm gcd}(3\mu+1, 6\mu+4)=2$, we have
\begin{align*} c(5\mu+3)-a(5\mu+3) & = 3\mu+2,\\ c(2\mu)-a(2\mu) & = 3\mu+2,\\ c(3\mu+2)-a(3\mu+2)-(c(6\mu+3)-a(6\mu+3))&=
-(3\mu+3),\\ c(\mu)-a(\mu)-(c(4\mu+2)-a(4\mu+2))&= -(3\mu+3),\\ c(2\mu+1)-a(2\mu+1)&=3\mu+1,\\ c(5\mu+2)-a(5\mu+2)&=3\mu+3,\\ c(0)-a(0)-(c(3\mu+1)-a(3\mu+1))& =-(3\mu+3),\\ c(\mu-1)-a(\mu-1)-(c(4\mu+3)-a(4\mu+3))&=
-(3\mu+3). \end{align*} Reordering the intervals as $I_6,I_4,I_2$ and $I_3,I_1,I_5$ and using a regular ``jump'' of $-(3\mu+3)$, with the jump of $6\mu+2$ between $\alpha=2\mu+1$ and $\alpha=5\mu+2$ being the exception, we obtain the difference $3\mu+2$ twice and the difference $0$ omitted; thus $\{c(\alpha)-a(\alpha)\mid 0\leq \alpha\leq 6\mu+3\}=({\mathbb Z_{6\mu+4}}\setminus\{0\})\cup \{3\mu+2\}$ with repetition retained.
For $c(\alpha)-b(\alpha)$, we note that \begin{align*} c(0)-b(0)&= 3\mu+2,\\ c(3\mu+1)-b(3\mu +1)&=- 1, \end{align*} and so the values of $c(\alpha)-b(\alpha)$ on the subinterval $I_1\cup I_2\cup I_3$ cover the set $\{3\mu+2,\dots,-1\}$. Also \begin{align*} c(3\mu+2)-b(3\mu+2)&=1,\\ c(6\mu+3)-b(6\mu +3)&= 3\mu+2, \end{align*} and so the values of $c(\alpha)-b(\alpha)$ on the subinterval $I_4\cup I_5\cup I_6$ cover the set $\{1,\dots, 3\mu+2\}$. Consequently $\{c(\alpha)-b(\alpha)\mid 0\leq \alpha\leq 6\mu+3\}=([6\mu+4]\setminus\{0\})\cup \{3\mu+2\}$ with repetition retained. \end{proof}
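The arithmetic facts invoked in this proof can be confirmed numerically; the following sketch (illustrative, standard library only) checks them for all odd $\mu$ up to 59:

```python
from math import gcd

# Facts used in the proof above, for odd mu and n = 6*mu + 4.
for mu in range(1, 60, 2):
    n = 6 * mu + 4
    assert gcd(3, n) == 1
    assert gcd(3 * mu + 4, n) == 1
    assert gcd(3 * mu, n) == 1
    assert gcd(3 * mu + 1, n) == 2
    assert (mu * (3 * mu + 2)) % n == 3 * mu + 2   # mu*(3mu+2) = 3mu+2 (mod n)
    assert ((3 * mu + 2) ** 2) % n == 3 * mu + 2   # (n/2)^2 = n/2 (mod n)
print("all facts verified")
```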
\subsection{Infinite families} The construction of Theorem \ref{thm-6mu+4} yields sets of three MNOLS of orders $10, 22, 34, 46 \mod 48$. The construction of Corollary \ref{cor-4m+1} yields sets of three MNOLS of orders $8, 40\mod 48$. Combined with the constructions of \cite{DDK}, this gives a construction of three MNOLS for orders $8, 10, 14, 22, 34,$ $38, 40, 46\mod 48$. There are further infinite families arising from Corollary \ref{cor-2m} and from the results of Li and van Rees \cite{LvR}, but these cannot be described mod 48. It is an open question why 48 features in so many of the constructions.
\end{document}
\begin{document}
\title{On Protocols for Monotone Feasible Interpolation}
\author{Lukáš Folwarczný \orcidlink{0000-0002-3020-6443}\\ \small Institute of Mathematics of the Czech Academy of Sciences\\ \small Prague, Czech Republic\\[.5em] \small Computer Science Institute of Charles University\\ \small Prague, Czech Republic\\ \small \tt \href{mailto:folwarczny@math.cas.cz}{folwarczny@math.cas.cz}}
\maketitle
\begin{abstract} Feasible interpolation is a general technique for proving proof complexity lower bounds. The monotone version of the technique converts, in its basic variant, lower bounds for monotone Boolean circuits separating two NP-sets to proof complexity lower bounds. In a generalized version of the technique, dag-like communication protocols are used instead of monotone Boolean circuits. We study three kinds of protocols and compare their strength.
Our results establish the following relationships in the sense of polynomial reducibility: Protocols with equality are at least as strong as protocols with inequality and protocols with equality have the same strength as protocols with a conjunction of two inequalities. Exponential lower bounds for protocols with inequality are known. Obtaining lower bounds for protocols with equality would immediately imply lower bounds for resolution with parities (R(LIN)). \end{abstract}
\section{Introduction} \label{sec:introduction}
Studying the strength of various propositional proof systems is the essence of proof complexity. Dag-like communication protocols come into play as a~tool to translate, via a technique called feasible interpolation, the task of proving lower bounds on the length of proofs into the realm of communication complexity. This paper studies variants of protocols which seem to have a~potential for producing new proof complexity lower bounds. Most importantly, lower bounds for a certain type of protocols, called protocols with equality in this paper, would immediately translate into lower bounds for the proof system resolution with parities (R(LIN)).
\subsection{Motivation and context}
A~propositional proof system, in the sense of Cook and Reckhow \cite{cookreckhow}, is a~sound and complete system for propositional tautologies with proofs verifiable in polynomial time. Examples of propositional proof systems include resolution, Frege systems, sequent calculus and cutting planes. Proof complexity is then the field studying the strength of various propositional proof systems; proof complexity is tightly connected with computational complexity (one of the fundamental open problems is equivalent to the question whether NP = coNP) and logic. A~reference for this field is the book by Krajíček~\cite{krajicekproofcomplexity}. A~lower bound is a~theorem stating that for a~certain proof system~$P$ and a~sequence of tautologies $\left\{ \phi_n \right\}$ the shortest length of a~proof of $\phi_n$ in $P$ is bounded from below by a~certain function of the size of $\phi_n$. Exponential lower bounds have been proven for various proof systems, such as resolution, constant-depth Frege systems, cutting planes and polynomial calculus (see Kraj\'{i}\v{c}ek's monograph~\cite{krajicekproofcomplexity}), but for other proof systems studied in the literature this is still an open problem.
The feasible interpolation method was invented by Krajíček (idea formulated in \cite{krajicek1994}, applied in \cite{krajicek-feasibleinterpolation}). In the basic setup, feasible interpolation reduces the task of proving a~lower bound for a~proof system~$P$ to proving a~lower bound for Boolean circuits separating two NP-sets. In the case of monotone feasible interpolation, lower bounds for monotone Boolean circuits are enough. However, lower bounds for different objects than monotone Boolean circuits may be used as well. Limits of monotone feasible interpolation by monotone Boolean circuits were already considered in the aforementioned paper by Krajíček~\cite{krajicek-feasibleinterpolation}, Section~9. However, general limits of monotone feasible interpolation are not known. The first result in which monotone feasible interpolation was used with a~computational model other than Boolean circuits is due to Pudlák~\cite{pudlak-interpol}: He defined a~generalization of monotone Boolean circuits called monotone real circuits (which were later proved by Rosenbloom~\cite{rosenbloom} to be strictly stronger than monotone Boolean circuits) and proved lower bounds for this model which led, via monotone feasible interpolation, to lower bounds for the cutting planes proof system.
The history of the (dag-like communication) protocols considered in this paper starts with a paper by Karchmer and Wigderson \cite{karchmerwigderson} where the following theorem is proved: For a~Boolean function~$f$, the minimum depth of a~Boolean circuit computing $f$ is equal to the communication complexity (that is the depth of a~protocol) of a~certain relation defined for~$f$. There is also a~monotone version of the relation for monotone circuits.
In classical communication complexity, the measure of protocols is their depth and hence one can without loss of generality consider protocols with the underlying graph being a~tree. Dag-like communication protocols, that is protocols with the underlying graph being a~directed acyclic graph, go back to Razborov~\cite{razborov-unprovability}. Razborov, inspired by Karchmer and Wigderson, used the size of certain protocols to characterize the size of circuits. See Pudlák~\cite{pudlak-extractingcomputations} or Sokolov~\cite{sokolov-daglike} for a~survey of this topic. We give more details on the origin of the protocols in \Cref{sec:protocols}. More recently, new types of protocols have been introduced~\cite{garggooskamathsokolov}.
It is now a well-known open problem in proof complexity to prove lower bounds for the system which we denote, as in Krajíček's book, by R(LIN). This problem may be seen as a step in solving another open problem which is to prove lower bounds for the so-called $AC^0[p]$-Frege proof systems. The system R(LIN) was introduced by Itsykson and Sokolov~\cite{itsyksonsokolov} who also prove exponential lower bounds for the tree-like version. Itsykson and Sokolov call the system $\mathrm{Res}(\oplus)$ (or Res-Lin in a preliminary version of the paper). The system was inspired by the system R(lin) introduced by Raz and Tzameret~\cite{raztzameret}.
Lower bounds for protocols called \emph{protocols with equality} in this paper would directly lead to lower bounds for R(LIN). A~different approach which could work for this proof system is randomized feasible interpolation due to Krajíček~\cite{krajicek-random-fi}, studied also by Krajíček and Oliveira~\cite{krajicek-oliveira}.
\subsection{Our contribution}
In this section, all protocols have constant degree. Our contribution is the comparison of strength of protocols: Protocols with equality are at least as strong as protocols with inequality (in the sense of polynomial reducibility). Furthermore, we establish that protocols with a conjunction of a constant number of inequalities have the same strength as protocols with equality if the constant is at least 2. Details are given in \Cref{sec:results}.
\subsection{Open problems}
Research into the dag-like communication complexity of KW games is motivated by the quest of proving lower bounds on proof systems that are stronger than those for which we already have lower bounds. Since R(LIN) is just on the border of what we can and cannot prove and lower bounds on protocols with equality would give us such a lower bound, the main problem is to prove a superpolynomial lower bound on such protocols.
Another motivation is the general question of how far we can get with feasible interpolation. This question is mentioned by Razborov~\cite{razborov-sigact} in his survey as one of the main challenges in proof complexity. Lower bounds based on monotone interpolation reduce the problem of proving a lower bound on the lengths of proofs to proving a lower bound on some monotone circuits (monotone Boolean circuits, monotone real circuits, monotone span programs). Researchers in the area of proof complexity agree that in order to make progress, we have to study communication protocols instead of circuit models, because there may be no circuit type for some proof system. It seems that this is the case for R(LIN). Therefore, proving lower bounds on protocols with equality seems to be the most viable approach to proving lower bounds on R(LIN) proofs.
Since we do not have lower bounds for protocols with equality, but we do have exponential lower bounds for protocols with inequality, we expect the former type of protocols to be stronger than the latter. However, we do not have any partial Boolean function that would provide such a separation. Finding such a function would also shed more light on the problem of proving lower bounds on protocols with equality.
\subsection{Paper outline}
In \Cref{sec:protocols}, we define all the protocols concerned in this paper and explain their origin. In \Cref{sec:results}, we state our results, discuss them and prove a part of them. In \Cref{sec:proofmainthm}, we prove the main theorem. In \Cref{sec:applications}, we show how lower bounds for protocols would translate into lower bounds for the proof systems R(LIN).
\section{Protocols} \label{sec:protocols}
\subsection{General protocols}
Karchmer and Wigderson~\cite{karchmerwigderson} used classical tree-like (i.e.\ the underlying graph is a~tree) communication protocols and their depth to characterize the depth of circuits. Dag-like protocols (i.e.\ the underlying graph is a~directed acyclic graph) considered in this paper go back to Razborov~\cite{razborov-unprovability} who was considering the size of protocols to characterize the size of circuits. An explicit definition of these protocols was given by Krajíček~\cite{krajicek-feasibleinterpolation}. Our definition is a~generalization of the definition by Hrubeš and Pudlák~\cite{hrubespudlak-monotonecircuits}. Unlike in~\cite{krajicek-feasibleinterpolation}, in this paper only the monotone version is defined and the strategy function is omitted.
The task of the monotone Karchmer-Wigderson game (the monotone KW game) for a~given partial monotone Boolean function $f$ is: given $x\in f^{-1}(0)$ and $y \in f^{-1}(1)$, find an index $i$ such that $x_i = 0 \wedge y_i = 1$. (Observe that there is always at least one such index $i$.)
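To make the search problem concrete, here is a small illustrative sketch (the function name and the choice of $f$ as the 2-bit monotone OR are ours, not from the literature); indices are 0-based in the code, while the text uses 1-based indices:

```python
def solve_kw(x, y):
    """Given x with f(x) = 0 and y with f(y) = 1 for a partial monotone f,
    return an index i with x_i = 0 and y_i = 1 (0-based)."""
    for i, (xi, yi) in enumerate(zip(x, y)):
        if xi == 0 and yi == 1:
            return i
    raise ValueError("no solution: f(x) = 0 and f(y) = 1 cannot both hold "
                     "for a monotone f on these inputs")

# Example with f = monotone OR on two bits: f^{-1}(0) = {00},
# f^{-1}(1) = {01, 10, 11}.  A solution always exists because monotonicity
# rules out y <= x coordinate-wise when f(x) = 0 and f(y) = 1.
print(solve_kw((0, 0), (0, 1)))   # -> 1
```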
\begin{definition} \label{def:pgeneral} Let $n \geq 1$ be a~natural number. A \emph{protocol of degree $d$ with feasibility relation} is a~directed acyclic graph $G = (V, E)$ and a~relation $F \subseteq \{0,1\}^n \times \{0,1\}^n \times V$, such that \begin{enumerate}[(i)] \item $G$ has one source $v_0$ (a~node of in-degree zero) and the out-degree of every vertex is at most $d$, \item for every sink $\ell$ (a~node of out-degree zero), there exists an index $i$ such that for every $x, y \in \{0,1\}^n$ it holds $(x, y, \ell) \in F$ iff $x_i = 0$ and $y_i = 1$. \end{enumerate} Let $f$ be a~partial monotone Boolean function in $n$~variables. We say that the protocol \emph{solves the monotone KW game for $f$} (or simply \emph{solves} $f$), if for every $x \in f^{-1}(0)$ and $y \in f^{-1}(1)$, \begin{enumerate}[(a)] \item $(x, y, v_0) \in F$, \item for every $v \in V$ with $p \geq 1$ children $u_1 , \dots , u_p$, if $(x, y, v) \in F$ then there exists $u_i$ with $(x,y,u_i)\in F$. \end{enumerate} The size of a~protocol is the number of vertices. \end{definition}
We say that a~vertex $v$ is \emph{feasible for $x$, $y$} if $(x,y,v)\in F$. By definition, the source is feasible for any $x \in f^{-1}(0)$, $y \in f^{-1}(1)$. The protocol solves the monotone KW game in the following sense: Given $x \in f^{-1}(0)$ and $y \in f^{-1}(1)$ and any vertex $v$ which is feasible for $x$, $y$ (it is crucial that this holds for any feasible vertex, not just the source), we can find the solution to the KW game by traversing the graph via feasible vertices down to sinks which give us the solution.
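The traversal just described can be sketched as follows; the toy protocol below (our own illustration, not from the paper) solves the monotone KW game for the 2-bit OR, with the feasibility relation given by Python predicates:

```python
# A toy protocol of degree 2 with feasibility relation for the 2-bit
# monotone OR (f^{-1}(0) = {(0,0)}, f^{-1}(1) = {(0,1), (1,0), (1,1)}).
# Nodes: source 's' and sinks 'l0', 'l1' answering index 0 resp. 1.
children = {"s": ["l0", "l1"], "l0": [], "l1": []}
feasible = {
    "s":  lambda x, y: True,                     # source: feasible for all pairs
    "l0": lambda x, y: x[0] == 0 and y[0] == 1,  # sink for index 0
    "l1": lambda x, y: x[1] == 0 and y[1] == 1,  # sink for index 1
}
answer = {"l0": 0, "l1": 1}

def run_protocol(x, y, v="s"):
    """Walk from a feasible vertex down to a sink through feasible vertices."""
    assert feasible[v](x, y)
    while children[v]:
        # Property (b) guarantees at least one feasible child exists.
        v = next(u for u in children[v] if feasible[u](x, y))
    return answer[v]

print(run_protocol((0, 0), (1, 0)))   # -> 0
```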
Another general definition was given by Garg et al.~\cite{garggooskamathsokolov}, Section 2.1. In their definition, degree is fixed to 2 and general search problems are considered. If we restrict our definition to degree 2 and their definition to the monotone KW game, the definitions become equivalent.
\subsection{Protocols with inequality and equality}
The definition of what we call a~protocol with inequality was independently introduced by Hrubeš and Pudlák~\cite{hrubespudlak-monotonecircuits} and Sokolov~\cite{sokolov-daglike} (Sokolov considers only protocols of degree~2). Before that, Krajíček~\cite{krajicek-interpolationbyagame} considered a different type of protocols with inequality.
\begin{definition} \label{def:pinequality} A~\emph{protocol of degree $d$ with inequality} is a~protocol of degree $d$ with feasibility relation such that the relation $F$ may be expressed as follows: For each vertex $v \in V$, there is a~pair of functions $q_v, r_v \colon \{0, 1\}^n \rightarrow \mathbb{R}$ such that for all $x, y \in \{0, 1\}^n$, it holds $(x, y, v) \in F$ iff $q_v(x) < r_v(y)$. \end{definition}
Hrubeš and Pudlák~\cite{hrubespudlak-monotonecircuits} proved that the size of the minimum protocol of degree~$d$ with inequality solving $f$ and the size of the minimum $d$-ary monotone real circuit computing $f$ are equal (definition of monotone real circuits may be found in \cite{pudlak-interpol} or \cite{hrubespudlak-monotonecircuits}). This implies that the exponential lower bounds due to Pudlák~\cite{pudlak-interpol} and Haken and Cook~\cite{cook-haken} for monotone real circuits hold also for protocols with inequality. Replacing inequality with equality or a~conjunction of inequalities, we obtain the following two definitions:
\begin{definition} \label{def:pequality} A~\emph{protocol of degree $d$ with equality} is a~protocol of degree $d$ with feasibility relation such that the relation $F$ may be expressed as follows: For each vertex $v \in V$ there is a~pair of functions $q_v, r_v \colon \{0, 1\}^n \rightarrow \mathbb{R}$ such that for all $x, y \in \{0,1\}^n$ it holds $(x, y, v) \in F$ iff $q_v(x) = r_v(y)$. \end{definition}
\begin{definition} \label{def:pconjunction} A~\emph{protocol of degree $d$ with a~conjunction of $c$~inequalities} is a protocol of degree $d$ with feasibility relation $F$ such that the relation $F$ may be expressed as follows: For each vertex $v \in V$, there are $c$~pairs of functions $q^{j}_v, r^{j}_v \colon \{0, 1\}^n \rightarrow \mathbb{R}$ for $j \in [c]$. The functions satisfy for all $x, y \in \{0,1\}^n$ $(x, y, v) \in F$ iff $q^1_v(x) < r^1_v(y) \wedge \cdots \wedge q^{c}_v(x) < r^{c}_v(y)$. \end{definition}
All three types of the protocols defined in this subsection are mentioned by Garg et al. \cite{garggooskamathsokolov} who name the protocols after the shape of the feasible sets in the protocol. For a~given $v$, the feasible set for $v$ is the set of all pairs $(x, y)$ such that $(x,y,v) \in F$. For protocols with inequality, the feasible sets are combinatorial triangles (definition may be found in \cite{garggooskamathsokolov}). For protocols with equality, the feasible sets are block-diagonal. And for protocols with a~conjunction of $c$~inequalities, the feasible sets are intersections of $c$~combinatorial triangles.
\section{Results} \label{sec:results}
We use the $\mathcal{O}$-notation for functions with several parameters. The precise meaning is the following: $$g \in \mathcal{O}(f) \Leftrightarrow \exists c>0 \forall n_1 \in \mathbb{N} \dots \forall n_k \in \mathbb{N}: g(n_1, \dots, n_k) \leq cf(n_1, \dots, n_k) + c$$
Our first theorem compares protocols with inequality and protocols with equality.
\begin{theorem} \label{thm:c1} Let $P$ be a protocol of degree $d$ with inequality solving an $n$-bit partial monotone Boolean function $f$. If the size of $P$ is $s$, then there exists a protocol of degree 2 with equality solving $f$ whose size is $\mathcal{O}(sn^{d+2})$. \end{theorem}
A more general theorem compares protocols with a conjunction of $c$~inequalities and protocols with equality.
\begin{theorem}[main] \label{thm:main} Let $P$ be a protocol of degree $d$ with a~conjunction of $c$ inequalities solving an $n$-bit partial monotone Boolean function $f$. If the size of $P$ is $s$, then there exists a protocol of degree~2 with equality solving $f$ whose size is $\mathcal{O}(sn^{2cd+c-1})$. \end{theorem}
Hrubeš and Pudlák \cite{hrubespudlak-monotonecircuits} prove how to reduce the degree of protocols with inequality and correspondingly monotone real circuits. (We use their result as a part of the proof of \Cref{thm:c1}.)
\begin{lemma}[\cite{hrubespudlak-monotonecircuits}, Corollary 6 (ii)] \label{l:hp} Let $P$ be a~protocol of degree $d$ with inequality solving an $n$-bit partial monotone Boolean function $f$. If the size of $P$ is $s$, then there exists a~protocol of degree 2 with inequality solving $f$ whose size is $\mathcal{O}(sn^{d-2})$. \end{lemma}
Our results enable a similar reduction for protocols with equality.
\begin{corollary} \label{cor:degreereduction} Let $P$ be a~protocol of degree $d$ with equality solving an $n$-bit partial monotone Boolean function $f$. If the size of $P$ is $s$, then there exists a~protocol of degree~2 with equality solving $f$ whose size is $\mathcal{O}(sn^{4d+1})$. \end{corollary}
\subsection{Discussion}
We consider the theorems to be most relevant for the cases when $d = \mathcal{O}(1)$ and $c = \mathcal{O}(1)$. In this case, we interpret the results as comparison of the strength of protocols in the sense of polynomial reducibility.
We cannot expect interesting results for very large degree because of the following simple fact:
\begin{fact} \label{fact:degreen} For every $n$-bit partial monotone Boolean function $f$, there is a~protocol of degree~$n$ with inequality solving $f$ whose size is $n+1$. \end{fact}
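One way to see this fact (a sketch of a possible construction, not necessarily the intended proof): make the source feasible for every pair via the inequality $0 < 1$, and let the $i$-th sink use $q(x) = x_i$, $r(y) = y_i$, so that $x_i < y_i$ holds exactly when $x_i = 0$ and $y_i = 1$:

```python
# Sketch of a degree-n protocol with inequality of size n + 1 for any
# n-bit partial monotone f: the source, feasible for every pair via
# 0 < 1, has all n sinks as children.
def make_protocol(n):
    source = (lambda x: 0, lambda y: 1)              # (q, r) at the source
    sinks = [(lambda x, i=i: x[i], lambda y, i=i: y[i]) for i in range(n)]
    return source, sinks

def run(n, x, y):
    source, sinks = make_protocol(n)
    q, r = source
    assert q(x) < r(y)                               # source always feasible
    # Some sink is feasible: monotonicity with f(x) = 0, f(y) = 1 forces
    # an index with x_i = 0 < 1 = y_i.
    return next(i for i, (qi, ri) in enumerate(sinks) if qi(x) < ri(y))

print(run(3, (1, 0, 0), (0, 1, 0)))   # -> 1
```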
Similarly, we cannot expect interesting results if the conjunctions are large.
\begin{fact} \label{fact:ninequalities} For every $n$-bit partial monotone Boolean function $f$, there is a~protocol of degree~2 with a~conjunction of $n-2$ inequalities solving $f$ whose size is $2n-1$. \end{fact}
It is easy to see that protocols with a conjunction of two (or more than two) equalities are at least as strong as protocols with equality.
\begin{fact} \label{fact:equalitybytwoinequalities} Let $P$ be a~protocol of degree~$d$ with equality solving an $n$-bit partial monotone Boolean function $f$. If the size of $P$ is $s$, then there exists a~protocol of degree $d$ with a conjunction of two inequalities solving $f$ whose size is $s$. \end{fact}
\subsection{Proofs}
We postpone the proof of \Cref{thm:main} to \Cref{sec:proofmainthm}.
\begin{proof}[Proof of \Cref{thm:c1}] We use \Cref{l:hp} to convert $P$ into a protocol $P'$ of degree 2 with inequality. The size is $\mathcal{O}(sn^{d-2})$. We then use \Cref{thm:main} with $c = 1$ and $d = 2$ to convert $P'$ into a protocol of size $\mathcal{O}(sn^{d-2} \cdot n^4) = \mathcal{O}(sn^{d+2})$. \end{proof}
\begin{proof}[Proof of \Cref{cor:degreereduction}] We use \Cref{fact:equalitybytwoinequalities} to convert $P$ into a~protocol $P'$ of degree~$d$ with a~conjunction of two inequalities whose size is $s$. Using \Cref{thm:main} with $c = 2$ for $P'$, we obtain a~protocol of degree~2 with equality solving $f$ whose size is $\mathcal{O}(sn^{4d + 1})$. \end{proof}
\section{Proof of the main theorem} \label{sec:proofmainthm}
We prove \Cref{thm:main} in this section.
\subsection{Structure of the proof}
Let $G = (V, E)$ be the underlying graph of $P$ and let $q^j_v$, $r^j_v$ for $j \in [c]$ be the functions associated with $v \in V$. Observe that for fixed inner $v \in V$ and $j \in [c]$, it holds
$$\left|\{ q^j_v(x) \mid x \in f^{-1}(0) \} \cup \{ r^j_v(y) \mid y \in f^{-1}(1) \}\right| \leq \left| f^{-1}(0) \cup f^{-1}(1) \right| \leq 2^n.$$ As the only thing that matters is the relative order of the values, we can w.l.o.g.\ assume that $q^j_v(x), r^j_v(y) \in \{0, \dots, 2^n - 1\}$. (In fact, the functions $q^j_v(x)$ and $r^j_v(y)$ are defined for any $x, y \in \{0,1\}^n$. However, we ignore the values for $x \notin f^{-1}(0)$ and $y \notin f^{-1}(1)$.) We treat $q^j_v(x)$ and $r^j_v(y)$ as $n$-bit numbers. The $i$-th most significant bit of $q^j_v(x)$ (resp. $r^j_v(y)$) is denoted by $q^j_v(x)[i]$ (resp. $r^j_v(y)[i]$). That is $$q^j_v(x) = \sum_{i=1}^{n} 2^{n-i}q^j_v(x)[i] \quad\text{and}\quad r^j_v(y) = \sum_{i=1}^{n} 2^{n-i}r^j_v(y)[i].$$
The proof is based on the fact that one can express an inequality of $n$-bit numbers as one of $n$ particular equalities. Denote $a = q^j_v(x)$ and $b = r^j_v(y)$. Then $$ a < b \Leftrightarrow \exists i \in [n] \left(a[1] = b[1] \wedge \cdots \wedge a[i-1] = b[i-1] \wedge a[i] = 0 \wedge 1 = b[i] \right). $$ The conjunction can be written as a single equality: \begin{equation}\label{ineq} a < b \Leftrightarrow \exists i \in [n] \left(a[1]\dots a[i-1]a[i]1 = b[1]\dots b[i-1] 0 b[i]\right). \end{equation} An important feature of the above expression is that the left-hand side of each equality is a function of only $x$ (it does not depend on $y$) and the right-hand side of each equality is a function of only $y$.
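This equivalence is easy to check by brute force. The following Python sketch (our own verification aid; the function names are not from the paper) verifies \cref{ineq} exhaustively for all pairs of $n$-bit numbers for a small $n$:

```python
def bits(a, n):
    # a[1] is the most significant bit: a = sum_i 2^(n-i) * a[i]
    return [(a >> (n - i)) & 1 for i in range(1, n + 1)]

def less_by_equalities(a, b, n):
    # a < b iff for some i the two concatenated bit strings coincide:
    #   a[1] ... a[i-1] a[i] 1  ==  b[1] ... b[i-1] 0 b[i]
    A, B = bits(a, n), bits(b, n)
    return any(A[:i - 1] + [A[i - 1], 1] == B[:i - 1] + [0, B[i - 1]]
               for i in range(1, n + 1))

n = 5
assert all((a < b) == less_by_equalities(a, b, n)
           for a in range(2 ** n) for b in range(2 ** n))
```

The equality at index $i$ holds exactly when the two numbers agree on the first $i-1$ bits and differ as $0 < 1$ at bit $i$, which is the usual "first differing bit" characterization of the order.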
We can similarly express the conjunction of $c$ inequalities as one of $n^c$ equalities. Consider an inner vertex $v \in V$ and define: \begin{align*} \chi^{j,(i,=)}_v &\equiv q_v^j(x)[i] = r_v^j(y)[i]\\ \chi^{j,(i,\neq)}_v &\equiv q_v^j(x)[i] = 1-r_v^j(y)[i]\\ \chi^{j,(i,<)}_v &\equiv q_v^j(x)[i] = 0 \wedge 1=r_v^j(y)[i]\\ \chi^{j,(i,>)}_v &\equiv q_v^j(x)[i] = 1 \wedge 0=r_v^j(y)[i] \end{align*}
(we use the symbol $\equiv$ for definitions of formulas). We can then write $$\bigwedge_{j=1}^c \left(q^j_v(x) < r^j_v(y)\right) \Leftrightarrow \exists (i_1, \dots, i_c) \in [n]^c \bigwedge_{j=1}^c \left(\bigwedge_{k=1}^{i_j-1}\chi^{j,(k,=)}_v \wedge \chi^{j,(i_j,<)}_v\right).$$ The expression bounded by the existential quantifier is a conjunction of equalities of bits such that in each equality the left-hand side depends only on $x$ and the right-hand side depends only on $y$. Therefore, we can again rewrite the expression into a single equality. This is the desired expression of a conjunction of $c$ inequalities as one of $n^c$ equalities (the particular form of the equalities will be of importance, too).
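The witness form can likewise be checked exhaustively. The Python sketch below (our own encoding, for illustration only) confirms that a conjunction of $c$ inequalities holds iff some $I \in [n]^c$ witnesses it in the sense of $\Phi^I_v$:

```python
from itertools import product

def bit(a, i, n):
    # i-th most significant bit of the n-bit number a
    return (a >> (n - i)) & 1

def witnesses(q, r, I, n):
    # Phi^I: for each j, bits 1..i_j-1 agree and at position i_j: q=0, r=1
    return all(all(bit(q[j], k, n) == bit(r[j], k, n) for k in range(1, i))
               and bit(q[j], i, n) == 0 and bit(r[j], i, n) == 1
               for j, i in enumerate(I))

n, c = 3, 2
for q in product(range(2 ** n), repeat=c):
    for r in product(range(2 ** n), repeat=c):
        direct = all(qj < rj for qj, rj in zip(q, r))
        found = any(witnesses(q, r, I, n)
                    for I in product(range(1, n + 1), repeat=c))
        assert direct == found
```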
We introduce the following convention: Vertices in the protocol with equality $P'$ will be labeled with conjunctions of equalities of bits such that the left-hand side of each equality is a function of $x$ and the right-hand side of each equality is a function of $y$. If a vertex $v$ in $P'$ is labeled with $\psi$ defined as $$b_1(x) = b'_1(y) \wedge \cdots \wedge b_\ell(x) = b'_\ell(y),$$ then the functions associated with $v$ in $P'$ are $$q_v(x) := b_1(x)b_2(x)\ldots b_\ell(x) \quad\text{and}\quad r_v(y) := b'_1(y)b'_2(y)\ldots b'_\ell(y).$$ Hence, $v$ labeled with $\psi$ is feasible in the new protocol $P'$ for $x$ and $y$ iff $\psi$ is true (for $x$ and $y$).
For $I = (i_1, \dots, i_c) \in [n]^c$, we denote $$\Phi^I_v \equiv \bigwedge_{j=1}^c \left( \bigwedge_{k=1}^{i_j-1} \chi^{j,(k,=)}_v \wedge \chi^{j,(i_j,<)}_v \right).$$ We state what this notation essentially means as a fact. \begin{fact} A vertex $v$ is feasible (in the original protocol $P$) for $x$ and $y$ iff there exists $I \in [n]^c$ such that $\Phi^I_v$ is true. We call such an $I$ a \emph{witness} for $v$. \end{fact}
We first describe a sort of skeleton of the protocol with equality $P'$ that simulates the given protocol $P$. It is a set of vertices $S$ between which we will later insert trees connecting them. For each sink $\ell \in V$, we put a vertex $\ell^{P'}$ into $S$; it is labeled with the conjunction $x_i = 0 \wedge 1 = y_i$ where $i$ is the solution of the KW game corresponding to $\ell$ in $P$. For each inner vertex $v \in V$, we put into $S$ the $n^c$ vertices $v(I)$, $I \in [n]^c$, where $v(I)$ is labeled with the conjunction $\Phi^I_v$. Observe that $v$ is feasible for $x,y$ in $P$ iff there is a witness $I \in [n]^c$ such that $v(I)$ is feasible in $P'$. An illustration for a vertex $v$ with two children is in \Cref{fig:mainthm_step1}; the question mark corresponds to the part of the protocol which we have not described yet. The size of the skeleton is upper-bounded by $sn^c$, where $s=|P|$ and $n^c$ is the number of witnesses $I$.
\begin{figure}
\caption{First step of the construction of the protocol $P'$}
\label{fig:mainthm_step1}
\end{figure}
To construct $P'$ it now remains to construct the trees between the elements of the skeleton and prove their properties. This is done in the following lemma.
\begin{lemma} \label{l:technical} Let $\phi$ be a conjunction of bit equalities such that the left-hand side of each equality is a function of $x$ and the right-hand side of each equality is a function of $y$. Let $u_1$, $\dots$, $u_p$ be inner vertices of $P$ and let $\ell_1$, $\dots$, $\ell_{p'}$ be sink vertices of $P$. If the validity of $\phi$ for $x$ and $y$ implies that at least one of the vertices $u_1$, $\dots$, $u_p$ and $\ell_1$, $\dots$, $\ell_{p'}$ is feasible for $x$ and $y$ in $P$, then it is possible to construct a binary protocol $T$ in a form of a tree with the following properties: \begin{enumerate}
\item The source of the tree is labeled with $\phi$.
\item The set of sinks of the tree is $$\left\{ u_{i}(I') \mid i \in [p], I' \in [n]^c \right\} \cup \{ \ell^{P'}_1, \dots, \ell^{P'}_{p'} \}.$$
\item The size of the tree is $\mathcal{O}(n^{2c(p+p') - 1})$.
\item The tree satisfies the feasibility condition of Definition~1(b), i.e. if a vertex with two children is feasible for $x$ and $y$, at least one of the children is feasible for $x$ and $y$. \end{enumerate} \end{lemma}
Before we prove the lemma, we show that it implies the theorem. First, we apply the lemma with $\phi$ being the empty conjunction and $p' = 0$, $p = 1$, $u_1 = v_0$ (the source of $P$). Second, we apply the lemma for every inner $v \in V$ and $I \in [n]^c$ by setting $\phi \equiv \Phi^I_v$. In this case $u_1, \dots, u_p$ and $\ell_1, \dots, \ell_{p'}$ are the children of $v$. We join all the trees we obtain and identify sources and sinks of these trees when they have the same labels. For every inner $v \in V$ and $I \in [n]^c$, the vertex $v(I)$ is a source of exactly one tree and a sink of at least one tree. Therefore, the resulting protocol has one source labeled with the empty conjunction, which is feasible for any $x$ and $y$, and its sinks are of the form $\ell^{P'}$, which correspond to solutions of the monotone KW game. The out-degree of every vertex is at most 2 and the feasibility condition of Definition~1(b) is satisfied because the trees satisfy it.
What remains is to estimate the size of the protocol. We apply \Cref{l:technical} once with empty $\phi$ and $p + p' = 1$, and we apply it for every inner $v\in V$ and $I \in [n]^c$, that is, at most $sn^c$ times. In these cases it holds $p + p' \leq d$. The total size is then $$s n^c \cdot \mathcal{O}(n^{2cd - 1}) + \mathcal{O}(n^{2c - 1}) = \mathcal{O}(sn^{2cd + c - 1}).$$
\subsection{Proof of \Cref{l:technical}}
{\bf The protocol as a search procedure.} We will start by describing the structure of the protocol without the feasibility conditions. We will first assume that all $u_i$s are inner nodes; after that we will say how to modify the protocol if some of them are sinks. We will view the protocol as a search procedure. Formally, the search is performed by two communication parties with the help of a referee who tells them which of the available vertices of the protocol is feasible. It is clear, however, that one party (corresponding to the referee) is enough, and what it does is construct a path from the root to a leaf such that all vertices on it are feasible. We will take the position of this party and imagine that we are performing the search.
The search is structured in 4 levels.
\begin{enumerate} \item On the highest level we are searching for an index $i\in[p]$ such that $u_i$ is feasible. We search for $i$ by systematically testing $i=p,p-1,\dots,1$ until we find $i$ such that $u_i$ is feasible.
\item On the level below the highest we test, for a given $i$, whether the conjunction of inequalities $\bigwedge_{j=1}^c q_{u_i}^j(x)<r_{u_i}^j(y)$ is satisfied. For $j=c,c-1,\dots,1$, we systematically test every inequality $q_{u_i}^j(x)<r_{u_i}^j(y)$ one by one until we either find an inequality that is not satisfied, or verify that all inequalities are satisfied. If we find an inequality that is not satisfied, we proceed to the next conjunction. If all inequalities are satisfied, the search is completed and we are at a leaf of $T$.
\item On the next level below we test the inequalities $q_{u_i}^j(x)<r_{u_i}^j(y)$. Let $a=q_{u_i}^j(x)$, $b=r_{u_i}^j(y)$. To test $a<b$?, we compare the bits of $a$ and $b$ starting from the most significant one and proceeding to less significant ones. (In the description of the search it is convenient to index the bits in the opposite order than before: here $a[n]$ denotes the most significant bit and $a[1]$ the least significant one, so the bit $a[k]$ here is $q^j_{u_i}(x)[n-k+1]$ in the earlier notation.) The protocol is based on formula \cref{ineq}, which means that we continue as long as the bits are equal. So we have three cases:
\begin{enumerate}
\item If for the currently tested bit $k$, $a[k]=b[k]$, the search goes on if $k>1$. If $k=1$ then $a=b$, so the players know that $u_i$ is not feasible and we go on to test the next node $u_{i-1}$.
\item If we find $k$ such that $a[k]<b[k]$, we know that $q_{u_i}^j(x)<r_{u_i}^j(y)$ and we start testing the next inequality if $j>1$, or finish testing if $j=1$, because $u_i$ is feasible.
\item If we find $k$ such that $a[k]>b[k]$, we know that $q_{u_i}^j(x)>r_{u_i}^j(y)$. In this case we know that $u_i$ is not feasible and we go on to test the next conjunction.
\end{enumerate}
\item On the lowest level we need to determine which of the three possibilities $a[k]<b[k]$, $a[k]=b[k]$, $a[k]>b[k]$ holds true. Since the simulation is by a protocol of degree 2, we first decide whether $a[k]=b[k]$ or $a[k]\neq b[k]$, and in the second case we decide whether $a[k]<b[k]$ or $a[k]>b[k]$. If we are testing the first conjunction, the feasibility conditions ensure that none of the $u_i$, $i>1$, is feasible, so $u_1$ must be feasible. This means that all inequalities in the conjunction are true and thus we only need to test whether $a[k]=b[k]$ or $a[k]<b[k]$. (This is not essential; it only makes the tree a little smaller than it would be if we kept the superfluous test for $a[k]>b[k]$.) \end{enumerate}
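The four levels can be summarized in a short simulation. The following Python sketch (a toy model of the search, not part of the paper's construction) performs the search over given values $q^j_{u_i}(x)$, $r^j_{u_i}(y)$ and returns the feasible vertex together with its witness:

```python
def search(q, r, n):
    # q[i][j], r[i][j]: the n-bit numbers q^{j+1}_{u_{i+1}}(x), r^{j+1}_{u_{i+1}}(y).
    # Returns (i, I) with u_i feasible and witness I, or None if no u_i is feasible.
    p, c = len(q), len(q[0])
    for i in range(p, 0, -1):              # level 1: vertices u_p, ..., u_1
        I = []
        for j in range(c, 0, -1):          # level 2: inequalities of the conjunction
            wit = None
            for k in range(1, n + 1):      # level 3: bits, most significant first
                a = (q[i - 1][j - 1] >> (n - k)) & 1
                b = (r[i - 1][j - 1] >> (n - k)) & 1
                if a == b:                 # level 4: equality decided first
                    continue
                if a < b:
                    wit = k                # q^j < r^j is witnessed at this bit
                break
            if wit is None:                # equality, or a[k] > b[k]: u_i infeasible
                I = None
                break
            I.append(wit)
        if I is not None:
            return i, I[::-1]              # witness in the order j = 1, ..., c
    return None

# u_2 fails (5 < 4 is false); u_1 holds with witness bits (3, 1)
assert search([[2, 3], [5, 1]], [[3, 5], [4, 7]], 3) == (1, [3, 1])
```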
This search procedure can be naturally represented as a directed acyclic graph or a tree, but we also need to label the vertices with feasibility conditions. Therefore we have to use a tree, because the feasibility conditions depend on the history of the search. To estimate the size of the tree we need a more explicit description, which we will postpone to the next section.
{\bf Feasibility conditions.} Now we describe the feasibility conditions used in the search procedure. The feasibility conditions are given by two strings, one depending on $x$, the other depending on $y$. At the root of the tree we have two strings that make the condition of $v(I)$. In each step we extend the strings by one or two more bits. Since the protocol has the form of a tree, we do not have to forget any bits. (We could forget some bits at some stages of the protocol and thus make the protocol DAG-like and slightly smaller, but the gain would not be significant and it would complicate the proof.) The added bits are determined by what the protocol tests.
It will be more convenient to view the feasibility conditions as conjunctions of $\Phi$ and some elementary equality conditions $\chi^{j,(k,=)}_v,\chi^{j,(k,\neq)}_v,\chi^{j,(k,<)}_v,\chi^{j,(k,>)}_v$. At each step of the procedure we add a new elementary term.
\begin{enumerate} \item When we test $a[k]=b[k]$? and $w$ (respectively $z$) is the child where $a[k]=b[k]$ (respectively $a[k]\neq b[k]$), the feasibility condition is extended with $\chi^{j,(k,=)}_{u_i}$ (respectively with $\chi^{j,(k,\neq)}_{u_i}$).
\item When we test whether $a[k]<b[k]$ or $a[k]>b[k]$ and $w$ (respectively $z$) is the child where $a[k]<b[k]$ (respectively $a[k]> b[k]$), the feasibility condition is extended with $\chi^{j,(k,<)}_{u_i}$ (respectively with $\chi^{j,(k,>)}_{u_i}$).
\end{enumerate} This clearly satisfies the condition on the feasibility predicates of Definition~1(b), because in the first case either $a[k]=b[k]$ or $a[k]\neq b[k]$ holds true, and in the second case either $a[k]<b[k]$ or $a[k]>b[k]$ holds true, because in the parent node we have $a[k]=1-b[k]$.
At the leaves $u_i(I)$ of the protocol, all bit terms are removed except for those that witness the feasibility of $u_i$ in $P$. Thus we get $\Phi^I_{u_i}$.
This determines the feasibility conditions in the protocol. For the sake of clarity, we will describe explicitly the conditions in particular steps, but we will only say what is added at these steps to the previous strings. We will consider the levels of the protocol in the inverse order.
\begin{itemize}
\item We have already said what happens on the lowest level.
\item When $a<b$? is tested and after the $k$th bit has been tested without deciding the inequality, the last $k$ terms in the feasibility condition express that $a[n]a[n-1]\dots a[n-k+1]=b[n]b[n-1]\dots b[n-k+1]$. This guarantees that if $a\neq b$, the two numbers differ at a lower bit. After testing the next bit (if there is any) the condition will be extended as defined above.
\item When testing a conjunction $a^1<b^1\wedge a^2<b^2\wedge\dots \wedge a^c<b^c$ and the last $j$ inequalities were tested positively, the strings of the feasibility conditions will contain witnesses of these inequalities. When an inequality $a^j<b^j$ is tested negatively, a witness for $a^j\geq b^j$ is added; there are two versions, one for $a^j=b^j$ and one for $a^j> b^j$. (At this point we could remove the witnesses of $a^c<b^c,\dots,a^{c-j+1}<b^{c-j+1}$, but we do not do it.)
\item When testing the feasibility of $u_i$ for $i>1$, the feasibility condition contains formulas that witness that all $u_{i+1},\dots,u_p$ are not feasible. If $u_i$ is tested positively, the communication proceeds to the leaf $u_i(I)$ where $I$ consists of the witnesses for the members of the conjunction. Otherwise a witness of the failure is added and the communication proceeds with testing $u_{i-1}$. \end{itemize}
\noindent The property of the feasibility conditions in Definition~1(b) is satisfied because:
\begin{enumerate} \item When testing the conjunction for $u_i$ with $i>1$, the protocol considers all 3 possibilities $a[k]<b[k]$, $a[k]=b[k]$, $a[k]>b[k]$, one of which must be true. \item In the first conjunction (the one for $u_1$) we only test whether $a[k]<b[k]$ or $a[k]=b[k]$, because the feasibility conditions contain terms witnessing the feasibility of $v$ and the non-feasibility of $u_2,\dots,u_p$ in $P$, which imply the feasibility of $u_1$. The last bit is not tested, because if the communication reaches this node, the feasibility of $v$, the non-feasibility of $u_2,\dots,u_p$, and $a[n]\dots a[2]=b[n]\dots b[2]$ force $a[1]<b[1]$. \end{enumerate}
Note that in spite of the fact that the first conjunction is always tested positively if the communication reaches this stage, we have to ``test'' it. In this case it is not literally a test, but a search for a witness $I$ that guarantees that the conjunction is true.
\subsubsection{Estimating the size of $T$}
We can assume $p'=0$, i.e., that there are no sink vertices $\ell_1, \dots, \ell_{p'}$, because if there are some, the tree is only smaller.
We construct the binary tree $T$ recursively by defining its subtrees $T^\psi_{i,j,k}$. The bottom indices determine the stage of the search: $i$ corresponds to vertex $u_i$, $j$ to the element of the tested conjunction, $k$ to the tested bit. The superscript $\psi$ denotes the feasibility condition in the root of $T^\psi_{i,j,k}$. If we disregard the feasibility conditions, then for fixed $i,j,k$, all trees $T^\psi_{i,j,k}$ are isomorphic. The feasibility conditions $\psi$, however, are different, because the stage $(i,j,k)$ of the communication may be reached in different ways and the history is encoded in $\psi$. We have already defined the feasibility conditions above. Here we will define the feasibility conditions $\psi$ for the trees $T^\psi_{i,j,k}$ only for the sake of completeness; they are not needed for estimating the size of $T$.
Trees $T^\psi_{i,j,k}$ are defined recursively using the lexicographic order starting with $i=1,j=1,k=2$. (Recall that we do not test the pair of least bits in the first inequality of the first conjunction, because their values are forced to 0 and 1; therefore we start with $k=2$, but in other instances of $i$ and $j$ we do have $k=1$.) The whole tree is $T = T^\phi_{p,c,n}$.
The tree $T^\psi_{i,j,k}$ has the root labeled with the formula $\psi$ that is a conjunction of $\phi$ (the feasibility condition at the source of the tree) and the elementary terms $\chi^{j,(k,=)}_{u_i},\chi^{j,(k,\neq)}_{u_i},\chi^{j,(k,<)}_{u_i},\chi^{j,(k,>)}_{u_i}$ added on the path to its root.
Case 1: $i = 1$. For $j \in [c]$ and $k \in \{2, \dots, n\}$, the tree $T^\psi_{1,j,k}$ has a root labeled with $\psi$. There are two subtrees under the root, named $A^=$ and $A^<$. We define: \begin{align*} \psi^= &\equiv \psi \wedge \chi^{j,(n-k+1,=)}_{u_1} \\ \psi^< &\equiv \psi \wedge \chi^{j,(n-k+1,<)}_{u_1} \end{align*} The particular choices for $A^=$ and $A^<$ are in the table. The superscript for $T$ is omitted; it is $\psi^=$ in the column $A^=$ and $\psi^<$ in the column $A^<$. When there is $u_1(I_1)$ or $u_1(I_2)$ in the table, the subtree consists of the single vertex $u_1(I_1)$ or $u_1(I_2)$, respectively.
\begin{tabular}{lcccc} Case & $j$ & $k$ & $A^=$ & $A^<$ \\ \hline 1.1 & $=1$ & $=2$ & $u_1(I_1)$ & $u_1(I_2)$ \\ 1.2 & $=1$ & $>2$ & $T_{1,1,k-1}$ & $u_1(I_2)$ \\ 1.3 & $>1$ & $=2$ & $T_{1,j-1,n}$ & $T_{1,j-1,n}$ \\ 1.4 & $>1$ & $>2$ & $T_{1,j,k-1}$ & $T_{1,j-1,n}$ \end{tabular}
\noindent The meaning of this table is as follows. In Case~1.1 this is the last step of the search procedure, hence the tree is connected to $u_1(I_1)$ and $u_1(I_2)$. The conditions $I_1$ and $I_2$ are witnesses of the first conjunction being true; they differ only in the last bits. In Case~1.2 the first conjunction is being tested and the bits of the numbers that are tested are $>2$. Hence the testing can either continue, if the bits are the same (column $A^=$), or finish, if the first is less than the second (column $A^<$). In Case~1.3 the second least significant bit is tested in the $j$th conjunction. Hence testing proceeds to the next conjunction (the $(j-1)$st) and the subtrees representing this search differ only in their feasibility conditions. In Case~1.4 a higher bit is tested in the $j$th conjunction. If the tested bits are equal, testing of the inequality continues; otherwise the next inequality is tested.
\begin{figure}
\caption{Case 2 of the construction}
\label{fig:tecko}
\end{figure}
Case 2: $i > 1$. The difference is that now we have to test all three possibilities $a[k]<b[k],a[k]=b[k],a[k]>b[k]$ for the tested $k$th bit.
For $i \in \{2, \dots, p\}$, $j \in [c]$ and $k \in [n]$, the tree $T^\psi_{i,j,k}$ has again a root labeled with $\psi$. The left subtree is $B^=$, the right subtree has a root $b^{\neq}$ which has two subtrees $B^<$ and $B^>$. See \Cref{fig:tecko}. We define: \begin{align*} \psi^{\neq} &\equiv \psi \wedge \chi^{j,(n-k+1,\neq)}_{u_i} \\ \psi^= &\equiv \psi \wedge \chi^{j,(n-k+1,=)}_{u_i} \\ \psi^< &\equiv \psi \wedge \chi^{j,(n-k+1,<)}_{u_i} \\ \psi^> &\equiv \psi \wedge \chi^{j,(n-k+1,>)}_{u_i} \end{align*} The label of $b^{\neq}$ is $\psi^{\neq}$. Again, we omit the superscript for $T$ which is $\psi^t$ in the column $B^t$ for $t \in \{ <, >, = \}$.
\begin{tabular}{lccccc} Case & $j$ & $k$ & $B^=$ & $B^<$ & $B^>$ \\ \hline 2.1 & $=1$ & $=1$ & $T_{i-1,c,n}$ & $u_i(I_2)$ & $T_{i-1,c,n}$\\ 2.2 & $=1$ & $>1$ & $T_{i,1,k-1}$ & $u_i(I_2)$ & $T_{i-1,c,n}$\\ 2.3 & $>1$ & $=1$ & $T_{i-1,c,n}$ & $T_{i,j-1,n}$ & $T_{i-1,c,n}$\\ 2.4 & $>1$ & $>1$ & $T_{i,j,k-1}$ & $T_{i,j-1,n}$ & $T_{i-1,c,n}$\\ \end{tabular}
We leave it to the reader to figure out the meaning of these cases, because it is very similar to the case $i=1$. We now compute the size $\lvert T_{1,j,k} \rvert$.
\begin{claim} \label{claim:Tsizecase1} For $j \in [c]$ and $k \in \{2, \dots, n\}$ $$\lvert T_{1,j,k} \rvert = 2kn^{j-1} - 1.$$ \end{claim}
\begin{proof} We use induction and verify cases 1.1 -- 1.4:\\ Case 1.1: $$\lvert T_{1,1,2} \rvert = 3$$ Case 1.2: For $k \in \{3, \dots, n\}$: $$\lvert T_{1,1,k} \rvert = 2 + \lvert T_{1,1,k-1} \rvert = 2 + 2(k-1) -1 = 2k - 1$$ Case 1.3: For $j \in \{2, \dots, c\}$: $$\lvert T_{1,j,2} \rvert = 1 + 2\lvert T_{1,j-1,n} \rvert = 1 + 2\cdot (2n\cdot n^{j-2} - 1) = 4n^{j-1} - 1$$ Case 1.4: For $j \in \{2, \dots, c\}$ and $k \in \{3, \dots, n\}$: $$\lvert T_{1,j,k} \rvert = 1 + \lvert T_{1,j,k-1} \rvert + \lvert T_{1,j-1,n} \rvert = 1 + 2(k-1)n^{j-1} - 1 + 2n\cdot n^{j-2} - 1 = 2kn^{j-1} - 1$$ \end{proof}
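The closed form can be double-checked by evaluating the recurrences of cases 1.1--1.4 directly. The following Python sketch (our verification aid, not part of the proof) confirms the claim for small parameters:

```python
import functools

def check_case1(n, c):
    @functools.lru_cache(maxsize=None)
    def size(j, k):
        if j == 1 and k == 2:
            return 3                                # case 1.1
        if j == 1:
            return 2 + size(1, k - 1)               # case 1.2
        if k == 2:
            return 1 + 2 * size(j - 1, n)           # case 1.3
        return 1 + size(j, k - 1) + size(j - 1, n)  # case 1.4
    return all(size(j, k) == 2 * k * n ** (j - 1) - 1
               for j in range(1, c + 1) for k in range(2, n + 1))

assert check_case1(7, 4)
```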
In the more general cases, it would be too tedious to write down the exact formulas and we instead use approximate bounds.
\begin{claim} \label{claim:Tsize} For $n \geq 10$: $$\lvert T_{p,c,n} \rvert \leq 2n^{2pc - 1}$$ \end{claim}
\begin{proof} We prove by induction that for $i \in [p], j \in [c], k \in [n]$ (except for the case $i = 1$ and $k = 1$ when $T_{i,j,k}$ is undefined): $$\lvert T_{i,j,k} \rvert \leq 2kn^{2(i - 1)c + 2(j - 1)}$$
This holds for the case $i = 1$ because the formula from \Cref{claim:Tsizecase1} satisfies $$2kn^{j-1}-1 \leq 2kn^{j-1} \leq 2kn^{2(j - 1)}.$$
We consider cases 2.1 -- 2.4:\\ Case 2.1: For $i \in \{2, \dots, p\}$: $$\lvert T_{i,1,1} \rvert = 3 + 2\lvert T_{i-1,c,n} \rvert \leq 3 + 2\cdot 2n\cdot n^{2(i - 2)c+2(c - 1)} \leq 7n^{2(i - 1)c - 1} \leq 2n^{2(i-1)c}$$ Case 2.2: For $i \in \{2, \dots, p\}$ and $k \in \{2, \dots, n\}$: \begin{align*}\lvert T_{i,1,k} \rvert &= 3 + \lvert T_{i,1,k-1} \rvert + \lvert T_{i-1,c,n} \rvert \leq 3 + 2(k-1)n^{2(i-1)c} + 2n\cdot n^{2(i-2)c + 2(c-1)}\\ & = 2(k-1)n^{2(i-1)c} + (2 + 2n^{2(i-1)c - 1}) \leq 2k n^{2(i-1)c}\end{align*} Case 2.3: For $i \in \{2, \dots, p\}$ and $j \in \{2, \dots, c\}$: \begin{align*}\lvert T_{i,j,1} \rvert &= 2 + 2\lvert T_{i-1,c,n} \rvert + \lvert T_{i,j-1,n} \rvert \\ & \leq 2 + 2\cdot 2n\cdot n^{2(i-2)c + 2(c-1)} + 2n\cdot n^{2(i-1)c + 2(j-2)}\\ & = 2 + 4n^{2(i-1)c - 1} + 2n^{2(i-1)c + 2(j - 1) - 1} \leq 2n^{2(i-1)c + 2(j-1)}\end{align*} Case 2.4: For $i \in \{2, \dots, p\}$, $j \in \{2, \dots, c\}$ and $k \in \{2, \dots, n\}$: \begin{align*}\lvert T_{i,j,k} \rvert &= 2 + \lvert T_{i,j,k-1} \rvert + \lvert T_{i,j-1,n} \rvert + \lvert T_{i-1,c,n} \rvert\\
& \leq 2 + 2(k-1)\cdot n^{2(i-1)c + 2(j-1)} \\& + 2n\cdot n^{2(i-1)c + 2(j-2)} + 2n\cdot n^{2(i-2)c + 2(c-1)} \\
& = 2(k-1) n^{2(i-1)c + 2(j-1)} + 2\\& + 2n^{2(i-1)c + 2(j - 1) - 1} + 2n^{2(i-1)c - 1} \\ &\leq 2k n^{2(i-1)c + 2(j-1)} \end{align*}
All the cases are checked and the statement of the claim follows. \end{proof}
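Again, the recurrences of cases 2.1--2.4 can be evaluated directly to confirm both the inductive bound and the final estimate for moderate parameters. A verification sketch in Python (ours, not part of the proof):

```python
import functools

def check_case2(n, p, c):
    @functools.lru_cache(maxsize=None)
    def size(i, j, k):
        if i == 1:
            return 2 * k * n ** (j - 1) - 1                       # Claim for i = 1
        if j == 1 and k == 1:
            return 3 + 2 * size(i - 1, c, n)                      # case 2.1
        if j == 1:
            return 3 + size(i, 1, k - 1) + size(i - 1, c, n)      # case 2.2
        if k == 1:
            return 2 + 2 * size(i - 1, c, n) + size(i, j - 1, n)  # case 2.3
        return (2 + size(i, j, k - 1) + size(i, j - 1, n)
                + size(i - 1, c, n))                              # case 2.4
    ok = all(size(i, j, k) <= 2 * k * n ** (2 * (i - 1) * c + 2 * (j - 1))
             for i in range(1, p + 1) for j in range(1, c + 1)
             for k in range(1, n + 1) if not (i == 1 and k == 1))
    return ok and size(p, c, n) <= 2 * n ** (2 * p * c - 1)

assert check_case2(10, 3, 2)
```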
As noted above, if we have $p'$ sinks in addition to the $p$ inner vertices $u_i$, the size of $T$ is at most (in fact less, except for trivial cases) the size of $T_{p+p',c,n}$. Hence the size of $T$ can be bounded by $\mathcal{O}(n^{2(p+p')c-1})$.
This finishes the proof of \Cref{l:technical} and also the proof of \Cref{thm:main}.
\section{A potential application of protocols with equality} \label{sec:applications}
We show in this section how one could turn lower bounds for the protocols from this paper into lower bounds for the proof system R(LIN). The idea of using protocols with equality for R(LIN) is due to D. Sokolov (personal communication with P. Pudlák). This result has not been published; therefore, we present it here for the sake of completeness.
We define R(LIN) in the spirit of Krajíček's book~\cite{krajicekproofcomplexity}. However, we use it only as a~proof system for DNF tautologies (i.e. a refutation system for CNF formulas). Therefore, the Boolean axioms and the weakening rule can be omitted.
\begin{definition}[R(LIN)] The system R(LIN) is an extension of resolution. A~clause in the system R(LIN)
is a~set $C = \{f_1, \dots, f_k\}$ where all elements $f_i \in \mathbf{F}_2[x_1, \dots, x_n]$ are linear polynomials. As usual, the value 0 is interpreted as false and the value 1 as true. The clause $C$ is true for $\overline{a} \in \{0,1\}^n$ if and only if the Boolean formula $\bigvee_{i=1}^k f_i$ is true.
Let $\phi$ be a~CNF formula with the set of clauses $S$. Replacing each negative literal $\neg x_i$ with $1+x_i$, we obtain an equivalent set of clauses $S'$ for R(LIN).
An R(LIN)-refutation of $\phi$ is then a derivation of the empty clause from the clauses of $S'$ using the following two rules.
The contraction rule: \begin{prooftree} \AxiomC{$C,0$} \UnaryInfC{$C$} \end{prooftree}
The addition rule: \begin{prooftree} \AxiomC{$C,g$} \AxiomC{$D,h$} \BinaryInfC{$C,D,g+h+1$} \end{prooftree} \end{definition}
To translate the lower bounds from protocols to proofs, we need to convert a function $f$ which is hard to solve by a~protocol to a formula which is hard to refute. The following lemma is essentially Lemma 13.5.1 from Krajíček's book \cite{krajicekproofcomplexity}.
\begin{lemma} \label{l:functiontoformula} Let $f\colon \{0,1\}^* \rightarrow \{0,1\}$ be a partial monotone function such that $U = f^{-1}\{0\}$ and $V = f^{-1}\{1\}$ are NP-sets. Then for each $n \geq 1$, there exists a CNF formula $\phi(\overline{x}, \overline{y}) \wedge \psi(\overline{x}, \overline{z})$ such that: \begin{enumerate} \item variables from $\overline{y}$ and $\overline{z}$ are disjoint, \item variables from $\overline{x}$ occur only negatively in $\phi(\overline{x}, \overline{y})$, \item variables from $\overline{x}$ occur only positively in $\psi(\overline{x}, \overline{z})$, \item \label{someitem} in each clause of $\psi(\overline{x}, \overline{z})$, there is at most one occurrence of a variable from $\overline{x}$, \item the size of $\phi(\overline{x}, \overline{y}) \wedge \psi(\overline{x}, \overline{z})$ is polynomial in $n$,
\item $U \cap \{0,1\}^n = \left\{ \overline{a} \in \{0,1\}^n \middle| \phi(\overline{a},\overline{y}) \in SAT \right\}$,
\item $V \cap \{0,1\}^n = \left\{ \overline{b} \in \{0,1\}^n \middle| \psi(\overline{b},\overline{z}) \in SAT \right\}$. \end{enumerate} \end{lemma}
\begin{proof} Except for \Cref{someitem}, this is Lemma 13.5.1 from \cite{krajicekproofcomplexity}.
We replace each clause of $\psi(\overline{x}, \overline{z})$ by an equivalent set of clauses.
For example, clause $\{x_1, x_2, x_3, z_1, \neg z_2, z_3\}$ is transformed into an equivalent set of clauses $\{z_1, \neg z_2, z_3, x_1, z_4\}$, $\{\neg z_4, x_2, z_5\}$, $\{\neg z_5, x_3\}$
where $z_4, z_5$ are new variables. Generally: Consider a clause of
the form $\{\ell_1, \dots, \ell_k, \ell'_1, \dots, \ell'_m\}$ where
$\ell_1, \dots, \ell_k$ are literals from $\overline{x}$ and $\ell'_1,
\dots, \ell'_m$ are literals from $\overline{z}$. Such a~clause is
replaced by $k$ clauses: $\{\ell'_1, \dots, \ell'_m, \ell_1,
\ell''_1\}$, $\{\neg \ell''_1, \ell_2, \ell''_2\}$, \dots,$\{\neg
\ell''_{k-2}, \ell_{k-1}, \ell''_{k-1}\}$, $\{\neg \ell''_{k-1}, \ell_k\}$ where $\ell''_1, \dots, \ell''_{k-1}$ are new variables put into $\overline{z}$. \end{proof}
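The transformation can be sanity-checked by brute force. The following Python sketch (our own encoding; the variable names match the example above) verifies that the original clause and the replacement set are equisatisfiable over the extension variables $z_4, z_5$, and that each new clause contains at most one variable from $\overline{x}$:

```python
from itertools import product

# A literal is (variable, sign); sign 1 = positive, 0 = negated.
orig = [('x1', 1), ('x2', 1), ('x3', 1), ('z1', 1), ('z2', 0), ('z3', 1)]
new = [[('z1', 1), ('z2', 0), ('z3', 1), ('x1', 1), ('z4', 1)],
       [('z4', 0), ('x2', 1), ('z5', 1)],
       [('z5', 0), ('x3', 1)]]

def sat(clause, asg):
    return any(asg[v] == s for v, s in clause)

base_vars = ['x1', 'x2', 'x3', 'z1', 'z2', 'z3']
for vals in product([0, 1], repeat=len(base_vars)):
    asg = dict(zip(base_vars, vals))
    ok_orig = sat(orig, asg)
    ok_new = any(all(sat(cl, {**asg, 'z4': a, 'z5': b}) for cl in new)
                 for a in (0, 1) for b in (0, 1))
    assert ok_orig == ok_new
# each new clause contains at most one variable from x
assert all(sum(v.startswith('x') for v, _ in cl) <= 1 for cl in new)
```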
Observe that $\phi(\overline{x}, \overline{y}) \wedge \psi(\overline{x}, \overline{z})$ is unsatisfiable because $U \cap V = \emptyset$.
We are now ready to state the theorem for R(LIN) and protocols with equality.
\begin{theorem} Let $f\colon \{0,1\}^* \rightarrow \{0,1\}$ be a monotone partial function such that $U = f^{-1}(0)$ and $V = f^{-1}(1)$ are NP-sets.
Then the following holds for all $n$: if $s(n)$ is the minimum size of a DAG communication protocol of degree 2 with equality solving $f_n = \left.f\right|_{\{0,1\}^n}$, then there is a formula $\alpha^f_n$ of size $n^{O(1)}$ such that the minimum number of lines of an R(LIN)-refutation of $\alpha^f_n$ is at least $s(n)$. \end{theorem}
\begin{proof} Let us fix $n$. We use the formula $\phi(\overline{x}, \overline{y}) \wedge \psi(\overline{x}, \overline{z})$ from \Cref{l:functiontoformula} as $\alpha^f_n$.
For the sake of contradiction, we assume that there is an R(LIN)-refutation $d$ of $\phi(\overline{x}, \overline{y}) \wedge \psi(\overline{x}, \overline{z})$ whose number of lines is $s < s(n)$. We construct a protocol $P$ of degree 2 with equality of size at most $s$ solving $f_n$. The underlying graph of the protocol $P$ is the graph of the proof $d$ with the direction of the edges reversed. For each $\overline{a} \in f^{-1}(0)$, we fix $\overline{y_a}$ such that $\phi(\overline{a},\overline{y_a})$ is true. Analogously, for each $\overline{b} \in f^{-1}(1)$, we fix $\overline{z_b}$ such that $\psi(\overline{b},\overline{z_b})$ is true.
Let us consider a vertex $v$ with the clause $f_1, \dots, f_k$. The functions $q_v$, $r_v$ will be set so that the vertex $v$ is feasible for $\overline{a}, \overline{b}$ iff the assignment $\overline{a}, \overline{y_a}, \overline{z_b}$ falsifies $f_1, \dots, f_k$. For each $i$, we split $f_i = f_i^0 + f_i^1$ so that $f_i^0$ contains only variables from $\overline{x}, \overline{y}$ and the constant 1 if present, while $f_i^1$ contains only variables from $\overline{z}$. It is then enough to set $q_v(\overline{a})$ to the sequence of bits $v_1v_2\dots v_k$ where $v_i$ is the value of $f^0_i$ under the assignment $\overline{a}, \overline{y_a}$, and to set $r_v(\overline{b})$ to the sequence of bits $w_1w_2\dots w_k$ where $w_i$ is the value of $f^1_i$ under the assignment $\overline{z_b}$. Indeed, $f_i = f_i^0 + f_i^1$ is falsified iff $v_i = w_i$, so the clause is falsified iff $q_v(\overline{a}) = r_v(\overline{b})$.
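The point of the splitting is that falsifying the whole clause becomes a single equality of two bit strings. A small Python sketch (with made-up bit values; our illustration, not the paper's notation) makes this explicit:

```python
# f_i = f_i^0 + f_i^1 over F_2; the clause {f_1, ..., f_k} is falsified
# iff every f_i is 0, i.e. iff f_i^0 and f_i^1 agree for every i.
def feasible(f0_vals, f1_vals):
    q = tuple(f0_vals)   # q_v(a): values of the f_i^0 under a, y_a
    r = tuple(f1_vals)   # r_v(b): values of the f_i^1 under z_b
    return q == r

def clause_falsified(f0_vals, f1_vals):
    return all((u + v) % 2 == 0 for u, v in zip(f0_vals, f1_vals))

assert feasible([1, 0, 1], [1, 0, 1]) == clause_falsified([1, 0, 1], [1, 0, 1])
assert feasible([1, 0, 0], [1, 0, 1]) == clause_falsified([1, 0, 0], [1, 0, 1])
```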
First, observe that the source of the graph is always feasible because the empty clause is falsified by any assignment. Next, we observe that if a vertex $v$ is feasible for $\overline{a}$ and $\overline{b}$, then at least one of the sons is feasible for $\overline{a}$ and $\overline{b}$, too. This is trivial for the contraction rule of R(LIN). Let us consider the vertex $v$ with the clause $C,D,f+g+1$ and its two sons with clauses $C,f$ and $D,g$. Because $f+g+1$ is false under the assignment $\overline{a}, \overline{y_a}, \overline{z_b}$, at least one of $f$, $g$ must be false under the same assignment. The clauses $C$, $D$ are trivially false under this assignment. Therefore at least one of the sons is indeed feasible.
Let us consider a vertex $v$ such that all of its descendants come exclusively from the clauses of $\phi(\overline{x},\overline{y})$. If the clause of $v$ were falsified by the assignment $\overline{a}, \overline{y_a}$, it would contradict the fact that $\phi(\overline{a}, \overline{y_a})$ is true. Therefore the vertex $v$ is never feasible, and we remove all such vertices. It remains to verify that the sinks give us a correct solution to the KW game. Each sink $\ell$ must have a clause $C$ from $\psi(\overline{x},\overline{z})$. We claim that the sink is feasible if and only if $a_i = 0 \wedge b_i = 1$, where $x_i$ is the (only) variable from $\overline{x}$ appearing in the clause $C$. Feasibility of $\ell$ implies that all literals in $C$ are falsified by the assignment $\overline{a}, \overline{z_b}$. But $C$ is true under the assignment $\overline{b},\overline{z_b}$. Therefore, $a_i = 0 \wedge b_i = 1$ as required.
We have constructed a protocol solving $f_n$ of size at most $s$. Because $s < s(n)$, this is a contradiction. \end{proof}
\end{document}
\begin{document}
\title{Some virtually abelian subgroups of
the group of analytic symplectic diffeomorphisms of $S^2$
} \begin{abstract} We show that if $M$ is a compact oriented surface of genus $0$ and $G$ is a subgroup of $\Symp^\omega_\mu(M)$ which has an infinite normal solvable subgroup, then $G$ is virtually abelian. In particular the centralizer of an infinite order $f \in \Symp^\omega_\mu(M)$ is virtually abelian. Another immediate corollary is that if $G$ is a solvable subgroup of $\Symp^\omega_\mu(M)$ then $G$ is virtually abelian. We also prove a special case of the Tits Alternative for subgroups of {$\Symp^\omega_\mu(M).$} \end{abstract} \section{Introduction}
If $M$ is a compact connected oriented surface we let $\Df0^r(M)$ denote the group of $C^r$ diffeomorphisms isotopic to the identity. In particular if $r = \omega$ this denotes the subgroup of those which are real analytic. Let $\Symp^r_\mu(M)$ denote the subgroup of $\Df0^r(M)$ whose elements preserve a smooth volume form $\mu$. We will denote by $\Cent^r(f)$ and $\Cent^r_\mu(f)$ the subgroups of $\Df0^r(M)$ and $\Symp^r_\mu(M)$, respectively, whose elements commute with $f.$ If $G$ is a subgroup of $\Symp^r_\mu(M)$ then $\Cent^r_\mu(f,G)$ will denote the subgroup of $G$ whose elements commute with $f.$ In this article we address the algebraic structure of subgroups of $\Symp^\infty_\mu(M)$; our results are largely limited to the case of analytic diffeomorphisms when $M$ has genus $0$. Our approach is to understand the possible dynamical behavior of such diffeomorphisms and use the structure exhibited to deduce information about subgroups.
\begin{defn} If $N$ is a connected manifold, an element $f \in \Diff^r(N)$ will be said to have {\em full support} provided $N \setminus \Fix(f)$ is dense in $N$. We will say that $f$ has {\em support of finite type} if both $\Fix(f)$ and $N \setminus \Fix(f)$ have finitely many components. We will say that a subgroup $G$ of $\Diff^r(N)$ has {\em full support of finite type} if every non-trivial element of $G$ has full support of finite type. \end{defn}
The primary case of interest for this article is analytic diffeomorphisms, but we focus on groups with full support of finite type to emphasize the properties we use, which are dynamical rather than analytic in nature. The following result shows that this class of groups includes $\Diff^\omega(M)$.
\begin{prop}\label{analytic} If $N$ is a compact connected manifold, the group $\Diff^\omega(N)$ has full support of finite type. \end{prop}
\begin{proof}
If $f$ is analytic and non-trivial, then $\Fix(f)$ is an analytic set in $N$ with empty interior, and $N \setminus \Fix(f)$ is a semianalytic set. Hence both $\Fix(f)$ and $N \setminus \Fix(f)$ have finitely many components (see Corollary 2.7 of \cite{B-M} for both these facts). \end{proof}
The following result is due to Katok \cite{Katok2}, who stated it only in the analytic case. For completeness we give a proof in section~\ref{sec:positive entropy}; the proof we give is essentially the same as in the analytic case.
\begin{prop}[Katok \cite{Katok2}]\label{prop cyclic} Suppose $G$ is a subgroup of $\Diff^2(M)$ which has full support of finite type and $f \in \Diff^2(M)$ has positive topological entropy. Then the centralizer of $f$ in $G$ is virtually cyclic. Moreover, every infinite order element of this centralizer has positive topological entropy. \end{prop}
A corollary of the proof of this result is the following.
\begin{cor}\label{dichotomy} Suppose $f,g \in \Diff^2(M)$ have infinite order and full support of finite type. If $fg = gf$, then $f$ has positive topological entropy if and only if $g$ has positive entropy. \end{cor}
We remark that, in contrast to our subsequent results, Proposition~\ref{prop cyclic} and its corollary make no assumption about preservation of a measure $\mu.$
We can now state our main result, whose proof is given in section~\ref{main section}.
\begin{thm}\label{main thm}
Suppose $M$ is a compact oriented surface of genus $0$ and $G$ is a subgroup
of $\Symp^\infty_\mu(M)$ which has full support of finite type, e.g. a subgroup of $\Symp^\omega_\mu(M)$. Suppose further that $G$ has an infinite normal solvable subgroup. Then $G$ is virtually abelian. \end{thm}
There are several interesting corollaries of this result. To motivate the first we recall the following result.
\begin{thm}[Farb-Shalen \cite{Farb-Shalen}] Suppose $f \in \Df0^\omega(S^1)$ has infinite order. Then the centralizer of $f$ in $\Df0^\omega(S^1)$ is virtually abelian. \end{thm}
There are a number of instances where results about the algebraic structure of $\Diff(S^1)$ have close parallels for $\Symp_\mu(M)$. By analogy with the result of Farb and Shalen above, it is natural to ask the following question.
\begin{question} Suppose $M$ is a compact surface and
$G$ is a subgroup of $\Df0^2(M)$ with full support of finite type and $f \in G$ has infinite order. Is the centralizer of $f$ in $G$ always virtually abelian? \end{question}
The following result gives a partial answer to this question.
\begin{prop}\label{prop: cent} Suppose $M$ is a compact surface of genus $0$, $H$ is a subgroup of $\Symp^\infty_\mu(M)$ with full support of finite type and $f \in H$ has infinite order. Then $\Cent^\infty_\mu(f, H)$, the centralizer of $f$ in $H,$ is virtually abelian. In particular this applies when $H \subset \Symp_\mu^\omega(M)$. \end{prop}
\begin{proof} If we apply Theorem~\ref{main thm} to the group $G = \Cent^\infty_\mu(f,H)$ and observe that the cyclic group generated by $f$ is a normal abelian subgroup, then we conclude that $G$ is virtually abelian. \end{proof}
Question 1.9 of \cite{FH-ent0} asks if any finite index subgroup of $\mcg(\Sigma_g)$ can act faithfully by area preserving diffeomorphisms on a closed surface. The following corollary of Proposition~\ref{prop: cent} and Proposition~\ref{analytic} gives a partial answer in the special case of analytic diffeomorphisms and $M$ of genus $0$.
\begin{cor} Suppose $H$ is a finite index subgroup of $\mcg(\Sigma_g)$ with $g \ge 2$ or $\Out(F_n)$ with $n \ge 3$, and $G$ is a group with the property that every element of infinite order has a virtually abelian centralizer. Then any homomorphism $\nu: H \to G$ has non-trivial kernel. In particular this holds if $G = \Symp^\omega_\mu(S^2).$
\end{cor} \begin{proof} Suppose first that $H$ is a finite index subgroup of $\mcg(\Sigma_g)$ with $g \ge 2$. Let $\alpha_1, \alpha_2$ and $\alpha_3$ be simple closed curves in $\Sigma_g$ such that $\alpha_1$ and $\alpha_2$ are disjoint from $\alpha_3$ and intersect each other transversely in exactly one point. Let $T_i$ be a Dehn twist around $\alpha_i$ with degree chosen so that $T_i \in H$. Then $T_1$ and $T_2$ commute with $T_3$ but no finite index subgroup of the group generated by $T_1$ and $T_2$ is abelian. It follows that $\nu$ cannot be injective.
Suppose now that $H$ is a finite index subgroup of $\Out(F_n)$ with $n\ge 3$. Let $A,B$ and $C$ be three of the generators of $F_n$. Define $\Phi \in \Aut(F_n)$ by $B \mapsto BA$, define $\Psi \in \Aut(F_n)$ by $C \mapsto CBAB^{-1}A$ and define $\Theta \in \Aut(F_n)$ by $C \mapsto CABAB^{-1}$, where $\Phi$ fixes all generators other than $B$ and where $\Psi $ and $\Theta$ fix all generators other than $C$. Let $\phi, \psi$ and $\theta$ be the elements of $\Out(F_n)$ determined by $\Phi, \Psi$ and $\Theta$ respectively. It is straightforward to check that $\Phi$ commutes with $\Psi$ and $\Theta$ and that for all $k > 0$, $\Psi^k$ (which is defined by $C \mapsto C(BAB^{-1}A)^k$) does not commute with $\Theta^k$ (which is defined by $C \mapsto C(ABAB^{-1})^k$). Moreover, $[\Theta^k,\Psi^k]$ is not a non-trivial inner automorphism because it fixes both $A$ and $B$. It follows that $[\theta^k,\psi^k]$ is non-trivial for all $k$ and hence that no finite index subgroup of the group generated by $\psi$ and $\theta$, and hence no finite index subgroup of $\Cent(\phi)$,
the centralizer of $\phi$ in $\Out(F_n),$ is abelian. The proof now concludes as above. \end{proof}
Another interesting consequence of Theorem~\ref{main thm} is the following result.
\begin{prop}\label{prop: solv} Suppose $M$ is a compact surface of genus $0$ and $G$ is a solvable subgroup of $\Symp^\infty_\mu(M)$ with full support of finite type, then $G$ is virtually abelian. \end{prop}
The {\em Tits alternative} (see J. Tits \cite{Tits}) is satisfied by a group $G$ if every subgroup (or by some definitions every finitely generated subgroup) of $G$ is either virtually solvable or contains a non-abelian free group. This is a deep property known for finitely generated linear groups, mapping class groups \cite{Iv1}, \cite{mccarthy:tits}, and the outer automorphism group of a free group \cite{bfh:tits1} \cite{bfh:tits2}. It is an important open question for $\Diff^\omega(S^1)$ (see \cite{Farb-Shalen}). It is known that this property is not satisfied for $\Diff^\infty(S^1)$ (again see \cite{Farb-Shalen}).
This naturally raises the question for surfaces.
\begin{conj}[{\bf Tits alternative}]\label{tits} If $M$ is a compact surface then every finitely generated subgroup of $\Symp_\mu^\omega(M)$ is either virtually solvable or contains a non-abelian free group. \end{conj}
In the final section of this paper we are able to use the techniques developed to prove an important special case of this conjecture.
\begin{thm}\label{tits special case} Suppose that $M$ is a compact oriented genus zero surface, that $G$ is a subgroup of $\Symp^\omega_\mu(M)$ and that $G$ contains an infinite order element with entropy zero and at least three periodic points. Then either $G$ contains a subgroup isomorphic to $F_2$, the free group on two generators, or $G$ has an abelian subgroup of finite index. \end{thm}
We observe that one cannot expect the virtually abelian (as opposed to solvable) conclusion to hold more generally, as the subgroup of $\Symp^\omega_\mu(\mathbb T^2)$ generated by an ergodic translation and the automorphism with matrix \[ \begin{pmatrix} 1 & 1\\ 0 & 1\\ \end{pmatrix} \] is solvable, but not virtually abelian.
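Both claims in the last sentence can be checked directly. Write $T_v$ for the translation by $v = (v_1, v_2)$ and $L$ for the automorphism above (this notation is introduced only for the verification):
\[
L T_v L^{-1} = T_{Lv}, \qquad [L, T_v] = T_{(L - I)v} = T_{(v_2,\, 0)}.
\]
Thus every commutator in $G = \langle L, T_v \rangle$ is a translation, so the derived subgroup of $G$ is abelian and $G$ is solvable of derived length at most two. On the other hand, ergodicity of $T_v$ forces $v_2$ to be irrational, so $[L^k, T_v^m] = T_{(k m v_2,\, 0)} \ne id$ whenever $k, m \ne 0$; since any finite index subgroup of $G$ contains such powers $L^k$ and $T_v^m$, no finite index subgroup of $G$ is abelian.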
\section{Maximal Annuli for elements of {$\Symp^\infty_\mu(M).$}}
\begin{defn}\label{defn: annular comp}
Suppose $M$ is a compact oriented surface.
The {\em annular compactification} of an open annulus $U \subset M$ is obtained by blowup on an end whose frontier is a single point and by the prime end compactification otherwise. We will denote it by $U_c$ (see 2.7 \cite{FH-ent0} for details). \end{defn}
For any diffeomorphism $h$ of $M$ which leaves $U$ invariant there is a canonical extension to a homeomorphism $h_c: U_c \to U_c$ which is functorial in the sense that $(hg)_c = h_c g_c$.
\begin{defn} Suppose $M$ is a compact genus zero surface, $f \in \Symp^\infty_\mu(M)$, and that the number of periodic points of $f$ is greater than the Euler characteristic of $M$.
If $f$ has infinite order and entropy $0$, we will call it a {\em multi-rotational diffeomorphism.} This set of diffeomorphisms will be denoted $\zz(M)$. \end{defn}
The rationale for the terminology {\em multi-rotational} comes from the following result which is a distillation of several results from \cite{FH-ent0} (see Theorems 1.2, 1.4 and 1.5 from that paper). In particular, if $f \in \zz(M)$ then every point of $\Int(M) \setminus \Fix(f)$ has a well defined rotation number (with respect to any pair of components from $\partial M \cup \Fix(f)$) and there are non-trivial intervals of rotation numbers.
\begin{thm}\label{max-annuli}
Suppose $M$ is a compact genus zero surface and $f \in \zz(M)$. The collection ${\cal A} = {\cal A}_f$ of maximal $f$-invariant open annuli in $\Int(M) \setminus \Fix(f)$ satisfies the following properties: \begin{enumerate} \item The elements of ${\cal A}$ are pairwise disjoint. \item The union $\displaystyle{\bigcup_{U\in {\cal A}}U}$ is a full measure open subset of {$M \setminus \Fix(f)$.} \item \label{item:U is essential} Each $U \in {\cal A}$ is essential in $\Int(M) \setminus \Fix(f).$
\item \label{item:rho non-constant} For each $U \in {\cal A}$, the rotation number $\rho_f: U_c \to S^1$ is continuous and non-constant. Each component of the level set of $\rho_f$ which is disjoint from $\partial U_c$ is essential in $U$, i.e. separates $U_c$ into two components, each containing a component of $\partial U_c$. \end{enumerate} \end{thm}
We will make repeated use of the properties in the following straightforward corollary.
\begin{cor}\label{Z centralizer} Suppose $M$ is a compact genus zero surface, $f \in \zz(M)$ and $\Cent^\infty_\mu(f)$ denotes its centralizer in $\Symp^\infty_\mu(M)$. \begin{enumerate} \item \label{item: cA periodic} If $g \in \Cent^\infty_\mu(f)$ and $U\in {\cal A}_f$ then $g(U) \in {\cal A}_f$ and the $\Cent^\infty_\mu(f)$-orbit of $U$ is finite. \item \label{item: inv level} If $U \in {\cal A}_f$ then any $g \in \Cent^\infty_\mu(f)$ which satisfies $g(U) = U$ must preserve the components of each level set of $\rho_f: U_c \to S^1$. \item If $f$ has support of finite type then the set ${\cal A}_f$ of maximal annuli for $f$ is finite. \end{enumerate} \end{cor} \begin{proof} If $g \in \Cent^\infty_\mu(f)$ then $g(\Fix(f)) = \Fix(f)$ so $\Int(M) \setminus \Fix(f)$ is $g$-invariant. Also $f(g(U)) = g(f(U)) = g(U)$ so $g(U)$ is $f$-invariant. Clearly $U$ is a maximal $f$-invariant annulus in $\Int(M) \setminus \Fix(f)$ if and only if $g(U)$ is. This proves $g(U) \in {\cal A}_f.$ The $\Cent^\infty_\mu(f)$-orbit of $U$ consists of pairwise disjoint annuli each with the same positive measure. There can only be finitely many of them in $M.$ This proves (1).
To show (2) we observe that $g$ is a topological conjugacy from $f$ to itself and rotation number is a conjugacy invariant. Hence $g$ must permute the components of the level sets of $\rho_f$. Clearly those level sets which contain a component of $\partial U_c$ must be preserved. Since any other such component separates $U$, if it were not $g$-invariant the fact that $g$ is area preserving would be contradicted.
Since the elements of ${\cal A}_f$ are maximal $f$-invariant annuli in $\Int(M) \setminus \Fix(f)$, they are mutually non-parallel in $\Int(M) \setminus \Fix(f)$. Let $E$ be the union of a set of core curves for some given finite subset of ${\cal A}_f$. Theorem~\ref{max-annuli}-\pref{item:U is essential} implies that each component of {$M \setminus E$} either contains a component of $\Fix(f) \cup \partial M$ or has negative Euler characteristic. The dual graph for $E$ is a tree that has one edge for each element of $E$ and has at most as many valence one and valence two vertices as there are components of $\Fix(f) \cup \partial M$. Since the Euler characteristic of the tree is $1$ there is a uniform bound on the number of its edges. It follows that the cardinality of $E$, and hence the number of elements of ${\cal A}_f$, is uniformly bounded. This proves (3). \end{proof}
\section{The positive entropy case.} \label{sec:positive entropy}
\begin{lemma}\label{exp factor} Suppose that $M$ is a closed surface, that $f\in \Diff^2(M)$ and that $g$ commutes with $f$. Suppose further that $f$ has a hyperbolic fixed point $p$ of saddle type, and that $g$ fixes $p$ and preserves the branches of $W^s(p,f)$. Then there is a $C^1$ coordinate function $t$ on $W^s(p,f)$ and a unique number $\alpha>0$ such that in these coordinates $g(t) = \alpha t.$ In particular $\alpha$ is an eigenvalue of $Dg_p.$ \end{lemma}
\begin{proof} By Sternberg linearization (see Theorem 2 of \cite{Sternberg}) there is a $C^1$ coordinate function $t$ on $W^s(p,f)$ in which the restriction of $f$ (also denoted $f$) satisfies $f(t) = \lambda t$ where $\lambda \in (0,1)$ is the eigenvalue of $Df_p$ corresponding to the eigenvector tangent to $W^s(p,f)$.
In these coordinates $g(t) = \lambda^{-n} g( \lambda^n t).$ Applying the chain rule gives $g'(t) = g'( \lambda^n t).$ Letting $n$ tend to infinity we get $g'(t) = g'(0)$, so $g'$ is constant and $g(t) = \alpha t$ where $\alpha = g'(0).$ \end{proof}
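Spelled out, the two steps in the last paragraph are a routine verification using only $fg = gf$ and the linearized form $f(t) = \lambda t$:
\[
g(t) = f^{-n}\bigl(g(f^{n}(t))\bigr) = \lambda^{-n}\, g(\lambda^{n} t),
\qquad
g'(t) = \lambda^{-n}\, g'(\lambda^{n} t)\, \lambda^{n} = g'(\lambda^{n} t).
\]
Since $\lambda \in (0,1)$ we have $\lambda^{n} t \to 0$ as $n \to \infty$, and continuity of $g'$ gives $g'(t) = g'(0)$ for every $t$.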
\begin{defn} \label{preserve branches} Suppose that $M$ is a compact surface and that $f \in \Diff^r(M)$ has a hyperbolic fixed point $p$ of saddle type. We define $\Cent^r_p(f)$ to be the subgroup of elements of $\Diff^r(M)$ which commute with $f$, fix the point $p,$ and preserve the branches of $W^s(f,p).$ Let $\mathbb R^+$ denote the multiplicative group of positive elements of $\mathbb R$. The {\em expansion factor homomorphism} \[ \phi: \Cent^r_p(f) \to \mathbb R^+, \] is defined by $\phi(g) = \alpha$ where $\alpha$ is the unique number given by Lemma~\ref{exp factor} for which $g(t) = \alpha t$ in $C^1$ coordinates on $W^s(f,p).$ It is immediate that $\phi$ is actually a homomorphism. We also observe that $\phi(g)$ is just the eigenvalue of $Dg_p$ whose eigenvector is tangent to
$W^s(f,p).$ \end{defn}
The following result is due to Katok \cite{Katok2} who stated it only in the analytic case, but gave a proof very similar to the one below.
\noindent {\bf Proposition~\ref{prop cyclic}} (Katok \cite{Katok2}). {\em Suppose $G$ is a subgroup of $\Diff^2(M)$ which has full support of finite type and $f \in \Diff^2(M)$ has positive topological entropy. Then $\Cent^2(f,G)$, the centralizer of $f$ in $G$, is virtually cyclic. Moreover, every infinite order element of $\Cent^2(f,G)$ has positive topological entropy.}
\begin{proof} By a result of Katok \cite{katok:horseshoe} there is a hyperbolic periodic point $p$ for $f$ of saddle type with a transverse homoclinic point $q$. Let $\phi: \Cent^2_p(f) \to \mathbb R^+$ be the expansion factor homomorphism of Definition~\ref{preserve branches} and let $ H =\Cent^2_p(f) \cap G $. Then $H$ has finite index in $\Cent^2(f,G)$ because otherwise the {$\Cent^2(f,G)$} orbit of $p$ is
infinite and consists of hyperbolic fixed points of $f$ all with the same eigenvalues. This is impossible because a limit point would be a non-isolated hyperbolic fixed point. For the main statement of the proposition, it suffices to show that $H$ is cyclic and for this it suffices to show that the restriction $\phi|_H: H \to \mathbb R^+$ is injective and has a discrete image.
To show that $\phi|_H$ is injective it suffices to show that
if $g \in H$ and $\phi(g) = 1$ then $g = id.$ But $\phi(g) = 1$ implies $W^s(p,f) \subset \Fix(g)$. Note that if $\phi': \Cent^2_p(f^{-1}) \to \mathbb R^+$ is the expansion homomorphism for $f^{-1}$ then $\phi(g)\phi'(g) = \det(Dg_p) = 1$ so we also know that $W^u(p,f) \subset \Fix(g)$. Hence we need only show that this implies $g = id$. Let $J_s$ be the interval in $W^s(p,f)$ joining $p$ to $q$ and define $J_u \subset W^u(p,f)$ analogously. Define $K_n = f^{-n}(J_s) \cup f^{n}(J_u)$. Then the number of components of the complement of $K_n$ tends to $+\infty$ with $n.$ If $g \ne id$ then, since $g$ has full support of finite type, each component of the complement of $K_n$ contains a component of $M \setminus \Fix(g)$, in contradiction to the fact that $M \setminus \Fix(g)$ has only finitely many components. We conclude that $\phi|_H$ is injective.
To show that $\phi|_H$ has discrete image we assume that there is a sequence $\{g_n\}$ of elements of $H$ such that $\lim \phi(g_n) = 1$, and show that $\phi(g_n) = 1$ for $n$ sufficiently large.
Let $I_s = f^{-1}(J_s)$ and $I_u = f(J_u)$. Since $I_s$ and $I_u$ are compact intervals and the point $q$ is a point in the interior of each where they intersect transversely, there is a neighborhood $U$ of $q$ such that $U \cap I_s \cap I_u =\{q\}.$ But for $n$ sufficiently large $g_n(q) \in U$ and $g_n(q) \in (I_s \cap I_u).$ It follows that $g_n(q) =q$ for $n$ sufficiently large
and hence $\phi(g_n) = 1.$ This proves that the image of $\phi|_H$ is discrete and hence that $H$ is cyclic.
Each non-trivial element $h \in H$ has $\phi(h) \ne 1$ and $\phi'(h) \ne 1$. Hence $p$ is a hyperbolic fixed point of $h$ and $q$ is a transverse homoclinic point for $h$. It follows that $h$ has positive entropy {(see Theorem 5.5 of \cite{smale:DDS}).} \end{proof}
\begin{prop}\label{A +entropy} Suppose $G$ is a subgroup of $\Diff^2(M^2)$ which has full support of finite type and $A$ is an abelian normal subgroup of $G$. If there is an element $f \in A$ with positive topological entropy then $G$ is virtually cyclic. \end{prop}
\begin{proof} It follows from Proposition~\ref{prop cyclic} that the group $A$ is virtually cyclic. Since $f \in A$ has positive entropy it has infinite order and generates a finite index subgroup $A_f$ of $A$. Hence there exists a positive integer $k$ such that $a^k \in A_f$ for all $a \in A$. In particular, for each $g \in G$, we have $g f^k g^{-1} = (g f g^{-1})^k = f^m$ for some $m \in \mathbb Z$. Since $f$ and $g f g^{-1}$ are conjugate, the topological entropy satisfies $\ent(f) = \ent(g f g^{-1})$. Hence \[
\ent\bigl((g f g^{-1})^{k}\bigr) = |k| \ent(g f g^{-1}) = |k| \ent(f),
\text{ and } \ent(f^{m}) = |m| \ent(f). \] We conclude that $m = \pm k$ and hence that $g f^k g^{-1} = f^{\pm k}$ for all $g \in G$. Let $G_0$ be the subgroup of index at most two of $G$ such that $g f^kg^{-1} = f^k$ for all $g \in G_0$. Then $G_0 \subset \Cent^2(f^k,G)$ and Proposition~\ref{prop cyclic} completes the proof. \end{proof}
\section{Mean Rotation Numbers}
In this section we record some facts (mostly well known)
about rotation numbers of area preserving homeomorphisms of the closed annulus. In subsequent sections we will want to apply these results when $M$ is a surface, $f \in \Symp^\infty_\mu(M)$, and $U$ is an $f$-invariant {\em open} annulus. We will do this by considering the extension of $f$ to the closed annulus $U_c$ which is the annular compactification of $U$. When the annulus $U$ is understood, we will use $\rho_f(x)$ to mean the rotation number of $x \in U$ with respect to the homeomorphism $f_c: U_c \to U_c$ of the closed annulus $U_c$.
\begin{defn} Suppose that $f:{\mathbb A} \to {\mathbb A}$ is a homeomorphism of a closed annulus ${\mathbb A} = S^1 \times [0,1]$ preserving a measure $\mu$. For each lift $\tilde f$, let $\Delta_{\tilde f}( x) = p_1(\tilde f(\tilde x)) - p_1(\tilde x)$ where $\tilde x$ is a lift of $x$ and $p_1: \tilde {\mathbb A} = \mathbb R \times [0,1] \to \mathbb R$ is projection onto the first factor. As reflected in the notation, $\Delta_{\tilde f}(x)$ is independent of the choice of lift $\tilde x$ and hence may be considered as being defined on ${\mathbb A}.$
If $X \subset {\mathbb A}$ is an $f$-invariant $\mu$-measurable set then the {\em mean translation number relative to} $X$
and $\tilde f$ is defined to be \[ {\cal T}_{\mu}(\tilde f, X) = \int_X \Delta_{\tilde f}(x) \ d\mu. \] We define the {\em mean rotation number relative to} $X$, $\rho_\mu(f,X)$ to be the coset of ${\cal T}_{\mu}(\tilde f, X)$ mod $\mathbb Z$ thought of as an element of $\mathbb T = \mathbb R/\mathbb Z.$ \end{defn}
The mean rotation number is independent of the choice of lift $\tilde f$ since different lifts give values of ${\cal T}_{\mu}(\tilde f, X)$ differing by an element of $\mathbb Z$.
We define the {\em translation number} of $x$ with respect to $\tilde f$ (see e.g. Definition~2.1 of \cite{FH-ent0}), by \[ \tau_{\tilde f}(x) = \lim_{n \to \infty} \frac{1}{n}\Delta_{\tilde f^n}(x). \] A straightforward application of the Birkhoff ergodic theorem implies that $\tau_{\tilde f}(x)$ is well defined for almost all $x$ and \[ {\cal T}_\mu(\tilde f, X) = \int_{X} \tau_{\tilde f}(x) \ d\mu. \]
If $X$ is also $g$-invariant and $g$ preserves $\mu$ then \[ \Delta_{\tilde f \tilde g}(x) = \Delta_{\tilde g}(x) + \Delta_{\tilde f}(g(x)) \] and hence integrating we obtain \[ {\cal T}_\mu(\tilde f \tilde g, X) = {\cal T}_\mu(\tilde f,X) + {\cal T}_\mu(\tilde g,X) \text{ and } \rho_\mu(fg, X) = \rho_\mu(f,X) + \rho_\mu(g,X), \] i.e., $f \mapsto \rho_\mu( f, X)$ is a homomorphism from the group of $\mu$ preserving homeomorphisms of ${\mathbb A}$ leaving $X$ invariant to $\mathbb T = \mathbb R/\mathbb Z.$ Hence if $h = [g_1,g_2]$ for some $g_i : {\mathbb A} \to {\mathbb A}$ that preserve $\mu$ and $X$ then $\rho_\mu(h, X) = 0.$
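Since $\mathbb T$ is abelian, the assertion about commutators is a one-line consequence of the homomorphism property:
\[
\rho_\mu([g_1,g_2], X) = \rho_\mu(g_1, X) + \rho_\mu(g_2, X) - \rho_\mu(g_1, X) - \rho_\mu(g_2, X) = 0.
\]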
\begin{lemma}\label{int rho} Let $f: {\mathbb A} \to {\mathbb A}$ be an area preserving homeomorphism of a closed annulus isotopic to the identity. Suppose $X \subset {\mathbb A}$ is an $f$-invariant subset with Lebesgue measure $\mu(X) > 0.$ If $\rho_\mu(f,X) = 0$ then $f$ has a fixed point in the interior of ${\mathbb A}$. \end{lemma}
\begin{proof} Replacing $X$ with $X \cap \Int({\mathbb A})$ we may assume $X \subset \Int({\mathbb A}).$ If $\tau_{\tilde f}(x) = 0$ on a positive measure subset of $X$ then the fixed point exists by Proposition~2.4 of \cite{FH-ent0}. Otherwise for some $\epsilon >0$ there is a positive measure $f$-invariant subset $X^+$ on which $\tau_{\tilde f} > \epsilon$ and another $X^-$ on which $\tau_{\tilde f} < -\epsilon$. If $x \in X^+$ is recurrent and not fixed then it is contained in a positively recurring disk which is disjoint from its $f$-image. Similarly if $y \in X^-$ is recurrent and not fixed then it is contained in a negatively recurring disk which is disjoint from its $f$-image. Theorem 2.1 of \cite{F-Poincare} applied to $f$ on the open annulus $\Int({\mathbb A})$ then implies there is a fixed point in $\Int({\mathbb A}).$ \end{proof}
\begin{lemma}\label{lem: periodic}
Suppose that $f: {\mathbb A} \to {\mathbb A}$ is an area preserving homeomorphism of a closed annulus which is isotopic to the identity. Suppose also that $V \subset {\mathbb A}$ is a connected open set which is inessential in ${\mathbb A}$ and such that $f^m(V) = V$ for some $m \ne 0$. Then there is a full measure subset $W$ of $V$ on which $\rho_f$ assumes a constant rational value. \end{lemma}
\begin{proof} Replacing $V$ with $V \cap \Int({\mathbb A})$ we may assume $V \subset \Int({\mathbb A})$. Replacing $f$ with $f^m$ it will suffice to show that $f(V) = V$ implies $\rho_f =0$ on a full measure subset of $V$. Since $V$ does not contain a simple closed curve that separates the components of $\partial {\mathbb A}$, both components of $\partial {\mathbb A}$ belong to the same component $X$ of ${\mathbb A} \setminus V$ by Lemma 3.1 of \cite{FH-ent0}. Note that $D := {\mathbb A} \setminus X$ contains $V$, is contained in $\Int({\mathbb A})$ and is $f$-invariant. Lemma 3.2 of \cite{FH-ent0} implies that $D$ is connected and it is simply connected since its complement $X$ is connected. Hence $D$ is an open disk.
By the Brouwer plane translation theorem $D$ contains a point of $\Fix(f)$. Let $\tilde {\mathbb A}$ be the universal cover of ${\mathbb A}$ and let $\tilde D \subset \tilde {\mathbb A}$ be an open disk which is a lift of $D.$ Since $f$ has a fixed point in $D$ there is a lift $\tilde f: \tilde {\mathbb A} \to \tilde {\mathbb A}$ which has a fixed point in $\tilde D.$ Therefore $\tilde f(\tilde D) = \tilde D.$ If $p_1(\tilde D)$ is bounded then it is obvious that $\rho_f$ is constant and $0$ on $D$. Otherwise, note that by Poincar\'e recurrence and the Birkhoff ergodic theorem there is a full measure subset $\tilde W$ of $\tilde D$ consisting of points which are recurrent and have a well defined translation number. Calculating the translation number of a point $\tilde x \in \tilde W$ on a subsequence of iterates which converges to a point of $\tilde {\mathbb A}$ shows that it must be $0$. Hence $W$, the projection of $\tilde W$ to $D$, has the desired properties. \end{proof}
\section{Pseudo-rotation subgroups of $\Symp^r_\mu(M)$.}
\begin{defn} Suppose $M$ is a compact oriented surface.
A {\em pseudo-rotation subgroup} of $\Symp^r_\mu(M)$ with $r
\ge 1,$ is a subgroup $G$ with the property that every non-trivial
element of $G$ has exactly ${\chi}(M)$ fixed points. An element $f \in
\Symp^r_\mu(M)$ will be called a {\em pseudo-rotation} provided the
cyclic group it generates is a pseudo-rotation group. \end{defn}
Our definition may be slightly non-standard in that we consider finite order elements of $\Symp^r_\mu(M)$ to be pseudo-rotations. We observe that if ${\chi}(M) < 0$ then any pseudo-rotation group is trivial. Since we assume $M$ is oriented we have only the cases that $M$ is ${\mathbb A},\ {\mathbb D}^2,$ or $S^2$, for which any pseudo-rotation must have precisely $0,\ 1,$ or $2$ fixed points respectively. By far the most interesting case is $M = S^2$, since we will show in Lemma~\ref{A-D abelian} that when $M = {\mathbb D}^2$ or ${\mathbb A}$ any pseudo-rotation group is abelian. This is not the case when $M = S^2$ since $SO(3)$ acting on the unit sphere in $\mathbb R^3$ is a pseudo-rotation group.
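To illustrate the $S^2$ case: a non-trivial rotation $R \in SO(3)$ fixes exactly the two antipodal points where its axis meets the sphere, in agreement with $\chi(S^2) = 2$:
\[
Rv = v, \ |v| = 1 \quad \Longrightarrow \quad \Fix(R|_{S^2}) = \{v, -v\}, \qquad \card \Fix(R|_{S^2}) = 2 = \chi(S^2).
\]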
There are three immediate but very useful facts about pseudo-rotations which we summarize in the following Lemma:
\begin{lemma} \label{useful facts} Suppose $M$ is a compact surface and $f \in \Symp^r_\mu(M)$ is a pseudo-rotation. \begin{enumerate} \item Either $f$ has finite order or $\Fix(f) = \Per(f).$ \item If $\Fix(f)$ contains more than ${\chi}(M)$ points then $f = id.$ In particular if $f,g$ are elements of a pseudo-rotation group which agree at more than ${\chi}(M)$ points then $fg^{-1} =id$ so $f = g.$ \item If $M = {\mathbb D}^2$, then the one fixed point of $f$ is in the interior of ${\mathbb D}^2$. \end{enumerate} \end{lemma}
\begin{proof}
Parts (1) and (2) are immediate from the definition of pseudo-rotation and part (3) holds because otherwise the restriction of $f$ to the interior would have recurrent points but no fixed point, contradicting the Brouwer plane translation theorem. \end{proof}
We will make repeated use of these properties.
\begin{prop}\label{rho constant} Suppose $f \in \Symp^r_\mu(M)$ is a pseudo-rotation. Then $U = \Int(M) \setminus \Fix(f)$ is an open annulus and $\rho_f: U_c \to \mathbb T = \mathbb R/\mathbb Z$ is a constant function. Moreover, {if $G \subset \Symp^r_\mu(M)$ is an abelian pseudo-rotation group, with $\Fix(g) = \Fix(f)$ for all $g \in G$, then the assignment $g \mapsto \rho_g$, for $g \in G$, is an injective homomorphism, so, in particular, if $g$ has infinite order then the constant $\rho_g$ is irrational.} \end{prop}
\begin{proof} The only possibilities for $M$ are $S^2, {\mathbb A}$ or
${\mathbb D}^2$. If $M = {\mathbb D}^2$, as remarked above, the unique fixed point of $f$ must lie in the interior of $M$. Hence in all cases $U$ is an open annulus. If $\rho_f$ is not constant on $U_c$ then the Poincar\'e-Birkhoff Theorem (see Theorem (2.2) of \cite{FH-ent0} for example) implies there are periodic points with arbitrarily large period. We may therefore conclude that $\rho_f$ is constant and equal to $\rho_\mu(f)$. { Suppose now that $G$ is an abelian pseudo-rotation subgroup containing $f$ and $\Fix(f) = \Fix(g)$ for all $g \in G$, so $U$ is $G$-invariant. Since $\rho_\mu: G \to \mathbb T$ is a homomorphism, $\rho_\mu(g)$ being rational implies $\rho_\mu(g^n) = 0$ for some $n$. Then Lemma~\ref{int rho} implies the existence of a fixed point in $U$ for $g^{n}$. Hence $g^{n} = id$. We conclude that if $g$ has infinite order then
$\rho_\mu(g)$ is irrational. If $\rho_\mu(f) = \rho_\mu(g)$ then $\rho_\mu(fg^{-1}) = 0$ and the same argument shows $fg^{-1} = id$, so the assignment $f \mapsto \rho_\mu(f)$ for $f$ in the abelian group $G$ is injective.} \end{proof}
We observed above that if $f$ is a non-trivial pseudo-rotation then either $f$ has finite order or $\Per(f) = \Fix(f).$ We now prove the converse to this statement.
\begin{prop}\label{prop: pseudo-r} Suppose $M$ is ${\mathbb A},\ {\mathbb D}^2$ or $S^2$ and $G$ is a subgroup of $\Symp^\infty_\mu(M)$ with the property that every non-trivial element $g$ of $G$ either has finite order or satisfies $\Fix(g) = \Per(g)$. Then $G$ is a pseudo-rotation group. \end{prop}
\begin{proof} {No element $g \in G$ can have positive entropy since that would imply $g$ has points of arbitrarily high period (see Katok \cite{katok:horseshoe}), a contradiction.}
If $g \in G$, then it must have at least ${\chi}(M)$ fixed points because the Lefschetz index of a fixed point of $g$ is at most $1$ (see \cite{simon:index}, for example).
If $g$ is non-trivial and has more than ${\chi}(M)$ fixed points it follows from part (\ref{item:rho non-constant}) of Theorem~\ref{max-annuli} that there is a $g$-invariant annulus $U \subset \Int(M)$ for which the rotation number function is continuous and non-constant. It then follows from Corollary~2.4 of \cite{franks:recurrence} that there are periodic points with arbitrarily high period, contradicting our hypothesis. We conclude that either $g$ has finite order or $\card(\Per(g)) = {\chi}(M).$ \end{proof}
\begin{lemma} \label{three components} Suppose $U \subset M$ is an open annulus and that $C_1$ and $C_2$ are disjoint closed connected sets in $U$ each of which separates the ends of $U$ and such that both $U \setminus C_1$ and $U \setminus C_2$ have two components. Then $M \setminus (C_1 \cup C_2)$ has three components: one with frontier contained in $C_1$, one with frontier contained in $C_2$ and an open annulus with frontier contained in $(C_1 \cup C_2)$. \end{lemma}
\begin{proof} Clearly $M \setminus C_i$ has two components. Let $X_1$ and $X_2$ [resp. $Y_1$ and $Y_2$] be the components of $M \setminus C_1$ [resp. $M \setminus C_2$] labeled so that $C_2 \subset X_2$ and $C_1 \subset Y_1$. Then $X_1$ and $Y_2$ are components of $M \setminus (C_1 \cup C_2)$ with frontiers contained in $C_1$ and $C_2$ respectively and it suffices to show that $V = X_2 \cap Y_1$ is an open annulus. We use the fact that an open connected subset of $S^2$ whose complement has two components is an annulus. {Of course the same is true for $M$ which may be considered as a subsurface of $S^2$.} But $X_1 \cup C_1 = \cl(X_1) \cup C_1$ is connected and similarly so is $Y_2 \cup C_2 = \cl(Y_2) \cup C_2$. Hence $M \setminus V $ has two components, namely $X_1 \cup C_1$ and $Y_2 \cup C_2$. So $V$ is an annulus. \end{proof}
Our next result includes a very special case of Proposition~(\ref{prop: cent}), namely, we show that the centralizer of an infinite order pseudo-rotation is virtually abelian. In fact, for later use, we need to consider a slightly more general setting: not just the centralizer of a single pseudo-rotation, but the centralizer of an abelian pseudo-rotation group all of whose non-trivial elements have the same fixed point set.
\begin{lemma}\label{2periodic} Suppose $A$ is an abelian pseudo-rotation subgroup of $\Symp^\infty_\mu(M)$ containing elements of arbitrarily large order. Suppose also that there is a set $F$ with ${\chi}(M)$ points such that $\Fix(f) = F$ for all non-trivial $f \in A.$ Then the subgroup ${\cal C}_0$ of the centralizer, $\Cent^\infty_\mu(A),$ of $A$
in $\Symp^\infty_\mu(M)$ consisting of those elements which pointwise fix $F$, is abelian and has index at most $2$ in $\Cent^\infty_\mu(A).$ \end{lemma}
\begin{proof} Elements of $A$ must have entropy $0$ since positive entropy implies the existence of infinitely many periodic points by a result of Katok, \cite{katok:horseshoe}. Let $U$ be the open annulus $M \setminus (F \cup \partial M)$. The group $\Cent^\infty_\mu(A)$ preserves $F \cup \partial M$ and hence $U$. The subgroup ${\cal C}_0 \subset \Cent^\infty_\mu(A)$, whose elements fix the ends of $U$, has index at most $2$.
We claim that the elements of ${\cal C}_0$ all have entropy $0$. Clearly we need only consider elements of infinite order since all finite order homeomorphisms have entropy $0$. Hence the claim follows from Corollary~(\ref{dichotomy}) if there is an element of infinite order in $A$. Otherwise if $ {\cal C}_0$ contains an element with positive entropy then it contains an element $g$ with a hyperbolic fixed point $p \in U$. Each point in the $A$-orbit of $p$ is in $\Fix(g)$ and has the same set of eigenvalues for the derivative $Dg$. It follows that the $A$-orbit of $p$ has finite cardinality, say $m$, and hence that $p \in \Fix(f^m)$ for all $f \in A$. Part (2) of Lemma~\ref{useful facts} then implies that $f^m = id$ for all $f \in A$, {if $m > 2$.} This contradicts the fact that $A$ contains elements of arbitrarily high order. This completes the proof that all elements of ${\cal C}_0$ have entropy zero.
To prove that ${\cal C}_0$ is abelian, we will show that each commutator $h$ of two elements in ${\cal C}_0$ is the identity by assuming that $h$ is non-trivial and arguing to a contradiction. Since $h$ is a commutator the map $h_c: U_c \to U_c$ has mean rotation number $\rho_\mu(h_c) = 0$ and hence, by Lemma~\ref{int rho}, $h$ has a fixed point in $U$. Therefore $\Fix(h)$ contains more than ${\chi}(M)$ points. If $h$ has finite order then in a suitable averaged metric it is an isometry of $M$. But then each fixed point must have Lefschetz number $+1$ and hence by the Lefschetz theorem $h$ has ${\chi}(M)$ fixed points, a contradiction. We conclude therefore that $h$ has infinite order and hence satisfies the hypothesis of Theorem~(\ref{max-annuli}). By this theorem there exists an element $V \in {\cal A}_h$ and it must be the case that $V \subset U$ since $V$ cannot contain a point of $\Fix(h) \supset F$. We will show this leads to a contradiction, either by showing that $h$ has a fixed point in $V$ or by showing that some non-trivial $f \in A$ has a fixed point in $U$. We may then conclude $h = id$ and hence that ${\cal C}_0$ is abelian.
Suppose first that $V$ is inessential in $U$. Elements of $A$ commute with $h$ and hence permute the elements of ${\cal A}_h$ by Corollary~\ref{Z centralizer}. Since there can be only finitely many elements of ${\cal A}_h$ of any fixed area, the $A$-orbit of $V$ is finite. Hence there is $m$ such that $f^m(V) = V$ for all $f \in A$. The union of $V$ with the component of its complement which is disjoint from $F$ is an open disk $D \subset U$ satisfying $f^m(D) = D$ for all $f \in A.$ It follows from the Brouwer plane translation theorem that $f^m$ has a fixed point in $D.$ If the order of $f$ is greater than $m$ then $f^m$ is a non-trivial element of $A$ with fixed points not in $F$, contradicting the assumption that $\Fix(f) = F$ for all non-trivial $f \in A$. This rules out the possibility that $V$ is inessential in $U$.
We are now reduced to the case that $V$ is essential in $U$. It follows from part (\ref{item: cA periodic}) of Corollary~\ref{Z centralizer} that each element of $A$ preserves $V$ or maps it to a disjoint element of ${\cal A}_h$. Since the annulus $U$ is $A$-invariant and $V$ is essential in $U$ it follows that $V$ is $A$-invariant, as the alternative would contradict the fact that $A$ preserves area. We want to replace $V$ with a slightly smaller essential $V_0 \subset V$ which has the property that its frontier (in $M$) lies in $V$ and has measure $0$. To do this we observe that $\rho_h$ is non-constant on $V$ and hence has uncountably many level sets. Since at most countably many disjoint sets can have positive measure, there must be two level sets $C_1, C_2$ which have measure $0$. Let $V_0$ be the essential open annulus in $V$ whose frontier lies in $C_1 \cup C_2$ (see Lemma~\ref{three components}). Then $\mu(V_0) = \mu(\cl_{U_c}(V_0))$ since $\cl_{U_c}(V_0) \setminus V_0 \subset C_1 \cup C_2.$ It follows from part (\ref{item: inv level}) of Corollary~\ref{Z centralizer} that each $C_i$ is $A$-invariant and hence $V_0$ is also.
As a first subcase, suppose that $ \cl_{U_c}(V_0)$ is $g$-invariant for each $g \in {\cal C}_0$. Then $h$ is a commutator of elements that preserve $\cl_{U_c}(V_0)$, so \[ \rho_\mu(h, \cl_{U_c}(V_0)) = 0, \] (measured in $U_c$). Since $V_0$ differs from $\cl_{U_c}(V_0)$ in a set of measure zero, $\rho_\mu(h, V_0) = 0$ also. The translation number with respect to a lift $\tilde h$ of a point $x \in V_0$ can be measured in either $U_c$ or $(V_0)_c$ giving $\tau_{\tilde h}(x)$
and $\tau_{\tilde h|_{\tilde V_0}}(x)$. But $x \in V_0$ lies on a compact $h$-invariant level set $C_0$ which is in the interior of both $U_c$
and $(V_0)_c$. This implies $\tau_{\tilde h}(x) = \tau_{\tilde h|_{\tilde V_0}}(x)$. Hence we conclude that $\rho_\mu(h) = 0,$ measured in $(V_0)_c$ and by Lemma~\ref{int rho}, $h$ has a fixed point in $V_0$, a contradiction.
The last subcase is that there exists $g \in {\cal C}_0$ such that $g(V_0) \not \subset \cl_{M}(V_0) = \cl_{U_c}(V_0)$. Choose a component $W$ of $g(V_0) \cap (U \setminus \cl_{U_c}(V_0))$. Since $g$ leaves $U$ invariant and preserves area and since $g^{-1}(W) \cap W = \emptyset$, $W$ cannot contain a closed curve $\alpha$ that is essential in $U$. Thus $W$ is inessential in $U$. As we noted above $V_0$ is $A$-invariant. Since $g \in {\cal C}_0$, it commutes with $A$ so $g(V_0)$ is also $A$-invariant. It follows that $A$ permutes the components of $g(V_0) \cap (U \setminus \cl_{U_c}( V_0))$. In particular for some $m > 0$ we have $f^m(W) =W$ for all $f \in A$. Letting $D$ be the union of $W$ with any components of its complement which do not contain
ends of $U$ we conclude that $D$ is an open disk and $f^m(D) = D$ for all $f \in A$. By the Brouwer plane translation theorem there is a point of $\Fix(f^m)$ in $D$. Since $f^m$ is a pseudo-rotation and has more than ${\chi}(M)$ fixed points, $f^m = id$. Since this holds for any $f \in A$ we have contradicted the hypothesis that $A$ contains elements of arbitrarily high order. \end{proof}
\begin{lemma}\label{A-D abelian} Suppose that $G$ is a pseudo-rotation subgroup of $\Symp^r_\mu(M)$ where $M = {\mathbb A}$ or ${\mathbb D}^2$ and $r \ge 2$. Then $G$ is abelian and $\Fix(g)$ is the same for all non-trivial $g \in G.$ \end{lemma}
\begin{proof} If $M ={\mathbb A}$ then $\Fix(g) = \emptyset$ for all non-trivial $ g \in G$. Since $\rho_\mu([f,g]) = 0$ for each $f,g \in G$, Lemma~\ref{int rho} implies that each $[f,g]$ has a fixed point in the interior of ${\mathbb A}$ and hence is the identity. Thus any two elements of $G$ commute and we are done.
We assume for the remainder of the proof that
$M = {\mathbb D}^2$. For each $f \in G$ we consider $\hat f$, the restriction of $f$ to $\partial {\mathbb D}^2.$ Let $\hat G$ be the image in $\Diff^r(S^1)$ of $G$ under the homomorphism $f \mapsto \hat f$.
Lemma~\ref{useful facts} (3) implies that $f$ fixes a point in the interior of ${\mathbb D}^2$. If $f ^k$ fixes a point in $\partial {\mathbb D}^2$ for some $k \ge 1$ then $f^k$ has more than ${\chi}(M)$ fixed points and so is the identity. We conclude that if $\hat f$ has a point of period $k$ then $f$ is periodic of period $k$. In particular, if $\hat f:S^1 \to S^1$ has a fixed point then $f = id.$ {This proves that $\hat G$ acts freely on $S^1$. It also proves that the restriction homomorphism $f \mapsto \hat f$ is an isomorphism $G \to \hat G$. Since $\hat G$ acts freely on $S^1$ it follows from H\"older's Theorem (see, e.g., Theorem 2.3 of \cite{Farb-Shalen}) that $\hat G$ (and hence $G$) is abelian.}
Suppose $f,g \in G$ are non-trivial and let $\Fix(f) = \{q\}$. Since $f$ and $g$ commute, $g$ preserves $\{q\}$ and so fixes $q$. As $\Fix(g)$ is also a single point, $\Fix(g) = \{q\} = \Fix(f)$, which completes the proof. \end{proof}
If $f \in \Diff^r(S^2)$ and $p \in \Fix(f)$ we can compactify $S^2 \setminus \{p\}$ by blowing up $p$, i.e. adding a boundary component on which we extend $f$ by letting it be the projectivization of $Df_p$. More precisely we define $\hat f_p: S^1 \to S^1$ by considering the boundary as the unit circle in $\mathbb R^2$ and letting \[
\hat f_p(v) = \frac{Df_p(v)}{|Df_p(v)|}. \]
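For example, if $f$ is a rigid rotation of $S^2$ through angle $\theta$ about an axis through $p$, then in suitable coordinates $Df_p$ is the rotation matrix \[ R_\theta = \begin{pmatrix} \cos\theta & -\sin\theta\\ \sin\theta & \cos\theta \end{pmatrix}, \] and since $R_\theta$ preserves the Euclidean norm, $\hat f_p(v) = R_\theta v$; that is, the blown-up boundary circle inherits the rigid rotation by $\theta$.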
If $\Fix(f)$ contains two points then we may blow up these points and obtain the annular compactification ${\mathbb A}$ of $S^2 \setminus \Fix(f)$.
\begin{cor}\label{fp-stabilizer} Suppose that $G$ is a pseudo-rotation subgroup of $\Symp^\infty_\mu(S^2)$ and that $G_p$ is the stabilizer of a point $p \in S^2$. Then $G_p$ is abelian and there exists $q \in S^2$ such that $\Fix(g) = \{p,q\}$ for all non-trivial $g \in G_p$. \end{cor}
\begin{proof} Suppose that $f \in G_p$. Blowing up the point $p$ and extending $f$ we obtain $\bar f: {\mathbb D}^2 \to {\mathbb D}^2.$ This construction is functorial in the sense that if $h = fg$ then $\bar h = \bar f \bar g.$ Hence the assignment $f \mapsto \bar f$ is a homomorphism from $G_p$ to $\Symp^\infty_\mu({\mathbb D}^2)$. This homomorphism is injective by part (2) of Lemma~\ref{useful facts}. The result now follows from Lemma~\ref{A-D abelian}. \end{proof}
It is not quite the case that for abelian pseudo-rotation subgroups of $\Symp^\infty_\mu(S^2)$ the fixed point set of a non-trivial element is independent of the element. There is essentially one exception, namely the group generated by rotations of the sphere through angle $\pi$ about two perpendicular axes.
\begin{lemma}\label{lem: abelian rot} If $G$ is an abelian pseudo-rotation subgroup of $\Symp^\infty_\mu(S^2)$ then either $\Fix(g)$ is the same for every non-trivial element $g$ of $G$ or $G$ is isomorphic to $\mathbb Z/2\mathbb Z \oplus \mathbb Z/2\mathbb Z$. In the latter case the fixed point sets of non-trivial elements of $G$ are pairwise disjoint and hence their union contains six points. \end{lemma}
\begin{proof} Let $g,h$ be distinct non-trivial elements of $G$ and suppose $\Fix(g) \ne \Fix(h)$. Since $G$ is abelian $g(\Fix(h)) = \Fix(h)$. {Since each of $\Fix(g)$ and $\Fix(h)$ contains exactly two points, if they shared a point then $g$ would fix one point of $\Fix(h)$ and hence both, forcing $\Fix(g) = \Fix(h)$; so these sets are disjoint.} Hence $g$ switches the two points of $\Fix(h)$. In this case $g$ has two fixed points and a period $2$ orbit, so $g^2$ has four fixed points and must be the identity. We conclude that $g$ has order two. Switching the roles of $g$ and $h$ we observe that $h$ has order two.
Let $G_0$ be the abelian group generated by $g$ and $h$ so $G_0 \cong \mathbb Z/2\mathbb Z \oplus \mathbb Z/2\mathbb Z$. The possible actions of an element of $G_0$ on $\Fix(g) \cup \Fix(h)$ are: fix all four points, switch the points of one pair and fix the other, or switch both pairs. If $f \in G$ then $f$ preserves $\Fix(g)$ and $\Fix(h)$ so its action on $\Fix(g) \cup \Fix(h)$ must be the same as one of the elements of $G_0$. Since $f$ agrees with an element of $G_0$ at four points it must, in fact, be an element of $G_0$. Hence any abelian pseudo-rotation group containing non-trivial elements whose fixed point sets do not coincide must be isomorphic to $\mathbb Z/2\mathbb Z \oplus \mathbb Z/2\mathbb Z$.
Suppose now $G \cong \mathbb Z/2\mathbb Z \oplus \mathbb Z/2\mathbb Z$ is a pseudo-rotation subgroup of $\Symp^\infty_\mu(S^2).$ The three non-trivial elements of $G$ must have pairwise disjoint fixed point sets. To see this note that otherwise there would be two distinct elements, say, $g$ and $h$ with a common fixed point. As observed above this implies $\Fix(g) = \Fix(h)$. But any two non-trivial elements of $G$ generate it and hence $\Fix(f)$ is the same for all elements $f \in G.$ Then by Proposition~\ref{rho constant} there is an injective homomorphism from $G$ to $\mathbb T$. This is a contradiction since $\mathbb T$ contains a unique element of order two.
\end{proof}
\begin{remark}\label{rmk: 6pts} Suppose in the previous lemma $G \cong \mathbb Z/2\mathbb Z \oplus \mathbb Z/2\mathbb Z$. Let $F$ be the union of three sets $F_i = \{a_i, b_i\},\ 1\le i \le 3$ where $F_i = \Fix(h_i)$ and $h_i$ is one of the non-trivial elements of $G$. The elements of $G$ preserve each of the sets $F_i$ and if $h$ is a non-trivial element of $G$ it must fix the points of one of the $F_i$'s while switching the points of the other two (since $h$ cannot fix four points). \end{remark}
Suppose $A$ is a subgroup of the circle $\mathbb T = \mathbb R/\mathbb Z$ and ${\cal G} = A \rtimes_\phi (\mathbb Z/2\mathbb Z)$ is the semidirect product determined by the homomorphism $\phi: \mathbb Z/2\mathbb Z \to \Aut(A)$ given by $\phi(1) = i \in \Aut(A)$ where $i: A \to A$ is the involution
$i(h) = -h$. Then ${\cal G}$ is called a {\em generalized dihedral group.}
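Concretely, writing elements of ${\cal G}$ as pairs $(a, \varepsilon)$ with $a \in A$ and $\varepsilon \in \mathbb Z/2\mathbb Z$, the group law is \[ (a, \varepsilon)(b, \delta) = (a + (-1)^{\varepsilon} b,\ \varepsilon + \delta). \] For instance, if $A = \mathbb Z/n\mathbb Z \subset \mathbb T$ then ${\cal G}$ is the usual dihedral group of order $2n$, while if $A = \mathbb T$ then ${\cal G}$ is isomorphic to $O(2)$, the full isometry group of the circle.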
\begin{lemma}\label{lem: F finite} Suppose $G$ is a pseudo-rotation subgroup of $\Symp^\infty_\mu(S^2)$ such that either $G$ leaves invariant a non-empty finite set $F \subset S^2$, or $G$ has a non-trivial normal solvable subgroup. {Then either \begin{enumerate} \item There is a $G$-invariant set $F_0$ containing two points in which case $G$ is isomorphic to a subgroup of the generalized dihedral group $A \rtimes_\phi (\mathbb Z/2\mathbb Z)$ for some subgroup $A$ of the circle $\mathbb T,$ or \item Every $G$-invariant set contains more than two points in which case $G$ is finite. \end{enumerate} Moreover, if the set $F_0$ in (1) is pointwise fixed then $G$ is isomorphic to a subgroup of $\mathbb T.$ } \end{lemma}
\begin{proof} {Suppose first $G$ has a non-trivial normal solvable subgroup $N$. We will reduce this case to the case that there is a finite $G$-invariant set. Let $H$ be the last non-trivial term of the derived series of $N$, so $H$ is an abelian subgroup. Since $H$ is characteristic in $N$ it is normal in $G$.}
By Lemma~\ref{lem: abelian rot} the set \[ F = \bigcup_{h \in H, h\ne id} \Fix(h) \] is finite. If $g \in G$ and $h \in H$ then $ghg^{-1} \in H$. Since $\Fix(ghg^{-1}) = g(\Fix(h))$, it follows that $g(F) = F.$ Hence we need only prove the conclusion of our result under the assumption that $G$ leaves invariant a finite set $F$.
Let $A$ be the finite index subgroup of $G$ which pointwise fixes $F$. {By Corollary~\ref{fp-stabilizer}, $A$ is abelian. If $F$ contains more than two points then $A$ must be trivial, and $G \cong G/A$ is finite since $A$ has finite index. If $F$ is a single point then by Corollary~\ref{fp-stabilizer} there is a set $F'$, consisting of $F$ and one other point, whose points are common fixed points of every element of $A$. As above the set $F'$ is $G$-invariant and we replace $F$ by $F'$; so we may assume $F$ contains two points.}
In this case $A$ has index at most two and $\Fix(h) = \Fix(h')$ for all $h,h' \in A$. {From Proposition~\ref{rho constant} it follows that $A$ is isomorphic to a subgroup of $\mathbb T$.}
{ If $h \in G \setminus A$ then $h$ interchanges the two points of $F$ and reverses the orientation of $H_1(U)$ where $U = S^2 \setminus F.$ It follows that $\rho_\mu (h^{-1} g h) = -\rho_\mu (g)$ for all $g \in A.$ Also $h$ has two fixed points and $h^2$ fixes the points of $F$ so it has four fixed points and we conclude that $h^2 = id$. Hence the map $\phi: \mathbb Z/2\mathbb Z \to G$ given by $\phi(1) = h$ is a homomorphism so $G \cong A \rtimes_\phi \mathbb Z/2\mathbb Z$. }
\end{proof}
\begin{prop}\label{A pseudo-r} If $G$ is a subgroup of $\Symp^\infty_\mu(M)$ which has an infinite normal abelian subgroup $A$ which is a pseudo-rotation group then $G$ {has an abelian subgroup of index at most two.} \end{prop}
\begin{proof} Since $A$ is non-trivial, $M$ is ${\mathbb A}, {\mathbb D}^2,$ or $S^2$.
Since $A$ is infinite, {Lemma~\ref{A-D abelian} and Lemma~\ref{lem: abelian rot}} imply there is a set $F$ containing ${\chi}(M) \le 2$ points such that $F = \Fix(h)$ for all $h \in A.$ Let $U = M \setminus F$. Observe that in all cases $U$ is an annulus. The set $F$ must be invariant under $G$ since $F = \Fix(ghg^{-1}) = g(\Fix(h)) = g(F)$ for every element $g$ in $G$. The subgroup $G_0$ of $G$ that pointwise fixes $F$ has index at most two. Also elements of $G_0$ leave $U$ invariant and their restrictions to $U$ are isotopic to the identity.
{Since the set $\Fix(h)$ is the same for all $h \in A$, by Proposition~\ref{rho constant} the homomorphism} $\rho_\mu: A \to \mathbb T$ given by $h \mapsto \rho_\mu(h)$ is injective, where $\rho_\mu(h)$ is the mean rotation number on $U_c$. Since conjugating $h$ by an element $g \in G_0$ does not change its mean rotation number we conclude that $h = ghg^{-1}$ for all $h \in A.$ This means that $G_0$ is contained in $\Cent^\infty_\mu(A,G),$ the centralizer of $A$ in $G$.
{If $A$ contains elements of arbitrarily large order then it follows from Lemma~\ref{2periodic} that $G_0$ is abelian and we are done.} So we may assume the order of elements in $A$ is bounded. Since the order of elements of $A$ is bounded there are only finitely many possible values for $\rho_\mu(f)$ with $f \in A.$ Hence, since the assignment $f \mapsto \rho_\mu(f)$ is injective, we may conclude $A$ is finite in contradiction to our hypotheses.
\end{proof}
\begin{ex} { Let $A$ be the subgroup of all rational rotations around the $z$-axis and let $G$ be the $\mathbb Z/2\mathbb Z$ extension of $A$ obtained by adding an involution $g$ which rotates through angle $\pi$ around the $x$-axis and reverses the orientation of the $z$-axis. More precisely let the homomorphism $\phi: \mathbb Z/2\mathbb Z \to \Aut(A)$ be given by $\phi(1) = i_g$ where $i_g(h) = g^{-1}hg = h^{-1}$. Every element of $A$ has finite order while $A$ itself has infinite order. Moreover, $G \cong A \rtimes_\phi (\mathbb Z/2\mathbb Z)$ is not abelian even though it has an index two abelian subgroup.} \end{ex}
{ We are now able to classify up to isomorphism all pseudo-rotation subgroups of $\Symp^\infty_\mu(S^2)$ which have a non-trivial normal solvable subgroup. We denote by $\phi$ the homomorphism $\phi: \mathbb Z/2\mathbb Z \to \Aut(\mathbb T)$ defined by $\phi(1) = i$, where $i$ is the automorphism of $\mathbb T$ given by $i(t) = -t$. }
\begin{prop}\label{char pseudo-r} If $G$ is a pseudo-rotation subgroup of $\Symp^\infty_\mu(S^2)$ which has a normal solvable subgroup then $G$ is isomorphic to either a subgroup of the generalized dihedral group $\mathbb T \rtimes_\phi (\mathbb Z/2\mathbb Z)$ or a subgroup of the group ${\cal O}$ of orientation preserving symmetries of the regular octahedron (or equivalently the orientation preserving symmetries of the cube). \end{prop}
\begin{proof} Suppose first that there is a $G$-orbit $F_0$ containing two points. Then by Proposition~\ref{lem: F finite}, $G$ is isomorphic to a subgroup of the generalized dihedral group $\mathbb T \rtimes_\phi (\mathbb Z/2\mathbb Z)$ and we are done. If this is not the case, then by the same proposition $G$ is finite and every $G$-orbit contains at least three points.
Let $A$ be the last non-trivial term of the derived series of the normal solvable subgroup of $G$, so $A$ is abelian. Since $A$ is a characteristic subgroup of that normal solvable subgroup it is normal in $G$. Hence for any $g \in G, \ g(\Fix(A)) = \Fix(g A g^{-1}) = \Fix(A)$, and we observe $\Fix(A)$ cannot contain only two points since that is the case we already considered. Therefore not all elements of $A$ have the same set of fixed points and we can apply Lemma~\ref{lem: abelian rot} to conclude the group $A$ is isomorphic to $\mathbb Z/2\mathbb Z \oplus \mathbb Z/2\mathbb Z$. {We observed in Remark~\ref{rmk: 6pts} that there is an $A$-invariant set $F$ which is the union of three sets $F_i = \{a_i, b_i\},\ 1\le i \le 3$ where $F_i = \Fix(h_i)$ and $h_i$ is one of the non-trivial elements of $A$. We also noted there that the elements of $A$ preserve the pairs $F_i$, fixing the points of one of them and switching the points in each of the other two. Since $A$ is normal in $G$, if $h_j = g^{-1}h_ig \in A$ then $F_j = \Fix(h_j) = g^{-1}(\Fix(h_i)) = g^{-1}(F_i)$, so the elements of $G$ must permute the three pairs $F_i.$}
We define a homomorphism $\theta: G \to O(3)$ as follows. {Given $g \in G$ define a matrix $P = P_g$ by $P_{ij} = 1$ if $g(a_j) = a_i$ and $g(b_j) = b_i$, $P_{ij} = -1$ if $g(a_j) = b_i$ and $g(b_j) = a_i$,} and $P_{ij} = 0$ otherwise. Each row and column has one non-zero entry which is $\pm 1$ so $P_g \in O(3).$ It is straightforward to see that $\theta(g) = P_g$ defines a homomorphism to $O(3)$. It is also clear that it is injective since $P_g = I$ implies that $g$ fixes the six points of $F$. Note that if $h_i$ is a non-trivial element of $A$, then $\theta(h_i)$ must be diagonal since $h_i(F_j) = F_j$, and $\theta(h_i)$ must have two entries equal to $-1$ since $h_i$ switches the points of two of the $F_j$'s. It follows that $\theta(A)$ is precisely the group of diagonal matrices in $O(3)$ with an even number of entries equal to $-1$.
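Explicitly, the image of $A$ consists of the four diagonal matrices \[ \theta(A) = \{ I,\ diag(1, -1, -1),\ diag(-1, 1, -1),\ diag(-1, -1, 1) \}; \] for example, the element $h_1$ with $\Fix(h_1) = F_1$ fixes $a_1$ and $b_1$ and switches the points of each of $F_2$ and $F_3$, giving $\theta(h_1) = diag(1, -1, -1)$.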
We need to show that $\theta(G)$ lies in $SO(3)$, i.e., all its elements have determinant $1$. Clearly $\theta(A) \subset SO(3)$. We denote the symmetric group on three elements by ${\cal S}_3$ and think of it as the permutation matrices in $O(3)$. We define a homomorphism $\bar \theta : G \to {\cal S}_3$ by $\bar \theta(g) = Q \in O(3)$ where
$Q_{ij} = |P_{ij}|$ and $P= \theta(g).$ We observe that $A$ is in the kernel of $\bar \theta.$ In fact $A$ equals the kernel of $\bar \theta$. To see this note that if $\bar \theta(g) = I$ then $P_g = \theta(g)$ is a diagonal matrix and hence $\ker(\bar \theta)$ is abelian. If $g \in \ker(\bar \theta) \setminus A$ then the abelian group generated by $g$ and $A$ is not $\mathbb Z/2\mathbb Z \oplus \mathbb Z/2\mathbb Z$ contradicting Lemma~\ref{lem: abelian rot}. We conclude $A = \ker(\bar \theta)$.
Elements of ${\cal S}_3$ have order one, two or three. In case $\bar \theta(g)$ has order one or three, $g^3$ is in $A$ and hence $\det(\theta(g)) = \det(\theta(g))^3 = \det(\theta(g^3))= 1$ so $\theta(g) \in SO(3)$.
If $\bar \theta(g)$ has order two and $P_g = \theta(g)$ has determinant $-1$ we argue to a contradiction. In this case, without loss of generality we may assume \[ P_g = \begin{pmatrix} a & 0 & 0\\ 0 & 0 & b\\ 0 & c & 0\\ \end{pmatrix} \] where $\det(P_g) = -abc = -1.$ Hence $g^2 = id$.
If $a =1$ then $bc = 1$ and $P_g^2 = I.$ We consider the $g$-invariant annulus $U = S^2 \setminus F_1$ and observe that in this annulus $\rho_\mu(g) = 1/2.$ But if $h_1 \in A$ is the element with $\theta(h_1) = diag(1, -1, -1)$ then $h_1(U) = U$ and $\rho_\mu(h_1) = 1/2.$ Since $g$ and $h_1$ both fix $F_1$ pointwise they commute by Corollary~\ref{fp-stabilizer}, and by Proposition~\ref{rho constant} $\rho_\mu$ is defined and injective on the subgroup generated by $g$ and $h_1$. We conclude $g = h_1$, a contradiction. Finally suppose $a = -1$ and $bc=-1$. If $h_2 \in A$ is the element with $\theta(h_2) = diag(-1, 1, -1)$ then \[ \theta(h_2 g) = \begin{pmatrix} 1 & 0 & 0\\ 0 & 0 & b\\ 0 & -c & 0\\ \end{pmatrix}, \] so by the previous argument $\det(\theta(h_2 g)) = 1$. Since $\det(\theta(h_2)) = 1$ this contradicts the assumption that $\det(\theta(g)) = -1$. This completes the proof that $\theta(G) \subset SO(3)$.
Since $\theta(G)$ preserves the set $\{(\pm 1, 0, 0), (0, \pm 1, 0), (0, 0, \pm 1)\}$ of vertices of the regular octahedron, it follows that $\theta(G)$ is a subgroup of the group ${\cal O}$ of orientation preserving symmetries of the octahedron.
\end{proof}
\section{Proof of the Main Theorem}\label{main section}
In this section we prove
\noindent {\bf Theorem~(\ref{main thm})} {\em Suppose $M$ is a compact oriented surface of genus $0$ and $G$ is a subgroup
of $\Symp^\infty_\mu(M)$ which has full support of finite type, e.g. a subgroup of $\Symp^\omega_\mu(M)$. Suppose further that $G$ has an infinite normal solvable subgroup. Then $G$ is virtually abelian. }
Throughout this section $M$ will denote a compact oriented surface of genus $0$, perhaps with boundary. We begin with a lemma that allows us to replace the hypothesis that $G$ has an infinite normal solvable subgroup with the hypothesis that $G$ has an infinite normal abelian subgroup.
\begin{lemma}\label{solv to abel} If $G$ is a subgroup of $\Symp^\infty_\mu(M)$, which contains an infinite normal solvable subgroup, then $G$ contains a finite index subgroup which has an infinite normal abelian subgroup. \end{lemma}
\begin{proof} Let $N$ be the infinite normal solvable subgroup. The proof is by induction on the length $k$ of the derived series of $N$. If $k=0$ then $N$ is abelian and the result holds. Assume the result holds for $k \le k_0$ for some $k_0 \ge 0$ and suppose the length of the derived series for $N$ is $k = k_0 + 1$. Let $A$ be the abelian group which is the last non-trivial term in the derived series of $N$. The group $A$ is invariant under any automorphism of $N$ and hence is normal in $G$. If $A$ is infinite we are done.
We may therefore assume $A$ is finite and hence that in a suitable metric $A$ is a group of isometries. No non-trivial orientation preserving isometry of $M$ can have infinitely many fixed points so we let $F$ be the finite set of fixed points of non-trivial elements of $A$. Since $A$ is normal $g(F) = F$ for any $g \in G.$ Let $G_0$ be the {normal finite index subgroup} of $G$ which pointwise fixes $F.$
{Let $N_1 = G_0 \cap N$ so $N_1$ is an infinite solvable normal subgroup of $G_0$} and the $k$-th term $A_1$ in the derived series of $N_1$ is contained in $A \cap G_0$. We claim that $A_1$ is trivial. If $F$ contains more than ${\chi}(M)$ points then this follows from the fact that a non-trivial isometry isotopic to the identity fixes exactly ${\chi}(M)$ points since each fixed point must have Lefschetz index $+1$.
Otherwise $F$ contains ${\chi}(M)$ points and we may blow up these points to form an annulus ${\mathbb A}$ on which $N_1$ acts preserving each boundary component. Since each element of $A_1$ is a commutator of elements in $N_1$, it acts on ${\mathbb A}$ with mean rotation number zero. Each element of $A_1$ therefore has a fixed point in the interior of ${\mathbb A}$ by Lemma~\ref{int rho}. {However, a finite order isometry of the annulus which is isotopic to the identity and has an interior fixed point must be the identity. This is because every fixed point must have Lefschetz index $+1$ and the Euler characteristic of ${\mathbb A}$ is $0.$} This shows that $A_1$ acts trivially on ${\mathbb A}$ and hence on $M$. This verifies the claim {that $A_1$ is trivial} and hence the length of the derived series for $N_1$ is at most $k_0$. The inductive hypothesis completes the proof. \end{proof}
The next lemma states the condition we use to prove that $G$ is virtually abelian.
\begin{lemma} \label{ending lemma} Suppose $G_0$ is a subgroup of $\Symp^\infty_\mu(M)$ which has full support of finite type. Suppose further that there is an infinite family of disjoint $G_0$-invariant open annuli. Then $G_0$ is abelian. \end{lemma}
\proof We assume that $[G_0,G_0]$ contains a non-trivial element $h$ and argue to a contradiction. For each $G_0$-invariant open annulus
$V$, let $V_c$ be its annular compactification and let $h_c :V_c \to V_c$ be the extension of $h|_V$ {(see Definition~\ref{defn: annular comp} or Definition~2.7 of \cite{FH-ent0} for details)}. Since $h$ is a commutator of elements of $G_0$ and since $G_0$ extends to an action on $V_c$, $h_c$ is a commutator and so has mean rotation number zero. Lemma~\ref{int rho} therefore implies that $\Fix(h) \cap V \ne \emptyset$.
By assumption, $\Fix(h)$ does not contain any open set and so
$\Fix(h|_V)$ is a proper subset of $V$. We claim that $\fr(V) \cup
\Fix(h|_V)$ is not connected. To see this, let $S$ be the end compactification of $V$ obtained by adding two points, one for each end of $V$ and let $h_S : S \to S$ be the extension of $h|_V$ that fixes the two added points. If $\fr(V) \cup \Fix(h|_V)$ is connected then each component $W$ of $S \setminus \Fix(h|_S)$ is an open disk. A result of Brown and Kister \cite{brnkist} asserts $W$ is $h_S$-invariant. However, then the Brouwer plane translation theorem would imply that $h$ has a fixed point in $W$. This contradiction proves the claim.
By passing to a subfamily of the $G_0$-invariant annuli, we may assume that the following is either true for all $V$ or is false for all $V$: some component of $\Fix(h)$ intersects both components of the frontier of $V$. In the former case, the interior of each $V$ contains a component of $\Fix(h)$. In the latter case, no component of $\Fix(h)$ intersects more than two of the annuli in our infinite family. In both cases, $\Fix(h)$ has infinitely many components in contradiction to the assumptions that $h$ is non-trivial and that $G_0$ has full support of finite type. \endproof
We need an elementary topology result.
\begin{lemma} \label{breve} Suppose that $C\subset \Int(M)$ is a closed connected set which is nowhere dense and has two complementary components $U_1$ and $U_2$. Then $C' = \fr(U_1) \cap\ \fr(U_2)$ is a closed connected set with two complementary components, {$U_1'$ and $U_2'$}, each of which has frontier $C'$ and is equal to the interior of its closure. Moreover $U_i$ is dense in $U_i'$. \end{lemma}
\proof To see that $C'$ separates $U_1$ and $U_2$, suppose that $\sigma$ is a closed path in $M \setminus C'$ with initial endpoint in $U_1$ and terminal endpoint in $U_2$. Then $\sigma \cap \fr(U_2) \ne \emptyset$ and we let $\sigma_0$ be the shortest initial segment of $\sigma$ terminating at a point $x \in \sigma \cap \fr(U_2)$. Each $y\in \sigma_0 \setminus \{x\}$ has a neighborhood that is disjoint from $U_2$. Since $C$ has empty interior, every neighborhood of $y$ must intersect $U_1$. It follows that every neighborhood of $x$ must intersect $U_1$ and hence that $x \in C'$. This contradiction proves that $C'$ separates $U_1$ and $U_2$. Since the union of
$U_1$ and $U_2$ is dense in $M$, the components $V_1$ and $V_2$ of $M \setminus C'$ that contain them are the only components of $M \setminus C'$; thus $V_i = U_i'$. Every neighborhood of a point in $C'$ intersects both $U_1$ and $U_2$ and so intersects both $V_1$ and $V_2$. Thus $C' \subset \fr(V_1) \cap \fr(V_2)$. Since $\fr(V_1), \fr(V_2) \subset C'$ we have that $C' \subset \fr(V_1) \cap \fr(V_2) \subset \fr(V_i) \subset C'$ and hence $C' = \fr(V_1) = \fr(V_2)$, which implies that $V_1$ and $V_2$ are the interiors of their closures. Since $C$ is nowhere dense in $M$ it follows that $U_1 \cup U_2$ is dense in $M$ and hence that $U_i$ is dense in $U_i'$. \endproof
\begin{notn} If $C$ and $C'$ are as in Lemma~\ref{breve} then we say that $C'$ is obtained from $C$ by {\em trimming}. Recall that if $f \in \zz(M)$ and $U \in {\cal A}_f$, then the rotation number function $\rho = \rho_{f|_U}: U \to S^1$ is well defined and continuous. A component of a point pre-image of $\rho$ is {\em a level set for $\rho$} and is said to be {\em irrational} if its $\rho$-image is irrational and to be {\em interior} if it is disjoint from the frontier of $U$. If $C$ is an interior level set which is nowhere dense and $C'$ is obtained from $C$ by trimming then we will call $C'$ a {\em trimmed level set}. The collection of all trimmed level sets for $f$ will be denoted ${\cal C}(f).$ \end{notn}
\begin{lemma} \label{disjoint or equal} Suppose $G$ is a subgroup of $\Symp^\infty_\mu(M)$ containing an abelian normal subgroup $A$, that $f \in \zz(M)$ lies in $A$ and that $U \in
{\cal A}_f$. Suppose further that $C_1'$ and $C_2'$ are obtained from nowhere dense irrational interior level sets $C_1$ and $C_2$ for $\rho = \rho_{f|_U}$ by trimming. Letting $B_i$ be the component of $M\setminus (C_1'\cup C_2')$ with frontier $C'_i$ and $V$ be the component of $M\setminus (C_1'\cup C_2')$ with frontier $C_1'\cup C_2'$, assume that
\begin{enumerate} \item $\mu(V) < \mu(B_1)< \mu(B_2).$ \label{item:V is small} \item $\mu(B_2 ) > \mu(M)/2.$ \label{item:more than half} \end{enumerate} Then for all $g \in G$, either $g(V) \cap V= \emptyset$ or $g( V) = V$. In particular, there is a finite index subgroup of $G$ that preserves $V$. \end{lemma}
\begin{proof} We assume that $g(V) \cap V\ne \emptyset$ and $g( V) \ne V$ and argue to a contradiction. Since $A$ is normal and abelian, $h = g^{-1}fg$ commutes with $f$. Corollary~\ref{Z centralizer} part (1) implies that $h$ permutes the elements of ${\cal A}_f$ and hence that $h^n(U) = U$ for some $n \ge 1$. It then follows from Corollary~\ref{Z centralizer} part (2) that $h^n$ preserves each level set
for $\rho = \rho_{f|_U}$, and hence each trimmed level set for $\rho$, and so preserves $V, B_1$ and $B_2$. Equivalently, $g(V), g(B_1)$ and $g(B_2)$ are $f^n$-invariant.
Since $g(V) \cap V \ne \emptyset$ and $g(V) \ne V$, there is a component $W$ of $g(V) \cap V$ such that $\fr(W) \cap \ \fr(V)\ne \emptyset$. To see this observe that otherwise one of $V$ and $g(V)$ would properly contain the other
which would contradict the fact that $g$ preserves $\mu.$ Since $f^n$ preserves both $V$ and $g(V)$, it permutes the components of their intersection. Since $f$ preserves area, $f^m(W) = W$ for some $m>0.$ If every simple closed curve in $W$ is inessential in $V$ then $\rho$ has a constant rational value on a dense subset of $W$ by Lemma~\ref{lem: periodic}. This contradicts the fact that $\rho$ is continuous on $U$ and takes only irrational values on $\fr(V)$. We conclude that there is a simple closed curve $\alpha \subset W$ which is essential in $V.$ Let $\beta = g^{-1}(\alpha) \subset V$.
Item~\pref{item:V is small} implies that $\beta$ is also essential in $V$ because if it were inessential the component of its complement which lies in $V$ would contain either $g^{-1}(B_1)$ or $g^{-1}(B_2).$
Let $B_i'$ be the subsurface of $M$ bounded by $\beta$ that contains $B_i$. Item \pref{item:more than half} rules out the possibility that $B_2 \subset g(B_1')$ so it must be that $B_1 \subset g(B_1')$ and $B_2 \subset g(B_2')$. Item \pref{item:V is small} therefore implies that $B_i \cap g(B_i) \ne \emptyset$. If there is a simple closed curve in $B_i$ whose $g$-image is in $V$ and essential in $V$ then there is a proper subsurface of $B_i$ whose $g$ image contains $B_i$ in contradiction to the fact that $g$ preserves area. It follows that either $g(B_i) = B_i$
or there is a component $W_i'$ of $g(B_i) \cap V$ which is inessential in $V$ and whose frontier intersects $C_i'$. As above, this implies that $\rho$ is constant and rational on $W_i'$ in contradiction to the fact that $\rho$ is continuous on $U$ and takes an irrational value on $C_i'$. This contradiction completes the proof. \end{proof}
We are now prepared to complete the proof of our main theorem.
\noindent {\bf Theorem~(\ref{main thm})} {\em Suppose $M$ is a compact oriented surface of genus $0$ and $G$ is a subgroup of $\Symp^\infty_\mu(M)$ which has full support of finite type, e.g., a subgroup of $\Symp^\omega_\mu(M)$. Suppose further that $G$ has an infinite normal solvable subgroup. Then $G$ is virtually abelian. }
\begin{proof}
By Lemma~\ref{solv to abel} it suffices to prove the result when $G$ has an infinite normal abelian subgroup $A$. If $A$ contains an element of positive entropy the result follows by Proposition~\ref{A +entropy}. If the group $A$ is a pseudo-rotation group the result follows from Proposition~\ref{A pseudo-r}.
We are left with the case that $A$ contains an element $f \in \zz(M)$. Let $U$ be an element of ${\cal A}_f$ and let $\rho = \rho_{f|_U}$. Since there are only countably many level sets of $\rho$ with positive measure, all but countably many have empty interior or equivalently are nowhere dense. Choose a nowhere dense interior irrational level set $C$ of $\rho$. One component of $M \setminus C$, say, $Y$, will be an open subsurface with $\mu(Y) \le \mu(M)/2.$ Choose a nowhere dense interior irrational level set $C' \subset Y \cap U$ and let $Y'$ be
the open subsurface which is the component of the complement of $C'$ satisfying $Y' \subset Y$. Finally, choose nowhere dense interior irrational level sets $\hat C_1, \hat C_2 \subset (Y \setminus \cl(Y'))$ so that the annulus $\hat V$ bounded by the trimmed sets $\hat C_1'$ and $\hat C_2'$ has measure less than the measure of $Y'$. Then $\hat C_1'$ and $\hat C_2'$ satisfy the hypotheses of Lemma~\ref{disjoint or equal}. It follows that the subgroup $G_0$ of $G$ that preserves $\hat V$ has finite index in $G$.
Note that any two trimmed irrational level sets $C_1'$ and $C_2'$ in $\hat V$ satisfy the hypotheses of Lemma~\ref{disjoint or equal}. Moreover if $V$ is the essential open subannulus bounded by $C_1'$ and $C_2'$ then $g(V) \cap V \ne \emptyset$ for each $g \in G_0$ because each such $g$ preserves $\hat V$ and preserves area. Lemma~\ref{disjoint
or equal} therefore implies that each such $V$ is $G_0$-invariant so Lemma~\ref{ending lemma} completes the proof. \end{proof}
\section{The Tits Alternative}
We assume throughout this section that $M$ is a compact oriented surface of genus $0$, ultimately proving a special case (Theorem~\ref{tits special case}) of the Tits Alternative.
\begin{lemma}\label{abelian case} Suppose $f \in {\cal Z}(M)$ and $U \in {\cal A}_f$ is a maximal annulus for $f.$ If $G$ is a subgroup of $\Symp^\omega_\mu(M)$
whose elements preserve each element of an infinite family of trimmed level sets for $\rho_{f|_U}$ lying in $U$, then $G$ is abelian. \end{lemma}
\begin{proof} Since infinitely many of the trimmed level sets in $U$ are preserved by $G$, so is each open annulus bounded by two such level sets. It is thus clear that one may choose infinitely many disjoint open annuli in $U$, each of which is $G$-invariant. The result now follows from Lemma~\ref{ending lemma}. \end{proof}
\begin{lemma} \label{trans num} Suppose $f: {\mathbb A} \to {\mathbb A}$ is a homeomorphism isotopic to the identity with universal covering lift $\tilde f: \tilde {\mathbb A} \to \tilde {\mathbb A}$ which has a non-trivial translation interval and suppose $\alpha \subset {\mathbb A}$ is an embedded arc joining the two boundary components of ${\mathbb A}$. Then there exists $m >0$ such that for any $\gamma \subset {\mathbb A}$ which is an embedded arc, disjoint from $\alpha$,
joining the two boundary components of ${\mathbb A}$, every lift of $f^{k}(\gamma),\ |k| > m$ must intersect more than one lift of $\alpha.$
\end{lemma}
\begin{proof}
Let $\tilde \alpha$ be a lift of $\alpha$ and let $T$ be a generator for the group of covering translations of $\tilde {\mathbb A}$. We claim that there is an $m >0$ such that for all $k \ge m,$
$\tilde f^k(\tilde \alpha)$ intersects at least $6$ adjacent lifts of $\alpha$. To show the claim consider the fundamental domain $D_0$ bounded by $\tilde \alpha$ and $T(\tilde \alpha).$ If there were arbitrarily large $k$ for which $\tilde f^k(\tilde \alpha)$ intersects fewer than six translates of $\tilde \alpha,$ then for such $k, \ \tilde f^k(D_0)$ would lie in a strip of five adjacent translates of $D_0$ and the translation interval for $\tilde f$ would have length $0$. This contradiction verifies the claim.
There is a lift $\tilde \gamma$ of $\gamma$ that is contained in $D_0$. Let $X$ be the region bounded by $T^{-1}(\tilde \gamma)$ and $T(\tilde \gamma)$ and note that $\tilde \alpha \subset X$. If the lemma fails then we can choose $k \ge m$ and a lift $\tilde h $ of $f^k$ such that $\tilde h(\tilde \gamma)$ is disjoint from $T^i(\tilde \alpha)$ for all $i \ne 0$. It follows that $\tilde h(X)$ is contained in $\cup_{j=-2}^2 T^{j}(D_0)$ in contradiction to the fact that $\tilde h(\tilde \alpha)$ intersects at least six translates of $\tilde \alpha$. \end{proof}
\begin{lemma} \label{free case} {Suppose $f,g \in {\cal Z}(M)$ and there are trimmed level sets $C_1' \in {\cal C}(f)$ and $C_2' \in {\cal C}(g)$ such that there exist points $x_i \in M \setminus (C_1' \cup C_2'),\ 1 \le i \le 4$ with the following properties: \begin{enumerate} \item $x_1,x_2$ lie in the same component of the complement of $C_1'$ and $x_3,x_4$ lie in the other component of this complement. \item $x_1,x_3$ lie in the same component of the complement of $C_2'$ and $x_2,x_4$ lie in the other component of this complement. \end{enumerate} } Then for some $n>0,$ the diffeomorphisms $f^n$ and $g^n$ generate a non-abelian free group. \end{lemma}
\begin{figure}
\caption{The curves $\beta_1$ and $\beta_2$}
\label{fig1}
\end{figure}
\begin{proof} Let $C_i$ be the untrimmed level set whose trimmed version is $C_i'$. We claim that by modifying the points $\{ x_i\}$ slightly we may assume that the hypotheses (1) and (2) hold with $C_i'$ replaced by $C_i$. This is because each component of the complement of $C_i$ is a dense open subset of a component of the complement of $C_i'$. Hence each $x_j$ can be perturbed slightly to $\hat x_j$ which, for each $i$, is in the component of the complement of $C_i$ which is a dense open subset of the component of the complement of $C_i'$ containing $x_j$. Henceforth we will work with $C_i$ and refer to $\hat x_j$ simply as $x_j$.
Let $\beta_1$ be a path in $M \setminus C_2$ joining $x_1$ and $x_3$; so $\beta_1$ crosses $ C_1$ and is disjoint from $ C_2.$ Likewise let $\beta_2$ be a path in $M \setminus C_1$ joining $x_1$ and $x_2$; so $\beta_2$ crosses $ C_2$ and is disjoint from $ C_1.$ (See Figure \ref{fig1}.) The level set $ C_1$ is an intersection \[
C_1 = \bigcap_{n=1}^\infty \cl(B_n), \] where each $B_n$ is an $f$-invariant open annulus with $\cl(B_{n+1}) \subset B_n$ and the rotation interval $\rho(f, B_n)$ of the annular compactification $f_c$ of
$f|_{B_n}$ is non-trivial (see section 15 and the proof of Theorem~1.4 in \cite{FH-ent0}). For $n$ sufficiently large $\cl(B_n)$ is disjoint from $\beta_2$ and $\{x_i\}_{i=1}^4.$ Let $A_1$ be a choice of $B_n$ with this property.
We may choose a closed subarc $\alpha_1$ of $\beta_1$ whose interior lies in $A_1$ and whose endpoints are in different components of the complement of $A_1$. We will use the intersection number of a curve in $A_1$ with $\alpha_1$ to get a lower bound on the number of times that curve ``goes around'' the annulus $A_1.$
\begin{figure}
\caption{The curves $\alpha_1,\ \alpha_2,\ \gamma_0$ and $\gamma_1$}
\label{fig2}
\end{figure}
Similarly the level set $ C_2$ is an intersection \[
C_2 = \bigcap_{n=1}^\infty \cl(B_n'), \] where each $B_n'$ is a $g$-invariant open annulus with $\cl(B_{n+1}') \subset B_n'$ and the rotation interval $\rho(g, B_n')$ of the annular compactification $g_c$ of
$g|_{B_n'}$ is non-trivial. We construct $A_2$ and the arc $\alpha_2$ in a fashion analogous to the construction of $A_1$ and $\alpha_1$. By construction $\alpha_1$ is disjoint from $A_2$ and crosses $A_1$ while $\alpha_2$ is disjoint from $A_1$ and crosses $A_2.$ (See Figure \ref{fig2}.)
Note that any essential closed curve in $A_1$ must intersect $\alpha_1$ and must contain points of both components of the complement of $ C_2.$ To see this latter fact we note that we constructed $\alpha_1$ to lie in one component of the complement of $ C_2$ but we could as well have constructed $\alpha_1'$ in the other component of this complement. Any essential curve in $A_1$ must intersect both $\alpha_1$ and $\alpha_1'.$ Similarly any essential curve in $A_2$ must intersect $\alpha_2$ and must contain points of both components of the complement of $ C_1$.
There is a key consequence of these facts which we now explore. Let $\gamma_0$ be an arc with interior in $A_1$, disjoint from $A_2 \cup \alpha_1$ and with endpoints in different components of the complement of $A_1$. Replace $f$ and $g$ by $f^m$ and $g^{m'}$ where $m$ and $m'$ are the numbers guaranteed by Lemma~\ref{trans num} for $f$ and $g$ respectively. Then we know that for $k \ne 0$ the curve $f^k(\gamma_0)$ must intersect more than one lift of the arc $\alpha_1$ in the universal covering $\tilde A_1$.
It follows that $f^k(\gamma_0)$ contains a subarc whose union with a subarc of $\alpha_1$ is essential in $A_1$. Hence we conclude that $f^k(\gamma_0)$ contains a subarc crossing $A_2$, i.e. a subarc $\gamma_1$ whose interior lies in $A_1 \cap A_2$ (and hence is disjoint from $\alpha_2$), and whose endpoints are in different components of the complement of $A_2.$ (See Figure~\ref{fig2}.)
Since we replaced $g$ by $g^{m'}$ above we know that for $k \ne 0$ the curve $g^k(\gamma_1)$ must intersect more than one lift of the arc $\alpha_2$ in the universal covering $\tilde A_2$.
We can now construct $\gamma_2$ in a similar manner but switching the roles of $f$ and $g, \ \alpha_1$ and $\alpha_2,$ and $A_1$ and $A_2$. More precisely, for any $k' \ne 0$ the curve $g^{k'}(\gamma_1)$ contains a subarc whose union with a subarc of $\alpha_2$ is essential in $A_2$. It follows that $g^{k'}(\gamma_1)$ contains a subarc $\gamma_2$ whose interior lies in $A_1 \cap A_2$ and whose endpoints are in different components of the complement of $A_1.$ Note that $\gamma_2$ is a subarc of $g^{k'}f^{k}(\gamma_0).$
We can repeat this construction indefinitely, each time switching the roles of $f$ and $g$. Hence if we are given $h = g^{m_1}f^{k_1} \dots g^{m_n}f^{k_n}$ with $m_i \ne 0,\ k_i \ne 0$, we can obtain a non-trivial arc $\gamma_{2n}$ which is a subarc of $h(\gamma_0)$. Since $\gamma_{2n} \subset A_1 \cap A_2$ and $\gamma_0 \cap A_2 = \emptyset$ it is not possible that $h = \operatorname{id}.$ But every element of the group generated by $f$ and $g$ is either conjugate to a power of $f$, a power of $g,$ or an element expressible in the form of $h$. Hence we conclude that the group generated by $f$ and $g$ is a non-abelian free group. \end{proof}
\noindent{\bf Proof of Theorem~\ref{tits special case}.}
Suppose $f \in G \cap \zz(M)$ and $U \in {\cal A}_f$ is a maximal annulus for $f$. One possibility is that there is a finite index subgroup $G_0$ of $G$ which preserves infinitely many of the trimmed rotational level sets for $f$ which lie in $U$. In this case Lemma~\ref{abelian case} implies $G_0$ is abelian and we are done.
If this possibility does not occur, we claim that there exists a trimmed level set $C$ in $U$ and $h_0 \in G$ such that $h_0(C) \cap C \ne C$ but $h_0(C) \cap C \ne \emptyset$. If this is not the case then for every $h \in G$ and every $C$ either $h(C) = C$ or $h(C) \cap C = \emptyset$. It follows that the $G$-orbit of $C$ consists of pairwise disjoint copies of $C$. Since elements of $G$ preserve area this orbit must be finite and it follows that the subgroup of $G$ which stabilizes $C$ has finite index. If we now choose $C_0$ and $C_0'$, two trimmed level sets in $U$, and let $G_0$ be the finite index subgroup of $G$ which stabilizes both of them, then the annulus $U_0$ which they bound is $G_0$-invariant. Now if $C$ is any trimmed level set in $U_0$ then its $G_0$-orbit lies in $U_0$ and we conclude from area preservation that $g(C) \cap C \ne \emptyset$ for all $g \in G_0.$ If for some $g$ and $C$ we have $g(C) \cap C \ne C$ we have demonstrated the claim and otherwise we are in the previous case since $G_0$ preserves the infinitely many trimmed level sets lying in $U_0$.
So we may assume that the claim holds, i.e., that
there is $h_0 \in G$ and a trimmed level set $C_1$ for $f$ in $U$ such that $h_0(C_1) \cap C_1 \ne C_1$ and $h_0(C_1) \cap C_1 \ne \emptyset$. Let $C_2 = h_0(C_1)$ and $g = h_0fh_0^{-1}$. Since $C_1 \ne C_2$ and each is the common frontier of its complementary components, it follows that each of the complementary components of $C_1$ has non-empty intersection with each of the complementary components of $C_2$. Hence we may choose points $x_i \in M \setminus (C_1 \cup C_2),\ 1 \le i \le 4$ satisfying the hypothesis of Lemma~\ref{free case} which completes the proof. \qed
\end{document}
\begin{document}
\title[Based quasi-hereditary algebras]{{\bf Based quasi-hereditary algebras}}
\author{\sc Alexander Kleshchev} \address{Department of Mathematics\\ University of Oregon\\ Eugene\\ OR 97403, USA} \email{klesh@uoregon.edu}
\author{\sc Robert Muth} \address{Department of Mathematics\\ Washington \& Jefferson College \\ Washington\\ PA 15301, USA} \email{rmuth@washjeff.edu}
\subjclass[2010]{16G30, 16E99}
\thanks{The first author was supported by the NSF grant No. DMS-1700905 and the DFG Mercator program through the University of Stuttgart. This work was also supported by the NSF under grant No. DMS-1440140 while both authors were in residence at the MSRI during the Spring 2018 semester. Additional support was provided by the National University of Singapore IMS, where both authors were in residence for the Representation Theory of Symmetric Groups and Related Algebras workshop, December 11--20, 2017. }
\begin{abstract} A notion of a split quasi-hereditary algebra has been defined by Cline, Parshall and Scott. Du and Rui describe a based approach to split quasi-hereditary algebras. We develop this approach further to show that over a complete local Noetherian ring, one can achieve even stronger basis properties. This is important for {\em `schurifying'} quasi-hereditary algebras as developed in our subsequent work. The schurification procedure associates to an algebra $A$ a new algebra, which is the classical Schur algebra if $A$ is the ground field. Schurification produces interesting new quasi-hereditary and cellular algebras. It is important to work over an integral domain of characteristic zero, taking into account a super-structure on the input algebra $A$. So we pay attention to super-structures on quasi-hereditary algebras and investigate a subtle {\em conforming} property of heredity data which is crucial to guarantee that the schurification of $A$ is quasi-hereditary whenever $A$ is. We establish a Morita equivalence result which allows us to pass to basic quasi-hereditary algebras {\em preserving conformity}.
\end{abstract}
\maketitle
\section{Introduction}
Working over an arbitrary ground field, Cline, Parshall and Scott \cite{CPSCrelle} axiomatized the notion of a {\em highest weight category} and defined {\em quasi-hereditary algebras}. However, it is important to be able to work more generally over a reasonable commutative ring. This was pursued in \cite{CPS,DS,DR,Ro}. In particular, if $\Bbbk$ is a Noetherian ground ring, a notion of a split quasi-hereditary algebra has been defined in \cite{CPS}, cf. also \cite{Ro}. On the other hand, Du and Rui \cite{DR} described a based approach to split quasi-hereditary algebras, showing that it is equivalent to that of \cite{CPS} provided that $\Bbbk$ is Noetherian and local.
The goal of this paper is to develop Du and Rui's approach further to show that over a complete local Noetherian ring, we can achieve even stronger basis properties, see Definition~\ref{DCC}. This is important for {\em `schurifying'} quasi-hereditary algebras as developed in \cite{greenThree}. The schurification procedure associates to a $\Bbbk$-algebra $A$ (with suitable subalgebra \(\mathfrak{a}\)) a new algebra $T^A_\mathfrak{a}(n,d)$, which is the classical Schur algebra if $A=\Bbbk$. Schurification often produces interesting new quasi-hereditary and cellular algebras which are important in representation theory of symmetric groups, Hecke algebras, classical Schur algebras, etc., see e.g. \cite{T,EK1,EK2}.
It is clear from \cite{EK1,EK2} that to define many interesting quasi-hereditary algebras, it is important to work over an integral domain of characteristic zero, taking into account a super-structure on the input algebra $A$. Therefore we pay attention to super-structures (as well as $\mathbb{Z}$-gradings) on quasi-hereditary algebras. We investigate a subtle {\em conforming} property of heredity data, see Definition~\ref{D290517}. This is non-trivial only if the super-structure is non-trivial, and is crucial to guarantee that $T^A_\mathfrak{a}(n,d)$ is quasi-hereditary if $A$ is quasi-hereditary.
We further establish some Morita equivalence results which sometimes allow us to pass to basic (or almost basic) quasi-hereditary algebras {\em preserving conformity}, see Theorem~\ref{MorBasic}. This is crucial for studying decomposition numbers and other properties of $T^A_\mathfrak{a}(n,d)$, see for example \cite{greenThree}.
\section{Based quasi-hereditary algebras}\label{SQHA}
Throughout the paper $\Bbbk$ is always a commutative unital ring. Sometimes we will require more in which case this will be stated explicitly.
\subsection{Algebras and modules}
Let $V$ be a {\em graded $\Bbbk$-supermodule}, i.e. $V$ is endowed with a $\Bbbk$-module decomposition $$V=\bigoplus_{n\in\mathbb{Z},\,{\varepsilon}\in\mathbb{Z}/2}V^n_{\varepsilon}. $$ We set $V^n:=V^n_{\bar 0}\oplus V^n_{\bar 1}$ and $V_{\varepsilon}:=\bigoplus_{n\in\mathbb{Z}}V^n_{\varepsilon}$. Then $V=\bigoplus_{n\in\mathbb{Z}} V^n$ is a grading, and $V=V_{\bar 0}\oplus V_{\bar 1}$ is a superstructure. For $v\in V_{\varepsilon}$, we write $\bar v:={\varepsilon}$. Of course, the grading and/or the superstructure could be trivial, for example we could have $V=V^0_{\bar 0}$.
An element $v\in V$ is called homogeneous if $v\in V_{\varepsilon}^m$ for some ${\varepsilon}$ and $m$. We denote by $V_{\rm hom}$ the set of all non-zero homogeneous elements of $V$. For a subset $S\subseteq V_{\rm hom}$ and ${\varepsilon}\in\mathbb{Z}/2$
we denote \begin{equation}\label{EH} S_{\varepsilon}:=S\cap V_{\varepsilon}. \end{equation}
A map $f:V\to W$ of graded $\Bbbk$-supermodules is called {\em homogeneous} if $f(V^m_{\varepsilon})\subseteq W^m_{\varepsilon}$ for all $m$ and ${\varepsilon}$. Let \begin{equation}\label{ER} R:=\mathbb{Z}[q,q^{-1}][t]/(t^2-1), \end{equation} and denote the image of $t$ in the quotient ring by $\pi$, so that $\pi^{\varepsilon}$ makes sense for ${\varepsilon}\in\mathbb{Z}/2$. For $v\in V^n_{\varepsilon}$, we write \begin{equation}\label{EDeg} \deg(v):=q^n\pi^{\varepsilon}. \end{equation}
For a free $\Bbbk$-module $W$ of finite rank $d$, we write $d=\dim W$. A graded $\Bbbk$-supermodule $V$ is free of finite rank if each $V^n_{\varepsilon}$ is free of finite rank and we have $V^n=0$ for almost all $n$. Let $V$ be a free graded $\Bbbk$-supermodule of finite rank. A {\em homogeneous basis} of $V$ is a $\Bbbk$-basis all of whose elements are homogeneous. The {\em graded dimension} of $V$ is $$ \dim^q_\pi V:=\sum_{n\in\mathbb{Z},\, {\varepsilon}\in\mathbb{Z}/2}(\dim V^n_{\varepsilon})q^n\pi^{\varepsilon}\in R. $$
A (not necessarily unital) $\Bbbk$-algebra $A$ is called a {\em graded $\Bbbk$-superalgebra}, if $A$ is a graded $\Bbbk$-supermodule and $A_{{\varepsilon}}^nA_{\delta}^m\subseteq A_{{\varepsilon}+\delta}^{n+m}$ for all ${\varepsilon},\delta$ and $n,m$.
By a {\em graded $A$-supermodule} we understand an $A$-module $V$ which is a graded $\Bbbk$-supermodule and
$A_{{\varepsilon}}^nV_{\delta}^m\subseteq V_{{\varepsilon}+\delta}^{n+m}$ for all ${\varepsilon},\delta$ and $n,m$. We denote by $\bmod \,{A}$ the category of all finitely generated graded $A$-supermodules and homogeneous $A$-homomorphisms. All ideals, subalgebras, submodules, etc. are assumed to be homogeneous. In particular, the Jacobson radical $J(A)$ is the intersection of the annihilators of all graded simple $A$-supermodules.
Given a graded $A$-supermodule $V$, $n\in \mathbb{Z}$ and ${\varepsilon}\in\mathbb{Z}/2\mathbb{Z}$, we denote by $q^n\pi^{\varepsilon} V$ the graded $A$-supermodule which is the same as $V$ as an $A$-module but with $(q^n\pi^{\varepsilon} V)^m_\delta=V^{m-n}_{\delta+{\varepsilon}}$.
\subsection{Definition and first properties}\label{SSCC} Let $A$ be a graded $\Bbbk$-superalgebra, and $I$ be a finite partially ordered set. A subset $\Omega\subseteq I$ is called an {\em upper set} if $i\in\Omega$ and $j\geq i$ imply $j\in\Omega$. Examples of upper sets are $$ I^{>i}:=\{j\in I\mid j>i\}\quad\text{and}\quad I^{\geq i}:=\{j\in I\mid j\geq i\} $$ for a fixed $i\in I$.
\begin{Definition} \label{DCC} {\rm A {\em heredity data} on $A$ consists of a partially ordered set $I$ and finite sets $X=\bigsqcup_{i\in I}X(i)$ and $Y=\bigsqcup_{i\in I}Y(i)$ of non-zero homogeneous elements of $A$ with distinguished {\em initial elements} $e_i\in X(i)\cap Y(i)$ for each $i\in I$. For $i\in I$ and $\Omega\subseteq I$, we set \begin{align*} &Z(i):=X(i)\times Y(i),\ Z(\Omega):=\textstyle \bigsqcup_{j\in \Omega} Z(j),\ Z:=Z(I), \\ &Z^{>i}:=Z(I^{>i}), \ Z^{\geq i}:=Z(I^{\geq i}), \\ &A(\Omega):=\operatorname{span}\{xy \mid (x,y)\in Z(\Omega)\},\\ &A^{>i}:=A(I^{>i}),\ A^{\geq i}:=A(I^{\geq i}). \end{align*} We require that the following axioms hold: \begin{enumerate} \item[{\rm (a)}] $B:=\{x y \mid (x,y)\in Z\}$ is a basis of $A$;
\item[{\rm (b)}] For all $i\in I$, $x\in X(i)$, $y\in Y(i)$ and $a\in A$, we have $$ a x \equiv \sum_{x'\in X(i)}l^x_{x'}(a)x' \pmod{A^{>i}} \ \ \text{and}\ \ ya \equiv \sum_{y'\in Y(i)}r^y_{y'}(a)y' \pmod{A^{>i}} $$ for some $l^x_{x'}(a),r^y_{y'}(a)\in\Bbbk$;
\item[{\rm (c)}] For all $i\in I$, we have \begin{align*}
&xe_i= x,\ e_ix= \delta_{x,e_i}x,\ e_i y= y,\ ye_i= \delta_{y,e_i}y &(x\in X(i),\ y\in Y(i)); \\ &e_jx=x\ \text{or}\ 0,\ ye_j=y\ \text{or}\ 0 &(x\in X,\ y\in Y,\ j\in I). \end{align*} \end{enumerate} } \end{Definition}
If $A$ is endowed with a heredity data $I,X,Y$, we call $A$ {\em based quasi-hereditary (with respect to the poset $I$)}, and refer to $B$ as a {\em heredity basis} of $A$.
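To illustrate the definition in the simplest case, consider the matrix algebra $A=M_n(\Bbbk)$ (with trivial grading and superstructure), and let $E_{jk}$ denote the matrix units. Taking $I=\{1\}$, $X(1)=\{E_{j1}\mid 1\leq j\leq n\}$, $Y(1)=\{E_{1k}\mid 1\leq k\leq n\}$ and $e_1=E_{11}$, we obtain a heredity data on $A$: axiom (a) holds since $xy=E_{j1}E_{1k}=E_{jk}$, axiom (b) holds with $A^{>1}=0$ since $$ a E_{j1}=\sum_{j'=1}^n a_{j'j}E_{j'1}\quad\text{and}\quad E_{1k}a=\sum_{k'=1}^n a_{kk'}E_{1k'}\qquad(a=(a_{pq})\in A), $$ and axiom (c) is immediate from $E_{pq}E_{rs}=\delta_{qr}E_{ps}$. Thus $B=\{E_{jk}\mid 1\leq j,k\leq n\}$ is a heredity basis of $A$.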
\begin{Lemma} \label{LHI} If $\Omega\subseteq I$ is an upper set, then $A(\Omega)$ is the (two-sided) ideal generated by $\{e_i\mid i\in\Omega\}$. \end{Lemma} \begin{proof} That $A(\Omega)$ is an ideal is clear from Definition~\ref{DCC}(b). That $A(\Omega)$ contains the ideal generated by $\{e_i\mid i\in\Omega\}$ is now clear since $A(\Omega)\supseteq \{e_i\mid i\in\Omega\}$. The converse containment follows from $xy=xe_iy$ for $(x,y)\in Z(i)$, see Definition~\ref{DCC}(c). \end{proof}
\begin{Lemma} \label{L290417} Let $\Omega,\Theta\subseteq I$ be upper sets. \begin{enumerate} \item[{\rm (i)}] $A(\Omega) \subseteq A(\Theta)$ if and only if $\Omega\subseteq \Theta$; \item[{\rm (ii)}] $A(\Omega) A(\Theta)\subseteq A(\Omega)\cap A(\Theta)= A(\Omega\cap\Theta)$. \end{enumerate} \end{Lemma} \begin{proof} (i) If $\Omega \not\subseteq \Theta$ and $i\in \Omega\setminus \Theta$, it follows from Definition~\ref{DCC}(a) that $xy\in A(\Omega) \setminus A(\Theta)$ for all $(x,y)\in Z(i)$, i.e. $A(\Omega) \not\subseteq A(\Theta)$. The converse is obvious.
(ii) As $A(\Omega)$, $A(\Theta)$ are ideals by Lemma~\ref{LHI}, the containment $A(\Omega) A(\Theta)\subseteq A(\Omega)\cap A(\Theta)$ is clear. The equality $A(\Omega)\cap A(\Theta)= A(\Omega\cap\Theta)$ comes from Definition~\ref{DCC}(a). \end{proof}
\begin{Lemma} \label{idemaction} Let \(x \in X(i)\), \(y \in Y(i)\). If \(j \not \leq i\), then \(e_j x = y e_j = 0\). \end{Lemma} \begin{proof} As \(e_j \in Y(j)\), we have by Definition~\ref{DCC}(b) that \(e_j x \in A^{\geq j}\). Since \(x \notin A^{\geq j}\), we have that \(e_j x \neq x\), so Definition~\ref{DCC}(c) gives us \(e_jx = 0\). The proof of \(ye_j = 0\) is similar. \end{proof}
\begin{Lemma} For any $i,j\in I$, we have $e_ie_j=\delta_{i,j}e_i$. \end{Lemma} \begin{proof} The equality $e_i^2=e_i$ comes from Definition~\ref{DCC}(c) applied with $x=y=e_i$. Let $i\neq j$. By Definition~\ref{DCC}(c) again, we have that $e_ie_j$ is either $e_j$ or $0$ and on the other hand either $e_i$ or $0$. Since $e_i\neq e_j$ by Definition~\ref{DCC}(a), we deduce that $e_ie_j=0$. \end{proof}
Let $i\in I$, $x\in X(i)$ and $y\in Y(i)$. By Definition~\ref{DCC}(b), $$ \sum_{x'\in X(i)} l^x_{x'}(y)x' \equiv yx\equiv \sum_{y'\in Y(i)} r^y_{y'}(x)y' \pmod{A^{>i}}. $$ By Definition~\ref{DCC}(c), we have $x'= x'e_i$ and $y'= e_iy'$, so taking into account Definition~\ref{DCC}(a), we deduce that \begin{equation}\label{E210617} yx\equiv f_i(y,x)e_i \pmod{A^{>i}} \end{equation} for some $f_i(y,x)\in\Bbbk$. This defines a function $f_i:Y(i)\times X(i)\to \Bbbk$. Note that \begin{equation}\label{E121217_3} f_i(e_i,e_i)=1 \end{equation} and \begin{equation}\label{E050817} \text{$f_i(y,x)=0$\quad unless\quad $\deg(x)\deg(y)=1$.} \end{equation}
\begin{Definition} \label{DSB} {\rm \cite[1.2.1]{DR}} {\rm A graded $\Bbbk$-superalgebra $A$ is called {\em standardly based} with respect to a finite poset $I$ if it possesses a {\em standard basis}, i.e. a homogeneous basis of the form $$ \{b^i_{x,y}\mid i\in I,\ x\in X(i),\ y\in Y(i)\} $$ for some index sets $X(i),Y(i)$ such that, setting $A^{>i}:=\operatorname{span}\{b^j_{x,y}\mid j>i\}$, for all $a\in A$, $i\in I$, $x\in X(i)$, $y\in Y(i)$, we have \begin{align*}
a b^i_{x,y} &\equiv \sum_{x'\in X(i)}l^x_{x'}( a) b^i_{x',y} \pmod{A^{>i}}, \\
b^i_{x,y} a &\equiv \sum_{y'\in Y(i)}r^y_{y'}( a) b^i_{x,y'} \pmod{ A^{>i}} \end{align*} for some $l^x_{x'}(a)\in \Bbbk$ independent of $y$ and $r^y_{y'}(a)\in\Bbbk$ independent of $x$. } \end{Definition}
By \cite[(1.2.3)]{DR}, $$ b^i_{x,y}b^i_{x',y'}\equiv f_i(y,x')b^i_{x,y'}\pmod{A^{>i}} $$ for some $f_i(y,x')\in \Bbbk$.
A standardly based algebra is called {\em standardly full-based} if the $\Bbbk$-span of the elements $f_i(y,x)$, with $x\in X(i)$, $y\in Y(i)$, is $\Bbbk$. The following is clear using (\ref{E121217_3}):
\begin{Lemma} If $A$ is a based quasi-hereditary algebra then it is standardly full-based with $b^i_{x,y}=xy$ for all $i\in I$ and all $x\in X(i)$, $y\in Y(i)$. \end{Lemma}
A homogeneous anti-involution $\tau$ on $A$ is called {\em standard} (with respect to $I,X,Y$) if for all $i\in I$ there is a bijection $X(i)\stackrel{\sim}{\longrightarrow} Y(i),\ x\mapsto y(x)$ such that $y(e_i)=e_i$ and \begin{equation}\label{E200517} \tau(x)= y(x). \end{equation} For a standard anti-involution $\tau$, we have \begin{equation}\label{E230517_6} \tau(xy(x'))= x'y(x) \end{equation}
and $\tau(e_i)= e_i$ for all $i\in I,\ x,x'\in X(i)$. If $\tau$ is a standard anti-involution on $A$ then $\{xy\mid (x,y)\in Z\}$ is a {\em cellular basis} of $A$ with respect to $\tau$, see \cite[(6.1.4)]{DR}.
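For example, for $A=M_n(\Bbbk)$ with heredity data given by $I=\{1\}$, $X(1)=\{E_{j1}\mid 1\leq j\leq n\}$, $Y(1)=\{E_{1k}\mid 1\leq k\leq n\}$ and $e_1=E_{11}$, the transposition map $\tau(E_{jk})=E_{kj}$ is a standard anti-involution, with the bijection $X(1)\stackrel{\sim}{\longrightarrow} Y(1)$ given by $y(E_{j1})=E_{1j}$; the corresponding cellular basis is $\{E_{j1}E_{1k}=E_{jk}\mid 1\leq j,k\leq n\}$.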
\subsection{Standard modules} \label{SSDelta} Throughout the subsection, $A$ is a based quasi-hereditary $\Bbbk$-superalgebra with heredity data $I,X,Y$.
Fix $i\in I$ and upper sets $\Omega',\Omega\subseteq I$ such that $\Omega'\setminus \Omega=\{i\}$. For example we could take $\Omega'=I^{\geq i}$ and $\Omega=I^{>i}$. Denote $$\tilde A:=A/A(\Omega)\quad \text{and}\quad \tilde a:=a+A(\Omega)\in \tilde A\qquad (a\in A). $$ By inflation, $\tilde A$-modules will be automatically considered as $A$-modules. The {\em standard module $\Delta(i)$} and the {\em right standard module $\Delta^{\mathrm{op}}(i)$} are defined as \begin{equation}\label{EDe} \Delta(i):=\tilde A \tilde e_i\quad\text{and}\quad \Delta^{\mathrm{op}}(i):=\tilde e_i\tilde A. \end{equation}
By Definition~\ref{DCC}, we have
$$\Delta(i)=\operatorname{span}\{\tilde x \mid x\in X(i)\}\quad \text{and}\quad \Delta^{\mathrm{op}}(i)=\operatorname{span}\{\tilde y \mid y\in Y(i)\},$$
so $\Delta(i)$ and $\Delta^{\mathrm{op}}(i)$ can be defined respectively as free $\Bbbk$-modules with bases $\{v_x\mid x\in X(i)\}$ and $\{w_y\mid y\in Y(i)\}$ and the actions $$av_x=\sum_{x'\in X(i)}l_{x'}^x(a)v_{x'}\quad \text{and}\quad w_y a=\sum_{y'\in Y(i)}r_{y'}^y(a)w_{y'} \qquad(a\in A). $$ This implies in particular that the definition of $\Delta(i)$ and $\Delta^{\mathrm{op}}(i)$ does not depend on the choice of $\Omega$ and $\Omega'$ as long as $\Omega'\setminus \Omega=\{i\}$.
Note that $v_i:=v_{e_i}$ is a cyclic generator of $\Delta(i)$ such that \begin{equation}\label{E121217} e_iv_i=v_i\quad \text{and}\quad xv_i=v_x\qquad(x\in X(i)). \end{equation} Moreover, \begin{equation}\label{E121217_1} e_i v_x=0\qquad(x\in X(i)\setminus\{e_i\}). \end{equation} Taking into account Lemma \ref{idemaction}, we deduce that $e_j\Delta(i)\neq 0$ implies $j\leq i$. Similar statements hold for $\Delta^{\mathrm{op}}(i)$. We have \begin{equation}\label{LEndDe} \operatorname{End}_A(\Delta(i))\cong \operatorname{End}_{\tilde A}(\Delta(i))\cong \operatorname{Hom}_{\tilde A}(\tilde A\tilde e_i,\Delta(i))\cong e_i\Delta(i)\cong \Bbbk. \end{equation}
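To illustrate these notions in a familiar special case (a routine check, not used in the sequel), let $A=M_n(\Bbbk)$, regarded as trivially graded and purely even, with $I=\{0\}$, $e_0=E_{11}$, $X(0)=\{E_{r1}\mid 1\leq r\leq n\}$ and $Y(0)=\{E_{1s}\mid 1\leq s\leq n\}$. Then $\{E_{r1}E_{1s}=E_{rs}\mid 1\leq r,s\leq n\}$ is a basis of $A$, the standard module $\Delta(0)=Ae_0$ is the natural module of column vectors with $v_{E_{r1}}$ corresponding to the $r$th standard basis vector, and $\operatorname{End}_A(\Delta(0))\cong\Bbbk$, in agreement with (\ref{LEndDe}).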
It follows from the definitions that as $A$-bimodules, \begin{equation}\label{E121217_2} A(\Omega')/A(\Omega)\cong \Delta(i)\otimes_\Bbbk \Delta^{\mathrm{op}}(i). \end{equation}
Recalling (\ref{E210617}), we have a bilinear pairing $ (\cdot,\cdot)_i:\Delta(i)\times \Delta^{\mathrm{op}}(i)\to \Bbbk$ satisfying $$ (v_x,w_y)_i= f_i(y,x). $$
\begin{Lemma} We have \begin{enumerate} \item[{\rm (i)}] $(v_i,w_i)_i=1$; \item[{\rm (ii)}] $(av,w)_i=(v,wa)_i$ for all $v\in\Delta(i),w\in \Delta^{\mathrm{op}}(i),a\in A$. \end{enumerate} \end{Lemma} \begin{proof} (i) comes from (\ref{E121217_3}).
(ii) We follow \cite[(2.3.1)]{DR}. Let $x\in X(i), y\in Y(i)$. We have \begin{align*} (av_x,w_y)_i&=\sum_{x'\in X(i)}l_{x'}^x(a)(v_{x'},w_y)_i =\sum_{x'\in X(i)}l_{x'}^x(a)f_i(y,x'), \\ (v_x,w_ya)_i&=\sum_{y'\in Y(i)}r_{y'}^y(a)(v_x,w_{y'})_i =\sum_{y'\in Y(i)}r_{y'}^y(a)f_i(y',x). \end{align*}
On the other hand, by Definition~\ref{DCC}(b) and (\ref{E210617}), modulo $A(\Omega)$ we have \begin{align*} \sum_{y'\in Y(i)}r_{y'}^y(a)f_i(y',x)e_i&\equiv \sum_{y'\in Y(i)}r_{y'}^y(a)y'x=(ya)x=y(ax)=\sum_{x'\in X(i)}l_{x'}^x(a)yx' \\ &\equiv \sum_{x'\in X(i)}l_{x'}^x(a)f_i(y,x')e_i, \end{align*} so $$ \sum_{y'\in Y(i)}r_{y'}^y(a)f_i(y',x)= \sum_{x'\in X(i)}l_{x'}^x(a)f_i(y,x'), $$ completing the proof. \end{proof}
By the lemma, $$ {\mathrm {rad}\,} \Delta(i):=\{v\in\Delta(i)\mid (v,w)_i=0\ \text{for all $w\in \Delta^{\mathrm{op}}(i)$}\} $$ is a submodule of $\Delta(i)$.
\begin{Lemma}\label{LDeRad}{\rm \cite[(2.4.1)]{DR}} Let $\Bbbk$ be a field. Then for each $i\in I$ we have that $$L(i):=\Delta(i)/{\mathrm {rad}\,}\Delta(i)$$ is an absolutely irreducible $A$-module. Furthermore, ignoring the grading and superstructure, $\{L(i)\mid i\in I\}$ is a complete and irredundant set of irreducible $A$-modules up to isomorphism. \end{Lemma}
By definition, the form $(\cdot,\cdot)_i$ is homogeneous, so ${\mathrm {rad}\,} \Delta(i)$ is a homogeneous submodule of $\Delta(i)$ and $L(i)$ is naturally a graded $A$-supermodule. We refer to the modules $L(i)$ as the {\em canonical irreducible $A$-modules}. From Lemma~\ref{LDeRad}, we get:
\begin{Lemma} Let $\Bbbk$ be a field. Then $$\{q^n\pi^{\varepsilon} L(i)\mid i\in I,\ n\in\mathbb{Z},\ {\varepsilon}\in\mathbb{Z}/2\}$$ is a complete and irredundant set of irreducible graded $A$-supermodules up to homogeneous isomorphism. \end{Lemma}
\begin{Corollary} Suppose that $\Bbbk$ is a local ring with maximal ideal $\mathfrak{m}$ and residue field $F=\Bbbk/\mathfrak{m}$. Then: \begin{enumerate} \item[{\rm (i)}] $A/\mathfrak{m} A\cong A\otimes_\Bbbk F$ is a based quasi-hereditary $F$-superalgebra. \item[{\rm (ii)}] For each $i\in I$, denote the corresponding canonical irreducible $A/\mathfrak{m} A$-module by $L_{A/\mathfrak{m} A}(i)$ and denote by $L_A(i)$ the $A$-module obtained from $L_{A/\mathfrak{m} A}(i)$ by inflation. Then $$\{q^n\pi^{\varepsilon} L_A(i)\mid i\in I,\ n\in\mathbb{Z},\ {\varepsilon}\in\mathbb{Z}/2\}$$ is a complete and irredundant set of irreducible graded $A$-supermodules up to homogeneous isomorphism. \end{enumerate} \end{Corollary}
If $\Bbbk$ is a local ring, we call $A$ {\em basic} if the modules $L_{A/\mathfrak{m} A}(i)$ are $1$-dimensional as $F$-vector spaces, equivalently if the modules $L_{A}(i)$ are free of rank $1$ as $\Bbbk$-modules.
Let $\Bbbk$ be a field. Recalling the ring $R$ from (\ref{ER}), we can now consider {\em bigraded decomposition numbers} \begin{equation}\label{EDNGr} d_{ij}(q,\pi):=\sum_{n\in\mathbb{Z},\,{\varepsilon}\in\mathbb{Z}/2}d_{ij}^{n,{\varepsilon}}q^n\pi^{\varepsilon}\in R \qquad(i,j\in I), \end{equation} where \begin{equation}\label{EDNGr1} d_{ij}^{n,{\varepsilon}}:=[\Delta(i):q^n\pi^{\varepsilon} L(j)]\qquad(n\in\mathbb{Z},\,{\varepsilon}\in\mathbb{Z}/2). \end{equation}
\begin{Lemma} For $i,j\in I$, we have $d_{ii}(q,\pi)=1$, and $d_{ij}(q,\pi)\neq 0$ implies $j\leq i$. \end{Lemma} \begin{proof} Denote $$\hat v_i:=v_i+{\mathrm {rad}\,}\Delta(i)\in \Delta(i)/{\mathrm {rad}\,}\Delta(i)=L(i).$$ Then $e_i \Delta(i)=\Bbbk\cdot v_i$ implies $e_i L(i)=\Bbbk\cdot \hat v_i$. Moreover, since $e_j\Delta(i)\neq 0$ only if $j\leq i$, we deduce that $e_jL(i)\neq 0$ only if $j\leq i$. The result follows. \end{proof}
\section{Based quasi-hereditary versus split quasi-hereditary}
Throughout the section we assume that $A$ is unital. Our goal is to show that, under reasonable assumptions on $\Bbbk$, the notions of based quasi-hereditary and split quasi-hereditary algebras coincide.
\subsection{Based quasi-hereditary algebras are split quasi-hereditary} Assume that $\Bbbk$ is noetherian and $A$ is a graded $\Bbbk$-superalgebra, which is finitely generated projective as a $\Bbbk$-module. The following definition goes back to \cite{CPS,DS}, but we follow the version of \cite{Ro}:
\begin{Definition}\label{DSHI} {\rm A (homogeneous) ideal $J$ of $A$ is called an {\em indecomposable split heredity ideal} if the following conditions hold: \begin{enumerate} \item[{\rm (1)}] $A/J$ is projective as a $\Bbbk$-module; \item[{\rm (2)}] $J$ is projective as a left $A$-module; \item[{\rm (3)}] $J$ is idempotent, i.e. $J^2=J$; \item[{\rm (4)}] $\operatorname{End}_A(J)$ is Morita equivalent to $\Bbbk$. \end{enumerate} } \end{Definition}
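As a sanity check of this definition (an illustration only, not needed later), take $A=M_n(\Bbbk)$, trivially graded and purely even, and $J=A$. Then $A/J=0$ and $J=A$ are projective as $\Bbbk$-modules, $J^2=J$, and $\operatorname{End}_A(J)=\operatorname{End}_A(A)\cong A^{\mathrm{op}}\cong M_n(\Bbbk)$ is Morita equivalent to $\Bbbk$, so $J$ is an indecomposable split heredity ideal; this realizes $M_n(\Bbbk)$ as split quasi-hereditary with $|I|=1$.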
\begin{Definition}\label{DSQH} {\rm The graded $\Bbbk$-superalgebra $A$ is {\em split quasi-hereditary} with respect to a finite partially ordered set $I$ if for every upper set $\Omega\subseteq I$ there is an ideal $A(\Omega)$ in $A$ such that \begin{enumerate} \item[{\rm (1)}] if $\Omega\subseteq \Omega'$ are upper sets then $A(\Omega)\subseteq A(\Omega')$;
\item[{\rm (2)}] if $\Omega\subseteq \Omega'$ are upper sets with $|\Omega'\setminus\Omega|=1$, then $A(\Omega')/A(\Omega)$ is an indecomposable split heredity ideal in $A/A(\Omega)$. \end{enumerate} } \end{Definition}
\begin{Lemma} Let $\Bbbk$ be noetherian. If $A$ is based quasi-hereditary then it is split quasi-hereditary. \end{Lemma} \begin{proof} By Lemma~\ref{LHI}, we have the ideals $A(\Omega)$ which clearly satisfy Definition~\ref{DSQH}(1). Let $\Omega\subseteq \Omega'$ satisfy $\Omega'\setminus\Omega=\{i\}$. We need to check that the ideal $A(\Omega')/A(\Omega)$ in $A/A(\Omega)$ satisfies (1)--(4) of Definition~\ref{DSHI}. Note that
$$\{xy+A(\Omega')\mid (x,y)\in Z(I\setminus \Omega')\}$$ is a $\Bbbk$-basis of $A/A(\Omega')$, which gives (1). The property (2) follows from (\ref{E121217_2}) and (\ref{EDe}). The property (3) comes from the fact that $A(\Omega')/A(\Omega)$ is generated by the idempotent $e_i+A(\Omega)$. Finally, by (\ref{E121217_2}) and (\ref{LEndDe}), we have $\operatorname{End}_A(A(\Omega')/A(\Omega))\cong M_m(\Bbbk)$, where $m=|Y(i)|$, which gives (4). \end{proof}
\subsection{Split quasi-hereditary algebras are based quasi-hereditary}
In this subsection, we assume that the ground ring $\Bbbk$ is noetherian and local and that $A$ is a split quasi-hereditary graded superalgebra. In particular, $A$ is a free $\Bbbk$-module of finite rank and hence Noetherian.
In addition we assume that $A$ is {\em semiperfect}, i.e. $A/J(A)$ is left Artinian and homogeneous idempotents lift from $A/J(A)$ to $A$, cf. \cite[Definition 3.3]{Das}. By \cite[Theorem 3.5]{Das}, this is equivalent to $A_{\bar 0}^0$ being semiperfect (in the usual sense). So, as noted in \cite[\S1]{CPS}, $A$ is semiperfect provided $\Bbbk$ is complete (local and noetherian). The proof of \cite[(1.3)]{CPS} now goes through to give:
\begin{Lemma} \label{CPS13}
Let $A$ be semiperfect. If $J_1\supseteq\dots \supseteq J_t$ are idempotent ideals in $A$ then there exist idempotents $f_1,\dots,f_t$ in $A$ such that $J_r=Af_rA$ for all $r$ and $f_rf_s=f_sf_r=f_r$ for all $r>s$. \end{Lemma}
\begin{Proposition} Assume that \(\Bbbk\) is Noetherian and local and that $A$ is a semiperfect graded $\Bbbk$-superalgebra. If \(A\) is split quasi-hereditary, then \(A\) is based quasi-hereditary. \end{Proposition} \begin{proof} We may assume that $I=\{0,1,\dots,\ell\}$ for some $\ell\in\mathbb{Z}_{>0}$ and $0<1<\dots<\ell$ is a total order refining the given partial order on $I$. Then $\Omega_i:=\{i,i+1,\dots,\ell\}$ is an upper set for any $i\in I$, and we have a chain $$ I=\Omega_0\supseteq \Omega_1\supseteq\dots\supseteq \Omega_\ell\supseteq \Omega_{\ell+1}:=\varnothing
$$ with \(\Omega_i \backslash \Omega_{i+1} = \{i\}\) for \(i \in I\). By Lemma \ref{CPS13} there exist idempotents \(f_0, \ldots, f_\ell\) such that \(A(\Omega_i)=Af_iA \), and \(f_if_j = f_jf_i = f_i\) whenever \(i>j\). Define \(e_\ell:=f_\ell\) and \(e_i := f_i - f_{i+1}\) for $i=0,1,\dots,\ell-1$. Then for all \(i,j \in I\), we have \(e_ie_j = \delta_{ij}e_i\), and \(f_i= e_i + \cdots + e_\ell\).
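Indeed, writing $f_{\ell+1}:=0$, so that $e_j=f_j-f_{j+1}$ for all $j\in I$, the relations $f_rf_s=f_sf_r=f_r$ for $r>s$ give, for $i<j$, $$e_ie_j=(f_i-f_{i+1})(f_j-f_{j+1})=f_j-f_{j+1}-f_j+f_{j+1}=0,$$ and similarly $e_je_i=0$, while $e_i^2=f_i-f_{i+1}-f_{i+1}+f_{i+1}=e_i$.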
Let \(i \in I\), \(\tilde{A}:=A/A(\Omega_{i+1})\) and $\tilde a:=a+A(\Omega_{i+1})\in \tilde A$ for $a\in A$. It follows from Definition~\ref{DSHI}(1) that $\tilde A$ is projective as a $\Bbbk$-module. Moreover, by Definition~\ref{DSHI}(2), $A(\Omega_i)/A(\Omega_{i+1})$ is projective as a left \(\tilde{A}\)-module. Since $$A(\Omega_i)/A(\Omega_{i+1}) = \tilde{A}\tilde{f}_i\tilde{A} = \tilde{A}\tilde{e}_i\tilde{A}$$ by the previous paragraph, \cite[Statement 7]{DlRi} implies that the multiplication map \begin{align*} m: \tilde{A}\tilde{e}_i \otimes_{\tilde{e}_i\tilde{A}\tilde{e}_i} \tilde{e}_i\tilde{A} \to \tilde{A} \tilde{e}_i \tilde{A} \end{align*} is an isomorphism of $\tilde A$-bimodules. By \cite[Lemma 4.5, Proposition 4.7]{Ro}, we have that \begin{align*} \tilde{e}_i\tilde{A}\tilde{e}_i \cong \operatorname{End}_{\tilde{A}}(\tilde{A}\tilde{e}_i)^{\mathrm{op}} = \operatorname{End}_{A}(\tilde{A}\tilde{e}_i)^{\mathrm{op}} \cong \Bbbk, \end{align*} so \(\tilde{e}_i \tilde{A} \tilde{e}_i = \Bbbk \tilde{e}_i\), and \(A(\Omega_i)/A(\Omega_{i+1}) \cong \tilde{A}\tilde{e}_i \otimes_\Bbbk \tilde{e}_i \tilde{A}\).
The left \(\tilde A\)-module \(\tilde{A}\tilde{e}_i\) is projective as an \(\tilde A\)-module, hence projective as a \(\Bbbk\)-module. Writing \(e_* := 1-e_0 - \cdots -e_\ell\), we have \begin{align*} \tilde{A}\tilde{e}_i = \tilde{e}_0\tilde{A}\tilde{e}_i \oplus \cdots \oplus \tilde{e}_\ell \tilde{A}\tilde{e}_i \oplus \tilde{e}_*\tilde{A}\tilde{e}_i. \end{align*} Each of the summands above is projective as a \(\Bbbk\)-module, hence is free as a \(\Bbbk\)-module since \(\Bbbk\) is local. Then there exists a set of elements \(X(i) \subset A_{\textup{hom}}\) such that: \begin{itemize} \item \(e_i \in X(i)\); \item \(\{\tilde{x} \mid x \in X(i)\}\) is a \(\Bbbk\)-basis for \(\tilde{A}\tilde{e}_i\); \item For all \(x \in X(i)\), we have \(x=e_t x e_i\) for some \(t \in \{0,\ldots, \ell, *\}\). \end{itemize} In similar fashion we may choose a set of elements \(Y(i) \subset A_{\textup{hom}}\) such that: \begin{itemize} \item \(e_i \in Y(i)\); \item \(\{\tilde{y} \mid y \in Y(i)\}\) is a \(\Bbbk\)-basis for \(\tilde{e}_i\tilde{A}\); \item For all \(y \in Y(i)\), we have \(y=e_i y e_t\) for some \(t \in \{0,\ldots, \ell, *\}\). \end{itemize} Since $m$ is an isomorphism, \(\{\tilde{x}\tilde{y} \mid x \in X(i),\ y \in Y(i)\}\) is a \(\Bbbk\)-basis for \(\tilde{A}\tilde{e}_i \tilde{A} = A(\Omega_i)/A(\Omega_{i+1})\), for all \(i \in I\), which implies that \(\{xy \mid i \in I,\ x \in X(i),\ y \in Y(i)\}\) is a basis for \(A\). The remaining conditions of Definition~\ref{DCC} are now easily checked. For example, $e_ix=\delta_{x,e_i}x$ for $x\in X(i)$ follows from $\tilde{e}_i\tilde{A}\tilde{e}_i\cong \Bbbk \tilde e_i$. Thus \(\{I,\ \bigsqcup_i X(i),\ \bigsqcup_i Y(i)\}\) constitutes based quasi-hereditary data for \(A\). \end{proof}
\section{Further properties}\label{SQHAReg} Let $A$ be a based quasi-hereditary $\Bbbk$-superalgebra with heredity data $I,X,Y$.
\subsection{Involution and idempotent truncation}\label{SSIdTr} If $e\in A$ is a homogeneous idempotent, we consider the idempotent truncation $\bar A:=eAe$, and denote $\bar a:= eae\in\bar A$ for $a\in A$. We say that $e$ is {\em adapted} (with respect to the given heredity data) if for all $i\in I$ there exist subsets $\bar X(i)\subseteq X(i)$ and $\bar Y(i)\subseteq Y(i)$ such that for all $(x,y)\in Z(i)$ we have: \begin{equation}\label{E230517} ex= \left\{ \begin{array}{ll} x &\hbox{if $x\in\bar X(i)$,}\\ 0 &\hbox{otherwise,} \end{array} \right. \qquad\text{and}\qquad\ \ ye= \left\{ \begin{array}{ll} y &\hbox{if $y\in\bar Y(i)$,}\\ 0 &\hbox{otherwise.} \end{array} \right. \end{equation} Setting \begin{equation}\label{E200717} \bar I:=\{i\in I\mid \bar X(i)\neq \emptyset\neq \bar Y(i)\}, \end{equation} the {\em $e$-truncation of $B$} is defined to be \begin{equation}\label{ETrunc} \bar B:=\{xy\mid i\in \bar I, x\in\bar X(i), y\in\bar Y(i)\}. \end{equation} We say that $e$ is {\em strongly adapted} if it is adapted and $ ee_i=e_ie= e_i $ for all $i\in\bar I$.
\begin{Lemma} \label{L200517_4} Let $e\in A$ be an adapted idempotent. \begin{enumerate} \item[{\rm (i)}] The $e$-truncation $\bar B$ is a standard basis of $\bar A$ in the sense of Definition~\ref{DSB}.
\item[{\rm (ii)}] If $\tau$ is a standard anti-involution of $A$ such that $\tau(e)=e$, then $\bar B$ is a cellular basis of $\bar A$ with respect to the restriction $\tau|_{\bar A}$.
\item[{\rm (iii)}] If $e$ is strongly adapted then $\bar A$ is based quasi-hereditary with heredity data $\bar I$, $\bar X:=\bigsqcup_{i\in \bar I}\bar X(i)$, $\bar Y:=\bigsqcup_{i\in \bar I}\bar Y(i)$. \end{enumerate} \end{Lemma} \begin{proof} (i) follows from $xy=exye$. To check (ii) one needs to observe that $ex=x$ if and only if $y(x)e=y(x)$, and so $\bar Y(i)=\{y(x)\mid x\in\bar X(i)\}$. Part (iii) is clear. \end{proof}
\begin{Remark} {\rm Let $e\in A$ be an adapted idempotent. For $i\in I$, consider the $\bar A$-module $ \bar\Delta(i):=e\Delta(i). $ If $\tau$ is a standard anti-involution of $A$ with $\tau(e)=e$ then by Lemma~\ref{L200517_4}(ii), $\bar A$ is cellular and $\{\bar\Delta(i)\mid i\in \bar I\}$ are the cell modules for $\bar A$. If $e$ is strongly adapted then by Lemma~\ref{L200517_4}(iii), $\bar A$ is quasi-hereditary and $\{\bar\Delta(i)\mid i\in \bar I\}$ are the standard modules for $\bar A$. } \end{Remark}
\begin{Remark} {\rm Given a cellular algebra $\bar A$ with cellular basis $\bar B$ and a subalgebra $\bar\mathfrak{a}\subseteq \bar A_{\bar 0}$, is there a based quasi-hereditary algebra $A$ with heredity basis $B$, a standard anti-involution $\tau$ and $\tau$-invariant adapted idempotent $e$ such that $\bar A=eAe$, $\bar\mathfrak{a}=e\mathfrak{a} e$, and $\bar B$ is the $e$-truncation of $B$? We do not know if this converse of Lemma~\ref{L200517_4}(ii) always holds true. This question seems to be related to problems studied in \cite{Ro,DlRi2,Ko,Aus}. } \end{Remark}
\begin{Lemma} Let \(\Bbbk\) be a field, and $e\in A$ be an adapted idempotent. \begin{enumerate} \item[{\rm (i)}] $eL(i)=0$ if and only if $e\Delta(i)\subseteq {\mathrm {rad}\,}\Delta(i)$. \item[{\rm (ii)}] $eL(i)=0$ if and only if $yex\in A^{>i}$ for all $x\in X(i)$ and $y\in Y(i)$.
\item[{\rm (iii)}] $eL(i)=0$ if and only if $yx\in A^{>i}$ for all $x\in \bar X(i)$ and $ y\in\bar Y(i)$.
\item[{\rm (iv)}] $eL(i)=0$ for all $i\in I\setminus \bar I$. \end{enumerate} \end{Lemma} \begin{proof} Part (i) is clear. By part (i), $eL(i)=0$ if and only if $ev_x\in{\mathrm {rad}\,} \Delta(i)$ for all $x\in X(i)$. Recalling the definition of the form $(\cdot,\cdot)_i$, this is equivalent to $yex\in A^{>i}$ for all $x\in X(i)$ and $y\in Y(i)$, proving part (ii). Part (iii) follows from part (ii) since $ex=\delta_{\{x\in \bar X\}}x$ and $ye=\delta_{\{y\in \bar Y\}}y$. Finally, if $i\in I\setminus \bar I$ then $\bar X(i)=\varnothing$ or $\bar Y(i)=\varnothing$ (or both). So part (iv) follows from part (iii).
\end{proof}
\begin{Corollary} Let \(\Bbbk\) be a field, and $e\in A$ be an adapted idempotent. Then there exists a subset $\bar I'\subseteq\bar I$ such that $\{eL(i)\mid i\in \bar I'\}$ is a complete and irredundant set of irreducible $\bar A$-modules up to isomorphism. \end{Corollary}
\subsection{Conformity}
We now turn to more subtle additional properties of heredity data, which have to do with the super-structure.
Recalling (\ref{EH}), we have the sets $B_{\varepsilon}$, $X(i)_{\varepsilon}$, $Y(i)_{\varepsilon}$, etc.
\begin{Definition}\label{D290517} {\rm
Suppose that $\mathfrak{a}\subseteq A_{\bar 0}$ is a subalgebra. The heredity data $I,X,Y$ of $A$ is {\em $\mathfrak{a}$-conforming} if $I,X_{\bar 0},Y_{\bar 0}$ is a heredity data for $\mathfrak{a}$. } \end{Definition}
If the heredity data $I,X,Y$ of $A$ is $\mathfrak{a}$-conforming then $\mathfrak{a}$ is recovered as follows: $$ \mathfrak{a}=\operatorname{span}\{xy\mid i\in I,\ x\in X(i)_{\bar 0},\ y\in Y(i)_{\bar 0}\}. $$ So sometimes we will simply speak of a {\em conforming heredity data}. Even though in some sense $\mathfrak{a}$ is redundant in the definition of conformity, it is often convenient to use it. For example, in \cite{greenTwo}, we will construct generalized Schur algebras $T^A_\mathfrak{a}(n,d)$, which will depend only on $A$ and $\mathfrak{a}$, but not on $I,X,Y$.
Recall that we have standard $A$-modules $\Delta(i)$ and simple $A$-modules $L(i)$ (if $\Bbbk$ is a field). If the heredity data $I,X,Y$ of $A$ is $\mathfrak{a}$-conforming then by definition $\mathfrak{a}$ is also based quasi-hereditary and has its own standard $\mathfrak{a}$-modules $\Delta_\mathfrak{a}(i)$ and simple $\mathfrak{a}$-modules $L_\mathfrak{a}(i)$ (if $\Bbbk$ is a field).
We describe an additional property which implies conformity. This property is readily checked in some important examples and will be preserved under formation of the generalized Schur algebra $T^A_\mathfrak{a}(n,d)$. The following is easy to see:
\begin{Lemma} Suppose that $A$ possesses a $(\mathbb{Z}/2\times \mathbb{Z}/2)$-grading $ A=\bigoplus_{{\varepsilon},\delta\in\mathbb{Z}/2} A_{{\varepsilon},\delta} $ such that the following conditions hold: \begin{enumerate} \item[{\rm (1)}] $A_{{\varepsilon},\delta}A_{{\varepsilon}',\delta'}\subseteq A_{{\varepsilon}+{\varepsilon}',\delta+\delta'}$ for all ${\varepsilon},\delta,{\varepsilon}',\delta'\in\mathbb{Z}/2$; \item[{\rm (2)}] For all ${\varepsilon}\in\mathbb{Z}/2$, we have $A_{\varepsilon}=\bigoplus_{{\varepsilon}'+{\varepsilon}''={\varepsilon}} A_{{\varepsilon}',{\varepsilon}''}$. \item[{\rm (3)}] $X_{\varepsilon}\subseteq A_{{\varepsilon},{\bar 0}}$ and $Y_{\varepsilon}\subseteq A_{{\bar 0},{\varepsilon}}$ for all ${\varepsilon}\in\mathbb{Z}/2$. \end{enumerate} Then the heredity data $I,X,Y$ is $\mathfrak{a}$-conforming for $\mathfrak{a}=A_{{\bar 0},{\bar 0}}$. \end{Lemma}
\subsection{Morita equivalence} Throughout the section, we assume that $\Bbbk$ is local. We also assume that \(A\) is a unital based quasi-hereditary graded \(\Bbbk\)-superalgebra with heredity data \(I,X,Y\) which is $\mathfrak{a}$-conforming for a unital subalgebra $\mathfrak{a}$, in particular, $I,X_{\bar 0},Y_{\bar 0}$ is a heredity data for $\mathfrak{a}$ and $1_\mathfrak{a}=1_A$.
Our goal is to find an idempotent $f\in\mathfrak{a}$ such that $\bar A:=fAf$ is based quasi-hereditary with $\bar\mathfrak{a}$-conforming heredity data, where $\bar \mathfrak{a}:=f\mathfrak{a} f$ is basic and the functors $$ {\mathcal F}_A:\bmod \,{A}\to\bmod \,{\bar A},\ V\mapsto f V \quad\text{and}\quad {\mathcal F}_\mathfrak{a}:\bmod \,{\mathfrak{a}}\to\bmod \,{\bar{\mathfrak{a}}},\ V\mapsto fV $$ are equivalences of categories, such that \begin{align*} &{\mathcal F}_A(L_A(i))\cong L_{\bar{A}}(i), &{\mathcal F}_A(\Delta_A(i))\cong \Delta_{\bar{A}}(i), \\ &{\mathcal F}_\mathfrak{a}(L_\mathfrak{a}(i))\cong L_{\bar{\mathfrak{a}}}(i), &{\mathcal F}_\mathfrak{a}(\Delta_\mathfrak{a}(i))\cong \Delta_{\bar{\mathfrak{a}}}(i). \end{align*}
The first step allows us to reduce to the situation where $\sum_{i\in I}e_i=1_A=1_\mathfrak{a}$:
\begin{Lemma} \label{L221217} Let $e:=\sum_{i\in I} e_i$. Then $\bar A:=eAe$ is based quasi-hereditary with $\bar\mathfrak{a}$-conforming heredity data, where $\bar \mathfrak{a}:=e\mathfrak{a} e$ and the functors $$ {\mathcal F}_A:\bmod \,{A}\to\bmod \,{\bar A},\ V\mapsto e V \quad\text{and}\quad {\mathcal F}_\mathfrak{a}:\bmod \,{\mathfrak{a}}\to\bmod \,{\bar{\mathfrak{a}}},\ V\mapsto eV $$ are equivalences of categories, such that \begin{align*} &{\mathcal F}_A(L_A(i))\cong L_{\bar{A}}(i), &{\mathcal F}_A(\Delta_A(i))\cong \Delta_{\bar{A}}(i), \\ &{\mathcal F}_\mathfrak{a}(L_\mathfrak{a}(i))\cong L_{\bar{\mathfrak{a}}}(i), &{\mathcal F}_\mathfrak{a}(\Delta_\mathfrak{a}(i))\cong \Delta_{\bar{\mathfrak{a}}}(i). \end{align*} \end{Lemma} \begin{proof} This follows using Lemma~\ref{L200517_4} since $e$ is strongly adapted. \end{proof}
\begin{Lemma}\label{makeprim}
There exists an \(\mathfrak{a}\)-conforming heredity data \(I,X',Y'\) for \(A\) with the same ideals \(A(\Omega)\) and \(\mathfrak{a}(\Omega)\), and such that the new initial elements \(\{e'_i \mid i \in I\}\) are primitive idempotents in \(\mathfrak{a}\) satisfying $e_ie_i'=e_i'=e_i'e_i$ and $e_i'\equiv e_i\pmod{\mathfrak{a}^{>i}}$ for all $i\in I$. \end{Lemma}
\begin{proof}
Let $i\in I$. Set $\tilde \mathfrak{a}:=\mathfrak{a}/\mathfrak{a}^{>i}$ and $\tilde a:=a+\mathfrak{a}^{>i}\in\tilde \mathfrak{a}$ for $a\in \mathfrak{a}$. Then $\tilde e_i$ is a primitive idempotent in $\tilde \mathfrak{a}$ since $\operatorname{End}_{\tilde \mathfrak{a}}(\tilde \mathfrak{a} \tilde e_i)\cong \tilde e_i \tilde \mathfrak{a} \tilde e_i\cong \Bbbk$ is local. So if $e_i=e_i^1+\dots+e_i^r$ is a sum of orthogonal primitive idempotents in $\mathfrak{a}$ then there is exactly one $t$ with $1\leq t\leq r$ such that $\tilde e_i= \tilde e_i^t$. We set $e_i':=e_i^t$. Note that $e_ie_i'=e_i'=e_i'e_i$, hence $e_i'e_j'=0$ for $i\neq j$.
Let $\Omega$ be an upper set of $I$. It easily follows that $A(\Omega)$, which by Lemma~\ref{LHI} is the ideal of $A$ generated by $\sum_{i\in\Omega}e_i$, is also generated by $\sum_{i\in\Omega}e_i'$. Similarly, $\mathfrak{a}(\Omega)$ is the ideal of $\mathfrak{a}$ generated by $\sum_{i\in\Omega}e_i'$.
We have that $\mathfrak{a}^{\geq i}/\mathfrak{a}^{>i}$ is projective as an \(\tilde \mathfrak{a}\)-module, $ \mathfrak{a}^{\geq i}/\mathfrak{a}^{>i} = \tilde \mathfrak{a}\tilde e_i\tilde \mathfrak{a}=\tilde \mathfrak{a}\tilde e_i'\tilde \mathfrak{a} $ and $\tilde e_i' \tilde \mathfrak{a} \tilde e_i'=\tilde e_i \tilde \mathfrak{a} \tilde e_i\cong \Bbbk$. So \cite[Statement 7]{DlRi} implies that the multiplication map $$ m: \tilde{\mathfrak{a}}\tilde{e}_i' \otimes_{\Bbbk} \tilde{e}_i'\tilde{\mathfrak{a}} \to \tilde{\mathfrak{a}} \tilde{e}_i' \tilde{\mathfrak{a}} $$ is an isomorphism of $\tilde \mathfrak{a}$-bimodules. By definition, $\tilde \mathfrak{a} \tilde e_i'=\tilde \mathfrak{a} \tilde e_i$ has $\Bbbk$-basis $\{\tilde x\mid x\in X(i)_{\bar 0}\}$, $\tilde A_{\bar 1} \tilde e_i'=(\tilde A \tilde e_i')_{\bar 1}=(\tilde A \tilde e_i)_{\bar 1}$ has $\Bbbk$-basis $\{\tilde x\mid x\in X(i)_{\bar 1}\}$, and $\tilde A \tilde e_i'=\tilde A \tilde e_i=\tilde \mathfrak{a} \tilde e_i'\oplus \tilde A_{\bar 1} \tilde e_i'$ as $\Bbbk$-modules. Let $$e_*' := 1_A-\sum_{i\in I}e_i'.$$ Since $1_A=1_\mathfrak{a}$, we have $e_*'\in\mathfrak{a}$. Note that \begin{align*} \tilde{\mathfrak{a}}\tilde{e}_i' = \bigoplus_{j\in I\sqcup\{*\}}\tilde{e}_j'\tilde{\mathfrak{a}}\tilde{e}_i' \quad\text{and}\quad \tilde{A}_{\bar 1}\tilde{e}_i' = \bigoplus_{j\in I\sqcup\{*\}}\tilde{e}_j'\tilde{A}_{\bar 1}\tilde{e}_i'. \end{align*} Each of the summands above is projective, hence free, as a \(\Bbbk\)-module. So there exists a set of elements \(X'(i)=X'(i)_{\bar 0}\sqcup X'(i)_{\bar 1}\) such that: \begin{itemize} \item \(e_i' \in X'(i)_{\bar 0}\); \item \(\{\tilde{x} \mid x \in X'(i)_{\bar 0}\}\) is a \(\Bbbk\)-basis for \(\tilde{\mathfrak{a}}\tilde{e}_i'\) and \(\{\tilde{x} \mid x \in X'(i)_{\bar 1}\}\) is a \(\Bbbk\)-basis for \(\tilde{A}_{\bar 1}\tilde{e}_i'\); \item For all \(x \in X'(i)\), we have \(x=e_j' x e_i'\) for some \(j \in I\sqcup\{*\} \). 
\end{itemize} In similar fashion we may choose a set of elements \(Y'(i)=Y'(i)_{\bar 0}\sqcup Y'(i)_{\bar 1}\) such that: \begin{itemize} \item \(e_i' \in Y'(i)_{\bar 0}\); \item \(\{\tilde{y} \mid y \in Y'(i)_{\bar 0}\}\) is a \(\Bbbk\)-basis for \(\tilde{e}_i'\tilde{\mathfrak{a}}\) and \(\{\tilde{y} \mid y \in Y'(i)_{\bar 1}\}\) is a \(\Bbbk\)-basis for \(\tilde{e}_i'\tilde{A}_{\bar 1}\); \item For all \(y \in Y'(i)\), we have \(y=e_i' y e_j'\) for some \(j \in I\sqcup\{*\} \). \end{itemize}
Since $m$ is an isomorphism, \(\{\tilde{x}\tilde{y} \mid x \in X'(i),\ y \in Y'(i)\}\) is a \(\Bbbk\)-basis for \(\tilde{A}\tilde{e}_i' \tilde{A} = A^{\geq i}/A^{>i}\) and \(\{\tilde{x}\tilde{y} \mid x \in X'(i)_{\bar 0},\ y \in Y'(i)_{\bar 0}\}\) is a \(\Bbbk\)-basis for \(\tilde{\mathfrak{a}}\tilde{e}_i' \tilde{\mathfrak{a}} = \mathfrak{a}^{\geq i}/\mathfrak{a}^{>i}\). Doing this for all $i\in I$, we deduce that \(\{xy \mid i \in I,\ x \in X'(i), y \in Y'(i)\}\) is a basis for \(A\) and \(\{xy \mid i \in I,\ x \in X'(i)_{\bar 0}, y \in Y'(i)_{\bar 0}\}\) is a basis for \(\mathfrak{a}\). The remaining conditions of Definitions~\ref{DCC} and \ref{D290517} are now easily checked. Thus \(\{I,\ \bigsqcup_i X'(i),\ \bigsqcup_i Y'(i)\}\) is an $\mathfrak{a}$-conforming heredity data for \(A\). \end{proof}
In Lemma~\ref{makeprim}, we have arranged that all the heredity ideals \(A(\Omega)\) are the same for the two heredity bases coming from $(I,X,Y)$ and $(I,X',Y')$. This implies that the standard modules $\Delta_A(i)$, and hence the simple modules $L_A(i)$, are unchanged when we pass from $(I,X,Y)$ to $(I,X',Y')$. The analogous statement holds for $\Delta_\mathfrak{a}(i)$ and $L_\mathfrak{a}(i)$.
For a strongly adapted idempotent $e\in A$, recall the notation $\bar X(i),\bar Y(i)$ from (\ref{E230517}),\,(\ref{E200717}). These will be applied for the idempotent $f$ appearing in the following theorem:
\begin{Theorem}\label{MorBasic} Let $\Bbbk$ be local and \(A\) be a unital based quasi-hereditary graded \(\Bbbk\)-superalgebra with $\mathfrak{a}$-conforming heredity data $(I,X,Y)$ for a unital subalgebra $\mathfrak{a}$. Then there exists an $\mathfrak{a}$-conforming heredity data $(I,X',Y')$ with the same ideals $A(\Omega)$ and $\mathfrak{a}(\Omega)$ and such that the new initial elements $\{e'_i \mid i \in I\}$ are primitive idempotents in \(\mathfrak{a}\) satisfying $e_ie_i'=e_i'=e_i'e_i$ and $e_i'\equiv e_i\pmod{\mathfrak{a}^{>i}}$ for all $i\in I$. Moreover, setting $f:=\sum_{i\in I}e_i'$, we have: \begin{enumerate} \item[{\rm (i)}] $f$ is strongly adapted with respect to $(I,X',Y')$, so that $\bar A$ is based quasi-hereditary with heredity data $(I,\bar X',\bar Y')$. \item[{\rm (ii)}] $(I,\bar X',\bar Y')$ is $\bar\mathfrak{a}$-conforming; \item[{\rm (iii)}] $\bar \mathfrak{a}$ is basic and, if \(A_{\bar 1} \subseteq J(A)\), then \(\bar{A}\) is basic as well; \item[{\rm (iv)}] The functors $$ {\mathcal F}_A:\bmod \,{A}\to\bmod \,{\bar A},\ V\mapsto f V \quad\text{and}\quad {\mathcal F}_\mathfrak{a}:\bmod \,{\mathfrak{a}}\to\bmod \,{\bar{\mathfrak{a}}},\ V\mapsto fV $$ are equivalences of categories, such that \begin{align*} &{\mathcal F}_A(L_A(i))\cong L_{\bar{A}}(i), &{\mathcal F}_A(\Delta_A(i))\cong \Delta_{\bar{A}}(i),\\
&{\mathcal F}_\mathfrak{a}(L_\mathfrak{a}(i))\cong L_{\bar{\mathfrak{a}}}(i),
&{\mathcal F}_\mathfrak{a}(\Delta_\mathfrak{a}(i))\cong \Delta_{\bar{\mathfrak{a}}}(i). \end{align*} \end{enumerate} \end{Theorem} \begin{proof} Let $e=\sum_{i\in I}e_i$. By Lemma~\ref{L221217}, the algebra $eAe$ satisfies the assumptions of Lemma~\ref{makeprim}. The application of that lemma yields a conforming heredity data $(I,X'',Y'')$ in $eAe$ with initial elements $\{e_i''\mid i\in I\}$. To extend it to the needed heredity data $(I,X',Y')$ for $A$ define \begin{align*} X'&:=X''\sqcup \bigsqcup_{i\in I}\{xe_i''\mid x\in X(i)\ \text{with}\ ex=0\}, \\ Y'&:=Y''\sqcup \bigsqcup_{i\in I}\{e_i''y\mid y\in Y(i)\ \text{with}\ ye=0\}. \end{align*} It is easy to see that this new heredity data with initial elements $e_i'=e_i''$ satisfies the required conditions. \end{proof}
\subsection{Examples}\label{SSZig}
Our two main examples of based quasi-hereditary algebras are the classical {\em Schur algebra} $S(n,d)$ and the {\em extended zigzag algebra} $Z$.
The classical Schur algebra with trivial grading and superalgebra structures has the basis $\{Y^\lambda_{S,T}\}$ of codeterminants constructed in \cite{GreenCod}. It is essentially checked in \cite{GreenCod} that $S(n,d)$ with the codeterminant basis is a based quasi-hereditary algebra with perfect heredity data and standard anti-involution. So is the extended zigzag algebra, which we define next.
Given \(n \geq d\), let \(\lambda = (1^d)\), and let \(T^\lambda\) be the \(\lambda\)-tableau with the entry \(r\) in the \(r\)th row. Define \begin{align*} e:= Y^{\lambda}_{T^\lambda, T^\lambda} = \xi_{1 \cdots d, 1 \cdots d}. \end{align*} Then \(e\) is an adapted idempotent, and \(eS(n,d)e \cong \Bbbk \mathfrak{S}_d\). Thus \begin{align*} \{ eY^{\lambda}_{S,T}e \mid eY^{\lambda}_{S,T}e \neq 0\}
\end{align*}
defines a cellular basis for \(\Bbbk\mathfrak{S}_d\), known as the {\em Murphy basis}.
Fix $\ell\geq 1$ and set $$ I:=\{0,1,\dots,\ell\},\quad J:=\{0,\dots,\ell-1\}. $$
Let $\Gamma$ be the quiver with vertex set $I$ and arrows $\{a_{j,j+1},a_{j+1,j}\mid j\in J\}$ as in the picture: \begin{align*} \begin{braid}\tikzset{baseline=3mm} \coordinate (0) at (-4,0); \coordinate (1) at (0,0); \coordinate (2) at (4,0); \coordinate (3) at (8,0);
\coordinate (6) at (12,0); \coordinate (L1) at (16,0); \coordinate (L) at (20,0); \draw [thin, black,->,shorten <= 0.1cm, shorten >= 0.1cm] (0) to[distance=1.5cm,out=100, in=100] (1); \draw [thin,black,->,shorten <= 0.25cm, shorten >= 0.1cm] (1) to[distance=1.5cm,out=-100, in=-80] (0); \draw [thin, black,->,shorten <= 0.1cm, shorten >= 0.1cm] (1) to[distance=1.5cm,out=100, in=100] (2); \draw [thin,black,->,shorten <= 0.25cm, shorten >= 0.1cm] (2) to[distance=1.5cm,out=-100, in=-80] (1); \draw [thin,black,->,shorten <= 0.25cm, shorten >= 0.1cm] (2) to[distance=1.5cm,out=80, in=100] (3); \draw [thin,black,->,shorten <= 0.25cm, shorten >= 0.1cm] (3) to[distance=1.5cm,out=-100, in=-80] (2);
\draw [thin,black,->,shorten <= 0.25cm, shorten >= 0.1cm] (6) to[distance=1.5cm,out=80, in=100] (L1); \draw [thin,black,->,shorten <= 0.25cm, shorten >= 0.1cm] (L1) to[distance=1.5cm,out=-100, in=-80] (6); \draw [thin,black,->,shorten <= 0.25cm, shorten >= 0.1cm] (L1) to[distance=1.5cm,out=80, in=100] (L); \draw [thin,black,->,shorten <= 0.1cm, shorten >= 0.1cm] (L) to[distance=1.5cm,out=-100, in=-100] (L1); \blackdot(-4,0); \blackdot(0,0); \blackdot(4,0);
\blackdot(16,0); \blackdot(20,0); \draw(-4,0) node[left]{$0$}; \draw(0,0) node[left]{$1$}; \draw(4,0) node[left]{$2$};
\draw(10,0) node {$\cdots$}; \draw(13.4,0) node[right]{$\ell-1$}; \draw(18.65,0) node[right]{$\ell$}; \draw(-2,1.2) node[above]{$ a_{1,0}$}; \draw(2,1.2) node[above]{$ a_{2,1}$}; \draw(6,1.2) node[above]{$ a_{3,2}$};
\draw(14,1.2) node[above]{$ a_{\ell-1,\ell-2}$}; \draw(18,1.2) node[above]{$ a_{\ell,\ell-1}$}; \draw(-2,-1.2) node[below]{$ a_{0,1}$}; \draw(2,-1.2) node[below]{$ a_{1,2}$}; \draw(6,-1.2) node[below]{$ a_{2,3}$};
\draw(14,-1.2) node[below]{$ a_{\ell-2,\ell-1}$}; \draw(18,-1.2) node[below]{$ a_{\ell-1,\ell}$}; \end{braid} \end{align*}
The {\em extended zigzag algebra $Z$} is the path algebra $\Bbbk\Gamma$ modulo the following relations: \begin{enumerate} \item All paths of length three or greater are zero. \item All paths of length two that are not cycles are zero. \item All length-two cycles based at the same vertex are equal. \item $ a_{\ell,\ell-1} a_{\ell-1,\ell}=0$. \end{enumerate} Length-zero paths yield the standard idempotents $\{ e_i\mid i\in I\}$ with $ e_i a_{i,j} e_j= a_{i,j}$ for all admissible $i,j$. The algebra $Z$ is graded by the path length: $Z=Z^0\oplus Z^1\oplus Z^2. $ We also consider $Z$ as a superalgebra with $Z_{\bar 0}=Z^0\oplus Z^2\quad \text{and}\quad Z_{\bar 1}=Z^1. $
Define $$
c_j:= a_{j,j+1} a_{j+1,j} \qquad (j\in J). $$ The algebra $Z$ has an anti-involution $\tau$ with $$ \tau( e_i)= e_i,\quad \tau(a_{i,j})= a_{j,i},\quad \tau(c_j) = c_j. $$
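For example, when $\ell=2$ the relations give
\begin{align*}
a_{1,0}a_{0,1} = a_{1,2}a_{2,1} = c_1, \qquad a_{2,1}a_{1,2} = 0, \qquad a_{0,1}a_{1,2} = a_{2,1}a_{1,0} = 0,
\end{align*}
so that $Z$ has basis $\{e_0,e_1,e_2,\ a_{0,1},a_{1,0},a_{1,2},a_{2,1},\ c_0,c_1\}$, with $e_i$ in degree $0$, the arrows $a_{i,j}$ in degree $1$, and $c_0,c_1$ in degree $2$.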
We consider the total order on $I$
given by $0<1<\dots<\ell$. For $i\in I$, we set
$$X(i):=
\left\{ \begin{array}{ll} \{e_i,a_{i-1,i}\} &\hbox{if $i>0$,}\\ \{e_0\} &\hbox{if $i=0$,}
\end{array}
\right.
\quad
Y(i):=
\left\{ \begin{array}{ll} \{e_i,a_{i,i-1}\} &\hbox{if $i>0$,}\\ \{e_0\} &\hbox{if $i=0$.}
\end{array}
\right.
$$ With respect to this data we have:
\begin{Lemma} \label{LAQH} The graded superalgebra $Z$ is a basic based quasi-hereditary algebra with perfect heredity data and standard anti-involution $\tau$.\end{Lemma} \begin{proof} This is well-known and easy to check. \end{proof}
Note that $$B_{\bar 1}=\{ a_{j,j+1}, a_{j+1,j}\mid j\in J\},\ \ B_{{\bar 0}}=\{ e_i\mid i\in I\}\sqcup \{ c_j\mid j\in J\}. $$
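To illustrate, for $\ell = 2$ the products $xy$ with $x \in X(i)$ and $y \in Y(i)$ are
\begin{align*}
X(0)Y(0) &= \{e_0\}, \\
X(1)Y(1) &= \{e_1,\ a_{1,0},\ a_{0,1},\ a_{0,1}a_{1,0} = c_0\}, \\
X(2)Y(2) &= \{e_2,\ a_{2,1},\ a_{1,2},\ a_{1,2}a_{2,1} = c_1\},
\end{align*}
recovering the nine basis elements of $Z$; the even elements $e_i$ and $c_j$ arise exactly from the pairs $x,y$ of equal parity.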
Let \(e:= e_0 + \cdots + e_{\ell-1} \in Z\). Note that \(e\) is an adapted idempotent, and \(\tau(e) = e\) so the {\em zigzag algebra} \(\overline{Z}:=e Z e \subset Z\) is a cellular algebra with involution \(\tau|_{\overline{Z}}\), and cellular basis \begin{align*} \overline{B} = \{xy\mid i \in I, x \in \overline{X}(i), y \in \overline{Y}(i)\}, \end{align*} where \(\overline{X}(\ell) = \{a_{\ell-1,\ell}\}\), \(\overline{Y}(\ell)=\{a_{\ell,\ell-1}\}\), and \(\overline{X}(i) = X(i)\), \(\overline{Y}(i) = Y(i)\) for all \(i \in J\).
Note that, when \(\Bbbk\) is a field, we have \(eL(\ell) = 0\), and \(eL(j) = L(j)\) for all \(j \in J\), so the standard \(\overline{Z}\)-modules are \(\{\bar{\Delta}(i) = e\Delta(i) \mid i \in I\}\), and the simple \(\overline{Z}\)-modules are \(\{\bar{L}(j) = eL(j) \mid j \in J\}\). The following lemma is easily checked.
\begin{Lemma}\label{zigdecomp} Let \(\Bbbk\) be a field. Let \(i \in I\), and \(j \in I\) (resp. \(j \in J\)). Then the graded decomposition numbers for standard \(Z\)-modules (resp. \(\overline{Z}\)-modules) are given by \begin{align*} d_{i,j} = \delta_{i,j} +\delta_{i-1,j} q \pi. \end{align*} \end{Lemma}
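For instance, when $\ell = 2$ the lemma gives the graded decomposition matrix of $Z$, with rows indexed by $\Delta(0), \Delta(1), \Delta(2)$ and columns by $L(0), L(1), L(2)$:
\begin{align*}
\begin{pmatrix}
1 & 0 & 0 \\
q\pi & 1 & 0 \\
0 & q\pi & 1
\end{pmatrix},
\end{align*}
while the matrix for $\overline{Z}$ is obtained by deleting the column corresponding to $L(2)$.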
For non-negative integers \(n,m\), consider the matrix superalgebra \(M_{n|m}(\Bbbk)\) of rank \(n|m\), with entries in \(\Bbbk\). For \(r,s \in [1,n+m]\), let \(E_{r,s}\) be the matrix with \(1\) in the \((r,s)\)-th component, and zeros elsewhere. The parity and degree of \(E_{r,s}\) are given by \begin{align*} \overline{E}_{r,s} := \begin{cases} \bar 0 & \textup{if }r,s \leq n \textup{ or } r,s >n,\\ \bar 1 & \textup{otherwise}, \end{cases} \qquad \textup{deg}(E_{r,s}) := r-s. \end{align*}
Then \(B:=\{E_{r,s} \mid r,s \in [1,n+m]\}\) constitutes a homogeneous basis for \(M_{n|m}(\Bbbk)\).
Now, let \(I = \{\bullet\}\) be the singleton set, and define: \begin{align*} e_\bullet:=E_{1,1}, \hspace{5mm} X(\bullet):=\{E_{r,1} \mid r \in [1,n+m]\}, \hspace{5mm} Y(\bullet):=\{E_{1,s} \mid s \in [1, n+m]\}. \end{align*}
Then \((I,X,Y)\) constitutes conforming heredity data for \(M_{n|m}(\Bbbk)\) with heredity basis \(B\).
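For example, in the case $n = m = 1$ the four matrix units have
\begin{align*}
\overline{E}_{1,1} = \overline{E}_{2,2} = \bar 0, \qquad \overline{E}_{1,2} = \overline{E}_{2,1} = \bar 1, \qquad \textup{deg}(E_{1,2}) = -1, \qquad \textup{deg}(E_{2,1}) = 1,
\end{align*}
and the heredity data is $e_\bullet = E_{1,1}$, $X(\bullet) = \{E_{1,1}, E_{2,1}\}$, $Y(\bullet) = \{E_{1,1}, E_{1,2}\}$; the products $xy$ with $x \in X(\bullet)$, $y \in Y(\bullet)$ recover the basis $\{E_{1,1}, E_{1,2}, E_{2,1}, E_{2,1}E_{1,2} = E_{2,2}\}$.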
\end{document}
\begin{document}
\title{Units of Equivariant Ring Spectra} \author{Rekha Santhanam } \begin{abstract}
It is well known that very special $\Gamma$-spaces and grouplike ${\text{E}}_\infty$ spaces both model connective spectra. Both these models have equivariant analogues in the case when the group acting is finite. Shimakawa defined the category of equivariant $\Gamma$-spaces and showed that special equivariant $\Gamma$-spaces determine positive equivariant spectra. Costenoble and Waner \cite{MR1012523} showed that grouplike equivariant ${\text{E}}_\infty$-spaces determine connective equivariant spectra.
We show that with suitable model category structures the category of equivariant $\Gamma$-spaces is Quillen equivalent to the category of equivariant ${\text{E}}_\infty$ spaces. We define the units of equivariant ring spectra in terms of equivariant $\Gamma$-spaces and show that the units of an equivariant ring spectrum determine a connective equivariant spectrum.
\end{abstract} \maketitle
\section{Introduction} There are several space level models for the category of spectra.
Segal \cite{MR0353298} developed the notion of very-special $\Gamma$-spaces to model connective spectra. May \cite{MR0339152} showed that group-like ${\text{E}}_\infty$-spaces model connective spectra.
May and Thomason \cite{MR508885} gave a comparison of these models and showed that they are indeed equivalent. However, the model theoretic viewpoint was missing and the equivariant case was not considered. We show that the two models of equivariant infinite loop spaces, namely, equivariant ${\text{E}}_\infty$-spaces and equivariant $\Gamma$-spaces are equivalent.
We interpret the infinite loop space of an ${\text{E}}_\infty$-equivariant ring spectrum as an equivariant $\Gamma$-space. We then describe the units of equivariant spectra in terms of equivariant $\Gamma$-spaces.
\subsection{Background and Results}
Let $R$ be an ${\text{E}}_\infty$-ring spectrum. Then $\pi_0(R)$ defines a monoid and we can consider its unit components. Define $\text{GL}_1 R$ to be the following pullback of spaces \begin{equation*} \xymatrix{\text{GL}_1 R \ar[r] \ar[d] & \Omega^{\infty} R \ar[d] \\
(\pi_0(R))^{\times} \ar[r] & \pi_0(R) } \end{equation*} May, Quinn and Ray \cite{MR0494077} showed that $\text{GL}_1(R)$ is a grouplike ${\text{E}}_\infty$-space and hence determines a connective spectrum which is denoted by $\text{gl}_1 R$.
The theory of units of ring spectra was developed to understand the obstruction theory \cite{MR0494077} for ${\text{E}}_\infty$-orientations on cohomology theories and to classify these orientations. Further the classifying space of the multiplicative units of a cohomology (ring) theory parametrize its twistings \cite{units}, as in the case of twisted K-theory\cite{MR2172633}.
A recent result of Freed, Hopkins and Teleman \cite{MR2365650} relates the twisted equivariant K-theory of a compact Lie group to the representations of the loop group of the Lie group. Atiyah and Segal \cite{MR2172633} and Freed, Hopkins and Teleman \cite{MR2365650} give a geometric construction of twisted equivariant K-theory. This construction does not use homotopy theoretic methods. Further, equivariant orientation theory is not as well understood as the non-equivariant case. We expect that the twistings of equivariant K-theory will be parametrized by the units of equivariant K-theory as in the non-equivariant case. We also hope that the units of equivariant ring spectra will give a better perspective on equivariant orientation theory.
May's machine describing equivariant infinite loop spaces via equivariant grouplike ${\text{E}}_\infty$-spaces can be applied directly to construct the unit equivariant spectrum associated to the unit space of an equivariant ${\text{E}}_{\infty}$ ring-spectrum. According to May [private communication], the details have been understood in principle since the early 1980s, although the theory has still not been written up. The details of how equivariant ${\text{E}}_{\infty}$-spaces describe equivariant infinite loop spaces have been discussed by Costenoble and Waner in \cite{MR1012523}.
In this article, we give a comparison theorem, between the two models of equivariant infinite loop spaces. We use the comparison theorem to give a construction of the unit space of equivariant ${\text{E}}_\infty$-ring spectrum in terms of equivariant $\Gamma$-spaces (Defn \ref{segalgamma}, Defn \ref{shimgamma}).
Let $G$ be a finite group. Shimakawa defined the notion of $\Gamma_G$-spaces \cite{MR1003787} and showed that special $\Gamma_G$-spaces are equivalent to positive $G$-spectra (Defn. \ref{positive} ). We develop this notion of equivariant $\Gamma$-spaces further and show that very special $\Gamma$-spaces are equivalent to equivariant infinite loop spaces in Theorem \ref{fixonvsp}.
We describe a model structure on the category of equivariant $\Gamma$-spaces where the special ${\Gamma_\text{\tiny G}}$-spaces are the fibrant objects. We prove that this category is Quillen equivalent to the category of equivariant ${\text{E}}_\infty$-spaces with the model structure inherited from that on the underlying category of $G$-spaces in Theorem \ref{comparison}. We expect this equivalence will respect the symmetric monoidal structures on the categories. This is discussed in Remark \ref{symmonidal}.
If $X$ is a very special ${\Gamma_\text{\tiny G}}$-space then $X(\b 1)$ is an equivariant infinite loop space. Given a special ${\Gamma_\text{\tiny G}}$-space we show that the $G$-space represented by the orbit diagram of invertible fixed point components defines an equivariant infinite loop space, cf.\ Lemma \ref{orbit}.
Beginning with an equivariant ${\text{E}}_\infty$-ring spectrum we define the group of units of equivariant ${\text{E}}_\infty$-ring spectra as a very-special equivariant gamma space in Definition \ref{unitsdefn}. Our definition of ${\text{GL}}_1$ matches with the usual notion of units of commutative ring spectra when the group action is trivial.
In Appendix \ref{discussion}, we discuss further why our definition of equivariant units is a good analog of the non-equivariant definition. As alluded to in Appendix \ref{discussion}, in a later paper, joint with Chenghao Chu, we will discuss the Quillen equivalence between the category of equivariant $\Gamma$-spaces and the category of equivariant spectra. There we will also discuss an equivariant analog of Segal's method of obtaining $\Gamma$-spaces from symmetric monoidal categories.
All of our constructions are valid only when the group acting is finite. If $G$ is not finite then Blumberg \cite{MR2286026} shows that one cannot use the model of $\Gamma_G$-spaces. The equivariant infinite loop space theory is not as well understood when the group acting is not finite.
There has been some work in the direction of describing equivariant infinite loop spaces in the compact Lie group case by Caruso and Waner \cite{MR777436}. However, very little is known so far.
\begin{rmk} We expect that the notion of orientations arising from the equivariant space ${\text{GL}}_1$ for the Eilenberg-MacLane spectra of Burnside Green functors should be related to the notion of equivariant orientation theory described by May, Costenoble and Waner \cite{MR1856029} for equivariant bundles when the group acting is finite. At this point we do not have any results in this direction. \end{rmk}
\begin{comment} \subsection{Results} Let $G$ be a finite group. We show that there exists a model category structure on the category of equivariant $\Gamma$-spaces with special $\Gamma$-spaces as the fibrant objects. We prove that this category is Quillen equivalent to the category of equivariant ${\text{E}}_\infty$-spaces with the model structure inherited from that on the underlying category of $G$-spaces.
With this setting in place, we add to Shimakawa's work \cite{MR1003787} by describing how very-special equivariant gamma spaces model connective equivariant spectra. We then define the group of units of equivariant ${\text{E}}_\infty$- ring spectra as a very-special equivariant gamma space and therefore, show that this gives rise to a connective equivariant spectrum.
\end{comment}
\section{Notation} \begin{itemize} \item Let ${\mathcal{T}}$ denote the category of compactly generated based topological spaces, morphisms being continuous based maps. \item Let ${\mathcal{W}}$ denote the category of pointed CW-complexes. \item Let $n$, $m$, $p$ and $r$ denote natural numbers. \item We will denote the unit of adjunction of an adjoint pair by $\eta$ and the counit by $\epsilon$. \item Denote the category of sets by ${\mathcal{I}}$ and the category of finite $G$ sets by ${\mathcal{I}}_\text{\tiny G}$. \item Let ${\mathcal{C}}$ be any topological category and $A$ be an object of ${\mathcal{C}}$. Then denote the corepresentable functor ${\mathcal{C}}(A,\_ ) $ from ${\mathcal{C}} \to {\mathcal{T}}$ by ${\mathcal{C}}_A$ and representable functor ${\mathcal{C}}(\_ , A)$ from $ {\mathcal{C}} \to {\mathcal{T}}$ by $ {\mathcal{C}}^A$.
\end{itemize}
\section{Equivariant Infinite Loop Space machines}
Let $G$ be a compact Lie group. Let ${\mathcal{U}}$ denote the complete universe of real representations of $G$, namely, ${\mathcal{U}}$ is a collection of $G$-representations containing the trivial representation and countably many copies of each irreducible representation. \begin{defn}
A prespectrum $X$ is a collection of $G$-spaces indexed on the finite dimensional subspaces $V$ of ${\mathcal{U}}$, together with $G$-maps $S^W \wedge X_V \to X_{V\oplus W}$. If the adjoint maps $X_V \to \Omega^W X_{V \oplus W}$ are $G$-weak equivalences then $X$ is called an $\Omega$-$G$-spectrum.
\end{defn}
For the rest of this article we will assume that $G$ is a finite group.
\subsection{Equivariant $\Gamma$-spaces }
Shimakawa \cite{MR1003787} constructed an equivariant analogue of $\Gamma$-spaces. We now describe equivariant $\Gamma$-spaces.
Let $G{\mathcal{T}}$ denote the category with objects based $G$-spaces and morphisms continuous $G$-maps. A map of $G$-spaces $X \xrightarrow{f} Y$ is a $G$-homotopy (weak) equivalence if for every $H<G$, $$X^H \xrightarrow{f^H} Y^H $$ is a homotopy (weak) equivalence. \\
Define ${\mathcal{T}_\text{\tiny G}}$ to be the category whose objects are the same as that of $G{\mathcal{T}}$ but morphisms are all maps between based $G$-spaces. The category ${\mathcal{T}_\text{\tiny G}}$ is enriched over $G$-spaces. Given two based $G$-spaces $X$ and $Y$, for any $f: X \to Y$ and $g\in G$ we define, $$ g.f(x):= gf(g^{-1}x).$$ Thus, the space of all maps ${\mathcal{T}_\text{\tiny G}}(X,Y)$ has a $G$-action by conjugation.
\begin{defn}\label{segalgamma} Let $\Gamma$ denote the skeletal category of finite pointed sets with pointed set maps as morphisms. Denote the $(n+1)$-element set $\{ 0, 1, \cdots, n \}$ by $\b n$ where $0$ is the marked point. The category $\Gamma$ is a topological category with discrete topology on the morphism sets. Note that our $\Gamma$ is Segal's $\Gamma^{\mathrm{op}}$.\\
Define a category $\Gamma[{\text{G}}{\mathcal{T}}]$ to be the category whose objects are continuous functors $X$ from $\Gamma$ to $G{\mathcal{T}}$ such that $ X(\b 0)$ is a point. Morphisms in this category are natural transformations. \end{defn}
\begin{defn}\label{shimgamma} Let ${\text{G}}\Gamma$ denote the skeletal category of finite pointed $G$-sets (where the $G$ action preserves the marked point) with $G$-pointed maps. Let ${\Gamma_\text{\tiny G}}$ be the category with the same objects as ${\text{G}}\Gamma$ but with morphisms being all pointed set maps. The category ${\Gamma_\text{\tiny G}}$ is $G$-enriched. The $G$-action on ${\Gamma_\text{\tiny G}}(S, T)$ is by conjugation as before.
Define the category ${\Gamma_\text{\tiny G}}[{\mathcal{T}_\text{\tiny G}}]$ to have objects continuous $G$-functors $X$ from ${\Gamma_\text{\tiny G}}$ to $G$-spaces such that $X(\b 0) $ is a point. We refer to the objects of ${\Gamma_\text{\tiny G}}[{\mathcal{T}_\text{\tiny G}}]$ as equivariant $\Gamma$-spaces, or ${\Gamma_\text{\tiny G}}$-spaces. Morphisms in ${\Gamma_\text{\tiny G}}[{\mathcal{T}_\text{\tiny G}}]$ are $G$-natural transformations.
Denote the category of functors $X: G\Gamma \to {\text{G}}{\mathcal{T}}$ such that $X(\b 0 ) = \ast $ by $ G\Gamma[G{\mathcal{T}}]$. \end{defn}
Let $S$ denote a finite pointed $G$-set. Let $p_s : S \to \b 1$ for $s \in S$ be the morphism defined by \[ p_s(t) = \begin{cases} 1 & \mbox{ if } s=t \\
0 & \mbox{ if } s\neq t \end{cases}
\] Let $X$ be a ${\Gamma_\text{\tiny G}}$-space. The projections $p_s$ induce a map $ \theta : S \wedge X(S) \to X(\b 1) $ defined by $\theta(s,x):=X(p_s) (x)$.
Since $ {\Gamma_\text{\tiny G}}(S,\b1)\to {\mathcal{T}_\text{\tiny G}}(X(S),X(\b 1)) $ is a $G$-map and $g.p_{s} = p_{gs}$ it is easy to show that the map $\theta$ is a $G$-map.
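Explicitly, for $g \in G$ and $t \in S$ we have $(g.p_s)(t) = p_s(g^{-1}t)$, which is $1$ exactly when $t = gs$, so $g.p_s = p_{gs}$; since $X$ is a $G$-functor,
\begin{align*}
\theta(g.(s,x)) = X(p_{gs})(gx) = X(g.p_s)(gx) = (g.X(p_s))(gx) = g\, X(p_s)(x) = g.\theta(s,x).
\end{align*}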
\begin{defn}\label{eqspecial} Let $X$ be a ${\Gamma_\text{\tiny G}}$-space. If the adjoint map $ X(S) \to {\mathcal{T}_\text{\tiny G}}(S,X(\b 1))$ is a $G$-weak equivalence then $X$ is defined to be a \emph{special }${\Gamma_\text{\tiny G}}$-space. \\ \end{defn} Define the map $\b 2 \xrightarrow{\mu} \b 1$ to be such that $ \mu(0) = 0$ and $\mu(1)=1=\mu(2)$. Let $H$ be a subgroup of $G$ and $X$ be a special ${\Gamma_\text{\tiny G}}$-space. Then up to homotopy the map $$( X(\b 1)^H)^2 \xleftarrow{\sim} X(\b 2)^H \xrightarrow{X(\mu)} X(\b 1)^H $$ induces a monoidal structure on $X(\b 1)^H$. \begin{defn} Let $X$ be a special ${\Gamma_\text{\tiny G}}$-space. If for every $H<G$, the monoid $\pi_0 X(\b 1)^H$ is a group under the multiplication induced by the specialness condition on $X$, then $X$ is defined to be a \emph{very-special} ${\Gamma_\text{\tiny G}}$-space. \\ \end{defn}
Given any ${\Gamma_\text{\tiny G}}$-space $X$, the $G$-functor $X : {\Gamma_\text{\tiny G}} \to {\mathcal{T}_\text{\tiny G}}$ has a left Kan extension from the category of $G$-CW-complexes to ${\mathcal{T}_\text{\tiny G}}$. Denote the left Kan extension again by $X :{\cW_\text{\tiny G}} \to {\mathcal{T}_\text{\tiny G}}$, where ${\cW_\text{\tiny G}}$ is the $G$-enriched category of based $G$-CW complexes.
Let $V$ and $W$ be $G$-representations. Then, the adjoint map to the isomorphism $ S^{V} \wedge S^W \to S^{V\oplus W}$ induces the following map \begin{eqnarray*} S^V &\to &\map{}{X(S^W)}{X(S^{V\oplus W})} \\ \implies S^V \wedge X(S^W) & \to & X(S^{V \oplus W}). \end{eqnarray*} Thus every ${\Gamma_\text{\tiny G}}$-space defines a $G$-prespectrum. Shimakawa \cite{MR1003787} shows that a special ${\Gamma_\text{\tiny G}}$-space defines a positive $\Omega$-$G$-spectrum. \begin{defn}\label{positive} A $G$-prespectrum $X$ is a positive $\Omega$-$G$-spectrum if for all $G$-representations $V$ and $W$ with $V^G\neq 0$, the map $X(W) \to \Omega^V X(V \oplus W)$ is a $G$-weak equivalence. \end{defn}
The following proposition is an important observation (due to Shimakawa and May) which we will use extensively.
\begin{prop}\label{eqcat}\cite{MR1132161} Let ${\text{i}}$ be the inclusion functor from $\Gamma$ to ${\Gamma_\text{\tiny G}}$ taking sets to $G$-sets with trivial $G$-action. Then there exists an adjoint pair of functors \begin{equation*} \xymatrix{ {\Gamma_\text{\tiny G}}[{\mathcal{T}_\text{\tiny G}}] \ar@<.5ex>[r]^{{\text{i}}} & \Gamma[G{\mathcal{T}}] \ar@<.5ex>[l]^{{\text{P}}} } \end{equation*} which induce an equivalence of categories. \end{prop} \begin{proof} Let $X$ be a functor $\Gamma \to {\text{G}}{\mathcal{T}}$. For any finite $G$-set $S$, define $\gGs{S}$ to be the $G$-functor ${\Gamma_\text{\tiny G}} \to {\mathcal{T}_\text{\tiny G}}$ as $\gGs{S} (T) = {\Gamma_\text{\tiny G}}(S,T) $ for all finite $G$-sets $T$.
Define the functor ${\text{P}} X : {\Gamma_\text{\tiny G}} \to {\mathcal{T}_\text{\tiny G}}$ at a $G$-set $S$ as the left Kan extension $${\text{P}} X (S)= \gGs{S} \otimes_\Gamma X ,$$ defined to be the coequalizer \begin{equation*} \xymatrix{\coprod \limits_{m,n} {\Gamma_\text{\tiny G}}(\b n, S) \times \Gamma(\b m, \b n) \times X(\b m) \ar@<.5ex>[r] \ar@<-.5ex>[r] & \coprod \limits_m {\Gamma_\text{\tiny G}}(\b m, S) \times X(\b m) \ar[r] & {\text{P}} X(S), } \end{equation*} where one of the maps is given by the functoriality of $X$ and the other is composition in ${\Gamma_\text{\tiny G}}$ given via inclusion of $\Gamma(\b m, \b n) \to {\Gamma_\text{\tiny G}}(\b m, \b n)$ but giving the sets trivial $G$-action. Let $S$ be a finite $G$-set and $f : S \xrightarrow{\cong}\b n $ as sets. The $G$-action on $S$ can be described by a group morphism $ \rho: G \to \Sigma_n$. Define $X(\b n)_\rho$ to be the $G$-space $X(\b n)$ with the $G$-action defined as follows: Given an element $x \in X(\b n)$ and $g \in G$,
$g \cdot x := X(\rho(g)) (g x)$, where $gx$ denotes the given action on the $G$-space $X(\b n)$; this defines an action since $X(\rho(g))$ is a $G$-map.
Claim: ${\text{P}} X(S) \cong X(\b n)_\rho$.
Reason:
${\text{P}} X(S) = \coprod {\Gamma_\text{\tiny G}}(\b m, S) \times X(\b m) / \sim $, where $\sim$ is defined as follows. For any $h' \in {\Gamma_\text{\tiny G}}(\b m ,S)$, $h \in {\Gamma_\text{\tiny G}}(\b n, \b m) =\Gamma(\b n, \b m) $ and $x \in X(\b n)$ we have
$(h' h, x) \sim (h', X(h)\, x)$.
Fix $f: S \to \b n$ to be an isomorphism of sets. This induces a group morphism $ \rho : G \to \Sigma_n$ such that for any $s \in S$, $$ f g(s) = \rho(g) f (s).$$
Define $X(\b n)_\rho$ as before, then we have a map $\beta : {\text{P}} X(S) \to X(\b n)_\rho$ given by $\beta(h, x) = X(fh)(x)$. This is a $G$-map and is invertible with inverse $\theta : X(\b n)_\rho \to {\text{P}} X(S)$ defined as $\theta(x)= (f^{-1},x)$.
Therefore, $X \cong {\text{i}} {\text{P}} X $.
Let $Y$ be an object in ${\Gamma_\text{\tiny G}}[{\mathcal{T}_\text{\tiny G}}]$.
Let $S$ be a finite $G$-set with $|S| =n $. Then the $G$-set is completely described by $(\b n, \rho : G \to \Sigma_n)$ up to a set isomorphism, where $\rho$ describes the $G$-action on $S$. For any $ Y \in {\Gamma_\text{\tiny G}}[{\mathcal{T}_\text{\tiny G}}]$ we have $ {\text{i}} Y(\b n) = Y(\b n) $. Then ${\text{P}}{\text{i}} Y(S) = Y(\b n)_\rho$.
The map $\epsilon : Y(S) \to {\text{P}} {\text{i}} Y(S) = Y(\b n)_\rho$ is induced by the isomorphism $f : S \to \b n$, that is, $\epsilon(x) = Y(f) (x)$. This is a $G$-map: since $Y$ is a $G$-functor, $g\, Y(f)(x) = Y(g.f)(g x) = Y(\rho(g^{-1}) f)(g x)$, so that in $Y(\b n)_\rho$ we have $g \cdot \epsilon(x) = Y(\rho(g))\, Y(\rho(g^{-1}))\, Y(f)(g x) = Y(f)(g x) = \epsilon(g. x)$.
Claim: $\epsilon$ is an isomorphism. \\ Reason : Define the inverse map $\alpha : Y(\b n)_\rho \to Y(S) $ as $\alpha(y) = Y(f^{-1}) y$. Then $\alpha$ is a $G$-map: \begin{eqnarray*} \alpha(g \cdot y) & =& Y(f^{-1})\, Y(\rho(g)) (gy) = Y(f^{-1} \rho(g) )(gy) \\ & = & Y(g. f^{-1}) (gy) = g\, Y(f^{-1}) (y) = g\, \alpha(y).\end{eqnarray*}
Thus, these functors induce an equivalence of $G$-categories. \end{proof}
In $\Gamma[G{\mathcal{T}}]$, a $\Gamma$-space $X$ is \emph {special} if for every $H < G$ and homomorphism $\rho : H \to \Sigma_n$ the map $$ X(\b n)_\rho \to (X(\b 1)^n)_\rho$$ is an $H$-weak equivalence. The group $H$ acts on $(X(\b 1)^n)_\rho$ as follows. For any $h \in H$ and $(x_1, \cdots, x_n ) \in X(\b 1)^n$ we have \[ h. (x_1,\cdots, x_n) = (h.x_{\rho(h)(1)}, \cdots, h. x_{\rho(h)(n)} )\]
Shimakawa \cite[p.~226]{MR1132161} shows that this is equivalent to the condition that ${\text{P}} X $ is a special ${\Gamma_\text{\tiny G}}$-space. We will switch back and forth between these two notions of equivariant $\Gamma$-spaces depending on the situation.
\subsection{Equivariant Operads and Monads}
Costenoble and Waner \cite{MR1012523} showed that a $G$-grouplike ${\text{E}}_\infty$-space is $G$ homotopy equivalent to an equivariant infinite loop space.
\begin{defn} A $G$-operad ${\mathcal{D}}$ is an operad in the category of $G$-spaces. The spaces ${\mathcal{D}}(n)$ have an action by $G \times \Sigma_n$ and the operad action maps are $G$-maps commuting with the symmetric group action. We assume that ${\mathcal{D}}(0)$ is a point (which induces the base point on ${\mathcal{D}}(n)$ for all $n \in \mathbb{N}$ via the operad structure maps) and $1 \in {\mathcal{D}}(1)$ is fixed under the action of $G$. \end{defn}
\begin{defn} A ${\mathcal{D}}$-space is a based $G$-space $X$ along with $G$-maps $${\mathcal{D}} (n) \times X^n \to X $$ commuting with the operad structure and the $\Sigma_n$-action. Maps of ${\mathcal{D}}$-spaces are maps of $G$-spaces which are compatible with the ${\mathcal{D}}$-action. Denote the category of ${\mathcal{D}}$-spaces by ${\mathcal{D}}[{\mathcal{T}_\text{\tiny G}}]$. \end{defn}
Given a based $G$-space $X$ we can construct a free ${\mathcal{D}}$-space \begin{eqnarray*} F(X) & := & \coprod\limits_{n=0}^{\infty} {\mathcal{D}}(n) \times_{\Sigma_n} X^n/ \sim \end{eqnarray*} where the relation is defined as follows. Let $\sigma_j : {\mathcal{D}}(j) \to {\mathcal{D}}(j-1) $ be the operad structure map which inserts the point ${\mathcal{D}}(0)$ in the $j$th spot and the identity $1 \in {\mathcal{D}}(1)$ in all other spots, and let $ i_j: X^{j-1} \to X^j$ be the map which inserts the base point in the $j$th spot. Then for any $c \in {\mathcal{D}}(j)$ and $x \in X^{j-1}$ the relation is given by $(c, i_j(x)) \sim (\sigma_j(c), x)$.
\begin{defn} Let ${\mathcal{D}}$ be a $G$-operad. Then ${\mathcal{D}}$ is an ${\text{E}}_\infty$ $G$-operad if ${\mathcal{D}}(n)$ is a universal principal $(G ,\Sigma_n)$-bundle for every $n \in \mathbb{N}$.
\end{defn}
A $G$-space $X$ is said to be an ${\text{E}}_\infty$-space if it has an ${\text{E}}_\infty$ $G$-operad acting on it. Given an ${\text{E}}_\infty$-space $X$, the operad induces a monoidal structure up to homotopy on $X^H$ for all subgroups $H< G$. Define $X$ to be $G$-grouplike if $\pi_0(X^H)$ is a group for all subgroups $H$ of $G$.
\section {Category of Equivariant Operators} Both the category of grouplike ${\text{E}}_\infty$-spaces and the category of very-special $\Gamma$-spaces model infinite loop spaces. May and Thomason \cite{MR508885} showed that both these approaches to infinite loop spaces are equivalent. They defined the notion of a ``category of operators'' to construct a category which can be compared to the category of ${\text{E}}_\infty$-spaces and the category of $\Gamma$-spaces. We generalize their ideas to the equivariant setting.
With appropriate model category structure the category $\Gamma[{\mathcal{T}_\text{\tiny G}}]$ models the category of equivariant $\Gamma$-spaces. Our theorem compares the category of equivariant ${\text{E}}_\infty$-spaces with the category $\Gamma[{\mathcal{T}_\text{\tiny G}}]$.
We now introduce the notion of a ``category of equivariant operators''. \begin{defn}\label{pi} Let $\Pi$ denote the subcategory of $\Gamma$ with morphisms \begin{eqnarray*} \Pi(\b m,\b n)& = & \{\phi \in\Gamma(\b m,\b n) /\phi^{-1} (i) \text{ has at most one element for all } i > 0 \} \end{eqnarray*} \end{defn} \noindent Note that $\Pi(\b m , \b 1)$ has the maps $p_i$ for all $i = 1, 2, \cdots, m$. \begin{defn} Let ${\text{G}}\Pi$ denote the subcategory of ${\text{G}}\Gamma$ such that \begin{eqnarray*} {\text{G}}\Pi( S, T)& = & \{\phi \in{\text{G}}\Gamma( S, T) /\phi^{-1} (t) \text{ has at most one element for all } t \in T,\ t \neq 0 \} \end{eqnarray*} \end{defn} \begin{defn} Let $\pig{}$ denote the subcategory of ${\Gamma_\text{\tiny G}}$ such that \begin{eqnarray*} \pig{}( S, T)& = & \{\phi \in{\Gamma_\text{\tiny G}}( S, T) /\phi^{-1} (t) \text{ has at most one element for all } t \in T,\ t \neq 0 \} \end{eqnarray*} \end{defn} \begin{defn} Define a $\pig{}$-space to be a covariant $G$-functor $X: \pig{} \to {\mathcal{T}_\text{\tiny G}}$ such that $X(\b 0) =\ast$. Define the representable $\pig{}$-spaces $\lpig{T}$ as follows $$ \lpig{T}( S)= \pig{}(T,S)$$ Define $X$ to be a \emph{special} $\pig{}$-space if the map $\theta $ induced by the maps $p_s$,
$$ X(S) \to \map{}{S}{X(\b 1)}$$ is a $G$-weak equivalence.
\end{defn} Given any pointed $G$-space $Y$, we obtain a $\pig{}$-space ${\text{R}}' Y$ defined by ${\text{R}}' Y(S):=\map{}{S}{ Y}$. A map $\alpha : S \to T$ induces a map ${\text{R}}'(\alpha): \map{}{S}{Y} \to \map{}{T}{Y} $ given by $$ {\text{R}}'(\alpha) (f) (t) = \begin{cases}
f(\alpha^{-1} (t)) & \mbox{ if } |\alpha^{-1}(t) | =1 \\
\ast & \mbox{ if } |\alpha^{-1}(t) | =0 \end{cases} $$
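As a small example of Definition \ref{pi}, note that
\begin{align*}
\Pi(\b 2, \b 1) = \{0,\ p_1,\ p_2\} \subsetneq \Gamma(\b 2, \b 1) = \{0,\ p_1,\ p_2,\ \mu\},
\end{align*}
where $0$ denotes the constant map and $\mu$ is the fold map with $\mu(1) = \mu(2) = 1$: the fold map is excluded since $\mu^{-1}(1) = \{1,2\}$ has two elements.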
\begin{lemma} Let ${\text{L}}'$ and ${\text{R}}'$ be a pair of functors \begin{equation*} \xymatrix{ \pig{}[{\mathcal{T}_\text{\tiny G}}] \ar@<.5ex>[r]^{{\text{L}}'} & {\mathcal{T}_\text{\tiny G}} \ar@<.5ex>[l]^{{\text{R}}'} } \end{equation*} defined as ${\text{L}}' X= X(\b 1)$ for $X \in \pig{}[{\mathcal{T}_\text{\tiny G}}]$ and ${\text{R}}' Y( S) = {\mathcal{T}_\text{\tiny G}}(S,Y) $ for $Y \in {\mathcal{T}_\text{\tiny G}}$. Then ${\text{L}}'$ and ${\text{R}}'$ are $G$-functors adjoint to each other. \end{lemma} \begin{proof} It is easy to see that ${\text{L}}'$ and ${\text{R}}'$ are $G$-functors. The counit is the natural isomorphism $\epsilon: {\text{L}}' {\text{R}}' Y = {\text{R}}' Y (\b 1) = \map{}{\b 1}{Y} \xrightarrow{\sim} Y $.
Now, ${\text{R}}'{\text{L}}' X ( S) = {\mathcal{T}_\text{\tiny G}}(S,{\text{L}}' X) = {\mathcal{T}_\text{\tiny G}}( S,X(\b 1)) $. The maps $p_s$ induce a map
$$ X(S) \to {\mathcal{T}_\text{\tiny G}}( S ,X(\b 1))={\mathcal{T}_\text{\tiny G}}(S,{\text{L}}' X)={\text{R}}'{\text{L}}' X( S)$$
as defined before.
Therefore, the functors ${\text{L}}'$ and ${\text{R}}'$ are adjoint to each other.
\end{proof}
The proof of Proposition \ref{eqcat} can be modified to show that $\pig{}[{\mathcal{T}_\text{\tiny G}}]$ and $\Pi[{\text{G}}{\mathcal{T}}]$ are equivalent categories. The adjoint pair ${\text{L}}'$ and ${\text{R}}'$ factor through to give an adjoint pair ${\text{L}} :\Pi[G{\mathcal{T}}] \to G{\mathcal{T}}$ defined as ${\text{L}} X = X(\b 1) $ and ${\text{R}}: G {\mathcal{T}} \to \Pi[G {\mathcal{T}}]$ defined as $ {\text{R}} Y (\b n) = Y(\b 1)^n $. We have the following commutative diagram :
\begin{equation*} \xymatrix{ \Pi[{\text{G}} {\mathcal{T}}] \ar@<.5ex>[dr]^-{{\text{P}}} \ar@<.5ex>[r]^-{{\text{L}}} & {\text{G}} {\mathcal{T}} \ar@<.5ex>[l]^-{{\text{R}}} \ar@<.5ex>[d]^{{\text{R}}'} \\ & \pig{}[{\mathcal{T}_\text{\tiny G}}] \ar@<.5ex>[ul]^{{\text{i}}} \ar@<.5ex>[u]^-{{\text{L}}'} } \end{equation*} \begin{defn}\cite[Defn~1.1]{MR508885} Define a category of operators ${\mathcal{G}}$ to be a topological category whose objects are the sets $\b n$ and with functors from $\Pi$ to ${\mathcal{G}}$ and ${\mathcal{G}}$ to $\Gamma$ such that the induced functor from $\Pi$ to $\Gamma$ is the inclusion of $\Pi$ in $\Gamma$. We will assume that ${\mathcal{G}}(\b m, \b 0) = \ast$ for all $\b m \in \text{Ob}{\mathcal{G}}$.
A map of categories of operators from ${\mathcal{G}}$ to ${\mathcal{H}}$ is a commutative diagram of continuous functors as follows. \begin{equation*} \xymatrix{ & {\mathcal{G}} \ar[dr] \ar[dd]^{v} & \\ \Pi \ar[ur] \ar[dr] & \ar[d] & \Gamma \\ & {\mathcal{H}} \ar[ur] & \\} \end{equation*} \end{defn}
\begin{defn}
Define a \emph{category of equivariant operators} to be a category of operators ${\mathcal{G}}$ enriched over $G$-spaces. Morphisms are morphisms of categories of operators which are $G$-functors. An equivalence of categories of equivariant operators is a morphism of categories of equivariant operators which induces a $G$-weak equivalence on the morphism spaces.
Define a ${\mathcal{G}}$-space $X$ to be a covariant $G$-functor from ${\mathcal{G}}$ to ${\mathcal{T}_\text{\tiny G}}$ such that $X(\b 0) =\ast$. Denote the category of ${\mathcal{G}}$-spaces by ${\mathcal{G}}[{\mathcal{T}_\text{\tiny G}}]$. \end{defn} Note that any category of operators is enriched over $G$-spaces via trivial $G$-action and is therefore a category of equivariant operators.
Let ${\mathcal{H}}$ be a category of equivariant operators. Given any $\b n \in \mbox{Ob} {\mathcal{H}}$, we have an object in the category ${\mathcal{H}}[{\mathcal{T}_\text{\tiny G}}]$ defined as ${\mathcal{H}}^{\b n} (\b m ) = {\mathcal{H}}(\b m, \b n )$ for all $\b m \in {\mbox{Ob}} {\mathcal{H}}$. A morphism ${\mathcal{G}} \xrightarrow{v} {\mathcal{H}}$ of categories of equivariant operators induces a functor ${\mathcal{H}}[{\mathcal{T}_\text{\tiny G}}] \xrightarrow{v^\ast } {\mathcal{G}}[{\mathcal{T}_\text{\tiny G}}]$ defined as $v^\ast Y = Y \circ v$.
\begin{prop} Let ${\mathcal{G}}$ and ${\mathcal{H}}$ be categories of equivariant operators. Let ${\mathcal{G}} \xrightarrow{v} {\mathcal{H}}$ be a morphism of category of equivariant operators. Then there exists a functor ${\mathcal{G}}[{\mathcal{T}_\text{\tiny G}}] \xrightarrow{v_\ast} {\mathcal{H}}[{\mathcal{T}_\text{\tiny G}}]$ left adjoint to $v^\ast$. \begin{equation*} \xymatrix{ {\mathcal{G}}[{\mathcal{T}_\text{\tiny G}}] \ar@<.5ex>[r]^{v_{\ast}} & {\mathcal{H}} [{\mathcal{T}_\text{\tiny G}}] \ar@<.5ex>[l]^{v^{\ast}}. } \end{equation*} \end{prop} \begin{proof} Given a functor ${\mathcal{G}} \xrightarrow{v} {\mathcal{H}} $ and functor ${\mathcal{G}} \xrightarrow{X} {\mathcal{T}_\text{\tiny G}}$, there exists a left Kan extension of $X$ to ${\mathcal{H}}$ defined as the coend, $v_\ast X (\b n) = {\mathcal{H}}^{\b n} \otimes_{{\mathcal{G}}} X$ which is given by the following coequalizer in ${\text{G}}{\mathcal{T}}$. \begin{equation*} \xymatrix{\coprod_{\b m ,\b k} {\mathcal{H}}( \b m , \b n) \times {\mathcal{G}}(\b k , \b m) \times X(\b k) \ar@<.5ex>[r]^-{\mu } \ar@<-.5ex>[r]_-{v_{k,m}} & \coprod_{m} {\mathcal{H}}(\b m, \b n ) \times X(\b m) \ar[r] & v_{\ast} X( \b n)} \end{equation*}
The adjointness is easy to check. \end{proof}
Let ${\mathcal{G}}$ be a category of equivariant operators. Then ${\mathcal{G}}$ defines a monad on $\Pi[{\text{G}}{\mathcal{T}}]$. For any $\Pi$-space $X$, $$ {\text{F}}_{\mathcal{G}} X ( \b n) := {\mathcal{G}}^{\b n} \otimes_{\Pi} X :=\coprod \limits_m {\mathcal{G}}(\b m, \b n) \times X(\b m) /\sim $$ where, for $f \in {\mathcal{G}}(\b k , \b n)$, $x \in X(\b m)$ and $\pi \in \Pi(\b m, \b k)$ we have $(f, \pi x) \sim (f \pi, x)$.
Thus ${\mathcal{G}}[{\mathcal{T}_\text{\tiny G}}]$ is the category of ${\text{F}}_{\mathcal{G}}$-algebras over $\Pi[{\text{G}}{\mathcal{T}}]$.
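As a sanity check of this formula, one may take ${\mathcal{G}}$ to be $\Pi$ itself; the computation below is a sketch using the standard co-Yoneda argument.
\begin{equation*} {\text{F}}_{\Pi} X (\b n) = \Pi^{\b n} \otimes_{\Pi} X = \coprod \limits_m \Pi(\b m, \b n) \times X(\b m) /\sim \; \cong \; X(\b n), \end{equation*}
the isomorphism being induced by $(f,x) \mapsto fx$; this is well defined since both $(f, \pi x)$ and $(f\pi, x)$ are sent to $f\pi x$. Thus the monad attached to $\Pi$ itself is (naturally isomorphic to) the identity, as expected.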
Any pointed $G$-operad ${\mathcal{D}}$ induces a category of equivariant operators. Define $\hat{{\mathcal{D}}}$ to be the category with objects being the finite sets $\b n$ and the morphism space defined as \begin{eqnarray*}
\hat{{\mathcal{D}}}(\b m, \b n) & := & \coprod \limits_{\phi \in \Gamma(\b m, \b n)} \prod \limits_{1\leq j \leq n} {\mathcal{D}}(|\phi^{-1}(j)|) . \end{eqnarray*} It follows that the category $\hat{{\mathcal{D}}}$ is a category of equivariant operators.
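To illustrate this formula in a small case, the morphism space $\hat{{\mathcal{D}}}(\b 2, \b 1)$ can be unravelled directly from the definition (a sketch):
\begin{equation*} \hat{{\mathcal{D}}}(\b 2, \b 1) = \coprod \limits_{\phi \in \Gamma(\b 2, \b 1)} {\mathcal{D}}(|\phi^{-1}(1)|) \cong {\mathcal{D}}(0) \amalg {\mathcal{D}}(1) \amalg {\mathcal{D}}(1) \amalg {\mathcal{D}}(2), \end{equation*}
the four summands being indexed by the four based maps $\phi : \b 2 \to \b 1$, which are determined by the values $\phi(1), \phi(2) \in \{0,1\}$.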
The category of operators $\hat{{\mathcal{D}}}$ induces a monad; denote the corresponding free algebra functor by ${\text{F}_{\mathcal{D}}} : \Pi[{\text{G}} {\mathcal{T}}] \to \Pi[{\text{G}} {\mathcal{T}}]$, defined as $${\text{F}_{\mathcal{D}}} X (\b n) := \hat{{\mathcal{D}}} X (\b n) = \coprod \limits_m \hat{{\mathcal{D}}}(\b m, \b n) \times X(\b m) /\sim $$ where the relation is as before.
Given a ${\mathcal{D}}$-space $X$, by construction ${\text{R}} X$ is a $\hat{{\mathcal{D}}}$-space. Denote this induced functor on ${\mathcal{D}}$-spaces by ${\text{R}_{\mathcal{D}}}$. We have the following square of adjoint pairs. \begin{equation*} \label{eqsquare} \xymatrix{ \Pi[{\text{G}}{\mathcal{T}}] \ar@<-.5ex>[d]_-{{\text{F}_{\mathcal{D}}}} \ar@<.5ex>[r]^-{{\text{L}}} & {\text{G}}{\mathcal{T}} \ar@<.5ex>[d]^-{{\text{F}}} \ar@<.5ex>[l]^-{{\text{R}}} \\
\hat{{\mathcal{D}}}[{\mathcal{T}_\text{\tiny G}}] \ar@<-.5ex>[u]_-{{\text{U}_{\mathcal{D}}}} & {\mathcal{D}}[{\mathcal{T}_\text{\tiny G}}] \ar@<.5ex>[u]^-{{\text{U}}} \ar@<.5ex>[l]^-{{\text{R}_{\mathcal{D}}}} } \end{equation*}
\begin{defn}A morphism $\phi$ in $\Gamma(\b m, \b n)$ is said to be effective if $\phi^{-1}(0) = \{0\}$. It is said to be ordered if it is order preserving. The set of ordered effective morphisms from $\b m $ to $\b n$ is denoted by ${\mathcal{E}}(\b m, \b n)$.
\end{defn} \begin{lemma} \label{gpushoutlem}\cite[Lemma 5.5]{MR508885} Let $X$ be a $\Pi$-space. Let ${\mathcal{D}}$ be a $G$-operad. For $n \geq 1$ let ${\text{F}}_p\hat{{\mathcal{D}}}X(\b n)$ be the image of $\coprod \limits_{m \leq p} \hat{{\mathcal{D}}}(\b m, \b n) \times X(\b m)/ \sim $. Then $\hat{{\mathcal{D}}}X(\b n)$ is the union of $F_p \hat{{\mathcal{D}}}X(\b n)$ over all $p$.
\noindent Moreover, ${\text{F}}_0\hat{{\mathcal{D}}}X(\b n) = X(\b 0) = \hat{{\mathcal{D}}}X(\b 0) $ and ${\text{F}}_p\hat{{\mathcal{D}}}X(\b n)$ can be constructed as the following pushout of $G$-spaces; \begin{equation}\label{pushout}
\xymatrix{ \coprod \limits_{\alpha \in {\mathcal{E}}(\b p, \b n)} \prod\limits_{1\leq j \leq n} {\mathcal{D}}(|\alpha^{-1}(j)|) \times \prod \limits_{\Sigma(\alpha )} sX(\b{p-1}) \ar[r]^-{v} \ar[d]_{i} & {\text{F}}_{p-1} \hat{{\mathcal{D}}} X(\b n) \ar[d] \\
\coprod \limits_{\alpha \in {\mathcal{E}}(\b p, \b n)} \prod\limits_{1\leq j\leq n} {\mathcal{D}}(|\alpha^{-1}(j)|) \times \prod \limits_{\Sigma(\alpha)} X(\b p) \ar[r] & {\text{F}}_{p} \hat{{\mathcal{D}}}X (\b n) . } \end{equation} Here $sX(\b{p-1}) = \coprod\limits_i \sigma_i X(\b{p-1})$, where $\sigma_i$ ranges over the ordered effective morphisms $\b{p-1} \to \b{p}$, and $\Sigma(\alpha) = \Sigma_{\alpha^{-1}(1)} \times \cdots \times \Sigma_{\alpha^{-1}(n)}$. The morphism $v$ takes $(\alpha,c;\sigma_i x)$ to $(\alpha \sigma_i,c;x)$. Then $$\hat{{\mathcal{D}}}X(\b n) = \text{colim } {\text{F}}_p \hat{{\mathcal{D}}}X(\b n)$$ where the colimit is computed in the category of $G$-spaces. \end{lemma}
\begin{lemma} \cite{MR508885} \label{identity} Let ${\mathcal{D}}$ be a $G$-operad. The functor ${\text{U}}{\text{F}}$ is a monad on $G$-spaces and ${\text{L}}{\text{U}_{\mathcal{D}}} {\text{F}_{\mathcal{D}}} {\text{R}} = {\text{U}} {\text{F}} $. In fact, $ {\text{U}_{\mathcal{D}}} {\text{F}_{\mathcal{D}}} {\text{R}} = {\text{R}} {\text{U}} {\text{F}}$. \end{lemma}
By Proposition \ref{adjoint}, the functor ${\text{R}_{\mathcal{D}}}$ has a left adjoint ${\text{L}}_{{\mathcal{D}}}$ and we have the following diagram.
\begin{equation} \label{eqsquare2} \xymatrix{ \Pi[{\text{G}}{\mathcal{T}}] \ar@<-.5ex>[d]_-{{\text{F}_{\mathcal{D}}}} \ar@<.5ex>[r]^-{{\text{L}}} & {\text{G}}{\mathcal{T}} \ar@<.5ex>[d]^-{{\text{F}}} \ar@<.5ex>[l]^-{{\text{R}}} \\
\hat{{\mathcal{D}}}[{\mathcal{T}_\text{\tiny G}}] \ar@<-.5ex>[u]_-{{\text{U}_{\mathcal{D}}}} \ar@<.5ex>[r]^{{\text{L}_{\mathcal{D}}}} & {\mathcal{D}}[{\mathcal{T}_\text{\tiny G}}] \ar@<.5ex>[u]^-{{\text{U}}} \ar@<.5ex>[l]^-{{\text{R}_{\mathcal{D}}}} } \end{equation}
\section{ Model Category Structures} \label{Gmodelcat}
We now set up the model category structure for all the categories which play a role in proving the main theorem.
The $G$-topological category $G {\mathcal{T}} $ admits a compactly generated model category structure where \begin{itemize} \item a map $X \to Y$ is a weak equivalence if $X^H \to Y^H$ is a weak equivalence for all subgroups $H$ of $G$. \item a map $X \to Y$ is a fibration if $X^H \to Y^H$ is a Serre fibration for all subgroups $H$ of $G$. \item cofibrations are maps with left lifting property with respect to all trivial fibrations. \end{itemize} The sets $I=\{(G /H \times {\text{S}}^{n-1})_+ \to (G/H \times {\text{D}}^n )_+/ H<G, n\geq1 \}$ and $J=\{(G /H \times {\text{D}}^n)_+ \to( G/H \times {\text{D}}^n \times \text{I})_+ / H<G, n\geq1 \}$ are the generating cofibrations and trivial cofibrations in $G {\mathcal{T}}$.
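When $G$ is the trivial group, this specializes (as a sanity check) to the familiar Quillen--Serre model structure on based spaces, with generating sets
\begin{equation*} I = \{ ({\text{S}}^{n-1})_+ \to ({\text{D}}^n)_+ \}, \qquad J = \{ ({\text{D}}^n)_+ \to ({\text{D}}^n \times \text{I})_+ \}, \end{equation*}
weak equivalences the weak homotopy equivalences, and fibrations the Serre fibrations.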
The following result is well known; the proof given here is an adaptation of the non-equivariant case.
\begin{thm} Let ${\mathcal{D}}$ be a pointed $G$-operad. The category of ${\mathcal{D}}$-spaces forms a model category with weak equivalences and fibrations defined on the underlying category of $G$-spaces. \end{thm} \begin{proof} Let ${\mathcal{D}}$ also denote the monad corresponding to the operad ${\mathcal{D}}$. The category $ G{\mathcal{T}}$ forms a cofibrantly generated model category with weak homotopy equivalences and Serre fibrations. The maps $ (G/H \times {\text{S}}^{n-1})_+ \to (G/H \times {\text{D}}^{n})_+$ and $ (G/H \times {\text{D}}^n )_+\to (G/H \times{\text{D}}^n \times \text{I})_+$ are the generating cofibrations and acyclic cofibrations respectively. By \cite[Prop~5.13]{MR1806878} we need to show that the maps
\noindent ${\mathcal{D}}(G/H \times {\text{S}}^{n-1})_+ \to {\mathcal{D}}(G/H \times {\text{D}}^{n})_+ $ and $ {\mathcal{D}}(G/H \times {\text{D}}^n)_+ \to {\mathcal{D}}(G/H \times {\text{D}}^{n} \times \text I)_+ $ for $n \geq 1$, satisfy the cofibration hypothesis \cite[5.3]{MR1806878} and that the monad ${\mathcal{D}}$ preserves reflexive coequalizers.
Reflexive coequalizers of spaces commute with finite products. This, together with the fact that colimits commute with coequalizers, implies that ${\mathcal{D}}$ preserves reflexive coequalizers.
Thus we need to show that \begin{itemize} \item [(i)] for any ${\mathcal{D}}$-algebra $Y$, in the pushout diagram \begin{equation*} \xymatrix{ {\mathcal{D}}(G/H \times {\text{S}}^{n-1} )_+\ar[r] \ar[d] & Y \ar[d] \\
{\mathcal{D}}(G/H \times {\text{D}}^n )_+\ar[r] & {\mathcal{D}}(G/H \times {\text{D}}^n)_+ \coprod_{{\mathcal{D}} ( G/H \times {\text{S}}^{n-1})_+} Y } \end{equation*} the right-hand vertical map is a Hurewicz cofibration. \item[(ii)] Every relative ${\mathcal{D}} J$-cell complex is a weak equivalence. \end{itemize}
Note that \begin{itemize} \item $ Y \to {\mathcal{D}} S^0 \coprod Y$ is a Hurewicz cofibration. \item $Y\coprod_{{\mathcal{D}} (G/H \times {\text{S}}^{n-1})_+} {\mathcal{D}}(G/H \times {\text{D}}^n)_+ = {\text{B}}(Y, {\mathcal{D}}(G/H \times {\text{S}}^{n-1} )_+, {\mathcal{D}}( G/H \times \ast)_+)$ and the degeneracy maps are Hurewicz cofibrations. \item Hence $ Y \to Y \coprod _{ {\mathcal{D}}(G/H \times {\text{S}}^{n-1})_+} {\mathcal{D}}(G/H \times {\text{D}}^n)_+ $ is a Hurewicz cofibration. \end{itemize} Similar ideas can be used to show that every relative ${\mathcal{D}} J$-cell complex is a weak equivalence.
\end{proof} \subsection{Diagram Categories}
\begin{defn}\cite{MR1806878} Let ${\mathcal{A}}$ be a topological category. Let ${\mathcal{A}}[{\text{G}}{\mathcal{T}}]$ denote the category of covariant functors from ${\mathcal{A}} \to {\text{G}}{\mathcal{T}}$. A map of ${\mathcal{A}}$-spaces $X\to Y$ is said to be a level equivalence (resp. a level fibration) if for every object $a \in {\mathcal{A}}$ and subgroup $H$ of $G$, the map $X(a)^H \to Y(a)^H $ is a weak equivalence (resp. a Serre fibration). A map of ${\mathcal{A}}$-spaces is said to be a $q$-cofibration if it has the left lifting property with respect to all level acyclic fibrations. A map of ${\mathcal{A}}$-spaces $X \to Y $ is said to be an h-cofibration if $X(a)\to Y(a)$ is a Hurewicz $G$-cofibration (has the $G$-homotopy extension property) for all $ a \in \text{Obj} {\mathcal{A}}$.
\end{defn} For every $a \in {\mathcal{A}}$ we have an adjoint pair of functors, \begin{equation*} \xymatrix{ {\text{G}} {\mathcal{T}} \ar@<.5ex>[r]^{{\text{F}}_a} & {\mathcal{A}}[{\text{G}}{\mathcal{T}}] \ar@<.5ex>[l]^{{\text{E}}_a} } \end{equation*} defined as $ {\text{E}}_a (X) = X(a)$ and $ {\text{F}}_a(A) (b)= {\mathcal{A}}(a,b)\wedge A$.
\begin{thm} \cite[Theorem~6.5]{MR1806878}\cite[Theorem~III.2.4]{MR1922205} Let ${\mathcal{A}}$ be a $G$-topological category. The category ${\mathcal{A}}[{\text{G}}{\mathcal{T}}]$ admits a
level model category structure where the weak equivalences are level equivalences, fibrations are level fibrations and cofibrations are $q$-cofibrations. With the level model structure, ${\mathcal{A}}[{\text{G}}{\mathcal{T}}]$ forms a compactly generated topological model category. The sets of maps ${\text{F}}_a I$ and ${\text{F}}_a J$ for all objects $a$ of ${\mathcal{A}}$ are the generating cofibrations and generating trivial cofibrations. \end{thm} \begin{proof} The category ${\mathcal{A}}[{\text{G}}{\mathcal{T}}]$ is complete and cocomplete since the colimits and limits are evaluated in the underlying category of $G$-spaces. In order to show that the model structure on ${\text{G}}{\mathcal{T}}$ lifts to ${\mathcal{A}}[{\text{G}}{\mathcal{T}}]$, we need to show that the sets ${\text{F}}_a I $ and ${\text{F}}_a J $ satisfy the cofibration hypothesis. This follows from the adjointness of ${\text{F}}_a$ and the model category structure on ${\text{G}}{\mathcal{T}}$. \end{proof}
\begin{cor} The category $G\Pi[{\text{G}}{\mathcal{T}}] $ is a compactly generated model category with the level model category structure. The sets ${\text{F}}_{S} I$ and ${\text{F}}_{S} J$ for all $S \in {\mbox{Ob}} G\Pi{}$ are the generating cofibrations and generating acyclic cofibrations. \end{cor}
Let $S$ be a $G$-set. Let $\lpig{S}$ be an object of $\pig{}[{\mathcal{T}_\text{\tiny G}}]$ defined as $\lpig{S}(T)= \pig{}(S, T)$. Then by restricting to the subcategory $G\Pi$ this also defines a ${\text{G}}\Pi$-space. The projection morphisms $p_s$ where $s \in S$ induce a map $ S \wedge\lpig{1} \to \lpig{S} $ in $G\Pi[{\text{G}}{\mathcal{T}}]$. By \cite[Thm~4.1.1]{MR1944041} the left localization of $G\Pi[{\text{G}}{\mathcal{T}}]$ with respect to the set $V:=\{S \wedge\lpig{1} \to \lpig{S}/ S \in {\mbox{Ob}} G\Pi \}$ exists.
Define a $G\Pi{}$-space $X$ to be a $V$-local object if for every map $Z\to W$ in $V$, $\map{}{W}{X} \to \map{}{Z}{X}$ is a $G$-weak equivalence. Further, a morphism $X \to Y$ in $G \Pi[{\text{G}}{\mathcal{T}}]$ is defined to be a $V$-local equivalence if for every $V$-local object $Z$, the map $\map{}{Y}{Z} \to \map{}{X}{Z}$ is a $G$-weak equivalence. \\
Then in the localized model category structure on $G\Pi[{\text{G}}{\mathcal{T}}]$ \begin{itemize} \item weak equivalences are $V$-local equivalences, \item cofibrations are cofibrations in the level model category structure \item fibrations are maps with right lifting property with respect to trivial cofibrations and \item fibrant objects are the $V$-local objects.\\ \end{itemize}
The category $\pig{}[{\mathcal{T}_\text{\tiny G}}]$ is equivalent to $\Pi [{\text{G}} {\mathcal{T}}]$. Therefore, ${\text{P}}$ is a right adjoint. Define a functor $ \pig{S}: \pig{{\small{\text{op}}}} \to {\mathcal{T}_\text{\tiny G}}$ as $\pig{S}(T) = \pig{}(T, S)$. We can forget to $G\Pi$ to get a functor from $G\Pi^{\small{\text{op}}} \to G{\mathcal{T}}$. For any $X : G \Pi \to G{\mathcal{T}}$ define $$ {\text{E}}' X (S) = \pig{S} \otimes_{G \Pi} X$$ for all $ S \in {\mbox{Ob}}{G \Pi}$. It is easy to check that ${\text{E}}'$ is the left adjoint to the forgetful functor ${\text{i}}: \pig{}[{\mathcal{T}_\text{\tiny G}}] \to G \Pi[G{\mathcal{T}}]$.
This induces an adjoint pair of functors between $G\Pi[{\text{G}}{\mathcal{T}}]$ and $ \Pi[{\text{G}} {\mathcal{T}}] $. Define ${\text{E}} : \Pi[G{\mathcal{T}}] \to G\Pi[G{\mathcal{T}}] $ as the right adjoint $ {\text{E}} = {\text{i}} {\text{P}}$.
\begin{equation*} \xymatrix{ G \Pi[{\text{G}} {\mathcal{T}}] \ar@<.5ex>[dr]^-{{\text{E}}'} \ar@<.5ex>[r]^-{ {\text{E}}'\circ {\text{i}}} & \Pi[ {\text{G}} {\mathcal{T}}] \ar@<.5ex>[l]^-{{\text{i}} \circ {\text{P}}} \ar@<.5ex>[d]^{{\text{P}}} \\ & \pig{}[{\mathcal{T}_\text{\tiny G}}] \ar@<.5ex>[ul]^{{\text{i}}} \ar@<.5ex>[u]^-{{\text{i}}} } \end{equation*}
Then the model category structure on $G\Pi[{\text{G}}{\mathcal{T}}]$ induces a model category structure on $\Pi[{\text{G}}{\mathcal{T}}]$ where \begin{itemize} \item A map $X \to Y $ is a weak equivalence if ${\text{E}} X \to {\text{E}} Y$ is a weak equivalence in $G\Pi[{\text{G}}{\mathcal{T}}]$. \item A map $X \to Y$ is a fibration if ${\text{E}} X \to {\text{E}} Y$ is a fibration in $G\Pi[{\text{G}} {\mathcal{T}}]$. \item Cofibrations are maps with the left lifting property with respect to trivial fibrations. \end{itemize} Denote $\Pi[{\text{G}} {\mathcal{T}}]$ with the localized model category structure by $\Pi[{\text{G}} {\mathcal{T}}]_{{\text{i}} V}$. The notation is appropriate since by Claim \ref{twin models} we can also consider this as localizing the induced model structure on $\Pi[{\text{G}}{\mathcal{T}}]$ with respect to ${\text{i}} V$.\\
\begin{rmk} \label{fibrant} The space $X$ is fibrant in $\Pi[{\text{G}} {\mathcal{T}}]$ if ${\text{E}} X$ is fibrant in $G\Pi[{\text{G}} {\mathcal{T}}]$. Therefore, the map $$ G\Pi[G{\mathcal{T}}](\Pi_{G,S},{\text{E}} X) \to G \Pi[G {\mathcal{T}}](S \wedge \Pi_{G,\b 1}, {\text{E}} X)$$ is a $G$-weak equivalence. In particular, $$ \Pi[G{\mathcal{T}}]({\text{i}} \Pi_{G,S}, X) \to \Pi[G{\mathcal{T}}](S \wedge{\text{i}} \Pi_{G,\b 1} , X)$$ is a $G$-weak equivalence.
If $|S|=k$ then the $G$ action on $S$ can be described by an isomorphism $\rho : \b k \to S $. Then an argument similar to the proof of Proposition \ref{eqcat} shows that
$$ X(\b k)_\rho \to (X(\b 1)^k)_\rho $$ is a $G$-weak equivalence.
The space $X$ is fibrant if and only if $ X(\b k)_\rho \to (X(\b 1)^k)_\rho $ is a $G$-weak equivalence for every $k$ and $\rho$. In this localized model category structure the fibrant objects in $\Pi[{\text{G}}{\mathcal{T}}]$ are therefore exactly the special $\Pi$-spaces. \end{rmk}
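A basic example, sketched using the description of ${\text{R}}$ above: for any based $G$-space $Y$ the $\Pi$-space ${\text{R}} Y$, with ${\text{R}} Y(\b k) \cong Y^k$, is special, and hence fibrant in the localized structure, since
\begin{equation*} ({\text{R}} Y (\b k))_\rho = (Y^k)_\rho \longrightarrow \big( ({\text{R}} Y(\b 1))^k \big)_\rho = (Y^k)_\rho \end{equation*}
is the identity map for every $k$ and $\rho$.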
\begin{prop}\label{serreeqcat} Let ${\mathcal{G}}$ be a category of equivariant operators. Define \begin{itemize} \item a map of ${\mathcal{G}}$-spaces $X \to X'$ to be a weak equivalence (fibration) if
${\text{E}} {\text{U}}_{\mathcal{G}} X \to {\text{E}} {\text{U}}_{\mathcal{G}} X'$ is a weak equivalence (fibration) of $G\Pi$-spaces and \item a map of ${\mathcal{G}}$-spaces to be a cofibration if it has the left lifting property with respect to all trivial fibrations. \end{itemize} Then ${\mathcal{G}}[{\mathcal{T}_\text{\tiny G}}]$ forms a compactly generated model category with this structure. The sets of maps ${\text{F}}_{\mathcal{G}} {\text{i}} {\text{F}}_S I $ and ${\text{F}}_{\mathcal{G}} {\text{i}}{\text{F}}_{S} J$ for all objects $S$ of $G\Pi$ are the generating cofibrations and generating trivial cofibrations. \end{prop} \begin{proof} Colimits and limits exist in the category of $G$-functors from ${\mathcal{G}} \to {\mathcal{T}_\text{\tiny G}}$.
The sets ${\text{F}}_{\mathcal{G}} {\text{i}} {\text{F}}_{S} I $ and ${\text{F}}_{\mathcal{G}} {\text{i}} {\text{F}}_{S} J $ satisfy the cofibration hypothesis. This follows from the fact that ${\text{U}}_{\mathcal{G}}$ preserves colimits, ${\text{i}} {\text{F}}_{S} I$ and ${\text{i}} {\text{F}}_{S} J$ satisfy the cofibration hypothesis and the functor ${\text{F}}_{\mathcal{G}}$ commutes with tensoring over spaces. Now, apply the small object argument \cite[Lemma~5.3]{MR1806878} to prove the factorization axioms. The other axioms are easy to prove from definitions and since the model structure is inherited from the model structure on $\Pi[{\text{G}}{\mathcal{T}}]$. \end{proof}
\begin{rmk}[Claim \ref{twin models}] Consider the category ${\mathcal{G}}[{\mathcal{T}_\text{\tiny G}}]$ with the model structure inherited from the level model structure on $G\Pi$-spaces. Then ${\mathcal{G}}[{\mathcal{T}_\text{\tiny G}}]$ has a localized model category structure with respect to the set $\{ S\wedge {\text{F}}_{{\mathcal{G}}} {\text{i}} \lpig{\b 1} \to {\text{F}}_{\mathcal{G}} {\text{i}} \lpig{S}/ S \in {\mbox{Ob}} G\Pi\}$ and this is equivalent to the model category structure on ${\mathcal{G}}[{\mathcal{T}_\text{\tiny G}}]$ obtained from the underlying localized model category structure on $G \Pi[{\text{G}}{\mathcal{T}}]$. \end{rmk} \begin{cor} The category $\hat{{\mathcal{D}}}[{\mathcal{T}_\text{\tiny G}}]$ therefore, has a cofibrantly generated model category structure where \begin{itemize} \item a map $X \to Y$ is a weak equivalence (or fibration) if ${\text{U}_{\mathcal{D}}} X \to {\text{U}_{\mathcal{D}}} Y$ is a weak equivalence (or fibration) in $\Pi[{\text{G}}{\mathcal{T}}]_{{\text{i}} V}$ and \item cofibrations are maps of $\hat{{\mathcal{D}}}$-spaces with left lifting property with respect to acyclic fibrations. \end{itemize} \end{cor} \subsection{Quillen Equivalences}
\begin{prop}\label{pig}
The adjoint functors \begin{equation*} \xymatrix{ \Pi[{\text{G}}{\mathcal{T}}]_{{\text{i}} V} \ar@<.5ex>[r]^-{{\text{L}}} & {\text{G}}{\mathcal{T}} \ar@<.5ex>[l]^-{{\text{R}}} } \end{equation*} induce a Quillen equivalence between $\Pi[{\text{G}}{\mathcal{T}}]$ and ${\text{G}}{\mathcal{T}}$. \end{prop} \begin{proof} Consider $ G\Pi[{\text{G}}{\mathcal{T}}]$ with the level model structure. The functor ${\text{R}}': {\text{G}}{\mathcal{T}}\to G\Pi[{\text{G}}{\mathcal{T}}] $ takes weak equivalences and fibrations to level weak equivalences and fibrations. The adjoint pair ${\text{L}}'$ and ${\text{R}}'$ form a Quillen pair between $G\Pi[{\text{G}}{\mathcal{T}}]$ and ${\text{G}}{\mathcal{T}}$. The model structure on $\Pi[{\text{G}}{\mathcal{T}}]$ is induced by the model structure on $G\Pi[{\text{G}} {\mathcal{T}}]$, and we have the following diagram: \begin{equation*} \xymatrix{ \Pi[{\text{G}} {\mathcal{T}}] \ar@<.5ex>[dr]^-{{\text{E}}} \ar@<.5ex>[r]^-{{\text{L}}} & {\text{G}} {\mathcal{T}} \ar@<.5ex>[l]^-{{\text{R}}} \ar@<.5ex>[d]^{{\text{R}}'} \\ & G\Pi[G {\mathcal{T}}]. \ar@<.5ex>[ul]^{{\text{i}}} \ar@<.5ex>[u]^-{{\text{L}}'} } \end{equation*} Hence the adjoint pair ${\text{L}}$ and ${\text{R}}$ is a Quillen pair between $\Pi[{\text{G}}{\mathcal{T}}]$ and ${\text{G}}{\mathcal{T}}$. We need to show that the adjoint pair \begin{equation*} \xymatrix{\Pi{}[{\text{G}}{\mathcal{T}}]_{{\text{i}} V} \ar@<.5ex>[r]^-{{\text{L}}} & {\text{G}}{\mathcal{T}} \ar@<.5ex>[l]^-{{\text{R}}} } \end{equation*} induces a Quillen equivalence.
Let $Y$ be a based $G$-space. Let $X \xrightarrow{\sim}{\text{R}}' Y $ be a cofibrant replacement in $G\Pi[{\text{G}}{\mathcal{T}}]$, so that ${\text{i}} X \xrightarrow{\sim} {\text{i}} {\text{R}}' Y$ is a cofibrant replacement in $\Pi[{\text{G}}{\mathcal{T}}]$. In particular, ${\text{E}} {\text{i}} X \to {\text{E}} {\text{R}}' Y $ is a level $G$-weak equivalence and ${\text{E}}{\text{i}} X$ is cofibrant in $G\Pi[{\text{G}}{\mathcal{T}}]_V$. Then ${\text{L}} X = {\text{L}}' {\text{E}} {\text{i}} X= X(\b 1) \to {\text{L}}' {\text{E}} {\text{i}} {\text{R}}' Y ={\text{L}} {\text{R}} Y $ is a $G$-weak equivalence.
Let $X$ be a cofibrant-fibrant object in $\Pi[{\text{G}}{\mathcal{T}}]$. Then ${\text{L}} X$ is cofibrant and also fibrant since all objects of ${\text{G}}{\mathcal{T}}$ are fibrant. Further ${\text{R}} {\text{L}} X (\b n) = \map{}{\b n}{X(\b 1)}$. Then being fibrant in $\Pi[{\text{G}}{\mathcal{T}}]_{{\text{i}} V}$ implies $ {\text{E}} X \to {\text{E}} {\text{R}} {\text{L}} X $ is a level $G$-weak equivalence and hence a weak equivalence.
Thus ${\text{L}}$ and ${\text{R}}$ induce a Quillen equivalence.
\end{proof} We would like to understand what it means for $\hat{{\mathcal{D}}}X$ to be special.
\begin{rmk} Let ${\mathcal{D}}$ be a $G$-operad. Let $\hat{{\mathcal{D}}}$ be the category of equivariant operators induced by ${\mathcal{D}}$. Let $X$ be an object of $\Pi[{\text{G}}{\mathcal{T}}]$. For any $k$ and $\rho: G \to \Sigma_k$, we have
\begin{eqnarray*} (\hat{{\mathcal{D}}}X(\b k))_\rho & = & {\hat{{\mathcal{D}}}^{\b k}}_\rho \otimes_\Pi X \\
& := & \coprod \limits_m (\coprod\limits_{\phi \in \Gamma(\b m, \b k)_\rho} \prod\limits_{1 \leq j \leq k} {\mathcal{D}}(|\phi^{-1}(j)|)_\rho ) \times X(\b m) /\sim , \end{eqnarray*} where the relation is given as follows. Let $f \in \hat{{\mathcal{D}}}(\b m, \b k)$, $y \in X(\b n)$ and $\alpha \in \Pi(\b n, \b k) $. Then $(f, \alpha y) \sim (f \circ \alpha, y)$.
Let $H$ be a subgroup of $G$ and $\rho : G \to \Sigma_k$ be a group homomorphism.
The $G$ action on $\hat{{\mathcal{D}}} (X(\b k))$ is via the $G$-action on $\Gamma(\b m, \b k)$ for all $\b m$ and the diagonal action on ${\mathcal{D}}(|\phi^{-1}(j)|) \times X (\b m) $ for all $\phi \in \Gamma(\b m, \b k)$. Taking fixed points, when $ \phi \in \Gamma( \b m, \b k)_\rho^H $, the map $\rho$ acts by the identity on ${\mathcal{D}}(|\phi^{-1}(j)|)$ for all $1\leq j \leq k$.
In fact, \begin{eqnarray*}
(( \hat{{\mathcal{D}}}X(\b k))_\rho)^H & = & \coprod \limits_m \coprod \limits_{\phi \in \Gamma(\b m, \b k)_\rho^H} \prod \limits_{1 \leq j \leq k} ({\mathcal{D}}(|\phi^{-1}(j)|))^H \times X(\b m)^H /\sim, \end{eqnarray*} where the relation $\sim$ is as defined before. \end{rmk}
Let $X$ be an object of $\Pi[{\text{G}}{\mathcal{T}}]$. Let $H$ be a subgroup of $G$. Let $\rho: G \to \Sigma_n$ be a group homomorphism. By Lemma \ref{gpushoutlem} we have a filtration for $\hat{{\mathcal{D}}}X(\b n)$.
\noindent Then by above remark and since fixed points preserve pushouts, note that ${\text{F}}^H_0\hat{{\mathcal{D}}}X(\b n)_\rho = X(\b 0)^H = \hat{{\mathcal{D}}}X(\b 0)_\rho^H $ and ${\text{F}}_p^H \hat{{\mathcal{D}}}X(\b n) $ is the pushout of the following diagram; \begin{equation*}\label{Gpushout}
\xymatrix{ \coprod \limits_{\alpha \in {\mathcal{E}}(\b p, \b n)^H_\rho} \prod \limits_{1 \leq j \leq n} {\mathcal{D}}(|\alpha^{-1}(j)|)^H \times _{\Sigma(\alpha)} sX(\b{p-1})^H \ar[r]^-{v} \ar[d]_{i} & {\text{F}}_{p-1}^H \hat{{\mathcal{D}}} X(\b n)_\rho \ar[d] \\
\coprod \limits_{\alpha \in {\mathcal{E}}(\b p, \b n)^H_\rho} \prod \limits_{1\leq j \leq n} {\mathcal{D}}(|\alpha^{-1}(j)|)^H \times_{\Sigma(\alpha)} X(\b p)^H \ar[r] & {\text{F}}_{p}^H \hat{{\mathcal{D}}}X (\b n)_\rho } \end{equation*} Here $sX(\b{p-1})^H = \cup_i \sigma_i X(\b{p-1})^H$ where $\sigma_i$ are the ordered effective morphisms from $\b{p-1} \to \b{p}$. The morphism $v$ takes $(\alpha,c;\sigma_i x)$ to $(\alpha \sigma_i,c;x)$.
\begin{lemma} If $X$ is cofibrant in $\Pi[{\text{G}}{\mathcal{T}}]$ then the map $i$ is a h-cofibration. \end{lemma} \begin{proof} Consider the $\Pi^{{\small{\text{op}}}} \times \Pi$ space $\Pi'(\b m, \b n) = \Pi (\b m ,\b{n-1})$. Then any ordered effective morphism $\b{n-1} \to \b n$ induces a level-wise h-cofibration $\Pi' \to \Pi$ and hence $\Pi' \circ X \to \Pi \circ X$ is a cofibration in the Hurewicz-Strom model structure by Proposition \ref{action}. This implies in particular that $sX(\b{p-1})^H \to X(\b p)^H$ is a $\Sigma_p$-h-cofibration for all $H <G$, and $i$ is a h-cofibration. \end{proof}
\begin{lemma} \label{geq} Let ${\mathcal{D}}$ be a $\Sigma$-free $G$-operad, that is, ${\mathcal{D}}(n)$ is a free $\Sigma_n$-space for all $n$.
Let $X \to X'$ be a level $G$-weak equivalence of cofibrant objects in $\Pi[{\text{G}}{\mathcal{T}}]$. Then $ {\text{E}} \hat{{\mathcal{D}}}X \to {\text{E}} \hat{{\mathcal{D}}} X'$ is a weak equivalence in $G \Pi[{\text{G}}{\mathcal{T}}]$. \end{lemma} \begin{proof} If $X \to X'$ is a level $G$-weak equivalence then \small{ \[
\coprod \limits_{\alpha \in {\mathcal{E}}(\b p, \b n)^H_\rho} \prod \limits_{1\leq j \leq n} {\mathcal{D}}(|\alpha^{-1}(j)|)^H \times_{\Sigma(\alpha)} sX(\b{p-1})^H \to \coprod \limits_{\alpha \in {\mathcal{E}}(\b p, \b n)^H_\rho} \prod \limits_{1\leq j \leq n} {\mathcal{D}}(|\alpha^{-1}(j)|)^H \times_{\Sigma(\alpha)} sX'(\b{p-1})^H \] }
and \small{ \begin{eqnarray*}
\coprod \limits_{\alpha \in {\mathcal{E}}(\b p, \b n)^H_\rho} \prod\limits_{1 \leq j \leq n} {\mathcal{D}}(|\alpha^{-1}(j)|)^H \times_{\Sigma(\alpha)} X(\b p)^H \to \coprod\limits_{\alpha \in {\mathcal{E}}(\b p, \b n)^H_\rho} \prod \limits_{1 \leq j \leq n} {\mathcal{D}}(|\alpha^{-1}(j)|)^H \times_{\Sigma(\alpha)} X'(\b p)^H && \end{eqnarray*} } \normalsize{ are weak equivalences for all $H<G$.}
As in the non-equivariant case, we can show that if $X$ is cofibrant then the map $sX(\b{p-1})^H \to X(\b p)^H $ is a $\Sigma_p$-cofibration for all $H<G$. Since $\Sigma_k$ acts freely on ${\mathcal{D}}(k)$, we have that $i$ is a Hurewicz-cofibration. Therefore the pushout diagram preserves weak equivalences.
By induction, ${\text{F}}_p^H \hat{{\mathcal{D}}}X(\b n)_\rho \to {\text{F}}_p^H \hat{{\mathcal{D}}}X'(\b n)_\rho$ is a weak equivalence for all subgroups $H$ of $G$. This induces a $G$-weak equivalence $\hat{{\mathcal{D}}}X(\b n)_\rho \to \hat{{\mathcal{D}}}X'(\b n)_\rho$, that is, a $G$-weak equivalence ${\text{E}} \hat{{\mathcal{D}}} X \to {\text{E}}\hat{{\mathcal{D}}} X'$. \end{proof}
\begin{prop} Let ${\mathcal{D}}$ be a $\Sigma$-free $G$-operad and $X$ be a cofibrant-fibrant object in $\Pi[{\text{G}}{\mathcal{T}}]$ in the localized model category. Then ${\text{U}_{\mathcal{D}}}{\text{F}_{\mathcal{D}}} X $ is fibrant. \end{prop} \begin{proof} A $\Pi$-space $X$ is fibrant if for every $ n \in \mathbb{N}$ and homomorphism $\rho : G \to \Sigma_n$, the map $ X(\b n)_\rho \to X(\b 1)^n_\rho $ is a $G$-weak equivalence. Let $X'$ be the $\Pi$-space defined as $ X'(\b n) := X(\b 1)^n $. By Lemma \ref{geq} we have that $ {\text{E}} \hat{{\mathcal{D}}} X \to {\text{E}} \hat{{\mathcal{D}}} X'$ is a level $G$-weak equivalence of $\Pi$-spaces. By construction this implies $\hat{{\mathcal{D}}}X \to \hat{{\mathcal{D}}}X'$ is a level $G$-weak equivalence of $\hat{{\mathcal{D}}}$-spaces. Hence ${\text{U}_{\mathcal{D}}}{\text{F}_{\mathcal{D}}} X$ is fibrant.
\end{proof}
Let $X$ be an object of $\Pi[G{\mathcal{T}}]$. Then we can define a simplicial object in $\Pi[G{\mathcal{T}}]$ using the monad structure on $\Pi[G {\mathcal{T}}]$ due to ${\mathcal{G}}$. Given an object $Y$ of $\Pi[G{\mathcal{T}}]$, let ${\text{B}}_{\ast}({\mathcal{G}}, {\mathcal{G}}, Y)$ denote the simplicial object in $\Pi[G{\mathcal{T}}]$ with $n$th simplices ${\text{U}_{\mathcal{G}}}{\text{F}_{\mathcal{G}}} \cdots {\text{U}_{\mathcal{G}}}{\text{F}_{\mathcal{G}}} Y$ where ${\text{U}_{\mathcal{G}}} {\text{F}_{\mathcal{G}}}$ is applied $n$ times. The simplicial structure follows from the monad structure of ${\text{U}_{\mathcal{G}}}{\text{F}_{\mathcal{G}}}$. Denote the geometric realization of this simplicial object in $\Pi[G{\mathcal{T}}]$ by ${\text{B}}({\mathcal{G}},{\mathcal{G}}, Y)$. Given a morphism of categories of equivariant operators $v: {\mathcal{G}} \to {\mathcal{H}}$ one can similarly define ${\text{B}}({\mathcal{H}},{\mathcal{G}}, Y)$.
\begin{lemma}\label{eqveq} Let ${\mathcal{G}}$ and ${\mathcal{H}}$ be categories of equivariant operators and ${\mathcal{G}} \xrightarrow{v} {\mathcal{H}}$ be a morphism of categories of equivariant operators.
If $\Pi \to {\mathcal{G}}$ is a cofibration (Proposition \ref{piopspaces}) and ${\text{E}} {\text{U}_{\mathcal{G}}} {\mathcal{G}}^{\b m} \to {\text{E}} {\text{U}_{\mathcal{H}}} {\mathcal{H}}^{\b m} $ is a weak equivalence in $G\Pi[{\text{G}}{\mathcal{T}}]$ then the map of ${\mathcal{H}}$-spaces $ {\text{B}}({\mathcal{G}},{\mathcal{G}},Y) \to {\text{B}}({\mathcal{H}},{\mathcal{G}},Y)$ is a weak equivalence. \end{lemma} \begin{proof} The assumption that ${\text{E}} {\text{U}_{\mathcal{G}}} {\mathcal{G}}^{\b m} \to {\text{E}} {\text{U}_{\mathcal{H}}} {\mathcal{H}}^{\b m}$ is a weak equivalence implies that $ {\text{E}} {\text{U}_{\mathcal{G}}} {\text{B}}_\ast({\mathcal{G}},{\mathcal{G}},Y) \to {\text{E}} {\text{U}_{\mathcal{H}}} {\text{B}}_\ast({\mathcal{H}},{\mathcal{G}},Y)$ is a weak equivalence of $G\Pi$-spaces. By Proposition \ref{greedy} both these simplicial ${\mathcal{H}}$-spaces are Reedy cofibrant in the Hurewicz-Strom model structure. By Proposition \ref{ghurewicz} we have the lemma. \end{proof}
\begin{thm} \label{eqcategopeq} Let ${\mathcal{G}}$ and ${\mathcal{H}}$ be categories of equivariant operators. Let ${\mathcal{G}} \xrightarrow{v} {\mathcal{H}}$ be a morphism of category of equivariant operators. Then the following adjoint pair is a Quillen pair with the model categories inherited from the underlying category of $\Pi$-spaces. \begin{equation*} \xymatrix{ {\mathcal{G}}[{\mathcal{T}_\text{\tiny G}}] \ar@<.5ex>[r]^{v_{\ast}} & {\mathcal{H}} [{\mathcal{T}_\text{\tiny G}}] \ar@<.5ex>[l]^{v^{\ast}}. } \end{equation*} Further if $\Pi \to {\mathcal{G}}$ is a cofibration (Proposition \ref{piopspaces}) and ${\text{E}} {\text{U}_{\mathcal{G}}} {\mathcal{G}}^{\b m} \to {\text{E}} {\text{U}_{\mathcal{H}}} {\mathcal{H}}^{\b m} $ is a weak equivalence in $G \Pi[G{\mathcal{T}}]$ then the above adjoint pair form a Quillen equivalence. \end{thm} \begin{proof}
Consider ${\mathcal{H}}[{\mathcal{T}_\text{\tiny G}}]$ and ${\mathcal{G}}[{\mathcal{T}_\text{\tiny G}}]$ with the underlying level model structure on $\Pi[G{\mathcal{T}}]$. Then $v^{\ast}$ takes fibrations and weak equivalences of ${\mathcal{H}}$-spaces to fibrations and weak equivalences, respectively, of ${\mathcal{G}}$-spaces, since both are detected on the underlying $\Pi$-spaces. Thus $v_\ast$ and $v^\ast$ form a Quillen pair.
Note that a map of ${\mathcal{G}}$-spaces or ${\mathcal{H}}$-spaces is a weak equivalence if it is a weak equivalence of the underlying $\Pi$-spaces. In the level model structure on ${\mathcal{G}}[{\mathcal{T}_\text{\tiny G}}]$ and ${\mathcal{H}}[{\mathcal{T}_\text{\tiny G}}]$ all objects are fibrant.
Let $Y$ be a cofibrant object of ${\mathcal{G}}[{\mathcal{T}_\text{\tiny G}}]$. By Lemma \ref{eqveq} the maps \begin{eqnarray*} {\text{E}} {\text{U}_{\mathcal{G}}} {\text{B}}({\mathcal{G}},{\mathcal{G}},Y) & \xrightarrow{\sim} & {\text{E}} {\text{U}_{\mathcal{G}}} Y \\ {\text{E}} {\text{U}_{\mathcal{H}}} {\text{B}}({\mathcal{G}},{\mathcal{G}},Y) & \xrightarrow{\sim} & {\text{E}} {\text{U}_{\mathcal{H}}} {\text{B}}({\mathcal{H}},{\mathcal{G}},Y) \\ {\text{E}}{\text{U}_{\mathcal{H}}} {\text{B}}({\mathcal{H}},{\mathcal{G}},Y) &\xrightarrow{\sim}& {\text{E}} {\text{U}_{\mathcal{H}}} ({\mathcal{H}} \otimes_{\mathcal{G}} Y) = {\text{E}} {\text{U}_{\mathcal{H}}} v_\ast Y \\ {\text{U}_{\mathcal{G}}} v^\ast v_\ast Y & \cong &{\text{U}_{\mathcal{H}}} v_\ast Y \\ \end{eqnarray*} are $G$-weak equivalences. By the two-out-of-three property of weak equivalences, it follows that $v^\ast v_\ast Y \to Y$ is a weak equivalence of ${\mathcal{G}}$-spaces.
Let $X$ be a fibrant-cofibrant ${\mathcal{H}}$-space. Then $v^\ast X$ is a fibrant ${\mathcal{G}}$-space. Let $Y \xrightarrow{\sim} v^\ast X$ be a cofibrant replacement in ${\mathcal{G}}$-spaces. Then ${\text{E}} {\text{U}_{\mathcal{G}}} Y \xrightarrow{\sim} {\text{E}} {\text{U}_{\mathcal{H}}} X$. But we know that $$ {\text{E}} {\text{U}_{\mathcal{H}}} v_\ast Y \xrightarrow{\sim} {\text{E}} {\text{U}_{\mathcal{G}}} Y \xrightarrow{\sim} {\text{E}} {\text{U}_{\mathcal{H}}} X ,$$ so $v_\ast Y \to X$ is a weak equivalence of ${\mathcal{H}}$-spaces.
Therefore, $v_\ast$ and $v^\ast$ induce a Quillen equivalence. \end{proof}
\begin{thm}\label{equiop} Let ${\mathcal{G}}$ and ${\mathcal{H}}$ be categories of equivariant operators and ${\mathcal{G}} \xrightarrow{v} {\mathcal{H}}$ be a morphism of categories of equivariant operators. Then the adjoint pair \begin{equation*} \xymatrix{ {\mathcal{G}}[{\mathcal{T}_\text{\tiny G}}] \ar@<.5ex>[r]^{v_{\ast}} & {\mathcal{H}} [{\mathcal{T}_\text{\tiny G}}] \ar@<.5ex>[l]^{v^{\ast}} } \end{equation*} forms a Quillen equivalence with the localized model category structures on ${\mathcal{G}}[{\mathcal{T}_\text{\tiny G}}]$ and ${\mathcal{H}}[{\mathcal{T}_\text{\tiny G}}]$. \end{thm} \begin{proof} The proof follows by noting that $v_{\ast} {\text{F}}_{\mathcal{G}} = {\text{F}}_{\mathcal{H}}$ and applying \cite[Thm~3.3.20]{MR1944041} to Theorem \ref{eqcategopeq}. \end{proof}
\begin{thm} \label{equivariantoperad} Let ${\mathcal{D}}$ be a $\Sigma$-free $G$-operad such that $1 \hookrightarrow {\mathcal{D}}(1)$ is an h-cofibration. Let $\hat{{\mathcal{D}}}$ denote the induced category of equivariant operators. Then the adjoint pair of functors \begin{equation} \label{eqadjoint} \xymatrix{ \hat{{\mathcal{D}}}[{\mathcal{T}_\text{\tiny G}}] \ar@<.5ex>[r]^{{\text{L}_{\mathcal{D}}}} & {\mathcal{D}}[{\mathcal{T}_\text{\tiny G}}] \ar@<.5ex>[l]^-{{\text{R}_{\mathcal{D}}}} } \end{equation} is a Quillen equivalence. \end{thm} \begin{proof} Let $Y \to Y'$ be a fibration (resp. acyclic fibration) of ${\mathcal{D}}$-spaces. Then ${\text{U}} Y \to {\text{U}} Y'$ is a fibration (resp. acyclic fibration) in the underlying category of $G$-spaces. By Proposition \ref{pig}, ${\text{R}}{\text{U}} Y \to {\text{R}} {\text{U}} Y'$ is a fibration (resp. acyclic fibration). This implies that ${\text{U}_{\mathcal{D}}} {\text{R}_{\mathcal{D}}} Y \to {\text{U}_{\mathcal{D}}} {\text{R}_{\mathcal{D}}} Y'$ is a fibration (resp. acyclic fibration) in $\Pi[{\mathcal{T}_\text{\tiny G}}]$. Therefore the functors ${\text{R}_{\mathcal{D}}}$ and ${\text{L}_{\mathcal{D}}}$ form a Quillen pair.
Note that ${\text{R}_{\mathcal{D}}}$ creates all weak equivalences in $\hat{{\mathcal{D}}}$-spaces.
Given a cofibrant-fibrant $\hat{{\mathcal{D}}}$-space $X$, the map of $G\Pi$-spaces $$ {\text{E}} {\text{U}_{\mathcal{D}}} {\text{B}}(\hat{{\mathcal{D}}},\hat{{\mathcal{D}}}, X)= {\text{B}}(\hat{{\mathcal{D}}}, \hat{{\mathcal{D}}}, X) \to {\text{E}}{\text{U}_{\mathcal{D}}} X$$ is a level $G$-weak equivalence.
Since ${\text{L}_{\mathcal{D}}}$ preserves weak equivalences of cofibrant objects in $\hat{{\mathcal{D}}}[{\mathcal{T}_\text{\tiny G}}]$, we obtain a weak equivalence $${\text{U}} {\text{L}_{\mathcal{D}}} {\text{B}}({\text{F}_{\mathcal{D}}}{\text{U}_{\mathcal{D}}},{\text{F}_{\mathcal{D}}} {\text{U}_{\mathcal{D}}}, X) \xrightarrow{\sim} {\text{U}} {\text{L}_{\mathcal{D}}} X .$$
Now, ${\text{R}_{\mathcal{D}}}$ preserves weak equivalences of ${\mathcal{D}}$-spaces. Hence $${\text{E}} {\text{R}_{\mathcal{D}}} {\text{L}_{\mathcal{D}}} {\text{B}}({\text{F}_{\mathcal{D}}}{\text{U}_{\mathcal{D}}},{\text{F}_{\mathcal{D}}} {\text{U}_{\mathcal{D}}}, X) \to {\text{E}} {\text{R}_{\mathcal{D}}} {\text{L}_{\mathcal{D}}} X $$ is a weak equivalence of $G\Pi$-spaces.
Therefore, $$ {\text{E}} {\text{B}}({\text{R}} {\text{U}} {\text{F}} {\text{L}}, {\text{U}_{\mathcal{D}}} {\text{F}_{\mathcal{D}}}, {\text{U}_{\mathcal{D}}} X) \xrightarrow{\sim} {\text{E}} {\text{R}_{\mathcal{D}}} {\text{L}_{\mathcal{D}}} X. $$
We have the following commutative diagram \begin{equation*} \xymatrix{ {\text{E}} {\text{U}_{\mathcal{D}}} X \ar[r] & {\text{E}} {\text{U}_{\mathcal{D}}} {\text{R}_{\mathcal{D}}} {\text{L}_{\mathcal{D}}} X \\
{\text{E}} {\text{B}}({\text{U}_{\mathcal{D}}} {\text{F}_{\mathcal{D}}}, {\text{U}_{\mathcal{D}}} {\text{F}_{\mathcal{D}}}, {\text{U}_{\mathcal{D}}} X) \ar[u]^{\sim} \ar[r]^{\beta} & {\text{E}} {\text{B}}( {\text{R}}{\text{U}}{\text{F}}{\text{L}}, {\text{U}_{\mathcal{D}}}{\text{F}_{\mathcal{D}}}, {\text{U}_{\mathcal{D}}} X). \ar[u]^{\sim} } \end{equation*} where the map $\beta$ is induced by the map of triples $$ {\text{U}_{\mathcal{D}}} {\text{F}_{\mathcal{D}}} \to {\text{R}} {\text{L}} {\text{U}_{\mathcal{D}}} {\text{F}_{\mathcal{D}}} {\text{R}} {\text{L}} \to {\text{R}}{\text{U}} {\text{F}} {\text{L}}.$$
By Lemma \ref{identity} and Lemma \ref{pig} the map $\beta$ is a weak equivalence of simplicial objects in $\Pi$-spaces for cofibrant-fibrant $X$. Given a cofibrant $\hat{{\mathcal{D}}}$-space $X$, the bar construction is Reedy cofibrant in the Hurewicz-Strom model category structure by Proposition \ref{greedy}. Moreover, geometric realization preserves weak equivalences by Proposition \ref{ghurewicz}. Therefore, $ {\text{U}_{\mathcal{D}}} X \to {\text{U}_{\mathcal{D}}} {\text{R}_{\mathcal{D}}} {\text{L}_{\mathcal{D}}} X $ is a weak equivalence of $\Pi$-spaces by the two-out-of-three axiom. Lemma \ref{quillen} implies that ${\text{R}_{\mathcal{D}}}$ and ${\text{L}_{\mathcal{D}}}$ induce a Quillen equivalence.
\end{proof} \begin{rmk} \label{symmonidal} Lydakis \cite{MR1670245} defined a symmetric monoidal structure on the category of $\Gamma$-spaces. One can use this idea to define a symmetric monoidal structure on the category $\Pi[{\mathcal{T}}]$. The functor $R : {\mathcal{T}} \to \Pi[{\mathcal{T}}]$ respects the symmetric monoidal structure on ${\mathcal{T}}$ via cartesian products. The category of operators $\hat{{\mathcal{C}}}$ defined by an operad ${\mathcal{C}}$ on ${\mathcal{T}}$ defines a monad on the category $\Pi[{\mathcal{T}}]$. By \cite[Thm~2.1]{BF01220868} it is easy to check that this monad is symmetric monoidal, and hence the symmetric monoidal structure lifts to $\hat{{\mathcal{C}}}[{\mathcal{T}}]$ and similarly to ${\mathcal{C}}[{\mathcal{T}}]$. Then in the non-equivariant case, the analogues of Theorem \ref{equivariantoperad} and Theorem \ref{equiop} respect the symmetric monoidal structures. Thus the equivalence between ${\text{E}}_\infty$-spaces and $\Gamma$-spaces is symmetric monoidal. We expect this to generalize to the equivariant case. We will discuss the monoidal structure on equivariant $\Gamma$-spaces elsewhere.
\end{rmk}
\section{Comparison Theorem}
Let ${\mathcal{N}}$ denote the $G$-operad defined by ${\mathcal{N}}(m) = \ast$ with the trivial $G$-action. Let ${\mathcal{E}}$ be an ${\text{E}}_\infty$-$G$-operad such that $1 \to {\mathcal{E}}(1)$ is a Hurewicz $G$-cofibration. Then by definition, for every subgroup $\Lambda < G \times \Sigma_m$ such that $\Lambda$ does not contain any non-trivial subgroup of $\Sigma_m$, the map ${\mathcal{E}}(m)^\Lambda \to {\mathcal{N}}(m)^\Lambda$ is a weak equivalence.
Consider the category of equivariant operators induced by the operads ${\mathcal{E}}$ and ${\mathcal{N}}$. For any subgroup $H$ of $G$ and $\rho: H \to \Sigma_m$, $$\hat{{\mathcal{E}}}^{\b n }(\b m)_{\rho}= \coprod_{\phi \in \Gamma(\b m, \b n)_\rho} \prod_{1 \leq j \leq n} {\mathcal{E}}(|\phi^{-1}( j)|)_\rho .$$
Since ${\mathcal{E}} $ is an ${\text{E}}_\infty$-$G$-operad, ${\text{E}} \hat{{\mathcal{E}}}^{\b n}$ is $G$-weakly equivalent to ${\text{E}} \hat{{\mathcal{N}}}^{\b n}$.
\begin{thm} \label{modelcateqgamma} The category $\Gamma[{\text{G}} {\mathcal{T}}]$ forms a model category with the model structure induced by the level model structure on $G\Gamma[{\text{G}}{\mathcal{T}}]$. The localized model category of $G\Gamma[{\text{G}}{\mathcal{T}}]$ with respect to the set $ \{\coprod_{s\in S} \Gamma_{G,1} \to \Gamma_{G,S} \mid S \text{ is a } G\text{-set} \}$ exists and induces a model category structure on $\Gamma[{\text{G}}{\mathcal{T}}]$ in which the fibrant objects are special $\Gamma$-spaces. This is Quillen equivalent to the localized model category structure on $\hat{{\mathcal{N}}}[{\mathcal{T}_\text{\tiny G}}]$. \end{thm}
\begin{thm} \label{comparison} Let ${\mathcal{E}}$ be an equivariant ${\text{E}}_\infty$-operad such that $1 \to {\mathcal{E}}(1)$ is an h-cofibration. The category of ${\mathcal{E}}$-spaces with the model category structure induced from $G$-spaces is Quillen equivalent to the category $\Gamma[{\text{G}} {\mathcal{T}}]$ with the induced localized model structure. \end{thm} \begin{proof}
By Theorem \ref{equivariantoperad} we get that
$\hat{{\mathcal{E}}}[{\mathcal{T}_\text{\tiny G}}]$ with the localized model category structure is Quillen equivalent to ${\mathcal{E}}[{\mathcal{T}_\text{\tiny G}}]$ with the underlying model category structure of $G$-spaces.
Since ${\mathcal{E}}$ is an ${\text{E}}_\infty$-operad, for any subgroup $H$ of $G$ and group homomorphism $\rho: H \to \Sigma_n$, the space $({\mathcal{E}}(n)_\rho)^H$ is contractible. Note that $\hat{{\mathcal{N}}} = \Gamma$. This implies that $$ (\hat{{\mathcal{E}}}^{\b m}(\b n)_\rho )^H \to (\Gamma^{\b m}(\b n)_\rho)^H =(\hat{{\mathcal{N}}}^{\b m}(\b n)_\rho)^H$$ is a weak equivalence. Thus ${\text{E}} {\text{U}}_{\mathcal{E}} \hat{{\mathcal{E}}}^{\b m} \to {\text{E}} {\text{U}}_{\mathcal{N}} \hat{{\mathcal{N}}}^{\b m}$ is a weak equivalence of $G\Pi$-spaces. Theorem \ref{eqcategopeq} implies that $\hat{{\mathcal{E}}}[{\mathcal{T}_\text{\tiny G}}]$ is Quillen equivalent to $\hat{{\mathcal{N}}}[{\mathcal{T}_\text{\tiny G}}]$ with the localized model structures. This completes the proof. \end{proof}
\begin{prop} \label{main} Let ${\mathcal{E}}$ be an ${\text{E}}_\infty$-$G$-operad satisfying the hypothesis of Theorem \ref{eqcategopeq}. Let $X$ be an equivariant ${\mathcal{E}}$-space. Then $X$ is a special equivariant $\Gamma$-space up to a cofibrant replacement in $\Gamma[G{\mathcal{T}}]$. \end{prop} Proposition \ref{main} follows from the following lemma and Theorem \ref{comparison}. \begin{lemma}\label{replacement} Let ${\mathcal{D}}$ be a $\Sigma$-free $G$-operad and $\hat{{\mathcal{D}}} \xrightarrow{v}{\mathcal{H}}$ be a morphism of categories of equivariant operators satisfying the hypothesis of Theorem \ref{eqcategopeq}. Let $X$ be a fibrant-cofibrant $\hat{{\mathcal{D}}}$-space. Then the map $$ {\text{U}_{\mathcal{D}}} {\text{B}}(\hat{{\mathcal{D}}}, \hat{{\mathcal{D}}},X) \xrightarrow{v} {\text{U}_{\mathcal{H}}} {\text{B}}({\mathcal{H}},\hat{{\mathcal{D}}},X)$$ is a weak equivalence in $\Pi[{\text{G}}{\mathcal{T}}]$ and $ {\text{B}}({\mathcal{H}},\hat{{\mathcal{D}}},X)$ is a fibrant ${\mathcal{H}}$-space in the localized model category. \end{lemma} \begin{proof}
By hypothesis ${\text{E}}{\text{U}_{\mathcal{D}}} {\text{B}}_\ast(\hat{{\mathcal{D}}}, \hat{{\mathcal{D}}},X) \xrightarrow{v} {\text{E}}{\text{U}_{\mathcal{H}}} {\text{B}}_\ast({\mathcal{H}},\hat{{\mathcal{D}}},X)$ is a weak equivalence of simplicial $G\Pi$-spaces. By Proposition \ref{greedy} these spaces are Reedy cofibrant, so geometric realization yields a level $G$-weak equivalence of $\Pi$-spaces. This implies that ${\text{U}_{\mathcal{H}}} {\text{B}}({\mathcal{H}},\hat{{\mathcal{D}}}, X)$ is a fibrant $\Pi$-space and hence a fibrant ${\mathcal{H}}$-space in the localized model category. \end{proof}
\begin{proof}[Proof of Proposition \ref{main}]
We have shown that ${\mathcal{E}}[{\mathcal{T}_\text{\tiny G}}]$ is Quillen equivalent to $\Gamma[G{\mathcal{T}}]$ via a Quillen equivalence with the category $\hat{{\mathcal{E}}}[{\mathcal{T}_\text{\tiny G}}]$. Let $X$ be an ${\text{E}}_\infty$-space. Then ${\text{R}}_{{\mathcal{E}}} X$ is a fibrant $\hat{{\mathcal{E}}}$-space. If we take its cofibrant replacement $Y \to {\text{R}}_{\mathcal{E}} X$ in the level model category structure on $\hat{{\mathcal{E}}} [G{\mathcal{T}}]$ then $Y$ is fibrant-cofibrant in the localized model category on $\hat{{\mathcal{E}}}$-spaces. Now let $v :\hat{{\mathcal{E}}} \to \Gamma$ denote the morphism of categories of operators induced by the contractibility of the spaces ${\mathcal{E}}(n)$. By Lemma \ref{replacement} the space $v_\ast {\text{B}}(\hat{{\mathcal{E}}},\hat{{\mathcal{E}}},Y)$ is a fibrant $\Gamma$-$G$-space. Thus, up to a cofibrant replacement, an equivariant ${\text{E}}_\infty$-space is equivalent to a special equivariant $\Gamma$-space. \end{proof}
\section{${\Gamma_\text{\tiny G}}$-spaces and Equivariant spectra}
Shimakawa \cite{MR1003787} generalized Segal's work to the equivariant case to show that special ${\Gamma_\text{\tiny G}}$-spaces model positive connective $\Omega$-$G$-spectra. We extend Shimakawa's work to show that very-special ${\Gamma_\text{\tiny G}}$-spaces model connective $G$-spectra.
Given a $G$-functor $X: {\Gamma_\text{\tiny G}} \to {\mathcal{T}_\text{\tiny G}}$, the left Kan extension of $X$ to the category ${\cW_\text{\tiny G}}$ of based $G$-CW complexes exists. Denote the Kan extension by $X$ again and the homotopy Kan extension of $X$ to ${\cW_\text{\tiny G}}$ by $\tilde{X}$. \begin{rmk} Note that both of these constructions are functorial. This defines a functor from the category of equivariant $\Gamma$-spaces to the category of equivariant spectra. We elaborate on this further in Appendix \ref{discussion}.
\end{rmk}
Let $A$ be an object of ${\cW_\text{\tiny G}}$. Define a functor $Y_A : \Gamma^{{\small{\text{op}}}}_\text{\tiny G}\to {\mathcal{T}_\text{\tiny G}}$ as $Y_A(S) ={\mathcal{T}_\text{\tiny G}}(S,A)$.
Then the homotopy Kan extension is given by $\tilde{X}(A)= {\text{B}}(Y_A,{\Gamma_\text{\tiny G}}, X)$.
Here ${\text{B}}(Y_A, {\Gamma_\text{\tiny G}}, X)$ denotes the geometric realization of the simplicial space
\noindent ${\text{B}}_\bullet(Y_A, {\Gamma_\text{\tiny G}}, X)$ whose $n$-simplices are $$\coprod_{T_i} Y_A(T_n) \times {\Gamma_\text{\tiny G}}(T_{n-1}, T_n) \times \cdots \times {\Gamma_\text{\tiny G}}(T_0,T_1) \times X(T_0),$$ whose face maps are given by composition, and whose degeneracy maps are defined via the natural inclusion of the identity map in ${\Gamma_\text{\tiny G}}(T_i,T_i)$.
The left Kan extension of $X$ at $A$ is the coequalizer \begin{equation*} \xymatrix{\coprod\limits_{T_0, T_1} Y_A(T_1) \times {\Gamma_\text{\tiny G}}(T_0, T_1) \times X(T_0) \ar@<.5ex>[r]\ar@<-.5ex>[r] & \coprod\limits_{T_0} Y_A(T_0) \times X(T_0) \ar[r] & X(A) .} \end{equation*}
There is a natural map $\tilde{X}(A) \to X(A)$.
\begin{lemma}\label{homtokan} Let $X$ be an object in ${\Gamma_\text{\tiny G}}[{\mathcal{T}_\text{\tiny G}}]$ such that $ {\text{i}} X$ is a cofibrant object in $G\Gamma[{\text{G}}{\mathcal{T}}]$ with the localized model category structure on $G\Gamma$-spaces. Then the map $\tilde{X} \to X$ is a level $G$-weak equivalence. \end{lemma} \begin{proof} Let $ X$ be a representable object of ${\Gamma_\text{\tiny G}}[{\mathcal{T}_\text{\tiny G}}]$ denoted by $\Gamma_{G,S}$ where $S$ is an object of ${\Gamma_\text{\tiny G}}$. Then $\tilde{\Gamma}_{G,S} \to \Gamma_{G,S} $ is a level $G$-weak equivalence in $G\Gamma[{\text{G}}{\mathcal{T}}]$, since all the $n$-simplices in ${\text{B}}_\bullet(Y_A,{\Gamma_\text{\tiny G}},\Gamma_{G,S})$ are degenerate for $n > 2$.
The set of maps $$I =\{\Gamma_{G,S} \times (G /H \times {\text{S}}^{n-1} )_+\to \Gamma_{G,S} \times (G/H \times {\text{D}}^n)_+ \mid n \in \mathbb{N}, S \in \text{Obj}\,{\Gamma_\text{\tiny G}} , H<G \} $$ is the set of generating cofibrations of $G\Gamma[{\text{G}}{\mathcal{T}}]$. A cofibrant object in $G\Gamma[{\text{G}}{\mathcal{T}}]$ can be written as a transfinite composition of maps which are pushouts of maps in $I$.
The bar construction commutes with colimits. Furthermore, colimits computed along cofibrations preserve weak equivalences. Therefore, if $X$ is cofibrant then $\tilde{X} \to X$ is a level $G$-weak equivalence in $G\Gamma[{\text{G}}{\mathcal{T}}]$.
\end{proof}
\begin{thm}\cite[\text{Lem 1.4}] {MR1003787} \label{segalbit} Let $X$ be a special ${\Gamma_\text{\tiny G}}$-space. Then for $G$-CW complexes $A$ and $B$ and an object $S$ of ${\Gamma_\text{\tiny G}}$ \begin{itemize} \item[(a)] the map $ \tilde{X} (S \wedge A) \xrightarrow{\tilde{\rho}} \map{}{S}{\tilde{X}(A)} $ adjoint to the evaluation map $ S \wedge \tilde{X}(S\wedge A) \xrightarrow{\rho} X(A)$ is a $G$-weak equivalence; \item[(b)] if $\tilde{X}(A)$ is $G$-grouplike and $A \to B$ is a $G$-cofibration, then $\tilde{X} (A) \to \tilde{X}(B) \to \tilde{X}(B/A)$ is a $G$-fibration sequence. \end{itemize}
\end{thm}
\begin{thm} \label{shimakawabit} Let $X$ be a very-special ${\Gamma_\text{\tiny G}}$-space such that ${\text{i}} X$ is a cofibrant object in $G\Gamma[{\text{G}}{\mathcal{T}}]$ with the localized model structure. Let $V$ and $ W$ be $G$-representations such that $V^G \neq \{ 0 \}$. Then the maps $X (S^V) \xrightarrow{\sim} \Omega^{W} X(S^{V\oplus W})$ and $ X(S^0) \xrightarrow{\sim} \Omega X(S^1)$ are level $G$-weak equivalences. \end{thm} \begin{proof} Shimakawa \cite[\text{Thm B}]{MR1003787} shows that for any $G$-representations $V$ and $W$ such that $V^G \neq \{0 \}$, the maps $\tilde{X} (S^V) \xrightarrow{\sim} \Omega^{W} \tilde{X}(S^{V\oplus W})$ and $ \tilde{X}(S^0) \xrightarrow{\sim} \Omega \tilde{X}(S^1)$ are level $G$-weak equivalences.
By Lemma \ref{homtokan} we have that $\tilde{X}(S^V) \to X(S^V)$ is a level $G$-weak equivalence for all representations $V$ of $G$. Since $S^V$ is cofibrant in ${\text{G}}{\mathcal{T}}$, the map $\Omega^W \tilde{X}(S^{V\oplus W}) \to \Omega^W X(S^{V\oplus W})$ is a $G$-weak equivalence.
Hence, for any $G$-representations $V,W$ such that $V^G \neq \{0\}$, the maps $$X (S^V) \xrightarrow{\sim} \Omega^{W} X(S^{V\oplus W})$$ and $$ X(S^0) \xrightarrow{\sim} \Omega X(S^1)$$ are level $G$-weak equivalences.
\end{proof}
\begin{lemma} \label{verysp} Let $X$ be a very special ${\Gamma_\text{\tiny G}}$-space which is cofibrant as an object of $G\Gamma[{\text{G}}{\mathcal{T}}]$. Let $A$ be a based $G$-CW complex. Then $X(A)$ is $G$-grouplike. \end{lemma} \begin{proof} Since $X(\b 1) $ is $G$-grouplike, $X(\b 1)$ has a homotopy inverse under the monoid structure. Let $S$ be a finite pointed $G$-set. Then $S$ is equivalent to $\vee_i (G /H_i)_+$ as a $G$-set, for some subgroups $H_i$ of $G$.
Since $X$ is special, \begin{eqnarray*} X(S) = X(\vee_i((G/H_i)_+)) = \prod_i X((G/H_i)_+) & \xrightarrow{\sim} & \prod_i \map{}{(G/H_i)_+}{X(\b 1)} \\ & = & \prod_i X(\b 1)^{H_i}. \\ \end{eqnarray*}
Since the above equivalence commutes with the monoidal structure, $X(S)$ is $G$-grouplike.
Let $\Delta : \b 2 \to \b 1$ denote the map of finite sets which maps both 1 and 2 to 1. Then $X(S \wedge \b 1)$ being $G$-grouplike is equivalent to the map $$X(S \wedge \b 2)^H \xrightarrow{X(\Delta) \times X(p^2_1)} X(S \wedge \b 1)^H \times X(S \wedge \b 1)^H$$ being a weak equivalence. Taking the homotopy Kan extension preserves this equivalence, since geometric realization preserves finite products. Therefore for any $G$-CW complex $A$, $$ \tilde{X}(A \wedge \b 2)^H \xrightarrow{ \tilde{X}(\Delta) \times \tilde{X}(p^2_1)} \tilde{X}(A \wedge \b 1 )^H \times \tilde{X}(A\wedge \b 1)^H$$ is a weak equivalence.
But by Lemma \ref{homtokan} the homotopy Kan extension is weakly equivalent to the Kan extension. Thus $X(A)$ is grouplike for all $G$-CW complexes $A$ whenever $X$ is very-special.
\end{proof} \begin{thm} \label{fixonvsp} Let $X$ be a very-special ${\Gamma_\text{\tiny G}}$-space such that ${\text{i}} X$ is a cofibrant object in $G\Gamma[{\text{G}}{\mathcal{T}}]$ with the localized model structure. Then $\{X(S^V)\}$ is an equivariant $\Omega$-spectrum. \end{thm} \begin{proof} By Lemma \ref{verysp} and Theorem \ref{segalbit}(b), given a very-special ${\Gamma_\text{\tiny G}}$-space $X$ which is cofibrant in $G\Gamma[{\text{G}}{\mathcal{T}}]$, for any $G$-representation $V$ we have that $$X(S^V) \xrightarrow{\sim} \Omega X(S^{V\oplus \mathbb{R}}).$$ But Theorem \ref{shimakawabit} says that $X(S^1) \xrightarrow{\sim} \Omega^{V} X(S^{V\oplus\mathbb{R}})$ is a $G$-weak equivalence.
Then in the following diagram for any $G$-representation $V$ \begin{equation*} \xymatrix{ X(S^0) \ar[d] \ar[r] & \Omega X(S^1) \ar[d] \\
\Omega^V X(S^V) \ar[r] & \Omega^{V\oplus\mathbb{R}} X(S^{V \oplus \mathbb{R}}), } \end{equation*} both the horizontal arrows and the right vertical arrow are $G$-weak equivalences. This implies that $X(S^0) \xrightarrow{\sim} \Omega^V X(S^V)$ is a $G$-weak equivalence.
Therefore, $\{X(S^V)\}$ is an equivariant $\Omega$-spectrum. \end{proof}
\section{$G$-spaces and Orbit Categories}
\begin{defn} Let $G$ be a finite group. Define the orbit category of $G$, denoted ${\mathcal{O}}(G)$, to be the category with \begin{itemize} \item left cosets $G/H$ for every subgroup $H$ of $G$ as the objects \item and $G$-set maps as the morphisms. \end{itemize} The morphism sets can be identified as follows: $$G{\mathcal{T}}(G/H,G/K) \cong (G/K)^H.$$ \end{defn} \begin{defn} Let an ${\mathcal{O}}(G)$-space be a functor from ${\mathcal{O}}(G)^{\text{op}} $ to ${\mathcal{T}}$. Define the category of ${\mathcal{O}}(G)$-spaces to be the category whose objects are ${\mathcal{O}}(G)$-spaces and whose morphisms are natural transformations. Denote this category by ${\mathcal{O}}(G)[{\mathcal{T}}]$. Define the representable ${\mathcal{O}}(G)$-space as $$\underline{G/H}(G/K) := {\mathcal{O}}(G)(G/K,G/H).$$ The category ${\mathcal{O}}(G)[{\mathcal{T}}]$ is enriched over itself. For any two functors $W$ and $Z$, define $\map{}{W}{Z}$ as the ${\mathcal{O}}(G)$-space given by the functor $ \map{}{W}{Z} (G/H) = \map{}{\underline{G/H} \times W}{Z}$. We use the same notation for the enriched category. \end{defn} Given a $G$-space $W$, we can define an ${\mathcal{O}}(G)$-space $\Phi W$ by $$\Phi W(G/H) := G{\mathcal{T}}(G/H,W)= W^H.$$
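For instance, if $G = \mathbb{Z}/2$ then ${\mathcal{O}}(G)$ has exactly two objects $G/e$ and $G/G$, and for a $G$-space $W$ the ${\mathcal{O}}(G)$-space $\Phi W$ records the fixed-point inclusion $$ \Phi W(G/G) = W^{G} \hookrightarrow W = \Phi W(G/e), $$ together with the $G$-action on $\Phi W(G/e) = W$ induced by the $G$-set automorphisms of the orbit $G/e$.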
Note that since the category ${\mathcal{T}_\text{\tiny G}}$ is $G$-enriched, it is naturally ${\mathcal{O}}(G)$-enriched.
The model category structure on $G$-spaces is as described in Section \ref{Gmodelcat}. The category of ${\mathcal{O}}(G)$-spaces has a level model category structure where \begin{itemize} \item a map $W \to Z$ is a weak equivalence (fibration) if $W(G/H) \to Z(G/H)$ is a weak equivalence (Serre fibration) of spaces, \item and cofibrations are maps with the left lifting property with respect to acyclic fibrations. \end{itemize}
\begin{prop} \cite{MR97k:55016},\cite{MR690052}\label{fixedpoints} The functor $\Phi $ has a left adjoint ${\textbf{C}}$, giving an adjoint pair of enriched functors \begin{equation*} \xymatrix{ {\mathcal{O}}(G)[{\mathcal{T}}] \ar@<.5ex>[r]^{{\textbf{C}}} & {\mathcal{T}_\text{\tiny G}} \ar@<.5ex>[l]^{\Phi} } \end{equation*} Moreover, with the model structures described above on ${\mathcal{O}}(G)[{\mathcal{T}}] $ and ${\text{G}}{\mathcal{T}}$, the adjoint pair \begin{equation*} \xymatrix{ {\mathcal{O}}(G)[{\mathcal{T}}] \ar@<.5ex>[r]^{{\textbf{C}}} & {\text{G}}{\mathcal{T}} \ar@<.5ex>[l]^{\Phi} } \end{equation*} is a Quillen equivalence.
\end{prop}
\section{Units of Equivariant Ring Spectra}
We construct the group of units of a special equivariant $\Gamma$-space and show that it is a very special equivariant $\Gamma$-space. We use the equivalence of equivariant $\Gamma$-spaces and equivariant ${\text{E}}_\infty$-spaces to give a construction of the units of equivariant ring spectra.
\subsection{Units of Special $\Gamma$-spaces} Denote the category of sets with set maps by ${\mathcal{I}}$ and let ${\mathcal{I}_\text{\tiny G}}$ denote the category of $G$-sets with the morphisms being set maps. The category ${\mathcal{I}_\text{\tiny G}}$ is $G$-enriched. Given a set map $K \xrightarrow{f} L$, the action of $G$ is defined via conjugation as follows: $$ (g \cdot f)(k) = g^{-1} f (g k) $$ for all $k \in K$ and $g \in G$.
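With this conjugation action the fixed points recover the equivariant maps: for a subgroup $H < G$, $$ {\mathcal{I}_\text{\tiny G}}(K,L)^H = \{ f : K \to L \mid f(hk) = h f(k) \text{ for all } h \in H,\, k \in K \}, $$ since $f = h \cdot f$ is equivalent to $h f(k) = f(hk)$.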
\begin{defn} Define a $\Gamma$-set $N$ to be a functor from $\Gamma \to {\mathcal{I}}$ such that $N(\b 0) = \ast$. Then $N$ is a special $\Gamma$-set if the map $$N(\b n) \xrightarrow{\prod_i p_i} N(\b 1)^n $$ is an isomorphism. \end{defn}
Let $N$ be a special $\Gamma$-set. Let $i: \b 0 \to \b 1$ be the inclusion. Then $N(\b 1)$ is a commutative monoid via the monoidal structure given by $$ N(\b 1) \times N(\b 1) \xleftarrow{\cong} N(\b 2) \xrightarrow{N(\mu)} N(\b 1).$$ If the $\Gamma$-set $N$ is special then $N(\b m \wedge \b 2) \xrightarrow{\cong} N(\b m)^2$ is an isomorphism and $N(\text{id} \wedge \mu)$ gives a product structure on $N(\b m \wedge \b 1)$.
We can describe the group of units of $N(\b 1)$ in terms of a very special $\Gamma$-set. Let $N'$ be the $\Gamma$-set defined as the pullback of the following diagram: \begin{equation*} \xymatrix{N'(\b m) \ar@{^{(}->}[r] \ar[d] & N(\b m \wedge \b 2) \ar[d]^{N(\text{id}\wedge\mu)} \\ N(\b m \wedge \b 0) \ar@{^{(}->}[r]^{N(i)} & N(\b m \wedge \b 1) } \end{equation*} Since $\b m \wedge \b 0 = \b 0$ and $N(\b 0) = \ast $, this construction is functorial, that is, it describes a $\Gamma$-set. By construction $N'(\b m)$ consists of the pairs of invertible elements of $N(\b m)$ together with their inverses. The $\Gamma$-set $UN$ describing the group of units of $N$ is therefore the image of $N'$ under the projection onto the first factor, namely $$UN(\b m) := N(\text{id} \wedge p_1) N' (\b m).$$
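To illustrate the construction in a simple case, take $N$ to be the special $\Gamma$-set with $N(\b m) = M^m$ for a commutative monoid $M$ with unit $e$. Then the pullback gives $$ N'(\b 1) = \{ (a,b) \in M \times M \mid ab = e \}, \qquad UN(\b 1) = \{ a \in M \mid ab = e \text{ for some } b \in M \}, $$ so $UN(\b 1)$ is the usual group of units of $M$.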
\begin{defn} Let $G$ be a finite group. Define a ${\Gamma_\text{\tiny G}}$-set to be a $G$-functor $A$ from ${\Gamma_\text{\tiny G}} \to {\mathcal{I}_\text{\tiny G}}$ such that $A(\b 0)= \ast$. Let $\theta: S \wedge A(S) \to A(\b 1 )$ be as defined in Definition \ref{eqspecial}. A ${\Gamma_\text{\tiny G}}$-set $A$ is \emph{special} if the adjoint of $\theta$ induces a $G$-isomorphism, that is, $$ A(S)^H \xrightarrow{\cong} H{\mathcal{T}}(S,A(\b 1))$$ is an isomorphism for all subgroups $H$ of $G$. Further $A$ is \emph{very-special} if $A(\b 1)^H$ is grouplike under the induced monoid structure for all $H <G$. \end{defn}
Given a $G$-functor $A: {\Gamma_\text{\tiny G}} \to {\mathcal{I}_\text{\tiny G}}$, the composite $\Phi A $ defines a functor from ${\Gamma_\text{\tiny G}}$ to ${\mathcal{O}}(G)$-sets. Now the category ${\Gamma_\text{\tiny G}}$ is $G$-enriched, so by Proposition \ref{fixedpoints} it is ${\mathcal{O}}(G)[{\mathcal{I}}]$-enriched.
\begin{defn} Define a ${\Gamma_\text{\tiny G}}$-${\mathcal{O}}(G)$-set to be an ${\mathcal{O}}(G)[{\mathcal{I}}]$-functor from ${\Gamma_\text{\tiny G}}$ to ${\mathcal{O}}(G)$-sets. Given a ${\Gamma_\text{\tiny G}}$-set $A$, we get a ${\Gamma_\text{\tiny G}}$-${\mathcal{O}}(G)$-set $\Phi A$.
\end{defn}
Note that a ${\Gamma_\text{\tiny G}}$-${\mathcal{O}}(G)$-set can equally be regarded as an ${\mathcal{O}}(G)$-${\Gamma_\text{\tiny G}}$-set.
Let $A$ be a special ${\Gamma_\text{\tiny G}}$-set. Define a ${\Gamma_\text{\tiny G}}$-${\mathcal{O}}(G)$-set $B$ to be the following pullback of sets, \begin{equation*} \xymatrix{ B(S)(G/H) \ar@{^{(}->}[r] \ar[d] & \Phi A(S \wedge \b 2)(G/H) \ar[d]^{\mu} \\
\Phi A(S \wedge \b 0 )(G/H) \ar@{^{(}->}[r]^{{\text{i}}} & \Phi A(S \wedge \b 1) (G/H)\\ } \end{equation*} The construction is functorial and therefore defines a ${\Gamma_\text{\tiny G}}$-${\mathcal{O}}(G)$-set.
Define the units of $A$ to be the $\Gamma$-${\mathcal{O}}(G)$-set $${\text{U}} A(S)(G/H) := \Phi A (p_1) (B(S) ) (G/H).$$
Given a ${\Gamma_\text{\tiny G}}$-${\mathcal{O}}(G)$-set $B$, the projections induce a map of ${\mathcal{O}}(G)$-sets as in the ${\Gamma_\text{\tiny G}}$-space case.
\begin{defn} Define a ${\Gamma_\text{\tiny G}}$-${\mathcal{O}}(G)$-set $B$ to be \emph{special} if the map induced by projections $p_s$ $$ B(S) \to {\mathcal{O}}(G)[{\mathcal{I}}](\Phi S,B(\b 1))$$ is an isomorphism of ${\mathcal{O}}(G)$-sets. This induces a monoidal structure on $B(\b 1)(G/H)$ for all objects $G/H$ of ${\mathcal{O}}(G)$. If $B(\b 1)(G/H)$ is grouplike for all $H<G$ then $B$ is said to be \emph{very special}. \end{defn}
\begin{lemma} If $A$ is a special ${\Gamma_\text{\tiny G}}$-set then ${\text{U}} A$ is a very-special ${\Gamma_\text{\tiny G}}$-${\mathcal{O}}(G)$-set. \end{lemma}
Let $X$ be a special ${\Gamma_\text{\tiny G}}$-space. Then $\pi_0 X $ is a special ${\Gamma_\text{\tiny G}}$-set. Define ${\text{U}} X$ as the following homotopy pullback \begin{equation*} \xymatrix{ UX(S)(G/H) \ar@{^{(}->}[r] \ar[d] & X(S)(G/H) \ar[d] \\
{\text{U}} (\pi_0 X (S)(G/H)) \ar@{^{(}->}[r]^{{\text{i}}} & (\pi_0 X)(S) (G/H)\\ } \end{equation*} By construction, for any map $S \to T$, since ${\text{U}} X$ includes into $X$ we have a map ${\text{U}} X(S)(G/H) \to X(T)(G/H)$. But $\pi_0 {\text{U}} X$ is a group, so this map factors through the group of units of $\pi_0 X$. Therefore ${\text{U}} X$ is a ${\Gamma_\text{\tiny G}}$-${\mathcal{O}}(G)$-space by construction.
\begin{lemma}\label{orbit} Let $X$ be a special ${\Gamma_\text{\tiny G}}$-space. Then ${\textbf{C}} {\text{U}} X$ is a very-special ${\Gamma_\text{\tiny G}}$-space. \end{lemma} \begin{proof} This follows from the adjunction between ${\mathcal{O}}(G)$-spaces and ${\mathcal{T}_\text{\tiny G}}$ (Proposition \ref{fixedpoints}). \end{proof}
\begin{defn} Let $X$ be a special ${\Gamma_\text{\tiny G}}$-space. Define the units of $X$ to be the very-special ${\Gamma_\text{\tiny G}}$-space, ${\textbf{C}} {\text{U}} X$. \end{defn}
\subsection{Equivariant ${\text{E}}_\infty$-ring spectra}
Denote the category of $G$-spectra by ${\mathcal{S}}_G$.
\begin{thm} The category ${\mathcal{S}}_G$ is a topological model category with \begin{itemize} \item weak equivalences being $G$-weak equivalences of $G$-spectra, \item fibrations being Serre fibrations of $G$-spectra and, \item cofibrations being the maps of spectra with a left lifting property with respect to acyclic fibrations. \end{itemize} Moreover, given a continuous monad ${\mathcal{C}}:{\mathcal{S}}_G \to {\mathcal{S}}_G$ such that the category ${\mathcal{C}}[{\mathcal{S}}_G]$ of ${\mathcal{C}}$-algebras has continuous coequalizers and satisfies the Cofibration Hypothesis, ${\mathcal{S}}_G$ creates a topological model structure on ${\mathcal{C}}[{\mathcal{S}}_G]$. \end{thm} \begin{proof} The proof is similar to the proof of the non-equivariant version \cite[Thm~VII.4.4]{MR1417719}. \end{proof}
There exist adjoint functors between $G$-spectra and $G$-spaces. \begin{equation*} \xymatrix{ {\mathcal{S}}_G \ar@<.5ex>[r]^{\Omega^{\infty}} & {\mathcal{T}_\text{\tiny G}} \ar@<.5ex>[l]^{\Sigma^{\infty}} } \end{equation*} Here $\Omega^{\infty} X = X_0$, where $0$ denotes the trivial representation, and $\Sigma^{\infty} Y$ denotes the spectrification of $ \{\Sigma^V Y \}$.
\begin{prop} Let ${\mathcal{L}}$ denote the linear isometries $G$-operad. Then we have an adjoint pair of functors between equivariant ${\text{E}}_\infty$-ring spectra and ${\text{E}}_\infty$-spaces. \begin{equation*} \xymatrix{ {\mathcal{L}}[{\mathcal{S}}_G ]\ar@<.5ex>[r]^{\Omega^{\infty}} & {\mathcal{L}}[{\mathcal{T}_\text{\tiny G}}] \ar@<.5ex>[l]^{\Sigma^{\infty}} } \end{equation*}
\end{prop}
\subsection{ Defining the Units of Equivariant ${\text{E}}_\infty$ Ring Spectra}
Let $R$ be an equivariant ${\text{E}}_\infty$-ring spectrum. Then $\Omega^{\infty}R$ is an ${\text{E}}_\infty$-ring space. There is a forgetful functor to ${\mathcal{L}}$-spaces which forgets the additive structure on $\Omega^{\infty}R$ coming from the infinite loop space structure. By Proposition \ref{main}, as an equivariant ${\mathcal{L}}$-space (forgetting the additive structure) $\Omega^\infty R$ is equivalent to an equivariant special $\Gamma$-space. We know how to construct the units of an equivariant special $\Gamma$-space. Therefore, we can make the following definition.
\begin{defn}\label{unitsdefn} Let $R$ be an equivariant $E_\infty$-ring spectrum and $Y$ be the special ${\Gamma_\text{\tiny G}}$-space equivalent to the ${\mathcal{L}}$-space, $\Omega^{\infty} R$. Define the \emph{unit equivariant spectrum } of $R$ to be the equivariant spectrum represented by the very-special ${\Gamma_\text{\tiny G}}$-space ${\textbf{C}} U Y$. \end{defn}
\appendix \section{Adjoint Square} \begin{thm}\cite[Thm~3.3.10]{MR771116} Let ${\mathcal{B}}$ and ${\mathcal{A}}$ be cocomplete categories. Beck's monadicity theorem states that a functor $U : {\mathcal{B}} \to {\mathcal{A}}$ is monadable if and only if \begin{itemize} \item[(i)] $U : {\mathcal{B}} \to {\mathcal{A}}$ has a left adjoint. \item[(ii)] $U$ reflects isomorphisms. \item[(iii)] ${\mathcal{B}}$ has coequalizers of reflexive $U$-contractible coequalizer pairs and $U$ preserves them. \end{itemize} \end{thm}
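As a standard non-equivariant illustration of the theorem, the forgetful functor $U : \mathbf{Grp} \to \mathbf{Set}$ satisfies (i)--(iii), with left adjoint the free group functor $F$, so that
\begin{equation*} \mathbf{Grp} \simeq (UF)\text{-}\mathrm{Alg}, \end{equation*}
the category of algebras over the monad $UF$ on $\mathbf{Set}$.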
\begin{prop} \label{adjoint} Let ${\mathcal{D}}$ and ${\mathcal{F}}$ be cocomplete categories with an adjoint pair of functors $\xymatrix{ {\mathcal{D}}\ar@<.5ex>[r]^{{\text{L}}} & {\mathcal{F}} \ar@<.5ex>[l]^{{\text{R}}}} $ such that $LR=id$. Let $ \tilde{{\mathcal{D}}}$ and $\tilde{{\mathcal{F}}}$ be categories with monadable functors $U_d: \tilde{{\mathcal{D}}} \to {\mathcal{D}}$ and $U_f:\tilde{{\mathcal{F}}} \to {\mathcal{F}}$. Let $F_d$ and $F_f$ be the left adjoints to $U_d$ and $U_f$ respectively with $L U_d F_d R = U_f F_f$. Further, let there exist $\tilde{R}: \tilde{{\mathcal{F}}} \to \tilde{{\mathcal{D}}} $ with the following commuting diagram of adjoint functors, namely $R U_f = U_d \tilde{R}$.
\begin{equation} \xymatrix{ {\mathcal{D}} \ar@<-.5ex>[d]_{F_d} \ar@<.5ex>[r]^{L} & {\mathcal{F}} \ar@<.5ex>[d]^{F_f} \ar@<.5ex>[l]^{R} \\
\tilde{{\mathcal{D}}} \ar@<-.5ex>[u]_{U_d} & \tilde{{\mathcal{F}}} \ar@<.5ex>[u]^{U_f} \ar@<.5ex>[l]^{\tilde{R}}. } \end{equation} Then $\tilde{R}$ has a left adjoint such that the following diagram of adjoints commutes
\begin{equation} \xymatrix{ {\mathcal{D}} \ar@<-.5ex>[d]_{F_d} \ar@<.5ex>[r]^{L} & {\mathcal{F}} \ar@<.5ex>[d]^{F_f} \ar@<.5ex>[l]^{R} \\
\tilde{{\mathcal{D}}} \ar@<-.5ex>[u]_{U_d} \ar@<.5ex>[r]^{\tilde{L}} & \tilde{{\mathcal{F}}} \ar@<.5ex>[u]^{U_f} \ar@<.5ex>[l]^{\tilde{R}}. } \end{equation} \end{prop} \begin{proof}
For any $Y$ in $\tilde{{\mathcal{D}}}$ we have a morphism $F_d U_d Y \to Y$ by adjointness. For any $Y$ in $\tilde{{\mathcal{D}}}$ we also have \begin{equation*} \xymatrix{U_dF_d Y \ar[r]^-{\eta U_d F_d \eta} & RLU_dF_dRL\,Y = RU_fF_fL\,Y } \end{equation*} The above map, denoted by $\beta$, is in fact a map of triples. Further, \begin{equation*} \xymatrix{F_fLU_dF_dY \ar[r] & F_fLRU_fF_fLY=F_fU_fF_fLY \ar[r]^-{\epsilon} & F_fL Y}
\end{equation*} This gives an action $\alpha: F_f L U_d F_d \to F_f L$.
Define a functor $\tilde{L}: \tilde{{\mathcal{D}}} \to \tilde{{\mathcal{F}}}$ as follows. For any $X$ in $\tilde{{\mathcal{D}}}$, \begin{equation*} \xymatrix{ F_f L U_d F_d U_d X \ar@<.5ex>[r]^-{ F_f L U_d \epsilon } \ar@<-.5ex>[r]_-{\alpha U_d } & F_f L U_d X \ar[r] & \tilde{L}X } \end{equation*} This is a $U_f$-contractible coequalizer pair. By Beck's monadicity theorem the above coequalizer exists.
Claim: The functor $\tilde{L}$ is left adjoint to $\tilde{R}$.
Reason: Let $X$ be an object of $\tilde{{\mathcal{D}}}$. Then there exist maps \begin{equation*} \xymatrix{U_d X \ar[r]^-{\eta} & R L U_d X \ar[r]^-{R\eta LU_dX} & R U_f F_f L U_d X = U_d \tilde{R} F_f L U_d X \ar[r] & U_d \tilde{R} \tilde{L} X .} \end{equation*} The last map is the coequalizing map in the definition of $\tilde{L}$. Since $U_dF_d \to RU_fF_fL$ is a map of triples we get a map of algebras \begin{equation*} \xymatrix{ U_d F_d U_d X \ar[d] \ar[r] & RU_fF_f L RU_fF_f L U_d X \ar[d] \\
U_d X \ar[r] & R U_f F_f L U_d X } \end{equation*} Since $U_d$ is monadable, $\tilde{{\mathcal{D}}}$ is equivalent to the category of $U_dF_d$-algebras on ${\mathcal{D}}$; the above diagram is then a map of $U_dF_d$-algebras, and so gives a map $X \to \tilde{R} \tilde{L} X$ in $\tilde{{\mathcal{D}}}$.
Given any $Y$ in $\tilde{{\mathcal{F}}}$, $\tilde{L} \tilde{R} Y$ is given by the coequalizer \begin{equation*} \xymatrix{ F_f L U_d F_d U_d \tilde{R}Y \ar@<.5ex>[r]^-{ F_f L U_d \epsilon } \ar@<-.5ex>[r]_-{\alpha U_d } & F_f L U_d \tilde{R} Y \ar[r] & \tilde{L} \tilde{R}Y} \end{equation*}
Using the fact that $U_d \tilde{R} = R U_f$ and $LR=id$ we get that the coequalizer diagram is
\begin{equation*} \xymatrix{ F_f L U_d F_d R U_f Y \ar@<.5ex>[r]^-{ F_f L U_d \epsilon } \ar@<-.5ex>[r]_-{\alpha U_d } & F_f U_f Y \ar[r] & \tilde{L} \tilde{R}Y} \end{equation*}
This reduces to \begin{equation*} \xymatrix{ F_f U_f F_f U_f Y \ar@<.5ex>[r]^-{ F_f U_f \epsilon } \ar@<-.5ex>[r]_-{\epsilon F_f U_f } & F_f U_f Y \ar[r] & \tilde{L} \tilde{R}Y} \end{equation*}
But this gives an isomorphism $\tilde{L} \tilde{R} Y \to Y$.
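Indeed, the pair above is part of a split coequalizer: the counit $\epsilon$ of $(F_f, U_f)$ coequalizes it, and the splittings supplied by the unit exhibit
\begin{equation*} \xymatrix{ F_f U_f F_f U_f Y \ar@<.5ex>[r] \ar@<-.5ex>[r] & F_f U_f Y \ar[r]^-{\epsilon_Y} & Y } \end{equation*}
as a coequalizer diagram, whence $\tilde{L} \tilde{R} Y \cong Y$.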
Thus we have an adjoint pair. \end{proof}
Let ${\mathcal{A}}$ and ${\mathcal{B}}$ be model categories. A functor ${\text{U}}: {\mathcal{B}} \to {\mathcal{A}}$ creates weak equivalences if a map $B \to B'$ is a weak equivalence in ${\mathcal{B}}$ if and only if ${\text{U}} B \to {\text{U}} B'$ is a weak equivalence in ${\mathcal{A}}$. \begin{lemma} \label{quillen} \cite[lemma~A.2]{MR1806878} Let ${\text{U}}: {\mathcal{B}} \to {\mathcal{A}}$ and ${\text{F}}: {\mathcal{A}} \to {\mathcal{B}}$ be a Quillen adjoint pair. Then $({\text{U}},{\text{F}})$ form a Quillen equivalence if ${\text{U}}$ creates weak equivalences and, for all cofibrant objects $A$ of ${\mathcal{A}}$, the map $A \to {\text{U}}{\text{F}} A$ is a weak equivalence in ${\mathcal{A}}$. \end{lemma}
\begin{lemma} \label{twin models} Let ${\mathcal{D}}$ be a model category and $S$ be a set of maps in ${\mathcal{D}}$ such that the localization of ${\mathcal{D}}$ with respect to $S$ exists; denote it by ${\mathcal{D}_s}$. Let $T$ be a monad on ${\mathcal{D}}$ and ${\mathcal{D}}_T$ denote the category of $T$-algebras. Then ${\mathcal{D}}_T$ has a model category structure inherited from both ${\mathcal{D}}$ and ${\mathcal{D}_s}$. Localizing ${\mathcal{D}}_T$ with respect to $TS$ we get another model category structure on ${\mathcal{D}}_T$; let us denote this model category by ${\mathcal{D}}_{TS}$ for notational convenience.
Then the model categories ${\mathcal{D}_s}_T$ and ${\mathcal{D}}_{TS}$ are Quillen equivalent. \end{lemma} \begin{proof}
\noindent In the localized model category ${\mathcal{D}_s}$ \begin{itemize} \item fibrant objects are $S$-local objects, namely, fibrant objects $X$ in ${\mathcal{D}}$ such that for every morphism $Y \to Y'$ in $S$, the map $\map{}{Y'}{X} \to \map{}{Y}{X}$ is a weak equivalence. \item weak equivalences are $S$-local equivalences, namely, morphisms $Z \to W$ such that $\map{}{W } {X} \to \map{}{Z}{X}$ is a weak equivalence for all fibrant $X$. \item cofibrations are maps which are cofibrations in ${\mathcal{D}}$. \item fibrations are maps with the right lifting property with respect to acyclic cofibrations. \end{itemize} In the model category ${\mathcal{D}_s}_T$ \begin{itemize} \item Weak equivalences and fibrations are the same as those in ${\mathcal{D}_s}$.
\item Cofibrations are the maps with the left lifting property with respect to acyclic fibrations. \end{itemize} In the model category ${\mathcal{D}}_{TS}$ \begin{itemize} \item Weak equivalences are $TS$-local equivalences. \item Cofibrations are cofibrations in the underlying category ${\mathcal{D}}$. \item Fibrations have the right lifting property with respect to acyclic cofibrations. \end{itemize} Note that the free functor $F_T$ on ${\mathcal{D}}$ is left adjoint to the forgetful functor $U_T$ on ${\mathcal{D}}_T$. Thus a $TS$-local object in ${\mathcal{D}}_T$ is exactly a $T$-algebra whose underlying space is an $S$-local object of ${\mathcal{D}}$. Both model categories have the same fibrant objects. For similar reasons, both model categories have the same weak equivalences. Moreover, fibrations in ${{\mathcal{D}_s}}_T$ are fibrations in ${\mathcal{D}}_{TS}$. Thus one can show that the identity functors induce a Quillen equivalence between the two model categories. \end{proof}
\section{Cofibrant Objects} \label{appb} \begin{prop} The category of $G$-topological spaces forms a model category with \begin{itemize} \item weak equivalences being $G$-homotopy equivalences of $G$-spaces, \item cofibrations being Hurewicz $G$-cofibrations, denoted h-cofibrations, and \item fibrations being maps with the right lifting property with respect to trivial cofibrations, denoted h-fibrations. In particular, fibrations are Hurewicz fibrations. \end{itemize} \end{prop} We will call this the Hurewicz-Strom model structure on $G$-spaces.
\begin{prop}\label{ghurewicz} Let $X$ be a simplicial object in $G$-topological spaces with the Hurewicz-Strom model structure. Then geometric realization preserves weak homotopy equivalences between Reedy cofibrant objects. \end{prop}
\begin{lemma}\cite[I.$\delta$6.5]{MR1417719} Let $A \to B$ be an h-cofibration of $G$-topological spaces. Then cobase change along a weak homotopy equivalence is a weak homotopy equivalence. \end{lemma}
\begin{lemma}\label{hcofweq} Let the following be a pushout diagram of $G$-spaces: \begin{equation*} \xymatrix{ A \ar[r]^{f} \ar[d]_i & C \ar[d] \\
B \ar[r] & B\cup_A C.}
\end{equation*} If $i$ is an h-cofibration then the pushout is preserved under weak homotopy equivalences. \end{lemma}
For the rest of this section we assume that ${\text{G}}{\mathcal{T}}$ has the Hurewicz-Strom model structure. Consider $G\Pi[G{\mathcal{T}}]$ with the level model category structure. Then $G\Pi[G{\mathcal{T}}]$ is a topological model category induced by the Hurewicz-Strom model structure on $G{\mathcal{T}}$, and $\Pi[G{\mathcal{T}}]$ is a topological model category with the model structure induced by the functor ${\text{E}}$. Let ${\mathcal{G}}$ be a category of operators and let ${\mathcal{G}}[{\mathcal{T}_\text{\tiny G}}]$ carry the model category structure coming from the underlying structure on $\Pi[G{\mathcal{T}}]$. Denote the category of simplicial objects in ${\mathcal{G}}[{\mathcal{T}_\text{\tiny G}}]$ by $s.{\mathcal{G}}[{\mathcal{T}_\text{\tiny G}}]$, and consider it with the Reedy model structure induced by the Hurewicz-Strom model structure on ${\mathcal{G}}[{\mathcal{T}_\text{\tiny G}}]$. \begin{prop}\label{greedy} Let ${\mathcal{G}}$ and ${\mathcal{H}}$ be categories of operators with a morphism $v: {\mathcal{G}} \to {\mathcal{H}}$. Let $\Pi \to {\mathcal{G}}$ be a cofibration of $\Pi^{{\small{\text{op}}}} \times \Pi$-spaces. Let $X$ be a cofibrant ${\mathcal{G}}$-space. Then the bar construction ${\text{B}}_\bullet({\mathcal{H}},{\mathcal{G}},X)$ is Reedy cofibrant as a simplicial object in $\Pi[G{\mathcal{T}}]$. \end{prop}
Note that when this Proposition is applied we assume that $X$ is cofibrant in the model category described in Section \ref{serreeqcat}.
In order to prove Proposition \ref{greedy} we reformulate the proof of a similar result by Rezk[Thesis].
Consider the category of covariant functors from $\Pi^{\small{\text{op}}} \times \Pi$ to $G {\mathcal{T}}$ denoted by $(\Pi^{\small{\text{op}}} \times \Pi)[{\text{G}} {\mathcal{T}}]$. We can define a monoidal structure on $\Pi^{\small{\text{op}}} \times \Pi$-spaces as follows. For any $\Pi^{{\small{\text{op}}}} \times \Pi$-spaces ${\mathcal{A}}$ and ${\mathcal{B}}$ define a $\Pi^{\small{\text{op}}} \times \Pi$-space as $$ {\mathcal{A}} {\small{\text{o}}} {\mathcal{B}} (\b n , \b m) := {\mathcal{A}}^{\b m} \otimes_\Pi {\mathcal{B}}_{\b n} $$ Note $\Pi {\small{\text{o}}} {\mathcal{A}} = {\mathcal{A}}$ and ${\mathcal{A}} {\small{\text{o}}} \Pi = {\mathcal{A}}$.
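For instance, writing $\otimes_\Pi$ as the usual coend over $\Pi$ (a sketch, assuming the evident identification of $\Pi$ with its represented functors), the unit laws noted above are instances of the co-Yoneda lemma:
\begin{equation*} {\mathcal{A}}^{\b m} \otimes_\Pi \Pi(-,\b n) \cong {\mathcal{A}}(\b n, \b m). \end{equation*}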
The category of $\Pi^{{\small{\text{op}}}} \times \Pi$-spaces acts on the category of $\Pi$-spaces. Let $X$ be a $\Pi$-space and ${\mathcal{A}}$ be a $\Pi^{{\small{\text{op}}}} \times \Pi$-space. Then define a $\Pi$-space ${\mathcal{A}}(X)$ as follows $$ {\mathcal{A}}(X) (\b n) := {\mathcal{A}}^{\b n} \otimes_\Pi X .$$
This defines a right closed action of $\Pi^{{\small{\text{op}}}} \times \Pi$-spaces on $\Pi$-spaces. Let $X$ and $Y$ be $\Pi$-spaces. Define $${\text{Hom}}(X,Y) (\b m, \b n) := {\mathcal{T}_\text{\tiny G}}(X(\b m),Y(\b n) ).$$ Note this is a $\Pi^{{\small{\text{op}}}} \times \Pi$-space.
Let $X$ and $Y$ be $\Pi$-spaces and ${\mathcal{A}}$ be a $\Pi^{{\small{\text{op}}}} \times \Pi$-space. Then $${\text{Hom}}({\mathcal{A}} X, Y ) \cong {\text{Hom}}({\mathcal{A}}, {\text{Hom}}(X,Y)).$$
Further, given $\Pi^{{\small{\text{op}}}} \times \Pi$-spaces ${\mathcal{A}}$ and ${\mathcal{B}}$, we get a function $\Pi^{{\small{\text{op}}}} \times \Pi$-space ${\text{F}}({\mathcal{A}},{\mathcal{B}})$ defined by the equalizer \begin{equation*} \xymatrix{ {\text{F}}({\mathcal{A}},{\mathcal{B}}) (\b m, \b n) & \ar[l] \prod_{k} {\mathcal{T}_\text{\tiny G}}({\mathcal{A}}(\b k, \b m),{\mathcal{B}}(\b k, \b n)) & \ar@<-.5ex>[l] \ar@<.5ex>[l] \prod_{k \to k'} {\mathcal{T}_\text{\tiny G}}({\mathcal{A}}(\b k,\b m),{\mathcal{B}}(\b k', \b n)) \\ } \end{equation*}
\begin{prop} The category of $\Pi^{{\small{\text{op}}}} \times \Pi$-spaces has a right closed monoidal structure and for $\Pi ^{{\small{\text{op}}}} \times \Pi$-spaces ${\mathcal{A}},{\mathcal{B}}$ and ${\mathcal{G}}$ $$ {\text{Hom}}({\mathcal{A}} {\small{\text{o}}} {\mathcal{B}}, {\mathcal{G}}) \cong {\text{Hom}}({\mathcal{A}}, {\text{F}}({\mathcal{B}},{\mathcal{G}}) ).$$ \end{prop}
\begin{prop}\label{piopspaces} Define a morphism $ {\mathcal{A}} \to {\mathcal{B}}$ in $(\Pi^{{\small{\text{op}}}}\times \Pi)[G{\mathcal{T}}]$ \begin{itemize} \item to be a weak equivalence (or fibration) if ${\mathcal{A}}(\b n,\hspace{0.1cm} ) \to {\mathcal{B}}(\b n,\hspace{0.1cm} ) $ is a weak equivalence (or fibration) in $\Pi[G{\mathcal{T}}]$ and, \item to be a cofibration if it has the left lifting property with respect to all trivial fibrations. \end{itemize} These classes of maps define a model category structure on $(\Pi^{{\small{\text{op}}}}\times \Pi)[G{\mathcal{T}}]$.
\end{prop}
\begin{prop}\label{action} The action of $\Pi^{{\small{\text{op}}}} \times \Pi$-spaces on $\Pi$-spaces is compatible with the level model category structure of $\Pi$-spaces. \end{prop} \begin{proof} Let $i: X \to Y$ be a cofibration of $\Pi$-spaces and $p:Z \to W$ be a fibration of $\Pi^{\small{\text{op}}} \times \Pi$-spaces. Then we need to show that the induced maps \begin{eqnarray*} f: {\text{Hom}}(\ast, Z) & \to & {\text{Hom}}(\ast, W) \text{ and} \\ g: {\text{Hom}}(Y,Z) & \to & {\text{Hom}}(X, Z) \times_{{\text{Hom}}(X,W)} {\text{Hom}}(Y, W) \end{eqnarray*} are fibrations. Further we need to show that if $i$ is also a weak equivalence, then $g$ is a trivial fibration. If $p$ is also a weak equivalence, then $f$ and $g$ are trivial fibrations.
We can reduce this to a similar diagram in $\Pi[G {\mathcal{T}}]$ using adjointness of ${\text{E}}$ and ${\text{i}}$. The result follows from the fact that $\Pi[G{\mathcal{T}}]$ is a topological model category.
\end{proof}
\begin{prop}\label{product} The monoidal structure of $\Pi^{{\small{\text{op}}}} \times \Pi$-spaces is compatible with the model category structure of $\Pi^{{\small{\text{op}}}} \times \Pi$-spaces. \end{prop} \begin{proof} We need to show that if $i:{\mathcal{A}} \to {\mathcal{B}}$ is a cofibration and $p:{\mathcal{G}} \to {\mathcal{H}}$ is a fibration of $\Pi^{{\small{\text{op}}}} \times \Pi$-spaces, then the induced maps \begin{eqnarray*} {\text{F}}( \ast, {\mathcal{G}}) & \to & {\text{F}}( \ast, {\mathcal{H}}) \text{ and} \\ {\text{Hom}}({\mathcal{B}}, {\mathcal{G}} ) & \to & {\text{Hom}}({\mathcal{A}}, {\mathcal{G}}) \times_{{\text{Hom}}({\mathcal{A}},{\mathcal{H}})} {\text{Hom}} ({\mathcal{B}}, {\mathcal{H}})
\end{eqnarray*} are fibrations in $(\Pi^{{\small{\text{op}}}} \times \Pi)[G {\mathcal{T}}]$. If $i$ is also a weak equivalence then the second map is a trivial fibration. If both $i$ and $p$ are weak equivalences then both the maps are trivial fibrations.
In order to prove the above result we need to know that if ${\mathcal{A}} \xrightarrow{i} {\mathcal{B}}$ is a cofibration then ${\mathcal{A}}^{\b n} \to {\mathcal{B}}^{\b n}$ is a cofibration of $\Pi$-spaces. This follows from the fact that fibrations of $(\Pi^{{\small{\text{op}}}}\times \Pi)[G {\mathcal{T}}]$ are defined level-wise.
The rest of the proof is similar to that of the previous Proposition.
\end{proof}
In order to prove Proposition \ref{greedy} we follow the proof of Proposition 3.7.3 in Rezk[Thesis]. In order to show that ${\text{B}}_\bullet({\mathcal{H}},{\mathcal{G}},X)$ is Reedy cofibrant for a cofibrant ${\mathcal{G}}$-space $X$, we need to show that $L_{n-1}{\text{B}}_\bullet({\mathcal{H}},{\mathcal{G}},X) \to {\text{B}}_n({\mathcal{H}},{\mathcal{G}},X)$ is a cofibration of ${\mathcal{G}}$-spaces.
Let $\Pi \xrightarrow{i} {\mathcal{G}}$ be the natural map. Then we have maps ${\mathcal{G}}^{\circ m} \xrightarrow{s_j} {\mathcal{G}}^{\circ {m+1}}$ given by $s_j= \text{id} \circ \cdots \circ i \circ \cdots \circ \text{id}$, where $i$ is in the $j$th spot, and $s_0= i \circ \text{id} \circ \cdots \circ \text{id}$. Now, define ${\mathcal{A}}_m$ to be the following coequalizer. \[\xymatrix{\coprod_{0 \leq r <j< m-1} {\mathcal{G}}^{\circ {m-1}} \ar@<.5ex>[r]^{s_r} \ar@<-.5ex>[r]_{s_j} & \coprod_{0\leq k \leq m} {\mathcal{G}}^{\circ{m}} \ar[r] & {\mathcal{A}}_m } \]
There exist maps $s_k$ from ${\mathcal{G}}^{\circ{m}} \to {\mathcal{G}}^{\circ{m+1}} $ giving rise to a map $a: {\mathcal{A}}_m \to {\mathcal{G}}^{\circ{m+1}}$.
\begin{lemma} The following diagram is a pushout square in $\Pi^{{\small{\text{op}}}} \times \Pi$-spaces. \begin{equation*} \xymatrix{ {\mathcal{A}}_m \circ \Pi \ar[r]^{\text{id}\circ i}\ar[d]^{a\circ \text{id}} & {\mathcal{A}}_m \circ {\mathcal{G}} \ar[d] \\
{\mathcal{G}}^{\circ {m+1}} \circ \Pi \ar[r] & {\mathcal{A}}_{m+1} } \end{equation*} \end{lemma} \begin{proof} The functor $- \circ {\mathcal{G}}$ preserves colimits of $\Pi^{op} \times \Pi$-spaces as they are computed in the underlying category of spaces. The proof is similar to that of Lemma 3.7.8 in Rezk [Thesis]. \end{proof}
\begin{lemma} Let ${\mathcal{G}}$ be a $\Pi^{\small{\text{op}}} \times \Pi$-space such that $\Pi \to {\mathcal{G}}$ is a cofibration of $\Pi^{{\small{\text{op}}}}\times \Pi$-spaces. Then the map ${\mathcal{A}}_{m} \to {\mathcal{G}}^{\circ{m+1}}$ is a cofibration of $\Pi^{{\small{\text{op}}}}\times \Pi$-spaces. \end{lemma} \begin{proof} The proof is by induction. By hypothesis $({\mathcal{A}}_0= \Pi) \to {\mathcal{G}} $ is a cofibration. Let ${\mathcal{A}}_{m-1} \to {\mathcal{G}}^{\circ m}$ be a cofibration. Then since $\Pi \to {\mathcal{G}}$ is a cofibration, by the previous lemma and Proposition \ref{action} we get that ${\mathcal{A}}_{m} \to {\mathcal{G}}^{\circ m+1}$ is a cofibration. \end{proof} \begin{rmk} Note that if ${\mathcal{D}}$ is a well-pointed operad, that is, if $\ast \to {\mathcal{D}}(1)$ is an h-cofibration, then we can show that $\Pi \to \hat{{\mathcal{D}}}$ is a cofibration of $\Pi^{{\small{\text{op}}}}\times \Pi$-spaces. \end{rmk}
\begin{lemma} Let ${\mathcal{G}} \to {\mathcal{H}}$ be a map of $\Pi^{\small{\text{op}}} \times \Pi $-spaces, $\Pi \to {\mathcal{G}}$ be a cofibration of $\Pi^{{\small{\text{op}}}} \times \Pi$-spaces and $X$ be a cofibrant $\Pi$-space. Then $L_n{\text{B}}_\bullet({\mathcal{H}},{\mathcal{G}},X) \to {\text{B}}_n ({\mathcal{H}},{\mathcal{G}},X)$ is a cofibration in $\Pi[G{\mathcal{T}}]$. \end{lemma} \begin{proof} By the previous lemma and Proposition \ref{action} the map ${\mathcal{A}}_n \circ X \to {\mathcal{G}}^{\circ n+1 } \circ X$ is a cofibration. Now, $L_n {\text{B}}_\bullet({\mathcal{H}},{\mathcal{G}},X) \cong {\mathcal{H}} \circ {\mathcal{A}}_n \circ X $. It then follows from Proposition \ref{product} that $L_n {\text{B}}_\bullet({\mathcal{H}},{\mathcal{G}},X) \to {\text{B}}_n({\mathcal{H}},{\mathcal{G}},X)$ is a level cofibration. \end{proof}
\begin{proof}[Proof of Proposition \ref{greedy}] This follows from the previous lemma. \end{proof}
\section{Discussion on units of equivariant ring spectra}\label{discussion}
The following is a discussion of the equivariant ${\text{gl}}_1$ functor from equivariant ring spectra to equivariant spectra. Let ${\mathcal{S}}_G$ denote the $G$-enriched category of $G$-spectra and $G {\mathcal{S}}$ denote the category of $G$-spectra without the enrichment. There exists a Quillen pair of functors between equivariant ring spectra and equivariant $E_\infty$-spaces \begin{eqnarray} \xymatrix{ {\mathcal{E}}[{\mathcal{T}_\text{\tiny G}}] \ar@<-.5ex>[r]_{\Sigma^\infty} & {\mathcal{E}}[ {\mathcal{S}}_G]\ar@<-.5ex>[l]_{\Omega^\infty} } \end{eqnarray} induced by the adjoint pair between $G$-spaces and $G$-spectra. This induces an adjoint pair between the homotopy categories of ${\mathcal{E}}[{\mathcal{T}_\text{\tiny G}}]$ and ${\mathcal{E}}[{\mathcal{S}}_G]$. By the results in this article, since the homotopy categories of ${\mathcal{E}}[{\mathcal{T}_\text{\tiny G}}]$ and $\Gamma[G{\mathcal{T}}]$ are equivalent, we have an adjoint pair between the homotopy categories of $\Gamma[G {\mathcal{T}}]$ and ${\mathcal{E}}[{\mathcal{S}}_G]$.
There are two relevant model structures on the category of equivariant $\Gamma$-spaces. The one described in this paper is such that the fibrant objects in the category are special equivariant $\Gamma$-spaces, which we will denote by $\Gamma[G {\mathcal{T}}]_s$. There is a different model structure in which fibrant objects are very special equivariant $\Gamma$-spaces, which we denote by $\Gamma[G{\mathcal{T}}]_{vs}$. In a later paper (joint with Chenghao Chu), we show that there is a Quillen pair between the category of equivariant $\Gamma$-spaces and a suitable category of equivariant spectra that induces an equivalence between the homotopy category of connective equivariant spectra and the homotopy category of equivariant $\Gamma$-spaces. We will have a Quillen pair as follows, where ${\mathcal{A}}$ and ${\mathcal{B}}$ denote the equivariant analogs of the functors ${\mathcal{A}}$ and ${\mathcal{B}}$ defined by Segal \cite[Prop~3.3]{MR0353298} \begin{eqnarray} \label{equiv} \xymatrix{ G{\mathcal{S}}\ar@<-.5ex>[r]_{{\mathcal{A}}} & \Gamma[G{\mathcal{T}}]_{vs} \ar@<-.5ex>[l]_{{\mathcal{B}}} } \end{eqnarray}
Consider the functor $\text{Units}$ obtained by taking fibrant replacement in $\Gamma[G{\mathcal{T}}]_s$ and then applying the ${\text{GL}}_1$ construction. On the level of homotopy categories this induces a functor which is right adjoint to the identity functor of equivariant $\Gamma$-spaces. More precisely, we have a pair of adjoint functors:
\begin{eqnarray*} \xymatrix{ \text{ho.}\Gamma[G{\mathcal{T}}]_{vs}\ar@<.5ex>[r]^{Id } & \text{ho.} \Gamma[G{\mathcal{T}}]_{s} \ar@<.5ex>[l]^{\text{Units}}} \end{eqnarray*}
Assembling all these diagrams, and noting that the Quillen pair \ref{equiv} induces an equivalence on the homotopy category of connective spectra, we can define a functor on the homotopy categories $\text{gl}_1 : \text{ho.} \Gamma[G{\mathcal{S}}] \to \text{ho. connective } G {\mathcal{S}} \subset \text{ho.} G{\mathcal{S}}$ adjoint to the functor ${\mathcal{A}} \Omega^\infty: \text{ho. connective } G {\mathcal{S}} \subset\text{ho.} G {\mathcal{S}} \to \text{ho.} \Gamma[G {\mathcal{S}}]$.
\begin{rmk}
We expect that the notion of equivariant $\Gamma$-spaces can be extended to the notion of equivariant $\Gamma$-spectra and one can generalize the result in this paper to a Quillen equivalence between the category of equivariant ${\text{E}}_\infty$-spectra and equivariant $\Gamma$-spectra. Following the notation in this article, we will have,
\begin{claim} Let ${\mathcal{E}}$ denote an $E_\infty$-$G$-operad. Then with appropriate model structures, where the fibrant objects in $\Gamma[G{\mathcal{S}}]$ are special objects, we get a zigzag of Quillen equivalences between ${\mathcal{E}}[{\mathcal{S}}_G]$ and $\Gamma[G{\mathcal{S}}]$. \end{claim}
We can then restate the definition of ${\text{gl}}_1$ in the equivariant case using the above claim. \end{rmk}
\end{document}
"id": "0912.4346.tex",
"language_detection_score": 0.6163209676742554,
"char_num_after_normalized": null,
"contain_at_least_two_stop_words": null,
"ellipsis_line_ratio": null,
"idx": null,
"lines_start_with_bullet_point_ratio": null,
"mean_length_of_alpha_words": null,
"non_alphabetical_char_ratio": null,
"path": null,
"symbols_to_words_ratio": null,
"uppercase_word_ratio": null,
"type": null,
"book_name": null,
"mimetype": null,
"page_index": null,
"page_path": null,
"page_title": null
} | arXiv/math_arXiv_v0.2.jsonl | null | null |
\begin{document}
\title{Spontaneous dressed-state polarization in the strong driving regime of cavity QED}
\author{Michael~A.~Armen} \affiliation{Edward L.\ Ginzton Laboratory, Stanford University, Stanford CA 94305, USA} \affiliation{Physical Measurement and Control 266-33, California Institute of Technology, Pasadena CA 91125, USA}
\author{Anthony~E.~Miller} \affiliation{Edward L.\ Ginzton Laboratory, Stanford University, Stanford CA 94305, USA}
\author{Hideo~Mabuchi} \affiliation{Edward L.\ Ginzton Laboratory, Stanford University, Stanford CA 94305, USA}
\date{\today} \pacs{42.50.Pq,42.50.Lc,42.65.Pc,42.79.Ta}
\begin{abstract} We utilize high-bandwidth phase quadrature homodyne measurement of the light transmitted through a Fabry-Perot cavity, driven strongly and on resonance, to detect excess phase noise induced by a single intracavity atom. We analyze the correlation properties and driving-strength dependence of the atom-induced phase noise to establish that it corresponds to the long-predicted phenomenon of spontaneous dressed-state polarization. Our experiment thus provides a demonstration of cavity quantum electrodynamics in the strong driving regime, in which one atom interacts strongly with a many-photon cavity field to produce novel quantum stochastic behavior. \end{abstract}
\maketitle
\noindent Current research in single-atom optical cavity quantum electrodynamics (cavity QED)~\cite{Mabu02} largely emphasizes the input-output properties of strongly coupled systems~\cite{Kimb98}, from normal-mode splitting~\cite{Thom92} to photon blockade~\cite{Birn05,Daya08}. While theory has predicted a wide range of quantum nonlinear-optical phenomena in the strong driving regime~\cite{Arme06}, experiments have with few exceptions~\cite{Saue04,Gupt07,Schu08,Brun96} focused on relatively weak driving conditions with average intracavity photon number $\lesssim 1$. Here we report the observation of characteristic high-bandwidth phase noise in the light transmitted through a resonantly driven Fabry-Perot cavity containing one strongly coupled atom and $10$--$100$ photons, confirming long-standing predictions of a phenomenon known as single-atom phase bistability~\cite{Kili91} or spontaneous dressed-state polarization~\cite{Alsi91}. Our results extend cavity-QED studies of quantum stochastic processes beyond the few-photon regime, open the door to experiments on feedback control of dressed-state cascades~\cite{Rein03}, and highlight the relevance of cavity QED in the development of ultra-low power nonlinear optical signal processing.
The Jaynes-Cummings model~\cite{Jayn63}, in which a two-level atom is coupled to a single-mode electromagnetic field, provides robust intuitions regarding the input-output behavior of real atom-cavity systems that incorporate multiple atomic sub-states and are subject to dissipative processes such as cavity decay and atomic spontaneous emission. It is well known that the spectroscopic normal-mode splitting of strongly coupled atom-cavity systems~\cite{Thom92,Boca04} simply reflects the eigenvalues of the lowest Jaynes-Cummings excited states, with peak widths determined by the cavity and atomic decay rates. It is equally true but less appreciated that the Jaynes-Cummings model provides crucial guidance for understanding cavity QED in the strong driving regime, in which the number of atom-cavity excitations grows $\gg 1$ and the model begins to resemble the dressed-state picture used to analyze radiative processes in free space. A clear example arises with the phenomenon of spontaneous dressed-state polarization as described by Alsing and Carmichael~\cite{Alsi91}, in which the highly-excited Jaynes-Cummings model is seen to `factor' into two quasi-harmonic ladders of states. The atom-cavity system tends to localize transiently on one sub-ladder or the other, resulting in one of two different phase shifts of the transmitted light, with stochastic switching between sub-ladders induced by individual atomic spontaneous emission events. We have detected and characterized atom-induced phase fluctuations of this type and achieve quantitative agreement with predictions based on an elementary cavity QED model incorporating Jaynes-Cummings dynamics, dissipation and external driving.
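For orientation we recall the underlying resonant Jaynes-Cummings Hamiltonian (a textbook sketch only; the quantitative analysis in this work further includes driving, dissipation and atomic sub-states): with coupling rate $g$,
\begin{equation*} H = \hbar\omega\, a^\dagger a + \tfrac{1}{2}\hbar\omega\,\sigma_z + \hbar g\left(a^\dagger \sigma_- + a\,\sigma_+\right), \end{equation*}
whose excited eigenstates in the $n$-excitation manifold have energies $E_\pm(n) = n\hbar\omega \pm \hbar g\sqrt{n}$. For $n \gg 1$ the level spacings $E_\pm(n+1)-E_\pm(n) \approx \hbar\omega \pm \hbar g/(2\sqrt{n})$ vary slowly with $n$, yielding the two quasi-harmonic sub-ladders between which spontaneous emission induces stochastic switching.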
We employ a standard cavity QED apparatus~\cite{Mabu96} in which a cloud of laser-cooled $^{133}$Cs atoms is dropped over a high-finesse Fabry-Perot cavity (cavity length $l\approx 72$ $\mu$m, field decay rate $\kappa/2\pi\approx 8$ MHz); the cavity resonance frequency is fixed relative to the atomic $(6S_{1/2},F=4)\rightarrow (6P_{3/2},F=5)$ transition (dipole decay rate $\gamma_\perp/2\pi\approx 2.6$ MHz) using an auxiliary laser~\cite{Mabu99}. The cavity transmission is monitored using a laser probe and balanced homodyne detector. The cloud position and density can be adjusted so that isolated transits of individual atoms through the cavity mode volume are observed in the transmission signal. Atoms free-fall through the cavity but generally are subject to both optical pumping and forces induced by the cavity field. As the cavity mode forms a standing wave with Gaussian transverse profile (waist $\approx 28.5$ $\mu$m), the strength $g$ of the coherent atom-cavity coupling is a function of the atomic position (with maximum value $\approx 16$ MHz in our setup). We select transit events in which optical pumping and the atomic trajectory lead to maximal values of $g$ by initializing the cavity probe in a detuned configuration that produces a real-time photocurrent directly related to $g$. When a set threshold is reached during a single-atom transit, the probe frequency and power are quickly shifted to desired values for data acquisition. Using this triggering scheme we obtain phase-quadrature homodyne data in which near-maximal atom-cavity coupling strength is apparently maintained for up to $\sim 50$ $\mu$s, limited by optical pumping into the dark $(6S_{1/2},F=3)$ hyperfine state.
\begin{figure}
\caption{ (a) Black solid trace is a photocurrent segment recorded with input power such that $N\approx 20$, filtered to a bandwidth of 4 MHz. Blue dashed horizontal lines indicate the standard deviation of the optical shot noise. The atom is optically pumped to a dark state at time $t\sim 35$ $\mu$s, resulting in an abrupt disappearance of the atom-induced excess phase noise. Units are referred to the phase quadrature amplitude of the intra-cavity field. (b) Histogram of filtered photocurrent segments (0.1--8 MHz bandpass) from multiple atom transits; see text for explanation of the theoretical curve.}
\label{fig:trajec}
\end{figure}
Fig.~\ref{fig:trajec}a shows a representative example of the single-shot photocurrents thus obtained. A distinct transition in the signal can be seen at time $t^*\sim 35$ $\mu$s. The photocurrent variance for $t>t^*$ corresponds to optical shot noise while for $t<t^*$ the variance is clearly larger, indicating a significant effect of the atom on light transmitted through the cavity. We interpret the change as an optical pumping event in which the atom is transferred to the dark hyperfine ground state, and have verified that such events can be suppressed by adding an intracavity repumping beam. Although the signal-to-noise ratio in our measurements is limited, a histogram of photocurrent segments from multiple atom transits (Fig.~\ref{fig:trajec}b) reveals a flat-topped distribution supporting the theoretical expectation of random-telegraph (rather than Gaussian) statistics of the atom-induced phase noise. The smooth curve is a theoretical prediction produced by fitting the sum of two Gaussian functions (constrained to have width corresponding to optical shot noise) to histograms generated via quantum trajectory simulations of our cavity QED model~\cite{Carm93,Mabu98}. In the limit of low sampling noise our experimental histogram would be expected more clearly to display such a bimodal distribution, although some deviations from ideal theory would presumably emerge because of residual atomic motion and optical pumping among Zeeman sub-states.
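The bi-Gaussian fitting procedure described above can be sketched as follows. This is an illustrative toy model, not our analysis code: the telegraph amplitude, shot-noise width, and sample count are invented parameters, and only the structure of the fit (two equal-weight Gaussians with width constrained to the optical shot noise) follows the text.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Toy model of a filtered phase-quadrature photocurrent: a random-telegraph
# signal of amplitude A (the atom-induced phase switching) buried in
# Gaussian optical shot noise of known standard deviation.
A, shot_sigma = 1.0, 0.8
telegraph = A * rng.choice([-1.0, 1.0], size=200_000)
samples = telegraph + rng.normal(0.0, shot_sigma, size=telegraph.size)

counts, edges = np.histogram(samples, bins=120, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

def bi_gaussian(x, c1, c2):
    # Sum of two equal-weight Gaussians whose common width is constrained
    # to the optical shot noise, as in the fits of Fig. 1b.
    g = lambda c: np.exp(-(x - c) ** 2 / (2 * shot_sigma ** 2))
    return 0.5 * (g(c1) + g(c2)) / np.sqrt(2 * np.pi * shot_sigma ** 2)

(c1, c2), _ = curve_fit(bi_gaussian, centers, counts, p0=[-0.5, 0.5])
splitting = abs(c2 - c1)   # centroid splitting, cf. Fig. 2
```

With only the two centroids free, the fit recovers the underlying telegraph amplitude even when the histogram itself looks merely flat-topped rather than visibly bimodal.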
\begin{figure}
\caption{ Driving-strength dependence of the splitting (in units of phase quadrature amplitude of the intra-cavity field) between the centroids of bi-Gaussian fits to photocurrent histograms as in Fig.~\ref{fig:trajec}b. The experimental data (points with error bars) are directly compared with theoretical predictions based on quantum trajectory simulations (solid curves) and the cavity QED Master Equation (see text). Blue points and curves are computed with data and simulations filtered to 10 MHz; red points and curves reflect filtering at 2 MHz, which results in a decrease in apparent switching amplitude since 2 MHz is well below the purported switching frequency. Black horizontal dotted lines indicate the splitting predicted by steady-state solution of the Master Equation for large $N$.}
\label{fig:splittings}
\end{figure}
In Fig.~\ref{fig:splittings} we summarize a comparison of experimental and theoretical photocurrent histograms, such as the one depicted in Fig.~\ref{fig:trajec}b, across a range of probe powers. The probe power is conventionally parametrized by the mean intracavity photon number $N$ that would be produced in an empty cavity; note that for $N\gtrsim 5$ the mean intracavity photon number produced with a strongly coupled atom is $\sim N-1$. We display the best-fit values of the two Gaussian centroids obtained in fits to data and simulations; the blue points and theoretical curves are for the maximal signal bandwidth of 10 MHz while the red points and curves are computed with signals and simulations filtered to 2 MHz. The horizontal lines indicate the splitting predicted by steady-state solution of the cavity QED Master Equation in the asymptotic region of large $N$. Our data closely match the predicted sharp onset of atom-induced phase fluctuations for lower values of $N$ and also asymptote correctly for high $N$. That the splitting becomes independent of $N$ for high $N$ is a distinctive feature of spontaneous dressed-state polarization~\cite{Alsi91}.
\begin{figure}
\caption{ Autocorrelation functions of the photocurrent $Y(t)$ (units as in Figs.~\ref{fig:trajec},~\ref{fig:splittings}) obtained under a range of driving parameters. Experimental autocorrelations are computed after ac-filtering the photocurrents (20 kHz cutoff) to suppress artifacts caused by atomic motion and optical pumping. Points are experimental data and curves are theoretical predictions based on the cavity QED Master Equation for four different parameter sets (see legend and text).}
\label{fig:autocorr}
\end{figure}
In Fig.~\ref{fig:autocorr} we display the autocorrelation functions of experimental photocurrent segments for four characteristic sets of probe parameters, together with theoretical predictions. Data points and theoretical curves are displayed for $N=(4,20,56)$ with the atom, cavity and probe frequencies all coincident, and for $N=20$ with the cavity and probe frequencies coincident but 40 MHz below the atomic resonance frequency. The vertical line indicates the predicted average period ($2/\gamma_\perp$) of spontaneous phase-switching events. In the resonant cases, the rapid growth with $N$ of predominantly short-timescale photocurrent fluctuations agrees with theory and reinforces our identification of the atom-induced phase noise with switching caused by spontaneous emission. The increased correlation timescale for the detuned case follows from a tendency of the atom-cavity system to favor one of the sub-ladders discussed by Alsing and Carmichael~\cite{Alsi91}. We should note that theory predicts identical autocorrelation functions for atom-cavity detuning of either positive or negative sign, but in the experiment setting $\Delta<0$ results in a drastic reduction of atom transit signals as a result of repulsive mechanical forces exerted by the detuned intracavity light.
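As a sketch of how such autocorrelation functions behave, the following toy computation (invented switching rate and noise level, not our experimental parameters) shows that a random-telegraph process retains correlation over the switching timescale, while added shot noise contributes only at zero lag:

```python
import numpy as np

rng = np.random.default_rng(1)

# Symmetric random-telegraph signal: flip probability `rate` per sample,
# so its autocorrelation decays roughly as exp(-2*rate*lag).
n, rate, sigma = 400_000, 0.01, 0.5
flips = rng.random(n) < rate
state = np.where(np.cumsum(flips) % 2 == 0, 1.0, -1.0)
y = state + rng.normal(0.0, sigma, n)   # telegraph plus shot noise

def autocorr(x, max_lag):
    # Normalized sample autocorrelation at lags 0..max_lag-1.
    x = x - x.mean()
    var = np.dot(x, x) / x.size
    return np.array([np.dot(x[: x.size - k], x[k:]) / ((x.size - k) * var)
                     for k in range(max_lag)])

c = autocorr(y, 400)
# c[0] = 1; beyond lag 0 the uncorrelated noise drops out and c decays like
# exp(-2*rate*lag), scaled by the telegraph's share 1/(1+sigma**2) of the variance.
```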
\begin{figure}
\caption{ (a) Theoretical reconstruction (red) produced via posterior decoding~\cite{Durb98} of a two-state switching trajectory from a segment of the experimental homodyne photocurrent (black, 10 MHz bandwidth, $N\approx 37$). Note that at time $t\sim 43$ $\mu$s the reconstruction algorithm correctly identifies the end of the atom-induced fluctuations and infers a `dark' signal state with zero mean and Gaussian shot-noise fluctuations only. (b) Quantum trajectory simulation of the expected switching behavior in a cavity QED system (conditional expectation value of the phase quadrature amplitude of the {\em intra}-cavity field) using parameter values of the current experiment. (c) Simulated phase-quadrature homodyne photocurrent corresponding to (b), including shot-noise and finite bandwidth as in the experimental data. Note that the durations of the simulations in (b) and (c) are $\approx 10$ $\mu$s.}
\label{fig:bimodal}
\end{figure}
Although statistical comparisons strongly support the conclusion that our data reflect atom-induced phase noise associated with spontaneous dressed-state polarization, direct visualization of `bistable' switching in our measured photocurrents is difficult because of the modest ratios $g:\kappa:\gamma_\perp$ realized in our experiment. We have been limited in this regard by the properties of the mirrors we had available at the time the experiment was assembled (significant improvements should be possible with commercial mirror technology~\cite{Mabu98}), and by a pronounced birefringence of the assembled cavity that forced us to use linear rather than circular probe polarization~\cite{Birn05b}. Furthermore our maximum digitization rate in recording the photocurrent was $2.5\times 10^7$ samples per second, whereas the switching rate induced by atomic spontaneous emission should be $\gamma_\perp/2\approx 8$ MHz. It is nevertheless possible to utilize standard techniques for hidden Markov models (HMM's) to attempt to reconstruct two-state switching trajectories from individual photocurrent records. In Fig.~\ref{fig:bimodal} the red trace shows the result of applying a standard posterior decoding algorithm~\cite{Durb98} to the photocurrent shown in black, assuming a simple three-state HMM in which the states correspond to negative, positive, and zero values of the conditional expectation of the intra-cavity phase quadrature amplitude~\cite{PW09}. The red trace schematically indicates, at each point in time, which of the three states has highest posterior probability. Although we are unable directly to check the accuracy of this reconstruction, numerical experiments based on simulations such as the one shown in Figs.~\ref{fig:bimodal}b,c predict an error rate $\sim 10\%$.
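The posterior-decoding step can be sketched with a standard forward-backward computation on a toy three-state HMM. The transition probabilities, level separation, and noise width below are invented for illustration and are not calibrated to our apparatus; the three states mimic the negative, positive, and dark (zero) levels of the conditional phase quadrature.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical three-state model: conditional phase quadrature at -A, +A
# (atom strongly coupled) or 0 (atom pumped to the dark state).
means = np.array([-1.0, 1.0, 0.0])
sigma = 0.7                       # invented per-sample shot-noise width

# Invented transition matrix: the +/- states switch with each other and
# either can pump irreversibly into the absorbing dark state.
P = np.array([[0.980, 0.015, 0.005],
              [0.015, 0.980, 0.005],
              [0.000, 0.000, 1.000]])
pi0 = np.array([0.5, 0.5, 0.0])

# Simulate a hidden switching trajectory and a noisy "photocurrent".
T_len = 2000
z = np.empty(T_len, dtype=int)
z[0] = rng.choice(3, p=pi0)
for t in range(1, T_len):
    z[t] = rng.choice(3, p=P[z[t - 1]])
obs = means[z] + rng.normal(0.0, sigma, T_len)

def posterior_decode(obs, pi0, P, means, sigma):
    """Forward-backward smoothing: marginal posterior over states."""
    n, k = obs.size, means.size
    B = np.exp(-(obs[:, None] - means[None, :]) ** 2 / (2 * sigma ** 2))
    alpha = np.empty((n, k))
    beta = np.empty((n, k))
    alpha[0] = pi0 * B[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, n):
        alpha[t] = (alpha[t - 1] @ P) * B[t]
        alpha[t] /= alpha[t].sum()        # rescale for numerical stability
    beta[-1] = 1.0
    for t in range(n - 2, -1, -1):
        beta[t] = P @ (B[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)

post = posterior_decode(obs, pi0, P, means, sigma)
decoded = post.argmax(axis=1)     # state of highest posterior probability
accuracy = np.mean(decoded == z)
```

Because the states are strongly persistent, smoothing over time resolves levels that are not separable sample by sample; decoding errors concentrate near the switching instants.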
The demonstration of spontaneous dressed-state polarization constitutes an important step in the study of nonlinear optical phenomena at ultra-low energy scales, where quantum fluctuations interact with nonlinear mean-field dynamics to generate complex stochastic behavior~\cite{Arme06}. Single-atom cavity QED provides an important setting for such studies, as theoretical models that accurately capture the behavior of real physical systems can be formulated with sufficiently low variable count to enable direct numerical simulations for conceptual analysis and comparisons with experiment. These models, which in some cases are amenable to systematic reduction techniques~\cite{VanH06,Niel09}, provide a unique resource for research in non-equilibrium quantum statistical mechanics and on quantum--classical correspondence in nonlinear dynamical systems~\cite{Arme06}.
Cavity QED in the strong driving regime may also be of interest for exploratory studies in attojoule nonlinear optics. While the difficulties of all-optical information processing are well known~\cite{Smit84,Mill90}, the prospect of photonic interconnect layers within microprocessors~\cite{Beau07} is reviving interest in ultra-low energy optical switching. The characteristic scale of 50 photons corresponds to an energy $\approx 12$ aJ, representing an operating regime for optical switching that lies significantly below foreseeable improvements in existing technology but stays above the single-photon level where propagation losses and signal regeneration would seem to be dominant engineering concerns. Ultra-low energy nonlinear optical effects should be achievable with nanophotonic implementations of cavity QED~\cite{Srin07,Fara08}, providing a potential path toward large-scale integration. Even with our modest values of $g$ and $\kappa$, the atom-induced phase shift of light transmitted through the cavity is $\pm 0.15$ rad for input power corresponding to $N=20$. Each atomic spontaneous emission event that switches the phase of the transmitted light dissipates a mere $0.23$ aJ of energy, while the light transmitted through the cavity during a typical interval between switching events carries an optical signal energy $\approx 3.3$ aJ at $N=20$. Such a 1:10 ratio between the switching energy and the energy of the controlled signal in nonlinear-optical phase modulation would be highly desirable for the implementation of cascadable photonic logic devices, and is not generally achieved in schemes based on single-photon saturation effects in cavity QED~\cite{Turc95,Engl07}. 
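The attojoule figures quoted above follow from the photon energy at the Cs D2 wavelength (852 nm, a standard value for the $6S_{1/2}\rightarrow 6P_{3/2}$ transition cited earlier); a quick arithmetic check:

```python
# Photon energy at the Cs D2 line (852 nm) and the resulting attojoule
# energy scales quoted in the text.
h = 6.62607015e-34      # Planck constant, J s
c = 2.99792458e8        # speed of light, m/s
lam = 852e-9            # Cs D2 wavelength, m

E_photon_aJ = h * c / lam * 1e18        # one photon, in attojoules
E_switch_aJ = E_photon_aJ               # one spontaneous emission per switch
E_50_aJ = 50 * E_photon_aJ              # the 50-photon characteristic scale
```

One photon comes to roughly 0.23 aJ, so a single spontaneous-emission switching event dissipates $\approx 0.23$ aJ and the 50-photon scale is $\approx 12$ aJ, consistent with the numbers in the text.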
While it is certainly not clear whether the specific phenomenon of spontaneous dressed-state polarization can be exploited in the design of practicable switching devices, we hope that our demonstration will serve to draw some attention up from the bottom of the Jaynes-Cummings ladder towards the strong-driving regime of cavity QED.
This research has been supported by the NSF under PHY-0354964, by the ONR under N00014-05-1-0420 and by the ARO under W911NF-09-1-0045. AEM acknowledges the support of a Hertz Fellowship. We thank David Miller for enlightening conversations and Dmitri Pavlichin for crucial technical assistance in the preparation of Fig.~\ref{fig:bimodal}a.
\end{document}
\begin{document}
\title[Invariant Ring Extensions]{Ring Extensions Invariant Under\\ Group Action}
\author{Amy Schmidt}
\address{Department of Mathematics, George Mason University, Fairfax, Virginia 22030-4444, {\tt E-mail: aschmid9@masonlive.gmu.edu}}
\begin{abstract} Let $G$ be a subgroup of the automorphism group of a commutative ring $T$ with identity, and let $R$ be a subring of $T$ that is invariant under the action of $G$. We show that $R^G\subset T^G$ is a minimal ring extension whenever $R\subset T$ is a minimal extension, under various assumptions. Both types of minimal ring extension, integral and integrally closed, pass from $R\subset T$ to $R^G\subset T^G$. An integrally closed minimal ring extension is a flat epimorphic extension as well as a normal pair. We show that each of these properties also passes from $R\subset T$ to $R^G\subseteq T^G$ under suitable group actions. \end{abstract}
\keywords{Fixed ring, ring of invariants, invariant theory, locally finite, minimal ring extension, flat epimorphism, normal pair}
\subjclass[2010]{Primary 13A50, 13B02; Secondary 13B21, 13A15.}
\maketitle \begin{center} \today \end{center}
\section{Introduction} \label{intro}
All rings herein are commutative with identity, and all homomorphisms and subrings are unital. For a ring $R$, we denote by $\text{Reg}(R)$ the set of regular elements; $\text{Spec}(R)$ the set of prime ideals; $\text{Max}(R)$ the set of maximal ideals; $\text{Rad}_R(I)$ the radical in $R$ of an ideal $I\subset R$; $\text{tq}(R)$ the total quotient ring; $\text{qf}(R)$ the quotient field, if $R$ is a domain; and $\text{Aut}(R)$ the automorphism group of $R$. As in \cite{Kaplansky}, we refer to the lying-over, going-up, and incomparable properties of ring extensions as LO, GU, and INC, respectively.
Given a subgroup $G$ of $\text{Aut}(R)$, we say $G$ acts on $R$ and denote the fixed ring of this action by $R^G=\{r\in R\, \mid \, \sigma (r)=r \text{ for all } \sigma\in G\}$. We say a property of $R$ is \textit{($G$-)invariant} if $R^G$ also has the property. Our purpose in this paper is to enhance the popular investigation of which ring-theoretic properties are invariant. As the title of this paper suggests, we determine properties of the ring extension $R\subseteq T$ that are $G$-invariant, meaning the property descends to the fixed subring extension $R^G\subseteq T^G$.
\textbf{Our riding assumptions in this work are $R$ is a subring of $T$, $G$ acts on $T$ via automorphisms, and $R$ is $G$-invariant, i.e., $\sigma(R)\subseteq R$ for all $\sigma\in G$.} It then follows that $G$ is a subgroup of $\text{Aut}(R)$.
We denote the orbit of $t\in T$ under $G$ by $\mathcal{O}_t$, i.e., $\mathcal{O}_t=\{\sigma(t)\, \mid \, \sigma\in G\}$, and we define \[
n_t:=|\mathcal{O}_t|,\quad\hat t:=\sum_{t_i\in\mathcal{O}_t}t_i\quad\text{and}\quad \tilde t:=\prod_{t_i\in\mathcal{O}_t}t_i. \] If $G$ is finite, instead we denote by $\hat t$ the sum $\sum_{\sigma\in G}\sigma(t)$. We say $G$ is \textit{locally finite} if $\mathcal{O}_t$ is finite for all $t\in T$. Given an ideal $I\subset T$ we denote the orbit of $I$ under $G$ by $\mathcal{O}_I:=\{\sigma(I)\, \mid \, \sigma\in G\}$. By the First Isomorphism Theorem, $T/I\cong T/\sigma(I)$. Clearly, $T/I$ is a field (domain) if and only if $T/\sigma(I)$ is a field (domain). Hence, $I$ is a maximal (prime) ideal if and only if $\sigma(I)$ is a maximal (prime) ideal. We say $G$ is \textit{strongly locally finite} if $G$ is locally finite and $\mathcal{O}_P$ is finite for all $P\in\text{Spec}(T)$.
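To make the orbit machinery concrete, here is a small illustrative computation (not taken from the paper): take $T=\mathbb{Z}[i]$, represented as integer pairs, with $G$ generated by complex conjugation, so that $T^G=\mathbb{Z}$ and $G$ is locally finite with $n_t\le 2$.

```python
from functools import reduce

# T = Z[i] as pairs (a, b) <-> a + b*i; G generated by conjugation.
def conj(t):
    a, b = t
    return (a, -b)

def add(s, t): return (s[0] + t[0], s[1] + t[1])
def mul(s, t): return (s[0]*t[0] - s[1]*t[1], s[0]*t[1] + s[1]*t[0])

def orbit(t, gens=(conj,)):
    # Closure of {t} under the generating automorphisms.
    seen, frontier = {t}, [t]
    while frontier:
        x = frontier.pop()
        for g in gens:
            y = g(x)
            if y not in seen:
                seen.add(y)
                frontier.append(y)
    return seen

t = (2, 3)                        # t = 2 + 3i
O_t = orbit(t)
n_t = len(O_t)                    # orbit size
hat_t = reduce(add, O_t)          # \hat t: sum over the orbit
tilde_t = reduce(mul, O_t)        # \tilde t: product over the orbit
# Both \hat t and \tilde t land in the fixed ring T^G = Z.
```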
As in \cite{FO}, we say $R\subset T$ is a \textit{minimal ring extension} if there is no ring $S$ such that $R\subset S \subset T$. Clearly, this is true if and only if $T=R[u]$ for all $u\in T\backslash R$. Since $R\subseteq \bar R \subseteq T$, where $\bar R$ is the integral closure of $R$ in $T$, if $R\subset T$ is minimal, then either $R$ is integrally closed in $T$, or $T$ is integral over $R$ (equivalently, $T$ is module finite over $R$). In the first case we call $R\subset T$ an \textit{integrally closed minimal ring extension}, and in the second case, we call it an \textit{integral minimal ring extension}. By \cite[Th\'eor\`eme~2.2]{FO}, if $R\subset T$ is a minimal ring extension, there exists a unique maximal ideal $M$ of $R$ such that $R_P\cong T_P$ for all $P\in\text{Spec}(R)\backslash\{M\}$. This maximal ideal is commonly referred to as the \textit{crucial maximal ideal} of the extension. In the integral case, $(R:_RT)$ is the crucial maximal ideal, while in the integrally closed case, $(R:_RT)$ is a prime ideal adjacent to the crucial maximal ideal.
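For orientation, we recall two standard examples (classical, and not specific to this paper). The extension $\mathbb{Z}\subset \mathbb{Z}[1/p]$, for a prime $p$, is an integrally closed minimal ring extension: its crucial maximal ideal is $p\mathbb{Z}$, and its conductor is the prime ideal $(0)$, which is adjacent to $p\mathbb{Z}$. On the other hand, $\mathbb{R}\subset\mathbb{C}$ is an integral minimal ring extension (indeed a minimal field extension) whose conductor and crucial maximal ideal are both $(0)$.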
In 1970, Ferrand and Olivier contributed to the groundbreaking work of classifying minimal ring extensions by determining the minimal ring extensions of a field \cite{FO}. More recently, Ayache extended this work to integrally closed domains \cite{Ayache}. Shortly thereafter, Dobbs and Shapiro generalized these results further to arbitrary domains in \cite{DS2006} and then later to certain rings with zero-divisors in \cite{DSMRE}. In their second paper, they completely classify the integral minimal ring extensions of an arbitrary ring, as well as the integrally closed minimal ring extensions of a ring with von Neumann regular total quotient ring \cite{DSMRE}. In \cite{Picavets} (cf.\ \cite{DMP}), Picavet and Picavet-L'Hermitte give another characterization of integral minimal ring extensions. In \cite{CahenDobbsLucas}, Cahen et al.\ characterize integrally closed minimal ring extensions of an arbitrary ring.
In Section~\ref{integral section}, under the assumption $R\subset T$ is an integral minimal ring extension and $G$ is locally finite acting on $T$ (such that $R$ is $G$-invariant), we show $R^G\subset T^G$ is an integral minimal ring extension under mild hypotheses. To do so we use \cite[Theorem~3.3]{Picavets}, given in Theorem~\ref{integral mre} for reference. We present examples to show it is necessary to assume $R^G\neq T^G$. In one example, we use the idealization construction. Given a ring $R$ and an $R$-module $M$, the \textit{idealization} $R(+)M=\{(r,m)\, \mid \, r\in R,\,m\in M\}$ is a ring with multiplication given by $(r,m)(r',m')=(rr',rm'+r'm)$ and componentwise addition. By \cite[Theorem~2.4]{DobbsMRE}, $R(+)M$ is a minimal ring extension of $R$ if and only if $M$ is a simple $R$-module.
In Section~\ref{integrally closed section}, we turn to the integrally closed case. In Theorem~\ref{integrally closed mre invariance}, we show arbitrary integrally closed minimal ring extensions are $G$-invariant assuming $G$ is locally finite. This invariance is established in \cite[Theorem~3.6]{DS2007Houston} under the assumptions that the base ring is a domain in which $|G|\in\mathbb{N}$ is a unit. The authors use the characterization of the minimal overrings of an integrally closed domain (that is not a field) by Ayache \cite[Theorem~2.4]{Ayache}. This result is generalized by Dobbs and Shapiro \cite[Theorem~3.7]{DSMRE}, and then further generalized by Cahen et al.\ \cite[Theorem~3.5]{CahenDobbsLucas}. The latter authors introduce a new classification of integrally closed minimal ring extensions of an arbitrary ring in terms of rank 1 valuation pairs.
For an extension $R\subset T$ and a prime ideal $P\subset R$, we say $(R,P)$ is a \textit{valuation pair of $T$} if there exists a valuation $v$ on $T$ such that $R=\{t\in T\, \mid \, v(t)\geq 0\}$ and $P=\{t\in T\, \mid \, v(t)>0\}$ as in \cite{Manis} (cf.\ \cite{CahenDobbsLucas}). Equivalently, $(R,P)$ is a valuation pair of $T$ if $R=S$ whenever $S$ is an intermediate ring containing a prime ideal lying over $P$. The \textit{rank} of $(R,P)$ is the rank of the valuation group. A useful necessary and sufficient condition for $(R,P)$ to have rank 1 is that $P$ is a critical ideal \cite[Lemma~2.12]{CahenDobbsLucas}. Cahen et al.\ define a \textit{critical ideal (for $R\subset T$)} as an ideal $I\subset R$ such that $I=\text{Rad}_R((R:_Rt))$ for all $t\in T\backslash R$. That is, $\text{Rad}_R((R:_Rt))$ is the same ideal for all $t\in T\backslash R$. While such an ideal may not exist for some extensions, if it does, clearly it is unique.
In Section 4, we show certain ring extensions related to minimal ring extensions are also invariant. It is easy to see the integral and integrally closed properties are invariant. Other related extensions are flat epimorphic extensions and normal pairs. As in \cite{Davis}, for an extension $R\subset T$, we say $(R,T)$ is a \textit{normal pair} if every intermediate ring is integrally closed in $T$. Clearly integrally closed minimal ring extensions are normal pairs; they are also flat epimorphic extensions \cite[Th\'eor\`eme~2.2]{FO}. (Throughout, we mean epimorphic in the category of commutative rings.) In Proposition~\ref{perfect localization}, we show that flat epimorphic extensions are invariant under strongly locally finite group action. Lastly, in Corollary~\ref{normal pair invariance}, we assert normal pairs are invariant.
\section{Integral Minimal Ring Extensions} \label{integral section}
We begin with a well-known result that is fundamental in this work and in much of the work by Dobbs and Shapiro \cite{DS2006Houston}, \cite{DS2007Houston}, \cite{DS2007}. These papers on invariant theory are a strong influence on our work.
\begin{lemma} \label{integrality} If $G$ is locally finite, then $T$ is integral over $T^G$. \end{lemma}
Recall our riding assumptions in this paper: $R$ is a subring of $T$, $G$ acts on $T$, and $R$ is $G$-invariant. In the following lemma we establish several technical results needed for the main result of this section. Proposition~\ref{fixed quotient} is another tool for the main result and is also of independent interest.
\begin{lemma} \label{conductor lemma} Assume $G$ is locally finite, and assume $M:=(R:_RT)$ is a maximal ideal of $R$. Set $\textfrak{m}:=M\cap R^G =M\cap T^G$. Then: \begin{enumerate}[(a)] \item \label{contraction} The conductor $(R^G:_{R^G}T^G)$ is $\textfrak{m}$. \item \label{orbit} The orbit of $M$ in $R$ is a singleton set, i.e., $\mathcal{O}_{M}=\{M\}$. \item \label{lying over M} If there exists $N\in\text{Spec}(T)$ containing $M$, then $M=N\cap R$. \end{enumerate} \end{lemma} \begin{proof} \begin{inparaenum}[(a)] \item Let $x\in \textfrak{m}$. Then $x\in R^G$, and $xt\in R$ for all $t\in T$. If $t\in T^G$, then $xt\in T^G$, from which it follows that $xt\in T^G\cap R=R^G$. Hence $x\in(R^G:_{R^G}T^G)$. Thus $\textfrak{m}\subseteq (R^G:_{R^G}T^G)$. By Lemma~\ref{integrality}, $R$ is integral over $R^G$. Hence $\textfrak{m}$ is maximal in $R^G$. Thus $\textfrak{m}=(R^G:_{R^G}T^G)$.
\item Let $\sigma\in G$. Then \[ \sigma(M)R=\sigma(M)\sigma(R)=\sigma(MR)\subseteq\sigma(R)=R. \] Hence $\sigma(M)\subseteq M$. This is sufficient to show $\sigma(M)=M$ in $R$, since $\sigma(M)$ is maximal in $R$, by \cite[Lemma~2.1(b)]{DS2006Houston}.
\item Clearly $M=N\cap R$ whenever $N$ is a prime ideal of $T$ containing $M$, since $M\in\text{Max}(R)$. \end{inparaenum} \end{proof}
\begin{proposition} \label{fixed quotient} Let $M\in\text{Max}(R)$ and $\textfrak{m}:=M\cap R^G$. Assume $G$ is locally finite such that $\text{char}(R^G/\textfrak{m})\nmid n_r$ for all $r\in R$. If $\mathcal{O}_M=\{M\}$, then the $G$-action extends to $R/M$ via $\sigma(r+M)=\sigma(r)+M$, for $\sigma\in G$. Moreover $R^G/\textfrak{m}\cong (R/M)^G$. \end{proposition} \begin{proof} The given action of $G$ on $R/M$ is well-defined: If $r+M=s+M$, then $\sigma(r)-\sigma(s)\in\sigma(M)= M$. Hence $\sigma(r)+M=\sigma(s)+M$.
As for the moreover, first note $\textfrak{m}\in\text{Max}(R^G)$, by Lemma~\ref{integrality}. Define $\phi:R^G/\textfrak{m}\rightarrow (R/M)^G$ by $r+\textfrak{m}\mapsto r+M$. Clearly, $\phi$ is a ring homomorphism. If $\phi(r+\textfrak{m})=0+M$, then $r\in M$. It follows that $r\in M\cap R^G=\textfrak{m}$, so $r+\textfrak{m}=0+\textfrak{m}$. Hence $\phi$ is injective.
Now let $r+M\in (R/M)^G$. Then $r+M=\sigma(r)+M$ for all $\sigma\in G$. Summing $\mathcal{O}_r$ we have $n_rr+M=\hat r+M$. Since $R/M$ is a field, we have $r+M=(n_r+M)^{-1}(\hat r+M)$. Similarly, since $n_r+\textfrak{m}\in R^G/\textfrak{m}$, we have $y+\textfrak{m}:=(n_r+\textfrak{m})^{-1}\in R^G/\textfrak{m}$. It follows that $y+M=(n_r+M)^{-1}$, whence $\phi(y\hat r+\textfrak{m})=y\hat r+M=(n_r+M)^{-1}(\hat r+M)=r+M$. Thus $\phi$ is surjective. Hence $R^G/\textfrak{m}\cong (R/M)^G$. \end{proof}
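Proposition~\ref{fixed quotient} can be checked by hand in a small case (an illustrative computation, not part of the proof): take $R=\mathbb{Z}[i]$ with $G$ generated by conjugation and $M=(3)$, which is inert, so $R/M\cong\mathbb{F}_9$. Here $R^G=\mathbb{Z}$, $\textfrak{m}=3\mathbb{Z}$, the characteristic hypothesis holds since $3\nmid n_r\in\{1,2\}$, and conjugation fixes exactly the subfield $\mathbb{F}_3\cong R^G/\textfrak{m}$.

```python
# R/M = F_9 realized as pairs (a, b) mod 3 <-> a + b*i; conjugation
# descends to R/M since conj(M) = M.
p = 3
F9 = [(a, b) for a in range(p) for b in range(p)]
conj = lambda t: (t[0], (-t[1]) % p)

fixed = [t for t in F9 if conj(t) == t]   # (R/M)^G
# The fixed elements are exactly {(a, 0)}: a copy of F_3 = R^G/m.
```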
The technique of averaging the orbit of an element used above to produce $r+M=(n_r+M)^{-1}(\hat r+M)$ is introduced in \cite{Bergman}. We generalize this method in the following lemma.
\begin{lemma} \label{fixed sums} Assume $G$ is locally finite, and let $t\in T^G$. If $t=r_1u_1+r_2u_2+\cdots+r_ku_k$ for some $r_i\in R$ and $u_i\in T^G$, then there exist $m,m_i\in\mathbb{N}$ and $r_i'\in R^G$ such that $0\neq mt=m_1r_1'u_1+m_2r_2'u_2+\cdots+m_kr_k'u_k$, whenever \begin{enumerate}[(a)] \item $T$ is a domain and $\text{char}(T)\nmid n_t$ for all $t\in T$, or
\item $|G|$ is finite and a unit in $T$. \end{enumerate} \end{lemma} \begin{proof}
For all $t\in T$, fix a subset $\mathcal{N}_t$ of $G$ such that for each $a\in\mathcal{O}_t$ there exists a unique $\sigma\in\mathcal{N}_t$ with $a=\sigma(t)$ (and so $|\mathcal{N}_t|=|\mathcal{O}_t|=n_t$).
First we show if \begin{equation} \label{equation 1} 0\neq t=q_1u_1+\cdots+q_iu_i+r_{i+1}u_{i+1}+\cdots+r_ku_k, \end{equation} where $t\in T^G$, $q_i\in R^G$, and $r_j\in R$, then there exist $m\in\mathbb{N}$, $r_{i+1}'\in R^G$, and $s_j\in R$ such that \begin{equation} \label{equation 2} 0\neq mt=m(q_1u_1+\cdots+q_iu_i)+r_{i+1}'u_{i+1}+s_{i+2}u_{i+2}+\cdots+s_ku_k. \end{equation} Applying each $\sigma\in\mathcal{N}_{r_{i+1}}$ to (\ref{equation 1}) and summing establishes (\ref{equation 2}). In particular, \[ m=n_{r_{i+1}},\quad r_{i+1}'=\widehat{r}_{i+1},\quad\text{and}\quad s_j=\sum_{\sigma\in\mathcal{N}_{r_{i+1}}}\sigma(r_j), \]
for $i+2\leq j\leq k$. Note $n_{r_{i+1}}t\neq 0$ under assumption (a). Since the case $i=1$ establishes the base case, the assertion of the lemma now follows by induction. Under assumption (b), the same argument holds replacing $\mathcal{N}_{r_{i+1}}$ with $G$ and $n_{r_{i+1}}$ with $|G|$. \end{proof}
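The inductive symmetrization in the proof can be traced algorithmically. The following sketch works in the illustrative setting $T=\mathbb{Z}[i]$ (as integer pairs) with $G$ generated by conjugation, so $T^G=\mathbb{Z}$; it fixes one coefficient at a time by applying-and-summing over the orbit, exactly as in the proof, absorbing the factors $m_i$ into the coefficients.

```python
def conj(t): return (t[0], -t[1])
def add(s, t): return (s[0] + t[0], s[1] + t[1])
def scale(k, t): return (k * t[0], k * t[1])

def symmetrize(t, terms):
    """terms is a list of (r_i, u_i) with r_i in Z[i] (as pairs) and
    u_i in Z = T^G; the invariant sum(r_i * u_i) == m * t is preserved
    while an increasing prefix of the r_i is forced into T^G."""
    m = 1
    for i in range(len(terms)):
        r_i = terms[i][0]
        # N_{r_i}: one automorphism per element of the orbit of r_i.
        sigmas = [lambda x: x] if conj(r_i) == r_i else [lambda x: x, conj]
        n = len(sigmas)
        m *= n
        new = []
        for j, (r, u) in enumerate(terms):
            if j < i:
                new.append((scale(n, r), u))    # already fixed: just scale
            else:
                s = (0, 0)
                for sig in sigmas:              # apply-and-sum over N_{r_i}
                    s = add(s, sig(r))
                new.append((s, u))
        terms = new
    return m, terms

# Example: t = 7 in T^G written as (3+2i)*1 + (4-2i)*1.
m, fixed_terms = symmetrize(7, [((3, 2), 1), ((4, -2), 1)])
```

After symmetrization every coefficient is real (lies in $T^G$) and the identity $\sum_i r_i' u_i = m t$ still holds, mirroring the conclusion of the lemma.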
We have established the machinery needed to prove the main result of this section. We use the characterization provided below for reference.
\begin{theorem}\cite[Theorem~3.3]{Picavets} (cf. \cite[Corollary~II.2]{DMP}) \label{integral mre} Let $R\rightarrow T$ be an injective ring homomorphism, with conductor $(R:_RT)$. Then $R\rightarrow T$ is minimal and finite if and only if $(R:_RT)\in\text{Max}(R)$ and one of the following three conditions holds: \begin{enumerate}[(a)] \item\label{inert} \textbf{Inert case:} $(R:_RT)\in\text{Max}(T)$ and $R/(R:_RT)\rightarrow T/(R:_RT)$ is a minimal field extension. \item\label{decomposed} \textbf{Decomposed case:} There exist $N_1,N_2\in\text{Max}(T)$ such that $(R:_RT)=N_1\cap N_2$ and the natural maps $R/(R:_RT)\rightarrow T/N_1$ and $R/(R:_RT)\rightarrow T/N_2$ are each isomorphisms. \item\label{ramified} \textbf{Ramified case:} There exists $N\in\text{Max}(T)$ such that $N^2\subseteq(R:_RT) \subset N$, $[T/(R:_RT):R/(R:_RT)]=2$ and the natural map $R/(R:_RT)\rightarrow T/N$ is an isomorphism. \end{enumerate} \end{theorem}
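By way of illustration (this classification is from \cite{FO} and is recalled here only for orientation), all three cases occur already over a field $k$: up to $k$-algebra isomorphism, the minimal ring extensions of $k$ are the minimal field extensions of $k$ (inert case), $k\times k$ (decomposed case), and $k[x]/(x^2)$ (ramified case), in each case with conductor and crucial maximal ideal $(0)$.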
We now present our main result on the invariance of integral minimal extensions.
\begin{theorem} \label{integral mre invariance} Let $R\subset T$ be an integral minimal extension with crucial maximal ideal $M=(R:_RT)$. Assume $G$ is locally finite such that $R^G\neq T^G$ and $\text{char}(R^G/(M\cap T^G))\nmid n_r$, for all $r\in R$. Then $R^G\subset T^G$ is a minimal extension of the same type as $R\subset T$. Moreover, the crucial maximal ideal of $R^G\subset T^G$ is $(R^G:_{R^G}T^G)$. \end{theorem} \begin{proof} Throughout the argument, set $\textfrak{m}:=(R^G:_{R^G}T^G)$, whence $\textfrak{m}=M\cap R^G=M\cap T^G$, by Lemma~\ref{conductor lemma}(\ref{contraction}).
\begin{inparaenum} \item[\textbf{Inert case:}] By Theorem~\ref{integral mre}(\ref{inert}), $M\in\text{Max}(T)$ and $R/M\rightarrow T/M$ is a minimal field extension. By Lemma~\ref{conductor lemma}(\ref{orbit}) and Proposition~\ref{fixed quotient}, we may pass to $R/M\subset T/M$. Replacing $R/M\subset T/M$ with $R\subset T$, we show $R^G\subset T^G$ is a minimal field extension. Clearly this is true if and only if for all $u\in T^G\backslash R^G$, $T^G=R^G[u]$. If $u\in T^G\backslash R^G$, then $u\in T\backslash R$, so $T=R[u]$. Let $t\in T^G$. Then $t=r_ku^k+\cdots+r_1u+r_0$, for some $k\in\mathbb{N}$ and $r_i\in R$. By Lemma~\ref{fixed sums}, there exist $m,m_i\in\mathbb{N}$ and $r_i'\in R^G$ such that $0\neq mt=m_k r_k'u^k+\cdots+m_1 r_1'u+ m_0r_0'$. Since $R^G$ is a field, we have $t=m^{-1}(m_k r_k'u^k+\cdots+m_1 r_1'u+m_0 r_0')\in R^G[u]$. Hence, $R^G\subset T^G$ is a minimal field extension. By Theorem~\ref{integral mre}(\ref{inert}), the original fixed ring extension (before passing to the quotient ring extension) $R^G\subset T^G$ is an inert integral minimal extension with crucial maximal ideal $\textfrak{m}=(R^G:_{R^G}T^G)$.
\item[\textbf{Decomposed case:}] By Theorem~\ref{integral mre}(\ref{decomposed}), there exist $N_1,N_2\in\text{Max}(T)$ such that $M=N_1\cap N_2$ and the natural maps $R/M\rightarrow T/N_1$ and $R/M\rightarrow T/N_2$ are isomorphisms. Set $\textfrak{n}_1:=N_1\cap T^G$ and $\textfrak{n}_2:=N_2\cap T^G$. Since $T$ is integral over $T^G$, $\textfrak{n}_1,\textfrak{n}_2\in\text{Max}(T^G)$. Clearly \[ \textfrak{m}=M\cap T^G=(N_1\cap N_2)\cap T^G=\textfrak{n}_1\cap \textfrak{n}_2. \]
Define $\phi:R^G/\textfrak{m}\rightarrow T^G/\textfrak{n}_1$ via the natural map $r+\textfrak{m}\mapsto r+\textfrak{n}_1$. Suppose $\phi(r+\textfrak{m})=0+\textfrak{n}_1$ for some $r\in R^G$. Then $r\in \textfrak{n}_1\cap R^G$, but, by Lemma~\ref{conductor lemma}(\ref{lying over M}), $\textfrak{n}_1\cap R^G=\textfrak{m}$. Hence, $r+\textfrak{m}=0+\textfrak{m}$. Thus, $\phi$ is injective.
To show $\phi$ is surjective, we first note the $G$-action extends to $T/N_1$, since it extends to $R/M$ and $R/M\cong T/N_1$. From Lemma~\ref{conductor lemma}(\ref{orbit}) and Proposition~\ref{fixed quotient}, we have $R^G/\textfrak{m}\cong(R/M)^G\cong(T/N_1)^G$. Let $t+\textfrak{n}_1\in T^G/\textfrak{n}_1$ be nonzero. Then $ t+N_1\in (T/N_1)^G$ is nonzero. (Clearly it is fixed, and if $t\in N_1$, then $t\in N_1\cap T^G=\textfrak{n}_1$ -- contradiction.) Since $R^G/\textfrak{m}\cong(T/N_1)^G$ (via composition of the natural maps), there exists $r+\textfrak{m}\in R^G/\textfrak{m}$ such that $r+\textfrak{m} \mapsto r+M\mapsto r+N_1=t+N_1$. It follows that $(r-t)\in N_1\cap T^G=\textfrak{n}_1$. Hence $\phi(r+\textfrak{m})=r+\textfrak{n}_1=t+\textfrak{n}_1$. Thus $\phi$ is surjective, so $R^G/\textfrak{m}\cong T^G/\textfrak{n}_1$. The same argument applies to show $R^G/\textfrak{m}\cong T^G/\textfrak{n}_2$. By Theorem~\ref{integral mre}(\ref{decomposed}), $R^G\subset T^G$ is a decomposed integral minimal extension with crucial maximal ideal $\textfrak{m}=(R^G:_{R^G}T^G)$.
\item[\textbf{Ramified case:}] By Theorem~\ref{integral mre}(\ref{ramified}), there exists $N\in\text{Max}(T)$ such that $N^2\subseteq M \subset N$, $[T/M:R/M]=2$ and the natural map $R/M\rightarrow T/N$ is an isomorphism. Set $\textfrak{n}:=N\cap T^G$, and recall $\textfrak{m}=M\cap T^G$. Clearly, $\textfrak{n}\in\text{Max}(T^G)$ and $\textfrak{m}\subsetneq \textfrak{n}$, since $\textfrak{m}\notin\text{Max}(T^G)$ (since $M\notin\text{Max}(T)$, $N\in\text{Max}(T)$, and $T$ is integral over $T^G$). For the other containment, let $x\in \textfrak{n}^2$. Then $x\in N^2$, so $x\in M$. Hence, $x\in M\cap T^G=\textfrak{m}$. Thus, $\textfrak{n}^2\subseteq \textfrak{m}$.
We show that the natural map $\phi: R^G/\textfrak{m}\rightarrow T^G/\textfrak{n}$ given by $r+\textfrak{m}\mapsto r+\textfrak{n}$ is an isomorphism. Suppose $\phi(r+\textfrak{m})=0+\textfrak{n}$ for some $r\in R^G$. Then $r\in \textfrak{n}$, so $r^2\in \textfrak{n}^2$. Since $\textfrak{n}^2\subseteq \textfrak{m}$ and $\textfrak{m}$ is prime (maximal) in $R^G$, we have $r\in \textfrak{m}$. (Alternatively, $r\in \textfrak{n}\cap R^G=\textfrak{m}$, by Lemma~\ref{conductor lemma}(\ref{lying over M}).) Hence, $r+\textfrak{m}=0+\textfrak{m}$. Thus, $\phi$ is injective.
Next we show $\phi$ is surjective. Let $t+\textfrak{n}\in T^G/\textfrak{n}$. Then $t+N\in (T/N)^G$. Note that, as in the decomposed case, since $R/M\cong T/N$, the $G$-action extends to $T/N$. From this, Lemma~\ref{conductor lemma}(\ref{orbit}), and Proposition~\ref{fixed quotient}, it follows that $R^G/\textfrak{m} \cong(R/M)^G\cong(T/N)^G$ via $r+\textfrak{m}\mapsto r+M\mapsto r+N$. Hence, there exists $r+\textfrak{m}\in R^G/\textfrak{m}$ such that $r+\textfrak{m}\mapsto r+M\mapsto r+N=t+N$, from which it follows that $(r-t)\in N\cap T^G=\textfrak{n}$. Hence, $\phi(r+\textfrak{m})=t+\textfrak{n}$. Thus, $\phi$ is surjective.
It remains to show $[T^G/\textfrak{m}:R^G/\textfrak{m}]=2$. Note $T^G/\textfrak{m}$ is not a domain: if $\textfrak{m}$ were prime in $T^G$, then $\textfrak{n}^2\subseteq \textfrak{m}\subset \textfrak{n}$ would force $\textfrak{m}=\textfrak{n}$ -- contradiction. Hence $T^G/\textfrak{m}\neq R^G/\textfrak{m}$, i.e., $[T^G/\textfrak{m}:R^G/\textfrak{m}]\geq2$.
Suppose $[T^G/\textfrak{m}:R^G/\textfrak{m}]>2$, and let $\{e_1+\textfrak{m}, e_2+\textfrak{m},e_3+\textfrak{m}\}$ be an $R^G/\textfrak{m}$-linearly independent set in $T^G/\textfrak{m}$. Then each $e_i\notin M$; otherwise, $e_i\in M\cap T^G=\textfrak{m}$. Hence each $e_i+M$ is nonzero in $T/M$. Since $[T/M:R/M]=2$, without loss of generality we may assume there exist $t_1+M,t_2+M\in T/M$ such that \[ e_3+M=(t_1+M)(e_1+M)+(t_2+M)(e_2+M)=t_1e_1+t_2e_2+M. \] As in Lemma~\ref{fixed sums}, using $\sigma\in \mathcal{N}_{t_1}$ and summing over $\mathcal{O}_{t_1}$ we have \[ n_{t_1}e_3+M=\widehat{t}_1e_1+\left(\sum_{\sigma\in\mathcal{N}_{t_1}}\sigma(t_2)\right)e_2+M. \] Defining $t_3$ to be the coefficient of $e_2$ above and repeating the above technique with respect to $t_3$ we have \[ n_{t_3}n_{t_1}e_3+M=n_{t_3}\widehat{t}_1e_1+\widehat{t}_3e_2+M. \] It follows that $n_{t_3}n_{t_1}e_3-(n_{t_3}\widehat{t}_1e_1+\widehat{t}_3e_2)\in M\cap T^G=\textfrak{m}$, so \[ n_{t_3}n_{t_1}e_3+\textfrak{m}=n_{t_3}\widehat{t}_1e_1+\widehat{t}_3e_2+\textfrak{m}. \] Equivalently, \[ (n_{t_3}n_{t_1}+\textfrak{m})(e_3+\textfrak{m})=(n_{t_3}\widehat{t}_1+\textfrak{m})(e_1+\textfrak{m})+(\widehat{t}_3+\textfrak{m})(e_2+\textfrak{m}) \] is an $R^G/\textfrak{m}$-linear combination of $e_1+\textfrak{m},e_2+\textfrak{m}, e_3+\textfrak{m}$ in $T^G/\textfrak{m}$ -- contradiction. Hence, $T^G/\textfrak{m}$ cannot contain more than two $R^G/\textfrak{m}$-linearly independent elements, so $[T^G/\textfrak{m}:R^G/\textfrak{m}]\leq 2$. Hence $[T^G/\textfrak{m}:R^G/\textfrak{m}]=2$. By Theorem~\ref{integral mre}(\ref{ramified}), $R^G\subset T^G$ is a ramified integral minimal extension with crucial maximal ideal $\textfrak{m}=(R^G:_{R^G}T^G)$. \end{inparaenum} \end{proof}
\begin{remark} It is necessary to assume $R^G\neq T^G$ in Theorem~\ref{integral mre invariance}, as illustrated in the following. \end{remark}
\begin{example} The fixed rings are equal, even under finite group action, in the following cases:
\begin{inparaenum} \item[\textbf{Inert case:}] Set $R:=\mathbb{R}$, $T:=\mathbb{C}$, and $G:=\{1,\sigma\}$, where $\sigma$ is the conjugacy map. Then $R^G=R=T^G$.
\item[\textbf{Decomposed case:}] Let $F$ be a field such that $\text{char}(F)\neq 2$, and set $R:=\{(x,x)\, \mid \, x\in F\}$ and $T:=F\times F$. By \cite[Lemme~1.2(b)]{FO}, $R\subset T$ is a minimal extension. Define $G:=\{1,\sigma\}$, where $\sigma((x,x))=(x,-x)$. Then $R^G=T^G$.
\item[\textbf{Ramified case:}] Let $F$ and $R$ be as above, and set $T:=F(+)F$. Then by \cite[Lemme~1.2(c)]{FO}, $R\subset T$ is a minimal extension. Define $G$ as above. Then $R^G=T^G$. \end{inparaenum} \end{example}
\section{Integrally Closed Minimal Extension} \label{integrally closed section}
In this section, we show that the integrally closed minimal property of the extension $R\subset T$ is invariant under locally finite $G$-action. This generalizes Dobbs' and Shapiro's result that the property is invariant if $R$ is a domain and if $|G|$ is finite and a unit in $R$ \cite[Theorem~3.6]{DS2007Houston}. They use Ayache's characterization of minimal extensions (overrings) of an integrally closed domain \cite[Theorem~2.4]{Ayache}. Ayache's result has since been generalized by Dobbs and Shapiro \cite[Theorem~3.7]{DSMRE} and recently further generalized by Cahen et al.\ \cite[Theorem~3.5]{CahenDobbsLucas}. In the latter, the authors give several necessary and sufficient conditions for an arbitrary ring extension to be integrally closed and minimal, which we use to establish Theorem~\ref{integrally closed mre invariance}.
Whereas crucial maximal ideals are historically essential to the study of minimal extensions, Cahen et al.\ introduce critical ideals and use them extensively in characterizing integrally closed minimal extensions of an arbitrary ring \cite{CahenDobbsLucas}. As previously mentioned, they define a critical ideal for $R\subset T$ as an ideal $I\subset R$ such that $I=\text{Rad}_R((R:_Rt))$ for all $t\in T\backslash R$. That is, $\text{Rad}_R((R:_Rt))$ is the same ideal for all $t\in T\backslash R$. They show in \cite[Lemma~2.11]{CahenDobbsLucas} that if an extension has a critical ideal, then the ideal is prime. Moreover, they show that if $R\subset T$ is a minimal extension, then the critical ideal exists \cite[Proposition~2.14(2)]{CahenDobbsLucas} and is maximal \cite[Theorem~3.5]{CahenDobbsLucas}. If $R\subset T$ has a critical ideal, we show $R^G\subset T^G$ has a critical ideal under any $G$-action such that $R^G\neq T^G$.
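For intuition, the critical ideal can be computed directly in a small concrete example. The extension $\mathbb{Z}\subset\mathbb{Z}[1/2]$ below is our own illustration (it does not appear in the text): for $t\in\mathbb{Z}[1/2]\setminus\mathbb{Z}$ written in lowest terms with denominator $2^k$, the conductor $(\mathbb{Z}:_{\mathbb{Z}}t)$ is $2^k\mathbb{Z}$, so $\text{Rad}_{\mathbb{Z}}((\mathbb{Z}:_{\mathbb{Z}}t))=2\mathbb{Z}$ for every such $t$; that is, $2\mathbb{Z}$ is the critical ideal.

```python
from fractions import Fraction

def conductor_generator(t: Fraction) -> int:
    """Positive generator of (Z :_Z t) = {r in Z : r*t in Z}: the denominator of t."""
    return t.denominator

def radical_generator(n: int) -> int:
    """Positive generator of Rad(nZ): the product of the distinct primes dividing n."""
    rad, p, m = 1, 2, n
    while p * p <= m:
        if m % p == 0:
            rad *= p
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:
        rad *= m
    return rad

# Elements of T \ R = Z[1/2] \ Z: odd numerator, denominator a power of 2.
samples = [Fraction(3, 2), Fraction(7, 8), Fraction(-5, 4), Fraction(1, 16)]
rads = {radical_generator(conductor_generator(t)) for t in samples}
print(rads)  # the same radical, 2Z, for every t outside R
```

The point of the check is that $\text{Rad}_R((R:_Rt))$ is independent of the choice of $t\in T\backslash R$, which is exactly the defining property of a critical ideal.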
\begin{lemma} \label{critical ideal invariance} Let $P$ be the critical ideal of $R\subset T$. If $R^G\neq T^G$, then $\textfrak{p}:=P\cap R^G$ is the critical ideal of $R^G\subset T^G$. \end{lemma} \begin{proof} Let $t\in T^G\backslash R^G$. Then $t\in T\backslash R$. Hence $P=\text{Rad}_R((R:_Rt))$, from which it follows that \[ \textfrak{p}=\text{Rad}_R((R:_Rt))\cap R^G=\text{Rad}_{R^G}((R:_Rt)\cap R^G)=\text{Rad}_{R^G}((R^G:_{R^G}t)). \] Thus $\textfrak{p}$ is the critical ideal of $R^G\subset T^G$. \end{proof}
We next show that if a critical ideal is maximal, then its orbit (under $G$) is a singleton set.
\begin{lemma} \label{critical ideal orbit} Suppose $M=\text{Rad}_R((R:_Rt))$ for all $t\in T\backslash R$, i.e., $M$ is the critical ideal for $R\subset T$. If $M$ is maximal, then $\sigma(M)=M$ for all $\sigma\in G$, i.e., $\mathcal{O}_M=\{M\}$. \end{lemma} \begin{proof} Let $\sigma\in G$ and $t\in T\backslash R$. Note $\sigma^{-1}(t)\in T\backslash R$; otherwise, if $\sigma^{-1}(t)\in R$, then $t=\sigma(\sigma^{-1}(t))\in\sigma(R)=R$ -- contradiction. Since $M$ is the critical ideal for $R\subset T$, $M=\text{Rad}_R((R:_R\sigma^{-1}(t)))$. Let $x\in M$ and set $y:=\sigma^{-1}(x)$. Then there exists $n\in\mathbb{N}$ such that $x^nt\in R$, from which it follows that $y^n\sigma^{-1}(t)=(\sigma^{-1}(x))^n\sigma^{-1}(t)\in \sigma^{-1}(R)=R$. Hence $y\in\text{Rad}_R((R:_R\sigma^{-1}(t)))=M$. Thus $x=\sigma(y)\in\sigma(M)$, which shows $M\subseteq\sigma(M)$. Since $M$ is maximal, $M=\sigma(M)$, as desired. \end{proof}
\begin{remark} It is not necessary to assume $M$ is maximal in the preceding lemma; a similar set-theoretic argument establishes the reverse containment $\sigma(M)\subseteq M$. \end{remark}
Related to critical ideals are valuation pairs for an extension $R\subset T$. As in the introduction and \cite{Manis}, for $P\in\text{Spec}(R)$, $(R,P)$ is a valuation pair of $T$ if there is a valuation $v$ on $T$ with $R=\{t\in T\, \mid \, v(t)\geq 0\}$ and $P=\{t\in T\, \mid \, v(t)>0\}$. Equivalently, $(R,P)$ is a valuation pair of $T$ if $R=S$ whenever $S$ is an intermediate ring containing a prime ideal lying over $P$ \cite{Manis}. Rank 1 valuation pairs are one of several equivalences of integrally closed minimal extensions given by Cahen et al \cite{CahenDobbsLucas}. As previously mentioned, the rank of a valuation pair $(R,P)$ of $T$ is the rank of the valuation group. The following lemma describes the relationship between critical ideals and valuation pairs.
\begin{lemma}\cite[Lemma~2.12]{CahenDobbsLucas} \label{critical ideals and valuation pairs} Let $(R,P)$ be a valuation pair of $T$. Then $R\subset T$ has a critical ideal if and only if $(R,P)$ has rank 1. Moreover, under these conditions, $P$ is the critical ideal of $R\subset T$. \end{lemma}
Our next result is fundamental to the invariance of integrally closed minimal extensions established in Theorem~\ref{integrally closed mre invariance}.
\begin{proposition} \label{valuation pair invariance} Assume $G$ is locally finite such that $R^G\neq T^G$. Let $M\in\text{Max}(R)$ and set $\textfrak{m}:=M\cap R^G$. If $\mathcal{O}_M=\{M\}$, then $(R^G,\textfrak{m})$ is a valuation pair of $T^G$ whenever $(R,M)$ is a valuation pair of $T$. \end{proposition} \begin{proof} Let $A$ be a ring such that $R^G\subseteq A\subseteq T^G$. Then $R\subseteq AR\subseteq T$. First note $AR$ is integral over $A$, since $R$ is integral over $R^G$, hence over $A$. Let $\textfrak{q}\in\text{Spec}(A)$ such that $\textfrak{q}\cap R^G=\textfrak{m}$, and let $Q\in\text{Spec}(AR)$ lie over $\textfrak{q}$. From \[ \textfrak{m}=\textfrak{q}\cap R^G=(Q\cap A)\cap R^G=Q\cap R^G=(Q\cap R)\cap R^G \] it follows that $Q\cap R$ is maximal in $R$, by integrality. We claim $Q\cap R=M$. Suppose not. Then there exists $x\in (Q\cap R)\backslash M$, since $Q\cap R$ and $M$ are incomparable (as distinct maximal ideals). It follows that $\tilde x\in Q\cap R^G=\textfrak{m}=M\cap R^G$. Hence $\sigma(x)\in M$ for some $\sigma\in G$. Since $\mathcal{O}_M=\{M\}$, we have $x\in\sigma^{-1}(M)=M$ -- contradiction. Hence $Q\cap R=M$. Since $(R,M)$ is a valuation pair of $T$, we have $AR=R$, whence $A=R^G$. Thus $(R^G,\textfrak{m})$ is a valuation pair of $T^G$. \end{proof}
Of the several integrally closed minimal extension equivalences in \cite[Theorem~3.5]{CahenDobbsLucas}, we use the condition that there exists a maximal ideal $M$ such that $(R,M)$ is a rank 1 valuation pair of $T$ where $R\subset T$. With this equivalence, it follows easily from the preceding results that integrally closed minimal extensions are invariant under locally finite group action.
\begin{theorem} \label{integrally closed mre invariance} Assume $G$ is locally finite. If $R \subset T$ is an integrally closed minimal extension, then $R^G\subset T^G$ is an integrally closed minimal extension. \end{theorem} \begin{proof} First we show $R^G\neq T^G$. Let $t\in T\backslash R$. Then $\tilde t\in T^G$. Suppose $\tilde t\in R^G$. Then $\tilde t\in R$. By \cite[Proposition~3.1]{FO}, $\sigma(t)\in R$ for some $\sigma\in G$, whence $t=\sigma^{-1}(\sigma(t))\in\sigma^{-1}(R)=R$ -- contradiction. Hence, $\tilde t\in T^G\backslash R^G$. Thus, $R^G\subsetneq T^G$.
Let $M$ be the critical ideal for $R\subset T$. By Lemma~\ref{critical ideal invariance}, $\textfrak{m}:=M\cap R^G$ is the critical ideal for $R^G\subset T^G$. Since $R\subset T$ is a minimal extension, the critical ideal $M$ is maximal. By Lemma~\ref{critical ideal orbit}, $\mathcal{O}_M=\{M\}$. By Proposition~\ref{valuation pair invariance}, $(R^G,\textfrak{m})$ is a valuation pair of $T^G$. Since $\textfrak{m}$ is the critical ideal of $R^G\subset T^G$, this valuation pair has rank 1 by Lemma~\ref{critical ideals and valuation pairs}. Hence, $R^G\subset T^G$ is an integrally closed minimal extension by \cite[Theorem~3.5]{CahenDobbsLucas}. \end{proof}
\section{Minimal Extensions, Flat Epimorphisms, and Normal Pairs} \label{related results}
In this section, we generalize the results of Sections~\ref{integral section} and~\ref{integrally closed section}. Of course, arbitrary integral (integrally closed) extensions are a generalization of minimal integral (integrally closed) extensions. It is easy to see in Propositions~\ref{integrality invariance} and~\ref{integrally closed invariance} that integral and integrally closed extensions are invariant.
In Proposition~\ref{mre invariance} and Corollary~\ref{mre invariance corollary}, we show integral minimal extensions are invariant under stronger assumptions on $G$ and without the restriction of characteristic used in Theorem~\ref{integral mre invariance}. In doing so, we simultaneously re-establish Theorem~\ref{integrally closed mre invariance}.
In Theorem~\ref{perfect localization}, we exchange a stronger assumption for a more general result. In particular, we assume $G$ is strongly locally finite in order to show flat epimorphic extensions are invariant.
Lastly, in Corollary~\ref{normal pair invariance}, we show normal pairs are invariant. As in \cite{Davis}, we say $(R,T)$ is a \textit{normal pair} if $S$ is integrally closed in $T$ whenever $R\subseteq S\subseteq T$. Clearly, if $R\subset T$ is an integrally closed minimal extension, then $(R,T)$ is a normal pair.
\begin{proposition} \label{integrality invariance} If $R\subset T$ is an integral extension and $G$ is locally finite, then $R^G\subseteq T^G$ is an integral extension. \end{proposition} \begin{proof} This follows from Lemma~\ref{integrality} and the transitivity of integrality \cite[Theorem 40]{Kaplansky}. \end{proof}
\begin{proposition} \label{integrally closed invariance} If $R$ is integrally closed in $T$, then $R^G$ is integrally closed in $T^G$. \end{proposition} \begin{proof} Let $u\in T^G$ be integral over $R^G$. Then $u\in T$ is integral over $R$. Hence $u\in T^G\cap R=R^G$. \end{proof}
As in Theorems~\ref{integral mre invariance} and~\ref{integrally closed mre invariance}, certain integral minimal extensions and all integrally closed minimal extensions are invariant under locally finite $G$-action. In the former, however, we require a certain restriction of characteristic. Assuming $|G|$ is finite and a unit in the base ring, we can remove this restriction. Of course, if $G$ is finite, then it is locally finite. Hence, the following result and corollary re-establish Theorem~\ref{integrally closed mre invariance}.
\begin{proposition} \label{mre invariance}
Let $R\subset T$ be a minimal extension. Assume $G$ is finite such that $|G|$ is a unit in $R$ and $R^G\neq T^G$. Then $R^G\subset T^G$ is a minimal extension. \end{proposition} \begin{proof} Let $u\in T^G\backslash R^G$. Clearly, $u\in T\backslash R$. Hence, $T=R[u]$. Let $t\in T^G$. Then $t=r_nu^n+\cdots+r_1u+r_0$ for some $r_i\in R$. Applying the averaging technique introduced in Section~\ref{integral section} we have \[
t=|G|^{-1}\sum_{\sigma\in G}\left(\sigma(r_n)u^n+\cdots+\sigma(r_1)u+\sigma(r_0)\right). \] Thus $T^G=R^G[u]$, i.e. $R^G\subset T^G$ is a minimal extension. \end{proof}
Combining Propositions~\ref{integrality invariance},~\ref{integrally closed invariance}, and~\ref{mre invariance}, we have the following corollary.
\begin{corollary} \label{mre invariance corollary} Under the hypotheses of {\rm Proposition~\ref{mre invariance}}, if $R\subset T$ is an integral or integrally closed minimal extension, then $R^G\subset T^G$ is an integral or integrally closed minimal extension, respectively. \end{corollary}
Integrally closed minimal extensions are flat epimorphic extensions (in the category of commutative rings), by \cite[Th\'eor\`eme~2.2]{FO}. Equivalently, flat epimorphisms are \textit{perfect localizations}, so-called because of the following correspondence.
\begin{theorem}\cite[Theorem~2.1,~Ch.~XI]{Stenstrom} \label{flat epi}
Let $\phi:R\rightarrow T$ be a ring homomorphism. Then $\phi$ is a flat epimorphism if and only if the collection $\mathcal{F}=\{I\subset R\,|\,I\text{ an ideal of }R,\ \phi(I)T=T\}$ is a Gabriel filter and there exists an isomorphism $\psi:T\rightarrow R_{\mathcal{F}}$ such that $\psi\circ\phi:R\rightarrow R_{\mathcal{F}}$ is the canonical homomorphism. Such a filter is called perfect. \end{theorem}
A collection of ideals $\mathcal{F}$ of a ring $R$ is a \textit{Gabriel filter} if it satisfies: \begin{enumerate}[(i)] \item If $I\in\mathcal{F}$ and $I\subseteq J$, then $J\in\mathcal{F}$. \item If $I,J\in\mathcal{F}$, then $I\cap J\in\mathcal{F}$. \item If for an ideal $I$ there exists $J\in\mathcal{F}$ such that $(I:j)\in\mathcal{F}$ for every $j\in J$, then $I\in\mathcal{F}$. \end{enumerate} For more information on Gabriel filters, see \cite{Stenstrom}.
By \cite[Exercise~8,~p.\ 242]{Stenstrom}, $T$ is a perfect localization of $R$ if and only if for all $t\in T$, $(R:_Rt)T=T$. With this definition and Lemma~\ref{filter lemma} we show perfect localizations (equivalently, flat epimorphic extensions) are invariant in Proposition~\ref{perfect localization}.
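As a sanity check of this criterion, one can verify it mechanically on a small assumed example, $R=\mathbb{Z}\subset T=\mathbb{Z}[1/2]$ (our illustration, not from the text): for $t=a/2^k$ in lowest terms, $(\mathbb{Z}:_{\mathbb{Z}}t)=2^k\mathbb{Z}$, and $2^k$ is a unit in $\mathbb{Z}[1/2]$, so $(\mathbb{Z}:_{\mathbb{Z}}t)T=T$ for every $t\in T$.

```python
from fractions import Fraction

def conductor_times_T_is_T(t: Fraction) -> bool:
    # (Z :_Z t) = d*Z with d = denominator of t; d*Z[1/2] = Z[1/2]
    # exactly when d is a unit of Z[1/2], i.e. a power of 2.
    d = t.denominator
    while d % 2 == 0:
        d //= 2
    return d == 1

# Elements of Z[1/2]: odd numerator over a power of 2 (k = 0 gives elements of Z).
samples = [Fraction(a, 2**k) for a in range(-9, 10, 2) for k in range(5)]
print(all(conductor_times_T_is_T(t) for t in samples))  # True
```

So $\mathbb{Z}[1/2]$ is a perfect localization of $\mathbb{Z}$, consistent with $\mathbb{Z}\subset\mathbb{Z}[1/2]$ being a flat epimorphic extension.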
\begin{lemma} \label{filter lemma}
Assume $G$ is strongly locally finite. Define $\mathcal{F}:=\{I\subset R\,|\,IT=T\}$ and $\mathcal{F}':=\{J\subset R^G\,|\,JT^G=T^G\}$. If $I\in\mathcal{F}$, then $I\cap R^G\in\mathcal{F}'$. \end{lemma} \begin{proof}
Note $I\in\mathcal{F}$ if and only if every $P\in\text{Spec}(R)$ containing $I$ is not lain over in $T$. Also note $\mathcal{F'}=\{J\subset R^G\,|\, JR\in\mathcal{F}\}$. Let $I\in\mathcal{F}$ and let $P\in\text{Spec}(R)$ contain $(I\cap R^G)R$. We claim $I\subseteq\sigma(P)$ for some $\sigma\in G$, whence $PT=\sigma^{-1}(\sigma(P)T)=\sigma^{-1}(\sigma(PT))=T$ (since $IT=T$). Let $x\in I$. Then $\tilde x \in I\cap R^G$, so $\tilde x\in P$. It follows $\sigma(x)\in P$ for some $\sigma\in G$; equivalently, $x\in\sigma^{-1}(P)$. Hence $I\subseteq \bigcap_{Q\in\mathcal{O}_P}Q$. Since $G$ is strongly locally finite, $\mathcal{O}_P$ is finite. It follows that $I\subseteq Q$ for some $Q\in\mathcal{O}_P$ by the Prime Avoidance Lemma \cite[Theorem~81]{Kaplansky}. Hence the claim is satisfied by $\sigma\in G$, where $Q=\sigma(P)$, so $PT=T$. Thus, every prime containing $(I\cap R^G)R$ is not lain over in $T$. That is, $(I\cap R^G)R\in\mathcal{F}$, whence $I\cap R^G\in\mathcal{F'}$, as desired. \end{proof}
We are now ready to show perfect localizations (flat epimorphic extensions) are invariant under strongly locally finite group action using Lemma~\ref{filter lemma}.
\begin{theorem} \label{perfect localization} Let $G$ be strongly locally finite, and let $\mathcal{F}$ and $\mathcal{F'}$ be as in Lemma~\ref{filter lemma}. Then \begin{inparaenum}[(a)] \item $\mathcal{F'}$ is a Gabriel filter whenever $\mathcal{F}$ is a Gabriel filter, and \item $T^G=(R^G)_{\mathcal{F'}}$ whenever $T=R_{\mathcal{F}}$. \end{inparaenum} In particular, if $R\subseteq T$ is a flat epimorphic extension, then so is $R^G\subseteq T^G$. \end{theorem} \begin{proof} \begin{inparaenum}[(a)] \item Suppose $\mathcal{F}$ is a Gabriel filter. We check that $\mathcal{F}'$ satisfies the defining conditions (i) through (iii) of a Gabriel filter given above. Let $I\in\mathcal{F}'$, and let $J$ be an ideal of $R^G$ containing $I$. Then $IR\in\mathcal{F}$ and $IR\subseteq JR$, so $JR\in\mathcal{F}$. It follows that $JT=T$, so $JT^G=T^G$, since $T$ is integral over $T^G$. Hence $J\in\mathcal{F}'$, which establishes condition (i). Now let $I,J\in\mathcal{F}'$. Then $IT=T$ and $JT=T$. Suppose $I\cap J\notin\mathcal{F}'$, i.e. $(I\cap J)T^G\neq T^G$. Again by integrality, $(I\cap J)T\neq T$. Let $P\in\text{Spec}(T)$ contain $(I\cap J)T$. Then $I\cap J\subseteq P\cap T^G=:\textfrak{p}$. It follows that $I\subseteq \textfrak{p}$ or $J\subseteq \textfrak{p}$, but then $IT\subseteq P$ or $JT\subseteq P$ -- contradiction. Hence $I\cap J\in\mathcal{F}'$, which establishes condition (ii).
It remains to show $\mathcal{F'}$ satisfies condition (iii). Let $J$ be an ideal of $R^G$, and suppose there exists $I\in\mathcal{F'}$ such that $(J:_{R^G}a)\in\mathcal{F'}$ for all $a\in I$. We claim $(JR:_Ra)\in\mathcal{F}$ for all $a\in IR$, whence $JR\in\mathcal{F}$, i.e., $J\in\mathcal{F'}$. Let $a:=a_1r_1+\cdots+a_nr_n\in IR$, where $a_i\in I$ and $r_i\in R$. For each $a_i$, clearly $(J:_{R^G}a_i)R\subseteq(JR:_Ra_i)$. Since $(J:_{R^G}a_i)\in\mathcal{F}'$, we have $(J:_{R^G}a_i)R\in\mathcal{F}$. Hence $(JR:_Ra_i)\in\mathcal{F}$. From $(JR:_Ra_i)\subseteq(JR:_Ra_ir_i)$ it follows that $(JR:_Ra_ir_i)\in\mathcal{F}$. Since $\bigcap_{i=1}^n(JR:_Ra_ir_i)\in\mathcal{F}$ and $\bigcap_{i=1}^n(JR:_Ra_ir_i)\subseteq(JR:_Ra)$, we have $(JR:_Ra)\in\mathcal{F}$, proving the claim. Hence $JR\in\mathcal{F}$, i.e. $J\in\mathcal{F}'$. Thus $\mathcal{F'}$ is a Gabriel filter.
\item Now we show $T^G=(R^G)_{\mathcal{F}'}$ by showing that $T^G$ is a perfect localization of $R^G$. Let $x\in T^G$. Then $(R:_Rx)T=T$, since $T$ is a perfect localization of $R$. It follows that $(R:_Rx)\in\mathcal{F}$, and $(R:_Rx)\cap R^G\in\mathcal{F'}$, by Lemma~\ref{filter lemma}. We claim $(R:_Rx)\cap R^G\subseteq (R^G:_{R^G}x)$, whence $(R^G:_{R^G}x)\in\mathcal{F'}$, since $\mathcal{F'}$ is a Gabriel filter. Let $y\in(R:_Rx)\cap R^G$. Then $xy\in R$, but $x\in T^G$ and $y\in T^G$, so $xy\in R^G$. Hence $(R:_Rx)\cap R^G\subseteq (R^G:_{R^G}x)$, so $(R^G:_{R^G}x)\in\mathcal{F}'$ as claimed. (In fact, as the reverse containment clearly holds, $(R:_Rx)\cap R^G= (R^G:_{R^G}x)$.) Thus $(R^G:_{R^G}x)T^G=T^G$, i.e. $T^G$ is a perfect localization of $R^G$. Equivalently, $R^G\subseteq T^G$ is a flat epimorphic extension whenever $R\subseteq T$ is a flat epimorphic extension. \end{inparaenum} \end{proof}
\begin{remark} It would be interesting to know if epimorphic extensions or flat extensions are invariant under any group action. \end{remark}
Normal pairs are another generalization of integrally closed minimal extensions. By \cite[Theorem~5.2]{KnebuschZhang}, $(R,T)$ is a normal pair if and only if $R$ is integrally closed in $T$ and $R\subseteq S$ satisfies INC for any intermediate ring $S$. We call a pair of rings $(R,T)$ satisfying the latter property an \textit{INC-pair} and note it is equivalent to the definition of an INC-pair given in \cite{DobbsLO}.
We have already seen integrally closed extensions are invariant in Proposition~\ref{integrally closed invariance}. To assert normal pairs are invariant, it remains to show INC-pairs are invariant.
\begin{proposition} \label{INC invariance} Assume $G$ is locally finite. If $(R,T)$ is an INC-pair, then $(R^G,T^G)$ is an INC-pair. \end{proposition} \begin{proof} Let $R^G\subseteq A\subseteq T^G$, and let $\textfrak{q}\subseteq \textfrak{q}'$ be prime ideals of $A$ with the same contraction in $R^G$. Set $\textfrak{p}:=\textfrak{q}\cap R^G=\textfrak{q}'\cap R^G$. Since $R$ is integral over $R^G$ (whence over $A$), $AR$ is integral over $A$. Hence, $A\subseteq AR$ satisfies LO and GU. Let $Q\subseteq Q'$ be prime ideals in $AR$ such that $\textfrak{q}=Q\cap A$ and $\textfrak{q}'=Q'\cap A$. Setting $P:=Q\cap R$ and $P':=Q'\cap R$, we have $P\subseteq P'$ and \[ P\cap R^G=Q\cap R^G=(Q\cap A)\cap R^G=\textfrak{q}\cap R^G=\textfrak{p}, \] and $P'\cap R^G=\textfrak{p}$, by the same reasoning. As an integral extension, $R^G\subseteq R$ satisfies INC, whence $P=P'$. Since $R\subseteq AR$ satisfies INC, $Q=Q'$. Hence $\textfrak{q}=\textfrak{q}'$. Thus $(R^G,T^G)$ is an INC-pair. \end{proof}
The corollary below now follows easily from Propositions \ref{integrally closed invariance} and \ref{INC invariance}.
\begin{corollary} \label{normal pair invariance} If $G$ is locally finite, then $(R^G,T^G)$ is a normal pair whenever $(R,T)$ is a normal pair. \end{corollary}
\begin{remark} By \cite[Corollary~2.4(bis)]{DobbsLO}, \textit{P-extensions} are precisely INC-pairs. Hence P-extensions are invariant, by Proposition~\ref{INC invariance}. \end{remark}
\section*{Acknowledgment} The author is immensely grateful to her advisor, Jay Shapiro, for his guidance, suggestions, and help revising the manuscript.
\end{document}
\begin{document}
\begin{center}\textbf{Thompson's conjecture for alternating groups}\end{center}
\begin{center}I. B. Gorshkov \footnote{The work is supported by the
Russian Science Foundation (project no. 15-11-10025)}\end{center}
\textsl{Abstract}: Let $G$ be a finite group, and let $N(G)$ be the set of sizes of its conjugacy classes. We show that if a finite group $G$ has trivial center and $N(G)$ equals $N(Alt_n)$ or $N(Sym_n)$ for $n\geq 23$, then $G$ has a composition factor isomorphic to an alternating group $Alt_k$ such that $k\leq n$ and the half-interval $(k, n]$ contains no primes. As a corollary, we prove Thompson's conjecture for simple alternating groups.
\textsl{Key words}: finite group, alternating group, conjugacy class, Thompson's conjecture.
\section{Introduction}
Consider a finite group $G$. For $g\in G$, let
$g^G$ denote the conjugacy class of $G$ containing $g$, and $|g^G|$ denote the size of $g^G$. The centralizer of $g$ in $G$ is denoted by $C_G(g)$. Put $N(G)=\{n\ |\ \exists g\in G$ such that $|g^G|=n\}$. In 1987 Thompson posed the following conjecture concerning $N(G)$.
\textbf{Thompson's Conjecture (see \cite{Kour}, Question 12.38)}. {\it If $L$ is a finite simple non-abelian group, $G$ is a finite group with trivial center, and $N(G)=N(L)$, then $G\simeq L$.}
Thompson's conjecture was proved for many finite simple groups of Lie type. Denote the alternating group of degree $n$ by $Alt_n$ and the symmetric group of degree $n$ by $Sym_n$. Alavi and Daneshkhah proved that the groups $Alt_n$ with $n=p$, $n=p+1$, $n=p+2$ for a prime $p\geq5$ are characterized by $N(G)$ (see \cite{AD}). This conjecture has recently been confirmed for $Alt_{10}$, $Alt_{16}$, $Alt_{22}$, and $Alt_{26}$ (see \cite{VasT}, \cite{Gor}, \cite{Xu}, \cite{Liu}). In \cite{GorA}, the author showed that if $N(G)=N(Alt_n)$ or $N(G)=N(Sym_n)$ for $n\geq5$, then $G$ is non-solvable. In \cite{GorA2}, we obtained some information about the composition factors of a group $G$ in the case where $N(G)=N(Alt_n)$ or $N(G)=N(Sym_n)$ for $n>1361$. It was shown in \cite{GorA3} that Thompson's conjecture is valid for the alternating group $Alt_n$ with $n>1361$.
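For small degrees, the set $N(Alt_n)$ appearing in these characterizations can be computed by brute force. The following sketch (our illustration, not part of any proof) recovers $N(Alt_5)=\{1,12,15,20\}$ by enumerating the even permutations of five points and their conjugacy classes.

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p[q[i]], permutations stored as tuples
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def is_even(p):
    # parity of a permutation = parity of its inversion count
    return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p))) % 2 == 0

alt5 = [p for p in permutations(range(5)) if is_even(p)]

def N(group):
    """Set of conjugacy class sizes of a (small) group given as a list of tuples."""
    sizes, seen = set(), set()
    for g in group:
        if g in seen:
            continue
        cls = {compose(compose(h, g), inverse(h)) for h in group}
        seen |= cls
        sizes.add(len(cls))
    return sizes

print(sorted(N(alt5)))  # [1, 12, 15, 20]
```

The two classes of 5-cycles both have size 12, so $N(Alt_5)$ has four elements even though $Alt_5$ has five conjugacy classes.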
Here is our main result.
\begin{theorem} If $G$ is a finite group with trivial center such that $N(G)=N(Alt_n)$ or $N(G)=N(Sym_n)$ with $n\geq23$, then $G$ has a composition factor $S$ isomorphic to an alternating group $Alt_k$, where $k\leq n$ and the half-interval $(k, n]$ contains no primes. \end{theorem}
The theorem and the main result of \cite{GorA3} (see Lemma \ref{Alt1361} below) immediately imply the following corollary.
\begin{corollary} Thompson's conjecture is true for an alternating group of degree $n$ if $n\geq5$ and $n$ or $ n-1$ is a sum of two primes. \end{corollary}
At present it is not known whether there exists an even positive integer that cannot be written as a sum of two primes.
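The condition in the corollary, that $n$ or $n-1$ is a sum of two primes, is easy to test numerically for any given degree. The sketch below (illustrative only, with an arbitrary bound of our choosing) confirms that every even $n$ with $4\leq n\leq 2000$ is a sum of two primes.

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: list of primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [p for p in range(2, n + 1) if sieve[p]]

LIMIT = 2000
primes = set(primes_up_to(LIMIT))

def is_sum_of_two_primes(n):
    return any((n - p) in primes for p in primes if p <= n - 2)

print(all(is_sum_of_two_primes(n) for n in range(4, LIMIT + 1, 2)))  # True
```

For odd $n$, note that $n$ is a sum of two primes exactly when $n-2$ is prime, so in practice the corollary covers every degree in this range.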
\section{Notation and preliminary results} \begin{lem}[{\rm\cite{GorA3}}]\label{Alt1361} Let $G$ be a finite group with trivial center such that $N(G)=N(Alt_n)$, where $n\geq 27$ and at least one of the numbers $n$ or $n-1$ is a sum of two primes. If $G$ contains a composition factor $S\simeq Alt_{n-\varepsilon}$, where $\varepsilon$ is a non-negative integer such that the set $\{n-\varepsilon,...,n\}$ does not contain primes, then $G\simeq Alt_n$. \end{lem}
\begin{lem}[{\rm\cite[Lemma 3]{VasT}}]\label{pi} If $G$ and $H$ are finite groups with trivial centers and $N(G) = N(H)$, then $\pi(G) = \pi(H)$. \end{lem}
\begin{lem}[{\rm \cite[Lemma 1.4]{GorA2}}]\label{factorKh} Let $G$ be a finite group, $K\unlhd G$, $\overline{G}= G/K$, $x\in G$ and $\overline{x}=xK\in G/K$. The following assertions are valid
(i) $|x^K|$ and $|\overline{x}^{\overline{G}}|$ divide $|x^G|$.
(ii) If $L$ and $M$ are neighboring members of a composition series of $G$, $L<M$, $S=M/L$, $x\in M$ and
$\widetilde{x}=xL$ is an image of $x$, then $|\widetilde{x}^S|$ divides $|x^G|$.
(iii) If $y\in G, xy=yx$, and $(|x|,|y|)=1$, then $C_G(xy)=C_G(x)\cap C_G(y)$.
(iv) If $(|x|, |K|) = 1$, then $C_{\overline{G}}(\overline{x}) = C_G(x)K/K$.
\end{lem}
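Assertion (iii) of the lemma can be sanity-checked by brute force in a small group. The sketch below (our illustration; the choice $G=Sym_5$ and the specific commuting elements of coprime orders are ours) verifies $C_G(xy)=C_G(x)\cap C_G(y)$ for a 3-cycle and a disjoint transposition.

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p[q[i]], permutations stored as tuples
    return tuple(p[q[i]] for i in range(len(q)))

S5 = list(permutations(range(5)))

def centralizer(g):
    return {h for h in S5 if compose(h, g) == compose(g, h)}

x = (1, 2, 0, 3, 4)  # the 3-cycle (0 1 2), order 3
y = (0, 1, 2, 4, 3)  # the transposition (3 4), order 2

assert compose(x, y) == compose(y, x)  # x and y commute, and gcd(3, 2) = 1
print(centralizer(compose(x, y)) == centralizer(x) & centralizer(y))  # True
```

Here $xy=(0\,1\,2)(3\,4)$ has centralizer of order 6, matching $|C(x)\cap C(y)|=6$.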
\begin{lem}[{\rm \cite[Lemma 4]{VasT}}]\label{vas}
Given a finite group $G$ with trivial center, if there exists a prime $p\in \pi(G)$ such that $p^2$ does not divide $|x^G|$ for all $x$ in $G$, then the Sylow $p$-subgroups of $G$ are elementary abelian. \end{lem}
\begin{lem}\label{diretN}
If $G=A\times B$, then $N(G)=\{ab\ |\ a\in N(A),\ b\in N(B)\}$. \end{lem} \begin{proof} The proof is obvious. \end{proof}
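The "obvious" proof can be confirmed computationally on a small case. The following sketch (our illustration) verifies Lemma \ref{diretN} for $A=B=Sym_3$, where $N(Sym_3)=\{1,2,3\}$, so $N(Sym_3\times Sym_3)$ should be $\{1,2,3,4,6,9\}$.

```python
from itertools import permutations, product

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def class_sizes(group, mul, inv):
    """Set of conjugacy class sizes for a small group with given operations."""
    sizes, seen = set(), set()
    for g in group:
        if g in seen:
            continue
        cls = {mul(mul(h, g), inv(h)) for h in group}
        seen |= cls
        sizes.add(len(cls))
    return sizes

S3 = list(permutations(range(3)))
N_S3 = class_sizes(S3, compose, inverse)  # {1, 2, 3}

G = list(product(S3, S3))  # S3 x S3 with componentwise operations
pair_mul = lambda a, b: (compose(a[0], b[0]), compose(a[1], b[1]))
pair_inv = lambda a: (inverse(a[0]), inverse(a[1]))
N_G = class_sizes(G, pair_mul, pair_inv)

print(N_G == {a * b for a in N_S3 for b in N_S3})  # True
```

The check works because conjugacy in a direct product is componentwise, so each class of $A\times B$ is a product of a class of $A$ and a class of $B$.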
\begin{lem}[{\rm\cite[Lemma 1.6]{GorA2}}]\label{Pord}
Let $S$ be a non-abelian finite simple group. If $p\in \pi(S)$, then there exist $a\in N(S)$ and $g\in S$ such that
$|a|_p=|S|_p$, $|g^S|=a$ and $|\pi(g)|=1$. \end{lem}
\begin{lem}[{\rm \cite[Lemma 14]{Vac}}]\label{vac2} Let $S$ be a non-abelian finite simple group. Any odd element of $\pi(Out(S))$ either belongs to $\pi(S)$ or does not exceed $m/2$, where $m=\max_{p\in\pi(S)}p$. \end{lem}
\begin{pr} \label{rt1}
Let $G$ be a finite group and $\Omega$ a set of primes such that, for all $p,q\in \Omega$ and $\alpha\in N(G)$, $p$ does not divide $q-1$ and $p^2$ does not divide $\alpha$. Suppose that $g\in G$ and $|g|=t\in\Omega$. If $\rho:=\pi(|g^G|)\cap\Omega\neq \varnothing$, then $G$ has a non-abelian composition factor $S$ and an element $a\in S$ such that $|a|=t$ and $\rho\subseteq\pi(|a^S|)$. \end{pr} \begin{proof}
Note that, as follows from Lemma \ref{vas}, a Sylow $t$-subgroup of $G$ is elementary abelian and, consequently, $|g^G|$ is not a multiple of $t$. Let $K$ be a maximal normal subgroup of $G$ such that the image $\overline{g}$ of $g$ in the group $\overline{G}=G/K$ is nontrivial and $\rho\subseteq\pi(|\overline{g}^{\overline{G}}|)$. Let $R$ be a minimal normal subgroup of $\overline{G}$. It can be represented as a direct product $R=R_1\times R_2\times\cdots\times R_l$ of $l$ isomorphic simple groups. The order of $R$ is a multiple of some $r\in \rho\cup\{t\}$. It follows from Lemmas \ref{diretN} and \ref{Pord} that if $l>1$, then there exists $h\in R$ such that $|h^R|_r>r$. It then follows from Lemma \ref{factorKh} that there exists $x\in G$ such that $|x^G|_r>r$; a contradiction. We have $l=1$ and hence $R$ is a simple group. The rest of the proof is divided into two lemmas.
\begin{lem}\label{aa}
If $|\overline{g}^R|$ is a multiple of some $r\in\rho$, then Proposition \ref{rt1} is true. \end{lem} \begin{proof}
Assume that $|\overline{g}^R|$ is a multiple of some $r\in\rho$. It follows from Lemma \ref{vac2} that $\overline{g}$ acts on $R$ as an interior automorphism. Consequently, there exists $z\in R$ such that $h^{z}=h^{\overline{g}}$ for all $h\in R$.
It follows from Lemma \ref{Pord} and the fact that $r^2$ does not divide $\alpha$ for any $r\in \Omega$ and $\alpha\in N(G)$ that $k^2$ does not divide $|R|$ for any $k\in\Omega$. Suppose that there exists $r_1\in(\pi(|z^{\overline{G}}|)\cap\Omega)\setminus\pi(|z^R|)$. Since some Sylow $r_1$-subgroup of $R$ centralizes $z$, it follows from the Frattini argument that some Sylow $r_1$-subgroup $H$ of $\overline{G}$ normalizes $\langle z\rangle$. Since $|Aut(\langle z\rangle)|=t(t-1)$, the subgroup $H$ centralizes $\langle z\rangle$, so $r_1\not\in\pi(|z^{\overline{G}}|)$ -- a contradiction. Thus, $\pi(|z^{\overline{G}}|)\cap\rho=\pi(|z^R|)\cap\rho$.
Assume that there exists $r_2\in\rho\setminus\pi(|z^R|)$. Let us show that $\overline{G}$ contains an element $h$ of order $r_2$ such that $t\in\pi(|h^{\overline{G}}|)$ and $R\leq C_{\overline{G}}(h)$. Let $\overline{K}<\overline{G}$ be a maximal normal subgroup such that $|\widetilde{g}^{\overline{G}/\overline{K}}|$ is a multiple of $r_2$, where $\widetilde{g}\in \widetilde{G}=\overline{G}/\overline{K}$ is the image of $g$. Let $\widetilde{R}<\widetilde{G}$ be a minimal normal subgroup. It is clear that $|\widetilde{R}|$ is a multiple of $r_2$ or $t$. As above, it can be shown that $\widetilde{R}$ is a simple group. Since $\widetilde{R}$ admits no outer automorphisms of order $t$ or $r_2$, we obtain that any element of order $t$ or $r_2$ induces an inner automorphism of $\widetilde{R}$. Since $\overline{K}$ is maximal, we have that $|(\widetilde{g}\widetilde{R}/\widetilde{R})^{\widetilde{G}/\widetilde{R}}|$ is not a multiple of $r_2$. Therefore $C_{\widetilde{G}/\widetilde{R}}(\widetilde{g}\widetilde{R}/\widetilde{R})$ contains some Sylow $r_2$-subgroup $T$ of $\widetilde{G}/\widetilde{R}$. Let $H$ be the full preimage of $\langle\widetilde{g}\widetilde{R}/\widetilde{R},T\rangle$. Hence, $H\simeq(\widetilde{R}\times T)\leftthreetimes \langle\widetilde{g}\rangle$. Since $H$ contains some Sylow $r_2$-subgroup and the element $\widetilde{g}$, we have $r_2\in\pi(|\widetilde{g}^H|)$. Therefore $|\widetilde{g}^{\widetilde{R}}|$ is a multiple of $r_2$. As above, it can be shown that there exists an element $\widetilde{h}\in\widetilde{R}$ such that $|\widetilde{h}^{\widetilde{R}}|$ is a multiple of $t$. Since $\overline{K}>R$ and any $r_2$-element acts on $R$ as an inner automorphism, the element $\widetilde{h}$ has a preimage $h\in \overline{G}$ such that $h$ acts trivially on $R$. Since $|\widetilde{h}^{\widetilde{G}}|$ divides $|h^{\overline{G}}|$, we have $t\in\pi(|h^{\overline{G}}|)$.
It follows from Lemma \ref{Pord} that there exists $u\in R$ such that $|u|\neq r_2$ and $t\in\pi(|u^R|)$. It follows from Lemma \ref{factorKh} that $t^2$ divides $|\overline{uh}^{\overline{G}}|$, a contradiction. Thus, $\rho\subseteq\pi(|z^R|)$. Therefore $z$ is the desired element and $R=S$. \end{proof}
\begin{lem}\label{bb}
$\pi(|\overline{g}^R|)\cap\rho\neq\varnothing$. \end{lem} \begin{proof}
Assume that $\pi(|\overline{g}^R|)\cap\rho=\varnothing$. Since $K$ is a maximal subgroup, there exists $r\in \rho\setminus\pi(|(\overline{g}R/R)^{\overline{G}/R}|)$. Since $\pi(|\overline{g}^R|)\cap\rho=\varnothing$, we see that $C_R(\overline{g})$ contains some Sylow $r$-subgroup $H$ of $R$. Let $N=N_{\overline{G}}(H)$, $T=N_R(H)$, $\overline{N}=N/H$, $\overline{T}=T/H$, $\overline{g'}=\overline{g}H/H$. From the Frattini argument it follows that $N/T\simeq\overline{G}/R$; in particular, $r\not\in\pi(|(\overline{g}T/T)^{N/T}|)$. Since $N$ contains some Sylow $r$-subgroup of $\overline{G}$ and the element $\overline{g}$, we obtain $r\in\pi(|\overline{g}^{N_{\overline{G}}(H)}|)$. Since $H\leq C_N(\overline{g})$ and $(|\overline{g}|,r)=1$, we have $r\in\pi(|\overline{g'}^{\overline{N}}|)$. Let $F\in Syl_t(\overline{T})$ be such that $\overline{g'}\in C_{\overline{N}}(F)$, $L=N_{\overline{T}}(F)$, $B=N_{\overline{N}}(F)$, $\widetilde{g'}=\overline{g'}F/F$, $\widetilde{L}=L/F$, $\widetilde{B}=B/F$. From the Frattini argument it follows that $B$ contains some Sylow $r$-subgroup $A$ of $\overline{N}$. Since $|R|_t\leq t$, we have $|F|\leq t$; in particular, $A\leq C_{\overline{N}}(F)$. Therefore $r\in\pi(|\widetilde{g'}^{\widetilde{B}}|)$. Since $r,t\not\in\pi(|\widetilde{L}|)$, it follows that $r\in\pi(|(\widetilde{g'}\widetilde{L})^{\widetilde{B}/\widetilde{L}}|)$ and $\widetilde{B}/\widetilde{L}\simeq \overline{G}/R$, a contradiction. \end{proof}
The statement of the proposition follows from Lemmas \ref{aa} and \ref{bb}. \end{proof}
\begin{lem}\label{rt2}
Let $G, \Omega$ and $g$ be as in Proposition \ref{rt1}. If there exists $r\in\pi(|g^G|)\cap\Omega$, then $G$ contains an element $h$ of order $r$ such that $t\in \pi(|h^G|)$. \end{lem}
\begin{proof} By Lemma \ref{rt1}, $G$ has a composition factor $S$ such that $\overline{g}\in S, |\overline{g}|=t$ and $|\overline{g}^S|$ is a multiple of $r$. It follows from Lemmas \ref{Pord} and \ref{factorKh} that the Sylow $t$- and $r$-subgroups of $S$ are cyclic of prime order. Let $\overline{h}\in S$ and $|\overline{h}|=r$. Assume that $|\overline{h}^S|$ is not a multiple of $t$. Then there exists an element $x\in S$, $|x|=t$, such that $\langle x\rangle< C_S(\overline{h})$. The subgroup $\langle x\rangle$ is a Sylow $t$-subgroup of $S$. Consequently, there exists $y\in S$ such that $(\langle x \rangle)^y=\langle\overline{g}\rangle$ and, hence, $\overline{h}^y\in C_S(\overline{g})$, a contradiction. Thus, $t\in\pi(|\overline{h}^S|)$. Consequently, $G$ contains an element $h$ of order $r$ such that
$|h^G|$ is a multiple of $t$. \end{proof}
From now on let us fix $V_n\in\{Alt_n, Sym_n\}$, for $n\geq5$, and a finite group $G$
such that $N(G) = N(V_n)$. Let $\Omega=\{t\mid t$ is a prime, $n/2<t\leq n\}$ be an ordered set, and let $p$ be the largest element of $\Omega$.
\begin{lem}[{\rm \cite[Lemma 2.3]{GorA2}}]\label{ConClass}
Suppose that $t\in \Omega$, $\alpha\in N(G)$ and $\alpha$ is not a multiple of $t$. Then $\alpha$ is $|V_n|/(t|C|)$ or $|V_n|/(|V_{t+i}||B|)$, where $C = C_{V_{n-t}}(g)$ for some $g\in V_{n-t}$, $t+i\leq n$, and $B=C_{V_{n-t-i}}(h)$ for some $h\in V_{n-t-i}$. \end{lem}
Put $\Phi_t=\{\alpha\in N(G)\mid \alpha=|V_n|/(t|C|),\; C=C_{V_{n-t}}(g)\mbox{ for some }g\in V_{n-t}\}$ and
$\Psi_t=\{\alpha\in N(G)\mid \alpha=|V_n|/ ( |V_{t+i}||B|)
\mbox{ for some }i\geq0 \mbox{ and } t+i<n-1\mbox{ where }B=C_{V_{n-t-i}}(g) \mbox{ for some } g\in V_{n-t-i}\mbox{ such that }g\mbox{ moves } n-t-i\mbox{ points}\}$. Observe that the definitions of the sets $\Phi_t$ and $\Psi_t$ do not assume that $t$ is a prime.
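For concreteness, the values entering $\Phi_t$ can be enumerated directly from cycle types, using the standard formula $|C_{Sym_m}(g)| = \prod_i i^{m_i}\, m_i!$ for a permutation $g$ with $m_i$ cycles of length $i$. The sketch below (our own helper names; $V = Sym$ only, and without intersecting with $N(G)$ as the definition of $\Phi_t$ requires) lists the candidate values $|Sym_n|/(t|C|)$:

```python
from collections import Counter
from math import factorial, prod

def partitions(n, max_part=None):
    """Yield the partitions of n as non-increasing tuples (cycle types in Sym_n)."""
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def centralizer_order(cycle_type):
    """|C_{Sym_m}(g)| = prod_i i^{m_i} * m_i! for g with m_i cycles of length i."""
    return prod(i ** m * factorial(m) for i, m in Counter(cycle_type).items())

def phi_candidates(n, t):
    """Candidate values |Sym_n| / (t * |C_{Sym_{n-t}}(g)|) over all g in Sym_{n-t}."""
    return sorted({factorial(n) // (t * centralizer_order(ct))
                   for ct in partitions(n - t)})

assert centralizer_order((1, 1, 1)) == 6     # the identity of Sym_3
assert phi_candidates(7, 5) == [504]         # n - t = 2: both cycle types give 7!/(5*2)
```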
\begin{lem}[{\rm \cite[Lemma 2.4]{GorA2}}]\label{Psi}
Let $t_i\in \Omega, 1\leq i<|\Omega|$. The set $\Psi_{t_i}\setminus \Psi_{t_{i+1}}$ is empty iff $n-t_i=2$ and $V=Alt$. \end{lem}
Given a finite set of positive integers $\Theta$, we define a directed graph $\Gamma(\Theta)$ with the vertex set $\Theta$ and with edges $\overrightarrow{ab}$ whenever $a$ divides $b$. Denote by $h(\Theta)$ the maximal length of a path in $\Gamma(\Theta)$.
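The quantity $h(\Theta)$ can be computed by a longest-path search in the divisibility graph. A minimal sketch follows, counting a path's length as its number of edges and using proper divisibility to avoid self-loops (our reading of the definition above):

```python
from functools import lru_cache

def h(theta):
    """Maximal number of edges on a directed path in Gamma(Theta),
    where a -> b whenever a properly divides b."""
    elems = tuple(sorted(set(theta)))

    @lru_cache(maxsize=None)
    def longest_from(a):
        succ = [b for b in elems if b != a and b % a == 0]
        return 1 + max(map(longest_from, succ)) if succ else 0

    return max(map(longest_from, elems), default=0)

assert h([2, 4, 8, 3]) == 2      # the chain 2 | 4 | 8 has two edges
assert h([5, 7, 11]) == 0        # no divisibility relations at all
```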
\begin{lem}[{\rm \cite[Lemma 8]{GorA}}]\label{hz} The following claims hold: \begin{enumerate} \item{If $n-t=2$ then $h(\Psi_p)\leq 1$; } \item{If $n-t=3$ then $h(\Psi_p)\leq 2$; } \item{If $n-t=4$ then $h(\Psi_p)\leq 3$; } \item{If $n-t=5$ then $h(\Psi_p)\leq 5$; } \item{If $n-t=6$ then $h(\Psi_p)\leq 6$; } \item{If $n-t=7$ then $h(\Psi_p)\leq 8$; } \item{If $n-t=8$ then $h(\Psi_p)\leq 11$; } \item{If $n-t=9$ then $h(\Psi_p)\leq 14$; } \item{If $n-t=10$ then $h(\Psi_p)\leq 18$; } \item{If $n-t=11$ then $h(\Psi_p)\leq 21$; } \item{If $n-t=12$ then $h(\Psi_p)\leq 26$; } \item{If $n-t=13$ then $h(\Psi_p)\leq 30$; } \item{If $n-t=18$ then $h(\Psi_p)\leq 69$. } \end{enumerate} \end{lem} \begin{lem}\label{hz2} The following claims hold: \begin{enumerate} \item{If $23\leq n\leq26$ then $h(\Psi_p)\leq2$;} \item{If $31\leq n\leq36$ then $h(\Psi_p)\leq2$;} \item{If $113\leq n\leq124$ then $h(\Psi_p)\leq11$;} \item{If $139\leq n\leq148$ then $h(\Psi_p)\leq9$;} \item{If $199\leq n\leq210$ then $h(\Psi_p)\leq12$;} \item{If $211\leq n\leq222$ then $h(\Psi_p)\leq12$;} \item{If $317\leq n\leq336$ then $h(\Psi_p)\leq17$;} \item{If $523\leq n\leq540$ then $h(\Psi_p)\leq26$;} \item{If $887\leq n\leq905$ then $h(\Psi_p)\leq35$;} \item{If $1129\leq n\leq1150$ then $h(\Psi_p)\leq39$;} \item{If $1327\leq n\leq1360$ then $h(\Psi_p)\leq58$;} \end{enumerate}
\end{lem} \begin{proof} Using \cite{GAP} we obtain the required bounds for $h(\Psi_p)$. \end{proof}
Let $\pi$ be some set of primes. A finite group is said to have property $D_{\pi}$ if it contains a Hall $\pi$-subgroup and all its Hall $\pi$-subgroups are conjugate. For brevity, we will write $G\in D_{\pi}$ if a group $G$ has the property $D_{\pi}$.
\begin{lem}[{\rm \cite[Corollary 6.7]{DpiB}}]\label{Dpi} Suppose that $G$ is a finite group and $\pi$ is some set of primes. Then $G\in D_{\pi}$ if and only if each composition factor of $G$ has property $D_{\pi}$. \end{lem}
\begin{lem} [{\rm \cite{HallExB}}]\label{HallEx} Suppose that $G$ is a finite group and $\pi$ is some set of primes. If $G$ has a nilpotent Hall $\pi$-subgroup, then $G\in D_{\pi}$. \end{lem}
\section{Proof of Theorem}
Take a finite group $G$ with $N(G)=N(V_n)$, where $5\leq n\leq 1361$ and $Z(G)=1$. Put $\Omega=\{t \mid n/2<t\leq n, \; t \mbox{ is a prime}\}$ and denote the largest element of $\Omega$ by $p$. Lemma \ref{pi} implies that $\pi(G)\supseteq\pi(V_n)$, in particular, $\Omega \subseteq \pi(G)$. In view of the main result of \cite{AD}, we assume that for $V_n = Alt_n$ the numbers $n$ and $n - 1$ are not prime.
\begin{pr}\label{exist}
There exists an element $g\in G$ such that $|g|\in\Omega$ and $|g^G|\in\Phi_{|g|}$. \end{pr} \begin{proof}
Assume that there is no element $g\in G$ such that $|g|\in\Omega$ and $|g^G|\in\Phi_{|g|}$. \begin{lem}\label{gPsi}
If $|g|\in \Omega$, then $|g^G|\in\Psi_{p}$. \end{lem} \begin{proof}
Assume that $|g^G|\in \Psi_{|g|}\setminus \Psi_{p}$. Then $|g^G|$ is a multiple of $p$. Consequently, $|g|\neq p$. By Lemma \ref{rt2}, $G$ contains an element $h$ of order $p$ such that $|g|\in\pi(|h^G|)$ and, consequently, $|h^G|\not\in\Psi_p$. It follows from Lemma \ref{ConClass} that $|h^G|\in\Phi_p$; a contradiction. \end{proof}
\begin{lem}\label{Dthe} $G\in D_{\Omega}$. \end{lem} \begin{proof}
It follows from Lemma \ref{Dpi} that it is sufficient to verify the property $D_{\Omega}$ for every composition factor of $G$. Let $S$ be a nonabelian composition factor of $G$ such that $|\pi(S)\cap\Omega|\geq 2$ and let $r$ and $t$ be two different elements from $\pi(S)\cap\Omega$. By Lemma \ref{factorKh}, there are no multiples of $r^2$
or $t^2$ in $N(S)$. By Lemma \ref{Pord}, a Sylow $a$-subgroup has order $a$ for any $a\in\{r,t\}$. By Lemmas \ref{gPsi} and \ref{factorKh}, $S$ contains a Hall $\{r, t\}$-subgroup $H$ of order $rt$. By the definition of the numbers $r$ and $t$, the group $H$ is cyclic. It follows from Lemma \ref{HallEx} that $S\in D_{\{t, r\}}$. Let $l\in(\pi(S)\cap\Omega)\setminus \{t,r\}$ and let $g\in S$ with $|g|=l$. Since $|g^S|$ is not a multiple of $t$ or $r$, we have $H^x<C_S(g)$ for some $x\in S$. Consequently, $S$ contains a cyclic Hall $\{t,r,l\}$-subgroup. Using Lemma \ref{HallEx}, we obtain that $S\in D_{\{t,r,l\}}$. Repeating this procedure $|\pi(S)\cap\Omega|$ times, we obtain that $S\in D_{\Omega}$. \end{proof}
\begin{lem}\label{abelian} A Hall $\Omega$-subgroup of $G$ is abelian. \end{lem} \begin{proof}
As follows from Lemma \ref{vas}, a Sylow $t$-subgroup of $G$ is abelian for any $t\in \Omega$. Assume that a Hall $\Omega$-subgroup of $G$ is nonabelian. Then $G$ contains a nonabelian Hall $\{r, t\}$-subgroup $H$ for some $r, t\in \Omega$. Let $R<G$ be a maximal normal subgroup such that the image $\overline{H}$ of $H$ in the group $\overline{G}=G/R$ is nonabelian. The group $\overline{H}$ contains a normal $l$-subgroup $T$ for some $l\in\{r, t\}$. Note that any $g\in \overline{G}$ with $|g|\in\{r,t\}\setminus \{l\}$ acts nontrivially on $T$. We have $T=C_T(g)\times[T,g]$, where $\langle g,[T,g]\rangle$ is a Frobenius group. Since $l-1$ is not a multiple of $|g|$, we obtain that $|[T,g]|>l$ and $T$ is a noncyclic group. By the definitions of the groups $R$ and $T$, we obtain that $T$ lies in some minimal normal subgroup $K$ of the group $G$. If $K$ is solvable, then $K=T$
is an elementary abelian group and, consequently, the subgroup $K\cap C_K(g)$ is a Sylow $l$-subgroup of $C_K(g)$. As follows from Lemma \ref{factorKh}, $G$ contains a preimage $h$ of $g$ such that $|h^G|$ is a multiple of $|[T,g]|$, a contradiction. Therefore, $K=S_1\times S_2\times\dots\times S_m$, where $S_i$ is a nonabelian simple group for $1\leq i\leq m$. It can be shown as in Lemma \ref{rt1} that $m=1$. As noted in Lemma \ref{Dthe}, a Hall $\{r,t\}$-subgroup of any composition factor is cyclic. We obtain a contradiction with the fact that $K$ contains a noncyclic $l$-subgroup $T$. \end{proof}
\begin{lem}\label{OOmega}
If $T$ is a Hall $\Omega$-subgroup of $G$ and $\Upsilon=\{|g^G| \mid g\in T\}$, then $|\Omega|\leq h(\Upsilon)$. \end{lem}
\begin{proof} Let $g_1\in T$ and $|g_1|=t_1\in \Omega$. By Lemma \ref{Psi}, $G$ contains an element $r_1\in G$ such that $|r_1^G|\in \Psi_{t_1}\setminus \Psi_{t_2}$, where $t_2$ is the smallest number of $\Omega\setminus\{t_1\}$. Since $G\in D_{\Omega}$ (see Lemma \ref{Dthe}), we can assume that $r_1\in C_G(g_1)$ and that a Hall $\Omega$-subgroup of $C_G(r_1)$ lies in $T$. Consequently, there exists $g_2\in T$ such that $|g_2|=t_2$ and $C_G(g_1)\neq C_G(g_2)$. By Lemma \ref{abelian}, the group $T$ is abelian. Thus, $|(g_1g_2)^G|>|g_1^G|$. Note that $|g_1^G|$ divides $|(g_1g_2)^G|$. Repeating this procedure $|\Omega|$ times, we obtain the set $\Sigma=\{g_1, g_1g_2, g_1g_2g_3,\dots, g_1g_2\cdots g_{|\Omega|}\}$ such that $|g_1^G|$ divides $|(g_1g_2)^G|$, $|(g_1g_2)^G|$ divides $|(g_1g_2g_3)^G|$, and so on up to $|(g_1g_2\cdots g_{|\Omega|})^G|$. In particular, $h(\Upsilon)\geq |\Omega|$. \end{proof}
\begin{lem}\label{l27125} $n\not\in\{27,28,125,126\}$. \end{lem} \begin{proof}
Assume that $n\in\{27,28,125,126\}$. Let $t=3$ if $n\in\{27,28\}$ and $t=5$ if $n\in\{125,126\}$, let $T\in Syl_t(G)$, and let $g\in C_G(T)$. We have $|g^G|=|V_n|/|V_n|_t$; in particular, $|g^G|$ is both a minimal and a maximal element of the set $N(G)\setminus\{1\}$. Let $K\unlhd G$ be a minimal normal subgroup such that $t\in\pi(K)$. Assume that there exists $r\in\Omega\cap\pi(G/K)$. Let $h\in G$, $|h|=r$. Without loss of generality, we can assume that $h\in N_G(Z(T\cap K))$ and $T\cap C_G(h)\in Syl_t(N_G(h))$. Since no multiple $a\in N(G)$ of $|h^G|$ is a multiple of $|g'^G|$ for any $g'\in Z(T)$, the element $h$ acts without fixed points on $Z(T\cap K)$. Therefore $|h^G|_t\geq t^x$, where $x$ is such that $|h|$ divides $t^x-1$. But $t^x>|\alpha|_t$ for any $\alpha\in \Psi_{p}$; a contradiction. Let $R\lhd K$ be a maximal normal subgroup, and let $\Upsilon=\{17,23\}$ if $n\in\{27,28\}$ and $\Upsilon=\{113,109\}$ if $n\in\{125,126\}$. Using the Frattini argument we get that the set $\pi(|R|)\cap\Upsilon$ is empty. Hence, $\Upsilon\subseteq \pi(K/R)$. Since $R$ is maximal, $K/R$ is simple. It follows from \cite{Zav} that $K/R\in\{Alt_{23}, Alt_{24},\dots, Alt_{28}, Fi_{23}, Alt_{113},\dots,Alt_{126}, U_7(4)\}$. Since $G\in D_{\Omega}$, we have $K/R\in D_{\Omega}$. Let $O$ be a Hall $\Upsilon$-subgroup of $K/R$. It follows from Lemma \ref{abelian} that $O$ is abelian, a contradiction. \end{proof}
By Lemma \ref{l27125}, $n\not\in\{27,28,125,126\}$. From Lemmas \ref{hz} and \ref{hz2} we obtain that $|\Omega|>h(\Psi_{p})$, a contradiction with Lemma \ref{OOmega}. This completes the proof of Proposition \ref{exist}.
\end{proof}
It follows from Proposition \ref{exist}, that there exists an element $g\in G$ such that $|g|\in\Omega$ and $|g^G|\in\Phi_{|g|}$. It follows from Lemma \ref{rt1} that there exists a composition factor $S$ of $G$ and $\overline{g}\in S$ such that $|\overline{g}|=|g|, \pi(|\overline{g}^S|)\cap\Omega=\Omega\setminus\{|g|\}$ and $|S|$ is not a multiple of $r^2$ for any $r\in \Omega$.
\begin{lem}\label{Lie} If $n\geq23$, then $S$ is not isomorphic to a finite simple group of Lie type. \end{lem} \begin{proof} As noted above, $\Omega\subset\pi(S)$. Let $\Lambda(q^m)$ be an exceptional group of Lie type over a field of order $q^m$, where $q<663$ is a prime and $m\geq 1$. Using \cite{GAP} we can obtain that $\pi(\Lambda(q^m))$ either contains a number greater than $1327$ or does not contain $\Omega(r)$, where $\Omega(r)=\{t \mid
r/2<t< r, \; t \mbox{ is a prime}\}$ for $r$ greater than or equal to the largest element of $\pi(\Lambda(q^m))$. Since $\pi(\Lambda(q^m))$ contains a primitive divisor of $q^m-1$, we obtain that if $m>1327$, then $\pi(\Lambda(q^m))$ contains a number greater than $1327$. Moreover, $|\Lambda(q^m)|$ is a multiple of $q^2$. Therefore $S$ is not isomorphic to an exceptional group of Lie type. Let $L=\Lambda_k(q^m)$ be a finite simple classical group of Lie type of Lie rank $k$ over the field of order $q^m$. The set $\pi(L)$ contains a primitive divisor of $q^{mk}-1$ or a primitive divisor of $q^{2mk}-1$. Hence, if $mk>1327$, then $\pi(L)$ contains a number greater than $1327$. If $k>1$, then $|L|$ is a multiple of $q^2$. Using \cite{GAP} we get that $\pi(L)$ contains a number greater than $1327$ or does not contain $\Omega(r)$ for $r$ greater than or equal to the largest element of $\pi(\Lambda_k(q^m))$. \end{proof}
\begin{lem}\label{Spar} $S$ is not isomorphic to a sporadic simple group. \end{lem} \begin{proof}
The proof is obvious. \end{proof}
It follows from Lemmas \ref{Lie} and \ref{Spar} that $S\simeq Alt_m$. Since $p\in\pi(S)$, we obtain the required inequality $m\geq p$. Thus, the Theorem is proved. The Corollary follows from the Theorem and Lemma \ref{Alt1361}.
\end{document}
\begin{document}
\title{Retraction based Direct Search Methods for Derivative Free Riemannian Optimization} \begin{abstract}
Direct search methods represent a robust and reliable class of algorithms for solving black-box optimization problems.
In this paper, we explore the application of those strategies to Riemannian optimization, wherein minimization is to be performed with respect to variables restricted to lie on a manifold. More specifically, we consider classic and line search extrapolated variants of direct search, and, by making use of retractions, we devise tailored strategies for the minimization of both smooth and nonsmooth functions.
We thereby analyze, for the first time in the literature, a class of retraction based algorithms for minimizing nonsmooth objectives on a Riemannian manifold without having access to (sub)derivatives. Along with convergence guarantees, we provide numerical performance illustrations on a standard set of problems. \\
\textbf{Keywords:} Direct search, derivative free optimization, Riemannian manifold, retraction. \\
\textbf{AMS subject classifications:} 90C06, 90C30, 90C56. \end{abstract}
\section{Introduction}
Riemannian optimization, or solving minimization problems constrained on a Riemannian manifold embedded in an Euclidean space, is an important and active area of research considering the numerous problems in data science, robotics, and other settings wherein there is an important geometric structure characterizing the allowable inputs. Derivative Free Optimization (DFO), or Zeroth Order Optimization, involves algorithms that make use only of function evaluations, rather than gradient computations, in their implementation. When the underlying dynamics are subject to significant epistemic uncertainty, or when each function evaluation requires running a simulation, derivatives may be unavailable. This paper presents the introduction of a classic set of DFO algorithms, namely direct search, to the case of Riemannian optimization. For classic references of Riemannian optimization and DFO, see, e.g., ~\cite{absil2009optimization} and \cite{audet2017derivative,conn2009introduction,larson2019derivative}, respectively.
Formally, let $\ensuremath \mathcal{M}$ be a smooth manifold embedded in $\ensuremath \mathbb{R}^n$.
We are interested here in the problem
\begin{equation} \label{eq:opt}
\min_{x \in \ensuremath \mathcal{M}} f(x)
\end{equation}
with $f$ continuous and bounded below. We consider both the case of $f(x)$ being continuously differentiable, as well as the more general nonsmooth case.
Direct search methods (see, e.g.,~\cite{kolda2003optimization} and references therein) belong to the class of algorithms that are mesh based, rather than model based. This distinction presents a binary taxonomy of DFO algorithms: on the one hand we have those based on approximating gradient information using function evaluations and constructing approximate local models, while on the other hand we have those based on sampling a pre-defined grid of points for the next iteration. Direct search is thus particularly suitable for black-box settings in which it is unclear whether any model of the objective would be accurate.
To the best of our knowledge, thorough studies of DFO on Riemannian manifolds have only been carried out recently in the literature. In~\cite{li2020zeroth}, the authors focus on a model based method using a two point function approximation for the gradient.
The paper~\cite{yao2021riemannian} presents a specialized Polak-Ribi{\`e}re-Polyak procedure for finding a zero of a tangent vector field on a Riemannian manifold. In \cite{dreisigmeyer2018direct}, the author focuses on a specific class of manifolds (reductive homogeneous spaces, including several matrix manifolds) where, thanks to the properties of exponential maps, a straightforward extension of mesh adaptive direct search methods (see, e.g., \cite{audet2006mesh,audet2017derivative}) and probabilistic direct search strategies \cite{gratton2015direct} is possible. Some DFO methods for nonsmooth problems on Riemannian manifolds, presented without convergence analysis, can be found in \cite{hosseini2019nonsmooth} and references therein.
Thus our paper presents the first analysis of retraction based direct search strategies on Riemannian manifolds, and the first analysis of a DFO algorithm for minimizing nonsmooth objectives in Riemannian optimization. In particular, we first adapt, thanks to the use of retractions, a classic direct search scheme (see, e.g., \cite{conn2009introduction, kolda2003optimization}) and a linesearch based scheme (see, e.g., \cite{cristofari2021derivative,liuzzi2010sequential,lucidi2002derivative,lucidi2002global} for further details on this class of methods)
to deal with the minimization of a given smooth function over a manifold. Then, inspired by the ideas in \cite{fasano2014linesearch}, we extend the two proposed strategies to the nonsmooth case.
The remainder of this paper is as follows. In Section~\ref{s:def}, we present some definitions. In Section~\ref{s:smooth}, we present and prove convergence for a direct search method applicable for continuously differentiable $f$. In Section~\ref{s:nonsmooth}, we consider the case of $f$ not being continuously differentiable, and only Lipschitz continuous. We present some numerical results in Section~\ref{s:num} and conclude in Section~\ref{s:con}.
\section{Definitions and notation}\label{s:def}
We now introduce some notation for the formalism we use in this article. We refer the reader to, e.g., \cite{absil2009optimization,boumal2020introduction} for an overview of the relevant background. \\
Let
$T\ensuremath \mathcal{M}$ be the tangent bundle of $\ensuremath \mathcal{M}$ and, for $x \in \ensuremath \mathcal{M}$, let $T_x\ensuremath \mathcal{M}$ be the tangent space to $\ensuremath \mathcal{M}$ at~$x$. We assume that $\ensuremath \mathcal{M}$ is a Riemannian manifold, i.e., for $x$ in $\ensuremath \mathcal{M}$ we have a scalar product $\Sc{\cdot}{\cdot}_x : T_x\ensuremath \mathcal{M} \times T_x\ensuremath \mathcal{M} \rightarrow \ensuremath \mathbb{R}$ depending smoothly on $x$. Let $\dist(\cdot, \cdot)$ be the distance induced by the scalar product, so that for $x, y \in \ensuremath \mathcal{M}$ we have that $\dist(x, y)$ is the length of the shortest geodesic connecting $x$ and $y$. Furthermore, let $\nabla_{\ensuremath \mathcal{M}}$ be the Levi-Civita connection for $\ensuremath \mathcal{M}$, and $\Gamma: T\ensuremath \mathcal{M} \times \ensuremath \mathcal{M} \rightarrow T\ensuremath \mathcal{M}$ the parallel transport with respect to $\nabla_{\ensuremath \mathcal{M}}$, with $\Gamma_x^y(v) \in T_y\ensuremath \mathcal{M}$ the transport of the vector $v \in T_x\ensuremath \mathcal{M}$. We define $\ensuremath{\mathsf P}_x$ as the orthogonal projection from $\ensuremath \mathbb{R}^n$ to $T_x\ensuremath \mathcal{M}$, and $S(x, r) \subset \ensuremath \mathbb{R}^n$ as the sphere centered at $x$ and with radius $r$. \\
We write $\{a_k\}$ as a shorthand for $\{a_k\}_{k \in I}$ when the index set $I$ is clear from the context. We also use the shorthand notations $T_k\ensuremath \mathcal{M}, \ensuremath{\mathsf P}_k, \Sc{\cdot}{\cdot}_k, \n{\cdot}_k$, $\Gamma_i^j$ for $T_{x_k}\ensuremath \mathcal{M}, \ensuremath{\mathsf P}_{x_k}, \Sc{\cdot}{\cdot}_{x_k}, \n{\cdot}_{x_k}$ and $\Gamma_{x_i}^{x_j}$. \\
We define the distance $\dist^*$ between vectors in different tangent spaces in a standard way using parallel transport (see for instance \cite{azagra2005nonsmooth}): for $x, y \in \ensuremath \mathcal{M}$, $v \in T_x\ensuremath \mathcal{M}$ and $w \in T_y\ensuremath \mathcal{M}$,
\begin{equation} \label{eq:ct}
\dist^*(v, w) = \n{v - \Gamma_y^x w} = \n{w - \Gamma_x^y v} \, ,
\end{equation}
and for a sequence $\{(y_k, v_k)\}$ in $T\ensuremath \mathcal{M}$ we write $v_k \rightarrow v$ if $y_k \rightarrow y$ in $\ensuremath \mathcal{M}$ and $\dist^*(v_k, v) \rightarrow 0$. \\
As is common in the Riemannian optimization literature (see, e.g., \cite{absil2012projection}), to define our tentative descent directions we use a retraction $R: T\ensuremath \mathcal{M} \rightarrow \ensuremath \mathcal{M}$. We assume $R \in C^1(T\ensuremath \mathcal{M}, \ensuremath \mathcal{M})$, with
\begin{equation} \label{eq:rbounded}
\dist (R(x, d), x) \leq L_r \n{d} \, ,
\end{equation}
(true in any compact subset of $T\ensuremath \mathcal{M}$ given the $C^1$ regularity of $R$, without any further assumptions), and that the sufficient decrease property holds: for any $L$-Lipschitz smooth $f$,
\begin{equation} \label{eq:taylor}
f(R(x, d)) \leq f(x) + \Sc{\tx{grad} f(x)}{d} + L \n{d}^2 \, .
\end{equation}
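As a concrete illustration (not part of the development above), on the unit sphere $S^{n-1}$ the normalization map $R(x, d) = (x + d)/\n{x + d}$ is a retraction, and for it $\dist(R(x, d), x) = \arctan(\n{d}) \leq \n{d}$, so \eqref{eq:rbounded} holds with $L_r = 1$. A numerical sanity check of these properties:

```python
import numpy as np

def retract_sphere(x, d):
    """Normalization retraction on the unit sphere: R(x, d) = (x + d)/||x + d||,
    for d in the tangent space T_x S^{n-1}, i.e. <x, d> = 0."""
    y = x + d
    return y / np.linalg.norm(y)

rng = np.random.default_rng(0)
x = rng.normal(size=5); x /= np.linalg.norm(x)
v = rng.normal(size=5); v -= (v @ x) * x          # project v onto T_x

# R(x, 0) = x and dR(x, t v)/dt |_{t=0} = v (first-order rigidity, numerically)
assert np.allclose(retract_sphere(x, np.zeros(5)), x)
t = 1e-6
assert np.allclose((retract_sphere(x, t * v) - x) / t, v, atol=1e-4)

# geodesic distance moved: arccos(<x, R(x, v)>) = arctan(||v||) <= ||v||  (L_r = 1)
s = np.linalg.norm(v)
assert np.arccos(np.clip(retract_sphere(x, v) @ x, -1.0, 1.0)) <= s + 1e-12
```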
\section{Smooth optimization problems}\label{s:smooth}
In this section, we consider solving~\eqref{eq:opt} with the objective satisfying $f \in C^1(\ensuremath \mathcal{M})$. Recall that we can define the Riemannian gradient as
\begin{equation}
\tx{grad} f(x) = \ensuremath{\mathsf P}_x(\nabla f(x)),
\end{equation}
for given $x \in \ensuremath \mathcal{M}$.
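For instance, on the unit sphere the projection is $\ensuremath{\mathsf P}_x(v) = v - \Sc{v}{x}x$. The following sketch, with the Rayleigh quotient as an illustrative objective of our own choosing, checks that the resulting Riemannian gradient is tangent and vanishes at critical points:

```python
import numpy as np

def riemannian_grad_sphere(euclid_grad, x):
    """grad f(x) = P_x(nabla f(x)) with P_x(v) = v - <v, x> x on the unit sphere."""
    return euclid_grad - (euclid_grad @ x) * x

# Rayleigh quotient f(x) = x^T A x restricted to the sphere: its Euclidean
# gradient is 2 A x, and the eigenvectors of A are exactly the critical points
A = np.diag([3.0, 1.0, -2.0])
x = np.array([1.0, 2.0, 2.0]) / 3.0               # a unit vector
g = riemannian_grad_sphere(2 * A @ x, x)

assert abs(g @ x) < 1e-12                         # g lies in the tangent space
e1 = np.array([1.0, 0.0, 0.0])                    # eigenvector of A
assert np.allclose(riemannian_grad_sphere(2 * A @ e1, e1), 0.0)
```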
\subsection{Preliminaries}
First, we assume that the objective function $f$ has a Lipschitz continuous gradient on the manifold.
\begin{assumption}\label{as:lipcongrad}
There exists $L_f>0$ such that for all $x, y\in\ensuremath \mathcal{M}$
\begin{equation} \label{eq:lipf}
\dist^*(\tx{grad} f(x), \tx{grad} f(y)) = \n{\Gamma_x^y \tx{grad} f(x) - \tx{grad} f(y)} \leq L_f \dist(x, y) \, ,
\end{equation}
\end{assumption}
As in the unconstrained case, the Lipschitz gradient property implies the standard descent property. \begin{proposition} \label{p:std}
Assume that $M$ is compact and $R$ is a $C^2$ retraction. If condition \eqref{eq:lipf} holds, then the sufficient decrease property \eqref{eq:taylor} holds for some constant $L > 0$. \end{proposition} The proof can be found in the appendix. An analogous property, but under the stronger assumption that $f$ has Lipschitz gradient as a function in $\ensuremath \mathbb{R}^n$, is proved in \cite{boumal2019global}.\\ Another assumption we make in this context is that the gradient norm is globally bounded.
\begin{assumption}\label{as:globboundgrad}
There exists $M_f>0$ such that,
\begin{equation} \label{eq:mf}
\n{\tx{grad} f(x)} \leq M_f,
\end{equation}
for every $x \in \ensuremath \mathcal{M}$. \end{assumption}
For each of the algorithms in this section, we further assume that, at each iteration $k$, we have a positive spanning basis $\{p_k^j\}_{j \in [1:K]}$ of the tangent space $T_{x_{k}}M$ of the iterate $x_k$ (further details on how to get a positive spanning basis can be found, e.g., in \cite{conn2009introduction}). More specifically, we assume that the basis stays bounded and does not become degenerate during the algorithm, that is,
\begin{assumption}\label{as:basis}
There exists $B>0$ such that
\begin{equation} \label{eq:bounded}
\max_{j \in [1:K]} \n{p_k^j} \leq B,
\end{equation}
for every $k \in \mathbb{N}$. Furthermore there is a constant $\tau > 0$ such that
\begin{equation} \label{eq:pspanningt}
\max_{j \in [1:K]} \Sc{r}{p_k^j} \geq \tau \n{r},
\end{equation}
for every $k \in \mathbb{N}$ and $r \in T_{x_k}M$. \\
\end{assumption}
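One concrete way to satisfy Assumption \ref{as:basis} is to take $\{\pm q_1, \dots, \pm q_m\}$ for an orthonormal basis $\{q_i\}$ of the tangent space, which gives $B = 1$ and $\tau = 1/\sqrt{m}$ with $m$ the dimension of $T_{x_k}\ensuremath \mathcal{M}$. A sketch on the sphere (the SVD-based construction of the tangent basis is one possible implementation, not prescribed by the paper):

```python
import numpy as np

def plus_minus_basis(x):
    """Positive spanning set {+/- q_1, ..., +/- q_m} of T_x S^{n-1}, built from
    an orthonormal basis of the tangent space (the null space of x^T)."""
    Q = np.linalg.svd(x.reshape(1, -1))[2][1:].T   # columns: orthonormal tangent basis
    return np.hstack([Q, -Q])

rng = np.random.default_rng(1)
x = rng.normal(size=4); x /= np.linalg.norm(x)
P = plus_minus_basis(x)                            # shape (4, 6): K = 2m directions

r = rng.normal(size=4); r -= (r @ x) * x           # an arbitrary tangent vector
m = 3                                              # dim T_x S^3
assert np.allclose(np.linalg.norm(P, axis=0), 1.0)              # B = 1
assert np.max(r @ P) >= np.linalg.norm(r) / np.sqrt(m) - 1e-12  # tau = 1/sqrt(m)
```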
\subsection{Direct search algorithm}
We present here our Riemannian Direct Search method based on Spanning Bases (RDS-SB) for smooth objectives as Algorithm~\ref{alg:ds}.
\begin{algorithm}[h]
\caption{RDS-SB}
\label{alg:ds}
\begin{algorithmic}
\par\vspace*{0.1cm}
\STATE \textbf{Input:} $x_0\in \ensuremath \mathcal{M}$, $\gamma_1 \in (0, 1)$, $\gamma_2 > 1$, $\alpha_0 > 0$, $\gamma > 0$
\FOR{$k = 0, 1, ...$}
\STATE Compute a positive spanning basis $\{p_k^j\}_{j = 1:K}$ of $T_k\ensuremath \mathcal{M}$
\FOR{$j=1,..., K$}
\STATE Let $x_k^j= R(x_k, \alpha_k p_k^j)$
\IF{$ f(x_k^j) \le f(x_k)-\gamma \alpha_k^2 $}
\STATE $\alpha_{k + 1} = \gamma_2 \alpha_k, x_{k + 1} = x_k^j$
\STATE Declare the step $k$ successful
\STATE \textbf{Break}
\ENDIF
\ENDFOR
\IF{$ f(x_k^j) > f(x_k)-\gamma \alpha_k^2 $ for $j \in [1:K]$}
\STATE $\alpha_{k+1}= \gamma_1 \alpha_k$, $x_{k + 1} = x_k$
\STATE Declare the step $k$ unsuccessful
\ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm}
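A minimal numerical instantiation of the scheme on the unit sphere can be sketched as follows (normalization retraction and $\pm$ tangent basis; the Rayleigh-quotient test problem, the sufficient decrease constant `c`, and all parameter values are illustrative choices of ours, not taken from the paper's experiments):

```python
import numpy as np

def rds_sb_sphere(f, x0, alpha0=1.0, gamma1=0.5, gamma2=2.0, c=1e-4, iters=1000):
    """Sketch of RDS-SB on the unit sphere S^{n-1}: normalization retraction
    R(x, d) = (x + d)/||x + d|| and the {+/- q_i} positive spanning basis."""
    x, alpha = x0 / np.linalg.norm(x0), alpha0
    for _ in range(iters):
        # orthonormal basis of T_x S^{n-1} = {v : <v, x> = 0}
        Q = np.linalg.svd(x.reshape(1, -1))[2][1:].T
        success = False
        for p in np.hstack([Q, -Q]).T:          # poll the spanning set
            y = x + alpha * p
            y /= np.linalg.norm(y)              # retraction R(x, alpha p)
            if f(y) <= f(x) - c * alpha ** 2:   # sufficient decrease test
                x, alpha, success = y, gamma2 * alpha, True
                break
        if not success:
            alpha *= gamma1                     # unsuccessful step: shrink
    return x

# Rayleigh quotient: its minimum over the sphere is the smallest eigenvalue
A = np.diag([4.0, 2.0, -1.0])
f = lambda v: v @ A @ v
x_star = rds_sb_sphere(f, np.ones(3))
assert abs(np.linalg.norm(x_star) - 1.0) < 1e-9
assert f(x_star) < -0.9                         # close to the minimum value -1
```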
This procedure resembles the standard direct search algorithm for unconstrained derivative free optimization (see, e.g.,~\cite{conn2009introduction, kolda2003optimization}) with two significant modifications. First, at every iteration a positive spanning basis is computed for the current tangent space $T_k\ensuremath \mathcal{M}$. As this space is expected to change at every iteration, it is not possible to reuse the standard positive spanning sets appearing in the classic algorithms. Second, the candidate point $x_k^j$ is computed by retracting the step $\alpha_k p_k^j$ from the current tangent space $T_{x_k}\ensuremath \mathcal{M}$ to the manifold.
\subsection{Convergence analysis}
Now we show asymptotic global convergence of the method. Using a similar structure of reasoning as in standard convergence derivations for direct search, we prove that the gradient evaluated at iterates associated with unsuccessful steps must converge to zero, and extend the property to the remaining iterates, using the Lipschitz continuity of the gradient.
The first lemma states a bound on the scalar product between the gradient and the descent direction for an unsuccessful iteration.
\begin{lemma} \label{l:deltaklem}
If $f(R(x_k, \alpha_k p_k^j)) > f(x_k) - \gamma \alpha_k^2$, then
\begin{equation} \label{eq:deltak}
\alpha_k(LB^2 + \gamma) > - \Sc{\tx{grad} f(x_k)}{p_k^j} \, .
\end{equation}
\end{lemma}
\begin{proof}
To start with, we have
\begin{equation} \label{eq:pspanning}
\begin{aligned}
& f(x_k) - \gamma \alpha_k^2 < f(R(x_k, \alpha_k p_k^j)) \leq f(x_k) + \alpha_k\Sc{\tx{grad} f(x_k)}{p_k^j} + L \alpha_k^2 \n{p_k^j}^2 \\
\leq & f(x_k) + \alpha_k \Sc{\tx{grad} f(x_k)}{p_k^j} + L \alpha_k^2 B^2 \, ,
\end{aligned}
\end{equation}
where we used \eqref{eq:taylor} in the second inequality, and \eqref{eq:bounded} in the third one.
The above inequality can be rewritten as
\begin{equation}
\alpha_k\Sc{\tx{grad} f(x_k)}{p_k^j} + \alpha_k^2 (LB^2 + \gamma) > 0.
\end{equation}
Given that $\alpha_k > 0$, the above is true iff
\begin{equation} \label{eq:deltakbis}
\alpha_k > - \frac{\Sc{\tx{grad} f(x_k)}{p_k^j}}{(LB^2 + \gamma)} \, ,
\end{equation}
which rearranged gives the thesis.
\end{proof}
From this we can infer a bound on the gradient with respect to the stepsize.
\begin{lemma} \label{l:gnorm}
If iteration $k$ is unsuccessful, then
\begin{equation} \label{eq:ngrad}
\n{\tx{grad} f(x_k)} \leq \frac{\alpha_k(2LB^2 + \gamma)}{\tau} \, .
\end{equation}
\end{lemma}
\begin{proof}
If iteration $k$ is unsuccessful, equation \eqref{eq:deltak} must hold for every $j \in [1:K]$. We obtain the thesis by applying the positive spanning property \eqref{eq:pspanningt} in the RHS, and then using $LB^2 + \gamma \leq 2LB^2 + \gamma$:
\begin{equation}
\alpha_k(LB^2 + \gamma) > \max_{j \in [1:K]} - \Sc{\tx{grad} f(x_k)}{p_k^j} \geq \tau \n{\tx{grad} f(x_k)} \, .
\end{equation}
\end{proof}
Finally, we are able to show convergence of the gradient norm using the lemmas above and appropriate arguments regarding the step sizes.
\begin{theorem} \label{th:ds}
For the sequence $\{x_k\}$ generated by Algorithm \ref{alg:ds} we have
\begin{equation}
\lim_{k \rightarrow \infty} \n{\tx{grad} f(x_k)} = 0 \, .
\end{equation}
\end{theorem}
\begin{proof}
To start with, clearly $\alpha_k \rightarrow 0$: the objective is bounded below, $\{f(x_k)\}$ is nonincreasing with $f(x_{k + 1}) \leq f(x_k) - \gamma \alpha_k^2$ if the step $k$ is successful, so for any $\varepsilon > 0$ there can be only a finite number of successful steps with $\alpha_k \geq \varepsilon$, while $\alpha_k$ is decreased at every unsuccessful step.\\
For a fixed $\varepsilon > 0$, let $\bar{k}$ be such that $\alpha_k \leq \varepsilon$ for every $k \geq \bar{k}$. We now show that, for every $\varepsilon > 0$ and $k \geq \bar{k}$ large enough, we have
\begin{equation} \label{eq:gradbound}
\n{\tx{grad} f(x_k)} \leq \varepsilon\left(\frac{ (2LB^2 + \gamma)}{\tau} + L_fL_r B \frac{\gamma_2}{\gamma_2 - 1}\right) \, ,
\end{equation}
which clearly implies the thesis given that $\varepsilon$ is arbitrary. \\
First, \eqref{eq:gradbound} is satisfied for $k \geq \bar{k}$ if the step $k$ is unsuccessful by Lemma \ref{l:gnorm}:
\begin{equation} \label{eq:unsucc}
\n{\tx{grad} f(x_k)} \leq \frac{\alpha_k(2LB^2 + \gamma)}{\tau} \leq \frac{\varepsilon(2LB^2 + \gamma)}{\tau} \, ,
\end{equation}
using $\alpha_k \leq \varepsilon$ in the second inequality. \\
If the step $k$ is successful, then let $j$ be the minimum positive index such that the step $k + j$ is unsuccessful. We have that $\alpha_{k + i} = \alpha_k \gamma_2^{i}$ for $i \in [0:j - 1]$, and since $\alpha_{k + j - 1} \leq \varepsilon$ by induction we get $\alpha_{k + i} \leq \varepsilon \gamma_2^{i - j + 1}$. Therefore
\begin{equation} \label{eq:sumdelta}
\sum_{i= 0}^{j - 1} \alpha_{k + i} \leq \sum_{i = 0}^{j - 1} \varepsilon \gamma_2^{i - j + 1} \leq \varepsilon \sum_{h= 0}^{\infty} \gamma_2^{-h} = \varepsilon \frac{\gamma_2}{\gamma_2 - 1} \, .
\end{equation}
Then
\begin{equation} \label{eq:distbound}
\begin{aligned}
& \dist(x_k, x_{k + j}) \leq \sum_{i = 0}^{j - 1} \dist(x_{k + i}, x_{k + i + 1}) = \sum_{i = 0}^{j - 1} \dist(x_{k + i}, R(x_{k + i}, \alpha_{k + i}p_{k + i}^{j(k + i)} )) \\
\leq & \sum_{i = 0}^{j - 1} L_r \alpha_{k + i}B \leq L_rB \varepsilon \frac{\gamma_2}{\gamma_2 - 1} \, ,
\end{aligned}
\end{equation}
where we used \eqref{eq:rbounded} together with \eqref{eq:bounded} in the second inequality, and \eqref{eq:sumdelta} in the third one. \\
In turn,
\begin{equation}
\begin{aligned}
& \n{\tx{grad} f(x_k)} \leq \dist^*(\tx{grad} f(x_k), \tx{grad} f(x_{k + j})) + \n{\tx{grad} f(x_{k + j})} \\
\leq & L_f \dist(x_k, x_{k + j}) + \frac{ \varepsilon(2LB^2 + \gamma)}{\tau} \leq \varepsilon\left(\frac{ 2LB^2 + \gamma}{\tau} + L_fL_r B \frac{\gamma_2}{\gamma_2 - 1} \right) \, ,
\end{aligned}
\end{equation}
where we used \eqref{eq:lipf} and \eqref{eq:unsucc} with $k + j$ instead of $k$ for the first and second summand respectively in the second inequality, and \eqref{eq:distbound} in the last one.
\end{proof}
\subsection{Incorporating an extrapolation linesearch}
The works~\cite{lucidi2002derivative,lucidi2002global} introduced the use of an extrapolating linesearch that tests the objective on points farther away from the current iterate than the tentative point obtained by direct search along a given direction (i.e., an element of the positive spanning set). Such a thorough exploration of the search directions ultimately yields better performance in practice.
We found that the same technique can be applied in the Riemannian setting to good effect.
We present here our Riemannian Direct Search with Extrapolation method based on Spanning Bases (RDSE-SB) for smooth objectives. The scheme is presented in detail as Algorithm \ref{alg:dse}.
As we can easily see, the method uses a specific stepsize for each direction in the positive spanning basis, so that instead of $\alpha_k$ we have a set of stepsizes $\{\alpha_k^j\}_{j \in [1:K]}$ for every $k \in \mathbb{N}_0$. Furthermore, a retraction-based linesearch procedure
(see Algorithm \ref{alg:lsep}) is used to better explore a given direction whenever a sufficient decrease of the objective is obtained.
When analyzing the RDSE-SB method, we assume that the following continuity condition holds.
\begin{assumption}
There exists a constant $L_{\Gamma}>0$ such that, for every $l, m \in \mathbb{N}$ and $j \in [1:K]$,
\begin{equation} \label{eq:mb}
\dist^*(p^j_l, p_m^j) \leq L_{\Gamma} \dist(x_l, x_m) \, .
\end{equation}
\end{assumption}
We refer the reader to \cite{lucidi2002global} for a slightly weaker continuity condition in a Euclidean setting.
\begin{algorithm}[h]
\caption{RDSE-SB}
\label{alg:dse}
\begin{algorithmic}
\par\vspace*{0.1cm}
\STATE \textbf{Input:} $x_0\in \mathbb{R}^n$, $\{\alpha^j_0\}_{j \in [1:K]}$, $\gamma > 0, \gamma_1 \in (0, 1), \gamma_2 \geq 1$.
\FOR{ $k=0, 1,...$}
\STATE Compute a positive spanning basis $\{ p_k^j\}_{j \in [1:K]} $ of $T_k\mathcal{M}$
\STATE Set $j(k)=\mod(k,K)$, $\alpha_{k}^i = \tilde{\alpha}_{k}^i$ and $\tilde{\alpha}_{k + 1}^i = \tilde{\alpha}_{k}^i$ for $i \in [1:K] \setminus \{j(k)\}$.
\STATE Compute $\alpha_k^{j(k)}, \tilde{\alpha}_{k + 1}^{j(k)}$ with \textbf{Linesearchprocedure}($\tilde \alpha_k^{j(k)}, x_k, p_k^{j(k)}, \gamma, \gamma_1, \gamma_2$)
\STATE Set $x_{k+1}= R(x_k, \alpha_k^{j(k)} p_k^{j(k)})$
\ENDFOR
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[h]
\caption{Linesearchprocedure($\alpha, x, d, \gamma, \gamma_1, \gamma_2$)}
\label{alg:lsep}
\begin{algorithmic}
\par\vspace*{0.1cm}
\IF{$f(R(x, \alpha d )) > f(x) - \gamma \alpha^2$}
\STATE \textbf{Return} $(0, \gamma_1 \alpha)$
\ENDIF
\WHILE{ $f(R(x, \alpha d )) < f(x) - \gamma \alpha^2 $}
\STATE Set $\alpha = \gamma_2 \alpha$
\ENDWHILE
\STATE \textbf{Return} $(\alpha/\gamma_2, \alpha/\gamma_2)$
\end{algorithmic}
\end{algorithm}
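For concreteness, the extrapolation logic of Algorithm \ref{alg:lsep} can be sketched in the Euclidean special case $R(x, v) = x + v$. This is a minimal illustration under our own assumptions; the returned pair corresponds to $(\alpha_k, \tilde{\alpha}_{k+1})$, and the iteration cap is a safeguard absent from the original scheme.

```python
# Sketch of the extrapolation linesearch with the Euclidean retraction
# R(x, v) = x + v. Returns (accepted stepsize, tentative stepsize for the
# next iteration), mirroring the (alpha_k, tilde alpha_{k+1}) pair.
def linesearch(f, R, x, alpha, d, gamma, gamma1, gamma2, max_expand=50):
    if f(R(x, alpha * d)) > f(x) - gamma * alpha**2:
        # No sufficient decrease: reject the step and shrink the stepsize.
        return 0.0, gamma1 * alpha
    # Extrapolate: keep expanding while sufficient decrease holds.
    while f(R(x, alpha * d)) < f(x) - gamma * alpha**2 and max_expand > 0:
        alpha *= gamma2
        max_expand -= 1
    # alpha is the first stepsize that failed; return the last accepted one.
    return alpha / gamma2, alpha / gamma2

f = lambda x: x**2
R = lambda x, v: x + v
step, tentative = linesearch(f, R, 1.0, 0.1, -1.0, gamma=0.1, gamma1=0.5, gamma2=2.0)
```

For $f(x) = x^2$ starting from $x = 1$ with initial stepsize $0.1$, the procedure expands the step several times before the sufficient decrease condition first fails, returning the last stepsize that satisfied it.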
We now proceed to prove the asymptotic convergence of this method.
\begin{lemma} \label{l:splem}
At every iteration $k$, the following inequality holds:
\begin{equation} \label{eq:alphakgamma1}
-\Sc{\tx{grad} f(x_k)}{p_k^{j(k)}} < \tilde{\alpha}_{k + 1}^{j(k)} \frac{\gamma_2}{\gamma_1} (2LB^2 + \gamma).
\end{equation}
\end{lemma}
\begin{proof}
It is straightforward to check that we must always have
\begin{equation}
f(R(x_k, \Delta_k p_k^{j(k)})) > f(x_k) - \gamma \Delta_k^2,
\end{equation}
for $\Delta_k = \frac{1}{\gamma_1} \td{\alpha}_{k + 1}^{j(k)}$ if the Linesearchprocedure terminates at the second line, and $\Delta_k = \gamma_2\td{\alpha}_{k + 1}^{j(k)} $ if the Linesearchprocedure terminates in the last line. Then in both cases
\begin{equation}
-\Sc{\tx{grad} f(x_k)}{p_k^{j(k)}} < \Delta_k (2LB^2 + \gamma) \leq \tilde{\alpha}_{k + 1}^{j(k)} \frac{\gamma_2}{\gamma_1} (2LB^2 + \gamma) \, ,
\end{equation}
where we used Lemma \ref{l:deltaklem} in the first inequality.
\end{proof}
\begin{theorem}
For $\{x_k\}$ generated by Algorithm \ref{alg:dse}, we have
\begin{equation}
\lim_{k \rightarrow \infty} \n{\tx{grad} f(x_k)} = 0 \, .
\end{equation}
\end{theorem}
\begin{proof}
Let $\bar{\alpha}_k = \max_{j \in [1:K]} \tilde{\alpha}_{k + 1}^{j}$, so that
$ \bar{\alpha}_k \rightarrow 0$ since $\tilde{\alpha}_{k}^{j} \rightarrow 0$ for every $j \in [1:K]$, reasoning as in the proof of Theorem \ref{th:ds}.
As a consequence of Lemma \ref{l:splem} we have
\begin{equation} \label{eq:jrel}
-\Sc{\tx{grad} f(x_k)}{p_k^{j(k)}} < \bar{\alpha}_k c_1\, ,
\end{equation}
for the constant $c_1 = \frac{\gamma_2}{\gamma_1} (2LB^2 + \gamma)$ independent from $j(k)$. \\
It remains to bound $\Sc{\tx{grad} f(x_k)}{p_k^i}$ for $i \neq j(k)$.
To start with, we have the following bound:
\begin{equation} \label{eq:gbound}
\begin{aligned}
& -\Sc{\tx{grad} f(x_{k})}{p^i_k} \leq -\Sc{\tx{grad} f(x_{k + h})}{p^i_{k + h}} + |\Sc{\tx{grad} f(x_{k + h})}{p^i_{k + h}} - \Sc{\tx{grad} f(x_{k})}{p^i_k}| \\
\leq & c_1 \bar{\alpha}_{k + h} + |\Sc{\tx{grad} f(x_{k + h})}{p^i_{k + h}} - \Sc{\tx{grad} f(x_{k})}{p^i_k}|\, ,
\end{aligned}
\end{equation}
for $h \leq K$ such that $j(k + h) = i$, and where in the second inequality we used \eqref{eq:jrel} with $k + h$ instead of $k$. For the second summand appearing in the RHS of \eqref{eq:gbound}, we can write the following bound
\begin{equation} \label{eq:dbound}
\begin{aligned}
& |\Sc{\tx{grad} f(x_{k + h})}{p^i_{k + h}} - \Sc{\tx{grad} f(x_k)}{p^i_k}| = |\Sc{\tx{grad} f(x_{k + h})}{p^i_{k + h}} - \Sc{\Gamma_k^{k + h} \tx{grad} f(x_k)}{\Gamma_k^{k + h}p^i_{k}}| \\
\leq & |\Sc{\tx{grad} f(x_{k + h}) - \Gamma_{k}^{k + h} \tx{grad} f(x_{k}) }{ p^i_{k + h}}| + |\Sc{\Gamma_{k}^{k + h} \tx{grad} f(x_{k}) }{p^i_{k + h} - \Gamma_k^{k + h} p^i_k}| \\
+ & |\Sc{\tx{grad} f(x_{k + h}) - \Gamma_{k}^{k + h} \tx{grad} f(x_{k}) }{p^i_{k + h} - \Gamma_k^{k + h} p^i_k}| \\
\leq & L_f \dist(x_k, x_{k + h}) \n{p^i_{k + h}} + L_{\Gamma} \n{\tx{grad} f(x_k)} \dist(x_{k + h}, x_k) + L_f L_{\Gamma} \dist(x_k, x_{k + h})^2 \\
\leq & (L_f B + L_{\Gamma} M_f + L_fL_{\Gamma} \dist(x_{k + h}, x_k))\dist(x_{k + h}, x_k)\, ,
\end{aligned}
\end{equation}
where in the second inequality we used the Cauchy--Schwarz inequality together with the assumptions on the Lipschitz property of the iterates \eqref{eq:lipf} and \eqref{eq:mb}, while in the third inequality we used conditions \eqref{eq:bounded} and \eqref{eq:mf}. \\
We can now bound $\dist(x_k, x_{k + h})$ as follows
\begin{equation} \label{eq:sumbound}
\begin{aligned}
& \dist(x_{k + h}, x_k) \leq \sum_{l = 0} ^ {h - 1} \dist(x_{k + l + 1}, x_{k + l}) \\
= & \sum_{l= 0}^{h - 1} \dist(x_{k + l}, R(x_{k + l}, \alpha_{k + l}^{j(k + l)} p_{k + l}^{j(k + l)})) \leq \sum_{l= 0}^{h - 1} L_r \bar{\alpha}_{k + l} \n{p_{k + l}^{j(k + l)}} \\
\leq & B L_r \sum_{l= 0}^{h - 1} \bar{\alpha}_{k + l} \leq hBL_r \max_{l \in [0:h-1]} \bar{\alpha}_{k + l} \\
\leq & KBL_r \max_{l \in [0:K]} \bar{\alpha}_{k + l} \, ,
\end{aligned}
\end{equation}
where we used \eqref{eq:rbounded} and $\alpha_{k + l}^{j(k + l)} \leq \bar{\alpha}_{k + l}$ in the second inequality, \eqref{eq:bounded} in the third one, and $h \leq K$ in the last one. \\
Now let $\Delta_k = \max_{l \in [0:K]} \bar{\alpha}_{k + l} $, so that in particular $\Delta_k \rightarrow 0$.
We apply \eqref{eq:sumbound} to the RHS of \eqref{eq:dbound} and obtain
\begin{equation}
\begin{aligned}
& |\Sc{\tx{grad} f(x_{k + h})}{p^i_{k + h}} - \Sc{\tx{grad} f(x_k)}{p^i_k}| \leq (L_f B + L_{\Gamma} M_f + L_f L_{\Gamma} c_2 \Delta_k)c_2 \Delta_k \rightarrow 0\, ,
\end{aligned}
\end{equation}
for $k \rightarrow \infty$ and $c_2 = KBL_r$.
Finally, for every $i \in [1:K]$
\begin{equation} \label{eq:czero}
- \Sc{\tx{grad} f(x_{k})}{p^i_k} \leq c_1 \bar{\alpha}_{k + h} + (L_f B + L_{\Gamma} M_f + L_f L_{\Gamma} c_2 \Delta_k)c_2 \Delta_k \rightarrow 0 \, ,
\end{equation}
and the thesis follows after observing that, by \eqref{eq:pspanningt},
\begin{equation}
\n{\tx{grad} f(x_k)} \leq \frac{1}{\tau} \max_{i \in [1:K]} -\Sc{\tx{grad} f(x_{k})}{p^i_k} \rightarrow 0 \, ,
\end{equation}
where the convergence of the gradient norm to zero is a consequence of
\eqref{eq:czero}.
\end{proof}
\section{Nonsmooth objectives}\label{s:nonsmooth}
Now we proceed to present and study direct search methods in the context where $f$ is Lipschitz continuous and bounded from below, but not necessarily continuously differentiable. The algorithms we devise are built around the ideas given in~\cite{fasano2014linesearch}, where the authors consider direct search methods for nonsmooth objectives in Euclidean space.
\subsection{Clarke stationarity for nonsmooth functions on Riemannian manifolds}
In order to perform our analysis, we first need to define the Clarke directional derivative at a point $x \in M$. The standard approach is to write the function in coordinate charts and take the standard Clarke derivative in a Euclidean space (see, e.g.,~ \cite{hosseini2013nonsmooth} and \cite{hosseini2017riemannian}). Formally, given a chart $(\varphi, U)$ at $x \in M$ and $v \in T_xM$, we define
\begin{equation} \label{eq:clarke}
f^{\circ}(x, v) = \tilde{f}^{\circ}(\varphi(x), d\varphi(x)v)\, ,
\end{equation}
for $\tilde{f}(y) = f(\varphi^{- 1}(y))$. The following lemma shows the relationship between definition \eqref{eq:clarke} and a directional-derivative-like object defined through retractions.
\begin{lemma} \label{l:clarkeR}
If $(y_k, q_k) \rightarrow (x, d)$ and $t_k \rightarrow 0$, then
\begin{equation}
f^{\circ}(x, d) \geq \limsup_{k \rightarrow \infty} \frac{f(R(y_k, t_kq_k)) - f(y_k)}{t_k} \, .
\end{equation}
\end{lemma}
The proof is rather technical and we defer it to the appendix.
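As a simple Euclidean illustration of the definition (our own example, not needed for the analysis), take $M = \ensuremath \mathbb{R}$ with the identity chart and $f(x) = |x|$. At the origin,
\begin{equation}
f^{\circ}(0, v) = \limsup_{\substack{y \to 0, \, t \downarrow 0}} \frac{|y + tv| - |y|}{t} = |v| \geq 0 \quad \text{for every } v \in \ensuremath \mathbb{R} \, ,
\end{equation}
since the difference quotient is at most $|v|$ by the triangle inequality, and the value $|v|$ is attained along $y = 0$. Hence the origin is Clarke stationary even though $f$ is not differentiable there.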
\subsection{Refining subsequences}
We now adapt the definition of refining subsequence used in the analysis of direct search methods (see, e.g., \cite{audet2002analysis, fasano2014linesearch}) to the Riemannian setting.
Let $(x_k, d_k)$ be a sequence in $T\ensuremath \mathcal{M}$.
\begin{definition}
We say that the subsequence $\{x_{i(k)}\}$ is refining if $x_{i(k)} \rightarrow x $, and if for every $d \in T_x\ensuremath \mathcal{M}$ with $\n{d}_x = 1$ there is a further subsequence $\{j(i(k))\}$ such that
\begin{equation}
\lim_{k \rightarrow \infty} d_{j(i(k))} = d \, .
\end{equation}
\end{definition} We now give a sufficient condition for a sequence to be refining.
\begin{proposition}
If $x_{i(k)} \rightarrow x^*$, $\{\bar{d}_{i(k)}\}$ is dense in the unit sphere, and $d_{i(k)} = \ensuremath{\mathsf P}_{k}(\bar{d}_{i(k)})/\n{\ensuremath{\mathsf P}_k(\bar{d}_{i(k)})}_k$ if $\ensuremath{\mathsf P}_k(\bar{d}_{i(k)}) \neq 0$ and $d_{i(k)} = 0$ otherwise, then the subsequence $\{x_{i(k)}\}$ is refining.
\end{proposition}
\begin{proof}
Fix $d \in T_{x^*} \ensuremath \mathcal{M}$, with $\n{d}_{x^*} = 1$, and let $\bar{d} = d/ \n{d}$. By density, we have that $\bar{d}_{j(i(k))} \rightarrow \bar{d}$ for a proper choice of the subsequence $\{j(i(k))\}$. Then
\begin{equation}
\lim_{k \rightarrow \infty } d_{j(i(k))} = \lim_{k \rightarrow \infty } \frac{\ensuremath{\mathsf P}_k(\bar{d}_{j(i(k))})}{\n{\ensuremath{\mathsf P}_k(\bar{d}_{j(i(k))})}_k}
= \frac{\ensuremath{\mathsf P}_{x^*}(\bar{d})}{\n{\ensuremath{\mathsf P}_{x^*}(\bar{d})}_{x^*}} = \frac{\bar{d}}{\n{\bar{d}}_{x^*}} = d \, ,
\end{equation}
where in the second equality we used the continuity of $\ensuremath{\mathsf P}_x$ and of the norm $\n{\cdot}_x$, and in the third equality we used $\ensuremath{\mathsf P}_{x^*}(\bar{d}) = \bar{d}$ since $\bar{d} \in T_{x^*} \ensuremath \mathcal{M}$ by construction.
\end{proof}
\subsection{Direct search for nonsmooth objectives}
We present here our Riemannian Direct Search method based on Dense Directions (RDS-DD) for nonsmooth objectives. The scheme is presented in detail as Algorithm \ref{alg:dse3}. The algorithm performs three simple steps at every iteration $k$. First, a given search direction is suitably projected onto the current tangent space. Then a tentative point is generated by retracting the step $\alpha_k d_k$ from the tangent space to the manifold. The tentative point is accepted as the new iterate if a sufficient decrease condition on the objective function is satisfied (and the stepsize is expanded); otherwise the iterate stays the same (and the stepsize is reduced).
\begin{algorithm}[h]
\caption{RDS-DD}
\label{alg:dse3}
\begin{algorithmic}
\par\vspace*{0.1cm}
\STATE{\textbf{Input:} $x_0\in \mathbb{R}^n$, $\alpha_0 > 0$, $\gamma > 0, \gamma_1 \in (0, 1), \gamma_2 \geq 1$, $\{\bar{d}_k\}$ dense in $S(0, 1)$}
\FOR{ $k=0, 1,...$}
\STATE{Let $d_k = \ensuremath{\mathsf P}_{k}(\bar{d}_k)/\n{\ensuremath{\mathsf P}_{k}(\bar{d}_k)}_k$ if $\ensuremath{\mathsf P}_{k}(\bar{d}_k) \neq 0$, $0$ otherwise}
\IF{$f(R(x_k, \alpha_k d_k )) \leq f(x_k) - \gamma \alpha_k^2$}
\STATE{$x_{k + 1} = R(x_k, \alpha_k d_k )$, $\alpha_{k + 1} = \gamma_2 \alpha_k$}
\ELSE
\STATE $x_{k + 1} = x_k$, $\alpha_{k + 1} = \gamma_1 \alpha_k$
\ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm}
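As an illustration, a minimal version of RDS-DD can be instantiated on the unit sphere $S^{n-1} \subset \ensuremath \mathbb{R}^n$. Everything below is our own toy instantiation: the tangent projection $\ensuremath{\mathsf P}_x(v) = v - \Sc{v}{x}x$, the metric projection retraction $R(x, v) = (x + v)/\n{x + v}$, and random unit vectors standing in for a dense sequence $\{\bar{d}_k\}$.

```python
import numpy as np

# Toy instantiation of RDS-DD on the unit sphere S^{n-1} in R^n.
# P_x(v) = v - <v, x> x projects onto the tangent space at x, and
# R(x, v) = (x + v)/||x + v|| is the metric projection retraction.
# Random unit vectors stand in for a dense direction sequence {bar d_k}.
def rds_dd_sphere(f, x0, n_iters=500, alpha0=1.0,
                  gamma=1e-4, gamma1=0.5, gamma2=2.0, seed=0):
    rng = np.random.default_rng(seed)
    x = x0 / np.linalg.norm(x0)
    alpha = alpha0
    for _ in range(n_iters):
        bar_d = rng.standard_normal(x.size)
        bar_d /= np.linalg.norm(bar_d)
        p = bar_d - np.dot(bar_d, x) * x               # tangent projection
        d = p / np.linalg.norm(p) if np.linalg.norm(p) > 0 else np.zeros_like(x)
        y = x + alpha * d
        y /= np.linalg.norm(y)                         # retraction
        if f(y) <= f(x) - gamma * alpha**2:            # sufficient decrease
            x, alpha = y, gamma2 * alpha               # successful step
        else:
            alpha = gamma1 * alpha                     # unsuccessful step
    return x

# Minimizing the nonsmooth f(x) = ||x||_1 over the sphere (minimum value 1,
# attained at the signed coordinate vectors).
f = lambda x: np.sum(np.abs(x))
x = rds_dd_sphere(f, np.ones(3))
```

By construction the iterates stay on the sphere and the objective values are nonincreasing; the random directions are only a stand-in for the dense deterministic sequence required by the theory.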
Thanks to the theoretical tools previously introduced, we can easily prove that a suitable subsequence of unsuccessful iterations of the RDS-DD method converges to a Clarke stationary point.
\begin{theorem} \label{th:dsns}
Let $\{x_k\}$ be generated by Algorithm \ref{alg:dse3}. If $\{x_{i(k)}\}$ is refining, with $ x_{i(k)} \rightarrow x^* $, and $i(k)$ is an unsuccessful iteration for every $k \in \mathbb{N} \cup \{0\}$, then $x^*$ is Clarke stationary.
\end{theorem}
\begin{proof}
Clearly, as in the smooth case, $\alpha_k \rightarrow 0$, and in particular $\alpha_{i(k)} \rightarrow 0$. Since by assumption $i(k)$ is an unsuccessful step, we have, for every $i(k)$,
\begin{equation} \label{eq:inequns}
f(R(x_{i(k)}, \alpha_{i(k)} d_{i(k)})) - f(x_{i(k)}) > -\gamma \alpha_{i(k)}^2 \, .
\end{equation}
Let $\{j(i(k)) \}$ be such that $d_{j(i(k))} \rightarrow d$, and let $y_k = x_{j(i(k))} $, $q_k = d_{j(i(k))}$, $t_k = \alpha_{j(i(k))}$. We have
\begin{equation}
\limsup_{k \rightarrow \infty} \frac{f(R(y_k, t_kq_k)) - f(y_k)}{t_k} \geq \limsup_{k \rightarrow \infty} -\gamma \alpha_{j(i(k))} = 0\, ,
\end{equation}
thanks to \eqref{eq:inequns}, and by applying Lemma \ref{l:clarkeR} we get
\begin{equation}
f^{\circ}(x^*, d) \geq \limsup_{k \rightarrow \infty} \frac{f(R(y_k, t_kq_k)) - f(y_k)}{t_k} \geq 0 \, ,
\end{equation}
which implies the thesis since $d$ is arbitrary.
\end{proof}
\subsection{Direct search with extrapolation for nonsmooth objectives}
We present here our Riemannian Direct Search method with Extrapolation based on Dense Directions (RDSE-DD) for nonsmooth objectives. The detailed scheme is given in Algorithm \ref{alg:dse2}. As we can easily see, the algorithm performs just two simple steps at every iteration $k$. First, a given search direction is suitably projected onto the current tangent space. Then a linesearch is performed using Algorithm \ref{alg:lsep} to hopefully obtain a new point that guarantees a sufficient decrease.
\begin{algorithm}[h]
\caption{RDSE-DD}
\label{alg:dse2}
\begin{algorithmic}
\par\vspace*{0.1cm}
\STATE \textbf{Input:} $x_0\in \mathbb{R}^n$, $\alpha_0 > 0$, $\gamma > 0, \gamma_1 \in (0, 1), \gamma_2 \geq 1$, $\{\bar{d}_k\}$ dense in $S(0, 1)$.
\FOR{$k=0, 1,...$}
\STATE Let $d_k = \ensuremath{\mathsf P}_{k}(\bar{d}_k)/\n{\ensuremath{\mathsf P}_{k}(\bar{d}_k)}_k$ if $\ensuremath{\mathsf P}_{k}(\bar{d}_k) \neq 0$, $0$ otherwise.
\STATE Compute $\alpha_k, \tilde{\alpha}_{k + 1}$ with \textbf{Linesearchprocedure}($\tilde{\alpha}_k, x_k, d_k, \gamma, \gamma_1, \gamma_2$)
\STATE Set $x_{k+1}= R(x_k, \alpha_k d_k)$
\ENDFOR
\end{algorithmic}
\end{algorithm}
Once again, by exploiting the theoretical tools previously introduced, we can straightforwardly prove that a suitable subsequence of the RDSE-DD iterations converges to a Clarke stationary point. It is interesting to notice that, thanks to the use of the linesearch strategy, we are not restricted to considering unsuccessful iterations this time.
\begin{theorem} \label{th:dsextns}
Let $\{x_k\}$ be generated by Algorithm \ref{alg:dse2}.
If $\{x_{i(k)}\}$ is refining, with $ x_{i(k)} \rightarrow x^* $, then $x^*$ is Clarke stationary.
\end{theorem}
\begin{proof}
Let $\beta_k = \td{\alpha}_{k + 1}/\gamma_1$ if the linesearch procedure exits before the loop, and $\beta_k = \gamma_2 \td{\alpha}_{k + 1}$ otherwise. Clearly $\beta_k \rightarrow 0$, and by definition of the linesearch procedure, for every $k$
\begin{equation} \label{eq:inequns2}
f(R(x_k, \beta_k d_k)) - f(x_k) > -\gamma \beta_k^2 \, .
\end{equation}
The rest of the proof is analogous to that of Theorem \ref{th:dsns}.
\end{proof}
\section{Numerical results}\label{s:num}
We now report the results of some numerical experiments with the algorithms described in this paper on a set of simple but illustrative example problems. The comparison among the algorithms is carried out by using data and performance profiles \cite{more2009benchmarking}. Specifically, let $S$ be a set of algorithms and $P$ a set of problems. For each $s\in S$ and $p \in P$, let $t_{p,s}$ be the number of function evaluations required by algorithm $s$ on problem $p$ to satisfy the condition \begin{equation}\label{eq:stop} f(x_k) \leq f_L + \tau(f(x_0) - f_L)\, , \end{equation} where $0< \tau < 1$ and $f_L$ is the best objective function value achieved by any solver on problem $p$. Then, performance and data profiles of solver $s$ are the following functions \begin{eqnarray*}
\rho_s(\alpha) & = & \frac{1}{|P|}\left|\left\{p\in P: \frac{t_{p,s}}{\min\{t_{p,s'}:s'\in S\}}\leq\alpha\right\}\right|,\\
d_s(\kappa) & = & \frac{1}{|P|}\left|\left\{p\in P: t_{p,s}\leq\kappa(n_p+1)\right\}\right|\, , \end{eqnarray*} where $n_p$ is the dimension of problem $p$. \\ We used a budget of $100(n_p+1)$ function evaluations in all cases and two different precisions for the condition \eqref{eq:stop}, that is $\tau\in \{10^{-1},10^{-3}\}$. We consider randomly generated instances of well-known optimization problems over manifolds from \cite{absil2009optimization,boumal2020introduction,hosseini2019nonsmooth}. A brief description of those problems as well as the details of our implementation can be found in the appendix (see Sections~\ref{s:Id}, \ref{sapp:numtests} and \ref{s:nsp}). The size of the ambient space for the instances varies from 2 to 200. We would finally like to highlight that, in Section~\ref{s:anr}, we report further detailed numerical results, splitting the problems by ambient space dimension: between 2 and 15 for small instances, between 16 and 50 for medium instances, and between 51 and 200 for large instances. \\
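The two profiles above can be computed directly from the matrix of evaluation counts $t_{p,s}$. The sketch below is a minimal implementation under our own conventions ($T[p,s] = t_{p,s}$, with $\infty$ marking a failure to satisfy \eqref{eq:stop} within the budget):

```python
import numpy as np

# rho_s(alpha): fraction of problems where solver s is within a factor alpha
# of the best solver; d_s(kappa): fraction of problems solved within
# kappa * (n_p + 1) function evaluations. T[p, s] = t_{p,s} (np.inf = failure).
def performance_profile(T, s, alpha):
    best = T.min(axis=1)                 # best solver per problem
    return float(np.mean(T[:, s] / best <= alpha))

def data_profile(T, s, kappa, dims):
    return float(np.mean(T[:, s] <= kappa * (dims + 1)))

# Two problems, two solvers: solver 0 wins problem 0, solver 1 wins problem 1.
T = np.array([[10.0, 20.0],
              [30.0, 15.0]])
dims = np.array([5, 5])
```

For this $T$, solver 0 is fastest on exactly one of the two problems, so its performance profile at $\alpha = 1$ equals $0.5$ and reaches $1$ at $\alpha = 2$.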
\subsection{Smooth problems} In Figure \ref{fig:overall}, we include the results related to 8 smooth instances of problem \eqref{eq:opt} from \cite{absil2009optimization,boumal2020introduction}, each with 15 different problem dimensions (from 2 to 200), for a total number of 60 tested instances. We compared our methods, that is RDS-SB and RDSE-SB, with the zeroth order gradient descent (ZO-RGD, \cite[Algorithm 1]{li2020zeroth}). \\ The results clearly show that RDSE-SB performs better than RDS-SB and ZO-RGD both in efficiency and reliability for both levels of precision. By taking a look at the detailed results in Section \ref{s:anr}, we can also see how the gap between RDSE-SB and the other two algorithms gets larger as the problem dimension grows.
\begin{figure}
\caption{Data p., $\tau=10^{-1}$}
\caption{Perf. p., $\tau=10^{-1}$}
\caption{Data p., $\tau=10^{-3}$}
\caption{Perf. p., $\tau=10^{-3}$}
\caption{Smooth case: results for all the instances}
\label{fig:overall}
\end{figure}
\subsection{Nonsmooth problems}\label{sec:NSprob}
We finally report a preliminary comparison between a direct search strategy and a linesearch strategy on two nonsmooth instances of \eqref{eq:opt} from \cite{hosseini2019nonsmooth}, each with 15 different problem sizes (from 2 to 200), thus getting a total number of 30 tested instances.
In the direct search strategy (RDS-DD+), we apply the RDS-SB method until $\alpha_{k + 1} \leq \alpha_{\epsilon}$, at which point we switch to the nonsmooth version RDS-DD. Analogously, in the linesearch strategy (RDSE-DD+), we apply the RDSE-SB method until $\max_{j \in [1:K]} \tilde{\alpha}_{k + 1}^j \leq \alpha_{\epsilon}$, at which point we switch to the nonsmooth version RDSE-DD. Both strategies use a threshold parameter $\alpha_{\epsilon} > 0$ to switch from the smooth to the nonsmooth DFO algorithm. We refer the reader to \cite{fasano2014linesearch} and references therein for other direct search strategies combining coordinate and dense directions. \\ We report, in Figure \ref{fig:nonsmooth}, the comparison between the two considered strategies. As in the smooth case, the linesearch based strategy outperforms the simple direct search one. By taking a look at the detailed results in Section \ref{s:anr}, we can once again see how the gap between the algorithms widens as the problem dimension grows.
\begin{figure}
\caption{Data p., $\tau=10^{-1}$}
\caption{Perf. p., $\tau=10^{-1}$}
\caption{Data p., $\tau=10^{-3}$}
\caption{Perf. p., $\tau=10^{-3}$}
\caption{Nonsmooth case: results for all the instances}
\label{fig:nonsmooth}
\end{figure}
\section{Conclusion}\label{s:con} In this paper, we presented direct search algorithms with and without an extrapolation linesearch for minimizing functions over a Riemannian manifold. We found that, modulo modifications to account for the vector space structure changing across iterations, direct search strategies provide convergence guarantees for both smooth and nonsmooth objectives. In our numerical experiments, the extrapolation linesearch speeds up direct search in both cases, and it even outperforms a gradient-approximation-based zeroth-order Riemannian algorithm in the smooth case. As a natural extension of this work, considering the stochastic setting would be a reasonable next step.
\section{Appendix}
\subsection{Proofs}
In order to prove Proposition \ref{p:std} we first need the following lemma.
\begin{lemma} \label{l:hlem}
For a Lipschitz continuous function $h: \ensuremath \mathbb{R}^m \rightarrow \ensuremath \mathbb{R}$, $\tilde{y}, \tilde{v} \in \ensuremath \mathbb{R}^m$, if $\tilde{y}_k \rightarrow \tilde{y}$, $\tilde{v}_k \rightarrow \td{v}$ and $t_k \rightarrow 0$ then
\begin{equation}
h^{\circ}(\td{y},\td{v}) \geq \limsup_{k \rightarrow \infty} \frac{h(\tilde{y}_k + t_k \tilde{v}_k) - h(\tilde{y}_k)}{t_k} \, .
\end{equation} \end{lemma} \begin{proof}
We have
\begin{equation} \label{eq:otk}
|h(\tilde{y}_k + t_k \tilde{v}_k) - h(\tilde{y}_k + t_k \td{v})| \leq t_k L_h\n{\td{v} - \td{v}_k} = o(t_k)\, ,
\end{equation}
with $L_h$ the Lipschitz constant of $h$. Then
\begin{equation}
\begin{aligned}
& \limsup_{k \rightarrow \infty} \frac{h(\tilde{y}_k + t_k \tilde{v}_k) - h(\td{y}_k)}{t_k} = \limsup_{k \rightarrow \infty} \frac{h(\tilde{y}_k + t_k \td{v}) + o(t_k) - h(\td{y}_k)}{t_k} \\
= & \limsup_{k \rightarrow \infty} \frac{h(\tilde{y}_k + t_k \td{v}) - h(\td{y}_k)}{t_k} \leq h^{\circ}(\td{y}, \td{v}) \, ,
\end{aligned}
\end{equation}
where we used \eqref{eq:otk} in the first equality, and where the inequality holds by definition of the Clarke derivative. \end{proof}
\begin{proof}[Proof of Proposition \ref{p:std}]
Let $(\varphi, U)$ be a chart defined in a neighborhood $U$ of $x \in M$. We use the notation $(\td{x}, \td{d})= (\varphi(x), d\varphi(x)d)$ for $(x, d) \in T\ensuremath \mathcal{M}$. We push forward the manifold and the related structure with the chart $\varphi$, i.e., for $\bar{\varphi} = \varphi^{-1}$ we define $\tilde{f} = f\circ \bar{\varphi}$, $\td{U} = \varphi(U)$, $\tilde{R}(\tilde{y}, \tilde{d}) = R(y, d)$, for $d, q \in T_xM$ we define $g(\td{d}, \td{q}) = \Sc{d}{q}_x$, $\n{\td{d} - \td{q}}_{\td{x}} = \n{d - q}_x$, and $\td{\Gamma}_{\td{x}}^{\td{y}}(\td{d}) = \Gamma_x^y(d)$. With a slight abuse of notation we use $\dist(\td{x}, \td{y})$ to denote $\dist(x, y)$. We also define as $\tx{grad} \td{f}$ the gradient of $\td{f}$ with respect to the scalar product $g$, so that $g(\tx{grad} \td{f}(\td{x}), \td{d}) = \Sc{\nabla \td{f}(x)}{d}$ for any $\td{d} \in \ensuremath \mathbb{R}^m$. Importantly, by the equivalence of norms in $\ensuremath \mathbb{R}^m$ we can use $O(\n{\td{d}}_x)$ and $O(\n{\td{d}})$ interchangeably. \\
We first prove \eqref{eq:taylor} in $x$ for some constant $L > 0$ and any $d$ with $\n{d} \leq B$ for some $B > 0$. Equivalently, we want to prove
\begin{equation} \label{eq:th}
\td{f}(\td{R}(\td{x}, \td{d})) \leq \td{f}(\td{x}) + g(\tx{grad} \td{f}(\td{x}), \td{d}) + \frac{L}{2}\n{\td{d}}_{\td{x}}^2 \, .
\end{equation}
for $\td{d}$ s.t. $\n{\td{d}} \leq B$. \\ By compactness we can choose $(\varphi, U)$ and $B > 0$ in such a way that, for every $\td{y} \in \td{U}_1 \subset \td{U}$ and $\td{d}$ with $\n{\td{d}}_{\td{y}} \leq B$ we have $\td{R}(\td{y}, \td{d}) \in \td{U}_2 \subset \td{U}$, with $\td{U}_2$ compact and $B > 0$ independent from $\td{x}, \td{y}, \td{d}$. \\ First, since $\td{R}$ is in particular $C^1$ regular \begin{equation}
\tilde{R}(\tilde{x}, \tilde{d}) = \tilde{x} + O(\n{\td{d}}_{\td{x}})\, , \end{equation} and by smoothness of the parallel transport \begin{equation} \label{eq:gammasmooth}
\td{\Gamma}_{\td{x}}^{\td{y}} \td{q} = \td{q} + O(\n{\td{x} - \td{y}}) \, . \end{equation} Furthermore, \begin{equation}
\tx{grad} \td{f}(\td{x} + \td{q}) =\td{\Gamma}_{\td{x}}^{\td{x} + \td{q}} \tx{grad} \td{f}(\td{x}) + O(\dist(\td{x}, \td{x} + \td{q})) \, , \end{equation} by the Lipschitz continuity assumption \eqref{eq:lipf}, and consequently \begin{equation} \label{eq:gradR}
\begin{aligned}
& \tx{grad} \td{f}(\td{R}(\td{x}, \td{q})) =\td{\Gamma}_{\td{x}}^{\td{R}(\td{x}, \td{q})} \tx{grad} \td{f}(\td{x}) + O(\dist(\td{x}, \td{R}(\td{x}, \td{q}))) \\
= & \td{\Gamma}_{\td{x}}^{\td{R}(\td{x}, \td{q})} \tx{grad} \td{f}(\td{x}) + O(\n{\td{q}}) \, ,
\end{aligned} \end{equation} where we used \eqref{eq:rbounded} in the last equality. \\ Finally, since, $\frac{d}{dt}\td{R}(\td{x}, t\td{d})$ is $C^1$ regular, we also have \begin{equation} \label{eq:ddt}
\begin{aligned}
&\frac{d}{dt} \td{R}(\td{x}, t\td{q})|_{t = h} = \frac{d}{dt} \td{R}(\td{x}, t\td{q})|_{t = 0} +
O(\n{h \td{q}}) \\
= & \td{q} + O(h\n{\td{q}}) = \td{\Gamma}_{\td{x}}^{R(\td{x}, h\td{q})}\td{q} + O(\n{R(\td{x}, h\td{q}) - \td{x}}) + O(h \n{\td{q}}) = \td{\Gamma}_{\td{x}}^{R(\td{x}, h\td{q})}\td{q} + O(h \n{\td{q}})\, ,
\end{aligned} \end{equation} where we used \eqref{eq:gammasmooth} in the third equality, and \eqref{eq:rbounded} in the last one. Again by compactness, for $\td{y} \in \td{U}_1$, $t \leq 1$, $\n{\td{q}}, \n{\td{d}} \leq B$ the implicit constants can be taken with no dependence from the variables. \\ Now for $\td{d}$ s.t. $\td{d} \leq B$ define $\td{q} = B \td{d}/\n{\td{d}}$, so that $\td{d} = \bar{t} \td{q}$ for $\bar{t} =\n{\td{d}}/B$. Then we obtain \eqref{eq:th} reasoning as follows: \begin{equation}
\begin{aligned}
& \td{f}(\td{R}(\td{x}, \td{d})) - \td{f}(\td{R}(\td{x}, 0)) = \td{f}(\td{R}(\td{x}, \bar{t} q)) - \td{f}(\td{R}(\td{x}, 0)) \\
= & \int_{0}^{\bar{t}} \frac{d}{dt} \td{f}(\td{R}(\td{x}, t\td{q})) \, dt = \int_{0}^{\bar{t}} g\left(\tx{grad} \td{f}(\td{R}(\td{x}, t\td{q})), \frac{d}{dt} \td{R}(\td{x}, t\td{q})\right) dt \\ = & \int_{0}^{\bar{t}} g\left(\td{\Gamma}_{\td{x}}^{\td{R}(\td{x}, t\td{q})} \tx{grad} \td{f}(\td{x}) + O(t\n{\td{q}}), \td{\Gamma}_{\td{x}}^{\td{R}(\td{x},t\td{q})}\td{q} + O(t \n{\td{q}})\right) dt \\ = & \int_{0}^{\bar{t}} \left(g\left(\td{\Gamma}_{\td{x}}^{\td{R}(\td{x}, t\td{q})} \tx{grad} \td{f}(\td{x}), \td{\Gamma}_{\td{x}}^{\td{R}(\td{x},t\td{q})}\td{q}\right) + O(t\n{\td{q}})\right) dt \\ = & g(\tx{grad} \td{f}(\td{x}), \td{d}) + O(\bar{t}^2 \n{\td{q}}) = g(\tx{grad} \td{f}(\td{x}), \td{d}) + O(\n{\td{d}}^2) \, ,
\end{aligned} \end{equation} where we used \eqref{eq:gradR} and \eqref{eq:ddt} in the fourth equality. To conclude, notice that the above argument does not depend on the choice of $\td{x} \in \td{U}_1$, so that it can be extended to every $\td{y} \in \td{U}_1$ and then by compactness to every $y \in M$. \end{proof}
\begin{proof}[Proof of Lemma \ref{l:clarkeR}]
With the notation introduced in the proof of Proposition \ref{p:std}, without loss of generality we assume that $U$ is bounded and that $\varphi$ can be extended to a neighborhood containing the closure of $U$. \\
First, since the pushforward $\td{R}$ of a $C^2$ retraction is itself a $C^2$ retraction of $T \ensuremath \mathbb{R}^m$ onto $\ensuremath \mathbb{R}^m$, we have the Taylor expansion
\begin{equation} \label{eq:rtaylor}
\td{R}(\td{y}, \td{v}) = \td{y} + \td{v} + O(\n{\td{v}}^2) \, ,
\end{equation}
with the implicit constant uniform for $\td{y}$ varying in $\td{U}$ and $\td{v}$ chosen in $\ensuremath \mathbb{R}^m$. \\
Second, for any fixed constant $B> 0$, by continuity we have
\begin{equation} \label{eq:bgamma}
\n{\td{\Gamma}_{\td{x}}^{\td{x}_k}\td{q} - \td{q}} \leq O\left( \n{\td{x} - \td{x}_k} \right)\, ,
\end{equation}
for $k \rightarrow \infty$, $\td{q} \in \ensuremath \mathbb{R}^m$ with $\n{\td{q}} \leq B$, and with a uniform implicit constant. \\
Therefore
\begin{equation} \label{eq:dktdk}
\begin{aligned}
& \n{\td{d}_k - \td{d}} \leq \n{\td{d}_k - \td{\Gamma}_{\td{x}}^{\td{x}_k}\td{d}} + \n{\td{\Gamma}_{\td{x}}^{\td{x}_k}\td{d} - \td{d}} \leq O\left( \n{\td{d}_k - \td{\Gamma}_{\td{x}}^{\td{x}_k}(\td{d})}_{\td{x}} \right) + O\left(\n{\td{x} - \td{x}_k}\right) \\
& = O\left(\n{d_k - \Gamma_x^{x_k}(d)}_{x} \right) + O\left( \n{\td{x} - \td{x}_k} \right) = o(1) \, ,
\end{aligned}
\end{equation}
where in the second inequality we used \eqref{eq:bgamma}, and in the last equality we used $d_k \rightarrow d$ together with $\td{x}_k \rightarrow \td{x}$. \\
Let now $\td{v}_k = (\td{R}(\td{x}_k, t_k\td{d}_k) - \td{x}_k)/t_k$. Then
\begin{equation} \label{eq:vkd}
\begin{aligned}
& \n{\td{v}_k - \td{d}} = \frac{1}{t_k}\n{\td{R}(\td{x}_k, t_k\td{d}_k) - \td{x}_k - t_k\td{d}} \leq \frac{1}{t_k}
(\n{\td{R}(\td{x}_k, t_k\td{d}_k) - \td{x}_k - t_k\td{d}_k} + t_k\n{\td{d}_k - \td{d}}) \\
= & \frac{1}{t_k} (O(t_k^2 \n{\tilde{d}_k}^2) + t_k o(1)) = o(1)\, ,
\end{aligned}
\end{equation}
where we used \eqref{eq:rtaylor} and \eqref{eq:dktdk} for the first and the second summand in the second equality. In other words, $\td{v}_k \rightarrow \td{d}$. To conclude,
\begin{equation}
\begin{aligned}
& \limsup_{k \rightarrow \infty} \frac{f(R(y_k, t_kd_k)) - f(y_k)}{t_k} = \limsup_{k \rightarrow \infty} \frac{\td{f}(\td{R}(\td{y}_k, t_k\td{d}_k)) - \td{f}(\td{y}_k)}{t_k} \\
= & \limsup_{k \rightarrow \infty} \frac{\td{f}(\td{y}_k + t_k \td{v}_k) - \td{f}(\td{y}_k)}{t_k} \geq \td{f}^{\circ}(\td{x}, \td{d}) = f^{\circ}(x, d) \, ,
\end{aligned}
\end{equation}
where in the inequality we were able to apply Lemma \ref{l:hlem}, because $\td{v}_k \rightarrow \td{d}$ by \eqref{eq:vkd}.
\end{proof}
\subsection{Implementation details} \label{s:Id} For all the problems, the manifold structure we used was the one available in the MANOPT library \cite{manopt}.
After a basic tuning phase, we set the algorithm parameters as follows: we used $\gamma_1 = 0.61$, $\gamma_2 = 1$ and $\gamma= 0.77$ for Algorithm \ref{alg:ds}, $\gamma_1= 0.81$, $\gamma_2 = 3.12$ and $\gamma= 0.11$ for Algorithm \ref{alg:dse}, and the stepsize $1.64/n$ (recall that $n$ is the dimension of the ambient space) for the ZO-RGD method. \\ For the nonsmooth strategies RDS-DD+ and RDSE-DD+, we used the same parameters as RDS-SB and RDSE-SB in the smooth case, setting $\alpha_{\epsilon} = 10^{-3}$, and for both RDS-DD and RDSE-DD we used $\gamma_1= 0.95$, $\gamma_2= 2$, and $\gamma= 1$. \\ In both Algorithm \ref{alg:ds} and Algorithm \ref{alg:dse}, the positive spanning basis was obtained by projecting the positive spanning basis $(e_1, ..., e_n, - e_1, ..., - e_n)$ of the ambient space $\mathbb{R}^n$ onto the tangent space. The initial stepsize was set to $1$ for all the direct search methods, with no fine tuning. \\ We generated the starting points and the parameters related to the instances either with the MATLAB \texttt{rand} function or with the random element generators implemented in the MANOPT library. \subsection{Smooth problems} \label{sapp:numtests} We describe here the eight smooth instances of problem \eqref{eq:opt} from \cite{absil2009optimization,boumal2020introduction}.
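The projection step described above can be sketched concretely. The fragment below (illustrative only; taking the unit sphere $\mathbb{S}^{n-1}$ as the manifold is an assumption for the sake of the example) projects the ambient positive spanning basis $(e_1, ..., e_n, -e_1, ..., -e_n)$ onto the tangent space at a point and verifies that the projected directions are tangent and span the $(n-1)$-dimensional tangent space.

```python
import numpy as np

# Illustrative manifold: the unit sphere S^{n-1}, with tangent space
# T_x = {v : x^T v = 0} and orthogonal projector P_x v = v - (x^T v) x.
rng = np.random.default_rng(1)
n = 6
x = rng.standard_normal(n)
x /= np.linalg.norm(x)

def proj_tangent(v):
    """Orthogonal projection of an ambient vector onto T_x."""
    return v - (x @ v) * x

# Ambient positive spanning basis (e_1, ..., e_n, -e_1, ..., -e_n),
# projected direction by direction onto the tangent space.
ambient_basis = np.vstack([np.eye(n), -np.eye(n)])
directions = np.array([proj_tangent(e) for e in ambient_basis])

tangency = np.abs(directions @ x)         # all entries should be ~0
rank = np.linalg.matrix_rank(directions)  # should equal dim T_x = n - 1
```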
\subsubsection{Largest eigenvalue, singular value, and top singular values problem}
In the largest eigenvalue problem \cite[Section 2.3]{boumal2020introduction}, given a symmetric matrix $A \in S(n,n) = \{A \in \ensuremath \mathbb{R}^{n \times n} \ | \ A = A^\top \}$, we are interested in computing \begin{equation} \label{p:leig}
\max_{x \in \mathbb{S}^{n - 1}} x^\top Ax \, . \end{equation} The largest singular value problem \cite[Section 2.3]{boumal2020introduction} can be formulated generalizing \eqref{p:leig}: given $A \in \ensuremath \mathbb{R}^{m \times h}$, we are interested in \begin{equation}\label{p:lsv}
\max_{x \in \mathbb{S}^{m - 1}, y \in \mathbb{S}^{h - 1}} x^\top Ay \, . \end{equation} Notice that the domains in \eqref{p:leig} and \eqref{p:lsv} are a sphere and the product of two spheres, respectively. \\ Finally, to compute the sum of the top $r$ singular values, as explained in \cite[Section 2.5]{boumal2020introduction} it suffices to solve \begin{equation}
\max_{X \in \tx{St}(m, r), Y \in \tx{St}(h, r)} \tx{tr}(X^\top AY) \, , \end{equation} where $\tx{St}(a,b)$ denotes the Stiefel manifold of matrices in $\ensuremath \mathbb{R}^{a \times b}$ with orthonormal columns. \subsubsection{Dictionary learning} The dictionary learning problem \cite[Section 2.4]{boumal2020introduction} can be formulated as \begin{equation}\label{p:mainOT}
\begin{array}{ll}
\min & \n{Y -DC} + \lambda \n{C}_1, \\
\quad \tx{s.t.} & D \in \ensuremath \mathbb{R}^{d \times h}, C\in \ensuremath \mathbb{R}^{h \times k}, \ \n{D_1} = ... = \n{D_h} = 1 \, , \\
\end{array} \end{equation} for a fixed $Y \in \ensuremath \mathbb{R}^{d \times k}$, $\lambda > 0$, $\n{\cdot}_1$ the $\ell_1$-norm, and $D_1, ..., D_h$ the columns of $D$. \\ In our implementation we smooth the objective by using a smoothed version $\n{\cdot}_{1, \varepsilon}$ of $\n{\cdot}_1$: \begin{equation}
\n{C}_{1, \varepsilon} = \sum_{i, j} \sqrt{C_{i, j}^2 + \varepsilon^2} \, . \end{equation} In our tests, we generated the solution $\bar{C}$ using the MATLAB \texttt{sprand} function with a density of $0.3$, and set the regularization parameter $\lambda$ to $0.01$ and $\varepsilon$ to $0.001$. \subsubsection{Synchronization of rotations} Let $\tx{SO}(d)$ be the special orthogonal group: \begin{equation}
\tx{SO}(d) = \{R \in \ensuremath \mathbb{R}^{d \times d} \ | \ R^\top R=I_d \tx{ and } \det(R) = 1\} \, . \end{equation} In the synchronization of rotations problem \cite[Section 2.6]{boumal2020introduction}, we need to find rotations $R_1, ..., R_h \in \tx{SO}(d)$ from noisy measurements $H_{ij}$ of $R_iR_j^{-1}$ for every $(i, j) \in E$, where $E$ is a subset of the ${h \choose 2}$ pairs of distinct elements in $[1:h]$. The objective is then \begin{equation}
\min_{\hat{R}_1, ..., \hat{R}_h \in \tx{SO}(d)} \sum_{(i,j) \in E} \n{\hat{R}_i - H_{ij} \hat{R}_j}^2 \, . \end{equation} In our tests, we considered the case $h = 2$ for simplicity. \subsubsection{Low-rank matrix completion} The low rank matrix completion problem \cite[Section 2.7]{boumal2020introduction} can be written, for a fixed matrix $M \in \ensuremath \mathbb{R}^{m \times h}$, as \begin{equation}\label{p:lrm}
\begin{array}{ll}
\min & \sum_{(i, j) \in \Omega} (X_{ij} - M_{ij})^2, \\
\quad \tx{s.t.} &X \in \ensuremath \mathbb{R}^{m \times h}, \, \tx{rank}(X) = r \, ,
\end{array} \end{equation} given a positive integer $r > 0$ and a subset of indices $\Omega \subset [1:m] \times [1:h]$. It can be proven that the optimization domain, that is, the matrices in $\ensuremath \mathbb{R}^{m \times h}$ with fixed rank $r$, can be given a Riemannian manifold structure (see, e.g., \cite{vandereycken2010riemannian}). \subsubsection{Gaussian mixture models} In the Gaussian mixture model problem \cite[Section 2.8]{boumal2020introduction}, we are interested in computing a maximum likelihood estimation for a given set of observations $x_1, ..., x_h$: \begin{equation}
\max_{\substack{\hat{u}_1,..., \hat{u}_K \in \ensuremath \mathbb{R}^d \\ \hat{\Sigma}_1, ..., \hat{\Sigma}_K \in \tx{Sym}(d)^+, \\ w \in \Delta^{K - 1}_+}} \sum_{i = 1}^h \log\left( \sum_{k = 1}^K w_k \frac{1}{\sqrt{(2\pi)^d \det(\hat{\Sigma}_k)}} e^{-\frac{(x_i-\hat{u}_k)^\top \hat{\Sigma}_k^{-1} (x_i-\hat{u}_k)}{2}} \right) \, , \end{equation} where $\tx{Sym}(d)^+$ is the manifold of positive definite matrices \begin{equation}
\tx{Sym}(d)^+ = \{X \in \ensuremath \mathbb{R}^{d \times d} \ | \ X = X^\top , \, X \succ 0 \} \end{equation} and $\Delta^{K - 1}_+$ is the subset of strictly positive elements of the simplex $\Delta^{K - 1}$, which can be given a manifold structure. In our tests, we considered the case $K = 2$ and the reformulation proposed in \cite{hosseini2015matrix}, which does not use the unconstrained variables $(\hat{u}_1,..., \hat{u}_K)$. \subsubsection{Procrustes problem} \label{p:proc} The Procrustes problem \cite{absil2009optimization} is the following linear regression problem, for fixed $A \in \ensuremath \mathbb{R}^{l \times n}$ and $B \in \ensuremath \mathbb{R}^{l \times p}$: \begin{equation}
\min_{X \in \mathcal M} \n{AX - B}_F^2 \, . \end{equation} In our tests, we assumed the variable $X \in \ensuremath \mathbb{R}^{n \times p}$ to be in the Stiefel manifold $\tx{St}(n, p)$, a choice leading to the so-called unbalanced orthogonal Procrustes problem.
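For the balanced case $p = n$ (where $X$ ranges over the orthogonal group rather than the Stiefel manifold used in our tests), the problem has a classical closed-form solution via the SVD: if $U S V^\top$ is the SVD of $A^\top B$, then $X^* = UV^\top$ is a global minimizer. The sketch below (illustrative only; the unbalanced case has no such closed form) verifies this numerically against random orthogonal candidates.

```python
import numpy as np

# Balanced orthogonal Procrustes: min ||A X - B||_F^2 over orthogonal X.
# Closed form: with U S V^T = svd(A^T B), the minimizer is X* = U V^T.
rng = np.random.default_rng(2)
l, n = 8, 4
A = rng.standard_normal((l, n))
B = rng.standard_normal((l, n))

U, _, Vt = np.linalg.svd(A.T @ B)
X_star = U @ Vt

def obj(X):
    return np.linalg.norm(A @ X - B, "fro") ** 2

# X* is a global minimizer, so it should beat any random orthogonal matrix.
candidates = []
for _ in range(20):
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    candidates.append(obj(Q))
```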
\subsection{Nonsmooth problems} \label{s:nsp} We report two nonsmooth problems taken from \cite{hosseini2019nonsmooth}. \subsubsection{Sparsest vector in a subspace} Given an orthonormal matrix $Q \in \ensuremath \mathbb{R}^{m \times n}$, the problem of finding the sparsest vector in the subspace generated by the columns of $Q$ can be relaxed as \begin{equation} \label{p:nsp}
\min_{x \in \mathbb{S}^{n - 1}} \n{Qx}_1 \, . \end{equation} \subsubsection{Nonsmooth low-rank matrix completion} In the nonsmooth version of the low-rank matrix completion problem \eqref{p:lrm}, the Euclidean norm is replaced with the $\ell_1$-norm, so that the objective is a sum of absolute values: \begin{equation}\label{p:lrmns}
\begin{array}{ll}
\min &\sum_{(i, j) \in \Omega} |X_{ij} - M_{ij}|, \\
\quad \tx{s.t.} &X \in \ensuremath \mathbb{R}^{m \times h}, \, \tx{rank}(X) = r \, .
\end{array} \end{equation} \subsection{Additional numerical results} \label{s:anr} We include here the performance and data profiles split by problem size.
\begin{figure}
% Panels (graphics omitted), one row per size class: data profiles and performance profiles at $\tau=10^{-1}$ and $\tau=10^{-3}$.
\caption{From top to bottom: results for small, medium and large instances in the smooth case.}
\label{fig:t1s}
\end{figure}
\begin{figure}
% Panels (graphics omitted), one row per size class: data profiles and performance profiles at $\tau=10^{-1}$ and $\tau=10^{-3}$.
\caption{From top to bottom: results for small, medium and large instances in the nonsmooth case.}
\label{fig:t3s}
\end{figure}
\end{document}
\begin{document}
\begin{titlepage}
\title{Voluntary Disclosure and Personalized Pricing\thanks{We thank Nima Haghpanah, Navin Kartik, Vijay Krishna, Doron Ravid, Nikhil Vellodi, Jidong Zhou, and various conferences and seminars audiences. We are grateful to Microsoft Research for hosting Ali and Vasserman. We thank Xiao Lin for excellent research assistance.}} \author{ S. Nageeb Ali\thanks{Pennsylvania State University. Email: \href{mailto:nageeb@psu.edu}{nageeb@psu.edu}.} \and Greg Lewis\thanks{Microsoft Research. Email: \href{glewis@microsoft.com}{glewis@microsoft.com}} \and Shoshana Vasserman\thanks{Stanford Graduate School of Business. Email: \href{mailto:svass@stanford.edu}{svass@stanford.edu}.} } \date{August 15, 2020} \maketitle
\begin{abstract} Central to privacy concerns is that firms may use consumer data to price discriminate. A common policy response is that consumers should be given control over which firms access their data and how. Since firms learn about a consumer's preferences based on the data seen and the consumer's disclosure choices, the equilibrium implications of consumer control are unclear. We study whether such measures improve consumer welfare in monopolistic and competitive markets. We find that consumer control can improve consumer welfare relative to both perfect price discrimination and no personalized pricing. First, consumers can use disclosure to amplify competitive forces. Second, consumers can disclose information to induce even a monopolist to lower prices. Whether consumer control improves welfare depends on the disclosure technology and market competitiveness. Simple disclosure technologies suffice in competitive markets. When facing a monopolist, a consumer needs partial disclosure possibilities to obtain any welfare gains.
\noindent JEL codes: D4, D8
\noindent Keywords: privacy, disclosure, segmentation, unraveling.
\end{abstract} \thispagestyle{empty}
\end{titlepage}
\setcounter{tocdepth}{2}
\tableofcontents
\thispagestyle{empty}
\setcounter{page}{1}
\section{Introduction}\label{Section-Introduction}
Consumer data is valuable information. Sellers want to know what consumers are interested in, which products they are aware of, and how much they might be willing to pay for them. Answers to these questions can be collected to an unprecedented degree through online activity. The resulting information can then be used to personalize both prices and product offerings by online sellers. But many consumer advocates have argued that consumer data is inherently \emph{private}, and that consumers should be able to choose whether or not to share it with companies. This tension has led regulators to pose a question that made little sense in the physical economy: to what degree are consumers in the online economy harmed by the release of their private information to sellers, and should consumers be able to control which information they share?
These questions are at the forefront of an ongoing international debate that has precipitated action in both the public and private sectors. Regulators concerned that personalized pricing will harm consumers have focused on the importance of consumer consent, passing wide-reaching legislation on data storage and tracking. A prominent example, the General Data Protection Regulation (GDPR) passed by the European Union, requires firms to obtain consent from consumers before collecting and processing their personal data.\footnote{Starting on January 1, 2020, California has enforced the California Consumer Privacy Act (CCPA), which has similar provisions to the GDPR.} In the United States, the Federal Trade Commission recommends that ``best practices include...giving consumers greater control over the collection and use of their personal data...'' \citep{FTC2012}. Meanwhile, private sector firms have responded to consumer demand for privacy by designing commercial products and brands that are specifically developed to limit tracking.\footnote{For example, Apple recently added a feature to its Safari browser that limits the ways in which its user's activities are tracked by third parties \citep{Hern2018}.}
Against this backdrop, we study the market implications of consumer consent and control. We investigate what happens when consumers fully control their data---not only whether they are tracked, but what specific information is disclosed to firms. We assume that the data is encoded in a verifiable form that consumers can partially or fully disclose to firms. In equilibrium, firms draw inferences about a consumer's preferences based on both what she shares and what she chooses not to share. Our motivating question is: \emph{when consumers fully control their information, are they hurt or helped by personalized pricing?}
We pose this question in an environment in which products cannot be personalized, and so there is no match value from data. A classical intuition might suggest that consumers would not benefit from being permitted to voluntarily disclose information. Because the market's \emph{equilibrium inferences} are based both on information that is disclosed and what is not being disclosed, giving consumers the ability to separate themselves may be self-defeating, as seen in the unraveling equilibria of \cite{grossman:81-JLE} and \cite{milgrom1981good}. Contrary to that intuition, we find that the combination of personalized pricing and consumer control is actually beneficial to consumers in both monopolistic and competitive markets. We construct equilibria of the consumers' disclosure game in which sharing data weakly increases consumer surplus for \emph{every} consumer type, relative to the benchmark of no personalized pricing.
Two key ideas drive these results. First, voluntary disclosure and personalized pricing together amplify competition between firms. Nearly indifferent consumers benefit from the ability to credibly communicate their flexibility, intensifying competition for their business, while consumers with a strong preference for the product of one particular firm can hide this preference. Although firms interpret this non-disclosure as a signal of strong preferences, the resulting prices are still lower than without disclosure. Second, even in the absence of competition, consumers can benefit from sending coarse signals that pool their valuations. These pools are constructed in such a way that a monopolist finds it optimal to sell to every type within each pool. Therefore every consumer within a pool pays the price of the consumer type that has the lowest valuation in that pool. Disclosures lead to price discounts that benefit every consumer type. The take-away is that offering consumers control---and possibly building tools that coordinate the sharing of data for consumer benefit---may make personalized pricing attractive \emph{even in the absence of better matching.}
\paragraph{A Preview:}Although we are interested in both monopolistic and competitive markets, we start with the problem of a monopolist choosing what price to charge a consumer whose valuation he does not know. We augment that classical problem with a ``verifiable'' disclosure game, as in \cite{grossman:81-JLE} and \cite{milgrom1981good}: before the monopolist sets his price, a consumer chooses what ``evidence'' or hard information to disclose about her valuation. We study both those disclosure environments in which evidence is \textbf{simple}, where a consumer can either speak ``the whole truth'' (reveal her type) or say nothing at all, as well as those in which evidence is \textbf{rich}, where a consumer can disclose facts about her type without having to reveal it completely.\footnote{We borrow this terminology from \cite{hagenbach2017}.} In our game, the consumer first observes her type and then chooses a message to disclose to the firm from the set of messages available to her. The firm then quotes a price, and the consumer chooses whether to buy the product at that price. The firm and the consumer cannot commit to their strategic choices.
Our first conclusion for the monopolistic environment (\Cref{Proposition-Certification}) is that simple evidence never benefits the consumer and potentially hurts her: there is no equilibrium in which \emph{any type} of the consumer is better off relative to a setting without personalized pricing. Moreover, there are equilibria in which all consumer types are worse off, such as an unraveling equilibrium in which the monopolist extracts all surplus.
Our second conclusion is that once the evidence structure is rich---where consumers can partially disclose information without revealing all of it---all consumer types can benefit from disclosing information. \Cref{Proposition-MonopolistConsumerSegmentation} constructs an equilibrium that improves the consumer surplus for almost all consumer types without reducing the surplus of any consumer type. In this equilibrium, all types are partitioned into segments on the basis of their willingness to pay, and trading is fully efficient. Because the consumer cannot commit to her disclosure strategy, every consumer type must find that her equilibrium message induces a weakly lower price than that induced by any other feasible message; our segmentation guarantees this property. Moreover, our segmentation ensures that for each segment, the monopolist's optimal price is the lowest willingness to pay in that segment. This greedy algorithm identifies a Pareto-improving equilibrium segmentation for every distribution of consumer types and identifies the equilibrium segmentation that maximizes ex ante consumer surplus for a class of distributions.
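The pricing logic behind such segmentations can be illustrated numerically. The sketch below is only an illustration, not the paper's construction: it assumes $v \sim \tx{Uniform}[0,1]$ and checks that within a dyadic pool $[2^{-(k+1)}, 2^{-k}]$ the monopolist's revenue-maximizing price is the pool's lowest valuation, so every type in the pool trades and pays that price.

```python
import numpy as np

# Within a pool [a, b] of uniform valuations with b <= 2a, pricing at the
# pool's lowest valuation a sells to every type and maximizes revenue.

def revenue(p, a, b):
    """Expected revenue at price p when v ~ Uniform[a, b]."""
    if p <= a:
        return p                      # every type in the pool buys
    if p >= b:
        return 0.0
    return p * (b - p) / (b - a)      # only types v >= p buy

prices = np.linspace(0.0, 1.0, 10001)
best_prices = []
for k in range(4):                    # pools [2^-(k+1), 2^-k]
    a, b = 2.0 ** -(k + 1), 2.0 ** -k
    revs = [revenue(p, a, b) for p in prices]
    best_prices.append(prices[int(np.argmax(revs))])
# best_prices should coincide with the pools' lowest valuations a.
```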
We show that stronger conclusions emerge in competitive markets. We consider general competitive markets with differentiated products, in which the consumer demands a single unit. The differentiation may be purely horizontal or have both horizontal and vertical components. The firms are uncertain of the consumer's preferences and the consumer can disclose information about her preferences to the firms, who then simultaneously make price offers to her. As before, we compare the outcomes when consumers can disclose information --- either via simple or rich evidence --- against a benchmark model in which there is no personalized pricing. Here, voluntary disclosure and personalized pricing are particularly beneficial to consumer surplus because of a new economic force: \emph{information can be selectively disclosed to amplify competition}.
More specifically, if the distribution of consumer preferences is symmetric and log-concave, then an equilibrium in the game with simple evidence (where the consumer's disclosure strategies are all-or-nothing) improves consumer welfare for every type relative to the no-personalized-pricing benchmark. Rich evidence allows one to do even better by using a greedy segmentation strategy similar to that used in the monopolist problem. But in a competitive market, rich evidence is unnecessary for these Pareto gains, and simple evidence suffices.
\paragraph{Implications:} The goal of our exercise is to use a simple model to inform the ongoing regulatory debate on consumer consent. From this stylized model, we draw two broad lessons for policy. First, voluntary disclosure can facilitate price concessions in both monopolistic and competitive markets. Thus, there is something missing in the view that tracking involves a tradeoff between the benefits of personalized products and the costs of personalized prices. Even without the benefits of product personalization, a consumer can benefit from personalized pricing when she has control. Disclosure generates discounts and amplifies competition.
The second lesson is that whether a track / do-not-track regime (as evoked by the GDPR) suffices to give consumers \emph{useful} control over their data depends on the competitiveness of the marketplace. In a monopolistic environment, richer forms of data sharing are necessary for consumers to benefit: consumer control must involve a choice not only of \textit{whether} to share information but also of \textit{how much} information to share. By contrast, in competitive markets, the choice of whether to share information is sufficient so long as consumers can share information with some firms and conceal it from others. Because both monopolistic and competitive environments feature multiple equilibria, some degree of coordination is needed to orchestrate which information to group together.
While online communications between consumers and sellers are not yet as sophisticated as is envisioned in our rich-evidence setup, an important element of the digital economy is its increasing ability to verify information \citep{goldfarbtucker}. These advances suggest that it may be technologically feasible for consumers to use intermediaries or platforms to verifiably disclose that their preferences or characteristics (e.g., income, age, address etc.) lie within a certain range without having to forfeit all of their information to online sellers. Regulatory agencies may seek to constrain the ways in which such intermediaries confer information between buyers and sellers, or to provide information verification services themselves. Our work provides suggestions for what an effective intermediary of this form might look like.
\paragraph{Relationship to Literature:} Our work belongs to the growing literatures on privacy, information, and their implications for markets; see \cite*{acquisti2016economics} and \cite{bergemann2019survey} for recent surveys. We view our paper as making two contributions. First, it formulates and investigates the economic implications of giving consumers control over their data through the lens of voluntary disclosure. Second, our analysis shows that whether consumers benefit from controlling their information depends both on the technology of disclosure and the level of market competition.
We combine classical models of monopolistic and competitive pricing with the now classical study of verifiable disclosure. Unlike the first analyses of verifiable disclosure \citep{grossman:81-JLE,milgrom1981good}, unraveling is not the unique equilibrium outcome of the markets that we study. An observation central to our results is that a firm's optimal price need not be monotone in its beliefs about the consumer's willingness to pay. This observation permits us to pool high and low types without giving the low type an incentive to separate.\footnote{Prior analyses have highlighted other reasons for why markets may not unravel, in particular (i) uncertainty about whether the sender has evidence \citep{dye1985disclosure,shin1994burden}, (ii) disclosure costs \citep{jovanovic1982truthful,verrecchia1983discretionary}, or (iii) the possibility for receivers to be naive \citep{hagenbach2017}.}
For the case of a monopolistic seller, two papers in this verifiable disclosure literature touch upon closely related issues. \cite{sher2015price} study monopolistic price discrimination in which the seller commits to a schedule of evidence-contingent prices. By contrast, we assume that sellers cannot commit and instead set prices that best respond to the evidence that has been presented. In independent and prior work, \cite{pram2020} studies when a consumer can disclose rich evidence to obtain Pareto gains from a monopolistic seller.\footnote{We became aware of his work subsequent to our first draft.} His paper and ours share some overlap in studying the rich-evidence case of a monopolist facing a consumer that has private values, but we go in different directions from that benchmark. Pram's interest is in adverse-selection settings where the payoffs for the monopolist and the buyer depend on the buyer's type, and he elegantly characterizes necessary and sufficient conditions for the existence of a Pareto-improving equilibrium.\footnote{Other papers that study the role of certification in adverse selection are \cite{stahl2017certification} and \cite*{glode2018voluntary}.} By contrast, our motivation is to understand the interaction of disclosure technologies and market competition, which is why we study both simple and rich evidence in both monopolistic and competitive markets. Our message of how simple evidence amplifies competitive forces but richer forms of evidence are needed with a monopolist does not feature in his work. We view our papers as highly complementary, shedding light on different aspects of consumer control.
This verifiable-disclosure approach to consumer control complements two other ways for consumers to share information in markets. One way is through information design, where an intermediary that knows the consumer's type commits to a segmentation strategy. With a monopolistic seller, that intermediary can achieve payoffs characterized by \cite*{bergemann2015limits}. The other way is for the consumer to use cheap talk. \cite{hidir2018personalization} show that if the product can be customized to the consumer's tastes, cheap talk can improve matching without resulting in the monopolist capturing the entire surplus. Verifiable disclosure offers a potentially useful middle ground between information design and cheap talk. It is relevant when a consumer can use an intermediary to verify information about her type in her communication to firms without forfeiting control over the disclosure of that information. As the evolving digital economy balances an increasing ability to verify information cryptographically with public pressure for individual privacy, we believe it to be useful to understand the possible implications when consumers can communicate verifiable information to firms.
The work discussed above involves a single seller but much of our interest is in how voluntary disclosure amplifies competition in markets with differentiated goods. Related to this idea is the innovative work of \cite{thissevives1988}. Using a model of Bertrand duopoly, they study whether firms choose to personalize prices. In their model, the consumer's type is commonly known and they show that the unique equilibrium involves personalized pricing even though the firms' joint profits would be higher if they could commit to uniform pricing. The key distinction with our work is that all of the action in their model --- and in the subsequent literature --- is on the side of the firms: taking consumers as passive, these papers study whether firms personalize prices when they know (or can learn) consumer types.\footnote{See \cite{armstrong_2006} for a survey. We thank Jidong Zhou for drawing our attention to this work. }
By contrast, in our model, the consumer actively chooses whether to disclose information and it is her voluntary disclosure that facilitates personalized pricing. Moreover, the ability to pool with other types is necessary for every consumer type to benefit from personalized pricing; otherwise, extreme types are worse off from personalized pricing. Thus, the welfare gains that we study would not emerge in a model in which consumer types are commonly known.
Several papers share our interest in the role of information in competitive markets with differentiated products. \cite{elliott2019market} show how an information-designer can segment the market so that consumers are allocated efficiently while nevertheless guaranteeing that consumers obtain no surplus. \cite{armstrongzhou2019} study firm-optimal and consumer-optimal information structures for a consumer that does not know her tastes, and show that the consumer-optimal signal may involve learning a little so as to amplify price competition. Thinking about network information, \cite{fainmesser2015pricing,fainmesser2019pricing} study monopolistic and competitive price discrimination based on how consumers influence each other.
Less closely related, a prior literature studies how a consumer may distort her behavior in dynamic settings if firms draw inferences about her tastes from her past choices. This literature has investigated when firms prefer to commit to not personalize prices, when consumers would like to remain anonymous, and how consumers may distort their actions \citep*{taylor2004consumer,villasboas:04,acquisti2005conditioning,calzolari2006optimality,conitzer2012hide}. \cite{bonatti/cisternas} study the welfare properties of aggregating consumers' past purchasing histories into scores. Our analysis complements this agenda by studying how a consumer fares from directly controlling the flow of information rather than distorting her behavior to influence the market's perception of her tastes.
In thinking about consumer control, we abstract from issues raised in the study of data markets. Several recent papers \citep*{choi2019privacy,acemoglu2019too,bergemann2019economics} model how the data of some consumers may be predictive of others but each consumer may not internalize this externality, inducing excessive data-sharing. \cite{liang2020data} study how data linkages across individuals affect their incentives for exerting effort. \cite{jones2019nonrivalry} show that, because data is non-rival, there are social gains from multiple firms using the same data simultaneously, and therefore it is better to let consumers own and trade data. \cite*{fainmesser2020digital} study issues of data collection and data protection, and tradeoffs thereof.
\paragraph{Outline:} Using examples, we illustrate our main results for both monopolistic and competitive markets in \Cref{Section-Example}. \Cref{Section-MonopolistModel} offers the general analysis for the monopolist setting. In \Cref{Section-Competition}, we analyze competitive markets with product differentiation. Therein, we study both Bertrand duopoly with horizontal differentiation as well as a general model with $n\geq 2$ firms and general product differentiation. \Cref{Section-Conclusion} concludes.
\section{Two Examples}\label{Section-Example}
\subsection{A Monopolist}\label{Section-ExampleMonopolist} A monopolist (``he'') sells a good to a single consumer (``she''), who demands a single unit. The consumer's value for that good is $v$, which is drawn uniformly from $[0,1]$. If the consumer purchases the good from the monopolist at price $p$, her payoff is $v-p$ and the monopolist's payoff is $p$; otherwise, each party receives a payoff of $0$. The consumer knows her valuation for the good and the monopolist does not. In this setting, and without any disclosure, the monopolist optimally posts a uniform price of $\frac{1}{2}$, which induces an ex interim consumer surplus of $\max\{v-\frac{1}{2},0\}$, and a producer surplus of $\frac{1}{4}$.
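The benchmark numbers in this example are easy to verify numerically. The sketch below (purely illustrative) recovers the optimal uniform price $1/2$ from the revenue function $p(1-p)$, the producer surplus $1/4$, and the ex ante consumer surplus $\mathbb{E}[\max\{v-\tfrac{1}{2},0\}] = 1/8$.

```python
import numpy as np

# With v ~ Uniform[0, 1], the monopolist's revenue at uniform price p is
# p * P(v >= p) = p (1 - p), maximized at p = 1/2 with value 1/4.
prices = np.linspace(0.0, 1.0, 100001)
rev = prices * (1.0 - prices)
p_star = prices[int(np.argmax(rev))]   # optimal uniform price
ps = rev.max()                         # producer surplus

# Ex ante consumer surplus E[max(v - 1/2, 0)] = 1/8, approximated by
# averaging over a fine uniform grid of valuations.
cs = np.maximum(prices - 0.5, 0.0).mean()
```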
We augment this standard pricing problem with voluntary disclosure on the part of the consumer. After observing her value, the consumer chooses a message $m$ from the set of feasible messages for her. The set of \emph{all} feasible messages is $\mathcal M\equiv \left\{[a,b]:0\leq a \leq b \leq 1\right\}$, and we interpret a message $[a,b]$ as \emph{``My type is in the set $[a,b]$.''} When a consumer's type is $v$, the set of messages that she can send is $M(v)\subseteq \mathcal M$. The evidence structure is represented by the correspondence $M:[0,1]\rightrightarrows \mathcal M$.
Here is the timeline for the game: first, the consumer observes her type $v$ and chooses a message $m$ from $M(v)$. The monopolist then observes the message and chooses a price $p\geq 0$. The consumer then chooses whether to purchase the good. Each party behaves sequentially rationally: we study Perfect Bayesian Equilibria (henceforth PBE) of this game. Our interest is in the implications of this model for simple and rich evidence structures, described below.
\paragraph{Simple evidence:} An evidence structure is \textbf{simple} if for every $v$, $M(v)=\{\{v\},[0,1]\}$. In other words, each type $v$ can either fully reveal her type using the message $m=\{v\}$ (which is unavailable to every other type), or not disclose anything at all, using the message $m=[0,1]$ (which is available to every type). Such an evidence structure offers a stylized model for the dichotomy between ``track'' and ``do-not-track'': a consumer who opts into tracking will have all of her digital footprint observed by the seller, whereas do-not-track obscures it entirely.
In this game, there exists an equilibrium in which every type $v$ fully reveals itself using the message $m=\{v\}$, and the monopolist extracts all surplus on the equilibrium path. Off-path, if the consumer sends the non-disclosure message, $m=[0,1]$, the monopolist believes that $v=1$ with probability $1$, and charges a price of $1$. In this equilibrium, all consumers are hurt by the possibility of voluntary disclosure and personalized pricing but the monopolist benefits from it.
But this is not the only equilibrium: there is also one in which every type sends the {non-disclosure} message $m=[0,1]$, and the monopolist charges a price of $\frac{1}{2}$. No consumer type wishes to deviate because revealing her true type results in a payoff of $0$. Here, both consumer and producer surplus are exactly as in the world without personalized pricing. In fact, there are uncountably many equilibria. But \emph{none} of them improve upon the benchmark of no-personalized-pricing from the perspective of \emph{any} consumer type. \begin{observation}\label{Observation-Simple}
With simple evidence, across all equilibria, the consumer's interim payoff is no more than her payoff without personalized pricing, namely $\max \{v-1/2,0\}$. \end{observation}
\begin{figure}
\caption{$(a)$ shows that any disclosing type that is strictly higher than $p_{ND}$ has a profitable deviation $\Rightarrow$ the set of non-disclosing types includes $(p_{ND},1]$. $(b)$ and $(c)$ show different equilibria where the shaded region is the set of non-disclosing types. Across equilibria, $p_{ND}\geq 1/2$.}
\label{Figure-SimpleEvidence}
\end{figure}
We illustrate the argument in \autoref{Figure-SimpleEvidence}. In an equilibrium where a positive mass of types sends the non-disclosure message, suppose that the monopolist charges $p_{ND}$ when he receives this message. Any type $v$ that is strictly higher than $p_{ND}$ must send the non-disclosure message because her other option---revealing herself---induces a price that extracts all of her surplus (this property is shown in \autoref{Figure-SimpleEvidence}(a)). Hence, the set of types that send the non-disclosure message includes $(p_{ND},1]$. There are various configurations of disclosure and non-disclosure segments that are compatible with this requirement, as shown in (b) and (c), but across them all, the monopolist's optimal non-disclosure price $p_{ND}$ never dips below $\frac{1}{2}$, the price charged without personalized pricing.
\paragraph{Rich evidence:} \Cref{Observation-Simple} illustrates that simple evidence structures and personalized pricing do not benefit the consumer. Now we see how the consumer can do better if she can use a rich evidence structure. An evidence structure is \textbf{rich} if for every $v$, $M(v)=\{m\in\mathcal M:v\in m\}$; in other words, a type $v$ can send any interval that contains $v$. With a rich evidence structure, all the equilibrium outcomes described above can still be supported using this richer language. But new possibilities emerge, some of which dominate the payoffs from no-personalized-pricing.
We describe an equilibrium that strictly improves consumer surplus for a positive measure of consumer types without making any type worse off. Inspired by Zeno's Paradox,\footnote{Zeno's Paradox is summarized by Aristotle as \emph{``...that which is in locomotion must arrive at the half-way stage before it arrives at the goal....''} See \href{https://plato.stanford.edu/entries/paradox-zeno/}{https://plato.stanford.edu/entries/paradox-zeno/}.} consider the countable grid $\left\{1,\frac{1}{2},\frac{1}{4},\ldots\right\}\cup\left\{0\right\}$. We denote the $(k+1)^{\text{th}}$ element of this ordered list, namely $2^{-k}$, by $a_{k}$, and define the set $m_k\equiv [a_{k+1},a_k]$. We use this partition to construct an equilibrium segmentation that improves consumer surplus.
\begin{observation}\label{Observation-Rich} With rich evidence, there exists an equilibrium that generates Zeno's Partition: a consumer's reporting strategy is
\begin{align*} m(v) = \begin{cases}
[a_{k+1},a_k]\text{ where } a_{k+1}<v\leq a_k & \text{if }v>0, \\
\{0\} & \text{if }v=0.
\end{cases}\end{align*}
When the monopolist receives message $m_k$, he charges $a_{k+1}$ thereby selling to that entire segment. Relative to no-personalized-pricing, this equilibrium strictly improves consumer surplus for all $v$ in $\left(0,1/2\right]$, and leaves consumer surplus unchanged for all other types. \end{observation}
\begin{figure}
\caption{(a) illustrates Zeno's Partition. (b) illustrates prices and payoffs: for each consumer-type $v$, the step-function shows the equilibrium price that is charged and the dashed $45^{\circ}$ line shows the payoff from consumption. The shaded region illustrates the consumer surplus achieved by Zeno's Partition.}
\label{Figure-Zeno}
\end{figure}
In this equilibrium, the highest market segment is composed of types in $\left(\frac{1}{2},1\right]$, all of which send the message $m_0\equiv \left[\frac{1}{2},1\right]$; the next highest market segment comprises types in $\left(\frac{1}{4},\frac{1}{2}\right]$, all of which send the message $m_1\equiv \left[\frac{1}{4},\frac{1}{2}\right]$, and so on and so forth. We depict this partition in \autoref{Figure-Zeno}. Once the monopolist receives any message corresponding to each market segment, he believes that the consumer's value is uniformly distributed on it. His optimal strategy then is to price at the bottom of the segment. Therefore, trade occurs with probability $1$, with each higher consumer type capturing some surplus.
This equilibrium generates an ex ante consumer surplus of $\frac{1}{6}$ and producer surplus of $\frac{1}{3}$, each of which is higher than what is achieved without personalized pricing. All types in $(1/2,1]$ receive the same price that they would have if personalized pricing were infeasible, and almost every other type is strictly better off. Thus, personalized pricing improves the monopolist's profit and strictly improves the surplus of some consumer types without making any type worse off.
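The surplus figures can be verified numerically; the following is our own sketch, truncating the infinite partition at a large index:

```python
# Numeric check (ours) of the surplus figures for Zeno's Partition:
# the segment (2^-(k+1), 2^-k] is priced at its bottom, 2^-(k+1).
ps = cs = 0.0
for k in range(60):                      # truncate the infinite partition
    hi, lo = 2.0 ** (-k), 2.0 ** (-(k + 1))
    ps += lo * (hi - lo)                 # producer surplus: price * segment mass
    cs += (hi - lo) ** 2 / 2             # integral of (v - lo) over (lo, hi]

print(round(ps, 6), round(cs, 6))        # 0.333333 0.166667
```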
How is Zeno's Partition supportable as an equilibrium? First, let us see what deters consumers from using messages that are not in Zeno's Partition. If the monopolist sees such a message, his off-path beliefs ascribe probability $1$ to the highest type that could send such a message, which leads him to charge a price equal to that type's valuation. Such beliefs ensure that these off-path messages are not profitable deviations. How about deviations to other on-path messages? For every $v$ in $(a_{k+1},a_k)$, there exists only one on-path message that she can send, and for every $v$ on the boundaries of such messages, our strategy profile prescribes the message that results in the lower price. Thus, there are no profitable deviations for any consumer type. Finally, by construction, the monopolist is always setting a price that is optimal given his beliefs.
It is useful to understand why we do not see unraveling. In many disclosure models, the sender strictly prefers to induce the receiver to have higher (or lower) beliefs in the sense of first-order stochastic dominance. Unraveling emerges as the unique equilibrium outcome as extreme types of the sender have a motive to separate from pools. By contrast, in our setting, there exist many pairs of beliefs $(\mu,\hat\mu)$ that are ranked by FOSD such that the sender is indifferent between inducing $\mu$ and $\hat\mu$ because they result in the receiver taking the same action. For example, the monopolist charges the same price when he ascribes probability $1$ to type $\{1/2\}$ as he does when his beliefs are $U[1/2,1]$. Thus, higher types may be pooled with a lower type without giving that lower type an incentive to separate.
Zeno's Partition isn't the only equilibrium of this example, but in this case, it maximizes ex ante consumer surplus. To demonstrate why this is the case, we prove in \Cref{Section-MonopolistOptimalSegmentation} that for a one-dimensional type space, for every equilibrium, there exists an interim payoff-equivalent equilibrium in which trade occurs with probability $1$ and types segment into partitions. Thus, it is without loss of generality to consider only those equilibria that are fully efficient and partitional. Now we can use a heuristic argument to show why Zeno's Partition is optimal when types are uniformly distributed.
Because consumers always purchase in a fully efficient equilibrium, maximizing consumer surplus is equivalent to minimizing the average price. For a monopolist to price at the bottom of an interval $[a,b]$ when $v$ is uniformly distributed between $a$ and $b$, it must be that $a\geq b/2$. Suppose that the consumer-optimal equilibrium involves types from $[\lambda,1]$ forming the highest segment; by the logic of the previous sentence, $\lambda$ is at least $1/2$. The monopolist charges a price of $\lambda$ to that segment, and thus, its contribution to the ex ante expected price is $(1-\lambda)\lambda$. The remaining population, $[0,\lambda]$, amounts to a $\lambda$-rescaling of the original problem, and so the consumer-optimal equilibrium after removing that highest segment involves replicating the same segmentation on a smaller scale. Thus, the consumer-optimal segmentation can be framed as a recursive problem where ${P}(\bar{v})$ is the lowest expected price generated by a partition when types are uniformly distributed on the interval $[0,\bar{v}]$:
\begin{align*}
{P}(1) &=\min_{\lambda\geq \frac{1}{2}}\, (1-\lambda)\lambda +\lambda {P}(\lambda)=\min_{\lambda\geq \frac{1}{2}}\, (1-\lambda)\lambda +\lambda^2 {P}(1)=\min_{\lambda\geq \frac{1}{2}}\, \frac{(1-\lambda)\lambda}{1-\lambda^2}=\frac{1}{3}, \end{align*} where the first equality frames the problem recursively, the second is from ${P}(\lambda)$ being a re-scaled version of the original problem, and the remainder is algebra. Because Zeno's Partition induces an expected price of $1/3$, no segmentation can generate higher consumer surplus.
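The final minimization can also be checked numerically; a small sketch of ours, searching over $\lambda\in[1/2,1)$:

```python
# Numeric sketch (ours) of the recursive problem: minimize
# (1-l)*l/(1-l^2) = l/(1+l) over l in [1/2, 1); the objective is
# increasing, so the minimum is at l = 1/2, giving P(1) = 1/3.
grid = [0.5 + i * 1e-4 for i in range(4999)]        # l in [0.5, 1)
objective = lambda l: (1 - l) * l / (1 - l ** 2)
l_star = min(grid, key=objective)
print(round(l_star, 4), round(objective(l_star), 4))   # 0.5 0.3333
```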
\subsection{Bertrand Competition with Horizontal Differentiation}\label{Section-ExampleCompetition}
The analysis in \Cref{Section-ExampleMonopolist} shows that in a monopolistic market, a consumer never benefits from using simple evidence but can obtain Pareto gains if the evidence structure is rich. Here we show that if there is market competition, even simple evidence suffices to generate Pareto gains in consumer surplus. We illustrate this effect in an example of Bertrand competition with horizontal differentiation.
Two firms, $L$ and $R$, compete to sell to a consumer who must purchase one unit of the good from either firm. Firm $L$ is located at the point $\ell_L=-1$, firm $R$ at the point $\ell_R=1$. The consumer's location, $\ell$, is drawn uniformly from $[-1,1]$. The consumer knows her location but the firms do not. If the consumer purchases the good from firm $i$, then she pays the price $p_i$ that is set by firm $i$, as well as a linear transportation cost $|\ell_i-\ell|$. If there were no voluntary disclosure, then each firm would set a price of $2$ in equilibrium. The consumer would then buy the good from the closer firm and incur a total expenditure of $2+\min\{1+\ell,1-\ell\}$.
Let us describe what can happen with simple evidence. Now the consumer at location $\ell$ can disclose one of two messages to each firm privately before prices are set: either she can send a message of $\{\ell\}$, which fully reveals her location, or a message of $[-1,1]$, which fully conceals it. \Cref{Figure-ExampleSimpleCompetition} illustrates an equilibrium where the consumer selectively discloses evidence to amplify price competition and reduce her total expenditure. \begin{figure}
\caption{The figure shows disclosure strategies for every type. Centrally located types fully reveal location to both firms. Extreme types reveal location only to the distant firm and conceal it from the closer firm.}
\label{Figure-ExampleSimpleCompetition}
\end{figure}
In this construction, the consumer conceals her location when she has a strong preference for the product of one of the firms. Thus, if her location is in $[1/2,1]$, she reveals her location to the distant firm $L$ but conceals it from $R$; she plays a symmetric strategy if her location is in $[-1,-1/2]$. Only if her location is in $(-1/2,1/2)$ does she reveal her location to both firms.
Why does this disclosure strategy lower market prices? Suppose that the consumer is located in $[1/2,1]$, and hence she discloses her location to firm $L$ but not to firm $R$. Firm $R$ infers from this non-disclosure that the consumer must be located in $[1/2,1]$, but does not know where in that interval. Firm $L$, on the other hand, knows the consumer's location, and to poach the consumer's business, it has to significantly reduce its price. In equilibrium, this powerful force pushes firm $L$ to lower its price all the way to its marginal cost of $0$. Anticipating this outside option for the consumer, firm $R$ solves for its optimal local monopoly price, which is $1$. At these prices, the consumer purchases from firm $R$, and therefore has a total expenditure of $2-\ell$, which is strictly smaller than the total expenditure without personalized pricing.
What about all those interior types in $(-1/2,1/2)$ that are fully revealing themselves to each firm? All of these interior types are unraveling to both firms, but in this competitive market, this is an advantage: each obtains a price of $0$ from the distant firm and a price from the closer firm that makes it just indifferent. In equilibrium, each incurs a total expenditure of $\max\{1+\ell,1-\ell\}$, which once again is below that without personalized pricing.
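The expenditure comparisons above can be verified directly; a short numeric sketch of ours, using symmetry to restrict attention to $\ell\in[0,1]$:

```python
# Numeric check (our own sketch) of equilibrium expenditures in the
# Hotelling example, for l in [0, 1] (the case l < 0 is symmetric).
# Benchmark (no disclosure): price 2 plus transport to the closer firm R.
def eq_spend(l):
    if l >= 0.5:
        # conceal from the closer firm R: pay R's price 1 plus transport 1 - l
        return 2 - l
    # fully reveal to both firms: distant firm L prices at 0, so the closer
    # firm R charges 2l, leaving total expenditure max{1 + l, 1 - l} = 1 + l
    return 1 + l

for l in [0.0, 0.25, 0.5, 0.75, 1.0]:
    assert eq_spend(l) < 3 - l   # strictly below the benchmark 2 + (1 - l)
print("all sampled types spend strictly less than the benchmark")
```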
We see that consumer control, even through simple evidence, can be a powerful force for consumer gains. The idea is simple: by disclosing her preferences to distant firms, the consumer motivates them to lower prices. One's home firm then knows that even though the consumer has strong preferences for its product, she can purchase from another firm at a very low price. As we show in \Cref{Section-Competition}, this force manifests more broadly: it applies when there are more than two firms, and for both general distributions and general forms of product differentiation. We also show there that if the consumer has access to rich evidence, she can improve upon this segmentation using a Zeno-like construction just as we did in the monopolist example.
\section{Voluntary Disclosure to a Monopolist}\label{Section-MonopolistModel} \subsection{Environment}\label{Subsection-Environment} \paragraph{The Pricing Problem.} A monopolist (``he'') sells a good to a single consumer (``she''), who demands a single unit. The consumer's type, denoted by $t$, is drawn according to a measure $\mu$ whose support is ${T}$. The type space $T$ is a convex and compact subset of a finite-dimensional Euclidean space, $\Re^k$. Each of the $k$ dimensions of a consumer's type reflects an attribute that affects her valuation for the good according to $v: T\rightarrow \Re$. Payoffs are quasilinear: if the consumer purchases the good from the monopolist at price $p$ when her type is $t$, her payoff is $v(t)-p$ and the monopolist's payoff is $p$; otherwise, each player receives a payoff of $0$. We denote by $F$ the induced CDF over valuations; in other words, $F(\tilde v)\equiv\mu(\{t\in T:v(t)\leq \tilde{v}\})$. We denote by $\underline{v}$ and $\overline{v}$ the lowest and highest valuations in the support. We simplify exposition by assuming that $F$ is continuous and $F(\underline{v})=0$.\footnote{An equivalent assumption is that $\mu(\{t\in T:v({t})=v'\})=0$ for every $v'$ in the range of $v(\cdot)$. Conditions that guarantee this property are that $\mu$ is absolutely continuous with respect to the Lebesgue measure, and $v(\cdot)$ is strictly monotone in each dimension. }
Throughout our analysis, we assume that $v(t)$ is non-negative for every type $t\in T$ and is quasiconvex.
A special leading case is where each dimension of $t$ is a consumer characteristic (e.g. income) and $v(t)$ is linear; in this case, $v(t)=\sum_{i=1}^k \beta_i t_i$ where $\beta_i$ is the coefficient on characteristic $i$. We order types based on their valuations: we say that $t\succeq t'$ if $v(t)\geq v(t')$, and we define $\succ$ and $\sim$ analogously. When $t\succeq t'$, we refer to $t$ as being a \emph{higher} type.
Were communication infeasible, this pricing problem would have a simple solution: the monopolist sets a price $p$ that maximizes $p(1-F(p))$. Let $p^*$ denote the (lowest) optimal price for the monopolist. The consumer's interim payoff is then no more than $\max\{v(t)-p^*,0\}$.
\paragraph{The Disclosure Game.} We append a disclosure game to this pricing problem. After observing her type, the consumer chooses a message $m$ from the set of messages available to her. The set of \emph{all} feasible messages is $\mathcal M^{\mathcal F}\equiv \left\{M\subseteq T:M\text{ is closed and convex}\right\}$, and we interpret a message $M$ in $\mathcal M^{\mathcal F}$ as meaning \emph{``My type is in the set $M$.''} When a consumer's type is $t$, the set of messages that she can send is $\mathcal M(t)\subseteq \mathcal M^{\mathcal F}$. We focus attention on the following two different forms of disclosure: \begin{itemize}[noitemsep]
\item the evidence structure is \textbf{simple} if for every $t$, $\mathcal M(t)=\{ T,\{t\}\}$.
\item the evidence structure is \textbf{rich} if for every $t$, $\mathcal M(t)=\{M \in \mathcal M^{\mathcal F}:t\in M\}$. \end{itemize} In both simple and rich evidence structures, the consumer has access to hard information about her type. In a simple evidence structure, the consumer can either disclose a ``certificate'' that fully reveals her type or say nothing at all. By contrast, in a rich evidence structure, the consumer can verifiably disclose true statements about her type without being compelled to reveal everything. The assumption that messages are convex sets implies that if types $t$ and $t'$ can disclose some common evidence, then so can any intermediate type $t'' = \rho t+(1-\rho)t'$ (for $\rho\in (0,1)$).
\paragraph{Timeline and Equilibrium Concept.} First, the consumer observes her type $t$ and chooses a message $M$ from $\mathcal M(t)$. The monopolist then observes the message and chooses a price $p\geq 0$. The consumer then chooses whether to purchase the good. We study Perfect Bayesian Equilibria (henceforth PBE) of this game. In other words, the seller's belief system is updated via Bayes Rule whenever possible, and both the consumer and the seller behave sequentially rationally.\footnote{If the seller receives message $M$, his beliefs (both on- and off-path) must have a support that is contained in $\{t\in T:M\in \mathcal M(t)\}$.} For expository convenience, we assume that a consumer always breaks her indifference in favor of purchasing the good.
\subsection{Simple Evidence Does Not Help Consumers}\label{Section-SimpleEvidence}
Here, we show that when trading with a monopolist, if the evidence structure is simple, consumers do not benefit from personalized pricing relative to a benchmark in which personalized pricing is impossible.
Recall that the interim payoff of each type $t$ without personalized pricing is $\max\{v(t)-p^*,0\}$ where $p^*$ is the monopolist's optimal price. Generalizing the argument of \Cref{Section-Example}, we show that there are equilibria with simple evidence that make all consumer types worse off and there is no equilibrium in which \emph{any} type is strictly better off.
That consumers may be worse off is easily illustrated through the equilibrium in which the consumer fully reveals her type with probability $1$ and the monopolist charges a price of $v(t)$. Off-path, the seller's beliefs are \emph{maximally skeptical} in that he believes that the consumer's type has the highest possible valuation with probability $1$. The monopolist thus extracts all surplus, and so consumers are clearly worse off.
But there are also partially revealing equilibria in which only types below a cutoff reveal themselves. For example, there exists an equilibrium in which all types $t$ with $v(t)\geq p^*$ send the message $T$ and only types with $v(t)<p^*$ reveal themselves. This equilibrium results in payoffs for the consumer identical to those without personalized pricing; it is the consumer-optimal equilibrium.
\begin{proposition}\label{Proposition-Certification} With simple evidence, across all equilibria, the consumer's interim payoff is bounded above by $\max\{v(t)-p^*,0\}$. \end{proposition}
Thus, the consumer gains {nothing}, \emph{ex ante} and \emph{ex interim}, from the ability to disclose her type using simple evidence. If one takes the model of simple evidence as a stylized representation of track / do-not-track regulations, our analysis implies that this form of consumer protection does not benefit consumers in a monopolistic environment relative to a benchmark that prohibits personalized pricing. Instead, richer forms of verifiable disclosure are needed.
\subsection{A Pareto-Improving Segmentation with Rich Evidence}\label{Section-GreedyPareto}
We develop a segmentation that generalizes that of \Cref{Section-Example}.
Each segment is constructed so that the monopolist's optimal price sells to all consumer types in that segment. Consumers would profit if they could deviate ``downwards'' to a lower segment; our construction guarantees that this is impossible. Our construction is ``greedy'' insofar as we start with the highest segment and make each as large as possible without accounting for its effect on subsequent segments.
To define the segmentation strategy, consider a sequence of prices $\{p_s\}_{s=0,1,2,\ldots,S}$ where $S\leq\infty$, $p_0=\overline{v}$, and for every $s$ where $p_{s-1}>\underline{v}$, $p_s$ is the (lowest) maximizer of $p\,(F(p_{s-1})-F(p))$ over $p$. If $p_{s'}=\underline{v}$ for some $s'$, then we halt the algorithm and set $S=s'$; otherwise, $S=\infty$ and $p_\infty=\underline{v}$. We use these prices to construct sets of types, $(M_s)_{s=1,2,\ldots,S}\cup M_\infty$: \begin{align*}
M_s&\equiv\{t\in T:v(t)\leq p_{s-1}\},\\
M_\infty&\equiv\{t\in T:v(t)=\underline{v}\}. \end{align*} Because $v$ is quasiconvex and $T$ is convex, $M_s$ is a convex set for every $s=1,2,\ldots,S$, and therefore $M_s$ is a feasible message. These messages segment the market. \begin{proposition}\label{Proposition-MonopolistConsumerSegmentation}
With rich evidence, there exists a Pareto-improving equilibrium in which a consumer's reporting strategy is
\begin{align*} M^*(t) = \begin{cases}
M_s & \text{if }p_{s}<v(t)\leq p_{s-1}, \\
M_\infty & \text{if }t\in M_\infty.
\end{cases}\end{align*} When receiving an equilibrium disclosure of the form $M_s$, the seller charges a price of $p_s$ and sells to all types that send that message. \end{proposition}
The segmentation described above generalizes the ``Zeno Partition'' constructed in \Cref{Section-Example}. The highest market segment consists of those consumer types whose valuations strictly exceed the monopolist's optimal posted price, $p_1=p^*$; these are the types who send message $M_1$. The next highest market segment comprises those whose valuations exceed the optimal posted price, $p_2$, for the \emph{truncated distribution} that excludes the highest market segment; they send message $M_2$. This iterative procedure continues either indefinitely (if $p_s>\underline{v}$ for every $s$) or halts once the monopolist has no incentive to exclude any type in the truncated distribution from trading.
Notice that in this segmentation, disclosures aren't taken at face value. Instead, the monopolist infers from receiving a message $M_s$ that the consumer would have preferred to send message $M_{s+1}$ but couldn't, and so her valuation must be in $(p_s,p_{s-1}]$. Notice also that the market segmentation is constructed so that given these beliefs about the consumer's valuation, the monopolist has no incentive to charge a price that excludes any type. In fact, this constraint for the monopolist binds in our \emph{greedy segmentation} in that if types below $p_s$ were included, the seller's optimal price would exclude those types.
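The greedy price sequence lends itself to a simple computation; below is our own sketch via grid search, using the uniform distribution on $[0,1]$ as an illustration (which recovers Zeno's prices $p_s=2^{-s}$):

```python
# A sketch (ours) of the greedy price sequence: p_0 = v_bar and p_s
# maximizes p * (F(p_{s-1}) - F(p)). With F uniform on [0, 1], the
# maximizer is p_{s-1}/2, recovering Zeno's Partition.
def greedy_prices(F, v_bar=1.0, steps=5, grid_n=100000):
    prices = [v_bar]
    for _ in range(steps):
        prev = prices[-1]
        grid = [prev * i / grid_n for i in range(1, grid_n + 1)]
        prices.append(max(grid, key=lambda p: p * (F(prev) - F(p))))
    return prices

F = lambda v: v                           # uniform CDF on [0, 1]
print([round(p, 5) for p in greedy_prices(F)])
# [1.0, 0.5, 0.25, 0.125, 0.0625, 0.03125]
```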
This equilibrium segmentation is fully efficient---trade occurs with probability $1$---and improves consumer surplus relative to the benchmark without personalized pricing. Consumer types in the highest market segment face the same price that they would without personalized pricing, but now consumers in other market segments can also purchase at prices that are (generically) below their willingness to pay. Thus, the segmentation is a Pareto improvement. An attractive feature of the segmentation is its simplicity: all that consumers have to disclose is information about their willingness to pay.
Finally, we note that this construction is robust to the possibility that the consumer does not have evidence with positive probability: there exists an equilibrium in this setting where if the consumer lacks evidence, she is charged a price of $p_1=p^*$, and all those with evidence behave as above. The consumer never gains from imitating those who lack evidence.
\subsection{Optimal Equilibrium Segmentation}\label{Section-MonopolistOptimalSegmentation}
In this section, we explore conditions under which the disclosure strategy above is the ex ante optimal equilibrium for consumers. There are two reasons that this equilibrium may not maximize ex ante consumer surplus among equilibria. The first is that it ignores multidimensionality: even if two types have the same valuation, it may be optimal to separate them. The second reason is that even in a one-dimensional world, packing types greedily need not maximize consumer surplus. To explore this issue, we restrict attention to a one-dimensional model. We prove that the consumer-optimal equilibrium uses a partitional structure. We show by an example that the greedy partition may be suboptimal for some distributions. We then prove that it is consumer-optimal for power distribution functions.
Let the set of types $T$ be identical to the set of values $[\underline{v},\overline{v}]$ and $v(t)=t$. Recall that $\underline{v}\geq 0$ and that $F$ is an atomless CDF over valuations. The restriction that $\mathcal M(t)$ is a closed convex set that includes $t$ implies that here, $\mathcal M(t)=\{[a,b]\subseteq [\underline{v},\overline{v}]:a\leq t\leq b\}$; in other words, the set of all closed intervals that include $t$. Applied to this setting, \Cref{Proposition-MonopolistConsumerSegmentation} identifies an equilibrium segmentation of the form $\left\{[0,p_s]\right\}_{s=0,1,2,\ldots,S}$ where $p_s$ is the optimal price when the distribution $F$ is truncated to $[0,p_{s-1}]$. Because only types in $(p_{s+1},p_{s}]$ send the message $[0,p_s]$, a payoff-equivalent segmentation is for a type $t$ to send the message $[p_{s+1},p_s]$ where $p_{s+1}<t\leq p_s$. This equilibrium is ``partitional'' in that types reveal the member of the partition to which they belong, and thus, these messages can be taken at ``face value''. We prove that for any equilibrium, there always exists a payoff-equivalent equilibrium that is partitional and involves the sale happening with probability $1$.
Our characterization uses the following definitions. A PBE is efficient if trade occurs with probability $1$. A collection of sets $\mathcal P$ is a \textbf{partition} of $[\underline{v},\overline{v}]$ if $\mathcal P$ is a subset of $\mathcal M^{\mathcal F}$ such that $\bigcup_{m\in\mathcal P} m=[\underline{v},\overline{v}]$ and for every distinct $m,m'$ in $\mathcal P$, $m\bigcap m'$ is at most a singleton. One message $m$ dominates $m'$ (i.e. $m\succeq_{\mathcal M} m'$) if for every $t\in m$ and $t'\in m'$, $t\geq t'$; $\arg\min$ and $\arg\max$ over a set of messages refers to this partial order. Given a partition $\mathcal P$, let $m^{\mathcal P}(t)\equiv \arg\min_{\{m\in \mathcal P:t\in m\}}m$. An equilibrium $\sigma$ is \textbf{partitional} if there exists a partition $\mathcal P$ such that $m^\sigma(t)=m^{\mathcal P}(t)$, and for every $m$ in $\mathcal P$, $p^\sigma(m)=\min_{t\in m} t$.
\begin{proposition}\label{Proposition-MonopolistUnidimensional} Given any equilibrium $\sigma$, there exists an efficient partitional equilibrium $\tilde\sigma$ that is payoff-equivalent for almost every type. \end{proposition}
The implication of \Cref{Proposition-MonopolistUnidimensional} is that it suffices to look at partitional equilibria. The proof of \Cref{Proposition-MonopolistUnidimensional} proceeds in two steps. First, we show that it is without loss of generality to look at efficient equilibria: for any equilibrium in which there exists a type that is not purchasing the product, there exists an interim payoff-equivalent equilibrium in which that type fully reveals itself to the seller. Second, we show that for any efficient equilibrium, there exists a partitional equilibrium that is payoff-equivalent for almost every type. To prove this step, we show that in any efficient equilibrium, prices must be (weakly) decreasing in valuation because otherwise some type has a profitable deviation.
How does the greedy segmentation compare to other partitional equilibria from the perspective of ex ante consumer surplus?\footnote{From an interim perspective, the greedy partition is Pareto efficient because any partition that differs from it must raise the lowest type in at least one segment, which increases the price in that segment.} A consideration that the greedy algorithm ignores is that excluding some high types from a pool (making those types pay a higher price) and pooling intermediate types with low types may lower the average price. We illustrate this below. \begin{example}\label{Example-GreedyFailure} Suppose that the consumer's type is drawn from $\left\{1/3,2/3,1\right\}$ where $\Pr(t=1)=1/6$, $\Pr(t=2/3)=1/3+\varepsilon$, and $\Pr(t=1/3)=1/2-\varepsilon$, where $\varepsilon>0$ is small. The greedy construction sets the highest segment as $\{2/3,1\}$---because the seller's optimal posted price here would be $2/3$---and the next segment as $\{1/3\}$. This segmentation results in an average price of $\approx 1/2$. But a better segmentation for ex ante consumer surplus involves the high type perfectly separating as $\{1\}$, and the next highest segment being $\{1/3,2/3\}$. This segmentation reduces the average price to $4/9$. \end{example}
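The arithmetic in the example can be verified mechanically; our own sketch, with $\varepsilon=0.01$:

```python
# Numeric check (ours) of the greedy-failure example with epsilon = 0.01.
eps = 0.01
types = {1/3: 1/2 - eps, 2/3: 1/3 + eps, 1.0: 1/6}

def avg_price(segments):
    # each segment is priced at its lowest type; verify that price is optimal
    total = 0.0
    for seg in segments:
        mass = sum(types[t] for t in seg)
        best = max(seg, key=lambda p: p * sum(m for t, m in types.items()
                                              if t in seg and t >= p))
        assert best == min(seg)          # seller prices at the segment bottom
        total += min(seg) * mass
    return total

greedy = avg_price([{2/3, 1.0}, {1/3}])
better = avg_price([{1.0}, {1/3, 2/3}])
print(round(greedy, 3), round(better, 3))    # 0.503 0.444
```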
Generally, the optimal segmentation can be formulated as the solution to a constrained optimization problem over partitions that minimizes the average price subject to the constraint that the monopolist finds it optimal to price at the bottom of each segment. The greedy algorithm offers a simple program where that constraint binds in each segment and \Cref{Example-GreedyFailure} indicates that this may be sub-optimal. Identifying necessary and sufficient conditions on distributions when such constraints necessarily bind is challenging because it requires understanding in detail how sharply the monopolist's optimal price responds to truncating the distribution at different points. This exercise is difficult for distributions where we cannot solve for the optimal price in closed-form.\footnote{Without solving for the closed-form, we can verify that the greedy algorithm is optimal if (i) $F$ is convex, and (ii) the optimal price on an interval $[0,\tilde{v}]$, denoted by ${p}(\tilde{v})$, has a slope bounded above by $1$ and is weakly concave.} A class of distributions where a closed-form solution is available is that of power distributions; for this class, the greedy algorithm identifies the consumer-optimal segmentation.
\begin{proposition}\label{Proposition-GreedyOptimalPower}
Suppose that $[\underline{v},\overline{v}]=[0,1]$ and the cdf on valuations, $F(v)=v^k$ for $k>0$. Then the greedy segmentation is the consumer-optimal equilibrium segmentation. \end{proposition}
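The greedy cutoffs for the power class can be computed numerically. The sketch below is an illustration, not the paper's formal construction: it assumes the Zeno-style segment rule in which each segment price $p_s$ maximizes $p\,(F(t_{s-1})-F(p))$ on the current segment and the next cutoff is $t_s=p_s$. For $F(v)=v^k$ the first-order condition gives $t_s=t_{s-1}/(k+1)^{1/k}$, so $k=1$ yields the halving cutoffs:

```python
def greedy_cutoffs(k, n_segments, grid=50_000):
    """Greedy (Zeno-style) cutoffs for F(v) = v^k on [0, 1], assuming the
    segment rule: p_s maximizes p * (F(t_{s-1}) - F(p)) and t_s = p_s."""
    F = lambda v: v ** k
    cutoffs = [1.0]
    for _ in range(n_segments):
        t = cutoffs[-1]
        # grid search for the revenue-maximizing price on (0, t)
        p = max((t * i / grid for i in range(1, grid)),
                key=lambda q: q * (F(t) - F(q)))
        cutoffs.append(p)
    return cutoffs

# FOC: F(t_{s-1}) = (k + 1) F(p_s), so t_s = t_{s-1} / (k + 1)**(1/k);
# k = 1 gives the halving cutoffs 1, 1/2, 1/4, ...
cuts = greedy_cutoffs(k=1, n_segments=4)
print(cuts)  # approximately [1, 0.5, 0.25, 0.125, 0.0625]
```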
\subsection{Discussion}\label{Section-Discussion}
Our analysis of the monopolistic setting concludes that (i) the combination of voluntary disclosure and personalized pricing does not benefit consumers if evidence is simple (\Cref{Proposition-Certification}), but (ii) it generates an interim Pareto improvement if evidence is rich (\Cref{Proposition-MonopolistConsumerSegmentation}). Thus, consumers' control over data benefits them when they can choose not only \emph{whether} to communicate but also \emph{what} to communicate.
We have adopted an interpretation that considers the gains that consumers are able to obtain with different technologies. An alternative way to view our results is as a description of consumer payoffs across different kinds of equilibria. Suppose that the set of messages available to type $t$, $\mathcal M(t)$, is richer than the rich evidence structure --- for instance, the set of all (Borel) subsets that contain $t$. A corollary of \Cref{Proposition-Certification} is that in any equilibrium in which all equilibrium path messages are either fully revealing or fully concealing (i.e., $m^*(t)\in \{\{t\},T\}$), no consumer type is better off than without personalized pricing. Analogously, the equilibrium constructed in \Cref{Proposition-MonopolistConsumerSegmentation} remains an equilibrium in this setting.\footnote{In both of these cases, the only adaptation that would have to be made is for the monopolist to respond to a larger set of off-path messages; in each case, it suffices if the monopolist attributes every off-path message to a type with the highest valuation that could have sent it.}
\section{How Disclosure Amplifies Competition}\label{Section-Competition}
In many settings, consumers do not interact with only one seller but instead face a competitive market in which firms are differentiated. In this section, we investigate the conditions under which voluntary disclosure and personalized pricing benefit consumers in competitive settings with differentiated products. Our general analysis allows for two or more firms and a general kind of product differentiation that encompasses both horizontal and vertical components. But for expositional clarity, we begin with the case of Bertrand duopoly with horizontally differentiated products, since this analysis elucidates all of the key economic forces.
\Cref{Section-CompetitionEnvironment} describes the Bertrand duopoly setting with horizontal differentiation and \Cref{Section-CompetitionEquilibria} constructs equilibrium segmentations with simple and rich evidence. \Cref{Section-CompetitionWelfare} compares the consumer's payoffs with those of a benchmark setting without personalized pricing. These sections generalize the example of \Cref{Section-ExampleCompetition} in which the consumer's location is uniformly distributed. \Cref{Section-CompetitionMoreThanTwo} considers the general setting with $n\geq 2$ firms.
\subsection{Bertrand Duopoly with Horizontal Differentiation}\label{Section-CompetitionEnvironment}
Two firms, $L$ and $R$, compete to sell to a single consumer who has unit demand. The type of the consumer is her \emph{location}, denoted by $t$, which is drawn according to measure $\mu$ (and cdf $F$) with support $T$. We assume that $T\equiv [-1,1]$ and that $F$ is atomless with a strictly positive and continuous density $f$ on its support. The firms $L$ and $R$ are located at the two end points, respectively $-1$ and $1$, and each firm $i$ sets a price $p_i\geq 0$. The consumer has a value $V$ for buying the good that is independent of her type $t$, and faces a ``transportation cost'' when purchasing from firm $i$ that is equal to the distance between her location and that of the firm, $\ell_i$.
Thus, her payoff from buying the good from firm $i$ at a price of $p_i$ is $V -|t - \ell_i| - p_i$.
As is standard, we assume that $V$ is sufficiently large that in the equilibria we study below, all types of the consumer purchase the good and no type is excluded from the market.\footnote{See \cite{osborne1987equilibrium}, \cite{caplin1991aggregation}, \cite{bester1992}, and \cite{peitz1997differentiated}. For most of our analysis, it suffices for $V\geq 2$, so that a consumer is always willing to purchase the good from the most distant firm if that distant firm sets a price of $0$. }
\paragraph{Disclosure.} After observing her type, the consumer chooses a message $M$ that is feasible and available for her to send to each of the firms. As before, the set of feasible messages is $\mathcal M^{\mathcal{F}} \equiv \left\{[a,b]:-1\leq a \leq b \leq 1\right\}$ where a message $[a,b]$ is interpreted as ``\textit{my type is in the interval $[a,b]$}." When a consumer's type is $t$, the set of messages that she can send is $\mathcal M(t) \subseteq \mathcal M^{\mathcal{F}}$. We study two disclosure technologies: \begin{itemize}[noitemsep]
\item \textbf{simple} evidence: for each type $t$, $\mathcal M(t) = \{ [-1,1], \{t\} \}$.
\item \textbf{rich} evidence: for each type $t$, $\mathcal M(t) = \{ [a,b] : a \leq t \leq b \}$. \end{itemize}
Each evidence technology is identical to its counterpart in the monopolistic model when the type space is unidimensional. The novelty here is that the consumer now sends two messages---$M_L$ to firm $L$ and $M_R$ to firm $R$---and each message is privately observed by its recipient. Both messages come from the same technology but are otherwise unrestricted. For example, a consumer of type $t$ can reveal her type by sending the message $\{t\}$ to one firm while concealing it from the other firm using the message $[-1,1]$.
\paragraph{Timeline and Equilibrium Concept.} The consumer first observes her type $t$ and then chooses a pair of messages $(M_L, M_R)$, each from $\mathcal M(t)$.\footnote{While we have treated $t$ as the consumer's location, our analysis is also compatible with a setting where all that a consumer observes is a signal with her posterior expected location, like \cite{armstrongzhou2019}, and chooses whether and how to disclose that expected location using simple or rich evidence.} Each firm $i$ privately observes its message $M_i$ and sets a price $p_i \geq 0$; price-setting is simultaneous. The consumer then chooses which firm to purchase the good from, if any.
We study Perfect Bayesian Equilibria of this game. As is well-known \citep{osborne1987equilibrium,caplin1991aggregation}, the price-setting game in Bertrand competition with horizontal differentiation may lack a pure-strategy equilibrium for general distributions. By contrast, we show constructively that pure-strategy equilibria always exist when this market setting is augmented with a disclosure game.
\subsection{Constructing Equilibria with Simple and Rich Evidence}\label{Section-CompetitionEquilibria}
This section constructs equilibria of the disclosure game with simple and rich disclosure technologies for any distribution of consumer types. In both cases, we use the following strategic logic. Each consumer reveals her type to the firm that is more distant from her, indicating that she is ``out of reach." This distant firm then competes heavily for her business by setting a low price, which in equilibrium equals $ 0 $. The firm that does not receive a fully revealing message infers that the consumer is closer to its location. Based on that inference, this firm sets a profit-maximizing price subject to the consumer having the option to buy from the other firm at a price of $0$. We use the assumption that $V\geq 2$ to guarantee that the consumer weakly prefers purchasing the good from the distant firm at a price of $0$ to not purchasing it at all. We begin our analysis with a fully revealing equilibrium in both simple and rich evidence environments, and then show how to improve upon it.
\begin{proposition}
\label{Proposition-DuopolistSimpleUnraveling}
There exists a fully revealing equilibrium in both simple and rich evidence games: every type of consumer $t$ sends the message $\{t\}$ to each firm, and purchases from the firm nearer to her at a price of $2|t|$. \end{proposition}
The logic of \Cref{Proposition-DuopolistSimpleUnraveling} is straightforward. In an equilibrium where the consumer reveals her location to each firm, the firms cannot both charge her strictly positive prices in equilibrium. Standard Bertrand logic implies that the distant firm must charge her a price of $0$ and the closer firm charges her the highest price that it can subject to the constraint that the consumer finds it incentive-compatible to purchase from the closer firm at that price.\footnote{Once types are revealed, these equilibrium prices necessarily coincide with those of \cite{thissevives1988}, where the consumer's type is common knowledge.} If the consumer deviates by sending a message $M$ that isn't a singleton to firm $i$, then firm $i$ believes that the consumer's type is the one in $M$ closest to $\ell_i$ and that the consumer has revealed her location to firm $j$. This equilibrium, thus, involves each firm holding skeptical beliefs that the consumer is as close as possible (given the message that is sent).
This fully revealing equilibrium serves central types very well because they benefit from intense price competition. However, extreme types suffer from the firm closer to them being able to charge a high price. Ideally, types that are located close to firm $i$ would benefit from pooling with types more distant from firm $i$. The next result uses simple evidence to construct a partial pooling equilibrium that improves upon the fully revealing equilibrium for a strictly positive measure of types without making any type worse off.
Our construction uses the following notation. Let $p^i_1$ be the lowest maximizer of $p\ell_i(F(\ell_i)-F(p\ell_i/2))$, and let $t^i_1\equiv{p^i_1\ell_i/2}$. To provide some intuition, $p^i_1$ is the (lowest) optimal price that firm $i$ charges if it has no information about the consumer's type and firm $j$ charges a price of $0$; in other words, this is firm $i$'s optimal \emph{local monopoly price} against an outside option where firm $j$ charges a price of $0$. At this price, firm $i$ expects to sell to the consumer with probability $\ell_i(F(\ell_i)-F(p^i_1\ell_i/2))$, and $t^i_1$ is the most distant type from firm $i$ that still purchases from firm $i$. It is necessarily the case that $-1<t^L_1<0<t^R_1<1$. We use these types to describe our equilibrium.
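As a numerical illustration of these objects, the sketch below grid-searches $p^i_1$ for the uniform distribution on $[-1,1]$; the helper name is ours, not the paper's. In this case both local monopoly prices equal $1$, with $t^L_1=-1/2$ and $t^R_1=1/2$:

```python
def local_price(F, ell, grid=100_000):
    """Lowest maximizer of p * ell * (F(ell) - F(p * ell / 2)) over p in (0, 2]
    (ascending grid search, so ties resolve to the lowest maximizer)."""
    return max((2 * k / grid for k in range(1, grid + 1)),
               key=lambda p: p * ell * (F(ell) - F(p * ell / 2)))

F = lambda v: (v + 1) / 2               # uniform location on [-1, 1]
pL, pR = local_price(F, -1), local_price(F, 1)
tL, tR = pL * (-1) / 2, pR * 1 / 2      # t^i_1 = p^i_1 * ell_i / 2
print(pL, pR, tL, tR)  # uniform case: p^L_1 = p^R_1 = 1, t^L_1 = -1/2, t^R_1 = 1/2
```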
\begin{proposition}\label{Proposition-DuopolistSimpleExistence}
With simple evidence, there exists a partially pooling equilibrium in which the consumer's reporting strategy is
\begin{align*}
\big(M^{\ast}_{L}(t), M^{\ast}_{R}(t) \big) = \begin{cases}
\big( [-1,1], \{t\} \big) & \text{if } -1 \leq t \leq t^L_1, \\
\big( \{t\}, \{t\} \big) & \text{if } t^L_1 < t < t^R_1, \\
\big( \{t\}, [-1,1] \big) & \text{if } t^R_1 \leq t \leq 1,
\end{cases}\end{align*} and the prices charged by firm $i$ are \begin{align*} p_i^*(M) = \begin{cases} \max\{2t\ell_i,0\} & \text{if } M=\{t\}, \\
p^i_1 & \text{otherwise.} \end{cases}\end{align*} In equilibrium, every consumer type purchases from the seller nearer to her. \end{proposition}
An intuition for \Cref{Proposition-DuopolistSimpleExistence} is as follows. If the consumer is centrally located---i.e., in $(t^L_1,t^R_1)$---she discloses her type (``track") to both firms. Such consumers then benefit from intense price competition, exactly as in the fully revealing equilibrium of \Cref{Proposition-DuopolistSimpleUnraveling}. If the consumer is not centrally located, she reveals her location to the firm farther from her but not to the nearer one. This private messaging strategy guarantees that the distant firm prices at zero and offers an attractive outside option. The firm that receives an uninformative (``don't track") message infers that the consumer is located sufficiently close but does not learn where. That firm then chooses an optimal local monopoly price given the outside-option price of zero. This pooling improves consumer welfare by allowing extreme consumer types to share a segment with moderate types, thereby decreasing type-contingent prices relative to the fully revealing equilibrium. \Cref{Figure-ExampleSimpleCompetition} in \Cref{Section-ExampleCompetition} depicts this segmentation for the case where the consumer's location is uniformly distributed.
One can do even better with rich evidence by using a segmentation that is analogous to the ``Zeno Partition'' constructed in \Cref{Section-GreedyPareto}. In this case, the central type $t=0$ obtains equilibrium prices of $0$ from each firm, and plays a role similar to the lowest type in the monopolistic setting. Accordingly, one sees a segmentation that goes from the extremes to the center, and becomes arbitrarily fine as one approaches the center. To develop notation for this argument, let us define a sequence of types $\{t_s^i\}_{s=0,1,2,\ldots}$ and prices and messages $\{p_s^i,M_s^i\}_{s=1,2,\ldots}$ where for every firm $i$ in $\{L,R\}$: \begin{itemize}[noitemsep]
\item $t_0^i=\ell_i$ and for every $s>0$, $t_s^i=p_s^i\ell_i/2$.
\item $p_s^i$ is the lowest maximizer of $p\ell_i(F(t_{s-1}^i)-F(p\ell_i/2))$.
\item $M_s^i\equiv\{t\in [-1,1]:t_s^i\ell_i\leq t\ell_i\leq t_{s-1}^i\ell_i\}$. \end{itemize} Let $p_\infty^i=0$ and let $M_\infty^i=\{0\}$. We have thus defined sequences of cutoffs, prices, and messages in which, at every stage, segments are constructed greedily: given a segment $M_s^i$, firm $i$ charges the price that is the optimal local monopoly price (assuming that the other firm charges a price of $0$), and at this price, firm $i$ services all consumer types in $M_s^i$. Because rich evidence allows consumers to disclose intervals directly, our disclosure strategy need not be asymmetric (unlike our analysis of the segmentation with simple evidence): a consumer of type $t$ can send the message $M_s^i$ that contains $t$ to both firms. We use this notation to prove our result below.
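The sequence just defined can be computed numerically. The sketch below, for firm $R$ (so $\ell_R=1$) and the uniform distribution, recovers $p_s^R=(1/2)^{s-1}$ and $t_s^R=(1/2)^s$; \texttt{zeno\_cutoffs\_R} is a hypothetical helper with grid search standing in for the exact maximizer:

```python
def zeno_cutoffs_R(F, n, grid=50_000):
    """Sketch of the sequence above for firm R (location ell_R = 1):
    p_s maximizes p * (F(t_{s-1}) - F(p / 2)), and t_s = p_s / 2."""
    t, out = 1.0, []
    for _ in range(n):
        # grid search over prices p in (0, 2t]; the ascending order
        # picks the lowest maximizer, as in the text
        p = max((2 * t * i / grid for i in range(1, grid + 1)),
                key=lambda q: q * (F(t) - F(q / 2)))
        t = p / 2
        out.append((p, t))
    return out

uniform_cdf = lambda v: (v + 1) / 2   # uniform location on [-1, 1]
seq = zeno_cutoffs_R(uniform_cdf, n=4)
print(seq)  # (p_s, t_s): halving sequence (1, 1/2), (1/2, 1/4), ...
```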
\begin{proposition}
\label{Proposition-DuopolistRichExistence}
With rich evidence, there exists a segmentation equilibrium in which a consumer's reporting strategy is to send message $M^*(t)$ to both firms where
\begin{align*}
M^*(t) = \begin{cases}
M_s^i & \text{if } t_s^i\ell_i < t\ell_i \leq t_{s-1}^i\ell_i \\
M_{\infty}^i & \text{if } t = 0.
\end{cases}\end{align*} When receiving an equilibrium disclosure of the form $ M_s^i $, firm $i$ charges a price of $p_s^i$ and firm $j$ charges a price of $0$.
\end{proposition}
This equilibrium construction highlights the versatility of rich evidence disclosure. While the competitive environment differs from the monopolistic setting in many ways, the logic of the ``Zeno Partition" strategy follows in much the same way. Consumers with the highest willingness to pay for the good from firm $i$ are segmented together and send messages $M_1^i $. That message induces a price of $0$ by firm $j$ and, given that outside option, firm $i$ charges a price that makes the marginal consumer type in $M_1^i$ (the type with the lowest willingness to pay for firm $i$'s product) indifferent. Prices diminish as the consumer types become closer to the center. As such, the segmentation follows iteratively from both sides of $ 0 $ exactly as in ``Zeno."\footnote{For the uniform distribution, the construction mirrors that in \Cref{Section-Example} where $t_s^i=\ell_i(1/2)^s$. } We depict this segmentation strategy in \Cref{Figure-RichCompetition}.
\begin{figure}
\caption{The figure shows a segmentation using rich evidence. The types in $(t_3^L,t_3^R)$ are partitioned into countably infinitely many segments, and hence these segments are omitted.}
\label{Figure-RichCompetition}
\end{figure}
We have constructed equilibria with simple and rich evidence but we do not argue that these equilibria are consumer-optimal: an equilibrium segmentation (with either simple or rich evidence) may generate segments that induce each firm to randomize in its pricing strategy. Characterizing or bounding prices in mixed-strategy equilibria across segments appears intractable.\footnote{Restricting attention to segmentations that generate pure-strategy equilibria, we conjecture that the equilibrium that we construct in \Cref{Proposition-DuopolistSimpleExistence} is consumer-optimal in the game with simple evidence; similarly, we conjecture the same regarding the equilibrium constructed in the game with rich evidence whenever the greedy algorithm yields an optimal segmentation.} Instead, we compare these equilibria to that of a benchmark model without personalized pricing and show that these equilibria generate strict interim Pareto gains.
\subsection{Benefits of Personalized Pricing in Competitive Markets}\label{Section-CompetitionWelfare}
The benchmark is the standard model of Bertrand pricing with horizontal differentiation: each firm $i$ sets a uniform price $p_i$ and the consumer buys from one of the firms. Unfortunately, this game may lack a pure-strategy equilibrium (in prices), and characterizing the mixed strategy equilibria is challenging. Accordingly, we impose a distributional assumption that is standard in this setting: we assume that $f$ is symmetric around $0$ and is strictly log-concave. This assumption guarantees the existence and uniqueness of a symmetric pure strategy equilibrium \citep{caplin1991aggregation}, and is compatible with many standard distributions \citep{bagnoli2005log}.
With this assumption, a symmetric pure-strategy equilibrium in this benchmark setting exists and involves each firm charging a price of $p^*$ where \begin{align*}
p^*\equiv\arg\max_p pF\left(\frac{p^*-p}{2}\right)=\frac{2F(0)}{f(0)}, \end{align*} where the first equality is firm $L$'s profit maximization problem, and the second comes from solving its first-order condition and substituting $p=p^*$.\footnote{As before, we assume that $V$ is sufficiently high that all consumers purchase at these prices. It suffices that $V>(f(0))^{-1}+1$.} Our welfare result compares this price to those of the equilibria constructed in \Cref{Section-CompetitionEquilibria}.
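The fixed-point formula can be checked numerically. The sketch below assumes the uniform distribution on $[-1,1]$ (so $F(0)=f(0)=1/2$ and $p^*=2$); \texttt{benchmark\_price} is a hypothetical helper that iterates firm $L$'s best-response map to its fixed point:

```python
def benchmark_price(F, grid=20_000, p_max=4.0, iters=30):
    """Iterate firm L's best response p -> argmax_p p * F((p_star - p) / 2)
    until it settles at the symmetric fixed point p* (grid search per step)."""
    p_star = 1.0
    for _ in range(iters):
        p_star = max((p_max * i / grid for i in range(1, grid + 1)),
                     key=lambda p: p * F((p_star - p) / 2))
    return p_star

F = lambda v: min(1.0, max(0.0, (v + 1) / 2))  # uniform location on [-1, 1]
p = benchmark_price(F)
print(p, 2 * F(0) / 0.5)  # fixed point and the formula 2F(0)/f(0); both approximately 2
```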
\begin{proposition}\label{Proposition-DuopolyWelfare}
If $f$ is symmetric around $0$ and log-concave, then every type has a strictly higher payoff in the equilibria of the simple and rich evidence games constructed in \Cref{Proposition-DuopolistSimpleExistence,Proposition-DuopolistRichExistence} than in the benchmark setting without personalized pricing. \end{proposition} The logic of \Cref{Proposition-DuopolyWelfare} is that the price in the benchmark setting ($p^*$) is strictly higher than $p_1^i$, the price charged by firm $i$ to a consumer who conceals her type from firm $i$ in the equilibrium of the simple evidence game (\Cref{Proposition-DuopolistSimpleExistence}). The consumer must then be better off because this price ($p_1^i$) is strictly higher than all other equilibrium path prices both in this equilibrium and in the equilibrium that we construct in the rich evidence game. We illustrate the welfare gains from simple evidence in \Cref{Figure-DuopolyWelfare} for the case of the uniform distribution.
\begin{figure}
\caption{The figure compares the interim equilibrium cost (incl. price and transport cost) in the setting without personalized pricing with that of the equilibrium constructed in the simple evidence game (\Cref{Proposition-DuopolistSimpleExistence}) for uniformly distributed types. Simple evidence reduces the expected cost by $50\%$. }
\label{Figure-DuopolyWelfare}
\end{figure}
The ability of extreme types to pool is needed for these interim Pareto gains. These gains would not generally emerge if the consumer fully revealed her type: apart from the uniform distribution, any symmetric log-concave density has $f(0)>1/2$ and therefore, prices in the benchmark are strictly less than $2$. By contrast, in the fully revealing equilibrium, the extreme types pay a price of $2$. Thus, these interim Pareto gains emerge when consumers can disclose or conceal evidence about their types, and not necessarily when types are commonly known.
\subsection{Competition Between More than Two Firms}\label{Section-CompetitionMoreThanTwo}
In this section, we show how our prior conclusions generalize to the case of multiple firms that produce differentiated products. Although the economic intuition for how disclosure amplifies competition remains the same, the arguments and notation are necessarily more involved to account for the higher dimensionality introduced by there being more firms.
Suppose that the set of firms is $N\equiv\{1,\ldots,n\}$ where the number of firms, $n$, is at least $2$. The consumer has a valuation for each of these products, encoded in her type, $t\equiv (t_1,\ldots,t_n)$, where $t_i$ is the consumer's value for the good produced by firm $i$. We assume that $t$ is drawn according to distribution $\mu$ whose support is $T\equiv\left[\underline{t},\overline{t}\right]^n$, where $0<\underline{t}<\overline{t}$; for simplicity, we assume that $\mu$ has a strictly positive and continuous density $f$ on its support. We say that firm $i$ is type $t$'s \emph{favorite firm} if $t_i$ is weakly higher than $t_j$ for every firm $j$.
We first construct a partial pooling equilibrium using simple evidence. We then construct an equilibrium with rich evidence. Finally, we compare the equilibrium prices under the two technologies to the model without personalized pricing.
\paragraph{Simple Evidence:} For each type $t$, the set of messages available to the consumer is $\mathcal M(t)=\{T,\{t\}\}$. Messages are private and $M_i(t)$ denotes the message sent to firm $i$. It is useful to define a demand function for firm $i$ assuming that all other firms charge a price of $0$. For every non-negative price $p$, let \begin{align*}
Q^i(p)&\equiv\mu\left(\{t:t_i-p\geq \max_{j\neq i}t_j\}\right). \end{align*} The demand $Q^i(p)$ is the probability that the consumer purchases from firm $i$ at a price of $p$ when all other firms charge $0$. Let $p^i_1$ denote the lowest maximizer of $pQ^i(p)$. The following result constructs an equilibrium with simple evidence. \begin{proposition}\label{Proposition-OligopolySimpleExistence}
With simple evidence, there exists an equilibrium in which the consumer's reporting strategy is
\begin{align*}
M^{\ast}_{i}(t)= &\begin{cases}
\{t\} & \text{if } t_i-p^i_1< \max_{j\neq i} t_j, \\
T & \text{otherwise.}
\end{cases} \end{align*} {The prices charged by firm $i$ are} \begin{align*} p_i^*(M) = &\begin{cases} \max\{0,t_i-\max_{j\neq i} t_j\} & \text{if } M=\{t\}, \\
p^i_1 & \text{otherwise.} \end{cases}\end{align*} In equilibrium, every consumer type purchases from her favorite firm. \end{proposition} This equilibrium generalizes the simple partitional form of \Cref{Proposition-DuopolistSimpleExistence} and \Cref{Figure-ExampleSimpleCompetition}. The consumer sends the non-discriminatory message $T$ only when she has strong preferences for the product of a particular firm. Whenever firm $i$ receives the message $T$, it infers that the consumer has a strong preference for its product and that the consumer has sent a fully revealing message to every other firm. Anticipating that every other firm then charges a price of $0$, firm $i$ charges its optimal local monopoly price, $p_1^i$. When the consumer has mild preferences for the products of each firm, she sends the fully revealing message to each. The firms then compete for her business using standard Bertrand prices (with differentiated products). We omit a proof of this result as it is nearly identical to that of \Cref{Proposition-DuopolistSimpleExistence}.
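The objects $Q^i(p)$ and $p^i_1$ are straightforward to estimate by Monte Carlo. The sketch below assumes two firms with values i.i.d. uniform on $[1,2]$, an illustrative choice of ours: there $t_1-t_2$ is triangular, so $Q^1(p)=(1-p)^2/2$ and $p^1_1=1/3$.

```python
import random

def local_monopoly_price(sample, i, grid=100):
    """Estimate p^i_1 = argmax_p p * Q^i(p) from a type sample, where
    Q^i(p) = Pr(t_i - p >= max_{j != i} t_j) (all rivals price at 0)."""
    def Q(p):
        return sum(t[i] - p >= max(t[:i] + t[i + 1:]) for t in sample) / len(sample)
    return max((k / grid for k in range(1, grid)), key=lambda p: p * Q(p))

random.seed(0)
n = 2  # illustrative: two firms, values i.i.d. uniform on [1, 2]
sample = [tuple(random.uniform(1, 2) for _ in range(n)) for _ in range(20_000)]
p1 = local_monopoly_price(sample, i=0)
print(p1)  # analytically Q(p) = (1 - p)^2 / 2 here, so p^1_1 = 1/3
```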
\paragraph{Rich Evidence:}
We now construct an equilibrium using the rich evidence structure of \Cref{Section-MonopolistModel} where $\mathcal M(t)$, the set of messages that the consumer can send when her type is $t$, is the set of all closed and convex subsets of $T$ that contain $t$. We partition $T$ on the basis of each consumer type's favorite firm, and the closest competitor that would like to poach that type's business. For each consumer type $t$, let $\alpha(t)$ denote type $t$'s favorite firm and $\beta(t)$ denote type $t$'s favorite among the remaining firms, breaking ties in favor of the firm that has the lowest index in each case.\footnote{Formally, treating $\min$ as the lowest element of a set, $\alpha(t)=\min {\{i\in \{1,\ldots,n\}:t_i \geq t_j\text{ for all }j\}}$, and $\beta(t)=\min {\{i\in \{1,\ldots,n\}\backslash\{\alpha(t)\}:t_i \geq t_j\text{ for all }j\neq \alpha(t)\}}$. These tie-breaking rules are for completeness, but any tie-breaking rule that preserves $\alpha(t)\neq \beta(t)$ suffices.} It is helpful to define a localized demand function: for every non-negative price $p$ and every (Borel) set $A$, \begin{align*}
Q^i(p,A)\equiv \mu \left(\{t\in A:t_i - p\geq \max_{j\neq i} t_j\}\right). \end{align*} This term is the probability that the consumer purchases from firm $i$ at price $p$ when the consumer's type is in $A$ and every other firm is charging a price of $0$.
For each firm $i$ and competitor $j$, define a sequence of prices $\{p^{ij}_s\}_{s=0,1,2,\ldots}$ and sets of types $\{M^{ij}_s\}_{s=1,2,\ldots}$ such that $p^{ij}_0=\overline t-\underline t$ and \begin{align*}
M^{ij}_s &\equiv \left\{t\in T:i=\alpha(t),j=\beta(t),t_i-t_j\leq p^{ij}_{s-1}\right\},\\
p^{ij}_{s}&\text{ is the smallest maximizer of } pQ^i(p,M^{ij}_s). \end{align*} The set $M^{ij}_s$ denotes all consumer types for which $i$ is the favorite, $j$ is the second favorite, and each type is willing to pay as much as $p^{ij}_{s-1}$ to obtain the product from firm $i$. As we show in the Appendix, this is a convex set. The price $p^{ij}_s$ is the local monopoly price charged by firm $i$ when it knows that the consumer's type is in $M^{ij}_s$. Denoting the set of types for whom firms $i$ and $j$ are tied as her favorite firms by $M^{ij}_\infty \equiv \{t\in T:i=\alpha(t),j=\beta(t),t_i=t_j\}$ so that $p^{ij}_\infty =0$, we can now state the following result. \begin{proposition}\label{Proposition-OligopolyRichExistence}
With rich evidence, there exists an equilibrium in which a consumer's reporting strategy is \begin{align*} M^*(t) = \begin{cases}
M^{ij}_s & \text{if }t\in M^{ij}_s\text{ and }t_i-t_j>p^{ij}_s, \\
M^{ij}_\infty & \text{if }t\in M^{ij}_\infty.
\end{cases}\end{align*} When receiving an equilibrium disclosure of the form $M^{ij}_s$, firm $i$ charges a price of $p^{ij}_s$, and all other firms charge a price of $0$. The consumer always purchases the good from her favorite firm. \end{proposition} The construction here merges that of \Cref{Proposition-MonopolistConsumerSegmentation} and \Cref{Proposition-DuopolistRichExistence}: consumer types send messages that reveal their two favorite firms as well as a maximal willingness to pay for the good produced by their favorite firm. All non-favorites charge a price of $0$ and the favorite firm charges a price corresponding to the local monopoly price.
\paragraph{Benefits of Personalized Pricing:} We show that the equilibria constructed in \Cref{Proposition-OligopolySimpleExistence,Proposition-OligopolyRichExistence} improve consumer surplus relative to the benchmark of no personalized pricing. For the benchmark model, to guarantee full market coverage, we assume that the consumer has to purchase the product from one firm. We say that the distribution is symmetric if whenever $t'$ is a permutation of $t$, $f(t')=f(t)$. \begin{proposition}\label{Proposition-OligopolyWelfare} If $f$ is symmetric and log-concave, then every type has a strictly higher payoff in the equilibria constructed in \Cref{Proposition-OligopolySimpleExistence,Proposition-OligopolyRichExistence} than in the benchmark setting without personalized pricing. \end{proposition} Thus, we see that our conclusions from Bertrand duopoly with horizontal differentiation apply broadly: voluntary disclosure with either simple or rich evidence can amplify competition and generate gains for consumers. Our analysis thus highlights how consumer control through a simple track / do-not-track technology is sufficient in competitive markets.
\section{Conclusion} \label{Section-Conclusion}
As the digital economy matures, policymakers and industry leaders are working to establish norms and regulations to govern data ownership and transmission. In light of the privacy and distributional concerns that this issue raises, we set out to study the question: \emph{do consumers benefit from personalized pricing when they have control over their data?} We frame and answer this question using the language of voluntary disclosure, building on a rich theoretical literature on evidence and hard information.
Our initial instinct was that voluntary disclosure would not help. As the market draws inferences based on information that is \emph{not} disclosed, giving consumers the ability to separate themselves would seem to be self-defeating. To put it differently, if the market necessarily unravels as in \cite{grossman:81-JLE} and \cite{milgrom1981good}, consumers retain no surplus and may be worse off with personalized pricing. We show that this conclusion is incorrect because it omits two important strategic forces present in market interactions.
First, one can construct pools in both monopolistic and competitive settings in which the consumer lacks an incentive to separate herself from the pool. These pools are simple, do not require commitment, and depend only on willingness-to-pay rather than on intricate details of the type space. Second, when facing multiple firms, voluntary disclosure and personalized pricing can amplify competitive forces. By revealing features of one's preferences to the market, the consumer obtains a significant price concession from a less competitive firm that forces the more competitive firm to also lower its price.
We have examined these basic strategic considerations through the lens of a stylized model, and it is worth noting important caveats to our conclusions.
One consideration is that voluntary disclosure and personalized pricing can generate multiple equilibria. We view our analysis as identifying possibility results for when consumers can be made better off. Our results show that identifying these possibilities is subtle: simple evidence does not benefit the consumer in any equilibrium of a monopolistic market but can do so in competitive markets. But translating these possibilities for improvement into actual gains for consumer surplus requires the coordination of their information-sharing, and here, we see an important role for intermediaries, platforms, and regulation.
A different concern is that while our model permits a consumer to disclose her valuation perfectly, consumer tracking may be limited so that at best, the consumer can disclose information about her type up to some coarse partition. While the exact analysis of such a setting would depend on the details, we would expect our main qualitative conclusions to still hold: consumer control over all-or-nothing disclosures (like simple evidence) will have a stronger effect in competitive markets than when there is a high degree of market power.
Similarly, one may worry that, if consumers internalize a cost for ceding their privacy when they disclose information, our conclusions may not go through. In the monopolistic setting, this is true: consumers at the bottom of each messaging pool may now receive a negative payoff from remaining in their pool and prefer not to disclose at all. Ultimately, the \emph{only} strategy consistent with equilibrium is for there to be no disclosure, and so no evidence structure --- simple or rich --- could induce a consumer welfare improvement. However, this is not true of the competitive setting. Because of the price competition that disclosure can induce, the consumer always enjoys a strict improvement from disclosing information. Thus, so long as the privacy cost is sufficiently low, all of our results for competitive markets remain. Combining these observations, the addition of an exogenous value for privacy strengthens our message about the sharp differences between consumer control in monopolistic and competitive markets.
A natural question to follow asks whether the technologies envisioned by the simple and rich evidence structures in our model are feasible in real data-sharing settings. It is not difficult to imagine data-sharing tools that fit our definitions: track/do-not-track is already a part of GDPR, and a simple slider system in which consumers report the group or range that they belong to on different dimensions would fit the convexity requirement of rich evidence. Thus, practical feasibility relies on two features: the ability to transmit verifiable information, and the feasibility of implementing an equilibrium.
As argued by \cite{goldfarbtucker} and others, an important element in the ongoing evolution of the digital economy is its increasing ability to verify information. These advances suggest that it may be technologically feasible for an intermediary --- whether a private platform or a government-operated service --- to verifiably disclose aspects of consumers' preferences to sellers. Already, regulations such as GDPR have required platforms to allow consumers to opt out of being tracked. The ways in which such policies and technologies are crafted can influence not only how data is shared, but also what data will be shared in equilibrium, facilitating coordination across consumers. Our work suggests that designing these tools with a priority to give consumers control over their data may make personalized pricing attractive and improve consumer welfare.
{\small \begin{singlespace}
\addcontentsline{toc}{section}{References}
\end{singlespace} }
\appendix \section{Appendix}
\begin{proof}[Proof of \cref{Proposition-Certification} on p. \pageref{Proposition-Certification}] Consider an equilibrium. Let $\tilde{T}$ be the set of types that in equilibrium send the non-disclosure message, $T$. Thus, every type in $T\backslash\tilde{T}$ sends a message that fully reveals itself. Sequential rationality demands that the monopolist charges a price of $v(t)$ to every such type, leading to an interim payoff of $0$. We prove below that the non-disclosure message must induce a price that is no less than $p^*$.
Suppose towards a contradiction that it leads to a price $\tilde{p}$ that is strictly less than $p^*$. In equilibrium, if $v(t)>\tilde{p}$, the consumer must be sending the non-disclosure message $T$ (because sending the message $\{t\}$ leads to a payoff of $0$). Therefore, in equilibrium, \begin{align*}
\tilde{T}\supseteq \{t\in T:v(t)>\tilde{p}\} \supseteq \{t\in T:v(t)\geq p^*\}. \end{align*} By charging a price of $\tilde{p}$, the firm's payoff is \begin{align*}
\tilde{p} \mu(\{t\in \tilde T: v(t)\geq \tilde{p}\})&\leq \tilde{p} \mu(\{t\in T: v(t)\geq \tilde{p}\})\\&<{p}^* \mu(\{t\in T: v(t)\geq {p}^*\})\\&={p}^* \mu(\{t\in \tilde{T}: v(t)\geq {p}^*\}), \end{align*} where the weak inequality follows from $\tilde{T}\subseteq T$, the strict inequality follows from $p^*$ being the (lowest) optimal price, and the equality follows from $\{t\in T:v(t)\geq p^*\}\subseteq \tilde{T}$. Therefore, the monopolist has a profitable deviation from charging $\tilde{p}$ to charging $p^*$ after the non-disclosure message, a contradiction. \end{proof} \begin{proof}[Proof of \cref{Proposition-MonopolistConsumerSegmentation} on p. \pageref{Proposition-MonopolistConsumerSegmentation}] We augment the description of the strategy-profile with the off-path belief system where when the seller receives a message $M\notin\left(\cup_{s=1,\ldots,S} M_s\right)\cup M_\infty$, he puts probability $1$ on a type in $M$ with the highest valuation (i.e. a type in $\arg\max_{t\in M}v(t)$), and charges a price equal to that valuation.
Observe that the seller has no incentive to deviate from this strategy-profile because for each (on- or off-path) message, the price that he is prescribed to charge in equilibrium is his optimal price given the beliefs that are induced by that message.
We consider whether the consumer has a strictly profitable deviation. Let us consider on-path messages first. Consider a consumer type $t$ that is prescribed to send message $M_s$ where $p_s<v(t)\leq p_{s-1}$. Sending any message of the form $M_{s'}$ where $s'<s$ results in a higher price and therefore is not a profitable deviation. All messages of the form $M_{s'}$ where $s'>s$ are infeasible because $t\notin M_{s'}$ for any $s'>s$. Finally, if the type $t$ is such that she is prescribed to send message $M_\infty$, her equilibrium payoff is $0$, and sending any other message results in a weakly higher price. Thus, the consumer has no profitable deviation to any other on-path message. There is also no profitable deviation to any off-path message: because for any set $M$ that contains $t$, $v(t)\leq \max_{t'\in M}v(t')$, any off-path message is guaranteed to result in a payoff of $0$. \end{proof} \begin{proof}[Proof of \cref{Proposition-MonopolistUnidimensional} on p. \pageref{Proposition-MonopolistUnidimensional}]
Consider an equilibrium $\sigma$. Let $m^\sigma(t)$ denote the message reported by type $t$, let $F^\sigma_m\in \Delta[0,1]$ denote the firm's belief when receiving message $m$ and $\underline{t}^{\sigma}(m)$ be the lowest type in the support of that belief, and let $p^\sigma(m)$ be the sequentially rational price that he charges. In any equilibrium, $p^\sigma(m)\geq \underline{t}^{\sigma}(m)$, because otherwise the firm has a profitable deviation. We say that a message is an equilibrium-path message if there exists at least one type that sends it, and a price is an equilibrium-path price if there exists at least one equilibrium-path message that induces the firm to charge that price. \begin{lemma}[Efficiency Lemma]\label{Lemma-MonopolistEfficiency}
For any equilibrium $\sigma$, there exists an efficient equilibrium that results in the same payoff for every consumer type. \end{lemma} \begin{proof}\renewcommand{\textsquare}{}
Consider an equilibrium $\sigma$. Define a strategy profile $\tilde\sigma$ in which \begin{align*} m^{\tilde\sigma}(t) &= \begin{cases}
m^\sigma(t) & \text{if }v(t)\geq p^{\sigma}(m^\sigma(t))\text{,} \\
\{t\} & \text{otherwise,}
\end{cases}\\
p^{\tilde\sigma}(m)&=p^\sigma(m).
\end{align*} In this disclosure strategy profile, a consumer-type that doesn't buy in equilibrium $\sigma$ is fully revealing herself in $\tilde\sigma$. Because $\sigma$ is an equilibrium, and the pricing strategy remains unchanged, such a type purchases in $\tilde\sigma$ at price $v$, and thus, efficiency is guaranteed without a change in payoffs.
We argue that $\tilde\sigma$ is an equilibrium. Note that because $\sigma$ is an equilibrium, and we have not changed the price for any message, no consumer-type has a motive to deviate. We also argue that the monopolist has no incentive to change prices. Because $p^\sigma(m)$ is an optimal price for the firm to charge in the equilibrium $\sigma$ when receiving message $m$, \begin{align}\label{Inequality-MonopolistPriceEfficiency} p^\sigma(m)(1-F^\sigma_m(p^\sigma(m)))\geq p(1-F^\sigma_m(p))\text{ for every }p. \end{align} After receiving message $m$ in $\tilde\sigma$, the monopolist's payoff from setting a price of $p^{\tilde\sigma}(m)$ is $p^{\tilde\sigma}(m)=p^{\sigma}(m)$ (because that price is accepted for sure), and the payoff from setting a higher price is $p(1-F^{\tilde\sigma}_m(p))$. But observe that by Bayes' rule, for every $p\geq p^{\tilde\sigma}(m)$,
1-F^{\tilde\sigma}_m(p)=\frac{1-F^{\sigma}_m(p)}{1-F^\sigma_m(p^\sigma(m))}. \end{align*} Thus \eqref{Inequality-MonopolistPriceEfficiency} implies that $p^{\tilde\sigma}(m)\geq p(1-F^{\tilde\sigma}_m(p))$ for every $p>p^{\tilde\sigma}(m)$, and clearly the monopolist has no incentive to reduce prices below $p^{\tilde\sigma}(m)$. Therefore, the monopolist has no motive to deviate. \end{proof}
\begin{lemma}[Partitional Lemma]\label{Lemma-Partitional} For every efficient equilibrium $\sigma$, there exists a partitional equilibrium $\tilde\sigma$ that results in the same payoff for almost every type. \end{lemma} \begin{proof}\renewcommand{\textsquare}{} In an efficient equilibrium $\sigma$, trade occurs with probability $1$. Therefore, for every equilibrium-path message, $m$, the price charged by the monopolist after that message, $p^\sigma(m)$, must be no more than the lowest type in the support of his beliefs after receiving message $m$, $\underline{t}^\sigma(m)$ (recall that $v(t)=t$). Sequential rationality of the monopolist demands that $p^\sigma(m)$ is at least $\underline{t}^\sigma(m)$ (because charging strictly below can always be improved), and therefore, in an efficient equilibrium, $p^\sigma(m)=\underline{t}^\sigma(m)$.
\noindent\underline{Step 1}: We first prove that the set of types being charged an equilibrium-path price $p$ is a connected set. Suppose that types $t$ and $t''>t$ are sending (possibly distinct) equilibrium-path messages $m$ and $m''$ such that $p^\sigma(m)=p^\sigma(m'')$. Because $p^\sigma(m)=\underline{t}^\sigma(m)$ and $p^\sigma(m'')=\underline{t}^\sigma(m'')$, it follows that $\underline{t}^\sigma(m)=\underline{t}^\sigma(m'')\leq t<t''$. Because types arbitrarily close to $\underline{t}^\sigma(m'')$ and $t''$ are both sending the message $m''$, the message $m''$ contains the interval $[\underline{t}^\sigma(m''),t'']$.
Consider any type $t'$ in $[t,t'']$: because $[t,t'']\subseteq [\underline{t}^\sigma(m''),t'']\subseteq m''$, it follows that $m''$ is a \emph{feasible message} for type $t'$. Therefore, denoting $m'$ as the equilibrium-path message of type $t'$, type $t'$ does not have a profitable deviation to sending message $m''$ only if $p^\sigma(m')\leq p^\sigma(m)$.
We argue that this weak inequality holds as an equality. Suppose towards a contradiction that $p^\sigma(m')< p^\sigma(m)$. Then it follows from $p^\sigma(m')=\underline{t}^\sigma(m')$ that $\underline{t}^\sigma(m')<\underline{t}^\sigma(m)\leq t\leq t'$. Therefore, the interval $[\underline{t}^\sigma(m'),t']$ is both a subset of $m'$ and contains $t$, and hence, $m'$ is a feasible message for type $t$. But then, type $t$ has an incentive to deviate from her equilibrium-path message $m$ to $m'$, which is a contradiction.
\noindent\underline{Step 2}: For every equilibrium-path price $p$, let \begin{align*}
M^\sigma(p)&\equiv\{m\in\mathcal M:p^\sigma(m)=p\text{ and }m\text{ is an equilibrium-path message}\},\\
T^\sigma(p)&\equiv\{t:p^\sigma(m^\sigma(t))=p\}. \end{align*} Observe that for every message $m$ in $M^\sigma(p)$, the monopolist's optimal price is $p$. Because the monopolist's payoff from charging any price is linear in his beliefs, and the belief induced by knowing that the type is in $T^\sigma(p)$ is a convex combination of beliefs in the set $\bigcup_{m\in M^\sigma(p)} \{F^\sigma(m)\}$, it follows that the monopolist's optimal price remains $p$ when all he knows is that the type is in $T^\sigma(p)$.
Now consider the collection of sets \begin{align*}
\mathcal P^\sigma \equiv\{m\in \mathcal M:m=cl(T^\sigma(p))\text{ for some equilibrium-path price }p\}, \end{align*} where $cl(\cdot)$ is the closure of a set. We argue that $\mathcal P^\sigma$ is a partition of $[0,1]$: clearly, $[0,1]\subseteq \bigcup_{m\in \mathcal P^\sigma} m$, and because each of $T^\sigma(p)$ and $T^\sigma(p')$ is connected for equilibrium-path prices $p$ and $p'$, $cl(T^\sigma(p))\bigcap cl(T^\sigma(p'))$ is at most a singleton.
Consider a strategy-profile $\tilde\sigma$ where each type $t$ sends the message $m^{\mathcal P^\sigma}(t)$. Fix such a message $m$ generated by $\tilde\sigma$; there exists a price $p$ that is on the equilibrium path (in the equilibrium $\sigma$) such that $m=cl(T^\sigma(p))$. Because the prior is atomless, the monopolist's optimal price when receiving message $m$ in $\tilde\sigma$ is equivalent to setting the optimal price when knowing that the type is in $T^\sigma(p)$, which as established above, is $p$. If any other message $m=[a,b]$ is reported, the monopolist believes that the consumer's type is $b$ with probability $1$.
We argue that this is an equilibrium. We first consider deviations to other messages that are equilibrium-path for $\tilde\sigma$. For any type $t$ such that there exists a unique element in $\mathcal P^\sigma$ that contains $t$, there exists no other feasible message that is an equilibrium-path message for $\tilde\sigma$. For any other type $t$, the strategy of sending the message $m^{\mathcal P^\sigma}(t)$ ensures that type $t$ is sending the equilibrium-path message that induces the lower price. Finally, no type gains from sending an off-path message. Observe that all but a measure-$0$ set of types are charged the same price in $\tilde\sigma$ as they are in $\sigma$. \end{proof}
Combining \cref{Lemma-MonopolistEfficiency} and \cref{Lemma-Partitional} completes the proof. \end{proof}
\begin{proof}[Proof of \Cref{Proposition-GreedyOptimalPower} on p. \pageref{Proposition-GreedyOptimalPower}] Since all partitional equilibria involve trade with probability $1$, a partitional equilibrium $\sigma$ has higher ex ante consumer welfare than the partitional equilibrium $\tilde\sigma$ if the average price in $\sigma$ is lower than that in $\tilde\sigma$: \begin{align*}
\int_0^1 p^\sigma(m^\sigma(t))dt\leq \int_0^1 p^{\tilde\sigma}(m^{\tilde\sigma}(t))dt. \end{align*} Thus, it suffices to prove that the greedy segmentation attains the lowest average price attainable by any partitional equilibrium.
We first describe the greedy segmentation. For a truncation of valuations $[0,v]$ where $v\leq 1$, let $p(v)$ solve $pf(p)=F(v)-F(p)$, which implies that $p({v})=\frac{{v}}{\sqrt[\leftroot{-2}\uproot{2}k]{k+1}}$; let us denote the denominator of $p(v)$ by $\gamma$, and note that $\gamma>1$. The greedy segmentation divides the $[0,1]$ interval into sets of the form $\{0\}\bigcup_{\ell=0}^\infty S_\ell$ where $S_\ell\equiv\left[\frac{1}{\gamma^{\ell+1}},\frac{1}{\gamma^\ell}\right]$.
We prove that no partitional equilibrium generates a lower average price on the segment $S_\ell$ than $\frac{1}{\gamma^{\ell+1}}$. Consider an arbitrary $\ell\geq 0$. Consider dividing $S_\ell$ into two segments $\left[\frac{1}{\gamma^{\ell+1}},\tilde{v}\right]$ and $\left(\tilde v,\frac{1}{\gamma^{\ell}}\right]$ for some $\tilde v\in \left(\frac{1}{\gamma^{\ell+1}},\frac{1}{\gamma^{\ell}}\right)$. The higher segment is charged $\tilde v$. The lowest possible price that the lower segment is charged is $\frac{\tilde v}{\gamma}$, which is achieved if all types in $\left[\frac{\tilde v}{\gamma},\tilde v\right]$ send the same message. The resulting mass-weighted price (total expenditure) in the segment $S_\ell$ is \begin{align*}
\bar{P}(\tilde{v})\equiv&(F(\tilde{v})-F(1/\gamma^{\ell+1}))\frac{\tilde v}{\gamma}+(F(1/\gamma^{\ell})-F(\tilde{v}))\tilde v\\
=&(\tilde{v}^k-\gamma^{-k(\ell+1)})\frac{\tilde v}{\gamma}+(\gamma^{-k\ell}-\tilde{v}^k)\tilde{v} \end{align*} where the first equality substitutes $F(v)=v^k$. Taking derivatives, \begin{align*}
\frac{d^2\bar P}{d\tilde{v}^2}=(k+1)k\tilde{v}^{k-1}\left(\frac{1}{\gamma}-1\right)<0 \end{align*} where the inequality follows from $\gamma>1$. Therefore, $\bar{P}$ is concave in $\tilde{v}$. The boundary condition that $\bar{P}(\gamma^{-\ell})=\bar{P}(\gamma^{-(\ell+1)})$, coupled with concavity of $\bar{P}$, implies that $\bar{P}(\tilde{v}) \geq \bar{P}(\gamma^{-(\ell+1)})$ for every $\tilde{v}\in \left(\frac{1}{\gamma^{\ell+1}},\frac{1}{\gamma^{\ell}}\right)$; dividing by the mass $F(\gamma^{-\ell})-F(\gamma^{-(\ell+1)})$ of $S_\ell$, the average price is at least $\gamma^{-(\ell+1)}$. Therefore, no partitional equilibrium generates a lower average price than $\gamma^{-(\ell+1)}$ for the set of types in $S_\ell$. Because the greedy segmentation attains this lower bound pointwise on every interval $S_\ell$ for every $\ell$, it is the consumer-optimal partitional equilibrium. \end{proof}
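As a sanity check on the closed form above, one can verify numerically that $p(v)=v/\gamma$ with $\gamma=(k+1)^{1/k}$ solves $pf(p)=F(v)-F(p)$ for $F(v)=v^k$. A minimal sketch (the function names are ours, not the paper's):

```python
# Sanity check of the greedy-segmentation formulas for F(v) = v^k on [0, 1].
# p(v) should solve p * f(p) = F(v) - F(p), with p(v) = v / (k+1)^(1/k).

def greedy_price(v, k):
    """Closed-form price for the truncation [0, v]."""
    return v / (k + 1) ** (1.0 / k)

def residual(v, k):
    """How far p(v) is from solving p*f(p) = F(v) - F(p)."""
    F = lambda x: x ** k          # CDF
    f = lambda x: k * x ** (k - 1)  # density
    p = greedy_price(v, k)
    return p * f(p) - (F(v) - F(p))
```

For instance, with $k=2$ we have $\gamma=\sqrt{3}$, and the residual vanishes up to floating-point error for any truncation point $v$.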
\begin{proof}[Proof of \Cref{Proposition-DuopolistSimpleUnraveling} on p. \pageref{Proposition-DuopolistSimpleUnraveling}]
Given a message $M$, let $\tau(i,M)\equiv\arg\min_{t\in M}|t-\ell_i|$ denote the closest type in $M$ to seller $i$; this type is well-defined because $M$ is closed. Let $\delta_t$ denote the degenerate probability distribution that places probability $1$ on type $t$. We use this notation to construct a fully revealing equilibrium:
\begin{itemize}[noitemsep]
\item The consumer of type $t$ always sends message $\{t\}$.
\item If seller $i$ receives message $M$, his belief is $\delta_{\tau(i,M)}$, and he believes that the other seller has received a fully revealing message.
\item If seller $i$ holds belief $\delta_{\tau(i,M)}$, he charges a price $p_i(M)=\max\{2\tau(i,M)\ell_i,0\}$.
\item If $V-p_i-|t-\ell_i|>V-p_j-|t-\ell_j|$ and $V-p_i-|t-\ell_i|\geq 0$, then the consumer purchases from firm $i$.
\item If $V-p_L-|t-\ell_L|=V-p_R-|t-\ell_R|\geq 0$, the consumer purchases from firm $L$ if and only if $t\leq 0$, and otherwise, the consumer purchases from firm $R$.
\end{itemize} We argue that this is an equilibrium. Observe that each seller's on-path beliefs are consistent with Bayes rule, since $t=\tau(i,\{t\})$. In the case of an off-path message $M$, Bayes rule does not restrict the set of possible beliefs, and therefore, the above off-path belief assessment is feasible.
To see that each firm does not wish to deviate from charging the above prices, suppose that firm $i$ receives message $M$. Whether $M$ is on or off the equilibrium path, he believes that the consumer's type is $\tau(i,M)$ with probability $1$ and that the other firm $j$ has received the message $\{\tau(i,M)\}$. Denote this type by $t$. If $2t\ell_i> 0$ then the consumer is closer in location to firm $i$ and therefore $2t\ell_j < 0$. In this case, firm $i$ believes that firm $j$ is charging a price of $0$. Charging a price strictly higher than $2t\ell_i$ leads to a payoff of $0$ (because the consumer will reject such an offer and purchase instead from the other firm), and charging a price $p$ weakly below $2t\ell_i$ leads to a payoff of $p$ (because the consumer always breaks ties in favor of the closer firm). Therefore, firm $i$ has no incentive to deviate. If $2t\ell_i\leq 0$, then the consumer is located closer to the other firm $j$ and is being charged a price equal to $2t\ell_j$. In this case, charging any strictly positive price leads to a payoff of $0$ (because the consumer will purchase the good from the other firm). Therefore, in either case, firm $i$ has no incentive to deviate.
Finally, we argue that the consumer has no incentive to deviate. By sending a fully revealing message, $\{t\}$, the consumer obtains an equilibrium payoff of $V-(|t|+1)$. If $t\leq 0$, the consumer obtains a price of $0$ from firm $R$, which is the lowest possible price. Therefore, there is no incentive to send any other message to firm $R$. Sending any other message $M\in M(t)$ to firm $L$ induces a weakly higher price because for any feasible message $M\in M(t)$, $\tau(L,M)\leq t$, and therefore, $2\tau(L,M)\ell_L\geq 2t\ell_L$. Thus, the consumer has no strictly profitable deviation from sending any other message $M\in M(t)$ to firm $L$. An analogous argument implies that if the consumer's type is $t>0$, she also does not gain from deviating to any other feasible disclosure strategy.
\end{proof}
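The pricing rule in the construction above is easy to prototype. The sketch below is illustrative only: the firm locations are not restated in this excerpt, so we assume $\ell_L=-0.5$ and $\ell_R=+0.5$ for concreteness, and represent a message by a finite set of types.

```python
# Sketch of the fully revealing pricing rule: tau(i, M) is the closest
# type in M to firm i, and the quoted price is max{2 * tau(i, M) * l_i, 0}.
# Firm locations l_L = -0.5 and l_R = +0.5 are illustrative assumptions,
# not taken from the paper; messages are finite sets for simplicity.

def tau(l_i, M):
    """Closest type in the message M to a firm located at l_i."""
    return min(M, key=lambda t: abs(t - l_i))

def price(l_i, M):
    """Price quoted by the firm at l_i upon receiving message M."""
    return max(2.0 * tau(l_i, M) * l_i, 0.0)
```

For type $t=0.3$ under full disclosure, the far firm quotes $0$ and the near firm quotes $0.3$; pooling with a type such as $-0.2$ raises the far firm's quote to $0.2$, consistent with the observation that $\tau(L,M)\leq t$ implies a weakly higher price.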
\begin{proof}[Proof of \Cref{Proposition-DuopolistSimpleExistence} on p. \pageref{Proposition-DuopolistSimpleExistence}] We first show given the pricing strategies that the consumer has no incentive to deviate.
Consider a consumer type $t$ such that $t\in (t_1^L,t_1^R)$, or in other words, $t\ell_i<t_1^i\ell_i$. The equilibrium strategies are that the consumer sends the message $\{t\}$ to each firm. Given these equilibrium strategies, the consumer is quoted a price of $\max\{2t\ell_i,0\}$ by firm $i$. If the consumer deviates and sends message $[-1,1]$ to firm $i$, she induces a price of $p_1^i=2t_1^i\ell_i$, which is strictly higher. Therefore, this deviation is not strictly profitable.
Now suppose that $t\ell_i\geq t_1^i\ell_i $. The equilibrium strategies are that the consumer sends message $[-1,1]$ to firm $i$ and $\{t\}$ to firm $j$. Because the consumer, in equilibrium, is quoted a price of $0$ by firm $j$, sending the other message cannot lower that price. Given the equilibrium message, the consumer is quoted a price of $2t_1^i\ell_i$ by firm $i$, and deviating leads to a weakly higher price of $2t\ell_i$. Therefore, this deviation is not strictly profitable.
We now consider whether firm $i$ has an incentive to deviate. It follows from the proof of \Cref{Proposition-DuopolistSimpleUnraveling} that the prices are optimal whenever firm $i$ receives an (equilibrium-path) message of $\{t\}$ for $t\in (t_1^L,t_1^R)$. An identical argument applies when firm $i$ receives an (off-path) message of $\{t\}$ for $t\ell_i\geq t_1^i\ell_i $: in this case, firm $i$ believes that firm $j$ is charging a price of $0$, and thus, the optimal price is $2t\ell_i$ (because the consumer always breaks ties in favor of firm $i$). When firm $i$ receives an (equilibrium-path) message of $\{t\}$ for $t\ell_j\geq t_1^j\ell_j $, firm $i$ believes that firm $j$ is charging a price of $2t_1^j$. The equilibrium prescribes that firm $i$ charges a price of $0$, which leads to a payoff of $0$ (because the consumer breaks ties in favor of firm $j$), and any strictly positive price also leads to a payoff of $0$. Finally, consider the case when firm $i$ receives an (equilibrium-path) message of $[-1,1]$. Firm $i$ infers that $t\ell_i\geq t_1^i\ell_i$ and believes that firm $j$ is charging a price of $0$. Because $p_1^i$ is, by definition, a profit-maximizing price in response to a price of $0$, firm $i$ has no strictly profitable deviation. \end{proof}
\begin{proof}[Proof of \Cref{Proposition-DuopolistRichExistence} on p. \pageref{Proposition-DuopolistRichExistence}] We use an off-path belief system where if firm $i$ receives an off-path message $M$, she holds degenerate beliefs $\delta_{\tau(i,M)}$ that put probability $1$ on type $\tau(i,M)$ where recall that $\tau(i,M)$ is defined as the type in $M$ that is located closest to firm $i$ (this was defined in the proof of \Cref{Proposition-DuopolistSimpleUnraveling}). Given such beliefs, the firm charges a price $p_i(M)=\max\{2\tau(i,M)\ell_i,0\}$ for an off-path message $M$.
First, we prove that given the pricing strategies, no consumer has an incentive to deviate. Consider a consumer type $t$ such that $t^i_s\ell_i<t\ell_i\leq t^i_{s-1}\ell_i$ for some $s=1,2,\ldots$. Such a consumer should be sending message $M^i_s$ to both firms. Such a message induces a price of $0$ from firm $j$ and $p_s^i=2t_s^i\ell_i$ from firm $i$. No message can induce a lower price from firm $j$. Therefore, any strictly profitable deviation must induce a strictly lower price from firm $i$. We show that this is not possible.
We first argue that the consumer does not have a profitable deviation to any other equilibrium-path message. Suppose that $t\ell_i< t^i_{s-1}\ell_i$. In this case, $M^i_s$ is the only equilibrium-path message that type $t$ can send to firm $i$. If $t\ell_i= t^i_{s-1}\ell_i$, then type $t$ can send either message $M^i_s$ or $M^i_{s-1}$ but because $p_s^i\leq p_{s-1}^i$, this is not a strictly profitable deviation.
We now argue that the consumer does not have a profitable deviation to any off-path message. Any feasible message $M\in \mathcal M(t)$ satisfies the property that the closest type in $M$ to firm $i$ is at least as close as $t$ to firm $i$; or formally: $t\ell_i\leq \tau(i,M)\ell_i$. In that case, the price that the consumer is charged is $2\tau(i,M)\ell_i\geq 2t\ell_i>2t^i_s\ell_i=p_s^i$. Therefore, this deviation is not strictly profitable.
Finally, we argue that the firms have no incentive to deviate in their pricing strategies. For any equilibrium-path message, the prices charged by firms are (by construction) equilibrium prices. For any off-path message $M$, each firm assumes that the consumer sent the equilibrium-path message to the other firm. If $\tau(i,M)\ell_i>0$ then firm $i$ assumes that firm $j$ is charging a price of $0$, and then charging a price of $2\tau(i,M)\ell_i$ is a best-response (assuming that the consumer breaks ties in favor of the closer firm). If $\tau(i,M)\ell_i\leq 0$, then firm $i$ believes that the consumer is being charged a price $p^j_s$ by firm $j$ for some $s$ where $t^i_s\ell_i<\tau(i,M)\ell_i\leq t^i_{s-1}\ell_i$. Because the consumer breaks ties in favor of the closer firm, firm $i$ anticipates that the consumer will reject any strictly positive price. \end{proof}
\begin{proof}[Proof of \Cref{Proposition-DuopolyWelfare} on p. \pageref{Proposition-DuopolyWelfare}]
We show that $p^*>p^i_1$ for every $i$. Observe that \begin{align*}
p^L_1=\frac{2F(-p^L_1/2)}{f(-p^L_1/2)}<\frac{2F(0)}{f(0)}=p^* \end{align*} where the first equality follows from the first-order condition that $p^L_1$ solves, the inequality follows from $F$ being strictly log-concave, and the second equality follows from the definition of $p^*$. A symmetric argument shows that $p^R_1<p^*$.
Now we prove that all consumers are better off in the equilibrium we construct in the game with simple evidence (\Cref{Proposition-DuopolistSimpleExistence}). All types where $t\ell_i\geq t_1^i\ell_i$ are buying the good at a lower price because $p^i_1<p^*$. Consider any other type, i.e., where $t\ell_i <t_1^i\ell_i$ for every $i\in \{L,R\}$. Suppose that $t\ell_i > 0$. That type in equilibrium buys the good from firm $i$ at the price $2t\ell_i$, which is strictly less than $2t_1^i\ell_i=p^i_1$. Therefore, she obtains the good at a price lower than $p^*$. Finally, if $t=0$, that type obtains the good at a price equal to $0$.
An analogous argument ranks prices relative to the equilibrium constructed in the game with rich evidence (\Cref{Proposition-DuopolistRichExistence}). All types in that equilibrium pay a price that is less than $p_1^i$, and therefore, buy the good at a price lower than $p^*$. \end{proof}
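The inequality $p^L_1<p^*$ can be checked on a concrete log-concave distribution. The sketch below (ours, for illustration) takes $F$ uniform on $[-1,1]$, for which $p^*=2F(0)/f(0)=2$, and solves the first-order condition $p=2F(-p/2)/f(-p/2)$ for $p^L_1$ by bisection:

```python
# Solve the first-order condition p = 2 F(-p/2) / f(-p/2) by bisection
# for F uniform on [-1, 1], and confirm p_1 < p* = 2 F(0) / f(0).

def bisect(g, lo, hi, tol=1e-12):
    """Root of g on [lo, hi], assuming g(lo) and g(hi) differ in sign."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

F = lambda x: (x + 1) / 2.0  # uniform CDF on [-1, 1]
f = lambda x: 0.5            # uniform density
p_star = 2 * F(0) / f(0)     # uniform-price benchmark: 2.0
p_1 = bisect(lambda p: p - 2 * F(-p / 2) / f(-p / 2), 0.0, p_star)
```

Here the bisection returns $p_1=1<2=p^*$, matching the log-concavity argument in the proof.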
\begin{proof}[Proof of \Cref{Proposition-OligopolyRichExistence} on p. \pageref{Proposition-OligopolyRichExistence}] We begin by proving that each $M^{ij}_s$ is convex for each $s$. If $t$ and $t'$ are each in $M^{ij}_s$, then $t_i \geq t_j \geq \max_{k\in N\backslash\{i,j\}}t_k$, and $t_i' \geq t_j' \geq \max_{k\in N\backslash\{i,j\}}t_k'$. Therefore, for every $\rho\in (0,1)$, the following are true: \begin{align*}
\rho t_i+(1-\rho)t_i'\geq \rho t_j+(1-\rho)t_j'&\geq \rho \max_{k\in N\backslash\{i,j\}}t_k+(1-\rho)\max_{k\in N\backslash\{i,j\}}t_k'\geq \max_{k\in N\backslash\{i,j\}} (\rho t_k+(1-\rho) t_k')\\
\rho t_i+(1-\rho)t_i'-\left(\rho t_j+(1-\rho)t_j'\right)&= \rho(t_i-t_j)+(1-\rho)(t_i'-t_j') \leq \rho p^{ij}_{s-1}+(1-\rho)p^{ij}_{s-1}=p^{ij}_{s-1}. \end{align*} Therefore, $\rho t+(1-\rho)t'$ is also an element of $M^{ij}_{s}$, and so $M^{ij}_s$ is convex for every $s$. Clearly, $M^{ij}_\infty$ is also convex by the same argument.
The remainder of this argument follows the same logic as that of \Cref{Proposition-DuopolistRichExistence}. If firm $i$ receives an off-path message $M$, she holds degenerate beliefs $\delta_{\tau(i,M)}$ where type $\tau(i,M)$ is the type in $M$ that has the highest value of $t_i-\max_{j\neq i} t_j$. Because $M$ is a closed set, $\tau(i,M)$ is defined for every $M$. Given such beliefs, the firm charges a price $p_i(M)=\max\{t_i-\max_{j\neq i} t_j,0\}$.
First, we prove that given the pricing strategies, no consumer type has an incentive to deviate. Consider a consumer type such that $t\in M^{ij}_s$ and $t_i-t_j>p^{ij}_s$. A message of $M^{ij}_s$ to each firm induces all firms other than firm $i$ to set a price of $0$ and induces firm $i$ to set a price of $p^{ij}_s$. No other message can lead to lower prices from any other firm. Moreover, the consumer cannot send any other equilibrium path message to firm $i$ that leads to lower prices. Finally, every off-path message sent to firm $i$ can only increase the price since for any off-path message $M\in \mathcal M(t)$, $p_i(M)\geq p^{ij}_s$.
Second, we argue that firms have no incentives to deviate in their pricing strategies. For any on-path message, the prices charged are (by construction) optimal. For any off-path message $M$, each firm assumes that the consumer sent the equilibrium path message to other firms. If firm $i=\alpha(\tau(i,M))$ (i.e., firm $i$ is the favorite), then it assumes that other firms are charging a price of $0$, in which case $p_i(M)$ is optimal. If firm $i$ is not the favorite, then it charges a price of $0$, and anticipates any strictly positive price to be rejected. \end{proof}
\begin{proof}[Proof of \Cref{Proposition-OligopolyWelfare} on p. \pageref{Proposition-OligopolyWelfare}] Because $f$ is log-concave, it follows directly from \cite{caplin1991aggregation} that for each firm $i$, $Q^i(p)$ is log-concave. Since firms are symmetric, we drop the subscripts. Let $p^*$ denote the price from the symmetric equilibrium of the game without personalized pricing. Observe that for each firm $i$, \begin{align*}
p^i_1 = -\frac{Q(p^i_1)}{q(p^i_1)}<-\frac{Q(0)}{q(0)}=p^*, \end{align*} where the first equality follows from the first-order condition that $p^i_1$ solves, the inequality follows from $Q$ being strictly log-concave, and the second equality follows from the first-order condition that $p^*$ solves. Now all consumers are necessarily better off in the equilibrium constructed with simple evidence. The argument is the same as in \Cref{Proposition-DuopolyWelfare}: all consumer types either pay a price of $p^i_1$ or below. In the equilibrium constructed with rich evidence, it follows from symmetry that $p^{ij}_1 = p^{i}_1$, and therefore, once again, all consumer types either pay a price of $p^i_1$ or below. \end{proof}
\end{document}
\begin{document}
\title{Computing the maximum violation of a Bell inequality is NP-complete}
\author{J. Batle$^{1}$ and C. H. Raymond Ooi$^{2}$} \email{E-mail address: jbv276@uib.es} \affiliation{ $^1$Departament de F\'{\i}sica, Universitat de les Illes Balears, 07122 Palma de Mallorca, Balearic Islands, Europe \\ $^2$Department of Physics, University of Malaya, 50603 Kuala Lumpur, Malaysia \\\\}
\date{\today}
\begin{abstract}
The number of steps required in order to maximize a Bell inequality for an arbitrary number of qubits is shown to grow exponentially with the number of parties involved. The proof that the optimization of such a correlation measure is an NP-problem is based on an operational perspective involving a Turing machine, which follows a general algorithm. The implications for the computability of the so-called {\it nonlocality} for any number of qubits are similar to recent results involving entanglement or similar quantum correlation-based measures.
\end{abstract}
\pacs{03.65.Ud; 03.67.-a; 03.67.Mn; 03.65.-w; 89.20.Ff}
\maketitle
\section{Introduction}
Quantum correlations lie at the heart of quantum information theory. They are responsible for some tasks that possess no classical counterpart. Since quantum correlations are an essential feature for quantum computation and secure quantum communication, one has to be able to develop procedures (physical or purely mathematical in origin) to ascertain whether the state $\rho$ representing the physical system under consideration is appropriate for developing a given non-classical task. Among those correlations, entanglement is perhaps one of the most fundamental and non-classical features exhibited by quantum systems \cite{LPS98}, lying at the basis of some of the most important processes studied by quantum information theory \cite{Galindo,NC00,LPS98,WC97,W98}, such as quantum cryptographic key distribution \cite{E91}, quantum teleportation \cite{BBCJPW93}, superdense coding \cite{BW93}, and quantum computation \cite{EJ96,BDMT98}.
Other measures have been introduced in the literature that grasp features that are not captured by entanglement. They are not directly related to entanglement, but in some cases --especially when dealing with systems of more than two qubits-- they provide a satisfactory approximate answer, like the maximum violation of a Bell inequality, that is, nonlocality. Local Variable Models (LVM) cannot exhibit arbitrary correlations. Mathematically, the conditions these correlations must obey can always be written as inequalities --the Bell inequalities-- satisfied by the joint probabilities of outcomes. We say that a quantum state $\rho$ is nonlocal if and only if there are measurements on $\rho$ that produce a correlation that violates a Bell inequality.
Later work by Zurek and Ollivier \cite{olli} established that not even entanglement captures all aspects of quantum correlations. These authors introduced an information-theoretical measure, quantum discord, that corresponds to a new facet of the ``quantumness'' that arises even for non-entangled states. Indeed, it turned out that the vast majority of quantum states exhibit a finite amount of quantum discord. Besides its intrinsic conceptual interest, the study of quantum discord may also have technological implications: examples of improved quantum computing tasks that take advantage of quantum correlations but do not rely on entanglement have been reported [see, among a quite extensive list of references, \cite{geom,olli,ferraro,dattaprl,luo}]. Actually, in some cases entangled states are useful to solve a problem if and only if they violate a Bell inequality \cite{comcomplex}. Moreover, there are important instances of non-classical information tasks that are based directly upon non-locality, with no explicit reference to the quantum mechanical formalism or to the associated concept of entanglement \cite{device}. A recent work studying how entanglement can be estimated from a Bell inequality violation also sheds new light on the use of Bell inequalities \cite{EntanglementFromBell}.
In any case, the study of entanglement in multipartite quantum systems has been limited to a few cases. As a consequence, other measures have been introduced in order to describe the ``quantumness'' of a certain (usually mixed) state $\rho$. In recent years the use of the maximum violation of a Bell inequality has served the purpose of describing how {\it nonlocal} the state of the system is (see {\cite{Mauro}} and references therein). Although there is little connection between entanglement and nonlocality (the former is based on how the tensor structure of the concomitant Hilbert space is split, whereas the latter ascertains how well a LVM can mimic quantum mechanics), nonlocality is a good candidate for describing correlations in quantum systems.
To be more precise, {\it the maximum violation of a Bell inequality for $N$ parties} $B_{N}^{\max}$ is the quantity chosen to approach entanglement in those scenarios \cite{jo1,jo2}. Thus, to know whether the computation of the maximum value of a Bell inequality is NP-hard seems a relevant and reasonable question. Previous approaches in the literature have dealt with the simplest possible instance of two parties \cite{Pitowsky}, Alice and Bob, each possessing two dichotomic observables. In the case of the Clauser-Horne-Shimony-Holt (CHSH) Bell inequality \cite{CHSH}, which is the strongest possible inequality for two parties (two qubits), it was proved that its maximum violation requires a computational effort which grows exponentially with the number of steps required, that is, it is an NP-problem \cite{Pitowsky}.
In addition, a quantum measure such as discord has been recently proved to be NP-complete for the case of two qubits \cite{Berkeley}. The fact that the computation of some entanglement and correlated quantities is NP-hard is usually a consequence of the optimization involved in the definitions. The traditional tools required for optimizing Bell inequalities are borrowed from linear programming: inequalities are translated into convex polytopes, usually in high dimensions, and the proof for the general case of $N$ parties involves a gigantic task which has not been successfully solved to date.
The purpose of the present work is to provide an operational approach to the process of carrying out the maximization of a Bell inequality, based on a Turing machine, which will prove to be an NP-problem. A key ingredient will be the fact that Bell inequalities (not those for probabilities) possess a recursive expression when they are generalized to $N$ qubits. In Section II we review previous results for the CHSH inequality for two qubits. In Section III we introduce the structure of the Bell inequalities employed, as well as the algorithm that the Turing machine will perform in this scenario. Finally, some conclusions are drawn in Section IV.
\section{Previous results}
The first approach to a Bell inequality (the CHSH in this case) for two qubits was carried out by Pitowsky \cite{Pitowsky}, who performed several extremely interesting investigations concerning the foundations of quantum mechanics. He also brought together high-level characterizations, in geometrical language, of allowed classical and quantum correlation patterns.
By using ``classical correlation polytopes'', he provides significant insight into the familiar CHSH Bell-type inequalities. Pitowsky provides an algorithm for finding the set of ``generalized Bell inequalities'' corresponding to any particular choice of the vectors defining the hyperplanes of a convex polytope. Afterwards he proves it to be an inefficient one, that is, NP-complete. Thus, already for $N=2$ qubits, the procedure of obtaining the maximum violation of the CHSH Bell inequality is inefficient.
\begin{figure}
\caption{(Color online) Plot of the typical performance of a simulated annealing maximizing the CHSH Bell inequality for a random state $\rho$ of two qubits vs the number of Monte Carlo steps (MC). Every MC step contains 1000 random runs. Fig. (a) depicts the evolution of the maximum of CHSH for the state
$\frac{1}{3}|00 \rangle \langle 00|+\frac{1}{3}|01 \rangle \langle 01|+\frac{1}{3}|11 \rangle \langle 11|$, whose maximum is known analytically ($2/3$). Fig. (b) depicts the same quantity for a random state $\rho$. See text for details.}
\label{fig_line}
\end{figure}
\section{The Turing machine and the generalized optimization of the Bell inequality}
\subsection{Bell inequalities}
Most of our knowledge on Bell inequalities and their quantum mechanical violation is based on the CHSH inequality \cite{CHSH}. With two dichotomic observables per party, it is the simplest \cite{Collins} (up to local symmetries) nontrivial Bell inequality for the bipartite case with binary inputs and outcomes. Let $A_1$ and $A_2$ be two possible measurements on the A side whose outcomes are $a_j\in \lbrace -1,+1\rbrace$, and similarly for the B side. Mathematically, it can be shown that, following LVM,
$|{\cal B}_{CHSH}^{LVM}(\lambda)|=|a_1b_1+a_1b_2+a_2b_1-a_2b_2|\leq 2$. Since $a_1$($b_1$) and $a_2$($b_2$) cannot be measured simultaneously, instead one estimates after randomly chosen measurements the average value ${\cal B}_{CHSH}^{LVM} \equiv \sum_{\lambda} {\cal B}_{CHSH}^{LVM}(\lambda) \mu(\lambda)= E(A_1,B_1)+E(A_1,B_2)+E(A_2,B_1)-E(A_2,B_2)$, where $E(\cdot)$ represents the expectation value. Therefore the CHSH inequality reduces to
\begin{equation} \label{CHSH_LVM}
|{\cal B}_{CHSH}^{LVM}| \leq 2. \end{equation}
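This classical bound can be checked by brute force: since each of $a_1,a_2,b_1,b_2$ takes values in $\{-1,+1\}$, the following sketch (illustrative only, not part of the original analysis) enumerates all $2^4$ deterministic assignments.

```python
from itertools import product

def chsh_value(a1, a2, b1, b2):
    """CHSH combination for one deterministic assignment of outcomes."""
    return a1 * b1 + a1 * b2 + a2 * b1 - a2 * b2

# Enumerate every deterministic local strategy (outcomes in {-1, +1}).
values = [chsh_value(*outcomes) for outcomes in product([-1, 1], repeat=4)]
classical_bound = max(abs(v) for v in values)
```

In fact every deterministic assignment yields $\pm 2$, since ${\cal B}=a_1(b_1+b_2)+a_2(b_1-b_2)$ and exactly one of the two brackets vanishes.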
Quantum mechanically, since we are dealing with qubits, these observables reduce to ${\bf A_j}({\bf B_j})={\bf a_j}({\bf b_j}) \cdot {\bf \sigma}$, where ${\bf a_j}({\bf b_j})$ are unit vectors in $\mathbb{R}^3$ and ${\bf \sigma}=(\sigma_x,\sigma_y,\sigma_z)$ are the usual Pauli matrices. Therefore the quantal prediction for (\ref{CHSH_LVM}) reduces to the expectation value of the operator ${\cal B}_{CHSH}$
\begin{equation} \label{CHSH_QM} {\bf A_1}\otimes {\bf B_1} + {\bf A_1}\otimes {\bf B_2} + {\bf A_2}\otimes {\bf B_1} - {\bf A_2}\otimes {\bf B_2}. \end{equation}
\noindent Tsirelson showed \cite{Tsirelson} that the CHSH inequality (\ref{CHSH_LVM}) is maximally violated by a multiplicative factor $\sqrt{2}$ (Tsirelson's bound) on the basis of quantum mechanics. In fact, it is true that $|Tr(\rho_{AB}{\cal B}_{CHSH})|\leq 2\sqrt{2}$ for all observables ${\bf A_1}$, ${\bf A_2}$, ${\bf B_1}$, ${\bf B_2}$, and all states $\rho_{AB}$. Increasing the size of the Hilbert spaces on either the A or B side would not give any advantage in the violation of the CHSH inequality. In general, it is not known how to calculate the best such bound for an arbitrary Bell inequality, although several techniques have been developed \cite{Toner}.
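Tsirelson's bound can be verified numerically for the standard optimal settings $A_1=\sigma_z$, $A_2=\sigma_x$, $B_{1,2}=(\sigma_z\pm\sigma_x)/\sqrt{2}$ and the Bell state $(|00\rangle+|11\rangle)/\sqrt{2}$. The sketch below uses only the standard library and is an illustration, not the paper's code:

```python
import math

def kron(A, B):
    """Kronecker product of two matrices given as nested lists."""
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def add(A, B): return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]
def scale(c, A): return [[c * x for x in row] for row in A]

sx = [[0, 1], [1, 0]]
sz = [[1, 0], [0, -1]]

r = 1 / math.sqrt(2)
B1 = scale(r, add(sz, sx))                 # (sz + sx)/sqrt(2)
B2 = scale(r, add(sz, scale(-1, sx)))      # (sz - sx)/sqrt(2)

# B_CHSH = A1(x)B1 + A1(x)B2 + A2(x)B1 - A2(x)B2 with A1 = sz, A2 = sx.
B_chsh = add(add(kron(sz, B1), kron(sz, B2)),
             add(kron(sx, B1), scale(-1, kron(sx, B2))))

phi = [r, 0.0, 0.0, r]                     # |phi+> = (|00> + |11>)/sqrt(2)
# <phi| B |phi> for a real symmetric operator and a real state vector:
value = sum(phi[i] * B_chsh[i][j] * phi[j] for i in range(4) for j in range(4))
```

The expectation value comes out as $2\sqrt{2}$, saturating Tsirelson's bound.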
A good witness of useful correlations is, in many cases, the violation of a Bell inequality by a quantum state. Although it is known that the violation of an $N$-particle Bell-like inequality of some sort by an $N$-particle entangled state is not enough, {\it per se}, to prove genuine multipartite non-locality, it is the only approximation left in practice.
The first Bell inequality for $N=3$ qubits was provided by Mermin \cite{Mermin}. The Mermin inequality reads as $Tr(\rho {\cal B}_{Mermin}) \leq 2$, where ${\cal B}_{Mermin}$ is the Mermin operator
\begin{equation} \label{Mermin}
{\cal B}_{Mermin}=B_{a_{1}a_{2}a_{3}} - B_{a_{1}b_{2}b_{3}} - B_{b_{1}a_{2}b_{3}} - B_{b_{1}b_{2}a_{3}},
\end{equation}
\noindent with $B_{uvw} \equiv {\bf u} \cdot {\bf \sigma} \otimes {\bf v} \cdot {\bf \sigma} \otimes {\bf w} \cdot {\bf \sigma}$ with ${\bf \sigma}=(\sigma_x,\sigma_y,\sigma_z)$ being the usual Pauli matrices, and ${\bf a_j}$ and ${\bf b_j}$ unit vectors in $\mathbb{R}^3$. Notice that the Mermin inequality is maximally violated by Greenberger-Horne-Zeilinger (GHZ) states.
In the case of $N=4$ qubits, the first Bell inequality was derived by Mermin, Ardehali, Belinskii and Klyshko (MABK) \cite{MABK}. The MABK inequality reads as $Tr(\rho {\cal B}_{MABK}) \leq 4$, where ${\cal B}_{MABK}$ is the MABK operator
\begin{equation} \label{MABK} \begin{split} B_{1111}&-B_{1112} -B_{1121}-B_{1211}-B_{2111}-B_{1122}-B_{1212}\\ &-B_{2112}-B_{1221}-B_{2121}-B_{2211}+B_{2222}+B_{2221}\\ &+B_{2212}+B_{2122}+B_{1222}, \end{split} \end{equation}
\noindent with $B_{uvwx} \equiv {\bf u} \cdot {\bf \sigma} \otimes {\bf v} \cdot {\bf \sigma} \otimes {\bf w} \cdot {\bf \sigma}\otimes {\bf x} \cdot {\bf \sigma}$ with ${\bf \sigma}=(\sigma_x,\sigma_y,\sigma_z)$ being the usual Pauli matrices. We shall define
\begin{equation} \label{MABKMax}
B_{MABK}^{\max} \equiv \max_{\bf{a_j},\bf{b_j}}\,\,Tr (\rho {\cal B}_{MABK}) \end{equation}
\noindent as a measure for the nonlocality content for a given state $\rho$ of four qubits. ${\bf a_j}$ and ${\bf b_j}$ are unit vectors in $\mathbb{R}^3$. MABK inequalities are such that they constitute extensions of the CHSH inequalities with the requirement that generalized GHZ states maximally violate them.
The optimization \cite{jo1,jo2} is taken over the two observers' settings $\{{\bf a_j},{\bf b_j}\}$, which are real unit vectors in $\mathbb{R}^3$. We choose them to be of the form $(\sin\theta_k \cos\phi_k,\sin\theta_k \sin\phi_k,\cos\theta_k)$. With this parameterization, the problem consists in finding the supremum of $Tr(\rho{\cal B}_{MABK})$ over the $4N$ angles $\theta_k,\phi_k$, $k=1,\dotsc,2N$ (with $N=4$ here).
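The parameterization above can be sketched as a small helper (illustrative only; the function name is ours):

```python
import math

def unit_vector(theta, phi):
    """Measurement direction on the unit sphere from the two angles
    (sin t cos p, sin t sin p, cos t), as used in the optimization."""
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

v = unit_vector(math.pi / 3, math.pi / 4)
norm = math.sqrt(sum(c * c for c in v))    # always 1 by construction
```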
In the case of multiqubit systems, one must instead use a generalization of the CHSH inequality to $N$ qubits. MABK inequalities are of such nature that they constitute extensions of older inequalities. To concoct an extension to the multipartite case, we shall introduce a recursive relation \cite{GisinAntic} that will allow for more parties. This is easily done by considering the operator
\begin{equation} \label{Biterative}
B_{N+1} \propto [(B_1+B_1^{\prime}) \otimes B_N + (B_1-B_1^{\prime}) \otimes B_N^{\prime}] , \end{equation}
\noindent with $B_N$ being the Bell operator for N parties and $B_1={\bf v} \cdot {\bf \sigma}$,
with ${\bf \sigma}=(\sigma_x,\sigma_y,\sigma_z)$ and ${\bf v}$ a real unit vector. The prime on the operator
denotes the same expression but with all vectors exchanged. The concomitant maximum value
\begin{equation} \label{MABK_Nmax} B_N^{\max} \equiv \max_{ \bf{a_j},\bf{b_j} }\,\,Tr (\rho {B_{N}}) \end{equation}
\noindent will serve as a measure for the non-locality content of a given state $\rho$ of $N$ qubits if ${\bf a_j}$ and ${\bf b_j}$ are unit vectors in $\mathbb{R}^3$. The non-locality measure (\ref{MABK_Nmax}) is maximized by generalized GHZ states, $2^{\frac{N+1}{2}}$ being the corresponding maximum value.
However, there exist other measures \cite{MABKnew} such as the Svetlichny inequalities \cite{Svetlichny} which serve the same purpose, having a similar structure extended to the $N$-partite scenario \cite{mauro1,mauro2}. They have been used in the literature as entanglement-like indicators \cite{Mauro}.
\subsection{The Turing machine}
A Turing machine \cite{Turing} has an infinite one-dimensional tape divided into cells. Traditionally we think of the tape as being horizontal with the cells arranged in a left-right orientation. The machine has a read-write head which scans a single cell on the tape. This read-write head can move left and right along the tape to scan successive cells. A table of transition rules serves as the ``program'' for the machine.
In modern terms, the tape serves as the memory of the machine, while the read-write head is the memory bus through which data is accessed (and updated) by the machine. One very important aspect is that we shall rely on the Turing-computability of the cost function that maximizes the Bell inequality given a state $\rho_N$. As is known, there exists an entire class of these problems which is termed ``NP-complete'' (non-deterministic polynomial time complete) because the computational effort needed to find an exact solution increases exponentially as the total number of degrees of freedom of the problem rises.
\begin{figure}
\caption{(Color online) Fig. (a) describes how a Turing machine computes the maximum violation of a Bell inequality for $N=4$ qubits within the framework of simulated annealing optimization. The number of terms to be evaluated by the machine grows exponentially as $4^{N-2}$. Fig. (b) emphasizes the fact that as the temperature drops, the available set of $4N=16$ variables reduces until some precision on the cost function is reached. See text for details.}
\label{fig2}
\end{figure}
As a consequence, approximated or heuristic methods are required in practice for further analysis. The most successful statistical method to date is the stochastic model of simulated annealing introduced by Kirkpatrick, Gelatt, and Vecchi \cite{kirkpatrick83}, that is, the Metropolis Monte Carlo algorithm with a fixed temperature $T$ at each state of the annealing schedule. There exist other methods which are not of statistical nature, such as downhill/amoeba or gradient methods \cite{Avriel}, which involve finite differences when considering the corresponding function --a Bell inequality in our case-- in terms of all real variables involved.
\subsection{Results}
In either case --statistical or gradient-type method-- we can program the Turing machine in the same way because, after all, it will undergo a Hamiltonian cycle, changing the values of several parameters of the total function to be optimized at every step of the procedure. Therefore, we choose a simulated annealing approach for the program.
Regarding Bell inequalities, one can choose the MABK or the Svetlichny inequalities to maximize for a given state $\rho$. In either case, owing to (\ref{Biterative}), the number of individual terms grows exponentially as $4^{N-2}$, $N$ being the number of qubits. Let us then take the MABK inequalities.
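As a sanity check on this growth, one can expand the recursion (\ref{Biterative}) symbolically and merge like terms. The sketch below is ours (with `a`/`b` standing for the unprimed/primed settings of a party); the merged term counts for $N=3$ and $N=4$ come out as $4$ and $16$, matching the Mermin and MABK operators quoted earlier:

```python
from collections import defaultdict

def next_level(terms):
    """One step of B_{N+1} ~ (B1+B1')(x)B_N + (B1-B1')(x)B_N', merging like
    terms.  A term is a tuple over {'a','b'}; B_N' swaps a <-> b everywhere."""
    swap = {'a': 'b', 'b': 'a'}
    primed = {tuple(swap[s] for s in t): c for t, c in terms.items()}
    out = defaultdict(int)
    for t, c in terms.items():      # (B1 + B1') (x) B_N
        out[('a',) + t] += c
        out[('b',) + t] += c
    for t, c in primed.items():     # (B1 - B1') (x) B_N'
        out[('a',) + t] += c
        out[('b',) + t] -= c
    return {t: c for t, c in out.items() if c != 0}

# Start from CHSH: aa + ba + ab - bb (N = 2).
terms = {('a', 'a'): 1, ('b', 'a'): 1, ('a', 'b'): 1, ('b', 'b'): -1}
counts = {2: len(terms)}
for n in range(3, 6):
    terms = next_level(terms)
    counts[n] = len(terms)
```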
In order to illustrate how the Turing machine works, let us consider the $N=4$ case. The total number of independent variables is $4N$, but what makes the computation hard is the number of constraints that we have. The situation is depicted in Fig. 2 (a). The Turing machine reaches the first term in the tensor expansion of the MABK inequality. It is free to move in space the unit vector of each party randomly, keeping in memory that some vectors will have the same position in the next move for some parties. The temperature is high at $T_0$, which implies in Fig. 2 (b) that the domain of possible values for the variables $\Omega$ is broadly spread. Keep in mind, however, that every angle is reduced as follows: $\{\theta_i \rightarrow \theta_i \mod \pi,\psi_i \rightarrow \psi_i \mod 2\pi\}$. The machine then moves to the next term and performs similar operations accordingly. Finishing one cycle means visiting, one after the other, all $4^{4-2}=16$ sites. After that, the machine has to compute Tr($\hat{B_N} \hat{\rho_N}$), which is the cost function. Then, it starts the cycle anew with a different temperature $T_1$ (we can choose the temperature to decrease like $T(s)=T_0 e^{-\lambda s}$, with $s$ being the number of runs). As the temperature drops, the domain $\Omega_s$ shrinks as depicted in Fig. 2 (b). Thus, at every cycle the range of possible values for the $4N=16$ variables continuously decreases until we reach a desired precision, that is, the algorithm terminates when some stopping criterion is met.
The basic algorithm is shown below.\\
\begin{footnotesize} \begin{eqnarray} 1 &&\,\, \rho_N \,initial\, multiqubit\, (N) \,state\, given\cr 2 &&\,\, T \leftarrow T_0\cr 3 &&\,\, {\bf repeat \,until} \,stopping\, criterion \,is \,met\cr 4 &&\,\, ~~~~~~~{\bf repeat\,} 4^{N-2} \,times \cr 5 &&\,\, ~~~~~~~~~~~~~~orientate \,N\, unit \,vectors \cr 6 &&\,\, ~~~~~~~~~~~~~~move \,to \,the \,next \,term \,within \,the \,set \cr 7 &&\,\, ~~~~~~~{\bf endrepeat} \cr 8 &&\,\, ~~~~~~~super\, operator \,B_N \,\leftarrow \,add \,all \,terms \cr 9 &&\,\, ~~~~~~~calculate \,Tr(B_N \rho_N) \cr 10&&\,\, ~~~~~~~T \,\leftarrow \,T_0 e^{-\lambda s} \cr 11&&\,\, {\bf endrepeat} \cr 12&&\,\, return \,B_N^{\max} \equiv Tr(B_N \rho_N)\cr \end{eqnarray} \end{footnotesize}
If we do not want to specify a method in solving the optimization, we can rewrite the algorithm as:
\begin{footnotesize} \begin{eqnarray} 1 &&\,\, \rho_N \,initial\, multiqubit\, (N) \,state\, given\cr 2 &&\,\, {\bf repeat \,until} \,stopping\, criterion \,is \,met\cr 3 &&\,\, ~~~~~~~{\bf repeat\,} 4^{N-2} \,times \cr 4 &&\,\, ~~~~~~~~~~~~~~perform \, operations\, on\, N\, parties'\, observables \cr 5 &&\,\, ~~~~~~~~~~~~~~register \, in\, memory \cr 6 &&\,\, ~~~~~~~~~~~~~~move \,to \,the \,next \,term \,within \,the \,set \cr 7 &&\,\, ~~~~~~~{\bf endrepeat} \cr 8 &&\,\, ~~~~~~~super\, operator \,B_N \,\leftarrow \,add \,all \,terms \cr 9 &&\,\, ~~~~~~~calculate \,Tr(B_N \rho_N) \cr 10&&\,\, {\bf endrepeat} \cr 11&&\,\, return \,B_N^{\max} \equiv Tr(B_N \rho_N)\cr \end{eqnarray} \end{footnotesize}\newline
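Either variant can be written out as ordinary code. The following is a minimal, hypothetical simulated-annealing maximizer over a vector of angles; the cost function is a stand-in for $Tr(B_N \rho_N)$, and all names and schedule constants are our own choices, not those used in the paper:

```python
import math
import random

def anneal(cost, dim, T0=1.0, lam=0.005, n_cycles=2000, seed=1):
    """Generic simulated-annealing maximizer over `dim` angles in [0, 2*pi),
    with the exponential schedule T(s) = T0 * exp(-lam * s)."""
    rng = random.Random(seed)
    angles = [rng.uniform(0, 2 * math.pi) for _ in range(dim)]
    best = current = cost(angles)
    for s in range(n_cycles):
        T = T0 * math.exp(-lam * s)
        trial = [(a + rng.gauss(0, T)) % (2 * math.pi) for a in angles]
        c = cost(trial)
        # Accept uphill moves always, downhill moves with Boltzmann probability.
        if c > current or rng.random() < math.exp((c - current) / T):
            angles, current = trial, c
            best = max(best, current)
    return best

# Toy cost with a known maximum of 2, standing in for a Bell expectation value.
best = anneal(lambda a: math.cos(a[0]) + math.cos(a[1]), dim=2)
```

The shrinking step size plays the role of the shrinking domain $\Omega_s$ of Fig. 2 (b).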
\noindent Every cycle contains at least $4^{N-2}$ visits, and the best computation of line 9 in the previous algorithm for the Turing machine is of $O(d^3)$, where $d$ is the dimension of the square matrices ($d=2^N$ in our case) being multiplied \cite{matrixproductMilan}. Therefore, we require an undefined number of cycles (at least two) $\times\, 4^{N-2} \times (2^N)^3$ steps to obtain $B_N^{\max}$. In other words, we require {\it at least} $O(2^{5N})$ steps to solve the problem which, in view of the aforementioned result, clearly becomes NP with an increasing number of parties. This is precisely the desired outcome: the computation of the maximum value of a Bell inequality requires a computational effort which grows exponentially with the number of parties $N$ involved.
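The per-cycle step count can be made explicit: $4^{N-2}\times(2^N)^3 = 2^{2N-4}\cdot 2^{3N} = 2^{5N-4}$, i.e. $O(2^{5N})$. A trivial sketch (function name is ours):

```python
def step_count(n_qubits):
    """Lower bound on steps per cycle: 4^(N-2) term visits times O((2^N)^3)
    for the trace of a product of 2^N x 2^N matrices."""
    return 4 ** (n_qubits - 2) * (2 ** n_qubits) ** 3

# 4^(N-2) * 2^(3N) = 2^(5N - 4) for every N >= 2.
checks = [step_count(n) == 2 ** (5 * n - 4) for n in range(2, 10)]
```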
\section{Conclusions}
Based on the iterative structure of the extension of Bell inequalities to the multiqubit case, we have shown that the maximization of the usual Bell inequalities employed in the literature (except the ones for probabilities, as in \cite{Gisin3}), an operation performed by a Turing machine, constitutes an NP-problem. This result somehow expresses the fact that, regarding nonlocality as a good resource for quantifying quantum correlations other than entanglement, the concomitant optimization becomes intractable for a large number of qubits. Furthermore, even checking the plain violation of an inequality for a state $\rho_N$, which implies computing Tr($B_N \rho_N$), is of $O(2^{3N})$; that is, it is limited in practice to a small number of qubits.
\end{document}
\begin{document}
\frontmatter
\pagestyle{headings} \addtocmark{}
\mainmatter
\title{Soft Confusion Matrix Classifier for Stream Classification}
\titlerunning{SCM for Stream Classification}
\author{Pawel Trajdos\orcidID{0000-0002-4337-6847} \and Marek Kurzynski\orcidID{0000-0002-0401-2725}}
\authorrunning{P. Trajdos \and M. Kurzynski}
\tocauthor{Pawel Trajdos, Marek Kurzynski}
\institute{Wroclaw University of Science and Technology, Wroclaw, Poland,\\ \email{pawel.trajdos@pwr.edu.pl}, \email{marek.kurzynski@pwr.edu.pl}}
\maketitle
\begin{abstract} In this paper, the issue of tailoring the soft confusion matrix (SCM) based classifier to deal with stream learning task is addressed. The main goal of the work is to develop a wrapping-classifier that allows incremental learning to classifiers that are unable to learn incrementally. The goal is achieved by making two improvements in the previously developed SCM classifier. The first one is aimed at reducing the computational cost of the SCM classifier. To do so, the definition of the fuzzy neighbourhood of an object is changed. The second one is aimed at effective dealing with the concept drift. This is done by employing the ADWIN-driven concept drift detector that is not only used to detect the drift but also to control the size of the neighbourhood. The obtained experimental results show that the proposed approach significantly outperforms the reference methods.
\keywords{classification, probabilistic model, randomized reference classifier, soft confusion matrix, stream classification} \end{abstract}
\section{Introduction} Classification of streaming data is one of the most difficult problems in modern pattern recognition theory and practice. This is due to the fact that a typical data stream is characterized by several features that significantly impede making the correct classification decision. These features include: continuous flow, huge data volume, rapid arrival rate, and susceptibility to change \cite{Krawczyk2017}. If a streaming data classifier aspires to practical applications, it must face these requirements and have to satisfy numerous constraints (e.g. bounded memory, single-pass, real-time response, change of data concept) to an acceptable extent. It is not easy, that is why the methodology of recognizing stream data has been developing very intensively for over two decades, proposing new, more and more perfect classification methods~\cite{Gama,Mehta}.
Incremental learning is a vital capability for classifiers used in stream data classification~\cite{Read2012batch}. It allows the classifier to utilize new objects generated by the stream to improve the model built so far. It also allows, to some extent, dealing with the concept drift. Some of the well-known classifiers are naturally capable of being trained iteratively. Examples of such classifiers are neural networks, nearest neighbours classifiers, or probabilistic methods such as the naive Bayes classifier~\cite{giraud2000note}. Some of the classifiers were tailored to be learned incrementally. An example of such a method is the well-known Hoeffding Tree classifier~\cite{Pfahringer2007}. Those types of classifiers can be easily used in stream classification systems. On the other hand, when a classifier is unable to learn in an incremental way, the options for using it for stream classification are very limited~\cite{Read2012batch}. The only option is to keep a set of objects and rebuild the classifier from scratch whenever it is necessary~\cite{giraud2000note}.
To bridge this gap, we propose a wrapping-classifier based on the soft confusion matrix (SCM) approach. The wrapping-classifier may be used to add incremental learning functionality to any batch classifier. The classifier based on the idea of the soft confusion matrix has been proposed in~\cite{AMCS}. It proved to be an efficient tool for solving such practical problems as hand gesture recognition~\cite{CBM}. An additional advantage in solving the above-mentioned classification problem is the ability to use imprecise feedback information about a class assignment. The SCM-based algorithm was also successfully used in multilabel learning~\cite{IJNS}.
Dealing with the concept drift using incremental learning only is insufficient. This is because incremental classifiers deal effectively only with the incremental drift~\cite{Gama}. To handle the sudden concept drift, additional mechanisms such as the single/multiple window approach~\cite{kuncheva2009window}, forgetting mechanisms~\cite{vzliobaite2011combining}, or drift detectors~\cite{Bifet2007} must be used. In this study, we decided to use the ADWIN algorithm~\cite{Bifet2007} to detect the drift and to manage the set of stored objects. We use the ADWIN-based detector because this approach was shown to be effective~\cite{Gonalves2014,Barros2019}.
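For illustration, the core idea behind ADWIN (drop the older part of the window whenever two sub-windows have significantly different means) can be sketched as follows. This is a simplified, hypothetical version without ADWIN's bucket compression, not the detector implementation used in the experiments:

```python
import math

def adwin_like_drift(window, delta=0.05):
    """Simplified ADWIN-style check: return True if some split of `window`
    yields two sub-windows whose means differ by more than a
    Hoeffding-style cut threshold."""
    n = len(window)
    total = sum(window)
    head = 0.0
    for i in range(1, n):
        head += window[i - 1]
        n0, n1 = i, n - i
        m = 1.0 / (1.0 / n0 + 1.0 / n1)              # harmonic mean of sizes
        eps = math.sqrt(math.log(4.0 * n / delta) / (2.0 * m))
        if abs(head / n0 - (total - head) / n1) > eps:
            return True                              # drift: shrink the window
    return False

stream = [0.0] * 200 + [1.0] * 200   # abrupt concept drift at t = 200
```

On the `stream` above the cut fires at the change point, whereas a stationary stream triggers no cut.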
The concept drift may also be dealt with using ensemble classifiers~\cite{Gama}. There are a plethora of ensemble-based approaches~\cite{Gomez,Brzezinski2014ReactingTD,Krawczyk2017} however, in this work we are focused on single-classifier-based systems.
The rest of the paper is organized as follows. Section~\ref{sec:ClassifierCorrection} presents the corrected classifier and gives insight into its two-level structure and the original concepts of RRC and SCM which are the basis of its construction. Section~\ref{sec:DSClassification} describes the adopted model of concept drifting data stream and provides details of chunk-based learning scheme of base classifiers and online dynamic learning of the correcting procedure and describes the method of combining ensemble members. In section~\ref{sec:ExpSetup} the description of the experimental procedure is given. The results are presented and discussed in section~\ref{sec:ResAndDisc}. Section~\ref{sec:Conclusions} concludes the paper.
\section {Classifier with Correction}\label{sec:ClassifierCorrection}
\subsection{Preliminaries}\label{sec:ClassifierCorrection:Preliminaries}
Let us consider the pattern recognition problem in which $x \in \mathcal{X}$ denotes a feature vector of an object and $j \in \mathcal{M}$ is its class number ($\mathcal{X}\subseteq \Re^{d}$ and $\mathcal{M}=\{1,2,\ldots,M\}$ are feature space and set of class numbers, respectively). Let $\psi(\mathcal{L})$ be a classifier trained on the learning set $\mathcal{L}$, which assigns a class number $i$ to the recognized object. We assume that $\psi(\mathcal{L})$ is described by the canonical model \cite{Kuncheva}, i.e. for given $x$ it first produces values of normalized classification functions (supports) $g_i(x), i \in \mathcal{M}$ ($g_i(x) \in [0, 1], \sum g_i(x)=1$) and then classify object according to the maximum support rule: \begin{equation} \label{wzor1} \psi(\mathcal{L},x)=i \Leftrightarrow g_i(x) = \max_{k \in \mathcal{M}} g_k(x). \end{equation}
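The maximum support rule (\ref{wzor1}) amounts to an argmax over the normalized supports $g_i(x)$; a minimal sketch (the function name is ours):

```python
def max_support_rule(supports):
    """Classify by the maximum support rule; class numbers are 1-based,
    matching the set M = {1, ..., M} of the text."""
    assert abs(sum(supports) - 1.0) < 1e-9 and all(0 <= g <= 1 for g in supports)
    return max(range(len(supports)), key=lambda k: supports[k]) + 1

# g_1(x), g_2(x), g_3(x) for a hypothetical three-class problem:
decision = max_support_rule([0.2, 0.5, 0.3])   # class 2
```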
To recognize the object $x$ we will apply the original procedure, which using additional information about the local (relative to $x$) properties of $\psi(\mathcal{L})$ can change its decision to increase the chance of correct classification of $x$.
The proposed correcting procedure which has the form of classifier $\psi^{(Corr)}(\mathcal{L},\mathcal{V})$ built over $\psi(\mathcal{L})$ will be called a wrapping-classifier. The wrapping classifier $\psi^{(Corr)}(\mathcal{L},\mathcal{V})$ acts according to the following Bayes scheme: \begin{equation} \label{wzor2}
\psi^{(Corr)}(\mathcal{L},\mathcal{V},x)=i \Leftrightarrow P(i|x) = \max_{k \in \mathcal{M}} P(k|x), \end{equation}
where \textit{a posteriori} probabilities $P(k|x), k \in \mathcal{M}$ can be expressed in a form depending on the probabilistic properties of classifier $\psi(\mathcal{L})$: \begin{equation} \label{wzor3}
P(j|x)=\sum_{i \in \mathcal{M}} P(i,j|x) = \sum_{i \in \mathcal{M}} P(i|x) P(j|i,x). \end{equation}
$P(j|i,x)$ denotes the probability that $x$ belongs to the $j$-th class given that $\psi(\mathcal{L},x)=i$, and $P(i|x)=P(\psi(\mathcal{L},x)=i)$ is the probability of assigning $x$ to class $i$ by $\psi(\mathcal{L})$. Since for a deterministic classifier $\psi(\mathcal{L})$ both of the above probabilities are equal to 0 or 1, we will use two concepts for their approximate calculation: the randomized reference classifier (RRC) and the soft confusion matrix (SCM).
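Equation (\ref{wzor3}) is a simple mixture of the rows of the local confusion probabilities. A hypothetical numerical sketch (all numbers invented for illustration):

```python
def corrected_posteriors(p_decision, p_true_given_decision):
    """P(j|x) = sum_i P(i|x) * P(j|i,x), with
    p_true_given_decision[i][j] = P(j|i,x) (0-based indices)."""
    M = len(p_decision)
    return [sum(p_decision[i] * p_true_given_decision[i][j] for i in range(M))
            for j in range(M)]

# Two-class toy example: the base classifier leans towards class 1, but
# locally it often labels true class 2 objects as class 1.
p_i = [0.7, 0.3]                       # P(i|x), e.g. from the RRC model
p_j_given_i = [[0.6, 0.4],             # row i=1: P(j=1|1,x), P(j=2|1,x)
               [0.1, 0.9]]             # row i=2: P(j=1|2,x), P(j=2|2,x)
post = corrected_posteriors(p_i, p_j_given_i)
```

Here `post` equals $[0.45, 0.55]$, so the correction flips the decision from class 1 to class 2.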
\subsection{Randomized Reference Classifier (RRC)}\label{sec:ClassifierCorrection:RRC} RRC is a randomized model of classifier $\psi(\mathcal{L})$ and with its help the probabilities $P(\psi(\mathcal{L},x)=i)$ are calculated.
RRC $\psi^{RRC}$ as a probabilistic classifier is defined by a probability distribution over the set of class labels $\mathcal{M}$. Its classifying functions $\{\delta_j(x)\}_{j \in \mathcal{M}}$ are observed values of random variables $\{\Delta_j(x)\}_{j \in \mathcal{M}}$ that meet -- in addition to the normalizing conditions -- the following condition: \begin{equation} \label{wzor4} \mathbf{E}\left[\Delta_{i}(x) \right] = g_{i}(x),\ i \in \mathcal{M}, \end{equation} where $\mathbf{E}$ is the expected value operator. Formula (\ref{wzor4}) denotes that $\psi^{RRC}$ acts -- on average -- as the modeled classifier $\psi(\mathcal{L})$, hence the following approximation is fully justified: \begin{equation} \label{wzor5}
P(i|x)=P(\psi(\mathcal{L},x)=i)\approx P(\psi^{RRC}(x)=i), \end{equation} where \begin{equation} \label{wzor6} P(\psi^{RRC}(x)=i)=P [\Delta_i(x) > \Delta_k(x), k \in \mathcal{M} \setminus i] \end{equation} can be easily determined if we assume -- as in the original work of Woloszynski and Kurzynski \cite{PR} -- that $\Delta_i(x)$ follows the beta distribution.
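As a concrete illustration, the probability \eqref{wzor6} can be estimated by Monte Carlo simulation. The sketch below assumes each $\Delta_i(x)$ is beta-distributed with mean $g_i(x)$; the concentration parameter `s` and the support values are illustrative assumptions, not fixed by the model:

```python
import random

def rrc_probabilities(g, s=10.0, n_samples=20000, seed=0):
    """Estimate P(psi^RRC(x) = i) by sampling Delta_i(x) ~ Beta(a_i, b_i)
    with mean g[i], using a_i = g[i]*s and b_i = (1 - g[i])*s."""
    rng = random.Random(seed)
    wins = [0] * len(g)
    for _ in range(n_samples):
        draws = [rng.betavariate(max(gi * s, 1e-9), max((1.0 - gi) * s, 1e-9))
                 for gi in g]
        wins[draws.index(max(draws))] += 1
    return [w / n_samples for w in wins]

# Hypothetical support values g_i(x) of the base classifier for three classes.
probs = rrc_probabilities([0.6, 0.3, 0.1])
```

The class with the highest support wins most often, but the other classes retain non-zero probability mass, which is exactly what the correction scheme exploits.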
\subsection{Soft Confusion Matrix (SCM)}\label{sec:ClassifierCorrection:SCM}
SCM will be used to assess the probability $P(j|i,x)$, which denotes the class-dependent probability of correct classification (for $i=j$) or misclassification (for $i \neq j$) by $\psi(\mathcal{L},x)$ at the point $x$. The method defines the neighborhood of the point $x$ containing validation objects in terms of fuzzy sets, allowing for a flexible choice of membership functions and for assigning weights to individual validation objects depending on their distance from $x$.
The SCM, which provides an image of the classifier's local (relative to $x$) probabilities $P(j|i,x)$, takes the form of a two-dimensional table in which the rows correspond to the true classes and the columns correspond to the outcomes of the classifier $\psi(\mathcal{L})$, as shown in Table~\ref{MK_PT:confmatrix}.
\begin{table} { \renewcommand{\arraystretch}{0.9} \setlength\tabcolsep{.9pt} \caption{The soft confusion matrix of classifier $\psi(\mathcal{L})$} \label{MK_PT:confmatrix} \begin{center}
\begin{tabular}{cc|cccc} & & \multicolumn{4}{c}{Classification by $\psi$}\\ & & $1$ & $2$ & \ldots & $M$ \\ \midrule & $1$& $\varepsilon_{1,1}(x)$&$\varepsilon_{1,2}(x)$& \ldots&$\varepsilon_{1,M}(x)$\\ True& $2$& $\varepsilon_{2,1}(x)$&$\varepsilon_{2,2}(x)$& \ldots&$\varepsilon_{2,M}(x)$\\ class& $\vdots$ & $\vdots$ & $\vdots$ & $\ddots$ &$\vdots$ \\ & $M$& $\varepsilon_{M,1}(x)$&$\varepsilon_{M,2}(x)$& \ldots&$\varepsilon_{M,M}(x)$\\ \end{tabular} \end{center} } \end{table} The value $\varepsilon_{i,j}(x)$ is determined from validation set $\mathcal{V}$ and is defined as the following ratio:
\begin{equation} \label{wzor7}
\varepsilon_{i,j}(x)= \frac{|\mathcal{V}_j \cap \mathcal{D}_i \cap \mathcal{N}(x)|}{|\mathcal{V}_j \cap \mathcal{N}(x)|}, \end{equation}
where $\mathcal{V}_j, \mathcal{D}_i$ and $\mathcal{N}(x)$ are fuzzy sets specified in the validation set $\mathcal{V}$ and $|\cdot|$ denotes the cardinality of a~fuzzy set~\cite{Dhar2013}.
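As an illustration of \eqref{wzor7}, the sketch below realizes the fuzzy intersection with the min t-norm and the fuzzy cardinality with the sigma-count (sum of membership grades); both choices, and the membership values used in the example, are assumptions for illustration only:

```python
def fuzzy_cardinality(memberships):
    """Sigma-count cardinality of a fuzzy set: sum of membership grades."""
    return sum(memberships)

def epsilon(mu_Vj, mu_Di, mu_N):
    """epsilon_{i,j}(x) from Eq. (7): |V_j ∩ D_i ∩ N(x)| / |V_j ∩ N(x)|,
    with intersections realized by the min t-norm."""
    num = fuzzy_cardinality([min(a, b, c) for a, b, c in zip(mu_Vj, mu_Di, mu_N)])
    den = fuzzy_cardinality([min(a, b) for a, b in zip(mu_Vj, mu_N)])
    return num / den if den > 0 else 0.0

# Three validation objects: class indicator, RRC support for class i,
# and neighborhood weight, respectively (hypothetical values).
e = epsilon([1, 1, 0], [0.8, 0.4, 0.9], [1.0, 0.5, 0.2])
```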
The set $\mathcal{V}_j$ denotes the set of validation objects from the $j$-th class. Formulated in terms of fuzzy set theory, the grade of membership of a validation object $x_{\mathcal{V}}$ in $\mathcal{V}_j$ is its class indicator, which leads to the following definition of $\mathcal{V}_j$:
\begin{align} \label{wzor8} \mathcal{V}_j&=\{(x_{\mathcal{V}}, \mu_{\mathcal{V}_j}(x_{\mathcal{V}}))\},\\ \mu_{\mathcal{V}_j}(x_{\mathcal{V}})&=\left\{\begin{array}{ll} 1 & \textrm{if}\ x_{\mathcal{V}} \in j\textrm{-th class,}\\ 0 & \textrm{elsewhere.} \end{array} \right. \end{align}
The concept of fuzzy set $\mathcal{D}_i$ is defined as follows: \begin{equation} \label{wzor9}
\mathcal{D}_i=\{(x_{\mathcal{V}}, \mu_{\mathcal{D}_i}(x_{\mathcal{V}})): x_{\mathcal{V}} \in \mathcal{V}, \mu_{\mathcal{D}_i}(x_{\mathcal{V}})=P(i|x_{\mathcal{V}})\}, \end{equation}
where $P(i|x_{\mathcal{V}})$ is calculated according to \eqref{wzor5} and \eqref{wzor6}. Formula \eqref{wzor9} shows that the membership of a validation object $x_{\mathcal{V}}$ in the set $\mathcal{D}_i$ is not determined by the decision of the classifier $\psi(\mathcal{L})$. Instead, the grade of membership of $x_{\mathcal{V}}$ in $\mathcal{D}_i$ depends on the potential chance of classifying $x_{\mathcal{V}}$ to the $i$-th class by $\psi(\mathcal{L})$. We assume that this potential chance equals the probability $P(i|x_{\mathcal{V}})=P(\psi(\mathcal{L},x_{\mathcal{V}})=i)$, calculated approximately using the randomized model RRC of the classifier $\psi(\mathcal{L})$.
Set $\mathcal{N}(x)$ plays the crucial role in the proposed concept of SCM, because it decides which validation objects $x_{\mathcal{V}}$, and with which weights, will be involved in determining the local properties of the classifier $\psi(\mathcal{L})$ and -- as a consequence -- in the procedure of correcting its classifying decision. Formally, $\mathcal{N}(x)$ is also a fuzzy set: \begin{equation} \label{wzor10} \mathcal{N}(x)=\{(x_{\mathcal{V}}, \mu_{\mathcal{N}(x)}(x_{\mathcal{V}}))\}, \end{equation} but its membership function is not defined uniquely because it depends on many circumstances. By choosing the shape of the membership function $\mu_{\mathcal{N}(x)}$ we can freely model the adopted concept of ``locality'' (relative to $x$).
$\mu_{\mathcal{N}(x)}(x_{\mathcal{V}})$ depends on the distance between validation object $x_{\mathcal{V}}$ and test object $x$: its value is equal to 1 for $x_{\mathcal{V}}=x$ and decreases with increasing the distance between $x_{\mathcal{V}}$ and $x$. This leads to the following form of the proposed membership function of the set: \begin{equation} \label{wzor14a} \mu_{\mathcal{N}}(x_{\mathcal{V}})= \left\{ \begin{array}{lcr} C \exp(-\beta \eucDist{x - x_{\mathcal{V}}}^2), & \mathrm{if} & \eucDist{ x - x_{\mathcal{V}}}<K_{d}\\
0 &\mathrm{otherwise} &
\end{array}\right. \end{equation}
$\eucDist{\cdot}$ denotes the Euclidean distance in the feature space $\mathcal{X}$, $K_{d}$ is the Euclidean distance between $x$ and its $K$-th nearest neighbor in $\mathcal{V}$, $\beta \in \Re_+$, and $C$ is a normalizing coefficient. The membership function \eqref{wzor14a} thus restricts the concept of ``locality'' (relative to $x$) to the set of $K$ nearest neighbors, with a Gaussian model of the membership grade.
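A minimal sketch of the membership function \eqref{wzor14a}, with the normalizing coefficient $C$ left at 1 for simplicity:

```python
import math

def neighborhood_membership(x, x_v, k_d, beta=1.0, C=1.0):
    """mu_N(x_v) from Eq. (14a): Gaussian decay within the K-NN
    radius k_d, and zero outside it."""
    dist = math.dist(x, x_v)  # Euclidean distance
    if dist >= k_d:
        return 0.0
    return C * math.exp(-beta * dist ** 2)
```

The weight equals $C$ at $x$ itself and decays smoothly with distance, dropping to zero beyond the $K$-th nearest neighbour.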
Since, under the stream classification framework, there should be only one pass over the data~\cite{Krawczyk2017}, the $K$ and $\beta$ parameters cannot be found using the extensive grid-search approach employed for the originally proposed method~\cite{AMCS,CBM}. Consequently, in this work we set $\beta$ to 1. Additionally, the initial number of nearest neighbours is found using a simple rule of thumb~\cite{Devroye1996}: \begin{equation}
\hat{K} = \left \lceil{\sqrt{|\mathcal{V}|}}\right \rceil. \end{equation} To avoid ties, the final number of neighbours $K$ is set as follows: \begin{equation}
K = \left\{ \begin{array}{lcr}
\hat{K} & \mathrm{if} & M \mod \hat{K} \neq 0\\
\hat{K}+1 & \mathrm{otherwise}&
\end{array} \right. \end{equation} Additionally, the computational cost of computing the neighbourhood may be further reduced by using the kd-tree algorithm to find the nearest neighbours~\cite{Hou2018}.
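The neighbourhood-size rule above can be sketched as follows ($M$ is the number of classes):

```python
import math

def select_k(n_validation, n_classes):
    """Rule-of-thumb neighbourhood size (ceil of sqrt(|V|)) with the
    tie-avoiding adjustment from the text."""
    k_hat = math.ceil(math.sqrt(n_validation))
    # Keep k_hat unless M mod k_hat == 0, in which case enlarge by one.
    return k_hat if n_classes % k_hat != 0 else k_hat + 1
```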
Finally, from \eqref{wzor8}, \eqref{wzor9} and \eqref{wzor10} we get the following approximation: \begin{equation} \label{wzor11}
P(j|i,x) \approx \frac{\varepsilon_{i,j}(x)}{\sum_{j \in \mathcal{M}} \varepsilon_{i,j}(x)}, \end{equation} which together with \eqref{wzor3}, \eqref{wzor5} and \eqref{wzor6} gives \eqref{wzor2}, i.e.\ the corrected classifier $\psi^{\mathrm{Corr}}(\mathcal{L}, \mathcal{V})$.
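Putting the pieces together, the corrected decision \eqref{wzor2} can be sketched as below; the input probabilities and SCM entries are hypothetical:

```python
def corrected_class(p, eps):
    """psi^Corr decision (Eq. 2): combine P(i|x) = p[i] with the
    row-normalized SCM entries (Eq. 11) via Eq. (3), then take the argmax."""
    M = len(p)
    posterior = [0.0] * M
    for i in range(M):
        row_sum = sum(eps[i])
        if row_sum == 0:
            continue
        for j in range(M):
            posterior[j] += p[i] * eps[i][j] / row_sum
    return max(range(M), key=lambda j: posterior[j]), posterior

# The base classifier favours class 0, but the SCM indicates that
# class-0 outputs are locally unreliable, so the decision flips.
label, post = corrected_class([0.7, 0.3], [[0.2, 0.8], [0.1, 0.9]])
```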
\subsection{Creating the validation set}\label{sec:ClassifierCorrection:ValSet}
In this section, the procedure of creating the validation set $\mathcal{V}$ from the training set $\mathcal{L}$ is described. In the original work describing the SCM~\cite{AMCS}, the set of labelled data was split into the learning set $\mathcal{L}$ and the validation set $\mathcal{V}$. The learning set and the validation set were disjoint, $\mathcal{L} \cap \mathcal{V} =\emptyset$. The cardinality of the validation set was controlled by the parameter $\gamma$: $|\mathcal{V}|=\gamma|\mathcal{L}|$, $\gamma \in [0,1]$. The $\gamma$ coefficient was usually set to $0.6$; however, to achieve the highest classification quality, it should be determined using a grid-search procedure. As stated above, in this work we want to avoid the grid-search procedure. Therefore, we construct the validation set using a three-fold cross-validation procedure, which allows the entire learning set to be used as a validation set. The procedure is described in Algorithm~\ref{alg:SCM:Train}.
\begin{algorithm}[H]
\KwData{
$\mathcal{L}$ -- Initial learning set\;
}
\KwResult{
$\mathcal{V}$ -- Validation set\;
$\mathcal{D}_i$ -- Decision sets (see~\eqref{wzor9})\;
$\psi(\mathcal{L})$ -- Trained classifier.
}
\Begin{
\For{$k \in \{1,2,3\}$}{
Extract fold specific training and validation set $\mathcal{L}_k$, $\mathcal{V}_k$\;
Learn the $\psi(\mathcal{L}_k)$ using $\mathcal{L}_k$\;
$\mathcal{V} := \mathcal{V} \cup \mathcal{V}_k$ \;
Update the class-specific decision sets $\mathcal{D}_i$ using predictions of $\psi(\mathcal{L}_k)$ for instances from $\mathcal{V}_k$ (see~\eqref{wzor9})\; }
Learn the $\psi(\mathcal{L})$ using $\mathcal{L}$\; }
\caption{Procedure of training the SCM classifier, including the creation of the validation set.\label{alg:SCM:Train}} \end{algorithm}
\section{Classification of Data Stream}\label{sec:DSClassification} The main goal of this work is to develop a wrapping-classifier that enables incremental learning for classifiers that are unable to learn incrementally. In this section, we describe the incremental learning procedure used by the SCM-based wrapping-classifier.
\subsection{Model of Data Stream}\label{sec:DSClassification:streamModel}
We assume that instances from a data stream $\mathcal{S}$ appear as a sequence of labeled examples $\{(x^t, j^t)\}, t=1,2,...,T$, where $x^t \in \mathcal{X} \subseteq \Re^d$ represents a $d$-dimensional feature vector of an object that arrived at time $t$ and $j^t \in \mathcal{M}=\{1,2,\ldots, M\}$ is its class number. In this study we consider a completely supervised learning approach which means that the true class number $j^t$ is available after the arrival of the object $x^t$ and before the arrival of the next object $x^{t+1}$ and this information may be used by classifier for classification of $x^{t+1}$. Such a framework is one of the most often considered in the related literature \cite{Brzez2014,Nguyen}.
In addition, we assume that a data stream can be generated with a time-varying distribution, yielding the phenomenon of concept drift \cite{Gama}. We do not impose any restrictions on the concept drift. It can be real drift referring to changes of class distribution or virtual drift referring to the distribution of features. We allow sudden, incremental, gradual, and recurrent changes in the distribution of instances creating a data stream. Changes in the distribution can cause an imbalanced class system to appear in a changing configuration.
\subsection{Incremental learning for SCM classifier}\label{sec:DSClassification:incrLearningSCM}
We assumed that the base classifier $\psi(\mathcal{L})$ wrapped by the SCM classifier is unable to learn incrementally. Consequently, an initial training set has to be used to build the classifier. This initial data set is called the initial chunk $\mathcal{B}$, and its desired size is denoted by $|\mathcal{B}_{\mathrm{des}}|$. The initial data set is built by storing incoming examples from the data stream. Until the initial batch is collected, prediction with the base classifier is impossible; in the meantime, predictions are made on the basis of \textit{a priori} probabilities estimated from the incomplete initial batch.
Since $\psi(\mathcal{L})$ is unable to learn incrementally, incremental learning is handled by changing the validation set. Incoming instances are added to the validation set until the ADWIN-based drift detector detects that a concept drift has occurred. The ADWIN-based drift detector analyses the outcomes of the corrected classifier for the instances stored in the validation set~\cite{Bifet2007}. When there is a significant difference between the older and the newer part of the validation set, the detector removes the older part. The remaining part of the validation set is then used to correct the outcome of $\psi(\mathcal{L})$. The ADWIN-based drift detector also controls the size of the neighbourhood: even if there is no concept drift, the detector may detect a deterioration of the classification quality when the neighbourhood becomes too large.
The detailed procedure of the ensemble building is described in Algorithms~\ref{alg:AdwUpd} and~\ref{alg:AdwTrain}.
\begin{algorithm}[htb]
\KwData{
$\mathcal{V}$ -- validation set\;
$x$ -- new instance to add\;
}
\KwResult{Updated validation set}
\Begin{
i= $\psi(\mathcal{L},\mathcal{V},x)$ \tcp*{Predict object class using corrected classifier}
Check the prediction using ADWIN detector\;
\uIf{ADWIN detector detects drift}{
Ask the detector for the newer part of the validation set $\mathcal{V}_{\mathrm{new}}$\;
$\mathcal{V}:= \mathcal{V}_{\mathrm{new}}$\;
}
$\mathcal{V}:= \mathcal{V} \cup x$\;
}
\caption{Validation set update controlled by ADWIN detector.\label{alg:AdwUpd}} \end{algorithm}
\begin{algorithm}[htb]
\KwData{
$x$ -- new instance;
}
\KwResult{Learned SCM wrapping-classifier}
\Begin{
\uIf{$|\mathcal{B}|\geq|\mathcal{B}_{\mathrm{des}}|$}{
Train the SCM classifier using the procedure described in Algorithm~\ref{alg:SCM:Train} using $\mathcal{B}$ as a learning set\;
$\mathcal{B}:= \emptyset$\;
$\mathcal{V}^{\prime}:= \mathcal{V}$ \tcp*{Make a copy of the validation set}
\ForEach{object $x^{\prime} \in \mathcal{V}^{\prime}$}{
Update the validation set $\mathcal{V}$ using $x^{\prime}$ and the procedure described in Algorithm~\ref{alg:AdwUpd}
}
}
\uElseIf{Is SCM classifier trained}{
Update the validation set $\mathcal{V}$ using $x$ and the procedure described in Algorithm~\ref{alg:AdwUpd}
}
\Else{
$\mathcal{B}:= \mathcal{B} \cup x$\;
}
}
\caption{Incremental learning procedure of the SCM wrapping-classifier.\label{alg:AdwTrain}} \end{algorithm}
\section{Experimental Setup}\label{sec:ExpSetup} To validate the classification quality of the proposed approaches, an experimental evaluation, whose setup is described below, was performed.
The following base classifiers were employed: \begin{itemize}
\item $\psi_{\mathrm{HOE}}$ -- Hoeffding tree classifier~\cite{Pfahringer2007}.
\item $\psi_{\mathrm{NB}}$ -- Naive Bayes classifier with kernel density estimation~\cite{Hand2001}.
\item $\psi_{\mathrm{KNN}}$ -- KNN classifier~\cite{Guo2003}.
\item $\psi_{\mathrm{SGD}}$ -- SVM classifier built using stochastic gradient descent method~\cite{Sakr2017}. \end{itemize} The classifiers implemented in WEKA framework~\cite{Hall2009} were used. If not stated otherwise, the classifier parameters were set to their defaults. We have chosen the classifiers that offer both batch and incremental learning procedures.
The experimental code was implemented using WEKA~\cite{Hall2009} framework. The source code of the algorithms is available online~\footnote{\url{https://github.com/ptrajdos/rrcBasedClassifiers/tree/develop}}~\footnote{\url{https://github.com/ptrajdos/StreamLearningPT/tree/develop}}.
During the experimental evaluation, the following classifiers were compared: \begin{enumerate} \item $\psi_{\mathrm{B}}$ -- The ADWIN-driven classifier created using the unmodified base classifier (The base classifier is able to update incrementally.)~\cite{Bifet2007}.
\item $\psi_{\mathrm{nB}}$ -- The ADWIN-driven classifier created using the unmodified base classifier with incremental learning disabled. The base classifier is only retrained whenever the ADWIN-based detector detects concept drift.
\item $\psi_{\mathrm{S}}$ -- The ADWIN-driven approach using the SCM correction scheme with online learning, as described in Section~\ref{sec:DSClassification}.
\item $\psi_{\mathrm{nS}}$ -- The ADWIN-driven approach using the SCM correction scheme but with online learning disabled. The SCM-corrected classifier is only retrained whenever the ADWIN-based detector detects concept drift. \end{enumerate}
To evaluate the proposed methods, the following classification-loss criteria are used~\cite{Sokolova2009}: macro-averaged $\mathrm{FDR}$ ($1-$precision), $\mathrm{FNR}$ ($1-$recall), and the Matthews correlation coefficient ($\mathrm{MCC}$). The Matthews coefficient is rescaled in such a way that 0 is perfect classification and 1 is the worst one. Quality measures from the macro-averaging group are considered because these measures are more sensitive to the performance on minority classes. For many real-world classification problems, the minority class is the class that attracts the most attention~\cite{Leevy2018}.
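A sketch of how these macro-averaged criteria can be computed from a confusion matrix `conf[true][pred]` via the usual per-class binary decomposition (the helper name and interface are ours, not from the referenced implementation):

```python
import math

def macro_metrics(conf):
    """Return (macro FDR, macro FNR, rescaled macro MCC) from a
    confusion matrix conf[true][pred]; MCC is rescaled so that
    0 is perfect classification and 1 the worst."""
    M = len(conf)
    total = sum(sum(row) for row in conf)
    fdr, fnr, mcc = [], [], []
    for c in range(M):
        tp = conf[c][c]
        fp = sum(conf[r][c] for r in range(M)) - tp
        fn = sum(conf[c]) - tp
        tn = total - tp - fp - fn
        fdr.append(fp / (tp + fp) if tp + fp else 0.0)
        fnr.append(fn / (tp + fn) if tp + fn else 0.0)
        denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
        mcc.append((tp * tn - fp * fn) / denom if denom else 0.0)
    return sum(fdr) / M, sum(fnr) / M, (1.0 - sum(mcc) / M) / 2.0
```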
Following the recommendations of~\cite{demsar2006} and~\cite{garcia2008extension}, the statistical significance of the obtained results was assessed using the two-step procedure. The first step is to perform the Friedman test~\cite{demsar2006} for each quality criterion separately. Since multiple criteria were employed, the familywise errors (\FWER{}) should be controlled~\cite{holm1979}. To do so, the Holm~\cite{holm1979} procedure of controlling \FWER{} of the conducted Friedman tests was employed. When the Friedman test shows that there is a significant difference within the group of classifiers, the pairwise tests using the Wilcoxon signed-rank test~\cite{demsar2006} were employed. To control \FWER{} of the Wilcoxon-testing procedure, the Holm approach was employed~\cite{holm1979}. For all tests, the significance level was set to $\alpha=0.01$.
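For reference, the Holm step-down procedure used here to control \FWER{} can be sketched as follows (a generic implementation, not tied to any statistics package):

```python
def holm_correction(p_values, alpha=0.01):
    """Holm step-down procedure: returns booleans marking which
    hypotheses are rejected at family-wise error level alpha."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for k, i in enumerate(order):
        if p_values[i] <= alpha / (m - k):
            reject[i] = True
        else:
            break  # once one test fails, all larger p-values fail too
    return reject
```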
The experiments were conducted using 48 synthetic datasets generated using the STREAM-LEARN library~\footnote{\url{https://github.com/w4k2/stream-learn}}. The properties of the datasets were as follows: dataset size: 30k examples; number of attributes: 8; types of drift generated: incremental, sudden; noise: 0\%, 10\%, 20\%; imbalance ratio: 0 -- 4.
Datasets used in this experiment are available online~\footnote{\url{https://github.com/ptrajdos/MLResults/blob/master/data/stream_data.tar.xz?raw=true}}
To examine the effectiveness of the incremental update algorithms, we applied an experimental procedure based on the methodology which is characteristic of data stream classification, namely, the test-then-update procedure~\cite{Gama:2010:KDD}. The chunk size for evaluation purposes was set to 200.
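The test-then-update protocol can be sketched as below; the `MajorityClassifier` is a deliberately trivial stand-in for the actual stream classifiers, and the interface (`is_ready`/`predict`/`update`) is an assumption for the example:

```python
def prequential_evaluate(stream, classifier, chunk_size=200):
    """Test-then-update: each instance is first used for prediction,
    then for learning; accuracy is reported per evaluation chunk."""
    correct, seen, chunk_acc = 0, 0, []
    for x, y in stream:
        if classifier.is_ready():
            correct += int(classifier.predict(x) == y)
        classifier.update(x, y)
        seen += 1
        if seen % chunk_size == 0:
            chunk_acc.append(correct / chunk_size)
            correct = 0
    return chunk_acc

class MajorityClassifier:
    """Trivial incremental learner used only to illustrate the loop."""
    def __init__(self):
        self.counts = {}
    def is_ready(self):
        return bool(self.counts)
    def predict(self, x):
        return max(self.counts, key=self.counts.get)
    def update(self, x, y):
        self.counts[y] = self.counts.get(y, 0) + 1

stream = [((i,), 0) for i in range(400)]
acc = prequential_evaluate(stream, MajorityClassifier(), chunk_size=200)
```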
\section{Results and Discussion}\label{sec:ResAndDisc}
To compare multiple algorithms on multiple benchmark sets, the average ranks approach is used. In this approach, the winning algorithm achieves a rank equal to '1', the second achieves a rank equal to '2', and so on. In the case of ties, the ranks of algorithms that achieve the same results are averaged.
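The tie-averaging rank computation can be sketched as follows (shown here with higher-is-better scores; for loss criteria the flag is flipped):

```python
def average_ranks(scores, higher_better=True):
    """Rank algorithms on one benchmark set; tied algorithms share
    the mean of the ranks they would jointly occupy."""
    order = sorted(range(len(scores)), key=lambda i: scores[i],
                   reverse=higher_better)
    ranks = [0.0] * len(scores)
    pos = 0
    while pos < len(order):
        tied = [order[pos]]
        while pos + len(tied) < len(order) and \
                scores[order[pos + len(tied)]] == scores[tied[0]]:
            tied.append(order[pos + len(tied)])
        mean_rank = pos + (len(tied) + 1) / 2.0  # ranks pos+1 .. pos+len(tied)
        for i in tied:
            ranks[i] = mean_rank
        pos += len(tied)
    return ranks
```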
The numerical results are given in Tables~\ref{table:streamHOE}--\ref{table:streamSGD}. Each table is structured as follows. The first row contains the names of the investigated algorithms. Then, the table is divided into three sections, each related to a single evaluation criterion. The first row of each section gives the name of the quality criterion investigated in that section. The second row shows the p-value of the Friedman test. The third one shows the average ranks achieved by the algorithms. The following rows show p-values resulting from the pairwise Wilcoxon tests. A p-value equal to $.000$ indicates that the p-value is lower than $10^{-3}$. P-values lower than $\alpha$ are shown in bold. Due to the page limit, the raw results are published online~\footnote{\url{https://github.com/ptrajdos/MLResults/blob/master/RandomizedClassifiers/Results_cldd_2021.tar.xz?raw=true}}.
To provide a visualization of the average ranks and the outcomes of the statistical tests, rank plots are used. The rank plots are compatible with those described in~\cite{demsar2006}. That is, each classifier is placed along a line representing the values of the achieved average ranks. Classifiers between which there are no significant differences (in terms of the pairwise Wilcoxon test) are connected with a horizontal bar placed below the axis representing the average ranks. The results are visualised in Figures~\ref{fig:hoeRank}--\ref{fig:sgdRank}.
Let us begin with an analysis of the correction ability of the SCM approach when incremental learning is disabled. Although this kind of analysis has already been carried out~\cite{AMCS,CBM}, it should be repeated here since the definition of the neighbourhood has changed significantly (see Section~\ref{sec:ClassifierCorrection:SCM}). To assess the impact of the SCM-based correction, we compare the algorithms $\psi_{\mathrm{nB}}$ and $\psi_{\mathrm{nS}}$ for different base classifiers. For the $\psi_{\mathrm{HOE}}$ and $\psi_{\mathrm{NB}}$ base classifiers, employing the SCM-based correction yields a significant improvement in terms of all quality criteria (see Figures~\ref{fig:hoeRank} and~\ref{fig:nbRank}). For the remaining base classifiers, on the other hand, there are no significant differences between $\psi_{\mathrm{nB}}$ and $\psi_{\mathrm{nS}}$. These results confirm the observations previously made in~\cite{AMCS,CBM}: the correction ability of the SCM approach is more noticeable for classifiers considered to be weaker. This correction ability holds even though the extensive grid-search technique is not applied.
In this paper, the SCM-based approach is proposed as a wrapping-classifier that handles incremental learning for base classifiers that cannot be updated incrementally. Consequently, we now analyse the SCM approach in that scenario. The results show that $\psi_{\mathrm{S}}$ significantly outperforms $\psi_{\mathrm{nB}}$ for all base classifiers and quality criteria, which means that it works well as an incremental-learning-handling wrapping-classifier. What is more, it also outperforms $\psi_{\mathrm{nS}}$ for all base classifiers and criteria. This clearly shows that the source of the achieved improvement does not lie only in the ability to improve batch learning; the ability to handle incremental learning also contributes. Moreover, it handles incremental learning more effectively than the base classifiers designed to do so. This observation is confirmed by the fact that $\psi_{\mathrm{S}}$ also outperforms $\psi_{\mathrm{B}}$ for all base classifiers and quality criteria.
{ \setlength\tabcolsep{2.0pt}
\begin{table}[htb] \centering\scriptsize \caption{Statistical evaluation for the stream classifiers based on $\psi_{\mathrm{HOE}}$ classifier.\label{table:streamHOE}}
\begin{tabular}{c|cccc|cccc|cccc}
& $\psi_{\mathrm{B}}$ & $\psi_{\mathrm{nB}}$ & $\psi_{\mathrm{S}}$ & $\psi_{\mathrm{nS}}$ & $\psi_{\mathrm{B}}$ & $\psi_{\mathrm{nB}}$ & $\psi_{\mathrm{S}}$ & $\psi_{\mathrm{nS}}$ & $\psi_{\mathrm{B}}$ & $\psi_{\mathrm{nB}}$ & $\psi_{\mathrm{S}}$ & $\psi_{\mathrm{nS}}$ \\
\midrule Crit. Name&\multicolumn{4}{c|}{MaFDR}&\multicolumn{4}{c|}{MaFNR}&\multicolumn{4}{c}{MaMCC}\\
Friedman p-value&\multicolumn{4}{c|}{\textbf{1.213e-28}}&\multicolumn{4}{c|}{\textbf{5.963e-28}}&\multicolumn{4}{c}{\textbf{5.963e-28}}\\ Average Rank& 2.000 & 3.812 & 1.00 & 3.188 & 2.000 & 3.583 & 1.00 & 3.417 & 2.000 & 3.667 & 1.00 & 3.333 \\
\midrule
$\psi_{\mathrm{B}}$ & & \textbf{.000} & \textbf{.000} & \textbf{.000} & & \textbf{.000} & \textbf{.000} & \textbf{.000} & & \textbf{.000} & \textbf{.000} & \textbf{.000} \\
$\psi_{\mathrm{nB}}$ & & & \textbf{.000} & \textbf{.000} & & & \textbf{.000} & .111 & & & \textbf{.000} & \textbf{.002} \\
$\psi_{\mathrm{S}}$ & & & & \textbf{.000} & & & & \textbf{.000} & & & & \textbf{.000} \\
\end{tabular} \end{table} }
{ \setlength\tabcolsep{2.0pt} \def\arraystretch{0.9}
\begin{table*}[htb] \centering\scriptsize \caption{Statistical evaluation for the stream classifiers based on $\psi_{\mathrm{NB}}$ classifier.\label{table:streamNB}}
\begin{tabular}{c|cccc|cccc|cccc}
& $\psi_{\mathrm{B}}$ & $\psi_{\mathrm{nB}}$ & $\psi_{\mathrm{S}}$ & $\psi_{\mathrm{nS}}$ & $\psi_{\mathrm{B}}$ & $\psi_{\mathrm{nB}}$ & $\psi_{\mathrm{S}}$ & $\psi_{\mathrm{nS}}$ & $\psi_{\mathrm{B}}$ & $\psi_{\mathrm{nB}}$ & $\psi_{\mathrm{S}}$ & $\psi_{\mathrm{nS}}$ \\
\midrule Crit. Name&\multicolumn{4}{c|}{MaFDR}&\multicolumn{4}{c|}{MaFNR}&\multicolumn{4}{c}{MaMCC}\\
Friedman p-value&\multicolumn{4}{c|}{\textbf{3.329e-28}}&\multicolumn{4}{c|}{\textbf{3.329e-28}}&\multicolumn{4}{c}{\textbf{1.739e-28}}\\ Average Rank& 2.021 & 3.771 & 1.00 & 3.208 & 2.000 & 3.708 & 1.00 & 3.292 & 2.000 & 3.792 & 1.00 & 3.208 \\
\midrule
$\psi_{\mathrm{B}}$ & & \textbf{.000} & \textbf{.000} & \textbf{.000} & & \textbf{.000} & \textbf{.000} & \textbf{.000} & & \textbf{.000} & \textbf{.000} & \textbf{.000} \\
$\psi_{\mathrm{nB}}$ & & & \textbf{.000} & \textbf{.000} & & & \textbf{.000} & \textbf{.001} & & & \textbf{.000} & \textbf{.000} \\
$\psi_{\mathrm{S}}$ & & & & \textbf{.000} & & & & \textbf{.000} & & & & \textbf{.000} \\
\end{tabular} \end{table*} }
{ \setlength\tabcolsep{2.0pt} \def\arraystretch{0.9}
\begin{table*}[htb] \centering\scriptsize \caption{Statistical evaluation for the stream classifiers based on $\psi_{\mathrm{KNN}}$ classifier.\label{table:streamKNN}}
\begin{tabular}{c|cccc|cccc|cccc}
& $\psi_{\mathrm{B}}$ & $\psi_{\mathrm{nB}}$ & $\psi_{\mathrm{S}}$ & $\psi_{\mathrm{nS}}$ & $\psi_{\mathrm{B}}$ & $\psi_{\mathrm{nB}}$ & $\psi_{\mathrm{S}}$ & $\psi_{\mathrm{nS}}$ & $\psi_{\mathrm{B}}$ & $\psi_{\mathrm{nB}}$ & $\psi_{\mathrm{S}}$ & $\psi_{\mathrm{nS}}$ \\
\midrule Crit. Name&\multicolumn{4}{c|}{MaFDR}&\multicolumn{4}{c|}{MaFNR}&\multicolumn{4}{c}{MaMCC}\\
Friedman p-value&\multicolumn{4}{c|}{\textbf{1.883e-27}}&\multicolumn{4}{c|}{\textbf{1.883e-27}}&\multicolumn{4}{c}{\textbf{1.883e-27}}\\ Average Rank& 2.000 & 3.521 & 1.00 & 3.479 & 2.000 & 3.542 & 1.00 & 3.458 & 2.000 & 3.500 & 1.00 & 3.500 \\
\midrule $\psi_{\mathrm{B}}$ & & \textbf{.000} & \textbf{.000} & \textbf{.000} & & \textbf{.000} & \textbf{.000} & \textbf{.000} & & \textbf{.000} & \textbf{.000} & \textbf{.000} \\
$\psi_{\mathrm{nB}}$ & & & \textbf{.000} & .955 & & & \textbf{.000} & .545 & & & \textbf{.000} & .757 \\
$\psi_{\mathrm{S}}$ & & & & \textbf{.000} & & & & \textbf{.000} & & & & \textbf{.000} \\
\end{tabular} \end{table*} }
{ \setlength\tabcolsep{2.0pt} \def\arraystretch{0.9}
\begin{table*}[htb] \centering\scriptsize \caption{Statistical evaluation for the stream classifiers based on $\psi_{\mathrm{SGD}}$ classifier.\label{table:streamSGD}}
\begin{tabular}{c|cccc|cccc|cccc}
& $\psi_{\mathrm{B}}$ & $\psi_{\mathrm{nB}}$ & $\psi_{\mathrm{S}}$ & $\psi_{\mathrm{nS}}$ & $\psi_{\mathrm{B}}$ & $\psi_{\mathrm{nB}}$ & $\psi_{\mathrm{S}}$ & $\psi_{\mathrm{nS}}$ & $\psi_{\mathrm{B}}$ & $\psi_{\mathrm{nB}}$ & $\psi_{\mathrm{S}}$ & $\psi_{\mathrm{nS}}$ \\
\midrule Crit. Name&\multicolumn{4}{c|}{MaFDR}&\multicolumn{4}{c|}{MaFNR}&\multicolumn{4}{c}{MaMCC}\\
Friedman p-value&\multicolumn{4}{c|}{\textbf{3.745e-27}}&\multicolumn{4}{c|}{\textbf{1.563e-27}}&\multicolumn{4}{c}{\textbf{1.563e-27}}\\ Average Rank& 2.042 & 3.500 & 1.00 & 3.458 & 2.021 & 3.292 & 1.00 & 3.688 & 2.000 & 3.438 & 1.00 & 3.562 \\
\midrule $\psi_{\mathrm{B}}$ & & \textbf{.000} & \textbf{.000} & \textbf{.000} & & \textbf{.000} & \textbf{.000} & \textbf{.000} & & \textbf{.000} & \textbf{.000} & \textbf{.000} \\
$\psi_{\mathrm{nB}}$ & & & \textbf{.000} & .947 & & & \textbf{.000} & \textbf{.005} & & & \textbf{.000} & .088 \\
$\psi_{\mathrm{S}}$ & & & & \textbf{.000} & & & & \textbf{.000} & & & & \textbf{.000} \\
\end{tabular} \end{table*} }
\begin{figure}\caption{Rank plot for the stream classifiers based on $\psi_{\mathrm{HOE}}$.}\label{fig:hoeRank}
\end{figure}
\begin{figure}\caption{Rank plot for the stream classifiers based on $\psi_{\mathrm{NB}}$.}\label{fig:nbRank}
\end{figure}
\begin{figure}\caption{Rank plot for the stream classifiers based on $\psi_{\mathrm{KNN}}$.}\label{fig:knnRank}
\end{figure}
\begin{figure}\caption{Rank plot for the stream classifiers based on $\psi_{\mathrm{SGD}}$.}\label{fig:sgdRank}
\end{figure}
\section{Conclusions}\label{sec:Conclusions}
In this paper, we propose a modified SCM classifier to be used as a wrapping-classifier that enables incremental learning for classifiers that are not designed to be incrementally updated. We applied two modifications to the SCM wrapping-classifier originally described in~\cite{AMCS,CBM}. The first one is a modified neighbourhood definition. The newly proposed neighbourhood does not require an extensive grid-search procedure to find the best set of parameters. Due to the modified neighbourhood definition, the computational cost of performing the SCM-based correction is significantly smaller. The second modification is to incorporate an ADWIN-based approach to create and manage the validation set used by the SCM-based algorithm. This modification not only allows the proposed method to deal effectively with concept drift but also allows it to shrink the neighbourhood when it becomes too wide.
The experimental results show that the proposed approach outperforms the reference methods for all investigated base classifiers in terms of all considered quality criteria.
The results obtained in this study are very promising. Consequently, we are going to continue our research on the employment of randomised classifiers in the task of stream learning. Our next step will likely be a proposal of a stream-learning ensemble that uses the SCM-correction method proposed in this paper.
\subsubsection*{Acknowledgments.} This work was supported by the statutory funds of the Department of Systems and Computer Networks, Wroclaw University of Science and Technology. \FloatBarrier
\end{document}
\begin{document}
\title{\Large Carnap's problem for intuitionistic propositional logic}
\begin{abstract} We show that intuitionistic propositional logic is \emph{Carnap categorical}: the only interpretation of the connectives consistent with the intuitionistic consequence relation is the standard interpretation. This holds with respect to the most well-known semantics relative to which intuitionistic logic is sound and complete; among them Kripke semantics, Beth semantics, Dragalin semantics, and topological semantics. It also holds for algebraic semantics, although categoricity in that case is different in kind from categoricity relative to possible worlds style semantics. \end{abstract}
\begin{quote} \textbf{Keywords:} intuitionistic logic, Carnap's problem, nuclear semantics, algebraic semantics, logical constants, consequence relations, categoricity \end{quote}
\begin{quote} \textbf{MSC:} 00A30, 03B20, 06D20 \end{quote}
\section{Motivation and background\label{s:intro}}
\cite{carnap43} worried about the existence of `non-normal' interpretations of the connectives in classical propositional logic ($\mathsf{CPC}$), i.e.\ interpretations that were different from the usual truth tables but still consistent with $\vdash_\mathsf{CPC}$. Though neglected for many decades, the issue has been reopened in recent years. One reason was that non-normal interpretations were seen to clash with inferentialist meaning theories (\cite{raatikainen08}). It has been countered that inferentialism is not really threatened by non-normal interpretations (e.g.\ \cite{murzihjortland09}), or that such interpretations are avoided by a proper understanding of inference rules (e.g.\ \cite{murzitopey21}).
In this note we do not discuss the shape or the epistemological status of rules (but see Section \ref{s:discussion} for some discussion), but follow the model-theoretic approach to Carnap's problem initiated in \cite{bonwes16}. That is, a single-conclusion consequence relation $\vdash$ in a logical language is \emph{given} (no matter in what way), and, relative to a semantics which compositionally assigns \emph{semantic values} (set-theoretic objects of various kinds) to formulas, one investigates to what extent the meaning of the logical constants in that language is \emph{fixed} by consistency with $\vdash$. This is a precise model-theoretic task.
A first observation, which, as pointed out in \cite{bonwes16}, is hiding already in Carnap's 1943 book, is that \emph{compositionality} already rules out non-normal interpretations of the connectives in $\mathsf{CPC}$, relative to the usual two-valued semantics.\footnote{Compositionality is by now an accepted requirement on formal semantics, but the idea was not around in 1943. The result as stated presupposes that propositional atoms are treated as \emph{variables}, as we do here. Otherwise, one would have to exclude by stipulation the non-standard compositional interpretations making all sentences true, since these would also be consistent with $\vdash_\mathsf{CPC}$. } More interestingly, classical first-order consequence $\vdash_\mathsf{FOL}$ does \emph{not} by itself fix the meaning of $\forall$, relative to a semantics interpreting quantifier symbols as sets of subsets of the domain (unary generalized quantifiers), but constrains it to be a \emph{principal filter}. But if \emph{permutation invariance} is also required, then the only consistent interpretation is the standard one: $\vdash_\mathsf{FOL}$ forces $\forall x\varphi(x)$ to mean `for all $x$ in the domain, $\varphi(x)$ holds'.
Carnap's question can be asked for any consequence relation in any logical language, relative to any formal semantics for that language. \cite{bonwes21} deals with Carnap's problem in modal logic, and \cite{speitel20} discusses it for some logics with generalized quantifiers.
Here we consider Carnap's problem for intuitionistic propositional logic ($\mathsf{IPC}$). Relative to which formal semantics should we ask the question? There is by now a plethora of semantics for which $\mathsf{IPC}$ is sound and complete: Kripke semantics, Beth semantics, topological semantics, algebraic semantics, \ldots; ideally we would like an answer for each of them.
Actually, there are \emph{two kinds} of semantics for $\mathsf{IPC}$: the possible worlds kind and the algebraic kind. The former assigns \emph{subsets of a given domain} as semantic values of formulas. The latter doesn't: semantic values are just elements of the carrier of an algebra. We will see that categoricity means something different in the two cases. In Section \ref{s:categoricity}, we show that \emph{Carnap categoricity} in fact holds for $\mathsf{IPC}$, relative to a large number of possible worlds semantics, including the ones just mentioned: only the standard interpretations are consistent with $\vdash_\mathsf{IPC}$. In Section \ref{s:algebraicsem} we do the same for algebraic semantics. Section \ref{s:discussion} concludes with discussion and open issues.
Our main reference for the semantics of $\mathsf{IPC}$ will be the article \cite{bezhanishvili-holliday19} --- BH19 in what follows --- which surveys and compares a great variety of intuitionistic semantics. In particular, the notion of \emph{nuclear} semantics (introduced in \cite{bezhanishvili-holliday16}) plays a crucial role, as it does in this note. We largely follow the notation and terminology in BH19, and refer to that article for technical notions not explained here.
The propositional language will (with some explicitly noted exceptions) be generated by
\ex.[]
$\varphi \;:=\; p \;|\; \bot \;|\; \varphi\land\varphi \;|\; \varphi\lor\varphi \;|\;\varphi\to\varphi\;\;\;\;\;(p \in \mathit{Prop})$
where $\mathit{Prop}$ is a denumerable set of propositional variables. So $\neg\varphi := \varphi\to\bot$. The single-conclusion consequence relation
\ex.[] $\vdash_\mathsf{IPC}$
in this language is assumed to be familiar, as well as standard intuitionistic Kripke semantics, for which $\vdash_\mathsf{IPC}$ is sound and complete.
\section{Possible worlds semantics\label{s:possworldsem}}
What exactly is a possible worlds semantics for $\mathsf{IPC}$? Compare the analogous question for basic modal logic. That question has a precise (if initially somewhat surprising) answer: neighborhood semantics. The reason is that, as shown in \cite{bonwes16}, when the semantic values of formulas are subsets of a set $X$ of points (worlds, states, \ldots), classical propositional logic \emph{fixes the meaning} of the propositional connectives: negation is complement, conjunction is intersection, etc.\footnote{In fact, this result is a special case of Theorem \ref{mainthm} below; see Remark \ref{remark:classical}. } So the only operator left to interpret is $\Box$, and if no requirements are placed on that interpretation, it must (by compositionality) be a function from sets of points to sets of points. Thus, (local) interpretations can be identified with pairs $(X,F)$, where $F\!:\mathcal{P}(X)\to\mathcal{P}(X)$, and these are exactly the neighborhood frames.\footnote{See \cite{bonwes21}. \cite{pacuit17} is an introduction to neighborhood semantics for modal logic. }
In the case of $\mathsf{IPC}$ there are no further operators to interpret, but surely there should be some constraints on the interpretation of the connectives, since we are after all interested in \emph{intuitionistic} interpretations. Given the great variety of semantics for $\mathsf{IPC}$, it is not obvious how to formulate a constraint that fits all. However, guided by the most well-known instances of such semantics (see below), we suggest, somewhat stipulatively, that an intuitionistic possible worlds semantics assigns \emph{upward closed subsets} (upsets) of a partially ordered set (poset) $\mathcal{F}=(X,\leq)$ as semantic values. This immediately rules out the classical interpretation of the connectives (the complement of an upset is usually not an upset).
The elements of the poset can be thought of as \emph{stages}; the rough idea being that information obtained at a stage remains at all later stages. There are numerous implementations and variations of this idea, by Kripke, Dummett, and many others; see the detailed discussion in BH19 and the references therein. But treating semantic values as upsets is common to most of them.
\subsection{Nuclear interpretations}
Let $\mathit{Up}(\mathcal{F})$ be the set of upsets in $\mathcal{F}=(X,\leq)$, and for $U\subseteq X$, let $\uparrow\!U$ be the set $\{x\in X\!: \exists y\in U\: y\leq x\}$. Also, for $x\in X$, $\uparrow\!x =\; \uparrow\!\{x\}$; these are the \emph{principal} upsets (the upsets are exactly the unions of principal upsets).
In algebraic terms, $(\mathit{Up}(\mathcal{F}),\emptyset,\cap,\cup,\to)$ is a \emph{complete Heyting algebra}, also called a \emph{locale}, where, for $U,V\in \mathit{Up}(\mathcal{F})$,
\ex.[] $U\to V = \{x\in X\!:\; \uparrow\!x \cap U \subseteq V\}$ \footnote{\label{fn:alexandroff}More precisely, it is a \emph{spatial} locale, since it comes from the open sets of a topology; completeness is the property of being closed under arbitrary joins (unions), which holds for all topologies. In this case the topology is rather special (the \emph{Alexandroff} topology): the open sets are the upsets on a poset, and these are also closed under arbitrary meets (intersections). }
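The Heyting structure of $\mathit{Up}(\mathcal{F})$ is easy to experiment with on a small finite poset. The following Python sketch (the names \texttt{up}, \texttt{upsets}, and \texttt{himp} are ours, purely for illustration) enumerates the upsets of the three-element poset with root $r$ below two incomparable points $a,b$, and computes the implication $U\to V$ exactly as displayed above.

```python
from itertools import combinations

# A small poset F = (X, <=): r below the incomparable points a and b.
X = frozenset({"r", "a", "b"})
leq = {("r", "r"), ("a", "a"), ("b", "b"), ("r", "a"), ("r", "b")}

def up(x):
    """Principal upset of x: all y with x <= y."""
    return frozenset(y for y in X if (x, y) in leq)

def upsets():
    """All upward closed subsets of X."""
    subs = (frozenset(s) for n in range(len(X) + 1)
            for s in combinations(sorted(X), n))
    return [U for U in subs if all(up(x) <= U for x in U)]

def himp(U, V):
    """Heyting implication in Up(F): {x in X : up(x) & U <= V}."""
    return frozenset(x for x in X if (up(x) & U) <= V)
```

On this poset there are five upsets, and one can verify the residuation property $W\cap U\subseteq V \Leftrightarrow W\subseteq U\to V$ characterizing the Heyting implication, as well as, e.g., $\{a\}\to\{b\} = \{b\}$.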
\begin{definition}[intuitionistic interpretations and consistency]\label{def:possworldssem} Let a poset $\mathcal{F}=(X,\leq)$ be given.
\begin{enumerate} \item An \emph{intuitionistic interpretation of $\mathsf{IPC}$ on $\mathcal{F}$} is a map $I^\mathcal{F}$ from $\{\bot,\land,\lor,\to\}$ to functions of the corresponding arity taking upsets to upsets, together with a function $I^{\mathcal{F},\mathit{at}}\!:\mathit{Up}(\mathcal{F})\to\mathit{Up}(\mathcal{F})$ ($\mathit{at}$ for `atom').
\item A \emph{valuation on $\mathcal{F}$} is a map $v\!:\mathit{Prop}\to \mathit{Up}(\mathcal{F})$.\footnote{There is no constraint on $v$. But, as we will see, particular intuitionistic interpretations may require all semantic values, even those of propositional atoms, to be a special kind of upset. That's the reason for including the function $I^{\mathcal{F},\mathit{at}}$ in interpretations; see the next item. }
\item $I^\mathcal{F}$, $I^{\mathcal{F},\mathit{at}}$, and $v$ compositionally assign a semantic value $[\![\varphi]\!]^{I^\mathcal{F}}_v \in\mathit{Up}(\mathcal{F})$ to each formula $\varphi$, as follows:
\begin{enumerate} \item $[\![ p ]\!]^{I^\mathcal{F}}_v = I^{\mathcal{F},\mathit{at}}(v(p))$ \item $[\![ \#(\varphi_1,\ldots,\varphi_n) ]\!]^{I^\mathcal{F}}_v = I^\mathcal{F}(\#)([\![ \varphi_1 ]\!]^{I^\mathcal{F}}_v,\ldots,[\![ \varphi_n ]\!]^{I^\mathcal{F}}_v)$ \end{enumerate} \item $I^\mathcal{F}$ is \emph{consistent with a consequence relation $\vdash$} if, whenever $\Gamma\vdash\varphi$, we have, for all valuations $v$ on $\mathcal{F}$: \ex.\label{consistencygeneral} $\bigcap_{\psi\in\Gamma}[\![ \psi ]\!]^{I^\mathcal{F}}_v \subseteq\: [\![ \varphi ]\!]^{I^\mathcal{F}}_v$
We take $\bigcap\emptyset=X$, so a special case of \ref{consistencygeneral} is that if $\:\vdash\varphi$, then for all $v$, \ex.\label{consistencyspecial} $[\![ \varphi ]\!]^{I^\mathcal{F}}_v =X$
\end{enumerate} \end{definition}
We can now see how familiar semantics for $\mathsf{IPC}$ each has a corresponding kind of intuitionistic interpretation, and, among these, a \emph{standard} interpretation. The starting point is the \emph{nuclear} semantic framework from \cite{bezhanishvili-holliday16}.
A \emph{nucleus on $\mathit{Up}(\mathcal{F})$} is a function $j$ from $\mathit{Up}(\mathcal{F})$ to $\mathit{Up}(\mathcal{F})$ such that the following holds for all $U,V \in \mathit{Up}(\mathcal{F})$: \begin{enumerate} \item[(i)] $U \subseteq jU$ \hspace{19ex} (inflationarity)
\item[(ii)] $jjU \subseteq jU$ \hspace{17ex} (idempotence)
\item[(iii)] $j(U\cap V) = jU \cap jV$ \hspace{6ex} (multiplicativity) \end{enumerate} $j$ is \emph{dense} if $j\emptyset=\emptyset$. This is an instance of the general notion of a nucleus on a Heyting algebra, which is defined analogously, with $\cap$ replaced by $\land$, $\emptyset$ by 0, and $\subseteq$ by the partial order $a\leq b \Leftrightarrow a\land b= a$.
It easily follows that $\subseteq$ in (ii) can be replaced by $=$, and that we have: \begin{enumerate} \item[(iv)] $U\subseteq V$ implies $jU \subseteq jV$ \hspace{2ex} (monotonicity) \end{enumerate} We say that $U\in\mathit{Up}(\mathcal{F})$ is \emph{fixed} if it is a fixpoint of $j$, i.e.\ if $jU=U$. Let
\ex.[] $\mathit{Up}(\mathcal{F})_j$
be the set of fixed elements of $\mathit{Up}(\mathcal{F})$. BH19 calls $(\mathcal{F},j)$ a \emph{nuclear frame}.
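A concrete nucleus other than the identity is the \emph{double negation} nucleus $jU = \neg\neg U$, where $\neg U = U\to\emptyset$ in $\mathit{Up}(\mathcal{F})$. The Python sketch below (our own finite test, not from BH19) checks axioms (i)--(iii) on the three-element poset used earlier and exhibits an upset that is not fixed.

```python
from itertools import combinations

# Poset: r below the incomparable points a and b.
X = frozenset({"r", "a", "b"})
leq = {("r", "r"), ("a", "a"), ("b", "b"), ("r", "a"), ("r", "b")}

def up(x):
    return frozenset(y for y in X if (x, y) in leq)

def upsets():
    subs = (frozenset(s) for n in range(len(X) + 1)
            for s in combinations(sorted(X), n))
    return [U for U in subs if all(up(x) <= U for x in U)]

def neg(U):
    """Intuitionistic negation in Up(F): U -> empty set."""
    return frozenset(x for x in X if not (up(x) & U))

def j(U):
    """Candidate nucleus: double negation."""
    return neg(neg(U))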
\begin{definition}[nuclear interpretations]\label{def:nuclear} Let $j$ be a nucleus on $\mathit{Up}(\mathcal{F})$.
\begin{enumerate} \item A \emph{nuclear interpretation on} $(\mathcal{F},j)$ is an intuitionistic interpretation, written $I^{\mathcal{F},j}$, in which the connectives are interpreted as functions taking fixed upsets to fixed upsets, and $I^{\mathcal{F},j,\mathit{at}}=j$. Thus, semantic values, calculated as in Definition \ref{def:possworldssem}:3, are fixed upsets. Valuations are as before functions from $\mathit{Prop}$ to $\mathit{Up}(\mathcal{F})$.
\item The \emph{standard} nuclear interpretation, $I^{\mathcal{F},j}_\text{st}$, is defined as follows, for fixed upsets $U,V$:
\begin{enumerate} \item $I^{\mathcal{F},j}_\text{st}(\bot)=j\emptyset$ \item $I^{\mathcal{F},j}_\text{st}(\land)(U,V)= U\cap V$ \item $I^{\mathcal{F},j}_\text{st}(\lor)(U,V)= j(U\cup V)$ \item $I^{\mathcal{F},j}_\text{st}(\to)(U,V)= \{x\in X\!:\; \uparrow\!x \cap U \subseteq V\}$ \end{enumerate} \end{enumerate} \end{definition}
That $I^{\mathcal{F},j}_\text{st}$ is indeed a nuclear interpretation, i.e.\ that semantic values are fixed upsets, follows from a well-known fact about Heyting algebras: if $j$ is a nucleus on a Heyting algebra $H$, then the algebra $H_j$ of fixpoints of $j$ is again a Heyting algebra (and if $H$ is a locale, so is $H_j$), with $0_j = j0$, $a \land_j b = a\land b$, $a\lor_j b = j(a\lor b)$ (or $\bigvee_jA = j\bigvee \!A$), and $a\to_j b = a\to b$. In particular, if $U,V\in \mathit{Up}(\mathcal{F})_j$, then $I^{\mathcal{F},j}_\text{st}(\to)(U,V) \in \mathit{Up}(\mathcal{F})_j$. Let
\ex.[] $\mathbf{Up}(\mathcal{F})_j = (\mathit{Up}(\mathcal{F})_j,\emptyset_j,\cap_j,\cup_j,\to_j)$
\subsection{Examples}
\begin{example}[Kripke semantics]\label{ex:kripke-int} Given $\mathcal{F}$, let $j_k$ be the identity function on $\mathit{Up}(\mathcal{F})$. $j_k$ is obviously a nucleus. So \emph{Kripke frames} are nuclear frames of the form $(\mathcal{F},j_k)$ (or simply $\mathcal{F}$), and the \emph{standard Kripke interpretation} $I^{\mathcal{F},j_k}_\text{st}$ is now as in Definition \ref{def:nuclear}, with $j=j_k$. Clearly, these are exactly the usual truth conditions in Kripke semantics for intuitionistic logic.
More generally, let a \emph{Kripke interpretation on $\mathcal{F}$} be a nuclear interpretation $I^{\mathcal{F},j_k}$. Semantic values $[\![\varphi]\!]^{I^{\mathcal{F},j_k}}_v$ are upsets (which are trivially fixed), and $[\![ p ]\!]^{I^{\mathcal{F},j_k}}_v = j_kv(p) = v(p)$. \end{example}
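The standard Kripke interpretation is easy to run on a finite frame. In the sketch below (the formula encoding and the names \texttt{ev}, \texttt{NOT\_P} are ours), $p\to p$ receives value $X$ under every valuation, while $p\lor\neg p$ fails at the root of a three-element frame, as expected intuitionistically.

```python
# Poset: r below the incomparable points a and b.
X = frozenset({"r", "a", "b"})
leq = {("r", "r"), ("a", "a"), ("b", "b"), ("r", "a"), ("r", "b")}

def up(x):
    return frozenset(y for y in X if (x, y) in leq)

def ev(phi, v):
    """Standard Kripke semantic value of a formula, an upset of the frame.
    Formulas: 'p' (atom), ('bot',), ('and'|'or'|'imp', phi, psi)."""
    if isinstance(phi, str):
        return v[phi]
    if phi[0] == "bot":
        return frozenset()
    A, B = ev(phi[1], v), ev(phi[2], v)
    if phi[0] == "and":
        return A & B
    if phi[0] == "or":
        return A | B
    return frozenset(x for x in X if (up(x) & A) <= B)  # "imp"

v = {"p": frozenset({"a"})}
NOT_P = ("imp", "p", ("bot",))  # negation as p -> bot
```

With $v(p)=\{a\}$, the value of $p\lor\neg p$ is $\{a,b\}$, which omits the root $r$.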
\begin{example}[Beth semantics]\label{ex:beth} There are several versions of Beth semantics, but here we follow BH19: a \emph{Beth frame} is a poset $\mathcal{F}$ (rather than a tree), and a \emph{path} is a chain $C$ in $\mathcal{F}$ closed under upper bounds: if $x$ is an upper bound of $C$, then $x\in C$. So the nuclear setting applies without change. Let, for $U\in \mathit{Up}(\mathcal{F})$,
\ex. $j_bU = \{x\in X\!: \text{every path through $x$ intersects $U$}\}$
Then one verifies that $j_b$ is a nucleus on $\mathit{Up}(\mathcal{F})$ --- the \emph{Beth nucleus} --- and the \emph{standard Beth interpretation} $I^{\mathcal{F},j_b}_\text{st}$ is as in Definition \ref{def:nuclear} with $j=j_b$. Thus, it differs from the standard Kripke interpretation only in the semantic values of atoms and disjunctions.
In general, we define a \emph{Beth interpretation on $\mathcal{F}$} to be any nuclear interpretation of the form $I^{\mathcal{F},j_b}$. Semantic values are fixed upsets relative to $j_b$, and $[\![ p ]\!]^{I^{\mathcal{F},j_b}}_v = j_bv(p)$. \end{example}
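On a finite frame the Beth nucleus can be computed directly. In this sketch (our own illustration), the paths of the poset with root $r$ below incomparable $a,b$ are $\{a\}$, $\{b\}$, $\{r,a\}$, $\{r,b\}$, and $j_b(\{a\}\cup\{b\})=X$, so the standard Beth disjunction genuinely differs from union.

```python
from itertools import combinations

# Poset: r below the incomparable points a and b.
X = frozenset({"r", "a", "b"})
leq = {("r", "r"), ("a", "a"), ("b", "b"), ("r", "a"), ("r", "b")}

def is_chain(C):
    return all((x, y) in leq or (y, x) in leq for x in C for y in C)

def upper_bounds(C):
    return frozenset(x for x in X if all((c, x) in leq for c in C))

# Paths: nonempty chains closed under upper bounds.
PATHS = [C for n in range(1, len(X) + 1)
         for s in combinations(sorted(X), n)
         for C in [frozenset(s)]
         if is_chain(C) and upper_bounds(C) <= C]

def j_b(U):
    """Beth nucleus: x is in j_b(U) iff every path through x meets U."""
    return frozenset(x for x in X
                     if all(C & U for C in PATHS if x in C))
```

Note that $j_b$ is dense here ($j_b\emptyset=\emptyset$), while $\{a\}$ is fixed and $\{a,b\}$ is not.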
\begin{example}[Dragalin semantics] Dragalin generalized Beth semantics by considering a more general notion of a path, called a \emph{development} by Bezhanishvili and Holliday. Each $x\in X$ is associated with a set $D(x)$ of subsets of $X$ satisfying certain conditions. Dragalin proved that
\ex.[] $j_DU = \{x\in X\!: \text{every development in $D(x)$ intersects $U$}\}$
is a nucleus on $\mathit{Up}(\mathcal{F})$, so Dragalin semantics is an instance of nuclear semantics. The importance of this kind of nuclear semantics is seen from the result in \cite{bezhanishvili-holliday16} that \emph{every} locale can be realized as the set of fixed upsets of a nuclear frame of the form $(\mathcal{F},j_D)$ (this does not hold for Beth semantics).
Just as before, one has the \emph{standard Dragalin interpretation}, and a notion of an arbitrary \emph{Dragalin interpretation}. \end{example}
We have now seen three main kinds of what we may call \emph{nuclear semantics}, in a sense that can be made precise as follows:
\begin{definition}[nuclear semantics]\label{def:nuclearsemantics} A \emph{nuclear semantics} for $\mathsf{IPC}$ is a class $\mathcal{C}$ of nuclear frames. For $(\mathcal{F},j)\in\mathcal{C}$, a \emph{$\mathcal{C}$-interpretation on $(\mathcal{F},j)$} is a nuclear interpretation $I^{\mathcal{F},j}$. \end{definition}
For example, \emph{Beth semantics} is then identified with the class of nuclear frames $(\mathcal{F},j_b)$ where $\mathcal{F}$ is a Beth frame, and \emph{Beth interpretations on} $(\mathcal{F},j_b)$ are as in Example \ref{ex:beth} above. BH19 shows that $\vdash_\mathsf{IPC}$ is sound for \emph{any} nuclear semantics, if we use the standard interpretation. The next semantic framework for $\mathsf{IPC}$ is not strictly nuclear, but almost.
\begin{example}[topological semantics]\label{ex:topo-int} Topological semantics is the oldest formal semantics for $\mathsf{IPC}$, going back to \cite{stone37} and \cite{tarski38}. That it can be construed as an instance of nuclear semantics follows from a special case of a theorem of Dragalin; for a proof of the general case in the present setting see \cite{bezhanishvili-holliday16} (Theorem 2.8).
Let $\Omega(X)$ be a topology on $X$, i.e.\ $\Omega(X)$ is its set of opens.\footnote{So $\Omega(X)\subseteq\mathcal{P}(X)$, $\emptyset,X\in \Omega(X)$, and $\Omega(X)$ is closed under finite intersections and arbitrary unions. } Then $\mathbf{\Omega}(X) = (\Omega(X),\emptyset,\cap,\cup,\to)$ is a complete Heyting algebra, with
\ex.[] $U\to V = \mathit{int}((X\!-\!U)\cup V)$
where $\mathit{int}$ is the \emph{interior operation}: for $Y\subseteq X$,
\ex.[] $\mathit{int}(Y) = \bigcup\{U\!\in\Omega(X)\!: \:U\subseteq Y\}$
The open sets themselves need not be upsets of a partial order, but Dragalin's result is that $\mathbf{\Omega}(X)$ is \emph{isomorphic} to the fixed upsets of a nuclear frame.
Let $\mathcal{F}$ be the poset $(\Omega(X)^-,\supseteq)$, where $\Omega(X)^- = \Omega(X) - \{\emptyset\}$ (so the upsets of $\mathcal{F}$ are the downsets of $(\Omega(X)^-,\subseteq)$). We use $A,B,\ldots$ for upsets of $\mathcal{F}$, and $Z,U,V\ldots$ for open sets. Define:
\ex.[] $jA = \;\uparrow\! \bigcup A$
\begin{theorem}[Dragalin]\label{dragalinthm} $j$ is a dense nucleus on $\mathit{Up}(\mathcal{F})$, and the function $h(U) =\; \uparrow\!U$ is an isomorphism from $\mathbf{\Omega}(X)$ to $\mathbf{Up}(\mathcal{F})_j$. \end{theorem}
So in this sense, a topology can be seen as a nuclear frame, defined via the bijection $h$. Next, the \emph{standard topological semantics} on $\Omega(X)$ for $\mathsf{IPC}$, originating from Stone and Tarski, is given by the interpretation we write $I^{\Omega(X)}_\text{st}$, as follows. Valuations are functions from \emph{Prop} to $\Omega(X)$, and semantic values are calculated compositionally from these valuations:
\ex.\label{topotruth} \a.\label{topotrutha} $I^{\Omega(X)}_\text{st}(\bot)=\emptyset$
\b. $I^{\Omega(X)}_\text{st}(\land)(U,V)= U\cap V$
\b. $I^{\Omega(X)}_\text{st}(\lor)(U,V)= U\cup V$
\b. $I^{\Omega(X)}_\text{st}(\to)(U,V)= \mathit{int}((X\!-\!U)\cup V)$
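The Stone--Tarski clauses just listed can be run directly on a small topology. In this sketch (the topology and the names \texttt{interior}, \texttt{himp}, \texttt{neg} are our own), $X=\{1,2,3\}$ carries the chain topology $\emptyset\subset\{1\}\subset\{1,2\}\subset X$; the implication clause gives $\neg\{1\}=\emptyset$, so excluded middle fails while $p\to p$ is valid.

```python
X = frozenset({1, 2, 3})
# A chain topology on X: the opens are nested.
OPENS = [frozenset(), frozenset({1}), frozenset({1, 2}), X]

def interior(Y):
    """Union of all opens contained in Y."""
    return frozenset().union(*(U for U in OPENS if U <= Y))

def himp(U, V):
    """Standard topological implication: int((X - U) | V)."""
    return interior((X - U) | V)

def neg(U):
    """Standard topological negation: int(X - U)."""
    return himp(U, frozenset())
```

As in the poset case, implication is the residual of intersection on the opens.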
This is not an intuitionistic possible worlds semantics in the sense of Definition \ref{def:possworldssem}, but it is isomorphic to one:
\begin{lemma}\label{topohelplemma} If $\mathcal{F}$, $j$, and $h$ are as in Theorem \ref{dragalinthm}, then, for all $A,B\in\mathit{Up}(\mathcal{F})_j$, $I^{\mathcal{F},j}_\text{st}(\land)(A,B) = h(I^{\Omega(X)}_\text{st}\!(\land)(h^{-1}(A),h^{-1}(B)))$, and similarly for the other connectives. \end{lemma}
\noindent \emph{Proof. } The proof is a routine verification, using the density of $j$ for $\bot$, and, for $\lor$ and $\to$, the following observation from the proof of Theorem \ref{dragalinthm}:
\ex.\label{jA=Afact} $jA=A\;$ iff $\;A=\emptyset$ or $A=h(U)$ for some $U\in \Omega(X)^-$.
We omit the details.
$\Box$
In analogy with the preceding examples, let us say that a \emph{topological interpretation of $\mathsf{IPC}$ on} $\Omega(X)$ is a map $I^{\Omega(X)}$ from $\{\bot,\land,\lor,\to\}$ to functions of the corresponding arity taking open sets to open sets. A \emph{topological valuation} is a map $v\!: \mathit{Prop} \to \Omega(X)$, and semantic values $[\![\varphi]\!]^{I^{\Omega(X)}}_v$ are assigned compositionally just as before. Also, consistency of such an interpretation with a consequence relation is defined as before. \end{example}
\section{Carnap categoricity\label{s:categoricity}}
$\vdash_\mathsf{IPC}$ is sound and complete for all the semantics (classes of nuclear frames, or topological spaces) discussed in the preceding section --- relative, of course, to what we are here calling the standard interpretations; see BH19 for proofs. However, our interest is not completeness, but categoricity. For each particular semantics $\mathcal{C}$ mentioned above, we defined the notion of a (local) $\mathcal{C}$-interpretation of the connectives, and described how semantic values relative to such an interpretation are computed for each formula.
The standard interpretations are all consistent with $\vdash_\mathsf{IPC}$. But could there be \emph{other} interpretations, of the relevant kind, that are \emph{also} consistent with $\vdash_\mathsf{IPC}$? This is Carnap's question.
\begin{definition}[categoricity]\label{def:categoricity} A consequence relation $\vdash$ in the propositional language is (Carnap) \emph{categorical} with respect to a nuclear semantics $\mathcal{C}$ if, for every $(\mathcal{F},j)\in\mathcal{C}$, the only $\mathcal{C}$-interpretation on $(\mathcal{F},j)$ consistent with $\vdash$ is $I^{\mathcal{F},j}_\text{st}$. Similarly for topological semantics. If $\vdash$ is associated with a particular logic $\mathsf{L}$, we also say that $\mathsf{L}$ is \emph{categorical} (with respect to $\mathcal{C}$) when $\vdash$ is. \end{definition}
Our main result is the following.
\begin{theorem}\label{mainthm} Let $(\mathcal{F},j)$ be a nuclear frame, and $I^{\mathcal{F},j}$ a nuclear interpretation which is consistent with $\vdash_\mathsf{IPC}$. Then $I^{\mathcal{F},j} = I^{\mathcal{F},j}_\text{st}$. \end{theorem}
\noindent \emph{Proof. } We first show:
\ex.[(a)] $I^{\mathcal{F},j}(\land)$ is standard.
Here and below, $U,V\in \mathit{Up}(\mathcal{F})_j$. Since $p\land q\vdash_\mathsf{IPC} p$, we have $I^{\mathcal{F},j}(\land)(U,V)\subseteq U$. Let us see in detail how this happens. Let $v$ be a valuation such that $v(p)=U$ and $v(q)=V$. The requirement that $v(p),v(q)\in \mathit{Up}(\mathcal{F})$ is satisfied. By the truth definition (Definition \ref{def:possworldssem}:3) and consistency with $\vdash_\mathsf{IPC}$, we have \[ [\![ p\land q]\!]^{I^{\mathcal{F},j}}_v \!= I^{\mathcal{F},j}\!(\land)([\![ p]\!]^{I^{\mathcal{F},j}}_v\!,[\![ q]\!]^{I^{\mathcal{F},j}}_v) = I^{\mathcal{F},j}\!(\land)(jU,jV) = I^{\mathcal{F},j}(\land)(U,V) \subseteq [\![ p]\!]^{I^{\mathcal{F},j}}_v \!= U \] (note that $U$ and $V$ are fixpoints). Similarly, $I^{\mathcal{F},j}(\land)(U,V)\subseteq V$. In the other direction, we use the fact that $p,q\vdash_\mathsf{IPC} p\land q$ to see that $U\cap V \subseteq I^{\mathcal{F},j}(\land)(U,V)$. Thus, $I^{\mathcal{F},j}(\land)(U,V) = U\cap V = I^{\mathcal{F},j}_\text{st}(\land)(U,V)$. Next,
\ex.[(b)] $I^{\mathcal{F},j}(\bot)$ is standard.
This follows from the fact that $\bot\vdash_\mathsf{IPC} p$: letting $v(p) = \emptyset$, we obtain that $I^{\mathcal{F},j}(\bot) \subseteq [\![ p]\!]^{\mathcal{F},j}_v = j\emptyset$. Since $\emptyset \subseteq I^{\mathcal{F},j}(\bot)$ we also have $j\emptyset \subseteq jI^{\mathcal{F},j}(\bot) = I^{\mathcal{F},j}(\bot)$, by the monotonicity of $j$ and the assumption that $I^{\mathcal{F},j}(\bot)$ is a fixed upset. Thus, $I^{\mathcal{F},j}(\bot) = j\emptyset = I^{\mathcal{F},j}_\text{st}(\bot)$.
We next establish some facts about the interpretation of $\to$.
\ex.[(c)] $I^{\mathcal{F},j}(\to)(U,V) \:\subseteq\: I^{\mathcal{F},j}_\text{st}(\to)(U,V)$
To see this, take $x\in I^{\mathcal{F},j}(\to)(U,V)$. In order to show that $x\in I^{\mathcal{F},j}_\text{st}(\to)(U,V) = I^{\mathcal{F}}_\text{st}(\to)(U,V)$, we must show $\uparrow\!x \cap\: U \subseteq V$. From the fact that $p,p\to q\vdash_\mathsf{IPC} q$, we obtain, as should now be clear,
\ex.[] $U \cap I^{\mathcal{F},j}(\to)(U,V) \:\subseteq\: V$
Take $y\in\; \uparrow\!\!x \cap\: U$. Since $y\geq x$ and $I^{\mathcal{F},j}(\to)(U,V)$ is an upset, $y\in I^{\mathcal{F},j}(\to)(U,V)$. So $y\in V$ follows, and (c) is proved.
\ex.[(d)] $U\subseteq V\;$ iff $\;I^{\mathcal{F},j}(\to)(U,V) = X$
Suppose $U\subseteq V$, so $U\cap V = U$. Since $\vdash_\mathsf{IPC} p\land q \to q$, we obtain, from consistency with $\vdash_\mathsf{IPC}$ and the fact that $\land$ is standard, that $I^{\mathcal{F},j}(\to)(U\cap V,V)=X$, i.e.\ that $I^{\mathcal{F},j}(\to)(U,V)=X$. In the other direction, suppose $I^{\mathcal{F},j}(\to)(U,V)=X$. By (c), $I^{\mathcal{F},j}_\text{st}(\to)(U,V)=X$. Then for all $x\in X$, $x\in U \Rightarrow x\in V$ (since $x\leq x$), i.e.\ $U\subseteq V$. This proves (d).
We are now able to show:
\ex.[(e)] $I^{\mathcal{F},j}(\lor)$ is standard.
From $p\vdash_\mathsf{IPC} p\lor q$ and $q\vdash_\mathsf{IPC} p\lor q$ we obtain that $U\cup V\subseteq I^{\mathcal{F},j}(\lor)(U,V)$. By monotonicity and since $I^{\mathcal{F},j}(\lor)(U,V)$ is a fixed upset, we have $j(U\cup V) \subseteq jI^{\mathcal{F},j}(\lor)(U,V) = I^{\mathcal{F},j}(\lor)(U,V)$. In the other direction, let $v$ be a valuation such that $v(p)=U$, $v(q)=V$, and $v(r)=U\cup V$. (Note that $U\cup V$ is an upset, even though it need not be fixed.\footnote{But we could equally well have taken $v(r)=j(U\cup V)$. }) Since $U \subseteq U\cup V$, we have $jU \subseteq j(U\cup V)$, and so by (d), $I^{\mathcal{F},j}(\to)(jU,j(U\cup V)) = I^{\mathcal{F},j}(\to)(U,j(U\cup V)) = [\![ p\to r]\!]^{I^{\mathcal{F},j}}_v = X$. Similarly, $[\![ q\to r]\!]^{I^{\mathcal{F},j}}_v = X$. Then, from consistency and the fact that
\ex.[] $p\to r,q\to r,p\lor q\vdash_\mathsf{IPC} r$
we obtain $I^{\mathcal{F},j}(\lor)(U,V) \subseteq j(U\cup V)$. That is, $I^{\mathcal{F},j}(\lor)(U,V) = j(U\cup V) = I^{\mathcal{F},j}_\text{st}(\lor)(U,V)$, and (e) is proved.
Finally, we show:
\ex.[(f)] $I^{\mathcal{F},j}(\to)$ is standard.
By (c), it suffices to show that $I^{\mathcal{F},j}_\text{st}(\to)(U,V) \subseteq I^{\mathcal{F},j}(\to)(U,V)$. Thus, take $x\in I^{\mathcal{F},j}_\text{st}(\to)(U,V)$, i.e.\ $\uparrow\!x\cap U \subseteq V$. Using the multiplicativity and monotonicity of $j$, we have
\ex.[] $j(\uparrow\!x\cap U) = j \!\!\uparrow\!x \cap jU = j \!\!\uparrow\!x \cap U \subseteq jV = V$
Let $v$ be such that $v(p) = \;\uparrow\!\!x$, $v(q)=U$, and $v(r)=V$. Thus, since $\land$ is standard,
\ex.[] $[\![ p\land q]\!]^{I^{\mathcal{F},j}}_v =\: j \!\!\uparrow\!x \cap U \subseteq V =\, [\![ r]\!]^{I^{\mathcal{F},j}}_v$
By (d), it follows that $[\![ p\land q\to r]\!]^{I^{\mathcal{F},j}}_v =\, X$. Since
\ex.[] $p,p\land q\to r \vdash_\mathsf{IPC} q\to r$
we conclude that $ j \!\!\uparrow\!\!x \subseteq I^{\mathcal{F},j}(\to)(U,V)$. And since $x\in \:\uparrow\!\!x \subseteq j\!\!\uparrow\!\!x$ (by inflationarity), we have that $x\in I^{\mathcal{F},j}(\to)(U,V)$. That is, $I^{\mathcal{F},j}(\to)(U,V) = I^{\mathcal{F},j}_\text{st}(\to)(U,V)$.
This concludes the proof that $I^{\mathcal{F},j} = \,I^{\mathcal{F},j}_\text{st}$.
$\Box$
\begin{corollary} $\mathsf{IPC}$ is Carnap categorical with respect to any nuclear semantics, such as Kripke semantics, Beth semantics, or Dragalin semantics. It is also categorical with respect to topological semantics. Thus, the only interpretation, of the respective kind, of the propositional connectives that is consistent with $\vdash_\mathsf{IPC}$ is the standard interpretation. \end{corollary}
\noindent \emph{Proof. } This is immediate from the theorem for nuclear semantics. For topological semantics, let $\Omega(X)$ be a topology on $X$, and suppose the topological interpretation $I^{\Omega(X)}$ is consistent with $\vdash_\mathsf{IPC}$. Let $\mathcal{F}$, $j$, and $h$ be as in Theorem \ref{dragalinthm}. Define a nuclear interpretation $I^{\mathcal{F},j}$ as follows: $I^{\mathcal{F},j}(\land)(A,B) = h(I^{\Omega(X)}(\land)(h^{-1}(A),h^{-1}(B)))$, and similarly for the other connectives. Also, if $v\!:\mathit{Prop}\to \mathit{Up}(\mathcal{F})$, define the topological valuation $v^*\!:\mathit{Prop}\to \Omega(X)$ by $v^*(p)=h^{-1}(v(p))$. It follows by an easy induction over formulas (using \ref{jA=Afact} for the atomic case) that for every $\varphi$,
\ex.[] $[\![\varphi]\!]^{I^{\mathcal{F},j}}_{v} = \;h([\![\varphi]\!]^{I^{\Omega(X)}}_{v^*})$
Since $h$ is monotone ($U\subseteq V \Rightarrow h(U)\subseteq h(V)$), this entails that $I^{\mathcal{F},j}$ is consistent with $\vdash_\mathsf{IPC}$. Then we have: $I^{\Omega(X)}(\land)(U,V) = h^{-1}(I^{\mathcal{F},j}(\land)(h(U),h(V)))$ (by definition) $= h^{-1}(I^{\mathcal{F},j}_\text{st}(\land)(h(U),h(V)))$ (by the theorem) $= I^{\Omega(X)}_\text{st}(\land)(U,V)$ (by Lemma \ref{topohelplemma}). Similarly for the other connectives. Thus, $I^{\Omega(X)} =\, I^{\Omega(X)}_\text{st}$.
$\Box$
\begin{remark}\label{remark:classical} The Carnap categoricity of $\mathsf{CPC}$ relative to classical possible worlds semantics, proved in \cite{bonwes16}, is a special case of the result for $\mathsf{IPC}$, since on posets that are sets of isolated points, $\mathsf{IPC}$ and $\mathsf{CPC}$ coincide. In more detail: suppose $W$ is any non-empty set (of `worlds') and $I^W$ an interpretation of the connectives over $W$ --- i.e.\ $I^W$ assigns to each connective a function on $\mathcal{P}(W)$ of appropriate arity --- which is consistent with $\vdash_\mathsf{CPC}$. The upsets of the poset $\mathcal{F} = (W,\{(x,x)\!: x\in W\})$ are exactly the subsets of $W$. Thus $I^\mathcal{F} = I^W$ is in fact a Kripke interpretation, in the sense of Example \ref{ex:kripke-int}, which is consistent with $\vdash_\mathsf{CPC}$, hence with $\vdash_\mathsf{IPC}$. By the theorem, $I^\mathcal{F}$ is standard, and so $I^W$ is standard (for example, $I^W(\to)(U,V)=(W\!-\!U)\cup V$).
Similarly, letting $\mathcal{F}$ be a single reflexive point $(\{x\},\{(x,x)\})$, the categoricity of $\mathsf{CPC}$ relative to classical 2-valued semantics, mentioned in Section \ref{s:intro}, also follows from Theorem \ref{mainthm}: in this case the interpretation functions are truth functions, and the truth values 0 and 1 correspond to the upsets $\emptyset$ and $\{x\}$. \end{remark}
As the remark illustrates, the power of the intuitionistic categoricity theorem comes from its strictly local character: the result holds for \emph{each} nuclear frame, no matter how trivial.
The proof of Theorem \ref{mainthm} uses essentially the assumption that consistency with $\vdash_\mathsf{IPC}$ means that \ref{consistencygeneral} holds (for all $v$). One might be tempted to suppose that since $\mathsf{IPC}$ satisfies the Deduction Theorem, it would suffice that all $\mathsf{IPC}$-\emph{theorems} are valid (so that \ref{consistencyspecial} holds for all $v$). It is instructive to see why this is \emph{not} the case.
To begin, we can see where the proof breaks down. Suppose $\psi\vdash_\mathsf{IPC}\varphi$. By the Deduction Theorem, we have $\vdash_\mathsf{IPC}\psi\to\varphi$, so by the soundness assumption, $[\![\psi\to\varphi]\!]^{I^{\mathcal{F},j}}_v=I^{\mathcal{F},j}(\to)([\![\psi]\!]^{I^{\mathcal{F},j}}_v,[\![\varphi]\!]^{I^{\mathcal{F},j}}_v)=X$ for all $v$. We wish to conclude that $[\![\psi]\!]^{I^{\mathcal{F},j}}_v \subseteq [\![\varphi]\!]^{I^{\mathcal{F},j}}_v$. This would follow from (d) in the proof, but (d) presupposes consistency in the stronger sense. More precisely, (d) relies on (c), whose proof in turn uses that from $p,p\to q\vdash_\mathsf{IPC} q$ we are able to conclude that, for any fixed upsets $U,V$, we have $U \cap I^{\mathcal{F},j}(\to)(U,V) \subseteq V$. But it should be fairly clear that from $\vdash_\mathsf{IPC} p\to((p\to q)\to q)$, no such conclusion can be drawn.
In fact, Wesley Holliday found a counter-example (\emph{p.c.}): a non-standard interpretation of the connectives which validates all $\mathsf{IPC}$-theorems, and hence is not consistent with $\vdash_\mathsf{IPC}$ in the sense of Definition \ref{def:possworldssem}:4. With his kind permission, we present the example here.\footnote{The example was originally intended to show that the stronger notion of consistency is needed for the result in \cite{bonwes16} about $\mathsf{CPC}$; see Remark \ref{remark:classical}. Here we have adapted it to $\mathsf{IPC}$. }
\begin{example}[Holliday]\label{ex:holliday} Let $\Omega(X)$ be any topological space. We will define a topological interpretation $I^{\Omega(X)}$ (see Example \ref{ex:topo-int}) such that (a) if $\vdash_\mathsf{IPC}\varphi$ then for all valuations $v$ on $\Omega(X)$, $[\![\varphi]\!]^{I^{\Omega(X)}}_v = X$, but (b) $I^{\Omega(X)}\neq I^{\Omega(X)}_\text{st}$. This gives a generic topological counter-example. For more concreteness, we can start with a Kripke frame $\mathcal{F}=(X,\leq)$ and let $\Omega(X)$ be the Alexandroff topology described in footnote \ref{fn:alexandroff}. As made clear in BH19, Sect.\ 2.3, Kripke semantics essentially \emph{is} topological semantics based on Alexandroff spaces. It is easy to see that the counter-example then becomes a non-standard Kripke interpretation $I^{\mathcal{F},j_k}$ (see Example \ref{ex:kripke-int}) which validates all theorems of $\mathsf{IPC}$.
For this example it is easier to use $\neg,\land,\lor,\to$ as primitive connectives. \ref{topotrutha} is then replaced by
\ex.[\ref{topotrutha}$'$] $I^{\Omega(X)}_\text{st}(\neg)(U)= \mathit{int}(X-U)$
Recall the \emph{closure} operation $\mathit{cl}$, the dual of $\mathit{int}$: $\mathit{cl}(Y)$ is the smallest closed set (set whose complement is open) containing $Y$. An easy calculation shows
\ex.\label{Wes1} $[\![\neg\neg\varphi]\!]^{I^{\Omega(X)}_\text{st}}_v = \mathit{int}(\mathit{cl}([\![\varphi]\!]^{I^{\Omega(X)}_\text{st}}_v))$
and thus
\ex.\label{Wes2} $[\![\neg\neg\neg\varphi]\!]^{I^{\Omega(X)}_\text{st}}_v = \mathit{int}(X-\mathit{int}(\mathit{cl}([\![\varphi]\!]^{I^{\Omega(X)}_\text{st}}_v)))$
Now define $I^{\Omega(X)}$ as follows.
\ex.\label{Wes3} \a. $I^{\Omega(X)}(\land)(U,V) = \mathit{int}(\mathit{cl}(U)) \cap \mathit{int}(\mathit{cl}(V))$
\b. $I^{\Omega(X)}(\neg)(U) = \mathit{int}(X-\mathit{int}(\mathit{cl}(U)))$
\b. $I^{\Omega(X)}(\to)(U,V) = I^{\Omega(X)}(\neg)(I^{\Omega(X)}(\land)(I^{\Omega(X)}(\neg)(U),V))$
\b. $I^{\Omega(X)}(\lor)(U,V) = I^{\Omega(X)}(\neg)(I^{\Omega(X)}(\land)(I^{\Omega(X)}(\neg)(U),I^{\Omega(X)}(\neg)(V)))$
Thus, we are putting double negations in front of negated formulas and the conjuncts of conjunctions, whereas $\to$ and $\lor$ are defined classically from $\land$ and $\neg$. Clearly, $I^{\Omega(X)}$ is non-standard: for example, just find a space with open sets $U,V$ such that $\mathit{int}(\mathit{cl}(U)) \cap \mathit{int}(\mathit{cl}(V)) \neq U\cap V$.
Now we use a particular \emph{negative translation} of classical into intuitionistic propositional logic, defined as follows:
\ex.\label{Wes4} \a. $g(p)=p$ \b. $g(\varphi\land\psi) = \neg\neg g(\varphi) \land \neg\neg g(\psi)$ \b. $g(\neg\varphi) = \neg\neg\neg g(\varphi)$ \b. $g(\varphi\to\psi) = \neg(g(\varphi) \land \neg g(\psi))$ \b. $g(\varphi\lor\psi) = \neg(\neg g(\varphi) \land \neg g(\psi))$
Using well-known facts about negative translations, it is not hard to show that
\ex.\label{Wes5} $\vdash_\mathsf{CPC}\varphi\;$ iff $\;\vdash_\mathsf{IPC} g(\varphi)$.\footnote{$g$ is similar to the \emph{Kolmogorov translation}, say, $G$, which puts a double negation in front of all subformulas. One can show that if $\varphi$ is not an atom, $\vdash_\mathsf{IPC} \!g(\varphi) \leftrightarrow G(\varphi)$. It is well-known that $\vdash_\mathsf{CPC}\varphi \Leftrightarrow \;\,\vdash_\mathsf{IPC} G(\varphi)$, and since no atom is a theorem, \ref{Wes5} follows. \cite{ferreira12} is a survey of various negative translations. }
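The translation $g$ of \ref{Wes4} is easily mechanized. The following sketch (our own illustration; the tuple encoding of formulas and all function names are ours, not from the text) implements it as a recursive function on formula trees:

```python
# Formulas as nested tuples: an atom is a string; compound formulas are
# ('not', f), ('and', f, h), ('imp', f, h), ('or', f, h).

def neg(f):
    return ('not', f)

def g(f):
    """The negative translation g of (Wes4)."""
    if isinstance(f, str):                      # g(p) = p
        return f
    op = f[0]
    if op == 'and':                             # ~~g(f) & ~~g(h)
        return ('and', neg(neg(g(f[1]))), neg(neg(g(f[2]))))
    if op == 'not':                             # ~~~g(f)
        return neg(neg(neg(g(f[1]))))
    if op == 'imp':                             # ~(g(f) & ~g(h))
        return neg(('and', g(f[1]), neg(g(f[2]))))
    if op == 'or':                              # ~(~g(f) & ~g(h))
        return neg(('and', neg(g(f[1])), neg(g(f[2]))))
    raise ValueError(op)

print(g(('not', 'p')))      # ('not', ('not', ('not', 'p')))
```

Note that, in line with the remark above about the Kolmogorov translation, $g$ leaves atoms untouched and only wraps compound subformulas in double negations.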
Finally, observe that $I^{\Omega(X)}$ interprets a formula $\varphi$ just as $g(\varphi)$ is standardly interpreted in topological semantics. In other words, for each topological valuation $v$ we have:
\ex.\label{Wes6} $[\![\varphi]\!]^{I^{\Omega(X)}}_v\! = [\![ g(\varphi)]\!]^{I^{\Omega(X)}_\text{st}}_v$
This is proved by a straightforward inductive argument, using the standard topological truth definition and \ref{Wes1} -- \ref{Wes4}. Thus, if $\vdash_\mathsf{IPC}\varphi$, then $\vdash_\mathsf{CPC}\varphi$, hence $\vdash_\mathsf{IPC} g(\varphi)$ by \ref{Wes5}, and so for any topological valuation $v$, $[\![ g(\varphi)]\!]^{I^{\Omega(X)}_\text{st}}_v = X$, which by \ref{Wes6} entails that $[\![\varphi]\!]^{I^{\Omega(X)}}_v=X$. That is, $I^{\Omega(X)}$ validates all $\mathsf{IPC}$ theorems.\footnote{Indeed, it validates all $\mathsf{CPC}$ theorems, by \ref{Wes5}, and since the above argument in fact works for all valuations $v\!: \mathit{Prop} \to \mathcal{P}(X)$. As Wesley Holliday pointed out, $I^{\Omega(X)}$ evaluates complex formulas using the \emph{double negation nucleus}, i.e.\ in the Heyting algebra of \emph{regular open} sets (sets $U$ such that $U = \mathit{int}(\mathit{cl}(U)))$, which is a Boolean algebra. }
On the other hand, it is easy to see that $I^{\Omega(X)}$ is not consistent with $\vdash_\mathsf{IPC}$. For example, one can have $[\![ p]\!]^{I^{\Omega(X)}}_v \!= [\![ p\to q]\!]^{I^{\Omega(X)}}_v \!= X$, while $[\![ q]\!]^{I^{\Omega(X)}}_v\!\neq X$. \end{example}
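The claims of Example \ref{ex:holliday} can be checked mechanically on a small finite space. The following is a minimal sketch (our own illustration; the three-point space and all function names are ours, not from the text) computing $\mathit{int}$ and $\mathit{cl}$ by brute force, exhibiting the non-standardness of $I^{\Omega(X)}(\land)$, and witnessing the failure of modus ponens:

```python
# A concrete finite space: X = {1,2,3} with opens {}, {1}, {1,2}, X
# (a nested chain, hence closed under unions and intersections).
X = frozenset({1, 2, 3})
OPENS = [frozenset(), frozenset({1}), frozenset({1, 2}), X]

def interior(S):
    """Largest open subset of S."""
    return max((U for U in OPENS if U <= S), key=len)

def closure(S):
    """Smallest closed superset of S (closed = complement of an open)."""
    return min((X - U for U in OPENS if X - U >= S), key=len)

def ic(S):
    """int(cl(S)), cf. (Wes1)."""
    return interior(closure(S))

# The non-standard operations of (Wes3):
def I_and(U, V): return ic(U) & ic(V)
def I_not(U):    return interior(X - ic(U))
def I_imp(U, V): return I_not(I_and(I_not(U), V))

# (a) Non-standardness: cl({1}) = X, so ic({1}) = X, hence
#     I(and)({1}, {1,2}) = X, while {1} & {1,2} = {1}.
assert I_and(frozenset({1}), frozenset({1, 2})) == X

# (b) Inconsistency with |-_IPC (modus ponens fails): with v(p) = X and
#     v(q) = {1}, [[p]] = X and [[p -> q]] = X, but [[q]] = {1} != X.
assert I_imp(X, frozenset({1})) == X
assert frozenset({1}) != X
print("non-standardness and modus ponens failure verified")
```

The computation in (b) shows why: $I^{\Omega(X)}(\neg)(X)=\emptyset$, so $I^{\Omega(X)}(\to)(X,V)=X$ for \emph{every} $V$, regardless of whether $V=X$.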
\begin{remark}\label{remark:classicalML} It may be worth noting that, by contrast, categoricity facts about classical \emph{modal} logic (see \cite{bonwes21}), only require validating all theorems. This is precisely because it is classical. If $\mathsf{L}$ is a modal logic, $\Gamma\vdash_\mathsf{L}\varphi$ means by definition that for some $\psi_1,\ldots,\psi_n\in\Gamma$, we have $\vdash_\mathsf{L}\psi_1\land\ldots\land\psi_n\to\varphi$. Thus, if it holds for all $v$ that $[\![\psi_1\land\ldots\land\psi_n\to\varphi]\!]^{(X,F)}_v=X$, we can conclude, by the fact that $\land$ and $\to$ are standard (which, as we saw in Remark \ref{remark:classical}, follows from Theorem \ref{mainthm}), that $\bigcap_{i=1}^n[\![\psi_i]\!]^{(X,F)}_v\subseteq [\![\varphi]\!]^{(X,F)}_v$, and hence that $\bigcap_{\psi\in\Gamma}[\![\psi]\!]^{(X,F)}_v\subseteq [\![\varphi]\!]^{(X,F)}_v$. \end{remark}
\section{Algebraic semantics\label{s:algebraicsem}}
Algebraic semantics for $\mathsf{IPC}$ standardly interprets the connectives in a Heyting algebra $\mathcal{A} = (A,0^\mathcal{A},1^\mathcal{A},\land^\mathcal{A},\lor^\mathcal{A},\to^\mathcal{A})$: a valuation $v$ maps atoms to $A$, and semantic values $[\![\varphi]\!]^\mathcal{A}_v$ in $A$ are given by the unique homomorphism $\overline{v}$ from the syntax algebra to $\mathcal{A}$ that extends $v$.\footnote{In this section we take 1 ($\top$) to be in the signature (as is commonly done). } Bezhanishvili and Holliday discuss the view that algebraic semantics is just `syntax in disguise' (quoted from \cite{vanbenthem01}, p.\ 358), and that algebraic completeness of $\mathsf{IPC}$ via the usual Tarski-Lindenbaum construction isn't really illuminating about the intuitionistic meaning of the connectives (BH19, end of sect.\ 2.1). They counter that when Heyting algebras are defined order-theoretically, and $\leq$ is seen as entailment, completeness for Heyting algebras is in fact quite illuminating.\footnote{One might also note that completeness for `concrete' semantics, like Kripke semantics or topological semantics, \emph{follows} from algebraic completeness via well-known representation theorems. Arguably, however, the real work here lies in the representation theorems, not in the Lindenbaum algebras of logical systems. }
From the categoricity perspective, Carnap's question for algebraic semantics is conceptually somewhat different from the case of possible worlds semantics. In the latter case, given, say, a nuclear frame, there are many different putative interpretations of the connectives \emph{on that frame}, and we ask which ones are consistent with $\vdash_\mathsf{IPC}$. Relative to an algebra $\mathcal{A}$, on the other hand, the interpretation of $\to$, for example, simply \emph{is} $\to^\mathcal{A}$. That is, \emph{the algebra itself} is the interpretation. Indeed, we shall see that there are sublogics of $\mathsf{IPC}$ which are categorical with respect to algebraic semantics, but not with respect to Kripke semantics.
We ask, then: Are there \emph{other} algebras than Heyting algebras --- the agreed-on standard interpretations --- consistent with $\vdash_\mathsf{IPC}$? It seems plausible that the answer is No: consistency with $\vdash_\mathsf{IPC}$ forces the algebra to be Heyting.
To make the question meaningful we should specify a class of algebras among which the Heyting algebras can be singled out. It has to be algebras for which a suitable notion of consistency with a consequence relation makes sense. For the record, we now describe how this can be done. We hasten to add, however, that when it comes to categoricity rather than completeness, the claim that algebraic semantics is `syntax in disguise' seems rather convincing. This will be clear from the proof of Theorem \ref{mainthmalg} below. But since Carnap's question in this algebraic context is new (as far as we know), we shall spell out the fairly obvious answer.
\subsection{Algebraic interpretations and consistency}
The syntax algebra of propositional logic is a \emph{term algebra} with countably many generators, which means that for \emph{any} algebra $\mathcal{A}$ of the same signature, every map $v$ from propositional atoms to $A$ extends to a unique homomorphism $\overline{v}$ to $\mathcal{A}$. But it would make no sense to let every algebra of that signature be a putative interpretation of the connectives. Interpretations are relative to a \emph{semantics}, and a semantics needs a notion of \emph{truth}, or of \emph{entailment}. We could of course stipulate that $\overline{v}(\varphi)=1^\mathcal{A}$ means that $\varphi$ is true in $\mathcal{A}$ under $v$. But nothing can be done with that stipulation unless we know more about the role of $1^\mathcal{A}$. For example, why not $0^\mathcal{A}$ instead?
On the other hand, we want to make as few assumptions as possible. Without further ado, here is a suggestion.
\begin{definition}\label{def:algsem} $\mathcal{A} = (A,0^\mathcal{A},1^\mathcal{A},\land^\mathcal{A},\lor^\mathcal{A},\to^\mathcal{A})$ is an \emph{algebraic interpretation} if $\leq^\mathcal{A}$ defined by $a\leq^\mathcal{A} b \Leftrightarrow a\land^\mathcal{A} b=a$ is a partial order with $1^\mathcal{A}$ as its largest element. \end{definition}
Then we define consistency with a consequence relation $\vdash$ as follows.\footnote{We are now in the framework that \cite{humberstone11} calls \emph{$\leq$-based algebraic semantics}. The definition is suggested by his Remark 2.14.7(i). }
\begin{definition}\label{def:algcons} An algebraic interpretation $\mathcal{A}$ is \emph{consistent with} $\vdash$, if, whenever $\psi_1,\ldots,\psi_n\vdash\varphi$ holds, we have for all valuations $v$ on $A$ and all $c\in A$, that if $c \leq^\mathcal{A} \overline{v}(\psi_i)$ for $i=1,\ldots,n$, then $c \leq^\mathcal{A} \overline{v}(\varphi)$. \end{definition}
If there are no assumptions, i.e.\ if $\vdash\!\varphi$, the antecedent of the requirement is vacuously satisfied, and the consequent becomes $\overline{v}(\varphi)=1^\mathcal{A}$. It may seem more natural to require that if $c$ is the greatest lower bound of $\overline{v}(\psi_1),\ldots,\overline{v}(\psi_n)$, then $c \leq^\mathcal{A} \overline{v}(\varphi)$, and indeed this is equivalent, provided glb's \emph{exist}. In that case, $\mathit{glb}^\mathcal{A}\{a_1,\ldots,a_n\}=a_1\land^\mathcal{A}\ldots\land^\mathcal{A} a_n$, and the consistency requirement would simply be that $\psi\vdash\varphi$ entails $\overline{v}(\psi) \leq^\mathcal{A} \overline{v}(\varphi)$ for all valuations $v$ on $A$.\footnote{\label{fn:consistency}The consistency requirement only concerns the reduct $(A,1^\mathcal{A},\land^\mathcal{A})$; $1^\mathcal{A}$ is not explicitly mentioned. Generalizing, for algebras without a top element, we can replace ``$\overline{v}(\varphi)=1^\mathcal{A}$'' with ``for all $\theta$, $\overline{v}(\theta) \leq^\mathcal{A} \overline{v}(\varphi)$'' in order to deal with theorems. Logics whose set of theorems coincides with the set of formulas derivable from every formula are sometimes called \emph{non-pseudo-axiomatic}. } But Definition \ref{def:algsem} doesn't require glb's to exist, and the next result shows that such a requirement is not necessary.
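For a finite algebra, the condition of Definition \ref{def:algcons} is decidable for any single instance $\psi_1,\ldots,\psi_n\vdash\varphi$ by enumerating valuations. A sketch (our own illustration; the three-element Heyting chain and the function names are ours):

```python
from itertools import product

# The three-element Heyting chain 0 < 1/2 < 1, elements as floats.
H = [0.0, 0.5, 1.0]
meet, join = min, max
def imp(a, b): return 1.0 if a <= b else b
OPS = {'and': meet, 'or': join, 'imp': imp,
       'not': lambda a: imp(a, 0.0)}

def ev(f, v):
    """Evaluate a tuple-encoded formula under valuation v."""
    if isinstance(f, str):
        return v[f]
    return OPS[f[0]](*(ev(sub, v) for sub in f[1:]))

def leq(a, b):
    return meet(a, b) == a          # the order defined from meet

def consistent_instance(premises, conclusion, atoms):
    """Check the consistency condition for one instance
    psi_1,...,psi_n |- phi, over all valuations and all c."""
    for vals in product(H, repeat=len(atoms)):
        v = dict(zip(atoms, vals))
        for c in H:
            if all(leq(c, ev(p, v)) for p in premises) \
                    and not leq(c, ev(conclusion, v)):
                return False
    return True

# Modus ponens is respected on the chain; q |- p, of course, is not:
assert consistent_instance(['p', ('imp', 'p', 'q')], 'q', ['p', 'q'])
assert not consistent_instance(['q'], 'p', ['p', 'q'])
```

Note that the check quantifies over an arbitrary lower bound $c$, exactly as in the definition, so it never needs glb's to exist.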
\subsection{Algebraic categoricity}
We now have a precise formulation of Carnap's question for algebraic semantics. The next fact gives the answer, here formulated for $\mathsf{CPC}$ as well. Let $\mathit{HA}$ ($\mathit{BA}$) be the class of Heyting (Boolean) algebras.
\begin{theorem}\label{mainthmalg} Let $\mathcal{A}$ be an algebraic interpretation.
\begin{enumerate} \item $\mathcal{A}$ is consistent with $\vdash_\mathsf{IPC}$ iff $\mathcal{A}\in\mathit{HA}$.
\item $\mathcal{A}$ is consistent with $\vdash_\mathsf{CPC}$ iff $(A,0^\mathcal{A},1^\mathcal{A},\land^\mathcal{A},\lor^\mathcal{A},'^{\mathcal{A}})\in\mathit{BA}$, where $a'^{\mathcal{A}} := a\to^{\mathcal{A}}0^\mathcal{A}$. \end{enumerate} \end{theorem}
\noindent \emph{Proof. } The right-to-left directions are just the soundness of $\mathsf{IPC}$ ($\mathsf{CPC}$) for the class of Heyting (Boolean) algebras. In the other direction, suppose $\mathcal{A}$ is an algebraic interpretation consistent with $\vdash_\mathsf{IPC}$. Thus:
\ex.[(i)] $a=b\;$ iff $\:a\leq^\mathcal{A} b$ and $b\leq^\mathcal{A} a$.
To show that $\mathcal{A}$ is a Heyting algebra, we check that each of the equations defining (the variety of) Heyting algebras is valid in $\mathcal{A}$. This is completely straightforward. Indeed, we have:
\ex.[(ii)] For each defining equation $s=t$ of Heyting algebras there is a valuation $v$ and formulas $\varphi,\psi$ such that $\overline{v}(\varphi)=s$, $\overline{v}(\psi)=t$, and $\varphi\vdash_\mathsf{IPC}\psi$ and $\psi\vdash_\mathsf{IPC}\varphi$.\footnote{\label{fn:heyting}For the record, $\mathcal{A}$ is a Heyting algebra iff $(A,0^\mathcal{A},1^\mathcal{A},\land^\mathcal{A},\lor^\mathcal{A})$ is a bounded lattice and in addition the following equations are valid (omitting superscripts):
\begin{enumerate} \item[] $a \to a = 1$
\item[] $a\land (a \to b) = a \land b$
\item[] $(a\to b) \land b = b$
\item[] $a\to (b \land c) = (a \to b) \land (a \to c)$
\end{enumerate} To be more accurate, we should have distinguished in (ii) between the syntactic equation $s=t$ in which $s,t$ are \emph{terms}, and the corresponding equality in $\mathcal{A}$, but we trust our abuse of notation here is not a problem. }
It then follows, from (i) and consistency with $\vdash_\mathsf{IPC}$, that $\mathcal{A}$ is a Heyting algebra. As an example, let us check the equation
\ex.[(iii)] $a \land^\mathcal{A}(a\to^\mathcal{A} b) = a\land^\mathcal{A} b$
Let $v(p)=a$, $v(q)=b$, $\varphi = p\land(p\to q)$, and $\psi = p\land q$. Since we have that $p\land(p\to q) \vdash_\mathsf{IPC} p\land q$, $p\land q\vdash_\mathsf{IPC} p\land(p\to q)$, and $\overline{v}$ is a homomorphism, the claim follows. As the example shows, we are merely translating familiar laws of $\mathsf{IPC}$ into (in)equalities in $\mathcal{A}$, which are valid by the consistency requirement.
If $\mathcal{A}$ is instead consistent with $\vdash_\mathsf{CPC}$, the defining equations of Heyting algebras are still valid. It only remains to show that (dropping superscripts) $a \land a' = 0$ and $a \lor a' = 1$. The first is a consequence of (iii) and the fact that $a\land 0=0$. The second identity is the Law of Excluded Middle.
$\Box$
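The equation-checking in the proof can itself be mechanized for finite algebras. A sketch (our own illustration) verifying the four defining equations of footnote \ref{fn:heyting} exhaustively on the three-element Heyting chain:

```python
from itertools import product

# The three-element Heyting chain 0 < 1/2 < 1.
H = [0.0, 0.5, 1.0]
meet = min
def imp(a, b): return 1.0 if a <= b else b

# The four defining equations (omitting superscripts):
for a, b, c in product(H, repeat=3):
    assert imp(a, a) == 1.0                                  # a -> a = 1
    assert meet(a, imp(a, b)) == meet(a, b)                  # a & (a -> b) = a & b
    assert meet(imp(a, b), b) == b                           # (a -> b) & b = b
    assert imp(a, meet(b, c)) == meet(imp(a, b), imp(a, c))  # -> distributes over &
print("all four equations hold on the chain")
```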
We may take left-to-right implication of this theorem to say that $\mathsf{IPC}$ and $\mathsf{CPC}$ are Carnap categorical in algebraic semantics, with respect to the respective classes of algebras. The fact that the implication goes both ways could be seen as a further justification for taking Heyting (Boolean) algebras to be the `standard interpretations'; see also Section \ref{ss:meaning} below.
\begin{remark} Just as for possible worlds semantics, to enforce that all algebraic interpretations consistent with $\vdash_\mathsf{IPC}$ are standard (Heyting), it is not enough to require that all $\mathsf{IPC}$-\emph{theorems} are valid. The proof of Theorem \ref{mainthmalg} doesn't work if we cannot conclude from $\psi\vdash_\mathsf{IPC}\varphi$ that (for all $v$) $\overline{v}(\psi) \leq^\mathcal{A} \overline{v}(\varphi)$. In fact, the following counter-example can be found in BH19.
If $H$ is a Heyting algebra and $j$ a nucleus on $H$, BH19 defines the algebra $D(H,j)$ of the same signature to be just like the Heyting algebra of fixpoints $H_j$ (defined immediately after Definition \ref{def:nuclear}), except that $a \to^j b =_\text{def} a \to jb$. Algebras of this form are called \emph{Dummett algebras} in BH19.\footnote{Dummett algebras have an interesting connection to a proposal in \cite{dummett00} to explain the significance of Beth semantics in terms of a distinction between a formula being \emph{verified} and \emph{assertible} at a stage $x$ (BH19, end of Sect.\ 3.2). } They need not be Heyting algebras --- Bezhanishvili and Holliday give a simple example --- but they of course qualify as algebraic interpretations. However, BH19 also proves that the formulas valid in all Dummett algebras are \emph{exactly} the theorems of $\mathsf{IPC}$ (their Theorem 3.25). Thus, by Theorem \ref{mainthmalg}, non-Heyting Dummett algebras need not be consistent with $\vdash_\mathsf{IPC}$. \end{remark}
\section{Discussion\label{s:discussion}}
We find it rather remarkable that $\mathsf{IPC}$ is Carnap categorical. Many scholars think that the intuitionistic meaning of the connectives is best captured by the informal Brouwer-Heyting-Kolmogorov explanation, and that the familiar formal semantics for which $\mathsf{IPC}$ is sound and complete fail in various degrees to do justice to that explanation. For example, the BHK explanation of the meaning of $\to$ uses notions like `proof' and `construction': a proof of $\varphi\to\psi$ is a construction that takes any proof of $\varphi$ to a proof of $\psi$. Nuclear semantics involves no similar notions. Nor does it rely on facts about verification or assertion, although one can argue quite convincingly, as BH19 does building on Dummett and others, that some instances of nuclear semantics \emph{represent} such facts rather accurately.
\subsection{Carnap's question and the meaning of the connectives\label{ss:meaning}}
However, Carnap's question, as we construe it here, is a model-theoretic question. If you will, it is about the \emph{extension} of the meaning of the connectives. And then, relative to certain set-theoretic objects taken to be \emph{semantic values} of sentences (the `extensional part' of sentence meanings), there is nothing more to say about those extensions than how they determine the semantic value of a complex sentence from the semantic values of its (immediate) constituents. In possible worlds semantics for intuitionistic logic, the values are certain upward closed subsets of some poset, so the connectives must be interpreted as functions on those values. And what we find remarkable is that in every nuclear semantics (and similarly for topological semantics), however the nucleus $j$ selects appropriate semantic values, the consequence relation $\vdash_\mathsf{IPC}$ \emph{uniquely fixes}, among all the in principle available options, the functions interpreting the connectives on those values.
Similarly, though much less surprisingly, for algebraic semantics. Facing the extensive literature on algebraic semantics for intuitionistic logic, someone might naively ask: Why Heyting algebras? Aren't there other possible algebraic interpretations? Theorem \ref{mainthmalg} gives the answer: if you want to work with a class $C$ of algebras --- algebraic interpretations in our sense --- all of whose members are consistent with $\vdash_\mathsf{IPC}$, then $C \subseteq \mathit{HA}$. Indeed, $\mathit{HA}$ is the \emph{largest} such class. Similarly for $\vdash_\mathsf{CPC}$ and $\mathit{BA}$.
Note that these facts have little to do with completeness. To see this, define for each algebraic interpretation $\mathcal{A}$ the semantic consequence relation $\models_\mathcal{A}$ by
\ex. $\psi_1,\ldots,\psi_n\models_\mathcal{A}\varphi\:$ iff for all valuations $v$ on $A$ and all $c\in A$, $c \leq^\mathcal{A} \overline{v}(\psi_i)$ for $i=1,\ldots,n$ implies $c \leq^\mathcal{A} \overline{v}(\varphi)$.
Thus, $\mathcal{A}$ is \emph{consistent} with $\vdash$ iff $\:\vdash\; \subseteq\; \models_\mathcal{A}$. And $\vdash$ is \emph{sound and complete} for a class $C$ of algebras iff $\:\vdash \;\subseteq \bigcap_{\mathcal{A}\in C}\!\models_\mathcal{A}$ (soundness) and $\:\bigcap_{\mathcal{A}\in C} \!\models_\mathcal{A} \;\subseteq\; \vdash$ (completeness). So while it happens to be true that $\mathit{HA}$ is the largest class for which $\vdash_\mathsf{IPC}$ is sound and complete, it is also the largest class for which $\vdash_\mathsf{IPC}$ is sound, by Theorem \ref{mainthmalg}. Completeness is not needed to single out $\mathit{HA}$.
We note further that the abstract completeness of a logic, in the sense of its set of theorems (or its consequence relation) being recursively enumerable, is irrelevant to categoricity. For example, every \emph{intermediate logic} (logic between $\mathsf{IPC}$ and $\mathsf{CPC}$) is categorical with respect to possible worlds semantics as per Theorem \ref{mainthm}, but there are uncountably many intermediate logics, hence uncountably many whose set of theorems is not recursively enumerable.
Going beyond propositional logic, a concrete example of an incomplete logic which is Carnap categorical (the meaning of the logical constants is fixed by the standard consequence relation) is the logic $\mathcal{L}(\mathcal{Q}_0)$, which is classical first-order logic with the additional quantifier `there are infinitely many'.\footnote{This was observed by the second author, and is stated and generalized in \cite{speitel20}. The set of valid sentences in $\mathcal{L}(\mathcal{Q}_0)$ is not recursively enumerable.}
\subsection{Other logics, other settings}
Here are some final observations and open questions.\footnote{Questions and comments from several people inspired the remarks in this subsection, in particular from Wes Holliday for parts 1 and 2, from Johan van Benthem for part 3, and from Denis Bonnay for part 4. }
\noindent \textbf{1.} An obvious question is if our results here are best possible for intuitionistic propositional logic. One way of making this precise is to let, for $\Phi\subseteq\{\neg,\land,\lor,\to\}$, $L_\Phi$ be the propositional language with connectives in $\Phi$, and $\vdash_\Phi$ be the consequence relation defined by the usual natural deduction rules for these connectives. Here we use $\neg$ rather than $\bot$,\footnote{Say, with the rules \begin{itemize} \item[] \alwaysNoLine \AxiomC{$\psi$ $\neg\psi$} \alwaysSingleLine \UnaryInfC{$\varphi$} \DisplayProof \hspace{10ex} \alwaysNoLine \AxiomC{$[\varphi]$} \UnaryInfC{$:$} \UnaryInfC{$\psi$} \AxiomC{$[\varphi]$} \UnaryInfC{$:$} \UnaryInfC{$\neg\psi$} \alwaysSingleLine \BinaryInfC{$\neg \varphi$} \DisplayProof \end{itemize} } so, modulo this change (and deleting curly brackets and commas), $\vdash_{\neg\land\lor\to}\;=\;\vdash_\mathsf{IPC}$. Now we can ask (with the obvious modification of Definition \ref{def:categoricity}): If $\Phi$ is a proper subset of $\{\neg,\land,\lor,\to\}$, is $\vdash_\Phi$ categorical with respect to some suitable semantics?
Again we must distinguish algebraic semantics from possible worlds semantics. Once the relevant classes of algebras have been identified, algebraic categoricity seems (again) essentially obvious. For example, in analogy to Theorem \ref{mainthmalg} we have:\footnote{The signature of $\mathcal{A}$ in (a) is $\{\land,\lor\}$, and in (b) it is $\{\neg,\land,\lor\}$. So there is no 1, but as per footnote \ref{fn:consistency}, consistency is still well-defined. The distributive lattices in (a) need not be bounded, for example, any linearly ordered set is consistent with $\vdash_{\land\lor}$; note that this logic has no theorems. The lattices in (b) are bounded, and pseudo-complementation means that for each $a\in A$, the maximum of $\{b\!: a\land b=0\}$ belongs to $A$. See \cite{fontverdu91} for $\vdash_{\land\lor}$, and \cite{rebagliatoverdu93} for $\vdash_{\neg\land\lor}$, or consult \cite{font16}. It is well-known that $\vdash_{\land\lor}$ can also be defined as $\:\vdash_\mathsf{IPC} \restriction \!L_{\land\lor}$, and similarly for $\vdash_{\neg\land\lor}$. }
\begin{fact} \begin{itemize} \item[\rm (a)] $\mathcal{A}$ is consistent with $\vdash_{\land\lor}$ iff $\mathcal{A}$ is a distributive lattice.
\item[\rm (b)] $\mathcal{A}$ is consistent with $\vdash_{\neg\land\lor}$ iff $\mathcal{A}$ is a pseudo-complemented distributive lattice. \end{itemize} \end{fact}
Possible worlds categoricity is a different matter. The case of $L_\emptyset$ is trivial (we have $\Gamma\!\vdash_\emptyset\!\varphi \Leftrightarrow \varphi\in\Gamma$, but there are no connectives to interpret), and $\vdash_\land$ is easily seen to be categorical ($I^\mathcal{F}(\land)$ must be intersection), but what about others? A particularly relevant case is $\vdash_{\neg\land\lor}$, since the proof we gave that $\lor$ must be standard (in every intuitionistic interpretation of the relevant kind consistent with $\vdash_\mathsf{IPC}$) depended on facts about $\to$. One may conjecture that this is necessary, and so that $\vdash_{\neg\land\lor}$ is \emph{not} Carnap categorical with respect to, say, Kripke semantics, but this is open. We can, however, give an example where there is algebraic categoricity but not possible worlds categoricity. Consider the logics $\vdash_\lor$ and $\vdash_\neg$. First, it is easy to see that
\ex.\label{algor} $\mathcal{A}$ is consistent with $\vdash_{\lor}$ iff $\mathcal{A}$ is a semilattice.\footnote{Now the signature is $\{\lor\}$, and we define $a\leq^\mathcal{A} b \Leftrightarrow a \lor^\mathcal{A} b = b$; see footnote \ref{fn:consistency}. We refrain from formulating a corresponding fact for $\vdash_\neg$; the algebras of that signature are not algebraic interpretations in our sense. }
Second, however:
\begin{fact} Neither $\vdash_\lor$ nor $\vdash_\neg$ is Carnap categorical with respect to Kripke semantics. \end{fact}
\noindent \emph{Proof. } (outline) $\vdash_\lor$ proves that disjunction is commutative, associative, and idempotent. Using this, it is not hard to verify:
\ex.\label{a1} $\psi_1,\ldots,\psi_n \vdash_\lor \varphi$ iff there is $\psi_i$ such that each atom in $\psi_i$ occurs in $\varphi$.
Now let $\mathcal{F}$ be any poset such that there is a nucleus $j$ on $\mathit{Up}(\mathcal{F})$ which is not the identity, and let $I^\mathcal{F}(\lor)(U,V)=j(U\cup V)$. Using \ref{a1} and the monotonicity of $j$, it is easy to see that $I^\mathcal{F}$ is consistent with $\vdash_\lor$, but $I^\mathcal{F}$ is not the standard Kripke interpretation.\footnote{This simple non-standard interpretation was suggested by Wesley Holliday, and replaces a more \emph{ad hoc} construction that we originally gave. }
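A concrete instance is easy to exhibit: on the two-element chain $0<1$, the double-negation nucleus on upsets is not the identity. The following sketch (our own illustration; the frame and names are ours) checks the nucleus laws and the resulting non-standard disjunction:

```python
# Kripke frame: the two-element chain 0 < 1.  Upsets: {}, {1}, {0,1}.
X = frozenset({0, 1})
UPSETS = [frozenset(), frozenset({1}), X]
above = {0: {0, 1}, 1: {1}}         # above[x] = {y : y >= x}

def j(U):
    """Double-negation nucleus: x in j(U) iff every y >= x
    has some z >= y with z in U."""
    return frozenset(x for x in X
                     if all(above[y] & U for y in above[x]))

# j is inflationary and idempotent on upsets, but not the identity:
for U in UPSETS:
    assert U <= j(U) and j(j(U)) == j(U)
assert j(frozenset({1})) == X       # j({1}) = {0,1} != {1}

# The resulting non-standard disjunction I(or)(U, V) = j(U | V):
def I_or(U, V): return j(U | V)
assert I_or(frozenset(), frozenset({1})) == X   # not the union {1}
```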
Next, consider $\vdash_\neg$. As is well-known:
\ex.\label{neg1} \a. $\varphi,\neg\varphi\vdash_\neg\psi$ \b. $\varphi\vdash_\neg \neg\neg\varphi $ \b. $\neg\varphi \dashv\vdash_\neg \neg\neg\neg\varphi$
Every $L_\neg$ formula is of the form $\neg^n p$ for some $n\geq0$, where $\neg^0p=p$ and $\neg^{n+1}p= \neg\neg^n p$. Say that $\{\neg^{n_1}p_1,\ldots,\neg^{n_k}p_k\}$ is \emph{contradictory} if $p_i = p_j$ for some $i,j$, and $|n_i-n_j|$ is odd. Then one can show:
\ex.\label{neg2} $\neg^{n_1}p_1,\ldots,\neg^{n_k}p_k \vdash_\neg \neg^{m}q\;$ iff either $\{\neg^{n_1}p_1,\ldots,\neg^{n_k}p_k\}$ is contradictory, or ($q = p_i$ for some $i$ and ($n_i,m$ are both odd, or $n_i,m$ are both even and $m\geq2$)).
Using this, one can verify that the rules (R1) $\varphi\vdash\neg\neg\varphi$, (R2) $\neg\neg\neg\varphi\vdash\neg\varphi$, and (R3) $\varphi,\neg\varphi\vdash\psi$, provide a Hilbert style axiomatization of $\vdash_\neg$.
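The characterization \ref{neg2} amounts to a decision procedure for $\vdash_\neg$. A sketch (our own illustration; we represent $\neg^n p$ as the pair $(n,p)$):

```python
def derives(premises, concl):
    """premises: list of (n, p) encoding ~^n p; concl: (m, q).
    Decide premises |-_neg concl via the characterization (neg2)."""
    # contradictory: some p_i = p_j with |n_i - n_j| odd
    for i, (n1, p1) in enumerate(premises):
        for n2, p2 in premises[i + 1:]:
            if p1 == p2 and (n1 - n2) % 2 == 1:
                return True
    m, q = concl
    for n, p in premises:
        if p == q and ((n % 2 == 1 and m % 2 == 1) or
                       (n % 2 == 0 and m % 2 == 0 and m >= 2)):
            return True
    return False

print(derives([(0, 'p')], (2, 'p')))            # True:  p |- ~~p
print(derives([(2, 'p')], (0, 'p')))            # False: ~~p does not yield p
print(derives([(0, 'p'), (1, 'p')], (0, 'q')))  # True:  p, ~p |- q
```

The three sample queries correspond to (R1), the failure of double-negation elimination, and (R3), respectively.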
Now let $\mathcal{F}$ be the 4-element poset $(X,\leq)$, where $X=\{0,0',a,b\}$, $0<a$, and $0'<b$. The upsets are $\emptyset,X,\{a\},\{0,a\},\{b\},\{0',b\}$, and $\{a,b\}$. Recall that $I^\mathcal{F}_\text{st}(\neg)(U) = \{x\!: \forall y\geq x\: y\not\in U\}$, and define the interpretation $I^\mathcal{F}(\neg) = N$ as follows:
\ex.\label{neg3} $N(U)=I^\mathcal{F}_\text{st}(\neg)(U)$, \emph{except} that $N(\{a\})=\{b\}$ and $N(\{b\})=\{a\}$.
(Note that $I^\mathcal{F}_\text{st}(\neg)(\{a\})=\{0',b\}$.) Then, for all upsets $U$,
\ex.\label{neg4} \a. $U \subseteq N(N(U))$ \b. $N(N(N(U))) \subseteq N(U)$ \b. $U\cap N(U)=\emptyset$
Using \ref{neg4}, an induction over the length of Hilbert style proofs shows that $I^\mathcal{F}$ is consistent with $\vdash_\neg$.
$\Box$
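The four-element counter-example is small enough to verify \ref{neg4} exhaustively. A sketch (our own illustration; we write \texttt{'O'} for $0'$):

```python
# The poset X = {0, 0', a, b} with 0 < a and 0' < b.
up = {'0': {'0', 'a'}, 'O': {'O', 'b'}, 'a': {'a'}, 'b': {'b'}}
X = frozenset(up)
UPSETS = [frozenset(), X, frozenset({'a'}), frozenset({'0', 'a'}),
          frozenset({'b'}), frozenset({'O', 'b'}), frozenset({'a', 'b'})]

def std_neg(U):
    """Standard Kripke negation: points with no successor in U."""
    return frozenset(x for x in X if not (up[x] & U))

def N(U):
    """The twisted negation (neg3): swap the values at {a} and {b}."""
    if U == frozenset({'a'}): return frozenset({'b'})
    if U == frozenset({'b'}): return frozenset({'a'})
    return std_neg(U)

assert std_neg(frozenset({'a'})) == frozenset({'O', 'b'})
for U in UPSETS:
    assert U <= N(N(U))             # (a) U subset of N(N(U))
    assert N(N(N(U))) <= N(U)       # (b) NNN(U) subset of N(U)
    assert not (U & N(U))           # (c) U and N(U) disjoint
print("properties (a)-(c) of (neg4) hold for all upsets")
```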
Thus, in contrast with the case of $\land$, the introduction and elimination rules for $\lor$ do \emph{not} fix its meaning in Kripke semantics. This is also in contrast with classical 2-valued semantics, where consistency with those rules does fix the meaning, i.e.\ the standard truth table, for $\lor$. Similarly for $\neg$.\footnote{The introduction rules for $\lor$ fix the first three rows, (1,1,1), (1,0,1), and (0,1,1), in the truth table for $\lor$. The fourth row, (0,0,0), is fixed by the fact that $p\lor p\vdash_\lor p$, which is an instance of the elimination rule for $\lor$. Likewise, $p,\neg p\vdash_\neg q$ and $p\vdash_\neg \neg\neg p$ together fix the standard truth table for $\neg$. Note that $\vdash_\lor$ is the restriction of $\vdash_\mathsf{IPC}$, as well as of $\vdash_\mathsf{CPC}$, to $L_\lor$. This of course fails for $\vdash_\neg$, which nevertheless fixes the 2-valued meaning of $\neg$. }
\noindent \textbf{2.} One can also weaken the logics $\vdash_\Phi$ by constraining the inference rules in various ways. A case in point is \cite{holliday22}, which studies a weaker logic in $L_{\neg\land\lor}$, called $\vdash_\mathsf{F}$, with restrictions on the $\lor$E and $\neg$I rules. Among other things, distributivity no longer holds.\footnote{Holliday presents $\vdash_\mathsf{F}$ with a Fitch style natural deduction system, where the added constraint becomes a requirement on the Reiteration rule. $\vdash_\mathsf{F}$ is also extended to a first-order language with the quantifiers $\forall$ and $\exists$ (but without $\to$ and $=$). Failure of distributivity is a characteristic of quantum logic, but Holliday argues that it also accords with certain facts about natural language semantics, in particular facts about epistemic modals. } He shows how $\vdash_\mathsf{F}$ can be seen as a neutral base logic, from which intuitionistic logic, and versions of the \emph{orthologic} studied in \cite{goldblatt74}, and classical logic, can be obtained by suitable additions or changes to the rules, or by corresponding constraints in the algebraic or the relational semantics he provides for $\vdash_\mathsf{F}$. Finding a non-standard interpretation consistent with $\vdash_\mathsf{F}$ --- if there is such an interpretation --- might be easier than finding one for $\vdash_{\neg\land\lor}$.
In the same spirit, we may ask if there is a proper fragment of $\vdash_\mathsf{IPC}$ in the language $L_{\neg\land\lor\to}$ which is Carnap categorical. If no such fragment exists, that would be a new kind of functional completeness property of $\mathsf{IPC}$.
\noindent \textbf{3.} Theorem \ref{mainthm} is an existence and uniqueness result: \emph{there is} an interpretation consistent with $\vdash_\mathsf{IPC}$ (namely, the standard interpretation), and it is in fact the \emph{only} one. In the proof-theoretic tradition, existence and uniqueness of propositional connectives is a well-established topic; see \cite{humberstone11}, Ch.\ 4, for a comprehensive overview. Here, existence and uniqueness is relative to a set of \emph{rules}. For example, $\land$ and $\lor$ are \emph{unique} relative to a natural deduction presentation of $\mathsf{IPC}$, in the sense that if we introduce new connectives $\land'$ and $\lor'$, with the same rules as for $\land$ and $\lor$, respectively, then $\varphi\land\psi$ is equivalent to $\varphi\land'\psi$, and similarly for $\lor$ and $\lor'$. Indeed, we only need the introduction and elimination rules for these two connectives. With the notation from part 1 above, we have: $\varphi\land\psi\dashv\vdash_{\land\land'}\varphi\land'\psi$ and $\varphi\lor\psi\dashv\vdash_{\lor\lor'}\varphi\lor'\psi$.
As Humberstone noted, and \cite{dosenSH88} spells out in detail, there is a difference between $\land$ and $\lor$ here: while the former is \emph{implicitly definable} in a precise sense, the latter, although unique, is not. Do\v{s}en and Schroeder-Heister explore connections to Beth's Definability Theorem.\footnote{\cite{humberstone11}, Chapter 4.35, proves a similar result about $\lor$: it is not uniquely characterizable by what he calls \emph{zero-premiss rules}. See also the discussion in \cite{williamson88}, pp.\ 110--114. } It is an intriguing question if, and in that case how, their proof-theoretic notion of implicit definability relates to our semantic notion of categoricity (which, as we have seen, exhibits a similar contrast between $\land$ and $\lor$).
\noindent \textbf{4.} Can Theorem \ref{mainthm} be generalized to arbitrary intuitionistic interpretations as specified in Definition \ref{def:possworldssem}? This concerns in particular the function $I^{\mathcal{F},\mathit{at}}$, which could be seen as the interpretation of an invisible logical constant preceding all atoms. Does categoricity extend to that constant? More precisely, if $I^\mathcal{F}$ is consistent with $\vdash_\mathsf{IPC}$, under what circumstances would $I^{\mathcal{F},\mathit{at}}$ have to be a nucleus?
\noindent \textbf{5.} We end with another set of questions, not related specifically to intuitionistic logic but to the general setting in which Carnap's question has been posed. We have so far simply followed the lead of \cite{bonwes16} and other papers in considering consequence relations $\vdash$ where the first argument is a (possibly empty) \emph{set} of formulas and the second is a \emph{formula}; the \textsc{set-fmla} setting in the terminology of \cite{humberstone11}. One could look at other settings, like \textsc{fmla} (no premises), \textsc{set-set}, \textsc{fmla-fmla}, or \textsc{seq-fmla} (with a sequence of premises). As we have seen, the categoricity results would differ significantly. We think \textsc{set-fmla} most naturally corresponds to intuitive ideas about `what follows from what', but one would like a better argument for why the others are less suitable. Or, less contentiously, an overview of what the results would be in those settings.\footnote{In fact, the solution proposed in \cite{carnap43} to the existence of non-normal interpretations was in effect to use the \textsc{set-set} framework instead. This makes good sense for classical logic. But for intuitionistic logic there is, as far as we know, no obviously correct candidate for the \textsc{set-set} version of $\mathsf{IPC}$. For example, $p\lor q\vdash p,q$ is not valid in standard Beth semantics (since we can have $j_b(U\cup V) \not\subseteq U\cup V$), although it holds in Kripke semantics. }
In this context, here is a final, more concrete issue. In all categoricity results (in our sense) that we are aware of, it is practically immediate that any interpretation consistent with the relevant consequence relation $\vdash$ must interpret conjunction ($\land$) standardly. It is the other logical constants that need some work (except for $\bot$ if \emph{Ex Falso} holds). But one could argue that this is built into our definition of consistency with $\vdash$ in Definition \ref{def:possworldssem}:4: the \emph{intersection} of the values of the premises must be included in the value of the conclusion. This guarantees that $U\cap V \subseteq I(\land)(U,V)$; the converse inclusion is just from single-premise $\land$-elimination.
This choice should at least be motivated. Is it necessary? One approach could be to see what happens in the \textsc{fmla-fmla} setting: does it too force $\land$ to be standard? The answer seems to be No in a general possible worlds environment; at least if interpretations now have the form $\mathcal{F} = (X,\mathcal{C},\subseteq)$, where $\mathcal{C}$ is any subset of $\mathcal{P}(X)$, and semantic values are required to stay in $\mathcal{C}$. It is easy to find such $\mathcal{C}$ such that $I^\mathcal{F}$ is `\textsc{fmla-fmla}-consistent' with $\vdash$ --- in the sense that if $\psi\vdash\varphi$ then for all valuations $v$, $[\![\psi]\!]^{I^\mathcal{F}}_v \subseteq [\![\varphi]\!]^{I^\mathcal{F}}_v$ --- but $I^\mathcal{F}(\land)$ is \emph{not} intersection. On the other hand, if $\mathcal{C}$ is required to be closed under intersections, then $I^\mathcal{F}(\land)$ will be intersection.
\noindent We feel these issues deserve more thought, but leave them for future work.
\end{document}
\begin{document}
\markboth{Hideo Takaoka}{Derivative nonlinear Schr\"odinger equations}
\title{Remarks on blow-up criteria for the derivative nonlinear Schr\"{o}dinger equation under the optimal threshold setting} \author{Hideo Takaoka\thanks{This work was supported by JSPS KAKENHI Grant Number 18H01129.}\\ Department of Mathematics, Kobe University\\ Kobe, 657-8501, Japan\\ takaoka@math.kobe-u.ac.jp}
\date{\empty}
\maketitle
\begin{abstract} We study the Cauchy problem of the mass critical nonlinear Schr\"odinger equation with derivative at the $4\pi$ mass. One has global well-posedness in $H^1$ whenever ``the mass is strictly less than $4\pi$" or whenever ``the mass is equal to $4\pi$ and the momentum is strictly less than zero". In this paper, by the concentration compactness principle as originally used by Kenig-Merle, we obtain the limiting profile of blow-up solutions with the critical $4\pi$ mass. \end{abstract}
{\it $2010$ Mathematics Subject Classification Numbers.} 35Q55, 42B37.
{\it Key Words and Phrases.} Derivative nonlinear Schr\"odinger equation, Well-posedness, Blow-up profile.
\section{Introduction}\label{sec:introduction}
In this paper, we consider the Cauchy problem of the derivative nonlinear Schr\"odinger equation; \begin{eqnarray}\label{dnls} \left\{ \begin{array}{ll}
i\partial_t u+\partial_{x}^2u=i\partial_x(|u|^2u), & (t,x)\in\mathbb{R}^2,\\ u(0,x)=u_0(x), & x\in \mathbb{R}, \end{array} \right. \end{eqnarray}
where $u$ is a complex valued function of $(t,x)\in \mathbb{R}^2$. Our aim is to describe the profile of solutions to the Cauchy problem (\ref{dnls}) with $\|u_0\|_{L^2}^2=4\pi$ whose energy norm $H^1(\mathbb{R})$ may become unbounded.
The Cauchy problem (\ref{dnls}) is $L^2$ scale critical in the sense that the $L^2$ norm is invariant under the scaling transform; \begin{eqnarray*} u_{\lambda}(t,x)=\lambda^{1/2}u(\lambda^2 t,\lambda x),\quad \lambda>0. \end{eqnarray*} The equation in (\ref{dnls}) has an infinite family of conservation laws and can be solved by the inverse scattering method for certain classes of initial data \cite{kn}. Formally, solutions of (\ref{dnls}) satisfy the following conservation laws: if $u=u(t,x)$ is a solution to (\ref{dnls}), then for all $t\in\mathbb{R}$, the charge ($L^2$-norm) \begin{eqnarray}\label{eq:mass}
M(u)=\|u\|_{L^2}^2=M(u_0), \end{eqnarray} the energy \begin{eqnarray}\label{eq:H1energy}
E_1(u)=\|\partial_x u\|_{L^2}^2-\frac32\Im\langle \partial_x u,|u|^2u\rangle+\frac12\|u\|_{L^6}^6=E_1(u_0) \end{eqnarray} and the momentum \begin{eqnarray}\label{eq:H1momentum}
P_1(u)=\Im\langle \partial_x u,u\rangle-\frac12\|u\|_{L^4}^4=P_1(u_0), \end{eqnarray} where $\langle \cdot,\cdot\rangle$ is the standard inner product in $L^2(\mathbb{R})$. For the formal derivation of the recurrence formula of the infinite family of other conserved quantities, we refer the reader to the references \cite{kn,tsm1} (e.g. the conserved quantities $C_n$ in \cite{kn} and $Z^{(n)}$ in \cite{tsm1}). As we will see below, the next several conservation laws are formulated as functionals on the spaces $H^2$ and $H^3$. Notice that two of those are the following potential Hamiltonians \begin{eqnarray}\label{eq:H2energy}
E_2(u) & = & \|\partial_x^2u\|_{L^2}^2+\frac78\|u\|_{L^{10}}^{10}+\frac{25}{2}\left\||u|^2\partial_x u\right\|_{L^2}^2\\
& & +5\Re\langle (\partial_x u)^2,|u|^2u^2\rangle-5\Im\langle \partial_x^2u,|u|^2\partial_x u\rangle-\frac{35}{8}\Im\langle \partial_x u,|u|^6u\rangle,\nonumber\\ & = & E_2(u_0) \nonumber \end{eqnarray} and \begin{eqnarray}\label{eq:H2momentum}
P_2(u) & = & \frac12\Im\langle \partial_x^2 u,\partial_x u\rangle-\frac{5}{16}\|u\|_{L^{8}}^8-2\|u\partial_xu\|_{L^2}^2\\
& & -\frac12\Re\langle (\partial_x u)^2,u^2\rangle+\frac54\Im\langle \partial_x u,|u|^4u\rangle\nonumber\\ & = & P_2(u_0).\nonumber \end{eqnarray} The next conservation laws involving third order derivative are \begin{eqnarray}\label{eq:H3energy}
E_3(u) & = & \|\partial_x^3u\|_{L^2}^2+e_3(\partial_x^3u,\partial_x^2u,\partial_xu,u)\\ & = & E_3(u_0)\nonumber \end{eqnarray} and \begin{eqnarray}\label{eq:H3momentum} P_3(u) & = & \frac12\Im\langle \partial_x^3u,\partial_x^2u\rangle+p_3(\partial_x^2u,\partial_xu,u)\\ & = & P_3(u_0),\nonumber \end{eqnarray} where the functional $e_3(\partial_x^3u,\partial_x^2u,\partial_xu,u)$ and $p_3(\partial_x^2u,\partial_xu,u)$ satisfy $$
|e_3(\partial_x^3u,\partial_x^2u,\partial_xu,u)|\le \varepsilon\|\partial_x^3u\|_{L^2}^2+C(\|u\|_{H^2},\varepsilon) $$ for any $\varepsilon>0$, and $$
|p_3(\partial_x^2u,\partial_xu,u)|\le C(\|u\|_{H^2}). $$ Similarly, we have the conservation law at the fourth order regularity \begin{eqnarray}\label{eq:H4energy}
E_4(u) & = & \|\partial_x^4u\|_{L^2}^2+e_4(\partial_x^4u,\partial_x^3u,\partial_x^2u,\partial_xu,u)\\ & = & E_4(u_0),\nonumber \end{eqnarray} where the functional $e_4(\partial_x^4u,\partial_x^3u,\partial_x^2u,\partial_xu,u)$ satisfies $$
|e_4(\partial_x^4u,\partial_x^3u,\partial_x^2u,\partial_xu,u)|\le \varepsilon\|\partial_x^4u\|_{L^2}^2+C(\|u\|_{H^3},\varepsilon) $$ for any $\varepsilon>0$.
Let $\phi_{\omega,c}$ be a ground state solution to the following equation \begin{eqnarray}\label{eq:elliptic} -\phi''+\left(\omega-\frac{c^2}{4}\right)\phi+\frac{c}{2}\phi^3-\frac{3}{16}\phi^5=0. \end{eqnarray} The equation of (\ref{dnls}) possesses a two-parameter family of standing wave solutions of the form $$
q_{\omega,c}(t,x)=\phi_{\omega,c}(x+ct)e^{i\omega t-i\frac{c}{2}(x+ct)+i\frac34\int_{-\infty}^{x+ct}|\phi_{\omega,c}(y)|^2\,dy}, $$
where $c>0,~\omega\ge c^2/4$. In the case $\omega>c^2/4$, $\phi_{\omega,c}$ decays exponentially as $|x|\to\infty$ and $$
\|\phi_{\omega,c}\|_{L^2}=\left(8\tan^{-1}\sqrt{\frac{\sqrt{4\omega}+c}{\sqrt{4\omega}-c}}\right)^{1/2}<\sqrt{4\pi}. $$ For the borderline case $\omega=c^2/4$, the ground state of (\ref{eq:elliptic}) is expressed as \begin{eqnarray}\label{eq:soliton} Q_c(x)=\phi_{\frac{c^2}{4},c}(x)=\sqrt{\frac{4c}{(cx)^2+1}}, \end{eqnarray}
which satisfies $\|Q_c\|_{L^2}^2=4\pi$ for any $c>0$. It is worth noting that the standing wave solutions satisfy $E_1(q_{\omega,c})=-c\sqrt{4\omega-c^2}$ and $P_1(q_{\omega,c})=2\sqrt{4\omega-c^2}$, which imply $E_1(q_{c^2/4,c})=P_1(q_{c^2/4,c})=0$. Moreover, $E_2(q_{c^2/4,c})=P_2(q_{c^2/4,c})=P_3(q_{c^2/4,c})=0$ (see Section \ref{sec:P2Q}). We expect that $E_k(q_{c^2/4,c})=P_k(q_{c^2/4,c})=0$ for any $k\in\mathbb{N}$, but we do not know whether this is true.
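For later use, the norms of $Q_c$ appearing below can be computed explicitly. With the substitution $y=cx$ and the elementary integrals $\int_{\mathbb{R}}(1+y^2)^{-1}\,dy=\pi$, $\int_{\mathbb{R}}(1+y^2)^{-2}\,dy=\pi/2$, $\int_{\mathbb{R}}(1+y^2)^{-3}\,dy=3\pi/8$ and $\int_{\mathbb{R}}y^2(1+y^2)^{-3}\,dy=\pi/8$, one finds $$ \|Q_c\|_{L^2}^2=4\pi,\quad \|Q_c\|_{L^4}^4=8c\pi,\quad \|Q_c\|_{L^6}^6=3\cdot 2^3\cdot c^2\cdot\pi,\quad \|\partial_xQ_c\|_{L^2}^2=\frac{c^2\pi}{2}. $$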
The well-posedness of the Cauchy problem (\ref{dnls}) has been extensively studied. In \cite{h1,h2}, Hayashi-Ozawa proved the local well-posedness in the energy space $H^1(\mathbb{R})$ and the global well-posedness if the initial data satisfy $\|u_0\|_{L^2}^2<2\pi$. For rough data, it was proved that the local well-posedness holds in $H^s(\mathbb{R})$ for $s\ge 1/2$ \cite{t1} (see also \cite{csktt1} for the global solutions for rough initial data). Recently, Wu \cite{w1} and Guo-Wu \cite{gw} obtained the global well-posedness above the mass threshold $2\pi$; more precisely, global well-posedness holds whenever $\|u_0\|_{L^2}^2<4\pi$. The main ingredient in the proof is the following result.
\begin{proposition}[Lemma 2.1 in \cite{gw}]\label{prop:gw} If $\phi\in H^1(\mathbb{R})$ and $\phi\ne 0$, then \begin{eqnarray*}
\int_{\mathbb{R}}\left(\Im(\overline{\phi}\partial_x\phi)+\frac14|\phi|^4\right)\,dx
\ge \frac14\|\phi\|_{L^4}^4\left(1-\frac{\|\phi\|_{L^2}}{\sqrt{4\pi}}\right)-\frac{4\sqrt{\pi}\|\phi\|_{L^2}}{\|\phi\|_{L^4}^4}\int_{\mathbb{R}}\left(|\partial_x\phi|^2-\frac{1}{16}|\phi|^6\right)\,dx. \end{eqnarray*} \end{proposition} The estimate in Proposition \ref{prop:gw} plays a key role in obtaining a priori estimates on solutions.
In particular, using Proposition \ref{prop:gw}, Fukaya-Hayashi-Inui \cite{fhi} extended the global well-posedness to initial data with $\|u_0\|_{L^2}^2=4\pi$ and $P_1(u_0)<0$. If $\|u_0\|_{L^2}^2=4\pi$ and $E_1(u_0)=P_1(u_0)=0$, then the solution of (\ref{dnls}) is the solitary wave solution and exists globally in time (see \cite{fhi}). More recently, Jenkins-Liu-Perry-Sulem \cite{jlps} proved the global well-posedness for the initial data in $H^{2,2}(\mathbb{R})$, by using the inverse scattering tools, where $H^{s,a}(\mathbb{R})$ is a weighted Sobolev space. Note that soliton solutions $q_{c^2/4,c}$ do not belong to $H^{2,2}$. Under the mass condition $\|u_0\|_{L^2}^2=4\pi$, the global well-posedness for initial data satisfying $P_1(u_0)\ge 0$ remains open.
It is interesting to take the initial data $u_0\in H^1$ with $\|u_0\|_{L^2}=\sqrt{4\pi}$ and $P_1(u_0)\ge 0$. In the following example, we will illustrate that the set of functions $u_0\in H^1$ satisfying \begin{eqnarray}\label{eq:example}
\|u_0\|_{L^2}=\sqrt{4\pi},\quad P_1(u_0)>0 \end{eqnarray} is not empty.
\begin{proposition}\label{prop:exam} There exists an element $u_0\in H^1$ satisfying (\ref{eq:example}). \end{proposition}
The aim of this paper is to describe the blow-up phenomena, together with the asymptotic profile of blow-up solutions, in the mass critical case. The main result of this paper is the following theorem.
\begin{theorem}\label{thm:main}
Let $u_0\in H^1$ satisfy $\|u_0\|_{L^2}^2=4\pi$. Assume that the corresponding solution $u(t)$ of (\ref{dnls}) blows up in finite or infinite time, i.e. there exists $0<T\le +\infty$ such that \begin{eqnarray}\label{eq:assum}
\limsup_{t\uparrow T}\|\partial_xu(t)\|_{L^2}=+\infty. \end{eqnarray} Then there exist a sequence $\{R_n\}_{n\in\mathbb{N}}\subset H^1$, a constant $\gamma\in \mathbb{R}$ and sequences $t_n\in\mathbb{R},~x_n\in\mathbb{R}$ such that $$ \lim_{n\to\infty}t_n=T $$ and \begin{eqnarray}\label{eq:bu}
u(t_n,x)=v_n(x)\exp\left(i\frac{3}{4}\int_{-\infty}^x|v_n(y)|^2\,dy \right), \end{eqnarray} where $$ \lambda_n^{1/2}v_n(\lambda_n x)e^{\frac{i}{2}x}=e^{i\gamma}\left(Q_1(x-x_n)+R_n(x)\right), $$ with $$
\lambda_n=\sqrt{\frac{3\cdot 2^3\cdot \pi}{\|v_n\|_{L^6}^6}}, $$ $$
\lim_{n\to\infty}\|\partial_xR_n\|_{L^2}=0 $$ and $$
\lim_{n\to\infty}\|R_n\|_{L^p}=0 $$ for every $p\in(2,\infty)$. \end{theorem}
\begin{remark} Our assumption in Theorem \ref{thm:main} allows us to deduce \begin{eqnarray}\label{eq:lambdan} \lim_{n\to\infty}\lambda_n=0. \end{eqnarray} Indeed, let $u_0\in H^1$ with $P_1(u_0)\ge 0$ and assume the corresponding solution $u(t)$ of (\ref{dnls}) blows up at time $t=T$; then $$
\lim_{t\uparrow T} \|u(t)\|_{L^6}=\infty. $$ \end{remark}
\begin{remark} It is not known whether the blow-up assumption (\ref{eq:assum}) can actually occur; this is an open problem. \end{remark}
\begin{notation} Throughout the paper, we use letters $c,~C,~c(*),~C(*)$ to denote various constants. \end{notation}
The paper is organized as follows. Section \ref{sec:p} describes the analysis of the gauge transformations associated to (\ref{dnls}). In Section \ref{sec:H1estimate}, we derive an a priori estimate for solutions in the $\dot{H}^1(\mathbb{R})$ norm by using a contradiction argument. In Subsection \ref{sec:concentration}, we perform qualitative analysis to obtain a concentration phenomenon; the blow-up solutions behave like a standing wave solution. In Subsection \ref{sec:proofthm}, we provide the proof of Theorem \ref{thm:main}. Section \ref{sec:exam} gives the proof of Proposition \ref{prop:exam}. Section \ref{sec:P2Q} is devoted to the direct calculation of $E_2(q_{c^2/4,c})=P_2(q_{c^2/4,c})=P_3(q_{c^2/4,c})=0$ as an appendix.
\section{Preliminary}\label{sec:p}
We begin by defining the gauge transformation associated to (\ref{dnls}). If we assume $u$ to be a solution to the Cauchy problem (\ref{dnls}), then $$
{\mathcal G}_{\nu}u(t,x)=u(t,x)e^{-i\frac{\nu}{2}\int_{-\infty}^x|u(t,y)|^2\,dy},\quad \nu\in\mathbb{R} $$ is formally a solution of the following Cauchy problem; \begin{eqnarray}\label{dnls-g} \left\{ \begin{array}{l}
i\partial_tv+\partial_x^2v=i(2-\nu)|v|^2\partial_xv+i(1-\nu)v^2\partial_x\overline{v}+\frac{\nu(1-\nu)}{4}|v|^4v,\\ v(0,x)=v_{0,\nu}(x), \end{array} \right. \end{eqnarray} where $v_{0,\nu}={\mathcal G}_{\nu}(u_0)$. Using the gauge transform, the conservation laws in (\ref{eq:mass}), (\ref{eq:H1energy}), (\ref{eq:H1momentum}), (\ref{eq:H2energy}) and (\ref{eq:H2momentum}) reduce to the following, the $L^2$-norm \begin{eqnarray}\label{eq:gmass} {\mathcal M}(v)=M(u), \end{eqnarray} the energy \begin{eqnarray}\label{eq:gH1energy}
{\mathcal E}_{1,\nu}(v) & = & \int_{-\infty}^{\infty}\left(|\partial_x v|^2-\left(\frac32-\nu\right)|v|^2\Im\left(\overline{v}\partial_xv\right)+\frac{(2-\nu)(1-\nu)}{4}|v|^6\right)\,dx\nonumber\\ & = & {\mathcal E}_{1,\nu}(v_{0,\nu})=E(u_0), \end{eqnarray} the momentum \begin{eqnarray}\label{eq:gH1momentum}
{\mathcal P}_{1,\nu}(v)=\int_{-\infty}^{\infty}\left(\Im\left(\overline{v}\partial_x v\right)-\frac{1-\nu}{2}|v|^4\right)\,dx={\mathcal P}_{1,\nu}(v_{0,\nu})=P_1(u_0), \end{eqnarray} the $H^2$-energy \begin{eqnarray}\label{eq:gH2energy}
{\mathcal E}_{2,\nu}(v) & = & E_2(v)+\frac{\nu}{16}\left(\nu^3-10\nu^2+30\nu-35\right)\|v\|_{L^{10}}^{10}
+\nu(4\nu-15)\||v|^2\partial_xv\|_{L^2}^2\\ & &
+\frac52\nu(\nu-3)\Re\langle(\partial_xv)^2,|v|^2v^2\rangle
+3\nu\Im\langle \partial_x^2v,|v|^2\partial_xv\rangle+\nu\Im\langle \partial_x^2v,v^2\overline{\partial_xv}\rangle\nonumber\\ & &
+\frac{\nu}{4}(2\nu^2-15\nu+30)\Im\langle\partial_xv,|v|^6v\rangle\nonumber\\ & = & {\mathcal E}_{2,\nu}(v_{0,\nu}) \nonumber\\ & = & E_2(u_0)\nonumber \end{eqnarray} and the $H^2$-momentum \begin{eqnarray}\label{eq:gH2momentum}
{\mathcal P}_{2,\nu}(v) & = & P_2(v)+\frac{\nu}{16}(\nu^2-6\nu+10)\|v\|_{L^8}^8+\frac54\nu\|v\partial_xv\|_{L^2}^2\\
& & +\frac{\nu}{2}\Re\langle(\partial_xv)^2,v^2\rangle+\frac38\nu(\nu-4)\Im\langle \partial_xv,|v|^4v\rangle\nonumber\\ & = & {\mathcal P}_{2,\nu}(v_{0,\nu})\nonumber\\ & = & P_2(u_0).\nonumber \end{eqnarray} In particular, the equation of (\ref{dnls-g}) with parameter $\nu=3/2$ is \begin{eqnarray}\label{dnls-gp} \left\{ \begin{array}{l}
i\partial_tv+\partial_x^2v=\frac{i}{2}\left(|v|^2\partial_xv-v^2\partial_x\overline{v}\right)-\frac{3}{16}|v|^4v,\\ v(0,x)=v_0, \end{array} \right. \end{eqnarray} where $v_0=v_{0,3/2}$.
It is easy to see that the map ${\mathcal G}_{\nu}$ is a bijection in $H^s$ for $s\ge 1/2$ (see \cite{t1}). Let us consider $v_0\in H^1$ such that the solution of the Cauchy problem (\ref{dnls-gp}) blows up at time $t=T$.
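For the reader's convenience, we record the pointwise identity behind these reductions. Writing $v={\mathcal G}_{\nu}u$, one has $$ \partial_xv=\left(\partial_xu-\frac{i\nu}{2}|u|^2u\right)e^{-i\frac{\nu}{2}\int_{-\infty}^x|u(t,y)|^2\,dy}, $$ so that $|v|=|u|$ and $$ \Im\int_{-\infty}^{\infty}\overline{v}\partial_xv\,dx=\Im\int_{-\infty}^{\infty}\overline{u}\partial_xu\,dx-\frac{\nu}{2}\|u\|_{L^4}^4, $$ which, substituted into (\ref{eq:gH1momentum}), gives ${\mathcal P}_{1,\nu}(v)=P_1(u)$.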
\section{$\dot{H}^1$-norm estimate for solutions to the gauged equivalent equation}\label{sec:H1estimate}
First, we recall that because of the time reflection invariance, it suffices to consider non-negative time.
Let $v$ be the $H^1$-solution to (\ref{dnls-gp}). If $\sup_{0\le t< T}\|v(t)\|_{L^6}\le C(T)$ for any $T\in(0,\infty]$, then we have from the energy conservation law (\ref{eq:gH1energy}) that $\sup_{0\le t<T}\|\partial_xv(t)\|_{L^2}\le C(T)$ for any $T>0$. In addition, in \cite{fhi} the global $\dot{H}^1$ bound of solutions was obtained for ${\mathcal P}_{1,3/2}(v_0)<0$. Then without loss of generality, we assume that ${\mathcal P}_{1,3/2}(v_0)\ge 0$ and $\limsup_{t\to T-}\|v(t)\|_{L^6}=\infty$ for some time $T\in(0,\infty]$.
Given solutions $v={\mathcal G}_{3/2}u$ of (\ref{dnls-gp}), for $\alpha\in\mathbb{R}$ we write $w(t,x)=v(t,x)e^{i\alpha x}$, which satisfies the following equation; $$
i\partial_tw+\partial_x^2w-2i\alpha\partial_xw-\alpha^2w=\frac{i}{2}\left(|w|^2\partial_xw-w^2\partial_x\overline{w}\right)+\alpha|w|^2w-\frac{3}{16}|w|^4w. $$ In this setting, the conservation laws ${\mathcal P}_{1,3/2}$ and ${\mathcal E}_{1,3/2}$ become \begin{eqnarray*}
{\cal P}_{1,3/2}(v)=\Im\int_{-\infty}^{\infty}\overline{w}\partial_xw\,dx+\frac{1}{4}\|w\|_{L^4}^4-\alpha\|w\|_{L^2}^2 \end{eqnarray*} and \begin{eqnarray*}
{\cal E}_{1,3/2}(v)=\|\partial_xw\|_{L^2}^2-\frac{1}{16}\|w\|_{L^6}^6-2\alpha\Im\int_{-\infty}^{\infty}\overline{w}\partial_xw\,dx+\alpha^2\|w\|_{L^2}^2, \end{eqnarray*} respectively, which imply \begin{eqnarray}\label{pe}
{\cal E}_{1,3/2}(v)=\|\partial_xw\|_{L^2}^2-\frac{1}{16}\|w\|_{L^6}^6-2\alpha\left({\cal P}_{1,3/2}(v)-\frac14\|w\|_{L^4}^4\right)-\alpha^2\|w\|_{L^2}^2. \end{eqnarray}
We use some ideas from \cite{a1}. The following sharp Gagliardo-Nirenberg inequality was obtained (see also \cite{gw}). \begin{lemma}\label{lem:GN} \begin{eqnarray*}
\|\phi\|_{L^6}\le C_{GN} \|\partial_x\phi\|_{L^2}^{1/9}\|\phi\|_{L^4}^{8/9}, \end{eqnarray*} where $C_{GN}=3^{1/6}\cdot(2\pi)^{-1/9}$. \end{lemma}
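As a consistency check, the ground state $Q_c$ in (\ref{eq:soliton}) turns the inequality of Lemma \ref{lem:GN} into an equality: using $\|Q_c\|_{L^6}^6=24c^2\pi$, $\|Q_c\|_{L^4}^4=8c\pi$ and $\|\partial_xQ_c\|_{L^2}^2=c^2\pi/2$, $$ C_{GN}\|\partial_xQ_c\|_{L^2}^{1/9}\|Q_c\|_{L^4}^{8/9}=3^{1/6}(2\pi)^{-1/9}\left(\frac{c^2\pi}{2}\right)^{1/18}(8c\pi)^{2/9}=2^{1/2}\cdot 3^{1/6}\cdot\pi^{1/6}\cdot c^{1/3}=\|Q_c\|_{L^6}. $$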
If $\alpha>0$, we apply Lemma \ref{lem:GN} to obtain the following \begin{eqnarray*}
\frac{\|v\|_{L^4}^4}{4}+\frac{C_{GN}^{-18}\|v\|_{L^6}^{18}}{2\alpha\|v\|_{L^4}^{16}}\le \frac{{\cal E}_{1,3/2}(v)}{2\alpha}+{\cal P}_{1,3/2}(v)+\frac{\alpha\|v\|_{L^2}^2}{2}+\frac{\|v\|_{L^6}^6}{32\alpha}. \end{eqnarray*}
Replacing the parameter $\alpha$ by $\alpha\|v\|_{L^6}^3$, we have \begin{eqnarray}\label{eq:gw22}
\left(\frac{\|v\|_{L^4}^4}{4\|v\|_{L^6}^3}+\frac{C_{GN}^{-18}\|v\|_{L^6}^{15}}{2\alpha\|v\|_{L^4}^{16}}-\frac{\|v\|_{L^2}^2}{2}\alpha-\frac{1}{32\alpha}\right)\|v\|_{L^6}^3\le \frac{{\cal E}_{1,3/2}(v)}{2\alpha\|v\|_{L^6}^3}+{\cal P}_{1,3/2}(v).\end{eqnarray}
Now we put $X=\|v\|_{L^4}^4/\|v\|_{L^6}^3$, and choose $\alpha=\alpha_0=2^{-5/2}\cdot 3^{-1/2}\cdot \pi^{-1/2}$. Then the left-hand side of (\ref{eq:gw22}) is $$
\left(\frac{X}{4}+\frac{C_{GN}^{-18}}{2\alpha_0X^4}-\frac{\|v\|_{L^2}^2}{2}\alpha_0-\frac{1}{32\alpha_0}\right)\|v(t)\|_{L^6}^3. $$
Observe that upon using the assumption $\|v\|_{L^2}=\sqrt{4\pi}$ in Theorem \ref{thm:main}, the function $$
f(x)=\frac{x}{4}+\frac{C_{GN}^{-18}}{2\alpha_0x^4}-\frac{\|v\|_{L^2}^2}{2}\alpha_0-\frac{1}{32\alpha_0},\quad x>0 $$ attains its minimum value zero at $$ x=X_0=\left(\frac{8C_{GN}^{-18}}{\alpha_0}\right)^{1/5}=2^{3/2}\cdot 3^{-1/2}\cdot\pi^{1/2}, $$ namely $f(X_0)=0$. Note that $$ \frac{x}{4}+\frac{C_{GN}^{-18}}{2\alpha_0x^4}-2\pi\alpha_0-\frac{1}{32\alpha_0}=0 $$ is possible only if $x=X_0$. Applying this to (\ref{eq:gw22}), it follows that \begin{eqnarray}\label{eq:bound}
0\le \left(\frac{X}{4}+\frac{C_{GN}^{-18}}{2\alpha_0X^4}-\frac{\|v\|_{L^2}^2}{2}\alpha_0-\frac{1}{32\alpha_0}\right)\|v\|_{L^6}^3\le \frac{{\cal E}_{1,3/2}(v)}{2\alpha_0\|v\|_{L^6}^3}+{\cal P}_{1,3/2}(v). \end{eqnarray}
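The minimizing property of $f$ used above can be verified directly. Since $C_{GN}^{-18}=4\pi^2/27$, solving $$ f'(x)=\frac14-\frac{2C_{GN}^{-18}}{\alpha_0x^5}=0 $$ gives $x^5=8C_{GN}^{-18}/\alpha_0=2^{15/2}\cdot 3^{-5/2}\cdot\pi^{5/2}$, that is $x=X_0$; at this point, using $\frac{C_{GN}^{-18}}{2\alpha_0X_0^4}=\frac{X_0}{16}$ and $\|v\|_{L^2}^2=4\pi$, $$ f(X_0)=\frac{5X_0}{16}-2\pi\alpha_0-\frac{1}{32\alpha_0}=(5-2-3)\cdot 2^{-5/2}\cdot 3^{-1/2}\cdot\pi^{1/2}=0. $$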
If there exists a time $T\in(0,\infty]$ such that $$
\liminf_{t\to T-}\left|\frac{\|v(t)\|_{L^4}^4}{\|v(t)\|_{L^6}^3}-X_0\right|>0, $$ then we have $$
\liminf_{t\to T-}\left(\frac{X}{4}+\frac{C_{GN}^{-18}}{2\alpha_0X^4}-\frac{\|v\|_{L^2}^2}{2}\alpha_0-\frac{1}{32\alpha_0}\right)>0, $$
which yields $\limsup_{t\to T-}\|v(t)\|_{L^6}<\infty$. This contradicts our assumption.
As we have seen above, blow-up solutions $v$ should satisfy $$
\liminf_{t\to T-}\left|\frac{\|v(t)\|_{L^4}^4}{\|v(t)\|_{L^6}^3}-X_0\right|=0. $$ Then there exists a sequence $\{t_n\}$ such that $$
t_n\le t_{n+1}<T,\quad \lim_{n\to\infty}t_n=T,\quad \lim_{n\to\infty}\frac{\|v(t_n)\|_{L^4}^4}{\|v(t_n)\|_{L^6}^3}=X_0, $$ namely $$
\frac{\|v(t_n)\|_{L^4}^4}{\|v(t_n)\|_{L^6}^3}=X_0+o_n(1), $$
where $\lim_{n\to\infty}o_n(1)=0$. Using the conservation law ${\mathcal E}_{1,3/2}$, one can get $\lim_{n\to\infty}\|\partial_xv(t_n)\|_{L^2}=\infty$. Then it follows that $$
\|v(t_n)\|_{L^4}^4=X_0\|v(t_n)\|_{L^6}^3+o_n(1)\|v(t_n)\|_{L^6}^3. $$
Fix $c>0$, an auxiliary constant which can later be normalized away (see Remark \ref{rem:const} below), and take $$
\lambda_n=\sqrt{\frac{3\cdot 2^3\cdot c^2\cdot \pi}{\|v(t_n)\|_{L^6}^6}}, $$ which satisfies $\lim_{n\to \infty}\lambda_n=0$. We rescale the solution $v$ by $$ w_n(x)=\lambda_n^{1/2}v(t_n,\lambda_nx)e^{i\beta x}, $$ where $\beta\in\mathbb{R}$ is chosen later. Hence, we know $$
\|w_n\|_{L^4}^4=\lambda_n\|v(t_n)\|_{L^4}^4,\quad \|w_n\|_{L^6}^3=\lambda_n \|v(t_n)\|_{L^6}^3 $$ and $$
\|\partial_xw_n\|_{L^2}^2=\lambda_n^2\|\partial_xv(t_n)\|_{L^2}^2+2\beta\lambda_n\Im\int_{\mathbb{R}}\overline{v(t_n,x)}\partial_xv(t_n,x)\,dx+\beta^2M(v_0). $$ In particular we have $$
\|w_n\|_{L^6}^6=3\cdot 2^3\cdot c^2\cdot \pi=\|Q_c\|_{L^6}^6, $$ $$
\|w_n\|_{L^4}^4=\lambda_n\left(X_0\|v(t_n)\|_{L^6}^3+o_n(1)\|v(t_n)\|_{L^6}^3\right)\to X_0\cdot 3^{\frac12}\cdot 2^{\frac32}\cdot c \cdot \pi^{\frac12}=8c\pi=\|Q_c\|_{L^4}^4, $$ as $n\to \infty$. Moreover choosing $\beta=c/2$, one gets \begin{eqnarray*}
\|\partial_xw_n\|_{L^2}^2& = &\lambda_n^2\left({\cal E}_{1,3/2}(v_0)+\frac{1}{16}\|v(t_n)\|_{L^6}^6\right)+2\beta\lambda_n\left({\cal P}_{1,3/2}(v_0)-\frac{1}{4}\|v(t_n)\|_{L^4}^4\right)+4\pi\beta^2\\ & \to & \frac{3\cdot 2^3\cdot c^2 \cdot \pi}{2^4}-\frac{2\cdot \beta\cdot 3^{\frac12}\cdot 2^{\frac32}\cdot c\cdot \pi^{\frac12}\cdot 2^{\frac32}\cdot 3^{-\frac12}\cdot \pi^{\frac12}}{2^2}+4\pi\beta^2\\ & = & \frac{\pi}{2}\left(8\left(\beta-\frac{c}{2}\right)^2 +c^2\right)\\
& = & \frac{c^2\pi}{2}=\|\partial_xQ_c\|_{L^2}^2, \end{eqnarray*} as $n$ goes to infinity.
In conclusion, putting $$ w_n(x)=\lambda_n^{1/2}v(t_n,\lambda_nx)e^{icx/2}, $$ we obtain $$
\lim_{n\to\infty}\|w_n\|_{L^4}^4=\|Q_c\|_{L^4}^4,\quad \|w_n\|_{L^6}^6=\|Q_c\|_{L^6}^6,\quad \lim_{n\to\infty}\|\partial_xw_n\|_{L^2}^2=\|\partial_xQ_c\|_{L^2}^2. $$
\begin{remark}
We comment on the profile of $w_n$. Recall $$ v(t_n,x)=\frac{1}{\lambda_n^{1/2}}e^{-i\frac{c}{2\lambda_n}x}w_n\left(\frac{x}{\lambda_n}\right). $$ It follows \begin{eqnarray*}
{\cal P}_{1,3/2}(v(t_n)) & =& \frac{1}{\lambda_n^2}\Im\int_{\mathbb{R}}\overline{w_n\left(\frac{x}{\lambda_n}\right)}\left\{\partial_xw_n\left(\frac{x}{\lambda_n}\right)-i\frac{c}{2}w_n\left(\frac{x}{\lambda_n}\right)\right\}\,dx+\frac{1}{4\lambda_n}\|w_n\|_{L^4}^4\\
& = & \frac{1}{\lambda_n}\left({\mathcal P}_{1,3/2}(w_n)-\frac{c}{2}\|w_n\|_{L^2}^2\right) \end{eqnarray*} and then \begin{eqnarray}\label{eq:scvw}
{\cal E}_{1,3/2}(v(t_n)) & = & \frac{1}{\lambda_n^3}\left\|\partial_xw_n\left(\frac{\cdot}{\lambda_n}\right)-i\frac{c}{2}w_n\left(\frac{\cdot}{\lambda_n}\right)\right\|_{L^2}^2-\frac{1}{16\lambda_n^2}\|w_n\|_{L^6}^6\\
& = & \frac{1}{\lambda_n^2}\left(\|\partial_xw_n\|_{L^2}^2+\frac{c^2}{4}\|w_n\|_{L^2}^2-c\Im\int_{\mathbb{R}}\overline{w_n(x)}\partial_xw_n(x)\,dx-\frac{1}{16}\|w_n\|_{L^6}^6\right)\nonumber\\ & = & \frac{1}{\lambda_n^2}\left({\mathcal E}_{1,3/2}(w_n)
+\frac{c}{4}\|w_n\|_{L^4}^4-\frac{c^2}{4}\|w_n\|_{L^2}^2-c{\cal P}_{1,3/2}(v(t_n))\lambda_n\right).\nonumber \end{eqnarray}
It is important to notice that, since $\|w_n\|_{L^6}=\|Q_c\|_{L^6}$, together with (\ref{eq:scvw}) we have \begin{eqnarray*} \frac{5}{2}c^2\pi
& = & \lim_{n\to\infty}\left(\|\partial_xw_n\|_{L^2}^2+\frac{c}{4}\|w_n\|_{L^4}^4\right)\\
& = & \lim_{n\to\infty}\left(\frac{1}{16}\|w_n\|_{L^6}^6+\frac{c^2}{4}\|w_n\|_{L^2}^2+c{\cal P}_{1,3/2}(v(t_n))\lambda_n+{\cal E}_{1,3/2}(v(t_n))\lambda_n^2\right)\\ & = & \lim_{n\to\infty}\left(\frac{5}{2}c^2\pi+c{\cal P}_{1,3/2}(v_0)\lambda_n+{\cal E}_{1,3/2}(v_0)\lambda_n^2\right). \end{eqnarray*} Observe that using (\ref{eq:minmizer}), we see that either (i) ${\cal P}_{1,3/2}(v_0)> 0$ or (ii) ${\cal P}_{1,3/2}(v_0)=0<{\cal E}_{1,3/2}(v_0)$ must hold; in the remaining cases the solution would be global. \end{remark}
\section{Proof of Theorem \ref{thm:main}}
\subsection{Concentration phenomena}\label{sec:concentration}
To obtain a concentration phenomenon and a singularity profile of solutions, we need the following lemma.
\begin{lemma}[Corollary 3.1 \cite{a1}]\label{lem:Au} \begin{eqnarray*}
\min\left\{\frac12\|\partial_xu\|_{L^2}^2+\frac14\|u\|_{L^4}^4\mid\|u\|_{L^6}=1\right\}=\frac{5}{18}\lambda, \end{eqnarray*} where the minimum is attained, up to gauge invariance and dilation, by $$ u^*(x)=\sqrt{\frac{2}{x^2+\frac{4}{3}\lambda}}, $$
for $\lambda=3\cdot (3\pi)^{2/5}/4$; the minimizer satisfies $\|u^*\|_{L^6}=1$ and solves $$ -u''+u^3-\lambda u^5=0. $$ \end{lemma}
Now if $$ \mu=2\cdot(3\pi)^{\frac15}\cdot\nu^2,\quad \nu^2=\frac{(3\pi)^{1/10}}{2\cdot 3^{1/2}\cdot \pi^{1/2}\cdot c}, $$ one has $$
\|w_n\|_{L^6}^6=\frac{\mu}{\nu^6},\quad \left(\frac{\nu}{\mu}\right)^2=\frac{c}{2},\quad \frac34\cdot(3\pi)^{2/5}\cdot\frac{\nu^4}{\mu^2}=\frac{3}{16}. $$ We apply Lemma \ref{lem:Au} with these choices. Setting $u(x)=\nu v(\mu x)$, it follows that \begin{eqnarray*}
\min\left\{\frac{\nu^2\mu}{2}\|\partial_xv\|_{L^2}^2+\frac{\nu^4\mu^{-1}}{4}\|v\|_{L^4}^4\mid\|v\|_{L^6}^6=\frac{\mu}{\nu^6}\right\}=\frac{5}{18}\lambda. \end{eqnarray*} Hence \begin{eqnarray}\label{eq:minmizer}
\min\left\{\|\partial_xv\|_{L^2}^2+\frac{c}{4}\|v\|_{L^4}^4\mid\|v\|_{L^6}^6=\|Q_c\|_{L^6}^6\right\}=\frac{5}{2}c^2\pi, \end{eqnarray} where the minimum is attained by $$ v(x)=\frac{1}{\nu}u^*\left(\frac{x}{\mu}\right), $$ which satisfies the following equation $$ -v''+\frac{c}{2}v^3-\frac{3}{16} v^5=0. $$ In particular, the property in (\ref{eq:minmizer}) allows us to establish that for all $\varepsilon>0$, there exists a natural number $N\in\mathbb{N}$ such that for $n\ge N$, we have $$
\frac{5}{2}c^2\pi<\|\partial_xw_n\|_{L^2}^2+\frac{c}{4}\|w_n\|_{L^4}^4<\frac{5}{2}c^2\pi+\varepsilon. $$
Recall the following profile decomposition lemma (see e.g. \cite{g,hk,km,k}).
\begin{lemma}[The profile decomposition (e.g. \cite{g,hk,km,k})]\label{lem:profile} There exist $\{V^j\},~\{x_n^j\}$ such that $$ w_n(x)=\sum_{j=1}^LV^j(x-x_n^j)+R_n^L(x), $$ where for $j\ne k$ \begin{eqnarray}\label{eq:diff}
\lim_{n\to\infty}|x_n^j-x_n^k|=\infty, \end{eqnarray} \begin{eqnarray}\label{eq:r-mass}
\|Q_c\|_{L^2}^2=\|w_n\|_{L^2}^2=\sum_{j=1}^L\|V^j\|_{L^2}^2+\|R_n^L\|_{L^2}^2+o_n(1), \end{eqnarray} \begin{eqnarray}\label{eq:r-kin}
\|\partial_xw_n\|_{L^2}^2=\sum_{j=1}^L\|\partial_xV^j\|_{L^2}^2+\|\partial_xR_n^L\|_{L^2}^2+o_n(1)=\|\partial_xQ_c\|_{L^2}^2+o_n(1), \end{eqnarray} \begin{eqnarray}\label{eq:r-energy} {\mathcal E}_{1,3/2}(w_n)=\sum_{j=1}^L{\mathcal E}_{1,3/2}(V^j)+{\mathcal E}_{1,3/2}(R_n^L)+o_n(1)={\mathcal E}_{1,3/2}(Q_c)+o_n(1), \end{eqnarray} and \begin{eqnarray}\label{eq:r-L6}
\limsup_{n,L\to\infty}\|R_n^L\|_{L^p}=0 \end{eqnarray} for every $p\in(2,\infty)$. \end{lemma}
Note that in view of (\ref{eq:diff}), (\ref{eq:r-mass}), (\ref{eq:r-kin}) and (\ref{eq:r-L6}), this lemma implies \begin{eqnarray}\label{eq:in}
\|w_n\|_{L^p}^p=\sum_{j=1}^L\|V^j\|_{L^p}^p+\|R_n^L\|_{L^p}^p+o_n(1)=\sum_{j=1}^L\|V^j\|_{L^p}^p+o_{n,L}(1), \end{eqnarray} for $p=4$ and $6$, where $\lim_{n,L\to\infty}o_{n,L}(1)=0$. Now let $K_c$ be the functional given by $$
K_c(f)=\|\partial_xf\|_{L^2}^2+\frac{c}{4}\|f\|_{L^4}^4. $$
Since $\|V^j\|_{L^6}\le \|w_n\|_{L^6}$, we have \begin{eqnarray}\label{eq:K} \frac{5}{2}c^2\pi+\varepsilon & > & K_c(w_n)\\ & = & \sum_{j=1}^LK_c(V^j)+K_c(R_n^L)+o_n(1)\nonumber\\ & \ge & \sum_{j=1}^LK_{c_j}(V^j)+o_n(1)\nonumber\\ & \ge & \sum_{j=1}^L\frac{5}{2}c_j^2\pi+o_n(1),\nonumber \end{eqnarray}
where $c_j\in[0,c]$ is defined by $\|V^j\|_{L^6}^6=3\cdot 2^3\cdot c_j^2\cdot \pi$. Also, from (\ref{eq:in}), the right-hand side is $$
\sum_{j=1}^L\frac{5}{2}c_j^2\pi+o_n(1)=\frac{5}{48}\|w_n\|_{L^6}^6+o_{n,L}(1)=\frac{5}{2}c^2\pi+o_{n,L}(1), $$ which implies that $$ K_{c_j}(V^j)=\frac{5}{2}c_j^2\pi $$
for all $1\le j\le L$. Since $\|V^j\|_{L^6}^6=3\cdot 2^3\cdot c_j^2\cdot \pi$, by Lemma \ref{lem:Au}, we obtain that $V^j(x)=Q_{c_j}(x-x_j)e^{i\alpha_j}$ for some $(x_j,\alpha_j)\in\mathbb{R}^2$. Furthermore, by (\ref{eq:r-mass}), we obtain $$
4\pi=\sum_{j=1}^L\|Q_{c_j}\|_{L^2}^2+\|R_{n}^L\|_{L^2}^2+o_{n}(1), $$
which forces $L=1$, since $\|Q_{c_j}\|_{L^2}=\sqrt{4\pi}$ for every $j$.
\subsection{Proof of Theorem \ref{thm:main}}\label{sec:proofthm}
Collecting the information above, we describe $w_n$ as $$ w_n(x)=e^{i\gamma}\left(Q_c(x-x_n)+R_n(x)\right) $$ for some $\gamma\in\mathbb{R}$ and $x_n\in\mathbb{R}$. As a consequence, we have \begin{eqnarray}\label{eq:profile} v(t_n,x)=e^{-i\frac{c}{2\lambda_n}x+i\gamma}\frac{1}{\lambda_n^{1/2}}\left(Q_c\left(\frac{x}{\lambda_n}-x_n\right)+R_n\left(\frac{x}{\lambda_n}\right)\right), \end{eqnarray} where $R_n$ satisfies $$
\|R_n\|_{L^2}^2+2\Re\langle Q_c(\cdot-x_n),R_n\rangle=0. $$ Moreover $$
\left\|v(t_n,x)-e^{-i\frac{c}{2\lambda_n}x+i\gamma}\frac{1}{\lambda_n^{1/2}}R_n\left(\frac{x}{\lambda_n}\right)\right\|_{L^p}=\lambda_n^{\frac1p-\frac12}\|Q_c\|_{L^p}=\|v(t_n)\|_{L^p}, $$ for $p=2$ and $p=6$.
Denote $v_n=v(t_n,\cdot)$, which completes the proof of Theorem \ref{thm:main}. \qed
\begin{remark}\label{re:K} As a consequence of the above calculation in (\ref{eq:K}), we get $$ \lim_{n\to\infty}K_c(R_n)=0. $$ \end{remark} \begin{remark}\label{rem:const} Notice that if we replace $\lambda_n$ by $\lambda_n/c$ (so that $c$ is absorbed into $\lambda_n$), then the irrelevant constant $c$ disappears, and (\ref{eq:profile}) can also be written in the form $$ v(t_n,x)=e^{-i\frac{x}{2\lambda_n}+i\gamma}\frac{1}{\lambda_n^{1/2}}\left(Q_1\left(\frac{x}{\lambda_n}-x_n\right)+R_n\left(\frac{x}{\lambda_n}\right)\right). $$ \end{remark}
\section{Proof of Proposition \ref{prop:exam}}\label{sec:exam}
In this section, we prove Proposition \ref{prop:exam} by giving an example.
Let $q_0=q_{1/4,1}\in H^3(\mathbb{R})$ and define $$ \varphi_a(x)=\frac{\sqrt{x^2+1}}{2(x^4+1)}e^{-\frac{i}{2}x+3i\left(\arctan x+\frac{\pi}{2}\right)-iax}\in H^3(\mathbb{R}),\quad a\in \mathbb{R}. $$ Notice that $$
\|\varphi_a\|_{L^2}^2=\frac14\int_{\mathbb{R}}\frac{1+x^2}{(1+x^4)^2}\,dx\in(0,\infty) $$ is independent of $a\in\mathbb{R}$.
A simple computation shows the following lemma.
\begin{lemma}\label{lem:com} For $a>0$ and $b>0$, $$ \int_{\mathbb{R}}\frac{\cos ax}{1+x^4}\,dx=\pi e^{-\frac{a}{\sqrt{2}}}\sin\left(\frac{a}{\sqrt{2}}+\frac{\pi}{4}\right), $$ $$ \int_{\mathbb{R}}\frac{\cos ax}{(x^2+1)(x^4+1)}\,dx=\frac{\pi}{2}\left(e^{-a}+\sqrt{2}e^{-\frac{a}{\sqrt{2}}}\sin\frac{a}{\sqrt{2}}\right), $$ $$ \int_{\mathbb{R}}\frac{x\sin ax}{(x^2+1)(x^4+1)}\,dx=\frac{\pi}{2}\left(e^{-a}-e^{-\frac{a}{\sqrt{2}}}\cos\frac{a}{\sqrt{2}}+e^{-\frac{a}{\sqrt{2}}}\sin\frac{a}{\sqrt{2}}\right), $$ and $$ \int_{\mathbb{R}}\frac{\cos( bx-3\arctan x)\sqrt{x^2+1}}{x^4+1}\,dx=2\pi\left(-2e^{-b}+\frac14\cos\frac{b}{\sqrt{2}}e^{-\frac{b}{\sqrt{2}}}+\frac14\left(1-\frac{2}{\sqrt{2}}\right)\sin\frac{b}{\sqrt{2}}e^{-\frac{b}{\sqrt{2}}}\right). $$ \end{lemma}
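The formulas in Lemma \ref{lem:com} follow from standard contour integration; we sketch the first one. For $a>0$, closing the contour in the upper half plane picks up the simple poles of $(1+z^4)^{-1}$ at $z=e^{i\pi/4}$ and $z=e^{3i\pi/4}$, whence $$ \int_{\mathbb{R}}\frac{\cos ax}{1+x^4}\,dx=\Re\left(2\pi i\sum_{z_0\in\{e^{i\pi/4},e^{3i\pi/4}\}}\frac{e^{iaz_0}}{4z_0^3}\right)=\frac{\pi}{\sqrt{2}}e^{-\frac{a}{\sqrt{2}}}\left(\cos\frac{a}{\sqrt{2}}+\sin\frac{a}{\sqrt{2}}\right)=\pi e^{-\frac{a}{\sqrt{2}}}\sin\left(\frac{a}{\sqrt{2}}+\frac{\pi}{4}\right), $$ the last step using $\sin(t+\pi/4)=(\sin t+\cos t)/\sqrt{2}$. The remaining formulas are obtained similarly.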
Consider $$ u_0(x)=q_0(x)+\eta\varphi_a(x), $$ where $a,\eta\in\mathbb{R}$ will be chosen later. We show that $u_0$ satisfies (\ref{eq:example}) for suitable $a$ and $\eta$.
Since $\|q_0\|_{L^2}=\sqrt{4\pi}$, we easily see that $\|u_0\|_{L^2}=\sqrt{4\pi}$ if and only if \begin{eqnarray}\label{eq:con11}
\eta\|\varphi_a\|_{L^2}^2+2\Re\langle q_0,\varphi_a\rangle=0. \end{eqnarray} Using Lemma \ref{lem:com}, one obtains that (\ref{eq:con11}) is equivalent to the following: \begin{eqnarray}\label{eq:con1}
\eta=-\frac{2\pi}{\|\varphi_a\|_{L^2}^2}e^{-\frac{a}{\sqrt{2}}}\sin\left(\frac{a}{\sqrt{2}}+\frac{\pi}{4}\right). \end{eqnarray} So we pick $a>0$ such that \begin{eqnarray}\label{eq:con2} \frac{a}{\sqrt{2}}+\frac{\pi}{4}=2k\pi-\varepsilon_0 \end{eqnarray} for some $0<\varepsilon_0\ll 1$ and $k\in\mathbb{Z}$, which by (\ref{eq:con1}) ensures \begin{eqnarray}\label{eq:con3} 0<\eta\lesssim e^{-\frac{a}{\sqrt{2}}}\varepsilon_0\ll 1. \end{eqnarray}
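With the sample choice $k=3$, $\varepsilon_0=0.01$ (arbitrary illustrative values), one can check numerically that (\ref{eq:con1})-(\ref{eq:con2}) indeed force $\eta$ to be positive and of size $O(e^{-a/\sqrt{2}}\varepsilon_0)$, as claimed in (\ref{eq:con3}); a sketch:

```python
import math

def simpson(f, lo, hi, n=100000):
    # composite Simpson rule; n must be even
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(lo + k * h)
    return s * h / 3

# ||phi_a||_{L^2}^2 = (1/4) * int (1+x^2)/(1+x^4)^2 dx, independent of a
norm2 = 0.25 * simpson(lambda x: (1 + x ** 2) / (1 + x ** 4) ** 2, -50.0, 50.0)

k, eps0 = 3, 0.01                                            # sample values
a = math.sqrt(2) * (2 * k * math.pi - math.pi / 4 - eps0)    # choice (eq:con2)
eta = -(2 * math.pi / norm2) * math.exp(-a / math.sqrt(2)) \
      * math.sin(a / math.sqrt(2) + math.pi / 4)             # formula (eq:con1)
# sin(2k*pi - eps0) = -sin(eps0) < 0, so eta > 0 and eta = O(e^{-a/sqrt2} eps0)
```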
Observe that \begin{eqnarray}\label{eq:con21} \partial_xq_0=-\frac{i}{2}q_0+\frac34iQ_1^2q_0-\frac{x}{x^2+1}q_0 \end{eqnarray} and \begin{eqnarray}\label{eq:con22} \partial_x\varphi_a=-\frac{i}{2}\varphi_a+\frac34iQ_1^2\varphi_a-ia\varphi_a-\frac{4x^3}{x^4+1}\varphi_a+\frac{x}{\sqrt{x^2+1}}\varphi_a. \end{eqnarray} Using $P_1(q_0)=0$, it follows that $$
P_1(u_0)=2\eta\Im\langle \partial_xq_0,\varphi_a\rangle+\eta^2\Im\langle \partial_x\varphi_a,\varphi_a\rangle-\frac12\left(\|q_0+\eta\varphi_a\|_{L^4}^4-\|q_0\|_{L^4}^4\right). $$ From (\ref{eq:con11}), (\ref{eq:con21}), (\ref{eq:con22}) and (\ref{eq:con3}), we have that $0<\eta\ll 1$ and $$ P_1(u_0)=-2\eta\left(\frac14\Re\langle Q_1^2q_0,\varphi_a\rangle+\Im\left\langle \frac{x}{x^2+1}q_0,\varphi_a\right\rangle\right)+O(\eta^2). $$ By Lemma \ref{lem:com}, one deduces that \begin{eqnarray*} P_1(u_0) & = & -\eta\pi\left(2e^{-a}+e^{-\frac{a}{\sqrt{2}}}\left((\sqrt{2}+1)\sin\frac{a}{\sqrt{2}}-\cos\frac{a}{\sqrt{2}}\right)\right)+O(\eta^2)\nonumber\\ & = & -\eta\pi\left(2e^{-a}+e^{-\frac{a}{\sqrt{2}}}\sqrt{4+2\sqrt{2}}\sin\left(\frac{a}{\sqrt{2}}-\theta_0\right)\right)+O(\eta^2), \end{eqnarray*} where $\theta_0\in(0,\pi/2)$ such that $$ \cos\theta_0=\frac{\sqrt{2}+1}{\sqrt{4+2\sqrt{2}}},\quad \sin\theta_0=\frac{1}{\sqrt{4+2\sqrt{2}}}. $$ For large $k\in\mathbb{N}$ in (\ref{eq:con2}), we have $e^{-a}\ll e^{-a/\sqrt{2}}$ and $$ \sin\left(\frac{a}{\sqrt{2}}-\theta_0\right)=-\sin\left(\theta_0+\frac{\pi}{4}-\varepsilon_0\right)\lesssim -1. $$ Therefore $$ P_1(u_0)\gtrsim \eta e^{-\frac{a}{\sqrt{2}}}+O(\eta^2)>0. $$ In particular, since $M_0(q_0)=0$ and by Lemma \ref{lem:com}, one gets $M_0(u_0)\ne 0$, if $\eta>0$ is small.
This completes the proof of Proposition \ref{prop:exam}. \qed
\section{Appendix}\label{sec:P2Q}
In this section we verify that $E_2(q_{c^2/4,c})=P_2(q_{c^2/4,c})=P_3(q_{c^2/4,c})=0$.
We begin by summarizing a few computations: \begin{eqnarray}\label{eq:com}
& & \|Q_c\|_{L^2}^2=2^2\pi,~\|Q_c\|_{L^4}^4=2^3c\pi,~\|Q_c\|_{L^6}^6=3\cdot 2^3c^2\pi,~\|Q_c\|_{L^8}^8=5\cdot 2^4c^3\pi,\\
& & \|Q_c\|_{L^{10}}^{10}=7\cdot 5\cdot 2^3c^4\pi,~\|Q_c\|_{L^{12}}^{12}=9\cdot 7\cdot 2^4c^5\pi,\nonumber\\
& & \|\partial_xQ_c\|_{L^2}^2=\frac12c^2\pi,~\|Q_c\partial_xQ_c\|_{L^2}^2=c^3\pi,~\|Q_c^2\partial_xQ_c\|_{L^2}^2=\frac{5}{2}c^4\pi,~\|Q_c^3\partial_xQ_c\|_{L^2}^2=7c^5\pi,\nonumber\\
& & \|\partial_xQ_c\|_{L^4}^4=\frac{3}{2^4}c^5\pi.\nonumber \end{eqnarray}
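These values can be cross-checked numerically. Assuming the explicit algebraic-soliton profile $Q_c(x)=2\sqrt{c}\,(1+c^2x^2)^{-1/2}$ (this closed form is our assumption for the illustration; it is consistent with $\phi=Q_1=2(1+x^2)^{-1/2}$ and with all the norms listed in (\ref{eq:com})), a sketch for $c=1$:

```python
import math

def simpson(f, lo, hi, n=200000):
    # composite Simpson rule; n must be even
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(lo + k * h)
    return s * h / 3

c = 1.0
Q = lambda x: 2 * math.sqrt(c) / math.sqrt(1 + (c * x) ** 2)       # assumed profile
dQ = lambda x: -2 * math.sqrt(c) * c * c * x * (1 + (c * x) ** 2) ** -1.5

L4 = simpson(lambda x: Q(x) ** 4, -200.0, 200.0)    # expect 2^3 c pi
L6 = simpson(lambda x: Q(x) ** 6, -100.0, 100.0)    # expect 3 * 2^3 c^2 pi
dL2 = simpson(lambda x: dQ(x) ** 2, -100.0, 100.0)  # expect c^2 pi / 2
```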
For simplicity, we shall only consider the case $c=1$.
Recall the recurrence formula obtained in \cite{tsm1} (also in \cite{kn}). Let $\{Z^{(n)}\}$ be determined by \begin{eqnarray}\label{eq:rec} Z^{(n+1)}=\sum_{k=1}^{n-1}Z^{(k)}Z^{(n-k)}+iqrZ^{(n)}+q\frac{\partial}{\partial x}\left(\frac1qZ^{(n)}\right),\quad n\in\mathbb{N}, \end{eqnarray} where $Z^{(0)}=qr$ and \begin{eqnarray}\label{eq:z1} Z^{(1)}=-\frac14q^2r^2+\frac{i}{2}qr_x. \end{eqnarray} In \cite{kn,tsm1}, it was proved that if $q=q(t,x)$ is a solution to (\ref{dnls}) and $r=\overline{q}$, then every $\int_{\mathbb{R}}Z^{(n)}\,dx$ is a conserved functional. In particular, we have that for a solution $q=q(t,x)$ to (\ref{dnls}) and $r=\overline{q}$, $$ M(q)=\int_{\mathbb{R}}Z^{(0)}\,dx,\quad P_n(q)=2 \int_{\mathbb{R}}Z^{(2n-1)}\,dx,\quad E_n(q)=-2 \int_{\mathbb{R}}Z^{(2n)}\,dx,\quad n\in\mathbb{N}. $$ The densities $Z^{(n)}~(n=2,3,4)$ are computed in \cite{tsm1} as follows: \begin{eqnarray}\label{eq:z2} Z^{(2)} = -\frac14qq_xr^2-q^2rr_x-\frac{i}{4}q^3r^3+\frac{i}{2}qr_{xx}, \end{eqnarray} \begin{eqnarray}\label{eq:z3} Z^{(3)}&= & \frac{5}{16}q^4r^4-\frac54q^2r_x^2-\frac32q^2rr_{xx}-\frac32qq_xrr_x\\ & & -\frac14qq_{xx}r^2-2iq^3r^2r_x-\frac34iq^2q_xr^3+\frac{i}{2}qr_{xxx},\nonumber \end{eqnarray} and \begin{eqnarray}\label{eq:z4} Z^{(4)} & = & 4q^4r^3r_x+\frac{29}{16}q^3q_xr^4\\ & & -\frac92q^2r_xr_{xx}-2q^2rr_{xxx}-\frac{11}{4}qq_xr_x^2-3qq_xrr_{xx}-2qq_{xx}rr_x-\frac14qq_{xxx}r^2\nonumber\\ & & +\frac{7}{16}iq^5r^5\nonumber\\ & & -\frac{15}{4}iq^3r^2r_{xx}-\frac{25}{4}iq^3rr_x^2-8iq^2q_xr^2r_x-iq^2q_{xx}r^3-\frac34iqq_x^2r^3\nonumber\\ & & +\frac{i}{2}qr_{xxxx}.\nonumber \end{eqnarray}
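The recurrence (\ref{eq:rec}) can also be carried out mechanically, with exact rational arithmetic on formal monomials in $q$, $r$ and their $x$-derivatives. The sketch below (our own bookkeeping, not code from \cite{kn,tsm1}) regenerates (\ref{eq:z2}) and (\ref{eq:z3}) from $Z^{(0)}$ and $Z^{(1)}$:

```python
from fractions import Fraction as F

# A density is encoded as {(q_orders, r_orders): (re, im)}: each key is a
# monomial in q, r and their x-derivatives (sorted tuples of derivative
# orders), and the value is its coefficient re + i*im with exact Fractions.
ZERO = (F(0), F(0))

def add(A, B):
    C = dict(A)
    for m, (re, im) in B.items():
        o = C.get(m, ZERO)
        C[m] = (o[0] + re, o[1] + im)
    return {m: c for m, c in C.items() if c != ZERO}

def scale(A, re, im=F(0)):
    # multiply the density A by the complex scalar re + i*im
    return {m: (re * c[0] - im * c[1], re * c[1] + im * c[0])
            for m, c in A.items()}

def mul(A, B):
    C = {}
    for (qa, ra), ca in A.items():
        for (qb, rb), cb in B.items():
            m = (tuple(sorted(qa + qb)), tuple(sorted(ra + rb)))
            o = C.get(m, ZERO)
            C[m] = (o[0] + ca[0] * cb[0] - ca[1] * cb[1],
                    o[1] + ca[0] * cb[1] + ca[1] * cb[0])
    return {m: c for m, c in C.items() if c != ZERO}

def dx(A):
    # d/dx via the Leibniz rule: raise the order of one factor at a time
    C = {}
    for (qo, ro), c in A.items():
        for i in range(len(qo)):
            m = (tuple(sorted(qo[:i] + (qo[i] + 1,) + qo[i + 1:])), ro)
            o = C.get(m, ZERO)
            C[m] = (o[0] + c[0], o[1] + c[1])
        for i in range(len(ro)):
            m = (qo, tuple(sorted(ro[:i] + (ro[i] + 1,) + ro[i + 1:])))
            o = C.get(m, ZERO)
            C[m] = (o[0] + c[0], o[1] + c[1])
    return {m: c for m, c in C.items() if c != ZERO}

def q_dx_over_q(A):
    # q * d/dx (A/q): every monomial of Z^{(n)} carries an underived factor q,
    # so strip one copy, differentiate, then multiply by q again
    W = {}
    for (qo, ro), c in A.items():
        i = qo.index(0)
        m = (qo[:i] + qo[i + 1:], ro)
        o = W.get(m, ZERO)
        W[m] = (o[0] + c[0], o[1] + c[1])
    return {(tuple(sorted((0,) + qo)), ro): c for (qo, ro), c in dx(W).items()}

qr = {((0,), (0,)): (F(1), F(0))}                 # Z^{(0)} = q r
Z = [qr,
     add(scale(mul(qr, qr), F(-1, 4)),            # Z^{(1)} as in (eq:z1)
         scale({((0,), (1,)): (F(1), F(0))}, F(0), F(1, 2)))]
for n in range(1, 3):
    nxt = {}
    for k in range(1, n):
        nxt = add(nxt, mul(Z[k], Z[n - k]))
    nxt = add(nxt, scale(mul(qr, Z[n]), F(0), F(1)))   # i q r Z^{(n)}
    nxt = add(nxt, q_dx_over_q(Z[n]))
    Z.append(nxt)                                      # Z^{(2)}, Z^{(3)}
```

Here a key like $((0,1),(0,0))$ encodes the monomial $qq_xr^2$; extending the loop produces $Z^{(4)}$ and beyond.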
Let $\phi=Q_1$. We take $$ q(x)=q_0(x)=q_{1/4,1}(x)=\phi(x)e^{-\frac{i}{2}x+\frac34i\int_{-\infty}^x\phi^2(y)\,dy}. $$ Then by (\ref{eq:elliptic}) $$ \frac{q_x}{e^{-\frac{i}{2}x+\frac34i\int_{-\infty}^x\phi^2(y)\,dy}}=\phi_x-\frac{i}{2}\phi+\frac34i\phi^3, $$ $$ \frac{q_{xx}}{e^{-\frac{i}{2}x+\frac34i\int_{-\infty}^x\phi^2(y)\,dy}}=-\frac14\phi+\frac54\phi^3-\frac34\phi^5+i\left(-\phi_x+3\phi^2\phi_x\right), $$ $$ \frac{q_{xxx}}{e^{-\frac{i}{2}x+\frac34i\int_{-\infty}^x\phi^2(y)\,dy}}=-\frac34\phi_x+6\phi^2\phi_x-6\phi^4\phi_x+i\left(\frac18\phi-\frac{21}{16}\phi^3+3\phi^5-\frac98\phi^7+6\phi\phi_x^2\right), $$ and \begin{eqnarray*} \frac{q_{xxxx}}{e^{-\frac{i}{2}x+\frac34i\int_{-\infty}^x\phi^2(y)\,dy}} &= &\frac{1}{16}\phi-\frac98\phi^3+\frac{45}{8}\phi^5-\frac{111}{16}\phi^7+\frac{63}{32}\phi^9+15\phi\phi_x^2-\frac{57}{2}\phi^3\phi_x^2\\ & &+ i\left(\frac12\phi_x-\frac{15}{2}\phi^2\phi_x+\frac{57}{2}\phi^4\phi_x-\frac{117}{8}\phi^6\phi_x+6\phi_x^3\right). \end{eqnarray*} We substitute these into (\ref{eq:z1}), (\ref{eq:z2}), (\ref{eq:z3}) and (\ref{eq:z4}) (or use the recurrence formula (\ref{eq:rec})) to obtain \begin{eqnarray}\label{eq:z1r} Z^{(1)}=\frac{1}{2^3}\phi^4-\frac{1}{2^2}\phi^2+\frac{i}{2}\phi\phi_x, \end{eqnarray} \begin{eqnarray}\label{eq:z2r} Z^{(2)} =\frac{1}{2^2}\phi^3\phi_x-\frac12\phi\phi_x+i\left(-\frac{1}{2^3}\phi^2+\frac{1}{2^2}\phi^4-\frac{1}{2^4}\phi^6\right), \end{eqnarray} \begin{eqnarray}\label{eq:z3r} Z^{(3)} = \frac{1}{2^4}\phi^2-\frac{9}{2^5}\phi^4+\frac{1}{2^3}\phi^6-\frac{1}{2^6}\phi^8+\frac{1}{2^2}\phi^2\phi_x^2+i\left(-\frac{3}{2^3}\phi\phi_x+\frac12\phi^3\phi_x-\frac{1}{2^3}\phi^5\phi_x\right) \end{eqnarray} and \begin{eqnarray}\label{eq:z4r} Z^{(4)} & = & \frac{1}{2^2}\phi\phi_x-\frac{5}{2^3}\phi^3\phi_x+\frac{5}{2^4}\phi^5\phi_x-\frac{3}{2^6}\phi^7\phi_x+\frac{1}{2^2}\phi\phi_x^3\\ & & 
+i\left(\frac{1}{2^5}\phi^2-\frac{1}{2^2}\phi^4+\frac{5}{2^5}\phi^6-\frac{5}{2^7}\phi^8+\frac{1}{2^8}\phi^{10}+\frac{5}{2^3}\phi^2\phi_x^2-\frac{3}{2^4}\phi^4\phi_x^2\right).\nonumber \end{eqnarray} From (\ref{eq:com}) and the fact that $\phi_x$ is an odd function, it follows that $$ \int_{\mathbb{R}}Z^{(n)}\,dx=0,\quad n=1,2,3,4, $$ which imply $$ P_1(q)=E_1(q)=P_2(q)=E_2(q)=0. $$
On the other hand, by (\ref{eq:rec}), (\ref{eq:z1r}), (\ref{eq:z2r}), (\ref{eq:z3r}) and (\ref{eq:z4r}), it follows \begin{eqnarray}\label{eq:z5} \frac12P_3(q) & = & \Re\int_{\mathbb{R}}Z^{(5)}\,dx\\ & = & \Re\int_{\mathbb{R}}\left(2Z^{(1)}Z^{(3)}+(Z^{(2)})^2+i\phi^2Z^{(4)}+q\frac{\partial}{\partial x}\left(\frac{1}{q}Z^{(4)}\right)\right)\,dx\nonumber\\ & = & \Re\int_{\mathbb{R}}\left(2Z^{(1)}Z^{(3)}+(Z^{(2)})^2+\left(i\phi^2-\frac{q_x}{q}\right)Z^{(4)}\right)\,dx\nonumber\\ & = & \int_{\mathbb{R}}\left(2\Re(Z^{(1)}Z^{(3)})+\Re(Z^{(2)})^2-\frac{\phi_x}{\phi}\Re Z^{(4)}-\left(\frac12+\frac14\phi^2\right)\Im Z^{(4)}\right)\,dx.\nonumber \end{eqnarray} From (\ref{eq:z1r}), (\ref{eq:z2r}), (\ref{eq:z3r}) and (\ref{eq:z4r}), we see \begin{eqnarray*} \Re(Z^{(1)}Z^{(3)}) = -\frac{1}{2^6}\phi^4+\frac{5}{2^6}\phi^6-\frac{17}{2^8}\phi^8+\frac{5}{2^8}\phi^{10}-\frac{1}{2^9}\phi^{12}+\frac{3}{2^4}\phi^2\phi_x^2-\frac{5}{2^4}\phi^4\phi_x^2+\frac{3}{2^5}\phi^6\phi_x^2, \end{eqnarray*} \begin{eqnarray*} \Re(Z^{(2)})^2 = -\frac{1}{2^6}\phi^4+\frac{1}{2^4}\phi^6-\frac{5}{2^6}\phi^8+\frac{1}{2^5}\phi^{10}-\frac{1}{2^8}\phi^{12}+\frac{1}{2^2}\phi^2\phi_x^2-\frac{1}{2^2}\phi^4\phi_x^2+\frac{1}{2^4}\phi^6\phi_x^2, \end{eqnarray*} \begin{eqnarray*} \frac{\phi_x}{\phi}\Re Z^{(4)}=\frac14\phi_x^2-\frac{5}{2^3}\phi^2\phi_x^2+\frac{5}{2^4}\phi^4\phi_x^2-\frac{3}{2^6}\phi^6\phi_x^2+\frac{1}{2^2}\phi_x^4, \end{eqnarray*} and \begin{eqnarray*} \left(\frac12+\frac14\phi^2\right)\Im Z^{(4)} = \frac{1}{2^6}\phi^2-\frac{15}{2^7}\phi^4+\frac{1}{2^6}\phi^6+\frac{5}{2^8}\phi^8-\frac{1}{2^7}\phi^{10}+\frac{1}{2^{10}}\phi^{12}+\frac{5}{2^4}\phi^2\phi_x^2+\frac{1}{2^4}\phi^4\phi_x^2-\frac{3}{2^6}\phi^6\phi_x^2, \end{eqnarray*} which leads to \begin{eqnarray}\label{eq:zr51} & & 2\Re(Z^{(1)}Z^{(3)})+\Re(Z^{(2)})^2-\frac{\phi_x}{\phi}\Re Z^{(4)}-\left(\frac12+\frac14\phi^2\right)\Im Z^{(4)}\\ & = & 
-\frac{1}{2^6}\phi^2+\frac{9}{2^7}\phi^4+\frac{13}{2^6}\phi^6-\frac{59}{2^8}\phi^8+\frac{5}{2^6}\phi^{10}-\frac{9}{2^{10}}\phi^{12}\nonumber\\ & & -\frac{1}{2^2}\phi_x^2+\frac{15}{2^4}\phi^2\phi_x^2-\frac{5}{2^2}\phi^4\phi_x^2+\frac{11}{2^5}\phi^6\phi_x^2-\frac{1}{2^2}\phi_x^4. \end{eqnarray} We use (\ref{eq:com}) to have \begin{eqnarray*}
\frac{1}{2\pi}P_3(q)
& = & -\frac{1}{2^6}2^2+\frac{9}{2^7}2^3+\frac{13}{2^6}3\cdot 2^3-\frac{59}{2^8}5\cdot 2^4+\frac{5}{2^6}7\cdot 5\cdot 2^3-\frac{9}{2^{10}}9\cdot 7\cdot 2^4\nonumber\\ & & -\frac{1}{2^2}\cdot \frac12+\frac{15}{2^4}-\frac{5}{2^2}\cdot\frac52+\frac{11}{2^5}\cdot 7-\frac{1}{2^2}\frac{3}{2^4}\\ & = & -17+\frac{47}{2}-\frac{13}{2}= 0. \end{eqnarray*} Hence $P_3(q)= 0$.
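Since the cancellation above is delicate, it is worth redoing the arithmetic with exact rational numbers; the eleven summands below are transcribed from the display above:

```python
from fractions import Fraction as F

# the eleven summands of P_3(q)/(2 pi), read off term by term
terms = [
    -F(1, 2**6) * 2**2,          F(9, 2**7) * 2**3,
     F(13, 2**6) * 3 * 2**3,    -F(59, 2**8) * 5 * 2**4,
     F(5, 2**6) * 7 * 5 * 2**3, -F(9, 2**10) * 9 * 7 * 2**4,
    -F(1, 2**2) * F(1, 2),       F(15, 2**4),
    -F(5, 2**2) * F(5, 2),       F(11, 2**5) * 7,
    -F(1, 2**2) * F(3, 2**4),
]
total = sum(terms)   # exact rational sum
```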
\end{document}
\begin{document}
\title[The Quantum Orbifold Cohomology of Toric Stack Bundles]{The Quantum Orbifold Cohomology of \\Toric Stack Bundles}
\author[Y. Jiang]{Yunfeng Jiang} \address{Department of Mathematics\\ University of Kansas\\ 405 Snow Hall 1460 Jayhawk Blvd.\\ Lawrence, KS 66045, USA} \email{y.jiang@ku.edu}
\author[H.-H. Tseng]{Hsian-Hua Tseng} \address{Department of Mathematics\\ Ohio State University\\ 100 Math Tower\\ 231 West 18th Ave.\\Columbus, OH 43210, USA} \email{hhtseng@math.ohio-state.edu}
\author[F. You]{Fenglong You} \address{Department of Mathematics\\ Ohio State University\\ 100 Math Tower\\ 231 West 18th Ave.\\Columbus, OH 43210, USA} \email{you.111@osu.edu}
\keywords{} \date{\today}
\begin{abstract} We study Givental's Lagrangian cone for the quantum orbifold cohomology of toric stack bundles. Using Gromov-Witten invariants of the base and combinatorics of the toric stack fibers, we construct an explicit slice of the Lagrangian cone defined by the genus $0$ Gromov-Witten theory of a toric stack bundle. \end{abstract} \maketitle
\tableofcontents
\section{Introduction} An important problem in Gromov-Witten theory is the computation of Gromov-Witten invariants of orbifolds. Genus $0$ Gromov-Witten invariants of toric stacks can be determined via a Givental-style mirror theorem proven in \cite{CCIT} and \cite{CCK}, while their higher genus Gromov-Witten invariants can be determined by the Givental-Teleman reconstruction of semi-simple CohFTs \cite{Giv1}, \cite{Tel}. Toric bundles over a base $B$ are studied in \cite{SU}, where their cohomology rings were computed. Assuming knowledge of Gromov-Witten invariants of $B$, genus $0$ Gromov-Witten invariants of a toric bundle over $B$ can be determined via the mirror theorem in \cite{Brown}, while their higher genus Gromov-Witten invariants can be determined from genus $0$ invariants and localization \cite{CGT}.
Toric stack bundles, introduced by Jiang \cite{Jiang}, generalize toric bundles by using toric Deligne-Mumford stacks as fibers. The main result of this paper, Theorem \ref{main-theorem}, is a mirror theorem for toric stack bundles $\mathcal{P}\to B$. Roughly speaking, Theorem \ref{main-theorem} gives an explicit slice, the extended $I$-function, of the Lagrangian cone $\mathcal{L}_\mathcal{P}$ of the genus $0$ Gromov-Witten theory of $\mathcal{P}$, which can be used to determine all genus $0$ Gromov-Witten invariants of $\mathcal{P}$ following \cite{Giv2}, assuming that genus $0$ Gromov-Witten invariants of $B$ are known.
Theorem \ref{main-theorem} generalizes the mirror theorems in \cite{Brown}, \cite{CCIT}. Our proof of Theorem \ref{main-theorem} follows the same approach as those in \cite{Brown}, \cite{CCIT}: localization yields a characterization result of the Lagrangian cone $\mathcal{L}_\mathcal{P}$, see Theorem \ref{characterization}. We prove that the extended $I$-function lies on $\mathcal{L}_\mathcal{P}$ by checking the conditions \textbf{(C1)}-\textbf{(C3)} in Theorem \ref{characterization}. The verification of \textbf{(C3)} for toric stack bundles involves a novel point. \textbf{(C3)} concerns fixed points of the fiberwise torus action on $\mathcal{P}$. Components of the fixed loci are abelian gerbes over the base $B$. To check \textbf{(C3)}, we need to know Gromov-Witten theory of certain abelian gerbes over $B$. Fortunately these were previously solved in \cite{AJT09}.
The result in this paper will have applications to the study of birational transformations of orbifold Gromov-Witten invariants. An important class of crepant birational transformations of varieties is {\em flops}. In the study of ordinary flops as in \cite{LLW}, the local models, which are toric bundles over a base scheme $B$ with fiber the projective bundle over a projective space, played an important role in the proof of the invariance of genus zero Gromov-Witten invariants. A special example in our case is a toric stack bundle with fiber the weighted projective bundle over a weighted projective stack, which is the local model of an {\em ordinary orbifold flop}. Theorem \ref{main-theorem} will play a crucial role in proving the crepant transformation conjecture for ordinary orbifold flops.
The rest of the paper is organized as follows. Section \ref{sec:GW} contains a brief review of genus $0$ Gromov-Witten theory. Section \ref{sec:toric_stack} contains a review about toric stacks and related materials. The construction of toric stack bundles is recalled in Section \ref{sec:toric_bdle}. In Section \ref{sec:characterization} we apply localization to derive a characterization result of the Lagrangian cone for toric stack bundles. The main result is then proven in Section \ref{sec:proof_main_thm}.
Throughout this paper, we work over $\mathbb{C}$.
\subsection*{Acknowledgment} We thank the referees for valuable comments and suggestions. H.-H. T. thanks T. Coates, A. Corti, and H. Iritani for related collaborations. Y. J. thanks T. Coates, A. Corti and R. Thomas for the support at Imperial College London. Y. J. and H.-H. T. are supported in part by Simons Foundation Collaboration Grants. F. Y. was supported by a Presidential Fellowship of the Ohio State University during the revision of this paper.
\section{Preparatory materials}
\subsection{Gromov-Witten theory}\label{sec:GW} We give a very brief account on Gromov-Witten theory. The materials we need are discussed in more details in \cite[Section 2]{CCIT}, to which we refer the reader.
Let $\mathcal{X}$ be a smooth proper Deligne-Mumford stack with projective\footnote{In the presence of a torus action, we may allow $\mathcal{X}$ to be only semi-projective.} coarse moduli space $X$. The {\em Chen-Ruan orbifold cohomology} $H_{\text{CR}}^*(\mathcal{X})$ of $\mathcal{X}$ is additively the cohomology of the inertia stack $\mathcal{IX}:=\mathcal{X}\times_{\mathcal{X}\times \mathcal{X}}\mathcal{X}$, where the fiber product is taken over the diagonal. The grading of $H_{\text{CR}}^*(\mathcal{X})$ is the usual grading on cohomology shifted by {\em age}. $H_{\text{CR}}^*(\mathcal{X})$ is also equipped with a non-degenerate pairing $(-,-)_{\text{CR}}$ called orbifold Poincar\'e pairing.
Gromov-Witten invariants of $\mathcal{X}$ are defined as the following intersection numbers: \[ \langle a_1\bar{\psi}_1^{k_1},...,a_n\bar{\psi}_n^{k_n}\rangle_{g,n,d}:=\int_{[\overline{\mathcal{M}}_{g,n}(\mathcal{X}, d)]^w}(\operatorname{ev}_1^*a_1)\bar{\psi}_1^{k_1}\cdots(\operatorname{ev}_n^*a_n)\bar{\psi}_n^{k_n}, \] where \begin{itemize} \item $\overline{\mathcal{M}}_{g,n}(\mathcal{X}, d)$ is the moduli stack of $n$-pointed genus $g$ degree $d$ stable maps to $\mathcal{X}$ with sections to all marked gerbes.
\item $[\overline{\mathcal{M}}_{g,n}(\mathcal{X}, d)]^w\in H_*(\overline{\mathcal{M}}_{g,n}(\mathcal{X}, d), \mathbb{Q})$ is the weighted virtual fundamental class, which is a multiple of the usual virtual fundamental class. More details can be found in \cite[Section 4.6]{AGV} and \cite[Section 2.5.1]{Tseng}.
\item For $i=1,...,n$, $\operatorname{ev}_i: \overline{\mathcal{M}}_{g,n}(\mathcal{X}, d)\to \mathcal{IX}$ is the evaluation map.
\item For $i=1,...,n$, $\bar{\psi}_i\in H^2(\overline{\mathcal{M}}_{g,n}(\mathcal{X}, d), \mathbb{Q})$ are the descendant classes.
\item $a_1,..., a_n\in H^*(\mathcal{IX})$. \end{itemize} Gromov-Witten invariants can be packaged into generating functions, as follows. The genus $g$ Gromov-Witten potential of $\mathcal{X}$ is $$\mathcal{F}_\mathcal{X}^g(\mathbf{t}):=\sum_{n, d} \frac{Q^d}{n!}\langle \mathbf{t},...,\mathbf{t}\rangle_{g,n,d},$$ where $Q^d$ is an element in the Novikov ring of $\mathcal{X}$, $\mathbf{t}=\mathbf{t}(z)=t_0+t_1z+t_2 z^2+...\in H_{\text{CR}}^*(\mathcal{X})[z]$, and $\langle \mathbf{t},...,\mathbf{t}\rangle_{g,n,d}:=\sum_{k_1,...,k_n}\langle t_{k_1}\bar{\psi}^{k_1},...,t_{k_n}\bar{\psi}^{k_n} \rangle_{g,n,d}$.
We briefly recall the Givental's formalism about the orbifold Gromov-Witten invariants in terms of a Lagrangian cone in certain symplectic vector space, which was developed in \cite{Tseng}. Let \[ \mathcal{H}:=H^*_{\text{CR}} (\mathcal {X},\mathbb{C})\otimes \mathbb{C}[[\overline{\on{NE}}(\mathcal{X})]][[z,z^{-1}]], \] where $\overline{\on{NE}}(\mathcal{X})$ is the Mori cone of $\mathcal{X}$. There is a $\mathbb{C}[[\overline{\on{NE}}(\mathcal{X})]]$-valued symplectic form \[ \Omega(f,g):=Res_{z=0}(f(-z),g(z))_{\text{CR}}dz, \] where $(-,-)_{\text{CR}}$ is the orbifold Poincar\'e pairing. Let \[ \mathcal{H}_{+}= H^*_{CR}(\mathcal{X},\mathbb{C})\otimes \mathbb{C}[[\overline{\on{NE}}(\mathcal{X})]][[z]] \text{ and } \mathcal{H}_{-}= z^{-1}H^*_{CR}(\mathcal{X},\mathbb{C})\otimes \mathbb{C}[[\overline{\on{NE}}(\mathcal{X})]][[z^{-1}]]. \] Then $\mathcal{H}=\mathcal{H}_{+}\oplus \mathcal{H}_{-}$ and one can think of $\mathcal{H}=T^*(\mathcal{H}_{+})$.
The graph of the differential of $\mathcal{F}_{\mathcal{X}}^0$, in the dilaton-shifted coordinates, defines a Lagrangian submanifold $\mathcal{L}_\mathcal{X}$ inside the symplectic vector space $\mathcal{H}$; more explicitly, \[ \mathcal{L}_{\mathcal{X}}:= \{
(p,q)\in \mathcal{H}_{-}\oplus\mathcal{H}_{+}|p=d_{q}\mathcal{F}^{0}_{\mathcal{X}} \}\subset\mathcal{H}. \] Tautological equations for genus $0$ Gromov-Witten invariants imply that $\mathcal{L}_\mathcal{X}$ is a cone ruled by a {\em finite dimensional} family of affine subspaces. A particularly important finite-dimensional slice of $\mathcal{L}_\mathcal{X}$ is the {\em $J$-function}: $$J_\mathcal{X}(t,z):=1z+t+\sum_{n, d}\sum_\alpha \frac{Q^d}{n!}\langle t,...,t, \frac{\phi_\alpha}{z-\bar{\psi}}\rangle_{0,n+1,d}\phi^\alpha,$$ where $t=\sum_\alpha t^\alpha \phi_\alpha\in H^*_{\text{CR}}(\mathcal{X})$ and $\{\phi_\alpha\}, \{\phi^\alpha\}\subset H_{\text{CR}}^*(\mathcal{X})$ are additive bases dual to each other under $(-,-)_{\text{CR}}$.
Another important slice of $\mathcal L_{\mathcal X}$ is given by the $S$-operator: \begin{equation}\label{S-operator} S_{\mathcal X}(t,z)(\gamma):= \gamma+\sum_{n, d}\sum_\alpha \frac{Q^d}{n!}\langle t,...,t,\gamma, \frac{\phi_\alpha}{z-\bar{\psi}}\rangle_{0,n+2,d}\phi^\alpha, \end{equation} where $\gamma\in H^*_{CR}(\mathcal X;\mathbb Q)$.
The discussion here extends with little effort to the equivariant and twisted settings.
\subsection{Preliminaries on toric stacks}\label{sec:toric_stack} In this section we collect some basic materials concerning toric stacks. Our presentation closely follows \cite[Section 3]{CCIT}.
\subsubsection{Construction} A toric Deligne-Mumford stack is defined by a stacky fan $\boldsymbol{\Sigma} =(\textbf{N},\Sigma,\rho)$, where \begin{itemize} \item $\textbf{N}$ is a finitely generated abelian group of rank $r$; \item $\Sigma \subset \textbf{N}_{\mathbb{Q}}=\textbf{N}\otimes_{\mathbb{Z}}\mathbb{Q}$ is a rational simplicial fan; \item $\rho:\mathbb{Z}^{n}\rightarrow \textbf{N}$ is a map given by $\{\rho_1,\cdots, \rho_n\}\subset \mathbf{N}$, which is assumed to have finite cokernel. The $\rho_i$'s are vectors determining the rays of the stacky fan. \end{itemize} Let $\bar{\rho_{i}}$ be the image of $\rho_{i}$ under the natural map $\textbf{N} \rightarrow \textbf{N}_{\mathbb{Q}}$.
The {\em fan sequence} is \begin{equation} 0 \longrightarrow \mathbb{L}:=\text{ker}(\rho) \longrightarrow \mathbb{Z}^{n} \stackrel{\rho}{\longrightarrow} \textbf{N}. \end{equation}
Let $\rho^{\vee}: (\mathbb{Z}^{*})^{n}\rightarrow \mathbb{L}^{\vee}$ be the Gale dual of $\rho$, where $\mathbb{L}^{\vee}:=H^1(\operatorname{Cone}(\rho)^*)$ is an extension of $\mathbb{L}^{*}=\on{Hom}(\mathbb{L},\mathbb{Z})$ by a torsion subgroup. More details can be found in \cite{BCS}. The {\em divisor sequence} is \begin{equation} 0\longrightarrow\textbf{N}^{*}\stackrel{\rho^{*}}{\longrightarrow} (\mathbb{Z}^{*})^{n}\stackrel {\rho^{\vee}}{\longrightarrow}\mathbb{L}^{\vee}. \end{equation}
Applying $\on{Hom}_{\mathbb{Z}}(-,\mathbb{C}^{\times})$ to the dual map $\rho^{\vee}$ yields a homomorphism
\begin{equation*} \alpha:G\rightarrow (\mathbb{C}^{\times})^{n}, \quad \text{where} \quad G:=\on{Hom}_{\mathbb{Z}}(\mathbb{L}^{\vee},\mathbb{C}^{\times}), \end{equation*}
and we let $G$ act on $\mathbb{C}^{n}$ via this homomorphism.
For $I\subset\{1,2,\cdots,n\}$, let $\sigma_I$ be the cone generated by $\overline{\rho}_i, i\in I$ and let $\overline{I}$ be the complement of $I$ in $\{1,2,\cdots,n\}$. The collection of {\em anti-cones} $\mathcal{A}$ is defined as follows:
\[ \mathcal{A}:=\left\{I\subset \{1,2,\cdots, n\}: \sigma_{\overline{I}}\in \Sigma \right\}. \] For $I\subset \{1,...,n\}$, define \[ \mathbb{C}^{I}=\left\{ (z_{1},\ldots,z_{n}):z_{i}=0 \text{ for } i \not\in I\right\}. \]
Let $\mathcal{U}$ be the open subset of $\mathbb{C}^{n}$ defined as
\begin{equation*} \mathcal{U}:=\mathbb{C}^{n}\setminus \cup_{I\not\in \mathcal{A}}\mathbb{C}^{I}. \end{equation*}
\begin{defn}[see \cite{BCS}, \cite{Iritani}] The toric Deligne-Mumford stack $\mathcal{X}(\boldsymbol{\Sigma})$ is defined as the quotient stack \[ \mathcal{X}(\boldsymbol{\Sigma}):=[\mathcal{U}/G]. \] \end{defn}
Throughout this paper we assume the toric Deligne-Mumford stack $\mathcal{X}(\boldsymbol{\Sigma})$ has semi-projective coarse moduli space, that is, the coarse moduli space $X(\Sigma)$ is a toric variety that has at least one torus-fixed point and the natural map $X(\Sigma)\rightarrow \operatorname{Spec}H^0(X(\Sigma),\mathcal O_{X(\Sigma)})$ is projective. See \cite[Section 3.1]{CCIT} for more details.
\begin{defn} [\cite{BCS}] Given a stacky fan $\boldsymbol{\Sigma}=(\textbf{N},\Sigma,\rho)$, we define the set of box elements $\on{Box}(\boldsymbol{\Sigma})$ as follows
\[ \on{Box}(\sigma):= \left\{ b\in \textbf{N}: \bar{b}= \sum\limits_{\rho_k\subseteq \sigma}c_{k}\bar{\rho}_{k} \text{ for some }0\leq c_{k}<1 \right\}, \] and set $\on{Box}(\boldsymbol{\Sigma}):=\cup_{\sigma\in \Sigma}\on{Box}(\sigma)$. \end{defn}
The connected components of the inertia stack $\mathcal{IX}(\boldsymbol{\Sigma})$ are indexed by the elements of $\on{Box}(\boldsymbol{\Sigma})$ (see \cite{BCS}). Moreover, given $b\in \on{Box}(\boldsymbol{\Sigma})$, the age of the corresponding connected component of $\mathcal{IX}$ is defined by $\on{age}(b):=\sum\limits_{\rho_k\subseteq \sigma} c_{k}$.
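For intuition, in the simplest one-dimensional case $\textbf{N}=\mathbb{Z}$ the box elements of a ray and their ages can be enumerated directly. The toy code below is our own illustration (not from \cite{BCS}) and treats a single ray $\rho$ in $\mathbb{Z}$:

```python
from fractions import Fraction as F

def box_of_ray(rho):
    # Box of the 1-dim cone spanned by the integer ray rho in N = Z:
    # integer points b = c * rho with 0 <= c < 1, paired with age(b) = c.
    assert rho != 0
    sign = 1 if rho > 0 else -1
    return [(sign * k, F(k, abs(rho))) for k in range(abs(rho))]
```

For example, for the weighted projective line $\mathbb{P}(1,2)$ with rays $\rho_1=1$, $\rho_2=-2$, the cone spanned by $\rho_2$ contributes the box elements $\{0,-1\}$: the untwisted sector ($b=0$, age $0$) and the $B\mu_2$-sector at the stacky point ($b=-1$, age $1/2$).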
The Picard group $\on{Pic}(\mathcal{X}(\boldsymbol{\Sigma}))$ of $\mathcal{X}(\boldsymbol{\Sigma})$ can be identified with the character group $\on{Hom}(G,\mathbb{C}^{\times})$. Hence
\begin{equation} \mathbb{L}^{\vee}= \on{Hom}(G,\mathbb{C}^{\times})\cong \on{Pic}(\mathcal{X}(\boldsymbol{\Sigma})) \cong H^{2}(\mathcal{X}(\boldsymbol{\Sigma});\mathbb{Z}). \end{equation}
The inclusion $(\mathbb{C}^{\times})^{n}\subset \mathcal{U}$ induces an open embedding of the stack $\mathcal{T}=[(\mathbb{C}^{\times})^{n}/G]$ into $\mathcal{X}(\boldsymbol{\Sigma})$ and we have $\mathcal{T}\cong \mathbb{T}\times B\textbf{N}_{tor}$ with $\mathbb{T}:=(\mathbb{C}^{\times})^n/\text{Im}(\alpha)\cong \mathbf{N}\otimes \mathbb{C}^\times$ and $\textbf{N}_{tor}\cong \ker(\alpha)$. The Picard stack $\mathcal{T}$ acts naturally on $\mathcal{X}(\boldsymbol{\Sigma})$ and restricts to the $\mathbb{T}$-action on $\mathcal{X}(\boldsymbol{\Sigma})$. A $\mathcal{T}$-equivariant line bundle on $\mathcal{X}(\boldsymbol{\Sigma})$ corresponds to a $(\mathbb{C}^{\times})^{n}$-equivariant line bundle on $\mathcal{U}$. Thus, \[ \on{Pic}_{\mathcal{T}}(\mathcal{X}(\boldsymbol{\Sigma}))\cong \on{Hom}((\mathbb{C}^{\times})^{n},\mathbb{C}^{\times})\cong (\mathbb{Z}^{n})^{*}. \] We write $u_{1},\ldots,u_{n}$ for the basis of $\mathcal{T}$-equivariant line bundles on $\mathcal{X}(\boldsymbol{\Sigma})$ corresponding to the standard basis of $(\mathbb{Z}^{n})^{*}$ and write $D_{1},\ldots,D_{n}$ for the corresponding non-equivariant line bundles, i.e. $$D_{i}=\rho^{\vee}(u_{i}).$$ By abuse of notation, we also write $u_{i}$ and $D_{i}$ for the corresponding first Chern classes.
\subsubsection{$S$-extended stacky fan} Let $\boldsymbol{\Sigma}=(\textbf{N},\Sigma,\rho)$ be a stacky fan and let
\begin{equation*} S= \{s_{1},\ldots,s_{m}\}\subset \textbf{N} \end{equation*}
be a finite set.
The $S$-extended stacky fan in the sense of \cite{Jiang} is given by $(\textbf{N},\Sigma, \rho^{S} )$, where
\begin{equation} \rho^{S}: \mathbb{Z}^{n+m}\rightarrow \textbf{N}, \quad \rho^{S}(e_{i}) := \left\{
\begin{array}{lr}
\rho_{i} & 1\leq i \leq n;\\
s_{i-n} & n+1\leq i \leq n+m.
\end{array}
\right. \end{equation} Let $\mathbb{L}^{S}$ be the kernel of $\rho^{S}: \mathbb{Z}^{n+m}\rightarrow \textbf{N}$. Gale duality of the $S$-extended fan sequence
\begin{equation}\label{S-ext-fan-seq} 0\longrightarrow \mathbb{L}^{S}:=\text{ker}(\rho^S) \longrightarrow \mathbb{Z}^{n+m} \stackrel{\rho^{S}}{\longrightarrow} \textbf{N} \end{equation} yields the $S$-extended divisor sequence \begin{equation}\label{S-extended-divisor-seq} 0\longrightarrow\textbf{N}^{*}\stackrel{\rho^{*}}{\longrightarrow} (\mathbb{Z}^{*})^{n+m}\stackrel {\rho^{S \vee}}{\longrightarrow}(\mathbb{L}^{S })^{\vee}, \end{equation} where $(\mathbb{L}^{S})^{ \vee}$ is the Gale dual of $\rho^S$. As in \cite[Section 4]{CCIT}, $(\mathbb{L}^{S})^{\vee}$ is the $S$-extended Picard group of $\mathcal{X}(\boldsymbol{\Sigma})$.
Let $\mathcal{A}^{S}$ be the collection of $S$-extended anti-cones, i.e.
\begin{equation*} \mathcal{A}^{S}:=\left\{I^{S}\subset \{1,2,\cdots,n+m\}: \sigma_{\overline{I}^S}\in \Sigma\right\}. \end{equation*}
Note that the extended indices always belong to the anti-cones: \[ \{n+1,\ldots,n+m\}\subset I^{S}, \quad \forall I^{S}\in \mathcal{A}^{S}. \]
By applying $\on{Hom}_{\mathbb{Z}}(-,\mathbb{C}^{\times})$ to the $S$-extended dual map $\rho^{S\vee}$, we have a homomorphism
\begin{equation*} \alpha^{S}:G^{S} \rightarrow (\mathbb{C}^{\times})^{n+m}, \quad \text{where}\quad G^{S} :=\on{Hom}_{\mathbb{Z}}((\mathbb{L}^{S})^{\vee}, \mathbb{C}^{\times}). \end{equation*}
Define $\mathcal{U}^S$ to be the open subset of $\mathbb{C}^{n+m}$ defined by $\mathcal{A}^{S}$:
\begin{equation*} \mathcal{U}^{S}:= \mathbb{C}^{n+m}\setminus \cup_{I^{S} \not\in \mathcal{A}^{S}}\mathbb{C}^{I^{S}} =\mathcal{U}\times (\mathbb{C}^{\times})^{m}, \end{equation*}
where
\begin{equation*} \mathbb{C}^{I^{S}}= \left\{(z_{1},\ldots, z_{n+m}):z_{i}= 0\text{ for }i\not \in I^{S}\right\}. \end{equation*}
Let $G^{S}$ act on $\mathcal{U}^{S}$ via $\alpha^{S}$. Then we obtain the quotient stack $[\mathcal{U}^{S}/G^{S}]$. Jiang \cite{Jiang} showed that
\begin{equation*} [\mathcal{U}^{S}/G^{S}]\cong [\mathcal{U}/G]=\mathcal{X}(\boldsymbol{\Sigma}). \end{equation*}
\subsubsection{Toric maps from $\mathbb{P}_{r_{1},r_{2}}$ to $\mathcal{X}(\boldsymbol{\Sigma})$} We recall the discussion in \cite[Section 3.5]{CCIT} on maps from $1$-dimensional toric stacks to a toric stack. For positive integers $r_{1}$ and $r_{2}$ let $\mathbb{P}_{r_{1},r_{2}}$ be the unique toric Deligne-Mumford stack such that \begin{itemize} \item its coarse moduli space is $\mathbb{P}^1$; \item its isotropy group at $0\in \mathbb{P}^{1}$ is $\mu_{r_{1}}$; \item its isotropy group at $\infty \in\mathbb{P}^{1}$ is $\mu_{r_{2}}$; and \item there are no non-trivial orbifold structures at other points. \end{itemize}
As in \cite{BCS}, for a cone $\sigma\in \Sigma$ of an extended stacky fan $\boldsymbol{\Sigma}$, define \[ \mbox{link}(\sigma):=\{\tau: \sigma+\tau\in\Sigma,\sigma\cap\tau=0\}, \] and let $\rho_{i_{1}},\ldots,\rho_{i_{l}}$ be the rays in $\mbox{link}(\sigma)$. A cone $\sigma\in \Sigma$ defines a closed substack of $\mathcal{X}(\boldsymbol{\Sigma})$, namely the toric stack $\mathcal{X}(\boldsymbol{\Sigma}/\sigma)$ corresponding to the quotient stacky fan $(\mathbf{N}(\sigma), \Sigma/\sigma, \rho(\sigma))$, where $\Sigma/\sigma$ is the quotient fan in $\textbf{N}(\sigma)_\mathbb{Q}=(\textbf{N}/\sum_{i\in\sigma} \mathbb Z \rho_i)\otimes\mathbb{Q}$. More precisely, $\boldsymbol{\Sigma}/\sigma= (\textbf{N}(\sigma),\Sigma/\sigma,\rho(\sigma))$ is an extended stacky fan, where $\rho(\sigma):\mathbb{Z}^{l+m}\rightarrow \textbf{N}(\sigma)$ is given by the images of $\rho_{i_{1}},\ldots,\rho_{i_{l}}$, $s_{1}, \ldots, s_{m}$ under $\textbf{N}\rightarrow \textbf{N}(\sigma)$. From the construction of extended toric Deligne-Mumford stacks, we have \[ \mathcal{X}(\boldsymbol{\Sigma}/\sigma):= [\mathcal{U}^{S}(\sigma)/G^{S}(\sigma)], \] where $\mathcal{U}^{S}(\sigma)= (\mathbb{C}^{l}-V(J_{\Sigma/\sigma}))\times (\mathbb{C}^{\times})^{m}= \mathcal{U}(\sigma)\times (\mathbb{C}^{\times})^{m}$ and $G^{S}(\sigma)= \on{Hom}_{\mathbb{Z}}(\mathbb{L}^{S\vee}(\sigma),\mathbb{C}^{\times})$.
For a box element $b\in \on{Box}(\boldsymbol{\Sigma})$, let $\mathcal{X}(\boldsymbol{\Sigma})_{b}$ be the component of the inertia stack $\mathcal{IX}(\boldsymbol{\Sigma})$ corresponding to $b$. Then $\mathcal{X}(\boldsymbol{\Sigma})_{b}\cong \mathcal{X}(\boldsymbol{\Sigma}/\sigma(b))$, where $\sigma(b)$ is the minimal cone containing $\bar{b}$. We define $b_{i}\in[0,1), 1\leq i \leq n$ by the condition $\bar{b}=\sum^{n}_{i=1}b_{i}\bar{\rho}_{i}$, note that $b_{i}=0$ for $\overline{\rho}_i\not\in \sigma(b)$.
\begin{defn}[see \cite{CCIT}, Notation 8] Let $\sigma,\sigma^{\prime}\in \Sigma$ be two top-dimensional cones. We write $\sigma \dagger \sigma^\prime$ if they intersect along a codimension-$1$ face; in this case, let $j$ denote the unique index such that $\bar{\rho}_{j}\in\sigma\setminus \sigma^{\prime}$ and $j^{\prime}$ the unique index such that $\bar{\rho}_{j^{\prime}}\in\sigma^{\prime}\setminus \sigma$. \end{defn}
\begin{prop}[\cite{CCIT}, Proposition 10]\label{toric-morphism} Let $\mathcal{X}(\boldsymbol{\Sigma})$ be the toric Deligne-Mumford stack associated to a stacky fan $\boldsymbol{\Sigma} =(\textbf{N},\Sigma,\rho)$. Suppose top-dimensional cones $\sigma, \sigma^{\prime}$ satisfy $\sigma \dagger \sigma^\prime$, and let $b\in \on{Box}(\sigma)$. The following two pieces of data are equivalent: \begin{itemize} \item a positive rational number $c$ such that $\langle c\rangle =\hat{b}_{j}$, where $\hat{b}=\on{inv}(b)$ is the involution of $b$; \item a representable toric morphism $f: \mathbb{P}_{r_{1},r_{2}}\rightarrow \mathcal{X}(\boldsymbol{\Sigma})$ such that $f(0)=\mathcal{X}(\boldsymbol{\Sigma/\sigma})$, $f(\infty)=\mathcal{X}(\boldsymbol{\Sigma/\sigma^{\prime}})$ and the restriction
$f|_{0}:B\mu_{r_{1}}\rightarrow \mathcal{X}(\boldsymbol{\Sigma/\sigma})$ gives the box element $\hat{b}\in \on{Box}(\sigma)$.
\end{itemize} \end{prop} The data $\sigma, \sigma^{\prime}, b$ and $c$ determine the map $f: \mathbb{P}_{r_{1},r_{2}}\rightarrow \mathcal{X}(\boldsymbol{\Sigma})$, and hence determine $r_{2}$ and the box element $b^{\prime}\in \on{Box}(\sigma^{\prime})$ given by the restriction
$f|_{\infty}:B\mu_{r_{2}}\rightarrow \mathcal{X}(\boldsymbol{\Sigma})$. More precisely, $b^{\prime}$ is the unique element of $\on{Box}(\sigma^{\prime})$ such that
\begin{equation}\label{b^prime} \hat{b}+\lfloor c\rfloor \rho_{j}+q^{\prime}\rho_{j^{\prime}}+b^{\prime} \equiv 0 \quad \text{mod} \bigoplus \limits_{i\in \sigma\cap \sigma^{\prime}}\mathbb{Z}\rho_{i} \end{equation}
for some $q^{\prime}\in \mathbb{Z}_{\geq 0}$. As in \cite[Definition 12]{CCIT}, define $d_{c,\sigma,j}$ to be the element of $\mathbb{L}\otimes\mathbb{Q}$ satisfying the relation
\[ c\bar{\rho}_{j}+ \left( \sum\limits_{i\in \sigma\cap\sigma^{\prime}}c_{i}\bar{\rho}_{i} \right) +c^{\prime}\bar{\rho}_{j^{\prime}}=0 \] such that \[ D_{j}\cdot d_{c,\sigma,j}=c, \quad D_{j^{\prime}}\cdot d_{c,\sigma,j}=c^{\prime}, \quad D_{i}\cdot d_{c,\sigma,j}=c_{i} \text{ for } i\in \sigma\cap \sigma^{\prime}, \] and \[ D_{i}\cdot d_{c,\sigma,j}=0 \text{ for }i\not\in \sigma\cup\sigma^{\prime}. \] Hence, $d_{c,\sigma,j}$ is the degree of the representable toric morphism $f:\mathbb{P}_{r_{1},r_{2}}\rightarrow \mathcal{X}(\boldsymbol{\Sigma})$. Let $\Lambda E^{\sigma^{\prime},b^{\prime}}_{\sigma,b}\subset \mathbb{L}\otimes \mathbb{Q}$ be the set of degrees $d_{c,\sigma,j}$ of representable toric morphisms $f:\mathbb{P}_{r_{1},r_{2}}\rightarrow \mathcal{X}(\boldsymbol{\Sigma})$ such that $f(0)=\mathcal{X}(\boldsymbol{\Sigma/\sigma})$, $f(\infty)=\mathcal{X}(\boldsymbol{\Sigma/\sigma^{\prime}})$
and $f|_{0}$ and
$f|_{\infty}$ give the box elements $\hat{b}$ and $b^{\prime}$, respectively. More precisely, \[ \Lambda E^{\sigma^{\prime},b^{\prime}}_{\sigma,b}= \left\{ d_{c,\sigma,j}\in \mathbb{L}\otimes \mathbb{Q}: c>0 \text{ such that } \langle c \rangle =\hat{b}_{j} \text{ and } b^{\prime} \text{ satisfies } (\ref{b^prime}) \right \}, \] see \cite[Definition 14]{CCIT}.
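For concreteness, here is a worked instance of the degrees $d_{c,\sigma,j}$ and of equation (\ref{b^prime}); the example (again the weighted projective line $\mathbb{P}(1,2)$, with the rays chosen below) is an illustration of ours and is not taken from \cite{CCIT}.

```latex
% Illustration (our toy example): degrees d_{c,sigma,j} for P(1,2).
\textit{Example.} Take $\textbf{N}=\mathbb{Z}$, $\rho_{1}=1$,
$\rho_{2}=-2$, so that $\mathcal{X}(\boldsymbol{\Sigma})=\mathbb{P}(1,2)$
and $\mathbb{L}=\ker(\rho)\cong\mathbb{Z}$ is generated by $\ell=(2,1)$.
Let $\sigma=\mathbb{R}_{\geq 0}$, $\sigma^{\prime}=\mathbb{R}_{\leq 0}$,
so $\sigma\dagger\sigma^{\prime}$ with $j=1$, $j^{\prime}=2$, and take
$b=\hat{b}=0$.  The condition $\langle c\rangle=\hat{b}_{j}=0$ forces
$c\in\mathbb{Z}_{>0}$.  Since $\sigma\cap\sigma^{\prime}=\{0\}$, the
defining relation reads $c\bar{\rho}_{1}+c^{\prime}\bar{\rho}_{2}=0$,
whence $c^{\prime}=c/2$; as $D_{1}\cdot\ell=2$ and $D_{2}\cdot\ell=1$,
we obtain $d_{c,\sigma,1}=\tfrac{c}{2}\,\ell$.  Finally, (\ref{b^prime})
yields $b^{\prime}=0$ if $c$ is even and $b^{\prime}=-1$ if $c$ is odd.
```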
We recall a few notions related to extended degrees for toric stacks. \begin{defn}[\cite{CCIT}, Definition 22] For a cone $\sigma\in \Sigma$, let $\Lambda^{S}_{\sigma}\subset\mathbb{L}^{S}\otimes\mathbb{Q}\subset \mathbb{Q}^{n+m}$ be the set of elements $\lambda=\sum\limits_{i=1}^{n+m}\lambda_{i}e_{i}$ such that \[ \lambda_{n+j}\in \mathbb{Z}, \quad 1\leq j\leq m; \quad \lambda_{i} \in \mathbb{Z}, \text{ if }i\not\in \sigma \text{ and }1\leq i\leq n. \] Set $\Lambda^{S}:=\cup_{\sigma\in \Sigma}\Lambda^{S}_{\sigma}$. \end{defn}
\begin{defn}[\cite{CCIT}, Definition 23] The reduction function $v^{S}$ is defined by \begin{align*} v^{S}:\Lambda^{S}& \longrightarrow \on{Box}(\boldsymbol{\Sigma})\\ \lambda & \longmapsto \sum\limits_{i=1}^{n}\lceil \lambda_{i}\rceil \rho_{i}+ \sum\limits_{j=1}^{m}\lceil \lambda_{n+j}\rceil s_{j} \end{align*} Hence, we have $\overline{v^{S}(\lambda)}= \sum^{n}_{i=1}\langle -\lambda_{i}\rangle \bar{\rho}_{i}\in\sigma$ for $\lambda\in \Lambda^{S}_{\sigma}$. We introduce the following sets: \[ \Lambda^{S}_{b}:=\{\lambda\in \Lambda^{S}:v^{S}(\lambda)=b\} \] \[ \Lambda E^{S}:= \Lambda^{S}\cap \overline{\on{NE}}^{S}(\mathcal{X}(\boldsymbol{\Sigma})) \] \[ \Lambda E^{S}_{b}:= \Lambda^{S}_{b}\cap \overline{\on{NE}}^{S}(\mathcal{X}(\boldsymbol{\Sigma})) \] \end{defn}
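As a sanity check on the reduction function, here is a minimal computation (again on the weighted projective line $\mathbb{P}(1,2)$, with $m=0$ so that $\mathbb{L}^{S}=\mathbb{L}$); this illustration is ours, not from \cite{CCIT}.

```latex
% Illustration (our toy example, m = 0): the reduction function on P(1,2).
\textit{Example.} Take $\textbf{N}=\mathbb{Z}$, $\rho_{1}=1$,
$\rho_{2}=-2$, so $\mathcal{X}(\boldsymbol{\Sigma})=\mathbb{P}(1,2)$ and
$\mathbb{L}\otimes\mathbb{Q}=\mathbb{Q}\cdot(2,1)\subset\mathbb{Q}^{2}$.
For the cone $\sigma^{\prime}$ spanned by $\bar{\rho}_{2}$, an element
$\lambda=t\,(2,1)$ lies in $\Lambda^{S}_{\sigma^{\prime}}$ if and only if
$\lambda_{1}=2t\in\mathbb{Z}$, i.e.\ $t\in\tfrac{1}{2}\mathbb{Z}$.
Taking $t=\tfrac{1}{2}$, so $\lambda=(1,\tfrac{1}{2})$, we get
\[
v^{S}(\lambda)=\lceil 1\rceil\rho_{1}+\lceil\tfrac{1}{2}\rceil\rho_{2}
=1-2=-1\in\on{Box}(\boldsymbol{\Sigma}),
\]
consistent with $\overline{v^{S}(\lambda)}
=\langle -1\rangle\bar{\rho}_{1}+\langle -\tfrac{1}{2}\rangle\bar{\rho}_{2}
=\tfrac{1}{2}\bar{\rho}_{2}\in\sigma^{\prime}$.
```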
\section{Toric stack bundles}\label{sec:toric_bdle} \subsection{Construction} Let $P\rightarrow B$ be a principal $(\mathbb{C}^{\times})^{n+m}$-bundle over a smooth projective variety $B$. We introduce the toric stack bundle $^{P}\mathcal{X}(\boldsymbol{\Sigma})$ as follows. \begin{defn}[\cite{Jiang}] The toric stack bundle $\pi: \mathcal{P}:=~ ^{P}\mathcal{X}(\boldsymbol{\Sigma})\rightarrow B$ is defined to be the quotient stack \[ ^{P}\mathcal{X}(\boldsymbol{\Sigma}):= [(P\times _{(\mathbb{C}^{\times})^{n+m}}\mathcal{U}^{S})/G^{S}] \] where $G^{S}$ acts on $P$ trivially. \end{defn} It is shown in \cite{Jiang} that $\mathcal{P}$ is a smooth Deligne-Mumford stack.
We now recall the description of the inertia stack of $\mathcal{P}$. We have an action of $(\mathbb{C}^{\times})^{n+m}$ on $\mathcal{U}^{S}(\sigma)$ induced by the natural action of $(\mathbb{C}^{\times})^{l+m}$ on $\mathcal{U}^{S}(\sigma)$ and the projection $(\mathbb{C}^{\times})^{n+m}\rightarrow (\mathbb{C}^{\times})^{l+m}$. We let \begin{align*} ^{P}\mathcal{X}(\boldsymbol{\Sigma}/\sigma)& =[(P\times_{(\mathbb{C}^{\times})^{n+m}}(\mathbb{C}^{\times})^{l+m}\times _{(\mathbb{C}^{\times})^{l+m}}\mathcal{U}^{S}(\sigma))/G^{S}(\sigma)]\\ & =[(P\times_{(\mathbb{C}^{\times})^{n+m}}\mathcal{U}^{S}(\sigma))/G^{S}(\sigma)] \end{align*} be the quotient stack. By \cite[Proposition 3.5]{Jiang}, $^{P}\mathcal{X}(\boldsymbol{\Sigma}/\sigma)$ is a closed substack of $\mathcal{P}$.
\begin{prop}[\cite{Jiang}, Proposition 3.6] Let $\pi: \mathcal{P}\rightarrow B$ be a toric stack bundle over a smooth variety $B$ with fibre the toric Deligne-Mumford stack $\mathcal{X}(\boldsymbol{\Sigma})$ associated to the extended stacky fan $\boldsymbol{\Sigma}$. Then the inertia stack of $\mathcal{P}$ is \[ \mathcal{IP}= \coprod\limits_{b\in \on{Box}(\boldsymbol{\Sigma})}\mathcal{P}_{b}:= \coprod\limits_{b\in \on{Box}(\boldsymbol{\Sigma})}{^{P}\mathcal{X}(\boldsymbol{\Sigma}/\sigma(\bar{b}))}. \] The age of $\mathcal{P}_{b}$ is the same as the age of $\mathcal{X}(\boldsymbol{\Sigma})_{b}$. \end{prop} Now take $P$ to be the principal $(\mathbb{C}^{\times})^{n+m}$-bundle associated to $\oplus^{n+m}_{j=1}L^{*}_{j}$ over $B$, where $L_{j}$ is the corresponding $j$-th line bundle. Let \begin{equation} U_{j} = \left\{
\begin{array}{lr}
u_{j}-c_{1}(L_{j}), & 1\leq j \leq n;\\
0, & n+1\leq j \leq n+m.
\end{array}
\right. \end{equation} By abuse of notation, we also write $U_j$ for the corresponding $\mathbb T$-equivariant line bundle over $\mathcal P$.
\subsection{Main result} We choose an integral basis $\{p_1,\ldots,p_{n-r}\}$ of $\mathbb L^{\vee}$. The toric stack bundle $\mathcal{P}$ is endowed with $n-r$ tautological line bundles whose first Chern classes we denote by $-P_{1},\ldots,-P_{n-r}$. They restrict to the corresponding first Chern classes $-p_{1},\ldots,-p_{n-r}$ on the fiber. Recall that the $\mathbb T$-equivariant Novikov ring of the toric stack $\mathcal X (\boldsymbol{\Sigma})$ is defined as \[ \Lambda_{nov}^{\mathbb T}:=S_{\mathbb T}[[\overline{\on{NE}}(\mathcal{X}(\boldsymbol{\Sigma}))\cap H_{2}(\mathcal{X}(\boldsymbol{\Sigma});\mathbb{Z})]], \] where $S_{\mathbb T}$ is the fraction field of $R_{\mathbb T}:=H^*_{\mathbb T}(pt,\mathbb C)$ and $\overline{\on{NE}}(\mathcal{X}(\boldsymbol{\Sigma}))$ is the Mori cone of $\mathcal{X}(\boldsymbol{\Sigma})$.
For $\mathcal{D}\in H_{2}(\mathcal{P})$, let $\mathfrak D:=\pi_{*}(\mathcal{D})\in H_{2}(B)$ be its projection to the base, and let \[ \lambda=(d,k)\in\mathbb{L}^{S}\otimes\mathbb{Q}, \] written under the canonical splitting $\mathbb{L}^{S}\otimes \mathbb{Q}\cong (\mathbb{L}\otimes \mathbb{Q})\oplus\mathbb{Q}^m $, be the fiber class, so that $\langle P_{i},\mathcal{D}\rangle= \langle p_{i},d\rangle$. The class $\mathcal D$ is then represented by $Q^{\mathfrak D} q^d$ in the Novikov ring of $\mathcal P$.
Let $J_{B}(z,\tau)= \sum\limits_{\mathfrak D\in \overline{\on{NE}}(B)}J_{\mathfrak D}(z,\tau)Q^{\mathfrak D}$ be the decomposition of the $J$ function of $B$ according to the degree of curves.
\begin{defn}\label{defn:I_func} We introduce the hypergeometric modification, the $S$-extended $\mathbb{T}$-equivariant $I$-function of the toric stack bundle $\mathcal{P}$: \begin{align*} & I^{S}_{\mathcal{P}}(z,t,\tau,q,x,Q):=\\ & e^{\sum^{n}_{i=1}U_{i}t_{i}/z} \sum\limits_{\mathfrak D\in \overline{\on{NE}}(B)} \sum\limits_{b\in \on{Box}(\boldsymbol{\Sigma})} \sum\limits_{\lambda\in \Lambda E^{S}_{b}} J_{\mathfrak D}(z,\tau)Q^{\mathfrak D}\tilde{q}^{\lambda} e^{\lambda t} \left( \prod\limits^{n+m}_{i=1} \frac {\prod_{\langle a\rangle =\langle \lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D\rangle,a\leq 0}(U_{i}+az)} {\prod_{\langle a\rangle =\langle \lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D\rangle,a\leq \lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D}(U_{i}+az)} \right) \textbf{1}_{b} \end{align*} where: \begin{enumerate} \item For each $\lambda\in \Lambda E^{S}_{b}$, we write $\lambda_{i}$ for the $i$-th component of $\lambda$ as an element of $\mathbb{Q}^{n+m}$. We have $\langle \lambda_{i}\rangle=\hat{b}_{i}$ for $1\leq i \leq n$ and $\langle \lambda_{i}\rangle =0$ for $n+1\leq i\leq n+m$.\\ \item $U_{i}:=0$ for $n+1\leq i\leq n+m$.\\ \item $\textbf{1}_{b}$ is the identity class supported on the twisted sector $\mathcal{X}(\boldsymbol{\Sigma})_{b}$ associated to $b\in \on{Box}(\boldsymbol{\Sigma})$.\\ \item $t=(t_{1},\ldots,t_{n})$ are variables, and $e^{\lambda t}:= \prod^{n}_{i=1}e^{\langle D_{i},d\rangle t_{i}}$. \item For $\lambda=(d,k) \in \Lambda E^{S}\subset \mathbb{L}^{S}\otimes \mathbb{Q}$, we have $k\in (\mathbb{Z}_{\geq 0})^m$ and $d\in \overline{\on{NE}}(\mathcal{X}(\boldsymbol{\Sigma}))\cap H_{2}(\mathcal{X}(\boldsymbol{\Sigma});\mathbb{Z})$; we write $\tilde{q}^{\lambda}= q^{d}x^{k}= q^{d}x_{1}^{k_{1}}\cdots x_{m}^{k_{m}} \in \Lambda^{\mathbb{T}}_{nov}[[x]]$, with variables $x=(x_{1},\ldots, x_{m})$. \end{enumerate} \end{defn}
\begin{defn}[\cite{CCIT}] A $\Lambda^{\mathbb{T}}_{nov}[[x]]$-valued point of $\mathcal L_{\mathcal P}$ is an element of $\mathcal H[[x]]$ of the form \[ -1z+\textbf{t}(z)+ \sum\limits^{\infty}_{n=0} \sum\limits_{\substack{d\in \overline{\on{NE}}(\mathcal{X}(\boldsymbol{\Sigma}))\\ \mathfrak D\in \overline{\on{NE}}(B)}} \sum\limits_{\alpha} \frac{Q^{\mathfrak D}q^{d}}{n!} \langle \textbf{t}(\bar{\psi}),\ldots,\textbf{t}(\bar{\psi}), \frac{\phi_{\alpha}}{-z-\bar{\psi}} \rangle ^{\mathbb{T}}_{0,n+1,\mathcal{D}}\phi^{\alpha} \] for some $\textbf{t}(z)\in \mathcal{H}_{+}[[x]]$
with $\textbf{t}|_{Q=q=x=0}=0$.
\end{defn}
The following is the main result of this paper. \begin{theorem}\label{main-theorem} The hypergeometric modification $I^{S}_{\mathcal{P}}(z,t,\tau,q,x,Q)$ is a $\Lambda^{\mathbb{T}}_{nov}[[x,t]]$-valued point of the Lagrangian cone $\mathcal{L}_{\mathcal{P}}$ for the $\mathbb{T}$-equivariant Gromov-Witten theory of $\mathcal{P}$. \end{theorem} The rest of this paper is devoted to a proof of Theorem \ref{main-theorem}.
\section{Localization methods in toric Gromov-Witten theory}\label{sec:characterization} In this section we describe a characterization of the Lagrangian cone of a toric stack bundle $\mathcal{P}$ via localization.
\subsection{Lagrangian cones for toric stack bundles} Let $\mathcal{X}(\boldsymbol{\Sigma})$ be the toric Deligne-Mumford stack associated to an extended stacky fan $\boldsymbol{\Sigma}$. The maximal torus $\mathbb{T}$ acts on the toric Deligne-Mumford stack $\mathcal{X}(\boldsymbol{\Sigma})$, hence acts on the toric stack bundle $\mathcal{P}= ~^{P}\mathcal{X}(\boldsymbol{\Sigma})$. The fixed points under the torus action correspond to the top-dimensional cones in the fan $\Sigma$. A top-dimensional cone $\sigma$ gives a fixed point section\footnote{We abuse notation here: $\mathcal{P}_{\sigma}$ is a gerbe over $B$ which may not admit a section.}
$\mathcal{P}_{\sigma}:=~^{P}\mathcal{X}(\boldsymbol{\Sigma}/\sigma)$ for the toric stack bundle $\mathcal P$. Note that $\mathcal P_\sigma$ is an abelian gerbe over the base $B$: it is a fiber product of root gerbes associated to the line bundles defining $\mathcal{P}$. We write $N_{\sigma}\mathcal{P}$ for the normal bundle at the $\mathbb{T}$-fixed section $\mathcal{P}_{\sigma}$.
For the rest of this paper, we write $\mathcal{H}$ for Givental's symplectic vector space associated to the toric stack bundle $\mathcal{P}$. Let $\sigma$ be a top-dimensional cone; we denote Givental's symplectic vector space associated to the $\mathbb{T}$-fixed section $\mathcal{P}_{\sigma}$ by $\mathcal{H}_{\sigma}$. Let $\mathcal{H}^{tw}_{\sigma}$ and $\mathcal{L}^{tw}_{\sigma}$ be the symplectic vector space and Lagrangian cone associated to the twisted Gromov-Witten theory of $\mathcal{P}_{\sigma}$, where the twist is given by the vector bundle $N_{\sigma}\mathcal{P}$ and the $\mathbb{T}$-equivariant inverse Euler class $e^{-1}_{\mathbb{T}}$. See \cite{Tseng} for more details on twisted theory.
Let \[ \Sigma_{top}:=\{\sigma\in \Sigma: \sigma \text{ is a top-dimensional cone in }\Sigma \} \subset \Sigma \] be the set of top-dimensional cones in $\Sigma$. By the Atiyah-Bott localization theorem, we have an isomorphism of Chen-Ruan orbifold cohomology rings \begin{equation} H^{*}_{\on{CR},\mathbb{T}}(\mathcal{P})\otimes_{R_{\mathbb{T}}}S_{\mathbb{T}}\cong \bigoplus\limits_{\sigma \in \Sigma_{top}}H^{*}_{\on{CR}}(\mathcal{P}_{\sigma})\otimes_{\mathbb{C}}S_{\mathbb{T}}. \end{equation} In particular, the identity class $1\in H^{*}_{\on{CR},\mathbb{T}}(\mathcal{P})$ corresponds to $\bigoplus\limits_{\sigma\in \Sigma_{top}}1_{\sigma} $,
where $1_{\sigma}$ is the identity element in $H^{*}_{\on{CR},\mathbb{T}}(\mathcal{P}_{\sigma})$. Furthermore, we have an isomorphism of vector spaces: \begin{equation}\label{localization-isom} \mathcal{H}\cong \bigoplus_{\sigma\in\Sigma_{top}} \mathcal{H}_{\sigma}. \end{equation} For each $f\in\mathcal{H}$ and $\sigma\in \Sigma_{top}$,
let $f_{\sigma}:=f|_{\mathcal{H}_\sigma}\in \mathcal{H}_{\sigma}$ be the restriction of $f$ to the component $\mathcal{H}_{\sigma}$ of $\mathcal{H}$. Hence $f_{\sigma}$ can also be viewed as the restriction of $f$ to the inertia stack $\mathcal{IP}_{\sigma}$.
Let $f_{(\sigma,b)}:=f_\sigma|_{(\mathcal{P}_{\sigma})_{b}}$ be the restriction of $f_{\sigma}$ to the twisted sector $(\mathcal{P}_{\sigma})_{b}$ of $\mathcal{IP}_{\sigma}$ corresponding to the box element $b\in \on{Box}(\sigma)$.
\subsection{Toric virtual localization} We spell out explicitly the virtual localization applied to $\mathcal{P}$. Our presentation closely follows the toric case in \cite{Liu}.
The $\mathbb T$-action on $\mathcal P$ induces a $\mathbb T$-action on the moduli space $\overline{M}_{0,n+1}(\mathcal{P},\mathcal{D})$. The $\mathbb{T}$-fixed strata in the moduli space $\overline{M}_{0,n+1}(\mathcal{P},\mathcal{D})$ are indexed by decorated trees $\Gamma$, where $\Gamma$ consists of the following data. \begin{enumerate}
\item Each vertex $v\in \Gamma$ is assigned a top-dimensional cone $\sigma\in \Sigma_{top}$, and we denote the vertex by $v(\sigma)$.
\item Each edge $e\in \Gamma$ is assigned a codimension-$1$ cone $\tau_{e}\in\Sigma$.
\item Let $V(\Gamma)$ denote the set of vertices of $\Gamma$ and $E(\Gamma)$ the set of edges of $\Gamma$. Let \[
F(\Gamma)=\{(e,v)\in E(\Gamma)\times V(\Gamma)| e \text{ is incident to }v\} \] be the set of flags in $\Gamma$.
\item Each edge $e$ is assigned a positive integer $d_{e}$ by the degree map $d:E(\Gamma)\rightarrow \mathbb Z_{>0}$.
\item Each flag $(e,v)$ of $\Gamma$ is labelled with an element $k_{(e,v)}\in G_v$, where $G_v$ is the isotropy group of the $\mathbb{T}$-fixed section $\mathcal{P}_{\sigma}$.
\item There is a marking map $s: \{1,2,\ldots,n+1\}\rightarrow V(\Gamma)$ that assigns each marking to a vertex of $\Gamma$.
\item An element $k_{j}\in G_{s(j)}$ is associated with the marking $j\in \{1,2,\ldots,n+1\}$.
\item The data satisfy compatibility conditions as in \cite[Definition 9.6]{Liu}. \end{enumerate}
Note that the degree $\mathcal D$ is encoded in the tree $\Gamma$ by $\mathfrak D=\pi_* \mathcal D$ for each vertex $v$ and $d_e$ for each edge $e$.
We write $DT_{0,n+1}(\mathcal{P},\mathcal{D})$ for the set of all decorated trees carrying the above data.
For a vertex $v$ in a decorated graph $\Gamma \in DT_{0,n+1}(\mathcal{P},\mathcal{D})$, we define: \begin{itemize} \item $S(v):=\{j\in \{1,2,\ldots,n+1\}:s(j)=v\}$, the set of markings associated to the vertex $v$.
\item $E(v):=\{e\in E(\Gamma):(e,v)\in F(\Gamma)\}$, the set of edges incident to the vertex $v$.
\item $\on{val}(v):=|E(v)|+|S(v)|$, the valence of the vertex $v$. \end{itemize} We write $\mathcal{M}_{\Gamma}$ for the fixed locus of $\overline{M}_{0,n+1}(\mathcal{P},\mathcal{D})$ given by $\Gamma$. The contribution of the Gromov-Witten invariant $\langle \gamma_{1}\bar{\psi}_{1}^{a_{1}},\ldots,\gamma_{n+1} \bar{\psi}_{n+1}^{a_{n+1}} \rangle_{0,n+1,\mathcal{D}} $ from $\mathcal{M}_{\Gamma}$ is: \begin{align}\label{local-contr} & c_{\Gamma} \prod\limits_{e\in E(\Gamma)}h(e) \prod\limits_{(e,v)\in F(\Gamma)}h(e,v) \prod\limits_{v\in V(\Gamma)} \left( \prod\limits_{j:s(j)=v}\iota^{*}_{\sigma}\gamma_{j} \right)\\ \notag & \times \prod\limits_{v\in V(\Gamma)} \int_{ [ \overline{\mathcal{M}}^{\vec{b}(v)}_{0,val(v)} (\mathcal{P}_{\sigma},\mathfrak D) ]^{w} } \frac{h(v)\prod_{j\in S(v)}\bar{\psi}^{a_{j}}_{j}} {\prod_{e\in E(v)}(e_{\mathbb{T}}(T_{\eta(e,v)}\mathcal{C}_{e})-\bar{\psi}_{(e,v)}/r_{(e,v)})} \end{align} where: \begin{itemize} \item $c_{\Gamma}=
\frac{1}{|Aut(\Gamma)|} \prod\limits_{e\in E(\Gamma)}
\frac{1}{d_{e}|G_{e}|} \prod\limits_{(e,v)\in F(\Gamma)}
\frac{|G_{v}|}{r_{(e,v)}}$.
\item $G_{e}$ is the generic stabilizer of the toric substack bundle $\mathcal{P}_{\tau_{e}}$.
\item $r_{(e,v)}:=|\langle k_{(e,v)}\rangle|$ is the order of $k_{(e,v)}\in G_{v}$.
\item $h(e)= \frac{e_{\mathbb{T}}(H^{1}(\mathcal{C}_{e},f^{*}_{e}T\mathcal{P})^{mov})} {e_{\mathbb{T}}(H^{0}(\mathcal{C}_{e},f^{*}_{e}T\mathcal{P})^{mov})}$
\item $h(e,v)= e_{\mathbb{T}} ((T_{\sigma}\mathcal{P})^{k_{(e,v)}})$
\item $h(v)=e^{-1}_{\mathbb{T}} ((N_{\sigma}\mathcal{P})_{0,val(v),\mathfrak D})$
\item $f_{e}:\mathcal{C}_{e}\rightarrow \mathcal P$ is a map to the toric substack bundle $\mathcal{P}_{\tau_{e}}=~^{P}\mathcal{X}(\boldsymbol{\Sigma}/\tau_e)$.
\item $H^{i}(\mathcal{C}_{e},f^{*}_{e}T\mathcal{P})^{mov}$ denotes the moving part of $H^{i}(\mathcal{C}_{e},f^{*}_{e}T\mathcal{P})$ with respect to the $\mathbb{T}$-action. \item $\iota_{\sigma}:\mathcal{P}_{\sigma}\hookrightarrow \mathcal{P}$ is the inclusion of the fixed section $\mathcal{P}_{\sigma}$.
\item $\eta(e,v)=\mathcal{C}_e\cap \mathcal{C}_v$ is a node of $\mathcal C$ on $\mathcal{C}_{e}$, where $(e,v)\in F(\Gamma)$.
\item $\vec{b}(v)\in ({G_v}) ^{val(v)}$ is given by the decorations $k_{j},j\in S(v)$, and $k_{(e,v)},e\in E(v)$. \item $(N_{\sigma}\mathcal{P})_{0,val(v),\mathfrak D}$ is the twisting bundle associated to the vector bundle $N_{\sigma}\mathcal{P}$ over the $\mathbb{T}$-fixed section $\mathcal{P}_{\sigma}$, as in \cite[Definition 2.5.10]{Tseng}.
\item $\overline{\mathcal{M}}^{\vec{b}(v)}_{0,val(v)} (\mathcal{P}_{\sigma},\mathfrak D)$ is taken to be a point if $val(v)\leq 2$ and $\mathfrak D=0$. The twisting bundles $(N_{\sigma}\mathcal{P})_{0,val(v),0}$ in these unstable cases are defined to be $(T_{\sigma}\mathcal{P})^{k_{(e,v)}}$, as in the end of \cite[Section 9.3.4]{Liu}. \end{itemize}
\subsection{Characterization theorem}
For $\sigma\in \Sigma_{top}$, let $U_{k}(\sigma)$ be the character of $\mathbb{T}$ given by the restriction of the line bundle $U_{k}$ to the $\mathbb{T}$-fixed locus $\mathcal{P}_{\sigma}$.
We will prove the following characterization result:
\begin{thm}\label{characterization} Let $\mathcal{P}=~^{P}\mathcal{X}(\boldsymbol{\Sigma})$ be a smooth toric stack bundle associated to an extended stacky fan $\boldsymbol{\Sigma} =(\textbf{N},\Sigma,\rho)$ and a $(\mathbb{C}^{\times})^{n+m}$-bundle $P\rightarrow B$. Let $x=(x_{1},\ldots,x_{m})$ be formal variables. Suppose $f\in\mathcal{H}[[x]]$
satisfies $f|_{Q=q=x=0}=-1z$. Then $f$ is a $\Lambda^{\mathbb{T}}_{nov}[[x]]$-valued point of the Lagrangian cone $\mathcal{L}_{\mathcal{P}}$ if and only if it meets the following three conditions: \begin{description} \item[(C1)] For each $\sigma \in \Sigma_{top}$ and $b\in \on{Box}(\sigma)$, the restriction $f_{(\sigma,b)}$ is a power series in $Q, q$ and $x$ with coefficients being elements of $S_{\mathbb{T}}(z)$. As a function in $z$, $f_{(\sigma,b)}$ has an essential singularity at $z=0$, a pole of finite order at $z=\infty$, and simple poles at $z=\frac{U_{j}(\sigma)}{c}$ whenever there exist $\sigma^{\prime}\in \Sigma$ and $c>0 $ such that $\sigma \dagger \sigma^\prime$,
$j\in \sigma\setminus \sigma^{\prime}$ and
$\langle c\rangle= \hat{b}_{j}$; it is regular elsewhere.
\item[(C2)] The residues of $f_{(\sigma,b)}$ at the simple pole $z=\frac{U_{j}(\sigma)}{c}$ satisfy the following recursion relations: \[ Res_{z=\frac{U_{j}(\sigma)}{c}} f_{(\sigma,b)}(z)dz= -q^{d_{c,\sigma,j}} Rec(c)^{(\sigma^{\prime},b^{\prime})}_{(\sigma,b)}
f_{(\sigma^{\prime},b^{\prime})}(z)|_{z=\frac{U_{j}(\sigma)}{c}}, \] where the {\em recursion coefficient} $Rec(c)^{(\sigma^{\prime},b^{\prime})}_{(\sigma,b)}$ associated to $(\sigma,\sigma^{\prime},b,c)$ is an element of $S_{\mathbb{T}}$ given by: \begin{equation*}
Rec(c)^{(\sigma^{\prime},b^{\prime})}_{(\sigma,b)}:=\frac{1}{c} \left( \prod\limits_{i\in \sigma:b_{i}=0}U_{i}(\sigma) \right) \frac{\left(\frac{c}{U_{j}(\sigma)} \right)^{\lfloor c\rfloor}} {\lfloor c\rfloor !} \frac{\left(\frac{c}{U_{j}(\sigma)} \right)^{\lfloor c^{\prime}\rfloor}} {\lfloor c^{\prime}\rfloor !} \prod\limits_{i\in \sigma\cap\sigma^{\prime}} \frac{\prod_{\langle a\rangle =\hat{b}_{i},a<0}(U_{i}(\sigma)+U_{j}(\sigma)\frac{a}{-c})} {\prod_{\langle a\rangle =\hat{b}_{i},a<c_{i}}(U_{i}(\sigma)+U_{j}(\sigma)\frac{a}{-c})}, \end{equation*} \item [(C3)] The Laurent expansion of the restriction $f_{\sigma}$ at $z=0$ is a $\Lambda^{\mathbb{T}}_{nov}[[x]]$-valued point of the twisted Lagrangian cone $\mathcal{L}^{tw}_{\sigma}$.
\end{description} \end{thm} \begin{proof} We will follow the approach in \cite{CCIT}. Let $\{\phi_{\alpha}\}$ be a basis for $H^{*}_{\on{CR},\mathbb{T}}(\mathcal{P}) \otimes_{R_{\mathbb{T}}}S_{\mathbb{T}}$ and $\{\phi^{\alpha}\}$ be its dual basis with respect to the orbifold Poincar\'e pairing. Suppose $f$ is a $\Lambda^{\mathbb{T}}_{nov}[[x]]$-valued point on the Lagrangian cone $\mathcal{L}_{\mathcal{P}}$. Then $f$ can be written as \begin{equation}\label{point-in-LP} f= -1z+\textbf{t}(z)+ \sum\limits^{\infty}_{n=0} \sum\limits_{\substack{d\in \overline{\on{NE}}(\mathcal{X}(\boldsymbol{\Sigma}))\\ \mathfrak D\in \overline{\on{NE}}(B)}} \sum\limits_{\alpha} \frac{Q^{\mathfrak D}q^{d}}{n!} \langle \textbf{t}(\bar{\psi}),\ldots,\textbf{t}(\bar{\psi}), \frac{\phi_{\alpha}}{-z-\bar{\psi}} \rangle ^{\mathbb{T}}_{0,n+1,\mathcal{D}}\phi^{\alpha} \end{equation} for some $\textbf{t}(z)\in \mathcal{H}_{+}[[x]]$
with $\textbf{t}|_{Q=q=x=0}=0$. Under the isomorphism (\ref{localization-isom}), we have that $f$ is determined by its restrictions $f_\sigma$ to $\mathcal H _\sigma$: \[ f_{\sigma}= -1_{\sigma}z+\textbf{t}_{\sigma}(z)+ \iota^{*}_{\sigma} \left( \sum\limits^{\infty}_{n=0} \sum\limits_{\substack{d\in \overline{\on{NE}}(\mathcal{X}(\boldsymbol{\Sigma}))\\ \mathfrak D\in \overline{\on{NE}}(B)}} \sum\limits_{\alpha} \frac{Q^{\mathfrak D}q^{d}}{n!} \langle \textbf{t}(\bar{\psi}),\ldots,\textbf{t}(\bar{\psi}), \frac{\phi_{\alpha}}{-z-\bar{\psi}} \rangle ^{\mathbb{T}}_{0,n+1,\mathcal{D}}\phi^{\alpha} \right) \] where $\iota_{\sigma}:\mathcal{P}_{\sigma}\rightarrow \mathcal{P}$ is the inclusion of the $\mathbb{T}$-fixed section. Furthermore, let $\phi^{\alpha}_{\sigma,b}$ be the restriction of $\phi^{\alpha}$ to $\mathcal{IP}_{\sigma,b}$, we obtain the following sum over graphs via virtual localization in $\mathbb{T}$-equivariant cohomology: \begin{align}\label{local-sum} f_{(\sigma,b)}=& -\delta_{b,0}z+\textbf{t}_{(\sigma,b)}(z)+ \sum\limits^{\infty}_{n=0} \sum\limits_{\substack{d\in \overline{\on{NE}}(\mathcal{X}(\boldsymbol{\Sigma}))\\ \mathfrak D\in \overline{\on{NE}}(B)}} \sum\limits_{\alpha}\frac{Q^{\mathfrak D}q^{d}}{n!} \langle \frac{\phi_{\alpha}}{-z-\bar{\psi}},\textbf{t}(\bar{\psi}),\ldots,\textbf{t}(\bar{\psi})\rangle ^{\mathbb{T}}_{0,n+1,\mathcal{D}}\phi^{\alpha}_{\sigma,b}\\ \notag =& -\delta_{b,0}z+\textbf{t}_{(\sigma,b)}(z)+ \sum\limits^{\infty}_{n=0} \sum\limits_{\substack{d\in \overline{\on{NE}}(\mathcal{X}(\boldsymbol{\Sigma}))\\ \mathfrak D\in \overline{\on{NE}}(B)}}\frac{Q^{\mathfrak D}q^{d}}{n!} \sum\limits_{\Gamma\in DT_{0,n+1}(\mathcal{P},\mathcal{D})}\mbox{C}(\Gamma)_{\sigma,b} \end{align} where $\mbox{C}(\Gamma)_{\sigma,b}$ is the contribution from the $\mathbb{T}$-fixed stratum $\mathcal{M}_{\Gamma}\subset \overline{M}_{0,n+1}(\mathcal{P},\mathcal{D})$ corresponding to the decorated tree $\Gamma$. 
In other words, \[ \sum\limits_{\alpha} \langle \frac{\phi_{\alpha}}{-z-\bar{\psi}}, \textbf{t}(\bar{\psi}),\ldots,\textbf{t}(\bar{\psi}) \rangle ^{\mathbb{T}}_{0,n+1,\mathcal{D}}\phi^{\alpha}_{\sigma,b} =\sum\limits_{\Gamma\in DT_{0,n+1}(\mathcal{P},\mathcal{D})}\mbox{C}(\Gamma)_{\sigma,b}. \] In each decorated tree $\Gamma$, there is a distinguished vertex $v$ that carries the first marked point. We may assume that $v(\sigma)=v$ and that the element $k_{1}$ associated with the first marking is $\hat{b}$; otherwise the contribution of $\Gamma$ is zero. There are two possibilities:
\begin{description} \item [(A)]The irreducible component carrying the first marked point is a ramified cover of a $1$-dimensional orbit which lies in a fiber $\mathcal{X}$ of the toric stack bundle $\mathcal{P}\rightarrow B$. In this case $val(v)=2$; \item [(B)]The irreducible component carrying the first marked point maps to a fixed section $\mathcal{P}_{\sigma}$. \end{description}
Consider a graph $\Gamma$ of type {\bf (A)}. Let $e\in E(\Gamma)$ be the only edge incident to $v$. We denote the subgraph $\Gamma\setminus \{v,e\}$ by $\Gamma^\prime$; then $\Gamma^\prime$ is connected with $v$ through the edge $e$. Let $v^{\prime}\in V(\Gamma^\prime)$ be the other vertex incident to $e$, with $v(\sigma^\prime)=v^\prime$. We assume the first marking of the graph $\Gamma^{\prime}$ is associated with the vertex $v^{\prime}$. For the fixed locus $\mathcal{M}_{\Gamma}$, the component $\mathcal C_e$ is a toric orbifold with coarse space $\mathbb{P}^{1}$ and $\mathcal{C}_{e}\cong \mathbb{P}_{r_{(e,v)},r_{(e,v^{\prime})}}$. The map $f_{e}:\mathcal{C}_{e}\rightarrow \mathcal{P}$ satisfies $f_{e}(0)\in \mathcal{P}_{\sigma}$ and $f_{e}(\infty)\in \mathcal{P}_{\sigma^\prime}$. Hence $f_{e}(\mathcal{C}_{e})$ lies in a fiber of $\mathcal{P}$, and therefore $\mathfrak D=0$, where $\mathcal{D}$ is the degree of $f$ and $\mathfrak D=\pi_*(\mathcal D)\in H_2(B)$. The contribution $\text{C}(\Gamma)_{\sigma,b}$ is nontrivial only if \[ \phi_{\alpha}^{\sigma,\hat{b}}=
|N(\sigma)|e_{\mathbb{T}} (N_{\sigma,b})1_{\sigma,\hat{b}} \text{ and }\phi^{\alpha}_{\sigma,b}=[\mathcal{IP}_{\sigma,b}], \] where $[\mathcal{IP}_{\sigma,b}]$ is the fundamental class of $\mathcal{IP}_{\sigma,b}$, $N_{\sigma,b}$ is the normal bundle to $\mathcal{IP}_{\sigma,b}$ in $\mathcal{IP}_{b}$ and $1_{\sigma,\hat{b}}$ is the fundamental class of $\mathcal{IP}_{\sigma,\hat{b}}$ with $\hat{b}=\on{inv}(b)$. The box element $\hat{b}\in Box(\sigma)$ is given by the restriction
$f_{e}|_{0}:B\mu_{r_{(e,v)}} \rightarrow \mathcal{P}_{\sigma}$. The morphism $f_e$ determines a rational number $c\in \mathbb Q$ and a box element $b^{\prime}\in Box(\sigma^{\prime})$. Since $\bar{\psi}_{1}=-r_{(e,v)}e_{\mathbb{T}}(T_{\eta(e,v)}\mathcal{C}_{e})$, using (\ref{local-contr}), we obtain: \begin{align*} \mbox{C}(\Gamma)_{\sigma,b}=& \frac{c_{\Gamma}} {c_{\Gamma^{\prime}}} h(e) h(e,v) h(e,v^\prime) \\ & \times \int_{[\overline{\mathcal{M}}^{(\hat{b},b)}_{0,2}(\mathcal{P}_{\sigma},0)]^{w}}
\frac{|N(\sigma)||e_{\mathbb{T}}(N_{\sigma,b})|} {-z+r_{(e,v)}e_{\mathbb{T}}(T_{\eta(e,v)}\mathcal{C}_{e})} \frac{1} {(e_{\mathbb{T}}(T_{\eta(e,v)}\mathcal{C}_{e})-\bar{\psi}_{2}/r_{(e,v)})} \cup h(v)\\ & \times \frac{r_{(e,v^{\prime})}}
{|N(\sigma^{\prime})||e_{\mathbb{T}}(N_{\sigma^{\prime},b^{\prime}})|} \mbox{C}(\Gamma^{\prime})_{\sigma^{\prime},b^{\prime}}
|_{z=-r_{(e,v^{\prime})}e_{\mathbb{T}}(T_{\eta(e,v^{\prime})}\mathcal{C}_{e})} \end{align*}
Using \cite[(9.14)]{Liu} and the definition of $c_\Gamma$, $h(e), h(e,v),h(v)$, we write this as: \begin{align*} \mbox{C}(\Gamma)_{\sigma,b}=&
\frac{|G_{v}|}
{d_{e}|G_{e}|} h(e) \frac{e_{\mathbb{T}}(N_{\sigma,b})} {(-z+U_{j}(\sigma)/c)} \mbox{C}(\Gamma^{\prime})_{\sigma^{\prime},b^{\prime}}
|_{z=-r_{(e,v^{\prime})}e_{\mathbb{T}}(T_{\eta(e,v^{\prime})}\mathcal{C}_{e})}\\ =& \frac{Rec(c)^{(\sigma^{\prime},b^{\prime})}_{(\sigma,b)}} {(-z+U_{j}(\sigma)/c)} \mbox{C}(\Gamma^{\prime})_{\sigma^{\prime},b^{\prime}}
|_{z=U_{j}(\sigma)/c}. \end{align*} Hence, the contribution to $f_{(\sigma,b)}$ from all graphs $\Gamma$ of type {\bf (A)} is: \begin{equation} \sum\limits_{\sigma^{\prime}:\sigma \dagger \sigma^\prime} \sum\limits_{\substack{c\in\mathbb{Q}:c>0,\\\langle c\rangle=\hat{b}_{j}}} q^{d_{c,\sigma,j}} \frac{Rec(c)^{(\sigma^{\prime},b^{\prime})}_{(\sigma,b)}} {(-z+U_{j}(\sigma)/c)} [f_{(\sigma^{\prime},b^{\prime})}]_{z=U_{j}(\sigma)/c}. \end{equation} This proves $\textbf{(C2)}$, as well as $\textbf{(C1)}$.
To prove {\bf (C3)}, we define: $t_{\sigma}(z):= \sum\limits_{b\in \on{Box}(\sigma)} t_{(\sigma,b)}(z)1_{b}$, where \[ t_{(\sigma,b)}(z):=\textbf{t}_{(\sigma,b)}(z)+ \sum\limits_{\sigma^{\prime}:\sigma \dagger \sigma^\prime} \sum\limits_{\substack{c\in\mathbb{Q}:c>0,\\\langle c\rangle=\hat{b}_{j}}} q^{d_{c,\sigma,j}} \frac{Rec(c)^{(\sigma^{\prime},b^{\prime})}_{(\sigma,b)}} {(-z+U_{j}(\sigma)/c)} [f_{(\sigma^{\prime},b^{\prime})}]_{z=U_{j}(\sigma)/c}, \] and $t_{(\sigma,b)}(z)$ is expanded in terms of positive powers of $z$.
Then, $f_\sigma$ can be written as: \begin{align}\label{f_sigma} \sum\limits_{b\in \on{Box}(\sigma)} f_{(\sigma,b)}1_{b} =& -1_{\sigma}z+t_{\sigma}(z)+ \sum\limits^{\infty}_{n=0} \sum\limits_{\substack{d\in \overline{\on{NE}}(\mathcal{X}(\boldsymbol{\Sigma}))\\ \mathfrak D\in \overline{\on{NE}}(B)}} \sum\limits_{b\in \on{Box}(\sigma)} \sum\limits_{\substack{\Gamma\in DT_{0,n+1}(\mathcal{P},\mathcal{D})\\ \Gamma \text{ is of type B}}} \frac{Q^{\mathfrak D}q^{d}}{n!}\mbox{C}(\Gamma)_{\sigma,b}. \end{align} We now consider the contribution given by decorated trees $\Gamma$ of type {\bf (B)} such that $val(v)=l$, where $v$ is the distinguished vertex. The element $k_{1}$ associated to the first marking is $\hat{b}\in \on{Box}(\sigma)$. By integrating over all the factors $\overline{\mathcal{M}}^{\vec{b^{\prime}}}_{0,val(v^{\prime})} (\mathcal{P}_{\sigma^{\prime}})$ except those associated with the distinguished vertex $v$, we can write these contributions as: \[ \sum\limits_{\alpha} \frac{1} {\mbox{Aut}(\Gamma_{2},\ldots,\Gamma_{l})} \left( \int_{[\overline{\mathcal{M}}_{0,l}^{\hat{b},b^{2},\ldots,b^{l}}(\mathcal{P}_{\sigma},\mathfrak D)]^{w}} \frac{\phi_{\alpha}^{\sigma,\hat{b}}} {-z-\bar{\psi}} \cup p_{2}(\textbf{t},\bar{\psi}_{2}) \cup\ldots\cup p_{l}(\textbf{t},\bar{\psi}_{l}) \cup e^{-1}_{\mathbb{T}}((N_{\sigma}\mathcal{P})_{0,l,\mathfrak D}) \right ) \phi^{\alpha}_{\sigma,b} \] for some box elements $b^{2},\ldots,b^{l}\in \on{Box}(\sigma)$ and some polynomials $p_{i}(\textbf{t},\bar{\psi}_{i})$ in $t_{0},t_{1},\ldots,Q,q$ and $\bar{\psi}_{i}$. The graph $\Gamma$ is obtained by joining type {\bf (A)} subgraphs $\Gamma_{2},\ldots,\Gamma_{l}$ at the vertex $v$. More precisely, $\Gamma_{i}$, for $2\leq i \leq l$, is of type {\bf (A)} and satisfies one of the following: \begin{itemize} \item $\Gamma_{i}$ consists of the distinguished vertex $v$ and two markings, the first of which coincides with the first marking of $\Gamma$; in this case $val(v)=2$.
\item $\Gamma_{i}$ contains the distinguished vertex $v$ with exactly one marking, which coincides with the first marking of $\Gamma$, and exactly one edge $e_{i}$ connecting $v$ with the rest of the graph; in this case $val(v)=2$. \end{itemize} If $\Gamma_{i}$ consists of one vertex with two markings, then $p_{i}(\textbf{t},\bar{\psi}_{i})=\textbf{t}_{(\sigma,b^{i})}(\bar{\psi}_{i})$. Otherwise, \[
p_{i}(\textbf{t},\bar{\psi}_{i})=Q^{\mathfrak D}q^{d_{i}}\mbox{C}(\Gamma_{i})_{\sigma,b^{i}}|_{z=\bar{\psi}_{i}} \] where $d_{i}$ is the degree from the subgraph $\Gamma_{i}$. Summing the contributions $\mbox{C}(\Gamma)_{\sigma,b}$ over all $\Gamma$ such that $val(v)=l$ gives the contribution \[ \sum\limits_{\alpha} \frac{1}{(l-1)!} \left( \int_{[\overline{\mathcal{M}}_{0,l}^{\hat{b},b^{2},\ldots,b^{l}}(\mathcal{P}_{\sigma},\iota_{\sigma}^{*}\mathcal{D})]^{w}} \frac{\phi_{\alpha}^{\sigma,\hat{b}}} {-z-\bar{\psi}} \cup t_{\sigma}(\bar{\psi}_{2}) \cup\ldots\cup t_{\sigma}(\bar{\psi}_{l}) \cup e^{-1}_{\mathbb{T}}((N_{\sigma}\mathcal{P})_{0,l,\mathfrak D}) \right ) \phi^{\alpha}_{\sigma,b} \] in (\ref{f_sigma}). Hence, we have \begin{align*} f_{\sigma}= -1_{\sigma}z+t_{\sigma}(z)+\sum\limits_{l=1}^{\infty} \sum\limits_{\mathfrak D\in \overline{\on{NE}}(\mathcal{P}_{\sigma})} \sum\limits_{b\in \on{Box}(\sigma)} \sum\limits_{\alpha} \frac{1}{(l-1)!} \langle \frac{\phi_{\alpha}^{\sigma,\hat{b}}} {-z-\bar{\psi}} ,t_{\sigma}(\psi), \ldots,t_{\sigma}(\psi) \rangle^{\on{tw}}_{0,l,\mathfrak D}\phi^{\alpha}_{\sigma,b} \in \mathcal L^{tw}_\sigma \end{align*} i.e., the Laurent expansion at $z=0$ of $f_{\sigma}$ lies in the twisted Lagrangian cone $\mathcal{L}^{tw}_{\sigma}$. Thus we have proved $\textbf{(C3)}$.
To prove the other direction of the theorem, we assume that $f\in\mathcal{H}[[x]]$
with $f|_{Q=q=x=0}=-1z$ satisfies conditions {\bf (C1), (C2)}, and {\bf (C3)}. Then, from conditions {\bf(C1)} and {\bf(C2)}, we obtain: \begin{equation}\label{f-sigma} f_{\sigma}= -1_{\sigma}z+\textbf{t}_{\sigma}+ \sum\limits_{b\in \on{Box}(\sigma)}1_{b} \sum\limits_{\sigma^{\prime}:\sigma \dagger \sigma^\prime} \sum\limits_{\substack{c\in\mathbb{Q}:c>0,\\\langle c\rangle=\hat{b}_{j}}} q^{d_{c,\sigma,j}} \frac{Rec(c)^{(\sigma^{\prime},b^{\prime})}_{(\sigma,b)}} {(-z+U_{j}(\sigma)/c)} [f_{(\sigma^{\prime},b^{\prime})}]_{z=U_{j}(\sigma)/c}+O(z^{-1}) \end{equation} for some $\textbf{t}_{\sigma}\in \mathcal{H}_{\sigma,+}[[x]]$
satisfying $\textbf{t}_{\sigma}|_{Q=q=x=0}=0$. The remainder $O(z^{-1})$ is a formal power series in $Q$, $q$ and $x$ with coefficients in $z^{-1}S_{\mathbb{T}}[z^{-1}]$. Let $F$ be a $\Lambda^{\mathbb{T}}_{nov}[[x]]$-valued point on $\mathcal{L}_{\mathcal{P}}$ defined by (\ref{point-in-LP}) with $\textbf{t}=\tau$, where $\tau\in \mathcal{H}_{+}[[x]]$ is the unique element such that its restriction to $\mathcal{IP}_{\sigma}$ is $\textbf{t}_{\sigma}$. Then, we know that $F$ and $f$ both satisfy conditions {\bf (C1-C3)}, and they have the same restriction $\textbf{t}_{\sigma}$ in $\mathcal {IP}_\sigma$. Hence, it remains to show that $f$ is uniquely determined by the set of elements $\{\textbf{t}_\sigma\}_{\sigma\in \Sigma_{top}}$.
To prove the uniqueness, we use induction on the degree with respect to $Q, q$ and $x$. Choose a K\"ahler class $\omega$ of $\mathcal{P}$, and recall that the degree of the monomial $Q^{\mathfrak D}q^{d}x_{1}^{k_{1}}\cdots x_{m}^{k_{m}}$ can be defined as $\langle \mathfrak D,\omega\rangle+\sum\limits_{i=1}^{m}k_{i}$. Let $\kappa_{0}$ denote the minimal degree of a non-trivial stable map to $\mathcal P$. Suppose that $f$ is uniquely determined by the collection $\{t_\sigma\}_{\sigma\in \Sigma_{top}}$ up to order $\kappa$. By the isomorphism (\ref{localization-isom}), we know that $f$ is uniquely determined by the collection of its restrictions $\{f_\sigma\}$; hence, to show that $f$ is determined up to order $\kappa+\kappa_{0}$, it suffices to show that each $f_{\sigma}$ is determined up to order $\kappa+\kappa_{0}$. We know by (\ref{f-sigma}) that $f_{\sigma}$ is determined up to order $\kappa+\kappa_{0}$ except for the remainder $O(z^{-1})$. On the other hand, since the Laurent expansion of $f_{\sigma}$ at $z=0$ lies in $\mathcal{L}^{tw}_{\sigma}$, equation (\ref{point-in-LP}) implies that the remainder $O(z^{-1})$ is also uniquely determined up to order $\kappa+\kappa_{0}$. This completes the proof. \end{proof}
\section{Proof of the main theorem}\label{sec:proof_main_thm} To prove Theorem \ref{main-theorem}, it suffices to show that the $S$-extended $I$-function $I^{S}_{\mathcal{P}}(z,t,\tau,q,x,Q)$ satisfies conditions {\bf (C1)-(C3)} in Theorem \ref{characterization}. Recall that $I^{S}_{\mathcal{P}}(z,t,\tau,q,x,Q)$ is given in Definition \ref{defn:I_func}.
Let $I^{S}_{\sigma}$ and $I^{S}_{(\sigma,b)}$ denote the restrictions of $I^{S}_{\mathcal{P}}(z,t,\tau,q,x,Q)$ to the inertia stack $\mathcal{IP}_{\sigma}$ and to the component $(\mathcal{P}_{\sigma})_{b}$ of the inertia stack $\mathcal{IP}_{\sigma}$, respectively.
\subsection{Condition {\bf (C1)}: Poles of the $I$-function} By Definition \ref{defn:I_func}, we have \begin{align}\label{I-function-restriction} I^{S}_{(\sigma,b)}=&
e^{\sum^{n}_{i=1}U_{i}(\sigma)t_{i}/z} \sum\limits_{\mathfrak D\in \overline{\on{NE}}(B)} \sum\limits_{\lambda\in \Lambda E^{S}_{b}} J_{\mathfrak D}(z,\tau)Q^{\mathfrak D}\tilde{q}^{\lambda}e^{\lambda t}\\ & \times \left( \prod\limits_{i \in \sigma} \frac{\prod_{\langle a\rangle =\langle \lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D\rangle,a\leq 0}(U_{i}(\sigma)+az)} {\prod_{\langle a\rangle =\langle \lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D\rangle,a\leq \lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D}(U_{i}(\sigma)+az)} \right) \notag \left( \prod\limits_{i \not\in \sigma}\frac{\prod_{\langle a\rangle =0,a\leq 0}(az)}{\prod_{\langle a\rangle =0,a\leq \lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D}(az)}\right) \end{align} where we identify the top-dimensional cone $\sigma$ with the index set of its 1-cones and consider $\sigma\subset \{ 1,\ldots,n \}$ as a subset of $\{ 1,\ldots,n+m \}$. Note that $U_{i}(\sigma)=0$ for $i\not\in \sigma$. We also need to have $\lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D\geq 0$ for $i\not\in \sigma$; otherwise the contribution is zero. Therefore, we can see that $I^{S}_{(\sigma,b)}$ has an essential singularity at $z=0$, a pole of finite order at $z=\infty$, and simple poles at \[ z=-U_{i}(\sigma)/a\quad \text{with} \quad 0<a\leq \lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D, \langle a\rangle=\langle \lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D\rangle= \langle \lambda_{i}\rangle= \hat{b}_{i},i\in \sigma, \] for $\lambda\in \Lambda E^{S}_{b}$ contributing to the sum. To see that it satisfies {\bf (C1)} of Theorem \ref{characterization}, it remains to invoke the following lemma, which is proved in \cite[Section 7.1]{CCIT}: \begin{lemma} Let $\sigma\in\Sigma_{top}$ be a top-dimensional cone. If $\lambda_{i_{0}}>0$ for some $i_{0}\in \sigma$, then there exists another top-dimensional cone $\sigma^{\prime}\in\Sigma_{top}$ such that $\sigma \dagger \sigma^\prime$ and $i_{0}\in \sigma\setminus \sigma^{\prime}$. \end{lemma}
\subsection{Condition {\bf (C2)}: Recursion relations} Let $\sigma,\sigma^{\prime}\in\Sigma_{top}$ be top-dimensional cones with $\sigma \dagger \sigma^\prime$. Let $b\in \on{Box}(\sigma)$ and fix a positive rational number $c$ such that $\langle c\rangle=\hat{b}_{j}$. We study the residue of $I^{S}_{(\sigma,b)}$ at $z=-U_{j}(\sigma)/c$. Write \[ \Delta_{\lambda,i,\sigma,\mathfrak D}(z):= \frac{\prod_{\langle a \rangle=\langle \lambda_{i}\rangle, a\leq 0} (U_{i}(\sigma)+az)} {\prod_{\langle a \rangle=\langle \lambda_{i}\rangle, a\leq \lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D} (U_{i}(\sigma)+az)} \] for $\lambda\in \Lambda^{S}$, $1\leq i\leq n+m$, and $\mathfrak D\in \overline{\on{NE}}(B)$. The residue of (\ref{I-function-restriction}) at this pole is given by: \begin{equation}\label{I-function-residue}
e^{\frac{\sum^{n}_{i=1}U_{i}(\sigma)t_{i}} {-U_{j}(\sigma)/c}} \frac{1}{c} \sum\limits_{\mathfrak D\in \overline{\on{NE}}(B)} \sum\limits_{\substack{\lambda\in \Lambda E^{S}_{b}\\ \lambda_{j}\geq c}} J_{\mathfrak D}(-U_{j}(\sigma)/c,\tau)Q^{\mathfrak D}\tilde{q}^{\lambda}e^{\lambda t} \frac{\prod_{i:i\neq j}\Delta_{\lambda,i,\sigma,\mathfrak D}(-U_{j}(\sigma)/c)} {\prod_{\substack{0<a\leq \lambda_{j}-c_{1}(L_{j})\cdot\mathfrak D,\langle a \rangle =\langle \lambda_{j}\rangle \\ a\neq c} } (U_{j}(\sigma)-a\frac{U_{j}(\sigma)}{c}) }. \end{equation}
Consider the change of variables \[ \lambda=\lambda^{\prime}+d_{c,\sigma,j} \] where $\lambda^{\prime}\in \Lambda^{S}_{b^{\prime}}$. We write \[ c_{i}= D_{i}\cdot d_{c,\sigma,j}, \text{ for }1\leq i \leq n; \quad c_{j}=c; \quad c_{j^{\prime}}=c^{\prime}; \quad c_{i}=0, \text{ for }n+1\leq i \leq n+m. \] For $1\leq i \leq n$, consider the representable morphism $f:\mathbb{P}_{r_{1},r_{2}}\rightarrow \mathcal{P}$ given by Proposition \ref{toric-morphism} with $f(0)\in \mathcal{P}_{\sigma}$ and $f(\infty)\in \mathcal{P}_{\sigma^\prime}$. Applying the localization formula, we have \[ c_{i}=D_{i}\cdot d_{c,\sigma,j}=\int_{\mathbb{P}_{r_{1},r_{2}}}f^{*}D_{i} =\int_{\mathbb{P}_{r_{1},r_{2}}}^{\mathbb{T}}f^{*}U_{i} =\frac{U_{i}(\sigma)} {U_{j}(\sigma)/c}+ \frac{U_{i}(\sigma^{\prime})}{-U_{j}(\sigma^{\prime})/c^{\prime}} =\frac{U_{i}(\sigma)} {U_{j}(\sigma)/c}+ \frac{U_{i}(\sigma^{\prime})}{-U_{j}(\sigma)/c}. \] Hence we obtain \begin{equation}\label{U-i-sigma} U_{i}(\sigma)=U_{i}(\sigma^{\prime})+\frac{c_{i}}{c}U_{j}(\sigma). \end{equation} Then, by equation (\ref{U-i-sigma}), we have the following three equations: \begin{equation} \frac{\sum^{n}_{i=1}U_{i}(\sigma)t_{i}} {-U_{j}(\sigma)/c}+\lambda t =\frac{\sum^{n}_{i=1}U_{i}(\sigma^{\prime})t_{i}} {-U_{j}(\sigma)/c}+\lambda^{\prime} t; \end{equation} \begin{equation} \Delta_{\lambda,i,\sigma,\mathfrak D}\left(-\frac{U_{j}(\sigma)}{c}\right) =\Delta_{\lambda^{\prime},i,\sigma^{\prime},\mathfrak D}\left(-\frac{U_{j}(\sigma)}{c}\right) \frac{\prod_{a\leq 0, \langle a\rangle = \langle \lambda_{i}\rangle}(U_{i}(\sigma)-\frac{a}{c}U_{j}(\sigma))} {\prod_{a\leq c_{i}, \langle a\rangle = \langle \lambda_{i}\rangle}(U_{i}(\sigma)-\frac{a}{c}U_{j}(\sigma))}, \quad \text{for } i\neq j; \end{equation}
\begin{equation} \prod\limits_{\substack{0<a\leq \lambda_{j}-c_{1}(L_{j})\cdot\mathfrak D,\\ \langle a \rangle = \langle \lambda_{j} \rangle, a\neq c}} \left( U_{j}(\sigma)-a\frac{U_{j}(\sigma)}{c} \right) =\prod\limits_{\substack{-c<a\leq \lambda^{\prime}_{j}-c_{1}(L_{j})\cdot\mathfrak D,\\\langle a \rangle =\langle \lambda_{j} \rangle, a\neq 0}} \left( -a\frac{U_{j}(\sigma)}{c} \right). \end{equation}
Applying the above three equations, we see that (\ref{I-function-residue}), the residue of $I^S_{(\sigma,b)}$ at $z=-\frac{U_j(\sigma)}{c}$, is given by: \begin{align*} & e^{\frac{\sum^{n}_{i=1}U_{i}(\sigma^{\prime})t_{i}} {-U_{j}(\sigma)/c}} \frac{1}{c} \sum\limits_{\mathfrak D\in \overline{\on{NE}}(B)} \sum\limits_{\substack{\lambda^{\prime}\in \Lambda E^{S}_{b^{\prime}}\\ \lambda^{\prime}_{j}\geq 0}} J_{\mathfrak D}(-U_{j}(\sigma)/c,\tau)Q^{\mathfrak D}\tilde{q}^{\lambda^{\prime}}q^{d_{c,\sigma,j}}e^{\lambda^{\prime} t}
\frac{\prod_{i:i\neq j}\Delta_{\lambda^{\prime},i,\sigma^{\prime},\mathfrak D}(-U_{j}(\sigma)/c)} { \prod\limits_{ \substack{0<a\leq \lambda^{\prime}_{j}-c_{1}(L_{j})\cdot\mathfrak D,\\\langle a \rangle =\langle \lambda_{j} \rangle, a\neq 0} } \left( U_{j}(\sigma)-a\frac{U_{j}(\sigma)}{c} \right) }\\ & \times \prod\limits_{i:i\neq j} \frac{\prod_{a\leq 0, \langle a\rangle =\langle \lambda_{i}\rangle}(U_{i}(\sigma)-\frac{a}{c}U_{j}(\sigma))} {\prod_{a\leq c_{i}, \langle a\rangle =\langle \lambda_{i}\rangle}(U_{i}(\sigma)-\frac{a}{c}U_{j}(\sigma))}\\ =& q^{d_{c,\sigma,j}}\frac{1}{c}\frac{1} {\prod\limits_{\substack{0<a<c,\\a\in \mathbb{Z}}}\left(a\frac{U_{j}(\sigma)}{c}\right)} \prod\limits_{i\in \sigma^{\prime}} \frac{\prod_{a\leq 0, \langle a\rangle = \langle \lambda_{i}\rangle}(U_{i}(\sigma)-\frac{a}{c}U_{j}(\sigma))} { \prod_{a\leq c_{i}, \langle a\rangle = \langle \lambda_{i}\rangle} (U_{i}(\sigma)-\frac{a}{c}U_{j}(\sigma)) }
I^{S}_{(\sigma^{\prime},b^{\prime})}|_{z=-U_{j}(\sigma)/c}. \end{align*} By a direct computation, we obtain \[ \frac{1}{c}\frac{1} {\prod\limits_{\substack{0<a<c,\\a\in \mathbb{Z}}}\left(a\frac{U_{j}(\sigma)}{c}\right)} \prod\limits_{i\in \sigma^{\prime}} \frac{\prod_{a\leq 0, \langle a\rangle =\langle \lambda_{i}\rangle}(U_{i}(\sigma)-\frac{a}{c}U_{j}(\sigma))} {\prod_{a\leq c_{i}, \langle a\rangle =\langle \lambda_{i}\rangle}(U_{i}(\sigma)-\frac{a}{c}U_{j}(\sigma))} =Rec(c)^{(\sigma^{\prime},b^{\prime})}_{(\sigma,b)}. \] This proves the recursion for the $S$-extended $I$-function.
\subsection{Condition {\bf (C3)}: Restriction to fixed points.} Let $\sigma\in\Sigma_{top}$ be a top-dimensional cone; we need to show that $I^{S}_{\sigma}$ lies on the Lagrangian cone $\mathcal{L}^{tw}_{\sigma}$. We will need the decomposition theorem for the Gromov-Witten theory of $\mu$-gerbes over the base $B$, as in \cite{AJT09}.
By Definition \ref{defn:I_func}, we have \begin{align}\label{I^S_sigma} I^{S}_{\sigma} = & e^{\sum^{n}_{i=1}U_{i}(\sigma)t_{i}/z} \sum\limits_{\mathfrak D\in \overline{\on{NE}}(B)} \sum\limits_{\substack{\lambda\in \Lambda^{S}_{\sigma}\\ \lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D\in\mathbb{Z}_{\geq 0} \text{ if } i \not\in\sigma}} J_{\mathfrak D}(z,\tau)Q^{\mathfrak D}\tilde{q}^{\lambda}e^{\lambda t}\\ & \times \left(
\prod\limits_{i \in \sigma} \frac{\prod_{\langle a \rangle =\langle \lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D\rangle,a\leq 0}(U_{i}(\sigma)+az)} {\prod_{\langle a\rangle =\langle \lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D\rangle,a\leq \lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D}(U_{i}(\sigma)+az)} \right) \notag \left(
\prod\limits_{i \not\in \sigma} \frac{\prod_{\langle a\rangle =0,a\leq 0}(az)} {\prod_{\langle a\rangle =0,a\leq \lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D}(az)} \right) 1_{v^{S}(\lambda)} \end{align} where $1_{v^{S}(\lambda)}\in H^{*}_{\on{CR}}(\mathcal{P}_{\sigma})$ is the identity class on the twisted sector of $\mathcal{P}_{\sigma}$ corresponding to the box element $v^{S}(\lambda)\in \on{Box}(\sigma)$. By the string equation, $\mathcal{L}^{tw}_{\sigma}$ is invariant under multiplication by $ e^{\sum^{n}_{i=1}U_{i}(\sigma)t_{i}/z}$; hence we can remove this factor from (\ref{I^S_sigma}).
Let $\pi(\sigma)$ be the quotient map $\textbf{N} \rightarrow \textbf{N}(\sigma)$. We have \[ v^{S}(\lambda)= \sum\limits_{j\in\sigma} \lceil \lambda_{j}\rceil \rho_{j}+ \sum\limits_{i\not\in\sigma,i\leq n} \lambda_{i}\rho_{i}+ \sum\limits_{i=1}^{m} \lambda_{n+i}s_{i}\equiv \sum\limits_{i\not\in\sigma}\lambda_{i}b^{i}_\sigma \quad \text{mod }\textbf{N}_{\sigma}, \] where \[ b^{i}_\sigma=\left\{ \begin{array}{lr} \pi(\sigma)(\rho_i), \quad 1\leq i \leq n;\\ \pi(\sigma)(s_{i-n}), \quad n+1\leq i\leq n+m. \end{array} \right. \] We also introduce variables $\{q_{i}\}_{i\not\in \sigma}$ and the change of variables: \[ Q^\mathfrak D\tilde{q}^{\lambda}e^{\lambda t}= (\prod\limits_{i\not\in \sigma}q_{i}^{\lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D})Q^\mathfrak D. \] It remains to show that \begin{align}\label{restriction-sigma} \sum\limits_{\mathfrak D\in \overline{\on{NE}}(B)} \sum\limits_{ \substack{ \lambda\in \Lambda^{S}_{\sigma}\\ \lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D\in\mathbb{Z}_{\geq 0} \text{ if } i \not\in\sigma } } Q^{\mathfrak D}J_{\mathfrak D}(z,\tau) \left( \prod\limits_{i \in \sigma} \frac{\prod_{\langle a \rangle =\langle \lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D\rangle,a\leq 0}(U_{i}(\sigma)+az)} {\prod_{\langle a\rangle =\langle \lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D\rangle,a\leq \lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D}(U_{i}(\sigma)+az)} \right)\\ \notag \times \left( \prod\limits_{i \not\in \sigma} \frac{q_{i}^{\lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D}\prod_{\langle a\rangle =0,a\leq 0}(az)} {\prod_{\langle a\rangle =0,a\leq \lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D}(az)} \right) 1_{\sum_{i\not\in \sigma}\lambda_{i}b^{i}_\sigma} \end{align} is an $S_{\mathbb{T}}[[q]]$-valued point on the twisted Lagrangian cone $\mathcal{L}^{tw}_{\sigma}$ of $\mathcal P_{\sigma}$.
By construction, $\mathcal P_\sigma$ is a product of root gerbes over the base $B$. \begin{lemma} For each top-dimensional cone $\sigma$, the series \begin{equation}\label{P-sigma-function} \sum\limits_{\mathfrak D\in \overline{\on{NE}}(B)} \sum\limits_{b\in \on{Box}(\sigma)} Q^{\mathfrak D} J_{\mathfrak D}(z,\tau)1_{b} \end{equation} lies on the Lagrangian cone $\mathcal{L}$ of the untwisted theory of $\mathcal{P}_{\sigma}$. \end{lemma} \begin{proof} By the definition of the $J$-function, we have \begin{align*} \sum\limits_{\mathfrak D\in \overline{\on{NE}}(B)} \sum\limits_{b\in \on{Box}(\sigma)} Q^{\mathfrak D} J_{\mathfrak D}(z,\tau)1_{b} &=\sum\limits_{b\in \on{Box}(\sigma)} (z+\tau+\sum_{n,\mathfrak D}\sum_\alpha\frac{Q^\mathfrak D}{n!}\langle \frac{\phi_\alpha}{z-\bar{\psi}},\tau,\ldots,\tau\rangle_{0,n+1,\mathfrak D}^B\phi^\alpha)1_{b}\\ &= z\sum\limits_{b\in \on{Box}(\sigma)} \sum_{n,\mathfrak D}\sum_\alpha\frac{Q^\mathfrak D}{n!}\langle \frac{\phi_\alpha}{z-\bar{\psi}},1,\tau,\ldots,\tau\rangle_{0,n+2,\mathfrak D}^B\phi^\alpha\otimes 1_{b}, \end{align*} where the second equality follows from the string equation. By \cite[Proposition 5.2]{Jiang}, the Chen-Ruan cohomology ring of the toric stack bundle $\mathcal P_\sigma$ is given by \begin{equation}\label{QH*-gerbe} H^*_{CR}(\mathcal P_\sigma;\mathbb Q)\cong H^*(B;\mathbb Q)\otimes H^*_{CR}(BG_\sigma;\mathbb Q) \end{equation} where $G_\sigma$ is the isotropy group at generic points of $\mathcal P_\sigma$. Recall that $\tau$ takes values in $H^*(B,\mathbb Q)\cong H^*(\mathcal P_{\sigma},\mathbb Q)\subset H^*_{CR}(\mathcal P_\sigma,\mathbb Q)$. By \cite[Theorem 7.1]{AJT10}, \begin{align*}
\langle \frac{\phi_\alpha}{z-\bar{\psi}},1,\tau,\ldots,\tau\rangle_{0,n+2,\mathfrak D}^B=|G_\sigma|\langle \frac{\phi_\alpha\otimes 1_{\hat b}}{z-\bar{\psi}},1_{\tilde{b}},\tau,\ldots,\tau\rangle_{0,n+2,\mathfrak D}^{\mathcal P_\sigma} \end{align*} where $\hat{b}=inv(b)$ is the image of $b$ under the involution and $\tilde b$ is uniquely determined by the $\mathfrak D$-admissible condition defined in \cite[Definition 3.3, Remark 3.7]{AJT09} and \cite[Definition 4.5]{AJT10}. Note that $\tilde b$ depends on $b$ and $\mathfrak D$. Moreover, by \cite[Theorem 4.4]{AJT09} and \cite[Theorem 7.1]{AJT10}, for each $\check b\in \on{Box}(\sigma)\setminus\{\tilde b\}$, the invariant $\langle \frac{\phi_\alpha\otimes 1_{\hat b}}{z-\bar{\psi}},1_{\check{b}},\tau,\ldots,\tau\rangle_{0,n+2,\mathfrak D}^{\mathcal P_\sigma}$ vanishes. Therefore \[ \langle \frac{\phi_\alpha\otimes 1_{\hat b}}{z-\bar{\psi}},1_{\tilde{b}},\tau,\ldots,\tau\rangle_{0,n+2,\mathfrak D}^{\mathcal P_\sigma}=\langle \frac{\phi_\alpha\otimes 1_{\hat b}}{z-\bar{\psi}},\sum_{\underline b\in \on{Box}(\sigma)}1_{\underline b},\tau,\ldots,\tau\rangle_{0,n+2,\mathfrak D}^{\mathcal P_\sigma}, \] for every $b$ and $\mathfrak D$. Notice that $\sum_{\underline b\in \on{Box}(\sigma)}1_{\underline b}$ does not depend on $b$ or $\mathfrak D$. Then, (\ref{P-sigma-function}) can be written as \begin{align*}
z|G_\sigma|\sum\limits_{b\in \on{Box}(\sigma)} \sum_{n,\mathfrak D}\sum_\alpha\frac{Q^\mathfrak D}{n!}\langle \frac{\phi_\alpha\otimes 1_{\hat b}}{z-\bar{\psi}},\sum_{\underline b\in \on{Box}(\sigma)}1_{\underline b},\tau,\ldots,\tau\rangle_{0,n+2,\mathfrak D}^{\mathcal P_\sigma}\phi^\alpha \otimes 1_{b}. \end{align*} By (\ref{QH*-gerbe}), $\{\phi^\alpha\otimes 1_b\}_{b\in \on{Box}(\sigma),\alpha}$ forms a basis of $H^*_{CR}(\mathcal P_\sigma;\mathbb Q)$, so we have \begin{align*}
& z|G_\sigma|\sum\limits_{b\in \on{Box}(\sigma)} \sum_{n,\mathfrak D}\sum_\alpha\frac{Q^\mathfrak D}{n!}\langle \frac{\phi_\alpha\otimes 1_{\hat b}}{z-\bar{\psi}},\sum_{\underline b\in \on{Box}(\sigma)}1_{\underline b},\tau,\ldots,\tau\rangle_{0,n+2,\mathfrak D}^{\mathcal P_\sigma}\phi^\alpha \otimes1_{b}\\
=& z|G_\sigma| S_{\mathcal P_\sigma}(\tau,z)(\sum_{\underline b\in \on{Box}(\sigma)}1_{\underline b}), \end{align*} by the definition of the $S$-operator (\ref{S-operator}). Hence, we conclude that (\ref{P-sigma-function}) lies on the Lagrangian cone $\mathcal{L}$ of the untwisted theory of $\mathcal{P}_{\sigma}$. \end{proof}
Notice that we have the following identity \begin{align*} \sum\limits_{ \substack{ \lambda\in \Lambda^{S}_{\sigma}\\ \lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D\in\mathbb{Z}_{\geq 0} \text{ if } i \not\in\sigma } } \left( \prod\limits_{i \not\in \sigma} \frac{q_{i}^{\lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D}\prod_{\langle a\rangle =0,a\leq 0}(az)} {\prod_{\langle a\rangle =0,a\leq \lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D}(az)} \right) 1_{\sum_{i\not\in \sigma}\lambda_{i}b^{i}_\sigma}= \sum\limits_{b\in \on{Box}(\sigma)} \exp(\sum\limits_{i\not\in \sigma}q_{i}/z)1_{b}. \end{align*}
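This identity can be checked factor by factor; the following display (our elaboration, not spelled out in the text) records the computation for a single $i\not\in\sigma$, writing $N:=\lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D\in\mathbb{Z}_{\geq 0}$:

```latex
\frac{\prod_{\langle a\rangle =0,\,a\leq 0}(az)}
     {\prod_{\langle a\rangle =0,\,a\leq N}(az)}
=\frac{1}{\prod_{a=1}^{N}(az)}
=\frac{1}{N!\,z^{N}},
\qquad\text{hence}\qquad
\sum_{N\geq 0}\frac{q_{i}^{N}}{N!\,z^{N}}
=\exp\!\left(q_{i}/z\right).
```

Multiplying over $i\not\in\sigma$ gives $\exp(\sum_{i\not\in \sigma}q_{i}/z)$, and the sum over the admissible $\lambda$ regroups according to the box element $\sum_{i\not\in \sigma}\lambda_{i}b^{i}_\sigma\in\on{Box}(\sigma)$.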
Hence, the series \begin{align}\label{J-function-gerbe} & \sum\limits_{\mathfrak D\in \overline{\on{NE}}(B)} \sum\limits_{ \substack{ \lambda\in \Lambda^{S}_{\sigma}\\ \lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D\in\mathbb{Z}_{\geq 0} \text{ if } i \not\in\sigma } } Q^\mathfrak D J_{\mathfrak D}(z,\tau) \left( \prod\limits_{i \not\in \sigma} \frac{q_{i}^{\lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D}\prod_{\langle a\rangle =0,a\leq 0}(az)} {\prod_{\langle a\rangle =0,a\leq \lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D}(az)} \right) 1_{\sum_{i\not\in \sigma}\lambda_{i}b^{i}_\sigma}\\ \notag =& \sum\limits_{\mathfrak D\in \overline{\on{NE}}(B)} \sum\limits_{b\in \on{Box}(\sigma)} Q^\mathfrak D \exp(\sum\limits_{i\not\in \sigma}q_{i}/z)J_{\mathfrak D}(z,\tau)1_{b} \end{align} lies on the untwisted Lagrangian cone $\mathcal{L}$ of $\mathcal P_{\sigma}$.
We will need Tseng's orbifold quantum Riemann-Roch theorem \cite{Tseng} to prove that (\ref{restriction-sigma}) lies in the twisted Lagrangian cone $\mathcal L^{tw}_{\sigma}$. We recall some notation here:
Let $V$ be the direct sum of $d$ vector bundles $V^{(j)}$, for $1\leq j \leq d$, and consider a universal multiplicative characteristic class: \[ \textbf{c}(V)= \prod\limits_{j=1}^{d} \exp\left(\sum\limits_{k=0}^{\infty}s_{k}^{(j)}ch_{k}(V^{(j)})\right) \] where $s_{0}^{(j)},s_{1}^{(j)},s_{2}^{(j)}$,\ldots are formal indeterminates. We consider the special case where $V=N_{\sigma}\mathcal{P}$, which is the direct sum of line bundles $U_{j}(\sigma)$, for $j\in \sigma$, over $\mathcal{P}_{\sigma}$. For $j\in\sigma$, we set \[ s^{(j)}_{k}=\left\{ \begin{array}{lr} -\log U_{j}(\sigma),& k=0\\ (-1)^{k}(k-1)!U_{j}(\sigma)^{-k},& k\geq 1 \end{array} \right. \] Then, we obtain the $(N_{\sigma}\mathcal{P},e_{\mathbb{T}}^{-1})$-twisted Gromov-Witten theory of $\mathcal{P}_{\sigma}$. Recall that $\mathcal{L}^{tw}$ is the Lagrangian cone of the $(N_{\sigma}\mathcal{P},e_{\mathbb{T}}^{-1})$-twisted Gromov-Witten theory of $\mathcal{P}_{\sigma}$. By direct computation, we obtain the following equation: \begin{equation}\label{exp-s-j-k} \exp(s^{(j)}(x)) =(U_{j}(\sigma)+x)^{-1}, \end{equation} where $s^{(j)}(x):=\left(\sum\limits_{k=0}^{\infty}s_{k}^{(j)}\frac{x^{k}}{k!} \right)$.
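Equation (\ref{exp-s-j-k}) follows from a short formal computation (our elaboration, treating the series as a formal expansion in $x/U_{j}(\sigma)$): summing the series defining $s^{(j)}(x)$,

```latex
s^{(j)}(x)
=-\log U_{j}(\sigma)
+\sum_{k\geq 1}(-1)^{k}(k-1)!\,U_{j}(\sigma)^{-k}\,\frac{x^{k}}{k!}
=-\log U_{j}(\sigma)
+\sum_{k\geq 1}\frac{1}{k}\left(\frac{-x}{U_{j}(\sigma)}\right)^{k}
=-\log\bigl(U_{j}(\sigma)+x\bigr),
```

using $\sum_{k\geq 1}t^{k}/k=-\log(1-t)$ with $t=-x/U_{j}(\sigma)$, so that $\exp(s^{(j)}(x))=(U_{j}(\sigma)+x)^{-1}$ as claimed.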
As in \cite{CCIT09}, we introduce the function: \[ G^{(j)}_{y}(x,z):= \sum\limits_{l,m\geq 0} s^{(j)}_{l+m-1} \frac{B_{m}(y)}{m!} \frac{x^{l}}{l!} z^{m-1}\in \mathbb{C}[y,x,z,z^{-1}] [[s_{0}^{(j)},s_{1}^{(j)},s_{2}^{(j)},\ldots]], \] where $B_{m}(y)$ denotes the $m$-th Bernoulli polynomial. By \cite{CCIT09}, the function $G^{(j)}_{y}(x,z)$ satisfies the following two relations: \begin{align}\label{relation-G-j-1} G_{y}^{(j)}(x,z)=& G_{0}^{(j)}(x+yz,z);\\ \label{relation-G-j-2} G_{0}^{(j)}(x+z,z)=& G_{0}^{(j)}(x,z)+s^{(j)}(x). \end{align}
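Both relations reflect standard properties of the Bernoulli polynomials. For instance, (\ref{relation-G-j-1}) can be checked as follows (our sketch, writing $B_{m}=B_{m}(0)$), by expanding $(x+yz)^{l'}$ binomially and using the addition formula $B_{m}(y)=\sum_{k=0}^{m}\binom{m}{k}B_{k}\,y^{m-k}$:

```latex
G_{0}^{(j)}(x+yz,z)
=\sum_{l',m'\geq 0}s^{(j)}_{l'+m'-1}\,\frac{B_{m'}}{m'!}\,\frac{(x+yz)^{l'}}{l'!}\,z^{m'-1}
=\sum_{l,m\geq 0}s^{(j)}_{l+m-1}
 \Bigl(\sum_{k=0}^{m}\binom{m}{k}B_{k}\,y^{m-k}\Bigr)
 \frac{1}{m!}\,\frac{x^{l}}{l!}\,z^{m-1}
=G_{y}^{(j)}(x,z),
```

where the middle step collects, for each fixed $(l,m)$ with $l=l'-r$ and $m=m'+r$, the terms in which the binomial expansion contributes $(yz)^{r}$.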
Let $\theta_{j}= \left(\sum_{i\not\in \sigma} c_{ij}q_{i}(\partial/\partial q_{i})\right)$, where the rational numbers $c_{ij}$, for $i\not\in \sigma$ and $j\in\sigma$, are defined by \[ \bar{\rho}_i=\sum\limits_{j\in\sigma}c_{ij}\bar{\rho}_j, \text{ for } 1\leq i\leq n; \quad \bar{s}_{i}=\sum\limits_{j\in \sigma}c_{ij}\bar{\rho}_j, \text{ for } 1\leq i \leq m. \] Hence, the rational numbers $c_{ij}$ satisfy the following equation: \[ \lambda_j=-\sum\limits_{i\not\in\sigma}c_{ij}\lambda_i, \text{ for }\lambda\in \Lambda^S_\sigma\text{ and } j\in\sigma. \] We apply the differential operator $\exp(-\sum_{j\in\sigma}G_{0}^{(j)}(z\theta_{j},z))$ to (\ref{J-function-gerbe}); then we have: \begin{align*}
\textbf{L}:= \exp\left(-\sum_{j\in\sigma}G_{0}^{(j)}(z\theta_{j},z)\right)
\sum\limits_{\mathfrak D\in \overline{\on{NE}}(B)} \sum\limits_{\substack{\lambda\in \Lambda^{S}_{\sigma}\\ \lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D\in\mathbb{Z}_{\geq 0} \text{ if } i \not\in\sigma}} Q^\mathfrak D J_{\mathfrak D}(z,\tau)\\
\times \left(
\prod\limits_{i \not\in \sigma} \frac{q_{i}^{\lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D}\prod_{\langle a\rangle =0,a\leq 0}(az)} {\prod_{\langle a\rangle =0,a\leq \lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D}(az)} \right) 1_{\sum_{i\not\in \sigma}\lambda_{i}b^{i}_\sigma} \\ \end{align*} \begin{align*} & = \sum\limits_{\mathfrak D\in \overline{\on{NE}}(B)} \sum\limits_{\substack{\lambda\in \Lambda^{S}_{\sigma}\\ \lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D\in\mathbb{Z}_{\geq 0} \text{ if } i \not\in\sigma}} Q^\mathfrak D J_{\mathfrak D}(z,\tau) \left(
\prod\limits_{i \not\in \sigma} \frac{q_{i}^{\lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D}\prod_{\langle a\rangle =0,a\leq 0}(az)} {\prod_{\langle a\rangle =0,a\leq \lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D}(az)} \right) \\
& \times \exp\left(-\sum_{j\in\sigma}G_{0}^{(j)}\left(-z(\lambda_{j}-c_{1}(L_{j})\cdot\mathfrak D),z\right)\right)1_{\sum_{i\not\in \sigma}\lambda_{i}b^{i}_\sigma} \end{align*} lies on the untwisted Lagrangian cone $\mathcal{L}$ of $\mathcal{P}_{\sigma}$. On the other hand, Tseng's orbifold quantum Riemann-Roch operator for $\oplus_{j\in\sigma}U_{j}(\sigma)$ is of the form: \[ \Delta_{tw}:= \bigoplus\limits_{b\in \on{Box}(\sigma)} \exp\left(\sum\limits_{j\in\sigma}G_{b_{j}}^{(j)}(0,z)\right). \] This operator $\Delta_{tw}$ maps the untwisted Lagrangian cone $\mathcal{L}$ to the twisted Lagrangian cone $\mathcal{L}^{tw}$. Therefore
\begin{align*}
\Delta_{tw}\textbf{L}= & \sum\limits_{\mathfrak D\in \overline{\on{NE}}(B)} \sum\limits_{\substack{\lambda\in \Lambda^{S}_{\sigma}\\ \lambda_{i}\in\mathbb{Z} \text{ if } i \not\in\sigma}} Q^\mathfrak D J_{\mathfrak D}(z,\tau) \left(
\prod\limits_{i \not\in \sigma} \frac{q_{i}^{\lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D}\prod_{\langle a\rangle =0,a\leq 0}(az)} {\prod_{\langle a\rangle =0,a\leq \lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D}(az)} \right) \\ &\times \exp \left( -\sum_{j\in\sigma}\left(G_{b_{j}}^{(j)}(0,z)-G_{0}^{(j)}(-z(\lambda_{j}-c_{1}(L_{j})\cdot\mathfrak D),z)\right) \right) 1_{\sum_{i\not\in \sigma}\lambda_{i}b^{i}_\sigma}\\
= & \sum\limits_{\mathfrak D\in \overline{\on{NE}}(B)} \sum\limits_{\substack{\lambda\in \Lambda^{S}_{\sigma}\\ \lambda_{i}\in\mathbb{Z} \text{ if } i \not\in\sigma}} Q^\mathfrak D J_{\mathfrak D}(z,\tau) \left(
\prod\limits_{i \not\in \sigma} \frac{q_{i}^{\lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D}\prod_{\langle a\rangle =0,a\leq 0}(az)} {\prod_{\langle a\rangle =0,a\leq \lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D}(az)} \right) \\ &\times \exp \left( -\sum_{j\in\sigma}\left(G_{0}^{(j)}(\langle -\lambda_{j} \rangle z,z)-G_{0}^{(j)}(-z(\lambda_{j}-c_{1}(L_{j})\cdot\mathfrak D),z)\right) \right) 1_{\sum_{i\not\in \sigma}\lambda_{i}b^{i}_\sigma}\\ =& \sum\limits_{\mathfrak D\in \overline{\on{NE}}(B)} \sum\limits_{\substack{\lambda\in \Lambda^{S}_{\sigma}\\ \lambda_{i}\in\mathbb{Z} \text{ if } i \not\in\sigma}} Q^\mathfrak D J_{\mathfrak D}(z,\tau) \left(
\prod\limits_{i \not\in \sigma} \frac{q_{i}^{\lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D}\prod_{\langle a\rangle =0,a\leq 0}(az)} {\prod_{\langle a\rangle =0,a\leq \lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D}(az)} \right) \\ &\times \left( \prod\limits_{i \in \sigma} \frac{\prod_{\langle a \rangle =\langle \lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D\rangle,a\leq 0}\exp(-s^{(i)}(az))} {\prod_{\langle a\rangle =\langle \lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D\rangle,a\leq \lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D}\exp(-s^{(i)}(az))} \right) 1_{\sum_{i\not\in \sigma}\lambda_{i}b^{i}_\sigma}\\
=& \sum\limits_{\mathfrak D\in \overline{\on{NE}}(B)} \sum\limits_{\substack{\lambda\in \Lambda^{S}_{\sigma}\\ \lambda_{i}\in\mathbb{Z} \text{ if } i \not\in\sigma}} Q^\mathfrak D J_{\mathfrak D}(z,\tau) \left(
\prod\limits_{i \not\in \sigma} \frac{q_{i}^{\lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D}\prod_{\langle a\rangle =0,a\leq 0}(az)} {\prod_{\langle a\rangle =0,a\leq \lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D}(az)} \right) \\ &\times \left( \prod\limits_{i \in \sigma} \frac{\prod_{\langle a \rangle =\langle \lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D\rangle,a\leq 0}(U_{i}(\sigma)+az)} {\prod_{\langle a\rangle =\langle \lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D\rangle,a\leq \lambda_{i}-c_{1}(L_{i})\cdot\mathfrak D}(U_{i}(\sigma)+az)} \right) 1_{\sum_{i\not\in \sigma}\lambda_{i}b^{i}_\sigma}
= I^{S}_{\sigma} e^{-\sum^{n}_{i=1}U_{i}(\sigma)t_{i}/z} \end{align*} lies on $\mathcal{L}^{tw}$, where the second equality follows from (\ref{relation-G-j-1}), the third from (\ref{relation-G-j-2}), and the fourth from (\ref{exp-s-j-k}). This completes the proof of Theorem \ref{main-theorem}.
\end{document}
\begin{document}
\newtheorem{thm}{Theorem}[section] \newtheorem{prop}[thm]{Proposition} \newtheorem{lemma}[thm]{Lemma} \newtheorem{cor}[thm]{Corollary} \theoremstyle{definition} \newtheorem{defi}[thm]{Definition} \newtheorem{notation}[thm]{Notation} \newtheorem{exe}[thm]{Example} \newtheorem{conj}[thm]{Conjecture} \newtheorem{prob}[thm]{Problem} \newtheorem{rem}[thm]{Remark}
\newtheorem{conv}[thm]{Convention} \newtheorem{crit}[thm]{Criterion}
\begin{abstract} For a standard Finsler metric $F$ on a manifold $M$, its domain is the whole tangent bundle $TM$ and its fundamental tensor $g$ is positive-definite. However, in many cases (for example, the well-known Kropina and Matsumoto metrics), these two conditions are relaxed, obtaining then either a {\em pseudo-Finsler metric} (with arbitrary $g$) or a {\em conic Finsler metric} (with domain a ``conic'' open domain of $TM$).
Our aim is twofold. First, to give an account of quite a few subtleties which appear under such generalizations, say, for {\em conic pseudo-Finsler metrics} (including, as a previous step, the case of {\em Minkowski conic pseudo-norms} on an affine space).
Second, to provide some criteria which determine when a pseudo-Finsler metric $F$ obtained as a general homogeneous combination of Finsler metrics and one-forms is again a Finsler metric ---or, more precisely, the conic domain where $g$ remains positive definite. Such a combination generalizes the known $(\alpha,\beta)$-metrics in different directions. Remarkably, classical examples of Finsler metrics are reobtained and extended, with explicit computations of their fundamental tensors. \end{abstract}
\maketitle
\tableofcontents
\section{Introduction}
Finsler Geometry is a classical branch of Differential Geometry, with a well-established background. However, its applications to Physics and other natural sciences (see, for example, \cite{AIM93, Asa85, Mat89b, Ran41, SSI08}) introduced some ambiguity in what is understood as a Finsler manifold. The standard definition of a Finsler metric $F$ on a manifold $M$ entails that $F$ is defined {\em on the whole tangent bundle} $TM$ and that {\em strong convexity} is satisfied, i.e., its fundamental tensor $g$ is positive definite \cite{AP,Akbar-Zadeh06,BaChSh00,ChSh05,Mo06,shen2001,Sh01}. In fact, the non-degeneracy of $g$ is essential for many purposes which concern connections and curvature, and its positive-definiteness implies the fundamental inequality, with important consequences on the associated Finsler distance, minimizing properties of geodesics, etc.
However, in many interesting cases, the metric $F$ is defined only in some {\em conic domain} $A\varsubsetneq TM$; recall, for example, Matsumoto \cite{Mat89b} or Kropina metrics \cite{Kro59,Mat72,Shi78}, or some metrics constructed as in the Randers case \cite[Chapter 11]{BaChSh00} (see also Corollary~\ref{randers} below). Typically, this happens when $F$ fails to be positive definite or smoothly extendible in some directions, and so these directions are removed from the domain of $F$. Many references consider as the essential ingredient of a Finsler metric $F$ that it behaves as a (positively homogeneous) pointwise norm in some conic open subset $A\subset TM$ \cite{AIM93,Asa85,BeFa2000,Br,Mat86}. In this case, the role of the positive-definiteness of $g$ may remain somewhat vague. In principle, one admits the {\em pseudo-Finsler} case when $g$ is allowed to be non-positive definite. If the non-degeneracy of $g$ is required, then one can redefine $A$ so that degenerate directions are also suppressed. If, moreover,
positive-definiteness is required, $A$ will contain only those directions where this property holds. However, notice that it is important then to have criteria which determine exactly the domain $A$, as well as the expected properties for $F$ and $g$ in each case.
The first aim of this work is to study these cases by making two basic extensions of the standard notion of Finsler metric: {\em pseudo-Finsler metrics} (if one admits that the fundamental tensor $g$ can be non-positive definite) and {\em conic Finsler metrics} (if one admits that the domain of $F$ is not necessarily the whole $TM$ but an open conic domain); if both extensions are made at the same time, we speak of a {\em conic pseudo-Finsler metric}. Once the general properties of these extensions are studied, our next aim is to determine the regions where strong convexity holds for natural classes of conic pseudo-Finsler metrics. Recall that
one of the distinctive aspects of Finsler geometry, in
comparison with the Riemannian one, is how Finsler metrics (and one-forms) can be combined to obtain new Finsler metrics. In Riemannian geometry the natural operations are just addition and conformal changes, but in Finsler geometry there is a wealth of possibilities, providing endless families of Finsler metrics (see Section \ref{ls3}). As a remarkable goal, we compute the fundamental tensor $g$ of these combinations explicitly, in such a way that the domain where $g$ is positive definite becomes apparent. The paper is then divided into three sections.
In Section \ref{s1}, our study starts at the very beginning, by discussing the extensions of Minkowski norms into {\em Minkowski pseudo-norms} and {\em Minkowski conic norms} ---or, in general, {\em Minkowski conic pseudo-norms}. After a preliminary review of the properties of Minkowski norms in the first subsection, in the second one we focus on their generalizations. As the weakening of these properties may entail the loss of the triangle inequality, we study in parallel norms and norms satisfying the strict triangle inequality (including a discussion about the weakening of differentiability in
Proposition \ref{pcontinuousnorms}, Remark \ref{rcontinuousnorms}). In the third subsection we show how all these norm-related notions are characterized by looking at the corresponding unit (conic) sphere or indicatrix $S$, which allows one to reconstruct the norm from suitable candidates for the unit (conic) ball $B$
(Proposition \ref{pfunda}, Theorem \ref{tfunda}). The interplay between the
convexities of $S$, $B$ and the conic domain $A$ of the Minkowski conic pseudo-norm is stressed.
Finally, in the fourth subsection, simple examples
(constructed on $\mathds R^2$ from a curve which plays the role of $S$) show the optimality of the results.
We stress, in both results and examples, that even though the triangle inequality may not hold for Minkowski pseudo-norms, their forward (resp. backward) affine balls still constitute a basis of the topology, in contrast to the conic case (Proposition \ref{ppseudi}). However, for Minkowski conic norms, the sets of all the forward and backward affine balls generate the topology as a subbasis (Proposition \ref{opennormballs}), and suitable triangle inequalities hold when the conic domain $A$ is convex.
In Section \ref{s2}, the general properties of a conic pseudo-Finsler metric $F$, as well as those of the particular pseudo-Finsler and conic Finsler cases, are studied. It is divided into five subsections with, respectively, the following aims: (1) A discussion of the general definition, including the subtleties inherent to our general choice of conic domain $A\subset TM$ and the possibility of extending it to all of $TM$. (2) To introduce the notion of {\em admissible curve} (with velocity in $A$) and, associated to it: the conic partial ordering $\prec$, the {\em Finsler separation $d_F$} (which extends the classical Finsler distance), and the open forward and backward balls, which are shown to be always topologically open subsets but which, in general, do not constitute a basis for the topology (in fact, $d_F$ may be identically zero in the conic case). So, the condition that $F$ be Riemannianly lower bounded is introduced to guarantee that the forward (resp. backward) balls give a basis of the topology. (3) A discussion of the role of the open balls in this general framework. In particular, we show that, in the pseudo-Finsler case, the open forward balls still constitute a basis of the topology, and the Finsler separation $d_F$ is still a generalized distance. However, we stress that the corresponding $d_F$-balls (obtained as in a length space) may look very different from those of the Finsler case ---in particular, they differ from the affine balls of Minkowski pseudo-norms whenever the fundamental tensor $g$ is not positive semi-definite. (4) To introduce geodesics as critical points of the energy functional for admissible curves joining two fixed points. In the non-degenerate directions they are related to the Chern connection, and characterized by means of the geodesic equation (so that they are uniquely determined by their velocity at some point); otherwise, their possible lack of uniqueness becomes apparent (see Example \ref{ex2a}).
Their minimizing properties appear only in the conic Finsler case, and some subtleties about them are discussed in detail. (5) A final summary is provided, comparing the Finsler, pseudo-Finsler and conic Finsler cases at three levels: (i) the Finsler separation $d_F$, (ii) geodesics, and (iii) the particular affine Minkowski case.
In Section \ref{ls3},
we study a general homogeneous functional combination of $n$ (conic) Finsler metrics $F_1, \dots , F_n$ and $m$ one-forms $\beta_{n+1}, \dots , \beta_{n+m}$. Such $(F_1, \dots , F_n,$ $\beta_{n+1}, \dots , \beta_{n+m})$-metrics constitute an obvious generalization of the known $(\alpha, \beta)$-metrics (see \cite{Mat72} and also \cite{KAM95,MSS10,Mat89,Mat91b,Mat91c,Mat92,SSI08}) and of $\beta$-deformations \cite{Rom06}. In the first subsection, we compute explicitly their fundamental tensor $g$ and derive general conditions ensuring that $g$ is positive definite (Theorem \ref{central}). Then, some simple particular cases (Corollaries \ref{sum} and \ref{rrandersr}) and consequences (Remark \ref{consequences} and Corollary \ref{cisom}) are stressed. In the second subsection we focus on the simple case $n=1=m$. The metric $F$ obtained in this case coincides with the so-called $\beta$-deformation of a preexisting Finsler metric $F_0$, and it is called here an {\em $(F_0,\beta)$-metric}. In particular, the classical Matsumoto, Randers and Kropina metrics are reobtained and extended, including the explicit computation of their fundamental tensor $g$, as well as of the conic domain for its positive-definiteness. As an example of the possibilities of our approach, we conclude with a further extension to metrics constructed from a pair $(F_1,F_2)$ of Finsler metrics. Finally, as an Appendix, in the third subsection, our computations are compared with those by Chern and Shen \cite{ChSh05} for $(\alpha,\beta)$-metrics.
Finally, we would like to emphasize that, even though our study is quite extensive, it is by no means exhaustive. There are still natural questions related to conic pseudo-Finsler metrics which have not been well studied yet and which, like some of those studied in this paper, may have non-trivial and subtle answers. Due to the increasing activity in this field, we encourage readers to pursue further developments.
\section{Generalizing Minkowski norms}\label{s1}
\subsection{Classical notions} First, let us recall the following classical concepts.
\begin{defi}\label{dnorms} Let $V$ be a real vector space of finite dimension $N\in \mathds N$ and consider a function $\|\cdot\|:V\rightarrow\mathds R$ being
\begin{enumerate}[(i)]
\item positive: $\| v \| \geq 0$, with equality if and only if $v=0$,
\item positively homogeneous: $\| \lambda v \| =
\lambda \|v\|$ for all $\lambda > 0$. \end{enumerate}
\noindent Then, $\| \cdot \|$ is called: \begin{enumerate}[(a)]
\item A {\em positively homogeneous norm}, or simply {\em norm}, if it satisfies the triangle inequality, i.e.: $\| v+w
\| \leq \| v \| + \| w \|$.
\item A {\em norm with strict triangle inequality}, if the equality in the triangle inequality is satisfied only if $v=\lambda w$ or $w=\lambda v$ for some $\lambda\geq 0$.
\item A {\em Minkowski norm}, if: (c1) $\| \cdot \|$ is smooth\footnote{ For simplicity, ``smooth'' will mean
$C^\infty$, even though $C^2$ differentiability would be enough for most purposes. } away from $0$, so that the {\em fundamental tensor field} $g$ of $\| \cdot \|$ on $V\setminus\{0\}$ can be defined as the Hessian of $\frac 12 \| \cdot \| ^2$, and (c2) $g$ is pointwise positive-definite. \end{enumerate} \end{defi} About the last definition, recall that the Hessian of any smooth function $f$ is written as ${\rm Hess}\, f(X,Y)=X(Y(f))-\nabla^0_XY(f)$ for any vector fields $X,Y$, where $\nabla^0$ is the natural affine connection of $V$. Moreover,
Minkowski norms always satisfy the strict triangle inequality (see for example \cite[Theorem 1.2.2]{BaChSh00}).
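As a concrete illustration of Definition \ref{dnorms} (a standard example, not needed in what follows; the strong convexity for $|b|<1$ is proved, e.g., in \cite[Chapter 11]{BaChSh00}), one may keep in mind the Randers-type norm on $V=\mathds R^2$:

```latex
% Randers-type Minkowski norm on V = R^2, for a fixed b with |b| < 1:
\|v\| \;=\; \sqrt{(v^1)^2+(v^2)^2} \;+\; b\,v^1 ,
\qquad v=(v^1,v^2)\in\mathds R^2 .
% Positivity: \|v\| >= |v|_{eucl} - |b| |v^1| > 0 for v != 0.
% Positive homogeneity clearly holds, but the norm is not reversible:
%   \|(1,0)\| = 1+b, while \|(-1,0)\| = 1-b.
% For |b| >= 1 positivity fails at v = (-1,0), so the conditions of the
% definition single out exactly the range |b| < 1.
```

The fundamental tensor of this kind of norm is computed, in much greater generality, in Section \ref{ls3}.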
Let us collect some properties of the fundamental tensor $g$ that come directly from its definition and the positive homogeneity. \begin{prop}\label{fundamentalprop}
Given a Minkowski norm $\|\cdot\|$ and $v\in V\setminus\{0\}$, the fundamental tensor $g_v$ is given as:
\begin{equation}\label{eg} g_v(u,w):= \frac{\partial^2}{\partial t\partial s}
G(v+tu+sw)|_{t=s=0},\end{equation} where $u,w\in V$ and
$G=\frac 12\|\cdot\|^2$. Moreover, $v\mapsto g_v$ is positively homogeneous of degree 0 (that is, $g_{\lambda v}=g_v$ for $\lambda>0$) and it satisfies \begin{align}\label{propfundtensor}
g_v(v,v)&=\|v\|^2,& g_v(v,w)&= \frac{\partial}{\partial s}G\left(v+s w\right)|_{s=0}. \end{align} Moreover, $v$ is $g_v$-orthogonal to the unit sphere $S$, and the metric $g$ is positive definite if and only if so is its restriction to $S$. \end{prop}
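As a sanity check of \eqref{eg} and \eqref{propfundtensor} (ours; it is not needed in the sequel), consider the norm associated to a Euclidean scalar product $\langle\cdot,\cdot\rangle$ on $V$:

```latex
% Euclidean case: G(v) = (1/2)<v,v>, so
g_v(u,w)=\frac{\partial^2}{\partial t\,\partial s}\,
   \tfrac12\langle v+tu+sw,\;v+tu+sw\rangle\Big|_{t=s=0}
   \;=\;\langle u,w\rangle .
% Thus g_v is independent of v (degree-0 homogeneity is trivial here), and
g_v(v,v)=\langle v,v\rangle=\|v\|^2, \qquad
g_v(v,w)=\langle v,w\rangle
        =\frac{\partial}{\partial s}\,G(v+sw)\Big|_{s=0},
% in agreement with the displayed formulas; moreover, the g_v-orthogonality
% of v to the unit sphere S is just the usual Euclidean orthogonality.
```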
Any norm can be characterized by its closed unit ball $B=\{x\in V:\|x\|\leq 1\}$ and its unit sphere or {\em indicatrix} $S=\{x\in V:\|x\|=1\}$. For the next result, recall that $V$ is endowed with a natural affine connection $\nabla^0$ (as so is any vector
or affine space) and, then, any smooth hypersurface admits a
second fundamental form $\sigma^\xi$ for each
{\em transverse}\footnote{ Even though a Minkowski norm would allow one to define locally a normal direction to the hypersurface (recall for example \cite[Def. 3.2.2, Cor. 3.2.3 ]{Th}), this will not play any role here.} vector field $\xi$ (see \eqref{nablacanonica} below).
\begin{prop}\label{pnormball} Let $\| \cdot \|:V\rightarrow \mathds R$ be a norm and $B$ its closed unit ball. Then \begin{enumerate}[(i)] \item Vector $0$ belongs to the interior of $B$, $0\in \ri B$.
\item $B$ is compact.
\item $B$ is convex (i.e., $u,v\in B$ implies $ \lambda u + (1-\lambda)v\in B$, for all $\lambda\in [0,1]$). \end{enumerate}
Moreover, \begin{enumerate}[(i)]
\item[(iv)] the norm $\| \cdot \|$ satisfies the strict triangle inequality iff $B$ is strictly convex (i.e. $u,v\in B$, with $u\neq v$, implies $ \lambda u + (1-\lambda)v\in \ri B$, for all $\lambda\in (0,1)$).
\item[(v)] the norm $\| \cdot \|$ is a Minkowski one iff $S=\partial B$ is a smooth hypersurface embedded in $V$ and it is strongly convex (i.e. the second fundamental form $\sigma^\xi$ of $S$ with respect to some, and then any, transverse vector $\xi$ pointing out to $\ri B$ is positive definite). \end{enumerate} \end{prop} \begin{proof} Standard arguments as in \cite{BaChSh00, Th} are used. Namely, assertions in $(i)$ and $(ii)$ follow from the fact that all the norms on $V$ are equivalent \cite[p. 29]{Th}, and $(iii)$, as well as ($(iv)\Rightarrow$), is straightforward from the triangle inequality applied to $ \lambda u + (1-\lambda)v$. For
($(iv)\Leftarrow$), in the non-trivial case when $\{u, v\}$ is linearly independent, put $\tilde{u}=u/\| u \|, \tilde{v}=v/\| v
\|\in \partial B$ so that $$
z=\frac{\| u \|}{\| u \|+ \| v
\|}\tilde{u} + \frac{\| v \|}{\| u \|+ \| v
\|}\tilde{v} \in \ri B. $$
Then, $1>\| z \| =\| u +v\|/ (\| u \|+ \| v \|)$, as required.
For ($(v)\Rightarrow$), recall that positive homogeneity implies that $1$ must be a regular value of $\| \cdot \|$ and, so, $S$ is a closed smooth hypersurface in $V$. Moreover, the opposite $\xi$ of the position vector is transverse to $S$ and points towards $\ri B$. So, for any vector fields $X, Y$ tangent to $S$, the decomposition of the canonical connection $\nabla^0$ on $V$, \begin{equation}\label{nablacanonica}
\nabla^0_XY =\nabla^\xi_XY + \sigma^\xi(X,Y)\xi, \end{equation}
holds under standard conventions. Putting again $G=\frac 12\|\cdot\|^2$, then \begin{equation}\label{esconv0} g(X,Y)={\rm Hess}\, G(X,Y)= -\nabla^0_XY(G),\end{equation} and using (\ref{nablacanonica}), \begin{equation}\label{esconv} g(X,X) =-\nabla^\xi_XX(G) - \sigma^\xi(X,X)\xi(G)=- \sigma^\xi(X,X)\xi(G). \end{equation} As positive homogeneity implies that $-\xi(G)>0$, we have that $g(X,X)>0$ iff $\sigma^\xi(X,X)>0$. For ($(v)\Leftarrow$), by the last statement in Proposition \ref{fundamentalprop}, we only have to prove that the restriction of $g$ to the indicatrix $S$ is positive definite. Moreover, notice that the positive homogeneity implies that $-\xi$ is transverse and
$\|\cdot\|$ is smooth away from $0$. So, \eqref{esconv} can be applied again and we are done. \end{proof} \subsection{Generalized notions} Next, let us generalize the notion of Minkowski norm.
\begin{defi}\label{dnormsextended}
A {\em Minkowski pseudo-norm} on $V$ is a map $\| \cdot
\|:V\rightarrow \mathds R$ which satisfies $(i)$, $(ii)$ and $(c1)$ in Def. \ref{dnorms} (i.e., $g$ is not necessarily positive definite).
A {\em Minkowski conic norm} on $V$ is a map $\| \cdot
\|:A\rightarrow\mathds R$, where $A\subset V$ is
a {\em conic domain} (i.e., $A$ is open, non-empty and satisfies that if $v\in A$, then $\lambda v\in A$ for all $\lambda>0$), which satisfies the conditions $(i)$, $(ii)$, $(c1)$ and $(c2)$ in Def. \ref{dnorms} for all $v\in A$
(i.e., $g$ is positive definite, but the conic domain $A$ may not be all of $V$ and, in this case, it excludes the vector $0$).
A {\em Minkowski conic pseudo-norm} on $V$ is a map $\| \cdot
\|:A\rightarrow\mathds R$, which satisfies $(i)$, $(ii)$ and (c1) in Def. \ref{dnorms} for all
$v\in A$, where $A\subset V$ is a conic domain (i.e., the two previous extensions of the notion of Minkowski norm are allowed simultaneously). \end{defi}
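The following two simple examples on $V=\mathds R^2$, which can be checked by a direct computation (they are ours, though the second one is of the classical Kropina type \cite{Kro59,Mat72,Shi78}), may help to fix the notions of Definition \ref{dnormsextended}:

```latex
% (1) A Minkowski pseudo-norm which is not a Minkowski norm:
\|v\|=\big((v^1)^4+(v^2)^4\big)^{1/4}, \qquad A=V=\mathds R^2 .
% It satisfies (i), (ii) and (c1), but at v = (1,0) one computes
%   g_v = diag(1,0),
% which is degenerate; so (c2) fails exactly on the coordinate axes.
%
% (2) A Minkowski conic norm (of Kropina type) defined only on a half-plane:
\|v\|=\frac{(v^1)^2+(v^2)^2}{v^2}, \qquad A=\{v\in\mathds R^2 : v^2>0\} .
% At v = (0,1) one computes g_v = diag(2,1); one can check that g is positive
% definite on all of A, while \|.\| admits no continuous extension with
% positive values to the directions with v^2 <= 0.
```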
For simplicity, we will typically assume that $A$ is connected. As in the case of norms, we do not assume that
$\|\cdot\|$
is reversible, and
we can define two types of {\em affine (normed) balls}, depending on the order
in which the subtraction is computed. Namely, for any Minkowski conic pseudo-norm the {\em forward and backward
affine open balls} of center $v$
and radius $r> 0$ are defined, respectively, as
\begin{align}\label{eopenballs}
B^+_v(r)&=\{x\in v+A : \|x-v\|<r\}& \text{and} &&B^-_v(r)=\{x\in v-A : \|v-x\|<r\},
\end{align}
where $v\pm A:=\{v\pm w: w\in A\}$.
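For a non-reversible norm these two families of balls really differ; as a toy illustration (ours), take $V=\mathds R$ with the Minkowski norm $\|v\|=|v|+bv$ for a fixed $b\in(0,1)$:

```latex
% One-dimensional Randers-type norm: \|v\| = |v| + b v, with A = V = R,
% so that \|v\| = (1+b)v for v > 0 and \|v\| = (1-b)(-v) for v < 0. Then
B^+_0(r)=\Big(-\tfrac{r}{1-b},\;\tfrac{r}{1+b}\Big),
\qquad
B^-_0(r)=\Big(-\tfrac{r}{1+b},\;\tfrac{r}{1-b}\Big)=-B^+_0(r),
% two different (asymmetric) intervals around 0, even though each family
% separately still generates the usual topology of R.
```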
In the case that $g$ is not positive-definite, the behavior of these balls may differ dramatically from the behavior of the metric balls obtained from a length space (see Example \ref{ex4}
and Remark \ref{pseudoaffineballs} below), even though the continuity of $\|\cdot\|$ allows one to ensure that they are open subsets. For closed balls $\bar B^\pm_v(r)$, the non-strict inequality $\leq$ is used instead of $<$ in \eqref{eopenballs}; recall that these balls are closed in $v\pm A$ but not in $V$ (except if $V=A$). For the forward and backward spheres $S_v^\pm(r)$, equalities replace the inequalities in \eqref{eopenballs},
and the map \begin{equation}\label{espheres} \varphi:S^+_v(r)\rightarrow S^-_v(r) \quad \quad w\mapsto 2v-w \end{equation} is a homeomorphism. As in the case of norms, we will work, for simplicity, with $B=\bar B_0^+(1)$ and $S=S_0^+(1)$ (which is the boundary of $B$ in $A$).
Recall also that $0\not\in A$ (and thus $0\not\in B$) except in the case that $A=V$, i.e., when $\parallel \cdot \parallel$ is a pseudo-norm.
\begin{rem}\label{r26}
Observe that Proposition \ref{fundamentalprop} is extended directly to this case due to its local nature and the positive homogeneity. Moreover, the expressions \eqref{nablacanonica} and \eqref{esconv} are also directly transplantable to Minkowski conic pseudo-norms. \end{rem}
The following technical properties must be taken into account.
\begin{prop}\label{plema}
Let $\parallel \cdot \parallel: A\rightarrow \mathds R$ be a Minkowski conic pseudo-norm. Then \begin{enumerate}[(i)] \item the indicatrix $S$ is a hypersurface embedded in $A$ as a closed subset, and the position vector at each point is transverse to $S$,
\item if $\parallel \cdot \parallel$ is a pseudo-norm ($A=V$), $S$ is diffeomorphic to a sphere. \end{enumerate}
\end{prop} \begin{proof} For $(i)$,
positive homogeneity implies that the differential of $\| \cdot
\|$ does not vanish on the position vector and, so, $S$ is the inverse image of the regular value $1$.
For $(ii)$, consider the norm $\| \cdot \|^E$ associated to any auxiliary Euclidean product on $V$, and let $S^E$ be its unit sphere, and observe that the map $S^E\rightarrow S, v\mapsto v/\|
v \|$ and its inverse are smooth (as so is $\| \cdot \|$). \end{proof}
\begin{rem}
As in the proof of the previous proposition, when necessary, we will consider an auxiliary norm $\| \cdot \|^E$ associated to some Euclidean product $g^E$ on $V$, and the results obtained will be independent of the choice of $g^E$. In particular, $V$ can be regarded as a Riemannian manifold with Levi-Civita connection $\nabla^0$, the spheres $S_v^\pm(r)$ as Riemannian submanifolds with the metric induced by $g^E$, and the map \eqref{espheres} as an isometry. The $\|\cdot \|^E$-open balls will be denoted with a superscript $E$, say as in $B^E_v(r)$. \end{rem}
Now, let us focus on Minkowski conic norms. First, we will show that the forward and backward open balls constitute a subbasis for the topology (to check the optimality of this result, see Examples \ref{ex2a} and \ref{ex3}, and Proposition \ref{ppseudi} below).
\begin{prop}\label{opennormballs}
Let $\|\cdot\|:A\rightarrow \mathds R$ be a Minkowski conic norm with $A$ connected. Then, the collection of subsets \[\{B_{v_1}^+(r_1)\cap B_{v_2}^-(r_2):\text{$v_1,v_2\in V,$ $r_1,r_2>0$}\}\] is a topological basis of $V$. \end{prop}
\begin{proof} By using translations, it is enough to show that they yield a topological basis around $0 \in V$.
Choose any $v_0\in S$ and an auxiliary Euclidean product $g^E$ such that
$v_0$ is unit and orthogonal to $S$ at $v_0$ with respect to $g^E$. Let $\theta_{v_0}(v)$ be the $g^E$-angle
between $v\in V\setminus \{0\}$ and $v_0$.
Fix $\theta_0\in (0,\pi/2)$ satisfying the following two conditions: every $v\in V$ with $\theta_{v_0}(v)<\theta_0$ lies in $A$ and, for every such $v$ in $S$, the minimum of the $g^E$-principal curvatures of $S$ at $v$ (in the inward direction to $B$) is greater than some constant $k>0$. Such a constant $k$ exists by the positive definiteness of $g$, putting $\xi$ in \eqref{esconv} as the inward $g^E$-unit normal (in particular, $\xi_{v_0}=-v_0$). Then, the $g^E$-round sphere through $v_0$ with radius $r_k=1/k^2$ and center $x^+$ in the $\xi_{v_0}$ direction remains outside of $B \cap \{v\in V: \theta_{v_0}(v)<\theta_0\}$. Observe that, by the isometry $\varphi$ in \eqref{espheres}, the analogous properties hold for $-v_0$ and $S^-_0(1) (=\varphi(S))$ with the same value of $k$ (and thus of $r_k$), yielding a second center $x^-$.
Now, for any neighborhood $U$ of 0, there exists some $\varepsilon>0$ such that $x^+_\varepsilon=x^+-(1-\varepsilon) v_0$ and $x^-_\varepsilon=x^-+(1-\varepsilon) v_0$ satisfy $$B^E_{x^+_\varepsilon}(r_k) \cap B^E_{x^-_\varepsilon}(r_k)\subset U,$$ and the required property follows because, from the construction, $$0\in B^+_{(\varepsilon-1)v_0}(1) \cap B^-_{(1-\varepsilon)v_0}(1) \subset B^E_{x^+_\varepsilon}(r_k) \cap B^E_{x^-_\varepsilon}(r_k).$$ \end{proof}
\begin{rem}\label{fundineqRem} We emphasize that in the Minkowski conic case the strict triangle inequality still holds for every $v_1,v_2\in A$ such that $t v_1+(1-t)v_2\in A$ for $t\in(0,1)$
(see the proof of parts
$(iv)$ and $(v)$ in Proposition \ref{pnormball}, or Theorem \ref{tfunda} below); in particular, it holds for any $v_1, v_2\in A$ if $A$ is
convex.
Analogously, if additionally $v_1\neq 0$ then
$\|\cdot\|$ satisfies the {\it Fundamental Inequality}:
\[\frac{\partial}{\partial t} \big(\|v_1+t v_2\|\big)|_{t=0}\leq \|v_2\|,\]
or equivalently, \begin{equation}\label{fundineq}
g_{v_1}(v_1,v_2)\leq \|v_1\| \|v_2\|, \end{equation} (use Eq. \eqref{propfundtensor}), where the equality holds if and only
if $v_2=\lambda v_1$ for some $\lambda\geq 0$ (see \cite[Section 1.1]{ChSh05}).
With more generality, the (strict) triangle inequality for $v_1,v_2\in A$ in a Minkowski conic pseudo-norm holds when: \begin{enumerate}[(i)] \item $t v_1+(1-t)v_2\in A$, \item $g$ is positive definite in the direction ${t v_1+(1-t)v_2}$, \end{enumerate} for every $t\in(0,1)$. As, essentially, the triangle inequality implies the fundamental one (see \cite[p. 9]{BaChSh00}), conditions (i) and (ii) are also sufficient to ensure this inequality on $v_1$ and $v_2$. In particular, if $g_{v_1}$ is positive-definite then both the triangle inequality and the fundamental one hold in a neighborhood of $v_1$. Moreover, these inequalities also hold for any Minkowski conic pseudo-norm under more general hypotheses related to the possible convexity of the indicatrix somewhere. For example, fixing $v_1\in A$, the fundamental inequality holds for any $v_2\in A$ if the hyperplane $H$ tangent to the indicatrix $S$
at $v_1/\| v_1
\|$ touches the closed unit ball only at $v_1/\| v_1 \|$ (this implies that $g_{v_1}$ is positive semi-definite and, even when it is not positive definite, the
directions $g_{v_1}$-orthogonal to $v_1$ are those tangent to $H$). Indeed, given $v_1,v_2$, we can assume that $v_2=\lambda v_1+w$, with
$\lambda\geq 0$ and $g_{v_1}(v_1,w)=0$ (if $\lambda<0$, the fundamental inequality holds trivially). Observe that the hypothesis on the hyperplane (transplanted to the parallel hyperplane at $\lambda v_1$) yields $\lambda\| v_1\|\leq \|v_2\|$ with equality only when $\lambda v_1 =v_2$. Then
\[g_{v_1}(v_1,v_2)=\lambda \|v_1\|^2\leq \|v_1\|\|v_2\|,\] and the required fundamental inequality follows. \end{rem}
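In the Euclidean case, \eqref{fundineq} is just the Cauchy--Schwarz inequality; the following check (ours) makes the equality case explicit:

```latex
% Euclidean case, where g_{v_1}(u,w) = <u,w> for every v_1 != 0:
% the fundamental inequality \eqref{fundineq} reads
g_{v_1}(v_1,v_2)=\langle v_1,v_2\rangle
  \;\leq\; |v_1|\,|v_2| \;=\; \|v_1\|\,\|v_2\| ,
% i.e., the Cauchy--Schwarz inequality. Equality holds iff v_2 = \lambda v_1
% with \lambda >= 0: for linearly independent vectors the inequality is
% strict, and for \lambda < 0 the left-hand side is negative.
```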
The word ``Minkowski'' in Definition \ref{dnormsextended} comprises two properties of the defined objects: (i) they are smooth away from 0, and (ii) in the case of (conic) norms, the fundamental tensor $g$ is positive definite. Recall that, for a classical norm as in part $(a)$ of Definition \ref{dnorms}, one has only the weaker properties of continuity and the triangle inequality, the former deduced from the latter. This can be extended to the conic case but, as the triangle inequality is involved, the previous remark suggests imposing convexity on the domain $A$ ---in particular, $A$ will be connected. Recall that, under this assumption, if there exists a vector $v\in V$ such that $v,-v\in A$, then $0\in A$ and $A=V$.
\begin{defi}\label{dcontinuousnorms} Let $A$ be a convex conic open subset of $V$. We say that a map $\| \cdot
\|:A\rightarrow\mathds R$ satisfying $(i)$ and $(ii)$ in Def. \ref{dnorms} for all $v\in A$ is:
\begin{enumerate}[(i)] \item a {\em conic norm} on $V$ if it satisfies the triangle inequality ($(a)$ in Def. \ref{dnorms}), \item a {\em conic norm with strict triangle inequality} if it satisfies
$(b)$ in Def. \ref{dnorms}. \end{enumerate} \end{defi}
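Two elementary examples on the convex conic domain $A=\{(x,y)\in\mathds R^2 : x>0,\ y>0\}$ (ours, directly checkable) illustrate how these notions differ from the Minkowski ones:

```latex
% (1) \|(x,y)\| = x + y is a conic norm: the triangle inequality holds (with
%     equality for every pair of vectors in A), hence it is not strict, and
%     the fundamental tensor is everywhere degenerate,
g \equiv \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix},
%     so \|.\| is not a Minkowski conic norm.
%
% (2) \|(x,y)\| = \max\{x,y\} is a conic norm which is continuous but not
%     smooth along the diagonal x = y, so conic norms need not even be
%     Minkowski conic pseudo-norms.
```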
\begin{prop}\label{pcontinuousnorms}
Any conic norm $\| \cdot \|:A\rightarrow\mathds R$ is continuous, its open forward and backward balls are open subsets of $V$, and its indicatrix $S$ is a topological hypersurface, which is closed as a subset of $A$ and homeomorphic to an open subset of the usual sphere. \end{prop} \begin{proof}
Let us show first that the forward and the backward affine balls are open. Given $x\in B^+_v(r)$, as $A$ is open, we can fix a basis $e_1,e_2,\ldots,e_N$ of $V$ contained in $A$, and such that $z:=x-v=\sum_{i=1}^N z^ie_i$ with $z^i>0$ for all $i=1,\ldots,N$. Denote $C=\max \{\|e_1\|,\|e_2\|,\ldots, \|e_N\| \}$ and $y=\sum_{i=1}^N y^i e_i$ with $y^i>0$ for all $i$. By the triangle inequality, if $0<\lambda <1$, the open subset
\[O_\lambda=\{v+\lambda (x-v)+y: C \sum_{i=1}^N y^i <r-\lambda \|x-v\|\}\]
is contained in $B^+_v(r)$. Moreover, $x\in O_\lambda$ when $(1-\lambda)z$ can be chosen as one such $y$, i.e., whenever
\[(1-\lambda)C \sum_{i=1}^N z^i <r-\lambda\|x-v\|.\] This holds for $\lambda$ close to one, as required. The backward case is analogous.
For the continuity, given $x_0\in A$ and $0<\varepsilon<1$, let us find an open neighborhood $\Omega\subset A$ of $x_0$ such that
$|\|x\|-\|x_0\||<\varepsilon$ for all $x\in \Omega$. Choose $y_\pm
= (1\pm \delta)x_0$ with $\delta=\frac{\varepsilon}{2\|x_0\|}$
and put $\Omega=B^+_{y_-}(\varepsilon)\cap B^-_{y_+}(\varepsilon)$. This subset is open by the first part and, if $x\in \Omega$:
\[\|x\|\leq \|x-y_-\|+\|y_-\|< \varepsilon+\|x_0\|, \quad
\|x_0\|\leq \|y_+\|\leq \|y_+-x\|+\|x\|< \varepsilon+\|x\|\] as required.
For the last assertion, consider an auxiliary Euclidean norm
$\|\cdot\|^E$ with unit sphere $S^E$ and put $S_A^E:= S^E \cap A$. Then, the map $$
\rho: S^E_A \rightarrow S, \quad \quad v\mapsto v/\| v \|, $$ is a homeomorphism, and the required properties of $S$ follow. \end{proof}
\begin{rem}\label{rcontinuousnorms}
In the case of (conic) pseudo-norms (i.e., only $(i)$ and $(ii)$ in Def. \ref{dnorms} are fulfilled) the continuity does not follow, because there is no triangle inequality. So, $S$ may fail to be closed in $A$ (or to be a topological hypersurface), and the affine open balls may fail to be open as subsets of $V$. Throughout this paper all the conic pseudo-norms will be Minkowski, that is, we will assume that they are smooth away from $0$. Nevertheless, we will also discuss next those which satisfy the (strict or not) triangle inequality, even when $g$ is only positive semi-definite and, so, they are not Minkowski conic norms.
\subsection{Characterization through the unit ball}
In analogy to Proposition \ref{pnormball}, the unit balls can be described as follows.
\begin{prop} \label{pfunda} Let $\parallel \cdot \parallel: A\rightarrow \mathds R$ be a Minkowski conic pseudo-norm. Then \begin{enumerate} \item[(a1)] $B$ is a closed subset of $A$ which intersects all the directions $D_v:=\{\lambda v: \lambda >0\}$, $v\in A$.
\item[(a2)] $B$ is starshaped from the origin, i.e., $v\in B$ implies $\lambda v\in B$ for all $\lambda\in(0,1)$.
\item[(a3)] The boundary $S$ of $B$ in $A$ is a smooth hypersurface and a closed subset of $A$ such that the position vector at each $v\in S$ is transversal (not tangent to $S$).
\item[(a4)] For each $v\in B\setminus\{0\}$ there exists a (necessarily unique)\footnote{It is obvious that the uniqueness of $\lambda$ follows from the definition of Minkowski conic pseudo-norm. However, we point out here that it also follows from the previous three items, to stress the independence of the hypotheses in the next theorem.} $\lambda>0$ such that $v/\lambda \in S$. \end{enumerate} \end{prop} \begin{proof} All the properties are straightforward from the definition of $B$ and $S$ (for $\it (a3)$, use part $(i)$ of Proposition \ref{plema}). \end{proof} Conversely, the unit balls characterize the different types of conic pseudo-norms. In fact, the following theorem (which also strengthens Proposition \ref{pnormball}) gives a very intuitive picture of all the types of Minkowski conic pseudo-norms defined above. Recall that the classic notions of convexity for $B$ (i.e., as a neighborhood) were included in Proposition \ref{pnormball}, and the notions of convexity for $S$ (its boundary hypersurface) are included in the next theorem.
\begin{thm} \label{tfunda} Let $A$ be a conic domain of $V$ and $B$ a subset of $A$ which satisfies all the properties $\it (a1)$ to $\it (a4)$ in Proposition \ref{pfunda}. Then, the map $$
\| \cdot \|_B: A\rightarrow \mathds R , \quad \quad v\mapsto \mbox{{\rm Inf}}\, \{\lambda >0: v/\lambda\in B\}, $$ is a Minkowski conic pseudo-norm and its closed unit ball is equal to $B$. Moreover, \begin{enumerate}
\item[(i)] $\| \cdot \|_B$ is a Minkowski pseudo-norm iff $S$ is homeomorphic to a sphere. In this case, $0\in \ri B$ and
$\|\cdot\|_B$ is continuous at $0$.
\item[(ii)] $\| \cdot \|_B$ is a Minkowski conic norm iff $S$ is strongly convex. \end{enumerate}
Assume now that $A$ is convex. Then \begin{enumerate}
\item[(iii)] $\| \cdot \|_B$ is a conic norm iff $B$ is convex and iff $S$ is convex (its second fundamental form with respect to the inner normal is positive semi-definite).
\item[(iv)] $\| \cdot \|_B$ is a conic norm with strict triangle inequality iff $B$ is strictly convex and iff $S$ is strictly convex (i.e. the hyperplane tangent to $S$ at each point $v_0$ only touches $B$ at $v_0$).
\end{enumerate}
\end{thm}
\begin{proof}
The map $\| \cdot \|_B$ is well defined from property
$\it (a1)$ (in fact, $\| v \|_B$ can be defined as the unique $\lambda$ in $\it (a4)$ if $v\neq 0$). Then, it is positive
and positively homogeneous. Its smoothness follows from the smoothness of $S$ and the transversality ensured in $\it (a3)$, as these conditions characterize when the bijective map $\mathds R ^+ \times S\rightarrow A\setminus\{0\}, (t,v)\mapsto tv$ is a diffeomorphism. Moreover, $B$ is the closed unit ball by construction and $\it (a2)$.
For $(i)$, the implication to the right follows from part $(ii)$ of Proposition \ref{plema}, and the converse holds because the compactness of $S$ implies that $A=V$.
Therefore, assuming that $S$ is a sphere, the properties $\it (a3)$ and $\it (a4)$ (or the diffeomorphism given in the previous paragraph) imply that $0$ belongs to the inner domain delimited by $S$. Therefore, $B$ is the inner domain (recall $\it (a2)$ and $\it (a3)$), and $0\in \ri B$. For the continuity, if $\{x_n\}$ converges to $0$
but $\|x_n\|_B$ does not converge to $0$,
we can assume that, up to a subsequence, $\|x_n\|_B$ converges to some $c\in(0,+\infty]$. Then, up to a further subsequence, $x_{n}/\|x_{n}\|_B$ converges to some $y\in S$,
and so $x_{n}\to c y\not=0$ if $c<+\infty$ (while $\{x_n\}$ is unbounded if $c=+\infty$), which gives a contradiction.
For $(ii)$, recall that positive homogeneity implies that $1$ must be a regular value of $\| \cdot \|_B$ and, so, $S$ is a closed smooth hypersurface in $A$. Moreover, the opposite $\xi$ of the position vector is transverse to $S$ and points towards $\ri B$.
Putting
$G=\frac 12\|\cdot\|_B^2$ and using \eqref{esconv} (recall Remark \ref{r26}), $\sigma^\xi(X,X)>0$ if and only if $g(X,X)>0$ for every vector field $X$ tangent to $S$. This statement, together with the last one in Proposition \ref{fundamentalprop} (recall again Remark \ref{r26}), proves $(ii)$.
For $(iii)$ and $(iv)$, even though the equivalences between the (strict or not) convexities of $B$ and $S$ are known (see for example \cite{BCGS, Sa} and references therein), we will prove the full cyclic implications for the sake of completeness. Moreover, the first implications to the right in $(iii)$ and $(iv)$ are straightforward from the triangle inequality applied to
$ \lambda u + (1-\lambda)v$. For the second implication to the right, consider a straight line $\mathbf r$ tangent to $S$ at some $v_0\in S$ with direction $v$, and let $(a,b)\subset\mathds R$ be the interval of points $t\in \mathds R$ such that $v_0+t v\in A$ (recall that $A$ is convex). Then the function $f:(a,b)\rightarrow \mathds R$, given by $f(t)=G(v_0+t v)-\frac12$, where $G=\frac 12\|\cdot\|^2_B$, satisfies $f(0)=0$, $\dot f(0)=0$ (as $dG_{v_0}(v)=0$) and $\sigma^\xi_{v_0}(v,v)= {\rm Hess}\, G(v,v)=\ddot f(0)$ (recall \eqref{esconv0} and \eqref{nablacanonica} above). Now consider the plane $\pi={\rm span}\{v_0,v\}$. If $\ddot f(0)<0$, then the intersection $\pi\cap S$ must lie, close to $v_0$, in the connected component determined by the line $\mathbf r$ that does not contain $0$. Easily, this contradicts the convexity of $B$ and proves the implication in part $(iii)$.
For $(iv)$, if $\mathbf r$ touches $\pi\cap S$ at two points,
then $S$ must contain a whole segment by convexity, so it cannot be strictly convex. Reasoning with all the straight lines tangent to $S$ at $v_0$, we conclude that the tangent hyperplane at $v_0$ meets $S$ only at $v_0$.
To close the cyclic implications in $(iii)$ and $(iv)$, we have to prove that if $S$ is (strictly) convex, then $\|\cdot\|_B$ satisfies the (strict) triangle inequality. Given $u,v\in B$
(linearly independent), consider $\tilde{u}=u/\|u\|_B$,
$\tilde{v}=v/\|v\|_B\in S$ and define $\phi:[0,1]\rightarrow \mathds R$ as \[\phi(t)=G(t \tilde{u}+(1-t) \tilde{v})=G(\tilde{v}+t(\tilde{u}-\tilde{v})).\]
If we denote $y=\tilde{v}+t(\tilde{u}-\tilde{v})$, then $\ddot\phi(t)=g_y(\tilde{u}-\tilde{v},\tilde{u}-\tilde{v})\geq 0$ for $t\in (0,1)$ and $\phi(0)=\phi(1)=\frac 12$. It is easy to prove that either $\phi$ is constantly equal to $\frac 12$ or $\phi(t)<\frac 12$ for $t\in (0,1)$. In particular, for $t=\|u\|_B/(\|u\|_B+\|v\|_B)$ we obtain $$
z=\frac{\| u \|_B}{\| u \|_B+ \| v
\|_B}\tilde{u} + \frac{\| v \|_B}{\| u \|_B+ \| v
\|_B}\tilde{v} \in B. $$
Then, $1\geq\| z \|_B =\| u +v\|_B/ (\| u \|_B+ \| v \|_B)$, as required. Regarding the strict inequality, observe that if $S$ is strictly convex, $\phi$ cannot be constantly equal to $\frac 12$.
\end{proof}
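The convex-combination step at the end of the proof can be illustrated numerically. The sketch below is our own check, not part of the text: it uses the smooth Minkowski norm $\|(x,y)\|=(x^4+y^4)^{1/4}$, whose unit ball is convex, and verifies that the vector $z$ always lands in the unit ball, which is exactly the triangle inequality:

```python
import random

def norm4(v):
    # smooth Minkowski norm on R^2 with convex unit ball: ||(x,y)|| = (x^4 + y^4)^(1/4)
    x, y = v
    return (x**4 + y**4) ** 0.25

random.seed(0)
for _ in range(1000):
    u = (random.uniform(-1, 1), random.uniform(-1, 1))
    v = (random.uniform(-1, 1), random.uniform(-1, 1))
    nu, nv = norm4(u), norm4(v)
    if nu < 1e-9 or nv < 1e-9:
        continue  # skip (nearly) zero vectors
    # convex combination z of u/||u|| and v/||v|| with weights ||u||, ||v||;
    # note that z = (u + v) / (||u|| + ||v||)
    t = nu / (nu + nv)
    z = (t * u[0] / nu + (1 - t) * v[0] / nv,
         t * u[1] / nu + (1 - t) * v[1] / nv)
    assert norm4(z) <= 1 + 1e-12                      # z lies in the unit ball B
    s = (u[0] + v[0], u[1] + v[1])
    assert norm4(s) <= nu + nv + 1e-12                # triangle inequality
```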
\begin{rem}\label{rojo} In the previous theorem, if the hypothesis $\it (a4)$ of Proposition \ref{pfunda} were not imposed, then $\| \cdot \|_B$
might fail to be positive. That is, $\| v \|_B\geq 0$ would still hold for all $v\in A$, but the equality could be attained at some $v\neq 0$ and, thus, along its whole {\em degenerate} direction $D_v$. This could happen even if $A=V$ and, in this case, $B$ would not be compact (nor $S$ homeomorphic to a sphere). Even though this possibility could also be admitted, we prefer not to include it. Equivalently, if degenerate directions appear, $A$ is supposed to be redefined so as to exclude them. \end{rem} Finally, we consider in particular the case of pseudo-norms.
\begin{prop}\label{ppseudi}
Let $\| \cdot \|:V\rightarrow \mathds R$ be a Minkowski pseudo-norm. Then \begin{enumerate}[(i)] \item the affine open forward (resp. backward) balls
constitute a basis for the natural topology of $V$,
\item if $\| \cdot
\|$ is not a Minkowski norm, the fundamental tensor field $g$ is degenerate at some direction. \end{enumerate} \end{prop}
\begin{proof} Choose an auxiliary Euclidean norm $\| \cdot \|^E$ and denote by $S^E$ its unit sphere.
For $(i)$, the compactness of the unit spheres of $\| \cdot \|^E$
and $\| \cdot \|$ implies that each ball for one (pseudo-)norm contains small balls for the other one.
For $(ii)$, it is well-known that, at any point $v_0\in S$ which is a relative maximum of the $\| \cdot \|^E$-distance to the origin, the second fundamental form $\sigma^\xi$ with respect to $\xi=-v_0$ must be positive definite (see
\cite[Ch VII, Prop. 4.6]{KN}). Thus, so is $g$ at $v_0$ (recall Eq. \eqref{esconv} and Remark \ref{r26}, together with the last statement of Proposition \ref{fundamentalprop}) and, as we assume that $\| \cdot \|$ is not a Minkowski norm, $g$ cannot be positive definite everywhere; by continuity, there must then exist some $v_1\in V$ where $g$ is degenerate (see also \cite{Lovas}).
\end{proof}
\begin{rem} \label{rmpseudonorm} These two properties cannot be extended to Minkowski conic norms ---this is trivial for $(ii)$, and see Example \ref{ex3} below for $(i)$.
\end{rem}
\subsection{Some simple examples}\label{sexamples} Theorem \ref{tfunda} provides a simple picture of Minkows\-ki norms and their generalizations in terms of the closed unit ball $B$ and the indicatrix $S$. Next, we consider $\mathds R^2$
and construct illustrative examples of the generalizations of Minkowski norms stated above ---these examples extend easily to higher-dimensional vector spaces.
Consider a curve $c: I \subset \mathds R \rightarrow \mathds R^2$, $c(\theta)=r(\theta)(\cos \theta , \sin \theta )$, which does not cross the origin and is smoothly parametrized by its angular polar coordinate $\theta \in I$. For each $\theta\in \mathds R$ consider the radial half line $$ l_\theta=\{ r(\cos \theta, \sin \theta): r>0 \}. $$ Recall that $c$ crosses each $l_\theta$, $\theta \in I$, transversally, and let $n(\theta)$ be the unit normal vector characterized by $\langle n(\theta),\dot c(\theta)\rangle_0 =0, -\langle n(\theta), c(\theta)\rangle_0 >0$, where $\langle \cdot ,\cdot \rangle_0$ is the natural scalar product on $\mathds R^2$. Put also $S=c(I)$ and \begin{equation}\label{hatg} \hat g(\theta)= \langle \ddot c(\theta), n(\theta)\rangle_0 =\frac{2\dot r^2+r(r-\ddot r)}{\sqrt{r^2+\dot r^2}}. \end{equation}
Recall that the sign of $\hat g(\theta)$ is equal to the sign of the second fundamental form of $S$ at $c(\theta)$ with respect to the direction $\xi=-c(\theta)$, and it is also equal to the sign of the fundamental tensor field $g$ on the non-zero vectors of $T_{c(\theta)}S$.
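For concrete curves, $\hat g$ can be evaluated numerically from \eqref{hatg}. The following sketch is an illustration under our own choice of test curves (it is not taken from the text), approximating the derivatives of $r$ by finite differences; for the circle $r\equiv 1$ it returns $\hat g\equiv 1$, and for an ellipse it is positive everywhere, as convexity predicts:

```python
import math

def hat_g(r, theta, h=1e-5):
    # formula (hatg): (2 r'^2 + r (r - r'')) / sqrt(r^2 + r'^2), derivatives in theta,
    # with r' and r'' approximated by central finite differences of step h
    r0 = r(theta)
    dr = (r(theta + h) - r(theta - h)) / (2 * h)
    ddr = (r(theta + h) - 2 * r0 + r(theta - h)) / h**2
    return (2 * dr**2 + r0 * (r0 - ddr)) / math.sqrt(r0**2 + dr**2)

# circle r = 1: hat_g = 1 at every angle
assert abs(hat_g(lambda t: 1.0, 0.7) - 1.0) < 1e-6

# ellipse x^2/4 + y^2 = 1 in polar form: hat_g > 0 everywhere (convex indicatrix)
ellipse = lambda t: 2.0 / math.sqrt(1.0 + 3.0 * math.sin(t) ** 2)
assert all(hat_g(ellipse, k * math.pi / 12) > 0 for k in range(24))
```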
Let us now construct some examples.
\begin{exe}\label{ex1} Assume that $c$ is a smooth closed curve, say, $I=[0,2\pi]$, and that $c$, as well as all its derivatives, agree at $0$ and at $2\pi$. By
Theorem
\ref{tfunda}, the interior region $B$ of $S$ is the unit ball of a Minkowski pseudo-norm $\|\cdot\|_B$. Clearly, $\hat g(\theta)\geq 0$ everywhere iff $S$ is convex, in which case $\|\cdot\|_B$ becomes a norm; moreover, if $\hat g(\theta)$ vanishes only at isolated points, then $S$ is strictly convex and $\|\cdot\|_B$ satisfies the strict triangle inequality.
The strict inequality $\hat g >0$ characterizes when
$\|\cdot\|_B$ is a Minkowski norm. \end{exe}
\begin{exe}\label{ex2} Assume that $I=(\theta_-,\theta_+)$ with $\theta_+-\theta_-<\pi$.
Now, $A=\cup_{\theta\in I}l_\theta$ is a convex conic domain of $\mathds R^2$; take the closed subset $B$ of $A$ delimited by $l_{\theta_-}$, $l_{\theta_+}$ and $S$. Again, $B$ is the closed unit ball of a Minkowski conic pseudo-norm
$\|\cdot\|_B$, which is a Minkowski conic norm
iff $\hat g>0$ (all the items of Theorem \ref{tfunda} remain
applicable).
If $c$ admits a smooth extension to $\theta_-$ or $\theta_+$ which
is transverse to the corresponding $l_\theta$, then
$\|\cdot\|_B$ can be regarded as the restriction to
$A$ of a Minkowski conic pseudo-norm defined on a bigger domain
$\tilde A$. Recall that, if $S$ is convex, $c$ can always be extended in $\mathds R^2$ to either $\theta_-$ or $\theta_+$, as the tangent line at any $\theta_0 \in I$ must intersect either $l_{\theta_-}$ or $l_{\theta_+}$; but this extension may be
non-transverse, or may even reach the origin (take $I=(0,\pi/2)$ and choose $c$ as
the $\theta$-reparametrization of the branch $x>x_0$ of the parabola $y=(x-x_0)^2$ for some $x_0\geq
0$).
Observe that, by construction, $S$ is always closed as a subset of $A$. If $S$ is also closed as a subset of $V$, then $S$ cannot be convex everywhere (essentially, $l_{\theta_-}$ and $l_{\theta_+}$ would be parallel to asymptotes of $c$). In this case, it would also be natural to extend
$\|\cdot\|_B$ to all of $V$, so that the directions
$l_{\theta_\pm}$, away from $A$, would be degenerate (see Remark \ref{rojo}). \end{exe} \begin{exe}\label{ex2185} Let $c(\theta)= \theta (\cos\theta,\sin\theta)$, $\theta\in
(\epsilon,2\pi-\epsilon)$ for some $\epsilon>0$. Clearly, this curve is convex (use \eqref{hatg}) and determines a Minkowski conic norm $\|\cdot\|$. For small $\epsilon$, $\|\cdot\|$ has a non-convex domain $A$ and satisfies: (i) it is not the restriction to $A$ of a Minkowski norm on all $\mathds R^2$, and (ii) there are vectors $u,v\in A$ such that $u+v\in A$ but they do not satisfy the triangle inequality (choose $u=(0,2\sin(2\epsilon))$, $v=(\cos (2\epsilon),-\sin( 2\epsilon))$. Then $\|u\|=4\sin (2\epsilon)/\pi$, $\|v\|=1/(2(\pi-\epsilon))$ and $\|u+v\|=1/(2\epsilon)$. It is clear that, when $\epsilon$ is small enough, $\|u\|+\|v\|<\|u+v\|$). \end{exe}
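The failure of the triangle inequality in this example is easy to check numerically: a vector with polar angle $\theta\in(\epsilon,2\pi-\epsilon)$ and Euclidean length $\rho$ has norm $\rho/\theta$. A sketch (our own check, with a concrete value of $\epsilon$ chosen for illustration):

```python
import math

def spiral_norm(v, eps):
    # indicatrix c(theta) = theta (cos theta, sin theta): a vector of polar angle
    # theta in (eps, 2 pi - eps) and Euclidean length rho has norm rho / theta
    x, y = v
    theta = math.atan2(y, x) % (2 * math.pi)
    assert eps < theta < 2 * math.pi - eps, "outside the conic domain A"
    return math.hypot(x, y) / theta

eps = 0.05
u = (0.0, 2 * math.sin(2 * eps))             # angle pi/2,        norm 4 sin(2 eps)/pi
v = (math.cos(2 * eps), -math.sin(2 * eps))  # angle 2 pi - 2 eps, norm 1/(2 (pi - eps))
w = (u[0] + v[0], u[1] + v[1])               # = (cos 2 eps, sin 2 eps), norm 1/(2 eps)
# the triangle inequality fails:
assert spiral_norm(u, eps) + spiral_norm(v, eps) < spiral_norm(w, eps)
```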
\begin{exe}\label{ex2a} Consider now two examples with $A=\{(x,y)\in \mathds R^2: y>0\}$. First, the curve $c_1$ obtained as the reparametrization, by the angular coordinate $\theta\in (0,\pi)$, of the branch of the parabola $t\mapsto (t,\sqrt{1-t})$, $t\in (-\infty,1)$.
In this case $\hat{g}>0$, the curve defines a Minkowski conic norm and Proposition \ref{opennormballs} is applicable. Second, the curve $c_2$ obtained as a reparametrization of the straight line $y=1$
(i.e., the associated norm is $\|(x,y)\|:= y$). This is a conic pseudo-norm, but the forward and backward balls do not constitute a subbasis for the topology of $\mathds R^2$. Moreover, all its admissible curves connecting two fixed points have the same length and, according to Subsection \ref{s2d}, they are extremals of the energy functional (geodesics) when reparametrized at constant speed. \end{exe}
\begin{exe}\label{ex3}
As a refinement of the previous example for future referencing, consider the parabola $c(t)= (t,1-t^2)$, $t\in \mathds R$, which can be reparametrized with the angular coordinate $\theta \in (-\pi/2,3\pi/2)$. This curve is strongly convex and defines a Minkowski conic norm with only one direction excluded from the domain ---concretely, $A=\mathds R^2\setminus (l_{-\pi /2}\cup\{0\}). $
Its affine open balls (see \eqref{eopenballs})
centered at any $v_0=(x_0,y_0)$ are delimited by a parabola as follows: $$ B^+_{v_0}(r)= \{(x,y): (x,y)-v_0\in A;\, y-y_0<r\left(1-\frac{(x-x_0)^2}{r^2}\right)\}. $$ In this example (as well as in the first one in Example \ref{ex2a}), the balls are always open for the topology of $V$ but, as any two such parabolas intersect, the topology generated by the balls on $\mathds R^2$ is strictly coarser than the usual one (in particular, no pair of points is Hausdorff separated in that topology). Thus, the result in part $(i)$ of Proposition \ref{ppseudi} cannot be extended to the case of Minkowski conic norms. \end{exe}
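In this example the conic norm also has a closed form: writing $(x,y)=s\,c(t)$ and eliminating $t$ shows that $\|(x,y)\|$ is the positive root $s$ of $s^2-ys-x^2=0$. The sketch below (our own numerical check, not part of the text) verifies this formula against the indicatrix, positive homogeneity, and the description of the affine forward balls:

```python
import math

def parab_norm(v):
    # conic norm with indicatrix c(t) = (t, 1 - t^2):
    # the positive root of s^2 - y s - x^2 = 0
    x, y = v
    assert x != 0 or y > 0, "vector outside the conic domain A"
    return (y + math.sqrt(y * y + 4 * x * x)) / 2

# indicatrix points have norm 1, and the norm is positively homogeneous
for t in [-3.0, -0.5, 0.0, 1.0, 2.5]:
    assert abs(parab_norm((t, 1 - t * t)) - 1.0) < 1e-9
assert abs(parab_norm((2 * 0.7, 2 * (1 - 0.49))) - 2.0) < 1e-9

# forward affine ball: ||(x,y) - v0|| < r  iff  y - y0 < r (1 - (x - x0)^2 / r^2)
x0, y0, r = 0.3, -1.0, 2.0
x, y = 1.0, 0.2
lhs = parab_norm((x - x0, y - y0)) < r
rhs = y - y0 < r * (1 - (x - x0) ** 2 / r ** 2)
assert lhs == rhs
```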
\begin{exe}\label{ex4} Consider the curve $c(t)= (\sinh t,\cosh t )$, $t\in \mathds R$, which can be reparametrized by the angular coordinate $\theta \in (\pi/4,3\pi/4)$ and, thus, can be regarded as a particular case of Example \ref{ex2}. This curve defines a conic pseudo-norm $\|\cdot\|$ with $S$ concave everywhere (i.e., $\hat g<0$), which can be interpreted as follows.
Consider the natural Lorentzian scalar product on $\mathds R^2$: $$ \langle (x,y),(x',y')\rangle_1 =x x' -y y'. $$ Then, $A$ is composed of all the vectors $v=(x,y)$ which are {\em timelike} ($\langle (x,y),(x,y)\rangle_1 <0$) and {\em future-directed} ($y>0$); moreover: $$
\| v \| =\sqrt{-\langle (x,y),(x,y)\rangle_1}. $$ Here, the (forward) affine balls centered at the origin can be described as: $$ B^+_0(r)= \{v\in A: -\langle v,v\rangle_1 < r^2\}, $$ which is the open region delimited by a hyperbola and the lines $y=\pm x$.
Remarkably, the triangle inequality does not hold; instead, a reverse triangle inequality does, namely: $$
\| v+w\| \; \geq \; \| v \| + \| w
\|, \quad \quad \forall v,w\in A $$ with equality iff $w=\lambda v$ for some $\lambda>0$ (this and the following property are well-known for Lorentzian scalar products; see \cite[Proposition 5.30]{Oneill83}, for example). By applying this reverse triangle inequality, the following property follows easily: for all $v_0\in A$ and $\epsilon>0$ there exists a sequence $P_0=0, P_1, \dots, P_k=v_0$ such that each $v_i=P_i-P_{i-1}$
belongs to $A$ and $\sum_{i=1}^k \| v_i \| < \epsilon$. That is, there are polygonal curves connecting $0$ and $v_0$ with arbitrarily short $\| \cdot \|$-length.
Finally, recall also that
$\|\cdot\|$ can be naturally extended to {\em past-directed} timelike vectors, or even to all the vectors where the Lorentzian scalar product does not vanish, yielding a bigger conic domain $\tilde A$ with four connected components. Such a situation becomes natural for {\em quadratic Finsler} manifolds (see part (1) of Remark \ref{r1} below).
\end{exe}
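The reverse triangle inequality and the arbitrarily short polygonal curves of the last example can be checked numerically. The sketch below (our own illustration, with sample values chosen by us) tests random future-directed timelike vectors and a nearly null zigzag from $0$ to $(0,1)$:

```python
import math, random

def lor_norm(v):
    # ||(x, y)|| = sqrt(y^2 - x^2) on future-directed timelike vectors (y > |x|)
    x, y = v
    assert y > abs(x), "not in the conic domain A"
    return math.sqrt(y * y - x * x)

random.seed(1)
for _ in range(1000):
    v = (random.uniform(-1, 1), random.uniform(1.001, 2))
    w = (random.uniform(-1, 1), random.uniform(1.001, 2))
    s = (v[0] + w[0], v[1] + w[1])
    # reverse triangle inequality: ||v + w|| >= ||v|| + ||w||
    assert lor_norm(s) >= lor_norm(v) + lor_norm(w) - 1e-9

# arbitrarily short polygonal curves from 0 to (0, 1): 2m nearly null zigzag
# steps (+/- (1 - delta) h, h), whose x-components cancel pairwise
m, delta = 50, 1e-4
h = 1.0 / (2 * m)
step_len = lor_norm(((1 - delta) * h, h))
total = 2 * m * step_len          # = sqrt(2 delta - delta^2), small for small delta
assert total < 0.02 < lor_norm((0.0, 1.0))
```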
\section{Pseudo-Finsler and conic Finsler metrics}\label{s2}
\subsection{Notion} First, let us clarify the notions of
Finsler metric to be used. \begin{defi}\label{d1} Let $M$ be a manifold and $A$ an open subset of the tangent bundle $TM$ such that $\pi(A)=M$, where $\pi:TM\rightarrow M$ is the natural projection, and let $F:A\rightarrow [0,\infty)$ be a continuous function. Assume that $(A,F)$ satisfies: \begin{enumerate}[(i)] \item $A$ is conic in $TM$, i.e., for every $v\in A$ and $\lambda>0$, $\lambda v\in A$ ---or, equivalently, for each $p\in M$, $A_p:=A\cap T_pM$ is a conic domain in $T_pM$. \item $F$ is smooth on $A$ except at most on the zero vectors. \end{enumerate}
We say that $(A,F)$, or simply $F$, is a {\em conic pseudo-Finsler metric} if each restriction $F_p:=F|_{A_p}$ is a Minkowski conic pseudo-norm on $T_pM$. In this case, $F$ is
\begin{enumerate}[(i)] \item[(iii)] a {\em conic Finsler metric} if
each $F_p$ is a Minkowski conic norm, i.e., the {\em fundamental tensor} $g$ on $A\setminus\{$zero section$\}$, induced by the fundamental tensor fields $g^{(p)}$ on $A_p\setminus \{0\}$ at each $p\in M$, is positive definite, \item[(iv)] a {\em pseudo-Finsler metric} if $A=TM$, i.e., each $F_p$ is a Minkowski pseudo-norm, so that $A_p=T_pM$ for all $p\in M$, \item[(v)] a (standard) {\em Finsler metric} if $F$ is both conic Finsler and pseudo-Finsler, i.e. $A=TM$ and $g$ is pointwise positive definite. \end{enumerate}
\end{defi} Observe that the fundamental tensor $g$ can be thought of as a section of a fiber bundle over $A$. To be more precise, denote also by $\pi:A\setminus\{0\}\rightarrow M$ the restriction of the natural projection from the tangent bundle to $M$, by $TM^*$ the cotangent bundle of $M$ and by $\tilde{\pi}:TM^*\rightarrow M$ the natural projection. Define $\tilde{\pi}^*:\pi^*(TM^*)\rightarrow A\setminus\{0\}$ as the fiber bundle obtained as the pullback of $\tilde{\pi}:TM^*\rightarrow M$ through $\pi:A\setminus\{0\}\rightarrow M$: \begin{equation} \label{efibrados} \xymatrix{ \pi^*(TM^*)\ar[d]_{\tilde{\pi}^*}&TM^*\ar[d]^{\tilde{\pi}}\\ A\setminus\{0\}\ar[r]_{\pi}&M\, }\end{equation} Then $g$ is a smooth symmetric section of the fiber bundle $\pi^*(TM^*)\otimes \pi^*(TM^*)$ over $A\setminus\{0\}$. Let us remark that if we fix a vector $v\in A\setminus\{0\}$, then $g_v$ is a symmetric bilinear form on $T_{\pi(v)}M$. From now on, this will be the preferred notation.
\begin{rem}\label{r1} Some comments are in order: \begin{enumerate}[(1)] \item Our definition of (standard) Finsler metric agrees with classical references such as \cite{AP,Akbar-Zadeh06,BaChSh00,ChSh05, Mo06,shen2001, Sh01}, and our definition of conic Finsler metric is equivalent to the notion of {\em generalized Finsler metric} of \cite{Br} (but we retain our nomenclature, as we are also dealing with other generalizations). According to our nomenclature, a classical Kropina metric is a {\em conic Finsler} metric (see Corollary \ref{cKropina} below).
A Matsumoto metric has a maximal conic domain where it is {\em conic Finsler} and a bigger one where it is {\em conic pseudo-Finsler} (see Corollary \ref{matsustrongly} below). Another way to generalize Finsler metrics is considering what we would call a {\em quadratic Finsler metric} $L$ (here $L$ would be positively homogeneous of degree 2, and the definition would include all the Lorentzian or semi-Riemannian metrics). This approach can be found in \cite{AIM93,Asa85, Mat86,Per08}.
In principle, the case of conic quadratic Finsler metrics is more general, as one can always consider the conic quadratic Finsler metric $L=F^2$ associated to any conic pseudo-Finsler metric. The converse holds only when $L$ is positive away from the zero section (otherwise, one can define just a conic pseudo-Finsler metric on the directions where $L$ is positive). Nevertheless, it is natural to assume some minimum restrictions, which essentially reduce to our case\footnote{\ See, for example, the definition of Finsler spacetime in \cite{PW}. Its associated Finsler metric satisfies all the assumptions of a conic pseudo-Finsler metric (except strict positive definiteness) plus other additional hypotheses.}. So, we will focus on the already general conic pseudo-Finsler case, which allows one to clarify some properties of the distance and balls. Let us finally point out that, in reference \cite{BeFa2000}, the authors define a Finsler metric as what we call a conic Finsler metric, and a pseudo-Finsler metric as a conic quadratic Finsler metric with nondegenerate fundamental tensor.
\item Minkowski conic pseudo-norms as those studied in Subsection \ref{sexamples} give the first examples of conic pseudo-Finsler metrics. Starting from them, one can yield examples of more general situations. For instance, consider the following two conic pseudo-Finsler metrics on $\mathds R^2$:
\begin{enumerate}[(a)]
\item $A=\{v\in T\mathds R^2: dy(v)>0\}$ and $F=dy|_A$,
\item $A=\{v\in T\mathds R^2: dx(v)\neq 0, dy(v)\neq 0\}$ and $F=\sqrt{dx^2}+\sqrt{dy^2}$. \end{enumerate}
Recall that they come from Minkowski conic pseudo-norms with non-compact, convex indicatrices which are not strongly convex at any point (so that only the non-strict triangle inequality holds). It is not difficult to see that a
conic pseudo-Finsler metric $F^a$ on $\mathds R^2$ can be defined such that
$F^a_{(x,y)}$ behaves as the metric in item (a) for $|y|< 1$, as the conic Minkowski norm in Example \ref{ex3} for $y<-2$ and as the Minkowski pseudo-norm in Example \ref{ex4} for $y>2$. Analogously, a conic pseudo-Finsler metric $F^b$ on $\mathds R^2$ can be defined such that $F^b_{(x,y)}$ behaves as the metric in (b) for $x^2+y^2< 1$ and as the Finsler metric associated to the usual scalar product on $\mathds R^2$ for $x^2 +y^2>2$. In particular, the conic domain $A_p$ may vary from point to point, so that such domains need not be homeomorphic.
\item As the last example shows, the number of connected components of $A_p$ may vary with $p$, and $A$ may be connected even if some $A_p$ are not. Recall that, for a Minkowski conic pseudo-norm $\|\cdot\|$, the conic domain $A$ might have (infinitely) many connected components, even though we typically consider the case where $A$ is connected. Accordingly, a natural hypothesis in the conic pseudo-Finsler case is to assume that $A$ is {\em pointwise connected} (i.e., each $A_p$, $p\in M$, is connected) and, in the conic Finsler case, we may also assume that $A$ is {\em pointwise convex} (each $A_p$ is a convex, and then connected, subset of $T_pM$) in order to have the triangle inequality.
\item Typically, one can also assume that the pair $(A,F)$ is {\em maximal}, i.e., such that no pair $(\tilde A, \tilde F)$ extends $(A,F)$. This means that no conic pseudo-Finsler metric $(\tilde A, \tilde F)$ satisfies $A\varsubsetneq \tilde A$ with
$\tilde F|_{A}=F$ (say, as in Examples \ref{ex3} or \ref{ex4}). Zorn's lemma ensures the existence of maximal extensions. However, such an extension may be highly non-unique (recall Example \ref{ex2} or the first one in Example \ref{ex2a}), and maximality will not be assumed a priori along this paper. Moreover, non-maximal pairs $(A,F)$ may be useful to model diverse situations ---for example, to represent restrictions on the possible
velocities of relativistic particles moving on $M$.
\item As a direct consequence of part $(ii)$ of Proposition \ref{ppseudi}, {\em the fundamental tensor $g$ of a pseudo-Finsler but not Finsler manifold must be degenerate at some points}; concretely, it must be degenerate along some directions at every tangent space $T_pM$, $p\in M$, where $g^{(p)}$ is not positive definite.
\end{enumerate} \end{rem} Some properties of conic pseudo-Finsler metrics can be reduced to the case $A=TM$ taking into account the following result.
\begin{prop}\label{punidad} Let $F: A\rightarrow \mathds R$ be a conic pseudo-Finsler metric and let $C\subset A$ be such that $C\cup \{\text{zero section}\}$ is a closed conic subset of $TM$. Then, there exists a pseudo-Finsler metric $\tilde F$ on $M$
such that $\tilde F|_{C}=F|_{C}$. \end{prop} \begin{proof} Consider some auxiliary Riemannian metric $g_R$ on $M$, let $F_R=\sqrt{g_R}$ be its associated Finsler metric, and take its unit sphere bundle $S_RM\subset TM$. Let $\{U,V\}$ be the open covering of $S_RM$ defined by $U=S_RM\cap (TM\setminus C)$ and $V=S_RM\cap A$. Consider a partition of unity $\{\mu_U, \mu_V\}$ subordinate to this covering, and regard these functions as functions on $TM\setminus\{$zero section$\}$ just by making them homogeneous of degree $0$. The pseudo-Finsler metric $\tilde F= \mu_U F_R + \mu_V F$ satisfies the required properties. \end{proof}
\begin{rem} \label{runidad} Recall that, in the previous proposition, the functions which constitute the partition of unity are not defined on $M$ but on $TM$. So, when the proposition is applied to a conic Finsler metric, the obtained extension is only a conic pseudo-Finsler metric (see Example \ref{ex2185}). \end{rem}
\subsection{Admissible curves and Finslerian separation} In this subsection, $M$ will be assumed to be connected in order to avoid trivialities. Consider a smooth curve $\alpha$ with velocity $\dot \alpha$ in a conic pseudo-Finsler manifold $(M,F)$. As the expression $F(\dot\alpha)$ does not always make sense, we need to restrict to curves where it does. \begin{defi} Let $F:A\rightarrow [0,\infty)$ be a conic pseudo-Finsler metric on $M$. A piecewise smooth curve $\alpha:[a,b]\rightarrow M$ is {\em $F$-admissible} (or simply {\em admissible}) if the right and left derivatives $\dot\alpha_+(t)$ and $\dot\alpha_-(t)$ belong to $A$ for every $t\in [a,b]$. In this case, the {\em ($F$-)length} of $\alpha$ is defined as \begin{equation}\label{lengthfunctional} \ell_F(\alpha)=\int_a^b F(\dot\alpha(t))\,{\rm d} t. \end{equation} \end{defi}
\begin{defi} Let $(M,F)$ be a conic pseudo-Finsler manifold, and $p,q\in M$. We say that $p$ {\em precedes} $q$, denoted $p\prec q$, if there exists an admissible curve from $p$ to $q$. Accordingly, the {\it future} and the {\it past} of $p$
are the subsets
\[
\begin{array}{ccc}
{\rm I}^+(p)=\{q\in M: p\prec q\}&\text{and}&
{\rm I}^-(p)=\{q\in M: q \prec p\}, \end{array}
\] respectively. Moreover, we define ${\rm I}^+=\{(p,q)\in M\times M: p\prec q\}$, i.e., ${\rm I}^+$ is equal to the binary relation $\prec $ as a point subset of $M\times M$.
\end{defi}
It is straightforward to check the following properties.
\begin{prop}\label{pimas} Given a conic pseudo-Finsler manifold $(M,F)$: \begin{enumerate}[(i)] \item The binary relation $\prec $ is transitive.
\item The subsets ${\rm I}^+(p)$ and ${\rm I}^-(p)$ are open in $M$ for all $p\in M$, and ${\rm I}^+$ is open in $M\times M$.
\item If $F$ is a pseudo-Finsler metric then $\prec$ is trivial, i.e., $p\prec q$ for all $p,q$ in the (connected) manifold $M$. \end{enumerate} \end{prop} (For $(ii)$, recall the techniques in the proof of Proposition \ref{openballs} below).
Now, we can introduce a generalization of the so-called Finsler distance for Finsler manifolds. \begin{defi} Let $(M,F)$ be a conic pseudo-Finsler manifold, $p,q\in M$, and
$C_{p,q}^F$ the set of all the $F$-admissible piecewise smooth curves from $p$ to $q$. The {\em Finslerian separation} from $p$ to $q$ is defined as: \[{{\rm d}}_F(p,q)=\inf_{\alpha\in C_{p,q}^F}\ell_F(\alpha)\in [0,\infty].\] \end{defi} From the definitions, one has directly: \begin{equation} \label{sep1} C_{p,q}^F\neq \emptyset \Leftrightarrow p\prec q, \quad {{\rm d}}_F(p,q)=\infty \Leftrightarrow p\not\prec q, \quad {{\rm d}}_F(p,q)\geq 0,\end{equation} as well as the triangle inequality \begin{equation} \label{sep2} {\rm d}_F(p,q)\leq {\rm d}_F(p,z)+{\rm d}_F(z,q), \quad \forall p,q,z \in M.\end{equation}
Analogously, we can define two kinds of balls, the {\it (open) forward $d_F$-balls} $B_F^+(p,r)$ and the {\it backward} ones $B_F^-(p,r)$, namely, \[B_F^+(p,r)=\{q\in M: {\rm d}_F(p,q)<r\}, \quad B^-_F(p,r)=\{q\in M:{\rm d}_F(q,p)<r\}.\]
\begin{prop}\label{openballs} Let $(M,F)$ be a conic pseudo-Finsler manifold. Then the open forward and backward $d_F$-balls are open subsets. \end{prop}
\begin{proof} Let $q\in B_F^+(p,r)$ and $\alpha: [0,r_0]\rightarrow M$ be an
admissible curve from $p$ to $q$ parametrized by arclength, so that $r_0<r$ and $\dot\alpha(r_0)$ belongs to $A\cap T_qM$. Let $K$ be a compact neighborhood of $\dot\alpha(r_0)$ in the unit ball of $T_qM$ entirely contained in $A$, and $C$ the corresponding conical subset $C=\{tv: v\in K, t>0\}$. Choosing coordinates $(U,\varphi)$ in a neighborhood $U$ of $q$, $TU$ can be written as $\varphi(U)\times \mathds R^n$ and $C$ is identified with some $\{\varphi(q)\}\times C_0$, where $C_0=\{tu: u\in K_0, t>0\}$ is a conical subset of $\mathds R^n$ and the compact subset $K_0$ is defined by $d\varphi(K)=\{\varphi(q)\}\times K_0$. Moreover, choosing a smaller $U$ if necessary (say, with compact closure), we can assume $\varphi(U)\times C_0\subset d\varphi(A)$ and that $F\circ d\varphi^{-1}$ is bounded on $\varphi(U)\times K_0$ by some constant $N_0\geq 1$. So, choose some $r_-<r_0$ with $r_0-r_-<(r-r_0)/N_0$, such that $\alpha([r_-,r_0])\subset U$,
the segment from $\varphi(\alpha(r_-))$ to $\varphi(\alpha(r_0))$ is contained in $\varphi(U)$ and there exists $\lambda>1$ with $\frac{\lambda}{r_0-r_-}(\varphi(\alpha(r_0))-\varphi(\alpha(r_-)))\in K_0$. (Observe that we can always find such an $r_-$ because \[\lim_{r_-\to r_0} \frac{\varphi(\alpha(r_0))-\varphi(\alpha(r_-))}{r_0-r_-}=d\varphi(\dot\alpha(r_0))\] and $d\varphi(\dot\alpha(r_0))\in K_0$). The segments in $\varphi(U)$ (regarded as a subset of $\mathds R^n$) which start at $\varphi(\alpha(r_-))$ with velocity in $K_0$ and are defined on some interval $[r_-, b]$ with $b<r_-+(r-r_0)/N_0$ have $F$-length smaller than $r-r_0$ and cover a neighborhood $\varphi(W)$ of $\varphi(q)$. So, the neighborhood $W\ni q$ is clearly contained in the required ball. The proof for backward balls is analogous. \end{proof} \begin{defi} A conic pseudo-Finsler metric $F$ is {\em (Riemannianly) lower bounded} if there exists a Riemannian metric $g_0$ on $M$ such that $F(v)\geq \sqrt{g_0(v,v)}$ for every $v\in A$. In this case, $g_0$ is called a {\em (Riemannian) lower bound} of $F$. \end{defi} \begin{rem}\label{rprevio} Several comments are in order: \begin{enumerate}[(1)] \item[(1)] The lower boundedness needs to be checked only locally, that is, {\em a conic pseudo-Finsler metric $F$ is lower bounded if and only if for every point $p\in M$, there exists a neighborhood $U\subset M$ of $p$ and a Riemannian metric
$g$ on $U$ such that $F(v)\geq\sqrt{g(v,v)}$ for every $v\in A\cap TU$.}
In fact, each $U$ can be chosen compact and, then, all the metrics $g$ can be taken homothetic to a fixed auxiliary one $g_R$ defined on all $M$. By paracompactness, a locally finite open covering $\{U_i\}_{i\in I}$ of $M$ exists such that
$F(v)\geq \frac{1}{c_i}\sqrt{g_R(v,v)}$ for some $c_i\geq 1$ and all $v\in A\cap TU_i$. If $\{\mu_i\}_{i\in I}$ is a partition of unity subordinate to $\{U_i\}_{i\in I}$, then $g_0=\frac{1}{\left(\sum_{i\in I}\mu_ic_i\right)^2}\,g_R$ is the required lower bound of $F$ (at each point, $\sum_{i\in I}\mu_ic_i$ is a convex combination of the $c_i$'s of the covering sets containing that point, so $\sqrt{g_0(v,v)}\leq \frac{1}{\min_i c_i}\sqrt{g_R(v,v)}\leq F(v)$).
\item[(2)] The lower boundedness of $F$ can be also checked by looking at the indicatrix $S_p\subset A\cap T_pM, p\in M$ for each $F_p$. Indeed, {\em $F$ is lower bounded if and only if for each $p\in M$ there exists a compact neighborhood $\hat K$ of $0\in T_pM$ in $TM$ such that $S_q\subset \hat K$ for all $q\in K:=\pi(\hat K)$.}
\item[(3)] Either from the definition or from the criteria above, it is straightforward to check that classical Kropina and Matsumoto metrics (see Subsection \ref{s3b} for their definition) are lower bounded.
\item[(4)] Observe that the distance $d_{g_0}$ associated to the lower bound $g_0$ satisfies $d_{g_0}(p,q)\leq d_F(p,q)$ for every $p,q\in M$ and, then, $ B^+_F(p,\epsilon)\subseteq B_{g_0}(p,\epsilon)$. \end{enumerate} \end{rem}
\begin{lemma}\label{basis} Let $F:A\rightarrow [0,+\infty)$ be a conic pseudo-Finsler metric in $M$ that admits a lower bound $g_0$. Fix $p\in M$ and $r>0$.
Given $x\in B_{g_0}(p,r)$, there exist $q, q'\in B_{g_0}(p,r)$
and $\epsilon>0$ such that $x\in B^+_F(q,\epsilon)\cap B^-_F(q',\epsilon)$ and $B^+_F(q,\epsilon)\cup B^-_F(q',\epsilon)\subseteq B_{g_0}(p,r)$. \end{lemma} \begin{proof} Consider an admissible curve $\gamma$ passing through $x$ (one exists, as $\pi(A)=M$). Clearly, we can choose $\epsilon>0$ and a point $q$ on $\gamma$ before $x$ such that $d_F(q,x)<\epsilon$ and $B_{g_0}(q,\epsilon)\subset B_{g_0}(p,r)$. So (recall part (4) of Remark \ref{rprevio}), $x\in B^+_F(q,\epsilon)\subseteq B_{g_0}(q,\epsilon)\subseteq B_{g_0}(p,r)$. By reversing the parametrization of $\gamma$, the required point $q'$ can be found analogously.
\end{proof}
\begin{prop}\label{topologia} Let $F$ be a lower bounded conic pseudo-Finsler metric on $M$. Then the collection of subsets $\{B^+_F(p,r): p\in M, r>0\}$ (resp. $\{B^-_F(p,r): p\in M, r>0\}$) constitute a topological basis of $M$. \end{prop} \begin{proof} It is a direct consequence of Proposition \ref{openballs} and Lemma \ref{basis}. \end{proof}
\subsection{Discussion on balls and distances}
In general, the Finslerian separation may behave very differently from a distance. To make this statement more precise, let us introduce the following notion; see \cite[Section 1]{Bus44} and also \cite[page 5]{Z} and \cite{FHS}.
\begin{defi}\label{dgd}
A {\em generalized distance} $d$ on a set $X$ is a map $d:X\times X\rightarrow \mathds R$
satisfying the following axioms: \begin{itemize} \item[(a1)] $d(x,y)\geq 0$ for all $x,y\in X$. \item[(a2)] $d(x,y)=d(y,x)=0$ if and only if $x=y$. \item[(a3)] $d(x,z)\leq d(x,y)+d(y,z)$ for all $x,y,z\in X$. \item[(a4)] Given a sequence $\{x_n\}\subset X$ and $x\in X$, then $\lim_{n\rightarrow \infty}d(x_n,x)=0$ if and only if $\lim_{n\rightarrow \infty}d(x,x_n)=0$. \end{itemize}
\end{defi}
For any generalized distance, the {\em forward}
balls have a natural meaning, and generate a topology which
coincides with the one generated by the {\em backward} balls. Recall that the Finslerian distance associated to a Finsler manifold is a generalized distance in the sense above. This can be extended to pseudo-Finsler manifolds, as they are Riemannianly lower bounded. \begin{thm}\label{tballs} For any pseudo-Finsler manifold $(M,F)$, the Finslerian separation $d_F$ is a generalized distance, the forward (resp. backward) open balls generate the topology of the manifold, and $d_F$ is continuous with this topology. \end{thm}
\begin{proof} The consistency of the definition, including axiom (a1) in Definition \ref{dgd}, follows from part $(iii)$ of Proposition \ref{pimas}, and axiom (a3) from Eq. \eqref{sep2}. For the other two axioms, choose two auxiliary Riemannian metrics $g_1, g_2$ with distances, resp., $d_1, d_2$, which satisfy $g_1(v,v)\leq F^2(v)\leq g_2(v,v)$ for all $v$ in $TM$ (easily, $g_1$ and $g_2$ can be chosen conformal to any prescribed Riemannian metric $g_R$). Obviously: $$d_1 \leq d_F \leq d_2 .$$ From these inequalities, (a2) and (a4), as well as the assertion about the topology of the manifold, follow directly. The last assertion is general for the topology associated to any generalized distance \cite[p. 6]{Z}. \end{proof}
\begin{rem}\label{pseudoaffineballs} However, if a vector space $V$ endowed with a pseudo-norm $\|\cdot\|$ is regarded as a pseudo-Finsler space, the affine and $d_F$-balls may differ. More precisely, the unit (forward) affine ball is always contained in the corresponding unit (forward) $d_F$-ball, and the inclusion is strict if $g_v$ is indefinite at some $v\in V\backslash\{0\}$. \end{rem} Theorem \ref{tballs} shows that some properties of $d_F$ in the Finsler case are retained in the pseudo-Finsler one. However, in the general conic case things may be very different, as only Proposition \ref{openballs} can be invoked. As the following example suggests, the relation $\prec$ may give tidier information.
\begin{exe}\label{ex317} (1) In Example \ref{ex4}, regarded as a conic pseudo-Finsler manifold $(M,F)$, the following properties hold:
$${\rm I}^+(0)=\{(x,y): |x|<y\}, \quad d_F(0,v)=\left\{ \begin{array} {cll}0 & \hbox{if} & v\in {\rm I}^+(0), \\ \infty & \hbox{if} & v\not\in {\rm I}^+(0).\end{array} \right.$$ In particular, $d_F(0,0)=\infty$ and the open $d_F$-balls look very different to the open affine balls.
(2) Moreover, one can obtain a quotient conic pseudo-Finsler metric on the torus $T^2=\mathds R^2 / \mathds Z^2$ just taking into account that $F$ is invariant under translations. In this quotient, {\em the Finslerian separation between any two points is equal to 0}.
(3) Regarding Example \ref{ex3} as a conic Finsler metric (with non pointwise convex $A$), one also has $d_F((0,0),(0,-s))=0$ for all $s\geq 0$ (recall that such $(0,-s)$ does not belong to $A_{(0,0)}$ but $(0,0)\prec (0,-s)$ as one can connect these two points by means of $F$-admissible piecewise smooth curves with a break). Thus, $\{(0,-s): s\geq 0\} \subset B_F^+((0,0),r)$ but $\{(0,-s): s\geq 0\} \cap B^+_{(0,0)}(r)=\emptyset$ for all $r>0$. \end{exe}
Let us see that even in the conic Finsler case, the Finslerian separation can be discontinuous around two points $p,q$ with $d_F(p,q)<\infty$. \begin{exe}\label{discontinuity}
Consider the translation $\varphi:\mathds R^2\rightarrow \mathds R^2$ given by $\varphi(x,y)=(x+2,y)$ and the group of isometries $G=\{\varphi^n: n\in\mathds Z\}$. The quotient $\mathds R^2/G$ can be identified with the strip $\{(x,y)\in\mathds R^2:-1\leq x\leq 1\}$, with $(-1,y)\equiv (1,y)$ for all $y\in \mathds R$. Now consider the convex open cone of $T_{(0,0)}\mathds R^2$ determined by the vectors $(-1,3)$ and $(1,1)$ and choose at every point of the cylinder the open cone obtained as the parallel translation of this cone. This defines the subset $A$. The Finsler metric $F$ is the square root of the Euclidean metric restricted to $A$. Finally observe that the separation from $p=(0,0)$ is not continuous at $q=(-2/3,2)$. In fact, $d_F((0,0),(-2/3,2))=2\sqrt{13}/3$ (which is equal to the Euclidean distance from $(0,0)$ to $(4/3,2)$), while $d_F((0,0),(-2/3+\epsilon,2))=\sqrt{(\epsilon-2/3)^2+4}$ (the Euclidean distance between the two points) for $0<\epsilon<2$. \end{exe}
\subsection{Minimization properties of geodesics in conic Finsler metrics}\label{s2d} For any conic pseudo-Finsler metric we can define the energy functional as \begin{equation*}\label{energyfunctional} E_F(\alpha)=\int_a^b F(\dot\alpha(t))^2{\rm d} t, \end{equation*} where $\alpha:[a,b]\rightarrow M$ is any $F$-admissible piecewise
smooth curve between two fixed points $p,q\in M$. Then, geodesics can be defined as critical points of this functional. Of course, they will also be critical points of the length functional in \eqref{lengthfunctional} but, as critical points of the energy functional, geodesics are forced to have constant speed.
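Indeed (a standard computation, sketched here for completeness), the Cauchy--Schwarz inequality relates both functionals by
\[
\ell_F(\alpha)^2=\left(\int_a^b F(\dot\alpha(t))\,{\rm d} t\right)^2\leq (b-a)\int_a^b F(\dot\alpha(t))^2\,{\rm d} t=(b-a)\, E_F(\alpha),
\]
with equality if and only if $F(\dot\alpha)$ is constant. So, among all the reparametrizations on $[a,b]$ of a fixed $F$-admissible curve, the energy is minimized precisely by the constant-speed ones, which explains why critical points of the energy functional have constant speed.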
When the fundamental tensor $g$ is non-degenerate we can define the {\em Chern connection} (see \cite[Chapter 2]{BaChSh00}), which is a connection for $ \tilde{\pi}^*: \pi^*(TM)\rightarrow A\setminus\{0\}$ (recall the diagram in \eqref{efibrados}).
Moreover, fixing a vector field $W$ on an open $U\subset M$, the Chern connection gives an affine connection on $U$ that we will denote by $\nabla^W$. If we fix a vector field $T$ along a curve $\gamma$, the Chern connection provides a covariant derivative along $\gamma$ with reference $T$ that we will denote as $D^T$ (see \cite[Sections 7.2 and 7.3]{shen2001}). In this case, geodesics are characterized as the solutions of the equation $D^T_TT=0$, where $T=\dot\gamma$; they are uniquely determined by their initial conditions, so there is a unique geodesic tangent to a given vector of the tangent bundle. Moreover, we can define the {\em exponential map} in $p\in M$, $\exp_p:\Omega\subseteq A_p (\subseteq T_pM)\rightarrow M$, as $\exp_p(v)=\gamma(1)$, where $\gamma$ is the unique geodesic such that $\gamma(0)=p$ and $\dot\gamma(0)=v$, whenever $\gamma$ is defined at least in $[0,1]$.
\begin{conv} According to our conventions, if $0\not\in A_p$, the constant curve $\gamma_p(t)=p$ for all $t\in \mathds R$ is not an admissible curve and, so, it is not a geodesic (this situation is common in our study, as it happens in some points whenever a conic pseudo-Finsler metric is not pseudo-Finsler). Nevertheless, when considering any curve $c: [a,b]\rightarrow M$ starting at $p$, we will not care about whether this initial point is obtained from the exponential. So, we will say that $c$ is contained in the image by $\exp_p$ of some subset $S\subseteq \Omega (\subseteq A_p)$ as the geodesic balls below (or that $c$ lies in $\exp_p (S)$), if $c((a,b])\subset \exp_p(S)$, (that is, we will work as if $\exp_p(0)=p$ when forced by the context) and we will assume that $\dot c(a)\in A_p$ (otherwise, the curve would not be admissible). \end{conv}
When the fundamental tensor is degenerate, the interpretation of geodesics as solutions of $D^T_TT=0$ makes no sense, but their characterization as critical points of the energy functional still holds. However, then geodesics are not univocally determined by their velocity at one point (see \cite[Example 3 (Fig.1)]{Mat}; as a limit case, all the constant-speed parameterized curves in the second case of Example \ref{ex2a} would be geodesics).
Furthermore, if the fundamental tensor is not degenerate but indefinite at some $v\in A_p$, the geodesic with velocity $v$ is never minimizing. Next, we will see that for conic Finsler metrics, geodesics minimize in geodesic balls, but a previous technical discussion is required.
\begin{rem}\label{gausslemma} The Gauss Lemma in typical references such as \cite[Lemma 6.1.1]{BaChSh00} is proven only for standard Finsler metrics. Nevertheless, its local nature allows one to prove it in a much more general context: it works for any conic pseudo-Finsler metric whenever the exponential is defined in a neighborhood ${\mathcal W}$ of $v\in A_p\subset T_pM$ and $g$ is non-degenerate in ${\mathcal W}$. In fact, put $r=F(v)(>0)$, let $\gamma$ be a geodesic with $\dot \gamma (0)=v$, and $w$, a tangent vector to the sphere
$S_0^+(r)=\{u\in A_p: F(u)=r\}$. Choose a curve $\rho:(-\epsilon,\epsilon)\rightarrow S^+_0(r) \cap \mathcal{W} \subset A_p$ such that $\dot \rho(0)=w$, and consider the variation $(-\epsilon,\epsilon)\times [0,r]\rightarrow A_p$, $(s,t)\mapsto t\rho(s)$. Accordingly, the variation of $\gamma (=\gamma_0)$ by the geodesics $\gamma_s(t)=\exp_p(t\rho(s))$ with variational field $U$ satisfies
\[\frac{\partial}{\partial s} \ell_F(\gamma_s )|_{s=0}=\frac{1}{F(T)}g_T(U,T)|_0^{r}-\int_0^r g_T(U,D^T_T(T/F(T))){\rm d} t,\] where $T=\dot\gamma$ (see \cite[Exercise 5.2.4]{BaChSh00}). As
$\gamma$ is a geodesic, and all the curves in the variation have the same length and depart from the same point, the last equation
reduces to the Gauss Lemma, i.e.:
$ g_T(d\exp_{p}[w],T)=0$. \end{rem}
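As an elementary consistency check (included only as an illustration), consider a conic Minkowski norm on a vector space $V$: under the natural identifications, $\exp_p$ is the identity and the geodesics through the origin are the affine rays $t\mapsto tv$, so $T=v$ and $d\exp_p[w]=w$. For any curve $\rho$ in $S_0^+(r)$ with $\dot\rho(0)=w$, positive homogeneity (Proposition \ref{fundamentalprop}) yields
\[
0=\frac{{\rm d}}{{\rm d} s}F^2(\rho(s))\Big|_{s=0}=2\,g_{v}(v,w),
\]
which is exactly the conclusion $g_T(d\exp_{p}[w],T)=0$ in this case.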
\begin{prop}\label{minimizeconic} Let $(M,F)$ be a conic Finsler metric with pointwise convex $A$, and assume that $\exp_p$ is defined on a certain ball $B^+_0(r)$ of $(T_pM,F_p)$ and it is a diffeomorphism onto the geodesic ball $B_p^+(r):=\exp_p(B^+_0(r))$. Then, for any $q\in B_p^+(r)$ the radial geodesic from $p$ to $q$ is, up to reparametrizations, the unique minimizer of the Finslerian separation among the admissible curves contained in $B_p^+(r)$. \end{prop} \begin{proof} Given a point $q\in B_p^+(r)$, let $\tilde{r}$ be the length of the radial geodesic from $p$ to $q$. Consider any smooth admissible curve $c:[0,1]\rightarrow M$ from $p$ to $q$ contained in $B_p^+(r)$. As $\exp_p$ is a diffeomorphism on $B^+_0(r)$, there exist functions $s:[0,1]\rightarrow [0,r)$ and $v:[0,1]\rightarrow S_0^+(1)\subset T_pM$ uniquely determined (both smooth away from zero, and $s$ continuous at zero) such that $c(u)=\exp_p(s(u)v(u))$ so that $s(0)=0$ and $s(1)=\tilde{r}$. Then, define $\sigma: [0,r) \times [0,1]\rightarrow M$ as $\sigma(t,u)=\exp_p(t v(u))$ and denote $T=\frac{\partial \sigma}{\partial t}$ and $U=\frac{\partial \sigma}{\partial u}=d\exp_p[t\dot v]$. We can express $c(u)=\sigma(s(u),u)$ for every $u\in [0,1]$. Hence by the chain rule \begin{equation}\label{cprime}\dot c(u)=\dot s(u)\, T+U. \end{equation} Moreover, by the Fundamental Inequality in \eqref{fundineq}, $g_T(T,\dot c)\leq F(T)F(\dot c)$, and using the previous identity we get \[\dot s\, g_T(T,T)+g_T(T,U)\leq F(T) F(\dot c), \quad \quad \forall u\in (0,1].\] By the Gauss Lemma (see Remark \ref{gausslemma} above), $g_T(T,U)=0$, and $F(T)=1$ because $T$ is the velocity of a geodesic with initial velocity equal to $v(u)$ (recall that $F(v(u))=1$). Moreover, $g_T(T,T)=F(T)^2=1$ and, therefore, $\dot s\leq F(\dot c)$.
Applying the last inequality, we deduce that \[\ell_F(c)=\int_0^1 F(\dot c)du\geq \lim_{\epsilon \searrow 0} \int_\epsilon^1 \dot s(u)du= s(1)-s(0)=\tilde{r}.\] As the length of the radial geodesic is exactly $\tilde{r}$, this shows that it is a global minimizer among all the curves lying in $B^+_p(r)$.
For the uniqueness up to reparametrization, observe that equality in the Fundamental Inequality can happen just when $T$ and $\dot c$ are proportional but, looking at Eq. \eqref{cprime}, this means that $U=0$ (recall that, by the Gauss Lemma, $U$ is $g_T$-orthogonal to $T$). As $\exp_p$ is a diffeomorphism, $U=0$ implies that $\dot v=0$ and, therefore, that $c$ is a reparametrization of a radial geodesic.
\end{proof}
\begin{rem}\label{rm0}
Regarding the absolute minimization: \begin{enumerate}[(1)] \item In the standard Finsler case, the radial geodesic is an absolute minimizer for the curves from $p$ to $q$ on all $M$. In fact, if we consider a curve $\beta$ that goes out of $B^+_p(r)$, then its length must also be greater than or equal to $r$, as the portion of the curve in $B^+_p(r)$ from $p$ to the boundary of $B^+_p(r)$ already has length greater than or equal to $r$. Summing up, in a Finsler metric, each radial geodesic segment of a geodesic ball as in Proposition \ref{minimizeconic} is the unique curve in $M$ of minimum length which connects its endpoints, up to affine reparameterizations.
\item This absolute minimization cannot be ensured in the conic Finsler case. The reason is that shorter curves which cross the boundary of $\exp_p(A_p)$ may appear. In fact, consider the following example on $\mathds R^2$. Let $p=(0,0), q=(0,1) \in \mathds R^2$ and $R_\epsilon$ the open rectangle of vertices $V_p^\pm(\epsilon)=p+(\pm \epsilon,-\epsilon)$, $V_q^\pm(\epsilon)=q+(\pm \epsilon,\epsilon)$ and choose small $\epsilon_1>\epsilon_0>0$. Consider a Riemannian metric $g=\Lambda \langle\cdot,\cdot\rangle$ (conformal to the usual one $\langle\cdot,\cdot\rangle$) such that the conformal factor $\Lambda (>0)$ is equal to some big constant $B$ on $R_{\epsilon_0}$ and to 1 outside $R_{\epsilon_1}$. $B$ is chosen so that there are curves $y\mapsto (x(y),y)$ from $p$ to $q$ which go outside $R_{\epsilon_1}$ and have $g$-length much smaller than $B$ (which is the $g$-length of the segment from $p$ to $q$). Now, let $F$ be the conic Finsler metric associated to $g$ with domain $A$ determined as follows. $A_{(x,y)}\equiv (x,y)+C_y$ where $C_y$ is: (a) the open cone delimited by $p$ and the vertices $V_q^\pm(\epsilon_0)$ for $(x,y)$ with $y\leq 0$, (b) the half plane $y>0$ for $(x,y)$ with $y\geq \delta$ where $\delta\geq 0$ is some prescribed small constant, and (c) a cone obtained by opening the one in (a) until the half plane in (b) for $0<y<\delta$. (Notice that the choice $\delta=0$ is permitted, and then the case (c) will not occur; however, the possibility to choose $\delta >0$ stresses that a choice of $C_y$ discontinuous in $y$ is irrelevant here). Then, clearly $B_p^+(1+\epsilon_0)\subset R_{\epsilon_0}$ but there are admissible curves from $p$ to $q$ shorter than the geodesic segment in the ball.
\item As a consequence of (2), some additional hypothesis must be imposed in order to ensure the character of absolute minimizer for radial geodesics. Taking into account (1), a sufficient hypothesis would be: {\em the boundary of $B_p^+(r)$ in ${\rm I}^+(p)$ is equal to $\exp_p(S_0^+(r))$.} \end{enumerate} \end{rem}
\begin{rem}\label{rm1}
Regarding the domain: \begin{enumerate}[(1)] \item
In Proposition \ref{minimizeconic} we have assumed that the domain $A$ is pointwise convex. Even though this could be weakened (only the fundamental inequality was required, and this holds under more general hypotheses, see Remark \ref{fundineqRem}), if $A$ is not pointwise convex, some other pathologies may happen. For example,
for the conic Finsler norm of Example \ref{ex3},
the separation from $(0,0)$ to any point of the form $(0,-s)$, $s\geq 0$ is $0$, but there is no geodesic joining them (recall part (3) of Example \ref{ex317}). The underlying reason is that these points $(0,-s)$ lie in ${\rm I}^+(0,0)$ but not in the affine ball $B_0^+(r)$ for any $r\geq 0$.
\item In contrast with the behavior of Example \ref{ex3}, Proposition \ref{minimizeconic} plus part (3) of Remark \ref{rm1} imply: {\em in a vector space $V$ endowed with a conic Minkowski norm defined on a convex domain $A$, the $d_F$-balls of the Finslerian separation coincide with the affine balls.} Indeed, the convexity of each $A_p=A\cap T_pV$ implies now that $\exp_p(A_p)$ coincides with ${\rm I}^+(p)$.
\item As a further improvement, it would be very interesting to find general conditions on a conic Finsler metric $F$ so that Proposition \ref{minimizeconic} can be applied to small enough affine balls. Proposition \ref{topologia} suggests that the lower boundedness of $F$ might be such a property. Recall that when the Finsler metric is not lower bounded, the forward balls may not constitute a topological basis (see Example \ref{ex3} and the first one in Example \ref{ex2a}), and it is not difficult to find cases where the exponential is not defined even in small balls: consider the first Example \ref{ex2a} (notice that $A$ is pointwise convex in this case), remove from $\mathds R^2$ the semiaxis $\{(x,0): x\leq 0\}$ and consider any ball $B^+_{(x_0,y_0)}(r), r>0$ for $(x_0,y_0)$ with $y_0<0$. \end{enumerate}
\end{rem}
Other properties of local minimization and conjugate points of geodesics in the conic Finsler case deserve to be studied, although we will not pursue them here.
\subsection{Summary on geodesics and distance} Let us summarize and compare in this subsection the properties of geodesics and distance of the different types of Finsler metric generalizations.
Recall first the following three well-known properties of a Finsler manifold: \begin{enumerate}[(i)] \item $d_F$ is a generalized distance, $d_F$ is continuous, and the open forward (resp. backward) balls generate the topology of $M$.
\item The equation of the geodesics is well defined and characterizes them univocally from initial data (the velocity at a point). Moreover, the radial geodesic segments of a geodesic ball are strict global minimizers of the energy functional (see part (1) of Remark \ref{rm0}).
\item In particular, for a Minkowski norm the affine balls and the $d_F$-balls agree, and the geodesics as a Finsler manifold coincide with affinely parametrized straight lines.
\end{enumerate} Let us analyze how these three properties behave in the generalizations of Finsler metrics we are studying.
For a pseudo-Finsler non-Finsler manifold: \begin{enumerate}[(1)] \item Properties in (i) hold (see Theorem \ref{tballs}).
\item Properties in (ii) do not hold. In fact, the equation of the geodesics is necessarily ill defined (as $g^{(p)}$ must be degenerate at some directions, see part (5) of Remark \ref{r1}). Moreover, even in the directions where this equation is well defined, geodesics do not minimize in any sense in general (recall, for example, Remark \ref{pseudoaffineballs}). The definition of geodesics as critical curves of the energy functional makes sense, but their uniqueness from the initial velocity does not hold in general (second Example \ref{ex2a}).
\item The first assertion in Property (iii) does not hold for Minkowski pseudo-norms whenever $g$ is indefinite (see Remark \ref{pseudoaffineballs}). A straightforward computation shows that the affine lines are geodesics for any pseudo-norm, but additional critical curves of the energy functional may appear ---in the case of norms, the uniqueness of the affine lines as geodesics holds if the strict triangle inequality holds.
\end{enumerate}
For a conic Finsler manifold: \begin{enumerate}[(1)] \item The first two assertions in Property (i) do not hold in general, as $d_F$ may reach the value $\infty$ and can also be discontinuous at finite values (see Example \ref{discontinuity}). The open forward balls do not constitute a basis for the topology in general (first Example \ref{ex2a} and Example \ref{ex3}) even though they do in particular cases (Proposition \ref{topologia}).
\item The first assertion in Property (ii) holds, in the sense that the equation of the geodesics is always well defined along the admissible directions. The radial geodesic segments minimize inside geodesic balls (see Proposition \ref{minimizeconic} and also Remark \ref{rm0}, especially its part (2)).
\item Properties in (iii) essentially hold, but some subtleties must be taken into account. For a conic Minkowski norm {\em with convex $A$} the affine and $d_F$ balls coincide (see part (2), and also (1), in Remark \ref{rm1}). The affinely parametrized curves with velocity in $A$ coincide with the geodesics as a conic Finsler manifold.
\end{enumerate}
For a conic pseudo-Finsler manifold the previous properties do not hold in general, but Propositions \ref{openballs} and \ref{topologia} and the relation $\prec$ may yield some general information, which may be useful combined with the particular properties of different classes of examples.
\section{Constructing conic Finsler metrics}\label{ls3}
Next, let us consider the case when a new conic pseudo-Finsler metric is constructed from a homogeneous combination of pre-existing conic Finsler metrics and one-forms, as a generalization (in several senses) of the known $(\alpha,\beta)$-metrics.
\subsection{General result and first consequences}\label{s3a}
In the following, $F_1,\ldots,F_n$ will denote conic Finsler metrics on the manifold $M$ of dimension $N$, with fundamental tensors $g^1,\dots, g^n$ (so that $g^k_v$ is the fundamental tensor of $F_k$ at the tangent vector $v$). Moreover, the so-called {\em angular metrics} (see \cite[Eq. 3.10.1]{BaChSh00}) are defined as \begin{equation}\label{angularmetric} h_v^k(w,w)=g_v^k(w,w)-\frac{1}{F_k(v)^2}g_v^k(v,w)^2, \end{equation} for any $v\in A\setminus 0$ and $w\in T_{\pi(v)}M$ and $k=1,\ldots,n$. Due to the Cauchy--Schwarz inequality, the angular metric of a conic Finsler metric is always positive semi-definite, the direction of $v$ being its only degenerate direction.
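For instance (just as an illustration of \eqref{angularmetric}), if $F_k$ is the norm of a Riemannian metric $g_R$, so that $g^k_v=g_R$ for every $v$, then
\[
h^k_v(w,w)=g_R(w,w)-\frac{g_R(v,w)^2}{g_R(v,v)}=g_R(w^\perp,w^\perp), \quad \hbox{where} \quad w^\perp=w-\frac{g_R(v,w)}{g_R(v,v)}\, v,
\]
that is, $h^k_v$ measures the $g_R$-orthogonal projection of $w$ onto $v^\perp$; this makes apparent both the positive semi-definiteness and the fact that the direction of $v$ is the only degenerate one.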
The intersection $$A=\cap_{k=1}^n A_k \subset TM$$ of their conic domains $A_k$ is assumed to be non-empty at each point, i.e., $\pi(A)=M$. Moreover, $\beta_{n+1},\beta_{n+2},\ldots,\beta_{n+m}$ will denote $m$ one-forms on $M$. The indices $k,l$ will run from $1$ to $n$. The indices which label the ordering of one-forms will be denoted with Greek letters $\mu, \nu$ and run from $n+1$ to $n+m$, while the indices $r, s$ will run from $1$ to $n+m$.
Let $B$ be a conic open subset of $\mathds R^{n+m}$ and consider a continuous function $L: B\times M\rightarrow \mathds R$, which satisfies: \begin{itemize} \item[(a)] $L$ is smooth and positive away from $0$, i.e., on $(B\times M) \setminus (\{0\}\times M)$.
\item[(b)] $L$ is $B$-positively homogeneous of degree 2, i.e., $L(\lambda x, p)=\lambda^2 L(x,p)$ for all $\lambda>0$ and all $(x,p)\in B\times M$. \end{itemize} Denote by ${\rm Hess}(L)$ the $B$-Hessian function matrix associated to $L$, that is, the matrix with coefficients the functions $a_{rs}=L_{,rs}$, where the comma denotes derivative with respect to the corresponding coordinates of $\mathds R^{n+m}$ (in particular, $L_{,rs}$ means the second partial derivative of $L$ with respect to the $r$-th and $s$-th variables). We say that ${\mathrm Hess}(L)$ is positive semidefinite if so is the matrix of functions at each $(x,p)\in B\times M$.
Finally, consider the function $F^2: A\subseteq TM\rightarrow \mathds R$ defined as: \begin{equation} \label{ef2} F^2(v)=L(F_1(v),F_2(v),\ldots,F_n(v),\beta_{n+1}(v),\beta_{n+2}(v),\ldots,\beta_{n+m}(v),\pi(v)).\end{equation}
\begin{thm}\label{central} For any $L$ satisfying (a) and (b) as above, and $F^2$ as in \eqref{ef2}, the function $F:=\sqrt{F^2}$
is a conic pseudo-Finsler metric with domain $A$ and fundamental tensor: \begin{equation}\label{fundamentalTensor}
g_v(w,w)=\frac 12\sum_k\frac{L_{,k}}{F_k(v)}h_v^k(w,w)+\frac 12
P(v,w){\mathrm Hess}(L) P(v,w)^T,
\end{equation} for all $v\in A\setminus{0}$, $w\in T_{\pi(v)}M$, where the superscript $^T$ denotes transpose and $P(v,w)$ the $(n+m)$-tuple \begin{equation} \label{epvw} P(v,w)=(\frac{g_v^1(v,w)}{F_1(v)}, \ldots,\frac{g_v^n(v,w)}{F_n(v)},\beta_{n+1}(w),\ldots,\beta_{n+m}(w)) \quad (\in \mathds R^{n+m}),\end{equation} and the derivatives $L_{,r}, L_{,rs},...$ of $L$ are computed on \begin{equation} \label{epbm}(F_1(v),F_2(v),\ldots,F_n(v),\beta_{n+1}(v),\beta_{n+2}(v),\ldots,\beta_{n+m}(v),\pi(v)) \; \in B\times M. \end{equation} Moreover, $g$ is positive semi-definite if the following conditions hold on the points in \eqref{epbm} obtained by taking any $v\in A\setminus{0}$: \begin{itemize} \item[(A)] $L_{,k}\geq 0$ for every $k=1,\ldots,n$, and \item[(B)] ${\mathrm Hess}(L)$ is positive semi-definite, \end{itemize} and it is positive definite (i.e., $F$ is a conic Finsler metric) if, additionally: \begin{itemize} \item[(C)] $L_{,1}+\ldots+L_{,n}>0$. \end{itemize} \end{thm} \begin{proof}
Recall from Proposition \ref{fundamentalprop} that $g_v(v,w)=\frac 12\frac{\partial}{\partial s} F(v+sw)^2|_{s=0}$. As, clearly,
\[\left. \frac{\partial F_k(v+sw)}{\partial s}\right|_{s=0}=\left.\frac{\partial \sqrt{F_k(v+sw)^2}}{\partial s}\right|_{s=0}=\frac{1}{F_k(v)} g_v^k(v,w)\]
and $\left.\frac{\partial \beta_\mu(v+sw)}{\partial s}\right|_{s=0}=\beta_\mu(w)$, we have \begin{equation}\label{laanterior}
2 g_v(v,w)=\left.\frac{\partial F^2(v+sw)}{\partial s}\right|_{s=0}=\sum_k \frac{1}{F_k(v)} g^k_v(v,w) L_{,k}+\sum_\mu \beta_\mu(w)L_{,\mu}. \end{equation} Again
from Proposition \ref{fundamentalprop},
\[g_v(w,w)=\frac12 \left. \frac{\partial}{\partial t}\right|_{t=0} \left( \frac{\partial }{\partial s}\left. F^2((v+tw)+sw)\right|_{s=0}
\right)= \frac{\partial }{\partial t} \left. g_{v+tw}(v+tw,w)\right|_{t=0} \] and, then, applying \eqref{laanterior}, we obtain
\begin{multline}
2 g_v(w,w)
=\sum_k\frac{L_{,k}}{F_k(v)}\left(g_v^k(w,w)-\frac{1}{F_k(v)^2}g^k_v(v,w)^2\right) \\ + \sum_{k,l}\frac{L_{,kl}}{ F_k(v)F_l(v)}g_v^k(v,w)g_v^l(v,w) +2\sum_{k,\mu}\frac{L_{,k\mu}}{F_k(v)}g_v^k(v,w)\beta_\mu(w)\\+\sum_{\mu,\nu}L_{,\mu\nu} \beta_\mu(w)\beta_\nu(w).\label{fundamentalTensor2} \end{multline} So, the expression of $g_v$ in \eqref{fundamentalTensor} follows
by using
$F_k(v)^2=g^k_v(v,v)$, formula \eqref{angularmetric}, and that the two last lines of the formula above can be written as $P(v,w){\mathrm Hess}(L) P(v,w)^T$ for $P(v,w)$ as in \eqref{epvw}.
For its positive semi-definiteness, recall that applying hypothesis (A), plus the fact that the angular metrics $h_v^k$ in \eqref{angularmetric} are positive semi-definite:
$$Q_1(v,w):=
\sum_k \frac{L_{,k}}{F_k(v)}h^k_v(w,w)
\geq 0. $$
So, the result follows by applying (B) to obtain: $$2g_v(w,w)-Q_1(v,w)= P(v,w) {\mathrm Hess}(L) P(v,w)^T\geq 0.$$
Finally, for the positive definiteness of $g$: if $w=\lambda v$, with $\lambda\not=0$, then $g_v(w,w)=\lambda^2 F^2(v)>0$ and, otherwise, $Q_1(v,w)>0$ by hypothesis (C) and because each $h^k_v$ is degenerate only in the direction of $v$. \end{proof} \begin{rem}
Observe that the expression of the fundamental tensor in \eqref{fundtensorsum} still holds when $F_1,\ldots,F_n$ are just conic pseudo-Finsler metrics. Indeed, we can obtain more accurate conditions to construct Finsler metrics. For example, we can consider conic pseudo-Finsler metrics with positive semi-definite fundamental tensor satisfying $(A)$ and $(B)$ and check that, for every $w\in T_{\pi(v)}M$ and $v\in A\setminus 0$, some term is strictly positive. \end{rem} Next, let us consider two particular cases. The first one is very elementary, and we include it because, as far as we know, it does not appear in the classic books on the subject.
\begin{cor}\label{sum}
Let $F_1,F_2,\ldots,F_n$ be conic Finsler metrics on $M$ defined on the same conic domain $A$.
Then $F=F_1+F_2+\cdots+F_n$ is also a conic Finsler metric and its fundamental tensor
is given as \begin{equation}\label{fundtensorsum} g_v(w,w)=F(v)\sum_{k} \frac{h_v^k(w,w)}{F_k(v)}
+\left(\sum_k\frac{g^k_v(v,w)}{F_k(v)}\right)^2, \end{equation} for any $v\in A\setminus{0}$ and $w\in T_{\pi(v)}M$. \end{cor} \begin{proof} Apply Theorem \ref{central} to $L:\mathds R^n\times M\rightarrow \mathds R$, defined as $L((x_1,\dots, x_n),p)= (\sum_r x_r)^2$ (recall $L_{,k}=2(\sum_r x_r)$, thus $L_{,kl}=2$), and apply Eq. \eqref{fundamentalTensor}. \end{proof} As a generalization to be compared with the results on Randers metrics below: \begin{cor}\label{rrandersr}
Consider $n$ conic Finsler metrics and $m$ one-forms as above. Then, \[R(v):=\left(\sum_{k=1}^n F_k(v)^{q}+\sum_{\mu=n+1}^{n+m}
|\beta_\mu(v)|^{q}\right)^{\frac{1}{q}}\] is a conic pseudo-Finsler metric with domain $A$ if $q\geq 2$, and with domain \[A_R:=A\setminus\{v\in TM: \beta_\mu(v)= 0 \text{ for some $\mu=n+1,\ldots,n+m$}\},\] for $2>q\geq 1$. In each case, the fundamental tensor $g$ is given by:
\begin{align}\label{fundtensorsquare} R(v)^{2q-2}g_v(w,w)=& R(v)^q \sum_k F_k(v)^{q-2} h_v^k(w,w)\nonumber\\& + \frac 12(q-1)\sum_{k,l}(F_k(v)F_l(v))^q\left(\frac{g_v^k(v,w)}{F_k(v)^2}-\frac{g_v^l(v,w)}{F_l(v)^2}\right)^2
\nonumber\\& + \frac 12(q-1)\sum_{\mu,\nu}|\beta_\mu(v)\beta_\nu(v)|^q\left(\frac{\beta_\mu(w)}{\beta_\mu(v)}-\frac{\beta_\nu(w)}{\beta_\nu(v)}\right)^2
\nonumber\\& + (q-1) \sum_{k,\mu} |F_k(v)\beta_\mu(v)|^q \left(\frac{g_v^k(v,w)}{F_k(v)^2}-\frac{\beta_\mu(w)}{\beta_\mu(v)}\right)^2 \nonumber\\& +\left(\sum_k F_k(v)^{q-2}g_v^k(v,w)+\sum_\mu
|\beta_\mu(v)|^{q-2} \beta_\mu(v) \beta_\mu(w) \right)^2, \end{align} for all $v\neq 0$ in the conic domain and $w\in T_{\pi(v)}M$.
As a consequence, $g$ is always positive semi-definite and, if $n\geq 1$, $R$ is a conic Finsler metric. \end{cor} \begin{proof} Consider the function $L: B \times M\rightarrow \mathds R$ with \[B=\mathds R^{n+m}\setminus\{(x_1,\dots , x_{n+m}): x_r=0 \text{ for some } r=1,\ldots,n+m\}\]
defined as
\[L\left((a_1,a_2,\ldots,a_{n+m}),p\right)=\sqrt[q]{(|a_1|^q+|a_2|^q+\cdots+|a_{n+m}|^q)^2}.\]
Let us denote $U=|a_1|^q+|a_2|^q+\cdots+|a_{n+m}|^q.$ Then
$L_{,r}=2 a_r |a_r|^{q-2} \, U^{\frac{2}{q}-1}$ and
\[L_{,rs}=2\delta_{rs} (q-1) |a_r|^{q-2} \, U^{ \frac{2}{q}-1}
+2(2-q) a_ra_s |a_ra_s|^{q-2} U^{ \frac{2}{q}-2},\] where $\delta_{rs}$ is Kronecker's delta. Observe that if $x=(x_1,x_2,\ldots,x_{n+m})\in\mathds R^{n+m}$,
\[x{\rm Hess}(L) x^T=(q-1)U^{\frac{2}{q}-2}\sum_{r,s}|a_ra_s|^q\left(\frac{x_r}{a_r}-\frac{x_s}{a_s}\right)^2
+2U^{\frac{2}{q}-2}(\sum_r x_r a_r|a_r|^{q-2} )^2.\] The expression of the fundamental tensor \eqref{fundtensorsquare} follows easily by using the last identity to compute \eqref{fundamentalTensor} ---and, then, the other assertions follow directly.
\end{proof} \begin{rem}\label{consequences} The last corollaries can be useful in different situations. For example, let $F$ be a conic Finsler metric on some open subset $U$ of $M$, and $C$ a closed subset of $M$ included in $U$. If $\tilde F$ is any conic Finsler metric on $M\setminus C$, by using a partition of unity, there exists a conic Finsler metric $F^*$ defined on all of $M$ which extends $F$ on $C$ and $\tilde F$ on $M\backslash U$ (compare with Proposition \ref{punidad} and Remark \ref{runidad}). The following corollary develops a different application for the isometry group ${\rm Iso}(M,F)$ of a Finsler manifold.
\end{rem}
\begin{cor} \label{cisom} Let $F$ be a non-reversible Finsler metric. Then,
\begin{align*} \tilde{F}(v)&=F(v)+F(-v),& \hat{F}(v)&=\sqrt{F(v)^2+F(-v)^2}, \end{align*} for $v\in TM$, are reversible Finsler metrics and: \[{\rm Iso}(M,F) \subseteq {\rm Iso}(M,\tilde{F})\cap {\rm Iso}(M,\hat{F}).\]
Moreover, this inclusion becomes an equality for the connected components of the identity of each side. \end{cor} \begin{proof} The inclusion ${\rm Iso}(M,F) \subseteq {\rm Iso}(M,\tilde{F})\cap {\rm Iso}(M,\hat{F})$ is straightforward. Clearly, it also holds for the connected components of the identity and, for the converse, let $\phi$ belong to the connected component of the right-hand side. Recall that $$F(v)=\frac{1}{2} \left(\tilde{F}(v)\pm \sqrt{2\hat{F}(v)^2-\tilde{F}(v)^2}\right)$$ with positive sign if $F(v)-F(-v)\geq 0$ and with negative sign otherwise. Both expressions at the right-hand side (with both signs) are $\phi$-invariant, and the radicand of the root is equal to $\left({F}(v)-{F}(-v)\right)^2$. Therefore, $|F(d\phi (v))-F(-d\phi (v))|= |F(v)-F(-v)|$ for all $v$, and the equality holds even if the absolute values are removed, as $\phi$ belongs to the connected component of the identity. Thus, $F(d\phi(v))=F(v)$ is a consequence of the expression above for $F$ (applied on $v$ and $d\phi(v)$) corresponding to the sign of $F(v)-F(-v)$. \end{proof}
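As a simple illustration of the corollary (the computation is elementary), take a Randers norm $F(v)=\alpha(v)+\beta(v)$ on a vector space, where $\alpha$ is the norm of a Euclidean scalar product and $\beta$ a one-form with $\alpha$-norm smaller than $1$. Then
\begin{align*} \tilde{F}(v)&=(\alpha(v)+\beta(v))+(\alpha(v)-\beta(v))=2\alpha(v),\\ \hat{F}(v)&=\sqrt{(\alpha(v)+\beta(v))^2+(\alpha(v)-\beta(v))^2}=\sqrt{2\alpha(v)^2+2\beta(v)^2}, \end{align*}
both clearly reversible. Moreover, $2\hat{F}(v)^2-\tilde{F}(v)^2=4\beta(v)^2$ and $F(v)-F(-v)=2\beta(v)$, so the formula in the proof becomes $F(v)=\frac{1}{2}\left(2\alpha(v)\pm 2|\beta(v)|\right)$ with the sign of $\beta(v)$, which recovers $F(v)=\alpha(v)+\beta(v)$ in both cases, as it must.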
\begin{rem} (1) Notice that the strict inclusion can hold. In fact, for any non-reversible Minkowski norm $\|\cdot\|\equiv F$ on a vector space $V$, minus the identity belongs to $\left({\rm Iso}(V,\tilde F)\cap {\rm Iso}(V,\hat F)\right)\setminus {\rm Iso}(V,F)$.
(2) Corollary \ref{cisom} reduces the proof that the group of isometries of any Finsler metric is a Lie group to the reversible case (simplifying, say, the proof in \cite[Theorem 3.2]{DengHou02}, which modifies the Myers--Steenrod \cite{MySt39} technique for symmetric distances). An alternative proof has also been developed recently in \cite[Section 2]{MRTZ09} by proving that ${\rm Iso}(M,F)$ is always a closed subgroup of the isometry group of a suitable average Riemannian metric, see also \cite{ricardo}.
\end{rem}
\subsection{The case of $(F_0,\beta)$ metrics}\label{s3b}
Next, we will focus on the application of Theorem \ref{central} to just one Finsler metric $F_0$ with conic domain $A_0\subseteq TM$ and a one-form $\beta$. A very important case is that of $(\alpha,\beta)$-metrics introduced by M. Matsumoto in \cite{Mat72} (see also \cite{Bucataru,KAM95,MSS10,Mat89,Mat91b,Mat91c,Mat92,SSI08}) as a generalization of Randers, Kropina and Matsumoto metrics. Here $\alpha$ denotes the square root of a Riemannian metric and $\beta$ a one-form on $M$.
In our general setting, we will consider combinations of the form $F_0+\beta$, $F_0^2/\beta$ and $\frac{F_0}{F_0-\beta}$, which generalize the Randers, Kropina and Matsumoto $(\alpha,\beta)$-cases (compare also with \cite{Has88}). Such combinations were named {\em $\beta$-changes} in \cite{Shi84} but, in concordance with our approach, they will be called here $(F_0,\beta)$-metrics, that is, a conic pseudo-Finsler metric $F$ is an {\em $(F_0,\beta)$-metric} if it is obtained as a positively homogeneous combination of a conic Finsler metric $F_0$ and a one-form $\beta$.
In order to characterize the conic domains for the strong convexity of an $(F_0,\beta)$-metric, we will first give a necessary condition. \begin{prop}\label{reciproco}
Assume that the $(F_0,\beta)$-metric $F(v)=\sqrt{L(F_0(v),\beta(v),\pi(v))}$ is a conic Finsler metric and the dimension of $M$ is $N>2$. Then $L_{,1}>0$. \end{prop} \begin{proof} Let $v\in A\setminus 0$,
and denote as $z\in T_{\pi(v)}M$ the $g_v^0$-dual of $\beta$, i.e. $\beta(w)=g_v^0(w,z)$ for all $w\in T_{\pi(v)}M$. Consider an orthonormal basis $e_1,\ldots, e_N$ for $g_v^0$ such that $e_1=v/F_0(v)$ and $z=\lambda^1 e_1+\lambda^2 e_2$ for some $\lambda^1\in\mathds R$ and $\lambda^2\geq 0$. Putting
$w=\sum_{i=1}^N w^ie_i$ and using \eqref{fundamentalTensor}: \begin{equation}\label{ejemejem} 2 g_v(w,w)= \frac{1}{F_0} L_{,1} \sum_{i=2}^N (w^i)^2 + L_{,11} (w^1)^2 + 2L_{,12} (\lambda^1 w^1+\lambda^2 w^2)w^1 + L_{,22} (\lambda^1 w^1+\lambda^2 w^2)^2
\end{equation} and the result follows by choosing $w=e_3$.
\end{proof}
\begin{rem}\label{rL1defposit2} Rewriting the terms in \eqref{ejemejem} as $a (w^1)^2+2b w^1 w^2+c (w^2)^2+ d\sum_{i\geq 3} (w^i)^2$, the metric $g_v$ is positive definite if and only if $d>0$, $c>0$ and $ac-b^2> 0$, i.e., $L_{,1}>0$, $(\lambda^2)^2L_{,22}+\frac{1}{F_0} L_{,1}>0$ and \[(L_{,11}+2\lambda^1 L_{,12}+(\lambda^1)^2 L_{,22})(L_{,22}(\lambda^2)^2+\frac{1}{F_0} L_{,1})-(\lambda^2)^2(L_{,12}+\lambda^1 L_{,22})^2 > 0.\]
For $N=2$ the term in $d$ does not appear, and the condition $L_{,1}>0$ is dropped. \end{rem}
\subsubsection{Canonical form $F=F_0\cdot\phi(\beta/F_0)$} Next, for any $(F_0,\beta)$-metric $F=\sqrt{L(F_0,\beta)}$ we define $\phi(s)=\sqrt{L(1,s)}$ and, thus, $F=F_0\cdot\phi(\beta/F_0)$. This change yields convenient expressions for $(F_0,\beta)$-metrics, including the Kropina, Matsumoto and Randers ones. The next result is related to one by Chern and Shen in \cite{ChSh05}, which is discussed in an Appendix (Subsection \ref{sA}). \begin{prop}\label{functionpsi} Let $\phi:I\subseteq\mathds R\rightarrow \mathds R$ be a smooth function
satisfying $\phi>0$. Given a conic Finsler metric
$F_0:A_0\subseteq TM\rightarrow \mathds R$ and a one-form $\beta$, consider
the conic pseudo-Finsler metric \[F(v)=F_0(v)\phi\left(\frac{\beta(v)}{F_0(v)}\right)\] on $A:=\{v\in A_0: \beta(v)/F_0(v)\in I \}$ with the convention $0_p\in A_p(\subset A)$ if and only if $T_pM\setminus\{0_p\}\subset A_p$, for each $p\in M$. Put $\psi(s):=\phi^2(s)$ for all $s\in I$ and define the functions: \begin{equation}\label{psipsi} \varphi_1:=2\psi-s\dot\psi \; (=2\phi(\phi -s\dot \phi)), \quad \varphi_2:=2\psi\ddot\psi-\dot\psi^2 \; (=4\ddot \phi \phi^3).\end{equation} The fundamental tensor $g$ of $F$ is determined by \begin{multline}\label{alphabetatensor}
2 g_v(w,w)= \varphi_1\left(\frac{\beta(v)}{F_0(v)}\right) h^0_v(w,w) \\ +\frac12 \psi\left(\frac{\beta(v)}{F_0(v)}\right)^{-1}\varphi_2\left(\frac{\beta(v)}{F_0(v)}\right)\left(\frac{\beta(v)}{F_0(v)^2}g_v^0(v,w)-\beta(w)\right)^2 \\ + \frac12 \psi\left(\frac{\beta(v)}{F_0(v)}\right)^{-1} \left(\varphi_1\left(\frac{\beta(v)}{F_0(v)}\right)\frac{g_v^0(v,w)}{F_0(v)}+\dot\psi\left(\frac{\beta(v)}{F_0(v)}\right)\beta(w)\right)^2, \end{multline} where $h_v^0$ is the $F_0$-angular metric defined in \eqref{angularmetric}, $v\in A \setminus 0$ and $w\in T_{\pi(v)}M$.
Moreover, $F$ is a conic Finsler metric on $A$ if \begin{align}
\varphi_1=2\psi-s\dot\psi>0 \; (\text{i.e. } \phi-s\dot\phi>0) &&\text{and}&& \varphi_2=2\psi\ddot\psi-\dot\psi^2 \geq 0 \; (\text{i.e. } \ddot\phi\geq 0). \label{eqpsi} \end{align} \end{prop} \begin{proof} To compute $g$, put $L(a,b)=a^2\psi(b/a)$, defined on \[\{(a,b)\in \mathds R^2\setminus \{(0,t):t\in\mathds R\}: b/a\in I\}.\] Then, \begin{align*}
L_{,1}(a,b)&=2a\, \psi\left(b/a\right)-b\,\dot\psi\left(b/a\right),&\\ L_{,11}(a,b)&=2\, \psi\left(b/a\right)-2\frac{b}{a}\,\dot\psi\left(b/a\right)+\frac{b^2}{a^2}\,\ddot\psi\left(b/a\right),&\\ L_{,12}(a,b)&=\dot\psi\left(b/a\right)-\frac{b}{a}\,\ddot\psi\left(b/a\right),&\\ L_{,22}(a,b)&=\ddot\psi\left(b/a\right). \end{align*} The expression of $g_v$ follows easily from \eqref{fundamentalTensor} and the expressions for $L_{,1},L_{,rs}$ above, recalling that \begin{multline}\label{hessianoL} \left(\begin{array}{cc} x_1 & x_2 \end{array}\right){\rm Hess (L)} \left(\begin{array}{c} x_1\\ x_2 \end{array}\right)=\frac{2\psi(s)\ddot\psi(s)-\dot\psi(s)^2}{2\psi(s)}(s x_1-x_2)^2\\+\frac{1}{2\psi(s)}((2\psi(s)-s\dot\psi(s))x_1+\dot\psi(s) x_2)^2, \end{multline} where $s=b/a$ and $x_1,x_2\in\mathds R$. The positive definiteness of $g_v$ under conditions \eqref{eqpsi} is immediate from expression \eqref{alphabetatensor}.
\end{proof}
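As an illustrative sanity check (not part of the paper's argument), the closed forms of $L_{,1}$, $L_{,11}$, $L_{,12}$, $L_{,22}$ and the quadratic-form identity \eqref{hessianoL} from the proof above can be verified numerically. Here $\psi(s)=e^s$ is an arbitrary smooth positive test function, and derivatives are approximated by central finite differences:

```python
import math

# Check, for L(a,b) = a^2 * psi(b/a) with the test function psi = exp,
# the closed forms of the partial derivatives used in the proof,
# and the quadratic-form identity for Hess(L).

psi = math.exp
dpsi = math.exp     # exp is its own derivative
ddpsi = math.exp

def L(a, b):
    return a * a * psi(b / a)

def hess(f, a, b, h=1e-4):
    # second partial derivatives by central differences
    f11 = (f(a + h, b) - 2 * f(a, b) + f(a - h, b)) / h**2
    f22 = (f(a, b + h) - 2 * f(a, b) + f(a, b - h)) / h**2
    f12 = (f(a + h, b + h) - f(a + h, b - h)
           - f(a - h, b + h) + f(a - h, b - h)) / (4 * h**2)
    return f11, f12, f22

a, b = 1.3, 0.7
s = b / a

# closed forms from the proof of the proposition
L1 = 2 * a * psi(s) - b * dpsi(s)
L11 = 2 * psi(s) - 2 * s * dpsi(s) + s**2 * ddpsi(s)
L12 = dpsi(s) - s * ddpsi(s)
L22 = ddpsi(s)

num_L1 = (L(a + 1e-6, b) - L(a - 1e-6, b)) / 2e-6
n11, n12, n22 = hess(L, a, b)
assert abs(L1 - num_L1) < 1e-5
assert abs(L11 - n11) < 1e-4 and abs(L12 - n12) < 1e-4 and abs(L22 - n22) < 1e-4

# quadratic-form identity (hessianoL): for any (x1, x2),
# x^T Hess(L) x = (2 psi psi'' - psi'^2)/(2 psi) (s x1 - x2)^2
#                 + ((2 psi - s psi') x1 + psi' x2)^2 / (2 psi)
x1, x2 = 0.4, -1.1
lhs = L11 * x1**2 + 2 * L12 * x1 * x2 + L22 * x2**2
rhs = ((2 * psi(s) * ddpsi(s) - dpsi(s)**2) / (2 * psi(s)) * (s * x1 - x2)**2
       + ((2 * psi(s) - s * dpsi(s)) * x1 + dpsi(s) * x2)**2 / (2 * psi(s)))
assert abs(lhs - rhs) < 1e-9
```

The identity holds exactly (it is an algebraic rearrangement), so only the finite-difference comparisons carry a tolerance.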
\begin{rem} \label{rL1defposit} In the set of sufficient conditions $\varphi_1>0, \varphi_2\geq 0$ (Eq. \eqref{eqpsi}), the first one is also necessary (to obtain the positive definiteness of $g$) when $N>2$, as $L_{,1}(F_0(v),\beta(v)) = F_0(v) \varphi_1(s)$ for $s=\beta(v)/F_0(v)$ (recall Proposition \ref{reciproco}). \end{rem}
\subsubsection{Kropina type metrics}
Kropina metrics are $(\alpha, \beta)$-metrics of the form $\alpha^2/\beta$, which were first studied by Kropina in \cite{Kro59} (see also \cite{Mat91a,Shi78,SiSi85,YoOk07}). In our extension, we consider not only an $(F_0,\beta)$-metric but also introduce an arbitrary exponent $q>0$. We remark that in the paper \cite{SPK03}, the authors study when two Finsler metrics $F$ and $F_0$ are related by $F=F_0/\beta$ for some one-form $\beta$ and in \cite[formula (6)]{Bogos}, the authors consider a quadratic Finsler metric of this type.
\begin{cor}\label{cKropina} Consider $F=F_0^{q+1}/|\beta|^{q}$ and $A=\{v\in A_0 : \beta(v)\not=0\}.$
Then $F$ is a conic Finsler metric defined on $A$ for every $q>0$, with fundamental tensor determined
by \begin{multline}\label{gkropina}
\left|\frac{\beta(v)}{F_0(v)}\right|^{2q} g_v(w,w)=(q+1)h^0_v(w,w) \\
+q(q+1)
\left(\frac{g_v^0(v,w)}{F_0(v)}-\frac{F_0(v)}{\beta(v)}\beta(w)\right)^2\\
+ \left((q+1)\frac{g_v^0(v,w)}{F_0(v)}-q\frac{F_0(v)}{\beta(v)}\beta(w)\right)^2 \end{multline}
where $v\in A$ and $w\in T_{\pi(v)}M$. In particular, any Kropina metric $F=\alpha^2/|\beta|$ is strongly convex in its natural domain of definition. \end{cor} \begin{proof}
Observe that $F=F_0^{q+1}/|\beta|^{q}=F_0
\left|F_0/\beta\right|^{q}$. Now if $\phi(s)=|s|^{-q}$, $s\in\mathds R\setminus \{0\}$, then $F=F_0\phi\left(\beta/F_0\right)$. Moreover, inequalities \eqref{eqpsi} become \begin{align*}
\varphi_1(s)=2(q+1)|s|^{-2q}&>0,&
\varphi_2(s)=4q(q+1)|s|^{-4q-2}&\geq 0, \end{align*} which hold whenever $q>0$. Therefore, the formula for $g_v$ follows from \eqref{alphabetatensor} (use
$|s|^{2q}\psi^{-1}(s)=|s|^{4q}; \dot \psi(s)=-2q|s|^{-2q} s^{-1}$), and
positive definiteness is obvious from the expression of $g_v$. \end{proof}
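As an illustrative numerical check (not part of the paper), the closed forms $\varphi_1(s)=2(q+1)|s|^{-2q}$ and $\varphi_2(s)=4q(q+1)|s|^{-4q-2}$ used in the proof can be recovered from $\psi(s)=|s|^{-2q}$ by finite differences, confirming that conditions \eqref{eqpsi} hold for every $q>0$:

```python
import math

# For the Kropina-type function phi(s) = |s|^{-q}, i.e. psi(s) = |s|^{-2q},
# compare the finite-difference values of varphi_1 = 2 psi - s psi' and
# varphi_2 = 2 psi psi'' - (psi')^2 against their closed forms.

def check(q, s, h=1e-4):
    psi = lambda t: abs(t) ** (-2 * q)
    dpsi = (psi(s + h) - psi(s - h)) / (2 * h)
    ddpsi = (psi(s + h) - 2 * psi(s) + psi(s - h)) / h**2
    phi1 = 2 * psi(s) - s * dpsi
    phi2 = 2 * psi(s) * ddpsi - dpsi**2
    assert math.isclose(phi1, 2 * (q + 1) * abs(s) ** (-2 * q), rel_tol=1e-4)
    assert math.isclose(phi2, 4 * q * (q + 1) * abs(s) ** (-4 * q - 2), rel_tol=1e-4)
    assert phi1 > 0 and phi2 >= 0   # conditions (eqpsi) hold for q > 0

for q in (0.5, 1.0, 2.0):
    for s in (-0.8, 0.6, 1.7):
        check(q, s)
```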
\subsubsection{Matsumoto type metrics} Matsumoto metrics were introduced by Matsumoto in order to measure the walking time on a slope \cite{Mat89b}. They are $(\alpha,\beta)$-metrics of the form $\alpha^2/(\alpha-\beta)$. We extend them in the same way as the Kropina ones.
\begin{cor}\label{GenMatsumoto} For any $q\neq 0$, consider the conic pseudo-Finsler metric:
$$F=F_0^{q+1}/|F_0-\beta|^q, \quad A=\{v\in A_0 : F_0(v)\neq \beta(v)\}. $$ Then, its
fundamental tensor is determined by \begin{multline}
\left|\frac{F_0(v)-\beta(v)}{F_0(v)}\right|^{2q+2} g_v(w,w)=\\ \frac{(F_0(v)-\beta(v))(F_0(v)-(q+1)\beta(v))}{F_0(v)^2}h^0_v(w,w) \\ +q(q+1)\left(\frac{\beta(v)}{F_0(v)^2}g_v^0(v,w)-\beta(w)\right)^2\\
+\left(\frac{F_0(v)-(q+1)\beta(v)}{F_0(v)^2}g_v^0(v,w)+q\beta(w)\right)^2, \end{multline} for any $v\in A$ and $w\in T_{\pi(v)}M$.
When $q>0$ or $q\leq-1$ the restriction of $F$ to
$$A^*=\{v\in A_0 : (F_0(v)-(q+1)\beta(v))(F_0(v)-\beta(v))>0\}$$ is conic Finsler and, if $N>2$, $g$ is not strongly convex at any point of $A\setminus
A^*$. \end{cor} \begin{proof}
As $F=F_0^{q+1}/|F_0-\beta|^{q}=F_0
\left(F_0/|F_0-\beta|\right)^{q}$, putting $\phi(s)=|1-s|^{-q}$, $s\not=1$, then $F=F_0 \phi \left(\beta/F_0\right)$. So, inequalities \eqref{eqpsi} read \begin{align*}
\varphi_1(s)= 2 |1-s|^{-2q-2}(1-s)(1-(1+q)s)&>0,\\
\varphi_2(s)= 4 q(q+1) |1-s|^{-4q-2}&\geq 0, \end{align*} which hold if and only if $(1-s)(1-(1+q)s)>0$ and either $q\geq 0$ or $q\leq -1$. So, the formula for $g_v$ follows from \eqref{alphabetatensor} (use
$\psi(s)^{-1}\varphi_2(s) =4q(q+1)/|1-s|^{2q+2}; \dot
\psi(s)=2q|1-s|^{-2q} \, (1-s)^{-1}$), and
strong convexity in $A^*$ from the expression of $g_v$.
For the last assertion, apply Remark \ref{rL1defposit}
noticing that $A^*$ determines the region where $L_{,1}>0$. \end{proof} \begin{rem} (1) In the case $N=2$, the maximal domain $A^*$ where such a generalized Matsumoto metric is a conic Finsler one becomes more involved (see Remark \ref{rL1defposit2}). As a particular case of Corollary \ref{characterization} in the Appendix, the following necessary and sufficient condition to define $A^*$ is obtained (for simplicity, we put $q=1$):
\[3\beta(v)<2\|\beta \|^2_{g_v^0} +1 \quad \quad \hbox{for any} \, v\in A \; \hbox{with} \; F_0(v)=1. \]
(2) The class of metrics of the last corollary with $F_0=\alpha$ and $q<-1$ was considered in \cite{Zhou10} in order to obtain Finsler metrics with constant flag curvature. \end{rem} In particular, a known result on the Matsumoto metric is recovered. \begin{cor}\label{matsustrongly} Any
Matsumoto metric $F=\alpha^2/|\alpha-\beta|$ is strongly convex in \[A^* =\{v\in A_0 : (\alpha(v)-2\beta(v))(\alpha(v)-\beta(v))>0\}.\] \end{cor}
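Similarly to the Kropina case, the closed forms of $\varphi_1$ and $\varphi_2$ used in the proof of Corollary \ref{GenMatsumoto} can be sanity-checked numerically (an illustration, not part of the paper); $q=1$ corresponds to the classical Matsumoto metric of the last corollary, and the sample points below lie inside $A^*$:

```python
import math

# For the Matsumoto-type function phi(s) = |1-s|^{-q}, i.e. psi(s) = |1-s|^{-2q},
# verify by finite differences:
#   varphi_1(s) = 2 |1-s|^{-2q-2} (1-s) (1-(1+q)s)
#   varphi_2(s) = 4 q (q+1) |1-s|^{-4q-2}

def check(q, s, h=1e-4):
    psi = lambda t: abs(1 - t) ** (-2 * q)
    dpsi = (psi(s + h) - psi(s - h)) / (2 * h)
    ddpsi = (psi(s + h) - 2 * psi(s) + psi(s - h)) / h**2
    phi1 = 2 * psi(s) - s * dpsi
    phi2 = 2 * psi(s) * ddpsi - dpsi**2
    u = abs(1 - s)
    assert math.isclose(phi1, 2 * u**(-2*q - 2) * (1 - s) * (1 - (1 + q) * s),
                        rel_tol=1e-4)
    assert math.isclose(phi2, 4 * q * (q + 1) * u**(-4*q - 2), rel_tol=1e-4)
    assert phi1 > 0   # the sample points are chosen inside A^*

for q in (1.0, 2.0):
    for s in (-0.5, 0.1, 0.3):   # (1-s)(1-(1+q)s) > 0 at these points
        check(q, s)
```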
As a last consequence, we consider a class of metrics which include those in the references \cite{CuSh09,LiSh07,ShCi08}.
\begin{cor}Let us define $F=(F_0+\beta)^2/F_0$ and \[A=\{v\in A_0 : F_0(v)^2>\beta(v)^2\}.\] Then $F$ is strongly convex in $A$. \end{cor} \begin{proof} Apply Corollary \ref{GenMatsumoto} with $q=-2$. \end{proof}
\subsubsection{Randers type metrics} Randers metrics are $(\alpha,\beta)$-metrics defined by $\alpha+\beta$ (they are named after the work of G. Randers \cite{Ran41} about electromagnetism). Next we will consider Finsler metrics of the form $F_0+\beta$, which generalize the Randers construction. In particular, our result can be used to prove strong convexity of Randers metrics in a more direct way than in \cite[p. 284]{BaChSh00}. Moreover, it follows that the deformations of Kropina metrics considered in \cite{Rom06} are also strongly convex. \begin{cor}\label{randers}
Any metric $F=F_0+\beta$, where $A=\{v\in A_0 : F_0(v)+ \beta(v)>0\}$, is a conic Finsler metric on all of $A$ with fundamental tensor \[g_v(w,w)=\frac{F_0(v)+\beta(v)}{F_0(v)}h^0_v(w,w)+ \left(\frac{g_v^0(v,w)}{F_0(v)}+\beta(w)\right)^2,\] for any $v\in A\setminus 0$ and $w\in T_{\pi(v)}M$.
\end{cor}
\begin{proof}
Put $F=F_0\left(1+\frac{\beta}{F_0}\right)$ and define $\phi(s)=1+s$. As $\varphi_2\equiv 0$, inequalities \eqref{eqpsi} reduce to $\varphi_1=2(1+s)>0$, and the proof follows from
Proposition \ref{functionpsi}.
\end{proof} Notice that, essentially, this result can also be regarded as a particular case of Corollary \ref{rrandersr}.
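The Randers formula of Corollary \ref{randers} admits a direct numerical check (illustrative, not from the paper), using the standard fact that the fundamental tensor is the second derivative of $F^2/2$ along straight lines, here for the Euclidean $F_0$ on $\mathds R^2$ and $\beta(w)=\langle b,w\rangle$:

```python
import math

# Check g_v(w,w) = (1/2) d^2/dt^2 F(v+tw)^2 |_{t=0} against the closed
# formula (F_0+beta)/F_0 * h^0_v(w,w) + (g^0_v(v,w)/F_0 + beta(w))^2.

def dot(x, y):
    return sum(a * c for a, c in zip(x, y))

b = (0.3, -0.2)                               # the one-form beta, |b| < 1

def F(v):
    return math.sqrt(dot(v, v)) + dot(b, v)   # Randers metric F_0 + beta

def g_numeric(v, w, h=1e-4):
    f = lambda t: F((v[0] + t * w[0], v[1] + t * w[1])) ** 2
    return 0.5 * (f(h) - 2 * f(0) + f(-h)) / h**2

def g_formula(v, w):
    F0 = math.sqrt(dot(v, v))
    h0 = dot(w, w) - dot(v, w) ** 2 / F0**2   # Euclidean angular metric
    return (F0 + dot(b, v)) / F0 * h0 + (dot(v, w) / F0 + dot(b, w)) ** 2

v, w = (1.0, 0.5), (-0.3, 0.8)
assert math.isclose(g_numeric(v, w), g_formula(v, w), rel_tol=1e-5)
```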
\subsubsection{A further generalization: $(F_1,F_2)$-metrics} From now on, $F_1$ and $F_2$ will be two conic Finsler metrics defined in a common conic domain $A_0$. In order to show the power of our computations, let us explore the consequences of changing $\beta$ into a second Finsler metric in $(F_0,\beta)$-metrics. \begin{prop}\label{functionpsi2} Let $\phi:I\subset\mathds R\rightarrow \mathds R$, $\phi>0$ be a smooth function, and put $\psi (= \phi^2)$, $\varphi_1$, $ \varphi_2$
as in Proposition \ref{functionpsi}. Define the conic pseudo-Finsler metric: \[F(v)=F_1(v)\phi\left(\frac{F_2(v)}{F_1(v)}\right)\] on $A=\{v\in A_0: F_2(v)/F_1(v)\in I\}$, with the convention $0_p\in A_p(\subset A)$ if and only if $T_pM\setminus\{0_p\}\subset A_p$, for each $p\in M$. The fundamental tensor $g$ of $F$ is: \begin{multline}\label{alphabetatensor2}
2 g_v(w,w)=\varphi_1\left(\frac{F_2(v)}{F_1(v)}\right) h^1_v(w,w) +
\frac{F_1(v)}{F_2(v)} \dot\psi\left(\frac{F_2(v)}{F_1(v)}\right)h^2_v(w,w)\\ +\frac 12\psi\left(\frac{F_2(v)}{F_1(v)}\right)^{-1}\varphi_2\left(\frac{F_2(v)}{F_1(v)}\right) \left(\frac{F_2(v)}{F_1(v)^2}g^1_v(v,w)-\frac{g^2_v(v,w)}{F_2(v)}\right)^2\\ +\frac 12\psi\left(\frac{F_2(v)}{F_1(v)}\right)^{-1} \left(\varphi_1\left(\frac{F_2(v)}{F_1(v)}\right)\frac{g^1_v(v,w)}{F_1(v)}+\dot\psi\left(\frac{F_2(v)}{F_1(v)}\right)\frac{g^2_v(v,w)}{F_2(v)}\right)^2, \end{multline} for any $v\in A \setminus 0$ and $w\in T_{\pi(v)}M$.
Moreover, $F$ is a conic Finsler metric on $A$ if $\dot \psi \geq 0$ and Eq.
\eqref{eqpsi} is satisfied (i.e. $\varphi_1>0, \varphi_2\geq 0$).
\end{prop} \begin{proof} The proof is analogous to that of Proposition \ref{functionpsi}. Just observe that $L_{,2}(a,b)=a\dot\psi(b/a)$ and apply Theorem \ref{central} as in Proposition \ref{functionpsi}. \end{proof}
\begin{cor}\label{GenMatsumoto2}
For any $q\neq 0$, consider the conic pseudo-Finsler metric:
$$F=F_1^{q+1}/|F_1-F_2|^q, \quad A=\{v\in A_0 : F_1(v)\neq F_2(v)\}.$$
Then, its
fundamental tensor is given by
\begin{multline} \left|\frac{F_1(v)-F_2(v)}{F_1(v)}\right|^{2q+2}g_v(w,w)=\\ \frac{(F_1(v)-F_2(v))(F_1(v)-(q+1)F_2(v))}{F_1(v)^2}h^1_v(w,w)\\ +q \frac{F_1(v)-F_2(v)}{F_2(v)} h^2_v(w,w)\\ +q(q+1)\left(\frac{F_2(v)}{F_1(v)^2}g^1_v(v,w)-\frac{g^2_v(v,w)}{F_2(v)}\right)^2\\
+\left(\frac{F_1(v)-(q+1)F_2(v)}{F_1(v)^2}g^1_v(v,w)+q\frac{g^2_v(v,w)}{F_2(v)}\right)^2, \end{multline} where $v\in A$ and $w\in T_{\pi(v)}M$.
When $q>0$
the restriction of $F$ to $$A^*=\{v\in A_0 : F_1(v)-(q+1)F_2(v)>0 \, \hbox{and} \, F_1(v)-F_2(v)>0\}$$ is conic Finsler.
\end{cor} \begin{proof} Apply Proposition \ref{functionpsi2} following the same lines as in Corollary \ref{GenMatsumoto}. Notice that we have to ensure now $F_1>F_2$ in the definition of $A^*$ because, otherwise, $\dot\psi(s)$ is not positive (recall the third line in the expression of $g_v$). \end{proof}
\subsection{Appendix}\label{sA} Let us remark that in \cite[Lemma 1.1.2]{ChSh05}, the authors obtained a result closely related to our Proposition \ref{functionpsi} for $(\alpha,\beta)$-metrics, concretely:
\begin{crit}[Chern and Shen, \cite{ChSh05}]\label{shenlemma} $F=\alpha \phi(\beta/\alpha)$ is a Minkowski norm for any Riemannian metric
$\alpha$ and 1-form $\beta$ with $\|\beta\|_\alpha<b_0$ if and only if $\phi=\phi(s)>0$ satisfies the following conditions: \begin{equation}
(\phi(s)-s\dot\phi(s))+(b^2-s^2)\ddot\phi(s)>0,
\label{crit}\end{equation} where $s$ and $b$ are arbitrary numbers with $|s|\leq b<b_0$, and $\phi$ is defined in a symmetric interval $(-b_0,b_0)$. \end{crit}
\begin{rem}\label{rcrit} To compare with Proposition \ref{functionpsi}, recall that there, the hypotheses \eqref{eqpsi} on $\phi$ were $\phi(s)-s\dot\phi(s)>0$ and $\ddot \phi(s)\geq 0$. Clearly, these two hypotheses imply \eqref{crit}. Conversely, \eqref{crit}
implies the first of the two hypotheses (choose $b=s$, and notice that any $b\leq \|\beta\|_\alpha$ can be chosen), but it is somewhat weaker than the second one. The reason is that the criterion above assumes
$\|\beta\|_\alpha<b_0$ but no assumption on $\beta$ was made in Proposition \ref{functionpsi}. However, even in this case, the criterion can be applied to give our sufficient condition. Namely, taking an increasing sequence of compact neighborhoods $\{K_j\}$ which exhausts $M$, for each $K_j$ there exists a constant $b_j$ which plays the role of $b_0$ and, if $\{b_j\}\rightarrow \infty$ then condition \eqref{crit} splits into the two conditions of Proposition \ref{functionpsi}.
For the sake of completeness, Criterion \ref{shenlemma} will be reobtained next for $(F_0,\beta)$-metrics, and its hypotheses will be stated in a more accurate way (see Remark \ref{rult}). We will follow the approach in previous references on the topic, computing the determinant of the matrix $(g_v)_{ij}$ explicitly. \end{rem}
Let us fix a coordinate system $\varphi:U\rightarrow \bar{U}\subset \mathds R^N$ and denote by $\frac{\partial}{\partial x^1},\ldots,\frac{\partial}{\partial x^N}$ the vector fields associated to the system. Moreover, $g_{ij}(v)$ and $g^0_{ij}(v)$ will denote the coordinates of the fundamental tensors $g_v$ and $g_v^0$ associated to $F=\sqrt{L(F_0,\beta)}$ and $F_0$, respectively. Then from \eqref{fundamentalTensor} it follows that \begin{equation}\label{FTcoordinates} 2g_{ij}(v)=\frac{L_{,1}}{F_0(v)}\left(g^0_{ij}(v)-\frac{1}{F_0(v)^2}v_iv_j\right)+\frac{L_{,11}}{F_0(v)^2}v_iv_j+\frac{L_{,12}}{F_0(v)}(v_ib_j+v_jb_i)+L_{,22}b_ib_j, \end{equation} where $v=\sum_{i=1}^N v^i\frac{\partial}{\partial x^i}$, $v_i=\sum_{j=1}^N g^0_{ij}(v)v^j$ and $b_i=\beta(\frac{\partial}{\partial x^i})$, $i=1,\ldots,N$. Our goal is to compute the determinant of the matrix $(g_{ij}(v))$. We will need the following algebraic result whose proof can be found in \cite[Proposition 30.1]{Mat86} (for symmetric matrices) or in \cite[Proposition 11.2.1]{BaChSh00}. \begin{lemma}\label{algebraiclemma} Let $P=(p_{ij})$ and $Q=(q_{ij})$ be $n\times n$ real matrices and $C=(c_i)$ be an $n$-vector. Assume that $Q$ is invertible with $Q^{-1}=(q^{ij})$, and $p_{ij}=q_{ij}+\delta c_ic_j.$ Then \[\det(p_{ij})=(1+\delta c^2)\det (q_{ij}),\] where $c:=\sqrt{\sum_{i,j=1}^nq^{ij}c_ic_j}$. Therefore, if $1+\delta c^2\not=0$, then $P$ is invertible, and its inverse matrix $P^{-1}=(p^{ij})$ is given by \[p^{ij}=q^{ij}-\frac{\delta c^ic^j}{1+\delta c^2},\] where $c^i:=\sum_{j=1}^n q^{ij}c_j$. \end{lemma} We remark that in \cite{SaShi,SaShiErr} similar computations to those of the following lemma have been carried out for $(\alpha,\beta)$-metrics. \begin{lemma}\label{determinant} Denote $\Delta_L=\det( {\rm Hess}(L))$. Then \begin{multline}\label{detgij} \det (g_{ij}(v))=\frac{L_{,1}^{N-2}}{2^NF_0(v)^{N+1}}\left(F_0(v)\Delta_L(F_0(v)^2
\|\beta\|_{g_v^0}^2-\beta(v)^2) +2L_{,1} L\right) \det (g^0_{ij}(v)). \end{multline} \end{lemma} \begin{proof} First observe that \[g_{ij}(v)=\frac{L_{,1}}{2F_0(v)}(g^0_{ij}(v)+\delta(v) (b_i+k(v) v_i)(b_j+k(v) v_j)+\mu(v) v_iv_j),\] where \begin{align*} &\delta(v)=\frac{L_{,22} F_0(v)}{L_{,1}},& k(v)=\frac{L_{,12}}{L_{,22}F_0(v)}&& \text{and} &&\mu(v)=\frac{\Delta_L}{L_{,1}L_{,22}F_0(v)}-\frac{1}{F_0(v)^2}. \end{align*} If we call $c_i=b_i+k(v) v_i$ and $g^0(v)^{ij}=(g^0_{ij})^{-1}(v)$, applying twice Lemma \ref{algebraiclemma}, we obtain that \begin{multline*} \det(g_{ij}(v))=\frac{ L_{,1}^N}{2^NF_0(v)^N}((1+\mu(v)\sum_{i,j= 1}^N g^0(v)^{ij}v_iv_j)(1+\delta(v) c^2)\\-\mu(v)\delta(v)\sum_{i,j=1}^Nc^ic^jv_iv_j) \det (g^0_{ij}(v)), \end{multline*} where \begin{multline*}
c^2=\sum_{i,j=1}^N g^0(v)^{ij}c_ic_j=\sum_{i,j=1}^Ng^0(v)^{ij}(b_i+k(v) v_i)(b_j+k(v) v_j)\\=\|\beta\|^2_{g_v^0}+2k(v) \beta(v)+k(v)^2 F_0(v)^2,
\end{multline*} $\sum_{i,j= 1}^N g^0(v)^{ij} v_iv_j=F_0(v)^2$ and \[\sum_{i,j=1}^N c^ic^j v_iv_j=\sum_{i,j=1}^N(b^i+k(v) v^i)(b^j+k(v) v^j)v_iv_j= (\beta(v)+k(v) F_0(v)^2)^2.\] Then, formula \eqref{detgij} follows
substituting $\delta(v)$, $k(v)$ and $\mu(v)$ by their values, and making some straightforward computations (use \[F_0(v)^2 L_{,11}+\beta(v)^2L_{,22}+2\beta(v)F_0(v)L_{,12}=2L,\] which follows from evaluating \eqref{fundamentalTensor} in $w=v$ and recalling \eqref{propfundtensor}). \end{proof} \begin{prop}\label{pdetg} If $F=F_0\phi(\beta/F_0)$, then $\det(g_{ij}(v))$ is equal to \begin{multline}\label{detphi}
\left(\phi-\frac{\beta(v)}{F_0(v)}\dot\phi\right)^{N-2}\left(\left(\|\beta\|_{g_v^0}^2-\frac{\beta(v)^2}{F_0(v)^2}\right)\ddot\phi +\phi-\frac{\beta(v)}{F_0(v)}\dot\phi\right)\phi^{N+1} \det (g^0_{ij}(v)). \end{multline} \end{prop} \begin{proof}
Using the expressions of the partial derivatives of $L$ computed in Proposition \ref{functionpsi}, Equation \eqref{detgij} becomes,
in terms of $\psi=\phi^2$, \begin{multline*} \det (g_{ij}(v))=\frac{(2F_0(v)\psi-\beta(v)\dot\psi)^{N-2}}{2^NF_0(v)^{N+1}}((2\psi\ddot\psi-(\dot\psi)^2)F_0(v)(F_0(v)^2\|\beta\|^2_{g_v^0}-\beta(v)^2)\\+(2F_0(v)\psi-\beta(v)\dot\psi)2\psi F_0(v)^2). \end{multline*} Then substituting $\psi=\phi^2$, $\dot\psi=2\phi\dot\phi$ and $\ddot\psi=2((\dot\phi)^2+\phi\ddot\phi)$ and making some straightforward computations we obtain \eqref{detphi}. \end{proof} \begin{cor}\label{characterization} Let $F=F_0\phi(\beta/F_0)$, choose $v_0\in A\setminus 0$ (see Proposition \ref{functionpsi}), and put $s_0 =\beta(v_0)/F_0(v_0)$. In the case of dimension $N>2$, the fundamental tensor $g_{v_0}$ is positive definite if and only if \begin{align}\label{etomashen} \phi(s_0)-s_0\dot\phi(s_0)>0,\\
\ddot\phi(s_0)\left(\|\beta\|_{g_{v_0}^0}^2-s_0^2\right) +\phi(s_0)-s_0 \dot\phi(s_0)>0 \end{align} and, in the case of dimension $N=2$, $g_{v_0}$ is positive definite if and only if the second inequality holds. \end{cor} \begin{proof} Define $\phi_t(s)=1-t+t\phi(s)$, in the same domain as $\phi$ for $t\in[0,1]$. Then, in dimension $N>2$ \begin{align*} \phi_t(s_0)-s_0\dot\phi_t(s_0)=1-t+t(\phi(s_0)-s_0\dot\phi(s_0))>0, \end{align*} and in any dimension $N\geq 2$ \begin{multline*}
\ddot\phi_t(s_0)\left(\|\beta\|_{g_{v_0}^0}^2-s_0^2\right)
+\phi_t(s_0)-s_0\dot\phi_t(s_0)\\=1-t+t\left[\ddot\phi(s_0)\left(\|\beta\|_{g_{v_0}^0}^2-s_0^2\right) +\phi(s_0)-s_0\dot\phi(s_0)\right]>0. \end{multline*} Then let $F_t(v_0)=F_0(v_0)\phi_t(\beta(v_0)/F_0(v_0))$, let $g_{v_0}^t$ be the fundamental tensor associated to $F_t$, and let $(g_{ij}^t(v_0))$ be the matrix of coordinates of $g_{v_0}^t$ in the coordinate chart previously fixed. Applying Proposition \ref{pdetg} to each $\phi_t$, one has $\det(g_{ij}^t(v_0))>0$ for every $t\in [0,1]$. Observe that for $t=0$, $F_t=F_0$ and, consequently, $(g_{ij}^0(v_0))$ is positive definite. Since the eigenvalues of $(g_{ij}^t(v_0))$ depend continuously on $t$ and $\det(g_{ij}^t(v_0))>0$ for every $t\in [0,1]$, no eigenvalue can vanish along the path, so $(g_{ij}^1(v_0))$ must be positive definite.
For the converse, observe that Proposition \ref{reciproco} implies that $\phi(s_0)-s_0\dot\phi(s_0)>0$ for $N>2$, and the other inequality follows for any $N\geq 2$ by using \eqref{detphi}, as $\det(g_{ij}(v_0))$ must be positive. \end{proof}
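The determinant formula of Proposition \ref{pdetg} can be illustrated numerically (a sanity check, not part of the paper): take $N=2$, the Euclidean $F_0$ on $\mathds R^2$ (so $\det(g^0_{ij})=1$), $\beta(w)=\langle b,w\rangle$, and the sample function $\phi=\exp$, which has $\dot\phi=\ddot\phi=\phi$:

```python
import math

def dot(x, y):
    return sum(a * c for a, c in zip(x, y))

b = (0.25, -0.1)                       # the one-form beta

phi = math.exp                         # sample phi with nonzero second derivative
def F(v):
    F0 = math.sqrt(dot(v, v))
    return F0 * phi(dot(b, v) / F0)

def g_matrix(v, h=1e-4):
    # Hessian of F^2/2 at v, by central differences
    def f(x, y):
        return 0.5 * F((x, y)) ** 2
    x, y = v
    g11 = (f(x+h, y) - 2*f(x, y) + f(x-h, y)) / h**2
    g22 = (f(x, y+h) - 2*f(x, y) + f(x, y-h)) / h**2
    g12 = (f(x+h, y+h) - f(x+h, y-h) - f(x-h, y+h) + f(x-h, y-h)) / (4*h**2)
    return g11, g12, g22

v = (1.0, 0.4)
F0 = math.sqrt(dot(v, v))
s = dot(b, v) / F0
b2 = dot(b, b)                          # ||beta||^2 w.r.t. the Euclidean g^0

g11, g12, g22 = g_matrix(v)
det_num = g11 * g22 - g12 ** 2
# Proposition (pdetg) with N = 2 and phi = dphi = ddphi = exp:
# det = ((b2 - s^2) exp(s) + exp(s)(1 - s)) * exp(3 s)
det_formula = ((b2 - s**2) * math.exp(s) + math.exp(s) * (1 - s)) * math.exp(3 * s)
assert math.isclose(det_num, det_formula, rel_tol=1e-5)
```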
\begin{rem}\label{rult} The inequalities in the previous corollary characterize in a precise way the maximal conic domain $A^*$ where $F$ is conic Finsler. In the second inequality, observe that
$\|\beta\|_{g_{v_0}^0}^2-s_0^2\geq 0$ (this follows directly from the definition of the norm $\|\beta\|_{g_{v_0}^0}$ and the equality $F_0(v_0)=\sqrt{g_{v_0}^0(v_0,v_0)}$).
If $F_0=\alpha$ (i.e. $F$ is a conic
$(\alpha, \beta)$-metric), then $\|\beta\|_{g_{v_0}^0}$ depends only on the point $p_0=\pi(v_0)$ and we write
$\|\beta\|_{\alpha_{p_0}}$ instead. Moreover, if $F$ is an
$(\alpha, \beta)$-metric (with $A=TM$), then an $\alpha$-unit vector $v_0$ can be chosen either in the kernel of $\beta$ or such that $\beta(v_0)=\|\beta\|_{\alpha_{p_0}}$ and, as a consequence, $s^2=\beta(v)^2/\alpha(v)^2$, $v\in A_{p_0}\setminus \{0\}$, takes all the values in the interval
$[0,\|\beta\|_{\alpha_{p_0}}^2]$ (notice that, in this case, $0$ must belong to the domain $I$ of definition of $\phi$). Thus, the following criterion (to be compared with Criterion \ref{shenlemma}, taking into account Remark \ref{rcrit}) is obtained: {\em an $(\alpha,\beta)$-metric $F=\alpha \phi(\beta/\alpha)$ is a Finsler one if and only if the inequalities in Corollary \ref{characterization} hold for all
$s\in \mathds R$ with $0\leq s^2 \leq \|\beta\|_\alpha^2$, where $\|\beta\|_\alpha:=\sup_{p\in M}\|\beta\|_{\alpha_{p}}$}, i.e.: \begin{align}\label{etomashen2} \phi(s)-s\dot\phi(s)>0,\\
\ddot\phi(s)\left(\|\beta\|_{\alpha}^2-s^2\right) +\phi(s)-s \dot\phi(s)>0, \end{align}
the latter under the convention that, if $\|\beta\|_\alpha=\infty$ then $\ddot \phi\geq 0$ on $I$.
\end{rem}
\section*{Acknowledgments}
The authors warmly acknowledge Professors R. Bryant, E. Caponio and G. Siciliano for helpful conversations on the topics of this paper and the anonymous referee for the interesting comments.
Both authors are partially supported by Regional J. Andaluc\'{\i}a Grant P09-FQM-4496 with FEDER funds. MAJ is also partially supported by MICINN project MTM2009-10418 and Fundaci\'on S\'eneca project 04540/GERM/06 and MS by MICINN-FEDER Grant MTM2010--18099.
\end{document}
\begin{document}
\title{Min- and Max-Entropy in Infinite Dimensions}
\author{Fabian Furrer} \email{fabian.furrer@itp.uni-hannover.de} \affiliation{Institute for Theoretical Physics, Leibniz Universit\"at Hannover, 30167 Hannover, Germany}
\author{Johan {\AA}berg} \email{jaaberg@phys.ethz.ch} \affiliation{Institute for Theoretical Physics, ETH Zurich,
8093 Zurich, Switzerland}
\author{Renato Renner} \email{renner@phys.ethz.ch} \affiliation{Institute for Theoretical Physics, ETH Zurich,
8093 Zurich, Switzerland}
\begin{abstract} We consider an extension of the conditional min- and max-entropies to infinite-dimensional separable Hilbert spaces. We show that these satisfy characterizing properties known from the finite-dimensional case, and retain information-theoretic operational interpretations, e.g., the min-entropy as maximum achievable quantum correlation, and the max-entropy as decoupling accuracy. We furthermore generalize the smoothed versions of these entropies and prove an infinite-dimensional quantum asymptotic equipartition property. To facilitate these generalizations we show that the min- and max-entropy can be expressed in terms of convergent sequences of finite-dimensional min- and max-entropies, which provides a convenient technique to extend proofs from the finite to the infinite-dimensional setting. \end{abstract}
\maketitle
\section{Introduction}
Entropy measures are fundamental to information theory. For example, in classical information theory a central role is played by the Shannon entropy \cite{Shannon} and in quantum information theory by the von Neumann entropy. Their usefulness partially stems from the fact that they have several convenient mathematical properties (e.g. strong subadditivity) that facilitate a `calculus' of information and uncertainty. Indeed, entropy measures can even be characterized axiomatically in terms of such properties \cite{Renyi}. However, equally important for their use in information theory is the fact that they are related to operational quantities. This means that they characterize the optimal efficiency by which various information-theoretic tasks can be solved. One example of such a task is source coding, where one considers a source that randomly outputs data according to some given probability distribution. The question of interest is how much memory is needed in order to store and faithfully regenerate the data. Another example is channel coding, where the aim is to reliably transmit information over a channel. Here we ask how many bits (or qubits in the quantum case) one can optimally transmit per use of the channel \cite{Shannon,ChannelCoding,Schuhmacher}.
The operational relevance of Shannon and von Neumann entropy is normally limited to the case when one considers the asymptotic limit over infinitely many instances of a random experiment, which are independent and identically distributed (iid) or can be described by a Markov process. In the case of source coding this corresponds to assuming an iid repetition of the source. In the limit of infinitely many such repetitions, the average number of bits one needs to store per output is given by the Shannon entropy of the distribution of the source \cite{Shannon}. In the general case, where we have more complicated types of correlations, or where we only consider finite instances, the role of the Shannon or von Neumann entropies appears to be taken over by other measures of entropy, referred to as the smooth min- and max-entropies \cite{RennerPhD}. For example, in \cite{ChannelCodingMaxEntropy,RenesRenner} it was found that the smooth max-entropy characterizes one-shot data compression, i.e., when we wish to compress a single output of an information source. Furthermore, in \cite{ChannelCodingMinEntropy} it was proved that in one single use of a classical channel, the transmission can be characterized by the difference between a smooth min- and max-entropy. The von Neumann entropy of a state can be regained via the quantum asymptotic equipartition property (AEP) \cite{RennerPhD,QuantumAEP}, by applying these measures to asymptotically many iid repetitions of the state. This allows us to derive properties of the von Neumann entropy from the smooth min- and max-entropies; a technique that has been used for an alternative proof of the quantum reverse Shannon theorem \cite{Reverse}, and to derive an entropic uncertainty relation \cite{Uncertainty}. 
The min- and max-entropies furthermore generalize the spectral entropy rates \cite{InformationSpectrum} (that are defined in an asymptotic sense) which themselves have been introduced as generalizations of the Shannon entropy \cite{Han,HanVerdu}. Closely related quantities are the relative min- and max-entropies \cite{parent}, which have been applied to entanglement theory \cite{entang1,entang2} as well as channel capacity \cite{capacity}.
So far, the investigations of the operational relevance and properties of the min- and max-entropy and their smoothed versions have been almost exclusively focused on quantum systems with finite-dimensional Hilbert spaces. Here we consider the min- and max-entropy in infinite-dimensional separable Hilbert spaces. Since the modeling in vast parts of quantum physics is firmly rooted in infinite-dimensional Hilbert spaces, it appears that such a generalization is crucial for the application of these tools. For example, it has recently been shown that the smooth min- and max-entropies are the relevant measures of entropy in certain statistical mechanics settings \cite{Oscar,Lidia}. An extension of these ideas to, e.g., quantized classical systems, would require an infinite-dimensional version of the min- and max-entropy. Another example is quantum key distribution (QKD), where in the finite-dimensional case the smooth min-entropy bounds the length of the secure key that can be extracted from an initial raw key \cite{RennerPhD}. The generalization to infinite dimensions has therefore direct relevance for continuous variable QKD (for references see, e.g., Section II.D.~3 of \cite{Scarani}). In such a scheme one uses the quadratures of the electromagnetic field to establish a secret key (as opposed to other schemes that use, e.g., the polarization degree of freedom of single photons). Since such QKD methods are based on the generation of coherent states and measurement of quadratures, it appears rather unavoidable to use infinite-dimensional Hilbert spaces to model the states of the field modes. Beyond the obvious application to continuous variable quantum key distribution, one can argue that there are several quantum cryptographic tasks that today are analyzed in finite-dimensional settings, which strictly speaking would require an analysis in infinite-dimensions, since there is in general no reason to assume the Hilbert spaces of the adversary's systems to be finite.
As indicated by the above discussion, an extension of the min- and max-entropies to an infinite-dimensional setting does not only require that we can reproduce known mathematical properties of these measures, but also that we should retain their operational interpretations. A complete study of this two-fold goal would bring us far beyond the scope of this work. However, here we pave the way for this development by introducing an infinite-dimensional generalization of the min- and max-entropy, and demonstrating a collection of `core' properties and operational interpretations. In particular, we derive (under conditions detailed below) a quantum AEP for a specific choice of an infinite-dimensional conditional von Neumann entropy. On a more practical level we introduce a technique that facilitates the extension of results proved for the finite-dimensional case to the setting of separable Hilbert spaces. More precisely, we show that the conditional min- and max-entropies for infinite-dimensional states can be
expressed as limits of entropies obtained by finite-dimensional truncations of the original state (Proposition~\ref{p:reduction of Hmin to finite dim}). This turns out to be a convenient tool for generalizations, and we illustrate this on the various infinite-dimensional extensions that we consider.
The $\epsilon$-smoothed min- and max-entropies are defined in terms of the `un-smoothed' ($\epsilon=0$) min- and max-entropies (which we simply refer to as `min- and max-entropy'). In Section~\ref{subsec:def} we extend these `plain' min- and max-entropies to separable Hilbert spaces.
Section~\ref{section:reduction of cond entropies} contains the main technical tool, Proposition~\ref{p:reduction of Hmin to finite dim}, by which the infinite-dimensional min- and max-entropies can be expressed as limits of sequences of finite-dimensional entropies. The proof of Proposition~\ref{p:reduction of Hmin to finite dim} is given in Appendix~\ref{app:proofprop1}. In Section~\ref{section:properties of min- and max-entropy} we consider properties of the min- and max-entropy, e.g., additivity and the data processing inequality. Section~\ref{chapter:operational interpretation} focuses on the generalization of operational interpretations. In Section~\ref{chapter:smooth entropy} we consider the extension of the $\epsilon$-smooth min- and max-entropies, for $\epsilon>0$. In Section~\ref{AEP} we bound the smooth min- and max-entropy of an iid state on a system $A$ conditioned on a system $B$ in terms of the conditional von Neumann entropy (Proposition \ref{prop:AEP lower bound}). This result relies on the assumption that $A$ has finite von Neumann entropy. If $A$ furthermore has a finite-dimensional Hilbert space (but the Hilbert space of $B$ is allowed to be separable) we prove that these smooth entropies converge to the conditional von Neumann entropy (Corollary \ref{cor:AEP}), which corresponds to a quantum AEP.
The paper ends with a short summary and outlook in Section~\ref{concl}.
\section{\label{chapter:conditioned entropies} Min- and Max-Entropy}
\subsection{\label{subsec:def}Definition of the conditional min- and max-entropy}
Associated to each quantum system is a Hilbert space $H$, which we assume to be separable in all that follows. We denote the bounded operators by $\mathcal{L}(H)=\{A:H \rightarrow H \ |\ \Vert A\Vert < \infty\}$, where $\Vert A\Vert = \sup_{\Vert \psi\Vert = 1}\Vert A|\psi\rangle\Vert $ is the standard operator norm. Among these, the trace class operators have the additional property of a finite trace norm $\Vert T\Vert_1 := \tr|T| = \tr\sqrt{T^{\dagger}T}$. The set of trace class operators is denoted by $\tau_1(H):= \{ T \in \mathcal{L}(H)| \ \Vert T\Vert_1 < \infty \}$.
We consider states which can be represented as density operators, i.e., normal states \cite{Bratteli Robinson}, and denote the set of all these states as $\mathcal{S}(H):=\{\rho \in \tau_1(H)| \ \rho \geq 0, \Vert \rho\Vert_1 =1 \}$. It is often convenient to allow non-normalized density operators, which form the positive cone $\tau^+_1(H)\subset\tau_1(H)$ consisting of all non-negative trace class operators.
We define the conditional min- and max-entropy of bipartite quantum systems analogously to the finite-dimensional case \cite{OperationalMeaning}.\footnote{Max-entropy as we define it in Eq.~(\ref{def,eq3:min/max-entropy}) is related to the R\'enyi 1/2-entropy (see Section~\ref{subsect:entropy of pure states versus unconditioned entropy} or \cite{OperationalMeaning,OnTheSmoothing}). In the original definition \cite{RennerPhD} max-entropy was defined in terms of the R\'enyi 0-entropy.}
\begin{de} \label{def:min/max-entropy}Let $H_A$ and $H_B$ be separable Hilbert spaces and $\rho_{AB} \in \tau_1^+(H_A \otimes H_B)$. The min-entropy of $\rho_{AB}$ conditioned on $\sigma_{B} \in \tau_1^+(H_B)$ is defined by \begin{equation}\label{def,eq1:min/max-entropy}
H_{\mathrm{min}}(\rho_{AB}|\sigma_{B}) := -\log \inf \{\lambda \in \mathbb{R} | \lambda \id_{A} \otimes \sigma_{B} \geq \rho_{AB} \}, \end{equation}
where we let $H_{\mathrm{min}}(\rho_{AB}|\sigma_B) := -\infty$ if the condition $\lambda \id_{A} \otimes \sigma_{B} \geq \rho_{AB}$ cannot be satisfied for any $\lambda \in \mathbb{R}$. Moreover, we define the min-entropy of $\rho_{AB}$ conditioned on B by \begin{equation}\label{def,eq2:min/max-entropy}
H_{\mathrm{min}}(\rho_{AB}|B) := \sup_{\sigma_{B} \in \mathcal{S}(H_B)} H_{\mathrm{min}}(\rho_{AB}|\sigma_{B}). \end{equation} The max-entropy of $\rho_{AB}$ conditioned on B is defined as the dual of the min-entropy \begin{equation}\label{def,eq3:min/max-entropy}
H_{\mathrm{max}}(\rho_{AB}|B) := -H_{\mathrm{min}}(\rho_{AC}|C), \end{equation} where $\rho_{ABC}$ is a purification of $ \rho_{AB}$. \end{de}
In the definition above, and in all that follows, we let ``$\log$'' denote the binary logarithm. The reduction of a state to a subsystem is indicated by the labels of the Hilbert space, e.g., $\rho_A = \tr_C\rho_{AC}$. Note that the max-entropy $ H_{\mathrm{max}}(\rho_{AB}|B)$ as defined in (\ref{def,eq3:min/max-entropy}) is independent of the choice of the purification $\rho_{ABC}$, and thus well-defined. This follows from the fact that two purifications can only differ by a partial isometry on the purifying system, and the min-entropy $H_{\mathrm{min}}(\rho_{AC}|C)$ is invariant under these partial isometries on subsystem C.
The two optimizations in the definition of $H_{\mathrm{min}}(\rho_{AB}|B)$, in Eqs.~(\ref{def,eq1:min/max-entropy}) and (\ref{def,eq2:min/max-entropy}), can be combined into \begin{equation}\label{eq:equiv def of the min-entropy}
H_{\mathrm{min}}(\rho_{AB}|B) = -\log \big(\inf \{ \tr\tilde{\sigma}_B \ | \ \tilde{\sigma}_B \in \tau^+_1(H_B), \ \id_A \otimes \tilde{\sigma}_B \geq \rho_{AB}\} \big). \end{equation} For convenience we introduce the following two quantities: \begin{eqnarray} \label{lambdadef1}
\Lambda(\rho_{AB}|\sigma_B) & := & 2^{- H_{\mathrm{min}}(\rho_{AB}|\sigma_B)} =\inf \{\lambda \in \mathbb{R} | \lambda \id_{A} \otimes \sigma_{B} \geq \rho_{AB} \},\\
\label{lambdadef2} \Lambda(\rho_{AB}|B) &:= & 2^{- H_{\mathrm{min}}(\rho_{AB}|B)} = \inf \{ \tr\tilde{\sigma}_B \ | \ \tilde{\sigma}_B \in \tau^+_1(H_B), \id_A \otimes \tilde{\sigma}_B \geq \rho_{AB}\}. \end{eqnarray}
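In finite dimensions with a full-rank $\sigma_B$, the infimum in Eq.~(\ref{lambdadef1}) is simply the largest eigenvalue of $(\id_A \otimes \sigma_B)^{-1/2}\,\rho_{AB}\,(\id_A \otimes \sigma_B)^{-1/2}$, which makes the quantity easy to evaluate numerically. The following is a minimal sketch (Python with numpy; the helper name \texttt{h\_min} and the two-qubit example are our own choices, not notation from the text):

```python
import numpy as np

def h_min(rho_AB, sigma_B, dA):
    """H_min(rho_AB|sigma_B) = -log2 of the largest eigenvalue of
    (I_A (x) sigma_B)^{-1/2} rho_AB (I_A (x) sigma_B)^{-1/2},
    assuming sigma_B has full rank."""
    M = np.kron(np.eye(dA), sigma_B)
    # inverse square root via eigendecomposition of I_A (x) sigma_B
    w, V = np.linalg.eigh(M)
    M_inv_sqrt = V @ np.diag(w ** -0.5) @ V.conj().T
    lam = np.linalg.eigvalsh(M_inv_sqrt @ rho_AB @ M_inv_sqrt).max()
    return -np.log2(lam)

# maximally entangled two-qubit state, conditioned on sigma_B = I/2
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho = np.outer(phi, phi)
print(h_min(rho, np.eye(2) / 2, 2))  # -> -1.0
```

The value $-1$ agrees with the pure-state formula $-2\log\tr\sqrt{\rho_A}$ derived later, since here $\rho_A = \id/2$.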
\subsection{\label{section:reduction of cond entropies} Finite-dimensional approximations of min- and max-entropies}
In this section we present the main result, Proposition~\ref{p:reduction of Hmin to finite dim}, that provides a method to express the conditional min- and max-entropy as a limit of min- and max-entropies of finite-dimensional systems. The rough idea is to choose sequences $\{P^A_k\}_{k=1}^{\infty}$ and $\{P^B_k\}_{k=1}^{\infty}$ of projectors \footnote{With ``projector'' we mean a bounded operator $P$ such that $P^2 = P$ and $P^{\dagger} = P$, which in the mathematics literature usually is referred to as an ``orthogonal projector''.} onto finite-dimensional subspaces $U_k^A \subset H_A$ and $U_k^B \subset H_B$, respectively, both converging to the identity. Then we define a sequence of non-normalized states as $\rho_{AB}^k = (P^A_k \otimes P^B_k) \rho_{AB} (P^A_k \otimes P^B_k)$. The min- or max-entropy of $\rho_{AB}^k$ can then be treated as if the underlying Hilbert space were $U_k^A \otimes U_k^B$ (Lemma~\ref{I:id-P}), and is therefore effectively finite-dimensional. Proposition~\ref{p:reduction of Hmin to finite dim} shows that, as $k\rightarrow \infty$, these finite-dimensional entropies approach the desired infinite-dimensional entropy. As we will see, this provides a convenient method to extend properties from the finite to the infinite setting.
When we say that an operator sequence $Q_{k}$ converges to $Q$ in the weak operator topology we mean that $\lim_{k\rightarrow \infty} \langle \chi |Q-Q_{k}|\psi\rangle = 0$ for all $|\chi\rangle,|\psi\rangle\in H$. The sequence converges in the strong operator topology if $\lim_{k\rightarrow \infty} \Vert (Q-Q_{k})|\psi\rangle\Vert = 0$ for all $|\psi\rangle\in H$.
\begin{de} \label{def:projected states} Let $\{P^A_k\}_{k\in\mathbb{N}} \subset \mathcal{L}(H_A)$, $\{P^B_k\}_{k\in\mathbb{N}} \subset \mathcal{L}(H_B)$ be sequences of projectors such that for each $k \in \mathbb{N}$ the projection spaces $U_k^A\subset H_A$, $U_k^B\subset H_B$ of $P_k^A$, $P_k^B$ are finite-dimensional, $P_k^A \leq P_{k'}^A$ and $P_k^B \leq P_{k'}^B$ for all $k \leq k'$, and $P_k^{A}$, $P_k^B$ converge in the weak operator topology to the identity. We refer to such a sequence $(P_k^A,P_k^B)$ as a generator of projected states. For $\rho_{AB} \in \mathcal{S}(H_A \otimes H_B)$ we define the (non-normalized) states \begin{equation} \label{defprojectedstates} \rho_{AB}^k := (P^A_k \otimes P^B_k) \rho_{AB} (P^A_k \otimes P^B_k), \end{equation} which we call the projected states of $\rho_{AB}$ relative to $(P_k^A,P_k^B)$. Moreover, we refer to \begin{equation} \label{snklv} \hat{\rho}_{AB}^k := \frac{\rho_{AB}^k}{ \tr\rho_{AB}^k} \end{equation} as the normalized projected states of $\rho_{AB}$ relative to $(P_k^A,P_k^B)$. \end{de}
Note that a sequence of projectors that converges in the weak operator topology to the identity also converges in the strong operator topology to the identity. As a matter of convenience, we can thus in all that follows regard the generators of projected states as converging in the strong operator topology. One may also note that the sequence of projected states $\rho_{AB}^k$ (as well as the normalized projected states $\hat{\rho}_{AB}^k$) converges to $\rho_{AB}$ in the trace norm (see Corollary~\ref{dfnbkl} in Appendix~\ref{section:Technical Lemmas}). The normalized projected states in Eq.~(\ref{snklv}) are of course only defined if $\tr\rho_{AB}^k \neq 0$. However, this is true for all sufficiently large $k$ due to the trace norm convergence to $\rho_{AB}$.
\begin{p}\label{p:reduction of Hmin to finite dim} For $\rho_{AB}\in\mathcal{S}(H_A \otimes H_B)$, let $\{\rho_{AB}^k\}_{k\in\mathbb{N}}$ be the projected states of $\rho_{AB}$ relative to a generator $(P_k^A,P_k^B)$, and $\hat{\rho}^k_{AB}$ the corresponding normalized projected states. Furthermore, let $\sigma_B \in \mathcal{S}(H_B)$ and define the operators $\sigma_B^k := P^B_k\sigma_B P^B_k$ and $\hat{\sigma}_B^k :=\tr(\sigma_B^k)^{-1}\sigma^k_B$. Then, the following three statements hold. \begin{equation} \label{p,eq1:reduction of Hmin to finite dim}
H_{\mathrm{min}}(\rho_{AB}|\sigma_B) = \lim_{k\rightarrow \infty} H_{\mathrm{min}}(\rho^k_{AB}|\sigma^k_B) =\lim_{k\rightarrow \infty} H_{\mathrm{min}}\big(\hat{\rho}^k_{AB}|\hat{\sigma}^k_B\big), \end{equation} and the infimum in Eq.~(\ref{def,eq1:min/max-entropy}) is attained if $H_{\mathrm{min}}(\rho_{AB}|\sigma_B)$ is finite. \begin{equation}\label{p,eq2:reduction of Hmin to finite dim}
H_{\mathrm{min}}(\rho_{AB}|B) = \lim_{k\rightarrow \infty} H_{\mathrm{min}}(\rho_{AB}^k|B_k) = \lim_{k\rightarrow \infty} H_{\mathrm{min}} \big(\hat{\rho}^k_{AB}|B_k\big), \end{equation} and the supremum in Eq.~(\ref{def,eq2:min/max-entropy}) is attained if $H_{\mathrm{min}}(\rho_{AB}|B)$ is finite. \begin{equation}\label{p,eq3:reduction of Hmin to finite dim}
H_{\mathrm{max}}(\rho_{AB}|B) = \lim_{k \rightarrow \infty} H_{\mathrm{max}}(\rho_{AB}^k|B_k) = \lim_{k\rightarrow \infty} H_{\mathrm{max}} \big(\hat{\rho}^k_{AB}|B_k\big). \end{equation} Here, $B_k$ denotes the restriction of system $B$ to the projection space $U_k^B$ of $P_k^B$. \end{p}
The proof of this proposition is found in Appendix~\ref{app:proofprop1}. When we say that the infimum in (\ref{def,eq1:min/max-entropy}) is attained, it means that there exists a finite $\lambda'$ such that $ \lambda' \id_{A} \otimes \sigma_{B} - \rho_{AB} \geq 0$ and $H_{\mathrm{min}}(\rho_{AB}|\sigma_{B}) = -\log \lambda'$. Similarly, that the supremum in (\ref{def,eq2:min/max-entropy}) is attained, means that there exists a $\sigma'_{B} \in \tau_1^+(H_B)$ satisfying $\id \otimes \sigma'_B \geq \rho_{AB}$ such that $H_{\mathrm{min}}(\rho_{AB}|B) = H_{\mathrm{min}}(\rho_{AB}|\sigma'_{B})$.
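The convergence in Proposition~\ref{p:reduction of Hmin to finite dim} can be illustrated in the simplest (unconditioned) setting by truncating a state with geometric spectrum and evaluating $H_{\mathrm{max}}(\rho) = 2\log\tr\sqrt{\rho}$ (Eq.~(\ref{cor,eq1:unconditioned max-entropy})) on each truncation. The following numerical sketch (Python with numpy; the geometric example is our own choice, purely illustrative) shows the truncated entropies increasing towards the exact value:

```python
import numpy as np

q = 0.5
# "geometric" state rho = sum_k (1-q) q^k |k><k| on an infinite-dimensional space;
# exactly, tr sqrt(rho) = sqrt(1-q)/(1-sqrt(q)), so H_max = 2 log2 of that value
exact = 2 * np.log2(np.sqrt(1 - q) / (1 - np.sqrt(q)))

vals = []
for k in [5, 20, 80]:
    ev = (1 - q) * q ** np.arange(k)           # spectrum of the truncated state
    vals.append(2 * np.log2(np.sum(np.sqrt(ev))))
print(exact, vals)                              # truncations increase towards exact
```

The truncated values are monotonically increasing in $k$, mirroring the monotone approximation used in the proof.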
Given the above proposition, a natural question is whether $H_{\mathrm{min}}(\rho_{AB}|B)$ and $H_{\mathrm{max}}(\rho_{AB}|B)$ are trace norm continuous in general. In the finite-dimensional case \cite{OnTheSmoothing} it is known that these entropies are continuous with a Lipschitz constant depending on the dimension of $H_A$. However, the following example shows that they are in general not continuous in the infinite-dimensional case. Let $\{|k\rangle\}_{k= 0,1,\ldots}$ be an arbitrary orthonormal basis of the Hilbert space $H_A$. For each $n= 1,2,\ldots$ let \begin{equation}
\rho_{n} = (1-\frac{1}{n})|0\rangle\langle 0| + \frac{1}{n^{2}}\sum_{k=1}^{n}|k\rangle\langle k|. \end{equation}
One can see that $\rho_n$ converges in the trace norm to $|0\rangle\langle0|$ as $n\rightarrow\infty$, while
$\lim_{n\rightarrow\infty}H_{\mathrm{max}}(\rho_n) = 2$, and $H_{\mathrm{max}}(|0\rangle\langle 0|) = 0$. Hence, the max-entropy is not continuous. ($H_{\mathrm{max}}(\rho)$ without conditioning means that we condition on a trivial subsystem $B$. See Eq.~(\ref{cor,eq1:unconditioned max-entropy}).) The duality, Eq.~(\ref{def,eq3:min/max-entropy}), yields an example also for the min-entropy.
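This discontinuity is easy to check numerically via $H_{\mathrm{max}}(\rho_n) = 2\log\tr\sqrt{\rho_n}$: the trace distance to $|0\rangle\langle 0|$ is $2/n$, yet the max-entropy tends to $2$. A short sketch (Python with numpy; the helper name is ours):

```python
import numpy as np

def h_max_diag(ev):
    # H_max(rho) = 2 log2 tr sqrt(rho), evaluated on the spectrum of rho
    return 2 * np.log2(np.sum(np.sqrt(ev)))

for n in [10, 1000, 100000]:
    ev = np.array([1 - 1/n] + [1/n**2] * n)    # spectrum of rho_n
    dist = (1 - ev[0]) + ev[1:].sum()          # || rho_n - |0><0| ||_1 = 2/n
    print(n, dist, h_max_diag(ev))             # dist -> 0 while H_max -> 2
```

By contrast, `h_max_diag([1.0])` returns $0$, the max-entropy of the limit state $|0\rangle\langle 0|$.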
\section{\label{section:properties of min- and max-entropy} Properties of Min- and Max-Entropy} \subsection{Additivity and the data processing inequality} Proposition~\ref{p:reduction of Hmin to finite dim} can be used as a tool to generalize known finite-dimensional results to the infinite-dimensional case. A simple example is the ordering property \cite{QuantumAEP} \begin{equation}\label{orderingHminHmax}
H_{\mathrm{min}}(\rho_{AB}|B) \leq H_{\mathrm{max}}(\rho_{AB}|B), \end{equation} which is obtained by a direct application of Proposition~\ref{p:reduction of Hmin to finite dim}. Another example is additivity, which in the finite-dimensional case was proved in \cite{RennerPhD}. A direct generalization of the proof techniques employed there appears rather challenging, while Proposition~\ref{p:reduction of Hmin to finite dim} makes the generalization straightforward. \begin{p} \label{p:additivity} Let $\rho_{AB} \in \mathcal{S}(H_A \otimes H_B)$ and $\rho_{A'B'} \in \mathcal{S}(H_{A'} \otimes H_{B'}) $ for $H_A, H_{A'}$, $H_B$, and $H_{B'}$ separable Hilbert spaces. Then, it follows that
\begin{align} \label{Hminadd} H_{\mathrm{min}}(\rho_{AB} \otimes \rho_{A'B'}| BB') & = H_{\mathrm{min}}(\rho_{AB}|B) + H_{\mathrm{min}}(\rho_{A'B'}|B'), \\
\label{Hmaxadd} H_{\mathrm{max}}(\rho_{AB} \otimes \rho_{A'B'}|BB') & = H_{\mathrm{max}}(\rho_{AB}|B) + H_{\mathrm{max}}(\rho_{A'B'}|B'). \end{align} \end{p} The proof is a simple application of the approximation scheme in Proposition~\ref{p:reduction of Hmin to finite dim} combined with Lemma~\ref{nvdakj} and the finite-dimensional version of Proposition~\ref{p:additivity}, and therefore omitted.
For the sake of completeness we note that the data processing inequalities \cite{RennerPhD} also hold in the infinite-dimensional setting. In this case, however, there is no need to resort to Proposition~\ref{p:reduction of Hmin to finite dim}, as the proof in \cite{RennerPhD} can be generalized directly. \begin{p} \label{p:strsubadditivity} Let $\rho_{ABC} \in \tau_{1}^{+}(H_A \otimes H_B\otimes H_C)$ for separable Hilbert spaces $H_A$, $H_B$ and $H_C$. Then, it follows that \begin{eqnarray}
\label{p,eq2:strong subadd of Hmin} & & H_{\mathrm{min}}(\rho_{ABC}|BC) \leq H_{\mathrm{min}}(\rho_{AB}|B), \\
\label{p:strong subadd of Hmax} & & H_{\mathrm{max}}(\rho_{ABC}|BC) \leq H_{\mathrm{max}}(\rho_{AB}|B). \end{eqnarray} \end{p} The data processing inequalities can be regarded as the min- and max-entropy counterparts of the strong subadditivity of the von Neumann entropy (and are sometimes directly referred to as ``strong subadditivity''). One reason for this is that the standard formulation of the strong subadditivity of von Neumann entropy \cite{LiebRuskai1,Lieb,LiebRuskai2}, $H(\rho_{ABC}) + H(\rho_{B})\leq H(\rho_{AB}) + H(\rho_{BC})$, can be recast in the same form.
\subsection{\label{subsect:entropy of pure states versus unconditioned entropy} Entropy of pure states, and a bound for general states} Here we briefly consider the fact that the min-entropy can take the value $-\infty$, and the max-entropy can take the value $+\infty$. For this purpose we discuss the special case of pure states, as well as the case of no conditioning (i.e., if there is no subsystem $B$). Based on this we obtain a general bound which says that the conditional min- and max-entropies of a state $\rho_{AB}$ are finite if the operator $\sqrt{\rho_A}$ is trace class. Moreover it turns out that the min-entropy cannot attain the value $+\infty$, while the max-entropy cannot attain $-\infty$. \begin{lem} \label{cor:min-entropy for purestates} The min-entropy of $\rho_{AB}=\vert \psi \rangle \langle \psi \vert$, where $\vert\psi\rangle \in H_A \otimes H_B$, is given by \begin{equation}\label{cor,eq1:min-entropy for purestates}
H_{\mathrm{min}}(\rho_{AB}|B) = - 2 \log\tr\sqrt{\rho_A}. \end{equation} \end{lem}
From this lemma we can conclude that $H_{\mathrm{min}}(\rho_{AB}|B)$ is finite if and only if $\sqrt{\rho_{A}}$ is trace class. Otherwise $H_{\mathrm{min}}(\rho_{AB}|B) = -\infty$. If the Schmidt decomposition \cite{Convertability} of $\psi$ is given by $\sum_{k=1}^{\infty} r_k \vert a_k\rangle\vert b_k\rangle$, we have $\tr\sqrt{\rho_A}=\sum_{k=1}^{\infty}r_k$, so that a finite Schmidt rank always implies that $H_{\mathrm{min}}(\rho_{AB}|B)$ is finite. Recall that the Schmidt coefficients characterize the entanglement of a pure state, and, roughly speaking, that the more uniformly the Schmidt coefficients are distributed the stronger is the entanglement (see for instance \cite{Convertability}). This suggests that pure states with $H_{\mathrm{min}}(\rho_{AB}|B) =-\infty$ are entangled in a rather strong sense. \begin{proof}
Let $\vert \psi \rangle = \sum_{k=1}^{\infty} r_k \vert a_k\rangle\vert b_k\rangle$ be the Schmidt decomposition of $\vert \psi \rangle$, and $\tilde{\sigma}_{B} \in \tau^+_1(H_B)$ any operator that satisfies $\id_A \otimes \tilde{\sigma}_B \geq \rho_{AB}$. For each $n \in \mathbb{N}$ define $|\chi_{n}\rangle = \sum_{k=1}^{n}|a_{k}\rangle\vert b_{k}\rangle$. It follows that \begin{equation*}
\tr\tilde{\sigma}_B \geq \langle\chi_{n}|\id_A\otimes \tilde{\sigma}_B|\chi_{n}\rangle \geq \langle\chi_{n}|\rho_{AB}|\chi_{n}\rangle = \big(\sum_{k=1}^{n}r_{k}\big)^{2}, \end{equation*}
and thus, by taking the infimum over all $\tilde{\sigma}_{B}$ with $\id_A \otimes \tilde{\sigma}_B \geq \rho_{AB}$, as well as the supremum over all $n$, we find $\Lambda(\rho_{AB}|B) \geq (\tr\sqrt{\rho_A})^2$. In particular, we see that if $\tr\sqrt{\rho_A} = +\infty$ then $\Lambda(\rho_{AB}|B) = +\infty$ (and thus $H_{\mathrm{min}}(\rho_{AB}|B)=-\infty$). In the following we assume that $\tr\sqrt{\rho_A}< +\infty$, i.e., $\sqrt{\rho_A} \in \tau^+_1(H_A)$. We show that the lower bound $\Lambda(\rho_{AB}|B) \geq (\tr\sqrt{\rho_A})^2$ is attained, by proving that $\tilde{\sigma}_B := \tr(\sqrt{\rho_A})\sqrt{\rho_B}$ satisfies $\id_A \otimes \tilde{\sigma}_B \geq \rho_{AB}$. By using the Schmidt decomposition of $\psi$ we compute for an arbitrary $\eta \in H_A \otimes H_B$ \begin{align*}
\langle \eta \vert ( \id \otimes\tilde{\sigma}_B-\rho_{AB}) \vert \eta\rangle
= & \tr(\sqrt{\rho_A})\sum_{k,l=1}^{\infty}|c_{k,l}|^{2}r_{l} -\Big|\sum_{k=1}^{\infty}c_{k,k}r_{k}\Big|^{2} \\
\geq & \sum_{l=1}^{\infty}r_l \sum_{k=1}^{\infty}|c_{k,k}|^{2}r_{k} - \Big|\sum_{k=1}^{\infty}c_{k,k}r_{k}\Big|^{2} \geq 0 , \end{align*}
where $c_{k,l} = (\langle a_k\vert\langle b_l\vert)\vert \eta \rangle$, and the last step follows from the Cauchy-Schwarz inequality. Hence, $\id_A \otimes \tilde{\sigma}_B - \rho_{AB}$ is positive and therefore $ \tr(\tilde{\sigma}_B) \geq \Lambda(\rho_{AB}|B)$. Combined with $\Lambda(\rho_{AB}|B) \geq (\tr\sqrt{\rho_A})^2$, we find $H_{\mathrm{min}}(\rho_{AB}|B) = -\log \Lambda(\rho_{AB}|B)=- 2 \log\tr\sqrt{\rho_A}$. \end{proof}
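Both the lemma and the conditioner $\tilde{\sigma}_B = \tr(\sqrt{\rho_A})\sqrt{\rho_B}$ constructed in its proof can be tested numerically on a random pure state. A sketch (Python with numpy; we use real amplitudes for simplicity, and all variable names are our own):

```python
import numpy as np

rng = np.random.default_rng(0)
dA, dB = 3, 4
M = rng.normal(size=(dA, dB))
M /= np.linalg.norm(M)                         # amplitudes of |psi> on A (x) B

r = np.linalg.svd(M, compute_uv=False)         # Schmidt coefficients r_k
h_min = -2 * np.log2(r.sum())                  # lemma: H_min = -2 log2 tr sqrt(rho_A)

# the proof's optimal conditioner: sigma = tr(sqrt(rho_A)) * sqrt(rho_B)
psi = M.reshape(-1)
rho_AB = np.outer(psi, psi)
rho_B = M.T @ M                                # tr_A |psi><psi| for real M
w, V = np.linalg.eigh(rho_B)
sqrt_rho_B = V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.T
sigma = r.sum() * sqrt_rho_B

# id_A (x) sigma - rho_AB should be positive semidefinite, tr(sigma) = (sum r)^2
gap = np.linalg.eigvalsh(np.kron(np.eye(dA), sigma) - rho_AB).min()
print(h_min, gap, np.trace(sigma) - r.sum() ** 2)
```

The smallest eigenvalue `gap` is non-negative up to rounding, confirming the operator inequality, and $\tr\tilde{\sigma}_B = (\sum_k r_k)^2$ matches $\Lambda(\rho_{AB}|B)$.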
The duality (\ref{def,eq3:min/max-entropy}) allows us to rewrite Lemma~\ref{cor:min-entropy for purestates} by using the unconditional max-entropy. For every $\rho \in \mathcal{S}(H)$ this yields the quantum $1/2$-R\'enyi entropy (cf. \cite{OperationalMeaning}), \begin{equation}\label{cor,eq1:unconditioned max-entropy}
H_{\mathrm{max}}(\rho) = 2\log\tr\sqrt{\rho} = H_{\frac{1}{2}}(\rho), \end{equation} if $\sqrt{\rho}$ is trace-class. Otherwise $H_{\mathrm{max}}(\rho)=+\infty$.
The unconditional min-entropy is obtained by conditioning on a trivial subsystem $B$. One can see that \begin{equation}\label{cor,eq1:unconditioned min-entropy} H_{\mathrm{min}}(\rho) = -\log \Vert \rho\Vert . \end{equation} For a pure state $\rho_{AB} = \vert \psi \rangle \langle \psi \vert\in \mathcal{S}(H_A \otimes H_B)$, the max-entropy is given by \begin{equation}\label{cor,eq1:max-entropy of pure states}
H_{\mathrm{max}}(\rho_{AB}|B) = \log \Vert \rho_A\Vert. \end{equation} To see this one can apply the duality (\ref{def,eq3:min/max-entropy}) where we purify the pure state $\rho_{AB}$ with a trivial system $C$, and next use Eq.~(\ref{cor,eq1:unconditioned min-entropy}).
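The unconditional formulas can be evaluated directly on a spectrum; the following sketch (Python with numpy; helper names and the qutrit spectrum are ours) also checks the ordering $H_{\mathrm{min}}(\rho) \leq H_{\mathrm{max}}(\rho)$ on a small example:

```python
import numpy as np

def h_min_uncond(ev):                          # H_min(rho) = -log2 ||rho||
    return -np.log2(np.max(ev))

def h_max_uncond(ev):                          # H_max(rho) = 2 log2 tr sqrt(rho)
    return 2 * np.log2(np.sum(np.sqrt(ev)))

ev = np.array([0.5, 0.25, 0.25])               # spectrum of a qutrit state
print(h_min_uncond(ev), h_max_uncond(ev))      # 1.0 and 2 log2(1/sqrt(2) + 1)
```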
By combining these facts with the data processing inequality, $H_{\mathrm{min}}(\rho_{ABC}|BC) \leq H_{\mathrm{min}}(\rho_{AB}|B) \leq H_{\mathrm{min}}(\rho_A)$ and $H_{\mathrm{max}}(\rho_{ABC}|BC) \leq H_{\mathrm{max}}(\rho_{AB}|B) \leq H_{\mathrm{max}}(\rho_A)$, for $\rho_{ABC}$ a purification of $\rho_{AB}$, we find the following bounds on the min- and max-entropy. \begin{p}\label{cor:upper bound for min/max-entropy} For every state $\rho_{AB} \in S(H_A \otimes H_B)$ it holds that \begin{eqnarray}
-2\log\tr\sqrt{\rho_A} \leq &H_{\mathrm{min}}(\rho_{AB}|B) & \leq -\log \Vert \rho_A\Vert, \label{cor,eq1:upper bound for min-entropy} \\
\log \Vert \rho_A\Vert \leq & H_{\mathrm{max}}(\rho_{AB}|B) & \leq 2\log\tr\sqrt{\rho_A}. \label{cor,eq2:upper bound for max-entropy} \end{eqnarray} Hence, $H_{\mathrm{min}}(\rho_{AB}|B)$ and $H_{\mathrm{max}}(\rho_{AB}|B)$ are finite if $\sqrt{\rho_A}$ is trace-class. \end{p}
\section{\label{chapter:operational interpretation}Operational Interpretations of Min- and Max-Entropy}
Min- and max-entropy can be regarded as answers to operational questions, i.e., they quantify the optimal solution to certain information-theoretic tasks. Max-entropy $H_{\mathrm{max}}(\rho_{AB}|B)$ answers the question of how distinguishable $\rho_{AB}$ is from states that are maximally mixed on A, while uncorrelated with B \cite{OperationalMeaning} (see also Definition~\ref{def:decoupling accuracy} below). This is a useful concept, e.g., in quantum key distribution, where one ideally would have a maximally random key uncorrelated with the eavesdropper's state. Thus, the above distinguishability quantifies how well this is achieved. Min-entropy $H_{\mathrm{min}}(\rho_{AB}|B)$ is related to the question of how close one can bring the state $\rho_{AB}$ to a maximally entangled state on the bipartite system AB, allowing only local quantum operations on the B system \cite{OperationalMeaning}. In the special case that A is classical (i.e., we have a classical-quantum state, see Eq.~(\ref{eq:cq state}) below) one finds that $H_{\mathrm{min}}(\rho_{AB}|B)$ is related to the guessing probability, i.e., our best chance to correctly guess the value of the classical system A, given the quantum system B. In the following sections we show that these results can be generalized to the case that $H_{B}$ is infinite-dimensional. These generalizations are for instance crucial in cryptographic settings, where there is a priori no reason to expect an eavesdropper to be limited to a finite-dimensional Hilbert space, while it is reasonable to assume the key to be finite. The operational interpretations of the min- and max-entropy exhibit a direct dependence on the dimension of the A system, which is why a naive generalization to an infinite-dimensional A appears challenging, and will not be considered here.
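To make the guessing-probability interpretation concrete in the simplest case: for a classical bit encoded in one of two pure states, the optimal guessing probability is given by the Helstrom bound, and $H_{\mathrm{min}}(X|B) = -\log p_{\mathrm{guess}}$. A sketch (Python with numpy; the specific ensemble of $|0\rangle$ and $|+\rangle$ is our own illustrative choice):

```python
import numpy as np

# cq-state: X a uniform bit; B receives |0> for x=0 and |+> for x=1
p0 = p1 = 0.5
rho0 = np.array([[1.0, 0.0], [0.0, 0.0]])
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho1 = np.outer(plus, plus)

# Helstrom: p_guess = 1/2 (1 + || p0 rho0 - p1 rho1 ||_1) for two hypotheses
delta = p0 * rho0 - p1 * rho1
p_guess = 0.5 * (1 + np.abs(np.linalg.eigvalsh(delta)).sum())
h_min = -np.log2(p_guess)                      # H_min(X|B) = -log2 p_guess
print(p_guess, h_min)                          # p_guess = (1 + 1/sqrt(2))/2
```

Had the two conditional states been orthogonal, $p_{\mathrm{guess}} = 1$ and $H_{\mathrm{min}}(X|B) = 0$, as expected for a perfectly distinguishable encoding.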
\subsection{ \label{section:oper interp of max entropy} Max-entropy as decoupling accuracy}
To define decoupling accuracy we use fidelity $F(\rho,\sigma) := \Vert \sqrt{\rho}\sqrt{\sigma}\Vert_1$ as a distance measure between states. \begin{de}\label{def:decoupling accuracy} For a finite-dimensional Hilbert space $H_A$ and an arbitrary separable Hilbert space $H_B$, we define the decoupling accuracy of $\rho_{AB} \in \tau_1^+(H_A \otimes H_B)$ w.r.t. the system B as \begin{equation} \label{def,eq1:decoupling accuracy}
d(\rho_{AB}|B) := \sup_{\sigma_B \in \mathcal{S}(H_B)} d_A F( \rho_{AB} , \tau_A \otimes \sigma_B )^2 . \end{equation} Here, $d_A$ is the dimension of $H_A$, and $\tau_A := d_A^{-1} \id_A$ is the maximally mixed state on A. \end{de}
Note that in infinite-dimensional Hilbert spaces there is no trace class operator which can be regarded as a generalization of the maximally mixed state in finite dimensions. We must thus require system A to be finite-dimensional in order to keep the decoupling accuracy well-defined. In \cite{OperationalMeaning}, Proposition~\ref{prop:operational interp of max entropy} was proved in the case where $H_B$ is assumed to be finite-dimensional. Below we use Proposition~\ref{p:reduction of Hmin to finite dim} to extend the assertion to the infinite-dimensional case. \begin{p}\label{prop:operational interp of max entropy} Let $H_A$ be a finite-dimensional and $H_B$ a separable Hilbert space. It follows that \begin{equation} \label{prop,eq1:operational interp of max entropy}
d(\rho_{AB}|B) = 2^{H_{\mathrm{max}}(\rho_{AB}|B)}, \end{equation} for each $\rho_{AB} \in \tau_1^+(H_A \otimes H_B)$. \end{p} In the following we will need to consider physical operations (channels) on states, i.e., trace preserving completely positive maps \cite{Kraus}. By $\TPCPM(H_A,H_B)$ we denote the set of all trace preserving completely positive maps $\mathcal{E}: \tau_1(H_A) \rightarrow \tau_1(H_B)$. Let $\mathcal{I}$ denote the identity map.
\begin{proof}
Let us take projected states $\rho_{AB}^k$ relative to a generator of the form $(\id_A,P^B_k)$ (this is a proper generator since $\dim H_A<\infty$). Denote the space onto which $P^B_k$ projects by $U_k^B$ and set $P_k := \id_A \otimes P_k^B$. The finite-dimensional version of Proposition~\ref{prop:operational interp of max entropy} together with Proposition~\ref{p:reduction of Hmin to finite dim} yield $d(\rho_{AB}^k|B_k)= 2^{H_{\mathrm{max}}(\rho_{AB}^k|B_k)}\rightarrow 2^{H_{\mathrm{max}}(\rho_{AB}|B)}$, as $k\rightarrow\infty$.
In order to prove $d(\rho_{AB}|B) \leq 2^{H_{\mathrm{max}}(\rho_{AB}|B)}$ we construct a suitable TPCPM and use the fact that the fidelity can only increase under its action \cite{NielsenChuang}. For each $k \in \mathbb{N}$ choose a normalized state $\vert \theta_k \rangle \in H_A \otimes H_B$ such that $P_k\vert \theta_k \rangle =0$. We define a channel $\mathcal{E}_k \in \TPCPM(H_A \otimes H_B,H_A \otimes H_B)$ as $\mathcal{E}_k(\eta) := P_k \eta P_k + q_k(\eta) \vert \theta_k \rangle \langle \theta_k \vert$, with $q_k(\eta) :=\tr[ \eta (\id-P_k)]$. Then, for all $\sigma_B \in \mathcal{S}(H_B)$ we find \begin{align*} F(\rho_{AB}, \tau_{A} \otimes \sigma_B) & \leq F\big( \mathcal{E}_k(\rho_{AB}) , \mathcal{E}_k(\tau_{A} \otimes \sigma_B) \big)\\ & = \big\Vert \sqrt{\rho_{AB}^k}\sqrt{\tau_{A} \otimes \sigma_B^k} + \sqrt{q_k(\rho_{AB})q_k(\tau_{A} \otimes \sigma_B)} \vert \theta_k \rangle \langle \theta_k \vert \, \big\Vert_1 \\
& \leq \big\Vert \sqrt{\rho_{AB}^k}\sqrt{\tau_{A} \otimes \sigma_B^k} \big\Vert_1 + \sqrt{q_k(\rho_{AB})} = F(\rho_{AB}^k , \tau_{A} \otimes \sigma_B^k ) + \sqrt{q_k(\rho_{AB})}, \end{align*} where $\sigma_B^k := P_k^B \sigma_B P_k^B$. The second line is due to the fact that $\vert \theta_k \rangle $ is orthogonal to the support of both $\rho_{AB}^k$ and $\tau_{A} \otimes \sigma_B^k$. The last line follows by the triangle inequality and $q_k(\tau_{A} \otimes \sigma_B) \leq 1$. By taking the supremum over all $\sigma_B \in \mathcal{S}(H_B)$ we obtain \begin{equation*}
\sqrt{d(\rho_{AB}|B)} \leq \sqrt{d(\rho_{AB}^k|B_k)} + \sqrt{d_A \tr[ \rho_{AB} (\id-P_k)]} \rightarrow 2^{\frac{1}{2} H_{\mathrm{max}}(\rho_{AB}|B)}, \end{equation*}
as $k\rightarrow\infty$. It remains to show $d(\rho_{AB}|B) \geq 2^{H_{\mathrm{max}}(\rho_{AB}|B)}$. For this purpose we use that the fidelity can be reformulated as \begin{equation}\label{eq:Uhlmann} F(\rho, \sigma) = \sup_{\vert \phi \rangle} F(\vert \psi\rangle, \vert \phi \rangle), \end{equation} where $\vert \psi\rangle$ is a purification of $\rho$, and the supremum is taken over all purifications $\vert \phi \rangle$ of $\sigma$ \cite{Uhlmann1}. Let us fix an arbitrary $k\in \mathbb{N}$ and a $\sigma_B \in \mathcal{S}(H_B)$. Assume $\vert \psi_{ABC} \rangle$ to be a purification of $\rho_{AB}$, and note that $\vert \psi^k_{ABC} \rangle := \tilde{P}_k\vert \psi_{ABC} \rangle$, with $\tilde{P}_k=P_k \otimes \id_C$, is a purification of $\rho_{AB}^k$. Let $\vert \phi \rangle\in H_{A}\otimes H_{B}\otimes H_{C}$ be an arbitrary purification of $\tau_{A} \otimes \sigma_B$. According to (\ref{eq:Uhlmann}) it follows that \begin{align*}
F(\rho_{AB} , \tau_{A} \otimes \sigma_B) & \geq F(\vert \psi_{ABC} \rangle ,\vert \phi \rangle )= | \langle \psi_{ABC} \vert \phi \rangle | \\
& = | \langle \psi_{ABC} \vert \tilde{P}_k \vert \phi \rangle + \langle \psi_{ABC} \vert \id - \tilde{P}_k \vert \phi \rangle | \\
& \geq | \langle \psi^k_{ABC} \vert \phi \rangle | - \Vert (\id-\tilde{P}_k )|\psi_{ABC}\rangle\Vert, \end{align*}
where the last line is obtained by the reverse triangle inequality and the Cauchy-Schwarz inequality. By taking the supremum over all the purifications $|\phi\rangle$ of $\tau_{A} \otimes \sigma_B$ in the above inequality, Eq.~(\ref{eq:Uhlmann}) yields $F(\rho_{AB} , \tau_{A} \otimes \sigma_B) \geq F(\rho_{AB}^k , \tau_{A} \otimes \sigma_B) - \Vert (\id-\tilde{P}_k )|\psi_{ABC}\rangle\Vert$. As this holds for all $\sigma_B \in \mathcal{S}(H_B)$ and all $k$, we obtain with the definition of the decoupling accuracy: \begin{equation*}
d(\rho_{AB}|B) \geq \lim_{k \rightarrow \infty} \Big(\sqrt{d(\rho_{AB}^k|B_k)} - \sqrt{d_A} \ \Vert (\id-\tilde{P}_k )|\psi_{ABC}\rangle\Vert \Big)^2 = 2^{H_{\mathrm{max}}(\rho_{AB}|B)}.
\end{equation*}
\end{proof}
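A case where Proposition~\ref{prop:operational interp of max entropy} can be checked by hand is a maximally entangled state: there $F(\rho_{AB}, \tau_A \otimes \sigma_B)^2 = \langle \Psi | \tau_A \otimes \sigma_B | \Psi \rangle = 1/d_A^2$ for every normalized $\sigma_B$, so $d(\rho_{AB}|B) = 1/d_A = 2^{\log\Vert\rho_A\Vert} = 2^{H_{\mathrm{max}}(\rho_{AB}|B)}$ by Eq.~(\ref{cor,eq1:max-entropy of pure states}). A numerical sketch (Python with numpy, ours):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3
phi = np.eye(d).reshape(-1) / np.sqrt(d)       # maximally entangled |Psi_AB>
tau = np.eye(d) / d                            # maximally mixed state on A

# for pure rho_AB = |Psi><Psi|, F(rho_AB, tau (x) sigma)^2 = <Psi|tau (x) sigma|Psi>
def d_term(sigma_B):
    return d * (phi @ np.kron(tau, sigma_B) @ phi)

G = rng.normal(size=(d, d))
sigma = G @ G.T
sigma /= np.trace(sigma)                       # a random normalized sigma_B

# the value is sigma-independent here, hence equals d(rho_AB|B) = 1/d = 2^{H_max}
print(d_term(sigma), d_term(np.eye(d) / d))
```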
\subsection{\label{section:Min-entropy is maximum achievable quantum correlation} Min-entropy as maximum achievable quantum correlation}
Assume a bipartite quantum system consisting of a finite-dimensional A system and an arbitrary B system. We can then define a maximally entangled state between the A and B system as \begin{equation}\label{max entangled state} \vert \Psi_{AB} \rangle := \frac{1}{\sqrt{d_A}}\sum_{k=1}^{d_A} \vert a_k \rangle\vert b_k \rangle. \end{equation} Here, $d_A$ denotes the dimension of $H_A$, $\{a_k\}_{k=1}^{d_A}$ an arbitrary orthonormal basis of $H_A$ and $\{b_k\}_{k=1}^{d_A}$ an arbitrary orthonormal system in $H_B$, where we assume that $\dim(H_A) \leq \dim(H_B)$.
\begin{de}\label{def:quantum correlation} For $H_A$ a finite-dimensional and $H_B$ a separable Hilbert space (with $\dim H_A \leq \dim H_B$), we define the quantum correlation of a state $\rho_{AB} \in \mathcal{S}(H_A \otimes H_B)$ relative to B as \begin{equation}\label{def,eq1:quantum correlation}
q(\rho_{AB}|B) := \sup_{\mathcal{E}} d_A F\big(( \mathcal{I}_A \otimes \mathcal{E})\rho_{AB} , \vert \Psi_{AB} \rangle \langle \Psi_{AB} \vert \big)^2 , \end{equation} where the supremum is taken over all $\mathcal{E}$ in TPCPM($H_B$,$H_B$), and $\vert \Psi_{AB}\rangle$ is given by (\ref{max entangled state}). \end{de}
Due to the invariance of the fidelity under unitaries \cite{NielsenChuang}, the definition of $q(\rho_{AB}|B)$ is independent of the choice of the maximally entangled state $\vert \Psi_{AB} \rangle$. The quantum correlation can be rewritten as \begin{equation}\label{eq:quantum correlation equiv expression}
q(\rho_{AB}|B) = \sup_{\mathcal{E}} d_A \langle \Psi_{AB} \vert (\mathcal{I}_A \otimes \mathcal{E})\rho_{AB} \vert \Psi_{AB} \rangle. \end{equation} The min-entropy is directly linked to the quantum correlation as shown in \cite{OperationalMeaning} for the finite-dimensional case. We extend this result to a B system with a separable Hilbert space. \begin{p}\label{prop:oper interpr of min entropy} Let $H_A$ be a finite-dimensional and $H_B$ be a separable Hilbert space. It follows that \begin{equation}
q(\rho_{AB}|B) = 2^{-H_{\mathrm{min}}(\rho_{AB}|B)} , \end{equation} for each $\rho_{AB} \in \mathcal{S}(H_A \otimes H_B)$. \end{p}
\begin{proof}
Let $\{\rho_{AB}^k\}_{k\in\mathbb{N}}$ be the projected states of $\rho_{AB}$ relative to a generator of the form $(\id_A,P_k^B)$, and set $P_k := \id_A \otimes P_k^B$. Let us denote the projection space of $P_k^B$ by $U_k^B$ and assume that $\vert b_l\rangle \in U_k^B$, $l=1,...,d_A$, for all $k$, with $\vert b_l\rangle$ as in equation (\ref{max entangled state}). By the already proved finite-dimensional version of Proposition~\ref{prop:oper interpr of min entropy} and Proposition~\ref{p:reduction of Hmin to finite dim}, we obtain $q(\rho_{AB}^k|B_k) = \Lambda(\rho_{AB}^k|B_k) \rightarrow \Lambda(\rho_{AB}|B)$.
We first prove $\Lambda(\rho_{AB}|B) \leq q(\rho_{AB}|B)$. Fix $k$ and choose $\mathcal{E}_k \in \TPCPM(U_k^B,U_k^B)$ such that
$ q(\rho_{AB}^k|B_k) = d_A \langle \Psi_{AB} \vert (\mathcal{I}_A \otimes \mathcal{E}_k)\rho_{AB}^k \vert \Psi_{AB} \rangle$. Define $\tilde{\mathcal{E}}_k(\rho) = \mathcal{E}_k(P_k^B\rho P_k^B) + (\id_B-P_k^B)\rho (\id_B -P_k^B)$, which is a valid quantum operation in $\TPCPM(H_B,H_B)$. As $\tilde{\mathcal{E}}_k$ is just one admissible TPCPM, it follows that \begin{equation*}
q(\rho_{AB}|B) \geq d_A \langle \Psi_{AB} \vert (\mathcal{I}_A \otimes \tilde{\mathcal{E}}_k)\rho_{AB} \vert \Psi_{AB} \rangle \geq q(\rho_{AB}^k|B_k). \end{equation*}
We thus find $q(\rho_{AB}|B) \geq \lim_{k \rightarrow \infty} q(\rho_{AB}^k|B_k) = \Lambda(\rho_{AB}|B)$.
We next prove $\Lambda(\rho_{AB}|B) \geq q(\rho_{AB}|B)$. Let $\mathcal{E}$ be an arbitrary element of $\TPCPM(H_B,H_B)$. As a special instance of the Stinespring dilation we know that there exists an ancilla $H_R$ together with a unitary $U_{BR} \in \mathcal{L}(H_B \otimes H_R)$ and a state $\vert \theta_R \rangle \in H_R$ such that $\mathcal{E}(\sigma_B) = \tr_R[U_{BR} ( \sigma_B \otimes \vert \theta_R \rangle \langle \theta_R \vert )U^{\dagger}_{BR}]$ \cite{Kraus}. With $\vert \psi_{ABC} \rangle$ a purification of $\rho_{AB}$, it follows according to (\ref{eq:Uhlmann}) that \begin{align*}
F \big(( \mathcal{I}_{A} \otimes \mathcal{E})\rho_{AB} , \vert \Psi_{AB} \rangle \langle \Psi_{AB} \vert \big) & = \sup_{\eta_{CR}} F\big((\id \otimes U_{BR}) \vert \psi_{ABC} \rangle\vert\theta_R \rangle , \vert \Psi_{AB} \rangle\vert \eta_{CR} \rangle\big) \\ & \leq \sup_{\eta_{CR}} F\big(\rho_{AC} , \tau_A \otimes \tr_R(\vert \eta_{CR} \rangle \langle \eta_{CR} \vert)\big), \end{align*} where the last inequality is due to the monotonicity of fidelity under the partial trace and $\tau_A = d_A^{-1} \id_A = \tr_B(\vert \Psi_{AB} \rangle \langle \Psi_{AB} \vert )$. The optimization over all pure states $\eta_{CR}$ can be replaced by the optimization over all density operators on $H_C$. Then, with Proposition~\ref{prop:operational interp of max entropy} it follows that \begin{align*}
d_A F(( \mathcal{I}_{A} \otimes \mathcal{E})\rho_{AB} , \vert \Psi_{AB} \rangle \langle \Psi_{AB} \vert )^2 & \leq \sup_{\sigma_C} d_A F( \rho_{AC}, \tau_A \otimes \sigma_C)^2 = 2^{H_{\mathrm{max}}(\rho_{AC}|C)} \\
& = 2^{-H_{\mathrm{min}}(\rho_{AB}|B)} = \Lambda(\rho_{AB}|B). \end{align*}
Since this holds for all $\mathcal{E} \in \TPCPM(H_B,H_B)$, we obtain $q(\rho_{AB}|B) \leq \Lambda(\rho_{AB}|B)$. \end{proof}
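As a simple illustration of Proposition~\ref{prop:oper interpr of min entropy}, consider the case where $\rho_{AB} = \vert \Psi_{AB} \rangle \langle \Psi_{AB} \vert$ is itself maximally entangled. Choosing $\mathcal{E} = \mathcal{I}_B$ in Eq.~(\ref{eq:quantum correlation equiv expression}) yields \begin{equation*} q(\rho_{AB}|B) \geq d_A \langle \Psi_{AB} \vert \rho_{AB} \vert \Psi_{AB} \rangle = d_A, \end{equation*} while $F \leq 1$ in Eq.~(\ref{def,eq1:quantum correlation}) gives $q(\rho_{AB}|B) \leq d_A$. Hence $H_{\mathrm{min}}(\rho_{AB}|B) = -\log d_A$, recovering the known minimal value of the conditional min-entropy.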
Applied to classical-quantum states, the relation between the quantum correlation and the min-entropy connects the min-entropy with the optimal guessing probability. Imagine a source that produces the quantum states $\rho_B^x \in \mathcal{S}(H_B)$ at random, according to the probability distribution $P_X(x)$. The average output is characterized by the classical-quantum state \begin{equation}\label{eq:cq state}
\rho_{XB} = \sum_{x \in X} P_X(x) \vert x \rangle \langle x \vert \otimes \rho_B^x, \end{equation}
where $X$ denotes the (finite) alphabet of the classical system describing the source and $\{ \vert x \rangle \}_{x\in X}$ is an orthonormal basis spanning $H_X$. We define the guessing probability $g(\rho_{XB}|B)$ as the probability to correctly guess $x$, permitting an optimal measurement strategy on subsystem $B$. Formally, this can be expressed as \begin{equation}\label{eq:def of guessing prob}
g(\rho_{XB}|B) := \sup_{ \{M_x\}} \sum_{x\in X}P_X(x) \tr(\rho_B^x M_x), \end{equation} where the supremum is taken over all positive operator valued measures (POVMs) on $H_B$. By a POVM on $H_B$ we mean a set $\{M_x\}_{x \in X}$ of positive operators which sum up to the identity. For finite-dimensional $H_B$ it is known \cite{OperationalMeaning} that the guessing probability is linked to the min-entropy by \begin{equation}\label{eq:guessin prob finite dim}
g(\rho_{XB}|B) = 2^{-H_{\mathrm{min}}(\rho_{XB}|B)}. \end{equation} We will now use Proposition~\ref{prop:oper interpr of min entropy} to show that Eq.~(\ref{eq:guessin prob finite dim}) also holds for separable $H_B$.
Let $\rho_{XB}$ be a state as defined in Eq.~(\ref{eq:cq state}), and construct the state $|\Psi_{XB}\rangle := |X|^{-1/2}\sum_{x\in X}|x \rangle|x_B\rangle$, where $\{|x_B\rangle\}_{x\in X}$ is an arbitrary orthonormal set in $H_B$. We now define $Q(\rho_{XB},\mathcal{E}) := |X| \langle \Psi_{XB} \vert (\mathcal{I}_X \otimes \mathcal{E})\rho_{XB} \vert \Psi_{XB} \rangle$ (cf. Eq.~(\ref{eq:quantum correlation equiv expression}))
and $G(\rho_{XB}, \{M_x\}) := \sum_{x\in X}P_X(x) \tr(\rho_B^x M_x)$ (cf. Eq.~(\ref{eq:def of guessing prob})). Then, \begin{equation} \label{nadflv}
Q(\rho_{XB},\mathcal{E}) = \sum_{x\in X}P_X(x)\tr[\mathcal{E}^{*}(|x_B\rangle\langle x_B|)\rho_B^{x}], \end{equation}
where $\mathcal{E}^{*}$ denotes the adjoint operation of $\mathcal{E}$. Let $\{M_x\}$ be an arbitrary $|X|$-element POVM on $H_B$. One can see that the TPCPM $\mathcal{E}(\rho) := \sum_{x\in X} \tr(M_x \rho) |x_B\rangle\langle x_B|$ satisfies $\mathcal{E}^{*}(|x_B\rangle\langle x_B|) = M_x$. Thus, by Eq.~(\ref{nadflv}), we find $Q(\rho_{XB},\mathcal{E}) = G(\rho_{XB}, \{M_x\})$. Since the POVM was arbitrary, it follows that $q(\rho_{XB}|B)\geq g(\rho_{XB}|B)$.
Next, let $\mathcal{E}$ be an arbitrary TPCPM on $H_B$. Define $P := \sum_{x\in X} |x_B\rangle\langle x_B|$ and \begin{equation*}
M_{x} := \mathcal{E}^{*}(|x_B\rangle\langle x_B|) + \frac{1}{|X|}\mathcal{E}^{*}(\id_B-P),\quad x\in X. \end{equation*}
One can verify that $\{M_x\}$ is a POVM on $H_B$. By using Eq.~(\ref{nadflv}) we can see that $G(\rho_{XB}, \{M_x\}) \geq Q(\rho_{XB},\mathcal{E})$. This implies $g(\rho_{XB}|B)\geq q(\rho_{XB}|B)$, and thus $g(\rho_{XB}|B) = q(\rho_{XB}|B)$.
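As a concrete illustration, consider a uniform binary alphabet $X=\{0,1\}$ and the two pure qubit states $\rho_B^0 = \vert 0 \rangle \langle 0 \vert$ and $\rho_B^1 = \vert + \rangle \langle + \vert$, with $\vert + \rangle = (\vert 0 \rangle + \vert 1 \rangle)/\sqrt{2}$. For two equiprobable states the optimal (Helstrom) measurement achieves $g = \frac{1}{2} + \frac{1}{4}\Vert \rho_B^0 - \rho_B^1 \Vert_1$, which here gives \begin{equation*} g(\rho_{XB}|B) = \frac{1}{2}\Big(1+\frac{1}{\sqrt{2}}\Big) \approx 0.854, \qquad H_{\mathrm{min}}(\rho_{XB}|B) = -\log g \approx 0.23 \end{equation*} in units of bits.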
\section{\label{chapter:smooth entropy} Smooth Min- and Max-Entropy}
The entropic quantities that usually appear in operational settings are the smooth min- and max-entropies \cite{ChannelCodingMinEntropy,ChannelCodingMaxEntropy,OperationalMeaning}. They result from the non-smoothed versions by an optimization procedure over states close to the original state. The closeness is defined by an appropriate metric on the state space, and a smoothing parameter specifies the maximal distance to the original state. The choice of metric has varied in the literature, but here we follow \cite{OnTheSmoothing}.
By $\mathcal{S}_{\leq}(H)$ we denote the set of positive trace class operators with trace norm smaller than or equal to 1. We define the generalized fidelity on $\mathcal{S}_{\leq}(H)$ by $\bar{F}(\rho,\sigma):= \Vert\sqrt{\rho}\sqrt{\sigma}\Vert_{1} + \sqrt{(1-\tr\rho)(1-\tr\sigma)}$, which induces a metric on $\mathcal{S}_{\leq}(H)$ via \begin{equation} \label{purified distance} P(\rho,\sigma) := \sqrt{1 - \bar{F}(\rho,\sigma)^2}, \end{equation} referred to as the purified distance.
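Note that for normalized states the second term in $\bar{F}$ vanishes and the generalized fidelity reduces to the ordinary fidelity. In particular, for two pure states the purified distance takes the simple form \begin{equation*} P\big(\vert\phi\rangle\langle\phi\vert , \vert\chi\rangle\langle\chi\vert\big) = \sqrt{1 - \vert\langle\phi\vert\chi\rangle\vert^2}; \end{equation*} e.g., for the qubit states $\vert 0 \rangle$ and $\vert + \rangle = (\vert 0 \rangle + \vert 1 \rangle)/\sqrt{2}$ this gives $P = 1/\sqrt{2}$.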
\begin{de}\label{def:smooth min-/max-entropy} For $\epsilon >0$, we define the $\epsilon$-smooth min- and max-entropy of $\rho_{AB} \in \mathcal{S}_{\leq}(H_A \otimes H_B)$ conditioned on $B$ as \begin{equation}\label{def,eq1:smooth min-/max-entropy} H_{\mathrm{min}}^{\epsilon}(\rho_{AB}|B) := \sup_{\tilde{\rho}_{AB} \in \mathcal{B}^{\epsilon}(\rho_{AB})} H_{\mathrm{min}}(\tilde{\rho}_{AB}|B), \end{equation} \begin{equation}\label{def,eq2:smooth min-/max-entropy} H_{\mathrm{max}}^{\epsilon}(\rho_{AB}|B) := \inf_{\tilde{\rho}_{AB} \in \mathcal{B}^{\epsilon}(\rho_{AB})} H_{\mathrm{max}}(\tilde{\rho}_{AB}|B), \end{equation} where the smoothing set $\mathcal{B}^{\epsilon}(\rho_{AB})$ is defined with respect to the purified distance \begin{equation}\label{def,eq3:smooth min-/max-entropy}
\mathcal{B}^{\epsilon}(\rho_{AB}) := \{ \tilde{\rho}_{AB} \in \mathcal{S}_{\leq}(H_A\otimes H_B)| P(\rho_{AB},\tilde{\rho}_{AB}) \leq \epsilon \}. \end{equation} \end{de}
Closely related to this particular choice of smoothing set is the invariance of the smooth entropies under (partial) isometries acting locally on each of the subsystems. This can be used to show the duality relation of the smooth entropies, namely, for all states $\rho_{AB}$ on $H_A \otimes H_B$ it follows that \begin{equation}\label{eq:duality of smooth entropies}
H_{\mathrm{min}}^{\epsilon}(\rho_{AB}|B) = - H_{\mathrm{max}}^{\epsilon}(\rho_{AC}|C), \end{equation} where $\rho_{ABC}$ is an arbitrary purification of $\rho_{AB}$ on an ancilla $H_C$. A proof for the finite-dimensional case can be found in \cite{OnTheSmoothing}; it carries over to infinite dimensions with straightforward modifications.
A useful property of the smooth entropies is the data processing inequality. \begin{p}\label{p:strong subadditivity for smooth entropies} Let $ \rho_{ABC} \in \mathcal{S}_{\leq}(H_A \otimes H_B \otimes H_C)$. Then it follows that \begin{eqnarray*}
H_{\mathrm{min}}^{\epsilon}(\rho_{ABC}|BC) &\leq & H_{\mathrm{min}}^{\epsilon}(\rho_{AB}|B),\\
H_{\mathrm{max}}^{\epsilon}(\rho_{ABC}|BC) &\leq & H_{\mathrm{max}}^{\epsilon}(\rho_{AB}|B). \end{eqnarray*} \end{p}
\begin{proof} Using the data processing inequality for the min-entropy, Eq.~(\ref{p,eq2:strong subadd of Hmin}), we obtain \begin{eqnarray*}
H_{\mathrm{min}}^{\epsilon}(\rho_{ABC}|BC)=\sup_{\tilde{\rho}_{ABC} \in \mathcal{B}^{\epsilon}(\rho_{ABC})} H_{\mathrm{min}}(\tilde{\rho}_{ABC}|BC) \leq
\sup_{\tilde{\rho}_{ABC} \in \mathcal{B}^{\epsilon}(\rho_{ABC})} H_{\mathrm{min}}(\tr_C \tilde{\rho}_{ABC}|B). \end{eqnarray*} Thus, it is sufficient to show that $\tr_C( \mathcal{B}^{\epsilon} (\rho_{ABC})) \subseteq \mathcal{B}^{\epsilon} (\rho_{AB})$. But this follows directly from the fact that the purified distance does not increase under partial trace \cite{OnTheSmoothing}, i.e., $P(\rho_{ABC},\tilde{\rho}_{ABC})\geq P(\rho_{AB},\tilde{\rho}_{AB})$.
The data processing inequality of the smooth max-entropy follows from the duality (\ref{eq:duality of smooth entropies}), \begin{equation*}
H_{\mathrm{max}}^{\epsilon}(\rho_{ABC}|BC) = -H_{\mathrm{min}}^{\epsilon}(\rho_{AD}|D) \leq- H_{\mathrm{min}}^{\epsilon}(\rho_{ACD}|CD) = H_{\mathrm{max}}^{\epsilon}(\rho_{AB}|B), \end{equation*} where $\rho_{ABCD}$ is a purification of $\rho_{ABC}$. \end{proof}
\section{\label{AEP} An Infinite-Dimensional Quantum Asymptotic Equipartition Property}
In the finite-dimensional case the quantum asymptotic equipartition property (AEP) says that the conditional von Neumann entropy can be regained as an asymptotic quantity from the conditional smooth min- and max-entropy \cite{RennerPhD,QuantumAEP}. (For a discussion on why the AEP can be formulated in terms of entropies, see \cite{Independent}.) More precisely, $\lim_{\epsilon \rightarrow 0} \lim_{ n \rightarrow \infty} \frac{1}{n} H_{\mathrm{min}}^{\epsilon}(\rho_{AB}^{\otimes n}|B^n) = H(\rho_{AB}|B)$ and
$\lim_{\epsilon \rightarrow 0} \lim_{ n \rightarrow \infty} \frac{1}{n} H_{\mathrm{max}}^{\epsilon}(\rho_{AB}^{\otimes n}|B^n) = H(\rho_{AB}|B)$. For the infinite-dimensional case we derive an upper (lower) bound to the conditional von Neumann entropy in terms of the smooth min-(max-)entropy. We then use these bounds to prove the above limits in the case where $H_A$ is finite-dimensional. To this end we need a well defined notion of conditional von Neumann entropy in the infinite-dimensional case. Here we use the definition introduced in \cite{Kuznetsova}, which in turn is based on an infinite-dimensional extension of the relative entropy \cite{Klein31,Lindblad73,Lindblad74,HolevoShirokov}. For $\rho , \sigma \in \tau_1^{+}(H)$ the relative entropy can be defined as \begin{equation} \label{smdal}
H(\rho \Vert \sigma): = \sum_{jk} |\langle a_{j}|b_{k}\rangle|^{2}(a_j\log a_j - a_j\log b_k + b_k-a_j), \end{equation}
where $\{|a_j\rangle\}_j$ is an arbitrary orthonormal eigenbasis of $\rho$ with corresponding eigenvalues $a_j$, and analogously for $\{|b_k\rangle\}_k$, $b_k$, and $\sigma$. The relative entropy is always nonnegative, possibly $+\infty$, and equal to 0 if and only if $\rho=\sigma$ \cite{Lindblad73}. For states $\rho_{AB}$ with $H(\rho_A) < +\infty$, the conditional von Neumann entropy can be defined as \cite{Kuznetsova} \begin{equation}
H(\rho_{AB}|B) := H(\rho_A) -H(\rho_{AB}\Vert \rho_A\otimes \rho_B). \end{equation} For many applications it appears reasonable to assume $H(\rho_A)$ to be finite, e.g., in cryptographic settings it would correspond to restricting the states of the `legitimate' users.
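To connect Eq.~(\ref{smdal}) with the familiar classical expression, note that for commuting normalized states the overlaps satisfy $\vert\langle a_j \vert b_k\rangle\vert^2 = \delta_{jk}$ and the terms $b_k - a_j$ sum to zero, so that the formula reduces to the classical relative entropy $\sum_j a_j (\log a_j - \log b_j)$. For example, for the diagonal qubit states $\rho = \frac{1}{2}(\vert 0\rangle\langle 0\vert + \vert 1\rangle\langle 1\vert)$ and $\sigma = \frac{1}{4}\vert 0\rangle\langle 0\vert + \frac{3}{4}\vert 1\rangle\langle 1\vert$ one finds \begin{equation*} H(\rho \Vert \sigma) = \tfrac{1}{2}\log 2 + \tfrac{1}{2}\log\tfrac{2}{3} \approx 0.21 \end{equation*} in units of bits (logarithms to base 2).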
Similarly as for the min- and max-entropy, the conditional von Neumann entropy can be approximated by projected states \cite{Kuznetsova}, i.e., for $\rho_{AB} \in \mathcal{S}(H_A \otimes H_B)$ satisfying $H(\rho_A)<\infty$ with corresponding normalized projected states $\hat{\rho}_{AB}^k$ it follows that \begin{equation}\label{eq: limit von neumann entropy}
\lim_{k \rightarrow \infty } H(\hat{\rho}_{AB}^k|B) = H(\rho_{AB}|B). \end{equation} In the finite-dimensional case it has been shown \cite{QuantumAEP} that the min-, max- and, von Neumann entropy can be ordered as \begin{equation} \label{ordering}
H_{\mathrm{min}}(\rho_{AB}|B) \leq H(\rho_{AB}| B) \leq H_{\mathrm{max}}(\rho_{AB}|B). \end{equation} A direct application of Proposition~\ref{p:reduction of Hmin to finite dim} and (\ref{eq: limit von neumann entropy}) shows that this remains true in the infinite-dimensional case, if $H(\rho_A)<\infty$. Note, however, that the ordering between min- and max-entropy (\ref{orderingHminHmax}) does not hold for their smoothed versions.
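The ordering (\ref{ordering}) can be checked directly in the special case of a trivial (one-dimensional) B system, where the entropies reduce to $H_{\mathrm{min}}(\rho_A) = -\log \Vert \rho_A \Vert_{\infty}$ and $H_{\mathrm{max}}(\rho_A) = 2\log\tr\sqrt{\rho_A}$. For instance, for $\rho_A = \frac{3}{4}\vert 0\rangle\langle 0\vert + \frac{1}{4}\vert 1\rangle\langle 1\vert$ one obtains (in bits) \begin{equation*} H_{\mathrm{min}}(\rho_A) = \log\tfrac{4}{3} \approx 0.42, \qquad H(\rho_A) \approx 0.81, \qquad H_{\mathrm{max}}(\rho_A) = 2\log\Big(\tfrac{\sqrt{3}+1}{2}\Big) \approx 0.90, \end{equation*} consistent with (\ref{ordering}); in this unconditioned case the ordering is simply the monotonicity of the R\'enyi entropies in their order.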
\begin{p} \label{prop:AEP lower bound} Let $\rho_{AB} \in \mathcal{S}(H_A \otimes H_B)$ be such that $H(\rho_A)<\infty$. For any $\epsilon > 0$ it follows that \begin{equation} \label{lowerbound}
\frac{1}{n} H_{\mathrm{min}}^{\epsilon}(\rho_{AB}^{\otimes n} | B^n) \geq H(\rho_{AB}|B) - \frac{1}{\sqrt{n}}4\log(\eta) \sqrt{\log\frac{2}{\epsilon^2}}, \end{equation} \begin{equation} \label{upperbound}
\frac{1}{n} H_{\mathrm{max}}^{\epsilon}(\rho_{AB}^{\otimes n} | B^n) \leq H(\rho_{AB}|B) + \frac{1}{\sqrt{n}}4\log(\eta) \sqrt{\log\frac{2}{\epsilon^2}}, \end{equation} for $n \geq (8/5)\log(2/\epsilon^2)$, and
$\eta = 2^{-\frac{1}{2}H_{\mathrm{min}}(\rho_{AB}|B)} + 2^{\frac{1}{2}H_{\mathrm{max}}(\rho_{AB}|B)} +1$. \end{p}
Note that it is not clear under what conditions the limits $n\rightarrow \infty$, $\epsilon \rightarrow 0$ exist for the left hand side of equations (\ref{lowerbound}) and (\ref{upperbound}). If they do, Proposition~\ref{prop:AEP lower bound} implies $\lim_{\epsilon\rightarrow 0}\lim_{n\rightarrow\infty}\frac{1}{n}
H_{\mathrm{min}}^{\epsilon}(\rho_{AB}^{\otimes n} | B^n) \geq H(\rho_{AB}|B)$ and $\lim_{\epsilon\rightarrow 0}\lim_{n\rightarrow\infty}\frac{1}{n}
H_{\mathrm{max}}^{\epsilon}(\rho_{AB}^{\otimes n} | B^n) \leq H(\rho_{AB}|B)$. For the case of a finite-dimensional $H_A$ we show that these inequalities can be replaced with equalities (Corollary~\ref{cor:AEP}).
It should be noted that in the classical case a lower bound on the min-entropy and an upper bound on the max-entropy, analogous to Eqs.~(\ref{lowerbound}) and (\ref{upperbound}), correspond \cite{Independent} to the AEP in classical probability theory \cite{CoverThomas}. Since in the finite-dimensional quantum case the step from Proposition~\ref{prop:AEP lower bound} to Corollary~\ref{cor:AEP} is directly obtained \cite{QuantumAEP} via Fannes' inequality \cite{Fannes}, the limits in Corollary~\ref{cor:AEP} are usually referred to as `the quantum AEP' \cite{QuantumAEP}. In the infinite-dimensional case the relation between Proposition~\ref{prop:AEP lower bound} and Corollary~\ref{cor:AEP} appears less straightforward, and it is thus not entirely clear what should be regarded as constituting `the quantum AEP'. We will not pursue this question here, but merely note that it is the inequalities in Proposition~\ref{prop:AEP lower bound}, rather than the limits in Corollary~\ref{cor:AEP}, that are the most relevant for applications \cite{RennerPhD}. However, for the sake of simplicity we continue to refer to Corollary~\ref{cor:AEP} as a quantum AEP.
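To get a feeling for the size of the finite-$n$ correction in Proposition~\ref{prop:AEP lower bound}, consider (purely for illustration) the trivially conditioned qubit state $\rho_A = \frac{3}{4}\vert 0\rangle\langle 0\vert + \frac{1}{4}\vert 1\rangle\langle 1\vert$, for which $2^{-\frac{1}{2}H_{\mathrm{min}}} = \sqrt{3}/2$ and $2^{\frac{1}{2}H_{\mathrm{max}}} = (\sqrt{3}+1)/2$, so that $\eta \approx 3.23$. For $\epsilon = 10^{-2}$ the correction term evaluates to \begin{equation*} \frac{4}{\sqrt{n}}\log(\eta)\sqrt{\log\frac{2}{\epsilon^2}} \approx \frac{26}{\sqrt{n}} \end{equation*} bits, i.e., of order $10^{-2}$ bits for $n = 10^6$, while the condition on $n$ only requires $n \geq 23$.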
We prove Proposition~\ref{prop:AEP lower bound} after the following lemma. \begin{lem} \label{cor: eps dep smooth min-entropy} Let $\rho_{AB} \in \mathcal{S}(H_A \otimes H_B)$ and let $\{\hat{\rho}_{AB}^k\}_{k=1}^{\infty}$ be a sequence of normalized projected states. For any fixed $1 >t>0$, there exists a $k_0 \in \mathbb{N}$ such that \begin{equation}
H_{\mathrm{min}}^{\epsilon}(\rho_{AB}|B) \geq H_{\mathrm{min}}^{t\epsilon}(\hat{\rho}_{AB}^k|B),\quad \forall k\geq k_0. \end{equation} \end{lem}
\begin{proof} In the following let $t\in (0,1)$ be fixed. According to the definition of the smooth min-entropy in Eq.~(\ref{def,eq1:smooth min-/max-entropy}), it is enough to show that $\mathcal{B}^{t\epsilon}(\hat{\rho}_{AB}^k) \subseteq \mathcal{B}^{\epsilon} (\rho_{AB})$ for all $k \geq k_0$. Note that the purified distance is compatible with trace norm convergence, i.e.,
$\Vert \rho_{AB}-\hat{\rho}_{AB}^k\Vert_1\rightarrow 0$ implies that $P(\hat{\rho}_{AB}^k,\rho_{AB}) \rightarrow 0$. Hence, there exists a $k_0$ such that $P(\hat{\rho}_{AB}^k,\rho_{AB}) < (1-t)\epsilon$ for all $k \geq k_0$. For $k \geq k_0$ and $\tilde{\rho}_{AB} \in \mathcal{B}^{t\epsilon}(\hat{\rho}_{AB}^k)$ we thus find $P(\tilde{\rho}_{AB},\rho_{AB}) \leq P(\tilde{\rho}_{AB}, \hat{\rho}_{AB}^k) + P(\hat{\rho}_{AB}^k,\rho_{AB}) < \epsilon$, such that $\tilde{\rho}_{AB} \in \mathcal{B}^{\epsilon} (\rho_{AB})$. \end{proof}
\begin{proof}\emph{(Proposition~\ref{prop:AEP lower bound})}
Let $(P_k^A, P_k^B)$ be a generator of projected states. The pair of n-fold tensor products of the projections, $\big((P_k^A)^{\otimes n},(P_k^B)^{\otimes n}\big)$, is also a generator of projected states. If we now fix $1>t>0$ and $n \in \mathbb{N}$, it follows by Lemma~\ref{cor: eps dep smooth min-entropy} that we can find a $k_0\in \mathbb{N}$ such that $H_{\mathrm{min}}^{\epsilon}(\rho_{AB}^{\otimes n}|B^n) \geq H_{\mathrm{min}}^{t\epsilon}((\hat{\rho}_{AB}^k)^{\otimes n}|B^n)$ for every $k\geq k_0$. Since Eq.~(\ref{lowerbound}) is valid for the finite-dimensional case \cite{QuantumAEP}, we can apply it to $H_{\mathrm{min}}^{t\epsilon}((\hat{\rho}_{AB}^k)^{\otimes n}|B^n)$ to obtain \begin{equation*}
\frac{1}{n}H_{\mathrm{min}}^{t\epsilon}((\hat{\rho}_{AB}^k)^{\otimes n}|B^n) \geq H(\hat{\rho}_{AB}^k|B) - \frac{1}{\sqrt{n}}4\log(\eta_k) \sqrt{\log\frac{2}{(t\epsilon)^2}} \end{equation*}
for any $n \geq (8/5)\log(2/(t\epsilon)^2)$, and $\eta_k = 2^{-\frac{1}{2}H_{\mathrm{min}}(\hat{\rho}_{AB}^k|B) } + 2^{\frac{1}{2}H_{\mathrm{max}}(\hat{\rho}_{AB}^k|B)} +1$. Hence \begin{equation}\label{pf,lem:AEP lower bound,eq4}
\frac{1}{n} H_{\mathrm{min}}^{\epsilon}(\rho_{AB}^{\otimes n}|B^n) \geq H(\hat{\rho}_{AB}^k|B) - \frac{1}{\sqrt{n}}4\log(\eta_k) \sqrt{\log\frac{2}{(t\epsilon)^2}}, \end{equation} for all $k \geq k_0$. Since the left hand side of Eq.~(\ref{pf,lem:AEP lower bound,eq4}) is independent of $k$ we can use (\ref{eq: limit von neumann entropy}) and Proposition~\ref{p:reduction of Hmin to finite dim}, to find \begin{align*}
\frac{1}{n} H_{\mathrm{min}}^{\epsilon}(\rho_{AB}^{\otimes n}|B^n)
& \geq \lim_{k\rightarrow \infty} \Big\{H(\hat{\rho}_{AB}^k|B) - \frac{1}{\sqrt{n}}4\log(\eta_k) \sqrt{\log\frac{2}{(t\epsilon)^2}} \Big\} \\
& = H(\rho_{AB}|B) - \frac{1}{\sqrt{n}}4\log(\eta) \sqrt{\log\frac{2}{(t\epsilon)^2}}. \end{align*} We finally take the limit $t\rightarrow 1$ in the above inequality, as well as in the condition $n \geq (8/5)\log(2/(t\epsilon)^2)$ to obtain the first part of the proposition.
For the second part we use the duality of the conditional von Neumann entropy, i.e., $H(\rho_{AB}|B)= -H(\rho_{AC}|C)$ for a purification $\rho_{ABC}$ \cite{Kuznetsova}. This, together with the duality relation for smooth min- and max-entropy (\ref{eq:duality of smooth entropies}) leads directly to (\ref{upperbound}). \end{proof}
\begin{cor} \label{cor:AEP} Let $H_A$ be a finite-dimensional and $H_B$ a separable Hilbert space. For all $\rho_{AB} \in \mathcal{S}(H_A \otimes H_B)$ it follows that \begin{equation}\label{cor:AEP min-entropy}
\lim_{\epsilon\rightarrow 0} \lim_{n \rightarrow \infty} \frac{1}{n} H_{\mathrm{min}}^{\epsilon}(\rho_{AB}^{\otimes n} | B^n) = H(\rho_{AB}|B), \end{equation} \begin{equation}\label{cor:AEP max-entropy}
\lim_{\epsilon\rightarrow 0} \lim_{n \rightarrow \infty} \frac{1}{n} H_{\mathrm{max}}^{\epsilon}(\rho_{AB}^{\otimes n} | B^n) = H(\rho_{AB}|B). \end{equation} \end{cor}
\begin{proof} Let $\epsilon >0$ be sufficiently small, and let $(\id_A,P_k^B)$ be a generator of projected states $\rho_{AB}^k$, with corresponding normalized projected states $\hat{\rho}_{AB}^k$. Let $\sigma_{AB}\in \mathcal{B}^{\epsilon} (\rho_{AB})$, with projected states $\sigma^k_{AB}$, and normalized projected states $\hat{\sigma}^k_{AB}$. By $H_{\mathrm{min}}(\sigma_{AB}^k|B)=H_{\mathrm{min}}(\hat{\sigma}^k_{AB}|B)+\log\tr\sigma^k_{AB}$ and (\ref{ordering}) we find $H_{\mathrm{min}}(\sigma^k_{AB}|B_k)
\leq H(\hat{\sigma}^k_{AB}|B)$, where $\hat{\sigma}^k_{AB}=(\tr\sigma^k_{AB})^{-1}\sigma^k_{AB}$. Since $\hat{\sigma}^k_{AB}$ is supported on a finite-dimensional space we can use Fannes' inequality \cite{Fannes} to obtain (for $k$ sufficiently large)
$H(\hat{\sigma}^k_{AB}|B_k) \leq H(\hat{\rho}_{AB}^k|B_k) + 4 \Delta_k \log d_A + 4 H_{\mathrm{bin}}(\Delta_k)$, with $d_A=\dim(H_A)$, $\Delta_k = \Vert \hat{\rho}_{AB}^k - \hat{\sigma}^k_{AB} \Vert_1$, and $H_{\mathrm{bin}}(t)= -t\log t - (1-t)\log(1-t)$. Due to the general relation $\Vert \rho-\sigma\Vert_1 \leq 2 P(\rho,\sigma)$ (see Lemma 6 in \cite{OnTheSmoothing}), we have $\Vert \rho_{AB}-\sigma_{AB}\Vert_1 \leq 2\epsilon$ for all $\sigma_{AB}\in\mathcal{B}^{\epsilon} (\rho_{AB})$, which yields $\lim_{k\rightarrow \infty} \Delta_k = \Vert \rho_{AB} -\hat{\sigma}_{AB} \Vert_1 \leq 4\epsilon$, where $\hat{\sigma}_{AB} = \sigma_{AB}/\tr(\sigma_{AB})$. Combined with (\ref{eq: limit von neumann entropy}) this leads to $H^{\epsilon}_{\mathrm{min}}(\rho_{AB} | B) = \sup_{{\sigma}_{AB} \in \mathcal{B}^{\epsilon}(\rho_{AB})}\lim_{k\rightarrow\infty}H_{\mathrm{min}}(\sigma^k_{AB} | B) \leq H(\rho_{AB}|B) + 16\epsilon \log d_A + 4 H_{\mathrm{bin}}(4\epsilon)$. Applied to an n-fold tensor product this gives \begin{equation} \label{MinEntrUpperbound}
\frac{1}{n}H_{\mathrm{min}}^{\epsilon}(\rho_{AB}^{\otimes n} | B^n) \leq H(\rho_{AB}|B) + 16\epsilon \log d_A + \frac{4}{n} H_{\mathrm{bin}}(4\epsilon). \end{equation}
Equation (\ref{cor:AEP min-entropy}) follows by combining (\ref{MinEntrUpperbound}) with the lower bound in (\ref{lowerbound}), taking the limits $n\rightarrow \infty$ and $\epsilon\rightarrow 0$.
Equation (\ref{cor:AEP max-entropy}) follows directly by the duality of the conditional von Neumann entropy \cite{Kuznetsova} together with the duality of the smooth min- and max-entropy (\ref{eq:duality of smooth entropies}). \end{proof}
\section{\label{concl} Conclusion and Outlook}
We have extended the min- and max-entropies to separable Hilbert spaces, and shown that properties and operational interpretations, known from the finite-dimensional case, remain valid in the infinite-dimensional setting. These extensions are facilitated by the finding (Proposition~\ref{p:reduction of Hmin to finite dim}) that the infinite-dimensional min- and max-entropies can be expressed in terms of convergent sequences of finite-dimensional entropies. We bound the smooth min- and max-entropies of iid states (Proposition \ref{prop:AEP lower bound}) in terms of an infinite-dimensional generalization of the conditional von Neumann entropy $H(A|B)$, introduced in \cite{Kuznetsova}, which is defined when the von Neumann entropy of system $A$ is finite, $H(A)<\infty$. Under the additional assumption that the Hilbert space of system $A$ has finite dimension we furthermore prove that the smooth entropies of iid states converge to the conditional von Neumann entropy (Corollary \ref{cor:AEP}), corresponding to a quantum asymptotic equipartition property (AEP). Whether these conditions can be relaxed is an open question. In the general case where $H(A)$ is not necessarily finite, this would however require a more general definition of the conditional von Neumann entropy than the one used here.
For information-theoretic purposes it appears reasonable to require extensions of the conditional von Neumann entropy to be compatible with the AEP, i.e., that the conditional von Neumann entropy can be regained from the smooth min- and max-entropy in the asymptotic iid limit. This enables generalizations of operational interpretations of the conditional von Neumann entropy. For example, in the finite-dimensional asymptotic case the conditional von Neumann entropy characterizes the amount of entanglement needed for state merging \cite{StateMerging}, i.e., the transfer of a quantum state shared by two parties to only one of the parties. An infinite-dimensional generalization of one-shot state merging \cite{Single Shot State merging}, together with the AEP, could be used to extend this result to the infinite-dimensional case.
Some other immediate applications of this work are in continuous variable quantum key distribution, and in statistical mechanics, where it has recently been shown \cite{Oscar,Lidia} that the smooth min- and max-entropies play a role. Our techniques may also be employed to derive an infinite-dimensional generalization of the entropic uncertainty relation \cite{Uncertainty}. Such a generalization would be interesting partially because it could find applications in continuous variable quantum information processing, but also because it may bring this information-theoretic uncertainty relation into the same realm as the standard uncertainty relation.
\section{Acknowledgments} We thank Roger Colbeck and Marco Tomamichel for helpful comments and discussions, and an anonymous referee for very valuable suggestions. Fabian Furrer acknowledges support from the Graduiertenkolleg 1463 of the Leibniz University Hannover.
We furthermore acknowledge support from the Swiss National Science Foundation (grant No. 200021-119868).
\begin{appendix}
\section{\label{section:Technical Lemmas}Technical Lemmas}
In the following, each Hilbert space is assumed to be separable. Let us define the positive cone $\mathcal{L}^+(H) := \{T \in \mathcal{L}(H)| \ T \geq 0\}$ in $\mathcal{L}(H)$. The next two lemmas follow directly from the definition of positivity of an operator. \begin{lem}\label{lem:T geq 0 implies STS geq 0} If $T \in \mathcal{L}^+(H)$, then for each $S \in \mathcal{L}(H)$ it follows that $STS^{\dagger} \in \mathcal{L}^+(H)$. \end{lem}
\begin{lem} \label{lem:pos weak op limit} The positive cone $\mathcal{L}^+(H)$ is sequentially closed in the weak operator topology, i.e., for $\{T_k\}_{k\in\mathbb{N}} \subset \mathcal{L}^+(H)$ such that $T_k$ converge to $T \in \mathcal{L}(H)$ in the weak operator topology, it follows that $T\geq 0$. \end{lem}
The following lemma is a special case of a theorem by Gr\"umm \cite{Grumm} (see also \cite{Simon}, pp.~25--29, for similar results). \begin{lem} \label{lemgrumm} Let $A_{k},A \in\mathcal{L}(H)$ be such that $\sup_{k}\Vert A_{k}\Vert<+\infty$ and $A_{k}\rightarrow A$ in the strong operator topology, and let $T\in \tau_1(H)$. Then $\lim_{k\rightarrow\infty}\Vert A_{k}T -AT\Vert_{1}=0$ and $\lim_{k\rightarrow\infty}\Vert TA_{k} -TA\Vert_{1}=0$. \end{lem}
\begin{cor} \label{dfnbkl} If $P_{k}$ is a sequence of projectors on $H$ that converges in the strong operator topology to the identity, and if $\rho \in \tau_1^+(H)$, then $\lim_{k\rightarrow\infty}\Vert P_{k}\rho P_{k}-\rho\Vert_{1}=0$. \end{cor} \begin{lem} \label{nvdakj} If sequences of projectors $P_k^A$ and $P_k^B$ on $H_A$ and $H_B$, respectively, converge in the strong operator topology to the identity, then $P_k^A\otimes P_k^B$ converges in the strong operator topology to $\id_{AB}$. \end{lem}
\begin{lem}\label{lem:weakstar Tk implies weak op of id otimes Tk} Let $\{T_k\}_{k\in \mathbb{N}} \subset \tau_1(H_B)$ be a sequence that converges in the weak* topology to $T\in \tau_1(H_B)$. Then, the sequence $\id_{A} \otimes T_k$ in $\mathcal{L}(H_A \otimes H_B)$ converges to $\id_{A} \otimes T$ in the weak operator topology. \end{lem} \begin{proof}
For each $\psi \in H_A \otimes H_B$ we find that $\langle \psi \vert \id_A \otimes T_k \vert \psi \rangle = \tr( T_k K^{B}_{\psi})$, where $K^{B}_{\psi}= \tr_{A}|\psi\rangle\langle\psi|$ is the reduced operator. Since $K^{B}_{\psi}$ is trace class (and thus compact) the statement follows immediately.
\end{proof}
\section{\label{app:proofprop1}Proof of Proposition~\ref{p:reduction of Hmin to finite dim}}
In order to derive Proposition~\ref{p:reduction of Hmin to finite dim} we proceed as follows: In Section~\ref{Section:reduction to finite} we show that the min- and max-entropy of a projected state can be reduced to an entropy on a finite-dimensional space. In Section~\ref{sec:monotonicity} we show that the min- and max-entropies are monotonic over the sequences of projected states. Finally we prove the limits listed in Proposition~\ref{p:reduction of Hmin to finite dim}. Note that in what follows we mostly make use of the quantities $\Lambda(\rho_{AB}|\sigma_{B})$ and $\Lambda(\rho_{AB}|B)$, as defined in Eqs.~(\ref{lambdadef1}) and (\ref{lambdadef2}), rather than the min- and max-entropies per se.
\subsection{\label{Section:reduction to finite} Reduction}
Here we show that the min- and max-entropy of a projected state can be considered as effectively finite-dimensional, in the sense that restricting the Hilbert space to the support of the projected states does not change the value of the entropies.
\begin{lem} \label{I:id-P} Let $P_A$, $P_B$ be projectors onto closed subspaces $U_A \subseteq H_A$ and $U_B \subseteq H_B$, respectively, $\tilde{\rho}_{AB} \in \tau_{1}^{+}(H_A \otimes H_B)$, and $\tilde{\sigma}_B \in \tau^+_1(H_B)$. \newline i) If $(P_{A}\otimes \id_{B}) \tilde{\rho}_{AB} (P_{A}\otimes \id_{B}) = \tilde{\rho}_{AB}$ it follows that
\begin{equation}
\Lambda(\tilde{\rho}_{AB}|\tilde{\sigma}_B) = \inf \{ \lambda \in \mathbb{R} | \lambda P_A \otimes \tilde{\sigma}_B \geq \tilde{\rho}_{AB} \}.
\end{equation} ii) If $(\id_A\otimes P_{B})\tilde{\rho}_{AB} (\id_A\otimes P_{B}) = \tilde{\rho}_{AB}$ it follows that
\begin{equation}\label{prop,eq1:Lambda statement for reduction of Hmin}
\Lambda(\tilde{\rho}_{AB}|B) = \Lambda(\tilde{\rho}_{AB}|U_B),
\end{equation}
where $\Lambda(\tilde{\rho}_{AB} |U_B)$ means that the infimum in Eq.~(\ref{lambdadef2}) is taken only over the set $\tau^+_1(U_B)$. \end{lem}
The proof is straightforward and left to the reader. In the particular case of projected states $\rho_{AB}^k$ relative to a generator $(P_k^A,P_k^B)$, the evaluation of $\Lambda(\rho_{AB}^k|\sigma_B^k)$ and $\Lambda(\rho_{AB}^k|B)$, where $\sigma_B^k=P_k^B\sigma_B P_k^B$, can be restricted to the finite-dimensional Hilbert space $U_k^A\otimes U_k^B$ given by the projection spaces of $P_k^A$ and $P_k^B$. In particular, we can conclude that the infima of Eqs.~(\ref{lambdadef1}) and (\ref{lambdadef2}), and consequently the infimum in (\ref{def,eq1:min/max-entropy}) and the supremum in (\ref{def,eq2:min/max-entropy}), are attained for projected states, since these are optimizations of continuous functions over compact sets.
\subsection{\label{sec:monotonicity}Monotonicity} The next lemma considers the monotonic behaviour of the min- and max-entropies with respect to sequences of projected states.
\begin{lem} \label{l:mon incr}
For $\rho_{AB}\in \mathcal{S}(H_A \otimes H_B)$, $ \sigma_B \in \mathcal{S}(H_B)$, let $\{\rho_{AB}^k\}_{k=1}^{\infty}$ and $\{\sigma_B^k\}_{k=1}^{\infty}$ be projected states relative to a generator $(P_k^A,P_k^B)$.\newline i) It follows that $\Lambda(\rho_{AB}^k|\sigma_B^k)$ and $\Lambda(\rho_{AB}^k|B)$ are monotonically increasing in $k$, where the first sequence is bounded by $\Lambda(\rho_{AB}|\sigma_B)$ and the latter by $\Lambda(\rho_{AB}|B)$.\newline ii) For an arbitrary but fixed purification $\rho_{ABC}$ of $\rho_{AB}$ with purifying system $H_C$, let $\rho^k_{AC} = \tr_B\rho_{ABC}^k$ and $\rho_{ABC}^k = (P^A_k \otimes P_k^B \otimes \id_C)\rho_{ABC}(P^A_k \otimes P_k^B \otimes \id_C)$. Then it follows that $\Lambda(\rho_{AC}^k|C)$ is monotonically increasing and bounded by $\Lambda(\rho_{AC}|C)$. \end{lem} Note that $\rho^k_{AC}$ as defined in the lemma is not a projected state in the sense of Definition~\ref{def:projected states}. Translated to min- and max-entropies, the lemma above says that $H_{\mathrm{min}}(\rho^k_{AB}|\sigma^k_B)$ and $H_{\mathrm{min}}(\rho^k_{AB}|B)$ are monotonically increasing while $H_{\mathrm{max}}(\rho^k_{AB}|B)$ is monotonically decreasing. But in general, the monotonicity does not hold for \emph{normalized} projected states.
\begin{proof}
Set $P_k:=P_k^A \otimes P_k^B$ and recall that $\Lambda(\rho_{AB}^k|\sigma_B^k)= \inf \{ \lambda \in \mathbb{R}| \ \lambda P_k^A \otimes \sigma_B^k \geq \rho_{AB}^k \}$ according to Lemma~\ref{I:id-P}. To show the first part of i) note that for $k'\leq k$ the equations \begin{equation*} P_{k'}P_k (\lambda \id \otimes \sigma_B - \rho_{AB} ) P_{k'}P_k=P_{k'} (\lambda P_k^A \otimes \sigma_B^k - \rho_{AB}^k ) P_{k'} = \lambda P_{k'}^A \otimes \sigma_B^{k'} - \rho_{AB}^{k'} \end{equation*}
hold, which imply via Lemma~\ref{lem:T geq 0 implies STS geq 0} that $\Lambda(\rho_{AB}^{k'}|\sigma_B^{k'}) \leq \Lambda(\rho_{AB}^k|\sigma_B^k) \leq \Lambda(\rho_{AB}|\sigma_B)$. For the second part, let $\tilde{\sigma}_B \in \tau_1^+(H_B)$ be the optimal state such that $\Lambda(\rho_{AB}^k|B)=\tr\tilde{\sigma}_B$ and $P_k^A\otimes \tilde{\sigma}_B \geq \rho_{AB}^k$. But then we obtain that $P_{k'}^A \otimes P_{k'}^B\tilde{\sigma}_B P_{k'}^B - \rho_{AB}^{k'} \geq 0$ and therefore also $\Lambda(\rho_{AB}^{k'}|B) \leq \Lambda(\rho_{AB}^k|B)$. The upper bound follows in the same manner.
In order to show ii) we define the sets $ \mathcal{M}_k := \{\tilde{\sigma}_C \in \tau_1^+(H_C)|\ \id_A \otimes \tilde{\sigma}_C \geq \rho_{AC}^k\}$ such that $\Lambda(\rho_{AC}^k|C) = \inf_{\tilde{\sigma}_C \in \mathcal{M}_k}\tr\tilde{\sigma}_C$. To conclude the monotonicity we show that $ \mathcal{M}_{k'} \supset \mathcal{M}_{k}$ for $k'\leq k$. If $\mathcal{M}_{k}= \emptyset$, the statement is trivial. Assume $\tilde{\sigma}_{C}\in \mathcal{M}_k$. Using $P^B_{k'}\leq P^B_{k}$ we find \begin{equation*} \id_A\otimes \tilde{\sigma}_{C}\geq P_{k}^A \tr_{B}(P_{k}^B \rho_{ABC}P^B_{k}) P_{k}^A \geq P_{k}^A \tr_{B}(P_{k'}^B \rho_{ABC}P^B_{k'}) P_{k}^A. \end{equation*}
Together with Lemma~\ref{lem:T geq 0 implies STS geq 0}, this yields $P^A_{k'} \otimes \tilde{\sigma}_{C} \geq \rho^{k'}_{AC}$ and thus $\tilde{\sigma}_{C}\in \mathcal{M}_{k'}$. A similar argument provides the upper bound $\Lambda(\rho_{AC}^{k}|C) \leq \Lambda(\rho_{AC}|C)$. \end{proof}
\subsection{\label{sec:limits}Limits}
After the above discussion on general properties of the min- and max-entropies of projected states we are now prepared to prove Proposition~\ref{p:reduction of Hmin to finite dim}. For the sake of convenience we divide the proof into three lemmas.
\begin{lem} \label{l=sup} For $\rho_{AB}\in \mathcal{S}(H_A \otimes H_B)$ and $\sigma_B \in \mathcal{S}(H_B)$, let $\{\rho_{AB}^k\}_{k=1}^{\infty}$ be the projected states of $\rho_{AB}$ relative to a generator $(P_k^A,P_k^B)$, and let $\sigma_B^k:= P_k^B\sigma_B P_k^B$. It follows that \begin{equation} \label{knbvx}
\Lambda(\rho_{AB}|\sigma_B) = \lim_{k \rightarrow \infty} \Lambda(\rho_{AB}^k|\sigma_B^k), \end{equation}
and the infimum in Eq.~(\ref{lambdadef1}) is attained if $\Lambda(\rho_{AB}|\sigma_B)$ is finite. \end{lem} \begin{proof}
That the infimum is attained follows directly from the definition. To show (\ref{knbvx}) we prove that $\Lambda(\rho_{AB}|\sigma_B)$ is lower semi-continuous in $(\rho_{AB},\sigma_B)$ with respect to the product topology induced by the trace norm topology on each factor. Since this means that $\liminf_{k \rightarrow \infty} \Lambda(\rho_{AB}^k|\sigma_B^k) \geq \Lambda(\rho_{AB}|\sigma_B)$, the combination with Lemma~\ref{l:mon incr} results directly in (\ref{knbvx}). To show lower semi-continuity recall that it is equivalent to say that all lower level sets $\Lambda^{-1}((-\infty,t]) =\{(\rho_{AB},\sigma_B) | \ \Lambda(\rho_{AB}|\sigma_B) \leq t \}$, for $t\in \mathbb R$ have to be closed. But this follows by rewriting $\Lambda^{-1}((-\infty,t])$ as $\{(\rho_{AB},\sigma_B) | \ t\id \otimes \sigma_B \geq \rho_{AB} \}$. \end{proof}
\begin{lem}\label{l:Lambda statement for reduction of Hmin} For $\rho_{AB}\in \mathcal{S}(H_A \otimes H_B)$, let $\{\rho_{AB}^k\}_{k=1}^{\infty}$ be the projected states of $\rho_{AB}$ relative to a generator $(P_k^A,P_k^B)$. It follows that \begin{equation} \label{lakdsfn}
\Lambda(\rho_{AB}|B) = \lim_{k\rightarrow \infty} \Lambda(\rho_{AB}^k|B), \end{equation}
and the infimum in Eq.~(\ref{lambdadef2}) is attained if $\Lambda(\rho_{AB}|B)$ is finite. \end{lem}
\begin{proof}
Let $\mu_k := \Lambda(\rho_{AB}^k|B) = \Lambda(\rho_{AB}^k|B_k)$, where the last equality is due to Lemma~\ref{I:id-P}. By Lemma~\ref{l:mon incr} this sequence is monotonically increasing, and we can thus define $\mu := \lim_{k \rightarrow \infty} \mu_k \in \mathbb{R} \cup \{+\infty\}$. In addition, Lemma~\ref{l:mon incr} also yields $\mu \leq \Lambda(\rho_{AB}|B)$. Hence, the case $\mu = +\infty$ is trivial, and it remains to show $\mu \geq \Lambda(\rho_{AB}|B)$ for $\mu < \infty$.
For each $k\in \mathbb{N}$ let $\tilde{\sigma}_B^k$ be an optimal state such that $\Lambda(\rho_{AB}^k|B)=\tr\tilde{\sigma}_B^k$ and $\id\otimes\tilde{\sigma}_B^k\geq \rho_{AB}^k$. Note that due to positivity $\tr\tilde{\sigma}_B^k=\Vert \tilde{\sigma}_B^k\Vert_1
\leq \mu$, such that $\tilde{\sigma}_B^k$ is a bounded sequence in $\tau_1(H_B)$. Since the space of trace-class operators $\tau_1(H_B)$ is the dual of the compact operators $\mathcal K(H_B)$ \cite{SimonReeds}, we can apply the Banach-Alaoglu theorem \cite{SimonReeds,HillPhillips} to find a subsequence $\{\tilde{\sigma}_B^k\}_{k\in\Gamma}$ with a weak* limit $\tilde{\sigma}_B \in \tau_1(H_B)$, i.e., $\tr(K\tilde{\sigma}_B^k) \rightarrow \tr(K\tilde{\sigma}_B)$ $(k\in \Gamma)$ for all $K\in \mathcal K(H_B)$, such that $\Vert \tilde{\sigma}_B\Vert_1 \leq \mu$. Obviously, $\tilde{\sigma}_B$ is also positive. According to Lemma~\ref{lem:weakstar Tk implies weak op of id otimes Tk}, $\id \otimes \tilde{\sigma}_B^k$ (for $k\in \Gamma$) converges in the weak operator topology to $\id \otimes \tilde{\sigma}_B$, and so does $\id \otimes \tilde{\sigma}_B^k - \rho_{AB}^k$ to $\id \otimes \tilde{\sigma}_B - \rho_{AB}$. But then we can conclude that $\id \otimes \tilde{\sigma}_B - \rho_{AB}\geq 0$ such that $\Lambda(\rho_{AB}|B) \leq \tr\tilde{\sigma}_B \leq \mu$. \end{proof}
\begin{lem}\label{l:Lambda statement for reduction of Hmax} For $\rho_{AB}\in \mathcal{S}(H_A \otimes H_B)$, let $\rho_{ABC}$ be a purification with purifying system $H_C$, and $(P_k^A,P_k^B)$ be a generator of projected states.
It follows that \begin{equation} \label{prop,eq1:Lambda statement for reduction of Hmax}
\Lambda(\rho_{AC}|C) = \lim_{k \rightarrow \infty} \Lambda(\rho^k_{AC}|C), \end{equation} where $\rho^k_{AC} = \tr_B[(P^A_k \otimes P_k^B \otimes \id_C)\rho_{ABC}(P^A_k \otimes P_k^B \otimes \id_C)].$ \end{lem}
\begin{proof}
Let $\nu_k := \Lambda(\rho_{AC}^k|C)$. Due to Lemma~\ref{l:mon incr} this sequence is monotonically increasing, so we can define $\nu := \lim_{k \rightarrow \infty} \nu_k\in \mathbb{R}\cup\{+\infty\}$, and conclude that $\nu \leq \Lambda(\rho_{AC}|C)$. Thus, the case $\nu=+\infty$ is trivial. It thus remains to show $\nu \geq \Lambda(\rho_{AC}|C)$ for $\nu <+\infty$. As proved in Lemma~\ref{l:Lambda statement for reduction of Hmin}, the infimum in Eq.~(\ref{lambdadef2}) is attained even if the underlying Hilbert spaces are infinite-dimensional. Therefore there exists for each $k \in \mathbb{N}$ a state $\tilde{\sigma}_C^k$ such that $\id\otimes\tilde{\sigma}_C^k \geq \rho_{AC}^k$ and $\tr\tilde{\sigma}_C^k =\Lambda(\rho_{AC}^k|C)$. Now we can proceed in the same manner as in the proof of Lemma~\ref{l:Lambda statement for reduction of Hmin} to construct a weak* limit $\tilde{\sigma}_C \in \tau_1^+(H_C)$ that satisfies $\id_A \otimes \tilde{\sigma}_C \geq \rho_{AC}$, and is such that $\Lambda(\rho_{AC}|C) \leq \tr\tilde{\sigma}_C \leq \nu \leq \Lambda(\rho_{AC}|C)$. This completes the proof. \end{proof}
Lemmas~\ref{l=sup} and \ref{l:Lambda statement for reduction of Hmin} can be directly rewritten in terms of min-entropies and yield the first two statements of Proposition~\ref{p:reduction of Hmin to finite dim}. The part for the normalized projected states follows via $H_{\mathrm{min}}(\hat{\rho}^k_{AB}|\hat{\sigma}^k_B) = H_{\mathrm{min}}(\rho^k_{AB}|\sigma^k_{B}) - \log\tr\sigma_B^k + \log\tr\rho_{AB}^{k}$, and $H_{\mathrm{min}} (\hat{\rho}^k_{AB}|B) = H_{\mathrm{min}}(\rho_{AB}^k|B) + \log\tr\rho_{AB}^k $.
In order to obtain the convergence stated for the max-entropy in Proposition~\ref{p:reduction of Hmin to finite dim}, note that $(P^A_k \otimes P_k^B \otimes \id_C) \rho_{ABC} (P^A_k \otimes P_k^B \otimes \id_C)$ is a purification of $\rho_{AB}^k$, whenever $\rho_{ABC}$ is a purification of $\rho_{AB}$. Hence, $H_{\mathrm{max}}(\rho_{AB}^k|B) = -H_{\mathrm{min}}(\rho_{AC}^k|C) = \log\Lambda(\rho_{AC}^k|C)$. For normalized states use $H_{\mathrm{max}} (\hat{\rho}_{AB}^k|B_k) = H_{\mathrm{max}}(\rho_{AB}^k|B_k) - \log\tr\rho_{AB}^k$.
\end{appendix}
\end{document}
\begin{document}
\title{Decay Processes in the Presence of Thin Superconducting Films}
\author{Per K. Rekdal $^{\, a,}$}
\email{per.rekdal@uni-graz.at}
\author {Bo-Sture K. Skagerstam$^{\, b,}$}
\email{boskag@phys.ntnu.no}
\affiliation{ $^a$ Institut f\"ur Theoretische Physik, Karl-Franzens-Universit\"at Graz, Universit\"atsplatz 5, A-8010 Graz, Austria
\\
$^b$ Complex Systems and Soft Materials Research Group, Department of Physics,
The Norwegian University of Science and Technology, N-7491 Trondheim, Norway
}
{\begin{abstract}
\small
In a recent paper [Phys. Rev. Lett. {\bf 97}, 070401 (2006)] the magnetic spin-flip transition rate of a
neutral two-level atom trapped in the vicinity of a thick superconducting body was studied.
In the present paper we will extend these considerations
to a situation with an atom at various distances from a dielectric film.
Rates for the corresponding electric dipole-flip transition will also be considered.
The rates for these atomic flip transitions can be
reduced or enhanced, and in some situations they can even be completely suppressed.
For a superconducting film or a thin film of a perfect conducting material various analytical
expressions are derived that reveal the dependence on the physical parameters at hand.
\\[5mm]
\noindent PACS numbers 34.50.Dy, 03.75.Be, 42.50.Ct \end{abstract} }
\maketitle
\bc{ \section{INTRODUCTION} \label{sec:introd}} \ec
Harnessing the interactions of the electromagnetic field and matter
is one of the ultimate goals of atom optics.
One promising approach towards control of matter waves on small scales is to
trap and manipulate cold neutral atoms in microtraps near structures microfabricated on a surface,
known as atom chips \cite{folman_02}.
Magnetic traps on such atom chips are commonly generated either by microfabricated current-carrying
wires \cite{folman_02} or by poled ferromagnetic films \cite{hinds_99,eriksson_04} attached to some
dielectric or metallic body.
However, the proximity of the atoms to the surface threatens to decohere the
quantum state of the atoms through electromagnetic field fluctuations.
This effect arises because the resistivity of the surface is always accompanied by field
fluctuations as a consequence of the fluctuation-dissipation theorem.
For an atom close to the surface of a dielectric body these fluctuating fields can be
strong enough to drive magnetic and electric dipole transitions in the atom, as e.g.
shown in recent experimental studies \cite{hinds_03,vuletic_04,harber_03}.
If the atom is in a magnetic or electric trap, these flip transitions may lead to atom loss.
Such transitions are therefore most often undesirable, and one would like to reduce or even completely suppress them.
In the present paper we intend to explicitly write down the flip rate for both of these types
of transitions for the dielectric slab as shown in \fig{geo_slab_fig}. We will e.g. consider
a normal conducting slab as well as a superconducting slab, as described in Ref.\cite{rekdal_06}.
To the best of our knowledge, this has not been done for a general spin or dipole orientation, even though
the required methods have in principle been known for a long time (see e.g. Ref.\cite{knight_73}).
\begin{figure}
\caption{Schematic picture of the setup considered in our calculations. An atom inside a microtrap is
located in vacuum at a distance $z$ away from a dielectric slab with thickness $H$.
Vacuum is on both sides of the slab. The dielectric slab can e.g. be a normal conducting metal or a
superconducting metal.
Upon making a magnetic spin-flip transition or an electric dipole-flip transition, the atom becomes more
weakly trapped and is eventually lost.}
\label{geo_slab_fig}
\end{figure}
\bc{ \section{THEORY} \label{sec:theory} }\ec
\bc{ \subsection{Magnetic Spin Transition} }\ec
We begin by considering an atom in an initial state $|i \rangle$ and trapped at position ${\bf r}_A= (0,0,z)$
in vacuum, near a dielectric body. The rate $\Gamma_B $ of spontaneous and thermally stimulated magnetic spin-flip
transition into a final state $|f\rangle$ has e.g. been derived in Ref. \cite{rekdal_04},
\bea \label{gamma_B_generel}
\lefteqn{\Gamma_B = \, \mu_0 \, \frac{2 \, (\mu_B g_S)^2}{\hbar} \; \sum_{j,k=1}^3 ~
S_j \, S_k^{\, *} }
\nonumber
\\
&& \; \times \;
\mbox{Im} \, [ \; \nabla \times \nabla \times
\bm{G}({\bf r}_A , {\bf r}_A , \omega ) \; ]_{jk} \; ( \overline{n}_{\text{th}} + 1 ) \, , \eea
\noi
where we have introduced the dimensionless components $S_j \equiv \langle f | \hat{S}_j/\hbar | i \rangle$ of the electron spin operators $\hat{S}_j$ corresponding to the transition $|i\rangle \rightarrow |f\rangle$,
with $j=x,y,z$.
Here $\mu_B$ is the Bohr magneton, $g_S \approx 2$ is the electron spin $g$ factor,
and $\bm{G}({\bf r}_A , {\bf r}_A , \omega )$ is the dyadic Green tensor of Maxwell's theory.
\eq{gamma_B_generel} follows from a consistent quantum-mechanical treatment of electromagnetic
radiation in the presence of absorbing bodies \cite{dung_00,henry_96}.
Thermal excitations of the electromagnetic field modes are accounted for by the factor
$( \overline{n}_{\text{th}} + 1 )$, where $\overline{n}_{\text{th}} = 1 / ( e^{\hbar \omega/k_{\text{B}} T}- 1 )$ is the Planck distribution
giving the mean number of thermal photons per mode at frequency $\omega$ of the spin-flip transition.
Here $T$ is the temperature of the dielectric body, which is assumed to be in thermal equilibrium
with its surroundings, and $k_B$ is Boltzmann's constant. The dyadic Green tensor is the unique
solution to the Helmholtz equation
\bea \label{G_Helm}
\nabla\times\nabla\times \bm{G}({\bf r},{\bf r}',\omega) - k^2
\epsilon({\bf r},\omega) \bm{G}({\bf r},{\bf r}',\omega) = \delta( {\bf r} - {\bf r}' ) \bm{1} \, ,\nonumber\\ \eea
\noi
with appropriate boundary conditions. Here $k=\omega/c$ is the wavenumber in vacuum, $c$ is the speed of light
and $\bm{1}$ the unit dyad. The tensor $\bm{G}({\bf r},{\bf r}',\omega)$ contains all
relevant information about the geometry of the material and, through the electric permittivity
$\epsilon({\bf r},\omega)$, about its dielectric properties.
Due to causality, any complex dielectric function must in general obey the Kramers-Kronig relations.
Since we only consider non-zero frequencies in a suitable finite range,
such dispersion relations will be of no concern in the present paper.
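The size of the thermal factor $(\overline{n}_{\text{th}}+1)$ is easy to estimate: for the transition frequencies relevant here $\hbar\omega \ll k_B T$, so the Planck distribution reduces to $\overline{n}_{\text{th}} \approx k_B T/\hbar\omega \gg 1$. A short numerical check (the 560 kHz frequency is the value quoted later in the text; the temperatures are illustrative):

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J s
kB = 1.380649e-23       # Boltzmann constant, J/K

def n_th(omega, T):
    """Mean number of thermal photons per mode at angular frequency omega
    and temperature T (Planck distribution); expm1 avoids cancellation
    for hbar*omega << kB*T."""
    return 1.0 / math.expm1(hbar * omega / (kB * T))

omega = 2 * math.pi * 560e3   # spin-flip transition at 560 kHz
for T in (4.2, 300.0):
    # n_th is huge and close to the classical estimate kB*T/(hbar*omega)
    print(T, n_th(omega, T), kB * T / (hbar * omega))
```

At room temperature the thermal enhancement is of order $10^7$, so the stimulated contribution completely dominates the spontaneous one at these frequencies.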
The current density in a superconducting media is commonly described by the Mattis-Bardeen theory \cite{mattis_58}.
Following Ref. \cite{rekdal_06}, assuming non-zero frequencies $0 < \omega \ll \omega_g \equiv 2 \Delta(0)/\hbar$,
where $\omega$ is the angular frequency and $\Delta(0)$ is the energy gap of the superconductor at zero temperature,
the current density is well described by means of a two-fluid model \cite{gorter_34,london_34_40}.
The dielectric function is in this case given by \cite{rekdal_06}
\bea \label{eps_j}
\epsilon(\omega) = 1 - \frac{1}{k^2 \lambda_L^2(T)} + i \, \frac{2}{k^2 \delta^2(T)} \, , \eea
\noi
where $\lambda_L(T) = \sqrt{ m/ ( \mu_0 \, e^2 n_s(T) ) }$ is the London penetration length
and where $\delta(T) \equiv \sqrt{2/\omega \mu_0 \, \sigma_n(T)}$ is the skin depth associated with the
normal conducting electrons. As usual, $\mu_0$ is the permeability of vacuum and $e$ is the elementary
charge. The total electron density $n_0$ is constant and given by
$n_0 = n_s(T) + n_n(T)$, where $n_s(T)$ and $n_n(T)$ are the electron densities
in the superconducting and normal state, respectively, at a given temperature $T$.
The optical conductivity corresponding to \eq{eps_j} is
$\sigma(T) = 2/\omega \mu_0 \delta^2(T) + i/\omega \mu_0 \lambda_L^2(T)$.
Above the transition temperature $T_c$, the dielectric function in \eq{eps_j}
reduces to the well-known Drude form. We also stress that the theory in the present paper applies only
to non-magnetic media.
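The two-fluid dielectric function of \eq{eps_j} is straightforward to evaluate numerically. In the sketch below the material parameters (a London length of 40 nm and a skin depth of 100 $\mu$m) are illustrative placeholders, not values taken from the text:

```python
import math

def eps_two_fluid(omega, lambda_L, delta, c=2.99792458e8):
    """Two-fluid dielectric function
    eps = 1 - 1/(k*lambda_L)^2 + 2i/(k*delta)^2, with k = omega/c."""
    k = omega / c
    return 1.0 - 1.0 / (k * lambda_L)**2 + 2j / (k * delta)**2

omega = 2 * math.pi * 560e3  # transition frequency quoted in the text
eps = eps_two_fluid(omega, lambda_L=40e-9, delta=1e-4)
# For lambda_L << delta the real (superconducting) term dominates:
print(eps.real, eps.imag)
```

For these parameters $|\mathrm{Re}\,\epsilon| \sim 10^{18}$ while $\mathrm{Im}\,\epsilon \sim 10^{12}$, illustrating the regime $\lambda_L(T) \ll \delta(T)$ discussed in Section III.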
The rate $ \Gamma^{\, 0}_{ B } $ of a magnetic spin-flip transition for an atom in the unbounded
free-space is well known (see e.g. Refs.\cite{dung_00}), with the result
\bea \label{gamma_0_B}
\Gamma^{\, 0}_{ B }
= {\bar \Gamma}_{ B } S^{\, 2} \; , \eea
\noi with
\bea
{\bar \Gamma}_{ B }
= \, \mu_0 \, \frac{ ( \mu_B g_S )^2 }{3 \pi \, \hbar} \, k^3 \; . \eea
\noi
Here we have introduced the dimensionless spin factor $S^{\, 2} \equiv S_x^{\, 2} + S_y^{\, 2} + S_z^{\, 2}$.
The unbounded free-space lifetime corresponding to this magnetic spin-flip rate is
$\tau^{\, 0}_{ B } \equiv 1/\Gamma^{\, 0}_{ B }$.
In the following we apply our model to the geometry shown in \fig{geo_slab_fig}, where an atom is located in vacuum
at a distance $z$ away from a dielectric slab with thickness $H$. This slab is described by dielectric function as
given by \eq{eps_j}.
The total magnetic transition rate
\bea \label{total_B}
\Gamma_{ B } = ( \Gamma^{\, 0}_{B} + \Gamma^{\, \rm{slab}}_{B} ) \, ( \overline{n}_{\text{th}} + 1 ) \, , \eea
\noi
can then be decomposed into a free part and a part purely due to the presence of the slab.
The latter contribution for an arbitrary spin orientation is given by
\bea
\Gamma^{\, \rm{slab}}_{B} &=& \label{gamma_B_gen_ja}
2 \, {\bar \Gamma}_{B} \,
\bigg ( ~ ( S_x^{\, 2} \, + \, S_y^{\, 2} ) \, I_{\|} \, + \, S_z^{\, 2} \, I_{\perp} ~ \bigg ) ~ , \eea
\noi
with the atom-spin orientation dependent integrals
\bea \label{I_paral}
I_{\|} &=&
\frac{3}{8}
{\rm Re}
\bigg (
\int_{0}^{\infty} d q \, \frac{q}{\eta_0} \, e^{ i \, 2 \eta_0 \, k z } \,
[ \, {\cal C}_{N}(q) - \eta_0^2 \, {\cal C}_{M}(q) \, ] \bigg ) \, , ~~~ \nonumber \\ \eea
\noi and
\bea
\label{I_perp}
I_{\perp} &=& \frac{3}{4}
{\rm Re}
\bigg ( \,
\int_{0}^{\infty} d q \, \frac{q^3}{\eta_0} \, e^{ i \, 2 \eta_0 \, k z } \,
{{\cal C}}_{M}(q) \, \bigg ) \, . \eea
\noi
The scattering coefficients ${{\cal C}}_{N}(q)$ and ${{\cal C}}_{M}(q)$ are given by
\bea \label{C_N_33}
{{\cal C}}_{N}(q) = r_p(q) ~ \frac{1 - e^{i \, 2 \, \eta(\omega) \, k H} }
{1 \; - \; r_p^2(q) \; e^{i \, 2 \, \eta(\omega) \, k H}} \, , \eea
\noi and
\bea
\label{C_M_33}
{{\cal C}}_{M}(q) = r_s(q) ~ \frac{1 - e^{i \, 2 \, \eta(\omega) \, k H} }
{1 \; - \; r_s^2(q) \; e^{i \, 2 \, \eta(\omega) \, k H}} \, , \eea
\noi
with the electromagnetic field polarization dependent Fresnel coefficients
\bea \label{r_s_og_r_p}
r_s(q) = \frac{\eta_0 - \eta(\omega)}{\eta_0 + \eta(\omega)} ~~ , ~~
r_p(q) =
\frac{\epsilon(\omega) \, \eta_0 - \eta(\omega)}
{\epsilon(\omega) \, \eta_0 + \eta(\omega)} \, . \eea
\noi
Here we have defined $\eta(\omega) = \sqrt{\epsilon(\omega) - q^2}$ and $\eta_0 = \sqrt{1 - q^2}$.
For a thick slab with $H=\infty$, the above equations reduce to the results in Ref. \cite{henkel_99}.
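The Fresnel and scattering coefficients of Eqs. (\ref{C_N_33})-(\ref{r_s_og_r_p}) can be checked numerically. In particular, for an absorbing slab ($\mathrm{Im}\,\epsilon > 0$, hence $\mathrm{Im}\,\eta > 0$) the exponentials decay, and a thick film reproduces the half-space coefficients ${\cal C}_N \to r_p$, ${\cal C}_M \to r_s$. A sketch; the value of $\epsilon$ is an arbitrary lossy example:

```python
import numpy as np

def coeffs(q, eps, kH):
    """Fresnel coefficients r_s, r_p and slab scattering coefficients
    C_N (p polarization) and C_M (s polarization) for a slab of
    dielectric constant eps and dimensionless thickness kH."""
    eta0 = np.sqrt(complex(1.0 - q**2))
    eta = np.sqrt(eps - q**2)   # principal branch: Im(eta) >= 0 for lossy eps
    rs = (eta0 - eta) / (eta0 + eta)
    rp = (eps * eta0 - eta) / (eps * eta0 + eta)
    ph = np.exp(2j * eta * kH)  # round-trip phase/attenuation in the slab
    cN = rp * (1 - ph) / (1 - rp**2 * ph)
    cM = rs * (1 - ph) / (1 - rs**2 * ph)
    return rs, rp, cN, cM

# Thick absorbing slab: scattering coefficients approach the Fresnel ones.
rs, rp, cN, cM = coeffs(q=0.5, eps=2.0 + 5.0j, kH=50.0)
print(abs(cN - rp), abs(cM - rs))  # both essentially zero
```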
\bc{ \subsection{Electric Dipole Transition} }\ec
The previous section concerns magnetic field fluctuations. In this section we will consider
electric field fluctuations.
For an electrical dipole transition, the rate spontaneous and thermally stimulated decay
is given by (see e.g. Refs.\cite{dung_00}):
\bea \label{gamma_E_generel}
\lefteqn{\Gamma_E = \, \mu_0 \, \frac{2 \, \omega^2}{\hbar} \; \sum_{j,k=1}^3 ~
d_j \, d_k^{\, *} }
\nonumber
\\
&& \; \times \;
\mbox{Im} \, [ \;
\bm{G}({\bf r}_A , {\bf r}_A , \omega ) \; ]_{jk} \; ( \overline{n}_{\text{th}} + 1 ) ~ , \eea
\noi
where $d_j \equiv \langle f|\hat{d}_j|i \rangle$, with $j=x,y,z$, is the matrix element of the atomic dipole operator
$\hat{d}_j$ in the direction $j$,
corresponding to the transition $|i \rangle \rightarrow |f \rangle$.
Let us apply this model to the geometry shown in \fig{geo_slab_fig}, where an atom is located in vacuum at a
distance $z$ away from a dielectric slab with thickness $H$. The total electric transition rate
\bea
\Gamma_{E} = ( \Gamma^{\, 0}_{E} + \Gamma^{\rm{slab}}_{E} ) \, ( \overline{n}_{\text{th}} + 1 ) \, , \eea
\noi
can then be decomposed into a free part and a part purely due to the presence of the slab.
The latter contribution for an arbitrary dipole orientation is given by
\bea \label{total_E}
\Gamma^{\text{slab}}_{\, E} &=& \
2 \, {\bar \Gamma}_{E} \,
\bigg ( ~ ( d_x^{\, 2} + d_y^{\, 2} ) \, J_{\|} \, + \, d_z^{\, 2} \, J_{\perp} ~ \bigg ) \; , \eea
\noi
where we have introduced the dipole factor $d^{\, 2} \equiv d_x^{\, 2} + d_y^{\, 2} + d_z^{\, 2}$.
The dipole orientation dependent integrals are
\bea \label{J_paral}
J_{\|} &=& \nonumber
\frac{3}{8}
{\rm Re}
\bigg (
\int_{0}^{\infty} d q \, \frac{q}{\eta_0} \, e^{ i \, 2 \eta_0 \, k z } \,
[ \, {\cal C}_{M}(q) - \eta_0^2 \, {\cal C}_{N}(q) \, ] \bigg ) \, ,
\\
\eea
\noi and
\bea
\label{J_perp}
J_{\perp} &=& \frac{3}{4}
{\rm Re}
\bigg ( \,
\int_{0}^{\infty} d q \, \frac{q^3}{\eta_0} \, e^{ i \, 2 \eta_0 \, k z } \,
{\cal C}_{N}(q) \, \bigg ) . \eea
\noi
Here we have defined
\bea \label{gamma_0_E} \Gamma^{\, 0}_{E} =
{\bar \Gamma }_{E}
\, d^{\, 2} \; , \eea
\noi with
\bea
{\bar\Gamma }_{E}
= \, \mu_0 \, \frac{ c^2 }{3 \pi \, \hbar} \; k^3 \; , \eea
\noi
i.e. the dipole-flip rate in unbounded vacuum for the electric dipole transition $|i \rangle \rightarrow |f \rangle$,
with the corresponding free-space lifetime $\tau^{\, 0}_{E} \equiv 1/\Gamma^{\, 0}_{E}$.
We mention that \eq{gamma_0_E} is consistent with the definition in Refs. \cite{dung_00}.
\bc{ \section{TWO LIMITING CASES} \label{sec:perf_super} }\ec
\subsection{Magnetic Spin Transition}
\subsubsection{ The Limit $\lambda_L(T) \ll \delta(T) , H, \lambda$ }
Let us now consider a special case of the dielectric function in \eq{eps_j}. The superconducting term
dominates over the normal conducting term provided that $\lambda_L(T) \ll \delta(T)$. If, in addition,
$\lambda_L(T) \ll \lambda$, which holds true in practically all cases of interest,
then we can neglect the unit term in \eq{eps_j}.
Here $\lambda = 2\pi/k$ is the wavelength associated with the magnetic spin-flip transition.
The dominant factor in the dielectric function is in this case real,
and the main contribution to the integrals in Eqs. (\ref{I_paral}) and (\ref{I_perp}) occurs for values of
$q$ such that $0 \leq q \leq 1$.
For a slab with a thickness such that $\lambda_L(T) \ll H$, which also holds true in practically all cases
of interest, the exponential functions in the scattering coefficients
Eqs. (\ref{C_N_33}) and (\ref{C_M_33}) can be neglected. The scattering coefficients are then reduced to
${{\cal C}}_{N}(q) \approx r_p(q)$ and ${{\cal C}}_{M}(q) \approx r_s(q)$ for all relevant values of $q$.
Furthermore, with the above mentioned assumptions, the Fresnel coefficients are reduced to $r_p(q) \approx 1$
and $r_s(q) \approx - 1$.
The integrals in Eqs. (\ref{I_paral}) and (\ref{I_perp}) can then be solved analytically.
The total magnetic spin-flip rate for an atom above a slab is then
\bea \label{gamma_perf_B}
\Gamma^{\, \text{pc}}_{B} &\approx& {\bar \Gamma}_{B} \,
( \, \overline{n}_{\text{th}} + 1 \, )
\\ \nonumber
&\times&
\bigg [ \, S^{\, 2} \, + \, \frac{3}{2} \, f_{\|}(kz) \, ( S_x^{\, 2} + S_y^{\, 2} )
\, + \, 3 \, f_{\perp}( k z ) ~ S_z^{\, 2} ~~ \bigg ] \, , \eea
\noi
where we have defined
\bea
f_{\|}( kz ) &\equiv& \frac{\sin( 2 \, k z )}{2 \, k z} \, + \, f_{\perp}(kz) \, ,
\\
f_{\perp}( kz ) &\equiv& \frac{ 2 \, k z \, \cos( 2 \, k z )
\, - \,
\sin( 2 \, k z ) }{( 2 \, k z )^3} \, . \eea
\noi
Note that \eq{gamma_perf_B} is not valid for an arbitrary small thickness $H$ of the slab.
In the limit $\lambda_L(T) \rightarrow 0$, the magnetic spin-flip
rate in \eq{gamma_perf_B} is exact. This result is consistent with Ref. \cite{henkel_99}.
Let us now consider the near-field case $k z = \frac{2 \pi}{\lambda} \, z \ll 1$, which holds true in practically all cases of
interest. The magnetic spin-flip rate is then reduced to
\bea \label{gamma_perf_c_and_n}
\Gamma^{\text{pc} \, , \, \perp}_{B} ~ \approx ~ \frac{ (2 \, k z)^2 }{10} ~ \Gamma^{\, 0}_{B} \,
( \, \overline{n}_{\text{th}} + 1 \, ) \, , \eea
\noi
provided the atomic spin is oriented perpendicular to the slab, i.e. provided that
$\langle f|\hat{S}_x|i\rangle = \langle f|\hat{S}_y|i\rangle = 0$. This result implies that, for an atom at
the surface of the slab, i.e. $z=0$, there is no magnetic spin-flip at all (see lower graph in \fig{fig_kz}).
The particular atomic spin orientation under consideration is the only one that can give zero spin-flip rate
despite the presence of magnetic field fluctuations. Furthermore, when the atomic spin is oriented parallel to the slab,
the magnetic spin-flip rate is
\bea \label{gamma_perf_B_paral}
\Gamma^{\text{pc} \, , \, \|}_{B}
~ \approx ~ 2 ~ \Gamma^{\, 0}_{B} \, ( \, \overline{n}_{\text{th}} + 1 \, ) \, , \eea
\noi
for the near-field case $k z \ll 1$. This result shows that, in the near-field regime, the magnetic dipole-flip
rate is twice the rate as compared to an atom in unbounded free-space, as e.g. pointed out in
Ref.\cite{babiker_03}.
In current atomic chip design (see e.g. Ref.\cite{hinds_03})
the typical atomic frequency is $\omega/2 \pi = 560$ kHz and a typical atom-surface distance is $z=50 \, \mu$m.
Hence, we have $k z \sim 10^{-7}$, i.e. far within the near-field condition $k z \ll 1$.
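The near-field limits above can be verified directly from the definitions of $f_{\|}$ and $f_{\perp}$. A small sketch, using the parameter values quoted above:

```python
import math

def f_perp(kz):
    """f_perp(kz) = (2kz*cos(2kz) - sin(2kz)) / (2kz)^3."""
    u = 2 * kz
    return (u * math.cos(u) - math.sin(u)) / u**3

def f_par(kz):
    """f_par(kz) = sin(2kz)/(2kz) + f_perp(kz)."""
    u = 2 * kz
    return math.sin(u) / u + f_perp(kz)

# Atom-chip numbers from the text: omega/2pi = 560 kHz, z = 50 micrometers.
c = 2.99792458e8
kz = (2 * math.pi * 560e3 / c) * 50e-6
print(kz)  # ~ 6e-7, deep in the near-field regime kz << 1

# Near-field behaviour of the bracket in Eq. (gamma_perf_B):
# perpendicular spin: 1 + 3*f_perp ~ (2kz)^2/10, parallel spin: 1 + (3/2)*f_par ~ 2.
x = 1e-3
print((1 + 3 * f_perp(x)) / ((2 * x)**2 / 10))  # ratio ~ 1
print(1 + 1.5 * f_par(x))                       # ~ 2
```

This reproduces both the $(2kz)^2/10$ suppression for a perpendicular spin and the factor-of-two enhancement for a parallel spin.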
In passing we also give the small $H$ expansion. For sufficiently small thickness of the slab, i.e.
$H \ll \delta^2(T)/\lambda_L(T) , z , \lambda_L(T)$, the magnetic spin-flip rate is
\bea \nonumber
\Gamma_{B}
&\approx& \nonumber
{\bar \Gamma}_{B} ~ ( \, \overline{n}_{\text{th}} + 1 \, )
\\ \nonumber
&\times&
\bigg [ \, S^{\, 2} \, + \, ( S_x^{\, 2} + S_y^{\, 2} ) \,
\frac{3}{64} ~ \frac{1}{( \, k \, \delta(T) \, )^2} \, \frac{1}{k \, \lambda_L(T)} \, ( \, \frac{H}{z} \, )^2 ~
\\
&&
~~~~~~\, + ~ S_z^{\, 2} ~ \frac{3}{64} ~ \frac{k \, \lambda_L(T)}{( \, k \, \delta(T) \, )^2} \,
\frac{k H}{( \, k z \, )^3} ~ \bigg ] \, . \eea
\noi
This expression for the magnetic spin-flip rate is only valid for slab thicknesses
smaller than the London penetration length.
\subsubsection{ The Limit $\delta(T) \ll \lambda_L(T) , H, \lambda, z$ }
Let us now consider the case when the dielectric function as given by \eq{eps_j}
is dominated by the normal conducting term, i.e. when $\delta(T) \ll \lambda_L(T)$ and
$\delta(T) \ll \lambda$.
The dominant factor in the dielectric function is in this case imaginary, corresponding to the well known Drude form,
and the main contribution to the integrals in Eqs. (\ref{I_paral}) and (\ref{I_perp}) is for values of $q$ such that
$q \lesssim 1/kz$.
If, in addition, $\delta(T) \ll z$ then the exponential functions in the scattering coefficients in
Eqs. (\ref{C_N_33}) and (\ref{C_M_33}) are negligible for all values $q \lesssim 1/kz$ provided that
$\delta(T) \ll H$.
The scattering coefficients are then reduced to ${{\cal C}}_{N}(q) \approx 1$ and ${{\cal C}}_{M}(q) \approx -1$,
i.e. the same result as in the last subsection.
Hence, the conditions $\delta(T) \ll \lambda_L(T) , H, \lambda, z$ and $\lambda_L(T) \ll \delta(T) , H, \lambda$
give the same results.
In particular, for the perfect normal conducting limit, i.e. $\delta(T) \rightarrow 0$, the magnetic spin-flip
rate as given by \eq{gamma_perf_B} is exact.
Note that the perfect normal conducting limit is only valid for the case
$\delta(T) \ll \lambda_L(T) , H, \lambda,z$, which e.g. means that the slab cannot be arbitrarily thin.
It also means that, in contrast to the case described in the last subsection, the atom-surface distance $z$
cannot be chosen arbitrarily small.
We close this section by mentioning that, following the two-fluid model \cite{gorter_34,london_34_40} and
using the Gorter-Casimir temperature dependence \cite{gorter_34}, the limit $\delta(T) \rightarrow 0$
is obtained for $T \rightarrow 0$, as e.g. described in Ref. \cite{rekdal_06}.
\begin{figure}\label{fig_kz}
\end{figure}
\subsection{Electric Dipole Transition}
\noi
The only difference between the integrals in Eqs. (\ref{J_paral}-\ref{J_perp}) and
Eqs. (\ref{I_paral}-\ref{I_perp}) is that the roles of the scattering coefficients ${{\cal C}}_{N}(q)$
and ${{\cal C}}_{M}(q)$ are interchanged. Hence, the correction to the vacuum dipole-flip rate corresponding to electric field
fluctuations for the two limits as mentioned above is in general opposite in sign as compared to that of the
magnetic spin-flip case. This was also pointed out in Ref.\cite{knight_73}. It can be understood in physical terms,
since the electric and magnetic fields are perpendicular. Hence, if $\delta(T) \ll \lambda_L(T) , H, \lambda, z$ or
$\lambda_L(T) \ll \delta(T) , H, \lambda$ then the total electric dipole-flip rate
for an atom above a slab is given by
\bea \nonumber
\Gamma^{\, \text{pc}}_{E} &\approx& {\bar \Gamma}_{E} \,
( \, \overline{n}_{\text{th}} + 1 \, )
\\
&\times& \label{gamma_perf_E}
\bigg [ \, d^{\, 2} \, - \, \frac{3}{2} \, f_{\|}(kz) \, ( d_x^{\, 2} + d_y^{\, 2} )
\, - \, 3 \, f_{\perp}(kz) \, d_z^{\, 2} \; \bigg ] \, .\nonumber \\ \eea
\noi
This equation is consistent with the results in Ref. \cite{babiker_03}.
Let us again consider the near-field case $k z \ll 1$. The electric dipole-flip rate is then reduced to
\bea \label{gamma_perf_E_paral}
\Gamma^{\text{pc} \, , \, \|}_{E}
\approx \frac{ (2 \, k z)^2 }{5} ~ \Gamma^{\, 0}_{E} \, ( \, \overline{n}_{\text{th}} + 1 \, ) \, , \eea
\noi
provided that the atomic dipole is oriented parallel to the slab, i.e. provided that
$\langle f|\hat{d}_z|i\rangle = 0$. This result implies that, for an atom at the
surface of the slab, i.e. $z=0$, there is no electric dipole-flip at all. The particular atomic
dipole orientation under consideration is the only one that can give zero dipole-flip rate
despite the presence of electric field fluctuations. Furthermore, when the atomic dipole is oriented
perpendicular to the slab, the electric dipole-flip rate is
\bea \label{gamma_perf_E_perp}
\Gamma^{\text{pc} \, , \, \perp}_{E}
~ \approx ~ 2 ~ \Gamma^{\, 0}_{E} \, ( \, \overline{n}_{\text{th}} + 1 \, ) \, , \eea
\noi
for the near-field case $k z \ll 1$. This result was pointed out by Babiker in Ref. \cite{babiker_03}.
To summarize, in the present paper we have reported results on the magnetic as well as electric
decay properties of a neutral two-level atom trapped in the vicinity of a dielectric body.
For a slab with vacuum on both sides (see \fig{geo_slab_fig}),
we have obtained the flip rate for both of these types of transitions, for
any spin or dipole orientation.
The expressions for the electric and magnetic transition rates can be evaluated exactly in two limiting cases, namely in
the small skin-depth limit for normal conducting metals and in the small
London-length limit for superconductors. In these limits, the correction
to the vacuum rate for an electric dipole transition is opposite in sign to that of a
magnetic spin transition. These results are consistent with well-known results, e.g. Ref. \cite{henkel_99}.
\begin{center} { \bf ACKNOWLEDGMENT } \end{center}
This work has been supported in part by the Norwegian University of Science and Technology (NTNU) and
by the Austrian Science Fund (FWF). One of the authors (P.K.R.) wishes to thank Prof. Ulrich Hohenester at the
Karl-Franzens-Universit\"at Graz for warm hospitality while the present work was completed.
\end{document}
"id": "0609158.tex",
"language_detection_score": 0.7458664774894714,
"char_num_after_normalized": null,
"contain_at_least_two_stop_words": null,
"ellipsis_line_ratio": null,
"idx": null,
"lines_start_with_bullet_point_ratio": null,
"mean_length_of_alpha_words": null,
"non_alphabetical_char_ratio": null,
"path": null,
"symbols_to_words_ratio": null,
"uppercase_word_ratio": null,
"type": null,
"book_name": null,
"mimetype": null,
"page_index": null,
"page_path": null,
"page_title": null
} | arXiv/math_arXiv_v0.2.jsonl | null | null |
\begin{document}
\title[Homogenization of hyperbolic systems]{On operator error estimates for homogenization of~hyperbolic systems with periodic coefficients}
\author{Yu.~M.~Meshkova}
\thanks{The study was supported by project of Russian Science Foundation no. 17-11-01069.}
\keywords{Periodic differential operators, hyperbolic systems, homogenization, operator error estimates. }
\date{\today}
\address{Chebyshev Laboratory, St. Petersburg State University, 14th Line V.O., 29b, St.~Petersburg, 199178, Russia} \email{y.meshkova@spbu.ru,\quad juliavmeshke@yandex.ru}
\subjclass[2000]{Primary 35B27. Secondary 35L52}
\begin{abstract} In $L_2(\mathbb{R}^d;\mathbb{C}^n)$, we consider a selfadjoint matrix strongly elliptic second order differential operator $\mathcal{A}_\varepsilon$, $\varepsilon >0$. The coefficients of the operator $\mathcal{A}_\varepsilon$ are periodic and depend on $\mathbf{x}/\varepsilon$. We study the behavior of the operator $\mathcal{A}_\varepsilon ^{-1/2}\sin (\tau \mathcal{A}_\varepsilon ^{1/2})$, $\tau\in\mathbb{R}$, in the small period limit. The principal term of approximation in the $(H^1\rightarrow L_2)$-norm for this operator is found. Approximation in the $(H^2\rightarrow H^1)$-operator norm with the correction term taken into account is also established. The results are applied to homogenization for the solutions of the nonhomogeneous hyperbolic equation $\partial ^2_\tau \mathbf{u}_\varepsilon =-\mathcal{A}_\varepsilon \mathbf{u}_\varepsilon +\mathbf{F}$. \end{abstract} \maketitle
\tableofcontents
\section*{Introduction} The paper is devoted to homogenization of periodic differential operators (DO's). A broad literature is devoted to homogenization theory, see, e.~g., the books \cite{BaPa,BeLP,Sa,ZhKO}. We use the spectral approach to homogenization problems based on the Floquet-Bloch theory and the analytic perturbation theory.
\subsection{The class of operators} In $L_2(\mathbb{R}^d;\mathbb{C}^n)$, we consider a matrix elliptic second order DO $\mathcal{A}_\varepsilon$ admitting a factorization $ \mathcal{A}_\varepsilon=b(\mathbf{D})^*g(\mathbf{x}/\varepsilon)b(\mathbf{D})$, $\varepsilon >0$. Here $b(\mathbf{D})=\sum _{j=1}^d b_jD_j$ is an $(m\times n)$-matrix-valued first order DO with constant coefficients. Assume that $m\geqslant n$ and that the symbol $b(\boldsymbol{\xi})$ has maximal rank. A periodic $(m\times m)$-matrix-valued function $g(\mathbf{x})$ is such that $g(\mathbf{x})>0$; $g, g^{-1}\in L_\infty$. The coefficients of the operator $\mathcal{A}_\varepsilon$ oscillate rapidly as $\varepsilon \rightarrow 0$.
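A simple example fitting this factorized scheme is the scalar operator of acoustics (one of the applications treated below): with $b(\mathbf{D})=\mathbf{D}$, $n=1$, $m=d$, one has

```latex
\mathcal{A}_\varepsilon
  = b(\mathbf{D})^{*} g(\mathbf{x}/\varepsilon)\, b(\mathbf{D})
  = \mathbf{D}^{*} g(\mathbf{x}/\varepsilon)\, \mathbf{D}
  = -\operatorname{div} g(\mathbf{x}/\varepsilon) \nabla ,
```

where $g$ is a positive periodic $(d\times d)$-matrix-valued function; the symbol $b(\boldsymbol{\xi})=\boldsymbol{\xi}$ obviously has maximal rank $n=1$.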
\subsection{Operator error estimates for elliptic and parabolic problems}
In a series of papers \cite{BSu,BSu05-1,BSu05,BSu06} by M.~Sh.~Birman and T.~A.~Suslina, an abstract operator-theoretic (spectral) approach to homogenization problems in $\mathbb{R}^d$ was developed. This approach is based on the scaling transformation, the Floquet-Bloch theory, and the analytic perturbation theory.
A typical homogenization problem is to study the behavior of the solution $\mathbf{u}_\varepsilon$ of the equation $\mathcal{A}_\varepsilon \mathbf{u}_\varepsilon +\mathbf{u}_\varepsilon=\mathbf{F}$, where $\mathbf{F}\in L_2(\mathbb{R}^d;\mathbb{C}^n)$, as $\varepsilon\rightarrow 0$. It turns out that the solutions $\mathbf{u}_\varepsilon$ converge in some sense to the solution $\mathbf{u}_0$ of the homogenized equation $\mathcal{A}^0\mathbf{u}_0+\mathbf{u}_0=\mathbf{F}$. Here \begin{equation*} \mathcal{A}^0=b(\mathbf{D})^*g^0b(\mathbf{D}) \end{equation*} is the \textit{effective operator} and $g^0$ is the constant \textit{effective matrix}. The way to construct $g^0$ is well known in homogenization theory.
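In the simplest case $d=n=m=1$, $b(D)=D$, the effective coefficient is given by the classical harmonic mean formula: if $\Omega$ is the cell of periods,

```latex
\mathcal{A}^0 = g^0 D^2 = -\, g^0 \frac{d^2}{dx^2},
\qquad
g^0 = \underline{g}
    := \Bigl( \vert\Omega\vert^{-1}\int_\Omega g(x)^{-1}\,dx \Bigr)^{-1},
```

so that $g^0$ is, in general, different from the mean value $\overline{g}=\vert\Omega\vert^{-1}\int_\Omega g(x)\,dx$.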
In \cite{BSu}, it was shown that \begin{equation} \label{eq/ 0} \Vert \mathbf{u}_\varepsilon -\mathbf{u}_0\Vert _{L_2(\mathbb{R}^d)}\leqslant C\varepsilon \Vert\mathbf{F}\Vert _{L_2(\mathbb{R}^d)}. \end{equation} This estimate is order-sharp. The constant $C$ is controlled explicitly in terms of the problem data. Inequality \eqref{eq/ 0} means that the resolvent $(\mathcal{A}_\varepsilon +I)^{-1}$ converges to the resolvent of the effective operator in the $L_2(\mathbb{R}^d;\mathbb{C}^n)$-operator norm, as $\varepsilon\rightarrow 0$. Moreover, \begin{equation*} \Vert (\mathcal{A}_\varepsilon +I)^{-1}-(\mathcal{A}^0 +I)^{-1}\Vert _{L_2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)}\leqslant C\varepsilon . \end{equation*} Results of this type are called \textit{operator error estimates} in homogenization theory.
In \cite{BSu06}, approximation of the resolvent $(\mathcal{A}_\varepsilon +I)^{-1}$ in the $(L_2\rightarrow H^1)$-operator norm was found: \begin{equation*} \Vert (\mathcal{A}_\varepsilon +I)^{-1}-(\mathcal{A}^0 +I)^{-1}-\varepsilon K(\varepsilon)\Vert _{L_2(\mathbb{R}^d)\rightarrow H^1(\mathbb{R}^d)}\leqslant C\varepsilon . \end{equation*} Here the \textit{correction term} $K(\varepsilon)$ is taken into account. It contains a rapidly oscillating factor and so depends on $\varepsilon$. Herewith, $\Vert \varepsilon K(\varepsilon)\Vert _{L_2\rightarrow H^1}=O(1)$. In contrast to the traditional corrector of homogenization theory, the operator $K(\varepsilon)$ contains an auxiliary smoothing operator $\Pi _\varepsilon $ (see \eqref{Pi eps} below).
To parabolic homogenization problems the spectral approach was applied in \cite{Su04,Su07,Su_MMNP}. The principal term of approximation was found in \cite{Su04,Su07}: \begin{equation*} \Vert e^{-\tau \mathcal{A}_\varepsilon}-e^{-\tau\mathcal{A}^0}\Vert _{L_2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)} \leqslant C\varepsilon \tau ^{-1/2},\quad \tau >0. \end{equation*} Approximation with the corrector taken into account was obtained in \cite{Su_MMNP}: \begin{equation*} \Vert e^{-\tau \mathcal{A}_\varepsilon}-e^{-\tau\mathcal{A}^0}-\varepsilon\mathcal{K}(\varepsilon ,\tau)\Vert _{L_2(\mathbb{R}^d)\rightarrow H^1(\mathbb{R}^d)}\leqslant C\varepsilon (\tau ^{-1}+\tau ^{-1/2}),\quad 0<\varepsilon\leqslant\tau ^{1/2}. \end{equation*}
Another approach to deriving operator error estimates (the so-called \textit{modified method of first order approximation} or the \textit{shift method}) was suggested by V.~V.~Zhikov \cite{Zh1,Zh2} and developed by V.~V.~Zhikov and S.~E.~Pastukhova \cite{ZhPas}. In these papers the elliptic problems for the operators of acoustics and elasticity theory were studied. To parabolic problems the shift method was applied in \cite{ZhPAs_parabol}. Further results of V. V. Zhikov and S. E. Pastukhova are discussed in the recent survey \cite{ZhPasUMN}.
\subsection{Operator error estimates for homogenization of hyperbolic equations and nonstationary Schr\"odinger-type equations}
For elliptic and parabolic problems operator error estimates are well studied. The situation with homogenization of nonstationary Schr\"odinger-type and hyperbolic equations is different. In \cite{BSu08}, the operators $e^{-i\mathcal{A}_\varepsilon}$ and $\cos (\tau \mathcal{A}_\varepsilon ^{1/2})$ were studied. It turned out that for these operators it is impossible to find approximations in the $(L_2\rightarrow L_2)$-norm. Approximations in the $(H^s\rightarrow L_2)$-norms with suitable $s$ were found in \cite{BSu08}: \begin{align} \label{Schreg intr} &\Vert e^{-i\tau \mathcal{A}_\varepsilon }-e^{-i\tau\mathcal{A}^0}\Vert _{H^3(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)}\leqslant C\varepsilon(1+\vert \tau\vert ), \\ \label{cos intr} &\Vert \cos (\tau\mathcal{A}_\varepsilon ^{1/2})-\cos (\tau (\mathcal{A}^0)^{1/2})\Vert _{H^2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)}\leqslant C\varepsilon (1+\vert\tau\vert) . \end{align} Later T.~A.~Suslina \cite{Su17}, by using the analytic perturbation theory, proved that estimate \eqref{Schreg intr} cannot be refined with respect to the type of the operator norm. Developing the method of \cite{Su17}, M.~A.~Dorodnyi and T.~A.~Suslina \cite{DSu,DSu2} showed that estimate \eqref{cos intr} is sharp in the same sense. In \cite{DSu,DSu2,Su17}, under some additional assumptions on the operator, the results \eqref{Schreg intr} and \eqref{cos intr} were improved with respect to the type of the operator norm. 
In \cite{BSu08,DSu2}, by virtue of the identity $\mathcal{A}_\varepsilon ^{-1/2}\sin(\tau\mathcal{A}_\varepsilon ^{1/2})=\int _0^\tau \cos(\widetilde{\tau} \mathcal{A}_\varepsilon ^{1/2})\,d\widetilde{\tau}$ and the similar identity for the effective operator, the estimate \begin{equation} \label{Th BSu} \begin{split} \Vert \mathcal{A}_\varepsilon^{-1/2}\sin(\tau \mathcal{A}^{1/2}_\varepsilon)-(\mathcal{A}^0)^{-1/2}\sin (\tau (\mathcal{A}^0)^{1/2})\Vert_{H^2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)} \leqslant C\varepsilon (1+\vert \tau\vert )^2, \quad \tau\in\mathbb{R}, \end{split} \end{equation} was deduced from \eqref{cos intr} as a (rough) consequence. The sharpness of estimate \eqref{Th BSu} with respect to the type of the operator norm was not discussed. Estimates \eqref{cos intr} and \eqref{Th BSu} were applied to homogenization for the solution of the Cauchy problem \begin{equation} \label{hyperbolic problem in introduction} \begin{cases} \partial ^2_\tau \mathbf{u}_\varepsilon (\mathbf{x},\tau)=-\mathcal{A}_\varepsilon\mathbf{u}_\varepsilon (\mathbf{x},\tau )+\mathbf{F}(\mathbf{x},\tau ),\\ \mathbf{u}_\varepsilon (\mathbf{x},0)=\boldsymbol{\varphi}(\mathbf{x}),\quad\partial _\tau \mathbf{u}_\varepsilon (\mathbf{x},0)=\boldsymbol{\psi}(\mathbf{x}). \end{cases} \end{equation}
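For the reader's convenience, we recall how \eqref{Th BSu} follows from \eqref{cos intr}: by the integral identity above,

```latex
\bigl\Vert \mathcal{A}_\varepsilon^{-1/2}\sin(\tau \mathcal{A}^{1/2}_\varepsilon)
 -(\mathcal{A}^0)^{-1/2}\sin (\tau (\mathcal{A}^0)^{1/2})\bigr\Vert_{H^2\rightarrow L_2}
\leqslant \int_0^{\vert\tau\vert}
 \bigl\Vert \cos (\widetilde{\tau}\mathcal{A}_\varepsilon ^{1/2})
 -\cos (\widetilde{\tau} (\mathcal{A}^0)^{1/2})\bigr\Vert_{H^2\rightarrow L_2}\,d\widetilde{\tau}
\leqslant C\varepsilon \Bigl(\vert\tau\vert +\frac{\tau ^2}{2}\Bigr)
\leqslant C\varepsilon (1+\vert\tau\vert )^2 ,
```

which makes the loss of precision (the factor $(1+\vert\tau\vert)^2$ in place of $(1+\vert\tau\vert)$) transparent.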
\subsection{Approximation for the solutions of hyperbolic systems with the correction term taken into account}
Operator error estimates with the correction term for nonstationary Schr\"odinger-type and hyperbolic equations have not previously been established. So, we discuss the known ``classical'' homogenization results that cannot be written in the uniform operator topology. These results concern operators in a bounded domain $\mathcal{O}\subset\mathbb{R}^d$. Approximation for the solution of the hyperbolic equation with zero initial data and a non-zero right-hand side was obtained in \cite[Chapter 2, Subsec. 3.6]{BeLP}. In \cite{BeLP}, it was shown that the difference of the solution and the first order approximation converges strongly to zero in $L_2((0,T);H^1(\mathcal{O}))$. An error estimate was not established. The case of zero initial data and a non-zero right-hand side was also considered in \cite[Chapter 4, Section 5]{BaPa}. In \cite{BaPa}, the complete asymptotic expansion of the solution was constructed and an estimate of order $O(\varepsilon ^{1/2})$ for the difference of the solution and the first order approximation in the $H^1$-norm on the cylinder $\mathcal{O}\times(0,T)$ was obtained. Herewith, the right-hand side was assumed to be $C^\infty$-smooth.
It is natural to be interested in approximation with the correction term for the solutions of hyperbolic systems with non-zero initial data, i.e., in approximation of the operator cosine $\cos (\tau \mathcal{A}_\varepsilon ^{1/2})$ in some suitable sense. One could expect the correction term in this case to be of similar structure as for elliptic and parabolic problems. However, in
\cite{BrOFMu} it was observed that this is true only for a very special class of initial data. In the general case, approximation with the corrector was found in \cite{BraLe,CaDCoCaMaMarG1}, but the correction term was non-local because of the dispersion of waves in inhomogeneous media. Dispersion effects for homogenization of the wave equation were discussed in \cite{ABriV,ConOrV,ConSaMaBalV} via the Floquet-Bloch theory and the analytic perturbation theory. Operator error estimates have not been obtained.
\subsection{Main results }
\textit{Our goal} is to refine estimate \eqref{Th BSu} with respect to the type of the operator norm without any additional assumptions and to find an approximation for the operator $\mathcal{A}_\varepsilon ^{-1/2}\sin(\tau \mathcal{A}_\varepsilon ^{1/2})$ in the $(H^2\rightarrow H^1)$-norm. We wish to apply the results to problem \eqref{hyperbolic problem in introduction} with $\boldsymbol{\varphi}=0$ and non-zero $\mathbf{F}$ and $\boldsymbol{\psi}$.
Our first main result is the estimate \begin{align} \label{main result 1} \begin{split} \Vert \mathcal{A}_\varepsilon^{-1/2}\sin(\tau \mathcal{A}^{1/2}_\varepsilon)-(\mathcal{A}^0)^{-1/2}\sin (\tau (\mathcal{A}^0)^{1/2})\Vert_{H^1(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)} \leqslant C\varepsilon (1+\vert \tau\vert ), \quad\varepsilon >0,\quad \tau\in\mathbb{R}. \end{split} \end{align} (Under additional assumptions on the operator, an improvement of estimate \eqref{main result 1} with respect to the type of the norm was obtained by M.~A.~Dorodnyi and T.~A.~Suslina in the forthcoming paper \cite{DSu17}, which is, in fact, a major revision of \cite{DSu2}.) Our second main result is the approximation \begin{align} \label{main result 2} \begin{split} \left\Vert \mathcal{A}_\varepsilon ^{-1/2}\sin (\tau \mathcal{A}_\varepsilon ^{1/2})-(\mathcal{A}^0)^{-1/2}\sin (\tau (\mathcal{A}^0)^{1/2})-\varepsilon\mathrm{K}(\varepsilon ,\tau)\right\Vert _{H^2(\mathbb{R}^d)\rightarrow H^1(\mathbb{R}^d)} \leqslant C \varepsilon(1+\vert\tau\vert), \end{split} \end{align} $\varepsilon >0$, $\tau\in\mathbb{R}$. In the general case, the corrector contains a smoothing operator. We distinguish the cases when the smoothing operator can be removed. We also show that the smoothing operator naturally arising from our method can be replaced by the Steklov smoothing; the latter is more convenient for homogenization problems in a bounded domain. The use of the Steklov smoothing is borrowed from \cite{ZhPas}.
The results are applied to homogenization of the system \eqref{hyperbolic problem in introduction} with $\boldsymbol{\varphi}=0$. A more general equation $Q (\mathbf{x}/\varepsilon)\partial ^2_\tau \mathbf{u}_\varepsilon (\mathbf{x},\tau)=-\mathcal{A}_\varepsilon\mathbf{u}_\varepsilon (\mathbf{x},\tau )+Q (\mathbf{x}/\varepsilon) \mathbf{F}(\mathbf{x},\tau )$ is also considered. Here $Q(\mathbf{x})$ is a $\Gamma$-periodic $(n\times n)$-matrix-valued function such that $Q(\mathbf{x})>0$ and $Q,Q^{-1}\in L_\infty$. In the Introduction, we discuss only the case $Q=\mathbf{1}_n$ for simplicity.
\subsection{Method}
We apply the method of \cite{BSu08,DSu2}, carrying out all the constructions for the operator $\mathcal{A}_\varepsilon ^{-1/2}\sin (\tau\mathcal{A}_\varepsilon ^{1/2})$. To obtain the result with the correction term, we borrow some technical tools from \cite{Su_MMNP}. By the \textit{scaling transformation}, inequality \eqref{main result 1} is equivalent to \begin{equation} \label{est no eps intr} \begin{split} \Bigl\Vert & \left(\mathcal{A}^{-1/2}\sin(\varepsilon ^{-1}\tau\mathcal{A}^{1/2})-(\mathcal{A}^0)^{-1/2}\sin(\varepsilon ^{-1}\tau (\mathcal{A}^0)^{1/2})\right)\varepsilon (-\Delta +\varepsilon ^2 I)^{-1/2}\Bigr\Vert _{L_2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)} \\ &\leqslant C(1+\vert\tau\vert),\quad \tau\in\mathbb{R},\quad \varepsilon >0. \end{split} \end{equation} Here $\mathcal{A}=b(\mathbf{D})^*g(\mathbf{x})b(\mathbf{D})$. Because of the presence of differentiation in the definition of the $H^1$-norm, by the scaling transformation, inequality \eqref{main result 2} reduces to an estimate of order $O(\varepsilon)$: \begin{equation} \label{0.8a introduction} \begin{split} \Bigl\Vert &\mathbf{D}\left(\mathcal{A}^{-1/2}\sin (\varepsilon ^{-1}\tau \mathcal{A}^{1/2}) -(\mathcal{A}^0)^{-1/2}\sin (\varepsilon ^{-1}\tau (\mathcal{A}^0)^{1/2}) -\mathrm{K}(1,\varepsilon ^{-1}\tau) \right) \\ &\times \varepsilon ^2 (-\Delta +\varepsilon ^2 I)^{-1}\Bigr\Vert _{L_2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)} \leqslant C\varepsilon (1+\vert \tau\vert ),\quad \tau \in\mathbb{R},\quad\varepsilon >0. \end{split} \end{equation} For this reason, in estimate \eqref{0.8a introduction}, we use the ``smoothing operator'' $\varepsilon ^2(-\Delta +\varepsilon ^2 I)^{-1}$ instead of the operator $\varepsilon (-\Delta +\varepsilon ^2 I)^{-1/2}$, which was used in estimate \eqref{est no eps intr} of order $O(1)$.
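The scaling transformation behind the equivalence of \eqref{main result 1} and \eqref{est no eps intr} is standard: let $T_\varepsilon$ be the unitary scaling operator $(T_\varepsilon u)(\mathbf{x})=\varepsilon ^{d/2}u(\varepsilon\mathbf{x})$ in $L_2(\mathbb{R}^d;\mathbb{C}^n)$. Then

```latex
\mathcal{A}_\varepsilon = \varepsilon ^{-2}\, T_\varepsilon^{*}\, \mathcal{A}\, T_\varepsilon ,
\qquad
\mathcal{A}_\varepsilon ^{-1/2}\sin (\tau \mathcal{A}_\varepsilon ^{1/2})
 = \varepsilon\, T_\varepsilon^{*}\,
   \mathcal{A}^{-1/2}\sin (\varepsilon ^{-1}\tau \mathcal{A}^{1/2})\, T_\varepsilon ,
```

and similarly for the effective operator, which explains the appearance of the ``large time'' $\varepsilon^{-1}\tau$ in \eqref{est no eps intr}.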
Thus, the principal term of approximation of the operator $\mathcal{A}_\varepsilon ^{-1/2}\sin (\tau\mathcal{A}_\varepsilon ^{1/2})$ is obtained in the $(H^1\rightarrow L_2)$-norm, but approximation in the energy class is given in the $(H^2\rightarrow H^1)$-norm.
To obtain estimates \eqref{est no eps intr} and \eqref{0.8a introduction}, using the unitary \textit{Gelfand transformation} (see Section~\ref{Subsec Gelfand} below), we decompose the operator $\mathcal{A}$ into the direct integral of operators $\mathcal{A}(\mathbf{k})$ acting in the space $L_2$ on the cell of periodicity and depending on the parameter $\mathbf{k}\in\mathbb{R}^d$ called the \textit{quasimomentum}. We study the family $\mathcal{A}(\mathbf{k})$ by means of the analytic perturbation theory with respect to the one-dimensional parameter $\vert\mathbf{k}\vert$. Then we should make our constructions and estimates uniform in the additional parameter $\boldsymbol{\theta}:=\mathbf{k}/\vert \mathbf{k}\vert$. Herewith, a good deal of the considerations can be carried out in the framework of an abstract operator-theoretic scheme.
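In its standard form (with $\Gamma$ the lattice of periods, $\Omega$ the elementary cell, and $\widetilde{\Omega}$ the cell of the dual lattice), the Gelfand transformation acts as

```latex
(\mathcal{U}u)(\mathbf{k},\mathbf{x})
 = \vert \widetilde{\Omega}\vert ^{-1/2}
   \sum _{\mathbf{a}\in\Gamma} e^{-i\langle \mathbf{k},\mathbf{x}+\mathbf{a}\rangle}\, u(\mathbf{x}+\mathbf{a}),
\qquad \mathbf{x}\in\Omega,\quad \mathbf{k}\in\widetilde{\Omega},
```

and it decomposes $\mathcal{A}$ into the direct integral of the operators $\mathcal{A}(\mathbf{k})=b(\mathbf{D}+\mathbf{k})^*g(\mathbf{x})b(\mathbf{D}+\mathbf{k})$ acting in $L_2(\Omega;\mathbb{C}^n)$ with periodic boundary conditions.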
\subsection{Plan of the paper} The paper consists of three chapters. Chapter~{I} (Sec.~\ref{Section Preliminaries}--\ref{Section sandwiched abstract}) contains necessary operator-theoretic material. Chapter~{II} (Sec. \ref{Section Factorized families}--\ref{Section 10}) is devoted to periodic DO's. In Sec.~\ref{Section Factorized families}--\ref{Sec eff op}, the class of operators under consideration is introduced, the direct integral decomposition is described, and the effective characteristics are found. In Sec.~\ref{Section 7 appr A(k)} and \ref{Section 10}, the approximations for the operator-valued function $\mathcal{A} ^{-1/2}\sin(\varepsilon ^{-1}\tau\mathcal{A}^{1/2})$ are obtained and estimates \eqref{est no eps intr} and \eqref{0.8a introduction} are proven. In Chapter~{III} (Sec.~\ref{Section main results in general case} and \ref{Section hyperbolic general case}), homogenization for hyperbolic systems is considered. In Sec.~\ref{Section main results in general case}, the main results of the paper in operator terms (estimates \eqref{main result 1} and \eqref{main result 2}) are obtained. Afterwards, in Sec.~\ref{Section hyperbolic general case}, these results are applied to homogenization for solutions of the nonhomogeneous hyperbolic systems. Section~\ref{Section Applications} is devoted to applications of the general results to the acoustics equation, the operator of elasticity theory and the model equation of electrodynamics.
\subsection{Notation} Let $\mathfrak{H}$ and $\mathfrak{H}_*$ be separable Hilbert spaces. The symbols $(\cdot ,\cdot )_\mathfrak{H}$ and $\Vert \cdot \Vert _\mathfrak{H}$ mean the inner product and the norm in
$\mathfrak{H}$, respectively; the symbol $\Vert \cdot\Vert _{\mathfrak{H}\rightarrow\mathfrak{H}_*}$ denotes the norm of a~bounded linear operator acting from $\mathfrak{H}$ to $\mathfrak{H}_*$. Sometimes we omit the indices if this does not lead to confusion. By $I=I_\mathfrak{H}$ we denote the identity operator in $\mathfrak{H}$. If $A:\mathfrak{H}\rightarrow\mathfrak{H}_*$ is a linear operator, then $\mathrm{Dom}\,A$ denotes the domain of $A$. If $\mathfrak{N}$ is a subspace of $\mathfrak{H}$, then $\mathfrak{N}^\perp:=\mathfrak{H}\ominus\mathfrak{N}$.
The symbol $\langle\cdot,\cdot\rangle$ denotes the inner product in $\mathbb{C}^n$, $\vert \cdot\vert$ means the norm of a vector in $\mathbb{C}^n$; $\mathbf{1}_n$ is the unit matrix of size $n\times n$. If $a$ is an $(m\times n)$-matrix, then $\vert a\vert$ denotes its norm as a linear operator from $\mathbb{C}^n$ to $\mathbb{C}^m$; $a^*$ means the Hermitian conjugate $(n\times m)$-matrix.
The classes $L_p$ of $\mathbb{C}^n$-valued functions on a domain $\mathcal{O}\subset\mathbb{R}^d$ are denoted by $L_p(\mathcal{O};\mathbb{C}^n)$, $1\leqslant p\leqslant \infty$. The Sobolev spaces of order $s$ of $\mathbb{C}^n$-valued functions on a domain $\mathcal{O}\subset\mathbb{R}^d$ are denoted by $H^s(\mathcal{O};\mathbb{C}^n)$. By $\mathcal{S}(\mathbb{R}^d;\mathbb{C}^n)$ we denote the Schwartz class of $\mathbb{C}^n$-valued functions in $\mathbb{R}^d$. If $n=1$, then we simply write $L_p(\mathcal{O})$, $H^s(\mathcal{O})$ and so on, but sometimes we use such simplified notation also for the spaces of vector-valued or matrix-valued functions. The symbol $L_p((0,T);\mathfrak{H})$, $1\leqslant p\leqslant\infty$, stands for $L_p$-space of $\mathfrak{H}$-valued functions on the interval $(0,T)$.
Next, $\mathbf{x}=(x_1,\dots,x_d)\in\mathbb{R}^d$, $iD_j=\partial _j=\partial /\partial x_j$, $j=1,\dots,d$, $\mathbf{D}=-i\nabla=(D_1,\dots,D_d)$. The Laplace operator is denoted by $\Delta =\partial ^2/\partial x_1^2+\dots +\partial ^2/\partial x_d^2$.
By $C$, $\mathcal{C}$, $\mathfrak{C}$, $c$, $\mathfrak{c}$ (probably, with indices and marks) we denote various constants in estimates. The absolute constants are denoted by $\beta$ with various indices.
\section*{Chapter I. Abstract scheme} \label{Section Abstract sheme}
\section{Preliminaries} \label{Section Preliminaries}
\subsection{Quadratic operator pencils} \label{Subsubsection operator pencils} Let $\mathfrak{H}$ and $\mathfrak{H}_*$ be separable complex Hilbert spaces. Suppose that $X_0 :\mathfrak{H}\rightarrow \mathfrak{H}_*$ is a densely defined and closed operator, and that $X_1 :\mathfrak{H}\rightarrow\mathfrak{H}_*$ is a bounded operator. On the domain $\mathrm{Dom}\,X(t)=\mathrm{Dom}\,X_0$, consider the operator $X(t):=X_0+tX_1$, $t\in\mathbb{R}$. \textit{Our main object }is a family of operators \begin{equation} \label{A(t)=} A(t):=X(t)^*X(t),\quad t\in\mathbb{R}, \end{equation} that are selfadjoint in $\mathfrak{H}$ and non-negative. The operator $A(t)$ acting in $\mathfrak{H}$ is generated by the closed quadratic form $\Vert X(t)u\Vert ^2_{\mathfrak{H}_*}$, $u\in\mathrm{Dom}\,X_0$. Denote $A(0)=X_0^*X_0=:A_0$. Put $ \mathfrak{N}:=\mathrm{Ker}\,A_0=\mathrm{Ker}\,X_0$, $ \mathfrak{N}_*:=\mathrm{Ker}\,X_0^*$. We assume that the point $\lambda _0=0$ is isolated in the spectrum of $A_0$ and $ 0<n:=\mathrm{dim}\,\mathfrak{N}<\infty$, $n\leqslant n_*:=\mathrm{dim}\,\mathfrak{N}_*\leqslant\infty $. By $d_0$ we denote the distance from the point zero to the rest of the spectrum of $A_0$ and by $F(t,s)$ we denote the spectral projection of the operator $A(t)$ for the interval $[0,s]$. Fix $\delta >0$ such that $8\delta <d_0$. Next, we choose a number $t_0>0$ such that \begin{equation} \label{t_0(delta) abstact scheme} t_0\leqslant \delta ^{1/2}\Vert X_1\Vert ^{-1} _{\mathfrak{H}\rightarrow \mathfrak{H}_*}. \end{equation} Then (see \cite[Chapter 1, (1.3)]{BSu}) $F(t,\delta )=F(t,3\delta)$ and $\mathrm{rank}\,F(t,\delta)=n$ for $\vert t\vert\leqslant t_0$. We often write $F(t)$ instead of $F(t,\delta )$. Let $P$ and $P_*$ be the orthogonal projections of $\mathfrak{H}$ onto $\mathfrak{N}$ and of $\mathfrak{H}_*$ onto $\mathfrak{N}_*$, respectively.
\subsection{Operators $Z$ and $R$} Let $\mathcal{D}:=\mathrm{Dom}\,X_0\cap \mathfrak{N}^\perp$, and let $u\in\mathfrak{H}_*$. Consider the following equation for the element $\psi\in\mathcal{D}$ (cf. \cite[Chapter 1, (1.7)]{BSu}): \begin{equation} \label{first eq. for Z} X_0^*(X_0\psi - u)=0. \end{equation} The equation is understood in the weak sense. In other words, $\psi\in\mathcal{D}$ satisfies the identity \begin{equation*} (X_0\psi ,X_0\zeta )_{\mathfrak{H}_*}=(u,X_0\zeta)_{\mathfrak{H}_*},\quad \forall \zeta\in\mathcal{D}. \end{equation*} Equation \eqref{first eq. for Z} has a unique solution $\psi$, and $\Vert X_0\psi\Vert_{\mathfrak{H}_*}\leqslant\Vert u\Vert _{\mathfrak{H}_*}$. Now, let $\omega\in\mathfrak{N}$ and $u=-X_1\omega$. The corresponding solution of equation \eqref{first eq. for Z} is denoted by $\psi(\omega)$. We define the bounded operator $Z:\mathfrak{H}\rightarrow\mathfrak{H}$ by the identities \begin{equation*} Z\omega =\psi (\omega),\quad\omega\in\mathfrak{N};\quad Zx=0,\quad x\in\mathfrak{N}^\perp . \end{equation*} Note that \begin{equation} \label{ZP=Z, PZ=0} ZP=Z,\quad PZ=0. \end{equation}
Now, we introduce an operator $ R :\mathfrak{N}\rightarrow \mathfrak{N}_*$ (see \cite[Chapter 1, Subsec. 1.2]{BSu}) as follows:
$R\omega =X_0\psi(\omega)+X_1\omega\in\mathfrak{N}_*$. Another description of $R$ is given by the formula $ R=P_*X_1\vert _\mathfrak{N}$.
\subsection{The spectral germ}
The selfadjoint operator $ S:=R^*R :\mathfrak{N}\rightarrow \mathfrak{N} $ is called the \textit{spectral germ} of the operator family \eqref{A(t)=} at $t=0$ (see \cite[Chapter 1, Subsec. 1.3]{BSu}). This operator also can be written as $S=PX_1^*P_*X_1\vert _\mathfrak{N}$. So, \begin{equation} \label{vert S vert =} \Vert S\Vert \leqslant \Vert X_1\Vert ^2. \end{equation} The spectral germ $S$ is called \textit{nondegenerate,} if $\mathrm{Ker}\,S=\lbrace 0\rbrace$ or, equivalently, $\mathrm{rank}\,R=n$.
In accordance with the analytic perturbation theory (see \cite{K}), for $\vert t\vert\leqslant t_0$ there exist real-analytic functions $\lambda _l(t)$ and real-analytic $\mathfrak{H}$-valued functions $\phi _l(t)$ such that \begin{equation*} A(t)\phi _l(t)=\lambda _l(t)\phi _l(t),\quad l=1,\dots,n,\quad\vert t\vert \leqslant t_0, \end{equation*} and $\phi _l(t)$, $l=1,\dots,n$, form an orthonormal basis in the eigenspace $F(t)\mathfrak{H}$. For sufficiently small $t_*$ ($\leqslant t_0$) and $\vert t\vert \leqslant t_*$, we have the following convergent power series expansions: \begin{align} \label{lambda_l(t)= series} &\lambda _l(t)=\gamma _l t^2 +\mu _l t^3+\dots ,\quad \gamma _l\geqslant 0,\quad\mu _l\in\mathbb{R},\quad l=1,\dots,n;\\ &\phi _l(t)=\omega _l +t\phi _l^{(1)}+t^2\phi _l^{(2)}+\dots,\quad l=1,\dots,n. \nonumber \end{align} The elements $\omega _l=\phi_l(0)$, $l=1,\dots ,n$, form an orthonormal basis in $\mathfrak{N}$.
In \cite[Chapter 1, Subsec. 1.6]{BSu} it was shown that the numbers $\gamma _l$ and the elements $\omega _l$, $l=1,\dots,n$, are eigenvalues and eigenvectors of the operator $S$: \begin{equation} \label{S omega _l=} S\omega_l=\gamma_l\omega_l,\quad l=1,\dots, n. \end{equation} The numbers $\gamma _l$ and the vectors $\omega _l$, $l=1,\dots ,n$, are called \textit{threshold characteristics at the bottom of the spectrum} of the operator family $A(t)$.
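The relation \eqref{S omega _l=} between the germ and the coefficients $\gamma_l$ is easy to observe numerically in a toy finite-dimensional model. The matrices below are chosen arbitrarily for illustration and are not taken from the paper:

```python
import numpy as np

# Toy model on H = C^2: X0 has one-dimensional kernel N = span(e1),
# so P = P_* = diag(1, 0); X1 is an arbitrary bounded perturbation.
X0 = np.diag([0.0, 1.0])
X1 = np.array([[2.0, 1.0],
               [3.0, -1.0]])
P = np.diag([1.0, 0.0])       # orthogonal projection onto N  = Ker X0
P_star = np.diag([1.0, 0.0])  # orthogonal projection onto N* = Ker X0^*

# Spectral germ S = P X1^* P_* X1 restricted to N; since N is spanned
# by e1, the germ reduces to the scalar gamma = X1[0,0]**2.
S = P @ X1.T @ P_star @ X1 @ P
gamma = S[0, 0]               # = 4.0 for the matrices above

# The smallest eigenvalue of A(t) = X(t)^* X(t), X(t) = X0 + t X1,
# behaves as gamma * t^2 + O(t^3) for small t.
t = 1e-4
X_t = X0 + t * X1
lam_min = np.linalg.eigvalsh(X_t.T @ X_t)[0]  # eigvalsh: ascending order
print(lam_min / t**2)         # close to gamma = 4
```

Running this prints a value close to $4$, in agreement with the expansion $\lambda_1(t)=\gamma_1 t^2+O(t^3)$.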
\subsection{Threshold approximations} We assume that \begin{equation} \label{A(t)>=} A(t)\geqslant c_*t^2 I,\quad \vert t\vert\leqslant t_0, \end{equation} for some $c_*>0$. This is equivalent to the following estimates for the eigenvalues $\lambda _l(t)$ of the operator $A(t)$: $ \lambda_l(t)\geqslant c_* t^2$, $\vert t\vert\leqslant t_0$, $l=1,\dots,n$. Taking \eqref{lambda_l(t)= series} into account, we see that $\gamma _l\geqslant c_*$, $l=1,\dots,n$. So, by \eqref{S omega _l=}, the germ $S$ is nondegenerate: \begin{equation} \label{S>=} S\geqslant c_* I_\mathfrak{N} . \end{equation}
As was shown in \cite[Chapter 1, Theorem 4.1]{BSu}, \begin{equation} \label{F-P} \Vert F(t)-P\Vert \leqslant C_1 \vert t\vert,\quad \vert t\vert\leqslant t_0;\quad C_1:=\beta_1 \delta ^{-1/2}\Vert X_1\Vert . \end{equation} Besides \eqref{F-P}, we need more accurate approximation of the spectral projection obtained in \cite[(2.10) and (2.15)]{BSu05-1}: \begin{equation} \label{F(t)=P+tF_1+F_2(t)} F(t)=P+tF_1+F_2(t),\quad \Vert F_2(t)\Vert\leqslant C_2 t^2,\quad \vert t\vert\leqslant t_0;\quad C_2:= \beta _2 \delta ^{-1}\Vert X_1\Vert ^2; \end{equation} where \begin{equation} \label{F_1=} F_1=ZP+PZ^*. \end{equation} From \eqref{ZP=Z, PZ=0} and \eqref{F_1=} it follows that \begin{equation} \label{F1P=ZP} F_1P=ZP. \end{equation}
In \cite[Chapter 1, Theorem 5.2]{BSu}, it was proven that \begin{align} \label{Th res BSu-108} &\Vert (A(t)+\zeta I)^{-1}F(t)-(t^2SP+\zeta I)^{-1}P\Vert \leqslant C_3\vert t\vert (c_* t^2 +\zeta )^{-1},\quad \zeta >0,\quad\vert t\vert \leqslant t_0; \\ \label{C_3 abstract} &C_3:=\beta _3\delta ^{-1/2}\Vert X_1\Vert(1 +c_*^{-1} \Vert X_1\Vert ^2). \end{align}
According to \cite[Theorem 2.4]{BSu08}, we have \begin{align} \label{Th A(t)+1/2} &\Vert A(t)^{1/2}F(t)-(t^2 S)^{1/2}P\Vert \leqslant C_4t^2,\quad \vert t\vert \leqslant t_0; \\ \label{C_4 abstract} &C_4:=\beta _4 \delta ^{-1/2}\Vert X_1\Vert ^2(1+c_*^{-1/2}\Vert X_1\Vert ) . \end{align} Combining this with \eqref{vert S vert =}, we see that \begin{equation} \label{A(t)^1/2F<=} \Vert A(t)^{1/2}F(t)\Vert \leqslant \vert t\vert \Vert S\Vert ^{1/2}+C_4 t^2\leqslant (\Vert X_1\Vert +C_4 t_0)\vert t\vert ,\quad \vert t\vert \leqslant t_0. \end{equation}
We also need the following estimate for the operator $A(t)^{1/2}F_2(t)$ obtained in \cite[(2.23)]{BSu06}: \begin{equation} \label{A^1/2F_2} \Vert A(t)^{1/2}F_2(t)\Vert _{\mathfrak{H}\rightarrow \mathfrak{H}}\leqslant C_5t^2,\quad\vert t\vert \leqslant t_0;\quad C_5:=\beta _5\delta ^{-1/2}\Vert X_1\Vert ^2. \end{equation}
\subsection{Approximation of the operator $A(t)^{-1/2}F(t)$ for $t\neq 0$}
\begin{lemma} For $\vert t\vert \leqslant t_0$ and $t\neq 0$ we have \begin{equation} \label{Lm A-1/2} \Vert A(t)^{-1/2}F(t)-(t^2 S)^{-1/2} P\Vert \leqslant C_6. \end{equation} The constant $C_6$ is defined below in \eqref{C_6 abstract} and depends only on $\delta$, $\Vert X_1\Vert $, and $c_*$. \end{lemma}
\begin{proof} We have \begin{equation} \label{A-1/2 tozd} A(t)^{-1/2}F(t)=\frac{1}{\pi}\int _0^\infty \zeta ^{-1/2}(A(t)+\zeta I)^{-1}F(t)\,d\zeta ,\quad t\neq 0. \end{equation} (See, e.~g., \cite[Chapter III, Section 3, Subsection 4]{ViKr}). Similarly, \begin{equation} \label{germ -1/2 tozd} (t^2 S)^{-1/2}P=\frac{1}{\pi}\int _0^\infty \zeta ^{-1/2}(t^2 S+\zeta I_\mathfrak{N} )^{-1}P\,d\zeta =\frac{1}{\pi}\int _0^\infty \zeta ^{-1/2}(t^2 SP+\zeta I )^{-1}P\,d\zeta . \end{equation} Subtracting \eqref{germ -1/2 tozd} from \eqref{A-1/2 tozd}, using \eqref{Th res BSu-108}, and changing the variable $\widetilde{\zeta}:=(c_* t^2)^{-1}\zeta$, we obtain \begin{equation*} \begin{split} \Vert A(t)^{-1/2}F(t)-(t^2 S)^{-1/2}P\Vert &\leqslant \frac{C_3}{\pi}\int _0^\infty \zeta ^{-1/2}\vert t\vert (c_* t^2 +\zeta )^{-1}\,d\zeta =\frac{C_3}{\pi} c_*^{-1/2}\int _0^\infty \widetilde{\zeta}^{-1/2}(1+\widetilde{\zeta})^{-1}\,d\widetilde{\zeta} \\ &\leqslant \frac{C_3}{\pi} c_*^{-1/2}\left(\int _0 ^1 \widetilde{\zeta}^{-1/2}\,d\widetilde{\zeta} +\int _1^\infty \widetilde{\zeta}^{-3/2}\,d\widetilde{\zeta}\right) = 4 \pi ^{-1}c_*^{-1/2}C_3. \end{split} \end{equation*} We arrive at estimate \eqref{Lm A-1/2} with the constant \begin{equation} \label{C_6 abstract} C_6:=4 \pi ^{-1}c_*^{-1/2}C_3. \end{equation} \end{proof}
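The integral representation \eqref{A-1/2 tozd} and the crude bound by $4$ used in the proof can be checked numerically for a positive scalar $a$ (playing the role of an eigenvalue of $A(t)$). This is a sanity check using \texttt{scipy}, not part of the argument:

```python
import math
from scipy.integrate import quad

# Check, for a positive scalar a, the representation
#   a^{-1/2} = (1/pi) * int_0^infty zeta^{-1/2} (a + zeta)^{-1} d zeta.
a = 2.5
val, _ = quad(lambda z: z**-0.5 / (a + z), 0.0, math.inf)
print(val / math.pi, a**-0.5)  # the two numbers agree

# After the substitution zeta -> c_* t^2 zeta, the proof bounds
#   int_0^infty zeta^{-1/2} (1 + zeta)^{-1} d zeta
# by 4 (splitting at zeta = 1); the exact value is pi.
val2, _ = quad(lambda z: z**-0.5 / (1.0 + z), 0.0, math.inf)
print(val2)                    # approximately pi, hence <= 4
```

The second integral shows that the constant $4\pi^{-1}$ in \eqref{C_6 abstract} could be replaced by $1$ at the cost of a slightly longer computation; the paper prefers the simpler split at $\widetilde{\zeta}=1$.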
\section{Approximation of the operator $A(t)^{-1/2}\sin (\tau A(t)^{1/2})$} \label{Subsection abstract approximations for sin}
\subsection{The principal term of approximation}
\begin{proposition} \label{Proposition abstract sin tau principal} For $\vert t\vert \leqslant t_0$ and $\tau\in\mathbb{R}$ we have \begin{equation} \label{A-1/2sin F} \begin{split} &\left\Vert \left( A(t)^{-1/2}\sin (\tau A(t)^{1/2})-(t^2 S)^{-1/2}\sin (\tau (t^2S)^{1/2}P)\right)P\right \Vert \leqslant C_7(1+\vert\tau\vert\vert t\vert ) . \end{split} \end{equation} The constant $C_7$ depends only on $\delta$, $\Vert X_1\Vert $, and $c_*$. \end{proposition}
\begin{proof} For $t=0$ the operator under the norm sign in \eqref{A-1/2sin F} is understood as the limit as $t\rightarrow 0$. Using the Taylor expansion of the sine function, we see that this limit is equal to zero.
Now, let $t\neq 0$. We put \begin{align} \label{E(tau)} &E(\tau):=e^{-i\tau A(t)^{1/2}}A(t)^{-1/2}F(t)-e^{-i\tau (t^2 S)^{1/2}P}(t^2 S)^{-1/2}P; \\ \label{Sigma(tau)} &\Sigma (\tau):=e^{i\tau (t^2 S)^{1/2}P}E(\tau)=e^{i\tau (t^2 S)^{1/2}P}e^{-i\tau A(t)^{1/2}}A(t)^{-1/2}F(t)-(t^2S)^{-1/2}P. \end{align} Then \begin{equation} \label{2.4a} \Sigma (0)=A(t)^{-1/2}F(t)-(t^2 S)^{-1/2}P \end{equation} and \begin{equation} \label{d Sigma /d tau} \frac{d\Sigma (\tau)}{d\tau}=ie^{i\tau (t^2 S)^{1/2}P}\left( (t^2S)^{1/2}P-A(t)^{1/2}F(t)\right)e^{-i\tau A(t)^{1/2}}A(t)^{-1/2}F(t). \end{equation} By \eqref{A(t)>=} and \eqref{Th A(t)+1/2}, the operator-valued function \eqref{d Sigma /d tau} satisfies the following estimate: \begin{equation} \label{2.5a very new} \left\Vert \frac{d\Sigma (\tau)}{d\tau}\right\Vert \leqslant C_4 t^2\Vert A(t)^{-1/2}\Vert \leqslant C_4 c_*^{-1/2}\vert t\vert,\quad \vert t\vert \leqslant t_0,\quad t\neq 0. \end{equation} Then, taking \eqref{Lm A-1/2}, \eqref{Sigma(tau)}, \eqref{2.4a}, and \eqref{2.5a very new} into account, we see that \begin{align} \label{E(tau) estimate} &\Vert E(\tau)\Vert =\Vert \Sigma(\tau)\Vert \leqslant C_4 c_*^{-1/2}\vert t\vert \vert\tau\vert +\Vert \Sigma(0)\Vert \leqslant C_8(1+\vert\tau\vert\vert t\vert),\quad \vert t\vert \leqslant t_0,\quad t\neq 0; \\ \label{C_7 abstract} &C_8:=\max\lbrace C_4 c_*^{-1/2};C_6\rbrace . \end{align} (Cf. the proof of Theorem 2.5 from \cite{BSu08}.) So, \begin{equation} \label{A-1/2sin F 8} \Vert A(t)^{-1/2}\sin (\tau A(t)^{1/2})F(t)-(t^2 S)^{-1/2}\sin (\tau (t^2S)^{1/2}P)P\Vert \leqslant C_8(1+\vert\tau\vert \vert t\vert ). 
\end{equation} By virtue of \eqref{A(t)>=} and \eqref{F-P}, from \eqref{A-1/2sin F 8} we derive the inequality \begin{equation} \label{A-1/2sin F 2} \begin{split} \Bigl\Vert &\left( A(t)^{-1/2}\sin (\tau A(t)^{1/2})-(t^2 S)^{-1/2}\sin (\tau (t^2S)^{1/2}P)\right)P\Bigr \Vert \\ & \leqslant C_8(1+\vert\tau\vert\vert t\vert )+\Vert A(t)^{-1/2}\sin (\tau A(t)^{1/2})(F(t)-P)\Vert \\ &\leqslant C_7(1+\vert\tau\vert\vert t\vert ),\quad \vert t\vert \leqslant t_0;\quad C_7:=C_8+c_*^{-1/2}C_1 . \end{split} \end{equation} \end{proof}
\subsection{Approximation in the ``energy'' norm}
Now, we obtain another approximation for the operator $A(t)^{-1/2}\sin(\tau A(t)^{1/2})$ (in the ``energy'' norm).
\begin{proposition} \label{Proposition corr sin tau abstr no hat} For $\tau\in\mathbb{R}$ and $\vert t\vert \leqslant t_0$, we have \begin{equation} \label{Th corr abstr} \left\Vert A(t)^{1/2}\left( A(t)^{-1/2}\sin (\tau A(t)^{1/2})-(I+tZ)(t^2 S)^{-1/2}\sin (\tau (t^2S)^{1/2}P)\right)P\right\Vert \leqslant C_9 (\vert t\vert + \vert \tau\vert t^2 ). \end{equation} The constant $C_9$ depends only on $\delta$, $\Vert X_1\Vert $, and $c_*$. \end{proposition}
\begin{proof} Note that \begin{equation} \label{corr abstr start} \begin{split} A(t)^{1/2}e^{-i\tau A(t)^{1/2}}A(t)^{-1/2}P =A(t)^{1/2}e^{-i\tau A(t)^{1/2}}A(t)^{-1/2}F(t)P +e^{-i\tau A(t)^{1/2}}(P-F(t))P. \end{split} \end{equation} By \eqref{F-P}, \begin{equation} \Vert e^{-i\tau A(t)^{1/2}}(P-F(t))P\Vert \leqslant C_1\vert t\vert ,\quad\tau\in\mathbb{R}, \quad \vert t\vert \leqslant t_0. \end{equation} Next, \begin{equation} \begin{split} A(t)^{1/2}e^{-i\tau A(t)^{1/2}}A(t)^{-1/2}F(t)P=A(t)^{1/2}F(t)e^{-i\tau (t^2S)^{1/2}P}(t^2 S)^{-1/2}P +A(t)^{1/2}F(t) E(\tau) P, \end{split} \end{equation} where $E(\tau)$ is given by \eqref{E(tau)}. By \eqref{A(t)^1/2F<=} and \eqref{E(tau) estimate}, for $t\neq 0$ we have \begin{equation} \label{corr using principal term t neq 0} \Bigl\Vert A(t)^{1/2}F(t) E(\tau) P\Bigr\Vert \leqslant C_8(\Vert X_1\Vert +C_4t_0)(\vert t\vert +\vert \tau\vert t^2), \quad \tau\in\mathbb{R}, \quad \vert t\vert \leqslant t_0,\quad
t\neq 0. \end{equation} For $t=0$ the operator under the norm sign in \eqref{corr using principal term t neq 0} is understood as the limit as $t\rightarrow 0$. We have $e^{-i\tau A(t)^{1/2}}F(t)\rightarrow P$ as $t\rightarrow 0$. Next, by \eqref{S>=} and \eqref{Th A(t)+1/2}, \begin{equation*} \begin{split} \Vert &A(t)^{1/2}F(t)e^{-i\tau (t^2 S)^{1/2}P}(t^2 S)^{-1/2}P -e^{-i\tau (t^2 S)^{1/2}P}P\Vert \\ &=\Vert A(t)^{1/2}F(t)(t^2 S)^{-1/2}P -P\Vert \leqslant c_*^{-1/2}C_4\vert t\vert ,\quad\tau\in\mathbb{R},\quad
\vert t\vert \leqslant t_0. \end{split} \end{equation*} Using these arguments, we see that the limit of the left-hand side of \eqref{corr using principal term t neq 0} as $t\rightarrow 0$ is equal to zero.
According to \eqref{F(t)=P+tF_1+F_2(t)} and \eqref{F1P=ZP}, \begin{equation} \begin{split} A&(t)^{1/2}F(t)e^{-i\tau (t^2 S)^{1/2}P}(t^2 S)^{-1/2}P -A(t)^{1/2}(I+tZ)e^{-i\tau (t^2 S)^{1/2}P}(t^2 S)^{-1/2}P \\ &=A(t)^{1/2}F_2(t)e^{-i\tau (t^2S)^{1/2}P}(t^2 S)^{-1/2}P. \end{split} \end{equation} By \eqref{S>=} and \eqref{A^1/2F_2}, \begin{equation} \label{est corr finish} \Vert A(t)^{1/2}F_2(t)e^{-i\tau (t^2S)^{1/2}P}(t^2 S)^{-1/2}P\Vert \leqslant c_*^{-1/2}C_5\vert t\vert,\quad \tau\in\mathbb{R},\quad \vert t\vert\leqslant t_0. \end{equation} Combining \eqref{corr abstr start}--\eqref{est corr finish}, we arrive at \begin{equation} \label{Th corr exp} \begin{split} \left\Vert A(t)^{1/2}\left(e^{-i\tau A(t)^{1/2}}A(t)^{-1/2}-(I+tZ)e^{-i\tau (t^2 S)^{1/2}P}(t^2S)^{-1/2}P\right)P\right\Vert \leqslant C_9(\vert t\vert +\vert \tau\vert t^2),\\ \tau\in\mathbb{R},\quad\vert t\vert \leqslant t_0;\quad C_9:=C_1+c_*^{-1/2}C_5+C_8(\Vert X_1\Vert +C_4t_0). \end{split} \end{equation}
(Cf. the proof of Theorem 3.1 from \cite{Su_MMNP}.) \end{proof}
\subsection{Approximation of the operator $A(t)^{-1/2}\sin (\varepsilon ^{-1}\tau A(t)^{1/2})P$} Now, we introduce a parameter $\varepsilon>0$. We need to study the behavior of the operator $A(t)^{-1/2}\sin (\varepsilon ^{-1 }\tau A(t)^{1/2})P$ for small $\varepsilon$. Replace $\tau$ by $\varepsilon ^{-1}\tau$ in \eqref{A-1/2sin F}: \begin{equation*} \begin{split} \left\Vert \left( A(t)^{-1/2}\sin (\varepsilon ^{-1}\tau A(t)^{1/2})-(t^2 S)^{-1/2}\sin (\varepsilon ^{-1}\tau (t^2S)^{1/2}P)\right)P\right\Vert \leqslant C_7(1+\varepsilon ^{-1}\vert\tau\vert\vert t\vert ),\\
\vert t\vert \leqslant t_0,\quad \varepsilon >0,\quad \tau\in\mathbb{R}. \end{split} \end{equation*} Multiplying this inequality by the ``smoothing'' factor $\varepsilon (t^2+\varepsilon ^2)^{-1/2}$ and taking into account the inequalities $ \varepsilon (t^2+\varepsilon ^2)^{-1/2}\leqslant 1 $ and $ \vert\tau\vert \vert t\vert (t^2+\varepsilon ^2)^{-1/2}\leqslant \vert\tau\vert $, we obtain the following result.
\begin{theorem} \label{Theorem sin A(t) smoothed} For $\tau\in\mathbb{R}$, $\varepsilon >0$, and $\vert t\vert\leqslant t_0$ we have \begin{equation*} \left\Vert \left( A(t)^{-1/2}\sin (\varepsilon ^{-1}\tau A(t)^{1/2})-(t^2 S)^{-1/2}\sin (\varepsilon ^{-1}\tau (t^2S)^{1/2}P)\right) \varepsilon (t^2+\varepsilon ^2)^{-1/2}P\right\Vert \leqslant C_7(1+\vert\tau\vert ). \end{equation*} \end{theorem} Replacing $\tau$ by $\varepsilon ^{-1}\tau$ in \eqref{Th corr abstr} and multiplying the operator by $\varepsilon ^2 (t^2+\varepsilon ^2)^{-1}$, we arrive at the following statement. \begin{theorem} \label{Theorem sin A(t) smoothed corrector} For $\tau\in\mathbb{R}$, $\varepsilon >0$, and $\vert t\vert\leqslant t_0$ we have \begin{equation*} \begin{split} \Bigl\Vert & A(t)^{1/2}\left( A(t)^{-1/2}\sin (\varepsilon ^{-1}\tau A(t)^{1/2})-(I+tZ)(t^2 S)^{-1/2}\sin (\varepsilon ^{-1}\tau (t^2S)^{1/2}P)\right)\varepsilon ^2(t^2+\varepsilon ^2)^{-1} P\Bigr\Vert\\ &\leqslant C_9\varepsilon (1+\vert\tau\vert ). \end{split} \end{equation*} \end{theorem}
\section{Approximation of the sandwiched operator sine}
\label{Section sandwiched abstract}
\subsection{The operator family $A(t)=M^*\widehat{A}(t)M$} \label{Subsec hat famimy abstract} Now, we consider an operator family of the form $ A(t) = M^* \widehat{A}(t)M$ (see \cite[Chapter 1, Subsections 1.5 and 5.3]{BSu}).
Let $\widehat{\mathfrak{H}}$ be yet another separable Hilbert space. Let $\widehat{X}(t)=\widehat{X}_0+t\widehat{X}_1 : \widehat{\mathfrak{H}}\rightarrow \mathfrak{H}_*$ be a family of operators of the same form as $X(t)$, and suppose that $\widehat{X }(t)$ satisfies the assumptions of Subsection~\ref{Subsubsection operator pencils}.
Let $M : \mathfrak{H}\rightarrow \widehat{\mathfrak{H}} $ be an isomorphism. Suppose that $ M\mathrm{Dom}\,X_0 = \mathrm{Dom}\,\widehat{X}_0$; $X_0 = \widehat{X}_0M$; $ X_1 = \widehat{X}_1M$. Then $X(t) = \widehat{X}(t)M$. Consider the family of operators \begin{equation} \label{hat A(t)} \widehat{A}(t) = \widehat{X}(t)^*\widehat{X}(t) : \widehat{\mathfrak{H}}\rightarrow\widehat{\mathfrak{H}}. \end{equation} Obviously, \begin{equation} \label{A(t) and hat A(t)} A(t) = M^*\widehat{A} (t)M. \end{equation} In what follows, all the objects corresponding to the family \eqref{hat A(t)} are supplied with the upper mark ``$\widehat{\phantom{a}}\,$''. Note that $\widehat{\mathfrak{N}} = M\mathfrak{N}$, $\widehat{n} = n$, $\widehat{\mathfrak{N}}_* = \mathfrak{N}_*$, $\widehat{n}_* = n_*$, and $\widehat{P}_* = P_*$.
We denote \begin{equation} \label{Q= abstract} Q:=(MM^*)^{-1}=(M^*)^{-1}M^{-1}:\widehat{\mathfrak{H}}\rightarrow\widehat{\mathfrak{H}}. \end{equation} Let $Q_{\widehat{\mathfrak{N}}}$ be the block of $Q$ in the subspace $\widehat{\mathfrak{N}}$: $ Q_{\widehat{\mathfrak{N}}}=\widehat{P}Q\vert _{\widehat{\mathfrak{N}}}:\widehat{\mathfrak{N}}\rightarrow\widehat{\mathfrak{N}}$. Obviously, $Q_{\widehat{\mathfrak{N}}}$ is an isomorphism in $\widehat{\mathfrak{N}}$. Let $M_0:=\left(Q_{\widehat{\mathfrak{N}}}\right)^{-1/2}: \widehat{\mathfrak{N}}\rightarrow \widehat{\mathfrak{N}}$. As was shown in \cite[Proposition 1.2]{Su07}, the orthogonal projection $P$ of the space $\mathfrak{H}$ onto $\mathfrak{N}$ and the orthogonal projection $\widehat{P}$ of the space $\widehat{\mathfrak{H}}$ onto $\widehat{\mathfrak{N}}$ satisfy the following relation: $ P=M^{-1}(Q_{\widehat{\mathfrak{N}}})^{-1}\widehat{P}(M^*)^{-1}$. Hence, \begin{equation} \label{PM*=} PM^*=M^{-1}(Q_{\widehat{\mathfrak{N}}})^{-1}\widehat{P}=M^{-1}M_0^2 \widehat{P}. \end{equation} According to \cite[Chapter 1, Subsec. 1.5]{BSu}, the spectral germs $S$ and $\widehat{S}$ satisfy \begin{equation*} S=PM^*\widehat{S}M\vert _\mathfrak{N}. \end{equation*}
For the operator family \eqref{hat A(t)} we introduce the operator $\widehat{Z}_Q$ acting in $\widehat{\mathfrak{H}}$ and taking an element $\widehat{u}\in\widehat{\mathfrak{H}}$ to the solution $\widehat{\varphi}_Q$ of the problem \begin{equation} \label{hat ZQ equation} \widehat{X}_0^*(\widehat{X}_0\widehat{\varphi}_Q+\widehat{X}_1\widehat{\omega})=0,\quad Q\widehat{\varphi}_Q \perp \widehat{\mathfrak{N}}, \end{equation} where $\widehat{\omega}:=\widehat{P}\widehat{u}$. Equation \eqref{hat ZQ equation} is understood in the weak sense. As was shown in \cite[Lemma~6.1]{BSu05-1}, the operator $Z$ for $A(t)$ and the operator $\widehat{Z}_Q$ satisfy \begin{equation} \label{hat Z_Q=} \widehat{Z}_Q=MZM^{-1}\widehat{P}. \end{equation}
\subsection{The principal term of approximation for the sandwiched operator $A(t)^{-1/2}\sin(\tau A(t)^{1/2})$} In this subsection, we find an approximation for the operator $A(t)^{-1/2}\sin(\tau A(t)^{1/2})$, where $A(t)$ is given by \eqref{A(t) and hat A(t)}, in terms of the germ $\widehat{S}$ of $\widehat{A}(t)$ and the isomorphism $M$. It is convenient to border the operator $A(t)^{-1/2}\sin(\tau A(t)^{1/2})$ by appropriate factors.
\begin{proposition} Suppose that the assumptions of Subsec.~\textnormal{\ref{Subsec hat famimy abstract}} are satisfied. Then for $\tau\in\mathbb{R}$ and $\vert t\vert \leqslant t_0$ we have \begin{align} \label{prop hat sin principal abstr scheme} \begin{split} \Vert & MA(t)^{-1/2}\sin (\tau A(t)^{1/2})M^{-1}\widehat{P} -M_0(t^2M_0\widehat{S}M_0)^{-1/2}\sin (\tau(t^2M_0\widehat{S}M_0)^{1/2})M_0^{-1}\widehat{P}\Vert _{\widehat{\mathfrak{H}}\rightarrow\widehat{\mathfrak{H}}} \\ &\leqslant C_7\Vert M\Vert \Vert M^{-1}\Vert (1+\vert\tau\vert \vert t\vert ). \end{split} \end{align} Here $t_0$ is defined according to \eqref{t_0(delta) abstact scheme}, and $C_7$ is the constant from \eqref{A-1/2sin F 2} depending only on $\delta$, $\Vert X_1\Vert $, and $c_*$. \end{proposition}
\begin{proof} Estimate \eqref{prop hat sin principal abstr scheme} follows from Proposition~\ref{Proposition abstract sin tau principal} by recalculation. In \cite[Proposition 3.3]{BSu08}, it was shown that \begin{equation} \label{cos and hat cos identity} M\cos (\tau (t^2S)^{1/2}P)PM^* =M_0\cos (\tau (t^2 M_0\widehat{S}M_0)^{1/2})M_0\widehat{P}. \end{equation} Obviously, \begin{equation} \label{sin not hat = int} (t^2S)^{-1/2}\sin (\tau (t^2S)^{1/2}P)P= \int _0^\tau \cos (\widetilde{\tau}(t^2S)^{1/2}P)P\,d\widetilde{\tau}. \end{equation} Similarly, \begin{equation} \label{sin hat = int} (t^2 M_0 \widehat{S}M_0)^{-1/2}\sin (\tau (t^2 M_0\widehat{S}M_0)^{1/2})M_0\widehat{P} =\int _0^\tau \cos(\widetilde{\tau}(t^2 M_0\widehat{S}M_0)^{1/2})M_0\widehat{P}\,d\widetilde{\tau}. \end{equation} Integrating \eqref{cos and hat cos identity} over $\tau$ and taking \eqref{sin not hat = int}, \eqref{sin hat = int} into account, we conclude that \begin{equation} \label{sine and hat sine} M (t^2S)^{-1/2}\sin (\tau (t^2S)^{1/2}P)P M^*=M_0(t^2 M_0 \widehat{S}M_0)^{-1/2}\sin (\tau (t^2 M_0\widehat{S}M_0)^{1/2})M_0\widehat{P}. \end{equation} Next, since $M_0=(Q_{\widehat{\mathfrak{N}}})^{-1/2}$, using \eqref{PM*=}, we obtain $PM^*M_0^{-2}\widehat{P}=M^{-1}\widehat{P}$. So, by \eqref{sine and hat sine}, \begin{equation} \label{3.13a} M(t^2S)^{-1/2}\sin (\tau (t^2S)^{1/2}P)M^{-1}\widehat{P} =M_0(t^2M_0\widehat{S}M_0)^{-1/2}\sin (\tau (t^2M_0\widehat{S}M_0)^{1/2})M_0^{-1}\widehat{P}. \end{equation} Thus, \begin{equation} \label{end of proof prop 3.1} \begin{split} & MA(t)^{-1/2}\sin (\tau A(t)^{1/2})M^{-1}\widehat{P} -M_0(t^2M_0\widehat{S}M_0)^{-1/2}\sin (\tau(t^2M_0\widehat{S}M_0)^{1/2})M_0^{-1}\widehat{P}\\ &=M\Bigl( A(t)^{-1/2}\sin (\tau A(t)^{1/2})P -(t^2S)^{-1/2}\sin (\tau (t^2S)^{1/2}P)P \Bigr)M^{-1}\widehat{P}. 
\end{split} \end{equation} Using Proposition~\ref{Proposition abstract sin tau principal} and \eqref{end of proof prop 3.1}, we arrive at inequality~\eqref{prop hat sin principal abstr scheme}. \end{proof}
\subsection{Approximation with the corrector}
\begin{proposition} Under the assumptions of Subsec.~\textnormal{\ref{Subsec hat famimy abstract}}, for $\tau\in\mathbb{R}$ and $\vert t\vert \leqslant t_0$ we have \begin{equation} \label{prop sin tu corr abstract} \begin{split} \Bigl\Vert & \widehat{A}(t)^{1/2}\Bigl( MA(t)^{-1/2}\sin(\tau A(t)^{1/2})M^{-1}\widehat{P} \\ &-(I+t\widehat{Z}_Q)M_0(t^2 M_0\widehat{S}M_0)^{-1/2}\sin (\tau (t^2 M_0\widehat{S}M_0)^{1/2})M_0^{-1}\widehat{P}\Bigr)\Bigr\Vert _{\widehat{\mathfrak{H}}\rightarrow\widehat{\mathfrak{H}}} \\ &\leqslant C_9 \Vert M^{-1}\Vert (\vert t\vert +\vert \tau\vert t^2). \end{split} \end{equation} The constant $C_9$ is defined in \eqref{Th corr exp} and depends only on $\delta$, $\Vert X_1\Vert$, and $c_*$. \end{proposition}
\begin{proof} Estimate \eqref{prop sin tu corr abstract} follows from Proposition~\ref{Proposition corr sin tau abstr no hat} by recalculation. According to \eqref{hat Z_Q=} and \eqref{3.13a}, \begin{align} \label{3.14a} \begin{split} &t \widehat{Z}_Q M_0 (t^2 M_0\widehat{S}M_0)^{-1/2}\sin \bigl(\tau (t^2 M_0\widehat{S}M_0)^{1/2}\bigr)M_0^{-1}\widehat{P}\\ &= tMZM^{-1}M_0 (t^2 M_0\widehat{S}M_0)^{-1/2}\sin \bigl(\tau (t^2 M_0\widehat{S}M_0)^{1/2}\bigr)M_0^{-1}\widehat{P} \\ &= tMZ(t^2S)^{-1/2}\sin (\tau (t^2S)^{1/2})PM^{-1}\widehat{P}. \end{split} \end{align} Combining \eqref{3.14a} with \eqref{A(t) and hat A(t)} and \eqref{end of proof prop 3.1}, we obtain \begin{equation*} \begin{split} &\Bigl\Vert \widehat{A}(t)^{1/2}\Bigl( MA(t)^{-1/2}\sin (\tau A(t)^{1/2})M^{-1}\widehat{P}\\ &-(I+t\widehat{Z}_Q)M_0(t^2 M_0\widehat{S}M_0)^{-1/2}\sin \bigl(\tau (t^2 M_0\widehat{S}M_0)^{1/2}\bigr)M_0^{-1}\widehat{P} \Bigr)\Bigr\Vert _{\widehat{\mathfrak{H}}\rightarrow\widehat{\mathfrak{H}}} \\ &=\Bigl\Vert A(t)^{1/2}\Bigl( A(t)^{-1/2}\sin (\tau A(t)^{1/2})P -(I+tZ)(t^2S)^{-1/2}\sin (\tau (t^2S)^{1/2})P\Bigr)M^{-1}\widehat{P} \Bigr\Vert _{\widehat{\mathfrak{H}}\rightarrow\mathfrak{H}}. \end{split} \end{equation*} Together with Proposition~\ref{Proposition corr sin tau abstr no hat}, this implies~\eqref{prop sin tu corr abstract}. \end{proof}
\subsection{Approximation of the sandwiched operator $A(t)^{-1/2}\sin (\varepsilon ^{-1}\tau A(t)^{1/2})$}
Writing \eqref{prop hat sin principal abstr scheme} and \eqref{prop sin tu corr abstract} with $\tau$ replaced by $\varepsilon ^{-1}\tau$ and multiplying the corresponding inequalities by the ``smoothing'' factors, we arrive at the following result.
\begin{theorem} \label{Theorem sin sandwiched abstract} Under the assumptions of Subsec.~\textnormal{\ref{Subsec hat famimy abstract}}, for $\tau\in\mathbb{R}$, $\varepsilon >0$, and $\vert t\vert \leqslant t_0$ we have \begin{align} \label{Th 3.3_1} \begin{split} \Bigl\Vert &\Bigl( MA(t)^{-1/2}\sin (\varepsilon ^{-1}\tau A(t)^{1/2})M^{-1}\widehat{P} \\ &-M_0(t^2M_0\widehat{S}M_0)^{-1/2}\sin (\varepsilon ^{-1}\tau(t^2M_0\widehat{S}M_0)^{1/2})M_0^{-1}\widehat{P}\Bigr) \varepsilon (t^2+\varepsilon ^2)^{-1/2}\widehat{P}\Bigr\Vert _{\widehat{\mathfrak{H}}\rightarrow\widehat{\mathfrak{H}}} \\ &\leqslant C_7\Vert M\Vert \Vert M^{-1}\Vert (1+\vert\tau\vert ), \end{split} \\ \label{Th 3.3_2} \begin{split} \Bigl\Vert & \widehat{A}(t)^{1/2}\Bigl( MA(t)^{-1/2}\sin(\varepsilon ^{-1}\tau A(t)^{1/2})M^{-1}\widehat{P} \\ &-(I+t\widehat{Z}_Q)M_0(t^2 M_0\widehat{S}M_0)^{-1/2}\sin (\varepsilon ^{-1}\tau (t^2 M_0\widehat{S}M_0)^{1/2})M_0^{-1}\widehat{P}\Bigr) \varepsilon ^2(t^2+\varepsilon ^2)^{-1}\widehat{P}\Bigr\Vert _{\widehat{\mathfrak{H}}\rightarrow\widehat{\mathfrak{H}}} \\ &\leqslant C_9 \Vert M^{-1}\Vert \varepsilon(1 +\vert \tau\vert ). \end{split} \end{align} The number $t_0$ is subject to \eqref{t_0(delta) abstact scheme}, the constants $C_7$ and $C_9$ are defined by \eqref{A-1/2sin F 2} and \eqref{Th corr exp}. \end{theorem}
\section*{Chapter II. Periodic differential operators in $L_2(\mathbb{R}^d;\mathbb{C}^n)$}
\label{Section Chapter 2}
In the present chapter, we describe the class of matrix second order differential operators admitting a factorization of the form
$\mathcal{A}=\mathcal{X}^*\mathcal{X}$, where $\mathcal{X}$ is a homogeneous first order DO. This class was introduced and studied in \cite[Chapter~2]{BSu}.
\section{Factorized second order operators} \label{Section Factorized families}
\subsection{Lattices $\Gamma$ and $\widetilde{\Gamma}$} Let $\Gamma$ be a lattice in $\mathbb{R}^d$ generated by the basis $\mathbf{a}_1,\dots, \mathbf{a}_d$: \begin{equation*} \Gamma:=\left\lbrace\mathbf{a}\in\mathbb{R}^d : \mathbf{a}=\sum _{j=1}^d \nu _j\mathbf{a}_j,\quad \nu _j\in\mathbb{Z}\right\rbrace , \end{equation*} and let $\Omega$ be the elementary cell of the lattice $\Gamma$: $$ \Omega :=\left\lbrace \mathbf{x}\in\mathbb{R}^d :\mathbf{x}=\sum_{j=1}^d \zeta _j\mathbf{a}_j,\quad -\frac{1}{2}<\zeta _j <\frac{1}{2}\right\rbrace .$$
The basis $\mathbf{b}_1,\dots ,\mathbf{b}_d$ dual to $\mathbf{a}_1,\dots ,\mathbf{a}_d$ is defined by the relations $\langle \mathbf{b}_l,\mathbf{a}_j\rangle =2\pi\delta _{lj}$. This basis generates the lattice $\widetilde{\Gamma}$ dual to $\Gamma$: $ \widetilde{\Gamma}:=\left\lbrace \mathbf{b}\in\mathbb{R}^d : \mathbf{b}=\sum _{j=1}^d\mu_j\mathbf{b}_j,\quad \mu_j\in\mathbb{Z}\right\rbrace $. Let $\widetilde{\Omega}$ be the first Brillouin zone of the lattice $\widetilde{\Gamma}$: \begin{equation} \label{Omega-tilda} \widetilde{\Omega}:=\left\lbrace\mathbf{k}\in\mathbb{R}^d : \vert \mathbf{k}\vert <\vert \mathbf{k}-\mathbf{b}\vert,\quad 0\neq\mathbf{b}\in\widetilde{\Gamma}\right\rbrace . \end{equation} Let $\vert \Omega\vert$ be the Lebesgue measure of the cell $\Omega$: $\vert \Omega\vert =\mathrm{meas}\,\Omega$, and let $\vert \widetilde{\Omega}\vert =\mathrm{meas}\,\widetilde{\Omega}$. We put $2r_1:=\mathrm{diam}\,\Omega$. The maximal radius of a ball contained in $\mathrm{clos}\,\widetilde{\Omega}$ is denoted by $r_0$. Note that \begin{equation} \label{2r0=} 2r_0=\min _{0\neq\mathbf{b}\in\widetilde{\Gamma}} \vert \mathbf{b}\vert. \end{equation}
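For instance, for the cubic lattice $\Gamma =\mathbb{Z}^d$ (i.~e., $\mathbf{a}_j=\mathbf{e}_j$) we have $\mathbf{b}_j=2\pi\mathbf{e}_j$, whence $\widetilde{\Gamma}=(2\pi\mathbb{Z})^d$, $\Omega =(-1/2,1/2)^d$, and $\widetilde{\Omega}=(-\pi ,\pi )^d$. In this case \begin{equation*} \min _{0\neq\mathbf{b}\in\widetilde{\Gamma}}\vert\mathbf{b}\vert =2\pi , \end{equation*} so that $r_0=\pi$, in accordance with \eqref{2r0=}.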
With the lattice $\Gamma$, we associate the discrete Fourier transformation \begin{equation} \label{discr Fourier} v(\mathbf{x})=\vert \Omega\vert ^{-1/2}\sum _{\mathbf{b}\in \widetilde{\Gamma}}\widehat{v}_\mathbf{b} e^{i\langle \mathbf{b},\mathbf{x}\rangle },\quad\mathbf{x}\in\Omega , \end{equation} which is a unitary mapping of $l_2(\widetilde{\Gamma})$ onto $L_2(\Omega)$: \begin{equation} \label{Fourier unit} \int _\Omega \vert v(\mathbf{x})\vert ^2\,d\mathbf{x}=\sum _{\mathbf{b}\in\widetilde{\Gamma}}\vert \widehat{v}_\mathbf{b}\vert ^2 . \end{equation}
\textit{Below by $\widetilde{H}^1(\Omega;\mathbb{C}^n)$ we denote the subspace of functions from $H^1(\Omega;\mathbb{C}^n)$ whose $\Gamma$-periodic extension to $\mathbb{R}^d$ belongs to $H^1_{\mathrm{loc}}(\mathbb{R}^d;\mathbb{C}^n)$.} We have \begin{equation} \label{6.5 from DSu} \Vert (\mathbf{D}+\mathbf{k})\mathbf{u}\Vert ^2_{L_2(\Omega)} =\sum _{\mathbf{b}\in\widetilde{\Gamma}}\vert \mathbf{b}+\mathbf{k}\vert ^2\vert \widehat{\mathbf{u}}_\mathbf{b}\vert ^2,\quad \mathbf{u}\in \widetilde{H}^1(\Omega ;\mathbb{C}^n),\quad\mathbf{k}\in\mathbb{R}^d, \end{equation} and the convergence of the series on the right-hand side of \eqref{6.5 from DSu} is equivalent to the relation $\mathbf{u}\in\widetilde{H}^1(\Omega ;\mathbb{C}^n)$. From \eqref{Omega-tilda}, \eqref{Fourier unit}, and \eqref{6.5 from DSu} it follows that \begin{equation} \label{D+k >=}
\Vert (\mathbf{D}+\mathbf{k})\mathbf{u}\Vert ^2 _{L_2(\Omega)} \geqslant \sum _{\mathbf{b}\in\widetilde{\Gamma}}\vert \mathbf{k}\vert ^2\vert \widehat{\mathbf{u}}_\mathbf{b}\vert ^2 =\vert \mathbf{k}\vert ^2 \Vert \mathbf{u}\Vert ^2_{L_2(\Omega)}
,\quad \mathbf{u}\in\widetilde{H}^1(\Omega ;\mathbb{C}^n),\quad\mathbf{k}\in\widetilde{\Omega}. \end{equation}
If $\psi (\mathbf{x})$ is a $\Gamma$-periodic measurable matrix-valued function in $\mathbb{R}^d$, we put $\overline{\psi}:=\vert \Omega\vert ^{-1}\int _\Omega \psi (\mathbf{x})\,d\mathbf{x}$ and $\underline{\psi}:=\left(\vert \Omega\vert ^{-1}\int _\Omega \psi (\mathbf{x})^{-1}\,d\mathbf{x}\right)^{-1}$. Here, in the definition of $\overline{\psi}$ it is assumed that $\psi\in L_{1,\mathrm{loc}}(\mathbb{R}^d)$, and in the definition of $\underline{\psi}$ it is assumed that the matrix $\psi (\mathbf{x})$ is square and nondegenerate, and $\psi ^{-1}\in L_{1,\mathrm{loc}}(\mathbb{R}^d)$.
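Note that for a scalar $\psi >0$ one always has $\underline{\psi}\leqslant\overline{\psi}$, with equality only for constant $\psi$. For example, if $d=1$, $\Gamma =\mathbb{Z}$, and $\psi$ takes the value $1$ on one half of the cell and $4$ on the other half, then \begin{equation*} \overline{\psi}=\frac{1}{2}(1+4)=\frac{5}{2},\qquad \underline{\psi}=\left(\frac{1}{2}\left(1+\frac{1}{4}\right)\right)^{-1}=\frac{8}{5}. \end{equation*}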
\subsection{The Gelfand transformation} \label{Subsec Gelfand} Initially, the Gelfand transformation $\mathcal{U}$ is defined on the functions of the Schwartz class by the formula \begin{equation*} \begin{split} \widetilde{\mathbf{v}}(\mathbf{k},\mathbf{x})=(\mathcal{U}\mathbf{v})(\mathbf{k},\mathbf{x})=\vert \widetilde{\Omega}\vert ^{-1/2}\sum _{\mathbf{a}\in\Gamma}e^{-i\langle\mathbf{k},\mathbf{x}+\mathbf{a}\rangle }\mathbf{v}(\mathbf{x}+\mathbf{a}),\quad \mathbf{v}\in\mathcal{S}(\mathbb{R}^d;\mathbb{C}^n),\quad \mathbf{x}\in\Omega ,\quad \mathbf{k}\in\widetilde{\Omega}. \end{split} \end{equation*} Since $ \int _{\widetilde{\Omega}}\int _\Omega \vert \widetilde{\mathbf{v}}(\mathbf{k},\mathbf{x})\vert ^2\,d\mathbf{x}\,d\mathbf{k}=\int _{\mathbb{R}^d}\vert\mathbf{v}(\mathbf{x})\vert ^2\,d\mathbf{x}$, the transformation $\mathcal{U}$ extends by continuity up to a unitary mapping $ \mathcal{U}: L_2(\mathbb{R}^d;\mathbb{C}^n)\rightarrow \int _{\widetilde{\Omega}}\oplus L_2(\Omega ;\mathbb{C}^n)\,d\mathbf{k}$. Relation $\mathbf{v}\in H^1(\mathbb{R}^d;\mathbb{C}^n)$ is equivalent to $\widetilde{\mathbf{v}}(\mathbf{k},\cdot)\in \widetilde{H}^1(\Omega ;\mathbb{C}^n)$ for a.~e. $\mathbf{k}\in\widetilde{\Omega}$ and $ \int _{\widetilde{\Omega}}\int _\Omega \left(\vert (\mathbf{D}+\mathbf{k})\widetilde{\mathbf{v}}(\mathbf{k},\mathbf{x})\vert ^2 +\vert \widetilde{\mathbf{v}}(\mathbf{k},\mathbf{x})\vert ^2\right)\,d\mathbf{x}\,d\mathbf{k}<\infty $. Under the Gelfand transformation, the operator of multiplication by a bounded periodic function in $L_2(\mathbb{R}^d;\mathbb{C}^n)$ turns into multiplication by the same function on the fibers of the direct integral. The operator $\mathbf{D}$ applied to $\mathbf{v}\in H^1(\mathbb{R}^d;\mathbb{C}^n)$ turns into the operator $\mathbf{D}+\mathbf{k}$ applied to $\widetilde{\mathbf{v}}(\mathbf{k},\cdot )\in \widetilde{H}^1(\Omega ;\mathbb{C}^n)$.
\subsection{Factorized second order operators}
\label{Subsubsection factorized famalies}
Let $b(\mathbf{D})$ be a matrix first order DO of the form $\sum _{j=1}^d b_j D_j$, where $b_j$, $j=1,\dots ,d$, are constant matrices of size $m\times n$ (in general, with complex entries). \textit{We always assume that} $m\geqslant n$. Suppose that the symbol $b(\boldsymbol{\xi})=\sum _{j=1}^d b_j \xi _j$, $\boldsymbol{\xi}\in\mathbb{R}^d$, of the operator $b(\mathbf{D})$ has maximal rank: $\mathrm{rank}\,b(\boldsymbol{\xi})=n$ for $0\neq\boldsymbol{\xi}\in\mathbb{R}^d$. This condition is equivalent to the existence of constants $\alpha _0$, $\alpha _1>0$ such that \begin{equation} \label{<b^*b<} \alpha _0\mathbf{1}_n\leqslant b(\boldsymbol{\theta})^*b(\boldsymbol{\theta})\leqslant \alpha _1\mathbf{1}_n,\quad\boldsymbol{\theta}\in\mathbb{S}^{d-1},\quad 0<\alpha _0\leqslant \alpha _1 <\infty . \end{equation} From \eqref{<b^*b<} it follows that \begin{equation} \label{b_j <=} \vert b_j\vert \leqslant \alpha _1^{1/2},\quad j=1,\dots ,d. \end{equation}
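The simplest example is $n=1$, $m=d$, $b(\mathbf{D})=\mathbf{D}$, i.~e., $b_j$ is the $j$-th column of the identity $(d\times d)$-matrix. Then \begin{equation*} b(\boldsymbol{\theta})^*b(\boldsymbol{\theta})=\vert\boldsymbol{\theta}\vert ^2 =1,\quad \boldsymbol{\theta}\in\mathbb{S}^{d-1}, \end{equation*} so that condition \eqref{<b^*b<} is satisfied with $\alpha _0=\alpha _1=1$. With this choice of $b(\mathbf{D})$ and with $f=\mathbf{1}$, expression \eqref{A=} turns into the acoustics operator $\mathcal{A}=\mathbf{D}^*g(\mathbf{x})\mathbf{D}=-\mathrm{div}\,g(\mathbf{x})\nabla$.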
Let the $\Gamma$-periodic Hermitian $(m\times m)$-matrix-valued function $g(\mathbf{x})$ be positive definite and bounded together with its inverse: \begin{equation} \label{g in} g(\mathbf{x})>0;\quad g, g^{-1}\in L_\infty (\mathbb{R}^d). \end{equation} Suppose that $f(\mathbf{x})$ is a $\Gamma$-periodic $(n\times n)$-matrix-valued function such that $f,f^{-1}\in L_\infty (\mathbb{R}^d)$. In $L_2(\mathbb{R}^d;\mathbb{C}^n)$, consider the DO $\mathcal{A}$ formally given by the differential expression \begin{equation} \label{A=} \mathcal{A}=f(\mathbf{x})^*b(\mathbf{D})^*g(\mathbf{x})b(\mathbf{D})f(\mathbf{x}). \end{equation} The precise definition of the operator $\mathcal{A}$ is given in terms of the quadratic form \begin{equation*}\mathfrak{a}[\mathbf{u},\mathbf{u}]:=\left(gb(\mathbf{D})(f\mathbf{u}),b(\mathbf{D})(f\mathbf{u})\right)_{L_2(\mathbb{R}^d)}, \quad\mathbf{u}\in \mathrm{Dom}\,\mathfrak{a}:=\lbrace \mathbf{u}\in L_2(\mathbb{R}^d;\mathbb{C}^n) : f\mathbf{u}\in H^1(\mathbb{R}^d;\mathbb{C}^n)\rbrace. \end{equation*} Using the Fourier transformation and assumptions \eqref{<b^*b<}, \eqref{g in}, it is easily seen that \begin{equation} \label{<a<} \alpha _0\Vert g^{-1}\Vert _{L_\infty}^{-1}\Vert \mathbf{D}(f\mathbf{u})\Vert ^2_{L_2(\mathbb{R}^d)}\leqslant\mathfrak{a}[\mathbf{u},\mathbf{u}] \leqslant \alpha _1\Vert g\Vert _{L_\infty} \Vert \mathbf{D}(f\mathbf{u})\Vert ^2_{L_2(\mathbb{R}^d)},\quad \mathbf{u}\in \mathrm{Dom}\,\mathfrak{a}. \end{equation} Thus, the form $\mathfrak{a}[\cdot,\cdot]$ is closed and non-negative.
The operator $\mathcal{A}$ admits a factorization of the form $\mathcal{A}=\mathcal{X}^*\mathcal{X}$, where \begin{equation*}\mathcal{X}:=g(\mathbf{x})^{1/2}b(\mathbf{D})f(\mathbf{x}) : L_2(\mathbb{R}^d;\mathbb{C}^n)\rightarrow L_2(\mathbb{R}^d;\mathbb{C}^m),\quad\mathrm{Dom}\,\mathcal{X}=\mathrm{Dom}\,\mathfrak{a}. \end{equation*}
\section{Direct integral decomposition for the operator $\mathcal{A}$}
\subsection{The forms $\mathfrak{a}(\mathbf{k})$ and the operators $\mathcal{A}(\mathbf{k})$}
We put \begin{equation} \label{frak H for L2} \mathfrak{H}:=L_2(\Omega ;\mathbb{C}^n),\quad \mathfrak{H}_*:=L_2(\Omega;\mathbb{C}^m), \end{equation} and consider the closed operator $\mathcal{X}(\mathbf{k}): \mathfrak{H}\rightarrow \mathfrak{H}_*$, $\mathbf{k}\in\mathbb{R}^d$, defined on the domain \begin{equation*} \mathrm{Dom}\,\mathcal{X}(\mathbf{k})=\lbrace \mathbf{u}\in\mathfrak{H} : f\mathbf{u}\in \widetilde{H}^1(\Omega;\mathbb{C}^n)\rbrace =:\mathfrak{d} \end{equation*} by the expression $\mathcal{X}(\mathbf{k})=g(\mathbf{x})^{1/2}b(\mathbf{D}+\mathbf{k})f(\mathbf{x})$. The selfadjoint operator $\mathcal{A}(\mathbf{k}):=\mathcal{X}(\mathbf{k})^*\mathcal{X}(\mathbf{k})$ in $L_2(\Omega;\mathbb{C}^n)$ is formally given by the differential expression \begin{equation} \label{A(k)=} \mathcal{A}(\mathbf{k})=f(\mathbf{x})^*b(\mathbf{D}+\mathbf{k})^*g(\mathbf{x})b(\mathbf{D}+\mathbf{k})f(\mathbf{x}) \end{equation} with the periodic boundary conditions. The precise definition of the operator $\mathcal{A}(\mathbf{k})$ is given in terms of the closed quadratic form $\mathfrak{a}(\mathbf{k})[\mathbf{u},\mathbf{u}]:=\Vert \mathcal{X}(\mathbf{k})\mathbf{u}\Vert ^2_{\mathfrak{H}_*}$, $\mathbf{u}\in\mathfrak{d}$. Using the discrete Fourier transformation \eqref{discr Fourier} and assumptions \eqref{<b^*b<}, \eqref{g in}, it is easily seen that \begin{equation} \label{a(k) estimates} \begin{split} \alpha _0\Vert g^{-1}\Vert ^{-1}_{L_\infty}\Vert (\mathbf{D}+\mathbf{k})(f\mathbf{u})\Vert ^2 _{L_2(\Omega)}\leqslant \mathfrak{a}(\mathbf{k})[\mathbf{u},\mathbf{u}]\leqslant \alpha _1\Vert g\Vert _{L_\infty}\Vert (\mathbf{D}+\mathbf{k})(f\mathbf{u})\Vert ^2_{L_2(\Omega)},\quad\mathbf{u}\in\mathfrak{d}. \end{split} \end{equation} So, by the compactness of the embedding $\widetilde{H}^1(\Omega ;\mathbb{C}^n)\hookrightarrow L_2(\Omega ;\mathbb{C}^n)$, the spectrum of $\mathcal{A}(\mathbf{k})$ is discrete and the resolvent is compact.
By \eqref{D+k >=} and the lower estimate \eqref{a(k) estimates}, \begin{equation} \label{A(k)>=} \mathcal{A}(\mathbf{k})\geqslant c_*\vert \mathbf{k}\vert ^2 I,\quad \mathbf{k}\in\mathrm{clos}\,\widetilde{\Omega};\quad c_*:=\alpha _0\Vert g^{-1}\Vert ^{-1}_{L_\infty}\Vert f^{-1}\Vert ^{-2}_{L_\infty} . \end{equation}
We put \begin{equation} \label{N in L2} \mathfrak{N}:=\mathrm{Ker}\,\mathcal{A}(0)=\mathrm{Ker}\,\mathcal{X}(0). \end{equation} Then \begin{equation} \label{N= in L2} \mathfrak{N}=\lbrace \mathbf{u}\in L_2(\Omega;\mathbb{C}^n) : f\mathbf{u}=\mathbf{c}\in\mathbb{C}^n\rbrace. \end{equation}
From \eqref{2r0=} and \eqref{6.5 from DSu} with $\mathbf{k}=0$ it follows that \begin{equation*} \Vert \mathbf{D}\mathbf{v}\Vert ^2 _{L_2(\Omega)}\geqslant 4 r_0^2\Vert \mathbf{v}\Vert ^2 _{L_2(\Omega)},\quad\mathbf{v}=f\mathbf{u}\in\widetilde{H}^1(\Omega ;\mathbb{C}^n),\quad \int _\Omega \mathbf{v} (\mathbf{x})\,d\mathbf{x}=0. \end{equation*} Combining this with the lower estimate \eqref{a(k) estimates} for $\mathbf{k}=0$, we see that the distance $d_0$ from the point zero to the rest of the spectrum of $\mathcal{A}(0)$ satisfies \begin{equation} \label{d^0<= in L2} d_0\geqslant 4c_*r_0^2. \end{equation}
\subsection{Direct integral decomposition for $\mathcal{A}$} Using the Gelfand transformation, we decompose the operator $\mathcal{A}$ into the direct integral of the operators $\mathcal{A}(\mathbf{k})$: \begin{equation} \label{5.7a} \mathcal{U}\mathcal{A}\mathcal{U}^{-1}=\int _{\widetilde{\Omega}} \oplus \mathcal{A}(\mathbf{k})\,d\mathbf{k}. \end{equation} This means the following. If $\mathbf{v}\in \mathrm{Dom}\,\mathfrak{a}$, then \begin{align} \label{v in dom a e k} &\widetilde{\mathbf{v}}(\mathbf{k},\cdot)=(\mathcal{U}\mathbf{v})(\mathbf{k},\cdot)\in \mathfrak{d}\quad \mbox{for a. e.}\;\mathbf{k}\in\widetilde{\Omega},\\ \label{a(k)=oplus int} &\mathfrak{a}[\mathbf{v},\mathbf{v}]=\int _{\widetilde{\Omega}}\mathfrak{a}(\mathbf{k})[\widetilde{\mathbf{v}}(\mathbf{k},\cdot),\widetilde{\mathbf{v}}(\mathbf{k},\cdot)]\,d\mathbf{k}. \end{align} Conversely, if $\widetilde{\mathbf{v}}\in \int _{\widetilde{\Omega}}\oplus L_2(\Omega;\mathbb{C}^n)\,d\mathbf{k}$ satisfies \eqref{v in dom a e k} and the integral in \eqref{a(k)=oplus int} is finite, then $\mathbf{v}\in \mathrm{Dom}\,\mathfrak{a}$ and \eqref{a(k)=oplus int} holds.
\subsection{Incorporation of the operators $\mathcal{A}(\mathbf{k})$ into the abstract scheme}
\label{Subsection Incorporation of the operators A(k) into the abstract scheme}
For $d>1$ the operators $\mathcal{A}(\mathbf{k})$ depend on the multidimensional parameter $\mathbf{k}$. According to \cite[Chapter~2]{BSu}, we consider the one-dimensional parameter $t:=\vert\mathbf{k}\vert$. We will apply the scheme of Chapter~{I}. Accordingly, all our considerations will depend on the additional parameter $\boldsymbol{\theta}=\mathbf{k}/\vert\mathbf{k}\vert\in\mathbb{S}^{d-1}$, and we need to make our estimates uniform with respect to $\boldsymbol{\theta}$.
The spaces $\mathfrak{H}$ and $\mathfrak{H}_*$ are defined by \eqref{frak H for L2}. Let $X(t)=X(t,\boldsymbol{\theta}):=\mathcal{X}(t\boldsymbol{\theta})$. Then $X(t,\boldsymbol{\theta})=X_0+tX_1(\boldsymbol{\theta})$, where $X_0=g(\mathbf{x})^{1/2}b(\mathbf{D})f(\mathbf{x})$, $\mathrm{Dom}\,X_0=\mathfrak{d}$, and $X_1(\boldsymbol{\theta})$ is a bounded operator of multiplication by the matrix-valued function $g(\mathbf{x})^{1/2}b(\boldsymbol{\theta})f(\mathbf{x})$. We put $A(t)=A(t,\boldsymbol{\theta}):=\mathcal{A}(t\boldsymbol{\theta})$. Then $A(t,\boldsymbol{\theta})=X(t,\boldsymbol{\theta})^*X(t,\boldsymbol{\theta})$. According to \eqref{N in L2} and \eqref{N= in L2}, $\mathfrak{N}=\mathrm{Ker}\,X_0=\mathrm{Ker}\,\mathcal{A}(0)$, $\mathrm{dim}\,\mathfrak{N}=n$. The distance $d_0$ from the point zero to the rest of the spectrum of $\mathcal{A}(0)$ satisfies estimate \eqref{d^0<= in L2}. As was shown in \cite[Chapter 2, Sec.~3]{BSu}, the condition $n\leqslant n_*=\mathrm{dim}\,\mathrm{Ker}\,X_0^*$ is also fulfilled. Thus, all the assumptions of Section~\ref{Section Preliminaries} are valid.
In Subsection \ref{Subsubsection operator pencils}, it was required to choose the number $\delta <d_0/8$. Taking \eqref{A(k)>=} and \eqref{d^0<= in L2} into account, we put \begin{equation} \label{delta L2} \delta :=c_*r_0^2/4=(r_0/2)^2\alpha _0\Vert g^{-1}\Vert ^{-1}_{L_\infty}\Vert f^{-1}\Vert ^{-2}_{L_\infty}. \end{equation} Next, by \eqref{<b^*b<}, the operator $X_1(\boldsymbol{\theta})=g(\mathbf{x})^{1/2}b(\boldsymbol{\theta})f(\mathbf{x})$ satisfies \begin{equation} \label{X_1(theta)<=} \Vert X_1(\boldsymbol{\theta})\Vert \leqslant \alpha _1^{1/2}\Vert g\Vert ^{1/2}_{L_\infty}\Vert f\Vert _{L_\infty}. \end{equation} This allows us to take the number \begin{equation} \label{t0 L2} t_0:=\delta^{1/2}\alpha _1^{-1/2}\Vert g\Vert ^{-1/2}_{L_\infty}\Vert f\Vert ^{-1}_{L_\infty} =(r_0/2)\alpha _0^{1/2}\alpha _1^{-1/2}\Vert g\Vert ^{-1/2}_{L_\infty}\Vert g^{-1}\Vert ^{-1/2}_{L_\infty}\Vert f\Vert ^{-1}_{L_\infty}\Vert f^{-1}\Vert ^{-1}_{L_\infty} \end{equation} as $t_0$ (see \eqref{t_0(delta) abstact scheme}). Obviously, $t_0\leqslant r_0/2$, and the ball $\vert\mathbf{k}\vert\leqslant t_0$ lies in $\widetilde{\Omega}$. It is important that $c_*$, $\delta$, and $t_0$ (see \eqref{A(k)>=}, \eqref{delta L2}, \eqref{t0 L2}) do not depend on the parameter $\boldsymbol{\theta}$.
From \eqref{A(k)>=} it follows that the spectral germ $S(\boldsymbol{\theta})$ (which now depends on $\boldsymbol{\theta}$) is nondegenerate: \begin{equation} \label{S(theta)>=} S(\boldsymbol{\theta})\geqslant c_* I_\mathfrak{N}. \end{equation} It is important that the spectral germ is nondegenerate uniformly in $\boldsymbol{\theta}$.
\section{The operator $\widehat{\mathcal{A}}$. The effective matrix. The effective operator} \label{Sec eff op}
\subsection{The operator $\widehat{\mathcal{A}}$}
In the case where $f=\mathbf{1}_n$, we agree to mark all the objects with the upper hat ``$\widehat{\phantom{a}}$''. We have $\widehat{\mathfrak{H}}=\mathfrak{H}=L_2(\Omega ;\mathbb{C}^n)$. For the operator \begin{equation} \label{hat A} \widehat{\mathcal{A}}=b(\mathbf{D})^*g(\mathbf{x})b(\mathbf{D}), \end{equation}
the family \begin{equation} \label{hat A(k)=} \widehat{\mathcal{A}}(\mathbf{k})=b(\mathbf{D}+\mathbf{k})^*g(\mathbf{x})b(\mathbf{D}+\mathbf{k}) \end{equation} is denoted by $\widehat{A}(t,\boldsymbol{\theta})$. If $f=\mathbf{1}_n$, the kernel \eqref{N= in L2} takes the form \begin{equation} \label{hat N= in L2} \widehat{\mathfrak{N}}=\lbrace \mathbf{u}\in L_2(\Omega ;\mathbb{C}^n) :\mathbf{u}=\mathbf{c}\in\mathbb{C}^n\rbrace . \end{equation} Let $\widehat{P}$ be the orthogonal projection of $\mathfrak{H}$ onto the subspace $\widehat{\mathfrak{N}}$. Then $\widehat{P}$ is the operator of averaging over the cell: \begin{equation} \label{P L2} \widehat{P}\mathbf{u}=\vert \Omega\vert ^{-1}\int _\Omega \mathbf{u}(\mathbf{x})\,d\mathbf{x},\quad\mathbf{u}\in L_2(\Omega ;\mathbb{C}^n). \end{equation}
From \eqref{A(k)>=} with $f=\mathbf{1}_n$ it follows that \begin{equation} \label{hat A(k)>=} \widehat{\mathcal{A}}(\mathbf{k})=\widehat{A}(t,\boldsymbol{\theta}) \geqslant \widehat{c}_* t^2 I,\quad \mathbf{k}=t\boldsymbol{\theta}\in\mathrm{clos}\, \widetilde{\Omega};\quad \widehat{c}_*:=\alpha _0\Vert g^{-1}\Vert ^{-1}_{L_\infty}. \end{equation}
\subsection{The effective matrix}
In accordance with \cite[Chapter 3, Sec.~1]{BSu}, the spectral germ $\widehat{S}(\boldsymbol{\theta})$ of the operator family $\widehat{A}(t,\boldsymbol{\theta})$ acting in $\widehat{\mathfrak{N}}$ can be represented as \begin{equation} \label{S(theta)=} \widehat{S}(\boldsymbol{\theta})=b(\boldsymbol{\theta})^*g^0b(\boldsymbol{\theta}),\quad\boldsymbol{\theta}\in\mathbb{S}^{d-1}, \end{equation} where $b(\boldsymbol{\theta})$ is the symbol of the operator $b(\mathbf{D})$ and $g^0$ is the so-called \textit{effective matrix}. The constant positive $(m\times m)$-matrix $g^0$ is defined as follows. Let $\Lambda\in\widetilde{H}^1(\Omega)$ be the $\Gamma$-periodic $(n\times m)$-matrix-valued weak solution of the problem \begin{equation} \label{Lambda problem} b(\mathbf{D})^*g(\mathbf{x})(b(\mathbf{D})\Lambda (\mathbf{x})+\mathbf{1}_m)=0,\quad \int _\Omega \Lambda (\mathbf{x})\,d\mathbf{x}=0. \end{equation} Denote \begin{equation} \label{tilde g} \widetilde{g}(\mathbf{x}):=g(\mathbf{x})(b(\mathbf{D})\Lambda (\mathbf{x})+\mathbf{1}_m). \end{equation} Then the effective matrix $g^0$ is given by \begin{equation} \label{g0} g^0=\vert \Omega\vert ^{-1}\int _\Omega \widetilde{g}(\mathbf{x})\,d\mathbf{x}. \end{equation} It turns out that the matrix $g^0$ is positive definite. In the case where $f=\mathbf{1}_n$, estimate \eqref{S(theta)>=} takes the form \begin{equation} \label{hat S(theta)>=} \widehat{S}(\boldsymbol{\theta})\geqslant \widehat{c}_* I_{\widehat{\mathfrak{N}}}. \end{equation}
From \eqref{Lambda problem} it is easy to derive that \begin{equation} \label{b(D)Lambda <=} \Vert b(\mathbf{D})\Lambda \Vert _{L_2(\Omega)}\leqslant \vert \Omega\vert ^{1/2}m^{1/2}\Vert g\Vert ^{1/2}_{L_\infty}\Vert g^{-1}\Vert _{L_\infty}^{1/2}. \end{equation} We also need the following inequalities obtained in \cite[(6.28) and Subsec. 7.3]{BSu05}: \begin{align} \label{Lambda<=} &\Vert \Lambda\Vert _{L_2(\Omega)} \leqslant \vert \Omega\vert ^{1/2}M_1;\quad M_1:=m^{1/2}(2r_0)^{-1}\alpha_0^{-1/2}\Vert g\Vert ^{1/2}_{L_\infty}\Vert g^{-1}\Vert ^{1/2}_{L_\infty}; \\ \label{D Lambda <=} &\Vert \mathbf{D}\Lambda\Vert _{L_2(\Omega)} \leqslant \vert\Omega\vert ^{1/2}M_2;\quad M_2:=m^{1/2}\alpha_0^{-1/2}\Vert g\Vert ^{1/2}_{L_\infty}\Vert g^{-1}\Vert ^{1/2}_{L_\infty}. \end{align}
\subsection{The effective operator $\widehat{\mathcal{A}}^0$}
By \eqref{S(theta)=} and the homogeneity of the symbol $b(\mathbf{k})$, we have \begin{equation} \label{S(k)=} \widehat{S}(\mathbf{k}):=t^2 \widehat{S}(\boldsymbol{\theta})=b(\mathbf{k})^*g^0b(\mathbf{k}),\quad\mathbf{k}\in\mathbb{R}^d,\quad t=\vert\mathbf{k}\vert,\quad\boldsymbol{\theta}=\mathbf{k}/\vert\mathbf{k}\vert. \end{equation} The matrix $\widehat{S}(\mathbf{k})$ is the symbol of the differential operator \begin{equation} \label{A^0 hat} \widehat{\mathcal{A}}^0=b(\mathbf{D})^*g^0b(\mathbf{D}) \end{equation} acting in $L_2(\mathbb{R}^d;\mathbb{C}^n)$ on the domain $H^2(\mathbb{R}^d;\mathbb{C}^n)$ and called the \textit{effective operator} for the operator~$\widehat{\mathcal{A}}$.
Let $\widehat{\mathcal{A}}^0(\mathbf{k})$ be the operator family in $L_2(\Omega;\mathbb{C}^n)$ corresponding to the effective operator $\widehat{\mathcal{A}}^0$. Then $ \widehat{\mathcal{A}}^0(\mathbf{k})=b(\mathbf{D}+\mathbf{k})^*g^0b(\mathbf{D}+\mathbf{k}) $ with periodic boundary conditions: $\mathrm{Dom}\,\widehat{\mathcal{A}}^0(\mathbf{k})=\widetilde{H}^2(\Omega;\mathbb{C}^n)$. So, by \eqref{P L2} and \eqref{S(k)=}, \begin{equation} \label{SP=A0P} \widehat{S}(\mathbf{k})\widehat{P}=\widehat{\mathcal{A}}^0(\mathbf{k})\widehat{P}. \end{equation} From estimate \eqref{hat S(theta)>=} for the symbol of the operator $\widehat{\mathcal{A}}^0(\mathbf{k})$ it follows that \begin{equation} \label{A^0(k)>=} \widehat{\mathcal{A}}^0(\mathbf{k})\geqslant \widehat{c}_* \vert \mathbf{k}\vert ^2 I,\quad\mathbf{k}\in\widetilde{\Omega}. \end{equation}
\subsection{Properties of the effective matrix} The effective matrix $g^0$ satisfies the estimates known in homogenization theory as the Voigt-Reuss bracketing (see, e.g., \cite[Chapter~3, Theorem~1.5]{BSu}).
\begin{proposition} Let $g^0$ be the effective matrix \eqref{g0}. Then \begin{equation} \label{Voigt-Reuss} \underline{g} \leqslant g^0\leqslant\overline{g} . \end{equation} If $m=n$, then $g^0=\underline{g}$. \end{proposition}
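For orientation, we recall the standard one-dimensional example, in which the effective coefficient is the harmonic mean (this is a classical fact of homogenization and agrees with the case $m=n$ of the proposition). Let $d=n=m=1$ and $b(\mathbf{D})=D$. Then, by \eqref{Lambda problem}, the function $\widetilde{g}=g(D\Lambda +1)$ from \eqref{tilde g} is constant, while the periodicity of $\Lambda$ implies $\int _\Omega D\Lambda\,dx=0$; hence \begin{equation*} g^0=\widetilde{g}=\Bigl(\vert\Omega\vert ^{-1}\int _\Omega g(x)^{-1}\,dx\Bigr)^{-1}=\underline{g}, \end{equation*} the harmonic mean of $g$ over the cell $\Omega$.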
From inequalities \eqref{Voigt-Reuss} it follows that \begin{equation} \label{g^0<=} \vert g^0\vert \leqslant\Vert g\Vert _{L_\infty},\quad \vert (g^0)^{-1}\vert\leqslant\Vert g^{-1}\Vert _{L_\infty}. \end{equation}
Now, we distinguish the cases where one of the inequalities in \eqref{Voigt-Reuss} becomes an identity. See \cite[Chapter 3, Propositions 1.6 and 1.7]{BSu}.
\begin{proposition} The equality $g^0=\overline{g}$ is equivalent to the relations \begin{equation} \label{overline-g} b(\mathbf{D})^* {\mathbf g}_k(\mathbf{x}) =0,\ \ k=1,\dots,m, \end{equation} where ${\mathbf g}_k(\mathbf{x})$, $k=1,\dots,m,$ are the columns of the matrix-valued function $g(\mathbf{x})$. \end{proposition}
\begin{proposition} The identity $g^0 =\underline{g}$ is equivalent to the relations \begin{equation} \label{underline-g} {\mathbf l}_k(\mathbf{x}) = {\mathbf l}_k^0 + b(\mathbf{D}) {\mathbf w}_k,\ \ {\mathbf l}_k^0\in \mathbb{C}^m,\ \ {\mathbf w}_k \in \widetilde{H}^1(\Omega;\mathbb{C}^m),\ \ k=1,\dots,m, \end{equation} where ${\mathbf l}_k(\mathbf{x})$, $k=1,\dots,m,$ are the columns of the matrix-valued function $g(\mathbf{x})^{-1}$. \end{proposition}
\section{Approximation of the sandwiched operator $\mathcal{A}(\mathbf{k})^{-1/2}\sin (\varepsilon ^{-1}\tau \mathcal{A}(\mathbf{k})^{1/2})$}
\label{Section 7 appr A(k)}
Now, we consider the operator $\mathcal{A}(\mathbf{k})^{-1/2}\sin (\varepsilon ^{-1}\tau \mathcal{A}(\mathbf{k})^{1/2})$ in the general case where $f\neq\mathbf{1}_n$. Recall that $\mathcal{A}(\mathbf{k})$ is the operator \eqref{A(k)=}. Then \begin{equation} \label{A(k) and hat A(k)} \mathcal{A}(\mathbf{k})=f (\mathbf{x})^*\widehat{\mathcal{A}}(\mathbf{k})f(\mathbf{x}). \end{equation}
\subsection{Incorporation of $\mathcal{A}(\mathbf{k})$ in the framework of Section~\ref{Section sandwiched abstract}}
As was shown in Subsec.~\ref{Subsection Incorporation of the operators A(k) into the abstract scheme}, the operator $\mathcal{A}(\mathbf{k})$ satisfies the assumptions of Section~\ref{Section Preliminaries}. Now the assumptions of Subsec.~\ref{Subsec hat famimy abstract} are valid with $\mathfrak{H}=\widehat{\mathfrak{H}}=L_2(\Omega;\mathbb{C}^n)$ and $\mathfrak{H}_*=L_2(\Omega ;\mathbb{C}^m)$. The role of $\widehat{A}(t)$ is played by $\widehat{A}(t,\boldsymbol{\theta})=\widehat{\mathcal{A}}(t\boldsymbol{\theta})$, and the role of $A(t)$ is played by $A(t,\boldsymbol{\theta})=\mathcal{A}(t\boldsymbol{\theta})$. The isomorphism $M$ is the operator of multiplication by the function $f(\mathbf{x})$. Relation \eqref{A(t) and hat A(t)} corresponds to the identity \eqref{A(k) and hat A(k)}.
Next, the operator $Q$ (see \eqref{Q= abstract}) is the operator of multiplication by the matrix-valued function \begin{equation} \label{8.1a} Q(\mathbf{x}):=\left(f(\mathbf{x})f(\mathbf{x})^*\right)^{-1}. \end{equation} The block $Q_{\widehat{\mathfrak{N}}}$ of $Q$ in the subspace $\widehat{\mathfrak{N}}$ (see \eqref{hat N= in L2}) is the operator of multiplication by the constant matrix $ \overline{Q}=\left(\underline{ff^*}\right)^{-1}=\vert \Omega\vert ^{-1}\int _\Omega \left( f(\mathbf{x})f(\mathbf{x})^*\right)^{-1}\,d\mathbf{x}$. The operator $M_0:=\left(Q_{\widehat{\mathfrak{N}}}\right)^{-1/2}$ acts in $\widehat{\mathfrak{N}}$ as multiplication by the matrix $ f_0:=\left(\overline{Q}\right)^{-1/2}=\left( \underline{ff^*}\right)^{1/2}$. Obviously, \begin{equation} \label{f0<=} \vert f_0\vert \leqslant \Vert f\Vert _{L_\infty},\quad \vert f_0^{-1}\vert \leqslant \Vert f^{-1}\Vert _{L_\infty}. \end{equation}
Now, we specify the operators from \eqref{Th 3.3_1} and \eqref{Th 3.3_2}. By \eqref{S(k)=}, \begin{equation} \label{t2 M0 hat S M0=} t^2M_0\widehat{S}(\boldsymbol{\theta})M_0=f_0b(\mathbf{k})^*g^0b(\mathbf{k})f_0,\quad t=\vert\mathbf{k}\vert,\quad \boldsymbol{\theta}=\mathbf{k}/\vert\mathbf{k}\vert. \end{equation}
Let $\mathcal{A}^0$ be the following operator in $L_2(\mathbb{R}^d;\mathbb{C}^n)$: \begin{equation} \label{A0 no hat} \mathcal{A}^0=f_0b(\mathbf{D})^*g^0b(\mathbf{D})f_0,\quad\mathrm{Dom}\,\mathcal{A}^0=H^2(\mathbb{R}^d;\mathbb{C}^n). \end{equation} Let $\mathcal{A}^0(\mathbf{k})$ be the corresponding operator family in $L_2(\Omega;\mathbb{C}^n)$ given by the expression \begin{equation} \label{A0(k)= no hat} \mathcal{A}^0(\mathbf{k})=f_0b(\mathbf{D}+\mathbf{k})^*g^0b(\mathbf{D}+\mathbf{k})f_0 \end{equation} with the periodic boundary conditions. By \eqref{SP=A0P}, \eqref{A^0(k)>=}, \eqref{f0<=}, and the identity $c_*=\widehat{c}_*\Vert f^{-1}\Vert ^{-2}_{L_\infty}$, the symbol of the operator $\mathcal{A}^0$ satisfies the estimate \begin{equation} \label{f_0 dots >=} f_0b(\mathbf{k})^*g^0b(\mathbf{k})f_0\geqslant c_*\vert \mathbf{k}\vert ^2 \mathbf{1}_n,\quad\mathbf{k}\in\mathbb{R}^d. \end{equation} Hence, using the Fourier series representation for the operator $\mathcal{A}^0(\mathbf{k})$ and \eqref{6.5 from DSu}, we deduce that \begin{equation} \label{8.8a} \mathcal{A}^0(\mathbf{k})\geqslant c_*\vert \mathbf{k}\vert ^2 I,\quad \mathbf{k}\in\mathrm{clos}\,\widetilde{\Omega}. \end{equation} By \eqref{P L2}, \eqref{t2 M0 hat S M0=}, and \eqref{A0(k)= no hat}, we obtain $t^2M_0\widehat{S}(\boldsymbol{\theta})M_0\widehat{P}=\mathcal{A}^0(\mathbf{k})\widehat{P}$, whence \begin{equation} \label{M0 sin} \begin{split} M_0&(t^2 M_0 \widehat{S}(\boldsymbol{\theta})M_0)^{-1/2}\sin \left(\varepsilon^{-1}\tau (t^2 M_0\widehat{S}(\boldsymbol{\theta})M_0)^{1/2}\right)M_0^{-1}\widehat{P} \\ &=f_0\mathcal{A}^0 (\mathbf{k})^{-1/2}\sin (\varepsilon ^{-1}\tau \mathcal{A}^0(\mathbf{k})^{1/2})f_0^{-1}\widehat{P}. \end{split} \end{equation}
In accordance with \cite[Sec.~5]{BSu05}, the role of $\widehat{Z}_Q$ is played by the operator \begin{equation} \label{Z_Q(theta)=} \widehat{Z}_Q(\boldsymbol{\theta})=\Lambda _Q b(\boldsymbol{\theta})\widehat{P}. \end{equation} Here $\Lambda _Q$ is the operator of multiplication by the $\Gamma$-periodic $(n\times m)$-matrix-valued solution $\Lambda _Q(\mathbf{x})$ of the problem \begin{equation*} b(\mathbf{D})^*g(\mathbf{x})\left(b(\mathbf{D})\Lambda _Q(\mathbf{x})+\mathbf{1}_m\right)=0,\quad\int _\Omega Q(\mathbf{x})\Lambda_Q(\mathbf{x})\,d\mathbf{x}=0. \end{equation*} Note that \begin{equation} \label{Lambda_Q=Lambda +LambdaQ0} \Lambda _Q(\mathbf{x})=\Lambda (\mathbf{x})+\Lambda _Q^0,\quad \Lambda _Q^0:=-\left(\overline{Q}\right)^{-1}\left(\overline{Q\Lambda}\right), \end{equation} where $\Lambda$ is the $\Gamma$-periodic solution of problem \eqref{Lambda problem}. From \eqref{Z_Q(theta)=} it follows that \begin{equation*} t\widehat{Z}_Q(\boldsymbol{\theta})\widehat{P}=\Lambda _Qb(\mathbf{k})\widehat{P}=\Lambda _Qb(\mathbf{D}+\mathbf{k})\widehat{P}. \end{equation*}
\subsection{Estimates in the case where $\vert\mathbf{k}\vert \leqslant t_0$}
Consider the operator $\mathcal{H}_0=-\Delta $ acting in $L_2(\mathbb{R}^d;\mathbb{C}^n)$. Under the Gelfand transformation, this operator is decomposed into the direct integral of the operators $\mathcal{H}_0(\mathbf{k})$ acting in $L_2(\Omega;\mathbb{C}^n)$ and given by the differential expression $\vert \mathbf{D}+\mathbf{k}\vert ^2$ with the periodic boundary conditions. Denote \begin{equation} \label{R(k,eps)} \mathcal{R}(\mathbf{k},\varepsilon):=\varepsilon ^2 (\mathcal{H}_0(\mathbf{k})+\varepsilon ^2 I)^{-1}. \end{equation} Obviously, \begin{equation} \label{R(k)P} \mathcal{R}(\mathbf{k},\varepsilon)\widehat{P}=\varepsilon ^2 (t^2+\varepsilon ^2 )^{-1}\widehat{P},\quad\vert \mathbf{k}\vert =t . \end{equation}
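The identity \eqref{R(k)P} (as well as the estimates \eqref{R(I-P)} and \eqref{R(I-P) for corr} below) is conveniently verified in the discrete Fourier basis: for $\mathbf{b}\in\widetilde{\Gamma}$ we have $\mathcal{H}_0(\mathbf{k})e^{i\langle \mathbf{b},\mathbf{x}\rangle}=\vert\mathbf{b}+\mathbf{k}\vert ^2 e^{i\langle \mathbf{b},\mathbf{x}\rangle}$, whence \begin{equation*} \mathcal{R}(\mathbf{k},\varepsilon)e^{i\langle \mathbf{b},\mathbf{x}\rangle}=\varepsilon ^2\left(\vert \mathbf{b}+\mathbf{k}\vert ^2+\varepsilon ^2\right)^{-1}e^{i\langle \mathbf{b},\mathbf{x}\rangle},\quad\mathbf{b}\in\widetilde{\Gamma}. \end{equation*} Since $\widehat{P}$ is the projection onto the harmonic with $\mathbf{b}=0$, the term with $\mathbf{b}=0$ yields \eqref{R(k)P}.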
In order to approximate the operator $f\mathcal{A}(\mathbf{k})^{-1/2}\sin (\varepsilon ^{-1}\tau \mathcal{A}(\mathbf{k})^{1/2})f^{-1}$, we apply Theorem~\ref{Theorem sin sandwiched abstract}. We only need to specify the constants in the estimates. The constants $c_*$, $\delta$, and $t_0$ are defined by \eqref{A(k)>=}, \eqref{delta L2}, and \eqref{t0 L2}. Using estimate \eqref{X_1(theta)<=}, we choose the following values of the constants from \eqref{F-P}, \eqref{F(t)=P+tF_1+F_2(t)}, and \eqref{C_3 abstract}: \begin{align*} {C}_1:&=\beta _1\delta ^{-1/2}\alpha _1^{1/2}\Vert g\Vert ^{1/2}_{L_\infty}\Vert f\Vert _{L_\infty}, \quad {C}_2:=\beta _2\delta ^{-1}\alpha_1\Vert g\Vert _{L_\infty}\Vert f\Vert ^2_{L_\infty}, \\ {C}_3:&=\beta _3\delta ^{-1/2}\alpha _1^{1/2}\Vert g\Vert ^{1/2}_{L_\infty}\Vert f\Vert _{L_\infty}(1+c_*^{-1}\alpha_1\Vert g\Vert _{L_\infty}\Vert f\Vert ^2_{L_\infty}). \end{align*} Similarly, in accordance with \eqref{C_4 abstract} and \eqref{A^1/2F_2} we define \begin{align*} {C}_4:&=\beta _4\delta ^{-1/2}\alpha_1\Vert g\Vert _{L_\infty}\Vert f\Vert ^2_{L_\infty}(1+c_*^{-1/2}\alpha_1^{1/2}\Vert g\Vert ^{1/2}_{L_\infty}\Vert f\Vert _{L_\infty}), \\ {C}_5:&=\beta _5\delta ^{-1/2}\alpha _1\Vert g\Vert _{L_\infty}\Vert f\Vert ^2_{L_\infty}. \end{align*} Using these ${C}_1$, ${C}_3$, ${C}_4$, and ${C}_5$, according to \eqref{C_6 abstract}, \eqref{C_7 abstract}, \eqref{A-1/2sin F 2}, and \eqref{Th corr exp}, we put \begin{align*} {C}_6:&=4 \pi ^{-1}c_*^{-1/2}{C}_3,\quad {C}_8:=\max\lbrace {C}_4c_*^{-1/2};{C}_6\rbrace,\\ {C}_7:&={C}_8+c_*^{-1/2}{C}_1,\quad {C}_9:={C}_1+c_*^{-1/2}{C}_5+{C}_8(\alpha_1^{1/2}\Vert g\Vert ^{1/2}_{L_\infty}\Vert f\Vert _{L_\infty}+{C}_4t_0). 
\end{align*} By Theorem~\ref{Theorem sin sandwiched abstract}, taking \eqref{M0 sin}, \eqref{Z_Q(theta)=}, and \eqref{R(k)P} into account, we have \begin{align} \label{I, k<t0-1} \begin{split} &\Bigl\Vert \Bigl( f\mathcal{A}(\mathbf{k})^{-1/2}\sin (\varepsilon ^{-1}\tau \mathcal{A}(\mathbf{k})^{1/2})f^{-1} -f_0 \mathcal{A}^0(\mathbf{k})^{-1/2}\sin (\varepsilon^{-1}\tau \mathcal{A}^0(\mathbf{k})^{1/2})f_0^{-1}\Bigr) \\ &\times \mathcal{R}(\mathbf{k},\varepsilon)^{1/2}\widehat{P}\Bigr\Vert _{L_2(\Omega )\rightarrow L_2(\Omega )} \leqslant {C}_7\Vert f\Vert _{L_\infty}\Vert f^{-1}\Vert _{L_\infty}(1+\vert\tau\vert ),\quad \tau \in\mathbb{R},\quad \varepsilon >0,\quad\vert \mathbf{k}\vert \leqslant t_0, \end{split} \\ \label{II, k<t0} \begin{split} &\Bigl\Vert \widehat{\mathcal{A}}(\mathbf{k})^{1/2}\Bigl( f\mathcal{A}(\mathbf{k})^{-1/2}\sin (\varepsilon ^{-1}\tau \mathcal{A}(\mathbf{k})^{1/2})f^{-1} \\ &-(I+\Lambda _Q b(\mathbf{D}+\mathbf{k}))f_0\mathcal{A}^0(\mathbf{k})^{-1/2}\sin (\varepsilon ^{-1}\tau \mathcal{A}^0(\mathbf{k})^{1/2})f_0^{-1} \Bigr)\mathcal{R}(\mathbf{k},\varepsilon)\widehat{P}\Bigr\Vert _{L_2(\Omega)\rightarrow L_2(\Omega)} \\ &\leqslant {C}_9 \Vert f^{-1}\Vert _{L_\infty}\varepsilon (1+\vert \tau\vert ),\quad \tau \in\mathbb{R},\quad \varepsilon >0,\quad\vert \mathbf{k}\vert \leqslant t_0. \end{split} \end{align}
Using \eqref{Lambda_Q=Lambda +LambdaQ0}, we show that $\Lambda _Q$ can be replaced by $\Lambda$ in \eqref{II, k<t0}. Only the constant in the estimate will change under such replacement. Indeed, due to the presence of the projection $\widehat{P}$, taking \eqref{<b^*b<}, \eqref{hat A(k)=}, \eqref{f0<=}, \eqref{R(k)P}, and the inequality $\vert \sin x\vert/\vert x\vert \leqslant 1$ into account, we have \begin{equation} \label{A(k)1/2Lambda Q0<=} \begin{split} &\Bigl\Vert \widehat{\mathcal{A}}(\mathbf{k})^{1/2}\Lambda _Q^0 b(\mathbf{D}+\mathbf{k})f_0\mathcal{A}^0(\mathbf{k})^{-1/2}\sin (\varepsilon ^{-1}\tau \mathcal{A}^0(\mathbf{k})^{1/2})f_0^{-1}\mathcal{R}(\mathbf{k},\varepsilon)\widehat{P}\Bigr\Vert _{L_2(\Omega)\rightarrow L_2(\Omega)} \\ &\leqslant \Vert g\Vert ^{1/2}_{L_\infty}\Vert b(\mathbf{k})\Lambda _Q^0b(\mathbf{k})f_0\mathcal{A}^0(\mathbf{k})^{-1/2}\sin (\varepsilon ^{-1}\tau \mathcal{A}^0(\mathbf{k})^{1/2})f_0^{-1}\mathcal{R}(\mathbf{k},\varepsilon)\widehat{P}\Vert _{L_2(\Omega)\rightarrow L_2(\Omega)} \\ &\leqslant \alpha_1\Vert g\Vert ^{1/2}_{L_\infty}\vert \Lambda _Q^0\vert \vert \mathbf{k}\vert ^2\Vert f\Vert _{L_\infty}\Vert f^{-1}\Vert _{L_\infty}\vert \tau\vert \varepsilon (\vert \mathbf{k}\vert ^2+\varepsilon ^2)^{-1}, \quad\varepsilon>0,\quad \tau\in\mathbb{R},\quad\mathbf{k}\in\mathrm{clos}\,\widetilde{\Omega}. \end{split} \end{equation} Next, according to \cite[Sec.~7]{BSu05}, \begin{equation} \label{Lambda Q0<=} \vert \Lambda _Q^0\vert \leqslant m^{1/2}(2r_0)^{-1}\alpha _0^{-1/2}\Vert g\Vert ^{1/2}_{L_\infty}\Vert g^{-1}\Vert ^{1/2}_{L_\infty}\Vert f\Vert ^2_{L_\infty}\Vert f^{-1}\Vert ^2_{L_\infty}. 
\end{equation} Combining \eqref{Lambda_Q=Lambda +LambdaQ0} and \eqref{II, k<t0}--\eqref{Lambda Q0<=}, we arrive at the estimate \begin{equation} \label{II-2} \begin{split} &\Bigl\Vert \widehat{\mathcal{A}}(\mathbf{k})^{1/2}\Bigl( f\mathcal{A}(\mathbf{k})^{-1/2}\sin (\varepsilon ^{-1}\tau \mathcal{A}(\mathbf{k})^{1/2})f^{-1}\\ &-\left(I+\Lambda b(\mathbf{D}+\mathbf{k})\right)f_0\mathcal{A}^0(\mathbf{k})^{-1/2}\sin (\varepsilon ^{-1}\tau \mathcal{A}^0(\mathbf{k})^{1/2})f_0^{-1}\Bigr)\mathcal{R}(\mathbf{k},\varepsilon)\widehat{P}\Bigr\Vert _{L_2(\Omega)\rightarrow L_2(\Omega)} \\ &\leqslant {C}_{10}\varepsilon (1+\vert \tau\vert),\quad \varepsilon >0,\quad \tau\in\mathbb{R},\quad \mathbf{k}\in \mathrm{clos}\,\widetilde{\Omega},\quad \vert \mathbf{k}\vert \leqslant t_0, \end{split} \end{equation} where ${C}_{10}:={C}_9\Vert f^{-1}\Vert _{L_\infty}+m^{1/2}(2r_0)^{-1}\alpha _0^{-1/2}\alpha _1\Vert g\Vert _{L_\infty}\Vert g^{-1}\Vert ^{1/2}_{L_\infty}\Vert f\Vert ^3_{L_\infty}\Vert f^{-1}\Vert ^3_{L_\infty}$.
\subsection{Approximations for $\vert \mathbf{k}\vert >t_0$}
By \eqref{A(k)>=} and \eqref{8.8a}, \begin{equation} \label{8.22a} \begin{split} \Vert \mathcal{A}(\mathbf{k})^{-1/2}\Vert _{L_2(\Omega)\rightarrow L_2(\Omega)}\leqslant c_*^{-1/2}t_0^{-1},\quad\Vert \mathcal{A}^0(\mathbf{k})^{-1/2}\Vert _{L_2(\Omega)\rightarrow L_2(\Omega)}\leqslant c_*^{-1/2}t_0^{-1}, \\ \mathbf{k} \in\mathrm{clos}\,\widetilde{\Omega},\quad \vert \mathbf{k}\vert > t_0. \end{split} \end{equation} By \eqref{R(k)P}, \begin{equation} \label{8.22b} \Vert \mathcal{R}(\mathbf{k},\varepsilon)\widehat{P}\Vert _{L_2(\Omega)\rightarrow L_2(\Omega)} \leqslant 1,\quad\mathbf{k}\in\mathrm{clos}\,\widetilde{\Omega}. \end{equation} Combining \eqref{f0<=}, \eqref{8.22a}, and \eqref{8.22b}, we obtain \begin{equation} \label{I, k>t0} \begin{split} \Bigl\Vert& \Bigl( f \mathcal{A}(\mathbf{k})^{-1/2}\sin (\varepsilon ^{-1}\tau \mathcal{A}(\mathbf{k})^{1/2})f^{-1} -f_0\mathcal{A}^0(\mathbf{k})^{-1/2}\sin (\varepsilon ^{-1}\tau \mathcal{A}^0(\mathbf{k})^{1/2})f_0^{-1} \Bigr) \\ &\times \mathcal{R}(\mathbf{k},\varepsilon)^{1/2}\widehat{P}\Bigr\Vert _{L_2(\Omega )\rightarrow L_2(\Omega)} \leqslant 2c_*^{-1/2}t_0^{-1}\Vert f\Vert _{L_\infty}\Vert f^{-1}\Vert _{L_\infty}, \end{split} \end{equation} $\varepsilon >0$, $\tau\in\mathbb{R}$, $\mathbf{k}\in\mathrm{clos}\,\widetilde{\Omega}$, $\vert \mathbf{k}\vert > t_0$. Bringing together \eqref{I, k<t0-1} and \eqref{I, k>t0}, we conclude that \begin{equation} \label{8.3_I} \begin{split} \Bigl\Vert &\Bigl( f\mathcal{A}(\mathbf{k})^{-1/2}\sin (\varepsilon ^{-1}\tau \mathcal{A}(\mathbf{k})^{1/2})f^{-1} -f_0\mathcal{A}^0(\mathbf{k})^{-1/2}\sin (\varepsilon ^{-1}\tau \mathcal{A}^0(\mathbf{k})^{1/2})f_0^{-1} \Bigr)\\ &\times
\mathcal{R}(\mathbf{k},\varepsilon)^{1/2}\widehat{P}\Bigr\Vert _{L_2(\Omega )\rightarrow L_2(\Omega)} \leqslant \max\lbrace C_7; 2c_*^{-1/2}t_0^{-1}\rbrace\Vert f\Vert _{L_\infty}\Vert f^{-1}\Vert _{L_\infty}(1+\vert \tau\vert), \end{split} \end{equation} $\varepsilon >0$, $\tau\in\mathbb{R}$, $\mathbf{k}\in\mathrm{clos}\,\widetilde{\Omega}$.
Now we proceed to estimating the operator under the norm sign in \eqref{II-2} for $\vert\mathbf{k}\vert >t_0$. By \eqref{R(k)P} and the elementary inequality $t^2+\varepsilon ^2 \geqslant 2\varepsilon t> 2\varepsilon {t}_0$, $t >{t}_0$, we have \begin{equation} \label{analog of 7.9} \Vert \mathcal{R}(\mathbf{k},\varepsilon)\widehat{P}\Vert _{L_2(\Omega)\rightarrow L_2(\Omega)} \leqslant(2t_0)^{-1}\varepsilon ,\quad\varepsilon >0,\quad \mathbf{k}\in\mathrm{clos}\,\widetilde{\Omega},\quad\vert \mathbf{k}\vert >t_0. \end{equation} By \eqref{A(k) and hat A(k)} and \eqref{analog of 7.9}, \begin{equation} \label{8.3 A} \begin{split} \Vert &\widehat{\mathcal{A}}(\mathbf{k})^{1/2}f\mathcal{A}(\mathbf{k})^{-1/2}\sin (\varepsilon ^{-1}\tau \mathcal{A}(\mathbf{k})^{1/2})f^{-1}\mathcal{R}(\mathbf{k},\varepsilon)\widehat{P}\Vert _{L_2(\Omega)\rightarrow L_2(\Omega)} \\ &=\Vert \sin (\varepsilon ^{-1}\tau \mathcal{A}(\mathbf{k})^{1/2})f^{-1}\mathcal{R}(\mathbf{k},\varepsilon)\widehat{P}\Vert _{L_2(\Omega)\rightarrow L_2(\Omega)}\\ &\leqslant \varepsilon (2t_0)^{-1}\Vert f ^{-1}\Vert _{L_\infty}, \quad \varepsilon >0,\quad\tau\in\mathbb{R},\quad\mathbf{k}\in\mathrm{clos}\,\widetilde{\Omega},\quad \vert \mathbf{k}\vert > t_0. \end{split} \end{equation}
From \eqref{g^0<=}, \eqref{f0<=}, \eqref{A0(k)= no hat}, and \eqref{analog of 7.9} it follows that \begin{equation} \label{8.3 B} \begin{split} \Vert &\widehat{\mathcal{A}}(\mathbf{k})^{1/2}f_0\mathcal{A}^0(\mathbf{k})^{-1/2}\sin (\varepsilon ^{-1}\tau \mathcal{A}^0(\mathbf{k})^{1/2})f_0 ^{-1}\mathcal{R}(\mathbf{k},\varepsilon)\widehat{P}\Vert _{L_2(\Omega)\rightarrow L_2(\Omega)} \\ &\leqslant\varepsilon (2t_0)^{-1}\Vert g^{1/2}b(\mathbf{D}+\mathbf{k})f_0\sin (\varepsilon ^{-1}\tau \mathcal{A}^0(\mathbf{k})^{1/2})\mathcal{A}^0(\mathbf{k})^{-1/2}f_0 ^{-1}\widehat{P}\Vert _{L_2(\Omega)\rightarrow L_2(\Omega)} \\ &\leqslant \varepsilon (2 t_0)^{-1}\Vert g\Vert ^{1/2}_{L_\infty}\Vert g^{-1}\Vert ^{1/2}_{L_\infty}\Vert \mathcal{A}^0(\mathbf{k})^{1/2}\sin (\varepsilon ^{-1}\tau \mathcal{A}^0(\mathbf{k})^{1/2})\mathcal{A}^0(\mathbf{k})^{-1/2}f_0^{-1} \widehat{P}\Vert _{L_2(\Omega)\rightarrow L_2(\Omega)} \\ &\leqslant \varepsilon (2 t_0)^{-1}\Vert g\Vert ^{1/2}_{L_\infty}\Vert g^{-1}\Vert ^{1/2}_{L_\infty}\Vert f ^{-1}\Vert _{L_\infty}, \quad\varepsilon >0,\quad\tau\in\mathbb{R},\quad \mathbf{k}\in\mathrm{clos}\,\widetilde{\Omega},\quad\vert \mathbf{k}\vert >t_0. \end{split} \end{equation}
Next, we have \begin{equation*} \begin{split} &\widehat{\mathcal{A}}(\mathbf{k})^{1/2}\Lambda b(\mathbf{D}+\mathbf{k})f_0\mathcal{A}^0(\mathbf{k})^{-1/2}\sin (\varepsilon ^{-1}\tau \mathcal{A}^0(\mathbf{k})^{1/2})f_0 ^{-1}\mathcal{R}(\mathbf{k},\varepsilon)\widehat{P} \\ &=\left(\widehat{\mathcal{A}}(\mathbf{k})^{1/2}\Lambda \widehat{P}_m\right)b(\mathbf{D}+\mathbf{k})f_0\mathcal{A}^0(\mathbf{k})^{-1/2}\sin (\varepsilon ^{-1}\tau \mathcal{A}^0(\mathbf{k})^{1/2})f_0 ^{-1}\mathcal{R}(\mathbf{k},\varepsilon)\widehat{P}, \end{split} \end{equation*} where $\widehat{P}_m$ is the orthogonal projection of the space $\mathfrak{H}_*=L_2(\Omega;\mathbb{C}^m)$ onto the subspace of constants. According to \cite[(6.22)]{BSu06}, \begin{equation} \label{7.11 c} \Vert \widehat{\mathcal{A}}(\mathbf{k})^{1/2}\Lambda \widehat{P}_m\Vert _{L_2(\Omega)\rightarrow L_2(\Omega)}\leqslant C_\Lambda ,\quad\mathbf{k}\in\widetilde{\Omega}, \end{equation} where the constant $C_\Lambda$ depends only on $m$, $\alpha _0$, $\alpha _1$, $\Vert g\Vert _{L_\infty}$, $\Vert g^{-1}\Vert _{L_\infty}$, and the parameters of the lattice $\Gamma$.
By \eqref{g^0<=}, \eqref{f0<=}, \eqref{A0(k)= no hat}, \eqref{analog of 7.9}, and \eqref{7.11 c}, \begin{equation} \label{8.3 C} \begin{split} \Vert &\widehat{\mathcal{A}}(\mathbf{k})^{1/2}\Lambda b(\mathbf{D}+\mathbf{k})f_0\mathcal{A}^0(\mathbf{k})^{-1/2}\sin (\varepsilon ^{-1}\tau \mathcal{A}^0(\mathbf{k})^{1/2})f_0 ^{-1}\mathcal{R}(\mathbf{k},\varepsilon)\widehat{P}\Vert _{L_2(\Omega)\rightarrow L_2(\Omega)} \\ &\leqslant C_\Lambda \Vert g^{-1}\Vert ^{1/2}_{L_\infty}\Vert f ^{-1}\Vert _{L_\infty} (2t_0)^{-1}\varepsilon, \quad\varepsilon >0,\quad\tau\in\mathbb{R},\quad\mathbf{k}\in\mathrm{clos}\,\widetilde{\Omega},\quad \vert \mathbf{k}\vert > t_0. \end{split} \end{equation}
Combining \eqref{II-2}, \eqref{8.3 A}, \eqref{8.3 B}, and \eqref{8.3 C}, we conclude that \begin{equation} \label{8.3 with C11} \begin{split} \Bigl\Vert &\widehat{\mathcal{A}}(\mathbf{k})^{1/2}\Bigl( f\mathcal{A}(\mathbf{k})^{-1/2}\sin (\varepsilon ^{-1}\tau \mathcal{A}(\mathbf{k})^{1/2})f^{-1} \\ &- (I+\Lambda b(\mathbf{D}+\mathbf{k}))f_0\mathcal{A}^0(\mathbf{k})^{-1/2}\sin (\varepsilon ^{-1}\tau \mathcal{A}^0(\mathbf{k})^{1/2})f_0^{-1} \Bigr)\mathcal{R}(\mathbf{k},\varepsilon)\widehat{P}\Bigr\Vert _{L_2(\Omega)\rightarrow L_2(\Omega)} \\ &\leqslant {C}_{11}\varepsilon (1+\vert \tau\vert ),\quad \varepsilon >0,\quad\tau\in\mathbb{R},\quad\mathbf{k}\in\mathrm{clos}\,\widetilde{\Omega}. \end{split} \end{equation} Here ${C}_{11}:=\max\left\lbrace {C}_{10}; (2t_0)^{-1}\Vert f ^{-1}\Vert _{L_\infty}\left(1+\Vert g\Vert ^{1/2}_{L_\infty}\Vert g^{-1}\Vert ^{1/2}_{L_\infty}+C_\Lambda \Vert g^{-1}\Vert ^{1/2}_{L_\infty}\right)\right\rbrace$.
\subsection{Removal of the operator $\widehat{P}$} Now we show that, in the operator under the norm sign in \eqref{8.3_I}, the projection $\widehat{P}$ can be replaced by the identity operator. After such replacement, only the constant in the estimate will be different. To show this, we estimate the norm of the operator $\mathcal{R}(\mathbf{k},\varepsilon)^{1/2}(I-\widehat{P})$ by using the discrete Fourier transform: \begin{equation} \label{R(I-P)} \Vert \mathcal{R}(\mathbf{k},\varepsilon)^{1/2}(I-\widehat{P})\Vert _{L_2(\Omega)\rightarrow L_2(\Omega )} =\max _{0\neq \mathbf{b}\in\widetilde{\Gamma}}\varepsilon (\vert\mathbf{b}+\mathbf{k}\vert ^2+\varepsilon ^2)^{-1/2}\leqslant \varepsilon r_0^{-1},\quad \varepsilon >0,\quad\mathbf{k}\in\mathrm{clos}\,\widetilde{\Omega}. \end{equation} Next, applying the spectral theorem and the elementary inequality $\vert \sin x\vert/\vert x\vert \leqslant 1$, $x\in\mathbb{R}$, we conclude that \begin{equation} \label{A^-1-2sin <= grubo} \Vert {\mathcal{A}}(\mathbf{k})^{-1/2}\sin(\varepsilon ^{-1}\tau {\mathcal{A}}(\mathbf{k})^{1/2})\Vert _{L_2(\Omega)\rightarrow L_2(\Omega)} \leqslant \varepsilon ^{-1}\vert\tau\vert . \end{equation} Similarly, \begin{equation} \label{sin eff grubo} \Vert {\mathcal{A}}^0(\mathbf{k})^{-1/2}\sin(\varepsilon ^{-1}\tau {\mathcal{A}}^0(\mathbf{k})^{1/2})\Vert _{L_2(\Omega)\rightarrow L_2(\Omega)} \leqslant \varepsilon ^{-1}\vert\tau\vert . 
\end{equation} Bringing together \eqref{f0<=}, \eqref{R(I-P)}--\eqref{sin eff grubo}, we arrive at the estimate \begin{equation*} \begin{split} \Bigl\Vert&\Bigl( f\mathcal{A}(\mathbf{k})^{-1/2}\sin (\varepsilon ^{-1}\tau \mathcal{A}(\mathbf{k})^{1/2})f^{-1} -f_0\mathcal{A}^0(\mathbf{k})^{-1/2}\sin (\varepsilon ^{-1}\tau \mathcal{A}^0(\mathbf{k})^{1/2})f_0^{-1} \Bigr) \\ &\times \mathcal{R}(\mathbf{k},\varepsilon)^{1/2}(I-\widehat{P})\Bigr\Vert _{L_2(\Omega)\rightarrow L_2(\Omega)} \leqslant 2 r_0^{-1}\Vert f\Vert_{L_\infty}\Vert f^{-1}\Vert _{L_\infty}\vert\tau\vert . \end{split} \end{equation*} Combining this with \eqref{8.3_I}, we see that \begin{equation} \label{8.4_0} \begin{split} \Bigl\Vert&\Bigl( f\mathcal{A}(\mathbf{k})^{-1/2}\sin (\varepsilon ^{-1}\tau \mathcal{A}(\mathbf{k})^{1/2})f^{-1} -f_0\mathcal{A}^0(\mathbf{k})^{-1/2}\sin (\varepsilon ^{-1}\tau \mathcal{A}^0(\mathbf{k})^{1/2})f_0^{-1} \Bigr) \\ &\times \mathcal{R}(\mathbf{k},\varepsilon)^{1/2}\Bigr\Vert _{L_2(\Omega)\rightarrow L_2(\Omega)} \leqslant {C}_{12}(1+\vert \tau\vert ),\quad \varepsilon >0,\quad \tau\in\mathbb{R},\quad \mathbf{k}\in\mathrm{clos}\,\widetilde{\Omega}, \end{split} \end{equation} where $
{C}_{12}:=\left(2r_0^{-1}+\max\lbrace C_7;2c_*^{-1/2}t_0^{-1}\rbrace\right)\Vert f\Vert _{L_\infty}\Vert f^{-1}\Vert _{L_\infty}$.
Now, we show that the operator $\widehat{P}$ in the principal terms of approximation \eqref{8.3 with C11} can be removed. Let us estimate the operator $\mathcal{R}(\mathbf{k},\varepsilon)(I-\widehat{P})$ using the discrete Fourier transform: \begin{equation} \label{R(I-P) for corr} \Vert \mathcal{R}(\mathbf{k},\varepsilon)(I-\widehat{P})\Vert _{L_2(\Omega)\rightarrow L_2(\Omega)} =\max _{0\neq\mathbf{b}\in\widetilde{\Gamma}}\varepsilon ^2 (\vert \mathbf{b}+\mathbf{k}\vert ^2+\varepsilon ^2)^{-1}\leqslant \varepsilon r_0^{-1},\quad \varepsilon >0,\quad\mathbf{k}\in\mathrm{clos}\,\widetilde{\Omega}. \end{equation} By
\eqref{A(k) and hat A(k)} and \eqref{R(I-P) for corr}, \begin{equation} \label{8.4 I} \begin{split} \Vert &\widehat{\mathcal{A}}(\mathbf{k})^{1/2}f \mathcal{A}(\mathbf{k})^{-1/2}\sin (\varepsilon^{-1}\tau \mathcal{A}(\mathbf{k})^{1/2})f^{-1}\mathcal{R}(\mathbf{k},\varepsilon)(I-\widehat{P})\Vert _{L_2(\Omega)\rightarrow L_2(\Omega)} \\ &=\Vert \sin (\varepsilon^{-1}\tau \mathcal{A}(\mathbf{k})^{1/2})f^{-1}\mathcal{R}(\mathbf{k},\varepsilon)(I-\widehat{P})\Vert _{L_2(\Omega)\rightarrow L_2(\Omega)} \\ &\leqslant \Vert f^{-1}\Vert _{L_\infty}\varepsilon r_0^{-1},\quad \varepsilon >0,\quad \tau\in\mathbb{R},\quad \mathbf{k}\in\mathrm{clos}\,\widetilde{\Omega}. \end{split} \end{equation} Next, by \eqref{hat A(k)=}, \eqref{g^0<=}, \eqref{f0<=}, \eqref{A0(k)= no hat}, and \eqref{R(I-P) for corr}, \begin{equation} \label{8.4 II} \begin{split} \Vert &\widehat{\mathcal{A}}(\mathbf{k})^{1/2}f_0\mathcal{A}^0(\mathbf{k})^{-1/2}\sin (\varepsilon ^{-1}\tau \mathcal{A}^0(\mathbf{k})^{1/2})f_0^{-1}\mathcal{R}(\mathbf{k},\varepsilon)(I-\widehat{P})\Vert_{L_2(\Omega)\rightarrow L_2(\Omega)} \\ &\leqslant \Vert g\Vert ^{1/2}_{L_\infty}\Vert g^{-1}\Vert^{1/2}_{L_\infty}\Vert f ^{-1}\Vert_{L_\infty}\varepsilon r_0^{-1}, \quad\varepsilon >0,\quad\tau\in\mathbb{R},\quad\mathbf{k}\in\mathrm{clos}\,\widetilde{\Omega}. \end{split} \end{equation} Combining \eqref{8.3 with C11}, \eqref{8.4 I}, and \eqref{8.4 II}, we have \begin{equation} \label{8.4 III} \begin{split} \Bigl\Vert &\widehat{\mathcal{A}}(\mathbf{k})^{1/2}\Bigl( f\mathcal{A}(\mathbf{k})^{-1/2}\sin (\varepsilon ^{-1}\tau \mathcal{A}(\mathbf{k})^{1/2})f^{-1} \\ &- (I+\Lambda b(\mathbf{D}+\mathbf{k})\widehat{P}) f_0 \mathcal{A}^0(\mathbf{k})^{-1/2}\sin (\varepsilon ^{-1}\tau \mathcal{A}^0(\mathbf{k})^{1/2})f_0^{-1} \Bigr) \mathcal{R}(\mathbf{k},\varepsilon)\Bigr\Vert _{L_2(\Omega)\rightarrow L_2(\Omega)} \\ &\leqslant
{C}_{13}\varepsilon (1+\vert \tau\vert ), \quad\varepsilon>0,\quad \tau\in\mathbb{R},\quad \mathbf{k}\in\mathrm{clos}\,\widetilde{\Omega}, \end{split} \end{equation} where $ {C}_{13}:= {C}_{11}+r_0^{-1}\Vert f ^{-1}\Vert _{L_\infty}(1+\Vert g\Vert^{1/2}_{L_\infty}\Vert g^{-1}\Vert ^{1/2}_{L_\infty})$.
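For the reader's convenience, we record the elementary inequalities behind \eqref{R(I-P)} and \eqref{R(I-P) for corr}. Since $\vert \mathbf{b}+\mathbf{k}\vert \geqslant r_0$ for $0\neq\mathbf{b}\in\widetilde{\Gamma}$ and $\mathbf{k}\in\mathrm{clos}\,\widetilde{\Omega}$, putting $x=\vert \mathbf{b}+\mathbf{k}\vert$ we have
\begin{equation*}
\frac{\varepsilon}{(x^2+\varepsilon ^2)^{1/2}}\leqslant\frac{\varepsilon}{x}\leqslant\frac{\varepsilon}{r_0},
\qquad
\frac{\varepsilon ^2}{x^2+\varepsilon ^2}\leqslant\frac{\varepsilon ^2}{2x\varepsilon}=\frac{\varepsilon}{2x}\leqslant\frac{\varepsilon}{r_0},
\end{equation*}
where the second bound uses $x^2+\varepsilon ^2\geqslant 2x\varepsilon$.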
\section{Approximation of the sandwiched operator $\mathcal{A}^{-1/2}\sin (\varepsilon ^{-1}\tau\mathcal{A}^{1/2})$}
\label{Section 10}
\subsection{The operators $\mathcal{A}$, $\mathcal{A}^0$, $\mathcal{R}(\varepsilon)$, and $\Pi$} \label{Subsection 8/1 NEW}
Let $\mathcal{A}$ and $\mathcal{A}^0$ be the operators \eqref{A=} and \eqref{A0 no hat}, respectively, acting in $L_2(\mathbb{R}^d;\mathbb{C}^n)$. Recall the notation $\mathcal{H}_0=-\Delta$ and put $ \mathcal{R}(\varepsilon):=\varepsilon ^2(\mathcal{H}_0+\varepsilon ^2 I)^{-1}$. Using the Gelfand transformation, we decompose this operator into the direct integral of the operators
\eqref{R(k,eps)}: \begin{equation} \label{9.0} \mathcal{R}(\varepsilon)=\mathcal{U}^{-1}\left(\int _{\widetilde{\Omega}}\oplus \mathcal{R}(\mathbf{k},\varepsilon)\,d\mathbf{k}\right)\mathcal{U}. \end{equation}
In $L_2(\mathbb{R}^d;\mathbb{C}^n)$, we introduce the operator $\Pi :=\mathcal{U}^{-1}[\widehat{P}]\mathcal{U}$. Here $[\widehat{P}]$ is the projection in $\int _{\widetilde{\Omega}}\oplus L_2(\Omega;\mathbb{C}^n)\,d\mathbf{k}$ acting on fibers as the operator $\widehat{P}$ (see \eqref{P L2}). As was shown in \cite[(6.8)]{BSu05}, $\Pi$ is the pseudodifferential operator in $L_2(\mathbb{R}^d;\mathbb{C}^n)$ with the symbol $\chi _{\widetilde{\Omega}}(\boldsymbol{\xi})$, where $\chi _{\widetilde{\Omega}}$ is the characteristic function of the set~$\widetilde{\Omega}$. That is, $(\Pi \mathbf{u})(\mathbf{x})=(2\pi )^{-d/2}\int _{\widetilde{\Omega}}e^{i\langle\mathbf{x},\boldsymbol{\xi}\rangle}\widehat{\mathbf{u}}(\boldsymbol{\xi})\,d\boldsymbol{\xi}$, where $\widehat{\mathbf{u}}(\boldsymbol{\xi})$ is the Fourier image of the function $\mathbf{u}\in L_2(\mathbb{R}^d;\mathbb{C}^n)$.
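Note that $\Pi$ is the orthogonal projection of $L_2(\mathbb{R}^d;\mathbb{C}^n)$ onto the subspace of functions whose Fourier image is supported in $\mathrm{clos}\,\widetilde{\Omega}$: since $\chi _{\widetilde{\Omega}}^2=\chi _{\widetilde{\Omega}}=\overline{\chi _{\widetilde{\Omega}}}$, we have
\begin{equation*}
\Pi ^2=\Pi =\Pi ^*.
\end{equation*}
Being a Fourier multiplier, $\Pi$ commutes with $b(\mathbf{D})$, with $\mathcal{H}_0$, and with the operators $\mathcal{R}(\varepsilon)$.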
\begin{theorem} Under the assumptions of Subsection~\textnormal{\ref{Subsection 8/1 NEW}}, for $\varepsilon >0$ and $\tau\in\mathbb{R}$ we have \begin{align} \label{10.I} \begin{split} \Bigl\Vert &\Bigl( f \mathcal{A}^{-1/2}\sin (\varepsilon ^{-1}\tau\mathcal{A}^{1/2})f^{-1} -f_0(\mathcal{A}^0)^{-1/2}\sin (\varepsilon ^{-1}\tau(\mathcal{A}^0)^{1/2})f_0^{-1} \Bigr) \mathcal{R}(\varepsilon )^{1/2}\Bigr\Vert _{L_2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)} \\ &\leqslant {C}_{12}(1+\vert \tau\vert), \end{split} \\ \label{10.II} \begin{split} \Bigl\Vert & \widehat{\mathcal{A}}^{1/2} \Bigl( f \mathcal{A}^{-1/2}\sin (\varepsilon ^{-1}\tau\mathcal{A}^{1/2})f^{-1}\\ &-(I+\Lambda b(\mathbf{D})\Pi )f_0(\mathcal{A}^0)^{-1/2}\sin (\varepsilon ^{-1}\tau (\mathcal{A}^0)^{1/2})f_0^{-1} \Bigr) \mathcal{R}(\varepsilon) \Bigr\Vert _{L_2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)} \leqslant {C}_{13}\varepsilon (1+\vert \tau\vert ). \end{split} \end{align} The constants $C_{12}$ and $C_{13}$ depend only on $m$, $\alpha_0$, $\alpha_1$, $\Vert g\Vert _{L_\infty}$, $\Vert g^{-1}\Vert _{L_\infty}$, $\Vert f\Vert _{L_\infty}$, $\Vert f^{-1}\Vert _{L_\infty}$, and the parameters of the lattice $\Gamma$. \end{theorem}
\begin{proof} By \eqref{5.7a}, the analogous identity for $\mathcal{A}^0$, and \eqref{9.0}, from \eqref{8.4_0} we deduce estimate \eqref{10.I}.
Similarly, applying the Gelfand transformation to \eqref{8.4 III}, we derive inequality \eqref{10.II}. \end{proof}
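In both steps we have used the standard property of decomposable operators: if $\mathcal{T}=\mathcal{U}^{-1}\left(\int _{\widetilde{\Omega}}\oplus\mathcal{T}(\mathbf{k})\,d\mathbf{k}\right)\mathcal{U}$, then
\begin{equation*}
\Vert \mathcal{T}\Vert _{L_2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)}
=\mathop{\mathrm{ess\,sup}}_{\mathbf{k}\in\widetilde{\Omega}}\Vert \mathcal{T}(\mathbf{k})\Vert _{L_2(\Omega)\rightarrow L_2(\Omega)},
\end{equation*}
so the fiberwise estimates \eqref{8.4_0} and \eqref{8.4 III}, being uniform in $\mathbf{k}\in\mathrm{clos}\,\widetilde{\Omega}$, carry over to the corresponding operators in $L_2(\mathbb{R}^d;\mathbb{C}^n)$.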
\subsection{Removal of the operator $\Pi$ in the corrector for $d\leqslant 4$}
Now, we show that the operator $\Pi$ in estimate \eqref{10.II} can be removed for $d\leqslant 4$.
\begin{theorem} \label{Theorem 8/1 NO PI} Under the assumptions of Subsection~\textnormal{\ref{Subsection 8/1 NEW}}, let $d\leqslant 4$. Then for $0<\varepsilon\leqslant 1$ and $\tau\in\mathbb{R}$ we have \begin{equation} \label{Th 8/1 NO PI} \begin{split} \Vert & \widehat{\mathcal{A}}^{1/2} \Bigl( f \mathcal{A}^{-1/2}\sin (\varepsilon ^{-1}\tau\mathcal{A}^{1/2})f^{-1}\\ &-(I+\Lambda b(\mathbf{D}) )f_0(\mathcal{A}^0)^{-1/2}\sin (\varepsilon ^{-1}\tau (\mathcal{A}^0)^{1/2})f_0^{-1} \Bigr) \mathcal{R}(\varepsilon) \Bigr\Vert _{L_2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)} \leqslant {C}_{14}\varepsilon (1+\vert \tau\vert ). \end{split} \end{equation} The constant $C_{14}$ depends only on $m$, $n$, $d$, $\alpha _0$, $\alpha _1$, $\Vert g\Vert _{L_\infty}$, $\Vert g^{-1}\Vert _{L_\infty}$, $\Vert f\Vert _{L_\infty}$, $\Vert f^{-1}\Vert _{L_\infty}$, and the parameters of the lattice $\Gamma$. \end{theorem}
To prove Theorem~\ref{Theorem 8/1 NO PI}, we need the following result; see \cite[Proposition 9.3]{Su_MMNP}.
\begin{proposition} \label{Proposition MMNP 9/3} Let $l=1$ for $d=1$, $l>1$ for $d=2$, and $l=d/2$ for $d\geqslant 3$. Then the operator $\widehat{\mathcal{A}}^{1/2}[\Lambda]$ is a continuous mapping of $H^l(\mathbb{R}^d;\mathbb{C}^m)$ to $L_2(\mathbb{R}^d;\mathbb{C}^n)$, and \begin{equation} \label{A1/2Lambda <=} \Vert \widehat{\mathcal{A}}^{1/2}[\Lambda]\Vert _{H^l(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)} \leqslant \mathcal{C}_d. \end{equation} Here the constant $\mathcal{C}_d$ depends only on $m$, $n$, $d$, $\alpha _0$, $\alpha _1$, $\Vert g\Vert _{L_\infty}$, $\Vert g^{-1}\Vert _{L_\infty}$, and the parameters of the lattice $\Gamma$\textnormal{;} for $d=2$ it depends also on $l$. \end{proposition}
\begin{proof}[Proof of Theorem \textnormal{\ref{Theorem 8/1 NO PI}}.] Taking into account that the matrix-valued function~\eqref{t2 M0 hat S M0=} is the symbol of the operator $\mathcal{A}^0$ and the function $\chi _{\widetilde{\Omega}}(\boldsymbol{\xi})$ is the symbol of $\Pi$, using \eqref{<b^*b<}, \eqref{f0<=}, and \eqref{f_0 dots >=} we have \begin{equation} \label{proof Th 8/1 no PI 1} \begin{split} \Vert &b(\mathbf{D})(I-\Pi)f_0(\mathcal{A}^0)^{-1/2}\sin (\varepsilon ^{-1}\tau (\mathcal{A}^0)^{1/2})f_0^{-1}\mathcal{R}(\varepsilon)\Vert _{L_2(\mathbb{R}^d)\rightarrow H^2(\mathbb{R}^d)} \\ &\leqslant \sup _{\boldsymbol{\xi}\in\mathbb{R}^d}(1+\vert \boldsymbol{\xi}\vert ^2)\vert b(\boldsymbol{\xi})\vert (1-\chi _{\widetilde{\Omega}}(\boldsymbol{\xi})) \vert f_0\vert \vert (f_0b(\boldsymbol{\xi})^*g^0b(\boldsymbol{\xi})f_0)^{-1/2}\vert \vert f_0^{-1}\vert\varepsilon ^2(\vert \boldsymbol{\xi}\vert ^2+\varepsilon ^2)^{-1} \\ &\leqslant \sup _{\vert \boldsymbol{\xi}\vert \geqslant r_0}(1+\vert \boldsymbol{\xi}\vert ^2)\alpha _1^{1/2}\vert \boldsymbol{\xi}\vert \Vert f\Vert _{L_\infty}c_*^{-1/2}\vert \boldsymbol{\xi}\vert ^{-1}\Vert f^{-1}\Vert _{L_\infty}\varepsilon ^2 (\vert \boldsymbol{\xi}\vert ^2 +\varepsilon ^2)^{-1} \\ &\leqslant\alpha _1^{1/2}c_*^{-1/2}\Vert f\Vert _{L_\infty}\Vert f^{-1}\Vert _{L_\infty}\varepsilon ^2 \sup _{\vert \boldsymbol{\xi}\vert \geqslant r_0}(1+\vert \boldsymbol{\xi}\vert ^2)\vert \boldsymbol{\xi}\vert ^{-2} \\ &\leqslant \alpha _1^{1/2}c_*^{-1/2}\Vert f\Vert _{L_\infty}\Vert f^{-1}\Vert _{L_\infty}(r_0^{-2}+1)\varepsilon ^2. \end{split} \end{equation} For $d\leqslant 4$, we can take $l\leqslant 2$ in Proposition~\ref{Proposition MMNP 9/3}. 
So, combining \eqref{A1/2Lambda <=} and \eqref{proof Th 8/1 no PI 1}, we have \begin{equation*} \Vert \widehat{\mathcal{A}}^{1/2}[\Lambda]b(\mathbf{D})(I-\Pi)f_0(\mathcal{A}^0)^{-1/2}\sin (\varepsilon ^{-1}\tau(\mathcal{A}^0)^{1/2})f_0^{-1}\mathcal{R}(\varepsilon)\Vert _{L_2(\mathbb{R}^d)\rightarrow L_2 (\mathbb{R}^d)} \leqslant \varepsilon ^2C_{14}', \end{equation*} $C_{14}':=\alpha _1 ^{1/2}c_*^{-1/2}(r_0^{-2}+1)\mathcal{C}_d\Vert f\Vert _{L_\infty}\Vert f^{-1}\Vert _{L_\infty}$. Combining this with \eqref{10.II}, we arrive at estimate \eqref{Th 8/1 NO PI} with $C_{14}=C_{13}+C_{14}'$. \end{proof}
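We note the embedding used in the last step: for $d\leqslant 4$ we have $l\leqslant 2$, so $\Vert \mathbf{w}\Vert _{H^l(\mathbb{R}^d)}\leqslant\Vert \mathbf{w}\Vert _{H^2(\mathbb{R}^d)}$, and \eqref{A1/2Lambda <=} implies
\begin{equation*}
\Vert \widehat{\mathcal{A}}^{1/2}[\Lambda ]\mathbf{w}\Vert _{L_2(\mathbb{R}^d)}\leqslant\mathcal{C}_d\Vert \mathbf{w}\Vert _{H^2(\mathbb{R}^d)},\quad \mathbf{w}\in H^2(\mathbb{R}^d;\mathbb{C}^m).
\end{equation*}
It remains to combine this with the $(L_2\rightarrow H^2)$-estimate \eqref{proof Th 8/1 no PI 1}.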
\subsection{On the possibility of removal of the operator $\Pi$ from the corrector. Sufficient conditions on $\Lambda$}
It is possible to eliminate the operator $\Pi$ for $d\geqslant 5$ by imposing the following assumption on the matrix-valued function $\Lambda$.
\begin{condition} \label{Condition Lambda multiplier} The operator $[\Lambda ]$ is continuous from $H^2(\mathbb{R}^d;\mathbb{C}^m)$ to $H^1(\mathbb{R}^d;\mathbb{C}^n)$. \end{condition}
In fact, to remove $\Pi$ for $d\geqslant 5$ it suffices to impose the following condition.
\begin{condition} \label{Condition Lambda in Ld}
Assume that the periodic solution $\Lambda$ of problem \eqref{Lambda problem} belongs to $L_d(\Omega)$. \end{condition}
\begin{proposition} \label{Proposition Ld implies multiplier} For $d\geqslant 3$, Condition~\textnormal{\ref{Condition Lambda in Ld}} implies Condition~\textnormal{\ref{Condition Lambda multiplier}}. \end{proposition}
To prove Proposition~\ref{Proposition Ld implies multiplier}, we need the following statement.
\begin{lemma} \label{Proposition d>=3 Lambda}
Let $d\geqslant 3$. Assume that Condition~\textnormal{\ref{Condition Lambda in Ld}} is satisfied.
Then the operator $g^{1/2}b(\mathbf{D})[\Lambda ]$ is a continuous mapping of $H^2(\mathbb{R}^d;\mathbb{C}^m)$ to $L_2(\mathbb{R}^d;\mathbb{C}^m)$ and
\begin{equation}
\label{g1/2b(D)Lambda H2 to L2 Lm} \Vert g^{1/2}b(\mathbf{D})[\Lambda ]\Vert _{H^2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)}\leqslant \mathfrak{C}_\Lambda .
\end{equation}
The constant $\mathfrak{C}_\Lambda$ depends only on $d$, $\alpha _0$, $\alpha _1$, $\Vert g\Vert _{L_\infty}$, $\Vert g^{-1}\Vert _{L_\infty}$, $\Vert \Lambda\Vert _{L_d(\Omega)}$, and the parameters of the lattice $\Gamma$. \end{lemma}
\begin{proof} The proof is quite similar to the proof of Proposition 8.8 from \cite{SuAA10}.
Let $\mathbf{v}_j(\mathbf{x})$, $j=1,\dots,m$, be the columns of the matrix $\Lambda (\mathbf{x})$. In other words, $\mathbf{v}_j$ is the $\Gamma$-periodic solution of the problem \begin{equation} \label{v_j columns Lambda problem} b(\mathbf{D})^*g(\mathbf{x})\left( b(\mathbf{D})\mathbf{v}_j(\mathbf{x})+\mathbf{e}_j\right)=0,\quad \int _\Omega \mathbf{v}_j(\mathbf{x})\,d\mathbf{x}=0. \end{equation} Here $\lbrace \mathbf{e}_j\rbrace _{j=1}^m$ is the standard orthonormal basis in $\mathbb{C}^m$. Let ${u}\in H^2 (\mathbb{R}^d)$. Then \begin{equation} \label{v_ju tozd} g^{1/2}b(\mathbf{D})(\mathbf{v}_j u)=g^{1/2}\left(b(\mathbf{D})\mathbf{v}_j\right)u +\sum _{l=1}^d g^{1/2}b_l (D_lu)\mathbf{v}_j. \end{equation} We estimate the second term on the right-hand side of \eqref{v_ju tozd}: \begin{equation} \label{v_ju tozd second summand} \left\Vert \sum _{l=1}^d g^{1/2}b_l (D_lu)\mathbf{v}_j\right\Vert _{L_2(\mathbb{R}^d)} \leqslant\Vert g\Vert ^{1/2}_{L_\infty}\alpha _1^{1/2}d^{1/2}\left(\int _{\mathbb{R}^d}\vert \mathbf{D}u\vert ^2\vert \mathbf{v}_j\vert ^2\,d\mathbf{x}\right)^{1/2}. \end{equation} Next, \begin{equation} \label{v_ju tozd second summand est start} \int _{\mathbb{R}^d}\vert \mathbf{D}u\vert ^2\vert \mathbf{v}_j\vert ^2\,d\mathbf{x} =\sum _{\mathbf{a}\in\Gamma}\int _{\Omega +\mathbf{a}}\vert \mathbf{D}u\vert ^2\vert \mathbf{v}_j\vert ^2\,d\mathbf{x}. \end{equation} By the H\"older inequality with indices $s=d/2$ and $s'=d/(d-2)$, \begin{equation} \int _{\Omega +\mathbf{a}}\vert \mathbf{D}u\vert ^2\vert \mathbf{v}_j\vert ^2\,d\mathbf{x} \leqslant \left(\int _\Omega \vert \mathbf{v}_j\vert ^d\,d\mathbf{x}\right)^{2/d} \left(\int _{\Omega +\mathbf{a}}\vert \mathbf{D}u\vert ^{2d/(d-2)}\,d\mathbf{x}\right)^{(d-2)/d}. 
\end{equation} By the continuous embedding $H^1(\Omega)\hookrightarrow L_{2d/(d-2)}(\Omega)$, \begin{equation} \label{v_ju tozd second summand est fin} \left(\int _{\Omega +\mathbf{a}}\vert \mathbf{D}u\vert ^{2d/(d-2)}\,d\mathbf{x}\right)^{(d-2)/2d} \leqslant C_\Omega \Vert \mathbf{D}u\Vert _{H^1(\Omega +\mathbf{a})}. \end{equation} The embedding constant $C_\Omega$ depends only on $d$ and $\Omega$ (i. e., on the lattice $\Gamma$). From \eqref{v_ju tozd second summand est start}--\eqref{v_ju tozd second summand est fin} it follows that \begin{equation} \label{Du^2vj^2 <=} \int _{\mathbb{R}^d}\vert \mathbf{D}u\vert ^2\vert \mathbf{v}_j\vert ^2\,d\mathbf{x} \leqslant C_\Omega ^2 \Vert \mathbf{v}_j\Vert ^2 _{L_d(\Omega)}\Vert u\Vert ^2_{H^2(\mathbb{R}^d)}. \end{equation} Using \eqref{v_ju tozd second summand}, from \eqref{Du^2vj^2 <=} we derive the estimate \begin{equation} \label{8.13a new} \left\Vert \sum _{l=1}^d g^{1/2}b_l (D_lu)\mathbf{v}_j\right\Vert _{L_2(\mathbb{R}^d)} \leqslant \Vert g\Vert ^{1/2}_{L_\infty}\alpha _1^{1/2}d^{1/2}C_\Omega\Vert \mathbf{v}_j\Vert _{L_d(\Omega)}\Vert u\Vert _{H^2(\mathbb{R}^d)}. \end{equation}
Next, equation \eqref{v_j columns Lambda problem} implies that \begin{equation} \label{vj int tozd} \int _{\mathbb{R}^d}\Bigl(\langle g(\mathbf{x})b(\mathbf{D})\mathbf{v}_j,b(\mathbf{D})\mathbf{w}\rangle +\sum _{l=1}^d\langle b_l^* g(\mathbf{x})\mathbf{e}_j,D_l\mathbf{w}\rangle\Bigr)\,d\mathbf{x}=0 \end{equation} for any $\mathbf{w}\in H^1(\mathbb{R}^d;\mathbb{C}^n)$ such that $\mathbf{w}(\mathbf{x})=0$ for $\vert \mathbf{x}\vert >R$ (with some $R>0$).
Let $u\in C_0^\infty (\mathbb{R}^d)$. We put $\mathbf{w}(\mathbf{x})=\vert u(\mathbf{x})\vert ^2\mathbf{v}_j(\mathbf{x})$. Then \begin{equation*} b(\mathbf{D})\mathbf{w}=\vert u\vert ^2 b(\mathbf{D})\mathbf{v}_j+\sum _{l=1}^d b_l (D_l \vert u\vert ^2)\mathbf{v}_j. \end{equation*} Substituting this expression into \eqref{vj int tozd}, we obtain \begin{equation*} \int _{\mathbb{R}^d}\Bigl(\langle g(\mathbf{x})b(\mathbf{D})\mathbf{v}_j,\vert u\vert ^2b(\mathbf{D})\mathbf{v}_j+\sum _{l=1}^d b_l (D_l\vert u\vert ^2)\mathbf{v}_j\rangle +\sum _{l=1}^d\langle b_l^* g(\mathbf{x})\mathbf{e}_j,D_l(\vert u\vert ^2 \mathbf{v}_j)\rangle\Bigr)\,d\mathbf{x}=0. \end{equation*} Hence, \begin{equation} \label{J0=} J_0:=\int _{\mathbb{R}^d}\vert g^{1/2}b(\mathbf{D})\mathbf{v}_j\vert ^2 \vert u\vert ^2\,d\mathbf{x}=J_1+J_2, \end{equation} where \begin{align*} J_1&=-\int _{\mathbb{R}^d}\langle g^{1/2}b(\mathbf{D})\mathbf{v}_j,\sum _{l=1}^d g^{1/2}b_l(D_l\vert u\vert ^2)\mathbf{v}_j\rangle\,d\mathbf{x}, \\ J_2&=-\int _{\mathbb{R}^d}\sum _{l=1}^d\langle b_l^*g(\mathbf{x})\mathbf{e}_j,D_l(\vert u\vert ^2\mathbf{v}_j)\rangle\,d\mathbf{x} = -\int _{\mathbb{R}^d}\sum _{l=1}^d\langle b_l^*g(\mathbf{x})\mathbf{e}_j,D_l(\mathbf{v}_ju)u^*+\mathbf{v}_ju(D_lu^*)\rangle\,d\mathbf{x}. \end{align*} By \eqref{b_j <=}, \begin{equation*} \begin{split} \vert J_1\vert &\leqslant \Vert g\Vert _{L_\infty}^{1/2}\alpha _1 ^{1/2}d^{1/2}\int _{\mathbb{R}^d} 2\vert g^{1/2}b(\mathbf{D})\mathbf{v}_j\vert \vert {u}\vert \vert \mathbf{D}u\vert \vert \mathbf{v}_j\vert \,d\mathbf{x} \\ &\leqslant \frac{1}{2}\int _{\mathbb{R}^d}\vert g^{1/2}b(\mathbf{D})\mathbf{v}_j\vert ^2\vert u\vert ^2\,d\mathbf{x} +2\Vert g\Vert _{L_\infty}\alpha _1d\int _{\mathbb{R}^d}\vert \mathbf{D}u\vert ^2\vert \mathbf{v}_j\vert ^2\,d\mathbf{x}. 
\end{split} \end{equation*} Combining this with \eqref{Du^2vj^2 <=}, we see that \begin{equation} \label{J1 estimate} \vert J_1\vert \leqslant\frac{1}{2}J_0+2\Vert g\Vert _{L_\infty}\alpha _1 dC_\Omega^2\Vert \mathbf{v}_j\Vert ^2 _{L_d(\Omega)}\Vert u\Vert ^2_{H^2(\mathbb{R}^d)}. \end{equation}
Now we proceed to estimating the term $J_2$. By \eqref{b_j <=}, \begin{equation*} \int _{\mathbb{R}^d}\vert b_l^* g(\mathbf{x})\mathbf{e}_j\vert ^2\vert u\vert^2\,d\mathbf{x} \leqslant \alpha _1\Vert g\Vert ^2_{L_\infty}\Vert u\Vert ^2_{L_2(\mathbb{R}^d)}. \end{equation*} Then \begin{equation*} \begin{split} \vert J_2\vert &\leqslant \sum _{l=1}^d\Vert ub_l^*g\mathbf{e}_j\Vert _{L_2(\mathbb{R}^d)}\left(\Vert D_l(\mathbf{v}_ju)\Vert _{L_2(\mathbb{R}^d)} +\Vert \mathbf{v}_j(D_lu^*)\Vert _{L_2(\mathbb{R}^d)}\right) \\ &\leqslant \mu \Vert \mathbf{D}(\mathbf{v}_ju)\Vert ^2 _{L_2(\mathbb{R}^d)} +(4^{-1}+(4\mu)^{-1})d\alpha _1\Vert g\Vert ^2 _{L_\infty}\Vert u\Vert ^2 _{L_2(\mathbb{R}^d)} + \int _{\mathbb{R}^d}\vert \mathbf{v}_j\vert ^2\vert\mathbf{D}u^*\vert ^2\,d\mathbf{x} \end{split} \end{equation*} for any $\mu >0$. By \eqref{Du^2vj^2 <=}, \begin{equation} \label{J2 estimate} \vert J_2\vert \leqslant \mu \Vert \mathbf{D}(\mathbf{v}_ju)\Vert ^2 _{L_2(\mathbb{R}^d)}+ \left( (4^{-1}+ (4\mu)^{-1})d\alpha _1\Vert g\Vert ^2_{L_\infty} + C_\Omega ^2 \Vert \mathbf{v}_j\Vert ^2_{L_d(\Omega)}\right)\Vert u\Vert ^2_{H^2(\mathbb{R}^d)}. \end{equation}
Now, relations \eqref{J0=}, \eqref{J1 estimate}, and \eqref{J2 estimate} imply that \begin{equation} \label{1/2J0<=} \frac{1}{2}J_0\leqslant \mu \Vert \mathbf{D}(\mathbf{v}_ju)\Vert ^2 _{L_2(\mathbb{R}^d)} +\left((2\Vert g\Vert _{L_\infty}\alpha _1d+1)C_\Omega ^2 \Vert \mathbf{v}_j\Vert ^2_{L_d(\Omega)} +\left(\frac{1}{4}+\frac{1}{4\mu}\right)d\alpha _1\Vert g\Vert ^2_{L_\infty}\right)\Vert u\Vert ^2_{H^2(\mathbb{R}^d)}. \end{equation}
Comparing \eqref{v_ju tozd}, \eqref{8.13a new}, \eqref{J0=}, and \eqref{1/2J0<=}, we obtain \begin{equation} \label{g^1/2b(D)(v_ju)<=} \begin{split} \Vert & g^{1/2}b(\mathbf{D})(\mathbf{v}_ju)\Vert ^2 _{L_2(\mathbb{R}^d)}
\leqslant 2J_0+2\Vert g\Vert _{L_\infty}\alpha_1 d C_\Omega ^2 \Vert \mathbf{v}_j\Vert ^2_{L_d(\Omega)}\Vert u\Vert ^2 _{H^2(\mathbb{R}^d)} \\ &\leqslant 4\mu \Vert \mathbf{D}(\mathbf{v}_ju)\Vert ^2 _{L_2(\mathbb{R}^d)} \\ &+\left( (10\Vert g\Vert _{L_\infty}\alpha_1 d +4)C_\Omega ^2 \Vert \mathbf{v}_j\Vert ^2 _{L_d(\Omega)}+(1+\mu ^{-1})d\alpha _1\Vert g\Vert ^2_{L_\infty}\right)\Vert u\Vert ^2_{H^2(\mathbb{R}^d)}. \end{split} \end{equation} By \eqref{<a<} (with $f=\mathbf{1}_n$), \begin{equation*} \begin{split} 4\mu \Vert \mathbf{D}(\mathbf{v}_ju)\Vert ^2 _{L_2(\mathbb{R}^d)} &\leqslant 4\mu \alpha _0^{-1}\Vert g^{-1}\Vert _{L_\infty}\Vert g^{1/2}b(\mathbf{D})(\mathbf{v}_ju)\Vert ^2_{L_2(\mathbb{R}^d)} \\ &=\frac{1}{2}\Vert g^{1/2}b(\mathbf{D})(\mathbf{v}_ju)\Vert ^2 _{L_2(\mathbb{R}^d)}\quad\mbox{for}\;\mu=\frac{1}{8}\alpha _0\Vert g^{-1}\Vert ^{-1}_{L_\infty}. \end{split} \end{equation*} Together with \eqref{g^1/2b(D)(v_ju)<=} this implies \begin{equation*} \Vert g^{1/2}b(\mathbf{D})(\mathbf{v}_ju)\Vert ^2 _{L_2(\mathbb{R}^d)}\leqslant\mathcal{C}_j^2\Vert u\Vert ^2_{H^2(\mathbb{R}^d)}, \end{equation*} where \begin{equation*} \mathcal{C}_j^2=(20\Vert g\Vert _{L_\infty}\alpha _1d+8)C_\Omega ^2 \Vert \mathbf{v}_j\Vert ^2_{L_d(\Omega)} +(2+16\alpha _0^{-1}\Vert g^{-1}\Vert _{L_\infty})d\alpha _1\Vert g\Vert ^2_{L_\infty}. \end{equation*} Thus, \begin{equation*} \Vert g^{1/2}b(\mathbf{D})[\mathbf{v}_j]\Vert _{H^2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)}\leqslant \mathcal{C}_j,\quad j=1,\dots,m, \end{equation*} whence \begin{equation*} \Vert g^{1/2}b(\mathbf{D})[\Lambda ]\Vert _{H^2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)}\leqslant\left(\sum _{j=1}^m \mathcal{C}_j^2\right)^{1/2}=:\mathfrak{C}_\Lambda; \end{equation*} i.~e., \eqref{g1/2b(D)Lambda H2 to L2 Lm} is true. \end{proof}
\begin{proof}[Proof of Proposition~\textnormal{\ref{Proposition Ld implies multiplier}}.] Let $u\in H^2(\mathbb{R}^d)$. Similarly to \eqref{v_ju tozd second summand est start}--\eqref{Du^2vj^2 <=}, \begin{equation*} \Vert\mathbf{v}_ju\Vert ^2 _{L_2(\mathbb{R}^d)}\leqslant {C}_\Omega ^2 \Vert \mathbf{v}_j\Vert ^2_{L_d(\Omega)}\Vert u\Vert ^2_{H^1(\mathbb{R}^d)}. \end{equation*} Here $\mathbf{v}_j(\mathbf{x})$, $j=1,\dots, m$, are the columns of the matrix $\Lambda (\mathbf{x})$. Thus, \begin{equation} \label{Proof of Proposition Ld implies multiplier 1} \Vert [\Lambda ]u\Vert ^2_{L_2(\mathbb{R}^d)}\leqslant {C}_\Omega ^2\sum _{j=1}^m\Vert \mathbf{v}_j\Vert ^2_{L_d(\Omega)}\Vert u\Vert ^2_{H^1(\mathbb{R}^d)}. \end{equation}
By \eqref{<a<} with $f=\mathbf{1}_n$, and Lemma~\ref{Proposition d>=3 Lambda}, \begin{equation} \label{Proof of Proposition Ld implies multiplier 2} \Vert \mathbf{D}[\Lambda ]u\Vert ^2_{L_2(\mathbb{R}^d)}\leqslant \alpha _0^{-1}\Vert g^{-1}\Vert _{L_\infty}\Vert g^{1/2}b(\mathbf{D})[\Lambda ] u\Vert ^2 _{L_2(\mathbb{R}^d)} \leqslant \alpha _0^{-1}\Vert g^{-1}\Vert _{L_\infty}\mathfrak{C}_\Lambda ^2\Vert u\Vert ^2_{H^2(\mathbb{R}^d)}. \end{equation}
Combining \eqref{Proof of Proposition Ld implies multiplier 1} and \eqref{Proof of Proposition Ld implies multiplier 2}, we obtain \begin{equation*} \Vert [\Lambda ]u\Vert ^2 _{H^1(\mathbb{R}^d)}\leqslant \left( {C}_\Omega ^2\sum _{j=1}^m \Vert \mathbf{v}_j\Vert ^2_{L_d(\Omega)} +\alpha _0^{-1}\Vert g^{-1}\Vert _{L_\infty}\mathfrak{C}_\Lambda ^2 \right)\Vert u\Vert ^2_{H^2(\mathbb{R}^d)},\quad u\in H^2(\mathbb{R}^d). \end{equation*} \end{proof}
\begin{theorem} \label{Theorem d>4 chapter 2} Let $d\geqslant 5$. Under Condition~\textnormal{\ref{Condition Lambda multiplier}}, for $0<\varepsilon\leqslant 1$ and $\tau\in\mathbb{R}$ we have \begin{equation} \label{Th d>4 Lambda multiplier} \begin{split} \Bigl\Vert & \widehat{\mathcal{A}}^{1/2} \Bigl( f \mathcal{A}^{-1/2}\sin (\varepsilon ^{-1}\tau\mathcal{A}^{1/2})f^{-1}\\ &-(I+\Lambda b(\mathbf{D}) )f_0(\mathcal{A}^0)^{-1/2}\sin (\varepsilon ^{-1}\tau (\mathcal{A}^0)^{1/2})f_0^{-1} \Bigr) \mathcal{R}(\varepsilon) \Bigr\Vert _{L_2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)} \leqslant {C}_{15}\varepsilon (1+\vert \tau\vert ). \end{split} \end{equation} The constant ${C}_{15}$ depends only on $m$, $\alpha _0$, $\alpha _1$, $\Vert g\Vert _{L_\infty}$, $\Vert g^{-1}\Vert _{L_\infty}$, $\Vert f\Vert _{L_\infty}$, $\Vert f^{-1}\Vert _{L_\infty}$, the parameters of the lattice $\Gamma$, and the norm $\Vert [\Lambda ]\Vert _{H^2(\mathbb{R}^d)\rightarrow H^1(\mathbb{R}^d)}$. \end{theorem}
\begin{proof} Under Condition~\ref{Condition Lambda multiplier}, by \eqref{<b^*b<}, \eqref{hat A}, and \eqref{proof Th 8/1 no PI 1}, we have \begin{equation*} \begin{split} \Vert& \widehat{\mathcal{A}}^{1/2}[\Lambda]b(\mathbf{D})(I-\Pi)f_0(\mathcal{A}^0)^{-1/2}\sin (\varepsilon ^{-1}\tau(\mathcal{A}^0)^{1/2})f_0^{-1}\mathcal{R}(\varepsilon)\Vert _{L_2(\mathbb{R}^d)\rightarrow L_2 (\mathbb{R}^d)} \\ &\leqslant \Vert g\Vert ^{1/2}_{L_\infty}\alpha _1^{1/2}\Vert \mathbf{D}[\Lambda ]\Vert _{H^2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)} \alpha _1^{1/2}c_*^{-1/2}\Vert f\Vert _{L_\infty}\Vert f^{-1}\Vert _{L_\infty}(r_0^{-2}+1)\varepsilon ^2 \\ &\leqslant C_{15}'\varepsilon ^2;\quad C_{15}':=\alpha _1 c_*^{-1/2} \Vert g\Vert ^{1/2}_{L_\infty}\Vert f\Vert _{L_\infty}\Vert f^{-1}\Vert _{L_\infty}\Vert [\Lambda ]\Vert _{H^2(\mathbb{R}^d)\rightarrow H^1(\mathbb{R}^d)}(r_0^{-2}+1). \end{split} \end{equation*} Combining this with \eqref{10.II}, we arrive at estimate \eqref{Th d>4 Lambda multiplier} with the constant $C_{15}:=C_{13}+C_{15}'$.
\end{proof}
For $d\geqslant 5$, removal of the operator $\Pi$ from the corrector can also be achieved by increasing the power of the operator $\mathcal{R}(\varepsilon)$. In the application to homogenization of the hyperbolic Cauchy problem, this corresponds to more restrictive assumptions on the regularity of the initial data.
The proof of the following result is quite similar to that of Theorem~\ref{Theorem 8/1 NO PI}.
\begin{proposition} \label{Proposition d>5 from H-kappa chapter 2} Let $d\geqslant 5$. Then for $\tau \in\mathbb{R}$, $0<\varepsilon\leqslant 1$, we have \begin{equation*} \begin{split} \Vert & \widehat{\mathcal{A}}^{1/2} \Bigl( f \mathcal{A}^{-1/2}\sin (\varepsilon ^{-1}\tau\mathcal{A}^{1/2})f^{-1}\\ &-(I+\Lambda b(\mathbf{D}) )f_0(\mathcal{A}^0)^{-1/2}\sin (\varepsilon ^{-1}\tau (\mathcal{A}^0)^{1/2})f_0^{-1} \Bigr) \mathcal{R}(\varepsilon)^{d/4} \Bigr\Vert _{L_2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)} \leqslant {C}_{16}\varepsilon (1+\vert \tau\vert ). \end{split} \end{equation*} The constant $C_{16}$ depends only on $m$, $n$, $d$, $\alpha _0$, $\alpha _1$, $\Vert g\Vert _{L_\infty}$, $\Vert g^{-1}\Vert _{L_\infty}$, $\Vert f\Vert _{L_\infty}$, $\Vert f^{-1}\Vert _{L_\infty}$, and the parameters of the lattice $\Gamma$. \end{proposition}
\section*{Chapter III. Homogenization problem for hyperbolic systems} \label{Chapter 3}
\section{Approximation of the sandwiched operator ${\mathcal{A}}_\varepsilon ^{-1/2}\sin(\tau {\mathcal{A}}_\varepsilon^{1/2})$}
\label{Section main results in general case}
For a $\Gamma$-periodic measurable function $\psi(\mathbf{x})$ in $\mathbb{R}^d$ we denote $\psi^\varepsilon (\mathbf{x}):=\psi (\varepsilon ^{-1}\mathbf{x})$, $\varepsilon >0$. Let $[\psi^\varepsilon]$ be the operator of multiplication by the function $\psi^\varepsilon (\mathbf{x})$. \textit{Our main object} is the operator $\mathcal{A}_\varepsilon$, $\varepsilon >0$, acting in $L_2(\mathbb{R}^d;\mathbb{C}^n)$ and formally given by the differential expression \begin{equation} \label{A_eps no hat} \mathcal{A}_\varepsilon =f^\varepsilon (\mathbf{x})^* b(\mathbf{D})^*g^\varepsilon (\mathbf{x})b(\mathbf{D})f^\varepsilon (\mathbf{x}). \end{equation} Denote \begin{equation} \label{A_eps} \widehat{\mathcal{A}}_\varepsilon =b(\mathbf{D})^*g^\varepsilon (\mathbf{x})b(\mathbf{D}). \end{equation} The precise definitions of these operators are given in terms of the corresponding quadratic forms. The coefficients of the operators \eqref{A_eps no hat} and \eqref{A_eps} oscillate rapidly as $\varepsilon \rightarrow 0$.
\textit{Our goal} is to approximate the sandwiched operator $\mathcal{A}_\varepsilon ^{-1/2}\sin(\tau \mathcal{A}_\varepsilon ^{1/2}) $. The results are applied to homogenization of the solutions of the Cauchy problem for hyperbolic systems.
\subsection{The principal term of approximation}
Let $T_\varepsilon$ be the \textit{unitary scaling transformation} in $L_2(\mathbb{R}^d;\mathbb{C}^n)$: $(T_\varepsilon\mathbf{u})(\mathbf{x}):=\varepsilon ^{d/2}\mathbf{u}(\varepsilon \mathbf{x})$, $\varepsilon >0$. Then $\mathcal{A}_\varepsilon =\varepsilon ^{-2}T_\varepsilon ^* \mathcal{A}T_\varepsilon $. Thus, \begin{equation*} \mathcal{A}_\varepsilon ^{-1/2}\sin (\tau \mathcal{A}_\varepsilon ^{1/2})=\varepsilon T_\varepsilon ^* \mathcal{A} ^{-1/2}\sin (\varepsilon ^{-1}\tau \mathcal{A}^{1/2})T_\varepsilon . \end{equation*} The operator $\mathcal{A}^0$ satisfies a similar identity. Next, \begin{equation*} (\mathcal{H}_0+I)^{-1/2}=\varepsilon T_\varepsilon ^*(\mathcal{H}_0 +\varepsilon ^2I)^{-1/2}T_\varepsilon =T_\varepsilon ^*\mathcal{R}(\varepsilon)^{1/2}T_\varepsilon . \end{equation*} Note that for any $s$ the operator $(\mathcal{H}_0+I)^{s/2}$ is an isometric isomorphism of the Sobolev space $H^s(\mathbb{R}^d;\mathbb{C}^n)$ onto $L_2(\mathbb{R}^d;\mathbb{C}^n)$. Indeed, for $\mathbf{u}\in H^s(\mathbb{R}^d;\mathbb{C}^n)$ we have \begin{equation} \label{isometria} \Vert (\mathcal{H}_0+I)^{s/2}\mathbf{u}\Vert ^2_{L_2(\mathbb{R}^d)} =\int _{\mathbb{R}^d}(\vert \boldsymbol{\xi}\vert ^2 +1)^s\vert \widehat{\mathbf{u}}(\boldsymbol{\xi})\vert ^2\,d\boldsymbol{\xi}=\Vert \mathbf{u}\Vert ^2_{H^s (\mathbb{R}^d)}. \end{equation} Using these arguments, from \eqref{10.I} we deduce the following result.
\begin{theorem} \label{Theorem 12.1} Let $\mathcal{A}_\varepsilon $ be the operator \eqref{A_eps no hat} and let $\mathcal{A}^0$ be the operator \eqref{A0 no hat}. Then for $\varepsilon >0$ and $\tau \in\mathbb{R}$ we have \begin{equation} \label{12.1} \begin{split} \Vert & f^\varepsilon\mathcal{A}_\varepsilon ^{-1/2}\sin (\tau \mathcal{A}_\varepsilon ^{1/2})(f^\varepsilon)^{-1}-f_0(\mathcal{A}^0)^{-1/2}\sin (\tau (\mathcal{A}^0)^{1/2})f_0^{-1} \Vert _{H^1(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)} \leqslant {C}_{12}\varepsilon (1+\vert \tau\vert ). \end{split} \end{equation} The constant ${C}_{12}$ is controlled in terms of $r_0$, $\alpha _0$, $\alpha _1$, $\Vert g\Vert _{L_\infty}$, $\Vert g^{-1}\Vert _{L_\infty}$, $\Vert f\Vert _{L_\infty}$, and $\Vert f^{-1}\Vert _{L_\infty}$. \end{theorem}
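We comment on the scaling identity used above. Since $T_\varepsilon$ is unitary and $\mathcal{A}_\varepsilon =\varepsilon ^{-2}T_\varepsilon ^*\mathcal{A}T_\varepsilon$, the spectral theorem yields $\varphi (\mathcal{A}_\varepsilon)=T_\varepsilon ^*\varphi (\varepsilon ^{-2}\mathcal{A})T_\varepsilon$ for any Borel function $\varphi$. Taking $\varphi (\lambda)=\lambda ^{-1/2}\sin (\tau\lambda ^{1/2})$ and noting that
\begin{equation*}
\varphi (\varepsilon ^{-2}\lambda)=(\varepsilon ^{-2}\lambda )^{-1/2}\sin \bigl(\tau (\varepsilon ^{-2}\lambda )^{1/2}\bigr)=\varepsilon \lambda ^{-1/2}\sin (\varepsilon ^{-1}\tau \lambda ^{1/2}),\quad\lambda >0,
\end{equation*}
we obtain $\mathcal{A}_\varepsilon ^{-1/2}\sin (\tau\mathcal{A}_\varepsilon ^{1/2})=\varepsilon T_\varepsilon ^*\mathcal{A}^{-1/2}\sin (\varepsilon ^{-1}\tau\mathcal{A}^{1/2})T_\varepsilon$.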
By \eqref{f0<=} and the elementary inequality $\vert \sin x\vert /\vert x \vert \leqslant 1$, $x\in\mathbb{R}$, \begin{equation} \label{12.2} \begin{split} \Vert & f^\varepsilon\mathcal{A}_\varepsilon ^{-1/2}\sin (\tau \mathcal{A}_\varepsilon ^{1/2})(f^\varepsilon)^{-1}-f_0(\mathcal{A}^0)^{-1/2}\sin (\tau (\mathcal{A}^0)^{1/2})f_0^{-1} \Vert _{L_2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)} \\ &\leqslant 2\Vert f\Vert _{L_\infty}\Vert f^{-1}\Vert _{L_\infty}\vert \tau\vert . \end{split} \end{equation} Interpolating between \eqref{12.2} and \eqref{12.1}, we obtain the following result.
\begin{theorem} \label{Theorem 12.2} Under the assumptions of Theorem~\textnormal{\ref{Theorem 12.1}}, for $0\leqslant s\leqslant 1$, $\tau\in\mathbb{R}$, and $\varepsilon >0$ we have \begin{equation*} \Vert f^\varepsilon \mathcal{A}_\varepsilon ^{-1/2}\sin (\tau \mathcal{A}_\varepsilon ^{1/2})(f^\varepsilon )^{-1}-f_0(\mathcal{A}^0)^{-1/2}\sin (\tau (\mathcal{A}^0)^{1/2})f_0^{-1}\Vert _{H^s(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)} \leqslant \mathfrak{C}_1(s) (1+\vert \tau\vert )\varepsilon ^s, \end{equation*} where $\mathfrak{C}_1(s):=(2\Vert f\Vert _{L_\infty}\Vert f^{-1}\Vert _{L_\infty})^{1-s} {C}_{12}^s$. \end{theorem}
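We spell out the interpolation step. For $0\leqslant s\leqslant 1$, the operator norm from $H^s(\mathbb{R}^d)$ to $L_2(\mathbb{R}^d)$ is dominated by the product of the $(L_2\rightarrow L_2)$-norm to the power $1-s$ and the $(H^1\rightarrow L_2)$-norm to the power $s$. Hence, by \eqref{12.2} and \eqref{12.1},
\begin{equation*}
\bigl(2\Vert f\Vert _{L_\infty}\Vert f^{-1}\Vert _{L_\infty}\vert\tau\vert \bigr)^{1-s}\bigl({C}_{12}\varepsilon (1+\vert\tau\vert )\bigr)^{s}
\leqslant \mathfrak{C}_1(s)(1+\vert\tau\vert )\varepsilon ^{s},
\end{equation*}
where we have used $\vert \tau\vert ^{1-s}(1+\vert\tau\vert )^{s}\leqslant 1+\vert\tau\vert$.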
\subsection{Approximation with corrector}
Now, we obtain an approximation with the correction term taken into account. We put $\Pi _\varepsilon :=T_\varepsilon ^*\Pi T_\varepsilon$. Then $\Pi _\varepsilon $ is the pseudodifferential operator in $L_2(\mathbb{R}^d;\mathbb{C}^n)$ with the symbol $\chi _{\widetilde{\Omega}/\varepsilon }(\boldsymbol{\xi})$, i.~e., \begin{equation} \label{Pi eps} (\Pi _\varepsilon \mathbf{u})(\mathbf{x})=(2\pi )^{-d/2}\int _{\widetilde{\Omega}/\varepsilon}e^{i\langle\mathbf{x},\boldsymbol{\xi}\rangle}\widehat{\mathbf{u}}(\boldsymbol{\xi})\,d\boldsymbol{\xi}. \end{equation} Obviously, $\Pi_\varepsilon \mathbf{D}^\sigma\mathbf{u}=\mathbf{D}^\sigma \Pi _\varepsilon \mathbf{u}$ for $\mathbf{u}\in H^\kappa(\mathbb{R}^d;\mathbb{C}^n)$ and any multiindex $\sigma$ of length $\vert\sigma\vert \leqslant \kappa$. Note that $ \Vert \Pi_\varepsilon\Vert _{H^\kappa(\mathbb{R}^d)\rightarrow H^\kappa(\mathbb{R}^d)}\leqslant 1, \quad\kappa\in\mathbb{Z}_+$.
The following results were obtained in \cite[Proposition 1.4]{PSu} and \cite[Subsec. 10.2]{BSu06}.
\begin{proposition} \label{Proposition Pi eps -I} For any function $\mathbf{u}\in H^1(\mathbb{R}^d;\mathbb{C}^n)$ we have \begin{equation*} \Vert \Pi _\varepsilon \mathbf{u}-\mathbf{u}\Vert _{L_2(\mathbb{R}^d)}\leqslant \varepsilon r_0^{-1}\Vert \mathbf{D}\mathbf{u}\Vert _{L_2(\mathbb{R}^d)},\quad\varepsilon >0. \end{equation*} \end{proposition}
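The proof is a direct computation with the Plancherel theorem; we sketch it under the standard convention that the ball $\lbrace\vert\boldsymbol{\xi}\vert\leqslant r_0\rbrace$ is contained in $\widetilde{\Omega}$: \begin{equation*} \Vert \Pi _\varepsilon \mathbf{u}-\mathbf{u}\Vert ^2_{L_2(\mathbb{R}^d)} =\int _{\mathbb{R}^d\setminus (\widetilde{\Omega}/\varepsilon )}\vert \widehat{\mathbf{u}}(\boldsymbol{\xi})\vert ^2\,d\boldsymbol{\xi} \leqslant \varepsilon ^2 r_0^{-2}\int _{\mathbb{R}^d}\vert \boldsymbol{\xi}\vert ^2\vert \widehat{\mathbf{u}}(\boldsymbol{\xi})\vert ^2\,d\boldsymbol{\xi} =\varepsilon ^2 r_0^{-2}\Vert \mathbf{D}\mathbf{u}\Vert ^2_{L_2(\mathbb{R}^d)}, \end{equation*} since $\vert\boldsymbol{\xi}\vert\geqslant r_0/\varepsilon$ for $\boldsymbol{\xi}\notin\widetilde{\Omega}/\varepsilon$.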
\begin{proposition} \label{Proposition Pi eps f eps} Let $\Phi(\mathbf{x})$ be a $\Gamma$-periodic function in $\mathbb{R}^d$ such that $\Phi \in L_2(\Omega)$. Then the operator $[\Phi ^\varepsilon]\Pi_\varepsilon$ is bounded in $L_2(\mathbb{R}^d;\mathbb{C}^n)$, and \begin{equation*} \Vert [\Phi ^\varepsilon]\Pi_\varepsilon\Vert _{L_2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)}\leqslant \vert \Omega\vert ^{-1/2}\Vert \Phi \Vert _{L_2(\Omega)},\quad\varepsilon >0. \end{equation*} \end{proposition}
\begin{theorem} \label{Theorem 12.3} Let $\Lambda (\mathbf{x})$ be the $\Gamma$-periodic solution of problem \eqref{Lambda problem}. Let $\Pi _\varepsilon$ be the operator \eqref{Pi eps}. Then, under the assumptions of Theorem~\textnormal{\ref{Theorem 12.1}}, for $\varepsilon >0$ and $\tau\in\mathbb{R}$ we have \begin{equation} \label{12.3} \begin{split} \bigl\Vert & f^\varepsilon \mathcal{A}_\varepsilon ^{-1/2}\sin (\tau \mathcal{A}_\varepsilon ^{1/2})(f^\varepsilon )^{-1} -(I+\varepsilon\Lambda ^\varepsilon b(\mathbf{D})\Pi _\varepsilon) f_0(\mathcal{A}^0)^{-1/2}\sin (\tau (\mathcal{A}^0)^{1/2})f_0^{-1}\bigr \Vert _{H^2(\mathbb{R}^d)\rightarrow H^1(\mathbb{R}^d)} \\ &\leqslant {C}_{17}\varepsilon (1+\vert \tau\vert). \end{split} \end{equation} The constant ${C}_{17}$ depends only on $m$, $\alpha _0$, $\alpha _1$, $\Vert g\Vert _{L_\infty}$, $\Vert g ^{-1}\Vert _{L_\infty}$, $\Vert f\Vert _{L_\infty}$, $\Vert f^{-1}\Vert _{L_\infty}$, and the parameters of the lattice $\Gamma$. \end{theorem}
\begin{proof} By the scaling transformation, \eqref{10.II} implies that \begin{equation} \label{12.3a-2} \begin{split} \Bigl\Vert& \widehat{\mathcal{A}}_\varepsilon ^{1/2} \Bigl( f^\varepsilon \mathcal{A}_\varepsilon ^{-1/2}\sin (\tau\mathcal{A}_\varepsilon ^{1/2})(f^\varepsilon )^{-1} -(I+\varepsilon \Lambda ^\varepsilon b(\mathbf{D})\Pi _\varepsilon )f_0 (\mathcal{A}^0)^{-1/2}\sin (\tau (\mathcal{A}^0)^{1/2})f_0^{-1} \Bigr) \\ &\times (\mathcal{H}_0 +I)^{-1}\Bigr\Vert _{L_2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)} \leqslant {C}_{13}\varepsilon (1+\vert \tau\vert ). \end{split} \end{equation} Note that, by \eqref{<b^*b<}, \eqref{g in}, and \eqref{A_eps}, \begin{equation} \label{A_eps^1/2>=} \widehat{c}_*\Vert \mathbf{D}\mathbf{u}\Vert ^2 _{L_2(\mathbb{R}^d)}\leqslant\Vert \widehat{\mathcal{A}}_\varepsilon ^{1/2}\mathbf{u}\Vert ^2 _{L_2(\mathbb{R}^d)},\quad\mathbf{u}\in H^1(\mathbb{R}^d;\mathbb{C}^n), \end{equation} where the constant $\widehat{c}_*$ is defined by \eqref{hat A(k)>=}. From \eqref{12.3a-2} and \eqref{A_eps^1/2>=} it follows that \begin{equation} \label{12.4} \begin{split} \Bigl\Vert &\mathbf{D} \Bigl( f^\varepsilon \mathcal{A}_\varepsilon ^{-1/2}\sin (\tau\mathcal{A}_\varepsilon ^{1/2})(f^\varepsilon )^{-1} -(I+\varepsilon \Lambda ^\varepsilon b(\mathbf{D})\Pi _\varepsilon )f_0 (\mathcal{A}^0)^{-1/2}\sin (\tau (\mathcal{A}^0)^{1/2})f_0 ^{-1} \Bigr) \\ &\times (\mathcal{H}_0 +I)^{-1}\Bigr\Vert _{L_2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)} \leqslant \widehat{c}_*^{-1/2}{C}_{13}\varepsilon (1+\vert \tau\vert ). \end{split} \end{equation}
Now, we estimate the $(L_2\rightarrow L_2)$-norm of the correction term. Let $\Pi _\varepsilon ^{(m)}$ be the pseudodifferential operator in $L_2(\mathbb{R}^d;\mathbb{C}^m)$ with the symbol $\chi _{\widetilde{\Omega}/\varepsilon}(\boldsymbol{\xi})$. By Proposition \ref{Proposition Pi eps f eps} and \eqref{Lambda<=}, \begin{equation} \label{proof fluxes 6} \Vert \Lambda ^\varepsilon \Pi _\varepsilon ^{(m)}\Vert _{L_2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)}\leqslant M_1. \end{equation} Using \eqref{g^0<=}, \eqref{f0<=}, \eqref{A0 no hat}, and \eqref{proof fluxes 6}, we have \begin{equation} \label{12.5} \begin{split} \Vert &\varepsilon \Lambda ^\varepsilon b(\mathbf{D})\Pi _\varepsilon f_0(\mathcal{A}^0)^{-1/2}\sin (\tau (\mathcal{A}^0)^{1/2})f_0^{-1} (\mathcal{H}_0+I)^{-1}\Vert _{L_2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)} \\ &\leqslant \varepsilon \Vert \Lambda ^\varepsilon \Pi _\varepsilon ^{(m)}\Vert _{L_2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)} \Vert b(\mathbf{D})f_0(\mathcal{A}^0)^{-1/2}\sin (\tau (\mathcal{A}^0)^{1/2})f_0 ^{-1}(\mathcal{H}_0+I)^{-1}\Vert _{L_2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)} \\ &\leqslant\varepsilon M_1\Vert g^{-1}\Vert ^{1/2}_{L_\infty} \Vert \sin (\tau (\mathcal{A}^0)^{1/2})f_0^{-1} (\mathcal{H}_0+I)^{-1}\Vert _{L_2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)} \leqslant \varepsilon M_1\Vert g^{-1}\Vert ^{1/2}_{L_\infty}\Vert f ^{-1}\Vert _{L_\infty}. \end{split} \end{equation} Combining \eqref{isometria}, \eqref{12.1}, \eqref{12.4}, and \eqref{12.5}, we arrive at estimate \eqref{12.3} with the constant $ {C}_{17}:=\widehat{c}_*^{-1/2}{C}_{13}+ {C}_{12}+M_1\Vert g^{-1}\Vert ^{1/2}_{L_\infty}\Vert f ^{-1}\Vert _{L_\infty}$. \end{proof}
By interpolation, from Theorem~\ref{Theorem 12.3} we derive the following result.
\begin{theorem} \label{Theorem 12.4} Under the assumptions of Theorem~\textnormal{\ref{Theorem 12.3}}, for $0\leqslant s\leqslant 1$, $\tau\in\mathbb{R}$, and $0<\varepsilon\leqslant 1$ we have \begin{equation} \label{12.6} \begin{split} \Vert &f^\varepsilon \mathcal{A}_\varepsilon ^{-1/2}\sin (\tau \mathcal{A}_\varepsilon ^{1/2})(f^\varepsilon )^{-1} -(I+\varepsilon\Lambda ^\varepsilon b(\mathbf{D})\Pi _\varepsilon ) f_0 (\mathcal{A}^0)^{-1/2}\sin (\tau (\mathcal{A}^0)^{1/2})f_0^{-1}\Vert _{H^{s+1}(\mathbb{R}^d)\rightarrow H^1(\mathbb{R}^d)} \\ &\leqslant \mathfrak{C}_2(s) (1+\vert \tau\vert )\varepsilon ^s. \end{split} \end{equation} Here the constant $\mathfrak{C}_2(s)$ depends only on $s$, $m$, $\alpha _0$, $\Vert g\Vert _{L_\infty}$, $\Vert g^{-1}\Vert _{L_\infty}$, $\Vert f\Vert _{L_\infty}$, $\Vert f^{-1}\Vert _{L_\infty}$, and the parameters of the lattice $\Gamma$. \end{theorem}
\begin{proof} Let us estimate the left-hand side of \eqref{12.6} for $s=0$. By \eqref{<b^*b<}, \eqref{A_eps no hat}, and the elementary inequality $\vert \sin x\vert /\vert x\vert \leqslant 1$, $x\in\mathbb{R}$, \begin{equation} \label{12.7} \begin{split} \Vert & f^\varepsilon \mathcal{A}_\varepsilon ^{-1/2}\sin (\tau\mathcal{A}_\varepsilon ^{1/2})(f^\varepsilon )^{-1}\Vert _{H^1(\mathbb{R}^d)\rightarrow H^1(\mathbb{R}^d)} \\ &\leqslant \Vert f\Vert _{L_\infty}\Vert f^{-1}\Vert _{L_\infty}\vert \tau\vert +\Vert \mathbf{D}f^\varepsilon \mathcal{A}_\varepsilon ^{-1/2}\sin (\tau \mathcal{A}_\varepsilon ^{1/2})(f^\varepsilon )^{-1}\Vert _{H^1(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)} \\ &\leqslant \Vert f\Vert _{L_\infty}\Vert f^{-1}\Vert _{L_\infty}\vert \tau\vert +\alpha _0^{-1/2}\Vert g^{-1}\Vert ^{1/2}_{L_\infty}\Vert f^{-1}\Vert _{L_\infty}. \end{split} \end{equation} Similarly, by \eqref{<b^*b<}, \eqref{f0<=}, and \eqref{A0 no hat}, \begin{equation} \label{12.8} \Vert f_0(\mathcal{A}^0)^{-1/2}\sin (\tau (\mathcal{A}^0)^{1/2})f_0 ^{-1}\Vert _{H^1(\mathbb{R}^d)\rightarrow H^1(\mathbb{R}^d)} \leqslant \Vert f\Vert _{L_\infty}\Vert f^{-1}\Vert _{L_\infty}\vert\tau\vert +\alpha _0^{-1/2}\Vert g^{-1}\Vert ^{1/2}_{L_\infty}\Vert f^{-1} \Vert _{L_\infty}. \end{equation}
From \eqref{g^0<=}, \eqref{f0<=}, and \eqref{proof fluxes 6} it follows that \begin{equation} \label{proof interpol corr 3} \begin{split} \Vert &\varepsilon \Lambda ^\varepsilon b(\mathbf{D})\Pi _\varepsilon f_0({\mathcal{A}}^0) ^{-1/2}\sin (\tau ( {\mathcal{A}}^0) ^{1/2})f_0^{-1}\Vert _{H^1(\mathbb{R}^d)\rightarrow H^1(\mathbb{R}^d)}\\ &\leqslant \varepsilon M_1\Vert g^{-1}\Vert ^{1/2}_{L_\infty}\Vert f^{-1}\Vert _{L_\infty} +\varepsilon\Vert \mathbf{D}\Lambda ^\varepsilon b(\mathbf{D})\Pi _\varepsilon f_0 ( {\mathcal{A}}^0) ^{-1/2}\sin (\tau ( {\mathcal{A}}^0) ^{1/2})f_0^{-1}\Vert _{H^1(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)} \\ &\leqslant \varepsilon M_1\Vert g^{-1}\Vert ^{1/2}_{L_\infty}\Vert f^{-1}\Vert _{L_\infty} +\Vert (\mathbf{D}\Lambda)^\varepsilon b(\mathbf{D})\Pi _\varepsilon f_0 ( {\mathcal{A}}^0) ^{-1/2}\sin (\tau ( {\mathcal{A}}^0) ^{1/2})f_0^{-1}\Vert _{H^1(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)} \\ &+\varepsilon\Vert \Lambda ^\varepsilon \mathbf{D}b(\mathbf{D})\Pi _\varepsilon f_0 ( {\mathcal{A}}^0) ^{-1/2}\sin (\tau ( {\mathcal{A}}^0) ^{1/2})f_0^{-1}\Vert _{H^1(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)}. \end{split} \end{equation} By Proposition \ref{Proposition Pi eps f eps}, \eqref{D Lambda <=}, \eqref{g^0<=}, \eqref{f0<=}, and \eqref{A0 no hat}, \begin{equation} \label{proof interpol corr 8.18} \Vert (\mathbf{D}\Lambda)^\varepsilon b(\mathbf{D})\Pi _\varepsilon f_0( {\mathcal{A}}^0) ^{-1/2}\sin (\tau ( {\mathcal{A}}^0) ^{1/2})f_0^{-1}\Vert _{H^1(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)} \leqslant M_2\Vert g^{-1}\Vert ^{1/2}_{L_\infty}\Vert f^{-1}\Vert _{L_\infty}. 
\end{equation} Next, according to \eqref{g^0<=}, \eqref{A0 no hat}, and \eqref{proof fluxes 6}, \begin{equation} \label{proof interpol corr e1} \begin{split} \varepsilon\Vert& \Lambda ^\varepsilon \mathbf{D}b(\mathbf{D})\Pi _\varepsilon f_0( {\mathcal{A}}^0) ^{-1/2}\sin (\tau ( {\mathcal{A}}^0) ^{1/2})f_0^{-1}\Vert _{H^1(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)} \\ &\leqslant \varepsilon M_1\Vert g^{-1}\Vert _{L_\infty}^{1/2}\Vert \mathbf{D}\sin (\tau ( {\mathcal{A}}^0) ^{1/2})f_0^{-1}\Vert _{H^1(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)}. \end{split} \end{equation} Since the operator ${\mathcal{A}}^0$ with constant coefficients commutes with the differentiation $\mathbf{D}$, we have \begin{equation*} \Vert \mathbf{D}\sin (\tau ( {\mathcal{A}}^0) ^{1/2})\Vert _{H^1(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)}\leqslant 1. \end{equation*} Together with \eqref{f0<=} and \eqref{proof interpol corr 3}--\eqref{proof interpol corr e1} this yields \begin{equation} \label{12.9} \Vert\varepsilon \Lambda ^\varepsilon b(\mathbf{D})\Pi _\varepsilon f_0 (\mathcal{A}^0)^{-1/2}\sin (\tau (\mathcal{A}^0)^{1/2})f_0 ^{-1}\Vert _{H^1(\mathbb{R}^d)\rightarrow H^1(\mathbb{R}^d)} \leqslant (2\varepsilon M_1+M_2)\Vert g^{-1}\Vert ^{1/2}_{L_\infty}\Vert f ^{-1}\Vert _{L_\infty}. 
\end{equation} Relations \eqref{12.7}, \eqref{12.8}, and \eqref{12.9} imply that \begin{equation} \label{12.10} \begin{split} \Vert &f^\varepsilon \mathcal{A}_\varepsilon ^{-1/2}\sin (\tau \mathcal{A}_\varepsilon ^{1/2})(f^\varepsilon)^{-1} -(I+\varepsilon \Lambda ^\varepsilon b(\mathbf{D})\Pi _\varepsilon) f_0(\mathcal{A}^0)^{-1/2}\sin (\tau (\mathcal{A}^0)^{1/2})f_0 ^{-1}\Vert _{H^1(\mathbb{R}^d)\rightarrow H^1(\mathbb{R}^d)} \\ &\leqslant { {C}}_{18}(1+\vert \tau\vert ),\quad\tau\in\mathbb{R},\quad 0<\varepsilon\leqslant 1, \end{split} \end{equation} where ${ {C}}_{18}:=\max\lbrace2\Vert f\Vert _{L_\infty}\Vert f^{-1}\Vert _{L_\infty} ;(2\alpha _0 ^{-1/2} +2M_1+M_2)\Vert g^{-1}\Vert ^{1/2}_{L_\infty}\Vert f ^{-1}\Vert _{L_\infty}\rbrace$. Interpolating between \eqref{12.10} and \eqref{12.3}, we deduce estimate \eqref{12.6} with $\mathfrak{C}_2(s):={ {C}}_{18}^{\,1-s}{ {C}}_{17}^s$. \end{proof}
\subsection{The case where $d\leqslant 4$}
Now we apply Theorem~\ref{Theorem 8/1 NO PI}. By the scaling transformation, \eqref{Th 8/1 NO PI} implies that \begin{equation} \label{9.20a NEW} \begin{split} \Bigl\Vert &\widehat{\mathcal{A}}_\varepsilon ^{1/2}\left( f^\varepsilon\mathcal{A}_\varepsilon ^{-1/2}\sin (\tau \mathcal{A}_\varepsilon ^{1/2})(f^\varepsilon)^{-1} -(I+\varepsilon \Lambda ^\varepsilon b(\mathbf{D}))f_0(\mathcal{A}^0)^{-1/2}\sin (\tau (\mathcal{A}^0)^{1/2})f_0^{-1} \right) \\ &\times (\mathcal{H}_0+I)^{-1} \Bigr\Vert _{L_2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)} \leqslant C_{14}\varepsilon (1+\vert \tau\vert),\quad 0<\varepsilon\leqslant 1,\quad \tau\in\mathbb{R}. \end{split} \end{equation} Combining this with \eqref{A_eps^1/2>=}, we obtain \begin{equation} \label{1 p.9.3} \begin{split} \Bigl\Vert&\mathbf{D}\left( f^\varepsilon\mathcal{A}_\varepsilon ^{-1/2}\sin (\tau \mathcal{A}_\varepsilon ^{1/2})(f^\varepsilon)^{-1} -(I+\varepsilon \Lambda ^\varepsilon b(\mathbf{D}))f_0(\mathcal{A}^0)^{-1/2}\sin (\tau (\mathcal{A}^0)^{1/2})f_0^{-1} \right) \\ &\times (\mathcal{H}_0+I)^{-1} \Bigr\Vert _{L_2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)} \leqslant \widehat{c}_*^{-1/2} C_{14}\varepsilon (1+\vert \tau\vert),\quad 0<\varepsilon\leqslant 1,\quad\tau\in\mathbb{R}. \end{split} \end{equation}
Let us estimate the $(L_2\rightarrow L_2)$-norm of the corrector. By the scaling transformation, \begin{equation} \label{L2L2 corr norm} \begin{split} \varepsilon \Vert \Lambda ^\varepsilon b(\mathbf{D})f_0 (\mathcal{A}^0)^{-1/2}\sin (\tau (\mathcal{A}^0)^{1/2})f_0^{-1}(\mathcal{H}_0 +I)^{-1}\Vert _{L_2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)} \\ = \varepsilon \Vert \Lambda b(\mathbf{D})f_0(\mathcal{A}^0)^{-1/2}\sin (\varepsilon ^{-1}\tau (\mathcal{A}^0)^{1/2})f_0^{-1}\mathcal{R}(\varepsilon)\Vert _{L_2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)}. \end{split} \end{equation}
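For clarity, we spell out what the scaling transformation does here. Let $T_\varepsilon$ be the scaling operator from the definition of $\Pi _\varepsilon$, which we take to act as $(T_\varepsilon \mathbf{u})(\mathbf{x})=\varepsilon ^{d/2}\mathbf{u}(\varepsilon\mathbf{x})$ (this is consistent with $\Pi _\varepsilon =T_\varepsilon ^*\Pi T_\varepsilon$), and let $\mathcal{A}_1$ denote the operator $\mathcal{A}_\varepsilon$ with $\varepsilon =1$. A direct computation with the quadratic forms shows that \begin{equation*} \mathcal{A}_\varepsilon =\varepsilon ^{-2}\,T_\varepsilon ^*\mathcal{A}_1 T_\varepsilon ,\qquad \mathcal{A}_\varepsilon ^{-1/2}\sin (\tau\mathcal{A}_\varepsilon ^{1/2}) =\varepsilon \,T_\varepsilon ^*\mathcal{A}_1 ^{-1/2}\sin (\varepsilon ^{-1}\tau\mathcal{A}_1 ^{1/2})\,T_\varepsilon ,\qquad T_\varepsilon ^*[\Lambda ]T_\varepsilon =[\Lambda ^\varepsilon ]. \end{equation*} Since $T_\varepsilon$ is unitary, conjugation by $T_\varepsilon$ preserves the $(L_2\rightarrow L_2)$-norm; this explains the factor $\varepsilon$ and the rescaled time $\varepsilon ^{-1}\tau$ on the right-hand side of \eqref{L2L2 corr norm}.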
The $(H^s\rightarrow L_2)$-norm of the operator $[\Lambda ]$ was estimated in \cite[Proposition 11.3]{Su_MMNP}.
\begin{proposition} \label{Proposition Lambda Hs to L2} Let $s=0$ for $d=1$, $s>0$ for $d=2$, $s=d/2-1$ for $d\geqslant 3$. Then the operator $[\Lambda]$ is a continuous mapping of $H^s(\mathbb{R}^d;\mathbb{C}^m)$ to $L_2(\mathbb{R}^d;\mathbb{C}^m)$, and \begin{equation*} \Vert [\Lambda ]\Vert _{H^s(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)}\leqslant \mathfrak{C}_d, \end{equation*} where the constant $\mathfrak{C}_d$ depends only on $d$, $m$, $n$, $\alpha _0$, $\Vert g\Vert _{L_\infty}$, $\Vert g^{-1}\Vert _{L_\infty}$, and the parameters of the lattice $\Gamma$\textnormal{;} in the case $d=2$ it depends also on $s$. \end{proposition}

Now we consider only the case $d\leqslant 4$. So, by Proposition~\ref{Proposition Lambda Hs to L2}, \begin{equation} \label{Lambda H1->L2} \Vert [\Lambda]\Vert _{H^1(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)}\leqslant\mathfrak{C}_d,\quad d\leqslant 4. \end{equation} Thus, we need to estimate the operator $b(\mathbf{D})f_0(\mathcal{A}^0)^{-1/2}\sin (\varepsilon ^{-1}\tau (\mathcal{A}^0)^{1/2})f_0^{-1}\mathcal{R}(\varepsilon)$ in the\break $(L_2\rightarrow H^1)$-norm. By \eqref{g^0<=}, \eqref{f0<=}, and \eqref{A0 no hat}, for any $d$ we have \begin{equation} \label{4 in p.9.3} \begin{split} \Vert & b(\mathbf{D})f_0(\mathcal{A}^0)^{-1/2}\sin (\varepsilon ^{-1}\tau (\mathcal{A}^0)^{1/2})f_0^{-1}\mathcal{R}(\varepsilon)\Vert _{L_2(\mathbb{R}^d)\rightarrow H^1(\mathbb{R}^d)} \\ &\leqslant \Vert b(\mathbf{D})f_0(\mathcal{A}^0)^{-1/2}\sin (\varepsilon ^{-1}\tau (\mathcal{A}^0)^{1/2})f_0^{-1}\mathcal{R}(\varepsilon)\Vert _{L_2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)} \\ &+\Vert \mathbf{D}b(\mathbf{D})f_0(\mathcal{A}^0)^{-1/2}\sin (\varepsilon ^{-1}\tau (\mathcal{A}^0)^{1/2})f_0^{-1}\mathcal{R}(\varepsilon)\Vert _{L_2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)} \\ &\leqslant 2\Vert g^{-1}\Vert ^{1/2}_{L_\infty}\Vert f^{-1}\Vert _{L_\infty},\quad \tau\in\mathbb{R},\quad 0<\varepsilon\leqslant 1. \end{split} \end{equation}
The following result is a direct consequence of \eqref{12.1} and \eqref{1 p.9.3}--\eqref{4 in p.9.3}.
\begin{theorem} \label{Theorem d<=4 chapter 3} Let $d\leqslant 4$. Under the assumptions of Theorem~\textnormal{\ref{Theorem 12.3}}, we have \begin{equation} \label{Th d<=4} \begin{split} \Vert &f^\varepsilon \mathcal{A}_\varepsilon ^{-1/2}\sin (\tau \mathcal{A}_\varepsilon ^{1/2})(f^\varepsilon)^{-1} -(I+\varepsilon \Lambda ^\varepsilon b(\mathbf{D}))f_0(\mathcal{A}^0)^{-1/2}\sin (\tau (\mathcal{A}^0)^{1/2})f_0^{-1}\Vert _{H^2(\mathbb{R}^d)\rightarrow H^1(\mathbb{R}^d)} \\ &\leqslant C_{19}\varepsilon (1+\vert \tau\vert),\quad 0<\varepsilon\leqslant 1,\quad \tau\in\mathbb{R}. \end{split} \end{equation} The constant $C_{19}:=\widehat{c}_*^{-1/2}C_{14}+C_{12}+2\mathfrak{C}_d\Vert g^{-1}\Vert ^{1/2}_{L_\infty}\Vert f^{-1}\Vert _{L_\infty}$ depends only on $d$, $m$, $n$, $\alpha _0$, $\alpha _1$, $\Vert g\Vert _{L_\infty}$, $\Vert g^{-1}\Vert _{L_\infty}$, $\Vert f\Vert _{L_\infty}$, $\Vert f^{-1}\Vert _{L_\infty}$, and the parameters of the lattice $\Gamma$. \end{theorem}
\subsection{Removal of $\Pi _\varepsilon$ from the corrector for $d\geqslant 5$}
The following result can be deduced from Theorem~\ref{Theorem d>4 chapter 2}.
\begin{theorem} \label{Theorem d>=5 eps} Let $d\geqslant 5$. Let Condition~\textnormal{\ref{Condition Lambda multiplier}} be satisfied. Then, under the assumptions of Theorem~\textnormal{\ref{Theorem 12.3}}, for $0<\varepsilon\leqslant 1$ and $\tau\in\mathbb{R}$ we have \begin{equation} \label{Th d>=5, chapter 3} \begin{split} \Vert &f^\varepsilon \mathcal{A}_\varepsilon ^{-1/2}\sin (\tau \mathcal{A}_\varepsilon ^{1/2})(f^\varepsilon)^{-1} -(I+\varepsilon \Lambda ^\varepsilon b(\mathbf{D}))f_0(\mathcal{A}^0)^{-1/2}\sin (\tau (\mathcal{A}^0)^{1/2})f_0^{-1}\Vert _{H^2(\mathbb{R}^d)\rightarrow H^1(\mathbb{R}^d)} \\ &\leqslant C_{20}\varepsilon (1+\vert \tau\vert). \end{split} \end{equation} The constant $C_{20}$ depends only on $m$, $\alpha _0$, $\alpha _1$, $\Vert g\Vert _{L_\infty}$, $\Vert g^{-1}\Vert _{L_\infty}$, $\Vert f\Vert _{L_\infty}$, $\Vert f^{-1}\Vert _{L_\infty}$, the parameters of the lattice $\Gamma$, and the norm $\Vert [\Lambda ]\Vert _{H^2(\mathbb{R}^d)\rightarrow H^1(\mathbb{R}^d)}$. \end{theorem}
\begin{proof} The proof is similar to that of Theorem~\ref{Theorem 12.3}. Combining \eqref{g^0<=}, \eqref{f0<=}, \eqref{A0 no hat}, \eqref{Th d>4 Lambda multiplier}, \eqref{12.1}, and \eqref{A_eps^1/2>=}, we arrive at the estimate \eqref{Th d>=5, chapter 3} with $C_{20}:=\widehat{c}_*^{-1/2}C_{15}+C_{12}+\Vert [\Lambda ]\Vert _{H^2(\mathbb{R}^d)\rightarrow H^1(\mathbb{R}^d)}\Vert g^{-1}\Vert ^{1/2}_{L_\infty}\Vert f^{-1}\Vert _{L_\infty}$. \end{proof}
\begin{theorem} \label{Theorem D>=5 smooth date sec 9} Let $d\geqslant 5$. Under the assumptions of Theorem~\textnormal{\ref{Theorem 12.3}}, for $0<\varepsilon\leqslant 1$ and $\tau\in\mathbb{R}$ we have \begin{equation} \label{Th d>5 smooth data ch 3} \begin{split} \Vert &f^\varepsilon \mathcal{A}_\varepsilon ^{-1/2}\sin (\tau \mathcal{A}_\varepsilon ^{1/2})(f^\varepsilon)^{-1} -(I+\varepsilon \Lambda ^\varepsilon b(\mathbf{D}))f_0(\mathcal{A}^0)^{-1/2}\sin (\tau (\mathcal{A}^0)^{1/2})f_0^{-1}\Vert _{H^{d/2}(\mathbb{R}^d)\rightarrow H^1(\mathbb{R}^d)} \\ &\leqslant C_{21}\varepsilon (1+\vert \tau\vert). \end{split} \end{equation} The constant $C_{21}$ depends only on $d$, $m$, $n$, $\alpha _0$, $\alpha _1$, $\Vert g\Vert _{L_\infty}$, $\Vert g^{-1}\Vert _{L_\infty}$, $\Vert f\Vert _{L_\infty}$, $\Vert f^{-1}\Vert _{L_\infty}$, and the parameters of the lattice $\Gamma$. \end{theorem}
\begin{proof} By the scaling transformation, from Proposition~\ref{Proposition d>5 from H-kappa chapter 2} it follows that \begin{equation*} \begin{split} \Bigl\Vert &\widehat{\mathcal{A}}_\varepsilon ^{1/2}\Bigl(f^\varepsilon \mathcal{A}_\varepsilon ^{-1/2}\sin(\tau \mathcal{A}_\varepsilon ^{1/2})(f^\varepsilon)^{-1} -(I+\varepsilon \Lambda ^\varepsilon b(\mathbf{D}))f_0 (\mathcal{A}^0)^{-1/2}\sin(\tau (\mathcal{A}^0)^{1/2})f_0^{-1}\Bigr) \\ &\times (\mathcal{H}_0+I)^{-d/4}\Bigr\Vert _{L_2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)} \leqslant C_{16}\varepsilon (1+\vert \tau\vert), \quad 0<\varepsilon \leqslant 1,\quad\tau\in\mathbb{R}. \end{split} \end{equation*} By \eqref{A_eps^1/2>=}, \begin{equation} \label{D... d>=5 proof chapter 3} \begin{split} \Bigl\Vert&\mathbf{D}\Bigl(f^\varepsilon \mathcal{A}_\varepsilon ^{-1/2}\sin(\tau \mathcal{A}_\varepsilon ^{1/2})(f^\varepsilon)^{-1} -(I+\varepsilon \Lambda ^\varepsilon b(\mathbf{D}))f_0 (\mathcal{A}^0)^{-1/2}\sin(\tau (\mathcal{A}^0)^{1/2})f_0^{-1}\Bigr) \\ &\times(\mathcal{H}_0+I)^{-d/4}\Bigr\Vert _{L_2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)} \leqslant \widehat{c}_*^{-1/2} C_{16}\varepsilon (1+\vert \tau\vert), \quad 0<\varepsilon \leqslant 1,\quad\tau\in\mathbb{R}. 
\end{split} \end{equation} By Proposition~\ref{Proposition Lambda Hs to L2}, and \eqref{g^0<=}, \eqref{f0<=}, \eqref{A0 no hat}, \begin{equation} \label{d>5 final proof ch 3} \begin{split} \Vert & \Lambda ^\varepsilon b(\mathbf{D})f_0 (\mathcal{A}^0)^{-1/2}\sin(\tau (\mathcal{A}^0)^{1/2})f_0^{-1}(\mathcal{H}_0+I)^{-d/4}\Vert _{L_2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)} \\ &\leqslant\mathfrak{C}_d\Vert b(\mathbf{D})f_0 (\mathcal{A}^0)^{-1/2}\sin(\tau (\mathcal{A}^0)^{1/2})f_0^{-1}(\mathcal{H}_0+I)^{-d/4}\Vert _{L_2(\mathbb{R}^d)\rightarrow H^{d/2-1}(\mathbb{R}^d)} \\ &\leqslant\mathfrak{C}_d\Vert g^{-1}\Vert ^{1/2}_{L_\infty}\Vert f^{-1}\Vert _{L_\infty}\Vert (\mathcal{H}_0+I)^{-d/4}\Vert _{L_2(\mathbb{R}^d)\rightarrow H^{d/2-1}(\mathbb{R}^d)} \leqslant \mathfrak{C}_d\Vert g^{-1}\Vert ^{1/2}_{L_\infty}\Vert f^{-1}\Vert _{L_\infty}. \end{split} \end{equation} Combining \eqref{12.1}, \eqref{D... d>=5 proof chapter 3}, and \eqref{d>5 final proof ch 3}, we arrive at estimate~\eqref{Th d>5 smooth data ch 3} with the constant $C_{21}:=\widehat{c}_*^{-1/2}C_{16}+C_{12}+\mathfrak{C}_d\Vert g^{-1}\Vert ^{1/2}_{L_\infty}\Vert f^{-1}\Vert _{L_\infty}$. \end{proof}
\subsection{Removal of $\Pi _\varepsilon$. Interpolational results}
To obtain the analogue of Theorem~\ref{Theorem 12.4} with $\Pi_\varepsilon$ replaced by $I$ we need the continuity of the operator $\varepsilon[\Lambda^\varepsilon]b(\mathbf{D})f_0(\mathcal{A}^0)^{-1/2}\sin(\tau (\mathcal{A}^0)^{1/2})f_0^{-1}$ in $H^1(\mathbb{R}^d;\mathbb{C}^n)$, i.~e., we need the boundedness of the norms $\Vert [(\mathbf{D}\Lambda)^\varepsilon ]\Vert _{H^1(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)}$ and $\Vert[\Lambda ^\varepsilon ]\Vert _{L_2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)}$. Due to Parseval's theorem, the assumption $\Vert[\Lambda ^\varepsilon ]\Vert _{L_2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)}<\infty$ holds if and only if the matrix-valued function $\Lambda$ is subject to the following condition.
\begin{condition} \label{Condition Lambda in L infty} Assume that the $\Gamma$-periodic solution $\Lambda (\mathbf{x})$ of problem \eqref{Lambda problem} is bounded, i.~e., $\Lambda\in L_\infty (\mathbb{R}^d)$. \end{condition}
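Indeed, the norm of the multiplication operator $[\Lambda ^\varepsilon ]$ is computed explicitly: denoting by $\vert\cdot\vert$ the spectral norm of a matrix, we have \begin{equation*} \Vert [\Lambda ^\varepsilon ]\Vert _{L_2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)} =\mathop{\mathrm{ess\,sup}}_{\mathbf{x}\in\mathbb{R}^d}\vert \Lambda (\mathbf{x}/\varepsilon )\vert =\Vert \Lambda \Vert _{L_\infty},\quad\varepsilon >0, \end{equation*} so this norm is finite (and then independent of $\varepsilon$) precisely when Condition~\ref{Condition Lambda in L infty} holds.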
Under Condition~\ref{Condition Lambda in L infty}, the operator $[(\mathbf{D}\Lambda)^\varepsilon ]$ is bounded from $H^1$ to $L_2$ due to the following result obtained in \cite[Corollary 2.4]{PSu}.
\begin{lemma} \label{Lemma PSu Lambda in L-infty} Under Condition \textnormal{\ref{Condition Lambda in L infty}}, for any function $u\in H^1(\mathbb{R}^d)$ and $\varepsilon >0$ we have \begin{equation*} \int _{\mathbb{R}^d}\vert (\mathbf{D}\Lambda) ^\varepsilon(\mathbf{x})\vert ^2\vert u(\mathbf{x})\vert ^2\,d\mathbf{x} \leqslant\mathfrak{c}_1\Vert u\Vert ^2_{L_2(\mathbb{R}^d)}+\mathfrak{c}_2\varepsilon ^2\Vert \Lambda\Vert ^2_{L_\infty}\Vert \mathbf{D}u\Vert ^2_{L_2(\mathbb{R}^d)}. \end{equation*} The constants $\mathfrak{c}_1$ and $\mathfrak{c}_2$ depend on $m$, $d$, $\alpha _0$, $\alpha _1$, $\Vert g\Vert _{L_\infty}$, and $\Vert g^{-1}\Vert _{L_\infty}$. \end{lemma}
Some cases where Condition~\ref{Condition Lambda in L infty} is fulfilled automatically were distinguished in \cite[Lemma~8.7]{BSu06}.
\begin{proposition} \label{Proposition Lambda in L infty <=} Suppose that at least one of the following assumptions is satisfied\textnormal{:}
\noindent $1^\circ )$ $d\leqslant 2${\rm ;}
\noindent $2^\circ )$ the dimension $d\geqslant 1$ is arbitrary and the operator $\mathcal{A}_\varepsilon$ has the form $\mathcal{A}_\varepsilon =\mathbf{D}^* g^\varepsilon (\mathbf{x})\mathbf{D}$, where $g(\mathbf{x})$ is a symmetric matrix with real entries{\rm ;}
\noindent $3^\circ )$ the dimension $d$ is arbitrary and $g^0=\underline{g}$, i.~e., relations \eqref{underline-g} are true.
Then Condition~\textnormal{\ref{Condition Lambda in L infty}} is fulfilled. \end{proposition}
Certainly, if $\Lambda \in L_\infty$, then Condition~\ref{Condition Lambda in Ld} holds automatically. Hence, by Proposition~\ref{Proposition Ld implies multiplier}, for $d\geqslant 5$ the assumptions of Theorem~\ref{Theorem d>=5 eps} are satisfied.
We are going to check that under Condition \ref{Condition Lambda in L infty} the analog of Theorem \ref{Theorem 12.4} is valid without any smoothing operator in the corrector. To do this, we estimate the $(H^1\rightarrow H^1)$-norm of the operators under the norm sign in \eqref{Th d<=4} (or \eqref{Th d>=5, chapter 3}). By \eqref{g^0<=}, \eqref{f0<=}, \eqref{A0 no hat}, and Lemma~\ref{Lemma PSu Lambda in L-infty}, we obtain \begin{equation} \label{12.15} \begin{split} \Vert &\varepsilon \Lambda ^\varepsilon b(\mathbf{D})f_0(\mathcal{A}^0)^{-1/2}\sin(\tau (\mathcal{A}^0)^{1/2})f_0 ^{-1} \Vert _{H^1(\mathbb{R}^d)\rightarrow H^1(\mathbb{R}^d)} \leqslant 2\varepsilon\Vert \Lambda\Vert _{L_\infty}\Vert g^{-1}\Vert ^{1/2}_{L_\infty}\Vert f ^{-1}\Vert _{L_\infty}\\ &+\Vert g^{-1}\Vert ^{1/2}_{L_\infty}\Vert f^{-1} \Vert _{L_\infty}(\mathfrak{c}_1+\mathfrak{c}_2\Vert \Lambda \Vert ^2_{L_\infty})^{1/2}, \quad 0<\varepsilon\leqslant 1,\quad \tau\in\mathbb{R}. \end{split} \end{equation} Combining \eqref{12.7}, \eqref{12.8}, and \eqref{12.15}, we deduce that \begin{equation} \label{12.16} \begin{split} \Vert &f^\varepsilon \mathcal{A}_\varepsilon ^{-1/2}\sin (\tau \mathcal{A}_\varepsilon ^{1/2})(f^\varepsilon)^{-1} -(I+\varepsilon \Lambda ^\varepsilon b(\mathbf{D})) f_0 (\mathcal{A}^0)^{-1/2}\sin (\tau (\mathcal{A}^0)^{1/2})f_0 ^{-1}\Vert _{H^1(\mathbb{R}^d)\rightarrow H^1(\mathbb{R}^d)} \\ &\leqslant { {C}}_{22}(1+\vert \tau\vert), \quad 0<\varepsilon\leqslant 1,\quad\tau\in\mathbb{R}, \end{split} \end{equation} where \begin{equation*} { {C}}_{22}:=\Vert f^{-1}\Vert _{L_\infty} \max\left\lbrace 2\Vert f\Vert _{L_\infty}; \Vert g^{-1}\Vert ^{1/2}_{L_\infty}\left(2\alpha _0^{-1/2}+2\Vert \Lambda\Vert _{L_\infty} +(\mathfrak{c}_1+\mathfrak{c}_2\Vert \Lambda\Vert ^2_{L_\infty})^{1/2}\right)\right\rbrace . 
\end{equation*} Interpolating between \eqref{12.16} and \eqref{Th d<=4} for $d\leqslant 4$ and between \eqref{12.16} and \eqref{Th d>=5, chapter 3} for $d\geqslant 5$, we arrive at the following result.
\begin{theorem} \label{Theorem 12.6} Suppose that the assumptions of Theorem~\textnormal{\ref{Theorem 12.1}} are satisfied and Condition~\textnormal{\ref{Condition Lambda in L infty}} holds. Then for $0\leqslant s\leqslant 1$ and $\tau\in\mathbb{R}$, $0<\varepsilon\leqslant 1$ we have \begin{equation*} \begin{split} \Vert & f^\varepsilon \mathcal{A}_\varepsilon ^{-1/2}\sin (\tau \mathcal{A}_\varepsilon ^{1/2})(f^\varepsilon)^{-1} -(I+\varepsilon \Lambda ^\varepsilon b(\mathbf{D})) f_0 (\mathcal{A}^0)^{-1/2}\sin (\tau (\mathcal{A}^0)^{1/2})f_0^{-1}\Vert _{H^{s+1}(\mathbb{R}^d)\rightarrow H^1(\mathbb{R}^d)} \\ &\leqslant \mathfrak{C}_3(s) (1+\vert \tau\vert )\varepsilon ^s;\quad \mathfrak{C}_3(s):={ {C}}_{22}^{\,1-s}\ {C}_{19}^s\quad\mbox{for}\quad d\leqslant 4,\quad \mathfrak{C}_3(s):={ {C}}_{22}^{\,1-s}\ {C}_{20}^s\quad\mbox{for}\quad d\geqslant 5. \end{split} \end{equation*} \end{theorem}
\subsection{The case where the corrector is equal to zero}
Assume that $g^0=\overline{g}$, i.~e., relations \eqref{overline-g} are valid. Then the $\Gamma$-periodic solution of problem \eqref{Lambda problem} is equal to zero: $\Lambda (\mathbf{x})=0$, and Theorem \ref{Theorem 12.4} implies the following result.
\begin{proposition} Suppose that relations \eqref{overline-g} hold. Then under the assumptions of Theorem~\textnormal{\ref{Theorem 12.1}}, for $0\leqslant s\leqslant 1$ and $\tau\in\mathbb{R}$, $0<\varepsilon\leqslant 1$ we have \begin{equation*} \begin{split} \Vert f^\varepsilon \mathcal{A}_\varepsilon ^{-1/2}\sin (\tau \mathcal{A}_\varepsilon ^{1/2})(f^\varepsilon)^{-1} - f_0 (\mathcal{A}^0)^{-1/2}\sin (\tau (\mathcal{A}^0)^{1/2})f_0^{-1}\Vert _{H^{s+1}(\mathbb{R}^d)\rightarrow H^1(\mathbb{R}^d)} \leqslant \mathfrak{C}_2(s) (1+\vert \tau\vert )\varepsilon ^s. \end{split} \end{equation*} \end{proposition}
\subsection{The Steklov smoothing operator. Another approximation with the corrector}
Let us show that the result of Theorem \ref{Theorem 12.3} remains true with the operator $\Pi _\varepsilon$ replaced by another smoothing operator.
The operator $S_\varepsilon$, $\varepsilon >0$, acting in $L_2(\mathbb{R}^d;\mathbb{C}^n)$ and defined by the relation \begin{equation} \label{S_eps} (S_\varepsilon \mathbf{u})(\mathbf{x})=\vert\Omega\vert ^{-1}\int _\Omega \mathbf{u}(\mathbf{x}-\varepsilon\mathbf{z})\,d\mathbf{z},\quad\mathbf{u}\in L_2(\mathbb{R}^d;\mathbb{C}^n), \end{equation} is called the Steklov smoothing operator.
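Note that $S_\varepsilon$ is a Fourier multiplier: taking the Fourier transform in \eqref{S_eps} gives \begin{equation*} \widehat{(S_\varepsilon \mathbf{u})}(\boldsymbol{\xi}) =\Bigl(\vert\Omega\vert ^{-1}\int _\Omega e^{-i\varepsilon\langle\boldsymbol{\xi},\mathbf{z}\rangle}\,d\mathbf{z}\Bigr)\widehat{\mathbf{u}}(\boldsymbol{\xi}),\quad\boldsymbol{\xi}\in\mathbb{R}^d. \end{equation*} In particular, $\Vert S_\varepsilon\Vert _{L_2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)}\leqslant 1$; Proposition~\ref{Proposition S_eps -I} below is obtained from this representation, the Plancherel theorem, and the elementary bound $\vert 1-e^{-it}\vert\leqslant\vert t\vert$, $t\in\mathbb{R}$.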
The following properties of the operator $S_\varepsilon$ were checked in \cite[Lemmata 1.1 and 1.2]{ZhPas} (see also \cite[Propositions 3.1 and 3.2]{PSu}).
\begin{proposition} \label{Proposition S_eps -I} For any function $\mathbf{u}\in H^1(\mathbb{R}^d;\mathbb{C}^n)$ we have \begin{equation*} \Vert S_\varepsilon\mathbf{u}-\mathbf{u}\Vert _{L_2(\mathbb{R}^d)}\leqslant \varepsilon r_1\Vert \mathbf{D}\mathbf{u}\Vert _{L_2(\mathbb{R}^d)},\quad\varepsilon >0. \end{equation*} \end{proposition}
\begin{proposition} \label{Proposition S_eps f^eps} Let $\Phi(\mathbf{x})$ be a $\Gamma$-periodic function in $\mathbb{R}^d$ such that $\Phi \in L_2(\Omega)$. Then the operator $[\Phi ^\varepsilon]S_\varepsilon$ is bounded in $L_2(\mathbb{R}^d;\mathbb{C}^n)$, and \begin{equation*} \Vert [\Phi ^\varepsilon]S_\varepsilon\Vert _{L_2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)}\leqslant\vert\Omega\vert ^{-1/2}\Vert \Phi \Vert _{L_2(\Omega)},\quad\varepsilon >0. \end{equation*} \end{proposition}
We also need the following statement obtained in \cite[Lemma 3.5]{PSu}.
\begin{proposition} \label{Prop Pi -S} Let $\Pi _\varepsilon $ be the operator \eqref{Pi eps} and let $S_\varepsilon$ be the Steklov smoothing operator \eqref{S_eps}. Let $\Lambda (\mathbf{x})$ be the $\Gamma$-periodic solution of problem \eqref{Lambda problem}. Then \begin{equation*}
\Vert [\Lambda ^\varepsilon ]b(\mathbf{D})(\Pi _\varepsilon -S_\varepsilon )\Vert _{H^2(\mathbb{R}^d)\rightarrow H^1(\mathbb{R}^d)}\leqslant C_{23} ,\quad\varepsilon >0, \end{equation*} where the constant $C_{23}$ depends only on $d$, $m$, $r_0$, $r_1$, $\alpha _0$, $\alpha _1$, $\Vert g\Vert _{L_\infty}$, and $\Vert g^{-1}\Vert _{L_\infty}$. \end{proposition}
Using \eqref{f0<=}, \eqref{A0 no hat}, Proposition~\ref{Prop Pi -S}, and Theorem~\ref{Theorem 12.3}, we obtain the following result.
\begin{theorem} \label{Theorem 12.7} Suppose that the assumptions of Theorem \textnormal{\ref{Theorem 12.1}} are satisfied. Let $\Lambda (\mathbf{x})$ be the $\Gamma$-periodic $(n\times m)$-matrix-valued solution of problem \eqref{Lambda problem}. Let $S_\varepsilon$ be the Steklov smoothing operator \eqref{S_eps}. Then for $\tau\in\mathbb{R}$ and $\varepsilon>0$ we have \begin{equation*} \begin{split} \bigl\Vert & f^\varepsilon \mathcal{A}_\varepsilon ^{-1/2}\sin (\tau\mathcal{A}_\varepsilon ^{1/2})(f^\varepsilon)^{-1} -(I+\varepsilon \Lambda ^\varepsilon b(\mathbf{D})S_\varepsilon) f_0(\mathcal{A}^0)^{-1/2}\sin (\tau (\mathcal{A}^0)^{1/2})f_0^{-1} \bigr\Vert _{H^2(\mathbb{R}^d)\rightarrow H^1(\mathbb{R}^d)} \\ &\leqslant {C}_{24}\varepsilon (1+\vert \tau\vert ), \end{split} \end{equation*} where the constant ${C}_{24}:={C}_{17}+C_{23}\Vert f\Vert _{L_\infty}\Vert f^{-1}\Vert _{L_\infty}$ depends only on $d$, $m$, $r_0$, $r_1$, $\alpha _0$, $\alpha _1$, $\Vert g\Vert _{L_\infty}$, $\Vert g^{-1}\Vert _{L_\infty}$, $\Vert f\Vert _{L_\infty}$, and $\Vert f^{-1}\Vert _{L_\infty}$. \end{theorem}
\begin{remark} \label{Remark 12.8} Similarly to the proof of Theorem \textnormal{\ref{Theorem 12.4}}, using the properties of the Steklov smoothing, one can check that the estimate of the form \eqref{12.6} remains true with $\Pi _\varepsilon$ replaced by $S_\varepsilon$. \end{remark}
\section{Homogenization of hyperbolic systems with periodic coefficients}
\label{Section hyperbolic general case}
\subsection{The statement of the problem. Homogenization for the solutions of hyperbolic systems} Our goal is to apply the results of Section~\ref{Section main results in general case} to homogenization for the solutions of the problem \begin{equation} \label{14.1} \begin{cases} Q^\varepsilon (\mathbf{x})\frac{\partial ^2\mathbf{u}_\varepsilon (\mathbf{x},\tau)}{ \partial \tau ^2} =-b(\mathbf{D})^*g^\varepsilon (\mathbf{x})b(\mathbf{D})\mathbf{u}_\varepsilon (\mathbf{x},\tau)+ Q^\varepsilon (\mathbf{x})\mathbf{F}(\mathbf{x},\tau), \\ \mathbf{u}_\varepsilon (\mathbf{x},0)=0,\quad\frac{\partial\mathbf{u}_\varepsilon (\mathbf{x},0)}{\partial\tau}=\boldsymbol{\psi}(\mathbf{x}), \end{cases} \end{equation} where $\boldsymbol{\psi}\in L_2(\mathbb{R}^d;\mathbb{C}^n)$, $\mathbf{F}\in L_{1,\mathrm{loc}}(\mathbb{R};L_2(\mathbb{R}^d;\mathbb{C}^n))$, and $Q(\mathbf{x})$ is a $\Gamma$-periodic $(n\times n)$-matrix-valued function \eqref{8.1a}. Substituting $\mathbf{z}_\varepsilon (\cdot ,\tau):=(f^\varepsilon)^{-1}\mathbf{u}_\varepsilon (\cdot ,\tau)$ into \eqref{14.1}, we rewrite problem \eqref{14.1} as \begin{equation*} \begin{cases} \frac{\partial ^2\mathbf{z}_\varepsilon (\mathbf{x},\tau)}{ \partial \tau ^2} =-f^\varepsilon(\mathbf{x})^* b(\mathbf{D})^*g^\varepsilon (\mathbf{x})b(\mathbf{D})f^\varepsilon (\mathbf{x})\mathbf{z}_\varepsilon (\mathbf{x},\tau) +f^\varepsilon (\mathbf{x})^{-1}\mathbf{F}(\mathbf{x},\tau), \\ \mathbf{z}_\varepsilon (\mathbf{x},0)=0,\quad \frac{\partial\mathbf{z}_\varepsilon (\mathbf{x},0)}{\partial\tau}=f^\varepsilon (\mathbf{x})^{-1}\boldsymbol{\psi}(\mathbf{x}). 
\end{cases} \end{equation*} Then \begin{equation} \label{14.2} \begin{split} \mathbf{z}_\varepsilon (\cdot ,\tau) =\mathcal{A}_\varepsilon ^{-1/2}\sin (\tau\mathcal{A}_\varepsilon ^{1/2})(f^\varepsilon)^{-1}\boldsymbol{\psi} +\int _0^\tau \mathcal{A}_\varepsilon ^{-1/2}\sin ( (\tau -\widetilde{\tau})\mathcal{A}_\varepsilon ^{1/2})(f^\varepsilon)^{-1}\mathbf{F}(\cdot ,\widetilde{\tau})\,d\widetilde{\tau} \end{split} \end{equation} and \begin{equation} \label{14.3} \begin{split} \mathbf{u}_\varepsilon (\cdot,\tau)&= f^\varepsilon\mathcal{A}_\varepsilon ^{-1/2}\sin (\tau\mathcal{A}_\varepsilon ^{1/2})(f^\varepsilon)^{-1}\boldsymbol{\psi} +\int _0^\tau f^\varepsilon \mathcal{A}_\varepsilon ^{-1/2}\sin ( (\tau -\widetilde{\tau})\mathcal{A}_\varepsilon ^{1/2})(f^\varepsilon)^{-1}\mathbf{F}(\cdot ,\widetilde{\tau})\,d\widetilde{\tau}. \end{split} \end{equation}
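The representation \eqref{14.2} is the standard Duhamel formula; for convenience we sketch the underlying spectral-calculus identities (a routine check, assuming only that $\mathcal{A}_\varepsilon$ is a nonnegative self-adjoint operator): \begin{equation*} \frac{\partial}{\partial\tau}\,\mathcal{A}_\varepsilon ^{-1/2}\sin (\tau\mathcal{A}_\varepsilon ^{1/2})=\cos (\tau\mathcal{A}_\varepsilon ^{1/2}),\qquad \frac{\partial ^2}{\partial\tau ^2}\,\mathcal{A}_\varepsilon ^{-1/2}\sin (\tau\mathcal{A}_\varepsilon ^{1/2})=-\mathcal{A}_\varepsilon ^{1/2}\sin (\tau\mathcal{A}_\varepsilon ^{1/2}). \end{equation*} At $\tau =0$ the sine term vanishes and the cosine term equals the identity operator, which yields the initial conditions for $\mathbf{z}_\varepsilon$; differentiating the integral term in \eqref{14.2} twice produces the inhomogeneity $f^\varepsilon (\mathbf{x})^{-1}\mathbf{F}(\mathbf{x},\tau)$.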
Let $\mathbf{u}_0(\mathbf{x},\tau)$ be the solution of the effective problem \begin{equation} \label{14.4} \begin{cases} \overline{Q}\frac{\partial ^2\mathbf{u}_0(\mathbf{x},\tau)}{\partial \tau ^2}= -b(\mathbf{D})^*g^0b(\mathbf{D})\mathbf{u}_0(\mathbf{x},\tau)+ \overline{Q}\mathbf{F}(\mathbf{x},\tau), \\ \mathbf{u}_0(\mathbf{x},0)=0,\quad\frac{\partial\mathbf{u}_0(\mathbf{x},0)}{\partial\tau}=\boldsymbol{\psi}(\mathbf{x}), \end{cases} \end{equation} where $\overline{Q}=\vert\Omega\vert ^{-1}\int _\Omega Q(\mathbf{x})\,d\mathbf{x}$. Similarly to \eqref{14.2} and \eqref{14.3}, we obtain \begin{equation} \label{14.5} \mathbf{u}_0(\cdot ,\tau) =f_0 (\mathcal{A}^0)^{-1/2}\sin (\tau (\mathcal{A}^0)^{1/2})f_0^{-1}\boldsymbol{\psi} +\int _0^\tau f_0(\mathcal{A}^0)^{-1/2}\sin ((\tau -\widetilde{\tau})(\mathcal{A}^0)^{1/2})f_0^{-1} \mathbf{F}(\cdot,\widetilde{\tau})\,d\widetilde{\tau}. \end{equation}
Using Theorems \ref{Theorem 12.1}, \ref{Theorem 12.3}, and \ref{Theorem 12.7}, and identities \eqref{14.3} and \eqref{14.5}, we arrive at the following result.
\begin{theorem} \label{Theorem 14.1} Let $\mathbf{u}_\varepsilon$ be the solution of problem \eqref{14.1} and let $\mathbf{u}_0$ be the solution of the effective problem \eqref{14.4}.
\noindent $1^\circ$. Let $\boldsymbol{\psi}\in H^1(\mathbb{R}^d;\mathbb{C}^n)$ and let $\mathbf{F}\in L_{1,\mathrm{loc}}(\mathbb{R};H^1(\mathbb{R}^d;\mathbb{C}^n))$. Then for $\tau\in\mathbb{R}$ and $\varepsilon >0$ we have \begin{equation*} \Vert \mathbf{u}_\varepsilon (\cdot ,\tau)-\mathbf{u}_0(\cdot,\tau)\Vert _{L_2(\mathbb{R}^d)} \leqslant C_{12}\varepsilon(1+\vert\tau\vert ) \left( \Vert \boldsymbol{\psi}\Vert _{H^1(\mathbb{R}^d)} +\Vert \mathbf{F}\Vert _{L_1((0,\tau);H^1(\mathbb{R}^d))}\right). \end{equation*}
\noindent $2^\circ$. Let $\boldsymbol{\psi}\in H^2(\mathbb{R}^d;\mathbb{C}^n)$ and let $\mathbf{F}\in L_{1,\mathrm{loc}}(\mathbb{R};H^2(\mathbb{R}^d;\mathbb{C}^n))$. Let $\Lambda (\mathbf{x})$ be the $\Gamma$-periodic solution of problem \eqref{Lambda problem}. Let $\Pi _\varepsilon$ be the smoothing operator \eqref{Pi eps}. By $\mathbf{v}_\varepsilon$ we denote the first order approximation\textnormal{:} \begin{equation} \label{14.5a} \mathbf{v}_\varepsilon (\mathbf{x},\tau):=\mathbf{u}_0(\mathbf{x},\tau)+\varepsilon \Lambda ^\varepsilon b(\mathbf{D})\Pi_\varepsilon\mathbf{u}_0(\mathbf{x},\tau). \end{equation} Then for $\tau\in\mathbb{R}$ and $\varepsilon >0$ we have \begin{equation} \label{Th H1 solutions no interpolation} \Vert \mathbf{u}_\varepsilon (\cdot ,\tau)-\mathbf{v}_\varepsilon(\cdot,\tau)\Vert _{H^1(\mathbb{R}^d)} \leqslant C_{17}\varepsilon(1+\vert\tau\vert ) \left(
\Vert \boldsymbol{\psi}\Vert _{H^2(\mathbb{R}^d)} +\Vert \mathbf{F}\Vert _{L_1((0,\tau);H^2(\mathbb{R}^d))}\right). \end{equation} Let $S_\varepsilon$ be the Steklov smoothing operator \eqref{S_eps}. We put \begin{equation*} \check{\mathbf{v}}_\varepsilon (\mathbf{x},\tau):=\mathbf{u}_0(\mathbf{x},\tau)+\varepsilon\Lambda ^\varepsilon b(\mathbf{D})S_\varepsilon \mathbf{u}_0(\mathbf{x},\tau). \end{equation*} Then for $\tau\in\mathbb{R}$ and $\varepsilon >0$ we have \begin{equation*} \Vert \mathbf{u}_\varepsilon (\cdot ,\tau)-\check{\mathbf{v}}_\varepsilon(\cdot,\tau)\Vert _{H^1(\mathbb{R}^d)} \leqslant C_{24}\varepsilon(1+\vert\tau\vert ) \left( \Vert \boldsymbol{\psi}\Vert _{H^2(\mathbb{R}^d)} +\Vert \mathbf{F}\Vert _{L_1((0,\tau);H^2(\mathbb{R}^d))}\right). \end{equation*} \end{theorem}
\begin{remark} If $d\leqslant 4$ \textnormal{(}or $d\geqslant 5$ and Condition~\textnormal{\ref{Condition Lambda multiplier}} is satisfied\textnormal{),} then we can use Theorem~\textnormal{\ref{Theorem d<=4 chapter 3}} \textnormal{(}respectively, Theorem~\textnormal{\ref{Theorem d>=5 eps})}, i.~e., the estimate of the form \eqref{Th H1 solutions no interpolation} is valid with $\mathbf{v}_\varepsilon$ replaced by \begin{equation} \label{corrector V_eps^0} \mathbf{v}_\varepsilon ^{(0)}(\mathbf{x},\tau):=\mathbf{u}_0(\mathbf{x},\tau)+\varepsilon \Lambda ^\varepsilon b(\mathbf{D})\mathbf{u}_0(\mathbf{x},\tau). \end{equation} \end{remark}
Theorem~\ref{Theorem D>=5 smooth date sec 9} implies the following statement.
\begin{proposition} Assume that $d\geqslant 5$. Let $\boldsymbol{\psi}\in H^{d/2}(\mathbb{R}^d;\mathbb{C}^n)$ and let $\mathbf{F}\in L_{1,\mathrm{loc}}(\mathbb{R};H^{d/2}(\mathbb{R}^d;\mathbb{C}^n))$. Let $\mathbf{u}_\varepsilon$ and $\mathbf{u}_0$ be the solutions of problems \eqref{14.1} and \eqref{14.4} respectively. Let $\mathbf{v}_\varepsilon ^{(0)}$ be given by \eqref{corrector V_eps^0}. Then for $\tau\in\mathbb{R}$ and $0<\varepsilon\leqslant 1$ we have \begin{equation*} \Vert \mathbf{u}_\varepsilon (\cdot ,\tau)-{\mathbf{v}}_\varepsilon ^{(0)}(\cdot,\tau)\Vert _{H^1(\mathbb{R}^d)} \leqslant C_{21}\varepsilon(1+\vert\tau\vert ) \left( \Vert \boldsymbol{\psi}\Vert _{H^{d/2}(\mathbb{R}^d)} +\Vert \mathbf{F}\Vert _{L_1((0,\tau);H^{d/2}(\mathbb{R}^d))}\right). \end{equation*} \end{proposition}
Applying Theorems \ref{Theorem 12.2} and \ref{Theorem 12.4}, we arrive at the following result.
\begin{theorem} \label{Theorem 14.2} Let $\mathbf{u}_\varepsilon$ be the solution of problem \eqref{14.1} and let $\mathbf{u}_0$ be the solution of the effective problem \eqref{14.4}.
\noindent $1^\circ$. Let $\boldsymbol{\psi}\in H^s(\mathbb{R}^d;\mathbb{C}^n)$ and $\mathbf{F}\in L_{1,\mathrm{loc}}(\mathbb{R};H^s(\mathbb{R}^d;\mathbb{C}^n))$, $0\leqslant s\leqslant 1$. Then for $\tau\in\mathbb{R}$ and $\varepsilon >0$ we have \begin{equation*} \Vert \mathbf{u}_\varepsilon (\cdot ,\tau)-\mathbf{u}_0(\cdot ,\tau)\Vert _{L_2(\mathbb{R}^d)} \leqslant \mathfrak{C}_1(s)(1+\vert\tau\vert)\varepsilon ^s\left(\Vert \boldsymbol{\psi}\Vert _{H^s(\mathbb{R}^d)}+ \Vert \mathbf{F}\Vert _{L_{1}((0,\tau);H^s(\mathbb{R}^d))}\right). \end{equation*} Under the additional assumption that $\mathbf{F}\in L_1(\mathbb{R}_\pm; H^{s}(\mathbb{R}^d;\mathbb{C}^n))$, for $0<s\leqslant 1$, $\vert \tau\vert =\varepsilon ^{-\alpha}$, $0<\varepsilon \leqslant 1$, $0<\alpha<s$, we have \begin{equation*} \begin{split} \Vert \mathbf{u}_\varepsilon (\cdot ,\pm \varepsilon ^{-\alpha})-\mathbf{u}_0(\cdot ,\pm \varepsilon ^{-\alpha})\Vert _{L_2(\mathbb{R}^d)} \leqslant 2 \mathfrak{C}_1(s) \varepsilon ^{s-\alpha}\left(\Vert \boldsymbol{\psi}\Vert _{H^s(\mathbb{R}^d)}+ \Vert \mathbf{F}\Vert _{L_{1}(\mathbb{R}_\pm ;H^s(\mathbb{R}^d))}\right). \end{split} \end{equation*}
\noindent $2^\circ$. Let $\boldsymbol{\psi}\in H^{1+s}(\mathbb{R}^d;\mathbb{C}^n)$ and $\mathbf{F}\in L_{1,\mathrm{loc}}(\mathbb{R};H^{1+s}(\mathbb{R}^d;\mathbb{C}^n))$, $0\leqslant s\leqslant 1$.
Let $\mathbf{v}_\varepsilon$ be given by \eqref{14.5a}. Then for $\tau\in\mathbb{R}$ and $0<\varepsilon\leqslant 1$ we have \begin{equation*} \Vert \mathbf{u}_\varepsilon(\cdot ,\tau)-\mathbf{v}_\varepsilon (\cdot ,\tau)\Vert _{H^1(\mathbb{R}^d)} \leqslant \mathfrak{C}_2(s)(1+\vert\tau\vert)\varepsilon ^s\left( \Vert \boldsymbol{\psi}\Vert _{H^{1+s}(\mathbb{R}^d)}+\Vert \mathbf{F}\Vert _{L_{1}((0,\tau);H^{1+s}(\mathbb{R}^d))}\right). \end{equation*} Under the additional assumption that $\mathbf{F}\in L_1(\mathbb{R}_\pm; H^{1+s}(\mathbb{R}^d;\mathbb{C}^n))$, where $0<s\leqslant 1$, for $\tau=\pm\varepsilon^{-\alpha}$, $0<\varepsilon\leqslant 1$, $0<\alpha<s$, we have \begin{equation*} \begin{split} \Vert \mathbf{u}_\varepsilon(\cdot ,\pm\varepsilon^{-\alpha})-\mathbf{v}_\varepsilon (\cdot ,\pm\varepsilon^{-\alpha})\Vert _{H^1(\mathbb{R}^d)} \leqslant 2\mathfrak{C}_2(s)\varepsilon ^{s-\alpha }\left( \Vert \boldsymbol{\psi}\Vert _{H^{1+s}(\mathbb{R}^d)}+ \Vert \mathbf{F}\Vert _{L_{1}(\mathbb{R}_\pm ;H^{1+s}(\mathbb{R}^d))}\right) .
\end{split} \end{equation*} \end{theorem}
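The long-time estimates in Theorem~\ref{Theorem 14.2} for $\vert\tau\vert =\varepsilon ^{-\alpha}$ follow from the corresponding $(1+\vert\tau\vert)\varepsilon ^s$-estimates by the elementary bound \begin{equation*} (1+\varepsilon ^{-\alpha})\varepsilon ^{s}\leqslant 2\varepsilon ^{-\alpha}\varepsilon ^{s}=2\varepsilon ^{s-\alpha},\qquad 0<\varepsilon\leqslant 1, \end{equation*} which explains the factor $2$ and the exponent $s-\alpha$; the error is small precisely when $\alpha <s$.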
By the Banach-Steinhaus theorem, this result implies the following theorem.
\begin{theorem} \label{Theorem 14.3} Let $\mathbf{u}_\varepsilon$ be the solution of problem \eqref{14.1}, and let $\mathbf{u}_0$ be the solution of the effective problem \eqref{14.4}.
\noindent $1^\circ$. Let $\boldsymbol{\psi}\in L_2(\mathbb{R}^d;\mathbb{C}^n)$ and $\mathbf{F}\in L_{1,\mathrm{loc}}(\mathbb{R};L_2(\mathbb{R}^d;\mathbb{C}^n))$. Then \begin{equation*} \lim _{\varepsilon\rightarrow 0 }\Vert \mathbf{u}_\varepsilon (\cdot ,\tau)-\mathbf{u}_0(\cdot ,\tau)\Vert _{L_2(\mathbb{R}^d)} =0,\quad \tau\in\mathbb{R}. \end{equation*}
\noindent $2^\circ$. Let $\boldsymbol{\psi}\in H^{1}(\mathbb{R}^d;\mathbb{C}^n)$ and $\mathbf{F}\in L_{1,\mathrm{loc}}(\mathbb{R};H^{1}(\mathbb{R}^d;\mathbb{C}^n))$.
Let $\mathbf{v}_\varepsilon$ be given by \eqref{14.5a}. Then for $\tau\in\mathbb{R}$ we have \begin{equation*} \lim _{\varepsilon\rightarrow 0}\Vert \mathbf{u}_\varepsilon(\cdot ,\tau)-\mathbf{v}_\varepsilon (\cdot ,\tau)\Vert _{H^1(\mathbb{R}^d)} =0. \end{equation*} \end{theorem}
\begin{remark} Taking Remark \textnormal{\ref{Remark 12.8}} into account, we see that the results of Theorems \textnormal{\ref{Theorem 14.2}($2^\circ$)} and \textnormal{\ref{Theorem 14.3}($2^\circ$)} remain true with the operator $\Pi _\varepsilon$ replaced by the Steklov smoothing $S_\varepsilon$, i.~e., with $\mathbf{v}_\varepsilon$ replaced by $\check{\mathbf{v}}_\varepsilon$. This only changes the constants in the estimates. \end{remark}
Applying Theorem \ref{Theorem 12.6}, we make the following observation.
\begin{remark} \label{Remark no S-eps} For $0<\varepsilon\leqslant 1$, under Condition \textnormal{\ref{Condition Lambda in L infty}}, the analogs of Theorems \textnormal{\ref{Theorem 14.1}, \ref{Theorem 14.2}}, and \textnormal{\ref{Theorem 14.3}} are valid with the operators $\Pi _\varepsilon$ and $S_\varepsilon$ replaced by the identity operator. \end{remark}
\subsection{Approximation of the flux}
Let $\mathbf{p}_\varepsilon (\mathbf{x},\tau )$ be the ``flux'' \begin{equation} \label{P_eps ho hat} \mathbf{p}_\varepsilon (\mathbf{x},\tau):=g^\varepsilon (\mathbf{x})b(\mathbf{D})\mathbf{u}_\varepsilon (\mathbf{x},\tau). \end{equation}
\begin{theorem} \label{Theorem 14.6} Suppose that the assumptions of Theorem~\textnormal{\ref{Theorem 14.1}($2^\circ$)} are satisfied. Let $\mathbf{p}_\varepsilon $ be the ``flux'' \eqref{P_eps ho hat}, and let $\widetilde{g}(\mathbf{x})$ be the matrix-valued function \eqref{tilde g}. Then for $\tau\in\mathbb{R}$ and $\varepsilon >0$ we have \begin{align} \label{flux with Pi_eps no hat} &\Vert \mathbf{p}_\varepsilon (\cdot,\tau)-\widetilde{g}^\varepsilon b(\mathbf{D})\Pi _\varepsilon \mathbf{u}_0(\cdot ,\tau)\Vert _{L_2(\mathbb{R}^d)} \leqslant C_{25} \varepsilon (1+\vert\tau\vert)\left(\Vert \boldsymbol{\psi}\Vert _{H^2(\mathbb{R}^d)}+\Vert \mathbf{F}\Vert _{L_{1}((0,\tau);H^2(\mathbb{R}^d))}\right), \\ \label{flux with S_eps no hat} &\Vert \mathbf{p}_\varepsilon (\cdot,\tau)-\widetilde{g}^\varepsilon b(\mathbf{D})S _\varepsilon \mathbf{u}_0(\cdot ,\tau)\Vert _{L_2(\mathbb{R}^d)} \leqslant C_{26}\varepsilon (1+\vert\tau\vert)\left(\Vert \boldsymbol{\psi}\Vert _{H^2(\mathbb{R}^d)}+\Vert \mathbf{F}\Vert _{L_{1}((0,\tau);H^2(\mathbb{R}^d))}\right). \end{align} The constants ${C}_{25}$ and ${C}_{26}$ depend only on $m$, $d$, $\alpha _0$, $\alpha _1$, $\Vert g\Vert _{L_\infty}$, $\Vert g^{-1}\Vert _{L_\infty}$, $\Vert f\Vert _{L_\infty}$, $\Vert f^{-1}\Vert _{L_\infty}$, and the~parameters of the lattice $\Gamma$. \end{theorem}
\begin{proof} From \eqref{isometria}, \eqref{12.3a-2}, \eqref{14.3}, and \eqref{14.5}, it follows that \begin{equation} \label{14.6} \begin{split} \Bigl\Vert & \widehat{\mathcal{A}}_\varepsilon ^{1/2} \Bigl( \mathbf{u}_\varepsilon (\cdot ,\tau)-(I+\varepsilon \Lambda ^\varepsilon b(\mathbf{D})\Pi _\varepsilon )\mathbf{u}_0(\cdot,\tau)\Bigr)\Bigr\Vert _{L_2(\mathbb{R}^d)} \\ &\leqslant
C_{13}\varepsilon(1+\vert\tau\vert) \left(\Vert \boldsymbol{\psi}\Vert _{H^2(\mathbb{R}^d)}+\Vert \mathbf{F}\Vert _{L_{1}((0,\tau);H^2(\mathbb{R}^d))}\right). \end{split} \end{equation} By \eqref{A_eps} and Proposition~\ref{Proposition Pi eps -I}, \begin{equation} \label{14.7} \begin{split} \Vert &\widehat{\mathcal{A}}_\varepsilon ^{1/2}(\Pi _\varepsilon -I)\mathbf{u}_0(\cdot ,\tau)\Vert _{L_2(\mathbb{R}^d)} \leqslant\varepsilon \alpha _1^{1/2}r_0^{-1}\Vert g\Vert ^{1/2}_{L_\infty} \Vert \mathbf{D}^2 \mathbf{u}_0(\cdot,\tau)\Vert _{L_2(\mathbb{R}^d)} . \end{split} \end{equation} Using \eqref{f0<=}, \eqref{14.5}, and the inequality $\vert \sin x\vert /\vert x\vert \leqslant 1$, $x\in\mathbb{R}$, we obtain \begin{equation} \label{u0 in H2<=} \Vert \mathbf{D}^2 \mathbf{u}_0(\cdot,\tau)\Vert _{L_2(\mathbb{R}^d)}\leqslant\Vert \mathbf{u}_0(\cdot,\tau)\Vert _{H^2(\mathbb{R}^d)}\leqslant \vert \tau\vert \Vert f\Vert _{L_\infty}\Vert f^{-1}\Vert _{L_\infty}\left(\Vert \boldsymbol{\psi}\Vert _{H^2(\mathbb{R}^d)}+\Vert \mathbf{F}\Vert _{L_{1}((0,\tau);H^2(\mathbb{R}^d))}\right). \end{equation} Combining \eqref{P_eps ho hat} and \eqref{14.6}--\eqref{u0 in H2<=}, we arrive at \begin{equation} \label{14.8} \begin{split} \Vert &\mathbf{p}_\varepsilon (\cdot ,\tau)-g^\varepsilon b(\mathbf{D})(I+\varepsilon \Lambda ^\varepsilon b(\mathbf{D}))\Pi_\varepsilon \mathbf{u}_0(\cdot ,\tau)\Vert_{L_2(\mathbb{R}^d)} \\ &\leqslant C_{27}\varepsilon (1+\vert \tau\vert) \left(\Vert \boldsymbol{\psi}\Vert _{H^2(\mathbb{R}^d)}+\Vert \mathbf{F}\Vert _{L_{1}((0,\tau);H^2(\mathbb{R}^d))}\right), \end{split} \end{equation} where ${{C}}_{27}:={C}_{13}\Vert g\Vert ^{1/2}_{L_\infty} +\alpha_1^{1/2}r_0^{-1}\Vert g\Vert _{L_\infty}\Vert f\Vert _{L_\infty}\Vert f^{-1}\Vert _{L_\infty}$.
We have \begin{equation} \label{proof fluxes 5} \begin{split} \varepsilon g^\varepsilon b(\mathbf{D})\Lambda^\varepsilon b(\mathbf{D})\Pi _\varepsilon\mathbf{u}_0(\cdot ,\tau) =g^\varepsilon (b(\mathbf{D})\Lambda)^\varepsilon b(\mathbf{D})\Pi_\varepsilon\mathbf{u}_0(\cdot ,\tau) +\varepsilon g^\varepsilon \sum _{l=1}^d b_l \Lambda ^\varepsilon \Pi _\varepsilon ^{(m)}D_lb(\mathbf{D})\mathbf{u}_0(\cdot ,\tau). \end{split} \end{equation} By \eqref{<b^*b<}, \eqref{b_j <=}, \eqref{proof fluxes 6}, and \eqref{u0 in H2<=}, \begin{equation} \label{14.9} \begin{split} \Bigl\Vert &\varepsilon g^\varepsilon \sum _{l=1}^d b_l\Lambda ^\varepsilon \Pi _\varepsilon ^{(m)}D_l b(\mathbf{D})\mathbf{u}_0(\cdot,\tau)\Bigr\Vert _{L_2(\mathbb{R}^d)} \leqslant \varepsilon \Vert g\Vert _{L_\infty}\alpha _1 d^{1/2}M_1\Vert \mathbf{D}^2\mathbf{u}_0(\cdot ,\tau)\Vert _{L_2(\mathbb{R}^d)} \\ &\leqslant \varepsilon\vert\tau\vert \alpha _1 d^{1/2}M_1\Vert g\Vert _{L_\infty} \Vert f\Vert _{L_\infty}\Vert f^{-1}\Vert _{L_\infty}\left(\Vert \boldsymbol{\psi}\Vert _{H^2(\mathbb{R}^d)}+ \Vert \mathbf{F}\Vert _{L_{1}((0,\tau);H^2(\mathbb{R}^d))}\right). \end{split} \end{equation} Now, relations \eqref{tilde g} and \eqref{14.8}--\eqref{14.9} imply estimate \eqref{flux with Pi_eps no hat} with the constant ${C}_{25}:={{C}}_{27}+\alpha _1 d^{1/2}M_1\Vert g\Vert _{L_\infty}\Vert f\Vert _{L_\infty}\Vert f^{-1}\Vert _{L_\infty}$.
We proceed to the proof of inequality \eqref{flux with S_eps no hat}. By \eqref{14.6}, \begin{equation} \label{14.9-2} \begin{split} \Bigl\Vert &\widehat{\mathcal{A}}_\varepsilon ^{1/2} \Bigl( \mathbf{u}_\varepsilon (\cdot ,\tau) -(I+\varepsilon \Lambda ^\varepsilon b(\mathbf{D}))S_\varepsilon \mathbf{u}_0(\cdot ,\tau)\Bigr) \Bigr\Vert _{L_2(\mathbb{R}^d)} \\ &\leqslant C_{13}\varepsilon (1+\vert\tau\vert)\left(\Vert \boldsymbol{\psi}\Vert _{H^2(\mathbb{R}^d)}+\Vert \mathbf{F}\Vert _{L_{1}((0,\tau);H^2(\mathbb{R}^d))}\right) \\ &+\Vert \widehat{\mathcal{A}}_\varepsilon ^{1/2}(S_\varepsilon -I)\mathbf{u}_0(\cdot ,\tau)\Vert _{L_2(\mathbb{R}^d)} +\varepsilon\Vert \widehat{\mathcal{A}}_\varepsilon ^{1/2}\Lambda^\varepsilon b(\mathbf{D})(\Pi _\varepsilon -S_\varepsilon)\mathbf{u}_0(\cdot,\tau)\Vert _{L_2(\mathbb{R}^d)}. \end{split} \end{equation} Similarly to \eqref{14.7}, using Proposition~\ref{Proposition S_eps -I} and \eqref{u0 in H2<=}, we have \begin{equation} \label{14.10} \begin{split} \Vert &\widehat{\mathcal{A}}_\varepsilon ^{1/2}(S_\varepsilon -I)\mathbf{u}_0(\cdot ,\tau)\Vert _{L_2(\mathbb{R}^d)} \leqslant \varepsilon r_1 \alpha _1^{1/2}\Vert g\Vert ^{1/2}_{L_\infty}\Vert \mathbf{D}^2\mathbf{u}_0(\cdot ,\tau)\Vert _{L_2(\mathbb{R}^d)} \\ &\leqslant \varepsilon \vert\tau\vert r_1 \alpha _1^{1/2}\Vert g\Vert ^{1/2}_{L_\infty}\Vert f\Vert _{L_\infty}\Vert f^{-1}\Vert _{L_\infty} \left(\Vert \boldsymbol{\psi}\Vert _{H^2(\mathbb{R}^d)}+\Vert \mathbf{F}\Vert _{L_{1}((0,\tau);H^2(\mathbb{R}^d))}\right). \end{split} \end{equation} To estimate the third summand in the right-hand side of \eqref{14.9-2}, we use \eqref{<b^*b<}, \eqref{A_eps}, and Proposition~\ref{Prop Pi -S}. 
Then \begin{equation} \label{14.11} \begin{split} \varepsilon \Vert &\widehat{\mathcal{A}}_\varepsilon ^{1/2}\Lambda ^\varepsilon b(\mathbf{D})(\Pi _\varepsilon -S_\varepsilon )\mathbf{u}_0(\cdot,\tau)\Vert _{L_2(\mathbb{R}^d)} \leqslant \varepsilon \alpha_1^{1/2}C_{23}\Vert g\Vert ^{1/2}_{L_\infty}\Vert \mathbf{u}_0(\cdot ,\tau)\Vert _{H^2(\mathbb{R}^d)} \\ &\leqslant \varepsilon\vert\tau\vert \alpha _1^{1/2}C_{23}\Vert g\Vert ^{1/2}_{L_\infty} \Vert f\Vert _{L_\infty}\Vert f^{-1}\Vert _{L_\infty}\left( \Vert \boldsymbol{\psi}\Vert _{H^2(\mathbb{R}^d)}+\Vert \mathbf{F}\Vert _{L_{1}((0,\tau);H^2(\mathbb{R}^d))}\right). \end{split} \end{equation} Combining \eqref{A_eps}, \eqref{P_eps ho hat}, and \eqref{14.9-2}--\eqref{14.11}, we have \begin{equation} \label{14.12} \begin{split} \Vert& \mathbf{p}_\varepsilon (\cdot ,\tau)-g^\varepsilon b(\mathbf{D})(I+\varepsilon \Lambda ^\varepsilon b(\mathbf{D}))S_\varepsilon \mathbf{u}_0 (\cdot ,\tau)\Vert _{L_2(\mathbb{R}^d)} \\ &\leqslant C_{28}\varepsilon (1+\vert \tau\vert ) \left(\Vert \boldsymbol{\psi}\Vert _{H^2(\mathbb{R}^d)}+\Vert \mathbf{F}\Vert _{L_{1}((0,\tau);H^2(\mathbb{R}^d))}\right). \end{split} \end{equation} Here $ {{C}}_{28}:={C}_{13}\Vert g\Vert ^{1/2}_{L_\infty} +\alpha _1^{1/2}(r_1+C_{23} )\Vert g\Vert _{L_\infty}\Vert f\Vert _{L_\infty}\Vert f^{-1}\Vert _{L_\infty}$. From Proposition \ref{Proposition S_eps f^eps} and \eqref{Lambda<=} it follows that $ \Vert \Lambda ^\varepsilon S_\varepsilon ^{(m)}\Vert _{L_2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)} \leqslant M_1$. (Here $S_\varepsilon ^{(m)}$ is the Steklov smoothing operator acting in $L_2(\mathbb{R}^d;\mathbb{C}^m)$.) Thus, by analogy with \eqref{proof fluxes 5} and \eqref{14.9}, from \eqref{14.12} we deduce estimate \eqref{flux with S_eps no hat} with the constant ${C}_{26}:={{C}}_{28}+\alpha _1 d^{1/2}M_1\Vert g\Vert _{L_\infty}\Vert f\Vert _{L_\infty}\Vert f^{-1}\Vert _{L_\infty}$. \end{proof}
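The identity \eqref{proof fluxes 5} used in the proof is a consequence of the Leibniz rule for the first order operator $b(\mathbf{D})=\sum _{l=1}^d b_lD_l$ applied to the product of the oscillating factor $\Lambda ^\varepsilon$ and a smooth function (a sketch; here $\mathbf{w}$ stands for an arbitrary smooth $\mathbb{C}^m$-valued function): \begin{equation*} \varepsilon\, b(\mathbf{D})\bigl(\Lambda ^\varepsilon \mathbf{w}\bigr)=(b(\mathbf{D})\Lambda)^\varepsilon \mathbf{w}+\varepsilon \sum _{l=1}^d b_l\Lambda ^\varepsilon D_l\mathbf{w}, \end{equation*} since $D_l(\Lambda ^\varepsilon)=\varepsilon ^{-1}(D_l\Lambda)^\varepsilon$.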
\begin{lemma} \label{Lemma 14.7} For $\varepsilon >0$ and $\tau\in\mathbb{R}$ we have \begin{equation} \label{Lm flux grubo} \Vert g^\varepsilon b(\mathbf{D})f^\varepsilon\mathcal{A}_\varepsilon ^{-1/2}\sin (\tau\mathcal{A}_\varepsilon ^{1/2})(f^\varepsilon)^{-1} -\widetilde{g}^\varepsilon b(\mathbf{D})\Pi _\varepsilon f_0 (\mathcal{A}^0)^{-1/2}\sin (\tau (\mathcal{A}^0)^{1/2})f_0^{-1}\Vert _{L_2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)} \leqslant {C}_{29}. \end{equation} Here ${C}_{29}:=\left(\Vert g\Vert ^{1/2}_{L_\infty}+\Vert g\Vert _{L_\infty}\Vert g^{-1}\Vert_{L_\infty}^{1/2}(m^{1/2}\Vert g\Vert _{L_\infty}^{1/2}\Vert g^{-1}\Vert ^{1/2}_{L_\infty}+1)\right)\Vert f^{-1}\Vert _{L_\infty}$. \end{lemma}
\begin{proof} By \eqref{A_eps no hat}, \begin{equation} \label{lm pr 1} \Vert g^\varepsilon b(\mathbf{D})f^\varepsilon \mathcal{A}_\varepsilon ^{-1/2}\sin (\tau {\mathcal{A}}_\varepsilon ^{1/2})(f^\varepsilon)^{-1}\Vert _{L_2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)} \leqslant \Vert g\Vert ^{1/2}_{L_\infty}\Vert f^{-1}\Vert _{L_\infty}. \end{equation}
Next, by \eqref{g^0<=}, \eqref{f0<=}, and \eqref{A0 no hat}, \begin{equation} \begin{split} \Vert &\widetilde{g}^\varepsilon \Pi _\varepsilon ^{(m)}b(\mathbf{D})f_0 (\mathcal{A}^0)^{-1/2}\sin (\tau(\mathcal{A}^0)^{1/2})f_0^{-1}\Vert _{L_2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)} \\ &\leqslant \Vert \widetilde{g}^\varepsilon \Pi _\varepsilon ^{(m)}\Vert _{L_2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)}\Vert g^{-1}\Vert ^{1/2}_{L_\infty}\Vert f^{-1}\Vert _{L_\infty}. \end{split} \end{equation} Using Proposition \ref{Proposition Pi eps f eps} and \eqref{tilde g}, \eqref{b(D)Lambda <=}, we obtain \begin{equation} \label{lm pr 3} \Vert \widetilde{g}^\varepsilon \Pi _\varepsilon ^{(m)}\Vert _{L_2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)} \leqslant\Vert g\Vert _{L_\infty}(\vert\Omega\vert ^{-1/2}\Vert b(\mathbf{D})\Lambda\Vert _{L_2(\Omega)}+1) \leqslant\Vert g\Vert _{L_\infty}(m^{1/2}\Vert g\Vert _{L_\infty}^{1/2}\Vert g^{-1}\Vert ^{1/2}_{L_\infty}+1). \end{equation}
Combining \eqref{lm pr 1}--\eqref{lm pr 3}, we arrive at estimate \eqref{Lm flux grubo}. \end{proof}
\begin{theorem} \label{Theorem 14.8} $1^\circ$. Let $\mathbf{u}_\varepsilon$ and $\mathbf{u}_0$ be the solutions of problems \eqref{14.1} and \eqref{14.4}, respectively, for $\boldsymbol{\psi}\in H^s(\mathbb{R}^d;\mathbb{C}^n)$ and $ \mathbf{F}\in L_{1,\mathrm{loc}}(\mathbb{R};H^s(\mathbb{R}^d;\mathbb{C}^n))$, where $0\leqslant s\leqslant 2$. Let $\mathbf{p}_\varepsilon$ be given by \eqref{P_eps ho hat} and let $\widetilde{g}(\mathbf{x})$ be the matrix-valued function \eqref{tilde g}. Then for $\tau\in\mathbb{R}$ and $\varepsilon >0$ we have \begin{equation} \label{10.24a} \begin{split} \Vert \mathbf{p}_\varepsilon (\cdot ,\tau)-\widetilde{g}^\varepsilon b(\mathbf{D})\Pi _\varepsilon \mathbf{u}_0(\cdot,\tau)\Vert _{L_2(\mathbb{R}^d)} \leqslant \mathfrak{C}_4(s)(1+\vert\tau\vert)^{s/2}\varepsilon ^{s/2}\left(\Vert\boldsymbol{\psi}\Vert _{H^s(\mathbb{R}^d)} +\Vert \mathbf{F}\Vert _{L_1((0,\tau);H^s(\mathbb{R}^d))}\right). \end{split} \end{equation} Here $\mathfrak{C}_4(s):={C}_{29}^{1-s/2}{C}_{25}^{s/2}$. Under the additional assumption that $\mathbf{F}\in L_1(\mathbb{R}_\pm; H^{s}(\mathbb{R}^d;\mathbb{C}^n))$, where $0\leqslant s\leqslant 2$, for $\vert\tau\vert =\varepsilon^{-\alpha}$, $0<\varepsilon\leqslant 1$, $0<\alpha<1$, we have \begin{equation} \label{10.24b} \begin{split} \Vert &\mathbf{p}_\varepsilon (\cdot ,\pm\varepsilon^{-\alpha})-\widetilde{g}^\varepsilon b(\mathbf{D})\Pi _\varepsilon \mathbf{u}_0(\cdot,\pm\varepsilon^{-\alpha})\Vert _{L_2(\mathbb{R}^d)} \\ &\leqslant 2^{s/2}\mathfrak{C}_4(s)\varepsilon ^{s(1-\alpha)/2}\left(\Vert\boldsymbol{\psi}\Vert _{H^s(\mathbb{R}^d)}+\Vert \mathbf{F}\Vert _{L_1(\mathbb{R}_\pm ;H^s(\mathbb{R}^d))}\right). \end{split} \end{equation}
\noindent $2^\circ$. If $\boldsymbol{\psi}\in L_2(\mathbb{R}^d;\mathbb{C}^n)$ and $\mathbf{F}\in L_{1,\mathrm{loc}}(\mathbb{R};L_2(\mathbb{R}^d;\mathbb{C}^n))$, then \begin{equation*} \lim \limits _{\varepsilon\rightarrow 0}\Vert \mathbf{p}_\varepsilon(\cdot,\tau)-\widetilde{g}^\varepsilon b(\mathbf{D})\Pi_\varepsilon \mathbf{u}_0(\cdot,\tau)\Vert _{L_2(\mathbb{R}^d)}=0,\quad\tau\in\mathbb{R}. \end{equation*}
\noindent $3^\circ$. If $\boldsymbol{\psi}\in L_2(\mathbb{R}^d;\mathbb{C}^n)$ and $\mathbf{F}\in L_{1}(\mathbb{R}_\pm;L_2(\mathbb{R}^d;\mathbb{C}^n))$, then \begin{equation*} \lim _{\varepsilon \rightarrow 0}\Vert \mathbf{p}_\varepsilon (\cdot ,\pm \varepsilon ^{-\alpha})-\widetilde{g}^\varepsilon b(\mathbf{D})\Pi _\varepsilon\mathbf{u}_0(\cdot,\pm\varepsilon ^{-\alpha} )\Vert _{L_2(\mathbb{R}^d)}=0, \quad 0<\varepsilon\leqslant 1, \quad0<\alpha<1. \end{equation*} \end{theorem}
\begin{proof} Rewriting estimate \eqref{flux with Pi_eps no hat} with $\mathbf{F}=0$ in operator terms and interpolating with estimate \eqref{Lm flux grubo}, we conclude that \begin{equation*} \begin{split} \Vert& g^\varepsilon b(\mathbf{D})f^\varepsilon {\mathcal{A}}_\varepsilon ^{-1/2}\sin(\tau {\mathcal{A}}_\varepsilon ^{1/2} )(f^\varepsilon)^{-1} -\widetilde{g}^\varepsilon b(\mathbf{D})\Pi _\varepsilon f_0({\mathcal{A}}^0)^{-1/2}\sin(\tau({\mathcal{A}}^0)^{1/2})f_0^{-1}\Vert _{H^s(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)}\\ &\leqslant C_{29}^{1-s/2}C_{25}^{s/2}(1+\vert \tau\vert )^{s/2}\varepsilon ^{s/2}. \end{split} \end{equation*} Thus, by \eqref{14.3} and \eqref{14.5}, we derive estimate \eqref{10.24a}.
The assertion $2^\circ$ follows from \eqref{10.24a} by the Banach-Steinhaus theorem.
The result $3^\circ$ is a consequence of \eqref{10.24b} and the Banach-Steinhaus theorem. \end{proof}
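The interpolation step in the proof relies on the standard inequality for a bounded operator $T$ (a sketch; it follows, e.g., by complex interpolation between the endpoint spaces $L_2(\mathbb{R}^d)$ and $H^2(\mathbb{R}^d)$): \begin{equation*} \Vert T\Vert _{H^s(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)}\leqslant \Vert T\Vert _{L_2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)}^{1-s/2}\,\Vert T\Vert _{H^2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)}^{s/2},\qquad 0\leqslant s\leqslant 2, \end{equation*} applied to the operator difference from \eqref{Lm flux grubo}, whose endpoint norms are controlled by $C_{29}$ and $C_{25}\varepsilon (1+\vert\tau\vert)$, respectively.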
\begin{remark} \label{Remark flux no S-eps} Using Proposition \textnormal{\ref{Proposition S_eps f^eps}}, it is easily seen that the results of Lemma~\textnormal{\ref{Lemma 14.7}} are valid with the operator $\Pi_\varepsilon$ replaced by the operator $S_\varepsilon$. Hence, by using \eqref{flux with S_eps no hat} and interpolation, we deduce the analog of Theorem \textnormal{\ref{Theorem 14.8}} with $\Pi _\varepsilon$ replaced by $S_\varepsilon$. This only changes the constants in the estimates. \end{remark}
\subsection{On the possibility to remove $\Pi _\varepsilon$ from the approximation of the flux}
\begin{theorem} \label{Theorem d<=4 fluxes} Under the assumptions of Theorem~\textnormal{\ref{Theorem 14.6}}, let $d\leqslant 4$. Then for $\tau\in\mathbb{R}$ and $0<\varepsilon\leqslant 1$ we have \begin{equation} \label{Th fluxes d<=4} \Vert \mathbf{p}_\varepsilon (\cdot,\tau)-\widetilde{g}^\varepsilon b(\mathbf{D}) \mathbf{u}_0(\cdot ,\tau)\Vert _{L_2(\mathbb{R}^d)} \leqslant C_{30} \varepsilon (1+\vert\tau\vert)\left(\Vert \boldsymbol{\psi}\Vert _{H^2(\mathbb{R}^d)}+\Vert \mathbf{F}\Vert _{L_{1}((0,\tau);H^2(\mathbb{R}^d))}\right). \end{equation} The constant $C_{30}$ depends only on $m$, $n$, $d$, $\alpha _0$, $\alpha _1$, $\Vert g\Vert _{L_\infty}$, $\Vert g^{-1}\Vert _{L_\infty}$, $\Vert f\Vert _{L_\infty}$, $\Vert f^{-1}\Vert _{L_\infty}$, and the parameters of the lattice $\Gamma$. \end{theorem}
\begin{proof} The proof repeats the proof of Theorem~\textnormal{\ref{Theorem 14.6}} with some simplifications. By \eqref{9.20a NEW}, \eqref{14.3}, and \eqref{14.5}, \begin{equation} \label{proof flux no Pi d<=4 0} \Vert \widehat{\mathcal{A}}_\varepsilon ^{1/2}\left(\mathbf{u}_\varepsilon(\cdot ,\tau ) -(I+\varepsilon\Lambda ^\varepsilon b(\mathbf{D}))\mathbf{u}_0(\cdot ,\tau)\right)\Vert _{L_2(\mathbb{R}^d)} \leqslant C_{14}\varepsilon (1+\vert\tau\vert)\left(\Vert \boldsymbol{\psi}\Vert _{H^2(\mathbb{R}^d)}+\Vert \mathbf{F}\Vert _{L_{1}((0,\tau);H^2(\mathbb{R}^d))}\right). \end{equation} Then, according to \eqref{A_eps} and \eqref{P_eps ho hat}, \begin{equation} \label{proof flux no Pi d<=4 I} \begin{split} \Vert &\mathbf{p}_\varepsilon (\cdot ,\tau)-g^\varepsilon b(\mathbf{D})(I+\varepsilon \Lambda ^\varepsilon b(\mathbf{D}))\mathbf{u}_0(\cdot ,\tau)\Vert _{L_2(\mathbb{R}^d)} \\ &\leqslant \Vert g\Vert ^{1/2}_{L_\infty}C_{14}\varepsilon (1+\vert \tau\vert)\left(\Vert \boldsymbol{\psi}\Vert _{H^2(\mathbb{R}^d)}+\Vert \mathbf{F}\Vert _{L_{1}((0,\tau);H^2(\mathbb{R}^d))}\right). \end{split} \end{equation} Similarly to \eqref{proof fluxes 5}, \begin{equation} \label{proof flux no Pi d<=4 II} \varepsilon g^\varepsilon b(\mathbf{D})\Lambda^\varepsilon b(\mathbf{D})\mathbf{u}_0(\cdot ,\tau) =g^\varepsilon (b(\mathbf{D})\Lambda)^\varepsilon b(\mathbf{D})\mathbf{u}_0(\cdot ,\tau) +\varepsilon g^\varepsilon \sum _{l=1}^d b_l \Lambda ^\varepsilon D_lb(\mathbf{D})\mathbf{u}_0(\cdot ,\tau). \end{equation} Let us estimate the second summand in the right-hand side. 
By \eqref{b_j <=}, \begin{equation} \label{proof flux no Pi d<=4 a} \begin{split} \Bigl\Vert &\varepsilon g^\varepsilon \sum _{l=1}^d b_l \Lambda ^\varepsilon D_lb(\mathbf{D})\mathbf{u}_0(\cdot ,\tau)\Bigr\Vert _{L_2(\mathbb{R}^d)} \leqslant \varepsilon\Vert g\Vert _{L_\infty}(d\alpha _1)^{1/2}\Vert \Lambda ^\varepsilon\mathbf{D} b(\mathbf{D})\mathbf{u}_0(\cdot ,\tau)\Vert _{L_2(\mathbb{R}^d)} \\ &\leqslant \varepsilon\Vert g\Vert _{L_\infty}(d\alpha _1)^{1/2} \Vert [\Lambda]\Vert _{H^1(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)} \Vert \mathbf{D} b(\mathbf{D})\mathbf{u}_0(\cdot ,\tau)\Vert _{H^1(\mathbb{R}^d)},\quad 0<\varepsilon\leqslant 1. \end{split} \end{equation} By \eqref{g^0<=}, \eqref{f0<=}, \eqref{A0 no hat}, and \eqref{14.5}, \begin{equation} \label{proof flux no Pi d<=4 b} \Vert \mathbf{D} b(\mathbf{D})\mathbf{u}_0(\cdot ,\tau)\Vert _{H^1(\mathbb{R}^d)}\leqslant \Vert g^{-1}\Vert ^{1/2}_{L_\infty}\Vert f^{-1}\Vert _{L_\infty}\left(\Vert \boldsymbol{\psi}\Vert _{H^2(\mathbb{R}^d)}+\Vert \mathbf{F}\Vert _{L_{1}((0,\tau);H^2(\mathbb{R}^d))}\right). \end{equation} Combining \eqref{Lambda H1->L2}, \eqref{proof flux no Pi d<=4 a}, and \eqref{proof flux no Pi d<=4 b}, we have \begin{equation} \label{proof flux no Pi d<=4 III} \begin{split} \Bigl\Vert \varepsilon g^\varepsilon \sum _{l=1}^d b_l \Lambda ^\varepsilon D_lb(\mathbf{D})\mathbf{u}_0(\cdot ,\tau)\Bigr\Vert _{L_2(\mathbb{R}^d)} &\leqslant \varepsilon\Vert g\Vert _{L_\infty}(d\alpha _1)^{1/2}\mathfrak{C}_d\Vert g^{-1}\Vert ^{1/2}_{L_\infty}\Vert f^{-1}\Vert _{L_\infty} \\ &\times \left(\Vert \boldsymbol{\psi}\Vert _{H^2(\mathbb{R}^d)}+\Vert \mathbf{F}\Vert _{L_{1}((0,\tau);H^2(\mathbb{R}^d))}\right),\quad d\leqslant 4. 
\end{split} \end{equation} Now relations \eqref{tilde g}, \eqref{proof flux no Pi d<=4 I}, \eqref{proof flux no Pi d<=4 II}, and \eqref{proof flux no Pi d<=4 III} imply estimate \eqref{Th fluxes d<=4} with the constant $$ C_{30}:=C_{14}\Vert g\Vert ^{1/2}_{L_\infty}+(d\alpha _1)^{1/2}\mathfrak{C}_d\Vert g\Vert _{L_\infty}\Vert g^{-1}\Vert ^{1/2}_{L_\infty}\Vert f^{-1}\Vert _{L_\infty} .$$ \end{proof}
Let $d\geqslant 5$ and let Condition~\ref{Condition Lambda multiplier} be satisfied. Then, by the scaling transformation, the analog of \eqref{9.20a NEW} (with the constant $C_{15}$ instead of $C_{14}$) follows from \eqref{Th d>4 Lambda multiplier}. We wish to remove $\Pi _\varepsilon$ from the approximation of the flux similarly to \eqref{proof flux no Pi d<=4 0}--\eqref{proof flux no Pi d<=4 III}. According to \cite[Subsection~1.6, Proposition~1]{MaSh}, Condition~\ref{Condition Lambda multiplier} implies the boundedness of $[\Lambda]$ as an operator from $H^1(\mathbb{R}^d;\mathbb{C}^m)$ to $L_2(\mathbb{R}^d;\mathbb{C}^n)$ with the estimate $\Vert [\Lambda ]\Vert _{H^1(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)}\leqslant C \Vert [\Lambda ]\Vert _{H^2(\mathbb{R}^d)\rightarrow H^1(\mathbb{R}^d)}$.
The following statement can be checked by analogy with the proof of Theorem~\ref{Theorem d<=4 fluxes}.
\begin{theorem} \label{Theorem d<=5 fluxes} Let $d\geqslant 5$. Let Condition~\textnormal{\ref{Condition Lambda multiplier}} be satisfied. Then, under the assumptions of Theorem~\textnormal{\ref{Theorem 14.6}}, for $0<\varepsilon\leqslant 1$ and $\tau\in\mathbb{R}$ we have \begin{equation*} \Vert \mathbf{p}_\varepsilon (\cdot,\tau)-\widetilde{g}^\varepsilon b(\mathbf{D}) \mathbf{u}_0(\cdot ,\tau)\Vert _{L_2(\mathbb{R}^d)} \leqslant C_{31} \varepsilon (1+\vert\tau\vert)\left(\Vert \boldsymbol{\psi}\Vert _{H^2(\mathbb{R}^d)}+\Vert \mathbf{F}\Vert _{L_{1}((0,\tau);H^2(\mathbb{R}^d))}\right). \end{equation*} The constant $C_{31}:=C_{15}\Vert g\Vert ^{1/2}_{L_\infty}+(d\alpha _1)^{1/2}\Vert g\Vert _{L_\infty}\Vert g^{-1}\Vert _{L_\infty}^{1/2}\Vert f^{-1}\Vert _{L_\infty}\Vert [\Lambda]\Vert _{H^1(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)}$ depends only on $d$, $m$, $n$, $\alpha _0$, $\alpha _1$, $\Vert g\Vert _{L_\infty}$, $\Vert g^{-1}\Vert _{L_\infty}$, $\Vert f\Vert _{L_\infty}$, $\Vert f^{-1}\Vert _{L_\infty}$, the parameters of the lattice $\Gamma$, and the norm $\Vert [\Lambda]\Vert _{H^2(\mathbb{R}^d)\rightarrow H^1(\mathbb{R}^d)}$. \end{theorem}
By analogy with \eqref{proof flux no Pi d<=4 0}--\eqref{proof flux no Pi d<=4 III}, using Proposition~\ref{Proposition Lambda Hs to L2}, from Theorem~\ref{Theorem D>=5 smooth date sec 9} we derive the following result.
\begin{theorem} Let $d\geqslant 5$. Let $\mathbf{u}_\varepsilon$ and $\mathbf{u}_0$ be the solutions of problems \eqref{14.1} and \eqref{14.4}, respectively, where $\boldsymbol{\psi}\in H^{d/2}(\mathbb{R}^d;\mathbb{C}^n)$ and $\mathbf{F}\in L_1((0,\tau);H^{d/2}(\mathbb{R}^d;\mathbb{C}^n))$. Let $\mathbf{p}_\varepsilon$ be defined by \eqref{P_eps ho hat} and let $\widetilde{g}$ be the matrix-valued function \eqref{tilde g}. Then for $0<\varepsilon\leqslant 1$ and $\tau\in\mathbb{R}$ we have \begin{equation*} \Vert \mathbf{p}_\varepsilon (\cdot,\tau)-\widetilde{g}^\varepsilon b(\mathbf{D}) \mathbf{u}_0(\cdot ,\tau)\Vert _{L_2(\mathbb{R}^d)} \leqslant C_{32} \varepsilon (1+\vert\tau\vert)\left(\Vert \boldsymbol{\psi}\Vert _{H^{d/2}(\mathbb{R}^d)}+\Vert \mathbf{F}\Vert _{L_{1}((0,\tau);H^{d/2}(\mathbb{R}^d))}\right). \end{equation*} The constant $C_{32}$ depends only on $d$, $m$, $n$, $\alpha _0$, $\alpha _1$, $\Vert g\Vert _{L_\infty}$, $\Vert g^{-1}\Vert _{L_\infty}$, $\Vert f\Vert _{L_\infty}$, $\Vert f^{-1}\Vert _{L_\infty}$, and the parameters of the lattice $\Gamma$. \end{theorem}
To obtain interpolational results without any smoothing operator, we need to prove the analog of Lemma~\ref{Lemma 14.7} without $\Pi _\varepsilon$. That is, we want to prove the $(L_2\rightarrow L_2)$-boundedness of the operator \begin{equation} \label{g tilde dots i wan boundedness} \widetilde{g}^\varepsilon b(\mathbf{D})f_0(\mathcal{A}^0)^{-1/2}\sin (\tau (\mathcal{A}^0)^{1/2})f_0^{-1}. \end{equation} The following property of $\widetilde{g}$ was obtained in \cite[Proposition 9.6]{Su_MMNP}. (The one-dimensional case will be considered in Subsection~\ref{Subsection special case} below.)
\begin{proposition} \label{Proposition tilde g multiplier} Let $l>1$ for $d=2$, and $l=d/2$ for $d\geqslant 3$. The operator $[\widetilde{g}]$ is a continuous mapping of $H^l(\mathbb{R}^d;\mathbb{C}^m)$ to $L_2(\mathbb{R}^d;\mathbb{C}^m)$, and \begin{equation*} \Vert [\widetilde{g}]\Vert _{H^l(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)}\leqslant\mathfrak{C}_d'. \end{equation*} The constant $\mathfrak{C}_d'$ depends only on $d$, $m$, $n$, $\alpha _0$, $\alpha _1$, $\Vert g\Vert _{L_\infty}$, $\Vert g^{-1}\Vert _{L_\infty}$, and the parameters of the lattice $\Gamma$\textnormal{;} for $d=2$ it also depends on $l$. \end{proposition}
So, for $d\geqslant 2$, we cannot expect the $(L_2\rightarrow L_2)$-boundedness of the operator~\eqref{g tilde dots i wan boundedness}. The $(H^2\rightarrow L_2)$-continuity of the operator \eqref{g tilde dots i wan boundedness} was used in Theorem~\ref{Theorem d<=4 fluxes} and, under Condition~\ref{Condition Lambda multiplier}, in Theorem~\ref{Theorem d<=5 fluxes}. (The $(H^2\rightarrow L_2)$-boundedness of $[\widetilde{g}]$ follows from \cite[Subsection~1.3.2, Lemma~1]{MaSh}.) So, without any additional conditions on $\Lambda$, using Proposition~\ref{Proposition tilde g multiplier}, we can obtain some interpolational results only for $d\leqslant 3$.
By \eqref{g^0<=}, \eqref{f0<=}, \eqref{A0 no hat}, and Proposition~\ref{Proposition tilde g multiplier}, \begin{equation} \label{tilde g dots Hl to L2} \Vert \widetilde{g}^\varepsilon b(\mathbf{D})f_0(\mathcal{A}^0)^{-1/2}\sin (\tau (\mathcal{A}^0)^{1/2})f_0^{-1}\Vert _{H^l(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)} \leqslant \mathfrak{C}_d'\Vert g^{-1}\Vert ^{1/2}_{L_\infty}\Vert f^{-1}\Vert _{L_\infty}. \end{equation} (Here $l$ is as in Proposition~\ref{Proposition tilde g multiplier}.)
Combining \eqref{lm pr 1} and \eqref{tilde g dots Hl to L2} and interpolating with \eqref{Th fluxes d<=4}, we obtain the following result.
\begin{theorem} Let $2\leqslant d\leqslant 3$, and let $1<l<2$ for $d=2$ and $l=3/2$ for $d=3$. Let $0\leqslant s\leqslant 1$. Assume that $\theta = l+(2-l)s$ for $d=2$ and $\theta =3/2+s/2$ for $d=3$. Let $\mathbf{u}_\varepsilon$ and $\mathbf{u}_0$ be the solutions of problems \eqref{14.1} and \eqref{14.4}, respectively, where $\boldsymbol{\psi}\in H^\theta (\mathbb{R}^d;\mathbb{C}^n)$ and $\mathbf{F}\in L_1((0,\tau);H^\theta (\mathbb{R}^d;\mathbb{C}^n))$. Let $\mathbf{p}_\varepsilon$ be the flux \eqref{P_eps ho hat} and let $\widetilde{g}$ be the matrix-valued function \eqref{tilde g}. Then for $0<\varepsilon\leqslant 1$ and $\tau\in\mathbb{R}$ we have \begin{equation*} \Vert \mathbf{p}_\varepsilon (\cdot ,\tau)-\widetilde{g}^\varepsilon b(\mathbf{D})\mathbf{u}_0(\cdot ,\tau)\Vert _{L_2(\mathbb{R}^d)} \leqslant \mathfrak{C}_5(s)\varepsilon ^s(1+\vert \tau\vert)^s\left(\Vert \boldsymbol{\psi}\Vert _{H^\theta (\mathbb{R}^d)}+\Vert \mathbf{F}\Vert _{L_1((0,\tau);H^\theta (\mathbb{R}^d))}\right). \end{equation*} Here $\mathfrak{C}_5(s):=C_{31}^s(\Vert g\Vert ^{1/2}_{L_\infty}+\mathfrak{C}_d'\Vert g^{-1}\Vert ^{1/2}_{L_\infty})^{1-s}\Vert f^{-1}\Vert ^{1-s}_{L_\infty}$. \end{theorem}
\subsection{The special case}
\label{Subsection special case}
Suppose that $g^0=\underline{g}$, i.e., relations \eqref{underline-g} hold. For $d=1$, the identity $g^0=\underline{g}$ is always true; see, e.g., \cite[Chapter~I, \S 2]{ZhKO}. In accordance with \cite[Remark 3.5]{BSu05}, in this case the matrix-valued function \eqref{tilde g} is constant and coincides with $g^0$, i.e., $\widetilde{g}(\mathbf{x})=g^0=\underline{g}$. The following statement is a consequence of Theorem~\ref{Theorem 14.8}($1^\circ$).
\begin{proposition} Assume that relations \eqref{underline-g} hold. Let $\mathbf{u}_\varepsilon$ and $\mathbf{u}_0$ be the solutions of problems \eqref{14.1} and \eqref{14.4}, respectively, for $\boldsymbol{\psi}\in H^s(\mathbb{R}^d;\mathbb{C}^n)$ and $ \mathbf{F}\in L_{1,\mathrm{loc}}(\mathbb{R};H^s(\mathbb{R}^d;\mathbb{C}^n))$, where $0\leqslant s\leqslant 2$. Let $\mathbf{p}_\varepsilon$ be given by \eqref{P_eps ho hat}. Then for $\tau\in\mathbb{R}$ and $\varepsilon >0$ we have \begin{equation} \label{fluxes in special case} \Vert \mathbf{p}_\varepsilon (\cdot ,\tau)-g^0b(\mathbf{D})\mathbf{u}_0(\cdot ,\tau)\Vert _{L_2(\mathbb{R}^d)} \leqslant \mathfrak{C}_6(s)(1+\vert \tau\vert )^{s/2}\varepsilon ^{s/2} \left(\Vert\boldsymbol{\psi}\Vert _{H^s(\mathbb{R}^d)} +\Vert \mathbf{F}\Vert _{L_1((0,\tau);H^s(\mathbb{R}^d))}\right). \end{equation} Here $\mathfrak{C}_6(s):=\mathfrak{C}_4(s)+2^{1-s/2}r_0^{-s/2}\Vert g\Vert ^{1/2}_{L_\infty}\Vert f^{-1}\Vert _{L_\infty}$. \end{proposition}
\begin{proof} We wish to remove the operator $\Pi _\varepsilon$ from the approximation \eqref{10.24a}. Obviously, \break$\Vert \Pi _\varepsilon -I\Vert _{L_2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)}\leqslant 2$. According to Proposition~\ref{Proposition Pi eps -I}, $$\Vert \Pi _\varepsilon -I\Vert _{H^2(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)}\leqslant \Vert \Pi _\varepsilon -I\Vert _{H^1(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)}\leqslant \varepsilon r_0^{-1}.$$ Then, by interpolation, $\Vert \Pi _\varepsilon -I\Vert _{H^s(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)}\leqslant 2^{1-s/2}r_0^{-s/2}\varepsilon ^{s/2}$, $0\leqslant s\leqslant 2$. Combining this with \eqref{g^0<=}, \eqref{f0<=}, \eqref{A0 no hat}, \eqref{14.5}, and taking into account that the operator $\mathcal{A}^0$ with constant coefficients commutes with the smoothing operator $\Pi _\varepsilon$, we obtain \begin{equation} \label{proof flux special case} \begin{split} \Vert &g^0 b(\mathbf{D})(\Pi _\varepsilon -I)\mathbf{u}_0(\cdot ,\tau )\Vert _{L_2(\mathbb{R}^d)} \\ &\leqslant 2^{1-s/2}r_0^{-s/2}\Vert g\Vert ^{1/2}_{L_\infty}\Vert f^{-1}\Vert _{L_\infty}\varepsilon ^{s/2}\left(\Vert\boldsymbol{\psi}\Vert _{H^s(\mathbb{R}^d)} +\Vert \mathbf{F}\Vert _{L_1((0,\tau);H^s(\mathbb{R}^d))}\right). \end{split} \end{equation} Now, from the identity $g^0=\widetilde{g}$, \eqref{10.24a}, and \eqref{proof flux special case} we derive estimate \eqref{fluxes in special case}. \end{proof}
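For the reader's convenience, we spell out the interpolation step used in the proof above; it is the standard operator interpolation inequality between the $(L_2\rightarrow L_2)$- and $(H^2\rightarrow L_2)$-bounds.

```latex
% Interpolating between the bounds
%   || \Pi_\varepsilon - I ||_{L_2 \to L_2} \le 2   and
%   || \Pi_\varepsilon - I ||_{H^2 \to L_2} \le \varepsilon r_0^{-1},
% one obtains, for 0 <= s <= 2,
\Vert \Pi_\varepsilon - I \Vert_{H^s(\mathbb{R}^d)\rightarrow L_2(\mathbb{R}^d)}
  \leqslant
  \Vert \Pi_\varepsilon - I \Vert_{L_2\rightarrow L_2}^{\,1-s/2}\,
  \Vert \Pi_\varepsilon - I \Vert_{H^2\rightarrow L_2}^{\,s/2}
  \leqslant 2^{1-s/2} r_0^{-s/2} \varepsilon^{s/2}.
```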
\section{Applications of the general results}
\label{Section Applications}
The following examples were previously considered in \cite{BSu,BSu08,DSu17,DSu17-2}.
\subsection{The acoustics equation} \label{Subsection acoustics}
In $L_2(\mathbb{R}^d)$, we consider the operator \begin{equation} \label{acoustics hat A} \widehat{\mathcal{A}}=\mathbf{D}^*g(\mathbf{x})\mathbf{D}=-\mathrm{div}\,g(\mathbf{x})\nabla, \end{equation} where $g(\mathbf{x})$ is a periodic symmetric matrix with real entries. Assume that $g(\mathbf{x})>0$, $g,g^{-1}\in L_\infty$. The operator $\widehat{\mathcal{A}}$ describes a periodic acoustical medium. The operator \eqref{acoustics hat A} is a particular case of the operator \eqref{hat A}. Now we have $n=1$, $m=d$, $b(\mathbf{D})=\mathbf{D}$, $\alpha _0=\alpha _1=1$. Consider the operator $\widehat{\mathcal{A}}_\varepsilon =\mathbf{D}^* g^\varepsilon (\mathbf{x})\mathbf{D}$, whose coefficients oscillate rapidly for small $\varepsilon$.
Let us write down the effective operator. In the case under consideration, the $\Gamma$-periodic solution of problem \eqref{Lambda problem} is a row: $\Lambda (\mathbf{x})=i{\Phi}(\mathbf{x})$, ${\Phi}(\mathbf{x})=\left(\Phi _1(\mathbf{x}),\dots,\Phi _d(\mathbf{x})\right)$, where $\Phi _j\in \widetilde{H}^1(\Omega)$ is the solution of the problem \begin{equation*} \mathrm{div}\,g(\mathbf{x})\left(\nabla \Phi _j(\mathbf{x})+\mathbf{e}_j\right)=0,\quad \int _\Omega \Phi _j(\mathbf{x})\,d\mathbf{x}=0. \end{equation*} Here $\mathbf{e}_j$, $j=1,\dots,d$, is the standard orthonormal basis in $\mathbb{R}^d$. Clearly, the functions $\Phi _j(\mathbf{x})$ are real-valued, and the entries of $\Lambda (\mathbf{x})$ are purely imaginary. By \eqref{tilde g}, the columns of the $(d\times d)$-matrix-valued function $\widetilde{g}(\mathbf{x})$ are the vector-valued functions $g(\mathbf{x})\left(\nabla \Phi _j(\mathbf{x})+\mathbf{e}_j\right)$, $j=1,\dots,d$. The effective matrix is defined according to \eqref{g0}: $g^0=\vert \Omega \vert ^{-1}\int _\Omega \widetilde{g}(\mathbf{x})\,d\mathbf{x}$. Clearly, $\widetilde{g}(\mathbf{x})$ and $g^0$ have real entries. If $d=1$, then $m=n=1$, whence $g^0=\underline{g}$.
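In the one-dimensional case the identity $g^0=\underline{g}$ can be made fully explicit: the effective coefficient is the classical harmonic mean of $g$ over the periodicity cell (a standard fact, see, e.g., \cite[Chapter~I, \S 2]{ZhKO}; we record it here for illustration).

```latex
% d = 1: the effective coefficient is the harmonic mean of g over the cell Omega
g^0 = \underline{g}
    = \left( \vert \Omega \vert^{-1} \int_\Omega g(x)^{-1}\, dx \right)^{-1}.
```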
Let $Q(\mathbf{x})$ be a $\Gamma$-periodic function on $\mathbb{R}^d$ such that $Q(\mathbf{x})>0$, $Q,Q^{-1}\in L_\infty$. The function $Q(\mathbf{x})$ describes the density of the medium.
Consider the Cauchy problem for the acoustics equation in the medium with rapidly oscillating characteristics: \begin{equation} \label{acoustics problem} \begin{cases} Q^\varepsilon (\mathbf{x})\frac{\partial ^2 u_\varepsilon (\mathbf{x},\tau)}{\partial \tau ^2}=-\mathrm{div}\,g^\varepsilon (\mathbf{x})\nabla u_\varepsilon (\mathbf{x},\tau),\quad \mathbf{x}\in\mathbb{R}^d,\quad \tau\in\mathbb{R}, \\ u_\varepsilon (\mathbf{x},0)=0,\quad \frac{\partial u_\varepsilon (\mathbf{x},0)}{\partial \tau}=\psi (\mathbf{x}), \end{cases} \end{equation} where $\psi\in L_2(\mathbb{R}^d)$ is a given function. (For simplicity, we consider the homogeneous equation.) Then the homogenized problem takes the form \begin{equation} \label{acoustics eff problem} \begin{cases} \overline{Q}\frac{\partial ^2 u_0 (\mathbf{x},\tau)}{\partial \tau ^2}=-\mathrm{div}\,g^0\nabla u_0(\mathbf{x},\tau),\quad \mathbf{x}\in\mathbb{R}^d,\quad \tau\in\mathbb{R}, \\ u_0 (\mathbf{x},0)=0,\quad \frac{\partial u_0 (\mathbf{x},0)}{\partial \tau}=\psi (\mathbf{x}). \end{cases} \end{equation}
According to \cite[Chapter~III, Theorem 13.1]{LaU}, $\Lambda\in L_\infty$ and the norm $\Vert\Lambda\Vert _{L_\infty}$ does not exceed a constant depending on $d$, $\Vert g\Vert _{L_\infty}$, $\Vert g^{-1}\Vert _{L_\infty}$, and $\Omega$. Applying Theorems~\ref{Theorem 14.2} and \ref{Theorem 14.8}($1^\circ$) and taking into account Remark~\ref{Remark no S-eps}, we arrive at the following result.
\begin{proposition} Under the assumptions of Subsection~\textnormal{\ref{Subsection acoustics}}, let $u_\varepsilon$ be the solution of problem~\eqref{acoustics problem} and let $u_0$ be the solution of the effective problem \eqref{acoustics eff problem}.
\noindent $1^\circ$. Let $\psi\in H^s(\mathbb{R}^d)$ for some $0\leqslant s\leqslant 1$. Then for $\tau\in\mathbb{R}$ and $\varepsilon >0$ we have \begin{equation*} \Vert u_\varepsilon (\cdot ,\tau)-u_0(\cdot ,\tau)\Vert _{L_2(\mathbb{R}^d)}\leqslant \mathfrak{C}_6(s)(1+\vert \tau\vert)\varepsilon ^s\Vert \psi\Vert _{H^s(\mathbb{R}^d)}. \end{equation*}
\noindent $2^\circ$. Let $\psi\in H^{s+1}(\mathbb{R}^d)$ for some $0\leqslant s\leqslant 1$. Then for $\tau \in\mathbb{R}$ and $0<\varepsilon\leqslant 1$ we have \begin{equation*} \Vert u_\varepsilon (\cdot ,\tau)-u_0(\cdot ,\tau)-\varepsilon {\Phi} ^\varepsilon \nabla u_0(\cdot ,\tau)\Vert _{H^1(\mathbb{R}^d)} \leqslant \mathfrak{C}_7(s)(1+\vert \tau\vert )\varepsilon ^s\Vert \psi\Vert _{H^{1+s}(\mathbb{R}^d)}. \end{equation*}
\noindent $3^\circ$. Let $\psi \in H^s(\mathbb{R}^d)$ for some $0\leqslant s\leqslant 2$. Let $\Pi_\varepsilon$ be defined by \eqref{Pi eps}. Then for $\tau\in\mathbb{R}$ and $\varepsilon >0$ we have \begin{equation*} \Vert g^\varepsilon\nabla u_\varepsilon (\cdot ,\tau)-\widetilde{g}^\varepsilon\Pi _\varepsilon \nabla u_0(\cdot ,\tau)\Vert _{L_2(\mathbb{R}^d)} \leqslant \mathfrak{C}_8(s)(1+\vert \tau\vert )^{s/2}\varepsilon ^{s/2}\Vert \psi\Vert _{H^s(\mathbb{R}^d)}. \end{equation*}
The constants $\mathfrak{C}_6(s)$, $\mathfrak{C}_7(s)$, and $\mathfrak{C}_8(s)$ depend only on $s$, $d$, $\Vert g\Vert _{L_\infty}$, $\Vert g^{-1}\Vert _{L_\infty}$, and parameters of the lattice $\Gamma$. \end{proposition}
\subsection{The operator of elasticity theory}
Let $d\geqslant 2$. We represent the operator of elasticity theory in the form used in \cite[Chapter 5, \S 2]{BSu}. Let $\zeta$ be an arbitrary second rank tensor in $\mathbb{R}^d$; in the standard orthonormal basis in $\mathbb{R}^d$, it can be represented by a matrix $\zeta =\lbrace\zeta_{jl}\rbrace_{j,l=1}^d$. We shall consider symmetric tensors $\zeta$, which will be identified with vectors $\zeta _*\in\mathbb{C}^m$, $2m=d(d+1)$, by the following rule. The vector $\zeta _*$ is formed by all components $\zeta_{jl}$, $j\leqslant l$, and the pairs $(j,l)$ are put in order in some fixed way. Let $\chi$ be an $(m\times m)$-matrix, $\chi=\mathrm{diag}\,\lbrace\chi _{(j,l)}\rbrace$, where $\chi_{(j,l)}=1$ for $j=l$ and $\chi_{(j,l)}=2$ for $j<l$. Then ${\pmb |}\zeta{\pmb |}^2=\langle\chi\zeta _*,\zeta _*\rangle _{\mathbb{C}^m}$.
Let $\mathbf{u}\in H^1(\mathbb{R}^d;\mathbb{C}^d)$ be the \textit{displacement vector}. Then the \textit{deformation tensor} is given by $e(\mathbf{u})=\frac{1}{2}\left\lbrace\frac{\partial u_j}{\partial x_l}+\frac{\partial u_l}{\partial x_j}\right\rbrace$. The corresponding vector is denoted by $e_*(\mathbf{u})$. The relation $b(\mathbf{D})\mathbf{u}=-ie_*(\mathbf{u})$ determines an $(m\times d)$-matrix homogeneous DO $b(\mathbf{D})$ uniquely; the symbol of this DO is a matrix with real entries. For instance, with an appropriate ordering, we have \begin{equation*} b(\boldsymbol{\xi})=\begin{pmatrix} \xi _1&0\\ \frac{\xi_2}{2}&\frac{\xi_1}{2}\\ 0&\xi_2 \end{pmatrix},\quad d=2;\quad b(\boldsymbol{\xi})= \begin{pmatrix} \xi_1&0&0\\ \frac{\xi_2}{2}&\frac{\xi_1}{2}&0\\ 0&\xi_2&0\\ 0&\frac{\xi_3}{2}&\frac{\xi_2}{2}\\ 0&0&\xi_3\\ \frac{\xi_3}{2}&0&\frac{\xi_1}{2} \end{pmatrix},\quad d=3. \end{equation*}
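As an illustrative numeric sanity check (not part of the text): for a plane wave $\mathbf{u}(\mathbf{x})=\mathbf{a}\,e^{i\langle\boldsymbol{\xi},\mathbf{x}\rangle}$ the relation $b(\mathbf{D})\mathbf{u}=-ie_*(\mathbf{u})$ reduces to the identity $b(\boldsymbol{\xi})\mathbf{a}=\bigl(\xi_1a_1,\,(\xi_1a_2+\xi_2a_1)/2,\,\xi_2a_2\bigr)$ for $d=2$, which can be verified directly.

```python
import numpy as np

# Random test data (illustrative values only).
rng = np.random.default_rng(0)
xi = rng.standard_normal(2)   # frequency variable (xi_1, xi_2)
a = rng.standard_normal(2)    # amplitude of the plane wave

# The symbol b(xi) for d = 2, with the ordering (1,1), (1,2), (2,2).
b = np.array([[xi[0], 0.0],
              [xi[1] / 2.0, xi[0] / 2.0],
              [0.0, xi[1]]])

# For u(x) = a * exp(i<xi, x>), the vector -i e_*(u) has the components below.
e_star = np.array([xi[0] * a[0],
                   (xi[0] * a[1] + xi[1] * a[0]) / 2.0,
                   xi[1] * a[1]])

assert np.allclose(b @ a, e_star)
```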
Let $\sigma (\mathbf{u})$ be the \textit{stress tensor}, and let $\sigma_*(\mathbf{u})$ be the corresponding vector. The Hooke law can be represented by the relation $\sigma _*(\mathbf{u})=g(\mathbf{x})e_*(\mathbf{u})$, where $g(\mathbf{x})$ is an $(m\times m)$-matrix (which gives a ``concise'' description of the Hooke tensor). This matrix characterizes the parameters of the elastic (in general, anisotropic) medium. We assume that $g(\mathbf{x})$ is $\Gamma$-periodic and such that $g(\mathbf{x})>0$ and $g,g^{-1}\in L_\infty$.
The energy of elastic deformations is given by the quadratic form \begin{equation} \label{form w} \mathfrak{w}[\mathbf{u},\mathbf{u}]=\frac{1}{2}\int_{\mathbb{R}^d}\langle\sigma _*(\mathbf{u}),e_*(\mathbf{u})\rangle _{\mathbb{C}^m}\,d\mathbf{x} = \frac{1}{2}\int _{\mathbb{R}^d}\langle g(\mathbf{x})b(\mathbf{D})\mathbf{u},b(\mathbf{D})\mathbf{u}\rangle _{\mathbb{C}^m}\,d\mathbf{x},\quad \mathbf{u}\in H^1(\mathbb{R}^d;\mathbb{C}^d). \end{equation} The operator $\mathcal{W}$ generated by this form is the operator of elasticity theory. Thus, the operator $2\mathcal{W}=b(\mathbf{D})^*g(\mathbf{x})b(\mathbf{D})=\widehat{\mathcal{A}}$ is of the form \eqref{hat A} with $n=d$ and $m=d(d+1)/2$.
In the case of an \textit{isotropic} medium, the expression for the form \eqref{form w} simplifies significantly and depends only on two functional \textit{Lam\'e parameters} $\lambda(\mathbf{x})$, $\mu(\mathbf{x})$: \begin{equation*}
\mathfrak{w}[\mathbf{u},\mathbf{u}]=\int_{\mathbb{R}^d}\left(\mu(\mathbf{x}){\pmb |}e(\mathbf{u}){\pmb |}^2+\frac{\lambda(\mathbf{x})}{2}\vert \mathrm{div}\,\mathbf{u}\vert ^2\right)\,d\mathbf{x}. \end{equation*} The parameter $\mu$ is the \textit{shear modulus}. The modulus $\lambda (\mathbf{x})$ may be negative. Often, another parameter $\kappa (\mathbf{x})=\lambda (\mathbf{x})+2\mu(\mathbf{x})/d$ is introduced instead of $\lambda(\mathbf{x})$; $\kappa$ is called the \textit{modulus of volume compression}. In the isotropic case, the conditions that ensure the positive definiteness of the matrix $g(\mathbf{x})$ are $\mu(\mathbf{x})\geqslant \mu _0>0$, $\kappa(\mathbf{x})\geqslant\kappa _0>0$. We write down the ``isotropic'' matrices $g$ for $d=2$ and $d=3$: \begin{align*} g&=\begin{pmatrix} \kappa+\mu&0&\kappa-\mu\\ 0&4\mu&0\\ \kappa-\mu&0&\kappa+\mu \end{pmatrix},\quad d=2;\\ g&=\frac{1}{3}\begin{pmatrix} 3\kappa+4\mu&0&3\kappa-2\mu&0&3\kappa-2\mu &0\\ 0& 12\mu &0&0&0&0\\ 3\kappa -2\mu&0&3\kappa+4\mu&0&3\kappa-2\mu&0\\ 0&0&0&12\mu&0&0\\ 3\kappa -2\mu&0&3\kappa -2\mu&0&3\kappa+4\mu&0\\ 0&0&0&0&0&12\mu \end{pmatrix},\quad d=3. \end{align*}
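The positivity conditions on $\mu$ and $\kappa$ can be checked against the $d=2$ matrix directly: its eigenvalues are $2\mu$, $2\kappa$, and $4\mu$, so $g>0$ precisely when both moduli are positive. A small numeric check (the sample moduli are assumed values for illustration only):

```python
import numpy as np

kappa, mu = 3.0, 2.0  # sample moduli (chosen for illustration only)

# The "isotropic" matrix g for d = 2.
g = np.array([[kappa + mu, 0.0, kappa - mu],
              [0.0, 4.0 * mu, 0.0],
              [kappa - mu, 0.0, kappa + mu]])

eigs = np.sort(np.linalg.eigvalsh(g))
# The spectrum of g is {2*mu, 2*kappa, 4*mu}, so g > 0 iff mu > 0 and kappa > 0.
assert np.allclose(eigs, np.sort([2.0 * mu, 2.0 * kappa, 4.0 * mu]))
```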
Consider the operator $\mathcal{W}_\varepsilon =\frac{1}{2}\widehat{\mathcal{A}}_\varepsilon$ with rapidly oscillating coefficients. The effective matrix $g^0$ and the effective operator $\mathcal{W}^0=\frac{1}{2}\widehat{\mathcal{A}}^0$ are defined by the general rules (see \eqref{tilde g}, \eqref{g0}, and \eqref{A^0 hat}).
Let $Q(\mathbf{x})$ be a $\Gamma$-periodic $(d\times d)$-matrix-valued function such that $Q(\mathbf{x})>0$, $Q,Q^{-1}\in L_\infty$. Usually, $Q(\mathbf{x})$ is a scalar-valued function describing the density of the medium. We assume that $Q(\mathbf{x})$ is a matrix-valued function in order to take possible anisotropy into account.
Consider the following Cauchy problem for the system of elasticity theory: \begin{equation} \label{elasticity problem} \begin{cases} Q^\varepsilon (\mathbf{x})\frac{\partial ^2\mathbf{u}_\varepsilon (\mathbf{x},\tau)}{\partial \tau ^2}=-\mathcal{W}_\varepsilon\mathbf{u}_\varepsilon (\mathbf{x},\tau),\quad\mathbf{x}\in\mathbb{R}^d,\quad\tau\in\mathbb{R}, \\ \mathbf{u}_\varepsilon (\mathbf{x},0)=0,\quad\frac{\partial\mathbf{u}_\varepsilon (\mathbf{x},0)}{\partial\tau}=\boldsymbol{\psi}(\mathbf{x}), \end{cases} \end{equation} where $\boldsymbol{\psi}\in L_2(\mathbb{R}^d;\mathbb{C}^d)$ is a given function. The homogenized problem takes the form \begin{equation*} \begin{cases} \overline{Q}\frac{\partial ^2 \mathbf{u}_0(\mathbf{x},\tau)}{\partial \tau ^2}=-\mathcal{W}^0\mathbf{u}_0 (\mathbf{x},\tau),\quad\mathbf{x}\in\mathbb{R}^d,\quad\tau\in\mathbb{R}, \\ \mathbf{u}_0 (\mathbf{x},0)=0,\quad\frac{\partial\mathbf{u}_0 (\mathbf{x},0)}{\partial\tau}=\boldsymbol{\psi}(\mathbf{x}). \end{cases} \end{equation*} Theorems \ref{Theorem 14.2} and \ref{Theorem 14.8} can be applied to problem \eqref{elasticity problem}. If $d=2$, then Condition~\ref{Condition Lambda in L infty} is satisfied according to Proposition~\ref{Proposition Lambda in L infty <=}. So, we can use Theorem~\ref{Theorem 12.6}. If $d=3$, then Theorem~\ref{Theorem d<=4 chapter 3} is applicable.
\subsection{The model equation of electrodynamics} \label{Subsection The model equation of electrodynamics}
We cannot include the general Maxwell operator in the scheme developed above; we have to assume that the magnetic permeability is equal to unity. In $L_2(\mathbb{R}^3;\mathbb{C}^3)$, we consider the model operator $\mathcal{L}$ formally given by the expression $\mathcal{L}=\mathrm{curl}\,\eta(\mathbf{x})^{-1}\mathrm{curl}-\nabla \nu (\mathbf{x})\mathrm{div}$. Here the \textit{dielectric permittivity} $\eta(\mathbf{x})$ is a $\Gamma$-periodic $(3\times 3)$-matrix-valued function in $\mathbb{R}^3$ with real entries such that $\eta (\mathbf{x})>0$ and $\eta,\eta^{-1}\in L_\infty$; $\nu(\mathbf{x})$ is a real-valued $\Gamma$-periodic function in $\mathbb{R}^3$ such that $\nu(\mathbf{x})>0$ and $\nu,\nu ^{-1}\in L_\infty$. The precise definition of $\mathcal{L}$ is given via the closed positive form \begin{equation*} \int _{\mathbb{R}^3}\left( \langle \eta (\mathbf{x})^{-1}\mathrm{curl}\,\mathbf{u},\mathrm{curl}\,\mathbf{u}\rangle +\nu (\mathbf{x})\vert \mathrm{div}\,\mathbf{u}\vert ^2\right)\,d\mathbf{x},\quad\mathbf{u}\in H^1(\mathbb{R}^3;\mathbb{C}^3). \end{equation*} The operator $\mathcal{L}$ can be written as $\widehat{\mathcal{A}}=b(\mathbf{D})^*g(\mathbf{x})b(\mathbf{D})$ with $n=3$, $m=4$, and \begin{equation} \label{b(D)=,g= Maxwell} b(\mathbf{D})=\begin{pmatrix} -i\mathrm{curl}\\ -i\mathrm{div} \end{pmatrix}, \quad g(\mathbf{x})=\begin{pmatrix} \eta (\mathbf{x})^{-1}&0\\ 0&\nu (\mathbf{x}) \end{pmatrix}. \end{equation} The corresponding symbol of $b(\mathbf{D})$ is \begin{equation*} b(\boldsymbol{\xi})= \begin{pmatrix} 0&-\xi _3&\xi _2\\ \xi _3&0&-\xi _1\\ -\xi _2&\xi _1&0\\ \xi _1&\xi _2&\xi _3 \end{pmatrix}. \end{equation*}
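For this symbol one can verify that $b(\boldsymbol{\xi})^*b(\boldsymbol{\xi})=\vert\boldsymbol{\xi}\vert^2\mathbf{1}_3$, which follows from the identity $\vert\boldsymbol{\xi}\times\mathbf{x}\vert^2+\vert\langle\boldsymbol{\xi},\mathbf{x}\rangle\vert^2=\vert\boldsymbol{\xi}\vert^2\vert\mathbf{x}\vert^2$. A quick numeric confirmation (an illustrative check, not part of the text):

```python
import numpy as np

rng = np.random.default_rng(1)
xi = rng.standard_normal(3)

# The 4x3 symbol b(xi): the first three rows form the symbol of -i*curl,
# the last row is the symbol of -i*div.
b = np.array([[0.0, -xi[2], xi[1]],
              [xi[2], 0.0, -xi[0]],
              [-xi[1], xi[0], 0.0],
              [xi[0], xi[1], xi[2]]])

# b(xi)^* b(xi) = (|xi|^2 I - xi xi^T) + xi xi^T = |xi|^2 I.
gram = b.T @ b
assert np.allclose(gram, np.dot(xi, xi) * np.eye(3))
```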
According to \cite[\S 7.2]{BSu}, the effective matrix has the form \begin{equation*} g^0=\begin{pmatrix} (\eta ^0)^{-1}&0\\ 0&\underline{\nu} \end{pmatrix}, \end{equation*} where $\eta ^0$ is the effective matrix for the scalar elliptic operator $-\mathrm{div}\,\eta\nabla =\mathbf{D}^*\eta\mathbf{D}$. The effective operator is given by \begin{equation*} \mathcal{L}^0=\mathrm{curl}\,(\eta ^0)^{-1}\mathrm{curl}-\nabla \underline{\nu}\mathrm{div}. \end{equation*}
Let $\mathbf{v}_j\in \widetilde{H}^1(\Omega ;\mathbb{C}^3)$ be the $\Gamma$-periodic solution of the problem \begin{equation*} b(\mathbf{D})^*g(\mathbf{x})\left(b(\mathbf{D})\mathbf{v}_j(\mathbf{x})+\mathbf{e}_j\right)=0,\quad \int _\Omega \mathbf{v}_j(\mathbf{x})\,d\mathbf{x}=0, \end{equation*} $j=1,2,3,4$. Here $\mathbf{e}_j$, $j=1,2,3,4$, is the standard orthonormal basis in $\mathbb{C}^4$. As was shown in \cite[\S 14]{BSu05}, the solutions $\mathbf{v}_j$, $j=1,2,3$, can be determined as follows. Let $\widetilde{\Phi}_j(\mathbf{x})$ be the $\Gamma$-periodic solution of the problem \begin{equation*} \mathrm{div}\,\eta (\mathbf{x})\left(\nabla \widetilde{\Phi}_j(\mathbf{x})+\mathbf{c}_j\right)=0,\quad \int _\Omega \widetilde{\Phi}_j(\mathbf{x})\,d\mathbf{x}=0, \end{equation*} $j=1,2,3$, where $\mathbf{c}_j=(\eta ^0)^{-1}\widetilde{\mathbf{e}}_j$, and $\widetilde{\mathbf{e}}_j$, $j=1,2,3$, is the standard orthonormal basis in $\mathbb{C}^3$. Let $\mathbf{q}_j$ be the $\Gamma$-periodic solution of the problem \begin{equation*} \Delta\mathbf{q}_j=\eta\left(\nabla\widetilde{\Phi}_j+\mathbf{c}_j\right)-\widetilde{\mathbf{e}}_j,\quad\int _\Omega \mathbf{q}_j(\mathbf{x})\,d\mathbf{x}=0. \end{equation*} Then $\mathbf{v}_j=i\mathrm{curl}\,\mathbf{q}_j$, $j=1,2,3$.
Next, we have $\mathbf{v}_4=i\nabla \phi$, where $\phi$ is the $\Gamma$-periodic solution of the problem \begin{equation*} \Delta\phi=\underline{\nu}\left(\nu (\mathbf{x})\right)^{-1}-1,\quad \int _\Omega \phi (\mathbf{x})\,d\mathbf{x}=0. \end{equation*} The matrix $\Lambda (\mathbf{x})$ is the $(3\times 4)$-matrix with the columns $i\mathrm{curl}\,\mathbf{q}_1$, $i\mathrm{curl}\,\mathbf{q}_2$, $i\mathrm{curl}\,\mathbf{q}_3$, $i\nabla \phi$. By $\Psi (\mathbf{x})$ we denote the $(3\times 3)$-matrix-valued function with the columns $\mathrm{curl}\,\mathbf{q}_1$, $\mathrm{curl}\,\mathbf{q}_2$, $\mathrm{curl}\,\mathbf{q}_3$ (then $\Psi (\mathbf{x})$ has real entries). We put $\mathbf{w}=\nabla\phi$. Then \begin{equation*} \Lambda (\mathbf{x})b(\mathbf{D})=\Psi (\mathbf{x})\mathrm{curl} + \mathbf{w}(\mathbf{x})\mathrm{div}. \end{equation*}
The application of Theorems~\ref{Theorem 12.1} and \ref{Theorem d<=4 chapter 3} gives the following result.
\begin{theorem} \label{Theorem Maxwell} Under the assumptions of Subsection~\textnormal{\ref{Subsection The model equation of electrodynamics}}, denote $$\mathcal{L}_\varepsilon :=\mathrm{curl}\,(\eta ^\varepsilon)^{-1}\mathrm{curl}-\nabla \nu ^\varepsilon\mathrm{div}.$$ Then for $\tau\in\mathbb{R}$ we have \begin{align} \label{Th Maxwell 1} \Vert &\mathcal{L}_\varepsilon ^{-1/2}\sin (\tau\mathcal{L}_\varepsilon ^{1/2})-(\mathcal{L}^0)^{-1/2}\sin(\tau (\mathcal{L}^0)^{1/2})\Vert _{H^1(\mathbb{R}^3)\rightarrow L_2(\mathbb{R}^3)}\leqslant C_{12}\varepsilon (1+\vert \tau\vert),\quad\varepsilon >0, \\ \label{Th Maxwell 2} \begin{split} \Vert &\mathcal{L}_\varepsilon ^{-1/2}\sin (\tau\mathcal{L}_\varepsilon ^{1/2}) -(I+\varepsilon\Psi ^\varepsilon\mathrm{curl}+\varepsilon\mathbf{w}^\varepsilon\mathrm{div} )(\mathcal{L}^0)^{-1/2}\sin(\tau (\mathcal{L}^0)^{1/2})\Vert _{H^2(\mathbb{R}^3)\rightarrow H^1(\mathbb{R}^3)} \\ &\leqslant C_{19}\varepsilon (1+\vert\tau\vert),\quad 0<\varepsilon\leqslant 1. \end{split} \end{align} The constants $C_{12}$ and $C_{19}$ depend only on $\Vert \eta\Vert _{L_\infty}$, $\Vert \eta ^{-1}\Vert _{L_\infty}$, $\Vert \nu\Vert _{L_\infty}$, $\Vert \nu ^{-1}\Vert _{L_\infty}$, and the parameters of the lattice $\Gamma$. \end{theorem}
Also, we can apply (interpolational) Theorems~\ref{Theorem 12.2} and \ref{Theorem 12.4}. But in this case the correction term contains the smoothing operator $\Pi _\varepsilon$ (see \eqref{Pi eps}). We omit the details.
It turns out that the operators $\mathcal{L}_\varepsilon$ and $\mathcal{L}^0$ split in the Weyl decomposition $L_2(\mathbb{R}^3;\mathbb{C}^3)=J\oplus G$ simultaneously. Here the ``solenoidal'' subspace $J$ consists of vector functions $\mathbf{u}\in L_2(\mathbb{R}^3;\mathbb{C}^3)$ for which $\mathrm{div}\,\mathbf{u}=0$ (in the sense of distributions) and the ``potential'' subspace is $$G:=\lbrace\mathbf{u}=\nabla\phi: \phi\in H^1_{\mathrm{loc}}(\mathbb{R}^3), \nabla \phi\in L_2(\mathbb{R}^3;\mathbb{C}^3)\rbrace.$$ The Weyl decomposition reduces the operators $\mathcal{L}_\varepsilon$ and $\mathcal{L}^0$, i.e., $\mathcal{L}_\varepsilon =\mathcal{L}_{\varepsilon,J}\oplus \mathcal{L}_{\varepsilon,G}$ and $\mathcal{L}^0=\mathcal{L}^0_J\oplus \mathcal{L}^0_G$. The part $\mathcal{L}_{\varepsilon ,J}$ acting in the ``solenoidal'' subspace $J$ is formally defined by the differential expression $\mathrm{curl}\,\eta ^\varepsilon(\mathbf{x})^{-1}\mathrm{curl}$, while the part $\mathcal{L}_{\varepsilon,G}$ acting in the ``potential'' subspace $G$ corresponds to the expression $-\nabla \nu ^\varepsilon (\mathbf{x})\nabla$. The parts $\mathcal{L}_J^0$ and $\mathcal{L}^0_G$ can be written in the same way. The Weyl decomposition allows us to apply Theorem~\ref{Theorem Maxwell} to homogenization of the Cauchy problem for the model hyperbolic equation appearing in electrodynamics: \begin{equation} \label{maxwell problem} \begin{cases} \partial _\tau ^2\mathbf{u}_\varepsilon =-\mathrm{curl}\,\eta ^\varepsilon (\mathbf{x})^{-1}\mathrm{curl}\,\mathbf{u}_\varepsilon,\quad\mathrm{div}\,\mathbf{u}_\varepsilon =0,\\ \mathbf{u}_\varepsilon (\mathbf{x},0)=0,\quad \partial _\tau\mathbf{u}_\varepsilon (\mathbf{x},0)=\boldsymbol{\psi}(\mathbf{x}).
\end{cases} \end{equation} The effective problem takes the form \begin{equation} \label{maxwell eff problem} \begin{cases} \partial _\tau ^2\mathbf{u}_0 =-\mathrm{curl}\,(\eta ^0)^{-1}\mathrm{curl}\,\mathbf{u}_0,\quad\mathrm{div}\,\mathbf{u}_0 =0,\\ \mathbf{u}_0 (\mathbf{x},0)=0,\quad \partial _\tau\mathbf{u}_0 (\mathbf{x},0)=\boldsymbol{\psi}(\mathbf{x}). \end{cases} \end{equation}
Let $\mathcal{P}$ be the orthogonal projection of $L_2(\mathbb{R}^3;\mathbb{C}^3)$ onto $J$. Then (see \cite[Subsection 2.4 of Chapter 7]{BSu}) the operator $\mathcal{P}$ (restricted to $H^s(\mathbb{R}^3;\mathbb{C}^3)$) is also the orthogonal projection of the space $H^s(\mathbb{R}^3;\mathbb{C}^3)$ onto the subspace $J\cap H^s(\mathbb{R}^3;\mathbb{C}^3)$ for all $s>0$.
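In Fourier variables the projection $\mathcal{P}$ acts as multiplication by the matrix $P(\boldsymbol{\xi})=\mathbf{1}-\vert\boldsymbol{\xi}\vert^{-2}\boldsymbol{\xi}\boldsymbol{\xi}^*$ (the standard Leray projection). A quick numeric check, for illustration, that $P(\boldsymbol{\xi})$ is a projection annihilating the divergence:

```python
import numpy as np

rng = np.random.default_rng(2)
xi = rng.standard_normal(3)   # a nonzero Fourier variable
v = rng.standard_normal(3)

# Symbol of the orthogonal projection onto the "solenoidal" subspace J.
P = np.eye(3) - np.outer(xi, xi) / np.dot(xi, xi)

w = P @ v
assert np.isclose(np.dot(xi, w), 0.0)  # divergence of w vanishes in Fourier variables
assert np.allclose(P @ P, P)           # P is a projection
```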
Restricting the operators under the norm sign in \eqref{Th Maxwell 1} and \eqref{Th Maxwell 2} to the subspaces \break$J\cap H^1(\mathbb{R}^3;\mathbb{C}^3)$ and $J\cap H^2(\mathbb{R}^3;\mathbb{C}^3)$, respectively, and multiplying by $\mathcal{P}$ from the left, we see that Theorem~\ref{Theorem Maxwell} implies the following result.
\begin{theorem} \label{Theorem Maxwell solutions} Under the assumptions of Subsection~\textnormal{\ref{Subsection The model equation of electrodynamics}}, let $\mathbf{u}_\varepsilon$ and $\mathbf{u}_0$ be the solutions of problems \eqref{maxwell problem} and \eqref{maxwell eff problem}, respectively.
\noindent $1^\circ$. Let $\boldsymbol{\psi}\in J\cap H^1(\mathbb{R}^3;\mathbb{C}^3)$. Then for $\varepsilon >0$ and $\tau\in\mathbb{R}$ we have \begin{equation*} \Vert \mathbf{u}_\varepsilon (\cdot ,\tau)-\mathbf{u}_0(\cdot ,\tau)\Vert _{L_2(\mathbb{R}^3)} \leqslant C_{12}\varepsilon (1+\vert \tau\vert)\Vert \boldsymbol{\psi}\Vert _{H^1(\mathbb{R}^3)}. \end{equation*}
\noindent $2^\circ$. Let $\boldsymbol{\psi}\in J\cap H^2(\mathbb{R}^3;\mathbb{C}^3)$. Then for $0<\varepsilon\leqslant 1$ and $\tau\in\mathbb{R}$ we have \begin{equation*} \Vert \mathbf{u}_\varepsilon (\cdot ,\tau)-\mathbf{u}_0(\cdot ,\tau)-\varepsilon \Psi ^\varepsilon\mathrm{curl}\,\mathbf{u}_0(\cdot ,\tau)\Vert _{H^1(\mathbb{R}^3)} \leqslant C_{19}\varepsilon (1+\vert \tau\vert)\Vert \boldsymbol{\psi}\Vert _{H^2(\mathbb{R}^3)}. \end{equation*} \end{theorem}
According to \eqref{b(D)=,g= Maxwell}, the role of the flux for problem \eqref{maxwell problem} is played by the vector-valued function \begin{equation*} \mathbf{p}_\varepsilon = g^\varepsilon b(\mathbf{D})\mathbf{u}_\varepsilon =-i\begin{pmatrix} (\eta ^\varepsilon )^{-1}\mathrm{curl}\,\mathbf{u}_\varepsilon \\ \nu ^\varepsilon \mathrm{div}\,\mathbf{u}_\varepsilon \end{pmatrix} = -i\begin{pmatrix} (\eta ^\varepsilon )^{-1}\mathrm{curl}\,\mathbf{u}_\varepsilon \\ 0 \end{pmatrix}. \end{equation*} To approximate the flux, we apply Theorem~\ref{Theorem d<=4 fluxes}. The matrix $\widetilde{g}=g(\mathbf{1}+b(\mathbf{D})\Lambda)$ has a block-diagonal structure (see \cite[Subsection 14.3]{BSu05}): the upper left $(3\times 3)$ block is represented by the matrix with the columns $\nabla \widetilde{\Phi}_j(\mathbf{x})+\mathbf{c}_j$, $j=1,2,3$. We denote this block by $a(\mathbf{x})$. The element in the lower right corner is equal to $\underline{\nu}$. The other elements are zero. Then, by \eqref{b(D)=,g= Maxwell} and \eqref{maxwell eff problem}, \begin{equation*} \widetilde{g}^\varepsilon b(\mathbf{D})\mathbf{u}_0=-i\begin{pmatrix} a^\varepsilon\mathrm{curl}\,\mathbf{u}_0\\0 \end{pmatrix}. \end{equation*}
We arrive at the following statement.
\begin{theorem} Under the assumptions of Theorem~\textnormal{\ref{Theorem Maxwell solutions}}, let $\boldsymbol{\psi}\in J\cap H^2(\mathbb{R}^3;\mathbb{C}^3)$. Then for $0<\varepsilon\leqslant 1$ and $\tau \in\mathbb{R}$ we have \begin{equation*} \Vert (\eta ^\varepsilon)^{-1}\mathrm{curl}\,\mathbf{u}_\varepsilon (\cdot ,\tau)-a^\varepsilon\mathrm{curl}\,\mathbf{u}_0(\cdot ,\tau)\Vert _{L_2(\mathbb{R}^3)} \leqslant C_{30}\varepsilon (1+\vert\tau\vert)\Vert \boldsymbol{\psi}\Vert _{H^2(\mathbb{R}^3)}. \end{equation*} The constant $C_{30}$ depends only on $\Vert \eta\Vert _{L_\infty}$, $\Vert \eta^{-1}\Vert _{L_\infty}$, $\Vert \nu\Vert _{L_\infty}$, $\Vert \nu ^{-1}\Vert _{L_\infty}$, and the parameters of the lattice $\Gamma$. \end{theorem}
\end{document}
\begin{document}
\title{Total Degree Formula for the Generic Offset to a Parametric Surface\footnote{\small Preprint of an article to be published at the International Journal of Algebra and Computation, World Scientific Publishing, DOI:10.1142/S0218196711006807}}
\tableofcontents
\section*{Introduction}\label{sec:01-Introduction}
\noindent This paper focuses on the study of the total degree w.r.t. the spatial variables of the multivariate polynomial defining the generic offset to a rational surface in three-dimensional space. So, before continuing with this introduction, let us describe, at least informally, the offsetting construction; for a detailed explanation see Section \ref{sec:GenericOffsetDegreeProblem} and, more specifically, for the concept of generic offset, see Definition \ref{def:ch1:GenericOffset} (page \pageref{def:ch1:GenericOffset}).
\noindent Let $\Sigma$ be a surface in three-dimensional space. At each point $\bar p$ of $\Sigma$, consider the normal line $\cL_{\Sigma}$ to the surface (assume, for this informal introduction, that the normal line to $\Sigma$ at $\bar p$ is well defined). Let $\bar q$ be a point on that line, at distance $d^o$ from $\bar p$ (there are two such points $\bar q$); equivalently, we consider the intersection points of $\cL_{\Sigma}$ with a sphere centered at $\bar p$ and with radius $d^o$. The offset surface to $\Sigma$ at distance $d^o$ is the set $\cO_{d^o}(\Sigma)$ of all the points $\bar q$ obtained by this geometric construction, illustrated in Figure \ref{Figure:InformalDefinitionOffsetSurface}. $\Sigma$ is said to be the {\sf generating surface} of $\cO_{d^o}(\Sigma)$. \begin{figure}
\caption{Informal Definition of Offset to a Generating Surface }
\label{Figure:InformalDefinitionOffsetSurface}
\end{figure}
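The construction above can be checked numerically. The following is a minimal sympy sketch, in which the paraboloid and the chosen point are illustrative assumptions (not objects from the paper): it computes the two offset points at distance $d^o$ on the normal line through a point of the surface.

```python
# Illustrative sketch of the offsetting construction (assumed example surface:
# the paraboloid f = y3 - y1^2 - y2^2; not a surface discussed in the paper).
from sympy import Matrix, symbols, sqrt

y1, y2, y3 = symbols('y1 y2 y3')
f = y3 - y1**2 - y2**2

p = Matrix([1, 1, 2])                                # a surface point, f(p) = 0
grad = Matrix([f.diff(v) for v in (y1, y2, y3)])     # normal direction of Sigma
n = grad.subs({y1: p[0], y2: p[1], y3: p[2]})        # = (-2, -2, 1)
unit = n / sqrt(n.dot(n))                            # ||n|| = 3

d0 = 3                                               # a distance value d^o
q_plus = p + d0 * unit                               # the two points q of
q_minus = p - d0 * unit                              # O_{d^o} on the normal line
print(q_plus.T, q_minus.T)                           # (-1, -1, 3) and (3, 3, 1)
```

Both points lie at exact distance $3$ from $\bar p$, matching the two intersection points of the normal line with the sphere of radius $d^o$ centered at $\bar p$.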
\noindent The classical offset construction for algebraic hypersurfaces has been, and still is, an active subject of research. Even though the historical origins of the study of offset curves can be traced back to the work of classical geometers (\cite{Leibniz1692},\cite{loria1902},\cite{salmon-treatise}), often under the denomination of {\em parallel curves}, the subject received increased attention when the technological advance in the fields of Computer Assisted Design and Computer Assisted Manufacturing (CAD/CAM) resulted in a strong demand for effective algorithms for the manipulation of curves and surfaces. We quote the following from one of the two seminal papers (\cite{Farouki1990},\cite{Farouki1990a}) by Farouki and Neff: {\em ``Apart from numerical-control machining, offset curves arise in a variety of practical applications such as tolerance analysis, geometric optics, robot path-planning, and in the formulation of simple geometric procedures (growing/shrinking, blending, filleting, etc.)''}. To the applications listed by these authors we should add here some recent ones, e.g. the connection with the medial axis transform; for these, and related applications see Chapter 11 in \cite{patrikalakis2002sic}, and the references contained therein.
\noindent As a result of this interest coming from the applications, many new methods and algorithms have been developed by engineers and mathematicians, and many geometric and algebraic properties of the offset construction have been studied in recent years; see, e.g. the references \cite{alcazar2008},\cite{alcazar2008localsurfaces},\cite{alcazarLocalCurves2009},\cite{alcazar2007local},\cite{anton2005offset},\cite{Arrondo1997},\cite{Arrondo1999},\cite{Farouki1990},\cite{Farouki1990a},\cite{Hoffmann1989},\cite{hoschek1993fundamentals},\cite{Lu1995}, \cite{Lu95TR},\cite{Pottmann1995},\cite{pottmann1996rational},\cite{pottmann1998laguerre},\cite{SS05},\cite{SSS09},\cite{Sendra2000},\cite{Sendra2000a}. In addition to these references, we also refer to the theses \cite{Alcazar2007}, \cite{PhdSanSegundo2010} and \cite{Sendra1999}, developed within the research group of Prof.\,J.R. Sendra. In \cite{Sendra1999} the fundamental algebraic properties of offsets to hypersurfaces are deduced, the unirationality of the offset components is characterized, and the genus problem (for the curve case) is studied. In \cite{Alcazar2007} the topological behavior of the offset curve is analyzed. The present paper contains the results about the total degree of the generic offset to a rational surface in Chapter 4 of the Ph.D. Thesis \cite{PhdSanSegundo2010}.
\noindent With the exception of certain degenerate situations, which are indeed well known, the offset to an algebraic hypersurface is again a hypersurface (see \cite{Sendra2000}). Thus, one might answer all the problems mentioned above (parametrization expressions, genus computation, topological type determination, degree analysis, etc.) by applying the available algorithms to the resulting (offset) hypersurface. However, in most cases, this strategy turns out to be unfeasible. The reason is that the offsetting process generates a huge increment in the size of the data defining the offset in comparison to the data of the original variety. The challenge, therefore, is to derive information (say algebraic or geometric properties) of the offset hypersurface from the information that could be easily derived from the original (in general much simpler) hypersurface.
\noindent Framed in the above philosophy, the goal of this paper is to provide an efficient formula for the total degree of the generic offset to a rational surface. More precisely: let $f(y_1,y_2,y_3)$ be the defining polynomial of $\Sigma$, and let us treat $d$ as a variable. Then, we introduce a new polynomial $g(d,x_1,x_2,x_3)$ such that for almost all non-zero values $d^o$ of $d$ the specialization $g(d^o,x_1,x_2,x_3)$ defines the offset to $\Sigma$ at distance $d^o$. Such a polynomial is called the generic offset polynomial (see Definition \ref{def:ch1:GenericOffsetEquation}, page \pageref{def:ch1:GenericOffsetEquation}), and the hypersurface that it defines (in four-dimensional space) is called the generic offset of $\Sigma$ (see Definition \ref{def:ch1:GenericOffset}, page \pageref{def:ch1:GenericOffset}). In this situation, the goal of this paper is to describe an effective solution for the problem of computing the total degree in $\{x_1,x_2,x_3\}$ of $g$.
\noindent There exist some (few) contributions in the literature concerning the degree problem for offset curves and surfaces. To our knowledge, the first attempt to provide a degree formula for offset curves was given by Salmon, in \cite{salmon-treatise}. This formula was proved wrong in the already mentioned paper by Farouki and Neff \cite{Farouki1990}. In that paper, the authors provide a degree formula for rational curves given parametrically. They also deal separately with the case of polynomial parametrizations. Our papers \cite{SS05} and \cite{SSS09} provide a complete and efficient solution for the problem of computing the degree structure (that is, total and partial degrees, and degree w.r.t. the distance) for the case of plane algebraic curves, both in the implicit and the parametric case. Finally, let us mention that Anton et al. provide in \cite{anton2005offset} an alternative formula (to those presented in \cite{SS05}) for computing the total degree of the offsets to an algebraic curve.
\noindent In contrast with the case of curves, even in the case of a generating parametric surface, there are, to our knowledge, no available results for the offset degree problem in the scientific literature. In this paper, concretely in Theorem \ref{thm:ch4:DegreeFormula} (page \pageref{thm:ch4:DegreeFormula}), we provide a formula for the total degree of the generic offset to a rational surface, given in parametric form, provided that Assumption \ref{rem:ch4:NotInfinitelyManyOffsetsThroughOrigin} holds (see page \pageref{rem:ch4:NotInfinitelyManyOffsetsThroughOrigin}). {\sf The parametrization of the surface is not assumed to be proper}, and the formula in fact provides the product of the total offset degree times the tracing index of the parametrization. However, since there are efficient algorithms available for computing the tracing index of a surface parametrization (see \cite{SendraPerez2004JPAA-DegreeSurfaceParam}), this does not limit the applicability of the formula.
\noindent The strategy for this offset degree problem is, as in our previous papers, based on the analysis of the intersection between the generic offset and a pencil of lines through the origin. The restriction to the rational case, combined with this strategy, results in a reduction in the dimension of the space needed to study the intersection problem. Thus, we are led to consider again an intersection problem of plane curves. The auxiliary curves involved in this case are obtained by eliminating the variables corresponding to a point in the generating surface from the offset-line intersection system. The main technical differences between this paper and our previous ones (see \cite{SS05} and \cite{SSS09}) are that: \begin{itemize}
\item Here we need to consider more than two intersection curves. Thus, the total degree formula is expressed as a generalized resultant of the equations of these auxiliary curves.
\item Furthermore, all the curves involved in the intersection problem depend on parameters. Thus, the notion of fake points and their characterization is technically more demanding. \end{itemize} Generally speaking, the dimensional advantage gained by working with a parametric representation is partially compensated by the fact that we are not dealing directly with the points of the surface but with their parametric representation, and thus we are losing some geometric intuition. In the general situation of an implicitly given generating surface, if one were to apply a similar strategy to the offset degree problem, we believe that one is bound to consider a surface intersection problem, instead of the simpler curve intersection problem used here. However, in this paper we do not address the offset degree problem in that general situation.
\noindent As those skilled in the art know, going from the curve to the surface case usually implies a huge step in the difficulty of the proofs. For us, this has indeed been the case. As a result, some of the proofs in this paper are rather technical. And in one particular case, we have not been able to extend to the surface case the proof of a result that we obtained for plane curves. Specifically, in Lemma 4 of our paper \cite{SS05} we proved that there are only finitely many distance values $d^o$ for which the origin belongs to $\cO_{d^o}({\mathcal C})$. Our conjecture is that a similar property holds for all algebraic surfaces. However, as we said, we have not been able to provide a proof. The following example shows that, even for curves, this property does not hold if we consider the analytic case.
\begin{ExampleIntro} We want to emphasize that this proposition does not hold in a non-algebraic context. For example, the ``offset'' to the analytic curve with implicit equation $y^3-\sin(x)=0$, passes through the origin for infinitely many values of $d$. In fact, for this curve, all the offsets with values of $d$ equal to $k\pi$, for $k\in\Z$, pass through the origin. This is illustrated in Figure \ref{Figure:SmoothCurveInfiniteOffsetssPassingThroughOrigin}, where the curve (in red) and some offsets passing through the origin (for $k=1,2,3$) are shown. \begin{figure}
\caption{A smooth curve with infinitely many offsets through the origin}
\label{Figure:SmoothCurveInfiniteOffsetssPassingThroughOrigin}
\end{figure} \end{ExampleIntro} \begin{center} \rule{2cm}{0.5pt} \quad\\ \end{center}
\noindent Besides, by its very nature, our strategy fails in the case of some simple surfaces.
We have met similar situations in our previous papers (\cite{SS05} and \cite{SSS09}), where we needed to exclude circles centered at the origin and lines through the origin from our considerations. Correspondingly, in this paper we need to exclude the case in which the generating surface is a sphere centered at the origin. In this case, however, the generic offset degree (in fact the generic offset equation) is known beforehand. Therefore, excluding it does not really affect the generality of the degree formula that we present here. The above observations are the reason for the {\sf following assumptions:}
\noindent{\bf Assumptions \ref{rem:ch4:NotInfinitelyManyOffsetsThroughOrigin}} (page \pageref{rem:ch4:NotInfinitelyManyOffsetsThroughOrigin}) Let $\Sigma$ denote the generating surface. In this paper, we assume that: \begin{enumerate}
\item[(1)] {\sf There exists a finite subset $\Delta^1$ of $\C$ such that, for $d^o\not\in\Delta^1$ the origin does not belong to $\cO_{d^o}({\Sigma})$.}
\item[(2)] {\sf $\Sigma$ is not a sphere centered at the origin.} \end{enumerate}
\begin{center} \subsubsection*{Structure of the Paper}\label{subsubsec:ch1:StructurePaper} \end{center}
In Subsection \ref{subsec:ch1:FormalDefinitionPropertiesGenericOffset} of {\sf Section \ref{sec:GenericOffsetDegreeProblem}} (page \pageref{sec:GenericOffsetDegreeProblem}) we begin by reviewing the fundamental concepts related to the classical offset construction, and its basic properties that will be used in the sequel. Then we introduce the notion of generic offset, which can be considered as a hypersurface that collects as level curves all the classical offsets to a given hypersurface. This notion is the natural generalization of the classical concept, by considering the distance as a new variable. The defining polynomial of this object is the {generic offset polynomial}; see Definition \ref{def:ch1:GenericOffsetEquation} (page \pageref{def:ch1:GenericOffsetEquation}). We establish the fundamental specialization property of the generic offset polynomial in Theorem \ref{thm:Ch1:GenericOffsetSpecialization} (page \pageref{thm:Ch1:GenericOffsetSpecialization}). In Subsection \ref{subsec:ch4:SurfaceParametrizationsAndtheirAssociatedNormalVector}, we recall some basic notions on parametric algebraic surfaces, and some technical lemmas about them. We introduce the notion of associated normal vector, and we also construct a parametric analogue of the Generic Offset System. In Subsection \ref{sec:ch1:IntersectionCurvesResultants} (page \pageref{sec:ch1:IntersectionCurvesResultants}) we present two technical lemmas about the use of univariate resultants to study the problem of the intersection of plane algebraic curves.
\noindent {\sf Section \ref{sec:ch4:OffsetLineIntersectionforRationalSurfaces}} (page \pageref{sec:ch4:OffsetLineIntersectionforRationalSurfaces}) describes the theoretical foundation of the strategy. Subsection \ref{subsec:ch4:IntersectionWithLines} contains the analysis of the intersection between the generic offset and a pencil of lines through the origin. In Subsection \ref{sec:ch4:AuxiliaryCurvesForRationalSurfaces} we will see that, when elimination techniques are brought into our strategy, the dimension of the space in which we count the points in ${\mathcal O}_d(\Sigma)\cap{\mathcal L}_{\bar k}$ is reduced, and we arrive at an intersection problem between projective plane curves. Then we begin the analysis of that problem. Specifically, in Subsection \ref{subsec:ch4:EliminationAndAuxiliaryPolynomials} we describe the auxiliary polynomials obtained by using elimination techniques in the Parametric Offset-Line System, and we introduce the Auxiliary System \ref{sys:ch4:AuxiliaryCurvesSystem} (page \pageref{sys:ch4:AuxiliaryCurvesSystem}), denoted by ${{\mathfrak S}^P_3}(d,\bar k)$. Some geometric properties of the solutions of ${{\mathfrak S}^P_3}(d,\bar k)$ (see Proposition \ref{prop:ch4:ExtendableSolutions}, page \pageref{prop:ch4:ExtendableSolutions} and Lemma \ref{lem:ch4:SignOfLambdaAndOffsetting}, page \pageref{lem:ch4:SignOfLambdaAndOffsetting}) will be used in the sequel to study the relation between the solution sets of Systems $\mathfrak S^P_2(d,\bar k)$ and ${{\mathfrak S}^P_3}(d,\bar k)$. In Subsection \ref{subsec:ch4:FakePoints} (page \pageref{subsec:ch4:FakePoints}) we define the corresponding notion of fake points and invariant points for the Affine Auxiliary System ${{\mathfrak S}^P_3}(d,\bar k)$. The relation between these two notions is then shown in Proposition \ref{prop:ch4:FakePointsAndInvariantSolutionsCoincide} (page \pageref{prop:ch4:FakePointsAndInvariantSolutionsCoincide}).
The statement and proof of the degree formula appear in {\sf Section \ref{sec:ch4:TotalDegreeFormulaForRationalSurfaces}} (page \pageref{sec:ch4:TotalDegreeFormulaForRationalSurfaces}). This section is structured into four subsections as follows. In Subsection \ref{subsec:ProjectivizationParametrizationSurface} we study the projective version of the auxiliary curves introduced in the preceding section, and we introduce the Projective Auxiliary System \ref{sys:ch4:AuxiliaryCurvesSystem-ProjectiveAndPrimitive} (page \pageref{sys:ch4:AuxiliaryCurvesSystem-ProjectiveAndPrimitive}). The polynomials that define this system are the basic ingredients of the degree formula. Subsection \ref{subsec:ch4:InvariantSolutionsOfS5}. (page \pageref{subsec:ch4:InvariantSolutionsOfS5}) deals with the invariant solutions of the Projective Auxiliary System. In Subsection \ref{subsec:ch4:MultiplicityOfIntersectionAtNon-FakePoints} (page \pageref{subsec:ch4:MultiplicityOfIntersectionAtNon-FakePoints}) we will prove that the value of the multiplicity of intersection of the auxiliary curves at their non-invariant points of intersection equals one (in Proposition \ref{prop:ch4:MultiplicityAtNonFakePoints}, page \pageref{prop:ch4:MultiplicityAtNonFakePoints}). Subsection \ref{subsec:ch4:DegreeFormula} (page \pageref{subsec:ch4:DegreeFormula}) contains the statement and proof of the degree formula, in Theorem \ref{thm:ch4:DegreeFormula} (page \pageref{thm:ch4:DegreeFormula}).
\begin{center}
\subsubsection*{Notation and Terminology}\label{subsubsec:ch1:NotationTerminology} \end{center} \noindent For ease of reference, we introduce and collect here the main {\sf notation and terminology} that will be used throughout this paper. \begin{itemize}
\item As usual, $\mathbb C$ and $\mathbb R$ correspond to the fields of complex and real numbers, respectively. The three-dimensional affine space
is the set ${\C}^3$, and the associated projective space will be denoted by $\mathbb P^3$.
\item We will use $(y_1,y_2,y_3)$ for the affine coordinates in ${\C}^3$, and $(y_0:y_1:y_2:y_3)$ for the projective coordinates in $\mathbb P^3$,
as well as the abbreviations:
\[\bar y=(y_1,y_2,y_3),\quad \bar y_h=(y_0:y_1:y_2:y_3).\]
\item In order to distinguish offset surfaces from their generating surfaces, we will also use $\bar x=(x_1,x_2,x_3),\quad
\bar x_h=(x_0:x_1:x_2:x_3)$ to refer to the affine and projective coordinates of a point in the offset,
and $\bar y, \bar y_h$ as above for the original surface.
\item A point in ${\C}^3$ will be denoted by
\[\bar y^o=(y_1^o,y_2^o,y_3^o)\]
and, correspondingly, a point in $\mathbb P^3$ will be denoted by
\[\bar y^o_h=(y_0^o:y_1^o:y_2^o:y_3^o)\]
Throughout this work, we will frequently use this $^o$ superscript to indicate a particular value of a variable.
\item The Zariski closure of a set $A\subset {\C}^n$ will be denoted by $A^*$. The projective closure of an algebraic set $A$
will be denoted by $\overline A$.
\item Let $A$ be an algebraic set. We denote by ${\rm Sing}_a(A)$ the affine singular locus of $A$, and by ${\rm Sing}(A)$ the projective singular locus of $A$; i.e. the singular locus of $\overline{A}$.
\item If $I$ is a polynomial ideal, $\mathbf V(I)$ denotes the affine algebraic set defined by $I$; that is,
\[\mathbf V(I)=\{\bar x^o\in{\C}^n/\forall f\in I, f(\bar x^o)=0\}\]
\item When we homogenize a polynomial $g\in{\C}[y_1,y_2,y_3]$, we will use capital letters, as in $G(y_0,y_1,y_2,y_3)$, to denote the
homogenization of $g$ w.r.t. $y_0$. Also, by abuse of notation, we will write $g(\bar y)$, $G(\bar y_h), {\C}[\bar y], {\C}[\bar y_h]$.
\item The partial derivatives w.r.t. $y_i$ of $g(y_1,y_2,y_3)$ and of its homogenization $G(y_0,y_1,y_2,y_3)$ will be denoted
$g_i$ and $G_i$ respectively, for $i=0,\ldots,3$. The symbol $\nabla g$ (resp. $\nabla G$) denotes the gradient vector of partial derivatives,
i.e.:
\[\nabla g(\bar y)=\left(g_1,g_2,g_3\right)(\bar y),\quad(\mbox{ resp. }\nabla G(\bar y_h)=\left(G_0,\ldots,G_3\right)(\bar y_h) )\]
\item The symbol $\Sigma$ denotes a {\sf rational algebraic surface} defined over $\C$ by the irreducible polynomial $f(\bar y)\in\C[\bar y]$.
\item We assume that we are given a {\sf non-necessarily proper rational parametrization} of $\Sigma$:
\[
{P}(\bar t)=\left(
\dfrac{P_{1}(\bar t)}{P_{0}(\bar t)},
\dfrac{P_{2}(\bar t)}{P_{0}(\bar t)},
\dfrac{P_{3}(\bar t)}{P_{0}(\bar t)}
\right).
\]
Here $\bar t=(t_1,t_2)$, and $P_0,\ldots,P_3\in\C[\bar t]$ with $\gcd(P_0,\ldots,P_3)=1$.
\item The {\sf projectivization} $P_h$ of ${P}$ is obtained by homogenizing the components of $P$ w.r.t. a new variable $t_0$, multiplying both the numerators and denominators if necessary by a suitable power of $t_0$. It will be denoted by \[ P_h(\bar t_h)= \left( \dfrac{X(\bar t_h)}{W(\bar t_h)}, \dfrac{Y(\bar t_h)}{W(\bar t_h)}, \dfrac{Z(\bar t_h)}{W(\bar t_h)} \right) \] where $\bar t_h=(t_0:t_1:t_2)$, and $X, Y, Z, W\in\C[\bar t_h]$ are homogeneous polynomials of the same degree $d_{P}$, for which $\gcd(X,Y,Z,W)=1$ holds.
\item The (classical) offset at distance $d^o\in {\C}^\times$ to ${\Sigma}$ is denoted by ${\cal O}_{d^o}({\Sigma})$ and the generic (classical) offset to ${\Sigma}$ by ${\cal O}_{d}({\Sigma})$ (see Definition \ref{def:ch1:GenericOffset} in page \pageref{def:ch1:GenericOffset}). In this work, the
variable $d$ always represents the distance values.
\item We denote by $g\in{\C}[d, \bar x]$ the generic offset equation for ${\Sigma}$ (see Definition \ref{def:ch1:GenericOffsetEquation} in
page \pageref{def:ch1:GenericOffsetEquation}). \item $\delta$ is the total degree of $g$ w.r.t. $\bar x$; i.e. $\delta={\rm deg}_{\bar x}(g)$.
\item Given $\phi(\bar y), \psi(\bar y)\in{\C}[\bar y]$ we denote by $\operatorname{Res}_{y_i}(\phi,\psi)$ the
univariate resultant of $\phi$ and $\psi$ w.r.t. $y_i$, for $i=0,\ldots,n$. And if $A$ is a subset of the set of variables $\{y_0,\ldots,y_n\}$, we denote by $\operatorname{PP}_{A}(\phi)$ (resp. $\mathop{\mathrm{Con}}\nolimits_{A}(\phi)$) the primitive part (resp. the content) of the polynomial $\phi$ w.r.t. $A$. \item\label{def:ch0:NotationForGenericLine} $\cL_{\bar k}$ denotes a generic line through the origin, whose direction is determined by the values of a variable $\bar k=(k_1,k_2,k_3)$.
More precisely (a particular value of $\bar k$ is denoted by $\bar k^o$), the parametric equations of $\cL_{\bar k}$ are \[ \ell_i(\bar k,l,\bar x):\,{x_i}-{k_i}\,l=0, \mbox{ for }i=1,2,3, \] where $l$ is the parameter.
\item We will keep the convention of always using the letter $\Delta$ to indicate a finite subset of values of the variable $d$.
Accordingly, the letter $\Theta$ denotes a Zariski closed set of values of $\bar k$.
A Zariski open subset of ${\C}\times{\C}^{3}$, formed by pairs of values of $(d,\bar k)$, will be denoted by $\Omega$. In some proofs,
an open set $\Omega$ will be constructed in several steps. In these cases we will use a superscript to indicate the step in the construction.
Thus, $\Omega_1^0, \Omega_1^1, \Omega_1^2$, etc. are open sets, defined in successive steps in the construction of $\Omega_1$.
\item A similar convention will be used for systems of equations and their solutions. A system of equations will be denoted by $\mfS$, with
sub and superscripts to distinguish between systems, and the set of solutions of the system will be denoted by $\Psi$, with the same choice of sub and superscripts.
\item We will also need to consider local parametrizations of algebraic varieties. To distinguish local from (global) rational parametrizations, we will use calligraphic typeface for local parametrizations. Thus, a local parametrization will be denoted by \[\cP(\bar t)=\left(\cP_1(\bar t),\ldots,\cP_n(\bar t)\right)\] \item Let $V$ be an $n$-dimensional vector space over ${\C}$. Two vectors $\bar v=(v_1,\ldots,v_n)$ and $\bar w=(w_1,\ldots,w_n)$ are said to be {\sf parallel} if and only if \[v_iw_j-v_jw_i=0\mbox{, for }i,j=1,\ldots,n\] In this case we write $\bar v\parallel\bar w$. \item Given two vectors $(a_1,b_1,c_1), (a_2,b_2,c_2)\in{\C}^3$, their {\sf cross product} is defined as $$(a_1,b_1,c_1)\wedge(a_2,b_2,c_2)=(b_1c_2-b_2c_1,a_2c_1-a_1c_2,a_1b_2-a_2b_1)$$ \item We consider in $V$ the symmetric bilinear form defined by: \[\Xi(\bar v,\bar w)=\sum_{i=1}^nv_iw_i\] which endows $V$ with the structure of a metric vector space (see \cite{Roman2005}, \cite{Snapper1971}) with light cone of isotropy given by: \[ L_{\Xi}=\{(x_1,\ldots,x_n)\in V/x_1^2+\cdots+x_n^2=0\} \]
Note that, over $\C$, this is not the usual unitary space $\mathbb C^n$. On the other hand, over $\R$ it is the usual Euclidean metric space, so our results remain useful for applications. In this work, the {\sf norm} $\|\bar v\|$ of a vector $\bar v\in V$ denotes a square root of $\Xi(\bar v,\bar v)$, that is
\[\|\bar v\|^2=\Xi(\bar v,\bar v)=\sum_{i=1}^nv_i^2\]
Moreover, a vector $\bar v\in V$ is {\sf isotropic} if $\bar v\in L_{\Xi}$ (equivalently if $\|\bar v\|=0$). Note that for a non-isotropic vector there are precisely two choices of norm, which differ only by multiplication by $-1$. \item We denote \[\mathrm{nor}_{(i,j)}(\bar x,\bar y)=f_i(\bar y)(x_j-y_j)-f_j(\bar y)(x_i-y_i).\] Recall that $f(\bar y)$ is the defining polynomial of the irreducible hypersurface ${\Sigma}$, and that $f_i(\bar y)$ denotes its partial derivative w.r.t. $y_i$.
\item \label{notation:ch1:Hodograph} The {\sf affine normal-hodograph} of $f$ is the polynomial:
\begin{equation*}\label{def:ch1:ImplicitHodographNormalIsotropic}
h(\bar y)=f_1^2(\bar y)+f_2^2(\bar y)+f_3^2(\bar y)
\end{equation*}
Following our convention of notation, $H$ is the homogenization of $h$ w.r.t. $y_0$; that is:
\[H(\bar y_h)=F_1^2(\bar y_h)+F_2^2(\bar y_h)+F_3^2(\bar y_h)\]
$H$ is called the {\sf projective normal-hodograph} of ${\Sigma}$. Moreover, a point $\bar y^o_h\in\overline{{\Sigma}}$ (resp. $\bar y^o\in{{\Sigma}}$) is called {\sf normal-isotropic} if $H(\bar y^o_h)=0$ (resp. $h(\bar y^o)=0$).
\item We denote by ${\Sigma}_o$ the set of non normal-isotropic affine points of ${\Sigma}$; that is:
\[{\Sigma}_o=\{\bar y^o\in{\Sigma}/h(\bar y^o)\neq 0\}\]
In the rest of this work {\sf we will assume} that the Zariski-open subset ${\Sigma}_o$ is non-empty. In \cite{Sendra1999}, Proposition 2, it is proved that this is equivalent to $H$ not being a multiple of $F$. We denote by ${\rm Iso}({\Sigma})$ the closed set of affine normal-isotropic points of ${\Sigma}$. Note that ${\rm Sing}_a({\Sigma})\subset{\rm Iso}({\Sigma})$. \item If $K$ is an irreducible component of an algebraic set $A$, and $K\subset{\rm Iso}(A)$ we will say that $K$ is normal-isotropic. \end{itemize}
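As a sanity check on the normal-hodograph notation, here is a small sympy sketch in which the unit sphere is an assumed example generating surface (not one treated in the paper): it verifies that $h=4f+4$, so $h\equiv 4$ on the sphere and the sphere has no affine normal-isotropic points, i.e. ${\Sigma}_o={\Sigma}$; it also exhibits an isotropic vector of the light cone $L_{\Xi}$ over $\C$.

```python
# Sketch (assumed example): normal-hodograph of the unit sphere, and an
# isotropic vector of the light cone L_Xi over the complex numbers.
from sympy import symbols, expand, I

y1, y2, y3 = symbols('y1 y2 y3')
f = y1**2 + y2**2 + y3**2 - 1                  # unit sphere
h = sum(f.diff(v)**2 for v in (y1, y2, y3))    # h = f_1^2 + f_2^2 + f_3^2

# h = 4(y1^2 + y2^2 + y3^2) = 4*f + 4, hence h = 4 at every point of the
# sphere: the sphere has no affine normal-isotropic points.
print(expand(h - 4*f))                         # 4

# Over C the light cone is nontrivial: v = (1, I, 0) satisfies ||v||^2 = 0.
v = (1, I, 0)
print(sum(c**2 for c in v))                    # 0
```

In particular, $h$ is not a multiple of $f$ here, which is the condition quoted above for ${\Sigma}_o$ to be non-empty.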
\section{The Generic Offset}\label{sec:GenericOffsetDegreeProblem}
\begin{itemize} \item In {\sf Subsection \ref{subsec:ch1:FormalDefinitionPropertiesGenericOffset}}, we first review the concept of classical offset and the related properties that will be used in the sequel. In order to do this, we follow the incidence diagram formalism in \cite{Arrondo1997} and \cite{Sendra2000} (see Diagram \ref{sys:ch1:IncidenceDiagramOffsetFixedDistance} on page \pageref{sys:ch1:IncidenceDiagramOffsetFixedDistance}; the reader may also see our previous papers \cite{SS06} and \cite{SSS09})\footnote{The referred works deal with general hypersurfaces. We consider here the special case of surfaces in three-dimensional space.}. Besides, the relation of this notion with Elimination Theory is established. We also collect several fundamental properties of the classical offset construction that we will refer to in the sequel. Next, the notions of generic offset and generic offset polynomial are introduced; see Definition \ref{def:ch1:GenericOffsetEquation} (page \pageref{def:ch1:GenericOffsetEquation}). The fundamental specialization property of the generic offset polynomial is established in Theorem \ref{thm:Ch1:GenericOffsetSpecialization} (page \pageref{thm:Ch1:GenericOffsetSpecialization}).
\item In {\sf Subsection \ref{subsec:ch4:SurfaceParametrizationsAndtheirAssociatedNormalVector}}, we recall some basic notions on parametric algebraic surfaces, and some technical lemmas about them. We also introduce the notion of associated normal vector, and we review some of its properties. Besides, we construct a parametric analogue of the Generic Offset System (see System \ref{sys:ch4:ParametricOffsetSystem}).
\item {\sf Subsection \ref{sec:ch1:IntersectionCurvesResultants}} (page \pageref{sec:ch1:IntersectionCurvesResultants}) contains some technical results about the use of univariate resultants to study the problem of the intersection of plane algebraic curves. The classical setting for the computation of the intersection points of two plane curves by means of resultants is well known (see for instance \cite{Brieskorn1986}, \cite{Walker1950} and \cite{Sendra2007}). This requires in general a linear change of coordinates. However, in this work, we need to analyze the behavior of the resultant when some of the standard requirements are not satisfied. This is the content of Lemma \ref{lem:ch1:MultiplicityUsingResultants} (page \pageref{lem:ch1:MultiplicityUsingResultants}). Similarly, we also need to analyze the case when more than two curves are involved, by using generalized resultants. This is done in Lemma \ref{lem:ch1:GeneralizedResultants} (page \pageref{lem:ch1:GeneralizedResultants}).
\end{itemize}
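The classical resultant computation that the lemmas of Subsection \ref{sec:ch1:IntersectionCurvesResultants} refine can be illustrated with a minimal sympy sketch; the circle and parabola below are illustrative assumptions, not curves from the paper. Eliminating $y$ projects the intersection points onto the $x$-axis, and the degree of the resultant matches the B\'ezout bound $2\cdot 2=4$.

```python
# Minimal sketch of the classical setting: intersecting two plane curves by
# a univariate resultant (example curves are assumptions, not from the paper).
from sympy import symbols, resultant, expand, Poly

x, y = symbols('x y')
C1 = x**2 + y**2 - 1               # unit circle
C2 = y - x**2                      # parabola

R = expand(resultant(C1, C2, y))   # eliminate y
print(R)                           # x**4 + x**2 - 1
print(Poly(R, x).degree())         # 4 = Bezout bound 2*2
```

The roots of $R$ are exactly the $x$-coordinates of the four intersection points, which is the kind of projection argument used throughout the degree analysis.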
\subsection{Formal definition and basic properties of the generic offset}\label{subsec:ch1:FormalDefinitionPropertiesGenericOffset}
We begin by recalling the formal definition of classical offset, that can be found in \cite{Arrondo1997} and \cite{Sendra1999}. With the notation introduced above, let $d^o\in{\C}^\times$ be a fixed value, and let $\Psi_{d^o}(\Sigma)\subset{\C}^{7}$ be the set of solutions (in the variables $(\bar x,\bar y,u)$) of the following polynomial system: \begin{equation}\label{sys:Ch1:OffsetSystemFixed_d} \left.\begin{array}{lr} &f(\bar y)=0\\ \mathrm{nor}_{(i,j)}(\bar x,\bar y): &f_i(\bar y)(x_j-y_j)-f_j(\bar y)(x_i-y_i)=0\\ (\mbox{for }i,j=1,\ldots,3; i<j)&\\ b_{d^o}(\bar x, \bar y ):& (x_1-y_1)^2+(x_2-y_2)^2+(x_3-y_3)^2-(d^o)^2=0\\
w(\bar y,u):& u\cdot(\|\nabla f(\bar y)\|^2)-1=0 \end{array}\right\} \equiv\mathfrak{S}_1(d^o) \end{equation} \noindent Let us consider the following: \begin{center}
\begin{equation}\label{sys:ch1:IncidenceDiagramOffsetFixedDistance} \begin{array}{c} \mbox{\sf Offset Incidence Diagram}\\[1mm] \xymatrix{ &\Psi_{d^o}({\Sigma})\subset {\C}^3\times{\C}^3\times{\C} \ar[ld]^{\pi_1}\ar[rd]_{\pi_2}&\\ {\mathcal A}_{d^o}({\Sigma})\subset {\C}^3&&{\Sigma}\subset {\C}^3} \end{array} \end{equation} \end{center} \noindent where \[ \begin{array}{ccc} \begin{cases} \pi_1:{\C}^{7}\mapsto{\C}^{3}\\ \pi_1(\bar x,\bar y,u)=\bar x \end{cases} & \mbox{and} & \begin{cases} \pi_2:{\C}^{7}\mapsto{\C}^{3}\\ \pi_2(\bar x,\bar y,u)=\bar y \end{cases} \end{array} \] and ${\mathcal A}_{d^o}({\Sigma})=\pi_1(\Psi_{d^o}({{\Sigma}}))$.
\begin{Definition}\label{Def:ClassicalOffset} The {\sf (classical) offset} to $\Sigma$ at distance $d^o$ is the algebraic set ${\mathcal A}_{d^o}(\Sigma)^*$ (recall that the asterisk indicates Zariski closure). It will be denoted by ${\cO}_{d^o}(\Sigma)$. \end{Definition}
\begin{Remark}\label{rem:ch1:CommentariesToFormalDefinitionClassicalOffset} \begin{enumerate}
\item[]
\item If there is a solution of the system \ref{sys:Ch1:OffsetSystemFixed_d} of the form $\bar p^o=(\bar x^o,\bar y^o,u^o)$, then we say that the point $\bar y^o\in{\Sigma}$ and the point $\bar x^o\in\cO_{d^o}(\Sigma)$ are {\sf associated points}.
\item Let $I(d^o)\subset{\C}[\bar x, \bar y, u]$ be the ideal generated by the polynomials in ${\mathfrak S}_1(d^o)$; that is:
\[I(d^o)=<f(\bar y),b_{d^o}(\bar x, \bar y ),\mathrm{nor}_{(1,2)}(\bar x, \bar y ),\mathrm{nor}_{(1,3)}(\bar x, \bar y ),\mathrm{nor}_{(2,3)}(\bar x, \bar y ),w(\bar y,u)>.\]
This means that
\[\Psi_{d^o}({\Sigma})=\mathbf V(I(d^o))\]
is the affine algebraic set defined by $I(d^o)$, and
\[{\cO}_{d^o}({\Sigma})=\mathbf V(\tilde I(d^o))\]
where $\tilde I(d^o)=I(d^o)\cap{{\C}}[\bar x]$ is the $(\bar y,u)$-elimination ideal
of $I(d^o)$ (see \cite{Cox1997}, Closure Theorem, p. 122).
In particular, this means that the offset can be computed by elimination techniques, such as Gr\"{o}bner bases, resultants, characteristic sets, etc. \end{enumerate} \end{Remark} \noindent Next, we will refer to some properties of the classical offset construction, that we collect here for the reader's convenience. We start with a very important geometric property regarding the normal vector of the classical offset construction.
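To illustrate the elimination approach of the preceding remark, the following is a minimal computational sketch (using Python's {\tt sympy}, an illustrative choice not used elsewhere in this work) for the plane-curve analogue of System \ref{sys:Ch1:OffsetSystemFixed_d}: the classical offset of the unit circle $f(\bar y)=y_1^2+y_2^2-1$ at the fixed distance $d^o=3$.

```python
# Sketch (illustrative, sympy assumed): classical offset of the unit circle
# f = y1^2 + y2^2 - 1 at the fixed distance d^o = 3, by Groebner elimination.
from sympy import symbols, groebner

x1, x2, y1, y2, u = symbols('x1 x2 y1 y2 u')
do = 3  # the fixed distance d^o

f = y1**2 + y2**2 - 1                          # f(ybar) = 0
nor = 2*y1*(x2 - y2) - 2*y2*(x1 - y1)          # nor_{(1,2)}(xbar, ybar) = 0
b = (x1 - y1)**2 + (x2 - y2)**2 - do**2        # b_{d^o}(xbar, ybar) = 0
w = u*(4*y1**2 + 4*y2**2) - 1                  # w(ybar, u) = 0

# A lex Groebner basis with y1 > y2 > u > x1 > x2 realizes the
# (ybar, u)-elimination of the ideal I(d^o):
G = groebner([f, nor, b, w], y1, y2, u, x1, x2, order='lex')
elim = [g for g in G.exprs if not g.has(y1, y2, u)]
print(elim)  # generators of the elimination ideal, i.e. of the offset
```

Up to a constant, the printed generator is $(x_1^2+x_2^2-4)(x_1^2+x_2^2-16)$: the offset of the unit circle at distance $3$ is the union of the circles of radii $2$ and $4$.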
\begin{Proposition}[\sf Fundamental Property of the Classical Offsets]\label{prop:ch1:FundamentalPropertyOffsets} Let $\bar y^o\in\cV_o$, and let $\bar x^o\in\cO_{d^o}(\cV)$ be a point associated to $\bar y^o$. Then the normal line to $\cV$ at $\bar y^o$ is also normal to $\cO_{d^o}(\cV)$ at $\bar x^o$. \end{Proposition} \begin{proof} See \cite{Sendra1999}, Theorem 16. \end{proof}
\noindent In order to prove that we can avoid degenerate situations, we will sometimes need information about the dimension of certain sets of points. The basic tools for doing this will be the incidence diagrams, analogous to \ref{sys:ch1:IncidenceDiagramOffsetFixedDistance} (page \pageref{sys:ch1:IncidenceDiagramOffsetFixedDistance}), and well-known results about the dimension of the fiber of a regular map. For ease of reference, we include here a statement of one such result, in a form that meets our needs. The proof can be found in \cite{Harris1992}.
\begin{Lemma}\label{lem:ch1:FiberDimension} Let $A$ be an affine algebraic set, and let $f:A\mapsto\C^p$ be a regular map. Let us denote $B=f(A)^*$. For $\bar a^o\in A$, let $\mu(\bar a^o)={\rm dim}(f^{-1}(f(\bar a^o)))$. Then, if $A_o\subset A$ is any irreducible component, $B_o=f(A_o)$ its image, and $\mu^o$ is the minimum value of $\mu(\bar a^o)$ for $\bar a^o\in A_o$, we have \[{\rm dim}(A_o)={\rm dim}(B_o)+\mu^o.\] In particular, if there exists $\bar a^o\in A_o$ for which ${\rm dim}(f^{-1}(f(\bar a^o)))=0$, then ${\rm dim}(A_o)={\rm dim}(B_o)$. \end{Lemma}
\noindent We next analyze the number and dimension of the irreducible components of the offset.
\begin{Proposition}\label{prop:ch1:OffsetHasAtMostTwoComponents} $\cO_{d^o}(\cV)$ has at most two irreducible components. \end{Proposition} \begin{proof} See \cite{Sendra1999}, Theorem 1. \end{proof}
\begin{Proposition}\label{prop:ch1:ComponentsPsiHaveSameDimensionAsV} The irreducible components of $\Psi_{d^o}(\cV)$ have the same dimension as $\cV$. \end{Proposition} \begin{proof} In \cite{Sendra1999}, Lemma 1, this is proved using local parametrizations. However, since, as we have seen above, $\pi_2$ is a $2:1$ map, this can also be considered a straightforward application of the preceding Lemma \ref{lem:ch1:FiberDimension}. \end{proof}
\begin{Remark} This implies immediately that $\cO_{d^o}(\cV)$ has at most two irreducible components, whose dimension is less than or equal to ${\rm dim}(\cV)$. \end{Remark}
\noindent To present the next two results, we recall some of the terminology introduced in \cite{Sendra1999}:
\begin{Definition}\label{def:ch1:DegenerationAndSimpleAndSpecialComponents} \begin{enumerate}
\item[]
\item The offset $\cO_{d^o}(\cV)$ is called {\sf degenerated} if at least one of its components is not a hypersurface.
\item A component $\cM\subset\cO_{d^o}(\cV)$ is said to be a {\sf simple component} if there exists a non-empty Zariski dense subset $\cM_1\subset\cM$ such that every point of $\cM_1$ is associated to exactly one point of $\cV$. Otherwise, $\cM$ is called a {\sf special component} of the offset. Furthermore, we say that $\cO_{d^o}(\cV)$ is {\sf simple} if all its components are simple, and {\sf special} if it has at least one special component (in this case, it has precisely one special component, see Proposition \ref{prop:ch1:OffsetPropertiesRegardingSimpleAndSpecialComponents} below). \end{enumerate} \end{Definition} \noindent Note that if the offset is degenerated then, by Lemma \ref{lem:ch1:FiberDimension}, the map $\pi_1$ must have a fiber of positive dimension at some point of $\Psi_{d^o}(\cV)$.
\noindent The following two results tell us that degeneration and special components are very infrequent phenomena.
\begin{Proposition}\label{prop:ch1:OffsetDoesNotDegenerateForAlmostAllDistances} There is a finite set $\Delta_0\subset\C$ such that if $d^o\not\in\Delta_0$, then $\cO_{d^o}(\cV)$ is not degenerated. \end{Proposition} \begin{proof} See \cite{Sendra1999}, Theorem 2. \end{proof} \quad\\
\begin{Proposition}\label{prop:ch1:OffsetPropertiesRegardingSimpleAndSpecialComponents} \begin{enumerate}
\item[]
\item Let $\cM$ be an irreducible and non-degenerated component of $\cO_{d^o}(\cV)$. Then $\cM$ is special if and only if $\cO_{d^o}(\cM)=\cV$.
\item $\cO_{d^o}(\cV)$ has at least one simple component.
\item If $\cO_{d^o}(\cV)$ is irreducible, then it is simple.
\item There is a finite set $\Delta_1\subset\C$ such that, if $d^o\not\in\Delta_1$ then $\cO_{d^o}(\cV)$ is simple, and the irreducible components of $\cO_{d^o}(\cV)$ are not contained in ${\rm Iso}(\cV)$. \end{enumerate} \end{Proposition} \begin{proof} See \cite{Sendra1999}, Theorems 7, 8 and Corollary 6. \end{proof} \noindent The next result shows that --as expected, being a metric construction-- the offset construction is invariant under rigid motions of the affine space.
\begin{Proposition}\label{prop:ch1:OffsetInvarianceRigidMovements} Let $\cT$ be a rigid motion of the affine space $\C^3$. Then \[\cT(\cO_{d^o}(\cV))=\cO_{d^o}(\cT(\cV))\] \end{Proposition} \begin{proof} See \cite{Sendra1999}, Lemma 2.5 in Chapter 2. \end{proof}
\begin{center} \subsubsection*{Formal definition of the generic offset} \end{center}
The concept of generic offset to an algebraic surface was formally introduced in our previous paper \cite{SSS09}. The motivation for this concept is the following: As the distance value $d^o$ varies, different offset varieties are obtained. The idea is to have a global expression of the offset for all (or almost all) distance values. This motivates the concept of {\sf generic polynomial of the offset to ${\Sigma}$}. This is a polynomial, depending on the distance variable $d$, such that for every (or almost every, see the examples below) non-zero value $d^o$, the polynomial specializes to the defining polynomial of the offset at that particular distance. Let us see a couple of examples that give some insight into the situation.
\begin{Example} \begin{itemize} \item[] \item[(a)] Using this informal definition of generic offset polynomial, and using Gr\"{o}bner basis techniques, one can see that if ${\mathcal C}$ is the parabola of equation $y_2-y_1^2=0$, the generic polynomial of its offset is:\\
\noindent
$g(d,x_1,x_2)=-48\,d^{2}x_1^{4}-32\,d^{2}x_1^{2}x_2^{2}+48\,d^{4}x_1^{2}+16\,x_1^{6}+16\,x_2^{2}x_1^{4}+16\,d^{4}x_2^{2}-16\,d^{6}-40\,x_2\,x_1^{4}-32\,x_1^{2}x_2^{3}+8\,d^{2}x_2\,x_1^{2}-32\,d^{2}x_2^{3}+32\,d^{4}x_2+x_1^{4}+32\,x_1^{2}x_2^{2}+16\,x_2^{4}-20\,d^{2}x_1^{2}-8\,d^{2}x_2^{2}-8\,d^{4}-2\,x_2\,x_1^{2}-8\,x_2^{3}+8\,x_2\,d^{2}+x_2^{2}-d^{2}.$\\
\noindent In addition, using Gr\"{o}bner basis techniques again, one may check that for every non-zero distance the generic offset polynomial specializes properly (see Example \ref{exm:ch1:StandardParabolaGenericOffset} on page \pageref{exm:ch1:StandardParabolaGenericOffset} below for a detailed description of this example and of the preceding claims).
\item[(b)] On the other hand, the generic offset polynomial of the circle of equation $y_1^2+y_2^2-1=0$ factors as the product of two circles of radius $1+d$ and $1-d$; that is:
\[g(d,x_1,x_2)= \left( x_1^2+x_2^2-(1+d)^2\right) \left( x_1^2+x_2^2-(1-d)^2\right).
\]
Now, observe that for $d^o=1$, this generic polynomial gives
\[g(1,x_1,x_2)= \left( x_1^2+x_2^2-2^2\right) \left( x_1^2+x_2^2\right)=
\left( x_1^2+x_2^2-2^2\right)\left( x_1+i x_2\right)\left( x_1-i x_2\right)
\]
which describes the union of a circle of radius $2$, and two complex lines. This is not a correct representation of the offset at distance $1$ to ${\mathcal C}$, which consists of the union of the circle of radius $2$ and a point (the origin). In fact, using Gr\"obner basis techniques, one has that the elimination ideal $\tilde I(1)$ (see Remark \ref{rem:ch1:CommentariesToFormalDefinitionClassicalOffset}(2), page \pageref{rem:ch1:CommentariesToFormalDefinitionClassicalOffset}) is: \[\tilde I(1)=<x_2(x_1^2+x_2^2-4), x_1(x_1^2+x_2^2-4)>.\] \noindent Thus, in this example we see that the generic offset polynomial does not specialize properly for $d^o=1$. Nevertheless, for every other value of $d^o$ the specialization is correct. \end{itemize} \end{Example} \begin{center} \rule{2cm}{0.5pt} \quad\\ \end{center} \noindent After these examples, we describe the formal definition of generic offset and generic offset polynomial, adapted to the case of surfaces in three dimensional space. We start by considering the following system of equations: \begin{equation}\label{sys:Ch1:GenericOffsetSystem} \left.\begin{array}{lr} &f(\bar y)=0\\ \mathrm{nor}_{(i,j)}(\bar x,\bar y): &f_i(\bar y)(x_j-y_j)-f_j(\bar y)(x_i-y_i)=0\\ (\mbox{for }i,j=1,\ldots,3; i<j)&\\ b(d,\bar x, \bar y ):& (x_1-y_1)^2+(x_2-y_2)^2+(x_3-y_3)^2-d^2=0\\
w(\bar y,u):& u\cdot(\|\nabla f(\bar y)\|^2)-1=0
\end{array}\right\} \equiv\mathfrak{S}_1(d) \end{equation} The above system will be called the {\sf Generic Offset System}. In this system, we consider $d$ as a variable, so that $b\in{\C}[d, \bar x, \bar y]$. A solution of this system is thus a point of the form $(d^o, \bar x^o, \bar y^o, u^o)\in{\C}^8$.
\noindent Let $\Psi({\Sigma})\subset{\C}\times{\C}^3\times{\C}^3\times{\C}$ be the set of solutions of $\mathfrak{S}_1(d)$, and consider the following:\\ \begin{center}
\begin{equation}\label{sys:ch1:IncidenceDiagramGenericOffset} \begin{array}{c} \mbox{\sf Generic Offset Incidence Diagram}\\[3mm] \xymatrix{ &\Psi({\Sigma})\subset {\C}\times{\C}^3\times{\C}^3\times{\C} \ar[ld]^{\pi_1}\ar[rd]_{\pi_2}&\\ {\mathcal A}({\Sigma})\subset {\C}^{4}&&{\C}\times{\Sigma}\subset {\C}^{4} } \end{array} \end{equation} \end{center} \noindent where \[ \begin{array}{ccc} \begin{cases} \pi_1:{\C}^{8}\mapsto{\C}^{4}\\ \pi_1(d, \bar x, \bar y, u)=(d, \bar x) \end{cases} & \mbox{and} & \begin{cases} \pi_2:{\C}^{8}\mapsto{\C}^{4}\\ \pi_2(d, \bar x, \bar y, u)=(d, \bar y) \end{cases} \end{array} \] and ${\mathcal A}({\Sigma})=\pi_1(\Psi({{\Sigma}}))$.
\noindent Then one has the following definition (recall that the asterisk denotes the Zariski closure of a set):
\begin{Definition}\label{def:ch1:GenericOffset} The {\sf generic offset} to ${\Sigma}$ is \[{\cO}_{d}({\Sigma})={\mathcal A}({\Sigma})^*=\pi_1(\Psi({{\Sigma}}))^*\subset{\C}^{4}\] \end{Definition}
\begin{Remark}\label{rem:ch1:EliminationIdealGenericOffset} \begin{enumerate}
\item[]
\item Let
\[I(d)=<f(\bar y),b(d, \bar x, \bar y),\mathrm{nor}_{(1,2)}(\bar x, \bar y ),\mathrm{nor}_{(1,3)}(\bar x, \bar y ),\mathrm{nor}_{(2,3)}(\bar x, \bar y ),w(\bar y,u)>\]
be the ideal in ${\C}[d, \bar x, \bar y, u]$ generated by the polynomials in System \ref{sys:Ch1:GenericOffsetSystem}. Note that the above definition implies that
\[{\cO}_{d}({\Sigma})=\mathbf V(\tilde I(d))\]
where $\tilde I(d)=I(d)\cap{\C}[d, \bar x]$ is the $(\bar y,u)$-elimination ideal of $I(d)$. \item The Closure Theorem from Elimination Theory (see e.g. Theorem 3 in page 122 of \cite{Cox1997}) implies that the dimension of the set \[\cO_d({\Sigma})\setminus\pi_1(\Psi_1({\Sigma}))\] is smaller than the dimension of $\cO_d({\Sigma})$. This is the set of points of the generic offset associated with singular or normal-isotropic points of ${\Sigma}$. \end{enumerate}
\end{Remark}
\begin{center} \subsubsection*{Basic properties of the generic offset}\label{subsec:ch1:BasicPropertiesGenericOffset} \end{center}
\noindent In the following Proposition we will see that the properties of the offset at a fixed distance, regarding its dimension and number of components (see Lemma 1, Theorem 1 and Theorem 2 in \cite{Sendra1999}), are reflected in the generic offset. In particular, this Proposition shows that the generic offset is a hypersurface, and thus guarantees the existence of the generic polynomial (see below, Definition \ref{def:ch1:GenericOffsetEquation}).
\begin{Proposition}\label{prop:ch1:GenericOffsetHypersurface} \begin{enumerate}
\item[]
\item $\cO_d({\Sigma})$ has at most two components.
\item Each component of $\cO_d({\Sigma})$ is a hypersurface in ${\C}^{4}$. \end{enumerate} \end{Proposition} \begin{proof} (Adapted from Lemma 1, Theorem 1 and Theorem 2 in \cite{Sendra1999}). We begin by showing that if $K$ is a component of $\Psi_1({\Sigma})$, then ${\rm dim}(K)=3$. Thus \begin{equation}\label{eq:ch1:DimensionPsiVEquals_n} {\rm dim}(\Psi_1({\Sigma}))=3 \end{equation} Let $\psi^o=(d^o, \bar x^o, \bar y^o, u^o)\in K$. Then, $\bar y^o\in{\Sigma}$ is a regular point of ${\Sigma}$. Let $\cP(\bar t)$, with $\bar t=(t_1,t_2)$, be a local parametrization of ${\Sigma}$ at $\bar y^o$, with $\cP(\bar t^o)=\bar y^o$. Then, it holds that one of the local parametrizations defined by: \[
\cP^\pm(d, \bar t)=\left(d,\, \cP(\bar t)\pm d\dfrac{\nabla f(\cP(\bar t))}{\|\nabla f(\cP(\bar t))\|},\, \cP(\bar t),\,\dfrac{1}{\|\nabla f(\cP(\bar t))\|^2}\right) \] parametrizes $\Psi_1({\Sigma})$ locally at $\psi^o$ (we choose the sign so that $\cP^\pm(d^o,\bar t^o)=\psi^o$). Since $(d,\cP(\bar t))$ parametrizes ${\C}\times{\Sigma}$, we get that $(d, \bar t)$ are algebraically independent, and so ${\rm dim}(K)=3$.
\noindent Now we can prove the first statement of the proposition. Since the number of components of $\cO_{d}({\Sigma})$ is at most the number of components of $\Psi_1({\Sigma})$, one only has to prove that $\Psi_1({\Sigma})$ has at most two components. Let us suppose that $\Gamma_{1}, \Gamma_{2}$ and $\Gamma_{3}$ are three different components of $\Psi_1({\Sigma})$ and let $Z=\pi_{2}(\Gamma_{1})\cap \pi_{2}(\Gamma_{2})\cap\pi_{2}(\Gamma_{3})$, where $\pi_{2}$ is the projection of the incidence diagram \ref{sys:ch1:IncidenceDiagramGenericOffset}. Then, it holds that ${\rm dim}(Z)=3$. Observe that if ${\rm dim}(Z) <3$ then ${\rm dim}(({\C}\times{\Sigma})\setminus Z)={\rm dim}(\bigcup^{3}_{i=1}(({\C}\times{\Sigma})\setminus \pi_{2}(\Gamma_{i})))=3$. Hence, at least one of the sets $({\C}\times{\Sigma})\setminus \pi_{2}(\Gamma_{i})$ is of dimension $3$, which is impossible since the $\pi_{2}(\Gamma_{i})$ are constructible sets of dimension $3$. On the other hand, it holds that ${\rm dim}(Z\cap \pi_{2}(\Gamma_{i}\cap \Gamma_{j}))< 3$ for $i<j$. Then \[Z\setminus \bigcup_{i \neq j}(Z\cap \pi_{2}(\Gamma_{i}\cap \Gamma_{j})) \neq \emptyset. \] Now, take $\bar p=(d^o, \bar y^o)\in Z\setminus\bigcup_{i \neq j}(Z \cap \pi_{2}(\Gamma_{i}\cap \Gamma_{j}))$; then $\pi_{2}^{-1}(\bar p)$ contains three different points $\bar q_{1}, \bar q_{2}, \bar q_{3}$ (one in each $\Gamma_{i}$), which is impossible since the mapping $\pi_{2}$ is $(2:1)$ on $\pi_2(\Psi_1({\Sigma}))$.
\noindent Finally we can prove statement 2 in the proposition. We analyze the dimension of the tangent space to a component of the generic offset. Let $(d^o, \bar y^o)\in\pi_2(\Psi_1({\Sigma}))$, such that the two points $(d^o,\bar x^o_1),(d^o,\bar x^o_2)\in\cO_d({\Sigma})$ generated by $(d^o, \bar y^o)$ satisfy that the dimension of their tangent spaces is the dimension of the corresponding component of
$\cO_d({\Sigma})$. Let $u^o=\dfrac{1}{\|\nabla f(\bar y^o)\|^2}$, and let $\cP(\bar t)$ be a local parametrization of ${\Sigma}$ at $\bar y^o$. Then, it holds that: \[ {\tilde\cP}^+(d, \bar t)=
\left(d,\, \cP(\bar t)+d\dfrac{\nabla f(\cP(\bar t))}{\|\nabla f(\cP(\bar t))\|}\right)
\quad\mbox{ and }\quad {\tilde\cP}^-(d, \bar t)=
\left(d,\, \cP(\bar t)-d\dfrac{\nabla f(\cP(\bar t))}{\|\nabla f(\cP(\bar t))\|}\right) \] parametrize $\cO_d({\Sigma})$ locally at $(d^o,\bar x^o_1)$ and $(d^o,\bar x^o_2)$, respectively. In this situation, consider the following map: \[ \begin{array}{cccccccc} \psi^{+}: & {{\C}}^{3} & \longrightarrow &\Psi_1({\Sigma}) & \stackrel{\varphi^{+}}{\longrightarrow}
& {\mathcal A}({\Sigma}) &\stackrel{i}{\hookrightarrow} & {{\C}}^{4} \\
& (d,\bar{t}) & \longrightarrow & \cP^{+}(d,\bar{t}\,) & \longrightarrow &\tilde\cP^{+}(d, \bar t)& \longrightarrow & \tilde\cP^{+}(d, \bar t). \end{array} \] Similarly, we define $\psi^{-}$ and $\varphi^{-}$. Now consider the following homomorphism, defined by the differential $d\varphi^{+}$ (similarly for $d\varphi^{-}$), between the tangent space to $\Psi_1({\Sigma})_{1}$ at $(d^o,\bar x^o_{1},\bar y^o,u^o)$ and the tangent space ${\cT}_{(d^o,\bar x^o_1)}$ to ${\mathcal A}({\Sigma})_{1}$ at $(d^o,\bar x^o_1)$, where $\Psi_1({\Sigma})_{1}$ and ${\mathcal A}({\Sigma})_{1}$ denote the components of $\Psi_1({\Sigma})$ and ${\mathcal A}({\Sigma})$ containing the points $(d^o,\bar x^o_{1},\bar y^o,u^o)$ and $(d^o,\bar x^o_{1})$, respectively. Then one has that \[{\rm dim}({\mathcal A}({\Sigma})_{1})\geq {\rm dim}({\cT}_{(d^o,\bar x^o_1)})\geq {\rm dim}(\operatorname{Im}(d\varphi^{+})) =\operatorname{rank}({\cal J}_{\varphi^{+}}),\] where ${\cal J}_{\varphi^{+}}$ denotes the Jacobian matrix of $\varphi^{+}$. Furthermore, by Equation \ref{eq:ch1:DimensionPsiVEquals_n} at the beginning of this proof, one has that \[ 3={\rm dim}(\Psi_1({\Sigma})_{1})\geq {\rm dim}({\mathcal A}({\Sigma})_{1})\geq \operatorname{rank}({\cal J}_{\varphi^{+}}).\] On the other hand, if we take any point of the form $(0,\bar t^o)\in{{\C}}^{3}$, that is, with $d^o=0$, we must get $\operatorname{rank}({\cal J}_{\varphi^{+}})=3$ at that point; otherwise, one would conclude that the rank of the Jacobian of ${\cal P}(\,\bar{t}\,)$ is smaller than $2$, which is impossible since ${\Sigma}$ is a surface. \end{proof}
\noindent As a first consequence of this Proposition, ${\cO}_{d}({\Sigma})$ is defined by a polynomial $g(d, \bar x)\in{\C}[d, \bar x]$ (see \cite{Shafarevich1994}, p.69, Theorem 3). Thus, we arrive at the following definition:
\begin{Definition}\label{def:ch1:GenericOffsetEquation} The {\sf generic offset polynomial} is the defining polynomial of the hypersurface ${\cO}_{d}({\Sigma})$. In the sequel, {\sf we denote} by $g(d, \bar x)$ the generic offset equation. \end{Definition}
\noindent The first property of the generic offset polynomial that we study regards its factorization:
\begin{Lemma}\label{lem:ch1:GenericOffsetIsPrimitiveIn_x} The generic offset polynomial is primitive w.r.t. $\bar x$. \end{Lemma} \begin{proof} Suppose, on the contrary, that $g(d,\bar x)$ has a non-constant factor in ${\C}[d]$. That is, \[g(d,\bar x)=A(d)\tilde g(d,\bar x).\] Let $d^o\neq 0$ be any root of $A(d)$. Then the hypersurface $\cZ$ in ${\C}\times{\C}^3$ defined by $d=d^o$ is contained in $\cO_d({\Sigma})$. Taking Remark \ref{rem:ch1:EliminationIdealGenericOffset}(2) (page \pageref{rem:ch1:EliminationIdealGenericOffset}) into account, one has that there is an open non-empty subset of $\cZ$ contained in $\pi_1(\Psi_1({\Sigma}))$. This in turn implies that there is an open subset $\tilde\cZ$ of ${\C}^3$ such that if $\bar x^o\in\tilde\cZ$, then $\bar x^o\in\cO_{d^o}(\Sigma)$ (the classical offset at distance $d^o$). This is a contradiction, since we know that $\cO_{d^o}(\Sigma)$ has dimension less than or equal to $2$. Thus, we are left with the case when $A(d)$ is a power of $d$. The argument must be different in this case, since the classical offset is only defined for $d^o\in{\C}^\times$. However, the reasoning is similar: we conclude that there is an open non-empty subset $\tilde\cZ_0$ of ${\C}^3$ such that if $\bar x^o\in\tilde\cZ_0$, then the system (``classical offset system for distance $0$'') \[ \left.\begin{array}{lr} f(\bar y)=0\\ f_i(\bar y)(x^o_j-y_j)-f_j(\bar y)(x^o_i-y_i)=0\\ (\mbox{for }i,j=1,\ldots,3; i<j)&\\ (x^o_1-y_1)^2+(x^o_2-y_2)^2+(x^o_3-y_3)^2=0\\
u\cdot(\|\nabla f(\bar y)\|^2)-1=0 \end{array}\right\} \] has solutions. Now, if $(\bar y^o,u^o)$ is a solution of this system then \begin{enumerate}
\item $\nabla f (\bar y^o)$ is not isotropic,
\item $\bar y^o-\bar x^o$ is isotropic,
\item and $\nabla f (\bar y^o)$ is parallel to $\bar y^o-\bar x^o$. \end{enumerate} Thus one has that $\bar y^o-\bar x^o=0$. But since $\bar x^o$ runs through an open subset of ${\C}^3$, this contradicts the fact that ${\Sigma}$ is a surface. \end{proof}
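As a quick illustration of the Lemma, primitivity can be checked directly on the generic offset polynomial of the circle from Example (b) above; the following sketch (in Python/{\tt sympy}, used here purely for illustration) computes the content of $g$ w.r.t. $\bar x$.

```python
# Sketch (illustrative, sympy assumed): the generic offset polynomial of the
# unit circle (Example (b)) is primitive w.r.t. xbar, i.e. the gcd in C[d] of
# its xbar-coefficients is constant.
from functools import reduce
from sympy import symbols, expand, gcd, Poly

d, x1, x2 = symbols('d x1 x2')
g = expand((x1**2 + x2**2 - (1 + d)**2)*(x1**2 + x2**2 - (1 - d)**2))

coeffs = Poly(g, x1, x2).coeffs()   # the coefficients are polynomials in d
content = reduce(gcd, coeffs)       # the content of g w.r.t. xbar
print(content)                      # a constant, so g is primitive
```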
\begin{Remark}\label{rem:ch1:GenericOffsetEqSqfreeAndHasAtMostTwoFactors} \begin{enumerate}
\item[]
\item Observe that the polynomial $g$ may be reducible (recall the example of the circle) but by construction it is always square-free. Moreover, by Proposition \ref{prop:ch1:GenericOffsetHypersurface} (page \pageref{prop:ch1:GenericOffsetHypersurface}) and Lemma \ref{lem:ch1:GenericOffsetIsPrimitiveIn_x}, $g$ is either irreducible or factors into two irreducible factors, neither of which depends only on $d$.
\item We will also call $g(d,\bar x)=0$ the {\sf generic offset equation} of ${\Sigma}$. \end{enumerate} \end{Remark} \noindent The following theorem gives the fundamental property of the generic offset.
\begin{Theorem}\label{thm:Ch1:GenericOffsetSpecialization} For all but finitely many exceptions, the generic offset polynomial specializes properly. That is, there exists a finite (possibly empty) set $\Delta_2\subset{\C}$ such that if $d^o\not\in\Delta_2$, then \[g(d^o, \bar x)=0\] is the equation of ${\cO}_{d^o}({\Sigma})$. \end{Theorem} \begin{proof}
Let $G(d)$ be a reduced Gr\"{o}bner basis of $I(d)$ w.r.t. an elimination ordering that eliminates $(\bar y,u)$. Then, up to multiplication by a non-zero constant, $G(d)\cap{\C}[d, \bar x]$ is a Gr\"{o}bner basis of $\tilde I(d)$. Proposition \ref{prop:ch1:GenericOffsetHypersurface} above shows that $\tilde G(d)=G(d)\cap{\C}[d, \bar x]=<\nu(d)g(d,\bar x)>$, where $\nu(d)$ is a non-zero polynomial, depending only on $d$ (see the Remark preceding this proof). But then (see \cite{Cox1997}, exercise 7, page 283) there is a finite (possibly empty) set $\Delta_2^1\subset{\C}$ such that for $d^o\not\in\Delta_2^1$, $G(d)$ specializes well to a Gr\"{o}bner basis of $I(d^o)$ (defined in Remark \ref{rem:ch1:CommentariesToFormalDefinitionClassicalOffset}, page \pageref{rem:ch1:CommentariesToFormalDefinitionClassicalOffset}). It follows that, since $\tilde I(d^o)=I(d^o)\cap{\C}[\bar x]$, then $\tilde G(d^o)=\{\nu(d^o)g(d^o, \bar x)\}$ is a Gr\"{o}bner basis of $\tilde I(d^o)$. In particular, if $\Delta_2^2$ is the finite set of zeros of $\nu(d)$, then for $d^o\not\in\Delta_2=\Delta_2^1\cup\Delta_2^2$, and $d^o\neq 0$, one has that $g(d^o, \bar x)$ is the equation for $\cO_{d^o}(\Sigma)$. \end{proof} \noindent For future reference, we collect in the following corollary all the information about the --finite-- set of {\em bad} distances that appear in the offsetting construction.
\begin{Corollary}\label{cor:ch1:BadDistancesFiniteSet} There is a finite set $\Delta\subset{\C}^{\times}$ such that for $d^o\not\in\Delta$, the following hold: \begin{enumerate}
\item[(1)] (non degeneracy): ${\cO}_{d^o}({\Sigma})$ is not degenerated.
\item[(2)] (simplicity): ${\cO}_{d^o}({\Sigma})$ is simple.
\item[(3)] (good specialization): if $g(d, \bar x)=0$ is the generic offset polynomial, $g(d^o, \bar x)=0$ is the equation of ${\cO}_{d^o}({\Sigma})$.
\item[(4)] (degree invariance):
\[{\rm deg}_{\bar x}(\cO_{d}({\Sigma}))={\rm deg}_{\bar x}(\cO_{d^o}(\Sigma)),\quad {\rm deg}_{x_i}(\cO_{d}({\Sigma}))={\rm deg}_{x_i}(\cO_{d^o}(\Sigma))\mbox{ for }i=1,\ldots,3.\] \end{enumerate} \end{Corollary} \begin{proof} Take $\Delta^1=\Delta_0\cup\Delta_1\cup\Delta_2$, with $\Delta_0$ as in Proposition \ref{prop:ch1:OffsetDoesNotDegenerateForAlmostAllDistances}, $\Delta_1$ as in Proposition \ref{prop:ch1:OffsetPropertiesRegardingSimpleAndSpecialComponents}(4) and $\Delta_2$ as in Theorem \ref{thm:Ch1:GenericOffsetSpecialization} above. Furthermore, let $p(d)\bar x^{\bar\mu}$ be a term of $g(d,\bar x)$ of maximal degree w.r.t. $\bar x$. That is, $\bar\mu=(\mu_1,\mu_2,\mu_3)\in\N^3$, with $\sum \mu_i={\rm deg}_{\bar x}(g)$, where $p(d)\in\C[d]$ is a non-zero polynomial. Then take:
\[\Delta^{\bar x}=\{d^o\in\C\,|\,p(d^o)=0\},\] and similarly, for $i=1,2,3$ construct $\Delta^{\bar x_i}$, by considering a term of $g(d,\bar x)$ of maximal degree w.r.t. $x_i$. Finally, taking \[\Delta=\Delta^1\cup\Delta^{\bar x}\cup\Delta^{\bar x_1}\cup\Delta^{\bar x_2}\cup\Delta^{\bar x_3},\] our claim holds. \end{proof} \noindent Let us see a first example of a generic offset polynomial.
\begin{Example}\label{exm:ch1:StandardParabolaGenericOffset} For the parabola ${\mathcal C}$ with defining polynomial $f(y_1,y_2)=y_2-y_1^2$, the generic offset system turns into: \[ \left.\begin{array}{lr} &f(\bar y)=y_2-y_1^2\\ \mathrm{nor}_{(1,2)}(\bar x,\bar y): &-2y_1(x_2-y_2)-(x_1-y_1)=0\\ b(d, \bar x, \bar y ):& (x_1-y_1)^2+(x_2-y_2)^2-d^2=0\\
w(\bar y,u):& u\cdot(4\,y_1^2+1)-1=0 \end{array}\right\} \] Computing a Gr\"{o}bner elimination basis of $I(d)=<f,\mathrm{nor}_{(1,2)},b,w>$, we obtain (with the notation in the proof of Theorem \ref{thm:Ch1:GenericOffsetSpecialization}): \[G(d)=\{g(d, \bar x),\chi_1(d, \bar x),\ldots,\chi_8(d, \bar x)\}\] where:\\ $g(d, \bar x)= 16\,{x_{{1}}}^{6}+16\,{x_{{1}}}^{4}{x_{{2}}}^{2}-40\,{x_{{1}}}^{4}x_{{2}}-32\,{x_{{1}}}^{2}{x_{{2}}}^{3}+ \left( -48\,{d}^{2}+1 \right) {x_{{1}}}^{4}+ \left( -32\,{d}^{2}+32 \right) {x_{{1}}}^{2}{x_{{2}}}^{2}+ 16\,{x_{{2}}}^{4}+ \left( 8\,{d}^{2}-2 \right) {x_{{1}}}^{2}x_{{2}}+
\left( -32\,{d}^{2}-8 \right) {x_{{2}}}^{3}+ \left( 48\,{d}^{4}-20\,{ d}^{2} \right) {x_{{1}}}^{2}+ \left( 16\,{d}^{4}-8\,{d}^{2}+1 \right) {x_{{2}}}^{2}+ \left( 32\,{d}^{4}+8\,{d}^{2} \right) x_{{2}}-16\,{d}^{ 6}-8\,{d}^{4}-{d}^{2} $\\ and\\ $\chi_1(d, \bar x)=12\,{d}^{2}u{x_{{1}}}^{2}+16\,{d}^{2}u{x_{{2}}}^{2}-4\,{d}^{2}ux_{{2}} + \left( -12\,{d}^{4}+{d}^{2} \right) u-4\,{x_{{1}}}^{4}+8\,{x_{{1}}}^ {2}x_{{2}}+8\,{d}^{2}{x_{{1}}}^{2}-4\,{x_{{2}}}^{2}+4\,{d}^{2}x_{{2}}- 4\,{d}^{4}+3\,{d}^{2} $\\ $\chi_2(d, \bar x)=64\,{d}^{2}u{x_{{2}}}^{3}-48\,{d}^{2}u{x_{{1}}}^{2}-16\,{d}^{2}u{x_{{2 }}}^{2}+28\,{d}^{2}ux_{{2}}+ \left( -60\,{d}^{4}-3\,{d}^{2} \right) u- 64\,{x_{{1}}}^{4}x_{{2}}+128\,{x_{{1}}}^{2}{x_{{2}}}^{2}+128\,{d}^{2}{ x_{{1}}}^{2}x_{{2}}-64\,{x_{{2}}}^{3}+36\,{d}^{2}{x_{{1}}}^{2}+112\,{d }^{2}{x_{{2}}}^{2}+ \left( -64\,{d}^{4}+36\,{d}^{2} \right) x_{{2}}-36 \,{d}^{4}+3\,{d}^{2} $\\ $\chi_3(d, \bar x)=12\,y_{{2}}-16\,u{x_{{1}}}^{2}-16\,u{x_{{2}}}^{2}+8\,ux_{{2}}+ \left( 16\,{d}^{2}-1 \right) u-8\,x_{{2}}+1 $\\ $\chi_4(d, \bar x)=12\,y_{{1}}x_{{2}}+ \left( -12\,{d}^{2}-3 \right) y_{{1}}-8\,y_{{2}}x_ {{1}}x_{{2}}-14\,y_{{2}}x_{{1}}+8\,{x_{{1}}}^{3}+8\,x_{{1}}{x_{{2}}}^{ 2}-6\,x_{{1}}x_{{2}}+ \left( -8\,{d}^{2}+3 \right) x_{{1}} $\\ $\chi_5(d, \bar x)=3\,y_{{1}}x_{{1}}+2\,y_{{2}}x_{{2}}-y_{{2}}-2\,{x_{{1}}}^{2}-2\,{x_{{2 }}}^{2}+2\,{d}^{2} $\\ $\chi_6(d, \bar x)=12\,{d}^{2}{u}^{2}-4\,u{x_{{1}}}^{2}-16\,u{x_{{2}}}^{2}-4\,ux_{{2}}+
\left( 4\,{d}^{2}-1 \right) u+4\,x_{{2}}+1 $\\ $\chi_7(d, \bar x)=12\,{d}^{2}y_{{1}}u+8\,y_{{2}}ux_{{1}}x_{{2}}+14\,y_{{2}}ux_{{1}}-3\,y _{{1}}-8\,u{x_{{1}}}^{3}-8\,ux_{{1}}{x_{{2}}}^{2}+6\,ux_{{1}}x_{{2}}+
\left( 8\,{d}^{2}+3 \right) ux_{{1}} $\\ $\chi_8(d, \bar x)={y_{{1}}}^{2}+{y_{{2}}}^{2}-2\,y_{{1}}x_{{1}}-2\,y_{{2}}x_{{2}}+{x_{{1}}}^{2}+{x_{{2}}}^{2}-{d}^{2}.$\\ In particular, \[G(d)\cap{\C}[d, \bar x]=<g(d, \bar x)>.\] And so $g(d, \bar x)$ is the generic offset polynomial for the parabola ${\mathcal C}$.\\ This Gr\"{o}bner basis has been computed considering the generators of $I(d)$ as polynomials in ${\C}(d)[\bar x,\bar y,u]$. This means that we have relationships of the form: \[g(d, \bar x)= a_1f(\bar y)+a_2\mathrm{nor}_{(1,2)}(\bar x,\bar y)+a_3b(d, \bar x, \bar y)+a_4w(\bar y,u)\] and for $i=1,\ldots,8$: \[ \chi_i(d, \bar x)= b_{i1}f(\bar y)+b_{i2}\mathrm{nor}_{(1,2)}(\bar x,\bar y) +b_{i3}b(d, \bar x, \bar y)+b_{i4}w(\bar y,u) \] where $a_1,\ldots,a_4,b_{11},\ldots,b_{84}\in{\C}(d)[\bar x,\bar y,u]$. The result in Exercise 7, on page 283 of \cite{Cox1997} indicates that the Gr\"{o}bner basis specializes well for all values $d^o$ such that none of the denominators appearing in the coefficients of $a_1,\ldots,a_4$ and $b_{11},\ldots,b_{84}$ vanishes at $d^o$. In this particular example, one may compute these coefficients and check that their denominators are all constant. Therefore, specializing $g(d, \bar x)$ provides the offset equation for every non-zero value of $d$. The computations in this example were obtained with the computer algebra system Singular (see \cite{GPS}). We do not include here the details of the computations, because of obvious space limitations. \end{Example} \begin{center} \rule{2cm}{0.5pt} \quad\\ \end{center} \noindent The following result, about the dependence on $d$ of the generic offset polynomial, is an easy consequence of Theorem \ref{thm:Ch1:GenericOffsetSpecialization} above. In fact, Theorem \ref{thm:Ch1:GenericOffsetSpecialization} implies that there are infinitely many values $d^o$ such that $g(d^o, \bar x)$ is the polynomial of $\cO_{d^o}(\Sigma)$ and, simultaneously, $g(-d^o,\bar x)$ is the polynomial of $\cO_{-d^o}(\Sigma)$.
But, because of the symmetry in the construction, the offsets $\cO_{d^o}(\Sigma)$ and $\cO_{-d^o}({\Sigma})$ are exactly the same algebraic set. Thus, it follows that for infinitely many values of $d^o$ it holds that up to multiplication by a non-zero constant: \[g(d^o, \bar x)=g(-d^o,\bar x).\] Hence, we have proved the following proposition:
\begin{Proposition}\label{prop:ch1:OffsetEquationIsQuadraticInd} The generic offset polynomial belongs to ${\C}[\bar x][d^2]$. That is, it only contains even powers of $d$. \end{Proposition}
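For instance, the evenness in $d$ can be verified directly on the generic offset polynomial of the unit circle from Example (b); the following sketch (Python/{\tt sympy}, purely illustrative) checks $g(d,\bar x)=g(-d,\bar x)$ and lists the exponents of $d$ that occur.

```python
# Sketch (illustrative, sympy assumed): the circle's generic offset polynomial
# satisfies g(d, x) = g(-d, x), so only even powers of d occur.
from sympy import symbols, expand, Poly

d, x1, x2 = symbols('d x1 x2')
g = expand((x1**2 + x2**2 - (1 + d)**2)*(x1**2 + x2**2 - (1 - d)**2))

print(expand(g.subs(d, -d) - g))                    # 0: g is even in d
print(sorted({m[0] for m in Poly(g, d).monoms()}))  # exponents of d: all even
```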
\noindent Now we can describe precisely the central problem of this work.
\begin{Remark} The {\sf total degree problem} for $\Sigma$ consists of finding (efficient) formulae to compute the total degree of $g$ in the variables $\bar x$. We denote this total degree by $\delta$. \end{Remark}
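As an illustration of the total degree problem (and of the resultant-based elimination discussed in the introductory outline), the value $\delta=6$ for the parabola can be recovered with univariate resultants. The following Python/{\tt sympy} sketch is illustrative only: it substitutes $y_2=y_1^2$ from $f=0$ and eliminates $y_1$; in general the resultant may contain extraneous factors besides $g$, so the $\bar x$-dependent part must be inspected.

```python
# Sketch (illustrative, sympy assumed): generic offset polynomial of the
# parabola y2 - y1^2 = 0 via a univariate resultant, after substituting
# y2 = y1^2 from f = 0; then read off the total degree in xbar.
from sympy import symbols, resultant, factor_list, Poly, expand

d, x1, x2, y1 = symbols('d x1 x2 y1')

nor = -2*y1*(x2 - y1**2) - (x1 - y1)          # nor_{(1,2)} with y2 = y1^2
b = (x1 - y1)**2 + (x2 - y1**2)**2 - d**2     # b(d, xbar, ybar) with y2 = y1^2

R = resultant(nor, b, y1)                     # eliminates y1; g divides R

# Total degree in xbar of the xbar-dependent part of R:
delta = sum(Poly(fac, x1, x2).total_degree()*mult
            for fac, mult in factor_list(R)[1] if fac.has(x1, x2))
print(delta)
```

Here the vertex offset points $(0,\pm d)$ lie on the computed hypersurface, and the total degree in $\bar x$ is $6$, in agreement with the degree of $g$ in Example \ref{exm:ch1:StandardParabolaGenericOffset}.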
\subsection{Surface Parametrizations and Parametric Offset System} \label{subsec:ch4:SurfaceParametrizationsAndtheirAssociatedNormalVector}
Since $\Sigma$ is an algebraic surface over ${\C}$, all of its irreducible components have dimension $2$ over ${\C}$. The surface $\Sigma$ is said to be {\sf unirational (or parametric)} if there exists a rational map $P:{\C}^2\mapsto\Sigma$ such that the image of ${P}$ is dense in $\Sigma$ w.r.t. the Zariski topology. The map ${P}$ is called an {\sf (affine) parametrization} of $\Sigma$. If ${P}$ is a birational map, then $\Sigma$ is called a {\sf rational surface}, and ${P}$ is called a {\sf proper parametrization} of $\Sigma$. {\sf In this paper we will not assume that ${P}$ is proper} (see Lemma \ref{lem:ch4:PropertiesSurfaceParametrization} in page \pageref{lem:ch4:PropertiesSurfaceParametrization}, and the observations preceding it). It is well known that a rational surface is always irreducible.
\noindent Thus, a parametrization ${P}$ of $\Sigma$ is given through a non-constant triplet of rational functions in two parameters. We will use $\bar t=(t_1,t_2)$ for the parameters of ${P}$ and, as usual, $\bar t^o=(t_1^o,t_2^o)$ stands for a particular value in ${\C}^2$ of the pair of parameters. By a simple algebraic manipulation, we can assume that the three components of ${P}$ have a common denominator. Thus, we can write: \begin{equation}\label{eq:ch4:SurfaceAffineParametrization} {P}(\bar t)=\left( \dfrac{P_{1}(\bar t)}{P_{0}(\bar t)}, \dfrac{P_{2}(\bar t)}{P_{0}(\bar t)}, \dfrac{P_{3}(\bar t)}{P_{0}(\bar t)} \right) \end{equation} where $P_0,\ldots,P_3\in\C[\bar t]$ and $\gcd(P_0,\ldots,P_3)=1$. The number \[d_{{P}}=\max_{i=0,\ldots,3}\left(\{ {\rm deg}_{\bar t}(P_{i})\}\right)\] is then called the {\sf degree} of ${P}$.
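\noindent As an illustration of this notation (an assumed example, not one of the surfaces studied in the text), the stereographic parametrization of the unit sphere can be written in the form (\ref{eq:ch4:SurfaceAffineParametrization}) and its degree computed mechanically, e.g. with sympy:

```python
import sympy as sp

t1, t2 = sp.symbols('t1 t2')

# Stereographic parametrization of the unit sphere, written with the
# common denominator P0 and gcd(P0, P1, P2, P3) = 1
P0 = t1**2 + t2**2 + 1
P1, P2, P3 = 2*t1, 2*t2, t1**2 + t2**2 - 1

# Sanity check: the image satisfies y1^2 + y2^2 + y3^2 = 1
assert sp.expand(P1**2 + P2**2 + P3**2 - P0**2) == 0

# Degree of P: maximum total degree of the four polynomials
dP = max(sp.Poly(Q, t1, t2).total_degree() for Q in (P0, P1, P2, P3))
assert dP == 2
```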
\noindent Over the algebraically closed field ${\C}$, the notions of rational and parametric surface are equivalent (see the Castelnuovo Theorem \cite{Castelnuovo1939}). Furthermore, there exists an algorithm by Schicho (see \cite{schicho1998rps}) to obtain a proper parametrization of a rational surface given by its implicit equation. Thus, in principle, given a non-proper parametrization of a surface, it is possible (though computationally very expensive) to implicitize, and then apply Schicho's algorithm to obtain a proper parametrization. In addition, \cite{perezdiaz2006ppr} shows how to properly reparametrize certain special families of rational surfaces. However, {\sf in this paper we will not assume that ${P}$ is proper}, and the degree formulas below take this fact into account.
\noindent The parametrization $P$ has two {\sf associated tangent vectors}, denoted by \begin{equation}\label{def:ch4:SurfaceAffineParametrizationAssociatedTangentVectors} \dfrac{\partial {P}(\bar t)}{\partial t_1}\mbox{ and }\dfrac{\partial {P}(\bar t)}{\partial t_2}. \end{equation} That is:
\[
\dfrac{\partial {P}}{\partial t_i}= \left(\frac{P_{1,i}P_{0}-P_{1}P_{0,i}}{(P_{0})^2}, \frac{P_{2,i}P_{0}-P_{2}P_{0,i}}{(P_{0})^2}, \frac{P_{3,i}P_{0}-P_{3}P_{0,i}}{(P_{0})^2}\right)
\]
where $P_{j,i}$ denotes the partial derivative of $P_{j}$ w.r.t. $t_i$, for $j=0,\ldots,3$ and $i=1,2$.
\noindent The following Lemma states those properties of the surface parametrization $P$ that we will need in the sequel.
\begin{Lemma}\label{lem:ch4:PropertiesSurfaceParametrization} There are non-empty Zariski open subsets $\Upsilon_1\subset\C^2$ and $\Upsilon_2\subset\Sigma$ such that: \[{P}:\Upsilon_1\mapsto\Upsilon_2\] is a surjective regular map of degree $m$. In particular, this means that ${P}$ defines an $m:1$ correspondence between $\Upsilon_1$ and $\Upsilon_2$. Thus, given $\bar y^o\in\Upsilon_2$, there are precisely $m$ different values $\bar t_1^o,\ldots,\bar t_m^o$ of the parameter $\bar t$ such that $P(\bar t_i^o)=\bar y^o$ for $i=1,\ldots,m$. Furthermore, if $\bar t^o\in\Upsilon_1$, the rank of the Jacobian matrix $\left(\dfrac{\partial{P}}{\partial\bar t}\right)$ evaluated at $\bar t^o$ is two. \end{Lemma} \begin{proof} See e.g. \cite{Perez-Diaz2002}. \end{proof}
\begin{Remark}\label{rem:ch4:TracingIndex_m} \begin{enumerate}
\item[]
\item The number $m$ is also called, as in the case of curves, the {\sf tracing index} of ${P}$. See \cite{Sendra2007} for an algorithm to compute $m$. In the sequel, we will {\sf denote} by $m$ the tracing index of $P$.
\item As a consequence of this lemma, the part of the surface $\Sigma$ not covered by the image of $P$ is a proper closed subset (i.e. a finite collection of curves and points). \end{enumerate} \end{Remark}
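\noindent The tracing index can be seen at work in a toy example (assumed here for illustration): the parabolic cylinder $y_3=y_1^2$ admits the non-proper parametrization $(t_1^2,\,t_2,\,t_1^4)$, with $m=2$, since $(t_1,t_2)$ and $(-t_1,t_2)$ produce the same point. A sympy sketch:

```python
import sympy as sp

t1, t2, s1, s2 = sp.symbols('t1 t2 s1 s2')

# Non-proper parametrization of the parabolic cylinder y3 = y1^2:
# (t1, t2) and (-t1, t2) give the same point, so the tracing index is m = 2
P = (t1**2, t2, t1**4)
assert all(sp.expand(c - c.subs(t1, -t1)) == 0 for c in P)

# Generic fibre of P: solve P(s1, s2) = P(t1, t2) for (s1, s2)
sols = sp.solve([s1**2 - t1**2, s2 - t2, s1**4 - t1**4], [s1, s2], dict=True)
assert len(sols) == 2          # m = 2 parameter values per generic point
```

A proper reparametrization is obtained here by the substitution $u_1=t_1^2$, giving $(u_1,\,u_2,\,u_1^2)$.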
\noindent Starting with the parametrization $P$ of $\Sigma$ as in (\ref{eq:ch4:SurfaceAffineParametrization}) above, we will construct a polynomial normal vector to $\Sigma$, that will be used in the statements of the degree formulas for rational surfaces. This particular choice of normal vector will be called in the sequel the {\sf associated normal vector of $P$}\index{normal vector $\bar n$ to a rational surface, associated to $P$}, and it will be {\sf denoted} by ${\bar n}(\bar t)$.
\noindent To construct ${\bar n}(\bar t)$, we first take the cross product of the associated tangent vectors introduced in (\ref{def:ch4:SurfaceAffineParametrizationAssociatedTangentVectors}), page \pageref{def:ch4:SurfaceAffineParametrizationAssociatedTangentVectors}. Let us denote: \[V(\bar t)=\dfrac{\partial {P}(\bar t)}{\partial t_1}\wedge\dfrac{\partial {P}(\bar t)}{\partial t_2}.\] This vector $V(\bar t)$ has the following form: \[ V(\bar t)=\left( \dfrac{A_1(\bar t)}{A_0(\bar t)}, \dfrac{A_2(\bar t)}{A_0(\bar t)}, \dfrac{A_3(\bar t)}{A_0(\bar t)}\right) \] where $A_i\in{\C}[\bar t]$. Let $G(\bar t)=\gcd(A_1,A_2,A_3)$.
\begin{Definition}\label{def:ch4:AffineAssociatedNormalVector} With the above notation, the {\sf associated normal vector} ${\bar n}=(n_1,n_2,n_3)$ to $P$ is the vector whose components are the polynomials: \[n_i(\bar t)=\dfrac{A_i(\bar t)}{G(\bar t)}\mbox{ for }i=1,2,3.\] \end{Definition}
\begin{Remark}\label{rem:ch4:RelationBetweenImplicitAndParametricNormalVector} \begin{enumerate} \item[] \item Note that ${\bar n}$ is a normal vector to $\Sigma$ at $P(\bar t)$, vanishing at most at a finite set of points in the $\bar t$ plane. To see this observe that, because of their construction, $n_1, n_2, n_3$ have no common factors. Besides, at most one of the polynomials $n_i$ is constant (otherwise the surface is a plane). Thus, the non constant components of ${\bar n}$ define a system of at least two plane curves without common components. \item In particular, there are some $\mu\in\N$ and $\beta(\bar t)\in\C[\bar t]$, with $\gcd(\beta,P_0)=1$, such that \begin{equation}\label{eq:ch4:RelationBetweenImplicitAndParametricNormalVectors} f_i(P(\bar t))=\dfrac{\beta(\bar t)}{P_0(\bar t)^{\mu}}n_i(\bar t)\mbox{ for }i=1,2,3. \end{equation} That is: \[\nabla f(P(\bar t))=\dfrac{\beta(\bar t)}{P_0(\bar t)^{\mu}}\cdot\bar n(\bar t)\] \item Note that the polynomial $\beta(\bar t)$ introduced above is not identically zero. Otherwise, one has $f_i(P(\bar t))=0$ for $i=1,2,3$, and this implies that $f(\bar y)$ is a constant polynomial, which is a contradiction. \end{enumerate} \end{Remark}
\begin{Definition} The polynomial $h\in\C[\bar t]$ defined as \[h(\bar t)=n_1(\bar t)^2+n_2(\bar t)^2+n_3(\bar t)^2\] is called the {\sf parametric (affine) normal-hodograph\index{hodograph, affine parametric}\index{normal-hodograph, affine parametric}} of the parametrization $P$. \end{Definition}
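\noindent The construction of $\bar n$ and $h$ can be reproduced step by step for the stereographic parametrization of the unit sphere (an assumed example). In this case the three components of $V$ happen to share the denominator $P_0^3$, so taking numerators componentwise is harmless; since the surface is a sphere centered at the origin, the resulting $\bar n$ must be parallel to $(P_1,P_2,P_3)$, which the sketch checks:

```python
import sympy as sp

t1, t2 = sp.symbols('t1 t2')

# Stereographic parametrization of the unit sphere (assumed example)
P0 = t1**2 + t2**2 + 1
P1, P2, P3 = 2*t1, 2*t2, t1**2 + t2**2 - 1
P = sp.Matrix([P1, P2, P3]) / P0

# Cross product of the associated tangent vectors
V = P.diff(t1).cross(P.diff(t2))

# Numerators A_i, their gcd G, and the associated normal vector n = A/G
A = [sp.fraction(sp.cancel(v))[0] for v in V]
G = sp.gcd(sp.gcd(A[0], A[1]), A[2])
n = [sp.cancel(a / G) for a in A]

# Parametric normal-hodograph
h = sp.expand(n[0]**2 + n[1]**2 + n[2]**2)

# On a sphere centered at the origin, n is parallel to (P1, P2, P3)
assert all(sp.expand(c) == 0
           for c in sp.Matrix(n).cross(sp.Matrix([P1, P2, P3])))
```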
\begin{Remark} In the sequel, if we need to refer to the implicit normal-hodograph introduced in page \pageref{notation:ch1:Hodograph}, we will denote it by $H_{\operatorname{imp}}$ in the projective case, resp. $h_{\operatorname{imp}}$ in the affine case. \end{Remark} \noindent The following lemma will be used below to exclude from our discussion certain pathological cases, associated to some particular parameter values.
\begin{Lemma}\label{lem:ch4:ExcludeBadParameterValues} The sets $\Upsilon_1$ and $\Upsilon_2$ in Lemma \ref{lem:ch4:PropertiesSurfaceParametrization} (page \pageref{lem:ch4:PropertiesSurfaceParametrization}) can be chosen so that if $\bar t^o\in\Upsilon_1$, then \[P_0(\bar t^o)h(\bar t^o)\beta(\bar t^o)\neq 0.\] In particular, ${\bar n}(\bar t^o)\neq0$. \end{Lemma} \begin{proof} Note that $P_0$, $h$ and $\beta$ are non-zero polynomials. Thus, the equation: \[P_0(\bar t)h(\bar t)\beta(\bar t)=0\] defines an algebraic curve. Let us call it ${\mathcal C}$. Then it suffices to replace $\Upsilon_1$ (resp. $\Upsilon_2$) in Lemma \ref{lem:ch4:PropertiesSurfaceParametrization} with $\Upsilon_1\setminus{\mathcal C}$ (resp. $\Upsilon_2\setminus P({\mathcal C})$). \end{proof}
\begin{center} \subsubsection*{Parametric system for the generic offset} \label{subsec:ch4:ParametricSystemforGenericOffset} \end{center}
\noindent Let $\Sigma$ and ${P}$ be as above. In order to describe $\cO_d(\Sigma)$ from a parametric point of view, we introduce the following system, to be called the {\sf parametric system for the generic offset}:
\begin{equation}\label{sys:ch4:ParametricOffsetSystem} \begin{minipage}{14cm} \[\hspace{3mm}\mathfrak S_1^{{P}}(d)\equiv\begin{cases} b^{P}(d,\bar t,\bar x): \left (P_0{x_1}-P_1\right)^{_2}+\left (P_0{ x_2}-P_2\right )^{_2}+\left ( P_0{ x_3}-P_3\right)^{_2}-{d}^{_2}{P_0}^{_2}=0\\ \mathrm{nor}^{P}_{(1,2)}(\bar t,\bar x):\,n_1\cdot (P_0x_2-P_2)-n_2\cdot (P_0x_1-P_1)=0\\ \mathrm{nor}^{P}_{(1,3)}(\bar t,\bar x):\,n_1\cdot (P_0x_3-P_3)-n_3\cdot (P_0x_1-P_1)=0\\ \mathrm{nor}^{P}_{(2,3)}(\bar t,\bar x):\,n_2\cdot (P_0x_3-P_3)-n_3\cdot (P_0x_2-P_2)=0\\ w^{P}(r,\bar t):\,r\cdot P_0\cdot h\cdot \beta-1=0\\ \end{cases}\] \end{minipage} \end{equation} Our first result will show that this system provides an alternative description for the generic offset. To state this, we will introduce some additional notation. Let \[\Psi^{P}\subset\C\times\C\times\C^2\times\C^3\] be the set of solutions, in the variables $(d,r, \bar t,\bar x)$, of the system $\mathfrak S_1^{{P}}(d)$. We also consider the projection maps \[ \begin{array}{ccc} \begin{cases} \pi_1^{{P}}:\C\times\C\times\C^2\times\C^3\mapsto\C\times\C^{3}\\ \pi_1^{{P}}(d,r, \bar t,\bar x)=(d,\bar x) \end{cases} & \mbox{and} & \begin{cases} \pi_2^{{P}}:\C\times\C\times\C^2\times\C^3\mapsto\C\times\C^{2}\\ \pi_2^{{P}}(d,r, \bar t,\bar x)=(r,\bar t) \end{cases} \end{array} \] and we define ${\mathcal A}^{{P}}=\pi_1^P(\Psi^{P})$. Recall that $\left({\mathcal A}^{{P}}\right)^*$ denotes the Zariski closure of ${\mathcal A}^{{P}}$.
\begin{Proposition}\label{prop:ch4:ParametricDescriptionOffset} \[ {\cO}_{d}(\Sigma)=\left({\mathcal A}^{{P}}\right)^*. \] \end{Proposition} \begin{proof} With the notation introduced in Definition \ref{def:ch1:GenericOffset}, page \pageref{def:ch1:GenericOffset}, recall that \[{\cO}_{d}(\Sigma)={\mathcal A}(\Sigma)^*=\pi_1(\Psi({\Sigma}))^*.\] Note that in this proof we use $\pi_1, \pi_2$ as in page \pageref{def:ch1:GenericOffset}, to be distinguished from $\pi_1^P, \pi_2^P$ introduced above. Let $\Upsilon_1, \Upsilon_2$ be as in Lemma \ref{lem:ch4:ExcludeBadParameterValues}, page \pageref{lem:ch4:ExcludeBadParameterValues}, and let us denote $${\mathcal B}^P_{\Sigma}=\pi_2^{-1}(\C\times\Upsilon_2).$$ ${\mathcal B}^P_{\Sigma}$ is a non-empty dense subset of $\Psi({\Sigma})$, because $\C\times\Upsilon_2$ is dense in $\C\times\Sigma$. It follows that ${\cO}_{d}(\Sigma)=\pi_1({\mathcal B}^P_{\Sigma})^*$. We will show that $\pi_1({\mathcal B}^P_{\Sigma})={\mathcal A}^P$, thus completing the proof.
\noindent If $(d^o, \bar x^o)\in\pi_1({\mathcal B}^P_{\Sigma})$, there are $\bar y^o, u^o$ and $\bar t^o\in\Upsilon_1$ such that $(d^o, \bar x^o, \bar y^o, u^o)\in\Psi({\Sigma})$, with $\bar y^o=P(\bar t^o)$. Since $u^o\neq 0$ and also $P_0(\bar t^o)h(\bar t^o)\beta(\bar t^o)\neq 0$, we can define: \[ r^o=\dfrac{u^o\,\beta(\bar t^o)}{P_0(\bar t^o)^{2\mu+1}}. \] where $\mu$ is as in Equation \ref{eq:ch4:RelationBetweenImplicitAndParametricNormalVectors}, page \pageref{eq:ch4:RelationBetweenImplicitAndParametricNormalVectors}. Then, substituting $P(\bar t^o)$ by $\bar y^o$ in System \ref{sys:ch4:ParametricOffsetSystem}, and using also Equation \ref{eq:ch4:RelationBetweenImplicitAndParametricNormalVectors}, one has that: \begin{equation}\label{sys:EquivalenceBetweenImplicitAndParametricOffsetSystems} \begin{cases} b^P(d^o,\bar t^o,\bar x^o)=P_0(\bar t^o)^2 b(d^o,\bar x^o,\bar y^o)=0\\[3mm] \mathrm{nor}^P_{(i,j)}(\bar t^o,\bar x^o)=\dfrac{P_0(\bar t^o)^{\mu+1}}{\beta(\bar t^o)}\mathrm{nor}_{(i,j)}(\bar x^o,\bar y^o)=0\\[3mm] w^P(r^o,\bar t^o)=w(u^o,\bar y^o)=0,&\ \end{cases} \end{equation} because $(d^o, \bar x^o, \bar y^o, u^o)\in\Psi({\Sigma})$. Therefore, one concludes that $(d^o,r^o,\bar t^o,\bar x^o)\in\Psi^{P}$, and so $(d^o, \bar x^o)\in{\mathcal A}^P$. This proves that $\pi_1({\mathcal B}^P_{\Sigma})\subset{\mathcal A}^P$.
\noindent Conversely, let $(d^o, \bar x^o)\in{\mathcal A}^P$. Then, there are $\bar t^o, r^o$ such that $(d^o,r^o,\bar t^o,\bar x^o)\in\Psi^{P}$. Since $P_0(\bar t^o)h(\bar t^o)\beta(\bar t^o)\neq 0$, \[\bar y^o=P(\bar t^o)\quad\mbox{ and }\quad u^o=\dfrac{r^o\,P_0(\bar t^o)^{2\mu+1}}{\beta(\bar t^o)}\] are well defined. The equations (\ref{sys:EquivalenceBetweenImplicitAndParametricOffsetSystems}) still hold, and in this case, they imply that $(d^o, \bar x^o, \bar y^o, u^o)\in\Psi({\Sigma})$. Besides, $\pi_2(d^o, \bar x^o,\bar y^o, u^o)=(d^o,\bar y^o)\in\C\times\Upsilon_2$, and so $(d^o, \bar x^o)\in\pi_1({\mathcal B}^P_{\Sigma})$. This proves that ${\mathcal A}^P\subset\pi_1({\mathcal B}^P_{\Sigma})$, thus finishing the proof. \end{proof}
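\noindent The equations of System (\ref{sys:ch4:ParametricOffsetSystem}) can be sanity-checked numerically. The sketch below uses an assumed example, the paraboloid $x_3=x_1^2+x_2^2$ with its polynomial parametrization (so that $P_0=1$, and in Remark \ref{rem:ch4:RelationBetweenImplicitAndParametricNormalVector} one may take $\beta=1$, $\mu=0$): it builds an offset point $\bar x^o=P(\bar t^o)+d^o\,\bar n(\bar t^o)/\lVert\bar n(\bar t^o)\rVert$ and verifies that $b^P$ and the three polynomials $\mathrm{nor}^P_{(i,j)}$ vanish on it:

```python
import sympy as sp

t1, t2 = sp.symbols('t1 t2')

# Paraboloid x3 = x1^2 + x2^2; here P0 = 1, beta = 1, mu = 0
Pv = [t1, t2, t1**2 + t2**2]
n1, n2, n3 = sp.Matrix(Pv).diff(t1).cross(sp.Matrix(Pv).diff(t2))
nv = [n1, n2, n3]
h = sp.expand(n1**2 + n2**2 + n3**2)      # parametric normal-hodograph

# Offset point at t^o = (1,1), choosing d^o = sqrt(h(t^o)) so that x^o = P + n
to = {t1: 1, t2: 1}
do = sp.sqrt(h.subs(to))                  # distance d^o
xo = [(p + q).subs(to) for p, q in zip(Pv, nv)]

# Equations b^P and nor^P_(i,j) of the parametric offset system
bP = sum((xo[i] - Pv[i].subs(to))**2 for i in range(3)) - do**2
norP = [(nv[i]*(xo[j] - Pv[j]) - nv[j]*(xo[i] - Pv[i])).subs(to)
        for i, j in ((0, 1), (0, 2), (1, 2))]

assert sp.expand(bP) == 0 and all(sp.expand(e) == 0 for e in norP)
```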
\subsection{Intersection of Curves and Resultants} \label{sec:ch1:IntersectionCurvesResultants}
In the following sections we will show that the offset degree problem can be translated into a suitably constructed intersection problem for plane curves. In this subsection we gather some results about the intersection of plane curves that will be used in the sequel.
\noindent It is well known that the intersection points of two plane curves, without common components, as well as their multiplicity of intersection, can be computed by means of resultants. For this, a suitable preparatory change of coordinates may be required (see for instance, \cite{Brieskorn1986}, \cite{Walker1950} and, for a modern treatment of the subject, \cite{Sendra2007}). In this work, for reasons that will turn out to be clear in subsequent sections, we need to analyze the behavior of the resultant factors, and their correspondence with multiplicities of intersection, when some of the standard requirements are not satisfied. Similarly, we also need to analyze the case when more than two curves are involved.
More precisely, we will use two technical lemmas. The first one, whose proof can be found in \cite{SS05}, shows that, under certain conditions, the multiplicity of intersection is reflected in the factors appearing in the resultant, even though the curves are not properly set. In particular, the requirement that no two intersection points lie on a line through the origin can be relaxed, obtaining in this case the total multiplicity of intersection along that line.
\noindent The second lemma, Lemma \ref{lem:ch1:GeneralizedResultants}, is a generalization of Corollary 1 in \cite{Perez-Diaz2002}. It shows that generalized resultants can be used to study the intersection points of a finite family of curves. This lemma will be applied in Section \ref{sec:ch4:TotalDegreeFormulaForRationalSurfaces} to the case of surfaces.
\noindent As we have said in the preceding paragraphs, the multiplicity of intersection of two projective plane curves can be read off from the resultant of their defining polynomials. In fact, this is often used to define the multiplicity of intersection. More precisely (see \cite{Sendra2007}), let ${\mathcal C}_1$ and ${\mathcal C}_2$ be projective plane curves, without common components, such that $(1:0:0)\not\in{\mathcal C}_1\cup{\mathcal C}_2$, and $(1:0:0)$ does not belong to any line connecting two points in ${\mathcal C}_1\cap{\mathcal C}_2$. Let $F(y_0,y_1,y_2)$, resp. $G(y_0,y_1,y_2)$, be the defining polynomials of ${\mathcal C}_1$, resp. ${\mathcal C}_2$. Let $\bar y^o_h=(y^o_0:y^o_1:y^o_2)\in{\mathcal C}_1\cap{\mathcal C}_2$, and let \[R(y_1,y_2)={\rm Res}_{y_0}(F,G).\] Then the multiplicity of intersection of ${\mathcal C}_1$ and ${\mathcal C}_2$ at $\bar y^o_h$, denoted by $\mathop{\mathrm{mult}}\nolimits_{\bar y^o_h}({\mathcal C}_1,{\mathcal C}_2)$, equals the multiplicity of the corresponding factor $(y^o_2y_1-y^o_1y_2)$ in $R(y_1,y_2)$. However, in the following Lemma we see how the multiplicity of intersection of two curves on a line through the origin can still be read off from the resultant, under certain circumstances, even though the curves are not properly set. This lemma can be seen as a generalization of Theorem 5.3, page 111 in \cite{Walker1950}.
\begin{Lemma}\label{lem:ch1:MultiplicityUsingResultants} Let ${{\mathcal C}}_1$ and ${{\mathcal C}}_2$ be two projective algebraic plane curves without common components, given by the homogeneous polynomials $F(y_0,y_1,y_2)$ and $G(y_0,y_1,y_2)$, respectively. Let $p_1,\ldots,p_k$ be the intersection points, different from $(1:0:0)$, of ${{\mathcal C}}_1$ and ${{\mathcal C}}_2$ lying on the line of equation $\beta y_1-\alpha y_2=0$. Then the factor $(\beta y_1-\alpha y_2)$ appears in ${\rm Res}_{y_0}(F,G)$ with multiplicity equal to \[\sum_{i=1}^k\mathop{\mathrm{mult}}\nolimits_{p_i}({{\mathcal C}}_1,{{\mathcal C}}_2)\] \end{Lemma} \begin{proof} See Lemma 19 in \cite{SS05}. \end{proof} \noindent The following Lemma is a generalization of Corollary 1 in \cite{Perez-Diaz2002}. It shows that generalized resultants can be used to study the intersection points of a finite family of curves.
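\noindent Lemma \ref{lem:ch1:MultiplicityUsingResultants} can be illustrated by a small computation (an assumed example): the conic $y_0y_2-y_1^2$ and its tangent line at $(1:1:1)$ meet with multiplicity $2$ at that point, and accordingly the factor $(y_1-y_2)$ appears squared in the resultant. Note that $(1:0:0)$ lies on the conic, so the curves are not properly set in the classical sense, yet the count along the line $y_1-y_2=0$ is still correct:

```python
import sympy as sp

y0, y1, y2 = sp.symbols('y0 y1 y2')

# Conic C1 and its tangent line C2 at the point (1:1:1)
F = y0*y2 - y1**2
G = y0 - 2*y1 + y2

R = sp.expand(sp.resultant(F, G, y0))

# Tangential intersection at (1:1:1): the factor (y1 - y2) appears squared
assert sp.expand(R - (y1 - y2)**2) == 0
```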
\begin{Lemma}\label{lem:ch1:GeneralizedResultants} Let ${\mathcal C}_0,\ldots,{\mathcal C}_m$ be the projective plane curves, defined by the homogeneous polynomials $F_0,\ldots,F_m\in{\C}[\bar t_h]$, respectively. Let us suppose that the following hold: \begin{enumerate}
\item[\em (i)] $F_1,\ldots,F_m$ have positive degree in $t_0$.
\item[\em (ii)] ${\rm deg}_{\bar t_h}(F_1)=\cdots={\rm deg}_{\bar t_h}(F_m)$.
\item[\em (iii)] $\gcd(F_1,\ldots,F_m)=1$. \end{enumerate} Let us denote: \[F(\bar c,\bar t_h)=c_1F_1(\bar t_h)+\cdots+c_mF_m(\bar t_h)\] and let \[R(\bar c,\bar t)=
\operatorname{Res}_{t_0}\left(F_0(\bar t_h),F(\bar c,\bar t_h)\right), \] (note that by {\em (iii)}, $R(\bar c,\bar t)$ is not identically zero). Finally, let $\operatorname{lc}_{t_0}(F_0)\in\C[\bar t]$ and $\operatorname{lc}_{t_0}(F)\in\C[\bar c, \bar t]$ denote, respectively, the leading coefficients w.r.t. $t_0$ of $F_0$ and $F$.
\noindent If $\bar t^o=(t^o_1,t^o_2)\in{\C}^2\setminus\{\bar 0\}$ is such that $\operatorname{Cont}_{\bar c}\left(R\right)(\bar t^o)=0$ and $$\operatorname{lc}_{t_0}(F_0)(\bar t^o)\cdot\operatorname{lc}_{t_0}(F)(\bar c,\bar t^o)\neq 0,$$ there exists $t^o_0$ such that $\bar t^o_h=(t^o_0:t^o_1:t^o_2)\in\bigcap_{i=0}^m {\mathcal C}_i$. \end{Lemma} \begin{proof} First, observe that if ${\rm deg}_{t_0}(F_0)=0$, then $\operatorname{lc}_{t_0}(F_0)=F_0$ and $R(\bar c,\bar t)=F_0^{{\rm deg}_{t_0}(F)}$. Thus, in this case the lemma holds trivially, since there is no $\bar t^o\in{\C}^2\setminus\{\bar 0\}$ satisfying the hypothesis of the lemma. Hence, w.l.o.g., in the rest of the proof, we assume that ${\rm deg}_{t_0}(F_0)>0$.
\noindent Since $\operatorname{lc}_{t_0}(F)(\bar c,\bar t^o)\neq 0$, there exists an open set $\Phi\subset{\C}^m$ such that if $\bar c^o=(c^o_1,\ldots,c^o_m)\in\Phi$, the leading coefficient w.r.t. $t_0$ of $F(\bar c^o,\bar t^o, t_0)\in\C[t_0]$ is $\operatorname{lc}_{t_0}(F)(\bar c^o,\bar t^o)$, and it is non-zero. Therefore, by the Extension Theorem (see \cite{Cox1997}, page 159), there exists $\zeta(\bar c^o)\in{\C}$ (which, in principle, could depend on $\bar c^o$) such that \[F_0(\zeta(\bar c^o),t^o_1,t^o_2)=F(\zeta(\bar c^o),t^o_1,t^o_2)=0.\] We claim that there is $t^o_0\in{\C}$ (not depending on $\bar c^o$), such that $$F_0(t^o_0,t^o_1,t^o_2)=F_1(t^o_0,t^o_1,t^o_2)=\cdots=F_m(t^o_0,t^o_1,t^o_2)=0.$$ To see this note that, since $\operatorname{lc}_{t_0}(F_0)\neq 0$, there is a non-empty finite set of solutions of the following equation in $t_0$: $$F_0(t_0,t^o_1,t^o_2)=0.$$ Let $\zeta_1,\ldots,\zeta_p$ be the solutions. If $$F_1(\zeta_j,t^o_1,t^o_2)=\cdots=F_m(\zeta_j,t^o_1,t^o_2)=0$$ holds for some $j=1,\ldots,p$, then it suffices to take $t^o_0=\zeta_j$. Let us suppose that this is not the case, and we will derive a contradiction. Then there exists an open set $\Phi_1\subset\Phi$, such that if $\bar c^o\in\Phi_1$, then $$F(\zeta_j,t^o_1,t^o_2)= c^o_1F_1(\zeta_j,t^o_1,t^o_2)+\cdots+ c^o_mF_m(\zeta_j,t^o_1,t^o_2)\neq 0$$ for every $j=1,\ldots,p$. This means that, for $\bar c^o\in\Phi_1$, there is no solution of: \[ \begin{cases} F_0(t_0,\bar t^o)=0\\ F(\bar c^o,t_0,\bar t^o)=0 \end{cases} \] Since the resultant specializes properly in $\Phi$, this implies that $R(\bar c^o,\bar t^o)\neq 0$. But, denoting \[ M(\bar t)=\operatorname{Cont}_{\bar c}\left(R(\bar c,\bar t)\right), \quad \mbox{ and } N(\bar c,\bar t)=\operatorname{PP}_{\bar c}\left(R(\bar c,\bar t)\right), \] we have \[R(\bar c^o,\bar t^o)=M(\bar t^o)N(\bar c^o,\bar t^o)=0 \] because, by hypothesis, $M(\bar t^o)=0$. This contradiction proves the result. \end{proof} \color{black}
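\noindent The role of the content $\operatorname{Cont}_{\bar c}(R)$ in Lemma \ref{lem:ch1:GeneralizedResultants} can be made explicit in a toy case (an assumed example): a line $F_0$ and two coprime conics $F_1, F_2$, all three passing through $(1:1:1)$.

```python
import sympy as sp

t0, t1, t2, c1, c2 = sp.symbols('t0 t1 t2 c1 c2')

# F0 is a line; F1, F2 are conics of equal degree, positive degree in t0,
# and gcd(F1, F2) = 1, as required by hypotheses (i)-(iii)
F0 = t0 - t1
F1 = t0*t1 - t2**2
F2 = t0**2 - t1*t2

Fc = c1*F1 + c2*F2
R = sp.expand(sp.resultant(F0, Fc, t0))

# Content of R w.r.t. the c-variables: gcd of its coefficients
coeffs = sp.Poly(R, c1, c2).coeffs()
M = coeffs[0]
for c in coeffs[1:]:
    M = sp.gcd(M, c)

# M vanishes exactly in the direction of the common point: here M is
# a constant multiple of (t1 - t2), pointing at (t0:t1:t2) = (1:1:1)
assert sp.cancel(M / (t1 - t2)).is_number
assert all(Fi.subs({t0: 1, t1: 1, t2: 1}) == 0 for Fi in (F0, F1, F2))
```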
\section{Offset-Line Intersection for Parametric Surfaces} \label{sec:ch4:OffsetLineIntersectionforRationalSurfaces}\label{ch04-DegreeFormulaeRationalSurfaces}
As we said in the introduction to this paper, the parametric character of $\Sigma$ results in a reduction of the dimension of the space in which we count the points in ${\mathcal O}_d(\Sigma)\cap{\mathcal L}_{\bar k}$. This is so because, instead of counting directly those points, we count the values of the $\bar t$ parameters that generate them. In this section we will show how, with this approach, we are led to an intersection problem between projective plane curves, and we will analyze that problem. More precisely: \begin{itemize} \item Subsection \ref{subsec:ch4:IntersectionWithLines} (page \pageref{subsec:ch4:IntersectionWithLines}) is devoted to the analysis of the intersection between the generic offset and a pencil of lines through the origin. The results in this subsection (see Theorem \ref{thm:ch4:TheoreticalFoundation}, page \pageref{thm:ch4:TheoreticalFoundation}) constitute the theoretical foundation of the degree formula to be derived in Section \ref{sec:ch4:TotalDegreeFormulaForRationalSurfaces}. \item In Subsection \ref{subsec:ch4:EliminationAndAuxiliaryPolynomials} we describe the auxiliary polynomials obtained by using elimination techniques in the Parametric Offset-Line System, and we introduce a new auxiliary system, see System \ref{sys:ch4:AuxiliaryCurvesSystem}. Also, we obtain some geometric properties of the solutions of this new system ${{\mathfrak S}^P_3}(d,\bar k)$ in Proposition \ref{prop:ch4:ExtendableSolutions} (page \pageref{prop:ch4:ExtendableSolutions}) and the subsequent Lemma \ref{lem:ch4:SignOfLambdaAndOffsetting} (page \pageref{lem:ch4:SignOfLambdaAndOffsetting}). These results will be used in the sequel to elucidate the relation between the solution sets of Systems $\mathfrak S^P_2(d,\bar k)$ and ${{\mathfrak S}^P_3}(d,\bar k)$. 
\item In Subsection \ref{subsec:ch4:FakePoints} (page \pageref{subsec:ch4:FakePoints}) we define the corresponding notion of fake points and invariant points for the Affine Auxiliary System ${{\mathfrak S}^P_3}(d,\bar k)$. The main result of this subsection is Proposition \ref{prop:ch4:FakePointsAndInvariantSolutionsCoincide} (page \pageref{prop:ch4:FakePointsAndInvariantSolutionsCoincide}), that shows the relation between these two notions. \end{itemize}
\subsection{Intersection with lines}\label{subsec:ch4:IntersectionWithLines}
As in the case of plane curves (see our paper \cite{SS05}), we will address the degree problem for surfaces by counting the number of intersection points between ${\mathcal O}_d(\Sigma)$ and a generic line through the origin. More precisely, let us consider a family of lines through the origin, {\sf denoted} by $\cL_{\bar k}$, whose direction is determined by the values of the variable $\bar k=(k_1,k_2,k_3)$. The family $\cL_{\bar k}$ is described by the following set of parametric equations: \[\cL_{\bar k}\equiv \begin{cases} \ell_1(\bar k,l,\bar x):\,{ x_1}-k_1\,l=0\\ \ell_2(\bar k,l,\bar x):\,{ x_2}-k_2\,l=0\\ \ell_3(\bar k,l,\bar x):\,{ x_3}-k_3\,l=0 \end{cases} \] A particular line of the family, corresponding to the value $\bar k^o$, will be denoted by $\cL_{\bar k^o}$. We add the equations $\ell_1, \ell_2, \ell_3$ of $\cL_{\bar k}$ to the equations of the parametric system for the generic offset (System \ref{sys:ch4:ParametricOffsetSystem} in page \pageref{sys:ch4:ParametricOffsetSystem}), and we arrive at the following system: \begin{equation}\label{sys:ch4:IntersectionOffsetLine} \hspace{-8mm}\begin{minipage}{14cm} \[\hspace{3mm}\mathfrak S^P_2(d,\bar k)\equiv \begin{cases} b^{P}(d,\bar t,\bar x): \left (P_0{x_1}-P_1\right)^{_2}+\left (P_0{ x_2}-P_2\right )^{_2}+\left ( P_0{ x_3}-P_3\right)^{_2}-{d}^{_2}{P_0}^{_2}=0\\ \mathrm{nor}^{P}_{(1,2)}(\bar t,\bar x):\,n_1\cdot (P_0x_2-P_2)-n_2\cdot (P_0x_1-P_1)=0\\ \mathrm{nor}^{P}_{(1,3)}(\bar t,\bar x):\,n_1\cdot (P_0x_3-P_3)-n_3\cdot (P_0x_1-P_1)=0\\ \mathrm{nor}^{P}_{(2,3)}(\bar t,\bar x):\,n_2\cdot (P_0x_3-P_3)-n_3\cdot (P_0x_2-P_2)=0\\ w^{P}(r,\bar t):\,r\cdot P_0\cdot \beta\cdot h-1=0\\ \ell_1(\bar k,l,\bar x):\,{ x_1}-k_1\,l=0\\ \ell_2(\bar k,l,\bar x):\,{ x_2}-k_2\,l=0\\ \ell_3(\bar k,l,\bar x):\,{ x_3}-k_3\,l=0 \end{cases} \] \end{minipage} \end{equation} We will refer to this as the {\sf Parametric Offset-Line System}. The next step is the study of the generic solutions of this system.
\noindent We need to exclude certain degenerate situations that arise for a set of values of $(d,\bar k)$. For example, a degenerate situation arises if the set of points of ${\Sigma}$ where the normal line to ${\Sigma}$ passes through the origin is {\em too big}. The next Lemma says that this can only happen if ${\Sigma}$ is a sphere centered at the origin.
\begin{Lemma}\label{lem:Ch1:LineMeetsVarietyNormallyInProperClosedSet} Let ${\Sigma}_{\bot}\subset{\Sigma}$ denote the set of regular points $\bar y^o\in{\Sigma}$ such that the normal line to ${\Sigma}$ at $\bar y^o$ is parallel to $\bar y^o$. If ${\Sigma}$ is not a sphere centered at the origin, then ${\Sigma}_{\bot}^*$ is a proper (possibly empty) closed subset of ${\Sigma}$. \end{Lemma} \begin{proof} Let us assume that ${\Sigma}_{\bot}$ is nonempty. Let, as usual, $f(\bar{y})$ be the irreducible polynomial defining ${\Sigma}$, and let $\tilde{\Sigma}$ be the algebraic set in ${\C}^3$ defined by: \[ \begin{cases} f(\bar{y})=0\\ f_i(\bar y)y_j-f_j(\bar y)y_i=0&(\mbox{for }i,j=1,\ldots,3; i<j). \end{cases} \] Note that this set of equations implies $\bar y^o\parallel\nabla f(\bar y^o)$ for $\bar y^o\in{\Sigma}$. Then ${\Sigma}_{\bot}\subset\tilde{\Sigma}\subset{\Sigma}$. Therefore, it suffices to prove that $\tilde{\Sigma}\neq{\Sigma}$. Let us suppose that $\tilde{\Sigma}={\Sigma}$. Let \[K(\bar y)=\bar y\cdot\nabla f(\bar y)=\sum_{j=1}^3y_jf_j(\bar{y}).\] Then for every $\bar y^o\in{\Sigma}$, using that $f_i(\bar y^o)y_j^o=f_j(\bar y^o)y_i^o$ one has that \[f_i(\bar y^o)K(\bar y^o)= \sum_{j=1}^3f_i(\bar y^o)y_j^of_j(\bar y^o)=y_i^o\sum_{j=1}^3f_j(\bar y^o)^2=y_i^oh(\bar y^o),\] for $i=1,2,3$ (here $h(\bar y)=\|\nabla f(\bar y)\|^2$ denotes the implicit normal-hodograph). Now let $\bar t=(t_1,t_{2})$ and let $\cQ(\bar t)=(Q_1,Q_2,Q_3)(\bar t)$ be a local parametrization of ${\Sigma}$. Substituting $\cQ$ in the above relation: \[ f_i(\cQ(\bar t))K(\cQ(\bar t))=Q_i(\bar t) h(\cQ(\bar t))\] that is, $K(\cQ(\bar t))\nabla f(\cQ(\bar t))=h(\cQ(\bar t))\cQ(\bar t)$. Using Prop. 2 in \cite{Sendra1999}, we know that $h(\cQ(\bar t))\neq 0$, and so $K(\cQ(\bar t))\neq 0$. Thus: \[ \frac{h(\cQ(\bar t))}{K(\cQ(\bar t))}Q_i(\bar t)=f_i(\cQ(\bar t)). \] On the other hand, since $f(\cQ(\bar t))=0$, differentiating w.r.t. $t_j$ $(j=1,2)$ one has: \[ \sum_{i=1}^3f_i(\cQ(\bar t))\frac{\partial Q_i(\bar t)}{\partial t_j}= \frac{h(\cQ(\bar t))}{K(\cQ(\bar t))}\sum_{i=1}^3Q_i(\bar t)\frac{\partial Q_i(\bar t)}{\partial t_j}= 0\] From this, one concludes that \[ \frac{\partial }{\partial t_j}\left(\sum_{i=1}^3Q_i^2(\bar t)\right)=2 \sum_{i=1}^3Q_i(\bar t)\frac{\partial Q_i(\bar t)}{\partial t_j}=0 \] for $j=1,2$. This means that $\sum_{i=1}^3Q_i^2(\bar t)=c$ for some constant $c\in{\C}$. Since ${\Sigma}$ is assumed not to be normal-isotropic, one has $c\neq 0$ and, since the image of the local parametrization $\cQ$ is Zariski-dense in the irreducible surface ${\Sigma}$, we conclude that ${\Sigma}$ equals a sphere centered at the origin. \end{proof}
\begin{Remark} Note that, if in Lemma \ref{lem:Ch1:LineMeetsVarietyNormallyInProperClosedSet} we consider those regular points $\bar y^o$ of ${\Sigma}$ such that the normal line to ${\Sigma}$ at $\bar y^o$ is parallel to the vector $\bar y^o-\bar a$ for a fixed $\bar a\in{\C}^3$, then ${\Sigma}^*_{\bot}$ is a proper (possibly empty) closed subset of ${\Sigma}$, unless ${\Sigma}$ is a sphere centered at $\bar a$. \end{Remark}
\noindent A closer analysis of the proof of Lemma \ref{lem:Ch1:LineMeetsVarietyNormallyInProperClosedSet} shows that in fact we have also proved the following:
\begin{Corollary}\label{cor:ch1:PositiveDimensionComponentsOfBotVareInSpheres} If $\cW$ is any irreducible component of ${\Sigma}_{\bot}^*$ with ${\rm dim}(\cW)>0$, then $\cW$ is contained in a sphere centered at the origin. That is, there exists $d^o\in{\C}^\times$ such that if $\bar y^o\in\cW$, then \[(y^o_1)^2+(y^o_2)^2+(y^o_3)^2=(d^o)^2.\] Since ${\Sigma}_{\bot}^*$ has at most finitely many irreducible components, it follows that there is a finite set of distances $\{d_1^{\bot},\ldots,d_p^{\bot}\}$ such that ${\Sigma}_{\bot}^*$ is contained in the union of the spheres centered at the origin with radius $d_i^{\bot}$ for $i=1,\ldots,p$. \end{Corollary} \noindent We will use the notation $\Upsilon({\Sigma}_{\bot})=\{d_1^{\bot},\ldots,d_p^{\bot}\}$, and we will say that $\Upsilon({\Sigma}_{\bot})$ is {\sf the set of critical distances of ${\Sigma}$.}
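\noindent The set of critical distances can be computed directly in an assumed example: for the ellipsoid $y_1^2+2y_2^2+4y_3^2=4$, the set ${\Sigma}_{\bot}$ consists of the six points on the coordinate axes, so $\Upsilon({\Sigma}_{\bot})=\{1,\sqrt{2},2\}$, the semi-axes:

```python
import sympy as sp

y1, y2, y3 = sp.symbols('y1 y2 y3')

# Ellipsoid and its gradient
f = y1**2 + 2*y2**2 + 4*y3**2 - 4
f1, f2, f3 = (sp.diff(f, v) for v in (y1, y2, y3))

# Points where the normal line passes through the origin: grad f || y
eqs = [f, f1*y2 - f2*y1, f1*y3 - f3*y1, f2*y3 - f3*y2]
sols = sp.solve(eqs, [y1, y2, y3], dict=True)

# The critical distances are the semi-axes of the ellipsoid
dists = {sp.sqrt(s[y1]**2 + s[y2]**2 + s[y3]**2) for s in sols}
assert dists == {1, sp.sqrt(2), 2}
```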
\noindent The following lemma is the basic tool to avoid the remaining degenerate situations in the analysis of the offset-line intersection: for a given proper closed subset $\mfF\subset\Sigma$, it shows \begin{enumerate} \item how to avoid the set of values $(d^o,\bar k^o)$ such that $\cL_{\bar k^o}\setminus\{\bar 0\}$ meets $\Sigma$ in a point $\bar y^o\in\mfF$, \item and how to avoid the set of values $(d^o,\bar k^o)$ such that $\cL_{\bar k^o}\setminus\{\bar 0\}$ meets $\cO_{d^o}(\Sigma)$ in a point $\bar x^o$ associated to $\bar y^o\in\mfF$. \end{enumerate}
\noindent In the proof of the Lemma we will use the polynomials $f, h, b$ and $\mathrm{nor}_{(i,j)}$ (for $i,j=1,\ldots,3; i<j$), introduced with System $\mathfrak{S}_1(d)$ in page \pageref{sys:Ch1:GenericOffsetSystem}. For the convenience of the reader we repeat that system here: \[ \left.\begin{array}{lr} &f(\bar y)=0\\ \mathrm{nor}_{(i,j)}(\bar x,\bar y): &f_i(\bar y)(x_j-y_j)-f_j(\bar y)(x_i-y_i)=0\\ (\mbox{for }i,j=1,\ldots,3; i<j)&\\ b(d, \bar x, \bar y ):& (x_1-y_1)^2+(x_2-y_2)^2+(x_3-y_3)^2-d^2=0\\
w(\bar y,u):& u\cdot(\|\nabla f(\bar y)\|^2)-1=0 \end{array}\right\} \equiv\mathfrak{S}_1(d). \]
and that $h(\bar t)=n_1(\bar t)^2+n_2(\bar t)^2+n_3(\bar t)^2$, while $h_{\operatorname{imp}}(\bar y)=\|\nabla f(\bar y)\|^2$.
\begin{Lemma}\label{lem:ch4:AvoidClosedSubsetsSigma} Let $\mfF\subsetneq\Sigma$ be closed. There exists an open $\Omega_{\mfF}\subset\C^4$, such that if $(d^o,\bar k^o)\in \Omega_{\mfF}$, the following hold: \begin{enumerate}
\item[(1)] $\cL_{\bar k^o}\cap\left(\mfF\setminus\{\bar 0\}\right)=\emptyset$.
\item[(2)] If $\bar x^o\in(\cL_{\bar k^o}\cap \cO_{d^o}(\Sigma)){\setminus \{\bar 0\}}$, there is no solution $(d^o,\bar x^o,\bar y^o,u^o)$ of System $\mathfrak{S}_1(d)$ (System \ref{sys:Ch1:GenericOffsetSystem} in page \pageref{sys:Ch1:GenericOffsetSystem}) with $\bar y^o\in\mfF$. \end{enumerate} \end{Lemma} \noindent{\em Proof.} If $\mfF$ is empty, the result is trivial. Thus, let us assume that $\mfF\neq\emptyset$, and let the defining polynomials of $\mfF$ be $\{\phi_1(\bar y),\ldots,\phi_p(\bar y)\}\subset\C[\bar y]$. We will show that one may take $\Omega_{\mfF}=\Omega^1_{\mfF}\cap\Omega^2_{\mfF}$, where $\Omega^1_{\mfF}, \Omega^2_{\mfF}$ are two open sets constructed as follows: \begin{enumerate}
\item[(a)] Let us consider the following ideal in $\C[\bar k,\rho,v,\bar y]$: \[\cI=<f(\bar y), \phi_1(\bar y),\ldots,\phi_p(\bar y),\bar y-\rho\cdot\bar k,v\cdot\rho-1>,\] and the projection maps defined on its solution set $\bV(\cI)$ as follows: \[\pi_{(1,1)}(\bar k,\rho,v,\bar y)=\bar y, \quad \pi_{(1,2)}(\bar k,\rho,v,\bar y)=\bar k\] We show first that $\pi_{(1,1)}(\bV(\cI))=\mfF$. The inclusion $\pi_{(1,1)}(\bV(\cI))\subset\mfF$ is trivial; and if $\bar y^o\in\mfF$, then since $\mfF\subset\Sigma$, $(\bar y^o,1,1,\bar y^o)\in\bV(\cI)$ proves the reverse inclusion. Therefore, since $\mfF\subsetneq\Sigma$, ${\rm dim}(\pi_{(1,1)}(\bV(\cI)))={\rm dim}(\mfF)<2$. Besides, for every $\bar y^o\in\pi_{(1,1)}(\bV(\cI))$ one has:
\[\pi_{(1,1)}^{-1}(\bar y^o)=\{(v^o\bar y^o,\frac{1}{v^o},v^o,\bar y^o)\,|\,v^o\in\C^\times \} \] from which it follows that ${\rm dim}(\pi_{(1,1)}^{-1}(\bar y^o))=1$. Since the dimension of the fiber does not depend on $\bar y^o$, applying Lemma \ref{lem:ch1:FiberDimension} (page \pageref{lem:ch1:FiberDimension}), we obtain ${\rm dim}(\bV(\cI))<3$. Thus, ${\rm dim}(\pi_{(1,2)}(\bV(\cI)))<3$. It follows that $(\pi_{(1,2)}(\bV(\cI)))^*$ is a proper closed subset of $\C^3$. Let $\Theta^1=\C^3\setminus(\pi_{(1,2)}(\bV(\cI)))^*$, and let $\Omega_{\mfF}^1=\C\times\Theta^1$.
\item[(b)] Let us consider the following ideal in $\C[d,\bar k,\rho,v,\bar x,\bar y]$: \[ \begin{array}{l} \cJ=<f(\bar y), b(d,\bar x,\bar y),\mathrm{nor}_{(1,2)}(\bar x,\bar y),\mathrm{nor}_{(1,3)}(\bar x,\bar y),\mathrm{nor}_{(2,3)}(\bar x,\bar y),\\ \bar x-\rho\cdot \bar k,v\cdot\rho\cdot d\cdot h_{\operatorname{imp}}(\bar y)-1, \phi_1(\bar y),\ldots,\phi_p(\bar y)> \end{array} \] and the projection maps defined on its solution set $\bV(\cJ)\subset\C^{12}$ as follows: \[\pi_{(2,1)}(d,\bar k,\rho,v,\bar x,\bar y)=\bar y, \quad \pi_{(2,2)}(d,\bar k,\rho,v,\bar x,\bar y)=(d,\bar k)\] Then $\pi_{(2,1)}(\bV(\cJ))\subset\mfF$. Therefore ${\rm dim}(\pi_{(2,1)}(\bV(\cJ)))\leq 1$. Let $\bar y^o\in\pi_{(2,1)}(\bV(\cJ))$. Note that then $h_{\operatorname{imp}}(\bar y^o)\neq 0$. We denote $\sigma^o=\sqrt{h_{\operatorname{imp}}(\bar y^o)}$ (a particular choice of the square root); clearly $\sigma^o\neq 0$. Then, it holds that: \[ \pi_{(2,1)}^{-1}(\bar y^o)= \biggl\{ \biggl(d^o,\frac{1}{\rho^o} \left(\bar y^o\pm\frac{d^o}{\sigma^o}\nabla f(\bar y^o)\right),\rho^o,\frac{1}{(\sigma^o)^{2}\rho^od^o}, \bar y^o\pm\frac{d^o}{\sigma^o}\nabla f(\bar y^o), \bar y^o \biggr) \, \biggl\lvert\, d^o, \rho^o\in {\C}^\times\biggr\} \] Therefore ${\rm dim}(\pi_{(2,1)}^{-1}(\bar y^o))=2.$ Applying Lemma \ref{lem:ch1:FiberDimension} again, one has \[{\rm dim}(\bV(\cJ))=2+{\rm dim}(\pi_{(2,1)}(\bV(\cJ)))\leq 3.\] It follows that ${\rm dim}(\pi_{(2,2)}(\bV(\cJ)))\leq 3$. Let us take $\Omega^2_{\mfF}={\C}^{4}\setminus \pi_{(2,2)}(\bV(\cJ))^*.$ \end{enumerate} Let $\Omega_{\mfF}=\Omega^1_{\mfF}\cap\Omega^2_{\mfF}$ and let $(d^o,\bar k^o)\in\Omega_\mfF$. \begin{enumerate} \item If $\bar y^o\in\cL_{\bar k^o}\cap\left(\mfF\setminus\{\bar 0\}\right)$, then there is some $\rho^o\in\C^\times$ such that $\bar y^o=\rho^o\bar k^o$. It follows that $(\bar k^o,\rho^o,\dfrac{1}{\rho^o},\bar y^o)\in\bV(\cI)$, and so $\bar k^o\in\pi_{(1,2)}(\bV(\cI))$, contradicting the construction of $\Omega^1_{\mfF}$. 
This proves statement (1). \item If $\bar x^o\in(\cL_{\bar k^o}\cap \cO_{d^o}(\Sigma)){\setminus \{\bar 0\}}$, and there is a solution $(d^o,\bar x^o,\bar y^o,u^o)$ of System $\mfG_1(d)$ with $\bar y^o\in\mfF$, then there is some $\rho^o\in\C^\times$ such that $\bar x^o=\rho^o\bar k^o$. It follows that $(d^o,\bar k^o,\rho^o,\dfrac{1}{\rho^o\cdot d^o\cdot h_{\operatorname{imp}}(\bar y^o)},\bar x^o,\bar y^o)\in\bV(\cJ)$. Therefore $(d^o,\bar k^o)\in\pi_{(2,2)}(\bV(\cJ))$, contradicting the construction of $\Omega^2_{\mfF}$. This proves statement (2).\hspace{35mm}$\Box$ \end{enumerate}
\begin{Remark} Note that the origin may belong to $\mfF$. In that case, Lemma \ref{lem:ch4:AvoidClosedSubsetsSigma}(1) guarantees that the origin is the only point in $\cL_{\bar k^o}\cap\mfF$. Correspondingly, part (2) of the lemma guarantees that the remaining points in $\cL_{\bar k^o}\cap \cO_{d^o}(\Sigma)$ cannot be extended to a solution $(d^o,\bar x^o,\bar y^o,u^o)$ of System $\mfG_1(d)$ with $\bar y^o\in\mfF$. \end{Remark}
\noindent Our next goal is to prove a theorem (Theorem \ref{thm:ch4:TheoreticalFoundation} below) that gives the theoretical foundation for our approach to the degree problem. Theorem \ref{thm:ch4:TheoreticalFoundation} is the analogue of Theorem 5 in our paper \cite{SS05}. That theorem is preceded by Lemma 4, which states that for a curve ${\mathcal C}$, $\bar 0\in\cO_{d^o}({\mathcal C})$ for at most finitely many values $d^o\in\C$. However, we have not been able to prove a similar result for the case of surfaces: the main difficulty is that a surface can have infinitely many singular points. Even if we restrict ourselves to the case of parametric surfaces, we still have to take into account the possible existence of a singular curve contained in $\Sigma$, and not contained in the image of the parametrization. Besides, in the proof of the theorem we will use Lemma \ref{lem:Ch1:LineMeetsVarietyNormallyInProperClosedSet} (page \pageref{lem:Ch1:LineMeetsVarietyNormallyInProperClosedSet}), which does not apply when $\Sigma$ is a sphere centered at the origin. This is the reason for the Assumptions that we announced in the Introduction of this paper, and which we state formally here. In the sequel, we assume that:
\begin{Assumptions}\label{rem:ch4:NotInfinitelyManyOffsetsThroughOrigin} \begin{enumerate}
\item[]
\item[(1)] {\sf There exists a finite subset $\Delta^1$ of $\C$ such that, for $d^o\not\in\Delta^1$ the origin does not belong to $\cO_{d^o}({\Sigma})$.}
\item[(2)] {\sf $\Sigma$ is not a sphere centered at the origin.} \end{enumerate} \end{Assumptions}
\noindent Before stating the theorem we have to introduce some terminology.
\begin{Remark} For $(d^o,\bar k^o)\in\C^4$ we will {\sf denote} by $\Psi_2^P(d^o,\bar k^o)$ the set of solutions of System {$\mathfrak S^P_2(d^o,\bar k^o)$} in the variables $(l,r,\bar t,\bar x)$ (see (\ref{sys:ch4:IntersectionOffsetLine}) on page \pageref{sys:ch4:IntersectionOffsetLine}). \end{Remark}
\begin{Theorem}\label{thm:ch4:TheoreticalFoundation} Let $\Sigma$ satisfy Assumptions \ref{rem:ch4:NotInfinitelyManyOffsetsThroughOrigin}. There exists a non-empty Zariski-open subset $\Omega_0\subset\C^4$, such that if {$(d^o,\bar k^o)\in\Omega_0$}, then \begin{itemize}
\item[(a)] if $\bar y^o\in\cL_{\bar k^o}\cap (\Sigma\setminus \{\bar 0\})$, then no normal vector to $\Sigma$ at $\bar y^o$ is parallel to $\bar y^o$.
\item[(b)] $\Psi_2^P(d^o,\bar k^o)$ has precisely $m\delta$ elements (recall that $m$ is the tracing index of $P$ and $\delta$ the total degree of the generic offset).
Besides, the set $\Psi_2^P(d^o,\bar k^o)$ can be partitioned as a disjoint union:
\[\Psi_2^P(d^o,\bar k^o)=
\Psi_{2}^1(d^o,\bar k^o)\cup\cdots\cup\Psi_{2}^{\delta}(d^o,\bar k^o),
\]
such that:
\begin{enumerate}
\item[(b1)] $\#\Psi_{2}^i(d^o,\bar k^o)=m$ for $i=1,\ldots,\delta$.
\item[(b2)] The $m$ elements of $\Psi_{2}^i(d^o,\bar k^o)$ have the same values of the variables $(l,r,\bar x)$, and differ only in the value of $\bar t$. Besides, for $(l^o,r^o,\bar t^o,\bar x^o)\in\Psi_{2}^i(d^o,\bar k^o)$, the point $P(\bar t^o)\in\Sigma$ does not depend on the choice of $\bar t^o$.
\end{enumerate}
Let us denote by $(l^o_i,r^o_i,\bar t^o_{h,i},\bar x^o_i)$ an element of $\Psi_{2}^i(d^o,\bar k^o)$. Then
\begin{enumerate}
\item[(b3)] The points $\bar x^o_1,\ldots,\bar x^o_{\delta}$ are all different (and different from $\bar 0$), and
$$\cL_{\bar k^o}\cap\cO_{d^o}(\Sigma)=\{\bar x^o_1,\ldots,\bar x^o_{\delta}\}.
$$
Furthermore, $\bar x^o_i$ is non normal-isotropic in $\cO_{d^o}(\Sigma)$, for $i=1,\ldots,\delta$.
\item[(b4)] The $\delta$ points
$$\bar y^o_1=P(\bar t^o_{h,1}),\ldots,\bar y^o_{\delta}=P(\bar t^o_{h,\delta})$$
are affine, distinct and non normal-isotropic points of $\Sigma$.
\end{enumerate} \item[(c)] $k_i^o\neq 0$ for $i=1,2,3$. \end{itemize} \end{Theorem}
\noindent{\em Proof.}\quad Let $\Delta_0^1=\{d^o\in {\C} \,|\, g(d^o,\bar 0)\neq 0\}$. Assumption \ref{rem:ch4:NotInfinitelyManyOffsetsThroughOrigin}(1) (page \pageref{rem:ch4:NotInfinitelyManyOffsetsThroughOrigin}) implies that $\Delta_0^1$ is an open non-empty subset of $\C$. Let $\Delta$ be as in Corollary \ref{cor:ch1:BadDistancesFiniteSet} (page \pageref{cor:ch1:BadDistancesFiniteSet}), and let $\Omega^0_0=(\Delta_0^1\cap(\C\setminus\Delta))\times\bigl(\C^3\setminus\{\bar k^o \,|\, k_i^o=0\mbox{ for some }i=1,2,3\}\bigr)$. Next, let us consider $g(d,\bar x)$ expressed as follows: \[g(d,\bar x)=\sum_{i=0}^{\delta}g_i(d,\bar x)\] where $g_i$ is a form of degree $i$ in $\bar x$. We consider: \[\tilde{g}(d,\bar k,\rho)=g(d,\rho\bar k)=\sum_{i=0}^{\delta}g_i(d,\bar k)\rho^i.\] This polynomial is not identically zero, it is primitive w.r.t. $\bar x$ (see Lemma \ref{lem:ch1:GenericOffsetIsPrimitiveIn_x}, page \pageref{lem:ch1:GenericOffsetIsPrimitiveIn_x}), and it is square-free: indeed, $g(d,\bar x)$ is square-free by Remark \ref{rem:ch1:GenericOffsetEqSqfreeAndHasAtMostTwoFactors} (page \pageref{rem:ch1:GenericOffsetEqSqfreeAndHasAtMostTwoFactors}), and therefore $\tilde g$ is square-free too. Thus, the discriminant \[Q(d, \bar k)=\operatorname{Dis}_{\rho}\left(\tilde{g}(d,\bar k,\rho)\right)\] is not identically zero either.
\noindent In this situation, let us take \[\Omega^0_1=\Omega^0_0\setminus \{(d^o,\bar k^o)\in\C^4 / Q(d^o, \bar k^o)\cdot g_{0}(d^o, \bar k^o)\cdot g_{\delta}(d^o, \bar k^o)=0\}.\] For $(d^o,\bar k^o)\in\Omega^0_1$, $\tilde{g}(d^o,\bar k^o,\rho)$ has $\delta$ distinct non-zero roots; say, $\rho_1,\ldots,\rho_{\delta}$. Therefore, $\cL_{\bar k^o}$ intersects $\cO_{d^o}(\Sigma)$ in $\delta$ distinct points: \[\bar x^o_1=\rho_1\bar k^o,\ldots,\bar x^o_{\delta}=\rho_{\delta}\bar k^o,\] and none of these points is the origin.
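As an illustration (not part of the argument), the discriminant construction above can be carried out explicitly for a toy surface. The following sympy sketch uses the plane $x_3=1$, whose generic offset equation is (up to constants) $g(d,\bar x)=(x_3-1)^2-d^2$, of total degree $\delta=2$:

```python
import sympy as sp

d, rho = sp.symbols('d rho')
k1, k2, k3 = sp.symbols('k1 k2 k3')
x1, x2, x3 = sp.symbols('x1 x2 x3')

# generic offset equation of the plane x3 = 1: two parallel planes
g = (x3 - 1)**2 - d**2

# restrict g to the line through the origin with direction k
g_tilde = sp.expand(g.subs({x1: rho*k1, x2: rho*k2, x3: rho*k3}))

# Q(d, k) = Dis_rho(g~) vanishes exactly when the line meets the offset
# in fewer than delta = 2 distinct points
Q = sp.discriminant(g_tilde, rho)
assert sp.simplify(Q - 4*d**2*k3**2) == 0   # excluded: d = 0 or k3 = 0
```

In this toy case the conditions $Q\neq 0$, $g_0\neq 0$ and $g_{\delta}\neq 0$ of the proof reduce to $d^o\neq 0,\pm 1$ and $k_3^o\neq 0$, and for such $(d^o,\bar k^o)$ the two roots $\rho=(1\pm d^o)/k_3^o$ are indeed distinct and non-zero.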
\noindent We will now construct an open subset $\Omega^0_2\subset\Omega^0_1$ such that for $(d^o,\bar k^o)\in\Omega^0_2$, the points $\bar x^o_1,\ldots,\bar x^o_{\delta}$ are non normal-isotropic points of $\cO_{d^o}(\Sigma)$, and each one of them is associated with a unique non normal-isotropic point of $\Sigma$. To do this, recall that ${\rm Iso}(\Sigma)$ is the closed set of normal-isotropic points of $\Sigma$ (see page \pageref{notation:ch1:Hodograph}), and let $\Omega_{{\rm Iso}(\Sigma)}$ be the set obtained when applying Lemma \ref{lem:ch4:AvoidClosedSubsetsSigma} (page \pageref{lem:ch4:AvoidClosedSubsetsSigma}) to the closed subset $\mathfrak F={\rm Iso}(\Sigma)$. Note that if $(d^o,\bar k^o)\in\Omega^0_1\cap\Omega_{{\rm Iso}(\Sigma)}$, then the points $\bar x^o_1,\ldots,\bar x^o_{\delta}$ are not associated with normal-isotropic points of $\Sigma$. Let us consider the polynomial \[\Gamma(d,\bar x)=\left(\dfrac{\partial g}{\partial x_1}(d,\bar x)\right)^2+\left(\dfrac{\partial g}{\partial x_2}(d,\bar x)\right)^2+\left(\dfrac{\partial g}{\partial x_3}(d,\bar x)\right)^2.\] This polynomial is not identically zero, because otherwise for every $d^o\not\in\Delta$ all the points in $\cO_{d^o}(\Sigma)$ would be normal-isotropic, contradicting Proposition \ref{prop:ch1:OffsetPropertiesRegardingSimpleAndSpecialComponents}(3) (page \pageref{prop:ch1:OffsetPropertiesRegardingSimpleAndSpecialComponents}). Let then $\tilde{\Gamma}(d, \bar k, r)=\Gamma(d,r \bar k)$, and consider the resultant: \[\Phi(d, \bar k)={\rm Res}_r(\tilde{g}(d,\bar k,r),\tilde{\Gamma}(d, \bar k, r))\] If $\Phi(d, \bar k)\equiv 0$, then $\tilde{g}(d,\bar k,r)$ and $\tilde{\Gamma}(d,\bar k,r)$ have a common factor of positive degree in $r$. Let us show that this leads to a contradiction. Suppose that \[ \begin{cases} \tilde{g}(d,\bar k,r)=M(d, \bar k, r) G(d,\bar k,r), \\ \tilde{\Gamma}(d,\bar k,r)=M(d, \bar k, r) \Gamma^*(d,\bar k,r). 
\end{cases} \] Then $M$ depends on $\bar k$ (because $\tilde g$ cannot have a non-constant factor in $\C[d,r]$). Take therefore $r^o\in\C^\times$ such that $M(d,\frac{\bar k}{r^o}, r^o)$ depends on $\bar k$. Then: \[ \begin{cases} g(d,\bar x)=g(d,r^o\frac{\bar x}{r^o})= \tilde{g}(d,\frac{\bar x}{r^o},r^o)=M(d,\frac{\bar x}{r^o}, r^o) G(d,\frac{\bar x}{r^o}, r^o) \\[3mm] \Gamma(d,\bar x)= \Gamma(d,r^o\frac{\bar x}{r^o} )=\tilde{\Gamma}(d,\frac{\bar x}{r^o},r^o)= M(d,\frac{\bar x}{r^o}, r^o) \Gamma^*(d,\frac{\bar x}{r^o}, r^o) \end{cases} \] But since $g$ has at most two irreducible components, this would imply that for $d^o\not\in\Delta$, $\cO_{d^o}(\Sigma)$ would have at least one normal-isotropic component, contradicting Proposition \ref{prop:ch1:OffsetPropertiesRegardingSimpleAndSpecialComponents}(3) (page \pageref{prop:ch1:OffsetPropertiesRegardingSimpleAndSpecialComponents}). Therefore, the equation $\Phi(d, \bar k)=0$ defines a proper closed subset of $\C^4$. This shows that we can take: $$\Omega^0_2=\left(\Omega^0_1\cap\Omega_{{\rm Iso}(\Sigma)}\right)\setminus \{(d^o,\bar k^o):\Phi(d^o, \bar k^o)=0 \}$$ Then, for $(d^o,\bar k^o)\in\Omega^0_2$, each of the points $\bar x^o_i,$ for $i=1,\ldots,\delta$, is associated with a unique non normal-isotropic point $\bar y^o_i$ of $\Sigma$ (recall that $d^o\not\in\Delta$, and so the irreducible components of $\cO_{d^o}(\Sigma)$ are simple).
\noindent Let $\Omega_{\bot}$ be the open subset of $\C\times\C^3$ obtained by applying Lemma \ref{lem:ch4:AvoidClosedSubsetsSigma} (page \pageref{lem:ch4:AvoidClosedSubsetsSigma}) to the closed subset $\Sigma_{\bot}$ whose existence is guaranteed by Lemma \ref{lem:Ch1:LineMeetsVarietyNormallyInProperClosedSet} (page \pageref{lem:Ch1:LineMeetsVarietyNormallyInProperClosedSet}). Recall that, by assumption (see Assumptions \ref{rem:ch4:NotInfinitelyManyOffsetsThroughOrigin}(2), page \pageref{rem:ch4:NotInfinitelyManyOffsetsThroughOrigin}), $\Sigma$ is not a sphere centered at the origin. Besides, let $\Theta=\C^3\setminus {\cal L}_0$, where \[{\cal L}_0=\begin{cases} \emptyset\mbox{ if } \bar 0\not\in\Sigma\mbox{ or if }\bar 0\in{\rm Sing}(\Sigma)\\ \mbox{ the normal line to }\Sigma\mbox{ at } \bar 0\mbox{ otherwise}. \end{cases} \] and set \[ \Omega^0_3=\Omega^0_2\cap\Omega_{\bot}\cap(\C\times\Theta). \] Then for $(d^o,\bar k^o)\in\Omega^0_3$, the points $\bar y^o_i, i=1,\ldots,\delta$, are different. To prove this, note that if $\bar y^o_i=\bar y^o_j$, with $i\neq j$, then $\bar y^o_i$ generates $\bar x^o_i$ and $\bar x^o_j$. Thus, since $\bar y^o_i,\bar x^o_i, \bar x^o_j$ all lie on the normal line to $\Sigma$ at $\bar y^o_i$ and on $\cL_{\bar k^o}$, it follows that these two lines coincide. This means that $\bar y^o_i\in\Sigma_{\bot}$. Since $(d^o,\bar k^o)\in\Omega_{\bot}$, Lemma \ref{lem:ch4:AvoidClosedSubsetsSigma} shows that $\bar y^o_i\in \cL_{\bar k^o}\cap\Sigma_{\bot}$ implies $\bar y^o_i=\bar 0$, in contradiction with $\bar k^o\in\Theta$.
\noindent We will now show that it is possible to restrict the values of $(d,\bar k)$ so that the points $\bar y^o_i$ belong to the image of the parametrization $P$. Let $\Upsilon_2$ be as in Lemma \ref{lem:ch4:PropertiesSurfaceParametrization} (page \pageref{lem:ch4:PropertiesSurfaceParametrization}), and let $\Omega_{\Upsilon_2}\subset\C\times\C^3$ be the open subset obtained by applying Lemma \ref{lem:ch4:AvoidClosedSubsetsSigma} to $\Sigma\setminus\Upsilon_2$. Then take $\Omega^0_4=\Omega^0_3\cap\Omega_{\Upsilon_2}$.
\noindent Let us show that we can take $\Omega_0=\Omega^0_4$. If $(d^o,\bar k^o)\in\Omega^0_4$, then for each of the points $\bar y^o_i$ there are $m$ values $\bar t^o_{(i,j)}$ (with $i=1,\ldots,\delta$, $j=1,\ldots,m$) such that $P(\bar t^o_{(i,j)})=\bar y^o_i$. Letting $\Psi_2^i(d^o,\bar k^o)$ consist of the solutions of System $\mathfrak S^P_2(d^o,\bar k^o)$ whose $\bar t$-coordinate belongs to $\{\bar t^o_{(i,j)}\}_{j=1,\ldots,m}$, one has that \[ \Psi_2^P(d^o,\bar k^o)=\Psi_2^1(d^o,\bar k^o)\cup\cdots\cup\Psi_2^\delta(d^o,\bar k^o) \] and so the first part of claim (b) is proved. Furthermore: \begin{itemize}
\item claim (a) holds because of the construction of $\Omega^0_3$.
\item the structure of $\Psi_2^P(d^o,\bar k^o)$ in claims (b1) and (b2) holds because of the construction of $\Omega^0_4$.
\item Claims (b3) and (b4) hold because of the construction of $\Omega^0_0, \Omega^0_1$ and $\Omega^0_2$.
\item Claim (c) follows from the construction of $\Omega^0_0$. \end{itemize}
\framebox(5,5){}
\begin{Remark}\label{rem:ch4:InOmega0GoodSpecialization} Note that, by the construction of $\Omega_0^0$ in the proof of Theorem \ref{thm:ch4:TheoreticalFoundation} (page \pageref{thm:ch4:TheoreticalFoundation}), if $(d^o,\bar k^o)\in\Omega_0$, then $g(d^o, \bar x)=0$ is the equation of ${\cO}_{d^o}(\Sigma)$. \end{Remark}
\subsection{Elimination and auxiliary polynomials}\label{subsec:ch4:EliminationAndAuxiliaryPolynomials}\label{sec:ch4:AuxiliaryCurvesForRationalSurfaces}
To continue with our strategy, we proceed to eliminate the variables $(l,r,\bar x)$ in the Parametric Offset-Line System $\mathfrak S^P_2(d,\bar k)$ (page \pageref{sys:ch4:IntersectionOffsetLine}). This elimination process leads us to consider the following system of equations: \begin{equation}\label{sys:ch4:AuxiliaryCurvesSystem} \hspace{-5mm}\begin{minipage}{14cm} \[\hspace{3mm} {{\mathfrak S}^P_3}(d,\bar k)\equiv\begin{cases} s_1(d,\bar k, \bar t):=h(\bar t)(k_2P_3-k_3P_2)^2-d^2P_0(\bar t)^2(k_2n_3-k_3n_2)^2=0, \\ s_2(d,\bar k, \bar t):=h(\bar t)(k_1P_3-k_3P_1)^2-d^2P_0(\bar t)^2(k_1n_3-k_3n_1)^2=0, \\ s_3(d,\bar k, \bar t):=h(\bar t)(k_1P_2-k_2P_1)^2-d^2P_0(\bar t)^2(k_1n_2-k_2n_1)^2=0. \end{cases} \] \end{minipage} \end{equation} We will refer to this as the {\sf Affine Auxiliary System}.
\noindent We recall that $P=\left(\dfrac{P_1}{P_0},\dfrac{P_2}{P_0},\dfrac{P_3}{P_0}\right)$, $\bar k=(k_1,k_2,k_3)$, $\bar n=(n_1,n_2,n_3)$ and $h(\bar t)=n_1(\bar t)^2+n_2(\bar t)^2+n_3(\bar t)^2$. Along with the polynomials $s_1, s_2, s_3$ introduced in the above system, we will also need to consider the following polynomial: \[s_0(\bar k, \bar t)=k_1(P_2n_3-P_3n_2)-k_2(P_1n_3-P_3n_1) +k_3(P_1n_2-P_2n_1)\] The geometrical meaning of $s_0$ is clear when one expresses it as a determinant, as follows: \begin{equation}\label{eq:ch4:GeometricalInterpretationOfS0} s_0(\bar k, \bar t)={\rm det}\left( \begin{array}{ccc} k_1&k_2&k_3\\ P_1&P_2&P_3\\ n_1&n_2&n_3 \end{array} \right). \end{equation} We will introduce some additional notation to simplify the expression of the polynomials $s_i$ for $i=1,2,3$. More precisely, we {\sf denote:} \begin{equation}\label{def:ch4:NotationMandGforCurvesSi} \hspace{-5mm}\begin{minipage}{14cm} \[\hspace{3mm} \left\{\begin{array}{lll} M_1(\bar k,\bar t)=k_2P_3-k_3P_2,&G_1(\bar k,\bar t)=k_2n_3-k_3n_2,\\ M_2(\bar k,\bar t)=k_3P_1-k_1P_3,&G_2(\bar k,\bar t)=k_3n_1-k_1n_3,\\ M_3(\bar k,\bar t)=k_1P_2-k_2P_1,&G_3(\bar k,\bar t)=k_1n_2-k_2n_1.\\ \end{array}\right. \] \end{minipage} \end{equation} With this notation one has \[s_i(d,\bar k,\bar t)=h(\bar t)M_i^2(\bar k,\bar t)-d^2P_0(\bar t)^2G_i^2(\bar k,\bar t)\mbox{ for }i=1,2,3.\] Note also that \begin{equation}\label{eq:ch4:MandGasVectorProducts} \begin{cases} (M_1,M_2,M_3)(\bar k,\bar t)=\bar k\wedge\bigl(P_1(\bar t),P_2(\bar t),P_3(\bar t)\bigr)\\ (G_1,G_2,G_3)(\bar k,\bar t)=\bar k\wedge\bar n(\bar t). \end{cases} \end{equation} \noindent Let
\[ I^P_2(d)=<b^{P},\mathrm{nor}^{P}_{(1,2)},\mathrm{nor}^{P}_{(1,3)},\mathrm{nor}^{P}_{(2,3)},w^{P},\ell_1,\ell_2,\ell_3> \subset \C[d,\bar k,l,r,\bar t,\bar x]
\] be the ideal generated by the polynomials that define the Parametric Offset-Line System $\mathfrak S^P_2(d,\bar k)$. We consider the projection associated with the elimination: \[\pi_{(2,1)}:\C\times\C^3\times\C\times\C\times\C^2\times\C^3\mapsto\C\times\C^3\times\C^2\] given by \[\pi_{(2,1)}(d,\bar k,l,r,\bar t,\bar x)=(d,\bar k,\bar t)\]
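The identities in Equation (\ref{eq:ch4:MandGasVectorProducts}) and the determinant expression (\ref{eq:ch4:GeometricalInterpretationOfS0}) can be verified mechanically. The following sympy sketch (an illustration only, with generic symbols standing in for the values $P_i(\bar t)$, $n_i(\bar t)$) does so:

```python
import sympy as sp

k1, k2, k3, P1, P2, P3, n1, n2, n3 = sp.symbols('k1 k2 k3 P1 P2 P3 n1 n2 n3')
k = sp.Matrix([k1, k2, k3])
P = sp.Matrix([P1, P2, P3])
n = sp.Matrix([n1, n2, n3])

M = k.cross(P)   # should give the components M1, M2, M3
G = k.cross(n)   # should give the components G1, G2, G3
assert all(sp.expand(M[i] - e) == 0
           for i, e in enumerate([k2*P3 - k3*P2, k3*P1 - k1*P3, k1*P2 - k2*P1]))
assert all(sp.expand(G[i] - e) == 0
           for i, e in enumerate([k2*n3 - k3*n2, k3*n1 - k1*n3, k1*n2 - k2*n1]))

# s0 as the 3x3 determinant with rows k, P, n
s0 = sp.Matrix([[k1, k2, k3], [P1, P2, P3], [n1, n2, n3]]).det()
expanded = k1*(P2*n3 - P3*n2) - k2*(P1*n3 - P3*n1) + k3*(P1*n2 - P2*n1)
assert sp.expand(s0 - expanded) == 0
```

In particular, $s_0$ vanishes exactly when $\bar k$, $(P_1,P_2,P_3)$ and $\bar n$ are coplanar, which is the geometric content used later in the proofs.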
\noindent The next lemma relates the polynomials $s_0,\ldots,s_3\in\C[d,\bar k,\bar t]$ in System ${\mathfrak S}^P_3(d,\bar k)$ with the elimination process. We {\sf denote} by $\tilde I^P_2(d)$ the elimination ideal $I^P_2(d)\cap\C[d,\bar k,\bar t]$. For $(d^o,\bar k^o)\in\C\times\C^3$, the set of solutions of the Parametric Offset-Line System is denoted by $\Psi^P_2(d^o,\bar k^o)$, and the set of solutions of ${\mathfrak S_3^P}(d^o,\bar k^o)$ is denoted by $\Psi^P_3(d^o,\bar k^o)$\label{def:ch4:SolutionSet3}. Note that $\Psi^P_2(d^o,\bar k^o)$ is the variety of the ideal obtained from $I^P_2(d)$ by substituting $d=d^o$ and $\bar k=\bar k^o$.
\begin{Lemma}\label{lem:ch4:AuxiliaryPolynomialsBelongToEliminationIdeal} $s_i\in\tilde I^P_2(d)$ for $i=0,\ldots,3$. In particular, if $(l^o,r^o,\bar t^o,\bar x^o)\in\Psi^P_2(d^o,\bar k^o)$, then $\bar t^o\in{\Psi_3^P}(d^o,\bar k^o)$. \end{Lemma} \begin{proof} The polynomials $s_i$ can be expressed as follows: \[ s_i=c^{(i)}_1\,b^{P}+c^{(i)}_2\,\mathrm{nor}^{P}_{(1,2)}+c^{(i)}_3\,\mathrm{nor}^{P}_{(1,3)}+c^{(i)}_4\,\mathrm{nor}^{P}_{(2,3)} +c^{(i)}_5\,w^{P}+c^{(i)}_6\,\ell_1+c^{(i)}_7\,\ell_2+c^{(i)}_8\,\ell_3 \] where $c^{(i)}_j\in\C[d,\bar k,l,r,\bar t,\bar x]$ for $i=0,\ldots,3$, $j=1,\ldots,8$. These polynomials (obtained with the CAS Singular \cite{SingularWeb}) can be found in Appendix \ref{Ap3-ComplementsToSomeProofs} (page \pageref{Ap3-ComplementsToSomeProofs}). \end{proof}
\noindent The next step is naturally the converse analysis: which $\bar t^o\in{\Psi_3^P}(d^o,\bar k^o)$ can be extended to a solution $(l^o,r^o,\bar t^o,\bar x^o)\in\Psi^P_2(d^o,\bar k^o)$? In order to describe them, we need some notation. Let ${\mathcal A}$ {\sf denote} the set of values $\bar t^o\in\C^2$ such that: \begin{equation}\label{def:ch4:Set_A_OfExtensibleSolutions} \left\{\begin{array}{c} P_0(\bar t^o)h(\bar t^o)\bigl(P_2(\bar t^o)n_3(\bar t^o)-P_3(\bar t^o)n_2(\bar t^o)\bigr)\neq 0\\ \mbox{ or }\\ P_0(\bar t^o)h(\bar t^o)\bigl(P_1(\bar t^o)n_3(\bar t^o)-P_3(\bar t^o)n_1(\bar t^o)\bigr)\neq 0\\ \mbox{ or }\\ P_0(\bar t^o)h(\bar t^o)\bigl(P_1(\bar t^o)n_2(\bar t^o)-P_2(\bar t^o)n_1(\bar t^o)\bigr)\neq 0 \end{array}\right. \end{equation} \noindent Now we can describe which solutions $\bar t^o\in{\Psi_3^P}(d^o,\bar k^o)$ can be extended.
\begin{Proposition}\label{prop:ch4:ExtendableSolutions} Let $\Omega_0$ be as in Theorem \ref{thm:ch4:TheoreticalFoundation} (page \pageref{thm:ch4:TheoreticalFoundation}), $(d^o,\bar k^o)\in\Omega_0$ and $\bar t^o\in{\Psi_3^P}(d^o,\bar k^o)$. Then the following holds: \begin{enumerate}
\item[(a)] If $\bar t^o\in{\mathcal A}$, then there exists $\lambda^o\in\C^\times$ such that:
\[\bar k^o\wedge\bigl(P_1(\bar t^o),P_2(\bar t^o),P_3(\bar t^o)\bigr)=\lambda^o\left(\bar k^o\wedge\bar n(\bar t^o)\right).\]
That is,
\[M_i(\bar k^o,\bar t^o)=\lambda^o G_i(\bar k^o,\bar t^o)\mbox{ for }i=1,2,3.\]
\item[(b)] If $\bar t^o\in{\mathcal A}$, then $s_0(\bar k^o,\bar t^o)=0$.
\item[(c)] $(d^o,\bar k^o,\bar t^o)\in\pi_{(2,1)}({\Psi_2^P}(d^o,\bar k^o))$ if and only if $\bar t^o\in{\mathcal A}$. \end{enumerate} In particular,
\[m\delta=\#\left({\mathcal A}\cap{\Psi_3^P}(d^o,\bar k^o)\right).\] Recall that $m$ is the tracing index of $P$, and $\delta$ is the total degree w.r.t.\ $\bar x$ of the generic offset equation. \end{Proposition} \quad\\ \begin{proof} \begin{enumerate}
\item[]
\item[(a)] To prove the existence of $\lambda^o$, notice that $\bar t^o\in{\Psi_3^P}(d^o,\bar k^o)$ implies:
\[h(\bar t^o)M_i^2(\bar k^o,\bar t^o)=(d^o)^2P_0(\bar t^o)^2G_i^2(\bar k^o,\bar t^o)\mbox{ for }i=1,2,3.\]
Since $\bar t^o\in{\mathcal A}$, $h(\bar t^o)\neq 0$. Therefore one concludes that there exist $\epsilon_i$, with $\epsilon_i^2=1$, such that
\[M_i(\bar k^o,\bar t^o)=\epsilon_i\dfrac{d^oP_0(\bar t^o)}{\sqrt{h(\bar t^o)}}G_i(\bar k^o,\bar t^o)\mbox{ for }i=1,2,3.\]
Since there are three of them, at least two of the $\epsilon_i$ must coincide. We will show that the third one must coincide as well; that is, we will show that either $\epsilon_1=\epsilon_2=\epsilon_3=1$ or $\epsilon_1=\epsilon_2=\epsilon_3=-1$ holds. We will study one particular case; the other possible combinations can be treated similarly. Let us suppose, e.g., that $\epsilon_1=\epsilon_2=1$. Then:
\[
\begin{cases}
k_2^oP_3(\bar t^o)-k_3^oP_2(\bar t^o)=\dfrac{d^oP_0(\bar t^o)}{\sqrt{h(\bar t^o)}}\left(k_2^on_3(\bar t^o)-k_3^on_2(\bar t^o)\right)\\[3mm]
k_3^oP_1(\bar t^o)-k_1^oP_3(\bar t^o)=\dfrac{d^oP_0(\bar t^o)}{\sqrt{h(\bar t^o)}}\left(k_3^on_1(\bar t^o)-k_1^on_3(\bar t^o)\right)
\end{cases}\]
Multiplying the first equation by $k_1^o$ and the second by $k_2^o$, and adding, one has:
\[k_3^o\left(k_1^oP_2(\bar t^o)-k_2^oP_1(\bar t^o)\right)=k_3^o\dfrac{d^oP_0(\bar t^o)}{\sqrt{h(\bar t^o)}}\left(k_1^on_2(\bar t^o)-k_2^on_1(\bar t^o)\right)\]
Since $(d^o,\bar k^o)\in\Omega_0$, we have $k_3^o\neq 0$ (see Theorem \ref{thm:ch4:TheoreticalFoundation}(c), page \pageref{thm:ch4:TheoreticalFoundation}). Thus, we have shown that \[k_1^oP_2(\bar t^o)-k_2^oP_1(\bar t^o)=\dfrac{d^oP_0(\bar t^o)}{\sqrt{h(\bar t^o)}}\left(k_1^on_2(\bar t^o)-k_2^on_1(\bar t^o)\right)\] and so $\epsilon_3=1$. Therefore, $\lambda^o=\dfrac{d^oP_0(\bar t^o)}{\sqrt{h(\bar t^o)}}$, and it is non-zero because $\bar t^o\in{\mathcal A}$.
\item[(b)] From the identity in (a) it follows immediately that $\bar k^o$, $\bigl(P_1(\bar t^o),P_2(\bar t^o),P_3(\bar t^o)\bigr)$ and $\bar n(\bar t^o)$ are coplanar vectors. Thus, $s_0(\bar k^o,\bar t^o)=0$ (recall the geometric interpretation of $s_0$ in Equation (\ref{eq:ch4:GeometricalInterpretationOfS0}), page \pageref{eq:ch4:GeometricalInterpretationOfS0}).
\item[(c)] If $(d^o,\bar k^o,\bar t^o)\in\pi_{(2,1)}({\Psi_2^P}(d^o,\bar k^o))$, then $P_0(\bar t^o)\beta(\bar t^o)h(\bar t^o)\neq 0$ follows from equation $w^{P}$ in the Parametric Offset-Line System \ref{sys:ch4:IntersectionOffsetLine} (page \pageref{sys:ch4:IntersectionOffsetLine}). Besides, \[ \bigl(P_2n_3-P_3n_2\bigr)(\bar t^o)=\bigl(P_1n_3-P_3n_1\bigr)(\bar t^o)=\bigl(P_1n_2-P_2n_1\bigr)(\bar t^o)=0 \] is impossible because of Theorem \ref{thm:ch4:TheoreticalFoundation}(a) (page \pageref{thm:ch4:TheoreticalFoundation}). Thus $\bar t^o\in{\mathcal A}$.\\ Conversely, let us suppose that $\bar t^o\in{\mathcal A}$. More precisely, let us suppose w.l.o.g. that \[P_0(\bar t^o)h(\bar t^o)\bigl(P_2(\bar t^o)n_3(\bar t^o)-P_3(\bar t^o)n_2(\bar t^o)\bigr)\neq 0.\] The other cases can be proved in a similar way. First we note that \[G_1(\bar k^o,\bar t^o)=k_2^on_3(\bar t^o)-k_3^on_2(\bar t^o)\neq 0.\] Indeed, suppose that $G_1(\bar k^o,\bar t^o)=0$. Then, using that $s_1(d^o,\bar k^o,\bar t^o)=0$ and $h(\bar t^o)\neq 0$, one has that \[k_2^oP_3(\bar t^o)-k_3^oP_2(\bar t^o)=0.\] Then, from the system: \[ \begin{cases} k_2^on_3(\bar t^o)-k_3^on_2(\bar t^o)=0\\ k_2^oP_3(\bar t^o)-k_3^oP_2(\bar t^o)=0 \end{cases} \] and the fact that $k_2^ok_3^o\neq 0$ (again, this is Theorem \ref{thm:ch4:TheoreticalFoundation}(c)), one deduces that \[P_2(\bar t^o)n_3(\bar t^o)-P_3(\bar t^o)n_2(\bar t^o)=0,\] which is a contradiction. Thus, we can define \[r^o=\dfrac{1}{P_0(\bar t^o)\beta(\bar t^o)h(\bar t^o)}, \mbox{ and } l^o=\dfrac{P_3(\bar t^o)n_2(\bar t^o)-P_2(\bar t^o)n_3(\bar t^o)}{-P_0(\bar t^o)G_1(\bar k^o,\bar t^o)}\] We also define $\bar x^o=l^o\bar k^o$. We claim that $(l^o,r^o,\bar t^o,\bar x^o)\in\Psi^P_2(d^o,\bar k^o)$, and therefore $(d^o,\bar k^o,\bar t^o)\in\pi_{(2,1)}({\Psi_2^P}(d^o,\bar k^o))$. 
To prove our claim we substitute $(l^o,r^o,\bar t^o,\bar x^o)$ in the equations of the Parametric Offset-Line System \ref{sys:ch4:IntersectionOffsetLine} (page \pageref{sys:ch4:IntersectionOffsetLine}), and we check that all of them vanish. The vanishing of $w^P(r^o,\bar t^o)$ and $\ell_i(\bar k^o,l^o,\bar x^o)$ for $i=1,2,3$ is a trivial consequence of the definitions. Substitution in $\mathrm{nor}^P_{(2,3)}$ leads to a polynomial whose numerator vanishes immediately. Substituting in $\mathrm{nor}^P_{(1,2)}$ (resp. in $\mathrm{nor}^P_{(1,3)}$) one obtains: \[ \mathrm{nor}^{P}_{(1,2)}(\bar t^o,\bar x^o)=\dfrac{n_2(\bar t^o)s_0(\bar k^o,\bar t^o)}{n_2(\bar t^o)k_3^o-n_3(\bar t^o)k_2^o}=0, \] (respectively \[ \mathrm{nor}^{P}_{(1,3)}(\bar t^o,\bar x^o)=\dfrac{n_3(\bar t^o)s_0(\bar k^o,\bar t^o)}{n_2(\bar t^o)k_3^o-n_3(\bar t^o)k_2^o}=0), \] where both equations hold because of part (a). Finally, substituting in $b^P(d^o,\bar t^o,\bar x^o)$ one has: \begin{equation}\label{eq:ch4:bSubstitutedWithElevationFormulae} b^P(d^o,\bar t^o,\bar x^o)=\dfrac{s_2(d^o,\bar k^o,\bar t^o)+\phi_1(\bar k^o,\bar t^o)s_0(\bar k^o,\bar t^o)}{(n_2(\bar t^o)k_3^o-n_3(\bar t^o)k_2^o)^2}=0 \end{equation} with $\phi_1(\bar k,\bar t)=k_2 n_1P_3 + k_2 n_3 P_1 - k_3 n_1P_2 - k_3 n_2 P_1 - k_1 n_3 P_2 + k_1 n_2 P_3$. Equation (\ref{eq:ch4:bSubstitutedWithElevationFormulae}) holds because of part (a) and because $s_2(d^o,\bar k^o,\bar t^o)=0$. \end{enumerate} \noindent The claim that
\[m\delta=\#\left({\mathcal A}\cap{\Psi_3^P}(d^o,\bar k^o)\right)\] follows easily from Theorem \ref{thm:ch4:TheoreticalFoundation}(b) (page \pageref{thm:ch4:TheoreticalFoundation}) and the above result (c). This shows that, for $(d^o,\bar k^o)\in\Omega_0$, there is a bijection (under $\pi_{(2,1)}$) between the points of ${\Psi_2^P}(d^o,\bar k^o)$ and the points in ${\mathcal A}\cap{\Psi_3^P}(d^o,\bar k^o)$. This finishes the proof of the proposition. \end{proof}
\begin{Remark}\label{rem:ch4:SignOfLambdaAndOffsetting} In the proof of Proposition \ref{prop:ch4:ExtendableSolutions} (page \pageref{prop:ch4:ExtendableSolutions}) we have seen that there is a vector equality:
\[ \bar M(\bar k^o,\bar t^o)=\epsilon\dfrac{d^oP_0(\bar t^o)}{\sqrt{h(\bar t^o)}}\bar G(\bar k^o,\bar t^o), \] where $\bar M=(M_1,M_2,M_3)$, $\bar G=(G_1,G_2,G_3)$ and $\epsilon=\pm 1$. In the next lemma we will see that the value of $\epsilon$ determines the sign that appears in the offsetting construction. More precisely, in the proof of Proposition \ref{prop:ch4:ExtendableSolutions} we have seen that if $\bar y^o=P(\bar t^o)$, and \[P_0(\bar t^o)h(\bar t^o)\bigl(P_2(\bar t^o)n_3(\bar t^o)-P_3(\bar t^o)n_2(\bar t^o)\bigr)\neq 0,\] then it holds that \[k_2^on_3(\bar t^o)-k_3^on_2(\bar t^o)\neq0 \mbox{ and }k_2^oP_3(\bar t^o)-k_3^oP_2(\bar t^o)\neq 0.\] Furthermore, the point $\bar x^o$, constructed as follows \begin{equation}\label{eq:ch4:LiftingFormulae} \bar x^o=\dfrac{P_3(\bar t^o)n_2(\bar t^o)-P_2(\bar t^o)n_3(\bar t^o)}{-P_0(\bar t^o)G_1(\bar k^o,\bar t^o)}\bar k^o, \end{equation}
is the point in $\cO_{d^o}(\Sigma)\cap\cL_{\bar k^o}$ associated with $\bar y^o$. Thus, one has: \[ \bar x^o=\bar y^o+\epsilon'\dfrac{d^o\nabla f(\bar y^o)}{\sqrt{h_{\operatorname{imp}}(\bar y^o)}}, \] where $\epsilon'=\pm 1$. \end{Remark}
\begin{Lemma}\label{lem:ch4:SignOfLambdaAndOffsetting} With the notation of Remark \ref{rem:ch4:SignOfLambdaAndOffsetting}, it holds that $\epsilon=\epsilon'$. \end{Lemma} \begin{proof} From the Equations \[ M_2(\bar k^o,\bar t^o)=\epsilon\dfrac{d^oP_0(\bar t^o)}{\sqrt{h(\bar t^o)}}G_2(\bar k^o,\bar t^o) \mbox{ and } M_3(\bar k^o,\bar t^o)=\epsilon\dfrac{d^oP_0(\bar t^o)}{\sqrt{h(\bar t^o)}}G_3(\bar k^o,\bar t^o), \] multiplying the first equation by $n_2(\bar t^o)$, the second by $n_3(\bar t^o)$ and adding the results, one has: \[ -G_1(\bar k^o,\bar t^o)P_1(\bar t^o)-k^o_1(P_3n_2-P_2n_3)(\bar t^o)=\epsilon\, n_1(\bar t^o)\dfrac{d^oP_0(\bar t^o)}{\sqrt{h(\bar t^o)}}G_1(\bar k^o,\bar t^o) \] Using Equation \ref{eq:ch4:LiftingFormulae} in Remark \ref{rem:ch4:SignOfLambdaAndOffsetting}, this is: \[ -G_1(\bar k^o,\bar t^o)P_1(\bar t^o)+x^o_1G_1(\bar k^o,\bar t^o)P_0(\bar t^o)=\epsilon\, n_1(\bar t^o)\dfrac{d^oP_0(\bar t^o)}{\sqrt{h(\bar t^o)}}G_1(\bar k^o,\bar t^o). \] Dividing by $G_1(\bar k^o,\bar t^o)P_0(\bar t^o)$: \[ -\dfrac{P_1(\bar t^o)}{P_0(\bar t^o)}+x^o_1=\epsilon\,\dfrac{d^on_1(\bar t^o)}{\sqrt{h(\bar t^o)}}, \] and finally \[ x^o_1=\dfrac{P_1(\bar t^o)}{P_0(\bar t^o)}+\epsilon\, \dfrac{d^on_1(\bar t^o)}{\sqrt{h(\bar t^o)}}. \] Similar results are obtained for $x^o_2$ and $x^o_3$. Thus we have proved that $\epsilon'=\epsilon$. \end{proof}
\subsection{Fake points}\label{subsec:ch4:FakePoints}
Using Proposition \ref{prop:ch4:ExtendableSolutions} (page \pageref{prop:ch4:ExtendableSolutions}) we can now define the set of fake points associated with this problem.
\begin{Definition}\label{def:ch4:FakePoints} A point $\bar t^o\in\C^2$ is a {\sf fake point} if \[ \left\{\begin{array}{c} P_0(\bar t^o)h(\bar t^o)\bigl(P_2(\bar t^o)n_3(\bar t^o)-P_3(\bar t^o)n_2(\bar t^o)\bigr)=0\\ \mbox{ and }\\ P_0(\bar t^o)h(\bar t^o)\bigl(P_1(\bar t^o)n_3(\bar t^o)-P_3(\bar t^o)n_1(\bar t^o)\bigr)=0\\ \mbox{ and }\\ P_0(\bar t^o)h(\bar t^o)\bigl(P_1(\bar t^o)n_2(\bar t^o)-P_2(\bar t^o)n_1(\bar t^o)\bigr)=0 \end{array}\right. \] Equivalently, \begin{equation}\label{eq:ch4:EquationForFakePoints} P_0(\bar t^o)h(\bar t^o)=0\mbox{ or }(P_1(\bar t^o),P_2(\bar t^o),P_3(\bar t^o))\wedge\bar n(\bar t^o)=\bar 0 \end{equation} The set of fake points will be {\sf denoted} by $\cF$. \end{Definition}
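\noindent For illustration, consider the paraboloid parametrized by $P(\bar t)=(t_1,t_2,t_1^2+t_2^2)$ (this concrete surface plays no role in the general argument; it is only a sanity check of the definition). Here $P_0=1$, the associated normal vector is, up to a nonzero constant factor, $\bar n(\bar t)=(-2t_1,-2t_2,1)$, and $h(\bar t)=4t_1^2+4t_2^2+1$. A direct computation gives \[ (P_1,P_2,P_3)(\bar t)\wedge\bar n(\bar t)=\bigl(1+2(t_1^2+t_2^2)\bigr)\,(t_2,-t_1,0), \] so Equation \ref{eq:ch4:EquationForFakePoints} yields \[ \cF=\{4(t_1^2+t_2^2)+1=0\}\cup\{1+2(t_1^2+t_2^2)=0\}\cup\{(0,0)\}. \] Note that $\bar t^o=(0,0)$ is mapped to the origin, where $(P_1,P_2,P_3)(\bar t^o)=\bar 0$ and the wedge condition holds trivially.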
\begin{Definition}\label{def:ch4:InvariantSolutionsSystemS3P} Let $\Omega_0$ be as in Theorem \ref{thm:ch4:TheoreticalFoundation} (page \pageref{thm:ch4:TheoreticalFoundation}) and let $\Omega$ be any non-empty open subset of $\Omega_0$. {\sf The set of invariant solutions of ${{\mathfrak S}^P_3}(d,\bar k)$ w.r.t.\ $\Omega$} is defined as the set: \[ \cI^P_3(\Omega)=\bigcap_{(d^o,\bar k^o)\in\Omega}{\Psi_3^P}(d^o,\bar k^o) \] \end{Definition}
\begin{Remark} Note that if $\bar t^o\in\cF$, we do not assume that $\bar t^o\in{\Psi_3^P}(d^o,\bar k^o)$ for some $(d^o,\bar k^o)\in\C\times\C^3$. \end{Remark}
\noindent We have introduced the fake points starting from the notion of non-extendable solutions of $\mfS^P_3(d^o,\bar k^o)$. Another point of view is to regard the fake points as the {\em invariant solutions} of $\mfS^P_3(d,\bar k)$, in the sense of Definition \ref{def:ch4:InvariantSolutionsSystemS3P}. We will show that, for a certain open subset of values $(d,\bar k)$, both notions actually coincide.
\noindent To prove the equivalence between the notions of fake points and invariant points we need to further restrict the set of values of $(d,\bar k)$. The following lemma gives the required restrictions.
\begin{Lemma}\label{lem:ch4:ExcludingAdditional_dk_AfterTheoreticalFoundation} Let $\Omega_0$ be the open set in Theorem \ref{thm:ch4:TheoreticalFoundation}. There exists a non-empty open subset $\Omega_1\subset\Omega_0$ such that if $(d^o,\bar k^o)\in\Omega_1$, then \begin{enumerate}
\item[(1)] $\bar k^o$ is not isotropic.
\item[(2)] $d^o$ is not a critical distance of $\Sigma$ (see Corollary \ref{cor:ch1:PositiveDimensionComponentsOfBotVareInSpheres} in page \pageref{cor:ch1:PositiveDimensionComponentsOfBotVareInSpheres}).
\item[(3)] The system
\begin{equation}\label{sys:ch4:SystemOfMiPlusP0}
\{P_0(\bar t)=M_1(\bar k^o,1,\bar t)=M_2(\bar k^o,1,\bar t)=M_3(\bar k^o,1,\bar t)=0\}
\end{equation} has no solutions $\bar t^o$ other than those with $P_0(\bar t^o)=P_1(\bar t^o)=P_2(\bar t^o)=P_3(\bar t^o)=0$. \end{enumerate} \end{Lemma} \begin{proof}
\begin{enumerate}
\item[]
\item[(1)] Set $\Omega_1^1=\Omega_0\setminus(\C\times\mfQ)$, where $\mfQ=\{\bar k^o/(k_1^o)^2+(k_2^o)^2+(k_3^o)^2=0\}$ is the cone of isotropic vectors $\bar k$.
\item[(2)] Let $\Upsilon(\Sigma^{\bot})$ be the set of critical distances of $\Sigma$ (defined on page \pageref{cor:ch1:PositiveDimensionComponentsOfBotVareInSpheres}), and set $\Omega_1^2=\Omega_1^1\setminus(\Upsilon(\Sigma^{\bot})\times\C^3)$.
\item[(3)] First we will show that the set of values $\bar k^o\neq\bar 0$ for which the System \ref{sys:ch4:SystemOfMiPlusP0} has a solution is contained in an at most two-dimensional closed subset $\mfR\subset\C^3$. If $P_0$ is constant the result is trivial. Assuming that $P_0$ is not constant, let ${\mathcal C}_0$ be the affine curve defined by $P_0$, and let ${\mathcal C}_1$, ${\mathcal C}_2$, ${\mathcal C}_3$ be the varieties defined by $P_1, P_2, P_3$ respectively. Let $\cJ_{P_1}\subset\C[\bar k,\bar t,v]$ be the ideal defined as follows: \[\cJ_{P_1}=<P_0,k_2P_3-k_3P_2,k_1P_3-k_3P_1,k_1P_2-k_2P_1,v P_1-1>,\] and let $\bV(\cJ_{P_1})\subset\C^3\times\C^2\times\C$ be the solution set of this ideal. Consider the projections defined by: \[ \begin{cases} \pi_1(\bar k,\bar t,v)=\bar k\\ \pi_2(\bar k,\bar t,v)=\bar t \end{cases} \] Let $A_0$ be an irreducible component of $\bV(\cJ_{P_1})$, and let $(\bar k^o,\bar t^o,v^o)\in A_0$. Then the points in $\pi_2^{-1}(\pi_2(\bar k^o,\bar t^o,v^o))$ are the solutions of the following system: \[ \begin{cases} \bar t=\bar t^o,\\ M_1(\bar k,1,\bar t^o)=M_2(\bar k,1,\bar t^o)=M_3(\bar k,1,\bar t^o)=0,\\
v P_1(\bar t^o)-1=0 \end{cases} \] The dimension of this set of solutions is $1$. On the other hand, $\pi_2(A_0)\subset{\mathcal C}_0$ implies that ${\rm dim}(\pi_2(A_0))\leq 1$. Thus, using Lemma \ref{lem:ch1:FiberDimension} (page \pageref{lem:ch1:FiberDimension}), one has that ${\rm dim}(\bV(\cJ_{P_1}))\leq 2$, and hence ${\rm dim}(\pi_1\left(\bV(\cJ_{P_1})\right)^*)\leq 2$. Now, defining $\cJ_{P_2}$ and $\cJ_{P_3}$ in a similar way (that is, replacing the generator $vP_1-1$ by $vP_2-1$ and $vP_3-1$, respectively), we set: \[\mfR=\bigcup_{i=1,2,3}\pi_1\left(\bV(\cJ_{P_i})\right)^*.\] Now let $\Omega_1^3=\Omega_1^2\setminus(\C\times\mfR)$. \end{enumerate} \noindent The above construction shows that $\Omega_1=\Omega_1^3$ satisfies the required properties. \end{proof}
\noindent Now we can prove the announced equivalence between the notions of fake points and invariant points.
\begin{Proposition}\label{prop:ch4:FakePointsAndInvariantSolutionsCoincide} Let $\Omega_1$ be as in Lemma \ref{lem:ch4:ExcludingAdditional_dk_AfterTheoreticalFoundation} (page \pageref{lem:ch4:ExcludingAdditional_dk_AfterTheoreticalFoundation}). If $\Omega$ is a non-empty open subset of $\Omega_1$, then it holds that: \[\cI^P_3(\Omega)=\displaystyle\cF\cap\left(\bigcup_{(d^o,\bar k^o)\in\Omega}{\Psi_3^P}(d^o,\bar k^o)\right).\] \end{Proposition} \noindent {\em Proof.}\quad Let $\bar t^o\in\cI^P_3(\Omega)$. Then $s_i(d^o,\bar k^o,\bar t^o)=0$ for $i=1,2,3$ and any $(d^o,\bar k^o)\in\Omega$. Thus $\bar t^o\in\cup_{(d^o,\bar k^o)\in\Omega}{\Psi_3^P}(d^o,\bar k^o)$. Furthermore, considering $s_i$ as polynomials in $\C[\bar t][d,\bar k]$, it follows that $\bar t^o$ must be a solution of: \[ \begin{cases} h(\bar t)P_1(\bar t)=h(\bar t)P_2(\bar t)=h(\bar t)P_3(\bar t)=0\\ P_0(\bar t)n_1(\bar t)=P_0(\bar t)n_2(\bar t)=P_0(\bar t)n_3(\bar t)=0 \end{cases} \] It follows that $h(\bar t^o)P_0(\bar t^o)=0$, and so $\bar t^o\in\cF$. In fact, if we suppose $h(\bar t^o)P_0(\bar t^o)\neq 0$, then from $P_0(\bar t^o)\neq 0$ one gets $\bar n(\bar t^o)=0$, and so $h(\bar t^o)=0$, a contradiction.\\ Conversely, let $\bar t^o\in\cF\cap\left(\bigcup_{(d^o,\bar k^o)\in\Omega}{\Psi_3^P}(d^o,\bar k^o)\right)$. Then: \begin{enumerate}
\item If $P_0(\bar t^o)=h(\bar t^o)=0$, then $s_i(d,\bar k,\bar t^o)=0$ identically in $(d,\bar k)$ for $i=1,2,3$, and so $\bar t^o\in\cI^P_3(\Omega)$.
\item If $P_0(\bar t^o)\neq 0$ and $h(\bar t^o)=0$, then since $\bar t^o\in{\Psi_3^P}(d^o,\bar k^o)$ for some $(d^o,\bar k^o)\in\Omega$, one has the following two possibilities:
\begin{enumerate}
\item $\bar n(\bar t^o)$ is isotropic and parallel to $\bar k^o$. This is impossible because of the construction of $\Omega_1$ (see Lemma \ref{lem:ch4:ExcludingAdditional_dk_AfterTheoreticalFoundation}(1), page \pageref{lem:ch4:ExcludingAdditional_dk_AfterTheoreticalFoundation}).
\item $\bar n(\bar t^o)=\bar 0$. In this case, again $s_i(d,\bar k,\bar t^o)=0$ identically in $(d,\bar k)$ for $i=1,2,3$, and so $\bar t^o\in\cI^P_3(\Omega)$. \end{enumerate}
\item Let us suppose that $P_0(\bar t^o)=0$ and $h(\bar t^o)\neq 0$. Then, since $\bar t^o\in{\Psi_3^P}(d^o,\bar k^o)$ for some $(d^o,\bar k^o)\in\Omega$, one has that $\bar t^o$ is a solution of:
\[P_0(\bar t)=0, \quad M_1(\bar k^o,\bar t)=M_2(\bar k^o,\bar t)=M_3(\bar k^o,\bar t)=0.\] Thus, two cases are possible: \begin{enumerate}
\item $(P_1,P_2,P_3)(\bar t^o)=\bar 0$. In this case, $s_i(d,\bar k,\bar t^o)=0$ identically in $(d,\bar k)$ for $i=1,2,3$, and so $\bar t^o\in\cI^P_3(\Omega)$.
\item $(P_1,P_2,P_3)(\bar t^o)$ is non-zero. This contradicts the construction of $\Omega_1$ in Lemma \ref{lem:ch4:ExcludingAdditional_dk_AfterTheoreticalFoundation}(3). \end{enumerate} \item Finally, let us suppose that $P_0(\bar t^o)h(\bar t^o)\neq 0$. Then it follows that the point $P(\bar t^o)$ is well defined, and it belongs to $\Sigma_{\bot}^*$ (recall that $\Sigma_{\bot}$ was introduced in Lemma \ref{lem:Ch1:LineMeetsVarietyNormallyInProperClosedSet}, page \pageref{lem:Ch1:LineMeetsVarietyNormallyInProperClosedSet}). Thus $d^o$ would be one of the critical distances, and this contradicts the construction of $\Omega_1$ in Lemma \ref{lem:ch4:ExcludingAdditional_dk_AfterTheoreticalFoundation}(2).
\framebox(5,5){} \end{enumerate}
\section{Total Degree Formula for Parametric Surfaces}\label{sec:ch4:TotalDegreeFormulaForRationalSurfaces}
According to Proposition \ref{prop:ch4:ExtendableSolutions} (page \pageref{prop:ch4:ExtendableSolutions}), if $(d^o,\bar k^o)\in\Omega_0$ it holds that
\[m\delta=\#\left({\mathcal A}\cap{\Psi_3^P}(d^o,\bar k^o)\right).\] Recall that $m$ is the tracing index of $P$, and $\delta$ is the total degree w.r.t.\ $\bar x$ of the generic offset equation. Moreover, ${\mathcal A}$ was introduced in Equation \ref{def:ch4:Set_A_OfExtensibleSolutions} (page \pageref{def:ch4:Set_A_OfExtensibleSolutions}), and ${\Psi_3^P}(d^o,\bar k^o)$ was introduced on page \pageref{def:ch4:SolutionSet3}, as the solution set of System \ref{sys:ch4:AuxiliaryCurvesSystem} (page \pageref{sys:ch4:AuxiliaryCurvesSystem}). In this section, we will derive a formula for the total degree $\delta$, using the tools in Section \ref{sec:ch1:IntersectionCurvesResultants} (page \pageref{sec:ch1:IntersectionCurvesResultants}) to analyze the intersection ${\mathcal A}\cap{\Psi_3^P}(d^o,\bar k^o).$
\noindent In order to do this: \begin{itemize}
\item in Subsection \ref{subsec:ProjectivizationParametrizationSurface} we will consider the projective closure of the auxiliary curves introduced in the preceding section. This, in turn, requires as a first step the projectivization of the parametrization $P$. At the end of the subsection we introduce the Projective Auxiliary System \ref{sys:ch4:AuxiliaryCurvesSystem-ProjectiveAndPrimitive} (page \pageref{sys:ch4:AuxiliaryCurvesSystem-ProjectiveAndPrimitive}), which will play a key r\^ole in the degree formula.
\item Subsection \ref{subsec:ch4:InvariantSolutionsOfS5} (page \pageref{subsec:ch4:InvariantSolutionsOfS5}) is devoted to the study of the invariant solutions of the Projective Auxiliary System, connecting them with the corresponding affine notions in Section \ref{sec:ch4:AuxiliaryCurvesForRationalSurfaces}. A crucial step in our strategy concerns the multiplicity of intersection of the auxiliary curves at their non-invariant points of intersection.
\item In Subsection \ref{subsec:ch4:MultiplicityOfIntersectionAtNon-FakePoints} (page \pageref{subsec:ch4:MultiplicityOfIntersectionAtNon-FakePoints}) we will prove (in Proposition \ref{prop:ch4:MultiplicityAtNonFakePoints}, page \pageref{prop:ch4:MultiplicityAtNonFakePoints}) that the value of that multiplicity of intersection has the required property for the use of generalized resultants (according to Lemma \ref{lem:ch1:GeneralizedResultants}, page \pageref{lem:ch1:GeneralizedResultants}).
\item After this is done, everything is ready for the proof of the degree formula, which is the topic of Subsection \ref{subsec:ch4:DegreeFormula} (page \pageref{subsec:ch4:DegreeFormula}). The formula appears in Theorem \ref{thm:ch4:DegreeFormula} (page \pageref{thm:ch4:DegreeFormula}). \end{itemize}
\subsection{Projectivization of the parametrization and auxiliary curves} \label{subsec:ProjectivizationParametrizationSurface}
Let ${P}$ be the parametrization of $\Sigma$, introduced in Equation (\ref{eq:ch4:SurfaceAffineParametrization}). If we homogenize the components of $P$ w.r.t. a new variable $t_0$, multiplying both the numerators and denominators if necessary by a suitable power of $t_0$, we arrive at an expression of the form: \begin{equation}\label{eq:ch4:SurfaceProjectiveParametrization} P_h(\bar t_h)= \left( \dfrac{X(\bar t_h)}{W(\bar t_h)}, \dfrac{Y(\bar t_h)}{W(\bar t_h)}, \dfrac{Z(\bar t_h)}{W(\bar t_h)} \right) \end{equation} where $\bar t_h=(t_0:t_1:t_2)$, and $X, Y, Z, W\in\C[\bar t_h]$ are homogeneous polynomials of the same degree $d_{P}$, for which $\gcd(X,Y,Z,W)=1$ holds. This $P_h$ will be called the {\sf projectivization} of ${P}$.
\begin{Remark} Note that those projective values of $\bar t_h$ of the form $(0:a:b)$ correspond to points at infinity in the parameter plane. \end{Remark}
\noindent In Section \ref{subsec:ch4:SurfaceParametrizationsAndtheirAssociatedNormalVector} (page \pageref{subsec:ch4:SurfaceParametrizationsAndtheirAssociatedNormalVector}) we defined $\bar n=(n_1,n_2,n_3)$, the associated normal vector to $P$. A similar construction, applied to $P_h$, leads to a normal vector $N=(N_1,N_2,N_3)$, where the $N_i$ are homogeneous polynomials in $\bar t_h$ of the same degree, such that $\gcd(N_1,N_2,N_3)=1$. This vector $N$ will be called the {\sf associated homogeneous normal vector} to $P_h$. \noindent The homogeneous polynomial $H$ defined by \[H(\bar t_h)=(N_1(\bar t_h))^2+(N_2(\bar t_h))^2+(N_3(\bar t_h))^2\] is the {\sf parametric projective normal-hodograph\index{hodograph, projective parametric}\index{normal-hodograph, projective parametric}} of the parametrization $P_h$.
\begin{Remark}\label{rem:ch4:OneComponentof_N_isNotDivisibleByt0} The polynomials $N_i$ are, up to multiplication by a power of $t_0$, the homogenization of the components of $\bar n$ w.r.t. $t_0$. However, since $\gcd(N_1,N_2,N_3)=1$, at least one of the components $N_i$ ($i=1,2,3$) is not divisible by $t_0$. Besides, note that if two components $N_i, N_j$, with $i\neq j$, are divisible by $t_0$, then $H$ is not. \end{Remark}
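\noindent For instance (an illustrative computation, not needed in the sequel), the stereographic parametrization of the unit sphere \[ P(\bar t)=\left(\dfrac{2t_1}{1+t_1^2+t_2^2},\dfrac{2t_2}{1+t_1^2+t_2^2},\dfrac{t_1^2+t_2^2-1}{1+t_1^2+t_2^2}\right) \] has projectivization \[ P_h(\bar t_h)=\left(\dfrac{2t_0t_1}{W(\bar t_h)},\dfrac{2t_0t_2}{W(\bar t_h)},\dfrac{t_1^2+t_2^2-t_0^2}{W(\bar t_h)}\right), \qquad W(\bar t_h)=t_0^2+t_1^2+t_2^2, \] with $d_{P}=2$, and its associated homogeneous normal vector is, up to a nonzero constant factor, $N=(2t_0t_1,\,2t_0t_2,\,t_1^2+t_2^2-t_0^2)$, which satisfies $\gcd(N_1,N_2,N_3)=1$. Here $N_1$ and $N_2$ are divisible by $t_0$ and, as predicted by Remark \ref{rem:ch4:OneComponentof_N_isNotDivisibleByt0}, \[ H=4t_0^2(t_1^2+t_2^2)+(t_1^2+t_2^2-t_0^2)^2=(t_0^2+t_1^2+t_2^2)^2 \] is not divisible by $t_0$.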
\begin{Lemma}\label{lem:SurfaceParametrizationProperties2} \begin{enumerate}
\item[]
\item If $W$ does not depend on $t_0$, then at least one of the polynomials $X, Y, Z$ must depend on $t_0$.
\item If $W$ does not depend on $t_0$, and exactly one of the polynomials $X,Y,Z$ depends on $t_0$, then the surface is a cylinder, with its axis parallel to the coordinate axis of the component whose numerator depends on $t_0$. \end{enumerate} \end{Lemma}
\noindent{\em Proof}.
\begin{enumerate}
\item Otherwise, the rank of the jacobian matrix of $P$ would be less than two. To see this, let us suppose that $X,Y,Z,W$ depend only on $t_1,t_2$.
Let $\partial_i P_h$ be the vector obtained as the partial derivative of $P_h$ w.r.t. $t_i$, that is:
\[
\partial_i P_h=\left(\frac{X_i W-XW_i}{W^2},\frac{Y_iW -Y W_i}{W^2},\frac{Z_iW-Z W_i}{W^2}\right)
\]
where $X_i,Y_i,Z_i,W_i$ denote the partial derivatives of $X,Y,Z,W$ w.r.t. $t_i$. Using Euler's formula, and taking into account that the polynomials $X,Y,Z,W$ have the same degree $d_{P}$, one has that $t_1 \partial_1 P_h=-t_2 \partial_2 P_h$. Substituting $t_0=1$, we see that the rank of the jacobian of $P$ would be less than two. \item Assume w.l.o.g. that $X,Y$ do not depend on $t_0$, but $Z$ does. The rational map \[ \phi(\bar t)=\left(\dfrac{X(\bar t)}{W(\bar t)},\dfrac{Y(\bar t)}{W(\bar t)}\right) \] has rank one, because $X, Y, W$ are homogeneous polynomials in $\bar t$ of the same degree. Thus, $\phi$ parametrizes a curve ${\mathcal C}$ in the $(y_1,y_2)$-plane. Let $\mbox{Cyl}({\mathcal C})$ be the cylinder over ${\mathcal C}$ with axis parallel to the $y_3$-axis. The points of the form $(\phi(\bar t^o),y_3^o)$, with $W(\bar t^o)\neq 0$, are dense in $\mbox{Cyl}({\mathcal C})$. Given one of these points, let $t^o_0$ be any solution of the equation (in $t_0$): \[Z(t_0,\bar t^o)=y^o_3W(\bar t^o).\] Then we have \[P_h(t^o_0,\bar t^o)=(\phi(\bar t^o),y_3^o),\] and so $P_h(\P^2)$ is dense in $\mbox{Cyl}({\mathcal C})$. \hspace{8.5cm}$\Box$ \end{enumerate}
\noindent Now we are ready to introduce the projective auxiliary polynomials. We consider the following system: \begin{equation}\label{sys:ch4:AuxiliaryCurvesSystem-ProjectiveNoPrimitive} \hspace{-5mm}\begin{minipage}{14cm} \[\hspace{3mm} {{\mathfrak S}^{P_h}_4}(d,\bar k)\equiv\begin{cases} S_0(\bar k, \bar t_h):=k_1(YN_3-ZN_2)-k_2(XN_3-ZN_1) +k_3(XN_2-YN_1) \\ S_1(d,\bar k, \bar t_h):=H(\bar t_h)(k_2Z-k_3Y)^2-d^2W(\bar t_h)^2(k_2N_3-k_3N_2)^2 \\ S_2(d,\bar k, \bar t_h):=H(\bar t_h)(k_1Z-k_3X)^2-d^2W(\bar t_h)^2(k_1N_3-k_3N_1)^2 \\ S_3(d,\bar k, \bar t_h):=H(\bar t_h)(k_1Y-k_2X)^2-d^2W(\bar t_h)^2(k_1N_2-k_2N_1)^2 \end{cases} \] \end{minipage} \end{equation}
\noindent As usual, for $(d^o,\bar k^o)\in\C^4$, we {\sf denote} by ${\Psi^{P_h}_4}(d^o,\bar k^o)$ the set of projective solutions of ${{\mathfrak S}^{P_h}_4}(d^o,\bar k^o)$. Our next goal is the analysis of the relation between ${\Psi^{P_h}_4}(d^o,\bar k^o)$ and $\Psi^P_3(d^o,\bar k^o)$ (the set of solutions of ${\mathfrak S_3^P}(d^o,\bar k^o)$, see Subsection \ref{subsec:ch4:EliminationAndAuxiliaryPolynomials}, page \pageref{subsec:ch4:EliminationAndAuxiliaryPolynomials}). In particular, in order to obtain the degree formula, we will characterize those points in ${\Psi^{P_h}_4}(d^o,\bar k^o)$ that correspond to the points in ${\mathcal A}\cap{\Psi_3^P}(d^o,\bar k^o)$. In Proposition \ref{prop:ch4:FakePointsAndInvariantSolutionsCoincide} (page \pageref{prop:ch4:FakePointsAndInvariantSolutionsCoincide}) we have seen that the invariant solutions of ${\Psi_3^P}(d,\bar k)$ correspond to fake points. Thus, as a first step, we will characterize certain invariant solutions of ${{\mathfrak S}^{P_h}_4}(d,\bar k)$.
\begin{Lemma}\label{lem:ch4:ContentProjectiveAuxiliaryCurves} Let $S=c_1S_1+c_2S_2+c_3S_3$. Then: \[\mathop{\mathrm{Con}}\nolimits_{(d,\bar k)}(S(\bar c, d,\bar k,\bar t_h))=\gcd(H,W^2).\] \end{Lemma} \begin{proof} Since $S=c_1S_1+c_2S_2+c_3S_3$, one has: \[\mathop{\mathrm{Con}}\nolimits_{(d,\bar k)}(S)= \gcd\left(\mathop{\mathrm{Con}}\nolimits_{(d,\bar k)}(S_1),\mathop{\mathrm{Con}}\nolimits_{(d,\bar k)}(S_2),\mathop{\mathrm{Con}}\nolimits_{(d,\bar k)}(S_3)\right).\] Now, considering $S_i$ for $i=1,2,3$ as polynomials in $\C[\bar t_h][d,\bar k]$, one has: \[\mathop{\mathrm{Con}}\nolimits_{(d,\bar k)}(S_1)=\gcd(HZ^2,HZY,HY^2,W^2N_2^2,W^2N_2N_3,W^2N_3^2).\] That is, \[\mathop{\mathrm{Con}}\nolimits_{(d,\bar k)}(S_1)=\gcd(H\gcd(Y,Z)^2,W^2\gcd(N_2,N_3)^2).\] Similarly, \[\mathop{\mathrm{Con}}\nolimits_{(d,\bar k)}(S_2)=\gcd(H\gcd(X,Z)^2,W^2\gcd(N_1,N_3)^2),\] and \[\mathop{\mathrm{Con}}\nolimits_{(d,\bar k)}(S_3)=\gcd(H\gcd(X,Y)^2,W^2\gcd(N_1,N_2)^2).\] Taking into account that $\gcd(N_1,N_2,N_3)=1$ and $\gcd(X,Y,Z,W)=1$, one has \[ \mathop{\mathrm{Con}}\nolimits_{(d,\bar k)}(S)= \gcd(H\gcd(X,Y,Z)^2,W^2)=\gcd(H,W^2). \] \end{proof}
\noindent In order to use the above results, and to state the degree formula, we need to introduce some additional notation. We {\sf denote} by: \begin{equation}\label{eq:ch4:ContentProjectiveAuxiliaryCurves} Q_0(\bar t_h)=\mathop{\mathrm{Con}}\nolimits_{\bar k}(S_0(\bar k,\bar t_h))\quad\mbox{ and }\quad Q(\bar t_h)=\mathop{\mathrm{Con}}\nolimits_{(d,\bar k)}(S(\bar c, d,\bar k,\bar t_h)). \end{equation} Observe that, by Lemma \ref{lem:ch4:ContentProjectiveAuxiliaryCurves}, $Q$ does not depend on $\bar c$, a fact that is reflected in our notation. Furthermore, note that: \[Q_0(\bar t_h)=\gcd(YN_3-ZN_2,XN_3-ZN_1,XN_2-YN_1)\] and \[Q(\bar t_h)=\gcd(H,W^2).\] We also denote by: \begin{equation}\label{eq:ch4:PolynomialsTildeH_andTildeW} \tilde H(\bar t_h)=\dfrac{H(\bar t_h)}{Q(\bar t_h)},\quad\quad \tilde W(\bar t_h)=\dfrac{W^2(\bar t_h)}{Q(\bar t_h)}, \end{equation} and \begin{equation}\label{eq:ch4:PolynomialsU} \begin{cases} U_1(\bar t_h)=\dfrac{(YN_3-ZN_2)(\bar t_h)}{Q_0(\bar t_h)},\\[3mm] U_2(\bar t_h)=\dfrac{(ZN_1-XN_3)(\bar t_h)}{Q_0(\bar t_h)},\\[3mm] U_3(\bar t_h)=\dfrac{(XN_2-YN_1)(\bar t_h)}{Q_0(\bar t_h)}. \end{cases} \end{equation} Thus, one has: \[S_0(\bar k,\bar t_h)=Q_0(\bar t_h)(k_1U_1(\bar t_h)+k_2U_2(\bar t_h)+k_3U_3(\bar t_h)).\] We denote as well: \begin{equation}\label{eq:ch4:Polynomials_MhGh} \begin{cases} M_{h,1}(\bar k,\bar t_h)=k_2Z(\bar t_h)-k_3Y(\bar t_h),&G_{h,1}(\bar k,\bar t_h)=k_2N_3(\bar t_h)-k_3N_2(\bar t_h),\\[3mm] M_{h,2}(\bar k,\bar t_h)=k_3X(\bar t_h)-k_1Z(\bar t_h),&G_{h,2}(\bar k,\bar t_h)=k_3N_1(\bar t_h)-k_1N_3(\bar t_h),\\[3mm] M_{h,3}(\bar k,\bar t_h)=k_1Y(\bar t_h)-k_2X(\bar t_h),&G_{h,3}(\bar k,\bar t_h)=k_1N_2(\bar t_h)-k_2N_1(\bar t_h). 
\end{cases} \end{equation} and so, for $i=1,2,3$, \[S_i(d,\bar k,\bar t_h)= Q(\bar t_h)\left({\tilde H}(\bar t_h)M_{h,i}^2(\bar k,\bar t_h)-d^2{\tilde W}(\bar t_h)G_{h,i}^2(\bar k,\bar t_h)\right).\] We denote: \begin{equation}\label{eq:ch4:ReducedAuxiliaryPolynomials} T_0(\bar k,\bar t_h)=\dfrac{S_0(\bar k,\bar t_h)}{Q_0(\bar t_h)} \end{equation} and, for $i=1,2,3$, \begin{equation}\label{eq:ch4:ReducedAuxiliaryPolynomials_2} T_i(d,\bar k,\bar t_h)=\dfrac{S_i(d,\bar k,\bar t_h)}{Q(\bar t_h)}. \end{equation} Finally, we denote: \begin{equation}\label{eq:ch4:ReducedAuxiliaryPolynomials_3} T(\bar c,d,\bar k,\bar t_h)=\dfrac{S(\bar c,d,\bar k,\bar t_h)}{Q(\bar t_h)}. \end{equation} Note that: \[T(\bar c,d,\bar k,\bar t_h)=c_1T_1(d,\bar k,\bar t_h)+c_2T_2(d,\bar k,\bar t_h)+c_3T_3(d,\bar k,\bar t_h).\]
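\noindent To illustrate this notation (again with a concrete surface that is not used in the general argument), take the paraboloid $P(\bar t)=(t_1,t_2,t_1^2+t_2^2)$. Its projectivization has $X=t_0t_1$, $Y=t_0t_2$, $Z=t_1^2+t_2^2$, $W=t_0^2$, and, up to a nonzero constant factor, $N=(-2t_1,-2t_2,t_0)$, so that $H=4t_1^2+4t_2^2+t_0^2$. Then \[ Q=\gcd(H,W^2)=1,\qquad \tilde H=H,\qquad \tilde W=t_0^4, \] and, since \[ (YN_3-ZN_2,\,ZN_1-XN_3,\,XN_2-YN_1)=(t_0^2+2t_1^2+2t_2^2)\,(t_2,-t_1,0), \] one has $Q_0=t_0^2+2t_1^2+2t_2^2$, $(U_1,U_2,U_3)=(t_2,-t_1,0)$, and $T_0(\bar k,\bar t_h)=k_1t_2-k_2t_1$, reflecting the rotational symmetry of the surface.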
\noindent With this notation we can introduce the system of equations that will play the central role in the degree formula: \begin{equation}\label{sys:ch4:AuxiliaryCurvesSystem-ProjectiveAndPrimitive} \hspace{-5mm}\begin{minipage}{14cm} \[\hspace{3mm} {{\mathfrak S}^{P_h}_5}(d,\bar k)\equiv\begin{cases} {T_0}(\bar k, \bar t_h)=k_1U_1(\bar t_h)+k_2U_2(\bar t_h)+k_3U_3(\bar t_h)=0 \\[3mm] {T_i}(d,\bar k, \bar t_h)={\tilde H}(\bar t_h)M_{h,i}^2(\bar k,\bar t_h)-d^2{\tilde W}(\bar t_h)G_{h,i}^2(\bar k,\bar t_h), \\ \mbox{for }i=1,2,3. \end{cases} \] \end{minipage} \end{equation} We will refer to this as the {\sf Projective Auxiliary System}.
\subsection{Invariant solutions of the projective auxiliary system} \label{subsec:ch4:InvariantSolutionsOfS5}
\noindent In passing from ${{\mathfrak S}^P_3}(d,\bar k)$ to ${{\mathfrak S}^{P_h}_4}(d,\bar k)$, and then to ${{\mathfrak S}^{P_h}_5}(d,\bar k)$, we have introduced additional solutions at infinity, in the space of parameters (that is, with $t_0=0$). The following results will show that, in a certain open subset of values of $(d,\bar k)$, these solutions at infinity are invariant w.r.t. $(d,\bar k)$. We start with some technical lemmas.
\begin{Lemma}\label{lem:ch4:U_iAndS_iNotIdenticallyZero} There is always $i^o\in\{1,2,3\}$ such that $U_{i^o}(0,t_1,t_2)$ and $T_{i^o}(d,\bar k,0,t_1,t_2)$ are both not identically zero. \end{Lemma} \begin{proof} First, let us prove that there are always $i,j\in\{1,2,3\}$, $i\neq j$, such that $t_0$ divides neither $T_i$ nor $T_j$. Suppose, on the contrary, that, for example, $T_1(d,\bar k,0,t_1,t_2)\equiv 0$ and $T_2(d,\bar k,0,t_1,t_2)\equiv 0$. Considering $T_1$ and $T_2$ as polynomials in $\C[\bar t][d,\bar k]$, if $t_0$ divides $T_1$ and $T_2$ one concludes that $t_0$ must divide \[{\tilde H}X, {\tilde H}Y, {\tilde H}Z, {\tilde W}N_1, {\tilde W}N_2\mbox{ and }{\tilde W}N_3.\] If one assumes that $t_0$ divides ${\tilde W}$, then it does not divide ${\tilde H}$, because $\gcd({\tilde H},{\tilde W})=1$. Thus it divides $X$, $Y$ and $Z$. But this is again a contradiction: since ${\tilde W}$ divides $W^2$, $t_0$ would also divide $W$, against $\gcd(X,Y,Z,W)=1$. Thus, $t_0$ does not divide ${\tilde W}$. Then it must divide $N_1, N_2, N_3$. This is also a contradiction, since $\gcd(N_1,N_2,N_3)=1$. Therefore we can assume w.l.o.g. that $t_0$ divides neither $T_1$ nor $T_2$. To finish the proof we need to show that, if $t_0$ divides $T_3$, then it does not divide at least one of $U_1$ and $U_2$. The hypothesis that $t_0$ divides $T_3$ implies that it divides \[{\tilde H}X, {\tilde H}Y, {\tilde W}N_1\mbox{ and }{\tilde W}N_2.\] If $t_0$ divides ${\tilde W}$, again, it must divide $X$ and $Y$. Thus it does not divide $Z$ (since $t_0$ divides $W$, and $\gcd(X,Y,Z,W)=1$). Now, observe that $XU_1+YU_2+ZU_3=0$. Therefore, one concludes that $t_0$ divides $U_3$. Thus, $t_0$ does not divide at least one of $U_1$ and $U_2$, since $\gcd(U_1,U_2,U_3)=1$. If $t_0$ does not divide ${\tilde W}$, then it divides $N_1$ and $N_2$. Observing that $N_1U_1+N_2U_2+N_3U_3=0$, we again conclude that $t_0$ does not divide at least one of $U_1$ and $U_2$, since $\gcd(U_1,U_2,U_3)=1$. \end{proof}
\begin{Lemma}\label{lem:ch4:FactoringT0} Let $i^o\in\{1,2,3\}$ be such that $T_{i^o}(d,\bar k,0,t_1,t_2)$ and $U_{i^o}(0,t_1,t_2)$ are both not identically zero (see Lemma \ref{lem:ch4:U_iAndS_iNotIdenticallyZero}). Then \[\gcd(T_0(\bar k,0,t_1,t_2),T_{i^o}(d,\bar k,0,t_1,t_2))\] does not depend on $\bar k$ (it certainly does not depend on $d$). \end{Lemma} \begin{proof} The claim follows by observing that $T_0(\bar k,0,t_1,t_2)$ depends linearly on $k_{i^o}$, and $T_{i^o}(d,\bar k,0,t_1,t_2)$ does not. \end{proof} \noindent In order to describe what we mean when we say that a solution is invariant w.r.t. $(d,\bar k)$, we make the following definition (recall Definition \ref{def:ch4:InvariantSolutionsSystemS3P}, page \pageref{def:ch4:InvariantSolutionsSystemS3P}):
\begin{Definition}\label{def:ch4:InvariantSolutionsSystemS5P} Let $\Omega_1$ be as in Lemma \ref{lem:ch4:ExcludingAdditional_dk_AfterTheoreticalFoundation} (page \pageref{lem:ch4:ExcludingAdditional_dk_AfterTheoreticalFoundation}), and let $\Omega$ be a non-empty open subset of $\Omega_1$. {\sf The set of invariant solutions of ${{\mathfrak S}^{P_h}_5}(d,\bar k)$ w.r.t.\ $\Omega$} is defined as the set: \[ \cI^{P_h}_5(\Omega)=\bigcap_{(d^o,\bar k^o)\in\Omega}{\Psi^{P_h}_5}(d^o,\bar k^o) \] \end{Definition}
\begin{Remark}\label{rem:ch4:DescriptionInvariantSolutionsS5} \begin{enumerate}
\item[]
\item Considering $T_i$ (for $i=0,\ldots,3$) as polynomials in $\C[\bar t_h][d,\bar k]$, it is easy to see that $\cI^{P_h}_5(\Omega)$ is the set of solutions of:
\begin{equation}\label{eq:ch4:EquationsInvariantSolutionsS5} \begin{cases} U_1(\bar t_h)=U_2(\bar t_h)=U_3(\bar t_h)=0,\\ (\tilde H\cdot X)(\bar t_h)=(\tilde H\cdot Y)(\bar t_h)=(\tilde H\cdot Z)(\bar t_h)=0,\\ (\tilde W\cdot N_1)(\bar t_h)=(\tilde W\cdot N_2)(\bar t_h)=(\tilde W\cdot N_3)(\bar t_h)=0. \end{cases} \end{equation}
\item In particular, since $\gcd(U_1,U_2,U_3)=1$, the set $\cI^{P_h}_5(\Omega)$ is always a finite set. \end{enumerate} \end{Remark} \noindent The following proposition shows that, restricting the values of $(d,\bar k)$ to a certain open set, we can ensure that all the solutions at infinity of ${{\mathfrak S}^{P_h}_5}(d,\bar k)$ are invariant w.r.t. the particular choice of $(d,\bar k)$ in that open set.
\begin{Proposition}\label{prop:ch4:SystemS5HasOnlyInvariantSolutionsAtInfinity} There exists a non-empty open subset $\Omega_2\subset\Omega_1$ (with $\Omega_1$ as in Lemma \ref{lem:ch4:ExcludingAdditional_dk_AfterTheoreticalFoundation}, page \pageref{lem:ch4:ExcludingAdditional_dk_AfterTheoreticalFoundation}), such that if $(d^o,\bar k^o)\in\Omega_2$, and $\bar t^o_h=(0:t^o_1:t^o_2)\in\Psi^{P_h}_5(d^o,\bar k^o)$, then $\bar t^o_h\in\cI^{P_h}_5(\Omega_2)$. \end{Proposition} \begin{proof} We know that $T_0(\bar k,0,\bar t)\not\equiv 0$. Suppose, in the first place, that $T_0(\bar k,0,\bar t)$ depends only on $\bar k$ and one of the variables $t_1, t_2$. Say, e.g., $T_0(\bar k,0,\bar t)=T_0^*(\bar k)t_1^{p}$ for some $p\in\N$. This implies that, for any given $(d^o,\bar k^o)$ such that $T_0^*(\bar k^o)\neq 0$, $(0:0:1)$ is the only possible point of $\Psi^{P_h}_5(d^o,\bar k^o)$ with $t_0=0$. Obviously, if \[T_i(d,\bar k,0,0,1)\equiv 0\mbox{ for }i=1,2,3,\] then one may take $\Omega_2=\Omega_1\cap\{(d^o,\bar k^o)\in\C^4 / T_0^*(\bar k^o)\neq 0\}$, and the result is proved. On the other hand, if not all $T_i(d,\bar k,0,0,1)\equiv 0$, say w.l.o.g.\ that \[T_1(d,\bar k,0,0,1)\not\equiv 0,\] then we may take $\Omega_2=\Omega_1\cap\{(d^o,\bar k^o)/T_0^*(\bar k^o)T_1(d^o,\bar k^o,0,0,1)\neq 0\}$, and the result is proved.
\noindent Thus, w.l.o.g. we can assume that $T_0(\bar k,0,\bar t)$ depends on both $t_1$ and $t_2$. Let $i^o\in\{1,2,3\}$ be such that $U_{i^o}(0,\bar t)\not\equiv 0$ and $T_{i^o}(d,\bar k,0,\bar t)\not\equiv 0$. By Lemma \ref{lem:ch4:U_iAndS_iNotIdenticallyZero} (page \pageref{lem:ch4:U_iAndS_iNotIdenticallyZero}) we know that such an $i^o$ exists. Let us consider (see Lemma \ref{lem:ch4:FactoringT0}): \[T^*_{i^o}(\bar t)=\gcd(T_0(\bar k,0,\bar t),T_{i^o}(d,\bar k,0,\bar t))\in\C[\bar t].\] Note that $T^*_{i^o}$ is homogeneous in $\bar t$, and so, if it is not constant, it factors as: \[T^*_{i^o}(\bar t)=\gamma\prod_{j=1}^p(\beta_jt_1-\alpha_jt_2)\] for some $\gamma\in\C^{\times}$, and $(\alpha_j,\beta_j)\in\C^2, j=1,\ldots,p$. For each point $(0:\alpha_j:\beta_j)$ we can repeat the construction that we did for $(0:0:1)$. Thus, one obtains a non-empty open set $\Omega_2^1\subset\Omega_1$ such that, if $(d^o,\bar k^o)\in\Omega_2^1$, and $T^*_{i^o}(\alpha_j,\beta_j)=0$, then either $(0:\alpha_j:\beta_j)\not\in\Psi^{P_h}_5(d^o,\bar k^o)$, or $(0:\alpha_j:\beta_j)\in\cI^{P_h}_5(\Omega_2^1)$.
\noindent Let \[ T'_0(\bar k,\bar t)=\dfrac{T_0(\bar k,0,\bar t)}{T^*_{i^o}(\bar t)}, \quad\mbox{ and }\quad T'_{i^o}(d,\bar k,\bar t)=\dfrac{T_{i^o}(d,\bar k,0,\bar t)}{T^*_{i^o}(\bar t)}. \] Note that both $T'_0(\bar k,\bar t)$ and $T'_{i^o}(d,\bar k,\bar t)$ are homogeneous in $\bar t$, and by construction they have a trivial gcd. If we define: \[
\Gamma(d,\bar k,t_2)=
\begin{cases}
{\rm Res}_{t_1}(T'_0(\bar k,\bar t),T'_{i^o}(d,\bar k,\bar t))&\mbox{ if }{\rm deg}_{t_1}(T'_0(\bar k,\bar t))>0,\\[3mm]
T'_0(\bar k,\bar t)&\mbox{ otherwise.}
\end{cases} \] Then $\Gamma$ is not identically zero, and since $T'_0$ and $T'_{i^o}$ are both homogeneous in $\bar t$, we have a factorization: \[\Gamma(d,\bar k,t_2)=t_2^q\Gamma^*(d,\bar k)\] for some $q\in\N$. Note also that, by construction, since $\gcd(T'_0,T'_{i^o})=1$, $t_2$ cannot divide both $T'_0$ and $T'_{i^o}$. In particular, since these polynomials are homogeneous in $\bar t$, one concludes that $\bar t^o=(1,0)$ is not a solution of \[T'_0(\bar k,\bar t)=T'_{i^o}(d,\bar k,\bar t)=0.\] We define \[\Omega_2=\Omega^1_2\cap\{(d^o,\bar k^o)/\Gamma^*(d^o,\bar k^o)\neq 0\}.\] If $(d^o,\bar k^o)\in\Omega_2$, and $\bar t^o_h=(0:t^o_1:t^o_2)\in\Psi^{P_h}_5(d^o,\bar k^o)$, then, since $\Gamma^*(d^o,\bar k^o)\neq 0$, either $T'_0(\bar k^o,\bar t^o)\neq 0$ or $T'_{i^o}(d^o,\bar k^o,\bar t^o)\neq 0$. In any case, one has $T^*_{i^o}(\bar t^o)=0$ (that is, $(0:t^o_1:t^o_2)=(0:\alpha_j:\beta_j)$ for some $j=1,\ldots,p$). The construction of $\Omega^1_2$ implies that $(0:t^o_1:t^o_2)\in\cI^{P_h}_5(\Omega_2)$. \end{proof} \noindent Let ${\mathcal A}_h$ {\sf denote}\label{def:ch4:Set_Ah_OfProjectiveExtensibleSolutions} the set of values $\bar t^o_h\in\P^2$ such that (compare with the definition of the set ${\mathcal A}$ in Equation \ref{def:ch4:Set_A_OfExtensibleSolutions}, page \pageref{def:ch4:Set_A_OfExtensibleSolutions}): \begin{equation} \left\{\begin{array}{c} t^o_0\,W(\bar t^o_h)H(\bar t^o_h)\bigl(Y(\bar t^o_h)N_3(\bar t^o_h)-Z(\bar t^o_h)N_2(\bar t^o_h)\bigr)\neq 0\\ \mbox{ or }\\ t^o_0\,W(\bar t^o_h)H(\bar t^o_h)\bigl(Z(\bar t^o_h)N_1(\bar t^o_h)-X(\bar t^o_h)N_3(\bar t^o_h)\bigr)\neq 0\\ \mbox{ or }\\ t^o_0\,W(\bar t^o_h)H(\bar t^o_h)\bigl(X(\bar t^o_h)N_2(\bar t^o_h)-Y(\bar t^o_h)N_1(\bar t^o_h)\bigr)\neq 0 \end{array}\right.
\end{equation} Equivalently, ${\mathcal A}_h$ consists of those points $\bar t^o_h\in\P^2$ such that: \begin{equation}\label{eq:ch4:Set_Ah_OfProjectiveExtensibleSolutions} t^o_0 W(\bar t^o_h)H(\bar t^o_h)\neq 0\mbox{ and }(X(\bar t^o_h),Y(\bar t^o_h),Z(\bar t^o_h))\wedge\bar N(\bar t^o_h)\neq\bar 0. \end{equation} We will see that the non-invariant solutions of $\Psi^{P_h}_5(d,\bar k)$ are points in ${\mathcal A}_h$. Note that we are explicitly asking these points to be affine (recall Proposition \ref{prop:ch4:SystemS5HasOnlyInvariantSolutionsAtInfinity}, page \pageref{prop:ch4:SystemS5HasOnlyInvariantSolutionsAtInfinity}).
\noindent With this notation we are ready to state the main theorem about System ${{\mathfrak S}^{P_h}_5}(d,\bar k)$.
\begin{Theorem}\label{thm:ch4:RelationBetweenSystemS5AndSystemS4} Let $\Omega_2$ be as in Proposition \ref{prop:ch4:SystemS5HasOnlyInvariantSolutionsAtInfinity}. If $(d^o,\bar k^o)\in\Omega_2$, \[\bar t^o=(t^o_1,t^o_2)\in{\mathcal A}\cap\Psi_3^P(d^o,\bar k^o)\Leftrightarrow \bar t^o_h=(1:t^o_1:t^o_2)\in{\mathcal A}_h\cap\Psi_5^{P_h}(d^o,\bar k^o)\] \end{Theorem} \noindent Recall that every point in ${\mathcal A}_h$ is affine, see Equation \ref{def:ch4:Set_A_OfExtensibleSolutions} (page \pageref{def:ch4:Set_A_OfExtensibleSolutions}), for the Definition of ${\mathcal A}$, and page \pageref{lem:ch4:AuxiliaryPolynomialsBelongToEliminationIdeal} for the definition of $\Psi_3^P(d^o,\bar k^o)$. \begin{proof} Let us prove that $\Rightarrow$ holds. If $\bar t^o\in{\mathcal A}\cap\Psi_3^P(d^o,\bar k^o)$, then, e.g., \[P_0(\bar t^o)h(\bar t^o)(P_2n_3-P_3n_2)(\bar t^o)\neq 0.\] Therefore, \[W(\bar t^o_h)H(\bar t^o_h)(YN_3-ZN_2)(\bar t^o_h)\neq 0,\] and so $\bar t^o_h\in{\mathcal A}_h$. Besides, this last inequality implies that $Q_0(\bar t^o_h)Q(\bar t^o_h)\neq 0$. Since $\bar t^o\in{\mathcal A}\cap\Psi_3^P(d^o,\bar k^o)$, by Proposition \ref{prop:ch4:ExtendableSolutions}(b) (page \pageref{prop:ch4:ExtendableSolutions}), one has that $(1:t^o_1:t^o_2)\in\Psi_4^{P_h}(d^o,\bar k^o)$. From Equations \ref{eq:ch4:ReducedAuxiliaryPolynomials} and \ref{eq:ch4:ReducedAuxiliaryPolynomials_2} (page \pageref{eq:ch4:ReducedAuxiliaryPolynomials}), and from $Q_0(\bar t^o_h)Q(\bar t^o_h)\neq 0$, one concludes that $\bar t^o_h=(1:t^o_1:t^o_2)\in{\mathcal A}_h\cap\Psi_5^{P_h}(d^o,\bar k^o)$. Thus, $\Rightarrow$ is proved.\\ \noindent The proof of $\Leftarrow$ is similar, simply reversing the implications. \end{proof}
\begin{Remark}\label{rem:ch4:OffsetDegreeAsCardinal_Projective} Let $\Omega_2$ be as in Proposition \ref{prop:ch4:SystemS5HasOnlyInvariantSolutionsAtInfinity}, and let $(d^o,\bar k^o)\in\Omega_2$. Then, Theorem \ref{thm:ch4:RelationBetweenSystemS5AndSystemS4} implies that:
\[m\delta=\#\left({\mathcal A}\cap{\Psi_3^P}(d^o,\bar k^o)\right)=\#\left({\mathcal A}_h\cap{\Psi_5^{P_h}}(d^o,\bar k^o)\right).\] \end{Remark} \noindent Theorem \ref{thm:ch4:RelationBetweenSystemS5AndSystemS4} establishes the link between ${\mathcal A}\cap\Psi_3^P(d^o,\bar k^o)$ and ${\mathcal A}_h\cap{\Psi_5^{P_h}}(d^o,\bar k^o)$ for a fixed $(d^o,\bar k^o)\in\Omega_2$. As we said before, the non-invariant solutions of $\Psi^{P_h}_5(d,\bar k)$ should be the points in ${\mathcal A}_h$. As a first step, we have this result.
\begin{Proposition}\label{prop:ch4:NonInvariantSolutionsCanBeAvoided} Let $\Omega_2$ be as in Proposition \ref{prop:ch4:SystemS5HasOnlyInvariantSolutionsAtInfinity} (page \pageref{prop:ch4:SystemS5HasOnlyInvariantSolutionsAtInfinity}). If $\bar t^o_h\in{\mathcal A}_h$, then $\bar t^o_h\not\in\cI^{P_h}_5(\Omega_2)$. \end{Proposition} \begin{proof} Arguing by contradiction, let us suppose that $\bar t^o_h\in\cI^{P_h}_5(\Omega_2)$; that is, for every $(d^o,\bar k^o)\in\Omega_2$, one has $\bar t^o_h\in\left({\mathcal A}_h\cap{\Psi_5^{P_h}}(d^o,\bar k^o)\right)$. Then (recall Equation \ref{def:ch4:Set_A_OfExtensibleSolutions}, page \pageref{def:ch4:Set_A_OfExtensibleSolutions}), $\bar t^o_h$ is of the form $(1:t^o_1:t^o_2)$ and, by Theorem \ref{thm:ch4:RelationBetweenSystemS5AndSystemS4}, \[\bar t^o=(t^o_1,t^o_2)\in\bigcap_{(d^o,\bar k^o)\in\Omega_2}{\Psi_3^P}(d^o,\bar k^o)=\cI^P_3(\Omega_2).\] Since $\Omega_2\subset\Omega_1$, Proposition \ref{prop:ch4:FakePointsAndInvariantSolutionsCoincide} (page \pageref{prop:ch4:FakePointsAndInvariantSolutionsCoincide}) applies, and one concludes that $(t^o_1,t^o_2)\in\cF$. But then, \[P_0(\bar t^o)h(\bar t^o)=0\mbox{ or }(P_1(\bar t^o),P_2(\bar t^o),P_3(\bar t^o))\wedge\bar n(\bar t^o)=\bar 0.\] This implies that: \[W(\bar t^o_h)H(\bar t^o_h)=0\mbox{ or }(X(\bar t^o_h),Y(\bar t^o_h),Z(\bar t^o_h))\wedge\bar N(\bar t^o_h)=\bar 0,\] contradicting $\bar t^o_h\in{\mathcal A}_h$. \end{proof}
\noindent As we said before, we will prove the converse of this proposition. That is, if $\bar t^o_h\in\Psi_5^{P_h}(d^o,\bar k^o)$ for some $(d^o,\bar k^o)\in\Omega_2$, but $\bar t^o_h\not\in\cI^{P_h}_5(\Omega_2)$, then we would like to conclude that $\bar t^o_h\in{\mathcal A}_h$. However, the open set $\Omega_2$ can be too large for this to hold. More precisely, the problem is caused by the solutions of $Q(\bar t_h)Q_0(\bar t_h)=0$ (when $Q$ and $Q_0$ are not both equal to $1$). These points do not belong to ${\mathcal A}_h$. However, since $\cI_5^{P_h}(\Omega_2)$ is finite (see Remark \ref{rem:ch4:DescriptionInvariantSolutionsS5}, page \pageref{rem:ch4:DescriptionInvariantSolutionsS5}), most of the solutions of $Q(\bar t_h)Q_0(\bar t_h)=0$ are not invariant. Therefore, we need to impose some more restrictions on the values of $(d,\bar k)$. Note, however, that we have already dealt with the points at infinity; thus, we need only consider the affine solutions of $Q(\bar t_h)Q_0(\bar t_h)=0$. We do the necessary technical work in the following lemma. First, we introduce some notation for the affine versions of some polynomials. We {\sf denote:} \[ \begin{cases} q(\bar t)=Q(1,t_1,t_2), q_0(\bar t)=Q_0(1,t_1,t_2), \tilde w(\bar t)=\tilde W(1,t_1,t_2), \tilde h(\bar t)=\tilde H(1,t_1,t_2),\\ u_i(\bar t)=U_i(1,t_1,t_2)\mbox{ for }i=1,2,3. 
\end{cases} \] and we consider a new auxiliary set of variables $\bar\rho=(\rho_1,\rho_2,\rho_3)$, in order to apply Rabinowitsch's trick where needed.\\ \noindent Let $\cG\subset\C\times\C^3\times\C^3\times\C^2$ be the set of solutions (in the variables $(d,\bar k,\bar\rho,\bar t)$) of this system of equations: \begin{equation}\label{eq:ch4:SystemToExcludeZerosOfQQ} \begin{cases} q(\bar t)q_0(\bar t)=0\\ \tilde h(\bar t)M_i^2(\bar k,\bar t)-d^2\tilde w(\bar t)G_i^2(\bar k,\bar t)=0&\mbox{ for }i=1,2,3.\\ k_1u_1(\bar t)+k_2u_2(\bar t)+k_3u_3(\bar t)=0\\ \rho_1 \tilde w(\bar t)\tilde h(\bar t)-1=0\\ \prod_{i=1}^3(\rho_2 u_i(\bar t)-1)=0\\ \prod_{i=1}^3(\rho_3 n_i(\bar t)-1)=0 \end{cases} \end{equation} and consider the projection $\pi_1(d,\bar k,\bar\rho,\bar t)=(d,\bar k)$.
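\noindent The auxiliary variables $\bar\rho$ play the usual role in Rabinowitsch's trick: an equation of the form $\rho f-1=0$ is solvable in $\rho$ if and only if $f\neq 0$. Thus, the last three equations of System \ref{eq:ch4:SystemToExcludeZerosOfQQ} encode, respectively, the conditions \[\tilde w(\bar t)\tilde h(\bar t)\neq 0,\qquad u_i(\bar t)\neq 0\mbox{ for some }i\in\{1,2,3\},\qquad n_i(\bar t)\neq 0\mbox{ for some }i\in\{1,2,3\}.\]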
\begin{Lemma} \label{lem:ch4:DimensionSetGlessThan4} $\cG$ is empty or ${\rm dim}(\pi_1(\cG))\leq 3$. \end{Lemma} \begin{proof} Suppose that $\cG\neq\emptyset$. Then $q(\bar t)q_0(\bar t)$ is not constant. We will use Lemma \ref{lem:ch1:FiberDimension} (page \pageref{lem:ch1:FiberDimension}) to prove that ${\rm dim}(\cG)\leq 3$. From this the result follows immediately. Consider the projection $\pi_2(d,\bar k,\bar\rho,\bar t)=\bar t$. Clearly, $\pi_2(\cG)$ is contained in the affine curve defined by $q(\bar t)q_0(\bar t)=0$. Thus, ${\rm dim}(\pi_2(\cG))\leq 1$. Let $\bar t^o\in\pi_2(\cG)$. First, let us suppose that for all $(d^o,\bar k^o,\bar\rho^o,\bar t^o)\in\pi_2^{-1}(\bar t^o)$, one has $\bar k^o=\bar 0$. Then, if $(d^o,\bar 0,\bar\rho^o,\bar t^o)\in\pi_2^{-1}(\bar t^o)$, $\rho_1^o, \rho_2^o$, and $\rho_3^o$ must be one of the finitely many solutions of the polynomial equations: \[\rho_1 \tilde w(\bar t^o)\tilde h(\bar t^o)-1=0,\quad \prod_{i=1}^3(\rho_2 u_i(\bar t^o)-1)=0, \mbox{ and }\prod_{i=1}^3(\rho_3 n_i(\bar t^o)-1)=0.\] The condition $\bar t^o\in\pi_2(\cG)$ implies that these equations can be solved. Note that, in this case, $(d^o,\bar 0,\bar\rho^o,\bar t^o)\in\pi_2^{-1}(\bar t^o)$ does not impose any condition on $d^o$. It follows that, in this case, one has $\mu={\rm dim}(\pi_2^{-1}(\bar t^o))=1$.
\noindent Now, let us suppose that $(d^o,\bar k^o,\bar\rho^o,\bar t^o)\in\pi_2^{-1}(\bar t^o)$, with $\bar k^o\neq\bar 0$. Then, by a similar argument to the proof of Proposition \ref{prop:ch4:ExtendableSolutions}(a) (page \pageref{prop:ch4:ExtendableSolutions}), and taking $\tilde w(\bar t^o)\tilde h(\bar t^o)\neq 0$ into account, there exists $\lambda^o\in\C^\times$ such that
\[M_i(\bar k^o,\bar t^o)=\lambda^o G_i(\bar k^o,\bar t^o)\mbox{ for }i=1,2,3.\] Thus, in this case $d^o$ must be a solution of: \[\tilde h(\bar t^o)(\lambda^o)^2-(d^o)^2\tilde w(\bar t^o)=0.\] Besides, there exists also $j^o\in\{1,2,3\}$ with $u_{j^o}(\bar t^o)\neq 0$. Then, $\bar k^o$ must belong to the two-dimensional space defined by \[k_1u_1(\bar t^o)+k_2u_2(\bar t^o)+k_3u_3(\bar t^o)=0.\] Finally, $\rho_1^o, \rho_2^o$, and $\rho_3^o$ must be one of the finitely many solutions of the polynomial equations: \[\rho_1 \tilde w(\bar t^o)\tilde h(\bar t^o)-1=0,\quad \prod_{i=1}^3(\rho_2 u_i(\bar t^o)-1)=0, \mbox{ and }\prod_{i=1}^3(\rho_3 n_i(\bar t^o)-1)=0.\] The condition $\bar t^o\in\pi_2(\cG)$ implies that these equations can be solved. These remarks show that for every $\bar t^o\in\pi_2(\cG)$, one has $\mu={\rm dim}(\pi_2^{-1}(\bar t^o))\leq 2$. Thus, using Lemma \ref{lem:ch1:FiberDimension}: \[{\rm dim}(\cG)={\rm dim}(\pi_2(\cG))+\mu\leq 1+2=3,\] and the lemma is proved. \end{proof}
\noindent If $\bar t^o_h$ is such that \[T_0(\bar k,\bar t^o_h)\equiv 0\mbox{ and }T_i(d,\bar k,\bar t^o_h)\equiv 0,\mbox{ for }i=1,2,3,\] then $\bar t^o_h\in\cI^{P_h}_5(\Omega)$ for any choice of $\Omega$. However, if this is not the case, then sometimes we need to remove from $\Omega$ precisely those values $(d^o,\bar k^o)$ such that $\bar t^o_h\in\Psi^{P_h}_5(d^o,\bar k^o)$. In the proof of the following lemma we will need to do this several times. Thus we introduce the necessary notation.
\begin{Definition}\label{def:ch4:InvariantSetOfaPoint} Let $\Omega\subset\C\times\C^3$ be non-empty and open. For $\bar t^o_h\in\P^2$ we define: \[ \Omega^{inv}(\bar t^o_h)= \begin{cases} \Omega,\quad \mbox{ if }T_0(\bar k,\bar t^o_h)\equiv 0\mbox{ and }T_i(d,\bar k,\bar t^o_h)\equiv 0,\mbox{ for }i=1,2,3.\\[3mm] \Omega\setminus\bigl\{(d^o,\bar k^o)/\bar t^o_h\in\Psi^{P_h}_5(d^o,\bar k^o)\bigr\}, \mbox{ otherwise.} \end{cases} \] \end{Definition}
\begin{Remark}\label{rem:ch4:InvariantSetOfaPoint} \begin{enumerate}
\item[]
\item[(1)] Note that if $\Omega\subset\Omega^{inv}(\bar t^o_h)$, then $\bar t^o_h\in\cI^{P_h}_5(\Omega)$.
\item[(2)] Observe that $\Omega^{inv}(\bar t^o_h)\neq\emptyset$. \end{enumerate} \end{Remark}
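\noindent Concerning Remark \ref{rem:ch4:InvariantSetOfaPoint}(2), note that in the second case of Definition \ref{def:ch4:InvariantSetOfaPoint} at least one of the polynomials $T_0(\bar k,\bar t^o_h)$, $T_i(d,\bar k,\bar t^o_h)$ does not vanish identically in $(d,\bar k)$. Hence the set $\bigl\{(d^o,\bar k^o)/\bar t^o_h\in\Psi^{P_h}_5(d^o,\bar k^o)\bigr\}$ is contained in a proper closed subset of $\C\times\C^3$, and $\Omega^{inv}(\bar t^o_h)$ is non-empty.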
\begin{Lemma}\label{lem:ch4:QQ_zero_impliesInvariant} Let $\Omega_2$ be as in Proposition \ref{prop:ch4:SystemS5HasOnlyInvariantSolutionsAtInfinity}. There exists an open non-empty set $\Omega_3\subset\Omega_2$ such that the following hold: \begin{enumerate}
\item[(a)] If $\bar t^o_h\in\Psi_5^{P_h}(d^o,\bar k^o)$ for some $(d^o,\bar k^o)\in\Omega_3$, and $Q_0(\bar t^o_h)Q(\bar t^o_h)=0$, then $\bar t^o_h\in\cI^{P_h}_5(\Omega_3)$.
\item[(b)] If $\bar t^o_h$ satisfies
\[T_1(d,k,\bar t^o_h)=T_2(d,k,\bar t^o_h)=T_3(d,k,\bar t^o_h)=0\mbox{ identically in }(d,\bar k),\] then $\bar t^o_h\in\cI^{P_h}_5(\Omega_3)$. \end{enumerate} \end{Lemma} \begin{proof} Let
\[A_0=\{\bar t^o_h\,|\,X(\bar t^o_h)=Y(\bar t^o_h)=Z(\bar t^o_h)=W(\bar t^o_h)=0\}.\] Since $\gcd(X,Y,Z,W)=1$, one sees that $A_0$ is (empty or) a finite set. Thus, we define (see Definition \ref{def:ch4:InvariantSetOfaPoint}): \[\Omega^0_3=\Omega_2\cap\left(\bigcap_{\bar t^o_h\in A_0}\Omega^{inv}(\bar t^o_h)\right).\] By Remark \ref{rem:ch4:InvariantSetOfaPoint}, $\Omega^0_3$ is an open non-empty set. \noindent Let
\[A_1=\{\bar t^o_h\,|\,N(\bar t^o_h)=\bar 0\},\] where $N=(N_1,N_2,N_3)$. Recalling that $\gcd(N_1,N_2,N_3)=1$, $A_1$ is (empty or) a finite set. We define: \[\Omega^1_3=\Omega^0_3\cap\left(\bigcap_{\bar t^o_h\in A_1}\Omega^{inv}(\bar t^o_h)\right).\] By Remark \ref{rem:ch4:InvariantSetOfaPoint}, $\Omega^1_3$ is an open non-empty set.\\ \noindent Similarly, since $\gcd(\tilde H,\tilde W)=1$, the set
\[A_2=\{\bar t^o_h\,|\,\tilde H(\bar t^o_h)=\tilde W(\bar t^o_h)=0\}\] is (empty or) finite. We define \[\Omega^2_3=\Omega^1_3\cap\left(\bigcap_{\bar t^o_h\in A_2}\Omega^{inv}(\bar t^o_h)\right),\] and $\Omega^2_3$ is an open non-empty set. Moreover, since $\gcd(U_1,U_2,U_3)=1$, the set
\[A_3=\{\bar t^o_h\,|\,U_1(\bar t^o_h)=U_2(\bar t^o_h)=U_3(\bar t^o_h)=0\}\] is (empty or) finite. We define \[\Omega^3_3=\Omega^2_3\cap\left(\bigcap_{\bar t^o_h\in A_3}\Omega^{inv}(\bar t^o_h)\right),\] and $\Omega^3_3$ is an open non-empty set. We define \[\Omega^4_3=\Omega^3_3\setminus(\pi_1(\cG)^*),\] where $\pi_1(\cG)$ is as in Lemma \ref{lem:ch4:DimensionSetGlessThan4} (page \pageref{lem:ch4:DimensionSetGlessThan4}), and, as usual, the asterisk denotes Zariski closure. Since ${\rm dim}(\pi_1(\cG))\leq 3$, the set $\pi_1(\cG)^*$ is a proper closed subset of $\C\times\C^3$, and therefore $\Omega^4_3$ is also an open non-empty set.
\noindent Finally, since $T(\bar c,d,\bar k,\bar t)$ is primitive w.r.t. $(d,\bar k)$ (recall Equations \ref{eq:ch4:ContentProjectiveAuxiliaryCurves}, page \pageref{eq:ch4:ContentProjectiveAuxiliaryCurves}, and \ref{eq:ch4:ReducedAuxiliaryPolynomials_3}, page \pageref{eq:ch4:ReducedAuxiliaryPolynomials_3}), it follows that the set
\[A_4=\left\{\bar t^o_h\,|\,T_1(d,k,\bar t^o_h)=T_2(d,k,\bar t^o_h)=T_3(d,k,\bar t^o_h)=0\mbox{ identically in }(d,\bar k)\right\}\] is (empty or) finite. We define \[\Omega_3=\Omega^4_3\cap\left(\bigcap_{\bar t^o_h\in A_4}\Omega^{inv}(\bar t^o_h)\right).\] Let us now suppose that $(d^o,\bar k^o)\in\Omega_3$ and $\bar t^o_h\in\Psi_5^{P_h}(d^o,\bar k^o)$, with $Q_0(\bar t^o_h)Q(\bar t^o_h)=0$. We will show that $\bar t^o_h\in\cI^{P_h}_5(\Omega_3)$. This will prove that statement (a) holds. If $\bar t^o_h$ is of the form $(0:t^o_1:t^o_2)$, by Proposition \ref{prop:ch4:SystemS5HasOnlyInvariantSolutionsAtInfinity}, $\bar t^o_h\in\cI^{P_h}_5(\Omega_3)$ holds trivially. Thus, in the rest of the proof we can assume w.l.o.g. that $\bar t^o_h$ is of the form $(1:t^o_1:t^o_2)$.
\noindent If $\bar t^o_h\in\cup_{i=0,\ldots,3}A_i$, then we have $\Omega_3\subset\Omega^4_3\subset\Omega^{inv}(\bar t^o_h)$, and by Remark \ref{rem:ch4:InvariantSetOfaPoint}, $\bar t^o_h\in\cI^{P_h}_5(\Omega_3)$. So, let $\bar t^o_h\not\in\cup_{i=0,\ldots,3}A_i$. Then the following hold: \begin{enumerate}
\item[(0)] Since $\bar t^o_h\not\in A_0$, $P_i(\bar t^o)\neq 0$ for some $i=0,\ldots,3$ (recall that $\bar t^o_h=(1:\bar t^o)$).
\item[(1)] Since $\bar t^o_h\not\in A_1$, $N(\bar t^o_h)\neq\bar 0$.
\item[(2)] Since $\bar t^o_h\not\in A_2$, $\tilde H(\bar t^o_h)\neq 0$ or $\tilde W(\bar t^o_h)\neq 0$.
\item[(3)] Since $\bar t^o_h\not\in A_3$, $U_i(\bar t^o_h)\neq 0$ for some $i=1,2,3$. \end{enumerate}
Let us show that (0) and (2) imply the following: \begin{enumerate}
\item[(4)] $\tilde H(\bar t^o_h)\tilde W(\bar t^o_h)\neq 0$. \end{enumerate} Indeed, if we suppose that $\tilde H(\bar t^o_h)=0$ but $\tilde W(\bar t^o_h)\neq 0$, then from $\bar t^o_h\in\Psi_5^{P_h}(d^o,\bar k^o)$ one concludes that $d^oG_i(\bar k^o,\bar t^o_h)=0$ for $i=1,2,3$. Since $d^o\neq 0$ and $\bar k^o\neq\bar 0$ in $\Omega_3$, one has that $N(\bar t^o_h)$ is isotropic and parallel to $\bar k^o$, contradicting Lemma \ref{lem:ch4:ExcludingAdditional_dk_AfterTheoreticalFoundation}(1) (page \pageref{lem:ch4:ExcludingAdditional_dk_AfterTheoreticalFoundation}). On the other hand, if we suppose $\tilde H(\bar t^o_h)\neq 0$ but $\tilde W(\bar t^o_h)=0$, then from $\bar t^o_h\in\Psi_5^{P_h}(d^o,\bar k^o)$ one concludes that $d^oG_i(\bar k^o,\bar t^o_h)=0$ for $i=1,2,3$. Since $d^o\neq 0$, we conclude that $G_i(\bar k^o,\bar t^o_h)=0$ for $i=1,2,3$. Thus, $\bar t^o$ is a solution of: \[P_0(\bar t)=M_1(\bar k^o,1,\bar t)=M_2(\bar k^o,1,\bar t)=M_3(\bar k^o,1,\bar t)=0.\] However, by (0), there exists $j^o\in\{0,1,2,3\}$ such that $P_{j^o}(\bar t^o)\neq 0$. This contradicts Lemma \ref{lem:ch4:ExcludingAdditional_dk_AfterTheoreticalFoundation}(3) (page \pageref{lem:ch4:ExcludingAdditional_dk_AfterTheoreticalFoundation}).
\noindent From (1), (3), (4), and since $\bar t^o_h\in\Psi_5^{P_h}(d^o,\bar k^o)$ and $Q_0(\bar t^o_h)Q(\bar t^o_h)=0$, it follows that $(d^o,\bar k^o,\bar t^o_h)$ can be extended to a point $(d^o,\bar k^o,\bar\rho^o,\bar t^o)\in\cG$. Thus, one has $(d^o,\bar k^o)\in\pi_1(\cG)$, contradicting the construction of $\Omega^4_3$. This finishes the proof of statement (a).
\noindent The proof of statement (b) is a consequence of the construction of $\Omega_3$ (in particular, see the construction of $A_4$); indeed, if $\bar t^o_h$ satisfies
\[T_1(d,k,\bar t^o_h)=T_2(d,k,\bar t^o_h)=T_3(d,k,\bar t^o_h)=0\mbox{ identically in }(d,\bar k),\] then $\bar t^o_h\in A_4$. It follows that $\Omega_3\subset\Omega^{inv}(\bar t^o_h)$, and so $\bar t^o_h\in\cI^{P_h}_5(\Omega_3)$ (see Remark \ref{rem:ch4:InvariantSetOfaPoint}(1), page \pageref{rem:ch4:InvariantSetOfaPoint}). \end{proof} \noindent Now, restricting the values of $(d,\bar k)$ to a new open set, we are ready to prove the announced converse of Proposition \ref{prop:ch4:NonInvariantSolutionsCanBeAvoided} (page \pageref{prop:ch4:NonInvariantSolutionsCanBeAvoided}).
\begin{Proposition}\label{prop:ch4:NonInvariantSolutionsCanBeAvoided_Part2} Let $\Omega_{3}$ be as in Lemma \ref{lem:ch4:QQ_zero_impliesInvariant} (page \pageref{lem:ch4:QQ_zero_impliesInvariant}). If $\bar t^o_h\in\Psi_5^{P_h}(d^o,\bar k^o)$ for some $(d^o,\bar k^o)\in\Omega_{3}$, but $\bar t^o_h\not\in\cI^{P_h}_5(\Omega_{3})$, then $\bar t^o_h\in{\mathcal A}_h$. \end{Proposition} \begin{proof} If $\bar t^o_h\not\in\cI^{P_h}_5(\Omega_3)$, then $t^o_0\neq 0$ (by Proposition \ref{prop:ch4:SystemS5HasOnlyInvariantSolutionsAtInfinity}, page \pageref{prop:ch4:SystemS5HasOnlyInvariantSolutionsAtInfinity}). Let us write $\bar t^o_h=(1:t^o_1:t^o_2)$. Then, since $\bar t^o_h\in\Psi_5^{P_h}(d^o,\bar k^o)$, one has $\bar t^o\in\Psi_3^{P}(d^o,\bar k^o)$. Note also that, since $\bar t^o_h\not\in\cI^{P_h}_5(\Omega_{3})$, by Lemma \ref{lem:ch4:QQ_zero_impliesInvariant}, we must have $Q_0(\bar t^o_h)Q(\bar t^o_h)\neq 0$. If we suppose $\bar t^o_h\not\in{\mathcal A}_h$, then $\bar t^o\not\in{\mathcal A}$. Thus $\bar t^o\in\cF$, and by Proposition \ref{prop:ch4:FakePointsAndInvariantSolutionsCoincide} (page \pageref{prop:ch4:FakePointsAndInvariantSolutionsCoincide}), $\bar t^o\in\cI^P_3(\Omega_{3})$. Taking Equation \ref{eq:ch4:ReducedAuxiliaryPolynomials_2} (page \pageref{eq:ch4:ReducedAuxiliaryPolynomials_3}) into account, and using $Q(\bar t^o_h)\neq 0$, we conclude that \[T_1(d,k,\bar t^o_h)=T_2(d,k,\bar t^o_h)=T_3(d,k,\bar t^o_h)=0\mbox{ identically in }(d,\bar k).\] Then, by Lemma \ref{lem:ch4:QQ_zero_impliesInvariant}(b) (page \pageref{lem:ch4:QQ_zero_impliesInvariant}), one has that $\bar t^o_h\in\cI^{P_h}_5(\Omega_{3})$. This is a contradiction, and so we obtain that $\bar t^o_h\in{\mathcal A}_h$. \end{proof}
\subsection{Multiplicity of intersection at non-fake points} \label{subsec:ch4:MultiplicityOfIntersectionAtNon-FakePoints}
The auxiliary polynomials $S_i$ (for $i=0,\ldots,3$) were introduced in Section \ref{sec:ch4:AuxiliaryCurvesForRationalSurfaces} (page \pageref{sec:ch4:AuxiliaryCurvesForRationalSurfaces}) in order to reduce the offset degree problem to a problem of intersection between planar curves. More precisely, the preceding results indicate that the offset degree problem can be reduced to an intersection problem between the planar curves defined by the auxiliary polynomials $T_i$. A crucial step in this reduction concerns the multiplicity with which these curves meet at their non-invariant intersection points. In this subsection we prove that this intersection multiplicity equals one (Proposition \ref{prop:ch4:MultiplicityAtNonFakePoints}, page \pageref{prop:ch4:MultiplicityAtNonFakePoints}). We first introduce some notation for the curves involved in this problem.
\begin{Definition}\label{def:ch4:CurvesDefinedByAuxiliaryPolynomials} Let $\Omega_0$ be as in Theorem \ref{thm:ch4:TheoreticalFoundation} (page \pageref{thm:ch4:TheoreticalFoundation}). If $(d^o,\bar k^o)\in\Omega_0$, and $T_i$ (for $i=0,\ldots,3$) are the polynomials introduced in Equations \ref{eq:ch4:ReducedAuxiliaryPolynomials} and \ref{eq:ch4:ReducedAuxiliaryPolynomials_2} (page \pageref{eq:ch4:ReducedAuxiliaryPolynomials}), we denote by $\cT^a_0(\bar k^o)$ (resp. $\cT^a_i(d^o,\bar k^o)$ for $i=1,2,3$) the affine algebraic set defined by the polynomial $T_0(\bar k^o,1,\bar t)$ (resp. $T_i(d^o,\bar k^o,1,\bar t)$ for $i=1,2,3$). Similarly, we denote by $\cT^h_0(\bar k^o)$ (resp. $\cT^h_i(d^o,\bar k^o)$ for $i=1,2,3$) the projective algebraic set defined by the polynomial $T_0(\bar k^o,\bar t_h)$ (resp. $T_i(d^o,\bar k^o,\bar t_h)$ for $i=1,2,3$). \end{Definition}
\begin{Remark} Note that the homogenization of the polynomials $T_0(\bar k^o,1,\bar t)$ and $T_i(d^o,\bar k^o,1,\bar t)$ w.r.t. $t_0$ does not necessarily coincide with $T_0(\bar k^o,\bar t_h)$ and $T_i(d^o,\bar k^o,\bar t_h)$. They may differ in a power of $t_0$. In particular, it is not necessarily true that $\overline{\cT^a_0(\bar k^o)}=\cT^h_0(\bar k^o)$ and $\overline{\cT^a_i(d^o,\bar k^o)}=\cT^h_i(d^o,\bar k^o)$ (the overline denotes projective closure, as usual). However, it holds that $\cT^h_i(d^o,\bar k^o)\cap\C^2=\cT^a_i(d^o,\bar k^o)$ and $\cT^h_0(\bar k^o)\cap\C^2=\cT^a_0(\bar k^o)$. \end{Remark}
\begin{Proposition}\label{prop:ch4:MultiplicityAtNonFakePoints} Let $\Omega_3$ be as in Lemma \ref{lem:ch4:QQ_zero_impliesInvariant} (page \pageref{lem:ch4:QQ_zero_impliesInvariant}). There exists a non-empty open set $\Omega_4\subset\Omega_3$ such that if $(d^o,\bar k^o)\in\Omega_4$, and $\bar t^o_h\in{\mathcal A}_h\cap{\Psi_5^{P_h}}(d^o,\bar k^o)$, then: \[\min_{i=1,2,3}\left(\mathop{\mathrm{mult}}\nolimits_{\bar t^o}(\cT_0(\bar k^o),\cT_i(d^o,\bar k^o))\right)=1.\] \end{Proposition} \begin{proof} Since $\bar t^o_h\in{\mathcal A}_h\cap{\Psi_5^{P_h}}(d^o,\bar k^o)$, we can write $\bar t^o_h=(1:t^o_1:t^o_2)$. Let $\bar t^o=(t^o_1,t^o_2)$. By Theorem \ref{thm:ch4:RelationBetweenSystemS5AndSystemS4} (page \pageref{thm:ch4:RelationBetweenSystemS5AndSystemS4}) we know that $\bar t^o=(t^o_1,t^o_2)\in{\mathcal A}\cap\Psi_3^P(d^o,\bar k^o)$. W.l.o.g. we will suppose that \[P_0(\bar t^o)h(\bar t^o)\bigl(P_2(\bar t^o)n_3(\bar t^o)-P_3(\bar t^o)n_2(\bar t^o)\bigr)\neq 0\] (see the definition of the set ${\mathcal A}$ in Equation \ref{def:ch4:Set_A_OfExtensibleSolutions}, page \pageref{def:ch4:Set_A_OfExtensibleSolutions}). In this case, it holds (see Remark \ref{rem:ch4:SignOfLambdaAndOffsetting}, page \pageref{rem:ch4:SignOfLambdaAndOffsetting}) that \[k_2^on_3(\bar t^o)-k_3^on_2(\bar t^o)\neq 0\mbox{ and }k_2^oP_3(\bar t^o)-k_3^oP_2(\bar t^o)\neq 0.\] Furthermore, by Remark \ref{rem:ch4:RelationBetweenImplicitAndParametricNormalVector} (page \pageref{eq:ch4:RelationBetweenImplicitAndParametricNormalVectors}), one has \begin{equation}\label{eq:ch4:ImplicitAndParamNormalVectors_Multip} f_j(P(\bar t))=\dfrac{\beta(\bar t)}{P^{\mu}_0(\bar t)}n_j(\bar t) \mbox{ for }j=1,2,3, \end{equation} and therefore \begin{equation}\label{eq:ch4:ImplicitAndParamNormalVectors_Multip_h} \sqrt{h_{\operatorname{imp}}(P(\bar t))}=\dfrac{\beta(\bar t)}{P^{\mu}_0(\bar t)}\sqrt{h(\bar t)}. 
\end{equation} Here $\beta(\bar t^o)\neq 0$ (see Lemma \ref{lem:ch4:ExcludeBadParameterValues}, page \pageref{lem:ch4:ExcludeBadParameterValues}).
\noindent For this case we will construct a non-empty open set $\Omega_{4,1}\subset\Omega_3$ such that if $(d^o,\bar k^o)\in\Omega_{4,1}$, and $\bar t^o_h\in{\mathcal A}_h\cap{\Psi_5^{P_h}}(d^o,\bar k^o)$, then: \[\mathop{\mathrm{mult}}\nolimits_{\bar t^o}\bigl(\cT_0(\bar k^o),\cT_1(d^o,\bar k^o)\bigr)=1.\] If the second, respectively third, defining equation of ${\mathcal A}$ is used, then analogous open subsets $\Omega_{4,2}$, respectively $\Omega_{4,3}$, can be constructed, and the corresponding result for $\cT_2(d^o,\bar k^o)$, respectively $\cT_3(d^o,\bar k^o)$, is obtained. Finally, it suffices to take \[\Omega_4=\Omega_{4,1}\cap\Omega_{4,2}\cap\Omega_{4,3}.\]
\noindent The construction of $\Omega_{4,1}$ will proceed in several steps: \begin{enumerate}
\item[(1)] \noindent By Proposition \ref{prop:ch4:ExtendableSolutions} (page \pageref{prop:ch4:ExtendableSolutions}), $(d^o,\bar k^o,\bar t^o)\in\pi_{(2,1)}({\Psi_2^P}(d^o,\bar k^o))$. Thus, by Theorem \ref{thm:ch4:TheoreticalFoundation} (page \pageref{thm:ch4:TheoreticalFoundation}), the point $\bar y^o=P(\bar t^o)$ is an affine, non normal-isotropic point of $\Sigma$, and it is associated with $\bar x^o\in\cL_{\bar k^o}\cap\cO_{d^o}(\Sigma)$, where $\bar x^o$ is a non normal-isotropic point of $\cO_{d^o}(\Sigma)$. Besides, since $(d^o,\bar k^o)\in\Omega_3\subset\Omega_0$ (see Remark \ref{rem:ch4:InOmega0GoodSpecialization}, page \pageref{rem:ch4:InOmega0GoodSpecialization}), $g(d^o,\bar x)$ is the defining polynomial of $\cO_{d^o}(\Sigma)$.
It follows that there is an open neighborhood $U^0$ of $(d^o,\bar y^o)$ (in the usual Euclidean topology of $\C\times\C^3$) such that the equation \[ \left(\dfrac{\partial f}{\partial y_1}(\bar y)\right)^2+\left(\dfrac{\partial f}{\partial y_2}(\bar y)\right)^2+\left(\dfrac{\partial f}{\partial y_3}(\bar y)\right)^2=0 \] has no solutions in $U^0$. Similarly, there is an open neighborhood $V^0$ of $(d^o,\bar x^o)$ (in the usual Euclidean topology of $\C\times\C^3$) such that the equation \[ \left(\dfrac{\partial g}{\partial x_1}(d,\bar x)\right)^2+\left(\dfrac{\partial g}{\partial x_2}(d,\bar x)\right)^2+\left(\dfrac{\partial g}{\partial x_3}(d,\bar x)\right)^2=0 \] has no solutions in $V^0$. Let us consider the map \[\bar\varphi:U^0\to\C^3\] defined by \begin{equation}\label{eq:ch4:DefinitionVarphiMap_Multiplicity} \bar\varphi(d,\bar y)=(\varphi_1(d,\bar y),\varphi_2(d,\bar y),\varphi_3(d,\bar y))=\bar y\pm d\dfrac{\nabla f(\bar y)}{\sqrt{h_{\operatorname{imp}}(\bar y)}}. \end{equation} We assume w.l.o.g. that the $+$ sign in this expression is chosen so that $\bar\varphi(d^o,\bar y^o)=\bar x^o$; as will be shown below, our discussion does not depend on this choice of sign. According to Remark \ref{rem:ch4:SignOfLambdaAndOffsetting} and Lemma \ref{lem:ch4:SignOfLambdaAndOffsetting} (page \pageref{lem:ch4:SignOfLambdaAndOffsetting}), this implies that:
\begin{equation}\label{eq:ch4:ChoiceOfEpsilonInMultiplicityProof} M_i(\bar k^o,\bar t^o)=\epsilon\dfrac{d^oP_0(\bar t^o)}{\sqrt{h(\bar t^o)}}G_i(\bar k^o,\bar t^o)\mbox{ for }i=1,2,3. \end{equation} We will use Equation \ref{eq:ch4:ChoiceOfEpsilonInMultiplicityProof} later in the proof. Since $\bar y^o$ is not normal-isotropic in $\Sigma$, it follows that $\bar\varphi$ is analytic in $U^0$. Furthermore, we consider the map \[\bar\eta:V^0\to\C^3\] defined by: \[
\bar\eta(d,\bar x)=\bar x\pm d\dfrac{\nabla_{\bar x}g(d,\bar x)}{\|\nabla_{\bar x}g(d,\bar x)\|}. \] Here $\nabla_{\bar x}$ refers to the gradient computed w.r.t. $\bar x$; that is: \[ \nabla_{\bar x}g(d,\bar x)=\left(\dfrac{\partial g}{\partial x_1}(d,\bar x),\dfrac{\partial g}{\partial x_2}(d,\bar x),\dfrac{\partial g}{\partial x_3}(d,\bar x)\right). \]
In the definition of $\bar\eta$, w.l.o.g. the sign $+$ is chosen so that $\bar\eta(d^o,\bar x^o)=\bar y^o$. Then, since $\bar x^o$ is non normal-isotropic in $\cO_{d^o}(\Sigma)$, it follows that $\bar\eta$ is analytic in $V^0$. Thus, there are open neighborhoods $U^1$ of $(d^o,\bar y^o)$ and $V^1$ of $(d^o,\bar x^o)$ (in the usual Euclidean topology of $\C\times\C^3$), such that $\bar\varphi$ is an analytic isomorphism between $U^1$ and $V^1$, with inverse given by $\bar\eta$. We can assume w.l.o.g. that $\|\nabla f(\bar y)\|\neq 0$ holds in $U^1$, and $\|\nabla_{\bar x} g(d,\bar x)\|\neq 0$ holds in $V^1$. Note also that if $(d^o,\bar y^1)\in U^1$, with $\bar y^1\in\Sigma$, then $\bar\varphi(d^o,\bar y^1)\in\cO_{d^o}(\Sigma)$. It follows that the map $\bar\varphi_{d^o}$, obtained by restricting $\bar\varphi$ to $d=d^o$, induces an isomorphism: \[d\bar\varphi_{d^o}:T_{\bar y^o}(\Sigma)\to T_{\bar x^o}(\cO_{d^o}(\Sigma))\] where $T_{\bar y^o}(\Sigma)$ is the tangent plane to $\Sigma$ at $\bar y^o$, and $T_{\bar x^o}(\cO_{d^o}(\Sigma))$ is the tangent plane to $\cO_{d^o}(\Sigma)$ at $\bar x^o$.
\noindent Since $(d^o,\bar k^o)\in\Omega_3\subset\Omega_0$, we have $\bar t^o\in\Upsilon_1$, with $\Upsilon_1$ as in Lemma \ref{lem:ch4:PropertiesSurfaceParametrization}, page \pageref{lem:ch4:PropertiesSurfaceParametrization} (see the construction of $\Omega^4_0$ in the proof of Theorem \ref{thm:ch4:TheoreticalFoundation}, page \pageref{thm:ch4:TheoreticalFoundation}). Thus, the Jacobian $\dfrac{\partial P}{\partial\bar t}(\bar t^o)$ has rank two. It follows that $P$ induces an isomorphism: \[dP:T_{\bar t^o}(\C^2)\to T_{\bar y^o}(\Sigma)\] where $T_{\bar t^o}(\C^2)$ is the tangent plane to $\C^2$ at $\bar t^o$. Therefore, the map defined by \begin{equation}\label{eq:ch4:DefinitionNu_Multiplicity} \bar\nu_{d^o}(\bar t)=\bar\varphi_{d^o}(P(\bar t))=\bar\varphi(d^o,P(\bar t)) \end{equation} induces an isomorphism $d\bar\nu_{d^o}$ between $T_{\bar t^o}(\C^2)$ and $T_{\bar x^o}(\cO_{d^o}(\Sigma))$.
\item[(2)] Consider the following polynomials in $\C[d,\bar k,\rho]$: \[ \begin{cases} K(d,\bar k,\rho)=k_1\dfrac{\partial g}{\partial x_1}(d,\rho\bar k)+k_2\dfrac{\partial g}{\partial x_2}(d,\rho\bar k)+k_3\dfrac{\partial g}{\partial x_3}(d,\rho\bar k)\\ \tilde g(d,\bar k,\rho)=g(d,\rho\bar k) \end{cases} \] and let \[\Theta(d,\bar k)={\rm Res}_{\rho}(K(d,\bar k,\rho),\tilde g(d,\bar k,\rho)).\] Let us show that this resultant does not vanish identically. If it does, then there are $A,B_1,B_2\in\C[d,\bar k,\rho]$, with ${\rm deg}_{\rho}(A(d,\bar k,\rho))>0$, such that \[ \begin{cases} K(d,\bar k,\rho)=A(d,\bar k,\rho)B_1(d,\bar k,\rho),\\ \tilde g(d,\bar k,\rho)=A(d,\bar k,\rho)B_2(d,\bar k,\rho). \end{cases} \] Then $g(d,\rho\bar k)=A(d,\bar k,\rho)B_2(d,\bar k,\rho)$, and ${\rm deg}_{\bar k}(A(d,\bar k,\rho))>0$ (because $\tilde g$ cannot have a non constant factor in $\C[d,\rho]$). Thus, setting $\rho=1$ and $\bar k=\bar x$, one has $g(d,\bar x)=A(d,\bar x,1)B_2(d,\bar x,1)$. It follows (see Remark \ref{rem:ch1:GenericOffsetEqSqfreeAndHasAtMostTwoFactors}(1), page \pageref{rem:ch1:GenericOffsetEqSqfreeAndHasAtMostTwoFactors}) that if $\tilde A(d,\bar x)$ is any irreducible factor of $A(d,\bar x,1)$, then $\tilde A(d,\bar x)$ defines an irreducible component $\cM$ of the generic offset, such that \[x_1\dfrac{\partial g}{\partial x_1}(d,\bar x)+x_2\dfrac{\partial g}{\partial x_2}(d,\bar x)+x_3\dfrac{\partial g}{\partial x_3}(d,\bar x)=0\] holds identically on $\cM$. Besides, for an open set of points $\bar x^o\in\cM$, one has $\nabla_{\bar x}g(d,\bar x^o)=\nabla_{\bar x}\tilde A(d,\bar x^o)$. Thus the above equation implies that \[x_1\dfrac{\partial\tilde A}{\partial x_1}(d,\bar x)+x_2\dfrac{\partial\tilde A}{\partial x_2}(d,\bar x)+x_3\dfrac{\partial\tilde A}{\partial x_3}(d,\bar x)=0\] holds identically in $\cM$. 
Therefore, since $\tilde A$ is irreducible, we get \[x_1\dfrac{\partial\tilde A}{\partial x_1}(d,\bar x)+x_2\dfrac{\partial\tilde A}{\partial x_2}(d,\bar x)+x_3\dfrac{\partial\tilde A}{\partial x_3}(d,\bar x)=\kappa^o\tilde A(d,\bar x)\] for some constant $\kappa^o$. By Euler's identity, decomposing $\tilde A$ into its homogeneous parts w.r.t. $\bar x$, this relation forces every part of degree different from $\kappa^o$ to vanish; hence the polynomial $\tilde A(d,\bar x)$ is homogeneous w.r.t. $\bar x$. It follows that, for any value $d^o\not\in\Delta$ (with $\Delta$ as in Corollary \ref{cor:ch1:BadDistancesFiniteSet}, page \pageref{cor:ch1:BadDistancesFiniteSet}), $\cO_{d^o}(\Sigma)$ has a homogeneous component. This implies that $\bar 0\in\cO_{d^o}(\Sigma)$ for $d^o\not\in\Delta$, which contradicts our hypothesis (see Remark \ref{rem:ch4:NotInfinitelyManyOffsetsThroughOrigin}(1), page \pageref{rem:ch4:NotInfinitelyManyOffsetsThroughOrigin}). Thus, $\Theta(d,\bar k)$ does not vanish identically. Let us define $\Omega_{4,1}^1=\Omega_3\setminus\{(d^o,\bar k^o)/\Theta(d^o,\bar k^o)=0\}$.
\item[(3)] Let us consider the following polynomials in $\C[d, \bar k,\bar x]$ \begin{equation}\label{eq:ch4:SigmaPolynomials} \begin{cases} \sigma_0(d,\bar k,\bar x)={\rm det}(\bar k,\bar x,\nabla_{\bar x}g(d,\bar x))\\ \sigma_1(d,\bar k,\bar x)=k_2x_3-k_3x_2\\ \end{cases} \end{equation} Let $\Omega_{4,1}^2\subset\Omega_{4,1}^1$ be such that, for $(d^o,\bar k^o)\in\Omega_{4,1}^2$, these polynomials are not identically zero (note that $\sigma_0$ and $\sigma_1$ are both homogeneous w.r.t. $\bar k$). Therefore, for $(d^o,\bar k^o)\in\Omega_{4,1}^2$, and for $i=0,1$, $\sigma_i(d^o,\bar k^o,\bar x)$ defines a surface $\Sigma_i(d^o,\bar k^o)$. From \[\nabla_{\bar x}\sigma_1(\bar k,\bar x)=(0,-k_3,k_2)\] one has \[\nabla_{\bar x}g\wedge\nabla_{\bar x}\sigma_1=\left(k_2\dfrac{\partial g}{\partial x_2}+k_3\dfrac{\partial g}{\partial x_3},-k_2\dfrac{\partial g}{\partial x_1},-k_3\dfrac{\partial g}{\partial x_1}\right).\] Let $(d^o,\bar k^o)\in\Omega_{4,1}^2$ and $\bar x^o\in\cO_{d^o}(\Sigma)\cap\cL_{\bar k^o}$. We will show that \begin{equation}\label{eq:ch4:GradientPlaneAndOffsetCrossProduct} \nabla_{\bar x}g(d^o,\bar x^o)\wedge\nabla_{\bar x}\sigma_1(\bar k^o,\bar x^o)\neq\bar 0. \end{equation} First, note that there is $\rho^o\in\C$ such that $\bar x^o=\rho^o\bar k^o$. If $\dfrac{\partial g}{\partial x_1}(d^o,\bar x^o)\neq 0$, then since $k^o_i\neq 0$ for $i=1,2,3$, the result follows. Thus, let $\dfrac{\partial g}{\partial x_1}(d^o,\bar x^o)=0$. If we suppose that $\nabla_{\bar x}g(d^o,\bar x^o)\wedge\nabla_{\bar x}\sigma_1(\bar k^o,\bar x^o)=\bar 0$, then \[ k^o_2\dfrac{\partial g}{\partial x_2}(d^o,\bar x^o)+k^o_3\dfrac{\partial g}{\partial x_3}(d^o,\bar x^o)=0. 
\] Thus, one obtains \[ \begin{cases} g(d^o,\rho^o\bar k^o)=0\\ k^o_1\dfrac{\partial g}{\partial x_1}(d^o,\rho^o\bar k^o)+k^o_2\dfrac{\partial g}{\partial x_2}(d^o,\rho^o\bar k^o)+k^o_3\dfrac{\partial g}{\partial x_3}(d^o,\rho^o\bar k^o)=0 \end{cases} \] and it follows that $\Theta(d^o,\bar k^o)=0$ (with $\Theta$ as in step (2) of the proof), contradicting the construction of $\Omega_{4,1}^1$. Thus, Equation \ref{eq:ch4:GradientPlaneAndOffsetCrossProduct} is proved.
\noindent We will prove the analogous result for $\sigma_0$. That is, for $(d^o,\bar k^o)\in\Omega_{4,1}^2$ and $\bar x^o\in\cO_{d^o}(\Sigma)\cap\cL_{\bar k^o}$, we will show that: \begin{equation}\label{eq:ch4:GradientSigma0AndOffsetCrossProduct} \nabla_{\bar x}g(d^o,\bar x^o)\wedge\nabla_{\bar x}\sigma_0(d^o,\bar k^o,\bar x^o)\neq\bar 0. \end{equation} From \[\sigma_0(d,\bar k,\bar x)={\rm det}(\bar k,\bar x,\nabla_{\bar x}g(d,\bar x))\] and applying the derivation properties of determinants, one has, e.g. \[ \dfrac{\partial\sigma_0}{\partial x_1}(d,\bar k,\bar x)=
\left|\begin{array}{ccc} k_1&k_2&k_3\\ 1&0&0\\ \partial_1g&\partial_2g&\partial_3g
\end{array}\right| +
\left|\begin{array}{ccc} k_1&k_2&k_3\\ x_1&x_2&x_3\\ \partial_{1,1}g&\partial_{2,1}g&\partial_{3,1}g
\end{array}\right| \] where $\partial_ig=\dfrac{\partial g}{\partial x_i}$ and $\partial_{i,j}g=\dfrac{\partial^2 g}{\partial x_i\partial x_j}$ for $i,j\in\{1,2,3\}$. As before, let $\bar x^o=\rho^o\bar k^o$ for some $\rho^o\in\C$. Then: \[ \dfrac{\partial\sigma_0}{\partial x_1}(d^o,\bar k^o,\bar x^o)=
\left|\begin{array}{ccc} k^o_1&k^o_2&k^o_3\\ 1&0&0\\ \partial_1g&\partial_2g&\partial_3g
\end{array}\right| +
\left|\begin{array}{ccc} k^o_1&k^o_2&k^o_3\\ \rho^o k^o_1&\rho^o k^o_2&\rho^o k^o_3\\ \partial_{1,1}g&\partial_{2,1}g&\partial_{3,1}g
\end{array}\right| \] with all the partial derivatives evaluated at $(d^o,\bar x^o)$. Since the second determinant in the above equation vanishes, one concludes that \[ \dfrac{\partial\sigma_0}{\partial x_1}(d^o,\bar k^o,\bar x^o)= k^o_3\dfrac{\partial g}{\partial x_2}(d^o,\bar x^o)-k^o_2\dfrac{\partial g}{\partial x_3}(d^o,\bar x^o). \] Similar results are obtained for the other two partial derivatives, leading to: \[ \nabla_{\bar x}\sigma_0(d^o,\bar k^o,\bar x^o)=\nabla_{\bar x}g(d^o,\bar x^o)\wedge\bar k^o. \] Therefore, \[ \nabla_{\bar x}g(d^o,\bar x^o)\wedge\nabla_{\bar x}\sigma_0(d^o,\bar k^o,\bar x^o)= \nabla_{\bar x}g(d^o,\bar x^o)\wedge(\nabla_{\bar x}g(d^o,\bar x^o)\wedge\bar k^o). \] Note that $\nabla_{\bar x}g(d^o,\bar x^o)\wedge\bar k^o\neq \bar 0$ because, by construction, $\cL_{\bar k^o}$ is not normal to $\cO_{d^o}(\Sigma)$ at $\bar x^o$. If we suppose that \[ \nabla_{\bar x}g(d^o,\bar x^o)\wedge\nabla_{\bar x}\sigma_0(d^o,\bar k^o,\bar x^o)=\bar 0, \]
then the vectors $\nabla_{\bar x}g(d^o,\bar x^o)$ and $\nabla_{\bar x}g(d^o,\bar x^o)\wedge\bar k^o$ are parallel and, at the same time, perpendicular to each other. However, over $\C$, if two vectors are both parallel and perpendicular and one of them is not zero, then the other one must be isotropic. Since $\nabla_{\bar x}g(d^o,\bar x^o)\wedge\bar k^o\neq\bar 0$, one concludes that $\|\nabla_{\bar x}g(d^o,\bar x^o)\|=0$. This is a contradiction (see step (1) of the proof); therefore, Equation \ref{eq:ch4:GradientSigma0AndOffsetCrossProduct} is proved.
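The determinant-differentiation rule invoked above can be checked mechanically. In the following sympy sketch, the polynomial standing in for $g(d,\bar x)$ is an ad hoc choice (the names are illustrative only); the identity itself holds for any polynomial:

```python
import sympy as sp

# Sanity check of the determinant-differentiation rule used for sigma_0:
# d/dx1 det(k, x, grad g) = det(k, e1, grad g) + det(k, x, d(grad g)/dx1).
# The polynomial g below is an ad hoc stand-in for g(d, x).
x1, x2, x3, k1, k2, k3 = sp.symbols('x1 x2 x3 k1 k2 k3')
g = x1*x2*x3 + x1**2 - x3

k = sp.Matrix([[k1, k2, k3]])
X = sp.Matrix([[x1, x2, x3]])
grad = sp.Matrix([[g.diff(v) for v in (x1, x2, x3)]])
grad_d1 = sp.Matrix([[g.diff(v).diff(x1) for v in (x1, x2, x3)]])

sigma0 = sp.Matrix.vstack(k, X, grad).det()
lhs = sp.diff(sigma0, x1)
rhs = (sp.Matrix.vstack(k, sp.Matrix([[1, 0, 0]]), grad).det()
       + sp.Matrix.vstack(k, X, grad_d1).det())
```

The same check, repeated for $x_2$ and $x_3$, recovers the formula $\nabla_{\bar x}\sigma_0=\nabla_{\bar x}g\wedge\bar k$ obtained in the text.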
\noindent From Equations \ref{eq:ch4:GradientPlaneAndOffsetCrossProduct} and \ref{eq:ch4:GradientSigma0AndOffsetCrossProduct}, and using Theorem 9 in \cite{Cox1997} (page 480), we conclude that $\bar x^o$ is a regular point in $\Sigma_i(d^o,\bar k^o)\cap\cO_{d^o}(\Sigma)$ (for $i=0,1$). Besides, $\bar x^o$ belongs to a unique one-dimensional component of $\Sigma_i(d^o,\bar k^o)\cap\cO_{d^o}(\Sigma)$. For $i=0,1$, let ${\mathcal C}_{i}(d^o,\bar k^o)$ be the one-dimensional component of $\Sigma_i(d^o,\bar k^o)\cap\cO_{d^o}(\Sigma)$ containing $\bar x^o$.
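The regularity criterion cited from \cite{Cox1997} can be illustrated on a toy configuration, with a sphere playing the role of $\cO_{d^o}(\Sigma)$ and a plane the role of $\Sigma_i(d^o,\bar k^o)$; these surfaces are made up for the sketch, which only shows the rank condition and the tangent vector obtained from the two gradients:

```python
import sympy as sp

# Toy configuration: a sphere stands in for O_d(Sigma) and a plane for
# Sigma_i(d^o, k^o). At a point of the intersection where the Jacobian of
# (g, sigma) has rank 2, the point is a regular point of the intersection
# curve, and the cross product of the gradients spans the tangent line.
x1, x2, x3 = sp.symbols('x1 x2 x3')
g = x1**2 + x2**2 + x3**2 - 1
sigma = x3

grad_g = sp.Matrix([g.diff(v) for v in (x1, x2, x3)])
grad_s = sp.Matrix([sigma.diff(v) for v in (x1, x2, x3)])
p = {x1: 1, x2: 0, x3: 0}   # a point on the intersection circle

J = sp.Matrix.vstack(grad_g.T, grad_s.T).subs(p)   # rank 2 -> regular point
tangent = grad_g.cross(grad_s).subs(p)             # spans the tangent line
```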
\item[(4)] The non-zero vector \[\bar v_i(d^o,\bar k^o,\bar x^o)=\nabla_{\bar x}g(d^o,\bar x^o)\wedge\nabla_{\bar x}\sigma_i(\bar k^o,\bar x^o),\quad (i=0,1)\] obtained in step (3) of the proof, is a tangent vector to ${\mathcal C}_{i}(d^o,\bar k^o)$ at $\bar x^o$. We will show that \begin{equation}\label{eq:ch4:CrossProductOfTangentVectors} \bar v_0(d^o,\bar k^o,\bar x^o)\wedge\bar v_1(d^o,\bar k^o,\bar x^o)\neq\bar 0. \end{equation} It holds that \[ \begin{array}{l} \bar v_0(d^o,\bar k^o,\bar x^o)\wedge\bar v_1(d^o,\bar k^o,\bar x^o)=\\ -(k^o_3\dfrac{\partial g}{\partial x_2}-k^o_2\dfrac{\partial g}{\partial x_3})\cdot \left(k^o_1\dfrac{\partial g}{\partial x_1}+k^o_2\dfrac{\partial g}{\partial x_2}+k^o_3\dfrac{\partial g}{\partial x_3}\right)\cdot \nabla_{\bar x}g(d^o,\bar x^o), \end{array} \] with all the partial derivatives evaluated at $(d^o,\bar x^o)$. Since
\[\|\nabla f(\bar y^o)\|\cdot\|\nabla_{\bar x}g(d^o,\bar x^o)\|\neq 0,\] by the fundamental property of the offset (Proposition \ref{prop:ch1:FundamentalPropertyOffsets}, page \pageref{prop:ch1:FundamentalPropertyOffsets}), there is some $\kappa^o\in\C^\times$ such that \[\nabla_{\bar x}g(d^o,\bar x^o)=\kappa^o\nabla f(\bar y^o).\] Then, using Equation \ref{eq:ch4:ImplicitAndParamNormalVectors_Multip} (page \pageref{eq:ch4:ImplicitAndParamNormalVectors_Multip}) one has (see Remark \ref{rem:ch4:SignOfLambdaAndOffsetting}, page \pageref{rem:ch4:SignOfLambdaAndOffsetting}) that \[k^o_3\dfrac{\partial g}{\partial x_2}-k^o_2\dfrac{\partial g}{\partial x_3}= \kappa^o(k^o_3f_2(\bar y^o)-k^o_2f_3(\bar y^o))=\kappa^o\dfrac{\beta(\bar t^o)}{P^{\mu}_0(\bar t^o)}(k_3^on_2(\bar t^o)-k_2^on_3(\bar t^o))\neq 0.\] Besides, in step (2) of the proof we have already seen that \[\left(k^o_1\dfrac{\partial g}{\partial x_1}+k^o_2\dfrac{\partial g}{\partial x_2}+k^o_3\dfrac{\partial g}{\partial x_3}\right)\neq 0.\] Thus, the proof of Equation \ref{eq:ch4:CrossProductOfTangentVectors} is finished.
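The cross-product formula for $\bar v_0\wedge\bar v_1$ displayed in step (4) can be verified symbolically. In the sketch below, `g1, g2, g3` are placeholders for the components of $\nabla_{\bar x}g(d^o,\bar x^o)$ and `k1, k2, k3` for $\bar k^o$:

```python
import sympy as sp

# Symbolic verification of the displayed formula for v0 ∧ v1.
# grad sigma_0 = grad g ∧ k (step (3)) and grad sigma_1 = (0, -k3, k2).
g1, g2, g3, k1, k2, k3 = sp.symbols('g1 g2 g3 k1 k2 k3')
a = sp.Matrix([g1, g2, g3])          # stands for grad_x g(d^o, x^o)
k = sp.Matrix([k1, k2, k3])          # stands for k^o

v0 = a.cross(a.cross(k))             # grad g ∧ grad sigma_0
v1 = a.cross(sp.Matrix([0, -k3, k2]))  # grad g ∧ grad sigma_1

claimed = -(k3*g2 - k2*g3) * (k1*g1 + k2*g2 + k3*g3) * a
diff = (v0.cross(v1) - claimed).expand()   # zero vector
```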
\item[(5)] From Equation \ref{eq:ch4:ChoiceOfEpsilonInMultiplicityProof} (page \pageref{eq:ch4:ChoiceOfEpsilonInMultiplicityProof}) one has \[ M_1(\bar k^o,\bar t^o)=\epsilon\dfrac{d^oP_0(\bar t^o)}{\sqrt{h(\bar t^o)}}G_1(\bar k^o,\bar t^o). \] Therefore: \begin{equation} \sqrt{h(\bar t^o)} (k^o_2P_3(\bar t^o)-k^o_3P_2(\bar t^o))=d^oP_0(\bar t^o)(k^o_2n_3(\bar t^o)-k^o_3n_2(\bar t^o)). \end{equation} Multiplying by $\dfrac{\beta(\bar t^o)}{P^{\mu+1}_0(\bar t^o)}$, it holds that \[ \dfrac{\beta(\bar t^o)\sqrt{h(\bar t^o)}}{P^{\mu}_0(\bar t^o)}\left(k^o_2\dfrac{P_3(\bar t^o)}{P_0(\bar t^o)}-k^o_3\dfrac{P_2(\bar t^o)}{P_0(\bar t^o)}\right)= d^o\left(k^o_2\dfrac{\beta(\bar t^o)n_3(\bar t^o)}{P^{\mu}_0(\bar t^o)}-k^o_3\dfrac{\beta(\bar t^o)n_2(\bar t^o)}{P^{\mu}_0(\bar t^o)}\right). \] Using Equation \ref{eq:ch4:ImplicitAndParamNormalVectors_Multip} (page \pageref{eq:ch4:ImplicitAndParamNormalVectors_Multip}), one obtains (recall that $\bar y^o=P(\bar t^o)$): \begin{equation}\label{eq:ch4:Multiplicity_BranchPLusSign} \sqrt{h_{\operatorname{imp}}(\bar y^o)}(k^o_2y^o_3-k^o_3y^o_2)-d^o(k^o_2f_3(\bar y^o)-k^o_3f_2(\bar y^o))=0. \end{equation} Note also that, since $\sqrt{h(\bar t^o)} (k^o_2P_3(\bar t^o)-k^o_3P_2(\bar t^o))\neq 0$, one has \begin{equation}\label{eq:ch4:Multiplicity_BranchMinusSign} \sqrt{h_{\operatorname{imp}}(\bar y^o)}(k^o_2y^o_3-k^o_3y^o_2)+d^o(k^o_2f_3(\bar y^o)-k^o_3f_2(\bar y^o))\neq 0. \end{equation} Observe that, if the sign $\epsilon=-1$ is used in the offsetting construction (see step (1) of the proof), the results in Equations \ref{eq:ch4:Multiplicity_BranchPLusSign} and \ref{eq:ch4:Multiplicity_BranchMinusSign} are reversed.
\noindent Recall (see Equation \ref{sys:ch4:AuxiliaryCurvesSystem}, page \pageref{sys:ch4:AuxiliaryCurvesSystem}) that the auxiliary polynomial $s_1$ is given by: \[s_1(d,\bar k, \bar t)=h(\bar t)(k_2P_3-k_3P_2)^2-d^2P_0(\bar t)^2(k_2n_3-k_3n_2)^2.\] Thus, one has: \[\begin{array}{l} \dfrac{\beta^2(\bar t)}{P^{2\mu+2}_0(\bar t)}s_1(d,\bar k, \bar t)=\\[5mm] \dfrac{\beta^2(\bar t)}{P^{2\mu}_0(\bar t)} \left(k_2\dfrac{P_3(\bar t)}{P_0(\bar t)}-k_3\dfrac{P_2(\bar t)}{P_0(\bar t)}\right)^2- d^2\left(k_2\dfrac{\beta(\bar t)n_3(\bar t)}{P^{\mu}_0(\bar t)}-k_3\dfrac{\beta(\bar t)n_2(\bar t)}{P^{\mu}_0(\bar t)}\right)^2. \end{array} \] And substituting $\bar y=P(\bar t)$ in $\dfrac{\beta^2(\bar t)}{P^{2\mu+2}_0(\bar t)}s_1(d,\bar k, \bar t)$, one obtains: \begin{equation}\label{eq:ch4:AuxiliaryCurveImplicitVersion} \dfrac{\beta^2(\bar t)}{P^{2\mu+2}_0(\bar t)}s_1(d,\bar k, \bar t)= h_{\operatorname{imp}}(\bar y)(k_2y_3-k_3y_2)^2-d^2(k_2f_3(\bar y)-k_3f_2(\bar y))^2. \end{equation} Let us consider the polynomial $\sigma'_1\in\C[d,\bar k,\bar y]$ defined by \[\sigma'_1(d,\bar k,\bar y)=h_{\operatorname{imp}}(\bar y)(k_2y_3-k_3y_2)^2-d^2(k_2f_3(\bar y)-k_3f_2(\bar y))^2,\] and let $\Sigma'_1(d^o,\bar k^o)\subset\C^3$ be the algebraic closed set defined by the equation $\sigma'_1(d^o,\bar k^o,\bar y)=0$. Let $\bar\tau=(\tau^1,\tau^2)$, and let $\cQ_1(\bar\tau)$ be a place of $\cT^a_1(d^o,\bar k^o)$ centered at $\bar t^o$. We assume that $\cQ_1(\bar 0)=\bar t^o$. Since $T_1(d^o,\bar k^o,1,\cQ_1(\bar\tau))=0$ identically in $\bar\tau$, from Equation \ref{eq:ch4:ReducedAuxiliaryPolynomials_2} (page \pageref{eq:ch4:ReducedAuxiliaryPolynomials}) it follows that \[s_1(d^o,\bar k^o,\cQ_1(\bar\tau))=S_1(d^o,\bar k^o,1,\cQ_1(\bar\tau))=0\] identically in $\bar\tau$. 
Thus, from Equation \ref{eq:ch4:AuxiliaryCurveImplicitVersion} (recall that $\bar y=P(\bar t)$ in the lhs of Equation \ref{eq:ch4:AuxiliaryCurveImplicitVersion}) one has: \[\sigma'_1(d^o,\bar k^o,P(\cQ_1(\bar\tau)))=0\] identically in $\bar\tau$. Note that: \[ \sigma'_1(d,\bar k^o,\bar y)=\sigma'_{1,+}(d,\bar k^o,\bar y)\sigma'_{1,-}(d,\bar k^o,\bar y) \] with \[ \begin{cases} \sigma'_{1,+}(d,\bar k^o,\bar y)=\sqrt{h_{\operatorname{imp}}(\bar y)}(k^o_2y_3-k^o_3y_2)+d(k^o_2f_3(\bar y)-k^o_3f_2(\bar y)),\\[3mm] \sigma'_{1,-}(d,\bar k^o,\bar y)=\sqrt{h_{\operatorname{imp}}(\bar y)}(k^o_2y_3-k^o_3y_2)-d(k^o_2f_3(\bar y)-k^o_3f_2(\bar y)). \end{cases} \] The functions $\sigma'_{1,+}(d,\bar k^o,\bar y)$ and $\sigma'_{1,-}(d,\bar k^o,\bar y)$ are analytic in the neighborhood $U^1$ of $(d^o,\bar x^o)$ introduced in step (1) of the proof. Therefore: \[\sigma'_{1,+}(d^o,\bar k^o,P(\cQ_1(\bar\tau)))\sigma'_{1,-}(d^o,\bar k^o,P(\cQ_1(\bar\tau)))=0,\] identically in $\bar\tau$. However, evaluating at $\bar\tau=\bar 0$, and taking Equations \ref{eq:ch4:Multiplicity_BranchPLusSign} and \ref{eq:ch4:Multiplicity_BranchMinusSign} into account, one sees that \[\sigma'_{1,+}(d^o,\bar k^o,\bar y^o)\neq 0,\mbox{ while }\sigma'_{1,-}(d^o,\bar k^o,\bar y^o)=0.\] By the analytic character of the functions, one concludes that \[\sigma'_{1,-}(d^o,\bar k^o,P(\cQ_1(\bar\tau)))=0,\] identically in $\bar\tau$. Dividing by $\sqrt{h_{\operatorname{imp}}(P(\cQ_1(\bar\tau)))}$, this relation implies that: {\small \[ k^o_2\left(\dfrac{P_3(\cQ_1(\bar\tau))}{P_0(\cQ_1(\bar\tau))}+d^o\dfrac{f_3(P(\cQ_1(\bar\tau)))}{\sqrt{h_{\operatorname{imp}}(P(\cQ_1(\bar\tau)))}}\right) - k^o_3\left(\dfrac{P_2(\cQ_1(\bar\tau))}{P_0(\cQ_1(\bar\tau))}+d^o\dfrac{f_2(P(\cQ_1(\bar\tau)))}{\sqrt{h_{\operatorname{imp}}(P(\cQ_1(\bar\tau)))}}\right)=0. 
\]} \noindent That is, \[k^o_3\varphi_2(d^o,P(\cQ_1(\bar\tau)))-k^o_2\varphi_3(d^o,P(\cQ_1(\bar\tau)))=0\] identically in $\bar\tau$, where $\bar\varphi=(\varphi_1,\varphi_2,\varphi_3)$ was defined in step (1) of the proof (see Equation \ref{eq:ch4:DefinitionVarphiMap_Multiplicity}, page \pageref{eq:ch4:DefinitionVarphiMap_Multiplicity}). With the notation introduced in step (3) of the proof (see Equation \ref{eq:ch4:SigmaPolynomials}, page \pageref{eq:ch4:SigmaPolynomials}), this is \[ \sigma_1(d^o,\bar k^o,\bar\varphi(d^o,P(\cQ_1(\bar\tau))))=0, \] identically in $\bar\tau$. This implies that if ${\mathcal B}_1$ is the branch of $\cT^a_1(d^o,\bar k^o)$ at $\bar t^o$ determined by $\cQ_1(\bar\tau)$, then \[\bar\nu_{d^o}({\mathcal B}_1)\subset{\mathcal C}_1(d^o,\bar k^o),\] where $\bar\nu_{d^o}$ was defined in Equation \ref{eq:ch4:DefinitionNu_Multiplicity} (page \pageref{eq:ch4:DefinitionNu_Multiplicity}), and ${\mathcal C}_1(d^o,\bar k^o)$ was introduced at the end of step (3) of the proof.
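The splitting $\sigma'_1=\sigma'_{1,+}\sigma'_{1,-}$ used in step (5) is simply a difference of squares over the extension by $\sqrt{h_{\operatorname{imp}}}$; a one-line symbolic check, with `h`, `A`, `B` abbreviating $h_{\operatorname{imp}}(\bar y)$, $k^o_2y_3-k^o_3y_2$ and $k^o_2f_3(\bar y)-k^o_3f_2(\bar y)$ respectively:

```python
import sympy as sp

# sigma'_1 = sigma'_{1,+} * sigma'_{1,-} as a difference of squares.
h, d, A, B = sp.symbols('h d A B')
lhs = h*A**2 - d**2*B**2
rhs = (sp.sqrt(h)*A + d*B) * (sp.sqrt(h)*A - d*B)
```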
\item[(6)] Let $\cQ_0(\bar\tau)$ be a place of $\cT^a_0(\bar k^o)$ centered at $\bar t^o$. We assume that $\cQ_0(\bar 0)=\bar t^o$. Since $T_0(\bar k^o,1,\cQ_0(\bar\tau))=0$ identically in $\bar\tau$, from Equation \ref{eq:ch4:ReducedAuxiliaryPolynomials} (page \pageref{eq:ch4:ReducedAuxiliaryPolynomials}) it follows that \[s_0(\bar k^o,\cQ_0(\bar\tau))=S_0(\bar k^o,1,\cQ_0(\bar\tau))=0,\] identically in $\bar\tau$. That is, \begin{equation}\label{eq:ch4:S0InAPLace_Multiplicity} s_0(\bar k^o,\cQ_0(\bar\tau))={\rm det}\left( \begin{array}{ccc} k^o_1&k^o_2&k^o_3\\ P_1(\cQ_0(\bar\tau))&P_2(\cQ_0(\bar\tau))&P_3(\cQ_0(\bar\tau))\\ n_1(\cQ_0(\bar\tau))&n_2(\cQ_0(\bar\tau))&n_3(\cQ_0(\bar\tau)) \end{array} \right)=0, \end{equation} identically in $\bar\tau$. Multiplying this by $\dfrac{\beta(\cQ_0(\bar\tau))}{P^{\mu+1}_0(\cQ_0(\bar\tau))}$, one has: \[ {\rm det}\left( \begin{array}{ccc} k^o_1&k^o_2&k^o_3\\[3mm] \dfrac{P_1(\cQ_0(\bar\tau))}{P_0(\cQ_0(\bar\tau))}&\dfrac{P_2(\cQ_0(\bar\tau))}{{P_0(\cQ_0(\bar\tau))}}&\dfrac{P_3(\cQ_0(\bar\tau))}{P_0(\cQ_0(\bar\tau))}\\[5mm] \dfrac{\beta(\cQ_0(\bar\tau))n_1(\cQ_0(\bar\tau))}{P_0^{\mu}(\cQ_0(\bar\tau))}&\dfrac{\beta(\cQ_0(\bar\tau))n_2(\cQ_0(\bar\tau))}{P_0^{\mu}(\cQ_0(\bar\tau))}& \dfrac{\beta(\cQ_0(\bar\tau))n_3(\cQ_0(\bar\tau))}{P_0^{\mu}(\cQ_0(\bar\tau))} \end{array} \right)=0, \] identically in $\bar\tau$. 
Using Equation \ref{eq:ch4:ImplicitAndParamNormalVectors_Multip} (page \pageref{eq:ch4:ImplicitAndParamNormalVectors_Multip}), this implies that: \[{\rm det}(\bar k^o,P(\cQ_0(\bar\tau)),\nabla f(P(\cQ_0(\bar\tau))))=0.\] Since \[ \bar\varphi(d^o,P(\cQ_0(\bar\tau)))=P(\cQ_0(\bar\tau))\pm d^o\dfrac{\nabla f(P(\cQ_0(\bar\tau)))}{\sqrt{h_{\operatorname{imp}}(P(\cQ_0(\bar\tau)))}} \] and the second term in the sum is parallel to $\nabla f(P(\cQ_0(\bar\tau)))$, we have \[{\rm det}(\bar k^o,\bar\varphi(d^o,P(\cQ_0(\bar\tau))),\nabla f(P(\cQ_0(\bar\tau))))=0.\] Besides, by the fundamental property of the offset (Proposition \ref{prop:ch1:FundamentalPropertyOffsets}, page \pageref{prop:ch1:FundamentalPropertyOffsets}), and the construction in step (1) of the proof, the vectors \[\nabla f(\bar y)\mbox{ and }\nabla_{\bar x}g(d^o,\bar\varphi(d^o,\bar y))\] are parallel for every value of $(d^o,\bar y)$ in $V^1$. It follows that \[{\rm det}(\bar k^o,\bar\varphi(d^o,P(\cQ_0(\bar\tau))),\nabla_{\bar x}g(d^o,\bar\varphi(d^o,P(\cQ_0(\bar\tau)))))=0,\] identically in $\bar\tau$. Recalling the definition of $\sigma_0$ in Equation \ref{eq:ch4:SigmaPolynomials} (page \pageref{eq:ch4:SigmaPolynomials}), this implies that \[\sigma_0(d^o,\bar k^o,\bar\varphi(d^o,P(\cQ_0(\bar\tau))))=0,\] identically in $\bar\tau$. It follows that, if ${\mathcal B}_0$ is the branch of $\cT^a_0(\bar k^o)$ at $\bar t^o$ determined by $\cQ_0(\bar\tau)$, then \[\bar\nu_{d^o}({\mathcal B}_0)\subset{\mathcal C}_0(d^o,\bar k^o),\] where $\bar\nu_{d^o}$ was defined in Equation \ref{eq:ch4:DefinitionNu_Multiplicity} (page \pageref{eq:ch4:DefinitionNu_Multiplicity}), and ${\mathcal C}_0(d^o,\bar k^o)$ was introduced at the end of step (3) of the proof. \end{enumerate} Now we can finish the proof of the proposition. 
In steps (5) and (6) of the proof we have shown that any branch at $\bar t^o$ of the curves $\cT^a_0(\bar k^o)$ or $\cT^a_1(d^o,\bar k^o)$ is mapped by $\bar\nu_{d^o}$ respectively into the curve ${\mathcal C}_0(d^o,\bar k^o)$ or ${\mathcal C}_1(d^o,\bar k^o)$ (these curves are constructed in step (3)). Since $d\bar\nu_{d^o}$ is an isomorphism of vector spaces (see step (1)), it follows that: \begin{itemize}
\item By the results in step (3), there is only one branch at $\bar t^o$ of each of the curves $\cT^a_0(\bar k^o)$ and $\cT^a_1(d^o,\bar k^o)$. Besides, since the rank of the Jacobian matrix (and therefore, the condition in \cite{Cox1997}, Theorem 9, page 480) is preserved under analytic isomorphisms, the unique branch of each of the curves $\cT^a_0(\bar k^o)$ and $\cT^a_1(d^o,\bar k^o)$ passing through $\bar t^o$ is regular at that point.
\item By the results in step (4), if $\ell_1$ and $\ell_0$ are the two tangent lines of these two branches, then $\ell_1$ and $\ell_0$ are different. \end{itemize} Then \[\mathop{\mathrm{mult}}\nolimits_{\bar t^o}(\cT_0,\cT_1)=1\] follows from Theorem 5.10 in \cite{Walker1950} (page 114). \end{proof}
\subsection{The degree formula} \label{subsec:ch4:DegreeFormula}
\noindent Before the statement of the degree formula we need to introduce some more notation and a technical lemma. Let \[ R(\bar c,d,\bar k,\bar t)=\operatorname{Res}_{t_0}\left(T_0(\bar k,\bar t_h),T(\bar c,d,\bar k,\bar t_h)\right) \] (for the definition of $T_0$ and $T$ see Equations \ref{eq:ch4:ReducedAuxiliaryPolynomials} and \ref{eq:ch4:ReducedAuxiliaryPolynomials_3}, on page \pageref{eq:ch4:ReducedAuxiliaryPolynomials}). Then $R$ factors as follows: \[ R(\bar c,d,\bar k,\bar t)= N_1(d,\bar k,\bar t)M_3(\bar c,d,\bar k,\bar t) \] where $N_1(d,\bar k,\bar t)=\operatorname{Con}_{\bar c}(R)$ and $M_3(\bar c,d,\bar k,\bar t)=\operatorname{PP}_{\bar c}(R)$.
\noindent Besides, $N_1$ factors as follows: \[N_1(d,\bar k,\bar t)=M_1(\bar t)M_2(d,\bar k,\bar t)\] where $M_1(\bar t)=\operatorname{Con}_{(d,\bar k)}(N_1)$ and $M_2(d,\bar k,\bar t)=\operatorname{PP}_{(d,\bar k)}(N_1)$. Thus, one has \[ R(\bar c,d,\bar k,\bar t)= M_1(\bar t)M_2(d,\bar k,\bar t)M_3(\bar c,d,\bar k,\bar t) \] and \[M_2(d,\bar k,\bar t)=\operatorname{PP}_{(d,\bar k)}(\operatorname{Con}_{\bar c}(R)).\] Note that $M_1, M_2$ and $M_3$ are homogeneous polynomials in $\bar t=(t_1,t_2)$.
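The factorization of $R$ through contents and primitive parts can be reproduced on a small scale. The polynomials in the following sympy sketch are toy examples (they are not the $T_0$ and $T$ of the text); the sketch extracts $\operatorname{Con}_{\bar c}(R)$ and $\operatorname{PP}_{\bar c}(R)$:

```python
import sympy as sp
from functools import reduce

# Toy illustration of the factorization R = Con_c(R) * PP_c(R).
# T0 and T below are ad hoc polynomials, not the ones of the text.
t0, t1, t2, d, c1, c2 = sp.symbols('t0 t1 t2 d c1 c2')
T0 = t0 - t1
T = c1*(t0**2 - d*t1*t2) + c2*(t0*t2 - t1**2)

R = sp.resultant(T0, T, t0)
coeffs = sp.Poly(R, c1, c2).coeffs()   # coefficients of R w.r.t. c
content = reduce(sp.gcd, coeffs)       # Con_c(R), here t1 up to a unit
primitive = sp.cancel(R / content)     # PP_c(R)
```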
\noindent The following lemma deals with the specialization of the resultant $R(\bar c,d,\bar k,\bar t)$. More precisely, for $(d^o,\bar k^o)\in\C\times\C^3$ we denote: \[ T^{\bar k^o}_0(\bar t_h)=T_0(\bar k^o,\bar t_h), \quad T^{(d^o,\bar k^o)}(\bar c,\bar t_h)=T(\bar c,d^o,\bar k^o,\bar t_h), \] and \[ R^{(d^o,\bar k^o)}(\bar c,\bar t)= \operatorname{Res}_{t_0}\left(T^{\bar k^o}_0(\bar t_h),T^{(d^o,\bar k^o)}(\bar c,\bar t_h)\right). \]
\begin{Lemma}\label{lem:ch4:ResultantSpecializationForDegreeFormula} Let $\Omega_4$ be as in Proposition \ref{prop:ch4:MultiplicityAtNonFakePoints} (page \pageref{prop:ch4:MultiplicityAtNonFakePoints}). There exists a non-empty open subset $\Omega_5\subset\Omega_4$, such that for $(d^o,\bar k^o)\in\Omega_5$ the following hold: \begin{enumerate}
\item[(a)] The resultant $R(\bar c,d,\bar k,\bar t)$ specializes properly:
\[
R^{(d^o,\bar k^o)}(\bar c,\bar t)=R(\bar c,d^o,\bar k^o,\bar t)=
M_1(\bar t)M_2(d^o,\bar k^o,\bar t)M_3(\bar c,d^o,\bar k^o,\bar t).
\]
\item[(b)] The content w.r.t. $\bar c$ also specializes properly:
\[\operatorname{Con}_{\bar c}(R^{(d^o,\bar k^o)})(\bar t)=M_1(\bar t)M_2(d^o,\bar k^o,\bar t).\]
\item[(c)] The coprimality of $M_1$ and $M_2$ is invariant under specialization: \[\gcd\left(M_1(\bar t),M_2(d^o,\bar k^o,\bar t)\right)=1.\] \end{enumerate} \end{Lemma} \begin{proof} \noindent For (a), consider $T_0$ and $T$ as polynomials in $\C[\bar c,d,\bar k,\bar t][t_0]$. Let $\operatorname{lc}(T_0)(\bar k,\bar t)$ (resp. $\operatorname{lc}(T)(\bar c,d,\bar k,\bar t)$) be the leading coefficient w.r.t. $t_0$ of $T_0$ (resp. $T$). Take $A_1(\bar k)$ (resp. $B_1(d,\bar k)$) to be the coefficient of a term of $\operatorname{lc}(T_0)(\bar k,\bar t)$ (resp. $\operatorname{lc}(T)(\bar c,d,\bar k,\bar t)$) of degree equal to ${\rm deg}_{\bar t}(\operatorname{lc}(T_0)(\bar k,\bar t))$ (resp. ${\rm deg}_{\{\bar c,\bar t\}}(\operatorname{lc}(T)(\bar c,d,\bar k,\bar t))$). Now, if $A_1(\bar k^o)B_1(d^o,\bar k^o)\neq 0$, then (a) holds. Thus, set \[\Omega^1_5=\Omega_4\cap\{(d^o,\bar k^o)/A_1(\bar k^o)B_1(d^o,\bar k^o)\neq 0\}.\] For (b), we know that $M_3(\bar c,d,\bar k,\bar t)$ is primitive w.r.t. $\bar c$. If $M_3(\bar c,d,\bar k,\bar t)$ (considered as a polynomial in $\C[d,\bar k,\bar t][\bar c]$) has only one term, then its coefficient w.r.t. $\bar c$ must be constant, and so $M_3$ remains primitive under specialization of $(d,\bar k)$. Suppose, on the other hand, that $M_3(\bar c,d,\bar k,\bar t)$ has more than one term, and let: \[M_{3,1}(d,\bar k,\bar t),\ldots,M_{3,\rho}(d,\bar k,\bar t)\] be an (arbitrary) ordering of its non-zero coefficients w.r.t. $\bar c$. Let $\Gamma_1(d,\bar k,\bar t)=M_{3,1}(d,\bar k,\bar t)$, and for $j=2,\ldots,\rho$ let \[ \Gamma_j(d,\bar k,\bar t)=\gcd\left(M_{3,j}(d,\bar k,\bar t),\Gamma_{j-1}(d,\bar k,\bar t)\right). \] Note that, for $j=1,\ldots,\rho$, the $M_{3,j}$ are homogeneous in $\bar t$ of the same degree. Thus, the $\Gamma_j$ are either homogeneous in $\bar t$, or they only depend on $(d,\bar k)$.
\noindent Since $M_3$ is primitive w.r.t. $\bar c$, let $j^o$ be the first index value in $1,\ldots,\rho$ for which $\Gamma_{j^o}(d,\bar k,\bar t)=1$. If $j^o=1$, then $M_{3,1}(d,\bar k,\bar t)$ is a constant, and in this case it is obvious that $M_3$ remains primitive under specialization of $(d,\bar k)$. If $j^o>1$, we consider: \[ {\rm Res}_{t_1}(M_{3,j^o}(d,\bar k,\bar t),\Gamma_{j^o-1}(d,\bar k,\bar t)). \] This resultant is not identically zero, because $\gcd\left(M_{3,j^o},\Gamma_{j^o-1}\right)=\Gamma_{j^o}=1$. Since the involved polynomials are homogeneous in $\bar t$, this resultant is of the form $t_2^p\Phi(d,\bar k)$ for some $p\in\N$ and some $\Phi\in\C[d,\bar k]$. Now, because of the construction, if $\Phi(d^o,\bar k^o)\neq 0$, the specialization $M_3(\bar c,d^o,\bar k^o,\bar t)$ is primitive w.r.t. $\bar c$. Thus, set: \[\Omega^2_5=\Omega^1_5\cap\{(d^o,\bar k^o)/\Phi(d^o,\bar k^o)\neq 0\}.\] For (c) we use a similar construction. If either $M_1$ or $M_2$ does not depend on $\bar t$, the result is trivial. Otherwise, $M_1$ and $M_2$ are both homogeneous polynomials in $\bar t$, so the resultant \[ {\rm Res}_{t_1}(M_1(\bar t),M_2(d,\bar k,\bar t)) \] is of the form $t_2^{\tilde p}\tilde\Phi_1(d,\bar k)$ for some $\tilde p\in\N$ and some $\tilde \Phi_1\in\C[d,\bar k]$. Thus, if $\tilde\Phi_1(d^o,\bar k^o)\neq 0$, then $M_1(\bar t)$ and $M_2(d^o,\bar k^o,\bar t)$ do not have common factors of positive degree in $t_1$. A similar construction can be carried out w.r.t. $t_2$, obtaining a certain $\tilde\Phi_2$. Thus, set: \[\Omega^3_5=\Omega^2_5\cap\{(d^o,\bar k^o)/\tilde\Phi_1(d^o,\bar k^o)\tilde\Phi_2(d^o,\bar k^o)\neq 0\}.\] The construction shows that the lemma holds for $\Omega_5=\Omega^3_5$. \end{proof}
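The idea behind part (a) of the lemma, namely that a resultant commutes with specialization as long as the leading coefficients w.r.t. $t_0$ do not vanish at the chosen values, can be tested on a toy example (the polynomials and values below are ad hoc):

```python
import sympy as sp

# Toy check: specializing the parameters before or after taking the
# resultant gives the same answer when the leading coefficients w.r.t.
# t0 do not vanish at the chosen values.
t0, t1, a, b = sp.symbols('t0 t1 a b')
f = a*t0**2 + t1*t0 + 1
g = t0**2 + b*t1**2

R = sp.resultant(f, g, t0)
spec = {a: 2, b: 3}   # lc(f) = a and lc(g) = 1 do not vanish here
R_then_spec = R.subs(spec)
spec_then_R = sp.resultant(f.subs(spec), g.subs(spec), t0)
```

Choosing instead a value with $a=0$ would drop the degree of $f$ in $t_0$, and the two computations would in general no longer agree; this is exactly what the sets $\Omega_5^i$ rule out.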
\noindent Finally, we are ready to state and prove the degree formula.
\begin{Theorem}[\sf Total Degree Formula for the Offset of a Parametric Surface] \label{thm:ch4:DegreeFormula} Let $T_0$ and $T$ be as in Equations \ref{eq:ch4:ReducedAuxiliaryPolynomials} and \ref{eq:ch4:ReducedAuxiliaryPolynomials_3} (page \pageref{eq:ch4:ReducedAuxiliaryPolynomials}). Then: \[ \fbox{$ m\cdot\delta= {\rm deg}_{\{\bar t\}}\left( \operatorname{PP}_{(d,\bar k)}\left( \operatorname{Con}_{\bar c}\left( \operatorname{Res}_{t_0}\left(T_0(\bar k,\bar t_h),T(\bar c,d,\bar k,\bar t_h)\right)\right)\right)\right)= {\rm deg}_{\bar t}(M_2(d,\bar k,\bar t)) $} \] where $m$ is the tracing index of $P$ (see Remark \ref{rem:ch4:TracingIndex_m}, page \pageref{rem:ch4:TracingIndex_m}), and if $g(d,\bar x)$ is the generic offset polynomial of $\Sigma$, then $\delta={\rm deg}_{\bar x}(g(d,\bar x))$. \end{Theorem} \begin{proof}\label{proof:ch4:DegreeFormula} Recall that (see Remark \ref{rem:ch4:OffsetDegreeAsCardinal_Projective}, page \pageref{rem:ch4:OffsetDegreeAsCardinal_Projective}), since $\Omega_3\subset\Omega_2$, if $(d^o,\bar k^o)\in\Omega_3$ it holds that \[m\delta=\#\left({\mathcal A}_h\cap{\Psi_5^{P_h}}(d^o,\bar k^o)\right).\] Thus, to prove the theorem it suffices to show that for any of these $(d^o,\bar k^o)$, it holds that \[ {\rm deg}_{\bar t}\left(M_2(d,\bar k,\bar t)\right)=\#\left({\mathcal A}_h\cap{\Psi_5^{P_h}}(d^o,\bar k^o)\right). \] In order to do this, we will specialize at $(d^o,\bar k^o)$. More specifically, we will show that there is an open non-empty subset $\Omega_6\subset\Omega_5$, such that, if $(d^o,\bar k^o)\in\Omega_6$, then ${\rm deg}_{\bar t}\left(M_2(d^o,\bar k^o,\bar t)\right)$ equals $\#\left({\mathcal A}_h\cap{\Psi_5^{P_h}}(d^o,\bar k^o)\right)$.
\noindent Let $\Omega_5$ be as in Lemma \ref{lem:ch4:ResultantSpecializationForDegreeFormula} (page \pageref{lem:ch4:ResultantSpecializationForDegreeFormula}), and let $(d^o,\bar k^o)\in\Omega_5$. Note that $M_1(\bar t)$ and $M_2(d^o,\bar k^o,\bar t)$ both factor as products of linear factors. Let us see that if $M_1(\bar t^o)=0$, with $\bar t^o=(t^o_1,t^o_2)$, and there is $t^o_0$ such that $(t^o_0:t^o_1:t^o_2)\in{\Psi_5^P}(d^o,\bar k^o)$, then $\bar t^o_h\not\in{\mathcal A}_h$. In fact, if $t^o_0=0$, the result follows from Proposition \ref{prop:ch4:SystemS5HasOnlyInvariantSolutionsAtInfinity} (page \pageref{prop:ch4:SystemS5HasOnlyInvariantSolutionsAtInfinity}) and Proposition \ref{prop:ch4:NonInvariantSolutionsCanBeAvoided} (page \pageref{prop:ch4:NonInvariantSolutionsCanBeAvoided}). Thus, w.l.o.g. we suppose that $\bar t^o_h=(1:t^o_1:t^o_2)$, with $\bar t^o_h\in{\mathcal A}_h$. Then using Proposition \ref{prop:ch4:NonInvariantSolutionsCanBeAvoided} (page \pageref{prop:ch4:NonInvariantSolutionsCanBeAvoided}), we get $\bar t^o_h\not\in\cI^{P_h}_5(\Omega_2)$. This is a contradiction, since $M_1(\bar t)$ does not depend on $(d,\bar k)$.
\noindent We will now show that there is an open set $\Omega_6\subset\Omega_5$ such that if $(d^o,\bar k^o)\in\Omega_6$ and $M_2(d^o,\bar k^o,\bar t^o)=0$, then there is $t^o_0$ such that $\bar t^o_h=(t^o_0,\bar t^o)\in{\Psi_5^P}(d^o,\bar k^o)$. This follows from Lemma \ref{lem:ch1:GeneralizedResultants}, page \pageref{lem:ch1:GeneralizedResultants}. Let us define: \[\begin{cases}
\Omega_6^1=\Omega_5\cap\{(d^o,\bar k^o)\,|\,{\rm deg}_{t_0}(T_i(d^o,\bar k^o,\bar t_h))={\rm deg}_{t_0}(T_i(d,\bar k,\bar t_h))\mbox{ for }i=1,2,3\} \\
\Omega_6^2=\Omega_6^1\cap\{(d^o,\bar k^o)\,|\,{\rm deg}_{\bar t_h}(T_i(d^o,\bar k^o,\bar t_h))={\rm deg}_{\bar t_h}(T_i(d,\bar k,\bar t_h))\mbox{ for }i=1,2,3\} \\
\Omega_6^3=\Omega_6^2\cap\{(d^o,\bar k^o)\,|\,\gcd(T_1(d^o,\bar k^o,\bar t_h),T_2(d^o,\bar k^o,\bar t_h),T_3(d^o,\bar k^o,\bar t_h))=1\} \end{cases}\] The sets $\Omega_6^1$ and $\Omega_6^2$ are open and non-empty because they are defined by the non-vanishing of the corresponding leading coefficients. The fact that $\Omega_6^3$ is open and non-empty follows from a similar argument to the proof of Lemma \ref{lem:ch4:ResultantSpecializationForDegreeFormula}(c) (page \pageref{lem:ch4:ResultantSpecializationForDegreeFormula}). Finally, take $\Omega_6=\Omega_6^3$. Then, (i), (ii) and (iii) in Lemma \ref{lem:ch1:GeneralizedResultants} hold because of the construction of $\Omega_6^i$ for $i=1,2,3$, respectively. And also $$\operatorname{lc}_{t_0}(T_0)(\bar t^o)\cdot\operatorname{lc}_{t_0}(T)(\bar c,\bar t^o)\neq 0$$ holds because of the construction of $\Omega_5^1$ in Lemma \ref{lem:ch4:ResultantSpecializationForDegreeFormula} (page \pageref{lem:ch4:ResultantSpecializationForDegreeFormula}), and because $\Omega_6\subset\Omega_5$.
\noindent Let $(d^o,\bar k^o)\in\Omega_6$. If $\bar t^o_h\in{\mathcal A}_h\cap\Psi^{P_h}_5(d^o,\bar k^o)$, then $M_1(\bar t^o)M_2(d^o,\bar k^o,\bar t^o)=0$. Since we have seen that $M_1(\bar t^o)\neq 0$, one concludes that $M_2(d^o,\bar k^o,\bar t^o)=0$. Conversely, let $\bar t^o$ be such that $M_2(d^o,\bar k^o,\bar t^o)=0$. Then, by the construction of $\Omega_6$, there is $t^o_0$ such that $(t^o_0:\bar t^o)\in{\Psi_5^P}(d^o,\bar k^o)$. Let us see that $\bar t^o_h\in{\mathcal A}_h$. If $\bar t^o_h\in\cI^{P_h}_5(\Omega_2)$, then because of the invariance, $M_1(\bar t^o)=0$, and this contradicts Lemma \ref{lem:ch4:ResultantSpecializationForDegreeFormula}(c) (page \pageref{lem:ch4:ResultantSpecializationForDegreeFormula}). Thus, $\bar t^o_h\not\in\cI^{P_h}_5(\Omega_2)$, and by Proposition \ref{prop:ch4:NonInvariantSolutionsCanBeAvoided_Part2} (page \pageref{prop:ch4:NonInvariantSolutionsCanBeAvoided_Part2}), one has $\bar t^o_h\in{\mathcal A}_h$.
\noindent Thus, we have shown that for each of the factors of $M_2(d^o,\bar k^o,\bar t)$ there is a point $\bar t^o_h\in{\mathcal A}_h\cap{\Psi_5^{P_h}}(d^o,\bar k^o)$ such that $M_2(d^o,\bar k^o,\bar t^o)=0$, and conversely. Let $L^{(\alpha,\beta)}(\bar t)=\beta t_1-\alpha t_2$ be one of these factors of $M_2(d^o,\bar k^o,\bar t)$, and let $\cL^{(\alpha,\beta)}$ be the line defined by the equation $L^{(\alpha,\beta)}(\bar t)=0$. By Lemma \ref{lem:ch4:ResultantSpecializationForDegreeFormula}(c) (page \pageref{lem:ch4:ResultantSpecializationForDegreeFormula}), one has \begin{equation}\label{eq:ch4:CardinalEqualityInDegreeFormulaProof} \#\left(\cL^{(\alpha,\beta)}\cap{\mathcal A}_h\cap{\Psi_5^{P_h}}(d^o,\bar k^o)\right)=\#\bigl(\cL^{(\alpha,\beta)}\cap{\Psi_5^{P_h}}(d^o,\bar k^o)\bigr). \end{equation} If we define \[p(\alpha,\beta)=\#\bigl(\cL^{(\alpha,\beta)}\cap{\Psi_5^{P_h}}(d^o,\bar k^o)\bigr),\] then we will show that $L^{(\alpha,\beta)}(\bar t)$ appears in $M_2(d^o,\bar k^o,\bar t)$ with exponent equal to $p(\alpha,\beta)$. From this it will follow that: \[ \#\left({\mathcal A}_h\cap{\Psi_5^{P_h}}(d^o,\bar k^o)\right)=\sum_{(\alpha,\beta)}\#\bigl(\cL^{(\alpha,\beta)}\cap{\Psi_5^{P_h}}(d^o,\bar k^o)\bigr)= \sum_{(\alpha,\beta)}p(\alpha,\beta)={\rm deg}_{\bar t}\left(M_2(d,\bar k,\bar t)\right), \] and this will conclude the proof of the theorem.
\noindent To prove our claim, note that ${\mathcal A}_h\cap{\Psi_5^{P_h}}(d^o,\bar k^o)$ is a finite set, and by Proposition \ref{prop:ch4:MultiplicityAtNonFakePoints} (page \pageref{prop:ch4:MultiplicityAtNonFakePoints}), if $\bar t^o_h\in{\mathcal A}_h\cap{\Psi_5^{P_h}}(d^o,\bar k^o)$, then: \begin{equation}\label{eq:ch4:MultiplicityOfIntersection} \min_{i=1,2,3}\left(\mathop{\mathrm{mult}}\nolimits_{\bar t^o}(\cT_0(\bar k^o),\cT_i(d^o,\bar k^o))\right)=1. \end{equation} Recall that \[ T^{\bar k^o}_0(\bar t_h)=T_0(\bar k^o,\bar t_h)\] and \[ T^{(d^o,\bar k^o)}(\bar c,\bar t_h)=T(\bar c,d^o,\bar k^o,\bar t_h) =c_1T_1(d^o,\bar k^o,\bar t_h)+c_2T_2(d^o,\bar k^o,\bar t_h)+c_3T_3(d^o,\bar k^o,\bar t_h).\] For $\bar c^o\in{\C}^3$, let $\cT^{(\bar c^o,d^o,\bar k^o)}$ be the algebraic closed subset of $\P^2$ defined by the equation $T^{(d^o,\bar k^o)}(\bar c^o,\bar t_h)=0$. Note that there is an open set of values $\bar c^o$ for which $\cT^{(\bar c^o,d^o,\bar k^o)}$ is indeed a curve. Let us see that there is an open subset $A(\bar t^o_h)\subset{\C}^3$, such that if $\bar c^o\in A(\bar t^o_h)$, then \[\mathop{\mathrm{mult}}\nolimits_{\bar t^o_h}({\cT}^{\bar k^o}_0,\cT^{(\bar c^o,d^o,\bar k^o)})=1.\] To prove this, let $\cP(\bar\tau)$ be a place of ${\cT}^{\bar k^o}_0$ at $\bar t^o_h$. Then, by Equation \ref{eq:ch4:MultiplicityOfIntersection}, the order of the power series $T^{(d^o,\bar k^o)}(\bar c,\cP(\bar\tau))$ is one. From this, one sees that it suffices to take $A(\bar t^o_h)$ to be the open set of values $\bar c^o$ for which the order of $T^{(d^o,\bar k^o)}(\bar c,\cP(\bar\tau))$ does not increase.
\noindent Let now \[\bar c^o\in\bigcap_{\bar t^o_h\in{\mathcal A}_h\cap{\Psi_5^{P_h}}(d^o,\bar k^o)}A(\bar t^o_h).\] Applying Lemma \ref{lem:ch1:MultiplicityUsingResultants} (page \pageref{lem:ch1:MultiplicityUsingResultants}) to the curves ${\cT}^{\bar k^o}_0$ and $\cT^{(\bar c^o,d^o,\bar k^o)}$, and the line $\cL^{(\alpha,\beta)}$, one concludes that the factor $\beta t_1-\alpha t_2$ appears in $\operatorname{Res}_{t_0}\left(T^{\bar k^o}_0(\bar t_h),T^{(d^o,\bar k^o)}(\bar c^o,\bar t_h)\right)$ with exponent equal to: \[\sum_{\bar t^o_h\in\cL^{(\alpha,\beta)}\cap{\mathcal A}_h\cap{\Psi_5^{P_h}}(d^o,\bar k^o)} \mathop{\mathrm{mult}}\nolimits_{\bar t^o_h}({\cT}^{\bar k^o}_0,\cT^{(\bar c^o,d^o,\bar k^o)})=\#\bigl(\cL^{(\alpha,\beta)}\cap{\mathcal A}_h\cap{\Psi_5^{P_h}}(d^o,\bar k^o)\bigr).\] Taking Equation \ref{eq:ch4:CardinalEqualityInDegreeFormulaProof} into account, this finishes the proof of our claim, and of the theorem. \end{proof}
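The mechanism used at the end of the proof, where intersection multiplicities reappear as exponents of the linear factors of a resultant, can be illustrated with a toy pair of plane curves: for a circle and a transversal line, all intersection multiplicities equal one, and accordingly the resultant w.r.t. $y$ is square-free:

```python
import sympy as sp

# Toy instance of multiplicities as exponents in a resultant: a circle
# and a transversal line meet with multiplicity one at every common
# point, so Res_y of their equations has only simple factors.
x, y = sp.symbols('x y')
f = x**2 + y**2 - 1   # circle
g = y - x             # transversal line

R = sp.resultant(f, g, y)               # equals 2*x**2 - 1
square_free = sp.gcd(R, sp.diff(R, x))  # 1 iff all factors are simple
```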
\noindent We will finish this section with some examples illustrating the use of the degree formula in Theorem \ref{thm:ch4:DegreeFormula} (page \pageref{thm:ch4:DegreeFormula}). The implicit equations in these examples have been obtained with the Computer Algebra System {\hbox{\rm C\kern-.13em o\kern-.07em C\kern-.13em o\kern-.15em A}} (see \cite{CocoaSystem}).
\begin{Example}\label{exam:ch4:HypParabNonSym} Let $\Sigma$ be the surface (a hyperbolic paraboloid) with implicit equation \[y_3-y_1^2+\dfrac{y_2^2}{4}=0.\] A rational --in fact polynomial-- parametrization of $\Sigma$ is given by: \[P(t_1,t_2)=(t_1,2t_2,t_1^2-t_2^2).\] From the form of its first two components, it is clear that this is a proper parametrization. This surface and its offset at $d^o=1$ are illustrated in Figure \ref{fig:HypParabNonSym}. \begin{figure}
\caption{Hyperbolic paraboloid and one of its offsets}
\label{fig:HypParabNonSym}
\end{figure} The homogeneous associated normal vector is \[N(\bar t_h)=(-2t_1,t_2,t_0).\] \noindent Then the auxiliary curves are:\\[3mm] \noindent $T_0(\bar t_h)=2\,k_1\,t_2\,t_0^{2}-k_1\,t_2\,t_1^{2}+k_1\,t_2^{3}-t_1\,t_0^{2}k_2+5\,t_1\,t_0\,k_3\,t_2-2\,k_2\,t_1^{3}+2\,t_1\,k_2\,t_2^{2},$\\[3mm] \noindent$T_1(\bar t_h)=4\,t_1^{6}k_2^{2}-7\,t_1^{4}k_2^{2}t_2^{2}-16\,t_1^{4}k_2\,k_3\,t_2\,t_0+2\,t_1^{2}k_2^{2}t_2^{4}+12\,t_1^{2}k_2\,t_2^{3}k_3\,t_0+16\,t_1^{2}k_3^{2}t_2^{2}t_0^{2}+t_2^{6}k_2^{2}+4\,t_2^{5}k_2\,k_3\,t_0+4\,t_2^{4}k_3^{2}t_0^{2}+t_0^{2}k_2^{2}t_1^{4}-2\,t_0^{2}k_2^{2}t_1^{2}t_2^{2}-4\,t_0^{3}k_2\,t_1^{2}k_3\,t_2+t_0^{2}k_2^{2}t_2^{4}+4\,t_0^{3}k_2\,t_2^{3}k_3+4\,t_0^{4}k_3^{2}t_2^{2}-{d}^{2}t_0^{6}k_2^{2}+2\,{d}^{2}t_0^{5}k_2\,k_3\,t_2-{d}^{2}t_0^{4}k_3^{2}t_2^{2},$\\[3mm] \noindent$T_2(\bar t_h)=4\,t_1^{4}t_0^{2}k_3^{2}-8\,t_1^{5}t_0\,k_3\,k_1+6\,t_1^{3}t_0\,k_3\,k_1\,t_2^{2}+4\,t_1^{6}k_1^{2}-7\,t_1^{4}k_1^{2}t_2^{2}+2\,t_1^{2}k_1^{2}t_2^{4}+t_1^{2}k_3^{2}t_2^{2}t_0^{2}+2\,t_2^{4}t_1\,t_0\,k_3\,k_1+t_2^{6}k_1^{2}+t_0^{4}t_1^{2}k_3^{2}-2\,t_0^{3}t_1^{3}k_3\,k_1+2\,t_0^{3}t_1\,k_3\,k_1\,t_2^{2}+t_0^{2}k_1^{2}t_1^{4}-2\,t_0^{2}k_1^{2}t_1^{2}t_2^{2}+t_0^{2}k_1^{2}t_2^{4}-4\,{d}^{2}t_0^{4}t_1^{2}k_3^{2}-4\,{d}^{2}t_0^{5}k_3\,t_1\,k_1-{d}^{2}t_0^{6}k_1^{2},$\\[3mm] \noindent$T_3(\bar t_h)=t_0^{2} (4\,k_2^{2}t_1^{4}-16\,t_1^{3}k_2\,k_1\,t_2+16\,k_1^{2}t_1^{2}t_2^{2}+k_2^{2}t_1^{2}t_2^{2}-4\,t_2^{3}k_2\,t_1\,k_1+4\,k_1^{2}t_2^{4}+t_0^{2}k_2^{2}t_1^{2}-4\,k_2\,t_1\,t_0^{2}k_1\,t_2+4\,k_1^{2}t_2^{2}t_0^{2}-4\,t_0^{2}{d}^{2}k_2^{2}t_1^{2}-4\,t_0^{2}{d}^{2}t_1\,k_2\,k_1\,t_2-t_0^{2}{d}^{2}k_1^{2}t_2^{2}). $\\[3mm] Denoting, as in the degree formula, \[ R(\bar c,d,\bar k,\bar t)=\operatorname{Res}_{t_0}\left(T_0(\bar k,\bar t_h),T(\bar c,d,\bar k,\bar t_h)\right), \] one has:
\noindent{$ R(\bar c,d,\bar k,\bar t)=
( t_1-t_2) ^{2} ( t_1+t_2) ^{2}( 4\,t_2^{4}c_3^{2}k_1^{4}+t_2^{4}c_2^{2}k_1^{4}+t_2^{4}c_1^{2}k_2^{4}+t_1^{2}t_2^{2}c_3^{2}k_1^{2}k_2^{2}+4\,t_1^{2}t_2^{2}c_1\,c_3\,k_1^{2}k_2^{2}+4\,t_1^{2}t_2^{2}c_2\,c_3\,k_1^{2}k_2^{2}+4\,t_1^{2}t_2^{2}c_2\,c_3\,k_1^{4}+4\,t_1^{2}t_2^{2}c_1\,c_3\,k_2^{4}+4\,t_1^{2}t_2^{2}c_1\,c_2\,k_2^{2}k_3^{2}+4\,t_1^{2}t_2^{2}c_1\,c_2\,k_1^{2}k_3^{2}+17\,t_1^{2}t_2^{2}c_1\,c_3\,k_2^{2}k_3^{2}+17\,t_1^{2}t_2^{2}c_2\,c_3\,k_1^{2}k_3^{2}-4\,t_1^{2}t_2^{2}c_1^{2}k_2^{2}k_3^{2}+17\,t_1^{2}c_2\,c_1\,k_3^{4}t_2^{2}-4\,t_1^{2}t_2^{2}c_2^{2}k_1^{2}k_3^{2}-6\,t_1\,t_2^{3}c_2\,c_3\,k_1^{3}k_2-6\,t_1\,t_2^{3}c_1\,c_3\,k_1\,k_2^{3}+12\,t_1\,t_2^{3}c_1\,c_3\,k_1\,k_2\,k_3^{2}-12\,t_1\,t_2^{3}c_1\,c_2\,k_1\,k_2\,k_3^{2}+12\,t_1\,t_2^{3}c_3^{2}k_1^{3}k_2+2\,t_2^{4}c_1\,c_2\,k_1^{2}k_2^{2}-4\,t_2^{4}c_1\,c_3\,k_1^{2}k_2^{2}-4\,t_2^{4}c_2\,c_3\,k_1^{4}-4\,t_1^{4}c_1\,c_3\,k_2^{4}-2\,t_1^{2}t_2^{2}c_2^{2}k_1^{4}-2\,t_1^{2}t_2^{2}c_1^{2}k_2^{4}+4\,t_1^{4}c_2^{2}k_3^{4}-4\,t_1^{4}c_1\,c_2\,k_2^{2}k_3^{2}+4\,t_1^{4}c_2^{2}k_1^{2}k_3^{2}+2\,t_1^{4}k_1^{2}k_2^{2}c_2\,c_1-4\,t_1^{4}c_2\,c_3\,k_1^{2}k_2^{2}+8\,t_1^{4}c_2\,c_3\,k_2^{2}k_3^{2}-12\,t_1^{3}t_2\,c_3^{2}k_1\,k_2^{3}+6\,t_1^{3}t_2\,c_1\,c_3\,k_1\,k_2^{3}+12\,t_1^{3}t_2\,c_1\,c_2\,k_1\,k_2\,k_3^{2}-12\,t_1^{3}t_2\,c_2\,c_3\,k_1\,k_2\,k_3^{2}+6\,t_1^{3}t_2\,c_2\,c_3\,k_1^{3}k_2-4\,t_1^{2}t_2^{2}c_1\,c_2\,k_1^{2}k_2^{2}-4\,t_2^{4}c_1\,c_2\,k_1^{2}k_3^{2}+8\,t_2^{4}c_1\,c_3\,k_1^{2}k_3^{2}+4\,c_1^{2}t_2^{4}k_3^{4}+4\,t_2^{4}c_1^{2}k_2^{2}k_3^{2}+t_1^{4}k_2^{4}c_1^{2}+4\,t_1^{4}c_3^{2}k_2^{4}+t_1^{4}k_1^{4}c_2^{2})\,\cdot\,( 
-48\,{d}^{2}t_1^{6}t_2^{4}k_2^{6}+64\,{d}^{2}t_1^{2}t_2^{8}k_1^{6}-72\,{d}^{2}t_1^{4}t_2^{6}k_1^{6}-128\,{d}^{4}t_1^{8}t_2^{2}k_2^{6}+64\,{d}^{4}t_1^{6}t_2^{4}k_2^{6}+{d}^{4}t_1^{4}t_2^{6}k_1^{6}-2\,{d}^{4}t_1^{2}t_2^{8}k_1^{6}+25\,t_1^{6}k_2^{4}t_2^{4}k_3^{2}+4\,k_2^{6}t_1^{10}-78\,t_1^{5}t_2^{5}k_1\,k_2^{5}-154\,t_1^{7}t_2^{3}k_1\,k_2^{5}+{d}^{4}k_1^{6}t_2^{10}+64\,{d}^{4}k_2^{6}t_1^{10}+8\,{d}^{2}k_1^{6}t_2^{10}+32\,{d}^{2}k_2^{6}t_1^{10}+136\,{d}^{2}t_1^{5}t_2^{5}k_1\,k_2^{5}+88\,{d}^{2}t_1^{7}t_2^{3}k_1\,k_2^{5}+320\,{d}^{2}t_1^{8}t_2^{2}k_1^{2}k_2^{4}+440\,{d}^{2}t_1^{4}t_2^{6}k_1^{4}k_2^{2}-100\,{d}^{2}t_1^{6}k_2^{2}k_3^{2}t_2^{4}k_1^{2}-170\,{d}^{2}t_1^{3}t_2^{7}k_1^{3}k_2^{3}-110\,{d}^{2}t_1^{5}t_2^{5}k_1^{3}k_2^{3}+280\,{d}^{2}t_1^{7}t_2^{3}k_1^{3}k_2^{3}+1200\,{d}^{2}t_1^{7}k_2^{3}k_3^{2}t_2^{3}k_1-400\,{d}^{2}t_1^{8}k_2^{4}k_3^{2}t_2^{2}+300\,{d}^{2}t_1^{5}k_2^{3}k_3^{2}t_2^{5}k_1-1200\,{d}^{2}t_1^{5}k_2\,k_3^{2}t_2^{5}k_1^{3}-25\,{d}^{2}t_1^{4}k_2^{2}k_3^{2}t_2^{6}k_1^{2}-400\,{d}^{2}t_1^{4}k_3^{2}t_2^{6}k_1^{4}-100\,{d}^{2}t_1^{2}k_3^{2}t_2^{8}k_1^{4}-370\,{d}^{2}t_1^{6}t_2^{4}k_1^{4}k_2^{2}-344\,{d}^{2}t_1^{5}t_2^{5}k_1^{5}k_2+16\,{d}^{2}t_1\,t_2^{9}k_1^{5}k_2+328\,{d}^{2}t_1^{3}t_2^{7}k_1^{5}k_2-300\,{d}^{2}t_1^{3}k_2\,k_3^{2}t_2^{7}k_1^{3}-70\,{d}^{2}t_1^{2}t_2^{8}k_1^{4}k_2^{2}+465\,t_1^{8}t_2^{2}k_1^{2}k_2^{4}-120\,{d}^{4}t_1^{4}t_2^{6}k_1^{4}k_2^{2}+160\,{d}^{4}t_1^{3}t_2^{7}k_1^{3}k_2^{3}-24\,{d}^{4}t_1^{3}t_2^{7}k_1^{5}k_2+60\,{d}^{4}t_1^{2}t_2^{8}k_1^{4}k_2^{2}+12\,{d}^{4}t_1\,t_2^{9}k_1^{5}k_2+2480\,t_1^{4}t_2^{6}k_1^{4}k_2^{2}+2400\,t_1^{6}k_2^{2}k_3^{2}t_2^{4}k_1^{2}-440\,t_1^{3}t_2^{7}k_1^{3}k_2^{3}-1920\,t_1^{5}t_2^{5}k_1^{3}k_2^{3}-1640\,t_1^{7}t_2^{3}k_1^{3}k_2^{3}-800\,t_1^{7}k_2^{3}k_3^{2}t_2^{3}k_1+100\,t_1^{8}k_2^{4}k_3^{2}t_2^{2}+16\,{d}^{2}t_1^{8}t_2^{2}k_2^{6}-200\,t_1^{5}k_2^{3}k_3^{2}t_2^{5}k_1-3200\,t_1^{5}k_2\,k_3^{2}t_2^{5}k_1^{3}+600\,t_1^{4}k_2^{2}k_3^{2}t_2^{6}k_1^{2}+1600\,t_1^{4}k_3^{2}t_2^{6}k_1^{4}+400\,t_1^{2}k_3^{2}
t_2^{8}k_1^{4}+3160\,t_1^{6}t_2^{4}k_1^{4}k_2^{2}-3168\,t_1^{5}t_2^{5}k_1^{5}k_2-128\,t_1\,t_2^{9}k_1^{5}k_2-1504\,t_1^{3}t_2^{7}k_1^{5}k_2-800\,t_1^{3}k_2\,k_3^{2}t_2^{7}k_1^{3}+12\,t_1^{8}t_2^{2}k_2^{6}+9\,t_1^{6}t_2^{4}k_2^{6}+360\,t_1^{2}t_2^{8}k_1^{4}k_2^{2}-224\,{d}^{2}t_1^{9}t_2\,k_1\,k_2^{5}+20\,{d}^{2}t_1^{4}t_2^{6}k_1^{2}k_2^{4}-340\,{d}^{2}t_1^{6}t_2^{4}k_1^{2}k_2^{4}+192\,{d}^{4}t_1^{9}t_2\,k_1\,k_2^{5}+240\,{d}^{4}t_1^{8}t_2^{2}k_1^{2}k_2^{4}-384\,{d}^{4}t_1^{7}t_2^{3}k_1\,k_2^{5}+160\,{d}^{4}t_1^{7}t_2^{3}k_1^{3}k_2^{3}-480\,{d}^{4}t_1^{6}t_2^{4}k_1^{2}k_2^{4}+60\,{d}^{4}t_1^{6}t_2^{4}k_1^{4}k_2^{2}+192\,{d}^{4}t_1^{5}t_2^{5}k_1\,k_2^{5}-320\,{d}^{4}t_1^{5}t_2^{5}k_1^{3}k_2^{3}+12\,{d}^{4}t_1^{5}t_2^{5}k_1^{5}k_2+240\,{d}^{4}t_1^{4}t_2^{6}k_1^{2}k_2^{4}+16\,k_1^{6}t_2^{10}+288\,t_1^{2}t_2^{8}k_1^{6}+1296\,t_1^{4}t_2^{6}k_1^{6}-68\,t_1^{9}t_2\,k_1\,k_2^{5}-100\,{d}^{2}t_1^{6}k_2^{4}t_2^{4}k_3^{2}+265\,t_1^{4}t_2^{6}k_1^{2}k_2^{4}+770\,t_1^{6}t_2^{4}k_1^{2}k_2^{4}). $}\\
\noindent From this expression it is easy to check that $\operatorname{PP}_{(d,\bar k)}\left( \operatorname{Con}_{\bar c}\left( \operatorname{Res}_{t_0}\left(T_0(\bar k,\bar t_h),T(\bar c,d,\bar k,\bar t_h)\right)\right)\right)$ is the last factor in the above expression, and so: \[ {\rm deg}_{\bar t}\left(\operatorname{PP}_{(d,\bar k)}\left( \operatorname{Con}_{\bar c}\left( \operatorname{Res}_{t_0}\left(T_0(\bar k,\bar t_h),T(\bar c,d,\bar k,\bar t_h)\right)\right)\right)\right)=10. \] Using Theorem \ref{thm:ch4:DegreeFormula} one has that the total offset degree is $\delta=10$. In fact, in this case, using elimination techniques, it is possible to check this result, computing the generic offset polynomial:\\
\noindent{\small $ g(d,\bar x)= -256 x_1^{10}-640 x_1^8 x_2^2-256 x_1^8 x_3^2+1408 x_1^8 d^2-400 x_1^6 x_2^4-384 x_1^6 x_2^2 x_3^2+3232 x_1^6 x_2^2 d^2+1152 x_1^6 x_3^2 d^2-3088 x_1^6 d^4+80 x_1^4 x_2^6-16 x_1^4 x_2^4 x_3^2+2448 x_1^4 x_2^4 d^2+1696 x_1^4 x_2^2 x_3^2 d^2-5904 x_1^4 x_2^2 d^4-1936 x_1^4 x_3^2 d^4+3376 x_1^4 d^6+80 x_1^2 x_2^8+96 x_1^2 x_2^6 x_3^2+832 x_1^2 x_2^6 d^2+736 x_1^2 x_2^4 x_3^2 d^2-3744 x_1^2 x_2^4 d^4-2272 x_1^2 x_2^2 x_3^2 d^4+4672 x_1^2 x_2^2 d^6+1440 x_1^2 x_3^2 d^6-1840 x_1^2 d^8-16 x_2^{10}-16 x_2^8 x_3^2+208 x_2^8 d^2+192 x_2^6 x_3^2 d^2-928 x_2^6 d^4-736 x_2^4 x_3^2 d^4+1696 x_2^4 d^6+960 x_2^2 x_3^2 d^6-1360 x_2^2 d^8-400 x_3^2 d^8+400 d^{10}+3200 x_1^8 x_3-320 x_1^6 x_2^2 x_3+3072 x_1^6 x_3^3-12608 x_1^6 x_3 d^2-1560 x_1^4 x_2^4 x_3-2944 x_1^4 x_2^2 x_3^3+496 x_1^4 x_2^2 x_3 d^2-9088 x_1^4 x_3^3 d^2+18088 x_1^4 x_3 d^4+1640 x_1^2 x_2^6 x_3+1696 x_1^2 x_2^4 x_3^3+7016 x_1^2 x_2^4 x_3 d^2+5184 x_1^2 x_2^2 x_3^3 d^2-2184 x_1^2 x_2^2 x_3 d^4+8480 x_1^2 x_3^3 d^4-11080 x_1^2 x_3 d^6-320 x_2^8 x_3-288 x_2^6 x_3^3+2912 x_2^6 x_3 d^2+2272 x_2^4 x_3^3 d^2-7072 x_2^4 x_3 d^4-3680 x_2^2 x_3^3 d^4+2080 x_2^2 x_3 d^6-2400 x_3^3 d^6+2400 x_3 d^8+2544 x_1^8-9144 x_1^6 x_2^2-10752 x_1^6 x_3^2-10520 x_1^6 d^2+4479 x_1^4 x_2^4-6976 x_1^4 x_2^2 x_3^2+25770 x_1^4 x_2^2 d^2-11776 x_1^4 x_3^4+21568 x_1^4 x_3^2 d^2+16583 x_1^4 d^4-684 x_1^2 x_2^6+10304 x_1^2 x_2^4 x_3^2+2700 x_1^2 x_2^4 d^2+9088 x_1^2 x_2^2 x_3^4+25056 x_1^2 x_2^2 x_3^2 d^2-23444 x_1^2 x_2^2 d^4+14720 x_1^2 x_3^4 d^2-7840 x_1^2 x_3^2 d^4-11980 x_1^2 d^6+24 x_2^8-2472 x_2^6 x_3^2+160 x_2^6 d^2-1936 x_2^4 x_3^4+14488 x_2^4 x_3^2 d^2-2752 x_2^4 d^4+8480 x_2^2 x_3^4 d^2-15160 x_2^2 x_3^2 d^4+6080 x_2^2 d^6-400 x_3^4 d^4-3000 x_3^2 d^6+3400 d^8-19008 x_1^6 x_3+25896 x_1^4 x_2^2 x_3+3328 x_1^4 x_3^3+44072 x_1^4 x_3 d^2-6534 x_1^2 x_2^4 x_3+23616 x_1^2 x_2^2 x_3^3+2484 x_1^2 x_2^2 x_3 d^2+15360 x_1^2 x_3^5+15680 x_1^2 x_3^3 d^2-31790 x_1^2 x_3 d^4+312 x_2^6 x_3-9112 x_2^4 x_3^3+3112 x_2^4 x_3 
d^2-5760 x_2^2 x_3^5+31120 x_2^2 x_3^3 d^2-22360 x_2^2 x_3 d^4+9600 x_3^5 d^2-17400 x_3^3 d^4+7800 x_3 d^6-6240 x_1^6+3360 x_1^4 x_2^2+29984 x_1^4 x_3^2+15816 x_1^4 d^2-510 x_1^2 x_2^4-18472 x_1^2 x_2^2 x_3^2+11022 x_1^2 x_2^2 d^2+14080 x_1^2 x_3^4+8440 x_1^2 x_3^2 d^2-16520 x_1^2 d^4+15 x_2^6+1319 x_2^4 x_3^2-669 x_2^4 d^2-15680 x_2^2 x_3^4+15010 x_2^2 x_3^2 d^2+1045 x_2^2 d^4-6400 x_3^6+27200 x_3^4 d^2-28825 x_3^2 d^4+8025 d^6+14880 x_1^4 x_3-4800 x_1^2 x_2^2 x_3-13120 x_1^2 x_3^3+14320 x_1^2 x_3 d^2+270 x_2^4 x_3+1720 x_2^2 x_3^3-4270 x_2^2 x_3 d^2-9600 x_3^5+17400 x_3^3 d^2-7800 x_3 d^4-400 x_1^4+200 x_1^2 x_2^2-11040 x_1^2 x_3^2+7840 x_1^2 d^2-25 x_2^4+1440 x_2^2 x_3^2-140 x_2^2 d^2-400 x_3^4-3000 x_3^2 d^2+3400 d^4+800 x_1^2 x_3-200 x_2^2 x_3+2400 x_3^3-2400 x_3 d^2-400 x_3^2+400 d^2 $}\\[3mm] \noindent This is, as predicted by our formula, a polynomial of degree $10$ in $\bar x$. \end{Example} \begin{center} \rule{2cm}{0.5pt} \quad\\ \end{center}
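The associated normal vector used in Example \ref{exam:ch4:HypParabNonSym} can be recomputed with any general-purpose system. The following is a minimal sympy sketch, assuming the convention used throughout (cross product of the partial derivatives, common content removed, then homogenized with $t_0$):

```python
import sympy as sp

t0, t1, t2 = sp.symbols('t0 t1 t2')

# Affine parametrization of the hyperbolic paraboloid from the example
P = sp.Matrix([t1, 2*t2, t1**2 - t2**2])

# Cross product of the two partial derivatives: an affine normal vector
n = P.diff(t1).cross(P.diff(t2))
print(n.T)  # Matrix([[-4*t1, 2*t2, 2]])
```

Removing the content $2$ gives $(-2t_1,t_2,1)$, whose homogenization with $t_0$ is the vector $N(\bar t_h)=(-2t_1,t_2,t_0)$ of the example.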
\begin{Example}\label{exam:ch4:NonProperCircularParaboloid} To illustrate the behavior of the degree formula in the case of non-proper parametrizations, let us consider the surface $\Sigma$ defined by the parametrization: \[P(t_1,t_2)=(t_1^3,t_2,t_1^6+t_2^2).\] This is a parametrization of the circular paraboloid with implicit equation $y_3=y_1^2+y_2^2$; the parametrization $P$ has been obtained by replacing $t_1$ with $t_1^3$ in the usual proper parametrization $\tilde P$ of $\Sigma$, which is given by: \[\tilde P(t_1,t_2)=(t_1,t_2,t_1^2+t_2^2).\] Thus, the tracing index of the parametrization $P$ in this example is $\mu=3$. Computing with $P$ we obtain the following associated normal vector: \[N(\bar t_h)=(-2t_1^3,-2t_0^2t_2,t_0^3).\] \noindent Then the auxiliary curves are:\\[3mm] \noindent $T_0(\bar t_h)= -k_2\,t_1^{3}+k_1\,t_0^{2}t_2 $\\[3mm] \noindent $T_1(\bar t_h)= 4\,t_1^{18}{k_2}^{2}+12\,t_1^{12}{k_2}^{2}t_2^{2}t_0^{4}-8\,t_1^{12}k_2\,k_3\,t_2\,t_0^{5}+12\,t_1^{6}{k_2}^{2}t_2^{4}t_0^{8}-16\,t_1^{6}k_2\,t_2^{3}t_0^{9}k_3+4\,t_1^{6}{k_3}^{2}t_2^{2}t_0^{10}+4\,t_2^{6}t_0^{12}{k_2}^{2}-8\,t_2^{5}t_0^{13}k_2\,k_3+4\,t_2^{4}t_0^{14}{k_3}^{2}+t_0^{6}{k_2}^{2}t_1^{12}+2\,t_0^{10}{k_2}^{2}t_1^{6}t_2^{2}-2\,t_0^{11}k_2\,t_1^{6}k_3\,t_2+t_0^{14}{k_2}^{2}t_2^{4}-2\,t_0^{15}k_2\,t_2^{3}k_3+t_0^{16}{k_3}^{2}t_2^{2}-{d}^{2}t_0^{18}{k_2}^{2}-4\,{d}^{2}t_0^{17}k_2\,k_3\,t_2-4\,{d}^{2}t_0^{16}{k_3}^{2}t_2^{2} $\\[3mm] \noindent $T_2(\bar t_h)= 4\,t_1^{12}{k_3}^{2}t_0^{6}-8\,t_1^{15}k_3\,t_0^{3}k_1-16\,t_1^{9}k_3\,t_0^{7}k_1\,t_2^{2}+4\,t_1^{18}{k_1}^{2}+12\,t_1^{12}{k_1}^{2}t_2^{2}t_0^{4}+12\,t_1^{6}{k_1}^{2}t_2^{4}t_0^{8}+4\,t_1^{6}{k_3}^{2}t_2^{2}t_0^{10}-8\,t_2^{4}t_0^{11}k_3\,t_1^{3}k_1+4\,t_2^{6}t_0^{12}{k_1}^{2}+t_0^{12}{k_3}^{2}t_1^{6}-2\,t_0^{9}k_3\,t_1^{9}k_1-2\,t_0^{13}k_3\,t_1^{3}k_1\,t_2^{2}+t_0^{6}{k_1}^{2}t_1^{12}+2\,t_0^{10}{k_1}^{2}t_1^{6}t_2^{2}+t_0^{14}{k_1}^{2}t_2^{4}-4\,{d}^{2}t_0^{12}t_1^{6}{k_3}^{2}-4\,{d}^{2}t_0^{15}k_3\,t_1^{3}k_1-{d}^{2}t_0^{18}{k_1}^{2} $\\[3mm] \noindent $T_3(\bar t_h)= t_0^{6} ( k_2\,t_1^{3}-k_1\,t_0^{2}t_2 ) ^{2} ( 4\,t_1^{6}+t_0^{6}+4\,t_2^{2}t_0^{4}-4\,{d}^{2}t_0^{6}) $\\[3mm] Denoting, as in the degree formula, \[ R(\bar c,d,\bar k,\bar t)=\operatorname{Res}_{t_0}\left(T_0(\bar k,\bar t_h),T(\bar c,d,\bar k,\bar t_h)\right), \] one has:\\ $ R(\bar c,d,\bar k,\bar t)= ( k_1^{2}c_2+k_2^{2}c_1) ^{2}t_1^{36}( -8\,k_2^{12}t_1^{12}k_1^{2}t_2^{6}k_3^{4}{d}^{2}+16\,k_1^{18}t_2^{18}+16\,k_2^{12}k_1^{6}t_2^{18}+96\,k_2^{10}k_1^{8}t_2^{18}+240\,k_2^{8}k_1^{10}t_2^{18}+320\,k_2^{6}k_1^{12}t_2^{18}+240\,k_1^{14}t_2^{18}k_2^{4}+96\,k_1^{16}t_2^{18}k_2^{2}+t_1^{18}{d}^{4}k_2^{18}+t_1^{6}k_1^{4}t_2^{12}k_2^{14}+8\,t_1^{3}k_1^{5}t_2^{15}k_2^{13}+4\,t_1^{6}k_1^{6}t_2^{12}k_2^{12}+40\,t_1^{3}k_1^{7}t_2^{15}k_2^{11}+6\,t_1^{6}k_1^{8}t_2^{12}k_2^{10}+80\,t_1^{3}k_1^{9}t_2^{15}k_2^{9}+80\,t_1^{3}k_1^{11}t_2^{15}k_2^{7}+4\,k_2^{8}t_1^{6}k_1^{10}t_2^{12}+40\,k_2^{5}t_1^{3}k_1^{13}t_2^{15}+k_2^{6}t_1^{6}k_1^{12}t_2^{12}+8\,k_2^{3}t_1^{3}k_1^{15}t_2^{15}-32\,k_2^{6}t_1^{6}k_1^{10}t_2^{12}{d}^{2}k_3^{2}+16\,k_2^{10}t_1^{6}k_1^{4}t_2^{12}k_3^{4}+32\,k_2^{8}t_1^{6}k_1^{6}t_2^{12}k_3^{4}-192\,k_2^{7}t_1^{3}k_1^{9}t_2^{15}k_3^{2}-128\,k_2^{5}t_1^{3}k_1^{11}t_2^{15}k_3^{2}+16\,k_2^{6}t_1^{6}k_1^{8}t_2^{12}k_3^{4}-32\,k_2^{3}t_1^{3}k_1^{13}t_2^{15}k_3^{2}-4\,k_2^{11}t_1^{9}k_1^{5}t_2^{9}k_3^{2}+8\,k_2^{11}t_1^{9}k_1^{3}t_2^{9}k_3^{4}-2\,k_2^{9}t_1^{9}k_1^{7}t_2^{9}k_3^{2}+8\,k_2^{9}t_1^{9}k_1^{5}t_2^{9}k_3^{4}-48\,k_2^{8}t_1^{6}k_1^{8}t_2^{12}k_3^{2}-16\,k_2^{6}t_1^{6}k_1^{10}t_2^{12}k_3^{2}-32\,k_2^{12}k_1^{4}t_2^{12}t_1^{6}{d}^{2}k_3^{2}-32\,k_2^{11}k_1^{5}t_2^{15}t_1^{3}k_3^{2}-128\,k_2^{9}k_1^{7}t_2^{15}t_1^{3}k_3^{2}+16\,k_2^{12}t_1^{12}k_1^{2}t_2^{6}{d}^{4}k_3^{4}-144\,k_2^{11}t_1^{9}k_1^{5}t_2^{9}{d}^{2}k_3^{2}-32\,k_2^{11}t_1^{9}k_1^{3}t_2^{9}{d}^{2}k_3^{4}-96\,k_2^{10}t_1^{6}k_1^{6}t_2^{12}{d}^{2}k_3^{2}-2\,t_1^{12}{d}^{2}k_2^{16}k_1^{2}t_2^{6}-2\,t_1^{15}{d}^{2}k_2^{15}k_1\,t_2^{3}k_3^{2}-8\,t_1^{9}{d}^{2}k_2^{15}k_1^{3}t_2^{9}-8\,t_1^{15}{d}^{4}k_2^{15}k_1\,t_2^{3}k_3^{2}-4\,t_1^{12}{d}^{2}k_2^{14}k_1^{4}t_2^{6}-24\,t_1^{12}{d}^{2}k_2^{14}k_1^{2}t_2^{6}k_3^{2}-24\,t_1^{9}{d}^{2}k_2^{13}k_1^{5}t_2^{9}-2\,t_1^{12}{d}^{2}k_2^{12}k_1^{6}t_2^{6}-24\,t_1^{12}{d}^{2}k_2^{12}k_1^{4}t_2^{6}k_3^{2}-24\,t_1^{9}{d}^{2}k_2^{11}k_1^{7}t_2^{9}-8\,t_1^{9}{d}^{2}k_2^{9}k_1^{9}t_2^{9}-2\,t_1^{9}k_1^{3}t_2^{9}k_2^{13}k_3^{2}-72\,t_1^{9}k_1^{3}t_2^{9}k_2^{13}{d}^{2}k_3^{2}-16\,t_1^{6}k_1^{4}t_2^{12}k_2^{12}k_3^{2}-48\,t_1^{6}k_1^{6}t_2^{12}k_2^{10}k_3^{2}+k_2^{12}t_1^{12}k_1^{2}t_2^{6}k_3^{4}-72\,k_2^{9}t_1^{9}k_1^{7}t_2^{9}{d}^{2}k_3^{2}-32\,k_2^{9}t_1^{9}k_1^{5}t_2^{9}{d}^{2}k_3^{4}-96\,k_2^{8}t_1^{6}k_1^{8}t_2^{12}{d}^{2}k_3^{2}). $\\[3mm] \noindent From this expression it is easy to check that \[ {\rm deg}_{\bar t}\left(\operatorname{PP}_{(d,\bar k)}\left( \operatorname{Con}_{\bar c}\left( \operatorname{Res}_{t_0}\left(T_0(\bar k,\bar t_h),T(\bar c,d,\bar k,\bar t_h)\right)\right)\right)\right)=18. \] This agrees with the expected result $\mu\cdot\delta=3\cdot 6=18.$ \end{Example} \begin{center} \rule{2cm}{0.5pt} \quad\\ \end{center}
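The tracing index $\mu=3$ in Example \ref{exam:ch4:NonProperCircularParaboloid} can also be checked directly: $P$ is invariant under $t_1\mapsto\omega t_1$ for $\omega$ a primitive cube root of unity, so the generic fibre of $P$ consists of three parameter values. A small sympy verification (a sketch, independent of the degree formula):

```python
import sympy as sp

t1, t2 = sp.symbols('t1 t2')
w = sp.exp(2*sp.pi*sp.I/3)  # primitive cube root of unity

P = lambda u, v: (u**3, v, u**6 + v**2)

# P lies on the circular paraboloid y3 = y1^2 + y2^2 ...
y1, y2, y3 = P(t1, t2)
assert sp.expand(y3 - y1**2 - y2**2) == 0

# ... and P(t1, t2) = P(w*t1, t2) = P(w^2*t1, t2), so the generic
# fibre of P has three points: the tracing index is mu = 3.
assert all(sp.expand(a - b) == 0 for a, b in zip(P(w*t1, t2), P(t1, t2)))
```

The same check applied to the proper parametrization $\tilde P$ finds no such invariance, consistent with $\tilde P$ having tracing index one.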
\begin{Example}\label{exam:ch4:WhitneyUmbrella} Let $\Sigma$ be the surface (Whitney Umbrella) with implicit equation $y_1^2 - y_2^2y_3=0$. A proper rational parametrization of $\Sigma$ is given by: \[P(t_1,t_2)=(t_1t_2,t_2,t_1^2).\] This surface is illustrated in Figure \ref{fig:WhitneyUmbrella}. \begin{figure}
\caption{The Whitney Umbrella}
\label{fig:WhitneyUmbrella}
\end{figure} The homogeneous associated normal vector is \[N(\bar t_h)=(2t_1t_2,-2t_1^2,-t_0t_2).\] \noindent Then the auxiliary curves are:\\[3mm] \noindent $T_0(\bar t_h)=-k_1\,t_0^{2}t_2^{2}+2\,k_1\,t_1^{4}+t_1\,t_0^{2}k_2\,t_2-2\,t_1^{3}t_0\,k_3+2\,t_1^{3}t_2\,k_2-2\,t_1\,t_2^{2}k_3\,t_0$\\[3mm] \noindent$T_1(\bar t_h)=4\,t_1^{6}t_2^{2}k_2^{2}-8\,t_1^{4}t_2^{3}k_2\,k_3\,t_0+4\,t_1^{2}t_2^{4}k_3^{2}t_0^{2}+4\,t_1^{8}k_2^{2}-8\,t_1^{6}k_2\,k_3\,t_0\,t_2+4\,t_1^{4}k_3^{2}t_0^{2}t_2^{2}+t_0^{2}t_2^{2}k_2^{2}t_1^{4}-2\,t_0^{3}t_2^{3}k_2\,t_1^{2}k_3+t_0^{4}t_2^{4}k_3^{2}-{d}^{2}t_2^{6}k_2^{2}t_0^{2}+4\,{d}^{2}t_2^{5}k_2\,t_1^{2}k_3\,t_0-4\,{d}^{2}t_2^{4}k_3^{2}t_1^{4}$\\[3mm] \noindent$T_2(\bar t_h)=4\,t_1^{4}k_3^{2}t_0^{2}t_2^{2}-8\,t_1^{5}t_2^{2}k_3\,t_0\,k_1+4\,t_1^{6}t_2^{2}k_1^{2}+4\,t_1^{6}k_3^{2}t_0^{2}-8\,t_1^{7}k_3\,t_0\,k_1+4\,t_1^{8}k_1^{2}+t_0^{4}t_2^{2}k_3^{2}t_1^{2}-2\,t_0^{3}t_2^{2}k_3\,t_1^{3}k_1+t_0^{2}t_2^{2}k_1^{2}t_1^{4}-4\,{d}^{2}t_2^{6}t_1^{2}k_3^{2}-4\,{d}^{2}t_2^{6}t_1\,k_3\,k_1\,t_0-{d}^{2}t_2^{6}k_1^{2}t_0^{2}$\\[3mm] \noindent$T_3(\bar t_h)=4\,t_0^{2}t_2^{2}k_2^{2}t_1^{4}-8\,t_1^{3}t_2^{3}k_2\,t_0^{2}k_1+4\,t_1^{2}t_2^{4}k_1^{2}t_0^{2}+4\,t_1^{6}k_2^{2}t_0^{2}-8\,t_1^{5}k_2\,t_0^{2}k_1\,t_2+4\,t_0^{2}t_2^{2}k_1^{2}t_1^{4}+t_0^{4}t_2^{2}k_2^{2}t_1^{2}-2\,t_0^{4}t_2^{3}k_2\,t_1\,k_1+t_0^{4}t_2^{4}k_1^{2}-4\,{d}^{2}t_2^{6}k_2^{2}t_1^{2}-8\,{d}^{2}t_2^{5}t_1^{3}k_2\,k_1-4\,{d}^{2}t_2^{4}k_1^{2}t_1^{4} $\\[3mm] Denoting, as in the degree formula, \[ R(\bar c,d,\bar k,\bar t)=\operatorname{Res}_{t_0}\left(T_0(\bar k,\bar t_h),T(\bar c,d,\bar k,\bar t_h)\right), \] one has:\\
\noindent{\small $ R(\bar c,d,\bar k,\bar t)=4\,t_2^{2}\,t_1^{4}\,( -28\,t_1^{6}t_2^{8}k_3^{2}{d}^{2}k_1^{2}+4\,k_2^{4}t_1^{8}t_2^{6}{d}^{2}+k_1^{4}{d}^{4}t_2^{12}t_1^{2}-6\,{d}^{2}k_1^{4}t_2^{8}t_1^{6}+k_2^{2}k_1^{2}{d}^{4}t_2^{14}-4\,k_1^{4}{d}^{2}t_2^{10}t_1^{4}+12\,k_1^{4}t_2^{6}t_1^{8}-20\,t_1^{4}t_2^{10}{d}^{2}k_1^{2}k_3^{2}-4\,k_2^{2}k_1^{2}{d}^{4}t_2^{12}t_1^{2}-4\,k_2^{2}t_1^{8}t_2^{6}k_3^{2}{d}^{2}-24\,k_2\,t_1^{11}t_2^{3}k_1\,k_3^{2}-2\,k_2\,{d}^{4}k_1^{3}t_2^{11}t_1^{3}+4\,k_2\,k_1^{3}{d}^{2}t_2^{9}t_1^{5}+2\,k_2\,k_1^{3}{d}^{4}t_2^{13}t_1+16\,k_2\,t_1^{5}t_2^{9}{d}^{2}k_3^{2}k_1-8\,k_2\,t_1^{13}t_2\,k_3^{2}k_1-8\,k_2\,t_1^{7}t_2^{7}k_3^{2}k_1+32\,k_2\,t_1^{7}t_2^{7}k_3^{2}{d}^{2}k_1+16\,k_2\,{d}^{2}k_1^{3}t_2^{7}t_1^{7}+16\,k_2\,k_3^{2}k_1\,{d}^{2}t_2^{5}t_1^{9}-24\,k_2\,k_3^{2}k_1\,t_2^{5}t_1^{9}-12\,t_1^{8}t_2^{6}k_3^{2}{d}^{2}k_1^{2}-4\,t_1^{10}t_2^{4}k_3^{4}{d}^{2}+4\,t_1^{12}t_2^{2}k_1^{2}k_3^{2}+4\,t_1^{6}t_2^{8}k_3^{2}k_1^{2}-16\,t_1^{8}t_2^{6}k_3^{4}{d}^{2}-4\,t_1^{2}t_2^{12}{d}^{2}k_3^{4}+12\,t_1^{8}t_2^{6}k_3^{2}k_1^{2}+12\,t_1^{10}t_2^{4}k_3^{2}k_1^{2}-6\,k_2^{3}k_1\,t_2^{5}t_1^{9}+12\,t_1^{10}t_2^{4}k_3^{2}k_2^{2}+4\,t_1^{8}t_2^{6}k_3^{2}k_2^{2}-16\,t_1^{4}t_2^{10}k_3^{4}{d}^{2}+4\,t_1^{4}t_2^{10}k_3^{2}{d}^{2}k_2^{2}-24\,t_1^{6}t_2^{8}k_3^{4}{d}^{2}+t_1^{2}t_2^{12}{d}^{4}k_2^{4}+2\,t_1^{6}t_2^{8}{d}^{2}k_2^{4}+4\,t_1^{6}t_2^{8}k_3^{2}{d}^{2}k_2^{2}-20\,k_2^{3}k_1\,t_2\,t_1^{13}+4\,k_2^{4}t_1^{14}+9\,k_1^{4}t_2^{4}t_1^{10}-22\,k_2^{3}k_1\,t_2^{3}t_1^{11}+37\,k_2^{2}k_1^{2}t_2^{2}t_1^{12}+44\,k_2^{2}k_1^{2}t_2^{4}t_1^{10}+10\,k_2^{2}k_1^{2}{d}^{2}t_2^{10}t_1^{4}+13\,k_2^{2}k_1^{2}t_2^{6}t_1^{8}-12\,k_2\,k_1^{3}t_2^{7}t_1^{7}-30\,k_2\,k_1^{3}t_2^{3}t_1^{11}-38\,k_2\,k_1^{3}t_2^{5}t_1^{9}-4\,k_2\,{d}^{2}k_1^{3}t_2^{11}t_1^{3}+t_1^{10}t_2^{4}k_2^{4}-4\,t_1^{2}t_2^{12}{d}^{2}k_1^{2}k_3^{2}+4\,k_1^{4}t_2^{8}t_1^{6}+4\,k_2^{2}t_1^{14}k_3^{2}+4\,k_2^{4}t_1^{12}t_2^{2}+12\,t_1^{12}t_2^{2}k_3^{2}k_2^{2}-8\,k_2^{3}k_1\,{d}^{2}t_2^{9}t_1^{5}-12\,k_2^{3}
k_1\,{d}^{2}t_2^{7}t_1^{7}+2\,k_2^{3}k_1\,{d}^{4}t_2^{11}t_1^{3}+4\,k_2^{3}k_1\,{d}^{2}t_2^{5}t_1^{9}-2\,k_2^{3}k_1\,{d}^{4}t_2^{13}t_1+8\,k_2^{2}k_1^{2}{d}^{2}t_2^{8}t_1^{6}-14\,k_2^{2}k_1^{2}{d}^{2}t_2^{6}t_1^{8}-4\,k_2^{2}t_1^{10}t_2^{4}{d}^{2}k_3^{2}+k_2^{2}k_1^{2}{d}^{4}t_2^{10}t_1^{4})\,\cdot\,(8\,c_3\,c_1\,k_1\,k_3^{2}k_2\,t_1\,t_2^{3}-4\,c_3\,c_1\,k_1\,t_1^{3}k_2^{3}t_2-8\,c_3\,c_1\,k_1\,t_1^{3}k_3^{2}k_2\,t_2+4\,c_3\,c_1\,k_3^{2}k_2^{2}t_2^{4}-4\,c_3\,c_1\,k_2^{4}t_1^{2}t_2^{2}+4\,c_3\,c_1\,t_1^{4}k_2^{2}k_3^{2}-4\,c_1^{2}k_3^{2}k_2^{2}t_1^{2}t_2^{2}+8\,c_3^{2}k_1^{3}t_1\,k_2\,t_2^{3}-8\,c_3^{2}k_1^{3}t_1^{3}t_2\,k_2+4\,c_3^{2}k_2^{2}k_1^{2}t_2^{4}+4\,c_3^{2}k_1^{2}t_1^{4}k_2^{2}-16\,c_3^{2}k_1^{2}t_2^{2}k_2^{2}t_1^{2}-8\,c_3^{2}k_1\,k_2^{3}t_1\,t_2^{3}+8\,c_3^{2}k_1\,t_1^{3}k_2^{3}t_2+4\,{{c_3}}^{2}k_2^{4}t_1^{2}t_2^{2}+c_2^{2}t_1^{2}t_2^{2}k_1^{4}+4\,c_2^{2}t_2^{2}t_1^{2}k_3^{4}+4\,c_2\,{c_1}\,k_3^{4}t_2^{4}+4\,c_2\,c_1\,k_3^{4}t_1^{4}+4\,c_3\,c_2\,t_1^{2}t_2^{2}k_1^{4}+4\,c_3\,c_2\,k_1^{3}t_1\,k_2\,t_2^{3}-4\,c_3\,c_2\,k_1^{3}t_1^{3}t_2\,k_2+4\,c_3\,c_2\,k_1^{2}t_2^{4}k_3^{2}-4\,c_3\,c_2\,k_1^{2}t_2^{2}k_2^{2}t_1^{2}+4\,c_3\,c_2\,k_1^{2}t_1^{4}k_3^{2}-8\,c_3\,c_2\,k_1\,k_3^{2}k_2\,t_1\,t_2^{3}+8\,c_3\,c_2\,k_1\,t_1^{3}k_3^{2}k_2\,t_2+8\,c_3\,c_2\,k_3^{2}k_2^{2}t_1^{2}t_2^{2}+4\,c_2^{2}k_1^{2}t_1^{2}k_3^{2}t_2^{2}+4\,c_2\,c_1\,k_1^{2}t_1^{2}k_3^{2}t_2^{2}+2\,c_2\,c_1\,k_1^{2}t_2^{2}k_2^{2}t_1^{2}+8\,c_2\,c_1\,k_1\,k_3^{2}k_2\,t_1\,t_2^{3}-8\,c_2\,c_1\,k_1\,t_1^{3}k_3^{2}k_2\,t_2-4\,c_2\,{c_1}\,k_3^{2}k_2^{2}t_1^{2}t_2^{2}+c_1^{2}k_2^{4}t_1^{2}t_2^{2}+4\,c_1^{2}t_2^{2}t_1^{2}k_3^{4}+4\,{{c_3}}^{2}t_1^{2}t_2^{2}k_1^{4}+8\,c_3\,c_1\,k_1^{2}t_1^{2}k_3^{2}t_2^{2}+4\,c_3\,c_1\,k_1^{2}t_2^{2}k_2^{2}t_1^{2}+4\,c_3\,c_1\,k_1\,k_2^{3}t_1\,t_2^{3}). $}
\noindent From this expression it is easy to check that \[ {\rm deg}_{\bar t}\left(\operatorname{PP}_{(d,\bar k)}\left( \operatorname{Con}_{\bar c}\left( \operatorname{Res}_{t_0}\left(T_0(\bar k,\bar t_h),T(\bar c,d,\bar k,\bar t_h)\right)\right)\right)\right)=14, \] and then, using Theorem \ref{thm:ch4:DegreeFormula} one concludes that the total offset degree in $\bar x$ is $\delta=14.$
\noindent In fact, in this case, using elimination techniques, it is possible to check this result, computing the generic offset polynomial (see Appendix \ref{Ap3-ComplementsToSomeProofs}, page \pageref{Ap3-ComplementsToSomeProofs-Whitney}). This is indeed a polynomial of degree $14$ in $\bar x$. \end{Example} \begin{center} \rule{2cm}{0.5pt} \quad\\ \end{center}
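The operations $\operatorname{Con}_{\bar c}$ and $\operatorname{PP}_{(d,\bar k)}$ appearing in the formula are routine content and primitive-part computations. The following sympy sketch applies them to a small toy expression (the actual resultants above are far too large to reproduce, so this is only an illustration of the two operators):

```python
import sympy as sp

t1, t2, d, k1, c1 = sp.symbols('t1 t2 d k1 c1')

# Toy "resultant": an extraneous factor in (t1, t2), a factor that
# is not constant in c1, and the factor carrying the offset degree.
R = 4*t1**4*t2**2 * (c1**2 + t1) * (d**2*t1**2 + k1**2*t2**2)

# Con_c: the content of R viewed as a polynomial in c1
con_c, _ = sp.Poly(R, c1).primitive()

# PP_{(d,k)}: the primitive part of that content as a polynomial in (d, k1)
_, pp = sp.Poly(con_c, d, k1).primitive()
pp = pp.as_expr()
print(pp)  # d**2*t1**2 + k1**2*t2**2
```

The total degree of the surviving factor in $(t_1,t_2)$, here $2$, is what plays the role of $\operatorname{deg}_{\bar t}$ in the degree formula.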
\section*{Appendix: Computational Complements}\label{Ap3-ComplementsToSomeProofs} \subsection*{Coefficients $c^{(i)}_j$ in Lemma \ref{lem:ch4:AuxiliaryPolynomialsBelongToEliminationIdeal} (page \pageref{lem:ch4:AuxiliaryPolynomialsBelongToEliminationIdeal}).} \addcontentsline{toc}{section}{Appendix: Computational Complements}
The polynomials $s_i$ can be expressed as follows: \[ s_i=c^{(i)}_1\,b^{P}+c^{(i)}_2\,\mathrm{nor}^{P}_{(1,2)}+c^{(i)}_3\,\mathrm{nor}^{P}_{(1,3)}+c^{(i)}_4\,\mathrm{nor}^{P}_{(2,3)} +c^{(i)}_5\,w^{P}+c^{(i)}_6\,\ell_1+c^{(i)}_7\,\ell_2+c^{(i)}_8\,\ell_3 \] where $c^{(i)}_j\in\C[d,\bar k,l,r,\bar t,\bar x]$, for $i=0,\ldots,3$ and $j=1,\ldots,8$, are the following polynomials:
\noindent $c^{(0)}_1=0$
\noindent $c^{(0)}_2=k_3$
\noindent $c^{(0)}_3=-k_2$
\noindent $c^{(0)}_4=k_1$
\noindent $c^{(0)}_5=P_0\,n_2\,k_3-P_0\,n_3\,k_2$
\noindent $c^{(0)}_6=-P_0\,n_1\,k_3+P_0\,n_3\,k_1$
\noindent $c^{(0)}_7=P_0\,n_1\,k_2-P_0\,n_2\,k_1$
\noindent $c^{(0)}_8=0$
\noindent $c^{(1)}_1=-2\,P_0\,n_1^3\,n_2\,k_1\,k_2\,r+P_0\,n_1^2\,n_2^2\,k_1^2\,r-2\,P_0\,n_1\,n_2^3\,k_1\,k_2\,r-2\,P_0\,n_1\,n_2\,n_3^2\,k_1\,k_2\,r+P_0\,n_2^4\,k_1^2\,r+P_0\,n_2^2\,n_3^2\,k_1^2\,r+n_1^2\,k_2^2$
\noindent $c^{(1)}_2=2\,P_1\,P_0\,n_1^3\,k_1\,k_2\,r-P_1\,P_0\,n_1^2\,n_2\,k_1^2\,r+2\,P_1\,P_0\,n_1\,n_2^2\,k_1\,k_2\,r-P_1\,P_0\,n_2^3\,k_1^2\,r+P_1\,n_2\,k_2^2-P_2\,P_0\,n_1^3\,k_1^2\,r-P_2\,P_0\,n_1\,n_2^2\,k_1^2\,r+P_2\,n_1\,k_2^2-2\,P_2\,n_2\,k_1\,k_2+2\,P_3\,P_0\,n_1^2\,n_3\,k_1\,k_2\,r+P_0^2\,n_1^3\,k_1^2\,r\,x_2-2\,P_0^2\,n_1^3\,k_1\,k_2\,r\,x_1+P_0^2\,n_1^2\,n_2\,k_1^2\,r\,x_1-2\,P_0^2\,n_1^2\,n_3\,k_1\,k_2\,r\,x_3+P_0^2\,n_1\,n_2^2\,k_1^2\,r\,x_2-2\,P_0^2\,n_1\,n_2^2\,k_1\,k_2\,r\,x_1+P_0^2\,n_2^3\,k_1^2\,r\,x_1-P_0\,n_1\,k_2^2\,x_2+P_0\,n_2\,k_2^2\,x_1$
\noindent $c^{(1)}_3=2\,P_1\,P_0\,n_1\,n_2\,n_3\,k_1\,k_2\,r-P_1\,P_0\,n_2^2\,n_3\,k_1^2\,r+P_1\,n_3\,k_2^2-2\,P_2\,P_0\,n_1^2\,n_3\,k_1\,k_2\,r-P_3\,P_0\,n_1\,n_2^2\,k_1^2\,r+P_3\,n_1\,k_2^2-2\,P_3\,n_2\,k_1\,k_2+2\,P_0^2\,n_1^2\,n_3\,k_1\,k_2\,r\,x_2+P_0^2\,n_1\,n_2^2\,k_1^2\,r\,x_3-2\,P_0^2\,n_1\,n_2\,n_3\,k_1\,k_2\,r\,x_1+P_0^2\,n_2^2\,n_3\,k_1^2\,r\,x_1-P_0\,n_1\,k_2\,k_3\,x_2+2\,P_0\,n_2\,k_2\,k_3\,x_1-P_0\,n_3\,k_2^2\,x_1$
\noindent $c^{(1)}_4=-2\,P_1\,n_3\,k_1\,k_2+P_2\,P_0\,n_1^2\,n_3\,k_1^2\,r+P_2\,n_3\,k_1^2+P_3\,P_0\,n_1^2\,n_2\,k_1^2\,r+P_3\,n_2\,k_1^2-P_0^2\,n_1^2\,n_2\,k_1^2\,r\,x_3-P_0^2\,n_1^2\,n_3\,k_1^2\,r\,x_2-P_0\,n_2\,k_1\,k_3\,x_1+P_0\,n_3\,k_1\,k_2\,x_1$
\noindent $c^{(1)}_5=2\,P_1\,P_0\,n_1^2\,k_2^2-2\,P_1\,P_0\,n_2\,n_3\,k_2\,k_3+2\,P_1\,P_0\,n_3^2\,k_2^2-2\,P_2\,P_0\,n_1^2\,k_1\,k_2+2\,P_2\,P_0\,n_1\,n_2\,k_2^2-2\,P_2\,P_0\,n_2^2\,k_1\,k_2+P_2\,P_0\,n_2\,n_3\,k_1\,k_3-P_2\,P_0\,n_3^2\,k_1\,k_2+2\,P_3\,P_0\,n_1\,n_2\,k_2\,k_3-P_3\,P_0\,n_2^2\,k_1\,k_3-P_3\,P_0\,n_2\,n_3\,k_1\,k_2+P_0^2\,n_1^2\,k_1\,k_2\,x_2-P_0^2\,n_1^2\,k_2^2\,x_1-2\,P_0^2\,n_1\,n_2\,k_2^2\,x_2-2\,P_0^2\,n_1\,n_2\,k_3^2\,x_2+P_0^2\,n_2^2\,k_1\,k_2\,x_2+P_0^2\,n_2^2\,k_1\,k_3\,x_3+P_0^2\,n_2^2\,k_2^2\,x_1+2\,P_0^2\,n_2\,n_3\,k_2\,k_3\,x_1-P_0^2\,n_3^2\,k_2^2\,x_1$
\noindent $c^{(1)}_6=-2\,P_1\,P_0\,n_1^2\,k_1\,k_2+P_1\,P_0\,n_1\,n_3\,k_2\,k_3-2\,P_1\,P_0\,n_3^2\,k_1\,k_2+2\,P_2\,P_0\,n_1^2\,k_1^2-2\,P_2\,P_0\,n_1\,n_2\,k_1\,k_2+2\,P_2\,P_0\,n_2^2\,k_1^2+P_2\,P_0\,n_3^2\,k_1^2-P_3\,P_0\,n_1^2\,k_2\,k_3+P_3\,P_0\,n_2\,n_3\,k_1^2-P_0^2\,n_1^2\,k_1^2\,x_2+P_0^2\,n_1^2\,k_1\,k_2\,x_1+P_0^2\,n_1^2\,k_2\,k_3\,x_3+2\,P_0^2\,n_1\,n_2\,k_1\,k_2\,x_2-2\,P_0^2\,n_1\,n_2\,k_1\,k_3\,x_3+2\,P_0^2\,n_1\,n_2\,k_3^2\,x_1-P_0^2\,n_1\,n_3\,k_2\,k_3\,x_1-P_0^2\,n_2^2\,k_1^2\,x_2-P_0^2\,n_2^2\,k_1\,k_2\,x_1-P_0^2\,n_2\,n_3\,k_1\,k_3\,x_1+P_0^2\,n_3^2\,k_1\,k_2\,x_1$
\noindent $c^{(1)}_7=-P_1\,P_0\,n_1\,n_3\,k_2^2+2\,P_1\,P_0\,n_2\,n_3\,k_1\,k_2-P_2\,P_0\,n_2\,n_3\,k_1^2+P_3\,P_0\,n_1^2\,k_2^2-2\,P_3\,P_0\,n_1\,n_2\,k_1\,k_2+P_3\,P_0\,n_2^2\,k_1^2-P_0^2\,n_1^2\,k_2^2\,x_3+2\,P_0^2\,n_1\,n_2\,k_1\,k_2\,x_3+2\,P_0^2\,n_1\,n_2\,k_1\,k_3\,x_2-2\,P_0^2\,n_1\,n_2\,k_2\,k_3\,x_1+P_0^2\,n_1\,n_3\,k_2^2\,x_1-P_0^2\,n_2^2\,k_1^2\,x_3-P_0^2\,n_2\,n_3\,k_1\,k_2\,x_1$
\noindent $c^{(1)}_8=2\,P_1\,P_2\,n_1^2\,k_1\,k_2-2\,P_1\,P_0\,n_1^2\,k_1\,k_2\,x_2-P_2^2\,n_1^2\,k_1^2+2\,P_2^2\,n_1\,n_2\,k_1\,k_2-P_2^2\,n_2^2\,k_1^2+2\,P_2\,P_0\,n_1^2\,k_1^2\,x_2-2\,P_2\,P_0\,n_1^2\,k_1\,k_2\,x_1-4\,P_2\,P_0\,n_1\,n_2\,k_1\,k_2\,x_2+2\,P_2\,P_0\,n_2^2\,k_1^2\,x_2+2\,P_3^2\,n_1\,n_2\,k_1\,k_2-P_3^2\,n_2^2\,k_1^2-4\,P_3\,P_0\,n_1\,n_2\,k_1\,k_2\,x_3+2\,P_3\,P_0\,n_2^2\,k_1^2\,x_3-P_0^2\,n_1^2\,k_1^2\,x_2^2+2\,P_0^2\,n_1^2\,k_1\,k_2\,x_1\,x_2-2\,P_0^2\,n_1\,n_2\,k_1\,k_2\,d^2+2\,P_0^2\,n_1\,n_2\,k_1\,k_2\,x_2^2+2\,P_0^2\,n_1\,n_2\,k_1\,k_2\,x_3^2+P_0^2\,n_2^2\,k_1^2\,d^2-P_0^2\,n_2^2\,k_1^2\,x_2^2-P_0^2\,n_2^2\,k_1^2\,x_3^2$
\noindent $c^{(2)}_1=P_0\,n_1^2\,n_2^2\,k_3^2\,r-2\,P_0\,n_1^2\,n_2\,n_3\,k_2\,k_3\,r+P_0\,n_1^2\,n_3^2\,k_2^2\,r+P_0\,n_2^4\,k_3^2\,r-2\,P_0\,n_2^3\,n_3\,k_2\,k_3\,r+P_0\,n_2^2\,n_3^2\,k_2^2\,r+P_0\,n_2^2\,n_3^2\,k_3^2\,r-2\,P_0\,n_2\,n_3^3\,k_2\,k_3\,r+P_0\,n_3^4\,k_2^2\,r$
\noindent $c^{(2)}_2=-P_1\,P_0\,n_1^2\,n_2\,k_3^2\,r+2\,P_1\,P_0\,n_1^2\,n_3\,k_2\,k_3\,r-P_1\,P_0\,n_2^3\,k_3^2\,r+2\,P_1\,P_0\,n_2^2\,n_3\,k_2\,k_3\,r-P_2\,P_0\,n_1^3\,k_3^2\,r-P_2\,P_0\,n_1\,n_2^2\,k_3^2\,r+P_0^2\,n_1^3\,k_3^2\,r\,x_2+P_0^2\,n_1^2\,n_2\,k_3^2\,r\,x_1-2\,P_0^2\,n_1^2\,n_3\,k_2\,k_3\,r\,x_1+P_0^2\,n_1\,n_2^2\,k_3^2\,r\,x_2+P_0^2\,n_2^3\,k_3^2\,r\,x_1-2\,P_0^2\,n_2^2\,n_3\,k_2\,k_3\,r\,x_1$
\noindent $c^{(2)}_3=-P_1\,P_0\,n_1^2\,n_3\,k_2^2\,r-P_1\,P_0\,n_2^2\,n_3\,k_2^2\,r-P_1\,P_0\,n_2^2\,n_3\,k_3^2\,r+2\,P_1\,P_0\,n_2\,n_3^2\,k_2\,k_3\,r-P_1\,P_0\,n_3^3\,k_2^2\,r+2\,P_2\,P_0\,n_1^3\,k_2\,k_3\,r+2\,P_2\,P_0\,n_1\,n_2^2\,k_2\,k_3\,r-P_3\,P_0\,n_1^3\,k_2^2\,r-P_3\,P_0\,n_1\,n_2^2\,k_2^2\,r-P_3\,P_0\,n_1\,n_2^2\,k_3^2\,r+2\,P_3\,P_0\,n_1\,n_2\,n_3\,k_2\,k_3\,r-P_3\,P_0\,n_1\,n_3^2\,k_2^2\,r+P_0^2\,n_1^3\,k_2^2\,r\,x_3-2\,P_0^2\,n_1^3\,k_2\,k_3\,r\,x_2+P_0^2\,n_1^2\,n_3\,k_2^2\,r\,x_1+P_0^2\,n_1\,n_2^2\,k_2^2\,r\,x_3-2\,P_0^2\,n_1\,n_2^2\,k_2\,k_3\,r\,x_2+P_0^2\,n_1\,n_2^2\,k_3^2\,r\,x_3-2\,P_0^2\,n_1\,n_2\,n_3\,k_2\,k_3\,r\,x_3+P_0^2\,n_1\,n_3^2\,k_2^2\,r\,x_3+P_0^2\,n_2^2\,n_3\,k_2^2\,r\,x_1+P_0^2\,n_2^2\,n_3\,k_3^2\,r\,x_1-2\,P_0^2\,n_2\,n_3^2\,k_2\,k_3\,r\,x_1+P_0^2\,n_3^3\,k_2^2\,r\,x_1$
\noindent $c^{(2)}_4=2\,P_2\,P_0\,n_1^2\,n_2\,k_2\,k_3\,r-P_2\,P_0\,n_1^2\,n_3\,k_2^2\,r+P_2\,P_0\,n_1^2\,n_3\,k_3^2\,r+2\,P_2\,P_0\,n_2^3\,k_2\,k_3\,r-P_2\,P_0\,n_2^2\,n_3\,k_2^2\,r+2\,P_2\,P_0\,n_2\,n_3^2\,k_2\,k_3\,r-P_2\,P_0\,n_3^3\,k_2^2\,r+P_2\,n_3\,k_3^2-P_3\,P_0\,n_1^2\,n_2\,k_2^2\,r+P_3\,P_0\,n_1^2\,n_2\,k_3^2\,r-2\,P_3\,P_0\,n_1^2\,n_3\,k_2\,k_3\,r-P_3\,P_0\,n_2^3\,k_2^2\,r-P_3\,P_0\,n_2\,n_3^2\,k_2^2\,r+P_3\,n_2\,k_3^2-2\,P_3\,n_3\,k_2\,k_3+P_0^2\,n_1^2\,n_2\,k_2^2\,r\,x_3-2\,P_0^2\,n_1^2\,n_2\,k_2\,k_3\,r\,x_2-P_0^2\,n_1^2\,n_2\,k_3^2\,r\,x_3+P_0^2\,n_1^2\,n_3\,k_2^2\,r\,x_2+2\,P_0^2\,n_1^2\,n_3\,k_2\,k_3\,r\,x_3-P_0^2\,n_1^2\,n_3\,k_3^2\,r\,x_2+P_0^2\,n_2^3\,k_2^2\,r\,x_3-2\,P_0^2\,n_2^3\,k_2\,k_3\,r\,x_2+P_0^2\,n_2^2\,n_3\,k_2^2\,r\,x_2+P_0^2\,n_2\,n_3^2\,k_2^2\,r\,x_3-2\,P_0^2\,n_2\,n_3^2\,k_2\,k_3\,r\,x_2+P_0^2\,n_3^3\,k_2^2\,r\,x_2-P_0\,n_2\,k_3^2\,x_3+P_0\,n_3\,k_3^2\,x_2$
\noindent $c^{(2)}_5=0$
\noindent $c^{(2)}_6=2\,P_2\,P_0\,n_1^2\,k_3^2+2\,P_2\,P_0\,n_2^2\,k_3^2-2\,P_3\,P_0\,n_1^2\,k_2\,k_3-2\,P_3\,P_0\,n_2^2\,k_2\,k_3+2\,P_3\,P_0\,n_2\,n_3\,k_3^2-2\,P_3\,P_0\,n_3^2\,k_2\,k_3+P_0^2\,n_1^2\,k_2\,k_3\,x_3-P_0^2\,n_1^2\,k_3^2\,x_2+P_0^2\,n_2^2\,k_2\,k_3\,x_3-P_0^2\,n_2^2\,k_3^2\,x_2-2\,P_0^2\,n_2\,n_3\,k_3^2\,x_3+P_0^2\,n_3^2\,k_2\,k_3\,x_3+P_0^2\,n_3^2\,k_3^2\,x_2$
\noindent $c^{(2)}_7=-2\,P_2\,P_0\,n_1^2\,k_2\,k_3-2\,P_2\,P_0\,n_2^2\,k_2\,k_3+2\,P_3\,P_0\,n_1^2\,k_2^2+2\,P_3\,P_0\,n_2^2\,k_2^2-2\,P_3\,P_0\,n_2\,n_3\,k_2\,k_3+2\,P_3\,P_0\,n_3^2\,k_2^2-P_0^2\,n_1^2\,k_2^2\,x_3+P_0^2\,n_1^2\,k_2\,k_3\,x_2-P_0^2\,n_2^2\,k_2^2\,x_3+P_0^2\,n_2^2\,k_2\,k_3\,x_2+2\,P_0^2\,n_2\,n_3\,k_2\,k_3\,x_3-P_0^2\,n_3^2\,k_2^2\,x_3-P_0^2\,n_3^2\,k_2\,k_3\,x_2$
\noindent $c^{(2)}_8=-P_2^2\,n_1^2\,k_3^2-P_2^2\,n_2^2\,k_3^2+2\,P_2\,P_3\,n_1^2\,k_2\,k_3+2\,P_2\,P_3\,n_2^2\,k_2\,k_3-2\,P_2\,P_0\,n_1^2\,k_2\,k_3\,x_3+2\,P_2\,P_0\,n_1^2\,k_3^2\,x_2-2\,P_2\,P_0\,n_2^2\,k_2\,k_3\,x_3+2\,P_2\,P_0\,n_2^2\,k_3^2\,x_2-P_3^2\,n_1^2\,k_2^2-P_3^2\,n_2^2\,k_2^2-P_3^2\,n_2^2\,k_3^2+2\,P_3^2\,n_2\,n_3\,k_2\,k_3-P_3^2\,n_3^2\,k_2^2+2\,P_3\,P_0\,n_1^2\,k_2^2\,x_3-2\,P_3\,P_0\,n_1^2\,k_2\,k_3\,x_2+2\,P_3\,P_0\,n_2^2\,k_2^2\,x_3-2\,P_3\,P_0\,n_2^2\,k_2\,k_3\,x_2+2\,P_3\,P_0\,n_2^2\,k_3^2\,x_3-4\,P_3\,P_0\,n_2\,n_3\,k_2\,k_3\,x_3+2\,P_3\,P_0\,n_3^2\,k_2^2\,x_3-P_0^2\,n_1^2\,k_2^2\,x_3^2+2\,P_0^2\,n_1^2\,k_2\,k_3\,x_2\,x_3-P_0^2\,n_1^2\,k_3^2\,x_2^2-P_0^2\,n_2^2\,k_2^2\,x_3^2+2\,P_0^2\,n_2^2\,k_2\,k_3\,x_2\,x_3+P_0^2\,n_2^2\,k_3^2\,d^2-P_0^2\,n_2^2\,k_3^2\,x_2^2-P_0^2\,n_2^2\,k_3^2\,x_3^2-2\,P_0^2\,n_2\,n_3\,k_2\,k_3\,d^2+2\,P_0^2\,n_2\,n_3\,k_2\,k_3\,x_3^2+P_0^2\,n_3^2\,k_2^2\,d^2-P_0^2\,n_3^2\,k_2^2\,x_3^2$
\noindent $c^{(3)}_1=-2\,P_0\,n_1^3\,n_3\,k_1\,k_3\,r+P_0\,n_1^2\,n_3^2\,k_1^2\,r-2\,P_0\,n_1\,n_2^2\,n_3\,k_1\,k_3\,r-2\,P_0\,n_1\,n_3^3\,k_1\,k_3\,r+P_0\,n_2^2\,n_3^2\,k_1^2\,r+P_0\,n_3^4\,k_1^2\,r+n_1^2\,k_3^2$
\noindent $c^{(3)}_2=2\,P_1\,P_0\,n_1\,n_2\,n_3\,k_1\,k_3\,r+P_1\,n_2\,k_3^2-2\,P_2\,P_0\,n_1^2\,n_3\,k_1\,k_3\,r+P_2\,n_1\,k_3^2-2\,P_3\,P_0\,n_1^2\,n_2\,k_1\,k_3\,r-2\,P_3\,n_2\,k_1\,k_3+2\,P_0^2\,n_1^2\,n_2\,k_1\,k_3\,r\,x_3+2\,P_0^2\,n_1^2\,n_3\,k_1\,k_3\,r\,x_2-2\,P_0^2\,n_1\,n_2\,n_3\,k_1\,k_3\,r\,x_1-P_0\,n_1\,k_3^2\,x_2+P_0\,n_2\,k_3^2\,x_1$
\noindent $c^{(3)}_3=2\,P_1\,P_0\,n_1^3\,k_1\,k_3\,r-P_1\,P_0\,n_1^2\,n_3\,k_1^2\,r+2\,P_1\,P_0\,n_1\,n_3^2\,k_1\,k_3\,r-P_1\,P_0\,n_2^2\,n_3\,k_1^2\,r-P_1\,P_0\,n_3^3\,k_1^2\,r+P_1\,n_3\,k_3^2+4\,P_2\,P_0\,n_1^2\,n_2\,k_1\,k_3\,r-P_3\,P_0\,n_1^3\,k_1^2\,r-P_3\,P_0\,n_1\,n_2^2\,k_1^2\,r-P_3\,P_0\,n_1\,n_3^2\,k_1^2\,r+P_3\,n_1\,k_3^2-2\,P_3\,n_3\,k_1\,k_3+P_0^2\,n_1^3\,k_1^2\,r\,x_3-2\,P_0^2\,n_1^3\,k_1\,k_3\,r\,x_1-4\,P_0^2\,n_1^2\,n_2\,k_1\,k_3\,r\,x_2+P_0^2\,n_1^2\,n_3\,k_1^2\,r\,x_1+P_0^2\,n_1\,n_2^2\,k_1^2\,r\,x_3+P_0^2\,n_1\,n_3^2\,k_1^2\,r\,x_3-2\,P_0^2\,n_1\,n_3^2\,k_1\,k_3\,r\,x_1+P_0^2\,n_2^2\,n_3\,k_1^2\,r\,x_1+P_0^2\,n_3^3\,k_1^2\,r\,x_1-P_0\,n_1\,k_3^2\,x_3+P_0\,n_3\,k_3^2\,x_1$
\noindent $c^{(3)}_4=-P_2\,P_0\,n_1^2\,n_3\,k_1^2\,r+2\,P_2\,P_0\,n_1\,n_2^2\,k_1\,k_3\,r+2\,P_2\,P_0\,n_1\,n_3^2\,k_1\,k_3\,r-P_2\,P_0\,n_2^2\,n_3\,k_1^2\,r-P_2\,P_0\,n_3^3\,k_1^2\,r-P_3\,P_0\,n_1^2\,n_2\,k_1^2\,r-P_3\,P_0\,n_2^3\,k_1^2\,r-P_3\,P_0\,n_2\,n_3^2\,k_1^2\,r+P_0^2\,n_1^2\,n_2\,k_1^2\,r\,x_3+P_0^2\,n_1^2\,n_3\,k_1^2\,r\,x_2-2\,P_0^2\,n_1\,n_2^2\,k_1\,k_3\,r\,x_2-2\,P_0^2\,n_1\,n_3^2\,k_1\,k_3\,r\,x_2+P_0^2\,n_2^3\,k_1^2\,r\,x_3+P_0^2\,n_2^2\,n_3\,k_1^2\,r\,x_2+P_0^2\,n_2\,n_3^2\,k_1^2\,r\,x_3+P_0^2\,n_3^3\,k_1^2\,r\,x_2$
\noindent $c^{(3)}_5=2\,P_1\,P_0\,n_1^2\,k_3^2+2\,P_2\,P_0\,n_1\,n_2\,k_3^2-2\,P_3\,P_0\,n_1^2\,k_1\,k_3+2\,P_3\,P_0\,n_1\,n_3\,k_3^2-2\,P_3\,P_0\,n_2^2\,k_1\,k_3-2\,P_3\,P_0\,n_3^2\,k_1\,k_3+P_0^2\,n_1^2\,k_1\,k_3\,x_3-P_0^2\,n_1^2\,k_3^2\,x_1-2\,P_0^2\,n_1\,n_2\,k_3^2\,x_2-2\,P_0^2\,n_1\,n_3\,k_3^2\,x_3+P_0^2\,n_2^2\,k_1\,k_3\,x_3+P_0^2\,n_2^2\,k_3^2\,x_1+P_0^2\,n_3^2\,k_1\,k_3\,x_3+P_0^2\,n_3^2\,k_3^2\,x_1$
\noindent $c^{(3)}_6=0$
\noindent $c^{(3)}_7=-2\,P_1\,P_0\,n_1^2\,k_1\,k_3-2\,P_2\,P_0\,n_1\,n_2\,k_1\,k_3+2\,P_3\,P_0\,n_1^2\,k_1^2-2\,P_3\,P_0\,n_1\,n_3\,k_1\,k_3+2\,P_3\,P_0\,n_2^2\,k_1^2+2\,P_3\,P_0\,n_3^2\,k_1^2-P_0^2\,n_1^2\,k_1^2\,x_3+P_0^2\,n_1^2\,k_1\,k_3\,x_1+2\,P_0^2\,n_1\,n_2\,k_1\,k_3\,x_2+2\,P_0^2\,n_1\,n_3\,k_1\,k_3\,x_3-P_0^2\,n_2^2\,k_1^2\,x_3-P_0^2\,n_2^2\,k_1\,k_3\,x_1-P_0^2\,n_3^2\,k_1^2\,x_3-P_0^2\,n_3^2\,k_1\,k_3\,x_1$
\noindent $c^{(3)}_8=2\,P_1\,P_3\,n_1^2\,k_1\,k_3-2\,P_1\,P_0\,n_1^2\,k_1\,k_3\,x_3+2\,P_2\,P_3\,n_1\,n_2\,k_1\,k_3-2\,P_2\,P_0\,n_1\,n_2\,k_1\,k_3\,x_3-P_3^2\,n_1^2\,k_1^2+2\,P_3^2\,n_1\,n_3\,k_1\,k_3-P_3^2\,n_2^2\,k_1^2-P_3^2\,n_3^2\,k_1^2+2\,P_3\,P_0\,n_1^2\,k_1^2\,x_3-2\,P_3\,P_0\,n_1^2\,k_1\,k_3\,x_1-2\,P_3\,P_0\,n_1\,n_2\,k_1\,k_3\,x_2-4\,P_3\,P_0\,n_1\,n_3\,k_1\,k_3\,x_3+2\,P_3\,P_0\,n_2^2\,k_1^2\,x_3+2\,P_3\,P_0\,n_3^2\,k_1^2\,x_3-P_0^2\,n_1^2\,k_1^2\,x_3^2+2\,P_0^2\,n_1^2\,k_1\,k_3\,x_1\,x_3+2\,P_0^2\,n_1\,n_2\,k_1\,k_3\,x_2\,x_3-2\,P_0^2\,n_1\,n_3\,k_1\,k_3\,d^2+2\,P_0^2\,n_1\,n_3\,k_1\,k_3\,x_3^2-P_0^2\,n_2^2\,k_1^2\,x_3^2+P_0^2\,n_3^2\,k_1^2\,d^2-P_0^2\,n_3^2\,k_1^2\,x_3^2$
\subsection*{Generic Offset Polynomial for Example \ref{exam:ch4:WhitneyUmbrella} (page \pageref{exam:ch4:WhitneyUmbrella})}\label{Ap3-ComplementsToSomeProofs-Whitney}
\noindent {$ g(d,x_1,x_2,x_3)=-16{\,}x_1^2{\,}x_2^{10}{\,}x_3^2+16{\,}x_1^2{\,}x_2^{10}{\,}d^2-48{\,}x_1^2{\,}x_2^8{\,}x_3^4+128{\,}x_1^2{\,}x_2^8{\,}x_3^2{\,}d^2-80{\,}x_1^2{\,}x_2^8{\,}d^4-48{\,}x_1^2{\,}x_2^6{\,}x_3^6+240{\,}x_1^2{\,}x_2^6{\,}x_3^4{\,}d^2-352{\,}x_1^2{\,}x_2^6{\,}x_3^2{\,}d^4+160{\,}x_1^2{\,}x_2^6{\,}d^6-16{\,}x_1^2{\,}x_2^4{\,}x_3^8+160{\,}x_1^2{\,}x_2^4{\,}x_3^6{\,}d^2-432{\,}x_1^2{\,}x_2^4{\,}x_3^4{\,}d^4+448{\,}x_1^2{\,}x_2^4{\,}x_3^2{\,}d^6-160{\,}x_1^2{\,}x_2^4{\,}d^8+32{\,}x_1^2{\,}x_2^2{\,}x_3^8{\,}d^2-176{\,}x_1^2{\,}x_2^2{\,}x_3^6{\,}d^4+336{\,}x_1^2{\,}x_2^2{\,}x_3^4{\,}d^6-272{\,}x_1^2{\,}x_2^2{\,}x_3^2{\,}d^8+80{\,}x_1^2{\,}x_2^2{\,}d^{10}-16{\,}x_1^2{\,}x_3^8{\,}d^4+64{\,}x_1^2{\,}x_3^6{\,}d^6-96{\,}x_1^2{\,}x_3^4{\,}d^8+64{\,}x_1^2{\,}x_3^2{\,}d^{10}-16{\,}x_1^2{\,}d^{12}-16{\,}x_2^{12}{\,}x_3^2+16{\,}x_2^{12}{\,}d^2-64{\,}x_2^{10}{\,}x_3^4+160{\,}x_2^{10}{\,}x_3^2{\,}d^2-96{\,}x_2^{10}{\,}d^4-96{\,}x_2^8{\,}x_3^6+416{\,}x_2^8{\,}x_3^4{\,}d^2-560{\,}x_2^8{\,}x_3^2{\,}d^4+240{\,}x_2^8{\,}d^6-64{\,}x_2^6{\,}x_3^8+448{\,}x_2^6{\,}x_3^6{\,}d^2-1024{\,}x_2^6{\,}x_3^4{\,}d^4+960{\,}x_2^6{\,}x_3^2{\,}d^6-320{\,}x_2^6{\,}d^8-16{\,}x_2^4{\,}x_3^{10}+208{\,}x_2^4{\,}x_3^8{\,}d^2-768{\,}x_2^4{\,}x_3^6{\,}d^4+1216{\,}x_2^4{\,}x_3^4{\,}d^6-880{\,}x_2^4{\,}x_3^2{\,}d^8+240{\,}x_2^4{\,}d^{10}+32{\,}x_2^2{\,}x_3^{10}{\,}d^2-224{\,}x_2^2{\,}x_3^8{\,}d^4+576{\,}x_2^2{\,}x_3^6{\,}d^6-704{\,}x_2^2{\,}x_3^4{\,}d^8+416{\,}x_2^2{\,}x_3^2{\,}d^{10}-96{\,}x_2^2{\,}d^{12}-16{\,}x_3^{10}{\,}d^4+80{\,}x_3^8{\,}d^6-160{\,}x_3^6{\,}d^8+160{\,}x_3^4{\,}d^{10}-80{\,}x_3^2{\,}d^{12}+16{\,}d^{14}+32{\,}x_1^4{\,}x_2^8{\,}x_3+744{\,}x_1^4{\,}x_2^6{\,}x_3^3-808{\,}x_1^4{\,}x_2^6{\,}x_3{\,}d^2-120{\,}x_1^4{\,}x_2^4{\,}x_3^5-1080{\,}x_1^4{\,}x_2^4{\,}x_3^3{\,}d^2+1232{\,}x_1^4{\,}x_2^4{\,}x_3{\,}d^4+32{\,}x_1^4{\,}x_2^2{\,}x_3^7+408{\,}x_1^4{\,}x_2^2{\,}x_3^5{\,}d^2-272{\,}x_1^4{\,}x_2^2{\,}x_3^3{\,}d^4-168{\,}x_1^4{\,}x_2^2{\,}x_3{\,}d^6+32{\,}x_1^4{\,}x_3^7{\,}d^2-352
{\,}x_1^4{\,}x_3^5{\,}d^4+608{\,}x_1^4{\,}x_3^3{\,}d^6-288{\,}x_1^4{\,}x_3{\,}d^8+32{\,}x_1^2{\,}x_2^{10}{\,}x_3+968{\,}x_1^2{\,}x_2^8{\,}x_3^3-1032{\,}x_1^2{\,}x_2^8{\,}x_3{\,}d^2+720{\,}x_1^2{\,}x_2^6{\,}x_3^5-3176{\,}x_1^2{\,}x_2^6{\,}x_3^3{\,}d^2+2488{\,}x_1^2{\,}x_2^6{\,}x_3{\,}d^4-184{\,}x_1^2{\,}x_2^4{\,}x_3^7-392{\,}x_1^2{\,}x_2^4{\,}x_3^5{\,}d^2+2168{\,}x_1^2{\,}x_2^4{\,}x_3^3{\,}d^4-1592{\,}x_1^2{\,}x_2^4{\,}x_3{\,}d^6+32{\,}x_1^2{\,}x_2^2{\,}x_3^9+632{\,}x_1^2{\,}x_2^2{\,}x_3^7{\,}d^2-1672{\,}x_1^2{\,}x_2^2{\,}x_3^5{\,}d^4+1320{\,}x_1^2{\,}x_2^2{\,}x_3^3{\,}d^6-312{\,}x_1^2{\,}x_2^2{\,}x_3{\,}d^8+32{\,}x_1^2{\,}x_3^9{\,}d^2-512{\,}x_1^2{\,}x_3^7{\,}d^4+1344{\,}x_1^2{\,}x_3^5{\,}d^6-1280{\,}x_1^2{\,}x_3^3{\,}d^8+416{\,}x_1^2{\,}x_3{\,}d^{10}+160{\,}x_2^{10}{\,}x_3^3-160{\,}x_2^{10}{\,}x_3{\,}d^2+224{\,}x_2^8{\,}x_3^5-736{\,}x_2^8{\,}x_3^3{\,}d^2+512{\,}x_2^8{\,}x_3{\,}d^4-32{\,}x_2^6{\,}x_3^7-256{\,}x_2^6{\,}x_3^5{\,}d^2+736{\,}x_2^6{\,}x_3^3{\,}d^4-448{\,}x_2^6{\,}x_3{\,}d^6-96{\,}x_2^4{\,}x_3^9+544{\,}x_2^4{\,}x_3^7{\,}d^2-928{\,}x_2^4{\,}x_3^5{\,}d^4+608{\,}x_2^4{\,}x_3^3{\,}d^6-128{\,}x_2^4{\,}x_3{\,}d^8+224{\,}x_2^2{\,}x_3^9{\,}d^2-1024{\,}x_2^2{\,}x_3^7{\,}d^4+1728{\,}x_2^2{\,}x_3^5{\,}d^6-1280{\,}x_2^2{\,}x_3^3{\,}d^8+352{\,}x_2^2{\,}x_3{\,}d^{10}-128{\,}x_3^9{\,}d^4+512{\,}x_3^7{\,}d^6-768{\,}x_3^5{\,}d^8+512{\,}x_3^3{\,}d^{10}-128{\,}x_3{\,}d^{12}-16{\,}x_1^6{\,}x_2^6-2073{\,}x_1^6{\,}x_2^4{\,}x_3^2+873{\,}x_1^6{\,}x_2^4{\,}d^2+384{\,}x_1^6{\,}x_2^2{\,}x_3^4-324{\,}x_1^6{\,}x_2^2{\,}x_3^2{\,}d^2+2052{\,}x_1^6{\,}x_2^2{\,}d^4-16{\,}x_1^6{\,}x_3^6+504{\,}x_1^6{\,}x_3^4{\,}d^2-1728{\,}x_1^6{\,}x_3^2{\,}d^4+216{\,}x_1^6{\,}d^6-16{\,}x_1^4{\,}x_2^8-2797{\,}x_1^4{\,}x_2^6{\,}x_3^2+1277{\,}x_1^4{\,}x_2^6{\,}d^2-2529{\,}x_1^4{\,}x_2^4{\,}x_3^4+2602{\,}x_1^4{\,}x_2^4{\,}x_3^2{\,}d^2+2615{\,}x_1^4{\,}x_2^4{\,}d^4+560{\,}x_1^4{\,}x_2^2{\,}x_3^6+980{\,}x_1^4{\,}x_2^2{\,}x_3^4{\,}d^2+552{\,}x_1^4{\,}x_2^2{\,}x_3^2{\,}d^4-3372{\,}x_1^4{\,}x_2^2{\,}d^6-16{\,}x_1
^4{\,}x_3^8+744{\,}x_1^4{\,}x_3^6{\,}d^2-3992{\,}x_1^4{\,}x_3^4{\,}d^4+3768{\,}x_1^4{\,}x_3^2{\,}d^6-504{\,}x_1^4{\,}d^8-620{\,}x_1^2{\,}x_2^8{\,}x_3^2+364{\,}x_1^2{\,}x_2^8{\,}d^2-1060{\,}x_1^2{\,}x_2^6{\,}x_3^4+252{\,}x_1^2{\,}x_2^6{\,}x_3^2{\,}d^2+1256{\,}x_1^2{\,}x_2^6{\,}d^4-824{\,}x_1^2{\,}x_2^4{\,}x_3^6+1436{\,}x_1^2{\,}x_2^4{\,}x_3^4{\,}d^2+2448{\,}x_1^2{\,}x_2^4{\,}x_3^2{\,}d^4-3252{\,}x_1^2{\,}x_2^4{\,}d^6+192{\,}x_1^2{\,}x_2^2{\,}x_3^8+1952{\,}x_1^2{\,}x_2^2{\,}x_3^6{\,}d^2-4032{\,}x_1^2{\,}x_2^2{\,}x_3^4{\,}d^4+608{\,}x_1^2{\,}x_2^2{\,}x_3^2{\,}d^6+1280{\,}x_1^2{\,}x_2^2{\,}d^8+224{\,}x_1^2{\,}x_3^8{\,}d^2-2432{\,}x_1^2{\,}x_3^6{\,}d^4+4544{\,}x_1^2{\,}x_3^4{\,}d^6-2688{\,}x_1^2{\,}x_3^2{\,}d^8+352{\,}x_1^2{\,}d^{10}+8{\,}x_2^{10}{\,}x_3^2-8{\,}x_2^{10}{\,}d^2-480{\,}x_2^8{\,}x_3^4+296{\,}x_2^8{\,}x_3^2{\,}d^2+184{\,}x_2^8{\,}d^4+296{\,}x_2^6{\,}x_3^6-8{\,}x_2^6{\,}x_3^4{\,}d^2+152{\,}x_2^6{\,}x_3^2{\,}d^4-440{\,}x_2^6{\,}d^6-240{\,}x_2^4{\,}x_3^8+104{\,}x_2^4{\,}x_3^6{\,}d^2+424{\,}x_2^4{\,}x_3^4{\,}d^4-584{\,}x_2^4{\,}x_3^2{\,}d^6+296{\,}x_2^4{\,}d^8+672{\,}x_2^2{\,}x_3^8{\,}d^2-1792{\,}x_2^2{\,}x_3^6{\,}d^4+1600{\,}x_2^2{\,}x_3^4{\,}d^6-512{\,}x_2^2{\,}x_3^2{\,}d^8+32{\,}x_2^2{\,}d^{10}-448{\,}x_3^8{\,}d^4+1408{\,}x_3^6{\,}d^6-1536{\,}x_3^4{\,}d^8+640{\,}x_3^2{\,}d^{10}-64{\,}d^{12}+2106{\,}x_1^8{\,}x_2^2{\,}x_3-216{\,}x_1^8{\,}x_3^3+1944{\,}x_1^8{\,}x_3{\,}d^2+2946{\,}x_1^6{\,}x_2^4{\,}x_3+3282{\,}x_1^6{\,}x_2^2{\,}x_3^3+54{\,}x_1^6{\,}x_2^2{\,}x_3{\,}d^2-312{\,}x_1^6{\,}x_3^5+4176{\,}x_1^6{\,}x_3^3{\,}d^2-5400{\,}x_1^6{\,}x_3{\,}d^4+760{\,}x_1^4{\,}x_2^6{\,}x_3+800{\,}x_1^4{\,}x_2^4{\,}x_3^3+56{\,}x_1^4{\,}x_2^4{\,}x_3{\,}d^2+1744{\,}x_1^4{\,}x_2^2{\,}x_3^5+2048{\,}x_1^4{\,}x_2^2{\,}x_3^3{\,}d^2-3696{\,}x_1^4{\,}x_2^2{\,}x_3{\,}d^4-96{\,}x_1^4{\,}x_3^7+2784{\,}x_1^4{\,}x_3^5{\,}d^2-8992{\,}x_1^4{\,}x_3^3{\,}d^4+5280{\,}x_1^4{\,}x_3{\,}d^6-16{\,}x_1^2{\,}x_2^8{\,}x_3+1362{\,}x_1^2{\,}x_2^6{\,}x_3^3-546{\,}x_1^2{\,}x_2^6{\,}x_3{\,}d^2-1560{\,}x_1^2{\,
}x_2^4{\,}x_3^5+2880{\,}x_1^2{\,}x_2^4{\,}x_3^3{\,}d^2-2472{\,}x_1^2{\,}x_2^4{\,}x_3{\,}d^4+480{\,}x_1^2{\,}x_2^2{\,}x_3^7+2120{\,}x_1^2{\,}x_2^2{\,}x_3^5{\,}d^2-3408{\,}x_1^2{\,}x_2^2{\,}x_3^3{\,}d^4+1192{\,}x_1^2{\,}x_2^2{\,}x_3{\,}d^6+672{\,}x_1^2{\,}x_3^7{\,}d^2-5088{\,}x_1^2{\,}x_3^5{\,}d^4+6624{\,}x_1^2{\,}x_3^3{\,}d^6-2208{\,}x_1^2{\,}x_3{\,}d^8-72{\,}x_2^8{\,}x_3^3+72{\,}x_2^8{\,}x_3{\,}d^2+440{\,}x_2^6{\,}x_3^5+592{\,}x_2^6{\,}x_3^3{\,}d^2-1032{\,}x_2^6{\,}x_3{\,}d^4-320{\,}x_2^4{\,}x_3^7-864{\,}x_2^4{\,}x_3^5{\,}d^2+896{\,}x_2^4{\,}x_3^3{\,}d^4+288{\,}x_2^4{\,}x_3{\,}d^6+1120{\,}x_2^2{\,}x_3^7{\,}d^2-1440{\,}x_2^2{\,}x_3^5{\,}d^4+32{\,}x_2^2{\,}x_3^3{\,}d^6+288{\,}x_2^2{\,}x_3{\,}d^8-896{\,}x_3^7{\,}d^4+2176{\,}x_3^5{\,}d^6-1664{\,}x_3^3{\,}d^8+384{\,}x_3{\,}d^{10}-729{\,}x_1^{10}-1053{\,}x_1^8{\,}x_2^2-1377{\,}x_1^8{\,}x_3^2+2673{\,}x_1^8{\,}d^2-300{\,}x_1^6{\,}x_2^4+684{\,}x_1^6{\,}x_2^2{\,}x_3^2+3132{\,}x_1^6{\,}x_2^2{\,}d^2-888{\,}x_1^6{\,}x_3^4+5616{\,}x_1^6{\,}x_3^2{\,}d^2-3672{\,}x_1^6{\,}d^4+8{\,}x_1^4{\,}x_2^6-1500{\,}x_1^4{\,}x_2^4{\,}x_3^2+1072{\,}x_1^4{\,}x_2^4{\,}d^2+2232{\,}x_1^4{\,}x_2^2{\,}x_3^4+456{\,}x_1^4{\,}x_2^2{\,}x_3^2{\,}d^2-2576{\,}x_1^4{\,}x_2^2{\,}d^4-240{\,}x_1^4{\,}x_3^6+4384{\,}x_1^4{\,}x_3^4{\,}d^2-8272{\,}x_1^4{\,}x_3^2{\,}d^4+2336{\,}x_1^4{\,}d^6+276{\,}x_1^2{\,}x_2^6{\,}x_3^2-164{\,}x_1^2{\,}x_2^6{\,}d^2-1336{\,}x_1^2{\,}x_2^4{\,}x_3^4+1048{\,}x_1^2{\,}x_2^4{\,}x_3^2{\,}d^2-1024{\,}x_1^2{\,}x_2^4{\,}d^4+640{\,}x_1^2{\,}x_2^2{\,}x_3^6+480{\,}x_1^2{\,}x_2^2{\,}x_3^4{\,}d^2-112{\,}x_1^2{\,}x_2^2{\,}x_3^2{\,}d^4+272{\,}x_1^2{\,}x_2^2{\,}d^6+1120{\,}x_1^2{\,}x_3^6{\,}d^2-5760{\,}x_1^2{\,}x_3^4{\,}d^4+5088{\,}x_1^2{\,}x_3^2{\,}d^6-704{\,}x_1^2{\,}d^8-1{\,}x_2^8{\,}x_3^2+1{\,}x_2^8{\,}d^2+184{\,}x_2^6{\,}x_3^4-112{\,}x_2^6{\,}x_3^2{\,}d^2-72{\,}x_2^6{\,}d^4-240{\,}x_2^4{\,}x_3^6-896{\,}x_2^4{\,}x_3^4{\,}d^2+656{\,}x_2^4{\,}x_3^2{\,}d^4+480{\,}x_2^4{\,}d^6+1120{\,}x_2^2{\,}x_3^6{\,}d^2-480{\,}x_2^2{\,}x_3^4{\,}d^4-864{\,}x_2^2{\,}
x_3^2{\,}d^6+224{\,}x_2^2{\,}d^8-1120{\,}x_3^6{\,}d^4+2080{\,}x_3^4{\,}d^6-1056{\,}x_3^2{\,}d^8+96{\,}d^{10}-648{\,}x_1^8{\,}x_3+834{\,}x_1^6{\,}x_2^2{\,}x_3-968{\,}x_1^6{\,}x_3^3+2664{\,}x_1^6{\,}x_3{\,}d^2-336{\,}x_1^4{\,}x_2^4{\,}x_3+1352{\,}x_1^4{\,}x_2^2{\,}x_3^3-1408{\,}x_1^4{\,}x_2^2{\,}x_3{\,}d^2-320{\,}x_1^4{\,}x_3^5+3456{\,}x_1^4{\,}x_3^3{\,}d^2-3776{\,}x_1^4{\,}x_3{\,}d^4+2{\,}x_1^2{\,}x_2^6{\,}x_3-464{\,}x_1^2{\,}x_2^4{\,}x_3^3+176{\,}x_1^2{\,}x_2^4{\,}x_3{\,}d^2+480{\,}x_1^2{\,}x_2^2{\,}x_3^5-568{\,}x_1^2{\,}x_2^2{\,}x_3^3{\,}d^2+1176{\,}x_1^2{\,}x_2^2{\,}x_3{\,}d^4+1120{\,}x_1^2{\,}x_3^5{\,}d^2-3776{\,}x_1^2{\,}x_3^3{\,}d^4+2144{\,}x_1^2{\,}x_3{\,}d^6+8{\,}x_2^6{\,}x_3^3-8{\,}x_2^6{\,}x_3{\,}d^2-96{\,}x_2^4{\,}x_3^5-256{\,}x_2^4{\,}x_3^3{\,}d^2+352{\,}x_2^4{\,}x_3{\,}d^4+672{\,}x_2^2{\,}x_3^5{\,}d^2-64{\,}x_2^2{\,}x_3^3{\,}d^4-608{\,}x_2^2{\,}x_3{\,}d^6-896{\,}x_3^5{\,}d^4+1280{\,}x_3^3{\,}d^6-384{\,}x_3{\,}d^8-216{\,}x_1^8+132{\,}x_1^6{\,}x_2^2-456{\,}x_1^6{\,}x_3^2+720{\,}x_1^6{\,}d^2-1{\,}x_1^4{\,}x_2^4+376{\,}x_1^4{\,}x_2^2{\,}x_3^2-388{\,}x_1^4{\,}x_2^2{\,}d^2-240{\,}x_1^4{\,}x_3^4+1416{\,}x_1^4{\,}x_3^2{\,}d^2-856{\,}x_1^4{\,}d^4-32{\,}x_1^2{\,}x_2^4{\,}x_3^2+20{\,}x_1^2{\,}x_2^4{\,}d^2+192{\,}x_1^2{\,}x_2^2{\,}x_3^4-288{\,}x_1^2{\,}x_2^2{\,}x_3^2{\,}d^2+416{\,}x_1^2{\,}x_2^2{\,}d^4+672{\,}x_1^2{\,}x_3^4{\,}d^2-1472{\,}x_1^2{\,}x_3^2{\,}d^4+416{\,}x_1^2{\,}d^6-16{\,}x_2^4{\,}x_3^4+8{\,}x_2^4{\,}x_3^2{\,}d^2+8{\,}x_2^4{\,}d^4+224{\,}x_2^2{\,}x_3^4{\,}d^2-64{\,}x_2^2{\,}x_3^2{\,}d^4-160{\,}x_2^2{\,}d^6-448{\,}x_3^4{\,}d^4+512{\,}x_3^2{\,}d^6-64{\,}d^8-96{\,}x_1^6{\,}x_3+40{\,}x_1^4{\,}x_2^2{\,}x_3-96{\,}x_1^4{\,}x_3^3+320{\,}x_1^4{\,}x_3{\,}d^2+32{\,}x_1^2{\,}x_2^2{\,}x_3^3-8{\,}x_1^2{\,}x_2^2{\,}x_3{\,}d^2+224{\,}x_1^2{\,}x_3^3{\,}d^2-352{\,}x_1^2{\,}x_3{\,}d^4+32{\,}x_2^2{\,}x_3^3{\,}d^2-32{\,}x_2^2{\,}x_3{\,}d^4-128{\,}x_3^3{\,}d^4+128{\,}x_3{\,}d^6-16{\,}x_1^6-16{\,}x_1^4{\,}x_3^2+48{\,}x_1^4{\,}d^2+32{\,}x_1^2{\,}x_3^2{\,}d^2-48{\,}x_1^2{\,}d^
4-16{\,}x_3^2{\,}d^4+16{\,}d^6$. }
\hrule \paragraph{Contact Info:}\quad\\ \noindent{\small \rm Departamento de Matem\'aticas, Facultad de Ciencias, Universidad de Alcal\'a\\ Ap. de Correos 20, E-28871 Alcal\'a de Henares (Madrid), Spain\\} \href{mailto:fernando.sansegundo@uah.es}{fernando.sansegundo@uah.es} (corresponding author), \href{mailto:rafael.sendra@uah.es}{rafael.sendra@uah.es}
\paragraph{Funding:} This work has been partially supported by the research project {\sf MTM2008-04699-C03-01 ``Variedades paramétricas: algoritmos y aplicaciones''}, Ministerio de Ciencia e Innovación, Spain.
\addcontentsline{toc}{section}{Bibliography} \label{Bibliografia}
\end{document}
\begin{document}
\title[A distributional approach to fractional variation: asymptotics II]{A distributional approach to fractional Sobolev spaces and fractional variation: asymptotics II}
\author[E.~Bruè]{Elia Bruè} \address[E.~Bruè]{School of Mathematics, Institute for Advanced Study, 1 Einstein Dr., Princeton NJ 05840, USA} \email{elia.brue@math.ias.edu}
\author[M.~Calzi]{Mattia Calzi} \address[M.~Calzi]{Dipartimento di Matematica, Università degli Studi di Milano, Via C. Saldini~50, 20133 Milano, Italy} \email{mattia.calzi@unimi.it}
\author[G.~E.~Comi]{Giovanni E. Comi} \address[G.~E.~Comi]{Dipartimento di Matematica, Università di Pisa, Largo Bruno Pontecorvo 5, 56127 Pisa, Italy} \email{giovanni.comi@dm.unipi.it}
\author[G.~Stefani]{Giorgio Stefani} \address[G.~Stefani]{Department Mathematik und Informatik, Universit\"at Basel, Spiegelgasse 1, CH-4051 Basel, Switzerland} \email{giorgio.stefani@unibas.ch}
\date{\today}
\keywords{Fractional gradient, fractional interpolation inequality, Riesz transform, Hardy space, Bessel potential space.}
\subjclass[2010]{26A33, 26B30, 28A33, 47G40}
\thanks{ \textit{Acknowledgements}. The authors thank Daniel Spector for many valuable observations on a preliminary version of the present work. The first author is supported by the \textit{Giorgio and Elena Petronio Fellowship} at the Institute for Advanced Study. The second, the third and the fourth authors are members of the \textit{Gruppo Nazionale per l'Analisi Matematica, la Probabilità e le loro Applicazioni} (GNAMPA) of the \textit{Istituto Nazionale di Alta Matematica} (INdAM). The fourth author is partially supported by the ERC Starting Grant 676675 FLIRT -- \textit{Fluid Flows and Irregular Transport} and by the INdAM--GNAMPA Project 2020 \textit{Problemi isoperimetrici con anisotropie} (n.\ prot.\ U-UFMBAZ-2020-000798 15-04-2020). This work was started while the authors were Ph.D.\ students at the Scuola Normale Superiore of Pisa (Italy) and mostly developed while the first two authors and the last one were still employed there. The authors wish to express their gratitude to this institution for the excellent working conditions and the stimulating atmosphere. The third author worked on this paper during his PostDoc at the Department of Mathematics of the University of Hamburg (Germany), and he is grateful for the support received. }
\begin{abstract} We continue the study of the space $BV^\alpha(\mathbb{R}^n)$ of functions with bounded fractional variation in~$\mathbb{R}^n$ and of the distributional fractional Sobolev space $S^{\alpha,p}(\mathbb{R}^n)$, with $p\in [1,+\infty]$ and $\alpha\in(0,1)$, considered in the previous works~\cites{CS19,CS19-2}. We first define the space $BV^0(\mathbb{R}^n)$ and establish the identifications $BV^0(\mathbb{R}^n)=H^1(\mathbb{R}^n)$ and $S^{\alpha,p}(\mathbb{R}^n)=L^{\alpha,p}(\mathbb{R}^n)$, where $H^1(\mathbb{R}^n)$ and $L^{\alpha,p}(\mathbb{R}^n)$ are the (real) Hardy space and the Bessel potential space, respectively. We then prove that the fractional gradient $\nabla^\alpha$ strongly converges to the Riesz transform as~$\alpha\to0^+$ for $H^1\cap W^{\alpha,1}$ and $S^{\alpha,p}$ functions. We also study the convergence of the $L^1$-norm of the $\alpha$-rescaled fractional gradient of $W^{\alpha,1}$ functions. To achieve the strong limiting behavior of~$\nabla^\alpha$ as~$\alpha\to0^+$, we prove some new fractional interpolation inequalities which are stable with respect to the interpolating parameter. \end{abstract}
\maketitle
\tableofcontents
\section{Introduction}
\subsection{Fractional operators and related spaces}
In~\cites{CS19,CS19-2}, for a parameter $\alpha\in(0,1)$, the third and fourth authors introduced the \emph{space of functions with bounded fractional variation} \begin{equation*} BV^\alpha(\mathbb{R}^n) :=
\set*{f\in L^1(\mathbb{R}^n) : |D^\alpha f|(\mathbb{R}^n)<+\infty}, \end{equation*} where \begin{equation}\label{intro_eq:frac_variation}
|D^\alpha f|(\mathbb{R}^n):=
\sup\set*{\int_{\mathbb{R}^n} f\,\mathrm{div}^\alpha\varphi\,dx : \varphi\in C^\infty_c(\mathbb{R}^n;\mathbb{R}^n),\ \|\varphi\|_{L^\infty(\mathbb{R}^n;\,\mathbb{R}^n)}\le1} \end{equation} for all $f\in L^1(\mathbb{R}^n)$, and the \emph{distributional fractional Sobolev space} \begin{equation} \label{intro_eq:S_alpha_p} S^{\alpha,p}(\mathbb{R}^n):=\set*{f\in L^p(\mathbb{R}^n) : \exists \nabla^\alpha f \in L^p(\mathbb{R}^n;\mathbb{R}^n)} \end{equation} for all $p\in[1,+\infty]$ (see Section \ref{sect:overview_fract_grad} for a precise definition). Here and in the following, \begin{equation}\label{intro_eq:nabla_alpha}
\nabla^\alpha f(x):=\mu_{n,\alpha}\int_{\mathbb{R}^n}\frac{(y-x)(f(y)-f(x))}{|y-x|^{n+\alpha+1}}\,dy, \quad x\in\mathbb{R}^n, \end{equation} and \begin{equation}\label{intro_eq:div_alpha}
\mathrm{div}^\alpha\varphi(x):=\mu_{n,\alpha}\int_{\mathbb{R}^n}\frac{(y-x)\cdot(\varphi(y)-\varphi(x))}{|y-x|^{n+\alpha+1}}\,dy, \quad x\in\mathbb{R}^n, \end{equation} are respectively the \emph{fractional gradient} and the \emph{fractional divergence} operators, where \begin{equation}\label{intro_eq:mu_n_alpha} \mu_{n, \alpha} := 2^{\alpha} \pi^{- \frac{n}{2}} \frac{\Gamma\left ( \frac{n + \alpha + 1}{2} \right )}{\Gamma\left ( \frac{1 - \alpha}{2} \right )}. \end{equation} These two operators are \emph{dual}, in the sense that \begin{equation*} \int_{\mathbb{R}^n}f\,\mathrm{div}^\alpha\varphi \,dx=-\int_{\mathbb{R}^n}\varphi\cdot\nabla^\alpha f\,dx \end{equation*} for all sufficiently regular functions~$f$ and vector fields~$\varphi$. For an account on the existing literature related to these operators, we refer the reader to~\cites{BCM20,BCM21,CS19,CS19-2,H59,P16,SSS15,SSS18,SSS17,SS15,SS18,Sil19,S18,S19} and to the references therein.
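For the reader's convenience, let us sketch why the duality identity holds, at least formally: since the kernel $(y-x)|y-x|^{-n-\alpha-1}$ is odd, $\int_{\mathbb{R}^n}\frac{y-x}{|y-x|^{n+\alpha+1}}\,dy=0$ in the principal value sense, so that, assuming enough decay and regularity to apply Fubini's Theorem,
\begin{equation*}
\int_{\mathbb{R}^n} f\,\mathrm{div}^\alpha\varphi\,dx
=\mu_{n,\alpha}\int_{\mathbb{R}^n}\int_{\mathbb{R}^n}\frac{(y-x)\cdot\varphi(y)}{|y-x|^{n+\alpha+1}}\,f(x)\,dy\,dx
=-\mu_{n,\alpha}\int_{\mathbb{R}^n}\int_{\mathbb{R}^n}\varphi(x)\cdot\frac{(y-x)\,f(y)}{|y-x|^{n+\alpha+1}}\,dy\,dx
=-\int_{\mathbb{R}^n}\varphi\cdot\nabla^\alpha f\,dx,
\end{equation*}
where the central equality follows by exchanging the roles of~$x$ and~$y$.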
While the first paper~\cite{CS19} was focused on some geometric aspects of $BV^\alpha$ functions, the subsequent work~\cite{CS19-2} was inspired by the celebrated Bourgain--Brezis--Mironescu formula~\cite{BBM01} and the $\Gamma$-convergence result of Ambrosio--De Philippis--Martinazzi~\cite{ADM11} and dealt with the asymptotic behavior of the fractional $\alpha$-variation as~$\alpha\to1^-$. As already announced in~\cite{CS19-2}, the main aim of the present paper is to study the asymptotic behavior of the fractional $\alpha$-variation as~$\alpha\to0^+$, in analogy with the asymptotic result of Maz\cprime ya--Shaposhnikova~\cites{MS02,MS03}.
\subsection{Asymptotic behavior of fractional operators}
The asymptotic behavior of the standard fractional seminorm~$[\,\cdot\,]_{W^{\alpha,p}(\mathbb{R}^n)}$ has been completely understood since the groundbreaking work of Bour\-gain--Bre\-zis--Mi\-ro\-ne\-scu~\cite{BBM01} and the subsequent developments of D\'avila~\cite{D02} and Maz\cprime ya--Sha\-po\-shni\-ko\-va~\cites{MS02,MS03}. Here and in the following, \begin{equation*}
W^{\alpha,p}(\mathbb{R}^n)=\set*{f\in L^p(\mathbb{R}^n) : [f]_{W^{\alpha,p}(\mathbb{R}^n)}^p=\int_{\mathbb{R}^{n}} \int_{\mathbb{R}^{n}} \frac{|f(x)-f(y)|^p}{|x-y|^{n+p\alpha}}\,dx\,dy<+\infty} \end{equation*} is the well-known Sobolev--Slobodeckij space of parameters $\alpha\in(0,1)$ and $p\in[1,+\infty)$ (see~\cite{DiNPV12} for an introduction and the related literature). Precisely, for $p\in[1,+\infty)$, \begin{equation}\label{intro_eq:MS_1} \lim_{\alpha\to1^-}(1-\alpha)\,[f]_{W^{\alpha,p}(\mathbb{R}^n)}^p
=A_{n,p}\,\|\nabla f\|_{L^p(\mathbb{R}^n; \mathbb{R}^{n})}^p \end{equation} for all $f\in W^{1,p}(\mathbb{R}^n)$, while \begin{equation}\label{intro_eq:MS_0} \lim_{\alpha\to0^+}\alpha\,[f]_{W^{\alpha,p}(\mathbb{R}^n)}^p
=B_{n,p}\,\|f\|_{L^p(\mathbb{R}^n)}^p \end{equation} for all $f\in\bigcup_{\alpha\in(0,1)}W^{\alpha,p}(\mathbb{R}^n)$. Here $A_{n,p},B_{n,p}>0$ are two constants depending uniquely on~$n$ and~$p$. When $p=1$, the limit in~\eqref{intro_eq:MS_1} holds for the more general class of $BV$ functions, that is, \begin{equation}\label{intro_eq:Davila_limit} \lim_{\alpha\to1^-}(1-\alpha)\,[f]_{W^{\alpha,1}(\mathbb{R}^n)}
=A_{n,1}\,|Df|(\mathbb{R}^n) \end{equation} for all $f\in BV(\mathbb{R}^n)$.
The limits in~\eqref{intro_eq:MS_1} and in~\eqref{intro_eq:Davila_limit} can be recognized as special consequences of the celebrated Bourgain--Brezis--Mironescu (BBM, for short) formula \begin{equation} \label{intro_eq:BBM} \lim_{k\to+\infty} \int_{\mathbb{R}^n}\int_{\mathbb{R}^n}
\frac{|f(x)-f(y)|^p}{|x-y|^p}\,\varrho_k(|x-y|)\,dx\,dy = \begin{cases}
C_{n,p}\,\|\nabla f\|_{L^p(\mathbb{R}^n)}^p & \text{for}\ p\in(1,+\infty),\\[3mm]
C_{n,1}\,|Df|(\mathbb{R}^n) & \text{for}\ p=1, \end{cases} \end{equation} where $C_{n,p}>0$ is a constant depending only on~$n$ and~$p$, and $(\varrho_k)_{k\in\mathbb{N}}\subset L^1_{\loc}([0,+\infty))$ is a sequence of non-negative radial mollifiers such that \begin{equation*}
\int_{\mathbb{R}^n}\varrho_k(|x|)\,dx=1 \quad \text{for all $k\in\mathbb{N}$} \end{equation*} and \begin{equation*} \lim_{k\to+\infty} \int_\delta^{+\infty} \varrho_k(r)\,r^{n-1}\,dr=0 \quad \text{for all $\delta>0$.} \end{equation*} Since its appearance, the BBM formula~\eqref{intro_eq:BBM} has deeply influenced the development of the asymptotic analysis in the fractional framework. On the one hand, the limit in~\eqref{intro_eq:BBM} has led to several important applications, such as Brezis' celebrated work~\cite{B02} on how to recognize constant functions, new characterizations of Sobolev and $BV$ functions and $\Gamma$-convergence results~\cites{AGMP18,AGMP20,AGP20,BMR20,BN06,LS11,LS14,N07,N08,N11,P04-2}, approximation of Sobolev norms and image processing~\cites{B15,BN16-bis,BN18,BN20}, and last but not least fractional Hardy and Poincaré inequalities~\cites{BBM02-bis,FS08,P04-1}. On the other hand, the BBM formula~\eqref{intro_eq:BBM} has inspired an alternative route to fractional asymptotic analysis by means of interpolation techniques~\cites{M05,PS17}. Recently, the BBM formula in~\eqref{intro_eq:BBM} has been revisited in terms of a.e.\ pointwise convergence by Brezis--Nguyen~\cite{BN16} and in connection with weak $L^p$ quasi-norms~\cite{BVY20}, where the now-called Brezis--Van Schaftingen--Yung space \begin{equation*} BSY^{\alpha,p}(\mathbb{R}^n) =
\set*{f\in L_{\loc}^1(\mathbb{R}^n) : \left\|\frac{|f(x)-f(y)|}{|x-y|^{\frac np+\alpha}}\right\|_{L^p_w(\mathbb{R}^n\times\mathbb{R}^n)}<+\infty}, \end{equation*} defined for $\alpha\in(0,1]$ and $p\in[1,+\infty)$, has offered a completely new and promising perspective in the field~\cite{DM20}.
The limits \eqref{intro_eq:MS_1} -- \eqref{intro_eq:BBM} have been linked to variational problems~\cite{AK09}, generalized to various function spaces, such as Besov spaces~\cites{KL05,T11}, Orlicz spaces~\cites{ACPS20,FHR20,FS19} and magnetic and anisotropic Sobolev spaces~\cites{LMP19,NS19,PSV17,PSV19,SV16}, and extended to several ambient spaces, such as compact connected Riemannian manifolds~\cite{KM19}, the flat torus~\cite{A20}, Carnot groups~\cites{B11,MP19} and complete doubling metric-measure spaces supporting a local Poincaré inequality~\cite{DiMS19}.
The asymptotic behavior of the fractional gradient $\nabla^\alpha$ as $\alpha\to1^-$ was fully discussed in~\cite{CS19-2} (see also~\cite{BCM21}*{Theorem~3.2} for a different proof of~\eqref{intro_eq:CS_limit_p} below for the case $p\in(1,+\infty)$ via Fourier transform). Precisely, if $f\in W^{1,p}(\mathbb{R}^n)$ for some $p\in[1,+\infty)$, then $f\in S^{\alpha,p}(\mathbb{R}^n)$ for all $\alpha\in(0,1)$ with \begin{equation}\label{intro_eq:CS_limit_p}
\lim_{\alpha\to1^-}\|\nabla^\alpha f-\nabla f\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^n)}=0. \end{equation} If $f\in BV(\mathbb{R}^n)$ instead, then $f\in BV^\alpha(\mathbb{R}^n)$ for all $\alpha\in(0,1)$ with \begin{equation*} D^\alpha f\rightharpoonup Df\ \text{in $\mathscr{M}(\mathbb{R}^n;\mathbb{R}^n)$ and}\
|D^\alpha f|\rightharpoonup |Df|\ \text{in $\mathscr{M}(\mathbb{R}^n)$ as $\alpha\to1^-$} \end{equation*} and \begin{equation}\label{intro_eq:CS_limit_BV}
\lim_{\alpha\to1^-}|D^\alpha f|(\mathbb{R}^n)=|Df|(\mathbb{R}^n). \end{equation} We underline that, differently from the limits~\eqref{intro_eq:MS_1} and~\eqref{intro_eq:Davila_limit}, the renormalizing factor $(1-\alpha)^\frac{1}{p}$ does not appear in~\eqref{intro_eq:CS_limit_p} and~\eqref{intro_eq:CS_limit_BV}. This is motivated by the fact that the constant~$\mu_{n,\alpha}$ encoded in the definition~\eqref{intro_eq:nabla_alpha} of the operator~$\nabla^\alpha$ satisfies \begin{equation*} \mu_{n,\alpha}\sim\frac{1-\alpha}{\omega_n} \quad \text{as $\alpha\to1^-$}. \end{equation*}
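As an illustrative aside (not part of the original argument), the asymptotic expansion of the constant $\mu_{n,\alpha}$ can be checked numerically; the helper names `mu` and `omega` below are our own.

```python
from math import pi, gamma

def mu(n, alpha):
    # The normalizing constant mu_{n, alpha} of the fractional gradient:
    # 2^alpha * pi^{-n/2} * Gamma((n + alpha + 1)/2) / Gamma((1 - alpha)/2).
    return 2**alpha * pi**(-n / 2) * gamma((n + alpha + 1) / 2) / gamma((1 - alpha) / 2)

def omega(n):
    # Volume of the unit ball in R^n: pi^{n/2} / Gamma(n/2 + 1).
    return pi**(n / 2) / gamma(n / 2 + 1)

# Since Gamma(z) ~ 1/z as z -> 0^+, one gets mu(n, alpha) ~ (1 - alpha) / omega(n)
# as alpha -> 1^-: the printed ratio tends to 1.
for alpha in (0.9, 0.99, 0.999):
    print(alpha, mu(3, alpha) * omega(3) / (1 - alpha))
```

For instance, with $n=3$ the printed ratios approach $1$ as $\alpha\to1^-$, in agreement with $\mu_{n,\alpha}\sim(1-\alpha)/\omega_n$.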
Concerning the asymptotic behavior of~$\nabla^\alpha$ as~$\alpha\to0^+$, at least for sufficiently regular functions, the fractional gradient in~\eqref{intro_eq:nabla_alpha} is converging to the operator \begin{equation}\label{intro_eq:def_nabla_0}
\nabla^0 f(x)=\mu_{n,0}\int_{\mathbb{R}^n}\frac{(y-x)(f(y)-f(x))}{|y-x|^{n+1}}\,dy, \quad x\in\mathbb{R}^n. \end{equation} Here and in the following, $\mu_{n,0}$ is simply the limit of the constant~$\mu_{n,\alpha}$ defined in~\eqref{intro_eq:mu_n_alpha} as~$\alpha\to0^+$ (thus, in this case, no renormalization factor has to be taken into account). The operator in~\eqref{intro_eq:def_nabla_0} is well defined (in the principal value sense) at least for all $f\in C^\infty_c(\mathbb{R}^n)$ and, actually, coincides (possibly up to a minus sign, see \cref{subsec:notation} below) with the well-known vector-valued \emph{Riesz transform}~$Rf$, see~\cites{G14-C,S70,S93}. The formal limit $\nabla^\alpha\to R$ as~$\alpha\to0^+$ can be also motivated either by the asymptotic behavior of the Fourier transform of~$\nabla^\alpha$ as~$\alpha\to0^+$ or by the fact that $\nabla^\alpha=\nabla I_{1-\alpha}\to\nabla I_1=R$ for~$\alpha\to0^+$, where \begin{equation*} I_{\alpha} f(x) := 2^{-\alpha} \pi^{- \frac{n}{2}} \frac{\Gamma\left(\frac{n-\alpha}2\right)}{\Gamma\left(\frac\alpha2\right)}
\int_{\mathbb{R}^{n}} \frac{f(y)}{|x - y|^{n - \alpha}} \, dy, \quad x\in\mathbb{R}^n, \end{equation*} stands for the Riesz potential of order $\alpha\in(0,n)$. In a similar fashion, the fractional $\alpha$-divergence in~\eqref{intro_eq:div_alpha} is converging as $\alpha\to0^+$ to the operator \begin{equation*}
\mathrm{div}^0\varphi(x)=\mu_{n,0}\int_{\mathbb{R}^n}\frac{(y-x)\cdot(\varphi(y)-\varphi(x))}{|y-x|^{n+1}}\,dy, \quad x\in\mathbb{R}^n, \end{equation*} which is well defined (in the principal value sense) at least for all $\varphi\in C^\infty_c(\mathbb{R}^n;\mathbb{R}^n)$.
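Let us also record the explicit value of the limiting constant: recalling~\eqref{intro_eq:mu_n_alpha} and that $\Gamma\left(\tfrac12\right)=\sqrt\pi$, we have
\begin{equation*}
\mu_{n,0}
=\lim_{\alpha\to0^+}\mu_{n,\alpha}
=\pi^{-\frac n2}\,\frac{\Gamma\left(\frac{n+1}2\right)}{\Gamma\left(\frac12\right)}
=\frac{\Gamma\left(\frac{n+1}2\right)}{\pi^{\frac{n+1}2}},
\end{equation*}
which is precisely the normalizing constant of the vector-valued Riesz transform (possibly up to a sign, depending on the convention), see~\cite{S70}.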
As a natural target space for the study of the limiting behavior of~$\nabla^\alpha$ as~$\alpha\to0^+$, in a\-na\-lo\-gy with the fractional variation~\eqref{intro_eq:frac_variation}, we introduce the space $BV^0(\mathbb{R}^n)$ of functions $f\in L^1(\mathbb{R}^n)$ such that the quantity \begin{equation*}
|D^0 f|(\mathbb{R}^n):=\sup\set*{\int_{\mathbb{R}^n}f\,\mathrm{div}^0\varphi\,dx : \varphi\in C^\infty_c(\mathbb{R}^n;\mathbb{R}^n),\ \|\varphi\|_{L^\infty(\mathbb{R}^n;\,\mathbb{R}^n)}\le1} \end{equation*} is finite. As for the $BV^\alpha$ space, it is not difficult to see that a function $f\in L^1(\mathbb{R}^n)$ belongs to $BV^0(\mathbb{R}^n)$ if and only if there exists a vector-valued Radon measure $D^0 f\in\mathscr M(\mathbb{R}^n;\mathbb{R}^n)$ with finite total variation such that \begin{equation*} \int_{\mathbb{R}^n}f\,\mathrm{div}^0\varphi\,dx = -\int_{\mathbb{R}^n} \varphi\cdot\,d D^0 f \quad \text{for all $\varphi\in C_c^\infty(\mathbb{R}^n;\mathbb{R}^n)$}. \end{equation*} Surprisingly, it turns out that $D^0f\ll\Leb{n}$ for all $f\in BV^0(\mathbb{R}^n)$, in contrast with what is known for the fractional $\alpha$-variation in the case $\alpha\in(0,1]$, see~\cite{CS19}*{Theorem~3.30}. More precisely, we prove that \begin{equation}\label{intro_eq:H_1=BV_0} f\in BV^0(\mathbb{R}^n) \iff f\in H^1(\mathbb{R}^n),\ \text{with $D^0f=Rf\Leb{n}$ in~$\mathscr M(\mathbb{R}^n;\mathbb{R}^n)$}, \end{equation} where \begin{equation*} H^1(\mathbb{R}^n)=\set*{f\in L^1(\mathbb{R}^n) : Rf\in L^1(\mathbb{R}^n;\mathbb{R}^n)} \end{equation*} is the well-known (real) \emph{Hardy space}.
Having the identification~\eqref{intro_eq:H_1=BV_0} at our disposal, we can rigorously establish the validity of the convergence $\nabla^\alpha\to R$ as~$\alpha\to0^+$. For $p=1$, we prove that \begin{equation}\label{intro_eq:frac_limit_1}
\lim_{\alpha\to0^+}\|\nabla^\alpha f-Rf\|_{L^1(\mathbb{R}^n;\,\mathbb{R}^n)}=0 \end{equation} for all $f\in H^1(\mathbb{R}^n)\cap\bigcup_{\alpha\in(0,1)}W^{\alpha,1}(\mathbb{R}^n)$. For $p\in(1,+\infty)$ instead, since the Riesz transform~\eqref{intro_eq:def_nabla_0} extends to a linear continuous operator $R\colon L^p(\mathbb{R}^n)\to L^p(\mathbb{R}^n;\mathbb{R}^n)$, the natural target space for the study of the limiting behavior of the fractional gradient is simply $L^p(\mathbb{R}^n;\mathbb{R}^n)$. In this case, we prove that \begin{equation}\label{intro_eq:frac_limit_p}
\lim_{\alpha\to0^+}\|\nabla^\alpha f-Rf\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^n)}=0 \end{equation} for all $f\in\bigcup_{\alpha\in(0,1)}S^{\alpha,p}(\mathbb{R}^n)$.
The limits in~\eqref{intro_eq:frac_limit_1} and~\eqref{intro_eq:frac_limit_p} can be considered as the counterparts of~\eqref{intro_eq:MS_0} in our fractional setting. However, differently from~\eqref{intro_eq:MS_0}, in~\eqref{intro_eq:frac_limit_1} and in~\eqref{intro_eq:frac_limit_p} we obtain strong convergence. This improvement can be interpreted as a natural consequence of the fact that, generally speaking, the $L^p$-norm of the fractional gradient~$\nabla^\alpha$ allows for more cancellations than the $W^{\alpha,p}$-seminorm.
Since the Riesz transform~\eqref{intro_eq:def_nabla_0} extends to a linear continuous operator $R\colon H^1(\mathbb{R}^n)\to H^1(\mathbb{R}^n;\mathbb{R}^n)$, the limit in~\eqref{intro_eq:frac_limit_1} can be improved. Precisely, we prove that \begin{equation}\label{intro_eq:frac_limit_H1}
\lim_{\alpha\to0^+}\|\nabla^\alpha f-Rf\|_{H^1(\mathbb{R}^n;\,\mathbb{R}^n)}=0 \end{equation} for all $f\in\bigcup_{\alpha\in(0,1)} HS^{\alpha,1}(\mathbb{R}^n)$. Here \begin{equation*} HS^{\alpha,1}(\mathbb{R}^n)= \set*{f\in H^1(\mathbb{R}^n) : \nabla^\alpha f\in H^1(\mathbb{R}^n;\mathbb{R}^n)} \end{equation*} is (an equivalent definition of) the fractional Hardy--Sobolev space, see~\cite{Str90} and below for a more detailed presentation. One can recognize that \begin{equation*} H^1(\mathbb{R}^n)\cap\bigcup_{\alpha\in(0,1)}W^{\alpha,1}(\mathbb{R}^n)= \bigcup_{\alpha\in(0,1)} HS^{\alpha,1}(\mathbb{R}^n), \end{equation*} so that~\eqref{intro_eq:frac_limit_H1} is indeed a reinforcement of~\eqref{intro_eq:frac_limit_1}.
Naturally, if $f\notin H^1(\mathbb{R}^n)$, then we cannot expect that $\nabla^\alpha f\to Rf$ in~$L^1(\mathbb{R}^n;\mathbb{R}^n)$ as~$\alpha\to0^+$. Instead, as suggested by the limit in~\eqref{intro_eq:MS_0}, we have to consider the asymptotic behavior of the rescaled fractional gradient $\alpha\,\nabla^\alpha f$ as~$\alpha\to0^+$. In this case, we prove that \begin{equation}\label{intro_eq:mattia_limit} \lim_{\alpha\to0^+}
\alpha\int_{\mathbb{R}^n}|\nabla^\alpha f(x)|\,dx
=n\omega_n\mu_{n,0}\,\bigg|\int_{\mathbb{R}^n} f(x)\,dx\,\bigg| \end{equation} for all $f\in\bigcup_{\alpha\in(0,1)}W^{\alpha,1}(\mathbb{R}^n)$. Note that~\eqref{intro_eq:mattia_limit} is consistent with both~\eqref{intro_eq:MS_0} and~\eqref{intro_eq:frac_limit_1}. Indeed, on the one side, by simply bringing the modulus inside the integral in the definition~\eqref{intro_eq:nabla_alpha} of~$\nabla^\alpha$, we can estimate \begin{equation*}
\int_{\mathbb{R}^n}|\nabla^\alpha f(x)|\,dx \le \mu_{n,\alpha}[f]_{W^{\alpha,1}(\mathbb{R}^n)} \end{equation*} for all $f\in W^{\alpha,1}(\mathbb{R}^n)$ (see also~\cite{CS19}*{Theorem~3.18}), so that, by~\eqref{intro_eq:MS_0}, we can infer \begin{equation*} \limsup_{\alpha\to0^+}
\alpha\int_{\mathbb{R}^n}|\nabla^\alpha f(x)|\,dx \le \mu_{n,0} \lim_{\alpha\to0^+}\alpha\,[f]_{W^{\alpha,1}(\mathbb{R}^n)} =
\mu_{n,0}B_{n,1}\|f\|_{L^1(\mathbb{R}^n)} \end{equation*} for all $f\in\bigcup_{\alpha\in(0,1)}W^{\alpha,1}(\mathbb{R}^n)$. On the other side, if $f\in H^1(\mathbb{R}^n)$, then \begin{equation*} \int_{\mathbb{R}^n}f(x)\,dx=0 \end{equation*} (see~\cite{S93}*{Chapter~III, Section~5.4(c)} for example), and thus for all $f\in H^1(\mathbb{R}^n)\cap\bigcup_{\alpha\in(0,1)}W^{\alpha,1}(\mathbb{R}^n)$ the limit in~\eqref{intro_eq:mattia_limit} reduces to \begin{equation*} \lim_{\alpha\to0^+}
\alpha\int_{\mathbb{R}^n}|\nabla^\alpha f(x)|\,dx =0, \end{equation*} in accordance with the strong convergence~\eqref{intro_eq:frac_limit_1}.
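As a quick numerical sanity check (not part of the argument of the paper), one can verify that the normalizing constant $\mu_{n,\alpha}$ of the fractional gradient, whose explicit formula is recalled in \cref{sect:overview_fract_grad}, converges as $\alpha\to0^+$ to the constant $\pi^{-\frac{n+1}{2}}\,\Gamma\left(\tfrac{n+1}{2}\right)$ appearing in the Riesz transform~\eqref{eq:def_Riesz_transform}, coherently with the informal statement $\nabla^0=R$. A minimal Python sketch:

```python
from math import pi, gamma

def mu(n, a):
    """Normalizing constant mu_{n,alpha} = 2^a pi^{-n/2} Gamma((n+a+1)/2) / Gamma((1-a)/2)."""
    return 2**a * pi**(-n / 2) * gamma((n + a + 1) / 2) / gamma((1 - a) / 2)

for n in (1, 2, 3):
    # mu_{n,0} equals the Riesz-transform constant pi^{-(n+1)/2} Gamma((n+1)/2),
    # since Gamma(1/2) = sqrt(pi).
    riesz_const = pi ** (-(n + 1) / 2) * gamma((n + 1) / 2)
    assert abs(mu(n, 0.0) - riesz_const) < 1e-12
    # continuity at alpha = 0: mu_{n,alpha} -> mu_{n,0} as alpha -> 0+
    assert abs(mu(n, 1e-8) - mu(n, 0.0)) < 1e-6
```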
\subsection{Fractional interpolation inequalities} \label{subsec:intro_frac_interp}
While~\eqref{intro_eq:mattia_limit} is proved by a direct computation, the limits~\eqref{intro_eq:frac_limit_1}, \eqref{intro_eq:frac_limit_p} and~\eqref{intro_eq:frac_limit_H1} follow from some new \emph{fractional interpolation inequalities}.
Let $\alpha\in(0,1)$ be fixed. In the standard fractional framework, by a simple splitting argument, it is not difficult to estimate the $W^{\beta,1}$-seminorm of a function $f\in W^{\alpha,1}(\mathbb{R}^n)$ as \begin{equation}\label{intro_eq:interp_explain_R} [f]_{W^{\beta,1}(\mathbb{R}^n)} \le R^{\alpha-\beta}\,[f]_{W^{\alpha,1}(\mathbb{R}^n)}
+c_n\frac{R^{-\beta}}\beta\,\|f\|_{L^1(\mathbb{R}^n)} \end{equation} for all $R>0$ and $\beta\in(0,\alpha)$, where $c_n>0$ is a dimensional constant. If we choose
$R=\|f\|_{L^1(\mathbb{R}^n)}^{1/\alpha}\,[f]_{W^{\alpha,1}(\mathbb{R}^n)}^{-1/\alpha}$, then~\eqref{intro_eq:interp_explain_R} gives \begin{equation}\label{intro_eq:interp_explain} [f]_{W^{\beta,1}(\mathbb{R}^n)} \le
\left(1+\tfrac{c_n}\beta\right)\,\|f\|_{L^1(\mathbb{R}^n)}^{1-\frac\beta\alpha}\,[f]_{W^{\alpha,1}(\mathbb{R}^n)}^{\frac\beta\alpha} \end{equation} for all $\beta\in(0,\alpha)$. Inequality~\eqref{intro_eq:interp_explain} implies the bound \begin{equation}\label{intro_eq:big_O_W} [f]_{W^{\beta,1}(\mathbb{R}^n)} = O\left(\tfrac1\beta\right) \quad \text{for}\ \beta\to0^+, \end{equation} in agreement with~\eqref{intro_eq:MS_0}.
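To make the optimization in the choice of~$R$ transparent, here is a small numerical check that the choice $R=\|f\|_{L^1(\mathbb{R}^n)}^{1/\alpha}\,[f]_{W^{\alpha,1}(\mathbb{R}^n)}^{-1/\alpha}$ turns the right-hand side of~\eqref{intro_eq:interp_explain_R} exactly into the right-hand side of~\eqref{intro_eq:interp_explain}. The numerical values of $\|f\|_{L^1}$, $[f]_{W^{\alpha,1}}$ and $c_n$ below are purely illustrative placeholders, not quantities computed from an actual function:

```python
# Illustrative stand-ins: L plays the role of ||f||_{L^1}, A of [f]_{W^{alpha,1}},
# and c_n of the dimensional constant; none of these come from an actual function.
alpha, beta, c_n = 0.7, 0.2, 3.0
L, A = 2.5, 4.0

R = (L / A) ** (1 / alpha)                                      # optimal splitting radius
lhs = R ** (alpha - beta) * A + c_n * R ** (-beta) / beta * L   # RHS of the R-splitting
rhs = (1 + c_n / beta) * L ** (1 - beta / alpha) * A ** (beta / alpha)
assert abs(lhs - rhs) < 1e-9 * rhs   # with this R, the two bounds coincide
```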
In a similar fashion (but with a more delicate analysis), an interpolation inequality of the form~\eqref{intro_eq:interp_explain} was recently obtained by the third and fourth named authors for the fractional gradient~$\nabla^\alpha$. Precisely, if $f\in BV^\alpha(\mathbb{R}^n)$, then \begin{equation}\label{intro_eq:frac_interp_explain} [f]_{BV^\beta(\mathbb{R}^n)} \le c_{n,\alpha,\beta}\,
\|f\|_{L^1(\mathbb{R}^n)}^{1-\frac\beta\alpha}\,[f]_{BV^\alpha(\mathbb{R}^n)}^{\frac\beta\alpha} \end{equation} for all $\beta\in(0,\alpha)$, where $c_{n,\alpha,\beta}>0$ is a constant such that \begin{equation}\label{intro_eq:good_asymp_1} c_{n,\alpha,\beta}\sim 1 \quad \text{for}\ \beta\to\alpha^- \end{equation} and \begin{equation}\label{intro_eq:big_O_constant} c_{n,\alpha,\beta}=O\left(\tfrac1\beta\right) \quad \text{for}\ \beta\to0^+, \end{equation} see~\cite{CS19-2}*{Proposition~3.12} (see~\cite{CS19-2}*{Proposition~3.2} also for the case~$\alpha=1$). Here and in the following, we let $[f]_{BV^\alpha(\mathbb{R}^n)}$ be the total fractional variation~\eqref{intro_eq:frac_variation} of $f\in BV^\alpha(\mathbb{R}^n)$. Thanks to~\eqref{intro_eq:big_O_constant}, inequality~\eqref{intro_eq:frac_interp_explain} implies the bound \begin{equation}\label{intro_eq:big_O_BV_beta} [f]_{BV^\beta(\mathbb{R}^n)}=O\left(\tfrac1\beta\right) \quad \text{for}\ \beta\to0^+, \end{equation} coherently with~\eqref{intro_eq:mattia_limit}.
Although strong enough to settle the asymptotic behavior of the fractional gradient~$\nabla^\beta$ when $\beta\to\alpha^-$ thanks to~\eqref{intro_eq:good_asymp_1}, because of~\eqref{intro_eq:big_O_BV_beta} inequality~\eqref{intro_eq:frac_interp_explain} is of no use for the study of the strong $L^1$-limit $\nabla^\beta\to R$ as~$\beta\to0^+$. To achieve this convergence, we thus have to control the interpolation constant $c_{n,\alpha,\beta}$ in~\eqref{intro_eq:frac_interp_explain} with a new interpolation constant $c_{n,\alpha}>0$ independent of $\beta\in(0,\alpha)$, at the price of weakening~\eqref{intro_eq:frac_interp_explain} by replacing the $L^1$-norm with a bigger norm.
This strategy is in fact motivated by the non-optimality of the bound~\eqref{intro_eq:big_O_BV_beta} since, in view of the limit in~\eqref{intro_eq:mattia_limit}, we can still expect some cancellation effect of the fractional gradient for a subclass of $L^1$-functions having zero average. Note that this approach cannot be implemented to stabilize the standard interpolation inequality~\eqref{intro_eq:interp_explain}, since the bound in~\eqref{intro_eq:big_O_W} is in fact optimal due to~\eqref{intro_eq:MS_0}.
At this point, our idea is to exploit the cancellation properties of the fractional gradient~$\nabla^\beta$ by rewriting its non-local part in terms of a convolution kernel. In more precise terms, recalling the definition in~\eqref{intro_eq:nabla_alpha}, for $R>0$ we can split \begin{equation}\label{intro_eq:split_frac_nabla} \nabla^\beta f = \nabla^\beta_{<R} f+\nabla^\beta_{\ge R} f \end{equation} with \begin{equation}\label{intro_eq:nabla_non-local_operator} \nabla^\beta_{\ge R} f(x) = \mu_{n,\beta} \int_{\mathbb{R}^n}f(y)\,K_{\beta,R}(y-x)\,dy, \quad x\in\mathbb{R}^n, \end{equation} for all Schwartz functions $f\in\mathcal S(\mathbb{R}^n)$, where the convolution kernel $K_{\beta,R}$ is a smoothing of the function \begin{equation*} y \mapsto
\frac{y}{|y|^{n+\beta+1}}\,\chi_{[R,+\infty)}(|y|). \end{equation*} By the Calder\'on--Zygmund Theorem, we can extend the functional defined in~\eqref{intro_eq:nabla_non-local_operator} to a linear continuous mapping $\nabla^\beta_{\ge R}\colon H^1(\mathbb{R}^n)\to L^1(\mathbb{R}^n;\mathbb{R}^n)$ whose operator norm can be estimated as \begin{equation}\label{intro_eq:operator_norm_bound}
\|\nabla^\beta_{\ge R}\|_{H^1\to L^1}\le c_nR^{-\beta} \quad \text{for all}\ R>0, \end{equation} for some dimensional constant $c_n>0$. By combining the splitting~\eqref{intro_eq:split_frac_nabla} with the bound~\eqref{intro_eq:operator_norm_bound} and arguing as in~\cite{CS19-2}, we get that \begin{equation}\label{intro_eq:interp_inequ_H1_BV_alpha}
[f]_{BV^\beta(\mathbb{R}^n)}\le c_{n,\alpha}\,\|f\|_{H^1(\mathbb{R}^n)}^{\frac{\alpha-\beta}\alpha}\,[f]_{BV^\alpha(\mathbb{R}^n)}^{\frac\beta\alpha} \end{equation} for all $\beta\in[0,\alpha)$ and all $f\in H^1(\mathbb{R}^n)\cap BV^\alpha(\mathbb{R}^n)$, whenever $\alpha\in(0,1]$. Exploiting~\eqref{intro_eq:interp_inequ_H1_BV_alpha} together with an approximation argument, we thus just need to establish~\eqref{intro_eq:frac_limit_1} for all sufficiently regular functions, in which case we can easily conclude by a direct computation.
To achieve the limit in~\eqref{intro_eq:frac_limit_p} for $p\in(1,+\infty)$ and the stronger convergence in~\eqref{intro_eq:frac_limit_H1} for the case~$p=1$, we adopt a slightly different strategy. Instead of splitting the fractional gradient as in~\eqref{intro_eq:split_frac_nabla}, we rewrite it as \begin{equation} \label{intro_eq:frac_nabla_frac_Laplacian_decomposition} \nabla^\beta=R\,(-\Delta)^{\frac\beta2}, \end{equation} where \begin{equation*} (- \Delta)^{\frac{\beta}{2}} f(x) := \nu_{n,\beta}
\int_{\mathbb{R}^n} \frac{f(x + y) - f(x)}{|y|^{n + \beta}}\,dy, \quad x\in\mathbb{R}^n, \end{equation*} is the usual fractional Laplacian with renormalizing constant given by \begin{equation*} \nu_{n,\beta} := 2^{\beta} \pi^{- \frac{n}{2}} \,\frac{\Gamma \left ( \frac{n + \beta}{2} \right )}{\Gamma \left ( - \frac{ \beta}{2} \right )}. \end{equation*} Since the Riesz transform extends to a linear continuous operator on $L^p(\mathbb{R}^n)$ and $H^1(\mathbb{R}^n)$ as mentioned above, to achieve~\eqref{intro_eq:frac_limit_p} and~\eqref{intro_eq:frac_limit_H1} we just have to study the continuity properties of $(-\Delta)^{\frac\beta2}$. To this aim, we rewrite $(-\Delta)^{\frac\beta2}$ as \begin{equation} \label{intro_eq:frac_Laplacian_decomposition} (-\Delta)^{\frac\beta2} = T_{m_{\alpha,\beta}} \circ (\mathrm{Id}+(-\Delta)^{\frac\alpha2}) \end{equation} where \begin{equation}\label{intro_eq:mattia_operator} T_{m_{\alpha,\beta}}f := f*\mathcal F^{-1}(m_{\alpha,\beta}), \quad f\in\mathcal S(\mathbb{R}^n), \end{equation} $\mathcal{F}$ is the Fourier transform and \begin{equation*}
m_{\alpha,\beta}(\xi):=\frac{|\xi|^\beta}{1+|\xi|^\alpha}, \quad \xi\in\mathbb{R}^n. \end{equation*} Exploiting the good decay properties of the derivatives of $m_{\alpha,\beta}$ (uniform with respect to the parameters $0\le\beta\le\alpha\le1$), by the Mihlin--H\"ormander Multiplier Theorem the convolution operator in~\eqref{intro_eq:mattia_operator} can be extended to two linear operators continuous from $L^p(\mathbb{R}^n)$ to itself and from $H^1(\mathbb{R}^n)$ to itself, respectively. Going back to~\eqref{intro_eq:frac_nabla_frac_Laplacian_decomposition} and~\eqref{intro_eq:frac_Laplacian_decomposition}, we can exploit the continuity properties of (the extensions of) the operator $T_{m_{\alpha,\beta}}$ to deduce two new interpolation inequalities. On the one hand, given $p\in(1,+\infty)$, there exists a constant $c_{n,p}>0$ such that \begin{equation}\label{intro_eq:interpolation_MH_p}
\|\nabla^\beta f\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^n)} \le c_{n,p} \,
\|\nabla^\gamma f\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^n)}^{\frac{\alpha-\beta}{\alpha-\gamma}} \,
\|\nabla^\alpha f\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^n)}^{\frac{\beta-\gamma}{\alpha-\gamma}} \end{equation} for all $0\le\gamma\le\beta\le\alpha\le1$ and all $f\in S^{\alpha,p}(\mathbb{R}^n)$. In the particular case $\gamma=0$, thanks to the $L^p$-continuity of the Riesz transform, we also have \begin{equation}\label{intro_eq:interpolation_MH_p_gamma=0}
\|\nabla^\beta f\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^n)} \le c_{n,p} \,
\|f\|_{L^p(\mathbb{R}^n)}^{\frac{\alpha-\beta}{\alpha}} \,
\|\nabla^\alpha f\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^n)}^{\frac{\beta}{\alpha}} \end{equation} for all $0\le\beta\le\alpha\le1$ and all $f\in S^{\alpha,p}(\mathbb{R}^n)$. On the other hand, there exists a dimensional constant $c_n>0$ such that \begin{equation} \label{intro_eq:interpolation_MH_1}
\|\nabla^\beta f\|_{H^1(\mathbb{R}^n;\,\mathbb{R}^n)} \le c_n \,
\|\nabla^\gamma f\|_{H^1(\mathbb{R}^n;\,\mathbb{R}^n)}^{\frac{\alpha-\beta}{\alpha-\gamma}} \,
\|\nabla^\alpha f\|_{H^1(\mathbb{R}^n;\,\mathbb{R}^n)}^{\frac{\beta-\gamma}{\alpha-\gamma}} \end{equation} for all $0\le\gamma\le\beta\le\alpha\le1$ and all $f\in HS^{\alpha,1}(\mathbb{R}^n)$. Again, in the particular case $\gamma=0$, thanks to the $H^1$-continuity of the Riesz transform, we also have \begin{equation} \label{intro_eq:interpolation_MH_1_gamma=0}
\|\nabla^\beta f\|_{H^1(\mathbb{R}^n;\,\mathbb{R}^n)} \le c_n \,
\|f\|_{H^1(\mathbb{R}^n)}^{\frac{\alpha-\beta}{\alpha}} \,
\|\nabla^\alpha f\|_{H^1(\mathbb{R}^n;\,\mathbb{R}^n)}^{\frac{\beta}{\alpha}} \end{equation} for all $0\le\beta\le\alpha\le1$ and all $f\in HS^{\alpha,1}(\mathbb{R}^n)$. Having the interpolation inequalities~\eqref{intro_eq:interpolation_MH_p_gamma=0} and~\eqref{intro_eq:interpolation_MH_1_gamma=0} at our disposal, as before we just need to establish~\eqref{intro_eq:frac_limit_p} and~\eqref{intro_eq:frac_limit_H1} for all sufficiently regular functions, in which case we can again conclude by a direct computation.
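A first, elementary manifestation of the uniformity in $0\le\beta\le\alpha\le1$ is the pointwise bound $m_{\alpha,\beta}\le1$ on all of~$\mathbb{R}^n$: for $|\xi|\le1$ one has $|\xi|^\beta\le1$, while for $|\xi|\ge1$ one has $|\xi|^\beta\le|\xi|^\alpha\le1+|\xi|^\alpha$. The check below illustrates this numerically (of course, the Mihlin--H\"ormander Theorem also requires the analogous uniform control on the derivatives of~$m_{\alpha,\beta}$):

```python
import numpy as np

def m(xi_norm, alpha, beta):
    """Multiplier m_{alpha,beta}(xi) = |xi|^beta / (1 + |xi|^alpha)."""
    return xi_norm ** beta / (1.0 + xi_norm ** alpha)

xi = np.logspace(-6, 6, 10001)            # |xi| ranging over many scales
for alpha in (0.1, 0.5, 1.0):
    for beta in (0.0, alpha / 2, alpha):  # any 0 <= beta <= alpha
        assert np.all(m(xi, alpha, beta) <= 1.0)
```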
As the reader may have noticed, in the above line of reasoning we can infer the validity of~\eqref{intro_eq:interpolation_MH_p} and~\eqref{intro_eq:interpolation_MH_1} only if we are able to prove the identifications \begin{equation}\label{intro_eq:identification_p} f\in S^{\alpha,p}(\mathbb{R}^n) \iff f\in(\mathrm{Id}-\Delta)^{-\frac\alpha2}(L^p(\mathbb{R}^n)) \iff f\in L^p(\mathbb{R}^n)\cap I_\alpha(L^p(\mathbb{R}^n)), \end{equation} for $p\in(1,+\infty)$, and \begin{equation}\label{intro_eq:identification_H1} f\in HS^{\alpha,1}(\mathbb{R}^n) \iff f\in(\mathrm{Id}-\Delta)^{-\frac\alpha2}(H^1(\mathbb{R}^n)) \iff f\in H^1(\mathbb{R}^n)\cap I_\alpha(H^1(\mathbb{R}^n)), \end{equation} respectively, with equivalence of the naturally associated norms, where $(\mathrm{Id}-\Delta)^{-\frac\alpha2}$ is the standard Bessel potential. While~\eqref{intro_eq:identification_H1} follows by a plain approximation argument building upon the results of~\cite{Str90}, the identification in~\eqref{intro_eq:identification_p} is more delicate and, in fact, answers a question left open in~\cite{CS19} to which it is equivalent, namely the density of $C^\infty_c(\mathbb{R}^n)$ functions in $S^{\alpha,p}(\mathbb{R}^n)$, see \cref{sec:identificaiton_Bessel} for the proof. In other words, the equivalence~\eqref{intro_eq:identification_p} allows us to identify the Bessel potential space \begin{equation*} L^{\alpha,p}(\mathbb{R}^n): =(\mathrm{Id}-\Delta)^{-\frac\alpha2}(L^p(\mathbb{R}^n)) =\set*{f\in\mathcal S'(\mathbb{R}^n) : (\mathrm{Id}-\Delta)^{\frac\alpha2}f\in L^p(\mathbb{R}^n)} \end{equation*} with the distributional fractional Sobolev space $S^{\alpha,p}(\mathbb{R}^n)$ in~\eqref{intro_eq:S_alpha_p}. Thanks to the identification $L^{\alpha,p}(\mathbb{R}^n)=S^{\alpha,p}(\mathbb{R}^n)$, many of the results established in~\cites{BCM20,BCM21} and in~\cites{SS15,SS18} can be proved in a simpler and more direct way. See also \cref{sec:props_of_S_alpha_p} for other consequences of this identification.
\subsection{Complex interpolation and open problems}
To achieve the interpolation inequalities~\eqref{intro_eq:interp_inequ_H1_BV_alpha} and \eqref{intro_eq:interpolation_MH_p} -- \eqref{intro_eq:interpolation_MH_1_gamma=0}, we essentially relied on a direct approach exploiting the precise structure of the fractional gradient in~\eqref{intro_eq:nabla_alpha}. Adopting the point of view of~\cites{M05,PS17}, a possible alternative route to the above fractional inequalities may follow from complex interpolation techniques.
According to~\cite{BL76}*{Theorem~6.4.5(7)} and thanks to the aforementioned identification $L^{\alpha,p}(\mathbb{R}^n)=S^{\alpha,p}(\mathbb{R}^n)$, for all $\alpha,\vartheta\in(0,1)$ and $p\in(1,+\infty)$ we have the complex interpolation \begin{equation} \label{intro_eq:interpolation_Bessel} (L^p(\mathbb{R}^n),S^{\alpha,p}(\mathbb{R}^n))_{ [\vartheta]} \cong S^{\vartheta\alpha,p}(\mathbb{R}^n). \end{equation} Here and in the following, we write $A\cong B$ to emphasize the fact that the spaces~$A$ and~$B$ are the same with equivalence (and thus, possibly, not equality) of the relative norms. As a consequence, \eqref{intro_eq:interpolation_Bessel} implies that, for all $0<\beta<\alpha<1$ and $p\in(1,+\infty)$, there exists a constant $c_{n,\alpha,\beta,p}>0$ such that \begin{equation} \label{intro_eq:interpolation_Bessel_ineq}
\|f\|_{S^{\beta,p}(\mathbb{R}^n)} \le c_{n,\alpha,\beta,p} \,
\|f\|_{L^p(\mathbb{R}^n)}^{\frac{\alpha-\beta}{\alpha}} \,
\|f\|_{S^{\alpha,p}(\mathbb{R}^n)}^{\frac{\beta}{\alpha}} \end{equation} for all $f\in S^{\alpha,p}(\mathbb{R}^n)$. In a similar way (we omit the proof, as it is beyond the scope of the present paper), for all $\alpha,\vartheta\in(0,1)$ one can also establish the complex interpolation \begin{equation} \label{intro_eq:interpolation_HS} (H^1(\mathbb{R}^n),HS^{\alpha,1}(\mathbb{R}^n))_{ [\vartheta]}\cong HS^{\vartheta\alpha,1}(\mathbb{R}^n), \end{equation} and thus, for some constant $c_{n,\alpha,\beta}>0$, \begin{equation}\label{intro_eq:interpolation_HS_ineq}
\|f\|_{HS^{\beta,1}(\mathbb{R}^n)} \le c_{n,\alpha,\beta} \,
\|f\|_{H^1(\mathbb{R}^n)}^{\frac{\alpha-\beta}{\alpha}} \,
\|f\|_{HS^{\alpha,1}(\mathbb{R}^n)}^{\frac{\beta}{\alpha}} \end{equation} for all $f\in HS^{\alpha,1}(\mathbb{R}^n)$.
Inequalities~\eqref{intro_eq:interpolation_Bessel_ineq} and \eqref{intro_eq:interpolation_HS_ineq} suggest that, in order to obtain \eqref{intro_eq:interpolation_MH_p_gamma=0} and \eqref{intro_eq:interpolation_MH_1_gamma=0} with complex interpolation methods, one essentially should prove that the identifications~\eqref{intro_eq:interpolation_Bessel} and~\eqref{intro_eq:interpolation_HS} hold uniformly with respect to the interpolating parameter. We believe that this result may be achieved but, since we do not need this level of generality for our aims, we preferred to prove~\eqref{intro_eq:interpolation_MH_p} -- \eqref{intro_eq:interpolation_MH_1_gamma=0} in a more direct and explicit way.
We do not know whether inequality~\eqref{intro_eq:interp_inequ_H1_BV_alpha} can also be obtained by complex interpolation methods. In fact, we do not even know whether the spaces $(H^1(\mathbb{R}^n),BV(\mathbb{R}^n))_{ [\vartheta]}$ and $BV^\vartheta(\mathbb{R}^n)$ are linked in some way for $\vartheta\in(0,1)$ (for a related discussion, see also~\cite{SS15}*{Section~1.1}). By~\cite{BL76}*{Theorems~3.5.3 and~6.4.5(1)}, we have the real interpolations \begin{equation*} (L^1(\mathbb{R}^n),W^{1,1}(\mathbb{R}^n))_{\vartheta,p} \cong (L^1(\mathbb{R}^n),BV(\mathbb{R}^n))_{\vartheta,p} \cong B^\vartheta_{1,p}(\mathbb{R}^n) \end{equation*} for all $\vartheta\in(0,1)$ and $p\in[1,+\infty]$, where $B^\vartheta_{p,q}(\mathbb{R}^n)$ denotes the Besov space as usual (see~\cite{BL76}*{Section~6.2} or~\cite{L09}*{Chapter~14} for the definition). By~\cite{BL76}*{Theorem~4.7.1}, we know that \begin{equation*} (H^1(\mathbb{R}^n),BV(\mathbb{R}^n))_{\vartheta,1} \subset (H^1(\mathbb{R}^n),BV(\mathbb{R}^n))_{ [\vartheta]} \subset (H^1(\mathbb{R}^n),BV(\mathbb{R}^n))_{\vartheta,\infty} \end{equation*} for all $\vartheta\in(0,1)$. Since $H^1(\mathbb{R}^n)\subset L^1(\mathbb{R}^n)$ continuously, on the one side we have \begin{equation*} (H^1(\mathbb{R}^n),BV(\mathbb{R}^n))_{\vartheta,1} \subset (L^1(\mathbb{R}^n),BV(\mathbb{R}^n))_{\vartheta,1} \cong B^\vartheta_{1,1}(\mathbb{R}^n) \cong W^{\vartheta,1}(\mathbb{R}^n) \end{equation*} and, on the other side, \begin{equation*} (H^1(\mathbb{R}^n),BV(\mathbb{R}^n))_{\vartheta,\infty} \subset (L^1(\mathbb{R}^n),BV(\mathbb{R}^n))_{\vartheta,\infty} \cong B^\vartheta_{1,\infty}(\mathbb{R}^n), \end{equation*} for all $\vartheta\in(0,1)$. On the one hand, the continuous inclusion $W^{\alpha,1}(\mathbb{R}^n)\subset BV^\alpha(\mathbb{R}^n)$ is strict for all $\alpha\in(0,1)$ by~\cite{CS19}*{Theorem~3.31}.
On the other hand, the inclusion $BV^\alpha(\mathbb{R}^n)\subset B^\alpha_{1,\infty}(\mathbb{R}^n)$ holds continuously for all $\alpha\in(0,1)$ as a consequence of~\cite{CS19}*{Proposition~3.14}, but it also holds strictly when $n\ge2$, see \cref{thm:strict_inclusion}.
\subsection{Organization of the paper}
We conclude this introduction by briefly presenting the organization of the present paper. \cref{sec:preliminaires} provides the main notation, recalls the needed properties of the fractional operators~$\nabla^\alpha$ and~$\mathrm{div}^\alpha$ and, finally, deals with the properties of the space $HS^{\alpha,1}(\mathbb{R}^n)$. \cref{sec:BV_0} is devoted to the proof of the identification $BV^0(\mathbb{R}^n)=H^1(\mathbb{R}^n)$, together with some useful consequences about the relation between $H^1(\mathbb{R}^n)$ and $W^{\alpha,1}(\mathbb{R}^n)$. In Sections~\ref{sec:interpolation_inequalities} and~\ref{sec:asymptotic_to_zero}, the core of our work, we detail the proof of the interpolation inequalities~\eqref{intro_eq:interp_inequ_H1_BV_alpha}, \eqref{intro_eq:interpolation_MH_p} and~\eqref{intro_eq:interpolation_MH_1} and, consequently, we prove both the strong convergence of the fractional gradient~$\nabla^\alpha$ as $\alpha\to0^+$ given by~\eqref{intro_eq:frac_limit_p}, \eqref{intro_eq:frac_limit_H1} and the limit~\eqref{intro_eq:mattia_limit}. We close our work with three appendices: in \cref{sec:identificaiton_Bessel} we prove the density of $C^\infty_c(\mathbb{R}^n)$ functions in $S^{\alpha,p}(\mathbb{R}^n)$; in \cref{sec:props_of_S_alpha_p} we state some properties of $S^{\alpha,p}$-functions; in \cref{sec:continuity_props_nabla_alpha} we establish some continuity properties of the map $\alpha\mapsto\nabla^\alpha$.
\section{Preliminaries} \label{sec:preliminaires}
We start with a brief description of the main notation used in this paper. In order to keep the exposition as reader-friendly as possible, we retain the same notation adopted in the previous works~\cites{CS19,CS19-2}.
\subsection{General notation} \label{subsec:notation}
We let $\Leb{n}$ and $\Haus{\alpha}$ be the $n$-dimensional Lebesgue measure and the $\alpha$-dimensional Hausdorff measure on $\mathbb{R}^n$ respectively, with $\alpha \ge 0$. By a measurable set we mean a $\Leb{n}$-measurable set. We also use the notation $|E|=\Leb{n}(E)$. All functions we consider in this paper are Lebesgue measurable. We let $B_r(x)$ be the standard open Euclidean ball with center $x\in\mathbb{R}^n$ and radius $r>0$. We set $B_r=B_r(0)$. Recall that $\omega_{n} := |B_1|=\pi^{\frac{n}{2}}/\Gamma\left(\frac{n+2}{2}\right)$ and $\Haus{n-1}(\partial B_{1}) = n \omega_n$, where $\Gamma$ is Euler's \emph{Gamma function}, see~\cite{A64}.
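For the reader's convenience, the formula for $\omega_n$ above reproduces the familiar low-dimensional values of the unit-ball volume and of the sphere area $n\omega_n$; a trivial Python check:

```python
from math import pi, gamma

def omega(n):
    """Volume of the unit ball in R^n: pi^(n/2) / Gamma((n+2)/2)."""
    return pi ** (n / 2) / gamma((n + 2) / 2)

assert abs(omega(1) - 2.0) < 1e-12          # the interval (-1, 1)
assert abs(omega(2) - pi) < 1e-12           # the unit disc
assert abs(omega(3) - 4 * pi / 3) < 1e-12   # the unit ball
assert abs(3 * omega(3) - 4 * pi) < 1e-12   # sphere area n*omega_n in R^3
```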
For $m\in\mathbb{N}$ and an open set $\Omega\subset\mathbb{R}^n$, the total variation on~$\Omega$ of the $m$-vector-valued Radon measure $\mu$ is defined as \begin{equation*}
|\mu|(\Omega):=\sup\set*{\int_\Omega\varphi\cdot d\mu : \varphi\in C^\infty_c(\Omega;\mathbb{R}^m),\ \|\varphi\|_{L^\infty(\Omega;\,\mathbb{R}^m)}\le1}. \end{equation*} We thus let $\mathscr{M}(\Omega;\mathbb{R}^m)$ be the space of $m$-vector-valued Radon measures with finite total variation on $\Omega$.
For $k\in\mathbb{N}_{0}\cup\set{+\infty}$ and $m \in \mathbb{N}$, we let $C^{k}_{c}(\Omega;\mathbb{R}^{m})$ and $\Lip_c(\Omega;\mathbb{R}^{m})$ be the spaces of $C^{k}$-regular and, respectively, Lipschitz-regular, $m$-vector-valued functions defined on~$\mathbb{R}^n$ with compact support in the open set~$\Omega\subset\mathbb{R}^n$.
For $m\in\mathbb{N}$, we let $\mathcal{S}(\mathbb{R}^n;\mathbb{R}^m)$ be the space of $m$-vector-valued Schwartz functions on~$\mathbb{R}^n$. For $k\in\mathbb{N}_{0}\cup\set{+\infty}$ and $m\in\mathbb{N}$, let \begin{equation*}
\mathcal{S}_k(\mathbb{R}^n;\mathbb{R}^m):=\set*{f\in\mathcal{S}(\mathbb{R}^n;\mathbb{R}^m) : \int_{\mathbb{R}^n} x^\mathsf{a}f(x)\,dx=0\ \text{for all}\ \mathsf{a}\in\mathbb{N}^n_0\ \text{with}\ |\mathsf{a}|\le k}, \end{equation*} where $x^\mathsf{a}:=x_1^{\mathsf{a}_1}\cdot\ldots\cdot x_n^{\mathsf{a}_n}$ for all multi-indices $\mathsf{a}\in\mathbb{N}^n_0$. We let $\mathcal{S}'(\mathbb{R}^n;\mathbb{R}^m)$ be the dual of $\mathcal{S}(\mathbb{R}^n;\mathbb{R}^m)$ and we call it the space of tempered distributions. See~\cite{G14-C}*{Section~2.2 and 2.3} for instance.
For any exponent $p\in[1,+\infty]$, we let $L^p(\Omega;\mathbb{R}^m)$ be the space of $m$-vector-valued Lebesgue $p$-integrable functions on~$\Omega$.
We let \begin{equation*} \mathcal{F}(f)(\xi) := \int_{\mathbb{R}^n} f(x) \, e^{- i \xi \cdot x} \, dx, \quad \xi \in \mathbb{R}^n, \end{equation*} be the Fourier transform of the function $f \in L^1(\mathbb{R}^n;\mathbb{R}^m)$. As is well known, the Fourier transform maps $\mathcal{S}(\mathbb{R}^n;\mathbb{R}^m)$ onto itself and may be extended to $\mathcal{S}'(\mathbb{R}^n;\mathbb{R}^m)$ (see \cite{G14-C}*{Sections~2.2 and 2.3} for instance).
We let \begin{equation*}
W^{1,p}(\Omega;\mathbb{R}^m):=\set*{u\in L^p(\Omega;\mathbb{R}^m) : [u]_{W^{1,p}(\Omega;\mathbb{R}^m)}:=\|\nabla u\|_{L^p(\Omega;\mathbb{R}^{n m})}<+\infty} \end{equation*} be the space of $m$-vector-valued Sobolev functions on~$\Omega$, see for instance~\cite{L09}*{Chapter~10} for its precise definition and main properties. We let \begin{equation*}
BV(\Omega;\mathbb{R}^m):=\set*{u\in L^1(\Omega;\mathbb{R}^m) : [u]_{BV(\Omega;\mathbb{R}^m)}:=|Du|(\Omega)<+\infty} \end{equation*} be the space of $m$-vector-valued functions of bounded variation on~$\Omega$, see for instance~\cite{AFP00}*{Chapter~3} or~\cite{EG15}*{Chapter~5} for its precise definition and main properties.
For $\alpha\in(0,1)$ and $p\in[1,+\infty)$, we let \begin{equation*}
W^{\alpha,p}(\Omega;\mathbb{R}^m):=\set*{u\in L^p(\Omega;\mathbb{R}^m) : [u]_{W^{\alpha,p}(\Omega;\mathbb{R}^m)}\!:=\left(\int_\Omega\int_\Omega\frac{|u(x)-u(y)|^p}{|x-y|^{n+p\alpha}}\,dx\,dy\right)^{\frac{1}{p}}\!<+\infty} \end{equation*} be the space of $m$-vector-valued fractional Sobolev functions on~$\Omega$, see~\cite{DiNPV12} for its precise definition and main properties. For $\alpha\in(0,1)$ and $p=+\infty$, we simply let \begin{equation*}
W^{\alpha,\infty}(\Omega;\mathbb{R}^m):=\set*{u\in L^\infty(\Omega;\mathbb{R}^m) : \sup_{x,y\in \Omega,\, x\neq y}\frac{|u(x)-u(y)|}{|x-y|^\alpha}<+\infty}, \end{equation*} so that $W^{\alpha,\infty}(\Omega;\mathbb{R}^m)=C^{0,\alpha}_b(\Omega;\mathbb{R}^m)$, the space of $m$-vector-valued bounded $\alpha$-H\"older continuous functions on~$\Omega$.
Given $\alpha\in(0,n)$, let \begin{equation}\label{eq:Riesz_potential_def} I_{\alpha} f(x) := 2^{-\alpha} \pi^{- \frac{n}{2}} \frac{\Gamma\left(\frac{n-\alpha}2\right)}{\Gamma\left(\frac\alpha2\right)}
\int_{\mathbb{R}^{n}} \frac{f(y)}{|x - y|^{n - \alpha}} \, dy, \quad x\in\mathbb{R}^n, \end{equation} be the Riesz potential of order $\alpha\in(0,n)$ of $f\in C^\infty_c(\mathbb{R}^n;\mathbb{R}^m)$. We recall that, if $\alpha,\beta\in(0,n)$ satisfy $\alpha+\beta<n$, then we have the following \emph{semigroup property} \begin{equation}\label{eq:Riesz_potential_semigroup} I_{\alpha}(I_\beta f)=I_{\alpha+\beta}f \end{equation} for all $f\in C^\infty_c(\mathbb{R}^n;\mathbb{R}^m)$. In addition, if $1<p<q<+\infty$ satisfy \begin{equation*} \frac{1}{q}=\frac{1}{p}-\frac{\alpha}{n}, \end{equation*} then there exists a constant $C_{n,\alpha,p}>0$ such that the operator in~\eqref{eq:Riesz_potential_def} satisfies \begin{equation}\label{eq:Riesz_potential_boundedness}
\|I_\alpha f\|_{L^q(\mathbb{R}^n;\,\mathbb{R}^m)}\le C_{n,\alpha,p}\|f\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^m)} \end{equation} for all $f\in C^\infty_c(\mathbb{R}^n;\,\mathbb{R}^m)$. As a consequence, the operator in~\eqref{eq:Riesz_potential_def} extends to a linear continuous operator from $L^p(\mathbb{R}^n;\mathbb{R}^m)$ to $L^q(\mathbb{R}^n;\mathbb{R}^m)$, for which we retain the same notation. For a proof of~\eqref{eq:Riesz_potential_semigroup} and~\eqref{eq:Riesz_potential_boundedness}, we refer the reader to~\cite{S70}*{Chapter~V, Section~1} and to~\cite{G14-M}*{Section~1.2.1}.
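At least formally, the semigroup property~\eqref{eq:Riesz_potential_semigroup} can be read off on the Fourier side: with the normalization chosen in~\eqref{eq:Riesz_potential_def}, the Riesz potential acts as the Fourier multiplier $|\xi|^{-\alpha}$ (see~\cite{S70}*{Chapter~V, Section~1}), so that, for sufficiently nice~$f$ and $\xi\neq0$,

```latex
\mathcal{F}(I_\alpha f)(\xi) = |\xi|^{-\alpha}\,\mathcal{F}f(\xi)
\quad\Longrightarrow\quad
\mathcal{F}\big(I_\alpha(I_\beta f)\big)(\xi)
= |\xi|^{-\alpha}\,|\xi|^{-\beta}\,\mathcal{F}f(\xi)
= |\xi|^{-(\alpha+\beta)}\,\mathcal{F}f(\xi)
= \mathcal{F}\big(I_{\alpha+\beta}f\big)(\xi),
```

in accordance with~\eqref{eq:Riesz_potential_semigroup} whenever $\alpha+\beta<n$.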
Given $\alpha\in(0,1)$, we also let \begin{equation}\label{eq:def_frac_Laplacian} (- \Delta)^{\frac{\alpha}{2}} f(x) := \nu_{n, \alpha}
\int_{\mathbb{R}^n} \frac{f(x + y) - f(x)}{|y|^{n + \alpha}}\,dy, \quad x\in\mathbb{R}^n, \end{equation} be the fractional Laplacian (of order~$\alpha$) of $f \in\Lip_b(\mathbb{R}^{n};\mathbb{R}^m)$, where \begin{equation*} \nu_{n,\alpha}=2^\alpha\pi^{-\frac n2}\frac{\Gamma\left(\frac{n+\alpha}{2}\right)}{\Gamma\left(-\frac{\alpha}{2}\right)}, \quad \alpha\in(0,1). \end{equation*}
For $\alpha\in(0,1)$ and $p\in(1,+\infty)$, let \begin{equation}\label{eq:def1_Bessel_space} \begin{split} L^{\alpha,p}(\mathbb{R}^n;\mathbb{R}^m) &:=(\mathrm{Id}-\Delta)^{-\frac\alpha2}(L^p(\mathbb{R}^n;\mathbb{R}^m))\\ &:=\set*{f\in\mathcal S'(\mathbb{R}^n;\mathbb{R}^m) : (\mathrm{Id}-\Delta)^{\frac\alpha2}f\in L^p(\mathbb{R}^n;\mathbb{R}^m)} \end{split} \end{equation} be the $m$-vector-valued Bessel potential space with norm \begin{equation}\label{eq:def_Bessel_norm1}
\|f\|_{L^{\alpha,p}(\mathbb{R}^n;\,\mathbb{R}^m)}=
\|(\mathrm{Id}-\Delta)^{\frac\alpha2}f\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^m)}, \quad f\in L^{\alpha,p}(\mathbb{R}^n;\mathbb{R}^m), \end{equation} see~\cite{A75}*{Sections~7.59--7.65} for its precise definition and main properties. We also refer to~\cite{SKM93}*{Section~27.3}, where the authors prove that the space in~\eqref{eq:def1_Bessel_space} can be equivalently defined as the space \begin{equation}\label{eq:def2_Bessel_space} L^p(\mathbb{R}^n;\mathbb{R}^m)\cap I_\alpha(L^p(\mathbb{R}^n;\mathbb{R}^m)) =\set*{f\in L^p(\mathbb{R}^n;\mathbb{R}^m) : (-\Delta)^{\frac\alpha2}f\in L^p(\mathbb{R}^n;\mathbb{R}^m)}, \end{equation} see~\cite{SKM93}*{Theorem~27.3}. In particular, the function \begin{equation}\label{eq:def_Bessel_norm2} f\mapsto
\|f\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^m)} +
\|(-\Delta)^{\frac\alpha2}f\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^m)}, \quad f\in L^{\alpha,p}(\mathbb{R}^n;\,\mathbb{R}^m), \end{equation} defines a norm on $L^{\alpha,p}(\mathbb{R}^n;\,\mathbb{R}^m)$ equivalent to the one in~\eqref{eq:def_Bessel_norm1} (and so, unless otherwise stated, we will use both norms~\eqref{eq:def_Bessel_norm1} and~\eqref{eq:def_Bessel_norm2} with no particular distinction). We recall that $C^\infty_c(\mathbb{R}^n;\mathbb{R}^m)$ is a dense subset of $L^{\alpha,p}(\mathbb{R}^n;\mathbb{R}^m)$, see~\cite{A75}*{Theorem~7.63(a)} and~\cite{SKM93}*{Lemma~27.2}. Note that the space $L^{\alpha,p}(\mathbb{R}^n;\mathbb{R}^m)$ can also be defined for any $\alpha\ge1$ by simply using the composition properties of the Bessel potential (or of the fractional Laplacian), see~\cite{A75}*{Section~7.62}. All the properties stated above remain true for $\alpha\ge1$ as well and, moreover, $L^{k,p}(\mathbb{R}^n;\mathbb{R}^m)=W^{k,p}(\mathbb{R}^n;\mathbb{R}^m)$ for all $k\in\mathbb{N}$, see~\cite{A75}*{Theorem~7.63(f)}.
For $m\in\mathbb{N}$, we let \begin{equation*} H^1(\mathbb{R}^n;\mathbb{R}^m):=\set*{f\in L^1(\mathbb{R}^n;\mathbb{R}^m) : Rf\in L^1(\mathbb{R}^n;\mathbb{R}^{mn})} \end{equation*} be the $m$-vector-valued (real) Hardy space endowed with the norm \begin{equation*}
\|f\|_{H^1(\mathbb{R}^n;\,\mathbb{R}^m)}:=\|f\|_{L^1(\mathbb{R}^n;\,\mathbb{R}^m)}+\|Rf\|_{L^1(\mathbb{R}^n;\,\mathbb{R}^{mn})} \end{equation*} for all $f\in H^1(\mathbb{R}^n;\mathbb{R}^m)$, where $Rf$ denotes the Riesz transform of~$f\in H^1(\mathbb{R}^n;\mathbb{R}^m)$, componentwise defined by \begin{equation}\label{eq:def_Riesz_transform} R f_{i}(x)
:=\pi^{-\frac{n+1}2}\,\Gamma\left(\tfrac{n+1}{2}\right)\,\lim_{\varepsilon\to0^+}\int_{\set*{|y|>\varepsilon}}\frac{y\,f_i(x+y)}{|y|^{n+1}}\,dy, \quad x\in\mathbb{R}^n,\ i=1,\dots,m. \end{equation} We refer the reader to~\cite{G14-M}*{Sections~2.1 and~2.4.4}, \cite{S70}*{Chapter III, Section 1} and~\cite{S93}*{Chapter~III} for a more detailed exposition. We warn the reader that the definition in~\eqref{eq:def_Riesz_transform} agrees with the one in~\cites{S93} and differs from the one in~\cites{G14-M,S70} for a minus sign. We also recall that the Riesz transform~\eqref{eq:def_Riesz_transform} defines a continuous operator $R\colon L^p(\mathbb{R}^n;\mathbb{R}^m)\to L^p(\mathbb{R}^n;\mathbb{R}^{mn})$ for any given $p\in(1,+\infty)$, see~\cite{G14-C}*{Corollary~5.2.8}, and a continuous operator $R\colon H^1(\mathbb{R}^n;\mathbb{R}^m)\to H^1(\mathbb{R}^n;\mathbb{R}^{mn})$, see~\cite{S93}*{Chapter~III, Section~5.25}.
In the sequel, in order to avoid heavy notation, if the elements of a function space $F(\Omega;\mathbb{R}^m)$ are real-valued (i.e.~$m=1$), then we will drop the target space and simply write~$F(\Omega)$.
\subsection{Overview of \texorpdfstring{$\nabla^\alpha$}{nabla^alpha} and \texorpdfstring{$\mathrm{div}^\alpha$}{div^alpha} and the related function spaces} \label{sect:overview_fract_grad}
We recall the definition (and the main properties) of the non-local operators~$\nabla^\alpha$ and~$\diverg^\alpha$, see~\cites{S19,CS19,CS19-2} and the monograph~\cite{P16}*{Section~15.2}.
Let $\alpha\in(0,1)$ and set \begin{equation*} \mu_{n, \alpha} := 2^{\alpha}\, \pi^{- \frac{n}{2}}\, \frac{\Gamma\left ( \frac{n + \alpha + 1}{2} \right )}{\Gamma\left ( \frac{1 - \alpha}{2} \right )}. \end{equation*} We let \begin{equation*}
\nabla^{\alpha} f(x) := \mu_{n, \alpha} \lim_{\varepsilon \to 0^+} \int_{\{ |y| > \varepsilon \}} \frac{y \, f(x + y)}{|y|^{n + \alpha + 1}} \, dy \end{equation*} be the \emph{fractional $\alpha$-gradient} of $f\in\Lip_c(\mathbb{R}^n)$ at $x\in\mathbb{R}^n$. We also let \begin{equation*}
\mathrm{div}^{\alpha} \varphi(x) := \mu_{n, \alpha} \lim_{\varepsilon \to 0^+} \int_{\{ |y| > \varepsilon \}} \frac{y \cdot \varphi(x + y)}{|y|^{n + \alpha + 1}} \, dy \end{equation*} be the \emph{fractional $\alpha$-divergence} of $\varphi\in\Lip_c(\mathbb{R}^n;\mathbb{R}^n)$ at $x\in\mathbb{R}^n$. The non-local operators~$\nabla^\alpha$ and~$\diverg^\alpha$ are well defined in the sense that the involved integrals converge and the limits exist. Moreover, since \begin{equation*}
\int_{\set*{|z| > \varepsilon}} \frac{z}{|z|^{n + \alpha + 1}} \, dz=0, \quad \forall\varepsilon>0, \end{equation*} it is immediate to check that $\nabla^{\alpha}c=0$ for all $c\in\mathbb{R}$ and \begin{align*} \nabla^{\alpha} f(x)
&=\mu_{n, \alpha} \int_{\mathbb{R}^{n}} \frac{(y - x) (f(y) - f(x)) }{|y - x|^{n + \alpha + 1}} \, dy, \quad \forall x\in\mathbb{R}^n, \end{align*} for all $f\in\Lip_c(\mathbb{R}^n)$. Analogously, we also have \begin{align*} \mathrm{div}^{\alpha} \varphi(x)
&= \mu_{n, \alpha} \int_{\mathbb{R}^{n}} \frac{(y - x) \cdot (\varphi(y) - \varphi(x)) }{|y - x|^{n + \alpha + 1}} \, dy, \quad \forall x\in\mathbb{R}^n, \end{align*} for all $\varphi\in\Lip_c(\mathbb{R}^n;\mathbb{R}^n)$.
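The difference representation of~$\nabla^\alpha$ above can be justified by the following sketch (the absolute convergence follows from the Lipschitz bound $|f(x+y)-f(x)|\le\Lip(f)\,|y|$ near the origin and from the compact support of~$f$ at infinity):

```latex
\begin{align*}
\nabla^{\alpha} f(x)
&= \mu_{n, \alpha} \lim_{\varepsilon \to 0^+} \int_{\{ |y| > \varepsilon \}}
   \frac{y \, (f(x + y) - f(x))}{|y|^{n + \alpha + 1}} \, dy
&& \text{(by the vanishing average in $y$)} \\
&= \mu_{n, \alpha} \int_{\mathbb{R}^{n}}
   \frac{y \, (f(x + y) - f(x))}{|y|^{n + \alpha + 1}} \, dy
&& \text{(absolute convergence, since $\alpha < 1$)} \\
&= \mu_{n, \alpha} \int_{\mathbb{R}^{n}}
   \frac{(y - x) \, (f(y) - f(x))}{|y - x|^{n + \alpha + 1}} \, dy
&& \text{(change of variables $y \mapsto y - x$)}.
\end{align*}
```

The computation for $\mathrm{div}^\alpha$ is entirely analogous.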
Thanks to~\cite{CS19}*{Proposition~2.2}, given $\alpha\in(0,1)$ we can equivalently write \begin{equation}\label{eq:def_frac_operators_Riesz_potential} \nabla^\alpha f =\nabla I_{1-\alpha}f=I_{1-\alpha}\nabla f \quad \text{and} \quad \mathrm{div}^\alpha \varphi =\mathrm{div} I_{1-\alpha}\varphi=I_{1-\alpha}\mathrm{div} \varphi \end{equation} for all $f\in\Lip_c(\mathbb{R}^n)$ and $\varphi\in\Lip_c(\mathbb{R}^n;\mathbb{R}^n)$, respectively.
The fractional operators $\nabla^\alpha$ and $\mathrm{div}^\alpha$ are \emph{dual} in the sense that \begin{equation}\label{eq:duality} \int_{\mathbb{R}^n}f\,\mathrm{div}^\alpha\varphi \,dx=-\int_{\mathbb{R}^n}\varphi\cdot\nabla^\alpha f\,dx \end{equation} for all $f\in\Lip_c(\mathbb{R}^n)$ and $\varphi\in\Lip_c(\mathbb{R}^n;\mathbb{R}^n)$, see~\cite{Sil19}*{Section~6} and~\cite{CS19}*{Lemma~2.5}. In addition, given $f\in\Lip_c(\mathbb{R}^n)$ and $\varphi\in\Lip_c(\mathbb{R}^n;\mathbb{R}^n)$, we have \begin{equation}\label{eq:integrability} \nabla^\alpha f\in L^p(\mathbb{R}^n;\mathbb{R}^n) \quad\text{and}\quad \mathrm{div}^\alpha\varphi\in L^p(\mathbb{R}^n) \end{equation} for all $p\in[1,+\infty]$, see~\cite{CS19}*{Corollary~2.3}. The above results and identities hold also for functions $f \in \mathcal{S}(\mathbb{R}^n)$ and $\varphi \in \mathcal{S}(\mathbb{R}^n; \mathbb{R}^n)$.
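Granted the identities in~\eqref{eq:def_frac_operators_Riesz_potential}, the duality relation~\eqref{eq:duality} can be sketched by combining the symmetry of the kernel of~$I_{1-\alpha}$ (via Fubini's Theorem) with the classical integration-by-parts formula; we refer to the cited references for the detailed justification:

```latex
\begin{align*}
\int_{\mathbb{R}^n} f \, \mathrm{div}^\alpha \varphi \, dx
&= \int_{\mathbb{R}^n} f \, I_{1-\alpha} \mathrm{div}\, \varphi \, dx
 = \int_{\mathbb{R}^n} (I_{1-\alpha} f) \, \mathrm{div}\, \varphi \, dx \\
&= -\int_{\mathbb{R}^n} \nabla I_{1-\alpha} f \cdot \varphi \, dx
 = -\int_{\mathbb{R}^n} \varphi \cdot \nabla^\alpha f \, dx.
\end{align*}
```

Here the classical integration by parts in the third equality is legitimate since $\varphi$ has compact support.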
Given $\alpha\in(0,1)$ and $p\in[1,+\infty]$, inspired by the integration-by-parts formula~\eqref{eq:duality}, we say that a function $f\in L^p(\mathbb{R}^n)$ has bounded \emph{fractional $\alpha$-variation} if \begin{equation}\label{eq:def_fractional_variation}
|D^\alpha f|(\mathbb{R}^n):=\sup\set*{\int_{\mathbb{R}^n}f\,\mathrm{div}^\alpha\varphi\,dx : \varphi\in C^\infty_c(\mathbb{R}^n;\mathbb{R}^n),\ \|\varphi\|_{L^\infty(\mathbb{R}^n;\,\mathbb{R}^n)}\le1}<+\infty, \end{equation} see~\cite{CS19}*{Section~3} for the case $p=1$ and the discussion in~\cite{CS19-2}*{Section~3.3} for the case $p\in(1,+\infty]$. Note that the above notion of fractional $\alpha$-variation is well posed thanks to the integrability property~\eqref{eq:integrability}. Following the strategy outlined in~\cite{CS19}*{Section~3.2}, the reader can verify that the linear space \begin{equation*} BV^{\alpha,p}(\mathbb{R}^n):=
\set*{f\in L^p(\mathbb{R}^n) : |D^\alpha f|(\mathbb{R}^n)<+\infty} \end{equation*} endowed with the norm \begin{equation*}
\|f\|_{BV^{\alpha,p}(\mathbb{R}^n)}:=\|f\|_{L^p(\mathbb{R}^n)}+|D^\alpha f|(\mathbb{R}^n), \quad f\in BV^{\alpha,p}(\mathbb{R}^n), \end{equation*} is a Banach space and that the fractional variation defined in~\eqref{eq:def_fractional_variation} is lower semicontinuous with respect to $L^p$-convergence.
In the sequel, we also use the notation $[f]_{BV^{\alpha,p}(\mathbb{R}^n)}=|D^\alpha f|(\mathbb{R}^n)$ for a given $f\in BV^{\alpha,p}(\mathbb{R}^n)$.
In the case $p=1$, we simply write $BV^{\alpha,1}(\mathbb{R}^n)=BV^\alpha(\mathbb{R}^n)$. The space $BV^\alpha(\mathbb{R}^n)$ resembles the classical space $BV(\mathbb{R}^n)$ from many points of view and we refer the reader to~\cite{CS19}*{Section~3} for a detailed exposition of its main properties.
Again motivated by~\eqref{eq:duality} and in analogy with the classical case, given $\alpha\in(0,1)$ and $p\in[1,+\infty]$ we define the \emph{weak fractional $\alpha$-gradient} of a function $f\in L^p(\mathbb{R}^n)$ as the function $\nabla^\alpha f\in L^1_{\loc}(\mathbb{R}^n;\mathbb{R}^n)$ satisfying \begin{equation*} \int_{\mathbb{R}^n}f\,\mathrm{div}^\alpha\varphi\, dx =-\int_{\mathbb{R}^n}\nabla^\alpha f\cdot\varphi\, dx \end{equation*} for all $\varphi\in C^\infty_c(\mathbb{R}^n;\mathbb{R}^n)$. We notice that, in the case $f \in \Lip_c(\mathbb{R}^n)$ (or $f \in \mathcal{S}(\mathbb{R}^n)$), the weak fractional $\alpha$-gradient of $f$ coincides with the one defined above, thanks to \eqref{eq:duality}. As above, the reader can verify that the \emph{distributional fractional Sobolev space} \begin{equation}\label{eq:def_distrib_frac_Sobolev} S^{\alpha,p}(\mathbb{R}^n):=\set*{f\in L^p(\mathbb{R}^n) : \exists\, \nabla^\alpha f \in L^p(\mathbb{R}^n;\mathbb{R}^n)} \end{equation} endowed with the norm \begin{equation}\label{eq:def_distrib_frac_Sobolev_norm}
\|f\|_{S^{\alpha,p}(\mathbb{R}^n)}:=\|f\|_{L^p(\mathbb{R}^n)}+\|\nabla^\alpha f\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^{n})}, \quad f\in S^{\alpha,p}(\mathbb{R}^n), \end{equation} is a Banach space.
In the case $p=1$, starting from the very definition of the fractional gradient~$\nabla^\alpha$, one can check that $W^{\alpha,1}(\mathbb{R}^n)\subset S^{\alpha,1}(\mathbb{R}^n)\subset BV^\alpha(\mathbb{R}^n)$ with both strict continuous embeddings, see~\cite{CS19}*{Theorems 3.18, 3.25, 3.26, 3.30 and~3.31}, and that $C^\infty_c(\mathbb{R}^n)$ is a dense subset of $S^{\alpha,1}(\mathbb{R}^n)$, see~\cite{CS19}*{Theorem~3.23}.
In the case $p\in(1,+\infty)$, the density of the set of test functions in the space $S^{\alpha,p}(\mathbb{R}^n)$ was left as an open problem in~\cite{CS19}*{Section~3.9}. More precisely, defining \begin{equation*}
S^{\alpha,p}_0(\mathbb{R}^n):=\closure[-1]{C^\infty_c(\mathbb{R}^n)}^{\,\|\cdot\|_{S^{\alpha,p}(\mathbb{R}^n)}} \end{equation*}
endowed with the norm in~\eqref{eq:def_distrib_frac_Sobolev_norm}, it is immediate to see that $S^{\alpha,p}_0(\mathbb{R}^n)\subset S^{\alpha,p}(\mathbb{R}^n)$ with continuous embedding. The space $(S^{\alpha,p}_0(\mathbb{R}^n),\|\cdot\|_{S^{\alpha,p}(\mathbb{R}^n)})$ was introduced in~\cite{SS15} (with a different, but equivalent, norm) and, in fact, it satisfies \begin{equation*} S^{\alpha,p}_0(\mathbb{R}^n) = L^{\alpha, p}(\mathbb{R}^{n}) \end{equation*} for all $\alpha \in (0, 1)$ and $p \in (1, + \infty)$, see~\cite{SS15}*{Theorem 1.7}. In~\cref{res:L_p_alpha_equal_S_p_alpha} in the appendix, we solve the problem of the density of $C^\infty_c(\mathbb{R}^n)$ in the space $S^{\alpha,p}(\mathbb{R}^n)$ in the affirmative. As a consequence, we obtain the following result.
\begin{corollary}[Identification $S^{\alpha,p}=L^{\alpha,p}$] \label{res:S=L} Let $\alpha \in (0, 1)$ and $p \in (1, + \infty)$. We have $S^{\alpha,p}(\mathbb{R}^n) = L^{\alpha,p}(\mathbb{R}^n)$. \end{corollary}
According to \cref{res:S=L}, in the sequel we will also use the symbol $S^{\alpha,p}$ to denote the Bessel potential space $L^{\alpha,p}$. In addition, consistently with the asymptotic behavior of the fractional gradient~$\nabla^\alpha$ as $\alpha\to1^-$ established in~\cite{CS19-2}, we will sometimes denote the Sobolev space~$W^{1,p}$ as~$S^{1,p}$ for~$p\in[1,+\infty)$.
Thanks to the identification given by \cref{res:S=L}, we can prove the following result.
\begin{proposition}[$\mathcal S_0$ is dense in $S^{\alpha,p}$] \label{res:S_0_dense_in_S_p_alpha} Let $\alpha\in(0,1)$ and $p\in(1,+\infty)$. The set $\mathcal S_0(\mathbb{R}^n)$ is dense in $S^{\alpha,p}(\mathbb{R}^n)$. \end{proposition}
\begin{proof}
By \cref{res:S=L}, we equivalently have to prove that the set $\mathcal S_0(\mathbb{R}^n)$ is dense in $L^{\alpha,p}(\mathbb{R}^n)$. To this end, let us consider the functional $M\colon (\mathcal S(\mathbb{R}^n),\|\cdot\|_{L^p(\mathbb{R}^n)})\to\mathbb{R}$ defined as \begin{equation*} M(f)=\int_{\mathbb{R}^n} f(x)\,dx, \quad f\in\mathcal S(\mathbb{R}^n). \end{equation*} Since $p>1$, the linear functional~$M$ is not continuous, and thus its kernel $\mathcal S_0(\mathbb{R}^n)$ must be dense in $\mathcal S(\mathbb{R}^n)$ with respect to the $L^p$-norm. Since the Bessel potential \begin{equation*}
(\mathrm{Id}-\Delta)^{-\frac\alpha2}\colon (\mathcal S(\mathbb{R}^n),\|\cdot\|_{L^p(\mathbb{R}^n)})\to (\mathcal S(\mathbb{R}^n),\|\cdot\|_{S^{\alpha,p}(\mathbb{R}^n)}) \end{equation*} is an isomorphism, the conclusion follows. \end{proof}
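The discontinuity of~$M$ used in the proof above can be checked directly by a standard dilation argument (under the assumption $p>1$): fixing any $f\in C^\infty_c(\mathbb{R}^n)$ with $\int_{\mathbb{R}^n} f\,dx=1$, set

```latex
\begin{equation*}
f_\lambda(x) := \lambda^n f(\lambda x), \quad x \in \mathbb{R}^n,\ \lambda > 0,
\end{equation*}
so that
\begin{equation*}
M(f_\lambda) = \int_{\mathbb{R}^n} f \, dx = 1
\quad \text{while} \quad
\|f_\lambda\|_{L^p(\mathbb{R}^n)}
= \lambda^{\frac{n(p-1)}{p}} \|f\|_{L^p(\mathbb{R}^n)} \to 0
\quad \text{as } \lambda \to 0^+.
\end{equation*}
```

Hence $M$ is unbounded on $(\mathcal S(\mathbb{R}^n),\|\cdot\|_{L^p(\mathbb{R}^n)})$, and the kernel of a discontinuous linear functional is dense.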
\subsection{The fractional Hardy--Sobolev space \texorpdfstring{$HS^{\alpha,1}(\mathbb{R}^n)$}{HS^{alpha,1}(R^n)}} \label{subsec:HS_frac_space}
Following the classical approach of~\cite{Str90}, for $\alpha\in[0,1]$ let \begin{align*} HS^{\alpha,1}(\mathbb{R}^n): &=(I-\Delta)^{-\frac\alpha2}(H^1(\mathbb{R}^n))\\ &=\set*{f\in H^1(\mathbb{R}^n) : (I-\Delta)^{\frac\alpha2}f\in H^1(\mathbb{R}^n)} \end{align*} be the (real) \emph{fractional Hardy--Sobolev space} endowed with the norm \begin{equation}\label{eq:def_HS_norm}
\|f\|_{HS^{\alpha,1}(\mathbb{R}^n)}=\|(I-\Delta)^{\frac\alpha2}f\|_{H^1(\mathbb{R}^n)}, \quad f\in HS^{\alpha,1}(\mathbb{R}^n). \end{equation} In particular, $HS^{0,1}(\mathbb{R}^n)=H^1(\mathbb{R}^n)$ coincides with the (real) Hardy space and $HS^{1,1}(\mathbb{R}^n)$ is the standard (real) Hardy--Sobolev space. As remarked in~\cite{Str90}*{p.~130}, $HS^{\alpha,1}(\mathbb{R}^n)$ can be equivalently defined as \begin{align*} H^1(\mathbb{R}^n)\cap I_\alpha(H^1(\mathbb{R}^n)) =\set*{f\in H^1(\mathbb{R}^n) : (-\Delta)^{\frac\alpha2} f\in H^1(\mathbb{R}^n)}. \end{align*} In particular, the function \begin{equation}\label{eq:def_HS_norm_bis} f\mapsto
\|f\|_{H^1(\mathbb{R}^n)}
+\|(-\Delta)^{\frac\alpha2} f\|_{H^1(\mathbb{R}^n)}, \quad f\in HS^{\alpha,1}(\mathbb{R}^n), \end{equation} defines a norm on $HS^{\alpha,1}(\mathbb{R}^n)$ equivalent to the one in~\eqref{eq:def_HS_norm} (and so, unless otherwise stated, we will use both norms~\eqref{eq:def_HS_norm} and~\eqref{eq:def_HS_norm_bis} with no particular distinction). In particular, the operator \begin{equation*} (-\Delta)^{\frac\alpha2}\colon HS^{\alpha,1}(\mathbb{R}^n)\to H^1(\mathbb{R}^n) \end{equation*} is well defined and continuous.
For the reader's convenience we briefly prove the following density result.
\begin{lemma}[Approximation by $\mathcal{S}_{\infty}$ functions in $HS^{\alpha,1}$] \label{res:approx_H_1_alpha} Let $\alpha\in(0,1)$. The set $\mathcal{S}_{\infty}(\mathbb{R}^n)$ is dense in $HS^{\alpha,1}(\mathbb{R}^n)$. \end{lemma}
\begin{proof} Since the set $\mathcal S_{\infty}(\mathbb{R}^n)$ is dense in $H^1(\mathbb{R}^n)$ by~\cite{S93}*{Chapter~III, Section~5.2(a)}, the set $(I-\Delta)^{-\frac\alpha2}(\mathcal S_{\infty}(\mathbb{R}^n))$ is dense in $HS^{\alpha,1}(\mathbb{R}^n)$. Since clearly $(I-\Delta)^{-\frac\alpha2}(\mathcal S_{\infty}(\mathbb{R}^n))\subset\mathcal S_{\infty}(\mathbb{R}^n)$, the set $\mathcal S_{\infty}(\mathbb{R}^n)$ is dense (and embeds continuously) in $HS^{\alpha,1}(\mathbb{R}^n)$. Thus the conclusion follows. \end{proof}
Exploiting \cref{res:approx_H_1_alpha}, for $\alpha\in(0,1)$, the space $HS^{\alpha,1}(\mathbb{R}^n)$ can be equivalently defined as the space \begin{align*} \set*{f\in H^1(\mathbb{R}^n) : \nabla^\alpha f\in H^1(\mathbb{R}^n;\mathbb{R}^n)} \end{align*} endowed with the norm \begin{equation*}
f\mapsto\|f\|_{H^1(\mathbb{R}^n)}
+\|\nabla^\alpha f\|_{H^1(\mathbb{R}^n;\,\mathbb{R}^n)}. \end{equation*} Indeed, if $f\in \mathcal{S}_{\infty}(\mathbb{R}^n)$, then, by exploiting Fourier transform techniques, we can write $\nabla^\alpha f=R\,(-\Delta)^{\frac\alpha2}f$, so that there exists a dimensional constant $c_n>0$ such that \begin{equation}\label{eq:equivalence_norm_H_1_alpha}
c_n^{-1}\|(-\Delta)^{\frac\alpha2}f\|_{H^1(\mathbb{R}^n)}
\le\|\nabla^\alpha f\|_{H^1(\mathbb{R}^n;\,\mathbb{R}^n)}
\le c_n\|(-\Delta)^{\frac\alpha2}f\|_{H^1(\mathbb{R}^n)} \end{equation} for all $f\in \mathcal{S}_{\infty}(\mathbb{R}^n)$, thanks to the $H^1$-continuity property of the Riesz transform and the fact that \begin{equation*} \sum_{j = 1}^n R_j^2 = -I\ \quad \text{on}\ \mathcal{S}(\mathbb{R}^n), \end{equation*} where $R_j$ is the $j$-th component of the Riesz transform $R$. By \cref{res:approx_H_1_alpha}, the validity of~\eqref{eq:equivalence_norm_H_1_alpha} extends to all $f\in HS^{\alpha,1}(\mathbb{R}^n)$ and the conclusion follows. As a consequence, note that $HS^{\alpha,1}(\mathbb{R}^n)\subset S^{\alpha,1}(\mathbb{R}^n)$ for all $\alpha\in(0,1)$ with continuous embedding.
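For completeness, the identity $\nabla^\alpha f=R\,(-\Delta)^{\frac\alpha2}f$ for $f\in\mathcal{S}_\infty(\mathbb{R}^n)$ can be sketched on the Fourier side; we adopt the normalization $\hat f(\xi)=\int_{\mathbb{R}^n}f(x)\,e^{-2\pi i x\cdot\xi}\,dx$ and the sign convention for~$R$ fixed in~\eqref{eq:def_Riesz_transform}, so the symbols below should be adjusted accordingly for other conventions:

```latex
\begin{equation*}
\widehat{\nabla^\alpha f}(\xi)
= 2\pi i\,\xi\,|2\pi\xi|^{-(1-\alpha)}\,\hat f(\xi)
= \frac{i\,\xi}{|\xi|}\,|2\pi\xi|^{\alpha}\,\hat f(\xi)
= \widehat{R\,(-\Delta)^{\frac\alpha2} f}\,(\xi),
\quad \xi \in \mathbb{R}^n \setminus \{0\},
\end{equation*}
```

since $\nabla^\alpha=\nabla I_{1-\alpha}$ has symbol $(2\pi i\,\xi)\,|2\pi\xi|^{-(1-\alpha)}$, $R$ has symbol $\tfrac{i\,\xi}{|\xi|}$ and $(-\Delta)^{\frac\alpha2}$ has symbol $|2\pi\xi|^\alpha$.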
We note that the well-posedness and the equivalence of the definitions of $HS^{\alpha,1}(\mathbb{R}^n)$ given above and the stated results hold for any $\alpha\ge0$ thanks to the composition properties of the operators involved. We leave the standard verifications to the interested reader.
\section{The \texorpdfstring{$BV^0(\mathbb{R}^n)$}{BV^0(R^n)} space} \label{sec:BV_0}
\subsection{Definition of \texorpdfstring{$BV^0(\mathbb{R}^n)$}{BV^0(R^n)} and Structure Theorem}
Naturally extending the definitions given in~\eqref{eq:def_frac_operators_Riesz_potential} to the case $\alpha=0$, for $f\in\Lip_c(\mathbb{R}^n)$ and $\varphi\in\Lip_c(\mathbb{R}^n;\mathbb{R}^n)$ we define \begin{equation*} \nabla^0 f:=I_1\nabla f \quad \text{and} \quad \mathrm{div}^0\varphi :=I_1\mathrm{div}\varphi. \end{equation*} It is immediate to check that the integration-by-parts formula \begin{equation}\label{eq:int_by_parts_smooth} \int_{\mathbb{R}^n}f\,\mathrm{div}^0\varphi\,dx =-\int_{\mathbb{R}^n}\varphi\cdot \nabla^0f\,dx \end{equation} holds for all $f\in\Lip_c(\mathbb{R}^n)$ and $\varphi\in\Lip_c(\mathbb{R}^n;\mathbb{R}^n)$. Hence, in analogy with~\cite{CS19}*{Definition~3.1}, we are led to the following definition (which is well posed, since $\mathrm{div}^0 \varphi \in L^{\infty}(\mathbb{R}^n)$ for $\varphi\in\Lip_c(\mathbb{R}^n;\mathbb{R}^n)$).
\begin{definition}[The space $BV^0(\mathbb{R}^n)$] A function $f\in L^1(\mathbb{R}^n)$ belongs to the space $BV^0(\mathbb{R}^n)$ if \begin{equation*}
\sup\set*{\int_{\mathbb{R}^n}f\,\mathrm{div}^0\varphi\,dx : \varphi\in C^\infty_c(\mathbb{R}^n;\mathbb{R}^n),\ \|\varphi\|_{L^\infty(\mathbb{R}^n;\,\mathbb{R}^n)}\le1}<+\infty. \end{equation*} \end{definition}
The proof of the following result is very similar to the one of~\cite{CS19}*{Theorem~3.2} and is omitted.
\begin{theorem}[Structure Theorem for $BV^0$ functions] Let $f\in L^1(\mathbb{R}^n)$. Then, $f\in BV^0(\mathbb{R}^n)$ if and only if there exists a finite vector-valued Radon measure $D^0 f\in \!\mathscr M(\mathbb{R}^n;\mathbb{R}^n)$ such that \begin{equation}\label{eq:int_by_parts_BV_0} \int_{\mathbb{R}^n}f\,\mathrm{div}^0\varphi\,dx =-\int_{\mathbb{R}^n}\varphi\cdot\,dD^0f \end{equation} for all $\varphi\in C^\infty_c(\mathbb{R}^n;\mathbb{R}^n)$. In addition, for all open sets $U\subset\mathbb{R}^n$ it holds \begin{equation}\label{eq:fractional_variation_0}
|D^0 f|(U)=\sup\set*{\int_{\mathbb{R}^n}f\,\mathrm{div}^0\varphi\,dx : \varphi\in C^\infty_c(U;\mathbb{R}^n),\ \|\varphi\|_{L^\infty(U;\,\mathbb{R}^n)}\le1}. \end{equation} \end{theorem}
\subsection{The identification \texorpdfstring{$BV^0(\mathbb{R}^n)=H^1(\mathbb{R}^n)$}{BV^0(R^n)=H^1(R^n)}}
As already announced in~\cite{CS19-2}, the space $BV^0(\mathbb{R}^n)$ actually coincides with the Hardy space $H^1(\mathbb{R}^n)$. More precisely, we have the following result.
\begin{theorem}[The identification $BV^0=H^1$] \label{res:H_1=BV_0} We have $BV^0(\mathbb{R}^n)=H^1(\mathbb{R}^n)$, with \begin{equation*} D^0f=Rf\,\Leb{n}\ \text{in}\ \mathscr M(\mathbb{R}^n;\mathbb{R}^n) \end{equation*} for every $f\in BV^0(\mathbb{R}^n)$. \end{theorem}
\begin{proof} We prove the two inclusions separately.
\textit{Proof of $H^1(\mathbb{R}^n)\subset BV^0(\mathbb{R}^n)$}. Let $f\in H^1(\mathbb{R}^n)$ and assume $f\in\Lip_c(\mathbb{R}^n)$. By~\eqref{eq:int_by_parts_smooth}, we immediately get that $D^0 f=Rf\,\Leb{n}$ in~$\mathscr M(\mathbb{R}^n;\mathbb{R}^n)$ with $Rf=\nabla^0 f$ in $L^1(\mathbb{R}^n;\mathbb{R}^n)$, so that $f\in BV^0(\mathbb{R}^n)$. Now let $f\in H^1(\mathbb{R}^n)$. By~\cite{S93}*{Chapter~III, Section~5.2(b)}, we can find $(f_k)_{k\in\mathbb{N}}\subset H^1(\mathbb{R}^n)\cap C^\infty_c(\mathbb{R}^n)$ such that $f_k\to f$ in $H^1(\mathbb{R}^n)$ as $k\to+\infty$. Hence, given $\varphi\in C^\infty_c(\mathbb{R}^n;\mathbb{R}^n)$, we have \begin{equation*} \int_{\mathbb{R}^n}f_k\,\mathrm{div}^0\varphi\,dx =-\int_{\mathbb{R}^n}\varphi\cdot Rf_k\,dx \end{equation*} for all $k\in\mathbb{N}$. Passing to the limit as $k\to+\infty$, we get \begin{equation*} \int_{\mathbb{R}^n}f\,\mathrm{div}^0\varphi\,dx =-\int_{\mathbb{R}^n}\varphi\cdot Rf\,dx \end{equation*} so that $f\in BV^0(\mathbb{R}^n)$ with $D^0f=Rf\,\Leb{n}$ in~$\mathscr M(\mathbb{R}^n;\mathbb{R}^n)$ according to~\eqref{eq:fractional_variation_0}.
\textit{Proof of $BV^0(\mathbb{R}^n)\subset H^1(\mathbb{R}^n)$}. Let $f\in BV^0(\mathbb{R}^n)$. Since $f\in L^1(\mathbb{R}^n)$, $Rf$ is well defined as a (vector-valued) distribution, see~\cite{S93}*{Chapter~III, Section~4.3}. Thanks to~\eqref{eq:int_by_parts_BV_0}, we also have that $\scalar*{Rf,\varphi}=\scalar*{D^0f,\varphi}$ for all $\varphi\in C^\infty_c(\mathbb{R}^n;\mathbb{R}^n)$, so that $Rf=D^0f$ in the sense of distributions. Now let $(\varrho_\varepsilon)_{\varepsilon>0}\subset C^\infty_c(\mathbb{R}^n)$ be a family of standard mollifiers (see e.g.~\cite{CS19}*{Section~3.2}). We can thus estimate \begin{equation*}
\|Rf*\varrho_\varepsilon\|_{L^1(\mathbb{R}^n;\,\mathbb{R}^n)}
=\|D^0f*\varrho_\varepsilon\|_{L^1(\mathbb{R}^n;\,\mathbb{R}^n)}
\le|D^0 f|(\mathbb{R}^n) \end{equation*} for all $\varepsilon>0$, so that $f\in H^1(\mathbb{R}^n)$ by \cite{S93}*{Chapter~III, Section~4.3, Proposition~3}, with $D^0f=Rf\Leb{n}$ in~$\mathscr M(\mathbb{R}^n;\mathbb{R}^n)$. \end{proof}
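The last estimate in the proof relies on the standard fact that mollifications of a finite vector-valued Radon measure $\mu\in\mathscr M(\mathbb{R}^n;\mathbb{R}^n)$ are uniformly bounded in~$L^1$; for the reader's convenience, a sketch via Fubini's Theorem:

```latex
\begin{equation*}
\|\mu * \varrho_\varepsilon\|_{L^1(\mathbb{R}^n;\,\mathbb{R}^n)}
= \int_{\mathbb{R}^n} \bigg| \int_{\mathbb{R}^n}
  \varrho_\varepsilon(x - y)\,d\mu(y) \bigg|\,dx
\le \int_{\mathbb{R}^n} \int_{\mathbb{R}^n}
  \varrho_\varepsilon(x - y)\,dx\,d|\mu|(y)
= |\mu|(\mathbb{R}^n),
\end{equation*}
```

since $\int_{\mathbb{R}^n}\varrho_\varepsilon\,dx=1$ for all $\varepsilon>0$.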
\subsection{Relation between \texorpdfstring{$W^{\alpha,1}(\mathbb{R}^n)$}{W^{alpha,1}(R^n)} and \texorpdfstring{$H^1(\mathbb{R}^n)$}{H^1(R^n)}}
Thanks to the identification established in \cref{res:H_1=BV_0}, we can prove the following result. See also~\cite{CS19}*{Lemma~3.28} and~\cite{CS19-2}*{Lemma~3.11}.
\begin{proposition}\label{res:relation_W_alpha_1_and_H_1} Let $\alpha\in(0,1)$. The following hold. \begin{enumerate}[(i)]
\item\label{item:relation_W_alpha_1_and_H_1_1} If $f\in H^1(\mathbb{R}^n)$, then $u:=I_{\alpha} f\in BV^{\alpha,\frac{n}{n - \alpha}}(\mathbb{R}^n)$ with $D^\alpha u=R f \Leb{n}$ in~$\mathscr{M}(\mathbb{R}^n;\mathbb{R}^n)$.
\item\label{item:relation_W_alpha_1_and_H_1_2} If $u\in W^{\alpha,1}(\mathbb{R}^n)$, then $f:=(-\Delta)^{\alpha/2} u\in H^1(\mathbb{R}^n)$ with \begin{equation*}
\|f\|_{L^1(\mathbb{R}^n)}\le\mu_{n,-\alpha}[u]_{W^{\alpha,1}(\mathbb{R}^n)} \quad\text{and}\quad Rf=\nabla^\alpha u\ \text{a.e.\ in $\mathbb{R}^n$}. \end{equation*} \end{enumerate} \end{proposition}
\begin{proof} We prove the two statements separately.
\textit{Proof of~\eqref{item:relation_W_alpha_1_and_H_1_1}}. Let $f\in H^1(\mathbb{R}^n)$. By the Stein--Weiss inequality (see~\cite{SSS17}*{Theorem~2} for instance), we know that $u:=I_{\alpha} f\in L^{\frac{n}{n - \alpha}}(\mathbb{R}^n)$. To prove that $|D^{\alpha} u|(\mathbb{R}^n)<+\infty$, we exploit \cref{res:H_1=BV_0} and argue as in the proof of~\cite{CS19}*{Lemma~3.28}. Indeed, for all $\varphi\in C^\infty_c(\mathbb{R}^n;\mathbb{R}^n)$, we can write \begin{align*} \int_{\mathbb{R}^n}f\,\mathrm{div}^0\varphi\,dx =\int_{\mathbb{R}^n}f\,I_{\alpha}\mathrm{div}^{\alpha}\varphi\,dx =\int_{\mathbb{R}^n}u\,\mathrm{div}^{\alpha}\varphi\,dx \end{align*}
by Fubini's Theorem, since $f\in L^1(\mathbb{R}^n)$ and $I_\alpha|\mathrm{div}^{\alpha}\varphi|\in L^\infty(\mathbb{R}^n)$, as \begin{align*}
I_{\alpha}|\mathrm{div}^{\alpha}\varphi|
=I_{\alpha}|I_{1 - \alpha}\mathrm{div}\varphi|
\le I_{\alpha} I_{1 - \alpha}|\mathrm{div}\varphi|
=I_1|\mathrm{div}\varphi|\in L^\infty(\mathbb{R}^n) \end{align*} thanks to the semigroup property~\eqref{eq:Riesz_potential_semigroup} of the Riesz potentials. This proves that $D^{\alpha}u=D^0f = R f \Leb{n}$ in $\mathscr{M}(\mathbb{R}^n;\mathbb{R}^n)$, again thanks to \cref{res:H_1=BV_0}.
\textit{Proof of~\eqref{item:relation_W_alpha_1_and_H_1_2}}. Let $u\in W^{\alpha,1}(\mathbb{R}^n)$. Then $f:=(-\Delta)^{\alpha/2} u$ satisfies \begin{align*}
\|f\|_{L^1(\mathbb{R}^n)}
=\mu_{n,-\alpha}\int_{\mathbb{R}^n}\bigg|\int_{\mathbb{R}^n}\frac{u(y)-u(x)}{|y-x|^{n+\alpha}}\,dy\,\bigg|\,dx \le\mu_{n,-\alpha}[u]_{W^{\alpha,1}(\mathbb{R}^n)}. \end{align*} To prove that $f\in H^1(\mathbb{R}^n)$, we exploit \cref{res:H_1=BV_0} again. For all $\varphi\in C^\infty_c(\mathbb{R}^n;\mathbb{R}^n)$, we can write \begin{equation*} \int_{\mathbb{R}^n}u\,\mathrm{div}^\alpha\varphi\,dx =\int_{\mathbb{R}^n}u\,(-\Delta)^\frac{\alpha}{2}\mathrm{div}^0\varphi\,dx =\int_{\mathbb{R}^n}f\,\mathrm{div}^0\varphi\,dx \end{equation*} by Fubini's Theorem, since $u\in L^1(\mathbb{R}^n)$ and $\mathrm{div}^0\varphi\in\Lip_b(\mathbb{R}^n;\mathbb{R}^n)$, proving that $D^0 f=D^\alpha u$ in~$\mathscr{M}(\mathbb{R}^n;\mathbb{R}^n)$. Since $D^\alpha u=\nabla^\alpha u\,\Leb{n}$ with $\nabla^{\alpha}u \in L^{1}(\mathbb{R}^{n}; \mathbb{R}^{n})$ by~\cite{CS19}*{Theorem~3.18} and $D^0f=Rf\,\Leb{n}$ by \cref{res:H_1=BV_0}, we see that $f=(-\Delta)^{\alpha/2} u\in H^1(\mathbb{R}^n)$ and $Rf = \nabla^{\alpha} u$ $\Leb{n}$-a.e., concluding the proof. \end{proof}
We end this section with the following consequence of \cref{res:relation_W_alpha_1_and_H_1}.
\begin{corollary} \label{res:scatole} The following statements hold. \begin{enumerate}[(i)]
\item\label{item:scatola_HS} $H^1(\mathbb{R}^n) \cap \bigcup_{\alpha\in(0,1)} W^{\alpha,1}(\mathbb{R}^n) = \bigcup_{\alpha\in(0,1)} HS^{\alpha,1}(\mathbb{R}^n). $
\item\label{item:scatola_S} $ \bigcup_{\alpha\in(0,1)} S^{\alpha,p}(\mathbb{R}^n) = \bigcup_{\alpha\in(0,1)} W^{\alpha,p}(\mathbb{R}^n) $ for all $p\in[1,+\infty)$.
\end{enumerate} \end{corollary}
\begin{proof} We prove the two statements separately.
\textit{Proof of~\eqref{item:scatola_HS}}. On the one hand, we have $H^1(\mathbb{R}^n)\cap W^{\alpha,1}(\mathbb{R}^n)\subset HS^{\alpha,1}(\mathbb{R}^n)$ for all $\alpha\in(0,1)$ by \cref{res:relation_W_alpha_1_and_H_1}\eqref{item:relation_W_alpha_1_and_H_1_2} by virtue of the discussion made in \cref{subsec:HS_frac_space}. On the other hand, $HS^{\alpha,1}(\mathbb{R}^n)\subset H^1(\mathbb{R}^n)\cap S^{\alpha,1}(\mathbb{R}^n)$ for all $\alpha\in(0,1)$ as remarked at the end of \cref{subsec:HS_frac_space}. Since we already know that $S^{\alpha,1}(\mathbb{R}^n)\subset W^{\alpha',1}(\mathbb{R}^n)$ for all $0<\alpha'<\alpha<1$ by~\cite{CS19}*{Theorems~3.25 and~3.32}, this proves~\eqref{item:scatola_HS}.
\textit{Proof of~\eqref{item:scatola_S}}. Since $L^{\alpha+\varepsilon,p}(\mathbb{R}^n)\subset W^{\alpha,p}(\mathbb{R}^n)\subset L^{\alpha-\varepsilon,p}(\mathbb{R}^n)$ for all $\alpha\in(0,1)$, $p\in(1,+\infty)$ and $0<\varepsilon<\min\{\alpha,1-\alpha\}$ by~\cite{A75}*{Theorem~7.63(g)}, thanks to the identification established in \cref{res:S=L} we immediately deduce the validity of~\eqref{item:scatola_S} for all $p\in(1,+\infty)$. If $p=1$, then~\eqref{item:scatola_S} is a consequence of~\cite{CS19}*{Proposition~3.24(i) and Theorems~3.25 and~3.32}. \end{proof}
\section{Interpolation inequalities} \label{sec:interpolation_inequalities}
\subsection{The case \texorpdfstring{$p=1$}{p=1} via the Calder\'on--Zygmund Theorem}\label{subsec:CZ_p=1}
Here and in the rest of the paper, let $(\eta_R)_{R>0}\subset C^\infty_c(\mathbb{R}^n)$ be a family of cut-off functions defined as \begin{equation}\label{eq:def_eta_function}
\eta_R(x)=\eta\left(\tfrac{|x|}{R}\right), \quad \text{for all $x\in\mathbb{R}^n$ and $R>0$}, \end{equation} where $\eta\in C^\infty_c(\mathbb{R})$ satisfies \begin{equation}\label{eq:def_cut_off} 0\le\eta\le1, \quad \eta=1\ \text{on}\ \left[-\tfrac12,\tfrac12\right], \quad \supp\eta\subset[-1,1], \quad \Lip(\eta)\le3. \end{equation}
For $\alpha\in(0,1)$ and $R>0$, let $T_{\alpha,R}\colon\mathcal{S}(\mathbb{R}^n)\to\mathcal{S}'(\mathbb{R}^n;\mathbb{R}^n)$ be the linear operator defined by \begin{equation}\label{eq:def_T_alpha_R}
T_{\alpha,R}f(x):=\int_{\mathbb{R}^n}f(y+x)\,\frac{y\,(1-\eta_R(y))}{|y|^{n+\alpha+1}}\,dy, \quad x\in\mathbb{R}^n, \end{equation} for all $f\in\mathcal{S}(\mathbb{R}^n)$. In the following result, we prove that~$T_{\alpha,R}$ is a Calder\'on--Zygmund operator mapping $H^1(\mathbb{R}^n)$ to~$L^1(\mathbb{R}^n;\mathbb{R}^n)$.
\begin{lemma}[Calder\'on--Zygmund estimate for~$T_{\alpha,R}$]\label{res:CZ} There is a dimensional constant $\tau_n>0$ such that, for any given $\alpha\in(0,1)$ and $R>0$, the operator in~\eqref{eq:def_T_alpha_R} uniquely extends to a bounded linear operator $T_{\alpha,R}\colon H^1(\mathbb{R}^n)\to L^1(\mathbb{R}^n;\mathbb{R}^n)$ with \begin{equation*}
\|T_{\alpha,R}f\|_{L^1(\mathbb{R}^n;\mathbb{R}^n)}
\le\tau_n R^{-\alpha}\|f\|_{H^1(\mathbb{R}^n)} \end{equation*} for all $f\in H^1(\mathbb{R}^n)$. \end{lemma}
\begin{proof} We apply~\cite{G14-M}*{Theorem~2.4.1} to the kernel \begin{equation*}
K_{\alpha,R}(x):=\frac{x\,(1-\eta_R(x))}{|x|^{n+\alpha+1}}, \quad x\in\mathbb{R}^n,\ x\ne0. \end{equation*} First of all, we have \begin{align*}
|K_{\alpha,R}(x)|\le\frac{1-\eta_R(x)}{|x|^{n+\alpha}}\le\frac{2^\alpha}{R^\alpha}\,\frac{1}{|x|^n}, \quad x\in\mathbb{R}^n,\ x\ne0, \end{align*} so that we can choose $A_1=2n\omega_nR^{-\alpha}$ in the \emph{size estimate}~(2.4.1) in~\cite{G14-M}. We also have \begin{equation*}
|\nabla K_{\alpha,R}(x)|\le c_n\bigg(\frac{1}{R}\frac{\left|\eta'\big(\tfrac{|x|}{R}\big)\right|}{|x|^{n+\alpha}}
+\frac{1-\eta_R(x)}{|x|^{n+\alpha+1}}\bigg)
\le 4c_n\,\frac{2^\alpha}{R^\alpha}\,\frac{1}{|x|^{n+1}}, \quad x\in\mathbb{R}^n,\ x\ne0, \end{equation*} where $c_n>0$ is some dimensional constant, so that we can choose $A_2=c_n' R^{-\alpha}$ in the \emph{smoothness condition}~(2.4.2) in~\cite{G14-M}, where $c_n'>c_n$ is another dimensional constant. Finally, since clearly \begin{equation*}
\int_{\{m<|x|<M\}} K_{\alpha,R}(x)\,dx=0 \end{equation*} for all $m<M$, we can choose $A_3=0$ in the \emph{cancellation condition}~(2.4.3) in~\cite{G14-M}. Since $A_1+A_2+A_3=c_n''R^{-\alpha}$ for some dimensional constant $c_n''\ge c_n'$, the conclusion follows. \end{proof}
With \cref{res:CZ} at our disposal, we can prove the following result.
\begin{theorem}[$H^1-BV^\alpha$ interpolation inequality] \label{res:interpolation_H1_BV_alpha} Let $\alpha\in(0,1]$. There exists a constant $c_{n,\alpha}>0$ such that \begin{equation}\label{eq:interpolation_H1_BV_alpha}
[f]_{BV^\beta(\mathbb{R}^n)}\le c_{n,\alpha}\,\|f\|_{H^1(\mathbb{R}^n)}^{(\alpha-\beta)/\alpha}\,[f]_{BV^\alpha(\mathbb{R}^n)}^{\beta/\alpha} \end{equation} for all $\beta\in[0,\alpha)$ and all $f\in H^1(\mathbb{R}^n)\cap BV^\alpha(\mathbb{R}^n)$. \end{theorem}
\begin{proof} Let $\alpha\in(0,1]$ be fixed. Thanks to \cref{res:H_1=BV_0}, the case $\beta=0$ is trivial, so we assume~$\beta\in(0,\alpha)$.
We can also assume that $[f]_{BV^\alpha(\mathbb{R}^n)}>0$ without loss of generality, since otherwise $f=0$ $\Leb{n}$-a.e.\ by \cite{CS19}*{Proposition~3.14} (note that the validity of \cite{CS19}*{Proposition~3.14} for all $f\in BV^\alpha(\mathbb{R}^n)$ follows by a simple approximation argument, thanks to~\cite{CS19}*{Theorem~3.8}). Hence, in particular, we can assume $\|f\|_{L^1(\mathbb{R}^n)}>0$. We divide the proof into three steps.
\textit{Step~1: stability as $\beta\to0^+$}. Let $f\in H^1(\mathbb{R}^n)\cap BV^\alpha(\mathbb{R}^n)$ and assume $f\in\Lip_b(\mathbb{R}^n)$. By~\cite{CS19-2}*{Lemma~2.3}, we can write \begin{equation}\label{eq:interpolation_H1_BV_alpha_split_beta} \begin{split}
&|\nabla^\beta f(x)|
=\mu_{n,\beta}\,\bigg|\int_{\mathbb{R}^n}\frac{y (f(y+x)-f(x))}{|y|^{n+\beta+1}}\,dy\,\bigg|\\
&\quad=\mu_{n,\beta}\,\bigg|\int_{\mathbb{R}^n}\eta_R(y)\,\frac{y (f(y+x)-f(x))}{|y|^{n+\beta+1}}\,dy
+\int_{\mathbb{R}^n}(1-\eta_R(y))\,\frac{y (f(y+x)-f(x))}{|y|^{n+\beta+1}}\,dy\,\bigg| \end{split} \end{equation} for all $x\in\mathbb{R}^n$ and all $R>0$. On the one hand, for $\alpha<1$, by~\cite{CS19}*{Proposition~3.14} we can estimate \begin{equation}\label{eq:interpolation_H1_BV_alpha_translation_beta} \begin{split}
\int_{\mathbb{R}^n}\bigg|\int_{\mathbb{R}^n}\eta_R(y)\,\frac{y (f(y+x)-f(x))}{|y|^{n+\beta+1}}\,dy\,\bigg|\,dx &\le
\int_{B_R}\frac{1}{|y|^{n+\beta}}\int_{\mathbb{R}^n}|f(y+x)-f(x)|\,dx\,dy\\ &\le
\gamma_{n,\alpha} \,|D^\alpha f|(\mathbb{R}^n)\int_{B_R}\frac{dy}{|y|^{n+\beta-\alpha}}\\ &=
n \omega_n \gamma_{n,\alpha} \,\frac{R^{\alpha-\beta}}{\alpha-\beta}\,|D^\alpha f|(\mathbb{R}^n) \end{split} \end{equation} for all $R>0$, where $\gamma_{n,\alpha}>0$ is a constant depending only on~$n$ and $\alpha$. If $\alpha=1$ instead, we simply have \begin{equation*}
\int_{\mathbb{R}^n}\bigg|\int_{\mathbb{R}^n}\eta_R(y)\,\frac{y (f(y+x)-f(x))}{|y|^{n+\beta+1}}\,dy\,\bigg|\,dx \le\,
n \omega_n \frac{R^{1-\beta}}{1-\beta}\,|D f|(\mathbb{R}^n) \end{equation*} for all $R>0$ (by \cite{AFP00}*{Remark 3.25} with $\Omega = \mathbb{R}^{n}$, for instance). On the other hand, by \cref{res:CZ} we have \begin{equation}\label{eq:interpolation_H1_BV_alpha_CZ_beta} \begin{split}
\int_{\mathbb{R}^n}\bigg|\int_{\mathbb{R}^n}(1-\eta_R(y))\,\frac{y (f(y+x)-f(x))}{|y|^{n+\beta+1}}\,dy\,\bigg|\,dx
&=\int_{\mathbb{R}^n}\bigg|\int_{\mathbb{R}^n}(1-\eta_R(y))\,\frac{y f(y+x)}{|y|^{n+\beta+1}}\,dy\,\bigg|\,dx\\
&\le\tau_n R^{-\beta}\|f\|_{H^1(\mathbb{R}^n)} \end{split} \end{equation} for all $R>0$, where $\tau_n>0$ is the constant of \cref{res:CZ}. Combining the above estimates, we get \begin{align*}
|D^\beta f|(\mathbb{R}^n) &\le\mu_{n,\beta}\,\bigg(n \omega_n\gamma_{n,\alpha}\,\frac{R^{\alpha-\beta}}{\alpha-\beta}\,[f]_{BV^\alpha(\mathbb{R}^n)}
+\tau_n R^{-\beta}\,\|f\|_{H^1(\mathbb{R}^n)}\bigg)\\ &\le\mu_{n,\beta}\max\{\tau_n,n \omega_n\gamma_{n,\alpha}\}\,\bigg(\frac{R^{\alpha-\beta}}{\alpha-\beta}\,[f]_{BV^\alpha(\mathbb{R}^n)}
+R^{-\beta}\,\|f\|_{H^1(\mathbb{R}^n)}\bigg) \end{align*} for all $R>0$, where we have set $\gamma_{n,1}:=1$ by convention.
With the choice $R=\|f\|_{H^1(\mathbb{R}^n)}^{1/\alpha}\,[f]_{BV^\alpha(\mathbb{R}^n)}^{-1/\alpha}$, we get \begin{equation}\label{eq:step1_BV_alpha_beta_interpolation}
|D^\beta f|(\mathbb{R}^n) \le\frac{2\mu_{n,\beta}\max\{\tau_n,n \omega_n\gamma_{n,\alpha}\}}{\alpha-\beta}
\,\|f\|_{H^1(\mathbb{R}^n)}^{(\alpha-\beta)/\alpha} \,[f]_{BV^\alpha(\mathbb{R}^n)}^{\beta/\alpha} \end{equation} for all $f\in H^1(\mathbb{R}^n)\cap BV^\alpha(\mathbb{R}^n)$ such that $f\in\Lip_b(\mathbb{R}^n)$. Using a standard approximation argument via convolution, thanks to~\cite{CS19}*{Proposition~3.3} inequality~\eqref{eq:step1_BV_alpha_beta_interpolation} follows for all $f\in H^1(\mathbb{R}^n)\cap BV^\alpha(\mathbb{R}^n)$.
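The choice of~$R$ made above exactly balances the two terms. Indeed, abbreviating $A:=[f]_{BV^\alpha(\mathbb{R}^n)}$ and $B:=\|f\|_{H^1(\mathbb{R}^n)}$, with $R=(B/A)^{1/\alpha}$ one computes

```latex
\begin{equation*}
R^{\alpha-\beta} A = B^{\frac{\alpha-\beta}{\alpha}} A^{\frac{\beta}{\alpha}}
\quad \text{and} \quad
R^{-\beta} B
= B^{1-\frac{\beta}{\alpha}} A^{\frac{\beta}{\alpha}}
= B^{\frac{\alpha-\beta}{\alpha}} A^{\frac{\beta}{\alpha}},
\end{equation*}
so that
\begin{equation*}
\frac{R^{\alpha-\beta}}{\alpha-\beta}\,A + R^{-\beta} B
= \Big( \tfrac{1}{\alpha-\beta} + 1 \Big)
  B^{\frac{\alpha-\beta}{\alpha}} A^{\frac{\beta}{\alpha}}
\le \frac{2}{\alpha-\beta}\,
  B^{\frac{\alpha-\beta}{\alpha}} A^{\frac{\beta}{\alpha}},
\end{equation*}
```

where the last inequality uses $\alpha-\beta\le1$.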
\textit{Step~2: stability as $\beta\to\alpha^-$}. If $\alpha<1$, then by~\cite{CS19-2}*{Proposition~3.12} we know that \begin{equation}\label{eq:step2_BV_alpha_beta_interpolation_before}
|D^\beta f|(\mathbb{R}^n) \le d_{n,\alpha}\,\frac{\mu_{n,1+\beta-\alpha}}{n+\beta-\alpha}
\,\bigg(\frac{R^{\alpha-\beta}}{\alpha-\beta}\,[f]_{BV^\alpha(\mathbb{R}^n)}+\frac{R^{-\beta}}{\beta}\|f\|_{L^1(\mathbb{R}^n)}\bigg) \end{equation} for all $f\in BV^\alpha(\mathbb{R}^n)$ and all $R>0$, where \begin{equation*}
d_{n, \alpha} = \max\left \{ n \omega_n, (n + \alpha) \|\nabla^{\alpha} \chi_{B_1}\|_{L^1(\mathbb{R}^n; \mathbb{R}^n)} \right \}, \end{equation*} so that \cite{CS19-2}*{Theorem~4.9} implies \begin{equation*} d_{n,1}:=\lim_{\alpha\to1^-}d_{n,\alpha} = (n + 1)\, n \omega_n <+\infty. \end{equation*}
If $\alpha=1$, then by~\cite{CS19-2}*{Proposition~3.2(i)} inequality~\eqref{eq:step2_BV_alpha_beta_interpolation_before} holds with $\alpha=1$ for all $f\in BV(\mathbb{R}^n)$. Assuming without loss of generality that $f$ is not identically zero (so that $[f]_{BV^\alpha(\mathbb{R}^n)}>0$), choosing $R=\|f\|_{L^1(\mathbb{R}^n)}^{1/\alpha}\,[f]_{BV^\alpha(\mathbb{R}^n)}^{-1/\alpha}$ and using the inequality $\|f\|_{L^1(\mathbb{R}^n)}\le \|f\|_{H^1(\mathbb{R}^n)}$, we can estimate \begin{equation}\label{eq:step2_BV_alpha_beta_interpolation}
|D^\beta f|(\mathbb{R}^n) \le\frac{d_{n,\alpha}}{\beta(\alpha-\beta)}\,\frac{\mu_{n,1+\beta-\alpha}}{n+\beta-\alpha}
\,\|f\|_{H^1(\mathbb{R}^n)}^{(\alpha-\beta)/\alpha} \,[f]_{BV^\alpha(\mathbb{R}^n)}^{\beta/\alpha} \end{equation} for all $f\in H^1(\mathbb{R}^n)\cap BV^\alpha(\mathbb{R}^n)$.
\textit{Step~3: existence of $c_{n,\alpha}$}. Combining~\eqref{eq:step1_BV_alpha_beta_interpolation} and~\eqref{eq:step2_BV_alpha_beta_interpolation}, we get \begin{equation*}
|D^\beta f|(\mathbb{R}^n) \le\varphi_n(\alpha,\beta)
\,\|f\|_{H^1(\mathbb{R}^n)}^{(\alpha-\beta)/\alpha}\,[f]_{BV^\alpha(\mathbb{R}^n)}^{\beta/\alpha} \end{equation*} for all $f\in H^1(\mathbb{R}^n)\cap BV^\alpha(\mathbb{R}^n)$, where \begin{equation*} \varphi_n(\alpha,\beta):=\min\set*{ \frac{2\mu_{n,\beta}\max\{\tau_n,n \omega_n\gamma_{n,\alpha}\}}{\alpha-\beta}, \frac{d_{n,\alpha}}{\beta(\alpha-\beta)}\,\frac{\mu_{n,1+\beta-\alpha}}{n+\beta-\alpha}}, \quad 0<\beta<\alpha\le1. \end{equation*} We observe that, for all fixed $\alpha \in (0, 1]$, $\varphi_n(\alpha, \beta)$ is continuous in $\beta \in (0, \alpha)$. Thanks to \cite{CS19-2}*{Lemma~4.1}, we notice that for all $\alpha \in (0, 1)$ we have \begin{equation*} \lim_{\beta\to\alpha^-}\varphi_n(\alpha,\beta) =\frac{d_{n,\alpha}}{\alpha n} \,\lim_{\beta\to\alpha^-}\frac{\mu_{n,1+\beta-\alpha}}{\alpha-\beta} =\frac{d_{n,\alpha}}{\alpha n\omega_n}, \end{equation*} while in the case $\alpha = 1$ we obtain \begin{align*} \lim_{\beta\to 1^-}\varphi_n(1,\beta) & = \min\set*{ 2\max\{\tau_n, n \omega_n\} \lim_{\beta\to 1^-} \frac{\mu_{n,\beta}}{1-\beta},\ d_{n,1}\lim_{\beta\to 1^-}\frac{\mu_{n,\beta}}{\beta(1-\beta)(n + \beta - 1)}} \\ & = \frac{1}{\omega_n} \min\set*{2 \max\{\tau_n, n \omega_n\}, \frac{d_{n,1}}{n}}. \end{align*} In addition, for all $\alpha \in (0, 1]$, we get \begin{equation*} \lim_{\beta\to0^+}\varphi_n(\alpha,\beta) =\frac{2\mu_{n,0}\max\{\tau_n,n \omega_n\gamma_{n,\alpha}\}}{\alpha}. \end{equation*} Thus, for all $\alpha \in (0, 1]$ we have $\varphi_n(\alpha, \cdot) \in C([0, \alpha])$, and the conclusion follows by setting $c_{n, \alpha} := \max_{\beta \in [0, \alpha]} \varphi_n(\alpha, \beta)$. \end{proof}
\begin{remark}[$H^1-W^{\alpha,1}$ interpolation inequality] \label{rem:interpolation_W_alpha_1_H_1} Thanks to~\cite{CS19}*{Theorem~3.18}, by \cref{res:interpolation_H1_BV_alpha} one can replace the $BV^\alpha$-seminorm in the right-hand side of~\eqref{eq:interpolation_H1_BV_alpha} with the $W^{\alpha,1}$-seminorm up to multiplying the constant~$c_{n,\alpha}$ by~$\mu_{n,\alpha}$. However, one can prove a slightly finer estimate by essentially following the proof of \cref{res:interpolation_H1_BV_alpha}. Indeed, for any given $f\in H^1(\mathbb{R}^n)\cap W^{\alpha,1}(\mathbb{R}^n)$ sufficiently regular, one writes $\nabla^\beta f$ as in~\eqref{eq:interpolation_H1_BV_alpha_split_beta} and estimates the second term as in~\eqref{eq:interpolation_H1_BV_alpha_CZ_beta}. To estimate the first term, instead of following~\eqref{eq:interpolation_H1_BV_alpha_translation_beta}, one simply notes that \begin{align*}
\int_{\mathbb{R}^n}\bigg|\int_{\mathbb{R}^n}\eta_R(y)\,\frac{y (f(y+x)-f(x))}{|y|^{n+\beta+1}}\,dy\,\bigg|\,dx
&\le\int_{\mathbb{R}^n}\int_{B_R}\frac{|f(y+x)-f(x)|}{|y|^{n+\beta}}\,dy\,dx\\
&\le R^{\alpha-\beta}\int_{\mathbb{R}^n}\int_{B_R}\frac{|f(y+x)-f(x)|}{|y|^{n+\alpha}}\,dy\,dx\\ &\le R^{\alpha-\beta}\,[f]_{W^{\alpha,1}(\mathbb{R}^n)} \end{align*} for all $R>0$. Hence \begin{align*}
|D^\beta f|(\mathbb{R}^n) &\le\mu_{n,\beta}\big(R^{\alpha-\beta}\,[f]_{W^{\alpha,1}(\mathbb{R}^n)}
+\tau_n R^{-\beta}\,\|f\|_{H^1(\mathbb{R}^n)}\big) \end{align*} for all $R>0$, and the desired inequality follows by optimizing the parameter~$R>0$ in the right-hand side. \end{remark}
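For the reader's convenience, we record the outcome of the optimization indicated at the end of the previous remark. Assuming without loss of generality that $A:=[f]_{W^{\alpha,1}(\mathbb{R}^n)}>0$ and setting $B:=\tau_n\|f\|_{H^1(\mathbb{R}^n)}$, the function $R\mapsto AR^{\alpha-\beta}+BR^{-\beta}$ attains its minimum on $(0,+\infty)$ at $R_*=\left(\frac{\beta B}{(\alpha-\beta)A}\right)^{1/\alpha}$, so that \begin{equation*} |D^\beta f|(\mathbb{R}^n) \le\mu_{n,\beta}\,\frac{\alpha}{\alpha-\beta} \left(\frac{\alpha-\beta}{\beta}\right)^{\beta/\alpha} \big(\tau_n\|f\|_{H^1(\mathbb{R}^n)}\big)^{(\alpha-\beta)/\alpha}\, [f]_{W^{\alpha,1}(\mathbb{R}^n)}^{\beta/\alpha}. \end{equation*}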
\subsection{The cases \texorpdfstring{$p>1$}{p>1} and \texorpdfstring{$H^1$}{Hˆ1} via the Mihlin--H\"ormander Multiplier Theorem}
Let $0\le\beta\le\alpha\le1$ and consider the function \begin{equation*}
m_{\alpha,\beta}(\xi):=\frac{|\xi|^\beta}{1+|\xi|^\alpha}, \quad \xi\in\mathbb{R}^n. \end{equation*} It is not difficult to see that \begin{equation*}
\|m_{\alpha,\beta}\|_\star :=
\sup_{\mathrm{a}\in\mathbb{N}^n_0,\ |\mathrm{a}|\le\left\lfloor\!\frac n2 \!\right\rfloor+1} \ \sup_{\xi\in\mathbb{R}^n\setminus\set{0}} \
\Big|\,\xi^{\mathrm{a}}\,\partial^{\mathrm{a}}_\xi \, m_{\alpha,\beta}(\xi)\,\Big| <+\infty. \end{equation*} We thus define the convolution operator $T_{m_{\alpha,\beta}}\colon\mathcal S(\mathbb{R}^n)\to\mathcal S'(\mathbb{R}^n)$ with convolution kernel given by $\mathcal F^{-1}(m_{\alpha,\beta})$, i.e., \begin{equation}\label{eq:def_MH} T_{m_{\alpha,\beta}}f:=f*\mathcal F^{-1}(m_{\alpha,\beta}), \quad f\in\mathcal S(\mathbb{R}^n). \end{equation} In the following result, we observe that the multipliers $m_{\alpha,\beta}$ satisfy uniform Mihlin--H\"ormander conditions as $0\le\beta\le\alpha\le1$.
\begin{lemma}[Mihlin--H\"ormander estimates for $T_{m_{\alpha,\beta}}$] \label{res:MH} There is a dimensional constant $\sigma_n>0$ such that the following properties hold for all given $0\le\beta\le\alpha\le1$.
\begin{enumerate}[(i)]
\item\label{item:MH_p} For all given $p\in(1,+\infty)$, the operator in~\eqref{eq:def_MH} uniquely extends to a bounded linear operator $T_{m_{\alpha,\beta}}\colon L^p(\mathbb{R}^n) \to L^p(\mathbb{R}^n)$ with \begin{equation*}
\|T_{m_{\alpha,\beta}}f\|_{L^p(\mathbb{R}^n)} \le \sigma_n\max\set*{p,\frac{1}{p-1}} \,
\|f\|_{L^p(\mathbb{R}^n)} \end{equation*} for all $f\in L^p(\mathbb{R}^n)$.
\item\label{item:MH_1} The operator in~\eqref{eq:def_MH} uniquely extends to a bounded linear operator $T_{m_{\alpha,\beta}}\!\colon\! H^1(\mathbb{R}^n) \to H^1(\mathbb{R}^n)$ with \begin{equation*}
\|T_{m_{\alpha,\beta}}f\|_{H^1(\mathbb{R}^n)} \le \sigma_n \,
\|f\|_{H^1(\mathbb{R}^n)} \end{equation*} for all $f\in H^1(\mathbb{R}^n)$.
\end{enumerate}
\end{lemma}
\begin{proof} Statements~\eqref{item:MH_p} and~\eqref{item:MH_1} follow from the Mihlin--H\"ormander Multiplier Theorem, see~\cite{G14-C}*{Theorem~6.2.7} for the $L^p$-continuity and~\cite{GR85}*{Chapter~III, Theorem~7.30} for the $H^1$-continuity, where \begin{equation*}
\sigma_n:=c_n\sup_{0\le\beta\le\alpha\le1}\|m_{\alpha,\beta}\|_\star<+\infty \end{equation*} with $c_n>0$ a dimensional constant. We leave the simple verifications to the interested reader. \end{proof}
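As a sample of these verifications, in the case $\mathrm{a}=0$ one has, uniformly in $0\le\beta\le\alpha\le1$, \begin{equation*} \sup_{\xi\in\mathbb{R}^n\setminus\{0\}} m_{\alpha,\beta}(\xi) =\sup_{\xi\in\mathbb{R}^n\setminus\{0\}}\frac{|\xi|^\beta}{1+|\xi|^\alpha} \le1, \end{equation*} since $|\xi|^\beta\le1$ for $0<|\xi|\le1$ and $|\xi|^\beta\le|\xi|^\alpha\le1+|\xi|^\alpha$ for $|\xi|\ge1$; the derivatives with $1\le|\mathrm{a}|\le\left\lfloor\frac n2\right\rfloor+1$ are estimated by a similar case analysis.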
With \cref{res:MH} at our disposal, we can prove the following result.
\begin{theorem}[Bessel and fractional Hardy--Sobolev interpolation inequalities]\label{res:interpolation_MH} The following statements hold.
\begin{enumerate}[(i)]
\item\label{item:interpolation_MH_p} Given $p\in(1,+\infty)$, there exists a constant $c_{n,p}>0$ such that, given $0\le\gamma\le\beta\le\alpha\le1$, it holds \begin{equation}\label{eq:interpolation_MH_p}
\|\nabla^\beta f\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^n)} \le c_{n,p} \,
\|\nabla^\gamma f\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^n)}^{\frac{\alpha-\beta}{\alpha-\gamma}} \,
\|\nabla^\alpha f\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^n)}^{\frac{\beta-\gamma}{\alpha-\gamma}} \end{equation} for all $f\in S^{\alpha,p}(\mathbb{R}^n)$. In the case $\gamma=0$ and $0\le\beta\le\alpha\le1$, we also have \begin{equation}\label{eq:interpolation_MH_p_gamma=0}
\|\nabla^\beta f\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^n)} \le c_{n,p} \,
\|f\|_{L^p(\mathbb{R}^n)}^{\frac{\alpha-\beta}{\alpha}} \,
\|\nabla^\alpha f\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^n)}^{\frac{\beta}{\alpha}} \end{equation} for all $f\in S^{\alpha,p}(\mathbb{R}^n)$.
\item\label{item:interpolation_MH_1} There exists a dimensional constant $c_n>0$ such that, given $0\le\gamma\le\beta\le\alpha\le1$, it holds \begin{equation}\label{eq:interpolation_MH_1}
\|\nabla^\beta f\|_{H^1(\mathbb{R}^n;\,\mathbb{R}^n)} \le c_n \,
\|\nabla^\gamma f\|_{H^1(\mathbb{R}^n;\,\mathbb{R}^n)}^{\frac{\alpha-\beta}{\alpha-\gamma}} \,
\|\nabla^\alpha f\|_{H^1(\mathbb{R}^n;\,\mathbb{R}^n)}^{\frac{\beta-\gamma}{\alpha-\gamma}} \end{equation} for all $f\in HS^{\alpha,1}(\mathbb{R}^n)$. In the case $\gamma=0$ and $0\le\beta\le\alpha\le1$, we also have \begin{equation}\label{eq:interpolation_MH_1_gamma=0}
\|\nabla^\beta f\|_{H^1(\mathbb{R}^n;\,\mathbb{R}^n)} \le c_n \,
\|f\|_{H^1(\mathbb{R}^n)}^{\frac{\alpha-\beta}{\alpha}} \,
\|\nabla^\alpha f\|_{H^1(\mathbb{R}^n;\,\mathbb{R}^n)}^{\frac{\beta}{\alpha}} \end{equation} for all $f\in HS^{\alpha,1}(\mathbb{R}^n)$.
\end{enumerate}
\end{theorem}
\begin{proof} Without loss of generality, we can directly assume that $0\le\gamma<\beta<\alpha\le1$. We prove the two statements separately.
\textit{Proof of \eqref{item:interpolation_MH_p}}. Given $f\in S^{\alpha,p}(\mathbb{R}^n)$, we can write \begin{equation*} (-\Delta)^{\frac\beta2}f =T_{m_{\alpha,\beta}}\circ(\mathrm{Id}+(-\Delta)^{\frac\alpha2})f, \end{equation*} so that \begin{align*}
\|(-\Delta)^{\frac\beta2}f\|_{L^p(\mathbb{R}^n)} &=
\|T_{m_{\alpha,\beta}}\circ(\mathrm{Id}+(-\Delta)^{\frac\alpha2})f\|_{L^p(\mathbb{R}^n)}\\ &\le\sigma_n\max\set*{p,\tfrac{1}{p-1}} \,
\|f+(-\Delta)^{\frac\alpha2}f\|_{L^p(\mathbb{R}^n)}\\ &\le\sigma_n\max\set*{p,\tfrac{1}{p-1}} \,
\big(\|f\|_{L^p(\mathbb{R}^n)}+\|(-\Delta)^{\frac\alpha2}f\|_{L^p(\mathbb{R}^n)}\big) \end{align*} thanks to \cref{res:MH}\eqref{item:MH_p}. By performing a dilation and by optimizing the right-hand side, we find that \begin{equation*}
\|(-\Delta)^{\frac\beta2}f\|_{L^p(\mathbb{R}^n)} \le \sigma_n\max\set*{p,\tfrac{1}{p-1}} \,
\|f\|_{L^p(\mathbb{R}^n)}^{\frac{\alpha-\beta}{\alpha}} \,
\|(-\Delta)^{\frac\alpha2}f\|_{L^p(\mathbb{R}^n)}^\frac{\beta}{\alpha} \end{equation*} for all $f\in S^{\alpha,p}(\mathbb{R}^n)$. Now let $f\in C^\infty_c(\mathbb{R}^n)$. Since \begin{equation*} (-\Delta)^{\frac\alpha2}\nabla^\gamma f =R\,(-\Delta)^{\frac{\alpha+\gamma}2}f\in L^p(\mathbb{R}^n;\mathbb{R}^n) \end{equation*} because $f\in L^{\alpha+\gamma,p}(\mathbb{R}^n)$ and by the $L^p$-continuity property of the Riesz transform, we get that $\nabla^\gamma f\in S^{\alpha,p}(\mathbb{R}^n;\mathbb{R}^n)$ according to the definition given in~\eqref{eq:def2_Bessel_space} and the identification established in~\cref{res:S=L}. Repeating the above computations for (each component of) the function $\nabla^\gamma f\in S^{\alpha,p}(\mathbb{R}^n;\mathbb{R}^n)$ with exponents~$\alpha-\gamma$ and~$\beta-\gamma$ in place of~$\alpha$ and~$\beta$ respectively and then optimizing, we get \begin{align*}
\|\nabla^\beta f\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^n)}
&=\|(-\Delta)^{\frac{\beta-\gamma}2}\nabla^\gamma f\|_{L^p(\mathbb{R}^n;\mathbb{R}^n)}\\ &\le c_{n,p} \,
\|\nabla^\gamma f\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^n)}^{\frac{\alpha-\beta}{\alpha-\gamma}} \,
\|(-\Delta)^{\frac{\alpha-\gamma}2}\nabla^\gamma f\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^n)}^\frac{\beta-\gamma}{\alpha-\gamma}\\ &= c_{n,p} \,
\|\nabla^\gamma f\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^n)}^{\frac{\alpha-\beta}{\alpha-\gamma}} \,
\|\nabla^\alpha f\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^n)}^\frac{\beta-\gamma}{\alpha-\gamma} \end{align*} for all $f\in C^\infty_c(\mathbb{R}^n)$, where \begin{equation*} c_{n,p}=\sigma_n n^{\frac{1}{2p}}\max\set*{p,\tfrac{1}{p-1}}. \end{equation*} Thanks to \cref{res:L_p_alpha_equal_S_p_alpha}, \cref{res:lsc_norm_S_alpha_p} and \cref{res:Davila_estimate_S_alpha_p}, inequality~\eqref{eq:interpolation_MH_p} follows by performing a standard approximation argument.
In the case $\gamma=0$, inequality~\eqref{eq:interpolation_MH_p_gamma=0} follows from~\eqref{eq:interpolation_MH_p} by the $L^p$-continuity of the Riesz transform. This concludes the proof of~\eqref{item:interpolation_MH_p}.
\textit{Proof of~\eqref{item:interpolation_MH_1}}. Given $f\in HS^{\alpha,1}(\mathbb{R}^n)$, arguing as above, we can write \begin{equation*} (-\Delta)^{\frac\beta2}f =T_{m_{\alpha,\beta}}\circ(\mathrm{Id}+(-\Delta)^{\frac\alpha2})f, \end{equation*} so that \begin{align*}
\|(-\Delta)^{\frac\beta2}f\|_{H^1(\mathbb{R}^n)} \le\sigma_n \,
\big(\|f\|_{H^1(\mathbb{R}^n)}+\|(-\Delta)^{\frac\alpha2}f\|_{H^1(\mathbb{R}^n)}\big) \end{align*} thanks to \cref{res:MH}\eqref{item:MH_1}. By performing a dilation and by optimizing the right-hand side, we find that
\begin{equation*}\|(-\Delta)^{\frac\beta2}f\|_{H^1(\mathbb{R}^n)} \le \sigma_n \,
\|f\|_{H^1(\mathbb{R}^n)}^{\frac{\alpha-\beta}{\alpha}} \,
\|(-\Delta)^{\frac\alpha2}f\|_{H^1(\mathbb{R}^n)}^\frac{\beta}{\alpha} \end{equation*} for all $f\in HS^{\alpha,1}(\mathbb{R}^n)$. Now let $f\in C^\infty_c(\mathbb{R}^n)$. Note that $\nabla^\gamma f\in H^1(\mathbb{R}^n;\mathbb{R}^n)$, because $\nabla^\gamma f\in L^1(\mathbb{R}^n;\mathbb{R}^n)$ and \begin{equation*} \mathrm{div}^0\nabla^\gamma f =\mathrm{div}^0R(-\Delta)^{\frac\gamma2}f =(-\Delta)^{\frac\gamma2}f \in H^1(\mathbb{R}^n) \end{equation*} by \cref{res:relation_W_alpha_1_and_H_1}\eqref{item:relation_W_alpha_1_and_H_1_2}. Moreover, \begin{equation*} (-\Delta)^{\frac\alpha2}\nabla^\gamma f =R\,(-\Delta)^{\frac{\alpha+\gamma}2}f\in H^1(\mathbb{R}^n;\mathbb{R}^n) \end{equation*} because $f\in HS^{\alpha+\gamma,1}(\mathbb{R}^n)$ and by the $H^1$-continuity property of the Riesz transform. Thus $\nabla^\gamma f\in HS^{\alpha,1}(\mathbb{R}^n;\mathbb{R}^n)$. Repeating the above computations for (each component of) the function $\nabla^\gamma f\in HS^{\alpha,1}(\mathbb{R}^n;\mathbb{R}^n)$ with exponents~$\alpha-\gamma$ and~$\beta-\gamma$ in place of~$\alpha$ and~$\beta$ respectively and then optimizing, we get \begin{align*}
\|\nabla^\beta f\|_{H^1(\mathbb{R}^n;\,\mathbb{R}^n)} \le c_n \,
\|\nabla^\gamma f\|_{H^1(\mathbb{R}^n;\,\mathbb{R}^n)}^{\frac{\alpha-\beta}{\alpha-\gamma}} \,
\|\nabla^\alpha f\|_{H^1(\mathbb{R}^n;\,\mathbb{R}^n)}^\frac{\beta-\gamma}{\alpha-\gamma} \end{align*} for all $f\in C^\infty_c(\mathbb{R}^n)$, where $c_n=\sigma_n n^{1/2}$. Thanks to \cref{res:approx_H_1_alpha}, inequality~\eqref{eq:interpolation_MH_1} follows by performing a standard approximation argument.
In the case $\gamma=0$, inequality~\eqref{eq:interpolation_MH_1_gamma=0} follows from~\eqref{eq:interpolation_MH_1} by the $H^1$-continuity of the Riesz transform. This concludes the proof of~\eqref{item:interpolation_MH_1}. \end{proof}
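We spell out the dilation step used in the above proof. Given $\lambda>0$ and letting $f_\lambda(x):=f(\lambda x)$, from $\mathcal F(f_\lambda)(\xi)=\lambda^{-n}\,\mathcal Ff(\xi/\lambda)$ one gets $(-\Delta)^{\frac\beta2}f_\lambda=\lambda^\beta\,\big((-\Delta)^{\frac\beta2}f\big)(\lambda\,\cdot)$, whence $\|(-\Delta)^{\frac\beta2}f_\lambda\|_{L^p(\mathbb{R}^n)}=\lambda^{\beta-\frac np}\,\|(-\Delta)^{\frac\beta2}f\|_{L^p(\mathbb{R}^n)}$. Applying the additive estimate to~$f_\lambda$ and dividing by~$\lambda^{\beta-\frac np}$ thus gives \begin{equation*} \|(-\Delta)^{\frac\beta2}f\|_{L^p(\mathbb{R}^n)} \le\sigma_n\max\set*{p,\tfrac{1}{p-1}}\, \big(\lambda^{-\beta}\,\|f\|_{L^p(\mathbb{R}^n)}+\lambda^{\alpha-\beta}\,\|(-\Delta)^{\frac\alpha2}f\|_{L^p(\mathbb{R}^n)}\big) \end{equation*} for all $\lambda>0$, and minimizing the right-hand side over~$\lambda>0$ yields the multiplicative bound; the same computation applies verbatim with the $H^1$-norm in place of the $L^p$-norm.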
\section{Asymptotic behavior of fractional \texorpdfstring{$\alpha$}{alpha}-variation as \texorpdfstring{$\alpha\to0^+$}{alpha tends to 0ˆ+}} \label{sec:asymptotic_to_zero}
In this section, we study the asymptotic behavior of~$\nabla^\alpha$ as $\alpha\to0^+$.
\subsection{Pointwise convergence of \texorpdfstring{$\nabla^\alpha$}{nablaˆalpha} as \texorpdfstring{$\alpha\to0^+$}{alpha tends to 0ˆ+}}
We start with the pointwise convergence of~$\nabla^\alpha$ to~$\nabla^0$ as~$\alpha\to0^+$ for sufficiently regular functions.
\begin{lemma}[Uniform convergence of~$\nabla^\alpha$ as $\alpha\to0^+$] \label{res:pointwise_conv_Riesz} Let $\alpha\in(0,1]$ and $p\in [1,+\infty]$. For $\beta\in (0,\alpha)$, the operator \begin{equation*} \nabla^\beta\colon C^{0,\alpha}_{\loc}(\mathbb{R}^n)\cap L^p(\mathbb{R}^n)\to C^0(\mathbb{R}^n; \mathbb{R}^{n}) \end{equation*} is well defined and satisfies \begin{equation}\label{eq:estimate_nabla_beta_general}
\|\nabla^\beta f\|_{L^\infty(B_R;\,\mathbb{R}^{n})} \le c_{n,p}\,\mu_{n,\beta}\, \left(\frac{r^{\alpha-\beta}}{\alpha-\beta}\,[f]_{C^{0,\alpha}(B_{R+r})} +
\frac{r^{-\frac np-\beta}}{\left(\frac{n}{p} + \beta \right)^{1 - \frac{1}{p}}}\,\|f\|_{L^p(\mathbb{R}^n)}\right) \end{equation} for all $r,R>0$ and all $f\in C^{0,\alpha}_{\loc}(\mathbb{R}^n)\cap L^p(\mathbb{R}^n)$, where \begin{equation}\label{eq:baramba_constant} c_{n,p}:= \begin{cases} \max \left \{n \omega_n, (n \omega_n)^{1 - \frac{1}{p}} \left (1 - \frac{1}{p} \right )^{1 - \frac{1}{p}} \right \} & \text{if } p \in (1,+\infty), \\ \max \left \{n \omega_n, 1 \right \} & \text{if } p = 1, \\ n \omega_n & \text{if } p =+\infty. \end{cases} \end{equation} Moreover, for $\beta \in (0, \alpha)$ and $f \in C^{0,\alpha}(\mathbb{R}^n)\cap L^p(\mathbb{R}^n)$, we have $\nabla^\beta f \in C^0_{b}(\mathbb{R}^n;\mathbb{R}^n)$ and \begin{equation}\label{eq:estimate_nabla_beta_general_global}
\|\nabla^\beta f\|_{L^\infty(\mathbb{R}^n;\,\mathbb{R}^{n})} \le
c_{n,p}\, \mu_{n,\beta}\, \frac{\alpha p + n}{(\alpha - \beta)(\beta p + n)}\, \left ( \tfrac{n}{p} + \beta \right )^{\frac{\alpha - \beta}{\alpha p + n}} \|f\|_{L^p(\mathbb{R}^n)}^{\frac{p(\alpha - \beta)}{\alpha p + n}}\, [f]_{C^{0,\alpha}(\mathbb{R}^n)}^{\frac{\beta p + n}{\alpha p + n}}, \end{equation} where $c_{n,p}$ is as in~\eqref{eq:baramba_constant}.
Finally, if $p<+\infty$ and $f\in C^{0,\alpha}_{\loc}(\mathbb{R}^n)\cap L^p(\mathbb{R}^n)$, then $\nabla^{0} f$ is well defined and belongs to $C^{0}(\mathbb{R}^n; \mathbb{R}^n)$, inequality \eqref{eq:estimate_nabla_beta_general_global} holds also for $\beta = 0$, and for all bounded open sets $U \subset \mathbb{R}^n$ we have \begin{equation}\label{eq:pointwise_conv_nabla_beta}
\lim_{\beta\to0^+} \|\nabla^\beta f - \nabla^0 f\|_{L^{\infty}(U;\,\mathbb{R}^n)} = 0. \end{equation} Moreover, \eqref{eq:pointwise_conv_nabla_beta} holds with $U = \mathbb{R}^n$ if, in addition, $f\in C^{0,\alpha}(\mathbb{R}^n)$. \end{lemma}
\begin{proof} We divide the proof into four steps.
\textit{Step~1: proof of~\eqref{eq:estimate_nabla_beta_general}}. Let $\alpha \in (0, 1]$, $p \in [1, +\infty]$, $f\in C^{0,\alpha}_{\loc}(\mathbb{R}^n)\cap L^p(\mathbb{R}^n)$, $\beta\in (0,\alpha)$ and $x\in\mathbb{R}^n$. We notice that, for all $\varepsilon \in (0, 1)$, \begin{equation*}
\int_{\{|y|>\varepsilon\}}\frac{y f(y+x)}{|y|^{n+\beta+1}}\,dy = \int_{\{\varepsilon < |y| \le 1\}}\frac{y (f(y+x) - f(x))}{|y|^{n+\beta+1}}\,dy + \int_{\{|y|>1\}}\frac{y f(y+x)}{|y|^{n+\beta+1}}\,dy, \end{equation*}
so that we can pass to the limit in the right-hand side as $\varepsilon \to 0^+$ thanks to the H\"older continuity of $f$ and the fact that $y\mapsto|y|^{-n - \beta} \in L^q(\mathbb{R}^n \setminus B_1)$ for all $q \in [1,+\infty]$. This shows that $\nabla^{\beta}f(x)$ is well defined for all $x \in \mathbb{R}^n$. If $p \in [1, +\infty)$, this argument also works in the case $\beta = 0$. Now let $\alpha \in (0, 1]$, $\beta\in[0,\alpha)$, $p \in (1, +\infty)$, $f\in C^{0,\alpha}_{\loc}(\mathbb{R}^n)\cap L^p(\mathbb{R}^n)$ and $x\in\mathbb{R}^n$. By H\"older's inequality we can estimate \begin{align*}
\bigg|\int_{\{|y|>\varepsilon\}}\frac{y f(y+x)}{|y|^{n+\beta+1}}\,dy\bigg|
&\le\int_{\{\varepsilon<|y|<r\}}\frac{|f(y+x)-f(x)|}{|y|^{n+\beta}}\,dy
+\int_{\{|y|\ge r\}}\frac{|f(y+x)|}{|y|^{n+\beta}}\,dy\\
&\le[f]_{C^{0,\alpha}(B_r(x))}\int_{\{|y|<r\}}\frac{dy}{|y|^{n+\beta-\alpha}}
+\|f\|_{L^p(\mathbb{R}^n)}\bigg(\int_{\{|y|\ge r\}}\frac{dy}{|y|^{(n+\beta)q}}\bigg)^{\frac{1}{q}}\\ &\le \frac{n\omega_n r^{\alpha-\beta}}{\alpha-\beta}\,[f]_{C^{0,\alpha}(B_r(x))}
+\bigg(\frac{n\omega_n r^{n-(n+\beta)q}}{(n+\beta)q-n}\bigg)^{\frac1q}\,\|f\|_{L^p(\mathbb{R}^n)} \end{align*} for all $r>\varepsilon>0$, where $q = \frac{p}{p - 1}$. Moreover, for $p=1$, if $f\in C^{0,\alpha}_{\loc}(\mathbb{R}^n)\cap L^1(\mathbb{R}^n)$, then an analogous calculation shows that \begin{equation*}
\bigg|\int_{\{|y|>\varepsilon\}}\frac{y f(y+x)}{|y|^{n+\beta+1}}\,dy\,\bigg| \le \frac{n\omega_n r^{\alpha-\beta}}{\alpha-\beta}\,[f]_{C^{0,\alpha}(B_r(x))} +
r^{- n - \beta}\,\|f\|_{L^1(\mathbb{R}^n)} \end{equation*} for all $r>\varepsilon>0$. Finally, for $p = +\infty$, if $\beta\in (0,\alpha)$ and $f\in C^{0,\alpha}_{\loc}(\mathbb{R}^n)\cap L^\infty(\mathbb{R}^n)$, then we similarly obtain \begin{equation*}
\bigg|\int_{\{|y|>\varepsilon\}}\frac{y f(y+x)}{|y|^{n+\beta+1}}\,dy\,\bigg|
\le \frac{n\omega_n r^{\alpha-\beta}}{\alpha-\beta}\, [f]_{C^{0,\alpha}(B_r(x))} + \frac{n\omega_n r^{- \beta}}{\beta} \|f\|_{L^\infty(\mathbb{R}^n)} \end{equation*} for all $r>\varepsilon>0$. Thus we obtain $\nabla^{\beta} f \in L^\infty_{\loc}(\mathbb{R}^n;\mathbb{R}^n)$ for all $f\in C^{0,\alpha}_{\loc}(\mathbb{R}^n)\cap L^p(\mathbb{R}^n)$ with $\beta\in(0,\alpha)$ and $p\in[1,+\infty]$, including $\beta=0$ if $p<+\infty$, and~\eqref{eq:estimate_nabla_beta_general} readily follows.
\textit{Step~2: proof of $\nabla^\beta f\in C^0(\mathbb{R}^n;\mathbb{R}^n)$}. Let us now prove that $\nabla^\beta f\in C^0(\mathbb{R}^n;\mathbb{R}^n)$ for any $\beta \in (0,\alpha)$ and $f\in C^{0,\alpha}_{\loc}(\mathbb{R}^n)\cap L^p(\mathbb{R}^n)$, where $\alpha\in (0,1]$ and $p\in [1,+\infty]$. Let $R >0$, $r > 1$, $x\in B_R$, $h\in B_1$, $\beta<\alpha'<\alpha$ and $g_h(x):=f(x+h) - f(x)$. We notice that \begin{equation}\label{eq:z}
[g_h]_{C^{0,\alpha'}(B_{R+r})} \le 2 [f]_{C^{0,\alpha}(B_{R+r+|h|})} |h|^{\alpha-\alpha'}. \end{equation}
Indeed, given $x,x+h'\in B_{R+r}$ with $|h'|\le |h|$ we have \begin{align*}
|g_h(x+h') - g_h(x)| & \le |f(x+h+h') - f(x+h)| + |f(x+h') - f(x)|
\\& \le 2 [f]_{C^{0,\alpha}(B_{R+r+|h|})} |h'|^\alpha
\\& \le 2 [f]_{C^{0,\alpha}(B_{R+r+|h|})} |h'|^{\alpha'}\, |h|^{\alpha-\alpha'}. \end{align*}
In the case $|h|\le |h'|$, instead, we have \begin{align*}
|g_h(x+h') - g_h(x)| & \le |f(x+h+h') - f(x+h')| + |f(x+h) - f(x)|
\\& \le 2 [f]_{C^{0,\alpha}(B_{R+r+|h|})} |h|^\alpha
\\& \le 2 [f]_{C^{0,\alpha}(B_{R+r+|h|})} |h'|^{\alpha'}\, |h|^{\alpha-\alpha'}, \end{align*} so that \eqref{eq:z} follows. Plugging $g_h$ into \eqref{eq:estimate_nabla_beta_general} with $\alpha'$ in place of $\alpha$, for any $r>0$ we obtain \begin{align*}
|\nabla^\beta f(x+h) - \nabla^{\beta} f(x)| & \le c_{n,p}\,\mu_{n,\beta}\,
\left(\frac{r^{\alpha'-\beta}}{\alpha'-\beta}\,[g_h]_{C^{0,\alpha'}(B_{R+r})}
+
\frac{r^{-\frac{n}{p}-\beta}}{\left(\frac{n}{p} + \beta \right)^{1 - \frac{1}{p}}}\,\|g_h\|_{L^p(\mathbb{R}^n)}\right)
\\ & \le C_{n,p,\beta} \left( \frac{r^{\alpha'-\beta}}{\alpha'-\beta}|h|^{\alpha-\alpha'} [f]_{C^{0,\alpha}(B_{R+r+|h|})} + r^{-\frac{n}{p}-\beta}\| f\|_{L^p(\mathbb{R}^n)} \right),
\end{align*} where $C_{n,p,\beta}>0$ is a constant depending only on $n$, $p$ and $\beta$. The desired conclusion follows by letting first $h \to 0$ and then $r\to+\infty$.
\textit{Step~3: proof of~\eqref{eq:estimate_nabla_beta_general_global}}. Let $\alpha \in (0,1]$, $p \in [1,+\infty]$ and $x \in \mathbb{R}^n$. If $f \in C^{0,\alpha}(\mathbb{R}^n)\cap L^p(\mathbb{R}^n)$, then arguing as in Step~1 we can estimate \begin{equation*}
|\nabla^\beta f(x)| \le c_{n,p}\,\mu_{n,\beta}\,\left(\frac{r^{\alpha-\beta}}{\alpha-\beta}\,[f]_{C^{0,\alpha}(\mathbb{R}^n)}
+\frac{r^{-\frac np-\beta}}{\left (\frac{n}{p} + \beta \right )^{1 - \frac{1}{p}}}\,\|f\|_{L^p(\mathbb{R}^n)}\right), \end{equation*} for all $\beta \in (0, \alpha)$, including $\beta = 0$ if $p<+\infty$, so that \eqref{eq:estimate_nabla_beta_general_global} follows by optimizing the parameter $r > 0$ in the right-hand side.
\textit{Step~4: proof of~\eqref{eq:pointwise_conv_nabla_beta}}. Let $\alpha\in(0,1]$, $\beta \in (0, \alpha)$, $U$ be a bounded open set and $x \in U$. If $p\in(1,+\infty)$, then we can estimate \begin{align*}
|\nabla^\beta f(x)-\nabla^0 f(x)| &\le
\bigg|1-\frac{\mu_{n,\beta}}{\mu_{n,0}}\,\bigg|\,|\nabla^0 f(x)| +
\mu_{n,\beta}\,[f]_{C^{0,\alpha}(B_1(x))}\int_{\{|y|<1\}}\bigg(\frac{1}{|y|^\beta}-1\bigg)\,\frac{dy}{|y|^{n-\alpha}}\\ &\quad
+\mu_{n,\beta}\int_{\{|y| > 1\}}\bigg(1-\frac{1}{|y|^\beta}\bigg)\,\frac{|f(y+x)|}{|y|^{n}}\,dy \\ &\le
\bigg|1-\frac{\mu_{n,\beta}}{\mu_{n,0}}\,\bigg|\, \|\nabla^0 f \|_{L^{\infty}(U;\, \mathbb{R}^n)} + \frac{n \omega_n \beta\mu_{n,\beta}}{\alpha(\alpha - \beta)} \, [f]_{C^{0,\alpha}(U_1)} \\ &\quad+ \mu_{n,\beta}\,
\|f\|_{L^p(\mathbb{R}^n)}\,\bigg ( \int_{\{|y| >1\}}\bigg(1-\frac{1}{|y|^{\beta}}\bigg)^{q} \frac{1}{|y|^{nq}}\,dy \bigg )^{\frac{1}{q}}, \end{align*}
where $q = \frac{p}{p - 1}$ and $U_1 := \{ y \in \mathbb{R}^n : {\rm dist}(y, U) < 1\}$. Since $y\mapsto|y|^{-nq} \in L^1(\mathbb{R}^n \setminus B_1)$ for all $q \in (1,+\infty)$, the last term also vanishes as $\beta \to 0^+$ thanks to Lebesgue's Dominated Convergence Theorem, so that the limit in~\eqref{eq:pointwise_conv_nabla_beta} follows. If $p = 1$, then we can estimate the last term in the above inequality as \begin{equation*}
\int_{\{|y| > 1\}}\bigg(1-\frac{1}{|y|^\beta}\bigg)\,\frac{|f(y+x)|}{|y|^{n}}\,dy \le
\|f\|_{L^1(\mathbb{R}^n)}\,
\sup_{|y| > 1} \frac{1}{|y|^{n}}\, \bigg(1-\frac{1}{|y|^\beta}\bigg). \end{equation*} Since \begin{equation*}
\sup_{|y| > 1} \frac{1}{|y|^{n}}\, \bigg(1-\frac{1}{|y|^\beta}\bigg) = \frac{\beta}{n \left ( 1 + \frac{\beta}{n} \right )^{\frac{n}{\beta} + 1}} \longrightarrow 0 \quad \text{as}\ \beta \to 0^+, \end{equation*} the limit in~\eqref{eq:pointwise_conv_nabla_beta} follows also in this case. Finally, if $f\in C^{0,\alpha}(\mathbb{R}^n)\cap L^p(\mathbb{R}^n)$ and $p<+\infty$, then the above estimates hold for $U = \mathbb{R}^n$, so that we obtain the uniform convergence $\nabla^{\beta} f \to \nabla^0 f$ in $\mathbb{R}^n$. \end{proof}
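For the reader's convenience, we verify the supremum computed in the last step. Setting $h(t):=t^{-n}(1-t^{-\beta})$ for $t\ge1$, we have \begin{equation*} h'(t)=t^{-n-\beta-1}\big(n+\beta-n t^{\beta}\big), \end{equation*} so that $h$ is increasing for $t^\beta\le1+\frac\beta n$ and decreasing afterwards. Hence the supremum is attained at $t_*=\big(1+\frac\beta n\big)^{1/\beta}$ and equals \begin{equation*} h(t_*) =\Big(1+\frac\beta n\Big)^{-\frac n\beta}\,\frac{\frac\beta n}{1+\frac\beta n} =\frac{\beta}{n \left ( 1 + \frac{\beta}{n} \right )^{\frac{n}{\beta} + 1}}, \end{equation*} which tends to~$0$ as $\beta\to0^+$, since $\big(1+\frac\beta n\big)^{n/\beta}\to e$.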
\begin{remark} \label{rem:div_pointwise_conv_Riesz} It is easy to see that a result analogous to \cref{res:pointwise_conv_Riesz} can be proved for the fractional divergence operator. In particular, if $\varphi \in C^{0,\alpha}(\mathbb{R}^n; \mathbb{R}^n)\cap L^p(\mathbb{R}^n; \mathbb{R}^n)$ for some $\alpha \in (0, 1]$ and $p \in [1,+\infty]$, then $\mathrm{div}^{\beta} \varphi \in L^{\infty}(\mathbb{R}^n)$ for all $\beta \in (0, \alpha)$ with \begin{equation*}
\|\mathrm{div}^\beta \varphi\|_{L^\infty(\mathbb{R}^n)}
\le c_{n,p}\, \mu_{n,\beta}\, \frac{\alpha p + n}{(\alpha - \beta)(\beta p + n)}\, \left ( \tfrac{n}{p} + \beta \right )^{\frac{\alpha - \beta}{\alpha p + n}} \,\|\varphi\|_{L^p(\mathbb{R}^n; \mathbb{R}^n)}^{\frac{p(\alpha - \beta)}{\alpha p + n}} \,[\varphi]_{C^{0,\alpha}(\mathbb{R}^n;\mathbb{R}^n)}^{\frac{\beta p + n}{\alpha p + n}}, \end{equation*} where $c_{n,p}>0$ is the constant defined in~\eqref{eq:baramba_constant}. If $p<+\infty$, then $\mathrm{div}^{\beta} \varphi \in L^{\infty}(\mathbb{R}^n)$ for all $\beta \in [0, \alpha)$, the above estimate holds also for $\beta = 0$ and we have \begin{equation*}
\lim_{\beta\to0^+} \|\mathrm{div}^\beta \varphi - \mathrm{div}^0 \varphi\|_{L^{\infty}(\mathbb{R}^n)} = 0. \end{equation*} \end{remark}
As an immediate consequence of \cref{res:pointwise_conv_Riesz} and \cref{rem:div_pointwise_conv_Riesz}, we can show that the fractional $\alpha$-variation is lower semicontinuous as~$\alpha\to0^+$.
\begin{corollary}[Lower semicontinuity of $BV^\alpha$-seminorm as~$\alpha\to0^+$] If $f\in L^1(\mathbb{R}^n)$, then for all open sets $U \subset \mathbb{R}^n$ it holds \begin{equation}\label{eq:liminf_alpha_0}
|D^0 f|(U)\le\liminf_{\alpha\to0^+}|D^\alpha f|(U). \end{equation} \end{corollary}
\begin{proof}
Given $\varphi\in C^\infty_c(U;\mathbb{R}^n)$ with $\|\varphi\|_{L^\infty(U;\,\mathbb{R}^n)}\le1$, thanks to \cref{res:pointwise_conv_Riesz} and \cref{rem:div_pointwise_conv_Riesz} we have \begin{align*} \int_{\mathbb{R}^n}f\,\mathrm{div}^0\varphi\,dx =\lim_{\alpha\to0^+}\int_{\mathbb{R}^n}f\,\mathrm{div}^\alpha\varphi\,dx
\le\liminf_{\alpha\to0^+}|D^\alpha f|(U), \end{align*} so that~\eqref{eq:liminf_alpha_0} follows by~\eqref{eq:fractional_variation_0}. \end{proof}
\subsection{Strong and energy convergence of \texorpdfstring{$\nabla^\alpha$}{nablaˆalpha} as \texorpdfstring{$\alpha\to0^+$}{alpha tends to 0ˆ+}}
We now study the strong and the energy convergence of~$\nabla^\alpha$ as~$\alpha\to0^+$. For the strong convergence, we have the following result.
\begin{theorem}[Strong convergence of~$\nabla^\alpha$ as~$\alpha\to0^+$] \label{res:strong_conv_alpha_0} The following hold. \begin{enumerate}[(i)]
\item\label{item:strong_conv_Hardy_alpha_0}
If $f\in\bigcup_{\alpha\in(0,1)} HS^{\alpha,1}(\mathbb{R}^n)$, then \begin{equation}\label{eq:strong_conv_Hardy_alpha_0}
\lim_{\alpha\to0^+}\|\nabla^\alpha f-Rf\|_{H^1(\mathbb{R}^n;\,\mathbb{R}^n)}=0. \end{equation}
\item\label{item:strong_conv_alpha_0_Lp} If $p\in(1,+\infty)$ and $f\in\bigcup_{\alpha\in(0,1)}S^{\alpha,p}(\mathbb{R}^n)$, then \begin{equation}\label{eq:strong_conv_alpha_0_Lp}
\lim_{\alpha\to0^+}\|\nabla^\alpha f-Rf\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^n)}=0. \end{equation}
\end{enumerate} \end{theorem}
\begin{remark}\label{rm:z} Thanks to \cref{res:scatole}, \cref{res:strong_conv_alpha_0}\eqref{item:strong_conv_Hardy_alpha_0} can be equivalently stated as \begin{equation}
\lim_{\alpha\to0^+}\|\nabla^\alpha f-Rf\|_{H^1(\mathbb{R}^n;\,\mathbb{R}^n)}=0 \end{equation} for all $f\in H^1(\mathbb{R}^n)\cap\bigcup_{\alpha\in(0,1)} W^{\alpha,1}(\mathbb{R}^n)$. \end{remark}
\noindent We prove \cref{res:strong_conv_alpha_0} in \cref{subsec:proof_strong_conv_alpha_0}. For the convergence of the (rescaled) energy, we instead have the following result.
\begin{theorem}[Energy convergence of~$\nabla^\alpha$ as~$\alpha\to0^+$] \label{res:energy_conv_alpha_0} If $f\in\bigcup_{\alpha\in(0,1)}W^{\alpha,1}(\mathbb{R}^n)$, then \begin{equation*}
\lim_{\alpha\to0^+}\alpha\int_{\mathbb{R}^n}|\nabla^\alpha f|\,dx
=n\omega_n\mu_{n,0}\,\bigg|\int_{\mathbb{R}^n} f\,dx\,\bigg|. \end{equation*} \end{theorem}
\noindent We prove \cref{res:energy_conv_alpha_0} in \cref{subsec:energy_conv_alpha_0}.
\subsection{Proof of \texorpdfstring{\cref{res:strong_conv_alpha_0}}{4.3}} \label{subsec:proof_strong_conv_alpha_0}
Before the proof of \cref{res:strong_conv_alpha_0}, we need to recall the following well-known result; see the first part of the proof of~\cite{FS82}*{Lemma~1.60}. For the reader's convenience and to keep the paper as self-contained as possible, we briefly recall its simple proof.
\begin{lemma}\label{res:derivatives_trick} Let $m\in\mathbb{N}_0$. If $f\in\mathcal{S}_m(\mathbb{R}^n)$, then $f=\mathrm{div} g$ for some $g\in\mathcal{S}_{m-1}(\mathbb{R}^n;\mathbb{R}^n)$ (with $g\in\mathcal{S}(\mathbb{R}^n;\mathbb{R}^n)$ in the case $m=0$). \end{lemma}
\begin{proof}
By means of the Fourier transform, the problem can be equivalently restated as follows: if $\varphi\in\mathcal{S}(\mathbb{R}^n)$ satisfies $\partial^\mathsf{a}\varphi(0)=0$ for all $\mathsf{a}\in\mathbb{N}^n_0$ such that $|\mathsf{a}|\le m$, then $\varphi(\xi)=\sum_{i=1}^n\xi_i\psi_i(\xi)$ for some $\psi_1,\dots,\psi_n\in\mathcal{S}(\mathbb{R}^n)$ with $\partial^\mathsf{a}\psi_i(0)=0$ for all $i=1,\dots,n$ and all $\mathsf{a}\in\mathbb{N}^n_0$ such that $|\mathsf{a}|\le m-1$. This can be achieved as follows. Having fixed any $\zeta\in C^\infty_c(\mathbb{R}^n)$ such that \begin{equation*} \supp\zeta\subset B_2 \quad\text{and}\quad \zeta \equiv 1\ \text{on}\ B_1, \end{equation*} we can define \begin{equation*} \psi_i(\xi):=\zeta(\xi)\int_0^1\partial_i\varphi(t\xi)\,dt
+\frac{1-\zeta(\xi)}{|\xi|^2}\,\xi_i\,\varphi(\xi), \quad \xi\in\mathbb{R}^n, \end{equation*} for all $i=1,\dots,n$. It is now easy to prove that such $\psi_i$'s satisfy the required properties and we leave the simple calculations to the reader. \end{proof}
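As a sample of these calculations, note that \begin{equation*} \sum_{i=1}^n\xi_i\,\psi_i(\xi) =\zeta(\xi)\int_0^1\frac{d}{dt}\,\varphi(t\xi)\,dt+(1-\zeta(\xi))\,\varphi(\xi) =\zeta(\xi)\,\big(\varphi(\xi)-\varphi(0)\big)+(1-\zeta(\xi))\,\varphi(\xi) =\varphi(\xi) \end{equation*} for all $\xi\in\mathbb{R}^n$, since $\varphi(0)=0$.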
Thanks to \cref{res:derivatives_trick}, we can prove the following $L^p$-convergence result of the fractional $\alpha$-Laplacian of suitably regular functions as $\alpha\to0^+$, as well as analogous convergence results for the fractional $\alpha$-gradient.
\begin{lemma} \label{res:frac_Laplacian_strong_convergence} Let $p\in [1,+\infty]$. If $f\in\mathcal S_0(\mathbb{R}^n)$, then \begin{equation}\label{eq:frac_Laplacian_strong_convergence} \lim_{\alpha\to0^+}
\|(-\Delta)^{\frac\alpha2}f-f\|_{L^p(\mathbb{R}^n)} =0. \end{equation} As a consequence, if $p\in(1,+\infty)$ and $f\in\mathcal S_0(\mathbb{R}^n)$, then \begin{equation}\label{eq:nabla_alpha_strong_convergence_S} \lim_{\alpha\to0^+}
\|\nabla^\alpha f-Rf\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^n)} =0; \end{equation} if $p =1$ and $f\in\mathcal S_{\infty}(\mathbb{R}^n)$, then \begin{equation}\label{eq:nabla_alpha_strong_convergence_S_1} \lim_{\alpha\to0^+}
\|\nabla^\alpha f-Rf\|_{H^1(\mathbb{R}^n;\,\mathbb{R}^n)} =0. \end{equation} \end{lemma}
\begin{proof} Let $f\in\mathcal S_0(\mathbb{R}^n)$ be fixed. If $p\in(1,+\infty)$, then \begin{equation*}
\|\nabla^\alpha f-Rf\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^n)} =
\|R(-\Delta)^{\frac\alpha2} f-Rf\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^n)} \le c_{n,p}
\|(-\Delta)^{\frac\alpha2}f-f\|_{L^p(\mathbb{R}^n)} \end{equation*} by the $L^p$-continuity of the Riesz transform, so that~\eqref{eq:nabla_alpha_strong_convergence_S} is a consequence of~\eqref{eq:frac_Laplacian_strong_convergence}. To prove~\eqref{eq:frac_Laplacian_strong_convergence}, given $x\in\mathbb{R}^n$ we write \begin{align*} (-\Delta)^{\frac\alpha2}f(x) =\nu_{n,\alpha}
\int_{\set*{|h|>1}}\frac{f(x+h)-f(x)}{|h|^{n+\alpha}}\,dh + \nu_{n,\alpha}
\int_{\set*{|h|\le1}}\frac{f(x+h)-f(x)}{|h|^{n+\alpha}}\,dh, \end{align*} where \begin{equation*} \nu_{n,\alpha}=2^\alpha\pi^{-\frac n2}\frac{\Gamma\left(\frac{n+\alpha}{2}\right)}{\Gamma\left(-\frac{\alpha}{2}\right)}, \quad \alpha\in(0,1), \end{equation*} is the constant appearing in~\eqref{eq:def_frac_Laplacian}. One easily sees that \begin{equation}\label{eq:nu_n_alpha_limit} \lim_{\alpha\to0^+}\frac{\nu_{n,\alpha}}{\alpha}=-\frac{1}{n\omega_n}. \end{equation} On the one hand, we can estimate \begin{equation*}
\left\| \, \nu_{n,\alpha}
\int_{\set*{|h| \le 1}}\frac{f(\cdot+h)-f(\cdot)}{|h|^{n+\alpha}}\,dh \,
\right\|_{L^p(\mathbb{R}^n)} \le \frac{n\omega_n|\nu_{n,\alpha}|}{1-\alpha}
\,\|\nabla f\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^n)} \end{equation*} (by the Fundamental Theorem of Calculus, see~\cite{Brezis11}*{Proposition 9.3(iii)} for instance), so that \begin{equation*} \lim_{\alpha\to0^+}
\left\| \, \nu_{n,\alpha}
\int_{\set*{|h| \le 1}}\frac{f(\cdot+h)-f(\cdot)}{|h|^{n+\alpha}}\,dh \,
\right\|_{L^p(\mathbb{R}^n)} =0 \end{equation*} by~\eqref{eq:nu_n_alpha_limit} for all $p\in[1,+\infty]$. On the other hand, by \cref{res:derivatives_trick} there exists $g\in\mathcal S(\mathbb{R}^n;\mathbb{R}^n)$ such that $f=\mathrm{div} g$ and thus we can write \begin{align*} \nu_{n,\alpha}
\int_{\set*{|h| > 1}}\frac{f(x+h)-f(x)}{|h|^{n+\alpha}}\,dh &= \nu_{n,\alpha}
\int_{\set*{|h| > 1}}\frac{f(x+h)}{|h|^{n+\alpha}}\,dh - \frac{n\omega_n\nu_{n,\alpha}}{\alpha}\, f(x) \\ &= \nu_{n,\alpha}
\int_{\set*{|h| > 1}}\frac{\mathrm{div} g(x+h)}{|h|^{n+\alpha}}\,dh - \frac{n\omega_n\nu_{n,\alpha}}{\alpha}\, f(x). \end{align*} Integrating by parts, the reader can easily verify that \begin{equation*} \lim_{\alpha\to0^+}
\left\| \, \nu_{n,\alpha}
\int_{\set*{|h| > 1}}\frac{\mathrm{div} g(\cdot+h)}{|h|^{n+\alpha}}\,dh \,
\right\|_{L^p(\mathbb{R}^n)} =0 \end{equation*} for all $p\in[1,+\infty]$. Hence we get \begin{equation*} \lim_{\alpha\to0^+}
\|(-\Delta)^{\frac\alpha2}f-f\|_{L^p(\mathbb{R}^n)} =
\|f\|_{L^p(\mathbb{R}^n)} \lim_{\alpha\to0^+}
\left |1+\frac{n\omega_n\nu_{n,\alpha}}{\alpha}\right |=0 \end{equation*} for all $p\in[1,+\infty]$, so that we obtain \eqref{eq:frac_Laplacian_strong_convergence} and \eqref{eq:nabla_alpha_strong_convergence_S}. Finally, let $f \in \mathcal{S}_{\infty}(\mathbb{R}^n)$, so that $R f \in \mathcal{S}_0(\mathbb{R}^n; \mathbb{R}^n)$, $R(R f) \in \mathcal{S}_0(\mathbb{R}^n; \mathbb{R}^{n^2})$ and $(-\Delta)^{\frac\alpha2} Rf = \nabla^{\alpha} f$. Then, we have \begin{align*}
\|\nabla^\alpha f-Rf\|_{H^1(\mathbb{R}^n;\,\mathbb{R}^n)}
= \|(-\Delta)^{\frac\alpha2}Rf - Rf\|_{L^1(\mathbb{R}^n;\,\mathbb{R}^n)} + \|(-\Delta)^{\frac\alpha2}R(Rf) - R(Rf)\|_{L^1(\mathbb{R}^n;\,\mathbb{R}^{n^2})} \end{align*} and thus \begin{equation*} \lim_{\alpha\to0^+}
\|\nabla^\alpha f-Rf\|_{H^1(\mathbb{R}^n;\,\mathbb{R}^n)} =0 \end{equation*} thanks to~\eqref{eq:frac_Laplacian_strong_convergence} (which clearly holds also for vector-valued functions). Thus, we obtain \eqref{eq:nabla_alpha_strong_convergence_S_1}, and the proof is complete. \end{proof}
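For the reader's convenience, we also justify the limit~\eqref{eq:nu_n_alpha_limit} used in the previous proof. By the functional equation $\Gamma(z+1)=z\,\Gamma(z)$, we have $\Gamma\left(-\frac\alpha2\right)=-\frac2\alpha\,\Gamma\left(1-\frac\alpha2\right)$, so that \begin{equation*} \frac{\nu_{n,\alpha}}{\alpha} =\frac{2^\alpha\pi^{-\frac n2}\,\Gamma\left(\frac{n+\alpha}2\right)}{\alpha\,\Gamma\left(-\frac\alpha2\right)} =-\frac{2^{\alpha-1}\pi^{-\frac n2}\,\Gamma\left(\frac{n+\alpha}2\right)}{\Gamma\left(1-\frac\alpha2\right)} \to-\frac{\Gamma\left(\frac n2\right)}{2\pi^{\frac n2}} =-\frac1{n\omega_n} \end{equation*} as $\alpha\to0^+$, thanks to the continuity of the Gamma function and the identity $n\omega_n=\frac{2\pi^{n/2}}{\Gamma(n/2)}$.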
We can now prove \cref{res:strong_conv_alpha_0}.
\begin{proof}[Proof of \cref{res:strong_conv_alpha_0}] We prove the two statements separately.
\textit{Proof of~\eqref{item:strong_conv_Hardy_alpha_0}}. Let $f\in HS^{\alpha,1}(\mathbb{R}^n)$. By \cref{res:approx_H_1_alpha}, there exists $(f_k)_{k\in\mathbb{N}}\subset\mathcal S_{\infty}(\mathbb{R}^n)$ such that $f_k\to f$ in~$HS^{\alpha,1}(\mathbb{R}^n)$ as $k\to+\infty$. If $\beta\in(0,\alpha)$, then we can estimate \begin{align*}
\|\nabla^\beta f&-Rf\|_{H^1(\mathbb{R}^n;\,\mathbb{R}^n)} \le
\|\nabla^\beta f_k-Rf_k\|_{H^1(\mathbb{R}^n;\,\mathbb{R}^n)} +
\|\nabla^\beta f-\nabla^\beta f_k\|_{H^1(\mathbb{R}^n;\,\mathbb{R}^n)}\\ &\quad+
\|R f-R f_k\|_{H^1(\mathbb{R}^n;\,\mathbb{R}^n)}\\ &\le
\|\nabla^\beta f_k-Rf_k\|_{H^1(\mathbb{R}^n;\,\mathbb{R}^n)} +
c_n\|f-f_k\|_{H^1(\mathbb{R}^n)}^{\frac{\alpha-\beta}\alpha} \,
\|\nabla^\alpha f-\nabla^\alpha f_k\|_{H^1(\mathbb{R}^n;\,\mathbb{R}^n)}^{\frac\beta\alpha}\\ &\quad+
c_n'\|f-f_k\|_{H^1(\mathbb{R}^n)} \end{align*} for all $k\in\mathbb{N}$ by~\eqref{eq:interpolation_MH_1_gamma=0} in \cref{res:interpolation_MH}\eqref{item:interpolation_MH_1} and the $H^1$-continuity of the Riesz transform, where $c_n,c_n'>0$ are dimensional constants. Thus \begin{align*} \limsup_{\beta\to0^+}
\|\nabla^\beta f-Rf\|_{H^1(\mathbb{R}^n;\,\mathbb{R}^n)} &\le
\limsup_{\beta\to0^+}\|\nabla^\beta f_k-Rf_k\|_{H^1(\mathbb{R}^n;\,\mathbb{R}^n)} +
c_n''\|f-f_k\|_{H^1(\mathbb{R}^n)}\\ &=
c_n''\|f-f_k\|_{H^1(\mathbb{R}^n)} \end{align*} for all $k\in\mathbb{N}$ by~\eqref{eq:nabla_alpha_strong_convergence_S_1} in \cref{res:frac_Laplacian_strong_convergence}, where $c_n''=c_n+c_n'$. Hence~\eqref{eq:strong_conv_Hardy_alpha_0} follows by passing to the limit as $k\to+\infty$ and the proof of~\eqref{item:strong_conv_Hardy_alpha_0} is complete.
\textit{Proof of~\eqref{item:strong_conv_alpha_0_Lp}}. We argue as in the proof of \eqref{item:strong_conv_Hardy_alpha_0}. Let $f\in S^{\alpha,p}(\mathbb{R}^n)$. By \cref{res:S_0_dense_in_S_p_alpha}, there exists $(f_k)_{k\in\mathbb{N}}\subset\mathcal S_0(\mathbb{R}^n)$ such that $f_k\to f$ in~$S^{\alpha,p}(\mathbb{R}^n)$ as $k\to+\infty$. If $\beta\in(0,\alpha)$, then we can estimate \begin{align*}
\|\nabla^\beta f&-Rf\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^n)} \le
\|\nabla^\beta f_k-Rf_k\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^n)} +
\|\nabla^\beta f-\nabla^\beta f_k\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^n)}\\ &\quad+
\|R f-R f_k\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^n)}\\ &\le
\|\nabla^\beta f_k-Rf_k\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^n)} +
c_{n,p}\|f-f_k\|_{L^p(\mathbb{R}^n)}^{\frac{\alpha-\beta}\alpha} \,
\|\nabla^\alpha f-\nabla^\alpha f_k\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^n)}^{\frac\beta\alpha}\\ &\quad+
c_{n,p}'\|f-f_k\|_{L^p(\mathbb{R}^n)} \end{align*} for all $k\in\mathbb{N}$ by~\eqref{eq:interpolation_MH_p_gamma=0} in \cref{res:interpolation_MH}\eqref{item:interpolation_MH_p} and the $L^p$-continuity of the Riesz transform, where the constants $c_{n,p},c_{n,p}'>0$ depend only on~$n$ and~$p$. Thus \begin{align*} \limsup_{\beta\to0^+}
\|\nabla^\beta f-Rf\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^n)} &\le
\limsup_{\beta\to0^+}\|\nabla^\beta f_k-Rf_k\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^n)} +
c_{n,p}''\|f-f_k\|_{L^p(\mathbb{R}^n)}\\ &=
c_{n,p}''\|f-f_k\|_{L^p(\mathbb{R}^n)} \end{align*} for all $k\in\mathbb{N}$ by~\eqref{eq:nabla_alpha_strong_convergence_S} in \cref{res:frac_Laplacian_strong_convergence}, where $c_{n,p}''=c_{n,p}+c_{n,p}'$. Hence~\eqref{eq:strong_conv_alpha_0_Lp} follows by passing to the limit as $k\to+\infty$ and the proof of~\eqref{item:strong_conv_alpha_0_Lp} is complete. \end{proof}
\begin{remark}[Direct proof of~\eqref{intro_eq:frac_limit_1}]\label{rm:direct_proof_intro_eq:frac_limit_1} The proof of \eqref{intro_eq:frac_limit_1}, i.e., \begin{equation*}
\lim_{\alpha\to0^+}\|\nabla^\alpha f-Rf\|_{L^1(\mathbb{R}^n;\,\mathbb{R}^n)}=0 \quad \text{for all}\ f\in H^1(\mathbb{R}^n)\cap\bigcup_{\alpha\in(0,1)}W^{\alpha,1}(\mathbb{R}^n), \end{equation*} immediately follows from \cref{res:strong_conv_alpha_0}\eqref{item:strong_conv_Hardy_alpha_0} and \cref{rm:z}. As briefly discussed in \cref{subsec:intro_frac_interp}, one can directly prove~\eqref{intro_eq:frac_limit_1} by combining the interpolation inequality proven in \cref{res:interpolation_H1_BV_alpha} with an approximation argument as done in the proof of \cref{res:strong_conv_alpha_0}. We leave the easy details to the interested reader. \end{remark}
\subsection{Proof of \texorpdfstring{\cref{res:energy_conv_alpha_0}}{4.4}} \label{subsec:energy_conv_alpha_0}
We now pass to the proof of \cref{res:energy_conv_alpha_0}. We need some preliminaries. We begin with the following result.
\begin{lemma}\label{res:unif_lim_alpha_to_0} Let $f\in L^1(\mathbb{R}^n)$ and let $R\in(0,+\infty)$ be such that $\supp f\subset B_R$. If $\varepsilon>R$, then \begin{equation*}
\lim_{\alpha\to0^+}\alpha\mu_{n,\alpha}\int_{\mathbb{R}^n}\bigg|\int_{\{|y|>\varepsilon\}}\frac{y f(y+x)}{|y|^{n+\alpha+1}}\,dy\,\bigg|\,dx
=n\omega_n\mu_{n,0}\bigg|\int_{\mathbb{R}^n} f\,dx\,\bigg|. \end{equation*} \end{lemma}
\begin{proof} Since $\mu_{n,\alpha}\to\mu_{n,0}$ as $\alpha\to0^+$, we just need to prove that \begin{equation}\label{eq:target}
\lim_{\alpha\to0^+}\alpha\int_{\mathbb{R}^n}\bigg|\int_{\{|y|>\varepsilon\}}\frac{y f(y+x)}{|y|^{n+\alpha+1}}\,dy\,\bigg|\,dx
=n\omega_n\bigg|\int_{\mathbb{R}^n} f\,dx\,\bigg|. \end{equation*} We now divide the proof into two steps.
\textit{Step~1}. We claim that \begin{equation}\label{eq:claim_change_x}
\lim_{\alpha\to0^+}\alpha\int_{\mathbb{R}^n}\bigg|\int_{\{|y|>\varepsilon\}}\frac{x f(y+x)}{|x|^{n+\alpha+1}}\,dy\,\bigg|\,dx
=n\omega_n\bigg|\int_{\mathbb{R}^n} f\,dx\,\bigg|. \end{equation} Indeed, since $\supp f\subset B_R$, we have that \begin{equation*}
\int_{\{|y|>\varepsilon\}}\frac{x f(y+x)}{|x|^{n+\alpha+1}}\,dy=0 \quad
\text{for every $x\in\mathbb{R}^n$ such that}\ |x+y|\ge R\ \text{whenever}\ |y| > \varepsilon. \end{equation*}
Recalling that $\varepsilon > R$, we see that, for all $|y| > \varepsilon$, \begin{equation}\label{eq:balls_domain}
|x|\le\varepsilon-R\implies |x+y|\ge R \end{equation} and thus we can write \begin{align*}
\alpha\int_{\mathbb{R}^n}\bigg|\int_{\{|y|>\varepsilon\}}\frac{x f(y+x)}{|x|^{n+\alpha+1}}\,dy\,\bigg|\,dx
&=\alpha\int_{\{|x|>\varepsilon-R\}}\bigg|\int_{\{|y|>\varepsilon\}}\frac{x f(y+x)}{|x|^{n+\alpha+1}}\,dy\,\bigg|\,dx\\
&=\alpha\int_{\{|x|>\varepsilon-R\}}
\frac{1}{|x|^{n+\alpha}}\,
\bigg|\int_{\{|y|>\varepsilon\}}f(y+x)\,dy\,\bigg|\,dx. \end{align*} Now, on the one hand, we have \begin{equation}\label{eq:claim_change_x_proof_1}
\alpha\int_{\{\varepsilon-R<|x|\le\varepsilon+R\}}\frac{1}{|x|^{n+\alpha}}\,\abs*{\int_{\{|y|>\varepsilon\}}f(y+x)\,dy\,}\,dx
\le\alpha n\omega_n\|f\|_{L^1(\mathbb{R}^n)}\int_{\varepsilon-R}^{\varepsilon+R}\frac{dr}{r^{\alpha+1}} \end{equation} for all $\alpha\in(0,1)$. On the other hand, since \begin{equation*}
|x|>\varepsilon+R \implies B_R\subset B_\varepsilon(x)^c, \end{equation*} we have \begin{equation}\label{eq:claim_change_x_proof_2} \begin{split}
\alpha\int_{\{|x|>\varepsilon+R\}}\frac{1}{|x|^{n+\alpha}}\,\bigg|\int_{\{|y|>\varepsilon\}}f(y+x)\,dy\,\bigg|\,dx
&=\alpha\int_{\{|x|>\varepsilon+R\}}\frac{1}{|x|^{n+\alpha}}\,\bigg|\int_{\mathbb{R}^n}f\,dz\,\bigg|\,dx\\
&=\frac{n\omega_n}{(\varepsilon+R)^\alpha}\,\bigg|\int_{\mathbb{R}^n}f\,dz\,\bigg| \end{split} \end{equation} for all $\alpha\in(0,1)$. Hence, claim~\eqref{eq:claim_change_x} follows by first combining~\eqref{eq:claim_change_x_proof_1} and~\eqref{eq:claim_change_x_proof_2} and then passing to the limit as~$\alpha\to0^+$.
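For completeness, the limit passage at the end of Step~1 relies on the elementary computations \begin{equation*} \alpha\int_{\varepsilon-R}^{\varepsilon+R}\frac{dr}{r^{\alpha+1}} =(\varepsilon-R)^{-\alpha}-(\varepsilon+R)^{-\alpha} \to0 \quad\text{and}\quad \frac{n\omega_n}{(\varepsilon+R)^\alpha}\to n\omega_n \end{equation*} as $\alpha\to0^+$, so that only the term in~\eqref{eq:claim_change_x_proof_2} contributes in the limit.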
\textit{Step~2}. We claim that \begin{equation}\label{eq:claim_elia}
\bigg|\frac{y}{|y|^{n+\alpha+1}}+\frac{x}{|x|^{n+\alpha+1}}\bigg| \le
(n+3)\,\frac{|x+y|}{|y|^{n+\alpha+1}}\bigg(\frac{\varepsilon}{\varepsilon-R}\bigg)^{n+\alpha+1} \end{equation}
for all $x,y\in\mathbb{R}^n$ such that $|x|>\varepsilon-R$, $|y|>\varepsilon$ and $|y+x|<R$. Indeed, setting $F(z):=\frac{z}{|z|^{n+\alpha+1}}$ for all $z\in\mathbb{R}^n\setminus\{0\}$, we can estimate \begin{align*}
\abs*{\frac{y}{|y|^{n+\alpha+1}}+\frac{x}{|x|^{n+\alpha+1}}}
&=|F(y)-F(-x)|
\le|y+x|\sup_{t\in[0,1]}|\nabla F|((1-t)y-tx)\\
&\le (n+\alpha+2)\,|y+x|\,\sup_{t\in[0,1]}\frac{1}{|(1-t)y-tx|^{n+\alpha+1}}. \end{align*} Since \begin{align*}
\frac{1}{|(1-t)y-tx|^{n+\alpha+1}}
&\le\frac{1}{||y|-t|y+x||^{n+\alpha+1}}\\
&\le\frac{1}{(|y|-R)^{n+\alpha+1}}\\
&\le\frac{1}{|y|^{n+\alpha+1}}\bigg(\frac{|y|}{|y|-R}\bigg)^{n+\alpha+1}\\
&\le\frac{1}{|y|^{n+\alpha+1}}\bigg(\frac{\varepsilon}{\varepsilon-R}\bigg)^{n+\alpha+1} \end{align*} for all $t\in[0,1]$, claim~\eqref{eq:claim_elia} immediately follows. Now, recalling~\eqref{eq:balls_domain}, we can estimate \begin{align*}
\bigg|\alpha & \int_{\mathbb{R}^n}\abs*{\int_{\{|y|>\varepsilon\}}\frac{y f(y+x)}{|y|^{n+\alpha+1}}\,dy\,}\,dx -
\alpha\int_{\mathbb{R}^n}\bigg|\int_{\{|y|>\varepsilon\}}\frac{x f(y+x)}{|x|^{n+\alpha+1}}\,dy\,\bigg|\,dx\bigg|\\ &\le
\alpha\int_{\mathbb{R}^n}\int_{\{|y|>\varepsilon\}}|f(y+x)|\,\bigg|\frac{y}{|y|^{n+\alpha+1}}+\frac{x}{|x|^{n+\alpha+1}}\bigg|\,dy\,dx\\
&=\alpha\int_{\{|x|>\varepsilon-R\}}\int_{\{|y|>\varepsilon\}}|f(y+x)|\,\abs*{\frac{y}{|y|^{n+\alpha+1}}+\frac{x}{|x|^{n+\alpha+1}}}\,dy\,dx\\ &\le\alpha(n+3)
\bigg(\frac{\varepsilon}{\varepsilon-R}\bigg)^{n+\alpha+1}\int_{\{|x|>\varepsilon-R\}}\int_{\{|y|>\varepsilon\}}|f(y+x)|\,\frac{|y+x|}{|y|^{n+\alpha+1}}\,dy\,dx \end{align*} for all $\alpha\in(0,1)$ thanks to~\eqref{eq:claim_elia}. Since \begin{align*}
\alpha\int_{\{|y|>\varepsilon\}}\frac{1}{|y|^{n+\alpha+1}}\int_{\{|x|>\varepsilon-R\}}|f(y+x)|\,|y+x|\,dx\,dy
\le\alpha n\omega_n R\,\|f\|_{L^1(\mathbb{R}^n)}\int_{\varepsilon}^{\infty}\frac{dr}{r^{\alpha+2}}, \end{align*} we conclude that \begin{equation}\label{eq:change_limit}
\limsup_{\alpha\to0^+}\bigg|\,\alpha\int_{\mathbb{R}^n}\bigg|\int_{\{|y|>\varepsilon\}}\frac{y f(y+x)}{|y|^{n+\alpha+1}}\,dy\,\bigg|\,dx
-\alpha\int_{\mathbb{R}^n}\bigg|\int_{\{|y|>\varepsilon\}}\frac{x f(y+x)}{|x|^{n+\alpha+1}}\,dy\,\bigg|\,dx\,\bigg|=0. \end{equation} Thus~\eqref{eq:target} follows by combining~\eqref{eq:claim_change_x} with~\eqref{eq:change_limit} and the proof is complete. \end{proof}
Thanks to \cref{res:unif_lim_alpha_to_0}, we can prove the following result.
\begin{lemma}\label{res:eta-eps_lim_alpha_to_0} Let $f\in L^1(\mathbb{R}^n)$ and $\eta>0$. There exists $\varepsilon>0$ such that \begin{equation*}
\limsup_{\alpha\to0^+}\,\bigg|\,\alpha\mu_{n,\alpha}\int_{\mathbb{R}^n}\bigg|\int_{\{|y|>\varepsilon\}}\frac{y f(y+x)}{|y|^{n+\alpha+1}}\,dy\,\bigg|\,dx
-n\omega_n\mu_{n,0}\Big|\int_{\mathbb{R}^n} f\,dx\,\Big|\bigg|<\eta. \end{equation*} \end{lemma}
\begin{proof}
Let $\eta'>0$ be such that $\eta=2n\omega_n\mu_{n,0}\,\eta'$. Since $f\in L^1(\mathbb{R}^n)$, we can find $R>0$ such that $\int_{B_R^c}|f|\,dx<\eta'$. Let $\varepsilon>R$ and $g:=f\chi_{B_R}$, which satisfies $g \in L^1(\mathbb{R}^n)$ and $\supp(g)\subset\closure[1]{B_R}$. Then \begin{align*}
\bigg|\int_{\mathbb{R}^n}&\bigg|\int_{\{|y|>\varepsilon\}}\frac{y \, f(y+x)}{|y|^{n+\alpha+1}}\,dy\,\bigg|\,dx
-\int_{\mathbb{R}^n}\abs*{\int_{\{|y|>\varepsilon\}}\frac{y \, g(y+x)}{|y|^{n+\alpha+1}}\,dy\,}\,dx\,\bigg|\\
&\le\int_{\{|y|>\varepsilon\}}\frac{1}{|y|^{n+\alpha}}\int_{\mathbb{R}^n}|f(y+x)-g(y+x)|\,dx\,dy\\
&=\frac{n\omega_n\|f-g\|_{L^1(\mathbb{R}^n)}}{\alpha\varepsilon^\alpha} <\frac{n\omega_n}{\alpha\varepsilon^\alpha}\,\eta'. \end{align*} Since clearly \begin{equation*}
\bigg|\Big|\int_{\mathbb{R}^n} f\,dx\,\Big|-\Big|\int_{\mathbb{R}^n} g\,dx\,\Big|\bigg|
\le\|f-g\|_{L^1(\mathbb{R}^n)}<\eta', \end{equation*} by \cref{res:unif_lim_alpha_to_0} we conclude that \begin{align*}
\limsup_{\alpha\to0^+}\,&\bigg|\alpha\,\mu_{n,\alpha}\int_{\mathbb{R}^n}\bigg|\int_{\{|y|>\varepsilon\}}\frac{y \, f(y+x)}{|y|^{n+\alpha+1}}\,dy\,\bigg|\,dx
-n\omega_n\,\mu_{n,0}\,\abs*{\int_{\mathbb{R}^n} f\,dx\,}\bigg|\\
&<\limsup_{\alpha\to0^+}\,\abs*{\alpha\,\mu_{n,\alpha}\int_{\mathbb{R}^n}\abs*{\int_{\{|y|>\varepsilon\}}\frac{y \, g(y+x)}{|y|^{n+\alpha+1}}\,dy\,}\,dx -n\omega_n\,\mu_{n,0}\,\abs*{\int_{\mathbb{R}^n} g\,dx\,}}\\ &\quad+\left(n\omega_n\mu_{n,0}+n\omega_n\lim_{\alpha\to0^+}\mu_{n,\alpha}\varepsilon^{-\alpha}\right)\eta'\\ &=2n\omega_n\mu_{n,0}\,\eta'=\eta \end{align*} and the proof is complete. \end{proof}
We are now ready to prove \cref{res:energy_conv_alpha_0}.
\begin{proof}[Proof of \cref{res:energy_conv_alpha_0}] Assume $f\in W^{\beta,1}(\mathbb{R}^n)$ for some $\beta\in(0,1)$ and fix $\eta>0$. By \cref{res:eta-eps_lim_alpha_to_0}, there exists $\varepsilon>0$ such that \begin{equation}\label{eq:find_eps}
\limsup_{\alpha\to0^+}\,\abs*{\alpha\,\mu_{n,\alpha}\int_{\mathbb{R}^n}\abs*{\int_{\{|y|>\varepsilon\}}\frac{y \, f(y+x)}{|y|^{n+\alpha+1}}\,dy\,}\,dx -n\omega_n\mu_{n,0}\abs*{\int_{\mathbb{R}^n} f\,dx\,}}<\eta. \end{equation} Since for all $\alpha\in(0,\beta)$ we can estimate \begin{align*}
\bigg|\alpha &\int_{\mathbb{R}^n}|\nabla^\alpha f|\,dx
-n\omega_n\mu_{n,0}\abs*{\int_{\mathbb{R}^n} f\,dx\,}\bigg|\\
&\le\abs*{\alpha\,\mu_{n,\alpha}\int_{\mathbb{R}^n}\abs*{\int_{\{|y|>\varepsilon\}}\frac{y \, f(y+x)}{|y|^{n+\alpha+1}}\,dy\,}\,dx -n\omega_n\mu_{n,0}\abs*{\int_{\mathbb{R}^n} f\,dx\,}}\\
&\quad+\alpha\,\mu_{n,\alpha}\int_{\mathbb{R}^n}\int_{\{|y|\le\varepsilon\}}\frac{|f(y+x)-f(x)|}{|y|^{n+\alpha}}\,dy\,dx\\
&\le\abs*{\alpha\,\mu_{n,\alpha}\int_{\mathbb{R}^n}\abs*{\int_{\{|y|>\varepsilon\}}\frac{y \, f(y+x)}{|y|^{n+\alpha+1}}\,dy\,}\,dx -n\omega_n\mu_{n,0}\abs*{\int_{\mathbb{R}^n} f\,dx\,}} +\alpha\,\mu_{n,\alpha}\,\varepsilon^{\beta-\alpha}[f]_{W^{\beta,1}(\mathbb{R}^n)}, \end{align*} by~\eqref{eq:find_eps} we have \begin{equation*}
\limsup_{\alpha\to0^+}\,\abs*{\alpha\int_{\mathbb{R}^n}|\nabla^\alpha f|\,dx -n\omega_n\mu_{n,0}\abs*{\int_{\mathbb{R}^n} f\,dx\,}} <\eta \end{equation*} and the conclusion follows passing to the limit as $\eta\to0^+$. \end{proof}
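For the reader's convenience, we note that the last inequality in the above chain of estimates follows from the elementary bound \begin{equation*} \int_{\{|y|\le\varepsilon\}}\frac{|f(y+x)-f(x)|}{|y|^{n+\alpha}}\,dy \le\varepsilon^{\beta-\alpha}\int_{\{|y|\le\varepsilon\}}\frac{|f(y+x)-f(x)|}{|y|^{n+\beta}}\,dy, \end{equation*} valid for all $\alpha\in(0,\beta)$ since $|y|^{\beta-\alpha}\le\varepsilon^{\beta-\alpha}$ for $|y|\le\varepsilon$, after an integration in~$x\in\mathbb{R}^n$.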
\appendix
\section{\texorpdfstring{$C^\infty_c(\mathbb{R}^n)$}{Cˆinfty-c} is dense in \texorpdfstring{$S^{\alpha,p}(\mathbb{R}^n)$}{Sˆ{alpha,p}(Rˆn)}} \label{sec:identificaiton_Bessel}
In this section, we prove \cref{res:L_p_alpha_equal_S_p_alpha} below. This result completely answers a question left open in~\cite{CS19}*{Section~3.9}.
\begin{theorem}[Approximation by $C^\infty_c$ functions in $S^{\alpha,p}$] \label{res:L_p_alpha_equal_S_p_alpha} Let $\alpha\in(0,1)$ and $p\in[1,+\infty)$. The set $C^\infty_c(\mathbb{R}^n)$ is dense in $S^{\alpha,p}(\mathbb{R}^n)$. \end{theorem}
For the proof of \cref{res:L_p_alpha_equal_S_p_alpha}, we need some preliminary results. We begin with the following integration-by-parts formula.
\begin{lemma}\label{res:int_by_parts_nabla_0} Let $p,q\in(1,+\infty)$ be such that $\frac1p+\frac1q=1$. If $f\in L^p(\mathbb{R}^n)$ and $\varphi\in L^q(\mathbb{R}^n;\mathbb{R}^n)$, then \begin{equation}\label{eq:int_by_parts_nabla_0} \int_{\mathbb{R}^n} f\,\mathrm{div}^0\varphi\,dx =-\int_{\mathbb{R}^n}\varphi\cdot\nabla^0 f\,dx. \end{equation} \end{lemma}
\begin{proof} Integrating by parts and applying Fubini's Theorem, formula~\eqref{eq:int_by_parts_nabla_0} is easily proved for all $f\in C^\infty_c(\mathbb{R}^n)$ and $\varphi\in C^\infty_c(\mathbb{R}^n;\mathbb{R}^n)$. Since the real-valued bilinear functionals \begin{equation*} (f,\varphi)\mapsto\int_{\mathbb{R}^n} f\,\mathrm{div}^0\varphi\,dx, \quad (f,\varphi)\mapsto\int_{\mathbb{R}^n} \varphi\cdot \nabla^0 f\,dx, \end{equation*} are both continuous on $L^p(\mathbb{R}^n)\times L^q(\mathbb{R}^n;\mathbb{R}^n)$ by H\"older's inequality and the $L^p$-continuity of Riesz transform, the conclusion follows by a simple approximation argument. \end{proof}
\begin{remark} As an immediate consequence of Lemma \ref{res:int_by_parts_nabla_0} and the $L^p$-continuity of the Riesz transform, we can conclude that the space \begin{equation*} S^{0, p}(\mathbb{R}^n) := \set*{f \in L^{p}(\mathbb{R}^n) : \nabla^{0} f \in L^{p}(\mathbb{R}^n; \mathbb{R}^n)} \end{equation*} actually coincides with $L^{p}(\mathbb{R}^n)$ for all $p \in (1, +\infty)$, with $\nabla^0 f = Rf$. In addition, Theorem~\ref{res:H_1=BV_0} easily yields the identity $BV^0(\mathbb{R}^n) = S^{0,1}(\mathbb{R}^n) = H^1(\mathbb{R}^n)$. Arguing in an analogous fashion, we can see that, for all $p \in (1, + \infty)$, \begin{equation*} BV^{0,p}(\mathbb{R}^n) :=\set*{f \in L^{p}(\mathbb{R}^n) : D^{0} f \in \mathscr{M}(\mathbb{R}^n; \mathbb{R}^n)} \end{equation*} coincides with the space \begin{equation*} \set*{f \in L^{p}(\mathbb{R}^n) : R f \in L^1(\mathbb{R}^n; \mathbb{R}^n)}, \end{equation*} and we have $D^0 f = (R f) \Leb{n}$. \end{remark}
Adopting the notation introduced in~\cite{S18}*{Equation~(1.9)}, for $\alpha\in(0,1)$ and $f\in \mathcal{S}(\mathbb{R}^n)$, let \begin{equation*} \mathcal D^\alpha f(x):=
\int_{\mathbb{R}^n}\frac{|f(y+x)-f(x)|}{|y|^{n+\alpha}}\,dy \end{equation*} for all $x\in\mathbb{R}^n$.
Note that $|(-\Delta)^{\frac{\alpha}{2}}f(x)| \le |\nu_{n, \alpha}| \, \mathcal D^\alpha f(x)$ for all $\alpha\in(0,1)$, $f\in \mathcal{S}(\mathbb{R}^n)$ and $x \in \mathbb{R}^n$. In the following result, we prove that the operator $\mathcal D^\alpha$ naturally extends to a continuous operator from $W^{1,p}(\mathbb{R}^n)$ to $L^p(\mathbb{R}^n)$.
\begin{lemma}\label{res:frac_box_Lp} Let $\alpha\in(0,1)$ and $p\in[1,+\infty]$. The operator $\mathcal D^\alpha\colon W^{1,p}(\mathbb{R}^n)\to L^p(\mathbb{R}^n)$ is well defined and satisfies \begin{equation}\label{eq:frac_box_Lp}
\left\|\mathcal D^\alpha f\right\|_{L^p(\mathbb{R}^n)}
\le \frac{2^{1 - \alpha} n\omega_n}{\alpha(1-\alpha)}\, \|f\|_{L^p(\mathbb{R}^n)}^{1 - \alpha}
\|\nabla f\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^n)}^{\alpha} \end{equation} for all $f\in W^{1,p}(\mathbb{R}^n)$. \end{lemma}
\begin{proof} Let $f\in W^{1,p}(\mathbb{R}^n)$ and $r>0$. We can estimate \begin{align*} \mathcal D^\alpha f(x) \le
\bigg(\int_{\{|y|<r\}}\frac{|f(y+x)-f(x)|}{|y|^{n+\alpha}}\,dy
+\int_{\{|y|\ge r\}}\frac{|f(y+x)-f(x)|}{|y|^{n+\alpha}}\,dy\bigg) \end{align*} for a.e.\ $x\in\mathbb{R}^n$. By Minkowski's integral inequality and well-known properties of Sobolev functions (see \cite{L09}*{Lemma 11.11} in the case $p \in [1, + \infty)$), on the one hand we have \begin{align*}
\bigg\|\int_{\{|y|<r\}}\frac{|f(y+\cdot)-f(\cdot)|}{|y|^{n+\alpha}}\,dy\,\bigg\|_{L^p(\mathbb{R}^n)}
&\le\int_{\{|y|<r\}}\frac{\|f(y+\cdot)-f(\cdot)\|_{L^p(\mathbb{R}^n)}}{|y|^{n+\alpha}}\,dy\\
&\le\|\nabla f\|_{L^p(\mathbb{R}^n;\mathbb{R}^n)}\int_{\{|y|<r\}}\frac{dy}{|y|^{n+\alpha-1}}\\
&=\frac{n\omega_n r^{1-\alpha}}{1-\alpha}\,\|\nabla f\|_{L^p(\mathbb{R}^n;\mathbb{R}^n)} \end{align*} while, on the other hand, we have \begin{align*}
\bigg\|\int_{\{|y|\ge r\}}\frac{|f(y+\cdot)-f(\cdot)|}{|y|^{n+\alpha}}\,dy\,\bigg\|_{L^p(\mathbb{R}^n)}
&\le\int_{\{|y|\ge r\}}\frac{\|f(y+\cdot)\|_{L^p(\mathbb{R}^n)}+\|f\|_{L^p(\mathbb{R}^n)}}{|y|^{n+\alpha}}\,dy\\
&=2\|f\|_{L^p(\mathbb{R}^n)}\int_{\{|y|\ge r\}}\frac{dy}{|y|^{n+\alpha}}\\
&=\frac{2n\omega_n r^{-\alpha}}{\alpha}\,\|f\|_{L^p(\mathbb{R}^n)}. \end{align*} Hence \begin{equation*}
\left\|\mathcal D^\alpha f\right\|_{L^p(\mathbb{R}^n)} \le n\omega_n
\,\bigg(\frac{r^{1-\alpha}}{1-\alpha}\,\|\nabla f\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^n)}
+2 \frac{r^{-\alpha}}{\alpha}\,\|f\|_{L^p(\mathbb{R}^n)}\bigg) \end{equation*}
for all~$r>0$. Thus~\eqref{eq:frac_box_Lp} follows by choosing $r=\frac{2 \|f\|_{L^p(\mathbb{R}^n)}}{\|\nabla f\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^n)}}$ and the proof is complete. \end{proof}
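We observe that the choice of~$r$ in the above proof is optimal. Indeed, setting $A=\|\nabla f\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^n)}$ and $B=\|f\|_{L^p(\mathbb{R}^n)}$ (with $A\ne0$, the case $A=0$ being trivial), the function \begin{equation*} h(r)=\frac{r^{1-\alpha}}{1-\alpha}\,A+\frac{2r^{-\alpha}}{\alpha}\,B, \quad r>0, \qquad\text{satisfies}\qquad h'(r)=r^{-\alpha-1}\left(rA-2B\right), \end{equation*} so that $h$ attains its minimum at $r=\frac{2B}A$, with \begin{equation*} h\left(\frac{2B}A\right) =(2B)^{1-\alpha}A^\alpha\left(\frac1{1-\alpha}+\frac1\alpha\right) =\frac{2^{1-\alpha}}{\alpha(1-\alpha)}\,B^{1-\alpha}A^\alpha, \end{equation*} which, multiplied by~$n\omega_n$, gives~\eqref{eq:frac_box_Lp}.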
In the following result, we recall the self-adjointness property of the fractional Laplacian.
\begin{lemma}\label{res:frac_laplacian_self_adjoint} Let $\alpha\in(0,1)$ and $p,q\in[1,+\infty]$ such that $\frac1p+\frac1q=1$. If $f\in W^{1,p}(\mathbb{R}^n)$ and $g\in W^{1,q}(\mathbb{R}^n)$, then \begin{equation}\label{eq:frac_laplacian_self_adjoint} \int_{\mathbb{R}^n}f\,(-\Delta)^{\frac\alpha2}g\,dx =\int_{\mathbb{R}^n}g\,(-\Delta)^{\frac\alpha2}f\,dx. \end{equation} \end{lemma}
\begin{proof} Formula~\eqref{eq:frac_laplacian_self_adjoint} is well known for $f,g\in \mathcal S(\mathbb{R}^n)$ and can be proved, for instance, by exploiting the functional calculus or by directly using the definition of $(-\Delta)^{\frac\alpha2}$. Since the real-valued functional \begin{equation*} (f,g)\mapsto\int_{\mathbb{R}^n}f\,(-\Delta)^{\frac\alpha2}g\,dx \end{equation*} is bilinear and continuous on $L^p(\mathbb{R}^n)\times W^{1,q}(\mathbb{R}^n)$ by H\"older's inequality and \cref{res:frac_box_Lp} above, the conclusion follows by a simple approximation argument for $p,q\in(1,+\infty)$. The case $p,q\in\{1,+\infty\}$ follows by Fubini's theorem, thanks to the fact that the function \begin{equation*}
(x, y) \mapsto f(x)\,\frac{g(x + y) - g(x)}{|y|^{n + \alpha}} \end{equation*} belongs to $L^1(\mathbb{R}^n \times \mathbb{R}^n)$ if $(f, g) \in L^1(\mathbb{R}^n) \times W^{1, \infty}(\mathbb{R}^n)$ or $(f, g) \in L^{\infty}(\mathbb{R}^n) \times W^{1, 1}(\mathbb{R}^n)$. The details are left to the reader. \end{proof}
We are now ready to prove the main result of this section.
\begin{proof}[Proof of \cref{res:L_p_alpha_equal_S_p_alpha}] The density of $C^\infty_c(\mathbb{R}^n)$ in $S^{\alpha,1}(\mathbb{R}^n)$ was already proved in~\cite{CS19}*{Theorem~3.23}, so we can restrict our attention to the case~$p>1$ without loss of generality. We divide the proof into two steps.
\textit{Step~1}. Let $f\in S^{\alpha,p}(\mathbb{R}^n)$ and assume $f\in W^{1,p}(\mathbb{R}^n)\cap\Lip_b(\mathbb{R}^n)$. Given $\varphi\in C^\infty_c(\mathbb{R}^n;\mathbb{R}^n)$, we can write $\mathrm{div}^\alpha\varphi=(-\Delta)^{\frac\alpha2}\mathrm{div}^0\varphi$ with $\mathrm{div}^0\varphi\in\Lip_b(\mathbb{R}^n)\cap W^{1,q}(\mathbb{R}^n)$, so that \begin{equation*} \int_{\mathbb{R}^n}f\,(-\Delta)^{\frac\alpha2}\mathrm{div}^0\varphi\,dx =\int_{\mathbb{R}^n}(-\Delta)^{\frac\alpha2}f\,\mathrm{div}^0\varphi\,dx \end{equation*} for all $\varphi\in C^\infty_c(\mathbb{R}^n;\mathbb{R}^n)$ by \cref{res:frac_laplacian_self_adjoint}. Since $(-\Delta)^{\frac\alpha2} f\in L^p(\mathbb{R}^n)$ thanks to \cref{res:frac_box_Lp}, by \cref{res:int_by_parts_nabla_0} we have \begin{equation*} \int_{\mathbb{R}^n}(-\Delta)^{\frac\alpha2}f\,\mathrm{div}^0\varphi\,dx =-\int_{\mathbb{R}^n}\varphi\cdot\nabla^0(-\Delta)^{\frac\alpha2}f\,dx \end{equation*} for all $\varphi\in C^\infty_c(\mathbb{R}^n;\mathbb{R}^n)$. We thus get that $\nabla^\alpha f=\nabla^0(-\Delta)^{\frac\alpha2}f$ for all $f\in S^{\alpha,p}(\mathbb{R}^n)\cap W^{1,p}(\mathbb{R}^n)\cap\Lip_b(\mathbb{R}^n)$, so that \begin{equation*}
c_1\|(-\Delta)^{\frac\alpha2}f\|_{L^p(\mathbb{R}^n)}\le[f]_{S^{\alpha,p}(\mathbb{R}^n)}
\le c_2\|(-\Delta)^{\frac\alpha2}f\|_{L^p(\mathbb{R}^n)} \end{equation*} for all $f\in S^{\alpha,p}(\mathbb{R}^n)\cap W^{1,p}(\mathbb{R}^n)\cap\Lip_b(\mathbb{R}^n)$, where $c_1,c_2>0$ are two constants depending only on~$p>1$. Thus, recalling the equivalent definition of the space $L^{\alpha,p}(\mathbb{R}^n)$ given in~\eqref{eq:def2_Bessel_space}, we conclude that \begin{equation*} S^{\alpha,p}(\mathbb{R}^n)\cap W^{1,p}(\mathbb{R}^n)\cap\Lip_b(\mathbb{R}^n) \subset L^{\alpha,p}(\mathbb{R}^n) \end{equation*} with continuous embedding.
\textit{Step~2}. Now fix $f\in S^{\alpha,p}(\mathbb{R}^n)$ and let $(\varrho_\varepsilon)_{\varepsilon>0}\subset C^\infty_c(\mathbb{R}^n)$ be a family of standard mollifiers (see~\cite{CS19}*{Section~3.3} for a definition). Setting $f_\varepsilon:=f*\varrho_\varepsilon$ for all~$\varepsilon>0$, arguing as in the proof of~\cite{CS19}*{Theorem~3.22} we have that $f_\varepsilon\to f$ in~$S^{\alpha,p}(\mathbb{R}^n)$ as~$\varepsilon\to0^+$. By Young's inequality, we have that $f_\varepsilon\in S^{\alpha,p}(\mathbb{R}^n)\cap W^{1,p}(\mathbb{R}^n)\cap\Lip_b(\mathbb{R}^n)$ for all~$\varepsilon>0$. Thus $S^{\alpha,p}(\mathbb{R}^n)\cap W^{1,p}(\mathbb{R}^n)\cap\Lip_b(\mathbb{R}^n)$ is a dense subset of $S^{\alpha,p}(\mathbb{R}^n)$.
Hence, by Step~1, we get that also $L^{\alpha,p}(\mathbb{R}^n)$ is a dense subset of~$S^{\alpha,p}(\mathbb{R}^n)$. Since $L^{\alpha,p}(\mathbb{R}^n) = S^{\alpha,p}_0(\mathbb{R}^n) = \closure[-1]{C^\infty_c(\mathbb{R}^n)}^{\,\|\cdot\|_{S^{\alpha,p}(\mathbb{R}^n)}}$ (see \cite{SS15}*{Theorem 1.7}), the conclusion follows. \end{proof}
\section{Some properties of \texorpdfstring{$S^{\alpha,p}(\mathbb{R}^n)$}{Sˆ{alpha,p}(Rˆn)}} \label{sec:props_of_S_alpha_p}
In this section, we collect some additional properties of the space $S^{\alpha,p}(\mathbb{R}^n)$. We begin with the following result, whose proof is very similar to the one of~\cite{CS19}*{Proposition~3.3} and is left to the reader.
\begin{proposition}\label{res:lsc_norm_S_alpha_p} Let $\alpha\in(0,1)$ and $p\in[1,+\infty)$. If $(f_k)_{k\in\mathbb{N}}\subset S^{\alpha,p}(\mathbb{R}^n)$ is such that \begin{equation*}
\liminf_{k\to+\infty}\|\nabla^\alpha f_k\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^n)} <+\infty \end{equation*} and $f_k\to f$ in~$L^p(\mathbb{R}^n)$ as~$k\to+\infty$, then $f\in S^{\alpha,p}(\mathbb{R}^n)$ with \begin{equation}\label{eq:lsc_norm_S_alpha_p}
\|\nabla^\alpha f\|_{L^p(U;\,\mathbb{R}^{n})}
\le\liminf_{k\to+\infty}\|\nabla^\alpha f_k\|_{L^p(U;\, \mathbb{R}^{n})} \end{equation} for any open set $U\subset\mathbb{R}^n$. \end{proposition}
The following result provides an $L^p$-estimate on translations of functions in~$S^{\alpha,p}(\mathbb{R}^n)$. It can be stated by saying that the inclusion $S^{\alpha,p}(\mathbb{R}^n) \subset B_{p,\infty}^{\alpha}(\mathbb{R}^n)$ is continuous, where $B^\alpha_{p,q}(\mathbb{R}^n)$ denotes the usual Besov space; see~\cite{L09}*{Chapter~14}. For a similar result for the space~$W^{\alpha,p}(\mathbb{R}^n)$, we refer the reader to~\cite{TGV20}.
Thanks to \cref{res:S=L}, this result can be derived from the analogous result already known for functions in~$L^{\alpha,p}(\mathbb{R}^n)$. However, the estimate in~\eqref{eq:Lp_control_on_traslations} provides an explicit constant (independent of~$p$) that may be of some interest. The proof of \cref{res:Lp_control_on_traslations} below can be easily established by following the one of~\cite{CS19}*{Proposition~3.14} (exploiting Minkowski's integral inequality and Theorem~\ref{res:L_p_alpha_equal_S_p_alpha}), and we leave it to the reader.
\begin{proposition}\label{res:Lp_control_on_traslations} Let $\alpha\in(0,1)$ and $p\in[1,+\infty)$. If $f\in S^{\alpha,p}(\mathbb{R}^n)$, then \begin{equation}\label{eq:Lp_control_on_traslations}
\|f(\cdot+y)-f(\cdot)\|_{L^p(\mathbb{R}^n)}
\le\gamma_{n,\alpha}\,|y|^\alpha\,
\|\nabla^\alpha f\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^n)} \end{equation} for all $y\in\mathbb{R}^n$, where $\gamma_{n,\alpha}>0$ is as in~\cite{CS19}*{Proposition~3.14}. \end{proposition}
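In other words, recalling that $B^\alpha_{p,\infty}(\mathbb{R}^n)$ can be equivalently endowed with the seminorm $[f]_{B^\alpha_{p,\infty}(\mathbb{R}^n)}=\sup_{y\ne0}|y|^{-\alpha}\,\|f(\cdot+y)-f(\cdot)\|_{L^p(\mathbb{R}^n)}$, inequality~\eqref{eq:Lp_control_on_traslations} immediately yields \begin{equation*} [f]_{B^\alpha_{p,\infty}(\mathbb{R}^n)} \le\gamma_{n,\alpha}\, \|\nabla^\alpha f\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^n)} \quad\text{for all}\ f\in S^{\alpha,p}(\mathbb{R}^n), \end{equation*} which gives the continuity of the inclusion $S^{\alpha,p}(\mathbb{R}^n)\subset B^\alpha_{p,\infty}(\mathbb{R}^n)$ mentioned above.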
A similar result holds for the spaces $BV^\alpha(\mathbb{R}^n)$: indeed, from~\cite{CS19}*{Proposition~3.14} one immediately deduces that the inclusion $BV^{\alpha}(\mathbb{R}^{n}) \subset B^{\alpha}_{1,\infty}(\mathbb{R}^{n})$ is continuous for all $\alpha\in(0,1)$. The next result shows that this inclusion is actually strict whenever~$n\ge2$.
\begin{theorem}[$B^{\alpha}_{1,\infty}(\mathbb{R}^{n})\setminus BV^{\alpha}(\mathbb{R}^{n})\ne\varnothing$ for $n\ge2$]\label{thm:strict_inclusion}
Let $\alpha\in(0,1)$ and $n\ge2$. The inclusion $BV^{\alpha}(\mathbb{R}^{n}) \subset B^{\alpha}_{1,\infty}(\mathbb{R}^{n})$ is strict. \end{theorem}
\begin{proof}
By~\cite{CS19}*{Theorem~3.9}, we just need to prove that $B^{\alpha}_{1,\infty}(\mathbb{R}^{n})\setminus L^{\frac n{n-\alpha}}(\mathbb{R}^{n})\ne\varnothing$.
Let $\eta_1\in C^\infty_c(\mathbb{R}^n)$ be as in~\eqref{eq:def_eta_function} and \eqref{eq:def_cut_off}, and let $f(x)=\eta_1(x)|x|^{\alpha-n}$ for all $x\in\mathbb{R}^n$. On the one hand, we clearly have $f\notin L^{\frac n{n-\alpha}}(\mathbb{R}^{n})$. On the other hand, for all $h\in\mathbb{R}^n$ with $|h|<1$, we can estimate
\begin{align*}
\int_{\mathbb{R}^n}|f(x+h)-f(x)|\,dx
&\le
\int_{\set*{|x|>2|h|}}\big|\eta_1(x+h)|x+h|^{\alpha-n}-\eta_1(x)|x|^{\alpha-n}\big|\,dx\\
&\quad+2
\int_{\set*{|x|<3|h|}}\eta_1(x)|x|^{\alpha-n}\,dx\\
&\le C|h|\int_{\set*{|x|>2|h|}}|x|^{\alpha-n-1}\,dx
+
C\int_{\set*{|x|<3|h|}}|x|^{\alpha-n}\,dx\\
&=
C|h|
\int_{2|h|}^{+\infty}r^{\alpha-2}\,dr
+
C\int_{0}^{3|h|} r^{\alpha-1}\,dr
=C|h|^\alpha,
\end{align*}
where $C>0$ is a constant depending only on~$n$ and~$\alpha$ (that may vary from line to line). Thus $f\in B^\alpha_{1,\infty}(\mathbb{R}^n)$ and the conclusion follows. \end{proof}
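For the reader's convenience, let us verify the claim $f\notin L^{\frac n{n-\alpha}}(\mathbb{R}^{n})$ made in the proof above (we only use that $\eta_1$ is positive in a neighbourhood of the origin). Since $(\alpha-n)\,\frac{n}{n-\alpha}=-n$, for some $c,\varrho>0$ we have
\begin{equation*}
\int_{\set*{|x|<\varrho}}|f(x)|^{\frac n{n-\alpha}}\,dx
\ge c\int_{\set*{|x|<\varrho}}|x|^{-n}\,dx
=c\,n\omega_n\int_0^{\varrho}\frac{dr}{r}
=+\infty.
\end{equation*}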
We conclude with the following result which, again, can be derived from the theory of Bessel potential spaces. We state it here since our distributional approach provides explicit constants (independent of~$p$) in the estimates that may be of some interest. The proof is very similar to the one of~\cite{CS19-2}*{Proposition~3.12} and we leave it to the interested reader.
\begin{proposition}[$S^{\beta,p}(\mathbb{R}^n)\subset S^{\alpha,p}(\mathbb{R}^n)$ for $0<\beta<\alpha<1$] \label{res:Davila_estimate_S_alpha_p} Let $0<\beta<\alpha<1$ and $p\in(1,+\infty)$. If $f\in S^{\alpha,p}(\mathbb{R}^n)$, then $f\in S^{\beta,p}(\mathbb{R}^n)$ with \begin{equation}\label{eq:Davila_estimate_S_alpha_p}
\|\nabla^\beta f\|_{L^p(A;\,\mathbb{R}^n)} \le\frac{n\omega_n\mu_{n,1+\beta-\alpha}}{n+\beta-\alpha}
\left(\frac{r^{\alpha-\beta}}{\alpha-\beta}\,\|\nabla^\alpha f\|_{L^p(\overline{A_r};\,\mathbb{R}^n)}
+c_{n,\alpha}\,\frac{r^{-\beta}}{\beta}\,\|f\|_{L^p(\mathbb{R}^n)}\right) \end{equation} for any $r>0$ and any open set $A\subset\mathbb{R}^n$, where $A_r:=\set*{x\in\mathbb{R}^n : \dist(x,A)<r}$ and $c_{n,\alpha}>0$ is a constant depending only on~$n$ and~$\alpha$. In particular, we have \begin{equation}\label{eq:Davila_estimate_S_alpha_p_right}
\|\nabla^\beta f\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^n)} \le c_{n,\alpha}\,\frac{\mu_{n,1+\beta-\alpha}}{\beta(\alpha-\beta)(n+\beta-\alpha)}\,
\|\nabla^\alpha f\|_{L^p(\mathbb{R}^n;\,\mathbb{R}^n)}^{\beta/\alpha} \|f\|_{L^p(\mathbb{R}^n)}^{(\alpha - \beta)/\alpha}, \end{equation} where $c_{n,\alpha}>0$ is a constant depending only on~$n$ and~$\alpha$. In addition, if $p\in\left(1,\frac{n}{\alpha-\beta}\right)$ and $q=\frac{np}{n-(\alpha-\beta)p}$, then \begin{equation}\label{eq:representation_formula_S_alpha_p} \nabla^\beta f =I_{\alpha-\beta}\nabla^\alpha f \quad \text{a.e.\ in~$\mathbb{R}^n$} \end{equation} and $\nabla^\beta f\in L^q(\mathbb{R}^n;\mathbb{R}^n)$. \end{proposition}
\section{Continuity properties of the map \texorpdfstring{$\alpha\mapsto\nabla^\alpha$}{alpha -> nablaˆalpha}} \label{sec:continuity_props_nabla_alpha}
Here we prove the following continuity properties of the fractional gradient operator.
\begin{theorem}[Continuity properties of $\alpha\mapsto\nabla^\alpha$] Let $\alpha\in(0,1]$ and $p\in[1,+\infty)$.
\begin{enumerate}[(i)]
\item \label{item:cont_nabla_frac_BV} If $f\in BV^\alpha(\mathbb{R}^n)$, then the function \begin{equation*} (0,\alpha)\ni\beta\mapsto\nabla^\beta f\in L^1(\mathbb{R}^n;\mathbb{R}^n) \end{equation*} is continuous. If $f\in BV^\alpha(\mathbb{R}^n)\cap H^1(\mathbb{R}^n)$, then we also have the continuity at $\beta=0$.
\item\label{item:cont_nabla_frac_S} If $f\in S^{\alpha,p}(\mathbb{R}^n)$, then the function \begin{equation*} (0,\alpha]\ni\beta\mapsto\nabla^\beta f\in L^p(\mathbb{R}^n;\mathbb{R}^n) \end{equation*} is continuous. If $p > 1$, then we also have the continuity at $\beta = 0$. \end{enumerate} \end{theorem}
\begin{proof} We prove the two statements separately.
\textit{Proof of~\eqref{item:cont_nabla_frac_BV}}. Let $f\in BV^\alpha(\mathbb{R}^n)$ be fixed. By~\cite{CS19}*{Theorem~3.32}, we know that $f\in W^{\gamma,1}(\mathbb{R}^n)$ for all $\gamma\in(0,\alpha)$. Hence the claimed continuity follows by combining~\cite{CS19-2}*{Lemma~5.1 and Remark~5.2}. If $f\in BV^\alpha(\mathbb{R}^n)\cap H^1(\mathbb{R}^n)$, the claimed conclusion follows from \cref{rm:direct_proof_intro_eq:frac_limit_1}.
\textit{Proof of~\eqref{item:cont_nabla_frac_S}}. The continuity at the boundary points $\alpha=0$ and $\alpha=1$ is already proved in~\cref{res:strong_conv_alpha_0}\eqref{item:strong_conv_alpha_0_Lp} and~\cite{CS19-2}*{Theorem~4.10} respectively, so we can assume $\alpha\in(0,1)$. We can further assume $p>1$ since, thanks to the continuous embedding $S^{\alpha,1}(\mathbb{R}^n)\subset BV^\alpha(\mathbb{R}^n)$ established in~\cite{CS19}*{Theorem~3.25}, the case $p=1$ is already proved in~\eqref{item:cont_nabla_frac_BV}. If $f\in\Lip_c(\mathbb{R}^n)$, then one can prove that $\nabla^\beta f\to\nabla^\alpha f$ in $L^p(\mathbb{R}^n;\mathbb{R}^n)$ as $\beta\to\alpha$ with the strategy adopted in~\cite{CS19-2}*{Section~5.1} up to some minor modifications that we leave to the interested reader. For a general $f\in S^{\alpha,p}(\mathbb{R}^n)$, the claimed continuity follows from \cref{res:L_p_alpha_equal_S_p_alpha} and \cref{res:interpolation_MH}\eqref{item:interpolation_MH_p} arguing as in the proof of \cref{res:interpolation_MH}\eqref{item:interpolation_MH_p}. \end{proof}
\begin{bibdiv} \begin{biblist}
\bib{A75}{book}{
author={Adams, Robert A.},
title={Sobolev spaces},
note={Pure and Applied Mathematics, Vol. 65},
publisher={Academic Press [A subsidiary of Harcourt Brace Jovanovich,
Publishers], New York-London},
date={1975},
}
\bib{ACPS20}{article}{
author={Alberico, Angela},
author={Cianchi, Andrea},
author={Pick, Lubo\v{s}},
author={Slav\'{\i}kov\'{a}, Lenka},
title={On the limit as $s\to 0^+$ of fractional Orlicz-Sobolev spaces},
journal={J. Fourier Anal. Appl.},
volume={26},
date={2020},
number={6},
pages={Paper No. 80, 19},
}
\bib{ADM11}{article}{
author={Ambrosio, Luigi},
author={De Philippis, Guido},
author={Martinazzi, Luca},
title={Gamma-convergence of nonlocal perimeter functionals},
journal={Manuscripta Math.},
volume={134},
date={2011},
number={3-4},
pages={377--403},
}
\bib{AFP00}{book}{
author={Ambrosio, Luigi},
author={Fusco, Nicola},
author={Pallara, Diego},
title={Functions of bounded variation and free discontinuity problems},
series={Oxford Mathematical Monographs},
publisher={The Clarendon Press, Oxford University Press, New York},
date={2000},
}
\bib{A20}{article}{
author={Ambrosio, Vincenzo},
title={On some convergence results for fractional periodic Sobolev spaces},
journal={Opuscula Math.},
volume={40},
date={2020},
number={1},
pages={5--20},
}
\bib{AGMP18}{article}{
author={Antonucci, Clara},
author={Gobbino, Massimo},
author={Migliorini, Matteo},
author={Picenni, Nicola},
title={On the shape factor of interaction laws for a non-local approximation of the Sobolev norm and the total variation},
journal={C. R. Math. Acad. Sci. Paris},
volume={356},
date={2018},
number={8},
pages={859--864},
}
\bib{AGMP20}{article}{
author={Antonucci, Clara},
author={Gobbino, Massimo},
author={Migliorini, Matteo},
author={Picenni, Nicola},
title={Optimal constants for a nonlocal approximation of Sobolev norms and total variation},
journal={Anal. PDE},
volume={13},
date={2020},
number={2},
pages={595--625},
}
\bib{AGP20}{article}{
author={Antonucci, Clara},
author={Gobbino, Massimo},
author={Picenni, Nicola},
title={On the gap between the Gamma-limit and the pointwise limit for a nonlocal approximation of the total variation},
journal={Anal. PDE},
volume={13},
date={2020},
number={3},
pages={627--649},
}
\bib{A64}{book}{
author={Artin, Emil},
title={The Gamma function},
series={Translated by Michael Butler. Athena Series: Selected Topics in Mathematics},
publisher={Holt, Rinehart and Winston, New York-Toronto-London},
date={1964},
pages={vii+39},
}
\bib{AK09}{article}{
author={Aubert, Gilles},
author={Kornprobst, Pierre},
title={Can the nonlocal characterization of Sobolev spaces by Bourgain et al. be useful for solving variational problems?},
journal={SIAM J. Numer. Anal.},
volume={47},
date={2009},
number={2},
pages={844--860},
}
\bib{BMR20}{article}{
author={Bal, Kaushik},
author={Mohanta, Kaushik},
author={Roy, Prosenjit},
title={Bourgain-Brezis-Mironescu domains},
journal={Nonlinear Anal.},
volume={199},
date={2020},
pages={111928, 10},
}
\bib{B11}{article}{
author={Barbieri, Davide},
title={Approximations of Sobolev norms in Carnot groups},
journal={Commun. Contemp. Math.},
volume={13},
date={2011},
number={5},
pages={765--794},
}
\bib{BCM20}{article}{
author={Bellido, Jos\'{e} C.},
author={Cueto, Javier},
author={Mora-Corral, Carlos},
title={Fractional Piola identity and polyconvexity in fractional spaces},
journal={Ann. Inst. H. Poincar\'{e} Anal. Non Lin\'{e}aire},
volume={37},
date={2020},
number={4},
pages={955--981},
}
\bib{BCM21}{article}{
author={Bellido, Jos\'{e} C.},
author={Cueto, Javier},
author={Mora-Corral, Carlos},
title={$\Gamma $-convergence of polyconvex functionals involving $s$-fractional gradients to their local counterparts},
journal={Calc. Var. Partial Differential Equations},
volume={60},
date={2021},
number={1},
pages={Paper No. 7, 29},
}
\bib{BL76}{book}{
author={Bergh, J\"{o}ran},
author={L\"{o}fstr\"{o}m, J\"{o}rgen},
title={Interpolation spaces. An introduction},
note={Grundlehren der Mathematischen Wissenschaften, No. 223},
publisher={Springer-Verlag, Berlin-New York},
date={1976},
}
\bib{BBM01}{article}{
author={Bourgain, Jean},
author={Brezis, Ha\"{\i}m},
author={Mironescu, Petru},
title={Another look at Sobolev spaces},
conference={
title={Optimal control and partial differential equations},
},
book={
publisher={IOS, Amsterdam},
},
date={2001},
pages={439--455},
}
\bib{BBM02-bis}{article}{
author={Bourgain, Jean},
author={Brezis, Ha\"{\i}m},
author={Mironescu, Petru},
title={Limiting embedding theorems for $W^{s,p}$ when $s\uparrow1$ and applications},
note={Dedicated to the memory of Thomas H. Wolff},
journal={J. Anal. Math.},
volume={87},
date={2002},
pages={77--101},
}
\bib{BN06}{article}{
author={Bourgain, Jean},
author={Nguyen, Hoai-Minh},
title={A new characterization of Sobolev spaces},
journal={C. R. Math. Acad. Sci. Paris},
volume={343},
date={2006},
number={2},
pages={75--80},
}
\bib{B02}{article}{
author={Brezis, Ha\"{\i}m},
title={How to recognize constant functions. A connection with Sobolev spaces},
language={Russian, with Russian summary},
journal={Uspekhi Mat. Nauk},
volume={57},
date={2002},
number={4(346)},
pages={59--74},
translation={
journal={Russian Math. Surveys},
volume={57},
date={2002},
number={4},
pages={693--708},
issn={0036-0279},
},
}
\bib{Brezis11}{book}{
author={Brezis, Ha\"{\i}m},
title={Functional analysis, Sobolev spaces and Partial Differential Equations},
series={Universitext},
publisher={Springer, New York},
date={2011},
pages={xiv+599},
}
\bib{B15}{article}{
author={Brezis, Ha\"{\i}m},
title={New approximations of the total variation and filters in imaging},
journal={Atti Accad. Naz. Lincei Rend. Lincei Mat. Appl.},
volume={26},
date={2015},
number={2},
pages={223--240},
}
\bib{BN16}{article}{
author={Brezis, Ha\"{\i}m},
author={Nguyen, Hoai-Minh},
title={The BBM formula revisited},
journal={Atti Accad. Naz. Lincei Rend. Lincei Mat. Appl.},
volume={27},
date={2016},
number={4},
pages={515--533},
}
\bib{BN16-bis}{article}{
author={Brezis, Ha\"{\i}m},
author={Nguyen, Hoai-Minh},
title={Two subtle convex nonlocal approximations of the BV-norm},
journal={Nonlinear Anal.},
volume={137},
date={2016},
pages={222--245},
}
\bib{BN18}{article}{
author={Brezis, Ha\"{\i}m},
author={Nguyen, Hoai-Minh},
title={Non-local functionals related to the total variation and connections with image processing},
journal={Ann. PDE},
volume={4},
date={2018},
number={1},
pages={Paper No. 9, 77},
}
\bib{BN20}{article}{
author={Brezis, Ha\"{\i}m},
author={Nguyen, Hoai-Minh},
title={Non-local, non-convex functionals converging to Sobolev norms},
journal={Nonlinear Anal.},
volume={191},
date={2020},
pages={111626, 9},
}
\bib{BVY20}{article}{
author={Brezis, Ha\"{\i}m},
author={Van Schaftingen, Jean},
author={Yung, Po-Lam},
title={A surprising formula for Sobolev norms},
journal={Proc. Natl. Acad. Sci. USA},
volume={118},
date={2021},
number={8},
pages={Paper No. e2025254118, 6},
}
\bib{CS19}{article}{
author={Comi, Giovanni E.},
author={Stefani, Giorgio},
title={A distributional approach to fractional Sobolev spaces and
fractional variation: Existence of blow-up},
journal={J. Funct. Anal.},
volume={277},
date={2019},
number={10},
pages={3373--3435},
}
\bib{CS19-2}{article}{
author={Comi, Giovanni E.},
author={Stefani, Giorgio},
title={A distributional approach to fractional Sobolev spaces and fractional variation: Asymptotics~I},
journal={Revista Matem\'atica Complutense},
date={2021},
eprint={https://arxiv.org/abs/1910.13419},
status={to appear} }
\bib{D02}{article}{
author={D\'{a}vila, J.},
title={On an open question about functions of bounded variation},
journal={Calc. Var. Partial Differential Equations},
volume={15},
date={2002},
number={4},
pages={519--527},
}
\bib{TGV20}{article}{
author={del Teso, F\'{e}lix},
author={G\'{o}mez-Castro, David},
author={V\'{a}zquez, Juan Luis},
title={Estimates on translations and Taylor expansions in fractional Sobolev spaces},
journal={Nonlinear Anal.},
volume={200},
date={2020},
pages={111995, 12},
}
\bib{DiMS19}{article}{
author={Di Marino, Simone},
author={Squassina, Marco},
title={New characterizations of Sobolev metric spaces},
journal={J. Funct. Anal.},
volume={276},
date={2019},
number={6},
pages={1853--1874},
}
\bib{DiNPV12}{article}{
author={Di Nezza, Eleonora},
author={Palatucci, Giampiero},
author={Valdinoci, Enrico},
title={Hitchhiker's guide to the fractional Sobolev spaces},
journal={Bull. Sci. Math.},
volume={136},
date={2012},
number={5},
pages={521--573},
}
\bib{DM20}{article}{
author={Dominguez, Oscar},
author={Milman, Mario},
title={New Brezis-Van Schaftingen-Yung Sobolev type inequalities connected with maximal inequalities and one parameter families of operators},
date={2020},
eprint={https://arxiv.org/abs/2010.15873},
status={preprint} }
\bib{EG15}{book}{
author={Evans, Lawrence C.},
author={Gariepy, Ronald F.},
title={Measure theory and fine properties of functions},
series={Textbooks in Mathematics},
edition={Revised edition},
publisher={CRC Press, Boca Raton, FL},
date={2015},
}
\bib{FS19}{article}{
author={Fern\'{a}ndez Bonder, Juli\'{a}n},
author={Salort, Ariel M.},
title={Fractional order Orlicz-Sobolev spaces},
journal={J. Funct. Anal.},
volume={277},
date={2019},
number={2},
pages={333--367},
}
\bib{FHR20}{article}{
author={Ferreira, Rita},
author={H\"{a}st\"{o}, Peter},
author={Ribeiro, Ana Margarida},
title={Characterization of generalized Orlicz spaces},
journal={Commun. Contemp. Math.},
volume={22},
date={2020},
number={2},
pages={1850079, 25},
}
\bib{FS82}{book}{
author={Folland, G. B.},
author={Stein, Elias M.},
title={Hardy spaces on homogeneous groups},
series={Mathematical Notes},
volume={28},
publisher={Princeton University Press, Princeton, N.J.; University of Tokyo Press, Tokyo},
date={1982},
}
\bib{FS08}{article}{
author={Frank, Rupert L.},
author={Seiringer, Robert},
title={Non-linear ground state representations and sharp Hardy inequalities},
journal={J. Funct. Anal.},
volume={255},
date={2008},
number={12},
pages={3407--3430},
}
\bib{GR85}{book}{
author={Garc\'{\i}a-Cuerva, Jos\'{e}},
author={Rubio de Francia, Jos\'{e} L.},
title={Weighted norm inequalities and related topics},
series={North-Holland Mathematics Studies},
volume={116},
publisher={North-Holland Publishing Co., Amsterdam},
date={1985},
}
\bib{G14-C}{book}{
author={Grafakos, Loukas},
title={Classical Fourier analysis},
series={Graduate Texts in Mathematics},
volume={249},
edition={3},
publisher={Springer, New York},
date={2014},
}
\bib{G14-M}{book}{
author={Grafakos, Loukas},
title={Modern Fourier analysis},
series={Graduate Texts in Mathematics},
volume={250},
edition={3},
publisher={Springer, New York},
date={2014},
}
\bib{H59}{article}{
author={Horv\'ath, J.},
title={On some composition formulas},
journal={Proc. Amer. Math. Soc.},
volume={10},
date={1959},
pages={433--437},
}
\bib{KL05}{article}{
author={Kolyada, V. I.},
author={Lerner, A. K.},
title={On limiting embeddings of Besov spaces},
journal={Studia Math.},
volume={171},
date={2005},
number={1},
pages={1--13},
}
\bib{KM19}{article}{
author={Kreuml, Andreas},
author={Mordhorst, Olaf},
title={Fractional Sobolev norms and BV functions on manifolds},
journal={Nonlinear Anal.},
volume={187},
date={2019},
pages={450--466},
}
\bib{LMP19}{article}{
author={Lam, Nguyen},
author={Maalaoui, Ali},
author={Pinamonti, Andrea},
title={Characterizations of anisotropic high order Sobolev spaces},
journal={Asymptot. Anal.},
volume={113},
date={2019},
number={4},
pages={239--260},
}
\bib{L09}{book}{
author={Leoni, Giovanni},
title={A first course in Sobolev spaces},
series={Graduate Studies in Mathematics},
volume={105},
publisher={American Mathematical Society, Providence, RI},
date={2009},
}
\bib{LS11}{article}{
author={Leoni, Giovanni},
author={Spector, Daniel},
title={Characterization of Sobolev and $BV$ spaces},
journal={J. Funct. Anal.},
volume={261},
date={2011},
number={10},
pages={2926--2958},
}
\bib{LS14}{article}{
author={Leoni, Giovanni},
author={Spector, Daniel},
title={Corrigendum to ``Characterization of Sobolev and $BV$ spaces'' [J. Funct. Anal. 261 (10) (2011) 2926--2958]},
journal={J. Funct. Anal.},
volume={266},
date={2014},
number={2},
pages={1106--1114},
}
\bib{MP19}{article}{
author={Maalaoui, Ali},
author={Pinamonti, Andrea},
title={Interpolations and fractional Sobolev spaces in Carnot groups},
journal={Nonlinear Anal.},
volume={179},
date={2019},
pages={91--104},
}
\bib{MS02}{article}{
author={Maz\cprime ya, V.},
author={Shaposhnikova, T.},
title={On the Bourgain, Brezis, and Mironescu theorem concerning limiting embeddings of fractional Sobolev spaces},
journal={J. Funct. Anal.},
volume={195},
date={2002},
number={2},
pages={230--238},
}
\bib{MS03}{article}{
author={Maz\cprime ya, V.},
author={Shaposhnikova, T.},
title={Erratum to: ``On the Bourgain, Brezis and Mironescu theorem concerning limiting embeddings of fractional Sobolev spaces'' [J. Funct. Anal. {\bf 195} (2002), no. 2, 230--238]},
journal={J. Funct. Anal.},
volume={201},
date={2003},
number={1},
pages={298--300},
}
\bib{M05}{article}{
author={Milman, Mario},
title={Notes on limits of Sobolev spaces and the continuity of interpolation scales},
journal={Trans. Amer. Math. Soc.},
volume={357},
date={2005},
number={9},
pages={3425--3442},
}
\bib{N07}{article}{
author={Nguyen, Hoai-Minh},
title={$\Gamma$-convergence and Sobolev norms},
journal={C. R. Math. Acad. Sci. Paris},
volume={345},
date={2007},
number={12},
pages={679--684},
}
\bib{N08}{article}{
author={Nguyen, Hoai-Minh},
title={Further characterizations of Sobolev spaces},
journal={J. Eur. Math. Soc. (JEMS)},
volume={10},
date={2008},
number={1},
pages={191--229},
}
\bib{N11}{article}{
author={Nguyen, Hoai-Minh},
title={$\Gamma$-convergence, Sobolev norms, and BV functions},
journal={Duke Math. J.},
volume={157},
date={2011},
number={3},
pages={495--533},
}
\bib{NS19}{article}{
author={Nguyen, Hoai-Minh},
author={Squassina, Marco},
title={On anisotropic Sobolev spaces},
journal={Commun. Contemp. Math.},
volume={21},
date={2019},
number={1},
pages={1850017, 13},
}
\bib{PSV17}{article}{
author={Pinamonti, Andrea},
author={Squassina, Marco},
author={Vecchi, Eugenio},
title={The Maz\cprime ya-Shaposhnikova limit in the magnetic setting},
journal={J. Math. Anal. Appl.},
volume={449},
date={2017},
number={2},
pages={1152--1159},
}
\bib{PSV19}{article}{
author={Pinamonti, Andrea},
author={Squassina, Marco},
author={Vecchi, Eugenio},
title={Magnetic BV-functions and the Bourgain-Brezis-Mironescu formula},
journal={Adv. Calc. Var.},
volume={12},
date={2019},
number={3},
pages={225--252},
}
\bib{P04-1}{article}{
author={Ponce, Augusto C.},
title={An estimate in the spirit of Poincar\'{e}'s inequality},
journal={J. Eur. Math. Soc. (JEMS)},
volume={6},
date={2004},
number={1},
pages={1--15},
}
\bib{P04-2}{article}{
author={Ponce, Augusto C.},
title={A new approach to Sobolev spaces and connections to $\Gamma$-convergence},
journal={Calc. Var. Partial Differential Equations},
volume={19},
date={2004},
number={3},
pages={229--255},
}
\bib{P16}{book}{
author={Ponce, Augusto C.},
title={Elliptic PDEs, measures and capacities},
series={EMS Tracts in Mathematics},
volume={23},
publisher={European Mathematical Society (EMS), Z\"{u}rich},
date={2016},
}
\bib{PS17}{article}{
author={Ponce, Augusto C.},
author={Spector, Daniel},
title={A note on the fractional perimeter and interpolation},
journal={C. R. Math. Acad. Sci. Paris},
volume={355},
date={2017},
number={9},
pages={960--965},
}
\bib{SKM93}{book}{
author={Samko, Stefan G.},
author={Kilbas, Anatoly A.},
author={Marichev, Oleg I.},
title={Fractional integrals and derivatives},
publisher={Gordon and Breach Science Publishers, Yverdon},
date={1993},
}
\bib{SSS15}{article}{
author={Schikorra, Armin},
author={Shieh, Tien-Tsan},
author={Spector, Daniel},
title={$L^p$ theory for fractional gradient PDE with $VMO$ coefficients},
journal={Atti Accad. Naz. Lincei Rend. Lincei Mat. Appl.},
volume={26},
date={2015},
number={4},
pages={433--443},
}
\bib{SSS18}{article}{
author={Schikorra, Armin},
author={Shieh, Tien-Tsan},
author={Spector, Daniel E.},
title={Regularity for a fractional $p$-Laplace equation},
journal={Commun. Contemp. Math.},
volume={20},
date={2018},
number={1},
pages={1750003, 6},
}
\bib{SSS17}{article}{
author={Schikorra, Armin},
author={Spector, Daniel},
author={Van Schaftingen, Jean},
title={An $L^1$-type estimate for Riesz potentials},
journal={Rev. Mat. Iberoam.},
volume={33},
date={2017},
number={1},
pages={291--303},
}
\bib{SS15}{article}{
author={Shieh, Tien-Tsan},
author={Spector, Daniel E.},
title={On a new class of fractional partial differential equations},
journal={Adv. Calc. Var.},
volume={8},
date={2015},
number={4},
pages={321--336},
}
\bib{SS18}{article}{
author={Shieh, Tien-Tsan},
author={Spector, Daniel E.},
title={On a new class of fractional partial differential equations II},
journal={Adv. Calc. Var.},
volume={11},
date={2018},
number={3},
pages={289--307},
}
\bib{Sil19}{article}{
author={\v{S}ilhav\'y, Miroslav},
title={Fractional vector analysis based on invariance requirements (Critique of coordinate approaches)},
date={2019},
journal={Continuum Mech. Thermodyn.},
pages={1--22},
}
\bib{S19}{article}{
author={Spector, Daniel},
title={A noninequality for the fractional gradient},
journal={Port. Math.},
volume={76},
date={2019},
number={2},
pages={153--168},
}
\bib{S18}{article}{
author={Spector, Daniel},
title={An optimal Sobolev embedding for $L^1$},
journal={J. Funct. Anal.},
volume={279},
date={2020},
number={3},
pages={108559, 26},
}
\bib{SV16}{article}{
author={Squassina, Marco},
author={Volzone, Bruno},
title={Bourgain-Br\'{e}zis-Mironescu formula for magnetic operators},
journal={C. R. Math. Acad. Sci. Paris},
volume={354},
date={2016},
number={8},
pages={825--831},
}
\bib{S70}{book}{
author={Stein, Elias M.},
title={Singular integrals and differentiability properties of functions},
series={Princeton Mathematical Series, No. 30},
publisher={Princeton University Press, Princeton, N.J.},
date={1970},
}
\bib{S93}{book}{
author={Stein, Elias M.},
title={Harmonic analysis: real-variable methods, orthogonality, and oscillatory integrals},
series={Princeton Mathematical Series},
volume={43},
publisher={Princeton University Press, Princeton, NJ},
date={1993},
}
\bib{Str90}{article}{
author={Strichartz, Robert S.},
title={$H^p$ Sobolev spaces},
journal={Colloq. Math.},
volume={60/61},
date={1990},
number={1},
pages={129--139},
}
\bib{T11}{article}{
author={Triebel, Hans},
title={Limits of Besov norms},
journal={Arch. Math. (Basel)},
volume={96},
date={2011},
number={2},
pages={169--175},
}
\end{biblist} \end{bibdiv}
\end{document}
\begin{document}
\sloppy
\begin{abstract} We obtain estimates for integrals of derivatives of rational functions in multiply connected domains in the plane. A sharp order of the growth is found for the integral of the modulus of the derivative of a finite Blaschke product in the unit disk. We also extend the results of E.\,P. Dolzhenko about the integrals of the derivatives of rational functions to a wider class of domains, namely, to domains bounded by rectifiable curves without zero interior angles, and show the sharpness of the obtained results.
\end{abstract}
\maketitle
\section{Introduction}
Seventy years ago S.\,N.~Mergelyan \cite{Mer} showed that there exists a bounded analytic function $f$ in the disk
$\mathbb{D}=\{|z|<1\}$ such that $$
I(f):=\int_\mathbb{D} |f'(z)|\, dA(z) = \infty, $$ where $dA(z) = \frac{1}{\pi} dxdy$, $z=x+iy$.
This problem was further investigated by W.~Rudin \cite{Rud} who constructed an infinite Blaschke product $$
B(z)=\prod_{k=1}^\infty \frac{|z_k|}{z_k}\frac{z_k-z}{1-\overline{z_k} z} $$ such that
$I(B)=\infty$ and, moreover, $\int_{0}^1 |B'(re^{i\theta})| dr = \infty$ for a.e. $\theta\in[0, 2\pi]$. A similar but more explicit example was given by G.~Piranian \cite{Pir}.
It is then natural to ask what happens if $B$ is a finite Blaschke product of degree $n$. It is obvious that, for any fixed $n$, the quantity $I(B)$ is bounded; however, it cannot be bounded uniformly in $n$, since any bounded function in $\mathbb{D}$ is a locally uniform limit of finite Blaschke products. We find the sharp order of growth for such integrals. Namely, we have
\\ {\bf Theorem 1.} {\it
Let $B$ be a finite Blaschke product of degree $n$. Then
\begin{equation} \label{upper-estimate}
I(B) \leq \pi(1+\sqrt{\log n}).
\end{equation} On the other hand, there exists an absolute constant $c>0$ such that for any $n \in \mathbb{N}$
there exists a finite Blaschke product $B$ of degree $n$ satisfying $I(B) \ge c(1+\sqrt{\log n})$.}
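As a numerical illustration (not used anywhere in the proofs), one can approximate $I(B)$ directly. The following Python sketch evaluates a finite Blaschke product on a polar grid and estimates $I(B)$ by a midpoint rule, computing $B'$ with a central difference; the grid sizes and the sample zeros are arbitrary choices made for this example.

```python
import numpy as np

def blaschke(z, zeros):
    """Finite Blaschke product with the given zeros in the unit disk
    (the unimodular factors |z_k|/z_k are omitted; they do not affect |B'|)."""
    B = np.ones_like(z, dtype=complex)
    for a in zeros:
        B *= (z - a) / (1.0 - np.conj(a) * z)
    return B

def I_of_B(zeros, nr=400, nt=400):
    """Approximate I(B) = (1/pi) * integral over D of |B'(z)| dx dy
    on a polar midpoint grid, with a central difference for B'."""
    r = (np.arange(nr) + 0.5) / nr                # radial midpoints in (0, 1)
    t = 2.0 * np.pi * np.arange(nt) / nt
    R, T = np.meshgrid(r, t, indexing="ij")
    z = R * np.exp(1j * T)
    h = 1e-6  # B is analytic, so a difference in the real direction suffices
    dB = (blaschke(z + h, zeros) - blaschke(z - h, zeros)) / (2.0 * h)
    return float(np.sum(np.abs(dB) * R) / nr * (2.0 * np.pi / nt) / np.pi)

# For B(z) = z we recover I(B) = 1; a degree-8 example stays well below
# the bound pi * (1 + sqrt(log 8)) from Theorem 1.
print(I_of_B([0.0]))
zeros8 = [0.5 * np.exp(2j * np.pi * k / 8) for k in range(8)]
print(I_of_B(zeros8), np.pi * (1.0 + np.sqrt(np.log(8.0))))
```

For $B(z)=z$ the routine returns $1$ up to discretization error, and examples of moderate degree stay well inside the bound \eqref{upper-estimate}.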
The proof of sharpness of this inequality is based on subtle results of N.\,G.~Makarov \cite{Mak} and R.~Ba\~nuelos and C.\,N.~Moore \cite{BanMoo} on boundary behaviour of functions from the Bloch space.
It should be noted that there exists a vast literature dealing with the membership of the derivatives of the Blaschke products to various functional spaces, e.g., Bergman-type spaces (see \cite{av, prot1, prot2} and the references therein). However, most of these results concern infinite products and the conditions are formulated in terms of their zeros.
Since a Blaschke product is a bounded rational function in the unit disk, the problem about the estimates of the derivatives of Blaschke products is related to a more general question about the integrals of the derivatives of bounded rational functions, studied for the first time by E.\,P.~Dolzhenko \cite{Dol} for sufficiently smooth domains. We will say that a curve belongs to the class K if it is a closed Jordan curve with continuous curvature $k(s)$ satisfying a H\"older condition as a function of the arc length $s$. Let $G$ be a finitely connected domain whose boundary curves belong to the class K. Assume that $1\le p \le 2$ and let $R$ be a rational function of degree at most $n$ with poles outside $\overline{G}$. Dolzhenko \cite[Theorem 2.2]{Dol} showed that there exists a constant $C$ depending only on the domain $G$ and on $p$ such that \begin{equation}
\label{d1}
\int_{G}|R'(w)|^p\, dA(w) \leq C n^{p-1} \|R\|_{H^\infty(G)}^p, \qquad p \in (1,2], \end{equation} \begin{equation}
\label{d2}
\int_{G}|R'(w)|\, dA(w) \leq C \ln (n+1) \|R\|_{H^\infty(G)}. \end{equation} Here we denote by $H^\infty(G)$ the space of all bounded analytic functions in $G$,
and $\|f\|_{H^\infty(G)} = \sup_{w\in G} |f(w)|$.
Later, inequalities for the derivatives of rational functions (mainly in the disk) were studied in the papers by V.\,V.~Peller \cite{pel}, S.~Semmes \cite{sem}, A.\,A.~Pekarskii \cite{pek, pek1}, V.\,I.~Danchenko \cite{dan0, dan} and by many other authors (see, e.g., \cite{dyn1, dyn2, bz0, bz1, bz2}). A short proof of the Dolzhenko inequalities for the case of the disk when the $H^\infty$-norm is replaced by the weaker $BMOA$-norm can be found in \cite{bz1}.
In the present article the inequalities \eqref{d1} and \eqref{d2} are proved under substantially weaker restrictions on the domain, namely, under the condition that the domain has no zero interior angles (more precisely, for the John class domains -- see the definition in \S 3).
\\ {\bf Theorem 2.} {\it Let $G$ be a finitely connected John domain with rectifiable boundary and let $1\le p \le 2$. Then there exists a constant $C>0$, depending on the domain $G$ and on $p$, such that for any rational function $R$ of degree at most $n$ the inequalities \eqref{d1} and \eqref{d2} hold.}
The sharpness of \eqref{d1} is already seen in the simplest example of the function $R(z) = z^n$ in the disk (clearly, polynomials may be considered as a special case of rational functions, with the pole at infinity). The question about the sharpness of the estimate \eqref{d2} under the conditions of Theorem 2 remains open. However, it turns out that under some additional regularity of the domain $G$ inequality \eqref{d2} can be improved.
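Indeed, for $R(z)=z^n$ we have $\|R\|_{H^\infty(\mathbb{D})}=1$, while
$$
\int_{\mathbb{D}}|R'(z)|^p\, dA(z)= 2 n^p \int_0^1 r^{(n-1)p+1}\, dr=\frac{2n^p}{(n-1)p+2}\asymp n^{p-1}, \qquad n\to\infty,
$$
which matches the order of growth in \eqref{d1}.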
\\ {\bf Theorem 3.} {\it Let $G$ be a simply connected domain such that $\varphi' \in H^2$, where $\varphi$ is the conformal map of the disk $\mathbb{D}$ onto $G$.
Then there exists a constant $C>0$ depending on the domain $G$ such that
for any rational function $R$ of degree at most $n$ one has
\begin{equation}
\label{g1}
\int_{G}|R'(w)|\, dA(w) \leq C \sqrt{\ln (n+1)} \|R\|_{H^\infty(G)}. \end{equation}}
As follows from Theorem 1, the dependence on $n$ in this inequality is sharp.
Finally, we give a statement for the case $p> 2$. Here $q$ is the conjugate exponent, i.e., $1/p+1/q=1$.
\\ {\bf Theorem 4. } { \it Let $G$ be a bounded simply connected domain and put
$G_\rho = \{ z\in G: {\rm dist}\,(z, \partial G) >\rho\}$. Then for any rational function $R$ of degree at most $n$
and $p>2$ one has
\begin{equation}
\label{g2}
\|R'\|_{A^p(G_\rho)} = \bigg(\int_{G_\rho}|R'(w)|^p\, dA(w)\bigg)^{1/p} \le n^{1/p} \rho^{1/p-1/q} \|R\|_{H^\infty(G)}. \end{equation} }
In \cite{Dol} inequality \eqref{g2} was established for domains of the class K, but, as our result shows, no restrictions on the regularity of the domain are required.
In \cite{bz2} another generalization of the Dolzhenko inequality for the case $p>2$ was obtained for the rational functions in the disk. Let $R$ be a rational function of degree at most $n$
whose poles lie in the complement of the disk $\{|z| <1+\rho\}$. The following inequality follows directly from \cite[Theorem 8.2]{bz1}: $$
\|R'\|_{A^p(\mathbb{D})} \le C(p) n^{1/q} \rho^{1/p-1/q} \|R\|_{BMOA}; $$ here $BMOA$ denotes the space of analytic functions of bounded mean oscillation in the disk. It is interesting to note that the dependence on $n$ in Theorem 4 is substantially weaker (since in this case the function $R$ is assumed to be bounded on a larger set).
In \S 5 more general inequalities are obtained for the weighted norms of the derivatives of rational functions, where the weight is given as some power of the distance to the boundary of the domain.
A suitable toolbox for the study of such inequalities is provided by the theory of the Hardy spaces. For $p>0$ the Hardy space $H^p$ is the set of all analytic functions in $\mathbb{D}$ satisfying
$\|f\|_{H^p}<\infty$, where $$
\|f\|^p_{H^p}:=\sup_{0<r<1}\frac{1}{2\pi}\int_0^{2\pi}|f(re^{it})|^p dt. $$ Note that for $p \ge 1$ the last quantity defines a norm with respect to which $H^p$ is a Banach space.
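For later use we note that for $p=2$ the Hardy norm is computed directly from the Taylor coefficients: by the Parseval identity, for $f(z) = \sum_{k\ge 0} a_k z^k$, $$
\|f\|^2_{H^2} = \sum_{k=0}^\infty |a_k|^2. $$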
\section{Estimate for the integral of the modulus of the derivative of a finite Blaschke product }
In the proof of Theorem 1 we will use the following simple lemma.
\\ {\bf Lemma 1.} {\it Assume that the function $g(z) = \sum_{k=0}^\infty b_k z^k$
is analytic in the disk $\mathbb{D}$. If $\|g\|_\infty \le 1$
and $p(z) = \sum_{k=0}^n b_k z^k$, $n\ge 2$, then there exists an absolute constant $C_0$ such that
$$
|p(z)| \le C_0, \qquad |z|\le 1-2\log n/ n,
$$
and
$$
|g'(z) - p'(z)| \le C_0, \qquad |z|\le 1-2\log n/ n.
$$}
\noindent
{\bf Proof.} Since $|b_k| \le 1$ and $|z|^k \le 1/n^2$ for $|z|\le 1-2\log n/ n$ and
$k\ge n$, the function $|g(z) - p(z)|$ admits a uniform estimate for $|z|\le 1-2\log n/ n$, and the first estimate follows.
Clearly, $$
\sum_{k=n}^\infty (k+1)|b_{k+1} z^k| \le \frac{|z|^n}{(1-|z|)^2} +\frac{n|z|^n}{1-|z|} \le const, \qquad |z|\le 1-2\log n/ n, $$ and the second inequality is proved.
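For completeness, here is the elementary bound behind both estimates. For $|z| \le 1-2\log n/n$ one has, using $1-x \le e^{-x}$, $$
|z|^n \le \Big( 1- \frac{2\log n}{n}\Big)^n \le e^{-2\log n} = \frac{1}{n^2}, \qquad \frac{1}{1-|z|} \le \frac{n}{2\log n}. $$ Hence the tail in the first estimate satisfies $\sum_{k>n} |b_k z^k| \le |z|^{n+1}/(1-|z|) \le 1/(2n\log n)$, while the right-hand side of the last display does not exceed $$
\frac{1}{n^2}\cdot\frac{n^2}{4\log^2 n} + \frac{n}{n^2}\cdot \frac{n}{2\log n} = \frac{1}{4\log^2 n} + \frac{1}{2\log n}. $$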
\noindent {\bf Proof of Theorem 1.} We use the following well-known facts: $$
\int_0^{2\pi} |B'(re^{it})| dt \le 2 \pi n, \qquad r \in[0,1], $$ for any finite Blaschke product of degree at most $n$ and \begin{equation} \label{area}
\int_\mathbb{D} |f'(z)|^2 (1-|z|^2)\, dA(z) =
\sum_{n=1}^\infty \frac{n}{n+1}|a_n|^2 \le \|f\|^2_{H^2} \end{equation} for any function $f(z) = \sum_{n\ge 0} a_n z^n$ in the Hardy space $H^2$.
Let $s \in [0,1]$. We have $$
\int_{\{s<|z| <1\}} |B'(z)|\, dA(z)= \int_0^{2\pi} \int_s^1 |B'(re^{it})| r drdt \leq 2\pi n \int_s^1 r dr=\pi n (1-s^2). $$ In the remaining part we apply the Cauchy--Schwarz inequality: $$
\int_{\{0 <|z| \le s\}} |B'(z)|\, dA(z) $$ $$
\le \bigg( \int_{\{0 <|z| \le s\}}
|B'(z)|^2 (1-|z|^2)\, dA(z) \bigg)^{1/2} \bigg( \int_{\{0 <|z| \le s\}} \frac{dA(z)}{1-|z|^2}
\bigg)^{1/2} $$ $$
\leq \sqrt{\pi}\sqrt{2\pi \int_0^s \frac{rdr}{1-r^2}} = \pi \sqrt{ \log\frac{1}{1-s^2}}. $$ Thus, $$ I(B) \leq \pi n (1-s^2)+\pi \sqrt{ \log\frac{1}{1-s^2}}. $$ Taking $s^2=1-1/n$ we obtain \eqref{upper-estimate}.
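Let us check the optimization step. With the choice $s^2 = 1-1/n$ the two terms are balanced up to a logarithm: $$
\pi n (1-s^2) = \pi, \qquad \pi \sqrt{\log\frac{1}{1-s^2}} = \pi\sqrt{\log n}, $$ so that $I(B) \le \pi(1+\sqrt{\log n})$, which gives \eqref{upper-estimate} up to the form of the constant.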
The estimate from below can be obtained by the methods based on the Makarov law of the iterated logarithm \cite{Mak}. Recall that the Bloch class $\mathcal{B}$ consists of functions
analytic in $\mathbb{D}$ with finite seminorm $ \|f\|_{\mathcal{B}} = \sup_{z\in \mathbb{D}} (1-|z|^2) |f'(z)|$. In \cite{BanMoo} R.~Ba\~nuelos and C.\,N.~Moore, answering a question of N.\,G.~Makarov and F.~Przytycki, constructed a function $f(z) = \sum_{k=1}^\infty a_k z^k$ in the Bloch class such that its asymptotic entropy admits the lower bound $$
\liminf_{r\to 1-} \frac{\sum_{k=1}^\infty |a_k|^2 r^{2k}}{\log\frac{1}{1-r}} >0, $$
whereas for all $\zeta$ with $|\zeta| =1$ one has $$ \limsup_{r\to 1-} \frac{f(r \zeta)}{\sqrt{\log\frac{1}{1-r} \log\log\log \frac{1}{1-r}}} =0. $$ Moreover, in \cite[p. 852--853]{BanMoo} a sequence of polynomials $$ p_n(z) = \sum_{k=4}^{4^{n+1} -1} a_k z^k = \sum_{j=1}^{n} b_j(z), \qquad \text{where} \qquad b_j(z) = \sum_{k=4^j}^{4^{j+1} -1} a_k z^k, $$
is constructed such that $\|b_j\|_\infty \le 1$, $$
\sum_{k=1}^{4^{n+1} -1} |a_k|^2 \ge c \log m, \qquad
\|p_n\|_{H^\infty} \le C\sqrt{\log m}, $$ where $m = {\rm deg}\, p_n = 4^{n+1} -1$ and $C, c>0$ are some absolute positive constants.
It is not difficult to deduce from $\|b_j\|_\infty \le 1$ that $\sup_n \|p_n\|_{\mathcal{B}} <\infty$.
Indeed, by the Schwarz lemma $|b_j(rz)| \le r^{4^j}$, whereas, by the classical Bernstein inequality,
$|b_j' (rz)| \le 4^j r^{4^j}$. It is well known (and easy to show) that $\sum_{j\ge 1} 4^j r^{4^j} \le C_1/(1-r^2)$ for some constant $C_1>0$ and, thus,
$\sup_n \|p_n\|_{\mathcal{B}} <\infty$. Without loss of generality we may assume that $\|p_n\|_{\mathcal{B}} \le 1$.
Let $r=1-1/m$. Then, for some absolute constants $C', c'>0$, $$
c' \log\frac{1}{1-r} \leq \sum_{k=1}^m |a_k|^2 r^{2k} \leq 2 \int_{|z|<r}|p_n'(z)|^2(1-|z|^2)\,dA(z)
\leq C'\int_{|z|<r}|p_n'(z)|\,dA(z). $$
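The last inequality in this chain uses only the normalization of the Bloch seminorm: since $\|p_n\|_{\mathcal{B}} \le 1$, $$
|p_n'(z)|^2 (1-|z|^2) = |p_n'(z)| \cdot |p_n'(z)|(1-|z|^2) \le \|p_n\|_{\mathcal{B}}\, |p_n'(z)| \le |p_n'(z)|, $$ so the weighted square integral is dominated by the integral of $|p_n'|$.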
Now put $q_n = p_n/(C\sqrt{\log m})$. Then $\|q_n\|_\infty \le 1$ and $$
\int_{|z|<1-1/m}|q_n'(z)|\, dA(z) \ge c_1\sqrt{\log m} $$ for some absolute constant $c_1>0$. Since $$
\int_{1-2\log m/m < |z|<1-1/m}|q_n'(z)| \, dA(z) \leq \int_{1-2\log m /m < |z|<1-1/m} \frac{dA(z)}{1-|z|^2}=O(\log\log m), $$ we have, for some $c_2>0$, \begin{equation}
\label{eq1}
\int_{|z|<1-2\log m/m}|q_n'(z)|\,dA(z) \ge c_2 \sqrt{\log m}. \end{equation}
Let $B$ be a Blaschke product of degree at most $m+1$ such that its first $m$ Taylor coefficients coincide with respective coefficients of the polynomial $q_n$. By Lemma 1, $$
|q_n'(z)-B'(z)| \leq 2C_0, \qquad |z| \leq 1-2\log m/m. $$ Hence, it follows from \eqref{eq1} that $$
\int_{\mathbb{D}}|B'(z)|\, dA(z) \ge c_3\sqrt{\log m} $$ for some absolute constant $c_3$. Theorem 1 is proved.
\section{Estimates for integrals of rational functions}
Recall that a finitely connected domain $\Omega$ is said to be a {\it John domain} if there exists a constant $C>0$ such that any points $a,b\in \Omega$ can be connected by a curve $\gamma$ in $\Omega$ with the following property: for any $x\in\gamma$, $$ \min \big( {\rm diam}\,\gamma(a,x), {\rm diam}\,\gamma(x,b) \big) \le C {\rm dist}\, (x, \partial \Omega). $$ Here $\gamma (a,x)$ and $\gamma (x,b)$ denote the corresponding subarcs of $\gamma$. For equivalent definitions and properties of John domains see, e.g., \cite{MaSa, Pom}. Essentially, this definition means that the domain has no zero interior angles. In particular, a domain is a John domain if it satisfies the cone condition: every boundary point can be touched from inside the domain by a sufficiently small triangle with fixed angles.
In what follows we will essentially use the following property of simply connected John domains: if $\varphi$ is the conformal map of $\mathbb{D}$ onto a simply connected John domain, then \begin{equation}
\label{hold}
|\varphi'(z)| \le \frac{C}{(1-|z|)^{\alpha}} \end{equation} for some $\alpha\in(0,1)$ and $C>0$ (see \cite[pp. 96--100]{Pom}).
In the proof of Theorem 2 we will use the following simple lemma:
\\ {\bf Lemma 2.} {\it Let $g$ be a bounded and at most $n$-valent function in $\mathbb{D}$. Then
$$
\frac{1}{2\pi} \int_0^{2\pi} |g'(re^{it})|^2 dt \le \frac{n}{1-r} \|g\|^2_{H^\infty(\mathbb{D})}.
$$}
\noindent {\bf Proof.} Let $g(z) = \sum_{k=0}^\infty a_kz^k$. Then $$
\frac{1}{2\pi} \int_0^{2\pi} |g'(re^{it})|^2 dt = \sum_{k=1}^\infty k^2|a_k|^2 r^{2k} \leq
\frac{1}{1-r} \sum_{k=1}^\infty k|a_k|^2. $$ Here we used the elementary inequality $kr^{k-1}(1-r) \leq 1$, $k\ge 1$, $r \in [0,1)$. Since $g$ is at most $n$-valent in $\mathbb{D}$, we have, by virtue of the classical area theorem, $$
\sum_{k=1}^\infty k|a_k|^2 \leq n \|g\|^2_{H^\infty(\mathbb{D})}. $$
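The area-theorem step admits a one-line justification: the area of the image $g(\mathbb{D})$, counted with multiplicities, equals $$
\int_{\mathbb{D}} |g'(z)|^2\, dA(z) = \pi \sum_{k=1}^\infty k |a_k|^2, $$ and since $g$ is at most $n$-valent and maps $\mathbb{D}$ into the disk of radius $\|g\|_{H^\infty(\mathbb{D})}$, this area does not exceed $n \pi \|g\|^2_{H^\infty(\mathbb{D})}$.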
\noindent {\bf Proof of Theorem 2.} Assume first that the domain $G$ is bounded. Without loss of generality we may assume that $G$ is a simply connected domain with a rectifiable boundary. Indeed, making smooth cuts (and controlling the angles) one can easily represent our domain as a finite union of simply connected John domains.
Let $w=\varphi(z)$ be a conformal map of $\mathbb{D}$ onto $G$. Since the boundary of $G$ is rectifiable, we have $\varphi' \in H^1$. Also, $\varphi'$ satisfies inequality \eqref{hold}.
By the change of the variable, $$
\int_{G}|R'(w)|^p\, dA(w) = \int_{\mathbb{D}}|R'(\varphi(z))|^p|\varphi'(z)|^2\, dA(z)=
\int_{\mathbb{D}} |(R\circ \varphi)'(z)|^p |\varphi'(z)|^{2-p} dA(z). $$
It is obvious that for $p=2$ the last integral does not exceed $n \|R\|_{H^\infty(G)}^2$, since the function $R\circ\varphi$ is at most $n$-valent in the disk $\mathbb{D}$.
Let us split the last integral into the integrals over the set $\{|z| \le r_n\}$ and over the set $\{r_n < |z| < 1\}$, where $r_n = 1- \frac{1}{(n+1)^{K}}$ and the number $K>0$ is to be chosen later.
\\
{\bf Estimate of the integral over the set $\{|z| \le r_n\}$.} Let $M = \|R\|_{H^\infty(G)}$. Set $$
J := \int_{\{|z|\le r_n\}} |(R\circ \varphi)'(z)|^p |\varphi'(z)|^{2-p} dA(z). $$
For $p=1$ we use the estimate $(1-|z|^2) |(R\circ \varphi)'(z)| \le M$. We have $$ \begin{aligned}
J \le \frac{M}{\pi} \int_0^{r_n} & \frac{1}{1-r}\int_0^{2\pi} |\varphi'(re^{it})| dt dr
\\ &
\le 2 \|\varphi'\|_{H^1} M
\int_0^{r_n} \frac{dr}{1-r} = 2K \|\varphi'\|_{H^1} \log(n+1) M. \end{aligned} $$
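Here the last equality is just the logarithmic integral evaluated at $r_n = 1-(n+1)^{-K}$: $$
\int_0^{r_n} \frac{dr}{1-r} = \log\frac{1}{1-r_n} = \log (n+1)^{K} = K \log (n+1). $$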
In the case $1<p<2$ consider separately the integrals over the sets
$\{|z| \le 1-\frac{1}{n+1}\}$ and ${\{1-\frac{1}{n+1} <|z| \le r_n\}}$. Since $\varphi' \in H^1$, we have
$\varphi'\in H^{2-p}$ and $\|\varphi'\|_{H^{2-p}} \le \|\varphi'\|_{H^1}$. Hence, $$ \begin{aligned}
\int_{\big\{|z| \le 1- \frac{1}{n+1}\big\} } |(R\circ \varphi)'(z)|^p |\varphi'(z)|^{2-p} dA(z)
& \le 2\|\varphi'\|_{H^1}^{2-p} \int_0^{1-\frac{1}{n+1}} \frac{M^p}{(1-r)^p} dr \\
& =
2\|\varphi'\|_{H^1}^{2-p} (p-1)^{-1} (n+1)^{p-1} M^p. \end{aligned} $$
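The $n$-dependence in the last line comes from the explicit antiderivative; dropping the subtracted constant only enlarges the bound: $$
\int_0^{1-\frac{1}{n+1}} \frac{dr}{(1-r)^p} = \frac{(n+1)^{p-1} - 1}{p-1} \le \frac{(n+1)^{p-1}}{p-1}. $$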
To estimate the integral over the set $\{1-\frac{1}{n+1} <|z| \le r_n\}$ we use the H\"older inequality with exponents $(2-p)^{-1}$ and $(p-1)^{-1}$: $$ J \le
2 \int_{1-\frac{1}{n+1}}^{r_n} \bigg(\frac{1}{2\pi} \int_0^{2\pi} |(R\circ \varphi)'(re^{it})|^{\frac{p}{p-1}} dt\bigg)^{p-1}
\bigg(\frac{1}{2\pi} \int_0^{2\pi} |\varphi'(re^{it})| dt\bigg)^{2-p} dr. $$
Using successively the inequality $(1-|z|^2) |(R\circ \varphi)'(z)| \le M$ and Lemma 2 we get $$ \begin{aligned}
J &\le 2 \|\varphi'\|_{H^1}^{2-p} M^{p-2(p-1)} \int_{1-\frac{1}{n+1}}^{r_n} \frac{1}{(1-r)^{p-2(p-1)}}
\bigg(\frac{1}{2\pi} \int_0^{2\pi} |(R\circ \varphi)'(re^{it})|^2 dt\bigg)^{p-1} dr \\
& \le
2 \|\varphi'\|_{H^1}^{2-p} M^p n^{p-1}
\int_{1-\frac{1}{n+1}}^{r_n} \frac{dr}{(1-r)^{2-p +p-1}} \\
& = 2 \|\varphi'\|_{H^1}^{2-p} M^p n^{p-1}
\int_{1-\frac{1}{n+1}}^{1-\frac{1}{(n+1)^K}} \frac{dr}{1-r} = 2 \|\varphi'\|_{H^1}^{2-p} K n^{p-1} M^p. \end{aligned} $$
\\
{\bf Estimate of the integral over the set $\{r_n<|z| <1\}$.} In this case the argument applies to all $p\in [1, 2)$. Choose $\delta$ such that $0<\delta<1-p/2$. Then $2-p-\delta \in (0,1)$. Let $\beta$ be the exponent conjugate to $(2-p-\delta)^{-1}$. One has, by \eqref{hold}, $$
I := \int_{r_n<|z|<1} |(R\circ \varphi)'(z)|^p |\varphi'(z)|^{2-p} dA(z)
$$
$$
\le C^\delta
\int_{r_n<|z|<1} \frac{|(R\circ \varphi)'(z)|^p |\varphi'(z)|^{2-p-\delta}}{(1-|z|)^{\alpha\delta}} dA(z) $$$$
\le
2C^\delta \int_{r_n}^1 \frac{1}{(1-r)^{\alpha\delta}} \bigg(\frac{1}{2\pi}
\int_0^{2\pi} |(R\circ \varphi)'(re^{it})|^{p\beta} dt\bigg)^{1/\beta}
\bigg(\frac{1}{2\pi} \int_0^{2\pi} |\varphi'(re^{it})| dt\bigg)^{2-p-\delta} dr. $$ Note that it follows from $\delta<1-p/2$ that $p\beta>2$.
Applying the estimate $(1-|z|^2) |(R\circ \varphi)'(z)| \le M$ and Lemma 2, we get $$ \begin{aligned}
I & \le
2 C^\delta\|\varphi'\|_{H^1}^{2-p-\delta} M^{\frac{p\beta - 2}{\beta}}
\int_{r_n}^1 \frac{1}{(1-r)^{\alpha\delta + \frac{p\beta - 2}{\beta}}} \bigg(\frac{1}{2\pi}
\int_0^{2\pi} |(R\circ \varphi)'(re^{it})|^2 dt\bigg)^{1/\beta} dr \\
& \le
2C^\delta\|\varphi'\|_{H^1}^{2-p-\delta} M^p n^{1/\beta} \int_{r_n}^1 \frac{dr}{(1-r)^{\alpha\delta + \frac{p\beta - 2}{\beta} +\frac{1}{\beta}}} \\
& =
2 C^\delta\|\varphi'\|_{H^1}^{2-p-\delta} M^p n^{1/\beta} \int_{r_n}^1 \frac{dr}{(1-r)^{\alpha\delta + p -\frac{1}{\beta}}}. \end{aligned} $$ It remains to notice that $\alpha\delta +p- \frac{1}{\beta} = \alpha\delta +p- (1- (2-p-\delta)) = 1-(1-\alpha)\delta$, whence \begin{equation}
\label{pol}
I \le 2(1-\alpha)^{-1}\delta^{-1}
C^\delta\|\varphi'\|_{H^1}^{2-p-\delta} M^p n^{1/\beta} (1 - r_n)^{(1-\alpha)\delta}. \end{equation} If we fix $\delta\in (0, 1-p/2)$ and choose a sufficiently large $K$ in $r_n = 1- \frac{1}{(n+1)^K}$, we conclude that
$I \le C^\delta \|\varphi'\|_{H^1}^{2-p-\delta} M^p$ (and even $o(1)$ as $n\to\infty$).
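To see how large $K$ must be taken, note that $1-r_n = (n+1)^{-K}$, so the $n$-dependent factor in \eqref{pol} is $$
n^{1/\beta} (1-r_n)^{(1-\alpha)\delta} \le (n+1)^{\frac{1}{\beta} - K(1-\alpha)\delta}, $$ which stays bounded (and in fact tends to zero as $n\to\infty$) as soon as $K > \big( \beta (1-\alpha)\delta \big)^{-1}$.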
We now consider the case when $\infty \in G$. It is clear that such a domain can be represented as a union (possibly with an intersection) of the complement of a disk of sufficiently large radius with a simply connected bounded John domain. The statement is already proved for bounded simply connected domains, while for the complement of a disk it follows from the results of Dolzhenko cited above. Theorem 2 is proved.
\section{Proofs of Theorems 3 and 4} \noindent {\bf Proof of Theorem 3.} As in the proof of Theorem 2, we set $r_n = 1- \frac{1}{(n+1)^K}$, where $K>0$. Since $\varphi' \in H^2 \subset H^1$ and the condition \eqref{hold}
is satisfied with $\alpha = 1/2$, one can use the estimate \eqref{pol} for the integral over the set $\{r_n < |z| < 1\}$ established in the proof of Theorem 2. For a sufficiently large $K$ this integral is uniformly bounded over $n$ (and even tends to zero as $n\to\infty$).
Thus, it suffices to estimate $$
J:=\int_{0<|z| \le r_n} |(R\circ \varphi)'(z)| |\varphi'(z)| dA(z). $$ By the Cauchy--Schwarz inequality, $$ \begin{aligned}
J & \le \bigg(\int_{0<|z|\le r_n} (1-|z|) |(R\circ \varphi)'(z)|^2 dA(z) \bigg)^{1/2}
\bigg(\int_{0<|z| \le r_n} \frac{|\varphi'(z)|^2}{1-|z|}dA(z) \bigg)^{1/2} \\
& \le M \bigg( \int_{0<|z| \le r_n} \frac{|\varphi'(z)|^2}{1-|z|} dA(z) \bigg)^{1/2} \leq
\sqrt{2} M \|\varphi'\|_{H^2} \bigg( \int_0^{r_n} \frac{dr}{1-r} \bigg)^{1/2} \\
& =\sqrt{2} M \|\varphi'\|_{H^2} \sqrt{K\ln (n+1)}. \end{aligned} $$ Theorem 3 is proved.
\\ {\bf Proof of Theorem 4.}
Let $\varphi$ be a conformal map of $\mathbb{D}$ onto $G$ and let $D_\rho = \varphi^{-1}(G_\rho)$. Since $\rho \le (1-|z|^2)|\varphi'(z)|$, $z\in D_\rho$, we have $$ \begin{aligned}
\int_{G_\rho} |R'(\zeta)|^p dA(\zeta) & = \int_{D_\rho} |(R\circ \varphi)'(z)|^p |\varphi'(z)|^{2-p} dA(z) \\
& \le \rho^{2-p} \int_{D_\rho} |(R\circ \varphi)'(z)|^p (1-|z|^2)^{p-2} dA(z) \\
& \le \rho^{2-p} M^{p-2} \int_{D_\rho} |(R\circ \varphi)'(z)|^2 dA(z) \le \rho^{2-p} n M^p. \end{aligned} $$ At the last step we used the fact that $R\circ \varphi$ covers each point of the disk of radius $M$ with multiplicity at most $n$. Theorem 4 is proved.
\section{Weighted inequalities of Dolzhenko and Peller type}
As a natural generalization of the Dolzhenko inequalities one can consider weighted integrals of derivatives of rational functions. Similar inequalities were studied extensively in the setting of the Bergman (or Besov) spaces. E.g., a well-known inequality by V.V.~Peller \cite{pel} states that for a rational function $R$ of degree $n$ with poles outside $\overline{\mathbb{D}}$ one has $$
\|R\|_{B_p^{1/p}} \le C n^{1/p} \|R\|_{BMOA}, $$ where $B_p^{1/p}$ is the Besov space, $p>0$, $C=C(p)$. In particular, for $1<p<\infty$, $$
\int_\mathbb{D} |R'(z)|^p (1-|z|)^{p-2} dA(z) \le C n \|R\|^p_{H^\infty}. $$ Various proofs and generalizations of this inequality can be found in \cite{sem, pek, pek1, bz1}.
Using the methods of \S 3 one can obtain more general weighted estimates where the weight equals to some power of the distance to the boundary. To formulate the corresponding result, we set, for a bounded domain $G\subset \mathbb{C}$ and $z\in G$, $$ d_G(z) : = {\rm dist}\, (z, \partial G). $$ For $p\ge 1$, $\beta \in \mathbb{R}$ and a function $f$ analytic in $G$ put $$
I_{p,\beta} (f) := \int_G |f'(\zeta)|^p \, d_G^\beta(\zeta)\, dA(\zeta) $$ (in general, the quantity $I_{p,\beta} (f)$ can be infinite). We are interested in the estimates of the form $$
I_{p,\beta}(R) \le C\Psi(n) \|R\|^p_{H^\infty(G)}, $$ which hold for all rational functions $R$ of degree at most $n$ with poles outside $\overline{G}$ and with a constant $C$ depending on $G$, $p$ and $\beta$, but not on $n$ and $R$. Here $\Psi$ is some function depending on $n$ only. It is easy to see that such estimates are possible only for $\beta \ge p-2$; already the example of $G=\mathbb{D}$ and the rational fractions $R(\zeta) = \frac{1}{\zeta-\lambda}$ shows that for $\beta< p-2$ the integral $I_{p,\beta} (R)$ admits no estimate depending only on $n$: one must also take into account the distance from the poles of $R$ to $\partial G$.
To simplify the notation, in what follows we write $X(R,n) \lesssim Y(R,n)$ if $X(R, n)\le CY(R,n)$ with a constant $C$ depending only on $G$, $p$ and $\beta$, but not on $n$ and $R$.
\noindent {\bf Theorem 5.} {\it Let $G$ be a simply connected bounded domain, let $\varphi$ be the conformal map of $\mathbb{D}$ onto $G$, and let $p\ge 1$, $\beta \ge p-2$. The following estimates hold true.
1. If $\beta > p-1$ and $\varphi'\in H^{\gamma}$ for some $\gamma >1$, then
$I_{p,\beta}(R) \lesssim \|R\|^p_{H^\infty(G)}$, i.e., the dependence on $n$ disappears.
2. If $\beta = p-1$, $1\le p <2$ and $\varphi'\in H^{\frac{2}{2-p}}$, then
$$
I_{p,\beta}(R) \lesssim \big( \log n\big)^{1- \frac{p}{2}} \|R\|^p_{H^\infty(G)}.
$$
If $\beta = p-1$, $p \ge 2$ and $\varphi'\in H^\infty$, then $I_{p,\beta}(R) \lesssim \|R\|^p_{H^\infty(G)}$.
3. If $p-2 \le \beta <p-1$, $p\ge 2$, $\varphi' \in H^1$ and $G$ is a John domain, then
$$
I_{p,\beta}(R) \lesssim n^{p-1-\beta} \|R\|^p_{H^\infty(G)}.
$$ }
The dependence on $n$ in the inequalities in Theorem 5 is sharp already for the case of the unit disk. In statement 3 the optimal growth is attained on $R(z) = z^n$, while the sharpness of the inequality in statement 2 can be shown by considering the Ba\~nuelos--Moore construction (as $R$ one can take a polynomial or a Blaschke product).
Note that statement 3 of the theorem does not cover the case $p-2 \le \beta <p-1$ and $1<p<2$. In this case it would be sufficient to prove the following analogue of Peller's inequality: $$
\int_\mathbb{D} |(R\circ\varphi)'(z)|^p (1-|z|)^{p-2} dA(z) \lesssim n\|R\|_{H^\infty(G)}^p, $$ where $G = \varphi(\mathbb{D})$ is a John domain and $R$ is a rational function of degree at most $n$ with the poles outside $\overline{G}$. However, we do not know whether this inequality holds true.
\\
{\bf Proof.} Put $M = \|R\|_{H^\infty(G)}$. Let us make the change of the variable $\zeta = \varphi(z)$. Taking into account that
$d_G(\zeta) \le |\varphi'(z)|(1-|z|^2)$, we obtain $$
I_{p,\beta}(R) \lesssim \int_{\mathbb{D}} |(R\circ\varphi)'(z)|^p |\varphi'(z)|^{2-p+\beta} (1-|z|)^{\beta}\, dA(z). $$
Statement 3 follows from Theorem 2. Indeed, $1<p-\beta \le 2$ and, using the inequality
$|(R\circ\varphi)'(z)| (1-|z|) \le M$, we obtain $$
I_{p,\beta}(R) \le M^\beta \int_{\mathbb{D}} |(R\circ\varphi)'(z)|^{p-\beta} |\varphi'(z)|^{2-(p-\beta)}\, dA(z) \lesssim n^{p-\beta-1}M^p. $$
Let us prove statement 1. The quantity $d_G(z)$ is bounded, so it suffices to prove the statement for $\beta \in (p-1, p-2 + \gamma]$. For such $\beta$ it follows from the inequality $p-\beta <1$ and the inclusion $\varphi' \in H^\gamma \subset H^{2-p+\beta}$ that $$
I_{p,\beta}(R) \lesssim M^p \int_0^1 \bigg(\int_0^{2\pi}|\varphi'(re^{it})|^{2-p+\beta} dt\bigg) \frac{r dr}{(1-r)^{p-\beta}} \lesssim M^p. $$
At the first step we used the inequality $|(R\circ\varphi)' (z) |(1-|z|) \le M$.
Consider the most interesting statement 2: $\beta = p-1$. Let $1\le p <2$. Put $s=1-\frac{1}{n}$. Applying the H\"older inequality with exponents $\frac{2}{p}$ and $\frac{2}{2-p}$ and inequality \eqref{area}, we get $$ \begin{aligned}
\int_{\{|z|\le s\}} & |(R\circ\varphi)'(z)|^p |\varphi'(z)| (1-|z|)^{p-1}\, dA(z) \\
& \leq \bigg( \int_{\{|z|\le s\}}
|(R\circ\varphi)'(z)|^2 (1-|z|)\, dA(z) \bigg)^{\frac{p}{2}}
\bigg( \int_{\{|z|\le s\}} \frac{|\varphi'(z)|^{\frac{2}{2-p}}}{1-|z|}\, dA(z) \bigg)^{1-\frac{p}{2}} \\
& \lesssim M^p \bigg( \int_0^s \bigg(\int_0^{2\pi} |\varphi'(re^{it})|^{\frac{2}{2-p}} dt \bigg) \frac{r dr}{1-r} \bigg)^{1-\frac{p}{2}}
\lesssim \big( \log n\big)^{1- \frac{p}{2}} M^p. \end{aligned} $$
It remains to estimate the integral over the set $\{s<|z| <1\}$: $$
\int_{\{s<|z| <1\}} |(R\circ\varphi)'(z)|^p |\varphi'(z)| (1-|z|)^{p-1}\, dA(z)
$$
$$
\le
M^{p-1}
\int_{\{s<|z| <1\}} |(R\circ\varphi)'(z)| \cdot |\varphi'(z)|\, dA(z)
$$$$
\lesssim M^{p-1} \bigg(\int_{\{s<|z| <1\}} |(R\circ\varphi)'(z)|^2 dA(z)\bigg)^{1/2} \bigg(\int_{\{s<|z| <1\}}
|\varphi'(z)|^2 dA(z)\bigg)^{1/2} \lesssim M^p. $$ In the last inequality we used the fact that $$
\int_{\{s<|z| <1\}} |(R\circ\varphi)'(z)|^2 dA(z) \lesssim n M^2, $$ since $R\circ\varphi$ covers the disk of radius $M$ with multiplicity at most $n$, as well as the inclusion $\varphi'\in H^2$.
The case $p\ge 2$ is trivial: $$
I_{p, p-1}(R) \lesssim M^{p-2} \int_\mathbb{D} |(R\circ\varphi)'(z)|^2 (1-|z|)\, dA(z)\lesssim M^p, $$ i.e., the quantity $I_{p, p-1}(R)$ is uniformly bounded over $R$ and $n$.
Theorem 5 is proved.
\end{document}
\begin{document}
\title{A new approach to perturbation theory for a Dirac particle in a central field\footnote{To be published in Physics Letters A 260 (1999) 10-16 }}
\author{I.V. Dobrovolska and R.S. Tutik \footnote{Author to whom all correspondence should be addressed. E-mail address: tutik@ff.dsu.dp.ua}}
\maketitle {PACS numbers: 03.65G, 03.65S.}
Department of Physics, Dniepropetrovsk State University, Dniepropetrovsk, UA-320625, Ukraine. \\ \\ {\sc Abstract.} The explicit semiclassical treatment of logarithmic perturbation theory for the bound-state problem within the framework of the Dirac equation is developed. Avoiding the disadvantages of the standard approach in the description of excited states, new handy recursion formulae with the same simple form for both ground and excited states have been obtained. As an example, the perturbation expansions of the energy eigenvalues for the Yukawa potential containing a vector part as well as a scalar component are considered.
Spectrum analysis poses some of the most important problems in quantum mechanics. The success of nonrelativistic potential models of quark confinement [1] has reattracted attention to the bound-state problem in relativistic physics as well [2,3]. Several attempts have been made to describe relativistic systems in a central field due to a Lorentz-vector and Lorentz-scalar interaction within the framework of the Dirac equation. However, for almost all potentials this equation is not exactly solvable, which compels one to resort to approximation methods.
A number of approaches to solving the radial Dirac equation analytically have been developed, including, in particular, the use of the WKB-method [4,5], the hypervirial and Hellman-Feynman theorems [6,7], the 1/$N$-expansion [8-17], the algebraic approach [18], the method of Regge trajectories [19-21] and various perturbation schemes [21-26].
Despite such a variety of methods, logarithmic perturbation theory [27-32] still remains one of the most popular techniques. This technique involves reducing the Dirac equation,
which becomes a pair of coupled first-order differential equations in central-field problems, to a nonlinear Riccati equation. In the case of ground states, the consequent expansion in a small parameter results in handy recursion relations which permit us to derive the high order corrections to energy eigenvalues and eigenfunctions. It should be emphasized that
high orders of the expansions are needed for applying modern summation procedures, because the obtained perturbation series are typically divergent. However, when radially excited states are considered, the standard approach becomes extremely cumbersome and practically inapplicable. This is caused by the factoring out of zeros of the wave functions, which, in addition, are not the same for the small and large components of the Dirac spinor [33].
On the other hand, it is known that the radial quantum number,
$n_r$, is most conveniently and naturally introduced by means of quantization conditions, as in the WKB-approach [34,35]. However, since the WKB-approximation is more suitable for obtaining energy eigenvalues in the limiting case of large quantum numbers, whereas perturbation theory, on the contrary, deals with low-lying levels, the WKB quantization conditions need to be changed.
Recently, a new technique based on a specific quantization condition has been proposed to get the perturbation series via semiclassical expansions within the one-dimensional Schr\"{o}dinger equation [36]. For the Dirac equation, on performing the scale transformation, $r\to\hbar^{2}r$, the coupling constants appear in common with powers of Planck's constant, $\hbar$, thus implying the possibility to obtain perturbation expansions in a semiclassical manner in this case, too.
The objective of this letter is to develop the explicit semiclassical treatment of logarithmic perturbation theory for the bound-state problem within the framework of the Dirac equation and to describe a new procedure for deriving perturbation corrections through handy recursion formulae having the same simple form for both ground and excited states.
The proposed technique can be regarded as a further investigation of a part assigned to a rule of achieving a classical limit in construction of semiclassical methods. In addition to the rule, $ \hbar\to 0,\;n_r \to\infty,\; l\to\infty,\;\hbar n_r={\rm const},\;\hbar l={\rm const},$ required within the WKB-approach; and conditions, $ \hbar\to 0,\; n_r={\rm const},\; l\to\infty,\;\hbar n_r\to 0,\;\hbar l={\rm const}$, which are applied within the method of 1/N-expansion [17]; here we address ourselves to the alternative possibility: $ \hbar\to 0,\; n_r={\rm const},\;l={\rm const},\; \hbar n_r\to 0,\;\hbar l\to 0$, that results in the explicit semiclassical treatment of the logarithmic perturbation theory.
\\
(1) {\it Method. } We study the bound state problem for a single fermion moving in an attractive central potential. This potential contains both the time component of a Lorentz four-vector, $ V(r)$, and a Lorentz-scalar term, $W(r)$, which, in general, have a Coulomb-like behaviour at the origin \begin{equation} V(r) =\frac{1}{r}{ \sum_{i=0}^{\infty}{V_i}\,r^i}\;,\\ W(r)=\frac{1}{r}{ \sum_{i=0}^{\infty}W_i r^i\; },
\end{equation} though the case $ V_0=0$ or $ W_0=0 $ is permissible, too. In what follows, a scalar potential will be included in the mass term $ m(r) $ by analogy with "dynamical mass" models of quark confinement [2,3] \begin{eqnarray}
m(r)=m+ \frac{ W(r) }{ c^2 } \; .
\end{eqnarray} Then the Dirac radial wave equations have the form \begin{eqnarray} \hbar F'(r)-\frac{\hbar\chi}{r}F(r)+\frac{1}{c}[{E-V(r)-m(r)c^2}]G(r)=0\;, \nonumber \\ \hbar G'(r)+\frac{\hbar\chi}{r}G(r)-\frac{1}{c}[{E-V(r)+m(r)c^2}]F(r)=0\;. \end{eqnarray} Here $ F(r) $ and $ G(r) $ are the small and large components of the
wavefunction of a particle and $ E $ is its total energy, $\chi =s(j+{ \frac{1}{2} }) $ for $ j=l-{\frac{s}{2} } $, with $ s=\pm1 $ denoted the sign of $\chi $.
Eliminating $ F(r) $ from the system (3) and performing the substitution, $ R(r)= \hbar G'(r) / G(r) $, for the logarithmic derivative of large component we then arrive at the Riccati equation \begin{equation} \begin{array}{l} \hbar R'(r)-\hbar Q(r) R(r)+R^2(r) \\ \displaystyle =\frac{\hbar^2 \chi(\chi+1)}{r^2} +Q(r)\frac{\hbar^2\chi}{r}+m^2(r)c^2-\frac{1}{c^2}[E-V(r)]^2\;, \end{array} \end{equation} with $Q(r)=[m'(r)-V'(r)]/[E+m(r)-V(r)]$.
As was pointed out above, in logarithmic perturbation theory the coupling constants appear in common with powers of Planck's constant. Therefore we now attempt to solve eq.(4) in a semiclassical manner. Taking into account the leading orders in $\hbar$ of the quantities $E\sim \frac{1}{\hbar^2} $ and $\hbar \chi \sim \hbar $, from the Riccati equation (4) we have
\begin{equation} \begin{array}{l} \displaystyle R(r)= \frac{1}{\hbar} \sum_{i=0}^{\infty}R_i(r) \hbar^{2i}\;, \\ \displaystyle Q(r)=\hbar^2 \sum_{i=0}^{\infty} Q_i(r)\hbar^{2i} \;, \\ \displaystyle E= \frac{1}{\hbar^2} \sum_{i=0}^{\infty}E_i\hbar^{2i}\, . \end{array} \end{equation}
Keeping in mind that relativistic mechanics must go over to nonrelativistic one as the speed of light tends to infinity and quantum mechanics must go over to classical one as $\hbar\to 0$, the correlation between these constants has to be set. By analogy with quantum electrodynamics the foregoing consideration will be carried out under the condition $\hbar c\sim O(1)$ [37]. Moreover, for simplicity we put $\hbar c=1 $.
On substituting the expansions (5) into the Riccati equation (4) and comparing coefficients of the different powers of $\hbar$, one obtains the following hierarchy of equations \begin{equation} \begin{array}{l} \displaystyle R_0^2(r)=m^2-E_0^2\;, \\ \displaystyle R_0(r)R_1(r)=E_0\Bigl[ V(r)-E_1 \Bigr] +mW(r) \;, \\ \displaystyle R_1'(r)+2R_0(r)R_2(r)+ R_1^2(r) - R_0(r) Q_0(r) \\ \displaystyle =\frac {\chi(\chi+1)}{r^2}+ 2E_1V(r)-E_1^2-2E_0E_2+W^2(r)-V^2(r)\;, \\ \cdots \\ \displaystyle R_{k-1}'(r)+\sum_{j=0}^kR_j(r)R_{k-j}(r) -\sum_{j=0}^{k-2}R_j(r)Q_{k-j-2}(r) \\ \displaystyle =2E_{k-1}V(r)-\sum_{j=0}^kE_jE_{k-j} +\frac{\chi}{r}Q_{k-3}(r) \;, \\ \end{array} \end{equation} where
$ \displaystyle Q_0(r)=\frac{1}{E_0+m}\Bigl[ W'(r)-V'(r)\Bigr] , $
$ \displaystyle Q_k(r)=-\frac{1}{E_0+m}\left[\sum_{j=0}^{k-1} {Q_j(r)E_{k-j}}+Q_{k-1}(r)\Bigl[W(r)-V(r)\Bigr]\right]. $
For nodeless states this system can be solved straightforwardly.
However, when radial excitations are described with the standard technique, the nodes of the wave functions need to be factored out first and the consideration becomes extremely cumbersome. We intend to circumvent this difficulty by making use of the quantization condition. Its fundamental idea, which stems from the
WKB-approach [34,35], is well known in complex analysis as the argument principle. Applied to the logarithmic derivative, $R(r)$, it means that \begin{equation} \frac{1}{2\pi\rm{i}}\oint{R(r)\,{\rm d} r}=\hbar N\;, \end{equation} where $N$ is the number of zeros inside a closed contour.
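Indeed, if $G(r)$ has $N$ zeros (counted with multiplicity) and no poles inside the closed contour, the argument principle gives $$
\frac{1}{2\pi{\rm i}}\oint \frac{G'(r)}{G(r)}\,{\rm d} r = N, $$ and the substitution $R(r)=\hbar\, G'(r)/G(r)$ turns this into the condition (7).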
This condition is exact and is widely used for deriving the high-order corrections to the WKB-approximation and the 1/$N$-expansions. There is, however, one important point to note. The radial and orbital quantum numbers, $n_r$ and $l$, respectively, are specifically quantum
notions and need to be defined before going over from quantum mechanics to classical physics. Therefore the quantization condition (7)
must be supplemented with the rule of achieving a classical limit that stipulates the type of semiclassical approximation.
In particular, within the WKB-approach the passage to the classical limit is implemented using the rule
\begin{equation}
\hbar\to 0,\;n_r\to\infty,\; l\to\infty, \;\hbar n_r={\rm const},\;\hbar l={\rm const}, \end{equation} whereas the 1/$N$-expansion, being complementary to the
WKB-method, requires the conditions [17] \begin{equation}
\hbar\to 0,\; n_r={\rm const},\;l\to\infty, \;\hbar n_r\to 0,\;\hbar l={\rm const}. \end{equation}
The semiclassical treatment of logarithmic perturbation theory proves to involve the alternative possibility: \begin{equation}
\hbar\to 0,\; n_r={\rm const},\;l={\rm const}, \;\hbar n_r\to 0,\;\hbar l\to 0. \end{equation}
Notice that the part of this rule concerning the orbital quantum number,
$l$, differs from that used within the WKB-approach and the 1/$N$-expansion method. In our consideration this part, implying the first order in $\hbar$ for the quantity $\hbar\chi$, has been used in deriving the system (6).
The remaining part of the rule concerns the radial quantum number. Due to (10) the right-hand side of the equality (7) is of first order in $\hbar$ and, hence, on substituting the expansion (5) the quantization condition (7) takes the form \begin{equation}
\frac{1}{2\pi\rm{i}}\oint{R_i(r)\,{\rm d} r}=N\delta_{i,1} \; , \end{equation} where the Kronecker delta $\delta_{ij}$ is used.
Before proceeding further, we must specify the quantity $N$. In contrast to the WKB-approach and the 1/$N$-method, we choose a contour of integration that encloses both the nodes of the wave function $G(r)$ and the boundary point, $r=0$. Then the quantity $N$ depends on both the radial quantum number, $n_r$, and the behaviour of the wave function near the origin, and is given by
\begin{equation}
N=n_r+\sqrt{\chi^2+W_0^2-V_0^2}\;\; ,\;n_r=n+(s+1)/2\;\;,\;n=0,1,2,... \end{equation}
A further application of the residue theorem to the explicit form of the functions $R_i(r)$ easily solves the problem of taking into account the nodes of the wave functions for excited states.
\\
(2) {\it Recursion formulae}. We begin by investigating the behaviour of the functions $R_i(r)$. From the system (6) we have \begin{equation}
R_0(r)=-\sqrt{m^2-E_0^2}\;, \end{equation} where the minus sign is dictated by the boundary condition. Then the function $R_1(r)$ has a simple pole at the origin, owing to the Coulombic behaviour of the potentials at this point, while the function $R_k(r)$ has a pole of order $k$. Hence $R_k(r)$ can be represented by the Laurent series \begin{equation}
R_k(r)=r^{-k}\sum_{i=0}^\infty{R^{k}_{i}r^i}\;, \end{equation} which makes it possible to write the quantization conditions (11) as \begin{equation}
R_k^{k+1}=N \delta_{k,0}\;. \end{equation}
Now, by analogy with $R_k(r)$, let us also represent the functions $Q_k(r)$, involved in the expansion (5), as a power series in $r$: \begin{equation}
Q_k(r)=r^{-2-k}\sum_{i=0}^\infty{Q^{k}_{i}r^i}\;. \end{equation}
From the equation $Q(r)[E+m(r)-V(r)]= m'(r)-V'(r) $ we then obtain \begin{equation} \begin{array}{l}
\displaystyle Q_i^0=\frac{i-1}{E_0+m}(W^0_i-V^0_i), \\ \displaystyle Q_i^k=- \frac{1}{E_0+m}\left[ \sum_{j=0}^{k-1}{Q^j_{j+i-k}E_{k-j}} +\sum_{j=0}^{i}{Q^{k-1}_j(W_{i-j}-V_{i-j})} \right]. \end{array} \end{equation}
Finally, substituting expansions (14) and (16) into the last equation of the system (6) and collecting coefficients of the like powers of $r$ leads to the recursion relation in terms of the Laurent coefficients, $R^k_i$: \begin{equation} \begin{array}{l} \displaystyle R^k_i = -\frac{1}{2R^0_0} \biggl[
(i-k+1)R^{k-1}_i+
\sum^{k-1}_{j=1} \sum^i_{p=0} R^j_p R^{k-j}_{i-p}
-\sum^{k-2}_{j=0} \sum^i_{p=0} Q^j_p R^{k-2-j}_{i-p} \\
\displaystyle -\chi Q^{k-3}_i
+\delta_{k,i} \sum^k_{j=0} E_j E_{k-j} -2 E_{k-1} V_{i-k+1} \\
\displaystyle +\delta_{k,2} \sum^i_{p=0} (V_p V_{i-p}- W_p W_{i-p})
-\delta_{k,1} \, 2 m W_i
-\delta_{i,0} \delta_{k,2}\,\chi ( \chi + 1 ) \biggr]\; , \end{array} \end{equation} where for universality of designations we put $R_0^0=R_0,R^0_i=0,i>0.$
In the case $i\not=k$, this formula serves to obtain the coefficients $R_i^k$. When $i=k$, by equating the explicit expression for $R^{k+1}_k$ to the quantization condition (15) we get the recursion relation for the energy eigenvalues \begin{equation} \begin{array}{l}
\displaystyle E_0 = \frac{ m}{N^2+V_0^2} \left( N \sqrt{N^2 + V_0^2 - W_0^2 } - V_0 W_0 \right)
\;,k=0, \\ \\ \displaystyle E_k=\frac{R^0_0R^1_0}{2(E_0R^1_0+V_0R^0_0 )} \Biggl[
\frac{1} {R^1_0}
\Biggl(
\sum^{k-1}_{j=2}\sum^k_{p=0} R^j_p R^{k+1-j}_{k-p}
+2 \Theta ( k-2 ) \sum_{p=1}^k R^1_p R^{k}_{k-p}
\\ \displaystyle
-\sum^{k-1}_{j=0} \sum^k_{p=0} Q^j_p R^{k-1-j}_{k-p}
+\delta_{k,1} (V_0V_1- W_0W_1)
- \chi Q^{k-2}_k
\Biggr) \\ \displaystyle -\frac{1}{R_0^0}
\Biggl(
R^{k-1}_k
+\sum^{k-1}_{j=1}\sum^k_{p=0}R^j_p R^{k-j}_{k-p}
-\sum^{k-2}_{j=0}\sum^k_{p=0}Q^j_pR^{k-2-j}_{k-p} \\ \displaystyle
+\sum^{k-1}_{j=1} E_j E_{k-j}
-2E_{k-1} V_{1}
+\delta_{k,2}\sum^2_{p=0}(V_p V_{2-p}- W_pW_{2-p}) \\ \displaystyle
-\delta_{k,1} 2 m W_1
- \chi Q^{k-3}_k \Biggr) \Biggr] \;,k>0\;. \end{array} \end{equation} Here $E_0$ is indeed the exact solution to the Dirac-Coulomb equation [38,39] and we use the step function
$\Theta(k)=\left\{\begin{array}{ll} 1, & k\geq0,\\
0, & k<0\;. \end{array}\right. $
Thus, equations (18) and (19) determine the coefficients of the perturbation expansions of energy eigenvalues and eigenfunctions for screened Coulomb potentials in the same form both for ground and excited states. \\
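As a quick numerical sanity check of the leading term, the expression for $E_0$ in (19), together with $N$ from (12), can be evaluated directly. The short Python sketch below (the parameter values are illustrative only and are not taken from the text) verifies that in the pure-vector limit $W_0=0$ the general formula reduces to $E_0=mN/\sqrt{N^2+V_0^2}$, the zeroth-order coefficient of (20).

```python
import math

def big_N(n_r, chi, V0, W0):
    # N = n_r + sqrt(chi^2 + W0^2 - V0^2), cf. eq. (12)
    return n_r + math.sqrt(chi**2 + W0**2 - V0**2)

def E0(m, N, V0, W0):
    # Leading-order energy, the exact Dirac-Coulomb value of eq. (19):
    # E0 = m/(N^2+V0^2) * ( N*sqrt(N^2+V0^2-W0^2) - V0*W0 )
    return m / (N**2 + V0**2) * (N * math.sqrt(N**2 + V0**2 - W0**2) - V0 * W0)

# Illustrative values (chosen for the check only, not taken from the text):
m, a = 1.0, 0.137
N = big_N(1.0, 1.0, a, 0.0)
# Pure-vector limit W0 = 0: eq. (19) must reduce to m*N/sqrt(N^2+a^2),
# the zeroth-order coefficient of eq. (20).
assert abs(E0(m, N, a, 0.0) - m * N / math.sqrt(N**2 + a**2)) < 1e-12
```

The same reduction applies term-by-term to the Yukawa result (21) when the scalar coupling $b$ is switched off.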
(3) {\it Examples of application.} As a check of the obtained formulae we calculate the energy eigenvalues for the pure-vector, screened Coulomb potential of general form. On applying the recursion relations (18) and (19), analytical expressions for the perturbation coefficients are found to be equal to \begin{equation} \begin{array}{rl} E_0= & \displaystyle \frac{m N}{\sqrt{N^2+a^2}} \; , \\ E_1=&a\,{V_1}\; , \\ E_2=&\displaystyle -\frac{V_2}{2\,\rho^2}\left(3a^2\epsilon-\chi(\chi \epsilon+1 )\rho^2
\right)\; , \\ E_3=&\displaystyle \frac{V_3}{2\,\rho^4}\left( a^3(4 \epsilon^2+1) -a(2\chi^2\epsilon^2+3\chi \epsilon+\chi^2-1) \rho^2 \right) \; , \\ E_4=&\displaystyle\frac{1}{8\,\rho^6} \Bigl( V_2^2
\Bigl[
a^4\,\epsilon ( 5 \epsilon^2-12 ) +
a^2 \epsilon ( 6\chi^2 \rho^2-5)\rho^2 {}
\\ & \displaystyle
+ \chi^2 \bigl[
\chi^2\epsilon ( \epsilon^2+2 ) +
\chi( 4 \epsilon^2+2 ) + 3\epsilon
\bigr] \rho^4
\Bigr] {}
\\ & \displaystyle
+ V_4
\Bigl[
- 5 a^4 \epsilon ( 4 \epsilon^2+3 )
+ a^2 \bigl[
6 \chi^2 \epsilon ( 2 \epsilon^2+3)
+ 6 \chi ( 4 \epsilon^2+1)
- 25\epsilon
\bigr] \rho^2 {}
\\ & \displaystyle
-3 \chi ( \chi^2-1)
( \chi \epsilon+2) \rho^4
\Bigr] \Bigr) \;, \\ E_5=&\displaystyle\frac{-a}{8\,\rho^8} \Bigl( V_2 V_3
\Bigl[ 3 a^4
( 8\epsilon^4-20 \epsilon^2 -3) -
a^2 \bigl[ \chi ^2( 32\epsilon^4 - 36\epsilon^2 -10 )
\\ & \displaystyle + \chi \epsilon ( 10\epsilon^2 -24)
+ 9( 6\epsilon^2 +1 )
\bigr] \rho^2
+ \chi \bigl[ 30\chi ^2\epsilon^3
\\ & \displaystyle
+ 24\chi \epsilon^2 + 10\epsilon
+\chi
+ \chi ^3( 8\epsilon^4 + 8\epsilon^2 -1)
\bigr] \rho^4
\Bigr] \\ & \displaystyle
+ V_5 \Bigl[
- 3a^4 ( 8\epsilon^4 + 12\epsilon^2 +1 )
+ a^2 \bigl[ \chi ^2( 16\epsilon^4+ 48\epsilon^2 + 6 )
\\ & \displaystyle
+ 10\chi \epsilon( 4\epsilon^2+3 )
- 15( 6\epsilon^2+1 )
\bigr] \rho^2 \\ & \displaystyle
+ \bigl[ 5\chi ^2( 4\epsilon^2+3 )
- 3\chi ^4( 4\epsilon^2+1)
+50\chi \epsilon
- 30\chi ^3\epsilon-12
\bigr] \rho^4
\Bigl]
\Bigr)\; , \end{array} \end{equation} where $ a=V_0 $, $\epsilon =E_0$ and $ \rho = \sqrt{1-\epsilon^2} $.
One can verify that the first three corrections coincide with those derived by McEnnan et al [24] by means of the standard technique.
The next example will be the attractive Yukawa potential, often utilized in relativistic calculations, which has not only the Lorentz-vector component, $ V(r)=-(a/r) \, e^{-\lambda r}$, but the Lorentz-scalar term, $ W(r)=-(b/r) \, e^{-\mu r}$, as well. Now the analytic expressions for perturbation corrections to the bound state energy take the form \begin{equation} \begin{array}{rl}
E_0 =& \displaystyle\frac{ m } {N^2+a^2}\left( N \sqrt{N^2 + a^2 - b^2 } - a b \right)\;, \\ \\ E_1=&\displaystyle a\,\lambda + b\,\mu \,{\epsilon}\;, \\ \\ E_2=&-\displaystyle \frac{1}{4( b\epsilon + a ) \rho^2} \biggl( \lambda ^2 \Bigl[ 3a^3\epsilon - a\chi ( \chi \epsilon+1 ) \rho^2
+ 2a^2b ( 2\epsilon^2+1 ) \\& \displaystyle
+ a b^2\epsilon( \epsilon^2 +2 )
\Bigr] \\& \displaystyle
+\mu ^2 \Bigl[ 3b^3\epsilon^2
- b\chi \epsilon\rho^2 - b\chi ^2\rho^2
+ 2b^2a\epsilon( \epsilon^2 +2) +
ba^2( 2\epsilon^2+1) \Bigr]
\biggr)\;, \\ E_3=&\displaystyle \frac{1}{12( a + b\epsilon ) \rho^4} \biggl(
\lambda ^3 \Bigl[ a^4( 4\epsilon^2+1) - a^2(
2\chi ^2\epsilon^2 + 3\chi \epsilon+ \chi ^2 -1 ) \rho^2
\\&+ 3a^3b\epsilon( 2\epsilon^2+3 )
+ a^2b^2( 2\epsilon^4+ 11\epsilon^2 +2 )
- ab( 3\chi +
(3\chi ^2 -1 ) \epsilon ) \rho^2 \\&
+ ab^3\epsilon( 3\epsilon^2 + 2) \Bigr]
\\&
+ \mu ^3 \Bigl[ b^4( - 8\epsilon^4 +13\epsilon^2)
+ b^2( 3\chi \epsilon^3+( 2\chi ^2 + 1 ) \epsilon^2
- 6\chi \epsilon -5\chi ^2
) \rho^2 \\&
- 3b^3a\epsilon( 2\epsilon^4-\epsilon^2 -6 )
-b^2a^2( 4\epsilon^4 - 14\epsilon^2 -5) \\&
- ba\epsilon( 3\chi \epsilon+ 3\chi ^2-1
) \rho^2
+ba^3\epsilon (2\epsilon^2+3) \Bigr]
\\&
+ \lambda ^2\mu \Bigl[ 9a^3b\epsilon \rho^2
- 6a^2b^2( 2\epsilon^4 - \epsilon^2-1 ) \\&
+ 3ab^3\epsilon\rho^2 ( \epsilon ^2 +2 ) -
3ab\chi ( \chi \epsilon +1 ) \rho^4 \Bigr] \biggr).
\end{array} \end{equation}
In order to assess the speed and accuracy of the perturbation technique for the Yukawa potential with various ratios of its components we consider energy eigenvalues for the pure vector case, $V(r)=- (a/r)\, e^{-\lambda r}$; the pure scalar potential, $W(r)=-( a/r)\, e^{-\lambda r}$; and the equally mixed interaction, $V(r)+W(r)=-(I+\gamma_0)(a/2r)\, e^{-\lambda r}$. Typical results of the calculation are presented in Table 1, where the sequence of partial sums of our expansion for the relativistic binding energies is compared with the results, $E_{\rm num}$ (in keV), obtained by numerical integration. The calculation has been performed for the $s=1,\,n_r=1,\,l=1$ and $s=-1,\,n_r=1,\,l=0$ states with parameters $ a= \alpha z ,\; \lambda =1.13\, \alpha z^{1/3},\; z=74 $ ( $ \alpha $ is the fine-structure constant and $z$ the nuclear charge).
As can be seen from Table 1, in all cases we obtain two subsequences that bound the energy eigenvalue from below and from above. The average of these subsequences, taken at the point where they approach each other most closely, proves to give quite a good approximation to the exact value. \vskip 1cm
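This averaging prescription is easy to reproduce from the data of Table 1. The Python sketch below takes the $E_V$ column for the $s=1,\,n_r=1,\,l=1$ state at $k=14$ and $k=15$, where the two bracketing subsequences are closest, and recovers $E_{\rm num}$ to about $10^{-5}$ keV.

```python
# Partial sums of the E_V column (s=1, n_r=1, l=1) from Table 1:
# even-k sums approach E_num from above, odd-k sums from below.
upper = 12.297654   # k = 14
lower = 12.297576   # k = 15
estimate = (upper + lower) / 2.0
E_num = 12.297609   # result of numerical integration (Table 1)
assert abs(estimate - E_num) < 1e-4
```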
To summarize, we have developed a semiclassical treatment of logarithmic perturbation theory for a Dirac particle in a central field. Based upon the $\hbar $-expansions and suitable quantization conditions, new handy recursion relations for solving the bound-state problem for the Dirac equation with a screened Coulomb potential having both vector and scalar components have been derived. Avoiding the disadvantages of the standard approach, these formulae have the same simple
form both for ground and excited states and permit, in principle, the calculation of the perturbation corrections up to an arbitrary order in analytic or numerical form. Finally, this approach does not require knowledge of the exact solution of the zeroth approximation, which is obtained automatically. $$ $$ This work was supported in part by the International Soros Science Education Program (ISSEP) under grant APU052102.
\eject Table 1
\begin{tabular}{c|lll|lll} \hline \vphantom{\Big(} k & \multispan{2}{\hss $s=1, n_r=1, l=1$} & & \multispan{2}{\hss $s=-1, n_r=1, l=0$}& \\ \cline{2-7} \vphantom{\Big(} & $E_V$ & $E_W$ & $E_{V+W}$
& $E_V$ & $E_W$ & $E_{V+W}$ \\ \hline
0& 20.644616& 16.59173& 18.292804& 20.644616& 16.59173& 18.292804\\
1& 11.091653& 7.348948& 10.846326& 11.091653& 7.348948& 10.846326\\
2& 12.415123& 9.066156& 11.810855& 12.721342& 9.372375& 12.003761\\
3& 12.264120& 8.784152& 11.703753& 12.500951& 8.995036& 11.855249\\
4& 12.308677& 8.880691& 11.730813& 12.558797& 9.118034& 11.890052\\
5& 12.292914& 8.838096& 11.722213& 12.537837& 9.063240& 11.878837\\
6& 12.299805& 8.860978& 11.725555& 12.546910& 9.092488& 11.883163\\
7& 12.296473& 8.847251& 11.724110& 12.542466& 9.074820& 11.881275\\
8& 12.298236& 8.856234& 11.724792& 12.544844& 9.086483& 11.882175\\
9& 12.297242& 8.849965& 11.724449& 12.543482& 9.078246& 11.881715\\
10& 12.297834& 8.854576& 11.724631& 12.544306& 9.084385& 11.881963\\
11& 12.297465& 8.851033& 11.724530& 12.543784& 9.079601& 11.881823\\
12& 12.297704& 8.853860& 11.724588& 12.544128& 9.083474& 11.881905\\
13& 12.297544& 8.851529& 11.724553& 12.543893& 9.080233& 11.881856\\
14& 12.297654& 8.853509& 11.724575& 12.544058& 9.083025& 11.881886\\
15& 12.297576& 8.851782& 11.724561& 12.543939& 9.080555& 11.881867\\ \hline \vphantom{\Big(} $E_{\rm num}$& 12.297609& 8.852592& 11.724567& 12.543990& 9.081723& 11.881875\\ \hline \end{tabular}
\end{document}
\begin{document}
\title[Basic Deformation Theory]{Basic Deformation Theory of smooth formal schemes}
\author[M. P\'erez]{Marta P\'erez Rodr\'{\i}guez} \address{Departamento de Matem\'a\-ticas\\ Escola Superior de En\-xe\-\~ne\-r\'{\i}a Inform\'atica\\ Campus de Ourense, Univ. de Vigo\\ E-32004 Ou\-ren\-se, Spain} \email{martapr@uvigo.es}
\thanks{This work was partially supported by Spain's MCyT and E.U.'s FEDER research project MTM2005-05754}
\subjclass[2000]{Primary 14B10; Secondary 14A15, 14B20, 14B25, 14F10} \keywords{formal scheme, smooth morphism, \'etale morphism, infinitesimal lifting property, deformation.}
\hyphenation{pseu-do}
\begin{abstract} We provide the main results of a deformation theory of smooth formal schemes as defined in \cite{AJP1}. Smoothness is defined by the local existence of infinitesimal liftings. Our first result is the existence of an obstruction in a certain $\ext^{1}$ group whose vanishing guarantees the existence of global liftings of morphisms. Next, given a smooth morphism $f_{0}\colon\mathfrak X_{0} \to \mathfrak Y_{0}$ of noetherian formal schemes and a closed immersion $\mathfrak Y_{0} \hookrightarrow \mathfrak Y$ given by a square zero ideal $\mathcal I$, we prove that the set of isomorphism classes of smooth formal schemes lifting $\mathfrak X_{0}$ over $\mathfrak Y$ is classified by $\ext^{1}(\widehat{\Omega}^{1}_{\mathfrak X_{0}/\mathfrak Y_{0}}, f_{0}^{*} \mathcal I)$ and that there exists an element in $\ext^{2}(\widehat{\Omega}^{1}_{\mathfrak X_{0}/\mathfrak Y_{0}}, f_{0}^{*}\mathcal I)$ which vanishes if and only if there exists a smooth formal scheme lifting $\mathfrak X_{0}$ over $\mathfrak Y$. \end{abstract}
\maketitle
\tableofcontents
\section*{Introduction}
\setcounter{equation}{0} We provide here a further step in the program of the study of infinitesimal conditions in the category of formal schemes developed, among others, in the recent papers \cite{AJP1} and \cite{AJP2}. These previous works have systematically studied the infinitesimal conditions of locally noetherian formal schemes together with a hypothesis of finiteness, namely, the pseudo finite type condition. In \cite{AJP1} the fundamental properties of the infinitesimal conditions of usual schemes are generalized to formal schemes. One of the main tools is the sheaf of differentials, which is coherent for a pseudo finite type map of formal schemes. The latter is concentrated on the study of properties that are noteworthy in the category of formal schemes, obtaining a structure theorem for smooth morphisms and focusing on the relationship between the infinitesimal conditions of a map of formal schemes and those of the underlying maps of usual schemes. We have to mention that some basics of smoothness of formal schemes have also been studied by Yekutieli in \cite{Y} under the assumption that the base of the map is a usual noetherian scheme, and in Nayak's thesis for essentially pseudo finite type maps, whose results have been included in \cite{LNS}.
This background motivates our interest in obtaining a deformation theory in the context of locally noetherian formal schemes. This requires the development of a suitable version of the cotangent complex. The problem is difficult because it involves the use of the derived category of complexes with coherent cohomology associated to a formal scheme, whose behavior is not straightforward, as is clear from looking at \cite{AJL1}. We concentrate here on the case of smooth morphisms ---a particular situation that arises quite often. The problem consists in constructing morphisms that extend a given morphism over a smooth formal scheme to a base which is an ``infinitesimal neighborhood" of the original. Questions of existence and uniqueness should be analyzed. We want to express the answer via cohomological invariants that are explicitly computed using the \v{C}ech complex. Another group of questions that we treat is the construction of formal schemes over an infinitesimal neighborhood of the base lifting a given relative formal scheme. The existence of such a lifting will be controlled by an element belonging to a $2^{\text{nd}}$\!-order cohomology group. We prefer to use the more down-to-earth \v{C}ech viewpoint, which has the minor drawback of requiring separatedness, but which suffices for a large class of applications. Although our exposition generalizes the well-known analogous statements for smooth schemes (\emph{cf.} \cite[III]{sga1} and \cite[VII,\S1]{G}), we have not been able to deduce our results from them, even in the case of a map of formal schemes such that the underlying morphisms of usual schemes are all smooth (see \cite{AJP2}). For our argument, we require main results related to smoothness of formal schemes such as the universal property of the module of differentials (\cite[Theorem 3.5]{AJP1}), a lifting property (\cite[Proposition 2.3]{AJP1}) and the Jacobian matrix criterion for the affine formal disc (\cite[Corollary 5.13]{AJP2}). 
We expect that our results would be applied to the cohomological study of singular varieties.
Let us describe briefly the organization of this paper. The first section deals with preliminary material, pointing to precise references in the literature. The second treats the case of global lifting of smooth morphisms. We prove that the obstruction to the existence of a global lifting lies in a $\h^1$ group.
The setup for the remaining sections is a smooth morphism $f_{0}\colon\mathfrak X_{0} \to \mathfrak Y_{0}$ of noetherian formal schemes and a closed immersion $\mathfrak Y_{0} \hookrightarrow \mathfrak Y$ given by a square zero ideal $\mathcal I$. We deal first with the uniqueness of a lifting of \emph{smooth formal schemes}. We prove that the set of isomorphism classes lifting $\mathfrak X_{0}$ over $\mathfrak Y$ is classified by $ \h^{1} (\mathfrak X_{0}, \shom_{\mathcal O_{\mathfrak X_{0}}}(\widehat{\Omega}^{1}_{\mathfrak X_{0}/\mathfrak Y_{0}}, f_{0}^{*}\mathcal I))$, in the sense that they form an affine space over this module. In the last section we study the existence of liftings of smooth formal schemes. There exists an obstruction, lying in $ \h^{2} (\mathfrak X_{0}, \shom_{\mathcal O_{\mathfrak X_{0}}}(\widehat{\Omega}^{1}_{\mathfrak X_{0}/\mathfrak Y_{0}}, f_{0}^{*}\mathcal I))$, whose vanishing characterizes the existence of a smooth formal scheme lifting $\mathfrak X_{0}$ over $\mathfrak Y$.
All the results in Sections \ref{sec1}, \ref{sec2} and \ref{sec3} generalize the corresponding results in the category of schemes. We have followed the outline for this case given in \cite[p. 111--113]{ill}.
\end{ack}
\section{Preliminaries} We denote by $\mathsf {NFS}$ the category of locally noetherian formal schemes together with morphisms of formal schemes. The affine noetherian formal schemes are a full subcategory of $\mathsf {NFS}$, denoted $\sfn_{\mathsf {af}}$.
We assume the basics of the theory of formal schemes as explained in \cite[\S 10]{EGA1}. Also, this work rests on the theory of smoothness in $\mathsf {NFS}$ as studied in the papers \cite{AJP1} and \cite{AJP2}.
\begin{parraf} \label{defnenc}
Let $\mathfrak X$ be in $ \mathsf {NFS}$. If $\mathcal I \subset \mathcal O_{\mathfrak X}$ is a coherent ideal, $\mathfrak X' $ the corresponding closed subset and $(\mathfrak X', (\mathcal O_{\mathfrak X}/\mathcal I)|_{\mathfrak X'})$ the induced formal scheme on it, then we say that $\mathfrak X'$ is the \emph{closed (formal) subscheme} of $\mathfrak X$ defined by $\mathcal I$. A morphism $f:\mathfrak Z \to \mathfrak X$ is a \emph{closed immersion} if there exists a closed subset $\mathfrak Y\subset \mathfrak X$ such that $f$ factors as $ \mathfrak Z \xrightarrow{g} \mathfrak Y \hookrightarrow \mathfrak X $ where $g$ is an isomorphism \cite[\S 10.14.]{EGA1}. \end{parraf}
\begin{parraf} Given $f:\mathfrak X \to \mathfrak Y$ a morphism in $\mathsf {NFS}$ and $\mathcal K \subset \mathcal O_{\mathfrak Y}$ an ideal of definition, there exists an ideal of definition $\mathcal J \subset \mathcal O_{\mathfrak X}$ such that $f^{*}(\mathcal K) \mathcal O_{\mathfrak X} \subset \mathcal J$ (see \cite[(10.5.4) and (10.6.10)]{EGA1}). The map $f$ induces the morphism of locally noetherian (usual) schemes $f_0 \colon (\mathfrak X, \mathcal O_{\mathfrak X}/ \mathcal J) \to (\mathfrak Y, \mathcal O_{\mathfrak Y}/ \mathcal K) $ (see \cite[(10.5.6)]{EGA1}). The morphism $f$ is of \emph{pseudo finite type} \cite[p. 7]{AJL1} (\emph{separated} \cite[\S 10.15]{EGA1} and \cite[1.2.2]{AJL1}) if for any such pair of ideals the induced morphism of schemes, $f_0$, is of finite type (separated). A morphism $f:\mathfrak X \to \mathfrak Y$ is of \emph{finite type} if it is adic and of pseudo finite type \cite[(10.13.1)]{EGA1}. \end{parraf}
\begin{parraf} A morphism $f:\mathfrak X \to \mathfrak Y$ in $\mathsf {NFS}$ is \emph{smooth (unramified, \'etale)} \cite[Definition 2.1 and Definition 2.6]{AJP1} if it is of pseudo finite type and satisfies the following lifting condition:
\emph{For all affine $\mathfrak Y$-schemes $Z$ and for each closed subscheme $T\hookrightarrow Z$ given by a square zero ideal $\mathcal I \subset \mathcal O_{Z}$, the induced map \begin{equation*} \Hom_{\mathfrak Y}(Z,\mathfrak X) \longrightarrow \Hom_{\mathfrak Y}(T,\mathfrak X) \end{equation*} is surjective (injective, bijective; respectively).} \end{parraf}
\begin{parraf} Given $f: \mathfrak X \to \mathfrak Y$ in $\mathsf {NFS}$ the \emph{differential pair of $\mathfrak X$ over $\mathfrak Y$}, $( \widehat{\Omega}^{1}_{\mathfrak X/\mathfrak Y}, \widehat{d}_{\mathfrak X/\mathfrak Y})$, is locally given by $ \left( (\widehat{\Omega}^{1}_{A/B})^{\triangle}, \mathcal O_{\mathfrak U}=A^{\triangle} \xrightarrow{\text{ via }\widehat{d}_{A/B}} (\widehat{\Omega}^{1}_{A/B})^{\triangle} \right) $ for all open subsets $\mathfrak U=\spf(A) \subset \mathfrak X$ and $\mathfrak V=\spf(B) \subset \mathfrak Y$ with $f(\mathfrak U) \subset \mathfrak V$. The $\mathcal O_{\mathfrak X}$-module $\widehat{\Omega}^{1}_{\mathfrak X/\mathfrak Y}$ is called the \emph{module of $1$-differentials of $\mathfrak X$ over $\mathfrak Y$} and the continuous $\mathfrak Y$-derivation $\widehat{d}_{\mathfrak X/\mathfrak Y}$ is called the \emph{canonical derivation of $\mathfrak X$ over $\mathfrak Y$}. The basic properties of the differential pair in $\mathsf {NFS}$ are treated, for instance, in \cite[\S3]{AJP1}. \end{parraf}
\begin{parraf} Let $\mathfrak Y= \spf(A)$ be in $\sfn_{\mathsf {af}}$, $\mathbf{T}=T_{1},\, T_{2},\, \ldots,\, T_{r}$ and $\mathbf{Z}=Z_{1},\, Z_{2},\, \ldots,\\ Z_{s}$ finite numbers of indeterminates and $\mathbb D^{s}_{\mathbb A^{r}_{\mathfrak Y}}=\spf(A\{\mathbf{T}\}[[\mathbf{Z}]])$ (\emph{cf.} \cite[Example 1.6]{AJP1}). Then $\widehat{\Omega}^{1}_{\mathbb D^{s}_{\mathbb A^{r}_{\mathfrak Y}}/\mathfrak Y}= (\widehat{\Omega}^{1}_{A\{\mathbf{T}\}[[\mathbf{Z}]]/A})^{\triangle}$ and in \cite[3.14]{AJP1} it is shown that $\widehat{\Omega}^{1}_{A\{\mathbf{T}\}[[\mathbf{Z}]]/A}$ is a free $A\{\mathbf{T}\}[[\mathbf{Z}]]$-module, with basis $\{\widehat{d} T_{1},\,\ldots ,\widehat{d} T_{r},\, \widehat{d} Z_{1},\, \ldots,\\ \widehat{d} Z_{s}\}$ where $\widehat{d}=\widehat{d}_{A\{\mathbf{T}\}[[\mathbf{Z}]]/A}$. Furthermore, given $g \in A\{\mathbf{T}\}[[\mathbf{Z}]]$ it holds that: \[ \widehat{d} g = \sum_{i=1}^{r} \frac{\partial g}{\partial T_{i}} \widehat{d} T_{i} + \sum_{j=1}^{s} \frac{\partial g}{\partial Z_{j}} \widehat{d} Z_{j} \] \end{parraf}
\begin{parraf} \label{existidefmorf} Given $f: \mathfrak X=\spf(A) \to \mathfrak Y=\spf(B)$ a morphism in $\sfn_{\mathsf {af}}$ of pseudo finite type, there exists a factorization of $f$ as \[ \mathfrak X= \spf(A) \overset{j} \hookrightarrow \mathbb D^{s}_{\mathbb A^{r}_{\mathfrak Y}}= \spf(B\{\mathbf{T}\}[[\mathbf{Z}]] ) \xrightarrow{p} \mathfrak Y= \spf(B) \] with $j$ a closed immersion given by an ideal $\mathcal I= I^{\triangle} \subset \mathcal O_{\mathbb D^{s}_{\mathbb A^{r}_{\mathfrak Y}}}$ where $I= \langle g_{1}, g_{2}, \ldots, g_{k}\rangle \subset B\{\mathbf{T}\}[[\mathbf{Z}]]$ and $p$ the natural projection (\cite[Proposition 1.7]{AJP1}). The \emph{Jacobian matrix of $\mathfrak X$ over $ \mathfrak Y$ at $x$} (\cite[5.12]{AJP2}) is defined as \begin{equation*} \Jac_{\mathfrak X/\mathfrak Y}(x)=\begin{pmatrix}
\frac{\partial g_{1}}{\partial T_{1}}(x) & \ldots &
\frac{\partial g_{1}}{\partial T_{r}}(x) & \frac{\partial g_{1}}{\partial Z_{1}}(x) & \ldots & \frac{\partial g_{1}}{\partial Z_{s}}(x) \\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots\\
\frac{\partial g_{k}}{\partial T_{1}}(x) & \ldots &
\frac{\partial g_{k}}{\partial T_{r}}(x) & \frac{\partial g_{k}}{\partial Z_{1}}(x) & \ldots & \frac{\partial g_{k}}{\partial Z_{s}}(x) \\
\end{pmatrix}, \end{equation*} where for $u \in \{T_{1},\ldots ,T_{r},Z_{1},\ldots, Z_{s} \}$, $\frac{\partial g_{i}}{\partial u}(x) $ denotes the image of $\frac{\partial g_{i}}{\partial u} \in B\{\mathbf{T}\}[[\mathbf{Z}]]$ in $k(x)$, for all $i=1,2, \ldots k$. \end{parraf}
\begin{parraf} We will use the calculus of \v Cech cohomology, which forces us to impose the separation hypothesis each time we need cohomology of degree greater than or equal to $2$. Moreover, in that context the \v Cech cohomology agrees with the (usual) derived functor cohomology. This follows from \cite[Ch. III, Exercise 4.11]{ha1} in view of \cite[Corollary 3.1.8]{AJL1}. \end{parraf}
\section{Lifting of morphisms} \label{sec1} \begin{parraf} Consider a commutative diagram of morphisms of pseudo finite type in $\mathsf {NFS}$ \begin{equation}\label{hipotesis} \begin{diagram}[height=2em,w=2em,p=0.3em,labelstyle=\scriptstyle] \mathfrak Z_{0} & \rTinc^{i} & \mathfrak Z \\ \dTto^{u_{0}} & \ldTdash & \dTto\\ \mathfrak X & \rTto^{f} & \mathfrak Y\\ \end{diagram} \end{equation} where $\mathfrak Z_{0}\hookrightarrow \mathfrak Z$ is a closed formal subscheme given by a square zero ideal $\mathcal I \subset \mathcal O_{\mathfrak Z}$. A morphism $u: \mathfrak Z \to \mathfrak X$ is a \emph{lifting of $u_{0}$ over $\mathfrak Y$} if it makes this diagram commutative. For instance, if $f$ is \'etale, then for all such morphisms $u_{0}$, there always exists a unique lifting by \cite[Corollary 2.5]{AJP1}.
So the basic question is: When can we guarantee uniqueness and existence of a lifting for a $\mathfrak Y$-morphism $u_{0}: \mathfrak Z_{0} \to \mathfrak X$? In \ref{parrfderivlevant} it is shown that if $\Hom_{\mathcal O_{\mathfrak Z_{0}}}(u_{0}^{*}\widehat{\Omega}^{1}_{\mathfrak X/\mathfrak Y}, \mathcal I)=0$, then the lifting is unique. Proposition \ref{propobstruclevant} establishes that, whenever $f$ is smooth, there exists an obstruction in $\ext^{1}_{\mathcal O_{\mathfrak Z_{0}}}(u_{0}^{*} \widehat{\Omega}^{1}_{\mathfrak X/\mathfrak Y}, \mathcal I)$ to the existence of such a lifting.
Observe that in the diagram above, $i\colon \mathfrak Z_0 \to \mathfrak Z$ is the identity as a topological map and, therefore, we may identify $i_{*}\mathcal O_{\mathfrak Z_{0}} \equiv \mathcal O_{\mathfrak Z_{0}}$. Through this identification we have that the ideal $\mathcal I$ is an $\mathcal O_{\mathfrak Z_{0}}$-module and $\mathcal I = i_{*} \mathcal I$. \end{parraf}
\begin{parraf} \label{parrfderivlevant} Let us continue to consider the situation depicted in diagram (\ref{hipotesis}). If there exists a lifting $u \colon \mathfrak Z \to \mathfrak X$ of $u_{0}$ over $\mathfrak Y$, then we claim that the set of liftings of $u_{0}$ over $\mathfrak Y$ is an affine space via \[ \Hom_{\mathcal O_{\mathfrak X}}(\widehat{\Omega}^{1}_{\mathfrak X/\mathfrak Y}, u_{0 *}\mathcal I) \cong \Hom_{\mathcal O_{\mathfrak Z_{0}}}(u_{0}^{*}\widehat{\Omega}^{1}_{\mathfrak X/\mathfrak Y}, \mathcal I). \] Indeed, $u_{0 *}\mathcal I = u_{*}\mathcal I$ and in view of this identification, from \cite[(\textbf{0}, 20.1.1), (\textbf{0}, 20.3.1) and (\textbf{0}, 20.3.2)]{EGA41} we deduce that if $v: \mathfrak Z \to \mathfrak X$ is another lifting of $u_{0}$ over $\mathfrak Y$, the morphism $
\mathcal O_{\mathfrak X} \xrightarrow{u^{\sharp} - v^{\sharp}} u_{0*} \mathcal I $ is a continuous $\mathfrak Y$-derivation. By \cite[Lemma 3.6 and Theorem 3.5]{AJP1}, there exists a unique morphism of $\mathcal O_{\mathfrak X}$-modules $\phi\colon \widehat{\Omega}^{1}_{\mathfrak X/\mathfrak Y} \to u_{0*} \mathcal I$ such that $\phi \circ \widehat{d}_{\mathfrak X/\mathfrak Y} =u^{\sharp} - v^{\sharp}$. On the other hand, given a morphism of $\mathcal O_{\mathfrak X}$-modules $\phi: \widehat{\Omega}^{1}_{\mathfrak X/\mathfrak Y} \to u_{0*} \mathcal I$, the map $v^{\sharp} := u^{\sharp}+ \phi\circ \widehat{d}_{\mathfrak X/\mathfrak Y} $ defines another morphism $v \colon \mathfrak Z \to \mathfrak X$ that is a lifting of $u_{0}$.
Moreover, a $\mathfrak Y$-morphism of pseudo finite type $r\colon \mathfrak X \to \mathfrak X'$ in $\mathsf {NFS}$ induces a morphism of $\mathcal O_{\mathfrak X}$-modules $r^{*} \widehat{\Omega}^{1}_{\mathfrak X'/\mathfrak Y} \to \widehat{\Omega}^{1}_{\mathfrak X/\mathfrak Y}$ which is compatible with the canonical derivation (\emph{cf.} \cite[Proposition 3.7]{AJP1}). Therefore, any lifting of $u_{0}$ over $\mathfrak Y$ leads to a lifting of $r \circ u_{0}$ over $\mathfrak Y$ preserving compatibility with the natural map $\Hom_{\mathcal O_{\mathfrak Z_{0}}}(u_{0}^{*}\widehat{\Omega}^{1}_{\mathfrak X/\mathfrak Y}, \mathcal I) \to \Hom_{\mathcal O_{\mathfrak Z_{0}}}(u_{0}^{*}r^{*}\widehat{\Omega}^{1}_{\mathfrak X'/\mathfrak Y}, \mathcal I)$. \end{parraf}
\begin{rem}
Using the language of torsor theory \ref{parrfderivlevant} says that the sheaf on $\mathfrak Z_{0}$ which associates to the open subset $\mathfrak U_{0} \subset \mathfrak Z_{0}$ the set of liftings $\mathfrak U \to \mathfrak X$ of $u_{0}|_{\mathfrak U_{0}}$ over $\mathfrak Y$ ---where $\mathfrak U \subset \mathfrak Z$ is the open subset corresponding to $\mathfrak U_{0}$--- is a pseudo torsor over $\shom_{\mathcal O_{\mathfrak Z_{0}}}(u_{0}^{*}\widehat{\Omega}^{1}_{\mathfrak X/\mathfrak Y} , \mathcal I)$ which is functorial on $\mathfrak X$. \end{rem}
When can we guarantee for a diagram like (\ref{hipotesis}) the existence of a lifting of $u_{0}$ over $\mathfrak Y$? In \cite[Proposition 2.3]{AJP1} we have shown that if $f$ is smooth and $\mathfrak Z$ is in $\sfn_{\mathsf {af}}$, then there exists a lifting of $u_{0}$ over $\mathfrak Y$. So, the issue amounts to patching local data to obtain global data.
\begin{propo} \label{propobstruclevant} Consider the commutative diagram (\ref{hipotesis}) where $f:\mathfrak X \to \mathfrak Y$ is a smooth morphism. Then there exists an element (usually called the \emph{obstruction}) $c_{u_{0}} \in \ext^{1}_{\mathcal O_{\mathfrak Z_{0}}}(u_{0}^{*} \widehat{\Omega}^{1}_{\mathfrak X/\mathfrak Y}, \mathcal I)$ such that: $c_{u_{0}}=0$ if and only if there exists $u: \mathfrak Z \to \mathfrak X$ a lifting of $u_{0}$ over $\mathfrak Y$. \end{propo}
\begin{proof}
Let $\{\mathfrak U_{\alpha}\}_{\alpha \in L}$ be an affine open covering of $\mathfrak Z$ and $\mathfrak U_{\bullet} = \{\mathfrak U_{\alpha, 0}\}_{\alpha \in L}$ the corresponding affine open covering of $\mathfrak Z_{0}$ such that, for all $\alpha$, $\mathfrak U_{\alpha,0} \hookrightarrow \mathfrak U_{\alpha}$ is a closed immersion in $\sfn_{\mathsf {af}}$ given by the square zero ideal $\mathcal I|_{\mathfrak U_{\alpha}}$. Since $f$ is a smooth morphism, \cite[Proposition 2.3]{AJP1} implies that for all $\alpha$ there exists a lifting $v_{\alpha}: \mathfrak U_{\alpha} \to \mathfrak X$ of $u_{0}|_{\mathfrak U_{\alpha,0}}$ over $\mathfrak Y$. For all couples of indexes $\alpha,\, \beta$ such that $\mathfrak U_{\alpha \beta}:= \mathfrak U_{\alpha} \cap \mathfrak U_{\beta} \neq \varnothing$, if we denote by $\mathfrak U_{\alpha \beta,0}$ the corresponding open formal subscheme of $\mathfrak Z_{0}$, from \ref{parrfderivlevant} we have that there exists a unique morphism of $\mathcal O_{\mathfrak X}$-modules $\phi_{\alpha \beta}: \widehat{\Omega}^{1}_{\mathfrak X/\mathfrak Y} \to (u_{0}|_{\mathfrak U_{\alpha \beta, 0}})_{*} (\mathcal I|_{\mathfrak U_{\alpha \beta,0}})$ such that the following diagram \begin{diagram}[height=2em,w=2em,p=0.3em,labelstyle=\scriptstyle]
\mathcal O_{\mathfrak X} & \rTto^{\widehat{d}_{\mathfrak X/\mathfrak Y}\qquad} & \widehat{\Omega}^{1}_{\mathfrak X/\mathfrak Y} \\
\dTto^{(v_{\alpha}|_{\mathfrak U_{\alpha\beta}})^{\sharp} -(v_{\beta}|_{\mathfrak U_{\alpha\beta}})^{\sharp}} & \ldTto_{\phi_{\alpha \beta}} &\\
(u_{0}|_{\mathfrak U_{\alpha \beta, 0}})_{*} (\mathcal I|_{\mathfrak U_{\alpha \beta,0}}) & & \\ \end{diagram}
commutes. Let $u_{0}^{*} \widehat{\Omega}^{1}_{\mathfrak X/\mathfrak Y}|_{\mathfrak U_{\alpha \beta,0}} \to \mathcal I|_{\mathfrak U_{\alpha \beta,0}}$ be the morphism of $\mathcal O_{\mathfrak U_{\alpha \beta,0}}$-modules adjoint to $\phi_{\alpha \beta}$, which we continue to denote by $\phi_{\alpha \beta}$. The family of morphisms $\phi_{\mathfrak U_{\bullet}}:=(\phi_{\alpha \beta})$ satisfies the cocycle condition; that is, for any $\alpha,\, \beta,\, \gamma$ such that $\mathfrak U_{\alpha \beta \gamma,0}:= \mathfrak U_{\alpha,0} \cap \mathfrak U_{\beta,0} \cap \mathfrak U_{\gamma,0} \neq \varnothing$, we have that \begin{equation} \label{datosrecolec1}
\phi_{\alpha \beta}|_{\mathfrak U_{\alpha \beta \gamma,0}} - \phi_{\alpha \gamma}|_{\mathfrak U_{\alpha \beta \gamma,0}}+\phi_{ \beta \gamma}|_{\mathfrak U_{\alpha \beta \gamma,0}}=0 \end{equation} so, $\phi_{\mathfrak U_{\bullet}} \in \check{\Z}^{1} (\mathfrak U_{\bullet}, \shom_{\mathcal O_{\mathfrak Z_{0}}}(u_{0}^{*} \widehat{\Omega}^{1}_{\mathfrak X/\mathfrak Y} ,\mathcal I))$. Moreover, its class \[[\phi_{\mathfrak U_{\bullet}}] \in \check{\h}^{1} (\mathfrak U_{\bullet}, \shom_{\mathcal O_{\mathfrak Z_{0}}}(u_{0}^{*} \widehat{\Omega}^{1}_{\mathfrak X/\mathfrak Y} ,\mathcal I))\]
does not depend on the liftings $\{v_{\alpha}\}_{\alpha \in L}$. Indeed, for each $\alpha \in L$, let $w_{\alpha}: \mathfrak U_{\alpha} \to \mathfrak X$ be a lifting of $u_{0}|_{\mathfrak U_{\alpha,0}}$ over $\mathfrak Y$ and let $\psi_{\mathfrak U_{\bullet}}:= (\psi_{\alpha \beta}) \in \check{\Z}^{1} (\mathfrak U_{\bullet}, \shom_{\mathcal O_{\mathfrak Z_{0}}}(u_{0}^{*} \widehat{\Omega}^{1}_{\mathfrak X/\mathfrak Y} ,\mathcal I))$ be the corresponding cocycle defined as above. By \ref{parrfderivlevant} there exists a unique $\xi_{\alpha} \in \Hom_{\mathfrak X}(\widehat{\Omega}^{1}_{\mathfrak X/\mathfrak Y}, (u_{0}|_{\mathfrak U_{\alpha , 0}})_{*} (\mathcal I|_{\mathfrak U_{\alpha , 0}}))$ such that $v^{\sharp}_{\alpha}-w^{\sharp}_{\alpha} = \xi_{\alpha} \circ \widehat{d}_{\mathfrak X/\mathfrak Y}$. Then for each pair of indices $\alpha,\beta$ such that $\mathfrak U_{\alpha \beta} \neq \varnothing$ we have that
$\psi_{\alpha \beta}=\phi_{\alpha \beta}+ \xi_{\beta}|_{\mathfrak U_{\alpha \beta}} -\xi_{\alpha}|_{\mathfrak U_{\alpha \beta}}$. In other words, the cocycles $\psi_{\mathfrak U_{\bullet}}$ and $\phi_{\mathfrak U_{\bullet}}$ differ by a coboundary from which we conclude that $[\phi_{\mathfrak U_{\bullet}}] = [\psi_{\mathfrak U_{\bullet}}] \in \check{\h}^{1} (\mathfrak U_{\bullet},\shom_{\mathcal O_{\mathfrak Z_{0}}}(u_{0}^{*} \widehat{\Omega}^{1}_{\mathfrak X/\mathfrak Y} ,\mathcal I))$. With an analogous argument it is possible to prove that, given a refinement $\mathfrak V_{\bullet}$ of $\mathfrak U_{\bullet}$, we have $[\phi_{\mathfrak U_{\bullet}}]=[\phi_{\mathfrak V_{\bullet}}] \in \check{\h}^{1} (\mathfrak Z_{0}, \shom_{\mathcal O_{\mathfrak Z_{0}}}(u_{0}^{*} \widehat{\Omega}^{1}_{\mathfrak X/\mathfrak Y} ,\mathcal I))$.
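The coboundary relation above can be obtained as follows (a sketch, with the adjunctions of \ref{parrfderivlevant} left implicit): on each nonempty $\mathfrak U_{\alpha \beta}$,
\begin{align*}
\psi_{\alpha \beta} \circ \widehat{d}_{\mathfrak X/\mathfrak Y} &= w^{\sharp}_{\alpha}-w^{\sharp}_{\beta} = (v^{\sharp}_{\alpha}-\xi_{\alpha} \circ \widehat{d}_{\mathfrak X/\mathfrak Y})-(v^{\sharp}_{\beta}-\xi_{\beta} \circ \widehat{d}_{\mathfrak X/\mathfrak Y})\\
&= (\phi_{\alpha \beta}+\xi_{\beta}-\xi_{\alpha}) \circ \widehat{d}_{\mathfrak X/\mathfrak Y},
\end{align*}
and the relation $\psi_{\alpha \beta}=\phi_{\alpha \beta}+ \xi_{\beta}|_{\mathfrak U_{\alpha \beta}} -\xi_{\alpha}|_{\mathfrak U_{\alpha \beta}}$ follows from the uniqueness in \ref{parrfderivlevant}.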
We define: \begin{align*} c_{u_{0}}:= [\phi_{\mathfrak U_{\bullet}}] &\in \,
\check{\h}^{1} (\mathfrak Z_{0},\shom_{\mathcal O_{\mathfrak Z_{0}}}(u_{0}^{*} \widehat{\Omega}^{1}_{\mathfrak X/\mathfrak Y} ,\mathcal I)) \\
&= \h^{1} (\mathfrak Z_{0}, \shom_{\mathcal O_{\mathfrak Z_{0}}}(u_{0}^{*} \widehat{\Omega}^{1}_{\mathfrak X/\mathfrak Y} ,\mathcal I))
\tag{\cite[(5.4.15)]{te}} \end{align*} Since $f$ is smooth, \cite[Proposition 4.8]{AJP1} implies that $\widehat{\Omega}^{1}_{\mathfrak X/\mathfrak Y}$ is a locally free $\mathcal O_{\mathfrak X}$-module of finite rank, so \[c_{u_{0}} \in \h^{1} (\mathfrak Z_{0}, \shom_{\mathcal O_{\mathfrak Z_{0}}}(u_{0}^{*} \widehat{\Omega}^{1}_{\mathfrak X/\mathfrak Y}, \mathcal I))=\ext^{1}_{\mathcal O_{\mathfrak Z_{0}}}(u_{0}^{*} \widehat{\Omega}^{1}_{\mathfrak X/\mathfrak Y}, \mathcal I).\]
The element $c_{u_{0}}$ is the obstruction to the existence of a lifting of $u_{0}$. If $u_{0}$ admits a lifting then it is clear that $c_{u_{0}}=0$. Conversely, suppose that $c_{u_{0}}=0$. From the family of morphisms $\{v_{\alpha} \}_{\alpha \in L}$ we are going to construct a collection of liftings $\{u_{\alpha}: \mathfrak U_{\alpha} \to \mathfrak X\}_{\alpha \in L}$ of $\{u_{0}|_{\mathfrak U_{\alpha,0}}\}_{\alpha \in L}$ over $\mathfrak Y$ that will patch into a morphism $u: \mathfrak Z \to \mathfrak X$. By hypothesis, there exists $\{\varphi_{\alpha}\}_{\alpha \in L} \in \check{\C}^{0}(\mathfrak U_{\bullet},\shom_{\mathcal O_{\mathfrak Z_{0}}}(u_{0}^{*} \widehat{\Omega}^{1}_{\mathfrak X/\mathfrak Y} ,\mathcal I))$ such that for each pair of indices $\alpha, \beta$ with $\mathfrak U_{\alpha \beta} \neq \varnothing$, \begin{equation} \label{datosrecolec2}
\varphi_{\alpha}|_{\mathfrak U_{\alpha \beta}} - \varphi_{\beta}|_{\mathfrak U_{\alpha \beta}}=\phi_{\alpha \beta} \end{equation}
For all $\alpha \in L$, let $u_{\alpha}: \mathfrak U_{\alpha} \to \mathfrak X$ be the morphism that agrees with $u_{0}|_{\mathfrak U_{\alpha,0}}$ as a topological map and is given by
\[u^{\sharp}_{\alpha}:= v_{\alpha}^{\sharp} - \varphi_{\alpha} \circ \widehat{d}_{\mathfrak X/\mathfrak Y}\] as a map of topologically ringed spaces. By \ref{parrfderivlevant} we have that $u_{\alpha}$ is a lifting of $u_{0}|_{\mathfrak U_{\alpha,0}}$ over $\mathfrak Y$ for all $\alpha$, and from (\ref{datosrecolec2}) and (\ref{datosrecolec1}) (for $\gamma = \beta$) we deduce that the morphisms $\{u_{\alpha}\}_{\alpha \in L}$ glue into a morphism $u: \mathfrak Z \to \mathfrak X$. \end{proof}
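The glueing at the end of the proof of Proposition \ref{propobstruclevant} can be checked directly (a sketch, with restrictions to the overlaps left implicit): on each nonempty $\mathfrak U_{\alpha \beta}$,
\[u^{\sharp}_{\alpha}-u^{\sharp}_{\beta} = (v^{\sharp}_{\alpha}-v^{\sharp}_{\beta})-(\varphi_{\alpha}-\varphi_{\beta}) \circ \widehat{d}_{\mathfrak X/\mathfrak Y} = (\phi_{\alpha \beta}-\phi_{\alpha \beta}) \circ \widehat{d}_{\mathfrak X/\mathfrak Y}=0\]
by (\ref{datosrecolec2}), so the morphisms $u_{\alpha}$ agree on the overlaps and define $u: \mathfrak Z \to \mathfrak X$.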
\begin{parraf} Let $r\colon \mathfrak X \to \mathfrak X'$ be a $\mathfrak Y$-morphism of pseudo finite type in $\mathsf {NFS}$. From \ref{parrfderivlevant} and the last proof it follows that the obstruction $c_{u_{0}}$ is mapped to the obstruction $c_{ r \circ u_{0}}$ by the natural map $\ext^{1}_{\mathcal O_{\mathfrak Z_{0}}}(u_{0}^{*} \widehat{\Omega}^{1}_{\mathfrak X/\mathfrak Y}, \mathcal I) \to \ext^{1}_{\mathcal O_{\mathfrak Z_{0}}}(u_{0}^{*} r^{*}\widehat{\Omega}^{1}_{\mathfrak X'/\mathfrak Y}, \mathcal I) $. \end{parraf}
\section{Lifting of smooth formal schemes: Uniqueness} \label{sec2}
Given a smooth morphism $f_{0}:\mathfrak X_{0} \to \mathfrak Y_{0}$ and a closed immersion $\mathfrak Y_{0} \hookrightarrow \mathfrak Y$ defined by a square zero ideal $\mathcal I$, one can pose the following question: Suppose that there exists a smooth $\mathfrak Y$-formal scheme $\mathfrak X$ such that $\mathfrak X \times_{\mathfrak Y} \mathfrak Y_{0} = \mathfrak X_{0}$. When is $\mathfrak X$ unique? We will answer it in the present section. It follows from Proposition \ref{deform4} that if $\ext^{1}(\widehat{\Omega}^{1}_{\mathfrak X_{0}/\mathfrak Y_{0}}, f_{0}^{*}\mathcal I)=0$, then $\mathfrak X$ is unique up to isomorphism.
\begin{parraf} \label{deform} Assume that $f_{0}:\mathfrak X_{0} \to \mathfrak Y_{0}$ is a smooth morphism and $i \colon \mathfrak Y_{0} \hookrightarrow \mathfrak Y$ a closed immersion given by a square zero ideal $\mathcal I \subset \mathcal O_{\mathfrak Y}$, hence, $\mathfrak Y_{0}$ and $\mathfrak Y$ have the same underlying topological space. If there exists a smooth morphism $f:\mathfrak X \to \mathfrak Y$ in $\mathsf {NFS}$ such that the diagram \begin{equation}\label{hipotesis2} \begin{diagram}[height=2em,w=2em,p=0.3em,labelstyle=\scriptstyle] \mathfrak X_{0} & \rTto^{f_{0}} & \mathfrak Y_{0} \\ \dTinc^{j} & & \dTinc \\ \mathfrak X & \rTto^{f} & \mathfrak Y\\ \end{diagram} \end{equation} is cartesian we will say that $f:\mathfrak X \to \mathfrak Y$ is a \emph{smooth lifting of $\mathfrak X_{0}$ over $\mathfrak Y$}.
Observe that, since $f$ is flat (\cite[Proposition 4.8]{AJP1}), $j \colon \mathfrak X_{0} \to \mathfrak X$ is a closed immersion given (up to isomorphism) by the square zero ideal $f^{*}\mathcal I$. The sheaf $\mathcal I$ is an $\mathcal O_{\mathfrak Y_0}$-module in a natural way, $f^{*}\mathcal I$ is an $\mathcal O_{\mathfrak X_0}$-module, and it is clear that $f^{*}\mathcal I$ agrees with $f_0^{*}\mathcal I$ as an $\mathcal O_{\mathfrak X_0}$-module. \end{parraf}
\begin{parraf} \label{deform1} Denote by $\aut_{\mathfrak X_{0}}(\mathfrak X)$ the group of $\mathfrak Y$-automorphisms of $\mathfrak X$ that induce the identity on $\mathfrak X_{0}$. In particular, we have that $1_{\mathfrak X} \in \aut_{\mathfrak X_{0}}(\mathfrak X)$ and, therefore, by \ref{parrfderivlevant} there exists a bijection $\aut_{\mathfrak X_{0}}(\mathfrak X) \tilde{\to} \Hom_{\mathcal O_{\mathfrak X}}(\widehat{\Omega}^{1}_{\mathfrak X/\mathfrak Y}, j_{*} f_{0}^{*}\mathcal I) $ defined using the map $ g \in \aut_{\mathfrak X_{0}}(\mathfrak X) \leadsto g^{\sharp} - 1_{\mathfrak X}^{\sharp} \in \Dercont_{\mathfrak Y}(\mathcal O_{\mathfrak X}, j_{*} f_{0}^{*}\mathcal I)$. \end{parraf}
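Under the bijection of \ref{deform1}, composition of automorphisms corresponds to addition of the associated derivations (a sketch: any $g \in \aut_{\mathfrak X_{0}}(\mathfrak X)$ is a $\mathfrak Y$-morphism and the ideal of $j$ is generated by sections coming from $\mathcal I$, so $g^{\sharp}$ fixes the sections of that ideal and the cross term $(g^{\sharp}-1_{\mathfrak X}^{\sharp}) \circ (h^{\sharp}-1_{\mathfrak X}^{\sharp})$ vanishes); explicitly, for $g,\, h \in \aut_{\mathfrak X_{0}}(\mathfrak X)$,
\[(g \circ h)^{\sharp}-1_{\mathfrak X}^{\sharp}=(g^{\sharp}-1_{\mathfrak X}^{\sharp})+(h^{\sharp}-1_{\mathfrak X}^{\sharp}),\]
and, in particular, $(g^{-1})^{\sharp}-1_{\mathfrak X}^{\sharp}=-(g^{\sharp}-1_{\mathfrak X}^{\sharp})$. This additivity is used implicitly in the verifications below.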
\begin{parraf} \label{deform2}
If $\mathfrak X_{0}$ is in $\sfn_{\mathsf {af}}$ and $\mathfrak X_{0} \overset{\,\, j'}\hookrightarrow \mathfrak X' \overset{\, f'}\to \mathfrak Y$ is another smooth lifting of $\mathfrak X_0$ over $\mathfrak Y$, then there exists a $\mathfrak Y$-isomorphism $g: \mathfrak X \xrightarrow{\sim} \mathfrak X'$ such that $g|_{\mathfrak X_{0}} = j'$. Indeed, by \cite[Proposition 2.3]{AJP1} and \cite[Corollary 3.1.8]{AJL1} there are morphisms $g: \mathfrak X \to \mathfrak X'$, $g': \mathfrak X' \to \mathfrak X$ such that the following diagram is commutative: \begin{diagram}[height=2em,w=2em,p=0.3em,labelstyle=\scriptstyle] \mathfrak X_{0} & \rTinc^{j} & \mathfrak X & & \\
&\rdTinc^{j'} & \dTto^{g} \uTto_{g'} & \rdTto^{f} & \\
& & \mathfrak X' & \rTto^{f'} & \mathfrak Y\\ \end{diagram} From \ref{parrfderivlevant} and \ref{deform1} it is easy to deduce that $g' \circ g \in \aut_{\mathfrak X_{0}}(\mathfrak X)$, $g \circ g'\in \aut_{\mathfrak X_{0}}(\mathfrak X')$, therefore $g$ is an isomorphism. \end{parraf}
\begin{parraf} \label{deform3} In the setting of \ref{deform2}, the set of $\mathfrak Y$-isomorphisms of $\mathfrak X$ onto $\mathfrak X'$ that make the diagram commutative is an affine space over $\Hom_{\mathcal O_{\mathfrak X_{0}}}(\widehat{\Omega}^{1}_{\mathfrak X_{0}/\mathfrak Y_{0}},f_{0}^{*} \mathcal I)$ (or, equivalently, over $\Hom_{\mathcal O_{\mathfrak X'}}(\widehat{\Omega}^{1}_{\mathfrak X'/\mathfrak Y}, j'_{*} f_{0}^{*}\mathcal I)$, by adjunction). Indeed, assume that $g: \mathfrak X \to \mathfrak X'$ and $h: \mathfrak X \to \mathfrak X'$ are two such $\mathfrak Y$-isomorphisms. From \ref{parrfderivlevant} there exists a unique homomorphism of $\mathcal O_{\mathfrak X'}$-modules $\phi: \widehat{\Omega}^{1}_{\mathfrak X'/\mathfrak Y} \to j'_{*} f_{0}^{*} \mathcal I$ such that $g^{\sharp}-h^{\sharp} = \phi \circ \widehat{d}_{\mathfrak X'/\mathfrak Y}$. Conversely, if \[\phi \in \Hom_{\mathcal O_{\mathfrak X_{0}}}(\widehat{\Omega}^{1}_{\mathfrak X_{0}/\mathfrak Y_{0}},f_{0}^{*} \mathcal I) \cong \Hom_{\mathcal O_{\mathfrak X'}}(\widehat{\Omega}^{1}_{\mathfrak X'/\mathfrak Y}, j'_{*} f_{0}^{*}\mathcal I)\]
and $g: \mathfrak X \to \mathfrak X'$ is a $\mathfrak Y$-isomorphism with $g|_{\mathfrak X_{0}}=j'$, the $\mathfrak Y$-morphism $h: \mathfrak X \to \mathfrak X'$ defined by $h^{\sharp} = g^{\sharp}+ \phi \circ \widehat{d}_{\mathfrak X'/\mathfrak Y}$, which is the identity as a map of topological spaces, is an isomorphism. Indeed, using \ref{parrfderivlevant} and \ref{deform1} it follows that $h\circ g^{-1} \in \aut_{\mathfrak X_{0}}(\mathfrak X')$ and $g^{-1} \circ h \in \aut_{\mathfrak X_{0}}(\mathfrak X)$, therefore $h$ is an isomorphism. \end{parraf}
\begin{propo} \label{deform4} Let $\mathfrak Y_{0} \hookrightarrow \mathfrak Y$ be a closed immersion in $\mathsf {NFS}$ defined by a square zero ideal $\mathcal I \subset \mathcal O_{\mathfrak Y}$, let $f_{0}:\mathfrak X_{0} \to \mathfrak Y_{0}$ be a smooth morphism in $\mathsf {NFS}$, and suppose that there exists a smooth lifting of $\mathfrak X_{0}$ over $\mathfrak Y$. Then the set of isomorphism classes of smooth liftings of $\mathfrak X_{0}$ over $\mathfrak Y$ is an affine space over $\ext^{1}(\widehat{\Omega}^{1}_{\mathfrak X_{0}/\mathfrak Y_{0}}, f_{0}^{*} \mathcal I)$. \end{propo}
\begin{proof} Let $\mathfrak X_{0} \overset{\, j} \hookrightarrow \mathfrak X \overset{\, f}\to \mathfrak Y$ and $\mathfrak X_{0} \overset{\, j'} \hookrightarrow \mathfrak X' \overset{\, f'}\to \mathfrak Y$ be two smooth liftings over $\mathfrak Y$. Given an affine open covering $\mathfrak U_{\bullet}=\{\mathfrak U_{\alpha,0}\}_{\alpha \in L}$ of $\mathfrak X_{0}$, let $\{\mathfrak U_{\alpha}\}_{\alpha \in L}$ and $\{\mathfrak U'_{\alpha}\}_{\alpha \in L}$ be the corresponding affine open coverings of $\mathfrak X$ and $\mathfrak X'$, respectively. From \ref{deform2}, for each $\alpha \in L$ there exists an isomorphism of $\mathfrak Y$-formal schemes $u_{\alpha}: \mathfrak U_{\alpha} \xrightarrow{\sim} \mathfrak U'_{\alpha}$ such that the following diagram \begin{diagram}[height=2em,w=2em,p=0.3em,labelstyle=\scriptstyle]
\mathfrak U_{\alpha,0}& & \rTinc^{j|_{\mathfrak U_{\alpha,0}}}& \mathfrak U_{\alpha}& & & \\
& \rdTinc(3,2)_{j'|_{\mathfrak U_{\alpha,0}}}&&\dTto_{u_{\alpha}}^{\wr} &\rdTto(3,2)& & \\
& & &\mathfrak U'_{\alpha} & & \rTto& \mathfrak Y\\ \end{diagram}
is commutative. By \ref{deform3}, for each pair of indices $\alpha, \beta$ such that $\mathfrak U_{\alpha \beta,0}:= \mathfrak U_{\alpha,0} \cap \mathfrak U_{ \beta,0} \neq \varnothing$, setting $\mathfrak U_{\alpha \beta}:= \mathfrak U_{\alpha} \cap \mathfrak U_{ \beta}$, the difference between $u_{\alpha}^{\sharp}|_{\mathfrak U_{\alpha \beta}}$ and $u_{\beta}^{\sharp}|_{\mathfrak U_{\alpha \beta}}$ is measured by a homomorphism of $\mathcal O_{\mathfrak U_{\alpha \beta,0}}$-modules
$\phi_{\alpha \beta}: \widehat{\Omega}^{1}_{\mathfrak X_{0}/\mathfrak Y_{0}}|_{\mathfrak U_{\alpha \beta,0}} \to ( f_{0}^{*} \mathcal I) |_{\mathfrak U_{\alpha \beta,0}}$. Then $\phi_{\mathfrak U_{\bullet}}:=\{\phi_{\alpha \beta}\} \in \check{\C}^{1}(\mathfrak U_{\bullet},\shom_{\mathcal O_{\mathfrak X_{0}}}( \widehat{\Omega}^{1}_{\mathfrak X_{0}/\mathfrak Y_{0}}, f_{0}^{*}\mathcal I))$ has the property that for all $\alpha,\, \beta,\, \gamma$ such that $\mathfrak U_{\alpha \beta \gamma, 0}:= \mathfrak U_{\alpha, 0} \cap \mathfrak U_{ \beta, 0} \cap \mathfrak U_{ \gamma, 0} \neq \varnothing$, the cocycle condition \begin{equation} \label{aaaayyyyy}
\phi_{\alpha \beta}|_{\mathfrak U_{\alpha \beta \gamma,0}} - \phi_{\alpha \gamma}|_{\mathfrak U_{\alpha \beta \gamma,0}}+\phi_{ \beta \gamma}|_{\mathfrak U_{\alpha \beta \gamma,0}}=0 \end{equation} holds and, therefore, $\phi_{\mathfrak U_{\bullet}} \in \check{\Z}^{1} (\mathfrak U_{\bullet},\shom_{\mathcal O_{\mathfrak X_{0}}}( \widehat{\Omega}^{1}_{\mathfrak X_{0}/\mathfrak Y_{0}}, f_{0}^{*}\mathcal I))$. The cohomology class \[c_{\mathfrak U_{\bullet}}:=[\phi_{\mathfrak U_{\bullet}}] \in \check{\h}^{1} (\mathfrak U_{\bullet},\shom_{\mathcal O_{\mathfrak X_{0}}}( \widehat{\Omega}^{1}_{\mathfrak X_{0}/\mathfrak Y_{0}} ,f_{0}^{*}\mathcal I))\] does not depend on the choice of the isomorphisms $\{u_{\alpha}\}_{\alpha \in L}$. Indeed, consider another collection of $\mathfrak Y$-isomorphisms
$\{v_{\alpha}: \mathfrak U_{\alpha} \xrightarrow{\sim} \mathfrak U'_{\alpha}\}_{\alpha \in L}$ such that, for all $\alpha \in L$, $v_{\alpha} \circ j|_{\mathfrak U_{\alpha,0}}= j'|_{\mathfrak U_{\alpha,0}}$ and let $\psi_{\mathfrak U_{\bullet}}:= \{\psi_{\alpha \beta}\}$ be the corresponding element in $\check{\Z}^{1}(\mathfrak U_{\bullet},\shom_{\mathcal O_{\mathfrak X_{0}}}( \widehat{\Omega}^{1}_{\mathfrak X_{0}/\mathfrak Y_{0}} ,f_{0}^{*}\mathcal I))$ defined in the same way as $\phi_{\mathfrak U_{\bullet}}$ from $\{ u_{\alpha}\}_{\alpha \in L}$. Using \ref{deform3} we obtain a collection $\{\xi_{\alpha}\}_{\alpha \in L} \in \check{\C}^{0}(\mathfrak U_{\bullet},\shom_{\mathcal O_{\mathfrak X_{0}}}( \widehat{\Omega}^{1}_{\mathfrak X_{0}/\mathfrak Y_{0}} ,f_{0}^{*}\mathcal I))$ such that, for all $\alpha \in L$, their adjoints (for which we use the same notation) satisfy $u_{\alpha}^{\sharp}-v_{\alpha}^{\sharp} =\xi_{\alpha} \circ \widehat{d}_{\mathfrak U'_{\alpha}/\mathfrak Y}$; therefore, $[\phi_{\mathfrak U_{\bullet}}] = [\psi_{\mathfrak U_{\bullet}}] \in \check{\h}^{1} (\mathfrak U_{\bullet},\shom_{\mathcal O_{\mathfrak X_{0}}}(\widehat{\Omega}^{1}_{\mathfrak X_{0}/\mathfrak Y_{0}}, f_{0}^{*}\mathcal I))$. If $\mathfrak V_{\bullet}$ is an affine open refinement of $\mathfrak U_{\bullet}$, by what we have already seen, we deduce that $c_{\mathfrak U_{\bullet}}=c_{\mathfrak V_{\bullet}}$. Let us define \begin{align*} c :=[\phi_{\mathfrak U_{\bullet}}] &\in \check{\h}^{1} (\mathfrak X_{0},\shom_{\mathcal O_{\mathfrak X_{0}}}(\widehat{\Omega}^{1}_{\mathfrak X_{0}/\mathfrak Y_{0}} ,f_{0}^{*}\mathcal I)) \\ &= \h^{1} (\mathfrak X_{0}, \shom_{\mathcal O_{\mathfrak X_{0}}}(\widehat{\Omega}^{1}_{\mathfrak X_{0}/\mathfrak Y_{0}}, f_{0}^{*}\mathcal I)) \tag{\cite[(5.4.15)]{te}}\\ &= \ext^{1}(\widehat{\Omega}^{1}_{\mathfrak X_{0}/\mathfrak Y_{0}}, f_{0}^{*}\mathcal I) \end{align*}
Conversely, let $f\colon \mathfrak X \to \mathfrak Y$ be a smooth lifting of $\mathfrak X_{0}$ and consider $c \in \ext^{1}(\widehat{\Omega}^{1}_{\mathfrak X_{0}/\mathfrak Y_{0}}, f_{0}^{*} \mathcal I)$. Given an affine open covering $\mathfrak U_{\bullet}=\{\mathfrak U_{\alpha,0}\}_{\alpha \in L}$ of $\mathfrak X_{0}$, take the corresponding affine open covering $\{\mathfrak U_{\alpha}\}_{\alpha \in L}$ in $\mathfrak X$ and \[\phi_{\mathfrak U_{\bullet}}=(\phi_{\alpha \beta}) \in \check{\Z}^{1} (\mathfrak U_{\bullet},\shom_{\mathcal O_{\mathfrak X_{0}}}( \widehat{\Omega}^{1}_{\mathfrak X_{0}/\mathfrak Y_{0}} ,f_{0}^{*}\mathcal I))\] such that $c=[\phi_{\mathfrak U_{\bullet}}]$. For each pair of indices $\alpha, \beta$ such that $\mathfrak U_{\alpha \beta}= \mathfrak U_{\alpha}\cap \mathfrak U_{\beta} \neq \varnothing$, let us consider the morphism $u_{\alpha \beta}: \mathfrak U_{\alpha \beta} \to \mathfrak U_{\alpha \beta}$ which is the identity as a topological map and is defined by
$u^{\sharp}_{\alpha \beta} := 1^{\sharp}_{\mathfrak U_{\alpha \beta}} + \phi_{\alpha \beta} \circ \widehat{d}_{\mathfrak U_{\alpha \beta}/\mathfrak Y}$ as a map of topologically ringed spaces, where $\phi_{\alpha \beta}$ also denotes its adjoint $\phi_{\alpha \beta} \colon \widehat{\Omega}^{1}_{\mathfrak U_{\alpha \beta}/\mathfrak Y} \to (j_{*} f_{0}^{*} \mathcal I) |_{\mathfrak U_{\alpha \beta}}$. These morphisms satisfy the following: \begin{itemize} \item $u_{\alpha \beta} \in \aut_{\mathfrak U_{\alpha \beta,0}}(\mathfrak U_{\alpha \beta})$ (by \ref{deform1}); \item
$u_{\alpha \beta}|_{\mathfrak U_{\alpha \beta \gamma}} \circ u^{-1}_{\alpha \gamma}|_{\mathfrak U_{\alpha \beta \gamma}}\circ u_{ \beta \gamma}|_{\mathfrak U_{\alpha \beta \gamma}}=1_{\mathfrak U_{\alpha \beta \gamma}}$, for any $\alpha,\, \beta,\, \gamma$ such that $ \mathfrak U_{\alpha \beta \gamma}:= \mathfrak U_{\alpha} \cap \mathfrak U_{\beta} \cap \mathfrak U_{\gamma} \neq \varnothing$ (because $\{\phi_{\alpha \beta}\}$ satisfies the cocycle condition (\ref{aaaayyyyy})); \item
$u_{\alpha \alpha}= 1_{\mathfrak U_{\alpha}}$
and
$u_{\alpha \beta}^{-1}= u_{\beta \alpha}$. \end{itemize} Then the $\mathfrak Y$-formal schemes $\mathfrak U_{\alpha}$ glue through the morphisms $\{u_{\alpha \beta}\}$ into a smooth lifting $f':\mathfrak X' \to\mathfrak Y$ of $\mathfrak X_{0}$, since the morphism $f:\mathfrak X \to \mathfrak Y$ is compatible with the family of isomorphisms $\{u_{\alpha \beta}\}$.
We leave the verification that these correspondences are mutually inverse to the reader. \end{proof}
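In the converse construction in the proof of Proposition \ref{deform4}, the cocycle property of the automorphisms $u_{\alpha \beta}$ can be sketched explicitly: the deviations $u^{\sharp}_{\alpha \beta}-1^{\sharp}_{\mathfrak U_{\alpha \beta}}$ take values in a square zero ideal, so they add up under composition and (restrictions to $\mathfrak U_{\alpha \beta \gamma}$ left implicit)
\[\bigl(u_{\alpha \beta} \circ u^{-1}_{\alpha \gamma}\circ u_{ \beta \gamma}\bigr)^{\sharp}-1^{\sharp}_{\mathfrak U_{\alpha \beta \gamma}} = (\phi_{\alpha \beta}-\phi_{\alpha \gamma}+\phi_{ \beta \gamma}) \circ \widehat{d}_{\mathfrak U_{\alpha \beta \gamma}/\mathfrak Y}=0\]
by the cocycle condition (\ref{aaaayyyyy}).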
\begin{rem} Proposition \ref{deform4} can be rephrased in the language of torsor theory as follows: The sheaf on $\mathfrak X_{0}$ that associates to each open $\mathfrak U_{0} \subset \mathfrak X_{0}$ the set of isomorphism classes of smooth liftings of $\mathfrak U_{0}$ over $\mathfrak Y$ is a pseudo torsor over $\ext^{1}(\widehat{\Omega}^{1}_{\mathfrak X_{0}/\mathfrak Y_{0}}, f_{0}^{*} \mathcal I)$. \end{rem}
\begin{rem} With the hypothesis of Proposition \ref{deform4}, if $\mathfrak X_{0}$ is in $\sfn_{\mathsf {af}}$, we have that $\h^{1} (\mathfrak X_{0},\shom_{\mathcal O_{\mathfrak X_{0}}}( \widehat{\Omega}^{1}_{\mathfrak X_{0}/\mathfrak Y_{0}} ,f_{0}^{*}\mathcal I))=0$ (\emph{cf.} \cite[Corollary 3.1.8]{AJL1}) and, therefore, there exists a unique isomorphism class of liftings of $\mathfrak X_{0}$ over $\mathfrak Y$. \end{rem}
\section{Lifting of smooth formal schemes: Existence}\label{sec3}
We continue considering the set-up of the previous section, namely, a smooth morphism $f_{0}:\mathfrak X_{0} \to \mathfrak Y_{0}$ and a closed immersion $\mathfrak Y_{0} \hookrightarrow \mathfrak Y$ defined by a square zero ideal $\mathcal I$. Let us pose the following question: Does there exist a smooth $\mathfrak Y$-formal scheme $\mathfrak X$ such that $\mathfrak X \times_{\mathfrak Y} \mathfrak Y_{0} = \mathfrak X_{0}$? We will give the following local answer: for all $x \in \mathfrak X_{0}$ there exists an open $\mathfrak U_{0} \subset \mathfrak X_{0}$ with $x \in \mathfrak U_{0}$ and a locally noetherian smooth formal scheme $\mathfrak U$ over $\mathfrak Y$ such that $\mathfrak U_{0}= \mathfrak U \times_{\mathfrak Y} \mathfrak Y_{0}$ (see Proposition \ref{deformloclis}). Globally, Theorem \ref{obstrext2} provides an element in $\ext^{2}(\widehat{\Omega}^{1}_{\mathfrak X_{0}/\mathfrak Y_{0}}, f_{0}^{*}\mathcal I)$ whose vanishing is equivalent to the existence of such an $\mathfrak X$. In particular, whenever $\mathfrak X_{0}$ is in $\sfn_{\mathsf {af}}$, Corollary \ref{*} asserts the existence of $\mathfrak X$.
\begin{propo} \label{deformloclis} Let us consider in $\mathsf {NFS}$ a closed immersion $\mathfrak Y' \hookrightarrow \mathfrak Y$ and a smooth morphism $f': \mathfrak X' \to \mathfrak Y'$. For every point $x \in \mathfrak X'$ there exists an open subset $\mathfrak U' \subset \mathfrak X'$ with $x \in \mathfrak U'$ and a locally noetherian formal scheme $\mathfrak U$ smooth over $\mathfrak Y$ such that $\mathfrak U'= \mathfrak U\times_{\mathfrak Y}\mathfrak Y'$. \end{propo} \begin{proof} Since it is a local question we may assume that the morphisms $\mathfrak Y'=\spf(B') \hookrightarrow \mathfrak Y=\spf(B)$ and $f': \mathfrak X'=\spf(A') \to \mathfrak Y'=\spf(B')$ are in $\sfn_{\mathsf {af}}$ and that there exist $r,\, s \in \mathbb N$ such that the morphism $f'$ factors as \[\mathfrak X' =\spf(A') \hookrightarrow \mathbb D^{s}_{\mathbb A^{r}_{\mathfrak Y'}}\!\! = \spf(B' \{\mathbf{T}\}[[\mathbf{Z}]]) \xrightarrow{p'} \mathfrak Y'=\spf(B'),\] where $\mathfrak X' \hookrightarrow \mathbb D^{s}_{\mathbb A^{r}_{\mathfrak Y'}}$ is a closed subscheme given by an ideal $\mathcal I'=(I')^{\triangle} \subset \mathcal O_{\mathbb D^{s}_{\mathbb A^{r}_{\mathfrak Y'}}}$\!\!, with $I' \subset B' \{\mathbf{T}\}[[\mathbf{Z}]]$ an ideal, and $p'$ is the canonical projection (see \ref{existidefmorf}). Fix $x \in \mathfrak X'$. As $f'$ is smooth, by the Jacobian criterion for the affine formal space and the affine formal disc (\cite[Corollary 5.13]{AJP2}), we have that there exists $\{g'_{1},\, g'_{2},\, \ldots,\, g'_{l}\} \subset I'$ such that \begin{equation} \label{rangojacobiano} \langle g'_{1},\, g'_{2},\, \ldots,\, g'_{l} \rangle\mathcal O_{\mathbb D^{s}_{\mathbb A^{r}_{\mathfrak Y'}},x} = I'_{x} \qquad \textrm{and} \qquad \rg (\Jac_{\mathfrak X'/\mathfrak Y'}(x)) = l \end{equation} Replacing, if necessary, $\mathfrak X'$ by a smaller affine open neighborhood of $x$ we may assume that $I' =\langle g'_{1},\, g'_{2},\, \ldots,\, g'_{l}\rangle$.
Let $\{g_{1},\, g_{2},\, \ldots,\, g_{l}\}\subset B\{\mathbf{T}\}[[\mathbf{Z}]]$ be elements that map to $g'_{1},\, g'_{2},\, \ldots,\, g'_{l}$, respectively, under the continuous homomorphism of rings $B\{\mathbf{T}\}[[\mathbf{Z}]] \twoheadrightarrow B'\{\mathbf{T}\}[[\mathbf{Z}]] $ induced by $B \twoheadrightarrow B'$. Put $I:= \langle g_{1},\, g_{2},\, \ldots,\, g_{l}\rangle \subset B \{\mathbf{T}\}[[\mathbf{Z}]]$ and $\mathfrak X:= \spf(B \{\mathbf{T}\}[[\mathbf{Z}]]/I)$. Then $\mathfrak X' \subset \mathfrak X$ is a closed subscheme and in the diagram \begin{diagram}[height=2.3em,w=2.3em,p=0.3em,labelstyle=\scriptstyle] \mathfrak X & \rTinc & \mathbb D^{s}_{\mathbb A^{r}_{\mathfrak Y}} &\rTto^{p} & \mathfrak Y \\ \uTinc & & \uTinc & & \uTinc\\ \mathfrak X' & \rTinc & \mathbb D^{s}_{\mathbb A^{r}_{\mathfrak Y'}} &\rTto^{p'} & \mathfrak Y' \\ \end{diagram} the squares are cartesian. From (\ref{rangojacobiano}) we deduce that $\rg (\Jac_{\mathfrak X/\mathfrak Y}(x)) = l$ and, applying the Jacobian criterion for the affine formal space and the affine formal disc, it follows that $\mathfrak X \to \mathfrak Y$ is smooth at $x \in \mathfrak X$. To finish the proof it suffices to take $\mathfrak U \subset \mathfrak X$, an open neighborhood of $x \in \mathfrak X$ such that the morphism $\mathfrak U \to \mathfrak Y$ is smooth, and $\mathfrak U'$ the corresponding open set in $\mathfrak X'$. \end{proof}
\begin{thm} \label{obstrext2} Let us consider in $\mathsf {NFS}$ a closed immersion $\mathfrak Y_{0} \hookrightarrow \mathfrak Y$ given by a square zero ideal $\mathcal I \subset \mathcal O_{\mathfrak Y}$ and $f_{0}:\mathfrak X_{0} \to \mathfrak Y_{0}$ a smooth morphism with $\mathfrak X_{0}$ a separated formal scheme. Then there is an element $c_{f_{0}} \in \ext^{2}(\widehat{\Omega}^{1}_{\mathfrak X_{0}/\mathfrak Y_{0}}, f_{0}^{*}\mathcal I)$ such that: $c_{f_{0}}$ vanishes if and only if there exists a smooth lifting $\mathfrak X$ of $\mathfrak X_{0}$ over $\mathfrak Y$. \end{thm}
\begin{proof} From Proposition \ref{deformloclis}, there exists an affine open covering $\mathfrak U_{\bullet}=\{\mathfrak U_{\alpha, 0}\}_{\alpha \in L}$ of $\mathfrak X_{0}$ such that for all $\alpha \in L$ there exists a smooth lifting $\mathfrak U_{\alpha}$ of $\mathfrak U_{\alpha, 0}$ over $\mathfrak Y$. As $\mathfrak X_{0}$ is a separated formal scheme, $\mathfrak U_{\alpha \beta, 0}:= \mathfrak U_{\alpha, 0} \cap \mathfrak U_{ \beta, 0}$ is an affine open set for any $\alpha,\beta$; if we denote by $\mathfrak U_{\alpha \beta} \subset \mathfrak U_{\alpha}$ and $\mathfrak U_{\beta \alpha} \subset \mathfrak U_{\beta}$ the corresponding open subsets, then from \ref{deform2} there exists an isomorphism $u_{\alpha \beta}: \mathfrak U_{\alpha \beta} \xrightarrow{\sim} \mathfrak U_{\beta\alpha}$ such that the following diagram \begin{diagram}[height=2em,w=2em,p=0.3em,labelstyle=\scriptstyle] \mathfrak U_{\alpha \beta, 0}& & \rTinc&\mathfrak U_{\alpha \beta} & & & \\
&\rdTinc(3,2)&&\dTto_{u_{\alpha \beta}}^{\wr } &\rdTto(3,2)& & \\
& & &\mathfrak U_{\beta\alpha}& & \rTto& \mathfrak Y\\ \end{diagram} commutes. For any $\alpha, \beta, \gamma$ such that $\mathfrak U_{\alpha \beta \gamma,0}:= \mathfrak U_{\alpha,0} \cap \mathfrak U_{ \beta,0} \cap \mathfrak U_{ \gamma,0}\neq \varnothing$, let us write $\mathfrak U_{\alpha \beta \gamma}:= \mathfrak U_{\alpha \beta} \times_{\mathfrak U_{ \alpha}} \mathfrak U_{ \alpha \gamma}$. It holds that \[u_{\alpha \beta \gamma}:=
u^{-1}_{ \alpha \gamma}|_{\mathfrak U_{\gamma \beta} \cap \mathfrak U_{ \gamma\alpha}} \circ
u_{ \beta \gamma}|_{\mathfrak U_{ \beta\alpha} \cap \mathfrak U_{\beta\gamma}} \circ
u_{\alpha \beta}|_{\mathfrak U_{\alpha \beta} \cap \mathfrak U_{\alpha \gamma}} \in \aut_{\mathfrak U_{\alpha \beta \gamma,0}}(\mathfrak U_{\alpha \beta \gamma}).\] Applying \ref{deform1} we get a unique $\phi_{\alpha \beta \gamma} \in \ga(\mathfrak U_{\alpha \beta \gamma,0}, \shom_{\mathcal O_{\mathfrak X_{0}}}(\widehat{\Omega}^{1}_{\mathfrak X_{0}/\mathfrak Y_{0}}, f_{0}^{*}\mathcal I))$ whose adjoint satisfies the relation $u_{\alpha \beta \gamma}^{\sharp}-1^{\sharp}_{\mathfrak U_{\alpha \beta \gamma}}= \phi_{\alpha \beta \gamma} \circ \widehat{d}_{\mathfrak U_{\alpha \beta \gamma}/\mathfrak Y}$. Let $\mathfrak U_{\alpha \beta \gamma \delta,0}:= \mathfrak U_{\alpha,0 } \cap \mathfrak U_{ \beta,0 } \cap \mathfrak U_{ \gamma,0} \cap \mathfrak U_{ \delta,0}$. By the previous discussion, the cochain $\phi_{\mathfrak U_{\bullet}}:=(\phi_{\alpha \beta \gamma}) \in \check{\C}^{2}(\mathfrak U_{\bullet},\shom_{\mathcal O_{\mathfrak X_{0}}}( \widehat{\Omega}^{1}_{\mathfrak X_{0}/\mathfrak Y_{0}}, f_{0}^{*}\mathcal I))$ satisfies the cocycle condition \begin{equation} \label{aaaayyyyy2}
\phi_{\alpha \beta \gamma}|_{\mathfrak U_{\alpha \beta \gamma \delta,0}} -
\phi_{\alpha \gamma \delta}|_{\mathfrak U_{\alpha \beta \gamma \delta,0}} +
\phi_{ \beta \gamma \delta}|_{\mathfrak U_{\alpha \beta \gamma \delta,0}} -
\phi_{ \beta \delta \alpha}|_{\mathfrak U_{\alpha \beta \gamma \delta,0}} = 0 \end{equation} for any $\alpha,\, \beta,\, \gamma,\, \delta$ such that $\mathfrak U_{\alpha \beta \gamma \delta,0} \neq \varnothing$ and, therefore, \[\phi_{\mathfrak U_{\bullet}} \in \check{\Z}^{2} (\mathfrak U_{\bullet},\shom_{\mathcal O_{\mathfrak X_{0}}}( \widehat{\Omega}^{1}_{\mathfrak X_{0}/\mathfrak Y_{0}}, f_{0}^{*}\mathcal I)).\]
Using \ref{deform3} and reasoning in an analogous way as in the proof of Proposition \ref{deform4}, it is easily seen that the definition of \[c_{\mathfrak U_{\bullet}}:=[\phi_{\mathfrak U_{\bullet}}] \in \check{\h}^{2} (\mathfrak U_{\bullet},\shom_{\mathcal O_{\mathfrak X_{0}}}( \widehat{\Omega}^{1}_{\mathfrak X_{0}/\mathfrak Y_{0}} ,f_{0}^{*}\mathcal I))\] does not depend on the choice of the family of isomorphisms $\{u_{\alpha \beta}\}$. Furthermore, if $\mathfrak V_{\bullet}$ is an affine open refinement of $\mathfrak U_{\bullet}$, then $c_{\mathfrak U_{\bullet}}=c_{\mathfrak V_{\bullet}} \in \check{\h}^{2} (\mathfrak X_{0},\shom_{\mathcal O_{\mathfrak X_{0}}}( \widehat{\Omega}^{1}_{\mathfrak X_{0}/\mathfrak Y_{0}} ,f_{0}^{*}\mathcal I))$. By \cite[Proposition 4.8]{AJP1}, $\widehat{\Omega}^{1}_{\mathfrak X_{0}/\mathfrak Y_{0}}$ is a locally free $\mathcal O_{\mathfrak X_{0}}$-module. Since $\mathfrak X_{0}$ is separated, using \cite[Corollary 3.1.8]{AJL1} and \cite[Ch. III, Exercise 4.11]{ha1}, we have that $ \check{\h}^{2} (\mathfrak X_{0},\shom_{\mathcal O_{\mathfrak X_{0}}}(\widehat{\Omega}^{1}_{\mathfrak X_{0}/\mathfrak Y_{0}} ,f_{0}^{*}\mathcal I)) = \h^{2} (\mathfrak X_{0}, \shom_{\mathcal O_{\mathfrak X_{0}}}(\widehat{\Omega}^{1}_{\mathfrak X_{0}/\mathfrak Y_{0}} ,f_{0}^{*}\mathcal I)). $ We set \[ c_{f_{0}}:=[\phi_{\mathfrak U_{\bullet}}] \in \h^{2} (\mathfrak X_{0}, \shom_{\mathcal O_{\mathfrak X_{0}}}(\widehat{\Omega}^{1}_{\mathfrak X_{0}/\mathfrak Y_{0}} ,f_{0}^{*}\mathcal I)) = \ext^{2}(\widehat{\Omega}^{1}_{\mathfrak X_{0}/\mathfrak Y_{0}}, f_{0}^{*}\mathcal I). \]
Let us show that $c_{f_{0}}$ is the obstruction to the existence of a smooth lifting of $\mathfrak X_{0}$ over $\mathfrak Y$. If there exists a smooth lifting $\mathfrak X$ of $\mathfrak X_{0}$ over $\mathfrak Y$, one may take the isomorphisms $\{u_{\alpha \beta}\}$ above to be the identities, and then $c_{f_{0}}=0$ trivially. Conversely, let $\mathfrak U_{\bullet}=\{\mathfrak U_{\alpha,0}\}_{\alpha \in L}$ be an affine open covering of $\mathfrak X_{0}$ and, for each $\alpha$, $\mathfrak U_{\alpha}$ a smooth lifting of $\mathfrak U_{\alpha,0}$ over $\mathfrak Y$ such that, with the notations established at the beginning of the proof, $c_{f_{0}}=[\phi_{\mathfrak U_{\bullet}}]$ with \[\phi_{\mathfrak U_{\bullet}}=(\phi_{\alpha \beta \gamma}) \in \check{\Z}^{2} (\mathfrak U_{\bullet},\shom_{\mathcal O_{\mathfrak X_{0}}}( \widehat{\Omega}^{1}_{\mathfrak X_{0}/\mathfrak Y_{0}}, f_{0}^{*}\mathcal I)).\] Since $c_{f_{0}}=0$, we are going to glue the $\mathfrak Y$-formal schemes $\{\mathfrak U_{\alpha}\}_{\alpha \in L}$ into a smooth lifting of $\mathfrak X_{0}$ over $\mathfrak Y$. By hypothesis, $\phi_{\mathfrak U_{\bullet}}$ is a coboundary and therefore there exists $(\phi_{\alpha \beta}) \in \check{\C}^{1}(\mathfrak U_{\bullet},\shom_{\mathcal O_{\mathfrak X_{0}}}(\widehat{\Omega}^{1}_{\mathfrak X_{0}/\mathfrak Y_{0}} ,f_{0}^{*}\mathcal I))$ such that, for any $\alpha, \beta, \gamma$ with $\mathfrak U_{\alpha \beta \gamma, 0} \neq \varnothing$, \begin{equation} \label{datosrecolec22}
\phi_{\alpha \beta}|_{\mathfrak U_{\alpha \beta \gamma, 0}} - \phi_{\alpha \gamma}|_{\mathfrak U_{\alpha \beta \gamma, 0}}+ \phi_{ \beta \gamma}|_{\mathfrak U_{\alpha \beta \gamma, 0}}=\phi_{\alpha \beta \gamma}. \end{equation} For each pair of indices $\alpha, \beta$ such that $\mathfrak U_{\alpha \beta,0} \neq \varnothing$, let $v_{\alpha \beta}: \mathfrak U_{\alpha \beta} \to \mathfrak U_{\beta \alpha }$ be the morphism that is the identity on the underlying topological spaces and whose map of topologically ringed spaces is given by
$v^{\sharp}_{\alpha \beta} := u^{\sharp}_{\alpha \beta} - \phi_{\alpha \beta} \circ \widehat{d}_{\mathfrak X/\mathfrak Y}|_{\mathfrak U_{\alpha \beta}}$. The family $\{v_{\alpha \beta}\}$ satisfies: \begin{itemize} \item Each map $v_{\alpha \beta}$ is an isomorphism of $\mathfrak Y$-formal schemes (by \ref{deform3}). \item For any $\alpha$, $\beta$, $\gamma$ such that $\mathfrak U_{\alpha \beta \gamma, 0}\neq \varnothing$,
\[v^{-1}_{ \alpha \gamma}|_{\mathfrak U_{\gamma \beta} \cap \mathfrak U_{ \gamma\alpha}} \circ v_{ \beta \gamma}|_{\mathfrak U_{ \beta\alpha} \cap \mathfrak U_{\beta\gamma}} \circ v_{\alpha \beta}|_{\mathfrak U_{\alpha \beta} \cap \mathfrak U_{\alpha \gamma}}=1_{\mathfrak U_{\alpha \beta} \cap \mathfrak U_{\alpha \gamma}} \] by (\ref{aaaayyyyy2}) and (\ref{datosrecolec22}). \item For any $\alpha$, $\beta$,
$v_{\alpha \alpha}= 1_{\mathfrak U_{\alpha}}$ and
$v_{\alpha \beta}^{-1}= v_{\beta \alpha}$. \end{itemize} Thus, the $\mathfrak Y$-formal schemes $\{\mathfrak U_{\alpha}\}$ glue into a smooth lifting $f:\mathfrak X \to \mathfrak Y$ of $\mathfrak X_{0}$ over $\mathfrak Y$ via the gluing morphisms $\{v_{\alpha \beta}\}$. \end{proof}
\begin{cor} \label{*} With the hypothesis of Theorem \ref{obstrext2}, if $\mathfrak X_{0}$ is affine, there exists a lifting of $\mathfrak X_{0}$ over $\mathfrak Y$. \end{cor}
\begin{proof} By \cite[Corollary 3.1.8]{AJL1} we have that $\h^{2} (\mathfrak X_{0},\shom_{\mathcal O_{\mathfrak X_{0}}}( \widehat{\Omega}^{1}_{\mathfrak X_{0}/\mathfrak Y_{0}} ,f_{0}^{*}\mathcal I))=0$ and the result follows from the last proposition. \end{proof}
\end{document}
\begin{document}
\baselineskip=17pt
\title[Ramanujan congruences]
{Ramanujan-style congruences for prime level}
\author{Arvind Kumar, Moni Kumari, Pieter Moree and Sujeet Kumar Singh}
\address{Department of Mathematics, Indian Institute of Technology Jammu, Jagti NH-44, PO Nagrota, Jammu 181221, India.} \email{arvind.kumar@iitjammu.ac.in}
\address{Max-Planck-Institut f\"ur Mathematik, Vivatsgasse 7,
53111 Bonn, Germany}
\email{kumari@mpim-bonn.mpg.de}
\address{Max-Planck-Institut f\"ur Mathematik, Vivatsgasse 7,
53111 Bonn, Germany}
\email{moree@mpim-bonn.mpg.de }
\address{School of Mathematical Sciences, The University of Nottingham, University Park, Nottingham NG7 2RD, United Kingdom.}
\email{sujeet170492@gmail.com}
\subjclass[2010]{11F33, 11F11, 11F80, 11N37}
\date{\today}
\keywords{Modular forms,
Ramanujan congruences, Euler-Kronecker constants}
\maketitle
\begin{abstract}
We establish Ramanujan-style congruences modulo certain primes $\ell$ between an Eisenstein series of weight $k$, prime level $p$ and a cuspidal newform in the $\varepsilon$-eigenspace of the Atkin-Lehner operator inside the space of cusp forms of weight $k$ for $\Gamma_0(p)$. Under a mild assumption, this refines a result of Gaba-Popa. We use these congruences and recent work of Ciolan, Languasco and the third author on Euler-Kronecker constants, to quantify the non-divisibility of the
Fourier coefficients involved by $\ell.$ We investigate the degree of the number field generated by these coefficients using recent results on prime factors of shifted primes.
\end{abstract}
\section{Introduction}
Let $E_k$ be the Eisenstein series of even weight $k\ge 2$ for the group $SL_2({\mathbb Z})$,
normalized so that its Fourier series expansion is
$$E_k(z)=-\frac{B_k}{2k}+\sum_{n=1}^{\infty}\sigma_{k-1}(n)e^{2\pi i n z},$$
where $B_k$ is the $k$th Bernoulli number and $\sigma_r(n) = \sum_{d|n}d^{r}$ is the
$r$-th sum of divisors function.
The prototype of a Ramanujan congruence goes back
to 1916 and asserts that
\begin{equation}\label{RC}
\tau(n)\equiv \sigma_{11}(n) \pmod*{691},
\end{equation}
for every positive integer $n$. This can be viewed as a (coefficient-wise) congruence between the unique cusp form $\Delta(z)=\sum_{n=1}^{\infty}\tau(n)e^{2\pi i n z}$ of weight $12$ and the Eisenstein series
$E_{12}(z)$, namely
$ \Delta \equiv E_{12} \pmod*{691}.$
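Ramanujan's congruence is easy to test numerically. The following sketch (an illustration only, not part of any argument in this paper; the truncation order $N$ is an arbitrary choice of ours) expands $\Delta=q\prod_{n\ge 1}(1-q^n)^{24}$ as a power series and compares its coefficients with $\sigma_{11}(n)$ modulo $691$:

```python
# Check tau(n) = sigma_11(n) (mod 691) by expanding
# Delta = q * prod_{n>=1} (1 - q^n)^24 as a truncated power series.
N = 200  # truncation order (arbitrary illustrative choice)

# coefficients of prod_{n>=1} (1 - q^n)^24 up to q^(N-1)
coeffs = [0] * N
coeffs[0] = 1
for n in range(1, N):
    for _ in range(24):  # multiply in-place by (1 - q^n), 24 times
        for k in range(N - 1, n - 1, -1):
            coeffs[k] -= coeffs[k - n]

def tau(m):
    # tau(m) is the coefficient of q^m in Delta, i.e. of q^(m-1) in the product
    return coeffs[m - 1]

def sigma(m, r):
    return sum(d ** r for d in range(1, m + 1) if m % d == 0)

# Ramanujan's congruence, checked for 1 <= n <= N
assert all((tau(m) - sigma(m, 11)) % 691 == 0 for m in range(1, N + 1))
```

The same expansion also reproduces the familiar initial values $\tau(1)=1$, $\tau(2)=-24$, $\tau(3)=252$.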
There are several well-known ways to prove, interpret, and
generalize this.
For example, to higher weights eigenforms of level 1
by Datskovsky-Guerzhoy \cite{dagu}, to newforms of weight $k$ and prime level $p$ by Billerey-Menares \cite{bime}, and to Fourier coefficients of index coprime to $p$ by Dummigan-Fretwell \cite{dufr}. The latter two authors were primarily motivated
by an interesting relation of these congruences with the Bloch-Kato conjecture for
the partial Riemann zeta function.
Gaba-Popa \cite{gapo} refined these results, by determining, under some technical conditions, also the Atkin-Lehner eigenvalue of the involved newform, and thus obtained congruences for all coefficients.
\par To make our statements more concrete, we first define
for $\varepsilon\in \{\pm 1\}$ an Eisenstein series of
even weight $k\ge 2$ and prime level $p,$
namely
\begin{equation}\label{ES}
E_{k,p}^{\varepsilon}(z):=E_k(z)+ \varepsilon E_k| W_p(z)={E_k(z)+{\varepsilon}p^{k/2}E_k(pz)},
\end{equation}
where $W_p$ is the Atkin-Lehner operator.
By $M_k^{\varepsilon}(p)$ (resp.\,$S_k^{\varepsilon}(p)$) we denote the $\varepsilon$-eigenspace of the Atkin-Lehner operator $W_p$ inside $M_k(p)$
(resp.\,$S_k(p)$), the space of modular forms (resp.\,cusp forms) of weight $k$ and for the group $\Gamma_0(p)$.
It is known that $E_{2,p}^{-1}\in M_2^{-1}(p)$ and
$ E_{k,p}^{\varepsilon}\in M_k^{\varepsilon}(p)$ for $k\ge 4.$ Using the Fourier series expansion of $E_k$, we obtain
\begin{equation*}
E_{k,p}^{\varepsilon}(z)=-\frac{B_k}{2k}\varepsilon(\varepsilon+ p^{k/2})+\sum\limits_{n\ge 1} \left(\sigma_{k-1}(n)+\varepsilon p^{k/2}\sigma_{k-1}\left(\frac{n}{p}\right)\right)e^{2\pi i n z},
\end{equation*}
where $\sigma_{k-1}\left(\frac{n}{p}\right)=0$ if $p\nmid n$. We now recall the main result of Gaba-Popa,
the proof of which relies on the theory of period polynomials for congruence subgroups developed by Paşol and Popa \cite{papo}.
\begin{thm}\label{gapo_result} \cite[Theorem 1]{gapo}
Let $k\ge 4$ be an even integer, $p$ a prime and $\varepsilon \in \{\pm 1\}$.
Let $\ell\ge k+2$ be a prime such that
\begin{center}
$\ell \mid \frac{B_k}{2k}(\varepsilon + p^{k/2})$ and
$\ell \mid (\varepsilon+p^{k/2}) (\varepsilon+p^{k/2-1}).$
\end{center}
In case $\ell \nmid (\varepsilon+ p^{k/2})$ we assume
in addition that there exists an even integer $n$
with $0 <n <k$ such that
$\ell\nmid B_n B_{k-n} (p^{n-1}-1).$
Then, there exists a newform $f\in S_k^{\varepsilon}(p)$ and a prime ideal $\lambda$ lying above $\ell$ in the coefficient field of $f$ such that
\begin{equation*}
f\equiv E_{k,p}^{\varepsilon} \pmod*{\lambda}.
\end{equation*}
\end{thm}
We emphasize that from the latter congruence it follows that if $\overline \rho_{f,\lambda}$
denotes the mod $\lambda$ Galois representation, then (up to semisimplification) $\overline \rho_{f,\lambda} \simeq 1 \oplus \chi_\ell^{k-1}$, where $\chi_\ell$ is the mod $\ell$ cyclotomic character. Therefore, \thmref{gapo_result} gives a sufficient condition on the prime $\ell$
such that the representation $1 \oplus \chi_\ell^{k-1}$ arises from a newform in $S_k^\varepsilon(p)$.
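The divisibility conditions on $\ell$ in \thmref{gapo_result} can be tested directly. The sketch below (illustrative only; the helper names are ours) computes $B_k$ exactly and checks whether $\ell$ divides the reduced numerator of $\frac{B_k}{2k}(\varepsilon+p^{k/2})$ as well as $(\varepsilon+p^{k/2})(\varepsilon+p^{k/2-1})$:

```python
from fractions import Fraction
from math import comb

def bernoulli(k):
    # B_0, ..., B_k via the recurrence sum_{j<m} C(m+1, j) B_j = -(m+1) B_m
    B = [Fraction(0)] * (k + 1)
    B[0] = Fraction(1)
    for m in range(1, k + 1):
        B[m] = -sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1)
    return B[k]

def condition_holds(k, p, eps, ell):
    # ell | num(B_k/(2k) * (eps + p^(k/2)))  and
    # ell | (eps + p^(k/2)) * (eps + p^(k/2 - 1))
    a = eps + p ** (k // 2)
    b = eps + p ** (k // 2 - 1)
    num = (bernoulli(k) / (2 * k) * a).numerator  # reduced numerator
    return num % ell == 0 and (a * b) % ell == 0
```

For instance, $(k,p,\varepsilon,\ell)=(4,7,+1,5)$ satisfies these conditions: here $\frac{B_4}{8}(1+7^2)=-\frac{5}{24}$ and $5\mid (1+7^2)(1+7)$; note that $\ell=k+1$ in this example, a case not covered by \thmref{gapo_result}.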
The purpose of this paper is
to strengthen \thmref{gapo_result} and, on
a somewhat different note, to quantify
the non-divisibility of the Fourier coefficients
of $E_{k,p}^{\varepsilon}$. The corresponding results
are presented in the next section, respectively in Section \ref{quanti}.
\subsection{Strengthening of Theorem \ref{gapo_result}}
We sharpen Theorem \ref{gapo_result} in the next two theorems.
\begin{thm}\label{main}
Let $k\ge 2$ be an even integer, $p$ a prime and $\varepsilon \in \{\pm 1\}$.
If $k=2$, we
also assume that $\varepsilon=-1.$
Let $\ell\ge \max\{5, k-1\}$ be a prime such that $p\not\equiv -1\pmod*{\ell}$.
Then the following are equivalent:
\begin{itemize}
\item[$(1)$] $\ell \mid \frac{B_k}{2k}(\varepsilon + p^{k/2})$ and
$\ell \mid (\varepsilon+p^{k/2}) (\varepsilon+p^{k/2-1});$
\item[$(2)$]
the existence of a newform $f\in S_k^{\varepsilon}(p)$ and a prime ideal $\lambda$ lying above $\ell$ in the coefficient field of $f$ such that
$$f\equiv E_{k,p}^{\varepsilon} \pmod*{\lambda}.$$
\end{itemize}
\end{thm}
\noindent This result improves on Theorem
\ref{gapo_result} in three different aspects.
\begin{itemize}
\item[(a)] Instead of $\ell > k+1,$ now also $\ell=k\pm 1$ is allowed. Gaba-Popa \cite{gapo} pointed out that, based on several numerical examples, they expect that their result should hold even for $\ell=k\pm 1,$ although their method breaks down for these values of $\ell$. Therefore, it
is reasonable to expect that \cite[Conjecture 3.2]{bime} and \cite[Conjecture on p. 53]{gapo}
should also hold for $\ell=k\pm 1$.
\item[(b)] Taking $k=2$ is allowed, and hence this recovers an earlier result of Mazur \cite[Proposition 5.12 (ii)]{maz}, who used the geometry of modular curves associated to weight $2$ modular forms and the $q$-expansion principle to prove his result.
\item[(c)] There is no condition on $B_n B_{k-n} (p^{n-1}-1)$ anymore in case $\ell \nmid (\varepsilon+ p^{k/2}).$
\end{itemize}
Comparing \thmref{main} with \thmref{gapo_result}, we see that there
is now the extra condition $p\not\equiv -1\pmod*{\ell}$ (redundant for $k=2$). In some special cases we remove this condition, together with the assumption $\ell \ge k-1,$ by proving the following variant of \thmref{main}.
\begin{thm}\label{rmain}
Let $k\ge 2$ be an even integer, $\ell \ge 5$ and $p$ be primes and $\varepsilon \in \{\pm 1\}$. If $k=2$, we also assume that $\varepsilon=-1$. Suppose that $\ell \mid \frac{B_k}{2k}(\varepsilon+ p^{k/2})$. We further assume that $k\not\equiv0 \pmod{\ell-1}$ and $\ell\nmid \frac{B_k}{2k}$. Then there exists a newform $f\in S_k^\varepsilon(p)$ and a prime ideal $\lambda$ over $\ell$ in the coefficient field of $f$ such that
$$
f\equiv E_{k,p}^{\varepsilon} \pmod*{\lambda}.
$$
\end{thm}
Our proof of the above theorems uses some classical results from the theory of mod $\ell$ modular forms and Deligne's theorem on Galois representations attached to eigenforms.
Further, it is based on the ideas used in \cite{dufr}, is quite classical in nature, and completely avoids the use of period polynomials. More precisely, we first prove that the assumptions on $\ell$ ensure that the reduction of $E_{k,p}^{\varepsilon}$ modulo $\ell$ is a cuspidal eigenform in characteristic $\ell,$ and then using the Deligne-Serre lifting lemma we lift it to an eigenform in characteristic zero. In
the final step we apply the Diamond-Ribet level raising theorem and a result of Langlands to obtain the desired newform.
It is also worth pointing out that our
method of proof enables us to obtain a refinement of the celebrated
Diamond-Ribet level raising theorem (for a precise statement see
\thmref{refine}).
{We remark that the main underlying ideas in our work are similar to those of Billerey-Menares \cite{bime}. However, because we are working only with forms of prime level, in contrast to squarefree level, we are able to consider a more appropriate Eisenstein series $E_{k,p}^\varepsilon$ (cf. \cite[\S 1.2.2]{bime}), resulting in an improvement over \cite{bime} and allowing us to take $\ell\ge k-1$.
}
Next using some elementary ideas, we establish the following result in which
the resulting cusp form may not be an eigenform as
before, but it will always have rational Fourier coefficients.
\begin{thm}\label{main2}
Let $k\ge 2$ be an even integer, $p$ a prime and $\varepsilon \in \{\pm 1\}$.
If $k=2$, we
also assume that $\varepsilon=-1.$ Let $N^{\varepsilon}_{k, p}$ be the reduced numerator of ${\frac{B_k}{2k}(\varepsilon+ p^{k/2})}$. Suppose
that at least one of the following conditions holds:
\begin{itemize}
\item[{\rm (a)}]
$ p\in \{2, 3, 5, 7, 11, 13, 17, 19, 23, 29,31, 41, 47, 59, 71 \}$.
\item[{\rm (b)}]
$k\ge 8,$ $k \equiv 1 - \varepsilon\pmod* 4$
and $N_{k, p}^{-1}$ is coprime to $p-1$.
\item[{\rm (c)}]
$k\ge 10$ and $k \equiv 1 - \varepsilon\pmod* {10},$ and $N^{\varepsilon}_{k, p}$ is coprime to $(p+\varepsilon)p(p+1).$
\end{itemize}
{If the space $S_k^{\varepsilon}(p)$ is non-trivial,} then there exists a non-zero cusp form $f\in S_k^{\varepsilon}(p)$ with rational Fourier coefficients such that
$$f\equiv E_{k, p}^\varepsilon \pmod*{N_{k, p}^\varepsilon}.$$
\end{thm}
\begin{rmk}
{\rm The primes $p$ in (a) are exactly those for
which the genus of the
Fricke group of level $p$ is zero. They are
also precisely the prime factors of
$2^{46}\cdot 3^{20}\cdot 5^9\cdot 7^6\cdot 11^2\cdot 13^3\cdot 17 \cdot19 \cdot23 \cdot29 \cdot31 \cdot41\cdot 47\cdot59\cdot71,$ the order of the Monster
group. That is not a coincidence!
For more details, see, e.g., Gannon \cite{terry0,terry}}.
\end{rmk} {Using \thmref{main2} one can prove
\thmref{eigen}. Even though its assertion is weaker than Theorems \ref{main} and \ref{rmain}, we stress that the significance of \thmref{eigen} is that it is less restrictive and it gives congruences even for small primes, namely for $\ell=2$ and $3$.}
The proof
is completely patterned after the proof of \cite[Lemma 2.1, Theorem 2]{dagu} and so we omit it here.
It rests on the fact that the set
$$
\mathcal{B}:=\{f+\varepsilon f|{W_p} : f\in S_k(1) ~\text{is a normalized eigenform} \} \cup \{g: g \in S_k^\varepsilon(p)~\text{newform} \}
$$
forms a basis of $S_k^{\varepsilon}(p)$ consisting of normalized Hecke eigenfunctions for all $T_q$ for $q\neq p$.
\begin{thm}\label{eigen}
Suppose at least one of the conditions $(a), (b), (c)$ of \thmref{main2} holds.
Let $\mathcal{B}$ be a basis of $S_k^{\varepsilon}(p)$ of
normalized Hecke eigenfunctions for all $T_q$ for $q\neq p$. Suppose some prime ideal $\lambda$ divides
$N_{k, p}^{\varepsilon}$ in the coefficient field $\mathbb{Q}(a_f(q): f\in \mathcal{B},\,q\neq p)$. Then
there exists a cusp form $f=\sum_{n\ge 1}a_{f}(n)e^{2\pi i nz} \in \mathcal{B},$ such that for all integers
$n$ coprime to $p,$ we have
$$a_{f}(n)\equiv \sigma_{k-1}(n) \pmod*{\lambda}.$$
\end{thm}
As an application of our results (especially of \thmref{eigenform}), we give a non-trivial lower bound for the degree of the number field generated by all normalized eigenforms (and newforms) in the space $S_k(p)$; see Section \ref{application}. These bounds improve on a similar result of \cite{bime} and are valid for a subset of the primes of natural density close to one.
\subsection{Quantification of Fourier
coefficient non-divisibility}
\label{quanti}
The second goal of this article is to quantify
how often $\ell\nmid a(n)$ for certain prime numbers $\ell$
and Fourier coefficients $a(n).$
This problem
was first considered by Ramanujan for his tau function (in a manuscript, see \cite{beon}, that remained unpublished
for many years). He made various claims of the form
\begin{equation}
\label{valseanalogie}
\sum_{n\le x,\,\ell\nmid \tau(n)}1=C_{\ell}\int_2^x \frac{dt}{(\log t)^{1/\delta_{\ell}}}+O\left(
\frac{x}{(\log x)^r}
\right),
\end{equation}
that he thought to be valid for arbitrary $r$.
For example, he claimed
\eqref{valseanalogie} for $\ell=691$, with exponent $1/\delta_{\ell}=1/690$.
Partial integration gives
\begin{equation}
\label{partial}
C_{\ell}\int_2^x\frac{dt}{(\log t)^{1/\delta_{\ell}}}=\frac{C_{\ell} \,x}{(\log x)^{1/\delta_{\ell}}}\left(1+
\frac{1}{\delta_{\ell}\log x}
+O_{\ell}\left(\frac{1}{(\log x)^2}\right)\right).
\end{equation}
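The expansion \eqref{partial} can be confirmed numerically. The sketch below (illustrative only; the truncation point $x$ and step count are arbitrary choices of ours) approximates the integral $\int_2^x dt/(\log t)^{a}$ by a midpoint rule, for a small exponent $a$ (we take $a=1/690$, the exponent relevant for $\ell=691$), and compares the ratio to the Landau approximation with the first-order prediction $1+a/\log x$:

```python
# Numerical sanity check of the partial-integration expansion:
# int_2^x dt/(log t)^a = x/(log x)^a * (1 + a/log x + O(1/(log x)^2)).
from math import log

def ramanujan_integral(x, a, steps=200_000):
    # midpoint rule on [2, x]; accuracy is ample for this illustration
    h = (x - 2.0) / steps
    return h * sum(log(2.0 + (i + 0.5) * h) ** (-a) for i in range(steps))

x, a = 1e5, 1.0 / 690          # a: exponent in the counting function for ell = 691
landau = x / log(x) ** a       # Landau approximation
ratio = ramanujan_integral(x, a) / landau
predicted = 1 + a / log(x)     # first-order term of the expansion
assert abs(ratio - predicted) < 1e-3
```

The remaining discrepancy is of size $O(1/(\log x)^2)$ relative to the main term, as the expansion predicts.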
The functions $$\frac{C_{\ell} \,x}{(\log x)^{1/\delta_{\ell}}}\quad\text{and}\quad C_{\ell}\int_2^x\frac{dt}{(\log t)^{1/\delta_{\ell}}},$$
are now called the \emph{Landau approximation}, respectively \emph{Ramanujan approximation} of
the counting function in \eqref{valseanalogie}, the true behavior of which is, see Serre \cite{ser76},
\begin{equation}
\label{true}
\sum_{n\le x,\,\ell\nmid \tau(n)}1=\frac{C_{\ell} \,x}{(\log x)^{1/\delta_{\ell}}}\left(1+
\frac{1-\gamma_{\tau;\ell}}{\delta_{\ell}\log x}
+O_{\ell}\left(\frac{1}{(\log x)^2}\right)\right),
\end{equation}
with $\gamma_{\tau;\ell}$ a constant sometimes called \emph{Euler-Kronecker constant}.
Note that if $\gamma_{\tau;\ell}>1/2$ the Landau approximation asymptotically gives a better approximation to the sum on the left of \eqref{true} than the Ramanujan approximation, and that if
$\gamma_{\tau;\ell}<1/2$ it is
the other way around. Comparing \eqref{partial} and \eqref{true} we see that
Ramanujan's claim \eqref{valseanalogie} entails $\gamma_{\tau;\ell}=0.$ For
$\ell=3,5,7,23$ and $691$ this was disproved by Moree \cite{mor}. For the true value
of these numerical constants see Table 1
(data taken from \cite{clm}).
\centerline{{\bf Table 1:} Euler-Kronecker constants $\gamma_{\tau;\ell}$}
\label{tab:LvRold}
\begin{center}
\begin{tabular}{|c|c|c|}\hline
$\ell$ & $\gamma_{\tau;\ell}$ & winner \\ \hline \hline
$3$ & $+0.534921\ldots$ & Landau \\ \hline
$5$ & $+0.399547\ldots$ & Ramanujan \\ \hline
$7$ & $+0.231640\ldots$ & Ramanujan \\ \hline
$23$ & $+0.216691\ldots$ & Ramanujan \\ \hline
$691$ & $+0.571714\ldots$ & Landau \\ \hline
\end{tabular}
\end{center}
Let $\ell$ be an odd prime. An arithmetic function $\mathfrak f$ assuming
integer values has
a \emph{refined $\ell$-non-divisibility asymptotic} with Euler-Kronecker constant
$\gamma_{\mathfrak{f};\ell}$ if there exist
positive constants $C_{\ell}$ and $h_1$ such that
\begin{equation}
\label{refined}
\sum_{n\le x,\,\ell\nmid {\mathfrak f}(n)}1=\frac{C_{\ell} \,x}{(\log x)^{1/h_1}}\left(1+
\frac{1-\gamma_{\mathfrak f;\ell}}{h_1 \log x}
+o_{\mathfrak f}\left(\frac{1}{\log x}\right)\right),
\end{equation}
where the implicit constant in the error term may depend on ${\mathfrak f}.$
\begin{thm}
\label{conditionA} Let $\mathfrak{f}$ be an integer valued multiplicative function.
If
\begin{equation}
\label{primecondition}
\#\{p_1\le x:\,p_1\text{~prime~and~}\ell\mid {\mathfrak f}(p_1)\}=\delta~\sum_{p_1\le x}1+O_{\mathfrak f}\bigg(\frac{x}{
(\log x)^{2+\rho}}\bigg),
\end{equation}
for some real numbers $\rho>0$ and $0<\delta<1,$
then \eqref{refined} holds with $h_1=1/\delta$ for some
positive constant $C_{\ell}.$
\end{thm}
\begin{cor}
Let $m\ge 1$ be an integer and $\ell$ an
odd prime such that $h_2:=(\ell-1)/\gcd(\ell-1,m)$ is even.
Then the $m$-th sum of divisors function $\sigma_m$ has
a refined $\ell$-non-divisibility asymptotic with
$h_1=h_2$ for some
positive constant $C_{\ell}.$
\end{cor}
The proof of the corollary is left as an exercise, cf.\,\cite{clm}.
We remark that in case $h_2$ is odd, \eqref{refined} takes a more trivial form.
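The prime-density input \eqref{primecondition} behind the corollary can be checked by brute force: for a prime $q\ne\ell$, whether $\ell\mid\sigma_m(q)=1+q^m$ depends only on $q\bmod \ell$, and the number of residues $a$ with $a^m\equiv -1\pmod*{\ell}$ equals $(\ell-1)/h_2$ when $h_2$ is even and $0$ when $h_2$ is odd. A quick illustrative verification (not part of the proofs):

```python
# Count residues a mod ell with a^m = -1; compare with (ell-1)/h2 for h2 even,
# and with 0 for h2 odd, where h2 = (ell-1)/gcd(ell-1, m).
from math import gcd

def count_roots(ell, m):
    return sum(1 for a in range(1, ell) if (pow(a, m, ell) + 1) % ell == 0)

for ell in (5, 7, 11, 13, 691):
    for m in (1, 2, 3, 11):
        h2 = (ell - 1) // gcd(ell - 1, m)
        expected = (ell - 1) // h2 if h2 % 2 == 0 else 0
        assert count_roots(ell, m) == expected
```

For $\ell=691$ and $m=11$ one finds a single such residue, recovering the density $1/690$ behind Ramanujan's claim for $\tau$ modulo $691$.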
\begin{thm}\label{non-divisibility}
Let $k\ge 4$ be an even integer, $p$ a prime and
$\varepsilon\in \{\pm 1\}$.
Let $\ell\ge 5$ be a prime
such that $\varepsilon p^{k/2}\equiv -1\pmod*{\ell}$.
Set $r=\text{gcd}(\ell-1,k-1).$ Let $g_1$ be the multiplicative
order of $p^r$ modulo $\ell.$ Put $\mu_p=\ell$ if $g_1=1$ and
$\mu_p=g_1$ otherwise.
Then $\mathfrak{f}(n)=\sigma_{k-1}(n)+\varepsilon p^{k/2}\sigma_{k-1}(n/p)$ has
a refined $\ell$-non-divisibility asymptotic \eqref{refined} with
$h_1=(\ell-1)/r,$
and Euler-Kronecker constant
\begin{equation}
\label{gammarelator}
\gamma_{\mathfrak{f};\ell}=\gamma_{\sigma_{k-1};\ell}+\left(\frac{\mu_p}{p^{\mu_p}-1}-\frac{(\mu_p-1)}{p^{\mu_p-1}-1}\right)\log p,
\end{equation}
provided that $h_1$ is even.
\end{thm}
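For concreteness, the parameters entering \thmref{non-divisibility} are straightforward to compute. The sketch below (the function name is ours; it does not verify the hypothesis $\varepsilon p^{k/2}\equiv -1\pmod*{\ell}$) evaluates $r$, $h_1$, $\mu_p$ and the shift term appearing in \eqref{gammarelator}:

```python
from math import gcd, log

def theorem_parameters(k, p, ell):
    # r = gcd(ell-1, k-1) and h1 = (ell-1)/r
    r = gcd(ell - 1, k - 1)
    h1 = (ell - 1) // r
    # g1 = multiplicative order of p^r modulo ell
    x = pow(p, r, ell)
    y, g1 = x, 1
    while y != 1:
        y = (y * x) % ell
        g1 += 1
    mu = ell if g1 == 1 else g1  # mu_p as in the theorem
    # shift term relating gamma_{f;ell} to gamma_{sigma_{k-1};ell}
    shift = (mu / (p ** mu - 1) - (mu - 1) / (p ** (mu - 1) - 1)) * log(p)
    return r, h1, mu, shift
```

For example, for $(k,p,\ell)=(12,2,5)$ one gets $r=1$, $h_1=4$ and $\mu_p=4$ (since $2$ has order $4$ modulo $5$).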
This result reduces the study of $\gamma_{\mathfrak{f};\ell}$
to that of $\gamma_{\sigma_{k-1};\ell},$ which was
studied in extenso by Ciolan et al.\,\cite{clm}. They gave a
(long and involved) formula for
this Euler-Kronecker constant that
allows one to evaluate it with a certified accuracy of several decimals. The relevant computer programs are
made available at \url{www.math.unipd.it/~languasc/CLM.html}.
\subsection{Plan for the remainder of the article}
In Section \ref{pre}, we revisit some basic preliminaries needed to prove \thmref{main}, such as Hecke operators, Atkin-Lehner newform theory, mod $\ell$ modular forms and $\ell$-adic Galois representations associated to modular forms.
In Section \ref{proof_main}, we give a proof of \thmref{main} by assuming
\thmref{eigenform} (proven in Section \ref{v-proof} along with a
variant of it).
In Section \ref{proof_main2}, we prove \thmref{main2} followed by a discussion of
several interesting numerical examples in Section \ref{example}. In Section \ref{application}, as an application of our results, we give a non-trivial lower bound for the degree of the field of coefficients of any normalized eigenform of fixed weight and level,
with Section \ref{dickman} recalling some relevant
results on large prime factors of shifted primes.
Finally, in Section \ref{pietersproof}, we
prove Theorem \ref{non-divisibility}.
\section{Required Preliminaries}\label{pre}
\subsection{Notation}
The letters
$p,p_1,q$ and $\ell$ will denote prime numbers throughout, except in Section \ref{example},
where $q=e^{2\pi i z}$. For a rational number $\frac{m}{n}$, by $\ell \mid \frac{m}{n}$, we mean that $\ell$ divides the reduced numerator of $\frac{m}{n}$.
Given a newform $f$ we denote
its $n$-th Fourier coefficient by $a_f(n)$ and its
{\em coefficient field} $\mathbb Q(a_f(n): n\ge 1)$ by $K_f$. We say two forms $f$ and $g$ are {\em congruent} mod $\ell$ or mod $\lambda$ if $a_f(n)$ is congruent to
$a_g(n)$ for every integer $n$.
For notational convenience, we also abbreviate $M_k^{\pm 1}(p)$, $S_k^{\pm 1}(p)$ and $E_{k,p}^{\pm 1}$ by $M_k^{\pm }(p)$, $S_k^{\pm }(p)$ and $E_{k,p}^{\pm }$, respectively.
\subsection{Atkin-Lehner operators and newform theory}
Let $M_k(N)$ and $S_k(N)$ be the $\mathbb C$-vector space of modular forms and cusp forms, respectively of even weight $k\ge 2$ and level $N$ with respect to $\Gamma_0(N)$ of trivial nebentypus. These spaces have actions of the Hecke operators $T_1,T_2,\ldots$ which satisfy the following relations: $T_1=1$, $T_{mn}=T_m T_{n}$ if $(m,n)=1$ and for prime powers $q^r$
with $q\nmid N$ we have the recurrence
$$T_{q^r}=T_qT_{q^{r-1}}-q^{k-1}T_{q^{r-2}}.$$
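On the Fourier coefficients of a normalized Hecke eigenform these relations read $a(mn)=a(m)a(n)$ for $(m,n)=1$ and $a(q^r)=a(q)a(q^{r-1})-q^{k-1}a(q^{r-2})$. As a quick sanity check (illustrative only), one can verify them for the Eisenstein eigenvalues $a(n)=\sigma_{k-1}(n)$:

```python
# Verify the Hecke relations for the eigenvalue system a(n) = sigma_{k-1}(n)
# of the level-1 Eisenstein series, here with k = 12.
def sigma(n, r):
    return sum(d ** r for d in range(1, n + 1) if n % d == 0)

k = 12
# multiplicativity on coprime arguments
for m, n in ((2, 3), (4, 9), (5, 8), (7, 25)):
    assert sigma(m * n, k - 1) == sigma(m, k - 1) * sigma(n, k - 1)
# prime-power recurrence a(q^r) = a(q) a(q^{r-1}) - q^{k-1} a(q^{r-2})
for q in (2, 3, 5, 7):
    for r in range(2, 6):
        lhs = sigma(q ** r, k - 1)
        rhs = (sigma(q, k - 1) * sigma(q ** (r - 1), k - 1)
               - q ** (k - 1) * sigma(q ** (r - 2), k - 1))
        assert lhs == rhs
```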
For a prime $p\mid N$, the action of $T_p$ (which we will denote by $U_p$) on $f\in M_k(N)$ is given by
$a_{T_p f}(n)=a_f(np)$;
such an operator $T_p$ is generally
called a {\em $U_p$ operator}.
Next, we recall some newform theory from \cite{atle}. A modular form $f\in M_k(N)$ is called a {\em Hecke eigenform} if it is an eigenfunction for all the Hecke operators $T_q$ for $(q, N)=1$ and $U_p$ for all $p \mid N$. It is a well-known result that if $f$ is a Hecke cusp
eigenform, then $a_f(1)\ne 0.$ We say such $f$ is {\em normalized} if $a_f(1)=1.$
Suppose that $p\mid N,$ but $p^2\nmid N.$
Then there are two ways to embed $S_k(N/p)$ inside $S_k(N)$: one by the identity and the other by $f(z)\mapsto f(pz)$; together these give rise to a map
$$S_k(N/p)\oplus S_k(N/p)\rightarrow S_k(N)\text{~defined~by~}(f,g)\mapsto f(z)+g(pz).$$ The image of this map is called the {\em space of $p$-oldforms} in $S_k(N)$, and is denoted by $S_k(N)^{p-{\rm old}}$. The orthogonal complement of $S_k(N)^{p{\rm-old}}$ in $S_k(N)$ with respect to the Petersson inner product is called the {\em space of $p$-newforms}, and denoted by $S_k(N)^{p{\rm-new}}$. Finally, in case $N$ is squarefree, we define the {\em space of newforms}
$S_k(N)^{{\rm new}}$ as the intersection $\bigcap\limits_{p\mid N}S_k(N)^{p{\rm-new}}$. A normalized Hecke eigenform $f$ in $S_k(N)^{{\rm new}}$ is called a {\em newform}.
\par Let $W_p$
be the {\em Atkin-Lehner operator} on $M_k(p)$ defined by
$$f|{W_p}(z)= p^{-k/2}z^{-k}f\Big(\frac{-1}{pz}\Big).$$
It preserves the spaces $M_k(p)$ and $S_k(p)$, and since it is an involution, its
eigenvalues $\varepsilon$ lie in $\{\pm 1\}$.
Next we state some standard facts about
the operators $T_q~ (q\neq p),$ $U_p,$ and $W_p$ and newforms for the space $S_k(p)$ that can be, for
example, found in \cite[Lemma 17, Theorem 5]{atle}.
\begin{lem}\label{alr} We have the following.
\begin{itemize}
\item[$(1)$]
Both $\{T_q, U_p: q\neq p\}$ and $\{T_q, W_p: q\neq p\}$ are commuting families of operators.
\item[$(2)$]
$ f \in S_k(p)$ is a newform if and only if it is an eigenfunction for all $T_q ~(q \neq p)$, $U_p$ and $W_p$.
\item[$(3)$]
If $f\in S_k(p)^{{\rm new}}$ is a newform with Atkin-Lehner eigenvalue $\varepsilon,$ then $a_f(p)=-\varepsilon p^{k/2-1}$.
\end{itemize}
\end{lem}
\subsection{Modular forms with coefficients in a ring $A$}\label{mr}
Let $M_k(N,\mathbb Z)\subset \mathbb Z[\![q]\!]$ denote the set of elements of $M_k(N)$
having integer
Fourier coefficients at the cusp infinity. For a commutative ring $A$, we define
$$
M_k(N, A)=M_k(N, \mathbb Z)\otimes_{\mathbb Z}A.
$$
By the $q$-expansion principle,
the map $M_k(N, A)\rightarrow A[\![q]\!]$ is injective, so we may view $M_k(N, A)$ as a submodule of $A[\![q]\!]$. Note that $S_k(N, \mathbb Z)=M_k(N, \mathbb Z)\cap S_k(N)$. Hence we can define $S_k(N, A)$ similarly, and we identify it with an $A$-submodule of $M_k(N, A)$.
The Hecke operators $T_n$ defined earlier also act on the space $M_k(N, A)$ with
the small modification that
the action of $T_\ell$ on $M_k(N,A)$ coincides with the action of $U_\ell$ if $A$ is a domain of positive characteristic $r$ and $\ell\mid r.$
\subsection{Mod $\ell$ modular forms}
For a prime $\ell$, let $\overline{\mathbb{F}}_{\ell}$ denote an algebraic closure of the finite field $\mathbb{F}_{\ell}$, with $\ell$ elements. In this section we recall the notion of modular forms with coefficients in $\overline{\mathbb F}_{\ell}$ (see \cite[Section 3.1]{ser87}).
Fix an embedding $\iota_{\ell}:\overline{\mathbb Q}\hookrightarrow \overline{\mathbb Q}_{\ell}$; in particular, we obtain an embedding $\overline{\mathbb Z}\hookrightarrow \overline{\mathbb Z}_{\ell}$.
Therefore the ring $\overline{\mathbb Z}_{\ell}$ has a natural reduction map to its residue field
$\overline{\mathbb F}_{\ell}$ and we obtain a homomorphism
\begin{align*}
\overline{\mathbb Z}_{\ell}&\rightarrow \overline{\mathbb F}_{\ell} {\rm~defined~by~}
a \mapsto \overline{a}.
\end{align*}
For $k\ge 2$ and an integer $N$, coprime to $\ell$, we define the space of modular forms of type $(N,k)$ with coefficients in $\overline{\mathbb F}_{\ell}$, denoted by $M_k(N, \overline{\mathbb F}_{\ell})$, consisting of formal power series
$$F(z)=\sum_{n\ge 0}A_ne^{2\pi i n z}, ~~A_n \in \overline{\mathbb F}_{\ell},$$
for which there exists a modular form
$f(z)=\sum_{n\ge 0}a_ne^{2\pi i n z} \in M_k(N), ~~a_n \in \overline{\mathbb Z},$
such that $\overline{a}_n=A_n$ for all $n\ge 0$. The space $S_k(N, \overline{\mathbb F}_{\ell})$ is defined analogously.
As mentioned in Section \ref{mr}, we have the action of the Hecke algebra generated by the operators $T_q$, $q \nmid \ell N$ and $U_{p}$, $p\mid \ell N$ on the space $M_k(N, \overline{\mathbb F}_{\ell})$; these operators also preserve
$S_k(N, \overline{\mathbb F}_{\ell})$.
Observe that, by the Deligne-Serre lifting lemma, if
$F \in S_k(N, \overline{\mathbb F}_{\ell})$ is a non-zero normalized Hecke eigenform, then $F$ is the reduction modulo $\ell$ of some normalized Hecke eigenform $f\in S_k(N, \overline{\mathbb Z}_{\ell})$.
We say $F \in S_k(N, \overline{\mathbb F}_{\ell})$ is an {\em eigenfunction for the Atkin-Lehner operator} $W_N$ with eigenvalue $\varepsilon$ if it is a reduction of $f \in S_k(N, \overline{\mathbb{Z}}_{\ell})$ which is an eigenfunction for $W_N$ with eigenvalue $\varepsilon$.
Notice that this is a well-defined operator on
$M_k(N, \overline{\mathbb F}_{\ell})$
as we know that if $f$ and $f'$ are characteristic zero modular forms of the same weight and level $N$ that are congruent modulo $\ell$
then $W_Nf$ and $W_Nf'$ are congruent modulo $\ell$ as well.
\subsection{Galois representations attached to modular forms}
In this section, we briefly recall some standard facts about 2-dimensional Galois representations of Gal$(\overline{\mathbb Q}/\mathbb Q)$ associated to Hecke eigenforms.
Let $f(z)=\sum_{n\ge 1}a_f(n)e^{2\pi i n z}$ be a normalized Hecke eigenform of weight $k$ and level $N$. It is well-known that the Fourier coefficients $a_f(n)$ belong to the ring of integers $\mathcal{O}_{K_f}$ of a finite extension field $K_f$ of $\mathbb Q$.
For a given
prime $\ell,$ due to a theorem of Deligne, corresponding to such an eigenform $f$ and a prime ideal $\lambda$ above $\ell$ in $K_f$, there is a continuous $\ell$-adic Galois representation
$$\rho_{f, \lambda}:{\rm Gal}(\overline{\mathbb Q}/\mathbb Q)\rightarrow GL(2, K_{f,\lambda}),$$
where $K_{f,\lambda}$ is the completion of $K_f$ at the place $\lambda$. The representation $\rho_{f, \lambda}$ is irreducible, unique up to isomorphism and it is unramified outside the primes dividing $N$ and the norm of $\lambda$, and has
the following properties:
\begin{center}
${\rm {tr}}(\rho_{f, \lambda}({\rm {Frob}}_q))=a_f(q)$ and
${\rm {det}}(\rho_{f, \lambda}({\rm {Frob}}_q))=q^{k-1},$
\end{center}
where Frob$_q\in {\rm Gal}(\overline{\mathbb Q}/\mathbb Q)$ is the Frobenius element at the prime $q$.
Conjugating by a matrix in $GL(2, K_{f,\lambda})$, one can assume that the image of
$\rho_{f, \lambda}$ lands inside $GL(2, \mathcal O_{K_{f,\lambda}})$. Reducing this representation with values in $GL(2, \mathcal O_{K_{f,\lambda}})$ modulo $\lambda$, we get a mod-$\ell$ representation of
Gal$(\overline{\mathbb Q}/\mathbb Q)$
$$\overline{\rho}_{f, \lambda}:{\rm Gal}(\overline{\mathbb Q}/\mathbb Q)\rightarrow GL(2, \mathcal O_{K_f}/\lambda).$$
The representation $\overline{\rho}_{f, \lambda}$ is well defined up to semi-simplification and depends only on the cusp form $f$ modulo $\lambda.$
\subsection{Diamond-Ribet level raising theorem and a refinement}
We now recall the following celebrated result of Ribet \cite{rib} (for weight two and trivial character) and Diamond \cite{dia} (for higher weight and non-trivial characters), called {\em level raising theorem},
which gives a criterion for the existence of a congruence between two newforms of the same weight, but
of different level. This plays an important role in the proof of our results.
\begin{thm}[Diamond-Ribet level raising theorem]\label{level_raising}
Let $g \in S_k(N)$ be a newform of weight $k \ge 2$ and let $p$ and $\ell$ be distinct primes not dividing $N$ with $\ell \nmid \frac{1}{2}\varphi(N) N p (k-2)!$. Let $\lambda$ be a prime ideal above $\ell$ in the field generated by the eigenvalues of all eigenforms in $S_k(Np)$ and $S_k(N)$. Then the following are equivalent:
\begin{itemize}
\item[$(1)$]
$a_g(p)^2 \equiv p^{k-2}(1+p)^2 \pmod* \lambda.$
\item[$(2)$]
There exists a $p$-newform $f\in S_k(Np)$ such that for every prime $q$ coprime
to $pN,$
$$a_f(q)\equiv a_g(q) \pmod* \lambda.
$$
\end{itemize}
\end{thm}
In \cite[Theorem 2]{gapo}, Gaba-Popa obtained a refinement of Diamond's level raising theorem and their proof is (loosely speaking) a part of the proof of their main result. In the same vein, we record the following refinement of the above theorem which also strengthens \cite[Theorem 2]{gapo} under a mild assumption.
\begin{thm}\label{refine}
Let $k\ge 2$ be an even integer, $p$ a prime, $N$ a positive integer coprime to $p$ and $\varepsilon \in \{\pm 1\}$.
If $k=2$ we
assume furthermore that $\varepsilon=-1.$ Suppose $g(z)=\sum_{n\ge 1}a_g(n)e^{2\pi i n z} \in S_k(N)$ is a newform.
Let $\ell\ge k-1$ be a prime such that $p\not\equiv -1\pmod*{\ell}$ and $\ell\nmid N \varphi(N)$ and let $\lambda$ be a prime ideal above $\ell$ in the field generated by the eigenvalues of all eigenforms in $S_k(Np)$ and $S_k(N)$.
Then the following are equivalent:
\begin{itemize}
\item[$(1)$]
$a_g(p) \equiv -\varepsilon p^{k/2-1}(1+p) \pmod \lambda$.
\item[$(2)$]
There exists an eigenform $f\in S_k^{\varepsilon}(Np)$ which is new at $p$ such that
\begin{equation*}\label{maineq}
f(z) \equiv g(z)+ \varepsilon p^{k/2}g(pz) \pmod*{\lambda}.
\end{equation*}
\end{itemize}
\end{thm}
\begin{proof}
We omit the proof: for $N=1$ it is the content of the second half of the proof of \thmref{main} (cf.\,\eqref{ref1}), and for general $N$ the same arguments apply.
\end{proof}
\section{Proof of \thmref{main}}\label{proof_main}
We give a proof of \thmref{main} using \thmref{eigenform} (the proof
of which is given in the next section). We first prove that $(1)$ implies $(2)$. The assumptions on $\ell $, in view of \thmref{eigenform}, ensure that there is a normalized eigenform $h(z)= \sum_{n\ge 1}a_h(n)e^{2\pi i n z}\in S_k(p)$ and a prime ideal $\lambda$ above $\ell$ in $K_h$ such that \begin{equation}
\label{f_cong}
h\equiv E_{k,p}^\varepsilon \pmod* \lambda. \end{equation} We now prove that this eigenform $h$ can be replaced by a newform under our assumptions on $\ell$ and $k$. We distinguish between two cases:\\
{\bf{Case (i)}}\label{new} {\em The eigenform $h$ is a newform.} We will show that the $W_p$-eigenvalue of $h$ is $\varepsilon$.
Writing $h|W_p=\delta h$ with $\delta \in \{\pm 1\},$ we obtain $a_h(p)=-\delta p^{k/2-1}$ by \lemref{alr}. Now by considering the $p$-th Fourier coefficients of both functions appearing in \eqref{f_cong}, and using the fact that $\ell \mid (\varepsilon+ p^{k/2}) (\varepsilon+ p^{k/2-1})$, we obtain \begin{align*}
-\delta p^{k/2-1}\equiv 1+p^{k-1}+\varepsilon p^{k/2} \equiv -\varepsilon p^{k/2-1}\pmod* \ell. \end{align*} Since the prime $\ell $ is odd and different from $p$, this proves that $h\in S_k^\varepsilon(p).$ \\ {\bf{Case (ii)}}\label{notnew} {\em The eigenform $h$ is not a newform.} Then there is a level 1 eigenform $g$ such that the corresponding $\ell$-adic Galois representations $\rho_{h, \Lambda}$ and $\rho_{g, \Lambda}$ are the same, where $\Lambda$ is a prime ideal above $\ell$ in the compositum of coefficient fields of all normalized eigenforms in $S_k(p)$ and $S_k(1)$. By \eqref{f_cong}, we have $a_h(q)\equiv 1+q^{k-1} \pmod* \Lambda$ for all $q \neq p.$ A standard application of the Chebotarev density theorem then shows that $\bar\rho_{h, \Lambda}$ is isomorphic to $1 \oplus \chi_\ell^{k-1}$, where $\chi_\ell$ denotes the mod $\ell$ cyclotomic character. Thus, we conclude that \begin{equation}\label{isom}
\bar\rho_{g, \Lambda} \simeq 1\oplus \chi_\ell^{k-1}. \end{equation} Because $g$ is of level 1, the representation $\bar\rho_{g, \Lambda}$ is unramified outside $\ell$. In particular, $\bar\rho_{g, \Lambda}$ and $\chi_{\ell}$ are unramified at the prime $p$. Taking the trace of the image of ${\rm Frob}_p$ on both sides of \eqref{isom} yields \begin{equation}\label{g_cong}
a_g(p)\equiv 1+p^{k-1} \pmod* {\Lambda}. \end{equation} Now using that $\ell$ divides $1+ p^{k-1}+ \varepsilon p^{k/2}+ \varepsilon p^{k/2-1},$ we infer that \begin{equation}\label{ref1}
a_g(p) \equiv -\varepsilon p^{k/2-1}(1+p) \pmod* {\Lambda}, \end{equation} which gives $$ a_g(p)^2 \equiv p^{k-2}(1+p)^2 \pmod* {\Lambda}. $$ Hence, applying \thmref{level_raising} and using the hypothesis $\ell > k-2,$ we obtain a newform $f\in S_k(p)$ for which \begin{equation*}
a_f(q) \equiv a_g(q) \pmod* {\Lambda}, {\rm~~ for~all~} q\neq p. \end{equation*} Taken together with \eqref{g_cong} this results in \begin{equation}\label{h_cong}
a_f(q) \equiv 1+q^{k-1} \pmod* {\Lambda}, {\rm~~ for~all~} q\neq p. \end{equation} Let $\delta$ be the $W_p$-eigenvalue of $f$ and so $a_f(p)=-\delta p^{k/2-1}$, from \lemref{alr}. In order to complete the proof of \thmref{main}, we only need to show that $\delta=\varepsilon$. The reason is that if $\delta=\varepsilon,$ then $ a_f(p)= -\varepsilon p^{k/2-1}\equiv 1 + p^{k-1}+\varepsilon p^{k/2} \pmod* {\Lambda}. $ Combining this with \eqref{h_cong} and using that $E_{k,p}^\varepsilon$ mod $\ell$ is an eigenform, which has been established in the course of the proof of \thmref{eigenform}, gives that $f\equiv E_{k,p}^\varepsilon \pmod \Lambda,$ thus completing the proof (note that we can restrict $\Lambda$ to $K_f$ to get the required prime ideal above $\ell$ in $K_f$). \par Let $G_p$ denote the decomposition group of the absolute Galois group Gal$(\overline{\mathbb Q}/\mathbb Q)$ at a place over $p.$ Denote, for any algebraic integer $\alpha,$ the unique unramified character $G_p \rightarrow \overline{\mathbb F}_\ell^\times$ sending the arithmetic Frobenius ${\rm Frob}_p$ to $\alpha$ mod $\ell$ by $\mu_{\alpha}.$ It follows from the work of Langlands \cite{lan} (see also \cite[Proposition 2.8 (2)]{lowe}) that the restriction of $\bar\rho_{f, \Lambda}$ to $G_p$ is given by $$
\bar\rho_{f, \Lambda}|_{G_p} \simeq \begin{pmatrix}
\chi_\ell^{k/2} & * \\ & \chi_\ell^{k/2-1} \end{pmatrix} \otimes \mu_{a_f(p)/p^{k/2-1}}. $$ Since $\mu_\alpha$ and $\chi_\ell$ are unramified at $p,$ one can consider the trace of Frob$_p$ on the right hand side and hence ${\rm tr} (\bar\rho_{f, \Lambda}({\rm Frob}_{p}))$ is well-defined. Since $\bar\rho_{f, \Lambda} \simeq \bar\rho_{g, \Lambda}$ (up to semisimplification), we have $
\bar\rho_{f, \Lambda}|_{G_{p}} \simeq \bar\rho_{g, \Lambda}|_{G_{p}}. $ Taking the trace of the image of ${\rm Frob}_p$ yields $$ \left(p^{k/2}+ p^{k/2-1} \right)\frac{a_f(p)}{p^{k/2-1}} \equiv a_g(p) \pmod* {\Lambda}. $$ The congruence \eqref{g_cong} together with $a_f(p)=-\delta p^{k/2-1}$ gives $$ -\delta(p^{k/2}+ p^{k/2-1}) \equiv 1 +p^{k-1} \equiv -\varepsilon (p^{k/2}+ p^{k/2-1}) \pmod* {\Lambda}, $$ which yields $ \ell \mid (\delta-\varepsilon)p^{k/2-1}(p+1). $ Since by assumption $(\ell, p(p+1))=1,$ we infer from this that $\delta =\varepsilon,$ and so (1) implies (2).
It remains to show that $(2)$ implies $(1)$. Let $\ell \ge 5$ be a prime for which there exists a newform $f(z)=\sum_{n\ge 1}a_f(n)e^{2\pi i n z}\in S_k^\varepsilon(p)$ such that \begin{equation}\label{fe}
f\equiv E_{k,p}^\varepsilon \pmod* \lambda, \end{equation} for some prime ideal $\lambda$ above $\ell$ in ${K_f}$. Since the constant term of $f$ is zero and the norm of $\lambda$ is a power of $\ell,$ we get $\ell \mid \frac{B_k}{2k}(\varepsilon+ p^{k/2})$. The fact that $f$ is a newform with $W_p$-eigenvalue $\varepsilon,$ along with \lemref{alr} yields $a_f(p)=-\varepsilon p^{k/2-1}$. Taken together with the congruence \eqref{fe} at the prime $p,$ this leads to $-\varepsilon p^{k/2-1}\equiv 1+ p^{k-1}+ \varepsilon p^{k/2} \pmod* \lambda.$ Since $\lambda$ is a prime ideal above $\ell,$ this completes the proof.
{$\square$}
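Both directions of the proof use the elementary factorization $1+p^{k-1}+\varepsilon p^{k/2}+\varepsilon p^{k/2-1}=(\varepsilon+p^{k/2})(\varepsilon+p^{k/2-1})$, valid because $\varepsilon^2=1$. A quick numerical sanity check (the parameter ranges are arbitrary):

```python
# Check: 1 + p^(k-1) + eps*p^(k/2) + eps*p^(k/2-1)
#        = (eps + p^(k/2)) * (eps + p^(k/2-1))   for eps in {+1, -1}
for p in (2, 3, 5, 7, 11, 13):
    for k in range(2, 21, 2):
        for eps in (1, -1):
            lhs = 1 + p**(k - 1) + eps * p**(k // 2) + eps * p**(k // 2 - 1)
            rhs = (eps + p**(k // 2)) * (eps + p**(k // 2 - 1))
            assert lhs == rhs
print("factorization identity holds")
```

In particular, if $\ell$ divides $(\varepsilon+p^{k/2})(\varepsilon+p^{k/2-1})$, then $1+p^{k-1}+\varepsilon p^{k/2}\equiv -\varepsilon p^{k/2-1}\pmod\ell$, the congruence used in Case (i) and in \eqref{ref1}.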
\section{Variants of \thmref{main} and proof of \thmref{rmain}}\label{v-proof}
As promised in the previous section, we now give a proof of \thmref{eigenform}.
This result may be of independent interest because of the weaker assumptions on $\ell$ compared with \thmref{main}.
Recall that \thmref{rmain} is an immediate consequence of \thmref{eigenform}. We also remind the reader that by ``eigenform'' we mean an eigenfunction of all the Hecke operators $T_n$, $n\ge 1$. \begin{thm}\label{eigenform}
Let $k\ge 2$ be an even integer, $\ell \ge 5$ and $p$ be primes and $\varepsilon \in \{\pm 1\}$. If $k=2$, we also assume that $\varepsilon=-1$. Suppose that $\ell$ divides
both $\frac{B_k}{2k}(\varepsilon+ p^{k/2})$ and
$(\varepsilon+ p^{k/2}) (\varepsilon+ p^{k/2-1}).
$
Then there exists a normalized eigenform $h\in S_k(p)$ and a prime ideal $\lambda$ over $\ell$ in the coefficient field of $h$ such that
$$
h\equiv E_{k,p}^{\varepsilon} \pmod*{\lambda}.
$$
Moreover, if $k=2$, then $h\in S_2^-(p)$ is a newform. \end{thm} \begin{proof}
We observe that $E_{k,p}^{\varepsilon}|W_p =\varepsilon E_{k,p}^{\varepsilon}$ and
that $W_p$ interchanges the two cusps of $\Gamma_0(p),$ which implies that the constant term of $E_{k,p}^{\varepsilon}$ at each of the cusps is $-\frac{B_k}{2k}\varepsilon (\varepsilon+ p^{k/2})$, up to a sign and powers of $p$. Since $\ell \mid \frac{B_k}{2k}(\varepsilon+ p^{k/2})$, it follows from
the $q$-expansion principle (here $q=e^{2\pi iz}$) that the reduction of $E_{k,p}^{\varepsilon}$ modulo $\ell$ gives rise to an element $\overline{E}_{k,p}^{\varepsilon} \in S_k^{\varepsilon}(p, \mathbb F_\ell) \subset S_k(p,\overline{\mathbb F}_{\ell})$. As $E_k$ is an eigenfunction for all the Hecke operators $T_q ~(q \neq p),$ all of which commute with $W_p$, we see that
$E_{k,p}^{\varepsilon}$, and hence $\overline{E}_{k,p}^{\varepsilon}$, is a common eigenfunction of all $T_q$ ($q\neq p$). We next claim that the assumptions on the prime $\ell$ ensure that $\overline{E}_{k,p}^{\varepsilon}$ is also an eigenfunction of the operator $U_p$. For $k=2$, it is easy to see that $U_p E_{2,p}^-= E_{2,p}^-$,
which shows that ${E}_{2,p}^-,$ and so in particular $\overline {E}_{2,p}^-$, is an eigenfunction for $U_{p}$ with eigenvalue 1. For $k\ge 4$, a simple computation gives that if $a(n)$ and $b(n)$ are the $n$th Fourier coefficients of $E_{k,p}^\varepsilon(z),$
respectively $U_{p} E_{k,p}^\varepsilon(z)$, then
$$
b(n)=\begin{cases}
a(0) & \text{if~}\,n=0;\\
(1+p^{k-1}+\varepsilon p^{k/2}) a(n) & \text{if~}p\nmid n;\\
(1+p^{k-1}+\varepsilon p^{k/2}) a(n) -\varepsilon \sigma_{k-1}(n/p)p^{k/2}(\varepsilon+ p^{k/2}) (\varepsilon+ p^{k/2-1}) & \text{otherwise}.
\end{cases}
$$
This shows that $E_{k,p}^\varepsilon$ is not an eigenfunction of $U_p,$ but
by assumptions $\ell \mid \frac{B_k}{2k}(\varepsilon+ p^{k/2})$ and $\ell \mid (\varepsilon+ p^{k/2}) (\varepsilon+ p^{k/2-1}),$ it follows that
\begin{equation}\label{eisenstein_eigenform}
U_p E_{k,p}^\varepsilon(z)\equiv (1+p^{k-1}+\varepsilon p^{k/2}) E_{k,p}^\varepsilon(z) \pmod* \ell.
\end{equation}
In other words, $\overline{E}_{k,p}^\varepsilon$ is an eigenfunction of $U_p$ with eigenvalue $1+p^{k-1}+\varepsilon p^{k/2}$ and this proves our claim.
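The displayed formula for $b(n)$ can be checked numerically. The sketch below assumes the normalization $E_{k,p}^\varepsilon(z)=E_k(z)+\varepsilon p^{k/2}E_k(pz)$ with $E_k=-\frac{B_k}{2k}+\sum_{n\ge 1}\sigma_{k-1}(n)e^{2\pi i n z}$, which is consistent with the constant term and the $p$-th coefficient $1+p^{k-1}+\varepsilon p^{k/2}$ used above; the tuples $(k,p,\varepsilon)$ are arbitrary test values:

```python
def sigma(n, k):
    # sum of k-th powers of the divisors of n (sigma(0, k) treated as 0)
    return sum(d**k for d in range(1, n + 1) if n % d == 0) if n >= 1 else 0

def a(n, k, p, eps):
    # n-th coefficient (n >= 1) of E_{k,p}^eps = E_k(z) + eps * p^(k/2) * E_k(pz)
    s = sigma(n, k - 1)
    if n % p == 0:
        s += eps * p**(k // 2) * sigma(n // p, k - 1)
    return s

for (k, p, eps) in [(4, 2, 1), (6, 3, -1), (8, 5, 1), (6, 11, -1)]:
    lam = 1 + p**(k - 1) + eps * p**(k // 2)
    for n in range(1, 60):
        b = a(p * n, k, p, eps)            # n-th coefficient of U_p E_{k,p}^eps
        expected = lam * a(n, k, p, eps)
        if n % p == 0:                     # the correction term in the third case
            expected -= (eps * sigma(n // p, k - 1) * p**(k // 2)
                         * (eps + p**(k // 2)) * (eps + p**(k // 2 - 1)))
        assert b == expected
print("U_p formula verified")
```

Since the correction term carries the factor $(\varepsilon+p^{k/2})(\varepsilon+p^{k/2-1})$, it vanishes modulo $\ell$ under the assumptions above, recovering \eqref{eisenstein_eigenform}.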
The reduction map from $S_k(p,\bar{\mathbb Z}_\ell )$ to $S_k(p,\overline{\mathbb F}_\ell)$ is surjective by Carayol's lemma \cite[Proposition 1.10]{edi}. Hence, there exists an element in $S_k(p,\mathcal O_K)$
having $\overline{E}_{k,p}^\varepsilon$ as mod $\ell$ reduction, where $\mathcal O_K$ is the ring of integers of some finite extension $K$ of $\mathbb Q_\ell$.
In other words, $\overline{E}_{k,p}^\varepsilon$ is the reduction of a characteristic $0$ cusp form,
which may not be an eigenfunction for the Hecke operators.
Now we use the Deligne-Serre lifting lemma \cite[Lemma 6.11]{dese} guaranteeing
the existence of an $h'\in S_k(p,\mathcal O_L)$ that is a normalized common eigenfunction for every element of $\{T_q, U_p: q\neq p\},$
such that
\begin{center}
$h'\equiv E_{k,p}^\varepsilon\pmod* {\lambda'}$,
\end{center}
for some prime ideal $\lambda'$ lying above $\ell$ in $\mathcal O_{L}$. Here $L\supseteq K$ is a finite extension of $\mathbb Q_\ell$ which is a completion of some number field at a prime over $\ell$. Moreover, such an $h'$ arises from some $h(z)= \sum_{n\ge 1}a_h(n)e^{2\pi i n z}\in S_k(p)$ via the embedding of $K_h$ into $L,$ and hence there exists a prime ideal $\lambda$ above $\ell$ in $K_h$ such that
\begin{equation*}
h\equiv E_{k,p}^\varepsilon \pmod* \lambda.
\end{equation*}
This proves the first part of \thmref{eigenform}.
Next, if $k=2$, then we know that $S_2(1)=0,$ and therefore there are no oldforms in $S_2(p),$ and hence $h$ must be a newform. Let $\delta$ be the $W_p$-eigenvalue
of $h.$ Then by \lemref{alr}
we have $a_h(p)=-\delta,$ whereas the $p$-th Fourier coefficient of $E_{2,p}^{-}$ is 1. This gives $-\delta \equiv 1\pmod* \lambda,$ and hence $h\in S_2^-(p),$ as $\ell$ (the characteristic of $\lambda$) is odd. \end{proof}
The next result refines \cite[Theorem 1]{dufr} and is a direct consequence of \thmref{eigenform} and \cite[Theorem 1]{dagu}. \begin{cor}\label{eigenform_varepsilon}
Let $k\ge 2$ be an even integer, $p$ a prime and $\varepsilon \in \{\pm 1\}$.
If $k=2$, we
also assume that $\varepsilon=-1.$
Let
$\ell\ge 5$ be a prime divisor of $\frac{B_k}{2k}(\varepsilon+ p^{k/2}).$
Then there exists a normalized eigenfunction $f \in S_k^\varepsilon(p)$ for all $T_q$ with $q\neq p,$ and a prime ideal $\lambda$ over $\ell$ in the coefficient field of $f$ such that
$$f\equiv E_{k,p}^{\varepsilon} \pmod*{\lambda}.$$ \end{cor} \begin{proof}
If $\ell \mid \frac{B_k}{2k}$, applying \cite[Theorem 1]{dagu} gives a normalized eigenform $g\in S_k(1)$ and a prime ideal $\lambda$ above $\ell$ such that $g\equiv E_k \pmod* \lambda$. In this case, the form $f:=g+\varepsilon g|W_p$ is the desired eigenfunction. We now assume that $\ell\nmid \frac{B_k}{2k},$ which implies $\ell \mid (\varepsilon+ p^{k/2})$. Then, by \thmref{eigenform}, we obtain a normalized eigenform $h\in S_k(p)$ and a prime ideal $\lambda$ above $\ell$ in the coefficient field of $h$ such that $h\equiv E_{k,p}^\varepsilon \pmod* \lambda$. If $h$ is a newform, then Case (i) in the proof of \thmref{main} gives $h\in S_k^\varepsilon(p)$ and hence we are done. If instead $h$ is not a newform, then following the arguments in Case (ii) and by \eqref{g_cong}, we have a normalized eigenform $g\in S_k(1)$ such that $g\equiv E_k \pmod*{\lambda},$ where $\lambda$ is a prime ideal above $\ell$ in the coefficient field of $g$. As before, $f:=g+\varepsilon g|W_p$ serves our purpose. \end{proof}
\subsection*{Proof of \thmref{rmain}}
The proof uses \thmref{eigenform} and some ideas from the proof of \thmref{main}, so we only give a sketch here. We first notice that $\ell \mid (\varepsilon + p^{k/2})$ and hence from \thmref{eigenform} we have an eigenform $h\in S_k(p)$ such that $h\equiv E_{k,p}^{\varepsilon}\pmod*{\lambda}.$ We now claim that, because of the conditions $\ell\nmid \frac{B_k}{2k}$ and $k\not\equiv0 \pmod{\ell-1}$, the eigenform $h$ has to be a newform with $W_p$-eigenvalue $\varepsilon$, i.e., Case (ii) in the proof of \thmref{main} does not occur. Indeed, if it did occur, then by \eqref{g_cong} we would have an eigenform $g$ of level $1$ such that
$g\equiv E_k \pmod*{\Lambda}.$ Since $g$ has integral Fourier coefficients in its coefficient field and $k\not\equiv0 \pmod*{\ell-1}$, applying \cite[Chapter X, Theorem 8.4]{lang} gives $\ell \mid \frac{B_k}{2k}$, which is a contradiction.
\section{Proof of \thmref{main2}}\label{proof_main2}
The main idea of the proof is to construct, in each
of the three cases, a modular form $g \in M^{\varepsilon}_k(p)$
with integral Fourier coefficients and having non-zero
constant term $a_g(0)$,
coprime to $N^{\varepsilon}_{k,p},$ such that the function
\begin{equation}\label{f}
f:=E_{k, p}^{\varepsilon} +\frac{B_k}{2k}\varepsilon (\varepsilon+p^{k/2})\frac{g}{a_g(0)}
\end{equation}
is a non-zero cusp form. The existence of such $g$ then ensures that
the corresponding $f$ has rational Fourier coefficients and, moreover,
$$f\equiv E_{k, p}^\varepsilon \pmod*{N_{k, p}^\varepsilon}.$$
\subsection{Case (a)}
Let
$\dim M_k^\varepsilon(p)=d+1$. From \cite{chki13} for $\varepsilon=+1$ and \cite{ckl} for $\varepsilon=-1$, we know that for such choices of primes $p$ there exists a
basis $\{f_0, f_1,\ldots , f_d\}$ of $M_k^{\varepsilon}(p),$ known as the Victor Miller basis,
such that all the Fourier coefficients of the $f_j$ are integers
and the Fourier expansions have the form
$$
f_j(z)=e^{2\pi ijz}+O(e^{2\pi i(d+1)z})~~~{\rm for~} 0\le j\le d.
$$
So, the obvious choice for $g$ in this case is $f_0,$ completing the proof.
\subsection{Case (b)}
For any even integer $\alpha\ge 4$ observe that $G_{\alpha}(z)G_{\alpha}(pz)\in M_{2\alpha}^+(p)$, where
$G_{\alpha}$ is the Eisenstein series of weight $\alpha$ defined by
$$
G_\alpha(z)= 1-\frac{2\alpha}{B_\alpha}\sum_{n\ge 1}\sigma_{\alpha-1}(n)e^{2\pi i n z}.
$$
We first consider the situation when $\varepsilon=+1$ and $k \ge 8$ with $k\equiv 0 \pmod* 4$. Such $k$ can be written as $k=8a+12b=4\beta$ for some non-negative integers $a,$ $b$ and $\beta \ge 2,$
and so
$$g(z):=(G_4(z)G_4(pz))^a(G_6(z)G_6(pz))^b \in M_{4\beta}^+(p)$$
has integer Fourier coefficients (as $G_4$ and $G_6$ have integer Fourier coefficients) with constant term $1$.
Next we claim that the corresponding function $f$ defined by \eqref{f} is non-zero by showing that its $e^{2\pi iz}$ coefficient is non-zero, i.e.,
$$1+(30a-63b)\frac{B_{4\beta}(1+p^{2\beta})}{\beta} \neq 0.
$$
For that, we write
$\displaystyle{\frac{B_{4\beta}}{8\beta}=\frac{m}{n}}$, where $m\in \mathbb{N}$, $n\in \mathbb{Z}$ and $(m,n)=1$; hence $\displaystyle{1+(30a-63b)\frac{B_{4\beta}(1+p^{2\beta})}{\beta}}$ is zero
if and only if $n=8(-30a+63b)(1+p^{2\beta})$ and $m=1$.
But one can easily check that $n\neq 8(-30a+63b)(1+p^{2\beta})$ for $\beta=2$ and that $m>1$ for $\beta\ge 3.$
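The bookkeeping above rests on the $e^{2\pi iz}$ coefficient of $(G_4(z)G_4(pz))^a(G_6(z)G_6(pz))^b$ being $240a-504b$, so that multiplying by $B_k/2k = B_{4\beta}/(8\beta)$ produces the factor $30a-63b$ in the displayed coefficient. A quick check by multiplying truncated $q$-expansions (the triples $(a,b,p)$ are arbitrary test values):

```python
from fractions import Fraction

def sigma(n, k):
    return sum(d**k for d in range(1, n + 1) if n % d == 0)

def mul(f, g, N):
    # product of two truncated q-expansions, up to q^N
    h = [Fraction(0)] * (N + 1)
    for i, fi in enumerate(f):
        if fi:
            for j in range(N + 1 - i):
                h[i + j] += fi * g[j]
    return h

N = 6
B = {4: Fraction(-1, 30), 6: Fraction(1, 42)}   # Bernoulli numbers B_4, B_6

def G(al, p, N):
    # q-expansion of G_al(p*z) = 1 - (2*al/B_al) * sum_n sigma_{al-1}(n) q^{p*n}
    c = [Fraction(0)] * (N + 1)
    c[0] = Fraction(1)
    for n in range(1, N // p + 1):
        c[p * n] = -Fraction(2 * al) / B[al] * sigma(n, al - 1)
    return c

for (a, b, p) in [(1, 0, 3), (0, 1, 5), (2, 1, 7)]:
    g = [Fraction(0)] * (N + 1)
    g[0] = Fraction(1)
    for _ in range(a):
        g = mul(g, mul(G(4, 1, N), G(4, p, N), N), N)
    for _ in range(b):
        g = mul(g, mul(G(6, 1, N), G(6, p, N), N), N)
    assert g[0] == 1 and g[1] == 240 * a - 504 * b
print("first coefficient equals 240a - 504b")
```

The factors $G_\alpha(pz)$ contribute nothing at $q^1$ for $p\ge 2$, which is why the coefficient is simply additive in $a$ and $b$.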
\par For $\varepsilon=-1$ and $k \ge 8$ with $k\equiv 2 \pmod* 4$, we write $k=2+8a+12b=2+4\beta$. Define
$$g(z)=G_{2,p}^-(z)(G_4(z)G_4(pz))^a(G_6(z)G_6(pz))^b \in M_{2+4\beta}^-(p),$$
where $ G_{2,p}^-(z)=G_2(z)-pG_2(pz)\in M_2^-(p)$. This $g$ has integer Fourier coefficients and constant term $1-p$.
The first Fourier coefficient of the corresponding $f$ is $$1+(240a-504b-24)\frac{B_{2+4\beta}(p^{1+2\beta}-1)}{2(2+4\beta)},$$ and as before one can verify that it is non-zero.
\subsection{Case (c)}
For $\varepsilon=+1$, write $k=10 \alpha$ with $\alpha >0$.
Define
$$g(z):=(p^2G_4(pz)G_6(z)+p^3G_4(z)G_6(pz))^{\alpha}.$$
Observe that $p^2G_4(pz)G_6(z)+p^3G_4(z)G_6(pz) \in M_{10}^+(p)$ and
hence $g\in M_{k}^+(p).$ Further, $g$ has integer Fourier coefficients and constant term $(p^2+p^3)^{\alpha}.$ Since
by assumption, $p(p+1)$ does not divide $N^+_{k,p},$ it follows
that the corresponding $f$ defined by \eqref{f} satisfies
$$f\equiv E_{k, p}^\varepsilon \pmod*{N_{k, p}^\varepsilon}.$$
To show that $f$ is non-zero we prove that its $e^{2\pi iz}$ coefficient is
non-zero, i.e.,
$$
1+\frac{(240p^3-504p^2)\alpha (p^2+p^3) B_{10\alpha}(1+p^{5\alpha})}{20\alpha (p^2+p^3)^{\alpha}} \neq 0.
$$
First suppose that $\alpha>1$. Write $$\frac{B_{10\alpha}(1+p^{5\alpha})}{20\alpha(p^2+p^3)^{\alpha-1}}=\frac{m}{n}\text{~with~}(m, n)=1.$$ Then one checks that $m$ is always greater than 1,
and therefore the coefficient cannot be zero.
If $\alpha=1$ we write $B_{10}(1+p^{5})/20=m/n$ with $(m, n)=1$. Again the coefficient is zero if and only if $n=-(240p^3-504p^2) m$. Since $(m, n)=1$, we must have $m=1$. But for $\alpha =1$, we have $B_{10}(1+p^{5})/20=(1+p^5)/264$, i.e., if $p\ge 5$ then it can be easily seen that $m>1.$ For the remaining two primes we have
$n\neq -(240p^3-504p^2)$.
Finally we assume that $\varepsilon=-1$ and $k=2+10 \alpha$ for some integer $\alpha>0$. Define $$g(z):=G_{2,p}^-(z)(p^2G_4(pz)G_6(z)+p^3G_4(z)G_6(pz))^{\alpha}.$$
This $g$ has all the required properties and the corresponding $f$, defined by \eqref{f}, has $e^{2\pi iz}$ coefficient
$$1+\frac{(\alpha(240p^3-504p^2)(p^2+p^3)-24)B_{2+10\alpha}(1+p^{1+5\alpha})}{2(1+10\alpha)(p^2+p^3)^{\alpha}}.$$
As before, one can easily check that this coefficient is non-zero and hence we are done.
{$\square$}
\section{Numerical Examples}\label{example}
In this section, we give several numerical examples of Ramanujan-style congruences
and write $q$ for $e^{2\pi iz}.$
We first recall some basic facts that will be used. To prove that two normalized eigenforms of weight $k$ and level $p$ are congruent, it is enough to check that their first $k(p +1)/12$ Fourier coefficients at prime indices are congruent (this is due to
the Sturm bound). Moreover, if $f(z)=\sum_{n\ge 1}a_f(n)q^n$ is a newform and $\sigma \in {\rm Gal}(\overline {\mathbb Q}/ \mathbb Q)$, then its Galois conjugate $f^\sigma(z):=\sum_{n\ge 1}\sigma(a_f(n))q^n$ is a newform. In fact, it is easy to see that if two modular forms $f$ and $g$ are congruent modulo some prime ideal $\lambda$ above $\ell$ and $\sigma \in {\rm Gal}(\overline {\mathbb Q}/ \mathbb Q)$, then $\sigma(\lambda)$ is a prime ideal above $\ell$ and
\begin{equation}\label{cong_galois}
f^\sigma \equiv g^\sigma \pmod {\sigma(\lambda)}.
\end{equation}
To simplify the notation in this section, we put
\begin{center}
$N_{k,p}^\varepsilon:=$ the reduced numerator of ${\frac{B_k}{2k}(\varepsilon+ p^{k/2})}~~~$ and
$~~~M_{k,p}^\varepsilon:=(\varepsilon+p^{k/2}) (\varepsilon+p^{k/2-1})$.
\end{center}
\begin{example} {\rm Take $p=11$ and $k=6$. For $\varepsilon=-1$, we easily see that
$N_{6,11}^-=5\cdot 19$ and both the primes $5$ and $19$ divide
$M_{6,11}^-$, so $\ell =5$ or $19$.
We see that $S_{6}^-(11)$ is 3-dimensional, spanned by the newforms
\begin{align*}
g(z) &= q + aq^2 - \left(\tfrac{1}{6}a^2 + \tfrac{5}{3}a - \tfrac{64}{3}\right)q^3 + (a^2 - 32)q^4 - \left(\tfrac{3}{2}a^2
+ 7a - 98\right)q^5 \\
& \hspace{30pt} - \left(\tfrac{5}{3}a^2 - \tfrac{19}{3}a - \tfrac{94}{3}\right)q^6+ O(q^{7}),
\end{align*}
where $a$ is any root of the polynomial $x^3 - 90x + 188.$
{Because any two newforms in $S_{6}^-(11)$ are Galois conjugates, in view of \eqref{cong_galois}, \thmref{main} ensures the existence of a congruence between the newform $g$ and
$E_{6,11}^-$ modulo some prime in $\mathbb Q(a)$ above $5$ (resp. for $19$)
which we verify now.} Factoring the ideals (5) and (19) in the ring of
integers of $\mathbb Q(a)$ gives $5=\lambda \lambda'$ and
$19=\beta\beta'^2$, where $\lambda=(5, -\tfrac{1}{6}a^2 + \tfrac{1}{3}a + \tfrac{28}{3})$,
$\lambda'= (5, \tfrac{1}{6}a^2 + \tfrac{2}{3}a - \tfrac{28}{3})$, $\beta=(19, \tfrac{1}{6}a^2 + \tfrac{2}{3}a - \tfrac{31}{3})$ and $\beta'=(19, \tfrac{1}{6}a^2 + \tfrac{2}{3}a - \tfrac{58}{3})$.
We then check that
\begin{equation}\label{l=5}
g\equiv E_{6,11}^- \pmod* {\lambda'} \hspace{10pt}{\rm~and}
\hspace{10pt} g\equiv E_{6,11}^- \pmod* {\beta}.
\end{equation}
We emphasize the fact that the congruence \eqref{l=5} for the
prime above $\ell =5=6-1$ is new
and corresponds to the non-covered case $\ell=k-1$ in \cite{gapo}.
For $\varepsilon=+1$, we have $N_{6,11}^+=37$ and $37 \mid M_{6,11}^+.$ As $S_{6}^{+}(11)$
is 1-dimensional and spanned by the newform
$$f(z)=q - 4q^{2} - 15q^{3} - 16q^{4} - 19q^{5} + 60q^{6} + O(q^{7}),
$$
\thmref{main} guarantees the congruence
$$
f\equiv E_{6,11}^+ \pmod* {37}.
$$
As $11^3\equiv -1\pmod*{37},$ the conditions of
\thmref{non-divisibility} are satisfied with
$\ell=37,p=11,k=6$ and $\varepsilon=1.$ On using that
$\gamma_{1,37}=0.47464\ldots,$
we conclude that $f$ has
a refined $\ell$-non-divisibility asymptotic \eqref{refined} with
$h_1=36,$ and Euler-Kronecker constant
$$\gamma_{f;37}=\gamma_{1,37}+\frac{6}{11^6-1}-\frac{5}{11^5-1}
=0.47464\ldots-0.000027\ldots=0.47461\ldots$$
Hence, in this case the Ramanujan approximation is better than the
Landau one.}
\end{example}
\begin{example}
{\rm
Although \thmref{main} holds for $\ell>k-2$, we have checked several numerical examples with $\ell \le k-2$, and in all of them the assertion of \thmref{main} remains true. We give one such example here.
Take $k=12$ and $p=7$. Then the space of newforms in $S_{12}^{+}(7)$ is 3-dimensional and spanned by
the newforms
\begin{align*}
f(z)& =q + aq^2 - \left(\tfrac{11}{21} a^2 - \tfrac{103}{7} a - \tfrac{33758}{21}\right)q^3 + (a^2 - 2048)q^4 + \left(\tfrac{59}{7}a^2 - \tfrac{517}{7}a - \tfrac{203864}{7}\right)q^5 \\
&\hspace{30pt}+ \left(-\tfrac{538}{21}a^2 + \tfrac{788}{7}a + \tfrac{2476144}{21}\right)q^6 +O(q^7),
\end{align*}
where $a$ is any root of the polynomial $x^3 - 77x^2 - 2854x + 225104.$
Here $N_{12,7}^+=5 \cdot 181 \cdot 691$ and $5\cdot 181 \mid M_{12,7}^+$. Therefore, by the same reasoning as in the previous example, \thmref{main} guarantees a congruence between $f$ and $E_{12,7}^+$ modulo a prime
ideal above 181 in $\mathbb Q(a),$ which we have verified by
direct computation. Note that \thmref{main} is not applicable for the prime $\ell =5,$ as $\ell < k-1=11$. But factoring the ideal (5) in the ring of integers of $\mathbb Q(a)$ gives $5=\lambda \lambda'$, where $\lambda=(5, -\tfrac{1}{14}a^2 + \tfrac{23}{14}a + \tfrac{1628}{7})$ and $\lambda'=(5, \tfrac{1}{42}a^2 - \tfrac{3}{14}a - \tfrac{1775}{21})$ and we check that
$$
f\equiv E_{12,7}^+ \pmod* {\lambda'}.
$$ }
\end{example}
\begin{example}
{\rm
The dimension of $S_k^+(p)$ turns out to be $1$ for $p=2, 3, 5, 7, 11$ and for finitely many values of $k,$ and the unique newforms have integer Fourier coefficients. In these cases, \thmref{main2} gives a suitable congruence modulo $N_{k,p}^+$. For example, if $p=2$ and $k=8,$ then one easily checks that $N_{8,2}^{+}=17$. Then
$S_8^+(2)$ is 1-dimensional and spanned by the newform
$$\Delta_{8,2}(z):=\eta^8(z)\eta^8(2z) \in S_8^+(2).$$
\thmref{main2} shows that a constant multiple of $\Delta_{8,2}$ is congruent to $E_{8,2}^+$ modulo 17. By comparing the first Fourier coefficients, we see that the constant must be 1 and hence
$\Delta_{8,2} \equiv E_{8,2}^+ \pmod* {17}.$}
\end{example}
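This congruence is easy to confirm on a computer. The sketch below assumes the normalization $E_{8,2}^+(z)=E_8(z)+2^4E_8(2z)$ (consistent with the coefficients used in the proofs above), so that its $n$-th coefficient for $n\ge 1$ is $\sigma_7(n)+16\,\sigma_7(n/2)$, the second term present only when $2\mid n$; the truncation bound $40$ is an arbitrary choice well past the Sturm bound $k(p+1)/12=2$:

```python
def sigma(n, k):
    return sum(d**k for d in range(1, n + 1) if n % d == 0)

N = 40
# Delta_{8,2} = eta(z)^8 eta(2z)^8 = q * prod_{n>=1} (1 - q^n)^8 (1 - q^{2n})^8
c = [0] * (N + 1)
c[1] = 1
for n in range(1, N + 1):
    for step in (n, 2 * n):
        if step > N:
            continue                        # factor only affects q^m with m > N
        for _ in range(8):                  # multiply by (1 - q^step)^8
            for i in range(N, step - 1, -1):
                c[i] -= c[i - step]

for n in range(1, N + 1):
    e = sigma(n, 7) + (16 * sigma(n // 2, 7) if n % 2 == 0 else 0)
    assert (c[n] - e) % 17 == 0
print("Delta_{8,2} = E_{8,2}^+ (mod 17) for all n <=", N)
```

For instance $c_2=-8$ while $\sigma_7(2)+16=145$, and $145-(-8)=153=9\cdot 17$.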
\begin{example}
{\rm This example, which falls outside the scope of \thmref{main}, gives a congruence modulo 6 using \thmref{main2}. Take $k=4$ and $p=17,$ and hence $N_{4,17}^-=6$. The space $S_4^{-}(17)$ is 1-dimensional, spanned by the newform
$$
f(z)= q - 3q^{2} - 8q^{3} + q^{4} + 6q^{5} + 24q^{6} - 28q^{7} + 21q^{8} + 37q^{9} + O(q^{10}).
$$
\thmref{main2} guarantees a congruence between a constant multiple of $f$ and $E_{4,17}^-$ modulo 6. An easy computation shows that the first 6
Fourier coefficients of $f$ and $E_{4,17}^-$ are
the same modulo 6. Hence, invoking the Sturm bound,
$$
f \equiv E_{4,17}^- \pmod* 6.
$$ } \end{example}
We end this section by making some comments on Examples 5.6 and 5.7 considered
in \cite{dufr}. These involve congruences between the coefficients of a newform in $S_k(p)$ and $E_k$ away from the level $p$.
More precisely, in those examples, it is shown that if $\ell \mid \frac{B_k}{2k}(1-p^{k})$ and $\ell \ge 5,$ then there exists a newform $f\in S_k(p)$ such that, for all primes $q\neq p$,
modulo a prime ideal $\lambda$ above $\ell$ we have $a_f(q)\equiv 1+q^{k-1}\pmod*{\lambda}$.
On using \thmref{main} more can be said. For both
examples the prime $\ell$ divides $\frac{B_k}{2k}(\varepsilon+p^{k/2})$ and $(\varepsilon+ p^{k/2}) (\varepsilon+ p^{k/2-1})$, for some $\varepsilon\in \{\pm 1\}$ and also satisfies
the further requirements of \thmref{main}, which then
yields that the $W_p$-eigenvalue of $f$ is $\varepsilon$ and
that $a_f(p)\equiv 1+p^{k-1}+\varepsilon p^{k/2}\pmod*{\lambda}.$
\section{Intermezzo: Anatomy of integers}\label{dickman}
This section is a preamble for the next one.
\par Let $P^+(n)$ denote the largest prime divisor
of an integer $n\geqslant 2$. Put $P^+(1)=1$.
A number $n$ is said to be {\it $y$-friable}\footnote{Some authors use $y$-smooth. Friable
is an adjective meaning easily crumbled or broken.} if $P^+(n)\leqslant y$.
The number of integers $1\leqslant n\leqslant x$
such that $P^+(n)\leqslant y$ is denoted by $\Psi(x,y)$.
The study of the smoothness of integers was dubbed psixyology by the third
author in his PhD thesis, but is currently called the anatomy of integers
(thus the focus shifted from the mind of numbers to their body...).\\
\indent In 1930, Dickman \cite{Dickman} proved that
\begin{equation}
\label{dikkertje}
\lim_{x\rightarrow \infty}\frac{\Psi(x,x^{1/u})}{x}=\rho(u),
\end{equation}
where the {\it Dickman function}
$\rho(u)$
is defined by
\begin{equation}
\label{defie}
\rho(u)=
\begin{cases}
1 & \text{for $0\leqslant u\leqslant 1$};\\
\frac{1}{u}\int_{0}^1 \rho(u-t) dt & \text{for $u>1$.}
\end{cases}
\end{equation}
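The definition \eqref{defie} is equivalent, after the substitution $v=u-t$, to $u\rho(u)=\int_{u-1}^{u}\rho(v)\,dv$, which is easy to iterate numerically. A minimal sketch in Python (the step size is an arbitrary accuracy choice):

```python
import math

def dickman_rho(u_max, h=1e-3):
    # Tabulate rho on a grid of spacing h, using u*rho(u) = int_{u-1}^{u} rho(t) dt:
    # slide the length-one window right by h with the trapezoid rule and
    # solve the resulting linear equation for the new value rho(u).
    m = int(round(1.0 / h))                # grid points per unit interval
    n = int(round(u_max / h))
    rho = [1.0] * (m + 1)                  # rho = 1 on [0, 1]
    I = 1.0                                # integral of rho over [u-1, u]; equals 1 at u = 1
    for i in range(m + 1, n + 1):
        u = i * h
        partial = I + 0.5 * h * rho[i - 1] - 0.5 * h * (rho[i - m] + rho[i - m - 1])
        rho_u = partial / (u - 0.5 * h)    # from u*rho(u) = partial + (h/2)*rho(u)
        I = partial + 0.5 * h * rho_u
        rho.append(rho_u)
    return rho, h

rho, h = dickman_rho(4.0)
val = lambda u: rho[int(round(u / h))]
# rho(2) = 1 - log 2 = 0.3068..., and rho(4) = 0.0049..., so 1 - 4*rho(4) > 0.98
print(val(2.0), 1 - math.log(2))
print(val(4.0), 1 - 4 * val(4.0))
```

With $u=4$ this recovers the constant $1-4\rho(4)\approx 0.980$ used later in Section \ref{application}.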
We have $0<\rho(u)<1/\Gamma(u+1),$ where $\Gamma$ is the Gamma function. Thus
$\rho$ is rapidly decreasing. Dickman's result also remains true if we ask
for the proportion of integers $n\le x$ such that $P^+(n)\le n^{1/u}.$
Thus if $p$ is a prime number and $p-1$ would
behave like a typical integer, then one would expect that $P^+(p-1)\le p^{1/u}$
with probability $\rho(u).$ The following
known result
partially confirms this.
\begin{thm}
\label{pu}
Let $s$ be any fixed non-zero integer.
The set $\{p:P^+(p+s)\ge p^{1/u}\}$
has density $1-\rho(u)$ under the Elliott-Halberstam
conjecture and unconditionally a lower
density at least
$1-4\rho(u)$ for $u>u_1,$ where $u_1\in (2.677,2.678)$ is the unique solution of
the equation $4u\rho(u)=1.$
\end{thm}
\begin{proof}
A detailed
proof of the first
assertion was given by Lamzouri \cite{lam} and, independently, by Wang \cite{wang}. The second assertion is
due to Feng-Wu \cite{FW} and Liu-Wu-Xi \cite{liwuxi}.
\end{proof}
Unconditionally the set considered in
\thmref{pu}
is not known to have a density, and therefore we
work with the notion of lower and upper density.
\par Interestingly, \eqref{dikkertje} was
already known
to Ramanujan, for details and more about the
behaviour of the Dickman function, see, e.g.,
Moree \cite{mordeB}.
\section{Application to the degree of the coefficient field}\label{application}
Let $K_f$ be the coefficient field of
a normalized eigenform $f\in S_k(p).$ Put
$$d_k^{\rm new}(p):= \max\{[K_f:\mathbb Q]:f\in S_k(p), ~f\text{~newform}\}$$
and
$$d_k(p):= \max\{[K_f:\mathbb Q]:f\in S_k(p), ~f\text{~normalized~eigenform}\}.$$
Billerey-Menares \cite{bime} showed that for every even integer $k\ge 2$ and every prime $p\ge (k+1)^4$
with $P^+(p-1)\ge 5$ one has
\begin{equation}
\label{5/2}
d_k^{\rm new}(p) \ge \frac{5\log(P^+(p-1))}{2k}.
\end{equation}
(Actually the original result has the
factor $\log(1+2^{(k-1)/2})$ in the denominator, which we prefer to
replace by the upper bound $k/5.$) In 2015, Luca et al.\,\cite{LMP}
showed that $P^+(p-1)\ge p^{1/4}$ holds for all primes $p$ in a set of natural density at least $3/4.$ In combination with
inequality \eqref{5/2}, this yields that
\begin{equation}
\label{oud}
d_k^{\rm new}(p) \ge \frac{5\log p }{8 k}
\end{equation}
for a set of primes of density at least $3/4.$
If one wants a lower bound valid for all sufficiently large
primes $p,$ we still cannot do better than
Bettin et al.\,\cite{verynew}, who showed that
$$
d_k^{\rm new}(p) \gg_k \log \log p,~~ p\rightarrow \infty.
$$
On combining \thmref{pu} and inequality
\eqref{5/2}, we obtain the following
improvement of \eqref{oud}.
\begin{thm}
Let $k\ge 2$ be an even integer and $u>1$ any real number. Under the Elliott-Halberstam
conjecture the set of primes $p$ for which
$$
d_k^{\rm new}(p) \ge \frac{5\log p }{ 2 u k}
$$
has lower density at least
$1-\rho(u).$ Unconditionally this
set has
lower density at least $1-4\rho(u)$ for
$u>2.678.$
\end{thm}
\begin{cor}
The set of
primes $p$ for which
\eqref{oud} holds has lower
density at least $1-4\rho(4)\ge 0.98.$
\end{cor}
In the same spirit and as an application of
\thmref{eigenform}, we establish analogues
for $d_k(p)$ of
the latter theorem and corollary,
by following the ideas used in the proof of \cite[Theorem 2]{bime}.
\begin{thm}
\label{dkpbound}
If $k\ge 4$ is an even integer and $p$ any prime, then
$$d_k(p) \ge \frac{5\log (P^+(p^2-1)) }{2k}.$$
\end{thm}
\begin{proof}
Define $\ell:= P^+(p^2-1).$ First
assume that $\ell \ge 5.$ Then we can choose $\varepsilon\in \{\pm 1\}$ such that $\ell \mid (\varepsilon+ p^{k/2})$. Applying \thmref{eigenform} yields a normalized eigenform $f=\sum_{n\ge 1}a_f(n)q^n \in S_k (p)$ and a prime ideal $\lambda \subset \mathcal{O}_{K_f}$ above $\ell$ such that
$$
f\equiv E_{k,p}^\varepsilon \pmod* \lambda.
$$
The coefficients of the eigenform $f$ and those of any Galois conjugate of $f$ all
satisfy Deligne's estimate and hence the non-zero algebraic integer $b:=a_f(2)-1-2^{k-1}$ and all its Galois conjugates have absolute value
bounded above by $(1+2^{(k-1)/2})^2$. Because of the above congruence, it is clear that $b\in \lambda$ and hence $\ell$ divides
the absolute value of the norm of $b,$ which is at most $(1+2^{(k-1)/2})^{2d}$, where $d=[K_f:\mathbb Q]$. Therefore we conclude that
$$
d_k(p)\ge d\ge \frac{\log \ell}{2\log \left(1+2^{(k-1)/2} \right)}\ge \frac{5\log \ell}{2k}.
$$
In the remaining case $\ell\le 3$ we have
$5(\log \ell)/(2k)\le 5(\log 3)/8<1\le d_k(p),$ and there is nothing to prove.
\end{proof}
Combining this with \thmref{pu} we obtain the following corollary.
\begin{cor}
Let $k\ge 4$ be an even integer and $u>1$ any real number. The set of primes $p$ for which
$$
d_k(p) \ge \frac{5\log p }{ 2 u k}
$$
has lower density at least $1-4\rho(u)$ for
$u>2.678.$
\end{cor}
This can be sharpened if
one solves the
following problem.
\begin{prob}
Show that the set of primes
$p$ for which $P^+(p^2-1)<p^{1/u}$
has an upper density
not exceeding $4\rho(u)$ for all $u$ large
enough.
\end{prob}
By \thmref{pu}, under the Elliott-Halberstam
conjecture each of the inequalities
$P^+(p-1)< p^{1/u}$ and $P^+(p+1)<p^{1/u}$
is satisfied with probability $\rho(u).$
Assuming
independence of the two events leads to the following
conjecture on invoking Theorem \ref{dkpbound}.
\begin{con}
Let $k\ge 4$ be an even integer and $u>1$ any real number.
The set of primes $p$ for which
$$d_k(p) \ge \frac{5\log p }{ 2 u k}$$
has lower density at least $1-\rho(u)^2.$
\end{con}
The independence assumption was already made
by Erd\H{o}s and Pomerance; see the two articles
by Wang \cite{wang,wang3}, who also made partial
progress in proving it.
\par Using elementary number theory it is easy to show that
$P^+(p^2-1)\le 3$ if and only if $p\in \{2,3,5,7,17\}.$ A much deeper
result is the following trivial
consequence of a celebrated
result of Evertse (where as usual we denote by $\pi(x)$ the number
of primes $p\le x$).
\begin{prop}
Let $x$ be a real number.
The number of primes $p$ for which
$P^+(p^2-1)\le x$ is at most $3\cdot 7^{1+2\pi(x)}.$
\end{prop}
\begin{proof} Put $S:=\{p_1,\ldots,p_s\},$ a set of $s$ distinct primes. Integers of the
form $\pm p_1^{e_1} p_2^{e_2}\cdots p_s^{e_s}$ are
called $S$-units.
Evertse
\cite[Theorem 1]{Ever} proved that the equation
$a+b=1$ has at most $3\cdot 7^{1+2s}$ solutions
$(a,b)$ with $a$ and $b$ both $S$-units.
If $P^+(p^2-1)\le x,$ then both
$p-1$ and $p+1$ are $S$-units, with $S$ the
set of primes not exceeding $x,$
which has cardinality $\pi(x)$. Taking the difference
and dividing by two leads to the $S$-unit equation $a+b=1.$
\end{proof}
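The elementary characterization of $P^+(p^2-1)\le 3$ stated above can be verified numerically. The following sketch (our own code; the search bound of $10^4$ is an arbitrary choice) confirms that no further such primes occur in that range:

```python
def largest_prime_factor(n):
    """Return P^+(n), the largest prime factor of n >= 2, by trial division."""
    f = 1
    p = 2
    while p * p <= n:
        while n % p == 0:
            f, n = p, n // p
        p += 1
    return n if n > 1 else f

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

smooth_primes = [p for p in range(2, 10_000)
                 if is_prime(p) and largest_prime_factor(p * p - 1) <= 3]
print(smooth_primes)  # [2, 3, 5, 7, 17]
```

For instance, $17^2-1=288=2^5\cdot 3^2$, so $P^+(288)=3$.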
Combining this result with \thmref{dkpbound}, we obtain the following theorem.
\begin{thm}
Let $m\ge 1$ be an integer.
Then, with the exception of at most
$O(e^{e^{2km/5}})$ primes $p,$
we have
$d_k(p)\ge m.$
\end{thm}
Proving a similar result for $d_k^{\rm new}(p)$ is an open problem.
\section{Proof of \thmref{non-divisibility}}
\label{pietersproof}
A set of natural numbers $S$ is said to be \emph{multiplicative} if its
characteristic function is a multiplicative function.
The set $B$ of natural numbers that can be written as a sum of two squares provides an example (as
Fermat already knew).
One can
wonder about the asymptotic behavior of $S(x)$,
the number of positive integers $n\le x$ that are in $S$.
An important role in understanding $S(x)$ is played by the Dirichlet series
\begin{equation}
\label{LS}
L_S(s) := \sum_{n\in S}n^{-s},
\end{equation}
which converges for $\Re(s) > 1$.
If the limit
\begin{equation}
\label{EKf}
\gamma_S:=\lim_{s\rightarrow 1^+}\bigg(
\frac{L'_S(s)}{L_S(s)}
+\frac{\alpha}{s-1}\bigg)
\end{equation}
exists for some $\alpha\ne 0$, we say that the set $S$ admits an \emph{Euler-Kronecker constant} $\gamma_S$. A leisurely account
of the theory of Euler-Kronecker constants with plenty of examples
is given in Moree \cite{morSx}.
\begin{proof}[Proof of Theorem \ref{non-divisibility}]
We first consider the non-divisibility of $\sigma_{k-1}(n)$ by
$\ell.$ Note that $\ell\nmid \sigma_{k-1}(n)$ if and only if
$\ell\nmid \sigma_r(n),$ where $r=(k-1,\ell-1).$
Let $g_{p_1}$ be the multiplicative order
of $p_1^{r}$ modulo $\ell,$ with $p_1\ne \ell$ an arbitrary
prime number.
We let
\begin{equation}
\label{gp-def}
\mu_{p_1}=\begin{cases} \ell&\text{if~}g_{p_1}=1,\\
g_{p_1} &\text{if~}g_{p_1}>1.\end{cases}
\end{equation}
Put $S_1:=\{n:\ell\nmid \sigma_r(n)\}.$
Rankin \cite{Ts} showed that
\begin{equation}
\label{EulerProductTsgeneral1}
L_{S_1}(s)=\frac1{1-\ell^{-s}}\prod_{p_1\ne \ell}\frac{1-p_1^{-(\mu_{p_1}-1)s}}{(1-p_1^{-s})(1-p_1^{-\mu_{p_1}s})}.
\end{equation}
The proof of this is short and
can also be found in \cite{clm}.
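The local Euler factors in \eqref{EulerProductTsgeneral1} can be sanity-checked numerically. The sketch below (our own code) uses the illustrative parameters $\ell=5$, $r=1$, $p_1=2$, for which $g_{p_1}=\mu_{p_1}=4$, and compares a truncated local sum against the closed form:

```python
ell, r, p1, s = 5, 1, 2, 1.5
mu = 4  # multiplicative order of p1**r = 2 modulo ell = 5 (exceeds 1)

def sigma_r(a):
    # sigma_r(p1^a) = 1 + p1^r + ... + p1^(r*a)
    return sum(p1 ** (r * j) for j in range(a + 1))

# local factor: sum over a >= 0 with ell not dividing sigma_r(p1^a)
lhs = sum(p1 ** (-a * s) for a in range(60) if sigma_r(a) % ell != 0)
rhs = (1 - p1 ** (-(mu - 1) * s)) / ((1 - p1 ** (-s)) * (1 - p1 ** (-mu * s)))
print(abs(lhs - rhs))  # negligible (only truncation error remains)
```

Here $5\mid\sigma_1(2^a)$ exactly when $a\equiv 3\pmod 4$, i.e. $a\equiv \mu_{p_1}-1\pmod{\mu_{p_1}}$, which is what the closed form encodes.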
Set $f(n)=\sigma_{k-1}(n)+\varepsilon p^{k/2}\sigma_{k-1}(n/p),$ with the convention that $\sigma_{k-1}(n/p)=0$ when $p\nmid n.$ Write
$n=mp^e,$ with $m$ coprime to $p.$ Note that $f(m)=\sigma_{k-1}(m).$
Writing momentarily $\sigma$ instead of $\sigma_{k-1},$ we have
$$f(mp^e)=\sigma(mp^e)+\varepsilon p^{k/2}\sigma(mp^{e-1})=\sigma(m)\sigma(p^e)+\varepsilon p^{k/2}\sigma(m)\sigma(p^{e-1})=f(m)f(p^e),$$
and so $f$ is a multiplicative function. Put $S_2(p):=\{n:\ell\nmid f(n)\}$.
The multiplicativity of $f$ implies that
$$L_{S_2(p)}(s)=\prod_{p_1}\sum_{\substack{a\ge 0\\ \ell\nmid f(p_1^a)}}\frac{1}{p_1^{as}}$$
has an Euler product, the sums being the Euler product
factors.
For the primes $p_1\ne p,$ we have
$\ell\nmid f(p_1^a)$ if and only if $\ell\nmid \sigma_r(p_1^a)$ and we get the same Euler product factors as
in \eqref{EulerProductTsgeneral1}.
Since $\varepsilon p^{k/2}\equiv-1\pmod*{\ell}$ by assumption, we have
$$f(p^a)\equiv
\sigma_{k-1}(p^a)-\sigma_{k-1}(p^{a-1})\equiv p^{a(k-1)}\not\equiv 0\pmod*{\ell},$$ and we
conclude that the Euler product factor at $p_1=p$ is $(1-p^{-s})^{-1}$.
We infer that
\begin{equation}
\label{EulerProductTsgeneral2}
L_{S_2(p)}(s)=\frac1{(1-\ell^{-s})}\frac1{(1-p^{-s})}\prod_{\substack{p_1\ne \ell\\ p_1\ne p}}\frac{1-p_1^{-(\mu_{p_1}-1)s}}{(1-p_1^{-s})(1-p_1^{-\mu_{p_1}s})}=
L_{S_1}(s)\frac{(1-p^{-\mu_{p}s})}{1-p^{-(\mu_{p}-1)s}}.
\end{equation}
Comparing the logarithmic derivative of $L_{S_2(p)}(s)$ with that of
$L_{S_1}(s),$ we
obtain
$$
\frac{L'_{S_2(p)}(s)}{L_{S_2(p)}(s)}=
\frac{L'_{S_1}(s)}{L_{S_1}(s)}+\log p\left(\frac{\mu_p}{p^{\mu_ps}-1}-\frac{(\mu_p-1)}{p^{(\mu_p-1)s}-1}\right).
$$
The proof of \eqref{gammarelator} is then completed on invoking \eqref{EKf}.
\par The prime counting functions
$\#\{p_1\le x:~\ell\mid \sigma_r(p_1)\}$ and
$\#\{p_1\le x:~\ell\mid f(p_1)\}$
differ
by at most one for every $x.$ As the first counting function
satisfies \eqref{primecondition} with
$\delta=r/(\ell-1),$ so does the second, and the
proof is completed on account of Theorem \ref{conditionA}.
\end{proof}
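The multiplicativity of $f$ used in the proof above is also easy to confirm numerically; the following sketch (our own code, with the illustrative small parameters $p=3$, $k=4$, $\varepsilon=1$) checks it over all coprime pairs below $40$:

```python
from math import gcd

p, k, eps = 3, 4, 1  # illustrative choices; any prime p and even k behave the same

def sigma(n, t):
    return sum(d ** t for d in range(1, n + 1) if n % d == 0)

def f(n):
    # f(n) = sigma_{k-1}(n) + eps * p^(k/2) * sigma_{k-1}(n/p), zero if p does not divide n
    tail = sigma(n // p, k - 1) if n % p == 0 else 0
    return sigma(n, k - 1) + eps * p ** (k // 2) * tail

assert all(f(m * n) == f(m) * f(n)
           for m in range(1, 40) for n in range(1, 40) if gcd(m, n) == 1)
```

For example $f(6)=\sigma_3(6)+9\,\sigma_3(2)=252+81=333=f(2)f(3)=9\cdot 37$.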
\noindent \textbf{Acknowledgment.} The authors would like to thank Shaunak Deo for several useful discussions, going through the paper carefully and giving useful suggestions. We also thank Jaban Meher for sharing his ideas in the proof of \thmref{main2} and Dan Fretwell for his useful comments. Matteo Bordignon
explained to the authors how to compute $\rho(u)$ using SAGE, Zhiwei Wang updated them on the literature on friable shifted primes, and Nicolas Billerey and Ricardo Menares pointed
out the existence of \cite{verynew}. The third
author thanks Younes Nikdelan for his meticulous proofreading of earlier
versions and frequent discussions on his related preprint \cite{you}. The authors thank the anonymous referees for a careful reading of the manuscript and for giving valuable suggestions.
\par The research of the first author was supported by the grant no. 692854 provided by the European Research Council (ERC) while the second author was supported by Israeli Science Foundation grant 1400/19. This work was carried out when these two authors were postdoctoral fellows at Hebrew University of Jerusalem and Bar-Ilan University respectively.
\par The open-source mathematics software SAGE (www.sagemath.org) has been used for numerical computations in this work.
\end{document}
\begin{document}
\twocolumn[ \icmltitle{DizzyRNN: Reparameterizing Recurrent Neural Networks for Norm-Preserving Backpropagation} \icmlauthor{Victor Dorobantu*}{vdd6@cornell.edu} \icmlauthor{Per Andre Stromhaug*}{pas282@cornell.edu} \icmlauthor{Jess Renteria*}{jvr35@cornell.edu} \icmladdress{Cornell University, Ithaca, NY}
\vskip 0.3in ]
\begin{abstract}
The vanishing and exploding gradient problems are well-studied obstacles that make it difficult for recurrent neural networks to learn long-term time dependencies. We propose a reparameterization of standard recurrent neural networks to update linear transformations in a provably norm-preserving way through Givens rotations. Additionally, we use the absolute value function as an element-wise non-linearity to preserve the norm of backpropagated signals over the entire network. We show that this reparameterization reduces the number of parameters and maintains the same algorithmic complexity as a standard recurrent neural network, while outperforming standard recurrent neural networks with orthogonal initializations and Long Short-Term Memory networks on the copy problem.
\end{abstract}
\section{Defining the problem}
Recurrent neural networks (RNNs) are trained by updating model parameters through gradient descent with backpropagation to minimize a loss function. However, RNNs in general will not prevent the loss derivative signal from decreasing in magnitude as it propagates through the network. This results in the \textit{vanishing gradient problem}, where the loss derivative signal becomes too small to update model parameters \cite{VanishingGradient}. This hampers training of RNNs, especially for learning long-term dependencies in data.
\section{Signal scaling analysis}
The prediction of an RNN is the result of a composition of linear transformations, element-wise non-linearities, and bias additions. To observe the sources of vanishing and exploding gradient problems in such a network, one can observe the minimum and maximum scaling properties of each transformation independently, and compose the resulting scaling factors.
\subsection{Linear transformations}
Let $y = Ax$ be an arbitrary linear transformation, where $A \in \mathbb{R}^{m \times n}$ is a matrix of rank $r$.
\begin{theorem} The singular value decomposition (SVD) of $A$ is $A = U \Sigma V^T$, for orthogonal $U$ and $V$, and diagonal $\Sigma$ with diagonal elements $\sigma_1, \dots, \sigma_n$, the singular values of $A$. \end{theorem}
From the SVD, Corollaries 1 and 2 follow.
\begin{corollary}
Let $\sigma_{min}$ and $\sigma_{max}$ be the minimum and maximum singular values of $A$, respectively. Then $\sigma_{min}\Vert x \Vert_2 \leq \Vert y \Vert_2 \leq \sigma_{max}\Vert x \Vert_2$.
\end{corollary}
\begin{corollary}
Let $\sigma_{min}$ and $\sigma_{max}$ be the minimum and maximum singular values of $A$, respectively. Then $\sigma_{min}$ and $\sigma_{max}$ are also the minimum and maximum singular values of $A^T$.
\end{corollary}
Proofs for these corollaries are deferred to the appendix.
Let $L$ be a scalar function of $y$. Then $$\frac{\partial L}{\partial x} = \frac{\partial L}{\partial y}\frac{\partial y}{\partial x} = A^T\frac{\partial L}{\partial y}$$ In an RNN, this relation describes the scaling effect of a linear transformation on the backpropagated signal. By Corollary 2, each linear transformation scales the loss derivative signal by at least the minimum singular value of the corresponding weight matrix and at most by the maximum singular value.
\begin{theorem} All singular values of an orthogonal matrix are $1$. \end{theorem}
By Corollary 2, if the linear transformation $A$ is orthogonal, then the linear transformation will not scale the loss derivative signal.
\subsection{Non-linear functions}
Let $y = f(x)$ be an arbitrary element-wise non-linear transformation. Let $L$ be a scalar function of $y$. Then $$\frac{\partial L}{\partial x} = \frac{\partial L}{\partial y}\frac{\partial y}{\partial x} = f'(x)\odot\frac{\partial L}{\partial y}$$ where $f'$ denotes the first derivative of $f$ and $\odot$ denotes the element-wise product. The $i$-th element of $\frac{\partial L}{\partial y}$ is scaled at least by $\min{\left(f'(x_i)\right)}$ and at most by $\max{\left(f'(x_i)\right)}$.
\subsection{Bias}
Let $y = x + b$ be an arbitrary addition of bias to $x$. Let $L$ be a scalar function of $y$. Then $$\frac{\partial L}{\partial x} = \frac{\partial L}{\partial y}\frac{\partial y}{\partial x} = \frac{\partial L}{\partial y}$$Though additive bias does not preserve the norm during a forward pass over the network, it does preserve the norm of the backpropagated signal during a backward pass.
\section{Previous Work}
In general, the singular values of weight matrices in RNNs are allowed to vary unbounded, leaving the network susceptible to the vanishing and exploding gradient problems. A popular approach to mitigating this problem is through orthogonal weight initialization, first proposed by Saxe et al. \cite{OrthogonalInit}. Later, identity matrix initialization was introduced for RNNs with ReLU non-linearities, and was shown to help networks learn longer time dependencies \cite{IRNN}.
Arjovsky et al. \cite{uRNN} introduced the idea of an orthogonal reparametrization of weight matrices. Their approach involves composing several simple complex-valued unitary matrices, where each simple unitary matrix is parametrized such that updates during gradient descent happen on the manifold of unitary matrices. The authors prove that their network cannot have an exploding gradient, and believe that this is the first time a non-linear network has been proven to have this property.
Wisdom et al. \cite{wisdom} note that Arjovsky's approach does not parametrize all orthogonal matrices, and propose a method of computing the gradients of a weight matrix such that the update maintains orthogonality, but also allows the matrix to express the full set of orthogonal matrices.
Jia et al. \cite{jia} propose a method of regularizing the singular values during training by periodically computing the full SVD of the weight matrices, and clipping the singular values to have some maximum allowed distance from $1$. The authors show this has comparable performance to batch normalization in convolutional neural networks. As computing the SVD is an expensive operation, this approach may not translate well to RNNs with large weight matrices.
\section{DizzyRNN}
We propose a simple method of updating orthogonal linear transformations in an RNN in a way that maintains orthogonality. We combine this approach with the use of the absolute value function as the non-linearity, thus constructing an RNN that provably has no vanishing or exploding gradient. We term an RNN using this approach a \textit{Dizzy Recurrent Neural Network} (DizzyRNN). The reparameterization maintains the same algorithmic space and time complexity as a standard RNN.
\subsection{Givens rotation}
An orthogonal matrix $A \in \mathbb{R}^{n\times n}$ may be constructed as a product of $n(n-1)/2$ Givens rotations \cite{GivensRotation}. Each rotation is a sparse matrix multiplication, depending on only two elements and modifying only two elements, meaning each rotation can be performed in $O(1)$ time. Additionally, each rotation is represented by one parameter: a rotation angle. These rotation angles can be updated directly using gradient descent through backpropagation.
Let $a$ and $b$ denote the indices of the fixed dimensions in one rotation, with $a < b$. Let $y = R_{a,b}(\theta)x$ express this rotation by an angle $\theta$. The rotation matrix $R_{a,b}(\theta)$ is sparse and orthogonal with the following form: each diagonal element is $1$ except for the $a$-th and $b$-th diagonal elements, which are $\cos{\theta}$. Additionally, two off-diagonal elements are non-zero; the element $(a, b)$ is $\sin{\theta}$ and the element $(b, a)$ is $-\sin{\theta}$. All remaining off-diagonal elements are $0$. Let $L$ be a scalar function of $y$. Then $$\frac{\partial L}{\partial x} = \frac{\partial L}{\partial y}\frac{\partial y}{\partial x} = R_{a,b}^T(\theta)\frac{\partial L}{\partial y}$$ Recall that since the matrix $R_{a,b}(\theta)$ is orthogonal with minimum and maximum singular values of $1$, the transpose $R_{a,b}^T(\theta)$ also has minimum and maximum singular values of $1$ (by Corollary 2).
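A minimal sketch of this rotation acting on a vector (function and variable names are our own, not from the paper):

```python
import numpy as np

def givens_apply(x, a, b, theta):
    """Apply R_{a,b}(theta) to x; only components a and b change (O(1) work)."""
    y = x.copy()
    c, s = np.cos(theta), np.sin(theta)
    y[a] = c * x[a] + s * x[b]
    y[b] = -s * x[a] + c * x[b]
    return y

x = np.random.default_rng(0).standard_normal(8)
y = givens_apply(x, 2, 5, 0.7)
# the rotation is orthogonal, so the Euclidean norm of y equals that of x
```

All other components of `x` pass through unchanged, which is what makes the packed parallelization of the next subsection possible.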
To update the rotation angles, note that the only elements of $y$ that differ from the corresponding element of $x$ are $y_a$ and $y_b$. Each can be expressed as $y_a = \cos{\theta}x_a + \sin{\theta}x_b$ and $y_b = -\sin{\theta}x_a + \cos{\theta}x_b$. The derivative of $L$ with respect to the parameter $\theta$ is thus \begin{align*} \frac{\partial L}{\partial \theta} &= \frac{\partial L}{\partial y_a}\frac{\partial y_a}{\partial \theta} + \frac{\partial L}{\partial y_b}\frac{\partial y_b}{\partial \theta}\\&=\begin{bmatrix}\frac{\partial L}{\partial y_a}&\frac{\partial L}{\partial y_b}\end{bmatrix}\begin{bmatrix}-\sin{\theta}&\cos{\theta}\\-\cos{\theta}&-\sin{\theta}\end{bmatrix}\begin{bmatrix}x_a\\x_b\end{bmatrix} \end{align*}
To simplify this expression, define the matrix $E_{a,b}$ as $$E_{a,b} = \begin{bmatrix}e_a^T\\e_b^T\end{bmatrix}$$ where $e_i \in \mathbb{R}^n$ is a column vector of zeros with a $1$ in the $i$-th index. The matrix $E_{a,b}$ selects only the $a$-th and $b$-th indices of a vector. Additionally, define the matrix $R_{\partial}(\theta)$ as $$R_{\partial}(\theta) = \begin{bmatrix}-\sin{\theta}&\cos{\theta}\\-\cos{\theta}&-\sin{\theta}\end{bmatrix}$$ Note that $R_{\partial}(\theta)$ always has this form; it does not depend on indices $a$ and $b$. Now the derivative of $L$ with respect to the parameter $\theta$ can be represented as $$\frac{\partial L}{\partial \theta} = \left(E_{a,b}\frac{\partial L}{\partial y}\right)^T R_{\partial}(\theta)E_{a,b}x$$ This multiplication can be implemented in $O(1)$ time.
\subsection{Parallelization Through Packed Rotations}
While the DizzyRNNs maintain the same algorithmic complexity as standard RNNs, it is important to perform as many Givens rotations in parallel as possible in order to get good performance on GPU hardware. Since each Givens rotation only affects two values in the input vector, we can perform $n/2$ Givens rotations in parallel. We therefore only need $n-1$ sequential operations, each of which has $O(n)$ computational and space complexity. We refer to each of these $n-1$ operations as a \textit{packed rotation}, representable by a sparse matrix multiplication.
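A sketch of one packed rotation, built here as a dense matrix only to make the orthogonality check explicit (in practice it would be applied sparsely; names are ours):

```python
import numpy as np

def packed_rotation(n, pairs, thetas):
    """Dense form of one packed rotation: n/2 disjoint Givens rotations at once."""
    P = np.eye(n)
    for (a, b), t in zip(pairs, thetas):
        c, s = np.cos(t), np.sin(t)
        P[a, a] = P[b, b] = c
        P[a, b], P[b, a] = s, -s
    return P

n = 8
pairs = [(0, 1), (2, 3), (4, 5), (6, 7)]  # disjoint index pairs, n/2 of them
P = packed_rotation(n, pairs, np.random.default_rng(1).standard_normal(4))
# P is orthogonal: P @ P.T equals the identity
```

Because the index pairs are disjoint, the $n/2$ rotations commute and can be applied in a single $O(n)$ pass.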
\subsection{Norm preserving non-linearity}
Typically used non-linearities like tanh and sigmoid strictly reduce the norm of a loss derivative signal during backpropagation. ReLU only preserves the norm in the case that each input element is non-negative. We propose the use of an element-wise absolute value non-linearity (denoted as \textit{abs}). Let $y = abs(x)$ be the element-wise absolute value of $x$, and let $L$ be a scalar function of $y$. Then $$\frac{\partial L}{\partial x} = \frac{\partial L}{\partial y}\frac{\partial y}{\partial x} = sign(x)\odot\frac{\partial L}{\partial y}$$ The use of this non-linearity preserves the norm of the backpropagated signal.
\subsection{State Update Equations} Let $P_1, \dots, P_{n-1}$ represent $n-1$ packed rotations, $h_t$ be a hidden state at time step $t$, $x_t$ be an input vector, and $b$ be a bias vector. Define the hidden state update equation as $$ h_t = abs(P_1 \cdots P_{n-1} h_{t-1} + W_xx_t + b)$$
If $W_x$ is square, it can also be represented as $n-1$ packed rotations $Q_1, \dots, Q_{n-1}$, resulting in the hidden state update equation $$ h_t = abs(P_1 \cdots P_{n-1} h_{t-1} + Q_1\cdots Q_{n-1}x_t + b)$$
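Putting the pieces together, a sketch of one such state update (all names are ours; arbitrary orthogonal matrices stand in for the products of packed rotations, which are themselves orthogonal):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8

def rand_orthogonal(n):
    # stand-in for a product of packed rotations (any orthogonal matrix works here)
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return q

P_list = [rand_orthogonal(n) for _ in range(3)]
Q_list = [rand_orthogonal(n) for _ in range(3)]
b = rng.standard_normal(n)

def dizzy_step(h, x):
    """h_t = abs(P_1 ... h_{t-1} + Q_1 ... x_t + b) for the square-W_x variant."""
    for P in P_list:
        h = P @ h
    for Q in Q_list:
        x = Q @ x
    return np.abs(h + x + b)

h1 = dizzy_step(rng.standard_normal(n), rng.standard_normal(n))
```

Each orthogonal factor and the absolute value preserve the norm of the backpropagated signal, which is the property proved in the next subsection.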
\subsection{Eliminating Vanishing and Exploding Gradients} Arjovsky et al. claim to provide the first proof of a network having no exploding gradient (through their uRNN) \cite{uRNN}. We show that DizzyRNN has no exploding gradient and, more importantly, no vanishing gradient.
Let a state update equation for an RNN be defined as $$h_t = f(W_h h_{t-1} + W_x x_t + b)$$ Let $L$ be a loss function over the RNN.
\begin{theorem} In a DizzyRNN cell, $\left \Vert \frac{\partial L}{\partial h_t} \right \Vert_2 = \left \Vert \frac{\partial L}{\partial h_{t-1}} \right \Vert_2 $ \end{theorem}
\begin{theorem} If $W_x$ is square, then $\left \Vert \frac{\partial L}{\partial x_t} \right \Vert_2 = \left \Vert \frac{\partial L}{\partial h_{t-1}} \right \Vert_2 $ \end{theorem}
Therefore, the network can propagate loss derivative signals through an arbitrarily large number of state updates and stacked cells. The proofs for Theorems $3$ and $4$ are deferred to the Appendix.
\section{Incorporating Singular Value Regularization} \subsection{Exposing singular values}
Let $y = Ax$ be an arbitrary linear transformation, where $A \in \mathbb{R}^{n \times n}$ is a matrix of rank $n$. Such a matrix can be represented by the DizzyRNN reparameterization through a construction $U\Sigma V^T$, where $U$ and $V$ are orthogonal matrices and $\Sigma$ is a diagonal matrix. This construction represents a singular value decomposition of a linear transformation; however, the diagonal elements of $\Sigma$ (the singular values) can be updated directly along with the rotation angles of $U$ and $V$. Additionally, the distribution of singular values can be penalized easily, regularizing the network while allowing full expressivity of linear transformations.
\subsection{Diagonal matrix}
A matrix-vector product $y = \Sigma x$ where $\Sigma \in \mathbb{R}^{n\times n}$ is a diagonal matrix can be represented as the element-wise vector product $y = \sigma \odot x$, where $\sigma$ is the vector of the diagonal elements of $\Sigma$. Let $L$ be a scalar function of $y$. Then \begin{align*}\frac{\partial L}{\partial x} &= \frac{\partial L}{\partial y}\frac{\partial y}{\partial x} = \sigma \odot \frac{\partial L}{\partial y}\\\frac{\partial L}{\partial \sigma} &= \frac{\partial L}{\partial y}\frac{\partial y}{\partial \sigma} = x \odot \frac{\partial L}{\partial y}\end{align*} Each computation can be performed in $O(n)$ time.
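A quick finite-difference check of these gradient formulas (our own sketch; we take the toy loss $L = \frac{1}{2}\Vert y\Vert_2^2$, so that $\partial L/\partial y = y$):

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, x = rng.standard_normal(5), rng.standard_normal(5)

def L(sig, xx):
    # toy scalar loss L = ||sig * xx||^2 / 2 for checking the gradient formulas
    y = sig * xx
    return 0.5 * np.sum(y ** 2)

dLdy = sigma * x        # equals y, since dL/dy = y for this loss
dLdx = sigma * dLdy     # formula: sigma (elementwise) dL/dy
dLds = x * dLdy         # formula: x (elementwise) dL/dy

eps = 1e-6
num_dLdx = np.array([
    (L(sigma, x + eps * np.eye(5)[i]) - L(sigma, x - eps * np.eye(5)[i])) / (2 * eps)
    for i in range(5)])
num_dLds = np.array([
    (L(sigma + eps * np.eye(5)[i], x) - L(sigma - eps * np.eye(5)[i], x)) / (2 * eps)
    for i in range(5)])
```

Both analytic gradients agree with the central differences to numerical precision.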
\subsection{Singular value regularization}
For a DizzyRNN, an additional term can be added to the loss function $L$ to penalize the distance of the singular values of each linear transformation from $1$. For each cell in the stack, let $\sigma$ denote the vector of all singular values of all linear transformations in the cell; the regularization term is then $\frac{1}{2}\lambda\Vert\sigma - e\Vert_2^2$, where $\lambda$ is a penalty factor and $e$ is the vector of all ones. The loss function can now be rewritten as $$L' = L + \frac{1}{2}\lambda\sum_{i=1}^{M}\Vert\sigma^{(i)}-e\Vert_2^2$$ for a DizzyRNN with a stack height of $M$ where $\sigma^{(i)}$ represents the vector of all singular values associated with the $i$-th cell in the stack. Note that setting the $\lambda$ hyperparameter to $0$ allows the singular values to grow or decay unbounded, and setting $\lambda$ to $\infty$ constrains each linear transformation to be orthogonal. Additionally, note that initializing the singular values of each linear transformation to $1$ is equivalent to an orthogonal initialization of the DizzyRNN.
\section{Experimental results}
We implemented DizzyRNN in Tensorflow and compared the performance of DizzyRNN with standard RNNs, Identity RNNs \cite{IRNN}, and Long Short-Term Memory networks (LSTM) \cite{LSTM}. We evaluated each network on the copy problem described in \cite{uRNN}. We modify the loss function in this problem to only quantify error on the copied portion of the output, making our baseline accuracy $10\%$ (guessing at random).
The copy problem for our experiments consisted of memorizing a sequence of 10 one-hot vectors of length 10 and outputting the same sequence (via softmax) upon seeing a delimiter after a time lag of 90 steps.
We use a stack size of $1$ and use only a subset of the total $n-1$ possible packed rotations for every orthogonal matrix.
All experiments consist of epochs with 10 batches of size 100, sampled directly from the underlying distribution.
\begin{figure}
\caption{Hidden state size of 128 across models. DizzyRNN has 10 packed rotations.}
\end{figure}
\begin{figure}
\caption{Rotation here refers to a packed rotation. State size is fixed at 128.}
\end{figure}
DizzyRNN manages to reach near perfect accuracy in under 100 epochs while other models either fail to break past the baseline or plateau at a low test accuracy. Note that 100 epochs corresponds to 100000 sampled training sequences.
\section{Conclusion}
DizzyRNNs prove to be a promising method of eliminating the vanishing and exploding gradient problems. The key is using pure rotations in combination with norm-preserving non-linearities to force the norm of the backpropagated gradient at each timestep to remain fixed. Surprisingly, at least for the copy problem, restricting weight matrices to pure rotations actually improves model accuracy. This suggests that gradient information is more valuable than model expressiveness in this domain.
Further experimentation with sampling packed rotations will be a topic of future work. Additionally, we would like to augment other state-of-the-art networks with Dizzy reparameterizations, such as Recurrent Highway Networks \cite{RHN}.
\section*{Appendix}
\setcounter{corollary}{0} \setcounter{theorem}{0}
Let $y = Ax$ be an arbitrary linear transformation, where $A \in \mathbb{R}^{m \times n}$ is a matrix of rank $r$.
\begin{theorem}
The singular value decomposition of $A$ is expressed as $A = U\Sigma V^T = \sum_{i=1}^r \sigma_iu_iv_i^T$, for orthogonal $U \in \mathbb{R}^{m \times r}$ and $V \in \mathbb{R}^{n \times r}$, and diagonal $\Sigma \in \mathbb{R}^{r \times r}$. $u_i$ and $v_i$ are the $i$-th columns of $U$ and $V$, respectively, and $\sigma_i$ is the $i$-th diagonal element of $\Sigma$. The columns of $U$ are the left singular vectors, the columns of $V$ are the right singular vectors, and the diagonal elements of $\Sigma$ are the singular values.
\end{theorem}
\begin{corollary}
Let $\sigma_{min}$ and $\sigma_{max}$ be the minimum and maximum singular values of $A$, respectively. Then $\sigma_{min}\Vert x \Vert_2 \leq \Vert y \Vert_2 \leq \sigma_{max}\Vert x \Vert_2$.
\end{corollary}
\begin{proof}
Express $y$ as $y = \sum_{i=1}^r \sigma_iu_iv_i^Tx$, i.e. a linear combination of the orthonormal set of left singular vectors. The $i$-th coefficient of the combination is the scalar $\sigma_iv_i^Tx$. Since the set of left singular vectors is orthonormal, the $l_2$-norm of $y$ is the Pythagorean sum of the coefficients of the linear combination, i.e., $\Vert y\Vert_2 = \left(\sum_{i=1}^r\left(\sigma_iv_i^Tx\right)^2\right)^{1/2}$. Note that the term $v_i^Tx$ is the magnitude of the projection of $x$ onto $v_i$. Without loss of generality, fix the norm of $x$ to $1$. Let $v_{min}$ and $v_{max}$ be the right singular vectors corresponding to $\sigma_{min}$ and $\sigma_{max}$, respectively. Then, the norm of $y$ is minimized for $x$ parallel to $v_{min}$, and maximized for $x$ parallel to $v_{max}$. The corresponding norms of $y$ are $\sigma_{min}v_{min}^Tv_{min}$ and $\sigma_{max}v_{max}^Tv_{max}$. Since the set of right singular vectors is orthonormal, $v_{min}^Tv_{min} = v_{max}^Tv_{max} = 1$, and the corresponding norms of $y$ are $\sigma_{min}$ and $\sigma_{max}$.
\end{proof}
\begin{corollary}
Let $\sigma_{min}$ and $\sigma_{max}$ be the minimum and maximum singular values of $A$, respectively. Then $\sigma_{min}$ and $\sigma_{max}$ are also the minimum and maximum singular values of $A^T$.
\end{corollary}
\begin{proof}
Since $A = U\Sigma V^T$ where $U$ and $V$ are orthogonal, $A^T = V\Sigma U^T$. By the same construction as in the previous corollary, if $y = A^Tx$, then $\Vert y\Vert_2 = \left(\sum_{i=1}^r\left(\sigma_iu_i^Tx\right)^2\right)^{1/2}$. For all $x$ such that $\Vert x\Vert_2 = 1$, the quantity is minimized and maximized for $x = u_{min}$ and $x = u_{max}$, respectively, where $u_{min}$ and $u_{max}$ are the left singular vectors corresponding to $\sigma_{min}$ and $\sigma_{max}$. The corresponding minimum and maximum norms are $\sigma_{min}$ and $\sigma_{max}$.
\end{proof}
Let a state update equation for an RNN be defined as $$h_t = f(W_h h_{t-1} + W_x x_t + b)$$ Let $L$ be a loss function over the RNN.
\setcounter{theorem}{2}
\begin{theorem} In a DizzyRNN cell, $\left \Vert \frac{\partial L}{\partial h_t} \right \Vert_2 = \left \Vert \frac{\partial L}{\partial h_{t-1}} \right \Vert_2 $ \end{theorem}
\begin{proof}
Let $y = W_h h_{t-1} + W_x x_t + b$ and express $$\frac{\partial L}{\partial h_{t-1}} = \frac{\partial L}{\partial h_t}\frac{\partial h_t}{\partial y} \frac{\partial y}{\partial h_{t-1}} = W_h^T \left (f'(y) \odot \frac{\partial L}{\partial {h_t}}\right )$$ The $l_2$-norms of each side of this equation are $$ \left \Vert \frac{\partial L}{\partial h_{t-1}} \right \Vert_2= \left \Vert W_h^T \left (f'(y) \odot \frac{\partial L}{\partial {h_t}}\right) \right \Vert_2$$ In a DizzyRNN, $f$ is the absolute value function, thus the elements of $f'(y)$ are $1$ or $-1$. $W_h$ is orthogonal since it is defined by a composition of Givens rotations. Neither $f'(y)$ nor $W_h^T$ scales the norm of the vector $\frac{\partial L}{\partial h_t}$, thus $$ \left \Vert\frac{\partial L}{\partial h_{t-1}} \right \Vert_2 = \left \Vert \frac{\partial L}{\partial {h_t}} \right \Vert_2$$
\end{proof}
If instead $f$ is the ReLU non-linearity, then $\left \Vert \frac{\partial L}{\partial h_{t-1}} \right \Vert_2$ is only equal to $\left \Vert \frac{\partial L}{\partial {h_t}} \right \Vert_2$ in the case where all values in $y$ are non-negative, resulting in a diminishing gradient in all other cases.
\begin{theorem} If $W_x$ is square, then $\left \Vert \frac{\partial L}{\partial x_t} \right \Vert_2 = \left \Vert \frac{\partial L}{\partial h_{t-1}} \right \Vert_2 $ \end{theorem}
\begin{proof} By symmetry with the proof of Theorem 3, the norms are shown to be equal. \end{proof}
\begin{figure}
\caption{Variance in convergence rates with constant hyperparameters. State size is 128 with 10 packed rotations.}
\end{figure}
\begin{figure}
\caption{The number of packed rotations is fixed at 5. State sizes are 16, 32, 64, and 128}
\end{figure}
\begin{figure}
\caption{Several regularized runs.}
\end{figure}
\begin{figure}
\caption{Several regularized runs.}
\end{figure}
\end{document}
\begin{document}
\title{Optimal interferometry for Bell-nonclassicality induced by a vacuum-one-photon qubit}
\author{Tamoghna Das} \author{Marcin Karczewski} \author{Antonio Mandarino} \author{Marcin Markiewicz} \author{Marek \.Zukowski} \affiliation{International Centre for Theory of Quantum Technologies, University of Gda\'nsk, 80-308 Gda\'nsk, Poland}
\begin{abstract} Bell nonclassicality of a single photon superposition in two modes, often referred to as `nonlocality of a single photon', is one of the most striking nonclassical phenomena discussed in the context of foundations of quantum physics. Here we show how to robustly violate local realism within the weak-field homodyne measurement scheme for \textit{any} superposition of one photon with vacuum. Our modification of the previously proposed setups involves tunable beamsplitters at the measurement stations, and the local oscillator fields significantly varying between the settings, optimally being {\it on} or {\it off}. As photon number resolving measurements are now feasible, we advocate for the use of the Clauser-Horne Bell inequalities for detection events using precisely defined numbers of photons. We find a condition for optimal measurement settings for the maximal violation of the Clauser-Horne inequality with weak-field homodyne detection, which states that the reflectivity of the local beamsplitter must be equal to the strength of the local oscillator field. We show that this condition holds not only for the vacuum-one-photon qubit input state, but also for the superposition of a photon pair with vacuum, which suggests its generality as a property of weak-field homodyne detection with photon-number resolution. Our findings suggest a possible path to employ such scenarios in device-independent quantum protocols.
\end{abstract}
\maketitle
\section{Introduction}
Security of many quantum information protocols relies on a violation of local realism (often imprecisely referred to as `nonlocality'). In the standard approach, it results from the incompatibility of measurements and the entanglement between at least two particles \cite{HHH09,ZukowskiRMP,Brunner14}. However, the idea of violating local realism with just a single particle has also been extensively investigated \cite{TWC91, Hardy94, Vaidman95, Hessmo04, Enk05, Dunningham07, Heaney11, Jones11, Brask13, Morin13, Fuwa2015, Lee17,1stPaper, 2ndPaper}.
First experimental proposals aimed at demonstrating the ``nonlocality of a single photon'' \cite{oliver1989, TWC91} employed weak balanced homodyne measurements, with local oscillators of fixed strength whose phases defined the local settings. After a long debate, these schemes were recently decisively rejected, as a local hidden variable model replicating \textit{all} their outcome probabilities exists \cite{1stPaper, 2ndPaper}.
A modification of these schemes, put forward by Hardy in \cite{Hardy94}, showed genuine violation of local realism by the state \begin{equation} \label{eq:InState}
\ket{\psi(p)} = \sqrt{1-p}\ket{00}_{b_1,b_2} + \sqrt{\frac{p}{2}} \left(\ket{01}_{b_1,b_2} + \ket{10}_{b_1,b_2} \right), \end{equation} for $0<p\leq1$, presented here without irrelevant phase factors. In the formula, e.g. $\ket{01}_{b_1, b_2}$ denotes single excitation of the field mode $b_2$ and no excitation of the mode $b_1$. This state is obtained by feeding one input of a balanced beamsplitter with a vacuum-one-photon qubit (the state given by the superposition of the vacuum and a single photon) and the other one with the vacuum, as presented in Fig. \ref{mainSetup}. Aside from the initial state, Hardy's scheme differs from the ones of \cite{oliver1989, TWC91} in turning local oscillators \textit{on} or \textit{off}, depending on the measurement settings.
The violation in \cite{Hardy94} is presented in the form of a paradox and relies on precisely tuned local oscillators that interfere destructively with the initial state, completely erasing specific terms appearing after $BS_1$ and $BS_2$ (see Fig. \ref{mainSetup}). Such reasoning, now customarily called `Hardy's paradox', is very elegant but does not necessarily lead to the most robust violations of Bell inequalities.
This begs the question of whether further modifications of the setup proposed by Hardy could improve the predicted Bell violations for $\ket{\psi(p)}$, especially for $p=1$, where the original scheme falls short. In this work, we investigate all-optical schemes involving only local oscillators, tunable beamsplitters, and photon number resolving detection. A modification of the original Hardy's setup allows us to violate a Clauser-Horne (CH) Bell inequality
for all non-zero values of $p$. We also show that the original Hardy's approach works {\it effectively} only for non-trivial superpositions of a single-photon state, i.e.
for a non-negligible amplitude of the vacuum component. If its probabilistic weight is around $8\%$ or less, the Hardy approach leads to experimentally irrelevant, minute violations of local realism. We show that tunable beamsplitters are essential to violate local realism robustly, and the local oscillator fields must significantly vary between the settings. The optimal arrangement is such that the local oscillators are switched \textit{off} in the case of one of the local measurement settings.
As photon number resolving measurements are now feasible, see, e.g. \cite{Walmsley14, Walmsley20}, we use Clauser-Horne Bell inequalities for detection events using precisely defined numbers of photons. This sets a possible path to apply such scenarios in device-independent quantum protocols. Finally, we obtain an interesting interferometric condition relating the power of the local oscillator and reflectivity of the local beamsplitter (they must be equal) which allows the most robust violation of local realism for each value of $p$.
Moreover, the single photon delocalized into two modes (the $p=1$ case in formula (\ref{eq:InState})) has been used to build reliable quantum repeater networks \cite{Repeater1, Repeater2} that require fewer resources and are less sensitive to sources of experimental inefficiency than other platforms \cite{REV-Repeater}. We therefore believe that our work constitutes a basic ingredient for devising quantum information protocols based on Bell-nonclassicality with minimal physical resources, such as device-independent quantum key distribution or self-testing.
\section{Preliminaries} \begin{figure}
\caption{
Schematic representation of the general homodyne setup for testing violation of local realism by the vacuum-one-photon state. The original proposal of Hardy results from fixing $BS_i$ ($i=1,2$) as balanced beamsplitters for one measurement setting and removing them for the other. PS is a phase shifter that compensates the $\pi/2$ phase acquired by the reflected beam after BS$_0$.}
\label{mainSetup}
\end{figure}
\subsection{Experimental setup} \label{exp_set} We consider here an experimental scheme generalizing the setups of \cite{Hardy94, TWC91} by introducing tunable beamsplitters at the detection stations. This modification allows us to recover the original proposals as limiting cases, but also to go beyond them and determine a procedure that optimally detects Bell nonclassicality. The measurement scheme remains all-optical: we use only passive optical elements and coherent local beams as auxiliary systems to implement the photon-number-resolving weak-field homodyne detection. The scheme uses three beamsplitters BS$_j$ for $j=0,1,2$, see Fig. \ref{mainSetup}. The first beamsplitter, BS$_0$, is a 50-50 one that serves for the preparation of the input state. The remaining two, BS$_1$ and BS$_2$, are tunable beamsplitters used by two spatially separated parties, Alice and Bob, in their local homodyne photon-number resolving measurements. Note that in the proposals of \cite{TWC91} and \cite{Hardy94} the beamsplitters were symmetric 50-50 ones. Such beamsplitters were also used in the experiment of \cite{Walmsley14}.
{\bf State preparation --} Consider two input modes $s$ and $t$ of BS$_0$. The input mode $s$ is in a vacuum--single-photon qubit state, namely it is a superposition \begin{equation} \label{eq:input}
\sqrt{1-p} |0\rangle_s + \sqrt{p}|1\rangle_s, \end{equation} where $0 < p \leq 1$. The input mode $t$ is in the vacuum state. After passing through BS$_0$ the output state in modes $b_1$ and $b_2$ is given by (\ref{eq:InState}), which can also be written in an equivalent form of: \begin{equation} \label{HardyPsi}
\ket{\psi(p)} = \left( \sqrt{1-p} + \sqrt{\frac{p}{2} } \left(\hat b_1^\dagger + \hat b_2^\dagger \right)\right)|\Omega\rangle, \end{equation}
where $\hat b_j^\dagger$ is the creation operator in mode $b_j$, for $j = 1,2$, and $|\Omega\rangle$ is the vacuum state. For the sake of having a symmetric initial state, we choose the mode transformation of the beamsplitter $BS_0$ to introduce no relative phase shift between the reflected and transmitted beams for radiation entering via input $s$. Fig.~\ref{mainSetup} shows a symmetric 50-50 beamsplitter whose $\pi/2$ phase jump at reflection is compensated by a suitable phase shifter, PS. The case of an unbalanced BS$_0$ will be discussed in Section \ref{sec:generalized}.
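The state-preparation step can be verified with a minimal numerical sketch (Python; not part of the paper). The map $\hat s^\dagger \to (\hat b_1^\dagger + \hat b_2^\dagger)/\sqrt{2}$ is the phase-compensated one fixed above; the companion map $\hat t^\dagger \to (\hat b_1^\dagger - \hat b_2^\dagger)/\sqrt{2}$ is an assumption consistent with unitarity (mode $t$ carries only vacuum, so this choice does not affect the output):

```python
import numpy as np

# Phase-compensated balanced BS_0: s^dag -> (b1^dag + b2^dag)/sqrt(2).
# The companion map t^dag -> (b1^dag - b2^dag)/sqrt(2) is one choice
# consistent with unitarity (mode t is in the vacuum anyway).
U0 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

def output_state(p):
    """Fock amplitudes over {|00>, |01>, |10>}_{b1,b2} of the BS_0
    output for the input sqrt(1-p)|0>_s + sqrt(p)|1>_s, vacuum in t."""
    vac = np.sqrt(1 - p)                 # vacuum component is unaffected
    c_b1, c_b2 = np.sqrt(p) * U0[:, 0]   # the photon enters via mode s
    return np.array([vac, c_b2, c_b1])

# For p = 1/2 the amplitudes are (sqrt(1/2), 1/2, 1/2),
# reproducing the state |psi(p)> of the preparation formula.
psi = output_state(0.5)
```

The output amplitudes match $\sqrt{1-p}$ for the vacuum and $\sqrt{p/2}$ for each one-photon term, for any $p$.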
{\bf Measurement stations --} The optical field in state (\ref{HardyPsi}) is sent to two spacelike-separated observers Alice and Bob. The local measurement stations use tunable beamsplitters $BS_1$ and $BS_2$, and auxiliary weak coherent local oscillator fields $\ket{\alpha_j}_{a_j}$ with amplitudes $\alpha_j$ and phases $\phi_j$, fed into the input ports $a_j$, where $j =1,2$. The moduli of the amplitudes of the local oscillators reaching the beamsplitters are assumed to be tunable, as in the case of \cite{Hardy94}. In Ref. \cite{TWC91} they were constant.
In the proposals of Refs. \cite{TWC91} and \cite{Hardy94} the final beamsplitters were 50-50 ones. Here, we consider tunable two-input-two-output beamsplitters. They can be realized with, e.g., Mach-Zehnder interferometers.
We assume that the tunable beamsplitters perform unitary transformations $U_{BSj}$ on the input modes, given by: \begin{equation} \begin{pmatrix} \label{SU2trans2} \hat c_j \\ \hat d_j \end{pmatrix} = U_{BS_j}(\chi_j) \begin{pmatrix} \hat a_j \\ \hat b_j \end{pmatrix}, \end{equation} with the unitary matrix:
\begin{equation} \label{SU2Un} U_{BS}(\chi) = \begin{pmatrix} \cos \chi & i\sin \chi \\
i\sin \chi & \cos \chi \end{pmatrix}, \end{equation} where $\cos \chi = \sqrt{T}$ is the transmission amplitude of the beamsplitter, see e.g. \cite{bachor2019guide, Pan2012}. Then, detectors in modes $c_j$ and $d_j$ register numbers of photons.
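A quick sanity check of this parametrization (illustrative Python sketch, not from the paper): $U_{BS}(\chi)$ is unitary, the transmittivity and reflectivity satisfy $T+R=1$, and $\chi=0$ gives the identity, i.e., no beamsplitter at all:

```python
import numpy as np

def U_BS(chi):
    """Tunable beamsplitter unitary; cos(chi) = sqrt(T)."""
    return np.array([[np.cos(chi), 1j * np.sin(chi)],
                     [1j * np.sin(chi), np.cos(chi)]])

chi = 0.3
U = U_BS(chi)
T, R = np.cos(chi) ** 2, np.sin(chi) ** 2
assert np.allclose(U.conj().T @ U, np.eye(2))   # unitarity
assert np.isclose(T + R, 1.0)                   # T + R = 1
assert np.allclose(U_BS(0.0), np.eye(2))        # chi = 0: no beamsplitter
```

The last line corresponds to the \textit{off} setting introduced below, for which $T_j=1$.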
In the sequel, special local settings will play a crucial role. Those with $T_j=1$ and $\alpha_j=0$ will be called the \textit{off} settings. Note that they are equivalent to having no beamsplitter and the local oscillator switched off. The remaining settings are called \textit{on}.
\subsection{Clauser-Horne inequality based on single photon detection} \label{detection}
The Clauser-Horne \cite{CH74} inequality reads: \begin{eqnarray} \label{CHin}
-1~\leq~P(A,B)+P(A,B')+P(A',B) -P(A',B') \nonumber \\
-P(A)-P(B)~=~CH~\leq~0, ~~~
\end{eqnarray} where $A$ and $A'$ for Alice, $B$ and $B'$ for Bob denote some local events for different local settings, whereas $P(\cdot)$ and $P(\cdot,\cdot)$ denote a local probability and a joint probability respectively.
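The bounds in (\ref{CHin}) can be confirmed by enumerating deterministic local-realistic assignments, in which each event either occurs or not (a standard check, sketched here in Python; not part of the paper):

```python
from itertools import product

def ch_value(a, ap, b, bp):
    """CH combination for deterministic local values in {0, 1}."""
    return a * b + a * bp + ap * b - ap * bp - a - b

# Every deterministic assignment yields -1 or 0; mixtures of such
# assignments (local hidden variable models) thus obey -1 <= CH <= 0.
values = {ch_value(*bits) for bits in product((0, 1), repeat=4)}
assert values == {-1, 0}
```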
We propose, inspired by \cite{Hardy94}, that Alice's detection events $A$ and $A'$ are in both cases a single photon in mode $d_1$ and no photon in mode $c_1$ for her two possible measurement choices. Bob's events $B$ and $B'$ are defined analogously. Note that such measurement results are linked with \textit{one-dimensional projectors} onto a specific Fock state.
The measurement settings, ``primed'' and ``unprimed'', are given by the amplitudes $\alpha_j^{(')}$ and phases $\phi_j^{(')}$ of the coherent local oscillator fields, and the reflectivities $R_j^{(')} = \sin^2 \chi_j^{(')}$ of the local beamsplitters ($j = 1,2$ for Alice and Bob, respectively).
\begin{figure}
\caption{
Maximal violation of the right-hand side of the CH inequality as a function of the input state parameter $p$. The red solid curve corresponds to the maximum for completely general measurement settings (any phases, any local oscillator amplitudes, any transmittivities of the final beamsplitters, for each possible local setting). The purple dashed curve represents the CH value for Hardy's settings of \cite{Hardy94}.}
\label{CH_max}
\end{figure}
\section{Search for CH inequality violations: scan over arbitrary pairs of the settings}
We consider general measurement settings, of the above kind, for both Alice and Bob. We do not fix any of the setting parameters: the beamsplitter reflectivities $R_j = \sin^2 \chi_j$, the amplitudes of the coherent states $\alpha_j$, and their phases $\phi_j$, neither for the unprimed settings nor for the primed ones (on both sides).
The joint probability of detecting a single photon in mode $d_j$ and no photon in mode $c_j$, for $j = 1,2$, with the unprimed settings, is given by: \begin{eqnarray}
&&P(A,B) = \nonumber \\&& \hspace{-1em}|{}_{c_1,d_1,c_2,d_2}\bra{0,1,0,1}(U_{BS_1})_{a_1,b_1}(U_{BS_2})_{a_2,b_2}\ket{\Psi_{p}(\alpha_1,\alpha_2)}|^2, \nonumber \\ \end{eqnarray} where $\ket{\Psi_{p}(\alpha_1,\alpha_2)}$ is the full input state (\ref{eq:InState}), augmented with the auxiliary coherent fields. It is given by: \begin{eqnarray}
&& \ket{\Psi_{p}(\alpha_1,\alpha_2)} \nonumber \\
&& \hspace{-0.5em} = \ket{\alpha_1}_{a_1}\left(\sqrt{1-p}\ket{00} + \sqrt{\frac p2} (\ket{01} + \ket{10} )\right)_{b_1,b_2} \ket{\alpha_2}_{a_2}. ~~~~ \end{eqnarray} The joint probability reads: \begin{eqnarray} \label{Pab}
&& P(A,B) = e^{-\alpha_1^2 -\alpha_2^2}\Big[ (1 -p) R_1 R_2 \alpha_1^2 \alpha_2^2 \nonumber \\
&& + \frac{p}{2}\Big( \alpha_1^2 R_1 (1 - R_2) + (1 - R_1) \alpha_2^2 R_2 \nonumber \\ && + 2 \alpha_1 \alpha_2 \sqrt{R_1R_2(1 - R_1) (1 - R_2)} \cos(\phi_1 - \phi_2) \Big) \nonumber \\ && - \alpha_1 \alpha_2 \sqrt{2 p (1 - p) R_1R_2} \times \nonumber \\ && \left(\sqrt{(1 - R_1)R_2} \alpha_2 \sin \phi_1 + \sqrt{R_1(1 - R_2)} \alpha_1 \sin \phi_2 \right) \Big].~~~ \end{eqnarray}
We have used in the formula the reflectivities $R_j=\sin^2{\chi_j}$, as it turns out that in some calculations this is a better choice. Probabilities $P(A,B'), P(A',B)$ and $P(A',B')$ can be obtained from $P(A,B)$ by replacing the unprimed parameters by primed ones for this general set of measurements.
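Eq. (\ref{Pab}) can be transcribed directly into code. The sketch below (Python, illustrative; not part of the paper) also checks that switching the second station off ($R_2=0$, $\alpha_2=0$) reproduces the closed form $P(A,B') = \frac{p}{2} R_1 \alpha_1^2 e^{-\alpha_1^2}$ appearing in the discussion of the optimal settings:

```python
import numpy as np

def P_AB(p, a1, a2, phi1, phi2, R1, R2):
    """Joint probability of Eq. (Pab): one photon in each d_j, none in c_j.
    a1, a2 are the local oscillator amplitudes alpha_1, alpha_2."""
    T1, T2 = 1 - R1, 1 - R2
    term = (1 - p) * R1 * R2 * a1**2 * a2**2
    term += (p / 2) * (a1**2 * R1 * T2 + T1 * a2**2 * R2
                       + 2 * a1 * a2 * np.sqrt(R1 * R2 * T1 * T2)
                       * np.cos(phi1 - phi2))
    term -= (a1 * a2 * np.sqrt(2 * p * (1 - p) * R1 * R2)
             * (np.sqrt(T1 * R2) * a2 * np.sin(phi1)
                + np.sqrt(R1 * T2) * a1 * np.sin(phi2)))
    return np.exp(-a1**2 - a2**2) * term

# Party 2 "off" (R2 = 0, alpha2 = 0): only the p/2 * a1^2 R1 term survives.
p, a1, R1 = 0.7, 0.4, 0.35
off2 = P_AB(p, a1, 0.0, 0.1, 0.0, R1, 0.0)
assert np.isclose(off2, (p / 2) * R1 * a1**2 * np.exp(-a1**2))
```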
\begin{figure}
\caption{
Plot of the optimal coherent state strength $\alpha_0^2$, which maximizes the CH expression given in (\ref{CHin}), for completely general measurement settings, as a function of $p$, depicted as a green line.
We compare our result with the coherent state strength $\alpha^2 = \frac{p}{2(1-p)}$ (Hardy's measurement scheme), which is plotted by the dashed purple curve.
We have found that, $\alpha^2$ of Hardy increases much faster than the optimal $\alpha_0^2$, and it is no longer in a weak homodyne regime for $p \rightarrow 1$. }
\label{Opt_alpha}
\end{figure}
The local probabilities are: \begin{eqnarray}\label{Pa} P(A) &=& \frac{1}{2} e^{-\alpha_1 ^2}\Big(p (1 - R_1) + \alpha_1 ^2 (2-p) R_1 \nonumber \\ && - 2\sqrt{2} \alpha_1 \sqrt{p(1-p)R_1(1-R_1)} \sin (\phi_1) \Big), \end{eqnarray}
\begin{eqnarray} \label{Pb} P(B) &=& \frac{1}{2} e^{-\alpha_2 ^2}\Big(p (1 - R_2) + \alpha_2 ^2 (2-p) R_2 \nonumber \\ && - 2\sqrt{2} \alpha_2 \sqrt{p(1-p)R_2(1-R_2)} \sin (\phi_2) \Big). \nonumber \\ \end{eqnarray}
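The local probabilities can be checked in the same way (Python sketch, illustrative; not part of the paper). In particular, for the \textit{off} setting ($R_1=0$, $\alpha_1=0$) Eq. (\ref{Pa}) reduces to $P(A)=p/2$, which equals $1/2$ for the single-photon input $p=1$:

```python
import numpy as np

def P_A(p, a, phi, R):
    """Local probability of Eq. (Pa): one photon in d_1, none in c_1.
    a is the local oscillator amplitude alpha_1."""
    return 0.5 * np.exp(-a**2) * (
        p * (1 - R) + a**2 * (2 - p) * R
        - 2 * np.sqrt(2) * a * np.sqrt(p * (1 - p) * R * (1 - R))
        * np.sin(phi))

# "Off" setting (R = 0, alpha = 0): P(A) reduces to p/2.
assert np.isclose(P_A(1.0, 0.0, 0.0, 0.0), 0.5)
assert np.isclose(P_A(0.6, 0.0, 0.0, 0.0), 0.3)
```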
The optimization over all possible local parameters $\{\alpha_j, \phi_j, \chi_j\}$ for $j = 1,2$ and for primed and unprimed indices yields $CH_{\max}$ as a function of $p$, which is depicted in Fig. \ref{CH_max}. The measurement settings which give the maximum of the CH expression are the same for both $A$ and $B$ and for $A'$ and $B'$. The primed settings turn out to be the \textit{off} settings, with $T'_j = 1$ and $\alpha'_j = 0$, for both $j = 1,2$. Therefore the optimal settings indeed follow the \textit{on}/\textit{off} measurement scheme. Note that if primed events represent the {\it off} settings, we have \begin{eqnarray} P(A,B') = \frac p2 R_1 \alpha_1^2 e^{-\alpha_1^2}, \\ P(A',B) = \frac p2 R_2 \alpha_2^2 e^{-\alpha_2^2}, \end{eqnarray} and $P(A',B') = 0$.
There is an interesting and puzzling relation between the optimal transmissivity of the final beamsplitters and the intensity of the local coherent beams,
which holds for the maximal violations. Numerical results show that the optimal settings for CH inequality violation satisfy $T_j + \alpha_j^2 = 1$, or equivalently \begin{equation} R_j=\alpha_j^2, \label{OptCondR} \end{equation} for both $j = 1,2$, where $R_j=1-T_j$ is the reflectivity of the beamsplitter. This holds for the entire range of $p$, except for $p \approx 0$. However, $CH_{\max}$ in the region $p \approx 0$ is too small for the numerical results to be reliable. One can thus conjecture that the optimality condition $R_j=\alpha_j^2$ holds for all $p$. The discussion which follows strongly supports this conjecture, and some analytical results presented further also lead to it.
$R_j=\alpha_j^2$ seems to be a general optimality-of-settings pre-condition for ``single photon'' Bell experiments involving \textit{on}/\textit{off} weak homodyne measurements and single photon detection events. Although this relation does not look like a mere coincidence, we must admit that we were not able to find an underlying physical reason.
As we shall see further, we have found this condition also to hold in the case of another experiment, which involves homodyne detection.
One additional comment is necessary here. We have fixed our single-photon detection events to one photon in mode $d_j$ and no photon in mode $c_j$, consistently with Hardy's original formulation. However, we have verified that if we switch the modes in the above definition, i.e., detect a single photon in mode $c_j$ and no photons in mode $d_j$, the optimal CH values as a function of $p$ remain the same, whereas the optimality condition \eqref{OptCondR} changes into:
\begin{equation}
T_j=\alpha_j^2. \label{OptCondT}
\end{equation}
This fact can be intuitively understood by noticing that for the switched single-photon detection scheme the condition \eqref{OptCondT} guarantees that the intensity of the auxiliary beam reaching the detector responsible for registering a single photon (here it is $c_j$) is the same as in the original scheme (for detector $d_j$ registering single photon). Therefore the two choices of single-photon detection scheme seem to be effectively symmetric. For clarity in the remaining part of the work we always keep the original convention for the detection events.
The optimal intensity of the coherent beam, denoted $\alpha_0^2$, is the same for both Alice and Bob. It is plotted by the solid green curve in Fig. \ref{Opt_alpha} as a function of $p$, the fidelity of the input state (\ref{eq:input}) with the single-photon state. Curve fitting yields an approximate functional dependence of $\alpha_0^2$ on $p$:
\begin{equation}\label{eq:optalphamax}
\alpha_0^2(p) \approx 1- \left(1-p^{0.39634}\right)^{0.453581}. \end{equation}
In our fully unconstrained numerical analysis, some of the optimal parameters match the non-optimized measurement settings of Hardy's scheme \cite{Hardy94}. There, full \textit{on}/\textit{off} settings are implemented by using a balanced beamsplitter for both A and B (\textit{on} settings) or by removing the beamsplitter and detecting one photon in mode $b_j$, $j = 1,2$ (\textit{off} settings). A detailed analysis of Hardy's reasoning is given in Appendix \ref{s:hardyexp}. For comparison, we report here the value of the CH inequality violation associated with Hardy's paradox of Ref. \cite{Hardy94}: \begin{equation}\label{CH_Hardy}
CH_{Hardy} = \frac{e^{-\frac{p}{1-p}} p^2}{16 (1-p)},
\end{equation} which is plotted in Fig. \ref{CH_max}, as a function of $p$, by the dashed purple line.
The violation of the inequality for $p>0.922$ is prohibitively small, $CH_{Hardy} < 10^{-6}$; it is therefore of no significance in experiment, or for any experimentally feasible protocol requiring a violation of local realism for its certification.
Thus, from the experimental point of view, the paradox of Hardy pertains only to situations in which there is a significant vacuum component in the signal beam $s$, Fig.~\ref{mainSetup}.
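The rapid collapse of $CH_{Hardy}$ can be seen by evaluating Eq. (\ref{CH_Hardy}) directly (illustrative Python sketch; not part of the paper):

```python
import numpy as np

def ch_hardy(p):
    """CH value reached by Hardy's settings, Eq. (CH_Hardy), 0 < p < 1."""
    return np.exp(-p / (1 - p)) * p**2 / (16 * (1 - p))

# A sizeable vacuum component still gives a workable violation ...
assert ch_hardy(0.5) > 1e-2
# ... but the exponential factor kills it as p -> 1.
assert ch_hardy(0.95) < 1e-6
```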
\begin{figure}
\caption{
Plot of the minimum $CH_{\min}$ of the CH expression as a function of $p$, which reveals the violation of the CH inequality on the negative side (orange dotted curve). We obtain a measurable violation of the CH inequality near the single-photon limit, for $0.989 \leq p \leq 1$; hence the Bell nonclassicality of (\ref{eq:InState}) is experimentally detectable. $CH_{\min}$ is not a monotonic function of $p$ and reaches its minimum at $p = 0.999958$; its neighbourhood is shown magnified. The variation of the optimal coherent state strength $\alpha_0^2$ is plotted in the bottom left inset; it always satisfies $\alpha_0^2 = R_0$, where $R_0$ is the optimal reflectivity of the local beamsplitter.}
\label{CH_ours}
\end{figure}
Moreover, the optimal intensity of the local oscillator, which gives rise to $CH_{\max}$ (solid green curve in Fig. \ref{Opt_alpha}), lies entirely within the weak homodyne region. This is in contrast to the intensity of the coherent field used by Hardy \cite{Hardy94}, $\alpha_{Hardy}^2 = \frac{p}{2(1-p)}$, which diverges as $p \rightarrow 1$. Hence, Hardy's analysis for $p\approx 1$ is no longer in the regime of weak homodyning. \\
\subsection{No-vacuum and almost no-vacuum component input states ($p\approx1$)}\label{sec:peq1}
It turns out that the CH inequality (\ref{CHin}) is non-negligibly violated on the left-hand side for input states with a small vacuum component, $0.989 < p \leq 1$. This violation is most robust close to $p=1$, yet, surprisingly, not exactly at $p=1$, where it is large but not maximal. Its numerically obtained minimal values, $CH_{\min}$, are plotted in Fig. \ref{CH_ours}.
Due to the complexity of the CH expression (\ref{CHin}) it is hard to minimize it analytically. However, our numerical results point at several properties of the optimal settings. Our unconstrained search for the minimal value of the CH expression always leads to the symmetric conditions: $\alpha_1=\alpha_2=0$ and $\chi_1=\chi_2=0$ (\textit{off} settings), $\alpha'_1=\alpha'_2=\alpha$, $\chi'_1=\chi'_2 = \chi$, and $\phi'_1=\phi'_2=\frac{\pi}{2}$ (\textit{on} settings). Note that the optimal settings in this case also follow the \textit{on}/\textit{off} scheme, however now the primed settings are \textit{on}, whereas the unprimed -- \textit{off}.
Taking into account the above symmetry of the optimal settings for the single-photon input state ($p=1$), we get the following functional dependence of the value of the CH expression: \begin{equation} CH_{p=1} =-1+ \left[e^{-\alpha ^2} \alpha ^2 -2 e^{-2 \alpha ^2} \alpha ^2 (1-R)\right] R. \end{equation}
Note that the $-1$ term is due to the fact that for the \textit{off} settings $P(A)=P(B)=\frac{1}{2}.$ In order to find the minimal value, we impose the stationarity conditions $\frac{\partial }{\partial R }CH_{p=1}=0$ and $\frac{\partial }{\partial \alpha }CH_{p=1}=0$. The two equations yield $R=\alpha^2$ as a necessary condition for the minimum, together with the equation $e^{\alpha ^2} = 2(1 - 2 \alpha ^2)$ for the optimal $\alpha$; the magnitude of the violation is then $[e^{-\alpha ^2} \alpha ^2 -2 e^{-2 \alpha ^2} \alpha ^2 (1-\alpha^2)] \alpha^2.$
The condition $R=\alpha^2$ is exactly the relation between the reflectivity of a local beamsplitter and the local oscillator strength, Eq. \eqref{OptCondR}, that was found purely numerically in the previous case of the right-side violation of the CH inequality. The equation for the optimal $\alpha$ gives $\alpha ^2 \approx 0.1959$, which leads to $CH_{\min} \approx -1.01016$. Numerical calculations for $0.989 < p \leq 1$ also give $R=\alpha^2$ as the optimality condition.
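These numbers can be reproduced by a brute-force scan of $CH_{p=1}$ along the line $R=\alpha^2$ (Python sketch, illustrative; not part of the paper):

```python
import numpy as np

def ch_p1(alpha2, R):
    """CH_{p=1} for the symmetric on/off settings; alpha2 = alpha^2."""
    return -1 + (np.exp(-alpha2) * alpha2
                 - 2 * np.exp(-2 * alpha2) * alpha2 * (1 - R)) * R

x = np.linspace(1e-4, 1.0, 200001)   # scan alpha^2, imposing R = alpha^2
f = ch_p1(x, x)
i = np.argmin(f)
x_opt, ch_min = x[i], f[i]
assert abs(x_opt - 0.1959) < 2e-3         # optimal alpha^2
assert abs(ch_min - (-1.01016)) < 1e-4    # minimal CH value
# Consistency with the stationarity equation e^{alpha^2} = 2(1 - 2 alpha^2):
assert abs(np.exp(x_opt) - 2 * (1 - 2 * x_opt)) < 1e-2
```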
\subsection{No violation of CH inequality based on fixed local detection events of more than one photon for the {\it on}/{\it off} scenario}
In the previous section, we have considered the violation of the CH inequality based on the coincidence detection of a single photon in mode $d_j$ and no photon in mode $c_j$, for both Alice and Bob, and for all local settings of the interferometric setup. In this section, we will show that this is the only set of photon number detection events with which the Bell-nonclassicality of the single photon input state in (\ref{eq:InState}) can be revealed in a scenario with fixed photon-number detections for both local settings on both sides, and the \textit{on}/\textit{off} settings.
The local detection events will be denoted by $(n,m)$, where $n$ is the number of photons detected in the local detector $c_j$ and $m$ in $d_j$. Assume that all events $A$, $A'$, $B$, $B'$ are of this kind. We analyze below the cases $(n,m)$ which are not $(0,1)$, $(1,0)$ (already analysed in previous section), or $(0,0)$ (which is trivial).
We consider three cases:
\textbf{Settings $A, B$ are \textit{on} and $A', B'$ are \textit{off}:} For the initial state (\ref{eq:InState}) and local events $(n,m)$, with $n+m>1$, the following conditions trivially hold: $$ P(A,B') = P(A',B) = P(A',B') = 0.$$ Thus, the CH expression now reduces to: \begin{equation}\label{reducedCH}
CH = P(A,B) - P(A) - P(B), \end{equation} where the joint probability \begin{eqnarray} P(A,B) &=& P(n,m;n,m)_{c_1,d_1;c_2,d_2}, \end{eqnarray} can be obtained from Eq. (\ref{Pab_n1m1n2m2}) of Appendix \ref{appen:arb_prob} by putting $n_1 = n_2 = n$ and $m_1 = m_2 = m$, and the local probabilities are \begin{eqnarray} P(A) &=& P(n,m)_{c_1,d_1}, \\ P(B) &=& P(n,m)_{c_2,d_2}, \end{eqnarray} see Appendix \ref{appen:arb_prob} for the full expressions. The r.h.s. of (\ref{reducedCH}) is always less than or equal to $0$ for any probabilistic theory. Moreover, for the studied problem the lower bound of the expression is always higher than $-1$.
This is because the probability that Alice (or Bob) gets exactly $n+m>1$ particles on their side is less than $\frac{1}{2}$, and thus we have $P(A) + P(B) \leq 1$.
\textbf{Settings $A, B$ are \textit{off} and $A', B'$ are \textit{on}:} In this case all probabilities involving $A$ or $B$ vanish, and the CH inequality reduces to \begin{equation}
-1 \leq CH = - P(A',B') \leq 0. \end{equation} Note that in this case the joint probability for the primed settings is $P(A',B') = P(n,m;n,m)_{c_1,d_1;c_2,d_2}$ (see Appendix \ref{appen:arb_prob}).
\textbf{Settings $A, B$ and $A', B'$ are arbitrary:} We have numerically checked that for $n+m > 1$, there is no violation of the CH inequality on either side. This shows that the condition $n+m=1$ gives the only set of events which reveals the nonclassicality of a single-photon input state, if the detection events are fixed for both local settings and both observers.\\
\begin{figure}
\caption{
Plot of the maximal violation of the CH inequality for various values of $(n,m) \in \mathbb{N}^2$ representing detected local photon numbers in the \textit{on} setting, on the positive side of (\ref{CHin}). In this case we detect just a single photon in the \textit{off} case, as in the basic detection scenario we have proposed. The solid red line is the same as the one shown in Fig. \ref{CH_max}.
}
\label{CHallevents}
\end{figure}
\subsection{CH inequality violation for different detection events associated with different settings, {\it on}/{\it off} scenario}
We will consider a CH inequality based on the \textit{on}/\textit{off} measurement settings scenario, however with differently defined detection events for the \textit{on} and \textit{off} case. For the \textit{on} settings we assume the pair of numbers $(n,m)$, with $m+n>1$ as the set of local detection events, in the same way as before, whereas for the \textit{off} case, i.e., when the beamsplitter is absent and the local oscillator is turned off, the observer will detect only a single photon in mode $b_j=d_j$.
The joint probabilities for these new sets of events are as follows: $P(A,B)$ is the same as in the previous case defined in Eq. (\ref{Pab}), $P(A,B')$ reads: \begin{equation} P(A,B')=\frac{p}{2 m! n!}e^{-\alpha_1^2} \alpha_1^{2 (m+n)} R_1^m (1 - R_1)^n,~~~ \end{equation} and $P(A',B)$ is the same as $P(A,B')$ with the $1 \leftrightarrow 2$ interchange.
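As a consistency check (Python sketch, illustrative; not part of the paper), for $(n,m)=(0,1)$ the above $P(A,B')$ reduces to the single-photon-event expression $\frac{p}{2} R_1 \alpha_1^2 e^{-\alpha_1^2}$ used in the earlier sections:

```python
import numpy as np
from math import factorial

def P_ABprime(p, a1, R1, n, m):
    """P(A,B') for an (n, m) event at the 'on' station and a single
    photon at the 'off' station, following the formula above."""
    return (p / (2 * factorial(m) * factorial(n)) * np.exp(-a1**2)
            * a1**(2 * (m + n)) * R1**m * (1 - R1)**n)

# n = 0, m = 1 recovers the single-photon-event expression.
p, a1, R1 = 0.8, 0.5, 0.3
assert np.isclose(P_ABprime(p, a1, R1, 0, 1),
                  (p / 2) * R1 * a1**2 * np.exp(-a1**2))
```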
An optimization over all possible local parameters has been carried out for various values of $(n,m) \in \mathbb{N}^2$. We have observed that although there is a violation for nearly all possible values of $(n,m)$ \footnote{As in the previous section, there is no violation for the vacuum event, i.e., for $n = m = 0$.}, its magnitude is significantly smaller than in the $n+m=1$ case. The plot of the maximal violation of the CH inequality (\ref{CHin}), for various values of $(n,m)$, is given in Fig. \ref{CHallevents}. It can be seen that the maximal violation of the CH inequality decreases with increasing $m$ in the case of $n = 0$. This shows that for the single-photon input state, detecting a single photon in mode $d_j$ and no photon in mode $c_j$ (or \textit{vice versa}) is the optimal measurement choice for experimental detection of the vacuum--single-photon Bell nonclassicality.
Moreover, we found that in the case of the $(n,m)$ events for which $n+m>1$ the optimal $\alpha_0^2$ does not satisfy the condition $\alpha_0^2 = R$, and $\alpha_0^2 > 1$ for a wide range of values of $p$.
The results of this section support our interpretation of the discussed experimental setup presented in our previous work \cite{2ndPaper}, in which we interpret the nonclassicality found for the single-photon input state as arising from interference due to indistinguishability of photons from the input state and the local oscillator. Indeed, if we detect only a single photon at each local measurement station in the \textit{on} setting, it must have come either from the input state or from the local oscillator. If we locally detect a higher number of photons, all but one of them must have come from the local oscillator. Thus, the indistinguishability of possible events is decreased, which suppresses the observed level of violation of local realism. \begin{figure}
\caption{
Plot of $\left|CH+\frac{1}{2}\right|$ as a function of $p$, for the one-parameter family of states (\ref{eq:InState}).
The range of $p$ shown in this plot is $0.922 \leq p \leq 1$, which is the region of no detectable violation by Hardy's scheme (see Fig. \ref{CH_max}). Here, the red solid curve is the contribution of $CH_{\max}$, whereas the orange dotted line comes from $CH_{\min}$. Both curves are above the $\frac{1}{2}$ limit, showing the violation of the CH inequality on both sides. It is clear that when there is no violation on the positive side, one can interchange the measurement settings and obtain the violation on the other side.
}
\label{CH_absolute}
\end{figure}
\subsection{Absolute violation} Note that the CH inequality can be put in a symmetric form \begin{equation}
\left|CH+\frac{1}{2}\right|\leq \frac{1}{2}. \end{equation} A comparison of the violations of the lower and upper bounds of the CH inequality in the range $0.922 \leq p \leq 1$, where there is no significant violation by Hardy's scheme, is plotted in Fig. \ref{CH_absolute}. The heights of the juxtaposed curves in Fig. \ref{CH_absolute} show the magnitude of the violation of this inequality.
\begin{figure}
\caption{
Plot of the relative CH violation as a function of $p$. The term \textit{relative} is used because we divide the maximal value of CH, given in (\ref{CHin}), by $p$, the probability of having a single photon in the input state (\ref{eq:InState}). The red solid curve represents the optimal relative CH violation, whereas the dashed purple curve is for Hardy's scheme. }
\label{RCH}
\end{figure}
\subsection{Relative CH inequality for $p \approx 0$ }
The violations of (\ref{CHin}) studied here rely on a specific detection scheme that depends on the probability of detecting one photon in mode $d_j$ and no photon in mode $c_j$, for both $j = 1,2$. As is easy to see in Fig. \ref{CH_max}, the optimal CH value (for both our and Hardy's detection schemes) shows that the violation is no longer significant for $p \approx 0$. Recall, however, that $p$ quantifies the fidelity of the input state impinging on $BS_0$ with the single-photon state (see Eq. (\ref{eq:input})), so a low value of $p$ implies that the probability of detecting one photon in mode $d_j$ can be negligibly small, which affects the value of the violation of the CH inequality. To gain a clear insight into the significance of the violation as a function of $p$ over its whole range, we plot the optimal CH value divided by the probability $p$ in Fig. \ref{RCH} and term it the \textit{relative} CH value. This stresses that our detection scheme still applies and allows one to claim a violation of local realism even when the input state has a large overlap with the vacuum.
\section{Robustness to experimental imperfections}
In the previous sections we provided a scheme to certify the ``non-locality'' of a single photon, or of a superposition of a single photon and vacuum. We found that when the Clauser-Horne inequality is used, the ``non-locality'' is best detected by working with the \textit{on}/\textit{off} measurement settings and single-photon detections. This extends the results of Hardy \cite{Hardy94} to a more general scenario. Whether a setting should be \textit{on} or \textit{off} depends on the value of $p$, the probability of getting the single photon in the input state $|\psi(p)\rangle$. Moreover, we found that for the \textit{on} setting, the optimal transmittivities of the beamsplitters $BS_j$ are the same for both parties and read $T_j = 1 - \alpha_0^2$. Here $\alpha_0^2$ denotes the optimal intensity of the local coherent state, which depends on $p$, as shown in Fig. \ref{Opt_alpha}, and is approximated in Eq. (\ref{eq:optalphamax}).
In this section we discuss the feasibility of implementing our scheme in inevitably imperfect experiments. We focus on the two potentially most important sources of problems: fluctuations of the local fields around their optimal values and detector inefficiency.
Note that, to achieve the optimal violation of the CH inequality, the parties need to tune their local settings: the reflectivities of the beamsplitters, and the phases and intensities of the auxiliary coherent fields. The reflectivity is relatively easy to control and stable once set to the desired value. The outcome probabilities (\ref{Pab})-(\ref{Pb}) depend on the phases in a simple way, so the effects of phase detuning will not differ from those seen in standard interferometric experiments.
\subsection{Noise fluctuations around the optimal local field}
\begin{figure}
\caption{
Region of violation of the upper bound of the CH inequality (\ref{CHin}) in the plane of $p$ and $\alpha^2$, for the single-parameter family of quantum states $\ket{\psi(p)}$ given in (\ref{eq:InState}). $\alpha^2$ is the intensity of the local oscillator when the beamsplitter $BS_j$ is \textit{on} for the unprimed settings, and it is the same for both parties. The reflectivities of the beamsplitters are fixed at $R_j = \alpha^2_0$, for $j = 1,2$, for \textit{on}, whereas the primed settings are taken as \textit{off}.
}
\label{alpharange}
\end{figure}
\begin{figure}
\caption{
Region of violation of the lower bound of the CH inequality (\ref{CHin}) for the state $\ket{\psi(p)}$, for $0.989 < p < 1$ and $\alpha^2 < 1$. Here $\alpha^2$ is the intensity of both local oscillators when the beamsplitters $BS_j$, of reflectivity $R_j = \alpha^2_0$, for $j = 1,2$, are \textit{on} for the primed settings. The unprimed settings are taken to be \textit{off}. The figure also shows how far $\alpha^2$ may fluctuate from the optimal value (black dotted line), at fixed $p$, while still violating the lower bound.
}
\label{alpharange_min}
\end{figure}
To address the effect of fluctuations of the intensity of the local coherent fields,
we checked the range of $\alpha^2$ for which it is possible to detect a violation of local realism while the other measurement settings are fixed at their optima for the optimal intensity $\alpha_0^2$. The numerical results obtained this way are presented in Fig. \ref{alpharange} (violation of the upper bound of the CH inequality) and Fig. \ref{alpharange_min} (lower bound).
\begin{figure}
\caption{
Percentage error $\zeta(p)$, from a simulated experimental realization of the violation of the right-hand side of the CH inequality, as a function of $p$. The red curve is for $\sigma = 0.05 \alpha_0^2$, i.e., $5\%$ fluctuations of the optimal $\alpha_0^2$ for each $p$, and the blue one is for $10\%$ fluctuations ($\sigma = 0.1 \alpha_0^2$).
The reflectivity of the beamsplitter is fixed by the optimal coherent-state strength, via $R = \alpha_0^2$. The inset shows the percentage error for the left-hand-side violation of (\ref{CHin}), in the close proximity of $p = 1$. As Fig. \ref{alpharange_min} makes apparent, the violation is quite robust around the optimal $\alpha_0^2$, hence the error is much lower for the left-hand side than for the right-hand-side violation. The error has been calculated with a sample size of $N = 5000$.}
\label{errorCH}
\end{figure}
We also quantify the impact of fluctuations of the intensity $\alpha^2$ of the local coherent fields on the magnitude of the violation of the CH inequality. To this end, we compare the violation obtained for the optimal settings, $CH(\alpha_0^2, p)$, with the one resulting from a detuned intensity of the coherent fields, $CH(\alpha^2, p)$ (other measurement settings, such as reflectivities, are fixed at their optima for the $\alpha_0^2$ case). We define the relative difference of violations as
\begin{equation}
\zeta(p) \! = \! \int d\alpha^2 \frac{\left|CH(\alpha^2, p)- CH(\alpha_0^2, p)\right|}{CH(\alpha_0^2, p)} \mathcal{N}_{\alpha_0^2, \sigma}(\alpha^2) \times 100\%, \end{equation} where $\mathcal{N}_{\alpha_0^2, \sigma}(\alpha^2)$ is a truncated normal distribution with a standard deviation $\sigma$ and the lower tail cut off at 0, centered around the optimal intensity $\alpha_0^2$.
The numerical estimates of $\zeta(p)$ for intensity fluctuations proportional to the optimal intensity $\alpha_0^2$ are plotted in Fig. \ref{errorCH}. The violations of the CH inequality prove to be quite robust, especially for $p\approx1$.
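The estimate of $\zeta(p)$ defined above can be reproduced with a short Monte Carlo sketch. The function and the toy stand-in for the actual CH value $CH(\alpha^2, p)$ at fixed $p$ are our own, purely illustrative names; the truncated normal $\mathcal{N}_{\alpha_0^2,\sigma}$ is realized by rejection sampling.

```python
import numpy as np

def zeta_percent(ch, alpha0_sq, sigma, n_samples=5000, seed=0):
    """Monte Carlo estimate of zeta(p): the relative deviation (in %) of the
    CH violation when alpha^2 fluctuates around its optimum alpha0^2,
    following a normal distribution truncated below at 0."""
    rng = np.random.default_rng(seed)
    # Rejection sampling realizes the truncated normal N_{alpha0^2, sigma}.
    draws = rng.normal(alpha0_sq, sigma, size=10 * n_samples)
    a2 = draws[draws >= 0.0][:n_samples]
    ch0 = ch(alpha0_sq)
    return np.mean(np.abs(ch(a2) - ch0) / ch0) * 100.0

# Toy stand-in for CH(alpha^2, p) at fixed p, peaked at alpha0^2 = 0.2
# (purely illustrative; the actual CH value comes from the full model).
toy_ch = lambda a2: 0.1 - (a2 - 0.2) ** 2
```

With the toy profile, small fluctuations around the optimum give a sub-percent $\zeta$, mirroring the robustness seen in Fig. \ref{errorCH}.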
\subsection{Inefficient detectors} Thus far, the photon-number-resolving detectors we considered were tacitly assumed to have perfect efficiency. In this section, we investigate detectors of finite efficiency $\eta < 1$. \begin{figure}
\caption{ Maximal violation of the CH inequality for different values of the detection efficiency $\eta$, as a function of $p$, the probability of having a single photon in the initial state $\ket{\psi(p)}$. The red line corresponds to $\eta = 1$, i.e., perfect detectors, matching the plots in Figs. \ref{CH_max} and \ref{CHallevents}. The orange line is for $\eta = 0.98$, and the green and blue curves are for $\eta = 0.95$ and $\eta = 0.9$, respectively. The dotted line depicts the migration of the point of maximum CH violation in the plane of $p$ and CH for various values of $\eta$ (given in the color palette).}
\label{inefficiency}
\end{figure}
To do that, we assume that if there are $n$ photons in a given mode, the detector might not register all of them. Instead, it detects $n'$ of them, $0 \leq n' \leq n$, with probability $\binom{n}{n'} \eta^{n'} (1 - \eta)^{n - n'}$. Hence, the joint probability of detecting $(k,l;r,s)$ photons in modes $c_1, d_1; c_2$, and $d_2$ by detectors with the same efficiency $\eta$ is \begin{eqnarray}\label{jointeta} &&P_\eta(k,l;r,s)_{c_1, d_1, c_2, d_2} = \nonumber \\ && \sum_{n,m = 0}^\infty \sum_{n',m' = 0}^\infty \binom{k+n}{k} \binom{l+m}{l} \binom{r+n'}{r} \binom{s+m'}{s}
\eta^{k+l + r+s} \nonumber \\ &&
(1 - \eta)^{n+m + n' + m'} P(k+n, l+m; r + n', s + m')_{c_1, d_1, c_2, d_2}, \nonumber \\ \end{eqnarray}
and the local probabilities are
\begin{eqnarray}\label{localeta} &&P_\eta(k,l)_{c_1, d_1} = \nonumber \\ && \sum_{n,m = 0}^\infty \binom{k+n}{k} \binom{l+m}{l} \eta^{k+l} (1 - \eta)^{n+m} P(k+n, l+m)_{c_1, d_1}, \nonumber \\ \end{eqnarray} and $P_\eta(r,s)_{c_2, d_2}$ has an analogous expression; see Appendix \ref{inefficient_detector} for more details. Detailed expressions for the joint and local probabilities of detecting arbitrary numbers of photons, i.e., $n_j$ photons in mode $c_j$ and $m_j$ in mode $d_j$ for $j = 1,2$, are given in Eqs. (\ref{Pab_n1m1n2m2}) and (\ref{Pa_nm}) of Appendix \ref{appen:arb_prob}.
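The binomial smearing underlying Eqs. (\ref{jointeta}) and (\ref{localeta}) is the standard loss model for an inefficient detector: each incoming photon is registered independently with probability $\eta$. A minimal single-mode sketch (the function name is ours):

```python
from math import comb

def smear(ideal, eta):
    """Map an ideal photon-number distribution {n: P(n)} to the distribution
    seen by a detector of efficiency eta: each of the n incident photons is
    registered independently with probability eta (binomial loss model)."""
    out = {}
    for n, pn in ideal.items():
        for k in range(n + 1):
            out[k] = out.get(k, 0.0) + comb(n, k) * eta**k * (1 - eta)**(n - k) * pn
    return out
```

For instance, an ideal one-photon distribution is registered as one photon with probability $\eta$ and as vacuum with probability $1-\eta$; applying `smear` mode by mode reproduces the structure of Eqs. (\ref{jointeta}) and (\ref{localeta}).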
We calculated the violation of the CH inequality (\ref{CHin}) for the set of photon-detection events in which a single photon is detected in mode $d_j$ and no photon in mode $c_j$, i.e., for $k = 0,\, \,l = 1;\,\, r = 0$ and $s = 1$ in Eqs. (\ref{jointeta}) and (\ref{localeta}). The maximal violation for the one-parameter family of vacuum--one-photon qubits $\ket{\psi(p)}$ is plotted in Fig. \ref{inefficiency} for each $p$ and various values of the detector efficiency $\eta$.
Regarding the threshold detector efficiency of our scheme, we assume that a violation of the upper bound of the CH inequality of magnitude smaller than $10^{-5}$ cannot be demonstrated experimentally. The lowest detection efficiency for which a larger violation can still be attained is $\eta \approx 0.844$ (at $p \approx 0.038$).
Most importantly, for the optimal violation of the CH inequality the measurement settings satisfy the conditions $\eta \alpha_j^2 = R_j$ and $\eta {\alpha'_j}^2 = R'_j$, for both $j = 1,2$.
Thus the optimality relation holds as before, once the intensity of the local oscillators \textit{observed} by the detectors is reduced by a factor of $\eta$. This is an important fact to be taken into account in experimental realizations. It is very interesting that this type of loss does not affect the optimality condition.
\section{Generalizations of the input state} \subsection{At most single photon states} \label{sec:generalized} In the previous sections we considered the family of input states $\ket{\psi(p)}$, produced by impinging a superposition of vacuum and a single photon on a balanced beamsplitter $BS_0$. Now suppose that $BS_0$ is instead an unbalanced beamsplitter of transmittivity $\cos^2 \xi$. This leads to a different family of input states $\ket{\psi(p, \xi)}$ given by \begin{eqnarray} \label{eq:InStategen}
&\ket{\psi(p,\xi)} =&\nonumber\\
&\left(\sqrt{1-p}\ket{00} + \sqrt{p} \left(\cos \xi \ket{01} + \sin \xi \ket{10} \right)\right)_{b_1b_2}.& \end{eqnarray} These states correspond to a general (up to local phases) situation in which at most a single photon is present in the modes $b_1$ and $b_2$.
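For numerical work, the family $\ket{\psi(p,\xi)}$ can be represented by its three amplitudes in the basis $\{\ket{00},\ket{01},\ket{10}\}_{b_1 b_2}$; a minimal sketch (the naming is ours):

```python
import numpy as np

def psi(p, xi):
    """Amplitudes of |psi(p, xi)> in the basis {|00>, |01>, |10>}_{b1 b2}:
    sqrt(1-p)|00> + sqrt(p)(cos(xi)|01> + sin(xi)|10>)."""
    return np.array([np.sqrt(1 - p),
                     np.sqrt(p) * np.cos(xi),
                     np.sqrt(p) * np.sin(xi)])

# The balanced-BS_0 family |psi(p)> is recovered at xi = pi/4,
# where the |01> and |10> amplitudes coincide.
```

The state is normalized for any $0 \leq p \leq 1$ and any $\xi$, and $\xi = \pi/4$ reproduces the symmetric case studied earlier.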
To probe the nonclassicality of the states $\ket{\psi(p, \xi)}$ we used the CH inequality (\ref{CHin}) based on the single-photon detection events described in \ref{detection}. As in the previous sections, we numerically optimized the CH expression over all measurement settings.
\begin{figure}
\caption{
Plot of the maximal value $CH_{max}$ of the CH expression for the family of states $\ket{\psi(p,\xi)}$. The optimization was performed numerically for completely general measurement settings and single-photon detection events (see \ref{detection}). For $0<p<1$ and $0<\xi<\frac{\pi}{2}$ the CH inequality is violated as $CH_{max}>0$.}
\label{CHmaxgen}
\end{figure}
A violation of the upper bound of (\ref{CHin}), depicted in Fig. \ref{CHmaxgen}, was obtained for all $0<p<1$ and $0<\xi<\frac{\pi}{2}$. For a given $p$, the maximal violation is obtained for $\xi=\frac{\pi}{4}$. The optimal measurement settings leading to $CH_{max}$ are \textit{on} for the unprimed settings (beamsplitters $BS_j$ and coherent states $\ket{\alpha_j e^{i \phi_j}}$, $j = 1,2$, are present) and \textit{off} for the primed settings (beamsplitters are removed, and local oscillators are turned off). Just as in the case of the balanced $BS_0$, the condition $T_j + \alpha_j^2 = 1$ is satisfied by the optimal \textit{on} measurements for both $j = 1,2$. However, the optimal $\alpha_1$ and $\alpha_2$ are not, in general, the same
when the amplitudes of $\ket{01}_{b_1b_2}$ and $\ket{10}_{b_1b_2}$ in the initial state are different.
\begin{figure}
\caption{
Plot of the minimal value $CH_{min}$ of the CH expression for the family of states $\ket{\psi(p,\xi)}$. The optimization was performed numerically for completely general measurement settings and single-photon detection events (see \ref{detection}). A violation is found only in a very small range of the parameter $p$. For $p=1$ the upper bound of the CH inequality is not violated, but the lower bound is violated for the entire range of $\xi$.}
\label{CHmingen}
\end{figure}
Let us now look for violations of the lower bound of the CH inequality. For $p = 1$ the inequality is violated for all $0<\xi<\frac{\pi}{2}$. As $p$ gets smaller, the range of $\xi$ for which a violation can be obtained quickly narrows, finally reducing, numerically, to the single point $\xi=\frac{\pi}{4}$ at $p\approx0.989$; see Fig. \ref{CHmingen}.
The optimal settings are, again, of the \textit{on}/\textit{off} kind and satisfy the condition $T_j + \alpha_j^2 = 1$ for both Alice and Bob.\\
\subsection{$\alpha^2=R$ condition beyond single photon case, a simple example}
\label{sec:TMSV} The necessary condition on the optimal measurement settings for violating the CH inequality is puzzling, and thus invites an investigation of whether it can also appear in other interferometric contexts involving weak homodyne measurements and single-photon detections. Surprisingly, it holds in the simple case we present here. More general studies of this condition, for wider classes of states, will be presented elsewhere.
The discussed interferometric configuration (Fig. \ref{mainSetup}) can be extended to other two-mode input optical states, such as the following one: \begin{equation} \label{TMSV}
\sqrt{1-p} \ket{0, 0}_{b_1,b_2}+ \sqrt p \ket{1, 1}_{b_1,b_2}. \end{equation} The joint probability of detecting no photon in mode $c_j$ and single photon in mode $d_j$, for both $j = 1,2$, when both the beamsplitters are present and local oscillators are turned on is: \begin{eqnarray} &&P(A,B)= e^{-\alpha_1^2 -\alpha_2^2}\Big[ (1 -p) R_1 R_2 \alpha_1^2 \alpha_2^2 + p (1 - R_1)(1 - R_2) \nonumber \\ && - 2\alpha_1 \alpha_2 \sqrt{p (1 - p) } \sqrt{R_1(1 - R_1)} \sqrt{R_2(1 - R_2)}\cos(\phi_1 - \phi_2) \Big]. \nonumber \\ \end{eqnarray} The local probabilities are: \begin{equation} P(A) = e^{-\alpha_1^2} \Big[ (1 - p) R_1 \alpha_1^2 + p (1 - R_1) \Big], \end{equation} and \begin{equation} P(B) = e^{-\alpha_2^2} \Big[ (1 - p) R_2 \alpha_2^2 + p (1 - R_2) \Big]. \end{equation}
Here we present a numerical optimization of the violations of the CH inequality. The local settings are specified as before by three parameters: amplitude $\alpha$ of the local oscillator, its phase $\phi$ and reflectivity $R$ of the local beamsplitter. The events in the CH inequality are defined with respect to single photon detection, as detailed in \ref{detection}. It turns out that the CH inequality is violated for the entire range of $p\in(0,1)$ (see Fig. \ref{GPY}).
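For reference, the probabilities $P(A,B)$, $P(A)$ and $P(B)$ above transcribe directly into code; a sketch with our own naming, useful as a starting point for such a numerical optimization:

```python
import math

def joint_prob(p, a1, a2, R1, R2, phi1, phi2):
    """P(A,B): no photon in c_j and one photon in d_j (j = 1, 2) for the
    input sqrt(1-p)|00> + sqrt(p)|11>, with both beamsplitters present
    and both local oscillators turned on."""
    interference = (2 * a1 * a2 * math.sqrt(p * (1 - p))
                    * math.sqrt(R1 * (1 - R1)) * math.sqrt(R2 * (1 - R2))
                    * math.cos(phi1 - phi2))
    return math.exp(-a1**2 - a2**2) * (
        (1 - p) * R1 * R2 * a1**2 * a2**2
        + p * (1 - R1) * (1 - R2)
        - interference)

def local_prob(p, a, R):
    """Single-party marginal P(A) (or P(B) with party-2 settings)."""
    return math.exp(-a**2) * ((1 - p) * R * a**2 + p * (1 - R))
```

At $p = 1$ the interference term vanishes and the joint probability reduces to $e^{-\alpha_1^2 - \alpha_2^2}(1-R_1)(1-R_2)$, as expected from the formulas above.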
\begin{figure}
\caption{
Plot of the maximal CH violation for the
superposition of vacuum and a two-photon
state \eqref{TMSV} as a function of the probability $p$ of having a two-photon component. The red line represents the maximal violation within the perfect \textit{on}/\textit{off} scheme for the \textit{off} settings chosen as unprimed ones for both observers. The green line represents maximal violation within the perfect \textit{on}/\textit{off} scheme for the \textit{off} settings chosen as unprimed for one observer and primed for another. Finally the dashed blue line represents the maximal violation for the general settings (no \textit{on}/\textit{off} constraint). It can be seen that the \textit{on}/\textit{off} settings are almost optimal for lower values of $p$ and exactly optimal for the higher values.}
\label{GPY}
\end{figure}
For lower values of $p$ the \textit{on}/\textit{off} settings lead to an almost optimal violation (see the red solid line in Fig. \ref{GPY}), if the \textit{off} settings correspond to the unprimed measurement choices for both observers. On the other hand, for higher values of $p$ the exact optimal violation (see the green solid line in Fig. \ref{GPY}) can be obtained if the \textit{off} settings are the primed measurement choices for one observer and the unprimed ones for the other (due to the symmetry of the CH expression under a swap of observers, it does not matter which is which).
Surprisingly, both the optimal \textit{on}/\textit{off} settings and the general optimal ones (outside the \textit{on}/\textit{off} scenario) follow the conditions $\alpha_j^2=R_j$ and $\alpha_j'^2=R'_j$ for both primed and unprimed measurement choices of Alice ($j=1$) and Bob ($j=2$). This suggests that the condition $\alpha^2=R$ might be a general property of settings leading to the maximal violation of the CH inequality based on weak-field homodyne measurements and single-photon detections.
\section{ Closing remarks}
Bell tests are the cornerstone of many quantum protocols, such as device-independent quantum key distribution and randomness certification \cite{QRNG_appl, Farkas_2021}. Optical states of one or a few photons are a feasible choice for implementing long-distance protocols in disparate experimental situations, ranging from satellite-based communication \cite{Bell-space} to submarine cable connections \cite{Bell-Submarine}. For this reason a detailed study of Bell scenarios becomes of paramount importance for quantum information processing tasks implemented with such states. All doubtful elements in their analysis must be removed, and the gedanken versions must be translated into ones that can be executed in the laboratory.
Our aim in this work is to move beyond the foundational level of the results obtained in \cite{1stPaper, 2ndPaper}, which show basic inconsistencies in the interpretations offered thus far of the gedanken experiments presented in the classic papers \cite{TWC91,GPY}, and to turn the discussion toward the conditions required to reveal violations of local realism in laboratory realisations of these experiments. Our discussion here, and in \cite{1stPaper, 2ndPaper}, is also a basis for re-interpreting the results of current state-of-the-art weak-homodyne photon-number-resolving experiments, such as \cite{Walmsley14, Walmsley20}, which were based on the proposals of \cite{TWC91,GPY}. Note that in \cite{1stPaper, 2ndPaper} we have shown, or strongly conjectured, that keeping the local oscillator strengths constant for both local settings, and fixing the transmissivity of the beamsplitters in the detection stations at $T=50\%$, cannot lead to a proper Bell test based on the schemes of \cite{TWC91,GPY}, even with detectors of $100\%$ efficiency.
We searched for an optimal interferometric scheme revealing a {\it true} violation of local realism for the family of initial states $\ket{\psi(p)}$ of \eqref{eq:InState}. The scheme proposed in \cite{Hardy94} is a correct Bell test, but, as we show here, it is not optimal. This is especially so when the vacuum component of the initial state has a probabilistic weight below $8\%$. Moreover, for $p=1$ (ideally a single photon) the scheme does not work at all. However, note that Hardy did {\it not} aim at optimality. Still, our numerical searches, and analytic calculations for the more tractable cases, show that Hardy's idea of turning {\it off} the local oscillator, and removing the local beamsplitter, in one of the two local settings on each side leads to the optimal violation of the CH inequality when the local events are specified by the detection of just one photon in the local measurement station. We also show that such single-photon detection events (for all settings) are indeed optimal, and that 50-50 beamsplitters at the final measurement stations in the {\it on} operation mode are optimal only for $p\approx 1/2$. Hardy assumed fixed 50-50 beamsplitters, as was the case in \cite{TWC91}.
The most surprising result that we show is that the conditions: \begin{eqnarray}\label{COND}
\alpha_j^2 + T_j &=&1 \quad \text{for} \, \, j=1, 2,\nonumber\\
\alpha_j^2 + R_j &=&1 \quad \text{for} \, \, j=1, 2, \end{eqnarray} are necessary for the optimal violation of the CH inequality when we base our Bell test on single-photon detection events: one photon in mode $d_j$ and no photon in $c_j$ (first condition), or vice versa (second condition). These necessary conditions, despite their beauty, are not easy to interpret. Most importantly, they hold also for the case of inefficient detection, in a modified form: \begin{equation}
\eta\alpha_j^2 + T_j =1 \quad \text{for} \, \, j=1, 2, \end{equation} and analogously for the second case. Note that this allows one to easily choose the optimal settings for an actual experimental setup. We also find an interesting situation, which is one of the characteristic traits of the experiment: the optimal settings change with the efficiency of the detection.
Surprisingly, the conditions (\ref{COND}) are also necessary for the optimality of the settings in the case of the state (\ref{TMSV}). In a forthcoming article we discuss a modified experiment of \cite{GPY}, which involves weak homodyne measurements on a two-mode (two-beam) squeezed vacuum. The modification rests on \textit{on}/\textit{off} settings and tunable beamsplitters at the measurement stations, with single-photon detection events for each party. Thus, the conditions might be a general feature of weak homodyne measurements in Bell tests. An open question is to find the entire family of states for which the optimality conditions (\ref{COND}) hold.
\begin{thebibliography}{32} \makeatletter \providecommand \@ifxundefined [1]{
\@ifx{#1\undefined} } \providecommand \@ifnum [1]{
\ifnum #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi } \providecommand \@ifx [1]{
\ifx #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode
`\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{https://doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty
\bibitem [{\citenamefont {Horodecki}\ \emph {et~al.}(2009)\citenamefont
{Horodecki}, \citenamefont {Horodecki}, \citenamefont {Horodecki},\ and\
\citenamefont {Horodecki}}]{HHH09}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Horodecki}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Horodecki}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Horodecki}},\ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{Horodecki}},\ }\bibfield {title} {\bibinfo {title} {Quantum entanglement},\
}\href {https://doi.org/10.1103/RevModPhys.81.865} {\bibfield {journal}
{\bibinfo {journal} {Rev. Mod. Phys.}\ }\textbf {\bibinfo {volume} {81}},\
\bibinfo {pages} {865} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Pan}\ \emph {et~al.}(2012{\natexlab{a}})\citenamefont
{Pan}, \citenamefont {Chen}, \citenamefont {Lu}, \citenamefont {Weinfurter},
\citenamefont {Zeilinger},\ and\ \citenamefont {\ifmmode~\dot{Z}\else
\.{Z}\fi{}ukowski}}]{ZukowskiRMP}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.-W.}\ \bibnamefont
{Pan}}, \bibinfo {author} {\bibfnamefont {Z.-B.}\ \bibnamefont {Chen}},
\bibinfo {author} {\bibfnamefont {C.-Y.}\ \bibnamefont {Lu}}, \bibinfo
{author} {\bibfnamefont {H.}~\bibnamefont {Weinfurter}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Zeilinger}},\ and\ \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {\ifmmode~\dot{Z}\else \.{Z}\fi{}ukowski}},\
}\bibfield {title} {\bibinfo {title} {Multiphoton entanglement and
interferometry},\ }\href {https://doi.org/10.1103/RevModPhys.84.777}
{\bibfield {journal} {\bibinfo {journal} {Rev. Mod. Phys.}\ }\textbf
{\bibinfo {volume} {84}},\ \bibinfo {pages} {777} (\bibinfo {year}
{2012}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Brunner}\ \emph {et~al.}(2014)\citenamefont
{Brunner}, \citenamefont {Cavalcanti}, \citenamefont {Pironio}, \citenamefont
{Scarani},\ and\ \citenamefont {Wehner}}]{Brunner14}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont
{Brunner}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Cavalcanti}},
\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Pironio}}, \bibinfo
{author} {\bibfnamefont {V.}~\bibnamefont {Scarani}},\ and\ \bibinfo {author}
{\bibfnamefont {S.}~\bibnamefont {Wehner}},\ }\bibfield {title} {\bibinfo
{title} {Bell nonlocality},\ }\href
{https://doi.org/10.1103/RevModPhys.86.419} {\bibfield {journal} {\bibinfo
{journal} {Rev. Mod. Phys.}\ }\textbf {\bibinfo {volume} {86}},\ \bibinfo
{pages} {419} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Tan}\ \emph {et~al.}(1991)\citenamefont {Tan},
\citenamefont {Walls},\ and\ \citenamefont {Collett}}]{TWC91}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~M.}\ \bibnamefont
{Tan}}, \bibinfo {author} {\bibfnamefont {D.~F.}\ \bibnamefont {Walls}},\
and\ \bibinfo {author} {\bibfnamefont {M.~J.}\ \bibnamefont {Collett}},\
}\bibfield {title} {\bibinfo {title} {Nonlocality of a single photon},\
}\href {https://doi.org/10.1103/PhysRevLett.66.252} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {66}},\
\bibinfo {pages} {252} (\bibinfo {year} {1991})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hardy}(1994)}]{Hardy94}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont
{Hardy}},\ }\bibfield {title} {\bibinfo {title} {Nonlocality of a single
photon revisited},\ }\href {https://doi.org/10.1103/PhysRevLett.73.2279}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf
{\bibinfo {volume} {73}},\ \bibinfo {pages} {2279} (\bibinfo {year}
{1994})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Vaidman}(1995)}]{Vaidman95}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont
{Vaidman}},\ }\bibfield {title} {\bibinfo {title} {Nonlocality of a single
photon revisited again},\ }\href
{https://doi.org/10.1103/PhysRevLett.75.2063} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {75}},\ \bibinfo
{pages} {2063} (\bibinfo {year} {1995})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hessmo}\ \emph {et~al.}(2004)\citenamefont {Hessmo},
\citenamefont {Usachev}, \citenamefont {Heydari},\ and\ \citenamefont
{Bj\"ork}}]{Hessmo04}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{Hessmo}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Usachev}},
\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Heydari}},\ and\ \bibinfo
{author} {\bibfnamefont {G.}~\bibnamefont {Bj\"ork}},\ }\bibfield {title}
{\bibinfo {title} {Experimental demonstration of single photon nonlocality},\
}\href {https://doi.org/10.1103/PhysRevLett.92.180401} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {92}},\
\bibinfo {pages} {180401} (\bibinfo {year} {2004})}\BibitemShut {NoStop} \bibitem [{\citenamefont {van Enk}(2005)}]{Enk05}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~J.}\ \bibnamefont
{van Enk}},\ }\bibfield {title} {\bibinfo {title} {Single-particle
entanglement},\ }\href {https://doi.org/10.1103/PhysRevA.72.064306}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo
{volume} {72}},\ \bibinfo {pages} {064306} (\bibinfo {year}
{2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Dunningham}\ and\ \citenamefont
{Vedral}(2007)}]{Dunningham07}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Dunningham}}\ and\ \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont
{Vedral}},\ }\bibfield {title} {\bibinfo {title} {Nonlocality of a single
particle},\ }\href {https://doi.org/10.1103/PhysRevLett.99.180404} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {99}},\ \bibinfo {pages} {180404} (\bibinfo {year}
{2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Heaney}\ \emph {et~al.}(2011)\citenamefont {Heaney},
\citenamefont {Cabello}, \citenamefont {Santos},\ and\ \citenamefont
{Vedral}}]{Heaney11}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont
{Heaney}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Cabello}},
\bibinfo {author} {\bibfnamefont {M.~F.}\ \bibnamefont {Santos}},\ and\
\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Vedral}},\ }\bibfield
{title} {\bibinfo {title} {Extreme nonlocality with one photon},\ }\href
{https://doi.org/10.1088/1367-2630/13/5/053054} {\bibfield {journal}
{\bibinfo {journal} {New Journal of Physics}\ }\textbf {\bibinfo {volume}
{13}},\ \bibinfo {pages} {053054} (\bibinfo {year} {2011})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Jones}\ and\ \citenamefont
{Wiseman}(2011)}]{Jones11}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~J.}\ \bibnamefont
{Jones}}\ and\ \bibinfo {author} {\bibfnamefont {H.~M.}\ \bibnamefont
{Wiseman}},\ }\bibfield {title} {\bibinfo {title} {Nonlocality of a single
photon: Paths to an einstein-podolsky-rosen-steering experiment},\ }\href
{https://doi.org/10.1103/PhysRevA.84.012110} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {84}},\ \bibinfo
{pages} {012110} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Brask}\ \emph {et~al.}(2013)\citenamefont {Brask},
\citenamefont {Chaves},\ and\ \citenamefont {Brunner}}]{Brask13}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~B.}\ \bibnamefont
{Brask}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Chaves}},\ and\
\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Brunner}},\ }\bibfield
{title} {\bibinfo {title} {Testing nonlocality of a single photon without a
shared reference frame},\ }\href {https://doi.org/10.1103/PhysRevA.88.012111}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo
{volume} {88}},\ \bibinfo {pages} {012111} (\bibinfo {year}
{2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Morin}\ \emph {et~al.}(2013)\citenamefont {Morin},
\citenamefont {Bancal}, \citenamefont {Ho}, \citenamefont {Sekatski},
\citenamefont {D'Auria}, \citenamefont {Gisin}, \citenamefont {Laurat},\ and\
\citenamefont {Sangouard}}]{Morin13}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {O.}~\bibnamefont
{Morin}}, \bibinfo {author} {\bibfnamefont {J.-D.}\ \bibnamefont {Bancal}},
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Ho}}, \bibinfo {author}
{\bibfnamefont {P.}~\bibnamefont {Sekatski}}, \bibinfo {author}
{\bibfnamefont {V.}~\bibnamefont {D'Auria}}, \bibinfo {author} {\bibfnamefont
{N.}~\bibnamefont {Gisin}}, \bibinfo {author} {\bibfnamefont
{J.}~\bibnamefont {Laurat}},\ and\ \bibinfo {author} {\bibfnamefont
{N.}~\bibnamefont {Sangouard}},\ }\bibfield {title} {\bibinfo {title}
{Witnessing trustworthy single-photon entanglement with local homodyne
measurements},\ }\href {https://doi.org/10.1103/PhysRevLett.110.130401}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf
{\bibinfo {volume} {110}},\ \bibinfo {pages} {130401} (\bibinfo {year}
{2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Fuwa}\ \emph {et~al.}(2015)\citenamefont {Fuwa},
\citenamefont {Takeda}, \citenamefont {Zwierz}, \citenamefont {Wiseman},\
and\ \citenamefont {Furusawa}}]{Fuwa2015}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Fuwa}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Takeda}},
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Zwierz}}, \bibinfo
{author} {\bibfnamefont {H.~M.}\ \bibnamefont {Wiseman}},\ and\ \bibinfo
{author} {\bibfnamefont {A.}~\bibnamefont {Furusawa}},\ }\bibfield {title}
{\bibinfo {title} {Experimental proof of nonlocal wavefunction collapse for a
single particle using homodyne measurements},\ }\href
{https://doi.org/10.1038/ncomms7665} {\bibfield {journal} {\bibinfo
{journal} {Nature Communications}\ }\textbf {\bibinfo {volume} {6}},\
\bibinfo {pages} {6665} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lee}\ \emph {et~al.}(2017)\citenamefont {Lee},
\citenamefont {Park}, \citenamefont {Kim},\ and\ \citenamefont
{Noh}}]{Lee17}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.-Y.}\ \bibnamefont
{Lee}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Park}}, \bibinfo
{author} {\bibfnamefont {J.}~\bibnamefont {Kim}},\ and\ \bibinfo {author}
{\bibfnamefont {C.}~\bibnamefont {Noh}},\ }\bibfield {title} {\bibinfo
{title} {Single-photon quantum nonlocality: Violation of the
Clauser-Horne-Shimony-Holt inequality using feasible measurement setups},\
}\href {https://doi.org/10.1103/PhysRevA.95.012134} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {95}},\
\bibinfo {pages} {012134} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Das}\ \emph {et~al.}(2021{\natexlab{a}})\citenamefont
{Das}, \citenamefont {Karczewski}, \citenamefont {Mandarino}, \citenamefont
{Markiewicz}, \citenamefont {Woloncewicz},\ and\ \citenamefont
{{\.Z}ukowski}}]{1stPaper}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Das}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Karczewski}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Mandarino}}, \bibinfo
{author} {\bibfnamefont {M.}~\bibnamefont {Markiewicz}}, \bibinfo {author}
{\bibfnamefont {B.}~\bibnamefont {Woloncewicz}},\ and\ \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {{\.Z}ukowski}},\ }\bibfield {title}
{\bibinfo {title} {On detecting violation of local realism with photon-number
resolving weak-field homodyne measurements},\ }\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {arXiv preprint arXiv:2104.10703}\ } (\bibinfo
{year} {2021}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Das}\ \emph {et~al.}(2021{\natexlab{b}})\citenamefont
{Das}, \citenamefont {Karczewski}, \citenamefont {Mandarino}, \citenamefont
{Markiewicz}, \citenamefont {Woloncewicz},\ and\ \citenamefont
{{\.{Z}}ukowski}}]{2ndPaper}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Das}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Karczewski}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Mandarino}}, \bibinfo
{author} {\bibfnamefont {M.}~\bibnamefont {Markiewicz}}, \bibinfo {author}
{\bibfnamefont {B.}~\bibnamefont {Woloncewicz}},\ and\ \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {{\.{Z}}ukowski}},\ }\bibfield {title}
{\bibinfo {title} {Can single photon excitation of two spatially separated
modes lead to a violation of Bell inequality via weak-field homodyne
measurements?},\ }\href {https://doi.org/10.1088/1367-2630/ac0ffe} {\bibfield
{journal} {\bibinfo {journal} {New Journal of Physics}\ }\textbf {\bibinfo
{volume} {23}},\ \bibinfo {pages} {073042} (\bibinfo {year}
{2021}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Oliver}\ and\ \citenamefont
{Stroud~Jr}(1989)}]{oliver1989}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.~J.}\ \bibnamefont
{Oliver}}\ and\ \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Stroud~Jr}},\ }\bibfield {title} {\bibinfo {title} {Predictions of
violations of Bell's inequality in an 8-port homodyne detector},\ }\href
{https://www.sciencedirect.com/science/article/abs/pii/0375960189900364}
{\bibfield {journal} {\bibinfo {journal} {Phys. Lett. A}\ }\textbf
{\bibinfo {volume} {135}},\ \bibinfo {pages} {407} (\bibinfo {year}
{1989})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Donati}\ \emph {et~al.}(2014)\citenamefont {Donati},
\citenamefont {Bartley}, \citenamefont {Jin}, \citenamefont {Vidrighin},
\citenamefont {Datta}, \citenamefont {Barbieri},\ and\ \citenamefont
{Walmsley}}]{Walmsley14}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Donati}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Bartley}},
\bibinfo {author} {\bibfnamefont {X.-M.}\ \bibnamefont {Jin}}, \bibinfo
{author} {\bibfnamefont {M.-D.}\ \bibnamefont {Vidrighin}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Datta}}, \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Barbieri}},\ and\ \bibinfo {author} {\bibfnamefont {I.~A.}\
\bibnamefont {Walmsley}},\ }\bibfield {title} {\bibinfo {title} {Observing
optical coherence across Fock layers with weak-field homodyne detectors},\
}\href {https://doi.org/10.1038/ncomms6584} {\bibfield {journal} {\bibinfo
{journal} {Nat Commun}\ }\textbf {\bibinfo {volume} {5}},\ \bibinfo {pages}
{5584} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Thekkadath}\ \emph {et~al.}(2020)\citenamefont
{Thekkadath}, \citenamefont {Phillips}, \citenamefont {Bulmer}, \citenamefont
{Clements}, \citenamefont {Eckstein}, \citenamefont {Bell}, \citenamefont
{Lugani}, \citenamefont {Wolterink}, \citenamefont {Lita}, \citenamefont
{Nam}, \citenamefont {Gerrits}, \citenamefont {Wade},\ and\ \citenamefont
{Walmsley}}]{Walmsley20}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.~S.}\ \bibnamefont
{Thekkadath}}, \bibinfo {author} {\bibfnamefont {D.~S.}\ \bibnamefont
{Phillips}}, \bibinfo {author} {\bibfnamefont {J.~F.~F.}\ \bibnamefont
{Bulmer}}, \bibinfo {author} {\bibfnamefont {W.~R.}\ \bibnamefont
{Clements}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Eckstein}},
\bibinfo {author} {\bibfnamefont {B.~A.}\ \bibnamefont {Bell}}, \bibinfo
{author} {\bibfnamefont {J.}~\bibnamefont {Lugani}}, \bibinfo {author}
{\bibfnamefont {T.~A.~W.}\ \bibnamefont {Wolterink}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Lita}}, \bibinfo {author} {\bibfnamefont
{S.~W.}\ \bibnamefont {Nam}}, \bibinfo {author} {\bibfnamefont
{T.}~\bibnamefont {Gerrits}}, \bibinfo {author} {\bibfnamefont {C.~G.}\
\bibnamefont {Wade}},\ and\ \bibinfo {author} {\bibfnamefont {I.~A.}\
\bibnamefont {Walmsley}},\ }\bibfield {title} {\bibinfo {title} {Tuning
between photon-number and quadrature measurements with weak-field homodyne
detection},\ }\href {https://doi.org/10.1103/PhysRevA.101.031801} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume}
{101}},\ \bibinfo {pages} {031801} (\bibinfo {year} {2020})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Monteiro}\ \emph {et~al.}(2015)\citenamefont
{Monteiro}, \citenamefont {Vivoli}, \citenamefont {Guerreiro}, \citenamefont
{Martin}, \citenamefont {Bancal}, \citenamefont {Zbinden}, \citenamefont
{Thew},\ and\ \citenamefont {Sangouard}}]{Repeater1}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Monteiro}}, \bibinfo {author} {\bibfnamefont {V.~C.}\ \bibnamefont
{Vivoli}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Guerreiro}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Martin}}, \bibinfo
{author} {\bibfnamefont {J.-D.}\ \bibnamefont {Bancal}}, \bibinfo {author}
{\bibfnamefont {H.}~\bibnamefont {Zbinden}}, \bibinfo {author} {\bibfnamefont
{R.~T.}\ \bibnamefont {Thew}},\ and\ \bibinfo {author} {\bibfnamefont
{N.}~\bibnamefont {Sangouard}},\ }\bibfield {title} {\bibinfo {title}
{Revealing genuine optical-path entanglement},\ }\href
{https://doi.org/10.1103/PhysRevLett.114.170504} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {114}},\
\bibinfo {pages} {170504} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Caspar}\ \emph {et~al.}(2020)\citenamefont {Caspar},
\citenamefont {Verbanis}, \citenamefont {Oudot}, \citenamefont {Maring},
\citenamefont {Samara}, \citenamefont {Caloz}, \citenamefont {Perrenoud},
\citenamefont {Sekatski}, \citenamefont {Martin}, \citenamefont {Sangouard},
\citenamefont {Zbinden},\ and\ \citenamefont {Thew}}]{Repeater2}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Caspar}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Verbanis}},
\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Oudot}}, \bibinfo
{author} {\bibfnamefont {N.}~\bibnamefont {Maring}}, \bibinfo {author}
{\bibfnamefont {F.}~\bibnamefont {Samara}}, \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Caloz}}, \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Perrenoud}}, \bibinfo {author} {\bibfnamefont
{P.}~\bibnamefont {Sekatski}}, \bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Martin}}, \bibinfo {author} {\bibfnamefont
{N.}~\bibnamefont {Sangouard}}, \bibinfo {author} {\bibfnamefont
{H.}~\bibnamefont {Zbinden}},\ and\ \bibinfo {author} {\bibfnamefont {R.~T.}\
\bibnamefont {Thew}},\ }\bibfield {title} {\bibinfo {title} {Heralded
distribution of single-photon path entanglement},\ }\href
{https://doi.org/10.1103/PhysRevLett.125.110506} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {125}},\
\bibinfo {pages} {110506} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Sangouard}\ \emph {et~al.}(2011)\citenamefont
{Sangouard}, \citenamefont {Simon}, \citenamefont {de~Riedmatten},\ and\
\citenamefont {Gisin}}]{REV-Repeater}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont
{Sangouard}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Simon}},
\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {de~Riedmatten}},\ and\
\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Gisin}},\ }\bibfield
{title} {\bibinfo {title} {Quantum repeaters based on atomic ensembles and
linear optics},\ }\href {https://doi.org/10.1103/RevModPhys.83.33} {\bibfield
{journal} {\bibinfo {journal} {Rev. Mod. Phys.}\ }\textbf {\bibinfo
{volume} {83}},\ \bibinfo {pages} {33} (\bibinfo {year} {2011})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Bachor}\ and\ \citenamefont
{Ralph}(2019)}]{bachor2019guide}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Bachor}}\ and\ \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Ralph}},\
}\href {https://books.google.pl/books?id=BMTssgEACAAJ} {\emph {\bibinfo
{title} {A Guide to Experiments in Quantum Optics}}}\ (\bibinfo {publisher}
{Wiley},\ \bibinfo {year} {2019})\BibitemShut {NoStop} \bibitem [{\citenamefont {Pan}\ \emph {et~al.}(2012{\natexlab{b}})\citenamefont
{Pan}, \citenamefont {Chen}, \citenamefont {Lu}, \citenamefont {Weinfurter},
\citenamefont {Zeilinger},\ and\ \citenamefont {\ifmmode~\dot{Z}\else
\.{Z}\fi{}ukowski}}]{Pan2012}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.-W.}\ \bibnamefont
{Pan}}, \bibinfo {author} {\bibfnamefont {Z.-B.}\ \bibnamefont {Chen}},
\bibinfo {author} {\bibfnamefont {C.-Y.}\ \bibnamefont {Lu}}, \bibinfo
{author} {\bibfnamefont {H.}~\bibnamefont {Weinfurter}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Zeilinger}},\ and\ \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {\ifmmode~\dot{Z}\else \.{Z}\fi{}ukowski}},\
}\bibfield {title} {\bibinfo {title} {Multiphoton entanglement and
interferometry},\ }\href {https://doi.org/10.1103/RevModPhys.84.777}
{\bibfield {journal} {\bibinfo {journal} {Rev. Mod. Phys.}\ }\textbf
{\bibinfo {volume} {84}},\ \bibinfo {pages} {777} (\bibinfo {year}
{2012}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Clauser}\ and\ \citenamefont {Horne}(1974)}]{CH74}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~F.}\ \bibnamefont
{Clauser}}\ and\ \bibinfo {author} {\bibfnamefont {M.~A.}\ \bibnamefont
{Horne}},\ }\bibfield {title} {\bibinfo {title} {Experimental consequences
of objective local theories},\ }\href
{https://doi.org/10.1103/PhysRevD.10.526} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. D}\ }\textbf {\bibinfo {volume} {10}},\ \bibinfo
{pages} {526} (\bibinfo {year} {1974})}\BibitemShut {NoStop} \bibitem [{Note1()}]{Note1}
\BibitemOpen
\bibinfo {note} {As in the previous section, there is no violation for the
vacuum event, i.e., for $n = m = 0$.}\BibitemShut {Stop} \bibitem [{\citenamefont {Avesani}\ \emph {et~al.}(2021)\citenamefont
{Avesani}, \citenamefont {Tebyanian}, \citenamefont {Villoresi},\ and\
\citenamefont {Vallone}}]{QRNG_appl}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Avesani}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Tebyanian}},
\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Villoresi}},\ and\
\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Vallone}},\ }\bibfield
{title} {\bibinfo {title} {Semi-device-independent heterodyne-based quantum
random-number generator},\ }\bibfield {journal} {\bibinfo {journal}
{Physical Review Applied}\ }\textbf {\bibinfo {volume} {15}},\ \href
{https://doi.org/10.1103/physrevapplied.15.034034}
{10.1103/physrevapplied.15.034034} (\bibinfo {year} {2021})\BibitemShut
{NoStop} \bibitem [{\citenamefont {Farkas}\ \emph {et~al.}(2021)\citenamefont {Farkas},
\citenamefont {Guerrero}, \citenamefont {Cari{\~n}e}, \citenamefont {Ca{\~n}as},\
and\ \citenamefont {Lima}}]{Farkas_2021}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Farkas}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Guerrero}},
\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Cari{\~n}e}}, \bibinfo
{author} {\bibfnamefont {G.}~\bibnamefont {Ca{\~n}as}},\ and\ \bibinfo {author}
{\bibfnamefont {G.}~\bibnamefont {Lima}},\ }\bibfield {title} {\bibinfo
{title} {Self-testing mutually unbiased bases in higher dimensions with
space-division multiplexing optical fiber technology},\ }\bibfield {journal}
{\bibinfo {journal} {Physical Review Applied}\ }\textbf {\bibinfo {volume}
{15}},\ \href {https://doi.org/10.1103/physrevapplied.15.014028}
{10.1103/physrevapplied.15.014028} (\bibinfo {year} {2021})\BibitemShut
{NoStop} \bibitem [{\citenamefont {Yin}\ \emph {et~al.}(2017)\citenamefont {Yin},
\citenamefont {Cao}, \citenamefont {Li}, \citenamefont {Liao}, \citenamefont
{Zhang}, \citenamefont {Ren}, \citenamefont {Cai}, \citenamefont {Liu},
\citenamefont {Li}, \citenamefont {Dai}, \citenamefont {Li}, \citenamefont
{Lu}, \citenamefont {Gong}, \citenamefont {Xu}, \citenamefont {Li},
\citenamefont {Li}, \citenamefont {Yin}, \citenamefont {Jiang}, \citenamefont
{Li}, \citenamefont {Jia}, \citenamefont {Ren}, \citenamefont {He},
\citenamefont {Zhou}, \citenamefont {Zhang}, \citenamefont {Wang},
\citenamefont {Chang}, \citenamefont {Zhu}, \citenamefont {Liu},
\citenamefont {Chen}, \citenamefont {Lu}, \citenamefont {Shu}, \citenamefont
{Peng}, \citenamefont {Wang},\ and\ \citenamefont {Pan}}]{Bell-space}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Yin}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Cao}}, \bibinfo
{author} {\bibfnamefont {Y.-H.}\ \bibnamefont {Li}}, \bibinfo {author}
{\bibfnamefont {S.-K.}\ \bibnamefont {Liao}}, \bibinfo {author}
{\bibfnamefont {L.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont
{J.-G.}\ \bibnamefont {Ren}}, \bibinfo {author} {\bibfnamefont {W.-Q.}\
\bibnamefont {Cai}}, \bibinfo {author} {\bibfnamefont {W.-Y.}\ \bibnamefont
{Liu}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Li}}, \bibinfo
{author} {\bibfnamefont {H.}~\bibnamefont {Dai}}, \bibinfo {author}
{\bibfnamefont {G.-B.}\ \bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont
{Q.-M.}\ \bibnamefont {Lu}}, \bibinfo {author} {\bibfnamefont {Y.-H.}\
\bibnamefont {Gong}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont
{Xu}}, \bibinfo {author} {\bibfnamefont {S.-L.}\ \bibnamefont {Li}}, \bibinfo
{author} {\bibfnamefont {F.-Z.}\ \bibnamefont {Li}}, \bibinfo {author}
{\bibfnamefont {Y.-Y.}\ \bibnamefont {Yin}}, \bibinfo {author} {\bibfnamefont
{Z.-Q.}\ \bibnamefont {Jiang}}, \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {J.-J.}\
\bibnamefont {Jia}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Ren}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {He}}, \bibinfo
{author} {\bibfnamefont {Y.-L.}\ \bibnamefont {Zhou}}, \bibinfo {author}
{\bibfnamefont {X.-X.}\ \bibnamefont {Zhang}}, \bibinfo {author}
{\bibfnamefont {N.}~\bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont
{X.}~\bibnamefont {Chang}}, \bibinfo {author} {\bibfnamefont {Z.-C.}\
\bibnamefont {Zhu}}, \bibinfo {author} {\bibfnamefont {N.-L.}\ \bibnamefont
{Liu}}, \bibinfo {author} {\bibfnamefont {Y.-A.}\ \bibnamefont {Chen}},
\bibinfo {author} {\bibfnamefont {C.-Y.}\ \bibnamefont {Lu}}, \bibinfo
{author} {\bibfnamefont {R.}~\bibnamefont {Shu}}, \bibinfo {author}
{\bibfnamefont {C.-Z.}\ \bibnamefont {Peng}}, \bibinfo {author}
{\bibfnamefont {J.-Y.}\ \bibnamefont {Wang}},\ and\ \bibinfo {author}
{\bibfnamefont {J.-W.}\ \bibnamefont {Pan}},\ }\bibfield {title} {\bibinfo
{title} {Satellite-based entanglement distribution over 1200 kilometers},\
}\href {https://doi.org/10.1126/science.aan3211} {\bibfield {journal}
{\bibinfo {journal} {Science}\ }\textbf {\bibinfo {volume} {356}},\ \bibinfo
{pages} {1140} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wengerowsky}\ \emph {et~al.}(2019)\citenamefont
{Wengerowsky}, \citenamefont {Joshi}, \citenamefont {Steinlechner},
\citenamefont {Zichi}, \citenamefont {Dobrovolskiy}, \citenamefont {van~der
Molen}, \citenamefont {Los}, \citenamefont {Zwiller}, \citenamefont
{Versteegh}, \citenamefont {Mura}, \citenamefont {Calonico}, \citenamefont
{Inguscio}, \citenamefont {H{\"u}bel}, \citenamefont {Bo}, \citenamefont
{Scheidl}, \citenamefont {Zeilinger}, \citenamefont {Xuereb},\ and\
\citenamefont {Ursin}}]{Bell-Submarine}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Wengerowsky}}, \bibinfo {author} {\bibfnamefont {S.~K.}\ \bibnamefont
{Joshi}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Steinlechner}},
\bibinfo {author} {\bibfnamefont {J.~R.}\ \bibnamefont {Zichi}}, \bibinfo
{author} {\bibfnamefont {S.~M.}\ \bibnamefont {Dobrovolskiy}}, \bibinfo
{author} {\bibfnamefont {R.}~\bibnamefont {van~der Molen}}, \bibinfo {author}
{\bibfnamefont {J.~W.~N.}\ \bibnamefont {Los}}, \bibinfo {author}
{\bibfnamefont {V.}~\bibnamefont {Zwiller}}, \bibinfo {author} {\bibfnamefont
{M.~A.~M.}\ \bibnamefont {Versteegh}}, \bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Mura}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Calonico}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Inguscio}},
\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {H{\"u}bel}}, \bibinfo
{author} {\bibfnamefont {L.}~\bibnamefont {Bo}}, \bibinfo {author}
{\bibfnamefont {T.}~\bibnamefont {Scheidl}}, \bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Zeilinger}}, \bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Xuereb}},\ and\ \bibinfo {author} {\bibfnamefont
{R.}~\bibnamefont {Ursin}},\ }\bibfield {title} {\bibinfo {title}
{Entanglement distribution over a 96-km-long submarine optical fiber},\
}\href {https://doi.org/10.1073/pnas.1818752116} {\bibfield {journal}
{\bibinfo {journal} {Proceedings of the National Academy of Sciences}\
}\textbf {\bibinfo {volume} {116}},\ \bibinfo {pages} {6684} (\bibinfo {year}
{2019})},\ \Eprint
{https://arxiv.org/abs/https://www.pnas.org/content/116/14/6684.full.pdf}
{https://www.pnas.org/content/116/14/6684.full.pdf} \BibitemShut {NoStop} \bibitem [{\citenamefont {Grangier}\ \emph {et~al.}(1988)\citenamefont
{Grangier}, \citenamefont {Potasek},\ and\ \citenamefont {Yurke}}]{GPY}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Grangier}}, \bibinfo {author} {\bibfnamefont {M.~J.}\ \bibnamefont
{Potasek}},\ and\ \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{Yurke}},\ }\bibfield {title} {\bibinfo {title} {Probing the phase coherence
of parametrically generated photon pairs: A new test of Bell's
inequalities},\ }\href {https://doi.org/10.1103/PhysRevA.38.3132} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume}
{38}},\ \bibinfo {pages} {3132} (\bibinfo {year} {1988})}\BibitemShut
{NoStop} \end{thebibliography}
\appendix
\section{Hardy's argument vs ours} \label{s:hardyexp}
In discussing Hardy's version of the gedanken-experiment here, we shall use a slightly different initial state of the beams $b_j$, namely
\begin{equation} \label{eq:InStateHARDY}
\sqrt{1-p}\ket{00}_{b_1,b_2} + \sqrt{\frac{p}{2}} \left(\ket{01}_{b_1,b_2} + i\ket{10}_{b_1,b_2} \right). \end{equation} This state was used by Hardy.
It differs from $\ket{\psi(p)}$ of formula (\ref{eq:InState}) by a trivial $\pi/2$ phase shift in beam $b_1$.
We decided to use (\ref{eq:InState}) in the main text because in that case the formulas for the probabilities become symmetric with respect to an Alice-Bob interchange, and thus the optimal settings also acquire a fully symmetric form.
Hardy \cite{Hardy94} considered the following measurement settings for both Alice and Bob.
\textit{Local settings $U_j$:} There is no beamsplitter in mode $b_j$, or the beamsplitter BS$_j$ has $100\%$ transmittance. If a single photon is detected in detector ${d_j}$ (in this case we have $b_j \equiv d_j$), then the corresponding outcome is recorded as $U_j = 1$; otherwise it is $U_j = 0$. The definition of the $U$ event is effectively the same if one additionally switches \textit{off} the local oscillator, as local-oscillator photons cannot reach the $d$ detectors when the beamsplitter is removed. Thus this definition is consistent with ours for the \textit{off} setting.
\textit{Local settings $F_j$:} The beamsplitter BS$_j$ is a 50-50 one, and the input state $\ket{\psi(p)}$ interferes with the auxiliary coherent beams $\ket{\alpha_j}$, for $j = 1,2$. If precisely a single photon is detected in ${d_j}$ and no photon clicks in ${c_j}$, then the event is recorded by Hardy as $F_j = 1$. This is also consistent with our definition of the \textit{on} setting, and the result considered by us is $F_j=1$.
\textit{CH inequality:} The reasoning given by Hardy can be linked with CH inequalities as follows. For the considered events, local realism demands: \begin{eqnarray}\label{eq:Hardy_CH0}
&& -1 \leq P(F_1 = 1, F_2 = 1) + P(F_1 = 1, U_2 = 1) \nonumber \\
&& \hspace{0.65cm} + P(U_1 = 1, F_2 = 1)
- P(U_1 = 1, U_2 = 1) \nonumber \\
&& \hspace{2cm} - P(F_1 = 1) - P(F_2 = 1)\leq 0, \end{eqnarray} where $P(X_1 = 1, Y_2 = 1)$ denotes the probability of obtaining the outcomes $X_1 = 1$ and $Y_2 = 1$ in joint measurements $X_1$ and $Y_2$ by two spacelike separated observers, Alice and Bob, with $X,Y \in \{U, F\}$. The above inequality is equivalent to \begin{eqnarray}\label{eq:Hardy_CH}
&& -1 \leq P(F_1 = 1, F_2 = 1) - P(F_1 = 1, U_2 = 0) \nonumber \\
&& \hspace{0.65cm} - P(U_1 = 0, F_2 = 1) - P(U_1 = 1, U_2 = 1) \leq 0.\nonumber\\
\end{eqnarray}
This can be shown using the fact that $P(A) - P(A, B)= P(A, \bar{B})$, where $\bar{B}$ is the event opposite to $B$. This identity is applied here to the events $U_j$. The event opposite to $U_j=1$ is $U_j=0$, which simply means that no photon is detected in $d_j$.
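For completeness, this step can be spelled out: applying the identity twice gives
\begin{eqnarray}
P(F_1 = 1, U_2 = 1) - P(F_1 = 1) &=& - P(F_1 = 1, U_2 = 0), \nonumber \\
P(U_1 = 1, F_2 = 1) - P(F_2 = 1) &=& - P(U_1 = 0, F_2 = 1), \nonumber
\end{eqnarray}
and substituting these two relations into the middle expression of (\ref{eq:Hardy_CH0}) yields precisely the middle expression of (\ref{eq:Hardy_CH}).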
The right-hand side of the new inequality can be rewritten as \begin{eqnarray}\label{eq:Hardy_CH-2}
P(F_1 = 1, F_2 = 1) \leq P(F_1 = 1, U_2 = 0) \nonumber \\
+ P(U_1 = 0, F_2 = 1) + P(U_1 = 1, U_2 = 1).
\end{eqnarray} This is equivalent to Hardy's paradox: in the local realistic case, if the three right-hand-side probabilities are zero, then the left-hand-side one must be zero as well.
In the quantum case one seeks situations in which all three right-hand-side probabilities are zero while $P_{quantum}(F_1 = 1, F_2 = 1)>0$. Hardy obtained a non-zero $P_{quantum}$ for his settings: \begin{equation}
CH_{Hardy} = P_{quantum}(F_1 = 1, F_2 = 1) = \frac{e^{-\frac{p}{1-p}} p^2}{16 (1-p)}. \end{equation} He chose the input coherent-state amplitudes (their phases are now included) to be $\alpha_1 = - \sqrt{\frac{p}{2(1-p)}} $ for mode ${a_1}$ and $\alpha_2 = i\sqrt{\frac{p}{2(1-p)}} $ for ${a_2}$. This applies to his $F$ settings, our \textit{on} ones. This choice makes the other three probabilities vanish. The probability $P_{quantum}(F_1 = 1, F_2 = 1)$ gives the value of the CH expression (\ref{CHin}) for the Hardy approach. A plot of it is given in Fig. \ref{CH_max}.
Fig. \ref{CH_max} shows that for $p$ close to $1$ the value of $CH_{Hardy}$ is minuscule. We have $P_{quantum} < 10^{-6}$ for $p > 0.922$. Thus with this approach it is experimentally impossible to detect the non-classicality of the single-photon state for such values of $p$. Still, the ideal prediction $P_{quantum} > 0$ holds for the entire range $0<p<1$, and hence, in principle, Hardy's approach can detect the non-classicality of the state (\ref{eq:InState}) for any $p$ within this range, though, tellingly, not for $p=1$.
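As an illustrative aside (our own numerical sketch, not part of Hardy's analysis; the function name \texttt{ch\_hardy} is ours), the decay of $CH_{Hardy}$ with $p$ is easy to tabulate:

```python
from math import exp

def ch_hardy(p):
    # Hardy's value of the CH expression:
    # CH = exp(-p / (1 - p)) * p^2 / (16 * (1 - p)), valid for 0 < p < 1.
    return exp(-p / (1.0 - p)) * p ** 2 / (16.0 * (1.0 - p))

# The violation peaks at a modest value (about 0.013 near p ~ 0.6)
# and becomes minuscule as p approaches 1.
for p in (0.3, 0.6, 0.9, 0.99):
    print(p, ch_hardy(p))
```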
\begin{figure}
\caption{
Schematic diagram of inefficient detector.
}
\label{ineffi_detector}
\end{figure} All that was said above points to the fact that the method directly employing the Hardy paradox uses a constrained version of the CH inequality, and is thus less effective in detecting violations of local realism than the one we present in the main text. This is not a criticism of Hardy's result, as one of his aims was to show an application of his paradox.
\section{Modeling detector inefficiency}\label{inefficient_detector} We model the inefficiency of a detector by introducing an \textit{imaginary} additional beam splitter of transmissivity $\eta$ in each of the modes $c_j$ and $d_j$.
Denoting by $u_j$ and $v_j$ the two other input modes of the imaginary beam splitters, as shown in Fig. \ref{ineffi_detector}, one has
\begin{eqnarray} \begin{pmatrix} \hat x_j \\ \hat y_j \end{pmatrix} = \begin{pmatrix} \sqrt{\eta} & \sqrt{1 - \eta} \\ -\sqrt{1 - \eta} & \sqrt{\eta} \end{pmatrix} \begin{pmatrix} \hat x'_j \\ \hat y'_j \end{pmatrix}, \end{eqnarray}
where $x = c,d$ and $y = u, v$; here $u'_j$ and $v'_j$ are the loss modes of the imaginary beam splitters modelling the inefficiency.
Now, the joint probability of getting $(k,l;r,s)$ photons in modes $c'_1, d'_1; c'_2, d'_2$ is given by \begin{widetext} \begin{eqnarray} P(k,l;r,s)_{c'_1, d'_1; c'_2, d'_2} = \text{tr}\big(\ket{k,l}\bra{k,l}_{c'_1, d'_1} \otimes \ket{r,s}\bra{r,s}_{c'_2, d'_2} \otimes I_{u'_1v'_1} \otimes I_{u'_2v'_2} \ket{\Psi_{in}(p)}\bra{\Psi_{in}(p)}\big), \end{eqnarray} and the local probability is \begin{eqnarray}\label{eq:local_loss_prob} P(k,l)_{c'_1, d'_1} = \text{tr}\big(\ket{k,l}\bra{k,l}_{c'_1, d'_1} \otimes I_{u'_1v'_1} \ket{\Psi_{in}(p)}\bra{\Psi_{in}(p)}\big), \end{eqnarray} \end{widetext} where $I_{u'_jv'_j}$, $j = 1,2$, are the identity operators in the loss modes. One can write Eq. (\ref{eq:local_loss_prob}) as \begin{eqnarray}
P(k,l)_{c'_1, d'_1} = \sum_{n,m = 0}^\infty\big|(\bra{k,l}_{c'_1, d'_1} \otimes \bra{n,m}_{u'_1v'_1} ) \ket{\Psi_{in}(p)}\big|^2,~~~~~~ \end{eqnarray} where we use the fact that $I_{u'_1v'_1} = \sum_{n,m = 0}^\infty \ket{n,m}\bra{n,m}_{u'_1v'_1}$.
Now expand the state $\ket{k,l}_{c'_1, d'_1} \otimes \ket{n,m}_{u'_1v'_1}$ in terms of the input modes \begin{eqnarray} && \ket{k,l}_{c'_1, d'_1} \otimes \ket{n,m}_{u'_1v'_1} \nonumber \\ && = \frac{1}{\sqrt{k! l! n! m!}} ({c'}^\dagger_1)^k ({d'}^\dagger_1)^l ({u'}^\dagger_1)^n ({v'}^\dagger_1)^m \ket{\Omega}~~~~~~ \\ && = \frac{1}{\sqrt{k! l! n! m!}} (\sqrt{\eta}~ c_1^\dagger + \sqrt{1 - \eta} ~u_1^\dagger)^k (\sqrt{\eta} ~d_1^\dagger + \sqrt{1 - \eta}~ v_1^\dagger)^l \nonumber \\ && (\sqrt{\eta} ~u_1^\dagger - \sqrt{1 - \eta}~ c_1^\dagger)^n (\sqrt{\eta} ~v_1^\dagger - \sqrt{1 - \eta} ~d_1^\dagger)^m \ket{\Omega}~~~~~ \label{eq:loss_ex1}\\ && = \frac{(-1)^{m+n}}{\sqrt{k! l! n! m!}}~ \eta^{\frac{k+l}{2}} (1 -\eta)^{\frac{n+m}{2}} (c_1^\dagger)^{k+n} (d_1^\dagger)^{l+m} \ket{\Omega} \label{eq:loss_ex2}\\ && = \frac{(-1)^{m+n}}{\sqrt{k! l! n! m!}}~ \eta^{\frac{k+l}{2}} (1 -\eta)^{\frac{n+m}{2}} \sqrt{(k+n)!} \sqrt{(l+m)!} \nonumber \\ && \hspace{1.2in} \ket{k+n, l+m}_{c_1, d_1} \ket{\Omega}_{u_1v_1}. \end{eqnarray} From Eq. (\ref{eq:loss_ex1}) to (\ref{eq:loss_ex2}), we use the feature of the loss model that there is vacuum in modes $u_1$ and $v_1$, as in Fig. \ref{ineffi_detector}; hence any terms containing $\hat u_1^\dagger$ and $\hat v_1^\dagger$ simply vanish when sandwiched with $\ket{\Psi_{in}(p)}$. Hence Eq. (\ref{eq:local_loss_prob}) reduces to \begin{eqnarray} && P(k,l)_{c'_1, d'_1} = \sum_{n,m = 0}^\infty \binom{k+n}{k} \binom{l+m}{l} \nonumber \\ && \hspace{1in} \eta^{k+l} (1 - \eta)^{n+m} P(k+n, l+m)_{c_1, d_1}.~~~~~~ \end{eqnarray} Putting $P(k,l)_{c'_1, d'_1} \equiv P_{\eta}(k,l)_{c_1, d_1}$, we obtain Eq. (\ref{localeta}), and similarly Eq. (\ref{jointeta}).
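A quick one-mode sanity check of this binomial smearing (our own sketch; the helper names \texttt{apply\_loss} and \texttt{poisson} are ours): losses with efficiency $\eta$ should map a Poissonian photon-number distribution of mean $\mu$ onto a Poissonian of mean $\eta\mu$.

```python
from math import comb, exp, factorial

def apply_loss(P, eta, n_max=80):
    # One-mode analogue of the loss formula derived above:
    # P_eta(k) = sum_n C(k + n, k) * eta^k * (1 - eta)^n * P(k + n).
    return lambda k: sum(comb(k + n, k) * eta ** k * (1.0 - eta) ** n * P(k + n)
                         for n in range(n_max))

def poisson(mu):
    return lambda k: exp(-mu) * mu ** k / factorial(k)

eta, mu = 0.6, 2.3
P_out = apply_loss(poisson(mu), eta)
# P_out agrees with poisson(eta * mu), as expected for binomial loss.
```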
\section{Probabilities for arbitrary number of photo-detection events}\label{appen:arb_prob}
The joint probability of detecting an arbitrary number of photons, with $n_j$ photons detected in mode $c_j$ and $m_j$ photons in mode $d_j$ for $j = 1,2$, is given by
\begin{eqnarray} && P(n_1, m_1; n_2,m_2)_{c_1,d_1;c_2, d_2} = \frac{e^{-\alpha _1^2-\alpha _2^2}}{m_1! m_2! n_1! n_2!} R_1^{m_1-1} R_2^{m_2-1} \times \nonumber \\ && \hspace{1em} \left(1-R_1\right){}^{n_1-1} \left(1-R_2\right){}^{n_2-1} \alpha _1^{2 \left(m_1+n_1-1\right)} \alpha _2^{2 \left(m_2+n_2-1\right)} \times \nonumber \\ && \hspace{0.5em} \Big( \alpha _1^2 \alpha _2^2 (1-p) \left(1-R_1\right) R_1 \left(1-R_2\right) R_2 \nonumber \\ && \hspace{1em} +~ \frac{p}{2} \alpha _1^2 \left(1-R_1\right) R_1 \left(m_2 \left(1-R_2\right)-n_2 R_2\right){}^2 \nonumber \\ && \hspace{1em} +~ \frac{p}{2} \alpha _2^2 \left(1-R_2\right) R_2 \left(m_1 \left(1-R_1\right)-n_1 R_1\right){}^2 \nonumber \\ && \hspace{1em} - \sqrt{2} \alpha _1 \alpha _2^2 \sqrt{p(1-p)} \sqrt{R_1(1-R_1)} \left(1-R_2\right) R_2 \times \nonumber \\ && \hspace{12em} \left(m_1 \left(1-R_1\right)-n_1 R_1\right)\sin (\phi_1) \nonumber \\ && \hspace{1em} - \sqrt{2} \alpha _1^2 \alpha _2 \sqrt{p(1-p)} \left(1-R_1\right) R_1 \sqrt{R_2(1-R_2)} \times \nonumber \\ && \hspace{12em} \left(m_2 \left(1-R_2\right)-n_2 R_2\right)\sin (\phi_2) \nonumber \\ && \hspace{1em} +~ p \alpha _1 \alpha _2 \sqrt{R_1(1-R_1)} \sqrt{R_2(1-R_2)} \cos (\phi_1 - \phi_2) \times \nonumber \\ && \hspace{3em} \left(m_1 \left(1-R_1\right)-n_1 R_1\right) \left(m_2 \left(1-R_2\right)-n_2 R_2\right) \Big). \label{Pab_n1m1n2m2} \end{eqnarray}
The local probability for Alice is \begin{eqnarray}\label{Pa_nm} && P(n,m)_{c_1,d_1} = \frac{1}{m! n!}e^{-\alpha _1^2} R_1{}^{m-1} (1-R_1)^{n-1} \alpha _1^{2 (m+n-1)} \nonumber \\ && \hspace{-1.3em} \Big(-\sqrt{2} \alpha _1 \sqrt{(1-p) p} \sqrt{\left(1-R_1\right) R_1} \sin \left(\phi _1\right) \left(m \left(1-R_1\right) -n R_1\right) \nonumber \\ && +~ \frac{p}{2} \left(m \left(1-R_1\right)-n R_1\right){}^2+\frac{1}{2} \alpha _1^2 (2-p) \left(1-R_1\right) R_1\Big). \end{eqnarray} The local probability for Bob, $P(n,m)_{c_2,d_2}$, is exactly the same as (\ref{Pa_nm}), with $1 \leftrightarrow 2$.
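As a consistency check of Eq. (\ref{Pa_nm}) (our own verification sketch, with arbitrarily chosen parameter values; the function name \texttt{p\_local} is ours), the local probabilities should be non-negative and sum to one:

```python
from math import exp, factorial, sin, sqrt

def p_local(n, m, alpha, R, p, phi):
    # Direct transcription of the local probability P(n, m)_{c_1, d_1}.
    pref = (exp(-alpha ** 2) * R ** (m - 1) * (1.0 - R) ** (n - 1)
            * alpha ** (2 * (m + n - 1)) / (factorial(m) * factorial(n)))
    D = m * (1.0 - R) - n * R  # the recurring combination m(1 - R) - n R
    bracket = (-sqrt(2.0) * alpha * sqrt((1.0 - p) * p)
               * sqrt((1.0 - R) * R) * sin(phi) * D
               + 0.5 * p * D ** 2
               + 0.5 * alpha ** 2 * (2.0 - p) * (1.0 - R) * R)
    return pref * bracket

alpha, R, p, phi = 1.1, 0.37, 0.8, 0.9  # arbitrary admissible values
total = sum(p_local(n, m, alpha, R, p, phi)
            for n in range(30) for m in range(30))
# total comes out ~1, confirming the distribution is normalized.
```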
\end{document}
\begin{document}
\title{Bifunctor cohomology and Cohomological finite generation for reductive groups} \sloppy \begin{abstract} Let $G$ be a reductive linear algebraic group over a field $k$. Let $A$ be a finitely generated commutative $k$-algebra on which $G$ acts rationally by $k$-algebra automorphisms. Invariant theory tells us that the ring of invariants $A^G=H^0(G,A)$ is finitely generated. We show that in fact the full cohomology ring $H^*(G,A)$ is finitely generated. The proof is based on the strict polynomial bifunctor cohomology classes constructed in \cite{Touze}. We also continue the study of the bifunctor cohomology of $\Gamma^*(\mathop{\mathfrak{gl}}\nolimits^{(1)})$. \end{abstract}
\section{Introduction} Consider a linear algebraic group $G$, or linear algebraic group scheme $G$, defined over a field $k$. So $G$ is an affine group scheme whose coordinate algebra $k[G]$ is finitely generated as a $k$-algebra. We say that $G$ has the cohomological finite generation property (CFG) if the following holds. Let $A$ be a finitely generated commutative $k$-algebra on which $G$ acts
rationally by $k$-algebra automorphisms. (So $G$ acts from the right on $\mathop{\mathrm{Spec}}\nolimits(A)$.) Then the cohomology ring $H^*(G,A)$ is finitely generated as a $k$-algebra.
Here, as in \cite[I.4]{Jantzen}, we use the cohomology introduced by Hochschild, also known as `rational cohomology'.
Our main result confirms a conjecture of the senior author: \begin{Theorem}\label{reductiveCFG} Any reductive linear algebraic group over $k$ has property (CFG). \end{Theorem} The proof will be based on the `lifted' universal classes \cite{Touze} constructed by the junior author for this purpose. Originally \cite{Touze} was the end of the proof, but for the purpose of exposition we have changed the order.
If the field $k$ has characteristic zero, then the theorem just reiterates a standard fact in invariant theory. Indeed the reductive group is then linearly reductive and rational cohomology vanishes in higher degrees for any linearly reductive group.
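For orientation only (this example is ours and plays no role in the argument): the multiplicative group is diagonalizable, hence linearly reductive in \emph{every} characteristic, and already illustrates the vanishing. Every rational $\mathbb{G}_m$-module is a direct sum of weight spaces, whence $H^i(\mathbb{G}_m,M)=0$ for $i>0$. If $\mathbb{G}_m$ acts on $A=k[x,y]$ with $x$, $y$ of weights $1$, $-1$, then
$$H^*(\mathbb{G}_m,A)=H^0(\mathbb{G}_m,A)=A^{\mathbb{G}_m}=k[xy],$$
which is visibly a finitely generated $k$-algebra. The whole difficulty below is that reductive groups in positive characteristic are geometrically reductive but not linearly reductive.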
So we further assume $k$ has positive characteristic $p$. In this introduction we will also take $k$ algebraically closed. (One easily reduces to this case, cf.\ \cite[Lemma 2.3]{reductive}, \cite[I 4.13]{Jantzen}, \cite{Waterhouse}.) We will say that $G$ acts on the algebra $A$ if $G$ acts
rationally by $k$-algebra automorphisms.
Let us say that $G$ has the finite generation property (FG), or a positive solution to Hilbert's 14-th problem, if the following holds. If $G$ acts on a finitely generated commutative $k$-algebra $A$, then
the ring of invariants $A^G=H^0(G,A)$ is finitely generated as a $k$-algebra. Observe that, unlike Hilbert, we do \emph{not} require that $A$ is a domain.
It is obvious that (CFG) implies (FG). We will see that our main result can also be formulated as follows:
\begin{Theorem}\label{FGCFG} A linear algebraic group scheme $G$ over $k$ has property (CFG) if and only if it has property (FG). \end{Theorem}
Let us give some examples. The first example is a finite group $G$, viewed as a discrete algebraic group over $k$. It is well known to have property (FG), \cite[Lemma 2.4]{reductive}, and the proof goes back to Emmy Noether 1926 \cite{noether}. Thus we recover the finite generation theorem of Evens, at least over our field $k$: \begin{Theorem}[Evens 1961 \cite{Evens}]\label{CFGEvens} A finite group has property (CFG), over $k$. \end{Theorem} As our proof of Theorem \ref{FGCFG} does not rely on theorem \ref{CFGEvens}, we get a new proof of \ref{CFGEvens}, albeit much longer than the original proof. Note that the setting of Evens is more general: Instead of a field he allows an arbitrary noetherian base. This suggests a direction for further work.
If $G$ is a linear algebraic group over $k$, we write $G_r$ for its $r$-th Frobenius kernel, the scheme theoretic kernel of the $r$-th iterate $F^r:G\to G^{(r)}$ of the Frobenius homomorphism \cite[I Ch. 9]{Jantzen}. It is easy to see that $G_r$ has property (FG). More generally it is easy to see \cite[Lemma 2.4]{reductive} that any finite group scheme over $k$ has property (FG). (A group scheme is finite if its coordinate ring is a finite dimensional vector space.) Indeed one has \cite[Theorem 3.5]{reductive}: \begin{Theorem}[Friedlander and Suslin 1997 \cite{Friedlander-Suslin}] \label{CFGFS} A finite group scheme has property (CFG). \end{Theorem} But we do not get a new proof of this theorem, as our proof of the main result relies heavily on the specific information in \cite[section 1]{Friedlander-Suslin}. Recall that the theorem of Friedlander and Suslin was motivated by a desire to get a theory of support varieties for infinitesimal group schemes. Our problem has the same origin. Then it started to get a life of its own and became a conjecture.
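To fix ideas (a standard fact, cf.\ \cite[I Ch. 9]{Jantzen}, not needed in the sequel): the $r$-th Frobenius kernel of $\mathbb{G}_a$ has coordinate ring $k[x]/(x^{p^r})$, and
$$k[(\mathop{\mathit{GL}}\nolimits_n)_r]=k[\mathop{\mathit{GL}}\nolimits_n]/\bigl(x_{ij}^{p^r}-\delta_{ij}\mid 1\le i,j\le n\bigr)\;,$$
a finite dimensional vector space, so Frobenius kernels are indeed finite group schemes and Theorem \ref{CFGFS} applies to them.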
It is a theorem of Nagata \cite{Nagata}, \cite[Ch. 2]{Springer} that geometrically reductive groups (or group schemes \cite{Borsari-Santos}) have property (FG). (Springer \cite{Springer} deletes `geometrically' in the terminology.) Conversely, it is elementary \cite[Th. 2.2]{reductive} that property (FG) implies geometric reductivity. (Here it is essential that in property (FG) one allows any finitely generated commutative $k$-algebra on which $G$ acts.) So our main result states that property (CFG) is equivalent to geometric reductivity.
Now Haboush has shown \cite[II.10.7]{Jantzen} that reductive groups are geometrically reductive, and Popov \cite{PopovRed} has shown that a linear algebraic group with property (FG) is reductive. (Popov allows only reduced algebras, so his result is even stronger.) Waterhouse has completed the classification by showing \cite{Waterhouse} that a linear algebraic group scheme $G$ (he calls it an `algebraic affine group scheme') is geometrically reductive exactly when the connected component $G_{\mathop{\mathrm{red}}\nolimits}^o$ of its reduced subgroup $G_{\mathop{\mathrm{red}}\nolimits}$ is reductive. So this is also a characterization of the $G$ with property (CFG).
Let us now give a consequence of (CFG).
We say that $G$ acts on an $A$-module $M$
when it acts rationally on $M$ such that the structure map $A\otimes M\to M$ is a $G$-module map. \begin{Theorem}Let $G$ have property (CFG). Let $G$ act on the finitely generated commutative $k$-algebra $A$ and on the noetherian $A$-module $M$. Then $H^*(G,M)$ is a noetherian $H^*(G,A)$-module. In particular, if $G$ is reductive and $A$ has a good filtration, then $H^*(G,M)$ is a noetherian $A^G$-module, $H^i(G,M)$ vanishes for large $i$, and $M$ has finite good filtration dimension. \end{Theorem} \paragraph{Proof}See \cite[Lemma 3.3, proof of 4.7]{reductive}.
One puts an algebra structure on $A\oplus M$ and uses that $A\otimes k[G/U]$ also has a good filtration. \unskip\nobreak
\hbox{ $\Box$}
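The algebra structure in question is, in our reading of \cite[Lemma 3.3]{reductive}, the square zero extension: one makes $A\oplus M$ a commutative $k$-algebra with $G$-action by
$$(a,m)\cdot(a',m')=(aa',\,am'+a'm)\;,$$
so that $M$ becomes an ideal with $M^2=0$. Then $A\oplus M$ is again finitely generated, and $H^*(G,A\oplus M)=H^*(G,A)\oplus H^*(G,M)$ transports the noetherian statements from the algebra case to the module $M$.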
As special case we mention \begin{Theorem}\label{SvdKthm}Let $G=\mathop{\mathit{GL}}\nolimits_n$, $n\geq1$. Let $G$ act on the finitely generated commutative $k$-algebra $A$ and on the noetherian $A$-module $M$. If $A$ has a good filtration, then $H^*(G,M)$ is a noetherian $A^G$-module, $H^i(G,M)$ vanishes for large $i$, and $M$ has finite good filtration dimension. \end{Theorem}
This theorem is proved directly in \cite{Srinivas vdK}, with functorial resolution of the ideal of the diagonal in a product of Grassmannians. It will be used in our proof of the main theorems.
Now let us start discussing the proof of the main result. First of all one has the following variation on the ancient transfer principle \cite[Chapter Two]{Grosshans book}.
\begin{Lemma}[{\cite[Lemma 3.7]{reductive}}]\label{mylemma} Let $G$ be a linear algebraic group over $k$ with property (CFG). Then any geometrically reductive subgroup scheme $H$ of $G$ also has property (CFG). \end{Lemma}
As every geometrically reductive linear algebraic group scheme is a subgroup scheme of $\mathop{\mathit{GL}}\nolimits_n$ for $n$ sufficiently large, we only have to look at the $\mathop{\mathit{GL}}\nolimits_n$ to prove the main theorems, Theorem \ref{reductiveCFG} and Theorem \ref{FGCFG}. Therefore we further assume $G=\mathop{\mathit{GL}}\nolimits_n$ with $n>1$. (Or $n\geq p$ if you wish.) (In \cite{cohGrosshans} we used $\mathop{\mathit{SL}}\nolimits_n$ instead of $G=\mathop{\mathit{GL}}\nolimits_n$, but also explained that this hardly makes any difference.)
We have $G$ act on $A$ and wish to show $H^*(G,A)$ is finitely generated. If $A$ has a good filtration \cite{Jantzen}, then there is no higher cohomology and invariant theory (Haboush) does the job. A general $A$ has been related to one with a good filtration by Grosshans. He defines a filtration $A_{\leq 0}\subseteq A_{\leq 1}\subseteq\cdots$ on $A$ and embeds the associated graded $\mathop{\mathrm{gr}}\nolimits A$ into an algebra with a good filtration $\mathop{\mathrm{hull}_\nabla}\nolimits\mathop{\mathrm{gr}}\nolimits A$. He shows that $\mathop{\mathrm{gr}}\nolimits A$ and $\mathop{\mathrm{hull}_\nabla}\nolimits\mathop{\mathrm{gr}}\nolimits A$ are also finitely generated and that there is a flat family parametrized by the affine line with special fiber $\mathop{\mathrm{gr}}\nolimits A$ and general fiber $A$. We write ${\cal A}$ for the coordinate ring of the family. It is a graded algebra and one has natural homomorphisms ${\cal A}\to \mathop{\mathrm{gr}}\nolimits A$, ${\cal A}\to A$. Mathieu has shown \cite{Mathieu G}, cf.\ \cite[Lemma 2.3]{cohGrosshans}, that there is an $r>0$ so that $x^{p^r}\in \mathop{\mathrm{gr}}\nolimits A$ for every $x\in \mathop{\mathrm{hull}_\nabla}\nolimits\mathop{\mathrm{gr}}\nolimits A$. We have no bound on $r$, which is the main reason that our results are only qualitative. One sees that $\mathop{\mathrm{gr}}\nolimits A$ is a noetherian module over the $r$-th Frobenius twist $(\mathop{\mathrm{hull}_\nabla}\nolimits\mathop{\mathrm{gr}}\nolimits A)^{(r)}$ of $\mathop{\mathrm{hull}_\nabla}\nolimits\mathop{\mathrm{gr}}\nolimits A$. So we do not quite have the situation of theorem \ref{SvdKthm}, but it is close. We have to untwist.
Untwisting involves $G^{(r)}=G/G_r$ and we end up looking at the Hochschild--Serre spectral sequence $$E_2^{ij}=H^i(G/G_r,H^j(G_r,\mathop{\mathrm{gr}}\nolimits A))\Rightarrow H^{i+j}(G,\mathop{\mathrm{gr}}\nolimits A).$$ One may write $H^i(G/G_r,H^j(G_r,\mathop{\mathrm{gr}}\nolimits A))$ also as $H^i(G,H^j(G_r,\mathop{\mathrm{gr}}\nolimits A)^{(-r)})$. By Friedlander and Suslin $H^*(G_r,\mathop{\mathrm{gr}}\nolimits A)^{(-r)}$ is a noetherian module over the graded algebra $\bigotimes_{i=1}^rS^*((\mathop{\mathfrak{gl}}\nolimits_n)^\#(2p^{i-1}))\otimes \mathop{\mathrm{hull}_\nabla}\nolimits\mathop{\mathrm{gr}}\nolimits A$. Here the $\#$ refers to taking a dual, $S^*$ refers to a symmetric algebra over $k$, and the $(2p^{i-1})$ indicates in what degree one puts a copy of the dual of the adjoint representation $\mathop{\mathfrak{gl}}\nolimits_n$. By the fundamental work \cite{Akin} of Akin, Buchsbaum, Weyman, which is also of essential importance in \cite{Srinivas vdK}, one knows that $\bigotimes_{i=1}^rS^*((\mathop{\mathfrak{gl}}\nolimits_n)^\#(2p^{i-1}))\otimes \mathop{\mathrm{hull}_\nabla}\nolimits\mathop{\mathrm{gr}}\nolimits A$ has a good filtration. So $H^*(G_r,\mathop{\mathrm{gr}}\nolimits A)^{(-r)}$ has finite good filtration dimension and page 2 of our Hochschild--Serre spectral sequence is noetherian over its first column $E^{0*}_2$. By Friedlander and Suslin $H^*(G_r,\mathop{\mathrm{gr}}\nolimits A)^{(-r)}$ is a finitely generated algebra and by invariant theory $E^{0*}_2$ is thus finitely generated, so $E^{**}_2$ is finitely generated. The spectral sequence is one of graded commutative differential graded algebras in characteristic $p$, so the $p$-th power of an even cochain in a page passes to the next page. It easily follows that all pages are finitely generated. As page 2 has only finitely many columns by \ref{SvdKthm}, cf.~\cite[2.3]{Srinivas vdK}, this explains why the abutment $H^*(G,\mathop{\mathrm{gr}}\nolimits A)$ is finitely generated. 
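The assertion about $p$-th powers is just the Leibniz rule: if $x$ is an even element of some page, graded commutativity gives
$$d(x^p)=p\,x^{p-1}\,d(x)=0\qquad(\mathrm{char}\;k=p)\;,$$
so $x^p$ is a cycle and determines a class on the next page; iterating, $p$-th powers of even elements survive to every later page.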
We are getting closer to $H^*(G,A)$.
The filtration $A_{\leq 0}\subseteq A_{\leq 1}\subseteq\cdots$ induces a filtration of the Hochschild complex \cite[I.4.14]{Jantzen} whence a spectral sequence $$E(A):E_1^{ij}=H^{i+j}(G,\mathop{\mathrm{gr}}\nolimits_{-i}A)\Rightarrow H^{i+j}(G, A).$$ It lives in the second quadrant, but as $E^{**}_1$ is a finitely generated $k$-algebra this causes no difficulty with convergence: given $m$ there will be only finitely many nonzero $E_1^{m-i,i}$. (Compare \cite[4.11]{cohGrosshans}. Note that in \cite{cohGrosshans} the $E_1$ page is mistaken for an $E_2$ page.) All pages are again finitely generated, so we would like the spectral sequence to stop, meaning that $E_s^{**}=E_\infty^{**}$ for some finite~$s$. There is a standard method to achieve this \cite{Evens}, \cite{Friedlander-Suslin}. One must find a `ring of operators' acting on the spectral sequence and show that some page is a noetherian module for the chosen ring of operators. As the ring of operators we take $H^*(G,{\cal A})$. Indeed $E(A)$ is acted on by the trivial spectral sequence $E({\cal A})$ whose pages equal $H^*(G,{\cal A})$, see \cite[4.11]{cohGrosshans}. And $H^*(G,{\cal A})$ also acts on our Hochschild--Serre spectral sequence through its abutment. If we can show that one of the pages of the Hochschild--Serre spectral sequence is a noetherian module over $H^*(G,{\cal A})$, then that will do the trick, as then the abutment $H^*(G,\mathop{\mathrm{gr}}\nolimits A)$ is noetherian by \cite[Lemma 1.6]{Friedlander-Suslin}. And this abutment is the first page of $E(A)$.
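For completeness we recall the shape of the stopping argument (standard since \cite{Evens}; bidegrees suppressed, and our indexing is one of several equivalent conventions). Inside $E_2$ one has nested cycles and boundaries
$$0=B_1\subseteq B_2\subseteq B_3\subseteq\cdots\subseteq Z_3\subseteq Z_2\subseteq Z_1=E_2,\qquad E_{r+1}\cong Z_r/B_r\;,$$
all of them modules over the ring of operators. If $E_2$ is noetherian over that ring, the ascending chain of the $B_r$ stabilizes, say $B_r=B_s$ for $r\ge s$. Then for $r>s$ the differential $d_r$ has image $B_r/B_{r-1}=0$, so $E_{r+1}=E_r$, and the spectral sequence stops at page $s+1$.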
Now we are in a situation similar to the one encountered by Friedlander and Suslin. Their problem was `surprisingly elusive'. To make their breakthrough they had to invent strict polynomial functors. Studying the homological algebra of
strict polynomial functors they found universal cohomology classes $e_r\in H^{2p^{r-1}}(G, \mathop{\mathfrak{gl}}\nolimits_n^{(r)})$
with nontrivial restriction to $G_1$. That was enough to get through.
We faced a similar bottleneck. We know from invariant theory and from \cite{Friedlander-Suslin} that page 2 of our
Hochschild--Serre
spectral sequence is noetherian over $H^0(G,(\bigotimes_{i=1}^rS^*((\mathop{\mathfrak{gl}}\nolimits_n)^\#(2p^{i-1}))\otimes {\cal A})^{(r)})$.
But we want it to be noetherian over $H^*(G,{\cal A})$. So if we could factor the homomorphism
$H^0(G,(\bigotimes_{i=1}^rS^*((\mathop{\mathfrak{gl}}\nolimits_n)^\#(2p^{i-1}))\otimes {\cal A})^{(r)})\to E^{0*}_2$ through $H^*(G,{\cal A})$, then that would do it.
The universal classes $e_j$ provide such a factorization on some summands, but they do not seem to help on the rest.
One would like to have universal classes in more degrees so that one can map every
summand of the form
$H^0(G,(\bigotimes_{i=1}^rS^{m_i}((\mathop{\mathfrak{gl}}\nolimits_n)^\#(2p^{i-1}))\otimes {\cal A})^{(r)})$ into the appropriate $H^{2m}(G,{\cal A})$, or even into
$H^{2m}(G,{\cal A}^{G_r})$.
The dual of $S^{m_i}((\mathop{\mathfrak{gl}}\nolimits_n)^\#)^{(r)}$ is $\Gamma^{m_i} (\mathop{\mathfrak{gl}}\nolimits_n^{(r)})$.
Thus one seeks nontrivial classes in $H^{2mp^{i-1}}(G,\Gamma^m (\mathop{\mathfrak{gl}}\nolimits_n^{(r)}))$, to take cup product with. It turns out that
$r=i=1$ is the crucial case and we seek nontrivial classes $c[m]\in H^{2m}(G,\Gamma^m (\mathop{\mathfrak{gl}}\nolimits_n^{(1)}))$.
The construction of such classes $c[m]$ has been a sticking point at least since 2001.
In \cite{cohGrosshans} they were constructed for $\mathop{\mathit{GL}}\nolimits_2$, but one needs them for $\mathop{\mathit{GL}}\nolimits_n$ with $n$ large.
The strict polynomial functors of Friedlander and Suslin do not provide a natural home for this problem, but the
strict polynomial bifunctors \cite{Franjou-Friedlander} of Franjou and Friedlander do.
When the junior author found \cite{Touze}
a construction of nontrivial `lifted' classes $c[m]$, this finished a proof of the conjecture.
We present two proofs. The first proof continues the investigation of bifunctor cohomology in \cite{Touze}
and establishes properties of the $c[m]$ analogous to those employed in \cite{cohGrosshans}.
Then the result follows as in the proof in \cite{cohGrosshans} for $\mathop{\mathit{GL}}\nolimits_2$.
As a byproduct one also obtains extra bifunctor cohomology classes and relations between them.
The second proof needs no more properties of the classes $c[m]$ than those established in \cite{Touze}.
Indeed \cite{Touze} stops exactly where the two arguments start to diverge.
The second proof does not quite factor the homomorphism
$H^0(G,(\bigotimes_{i=1}^rS^*((\mathop{\mathfrak{gl}}\nolimits_n)^\#(2p^{i-1}))\otimes {\cal A})^{(r)})\to E^{0*}_2$ through $H^*(G,{\cal A})$,
but argues by induction on $r$, returning to \cite[section 1]{Friedlander-Suslin} with the new classes in hand. It is not hard to guess which author contributes which proof. The junior author goes first.
\section{Main theorem and cohomological finite generation}\label{three}
We work over a field $k$ of positive characteristic $p$. We keep the notations of \cite{Touze}. In particular, $\mathcal{P}_k(1,1)$ denotes the category of strict polynomial bifunctors of \cite{Franjou-Friedlander}. The main result of part I is theorem \ref{thm-cl-univ}, which states the existence of classes in the cohomology of the bifunctor $\Gamma^*(gl^{(1)})$. By \cite[Thm 1.3]{Touze}, the cohomology of a bifunctor $B$ is related to the cohomology of $GL_{n,k}$ with coefficients in the rational representation $B(k^n,k^n)$ by a map $\phi_{B,n}:H^*_\mathcal{P}(B)\to H^*(GL_{n,k},B(k^n,k^n))$ (natural in $B$ and compatible with cup products). So our main result yields classes in the cohomology of $GL_{n,k}$, actually more classes (and more relations between them) than originally needed \cite[Section 4.3]{cohGrosshans} for the proof of the cohomological finite generation conjecture.
\begin{Theorem}\label{thm-cl-univ} Let $k$ be a field of characteristic $p>0$. There are maps $\psi_\ell: \Gamma^\ell H^*_{\mathcal{P}}(gl^{(1)})\to H^*_{\mathcal{P}}(\Gamma^\ell(gl^{(1)}))$, $\ell\ge 1$ such that \begin{enumerate} \item $\psi_1$ is the identity map. \item For all $\ell\ge 1$ and for all $n\ge p$, the composite $$\Gamma^\ell H^*_{\mathcal{P}}(gl^{(1)})\xrightarrow[]{\psi_\ell} H^*_{\mathcal{P}}(\Gamma^\ell(gl^{(1)}))\xrightarrow[]{\phi_{\Gamma^\ell(gl^{(1)}),n}} H^*(GL_{n,k},\Gamma^\ell(\mathfrak{gl}_n^{(1)}))$$ is injective. In particular, for all $\ell\ge 1$, $\psi_\ell$ is injective. \item For all positive integers $\ell,m$, there are commutative diagrams $$\xymatrix{ H^*_{\mathcal{P}}(\Gamma^{\ell+m}(gl^{(1)}))\ar@{->}[rr]^-{\Delta_{\ell,m\,*}}&& H^*_{\mathcal{P}}(\Gamma^{\ell}(gl^{(1)})\otimes\Gamma^m(gl^{(1)}))\\ \Gamma^{\ell+m}H^*_{\mathcal{P}}(gl^{(1)})\ar@{^{(}->}[rr]^-{\Delta_{\ell,m}}\ar@{->}[u]^{\psi_{\ell+m}}&& \Gamma^{\ell}H^*_{\mathcal{P}}(gl^{(1)})\otimes\Gamma^m H^*_{\mathcal{P}}(gl^{(1)})\ar@{->}[u]^{\psi_{\ell}\cup\psi_{m}}\;, }$$ and $$\xymatrix{ H^*_{\mathcal{P}}(\Gamma^{\ell}(gl^{(1)})\otimes\Gamma^m(gl^{(1)}))\ar@{->}[rr]^-{m_{\ell,m\,*}}&& H^*_{\mathcal{P}}(\Gamma^{\ell+m}(gl^{(1)}))\\ \Gamma^{\ell}H^*_{\mathcal{P}}(gl^{(1)})\otimes\Gamma^m H^*_{\mathcal{P}}(gl^{(1)})\ar@{->}[u]^{\psi_{\ell}\cup\psi_{m}}\ar@{->}[rr]^-{m_{\ell,m}}&& \Gamma^{\ell+m}H^*_{\mathcal{P}}(gl^{(1)})\ar@{->}[u]^{\psi_{\ell+m}}\;, }$$ where $m_{\ell,m}$ and $\Delta_{\ell,m}$ denote the maps induced by the multiplication $\Gamma^\ell\otimes\Gamma^m \to \Gamma^{\ell+m}$ and the diagonal $\Gamma^{\ell+m}\to \Gamma^\ell\otimes\Gamma^m$. \end{enumerate} \end{Theorem}
As a consequence, we obtain that \cite[Th 4.4]{cohGrosshans} is valid for any value of $n$:
\begin{Corollary}\label{cor-univ-class} Let $k$ be a field of positive characteristic. For all $n>1$, there are classes $c[m]\in H^{2m}(GL_{n,k},\Gamma^m(\mathfrak{gl}_n^{(1)}))$ such that \begin{enumerate} \item $c[1]$ is the Witt vector class $e_1$, \item $\Delta_{i,j\,*}(c[i+j])=c[i]\cup c[j]$ for $i,j\ge 1$. \end{enumerate} \end{Corollary}
\begin{proof} Arguing as in \cite[Lemma 1.5]{Touze} we notice that it suffices to prove the statement when $n\ge p$. By \cite[Th 1.3]{Touze}, we have morphisms $$\phi_{\Gamma^m(gl^{(1)}),n}: H^*_\mathcal{P}(\Gamma^m(gl^{(1)}))\to H^*(GL_{n,k},\Gamma^m(\mathfrak{gl}_n^{(1)}))$$ compatible with the cup products, and for $m=1$ the map $\phi_{\Gamma^m(gl^{(1)}),n}$ is an isomorphism. Let $b[1]$ be the pre-image of the Witt vector class by $\phi_{\Gamma^1(gl^{(1)}),n}$. We define $c[m]:=(\phi_{\Gamma^m(gl^{(1)}),n}\circ \psi_m)(b[1]^{\otimes m})$. Then $c[1]$ is the Witt vector class since $\psi_1$ is the identity, and by theorem \ref{thm-cl-univ}(3) the classes $c[i]$ satisfy condition 2. \end{proof}
\begin{Corollary} The cohomological finite generation conjecture (Theorem \ref{reductiveCFG}) holds. \end{Corollary} \begin{proof} Let $G$ be a reductive linear algebraic group acting on a finitely generated commutative $k$-algebra $A$. We want to prove that $H^*(G,A)$ is finitely generated. To do this, it suffices to follow \cite{cohGrosshans} and this is exactly what we do below. We keep the notations of the introduction.
By lemma \ref{mylemma}, the case $G=GL_{n,k}$ suffices. As recalled in the introduction, there exists a positive integer $r$ such that the Hochschild-Serre spectral sequence $$E_2^{ij}=H^i(G/G_r,H^j(G_r,\mathop{\mathrm{gr}}\nolimits A))\Rightarrow H^{i+j}(G,\mathop{\mathrm{gr}}\nolimits A)$$ stops for a finite good filtration dimension reason. Moreover it is a sequence of finitely generated algebras, and its second page is noetherian over its subalgebra $E_2^{0*}$ (all this was first proved in \cite[Prop 3.8]{cohGrosshans}, under some restrictions on the characteristic which were removed in \cite{Srinivas vdK}).
The composite ${\cal A}^{G_r}\hookrightarrow {\cal A} \twoheadrightarrow \mathop{\mathrm{gr}}\nolimits A$ makes $\mathop{\mathrm{gr}}\nolimits A$ into a noetherian module over ${\cal A}^{G_r}$. Hence, by \cite[Thm 1.5]{Friedlander-Suslin} (with $\text{`$C$'}={\cal A}^{G_r}$) and by invariant theory \cite[Thm 16.9]{Grosshans book}, $E_2^{0*}=H^0(G/G_r,H^*(G_r,\mathop{\mathrm{gr}}\nolimits A))$ (hence $E_2^{**}$) is noetherian over $H^0(G/G_r,\bigotimes_{i=1}^r S^*((\mathop{\mathfrak{gl}}\nolimits_n^{(r)})^\#(2p^{i-1}))\otimes{\cal A}^{G_r})$.
Now we use the classes of corollary \ref{cor-univ-class} as in section 4.5 and in the proof of corollary 4.8 of \cite{cohGrosshans}. In this way, we factor the morphism $H^0(G/G_r,\bigotimes_{i=1}^r S^*((\mathop{\mathfrak{gl}}\nolimits_n^{(r)})^\#(2p^{i-1}))\otimes{\cal A}^{G_r})\to E_2^{0*}$ through the map $H^{\mathrm{even}}(G,{\cal A})\to H^0(G/G_r,H^{\mathrm{even}}(G_r,{\cal A}))= E_2^{0\,{\mathrm{even}}}$ (the latter map is induced by restricting the cohomology from $G$ to $G_r$). So $E_2^{**}$ is noetherian over $H^{\mathrm{even}}(G,{\cal A})$. By \cite[Lemma 1.6]{Friedlander-Suslin} (with $\text{`$A$'}=H^{\mathrm{even}}(G,{\cal A})$ and $\text{`$B$'}=k$), we conclude that the map $H^{\mathrm{even}}(G,{\cal A})\to H^*(G,\mathop{\mathrm{gr}}\nolimits A)$ (induced by ${\cal A}\to \mathop{\mathrm{gr}}\nolimits A$) makes $H^*(G,\mathop{\mathrm{gr}}\nolimits A)$ into a noetherian module over $H^{\mathrm{even}}(G,{\cal A})$.
The proof finishes as described in the introduction (or in section 4.11 of \cite{cohGrosshans}): the second spectral sequence $$E(A):E_1^{ij}=H^{i+j}(G,\mathop{\mathrm{gr}}\nolimits_{-i}A)\Rightarrow H^{i+j}(G, A)$$ is a sequence of finitely generated algebras. It is acted on by the trivial spectral sequence $E({\cal A})$ whose pages equal $H^*(G,{\cal A})$. But we have proved that $E_1^{**}$ is noetherian over $H^*(G,{\cal A})$, so by the usual trick (\cite{Evens}, \cite{Friedlander-Suslin} or \cite[Lemma 3.9]{reductive}) the spectral sequence $E(A)$ stops, which proves that $H^*(G,A)$ is finitely generated. \end{proof}
\section{Proof of theorem \ref{thm-cl-univ}}\label{four}
By \cite[Prop 3.21]{Touze}, the divided powers $\Gamma^\ell$ admit a twist compatible coresolution $J_\ell$. So by \cite[Prop 3.18]{Touze}, we have a bicomplex $A(J_\ell)$ whose totalization yields an $H^*_\mathcal{P}$-acyclic coresolution of $\Gamma^\ell(gl^{(1)})$. In particular the homology of the totalization of $H^0_\mathcal{P}(A(J_\ell))$ computes $H^*_\mathcal{P}(\Gamma^\ell(gl^{(1)}))$.
The plan of the proof of theorem \ref{thm-cl-univ} is the following. First, we build the maps $\psi_\ell$. To be more specific, we build maps $\vartheta_\ell$ which send each element of degree $d$ of $\Gamma^\ell(H^*_\mathcal{P}(gl^{(1)}))$ to a homogeneous cocycle of bidegree $(0,d)$ in the bicomplex $H^0_\mathcal{P}(A(J_\ell))$. Our maps $\psi_\ell$ will then be induced by the $\vartheta_\ell$.
Second, we show the relations between the classes on the cochain level. In this step, we encounter the following problem: the cup product of two classes is represented by a cocycle in the bicomplex $H^0_\mathcal{P}(A(J_\ell)\otimes A(J_m))$ while we want to have it represented by a cocycle in $H^0_\mathcal{P}(A(J_\ell\otimes J_m))$. So we have to investigate further the compatibility of the functor $A$ with cup products.
Finally, we prove theorem \ref{thm-cl-univ}(2) by reducing to one parameter subgroups.
\begin{NotationConvention}\label{notasgn} If $\mathfrak{A}$ is an additive category, we denote by $Ch^{\ge 0}(\mathfrak{A})$ (resp. $\text{$p$-}Ch^{\ge 0}(\mathfrak{A})$, resp. $\text{bi-}Ch^{\ge 0}(\mathfrak{A})$) the category of nonnegative cochain complexes (resp. $p$-complexes, resp. bicomplexes) in $\mathfrak{A}$.
If $\mathfrak{A}$ is equipped with a tensor product, then $Ch^{\ge 0}(\mathfrak{A})$ inherits a tensor product. The differential of the tensor product $C\otimes D$ involves a Koszul sign: the restriction of $d_{C\otimes D}$ to $C^i\otimes D^j$ equals $d_C\otimes\mathrm{Id}+(-1)^i \mathrm{Id}\otimes d_D$. The category $\text{$p$-}Ch^{\ge 0}(\mathfrak{A})$ also inherits a tensor product, but the $p$-differential of $C\otimes D$ does not involve any sign: $d_{C\otimes D}=d_C\otimes\mathrm{Id}+\mathrm{Id}\otimes d_D$.
Now we turn to bicomplexes. First, we may view a complex $C^\bullet$ whose terms $C^j$ are chain complexes as a bicomplex $C^{\bullet,\bullet }$ whose object $C^{i,j}$ is the $i$-th object of the complex $C^j$ ({\it i.e.} the complexes $C^j$ are the rows of $C^{\bullet,\bullet }$). Thus we obtain an identification: $$Ch^{\ge 0}(Ch^{\ge 0}(\mathfrak{A}))=\text{bi-}Ch^{\ge 0}(\mathfrak{A})\;.$$ Being a category of cochain complexes, the term on the left hand side has a tensor product. If $C$ is a bicomplex, let us denote by $d_C^{i,j}:C^{i,j}\to C^{i+1,j}$ its first differential, and by $\partial_C^{i,j}:C^{i,j}\to C^{i,j+1}$ its second one. Then one checks that the tensor product on bicomplexes induced by the identification is such that the restriction of $d_{C\otimes D}$ (resp. $\partial_{C\otimes D}$) to $C^{i_1,j_1}\otimes D^{i_2,j_2}$ equals $d_C\otimes\mathrm{Id}+(-1)^{i_1}\mathrm{Id}\otimes d_D$ (resp. $\partial_C\otimes\mathrm{Id}+(-1)^{j_1}\mathrm{Id}\otimes \partial_D$).
We define the totalization $\mathrm{Tot}(C)$ of a bicomplex $C$ with the Koszul sign convention: the restriction of $d_{\mathrm{Tot}(C)}$ to $C^{i,j}$ equals $d_C+(-1)^{i}\partial_C$. If $C$, $D$ are two bicomplexes, there is a canonical isomorphism of complexes: $\mathrm{Tot}(C)\otimes \mathrm{Tot}(D)\simeq \mathrm{Tot}(C\otimes D)$ which sends an element $x\otimes y\in C^{i_1,j_1}\otimes D^{i_2,j_2}$ to $(-1)^{j_1i_2}x\otimes y$. \end{NotationConvention}
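As a sanity check on these conventions (ours, included for the reader's convenience): with the Koszul sign, for $x\otimes y\in C^i\otimes D^j$,
$$d_{C\otimes D}^2(x\otimes y)=d_C^2x\otimes y+(-1)^{i+1}d_Cx\otimes d_Dy+(-1)^{i}d_Cx\otimes d_Dy+x\otimes d_D^2y=0\;,$$
the two middle terms cancelling. For $p$-complexes the sign-free convention is consistent because $d_C\otimes\mathrm{Id}$ and $\mathrm{Id}\otimes d_D$ commute, so in characteristic $p$
$$d_{C\otimes D}^{\,p}=\sum_{k=0}^{p}\binom{p}{k}\,d_C^{\,k}\otimes d_D^{\,p-k}=d_C^{\,p}\otimes\mathrm{Id}+\mathrm{Id}\otimes d_D^{\,p}=0\;,$$
as $\binom{p}{k}\equiv 0\bmod p$ for $0<k<p$.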
\subsection{Construction of the $\psi_\ell$, $\ell\ge 1$}
Let $\ell$ be a positive integer. By \cite[Prop 3.18, Prop 3.21]{Touze} we have a bicomplex $H^0_\mathcal{P}(A(J_\ell))$ whose homology computes the cohomology of the bifunctor $\Gamma^\ell(gl^{(1)})$. We now recall the description of the first two columns of this bicomplex. As in \cite[Section 4]{Touze}, we denote by $A_1$ the $p$-coresolution of $gl^{(1)}$ obtained by precomposing the $p$-complex $T(S^1)$ by the bifunctor $gl$. The symmetric group $\mathfrak{S}_\ell$ acts on the $p$-complex $A_1^{\otimes \ell}$ by permuting the factors of the tensor product (unlike the case of ordinary complexes, the action of $\mathfrak{S}_\ell$ does not involve a Koszul sign since the tensor product of $p$-complexes does not involve any sign). Contracting the $p$-complex $A_1^{\otimes \ell}$ and applying $H^0_\mathcal{P}$, we obtain an action of $\mathfrak{S}_\ell$ on the ordinary complex $H^0_\mathcal{P}((A_1^{\otimes\ell})_{[1]})$. By \cite[Lemma 4.2]{Touze}, the first two columns $H^0_\mathcal{P}(A(J_\ell)^{0,\bullet})\to H^0_\mathcal{P}(A(J_\ell)^{1,\bullet})$ of $H^0_\mathcal{P}(A(J_\ell))$ equal $$ \underbrace{H^0_\mathcal{P}\left((A_1^{\otimes \ell})_{[1]}\right)}_{ \text{column of index $0$}}\xrightarrow[]{\,\prod (1-\tau_i)\,} \underbrace{\bigoplus_{i=0}^{\ell-2}H^0_\mathcal{P}\left((A_1^{\otimes \ell} )_{[1]}\right)}_{\text{column of index $1$}}\;,$$ where $\tau_i\in\mathfrak{S}_\ell$ is the transposition which exchanges $i+1$ and $i+2$ (and with the convention that the second column is null if $\ell=1$). Thus we have: \begin{Lemma}\label{lm-cocycles-col-1} Let $\mathcal{Z}_\ell^{\mathrm{even}}$ be the set of homogeneous cocycles of bidegree $(0,d)$, $d$ even, in the bicomplex $H^0_\mathcal{P}(A(J_\ell))$. Then $\mathcal{Z}_\ell^{\mathrm{even}}$ identifies with the set of even degree cocycles of the complex $H^0_\mathcal{P}((A_1^{\otimes\ell})_{[1]})$ which are invariant under the action of $\mathfrak{S}_\ell$. \end{Lemma}
Now we turn to building a map $\vartheta_\ell:\Gamma^\ell(H^*_\mathcal{P}(gl^{(1)}))\to \mathcal{Z}_\ell^{\mathrm{even}}$. In view of lemma \ref{lm-cocycles-col-1}, it suffices to build a $\mathfrak{S}_\ell$-equivariant map $\vartheta_\ell:H^*_\mathcal{P}(gl^{(1)})^{\otimes\ell} \to H^0_\mathcal{P}((A_1^{\otimes\ell})_{[1]})$.
Let us first recall what we know about $H^*_{\mathcal{P}}(gl^{(1)})$. By \cite[Thm 1.5]{Franjou-Friedlander} and \cite[Th. 4.5]{Friedlander-Suslin}, the graded vector space $H^*_{\mathcal{P}}(gl^{(1)})$ is concentrated in degrees $2i$, $0\le i<p$ and one dimensional in these degrees. Following \cite{SFB}, we denote by $e_1(i)$ a generator of degree $2i$ of this graded vector space. The homology of the complex $H^0_\mathcal{P}(A_{1[1]})$ computes the cohomology of the bifunctor $gl^{(1)}$. Thus we may choose for each integer $i$, $0\le i<p$, a cycle $z_i$ representing the cohomology class $e_1(i)$ in this complex. The cycles $z_i$ determine a graded map $H^*_{\mathcal{P}}(gl^{(1)})\to H^0_\mathcal{P}(A_{1[1]})$. By \cite[Prop 3.3]{Touze}, we may take cup products on the cochain level to obtain for each $\ell\ge 1$ a map $$H^*_{\mathcal{P}}(gl^{(1)})^{\otimes \ell}\to H^0_\mathcal{P}(A_{1[1]})^{\otimes \ell}\xrightarrow[]{\cup} H^0_\mathcal{P}((A_{1[1]})^{\otimes \ell})\;.$$ Moreover we define chain maps $h_\ell:(A_{1[1]})^{\otimes \ell}\to (A_1^{\otimes\ell})_{[1]}$ by iterated use of \cite[Prop 2.7]{Touze}. More specifically, $h_1$ is the identity and $h_\ell=h_{A_1^{\otimes\ell-1},A_1}\circ (h_{\ell-1}\otimes h_1)$.
\begin{Lemma}\label{lm-vartheta} Let $\ell$ be a positive integer and let $\vartheta_\ell$ be the composite $$\vartheta_\ell:= H^*_{\mathcal{P}}(gl^{(1)})^{\otimes \ell}\to H^0_\mathcal{P}((A_{1[1]})^{\otimes \ell})\xrightarrow[]{\,H^0_\mathcal{P}(h_\ell)\,} H^0_\mathcal{P}((A_{1}^{\otimes \ell})_{[1]})\;. $$ Then $\vartheta_\ell$ satisfies the following two properties: \begin{enumerate} \item[(1)] The image of $\vartheta_\ell$ is contained in the set of even degree cocycles of $H^0_\mathcal{P}((A_{1}^{\otimes \ell})_{[1]})$. \item[(2)] $\vartheta_\ell$ is $\mathfrak{S}_\ell$-equivariant. \end{enumerate} \end{Lemma} \begin{proof} The first property is straightforward from the definition of $\vartheta_\ell$. We prove the second one. The map $H^*_{\mathcal{P}}(gl^{(1)})^{\otimes \ell}\to H^0_\mathcal{P}((A_{1[1]})^{\otimes \ell})$ is defined using cup products, hence it is $\mathfrak{S}_\ell$-equivariant. Thus, to prove the lemma, we have to study the map $h_\ell:(A_{1[1]})^{\otimes \ell}\to (A_1^{\otimes\ell})_{[1]}$.
Recall that $h_\ell$ is built by iterated uses of \cite[Prop 2.7]{Touze}. Thus, if we define the graded object $p(A_1,\dots,A_1)=\bigoplus_{i_1,\dots,i_\ell}\bigotimes_{s=1}^\ell A_1^{i_s p}$ with the component $\bigotimes_{s=1}^\ell A_1^{i_s p}$ in degree $2(\sum i_s)$, we have well defined inclusions of $p(A_1,\dots,A_1)$ into the complexes $(A_{1[1]})^{\otimes \ell}$ and $(A_1^{\otimes\ell})_{[1]}$. Moreover $h_\ell$ fits into a commutative diagram: $$\xymatrix{ (A_{1[1]})^{\otimes \ell}\ar@{->}[rr]^-{h_\ell}&&(A_1^{\otimes\ell})_{[1]}\\ p(A_1,\dots,A_1)\ar@{^{(}->}[u]^{(a)}\ar@{=}[rr]&&p(A_1,\dots,A_1)\ar@{^{(}->}[u]^{(b)}\,. }$$ Let $\mathfrak{S}_\ell$ act on $p(A_1,\dots,A_1)$ by permuting the factors of the tensor product, on $(A_{1[1]})^{\otimes \ell}$ by permuting the factors of the tensor product with a Koszul sign, and on $(A_1^{\otimes\ell})_{[1]}$ by permuting the factors of the tensor product $A_1^{\otimes\ell}$ (without sign). Then the map $(b)$ is equivariant, and the map $(a)$ is also equivariant since $p(A_1,\dots,A_1)$ is concentrated in even degrees. The map $h_\ell$ is \emph{not} equivariant. However, by definition the equivariant map $H^*_{\mathcal{P}}(gl^{(1)})^{\otimes \ell}\to H^0_\mathcal{P}((A_{1[1]})^{\otimes \ell})$ factors through $H^0_\mathcal{P}(p(A_1,\dots,A_1))$ so that postcomposition of this map by $H^0_\mathcal{P}(h_\ell)$ (ie: the map $\vartheta_\ell$) is in fact equivariant. \end{proof}
\begin{Notation} By lemmas \ref{lm-cocycles-col-1} and \ref{lm-vartheta}, for all $\ell\ge 1$, the map $\vartheta_\ell$ induces a map $\Gamma^\ell(H^*_\mathcal{P}(gl^{(1)}))\to \mathcal{Z}^{\mathrm{even}}_\ell$. We denote by $\psi_\ell$ the composite $$\psi_\ell:= \Gamma^\ell( H^*_{\mathcal{P}}(gl^{(1)})) \to \mathcal{Z}_\ell^{\mathrm{even}} \to H^*_\mathcal{P}(\Gamma^\ell(gl^{(1)})) \;. $$ \end{Notation}
\begin{Lemma} The map $\psi_1$ equals the identity map. \end{Lemma} \begin{proof} For $\ell=1$, $\vartheta_1$ is just the map $H^*_{\mathcal{P}}(gl^{(1)})\to \mathop{\mathrm{Hom}}\nolimits(\Gamma^{p}(gl),A_{1[1]})$ which sends the generator $e_1(i)$ of $H^{2i}_{\mathcal{P}}(gl^{(1)})$ to the cycle $z_i$ representing this generator. Moreover, by definition of $z_i$, the map $\mathcal{Z}_1^{\mathrm{even}}\twoheadrightarrow H^*_{\mathcal{P}}(gl^{(1)})$ sends $z_i$ to $e_1(i)$. Thus, for all $i$, $\psi_1$ sends $e_1(i)$ to itself. \end{proof}
\subsection{Proof of theorem \ref{thm-cl-univ}(3)}
Let $\mathcal{P}_k$ be the strict polynomial functor category and let $\mathcal{TP}_k$ be the twist compatible subcategory \cite[Def 3.9]{Touze}. Before proving theorem \ref{thm-cl-univ}(3), we need to study further properties of the functor $A:Ch^{\ge 0}(\mathcal{TP}_k)\to \text{bi-}Ch^{\ge 0}(\mathcal{P}_k(1,1))$ \cite[Def. 3.17]{Touze}. Recall that $A$ is defined as the composite of the following three functors: \begin{enumerate} \item The `Troesch coresolution functor' \cite[Prop 3.13]{Touze} $$ T: Ch^{\ge 0}(\mathcal{TP}_k)\to \text{$p$-}Ch^{\ge 0}(Ch^{\ge 0}(\mathcal{P}_k))\;,$$ \item The contraction functor $$ -_{[1]}: \text{$p$-}Ch^{\ge 0}(Ch^{\ge 0}(\mathcal{P}_k))\to Ch^{\ge 0}(Ch^{\ge 0}(\mathcal{P}_k))\;,$$ \item Precomposition by the bifunctor $gl$ $$ -\circ gl: Ch^{\ge 0}(Ch^{\ge 0}(\mathcal{P}_k))\to Ch^{\ge 0}(Ch^{\ge 0}(\mathcal{P}_k(1,1)))= \text{bi-}Ch^{\ge 0}(\mathcal{P}_k(1,1))\;.$$ \end{enumerate}
All the categories coming into play in the definition of $A$ are equipped with tensor products (cf. notations and sign conventions \ref{notasgn}). The functors $T$ and $-\circ gl$ commute with tensor products, but $-_{[1]}$ does not. As a result, if $F,G$ are homogeneous strict polynomial functors of respective degree $f,g$ and with respective twist compatible coresolutions $J_F,J_G$, we have two (in general non isomorphic) $H^*_\mathcal{P}$-acyclic coresolutions of the tensor product $F\otimes G$ at our disposal: $$\mathrm{Tot}(A(J_F))\otimes \mathrm{Tot}(A(J_G))\quad\text{and} \quad\mathrm{Tot}(A(J_F\otimes J_G))\;.$$
Now the problem is the following. On the one hand, cycles representing cup products of classes in the cohomology of $F$ and $G$ are easily identified using the first complex.
Indeed, by \cite[Prop 3.3]{Touze}, the cup product $$H^*_\mathcal{P}(F(gl^{(1)}))\otimes H^*_\mathcal{P}(G(gl^{(1)}))\to H^*_\mathcal{P}(F(gl^{(1)})\otimes G(gl^{(1)})) $$ is defined at the cochain level by sending cocycles $x$ and $y$ respectively in $\mathop{\mathrm{Hom}}\nolimits(\Gamma^{pf}(gl),\mathrm{Tot} (A(J_F)))$ and $\mathop{\mathrm{Hom}}\nolimits(\Gamma^{pg}(gl),\mathrm{Tot} (A(J_G)))$ to the cocycle $$x\cup y:= (x\otimes y)\circ \Delta_{pf,pg}\in \mathop{\mathrm{Hom}}\nolimits(\Gamma^{p(f+g)}(gl),\mathrm{Tot} (A(J_F))\otimes \mathrm{Tot} (A(J_G)))\,,$$ where $\Delta_{pf,pg}$ is the diagonal map $\Gamma^{p(f+g)}(gl)\to \Gamma^{pf}(gl)\otimes \Gamma^{pg}(gl)$. But on the other hand, by functoriality of $A$, if $E\in\mathcal{P}_k$ then the effect of a morphism $E\to F\otimes G$ is easily computed in $H^0_\mathcal{P}(\mathrm{Tot}(A(J_F\otimes J_G)))$. So we want to be able to identify cup products in $H^0_\mathcal{P}(\mathrm{Tot}(A(J_F\otimes J_G)))$ rather than in $H^0_\mathcal{P}(\mathrm{Tot}(A(J_F))\otimes \mathrm{Tot}(A(J_G)))$. This is the purpose of the next lemma.
\begin{Lemma}\label{lm-cup-prod-bicompl} Let $F,G$ be homogeneous strict polynomial functors of degree $f$, $g$ which admit twist compatible coresolutions $J_F$ and $J_G$. Let $i,j,\ell,m$ be nonnegative integers, and let $$x_{i,2j}\in \mathop{\mathrm{Hom}}\nolimits_{\mathcal{P}_{pf}^{pf}}(\Gamma^{pf} (gl),A(J_F))\text{ , }y_{\ell,2m}\in \mathop{\mathrm{Hom}}\nolimits_{\mathcal{P}_{pg}^{pg}}(\Gamma^{pg} (gl), A(J_G))$$ be homogeneous cocycles of respective bidegrees $(i,2j)$ and $(\ell, 2m)$. \begin{enumerate} \item The object $A(J_F)^{i,2j}\otimes A(J_G)^{\ell,2m}$ appears once and only once in the bicomplex $A(J_F\otimes J_G)$. It appears in bidegree $(i+\ell, 2j+2m)$. In particular, the formula $$(x_{i,2j}\otimes y_{\ell,2m})\circ \Delta_{pf,pg}\in \mathop{\mathrm{Hom}}\nolimits(\Gamma^{p(f+g)}(gl),A(J_F)^{i,2j}\otimes A(J_G)^{\ell,2m} )$$ defines a homogeneous element of bidegree $(i+\ell, 2j+2m)$ in the bicomplex $H^0_\mathcal{P}(A(J_F\otimes J_G))$. \item The element $(x_{i,2j}\otimes y_{\ell,2m})\circ \Delta_{pf,pg}$ is actually a cocycle, and represents the cup product $[x_{i,2j}]\cup[y_{\ell,2m}]$ in $H^0_\mathcal{P}(\mathrm{Tot}(A(J_F\otimes J_G)))$. \end{enumerate} \end{Lemma}
\begin{proof} Since $T$ commutes with tensor products \cite[Prop 3.13]{Touze}, the bicomplex $A(J_F\otimes J_G)$ is naturally isomorphic to the precomposition by $gl$ of the bicomplex $(T(J_F)\otimes T(J_G))_{[1]}$, while $A(J_F)\otimes A(J_G)$ equals the precomposition by $gl$ of the bicomplex $T(J_F)_{[1]}\otimes T(J_G)_{[1]}$. Recall that in the identification of $Ch^{\ge 0}(Ch^{\ge 0}(\mathcal{P}_k(1,1)))$ and $\text{bi-}Ch^{\ge 0}(\mathcal{P}_k(1,1))$, the $j$-th object of a complex of complexes $C^\bullet$ corresponds to the $j$-th row of the bicomplex $C^{\bullet,\bullet}$ (that is the elements of bidegree $(*,j)$). So the first statement simply follows from \cite[Lemma 2.2]{Touze}. Furthermore, by \cite[Prop 2.4]{Touze} there is a map of bicomplexes $$A(J_F)\otimes A(J_G)\to A(J_F\otimes J_G)$$ which is the identity on $A(J_F)^{i,2j}\otimes A(J_G)^{\ell,2m}$. Applying the functor $\mathrm{Tot}$ we obtain a map of $H^*_{\mathcal{P}}$-acyclic coresolutions $$\theta\,:\,\mathrm{Tot}(A(J_F))\otimes \mathrm{Tot}(A(J_G))\simeq
\mathrm{Tot}(A(J_F)\otimes A(J_G))\to \mathrm{Tot}(A(J_F\otimes J_G))$$ over the identity map of $F(gl^{(1)})\otimes G(gl^{(1)})$, and whose restriction to $A(J_F)^{i,2j}\otimes A(J_G)^{\ell,2m}$ equals the identity.
(More precisely, this equality holds up to the sign $(-1)^{2j\ell}=1$ coming from the sign in the isomorphism $\mathrm{Tot}(C\otimes D)\simeq \mathrm{Tot}(C)\otimes\mathrm{Tot}(D)$; this sign is trivial here.)
By definition of the cup product, $(x_{i,2j}\otimes y_{\ell,2m})\circ\Delta_{pf,pg}$ is a cocycle representing $[x_{i,2j}]\cup[y_{\ell,2m}]$ in $H^0_\mathcal{P}(\mathrm{Tot}(A(J_F))\otimes \mathrm{Tot}(A(J_G)))$. Applying $\theta$ we obtain that $(x_{i,2j}\otimes y_{\ell,2m})\circ\Delta_{pf,pg}$ is a cocycle in $H^0_\mathcal{P}(\mathrm{Tot}(A(J_F\otimes J_G)))$, representing the same cup product. \end{proof}
We now turn to the specific situation of theorem \ref{thm-cl-univ}(3), that is $F=\Gamma^\ell$ and $G=\Gamma^m$. We first determine explicit maps between the bicomplexes $A(J_\ell \otimes J_m)$ and $A(J_{\ell+m})$, which lift the multiplication $\Gamma^\ell(gl^{(1)})\otimes\Gamma^m(gl^{(1)}) \to \Gamma^{\ell+m}(gl^{(1)})$ and the diagonal $\Gamma^{\ell+m}(gl^{(1)}) \to \Gamma^\ell(gl^{(1)}) \otimes\Gamma^m(gl^{(1)})$. To do this, we first need new information about the twist compatible coresolutions $J_\ell$ from \cite[Prop 3.21]{Touze}. \begin{Lemma}\label{lm-info-compl-res} Let $\ell,m$ be positive integers. \begin{enumerate} \item The multiplication $\Gamma^\ell\otimes \Gamma^m\to \Gamma^{\ell+m}$ lifts to a twist compatible chain map $J_\ell\otimes J_m\to J_{\ell+m}$. This chain map is given in degree $0$ by the shuffle product $(\otimes^\ell)\otimes(\otimes^m)= J^0_\ell\otimes J^0_m\to J^0_{\ell+m}=\otimes^{\ell+m}$ which sends a tensor $\otimes_{i=1}^{m+\ell} x_i$ to the sum $\sum_{\sigma\in Sh(\ell,m)}\otimes_{i=1}^{m+\ell} x_{\sigma^{-1}(i)}$. \item The diagonal $\Gamma^{\ell+m}\to \Gamma^\ell\otimes \Gamma^m $ lifts to a twist compatible chain map $J_{\ell+m}\to J_\ell\otimes J_m$. This chain map equals the identity map in degree $0$. \end{enumerate} \end{Lemma} \begin{proof} The reduced bar construction yields a functor from the category of Commutative Differential Graded Augmented algebras over $k$ to the category of Commutative Differential Graded bialgebras over $k$, see \cite{ML}, resp. \cite{FHT}, for the algebra, resp.\ coalgebra, structure (this bialgebra structure is actually a Hopf algebra structure but we don't need this fact).
The category of strict polynomial functors splits as a direct sum of subcategories of homogeneous functors. Taking the $(m+\ell)$ polynomial degree part of the multiplication (resp. comultiplication) of $\overline{B}(\overline{B}(S^*(-)))$ we obtain chain maps $\bigoplus J_i^\bullet \otimes J_j^\bullet\to J_{\ell+m}^\bullet$ and $ J_{\ell+m}^\bullet\to \bigoplus J_i^\bullet \otimes J_j^\bullet$ (the sums are taken over all nonnegative integers $i,j$ such that $i+j=\ell+m$). The bialgebra structure of $\overline{B}(\overline{B}(S^*(-)))$ is defined using only the \emph{algebra} structure of $S^*$. But the multiplication of $S^*$ is a twist compatible map and the twist compatible category is additive and stable under tensor products \cite[Lemma 3.8 and 3.10]{Touze}. So the chain maps are twist compatible.
Next, we identify the chain maps in degree $0$. We begin with the map $J_\ell^0\otimes J_m^0\to J_{\ell+m}^0$ induced by the multiplication of the bar construction. By \cite[Lemma 3.18]{Touze}, for all $i\ge 1$ we have $J_i^0=\overline{B}_1(S^*(-))^{\otimes i}=\otimes^i$. The product $\overline{B}(\overline{B}(S^*(-)))^{\otimes 2}\to \overline{B}(\overline{B}(S^*(-)))$ is given by the shuffle product formula \cite[p.313]{ML}, more precisely it sends the tensor $\otimes_{i=1}^{m+\ell} x_i$ to the sum $\sum_{\sigma\in Sh(\ell,m)}\otimes_{i=1}^{m+\ell} x_{\sigma^{-1}(i)}$. The signs in this shuffle product are all positive since the $x_i$ are elements of degree $1+1=2$ in the chain complex $\overline{B}_\bullet(\overline{B}(S^*(-)))$. The identification of the map $J_{m+\ell}^0\to J_m^0\otimes J_\ell^0$ induced by the diagonal is simpler. The coproduct in $\overline{B}(\overline{B}(S^*(-)))$ is given by the deconcatenation formula \cite[p.268]{FHT}:
$\Delta[x_1|\dots|x_{\ell+m}]=\sum_{i=0}^{m+\ell}[x_1|\dots|x_{i}]\otimes [x_{i+1}|\dots|x_{m+\ell}]$. Thus, the map $J_{m+\ell}^0\to J_m^0\otimes J_\ell^0$ sends the tensor product $\otimes_{i=1}^{m+\ell} x_i$ to itself.
Finally, with the description of the chain maps in degree $0$, one easily checks that $J_\ell\otimes J_m\to J_{\ell+m}$, resp. $J_{\ell+m}\to J_\ell\otimes J_m$, lifts the multiplication $\Gamma^\ell\otimes\Gamma^m\to \Gamma^{\ell+m}$, resp. the comultiplication $\Gamma^{\ell+m} \to \Gamma^\ell\otimes\Gamma^m$. (In fact, this proves that the quasi-isomorphism $\Gamma^*\to \overline{B}(\overline{B}(S^*(-)))$ is a morphism of Hopf algebras.) \end{proof}
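To see these degree-$0$ maps in the smallest case $\ell=m=1$: the set $Sh(1,1)$ consists of the identity and the transposition, so the lift of the multiplication is $$J_1^0\otimes J_1^0=\otimes^2\to \otimes^2=J_{2}^0\,,\qquad x_1\otimes x_2\mapsto x_1\otimes x_2+x_2\otimes x_1\,,$$ while the lift of the diagonal is the identity of $\otimes^2$: among the terms of the deconcatenation of $[x_1|x_2]$, only $[x_1]\otimes[x_2]$ lands in the summand $J_1^\bullet\otimes J_1^\bullet$.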
Applying the functor $A$, we obtain: \begin{Lemma}\label{lm-map-bicompl-induced} Let $\ell,m$ be positive integers. \begin{enumerate} \item The multiplication $\Gamma^\ell(gl^{(1)})\otimes \Gamma^m(gl^{(1)}) \to \Gamma^{\ell+m}(gl^{(1)})$ lifts to a map of bicomplexes $A(J_\ell\otimes J_m)\to A(J_{\ell+m})$. The restriction of this map to the columns of index $0$ equals $$A(J_\ell\otimes J_m)^{0,\bullet}=(A_1^{\otimes \ell+m})_{[1]}\xrightarrow[]{sh_{[1]}} (A_1^{\otimes \ell+m})_{[1]}= A(J_{\ell+m})^{0,\bullet}$$ where $sh$ is the unsigned shuffle map, which sends a tensor $\otimes_{i=1}^{m+\ell} x_i$ to the sum $\sum_{\sigma\in Sh(\ell,m)}\otimes_{i=1}^{m+\ell} x_{\sigma^{-1}(i)}$. \item The diagonal $\Gamma^{\ell+m}(gl^{(1)})\to \Gamma^\ell(gl^{(1)}) \otimes \Gamma^m(gl^{(1)}) $ lifts to a map of bicomplexes $A(J_{\ell+m})\to A(J_\ell\otimes J_m)$. The restriction of this map to the columns of index $0$ equals the identity map of $(A_1^{\otimes \ell+m})_{[1]}$. \end{enumerate} \end{Lemma}
Next we identify cocycles representing the cup products $\psi_\ell(x)\cup \psi_m(y)$ in the bicomplex $H^0_\mathcal{P}(A(J_\ell\otimes J_m))$. \begin{Lemma}\label{lm-repres-cup-bicompl} Let $x\in \Gamma^\ell H^*_\mathcal{P}(gl^{(1)})$ and $y\in \Gamma^m H^*_\mathcal{P}(gl^{(1)})$ be classes of homogeneous degrees $2d$ and $2e$. Then $\vartheta_{\ell+m}(x\otimes y)$ is a cocycle of bidegree $(0,2d+2e)$ in the bicomplex $H^0_\mathcal{P}(A(J_\ell\otimes J_m))$. Moreover, it represents the cup product $\psi_\ell(x)\cup \psi_m(y)\in H^*_\mathcal{P}(\Gamma^\ell(gl^{(1)})\otimes \Gamma^m(gl^{(1)}))$. \end{Lemma} \begin{proof}By definition, $\psi_\ell(x)$ is represented by the homogeneous cocycle $\vartheta_\ell(x)$ of bidegree $(0,2d)$ in the bicomplex $\mathop{\mathrm{Hom}}\nolimits(\Gamma^{p\ell}(gl),A(J_\ell))$ (and similarly for $\psi_m(y)$). Then, by lemma \ref{lm-cup-prod-bicompl}, $\psi_\ell(x)\cup \psi_m(y)$ is represented by the cocycle $(\vartheta_\ell(x)\otimes \vartheta_m(y))\circ \Delta_{\ell p, mp}$ in the bicomplex $\mathop{\mathrm{Hom}}\nolimits(\Gamma^{p(\ell+m)}(gl),A(J_\ell\otimes J_m))$. Now if $x=\otimes_{s=1}^\ell(e(i_s))$ and $y=\otimes_{s=\ell+1}^{\ell+m}(e(i_s))$,
we compute that $\vartheta_{\ell+m}(x\otimes y)$ and $(\vartheta_\ell(x)\otimes \vartheta_m(y))\circ \Delta_{\ell p, mp}$ both equal the element $(\otimes_{s=1}^{\ell+m}z_{i_s})\circ\Delta_{p,\dots,p}$, where $\Delta_{p,\dots,p}$ is the diagonal $\Gamma^{p(\ell+m)}(gl)\to \Gamma^p(gl)^{\otimes \ell+m}$. \end{proof}
We are now ready to prove theorem \ref{thm-cl-univ}(3). We begin with the commutativity of the diagram involving the multiplication. Let $x\in \Gamma^\ell H^*_\mathcal{P}(gl^{(1)})$ and $y\in \Gamma^m H^*_\mathcal{P}(gl^{(1)})$ be homogeneous elements of respective degrees $2d$ and $2e$. By lemmas \ref{lm-map-bicompl-induced} and \ref{lm-repres-cup-bicompl}, $m_{\ell,m\,*}(\psi_\ell(x)\cup \psi_m(y))$ is represented by the cocycle $$\sum_{\sigma\in Sh(\ell,m)}\nolimits \sigma.\vartheta_{\ell+m}(x\otimes y)$$ of bidegree $(0, 2d+2e)$ in the bicomplex $H^0_\mathcal{P}(A(J_{\ell+m}))$. By definition of $\psi_{\ell+m}$, $\psi_{\ell+m}(m_{\ell,m}(x\otimes y))$ is represented by the cocycle $$\vartheta_{\ell+m}\left(\sum_{\sigma\in Sh(\ell,m)}\nolimits \sigma.(x\otimes y)\right) $$ in the same bicomplex. Since $\vartheta_{\ell+m}$ is equivariant (lemma \ref{lm-vartheta}), these two cocycles are equal. Hence, the diagram involving the multiplication is commutative. The diagram involving the comultiplication commutes for a similar reason: if $x\in \Gamma^{\ell+m} H^*_\mathcal{P}(gl^{(1)})$, the cohomology classes $(\psi_\ell\cup \psi_m)(\Delta_{\ell,m}(x))$ and $\Delta_{\ell,m\,*}(\psi_{\ell+m}(x))$ are both represented by the cycle $\vartheta_{\ell+m}(x)$. This concludes the proof of theorem \ref{thm-cl-univ}(3).
\subsection{Proof of theorem \ref{thm-cl-univ}(2)}
To prove theorem \ref{thm-cl-univ}(2), it suffices to prove for all $n\ge p$ the injectivity of the composite: \begin{align*}\Gamma^\ell H^*_{\mathcal{P}}(gl^{(1)})\xrightarrow[]{\psi_\ell} H^*_{\mathcal{P}}(\Gamma^\ell(gl^{(1)}))\xrightarrow[]{\phi_{\Gamma^\ell(gl^{(1)}),n}} &H^*(GL_{n,k},\Gamma^\ell(\mathfrak{gl}_n^{(1)}))\\&\xrightarrow[]{\Delta_{1,\dots,1\,*}} H^*(GL_{n,k},\mathfrak{gl}_n^{(1)\,\otimes \ell})\;. \end{align*} By naturality of the maps $\phi_{\Gamma^\ell(gl^{(1)}),n}$ \cite[Th 1.3]{Touze} and by the compatibility of the $\phi_i$ with diagonals and cup products given in theorem \ref{thm-cl-univ}(3), this composite equals the composite: \begin{align*}\Gamma^\ell H^*_{\mathcal{P}}(gl^{(1)})\hookrightarrow H^*_{\mathcal{P}}(gl^{(1)})^{\otimes \ell}
\to H^*(GL_{n,k},\mathfrak{gl}_n^{(1)})^{\otimes\ell}\xrightarrow[]{\cup} H^*(GL_{n,k},\mathfrak{gl}_n^{(1)\,\otimes \ell}) \end{align*} Thus, the proof of theorem \ref{thm-cl-univ}(2) follows from: \begin{Lemma} Let $k$ be a field of characteristic $p>0$ and let $j\ge 1$ be an integer. For all $n\ge p$, the following map is injective: $$\begin{array}[t]{cccc} \cup_{i=1}^j \phi_{gl^{(1)},n}:& H^*_{\mathcal{P}}(gl^{(1)})^{\otimes j} &\to & H^*(GL_{n,k},(\mathfrak{gl}_n^{(1)})^{\otimes j})\\ & \otimes_{i=1}^j c_i&\mapsto & \cup_{i=1}^j \phi_{gl^{(1)},n}(c_i) \end{array}. $$ \end{Lemma} \begin{proof} We prove this lemma by restricting our cohomology classes to an infinitesimal one-parameter subgroup ${\mathbb G_a}_{1}$ of $GL_{n,k}$, as is done in \cite{SFB}. Since $n\ge p$, we can find a $p$-nilpotent matrix $\alpha\in\mathfrak{gl}_n$ with $\alpha^{p-1}\ne 0$ (for example a nilpotent Jordan block of size $p$). Using this matrix, we define an embedding ${\mathbb G_a}_1\to {\mathbb G_a}\xrightarrow[]{exp_\alpha} GL_{n,k}$. For all $\ell$, this embedding makes the $GL_{n,k}$-module $\mathfrak{gl}_n^{(1)\otimes \ell}$ into a trivial ${\mathbb G_a}_1$-module. Thus there is an isomorphism $H^*({\mathbb G_a}_1,\mathfrak{gl}_n^{(1)\otimes \ell})\simeq H^*({\mathbb G_a}_1,k)\otimes \mathfrak{gl}_n^{(1)\otimes \ell}$. The algebra $H^*({\mathbb G_a}_{1},k)$ is computed in \cite{CPSvdK}. In particular, $H^{\mathrm{even}}({\mathbb G_a}_{1},k)=k[x_1]$ is a polynomial algebra on one generator $x_1$ of degree $2$. Let us spell out the compatibility of this isomorphism with the cup product. If $x_1^{\ell}\otimes \beta_\ell$ and $x_1^{m}\otimes \beta_m$ are classes in $H^*({\mathbb G_a}_1,k)\otimes \mathfrak{gl}_n^{(1)\otimes \ell}$, resp. $H^*({\mathbb G_a}_1,k)\otimes \mathfrak{gl}_n^{(1)\otimes m}$, their cup product is the class $x_1^{\ell+m}\otimes (\beta_\ell\otimes\beta_m)$ in $H^*({\mathbb G_a}_1,k)\otimes \mathfrak{gl}_n^{(1)\otimes \ell+m}$.
We recall that $H^*_{\mathcal{P}}(gl^{(1)})$ is a graded module with basis the classes $e_1(i)$ of degree $2i$ for $0\le i< p$. By \cite[Th 4.9]{SFB}, the composite $$H^*_{\mathcal{P}}(gl^{(1)})\xrightarrow[\simeq]{\phi_{gl^{(1)},n}} H^*(GL_{n,k},\mathfrak{gl}_n^{(1)})\to H^*({\mathbb G_a}_1,\mathfrak{gl}_n^{(1)})\simeq H^*({\mathbb G_a}_1,k)\otimes \mathfrak{gl}_n^{(1)} $$ sends $e_1(i)$ to the class $x_1^i\otimes (\alpha^{(1)})^i\in H^*({\mathbb G_a}_1,k)\otimes \mathfrak{gl}_n^{(1)}$. Since restriction to ${\mathbb G_a}_{1}$ is compatible with cup products, the composite $$H^*_{\mathcal{P}}(gl^{(1)})^{\otimes j} \to H^*(GL_{n,k},\mathfrak{gl}_n^{(1)\otimes j}) \to H^*({\mathbb G_a}_1,k)\otimes \mathfrak{gl}_n^{(1)\otimes j} $$ sends the tensor product $\otimes_{\ell=1}^j e(i_\ell)$ to the class $x_1^{\sum_{\ell=1}^j i_\ell}\otimes (\otimes_{\ell=1}^j (\alpha^{(1)})^{i_\ell})$. As a result, this composite sends the basis $\left(\otimes_{\ell=1}^j e(i_\ell)\right)_{(i_1,\dots,i_j)}$ into the linearly independent
family $\left(x_1^m \otimes (\otimes_{\ell=1}^j (\alpha^{(1)})^{i_\ell})\right)_{(m,i_1,\dots,i_j)}$. Hence the map $\cup_{i=1}^j \phi_{gl^{(1)},n}$ is injective. \end{proof}
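Spelled out for $j=2$: the composite sends $e(i_1)\otimes e(i_2)$ to $x_1^{i_1+i_2}\otimes (\alpha^{(1)})^{i_1}\otimes(\alpha^{(1)})^{i_2}$. If $\alpha$ is a nilpotent Jordan block of size $p$, the powers $\alpha^0,\dots,\alpha^{p-1}$ are linearly independent in $\mathfrak{gl}_n$, hence so are their twofold tensor products, so distinct pairs $(i_1,i_2)$ with $0\le i_1,i_2<p$ have linearly independent images.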
\part{The second proof}
\section{The starting point} Many notations are as in \cite{cohGrosshans}.
We work over a field $k$ of positive characteristic $p$. Fix an integer $n$, with $n>1$.
The important case is when $n$ is large. So if one finds this convenient, one may take $n\geq p$. We wish to draw conclusions from the following result. \begin{Theorem}[Lifted universal cohomology classes \cite{Touze}]\label{divided powers} There are cohomology classes $c[m]$ so that \begin{enumerate} \item $c[1]\in H^{2}(\mathop{\mathit{GL}}\nolimits_{n,k},\mathop{\mathfrak{gl}}\nolimits_n^{(1)})$ is nonzero, \item For $m\geq1$ the class $c[m]\in H^{2m}(\mathop{\mathit{GL}}\nolimits_{n,k},\Gamma^{m}(\mathop{\mathfrak{gl}}\nolimits_n^{(1)}))$ lifts $c[1]\cup\cdots\cup c[1]\in H^{2m}(\mathop{\mathit{GL}}\nolimits_{n,k},\bigotimes^{m}(\mathop{\mathfrak{gl}}\nolimits_n^{(1)}))$. \end{enumerate} \end{Theorem} \begin{Remark}We really have in mind that $c[1]$ is the Witt vector class of \cite[section 4]{cohGrosshans}, which is certainly nonzero. The computation of $H^{2}(\mathop{\mathit{GL}}\nolimits_{n,k},\mathop{\mathfrak{gl}}\nolimits_n^{(1)})$ is easy, using \cite[Corollary (3.2)]{CPSvdK}. One finds that $H^{2}(\mathop{\mathit{GL}}\nolimits_{n,k},\mathop{\mathfrak{gl}}\nolimits_n^{(1)})$ is one dimensional. Thus any nonzero $c[1]$ is up to scaling equal to the Witt vector class. \end{Remark} \section{Using the classes} We write $G$ for $\mathop{\mathit{GL}}\nolimits_{n,k}$, the algebraic group $\mathop{\mathit{GL}}\nolimits_n$ over $k$. Sometimes it is instructive to restrict to $\mathop{\mathit{SL}}\nolimits_n$ or other reductive subgroups of $\mathop{\mathit{GL}}\nolimits_n$. We leave this to the reader.
\subsection{Other universal classes} We recall some constructions from \cite{cohGrosshans}. If $M$ is a finite dimensional vector space over $k$ and $r\geq1$, we have a natural homomorphism between symmetric algebras $S^*(M^{\#(r)})\to S^{*}(M^{\#(1)})$ induced by the map $M^{\#(r)}\to S^{p^{r-1}}(M^{\#(1)})$ which raises an element to the power $p^{r-1}$. It is a map of bialgebras. Dually we have the bialgebra map $\pi^{r-1}:\Gamma^{*}(M^{(1)})\to \Gamma^{*}(M^{(r)})$ whose kernel is the ideal generated by $\Gamma^1(M^{(1)})$ through $\Gamma^{p^{r-1}-1}(M^{(1)})$. So $\pi^{r-1}$ maps $\Gamma^{p^{r-1}a}(M^{(1)})$ onto $\Gamma^{a}(M^{(r)})$.
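For instance, for $r=2$ and $a=1$ the map $\pi^{1}$ is the surjection $$\Gamma^{p}(M^{(1)})\twoheadrightarrow \Gamma^{1}(M^{(2)})=M^{(2)}\,,\qquad \gamma_p(x)\mapsto x\,,$$ dual to the $p$-th power map $M^{\#(2)}\to S^{p}(M^{\#(1)})$; it kills the decomposable divided powers $\gamma_{i_1}(x_1)\cdots\gamma_{i_k}(x_k)$ with $k\geq2$, in accordance with the description of the kernel above.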
\begin{Notation} We now introduce analogues of the classes $e_r$ and $e_r^{(j)}$ of Friedlander and Suslin \cite[Theorem 1.2, Remark 1.2.2]{Friedlander-Suslin}. We write $\pi^{r-1}_*(c[ap^{r-1}])\in H^{2ap^{r-1}}(G,\Gamma^{a}(\mathop{\mathfrak{gl}}\nolimits_n^{(r)}))$ as $c_r[a]$. Next we get $c_r[a]^{(j)}\in H^{2ap^{r-1}}(G,\Gamma^{a}(\mathop{\mathfrak{gl}}\nolimits_n^{(r+j)}))$ by Frobenius twist. As in \cite{Friedlander-Suslin} a notation like $S^*(M(i))$ means the symmetric algebra $S^*(M)$, but graded, with $M$ placed in degree $i$. \end{Notation}
Here is the analogue of \cite[Lemma 4.7]{cohGrosshans}.
\begin{Lemma}\label{cup} The $c_i[a]^{(r-i)}$ enjoy the following properties ($r\geq i\geq1$) \begin{enumerate} \item \label{cuphom} There is a homomorphism of graded algebras $S^*(\mathop{\mathfrak{gl}}\nolimits_n^{\#(r)}(2p^{i-1}))\to H^{2p^{i-1}*}(G_{r},k)$ given on $\mathop{\mathfrak{gl}}\nolimits_n^{\#(r)}(2p^{i-1})=H^0(G_{r},\mathop{\mathfrak{gl}}\nolimits_n^{\#(r)})$ by cup product with the restriction of $c_i[1]^{(r-i)}$ to $G_r$. If $i=1$, then it is given on $S^a(\mathop{\mathfrak{gl}}\nolimits_n^{\#(r)}(2))=H^0(G_{r},S^a(\mathop{\mathfrak{gl}}\nolimits_n^{\#(r)}))$ by cup product with the restriction of $c[a]^{(r-1)}$ to $G_r$. \item \label{new e_r} For $r\geq1$ the restriction of $c_r[1]$ to $ H^{2p^{r-1}}(G_{1},\mathop{\mathfrak{gl}}\nolimits_n^{(r)})$ is nontrivial, so that $c_r[1]$ may serve as the universal class $e_r$ in \cite[Thm 1.2]{Friedlander-Suslin}. \end{enumerate} \end{Lemma}
\paragraph{Proof} \ref{cuphom}. When $M$ is a $G$-module, one has a commutative diagram $$\begin{array}{ccc} \Gamma^m M\otimes \bigotimes^mM^\# & {\rightarrow} &\bigotimes ^m M\otimes \bigotimes^mM^\#\\[.3em] \qquad \downarrow& & \downarrow\\[.3em] \Gamma^m M\otimes S^mM^\# & \rightarrow& k\\ \end{array}$$ Take $M=\mathop{\mathfrak{gl}}\nolimits_n^{(1)}$. There is a homomorphism of algebras $\bigotimes^*(\mathop{\mathfrak{gl}}\nolimits_n^{\#(1)})\to H^{2*}(G_{1},k)$ given on $\mathop{\mathfrak{gl}}\nolimits_n^{\#(1)}$ by cup product with $c[1]$. (We do not mention obvious restrictions to subgroups like $G_1$ any more.)
On $\bigotimes^m(\mathop{\mathfrak{gl}}\nolimits_n^{\#(1)})$ it is given by cup product with $c[1]\cup\cdots\cup c[1]$, so by Theorem \ref{divided powers} it is also given by cup product with $c[m]$, using the pairing $\Gamma^m M\otimes \bigotimes^mM^\#\to k$. As this pairing factors through $\Gamma^m M\otimes S^mM^\#$ we get that the induced algebra map $S^*(\mathop{\mathfrak{gl}}\nolimits_n^{\#(1)})\to H^{2*}(G_{1},k)$ is given by cup product with $c[m]$ on $S^m(\mathop{\mathfrak{gl}}\nolimits_n^{\#(1)})$. If we compose with the algebra map $S^*(\mathop{\mathfrak{gl}}\nolimits_n^{\#(i)})\to S^{p^{i-1}*}(\mathop{\mathfrak{gl}}\nolimits_n^{\#(1)})$ we get an algebra map $\psi:S^*(\mathop{\mathfrak{gl}}\nolimits_n^{\#(i)})\to H^{2p^{i-1}*}(G_{1},k)$ given on $\mathop{\mathfrak{gl}}\nolimits_n^{\#(i)}$ by cup product with $c[p^{i-1}]$, using the pairing $\mathop{\mathfrak{gl}}\nolimits_n^{\#(i)}\otimes \Gamma^{p^{i-1}}(\mathop{\mathfrak{gl}}\nolimits_n^{(1)})\to k$. This pairing factors through $\mathop{\mathrm{id}}\nolimits\otimes\pi^{i-1}:\mathop{\mathfrak{gl}}\nolimits_n^{\#(i)}\otimes \Gamma^{p^{i-1}}(\mathop{\mathfrak{gl}}\nolimits_n^{(1)})\to \mathop{\mathfrak{gl}}\nolimits_n^{\#(i)}\otimes \mathop{\mathfrak{gl}}\nolimits_n^{(i)}$, so the homomorphism $\psi$ is given on $\mathop{\mathfrak{gl}}\nolimits_n^{\#(i)}$ by cup product with $\pi^{i-1}_*c[p^{i-1}]=c_i[1]$. We can lift it to an algebra map $S^*(\mathop{\mathfrak{gl}}\nolimits_n^{\#(i)})\to H^{2p^{i-1}*}(G_{i},k)$ by simply still using cup product with $c_i[1]$ on $\mathop{\mathfrak{gl}}\nolimits_n^{\#(i)}$. Pull back along the $(r-i)$-th Frobenius homomorphism $G_r\to G_i$ and you get an algebra map $\psi^{(r-i)}:S^*(\mathop{\mathfrak{gl}}\nolimits_n^{\#(r)})\to H^{2p^{i-1}*}(G_{r},k)$, given on
$\mathop{\mathfrak{gl}}\nolimits_n^{\#(r)}$ by cup product with $c_i[1]^{(r-i)}$. If $i=1$, pull back the cup product with $c[m]$ on $S^m(\mathop{\mathfrak{gl}}\nolimits_n^{\#(1)})$
to a cup product with $c[m]^{(r-1)}$ on $S^m(\mathop{\mathfrak{gl}}\nolimits_n^{\#(r)}(2))$. This then describes the homomorphism $\psi^{(r-1)}$
degree-wise. \\ \ref{new e_r}. In fact, if we restrict $c_r[1]$ as in \cite[Remark 4.1]{cohGrosshans} to $H^{2p^{r-1}}({\mathbb G_a}_{1},(\mathop{\mathfrak{gl}}\nolimits_n^{(r)})_{p^r\alpha})= H^{2p^{r-1}}({\mathbb G_a}_{1},k)\otimes(\mathop{\mathfrak{gl}}\nolimits_n^{(r)})_{p^r\alpha}$, then even that restriction is nontrivial. That is because the Witt vector class generates the polynomial ring $H^{\mathrm{even}}({\mathbb G_a}_{1},k)$, see \cite[I 4.26]{Jantzen}. And at this level $\Gamma^m\hookrightarrow \bigotimes^m$ gives an isomorphism, showing that $c[m]$ restricts to the $m$-th power of the polynomial generator. \unskip\nobreak
\hbox{ $\Box$}\\
\subsection{Noetherian homomorphisms} Let $A$ be a commutative $k$-algebra. The cohomology algebra $H^*(G,A)$ is then graded commutative, so we must also consider graded commutative algebras. \begin{Definition} If $f:A\to B$ is a homomorphism of graded commutative $k$-algebras, we call $f$ \emph{noetherian} if $f$ makes $B$ into a noetherian left $A$ module. \end{Definition}
\begin{Remark} In algebraic geometry a noetherian homomorphism between finitely generated commutative $k$-algebras is called a \emph{finite} morphism. With our terminology we wish to stress the importance of chain conditions in our arguments. \end{Remark}
\begin{Lemma} The composite of noetherian homomorphisms is noetherian. \end{Lemma} \paragraph{Proof} If $A\to B$ and $B\to C$ are noetherian, then $C$ is a quotient of the $B$-module $B^r$ for some $r$; since $B^r$ is a noetherian $A$-module, so is $C$. \unskip\nobreak
\hbox{ $\Box$}
\begin{Lemma}\label{second noeth} If the composite of $A\to B$ and $B\to C$ is noetherian, so is $B\to C$. \end{Lemma} \paragraph{Proof} Every $B$-submodule of $C$ is in particular an $A$-submodule, so ascending chains of $B$-submodules of $C$ stabilize. \unskip\nobreak
\hbox{ $\Box$}
\begin{Remark}\label{map} In this lemma $A\to C$ and $B\to C$ must be homomorphisms, but $A\to B$ could be just a map. \end{Remark}
\begin{Lemma}\label{even int} Suppose $B$ is finitely generated as a graded commutative $k$-algebra. Then $f:A\to B$ is noetherian if and only if $B^{\mathrm{even}}$ is integral over $f(A^{\mathrm{even}})$. \end{Lemma}
\paragraph{Proof} The map $B^{\mathrm{even}}\to B$ is noetherian. So if $B^{\mathrm{even}}$ is integral over $f(A^{\mathrm{even}})$, then $f$ is noetherian. Conversely, if $f$ is noetherian and $b\in B^{\mathrm{even}}$, then for some $r$ one must have $b^r\in \sum_{i<r}f(A)b^i$. But then in fact $b^r\in \sum_{i<r}f(A^{\mathrm{even}})b^i$. \unskip\nobreak
\hbox{ $\Box$}\\
In particular one has
\begin{Lemma} Suppose $B$ is a finitely generated commutative $k$-algebra. Let $n>1$ and let $A$ be a subalgebra of $B$ containing $x^n$ for every $x\in B$. Then $A\hookrightarrow B$ is noetherian and $A$ is also finitely generated. \end{Lemma} \paragraph{Proof} We follow Emmy Noether \cite{noether}. Indeed $B$ is integral over $A$. Take finitely many generators $b_i$ of $B$ and let $C$ be the subalgebra generated by the $b_j^n$. Then $A$ is a $C$-submodule of $B$, hence finitely generated.\unskip\nobreak
\hbox{ $\Box$} \\
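A minimal example with exponent $n=2$: let $B=k[x,y]$ and let $A$ be any subalgebra containing all squares. Then $C=k[x^2,y^2]\subset A$, and $B$ is the free $C$-module on $1,x,y,xy$; so $A$ is a $C$-submodule of a noetherian $C$-module, hence a finitely generated $C$-module, and in particular a finitely generated $k$-algebra over which $B$ is noetherian.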
Invariant theory tells us: \begin{Lemma}\cite[Thm. 16.9]{Grosshans book}\label{inv noeth} Let $f:A\to B$ be a noetherian homomorphism of finitely generated graded commutative $k$-algebras with rational $G$ action. Then $A^G\to B^G$ is noetherian.
$\Box$ \end{Lemma}
\begin{Lemma} Let $f:A\to B$ be a noetherian homomorphism of finitely generated graded commutative $k$-algebras with rational $G_r$ action. Then $H^*(G_r,A)\to H^*(G_r,B)$ is noetherian. \end{Lemma} \paragraph{Proof} Take $C=H^0(G_r,A^{\mathrm{even}} )$ or its subalgebra generated by the $p^r$-th powers in $A^{\mathrm{even}}$. Then apply \cite[Theorem 1.5, Remark 1.5.1]{Friedlander-Suslin}. \unskip\nobreak
\hbox{ $\Box$}\\
We will need a minor variation on a theorem of Friedlander and Suslin. \begin{Theorem}\label{smaller height} Let $r\geq1$. Let $S\subset G_r$ be an infinitesimal group scheme over $k$ of height at most $r$. Let further $C$ be a finitely generated commutative $k$-algebra (considered as a trivial $S$-module) and let $M$ be a noetherian $C$-module on which $S$ acts by $C$-linear transformations. Then $H^*(S,M)$ is a noetherian module over the algebra $\bigotimes_{i=1}^rS^*((\mathop{\mathfrak{gl}}\nolimits_n^{(r)})^\#(2p^{i-1}))\otimes C$, with the map given as suggested by Lemma \ref{cup}. \end{Theorem}
\begin{Corollary}\label{noeth res} The restriction map $H^*(G_r,C)\to H^*(G_{r-1},C)$ is noetherian. \end{Corollary} \paragraph{Proof} Take $S=G_{r-1}$ and note that the map $\bigotimes_{i=1}^rS^*((\mathop{\mathfrak{gl}}\nolimits_n^{(r)})^\#(2p^{i-1}))\otimes C\to H^*(G_{r-1},C)$ factors through $H^*(G_r,C)$. \unskip\nobreak
\hbox{ $\Box$}\\
\paragraph{Proof of the theorem} The key difference with \cite[Theorem 1.5, Remark 1.5.1]{Friedlander-Suslin} is that we do not require the height of $S$ to be $r$. (As $S\subset G_r$ the fact that its height is at most $r$ is automatic.) Thus, to start their inductive argument, we must also check the obvious case where $r=1$ and $S$ is the trivial group. The rest of the proof goes through without change. \unskip\nobreak
\hbox{ $\Box$}\\
\begin{Remark} If $S$ has height $s$, then the map $(\mathop{\mathfrak{gl}}\nolimits_n^{(r)})^\#(2p^{i-1})\to H^{2p^{i-1}}(S,k)$ is trivial for $r-i\geq s$. \end{Remark}
\subsection{Cup products on the cochain level} As we will need a differential graded algebra structure on Hochschild--Serre spectral sequences, we now expand the discussion of the Hochschild complex in \cite[I 5.14]{Jantzen}. Let $L$ be an affine algebraic group scheme over the field $k$, $N$ a normal subgroup scheme, $R$ a commutative $k$-algebra on which
$L$ acts rationally by algebra automorphisms. We have a Hochschild complex $C^*(L,R)$ with $R\otimes k[L]^{\otimes i}$ in degree $i$. Define a cup product on $C^*(L,R)$ as follows. If $u\in C^r(L,R)$ and $v\in C^s(L,R)$, then $u\cup v$ is defined in simplified notation by $$(u\cup v)(g_1,\ldots, g_{r+s})=u(g_1,\ldots,g_r).{}^{g_1\cdots g_r} v(g_{r+1},\ldots ,g_{r+s}),$$ where ${}^ga$ denotes the image of $a\in R$ under the action of $g$. One checks that \begin{Lemma} With this cup product $C^*(L,R)$ is a differential graded algebra. \end{Lemma}
In particular, taking for $R$ the algebra $k[L]$ with $L$ acting by right translation, we get the differential graded algebra $C^*(L,k[L])$, quasi-isomorphic to $k$. And the action by left translation on $k[L]$ is by $L$-module isomorphisms, so this makes $C^*(L,k[L])$ into a differential graded algebra with $L$ action. It consists of injective $L$-modules in every degree. We write $C^*(L)$ for this differential graded algebra with $L$ action. One has $C^i(L)=k[L]^{\otimes i+1}$ and this is our elaboration of \cite[I 4.15 (1)]{Jantzen}.
The Hochschild--Serre spectral sequence $$E_2^{rs}=H^r(L/N,H^s(N,R))\Rightarrow H^{r+s}(L,R)$$ can now be based on the double complex $( C^*(L/N)\otimes (C^*(L)\otimes R)^N )^{L/N}$. The tensor product over $k$ of two differential graded algebras is again a differential graded algebra and the spectral sequence inherits differential graded algebra structures \cite[3.9]{Benson II} from such structures on $C^*(L)\otimes R$, $(C^*(L)\otimes R)^N$, $C^*(L/N)\otimes (C^*(L)\otimes R)^N$.
\subsection{Hitting invariant classes} We now come to the main result of this section, which is the counterpart of \cite[Cor. 4.8]{cohGrosshans}. It does not seem to follow from the cohomological finite generation conjecture, but we will show it implies the conjecture. \begin{Theorem}\label{dominate}Let $r\geq1$. Further let $A$ be a finitely generated commutative $k$-algebra with $G$ action. Then $H^{\mathrm{even}}(G,A)\to H^0(G,H^*(G_r,A))$ is noetherian. \end{Theorem} \begin{Remark} Recall that $H^0(G,H^*(G_r,A))$ is finitely generated as a $k$-algebra, by \cite{Friedlander-Suslin} and invariant theory. \end{Remark} \paragraph{Proof} \begin{list}{{\bf\stepcounter{enumi}\arabic{enumi}}.}{\topsep0pt \parsep0pt\itemsep0pt \setcounter{enumi}{0}\itemindent2em\labelwidth2em\leftmargin0pt} \item If $M$ is a $G$-module on which $G_r$ acts trivially, then $H^0(G,M)$ and $H^0(G/G_r,M)$
denote the same subspace of $M$. We may thus switch between these variants. \item We argue by induction on $r$. Put $C=H^0(G_r,A)$. Then $C$ contains the elements of $A$ raised to the power $p^r$, so $C$ is also a finitely generated algebra and $A$ is a noetherian module over it. \item Let $r=1$. This case is the same as in \cite{cohGrosshans}. By \cite[Thm 1.5]{Friedlander-Suslin} $H^*(G_1,A)$ is a noetherian module over the finitely generated algebra $$R=S^*((\mathop{\mathfrak{gl}}\nolimits_n^{(1)})^\#(2))\otimes C.$$ Then, by invariant theory \cite[Thm. 16.9]{Grosshans book}, $H^0(G,H^*(G_1,A))$ is a noetherian module over the finitely generated algebra $H^0(G,R)$. By lemma \ref{cup} we may take the algebra homomorphism $R\to H^*(G_1,A)$ of \cite{Friedlander-Suslin} to be based on cup product with our $c[a]=c[a]^{(0)}$ on the summand $S^a((\mathop{\mathfrak{gl}}\nolimits_n^{(1)})^\#(2))\otimes C$. But then the map $H^0(G,R)\to H^*(G_1,A)$ factors, as a linear map, through $H^{\mathrm{even}}(G,A)$. This settles the case $r=1$ by \ref{map}. \item Now let the level $r$ be greater than $1$. We are going to follow the analysis in \cite[section 1]{Friedlander-Suslin} to peel off one level at a time. Heuristically, in the tensor product of Theorem \ref{smaller height} we treat one factor at a time. That is the main difference with the argument in \cite{cohGrosshans}.
Thus, consider the Hochschild--Serre spectral sequence $E^{ij}_2(C)=H^i(G_r/G_{r-1},H^j(G_{r-1},C))\Rightarrow H^{i+j}(G_r,C)$. We first wish to argue that this spectral sequence stops, meaning that $E^{**}_s(C)=E^{**}_\infty(C)$ for some finite~$s$. This is proved in \cite[section 1]{Friedlander-Suslin} for a very similar spectral sequence. So we imitate the argument. We need to apply \cite[Lemma 1.6]{Friedlander-Suslin} and its proof. We use $H^{\mathrm{even}} (G_r,C)$ for the $A$ of that lemma and $S^*((\mathop{\mathfrak{gl}}\nolimits_n^{(r)})^\#(2))$ for its $B$. We map $H^{\mathrm{even}} (G_r,C)$ in the obvious way to the abutment $H^* (G_r,C)$ and for $B\to E^{*0}_2(k)=H^*(G_r/G_{r-1},k)= H^*(G_{1},k)^{(r-1)}$ we use the $(r-1)$-st Frobenius twist of the map $S^*((\mathop{\mathfrak{gl}}\nolimits_n^{(1)})^\#(2))\to H^*(G_{1},k)$ of Lemma \ref{cup}. So we use the class $c[a]^{(r-1)}$ on $S^a((\mathop{\mathfrak{gl}}\nolimits_n^{(r)})^\#(2))$. By Corollary \ref{noeth res} and Lemma \ref{even int} the restriction map $H^{\mathrm{even}} (G_r,C)\to H^*(G_{r-1},C)$ is noetherian and by Theorem \ref{smaller height}, compare \cite[Cor 1.8]{Friedlander-Suslin}, it follows that the $H^{\mathrm{even}} (G_r,C)\otimes S^*((\mathop{\mathfrak{gl}}\nolimits_n^{(r)})^\#(2))$ module $E^{**}_2(C)=H^*(G_{1},H^*(G_{r-1},C)^{(1-r)})^{(r-1)}$ is noetherian, so the spectral sequence stops, say at $E^{**}_s(C)$. Note also that the image of $H^{\mathrm{even}} (G_r,C)\otimes S^*((\mathop{\mathfrak{gl}}\nolimits_n^{(r)})^\#(2))$ in $E^{**}_2(C)$ consists of permanent cycles. \item As the spectral sequence is one of graded commutative differential graded algebras, the $p$-th power of an even cochain in a page passes to the next page. As the spectral sequence stops at page $s$ one finds that for an $x\in E^{{\mathrm{even}},{\mathrm{even}}}_2(C)$ the power $x^{p^s}$ is a permanent cycle. Let $P$ be the algebra generated by permanent cycles in $E^{{\mathrm{even}},{\mathrm{even}}}_2(C)$. 
Then $P\to E^{ij}_t(C)$ is noetherian for $2\leq t\leq\infty$. So $P^G\to (E^{**}_\infty(C))^G$ is noetherian by Lemma \ref{inv noeth}. \item By the inductive assumption $H^0(G,H^*(G_{r-1},C\otimes S^*((\mathop{\mathfrak{gl}}\nolimits_n^{(r)})^\#(2))))$ is noetherian over $H^{\mathrm{even}}(G, C\otimes S^*((\mathop{\mathfrak{gl}}\nolimits_n^{(r)})^\#(2)))$. By step 1 we may rewrite $H^0(G,H^j(G_{r-1},C\otimes S^i((\mathop{\mathfrak{gl}}\nolimits_n^{(r)})^\#(2))))$ as $H^0(G/G_{r-1},H^j(G_{r-1},C\otimes S^i((\mathop{\mathfrak{gl}}\nolimits_n^{(r)})^\#(2))))$. The latter description will be needed in the sequel. We may map $H^0(G/G_{r-1},H^j(G_{r-1},C\otimes S^i((\mathop{\mathfrak{gl}}\nolimits_n^{(r)})^\#(2))))$ by restriction to $H^0(G_r/G_{r-1},H^j(G_{r-1},C)\otimes S^i((\mathop{\mathfrak{gl}}\nolimits_n^{(r)})^\#(2)))$ and then to $E^{2i,j}_2(C)=H^{2i}(G_r/G_{r-1},H^{j}(G_{r-1},C))$ by cup product with $c[i]^{(r-1)}$. So we now have a map from $H^{\mathrm{even}}(G, C\otimes S^*((\mathop{\mathfrak{gl}}\nolimits_n^{(r)})^\#(2)))$ to $E^{**}_2(C)$. We will factor it further. \item One checks that the map from $H^{\mathrm{even}}(G, C\otimes S^*((\mathop{\mathfrak{gl}}\nolimits_n^{(r)})^\#(2)))$ to $H^0(G_r/G_{r-1},H^*(G_{r-1},C)\otimes S^*((\mathop{\mathfrak{gl}}\nolimits_n^{(r)})^\#(2)))$ of step~6 factors naturally through the algebra $H^{\mathrm{even}} (G_r,C)\otimes S^*((\mathop{\mathfrak{gl}}\nolimits_n^{(r)})^\#(2))$ of step 4. Moreover, as the algebra $H^{\mathrm{even}} (G_r,C)\otimes S^*((\mathop{\mathfrak{gl}}\nolimits_n^{(r)})^\#(2))$ acts on the full spectral sequence, we may make $H^{\mathrm{even}}(G, C\otimes S^*((\mathop{\mathfrak{gl}}\nolimits_n^{(r)})^\#(2)))$ act on the full spectral sequence by way of that algebra. \item The noetherian map $H^{\mathrm{even}} (G_r,C)\otimes S^*((\mathop{\mathfrak{gl}}\nolimits_n^{(r)})^\#(2))\to E^{**}_2(C)$ factors through $H^0(G_r/G_{r-1},H^*(G_{r-1},C\otimes S^*((\mathop{\mathfrak{gl}}\nolimits_n^{(r)})^\#(2))))$. 
But then $H^0(G_r/G_{r-1},H^*(G_{r-1},C\otimes S^*((\mathop{\mathfrak{gl}}\nolimits_n^{(r)})^\#(2))))\to E^{**}_2(C)$ is noetherian by Lemma \ref{second noeth}. So by Lemma \ref{inv noeth} the map $H^0(G/G_{r-1},H^*(G_{r-1},C\otimes S^*((\mathop{\mathfrak{gl}}\nolimits_n^{(r)})^\#(2))))\to (E^{**}_2(C))^G$ is noetherian. Combining with the inductive hypothesis,
we learn that $H^{\mathrm{even}}(G, C\otimes S^*((\mathop{\mathfrak{gl}}\nolimits_n^{(r)})^\#(2)))\to (E^{**}_2(C))^G$ is noetherian. It lands in $P^G$, because the map in step 4 lands in $P$. We conclude that $H^{\mathrm{even}}(G, C\otimes S^*((\mathop{\mathfrak{gl}}\nolimits_n^{(r)})^\#(2)))\to (E^{**}_\infty(C))^G$ is noetherian. \item We filter $H^{\mathrm{even}}(G, C\otimes S^*((\mathop{\mathfrak{gl}}\nolimits_n^{(r)})^\#(2)))$ by putting $H^{\mathrm{even}}(G, C\otimes S^t((\mathop{\mathfrak{gl}}\nolimits_n^{(r)})^\#(2)))$ in $H^{\mathrm{even}}(G, C\otimes S^*((\mathop{\mathfrak{gl}}\nolimits_n^{(r)})^\#(2)))^{\geq j}$ for $t\geq j$. As in \cite[section~1]{Friedlander-Suslin} the filtered algebra may be identified with its associated graded, and the map $H^{\mathrm{even}}(G, C\otimes S^*((\mathop{\mathfrak{gl}}\nolimits_n^{(r)})^\#(2)))\to H^*(G_r,C)$ respects filtrations. Now we care about $H^*(G_r,C)^G$ as a module for $H^{\mathrm{even}}(G, C\otimes S^*((\mathop{\mathfrak{gl}}\nolimits_n^{(r)})^\#(2)))$. To see that it is noetherian we may pass as in \cite[section 1]{Friedlander-Suslin} to the associated graded, where one puts $(H^*(G_r,C)^G)^{\geq j}=(H^*(G_r,C)^G)\cap H^*(G_r,C)^{\geq j}$. This associated graded of $H^*(G_r,C)^G$ is a submodule of $(E^{**}_\infty(C))^G$, containing the image of $H^{\mathrm{even}}(G, C\otimes S^*((\mathop{\mathfrak{gl}}\nolimits_n^{(r)})^\#(2)))$. So it is indeed noetherian. We conclude that $H^*(G_r,C)^G$ is noetherian over $H^{\mathrm{even}}(G, C\otimes S^*((\mathop{\mathfrak{gl}}\nolimits_n^{(r)})^\#(2)))$. \item As in the case $r=1$ the map $H^{\mathrm{even}}(G, C\otimes S^*((\mathop{\mathfrak{gl}}\nolimits_n^{(r)})^\#(2)))\to H^*(G_r,C)^G$ factors, as a linear map, through $H^{\mathrm{even}}(G, C)$, so $H^{\mathrm{even}}(G, C)\to H^*(G_r,C)^G$ is noetherian by \ref{map}. As $H^*(G_r,C)^G\to H^*(G_r,A)^G$ is noetherian, the result follows. \end{list}\unskip\nobreak
\hbox{ $\Box$}\\
\subsection{Cohomological finite generation} Now let $A$ be a finitely generated commutative $k$-algebra with $G$ action. We wish to show that $H^*(G,A)$ is finitely generated, following the same path as in \cite{cohGrosshans}, but using improvements from \cite{Srinivas vdK}. As in \cite{cohGrosshans} we denote by ${\cal A}$ the coordinate ring of a flat family with general fiber $A$ and special fiber $\mathop{\mathrm{gr}}\nolimits A$ \cite[Theorem 13]{Grosshans contr}. Choosing $r$ as in \cite[Prop. 3.8]{cohGrosshans} we have the spectral sequence $$E_2^{ij}=H^i(G/G_r,H^j(G_r,\mathop{\mathrm{gr}}\nolimits A))\Rightarrow H^{i+j}(G,\mathop{\mathrm{gr}}\nolimits A),$$ and $R=H^*(G_r,\mathop{\mathrm{gr}}\nolimits A)^{(-r)}$ is a finite module over the algebra $$\bigotimes_{a=1}^rS^*((\mathop{\mathfrak{gl}}\nolimits_n)^\#(2p^{a-1}))\otimes \mathop{\mathrm{hull}_\nabla}\nolimits(\mathop{\mathrm{gr}}\nolimits A).$$ This algebra has a good filtration, and by the main result of \cite{Srinivas vdK} the ring $R$ has finite good filtration dimension. In particular, there are only finitely many nonzero $H^i(G,R)$. Thus the same main result tells us that $E_2^{0*}\to E_2^{**}$ is noetherian. Now $H^0(G/G_r,H^*(G_r,{\cal A}))\to H^0(G/G_r,H^*(G_r,\mathop{\mathrm{gr}}\nolimits A))$ is noetherian by \cite{Friedlander-Suslin} and Lemma \ref{inv noeth}. And $H^*(G,{\cal A})\to H^0(G/G_r,H^*(G_r,{\cal A}))$ is noetherian by Theorem \ref{dominate}, so another application of \cite[Lemma 1.6]{Friedlander-Suslin} (with $B=k$) shows that $H^*(G,{\cal A})\to H^*(G,\mathop{\mathrm{gr}}\nolimits A)$ is noetherian.
There is a map of spectral sequences from a totally degenerate spectral sequence $$E({\cal A}):\quad E_1^{ij}({\cal A})=H^{i+j}(G,\mathop{\mathrm{gr}}\nolimits_{-i}{\cal A})\Rightarrow H^{i+j}(G, {\cal A}),$$ with pages $H^*(G,{\cal A})$, to the spectral sequence $$E(A):\quad E_1^{ij}(A)=H^{i+j}(G,\mathop{\mathrm{gr}}\nolimits_{-i}A)\Rightarrow H^{i+j}(G, A).$$ This makes $H^*(G,{\cal A})$ act on $E(A)$, and the noetherian homomorphism $H^*(G,{\cal A})\to H^*(G,\mathop{\mathrm{gr}}\nolimits A)$ is used in standard fashion \cite[slogan 3.9]{reductive} to make the spectral sequence $E(A)$ stop. It follows easily that $H^*(G,A)$ is finitely generated.
So far $G$ was $\mathop{\mathit{GL}}\nolimits_{n,k}$. As explained in some detail in \cite{reductive} this case implies our Cohomological Finite Generation Conjecture (over fields). \begin{Remark} The spectral sequence $E(A)$ is based on filtering the Hochschild complex of $A$. As it lives in the second quadrant, the exposition of multiplicative structure in \cite[3.9]{Benson II} does not apply as stated. (In order to avoid convergence issues \cite{Benson II} uses a filtration that reaches a maximum.) But \cite[Ch XV, Ex. 2]{Cartan-Eilenberg} is sufficiently general to cover our case. Or see \cite{Massey}. \end{Remark}
\end{document}
\begin{document}
\begin{abstract} We consider the 2D inviscid incompressible irrotational infinite depth water wave problem neglecting surface tension. Given wave packet initial data of the form $\epsilon B(\epsilon \alpha)e^{ik\alpha}$ for $k > 0$, we show that the modulation of the solution is a profile traveling at group velocity and governed by a focusing cubic nonlinear Schr\"odinger equation, with rigorous error estimates in Sobolev spaces. As a consequence, we establish existence of solutions of the water wave problem in Sobolev spaces for times of order $O(\epsilon^{-2})$ provided the initial data differs from the wave packet by at most $O(\epsilon^{3/2})$ in Sobolev spaces. These results are obtained by directly applying modulational analysis to the evolution equation with no quadratic nonlinearity constructed in \cite{WuAlmostGlobal2D} and by the energy method. \end{abstract}
\title{A Rigorous Justification of the Modulation Approximation to the 2D Full Water Wave Problem}
\section{Introduction}
The mathematical problem of two dimensional water waves concerns the evolution of an interface separating an inviscid, incompressible, irrotational fluid, under the influence of gravity, from a region of zero density (e.g., air) in two dimensional space. It is assumed that the fluid region lies below the air region. Assume the fluid is infinitely deep and has density 1, and that the gravitational field is $g = (0, -1)$. At $t \geq 0$, denote the fluid interface by $\Sigma(t)$ and the fluid region by $\Omega(t)$. If surface tension is neglected, then the motion of the fluid is described by
$$\begin{matrix} \begin{cases} & \mathbf{v}_t + \mathbf{v} \cdot \nabla \mathbf{v} = g - \nabla p \cr &\text{div}\,\mathbf{v} = 0 , \qquad \text{curl}\,\mathbf{v} = 0 \end{cases} & \qquad\; \text{on } \Omega(t), \, t \geq 0 \cr p = 0 & \text{on } \Sigma(t) \end{matrix}$$ \begin{equation}\label{EulerVelocityField} (\mathbf{v}, 1) \text{ is tangent to the free surface } (\Sigma(t), t) \end{equation} where $\mathbf{v}$ is the fluid velocity, $p$ is the fluid pressure.
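Since the flow is incompressible and irrotational and $\Omega(t)$ is simply connected, the velocity field admits a harmonic potential; we recall this standard observation (added here for orientation, not a formula from the references below) because it is what allows the motion to be described entirely by quantities on the interface:
\[
  \operatorname{curl}\mathbf{v}=0,\quad \operatorname{div}\mathbf{v}=0
  \;\Longrightarrow\;
  \mathbf{v}=\nabla\phi \quad\text{with}\quad \Delta\phi=0 \ \text{ in } \Omega(t),
\]
so the dynamics is determined by the position of $\Sigma(t)$ and the boundary values of $\phi$.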
Assume further that the interface $\Sigma(t)$ is parametrized by $z = z(\alpha, t)$, where $\alpha \in \mathbb{R}$ is the Lagrangian coordinate, i.e., $z_t(\alpha, t) = \mathbf{v}(z(\alpha, t), t)$. Let $\mathfrak{a} = -\frac{\partial p}{\partial \mathbf{n}} \frac{1}{|z_\alpha|}$, where $\mathbf{n}=\frac{iz_\alpha}{|z_\alpha|}$ is the unit outward normal of $\Omega(t)$. We know from \cite{WuLocal2DWellPosed} that \eqref{EulerVelocityField} is equivalent to the following complex system on the interface: \begin{equation}\label{OldEuler} z_{tt} - i\mathfrak{a}z_\alpha = -i\end{equation} \begin{equation}\label{ztIsAntihol} (I - \overline{\mathfrak{H}})z_t = 0,\end{equation} where $\mathfrak{H}$ is the Hilbert transform associated to the fluid region $\Omega(t)$: $$\mathfrak{H} f(\alpha, t) = \frac{1}{\pi i} \,\text{p.v.} \int_{-\infty}^\infty \frac{f(\beta, t)z_\beta(\beta, t)}{z(\alpha, t) - z(\beta, t)} d\beta$$
In this paper we consider the modulation approximation to the infinite depth water wave equations \eqref{OldEuler}-\eqref{ztIsAntihol}, i.e., a solution which is to the leading order a wave packet of the form \begin{equation}\label{WavePacket} \epsilon B(\epsilon \alpha, \epsilon t, \epsilon^2 t)e^{i(k\alpha + \omega t)} \end{equation} It is well-known (c.f. \cite{PeterMillerAsymptoticAnalysis}, \cite{JohnsonWaterWaves}) that if one performs a multiscale analysis to determine modulation approximations to the finite or infinite depth 2D water wave equations, one should expect to find that the amplitude $B$ is a profile that travels at the group velocity determined by the dispersion relation of the water wave equations over time intervals of length $O(\epsilon^{-1})$, and evolves according to a nonlinear Schr\"odinger equation (NLS) over time intervals of length $O(\epsilon^{-2})$. The first formal derivations of the NLS from the 2D water wave equations were obtained by Zakharov \cite{ZakharovInfiniteDepth} for the infinite depth case, and by Hasimoto and Ono \cite{HashimotoOno} for the finite depth case. In \cite{CraigSulemSulemNLSFiniteDepth}, Craig, Sulem and Sulem applied modulation analysis to the finite depth 2D water wave equation, derived an approximate solution of the form of a wave packet and showed that the modulation approximation satisfies the 2D finite depth water wave equation to leading order.
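For orientation, we recall the linear dispersion relation of infinite depth gravity waves with $g=1$ (standard, c.f. \cite{JohnsonWaterWaves}; recorded here rather than derived), which fixes the frequency $\omega$ and the group velocity in \eqref{WavePacket}:
\[
  \omega^2 = k, \qquad |c_g| = \left|\frac{d\omega}{dk}\right| = \frac{1}{2\sqrt{k}},
\]
so the envelope $B$ travels at speed $|c_g|$ on the $O(\epsilon^{-1})$ time scale and evolves by NLS on the $O(\epsilon^{-2})$ time scale.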
A rigorous justification of the NLS from the full water wave equations would bring us one step closer to understanding qualitative properties for wave packet-like solutions of the water wave equations from those of solutions to NLS on the appropriate time scales. Rigorous justifications of the KdV, KP, Boussinesq, shallow water and various other asymptotic models from the full water wave equations have been done in \cite{CraigWWExistenceBousKdV}, \cite{SchneiderWayveJustifyKdV}, \cite{LannesJustifyWWModel3D}. As was noted in \cite{CraigSulemSulemNLSFiniteDepth}, the reason that a justification for NLS has not been given is that the longest existence times in Sobolev spaces for the water wave equations demonstrated thus far have been on time scales of the order $O(\epsilon^{-1})$, for data with Sobolev norms of the order $O(\epsilon)$. However, these times are too short to distinguish the NLS behavior of the wave packet from simple translation of the initial wave packet at group velocity. Since there is no existence result in Sobolev spaces on the necessary time scales, an attempt to justify NLS as a rigorous modulation approximation to the water wave system on that scale has not been made.
Let $U_g f = f \circ g$, and for $\kappa: \mathbb R\to \mathbb R$ a diffeomorphism we introduce the notation \begin{equation*}\zeta := z \circ \kappa^{-1}, \qquad U_\kappa^{-1} D_t := \partial_t U_\kappa^{-1}, \qquad U_\kappa^{-1} \mathcal{P} := (\partial_t^2 - i\mathfrak{a}\partial_\alpha)U_\kappa^{-1}\end{equation*}
$$b := \kappa_t \circ \kappa^{-1}, \qquad U_\kappa^{-1} \mathcal{A} \partial_\alpha := \mathfrak{a} \partial_\alpha U_\kappa^{-1}$$
\begin{equation}\label{NewVariableNotation}
D_t = (\partial_t + b \partial_\alpha), \qquad U_\kappa^{-1} \mathcal{H} = \mathfrak{H} U_\kappa^{-1}, \qquad \mathcal{P} = D_t^2 - i\mathcal{A}\partial_\alpha \end{equation} In \cite{WuAlmostGlobal2D}, Wu showed that for any solution $z$ of \eqref{OldEuler}-\eqref{ztIsAntihol}, the quantity $\Pi := (I - \mathfrak{H})(z - \overline{z})$ satisfies the equation \begin{align}\label{ChiEquation} \mathcal{P}(\Pi \circ \kappa^{-1}) & = -2\left[D_t \zeta, \mathcal{H}\frac{1}{\zeta_\alpha} + \overline{\mathcal{H}}\frac{1}{\overline{\zeta}_\alpha} \right]\partial_\alpha D_t \zeta + \frac{1}{\pi i} \int \left(\frac{D_t\zeta(\alpha,t) - D_t\zeta(\beta,t)}{\zeta(\alpha,t) - \zeta(\beta,t)}\right)^2 \partial_\beta(\zeta - \overline{\zeta}) d\beta \notag\\
& = \frac{4}{\pi} \int \frac{(D_t\zeta(\alpha,t) - D_t\zeta(\beta,t))(\Im \zeta(\alpha,t) - \Im \zeta(\beta,t))}{|\zeta(\alpha,t) - \zeta(\beta,t)|^2} \partial_\beta D_t\zeta(\beta,t) d\beta\notag \\ & \quad + \frac{2}{\pi} \int \left(\frac{D_t\zeta(\alpha,t) - D_t\zeta(\beta,t)}{\zeta(\alpha,t) - \zeta(\beta,t)}\right)^2 \partial_\beta \Im \zeta(\beta,t) d\beta \end{align} and furthermore there is a coordinate change $\kappa$ such that, in this coordinate system, the equation \eqref{ChiEquation} contains no quadratic nonlinear terms. Using this favorable structure and the method of vector fields, Wu further proved the almost global well-posedness for the full water wave system \eqref{OldEuler}-\eqref{ztIsAntihol} for data small in the generalized $L^2$ Sobolev spaces defined by the invariant vector fields. However, the wave packet data $\epsilon B(\epsilon \alpha)e^{ik\alpha}$ (for $B$ sufficiently smooth and localized) has slow decay at infinity, and in terms of the generalized Sobolev norms used in \cite{WuAlmostGlobal2D} these are at least of size $O(\epsilon^{-1/2})$. In terms of the standard Sobolev norms they are of size $O(\epsilon^{1/2})$. Therefore the standard $L^2$ Sobolev spaces suit our purposes better.
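Indeed, the claimed $O(\epsilon^{1/2})$ size in the standard Sobolev norms follows from a one-line scaling computation, recorded here for convenience:
\[
  \|\epsilon B(\epsilon\,\cdot\,)e^{ik\cdot}\|_{L^2(\mathbb{R})}
  = \epsilon\Bigl(\int_{\mathbb{R}}|B(\epsilon\alpha)|^2\,d\alpha\Bigr)^{1/2}
  = \epsilon^{1/2}\,\|B\|_{L^2(\mathbb{R})},
\]
and similarly for higher derivatives, since each $\partial_\alpha$ either falls on $e^{ik\alpha}$ or produces an additional factor of $\epsilon$.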
As is suggested by the work of \cite{KSMCubicNonlinearityLongtimeRemainder}, in justifying the modulation approximation for a nonlinear system it is advantageous if the nonlinear system contains no quadratic nonlinear terms. We therefore use the equation \eqref{ChiEquation} to perform the multiscale analysis. In fact, we will use a slightly different change of variables $\kappa$ from that given in \cite{WuAlmostGlobal2D}. Upon performing this multiscale analysis, we derive an approximate wave packet-like solution $\tilde{\zeta}$ that satisfies the transformed equations (see \eqref{NewEuler}-\eqref{XiIsAntihol} below) with a residual of size $O(\epsilon^4)$. The special structure of \eqref{ChiEquation} then allows us to obtain bounds for the error $r = \zeta - \tilde{\zeta}$ between the true solution and the approximate solution on the appropriate time scale in Sobolev spaces.
We will see in the course of the multiscale analysis that the envelope of the leading term of $\tilde{\zeta} - \alpha$ obeys a focusing cubic nonlinear Schr\"odinger equation which is globally well-posed in sufficiently regular Sobolev spaces. This implies that the approximate solution $\tilde{\zeta}$ exists for all time. This fact, along with the a priori bounds on the remainder $r$, allows us to show existence and uniqueness of solutions of the system \eqref{OldEuler}-\eqref{ztIsAntihol} on the proper $O(\epsilon^{-2})$ time scales, for initial data which is no more than $O(\epsilon^{3/2})$ away from a wave packet $\epsilon B(\epsilon \alpha)e^{ik\alpha}$ in Sobolev spaces. A rigorous justification of wave packet approximations to
solutions of the water wave system is then obtained in this special coordinate system $\kappa$. Upon changing variables, we obtain appropriate wave packet approximations to water waves in Lagrangian coordinates. Finally, by introducing some further restrictions on the initial data, we justify an Eulerian version of the asymptotics.
\section{Derivation of the Main Equations}
In this section we introduce our notation as well as collect for future reference the main equations and formulas from \cite{WuAlmostGlobal2D} that we will use. We first recall the definition of the Hilbert transform $\mathcal{H}_\gamma$ associated to the interface determined by a curve parametrization $\gamma(\alpha) : \mathbb{R} \to \mathbb{C}\;$: \begin{equation}\label{HilbertTransformDefinition} \mathcal{H}_\gamma f(\alpha) := \frac{1}{\pi i} \,\text{p.v.} \int_{-\infty}^\infty \frac{\gamma_\beta(\beta)}{\gamma(\alpha) - \gamma(\beta)} f(\beta) d\beta \end{equation} We adopt the following notations for Hilbert transforms associated to specific curves: $\mathfrak{H}$ is the Hilbert transform associated to $z$ already defined, $\mathcal{H}$ is the Hilbert transform associated to $\zeta$, and $\mathcal{H}_0$ is the flat Hilbert transform associated to the line $\gamma(\alpha) = \alpha$. In general, the Hilbert transform $\mathcal{H}_\gamma$ satisfies the convention $\mathcal{H}_\gamma 1 = 0$ and the identity $\mathcal{H}_\gamma^2 = I$ in $L^2$. Let $\Omega$ be a domain in $\mathbb R^2$, with $\partial\Omega$ parametrized by $\gamma(\alpha)$, $\alpha\in \mathbb R$, oriented clockwise. We know $f(\cdot) = F(\gamma(\cdot))\in L^2(\mathbb R)$ is the trace of a holomorphic function $F$ in $\Omega$ if and only if $(I - \mathcal{H}_\gamma)f = 0$. The celebrated result of \cite{CoifmanMeyerMcintoshL2Bounds} (see Theorem \ref{CMMWEstimates}) states that $\mathcal{H}_\gamma$ is bounded on $L^2$ provided that $\gamma$ satisfies the chord-arc condition: There exist constants $\nu, N > 0$ so that \begin{equation}\label{ChordArcCondition}
\nu |\alpha - \beta| \leq |\gamma(\alpha) - \gamma(\beta)| \leq N |\alpha - \beta| \qquad \text{for all } \alpha, \beta \in \mathbb{R}. \end{equation}
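As a simple illustration (ours, not needed later in this generality), the graph of a Lipschitz function is chord-arc: if $\gamma(\alpha) = \alpha + i\eta(\alpha)$ with $\|\eta'\|_{L^\infty} \le M$, then
\[
  |\alpha-\beta| \;\le\; |\gamma(\alpha)-\gamma(\beta)| \;\le\; \sqrt{1+M^2}\,|\alpha-\beta|,
\]
so \eqref{ChordArcCondition} holds with $\nu=1$ and $N=\sqrt{1+M^2}$; in particular the flat interface $\gamma(\alpha)=\alpha$ recovers the flat Hilbert transform $\mathcal{H}_0$ from \eqref{HilbertTransformDefinition}.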
We will frequently use the properties of the Hilbert transform given in Lemmas 2.1 and 2.2 of \cite{WuAlmostGlobal2D} which for convenience are recorded here. Note that in the sequel we will often be suppressing the dependence on $t$.
\begin{proposition}[c.f. Lemma 2.1 of \cite{WuAlmostGlobal2D}]\label{HilbertCommutatorIdentities}
Suppose that $z(\alpha, t)$ has no self-intersections at time $t \in [0, T_0]$ and satisfies $z_t, z_\alpha - 1 \in C^1([0, T_0]; H^1)$. Then for all functions $f \in C^1(\mathbb{R} \times [0, T_0])$ having the property that $f_\alpha(\alpha, t) \to 0$ as $|\alpha| \to \infty$ we have the identities $$[\partial_t, \mathfrak{H}]f = [z_t, \mathfrak{H}]\frac{f_\alpha}{z_\alpha}, \qquad [\mathfrak{a}\partial_\alpha, \mathfrak{H}]f = [\mathfrak{a}z_\alpha, \mathfrak{H}]\frac{f_\alpha}{z_\alpha}, \qquad [\mathfrak{H}, \partial_\alpha/z_\alpha] = 0$$ $$[\partial_t^2, \mathfrak{H}]f = [z_{tt}, \mathfrak{H}]\frac{f_\alpha}{z_\alpha} + 2[z_t, \mathfrak{H}]\frac{f_{t\alpha }}{z_\alpha} - \frac{1}{\pi i}\int\left(\frac{z_t(\alpha) - z_t(\beta)}{z(\alpha) - z(\beta)}\right)^2 f_\beta(\beta) d\beta$$ $$[\partial_t^2 - i\mathfrak{a}\partial_\alpha, \mathfrak{H}]f = 2[z_t, \mathfrak{H}]\frac{f_{t\alpha }}{z_\alpha} - \frac{1}{\pi i}\int\left(\frac{z_t(\alpha) - z_t(\beta)}{z(\alpha) - z(\beta)}\right)^2 f_\beta(\beta) d\beta$$ $$(I - \mathfrak{H})(-i\mathfrak{a}_t \overline{z}_\alpha) = 2[z_{tt}, \mathfrak{H}]\frac{\overline{z}_{t\alpha }}{z_\alpha} + 2[z_t, \mathfrak{H}]\frac{\overline{z}_{tt\alpha }}{z_\alpha} - \frac{1}{\pi i}\int\left(\frac{z_t(\alpha) - z_t(\beta)}{z(\alpha) - z(\beta)}\right)^2 \overline{z}_{t\beta }(\beta) d\beta$$ \end{proposition}
\textbf{Remark.} Observe that if we change variables via $\kappa$ each formula above has a corresponding formula in which $z$ is replaced by $\zeta$, $\partial_t$ is replaced by $D_t$, $\mathfrak{H}$ is replaced by $\mathcal{H}$, etc.
\begin{proposition}[c.f. Lemma 2.2 of \cite{WuAlmostGlobal2D}]\label{HoloProperties} Let $\Omega \subset \mathbb{C}$ be a region whose boundary $\partial\Omega$ is parametrized by $\gamma(\alpha)$, oriented clockwise. Then the following hold: \begin{enumerate} \item{If $f = \mathcal{H}_\gamma f$ and $g = \mathcal{H}_\gamma g$, then $[f, \mathcal{H}_\gamma]g = 0$.} \item{For all $f, g \in L^2(\partial\Omega)$, $[f, \mathcal{H}_\gamma]\mathcal{H}_\gamma g = -[\mathcal{H}_\gamma f, \mathcal{H}_\gamma]g$.} \end{enumerate} \end{proposition}
With these preparations, we give the change of variables used to convert \eqref{OldEuler}-\eqref{ztIsAntihol} into a more suitable equation for our purposes. Originally, in \cite{WuAlmostGlobal2D}, the change of variables $\kappa$ was introduced using a Riemann map $\mathbf{\Phi}(z, t) : \Omega(t) \to P_-$ which for each $t$ mapped the fluid region $\Omega(t)$ to the lower half plane, and then defined by $\alpha \mapsto z(\alpha, t) + \overline{z}(\alpha, t) - h(\alpha, t)$, where $h$ was taken to be $\alpha \mapsto \mathbf{\Phi}(z(\alpha, t), t)$.
However, the only property of $h$ that was used was that it was a real-valued trace of a holomorphic function defined on $\Omega(t)$. This idea was already used in the 3D setting to prove global existence of solutions to the 3D water wave problem \cite{WuGlobal3D}. We use it here by choosing to set $$h(\alpha, t) = z(\alpha, t) - \frac{1}{2}(I + \mathfrak{H})(I + \mathfrak{K})^{-1}\left(z(\alpha, t) - \overline{z}(\alpha, t)\right),$$ where $\mathfrak{K} = \Re \mathfrak{H}$ is the double layer potential operator associated to the curve $z$. It is easy to see from the definition that $h$ is a real-valued trace of a holomorphic function in $\Omega(t)$. Then the change of variables is defined by \begin{align}\label{ChangeOfVariables} \kappa(\alpha, t) & = z(\alpha,t) + \overline{z}(\alpha,t) - h(\alpha, t) \notag \\ & = \overline{z}(\alpha, t) + \frac{1}{2}(I + \mathfrak{H})(I + \mathfrak{K})^{-1}(z(\alpha, t) - \overline{z}(\alpha, t)) \end{align} Our choice of $\kappa$ then gives us the crucial identity \begin{equation}\label{OldXiIsAntihol} (I - \mathfrak{H})(\overline{z} - \kappa) =- (I - \mathfrak{H})\left(\frac{1}{2}(I + \mathfrak{H})(I + \mathfrak{K})^{-1}(z - \overline{z})\right) = 0, \end{equation} and from this it follows immediately in the new coordinates that \begin{equation} (I - \mathcal{H})(\overline{\zeta} - \alpha) = 0 \end{equation} and \begin{equation}\label{ChiAndXiAreEqual} \Pi \circ \kappa^{-1} = (I - \mathcal{H})(\zeta - \overline{\zeta}) = (I - \mathcal{H})(\zeta - \alpha) \end{equation} We denote $$\xi := \zeta - \alpha,$$ the perturbation of $\zeta$ from the rest state $\alpha$. 
Then from \eqref{ChiEquation} and \eqref{OldXiIsAntihol} we have that solutions $z$ also satisfy the system \begin{equation}\label{NewEuler}\mathcal{P}(I - \mathcal{H})\xi = G \end{equation} \begin{equation}\label{XiIsAntihol}(I - \overline{\mathcal{H}})\xi = 0\end{equation} where as in \eqref{ChiEquation} the cubic nonlinearity $G$ is \begin{equation}\label{GFormula} G := -2\left[D_t\zeta, \mathcal{H}\frac{1}{\zeta_\alpha} + \overline{\mathcal{H}}\frac{1}{\overline{\zeta}_\alpha}\right]\partial_\alpha D_t\zeta + \frac{1}{\pi i}\int \left(\frac{D_t\zeta(\alpha) - D_t\zeta(\beta)}{\zeta(\alpha) - \zeta(\beta)}\right)^2 (\zeta_\beta(\beta) - \overline{\zeta}_\beta(\beta))\,d\beta \end{equation} We will also need the equations corresponding to the time derivative, which, by virtue of \eqref{ztIsAntihol} and by applying $D_t$ to \eqref{NewEuler}, are given by \begin{equation}\label{DtNewEuler}(D_t^2 - i\mathcal{A}\partial_\alpha)D_t(I - \mathcal{H})\xi = D_t G + [\mathcal{P}, D_t](I - \mathcal{H})\xi \end{equation} \begin{equation}\label{DtXiIsAntihol}(I - \overline{\mathcal{H}})D_t\zeta = 0\end{equation} An explicit formula for $D_t G$ is \begin{align}\label{DtGFormula} D_t G & = -2\left[D_t^2 \zeta, \mathcal{H}\frac{1}{\zeta_\alpha} + \overline{\mathcal{H}}\frac{1}{\overline{\zeta}_\alpha}\right]\partial_\alpha D_t \zeta - 2\left[D_t \zeta, \mathcal{H}\frac{1}{\zeta_\alpha} + \overline{\mathcal{H}}\frac{1}{\overline{\zeta}_\alpha}\right]\partial_\alpha D_t^2 \zeta \notag\\
& + \frac{2}{\pi i}\int \left(\frac{D_t\zeta(\alpha) - D_t\zeta(\beta)}{\zeta(\alpha) - \zeta(\beta)}\right)^2 \partial_\beta D_t\zeta(\beta) \, d\beta - \frac{2}{\pi i}\int \frac{\left|D_t\zeta(\alpha) - D_t\zeta(\beta)\right|^2}{(\overline{\zeta}(\alpha) - \overline{\zeta}(\beta))^2} \partial_\beta D_t \zeta(\beta) \, d\beta \notag\\ & \quad + \frac{4}{\pi}\int \frac{(D_t\zeta(\alpha) - D_t\zeta(\beta))(D_t^2\zeta(\alpha) - D_t^2 \zeta(\beta))}{(\zeta(\alpha) - \zeta(\beta))^2} \partial_\beta \Im \zeta(\beta) d\beta \notag \\ & \quad + \frac{2}{\pi} \int \left(\frac{D_t\zeta(\alpha) - D_t\zeta(\beta)}{\zeta(\alpha) - \zeta(\beta)}\right)^2 \partial_\beta \Im D_t \zeta(\beta) d\beta \notag\\ & - \frac{4}{\pi} \int \left(\frac{D_t\zeta(\alpha) - D_t\zeta(\beta)}{\zeta(\alpha) - \zeta(\beta)}\right)^3 \partial_\beta \Im \zeta(\beta) \, d\beta \end{align} We also have the following formulas for $b$ and $\mathcal{A}$ in terms of $\zeta$ (c.f. Proposition 2.4 of \cite{WuAlmostGlobal2D} for a proof; from the proof, it is clear that \eqref{XiIsAntihol} and \eqref{DtXiIsAntihol} together imply \eqref{bFormula} and \eqref{AFormula}): \begin{equation}\label{bFormula}(I - \mathcal{H})b = -[D_t \zeta, \mathcal{H}]\frac{\overline{\zeta}_\alpha - 1}{\zeta_\alpha},\end{equation} \begin{equation}\label{AFormula}(I - \mathcal{H})\mathcal{A} = 1 + i[D_t^2 \zeta, \mathcal{H}]\frac{\overline{\zeta}_\alpha - 1}{\zeta_\alpha} + i[D_t \zeta, \mathcal{H}]\frac{\partial_\alpha D_t \overline{\zeta}}{\zeta_\alpha}\end{equation} The commutator on the right hand side of \eqref{DtNewEuler} can be rewritten using \begin{equation}\label{PCommutatorDtFormula}[\mathcal{P}, D_t](I - \mathcal{H})\xi = U_{\kappa^{-1}}\left(\frac{\mathfrak{a}_t}{\mathfrak{a}}\right) i\mathcal{A}\partial_\alpha(I - \mathcal{H})\xi,\end{equation} and is controlled using the following formula (c.f.
(1.9) and (2.32) of \cite{WuAlmostGlobal2D} for a derivation): \begin{align}\label{atOveraFormula} (I - \mathcal{H})\biggl(\mathcal{A}\overline{\zeta}_\alpha U_{\kappa}^{-1}\left(\frac{\mathfrak{a}_t}{\mathfrak{a}}\right)\biggr) & = 2i[D_t^2 \zeta, \mathcal{H}]\frac{\partial_\alpha D_t \overline{\zeta}}{\zeta_\alpha} + 2i[D_t \zeta, \mathcal{H}]\frac{\partial_\alpha D_t^2 \overline{\zeta}}{\zeta_\alpha} \notag\\ & \quad -\; \frac{1}{\pi} \int \biggl(\frac{D_t\zeta(\alpha) - D_t\zeta(\beta)}{\zeta(\alpha) - \zeta(\beta)} \biggr)^2 \partial_\beta D_t \overline{\zeta}(\beta) d\beta \end{align} We also record Proposition 2.7 of \cite{WuAlmostGlobal2D}: \begin{align}\label{DtbFormula} (I - \mathcal{H})D_t b & = [D_t\zeta, \mathcal{H}]\frac{\partial_\alpha(2b - D_t\overline{\zeta})}{\zeta_\alpha} - [D_t^2\zeta, \mathcal{H}]\frac{\overline{\zeta}_\alpha - 1}{\zeta_\alpha} \notag \\ & \qquad + \frac{1}{\pi i} \int \left(\frac{D_t\zeta(\alpha) - D_t\zeta(\beta)}{\zeta(\alpha) - \zeta(\beta)}\right)^2 (\overline{\zeta}_\beta(\beta) - 1) d\beta \end{align} To estimate terms involving time derivatives of singular integral operators we record the following \begin{lemma}\label{SingIntCommuteWithDt} Suppose that $\mathcal{T} f = \int K(\alpha, \beta) \partial_\beta f(\beta)\, d\beta$. 
Then $$[D_t, \mathcal{T}]f = \int (\partial_t + b(\alpha)\partial_\alpha + b(\beta)\partial_\beta)K(\alpha, \beta) \; \partial_\beta f(\beta) \, d\beta$$ \end{lemma} \begin{proof} We have \begin{align*} [D_t, \mathcal{T}]f & = (\partial_t + b(\alpha) \partial_\alpha)\int K(\alpha, \beta) f_\beta(\beta) \, d\beta - \int K(\alpha, \beta) \partial_\beta D_t f(\beta) \, d\beta \\ & = \int (\partial_t + b(\alpha)\partial_\alpha + b(\beta)\partial_\beta)K(\alpha, \beta) f_\beta(\beta) \, d\beta \\ & \quad + \int K(\alpha, \beta)\Bigl(b_\beta(\beta) f_\beta(\beta) + D_t f_\beta(\beta) - \partial_\beta D_t f(\beta)\Bigr) \, d\beta \\ & = \int (\partial_t + b(\alpha)\partial_\alpha + b(\beta)\partial_\beta)K(\alpha, \beta) f_\beta(\beta) \, d\beta \end{align*} as desired. \end{proof}
Denote the Fourier transform on $\mathbb{R}$ by $$\hat{f}(x) = \frac{1}{2\pi}\int_{-\infty}^\infty f(\alpha)e^{-ix\alpha} d\alpha$$ For $s \in \mathbb{R}$ we have the usual Sobolev spaces $$H^s = \{f \in L^2(\mathbb{R}) : \|f\|_{H^s} := \|(1 + |\cdot|^2)^{s/2}\hat{f}(\cdot)\|_{L^2} < \infty\}$$ and the homogeneous Sobolev spaces $$\dot{H}^s = \{f \in L^2(\mathbb{R}) : \|f\|_{\dot{H}^s} := \|\,|\cdot|^s\hat{f}(\cdot)\|_{L^2} < \infty\}$$ Also for $s \in \mathbb{N}$ we define $W^{s, \infty} = \{f \in L^\infty : \partial_\alpha^j f \in L^\infty, \, j = 1, \ldots, s\}$, with $\|f\|_{W^{s,\infty}}:=\sum_{j=0}^s\|\partial_\alpha^jf\|_{L^\infty}$. A well-known consequence of the Sobolev embedding theorem is that $H^s$ is continuously embedded in $W^{s - 1, \infty}$ for $s \geq 1$. Given a Banach space $X$, let $C([0, T]; X)$ be the space of all continuous functions $f : [0, T] \to X$, equipped with the norm $$\|f\|_{C([0, T]; X)} := \max_{t \in [0, T]} \|f(t)\|_X < \infty.$$
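To illustrate the embedding $H^s \subset W^{s-1,\infty}$ in the simplest case $s = 1$ (a standard argument, recorded here for convenience), write for $f \in H^1(\mathbb{R})$ $$|f(\alpha)|^2 = 2\Re\int_{-\infty}^\alpha \overline{f(\beta)}f_\beta(\beta)\,d\beta,$$ so that by the Cauchy-Schwarz inequality $$\|f\|_{L^\infty}^2 \leq 2\|f\|_{L^2}\|f_\alpha\|_{L^2} \leq C\|f\|_{H^1}^2,$$ where the constant depends only on the normalization of the Fourier transform; for integer $s$, applying this bound to $\partial_\alpha^j f$ for $0 \leq j \leq s - 1$ gives the embedding.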
In the rest of the paper, we make the following \begin{assumption} Let $s \geq 6$, and let $\zeta$ be a solution to the water wave system \eqref{NewEuler}-\eqref{XiIsAntihol}-\eqref{DtXiIsAntihol} on some time interval $[0, T_0]$ satisfying the bound \begin{equation}\label{ZetaLocalAPrioriBound}
\mathfrak{S}(T_0) := \|\zeta_\alpha - 1\|_{C([0, T_0]; H^s)} + \|D_t\zeta\|_{C([0, T_0]; H^{s})} \leq \delta. \end{equation} First we choose $\delta > 0$ sufficiently small so that $\zeta$ satisfies the chord-arc condition \eqref{ChordArcCondition} and $\mathcal{A} \geq 1/2$ (cf. \cite{WuAlmostGlobal2D}). In the course of the paper we will need to choose $\delta$ smaller still. \end{assumption}
In order to use the formulas \eqref{bFormula}, \eqref{AFormula}, \eqref{atOveraFormula} to get estimates for $b$, $\mathcal{A}$ and $U_\kappa^{-1} (\mathfrak{a}_t/\mathfrak{a})$ in $H^s$ we use the following lemma, whose proof is essentially that of Lemma 3.8 and Lemma 3.15 of \cite{WuAlmostGlobal2D}: \begin{lemma}\label{DoubleLayerPotentialArgument} Let $s \geq 4$, and suppose that $\zeta$ satisfies \eqref{ZetaLocalAPrioriBound}. Then there exists a constant $C$ depending on $\mathfrak{S}(T_0)$, so that for all real-valued $f$ we have the following estimates: \begin{enumerate}
\item{$\|f\|_{H^s} \leq C\|(I - \mathcal{H})f\|_{H^s}$}
\item{$\|f\|_{H^s} \leq C\|(I - \mathcal{H})\left(f\mathcal{A}\overline{\zeta}_\alpha\right)\|_{H^s}$} \end{enumerate} \end{lemma}
\section{The Formal Multiscale Calculation.}
The goal of this section is to derive a formal solution to the system \eqref{NewEuler}-\eqref{XiIsAntihol} which is to leading order a wave packet. Since we want our approximation to remain bounded for times of order $O(\epsilon^{-2})$, we calculate this formal solution using a multiscale analysis. As mentioned in the introduction, we expect from similar formal derivations of modulation approximations to the water wave equations that the amplitude of the wave packet is a profile which travels at the group velocity of the water wave operator, and evolves according to a nonlinear Schr\"odinger equation.
To effect this multiscale analysis, we must first formally expand the Hilbert transform $\mathcal{H}$ appearing in the water wave equations. In particular, we must interpret how the flat Hilbert transform $\mathcal{H}_0$ acts on multiple scale functions of the form $F(\epsilon \alpha) e^{ik\alpha}$ for $k \neq 0$.
\subsection{Formal Expansion of the Hilbert Transform}
Understanding the system \eqref{NewEuler}, \eqref{XiIsAntihol} depends on understanding the Hilbert Transform $\mathcal{H}$. Since our first goal is to seek a perturbation expansion $$\zeta(\alpha, t) = \alpha + \xi = \alpha + \sum_{n = 1}^\infty \epsilon^n \zeta^{(n)}(\alpha, t, \epsilon),$$ we must find a corresponding development of $\mathcal{H}$ into a formal power series $$\mathcal{H} = \mathcal{H}_0 + \epsilon\mathcal{H}_1 + \epsilon^2 \mathcal{H}_2 + \cdots$$ To predict what the terms of this series ought to be, we heuristically expand the kernel of $\mathcal{H}$ in a formal power series as follows: \begin{equation} \mathcal{H} f = \mathcal{H}_0 f + \sum_{n = 1}^\infty \frac{(-1)^{n + 1}}{n \pi i} \int f_\beta(\beta) \left(\frac{\xi(\alpha) - \xi(\beta)}{\alpha - \beta}\right)^n d\beta \end{equation} Equating like powers of $\epsilon$ on the right hand side of this last expression suggests the following formulas for $\mathcal{H}_1$: \begin{align*} \mathcal{H}_1 f & := \frac{1}{\pi i} \int f_\beta \left(\frac{\zeta^{(1)}(\alpha) - \zeta^{(1)}(\beta)}{\alpha - \beta}\right) d\beta \\ & = [\zeta^{(1)}, \mathcal{H}_0]f_\alpha \end{align*} and for $\mathcal{H}_2$: \begin{align}\label{HilbertFormulas} \mathcal{H}_2 f & := \frac{1}{\pi i} \int f_\beta(\beta) \left(\frac{\zeta^{(2)}(\alpha) - \zeta^{(2)}(\beta)}{\alpha - \beta}\right) d\beta - \frac{1}{2\pi i} \int f_\beta(\beta) \left(\frac{\zeta^{(1)}(\alpha) - \zeta^{(1)}(\beta)}{\alpha - \beta}\right)^2 d\beta \notag\\ & = \frac{1}{\pi i} \int f_\beta(\beta) \left(\frac{\zeta^{(2)}(\alpha) - \zeta^{(2)}(\beta)}{\alpha - \beta}\right) d\beta \notag\\ & - \frac{1}{\pi i} \int f_{\beta} \zeta^{(1)}_\beta \left(\frac{\zeta^{(1)}(\alpha) - \zeta^{(1)}(\beta)}{\alpha - \beta}\right) d\beta + \frac{1}{2\pi i} \int f_{\beta \beta}(\beta) \left(\frac{(\zeta^{(1)}(\alpha) - \zeta^{(1)}(\beta))^2}{\alpha - \beta}\right) d\beta \notag\\ & = [\zeta^{(2)}, \mathcal{H}_0]f_\alpha - [\zeta^{(1)}, 
\mathcal{H}_0](\zeta^{(1)}_\alpha f_\alpha) + \frac{1}{2}[\zeta^{(1)}, [\zeta^{(1)}, \mathcal{H}_0]]f_{\alpha\alpha} \end{align} and so we define the approximate Hilbert Transform $$\tilde{\mathcal{H}} := \mathcal{H}_0 + \epsilon \mathcal{H}_1 + \epsilon^2 \mathcal{H}_2$$ If $\tilde{\mathcal{H}}$ acts on a multiple scale function $f(\alpha_0, \alpha_1) = f(\alpha, \epsilon\alpha)$, then we have the expansion $$\tilde{\mathcal{H}} = \mathcal{H}^{(0)} + \epsilon \mathcal{H}^{(1)} + \epsilon^2 \mathcal{H}^{(2)} + O(\epsilon^3),$$ where $$\mathcal{H}^{(0)}f = \mathcal{H}_0f, \qquad \mathcal{H}^{(1)}f = [\zeta^{(1)}, \mathcal{H}_0]\partial_{\alpha_0}f,$$ \begin{equation}\label{MultiscaleHilbertFormulas}\mathcal{H}^{(2)}f = [\zeta^{(1)}, \mathcal{H}_0]\partial_{\alpha_1}f + [\zeta^{(2)}, \mathcal{H}_0]\partial_{\alpha_0}f - [\zeta^{(1)}, \mathcal{H}_0]\zeta^{(1)}_{\alpha_0}\partial_{\alpha_0}f + \frac{1}{2}[\zeta^{(1)}, [\zeta^{(1)}, \mathcal{H}_0]]\partial_{\alpha_0}^2f\end{equation} Later we will need to estimate the operator $$\mathcal{H} - \tilde{\mathcal{H}} = (\mathcal{H} - \mathcal{H}_{\tilde{\zeta}}) + (\mathcal{H}_{\tilde{\zeta}} - \tilde{\mathcal{H}}),$$ where $\mathcal{H}_{\tilde{\zeta}}$ is the Hilbert transform associated to the curve given by the approximation $\tilde{\zeta}$. 
We will see that for our purposes it suffices to develop the approximate solution $\tilde{\zeta}$ to the third order: $$\tilde{\zeta}(\alpha, t) = \alpha + \epsilon \zeta^{(1)}(\alpha, t) + \epsilon^2 \zeta^{(2)}(\alpha, t) + \epsilon^3\zeta^{(3)}(\alpha, t)$$ Hence we record the following formula as a first step towards analyzing $\mathcal{H}_{\tilde{\zeta}} - \tilde{\mathcal{H}}$: \begin{lemma}\label{DiffHilbertPart1Formula} $(\mathcal{H}_{\tilde{\zeta}} - \tilde{\mathcal{H}})f$ can be written as the following finite sum of singular integrals: \begin{align}\label{HTildeZetaMinusTildeHFormula} & (\mathcal{H}_{\tilde{\zeta}} - \tilde{\mathcal{H}})f = -\frac{1}{\pi i}\int \frac{\left(\tilde{\xi}(\alpha) - \tilde{\xi}(\beta)\right)^3\tilde{\zeta}_\beta(\beta)}{(\alpha - \beta)^3\left(\tilde{\zeta}(\alpha) - \tilde{\zeta}(\beta)\right)} f(\beta) d\beta \\ & + \sum_S \frac{C_{p_1, p_2}\epsilon^{n_1p_1 + n_2p_2 + m}}{\pi i} \int \frac{\left(\zeta^{(n_1)}(\alpha) - \zeta^{(n_1)}(\beta)\right)^{p_1}\left(\zeta^{(n_2)}(\alpha) - \zeta^{(n_2)}(\beta)\right)^{p_2}}{(\alpha - \beta)^{p_1 + p_2 + 1}}\zeta^{(m)}_\beta(\beta) f(\beta) d\beta \notag \end{align} where $S = \{(n_1, n_2, m, p_1, p_2) : n_1p_1 + n_2p_2 + m \geq 3,\, 0 \leq p_1 + p_2 \leq 2,\, 0 \leq n_1,\,n_2,\, m \leq 3\}$ and $C_{p_1, p_2}$ are constants depending only on $p_1, p_2$. \end{lemma}
\begin{proof} First observe that with an integration by parts we have the formulas $$\mathcal{H}_1 f = \frac{1}{\pi i} \,\text{p.v.} \int f(\beta)\left(\frac{\zeta^{(1)}_\beta(\beta)}{\alpha - \beta} - \frac{\zeta^{(1)}(\alpha) - \zeta^{(1)}(\beta)}{(\alpha - \beta)^2}\right) d\beta$$ and $$\mathcal{H}_2 f = \frac{1}{\pi i} \,\text{p.v.} \int f(\beta)\left(\frac{\zeta^{(2)}_\beta(\beta)}{\alpha - \beta} - \frac{\zeta^{(2)}(\alpha) - \zeta^{(2)}(\beta)}{(\alpha - \beta)^2}\right) d\beta$$ $$-\frac{1}{\pi i}\int f(\beta) \left(\frac{\zeta^{(1)}(\alpha) - \zeta^{(1)}(\beta)}{\alpha - \beta}\right)\left(\frac{\zeta^{(1)}_\beta(\beta)}{\alpha - \beta} - \frac{\zeta^{(1)}(\alpha) - \zeta^{(1)}(\beta)}{(\alpha - \beta)^2}\right) d\beta$$ Now we repeatedly apply the identity $$\frac{1}{\tilde{\zeta}(\alpha) - \tilde{\zeta}(\beta)} = \frac{1}{\alpha - \beta} - \frac{\tilde{\xi}(\alpha) - \tilde{\xi}(\beta)}{(\alpha - \beta)\left(\tilde{\zeta}(\alpha) - \tilde{\zeta}(\beta)\right)}$$ so as to arrive at the identity \begin{equation} \frac{1}{\tilde{\zeta}(\alpha) - \tilde{\zeta}(\beta)} = \frac{1}{\alpha - \beta} - \frac{\tilde{\xi}(\alpha) - \tilde{\xi}(\beta)}{(\alpha - \beta)^2} + \frac{\left(\tilde{\xi}(\alpha) - \tilde{\xi}(\beta)\right)^2}{(\alpha - \beta)^3} - \frac{\left(\tilde{\xi}(\alpha) - \tilde{\xi}(\beta)\right)^3}{(\alpha - \beta)^3\left(\tilde{\zeta}(\alpha) - \tilde{\zeta}(\beta)\right)} \end{equation} The last of these terms is of size $O(\epsilon^3)$. As for the rest, if we arrange $\tilde{\zeta}_\beta(\beta)/\left(\tilde{\zeta}(\alpha) - \tilde{\zeta}(\beta)\right)$ in powers of $\epsilon$ up through $\epsilon^2$, we see that \begin{eqnarray*} \frac{\tilde{\zeta}_\beta(\beta)}{\tilde{\zeta}(\alpha) - \tilde{\zeta}(\beta)} & = & \frac{1}{\alpha - \beta} \cr & & \!\!\!\! +\; \epsilon \left(\frac{\zeta^{(1)}_\beta(\beta)}{\alpha - \beta} - \frac{\zeta^{(1)}(\alpha) - \zeta^{(1)}(\beta)}{(\alpha - \beta)^2}\right) \cr & & \!\!\!\! 
+\; \epsilon^2 \Biggl(\frac{\zeta^{(2)}_\beta(\beta)}{\alpha - \beta} - \frac{\zeta^{(2)}(\alpha) - \zeta^{(2)}(\beta)}{(\alpha - \beta)^2} \cr & & \quad -\; \frac{\zeta^{(1)}(\alpha) - \zeta^{(1)}(\beta)}{\alpha - \beta}\left(\frac{\zeta^{(1)}_\beta(\beta)}{\alpha - \beta} - \frac{\zeta^{(1)}(\alpha) - \zeta^{(1)}(\beta)}{(\alpha - \beta)^2}\right)\Biggr) \cr & & \!\!\!\! + \; O(\epsilon^3) \end{eqnarray*} All of the terms here up through order $O(\epsilon^2)$ precisely comprise $\tilde{\mathcal{H}}$, and so vanish upon subtracting $\tilde{\mathcal{H}}$. The remaining $O(\epsilon^3)$ terms consist of a finite number of terms which can be written explicitly in the form $$\sum_S C_{p_1, p_2} \epsilon^{n_1 p_1 + n_2 p_2 + m} \frac{\left(\zeta^{(n_1)}(\alpha) - \zeta^{(n_1)}(\beta)\right)^{p_1}\left(\zeta^{(n_2)}(\alpha) - \zeta^{(n_2)}(\beta)\right)^{p_2}}{(\alpha - \beta)^{p_1+p_2 + 1}}\zeta^{(m)}_\beta(\beta)$$ where $S = \{(n_1, n_2, m, p_1, p_2) : n_1p_1 + n_2p_2 + m \geq 3,\, 0 \leq p_1 + p_2 \leq 2,\, 0 \leq n_1,\,n_2,\, m \leq 3\}$ and the $C_{p_1, p_2}$ are constants depending only on $p_1, p_2$. \end{proof}
\subsection{The Action of $\mathcal{H}_0$ on Multiscale Functions}
As we saw in the last section, the operators appearing in the power series expansion of the Hilbert Transform of the interface can be written in terms of the flat Hilbert transform $$\mathcal{H}_0 f := \frac{1}{\pi i} \,\text{p.v.} \int \frac{f(\beta)}{\alpha - \beta} \, d\beta$$ It is known that $\mathcal{H}_0$ is a Fourier multiplier with Fourier symbol $\hat{\mathcal{H}}_0(\xi) = -\,\text{sgn}(\xi)$. However, it still remains to be seen how to interpret the action of $\mathcal{H}_0$ on a multiscale function $f = f(\alpha, \epsilon\alpha)$ as a multiscale function.
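For the reader's convenience we recall how this symbol arises under the normalization of the Fourier transform fixed above. Since $\int \text{p.v.}\,\frac{1}{\alpha}\,e^{-i\xi\alpha}\,d\alpha = -\pi i\,\text{sgn}(\xi)$ and $\widehat{g \ast f} = 2\pi\hat{g}\hat{f}$ under this normalization, we have $$\widehat{\mathcal{H}_0 f}(\xi) = \frac{2\pi}{\pi i}\left(-\frac{i}{2}\,\text{sgn}(\xi)\right)\hat{f}(\xi) = -\,\text{sgn}(\xi)\hat{f}(\xi).$$ In particular $\mathcal{H}_0^2 = I$ on $L^2$, and since $\overline{\mathcal{H}}_0 = -\mathcal{H}_0$, the operator $\overline{\mathcal{H}}_0$ has symbol $\,\text{sgn}(\xi)$; both facts will be used without further comment below.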
Since we are interested in the modulation approximation of the water wave problem, we will choose the leading order of our approximation to be a wave packet of the form $B(\epsilon\alpha)e^{ik\alpha}$ for $k > 0$. Hence the formal calculation depends upon understanding the action of $\mathcal{H}_0$ on such wave packets. Since the amplitude of $B(\epsilon\alpha)e^{ik\alpha}$ is slowly varying for small $\epsilon$, we heuristically expect for $k \neq 0$ that $$\overline{\mathcal{H}}_0\left(B(\epsilon\alpha)e^{ik\alpha}\right) \sim B(\epsilon\alpha)\overline{\mathcal{H}}_0\left(e^{ik\alpha}\right) = B(\epsilon\alpha)\,\text{sgn}(k)e^{ik\alpha},$$ where $\sim$ indicates an error depending on $\epsilon$. The following result confirms this intuition. We adopt the usual convention that a constant $C$ may change from line to line in the course of deriving an inequality.
\begin{proposition}\label{WavePacketAntiholProp}
Let $k \neq 0$ and $s, m \geq 0$ be given. Assume $\epsilon\le 1$. Then if $f \in H^{s + m}$, $$\|(\overline{\mathcal{H}}_0 - \,\text{sgn}(k))f(\epsilon\alpha)e^{ik\alpha}\|_{H^s} \leq C \frac{\epsilon^{m - 1/2}}{k^m}\|f\|_{H^{s + m}}$$ where the constant depends only on $s$. \end{proposition}
\textbf{Proof.} It suffices to consider the case $k > 0$, since the case $k < 0$ follows by complex conjugation and the fact that $\overline{\mathcal{H}}_0 = -\mathcal{H}_0$. We first derive a bound for $\|\partial_\alpha^n(I - \overline{\mathcal{H}}_0)f(\epsilon\alpha)e^{ik\alpha}\|_{L^2}$. We calculate that \begin{eqnarray*}
\|\partial_\alpha^n(I - \overline{\mathcal{H}}_0)f(\epsilon\alpha)e^{ik\alpha}\|_{L^2} & = & \left( \int_{-\infty}^\infty \left|(i\xi)^n (1 - \,\text{sgn}(\xi)) \frac{1}{\epsilon}\hat{f}\biggl(\frac{\xi - k}{\epsilon}\biggr)\right|^2 d\xi \right)^{1/2}\cr
& = & 2\left( \int_{-\infty}^{-k} \left|(\xi + k)^n \frac{1}{\epsilon}\hat{f}\left(\frac{\xi}{\epsilon}\right)\right|^2 d\xi \right)^{1/2}\cr
& \leq & 2\left( \int_{-\infty}^{-k} \epsilon^{2(n + m) - 1}|\xi|^{-2m}\left|\widehat{\partial_\alpha^{n + m}f}\left(\frac{\xi}{\epsilon}\right)\right|^2 \frac{d\xi}{\epsilon} \right)^{1/2} \cr
& \leq & 2\epsilon^{n + m - 1/2} \left(\sup_{\xi \leq -k} |\xi|^{-m}\right) \left(\int\left|\widehat{\partial_\alpha^{n + m}f}\left(\frac{\xi}{\epsilon}\right)\right|^2\frac{d\xi}{\epsilon}\right)^{1/2} \cr
& \leq & 2\frac{\epsilon^{n + m - 1/2}}{k^m}\|\partial_\alpha^{n + m}f\|_{L^2}. \end{eqnarray*} But since $\epsilon \leq 1$, we have for any $m \geq 0$ that \begin{eqnarray*}
\|(I - \overline{\mathcal{H}}_0)f(\epsilon\alpha)e^{ik\alpha}\|_{H^s} & \leq & C \sum_{n = 0}^s \|\partial_\alpha^n(I - \overline{\mathcal{H}}_0)f(\epsilon\alpha)e^{ik\alpha}\|_{L^2} \cr
& \leq & C \sum_{n = 0}^s \frac{\epsilon^{n + m - 1/2}}{k^m}\|\partial_\alpha^{n + m}f\|_{L^2} \cr
& \leq & C \frac{\epsilon^{m - 1/2}}{k^m}\|\partial_\alpha^mf\|_{H^s} \cr
& \leq & C \frac{\epsilon^{m - 1/2}}{k^m}\|f\|_{H^{s + m}}.\Box \end{eqnarray*}
As a consequence we may freely assume in the multiscale calculation that $\mathcal{H}_0$ formally treats the amplitude of the wave packet $B(\epsilon\alpha)e^{ik\alpha}$ as a constant when $k \neq 0$. However, note that in the case $k = 0$ we can at best say that $$\overline{\mathcal{H}}_0(f(\epsilon \cdot))(\alpha) = (\overline{\mathcal{H}}_0 f)(\epsilon\alpha)$$ and so these must be retained as functions of the slow variable $\alpha_1 = \epsilon\alpha$ whenever they occur in the multiscale calculation.
We record an immediate consequence of this result that will be used frequently in the multiscale calculation.
\begin{corollary}\label{CommutatorPhase}
Let $s \geq 1$, $m \geq 0$, $\epsilon \leq 1$, and $f, g \in H^{s + m}(\mathbb{R})$, and suppose that $k, l$ are given so that $l \neq 0, -k$ and $\,\text{sgn}(l) = \,\text{sgn}(k + l)$. Then $$\|[f(\epsilon\alpha)e^{ik\alpha}, \mathcal{H}_0]g(\epsilon\alpha)e^{il\alpha}\|_{H^s} \leq C\epsilon^{m - 1/2}\left(\frac{1}{(k + l)^m} + \frac{1}{k^m}\right)\|f\|_{H^{s + m}}\|g\|_{H^{s + m}}$$ \end{corollary}
\subsection{The Multiscale Calculation}
We are now prepared to find an approximate solution $\tilde{\zeta}$ to the four equations \eqref{NewEuler}-\eqref{DtXiIsAntihol} which is to leading order a wave packet, where $G$ is given by \eqref{GFormula}. Our approach will be to derive an approximate solution to the system \eqref{NewEuler}-\eqref{XiIsAntihol} having residual $O(\epsilon^4)$ with a multiscale analysis and then verify that this approximate solution also satisfies \eqref{DtNewEuler}-\eqref{DtXiIsAntihol} up to a residual of size $O(\epsilon^4)$. We begin by seeking a perturbative ansatz for \eqref{NewEuler}-\eqref{XiIsAntihol} $$\zeta(\alpha, t) = \alpha + \sum_{n = 1}^\infty \epsilon^n \zeta^{(n)}(\alpha, t, \epsilon)$$ In order to construct an expansion that is valid for times of order $O(\epsilon^{-2})$, we introduce multiple scales $$t_0 = t, \quad t_1 = \epsilon t, \quad t_2 = \epsilon^2 t, \quad \alpha_0 = \alpha, \quad \alpha_1 = \epsilon \alpha$$ and so we seek a solution of the form $$\zeta(\alpha, t) = \alpha + \sum_{n = 1}^\infty \epsilon^n \zeta^{(n)}(\alpha_0, \alpha_1, t_0, t_1, t_2)$$ which formally satisfies the original equations up to terms of size $O(\epsilon^4)$.
Before we begin solving these equations, we expand the auxiliary quantities and operators in powers of $\epsilon$. In particular we must determine the expansions in $\epsilon$ of the quantities $$b = \sum_{n = 0}^\infty \epsilon^n b_n, \qquad \mathcal{A} = \sum_{n = 0}^\infty \epsilon^n \mathcal{A}_n, \qquad G = \sum_{n = 0}^\infty \epsilon^n G_n$$ Notice that since $b$ and $\mathcal{A} - 1$ are of quadratic order and $G$ is of cubic order, it follows that $$\mathcal{A}_0 = 1, \qquad b_0 = b_1 = \mathcal{A}_1 = G_0 = G_1 = G_2 = 0.$$ We will also show in the sequel that $\mathcal{A}_2 = 0$ and $b_2 = b_2(\alpha_1, t_1, t_2)$; the linear operator associated to the water wave equation thus has the multiscale expansion $$D_t^2 - i\mathcal{A}\partial_\alpha = (\partial_{t_0}^2 - i\partial_{\alpha_0}) + \epsilon(2\partial_{t_0}\partial_{t_1} - i\partial_{\alpha_1})$$ \begin{equation}\label{PowerExpansionOfP} \qquad\qquad +\; \epsilon^2(2\partial_{t_0}\partial_{t_2} + \partial_{t_1}^2 + 2b_2 \partial_{t_0}\partial_{\alpha_0}) + O(\epsilon^3)\end{equation} Recall that we also have the formulas for the multiscale expansion of the Hilbert Transform given by \eqref{MultiscaleHilbertFormulas}.
We will find explicit formulas for $b_2$, $b_3$ and $G_3$ in the course of the analysis. In what follows we will repeatedly use the fact, justified by the last section, that \begin{equation}\label{WavePacketIsAlmostHol} \overline{\mathcal{H}}_0(f(\alpha_1)e^{ik\alpha_0}) = \,\text{sgn}(k)f(\alpha_1)e^{ik\alpha_0} + O(\epsilon^4), \qquad k \neq 0\end{equation} and hence that \begin{equation}\label{CommutatorPhaseIdentity} [f(\alpha_1)e^{ik\alpha_0}, \overline{\mathcal{H}}_0]g(\alpha_1)e^{il\alpha_0} = O(\epsilon^4) \quad \text{ whenever } \quad \,\text{sgn}(l) = \,\text{sgn}(k + l), \ l, l+k\neq 0\end{equation}
We are now ready to expand \eqref{NewEuler}-\eqref{XiIsAntihol} in powers of $\epsilon$. Collecting like terms yields a hierarchy of systems that allow us to successively solve for the holomorphic trace $\frac{1}{2}(I + \mathcal{H}_0)\zeta^{(n)}$ and the antiholomorphic trace $\frac{1}{2}(I - \mathcal{H}_0)\zeta^{(n)}$ of the $\zeta^{(n)}$'s in the lower half plane. The terms of order $O(\epsilon)$ in \eqref{NewEuler}-\eqref{XiIsAntihol} yield the system \begin{equation}\label{Xi1NewEuler}(\partial_{t_0}^2 - i\partial_{\alpha_0})(I - \mathcal{H}_0)\zeta^{(1)} = 0 \end{equation} \begin{equation}\label{Xi1IsAntihol}(I - \overline{\mathcal{H}}_0)\zeta^{(1)} = 0 \end{equation} Because we are interested in solutions which to leading order are given by wave packets, we assume an ansatz concentrated in Fourier space about the fixed wave number $k > 0$: $$\zeta^{(1)} = B_+(\alpha_1, t_0, t_1, t_2) e^{ik\alpha} + B_-(\alpha_1, t_0, t_1, t_2)e^{-ik\alpha}$$ Injecting the above ansatz into \eqref{Xi1IsAntihol} forces $B_- = 0$ by \eqref{WavePacketIsAlmostHol}. Similarly substituting this ansatz into \eqref{Xi1NewEuler} yields the condition $(\partial_{t_0}^2 + k)B_+ = 0$, which implies that $B_+(\alpha_1, t_0, t_1, t_2) = B(\alpha_1, t_1, t_2)e^{i\omega t_0}$, where we have introduced the wave frequency $\omega$ which satisfies the water wave dispersion relation \begin{equation}\label{WWDispersionRelation}\omega^2 = k\end{equation} Thus we take as our solution \begin{equation}\label{Zeta1Formula}\zeta^{(1)} = B(\alpha_1, t_1, t_2)e^{i\phi},\end{equation} where we have introduced the phase $\phi := k\alpha_0 + \omega t_0$.
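For later reference we record the elementary consequences of the dispersion relation \eqref{WWDispersionRelation}: since $\omega = \sqrt{k}$ for $k > 0$, $$\omega^\prime = \frac{d\omega}{dk} = \frac{1}{2\sqrt{k}} = \frac{1}{2\omega}, \qquad \omega^{\prime\prime} = \frac{d^2\omega}{dk^2} = -\frac{1}{4k^{3/2}}, \qquad 2\omega\omega^{\prime\prime} = -2(\omega^\prime)^2 = -\frac{1}{2k}.$$ These identities are used repeatedly when collecting the $O(\epsilon^2)$ and $O(\epsilon^3)$ terms below.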
Moving to the $O(\epsilon^2)$ terms from \eqref{NewEuler}, we have by \eqref{Zeta1Formula} and using \eqref{CommutatorPhaseIdentity} that \begin{align}\label{Xi2NewEuler} (\partial_{t_0}^2 - i\partial_{\alpha_0})(I - \mathcal{H}_0)\zeta^{(2)} & = -(\partial_{t_0}^2 - i\partial_{\alpha_0})(-\mathcal{H}^{(1)})\zeta^{(1)} \notag\\ & \quad -(2\partial_{t_0}\partial_{t_1} - i\partial_{\alpha_1})(I - \mathcal{H}_0)\zeta^{(1)} \\ & = -4i\omega(B_{t_1} - \omega^\prime B_{\alpha_1})e^{i\phi} \notag \end{align} where $\omega^\prime = d\omega/dk$ is the group velocity of the wave packet. If we want $(I - \mathcal{H}_0)\zeta^{(2)}$ to be uniformly bounded for all time we must insist that the right hand side of \eqref{Xi2NewEuler} be equal to zero in order to avoid secular terms. Therefore we choose \begin{equation}\label{BTravelsAtGroupVelocity}B(\alpha_1, t_1, t_2) = B(\alpha_1 + \omega^\prime t_1, t_2) =: B(X, T)\end{equation} where $X := \alpha_1 + \omega^\prime t_1$ and $T := t_2$. The $O(\epsilon^2)$ terms from \eqref{XiIsAntihol} yield the equation \begin{align}\label{Zeta2XiIsAntihol} (I - \overline{\mathcal{H}}_0)\zeta^{(2)} & = \overline{\mathcal{H}}^{(1)} \zeta^{(1)} \notag\\ & = [\overline{\zeta}^{(1)}, \overline{\mathcal{H}}_0]\zeta^{(1)}_{\alpha_0} \\
& = ik(I - \overline{\mathcal{H}}_0)|B|^2 \notag \end{align}
A natural choice would seem to be $\zeta^{(2)}=ik|B|^2+B_2(\alpha_1,t_1,t_2) e^{i\phi}$. However, such a choice leads to unavoidable secular growth at the $O(\epsilon^3)$ level. Instead, we find that choosing $\zeta^{(2)}$ so that $(I - \mathcal{H}_0)\zeta^{(2)} = 0$ avoids this secular growth.
Hence we take \begin{equation}\label{Zeta2Formula}\zeta^{(2)} = \frac{1}{2}ik(I - \overline{\mathcal{H}}_0)|B|^2\end{equation}
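A quick symbol computation confirms that this choice is consistent with both requirements above: since $\mathcal{H}_0$ and $\overline{\mathcal{H}}_0$ have symbols $-\,\text{sgn}(\xi)$ and $\,\text{sgn}(\xi)$ respectively, $$(I - \mathcal{H}_0)(I - \overline{\mathcal{H}}_0) = 0 \quad \text{and} \quad (I - \overline{\mathcal{H}}_0)^2 = 2(I - \overline{\mathcal{H}}_0) \quad \text{on } L^2,$$ so that \eqref{Zeta2Formula} satisfies $(I - \mathcal{H}_0)\zeta^{(2)} = 0$ as well as $(I - \overline{\mathcal{H}}_0)\zeta^{(2)} = ik(I - \overline{\mathcal{H}}_0)|B|^2$, as required by \eqref{Zeta2XiIsAntihol}.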
Before we move on to the $O(\epsilon^3)$ system, we must first derive formulas for $b_2$, $\mathcal{A}_2$ and $G_3$. Substituting the expansion of $\zeta$ into the formula \eqref{bFormula} we see immediately upon collecting like powers of $\epsilon$ that $b_0 = b_1 = 0$. Therefore we have
$$(I - \mathcal{H}_0)b_2 = -[\zeta^{(1)}_{t_0}, \mathcal{H}_0]\overline{\zeta}^{(1)}_{\alpha_0} = -k\omega(I - \mathcal{H}_0)|B|^2,$$ and so since $b_2$ is real-valued we conclude that \begin{equation}\label{b2Formula}b_2 = -k\omega|B|^2\end{equation} Similarly, using \eqref{AFormula} we have immediately that $\mathcal{A}_1 = 0$ and that
$$(I - \mathcal{H}_0)\mathcal{A}_2 = i[\partial_{t_0} \zeta^{(1)}_{t_0}, \mathcal{H}_0]\overline{\zeta}^{(1)}_{\alpha_0} + i[\zeta^{(1)}_{t_0}, \mathcal{H}_0]\partial_{t_0}\overline{\zeta}^{(1)}_{\alpha_0} = ik\omega\partial_{t_0}(I - \mathcal{H}_0)|B|^2 = 0$$ whence $\mathcal{A}_2 = 0$ as claimed.
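For the reader's convenience, the leading-order commutator computation underlying both \eqref{b2Formula} and the vanishing of $\mathcal{A}_2$ is $$[\zeta^{(1)}_{t_0}, \mathcal{H}_0]\overline{\zeta}^{(1)}_{\alpha_0} = [i\omega Be^{i\phi}, \mathcal{H}_0]\left(-ik\overline{B}e^{-i\phi}\right) = k\omega\left(Be^{i\phi}\,\mathcal{H}_0\left(\overline{B}e^{-i\phi}\right) - \mathcal{H}_0|B|^2\right) = k\omega(I - \mathcal{H}_0)|B|^2 + O(\epsilon^4),$$ where in the last step we used $\mathcal{H}_0(\overline{B}e^{-i\phi}) = \overline{B}e^{-i\phi} + O(\epsilon^4)$, the complex conjugate of \eqref{WavePacketIsAlmostHol}. Since $|B|^2$ is independent of $t_0$, applying $i\partial_{t_0}$ to this identity gives $(I - \mathcal{H}_0)\mathcal{A}_2 = 0$.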
Finally we derive from \eqref{GFormula} a formula for $G_3$: \begin{align*} G_3 & = \frac{4}{\pi}\int\frac{\left(\zeta^{(1)}_{t_0}(\alpha) - \zeta^{(1)}_{t_0}(\beta)\right)\left(\Im \zeta^{(1)}(\alpha) - \Im \zeta^{(1)}(\beta)\right)}{(\alpha - \beta)^2} \zeta^{(1)}_{t_0 \beta_0}(\beta) \, d\beta \notag\\ & \quad + \frac{2}{\pi}\int\frac{(\zeta^{(1)}_{t_0}(\alpha) - \zeta^{(1)}_{t_0}(\beta))^2}{(\alpha - \beta)^2} \Im \zeta^{(1)}_{\beta_0} (\beta) d\beta \notag\\ & := I_1 + I_2 \end{align*} Using \eqref{WavePacketIsAlmostHol} and \eqref{CommutatorPhaseIdentity} yields \begin{align*} I_1 & = -\frac{2}{\pi i}\int\frac{\left(\zeta^{(1)}_{t_0}(\alpha) - \zeta^{(1)}_{t_0}(\beta)\right)\left(\overline{\zeta}^{(1)}(\alpha) - \overline{\zeta}^{(1)}(\beta)\right)}{(\alpha - \beta)^2} \zeta^{(1)}_{t_0 \beta_0}(\beta) \, d\beta \\ & = \frac{2}{\pi i} \int \frac{\left(\zeta^{(1)}_{t_0}(\alpha) - \zeta^{(1)}_{t_0}(\beta)\right)\left(\overline{\zeta}^{(1)}(\alpha) - \overline{\zeta}^{(1)}(\beta)\right)}{(\alpha - \beta)} \zeta^{(1)}_{t_0 \beta_0 \beta_0}(\beta) \, d\beta \\ & \quad - \frac{2}{\pi i} \int \frac{\left(\zeta^{(1)}_{t_0}(\alpha) - \zeta^{(1)}_{t_0}(\beta)\right)\overline{\zeta}^{(1)}_{\beta_0}(\beta)}{(\alpha - \beta)} \zeta^{(1)}_{t_0 \beta_0}(\beta) \, d\beta \notag\\ & \quad -\frac{2}{\pi i} \int \frac{\zeta^{(1)}_{t_0 \beta_0}(\beta)\left(\overline{\zeta}^{(1)}(\alpha) - \overline{\zeta}^{(1)}(\beta)\right)}{(\alpha - \beta)} \zeta^{(1)}_{t_0 \beta_0}(\beta) \, d\beta \\ & = \frac{2k^3}{\pi i}\overline{\zeta}^{(1)}(\alpha) \int \frac{\left(\zeta^{(1)}(\alpha) - \zeta^{(1)}(\beta)\right)}{(\alpha - \beta)} \zeta^{(1)}(\beta) \, d\beta \\ & \quad -2k^3[\overline{\zeta}^{(1)}, \mathcal{H}_0]\left((\zeta^{(1)})^2\right) \\ & = 0 \end{align*} Similarly, we simplify \begin{align*} I_2 & = \frac{2}{\pi}\int\frac{(\zeta^{(1)}_{t_0}(\alpha) - \zeta^{(1)}_{t_0}(\beta))^2}{(\alpha - \beta)^2} \Im \zeta^{(1)}_{\beta_0} (\beta) d\beta \\ & = 2i\biggl(2[\zeta^{(1)}_{t_0}, 
\mathcal{H}_0](\zeta^{(1)}_{t_0 \alpha_0}\Im\zeta^{(1)}_{\alpha_0}) - [\zeta^{(1)}_{t_0}, [\zeta^{(1)}_{t_0}, \mathcal{H}_0]]\Im\zeta^{(1)}_{\alpha_0 \alpha_0}\biggr) \\ & = - 2[\zeta^{(1)}_{t_0}, \mathcal{H}_0](\zeta^{(1)}_{t_0 \alpha_0}\overline{\zeta}^{(1)}_{\alpha_0}) + [\zeta^{(1)}_{t_0}, [\zeta^{(1)}_{t_0}, \mathcal{H}_0]]\overline{\zeta}^{(1)}_{\alpha_0 \alpha_0} \\
& = 2k^3Be^{i\phi}(I + \mathcal{H}_0)|B|^2 - 2k^3Be^{i\phi}\mathcal{H}_0|B|^2 \\
& = 2k^3B|B|^2e^{i\phi} \end{align*} In summary, \begin{equation}\label{G3Formula}
G_3 = 2k^3B|B|^2e^{i\phi} \end{equation}
We can now arrange the $O(\epsilon^3)$ terms of \eqref{NewEuler}, and using \eqref{Zeta1Formula} and \eqref{Zeta2Formula} along with \eqref{b2Formula}, \eqref{G3Formula} and \eqref{CommutatorPhaseIdentity} arrive at the equation \begin{align}\label{Zeta3NewEuler} (\partial_{t_0}^2 - i\partial_{\alpha_0})(I - \mathcal{H}_0)\zeta^{(3)} & = -(\partial_{t_0}^2 - i\partial_{\alpha_0})(-\mathcal{H}^{(1)})\zeta^{(2)} - (\partial_{t_0}^2 - i\partial_{\alpha_0})(-\mathcal{H}^{(2)})\zeta^{(1)} \cr & \quad -\;(2\partial_{t_0}\partial_{t_1} - i\partial_{\alpha_1})(I - \mathcal{H}_0)\zeta^{(2)} - (2\partial_{t_0}\partial_{t_1} - i\partial_{\alpha_1})(-\mathcal{H}^{(1)})\zeta^{(1)} \cr & \quad -\; (2\partial_{t_0}\partial_{t_2} + \partial_{t_1}^2 + 2b_2\partial_{t_0}\partial_{\alpha_0})(I - \mathcal{H}_0)\zeta^{(1)} + G_3 \cr & = - (2\partial_{t_0}\partial_{t_2} + \partial_{t_1}^2 + 2b_2\partial_{t_0}\partial_{\alpha_0})(I - \mathcal{H}_0)\zeta^{(1)} \cr
& \quad +\; 2k^3B|B|^2e^{i\phi} \cr
& = - 2\omega(2iB_T - \omega^{\prime \prime}B_{XX} + k^2\omega B|B|^2)e^{i\phi}
\end{align} where $\omega^{\prime\prime} = d^2\omega/dk^2$. To suppress secular growth we now insist that the amplitude $B$ satisfy the focusing cubic nonlinear Schr\"odinger equation\footnote{Observe that this equation agrees with the equation derived in \cite{CraigSulemSulemNLSFiniteDepth} when one formally lets the depth of the fluid tend to infinity.} \begin{equation}\label{NLS}2iB_T - \omega^{\prime \prime}B_{XX} + k^2\omega B|B|^2 = 0.\end{equation} With this choice made we solve \eqref{Zeta3NewEuler} by taking $(I - \mathcal{H}_0)\zeta^{(3)} = 0$.
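As a check of the final equality in \eqref{Zeta3NewEuler}, write $(I - \mathcal{H}_0)\zeta^{(1)} = 2Be^{i\phi} + O(\epsilon^4)$ and recall that $B$ depends on $t_1$ only through $X = \alpha_1 + \omega^\prime t_1$. Then \begin{align*} 2\partial_{t_0}\partial_{t_2}\left(2Be^{i\phi}\right) & = 4i\omega B_T e^{i\phi}, \\ \partial_{t_1}^2\left(2Be^{i\phi}\right) & = 2(\omega^\prime)^2 B_{XX}e^{i\phi} = -2\omega\omega^{\prime\prime}B_{XX}e^{i\phi}, \\ 2b_2\,\partial_{t_0}\partial_{\alpha_0}\left(2Be^{i\phi}\right) & = 2(-k\omega|B|^2)(i\omega)(ik)\,2Be^{i\phi} = 4k^3B|B|^2e^{i\phi}, \end{align*} where we used $\omega^2 = k$ and the identity $-2(\omega^\prime)^2 = 2\omega\omega^{\prime\prime}$, which is immediate from $\omega = \sqrt{k}$. Summing these contributions with $G_3 = 2k^3B|B|^2e^{i\phi}$ indeed yields $-2\omega(2iB_T - \omega^{\prime\prime}B_{XX} + k^2\omega B|B|^2)e^{i\phi}$.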
Finally, the $O(\epsilon^3)$ terms from \eqref{XiIsAntihol} yield the equation \begin{align}\label{Zeta3XiIsAntihol} (I - \overline{\mathcal{H}}_0)\zeta^{(3)} & = \overline{\mathcal{H}}^{(1)} \zeta^{(2)} + \overline{\mathcal{H}}^{(2)} \zeta^{(1)} \notag\\ & = [\overline{\zeta}^{(1)}, \overline{\mathcal{H}}_0]\zeta^{(2)}_{\alpha_0} + [\overline{\zeta}^{(2)}, \overline{\mathcal{H}}_0]\zeta^{(1)}_{\alpha_0} + [\overline{\zeta}^{(1)}, \overline{\mathcal{H}}_0]\zeta^{(1)}_{\alpha_1} \\ & \quad -\; [\overline{\zeta}^{(1)}, \overline{\mathcal{H}}_0](\overline{\zeta}_{\alpha_0}^{(1)}\zeta^{(1)}_{\alpha_0}) + \frac{1}{2}[\overline{\zeta}^{(1)}, [\overline{\zeta}^{(1)}, \overline{\mathcal{H}}_0]]\zeta^{(1)}_{\alpha_0 \alpha_0} \notag\\
& = (I - \overline{\mathcal{H}}_0)(\overline{B}B_X) - k^2\overline{B}e^{-i\phi}(I + \overline{\mathcal{H}}_0)|B|^2 + k^2\overline{B}e^{-i\phi}\overline{\mathcal{H}}_0|B|^2 \notag\\
& = -k^2\overline{B}|B|^2e^{-i\phi} + (I - \overline{\mathcal{H}}_0)\left( \overline{B}B_X\right) \notag
\end{align} Hence we choose \begin{equation}\label{Zeta3Formula} \zeta^{(3)} = -\frac{1}{2}k^2\overline{B}|B|^2e^{-i\phi} + \frac{1}{2}(I - \overline{\mathcal{H}}_0)\left(\overline{B}B_X\right)\end{equation}
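That \eqref{Zeta3Formula} is an admissible choice can again be checked on the symbol level: the term $-\frac{1}{2}k^2\overline{B}|B|^2e^{-i\phi}$ is concentrated at the frequency $-k < 0$, where $\overline{\mathcal{H}}_0$ acts as $-I$ and $\mathcal{H}_0$ acts as $+I$, up to $O(\epsilon^4)$ errors. Hence $$(I - \overline{\mathcal{H}}_0)\zeta^{(3)} = -k^2\overline{B}|B|^2e^{-i\phi} + (I - \overline{\mathcal{H}}_0)\left(\overline{B}B_X\right) + O(\epsilon^4), \qquad (I - \mathcal{H}_0)\zeta^{(3)} = O(\epsilon^4),$$ the latter since $(I - \mathcal{H}_0)(I - \overline{\mathcal{H}}_0) = 0$ on $L^2$. Thus $\zeta^{(3)}$ satisfies \eqref{Zeta3XiIsAntihol} and is compatible with the requirement $(I - \mathcal{H}_0)\zeta^{(3)} = 0$ imposed in solving \eqref{Zeta3NewEuler}.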
Now that we have constructed an approximate solution $\tilde{\zeta}$ to the equations \eqref{NewEuler}-\eqref{XiIsAntihol}, we claim that $\tilde{\zeta}$ also solves the system \eqref{DtNewEuler}-\eqref{DtXiIsAntihol} up to an $O(\epsilon^4)$ residual. First we notice that \eqref{DtNewEuler} is obtained by applying the derivative $D_t$ to \eqref{NewEuler}; it is therefore clear that $\tilde{\zeta}$ solves
\eqref{DtNewEuler} up to an $O(\epsilon^4)$ residual. Now we consider \eqref{DtXiIsAntihol}. By \eqref{XiIsAntihol} we have that \begin{align*} (I - \overline{\mathcal{H}})D_t\zeta & = (I - \overline{\mathcal{H}})D_t(\zeta - \alpha) + (I - \overline{\mathcal{H}})D_t\alpha \\ & = [D_t, \overline{\mathcal{H}}](\zeta - \alpha) + (I - \overline{\mathcal{H}})D_t\alpha \\ & = [D_t\zeta, \overline{\mathcal{H}}]\frac{\zeta_\alpha - 1}{\overline{\zeta}_\alpha} + (I - \overline{\mathcal{H}})b \end{align*} Hence to show that $\tilde{\zeta}$ satisfies \eqref{DtXiIsAntihol} up to an $O(\epsilon^4)$ residual
it suffices to show that our approximation of $b$ satisfies \eqref{bFormula} up to a residual of size $O(\epsilon^4)$. Hence we need only choose $b_3$ so that \begin{align}\label{HolPartOfb3} (I - \overline{\mathcal{H}}_0)b_3 & = \overline{\mathcal{H}}^{(1)} b_2 \notag\\ & \quad -\; [\partial_{t_0}\overline{\zeta}^{(2)}, \overline{\mathcal{H}}_0]\zeta^{(1)}_{\alpha_0} - [\partial_{t_1}\overline{\zeta}^{(1)}, \overline{\mathcal{H}}_0]\zeta^{(1)}_{\alpha_0} \notag\\ & \quad -\; [\partial_{t_0}\overline{\zeta}^{(1)}, \overline{\mathcal{H}}^{(1)}]\zeta^{(1)}_{\alpha_0} - [\partial_{t_0}\overline{\zeta}^{(1)}, \overline{\mathcal{H}}_0]\zeta^{(2)}_{\alpha_0} \notag\\
& \quad -\; [\partial_{t_0}\overline{\zeta}^{(1)}, \overline{\mathcal{H}}_0]\zeta^{(1)}_{\alpha_1} + [\partial_{t_0}\overline{\zeta}^{(1)}, \overline{\mathcal{H}}_0]|\zeta^{(1)}_{\alpha_0}|^2 \notag\\ & \notag \\ & = -\frac{1}{2}i\omega(I - \overline{\mathcal{H}}_0)(\overline{B}B_X) \notag\\
& \quad -\; i\omega k^2 \overline{B}e^{-i\phi}(I - \overline{\mathcal{H}}_0)|B|^2 \notag\\ & \quad +\; i\omega(I - \overline{\mathcal{H}}_0)(B\overline{B}_X) \notag\\
& \quad -\; i\omega k^2 \overline{B}e^{-i\phi}(I + \overline{\mathcal{H}}_0)|B|^2 \notag \\ & \notag\\
& = i\omega(I - \overline{\mathcal{H}}_0)\biggl(B\overline{B}_X - \frac{1}{2}\overline{B}B_X \biggr) -2i\omega k^2 \overline{B}|B|^2e^{-i\phi} \end{align} In summary, we have shown that the equations \eqref{NewEuler}-\eqref{XiIsAntihol}-\eqref{DtNewEuler}-\eqref{DtXiIsAntihol} are satisfied up to a residual of size $O(\epsilon^4)$ by the approximation \begin{align}\label{TildeZetaFormula} \tilde{\zeta} & := \alpha + \epsilon \zeta^{(1)} + \epsilon^2 \zeta^{(2)} + \epsilon^3 \zeta^{(3)} \notag\\
& = \alpha + \epsilon Be^{i\phi} + \epsilon^2\frac{1}{2}ik(I - \overline{\mathcal{H}}_0)|B|^2 \notag \\
& \qquad + \epsilon^3 \left(-\frac{1}{2}k^2\overline{B}|B|^2e^{-i\phi} + \frac{1}{2}(I - \overline{\mathcal{H}}_0)\left(\overline{B}B_X\right)\right)
\end{align} where $B = B(\epsilon(\alpha + \omega^\prime t), \epsilon^2 t) = B(X, T)$ satisfies the NLS equation $$2iB_T - \omega^{\prime\prime}B_{XX} + k^2\omega B|B|^2 = 0$$ From \eqref{HolPartOfb3}, enforcing the reality condition on $b_3$ yields \begin{align}\label{TildeBFormula} \tilde{b} & := b_0 + \epsilon b_1 + \epsilon^2 b_2 + \epsilon^3 b_3 \notag\\
& = \epsilon^2(-k\omega|B|^2) \notag \\
& + \epsilon^3\biggl(\Re\left(2i\omega k^2 B|B|^2e^{i\phi}\right) + \frac{3}{4}i\omega(B\overline{B}_X - \overline{B}B_X) - \frac{1}{4}i\omega\overline{\mathcal{H}}_0(B\overline{B}_X + \overline{B}B_X)\biggr)\end{align} We also define \begin{equation}\label{TildeAFormula}\tilde{\mathcal{A}} := \mathcal{A}_0 + \epsilon \mathcal{A}_1 + \epsilon^2 \mathcal{A}_2 = 1\end{equation} and \begin{equation} \tilde G := G_0 + \epsilon G_1 + \epsilon^2 G_2 + \epsilon^3 G_3 = \epsilon^3 G_3 \end{equation} Corresponding to this approximate solution \eqref{TildeZetaFormula} we introduce $$\tilde{\xi} := \tilde{\zeta} - \alpha$$ as well as \begin{equation}\label{TildeDtFormulas} \tilde{D}_t := \partial_t + \tilde{b}\partial_\alpha \qquad \tilde{\mathcal{P}} := \tilde{D}_t^2 - i\tilde{\mathcal{A}}\partial_\alpha \end{equation} We then have the formulas for the difference \begin{equation}\label{DiffDtFormula} D_t - \tilde{D}_t = (b - \tilde{b})\partial_\alpha \end{equation} as well as for \begin{equation}\label{DiffDtSquaredFormula} D_t^2 - \tilde{D}_t^2 = \left(D_t(b - \tilde{b})\right)\partial_\alpha + (b - \tilde{b})\left(D_t \partial_\alpha + \partial_\alpha \tilde{D}_t\right) \end{equation} and so \begin{equation}\label{DiffPFormula} \mathcal{P} - \tilde{\mathcal{P}} = \left(D_t(b - \tilde{b}) - i(\mathcal{A} - \tilde{\mathcal{A}})\right)\partial_\alpha + (b - \tilde{b})\left(D_t \partial_\alpha + \partial_\alpha \tilde{D}_t\right) \end{equation}
For future reference we also include the following calculation.
\begin{proposition}\label{VariousMultiscaleIdentities} Let $\tilde{\zeta}$, $\tilde{b}$, $\tilde{\mathcal{A}}$ be as above. Then \begin{enumerate} \item{$\tilde{\mathcal{P}}\tilde{\xi} = O(\epsilon^3)$.} \item{$[\tilde{\mathcal{P}}, \tilde{\mathcal{H}}]\tilde{\xi} = O(\epsilon^4)$.} \end{enumerate} \end{proposition}
\begin{proof} The first statement is straightforward. For the second, observe that by \eqref{CommutatorPhaseIdentity} we have that $\mathcal{H}^{(1)}\zeta^{(1)} = \mathcal{H}^{(2)} \zeta^{(1)} = O(\epsilon^4)$. The $O(\epsilon)$ term is $[\partial_{t_0}^2 - i\partial_{\alpha_0}, \mathcal{H}_0]\zeta^{(1)} = 0$. The $O(\epsilon^2)$ terms are $$[\partial_{t_0}^2 - i\partial_{\alpha_0}, \mathcal{H}_0]\zeta^{(2)} + [\partial_{t_0}^2 - i\partial_{\alpha_0}, \mathcal{H}^{(1)}]\zeta^{(1)} + [2\partial_{t_0}\partial_{t_1} - i\partial_{\alpha_1}, \mathcal{H}_0]\zeta^{(1)}$$ which vanishes by virtue of the above observation, \eqref{Zeta1Formula}, \eqref{BTravelsAtGroupVelocity}, and \eqref{Zeta2Formula}. Finally, the $O(\epsilon^3)$ terms are given by \begin{align*} & \quad\; [\partial_{t_0}^2 - i\partial_{\alpha_0}, \mathcal{H}_0]\zeta^{(3)} \\ & + [\partial_{t_0}^2 - i\partial_{\alpha_0}, \mathcal{H}^{(1)}]\zeta^{(2)} \\ & + [\partial_{t_0}^2 - i\partial_{\alpha_0}, \mathcal{H}^{(2)}]\zeta^{(1)} \\ & + [2\partial_{t_0}\partial_{t_1} - i\partial_{\alpha_1}, \mathcal{H}_0]\zeta^{(2)} \\ & + [2\partial_{t_0}\partial_{t_1} - i\partial_{\alpha_1}, \mathcal{H}^{(1)}]\zeta^{(1)} \\ & + [2\partial_{t_0}\partial_{t_2} + \partial_{t_1}^2 + 2b_2\partial_{\alpha_0}\partial_{t_0}, \mathcal{H}_0]\zeta^{(1)}
\end{align*} For the same reasons as for the $O(\epsilon^2)$ terms all of the above are immediately seen to vanish except for the last, which by \eqref{b2Formula} is given by $$2[b_2, \mathcal{H}_0]\zeta^{(1)}_{\alpha_0 t_0} = 2k^3[|B|^2, \mathcal{H}_0]Be^{i\phi} = 0,$$ by \eqref{CommutatorPhaseIdentity}. \end{proof}
We have now shown that the approximation $\tilde{\zeta}$ depends on $B$ and $B_X$, where $B$ satisfies the NLS equation \eqref{NLS}. To ensure that the forthcoming objects are well-defined, we appeal to the following global well-posedness result for NLS:
\begin{theorem}\label{NLSWellPosedness} (cf. \cite{CazenaveSemilinearSchrodingerEquations}, \cite{SulemSulemNLSBook}) Let $m \geq 1$ be given, and suppose that $B_0 \in H^m$. Then there exists a unique solution $B \in C([0, \infty); H^m)$ to \eqref{NLS} with initial condition $B(0) = B_0$. \end{theorem}
Fix $s \geq 6$, $\mathscr T > 0$. For the rest of the paper we assume that $B_0 \in H^{s + 7}$, and hence by the above theorem that $B \in C([0, \infty); H^{s + 7})$ with $\|B\|_{C([0, \mathscr T); H^{s + 7})} \leq C(\|B_0\|_{H^{s + 7}}, \mathscr T)$. If we calculate $\tilde{\zeta}$ and $\tilde{D}_t\tilde{\zeta}$ from $B \in H^{s + 7}$ through \eqref{TildeZetaFormula}, we see by counting the maximum number of derivatives that fall on $B$ that we have the bound \begin{equation}\label{NLSGlobalBound}
\left\|\left(\tilde{\xi}, \tilde{D}_t\tilde{\zeta}, \tilde{D}_t^2 \tilde{\zeta}\right)\right\|_{C([0, \mathscr T); H^{s + 6} \times H^{s + 4} \times H^{s + 2})} \leq C(\|B_0\|_{H^{s + 7}}, \mathscr T)\epsilon^{1/2}\end{equation} For the rest of the paper, we choose $\epsilon < \epsilon_0$, where $\epsilon_0 \le 1$ is sufficiently small depending on $B_0$, so that $\tilde{\zeta}$ satisfies the chord-arc condition \eqref{ChordArcCondition}. Along with the a priori assumption \eqref{ZetaLocalAPrioriBound} with an appropriately small choice of $\delta > 0$, this implies that the singular integrals in the next section are well-defined.
\section{Estimates of the Remainder}
Now that we have derived a formal approximation of the solution $\zeta$ to the system \eqref{NewEuler}-\eqref{XiIsAntihol}-\eqref{DtNewEuler}-\eqref{DtXiIsAntihol}, we can consider the size of the remainder $r = \zeta - \tilde{\zeta}$. Our basic approach is to expand the known equations for $\zeta$ and formulas for quantities defined in terms of $\zeta$ given in \S 2 by writing $\zeta = r + \tilde{\zeta}$ and thereby find the appropriate governing equations from which we will derive energy estimates for $r$.
In \S 4.1 we derive from \eqref{NewEuler}-\eqref{XiIsAntihol}-\eqref{DtNewEuler}-\eqref{DtXiIsAntihol} new equations in terms of quantities related to $r$. Many functions and operators will arise in these equations that we need to study before we can estimate them appropriately. In particular we devote \S 4.2 to studying the remainder between the true and approximate Hilbert transforms introduced in \S 3.1.
To clearly describe the respects in which we consider quantities to be small, we adopt the following terminology: we say a term is of $n$th order (with linear, quadratic, cubic having the typical meaning) if the term consists of $n$ small factors.
Alternately, given a Banach space $X$ with norm $\|\cdot\|_X$, we say that a term $f \in X$ is $O(\epsilon^n)$ in $X$ when there exists a constant $C$ so that $\|f\|_X \leq C\epsilon^n$. If we use the notation $O(\epsilon^n)$ without mentioning a norm explicitly, we mean size in the physical sense $O(\epsilon^n)$ as we have used in \S 3. Since we ultimately seek bounds in Sobolev spaces $H^s$, we introduce the special notation that $f \in H^s$ is $\mathcal{O}(\epsilon^n)$, which means that $f$ is $O(\epsilon^n)$ in $H^s$, where the index $s$ will be clear from context.
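The half-power of $\epsilon$ separating these two notions of size can be seen from a model computation, in which a generic (say Schwartz) profile $g$ in the slow variable stands in for $B$: if $f(\alpha) = \epsilon^n g(\epsilon\alpha)$ is of physical size $O(\epsilon^n)$, then a change of variables gives $$\|f\|_{L^2}^2 = \epsilon^{2n}\int |g(\epsilon\alpha)|^2 \, d\alpha = \epsilon^{2n - 1}\|g\|_{L^2}^2,$$ so that $f$ is $\mathcal{O}(\epsilon^{n - 1/2})$ in the $L^2$ sense. This accounts for the factor of $\epsilon^{1/2}$ in \eqref{NLSGlobalBound} and for the half-powers separating physical size from Sobolev size in what follows.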
We ultimately plan to control all of our quantities in terms of $r_\alpha$ and $D_t r$ in Sobolev space, and so we need some idea of how large we expect $r_\alpha$ and $D_tr$ to be in terms of $\epsilon$. Since we are only interested in the leading term of the approximation, it is a suitable goal to seek a remainder which is of physical size $O(\epsilon^2)$, and in the $L^2$ sense to be $\mathcal{O}(\epsilon^{3/2})$. Therefore, we expect here that $r_\alpha$ and $D_t r$ should be $\mathcal{O}(\epsilon^{3/2})$.
In \S 4.3 we bound in $H^s$ the remaining quantities appearing in the cubic nonlinearities of the equations of \S 4.1 by terms involving the quantity $$E_s^{1/2} := \|r_\alpha\|_{H^s} + \|D_t r\|_{H^s},$$ which we expect to be $\mathcal{O}(\epsilon^{3/2})$, so that $E_s$ itself is of size $O(\epsilon^3)$. We will then show that for $\epsilon < \epsilon_0$ with $\epsilon_0$ chosen sufficiently small, the quantity $E_s$ is bounded above by the quantity
$$\sum_{n = 0}^s \|D_t\partial_\alpha^n \rho\|_{L^2}^2 + \|D_t \partial_\alpha^n \sigma\|^2_{L^2}$$ where $$ \rho := \frac{1}{2}(I - \mathcal{H})r \qquad \text{and} \qquad \sigma := \frac{1}{4}(I - \mathcal{H})\left(D_t(I - \mathcal{H})\xi - \tilde{D}_t(I - \tilde{\mathcal{H}})\tilde{\xi}\right)$$ which in turn is bounded above by the energy $\mathcal{E}$ for the remainder.
We then use these estimates to show that the cubic nonlinearities of the remainder equations of \S 4.1 are $\mathcal{O}(\epsilon^{7/2})$. Having done so, we derive in \S 4.5 an energy inequality which roughly reads $d\mathcal{E}/dt \leq O(\epsilon^5)$. Heuristically, an inequality of this type is suitable since on time scales on the order $O(\epsilon^{-2})$ this implies $E_s$ is of size $O(\epsilon^3)$, as we would like. We then go on to rigorously derive a priori bounds of $E_s$ on $O(\epsilon^{-2})$ time scales.
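To make this heuristic explicit, integrate in time: if $\mathcal{E}(0) \leq C\epsilon^3$ and $d\mathcal{E}/dt \leq C\epsilon^5$, then $$\mathcal{E}(t) \leq \mathcal{E}(0) + C\epsilon^5 t \leq C\epsilon^3\left(1 + \epsilon^2 t\right),$$ which remains of size $O(\epsilon^3)$ precisely on time scales $t = O(\epsilon^{-2})$, that is, up to times at which the slow time $T = \epsilon^2 t$ is of order one.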
\subsection{The Derivation of the Equations for the Remainder}
Here we derive the equations governing the evolution of the quantities \begin{equation}\label{RhoSigmaDefn}\rho := \frac{1}{2}(I - \mathcal{H})r \qquad \text{and} \qquad \sigma := \frac{1}{4}(I - \mathcal{H})\left(D_t(I - \mathcal{H})\xi - \tilde{D}_t(I - \tilde{\mathcal{H}})\tilde{\xi}\right)\end{equation} Our goal in this section is to manipulate the nonlinearities of these equations so that they will be in a suitable form for showing they are of size $\mathcal{O}(\epsilon^{7/2})$. For example, from \eqref{DtXiIsAntihol} we have \begin{equation}\label{DtXiIsAntiholRemainder} (I - \mathcal{H})D_t\overline{r} = -(I - \tilde{\mathcal{H}})\tilde{D}_t\overline{\tilde{\zeta}} - (I - \tilde{\mathcal{H}})(D_t - \tilde{D}_t)\overline{\tilde{\zeta}} + (\mathcal{H} - \tilde{\mathcal{H}})D_t\overline{\tilde{\zeta}}\end{equation} We will show in \S 4.2 that the operator norm of $\mathcal{H} - \tilde{\mathcal{H}}$ on $H^s$ is of size $O(\epsilon^{3/2})$, and in \S 4.3 that the function $b - \tilde{b}$ is of size $\mathcal{O}(\epsilon^{5/2})$. Hence the right hand side of \eqref{DtXiIsAntiholRemainder} is of size $\mathcal{O}(\epsilon^{5/2})$.
We now give the equation for the remainder corresponding to \eqref{NewEuler}. In decomposing the right hand side of this equation, we keep two goals in mind. First, we must split the terms in such a way as to arrive at $\tilde{G}$ so as to cancel the $O(\epsilon^3)$ contribution from $G$. Next, we must whenever possible avoid estimating terms formed by $\mathcal{P}$ acting on complicated terms, so as to reduce all estimates whenever possible to those already derived. Specifically we expand using Proposition \ref{HilbertCommutatorIdentities} as follows: \begin{align*} \mathcal{P}(I - \mathcal{H})r & = G - \mathcal{P}(I - \mathcal{H})\tilde{\xi} \\ & = G + [\mathcal{P}, \mathcal{H}]\tilde{\xi} - (I - \mathcal{H})\mathcal{P}\tilde{\xi} \\ & = G + 2[D_t\zeta, \mathcal{H}]\frac{\partial_\alpha}{\zeta_\alpha}D_t\tilde{\xi} - \frac{1}{\pi i}\int \left(\frac{D_t\zeta(\alpha) - D_t\zeta(\beta)}{\zeta(\alpha) - \zeta(\beta)}\right)^2 \tilde{\xi}_\beta(\beta) \, d\beta \\ & \quad - (I - \mathcal{H})(\mathcal{P} - \tilde{\mathcal{P}})\tilde{\xi} - (I - \mathcal{H})\tilde{\mathcal{P}}\tilde{\xi} \\ & = G + 2[D_t\zeta, \mathcal{H}]\frac{\partial_\alpha}{\zeta_\alpha}D_t\tilde{\xi} - \frac{1}{\pi i}\int \left(\frac{D_t\zeta(\alpha) - D_t\zeta(\beta)}{\zeta(\alpha) - \zeta(\beta)}\right)^2 \tilde{\xi}_\beta(\beta) \, d\beta \\ & \quad - (I - \mathcal{H})(\mathcal{P} - \tilde{\mathcal{P}})\tilde{\xi} + (\mathcal{H} - \tilde{\mathcal{H}})\tilde{\mathcal{P}}\tilde{\xi} - (I - \tilde{\mathcal{H}})\tilde{\mathcal{P}}\tilde{\xi} \\ & = (G - \tilde{G}) + 2[D_t\zeta, \mathcal{H}]\frac{\partial_\alpha}{\zeta_\alpha}D_t\tilde{\xi} - \frac{1}{\pi i}\int \left(\frac{D_t\zeta(\alpha) - D_t\zeta(\beta)}{\zeta(\alpha) - \zeta(\beta)}\right)^2 \tilde{\xi}_\beta(\beta) \, d\beta \\ & \quad - (I - \mathcal{H})(\mathcal{P} - \tilde{\mathcal{P}})\tilde{\xi} + (\mathcal{H} - \tilde{\mathcal{H}})\tilde{\mathcal{P}}\tilde{\xi} - [\tilde{\mathcal{P}}, \tilde{\mathcal{H}}]\tilde{\xi} + \epsilon^4 R, 
\end{align*} where $\epsilon^4 R := \tilde{G} - \tilde{\mathcal{P}}(I - \tilde{\mathcal{H}})\tilde{\xi}$ is the residual arising from the approximate equation corresponding to \eqref{NewEuler}.
Note that at most five\footnote{Observe that, despite the appearance of formulas \eqref{MultiscaleHilbertFormulas}, since the operators $\mathcal{H}_1$ and $\mathcal{H}_2$ can be written as singular integrals as in \eqref{HilbertFormulas}, they do not lose derivatives due to Proposition \ref{SingIntSobolevEstimates}.} derivatives of $B$ are taken in $R$ through the term $\partial_t^2 \mathcal{H}_2 \zeta^{(3)}$, and so $R \in H^s$ provided $B \in H^{s + 5}$. Similarly, at most seven derivatives of $B$ are taken in $\tilde{D}_t R$ through the term $\partial_t^3 \mathcal{H}^{(2)} \zeta^{(3)}$, and so $\tilde{D}_t R \in H^s$ provided $B \in H^{s + 7}$.
The only term that is not immediately of size $\mathcal{O}(\epsilon^{7/2})$ is $2[D_t\zeta, \mathcal{H}]\frac{\partial_\alpha}{\zeta_\alpha}D_t\tilde{\xi}$. As in the calculation (2.13) et seq. of \cite{WuAlmostGlobal2D}, we exploit the fact that $D_t\tilde{\xi}$ is almost holomorphic. Using \eqref{DtXiIsAntihol} and Proposition \ref{HoloProperties} allows us to rewrite this term as \begin{align*} 2[D_t\zeta, \mathcal{H}]\frac{\partial_\alpha}{\zeta_\alpha}D_t\tilde{\xi} & = 2\left[D_t\zeta, \mathcal{H}\frac{1}{\zeta_\alpha} + \overline{\mathcal{H}}\frac{1}{\overline{\zeta}_\alpha}\right]\partial_\alpha D_t \tilde{\xi} - 2[D_t\zeta, \overline{\mathcal{H}}]\frac{\partial_\alpha}{\overline{\zeta}_\alpha}(D_t\zeta - D_t\alpha - D_t r) \\ & = 2\left[D_t\zeta, \mathcal{H}\frac{1}{\zeta_\alpha} + \overline{\mathcal{H}}\frac{1}{\overline{\zeta}_\alpha}\right]\partial_\alpha D_t \tilde{\xi} + 2[D_t\zeta, \overline{\mathcal{H}}]\frac{b_\alpha}{\overline{\zeta}_\alpha} + 2[D_t\zeta, \overline{\mathcal{H}}]\frac{\partial_\alpha}{\overline{\zeta}_\alpha}D_t r \end{align*} To see that the last of these terms is acceptably small, we again apply Proposition \ref{HoloProperties} to see that
\begin{equation}\label{4.2}
2[D_t\zeta, \overline{\mathcal{H}}]\frac{\partial_\alpha}{\overline{\zeta}_\alpha}D_t r = [(I + \overline{\mathcal{H}})D_t\zeta, \overline{\mathcal{H}}]\frac{\partial_\alpha}{\overline{\zeta}_\alpha}D_t r = [D_t\zeta, \overline{\mathcal{H}}]\frac{\partial_\alpha}{\overline{\zeta}_\alpha}(I - \overline{\mathcal{H}})D_t r,
\end{equation}
which is now easily seen to be $\mathcal{O}(\epsilon^{7/2})$ by \eqref{DtXiIsAntiholRemainder}. Thus our equation for $\rho$ is now \begin{align}\label{NewEulerRemainder} 2\mathcal{P}\rho & = (G - \tilde{G}) - (I - \mathcal{H})(\mathcal{P} - \tilde{\mathcal{P}})\tilde{\xi} + (\mathcal{H} - \tilde{\mathcal{H}})\tilde{\mathcal{P}}\tilde{\xi} - [\tilde{\mathcal{P}}, \tilde{\mathcal{H}}]\tilde{\xi} \notag\\ & + 2\left[D_t\zeta, \mathcal{H}\frac{1}{\zeta_\alpha} + \overline{\mathcal{H}}\frac{1}{\overline{\zeta}_\alpha}\right]\partial_\alpha D_t \tilde{\xi} + 2[D_t\zeta, \overline{\mathcal{H}}]\frac{b_\alpha}{\overline{\zeta}_\alpha} + 2[D_t\zeta, \overline{\mathcal{H}}]\frac{\partial_\alpha D_t r}{\overline{\zeta}_\alpha} \notag\\ & - \frac{1}{\pi i}\int \left(\frac{D_t\zeta(\alpha) - D_t\zeta(\beta)}{\zeta(\alpha) - \zeta(\beta)}\right)^2 \tilde{\xi}_\beta(\beta) \, d\beta+\epsilon^4 R \end{align} Note that the terms on the right hand side of \eqref{NewEulerRemainder} are cubic, and so a priori there may be contributions of size $\mathcal{O}(\epsilon^{5/2})$. However, we will show later that all such contributions arise as terms depending only on $\tilde{\xi}$ and $\epsilon$ of physical size $O(\epsilon^3)$; moreover, these putative terms will be shown to vanish by multiscale calculations.
Next we derive the evolution equation for $\sigma$. First we calculate that \begin{align*} \mathcal{P}(I - \mathcal{H})D_t(I - \mathcal{H})\xi & = -[\mathcal{P}, \mathcal{H}]D_t(I - \mathcal{H})\xi + (I - \mathcal{H})\mathcal{P}D_t(I - \mathcal{H})\xi \\ & = -2[D_t\zeta, \mathcal{H}]\frac{\partial_\alpha D_t^2(I - \mathcal{H})\xi}{\zeta_\alpha} \\ & \quad + \frac{1}{\pi i} \int \left(\frac{D_t\zeta(\alpha) - D_t\zeta(\beta)}{\zeta(\alpha) - \zeta(\beta)}\right)^2 \partial_\beta D_t(I - \mathcal{H})\xi(\beta) d\beta \\ & \quad + (I - \mathcal{H})[\mathcal{P}, D_t](I - \mathcal{H})\xi + (I - \mathcal{H})(D_t G) \\ & = -2[D_t\zeta, \mathcal{H}]\frac{\partial_\alpha D_t^2(I - \mathcal{H})\xi}{\zeta_\alpha} \\ & \quad + \frac{1}{\pi i} \int \left(\frac{D_t\zeta(\alpha) - D_t\zeta(\beta)}{\zeta(\alpha) - \zeta(\beta)}\right)^2 \partial_\beta D_t(I - \mathcal{H})\xi(\beta) d\beta \\ & \quad + (I - \mathcal{H})iU_{\kappa^{-1}}\left(\frac{\mathfrak{a}_t}{\mathfrak{a}}\right) \partial_\alpha(I - \mathcal{H})\xi \\ & \quad + (I - \mathcal{H})(D_t G) \end{align*} Similarly we have \begin{align*} \mathcal{P}(I - \mathcal{H})\tilde{D}_t(I - \tilde{\mathcal{H}})\tilde{\xi} & = -[\mathcal{P}, \mathcal{H}]\tilde{D}_t(I - \tilde{\mathcal{H}})\tilde{\xi} + (I - \mathcal{H})\mathcal{P}\tilde{D}_t(I - \tilde{\mathcal{H}})\tilde{\xi} \\ & = -2[D_t\zeta, \mathcal{H}]\frac{\partial_\alpha D_t \tilde{D}_t(I - \tilde{\mathcal{H}})\tilde{\xi}}{\zeta_\alpha} \\ & \quad + \frac{1}{\pi i} \int \left(\frac{D_t\zeta(\alpha) - D_t\zeta(\beta)}{\zeta(\alpha) - \zeta(\beta)}\right)^2 \partial_\beta \tilde{D}_t(I - \tilde{\mathcal{H}})\tilde{\xi}(\beta) d\beta \\ & \quad + (I - \mathcal{H})\mathcal{P}\tilde{D}_t(I - \tilde{\mathcal{H}})\tilde{\xi} \\
& = -2[D_t\zeta, \mathcal{H}]\frac{\partial_\alpha D_t \tilde{D}_t(I - \tilde{\mathcal{H}})\tilde{\xi}}{\zeta_\alpha} \\ & \quad +\frac{1}{\pi i} \int \left(\frac{D_t\zeta(\alpha) - D_t\zeta(\beta)}{\zeta(\alpha) - \zeta(\beta)}\right)^2 \partial_\beta \tilde{D}_t(I - \tilde{\mathcal{H}})\tilde{\xi}(\beta) d\beta \\ & \quad + (I - \mathcal{H})(\mathcal{P} - \tilde{\mathcal{P}})\tilde{D}_t(I - \tilde{\mathcal{H}})\tilde{\xi} -i (I - \mathcal{H})\tilde{b}_\alpha\partial_\alpha(I - \tilde{\mathcal{H}})\tilde{\xi} \\ & \quad + (I - \mathcal{H})(\tilde{D}_t \tilde{G}) + (I - \mathcal{H})\epsilon^4 (\tilde{D}_t R) \end{align*} Subtracting these two equations then gives the desired evolution equation for $\sigma$: \begin{align}\label{DtNewEulerRemainder} 4\mathcal{P}\sigma & = -8[D_t\zeta, \mathcal{H}]\frac{\partial_\alpha D_t \sigma}{\zeta_\alpha} \notag\\ & \quad + \frac{4}{\pi i} \int \left(\frac{D_t\zeta(\alpha) - D_t\zeta(\beta)}{\zeta(\alpha) - \zeta(\beta)}\right)^2 \sigma_\beta(\beta) d\beta \notag\\ & \quad + (I - \mathcal{H})iU_{\kappa^{-1}}\left(\frac{\mathfrak{a}_t}{\mathfrak{a}}\right) \partial_\alpha(I - \mathcal{H})\xi \notag\\ & \quad - (I - \mathcal{H})(\mathcal{P} - \tilde{\mathcal{P}})\tilde{D}_t(I - \tilde{\mathcal{H}})\tilde{\xi} \notag\\ & \quad +i (I - \mathcal{H})\tilde{b}_\alpha\partial_\alpha(I - \tilde{\mathcal{H}})\tilde{\xi} \notag\\ & \quad + (I - \mathcal{H})(D_t G - \tilde{D}_t\tilde{G}) - (I - \mathcal{H})\epsilon^4 (\tilde{D}_t R) \end{align}
We claim that the right hand side of \eqref{DtNewEulerRemainder} is $\mathcal{O}(\epsilon^{7/2})$. The formula \eqref{atOveraFormula} implies that the third term on the right hand side of \eqref{DtNewEulerRemainder} is of size $\mathcal{O}(\epsilon^{7/2})$. Before we can show that the rest of the terms are appropriately small, we must study the quantities appearing on the right hand side of these equations further. We will see that estimates for these quantities presuppose a satisfactory bound for the difference $\mathcal{H} - \tilde{\mathcal{H}}$, and so estimating this operator in Sobolev space is our first task.
\subsection{Estimates for the Difference Operator $\mathcal{H} - \tilde{\mathcal{H}}$}
While the operator $\tilde{\mathcal{H}}$ is well suited for multiscale calculation, it remains to be seen how $\tilde{\mathcal{H}}$ compares to our original Hilbert Transform $\mathcal{H}$ corresponding to the true solution $\zeta$ of the water wave system. To do so, we will bound the operator $\mathcal{H} - \tilde{\mathcal{H}}$ in $H^s$. This entails decomposing it as $$\mathcal{H} - \tilde{\mathcal{H}} = (\mathcal{H} - \mathcal{H}_{\tilde{\zeta}}) + (\mathcal{H}_{\tilde{\zeta}} - \tilde{\mathcal{H}}),$$ where $\mathcal{H}_{\tilde{\zeta}}$ is the Hilbert transform corresponding to the approximate interface $\tilde{\zeta}$. If we apply Proposition \ref{SingIntSobolevEstimates} to the formula of Lemma \ref{DiffHilbertPart1Formula} we arrive at
\begin{lemma}\label{DiffHilbertBoundPart1}
Let $s \geq 4$ be given. Then we have the bounds $$\|(\mathcal{H}_{\tilde{\zeta}} - \tilde{\mathcal{H}})f\|_{H^s} \leq C \epsilon^3 \|f\|_{H^s} \qquad \text{and} \qquad \|(\mathcal{H}_{\tilde{\zeta}} - \tilde{\mathcal{H}})f\|_{H^s} \leq C \epsilon^{5/2} \|f\|_{W^{s, \infty}}$$ where the constant $C = C
\left(\|B\|_{H^{s + 2}}\right)$. \end{lemma}
The analogous result for the first sum in the decomposition is \begin{lemma}\label{DiffHilbertBoundPart2}
Let $s \geq 4$ be given, and suppose \eqref{ZetaLocalAPrioriBound} holds. Then for all $t \leq T_0$, $$\|(\mathcal{H} - \mathcal{H}_{\tilde{\zeta}})f\|_{H^s} \leq C\|r_\alpha\|_{H^{s - 1}}\|f\|_{H^s} \qquad \text{and} \qquad \|(\mathcal{H} - \mathcal{H}_{\tilde{\zeta}})f\|_{H^s} \leq C\|r_\alpha\|_{H^{s - 1}}\|f\|_{W^{s, \infty}}$$ where the constant $C = C(\mathfrak{S}(T_0), \|B\|_{H^{s + 2}})$. \end{lemma}
\begin{proof} We use the fact that this operator can be written in two different ways using integration by parts: \begin{align*}(\mathcal{H} - \mathcal{H}_{\tilde{\zeta}})f & = \frac{1}{\pi i} \int \log\left(1 + \frac{r(\alpha) - r(\beta)}{\tilde{\zeta}(\alpha) - \tilde{\zeta}(\beta)}\right) f_\beta(\beta) d\beta \\ & = \frac{1}{\pi i} \int \left(\frac{r_\beta(\beta)}{\zeta(\alpha) - \zeta(\beta)} - \frac{\tilde{\zeta}_\beta(r(\alpha) - r(\beta))}{(\zeta(\alpha) - \zeta(\beta))(\tilde{\zeta}(\alpha) - \tilde{\zeta}(\beta))}\right) f(\beta)d\beta
\end{align*} Now consider the $n$th derivative of the first formula. If all $n$ derivatives fall on $f$, then we can pass to an integral of the second form above via integration by parts. Such an integral can then be bounded in $L^2$ by either $$C\left(\mathfrak{S}(T_0), \|B\|_{H^{n + 2}}\right)\|r_\alpha\|_{H^2}\|f\|_{H^n} \qquad \text{or} \qquad C\left(\mathfrak{S}(T_0), \|B\|_{H^{n + 2}}\right)\|r_\alpha\|_{H^2}\|f\|_{W^{n, \infty}}$$ If at least one derivative falls on the logarithm, then we have a kernel of the form $$(\partial_\alpha + \partial_\beta)\log\left(1 + \frac{r(\alpha) - r(\beta)}{\tilde{\zeta}(\alpha) - \tilde{\zeta}(\beta)}\right) = \frac{r_\alpha(\alpha) - r_\beta(\beta)}{\zeta(\alpha) - \zeta(\beta)} - \frac{(r(\alpha) - r(\beta))(\tilde{\zeta}_\alpha(\alpha) - \tilde{\zeta}_\beta(\beta))}{(\zeta(\alpha) - \zeta(\beta))(\tilde{\zeta}(\alpha) - \tilde{\zeta}(\beta))}$$ This yields a singular integral which can be bounded in $H^n$ by either $$C\left(\mathfrak{S}(T_0), \|\tilde{\zeta}_\alpha - 1\|_{H^{n + 1}}\right)\|r_\alpha\|_{H^{n - 1}}\|f\|_{H^{n - 1}} \; \; \text{or} \; \; C\left(\mathfrak{S}(T_0), \|\tilde{\zeta}_\alpha - 1\|_{H^{n + 1}}\right)\|r_\alpha\|_{H^{n - 1}}\|f\|_{W^{n - 1, \infty}}$$ The proposition follows by summing these bounds $n = 0, 1, \ldots, s$. \end{proof}
Combining these lemmas yields the following corollary.
\begin{corollary}\label{DiffHilbertBound}
Let $s \geq 4$ be given, and suppose that \eqref{ZetaLocalAPrioriBound} holds. Then for all $t \leq T_0$, $$\|(\mathcal{H} - \tilde{\mathcal{H}})f\|_{H^s} \leq C(\epsilon^3 + \|r_\alpha\|_{H^{s - 1}})\|f\|_{H^s}$$
$$\|(\mathcal{H} - \tilde{\mathcal{H}})f\|_{H^s} \leq C(\epsilon^{5/2} + \|r_\alpha\|_{H^{s - 1}})\|f\|_{W^{s, \infty}}$$ where $C = C\left(\mathfrak{S}(T_0), \|B\|_{H^{s + 2}}\right)$. \end{corollary}
We will also need to estimate the operator $D_t(\mathcal{H} - \tilde{\mathcal{H}})$. To do so, it will suffice to consider the commutator $[D_t, \mathcal{H} - \tilde{\mathcal{H}}]$.
\begin{proposition}\label{DiffHilbertCommuteDtBound}
Let $s \geq 4$, and suppose that \eqref{ZetaLocalAPrioriBound} holds. Then $\|[D_t, \mathcal{H} - \tilde{\mathcal{H}}]f\|_{H^s} \leq C(\epsilon^3 + \|r_\alpha\|_{H^{s - 1}} + \|D_t r\|_{H^s})\|f\|_{H^s}$, where the constant $C = C\left(\mathfrak{S}(T_0), \|B\|_{H^{s + 4}}\right)$. \end{proposition}
\begin{proof} We decompose $\mathcal{H} - \tilde{\mathcal{H}} = (\mathcal{H} - \mathcal{H}_{\tilde{\zeta}}) + (\mathcal{H}_{\tilde{\zeta}} - \tilde{\mathcal{H}})$ and estimate each term separately. We begin with the latter operator and apply Lemma \ref{SingIntCommuteWithDt} to \eqref{HTildeZetaMinusTildeHFormula}. By the product rule, this results in a sum of singular integrals whose numerators are products of differences involving the functions $\tilde{\xi}$, $\zeta^{(n)}$, $D_t \tilde{\xi}$, $D_t \zeta^{(n)}$, $n = 1, 2, 3$. Then using the identity $D_t g = (b - \tilde{b})g_\alpha + \tilde{D}_t g$, we can further split these terms until we arrive at a sum of kernels whose numerators are products of differences involving the functions $$\tilde{\xi}, \; \zeta^{(n)}, \; \tilde{D}_t \tilde{\xi}, \; \tilde{D}_t \zeta^{(n)}, \; (b - \tilde{b})\tilde{\xi}_\alpha, \; (b - \tilde{b})\zeta^{(n)}_\alpha, \qquad n = 1, 2, 3$$
In order to estimate the terms $(b - \tilde{b})g_\alpha$ that arise here for $g = \tilde{\xi}, \zeta^{(n)}$, notice that \eqref{ZetaLocalAPrioriBound}, along with \eqref{bFormula} and Lemma \ref{DoubleLayerPotentialArgument}, shows that $$\|(b - \tilde{b})g_\alpha\|_{H^s} \leq C\|b - \tilde{b}\|_{H^s}\|g_\alpha\|_{W^{s, \infty}} \leq C\left(\mathfrak{S}(T_0)\right)\|g_\alpha\|_{W^{s, \infty}}$$
The resulting kernels have the properties that (1) each has at least three factors in its numerator of size at most $O(\epsilon)$ in the sense of $L^\infty$, (2) each has the same number of factors in the numerator as in the denominator. In estimating this sum of singular integrals we always estimate $f$ in $L^2$ so as not to lose any half-powers of $\epsilon$. In doing so, the largest number of derivatives of $B$ that appears is in $\tilde{D}_t \tilde{\zeta}$; a time derivative will fall on $B_X$ in the formula for $\zeta^{(3)}$ which by \eqref{NLS} is equivalent to a term with three derivatives on $B$. The result is the bound $C(\mathfrak{S}(T_0), \|B\|_{H^{s + 3}})\epsilon^3\|f\|_{H^s}$.
Next, using Lemma \ref{SingIntCommuteWithDt}, we write the commutator explicitly as $$[D_t, \mathcal{H} - \mathcal{H}_{\tilde{\zeta}}]f = \frac{1}{\pi i} \int f_\beta(\beta) \left(\frac{D_t r(\alpha) - D_t r(\beta)}{\zeta(\alpha) - \zeta(\beta)} - \frac{(r(\alpha) - r(\beta))(D_t \tilde{\zeta}(\alpha) - D_t \tilde{\zeta}(\beta))}{(\zeta(\alpha) - \zeta(\beta))(\tilde{\zeta}(\alpha) - \tilde{\zeta}(\beta))}\right) d\beta$$ Appealing to the crude bound on $b - \tilde{b}$ above now implies the proposition. \end{proof}
\begin{corollary}\label{DtHilbertDiffBound}
Let $s \geq 4$ be given, and suppose that \eqref{ZetaLocalAPrioriBound} holds. Then $$\|D_t(\mathcal{H} - \tilde{\mathcal{H}})f\|_{H^s} \leq C(\epsilon^3 + \|r_\alpha\|_{H^{s - 1}} + \|D_t r\|_{H^s})(\|f\|_{H^s} + \|D_t f\|_{H^s}),$$ where the constant $C = C\left(\mathfrak{S}(T_0), \|B\|_{H^{s + 3}}\right)$. \end{corollary}
\subsection{Formulas for Remainders of $b$ and $\mathcal{A}$}\label{formulaforremainders}
Applying the energy method to the remainder equations \eqref{NewEulerRemainder}-\eqref{DtNewEulerRemainder}, we expect to obtain bounds on the quantity: \begin{equation}\label{ProxyRemainderEnergy}
E_s^{1/2} := \|r_\alpha\|_{H^s} + \|D_t r\|_{H^s}. \end{equation} However in \eqref{NewEulerRemainder}-\eqref{DtNewEulerRemainder}, the quantities $b - \tilde{b}$, $\mathcal{A} - \tilde{\mathcal{A}}$, etc., arise as coefficients of the operators $\mathcal{P} - \tilde{\mathcal{P}}$ and $D_t(\mathcal{P} - \tilde{\mathcal{P}})$. Moreover, such energy estimates would give bounds on the quantities $D_t\partial_\alpha^n \rho$ and $D_t\partial_\alpha^n \sigma$, not directly on the quantities $r_\alpha$ and $D_t r$. So in the following subsections we must perform the following tasks: \begin{enumerate} \item{Bound $b - \tilde{b}$ in terms of $E_s$ and $\epsilon$.} \item{Bound $D_t(b - \tilde{b})$ in terms of $E_s$, $\epsilon$, and a small multiple of $D_t^2 r$.} \item{Bound $\mathcal{A} - \tilde{\mathcal{A}}$ in terms of $E_s$, $\epsilon$, and a small multiple of $D_t^2 r$.} \item{Bound $D_t^2 r$ in terms of $E_s$, $\epsilon$ and a small multiple of $\mathcal{A} - \tilde{\mathcal{A}}$, and thus bound $D_t^2 r$, $\mathcal{A} - \tilde{\mathcal{A}}$ and $D_t(b - \tilde{b})$ appropriately by $E_s$ and $\epsilon$ alone.} \item{Show that $D_t\rho$ and $D_t\sigma$ are equivalent to $D_t r$ and $D_t^2 r$, respectively.} \end{enumerate} Since $\tilde{b}$ and $\tilde{\mathcal{A}}$ are intended to be power expansions in $\epsilon$ of $b$ and $\mathcal{A}$ up to at least quadratic terms, we expect that the differences $b - \tilde{b}$, $D_t(b - \tilde{b})$ and $\mathcal{A} - \tilde{\mathcal{A}}$ will be of size $\mathcal{O}(\epsilon^{5/2})$.
\subsubsection*{Step 1. Controlling $b - \tilde{b}$ by $E_s$ and $\epsilon$.}
In order to use \eqref{bFormula}, we write \begin{align*} (I - \mathcal{H})(b - \tilde{b}) & = (I - \mathcal{H})b + (\mathcal{H} - \tilde{\mathcal{H}})\tilde{b} - (I - \tilde{\mathcal{H}})\tilde{b} \end{align*}
By the multiscale calculation, the residual quantity $$(I - \tilde{\mathcal{H}})\tilde{b} + [\tilde{D}_t\tilde{\zeta}, \tilde{\mathcal{H}}]\frac{\overline{\tilde{\zeta}}_\alpha - 1}{\tilde{\zeta}_\alpha}$$ consists only of terms $O(\epsilon^4)$. The largest number of derivatives of $B$ appearing in this residual is through the term $\mathcal{H}_2 \tilde{D}_t \zeta^{(3)}$, where three derivatives fall on $B$. Hence this residual is bounded in $H^s$ by $C(\|B\|_{H^{s + 3}})\epsilon^{7/2}$. By Corollary \ref{DiffHilbertBound}, we have \begin{align*}
\|(\mathcal{H} - \tilde{\mathcal{H}})\tilde{b}\|_{H^s} & \leq C(\epsilon^3 + E_s^{1/2})\|\tilde{b}\|_{H^s} \\ & \leq C(\epsilon^3 + E_s^{1/2})\epsilon^{3/2} \\ & \leq C(\epsilon E_s^{1/2} + \epsilon^{5/2})
\end{align*} where $C = C(\mathfrak{S}(T_0), \|B\|_{H^{s + 3}})$. Observe that in the last step we have relaxed the estimate so that every term is of the optimal size $\mathcal{O}(\epsilon^{5/2})$.
It now suffices to consider the difference \begin{align*} -[D_t \zeta, \mathcal{H}]\frac{\overline{\xi}_\alpha}{\zeta_\alpha} + [\tilde{D}_t\tilde{\zeta}, \tilde{\mathcal{H}}]\frac{\overline{\tilde{\xi}}_\alpha}{\tilde{\zeta}_\alpha} & = -[D_t r, \mathcal{H}]\frac{\overline{\xi}_\alpha}{\zeta_\alpha} - [(D_t - \tilde{D}_t)\tilde{\zeta}, \mathcal{H}]\frac{\overline{\xi}_\alpha}{\zeta_\alpha} \\ & \quad - [\tilde{D}_t\tilde{\zeta}, \mathcal{H}]\frac{r_\alpha}{\zeta_\alpha} - [\tilde{D}_t\tilde{\zeta}, \mathcal{H}]\overline{\tilde{\xi}}_\alpha\left(\frac{1}{\zeta_\alpha} - \frac{1}{\tilde{\zeta}_\alpha}\right) \\ & \quad - [\tilde{D}_t\tilde{\zeta}, \mathcal{H} - \tilde{\mathcal{H}}] \frac{\overline{\tilde{\xi}}_\alpha}{\tilde{\zeta}_\alpha} \end{align*} Estimating each of these terms in $H^s$ using Proposition \ref{SingIntSobolevEstimates}, we sum the bounds under the assumption of \eqref{ZetaLocalAPrioriBound} to find by Corollary \ref{DiffHilbertBound} that for $s \geq 4$: \begin{align*}
\|b - \tilde{b}\|_{H^s} & \leq C\epsilon^{7/2} \; + \; C(\epsilon E_s^{1/2} + \epsilon^{5/2}) \\
& \quad + \; C E_s^{1/2}(E_s^{1/2} + \epsilon) \; + \; C\|b - \tilde{b}\|_{H^s}(\delta + \epsilon) \\ & \quad + \; C\epsilon E_s^{1/2} \; + \; C\epsilon(\epsilon^3 + E_s^{1/2})\epsilon^{1/2} \\
& \leq C\left(E_s + \epsilon E_s^{1/2} + \epsilon^{5/2}\right) + C\|b - \tilde{b}\|_{H^s}(\epsilon + \delta) \end{align*}
and so choosing $\epsilon_0$ and $\delta$ so that the coefficient $C(\epsilon + \delta)$ of $\|b - \tilde{b}\|_{H^s}$ on the right hand side is less than $\frac{1}{2}$ for all $\epsilon < \epsilon_0$ yields the bound \begin{equation}\label{bMinusTildebBound}\|b - \tilde{b}\|_{H^s} \leq C\left(E_s + \epsilon E_s^{1/2} + \epsilon^{5/2}\right)\end{equation} where the constant $C = C(\mathfrak{S}(T_0), \|B\|_{H^{s + 4}})$. From this bound and \eqref{TildeBFormula} we also have
\begin{equation}\label{bBound}\|b\|_{H^s} \leq C\left(E_s^{1/2} + \epsilon^{3/2}\right)\end{equation}
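For later reference, we record the absorption argument used to pass to \eqref{bMinusTildebBound}, since it recurs throughout this section: whenever a nonnegative quantity $X$ satisfies $$X \leq A + \theta X \qquad \text{with} \qquad 0 \leq \theta \leq \tfrac{1}{2},$$ we may move $\theta X$ to the left hand side to conclude $$X \leq \frac{A}{1 - \theta} \leq 2A.$$ Above this is applied with $X = \|b - \tilde{b}\|_{H^s}$, $\theta = C(\epsilon + \delta)$ and $A = C\left(E_s + \epsilon E_s^{1/2} + \epsilon^{5/2}\right)$.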
\subsubsection*{Step 2. Controlling $D_t(b - \tilde{b})$ by $E_s$, $\epsilon$, and a small multiple of $D_t^2 r$.} To control $D_t(b - \tilde{b})$, we write \begin{equation*} (I - \mathcal{H})D_t(b - \tilde{b}) = \left((I - \mathcal{H})D_tb - (I - \tilde{\mathcal{H}})\tilde{D}_t\tilde{b}\right) + (\mathcal{H} - \tilde{\mathcal{H}})\tilde{D}_t\tilde{b} - (I - \mathcal{H})\left((b - \tilde{b})\partial_\alpha\tilde{b}\right) \end{equation*}
By Step 1 and Corollary \ref{DiffHilbertBound} we have that
$$\| (I - \mathcal{H})\left((b - \tilde{b})\partial_\alpha\tilde{b}\right) \|_{H^s} \leq C(E_s + \epsilon E_s^{1/2} + \epsilon^{5/2})$$ and
$$\| (\mathcal{H} - \tilde{\mathcal{H}})\tilde{D}_t\tilde{b} \|_{H^s} \leq C(\epsilon^3 + E_s^{1/2})\epsilon^{5/2} \leq C(\epsilon E_s^{1/2} + \epsilon^{5/2})$$ where the constant $C$ depends only on $\mathfrak{S}(T_0)$ and $\|B\|_{H^{s + 4}}$. To estimate the remaining terms we appeal to the formula \eqref{DtbFormula}: \begin{align*} (I - \mathcal{H})D_t b & = [D_t\zeta, \mathcal{H}]\frac{\partial_\alpha(2b - D_t\overline{\zeta})}{\zeta_\alpha} - [D_t^2\zeta, \mathcal{H}]\frac{\overline{\zeta}_\alpha - 1}{\zeta_\alpha} \notag \\ & \qquad + \frac{1}{\pi i} \int \left(\frac{D_t\zeta(\alpha) - D_t\zeta(\beta)}{\zeta(\alpha) - \zeta(\beta)}\right)^2 (\overline{\zeta}_\beta(\beta) - 1) d\beta \end{align*} By a multiscale calculation, the term $(I - \tilde{\mathcal{H}})\tilde{D}_t\tilde{b}$ has the property that the residual quantity \begin{align*} & (I - \tilde{\mathcal{H}})\tilde{D}_t\tilde{b} - [\tilde{D}_t\tilde{\zeta}, \tilde{\mathcal{H}}]\frac{\partial_\alpha(2\tilde{b} - \tilde{D}_t\overline{\tilde{\zeta}})}{\tilde{\zeta}_\alpha} + [\tilde{D}_t^2\tilde{\zeta}, \tilde{\mathcal{H}}]\frac{\overline{\tilde{\zeta}}_\alpha - 1}{\tilde{\zeta}_\alpha} \notag \\ & \qquad - \frac{1}{\pi i} \int \left(\frac{\tilde{D}_t\tilde{\zeta}(\alpha) - \tilde{D}_t\tilde{\zeta}(\beta)}{\tilde{\zeta}(\alpha) - \tilde{\zeta}(\beta)}\right)^2 (\overline{\tilde{\zeta}}_\beta(\beta) - 1) d\beta \end{align*} is of size $O(\epsilon^4)$. Therefore it suffices to estimate the difference between each term in \eqref{DtbFormula} and its approximate analogue. We may estimate the first such difference crudely, since by Step 1 we have that \begin{align*}
\left\|[D_t \zeta, \mathcal{H}]\frac{b_\alpha}{\zeta_\alpha} - [\tilde{D}_t \tilde{\zeta}, \tilde{\mathcal{H}}]\frac{\tilde{b}_\alpha}{\tilde{\zeta}_\alpha}\right\|_{H^s} & \leq \left\|[D_t \zeta, \mathcal{H}]\frac{(b - \tilde{b})_\alpha}{\zeta_\alpha}\right\|_{H^s} + \left\|[D_t \zeta, \mathcal{H}]\frac{\tilde{b}_\alpha}{\zeta_\alpha}\right\|_{H^s} + \left\|[\tilde{D}_t \tilde{\zeta}, \tilde{\mathcal{H}}]\frac{\tilde{b}_\alpha}{\tilde{\zeta}_\alpha}\right\|_{H^s} \\ & \leq C(E_s + \epsilon E_s^{1/2} + \epsilon^{5/2}) + C\delta\epsilon^{5/2} + C\epsilon^{1/2}\epsilon^{5/2} \\ & \leq C(E_s + \epsilon E_s^{1/2} + \epsilon^{5/2}),
\end{align*} where the constant $C$ depends only on $\mathfrak{S}(T_0)$ and $\|B\|_{H^{s + 4}}$, and where we estimated the commutator $[D_t \zeta, \mathcal{H}]\frac{\tilde{b}_\alpha}{\zeta_\alpha}$ term-by-term. The estimate of the difference $$[D_t\zeta, \mathcal{H}]\frac{\partial_\alpha D_t\overline{\zeta}}{\zeta_\alpha} - [\tilde{D}_t\tilde{\zeta}, \tilde{\mathcal{H}}]\frac{\partial_\alpha \tilde{D}_t\overline{\tilde{\zeta}}}{\tilde{\zeta}_\alpha}$$ proceeds by decomposing in the same manner as in Step 1, and yields the bound $C(E_s + \epsilon E_s^{1/2} + \epsilon^{5/2})$.
Next, by writing $D_t \zeta = D_t r + (b - \tilde{b})\tilde{\zeta}_\alpha + \tilde{D}_t \tilde{\zeta}$, the remaining singular integrals $$\frac{1}{\pi i} \int \left(\frac{D_t\zeta(\alpha) - D_t\zeta(\beta)}{\zeta(\alpha) - \zeta(\beta)}\right)^2 (\overline{\zeta}_\beta(\beta) - 1) d\beta - \frac{1}{\pi i} \int \left(\frac{\tilde{D}_t\tilde{\zeta}(\alpha) - \tilde{D}_t\tilde{\zeta}(\beta)}{\tilde{\zeta}(\alpha) - \tilde{\zeta}(\beta)}\right)^2 (\overline{\tilde{\zeta}}_\beta(\beta) - 1) d\beta$$ are controlled in $H^s$ with Proposition \ref{SingIntSobolevEstimates} by $C(E_s + \epsilon E_s^{1/2} + \epsilon^{5/2})$. Finally we address the difference $$[D_t^2\zeta, \mathcal{H}]\frac{\overline{\zeta}_\alpha - 1}{\zeta_\alpha} - [\tilde{D}_t^2\tilde{\zeta}, \tilde{\mathcal{H}}]\frac{\overline{\tilde{\zeta}}_\alpha - 1}{\tilde{\zeta}_\alpha}.$$ Again decomposing in the fashion of Step 1, we arrive at a sum of commutators all controlled in $H^s$ by $C(E_s + \epsilon E_s^{1/2} + \epsilon^{5/2})$ except for two commutators. The first is
$$[D_t^2 r, \mathcal{H}]\frac{\overline{\zeta}_\alpha - 1}{\zeta_\alpha}$$ which is controlled in $H^s$ by $C(E_s^{1/2} + \epsilon)\|D_t^2 r\|_{H^s}$. The second is \begin{align*} [(D_t^2 - \tilde{D}_t^2)\tilde{\zeta}, \mathcal{H}]\frac{\overline{\zeta}_\alpha - 1}{\zeta_\alpha} & = \left[\left(D_t(b - \tilde{b}) \right)\tilde{\zeta}_\alpha + (b - \tilde{b})\left(D_t\tilde{\zeta}_\alpha + \partial_\alpha \tilde{D}_t\tilde{\zeta}\right), \mathcal{H}\right]\frac{\overline{\zeta}_\alpha - 1}{\zeta_\alpha} \end{align*} which has been expanded using \eqref{DiffDtSquaredFormula}, and is controlled in $H^s$ by
$$C\delta\|D_t(b - \tilde{b})\|_{H^s} + C(E_s + \epsilon E_s^{1/2} + \epsilon^{5/2})$$ Summing all of these estimates, we therefore have for $\delta$ chosen sufficiently small that \begin{align}\label{PrelimDtbBound}
\|D_t(b - \tilde{b})\|_{H^s} \leq C(E_s + \epsilon E_s^{1/2} + \epsilon^{5/2}) + C(E_s^{1/2} + \epsilon)\|D_t^2 r\|_{H^s} \end{align}
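For clarity we record the elementary identity behind the decomposition $D_t\zeta = D_t r + (b - \tilde{b})\tilde{\zeta}_\alpha + \tilde{D}_t\tilde{\zeta}$ used in this step: since the two material derivatives differ only in their transport coefficients, so that $D_t - \tilde{D}_t = (b - \tilde{b})\partial_\alpha$, and since $\zeta = \tilde{\zeta} + r$, we have $$D_t\zeta = D_t r + (D_t - \tilde{D}_t)\tilde{\zeta} + \tilde{D}_t\tilde{\zeta} = D_t r + (b - \tilde{b})\tilde{\zeta}_\alpha + \tilde{D}_t\tilde{\zeta}.$$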
\subsubsection*{Step 3. Controlling $\mathcal{A} - \tilde{\mathcal{A}}$ in terms of $E_s$, $\epsilon$, and a small multiple of $D_t^2 r$.}
Since $\tilde{\mathcal{A}} = 1$ by \eqref{TildeAFormula}, it suffices to control $\mathcal{A} - 1$ in $H^s$. The right hand side of the formula \eqref{AFormula} consists of terms that are almost the same as those in the formula \eqref{DtbFormula} for $D_t b$, and so the same methods of estimation will apply. However, from \S 3.3 we know that $\mathcal{A}_2 = 0$, and so we will want to decompose the right hand side of the formula \eqref{AFormula} so that it is easily seen that the pure $O(\epsilon^2)$ contribution vanishes. From \eqref{AFormula} we have $$(I - \mathcal{H})(\mathcal{A} - 1) = i[D_t^2 \zeta, \mathcal{H}]\frac{\overline{\zeta}_\alpha - 1}{\zeta_\alpha} + i[D_t \zeta, \mathcal{H}]\frac{\partial_\alpha D_t \overline{\zeta}}{\zeta_\alpha} := I_1 + I_2$$ Decomposing the difference corresponding to $I_2$ as in Step 2, we have $$\left\|[D_t \zeta, \mathcal{H}]\frac{\partial_\alpha D_t \overline{\zeta}}{\zeta_\alpha} - [\tilde{D}_t \tilde{\zeta}, \tilde{\mathcal{H}}]\frac{\partial_\alpha \tilde{D}_t \overline{\tilde{\zeta}}}{\tilde{\zeta}_\alpha}\right\|_{H^s} \leq C\left(E_s + \epsilon E_s^{1/2} + \epsilon^{5/2}\right),$$ where $C = C(\mathfrak{S}(T_0), \|B\|_{H^{s + 4}})$. 
The difference corresponding to $I_1$ is decomposed as follows: \begin{align*} [D_t^2 \zeta, \mathcal{H}]\frac{\overline{\xi}_\alpha}{\zeta_\alpha} - [\tilde{D}_t^2 \tilde{\zeta}, \tilde{\mathcal{H}}] \frac{\overline{\tilde{\xi}}_\alpha}{\tilde{\zeta}_\alpha} & = [D_t^2 r, \mathcal{H}]\frac{\overline{\xi}_\alpha}{\zeta_\alpha} + [(D_t^2 - \tilde{D}_t^2)\tilde{\zeta}, \mathcal{H}]\frac{\overline{\xi}_\alpha}{\zeta_\alpha} \\ & \quad + [\tilde{D}_t^2\tilde{\zeta}, \mathcal{H}]\left(\frac{\overline{\xi}_\alpha}{\zeta_\alpha} - \frac{\overline{\tilde{\xi}}_\alpha}{\tilde{\zeta}_\alpha}\right) + [\tilde{D}_t^2 \tilde{\zeta}, \mathcal{H} - \tilde{\mathcal{H}}] \frac{\overline{\tilde{\xi}}_\alpha}{\tilde{\zeta}_\alpha} \end{align*} Note that in the expression $\tilde{D}_t^2 \tilde{\zeta}$, five derivatives fall on $B$ through $\zeta^{(3)}$, and so we need five extra derivatives on $B$ to bound $\tilde{D}_t^2 \tilde{\zeta}$ in $H^s$. Using Step 1 and Corollary \ref{DiffHilbertBound} then gives \begin{align*}
\left\|[D_t^2 \zeta, \mathcal{H}]\frac{\overline{\xi}_\alpha}{\zeta_\alpha} - [\tilde{D}_t^2 \tilde{\zeta}, \tilde{\mathcal{H}}] \frac{\overline{\tilde{\xi}}_\alpha}{\tilde{\zeta}_\alpha}\right\|_{H^s} & \leq C(E_s + \epsilon E_s^{1/2} + \epsilon^{5/2}) \\
& \quad + C\left(\epsilon + E_s^{1/2}\right)\left(\|D_t^2 r\|_{H^s} + \|D_t(b - \tilde{b})\|_{H^s}\right)
\end{align*} where $C = C(\mathfrak{S}(T_0), \|B\|_{H^{s + 5}})$. Now a multiscale calculation shows that the function $$[\tilde{D}_t \tilde{\zeta}, \tilde{\mathcal{H}}]\frac{\partial_\alpha \tilde{D}_t \overline{\tilde{\zeta}}}{\tilde{\zeta}_\alpha} + [\tilde{D}_t^2 \tilde{\zeta}, \tilde{\mathcal{H}}] \frac{\overline{\tilde{\xi}}_\alpha}{\tilde{\zeta}_\alpha}$$ consists only of terms of order $O(\epsilon^3)$; the largest number of derivatives appears through the term $\mathcal{H}_2 \partial_t^2 \zeta^{(3)}$, which contains five derivatives of $B$. This residual is thus controlled in $H^s$ by $C(\|B\|_{H^{s + 5}})\epsilon^{5/2}$. Combining these estimates, we can choose $\epsilon_0$ and $\delta$ sufficiently small so as to arrive at the following estimate for $\mathcal{A} - \tilde{\mathcal{A}}$: \begin{align*}
\|\mathcal{A} - \tilde{\mathcal{A}}\|_{H^s} \leq C\left(E_s + \epsilon E_s^{1/2} + \epsilon^{5/2}\right) + C\left(\epsilon + E_s^{1/2}\right)(\|D_t^2 r\|_{H^s} + \|D_t(b - \tilde{b})\|_{H^s}) \end{align*} Now using Step 2 and possibly choosing $\delta$ and $\epsilon_0$ smaller still allows us to give the following preliminary bound for $\mathcal{A} - \tilde{\mathcal{A}}$: \begin{equation}\label{PrelimAMinusTildeABound}
\|\mathcal{A} - \tilde{\mathcal{A}}\|_{H^s} \leq C\left(E_s + \epsilon E_s^{1/2} + \epsilon^{5/2}\right) + C\left(\epsilon + E_s^{1/2}\right)\|D_t^2 r\|_{H^s} \end{equation}
\subsubsection*{Step 4. Bounding $D_t^2 r$ in terms of $E_s$, $\epsilon$, and a small multiple of $\mathcal{A} - \tilde{\mathcal{A}}$.}
We start by deriving a formula for $D_t^2 r$. Changing variables via $U_{\kappa^{-1}}$ in \eqref{OldEuler} yields the equation $\mathcal{P}\zeta = -i$ and so decomposing as $\xi = \tilde{\xi} + r$ yields \begin{align*} \mathcal{P}r & = -i - \mathcal{P}\alpha - \mathcal{P}\tilde{\xi} \\ & = -i - (D_t b - i\mathcal{A}) - (\mathcal{P} - \tilde{\mathcal{P}})\tilde{\xi} - \tilde{\mathcal{P}}\tilde{\xi} \\ & = -D_t b + i(\mathcal{A} - 1) - (\mathcal{P} - \tilde{\mathcal{P}})\tilde{\xi} - \tilde{\mathcal{P}}\tilde{\xi} \end{align*} and so \begin{equation}\label{DtSquaredrFormula} D_t^2 r - ir_\alpha = i(\mathcal{A} - 1)(1 + \xi_\alpha) - (D_t^2 - \tilde{D}_t^2)\tilde{\xi} - \tilde{\mathcal{P}}\tilde{\xi} - D_t b
\end{equation} By Proposition \ref{VariousMultiscaleIdentities} we have $\|\tilde{\mathcal{P}}\tilde{\xi}\|_{H^s} \leq C \epsilon^{5/2}$ with the constant depending on $\mathfrak{S}(T_0)$ and
$\|B\|_{H^{s + 5}}$. Next, using Step 1, \eqref{DiffDtSquaredFormula} and \eqref{PrelimAMinusTildeABound} gives \begin{align*}
\|(D_t^2 - \tilde{D}_t^2)\tilde{\xi}\|_{H^s} & \leq C\epsilon\|D_t(b - \tilde{b})\|_{H^s} + C\epsilon\left(E_s + \epsilon E_s^{1/2} + \epsilon^{5/2}\right) \\
& \leq C(\epsilon^{1/2} + \delta)\|D_t^2 r\|_{H^s} + C\left(E_s + \epsilon E_s^{1/2} + \epsilon^{5/2}\right) \end{align*} We also have \begin{align*}
\|D_t b\|_{H^s} & \leq \|D_t(b - \tilde{b})\|_{H^s} + \|(D_t - \tilde{D}_t)\tilde{b}\|_{H^s} + C\epsilon^{5/2} \\
& \leq C(\epsilon^{1/2} + \delta)\|D_t^2 r\|_{H^s} + C(E_s + \epsilon E_s^{1/2} + \epsilon^{5/2}) \end{align*}
Finally we have from \eqref{PrelimAMinusTildeABound} that $$\|(\mathcal{A} - 1)\zeta_\alpha\|_{H^s} \leq C\left(E_s + \epsilon E_s^{1/2} + \epsilon^{5/2}\right) + C(\epsilon^{1/2} + \delta)\|D_t^2 r\|_{H^s}$$ Combining these estimates through \eqref{DtSquaredrFormula} gives $$\|D_t^2 r - ir_\alpha\|_{H^s} \leq C\left(E_s + \epsilon E_s^{1/2} + \epsilon^{5/2}\right) + C(\epsilon^{1/2} + \delta)\|D_t^2 r\|_{H^s}$$ Hence we can choose $\epsilon_0$ and $\delta$ sufficiently small so that \begin{align}\label{DtSquaredrBound}
\|D_t^2 r\|_{H^s} & \leq \|r_\alpha\|_{H^s} + C\left(E_s + \epsilon E_s^{1/2} + \epsilon^{5/2}\right) \notag\\ & \leq C(E_s^{1/2} + \epsilon^{5/2})
\end{align} where the constant $C$ depends only on $\mathfrak{S}(T_0)$ and $\|B\|_{H^{s + 5}}$. Then we immediately have \begin{equation}\label{AMinusTildeABound}
\|\mathcal{A} - \tilde{\mathcal{A}}\|_{H^s} \leq C(E_s + \epsilon E_s^{1/2} + \epsilon^{5/2}) \end{equation} by virtue of Step 3, as well as \begin{equation}\label{dtbminusb}
\|D_t(b - \tilde{b})\|_{H^s} \leq C(E_s + \epsilon E_s^{1/2} + \epsilon^{5/2}) \end{equation} from Step 2. From this last inequality we have \begin{equation}\label{DtbBound}
\|D_t b\|_{H^s} \leq C(E_s + \epsilon E_s^{1/2} + \epsilon^{5/2}) \end{equation}
Note that from \eqref{DtSquaredrFormula}, applying \eqref{AMinusTildeABound}, \eqref{DtbBound}, we also have the estimate \begin{equation*}
\|r_\alpha\|_{H^s} - C(E_s + \epsilon E_s^{1/2}) \leq C\|D_t^2 r\|_{H^s} + C\epsilon^{5/2} \end{equation*} and hence if we choose $\delta$ and $\epsilon_0$ sufficiently small, we conclude that \begin{equation}\label{DtSquaredrEnergyBound}
E_s^{1/2} \leq C(\|D_t r\|_{H^s} + \|D_t^2 r\|_{H^s} + \epsilon^{5/2}) \end{equation}
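To spell out the absorption leading to \eqref{DtSquaredrEnergyBound}: since $E_s^{1/2} = \|r_\alpha\|_{H^s} + \|D_t r\|_{H^s}$, the preceding inequality gives $$E_s^{1/2} \leq \|D_t r\|_{H^s} + C\|D_t^2 r\|_{H^s} + C\left(E_s + \epsilon E_s^{1/2}\right) + C\epsilon^{5/2},$$ and writing $C\left(E_s + \epsilon E_s^{1/2}\right) = C\left(E_s^{1/2} + \epsilon\right)E_s^{1/2}$, we may choose $\delta$ and $\epsilon_0$ small enough that \eqref{ZetaLocalAPrioriBound} guarantees $C\left(E_s^{1/2} + \epsilon\right) \leq \frac{1}{2}$, so that this last term is absorbed into the left hand side.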
\subsubsection*{Step 5. Showing that $D_t\rho$, $\sigma$ and $D_t\sigma$ are equivalent to $D_t r$ and $D_t^2 r$.}
In the sequel we will show that the energy constructed from the equations of \S 4.1 is bounded below by the sum
$$\sum_{n = 0}^s \left(\|D_t \partial_\alpha^n \rho\|_{L^2} + \|D_t\partial_\alpha^n\sigma\|_{L^2}\right)$$ Therefore, this energy will control $E_s$ provided we can show that $E_s$ is bounded above by this sum. We will show that this is the case with the following three claims.
\noindent \textbf{Claim 1.} For $s \geq 4$ we have, for $\delta$ and $\epsilon < \epsilon_0$ chosen sufficiently small, that $$\|D_t r\|_{H^s} \leq C\|\sigma\|_{H^s} + C(\delta + \epsilon)E_s^{1/2} + C\epsilon^{5/2}$$ and $$\|\sigma\|_{H^s} \leq CE_s^{1/2} + C\epsilon^{5/2}$$
\textit{Proof of Claim 1.} Denote $\mathscr{I} := \frac{1}{2}D_t(I - \mathcal{H})\xi - \frac{1}{2}\tilde{D}_t(I - \tilde{\mathcal{H}})\tilde{\xi}$. First consider the difference \begin{equation}\label{dti} \begin{aligned} D_t r - \mathscr{I} & = D_t\xi - \frac{1}{2}D_t(I - \mathcal{H})\xi \\ & \quad - \tilde{D}_t\tilde{\xi} +\frac{1}{2}\tilde{D}_t(I - \tilde{\mathcal{H}})\tilde{\xi} \\ & \quad - (D_t - \tilde{D}_t)\tilde{\xi} \\ & = \frac{1}{2}D_t(\mathcal{H} + \overline{\mathcal{H}})\xi \\ & \quad - \tilde{D}_t\tilde{\xi} + \frac{1}{2}\tilde{D}_t(I - \tilde{\mathcal{H}})\tilde{\xi} \\ &\quad - (b - \tilde{b})\tilde{\xi}_\alpha \end{aligned} \end{equation}
By Step 1 we have that $\|(b - \tilde{b})\tilde{\xi}_\alpha\|_{H^s} \leq C(E_s + \epsilon E_s^{1/2} + \epsilon^{5/2})$, and by a multiscale calculation we have that $\|\tilde{D}_t\tilde{\xi} - \frac{1}{2}\tilde{D}_t(I - \tilde{\mathcal{H}})\tilde{\xi}\|_{H^s} \leq C\epsilon^{5/2}$. The remaining term can be expanded as \begin{align*} \frac{1}{2}D_t(\mathcal{H} + \overline{\mathcal{H}})\xi & = \frac{1}{2}[D_t \zeta, \mathcal{H}]\frac{\xi_\alpha}{\zeta_\alpha} + \frac{1}{2}[D_t \overline{\zeta}, \overline{\mathcal{H}}]\frac{\xi_\alpha}{\overline{\zeta}_\alpha} + \frac{1}{2}(\mathcal{H} + \overline{\mathcal{H}})D_t \xi
\end{align*} Decomposing these terms as in Step 1 yields a sum of terms, each of which is bounded in $H^s$ by $C(E_s + \epsilon E_s^{1/2})$ or is immediately $\mathcal{O}(\epsilon^{5/2})$, apart from $$\frac{1}{2}[\tilde{D}_t \overline{\tilde{\zeta}}, \overline{\tilde{\mathcal{H}}}] \frac{\tilde{\xi}_\alpha}{\overline{\tilde{\zeta}}_\alpha} + \frac{1}{2}(\tilde{\mathcal{H}} + \overline{\tilde{\mathcal{H}}})\tilde{D}_t \tilde{\xi}$$ whose leading $O(\epsilon^2)$ term is $$\frac{1}{2}[\overline{\zeta}^{(1)}_{t_0}, \overline{\mathcal{H}}_0]\zeta^{(1)}_{\alpha_0} + \frac{1}{2}\overline{\mathcal{H}}^{(1)}\zeta^{(1)}_{t_0} = 0.$$ Hence, we have $$\left\|D_t r - \mathscr I\right\|_{H^s} \leq C(E_s + \epsilon E_s^{1/2} + \epsilon^{5/2})$$ We can further write \begin{align*} D_t r - \sigma & = D_t r - \frac{1}{2}(I - \mathcal{H})\mathscr{I} \\ & = \frac{1}{2}(I - \overline{\mathcal{H}})D_tr + \frac{1}{2}(\mathcal{H} + \overline{\mathcal{H}})D_tr \\ & \quad + \frac{1}{2}(I - \mathcal{H})(D_t r - \mathscr{I})
\end{align*} which by virtue of \eqref{DtXiIsAntiholRemainder} and the above bound on $D_t r - \mathscr{I}$ yields $$\|D_t r - \sigma\|_{H^s} \leq C(E_s + \epsilon E_s^{1/2} + \epsilon^{5/2})$$ Hence for sufficiently small $\delta$ and $\epsilon_0$ the claim follows.$\Box$
\noindent \textbf{Claim 2.} Let $s \geq 4$ be given. Then for $\delta$ and $\epsilon < \epsilon_0$ chosen sufficiently small we have $$\|D_t^2 r\|_{H^s} \leq C\sum_{n = 0}^s \|D_t\partial_\alpha^n\sigma\|_{L^2} + C(\epsilon + \delta)E_s^{1/2} + C\epsilon^{5/2}$$ and $$\sum_{n = 0}^s \|D_t\partial_\alpha^n \sigma\|_{L^2} \leq CE_s^{1/2} + C\epsilon^{5/2}$$
\textit{Proof of Claim 2.} First note that for every $n = 0, 1, \ldots, s$ we have $$\partial_\alpha^n D_t^2 r - D_t \partial_\alpha^n\sigma = \partial_\alpha^n(D_t^2 r - D_t \sigma) - [b, \partial_\alpha^n] \sigma_\alpha$$ The latter term can easily be estimated by $C(E_s^{1/2} + \epsilon^{3/2})^2$ using the product rule, Claim 1, and Step 1. Therefore it suffices to bound $D_t^2 r - D_t \sigma$ in $H^s$. Again denote $\mathscr{I} := \frac{1}{2}D_t(I - \mathcal{H})\xi - \frac{1}{2}\tilde{D}_t(I - \tilde{\mathcal{H}})\tilde{\xi}$, so that $\sigma = \frac{1}{2}(I - \mathcal{H})\mathscr{I}$. We first write \begin{align*} D_t^2 r - D_t \sigma & = D_t^2 r - \frac{1}{2}D_t(I - \mathcal{H})\mathscr{I} \\ & = \frac{1}{2}D_t(I - \overline{\mathcal{H}})D_tr + \frac{1}{2}D_t(\mathcal{H} + \overline{\mathcal{H}})D_t r + \frac{1}{2}D_t(I - \mathcal{H})\left(D_t r - \mathscr{I}\right) \\ & = \frac{1}{2}D_t(I - \overline{\mathcal{H}})D_tr + \frac{1}{2}D_t(\mathcal{H} + \overline{\mathcal{H}})D_t r - \frac{1}{2}[D_t\zeta, \mathcal{H}]\frac{\partial_\alpha}{\zeta_\alpha}(D_t r - \mathscr{I}) \\ & \quad + \frac{1}{2}(I - \mathcal{H})\left(D_t^2 r - D_t \mathscr{I}\right) \end{align*} All of the terms except the last are appropriately bounded in $H^s$, by \eqref{DtXiIsAntiholRemainder}, Lemma \ref{SingIntCommuteWithDt}, Claim 1, and Proposition \ref{SingIntSobolevEstimates}. Hence it suffices to estimate $D_t^2 r - D_t \mathscr{I}$ in $H^s$. We have by \eqref{dti} that \begin{equation*} \begin{aligned} D_t^2 r - D_t\mathscr{I} & = \frac{1}{2}D_t^2(\mathcal{H} + \overline{\mathcal{H}})\xi \\ & \quad - D_t(\tilde{D}_t\tilde{\xi} - \frac{1}{2}\tilde{D}_t(I - \tilde{\mathcal{H}})\tilde{\xi} )\\ &\quad - D_t((b - \tilde{b})\tilde{\xi}_\alpha) \end{aligned} \end{equation*} The last two terms are controlled by $C(E_s + \epsilon E_s^{1/2} + \epsilon^{5/2})$ by Step 1, \eqref{dtbminusb} and by a multiscale calculation. 
Using Proposition \ref{HilbertCommutatorIdentities} we can write \begin{align*} D_t^2(\mathcal{H} + \overline{\mathcal{H}})\xi & = [D_t^2\zeta, \mathcal{H}]\frac{\xi_\alpha}{\zeta_\alpha} + 2[D_t\zeta, \mathcal{H}]\frac{\partial_\alpha D_t \xi}{\zeta_\alpha} - \frac{1}{\pi i}\int\left(\frac{D_t\zeta(\alpha) - D_t\zeta(\beta)}{\zeta(\alpha) - \zeta(\beta)}\right)^2 \xi_\beta(\beta) d\beta \\ & \quad + [D_t^2\overline{\zeta}, \overline{\mathcal{H}}]\frac{\xi_\alpha}{\overline{\zeta}_\alpha} + 2[D_t\overline{\zeta}, \overline{\mathcal{H}}]\frac{\partial_\alpha D_t \xi}{\overline{\zeta}_\alpha} - \frac{1}{\pi i}\int\left(\frac{D_t\overline{\zeta}(\alpha) - D_t\overline{\zeta}(\beta)}{\overline{\zeta}(\alpha) - \overline{\zeta}(\beta)}\right)^2 \xi_\beta(\beta) d\beta \\ & \quad + (\mathcal{H} + \overline{\mathcal{H}})D_t^2\xi \end{align*} Now we effect the usual decomposition of all of these terms. The terms which are not of size $\mathcal{O}(\epsilon^{5/2})$ are \begin{align*} & \quad \; \epsilon^2[\overline{\zeta}^{(1)}_{t_0 t_0}, \overline{\mathcal{H}}_0]\zeta^{(1)}_{\alpha_0} + 2\epsilon^2[\overline{\zeta}^{(1)}_{t_0}, \overline{\mathcal{H}}_0]\zeta^{(1)}_{\alpha_0 t_0} + \epsilon^2\overline{\mathcal{H}}^{(1)}\zeta^{(1)}_{t_0 t_0} \\ & = \epsilon^2[\overline{\zeta}^{(1)}_{t_0 t_0}, \overline{\mathcal{H}}_0]\zeta^{(1)}_{\alpha_0} + 2\epsilon^2[\overline{\zeta}^{(1)}_{t_0}, \overline{\mathcal{H}}_0]\zeta^{(1)}_{\alpha_0 t_0} + \epsilon^2[\overline{\zeta}^{(1)}, \overline{\mathcal{H}}_0]\zeta^{(1)}_{\alpha_0 t_0 t_0} \\ & = 0. \end{align*} This completes the estimate of the term $D_t^2 r - D_t \mathscr{I}$, and hence the claim.$\Box$
\noindent \textbf{Claim 3.} Let $s \geq 4$ be given. Then for $\delta$ and $\epsilon < \epsilon_0$ chosen sufficiently small, we have $$\|D_t r\|_{H^s} \leq C\sum_{n = 0}^s \|D_t\partial_\alpha^n\rho\|_{L^2} +C\left(E_s + \epsilon E_s^{1/2} + \epsilon^{5/2}\right)$$
\textit{Proof of Claim 3.} First observe that we can write \begin{align*} D_t r - \frac{1}{2}D_t(I - \mathcal{H})r & = D_t r - \frac{1}{2}(I - \mathcal{H})D_t r + \frac{1}{2}[D_t\zeta, \mathcal{H}]\frac{r_\alpha}{\zeta_\alpha} \\ & = \frac{1}{2}(I - \overline{\mathcal{H}})D_t r + \frac{1}{2}(\mathcal{H} + \overline{\mathcal{H}})D_tr +\frac{1}{2}[D_t\zeta, \mathcal{H}]\frac{r_\alpha}{\zeta_\alpha}, \end{align*} and thus \begin{align*} \partial_\alpha^n D_t r - D_t\partial_\alpha^n\rho & = \partial_\alpha^n D_t r - \frac12\partial_\alpha^n D_t(I - \mathcal{H})r -\frac12 [b, \partial_\alpha^n]\partial_\alpha(I - \mathcal{H})r\\ & = \partial_\alpha^n\left(\frac{1}{2}(I - \overline{\mathcal{H}})D_t r + \frac{1}{2}(\mathcal{H} + \overline{\mathcal{H}})D_tr +\frac{1}{2}[D_t\zeta, \mathcal{H}]\frac{r_\alpha}{\zeta_\alpha}\right) \\ & \qquad + \frac12\sum_{j = 1}^n \binom{n}{j}\left(\partial_\alpha^{j - 1}b_\alpha\right)\left(\partial_\alpha^{n - j + 1}(I - \mathcal{H})r\right) \end{align*} Taking the $L^2$ norm of this equation, using \eqref{DtXiIsAntiholRemainder} and summing over $n = 0, 1, \ldots, s$ yields
$$\|D_t r\|_{H^s} \leq C\sum_{n = 0}^s \|D_t\partial_\alpha^n\rho\|_{L^2} + C\left(E_s + \epsilon E_s^{1/2} + \epsilon^{5/2}\right)$$ and so the claim follows.$\Box$
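The commutators $[b, \partial_\alpha^n]$ appearing in the proofs of Claims 2 and 3 are expanded via the Leibniz rule: for smooth $f$, $$[b, \partial_\alpha^n]f = b\,\partial_\alpha^n f - \partial_\alpha^n(bf) = -\sum_{j = 1}^n \binom{n}{j}\left(\partial_\alpha^j b\right)\left(\partial_\alpha^{n - j}f\right),$$ which with $f = \partial_\alpha(I - \mathcal{H})r$ and the prefactor $-\frac{1}{2}$ recovers the sum displayed in the proof of Claim 3.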
\subsubsection*{Summary of Estimates}
Hence we have shown that for $s \geq 4$, there exists an $\epsilon_0 > 0$ and a $\delta > 0$ so that if \eqref{ZetaLocalAPrioriBound} holds, then for all $0 < \epsilon < \epsilon_0$, the quantity $b$ is bounded in $H^s$ by $C(E_s^{1/2} + \epsilon^{3/2})$, and the quantities $$b - \tilde{b}, \qquad \mathcal{A} - \tilde{\mathcal{A}}, \qquad D_t(b - \tilde{b}), \qquad D_t b$$ are bounded in $H^s$ by $C\left(E_s + \epsilon E_s^{1/2} + \epsilon^{5/2}\right)$, where the constant $C$ depends only on $\mathfrak{S}(T_0)$ and $\|B\|_{H^{s + 7}}$. It is also useful to note that under the same conditions, \begin{equation}\label{balphaBound}
\|b_\alpha\|_{H^{s - 1}} \leq \|b - \tilde{b}\|_{H^s} + \|\tilde{b}_\alpha\|_{H^{s - 1}} \leq C(E_s + \epsilon E_s^{1/2} + \epsilon^{5/2}) \end{equation} Finally, from step 4 we have that for $\delta$ and $\epsilon_0$ sufficiently small, \begin{equation}\label{equivalentRbound}
C_1(\|D_tr\|_{H^s}+\|D_t^2 r\|_{H^s}-\epsilon^{5/2})\le E_s^{1/2} \leq C_2(\|D_tr\|_{H^s}+\|D_t^2 r\|_{H^s}+\epsilon^{5/2}); \end{equation} from Step 5 and \eqref{DtSquaredrEnergyBound} we have that for $\delta$ and $\epsilon_0$ sufficiently small, \begin{equation}\label{EnergyEsEquivalence} \begin{aligned}
E_s^{1/2} \leq C\sum_{n = 0}^s (\|D_t \partial_\alpha^n \rho\|_{L^2} + \|D_t\partial_\alpha^n \sigma\|_{L^2}) + C\epsilon^{5/2} \\
\|\sigma\|_{H^s} + \|D_t\sigma\|_{H^s} + \sum_{n = 0}^s \|D_t\partial_\alpha^n \sigma\|_{L^2} \leq CE_s^{1/2} + C\epsilon^{5/2} \end{aligned} \end{equation}
\subsection{The Estimates of the Cubic Nonlinearities in the Equations for the Remainder}
Now that we have satisfactory estimates of the remainders of the auxiliary quantities, we can show that the right hand sides of \eqref{NewEulerRemainder} and \eqref{DtNewEulerRemainder} are sufficiently small to provide suitable energy estimates. We begin by controlling the quantities appearing in the right hand side of \eqref{NewEulerRemainder}.
\begin{proposition}\label{NewEulerRemainderIsCubic}
Let $s \geq 4$ be given. Then there exist $\epsilon_0, \delta$ so that if \eqref{ZetaLocalAPrioriBound} holds, then for all $\epsilon < \epsilon_0$, $$\|\mathcal{P}\rho\|_{H^s} \leq C\left(E_s^{3/2} + \epsilon E_s + \epsilon^2 E_s^{1/2} + \epsilon^{7/2}\right)$$ where the constant $C = C(\mathfrak{S}(T_0), \|B\|_{H^{s + 7}})$. \end{proposition}
\begin{proof} By \eqref{NewEulerRemainder} we must estimate the terms \begin{align*} & (G - \tilde{G}) - (I - \mathcal{H})(\mathcal{P} - \tilde{\mathcal{P}})\tilde{\xi} + (\mathcal{H} - \tilde{\mathcal{H}})\tilde{\mathcal{P}}\tilde{\xi} - [\tilde{\mathcal{P}}, \tilde{\mathcal{H}}]\tilde{\xi} \notag\\ & + 2\left[D_t\zeta, \mathcal{H}\frac{1}{\zeta_\alpha} + \overline{\mathcal{H}}\frac{1}{\overline{\zeta}_\alpha}\right]\partial_\alpha D_t \tilde{\xi} + 2[D_t\zeta, \overline{\mathcal{H}}]\frac{b_\alpha}{\overline{\zeta}_\alpha} + 2[D_t\zeta, \overline{\mathcal{H}}]\frac{\partial_\alpha D_t r}{\overline{\zeta}_\alpha} \notag\\ & - \frac{1}{\pi i}\int \left(\frac{D_t\zeta(\alpha) - D_t\zeta(\beta)}{\zeta(\alpha) - \zeta(\beta)}\right)^2 \tilde{\xi}_\beta(\beta) \, d\beta \end{align*}
We estimate these terms in steps. We make the blanket assumption that all constants $C$ may depend on $\mathfrak{S}(T_0)$ and $\|B\|_{H^{s + 7}}$.
\textbf{Step 1.} We collect in this step terms with immediate bounds. We have already seen through Proposition \ref{VariousMultiscaleIdentities} that $\|[\tilde{\mathcal{P}}, \tilde{\mathcal{H}}]\tilde{\xi}\|_{H^s} \leq C\epsilon^{7/2}$. We also have by Corollary \ref{DiffHilbertBound} that $$\|(\mathcal{H} - \tilde{\mathcal{H}})\tilde{\mathcal{P}}\tilde{\xi}\|_{H^s} \leq C(\epsilon^3 + E_s^{1/2})\|\tilde{\mathcal{P}}\tilde{\xi}\|_{H^s} \leq C\left(\epsilon^2 E_s^{1/2} + \epsilon^{7/2}\right)$$ By \eqref{DiffPFormula} and the estimates we obtained in Section~\ref{formulaforremainders}, we have \begin{align*}
\|(I - \mathcal{H})(\mathcal{P} - \tilde{\mathcal{P}})\tilde{\xi}\|_{H^s} \leq C\left(\epsilon E_s + \epsilon^2 E_s^{1/2} + \epsilon^{7/2}\right) \end{align*} Next, \begin{align*}
\left\|[D_t\zeta, \overline{\mathcal{H}}]\frac{b_\alpha}{\overline{\zeta}_\alpha}\right\|_{H^s} & \leq \left\|[D_t\zeta, \overline{\mathcal{H}}]\frac{\partial_\alpha(b - \tilde{b})}{\overline{\zeta}_\alpha}\right\|_{H^s} + \left\|[D_t\zeta, \overline{\mathcal{H}}]\frac{\tilde{b}_\alpha}{\overline{\zeta}_\alpha}\right\|_{H^s} \\ & \leq C\left(E_s^{1/2} + \epsilon\right)\left(E_s + \epsilon E_s^{1/2} + \epsilon^{5/2}\right) \\ & \leq C\left(E_s^{3/2} + \epsilon E_s + \epsilon^2 E_s^{1/2} + \epsilon^{7/2}\right) \end{align*} where as usual we estimated the former term with Proposition \ref{SingIntSobolevEstimates} and the latter term crudely in $H^s$. By \eqref{DtXiIsAntiholRemainder}, \eqref{4.2} and Corollary \ref{DiffHilbertBound} we have \begin{align*}
\left\|2[D_t\zeta, \overline{\mathcal{H}}]\frac{\partial_\alpha}{\overline{\zeta}_\alpha}D_t r\right\|_{H^s} & = \left\|[D_t\zeta, \overline{\mathcal{H}}]\frac{\partial_\alpha}{\overline{\zeta}_\alpha}(I - \overline{\mathcal{H}})D_t r\right\|_{H^s} \\ & \leq C\left(E_s^{1/2} + \epsilon\right)\left(E_s + \epsilon E_s^{1/2} + \epsilon^{5/2}\right)\\ & \leq C\left(E_s^{3/2} + \epsilon E_s + \epsilon^2 E_s^{1/2} + \epsilon^{7/2}\right) \end{align*}
\textbf{Step 2.} Next we consider the integral $$\frac{1}{\pi i}\int \left(\frac{D_t\zeta(\alpha) - D_t\zeta(\beta)}{\zeta(\alpha) - \zeta(\beta)}\right)^2 \tilde{\xi}_\beta(\beta) \, d\beta$$ Since this integral is cubic, the only way it will contribute a term larger than $\mathcal{O}(\epsilon^{7/2})$ is if it contributes a term independent of $r$ of order $O(\epsilon^3)$. To see that this does not occur, we decompose the integral in the same way as in Step 2 of \S 4.3.
Decomposing the differences in the numerator of the integrand by writing $$D_t\zeta = D_t r + (b - \tilde{b})\tilde\zeta_\alpha + \tilde{D}_t \tilde{\zeta}$$ yields a sum of integrals depending on $r$ or $b - \tilde{b}$ which are controlled in $H^s$ by $$C\left(\epsilon E_s + \epsilon^2 E_s^{1/2}\right),$$ as well as the following integral: $$\frac{1}{\pi i}\int \left(\frac{\tilde{D}_t\tilde{\zeta}(\alpha) - \tilde{D}_t\tilde{\zeta}(\beta)}{\zeta(\alpha) - \zeta(\beta)}\right)^2 \tilde{\xi}_\beta(\beta) \, d\beta$$ Next, decomposing the differences in the denominator of this integral via the identity $$\frac{1}{\zeta(\alpha) - \zeta(\beta)} = \frac{1}{\alpha - \beta} - \frac{\xi(\alpha) - \xi(\beta)}{\left(\zeta(\alpha) - \zeta(\beta)\right)\left(\alpha - \beta\right)}$$ yields a sum of integrals controlled in $H^s$ by $$C\epsilon^3\left(E_s^{1/2} + \epsilon^{1/2}\right)$$ along with the integral $$\frac{1}{\pi i}\int \left(\frac{\tilde{D}_t\tilde{\zeta}(\alpha) - \tilde{D}_t\tilde{\zeta}(\beta)}{\alpha - \beta}\right)^2 \tilde{\xi}_\beta(\beta) \, d\beta$$ Expanding $\tilde{D}_t \tilde{\zeta}$ and $\tilde{\xi}_\alpha$ in powers of $\epsilon$ and collecting like powers yields a sum of integrals controlled by $C\epsilon^{7/2}$ except for the leading term of size $O(\epsilon^3)$ given by the integral $$\frac{\epsilon^3}{\pi i}\int \left(\frac{\zeta^{(1)}_{t_0}(\alpha) - \zeta^{(1)}_{t_0}(\beta)}{\alpha - \beta}\right)^2 \zeta^{(1)}_{\beta_0}(\beta) \, d\beta = 2\epsilon^3[\zeta^{(1)}_{t_0}, \mathcal{H}_0](\zeta^{(1)}_{t_0 \alpha_0} \zeta^{(1)}_{\alpha_0}) - \epsilon^3[\zeta^{(1)}_{t_0}, [\zeta^{(1)}_{t_0}, \mathcal{H}_0]]\zeta^{(1)}_{\alpha_0 \alpha_0},$$ which is also controlled by $C\epsilon^{7/2}$ by Corollary \ref{CommutatorPhase}.
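The denominator identity invoked above is a direct consequence of $\zeta(\alpha) = \alpha + \xi(\alpha)$: writing $a = \alpha - \beta$ and $x = \xi(\alpha) - \xi(\beta)$, so that $\zeta(\alpha) - \zeta(\beta) = a + x$, we have $$\frac{1}{a + x} = \frac{1}{a} - \frac{x}{a(a + x)},$$ which is the stated formula. Each application trades a factor $\frac{1}{\zeta(\alpha) - \zeta(\beta)}$ for $\frac{1}{\alpha - \beta}$ at the cost of one extra factor of $\xi$, and hence of at least one extra power of $\epsilon$ up to remainder terms.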
\textbf{Step 3.} We turn to the term $$\left[D_t\zeta, \mathcal{H}\frac{1}{\zeta_\alpha} + \overline{\mathcal{H}}\frac{1}{\overline{\zeta}_\alpha}\right]\partial_\alpha D_t \tilde{\xi} = -\frac{2}{\pi}\int \frac{\left(D_t\zeta(\alpha) - D_t\zeta(\beta)\right)\left(\Im \zeta(\alpha) - \Im \zeta(\beta)\right)}{|\zeta(\alpha) - \zeta(\beta)|^2} \partial_\beta D_t \tilde{\xi}(\beta) \, d\beta$$ Decomposing the differences in the numerator of the integral as in Step 2 yields a sum of singular integrals. All but one of these singular integrals depend on $r$ and are controlled in $H^s$ by $$C\left(\epsilon E_s + \epsilon^2 E_s^{1/2} + \epsilon^{7/2}\right)$$ The remaining singular integral is given by $$\frac{2}{\pi} \int \frac{(\tilde{D}_t \tilde{\zeta}(\alpha) - \tilde{D}_t \tilde{\zeta}(\beta))(\Im \tilde{\xi}(\alpha) - \Im \tilde{\xi}(\beta))}{(\alpha - \beta)^2} \partial_\beta \tilde{D}_t \tilde{\xi} \, d\beta,$$ of which the leading term is isolated by expanding $\tilde{\zeta} = \alpha + \epsilon \zeta^{(1)} + \epsilon^2 \zeta^{(2)} + \epsilon^3 \zeta^{(3)}$, yielding $$\frac{2}{\pi}\epsilon^3 \int \frac{(\zeta^{(1)}_{t_0}(\alpha) - \zeta^{(1)}_{t_0}(\beta))(\Im \zeta^{(1)}(\alpha) - \Im \zeta^{(1)}(\beta))}{(\alpha - \beta)^2} \zeta^{(1)}_{t_0 \beta_0} (\beta) \, d\beta$$ By the same calculation in \S 3.3 showing that the $I_1$ term of $G_3$ vanished, we see that this leading term is actually $O(\epsilon^4)$ by Corollary \ref{CommutatorPhase}. Therefore only terms of size $O(\epsilon^4)$ appear, and so we have that $$\left\|\left[D_t\zeta, \mathcal{H}\frac{1}{\zeta_\alpha} + \overline{\mathcal{H}}\frac{1}{\overline{\zeta}_\alpha}\right]\partial_\alpha D_t \tilde{\xi}\,\right\|_{H^s} \leq C(\epsilon E_s + \epsilon^2 E_s^{1/2} + \epsilon^{7/2})$$
The same method of decomposition allows us to expand $G$ until the leading term of the part of the decomposition that is independent of $r$ is apparent. This leading term, of size $O(\epsilon^3)$, is by construction equal to $\tilde{G}$, with which it cancels. Therefore $G - \tilde{G}$, and hence the whole right hand side of \eqref{NewEulerRemainder}, is bounded in $H^s$ by $C(E_s^{3/2} + \epsilon E_s + \epsilon^2 E_s^{1/2} + \epsilon^{7/2})$. \end{proof}
Next we consider the right hand side of \eqref{DtNewEulerRemainder}.
\begin{proposition}\label{DtNewEulerRemainderIsCubic}
Let $s \geq 4$ be given. Then there exist $\epsilon_0 > 0$ and $\delta > 0$ so that if \eqref{ZetaLocalAPrioriBound} holds, then for all $\epsilon < \epsilon_0$, $$\|\mathcal{P}\sigma\|_{H^s} \leq C\left(E_s^{3/2} + \epsilon E_s + \epsilon^2 E_s^{1/2} + \epsilon^{7/2}\right),$$ where the constant $C = C(\mathfrak{S}(T_0), \|B\|_{H^{s + 7}})$. \end{proposition}
\begin{proof} It suffices to show that the following terms are $\mathcal{O}(\epsilon^{7/2})$: \begin{align*} & -8[D_t\zeta, \mathcal{H}]\frac{\partial_\alpha D_t \sigma}{\zeta_\alpha} + \frac{4}{\pi i} \int \left(\frac{D_t\zeta(\alpha) - D_t\zeta(\beta)}{\zeta(\alpha) - \zeta(\beta)}\right)^2 \sigma_\beta(\beta) d\beta \notag\\ & \quad + (I - \mathcal{H})iU_{\kappa^{-1}}\left(\frac{\mathfrak{a}_t}{\mathfrak{a}}\right) \partial_\alpha(I - \mathcal{H})\xi \notag\\ & \quad - (I - \mathcal{H})(\mathcal{P} - \tilde{\mathcal{P}})\tilde{D}_t(I - \tilde{\mathcal{H}})\tilde{\xi} \notag\\ & \quad +i (I - \mathcal{H})\tilde{b}_\alpha\partial_\alpha(I - \tilde{\mathcal{H}})\tilde{\xi} \notag\\ & \quad + (I - \mathcal{H})(D_t G - \tilde{D}_t\tilde{G}) - (I - \mathcal{H})\epsilon^4 (\tilde{D}_t R) \notag\\ & := I_1 + I_2 + I_3 +I_4 + I_5 + I_6 +I_7. \end{align*}
Clearly $\|I_7\|_{H^s} \leq C\epsilon^{7/2}$ and $\|I_5\|_{H^s} \leq C\epsilon^{7/2}$. By \eqref{DiffPFormula} and the estimates of \S 4.3, we have that \begin{align*}
\|I_4\|_{H^s} & \leq C(E_s + \epsilon E_s^{1/2} + \epsilon^{5/2}) \epsilon \\ & \leq C(E_s^{3/2} + \epsilon E_s + \epsilon^2 E_s^{1/2} + \epsilon^{7/2}) \end{align*} Using Lemma \ref{SingIntCommuteWithDt} along with Proposition \ref{SingIntSobolevEstimates}, we can decompose $D_t G$ into a sum of singular integrals as in Step 2 of \S 4.3. Each of these integrals can be bounded by $C(E_s^{3/2} + \epsilon E_s + \epsilon^2 E_s^{1/2})$, except for one whose leading term, of size $O(\epsilon^3)$, is by construction equal to $\tilde{D}_t \tilde{G}$ and so cancels in $I_6$; hence $I_6$ is $\mathcal{O}(\epsilon^{7/2})$. Similarly, if we effect the usual decomposition on the right hand side of the formula \eqref{atOveraFormula}, we see that the only term not of size $\mathcal{O}(\epsilon^{5/2})$ is the term
$$2i\epsilon^2\left([\zeta^{(1)}_{t_0 t_0}, \mathcal{H}_0]\overline{\zeta}^{(1)}_{\alpha_0 t_0} + [\zeta^{(1)}_{t_0}, \mathcal{H}_0]\overline{\zeta}^{(1)}_{\alpha_0 t_0 t_0}\right) = 0$$ and hence that $\|I_3\|_{H^s} \leq C(E_s^{3/2} + \epsilon E_s + \epsilon^2 E_s^{1/2} + \epsilon^{7/2})$. By Step 5 of \S 4.3 and Proposition \ref{SingIntSobolevEstimates} we estimate $I_2$ as \begin{align*}
\|I_2\|_{H^s} & \leq C(E_s^{1/2} + \epsilon)^2\|\sigma\|_{H^s} \\ & \leq C(E_s^{1/2} + \epsilon)^2(E_s^{1/2}+ \epsilon^{5/2}) \\ & \leq C(E_s^{3/2} + \epsilon E_s + \epsilon^2 E_s^{1/2} + \epsilon^{7/2}) \end{align*} The only term left to estimate is $I_1$. We first write $$2[D_t\zeta, \mathcal{H}]\frac{\partial_\alpha D_t \sigma}{\zeta_\alpha} = 2[D_t\zeta, \mathcal{H}]\frac{\partial_\alpha D_t^2 r}{\zeta_\alpha} + 2[D_t\zeta, \mathcal{H}]\frac{\partial_\alpha (D_t \sigma - D_t^2 r)}{\zeta_\alpha},$$ and by Step 5 of \S 4.3 we have that the latter term is bounded by $C(E_s^{3/2} + \epsilon E_s + \epsilon^2 E_s^{1/2} + \epsilon^{7/2})$ in $H^s$. Next we have $$2[D_t\zeta, \mathcal{H}]\frac{\partial_\alpha D_t^2 r}{\zeta_\alpha} = 2\left[D_t\zeta, \mathcal{H}\frac{1}{\zeta_\alpha} + \overline{\mathcal{H}}\frac{1}{\overline{\zeta}_\alpha}\right]\partial_\alpha D_t^2 r - 2[D_t\zeta, \overline{\mathcal{H}}]\frac{\partial_\alpha D_t^2 r}{\overline{\zeta}_\alpha},$$ and the former term is bounded in $H^s$ by $C(E_s^{3/2} + \epsilon E_s + \epsilon^2 E_s^{1/2})$. For the latter term, we use Proposition \ref{HilbertCommutatorIdentities} to write $$2[D_t\zeta, \overline{\mathcal{H}}]\frac{\partial_\alpha D_t^2 r}{\overline{\zeta}_\alpha} = [(I + \overline{\mathcal{H}})D_t\zeta, \overline{\mathcal{H}}]\frac{\partial_\alpha D_t^2 r}{\overline{\zeta}_\alpha} = [D_t\zeta, \overline{\mathcal{H}}]\frac{\partial_\alpha }{\overline{\zeta}_\alpha}(I - \overline{\mathcal{H}})D_t^2 r$$ Finally, we have by \eqref{DtXiIsAntiholRemainder} that \begin{align*} (I - \overline{\mathcal{H}})D_t^2 r & = [D_t \zeta, \overline{\mathcal{H}}]\frac{\partial_\alpha D_t r}{\overline{\zeta}_\alpha} + D_t\left(-(I - \tilde{\mathcal{H}})\tilde{D}_t\overline{\tilde{\zeta}} - (I - \tilde{\mathcal{H}})(D_t - \tilde{D}_t)\overline{\tilde{\zeta}} + (\mathcal{H} - \tilde{\mathcal{H}}){D}_t\overline{\tilde{\zeta}}\right) \end{align*}
Therefore $\|(I - \overline{\mathcal{H}})D_t^2 r\|_{H^s} \leq C(E_s + \epsilon E_s^{1/2} + \epsilon^{5/2})$, from which the Proposition follows. \end{proof}
\subsection{Construction of the Energy for the Remainder}
In this section we construct the energy corresponding to the equations \eqref{NewEulerRemainder} and \eqref{DtNewEulerRemainder}. We then show that this energy obeys a differential inequality which yields a priori bounds on an $O(\epsilon^{-2})$ time scale. The energy so constructed will control the quantity $\|D_t r\|_{H^s}^2 + \|D_t^2 r\|_{H^s}^2$; hence, by \eqref{DtSquaredrEnergyBound}, for sufficiently small energies it also yields suitable bounds on $E_s$.
\subsubsection*{Bounds on the Equations for the Derivatives}
We must first show that the nonlinearities in the corresponding equations for the derivatives are appropriately bounded in $L^2$. \begin{proposition}\label{NewEulerRemainderDerivBound} Let $s \geq 4$ and $1\le n\le s$ be given. Then there exist $\epsilon_0 > 0$ and $\delta > 0$ so that if \eqref{ZetaLocalAPrioriBound} holds, then for all $\epsilon < \epsilon_0$, if $\Theta = \rho, \sigma$, then
$$\|\mathcal{P}\partial_\alpha^n\Theta\|_{L^2} \leq C(E_s^{3/2} + \epsilon E_s + \epsilon^2 E_s^{1/2} + \epsilon^{7/2})$$ where $C$ depends only on $\mathfrak{S}(T_0)$ and $\|B\|_{H^{s + 7}}$. \end{proposition}
\begin{proof} Let $\Theta = \rho, \sigma$ as above. Observe that for any $n \geq 1$ we can write $$\mathcal{P}\partial_\alpha^n\Theta = \partial_\alpha^n\mathcal{P}\Theta - \sum_{j = 1}^n \partial_\alpha^{n - j}[\partial_\alpha, \mathcal{P}]\partial_\alpha^{j - 1}\Theta$$ Using the identity \begin{equation}\label{PartialAlphaCommuteP} [\partial_\alpha, \mathcal{P}] = \left\{\partial_\alpha\bigl(D_t b - i(\mathcal{A} - 1)\bigr)\right\}\partial_\alpha + 2b_\alpha D_t \partial_\alpha \end{equation} we rewrite as \begin{align*} \mathcal{P}\partial_\alpha^n\Theta = \partial_\alpha^n\mathcal{P}\Theta & - \sum_{j = 1}^n \partial_\alpha^{n - j}\Bigl(\partial_\alpha\bigl(D_t b - i(\mathcal{A} - 1)\bigr)\partial_\alpha^j\Theta\Bigr) \\ & - 2\sum_{j = 1}^n \partial_\alpha^{n - j}\bigl(b_\alpha D_t \partial_\alpha^j\Theta\bigr) \end{align*} Now using the identity \begin{align}\label{CommuteDtPastPartialAlpha} D_t \partial_\alpha^j & = \partial_\alpha^j D_t - \sum_{l = 1}^j \partial_\alpha^{j - l}[\partial_\alpha, D_t]\partial_\alpha^{l - 1} \notag\\ & = \partial_\alpha^j D_t - \sum_{l = 1}^j \partial_\alpha^{j - l}( b_\alpha\partial_\alpha^l) \end{align} we have by the product rule, Steps 2 and 3 of \S 4.3, \eqref{balphaBound} and Proposition \ref{NewEulerRemainderIsCubic} that for all $1 \leq n \leq s$, \begin{align*}
\|\mathcal{P}\partial_\alpha^n\Theta\|_{L^2} & \leq C\|\mathcal{P}\Theta\|_{H^s} \\
& \quad + C\|D_t b - i(\mathcal{A} - 1)\|_{H^s}\|\partial_\alpha\Theta\|_{H^{s - 1}} \\
& \quad + C\|b_\alpha\|_{H^{s - 1}}(\|D_t\Theta\|_{H^s}+\|\partial_\alpha\Theta\|_{H^{s - 1}} ) \\ & \leq C\left(E_s^{3/2} + \epsilon E_s + \epsilon^2 E_s^{1/2} + \epsilon^{7/2}\right) \end{align*} where the last inequality follows from Step 5 of \S 4.3. \end{proof}
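The operator identity used at the start of the proof is the standard telescoping expansion of the commutator $[\partial_\alpha^n, \mathcal{P}]$; for the reader's convenience, \begin{align*} \partial_\alpha^n\mathcal{P} - \mathcal{P}\partial_\alpha^n & = \sum_{j = 1}^n \left(\partial_\alpha^{n - j + 1}\mathcal{P}\partial_\alpha^{j - 1} - \partial_\alpha^{n - j}\mathcal{P}\partial_\alpha^{j}\right) = \sum_{j = 1}^n \partial_\alpha^{n - j}[\partial_\alpha, \mathcal{P}]\partial_\alpha^{j - 1}, \end{align*} since consecutive terms in the middle sum cancel.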
\subsubsection*{Construction of the Energy and the Energy Inequality}
Now that we have shown that the equations for the derivatives of the quantities in \eqref{NewEulerRemainder} and \eqref{DtNewEulerRemainder} also have $\mathcal{O}(\epsilon^{7/2})$ nonlinearities, we can construct the energies corresponding to these equations. Since the principal operator of \eqref{NewEulerRemainder} and \eqref{DtNewEulerRemainder} is $\mathcal{P}$, we can use the same construction given by Lemma 4.1 of \cite{WuAlmostGlobal2D} to construct our energy; we record this lemma here for convenience.
\begin{proposition}[c.f. Lemma 4.1 of \cite{WuAlmostGlobal2D}]\label{BasicEnergyInequality}
Suppose that a function $\Theta \in C^0([0, T]; \dot{H}^{1/2}) \cap C^1([0, T]; L^2)$ is given satisfying $\mathcal{P}\Theta = \mathscr{G}$. Define $$\mathfrak{E}(t) := \int \frac{1}{\mathcal{A}}|D_t\Theta(\alpha, t)|^2 + i\Theta(\alpha, t)\overline{\Theta}_\alpha(\alpha, t) d\alpha $$ Then
$$\frac{d\mathfrak{E}}{dt} = \int \frac{2}{\mathcal{A}}\Re\left(\mathscr{G}D_t\overline{\Theta}\right) - \frac{1}{\mathcal{A}}U_\kappa^{-1} \left(\frac{\mathfrak{a}_t}{\mathfrak{a}}\right)|D_t\Theta|^2 d\alpha$$ Moreover if $\Theta$ is the trace of a holomorphic function on $\Omega(t)^c$, i.e., if $\Theta = \frac{1}{2}(I - \mathcal{H})\Theta$, then $$\int i\Theta\overline{\Theta}_\alpha d\alpha = -\int i\overline{\Theta}\Theta_\alpha d\alpha \geq 0$$ \end{proposition}
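In the flat model case $\zeta(\alpha) = \alpha$, where $\mathcal{H}$ reduces to the flat Hilbert transform $\mathcal{H}_0$, the sign in the last assertion can be checked directly on the Fourier side; here we assume the convention $\hat{f}(\xi) = \int f(\alpha)e^{-i\alpha\xi}\,d\alpha$, under which $\mathcal{H}_0$ has multiplier $-\operatorname{sgn}(\xi)$. The relation $\Theta = \frac{1}{2}(I - \mathcal{H}_0)\Theta$ then forces $\operatorname{supp}\hat{\Theta} \subset [0, \infty)$, and Plancherel gives $$\int i\Theta\overline{\Theta}_\alpha \, d\alpha = \frac{1}{2\pi}\int i\,\hat{\Theta}(\xi)\overline{i\xi\hat{\Theta}(\xi)} \, d\xi = \frac{1}{2\pi}\int_0^\infty \xi|\hat{\Theta}(\xi)|^2 \, d\xi = \left\||D_\alpha|^{1/2}\Theta\right\|_{L^2}^2 \geq 0.$$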
For brevity, we introduce the quantities \begin{equation}\label{DefinitionRhoNSigmaN} \rho^{(n)} := \partial_\alpha^n\rho \qquad \text{and} \qquad \sigma^{(n)} := \partial_\alpha^n\sigma \end{equation} We cannot use the second part of Proposition \ref{BasicEnergyInequality} directly for $n > 0$ since $\rho^{(n)}$ and $\sigma^{(n)}$ need not be the trace of a holomorphic function on $\Omega(t)^c$. Hence we further introduce the notation \begin{align}\label{RhoSigmaHoloPart} \rho^{(n)} = \frac{1}{2}(I - \mathcal{H})\rho^{(n)} + \frac{1}{2}(I + \mathcal{H})\rho^{(n)} := \phi^{(n)} + \mathcal{R}^{(n)} \notag\\ \sigma^{(n)} = \frac{1}{2}(I - \mathcal{H})\sigma^{(n)} + \frac{1}{2}(I + \mathcal{H})\sigma^{(n)} := \psi^{(n)} + \mathcal{S}^{(n)} \end{align} Consider now the case $0 \leq n \leq s$. Define \begin{equation}\label{RemainderEnergyFormulaE}
\mathcal{E}_n(t) = \int \frac{1}{\mathcal{A}}|D_t\rho^{(n)}|^2 + i \phi^{(n)} \overline{\phi}^{(n)}_\alpha \,d\alpha \end{equation} and \begin{equation}\label{RemainderEnergyFormulaF}
\mathcal{F}_n(t) = \int \frac{1}{\mathcal{A}}|D_t\sigma^{(n)}|^2 + i \sigma^{(n)} \overline{\sigma}^{(n)}_\alpha \,d\alpha \end{equation} We must show that the contributions to the energy inequality for $\frac{d\mathcal{E}_n}{dt}$ coming from the parts of these quantities that are antiholomorphic in $\Omega(t)^c$ are at most of size $\mathcal{O}(\epsilon^5)$.
Observe first that we can write \begin{align}\label{SmallAntiholPart} \mathcal{R}^{(n)} & = \frac{1}{2}(I + \mathcal{H})\partial_\alpha^n\rho \notag\\ & = \frac{1}{4}\partial_\alpha^n(I + \mathcal{H})(I - \mathcal{H})r - \frac{1}{2}\sum_{j = 1}^n \partial_\alpha^{n - j}[\partial_\alpha, \mathcal{H}]\partial_\alpha^{j - 1}\rho \notag\\ & = - \frac{1}{2}\sum_{j = 1}^n \partial_\alpha^{n - j}[\zeta_\alpha - 1, \mathcal{H}]\frac{\partial_\alpha^j\rho}{\zeta_\alpha} \end{align} and so $\mathcal{R}^{(n)}$ is bounded by $C(E_s + \epsilon E_s^{1/2} + \epsilon^{5/2})$ in $L^2$. Writing $\phi^{(n)} = \rho^{(n)} - \mathcal{R}^{(n)}$ in $\mathcal{E}_n$ yields \begin{align*}
\mathcal{E}_n & = \int \frac{1}{\mathcal{A}}|D_t \rho^{(n)}|^2 + i \rho^{(n)} \overline{\rho}_\alpha^{(n)} \, d\alpha - i \int \phi^{(n)}\overline{\mathcal{R}}^{(n)}_\alpha + \mathcal{R}^{(n)} \overline{\phi}^{(n)}_\alpha + \mathcal{R}^{(n)}\overline{\mathcal{R}}^{(n)}_\alpha \, d\alpha \end{align*} Differentiating this with respect to $t$ and integrating by parts yields \begin{align}\label{EnergyInequalityV1}
\frac{d\mathcal{E}_n}{dt} & = \int \frac{2}{\mathcal{A}}\Re\left(D_t \overline{\rho}^{(n)}\mathcal{P}\rho^{(n)}\right) - \frac{1}{\mathcal{A}} U_\kappa^{-1} \left(\frac{\mathfrak{a}_t}{\mathfrak{a}}\right)|D_t \rho^{(n)}|^2 \, d\alpha \notag\\ & \qquad + 2\Im \int \mathcal{R}^{(n)}_t\overline{\phi}^{(n)}_\alpha + \phi^{(n)}_t \overline{\mathcal{R}}^{(n)}_\alpha + \mathcal{R}^{(n)}_t\overline{\mathcal{R}}^{(n)}_\alpha \, d\alpha \end{align} We want to show that the right hand side of this identity is $\mathcal{O}(\epsilon^5)$. By Proposition~\ref{NewEulerRemainderDerivBound} and \eqref{atOveraFormula} it is clear that the first integral is $\mathcal{O}(\epsilon^5)$, and so it suffices to show that the second integral is also of size $\mathcal{O}(\epsilon^{5})$.
The arguments for handling the first two terms rely on the fact that $\phi^{(n)}$ and $\mathcal{R}^{(n)}$ are almost orthogonal in $L^2$; since the two arguments are similar, we will only consider the term $\mathcal{R}^{(n)}_t\overline{\phi}^{(n)}_\alpha$. We have \begin{equation*} \mathcal{R}^{(n)}_t = \frac{1}{2}\partial_t(I + \mathcal{H}) \mathcal{R}^{(n)} = \frac{1}{2}(I + \mathcal{H})\mathcal{R}^{(n)}_t + [\zeta_t, \mathcal{H}]\frac{\partial_\alpha \mathcal{R}^{(n)}}{\zeta_\alpha} \end{equation*} and since the latter term is $\mathcal{O}(\epsilon^{5/2})$, it suffices to consider only the former term. Likewise, recalling that the adjoint\footnote{The adjoint $T^*$ of a linear operator $T : L^2 \to L^2$ is defined by $\int f \, T^*(g) \, d\alpha = \int g \, T(f) \, d\alpha$ for all $f, g \in L^2$.} $\mathcal{H}^*$ of the Hilbert transform satisfies the identity $\mathcal{H}^*f = -\zeta_\alpha\mathcal{H}(f/\zeta_\alpha)$, the identity $[\mathcal{H}, \partial_\alpha/\zeta_\alpha] = 0$ of Proposition \ref{HilbertCommutatorIdentities} implies that $\partial_\alpha \mathcal{H} = -\mathcal{H}^* \partial_\alpha$, and so we can write $\overline{\phi}_\alpha^{(n)}$ as \begin{equation*} \overline{\phi}_\alpha^{(n)} = \frac{1}{2}\partial_\alpha(I - \overline{\mathcal{H}})\partial_\alpha^n\overline{\rho} = \frac{1}{2}(I + \overline{\mathcal{H}}^*)\partial_\alpha^{n + 1}\overline{\rho} \end{equation*} But now, using the usual $L^2$ pairing\footnote{Here we use the real inner product $\langle f, g\rangle = \int f\,g\, d\alpha$ for $f, g \in L^2$.} $\langle, \rangle$, we have \begin{align*} \frac{1}{4}\left\langle (I + \mathcal{H})\mathcal{R}^{(n)}_t, (I + \overline{\mathcal{H}}^*)\partial_\alpha^{n + 1}\overline{\rho} \right\rangle & = \frac{1}{4}\left\langle \mathcal{H}\mathcal{R}^{(n)}_t, (I + \overline{\mathcal{H}}^*)\partial_\alpha^{n + 1}\overline{\rho} \right\rangle \\ & \; + \frac{1}{4}\left\langle \mathcal{R}^{(n)}_t, 
\overline{\mathcal{H}}^*(I + \overline{\mathcal{H}}^*)\partial_\alpha^{n + 1}\overline{\rho} \right\rangle \\ & = \frac{1}{2}\left\langle (\mathcal{H} + \overline{\mathcal{H}})\mathcal R^{(n)}_t, \overline{\phi}_\alpha^{(n)} \right\rangle \end{align*} Therefore \begin{align*} \int \mathcal{R}^{(n)}_t\overline{\phi}^{(n)}_\alpha d\alpha = \int \frac{1}{2}\overline{\phi}_\alpha^{(n)}(\mathcal{H} + \overline{\mathcal{H}})\mathcal{R}^{(n)}_t d\alpha + \int \overline{\phi}^{(n)}_\alpha[\zeta_t, \mathcal{H}]\frac{\partial_\alpha \mathcal{R}^{(n)}}{\zeta_\alpha} d\alpha, \end{align*} and so these integrals are bounded by $$CE_s^{1/2}(E_s^{1/2} + \epsilon)(E_s + \epsilon E_s^{1/2} + \epsilon^{5/2}) \leq C(E_s^2 + \epsilon E_s^{3/2} + \epsilon^2 E_s + \epsilon^{7/2})$$ From \eqref{SmallAntiholPart}, estimating as usual gives bounds on $\mathcal{R}^{(n)}_\alpha$ and $\mathcal{R}_t^{(n)}$ in $L^2$ of $C(E_s + \epsilon E_s^{1/2})$ and $C(E_s + \epsilon E_s^{1/2} + \epsilon^{5/2})$, respectively. Summing these bounds, we have that \eqref{EnergyInequalityV1} reads \begin{equation*} \frac{d\mathcal{E}_n}{dt} \leq C(E_s^2 + \epsilon E_s^{3/2} + \epsilon^2 E_s + \epsilon^{7/2}E_s^{1/2}) \end{equation*} If we try to apply the same argument to $\mathcal{F}_n$ as we just did to $\mathcal{E}_n$, we find that $\frac{d\mathcal{F}_n}{dt}$ involves half a derivative more than can be controlled by the energy, since $\mathcal{F}_n$ consists of quantities with one time derivative more than the quantities comprising $\mathcal{E}_n$. Now
$$\frac{d\mathcal{F}_n}{dt} = \int \frac{2}{\mathcal{A}}\Re\left(D_t \overline{\sigma}^{(n)}\mathcal{P}\sigma^{(n)}\right) - \frac{1}{\mathcal{A}} U_\kappa^{-1} \left(\frac{\mathfrak{a}_t}{\mathfrak{a}}\right)|D_t \sigma^{(n)}|^2 \, d\alpha,$$ which by Step 5 of \S 4.3 and Proposition \ref{NewEulerRemainderDerivBound} implies
$$\frac{d\mathcal{F}_n}{dt} \leq C(E_s^2 + \epsilon E_s^{3/2} + \epsilon^2 E_s + \epsilon^{7/2} E_s^{1/2} + \epsilon^6).$$ Hence we need only show that the quantity $\mathcal{F}_n$ itself is bounded below by $\|D_t\sigma^{(n)}\|_{L^2}^2$ up to a term of size $\mathcal{O}(\epsilon^5)$, for $n = 0, \ldots, s$. By writing $\sigma^{(n)} = \psi^{(n)} + \mathcal{S}^{(n)}$ we can use Proposition \ref{BasicEnergyInequality} to estimate \begin{align*}
\mathcal{F}_n & = \int \frac{1}{\mathcal{A}}|D_t\sigma^{(n)}|^2 + i\sigma^{(n)}\overline{\sigma}^{(n)}_\alpha d\alpha \\
& \geq \int \frac{1}{\mathcal{A}}|D_t\sigma^{(n)}|^2 d\alpha - \left|\int \psi^{(n)}\overline{\mathcal{S}}^{(n)}_\alpha + \mathcal{S}^{(n)} \overline{\psi}^{(n)}_\alpha + \mathcal{S}^{(n)}\overline{\mathcal{S}}^{(n)}_\alpha d\alpha \right| \end{align*} Now, as with $\mathcal{R}^{(n)}$, we can rewrite $$\mathcal{S}^{(n)} = -\frac{1}{2}(I + \mathcal{H})\sum_{j = 1}^n \partial_\alpha^{n - j}[\zeta_\alpha - 1, \mathcal{H}]\frac{\partial_\alpha}{\zeta_\alpha}\partial_\alpha^{j - 1}\sigma$$ $$\mathcal{S}^{(n)}_\alpha = -\frac{1}{2}(I - \mathcal{H}^*)\sum_{j = 1}^n \partial_\alpha^{n - j + 1}[\zeta_\alpha - 1, \mathcal{H}]\frac{\partial_\alpha}{\zeta_\alpha}\partial_\alpha^{j - 1}\sigma$$ Since $\psi^{(n)} = \sigma^{(n)} - \mathcal{S}^{(n)}$, the term $\overline{\psi}^{(n)}_\alpha$ contains one more spatial derivative of $\sigma$ than the energy provides. However, if we integrate by parts and use Step 5 of \S 4.3, we can estimate \begin{align*}
\mathcal{F}_n & \geq \int \frac{1}{\mathcal{A}}|D_t\sigma^{(n)}|^2 d\alpha - \left|\int \psi^{(n)}\overline{\mathcal{S}}^{(n)}_\alpha - \mathcal{S}^{(n)}_\alpha \overline{\psi}^{(n)} + \mathcal{S}^{(n)}\overline{\mathcal{S}}^{(n)}_\alpha d\alpha \right| \\
& \geq \int \frac{1}{\mathcal{A}}|D_t\sigma^{(n)}|^2 d\alpha - C\delta( E_s^{1/2} + \epsilon^{5/2})^2 \\
& \geq \int \frac{1}{\mathcal{A}}|D_t\sigma^{(n)}|^2 d\alpha - C\delta(E_s + \epsilon^5) \end{align*} If we set \begin{equation}\label{FullEnergyDefinition} \mathcal{E} = \sum_{n = 0}^s (\mathcal{E}_n + \mathcal{F}_n) \end{equation}
and if we choose $\delta$ sufficiently small, then we have by \eqref{EnergyEsEquivalence} that \begin{align*} E_s^{1/2} & \leq C\mathcal{E}^{1/2} + C\epsilon^{5/2}. \end{align*} Thus if we choose $\epsilon_0$ and $\delta$ still smaller, the inequality $$\sum_{n = 0}^s \left(\frac{d\mathcal{E}_n}{dt} + \frac{d\mathcal{F}_n}{dt}\right) \leq C(E_s^2 + \epsilon E_s^{3/2} + \epsilon^2 E_s + \epsilon^{7/2} E_s^{1/2}+\epsilon^6)$$ yields the following lemma:
\begin{lemma}\label{EnergyInequality} Let $\mathcal{E}$ be defined as in \eqref{FullEnergyDefinition}. Then there exist an $\epsilon_0 > 0$ and a $\delta > 0$ so that if \eqref{ZetaLocalAPrioriBound} holds, then there is a constant $C = C(\epsilon_0, \delta)$ so that for all $\epsilon < \epsilon_0$, \begin{enumerate} \item[(1)]{$E_s^{1/2} \leq C(\mathcal{E}^{1/2} + \epsilon^{5/2})$} \item[(2)]{$\frac{d\mathcal{E}}{dt} \leq C(\mathcal{E}^2 + \epsilon \mathcal{E}^{3/2} + \epsilon^2 \mathcal{E} + \epsilon^{7/2}\mathcal{E}^{1/2} + \epsilon^6)$} \end{enumerate} \end{lemma}
\subsubsection*{A Priori Bounds on the Remainder Energy}
Now we can derive a priori bounds from the energy inequality derived in the last section.
\begin{proposition}\label{RemainderEnergyAPrioriBound}
Let $s \geq 4$, $\mathscr{T} > 0$, and $B_0\in H^{s+7}$ be given, and let $\epsilon_0$, $\delta$ be given. Let $T_0$ be a time so that \eqref{ZetaLocalAPrioriBound} holds. Suppose further that $\mathcal{E}(0) = M_0^2\epsilon^3$. Then there is a possibly smaller $\epsilon_0 = \epsilon_0(\mathscr{T}, M_0, \delta, \|B_0\|_{H^{s + 7}})$ so that for all $0 < \epsilon < \epsilon_0$ and $0 \leq t \leq \min(T_0, \epsilon^{-2}\mathscr{T})$ we have $\mathcal{E}(t) \leq C\epsilon^3$, where the constant $C = C(\mathscr{T}, M_0, \delta, \|B_0\|_{H^{s + 7}})$. \end{proposition}
\begin{proof} Let $C_0$ be the constant appearing in Lemma~\ref{EnergyInequality}. Define $\mathcal{S}(T) = \sup_{0 \leq t \leq T} \mathcal{E}(t)$. Then for any $T \in [0, \min(T_0, \epsilon^{-2}\mathscr{T})]$ we have for all $t \in [0, T]$ that \begin{align*} \frac{d\mathcal{E}}{dt}(t) & \leq C_0\left(\mathcal{E}^2(t) + \epsilon \mathcal{E}^{3/2}(t) + \epsilon^2 \mathcal{E}(t) + \epsilon^{7/2} \mathcal{E}^{1/2}(t) + \epsilon^6\right) \\ & \leq C_0\left(\mathcal{S}(T) + \epsilon \mathcal{S}(T)^{1/2} + \epsilon^2\right)\mathcal{E}(t) + C_0(\epsilon^{7/2} \mathcal{S}(T)^{1/2} + \epsilon^6) \end{align*} Solving this differential inequality for $0 \leq t \leq T$ gives $$\mathcal{E}(t) \leq \left(\mathcal{E}(0) + \frac{\epsilon^{7/2}\mathcal{S}(T)^{1/2} + \epsilon^6}{\mathcal{S}(T) + \epsilon \mathcal{S}(T)^{1/2} + \epsilon^2}\right)e^{C_0(\mathcal{S}(T) + \epsilon \mathcal{S}(T)^{1/2} + \epsilon^2)t}$$ and so taking the supremum over $[0, T]$ gives for all $T \leq \min(T_0, \epsilon^{-2}\mathscr{T})$ that \begin{equation}\label{EnergySupBound} \mathcal{S}(T) \leq \left(\mathcal{E}(0) + \frac{\epsilon^{7/2}\mathcal{S}(T)^{1/2} + \epsilon^6}{\mathcal{S}(T) + \epsilon \mathcal{S}(T)^{1/2} + \epsilon^2}\right)e^{C_0(\mathcal{S}(T) + \epsilon \mathcal{S}(T)^{1/2} + \epsilon^2)T} \end{equation}
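Here the differential inequality is integrated by the usual integrating factor argument: writing $a = C_0(\mathcal{S}(T) + \epsilon \mathcal{S}(T)^{1/2} + \epsilon^2)$ and $b = C_0(\epsilon^{7/2}\mathcal{S}(T)^{1/2} + \epsilon^6)$, we have $$\frac{d}{dt}\left(e^{-at}\left(\mathcal{E}(t) + \frac{b}{a}\right)\right) = e^{-at}\left(\frac{d\mathcal{E}}{dt} - a\mathcal{E} - b\right) \leq 0,$$ so that $\mathcal{E}(t) \leq (\mathcal{E}(0) + b/a)e^{at}$; note that the constant $C_0$ cancels in the quotient $b/a$.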
We now begin a continuity argument. Let $M_1$ be the positive root of the equation $\frac{1}{2}e^{-3C_0\mathscr{T}}M_1 = M_0 + \sqrt{M_1} + 1$. If $\mathcal{S}(\min(T_0, \epsilon^{-2}\mathscr{T})) \leq M_1 \epsilon^3$ then we are done. If not, let $T^* < \min(T_0, \epsilon^{-2}\mathscr{T})$ be the first time at which $\mathcal{S}(T^*) = M_1\epsilon^3$. Choose $\epsilon_0$ so that $\epsilon_0 M_1 \leq 1$. Then we have from \eqref{EnergySupBound} that \begin{align*} \mathcal{S}(T^*) & \leq \left(\mathcal E(0) + \frac{\epsilon^5 \sqrt{M_1} + \epsilon^6}{\epsilon^2}\right)e^{C_0(M_1\epsilon^3 + \sqrt{M_1}\epsilon^{5/2} + \epsilon^2)\epsilon^{-2}\mathscr{T}} \\ & \leq (M_0 + \sqrt{M_1} + 1)e^{3C_0\mathscr{T}}\epsilon^3 \\ & \leq \frac{1}{2}M_1\epsilon^3, \end{align*} which contradicts the definition of $T^*$. \end{proof}
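We remark that the constant $M_1$ in the continuity argument is well defined: setting $x = \sqrt{M_1}$ and $a = \frac{1}{2}e^{-3C_0\mathscr{T}}$, the defining equation becomes $ax^2 - x - (M_0 + 1) = 0$, whose unique positive root is $$x = \frac{1 + \sqrt{1 + 4a(M_0 + 1)}}{2a},$$ so that $M_1 = x^2$ exists and depends only on $C_0$, $\mathscr{T}$, and $M_0$.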
\section{Long time existence of wave packet-like solutions}
We would like to show that for wave packet-like data, the solution of the water wave system \eqref{OldEuler}-\eqref{ztIsAntihol} exists on the $O(\epsilon^{-2})$ time scale, and is well approximated by the wave packet whose modulation evolves according to NLS. Thus far we have found a globally existing approximation $\tilde{\zeta}$, as well as a suitable a priori bound on the energy of the remainder $r$ for $O(\epsilon^{-2})$ time scales. Since $\tilde \zeta$ does not in general satisfy the water wave system, the wave packet data $(\tilde\zeta(0), \tilde D_t\tilde \zeta(0), \tilde D_t^2\tilde \zeta(0))$ cannot be taken as the initial data for the water wave system \eqref{OldEuler}-\eqref{ztIsAntihol}.
In what follows, we will show that there is data for the water wave system that is within $\mathcal O(\epsilon^{3/2})$ of the wave packet $(\tilde\zeta(0), \tilde D_t\tilde \zeta(0), \tilde D_t^2\tilde \zeta(0))$. Moreover, for all such data, the solution of the system \eqref{OldEuler}-\eqref{ztIsAntihol} exists on the $O(\epsilon^{-2})$ time scale. The a priori bound on $r$ gives an estimate of order $\mathcal O(\epsilon^{3/2})$ for the error between $\zeta$ and the wave packet $\tilde\zeta$ for times on the $O(\epsilon^{-2})$ scale. The appropriate wave packet approximation to $z$ is then obtained upon changing coordinates back to the Lagrangian variable.
\subsection{Construction of Appropriate Initial Data}
Notice that we can parametrize the initial interface $z = z(\cdot,0)$ arbitrarily, and that we are only concerned with data for which $z_\alpha(\cdot,0) - 1$ is $\mathcal{O}(\epsilon^{1/2})$. For any initial interface that is a small perturbation of the $x$-axis in this sense, $\kappa(\cdot,0): \mathbb R\to \mathbb R$ is a diffeomorphism (c.f. Lemma~\ref{KappaDerivativeBound}). Hence we may without loss of generality assume that $z = z(\cdot, 0)$ is initially parametrized so that $\kappa(\alpha,0) = \alpha$, and hence that $z(\cdot,0) = \zeta(\cdot,0)$.
In order for $(\zeta_0, v_0, w_0) = (\zeta(0), D_t\zeta(0), D^2_{t}\zeta(0)) = (z(0), z_t(0), z_{tt}(0))$ to be data for a solution $z$ of the water wave system \eqref{OldEuler}-\eqref{ztIsAntihol}, we must enforce the compatibility conditions $(I - \mathcal{H}_{\zeta_0})\overline{v}_0 = 0$ and $w_0 := i \mathcal{A}_0 \partial_\alpha \zeta_0 - i$, with the formula for $\mathcal{A}_0$ given through \eqref{AFormula} by \begin{equation}\label{a0} (I - \mathcal{H}_{\zeta_0})(\mathcal{A}_0 - 1) = i[w_0, \mathcal{H}_{\zeta_0}]\frac{\partial_\alpha\overline{\xi}_0}{\partial_\alpha \zeta_0} + i[v_0, \mathcal{H}_{\zeta_0}]\frac{\partial_\alpha \overline{v}_0}{\partial_\alpha \zeta_0},\end{equation} where $\zeta_0 := \xi_0+\alpha$. We therefore define the manifold of initial data for \eqref{OldEuler}-\eqref{ztIsAntihol} or for \eqref{NewEuler}-\eqref{DtXiIsAntihol} by \begin{equation*} \begin{aligned}
\mathscr{A}^s = \{(\xi_0, v_0, w_0): (|D_\alpha|^{1/2}\xi_0,v_0, w_0)\in H^{s+1/2} \times H^{s + 1}\times H^{s+1/2},\; \\\xi_0= \overline{\mathcal{H}}_{\xi_0+ \alpha} \xi_0, \;(I - \overline{\mathcal{H}}_{\xi_0 + \alpha})v_0 = 0,\;
w_0=i\mathcal A_0(\partial_\alpha \xi_0+1) - i\}
\end{aligned}
\end{equation*}
with $\mathcal A_0$ defined by \eqref{a0}.
In the remainder of this section let $s \geq 6$ and $k > 0$ be fixed, and let an arbitrary initial envelope $B_0 \in H^{s + 7}$ be given. By Theorem \ref{NLSWellPosedness}, for any $\mathscr{T} > 0$ there is a $B \in C([0, \mathscr{T}]; H^{s + 7})$ which solves \eqref{NLS} with initial data $B(0) = B_0$. Using \eqref{TildeZetaFormula} we can construct, using this $B$, an approximate profile $\tilde{\zeta} \in C([0, \mathscr{T}\epsilon^{-2}]; H^{s + 6})$ satisfying \eqref{NLSGlobalBound} which solves the equations \eqref{NewEuler}-\eqref{XiIsAntihol}-\eqref{DtNewEuler}-\eqref{DtXiIsAntihol} up to a residual of size $O(\epsilon^4)$, provided the initial profile $\tilde{\zeta}(0)$ is calculated through $B_0$.
As we observed above, we cannot simply take $(\tilde{\xi}(0), \tilde{D}_t \tilde{\zeta}(0), \tilde{D}_t^2 \tilde{\zeta}(0))$ as our initial data for \eqref{NewEuler}-\eqref{DtXiIsAntihol}, as these may not be in the manifold $\mathscr A^{s}$.
Since we found in Proposition \ref{RemainderEnergyAPrioriBound} that an $\mathcal{O}(\epsilon^{3/2})$ error is acceptable, we construct data for $(\zeta - \alpha, D_t\zeta, D_t^2 \zeta)$ which lie in the manifold $\mathscr A^s$ and which are also $\mathcal{O}(\epsilon^{3/2})$ away from $(\tilde{\xi}(0), \tilde{D}_t\tilde{\zeta}(0), \tilde{D}_t^2 \tilde{\zeta}(0))$.
\begin{lemma}\label{InitialDataConstruction}
For sufficiently small $\epsilon_0(\|B_0\|_{H^{s + 7}}) > 0$, there exist functions $\xi_0 \in H^{s + 6}$ and $v_0 \in H^{s + 4}$ with $\zeta_0 := \alpha + \xi_0$ such that for all $\epsilon < \epsilon_0$ the following properties hold: \begin{enumerate} \item{$\xi_0 = \frac{1}{2}(I + \overline{\mathcal{H}}_{\zeta_0})\tilde{\xi}(0)$.}
\item{$\|\xi_0 - \tilde{\xi}(0)\|_{H^{s + 6}} \leq C(\|B_0\|_{H^{s + 7}})\epsilon^{3/2}$.}
\item{$v_0 := \frac12(I + \overline{\mathcal{H}}_{\zeta_0})\tilde{D}_t \tilde{\zeta}(0)$ satisfies $\|v_0 - \tilde{D}_t\tilde{\zeta}(0)\|_{H^{s + 4}} \leq C(\|B_0\|_{H^{s + 7}})\epsilon^{3/2}$.} \item{For $(\xi_0, v_0)$ as constructed in Parts (1)--(3), $w_0 := i\mathcal{A}_0\partial_\alpha \zeta_0 - i$, with $\mathcal A_0$ calculated by \eqref{a0},
satisfies $\|w_0 - \epsilon(i\omega)^2\zeta^{(1)}(0)\|_{H^{s+4}} \leq C\epsilon^{3/2}$.} \end{enumerate} \end{lemma}
\begin{proof} We prove Part 1 by an iteration argument. Define a sequence of functions $g_n(\alpha, t), n = -1, 0, 1, \ldots$ along with $\gamma_n(\alpha, t) := \alpha + g_n(\alpha, t)$ by setting $g_{-1} = 0$ and for $n \geq -1$, \begin{equation}\label{RecursiveDefnInitialData}g_{n + 1} = \frac{1}{2}(I + \overline{\mathcal{H}}_{\gamma_n})\tilde{\xi}(0)
\end{equation} Observe first that $g_0 = \frac{1}{2}(I + \overline{\mathcal{H}}_0)\tilde{\xi}(0)$ and so $\|g_0\|_{H^{s + 6}} \leq C(\|B_0\|_{H^{s + 7}})\epsilon^{1/2}$. Next, as in the proof of Lemma \ref{DiffHilbertBoundPart2}, we can write \begin{equation*} (\mathcal{H}_{\gamma_n} - \mathcal{H}_{\gamma_{n - 1}})f = \frac{1}{\pi i}\int \log\left(1 + \frac{(g_n - g_{n - 1})(\alpha) - (g_n - g_{n - 1})(\beta)}{\gamma_{n - 1}(\alpha) - \gamma_{n - 1}(\beta)}\right) f_\beta(\beta) \, d\beta \end{equation*} \begin{equation*} = \frac{1}{\pi i} \int \biggl(\frac{(g_n^\prime(\beta) - g_{n - 1}^\prime(\beta))}{\gamma_n(\alpha) - \gamma_n(\beta)} - \frac{\gamma_{n - 1}^\prime(\beta)\left((g_n - g_{n - 1})(\alpha) - (g_n - g_{n - 1})(\beta)\right)}{(\gamma_n(\alpha) - \gamma_n(\beta))(\gamma_{n - 1}(\alpha) - \gamma_{n - 1}(\beta))}\biggr) f(\beta) \, d\beta
\end{equation*} From this formula and Proposition \ref{SingIntSobolevEstimates} we have the estimate $$\|(\mathcal{H}_{\gamma_n} - \mathcal{H}_{\gamma_{n - 1}})f\|_{H^{s + 6}} \leq C\left(\|g_n\|_{H^{s + 6}}, \|g_{n - 1}\|_{H^{s + 6}}\right)\|g_n - g_{n - 1}\|_{H^{s + 6}} \|f\|_{H^{s + 6}},$$ provided that $\gamma_n$ and $\gamma_{n - 1}$ obey the chord-arc condition. Indeed, there exists some $\delta \in (0, \frac{1}{2}]$ so that if $\|g_n\|_{H^{s + 6}}, \|g_{n - 1}\|_{H^{s + 6}} \leq \delta$, then $\gamma_n$ and $\gamma_{n - 1}$ satisfy the chord-arc condition and the operator norm satisfies $\|\mathcal{H}_{\gamma_n} - \mathcal{H}_{\gamma_{n - 1}}\|_{H^{s + 6} \to H^{s + 6}} \leq C_1\|g_n - g_{n - 1}\|_{H^{s + 6}}$, where $C_1$ is a universal constant. Choose $\epsilon_0$ so small that $C_1\|\tilde{\xi}(0)\|_{H^{s + 6}} \leq \delta$ and $\|g_0\|_{H^{s + 6}} \leq \frac{1}{2}\delta$.
It now suffices to prove the following statement by induction: For every $n \geq 0$, $$\|g_{n + 1} - g_n\|_{H^{s + 6}} \leq \frac{1}{2}\delta\|g_n - g_{n - 1}\|_{H^{s + 6}} \qquad \text{and} \qquad \|g_n\|_{H^{s + 6}} \leq \delta$$ By our choice of $\epsilon_0$ we have already shown the case $n = 0$. If we assume the above statement is true for all integers $k = 0, 1, \ldots, n$, note that \begin{align*}
\|g_{n + 1} - g_n\|_{H^{s + 6}} & = \frac{1}{2}\|(\overline{\mathcal{H}}_{\gamma_n} - \overline{\mathcal{H}}_{\gamma_{n - 1}})\tilde{\xi}(0)\|_{H^{s + 6}} \\
& \leq \frac{1}{2}\|\overline{\mathcal{H}}_{\gamma_n} - \overline{\mathcal{H}}_{\gamma_{n - 1}}\|_{H^{s + 6} \to H^{s + 6}} \cdot \|\tilde{\xi}(0)\|_{H^{s + 6}} \\
& \leq \frac{1}{2}\delta\|g_n - g_{n - 1}\|_{H^{s + 6}}, \end{align*} from which the induction statement follows immediately.
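In particular, iterating the contraction estimate gives $\|g_{j + 1} - g_j\|_{H^{s + 6}} \leq (\delta/2)^{j + 1}\|g_0\|_{H^{s + 6}}$, so that for $m > n$, $$\|g_m - g_n\|_{H^{s + 6}} \leq \sum_{j = n}^{m - 1} \left(\frac{\delta}{2}\right)^{j + 1}\|g_0\|_{H^{s + 6}} \leq \frac{(\delta/2)^{n + 1}}{1 - \delta/2}\|g_0\|_{H^{s + 6}}.$$ Hence $(g_n)$ is a Cauchy sequence in $H^{s + 6}$ and converges to a limit $\xi_0$; passing to the limit in \eqref{RecursiveDefnInitialData}, using the Lipschitz dependence of $\mathcal{H}_\gamma$ on $\gamma$ established above, yields the fixed-point relation of Part 1.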
To prove Part 2, we note that since $\|\xi_0\|_{H^{s + 6}}$ and $\|\tilde{\xi}(0)\|_{H^{s + 6}}$ do not exceed $\delta$, we can estimate that \begin{align*}
\|\xi_0 - \tilde{\xi}(0)\|_{H^{s + 6}} & = \frac{1}{2}\|(I - \overline{\mathcal{H}}_{\zeta_0})\tilde{\xi}(0)\|_{H^{s + 6}} \\
& \leq \frac{1}{2}\|(\overline{\mathcal{H}}_{\zeta_0} - \overline{\mathcal{H}}_{\tilde{\zeta}(0)})\tilde{\xi}(0)\|_{H^{s + 6}} + \|(I- \overline{\mathcal{H}}_{\tilde{\zeta}(0)})\tilde{\xi}(0)\|_{H^{s + 6}} \\
& \leq \delta\|\xi_0 - \tilde{\xi}(0)\|_{H^{s + 6}} + C\epsilon^{3/2} \end{align*} from which Part 2 follows. Since the construction of $v_0$ is determined by $\zeta_0$, Part 3 is now shown in the same way as was Part 2 once we observe that $\tilde{D}_t\tilde{\zeta}(0) \in H^{s + 4}$ and $\overline{\mathcal{H}}_{\zeta_0}$ is bounded from $H^{s + 4}$ to $H^{s + 4}$.
We now prove Part 4. By the definition of $w_0$ we have \begin{align*} w_0 - \epsilon(i\omega)^2\zeta^{(1)}(0) & = i\mathcal{A}_0\partial_\alpha \zeta_0 - i - \epsilon(i\omega)^2\zeta^{(1)}(0) \\ & = i(\mathcal A_0 - 1)\partial_\alpha \zeta_0 + i\left(\partial_\alpha \xi_0 - \epsilon (i k)\zeta^{(1)}(0)\right)
\end{align*} Since we are assuming $\xi_0$ and $v_0$ are constructed as above, we can write $v_0 = (v_0 - \epsilon(i\omega)\zeta^{(1)}(0)) + \epsilon(i\omega)\zeta^{(1)}(0) \in H^{s + 4}$ and $\partial_\alpha \xi_0 = (\partial_\alpha \xi_0 - \epsilon(ik)\zeta^{(1)}(0)) + \epsilon(ik)\zeta^{(1)} (0)\in H^{s + 5}$ in the above formula for $\mathcal{A}_0 - 1$. As usual, we can isolate the $O(\epsilon^2)$ leading term and see that it vanishes by a multiscale calculation, and what remains gives us the estimate $$\|\mathcal{A}_0 - 1\|_{H^{s + 4}} \leq C\epsilon^{3/2} + C\epsilon\|w_0 -\epsilon (i\omega)^2\zeta^{(1)}(0)\|_{H^{s + 4}}$$ But then we have by the above that $\|w_0 - \epsilon(i\omega)^2\zeta^{(1)}(0)\|_{H^{s + 4}} \leq C\epsilon^{3/2}$ for a sufficiently small choice of $\epsilon_0$. \end{proof}
\begin{definition} We call $(\xi_0, v_0, w_0)$ $B_0$-\textbf{admissible} initial data
if $(\xi_0, v_0, w_0) \in \mathscr{A}^s$ and there is a constant $C$ depending only on $\|B_0\|_{H^{s + 7}}$ so that $$\left\|(|D_\alpha|^{1/2}\xi_0, v_0, w_0) - (\epsilon |D_\alpha|^{1/2}\zeta^{(1)}(0), \epsilon \zeta^{(1)}_{t_0}(0), \epsilon \zeta^{(1)}_{t_0 t_0}(0))\right\|_{H^{s + 1/2} \times H^{s + 1} \times H^{s + 1/2}} \leq C\epsilon^{3/2}$$ \end{definition}
Recall from \eqref{FullEnergyDefinition} that \begin{align*} \mathcal{E} & = \sum_{n = 0}^s (\mathcal{E}_n + \mathcal{F}_n) \\
& \leq C\sum_{n = 0}^s(\|D_t\partial_\alpha^n \rho\|_{L^2}^2 + \|D_t\partial_\alpha^n \sigma\|_{L^2}^2) + \||D|^{1/2}\rho\|_{H^{s + 1/2}}^2 + \|\sigma\|_{H^{s + 1}}^2. \end{align*} It is clear that for $B_0$-\textbf{admissible} initial data, we have \begin{equation}\label{energyzero} \mathcal E(0)\le C \epsilon^{3}. \end{equation}
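A heuristic accounting for \eqref{energyzero} (with $\rho$ and $\sigma$ the remainder quantities entering $\mathcal{E}$): admissibility places the data within $C\epsilon^{3/2}$ of the approximate wave packet in the relevant norms, and every term in $\mathcal{E}$ is quadratic in these remainders, so

```latex
\mathcal{E}(0)
\leq C\sum_{n = 0}^s \left(\|D_t\partial_\alpha^n \rho(0)\|_{L^2}^2 + \|D_t\partial_\alpha^n \sigma(0)\|_{L^2}^2\right)
+ \||D|^{1/2}\rho(0)\|_{H^{s + 1/2}}^2 + \|\sigma(0)\|_{H^{s + 1}}^2
\leq C\left(\epsilon^{3/2}\right)^2 = C\epsilon^3.
```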
\subsection{Long-Time Existence of $\zeta$ and $z$}
In this section we will make rigorous the existence and uniqueness of the solutions $z$ and $\zeta$ on the appropriate $O(\epsilon^{-2})$ time scales. We begin with the following local well-posedness result (c.f. \cite{WuLocal2DWellPosed}, \cite{WuAlmostGlobal2D}):
\begin{theorem}\label{zLocalWellPosed} Let $n \geq 5$ be given. Suppose that initial data $\xi_0$, $v_0$, $w_0$ are given so that $\partial_\alpha z(0) - 1 = \partial_\alpha \xi_{0}$ is in $H^{n - 1/2}$, $z_t(0) = v_0$ is in $H^{n + 1/2}$, $ z_{tt}(0)=w_0 $ is in $H^n$; $\xi_0$, $v_0$, $w_0$ satisfy the water wave system: i.e. $\bar v_0={\mathcal H}_{z(0)}\bar v_0$, and $w_0 = i\mathfrak{a}_0\partial_\alpha z(0) - i$ for some real valued function $\mathfrak{a}_0$. Suppose further that $z(0) = \alpha + \xi_0(\alpha)$, $\alpha\in \mathbb{R}$ defines a chord-arc curve: i.e. there exists $\nu>0$, such that
$$|\alpha + \xi_0(\alpha) - \beta - \xi_0(\beta)| \geq \nu |\alpha - \beta|,\qquad\text{for all } \alpha,\beta\in \mathbb R.$$
Then there exists a $T_0> 0$ so that the system \eqref{OldEuler}-\eqref{ztIsAntihol} with initial data $z(0) = \xi_0 + \alpha$, $z_t(0) = v_0$, $z_{tt}(0)=w_0 $ has a unique solution $z(\alpha, t)$ for $t\in [0, T_0]$ with the property that there exist constants $C = C(T_0, \|\partial_\alpha\xi_{0}\|_{H^{n - 1/2}}, \|v_0\|_{H^{n + 1/2}}, \|w_0\|_{H^{n }}, \nu)$ and $\mu>0$ such that $$\|(z_\alpha - 1, z_t, z_{tt})\|_{C([0, T_0]; H^{n - 1/2} \times H^{n + 1/2} \times H^n)} \leq C\left(\|\partial_\alpha\xi_{0}\|_{H^{n - 1/2}} + \|v_0\|_{H^{n + 1/2}} + \|w_0\|_{H^{n }}\right),$$
and $|z(\alpha,t)-z(\beta,t)|\ge \mu |\alpha-\beta|$ for all $ \alpha,\beta\in \mathbb R, \ t\in [0, T_0]$.
Moreover, if $T^*$ is the supremum over all such $T_0$, then either $T^* = \infty$ or \begin{equation}\label{zBlowUpQuantity}
\lim_{t \nearrow T^{*}}\left( \|(z_t, z_{tt})\|_{C([0, t], H^{n} \times H^{n})} + \sup_{\alpha \neq \beta} \left|\frac{\alpha - \beta}{z(\alpha, t) - z(\beta, t)}\right| \right)=\infty \end{equation} \end{theorem}
Given this result, we take any $B_0$-admissible initial data $(\xi_0, v_0, w_0) \in \mathscr A^{s}$ and use Theorem \ref{zLocalWellPosed} to construct a solution $z = z(\alpha, t)$ on the time interval $[0, T_0]$ with $(z_\alpha(t) - 1, z_t(t), z_{tt}(t)) \in H^s \times H^{s + 1} \times H^{s + 1/2}$. Using this solution we construct the change of variables $$\kappa = \overline{z} + \frac{1}{2}(I + \mathfrak{H})(I + \mathfrak{K})^{-1}(z - \overline{z})$$ on $[0, T_0]$ as in \S 2. In order to use this change of variables to control $\zeta$ in terms of $z$, we need the following elementary calculus lemma.
\begin{lemma}\label{ChangeVarBounds}
Let $n \geq 3$, let $f \in H^n$, and let $\gamma \in H^n$ be given with $\gamma^\prime(\alpha) \geq c_0 > 0$ for all $\alpha\in \mathbb{R}$ and $\|\gamma^\prime - 1\|_{H^{n - 1}} \leq M$. Then \begin{enumerate}
\item{$\|f \circ \gamma\|_{L^2} \leq C(c_0)\|f\|_{L^2}$.}
\item{$\|f \circ \gamma\|_{H^n} \leq C(M, c_0)\|f\|_{H^n}$.} \end{enumerate} \end{lemma}
\begin{proof}
First we have $$\|f \circ \gamma\|_{L^2} = \left(\int |f \circ \gamma|^2 d\alpha\right)^{1/2} = \left(\int |f|^2 \frac{d\alpha}{\gamma^\prime} \right)^{1/2} \leq \frac{1}{\sqrt{c_0}}\|f\|_{L^2}$$ which proves (1). To prove (2), first observe that \begin{align*}
\|\partial_\alpha(f \circ \gamma)\|_{L^2} & = \|(f^\prime \circ \gamma)\gamma^\prime\|_{L^2} \\
& \leq C(c_0)\|\gamma^\prime\|_{L^\infty}\|f'\|_{L^2} \\
& \leq C(c_0)(1 + \|\gamma^\prime - 1\|_{H^2})\|f'\|_{L^2} \end{align*} Now let $n \geq 3$ and let $2 \leq j \leq n$ be an integer. By the chain and product rules there exist polynomials $p_{l, j}( \gamma^\prime, \ldots, \gamma^{(j - 1)})$ of total degree\footnote{This is meant to include both algebraic multiplicity and the number of differentiations. For instance, the term $f^{\prime \prime}(f^\prime)^2$ has total order $4$.} at most $j$ such that $$\partial_\alpha^j(f \circ \gamma) = (f' \circ \gamma)\partial_\alpha^{j - 1}(\gamma^\prime - 1) + \sum_{l = 2}^j (f^{(l)} \circ \gamma) \, p_{l, j}( \gamma^\prime, \ldots, \gamma^{(j - 1)})$$ The lemma follows upon estimating the first term with $f' \circ \gamma$ in $L^\infty$ and $\partial_\alpha^{(j - 1)}(\gamma^\prime - 1)$ in $L^2$, and the remaining terms with $f^{(l)} \circ \gamma$ in $L^2$ by (1) and $p_{l, j}$ in $L^\infty$. \end{proof}
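To illustrate the structure of the polynomials $p_{l, j}$ in the proof above, the first two cases of the identity are

```latex
\partial_\alpha^2(f \circ \gamma) = (f' \circ \gamma)\,\partial_\alpha(\gamma' - 1) + (f'' \circ \gamma)(\gamma')^2,
\qquad
\partial_\alpha^3(f \circ \gamma) = (f' \circ \gamma)\,\partial_\alpha^2(\gamma' - 1) + 3(f'' \circ \gamma)\,\gamma'\gamma'' + (f''' \circ \gamma)(\gamma')^3,
```

so that $p_{2, 2} = (\gamma')^2$, $p_{2, 3} = 3\gamma'\gamma''$, and $p_{3, 3} = (\gamma')^3$; each has total degree at most $j$.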
To use this Lemma to change from the $\zeta$ quantities back to the $z$ quantities, we need control of $\kappa_\alpha - 1$ in terms of $z_\alpha - 1$ in $H^s$.
\begin{lemma}\label{KappaDerivativeBound}
For $n \geq 3$, if $\|z_\alpha - 1\|_{C([0, T_0]; H^n)}$ is sufficiently small, then $\|\kappa_\alpha - 1\|_{C([0, T_0]; H^n)} \leq C\|z_\alpha - 1\|_{C([0, T_0]; H^n)}$. \end{lemma}
\begin{proof} Differentiating \eqref{ChangeOfVariables} with respect to $\alpha$ we get \begin{align*} \kappa_\alpha - 1 & = (\overline{z}_\alpha - 1) + \frac{1}{2}(I + \mathfrak{H})\partial_\alpha(I + \mathfrak{K})^{-1}(z - \overline{z}) + \frac{1}{2}[z_\alpha - 1, \mathfrak{H}]\frac{\partial_\alpha(I + \mathfrak{K})^{-1}(z - \overline{z})}{z_\alpha} \end{align*} Now the Lemma follows from Lemma~\ref{DoubleLayerPotentialArgument} and the $H^n$ boundedness of the operator $\mathfrak{H}$. \end{proof}
Since we have chosen initial data of size $\mathcal{O}(\epsilon^{1/2})$, by Theorem \ref{zLocalWellPosed} there is an interval $[0, T_0]$ such that for all $t\in [0, T_0]$, both $\|z_\alpha(t) - 1\|_{H^{s}}$ and $\|z_t(t)\|_{H^{s}}$ are of order $\mathcal{O}(\epsilon^{1/2})$. Also, by Lemma \ref{KappaDerivativeBound}, $\|\kappa_\alpha - 1\|_{H^{s}}$ is of order $\mathcal{O}(\epsilon^{1/2})$. We can therefore choose $\epsilon_0>0$ so small that for all $\epsilon<\epsilon_0$ we have $\|\kappa_\alpha - 1\|_{L^\infty} \leq \frac{1}{2}$ and $\|\kappa_\alpha - 1\|_{H^s} \leq 1$. Applying Lemma \ref{ChangeVarBounds}, we can choose $\epsilon_0$ sufficiently small so that \begin{align*}
\|\zeta_\alpha(t) - 1\|_{H^{s}} & \leq C\left\|\frac{z_\alpha}{\kappa_\alpha} - 1\right\|_{H^{s}} \\
& \leq C\|z_\alpha - 1\|_{H^{s }} + C\|\kappa_\alpha - 1\|_{H^{s}} \\ & \leq \frac{1}{2}\delta \end{align*} and
$$\|D_t \zeta(t)\|_{H^{s + 1}} = \|z_t \circ \kappa^{-1}(t)\|_{H^{s + 1}} \leq \frac{1}{2}\delta$$
for all times $t\in [0, T_0]$, where $\delta$ is the quantity required by \eqref{ZetaLocalAPrioriBound}.
This now justifies the a priori bound \eqref{ZetaLocalAPrioriBound} on $[0, T_0]$. Since we now legitimately have such a bound, all of the work through Proposition \ref{RemainderEnergyAPrioriBound} now holds on $[0, T_0]$ for $\delta$ and $\epsilon_0$ chosen sufficiently small. We are now ready to prove the main
\begin{theorem}\label{MainResult}
Let $s \geq 6$ and $k > 0$ be given. Let $B_0 \in H^{s + 7}$, and $\mathscr{T} > 0$ be given. Denote by $B(X, T)$ the solution of \eqref{NLS} with initial data $B(0) = B_0$, and let $\zeta^{(1)}$ be defined as in \eqref{Zeta1Formula}. Then there exists an $\epsilon_0 = \epsilon_0(\|B_0\|_{H^{s + 7}}) > 0$ so that for all $\epsilon < \epsilon_0$ the following holds: there exists initial data $(\xi_0, v_0, w_0) \in \mathscr{A}^s$ for the system \eqref{OldEuler}-\eqref{ztIsAntihol} satisfying $$\|(|D_\alpha|^{1/2}\xi_0, v_0, w_0) - (\epsilon |D_\alpha|^{1/2} \zeta^{(1)}(0), \epsilon \zeta^{(1)}_{t}(0), \epsilon \zeta^{(1)}_{tt}(0))\|_{H^{s + 1/2} \times H^{s + 1} \times H^{s + 1/2}} \leq M_0\epsilon^{3/2},$$ and for all such initial data, there exists a possibly smaller $\epsilon_0 = \epsilon_0(\|B_0\|_{H^{s + 7}}, \mathscr{T}, M_0) > 0$ so that the system \eqref{OldEuler}-\eqref{ztIsAntihol} has a unique solution $z(\alpha, t)$ with $\left(|D_\alpha|^{1/2}(z - \alpha), z_t, z_{tt}\right)$ in the space $C([0, \mathscr{T}\epsilon^{-2}]; H^{s + 1/2} \times H^{s + 1} \times H^{s + 1/2})$ satisfying \begin{equation}\label{errorestimate} \begin{aligned}
\|(\zeta_\alpha(t) - 1, D_t\zeta(t), D_t^2\zeta(t)) &- (\epsilon \zeta^{(1)}_{\alpha}(t), \epsilon \zeta^{(1)}_{t}(t), \epsilon \zeta^{(1)}_{tt}(t))\|_{H^{s} \times H^{s} \times H^s}
\\& \le C(\|B_0\|_{H^{s + 7}}, \mathscr{T}, M_0)\epsilon^{3/2} \end{aligned} \end{equation} for all $0 \leq t \leq \epsilon^{-2}\mathscr{T}$. \end{theorem}
\begin{proof}
Given our initial data, we have shown that there is some time interval $[0, T_0]$ on which a solution to \eqref{OldEuler}-\eqref{ztIsAntihol} exists with that initial data. We have also shown that for sufficiently small $\epsilon_0$ the a priori bound \eqref{ZetaLocalAPrioriBound} on $\zeta$ holds and $\kappa$ satisfies $\|\kappa_\alpha - 1\|_{L^\infty} \leq \frac{1}{2}$ and $\|\kappa_\alpha - 1\|_{H^s} \leq 1$ on $[0, T_0]$. Now let $[0, T^*]$ be the maximal such interval contained in $[0, \mathscr T\epsilon^{-2}]$. We will show in what follows that $T^* = \mathscr T\epsilon^{-2}$.
We assume now $T^* < \mathscr T\epsilon^{-2}$ for otherwise we are done.
First we have by \eqref{NLSGlobalBound}, \eqref{energyzero}, the estimates in Section 4 and Proposition \ref{RemainderEnergyAPrioriBound} that for all $t \in [0, T^*]$, \begin{equation}\label{zetaestimate} \begin{aligned}
\|D_t \zeta(t)\|_{H^{s}} + \|\zeta_\alpha(t) - 1\|_{H^{s}} +& \|D_t^2 \zeta(t)\|_{H^{s}} \leq \|\tilde{D_t}\tilde{\zeta}(t)\|_{H^{s}} + \|\tilde{\xi}_\alpha(t)\|_{H^{s}} + \|\tilde{D}_t^2 \tilde{\zeta}(t)\|_{H^{s}} \\
& \quad + \|(D_t - \tilde{D}_t)\tilde{\zeta}(t)\|_{H^s} + \|(D_t^2 - \tilde{D}_t^2)\tilde{\zeta}(t)\|_{H^s} \\ & \quad + C(\mathcal{E}^{1/2} + \epsilon^{5/2}) \\ & \leq C\epsilon^{1/2}. \end{aligned} \end{equation} In particular, this estimate holds with a constant $C$ independent of $T^*$.
In order to use this bound on $\zeta$ to in turn control $z$, we would like to show that the change of variables $\kappa$ can be constructed in terms of $\zeta$ so that it is controlled independently of $T^*$. This will imply that there are similar a priori estimates for $z$, and so the long-time existence with appropriate regularity will then follow from the blow-up criterion of Theorem \ref{zLocalWellPosed}.
We know $\kappa(\alpha,t)$ satisfies \begin{equation}\label{KappaIVP} \begin{cases} \kappa_t(\alpha, t) = b(\kappa(\alpha,t), t) \\ \kappa(\alpha,0) = \alpha\end{cases} \end{equation} with $b$ determined through \eqref{bFormula}. Writing \eqref{KappaIVP} in integral form, differentiating with respect to $\alpha$, and using Lemma \ref{ChangeVarBounds} then gives the bound \begin{align*}
\|\kappa_\alpha(t) - 1\|_{H^{s - 1}} & \leq \int_0^t \|b_\alpha(\kappa(\tau), \tau)\|_{H^{s - 1}}\left(1 + \|\kappa_\alpha(\tau) - 1\|_{H^{s - 1}}\right) d\tau \\
& \leq C\epsilon^{1/2}\left(1 + \|\kappa_\alpha(t) - 1\|_{C([0, T^*]; H^{s - 1})}\right) \end{align*} Taking the supremum over $0 \leq t \leq T^*$ and choosing $\epsilon_0$ sufficiently small then yield
\begin{equation}\label{kappaestimate}
\|\kappa_\alpha - 1\|_{C([0, T^*]; H^{s - 1})}\leq C\epsilon^{1/2}
\end{equation}
where the constant $C$ depends on $\mathscr{T}$, and is independent of $T^*$.
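The absorption step here is elementary arithmetic: abbreviating $A = \|\kappa_\alpha - 1\|_{C([0, T^*]; H^{s - 1})}$, the preceding bound reads

```latex
A \leq C\epsilon^{1/2}(1 + A)
\quad \Longrightarrow \quad
A \leq \frac{C\epsilon^{1/2}}{1 - C\epsilon^{1/2}} \leq 2C\epsilon^{1/2}
\qquad \text{once } C\epsilon_0^{1/2} \leq \tfrac{1}{2}.
```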
Now on $[0, T^*]$, we have that $\zeta(\kappa(\alpha,t), t) = z(\alpha, t)$.
Hence if we apply Lemma \ref{ChangeVarBounds} we have for $t\in [0, T^*]$, \begin{align*}
\|z_\alpha(t)-1\|_{H^{s-1}}&+\|z_t(t)\|_{H^s} + \|z_{tt}(t)\|_{H^{s}} \\
& \leq C\left(\|\zeta_\alpha(t)-1\|_{H^{s-1}}+\|\kappa_\alpha(t)-1\|_{H^{s-1}}+\|D_t\zeta(t)\|_{H^s} + \|D_t^2\zeta(t)\|_{H^s}\right) \\ & \leq C\epsilon^{1/2} \end{align*} and that \begin{equation*}
\sup_{\alpha \neq \beta} \left|\frac{\alpha - \beta}{z(\alpha) - z(\beta)}\right| \leq \frac{1}{(1 - \|\zeta_\alpha - 1\|_{L^\infty})(1 - \|\kappa_\alpha - 1\|_{L^\infty})},
\end{equation*} where the constants $C$ are independent of $T^*$. Thus it follows by the blow-up criterion given in Theorem \ref{zLocalWellPosed} that we can continue the solution $z$ to $t\in [0, T_1]$ for some $T_1 > T^*$. On the other hand, we can choose $\epsilon_0$ so small that for $\epsilon < \epsilon_0$, the bounds $C\epsilon^{1/2}$ in \eqref{zetaestimate} and \eqref{kappaestimate} are small enough that there exists $T_2$ with $T^* < T_2 < T_1$, so that on $[0, T_2]$, $\|\kappa_\alpha - 1\|_{L^\infty} \leq \frac{1}{2}$, $\|\kappa_\alpha - 1\|_{H^s} \leq 1$ and the a priori estimate \eqref{ZetaLocalAPrioriBound} holds. This contradicts the maximality of $T^*$. Therefore we must have $T^*=\mathscr T\epsilon^{-2}$ and the long-time existence of $z$ follows. The error estimate \eqref{errorestimate} then follows from \eqref{energyzero} and Proposition~\ref{RemainderEnergyAPrioriBound}. \end{proof}
There is still the matter of interpreting this result in more familiar coordinates. Changing variables by $\kappa$, we can convert the estimates of the above theorem into estimates in Lagrangian coordinates: \begin{equation}\label{lag}
\|(z_\alpha - \kappa_\alpha, z_t, z_{tt}) - (\epsilon \zeta^{(1)}_\alpha \circ \kappa, \epsilon \zeta^{(1)}_t \circ \kappa, \epsilon \zeta^{(1)}_{tt} \circ \kappa)\|_{H^s \times H^s \times H^s} \leq C\epsilon^{3/2} \end{equation} Calculating the asymptotic expansion of $z_\alpha - 1, z_t, z_{tt}$ now depends on understanding $\kappa - \alpha$. From \eqref{KappaIVP} we have that \begin{align*} \kappa(\alpha, t) - \alpha & = \int_0^t b(\kappa(\alpha, \tau), \tau) d\tau \end{align*}
Using our estimate of $\|\kappa_\alpha - 1\|_{H^s} \leq C\epsilon^{1/2}$ and writing the integrand as $b = (b - \tilde{b}) + \epsilon^2 b_2 + \epsilon^3 b_3$ yields the following leading order expression: \begin{align}\label{kexp}
\kappa(\alpha, t) - \alpha & = -k\omega\epsilon^2 \int_0^t |B|^2(\epsilon\alpha + \epsilon \omega^\prime\tau, \epsilon^2\tau) d\tau + \mathcal{O}(\epsilon^{1/2}) \end{align} From \eqref{kexp}, we can obtain and justify asymptotics for $\partial_\alpha \Im z$, $z_t$ and $z_{tt}$ without any additional restriction on the initial data. However, justifying the asymptotics for $\Re z_\alpha - 1$ requires an understanding of the asymptotic for $\kappa_\alpha$ up to order $\mathcal O(\epsilon^{3/2})$, which is not available merely from the estimates given in Theorem~\ref{MainResult}. We therefore leave open the justification of the modulation approximations for $\Re z_\alpha-1$.
Note that the leading term of the right hand side of \eqref{kexp} can be as large as $O(1)$ on times of order $O(\epsilon^{-2})$, and so would contribute corrections to the asymptotic formula for $\Re z_\alpha -1$.
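The $O(1)$ size claimed for the leading term of \eqref{kexp} is a matter of scaling: if $\|B\|_{L^\infty} \leq M$, then for $t \leq \mathscr{T}\epsilon^{-2}$,

```latex
\left|k\omega\epsilon^2 \int_0^t |B|^2(\epsilon\alpha + \epsilon\omega'\tau, \epsilon^2\tau)\, d\tau\right|
\leq k\omega\epsilon^2 M^2 t \leq k\omega M^2 \mathscr{T},
```

which is uniformly bounded but, in general, not small on this time scale.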
\section{Justification of an Eulerian Version}
By imposing some additional mild restrictions on the initial data, we are able to obtain justifications of the derivative in the space variable of the interface and the trace of the velocity field on the interface in Eulerian coordinates. With further restrictions on the initial data, we are able to justify the asymptotics for the profile itself. All these reduce to obtaining an appropriate bound and, in the latter case, asymptotics for $\Re \zeta(\alpha,t)-\alpha$ in $C([0, \mathscr T\epsilon^{-2}]; L^2)$, which can be achieved by introducing another quantity as follows.
Following the proof of Proposition 2.3 of \cite{WuAlmostGlobal2D}, we introduce the velocity potential $\Phi(x, t)$ of the fluid in the domain $\Omega(t)$ that satisfies $\nabla \Phi = {\mathbf{v}}$. Let $\psi(\alpha, t) = \Phi(z(\alpha, t), t)$ be the trace of $\Phi$ on the interface $\Sigma(t)$. If we write $\Psi = \psi \circ \kappa^{-1}$, then the time derivative of the quantity $\lambda := (I - \mathcal{H})\Psi$ is comparable to the imaginary part of $\zeta$ through the identity (c.f. (2.46) of \cite{WuAlmostGlobal2D}): \begin{equation}\label{DtLambdaFormula} D_t\lambda = -(I - \mathcal{H})\Im(\zeta) - \frac{1}{2}[D_t\zeta, \mathcal{H}]\frac{\overline{\zeta}_\alpha D_t\zeta}{\zeta_\alpha} \end{equation} We also know by Proposition 2.3 of \cite{WuAlmostGlobal2D} that $\lambda$ satisfies an evolution equation of the form \begin{align}\label{LambdaEvolutionEquation} \mathcal{P}\lambda & = -\left[D_t\zeta,\mathcal{H}\frac{1}{\zeta_\alpha} + \overline{\mathcal{H}}\frac{1}{\overline{\zeta}_\alpha}\right](\overline{\zeta}_\alpha D_t^2\zeta) + [D_t\zeta, \overline{\mathcal{H}}]\left(D_t\overline{\zeta}\frac{\partial_\alpha D_t\zeta}{\overline{\zeta}_\alpha}\right) + D_t\zeta[D_t\zeta, \mathcal{H}]\frac{\partial_\alpha D_t\overline{\zeta}}{\zeta_\alpha} \notag \\ & \qquad - 2[D_t\zeta, \mathcal{H}]\frac{D_t\zeta \cdot \partial_\alpha D_t \zeta}{\zeta_\alpha} + \frac{1}{\pi i} \int \left(\frac{D_t\zeta(\alpha) - D_t\zeta(\beta)}{\zeta(\alpha) - \zeta(\beta)}\right)^2(D_t\zeta(\beta) \cdot \zeta_\beta(\beta)) d\beta \notag \\ & := G_\lambda
\end{align} Since $G_\lambda$ is of third order and depends only on $\zeta_\alpha - 1, D_t\zeta, D_t^2 \zeta$, we expect that we can construct an energy from this equation that allows us to bound $D_t \lambda$ by $C\epsilon^{1/2}$, provided the initial energy is bounded by $C\epsilon$. This is enough to control $\|\Re \zeta(\cdot,t)- \alpha\|_{L^2}$ and justify an Eulerian version of Theorem \ref{MainResult}. The details are given in Section 6.1 below.
However, with further restrictions on the initial data we can justify asymptotics for the profile itself, and we will devote the remainder of Section 6 to this task. Specifically, we will develop an approximate solution $\tilde{\lambda}$ to \eqref{LambdaEvolutionEquation} to the desired order $O(\epsilon^4)$ and thereby construct an energy for the remainder $l = \lambda - \tilde{\lambda}$. As was the case with the quantities $D_t \rho$ and $D_t \sigma$, such an energy will bound the $L^2$ norm of $D_t l$ for $O(\epsilon^{-2})$ times. This will allow us to justify asymptotics for the profile under reasonable restrictions on the initial profile and the initial velocity potential restricted to the initial interface.
\subsection{Justifying Eulerian Asymptotics for Derivatives of the Profile}
Our first task is to prove the
\begin{lemma}\label{RealPartProfileControl}
Suppose that the hypotheses of Theorem \ref{MainResult} hold. Suppose further that $\|\xi_0\|_{L^2} \leq C\epsilon^{1/2}$ and $\|\mathbf{v}_0\|_{L^2(\Omega(0))} \leq C\epsilon^{1/2}$, where $\mathbf{v}_0$ is the initial velocity field. Then $\|\Re \zeta(\cdot, t) - \alpha\|_{L^2} \leq C\epsilon^{1/2}$ for all $t \leq \mathscr{T}\epsilon^{-2}$. \end{lemma}
\begin{proof}
We begin by deriving conditions under which $\Re(\zeta) - \alpha$ is controlled in $L^2$. We can construct the energy corresponding to \eqref{LambdaEvolutionEquation}: $$\mathcal{L}(t) = \int \frac{1}{\mathcal{A}}|D_t \lambda|^2 + i \lambda \overline{\lambda}_\alpha$$ Since $\lambda$ is the trace of a holomorphic function on $\Omega(t)^c$, we have by Proposition \ref{BasicEnergyInequality} that $\|D_t \lambda\|_{L^2}^2 \leq C\mathcal{L}(t)$. Formula \eqref{DtLambdaFormula} provides the estimate \begin{align*}
\Bigl| \|D_t\lambda\|_{L^2} - \|(I - \mathcal{H})\Im\zeta\|_{L^2} \Bigr| & \leq \|D_t \lambda + (I - \mathcal{H})\Im(\zeta)\|_{L^2} \\ & \leq C\epsilon^{5/2}
\end{align*} Clearly we also have $\|(I - \mathcal{H})\Im \zeta\|_{L^2} \leq C\|\xi\|_{L^2}$. However, by \eqref{XiIsAntihol} and Lemma \ref{DoubleLayerPotentialArgument}, we conversely have that \begin{align*}
\|\xi\|_{L^2} & \leq \|(I - \mathcal{H})\Re\xi\|_{L^2} + \|\Im\xi\|_{L^2} \\
& = \|(I - \mathcal{H})\Im\xi\|_{L^2} + \|\Im\xi\|_{L^2} \\
& \leq C\|(I - \mathcal{H})\Im\xi\|_{L^2} \\ & \leq C\mathcal{L}^{1/2} + C\epsilon^{5/2} \end{align*} Hence it suffices to show that $\mathcal{L}(t)$ is $O(\epsilon)$ whenever $t \leq \mathscr{T}\epsilon^{-2}$. Now by Proposition \ref{BasicEnergyInequality} and Theorem \ref{MainResult} the energy $\mathcal{L}$ satisfies $$\frac{d\mathcal{L}}{dt} \leq C\epsilon^{5/2}\mathcal{L}^{1/2} + C\epsilon^{2}\mathcal{L}$$ therefore $$\frac{d\mathcal{L}^{1/2}}{dt} \leq C\epsilon^{5/2} + C\epsilon^{2}\mathcal{L}^{1/2}.$$ Solving this inequality gives us that $$\sup_{0 \leq t \leq \mathscr{T}\epsilon^{-2}} \mathcal{L}(t)^{1/2} \leq C\mathcal{L}(0)^{1/2} + C\epsilon^{1/2}$$
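The last step is the standard Gronwall argument: integrating the differential inequality for $\mathcal{L}^{1/2}$ gives, for $0 \leq t \leq \mathscr{T}\epsilon^{-2}$,

```latex
\mathcal{L}^{1/2}(t)
\leq e^{C\epsilon^2 t}\,\mathcal{L}^{1/2}(0) + C\epsilon^{5/2}\int_0^t e^{C\epsilon^2(t - \tau)}\, d\tau
\leq e^{C\mathscr{T}}\left(\mathcal{L}^{1/2}(0) + \epsilon^{1/2}\right),
```

since $\epsilon^{5/2}/\epsilon^2 = \epsilon^{1/2}$ and the exponential factor is bounded by $e^{C\mathscr{T}}$ on this time scale.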
Hence the question now reduces to asking which conditions on the initial data imply that $\mathcal{L}(0)$ is $O(\epsilon)$. We first have that $$\int \frac{1}{\mathcal{A}_0} |D_t\lambda_0|^2 d\alpha \leq C\|D_t \lambda_0\|_{L^2}^2 \leq (\|\xi_0\|_{L^2} + C\epsilon^{5/2})^2,$$ and so to control this part of $\mathcal{L}(0)$ it suffices to take $\|\xi_0\|_{L^2} \leq C\epsilon^{1/2}$.
The second part of $\mathcal{L}(0)$ takes more work. Recall that our parametrization for the initial data was chosen so that $\zeta(0) = z(0)$. Let $\psi_0$, $\lambda_0$, etc., be the initial values of $\psi$, $\lambda$, etc., respectively. To estimate the second part of $\mathcal{L}(0)$, we follow the discussion of initial data in section 5.1 of \cite{WuAlmostGlobal2D}. Observe that we can choose a function $\Xi_0$ holomorphic in $\Omega(0)$ for which $\Re(\Xi_0) \circ \zeta_0 = \Psi_0$, specifically $\Xi_0 \circ \zeta_0 = (I + \mathcal{H}_{\zeta_0})(I + \mathcal{K}_{\zeta_0})^{-1}\Psi_0$; such a function will satisfy $\partial_z \Xi_0 = \overline{\mathbf{v}}_0$. Since we have the operator identity $$(I - \mathcal{H}) - (I + \overline{\mathcal{H}})(I + \mathcal{K})^{-1} = -(I + \mathcal{H})(I + \mathcal{K})^{-1}\mathcal{K}$$ it follows that \begin{equation*} \lambda_0 - \overline{\Xi}_0 \circ \zeta_0 = -(I + \mathcal{H}_{\zeta_0})(I + \mathcal{K}_{\zeta_0})^{-1}\mathcal{K}_{\zeta_0}\Psi_0 \end{equation*} Observe that we can control derivatives of $\Psi_0$ but not $\Psi_0$ itself; however, the expression for $\mathcal{K}_{\zeta_0}\Psi_0$ contains an extra derivative. Write $z^\tau_0 = (1 - \tau)\zeta_0 + \tau \overline{\zeta}_0$, so $z^1_0 = \bar \zeta_0$ and $z^0_0 = \zeta_0$. Then by the Fundamental Theorem of Calculus we have \begin{align*} \mathcal{K}_{\zeta_0} & =\frac12(\mathcal H_0+\overline{\mathcal H}_0)= -\frac{1}{2}(\mathcal{H}_{z_0^1} - \mathcal{H}_{z_0^0}) \\ & = -\frac{1}{2} \int_0^1 \partial_\tau \mathcal{H}_{z_0^\tau} d\tau \\ & = -\frac{1}{2} \int_0^1 [\overline{\xi}_0 - \xi_0, \mathcal{H}_{z_0^\tau}]\frac{\partial_\alpha}{z_\alpha^\tau} \, d\tau
\end{align*} and so estimating this expression crudely gives the bound $\|\mathcal{K}_{\zeta_0} \Psi_0\|_{L^2} \leq C\|\xi_0\|_{L^\infty}\|v_0\|_{L^2} \leq C\epsilon$, and so $\|\lambda_0 - \overline{\Xi}_0 \circ \zeta_0\|_{L^2} \leq C\epsilon$ as well. But then we can by Green's Theorem write \begin{align*}
\left|\int i\lambda_0 \partial_\alpha \overline{\lambda}_0 d\alpha \right| & \leq 2 \left| \int \partial_\alpha \lambda_0 (\overline{\lambda}_0 - \Xi_0 \circ \zeta_0) d\alpha \right| + \left| \int i(\overline{\Xi}_0 \circ \zeta_0)\partial_\alpha(\Xi_0 \circ \zeta_0) d\alpha \right| \\
& \leq C\epsilon^{3/2} + \iint_{\Omega(0)} |\mathbf{v}_0(x)|^2 dx
\end{align*} Hence if we choose $\|\mathbf{v}_0\|_{L^2(\Omega(0))} \leq C\epsilon^{1/2}$, the lemma follows. \end{proof}
We can now prove the \begin{theorem}\label{WeakEulerianResult} Let $s \geq 6$ and $k > 0$ be given. Let $B_0 \in H^{s + 7}$, and $\mathscr{T} > 0$ be given. Denote by $B(X, T)$ the solution of \eqref{NLS} with initial data $B(0) = B_0$, and let $\zeta^{(1)}$ be defined as in \eqref{Zeta1Formula}. Suppose that the initial interface $\Sigma(0)$ is given by a graph $\{(x, \eta_0(x)) : x \in \mathbb{R}\}$, the initial velocity is $\mathbf{v}_0$, and the traces of the initial velocity and acceleration on $\{(x, \eta_0(x)) : x \in \mathbb{R}\}$ are $\mathfrak{v}_0$ and $\mathfrak{w}_0$, which satisfy the compatibility conditions as stated in Theorem~\ref{zLocalWellPosed}, and $(\eta_0, \mathfrak{v}_0,\mathfrak{w}_0) \in H^{s + 1} \times H^{s + 1} \times H^{s + 1/2}$ with the remainder estimates \begin{equation}\label{EulerianRemainderConditions}
\|(|D_x|^{1/2}\eta_0, \mathfrak{v}_0,\mathfrak{w}_0) - \epsilon(\Im |D_x|^{1/2}\zeta^{(1)}(0), \zeta_t^{(1)}(0),\zeta_{tt}^{(1)}(0))\|_{H^{s + 1/2} \times H^{s + 1} \times H^{s + 1/2}} \leq C_1\epsilon^{3/2} \end{equation} along with \begin{equation}\label{WeakConditions}
\|\eta_0\|_{L^2} \leq C_1\epsilon^{1/2} \qquad \text{and} \qquad \|\mathbf{v}_0\|_{L^2(\Omega(0))} \leq C_2\epsilon^{1/2} \end{equation}
Then there exists an $\epsilon_0 = \epsilon_0(\|B_0\|_{H^{s + 7}}, \mathscr{T}, C_1, C_2)$ so that for all $\epsilon < \epsilon_0$ the following holds: There exists a solution to \eqref{EulerVelocityField} for times $0 \leq t \leq \mathscr{T}\epsilon^{-2}$ for which $\Sigma(t)$ is given by a graph $\{(x, \eta(x, t)) : x \in \mathbb{R}, t \geq 0\}$, the trace of the velocity field on $\{(x, \eta(x, t)) : x \in \mathbb{R}, t \geq 0\}$ is given by $\mathfrak{v}(x,t)$,
and which satisfies $$\|(\eta_x(t), \mathfrak{v}(t)) - \epsilon(k\Re \zeta^{(1)}(t), \zeta_t^{(1)}(t) )\|_{H^{s} \times H^{s}} \leq C(\|B_0\|_{H^{s + 7}}, \mathscr{T}, C_1, C_2)\epsilon^{3/2}$$ for all $0 \leq t \leq \mathscr{T}\epsilon^{-2}$. \end{theorem}
\begin{proof} First, we will show that the initial data, after being reparametrized by $\kappa_0^{-1}$, is $B_0$-admissible. Once we do, a solution $z(\alpha, t)$ exists as in Theorem \ref{MainResult}. We must then show that this solution can be, for possibly smaller $\epsilon_0$, written as a graph, and we must give remainder estimates for this graph corresponding to the remainder estimates of $\zeta$ in Theorem \ref{MainResult}.
We begin by showing that the reparametrized data satisfies the hypotheses of Theorem \ref{MainResult}. Let $\gamma_0(\alpha) = \alpha + i\eta_0(\alpha)$. Let $\zeta_0(\alpha) = (\gamma_0 \circ \kappa_0^{-1})(\alpha)$, where as in \eqref{ChangeOfVariables} we define $$\kappa_0(\alpha) = \overline{\gamma}_0(\alpha) + \frac{1}{2}(I + \mathcal{H}_{\gamma_0})(I + \mathcal{K}_{\gamma_0})^{-1}(\gamma_0(\alpha) - \overline{\gamma}_0(\alpha)).$$ Then if we denote $\xi_0 := \zeta_0 - \alpha$ as usual, we have $(I - \overline{\mathcal{H}}_{\zeta_0})\xi_0 = 0$. This implies that $\xi_0 = i(I + \overline{\mathcal{H}}_{\zeta_0})(I+\mathcal K_{\zeta_0})^{-1}\Im \xi_0$. By Proposition \ref{WavePacketAntiholProp} we have $\zeta^{(1)} = i(I + \overline{\mathcal{H}}_0)\Im \zeta^{(1)} + \mathcal{O}(\epsilon^{3/2})$. For brevity, temporarily denote $\|\cdot\| := \||D_\alpha|^{1/2} \cdot\|_{H^{s + 1/2}}$. Then by Lemma \ref{ChangeVarBounds} and interpolation we have \begin{align*}
\|\xi_0 - \epsilon \zeta^{(1)}(0)\| & \leq \|i(I + \overline{\mathcal{H}}_{\zeta_0})(I+\mathcal K_{\zeta_0})^{-1}\Im \xi_0 - i(I + \overline{\mathcal{H}}_0)\Im \epsilon\zeta^{(1)}(0)\| + C\epsilon^{3/2} \\
& \leq \|\Im \xi_0 - \Im \epsilon\zeta^{(1)}(0)\| + \|(\overline{\mathcal{H}}_{\zeta_0} - \overline{\mathcal{H}}_0)\Im\epsilon\zeta^{(1)}(0)\| + C\epsilon^{3/2} \\
& \leq C\|\eta_0 - \Im \zeta^{(1)}(0)\| + C\epsilon\|\zeta^{(1)}(0) \circ \kappa_0 - \zeta^{(1)}(0)\| + C\epsilon^{3/2}
\end{align*} Since $\|\eta_0\|_{H^{s+1}} \leq C\epsilon^{1/2}$ by hypothesis, $\|\kappa_0 - \alpha\|_{H^{s+1}} \leq C\epsilon^{1/2}$, and so by the Mean Value Theorem $\|\zeta^{(1)}(0) \circ \kappa_0 - \zeta^{(1)}(0)\| \leq C\epsilon^{1/2}$. But then $\|\xi_0 - \epsilon \zeta^{(1)}(0)\| \leq C\epsilon^{3/2}$ follows from $\|\eta_0 - \Im \epsilon \zeta^{(1)}(0)\| \leq C\epsilon^{3/2}$.
Let $v_0 = \mathfrak{v}_0 \circ \kappa^{-1}_0$, $w_0 = \mathfrak{w}_0 \circ \kappa_0^{-1}$. By Lemma~\ref{ChangeVarBounds}, we also have $$\| v_0 - \epsilon i\omega \zeta^{(1)}(0)\|_{H^{s + 1}} \leq C\epsilon^{3/2}$$ $$\| w_0 + \epsilon k \zeta^{(1)}(0)\|_{H^{s + 1/2}} \leq C\epsilon^{3/2}$$ This gives $B_0$-admissible initial data, and so by Theorem \ref{MainResult} there exists a solution to the $\zeta$ system with justified asymptotics.
We must now show that we can give Eulerian estimates for the remainders of this solution. Since $\zeta$ and $z$ parametrize the same interface $\Sigma(t)$, it suffices to write $\zeta = x + iy$, where \begin{align}\label{ParametricZeta} x =x(\alpha,t)& = \alpha + \Re \xi(\alpha,t) \notag \\ y=y(\alpha,t) & = \Im \xi(\alpha,t) \end{align} For sufficiently small $\epsilon_0$, $\Sigma(t)$ describes a graph by Lemma \ref{RealPartProfileControl}, and so we can invert to solve for $\alpha = \alpha(x, t)$. Then we wish to justify asymptotics of $\eta(x,t) := y(\alpha(x,t),t)$.
The rigorous justifications of the asymptotics for $\zeta_\alpha - 1$ and $D_t \zeta$ give rise to rigorous justifications of the quantities $\eta_x$ and $\mathfrak{v}$. The derivations of each are similar, and so we will focus on $\eta_x$. By Theorem \ref{MainResult}, we have a solution $\zeta = x + iy$ satisfying $$\|y_\alpha(\cdot, t) - k\epsilon \Re \zeta^{(1)}(\cdot, t)\|_{H^s_\alpha} \leq C\epsilon^{3/2}$$ for sufficiently small $\epsilon_0$, and $\epsilon < \epsilon_0$. Since $x = \alpha(x, t) + \Re \xi(\alpha(x, t), t)$, we have immediately that $\|\alpha_x - 1\|_{H^s_x} \leq C\epsilon^{1/2}$. Changing variables then gives us $$\|y_\alpha(\alpha(\cdot,t), t) - k \epsilon \Re \zeta^{(1)}(\alpha(\cdot,t), t)\|_{H^s_x} \leq C\epsilon^{3/2}$$ Moreover, since we would like to take advantage of asymptotics for $\alpha_x(x) - 1$, we write \begin{align*} \alpha_x(x) - 1 & = -\Re\xi_\alpha(\alpha(x))\alpha_x(x) \\ & = -\Re\xi_\alpha(\alpha(x)) - \Re\xi_\alpha(\alpha(x))(\alpha_x(x) - 1) \\ & = -\Re\tilde{\xi}_\alpha(\alpha(x)) + \left( \Re\xi_\alpha(\alpha(x))\,\Re\tilde{\xi}_\alpha(\alpha(x)) - \Re r_\alpha(\alpha(x))\right) \\ & \quad - \Re \xi_\alpha(\alpha(x))\left(\alpha_x(x) + \Re\tilde{\xi}_\alpha(\alpha(x)) - 1\right)
\end{align*} from which we have the estimate $$\|\alpha_x(\cdot) + \Re\tilde{\xi}_\alpha(\alpha(\cdot)) - 1\|_{H^s_x} \leq C\epsilon^{3/2}$$ Next, we estimate the derivative of the graph $\eta_x$: \begin{align*}
\|y_\alpha(\alpha(\cdot), t) - \eta_x(\cdot, t)\|_{H^s_x} & = \|y_\alpha(\alpha(\cdot), t)(\alpha_x(\cdot, t) - 1)\|_{H^s_x} \\
& \leq \|y_\alpha(\alpha(\cdot))\|_{H^s_x}\|\alpha_x(\cdot) + \Re\tilde{\xi}_\alpha(\alpha(\cdot)) - 1\|_{H^s_x} \\
& \qquad + C\|y_\alpha(\alpha(\cdot))\|_{H^s}\|\tilde{\xi}_\alpha\|_{W^{s, \infty}_\alpha} \\ & \leq C\epsilon^{3/2} \end{align*} By the Mean Value Theorem and Lemma~\ref{RealPartProfileControl}, we have that \begin{align*}
& \quad\; \|\Re \zeta^{(1)}(\alpha(x), t) - \Re \zeta^{(1)}(x, t)\|_{H^s_x} \\
& \leq \|B(\epsilon\alpha(x) + \epsilon \omega^\prime t, \epsilon^2 t) - B(\epsilon x + \epsilon \omega^\prime t, \epsilon^2 t)\|_{H^s_x} \\
& \qquad + \|B(\epsilon x + \epsilon \omega^\prime t, \epsilon^2 t)\|_{W^{s, \infty}} \|e^{i(k\alpha(x) + \omega t)} - e^{i(k x + \omega t)}\|_{H^s_x} \\
& \leq \|B\|_{W^{s + 1, \infty}}\|\alpha(x) - x\|_{H^s} \\ & \leq C\epsilon^{1/2} \end{align*} Thus we have \begin{align*}
\|\eta_x(\cdot,t) - k \epsilon \Re \zeta^{(1)}(\cdot, t)\|_{H^s_x} & \leq \|\eta_x(\cdot, t) - y_\alpha(\alpha(\cdot), t)\|_{H^s_x} \\
& \quad + \|y_\alpha(\alpha(\cdot), t) - k\epsilon \Re \zeta^{(1)}(\alpha(\cdot), t)\|_{H^s_x} \\
& \quad + \|k\epsilon \Re\zeta^{(1)}(\alpha(\cdot), t) - k\epsilon \Re\zeta^{(1)}(\cdot, t)\|_{H^s_x} \\ & \leq C\epsilon^{3/2} \end{align*} and with a similar argument we also have \begin{equation*}
\|\mathfrak{v}(\cdot,t) - \epsilon \zeta_t^{(1)}(\cdot, t)\|_{H^{s + 1}_x} \leq C\epsilon^{3/2} \end{equation*} \end{proof}
\subsection{The Multiscale Calculation for $\tilde{\Psi}$ and $\tilde{\lambda}$.}
We have two formal calculations to complete. The first is to derive an expansion for the quantity $\Psi = \psi \circ \kappa^{-1}$ of the form $\tilde\Psi = \epsilon \Psi^{(1)} + \epsilon^2 \Psi^{(2)} + \epsilon^3 \Psi^{(3)}$ so that it satisfies the transformed version of Bernoulli's principle (cf. (2.14) of \cite{WuAlmostGlobal2D}): \begin{equation}\label{DtPsi}D_t\Psi = -\Im(\zeta) + \frac{1}{2}|D_t\zeta|^2\end{equation} up to order $O(\epsilon^4)$. The second is to check whether $\tilde\lambda =(I-\tilde {\mathcal H})\tilde\Psi$ satisfies \eqref{LambdaEvolutionEquation} up to order $O(\epsilon^4)$. We will repeatedly use the formula \eqref{TildeZetaFormula} for $\tilde{\zeta}$ in what follows.
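The multiscale computation below repeatedly uses the deep-water dispersion relation and its derivatives; for the reader's convenience we record the identities here (they can also be read off from the coefficient matchings made in the calculation itself): $$\omega^2 = k, \qquad \omega^\prime = \frac{1}{2\omega}, \qquad \omega^{\prime\prime} = -\frac{1}{4k\omega},$$ so that, in particular, $2\omega^{\prime\prime} = -\frac{1}{2k\omega}$ and $(\omega^\prime)^2/\omega = -\omega^{\prime\prime}$.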
\subsubsection{Deriving the expansion of $\Psi$}
The $O(\epsilon)$ terms of \eqref{DtPsi} yield\footnote{Here $\text{c.c.}$ represents the complex conjugate of the term immediately preceding it.} \begin{align*} \Psi^{(1)}_{t_0} & = -\Im(\zeta^{(1)}) \\ & = -\frac{1}{2i}Be^{i\phi} + \text{c.c.} \end{align*} from which we have \begin{equation*} \Psi^{(1)} = \frac{1}{2\omega}Be^{i\phi} + \text{c.c.} + C^{(1)}(\alpha_0, \alpha_1, t_1, t_2) \end{equation*} Equating the $O(\epsilon^2)$ terms of \eqref{DtPsi} gives \begin{align*}
\Psi^{(2)}_{t_0} & = -\Psi^{(1)}_{t_1} - \Im(\zeta^{(2)}) + \frac{1}{2}|\zeta^{(1)}_{t_0}|^2 \\
& = -\omega^\prime\frac{1}{2\omega}B_X e^{i\phi} + \text{c.c.} - C^{(1)}_{t_1} - \Im\left(\frac{1}{2}ik(I - \overline{\mathcal{H}}_0)|B|^2\right) + \frac{1}{2}k|B|^2 \\
& = -\frac{1}{4k}B_Xe^{i\phi} + \text{c.c.} - C^{(1)}_{t_1} - \frac{1}{2}k|B|^2 + \frac{1}{2}k|B|^2 \\ & = -\frac{1}{4k}B_Xe^{i\phi} + \text{c.c.} - C^{(1)}_{t_1} \end{align*} To avoid secular terms we set $C^{(1)} = 0$ and so arrive at the solution \begin{equation}\label{Psi2Formula} \Psi^{(2)} = -\frac{1}{4ik\omega}B_Xe^{i\phi} + \text{c.c.} + C^{(2)}(\alpha_0, \alpha_1, t_1, t_2) \end{equation} and hence determine $\Psi^{(1)}$ as \begin{equation}\label{Psi1Formula} \Psi^{(1)} = \frac{1}{2\omega}Be^{i\phi} + \text{c.c.} \end{equation} Finally, we collect the $O(\epsilon^3)$ terms of \eqref{DtPsi} together to give the equation \begin{align*} \Psi^{(3)}_{t_0} & = -\Psi^{(2)}_{t_1} - \Psi^{(1)}_{t_2} - b_2 \Psi^{(1)}_{\alpha_0} - \Im(\zeta^{(3)}) + \Re\left(\overline{\zeta}^{(1)}_{t_0}(\zeta^{(1)}_{t_1} + \zeta^{(2)}_{t_0})\right) \\ & = -\Psi^{(2)}_{t_1} - \Psi^{(1)}_{t_2} - b_2 \Psi^{(1)}_{\alpha_0} - \Im(\zeta^{(3)}) + \Re(\overline{\zeta}^{(1)}_{t_0}\zeta^{(1)}_{t_1})
\end{align*} We calculate that $\Psi^{(2)}_{t_1} = -\frac{1}{8ik^2}B_{XX}e^{i\phi} + \text{c.c.} + C^{(2)}_{t_1}$ and $\Psi^{(1)}_{t_2} = \frac{1}{2\omega}B_T e^{i\phi} + \text{c.c.}$. Recalling from \eqref{TildeBFormula} that $b_2 = -k\omega|B|^2$ we also have $b_2\Psi^{(1)}_{\alpha_0} = -\frac{1}{2}ik^2B|B|^2 e^{i\phi} + \text{c.c.}$ As for the remaining terms, we can write \begin{align*}
\Im(\zeta^{(3)}) & = \Im\left(-\frac{1}{2}k^2\overline{B}|B|^2e^{-i\phi} + \frac{1}{2}(I - \overline{\mathcal{H}}_0)\left(\overline{B}B_X\right)\right) \\
& = \frac{1}{4i}k^2B|B|^2e^{i\phi} + \text{c.c.} + \frac{1}{2}\Im(I - \overline{\mathcal{H}}_0)(\overline{B}B_X) \end{align*} as well as $\Re(\overline{\zeta}^{(1)}_{t_0}\zeta^{(1)}_{t_1}) = \Re\left(-\frac{1}{2}i\overline{B}B_X\right) = \Im\left(\frac{1}{2}\overline{B}B_X\right)$, and so \begin{align*}
-\Im(\zeta^{(3)}) + \Re(\overline{\zeta}^{(1)}_{t_0}\zeta^{(1)}_{t_1}) & = -\frac{1}{4i}k^2B|B|^2e^{i\phi} + \text{c.c.} - \frac{1}{2}\Im(I - \overline{\mathcal{H}}_0)(\overline{B}B_X) + \frac{1}{2} \Im(\overline{B}B_X) \\
& =- \frac{1}{4i}k^2B|B|^2e^{i\phi} + \text{c.c.} + \frac{1}{2}\Im \overline{\mathcal{H}}_0(\overline{B}B_X) \end{align*} Summing these terms now gives \begin{align*} \Psi^{(3)}_{t_0} & = -\Psi^{(2)}_{t_1} - \Psi^{(1)}_{t_2} - b_2 \Psi^{(1)}_{\alpha_0} - \Im(\zeta^{(3)}) + \Re(\overline{\zeta}^{(1)}_{t_0}\zeta^{(1)}_{t_1}) \\
& = \frac{1}{8ik^2}B_{XX}e^{i\phi} + \text{c.c.} - C^{(2)}_{t_1} - \frac{1}{2\omega}B_Te^{i\phi} + \text{c.c.} + \frac{1}{2}ik^2B|B|^2e^{i\phi} + \text{c.c.} \\
& \qquad - \frac{1}{4i}k^2B|B|^2e^{i\phi} + \text{c.c.} + \frac{1}{2}\Im \overline{\mathcal{H}}_0(\overline{B}B_X) \\
& = \left(-\frac{1}{2\omega}B_T + \frac{1}{8ik^2}B_{XX} + \frac{3}{4}ik^2B|B|^2\right)e^{i\phi} + \text{c.c.} + \left(-C^{(2)}_{t_1} + \frac{1}{2}\Im \overline{\mathcal{H}}_0(\overline{B}B_X)\right) \\
& = -\frac{1}{4i\omega}\left(2iB_T - \frac{1}{2k\omega}B_{XX} + 3k^2\omega B|B|^2\right)e^{i\phi} + \text{c.c.} + \left(-C^{(2)}_{t_1} + \frac{1}{2}\Im \overline{\mathcal{H}}_0(\overline{B}B_X)\right) \\
& = -\frac{1}{4i\omega}\left(2iB_T + 2\omega^{\prime\prime}B_{XX} +3 k^2\omega B|B|^2\right)e^{i\phi} + \text{c.c.} + \left(-C^{(2)}_{t_1} + \frac{1}{2}\Im \overline{\mathcal{H}}_0(\overline{B}B_X)\right) \end{align*} We must choose $C^{(2)}$ so that $C^{(2)}_X = \omega \Im \overline{\mathcal{H}}_0(\overline{B}B_X)$. Therefore
$$C^{(2)}=\frac12\omega i\mathcal H_0(|B|^2).$$
Since $B$ satisfies the NLS equation $2iB_T - \omega^{\prime \prime}B_{XX} + k^2\omega B|B|^2 = 0$, we have that $$\Psi^{(3)}_{t_0} = -\frac{3\omega^{\prime\prime}}{4i\omega}B_{XX}e^{i\phi}-\frac{k^2}{2i}B|B|^2e^{i\phi}+ \text{c.c.} = \frac{3}{16ik^2}B_{XX}e^{i\phi} -\frac{k^2}{2i}B|B|^2e^{i\phi} + \text{c.c.}$$ and so we can take as our solution \begin{equation}\label{Psi3Formula}
\Psi^{(3)} = -\frac{3}{16k^2\omega}B_{XX}e^{i\phi}+\frac{k^2}{2\omega}B|B|^2e^{i\phi}
+ \text{c.c.} \end{equation}
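For completeness, \eqref{Psi3Formula} follows from the preceding display simply by integrating in $t_0$ (taking the constant of integration to vanish): since $\partial_{t_0}e^{i\phi} = i\omega e^{i\phi}$, each coefficient of $e^{i\phi}$ is divided by $i\omega$, giving $$\frac{3}{16ik^2} \cdot \frac{1}{i\omega} = -\frac{3}{16k^2\omega} \qquad \text{and} \qquad -\frac{k^2}{2i} \cdot \frac{1}{i\omega} = \frac{k^2}{2\omega}$$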
\subsubsection{Checking the evolution equation for $\lambda$}
Now we would like to use our expansion of $\Psi$ to check to see whether \eqref{LambdaEvolutionEquation} is satisfied up to terms of order $O(\epsilon^4)$. The $O(\epsilon)$ equation that we must verify is \begin{align*} (\partial_{t_0}^2 - i\partial_{\alpha_0})(I - \mathcal{H}_0)\Psi^{(1)} & = (\partial_{t_0}^2 - i\partial_{\alpha_0})(I - \mathcal{H}_0)\left(\frac{1}{2\omega}Be^{i\phi} + \text{c.c.}\right)\\ & = (\partial_{t_0}^2 - i\partial_{\alpha_0})\frac{1}{\omega}Be^{i\phi}\\ & = 0, \end{align*} as desired. Similarly, recalling that $\mathcal{H}^{(1)} f = [\zeta^{(1)}, \mathcal{H}_0]f_{\alpha_0}$, it is quick to see that the $O(\epsilon^2)$ terms vanish as well: \begin{align*} & \;\; (\partial_{t_0}^2 - i\partial_{\alpha_0})(I - \mathcal{H}_0)\Psi^{(2)} \\ & + (\partial_{t_0}^2 - i\partial_{\alpha_0})(-\mathcal{H}_1)\Psi^{(1)} \\ & + (2\partial_{t_0}\partial_{t_1} - i\partial_{\alpha_1})(I - \mathcal{H}_0)\Psi^{(1)} = 0 \end{align*} For the $O(\epsilon^3)$ terms, we must investigate the sum \begin{align*} & \;\; (\partial_{t_0}^2 - i\partial_{\alpha_0})(I - \mathcal{H}_0)\Psi^{(3)} \\ & + (\partial_{t_0}^2 - i\partial_{\alpha_0})(-\mathcal{H}_1)\Psi^{(2)} \\ & + (\partial_{t_0}^2 - i\partial_{\alpha_0})(-\mathcal{H}_2)\Psi^{(1)} \\ & + (2\partial_{t_0}\partial_{t_1} - i\partial_{\alpha_1})(I - \mathcal{H}_0)\Psi^{(2)} \\ & + (2\partial_{t_0}\partial_{t_1} - i\partial_{\alpha_1})(-\mathcal{H}_1)\Psi^{(1)} \\ & + (2\partial_{t_0}\partial_{t_2} + \partial_{t_1}^2 + 2b_2\partial_{t_0}\partial_{\alpha_0})(I - \mathcal{H}_0)\Psi^{(1)} \\ & - G_\lambda^{(3)} \\ & = I_1 + \cdots + I_6 - G_\lambda^{(3)} \end{align*} where $G_\lambda^{(3)}$ is the third term in the formal expansion of the cubic term $G_\lambda$ in \eqref{LambdaEvolutionEquation}. We have \begin{align*}
I_1 & = (\partial_{t_0}^2 - i\partial_{\alpha_0}) \left(-\frac{3}{8k^2\omega}B_{XX}e^{i\phi}+\frac{k^2}{\omega}B|B|^2e^{i\phi} \right)\\ & = 0 \end{align*} and \begin{align*} I_2 & = (\partial_{t_0}^2 - i\partial_{\alpha_0})(-\mathcal{H}^{(1)})\left(-\frac{1}{4ik\omega}B_Xe^{i\phi} + \text{c.c.} + C^{(2)}\right) \\ & = (\partial_{t_0}^2 - i\partial_{\alpha_0})\left(\frac{1}{4\omega}(I - \mathcal{H}_0)B\overline{B}_X\right) \\ & = 0 \end{align*} We also have \begin{align*} I_4 & = (2\partial_{t_0}\partial_{t_1} - i\partial_{\alpha_1})\left(-\frac{1}{2ik\omega}B_Xe^{i\phi} + (I - \mathcal{H}_0)C^{(2)}\right) \\ & = -i\omega(I - \mathcal{H}_0) \Im \overline{\mathcal{H}}_0 (\overline{B}B_X) \\ & = -i\omega(I - \mathcal{H}_0)\frac{\overline{\mathcal{H}}_0(\overline{B}B_X) - \mathcal{H}_0(B\overline{B}_X)}{2i} \\ & = -\frac{1}{2}\omega(I - \mathcal{H}_0)(\overline{B}B_X + B\overline{B}_X) \end{align*} and \begin{align*} I_5 & = (2\partial_{t_0}\partial_{t_1} - i\partial_{\alpha_1})(-\mathcal{H}^{(1)})\left(\frac{1}{2\omega}Be^{i\phi} + \text{c.c.}\right) \\
& = (2\partial_{t_0}\partial_{t_1} - i\partial_{\alpha_1})\frac{1}{2}i\omega(I - \mathcal{H}_0)|B|^2 \\ & = \frac{1}{2}\omega(I - \mathcal{H}_0)(B\overline{B}_X + \overline{B}B_X) \end{align*} Moreover, since $B$ satisfies the NLS equation \eqref{NLS}, \begin{align*}
I_6 & = (2\partial_{t_0}\partial_{t_2} + \partial_{t_1}^2 -2k\omega|B|^2\partial_{t_0}\partial_{\alpha_0}) \frac{1}{\omega}Be^{i\phi} \\
& = (2iB_T - \omega^{\prime\prime}B_{XX} + 2k^2\omega B|B|^2)e^{i\phi} \\
& = k^2\omega B|B|^2e^{i\phi} \end{align*} The remaining terms are more involved. Recall the multiscale operator $$\mathcal{H}^{(2)}f = [\zeta^{(1)}, \mathcal{H}_0]\partial_{\alpha_1}f + [\zeta^{(2)}, \mathcal{H}_0]\partial_{\alpha_0}f - [\zeta^{(1)}, \mathcal{H}_0]\zeta^{(1)}_{\alpha_0}\partial_{\alpha_0}f + \frac{1}{2}[\zeta^{(1)}, [\zeta^{(1)}, \mathcal{H}_0]]\partial_{\alpha_0}^2f$$ Thus we first have \begin{align*} \mathcal{H}^{(2)} \Psi^{(1)} & = [Be^{i\phi}, \mathcal{H}_0]\left(\frac{1}{2\omega}B_Xe^{i\phi} + \text{c.c.}\right) \\
& \quad + \left[\frac{1}{2}ik(I - \overline{\mathcal{H}}_0)|B|^2, \mathcal{H}_0\right]\left(\frac{1}{2}i\omega Be^{i\phi} + \text{c.c.}\right) \\ & \quad - [Be^{i\phi}, \mathcal{H}_0]\left(ikBe^{i\phi}\left(\frac{1}{2}i\omega Be^{i\phi} + \text{c.c.}\right)\right) \\ & \quad + \frac{1}{2}[Be^{i\phi}, [Be^{i\phi}, \mathcal{H}_0]]\left(-\frac{1}{2}k\omega Be^{i\phi} + \text{c.c.}\right) \\ & = [Be^{i\phi}, \mathcal{H}_0]\left(\frac{1}{2\omega}\overline{B}_Xe^{-i\phi}\right) \\
& \quad - \frac{1}{2}k\omega[Be^{i\phi}, \mathcal{H}_0]|B|^2 \\ & \quad + \frac{1}{2}[Be^{i\phi}, [Be^{i\phi}, \mathcal{H}_0]]\left(-\frac{1}{2}k\omega \overline{B}e^{-i\phi}\right) \\ & = \frac{1}{2\omega}(I - \mathcal{H}_0)(B\overline{B}_X)\\
& \quad -\frac{1}{2}k\omega Be^{i\phi}(I + \mathcal{H}_0)|B|^2 \\
& + \frac{1}{2}k\omega Be^{i\phi}\mathcal{H}_0|B|^2 \\
& = -\frac{1}{2}k\omega B|B|^2e^{i\phi} + \frac{1}{2\omega}(I - \mathcal{H}_0)(B\overline{B}_X) \end{align*} But then \begin{align*} I_3 & = (\partial_{t_0}^2 - i\partial_{\alpha_0})(-\mathcal{H}^{(2)})\Psi^{(1)} \\
& = (\partial_{t_0}^2 - i\partial_{\alpha_0})\left(\frac{1}{2}k\omega B|B|^2e^{i\phi} - \frac{1}{2\omega}(I - \mathcal{H}_0)(B\overline{B}_X)\right) \\ & = 0 \end{align*} Finally, we turn to calculating $G_\lambda^{(3)}$. We have by definition that $$G_\lambda^{(3)} = -\left[D_t\zeta,\mathcal{H}\frac{1}{\zeta_\alpha} + \overline{\mathcal{H}}\frac{1}{\overline{\zeta}_\alpha}\right](\overline{\zeta}_\alpha D_t^2\zeta) + [D_t\zeta, \overline{\mathcal{H}}]\left(D_t\overline{\zeta}\frac{\partial_\alpha D_t\zeta}{\overline{\zeta}_\alpha}\right) + D_t\zeta[D_t\zeta, \mathcal{H}]\frac{\partial_\alpha D_t\overline{\zeta}}{\zeta_\alpha}$$ $$ - 2[D_t\zeta, \mathcal{H}]\frac{D_t\zeta \cdot \partial_\alpha D_t \zeta}{\zeta_\alpha} + \frac{1}{\pi i} \int \left(\frac{D_t\zeta(\alpha) - D_t\zeta(\beta)}{\zeta(\alpha) - \zeta(\beta)}\right)^2(D_t\zeta(\beta) \cdot \zeta_\beta(\beta)) d\beta$$ We simplify the formal leading terms of the commutators first. We have that \begin{align*}
[\zeta^{(1)}_{t_0}, \overline{\mathcal{H}}_0](\overline{\zeta}^{(1)}_{t_0}\zeta^{(1)}_{t_0 \alpha_0}) & = k^2 \omega Be^{i\phi} (I - \overline{\mathcal{H}}_0)|B|^2 \end{align*} and \begin{align*}
\zeta^{(1)}_{t_0}[\zeta^{(1)}_{t_0}, \mathcal{H}_0]\overline{\zeta}^{(1)}_{t_0 \alpha_0} & = k^2\omega Be^{i\phi} (I - \mathcal{H}_0)|B|^2 \end{align*} Also, since $\zeta^{(1)}_{t_0} \cdot \zeta^{(1)}_{t_0 \alpha_0} = 0$, the third commutator vanishes. We will write the leading orders of the remaining terms as singular integrals to which we can apply the following formula: $$\frac{1}{\pi i}\int \frac{(g(\alpha) - g(\beta))(h(\alpha) - h(\beta))}{(\alpha - \beta)^2} f(\beta) d\beta = [g, \mathcal{H}_0](h_\alpha f) + [h, \mathcal{H}_0](g_\alpha f) - [g, [h, \mathcal{H}_0]]f_\alpha$$ Since to leading order, $\zeta^{(1)}_{t_0} \cdot \zeta^{(1)}_{\alpha_0} = \zeta^{(1)}_{t_0} \cdot 1 + O(\epsilon^2) = \Re(\zeta^{(1)}_{t_0}) + O(\epsilon^2) = \frac{1}{2}(\zeta^{(1)}_{t_0} + \overline{\zeta}^{(1)}_{t_0}) + O(\epsilon^2)$, we can rewrite the second singular integral above as \begin{align*} \frac{1}{\pi i} \int \left(\frac{\zeta^{(1)}_{t_0}(\alpha) - \zeta^{(1)}_{t_0}(\beta)}{\alpha - \beta}\right)^2 \frac{1}{2}\overline{\zeta}^{(1)}_{t_0} d\beta & = [\zeta^{(1)}_{t_0}, \mathcal{H}_0](\zeta^{(1)}_{t_0 \alpha_0} \overline{\zeta}^{(1)}_{t_0}) - \frac{1}{2}[\zeta^{(1)}_{t_0}, [\zeta^{(1)}_{t_0}, \mathcal{H}_0]]\overline{\zeta}^{(1)}_{\alpha_0 t_0} \end{align*} Similarly, the leading order of the first singular integral is \begin{align*} & -\frac{1}{\pi i} \int \left( \frac{(\zeta^{(1)}_{t_0}(\alpha) - \zeta^{(1)}_{t_0}(\beta))(\overline{\zeta}^{(1)}(\alpha) - \overline{\zeta}^{(1)}(\beta))}{(\alpha - \beta)^2}\right) \zeta^{(1)}_{t_0 t_0}(\beta) d\beta \\ & = -[\zeta^{(1)}_{t_0}, \mathcal{H}_0](\overline{\zeta}^{(1)}_{\alpha_0}\zeta^{(1)}_{t_0 t_0}) - [\overline{\zeta}^{(1)}, \mathcal{H}_0](\zeta^{(1)}_{t_0 \alpha_0}\zeta^{(1)}_{t_0 t_0}) + [\zeta^{(1)}_{t_0}, [\overline{\zeta}^{(1)}, \mathcal{H}_0]]\zeta^{(1)}_{t_0 t_0 \alpha_0} \end{align*} By extracting the coefficients resulting from differentiation, the first terms of these two expressions cancel each other. 
Therefore we are left with the following expression as the sum of these singular integrals: \begin{align*} & - \frac{1}{2}[\zeta^{(1)}_{t_0}, [\zeta^{(1)}_{t_0}, \mathcal{H}_0]]\overline{\zeta}^{(1)}_{\alpha_0 t_0} + [\zeta^{(1)}_{t_0}, [\overline{\zeta}^{(1)}, \mathcal{H}_0]]\zeta^{(1)}_{t_0 t_0 \alpha_0} \\
& = k^2\omega Be^{i\phi} \mathcal{H}_0|B|^2 - k^2 \omega Be^{i\phi} (I + \mathcal{H}_0)|B|^2 \\
& = -k^2\omega B|B|^2e^{i\phi} \end{align*} Summing these calculations then gives \begin{align*}
G_\lambda^{(3)} & = k^2\omega Be^{i\phi} (I + \mathcal{H}_0)|B|^2 + k^2 \omega Be^{i\phi}(I - \mathcal{H}_0)|B|^2 - k^2 \omega B|B|^2 e^{i\phi} \\
& = k^2 \omega B|B|^2 e^{i\phi} \end{align*} Therefore we have at last that the $O(\epsilon^3)$ terms sum to \begin{align*}
& -\frac{1}{2}\omega(I - \mathcal{H}_0)(\overline{B}B_X + B\overline{B}_X) + \frac{1}{2}\omega(I - \mathcal{H}_0)(B \overline{B}_X + \overline{B}B_X) + k^2 \omega B|B|^2e^{i\phi} - k^2\omega B|B|^2e^{i\phi} \end{align*} which vanishes identically. Thus the expansion of $\Psi$ indeed satisfies \eqref{LambdaEvolutionEquation} up to $O(\epsilon^4)$. Define \begin{equation}\label{TildePsiFormula} \tilde{\Psi} = \epsilon \Psi^{(1)} + \epsilon^2 \Psi^{(2)} + \epsilon^3\Psi^{(3)} \end{equation} as well as \begin{equation}\label{TildeLambdaFormula} \tilde{\lambda} = (I - \tilde{\mathcal{H}})\tilde{\Psi} \end{equation} so that $\tilde{\mathcal{P}}\tilde{\lambda} - G_\lambda^{(3)} = O(\epsilon^4)$.
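Assembling \eqref{Psi1Formula}, \eqref{Psi2Formula} (with the above choice of $C^{(2)}$), and \eqref{Psi3Formula}, the approximation \eqref{TildePsiFormula} reads explicitly \begin{align*} \tilde{\Psi} & = \epsilon\left(\frac{1}{2\omega}Be^{i\phi} + \text{c.c.}\right) + \epsilon^2\left(-\frac{1}{4ik\omega}B_Xe^{i\phi} + \text{c.c.} + \frac{1}{2}i\omega\mathcal{H}_0(|B|^2)\right) \\ & \quad + \epsilon^3\left(-\frac{3}{16k^2\omega}B_{XX}e^{i\phi} + \frac{k^2}{2\omega}B|B|^2e^{i\phi} + \text{c.c.}\right) \end{align*}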
\subsection{Estimates of the Remainder of $\lambda$}
Our goal here is to derive an evolution equation for \begin{equation}\label{LFormula} l = \lambda - \tilde{\lambda} \end{equation} and to construct a corresponding energy. This will enable us to bound the quantity $D_t l = D_t(\lambda - \tilde{\lambda})$ in $L^2$, which in turn controls $r$ in $L^2$ on time scales of order $O(\epsilon^{-2})$.
\subsubsection{Showing that $D_t l$ and $r$ are comparable.}
Following the proof of Lemma \ref{RealPartProfileControl}, we first show that $r$ and $(I - \mathcal{H})\Im(r)$ are comparable in $L^2$. Since $(I - \mathcal{H})\overline{\xi} = 0$ by \eqref{XiIsAntihol}, we have by the multiscale calculation of Section 3.3 and Corollary \ref{DiffHilbertBound} that $$(I - \mathcal{H})\overline{r} = -(I - \mathcal{H})\overline{\tilde{\xi}} = -(\tilde{\mathcal{H}} - \mathcal{H})\overline{\tilde{\xi}} - (I - \tilde{\mathcal{H}})\overline{\tilde{\xi}} = \mathcal{O}(\epsilon^{5/2})$$ Hence we have \begin{align*}
\|r\|_{L^2} & \leq C\|(I - \mathcal{H})(r + \overline{r})\|_{L^2} + C\|(I - \mathcal{H})\Im(r)\|_{L^2} \\
& \leq C\|(I - \mathcal{H})\Im(r)\|_{L^2} + C\epsilon^{5/2}, \end{align*} and so \begin{equation}\label{DtLAndRComparable}
\frac{1}{C} \|r\|_{L^2} - C\epsilon^{5/2} \leq \|(I - \mathcal{H})\Im(r)\|_{L^2} \leq C\|r\|_{L^2} + C\epsilon^{5/2} \end{equation} Turning to $D_t l$ and $r$, we expand \begin{align*} D_t l & = D_t \lambda - D_t \tilde{\lambda} \\ & = D_t \lambda - \tilde{D}_t\tilde{\lambda} - (D_t - \tilde{D}_t)\tilde{\lambda} \\ & = D_t \lambda - \tilde{D}_t\tilde{\lambda} - (b - \tilde{b})\tilde{\lambda}_\alpha \end{align*} Another multiscale calculation confirms that the residual quantity $$\tilde{D}_t\tilde{\lambda} + (I - \tilde{\mathcal{H}})\Im(\tilde{\zeta}) + \frac{1}{2}[\tilde{D}_t\tilde{\zeta}, \tilde{\mathcal{H}}]\frac{\overline{\tilde{\zeta}}_\alpha \tilde{D}_t\tilde{\zeta}}{\tilde{\zeta}_\alpha}$$ is of size at most $C\epsilon^{3/2}$ in $L^2$. Hence, using \eqref{DtLambdaFormula}, we have that $D_t l = -(I - \mathcal{H})\Im(r) + \mathcal{O}(\epsilon^{3/2})$. But then this implies the bound \begin{align}\label{DtLandREquiv}
\frac{1}{C} \|r\|_{L^2} - C\epsilon^{3/2} \leq \|D_t l\|_{L^2} \leq C\|r\|_{L^2} + C\epsilon^{3/2} \end{align}
\subsubsection{The Evolution Equation and Energy Estimates for $l$.}
We can write immediately that \begin{align*} \mathcal{P}l & = G_\lambda - (\mathcal{P} - \tilde{\mathcal{P}})\tilde{\lambda} - \tilde{\mathcal{P}}\tilde{\lambda} \\ & = (G_\lambda - G_\lambda^{(3)}) - (\mathcal{P} - \tilde{\mathcal{P}})\tilde{\lambda} - (\tilde{\mathcal{P}}\tilde{\lambda} - G_\lambda^{(3)})
\end{align*} from which we have by the usual decompositions and estimates that $\mathcal{P}l$ is controlled in $H^s$ by $C(E_s^{3/2} + \epsilon E_s + \epsilon^2 E_s^{1/2} + \epsilon^{7/2}) = O(\epsilon^{7/2})$. We can now construct the energy $$\int \frac{1}{\mathcal{A}}|D_t l|^2 + il\overline{l}_\alpha d\alpha$$ corresponding to the above evolution equation for $l$. Since $l$ need not be the trace of a holomorphic function in $\Omega(t)^c$, we cannot conclude that this quantity bounds $\|D_t l\|_{L^2}^2$ above. Hence we decompose $l$ as $$l = \frac{1}{2}(I - \mathcal{H})l + \frac{1}{2}(I + \mathcal{H})l := l^- + l^+$$ The energy $$\mathscr{L}(t) = \int \frac{1}{\mathcal{A}}|D_t l|^2 + il^-\overline{l}^-_\alpha d\alpha$$ does bound $\|D_t l\|_{L^2}^2$ from above, by Lemma \ref{BasicEnergyInequality}. We would like to show that $d\mathscr{L}/dt \leq C\epsilon^5$. To do so, we write $$\mathscr{L}(t) = \int \frac{1}{\mathcal{A}}|D_t l|^2 + il\overline{l}_\alpha d\alpha - i\int l^-\overline{l}^+_\alpha + l^+\overline{l}^-_\alpha + l^+\overline{l}^+_\alpha d\alpha := \mathscr{L}_1(t) + \mathscr{L}_2(t)$$ By Lemma \ref{BasicEnergyInequality} and \eqref{DtLandREquiv}, the time derivative of the first integral is \begin{equation*}
\frac{d\mathscr{L}_1}{dt} \leq C \epsilon^{7/2}\|D_t l\|_{L^2} + C \epsilon^{2}\|D_t l\|_{L^2}^2 \end{equation*} We use the usual almost-orthogonality argument to address the terms of $\mathscr{L}_2(t)$. Observe that with a change of variables we have \begin{align*} \frac{d\mathscr{L}_2}{dt} & = \frac{d}{dt} \left( -i \int l^-\overline{l}^+_\alpha + l^+\overline{l}^-_\alpha + l^+\overline{l}^+_\alpha d\alpha \right) \\ & = -i \int D_t l^-\overline{l}^+_\alpha + D_t l^+\overline{l}^-_\alpha + D_t l^+\overline{l}^+_\alpha + l^-\partial_\alpha D_t\overline{l}^+ + l^+\partial_\alpha D_t \overline{l}^- + l^+\partial_\alpha D_t \overline{l}^+ d\alpha \\ & = \frac{1}{i} \int D_t l^-\overline{l}^+_\alpha + D_t l^+\overline{l}^-_\alpha + D_t l^+\overline{l}^+_\alpha - l^-_\alpha D_t\overline{l}^+ - l^+_\alpha D_t \overline{l}^- - l^+_\alpha D_t \overline{l}^+ d\alpha \\ & = 2 \Im \int D_t l^-\overline{l}^+_\alpha + D_t l^+\overline{l}^-_\alpha + D_t l^+\overline{l}^+_\alpha d\alpha \end{align*} We calculate that \begin{align*} l^+ & = \frac{1}{2}(I + \mathcal{H})l \\ & = \frac{1}{2}(I + \mathcal{H})(\lambda - \tilde{\lambda}) \\ & = -\frac{1}{2}(I + \mathcal{H})(I - \tilde{\mathcal{H}})\tilde{\Psi} \\ & =- \frac{1}{2}(I + \mathcal{H})(\mathcal{H} - \tilde{\mathcal{H}})\tilde{\Psi},
\end{align*} from which we have $\|l^+\|_{H^1} \leq C\epsilon^{5/2}$. Via Corollary \ref{DtHilbertDiffBound}, the same formula readily implies that $\|D_t l^+\|_{L^2} \leq C\epsilon^{5/2}$, and so $$\left|\int D_t l^+\overline{l}^+_\alpha \,d\alpha\right| \leq C\epsilon^5$$ The other two terms are controlled by exploiting their almost-orthogonality. Note that $D_t l^- = \frac{1}{2}(I - \mathcal{H})D_t l - \frac{1}{2}[D_t \zeta, \mathcal{H}]\frac{l_\alpha}{\zeta_\alpha}$ and $\overline{l}_\alpha^+ = \frac{1}{2}(I - \overline{\mathcal{H}}^*)\overline{l}_\alpha$. Since we have \begin{align*} l_\alpha & = \lambda_\alpha - \tilde{\lambda}_\alpha \\ & = (I - \mathcal{H})\Psi_\alpha - [\xi_\alpha, \mathcal{H}]\frac{\Psi_\alpha}{\zeta_\alpha} - \tilde{\lambda}_\alpha \\ & = (I - \mathcal{H})\Re(\overline{\zeta}_\alpha D_t\zeta) - [\xi_\alpha, \mathcal{H}]\frac{\Re(\overline{\zeta}_\alpha D_t\zeta)}{\zeta_\alpha} - \tilde{\lambda}_\alpha
\end{align*} we see that the only $O(\epsilon)$ terms contributed are $(I - \mathcal{H}_0)\Re(\overline{\zeta}^{(1)}_{t_0}) - \partial_{\alpha_0}(I - \mathcal{H}_0)\Psi^{(1)} = 0$. Hence $\|l_\alpha\|_{L^2} \leq C\epsilon^{3/2}$. But then we can rewrite the commutator as a term of third order as follows: \begin{align*} [D_t \zeta, \mathcal{H}]\frac{l_\alpha}{\zeta_\alpha}& = \left[D_t \zeta, \mathcal{H}\frac{1}{\zeta_\alpha} + \overline{\mathcal{H}}\frac{1}{\overline{\zeta}_\alpha}\right]l_\alpha - [D_t \zeta, \overline{\mathcal{H}}]\frac{\partial_\alpha}{\overline{\zeta}_\alpha}l \\ & = \left[D_t \zeta, \mathcal{H}\frac{1}{\zeta_\alpha} + \overline{\mathcal{H}}\frac{1}{\overline{\zeta}_\alpha}\right]l_\alpha - [D_t \zeta, \overline{\mathcal{H}}]\frac{\partial_\alpha}{\overline{\zeta}_\alpha}\left(l^+ - \frac{1}{2}(\mathcal{H} + \overline{\mathcal{H}})l\right)
\end{align*} and so $\|[D_t \zeta, \mathcal{H}]\frac{l_\alpha}{\zeta_\alpha}\|_{L^2} \leq C\epsilon^{7/2}$. Since $\|l^+_\alpha\|_{L^2} \leq C\epsilon^{5/2}$ as above, it suffices to estimate the inner product \begin{align*} \langle (I - \mathcal{H})D_t l, (I - \overline{\mathcal{H}}^*)\overline{l}_\alpha\rangle & = -\langle (\mathcal{H} + \overline{\mathcal{H}})D_t l, (I - \overline{\mathcal{H}}^*)\overline{l}_\alpha\rangle \\ & = -2\langle (\mathcal{H} + \overline{\mathcal{H}})D_t l, \overline{l}_\alpha^+\rangle \\
& \le C\epsilon^{7/2}\|D_t l\|_{L^2}
\end{align*} For the second term, we have that $D_t l^+ = \frac{1}{2}(I + \mathcal{H}) D_t l + \frac{1}{2}[D_t \zeta, \mathcal{H}]\frac{l_\alpha}{\zeta_\alpha}$ and $\overline{l}_\alpha^- = \frac{1}{2}(I + \overline{\mathcal{H}}^*)\overline{l}_\alpha$. The commutator is estimated by $\|[D_t \zeta, \mathcal{H}]\frac{l_\alpha}{\zeta_\alpha}\|_{L^2} \leq C\epsilon^{7/2}$ as before. Hence it suffices to consider \begin{align*} \langle (I + \mathcal{H})D_t l, (I + \overline{\mathcal{H}}^*)\overline{l}_\alpha \rangle & = \langle (I + \mathcal{H})D_t l, (\mathcal{H}^* + \overline{\mathcal{H}}^*)\overline{l}_\alpha \rangle \\ & = \left\langle 2D_t l^+ - [D_t\zeta, \mathcal{H}]\frac{l_\alpha}{\zeta_\alpha}, (\mathcal{H} + \overline{\mathcal{H}})^*\overline{l}_\alpha \right\rangle \\
&\le C(\epsilon^{7/2}\|D_t l\|_{L^2}+\epsilon^5) \end{align*} Summing these estimates, we finally have that $$\frac{d\mathscr{L}}{dt}(t) \leq C(\epsilon^5 + \epsilon^{7/2}\mathscr L(t)^{1/2}+ \epsilon^{2}\mathscr L(t))\leq C\epsilon^2(\epsilon^3 + \mathscr L(t))$$ whenever $0 \leq t \leq \mathscr{T}\epsilon^{-2}$. Therefore $$\sup_{0\le t\le \mathscr T \epsilon^{-2}}\mathscr L(t)\le C(\mathscr L(0) + \epsilon^3)$$ Consequently
$$\|r\|_{C([0, \mathscr{T}\epsilon^{-2}]; L^2)} \leq C(\mathscr{L}(0)^{1/2} + \epsilon^{3/2}).$$
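The last bound follows by combining Lemma \ref{BasicEnergyInequality} with \eqref{DtLandREquiv}: for $0 \leq t \leq \mathscr{T}\epsilon^{-2}$, $$\|r(t)\|_{L^2} \leq C(\|D_t l(t)\|_{L^2} + \epsilon^{3/2}) \leq C(\mathscr{L}(t)^{1/2} + \epsilon^{3/2}) \leq C(\mathscr{L}(0)^{1/2} + \epsilon^{3/2})$$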
\subsection{Justifying the Eulerian Asymptotics for the Profile.}
With these preliminaries in place, we can now prove the following result.
\begin{theorem}\label{StrongEulerianResult} Suppose the remainder hypotheses \eqref{EulerianRemainderConditions} hold, and moreover that the stronger conditions hold: \begin{equation}\label{StrongerConditions}
\|\eta_0 - \epsilon \Im \zeta^{(1)}\|_{L^2} \leq C\epsilon^{3/2} \qquad \text{and} \qquad \|\Phi_0(\alpha + i\eta_0(\alpha)) - \epsilon \omega^{-1} \Re \zeta^{(1)}\|_{\dot{H}^{1/2}} \leq C\epsilon^{3/2}
\end{equation} where $\Phi_0$ is the initial velocity potential. Then there exists a possibly smaller $\epsilon_0$ so that in addition to the conclusions of Theorem \ref{WeakEulerianResult} holding, the profile $\eta$ satisfies $$\|\eta(t) - \epsilon\Im \zeta^{(1)}(t)\|_{H^{s + 1}} \leq C(\|B_0\|_{H^{s + 7}}, \mathscr{T}, C_1, C_2)\epsilon^{3/2}$$ for all $0 \leq t \leq \mathscr{T}\epsilon^{-2}$. \end{theorem}
\begin{proof}
As in the proof of Lemma \ref{RealPartProfileControl}, it suffices to derive conditions under which $\mathscr{L}(0) = O(\epsilon^3)$. We will show that the quantity $$\mathscr{L}(0) = \int \frac{1}{\mathcal{A}_0} |D_t l_0|^2 + i l_0 \partial_\alpha \overline{l}_0 d\alpha$$ is controlled by $\|r_0\|_{L^2}$ and $\|\Phi_0 \circ z_0 - \epsilon \omega^{-1} \Re \zeta^{(1)}\|_{\dot{H}^{1/2}}$.\footnote{Ideally one would prefer, in keeping with the weaker conditions given in Theorem \ref{WeakEulerianResult}, to bound $\mathscr{L}(0)$ by some mean-square difference of the initial velocity fields of the true and approximate solutions. However, since these velocity fields are defined on different domains, we instead give this equivalent condition, which is more straightforward to state.}
We can control the first term $$\int \frac{1}{\mathcal{A}_0}|D_t l_0|^2 \,d\alpha \leq C\|D_t l_0\|_{L^2}^2 \leq C(\|r_0\|_{L^2} + \epsilon^{3/2})^2$$ by \eqref{DtLandREquiv}.
To estimate the other term in $\mathscr{L}(0)$, observe that we can write $l$ in terms of $\Psi - \tilde{\Psi}$ as follows: \begin{align*} l & = (I - \mathcal{H})\Psi - (I - \tilde{\mathcal{H}})\tilde{\Psi} \\ & = (I - \mathcal{H})(\Psi - \tilde{\Psi}) - (\mathcal{H} - \tilde{\mathcal{H}})\tilde{\Psi} \end{align*} and the latter term is $\mathcal{O}(\epsilon^{5/2})$ by Corollary \ref{DiffHilbertBound}. Hence we expand our integral as usual: \begin{align*} \int i l_0 \partial_\alpha \overline{l}_0 d\alpha & = i \int \left(l_0 - (I - \mathcal{H}_{z_0})(\Psi_0 - \tilde{\Psi}_0)\right) \partial_\alpha \overline{l}_0 d\alpha \\ & \quad - i \int \partial_\alpha(I - \mathcal{H}_{z_0})(\Psi_0 - \tilde{\Psi}_0) \overline{\left(l_0 - (I - \mathcal{H}_{z_0})(\Psi_0 - \tilde{\Psi}_0)\right)} d\alpha \\ & \quad + i \int (I - \mathcal{H}_{z_0})(\Psi_0 - \tilde{\Psi}_0) \partial_\alpha \overline{(I - \mathcal{H}_{z_0})(\Psi_0 - \tilde{\Psi}_0)} d\alpha
\end{align*} The first two of these integrals are $O(\epsilon^4)$, since $\partial_\alpha(\Psi - \tilde{\Psi}) = \Re(\overline{\zeta}_\alpha D_t\zeta) - \tilde{\Psi}_\alpha$ is $\mathcal{O}(\epsilon^{3/2})$. Therefore since $\mathcal{H}$ is bounded on $\dot{H}^{1/2}$,\footnote{Since $\mathcal{H}$ is bounded on $L^2$, this can be checked by showing that $\mathcal{H}$ is bounded on $\dot{H}^1$ using the identity $\partial_\alpha \mathcal{H} f = \mathcal{H} f_\alpha + [z_\alpha, \mathcal{H}]\frac{f_\alpha}{z_\alpha}$ and then by using complex interpolation.} it follows that $$\left| \int i l_0 \partial_\alpha \overline{l}_0 d\alpha \right| \leq \|(\Phi_0 \circ z_0) - \epsilon \omega^{-1} \Re \zeta^{(1)} \|_{\dot{H}^{1/2}}^2 + C\epsilon^4$$ Hence, if we choose the initial profile and the initial velocity potential $\Phi_0$ to satisfy $$\|r_0\|_{L^2} \leq C\epsilon^{3/2} \qquad \text{and} \qquad \|(\Phi_0 \circ z_0) - \epsilon \omega^{-1} \Re \zeta^{(1)} \|_{\dot{H}^{1/2}} \leq C\epsilon^{3/2}$$ then $\mathscr{L}(0) \leq C\epsilon^3$, and so $\sup_{0\le t\le \mathscr T\epsilon^{-2}}\|r(t)\|_{L^2} \leq C\epsilon^{3/2}$ as well. \end{proof}
\textit{Acknowledgement:} Part of the work in this paper was done while the authors were visiting the IMA during the academic year 2009--10. S. Wu would like to thank the IMA for its hospitality, generous support, and pleasant academic environment. N. Totz would like to thank the IMA for its generous travel funding during 2009--10.
\appendix
\section{Glossary of Symbols}
We collect here the commonly used notations and symbols. References such as (1.1) refer to the equation in which the symbol was introduced, whereas references such as p. 1 refer to the page on which the symbol first appears.
\begin{align*} \mathbb{N} & \qquad \text{The set of nonnegative integers} \\ \mathbb{R} & \qquad \text{The set of real numbers} \\ \mathbb{C} & \qquad \text{The set of complex numbers} \\ \overline{w} & \qquad \text{The complex conjugate of } w \in \mathbb{C} \\ \Re(w) & \qquad \text{The real part of } w \in \mathbb{C} \\ \Im(w) & \qquad \text{The imaginary part of } w \in \mathbb{C} \\ [F, G] & \qquad \text{The commutator }FG - GF\text{ of the operators }F\text{ and }G \\ U_g & \qquad \text{Precomposition by }g\text{, p. 2}\\ \langle \cdot, \cdot\rangle & \qquad \text{The real inner product on }L^2\text{, p. 38} \\ T^* & \qquad \text{The formal real adjoint of a linear operator } T\text{, p. 38} \\ \Omega(t) & \qquad \text{The fluid region associated to }z\text{ at time }t\text{, p. 1} \\ \Sigma(t) & \qquad \text{The boundary of the fluid region associated to }z\text{ at time }t\text{, p. 1} \\ z(\alpha, t) & \qquad \text{The parametrization of }\Sigma(t)\text{ in Lagrangian coordinates }\alpha\text{, p. 1} \\ \mathfrak{a} & \qquad \text{see p. 1}\\ \,\text{p.v.} \int & \qquad \text{The principal value integral, p. 1} \\ \mathfrak{H} & \qquad \text{The Hilbert transform associated to }z\text{, p. 1} \\ \mathfrak{K} & \qquad \text{The double layer potential operator associated to }z\text{, p. 4} \\ \kappa & \qquad \text{The change of variables taking }z\text{ to }\zeta\text{, \eqref{ChangeOfVariables}} \\ \zeta & \qquad \text{The new water wave interface variable, \eqref{NewVariableNotation}} \\ \xi & \qquad \text{The perturbation of }\zeta\text{ from the still water solution, p. 5} \\ \mathcal{H} & \qquad \text{The Hilbert transform associated to }\zeta\text{, p. 3} \\ \mathcal{K} & \qquad \text{The double layer potential operator associated to the curve }\zeta\text{, p. 51} \\ \mathcal{H}_\gamma & \qquad \text{The Hilbert transform associated to the curve }\gamma\text{, p. 
3} \\ \mathcal{K}_\gamma & \qquad \text{The double layer potential operator associated to the curve }\gamma \\ \mathcal{H}_0 & \qquad \text{The Hilbert transform associated to the curve }\alpha\text{, p. 3} \\ D_t & \qquad \text{The transformed time derivative, \eqref{NewVariableNotation}} \\ \mathcal{P} & \qquad \text{The transformed linear water wave operator, \eqref{NewVariableNotation}} \\ b & \qquad \text{see \eqref{NewVariableNotation}} \\ \mathcal{A} & \qquad \text{see \eqref{NewVariableNotation}} \\ \end{align*} \begin{align*} G & \qquad \text{The cubic nonlinearity of the transformed water wave equation, \eqref{GFormula}} \\ \hat{f} & \qquad \text{The Fourier transform of }f\text{, p. 6} \\ H^s & \qquad \text{The }L^2\text{ Sobolev space of index }s\text{, p. 6} \\ \dot{H}^s & \qquad \text{The }L^2\text{ homogeneous Sobolev space of index }s\text{, p. 6} \\ W^{s, \infty} & \qquad \text{The }L^\infty\text{ Sobolev space of index }s\text{, p. 6} \\ C([0, T]; X) & \qquad \text{The Banach space of functions }f \in X \times [0, T]\text{ with} \\
& \qquad \qquad \|f\|_X\text{ varying continuously in }[0, T]\text{, p. 6} \\ \mathfrak{S}(t) & \qquad \text{The supremum of a modified energy of }\zeta\text{, \eqref{ZetaLocalAPrioriBound}} \\ \zeta^{(n)} & \qquad \text{Terms of the formal power expansion of }\zeta\text{ in }\epsilon\text{, \eqref{TildeZetaFormula}} \\ \mathcal{H}_n & \qquad n\text{th order term of the formal expansion of }\mathcal{H}\text{, \eqref{HilbertFormulas}} \\ \mathcal{H}^{(n)} & \qquad \text{Operator at the order }\epsilon^n\text{ of the formal expansion of }\mathcal{H} \\ & \qquad \qquad \text{ acting on a multiscale function, \eqref{MultiscaleHilbertFormulas}} \\ \tilde{\zeta} & \qquad \text{Formal multiscale approximation of }\zeta\text{, see \eqref{TildeZetaFormula}} \\ k & \qquad \text{The wave number of the wave packet approximation to }\zeta\text{, p. 12} \\ \omega & \qquad \text{The wave frequency of the wave packet approximation to }\zeta\text{, p. 12} \\ \phi & \qquad \text{The phase of the wave packet approximation to }\zeta\text{, p. 12} \\ \omega^\prime & \qquad \text{The group velocity, p. 13} \\ \omega^{\prime \prime} & \qquad \text{The dispersion coefficient, p. 15} \\ B & \qquad \text{The slowly varying envelope of the leading order of }\tilde{\zeta}\text{, \eqref{Zeta1Formula}}\\ \tilde{\xi} & \qquad \text{Perturbation of }\tilde{\zeta}\text{ from the still water solution, p. 17} \\ \tilde{b} & \qquad \text{see \eqref{TildeBFormula}} \\ \tilde{\mathcal{A}} & \qquad \text{see \eqref{TildeAFormula}} \\ \tilde{D}_t & \qquad \text{see \eqref{TildeDtFormulas}} \\ \tilde{\mathcal{P}} & \qquad \text{see \eqref{TildeDtFormulas}} \\ r & \qquad \text{The difference between the true solution }\zeta\text{ and the} \\ & \qquad \qquad \text{approximate solution }\tilde{\zeta}\text{ of the water wave equations, p. 18} \\ \mathcal{O}(\epsilon^n) & \qquad \text{Landau notation for functions in }H^s\text{, p. 18} \\ E_s & \qquad \text{The modified energy of }r\text{, p. 
18} \\ \mathcal{E} & \qquad \text{The energy of }r\text{, \eqref{FullEnergyDefinition}} \\ \mathscr{A}^s & \qquad \text{The manifold of admissible initial conditions for }(z, z_t)\text{, p. 41} \\ +\,\text{c.c.} & \qquad \text{Adds the complex conjugate of the preceding term, p. 52} \end{align*}
\section{Estimates of Singular Integrals in Sobolev Space}
The purpose of this appendix is to provide bounds for singular integrals of the form \begin{equation}\label{S1Formula} S_1(A, f) = \int \prod_{j = 1}^m \frac{A_j(\alpha) - A_j(\beta)}{\gamma_j(\alpha) - \gamma_j(\beta)} \frac{f(\beta)}{\gamma_0(\alpha) - \gamma_0(\beta)} d\beta \end{equation} and \begin{equation}\label{S2Formula} S_2(A, f) = \int \prod_{j = 1}^m \frac{A_j(\alpha) - A_j(\beta)}{\gamma_j(\alpha) - \gamma_j(\beta)} f_\beta(\beta) d\beta\end{equation} in Sobolev space. For these singular integrals to be well-defined we insist that the $\gamma_j$ each obey the chord-arc condition \eqref{ChordArcCondition}. Our starting point is the result of Coifman-Meyer-McIntosh, expanded upon by Wu, which bounds these singular integrals in $L^2$.
\begin{theorem}\label{CMMWEstimates}
(cf. \cite{CoifmanMeyerMcintoshL2Bounds} and \cite{WuAlmostGlobal2D}) Both $\|S_1(A, f)\|_{L^2}$ and $\|S_2(A, f)\|_{L^2}$ are bounded by $$C \prod_{j = 1}^m \|A_j^\prime\|_{X_j} \|f\|_{X_0},$$ where one of the $X_0, X_1, \ldots, X_m$ is equal to $L^2$ and the rest are $L^\infty$. The constant $C$ depends on $\|\gamma_0^\prime\|_{L^\infty}, \|\gamma_1^\prime\|_{L^\infty}, \ldots, \|\gamma_m^\prime\|_{L^\infty}$. \end{theorem}
Observe that the kernels of the operators $S_1$ and $S_2$ are functions of differences of the form $F\left(f_1(\alpha) - f_1(\beta), \ldots, f_n(\alpha) - f_n(\beta)\right)$. When the differential operator $(\partial_\alpha + \partial_\beta)$ acts on such differences of functions, it yields another function of the same kind; for example, the Chain Rule becomes $$(\partial_\alpha + \partial_\beta)F\left(f_1(\alpha) - f_1(\beta), \ldots, f_n(\alpha) - f_n(\beta)\right) = \sum_{i = 1}^n (\partial_i F) (\partial_\alpha + \partial_\beta)(f_i(\alpha) - f_i(\beta)).$$ The other rules of differential calculus hold as well. Hence applying $(\partial_\alpha + \partial_\beta)$ to a kernel of $S_1$ or $S_2$ with $m$ factors yields another kernel which is a sum of terms of the same type with $m + 1$ factors. This allows us to cleanly prove the following
\begin{proposition}\label{SingIntSobolevEstimates}
Let $n \geq 3$ be given, and suppose that \eqref{ChordArcCondition} holds. Then $$\|S_2(A, f)\|_{H^n} \leq C \prod_{j = 1}^m \|A'_j\|_{Y_j}\|f\|_Z,$$ where each of the Banach spaces $Y_j$, $j = 1, \ldots, m$, is either $H^{n-1}$ or $W^{n - 2, \infty}$, and $Z$ is either $H^n$ or $W^{n - 1, \infty}$.
Moreover, the constant $C = C\left( \|\partial_\alpha \gamma_j - 1\|_{H^{n-1}}, j=1,\dots, m\right)$. \end{proposition}
\begin{proof} Write $S_2 f = \int K(\alpha, \beta) f_\beta(\beta) d\beta$. To exploit the observations preceding the theorem, we expand $\partial_\alpha^n S_2 f$ using the Binomial Theorem applied to $((\partial_\alpha + \partial_\beta) - \partial_\beta)^n$: \begin{align*} \partial_\alpha^n S_2 f(\alpha) & = \sum_{j=0}^n \binom n j\int (-1)^{j}(\partial_\alpha + \partial_\beta)^{n-j}\partial_\beta^{j} K(\alpha, \beta) f_\beta(\beta) d\beta \\ & = \sum_{j= 0}^n\binom n j \int (\partial_\alpha + \partial_\beta)^{n-j}K(\alpha, \beta) \partial_\beta^{j} f_\beta(\beta) d\beta \end{align*} After applying routine calculus identities, we see that $(\partial_\alpha + \partial_\beta)^{n-j}K(\alpha, \beta)$ yields a sum of terms, each of which is another kernel expressible in the form \eqref{S2Formula}. Now we apply Theorem \ref{CMMWEstimates} to estimate each term in $L^2$.
We proceed by cases. Since $n \geq 3$, it suffices to consider the cases where $j = 0$ and $j = 1$; in all other cases one can estimate however one pleases using Theorem \ref{CMMWEstimates}. If a difference of the form $A_j^{(n - 1)}(\alpha) - A_j^{(n - 1)}(\beta)$ or $\gamma_j^{(n-1)}(\alpha) - \gamma_j^{(n-1)}(\beta)$ occurs in some kernel, estimate this difference in $L^2$; observe that only one of these can occur in a given singular integral since $n \geq 3$. If a difference of the form $A_j^{(n)}(\alpha) - A_j^{(n)}(\beta)$ or $\gamma_j^{(n)}(\alpha) - \gamma_j^{(n)}(\beta)$ occurs in some kernel, split the integral into a difference of singular integrals of the form $S_1$ and estimate using Theorem \ref{CMMWEstimates}. \end{proof}
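The kernel identity underlying the proof can also be checked symbolically; the following is a minimal sketch for a single factor ($m = 1$), with purely illustrative function names, showing that $(\partial_\alpha + \partial_\beta)$ sends a one-factor difference-quotient kernel to a sum of kernels of the same type with one more factor:

```python
import sympy as sp

a, b = sp.symbols('alpha beta')
A = sp.Function('A')
g = sp.Function('gamma')

# A single-factor kernel (A(alpha) - A(beta)) / (gamma(alpha) - gamma(beta))
K = (A(a) - A(b)) / (g(a) - g(b))

# Apply the operator (d/d alpha + d/d beta) to the kernel
D = sp.diff(K, a) + sp.diff(K, b)

# Expected: a sum of difference-quotient kernels of the same form,
# the second term now carrying two factors
expected = (sp.diff(A(a), a) - sp.diff(A(b), b)) / (g(a) - g(b)) \
    - (A(a) - A(b)) * (sp.diff(g(a), a) - sp.diff(g(b), b)) / (g(a) - g(b))**2

assert sp.simplify(D - expected) == 0
```

The same computation iterated $n$ times is exactly the structure exploited in the binomial expansion above.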
{}
\end{document}
"id": "1101.0545.tex",
"language_detection_score": 0.5601228475570679,
"char_num_after_normalized": null,
"contain_at_least_two_stop_words": null,
"ellipsis_line_ratio": null,
"idx": null,
"lines_start_with_bullet_point_ratio": null,
"mean_length_of_alpha_words": null,
"non_alphabetical_char_ratio": null,
"path": null,
"symbols_to_words_ratio": null,
"uppercase_word_ratio": null,
"type": null,
"book_name": null,
"mimetype": null,
"page_index": null,
"page_path": null,
"page_title": null
} | arXiv/math_arXiv_v0.2.jsonl | null | null |
\begin{document}
\title{Initial boundary value problems for a fractional differential equation with hyper-Bessel operator} \begin{abstract} Direct and inverse source problems of a fractional diffusion equation with regularized Caputo-like counterpart hyper-Bessel operator are considered. Solutions to these problems are constructed based on an appropriate eigenfunction expansion, and results on existence and uniqueness are established. To solve the resultant equations, a solution to a non-homogeneous fractional differential equation with regularized Caputo-like counterpart hyper-Bessel operator is also presented. \end{abstract}
\section{Introduction} In this paper, we consider the following fractional differential equation involving the hyper-Bessel operator with a source term $F$: \begin{equation} \,^{C}\left(t^{\theta}\dfrac{\partial}{\partial t}\right)^{\alpha}u(x,t)-u_{xx}(x,t)=F. \end{equation} We study both a direct problem, where $F=f(x,t)$ is a known function of space and time, and an inverse source problem, where $F=f(x)$ is an unknown function of space only. Here $\,^{C}\left(t^{\theta}\dfrac{\partial}{\partial t}\right)^{\alpha}$ stands for a regularized Caputo-like counterpart hyper-Bessel operator of order $0<\alpha<1$ (see formula $(\ref{relation})$). The hyper-Bessel operator was introduced by Dimovski in \cite{dim} and arises in various problems, such as fractional relaxation \cite{garra} and fractional diffusion models \cite{garra1}. For example, the authors in \cite{garra1} used the hyper-Bessel operator to describe heat diffusion for fractional Brownian motion; their analysis is based on converting the fractional power of the hyper-Bessel operator into an Erd\'elyi-Kober (E-K) fractional integral. For more details about fractional Brownian motion, the reader is referred to \cite{garra1, fbm}.\\ In fact, expressing the hyper-Bessel operator in terms of the Erd\'elyi-Kober fractional integral plays a key role in solving fractional differential equations involving the hyper-Bessel operator, as illustrated in this paper as well. Some results related to the hyper-Bessel operator are given in \cite{garra},\cite{kiry2}.\\ In \cite{kiry}, AL-Saqabi and Kiryakova considered a Volterra integral equation of the second kind and a fractional differential equation, involving the E-K integral or differential operator. 
They found explicit solutions to these equations using a transmutation method which reduces the solutions to the known integral solutions of Riemann-Liouville fractional equations.\\ The purpose of this paper is to prove existence and uniqueness of solutions to a fractional diffusion equation involving a regularized Caputo-like counterpart hyper-Bessel operator, considering both direct and inverse source problems.
\section{Preliminaries} In this section, we recall some definitions and results related to the fractional hyper-Bessel operator which will be used later in this paper. We start by writing down the definition of the Erd\'elyi-Kober fractional integral. \begin{definition}(see \cite{kiry, garra}) The Erd\'elyi-Kober fractional integral of a function $f(t)$ with arbitrary parameters $\delta>0$, $\gamma\in\mathrm{R}$ and $\beta >0$ is defined as \begin{equation} I_{\beta}^{\gamma,\delta}f(t)=\dfrac{t^{-\beta(\gamma+\delta)}}{\Gamma(\delta)}\int_{0}^{t}(t^{\beta}-\tau^{\beta})^{\delta-1} \tau^{\beta\gamma}f(\tau)\, d(\tau^{\beta}), \end{equation} which, up to a power weight, reduces to the well-known Riemann-Liouville fractional integral when $\gamma= 0$ and $\beta =1$.\\ For $\delta<0,$ the interpretation is via the integro-differential operator $$I_{\beta}^{\gamma, \delta}f(t)=(\gamma+\delta+1)I_{\beta}^{\gamma,\delta+1}f(t)+\dfrac{1}{\beta}I_{\beta}^{\gamma,\delta+1} \left( t \dfrac{d}{dt}f(t)\right).$$ \end{definition} In the following theorem, we present an explicit solution to an integral equation involving the E-K fractional integral. \begin{theorem}\label{eksol}(see \cite{kiry}, Theorem 1) The unique solution $y(t)\in C_{\beta\mu},$ $\mu\geq \max\left\lbrace 0,-\gamma\right\rbrace-1$ to the following fractional integral equation of the second kind: $$y(t)-\lambda t^{\beta \delta}I_{\beta}^{\gamma,\delta}y(t)=f(t),$$ or equivalently, $$y(t)-\lambda t^{-\beta\gamma}\int_{0}^{t}\dfrac{(t^{\beta}-\tau^{\beta})^{\delta-1}}{\Gamma(\delta)}\tau^{\beta\gamma}y(\tau)\, d(\tau^{\beta})=f(t),$$ with $f\in C_{\beta\mu}$, has the explicit form of a convolutional-type integral: \begin{equation}y(t)=f(t)+\lambda t^{-\beta \gamma}\int_{0}^{t}(t^{\beta}-\tau^{\beta})^{\delta-1}E_{\delta,\delta}\left[ \lambda(t^{\beta}-\tau^{\beta})^{\delta}\right]\tau^{\beta\gamma}f(\tau)\, d(\tau^{\beta}). 
\end{equation} \end{theorem} Next, we use the E-K integral to define the hyper-Bessel operator and, subsequently, its regularized Caputo-like counterpart. \begin{definition}(see \cite{garra}) The hyper-Bessel operator of order $0<\alpha<1$ is defined in terms of the E-K integral as follows: \begin{eqnarray} \left(t^{\theta}\dfrac{d}{dt}\right)^{\alpha}f(t)=\left\lbrace \begin{array}{ll} (1-\theta)^{\alpha}t^{-(1-\theta)\alpha} I_{1-\theta}^{0,-\alpha}f(t),& \text{ if } \theta<1,\\ (\theta-1)^{\alpha}I_{1-\theta}^{-1,-\alpha}t^{(1-\theta)\alpha}f(t),&\text{ if } \theta>1. \end{array}\right. \end{eqnarray} \end{definition} Note that when $\theta=0$, the hyper-Bessel operator coincides with the Riemann-Liouville fractional derivative.\\ Now, recall that the Caputo and Riemann-Liouville fractional derivatives of order $0<\alpha<1$ are defined as (see \cite{gm}): \[
\begin{array}{l}\displaystyle ^CD_{0|t}^{\alpha}f(t)=\dfrac{1}{\Gamma(1-\alpha)}\displaystyle\int_{0}^{t}\frac{f' (\tau)}{(t-\tau)^{\alpha}}\, d\tau,\\
\displaystyle D_{0|t}^{\alpha}f(t)=\dfrac{1}{\Gamma(1-\alpha)}\displaystyle\dfrac{d}{dt}\int_{0}^{t}\frac{f (\tau)}{(t-\tau)^{\alpha}}\, d\tau,\end{array}\] respectively, and they are related by (\cite{gm}): $$
\displaystyle ^CD_{0|t}^{\alpha}f(t)=D_{0|t}^{\alpha}(f(t)-f(0^{+})). $$ Using the above relation, we can express the regularized Caputo-like counterpart hyper-Bessel operator as : \begin{equation}\label{relation} \,^{C}\left(t^{\theta}\dfrac{d}{dt}\right)^{\alpha}f(t)=\left(t^{\theta}\dfrac{d}{dt}\right)^{\alpha}f(t)-\dfrac{f(0)\,t^{-\alpha(1-\theta)}}{(1-\theta)^{-\alpha}\Gamma(1-\alpha)}, \end{equation} and in terms of E-K fractional integral : \begin{equation}\label{relation1}\,^{C}\left(t^{\theta}\dfrac{d}{dt}\right)^{\alpha}f(t)=(1-\theta)^{\alpha}t^{-\alpha(1-\theta)}I_{1-\theta}^{0,-\alpha} \left( f(t)-f(0) \right).\end{equation} Also, we need to recall the Mittag-Leffler function of one parameter : $$ E_{\alpha}(z)=\sum_{k=0}^{\infty}\dfrac{z^k}{\Gamma(\alpha k +1)},\quad\text{Re}(\alpha)>0,\, z\in C, $$ and the Mittag-Leffler of two parameters $$ E_{\alpha,\beta^{*}}(z)=\sum_{k=0}^{\infty}\dfrac{z^k}{\Gamma(\alpha k +\beta^{*})},\quad\text{Re}(\alpha)>0,\text{ Re}(\beta^{*})>0,\, z\in C. $$ Now, we need the following results related to the Mittag-Leffler function \begin{theorem}\label{intint}(see\cite{Prabhakar}) If $\text{Re}(\mu)>0$, $\text{Re}(\beta^*)>0,$ $\lambda$ is a complex number and $f(t)$ is an integrable function, then \begin{equation*}\label{simplified} \begin{array}{ll} \displaystyle\int_{a}^{x} (x-u)^{\beta^*-1}E_{\alpha,\beta^*}\left( \lambda(x-u)^{\alpha}\right) du& \displaystyle\int_{a}^{u} \dfrac{(u-t)^{\mu-1}}{\Gamma(\mu)}f(t)dt=\\&\displaystyle\int_{a}^{x}(x-t)^{\beta^*+\mu-1} E_{\alpha,\,\beta^*+\mu}\left( \lambda(x-t)^{\alpha}\right) f(t)dt. \end{array} \end{equation*} \end{theorem} \begin{theorem}(see\cite{pod}) Let $\alpha<2,$ $\beta^{*} \in R$ and $\dfrac{\pi\alpha}{2}<\mu < \min\left\lbrace \pi, \pi\alpha\right\rbrace.$ Then we have the following estimate \begin{equation}\label{MLbound}
\left| E_{\alpha,\beta^{*}}(z)\right| \leq \dfrac{M}{ 1+|z|},\quad \mu \leq |\text{arg} z|\leq \pi,\,|z|\geq 0.\end{equation} Here and in the rest of the paper, $M$ denotes a positive constant. \end{theorem} In the following theorem, we present a homogeneous fractional equation with regularized Caputo-like counterpart hyper-Bessel operator and its explicit solution as proved in \cite{garra}. \begin{theorem}\label{homsol}( see \cite{garra}, Theorem 2.1) The function $$u(t)=E_{\alpha}\left( -\dfrac{\lambda t^{\alpha(1-\theta)}}{(1-\theta)^{\alpha}} \right), $$ solves the fractional Cauchy problem \[\left\lbrace \begin{array}{rll} \,^{C}\left(t^{\theta}\dfrac{d}{dt}\right)^{\alpha}u(t)&=-\lambda u(t), &\alpha\in(0,1),\;\theta<1,\; t\geq 0,\\ u(0)&=1.& \end{array}\right. \] \end{theorem} In this paper, we consider a more general case, namely, a non-homogeneous fractional differential equation with a regularized Caputo-like counterpart hyper-Bessel operator presented in the following lemma: \begin{lemma} \label{nonhomo FDE} Consider the following non-homogeneous fractional differential equation \begin{equation}\label{chp} \,^{C}\left(t^{\theta}\dfrac{d}{dt}\right)^{\alpha}u(t)=-\lambda u(t)+f(t), \;\alpha\in(0,1),\;\theta<1,\; t\geq 0, \end{equation} with $u(0)=u_0, $ where $u_0 $ is a constant. Then, its solution is given in the integral form \begin{equation}\label{nonsol} \begin{array}{ll} u(t)&=u_0\; E_{\alpha}\left( \lambda^{*}t^{\rho\alpha}\right)+ \dfrac{1}{\rho^{\alpha}\Gamma(\alpha)}\displaystyle\int_{0}^{t}(t^{\rho}-x^{\rho})^{\alpha-1}f(x)\, d(x^{\rho})\\ &+\dfrac{\lambda^{*}}{\rho^{\alpha}}\displaystyle\int_{0}^{t}(t^{\rho}-x^{\rho})^{2\alpha-1}E_{\alpha,2\alpha}\left(\lambda^{*} (t^{\rho}-x^{\rho})^{\alpha}\right) f(x)\, d(x^{\rho}), \end{array}
\end{equation} where, $\rho=1-\theta$ and $\lambda^{*}=-\dfrac{\lambda}{\rho^{\alpha}}.$ Moreover, if $f=f_0$ is constant, then the solution reduces to $$u(t)= C^{*}\;E_{\alpha}\left(\lambda^{*} t^{\rho\alpha}\right)+\dfrac{f_0}{\lambda}.$$ where $C^{*}=\left(u_0-\dfrac{f_0}{\lambda} \right)$. In particular, when $f=0$, we have $$u(t)=u_0 \;E_{\alpha}\left(\lambda^{*} t^{\rho\alpha}\right).$$ \end{lemma} \begin{proof} First, using relation $(\ref{relation1}),$ equation (\ref{chp}) can be written as $$ \rho^{\alpha}t^{-\alpha\rho}I_{\rho}^{0,-\alpha} \left( u(t)-u_0 \right)=-\lambda u(t)+f(t),$$ which, on dividing by $\rho^{\alpha}t^{-\alpha\rho},$ becomes $$I_{\rho}^{0,-\alpha} \left( u(t)-u_0 \right)=\lambda^{*} t^{\rho\alpha}u(t)+ \dfrac{t^{\rho\alpha}}{\rho^{\alpha}}f(t),$$ where $\lambda^{*}=-\dfrac{\lambda}{\rho^{\alpha}}.$ Using the following property of the inverse of E-K integral (see\cite{mcbride}, Theorem 2.7): $$(I_{m}^{\eta,\alpha})^{-1}=I_{m}^{\eta+\alpha,-\alpha},$$ the above equation can be written as an integral equation, namely, $$u(t)-\lambda^{*} I_{\rho}^{-\alpha,\alpha}\left( t^{\rho\alpha}u(t)\right)=u_0+\dfrac{1}{\rho^{\alpha}}I_{\rho}^{-\alpha,\alpha}\left( t^{\rho\alpha}f(t)\right),$$ or equivalently, $$u(t)-\dfrac{\lambda^{*}}{\Gamma(\alpha)}\int_{0}^{t}(t^{\rho}-\tau^{\rho})^{\alpha-1}u(\tau) \, d(\tau^{\rho})=u_0+\dfrac{1}{\rho^{\alpha}\Gamma(\alpha)}\int_{0}^{t}(t^{\rho}-\tau^{\rho})^{\alpha-1}f(\tau)\, d(\tau^{\rho}).$$ Whereupon using Theorem $\ref{eksol}$, we have \begin{equation}\label{intsol} \begin{array}{ccc}
u(t)&=&f^{*}(t)+\lambda^{*}\displaystyle\int_{0}^{t}(t^{\rho}-\tau^{\rho})^{\alpha-1}E_{\alpha,\alpha}(\lambda^{*}(t^{\rho}-\tau^{\rho})^{\alpha})f^{*}(\tau)\, d(\tau^\rho)\\
&+& u_{0}\left(1+ \lambda^{*}\displaystyle\int_{0}^{t}(t^{\rho}-\tau^{\rho})^{\alpha-1}E_{\alpha,\alpha}(\lambda^{*}(t^{\rho}-\tau^{\rho})^{\alpha})\, d(\tau^\rho)\right),
\end{array} \end{equation} where, $f^{*}(t)=\dfrac{1}{\rho^{\alpha}\Gamma(\alpha)}\displaystyle\int_{0}^{t}(t^{\rho}-x^{\rho})^{\alpha-1} f(x)\, d(x^{\rho}).$\\ The first integral in (\ref{intsol}) can be simplified using Theorem $\ref{intint}$ to the following: \begin{equation}\label{intf} \dfrac{\lambda^{*}}{\rho^{\alpha}}\displaystyle\int_{0}^{t}(t^{\rho}-x^{\rho})^{2\alpha-1}E_{\alpha,2\alpha}\left(\lambda^{*} (t^{\rho}-x^{\rho})^{\alpha}\right) f(x)\, d(x^{\rho}), \end{equation} and the second integral in $(\ref{intsol})$ can be also simplified as follows: \begin{equation}\label{u0} \begin{array}{l} u_{0}\left(1+ \lambda^{*}\displaystyle\int_{0}^{t}(t^{\rho}-\tau^{\rho})^{\alpha-1}E_{\alpha,\alpha}(\lambda^{*}(t^{\rho}-\tau^{\rho})^{\alpha})\, d(\tau^\rho)\right)\\ =u_{0}\left( 1+\displaystyle\sum_{k=0}^{\infty} \dfrac{(\lambda^{*})^{k+1}}{\Gamma(\alpha k+\alpha)}\int_{0}^{t}(t^{\rho}-\tau^{\rho})^{\alpha k+\alpha-1} \, d(\tau^{\rho})\right)\\=u_{0}\left( 1+\displaystyle\sum_{k=0}^{\infty} \dfrac{(\lambda^{*})^{k+1}}{\Gamma(\alpha( k+1)+1)}t^{\rho\,\alpha( k+1)}\right)\\ =u_{0}\left( 1+\displaystyle\sum_{m=1}^{\infty} \dfrac{(\lambda^{*})^{m}}{\Gamma(\alpha m+1)}t^{\rho\alpha m}\right)\\ =u_{0}\,E_{\alpha}(\lambda^{*}t^{\rho\alpha}). \end{array}\end{equation} Substituting the two simplified forms $(\ref{intf}) $ and $(\ref{u0})$ into $(\ref{intsol}),$ we get the integral solution $(\ref{nonsol}).$\\ Now, if $ f(t)=f_0$ is constant, then evaluating the first integral in the expression $(\ref{nonsol})$ gives
$$\dfrac{f_{0}}{\rho^{\alpha}\Gamma(\alpha)}\displaystyle\int_{0}^{t}(t^{\rho}-x^{\rho})^{\alpha-1} \, d(x^{\rho})=\dfrac{f_0 t^{\rho \alpha}}{\rho^{\alpha}\Gamma(\alpha+1)}.$$ Substituting back into $(\ref{nonsol})$ and proceeding in a similar way as in $(\ref{u0})$, the expression of $u(t)$ can be reduced to $$u(t)=u_{0}E_{\alpha}(\lambda^{*}t^{\rho\alpha})-\dfrac{f_{0} }{\lambda} \displaystyle\sum_{k=1}^{\infty}\dfrac{\lambda^{*k}\,t^{\rho\alpha k}}{\Gamma(\alpha k+1)},$$
which can be rewritten as $$u(t)=\dfrac{f_0}{\lambda}+ C^{*}\; E_{\alpha}\left(\lambda^{*} t^{\rho\alpha}\right), $$ where $C^{*}=\left(u_0-\dfrac{f_0}{\lambda} \right)$. Finally, if $f(t)=f_0=0$, then the expression of $u(t)$ can be further reduced to $$u(t)= u_0\; E_{\alpha}\left(\lambda^{*} t^{\rho\alpha}\right), $$ which is consistent with Theorem \ref{homsol}. \end{proof} The rest of this paper is devoted to the main results. In the remaining two sections, we present existence and uniqueness results of solutions to direct and inverse source problems involving a regularized Caputo-like counterpart hyper-Bessel operator. \section{A Direct Problem} \subsection{Statement of Problem and Main Result} Find a function $u(x,t)$ in a domain $\Omega = \left\{ {0 < x < 1, \, 0 < t < T} \right\}$ satisfying \begin{equation}\label{direct} \,^{C}\left(t^{\theta}\dfrac{\partial}{\partial t}\right) ^{\alpha} u(x,t)-u_{xx}(x,t)=f(x,t), \quad (x,t)\in \Omega, \end{equation} the boundary conditions \begin{equation} \label{dbcond} u(0,t)=0,\quad u(1,t)=0,\quad 0\leq t\leq T, \end{equation} and the initial condition \begin{equation}\label{Icond} u(x,0)=\psi(x),\quad 0\leq x\leq 1, \end{equation} where $f(x,t)$ is a given function, $\theta<1$, $0<\alpha<1$ and $^C\left(t^{\theta}\dfrac{\partial}{\partial t }\right) ^{\alpha}$ is the regularized Caputo-like counterpart hyper-Bessel operator defined in $(\ref{relation})$. Our aim is to prove the existence and uniqueness of a solution to the problem (\ref{direct}) - (\ref{Icond}) as stated in the following theorem: \begin{theorem} Assume that the following conditions hold: \begin{itemize} \item $\psi(x)\in C[0,1]$ such that $\psi(0)=\psi(1)=0$ and $\psi'(x)\in L^{2}(0,1),$ \item $f(\cdot,t)\in C^{3}[0,1]$ such that $f(0,t)=f(1,t)=f_{xx}(0,t)=f_{xx}(1,t)=0,$ and $\dfrac{\partial^4}{\partial x^4} f(\cdot, t)\in L(0,1),$ \end{itemize} then the problem $(\ref{direct})-(\ref{Icond})$ has a unique solution given by $$
u(x,t) =\sum_{k=1}^{\infty} \left[ \psi_k\, E_{\alpha}\left( \dfrac{-k^2 \pi^2}{(1-\theta)^{\alpha}}t^{(1-\theta)\alpha}\right)+F_{k}(t)\right]\sin(k \pi x),$$ where $$ \begin{array}{rl} F_{k}(t)=&\dfrac{1}{(1-\theta)^{\alpha}\Gamma(\alpha)}\displaystyle\int_{0}^{t}\left( t^{(1-\theta)}-\tau^{(1-\theta)}\right) ^{\alpha-1}f_{k}(\tau)\, d(\tau^{(1-\theta)})-\dfrac{k^2\pi^2}{(1-\theta)^{2\alpha}}\\ &\displaystyle\int_{0}^{t}\left( t^{(1-\theta)}-y^{(1-\theta)}\right) ^{2\alpha-1} E_{\alpha,2\alpha}\left(-\dfrac{k^2\pi^2 (t^{(1-\theta)}-y^{(1-\theta)})^{\alpha}}{(1-\theta)^{\alpha}}\right)f_{k}(y)\, d(y^{(1-\theta)}), \end{array}$$
$$\psi_k = 2 \int_{0}^{1} \psi(x) \sin(k\pi x ) \, dx, \quad k=1,2,3,\cdots$$ $$f_{k}(t)=2 \int_{0}^{1} f(x,t) \sin(k\pi x ) \, dx, \quad k=1,2,3,\cdots$$ \end{theorem} \subsection{Proof of Result} \subsubsection{Existence of Solution} Applying the method of separation of variables to the homogeneous equation corresponding to (\ref{direct}), along with the homogeneous boundary conditions (\ref{dbcond}), yields the following spectral problem: \begin{eqnarray}\label{sepr}
\left\lbrace \begin{array}{l} X''+\lambda X=0,\\ X(0)=0,\quad X(1)=0. \end{array}\right. \end{eqnarray} It is known that the above problem is self-adjoint and has the following eigenvalues $$\lambda_k=(k\pi)^2,\quad k=1,2,3,\cdots$$ and the corresponding eigenfunctions are \begin{equation} \label{sys1} X_{k}=\sin(k \pi x), \quad k=1,2,3,\cdots. \end{equation} Using the fact that the system of eigenfunctions (\ref{sys1}) forms an orthogonal basis in $L^{2}(0,1)$ \cite{Moiseev}, we can write the solution $u(x,t)$ in the form of a series expansion as follows: \begin{equation}\label{solu} u(x,t)=\sum_{k=1}^{\infty} u_k(t) \sin(k \pi x), \end{equation} and \begin{equation}\label{souf} f(x,t)=\sum_{k=1}^{\infty} f_k(t) \sin(k \pi x), \end{equation} where $u_{k}(t)$ is the unknown to be determined and $f_{k}(t)$ is known and given by $$f_{k}(t)=2\int_{0}^{1} f(x,t)\sin(k\pi x)dx.$$ Substituting $(\ref{solu})$ and $(\ref{souf})$ into $(\ref{direct})$ and $(\ref{Icond})$, we get the linear fractional differential equation \begin{equation}\label{fde}
\,^{C}\left(t^{\theta}\dfrac{d}{dt}\right) ^{\alpha} u_k(t)+ k^2 \pi^2 u_k(t)=f_{k}(t),\end{equation} with the initial condition $$u_k(0)=\psi_k,$$ where $\psi_k$ is the coefficient of the series expansion of $\psi(x)$ in terms of the orthogonal basis (\ref{sys1}), i.e., $$\psi_k =2\int_{0}^{1} \psi(x) \sin(k\pi x ) \, dx.$$ Whereupon using Lemma \ref{nonhomo FDE}, the solution of equation $(\ref{fde})$ is given by $$ u_{k}(t)=\psi_k \,E_{\alpha}\left( \dfrac{-k^2 \pi^2}{\rho^{\alpha}}t^{\rho\alpha}\right)+ F_{k}(t), $$ where $\rho=1-\theta$ and $$ \begin{array}{rl} F_{k}(t)=&\dfrac{1}{\rho^{\alpha}\Gamma(\alpha)}\displaystyle\int_{0}^{t}\left( t^{\rho}-\tau^{\rho}\right) ^{\alpha-1}f_{k}(\tau)\, d(\tau^{\rho})\\ -&\dfrac{k^2\pi^2}{\rho^{2\alpha}}\displaystyle\int_{0}^{t}\left( t^{\rho}-y^{\rho}\right) ^{2\alpha-1}E_{\alpha,2\alpha}\left(-\dfrac{k^2\pi^2}{\rho^{\alpha}}(t^{\rho}-y^{\rho})^{\alpha}\right)f_{k}(y)\, d(y^{\rho}). \end{array}$$ Consequently, the expression of $u(x,t)$ can be written as \begin{equation} u(x,t) =\sum_{k=1}^{\infty} \left( \psi_k\, E_{\alpha}\left( \dfrac{-k^2 \pi^2}{\rho^{\alpha}}t^{\rho\alpha}\right)+F_{k}(t)\right) \sin(k \pi x). \end{equation} To complete the proof of existence, we need to prove the uniform convergence of the series representations of $$u(x,t),\,^{C}\left(t^{\theta}\dfrac{\partial}{\partial t }\right) ^{\alpha} u(x,t),\,u_{x}(x,t),\,u_{xx}(x,t).$$ We start with the series representation of $u(x,t)$. Since $f''_{k}(t)=-k^2\pi^2 f_{k}(t)$ by integration by parts, we may rewrite $F_{k}(t)$ as follows: $$ \begin{array}{rl} F_{k}(t)=&-\dfrac{1}{k^2\pi^2\rho^{\alpha}\Gamma(\alpha)}\displaystyle\int_{0}^{t}\left( t^{\rho}-\tau^{\rho}\right) ^{\alpha-1}f''_{k}(\tau)\, d(\tau^{\rho})\\ +&\dfrac{1}{\rho^{2\alpha}}\displaystyle\int_{0}^{t}\left( t^{\rho}-y^{\rho}\right) ^{2\alpha-1}E_{\alpha,2\alpha}\left(-\dfrac{k^2\pi^2 }{\rho^{\alpha}}(t^{\rho}-y^{\rho})^{\alpha}\right) f''_{k}(y)\, d(y^{\rho}), \end{array}$$ where,
$$f''_{k}(t)=2\int_{0}^{1}f_{xx}(x,t)\sin(k\pi x)dx.$$ Now, we estimate the Mittag-Leffler function using inequality $(\ref{MLbound})$:
$$\left| E_{\alpha,2\alpha}\left(-\dfrac{k^2\pi^2 }{\rho^{\alpha}}(t^{\rho}-y^{\rho})^{\alpha}\right)\right|\leq \dfrac{ M\rho^{\alpha}}{\rho^{\alpha}+k^2\pi^2 |t^{\rho}-y^{\rho}|^{\alpha}},$$ which implies the following estimate for $u(x,t)$: \[\begin{array}{ll}
|u(x,t)|&\leq M\displaystyle \sum_{k=1}^{\infty} \left( \dfrac{ | \psi_k\,|}{\rho^{\alpha}+k^2\pi^2 t^{\rho\alpha}}+\dfrac{1}{k^2\pi^2}\displaystyle\int_{0}^{t}\left\vert t^{\rho}-\tau^{\rho}\right\vert ^{\alpha-1}\vert f''_{k}(\tau)\vert d(\tau^{\rho})\right. \\ &\qquad\left. +\displaystyle\int_{0}^{t}\dfrac{ \left\vert t^{\rho}-y^{\rho}\right\vert ^{2\alpha-1}}{\rho^{\alpha}+k^2\pi^2 |t^{\rho}-y^{\rho}|^{\alpha}} \vert f''_{k}(y)\vert\, d(y^{\rho})\right) . \end{array}\] Since $\psi(x) \in C[0,1]$ and $f(\cdot, t) \in C^3[0,1]$, the above series converges and hence, by the Weierstrass M-test, the series representation of $u(x,t)$ is uniformly convergent in $\Omega$.\\ Next, we show the uniform convergence of the series representation of $u_{xx}(x,t)$, which is given by $$ u_{xx}(x,t) =-\sum_{k=1}^{\infty}k^2\pi^2 \left( \psi_k\, E_{\alpha}\left( \dfrac{-k^2 \pi^2}{\rho^{\alpha}}t^{\rho\alpha}\right)+F_{k}(t)\right) \sin(k \pi x).$$ To prove this assertion, we have the following estimate \[\begin{array}{ll}
|u_{xx}(x,t)|&\leq M\displaystyle \sum_{k=1}^{\infty} \left( \dfrac{ k^2\pi^2| \psi_k\,|}{\rho^{\alpha}+k^2\pi^2 t^{\rho\alpha}}+\dfrac{1}{k^2\pi^2}\displaystyle\int_{0}^{t}\left\vert t^{\rho}-\tau^{\rho}\right\vert ^{\alpha-1}\vert f^{(4)}_{k}(\tau)\vert d(\tau^{\rho})\right. \\ &\qquad\left. +\displaystyle\int_{0}^{t}\dfrac{ \left\vert t^{\rho}-y^{\rho}\right\vert ^{2\alpha-1}}{\rho^{\alpha}+k^2\pi^2 (t^{\rho}-y^{\rho})^{\alpha}} \vert f^{(4)}_{k}(y)\vert\, d(y^{\rho})\right),\end{array}\] where $$f_{k}^{(4)}(t)=2\int_{0}^{1}\dfrac{\partial^4}{\partial x^{4}}f(x,t) \sin(k\pi x) \,dx.$$ Since $\psi(0)=\psi(1)=0$ and $ \dfrac{\partial^4 f}{\partial x^4}(\cdot,t) \in L(0,1)$, integration by parts yields the following estimate \[\begin{array}{ll}
|u_{xx}(x,t)|&\leq M \displaystyle\sum_{k=1}^{\infty} \left( \dfrac{1}{k\pi}\left|\psi_{k}^{(1)}\right|+\dfrac{1}{k^2\pi^2} \right)\\
&\leq M\left( \displaystyle\sum_{k=1}^{\infty} \dfrac{1}{(k\pi)^2}+\displaystyle\sum_{k=1}^{\infty} \left|\psi_{k}^{(1)} \right|^2 \right), \end{array}\] where we have used the inequality $2ab \le a^2 + b^2$ and $$ \psi_{k}^{(1)}=2\displaystyle\int_{0}^{1}\psi'(x) \cos(k\pi x) \,dx.$$ Then, Bessel's inequality for trigonometric functions, $$ \sum_{k=0}^{\infty} g_{k}^2\leq \Vert g\Vert_{L^2(0,1)}^2,$$ implies \[\begin{array}{ll}
|u_{xx}(x,t)|&\leq M
\left( \displaystyle\sum_{k=1}^{\infty} \dfrac{1}{(k\pi)^2}+\Vert\psi'\Vert_{L^2(0,1)}^2\right). \end{array}\] Thus, the series in the expression of $u_{xx}(x,t)$ is bounded by a convergent series, which implies that it is uniformly convergent by the Weierstrass M-test. Finally, the series representation of
$\,^{C}\left(t^{\theta}\dfrac{\partial}{\partial t}\right) ^{\alpha}u(x,t)$ is given by $$ \begin{array}{ll} \,^{C}\left(t^{\theta}\dfrac{\partial}{\partial t}\right) ^{\alpha}u(x,t)&=-\displaystyle\sum_{k=1}^{\infty}k^2 \pi^2\left( \psi_k E_{\alpha}\left( \dfrac{-k^2 \pi^2 }{\rho^{\alpha}} t^{\rho\alpha}\right)+F_{k}(t)\right) \sin(k \pi x) +\,f(x,t), \end{array} $$ and convergence of the above series follows directly from the uniform convergence of $u_{xx}(x,t),$ which also ensures the uniform convergence of $u_{x}(x,t).$
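As a sanity check, the series solution just constructed can be evaluated numerically. The following is a minimal sketch for the hypothetical data $\psi(x)=\sin(\pi x)$ and $f\equiv 0$, for which only the $k=1$ mode survives; the parameter values are illustrative only, and the Mittag-Leffler function is evaluated by crude series truncation:

```python
import math

def ml(alpha, z, terms=150):
    # crude truncated series for the Mittag-Leffler function E_alpha(z)
    return sum(z**k / math.gamma(alpha*k + 1) for k in range(terms))

alpha, theta = 0.7, 0.3   # hypothetical parameters, theta < 1
rho = 1 - theta

# With psi(x) = sin(pi x) and f = 0, only the k = 1 mode survives:
#   u(x, t) = E_alpha(-pi^2 t^{rho alpha} / rho^alpha) sin(pi x)
def u(x, t):
    return ml(alpha, -math.pi**2 * t**(rho*alpha) / rho**alpha) * math.sin(math.pi * x)

assert abs(u(0.25, 0.0) - math.sin(math.pi * 0.25)) < 1e-12  # initial condition
assert u(0.0, 0.2) == 0.0                                    # boundary condition
assert 0.0 < u(0.5, 0.3) < 1.0                               # amplitude decays but stays positive
```

The last assertion reflects the complete monotonicity of $E_{\alpha}(-x)$ for $0<\alpha<1$: the mode amplitude decays monotonically from $1$ without changing sign.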
\subsubsection{Uniqueness of Solution:} Suppose that $u_1(x,t)$ and $u_2(x,t)$ are two solutions of the problem (\ref{direct}) - (\ref{Icond}), then $\widehat{u}(x,t)= u_1(x,t)- u_2(x,t)$ satisfies the following boundary value problem: \begin{eqnarray}\label{eq} &\,^{C}\left(t^{\theta}\dfrac{\partial}{\partial t }\right) ^{\alpha} \widehat{u}-\dfrac{\partial^2 \widehat{u}}{\partial x^2}=0,&(x,t)\in \Omega, \\ \label{condb} &\widehat{u}(0,t)=0,\quad \widehat{u}(1,t)=0, &0\leq t \leq T,\\ \label{condi} &\widehat{u}(x,0)=0, &0\leq x \leq 1. \end{eqnarray} Define the following function: \begin{equation}\label{un} u_{k}(t)=2\int_{0}^{1} \widehat{u}(x,t) \sin(k\pi x)dx.\end{equation} Then, the initial condition $(\ref{condi})$ implies \begin{equation}\label{newicond} u_{k}(0)=0.\end{equation}
Applying the regularized Caputo-like counterpart hyper-Bessel operator to $(\ref{un})$, we get $$ \begin{array}{ll} \,^{C}\left(t^{\theta}\dfrac{d}{d t }\right) ^{\alpha} u_{k}(t)&=2\displaystyle\int_{0}^{1}\,^{C}\left(t^{\theta}\dfrac{\partial}{\partial t }\right) ^{\alpha} \widehat{u}(x,t)\sin(k\pi x)dx,\\ &=2\displaystyle\int_{0}^{1} \widehat{u}_{xx}(x,t)\sin(k\pi x)dx.\\ \end{array} $$ Then, integrating by parts twice and using the boundary conditions $(\ref{condb})$, we obtain the following fractional differential equation $$\,^{C}\left(t^{\theta}\dfrac{d}{d t }\right) ^{\alpha}u_{k}(t)+(k\pi)^2 u_{k}(t)=0.$$ Using Lemma $\ref{nonhomo FDE}$, the above equation with the initial condition $(\ref{newicond})$ has the trivial solution $u_{k}(t)\equiv 0,$ and hence we have $$\int_{0}^{1}\widehat{u}(x,t)\sin(k\pi x)dx=0.$$ Therefore, using the completeness property of the system $(\ref{sys1})$, we deduce that $\widehat{u}(x,t)=0$ in $\Omega$, which implies the uniqueness of solution to the problem (\ref{direct}) - (\ref{Icond}).
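The mode-by-mode arguments above rest on the explicit Mittag-Leffler solution of Theorem \ref{homsol}. One can verify it numerically through the equivalent Volterra integral equation used in the proof of Lemma \ref{nonhomo FDE} (with $f=0$, $u_0=1$); the parameter values below are hypothetical, and the substitution $w=(t^{\rho}-s)^{\alpha}$ removes the weak singularity so a plain trapezoidal rule suffices:

```python
import math

def ml(alpha, z, terms=150):
    # crude truncated series for the Mittag-Leffler function E_alpha(z)
    return sum(z**k / math.gamma(alpha*k + 1) for k in range(terms))

alpha, theta, lam = 0.7, 0.3, 1.0   # hypothetical parameters, theta < 1
rho = 1 - theta

# Candidate solution u(t) = E_alpha(-lam t^{rho alpha} / rho^alpha)
u = lambda t: ml(alpha, -lam * t**(rho*alpha) / rho**alpha)

# Volterra form with f = 0, u_0 = 1:
#   u(t) + lam / (rho^alpha Gamma(alpha)) * J(t) = 1,
# where J(t) = int_0^{t^rho} (t^rho - s)^{alpha-1} u(s^{1/rho}) ds.
# Substituting w = (t^rho - s)^alpha makes the integrand smooth.
t = 0.8

def g(w):
    base = max(t**rho - w**(1/alpha), 0.0)  # clamp rounding noise at the endpoint
    return u(base**(1/rho)) / alpha

a, b, N = 0.0, t**(rho*alpha), 3000
h = (b - a) / N
J = h * (g(a)/2 + sum(g(a + i*h) for i in range(1, N)) + g(b)/2)

lhs = u(t) + lam / (rho**alpha * math.gamma(alpha)) * J
assert abs(lhs - 1.0) < 1e-5
```

The same check applies to each mode of the direct problem after replacing $\lambda$ by $k^2\pi^2$.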
\section{Inverse Source Problem}
Here, we consider an inverse source problem of finding a pair of functions $\left\lbrace u(x,t), f(x) \right\rbrace $ in a rectangular domain $\Omega=\left\lbrace (x,t):0<x<1,\; 0<t<T\right\rbrace,$ which satisfies the following initial-boundary value problem: \begin{eqnarray} &&\,^{C}\left(t^{\theta}\dfrac{\partial}{\partial t}\right) ^{\alpha} u(x,t)-u_{xx}(x,t)=f(x), \quad \quad (x,t) \in \Omega \label{sinverse}\\ &&u(0,t)=0,\quad u(1,t)=0,\hspace{3cm} 0\leq t\leq T,\label{ISBC}\\ &&u(x,0)=\psi(x),\quad u(x,T)=\phi(x), \hspace{1.5cm} 0\leq x\leq 1, \label{ISIC} \end{eqnarray} where $\phi$ and $\psi$ are given functions, such that \[ \psi(0)=\psi(1)=0, \quad \phi(0)=\phi(1)=0, \] which follows directly from (\ref{ISBC}) and (\ref{ISIC}). As in the previous section, we seek a solution to problem (\ref{sinverse}) - (\ref{ISIC}) in the form of series expansions using the orthogonal system (\ref{sys1}) as follows: $$ u(x,t)=\sum_{k=1}^{\infty} u_k(t) \sin(k \pi x), $$ $$ f(x)=\sum_{k=1}^{\infty} f_k \sin(k \pi x), $$ where $f_k$, $u_k$ are the unknowns to be determined. 
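For a single mode, the two conditions at $t=0$ and $t=T$ determine the two unknown coefficients in closed form; the following is a minimal numerical sketch (all parameter values hypothetical, with the Mittag-Leffler function evaluated by crude series truncation):

```python
import math

def ml(alpha, z, terms=150):
    # crude truncated series for the Mittag-Leffler function E_alpha(z)
    return sum(z**k / math.gamma(alpha*k + 1) for k in range(terms))

alpha, theta, T = 0.7, 0.3, 0.5      # hypothetical parameters, theta < 1
rho = 1 - theta
lam1 = math.pi**2                    # first eigenvalue lambda_1 = pi^2
psi1, phi1 = 1.0, 0.4                # hypothetical Fourier sine data psi_1, phi_1

# Mode ansatz u_1(t) = C_1 E_alpha(-lam1 t^{rho alpha}/rho^alpha) + f_1/lam1,
# matched to u_1(0) = psi1 and u_1(T) = phi1:
E_T = ml(alpha, -lam1 * T**(rho*alpha) / rho**alpha)
C1 = (psi1 - phi1) / (1 - E_T)
f1 = lam1 * (psi1 - C1)

u1 = lambda t: C1 * ml(alpha, -lam1 * t**(rho*alpha) / rho**alpha) + f1 / lam1
assert abs(u1(0) - psi1) < 1e-9   # initial condition recovered
assert abs(u1(T) - phi1) < 1e-9   # final overdetermination condition recovered
```

The full series solution repeats this mode-wise matching for every $k$.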
Substituting the above expressions for $u(x,t)$ and $f(x)$ into (\ref{sinverse}) and (\ref{ISIC}) gives the following fractional differential equation: $$ \,^{C}\left(t^{\theta}\dfrac{d}{dt}\right) ^{\alpha}u_k(t)+k^2 \pi^2 u_k(t)=f_k,$$ with the conditions $$u_k(0)=\psi_k,\quad u_{k}(T)=\phi_k,$$ where $\psi_k, \phi_k$ are the Fourier sine coefficients, defined as $$\psi _k=2\int_{0}^{1}\psi (x)\sin(k\pi x) dx, \quad \phi_k=2\int_{0}^{1}\phi(x)\sin(k\pi x) dx.$$ Solving the above equation using Lemma $2.7$, we obtain $$u_k(t)= C_k \;E_{\alpha}\left(-\dfrac{k^2\pi^2}{(1-\theta)^{\alpha}} t^{(1-\theta)\alpha}\right)+\dfrac{f_k}{k^2\pi^2},$$ and using the given conditions, we have $$C_k=\dfrac{\psi_k-\phi_k}{1-E_{\alpha}\left(-\dfrac{k^2\pi^2}{(1-\theta)^{\alpha}} T^{(1-\theta)\alpha}\right)},\qquad f_k=k^2\pi^2(\psi_k - C_k).$$ Hence, the expressions for $u(x,t)$ and $f(x)$ can be written as \[\begin{array}{ll} u(x,t)&=\displaystyle\sum_{k=1}^{\infty} \left[ C_k E_{\alpha}\left(-\dfrac{k^2\pi^2}{(1-\theta)^{\alpha}} t^{(1-\theta)\alpha}\right)+ (\psi_k - C_k)\right] \sin(k \pi x) \\ &=\psi(x)-\displaystyle\sum_{k=1}^{\infty}\dfrac{1-E_{\alpha}\left(-\dfrac{k^2\pi^2}{(1-\theta)^{\alpha}} t^{(1-\theta)\alpha}\right)}{1-E_{\alpha}\left(-\dfrac{k^2\pi^2}{(1-\theta)^{\alpha}} T^{(1-\theta)\alpha}\right)}(\psi_k-\phi_k)\sin(k \pi x),\end{array}\] and \[\begin{array}{ll} f(x)&=\displaystyle \sum_{k=1}^{\infty}k^2\pi^2 (\psi_k-C_k)\sin(k \pi x)\\ &=\psi''(x)-\displaystyle\sum_{k=1}^{\infty}\dfrac{k^2\pi^2 (\psi_k-\phi_k)}{1-E_{\alpha}\left(-\dfrac{k^2\pi^2}{(1-\theta)^{\alpha}} T^{(1-\theta)\alpha}\right)}\sin(k \pi x).\end{array}\] Appropriate conditions on the given functions $\psi(x)$ and $\phi(x)$ (see Theorem $4.1$ below) are assumed in order to establish the uniform convergence of the series expansions of $u(x,t),$ $\,^{C}\left(t^{\theta}\dfrac{\partial}{\partial t}\right)^{\alpha}u(x,t),$ $\,u_{x}(x,t)$, $u_{xx}(x,t)$ and $f(x).$ This can 
be done in a manner similar to the one presented earlier. For example, for $f(x)$ we have the following estimate: \[\begin{array}{ll}
|f(x)|&\leq |\psi''(x)|+ \displaystyle\sum_{k=1}^{\infty}\dfrac{ k^2\pi^2 \left[ (1-\theta)^{\alpha}+k^2\pi^2 T^{ (1-\theta)\alpha}\right] }{(1-M)(1-\theta)^{\alpha}+k^2\pi^2 T^{ (1-\theta)\alpha}}\left( \left|\psi_{k}\right|+\left|\phi_{k}\right|\right) \\
&\leq |\psi''(x)|+M \displaystyle\sum_{k=1}^{\infty}\dfrac{ 1}{k\pi}\left( \left|\psi_{k}^{(3)}\right|+\left|\phi_{k}^{(3)}\right|\right) \\
&\leq |\psi''(x)|+ M \left(\displaystyle \sum_{k=1}^{\infty}\dfrac{1}{(k\pi)^2}+\displaystyle \sum_{k=1}^{\infty}\left|\psi_{k}^{(3)}\right|^2+ \sum_{k=1}^{\infty}\left|\phi_{k}^{(3)}\right|^2\right) \\
&\leq |\psi''(x)|+M\left(\displaystyle \sum_{k=1}^{\infty}\dfrac{1}{(k\pi)^2}+\Vert\psi'''(x)\Vert_{L^2(0,1)}^2+\Vert\phi'''(x)\Vert_{L^2(0,1)}^2\right), \end{array}\] where $$ \psi_{k}^{(3)}=2\displaystyle\int_{0}^{1}\psi'''(x) \cos(k\pi x) \,dx,$$ and $$ \phi_{k}^{(3)}=2\displaystyle\int_{0}^{1}\phi'''(x) \cos(k\pi x) \,dx.$$
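The passage from the bound involving $k^2\pi^2\left(|\psi_k|+|\phi_k|\right)$ to the third-derivative coefficients rests on the integration-by-parts identity $\psi_{k}^{(3)}=-(k\pi)^{3}\psi_{k}$, which holds when $\psi$ and $\psi''$ vanish at the endpoints. The following minimal numerical check of this identity is a sketch of ours (not part of the paper), assuming midpoint-rule quadrature and the test function $\psi(x)=\sin(2\pi x)$:

```python
import math

# Sketch (not from the paper): verify psi_k^(3) = -(k*pi)^3 * psi_k for
# psi(x) = sin(2*pi*x), which satisfies psi = psi'' = 0 at x = 0 and x = 1.
#   psi_k  = 2 * int_0^1 psi(x)    * sin(k*pi*x) dx
#   psi_k3 = 2 * int_0^1 psi'''(x) * cos(k*pi*x) dx
N = 2000  # midpoint-rule nodes

def midpoint(g):
    """Composite midpoint rule for int_0^1 g(x) dx."""
    h = 1.0 / N
    return h * sum(g((j + 0.5) * h) for j in range(N))

psi = lambda x: math.sin(2 * math.pi * x)
psi3 = lambda x: -(2 * math.pi) ** 3 * math.cos(2 * math.pi * x)  # psi'''(x)

errors = []
for k in range(1, 5):
    psi_k = 2 * midpoint(lambda x: psi(x) * math.sin(k * math.pi * x))
    psi_k3 = 2 * midpoint(lambda x: psi3(x) * math.cos(k * math.pi * x))
    errors.append(abs(psi_k3 + (k * math.pi) ** 3 * psi_k))

print(max(errors))  # should be near quadrature precision
```

For $k=2$ the two sides equal $-(2\pi)^3$; for the remaining $k$ both coefficients vanish by orthogonality.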
Assuming that $\psi(x)\in C^{2}[0,1]$ and $\psi'''(x),\phi'''(x)\in L^{2}(0,1)$, the Weierstrass M-test shows that the series representation of $f(x)$ is uniformly convergent. Also, the series representation of $\,^{C}\left(t^{\theta}\dfrac{\partial}{\partial t}\right)^{\alpha}u(x,t)$, which is given by $$ \,^{C}\left(t^{\theta}\dfrac{\partial}{\partial t}\right)^{\alpha}u(x,t) =-\displaystyle\sum_{k=1}^{\infty}\dfrac{E_{\alpha}\left(-\dfrac{k^2\pi^2}{(1-\theta)^{\alpha}} t^{(1-\theta)\alpha}\right)}{1-E_{\alpha}\left(-\dfrac{k^2\pi^2}{(1-\theta)^{\alpha}} T^{(1-\theta)\alpha}\right)}k^2\pi^2(\psi_k-\phi_k)\sin(k \pi x),
$$ can be estimated as follows: \[\begin{array}{ll}
\left|\,^{C}\left(t^{\theta}\dfrac{\partial}{\partial t}\right)^{\alpha}u(x,t)\right|
&\leq M\displaystyle\sum_{k=1}^{\infty}\dfrac{ k^2\pi^2}{ (1-\theta)^{\alpha}+k^2\pi^2 t^{ (1-\theta)\alpha} }(\left|\psi_k\right|+\left|\phi_k\right|)\\
&\leq M\displaystyle\sum_{k=1}^{\infty}\dfrac{1}{k\pi}\left( \left|\psi_k^{(1)}\right|+\left|\phi_k^{(1)}\right|\right)\\ &\leq M\left(\displaystyle \sum_{k=1}^{\infty}\dfrac{1}{(k\pi)^2}+\Vert\psi'(x)\Vert_{L^2(0,1)}^2+\Vert\phi'(x)\Vert_{L^2(0,1)}^2\right). \end{array}\] It is clear that the above series is uniformly convergent. The main result of this section can be summarized in the following theorem: \begin{theorem} Assume $\psi(x),\phi(x)\in C^{2}[0,1]$ such that
$\psi^{(i)}(0)=\psi^{(i)}(1)=\phi^{(i)}(0)=\phi^{(i)}(1)=0,$ $(i=0,2)$ and $\psi'''(x),\phi'''(x)\in L^{2}(0,1),$ then the inverse source problem (\ref{sinverse}) - (\ref{ISIC}) has a unique pair of solutions $\left\lbrace u(x,t), f(x) \right\rbrace $ given by $$ u(x,t)=\psi(x)-\displaystyle\sum_{k=1}^{\infty}\dfrac{1-E_{\alpha}\left(-\dfrac{k^2 \pi^2}{(1-\theta)^{\alpha}} t^{(1-\theta)\alpha}\right)}{1-E_{\alpha}\left(-\dfrac{k^2 \pi^2}{(1-\theta)^{\alpha}} T^{(1-\theta)\alpha}\right)}(\psi_k-\phi_k)\sin(k \pi x), $$ $$ f(x)=\psi''(x)-\displaystyle\sum_{k=1}^{\infty}\dfrac{k^2 \pi^2 (\psi_k-\phi_k)}{1-E_{\alpha}\left(-\dfrac{k^2 \pi^2}{(1-\theta)^{\alpha}} T^{(1-\theta)\alpha}\right)}\sin(k \pi x),$$ where, $$\psi _k=2\int_{0}^{1}\psi (x)\sin(k\pi x) dx, \quad \phi_k=2\int_{0}^{1}\phi(x)\sin(k\pi x) dx.$$ \end{theorem} \begin{description} \item[Acknowledgements.] Authors acknowledge financial support from The Research Council (TRC), Oman. This work is funded by TRC under the research agreement no. ORG/SQU/CBS/13/030. \end{description}
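As a sanity check of the reconstruction formulas in Theorem 4.1 above, the following sketch of ours (not part of the paper) assumes the classical special case $\alpha=1$, $\theta=0$, where $E_{1}(z)=e^{z}$ and the equation reduces to $u_t-u_{xx}=f(x)$, and recovers the source from data generated with $f\equiv 0$:

```python
import math

# Hedged sketch (assumptions: alpha = 1, theta = 0, so E_alpha(z) = exp(z)).
# With psi(x) = sin(pi*x) and phi(x) = exp(-pi^2 * T) * sin(pi*x), the exact
# solution is u(x,t) = exp(-pi^2 * t) * sin(pi*x) with source f = 0, so every
# recovered coefficient f_k = k^2*pi^2 * (psi_k - C_k) should vanish.
T = 0.1
N = 2000  # midpoint-rule quadrature nodes

def sine_coeff(g, k):
    """Fourier sine coefficient 2 * int_0^1 g(x) sin(k*pi*x) dx."""
    h = 1.0 / N
    return 2.0 * h * sum(g((j + 0.5) * h) * math.sin(k * math.pi * (j + 0.5) * h)
                         for j in range(N))

psi = lambda x: math.sin(math.pi * x)
phi = lambda x: math.exp(-math.pi ** 2 * T) * math.sin(math.pi * x)

f_coeffs = []
for k in range(1, 6):
    lam = (k * math.pi) ** 2
    psi_k, phi_k = sine_coeff(psi, k), sine_coeff(phi, k)
    C_k = (psi_k - phi_k) / (1.0 - math.exp(-lam * T))  # E_1(-lam*T) = exp(-lam*T)
    f_coeffs.append(lam * (psi_k - C_k))

print(max(abs(c) for c in f_coeffs))  # should be ~ 0
```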
\end{document}
"id": "1610.05524.tex",
"language_detection_score": 0.607598602771759,
"char_num_after_normalized": null,
"contain_at_least_two_stop_words": null,
"ellipsis_line_ratio": null,
"idx": null,
"lines_start_with_bullet_point_ratio": null,
"mean_length_of_alpha_words": null,
"non_alphabetical_char_ratio": null,
"path": null,
"symbols_to_words_ratio": null,
"uppercase_word_ratio": null,
"type": null,
"book_name": null,
"mimetype": null,
"page_index": null,
"page_path": null,
"page_title": null
} | arXiv/math_arXiv_v0.2.jsonl | null | null |
\begin{document}
\title{\sffamily A projected primal-dual
splitting for solving constrained monotone
inclusions
\footnote{Contact author:
L. M. Brice\~no-Arias, {\ttfamily luis.briceno@usm.cl}.}}
\maketitle
\begin{abstract} In this paper we provide an algorithm for solving constrained composite primal-dual monotone inclusions, i.e., monotone inclusions in which a priori information on primal-dual solutions is represented via closed convex sets. The proposed algorithm incorporates a projection step onto the a priori information sets and generalizes the method proposed in \cite{2.vu}. Moreover, in the presence of strong monotonicity, we derive an accelerated scheme inspired by \cite{3.CP} and applied to the more general context of constrained monotone inclusions. In the particular case of convex optimization, our algorithm generalizes the methods proposed in \cite{1.condat,3.CP} by allowing a priori information on solutions, and we provide an accelerated scheme under strong convexity. One application of our approach with a priori information is to constrained convex optimization problems, in which available primal-dual methods impose constraints via Lagrange multiplier updates, usually leading to slow algorithms with infeasible primal iterates. The proposed modification forces primal iterates to satisfy a selection of constraints onto which we can project, obtaining a faster method, as our numerical examples show. The obtained results extend and improve several results in \cite{1.condat,2.vu,3.CP}. \end{abstract}
{\bfseries Keywords:} accelerated schemes, constrained convex optimization, monotone operator theory, proximity operator, splitting algorithms
\section{Introduction}
This paper is devoted to the numerical resolution of composite primal-dual monotone inclusions in which a priori information on solutions is known. The relevance of monotone inclusions and convex optimization is justified by the increasing number of applications in several fields of engineering and applied mathematics, such as image processing, evolution inclusions, variational inequalities, learning, partial differential equations, and Mean Field Games, among others (see, e.g., \cite{4.Sicon1,5.Atto08,6.Atto17,7.BAKS,8.Facc03,9.Gabay83,10.Mercier79} and references therein). The a priori information on primal-dual solutions is represented via closed convex sets in the primal and dual spaces, following some ideas developed in \cite{11.Tsen00,12.Siopt2}. We force the primal-dual iterates to belong to these information sets by applying additional projections to the primal-dual iterates in each iteration of our proposed method.
An important instance, in which the advantage of our formulation arises, is composite convex optimization with affine linear equality constraints. In this context, the primal-dual methods proposed in \cite{1.condat,2.vu,3.CP,13.Esser10,13.HeYuan,13.Cpock16,13.LorPock15} impose feasibility through Lagrange multiplier updates. A disadvantage of this approach is that such algorithms are usually slow and their primal iterates do not necessarily satisfy any of the constraints (see, e.g., \cite{7.BAKS}), leading to infeasible approximate primal solutions. By projecting onto the affine subspace generated by the constraints, this issue is resolved. However, in several applications this projection is not easy to compute because of singularity or bad conditioning of the linear system (see, e.g., \cite{13.CEMRACS}). In this context, the a priori information on primal solutions can be set as any selection of the affine linear constraints. Indeed, since any solution is feasible, we know it must satisfy any selection of the constraints. Even if, in this context, the formulation with a priori information may seem artificial, from a practical point of view it allows us to propose a method with an additional projection onto an arbitrary selection of the constraints, which improves the efficiency of the method (see Section~\ref{sec:numeric}). This method forces primal iterates to satisfy the selection of the constraints, which can be chosen so that the projection is easy to compute.
In this paper we provide a new projected primal-dual splitting method for solving constrained monotone inclusions, i.e., inclusions in which we count on a priori information on primal-dual solutions. We also provide an accelerated scheme of our method in the presence of strong monotonicity, and we derive linear convergence in the fully strongly monotone case. In the case without a priori information, our results give an accelerated scheme of the method proposed in \cite{2.vu} for strongly monotone inclusions. In the context of convex optimization, our method generalizes the algorithms proposed in \cite{1.condat,3.CP} and \cite{13.LorPock15} without inertia, by incorporating a projection onto the a priori primal-dual information set. This method is applied in the context of convex optimization with equality constraints, when the a priori information set is chosen as a selection of the affine linear constraints onto which it is easy to project. The advantages of this approach with respect to classical primal-dual approaches are justified via numerical examples. Our acceleration scheme in the convex optimization context is obtained as a generalization of \cite{3.CP}, complementing the ergodic rates obtained in the case without projection in \cite{13.Cpock16}; as far as we know, such accelerations have not been developed in the literature and are interesting in their own right.
The paper is organized as follows. In Section~\ref{sec2} we set our notation and give a brief background. In Section~\ref{sec3} we state the constrained primal-dual monotone inclusion and propose our algorithm together with the main results. We also provide connections with existing methods in the literature. In Section~\ref{sec6} we apply the previous results to convex optimization problems with affine linear equality constraints, together with numerical experiments illustrating the improvement in the efficiency of the algorithm due to the additional projection. We finish with some conclusions in Section~\ref{sec5}.
\section{Notation and preliminaries} \label{sec2} Let $\cal{H}$ and ${\cal{G}}$ be real Hilbert spaces. We denote the scalar products of
${\cal{H}}$ and $\cal{G}$ by $\scal{\cdot}{\cdot}$ and the associated norms by $\|\cdot\|$. The projector operator onto a nonempty closed convex set $C\subset {\cal{H}}$ is denoted by $P_{C}$ and, for a set-valued operator $M: {\cal{H}} \rightarrow 2^{{\cal{H}}}$ we use $\mbox{ran}(M)$ for the range of $M$, $\mbox{gra}(M)$ for its graph, $M^{-1}$ for its inverse, $J_{M}=(\ensuremath{\operatorname{Id}}\,+M)^{-1}$ for its resolvent, and $\ensuremath{\mbox{\small$\,\square\,$}}$ stands for the parallel sum as in \cite{14.Livre1}. Moreover, $M$ is $\rho$-strongly monotone if, for every $(x,u)$ and $(y,v)$ in
$\mbox{gra}(M)$, $\left\langle x-y, u-v\right\rangle \geq \rho\left\|x-y\right\|^{2},$ it is $\rho-$cocoercive if $M^{-1}$ is $\rho-$strongly monotone, $M$ is monotone if it is $\rho$-strongly monotone with $\rho=0$, and it is maximally monotone if its graph is maximal in the sense of inclusion in $\ensuremath{{\mathcal H}}\times\ensuremath{{\mathcal H}} $, among the graphs of monotone operators. The class of all lower semicontinuous convex functions $f : {\cal{H}} \rightarrow (-\infty, +\infty]$
such that $\ensuremath{\operatorname{dom}}(f)=\left\{x\in {\cal{H}} \,|\, f(x)<+\infty \right\} \neq \varnothing$ is denoted by $\Gamma_{0}({\cal{H}})$ and, for every $f\in \Gamma_{0}({\cal{H}})$, the Fenchel conjugate of $f$ is denoted by $f^{*}$, its subdifferential by $\partial f$, and its proximity operator by $\ensuremath{\operatorname{prox}}_f$, as in \cite{14.Livre1}. We recall that $(\partial f)^{-1}=\partial f^{*}$ and $J_{\partial f} = \ensuremath{\operatorname{prox}}_{f}$. In addition, when $C\subset {\cal{H}}$ is a convex closed subset, we have that $J_{\partial
\iota_{C}}=\ensuremath{\operatorname{prox}}_{\iota_C}=P_{C}$, where $\iota_C$ is the indicator function of $C$, which
is $0$ in $C$ and $+\infty$ otherwise. Given $\alpha\in]0,1[$, an operator $T\colon\ensuremath{{\mathcal H}}\to\ensuremath{{\mathcal H}}$ satisfying $\ensuremath{\operatorname{Fix}} T\neq\varnothing$ is $\alpha-$averaged
quasi-nonexpansive if, for every $x\in\ensuremath{{\mathcal H}}$ and $y\in\ensuremath{\operatorname{Fix}} T$ we have $\|Tx-y\|^2\le
\|x-y\|^2-(\frac{1-\alpha}{\alpha})\|x-Tx\|^2$. We refer the reader to \cite{14.Livre1} for definitions and further results in monotone operator theory and convex optimization. \section{Problem and main results} \label{sec3} We consider the following problem. \begin{problem}
\label{prob:main}
Let $T\colon\ensuremath{{\mathcal H}}\to\ensuremath{{\mathcal H}}$ be an $\alpha-$averaged quasi-nonexpansive operator with
$\alpha\in]0,1[$,
let $V$ be a closed vector subspace of $\ensuremath{{\mathcal G}}$, let $L\colon\ensuremath{{\mathcal H}}\to\ensuremath{{\mathcal G}}$ be a nonzero
linear
bounded
operator satisfying $\ensuremath{\operatorname{ran}} L\subset V$, let $A:{\cal{H}}\rightarrow 2^{\cal{H}}$ and
$D\colon{\ensuremath{{\mathcal G}}}\rightarrow 2^{\ensuremath{{\mathcal G}}}$
be maximally monotone operators which are $\rho$ and
$\delta-$strongly
monotone, respectively, and let $B:{\ensuremath{{\mathcal G}}}\rightarrow 2^{\ensuremath{{\mathcal G}}}$ and $C\colon\ensuremath{{\mathcal H}}\to\ensuremath{{\mathcal H}}$
be $\chi$ and $\beta-$cocoercive,
respectively,
for $(\rho,\chi)\in\ensuremath{\left[0,+\infty\right[}^2$ and $(\delta,\beta)\in\ensuremath{\left[0,+\infty\right]}^2$.
The problem is to solve the primal and dual inclusions
\begin{align}\label{inc:primal}\tag{$\mathcal{P}$}
&\text{find }\quad \hat{x}\in\ensuremath{\operatorname{Fix}} T\quad\text{such that}\quad 0\in A\hat{x}+
L^*(B\ensuremath{\mbox{\small$\,\square\,$}} D)(L\hat{x})+C\hat{x}\\
\label{inc:dual}\tag{$\mathcal{D}$}
&\text{find }\quad \hat{u}\in V\quad\text{such that}\quad \left(\ensuremath{\exists\,}\hat{x}\in
\ensuremath{\operatorname{Fix}} T\right)\quad \begin{cases}
-L^*\hat{u}\in A\hat{x}+C\hat{x}\\
\hat{u}\in (B\ensuremath{\mbox{\small$\,\square\,$}} D)(L\hat{x}),
\end{cases}
\end{align}
under the assumption that solutions exist. \end{problem} When $A=\partial f$, $B=\partial g$, $C=\nabla h$, and $D=\partial \ell$, where $f\in\Gamma_0(\ensuremath{{\mathcal H}})$, $g\in\Gamma_0(\ensuremath{{\mathcal G}})$, $h\colon\ensuremath{{\mathcal H}}\to\ensuremath{\mathbb{R}}$ is a differentiable convex function with $\beta^{-1}-$Lipschitz gradient, and $\ell\in\Gamma_0(\ensuremath{{\mathcal G}})$ is $\delta-$strongly convex, Problem~\ref{prob:main} reduces to \begin{equation}
\label{e:primal}\tag{$\mathcal{P}_0$}
\text{find}\quad \hat{x}\in\ensuremath{\operatorname{Fix}} T\cap\ensuremath{\operatorname{argmin}}_{x\in
{\cal{H}}}F(x):=f(x)+(g\ensuremath{\mbox{\small$\,\square\,$}}\ell)(Lx)+h(x) \end{equation} together with the dual problem \begin{equation}
\label{e:dual}\tag{$\mathcal{D}_0$}
\text{find}\quad \hat{u}\in V\cap\ensuremath{\operatorname{argmin}}_{u\in\ensuremath{{\mathcal G}}}g^*(u)+(f^*\ensuremath{\mbox{\small$\,\square\,$}} h^*)(-L^*u)+\ell^*(u), \end{equation} assuming that some qualification condition holds. Note that, when $T=P_X$, any solution to \eqref{e:primal} is a solution to $\min_{x\in X}F(x)$, but the converse is not true. The set $X$ in this case represents a priori information on the primal solution. As we show in the next section, an application of this formulation is constrained convex optimization, in which $X$ may represent a selection of the affine linear constraints. Even if, in this case, the formulation can be set without considering the set $X$, its artificial appearance has a practical relevance: the resulting method includes a projection onto $X$, which improves its performance, as shown in Section~\ref{sec6}.
When $\rho=\chi=0$, $V=\ensuremath{{\mathcal G}}$ and $T= \ensuremath{\operatorname{Id}}\,$, \eqref{e:primal}-\eqref{e:dual} can be solved by using \cite[Theorem~4.2]{16.CombPes12} or \cite[Theorem~5]{13.LorPock15}. In the last method, inertial terms are also included. In the case when $\ell^*=0$ the algorithm in \cite{1.condat} can be used and if $\ell^*=h=0$, \eqref{e:primal}-\eqref{e:dual} can be solved by \cite{3.CP,15.Siopt1} or a version of \cite{3.CP} with linesearch proposed in \cite{19.MaliP18}. In \cite{3.CP}, the strong convexity is exploited via acceleration schemes. Moreover, when $T=P_X$, $X\subset\ensuremath{{\mathcal H}}$ is nonempty, closed and convex, $V=\ensuremath{{\mathcal G}}$ and $\ell^*=h=0$, \eqref{e:primal}-\eqref{e:dual} is solved in \cite[Theorem~3.1]{7.BAKS}. When $\rho>0$ or $\chi>0$, ergodic convergence rates are derived in \cite{13.Cpock16} when $V=\ensuremath{{\mathcal G}}$ and $T= \ensuremath{\operatorname{Id}}\,$. In its whole generality, as far as we know, \eqref{e:primal}-\eqref{e:dual} has not been solved and strong convexity has not been exploited.
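To fix ideas, the iteration \eqref{e:alg2} of Theorem~\ref{thm:main} below can be run on a hypothetical scalar instance (our illustrative sketch, not the authors' implementation): take $T=\ensuremath{\operatorname{Id}}\,$, $P_V=\ensuremath{\operatorname{Id}}\,$, $L=\ensuremath{\operatorname{Id}}\,$, $C=0$, $D^{-1}=0$, $A=\partial f$ with $f=\tfrac12(\cdot-a)^2$, and $B=\partial g$ with $g=\tfrac{\lambda}{2}(\cdot)^2$, so that the primal problem is $\min_x \tfrac12(x-a)^2+\tfrac{\lambda}{2}x^2$ with unique solution $\hat x=a/(1+\lambda)$:

```python
# Hedged sketch of iteration (e:alg2) on a toy scalar problem (hypothetical
# instance chosen for illustration): minimize 0.5*(x - a)^2 + 0.5*lam*x^2.
# Here T = Id, P_V = Id, L = Id, C = 0, D^{-1} = 0, and the resolvents are
#   J_{tau A}(y)        = (y + tau*a) / (1 + tau)     (A = partial f)
#   J_{gamma B^{-1}}(z) = z / (1 + gamma/lam)         (B^{-1} = (1/lam) Id)
# The unique primal solution is xhat = a / (1 + lam); the dual one is lam*xhat.
a, lam = 2.0, 1.0
tau = gamma = 0.5  # ||L||^2 = 1 < (1/tau)*(1/gamma) = 4: strict step-size condition
theta = 1.0        # theta_k = 1: the non-accelerated regime of part (ii)

x = xbar = u = 0.0
for _ in range(500):
    eta = (u + gamma * xbar) / (1.0 + gamma / lam)  # eta^{k+1}
    u = eta                                         # u^{k+1} = P_V eta^{k+1} = eta^{k+1}
    p = (x - tau * u + tau * a) / (1.0 + tau)       # p^{k+1} = J_{tau A}(x^k - tau L* u^{k+1})
    xbar = p + theta * (p - x)                      # xbar^{k+1} = x^{k+1} + theta (p^{k+1} - x^k)
    x = p                                           # x^{k+1} = T p^{k+1} = p^{k+1}

print(x)  # converges to a / (1 + lam) = 1.0
```

For this instance one checks directly that the fixed points of the iteration are exactly $(\hat x,\hat u)=(a/(1+\lambda),\lambda\hat x)$, in line with part (ii) of the theorem.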
In Problem~\ref{prob:main} set $T=\ensuremath{\operatorname{Id}}\,$, $V=\ensuremath{{\mathcal G}}=G_1\oplus\cdots\oplus G_m$, $L\colon x\mapsto(L_1x,\ldots,L_mx)$, $B\colon (u_1,\ldots,u_m)\mapsto \omega_1B_1u_1\times \cdots\times\omega_mB_mu_m$ and $D\colon (u_1,\ldots,u_m)\mapsto \omega_1D_1u_1\times \cdots\times\omega_mD_mu_m$, where for every $i\in\{1,\ldots,m\}$, $L_i\colon\ensuremath{{\mathcal H}}\to G_i$ is linear and bounded, $B_i\colon G_i\mapsto 2^{G_i}$ and $D_i\colon G_i\mapsto 2^{G_i}$ are maximally monotone operators such that $D_i$ is strongly monotone, and $\omega_i>0$ satisfies $\sum_{i=1}^m\omega_i=1$. Then, Problem~\ref{prob:main} reduces to \cite[Problem~1.1]{16.CombPes12} (see also \cite[Problem~1.1]{2.vu}). We prefer to set $m=1$ for simplicity. In \cite{16.CombPes12}, the above problem is solved when $C$ is monotone and Lipschitz by applying the method in \cite{17.Tsen00} to the product primal-dual space. Accelerated versions of the previous algorithm under strong monotonicity are proposed in \cite{18.bot14}. The cocoercivity of $C$ is exploited in \cite{2.vu}, where an algorithm is proposed for solving Problem~\ref{prob:main} when $\rho=\chi=0$, $V=\ensuremath{{\mathcal G}}$ and $T= \ensuremath{\operatorname{Id}}\,$. In the following theorem we provide an algorithm for solving Problem~\ref{prob:main} in its whole generality, with weak convergence to a solution when the stepsizes are fixed. Moreover, when $A$ or $B^{-1}$ is strongly monotone ($\rho>0$ or $\chi>0$), we provide an accelerated version inspired by (and generalizing) \cite[Section~5.1]{3.CP}. Finally, we generalize \cite[Section~5.2]{3.CP} to obtain linear convergence when $\rho>0$ and $\chi>0$. \begin{theorem}
\label{thm:main}
Let $\gamma_{0}\in]0,2\delta[$ and $\tau_{0}\in]0,2\beta[$ be such that
\begin{equation}
\label{e:parcond}
\|L\|^2\le
\left(\frac{1}{\tau_0}-\frac{1}{2\beta}\right)\left(\frac{1}{\gamma_0}-\frac{1}{2\delta}\right)
\end{equation}
and let $(x^0,\bar{x}^0,u^{0})\in \ensuremath{{\mathcal H}}\times\ensuremath{{\mathcal H}}\times\ensuremath{{\mathcal G}}$ such that $\bar{x}^{0}=x^{0}$.
Let
$(\theta_k)_{k\in\ensuremath{\mathbb N}}$, $(\gamma_k)_{k\in\ensuremath{\mathbb N}}$ and $(\tau_k)_{k\in\ensuremath{\mathbb N}}$ be sequences
in $]0,1]$, $]0,2\delta[$ and $]0,2\beta[$, respectively, and consider
\begin{equation}
\label{e:alg2}
(\forall k\in \mathbb{N})\quad
\left\lfloor
\begin{array}{ll}
\eta^{k+1}=J_{\gamma_{k} B^{-1}}(u^k+\gamma_{k} (L\bar{x}^k-D^{-1}u^k))\\
u^{k+1}=P_V\,\eta^{k+1}\\
p^{k+1}=J_{\tau_{k} A}(x^k-\tau_{k} (L^*u^{k+1}+Cx^k))\\
x^{k+1}=T\,p^{k+1}\\
\bar{x}^{k+1}=x^{k+1}+\theta_{k}(p^{k+1}-x^{k}).
\end{array}
\right.
\end{equation}
Then, the following hold.
\begin{enumerate}
\item\label{thm:maini} For every $k\in\ensuremath{\mathbb N}$ and for every solution $(\hat{x},\hat{u})$
to Problem~\ref{prob:main}, we have
\begin{align}
\label{e:ineqmain}
\hspace{-.5cm}
\frac{\|x^k-\hat{x}\|^2}{\tau_{k}} +\frac{\|u^k-\hat{u}\|^2}{\gamma_{k}}
&\geq (2\rho\tau_k+1)\frac{\|p^{k+1}-\hat{x}\|^2}{\tau_k}
+\left\|p^{k+1}-x^k\right\|^2\left(\frac{1}{\tau_{k}}-\frac{1}{2\beta}\right)\nonumber\\
&+(2\chi\gamma_k+1)\frac{\|\eta^{k+1}-\hat{u}\|^2}{\gamma_k} +
\|\eta^{k+1}-u^k\|^2\left(\frac{1}{\gamma_{k}}-\frac{1}{2\delta}\right) \nonumber\\
& \hspace{-1.3cm}+2\scal{L(p^{k+1}-{x}^k)}{\eta^{k+1}-\hat{u}}
-2\theta_{k-1}\scal{L(p^k-x^{k-1})}{\eta^{k}-\hat{u}} \nonumber\\
&\hspace{-1.3cm}
-2\theta_{k-1}\|L\|\|p^k-x^{k-1}\|\|\eta^{k+1}-u^k\|.
\end{align}
\item\label{thm:mainii} Suppose that $\rho=0$ and $\chi=0$. If we set
$\theta_k\equiv
1$, $\tau_k\equiv\tau$, $\gamma_k\equiv\gamma$ and we
assume that \eqref{e:parcond} holds with strict inequality,
we obtain $x^{k}\ensuremath{\:\rightharpoonup\:}\hat{x}$ and $u^{k}\ensuremath{\:\rightharpoonup\:}\hat{u}$, for some
solution $(\hat{x},\hat{u})$ to Problem~\ref{prob:main}.
\item\label{thm:mainiii} Suppose that $\rho>0$, $\chi=0$, and $D^{-1}=0$. If we set
\begin{equation}
\label{e:defparam}
(\forall k\in\ensuremath{\mathbb N})\quad \theta_k=\frac{1}{\sqrt{1+2\rho\tau_k}},\quad
\tau_{k+1}=\theta_{k}
\tau_{k},\quad \gamma_{k+1}={\gamma_{k}}/{\theta_{k}},
\end{equation}
and we
assume that \eqref{e:parcond} holds with equality, we obtain, for every solution
$(\hat{x},\hat{u})$ to Problem~\ref{prob:main},
$(\forall \varepsilon>0)(\exists
N_{0}\in \mathbb{N})(\forall k\geq N_{0})$\
\begin{equation} \big\|x^{k}-\hat{x}\big\|^{2}\leq
\dfrac{1+\varepsilon}{k^2}\left(\dfrac{\left\|x^{0}-\hat{x}\right\|^{2}}{\rho^2\tau_{0}^{2}}
+
\frac{2\beta\|L\|^{2}}{\rho^2(2\beta-{\tau_0})}
\left\|u^{0}-\hat{u}\right\|^{2}\right).
\end{equation}
\item\label{thm:mainiv} Suppose that
$\rho>0$ and $\chi>0$ and define
\begin{equation}
\label{e:defmualpha}
\mu = \frac{2\sqrt{\rho\chi}}{\left\|L\right\|}\quad \text{and}\quad
\alpha =\min\left\{\frac{\mu \rho}{\rho+\frac{\mu}{4\beta}} , \frac{\mu
\chi}{\chi+\frac{\mu}{4\delta}}\right\}.
\end{equation}
If we set $\theta_k\equiv\theta\in\left](1+\alpha)^{-1}, 1\right]$, $\tau_{k}\equiv\tau$ and
$\gamma_{k}\equiv\gamma$ with
\begin{equation}
\label{e:deftaugamma}
\tau = \dfrac{2\beta\mu}{\mu +
4\beta\rho}
\quad\text{and}\quad \gamma =
\dfrac{2\mu\delta}{\mu+4\delta\chi},
\end{equation}
we obtain linear convergence. That is, for every $k\in \mathbb{N}$,
\begin{multline}
\label{clec}
\left(\chi(1-\omega) + \dfrac{\mu}{4\delta}\right)\left\|u^{k}-\hat{u}\right\|^{2} +
\left(\rho +
\dfrac{\mu}{4\beta}\right)\left\|x^{k}-\hat{x}\right\|^{2}\\
\leq\omega^{k}\left(\left(\chi + \dfrac{\mu}{4\delta}\right)\left\|u^{0}-\hat{u}\right\|^{2}
+
\left(\rho + \dfrac{\mu}{4\beta}\right)\left\|x^{0}-\hat{x}\right\|^{2}\right),
\end{multline}
where $\omega =(1+\theta)/(2+\alpha)\in \left[(1+\alpha)^{-1},\theta\right[.$
\end{enumerate} \end{theorem} \begin{proof}
\ref{thm:maini}:
Fix $k\in\ensuremath{\mathbb N}$ and let $(\hat{x},\hat{u})$ be a solution to Problem~\ref{prob:main}. We
have
$\hat{x}\in\ensuremath{\operatorname{Fix}} T$, $\hat{u}\in V$ and, using $B\ensuremath{\mbox{\small$\,\square\,$}} D=(B^{-1}+D^{-1})^{-1}$, we
deduce $-(L^*\hat{u}+C\hat{x})\in A\hat{x}$ and $L\hat{x}-D^{-1}\hat{u}\in
B^{-1}\hat{u}$.
Therefore,
since \eqref{e:alg2} yields
\begin{equation}
\label{e:algoinc}
\begin{cases}
\frac{x^k-p^{k+1}}{\tau_{k}} - L^{*}u^{k+1} -Cx^k\in Ap^{k+1}\\
\frac{u^k -\eta^{k+1}}{\gamma_{k}} + L\bar{x}^{k} -D^{-1}u^k\in B^{-1}\eta^{k+1}
\end{cases}
\end{equation}
and $A$ and $B^{-1}$ are $\rho$ and $\chi$-strongly monotone, respectively, we deduce
\begin{multline}
\label{e:cocoer}
\!\!\Scal{\dfrac{x^k\!-\!p^{k+1}}{\tau_{k}} - L^{*}(u^{k+1}\!-\hat{u})}{\!p^{k+1}\!-\hat{x}\!}+
\Scal{\dfrac{u^k\!-\eta^{k+1}}{\gamma_{k}}\!+\!L(\bar{x}^{k} -
\hat{x})}{\!\eta^{k+1}\!-\hat{u}\!}\\
-\scal{Cx^k-C\hat{x}}{p^{k+1}-\hat{x}}-
\scal{D^{-1}u^k-D^{-1}\hat{u}}{\eta^{k+1}-\hat{u}}\\
\ge\rho\left\|p^{k+1}-\hat{x}\right\|^2+\chi\|\eta^{k+1}-\hat{u}\|^2.
\end{multline}
From the cocoercivity of $C$ and $D^{-1}$ we have from $ab\le \beta
a^2+b^2/(4\beta)$ that
\begin{align}
\label{e:parteC}
\scal{Cx^k-C\hat{x}}{p^{k+1}-\hat{x}}&=\scal{Cx^k-C\hat{x}}{p^{k+1}-{x}^k}+
\scal{Cx^k-C\hat{x}}{x^{k}-\hat{x}}\nonumber\\
&\ge-\|Cx^k-C\hat{x}\|\|p^{k+1}-{x}^k\|+\beta\|Cx^k-C\hat{x}\|^2\nonumber\\
&\ge -\frac{\|p^{k+1}-{x}^k\|^2}{4\beta},
\end{align}
and, analogously, $ \scal{D^{-1}u^k-D^{-1}\hat{u}}{\eta^{k+1}-\hat{u}}\ge
-\frac{\|\eta^{k+1}-{u}^k\|^2}{4\delta}$.
Hence, by using
\cite[Lemma~2.12(i)]{14.Livre1} in \eqref{e:cocoer}, we deduce
\begin{align}
\label{e:ec4}
\frac{\|x^k\!-\hat{x}\|^2}{\tau_{k}}+\frac{\|u^k\!-\hat{u}\|^2}{\gamma_{k}}
&\geq \left(\!2\rho+\!\frac{1}{\tau_k}\!\right)\left\|p^{k+1}-\hat{x}\right\|^2
+\left(\!2\chi+\!\frac{1}{\gamma_k}\!\right)\|\eta^{k+1}-\hat{u}\|^2\nonumber\\
&\hspace{-.5cm}+ 2\left[\left\langle
L(p^{k+1}-\hat{x}) \,|\,
u^{k+1}-\hat{u}\right\rangle-\left\langle L(\bar{x}^k -\hat{x}) \,|\,
\eta^{k+1}-\hat{u}\right\rangle\right]\nonumber\\
&\hspace{-.5cm}
+\|\eta^{k+1}\!-{u}^k\|^2\left(\!\frac{1}{\gamma_{k}}-\frac{1}{2\delta}\!\right)
+\|p^{k+1}\!-x^k\|^2\left(\!\frac{1}{\tau_{k}}-\frac{1}{2\beta}\!\right).
\end{align}
Moreover, \eqref{e:alg2}, $\mbox{ran}(L)\subset V$ and $u^k-\eta^k\in V^{\bot}$, for
every $k\in\ensuremath{\mathbb N}$, yield
\begin{align}
\label{e:parteL}
\scal{L(p^{k+1}-\hat{x})}{u^{k+1}-\hat{u}}
-\scal{L(\bar{x}^k -\hat{x})}{\eta^{k+1}-\hat{u}}&\nonumber\\
&\hspace{-5.8cm}= \scal{L(p^{k+1}-\hat{x})}{u^{k+1}-\hat{u}}-\scal{L({x}^k
-\hat{x})}{\eta^{k+1}-\hat{u}}
\nonumber\\
&\hspace{-5.4cm}-\theta_{k-1}\scal{L(p^k-x^{k-1})}{\eta^{k+1}-\hat{u}}\nonumber\\
&\hspace{-5.8cm}= \scal{L(p^{k+1}-{x}^k)}{\eta^{k+1}-\hat{u}}
-\theta_{k-1}\scal{L(p^k-x^{k-1})}{\eta^{k+1}-\hat{u}}\nonumber\\
&\hspace{-5.8cm}= \scal{L(p^{k+1}-{x}^k)}{\eta^{k+1}-\hat{u}}
-\theta_{k-1}\scal{L(p^k-x^{k-1})}{\eta^{k+1}-u^k}\nonumber\\
&\hspace{-5.4cm}-\theta_{k-1}\scal{L(p^k-x^{k-1})}{\eta^{k}-\hat{u}}\nonumber\\
&\hspace{-5.8cm}\ge
\scal{L(p^{k+1}-{x}^k)}{\eta^{k+1}-\hat{u}}-\theta_{k-1}\|L\|\|p^k-x^{k-1}\|\|\eta^{k+1}-u^k\|\nonumber\\
&\hspace{-5.3cm}-\theta_{k-1}\scal{L(p^k-x^{k-1})}{\eta^{k}-\hat{u}},
\end{align}
which, together with \eqref{e:ec4}, yield \eqref{e:ineqmain}.
\ref{thm:mainii}: For every $k\in\ensuremath{\mathbb N}$,
it follows from Theorem~\ref{thm:main}\eqref{thm:maini}, \cite[Lemma~2.1]{20.Opti04},
$\rho=\chi=0$, $\theta_k\equiv 1$,
$\tau_k\equiv\tau$, $\gamma_k\equiv\gamma$, and the properties of $T$ and $P_X$ that
\begin{align}
\label{e:aux1}
\frac{\|p^k-\hat{x}\|^2}{\tau}+ \frac{\|\eta^k-\hat{u}\|^2}{\gamma}
&\geq \frac{\|p^{k+1}-\hat{x}\|^2}{\tau}
+\left(\frac{1-\alpha}
{\alpha}\right)\frac{\|x^k-p^k\|^2}{\tau}+\frac{\|u^k-\eta^k\|^2}{\gamma}\nonumber\\
&\hspace{-2cm}+\frac{\|\eta^{k+1}\!-\hat{u}\|^2}{\gamma}
+\|p^{k+1}\!-x^k\|^2\!\left(\!\frac{1}{\tau}-\frac{1}{2\beta}\!\right)\! +
\|\eta^{k+1}\!-u^k\|^2\!\left(\!\frac{1}{\gamma}-\frac{1}{2\delta}\!\right)\nonumber\\
&\hspace{-2cm}+2\scal{L(p^{k+1}-{x}^k)}{\eta^{k+1}-\hat{u}}
-2\scal{L(p^k-x^{k-1})}{\eta^{k}-\hat{u}} \nonumber\\
&\hspace{-2cm}-2\|L\|\|p^k-x^{k-1}\|\|\eta^{k+1}-u^k\|\nonumber\\
&\hspace{-2cm}\geq \frac{\|p^{k+1}-\hat{x}\|^2}{\tau}
+\frac{\|u^k-\eta^k\|^2}{\gamma}+
\|\eta^{k+1}-u^k\|^2\left(\frac{1}{\gamma}-\frac{1}{2\delta}-\frac{1}{\nu}\right)
\nonumber\\
&\hspace{-2cm}+
\left(\frac{1-\alpha}
{\alpha}\right)\frac{\|x^k-p^k\|^2}{\tau}+\frac{\|\eta^{k+1}-\hat{u}\|^2}{\gamma}
+\|p^{k+1}-x^k\|^2\left(\frac{1}{\tau}-\frac{1}{2\beta}\right)
\nonumber\\
&\hspace{-2cm}+2\scal{L(p^{k+1}-{x}^k)}{\eta^{k+1}-\hat{u}}
-2\scal{L(p^k-x^{k-1})}{\eta^{k}-\hat{u}} \nonumber\\
&\hspace{-2cm}-\nu\|L\|^2\|p^k-x^{k-1}\|^2,
\end{align}
for every $\nu>0$. If we let
$\varepsilon=\left[\left(\frac{1}{\tau}-\frac{1}{2\beta}\right)\left(\frac{1}{\gamma}-\frac{1}{2\delta}\right)-\|L\|^2\right]
\left(\frac{\beta\tau}{2\beta-\tau}\right)>0,$
and we choose $\nu=(\frac{1}{\gamma}-\frac{1}{2\delta}-\varepsilon)^{-1}>0$, we have
$\nu\|L\|^2=
(\frac{1}{\tau}-\frac{1}{2\beta})-\nu\varepsilon(\frac{1}{\tau}-\frac{1}{2\beta})$. Hence,
from \eqref{e:aux1} we have
\begin{multline}
\label{e:fejer}
\Upsilon_k+\frac{\left\|p^k-\hat{x}\right\|^2}{\tau}\ge\Upsilon_{k+1}+
\frac{\left\|p^{k+1}-\hat{x}\right\|^2}{\tau}+\left(\frac{1-\alpha}
{\alpha}\right)\frac{\|x^k-p^k\|^2}{\tau}+\frac{\|u^k-\eta^k\|^2}{\gamma}\\+
\varepsilon\|\eta^{k+1}-u^k\|^2+\nu\varepsilon\left(\frac{1}{\tau}-\frac{1}{2\beta}\right)\|p^k-x^{k-1}\|^2,
\end{multline}
where, for every $k\in\ensuremath{\mathbb N}$,
\begin{equation}
\Upsilon_k=\frac{\|\eta^k-\hat{u}\|^2}{\gamma} +
2\scal{L(p^k-x^{k-1})}{\eta^{k}-\hat{u}}+\left(\frac{1}{\tau}-\frac{1}{2\beta}\right)\|p^k-x^{k-1}\|^2.
\end{equation}
Note that from \eqref{e:parcond} we have, for every $k\in\ensuremath{\mathbb N}$,
\begin{align}
\Upsilon_k&\ge \frac{\|\eta^k-\hat{u}\|^2}{\gamma} +
2\scal{L(p^k-x^{k-1})}{\eta^{k}-\hat{u}}+
\frac{\|L\|^2}{\left(\frac{1}{\gamma}-\frac{1}{2\delta}\right)}\|p^k-x^{k-1}\|^2\nonumber\\
&\ge\frac{\|\eta^k-\hat{u}\|^2}{\gamma} +
2\scal{L(p^k-x^{k-1})}{\eta^{k}-\hat{u}}+
\gamma\|L\|^2\|p^k-x^{k-1}\|^2\nonumber\\
&\ge\frac{1}{\gamma}\|\eta^k-\hat{u}+\gamma L(p^k-x^{k-1})\|^2\ge 0,
\end{align}
and, hence, from \eqref{e:fejer} we deduce that
$(\Upsilon_k+\|p^k-\hat{x}\|^2/\tau)_{k\in\ensuremath{\mathbb N}}$ is a Fej\'er sequence. We
deduce
from \cite[Lemma~5.31]{14.Livre1} that
$(\eta^k)_{k\in\ensuremath{\mathbb N}}$ and $(p^k)_{k\in\ensuremath{\mathbb N}}$ are bounded,
\begin{equation}
\label{e:tozero}
x^k-p^k\to 0,\quad u^k-\eta^k\to 0,\quad \eta^{k+1}-u^k\to 0,\quad \text{and}\quad
p^k-x^{k-1}\to 0.
\end{equation}
Therefore, there
exist weak accumulation points $\bar{x}$ and $\bar{u}$ of the
sequences $(p^k)_{k\in\ensuremath{\mathbb N}}$ and $(\eta^k)_{k\in\ensuremath{\mathbb N}}$, respectively,
say $p^{k_n}\rightharpoonup \bar{x}$ and $\eta^{k_n}\rightharpoonup \bar{u}$ and, from
\eqref{e:tozero}, we have $u^{k_n}\rightharpoonup \bar{u}$,
$u^{k_n+1}\rightharpoonup \bar{u}$,
$p^{k_n}\rightharpoonup \bar{x}$, $p^{k_n+1}\rightharpoonup \bar{x}$,
$x^{k_n-1}\rightharpoonup \bar{x}$
and
$\bar{x}^{k_n}=x^{k_n}+p^{k_n}-x^{k_n-1}\rightharpoonup \bar{x}$. Since $T$ and $P_V$
are nonexpansive, $\ensuremath{\operatorname{Id}}\,-T$ and $\ensuremath{\operatorname{Id}}\,-P_V$ are maximally monotone
\cite[Example~20.29]{14.Livre1} and, therefore, they have weak-strong closed graphs
\cite[Proposition~20.38]{14.Livre1}.
Hence, it follows from \eqref{e:tozero} that $(\ensuremath{\operatorname{Id}}\,-T)p^k\to 0$ and $(\ensuremath{\operatorname{Id}}\,-P_V)\eta^k\to 0$
and, hence, $(\bar{x},\bar{u})\in \ensuremath{\operatorname{Fix}} T\times V$.
Moreover,
\eqref{e:algoinc} can be written equivalently as
\begin{equation}
(v^{k_n},w^{k_n})\in (\boldsymbol{M}+\boldsymbol{Q})(p^{k_n+1},\eta^{k_n+1}),
\end{equation}
where $\boldsymbol{M}\colon (p,\eta)\mapsto (Ap+L^*\eta)\times(B^{-1}\eta-Lp)$ is
maximally
monotone \cite[Proposition~2.7(iii)]{15.Siopt1}, $\boldsymbol{Q}\colon
(p,\eta)\mapsto
(Cp,D^{-1}\eta)$ is $\min\{\beta,\delta\}-$cocoercive, and
\begin{equation}
\begin{cases}
v^{k}:=\frac{x^k-p^{k+1}}{\tau}-L^*(u^{k+1}-\eta^{k+1})+Cp^{k+1}-Cx^k\\
w^{k}:=\frac{u^k-\eta^{k+1}}{\gamma}+L(x^{k}-p^{k+1}+p^k-x^{k-1})+D^{-1}\eta^{k+1}
-D^{-1}u^k.
\end{cases}
\end{equation}
It follows from
\cite[Corollary~25.5]{14.Livre1} that $\boldsymbol{M}+\boldsymbol{Q}$ is maximally
monotone and, since \eqref{e:tozero} and the uniform continuity of $C$, $D$, and $L$
yield $v^{k_n}\to0$ and $w^{k_n}\to 0$, we deduce from the weak-strong closedness
of the
graph of $\boldsymbol{M}+\boldsymbol{Q}$ that
$(\bar{x},\bar{u})$ is a solution to Problem~\ref{prob:main}, and the result follows.
\ref{thm:mainiii}: Fix $ k\in\ensuremath{\mathbb N}$. Since $\rho>0$, $\delta=+\infty$, and $\chi=0$,
from Theorem~\ref{thm:main}\eqref{thm:maini} we have
\begin{align}
\label{e:ineqmainiii}
\frac{\|x^k-\hat{x}\|^2}{\tau_{k}}+\frac{\|u^k-\hat{u}\|^2}{\gamma_{k}}
&\geq (2\rho\tau_k+1)\frac{\tau_{k+1}}{\tau_k}\frac{\|p^{k+1}-\hat{x}\|^2}{\tau_{k+1}}
+\frac{\gamma_{k+1}}{\gamma_k}\frac{\|\eta^{k+1}-\hat{u}\|^2}{\gamma_{k+1}}
\nonumber\\
&\hspace{-2cm}+2\scal{L(p^{k+1}-{x}^k)}{\eta^{k+1}-\hat{u}}
-2\theta_{k-1}\scal{L(p^k-x^{k-1})}{\eta^{k}-\hat{u}} \nonumber\\
&\hspace{-2cm} +\left\|p^{k+1}-x^k\right\|^2\left(\frac{1}{\tau_{k}}-\frac{1}{2\beta}\right)
-\theta_{k-1}^2\gamma_k\|L\|^2\|p^k-x^{k-1}\|^2,
\end{align}
where we have used $2ab\le a^2/\gamma+\gamma b^2$.
Moreover, it follows from \eqref{e:defparam} that
\begin{equation}
\label{e:paraccel}(\forall k\in\ensuremath{\mathbb N})\quad
(1+2\rho\tau_{k})\dfrac{\tau_{k+1}}{\tau_{k}} =
(1+2\rho\tau_{k})\theta_{k} =
\dfrac{1}{\theta_{k}} = \dfrac{\gamma_{k+1}}{\gamma_{k}},
\end{equation}
which, combined with \eqref{e:ineqmainiii}, yields
\begin{align}
\label{e:ineqmainiii2}
\frac{\|x^k-\hat{x}\|^2}{\tau_{k}}+\frac{\|u^k-\hat{u}\|^2}{\gamma_{k}}
&\geq \frac{1}{\theta_k}\left(\frac{\|p^{k+1}-\hat{x}\|^2}{\tau_{k+1}}+
\frac{\|\eta^{k+1}-\hat{u}\|^2}{\gamma_{k+1}}\right)\nonumber\\
&\hspace{-2cm}+2\scal{L(p^{k+1}-{x}^k)}{\eta^{k+1}-\hat{u}}
-2\theta_{k-1}\scal{L(p^k-x^{k-1})}{\eta^{k}-\hat{u}}\nonumber\\
&\hspace{-2cm}+\left\|p^{k+1}-x^k\right\|^2\left(\frac{1}{\tau_{k}}-\frac{1}{2\beta}\right)
-\theta_{k-1}^2\gamma_k\|L\|^2\|p^k-x^{k-1}\|^2.
\end{align}
Now define
\begin{equation}
\label{e:defDelta}
(\forall k\in \mathbb{N})\quad\Delta_{k}=
\dfrac{\left\|x^k-\hat{x}\right\|^2}{\tau_{k}}+\dfrac{\left\|u^k-\hat{u}\right\|^2}{\gamma_{k}}
.
\end{equation}
Dividing \eqref{e:ineqmainiii2} by $\tau_{k}$ and using
$\theta_{k}\tau_{k} =\tau_{k+1}$
we obtain from the nonexpansivity of $P_V$ and $T$ that
\begin{align}
\label{e:auxsxs}
\dfrac{\Delta_{k}}{\tau_{k}} &\geq \dfrac{\Delta_{k+1}}{\tau_{k+1}}
+\frac{2}{\tau_k}\scal{L(p^{k+1}-{x}^k)}{\eta^{k+1}-\hat{u}}
-\frac{2}{\tau_{k-1}}\scal{L(p^k-x^{k-1})}{\eta^{k}-\hat{u}} \nonumber\\
&\hspace{.5cm}+\frac{\left\|p^{k+1}-x^k\right\|^2}{\tau_k^2}\left(1-\frac{\tau_k}{2\beta}\right)
-\gamma_k\tau_k\|L\|^2\frac{\|p^k-x^{k-1}\|^2}{\tau_{k-1}^2}.
\end{align}
In addition, \eqref{e:parcond} with equality
reduces to
\begin{equation}
\label{e:parcondiii}
\|L\|^2=
\left(\frac{1}{\tau_0}-\frac{1}{2\beta}\right)\frac{1}{\gamma_0}\quad
\Leftrightarrow\quad \gamma_0\tau_0\|L\|^2=
\left(1-\frac{\tau_0}{2\beta}\right).
\end{equation}
Since, for every $k\in\ensuremath{\mathbb N}\setminus\{0\}$, $\gamma_k\tau_k=\gamma_0\tau_0$ and
$(\tau_{k})_{k\in\ensuremath{\mathbb N}}$ is decreasing (see \eqref{e:defparam}),
we have from \eqref{e:parcondiii} that
\begin{equation}
\label{e:paraccel2}
\gamma_{k}\tau_{k}\|L\|^2 =\gamma_{0}\tau_{0}\|L\|^2=
\left(1-\frac{\tau_0}{2\beta}\right)\le \left(1-\frac{\tau_{k-1}}{2\beta}\right),
\end{equation}
and \eqref{e:auxsxs} yields
\begin{align}
\label{e:almost}
\dfrac{\Delta_{k}}{\tau_{k}} &\geq \dfrac{\Delta_{k+1}}{\tau_{k+1}}
+\frac{\left\|p^{k+1}-x^k\right\|^2}{\tau_k^2}\left({1}-\frac{\tau_{k}}{2\beta}\right)
-\frac{\|p^k-x^{k-1}\|^2}{\tau_{k-1}^2}\left({1}-\frac{\tau_{k-1}}{2\beta}\right)
\nonumber\\
&\hspace{.5cm}+\frac{2}{\tau_k}\scal{L(p^{k+1}-{x}^k)}{\eta^{k+1}-\hat{u}}
-\frac{2}{\tau_{k-1}}\scal{L(p^k-x^{k-1})}{\eta^{k}-\hat{u}}.
\end{align}
Now fix $N\geq 1$. Summing \eqref{e:almost} from $k=0$ to $k=N-1$, using
$\bar{x}^0=x^0$, setting $p^{0}=x^{0}=:x^{-1}$, and invoking $u^N=P_V\eta^N$ and
$\ensuremath{\operatorname{ran}} L\subset V$, we obtain
\begin{align}
\dfrac{\Delta_{0}}{\tau_{0}} &\geq \dfrac{\Delta_{N}}{\tau_{N}} +
\dfrac{\left\|p^{N}-x^{N-1}\right\|^{2}}{\tau_{N-1}^{2}}\left({1}-\frac{\tau_{N-1}}{2\beta}\right)+\dfrac{2}{\tau_{N-1}}
\left\langle
L(p^{N}-x^{N-1}) \,|\, u^{N} - \hat{u}\right\rangle \nonumber\\
&\geq \dfrac{\Delta_{N}}{\tau_{N}} - \frac{\|L\|^{2}}{(1-\frac{\tau_{N-1}}{2\beta})}
\left\|u^{N}
-\hat{u}\right\|^{2}\nonumber\\
& =
\dfrac{1}{\tau_{N}}\left(\Delta_{N}-\frac{\gamma_N\tau_N\|L\|^{2}}{(1-\frac{\tau_{N-1}}{2\beta})}
\frac{\left\|u^{N} -\hat{u}\right\|^{2}}{\gamma_N}\right)\geq
\frac{\|x^N-\hat{x}\|^2}{\tau_N^2},
\label{e:ec12}
\end{align}
where the last inequality follows from \eqref{e:paraccel2} and \eqref{e:defDelta}.
Multiplying \eqref{e:ec12} by $\tau_{N}^{2}$ and using $\gamma_{N}\tau_{N} =
\gamma_{0}\tau_{0}$ and \eqref{e:parcondiii}, we conclude that
\begin{equation}
\left\|x^{N}-\hat{x}\right\|^{2} \leq
\tau_{N}^{2}\left(\dfrac{\left\|x^{0}-\hat{x}\right\|^{2}}{\tau_{0}^{2}} +
\frac{\|L\|^{2}}{(1-\frac{\tau_0}{2\beta})}
\left\|u^{0}-\hat{u}\right\|^{2}\right).
\end{equation}
The result follows from $\lim_{N \to \infty} N\rho\tau_{N}
= 1$ \cite[Corollary~1]{3.CP}.
\ref{thm:mainiv}: Fix $k\in\ensuremath{\mathbb N}$. Note that \eqref{e:deftaugamma} yields
$\left(\frac{1}{\tau} - \frac{1}{2\beta}\right)
\left(\frac{1}{\gamma} -
\frac{1}{2\delta}\right) = \left\|L\right\|^{2}$ and, from
\eqref{e:ec4}, $u^{k+1}=P_V\eta^{k+1}$ and $\ensuremath{\operatorname{ran}} L\subset V$ we have
\begin{align}
\label{kec1}
\frac{\left\|u^k-\hat{u}\right\|^2}{2\gamma} +
\frac{\left\|x^k-\hat{x}\right\|^2}{2\tau}
&\geq
(2\rho\tau+1)\frac{\|p^{k+1}-\hat{x}\|^2}{2\tau}+(2\chi\gamma+1)\frac{\|\eta^{k+1}-\hat{u}\|^2}{2\gamma}
\nonumber\\
&\hspace{-1cm}+\frac{\big\|p^{k+1}-x^k\big\|^2 }{2}
\left(\frac{1}{\tau}-\frac{1}{2\beta}\right) +
\frac{\|\eta^{k+1}-u^k\|^2}{2}\left(\frac{1}{\gamma}-\frac{1}{2\delta}\right) \nonumber\\
&\hspace{-1cm}+\scal{L(p^{k+1}-\overline{x}^k)}{\eta^{k+1}-\hat{u}}.
\end{align}
Hence, by defining
\begin{equation}
\label{e:defOmega}
(\forall k\in\ensuremath{\mathbb N})\quad \Omega_{k} := \left(\chi +\frac{\mu}{4\delta}\right) \|u^{k}-\hat{u}
\|^{2} +
\left(\rho
+\frac{\mu}{4\beta}\right) \|x^{k}-\hat{x} \|^{2},
\end{equation}
multiplying \eqref{kec1} by $\mu$ and using \eqref{e:defmualpha},
\eqref{e:deftaugamma}, $u^{k+1} =P_{V}
(\eta^{k+1})$, $\ensuremath{\operatorname{ran}}(L)\subset V$, and the nonexpansivity of $T$ and $P_V$
we have
\begin{align}
\label{kec3}
\Omega_{k} &\geq \Omega_{k+1} +\mu\rho \big\|p^{k+1}-\hat{x}\big\|^{2} +\mu \chi
\big\|\eta^{k+1}-\hat{u}\big\|^{2}+\rho \big\|p^{k+1}-x^{k}\big\|^{2} \nonumber\\
& \hspace{4cm}
+\chi
\big\|\eta^{k+1}-u^{k}\big\|^{2} +\mu
\scal{L(p^{k+1}-\overline{x}^k)}{\eta^{k+1}-\hat{u}}\nonumber\\
& \geq (1+\alpha)\Omega_{k+1} + \rho \big\|p^{k+1}-x^{k}\big\|^{2} +\chi
\big\|u^{k+1}-u^{k}\big\|^{2}\nonumber\\
&\hspace{4cm}+ \mu \scal{L(p^{k+1}-\overline{x}^k)}{u^{k+1}-\hat{u}}.
\end{align}
Moreover, for every $\omega,\lambda>0$ we have
\begin{align}
\label{kec4}
\mu &\scal{L(p^{k+1}-\overline{x}^k)}{u^{k+1}-\hat{u}} \nonumber\\
&\hspace{1.5cm}= \mu
\scal{L(p^{k+1}-x^{k})}{u^{k+1}-\hat{u}} - \mu\theta
\scal{L(p^{k}-x^{k-1})}{u^{k+1}-\hat{u}}\nonumber\\
&\hspace{1.5cm}=\mu \scal{L(p^{k+1}-x^{k})}{u^{k+1}-\hat{u}} - \omega \mu
\scal{L(p^{k}-x^{k-1})}{u^{k}-\hat{u}}\nonumber\\
&\hspace{1.8cm}- \omega\mu \scal{L(p^{k}-x^{k-1})}{u^{k+1}-u^{k}}-(\theta-\omega)
\mu
\scal{L(p^{k}-x^{k-1})}{u^{k+1}-\hat{u}}\nonumber\\
&\hspace{1.5cm}\geq \mu \scal{L(p^{k+1}-x^{k})}{u^{k+1}-\hat{u}} - \omega \mu
\scal{L(p^{k}-x^{k-1})}{u^{k}-\hat{u}}\nonumber\\
&\hspace{1.8cm}-\omega\mu\left\|L\right\|\left(\dfrac{\lambda\left\|p^{k}-x^{k-1}\right\|^{2}}{2}
+
\dfrac{\left\|u^{k+1}-u^{k}\right\|^{2}}{2\lambda}\right)\nonumber\\
&\hspace{1.8cm}-
(\theta-\omega)\mu\left\|L\right\|\left(\dfrac{\lambda\left\|p^{k}-x^{k-1}\right\|^{2}}{2}
+ \dfrac{\left\|u^{k+1}-\hat{u}\right\|^{2}}{2\lambda}\right)\nonumber\\
&\hspace{1.5cm}=\mu \scal{L(p^{k+1}-x^{k})}{u^{k+1}-\hat{u}} - \omega \mu
\scal{L(p^{k}-x^{k-1})}{u^{k}-\hat{u}}\nonumber\\
&\hspace{1.8cm}
-\mu\theta\lambda\left\|L\right\| \dfrac{\left\|p^{k}-x^{k-1}\right\|^{2}}{2} -
\dfrac{\omega\mu\left\|L\right\| \left\|u^{k+1}-u^{k}\right\|^{2}}{2\lambda}\nonumber\\
&\hspace{1.8cm}-(\theta-\omega)\mu\left\|L\right\|\dfrac{\left\|u^{k+1}-\hat{u}\right\|^{2}}{2\lambda}.
\end{align}
By choosing $\lambda=\omega\sqrt{\dfrac{\rho}{\chi}}$, from \eqref{kec3},
\eqref{kec4} and \eqref{e:defmualpha}, we obtain
\begin{equation}
\label{kec5}
\begin{split}
\Omega_{k}&\geq \dfrac{\Omega_{k+1}}{\omega} + \left(1+\alpha
-\dfrac{1}{\omega}\right)\Omega_{k+1}+ \rho \left\|p^{k+1}-x^{k}\right\|^{2} \\
& \hspace{.5cm}+\mu \scal{L(p^{k+1}-x^{k})}{u^{k+1}-\hat{u}}- \omega \mu
\scal{L(p^{k}-x^{k-1})}{u^{k}-\hat{u}}\\
&\hspace{.5cm}-\omega\theta\rho
\left\|p^{k}-x^{k-1}\right\|^{2}-
\left(\dfrac{\theta-\omega}{\omega}\right)\chi\left\|u^{k+1}-\hat{u}\right\|^{2}.
\end{split}
\end{equation}
Since $\theta\in\left](1+\alpha)^{-1},1\right]$, by setting
$\omega=\dfrac{1+\theta}{2+\alpha}\in \left[(1+\alpha)^{-1},\theta\right[$, we have
$1+\alpha -\dfrac{1}{\omega} =
\dfrac{\theta-\omega}{\omega}>0$. Hence, since \eqref{e:defOmega} yields
$\Omega_{k+1}\geq \chi
\left\|u^{k+1}-\hat{u}\right\|^{2}$, from
\eqref{kec5} and $\theta\le 1$ we have
\begin{equation}
\label{kec6}
\begin{split}
\Omega_{k}&\geq \dfrac{\Omega_{k+1}}{\omega} + \rho \left\|p^{k+1}-x^{k}\right\|^{2}
-\omega\rho
\left\|p^{k}-x^{k-1}\right\|^{2}\\
& \hspace{.5cm}+\mu \scal{L(p^{k+1}-x^{k})}{u^{k+1}-\hat{u}}- \omega \mu
\scal{L(p^{k}-x^{k-1})}{u^{k}-\hat{u}}.
\end{split}
\end{equation}
Moreover, using $p^{0}=x^{0}=:x^{-1}$, multiplying \eqref{kec6} by
$\omega^{-k}$ and adding from $k=0$ to $k=N-1$, we conclude from the definition of
$\mu$ that
\begin{align}
\label{kec7}
\Omega_{0}&\geq \omega^{-N}\Omega_{N} +\omega^{-N+1} \rho
\left\|p^{N}-x^{N-1}\right\|^{2}+\mu\omega^{-N+1}
\scal{L(p^{N}-x^{N-1})}{u^{N}-\hat{u}}\nonumber\\
&\geq \omega^{-N}\Omega_{N} +\omega^{-N+1} \rho
\left\|p^{N}-x^{N-1}\right\|^{2}\nonumber\\
& \hspace{.5cm}-\mu\omega^{-N+1}\left\|L\right\|\left(\sqrt{\dfrac{\rho}{\chi}}
\dfrac{\left\|p^{N}-x^{N-1}\right\|^{2}}{2} + \sqrt{\dfrac{\chi}{\rho}}
\dfrac{\left\|u^{N}-\hat{u}\right\|^{2}}{2}\right)\nonumber\\
& = \omega^{-N}\Omega_{N} - \omega^{-N+1}\chi \left\|u^{N}-\hat{u}\right\|^{2},
\end{align}
or, equivalently,
\begin{multline}
\label{kec8}
\omega^{N}\left(\left(\chi + \dfrac{\mu}{4\delta}\right)\left\|u^{0}-\hat{u}\right\|^{2} +
\left(\rho + \dfrac{\mu}{4\beta}\right)\left\|x^{0}-\hat{x}\right\|^{2}\right)\\
\geq \left(\chi(1-\omega) + \dfrac{\mu}{4\delta}\right)\left\|u^{N}-\hat{u}\right\|^{2} +
\left(\rho + \dfrac{\mu}{4\beta}\right)\left\|x^{N}-\hat{x}\right\|^{2},
\end{multline}
which proves the linear convergence since $\omega <\theta\le 1$. \end{proof}
\begin{remark}
\begin{enumerate}
\item Note that condition \eqref{e:parcond} is weaker than the condition
needed in \cite{2.vu}. Indeed, this condition in our case reads
$2\rho\min\{\beta,\delta\}>1$, where
$\rho=\min\left\{{\gamma}^{-1},{\tau}^{-1}\right\}(1-\sqrt{\tau\gamma\|L\|^2})$,
which implies $2\min\{\delta,\beta\}>\frac{1}{\rho}>\max\{\gamma,\tau\}$,
\begin{equation}
\left(1-\frac{\tau}{2\beta}\right)>\sqrt{\tau\gamma \|L\|^2}\quad \text{and}\quad
\left(1-\frac{\gamma}{2\delta}\right)>\sqrt{\tau\gamma \|L\|^2}.
\end{equation}
Thus, multiplying the last two inequalities we obtain
$
\left(1-\frac{\tau}{2\beta}\right)\left(1-\frac{\gamma}{2\delta}\right)>\tau\gamma\|L\|^2,
$
which implies \eqref{e:parcond}. Our condition is strictly weaker, as can be seen
in Figure~\ref{fig:regions}, in which we plot the case $\|L\|=1$ and $\delta=\beta=b$ for
$b=1$, $b=1/2$, and $b=1/4$. That is, we compare the regions
\begin{align}
R_b&=\Menge{(\tau,\gamma)\in[0,2b]\times[0,2b]}
{\min\left\{\frac{1-\sqrt{\tau\gamma}}{\tau},\frac{1-\sqrt{\tau\gamma}}{\gamma}\right\}>\frac{1}{2b}}\\
S_b&=\Menge{(\tau,\gamma)\in[0,2b]\times[0,2b]}
{\left(1-\frac{\tau}{2b}\right)\left(1-\frac{\gamma}{2b}\right)>\tau\gamma}.
\end{align}
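This containment can be spot-checked numerically. The following pure-Python sketch (the helper names \texttt{in\_R} and \texttt{in\_S} are ours, not from the text) samples a grid of $(\tau,\gamma)$ for $b=1$ and $\|L\|=1$ and verifies both that $R_b\subseteq S_b$ and that the inclusion is strict:

```python
import math

def in_R(tau, gam, b):
    # region R_b: min{(1-sqrt(tau*gam))/tau, (1-sqrt(tau*gam))/gam} > 1/(2b)
    s = math.sqrt(tau * gam)
    if s >= 1.0:
        return False
    return min(1.0 / tau, 1.0 / gam) * (1.0 - s) > 1.0 / (2.0 * b)

def in_S(tau, gam, b):
    # region S_b: (1 - tau/(2b)) * (1 - gam/(2b)) > tau*gam
    return (1.0 - tau / (2.0 * b)) * (1.0 - gam / (2.0 * b)) > tau * gam

b, n, strict = 1.0, 200, False
for i in range(1, n):
    for j in range(1, n):
        tau, gam = 2.0 * b * i / n, 2.0 * b * j / n
        r, s = in_R(tau, gam, b), in_S(tau, gam, b)
        assert (not r) or s  # every sampled point of R_b lies in S_b
        if s and not r:
            strict = True    # e.g. (tau, gam) = (1.8, 0.02) is in S_b only
assert strict
```

Off the diagonal $\tau=\gamma$, points such as $(\tau,\gamma)=(1.8,0.02)$ satisfy our condition but not the one of \cite{2.vu}.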
\begin{figure}
\caption{We plot regions
$R_b$ in blue and $S_b$ in orange.
Left: case $b=1$, Center: case $b=1/2$, Right: case
$b=1/4$. Note that in the case $\tau=\gamma$ the regions
coincide.}
\label{fig:regions}
\end{figure}
\item It is not difficult to extend our method by replacing the averaged
quasi-nonexpansive operator $T$ by
$(\alpha_k)_{k\in\ensuremath{\mathbb N}}-$averaged quasi-nonexpansive operators $(T_k)_{k\in\ensuremath{\mathbb N}}$
varying at each iteration
and satisfying $\sup_{k\in\ensuremath{\mathbb N}}\alpha_k<1$. Indeed, as in \cite{20.Opti04}, we have to
assume that $x^k-T_kx^k\to 0$ and $x^k\ensuremath{\:\rightharpoonup\:} x$ imply
$x\in\cap_{k\in\ensuremath{\mathbb N}}\ensuremath{\operatorname{Fix}} T_k$, which is satisfied in several cases. In
particular, if we set, for every $k\in\ensuremath{\mathbb N}$, $T_k:=J_{\gamma_kM}(\ensuremath{\operatorname{Id}}\,-\gamma_k N)$,
where $M\colon \ensuremath{{\mathcal H}}\to 2^{\ensuremath{{\mathcal H}}}$ is maximally monotone and $N\colon\ensuremath{{\mathcal H}}\to\ensuremath{{\mathcal H}}$ is
$\xi$-cocoercive, then, if $\gamma_k\in]0,2\xi[$, $T_k$ is averaged nonexpansive and
$\cap_{k\in\ensuremath{\mathbb N}}\ensuremath{\operatorname{Fix}} T_k={\rm zer}(M+N)$. Therefore, our method using these operators
leads to a common solution of ${\rm zer}(M+N)$ and ${\rm zer}(A+L^*\circ B\circ
L+C)$. The previous example can also be tackled via Theorem~\ref{thm:main} by taking
$\gamma_k\equiv\gamma$ and $T_k\equiv T:=J_{\gamma M}(\ensuremath{\operatorname{Id}}\,-\gamma N)$; we
prefer to keep the constant operator case to avoid additional hypotheses and for
the sake of simplicity.
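As a concrete toy instance of the choice $T=J_{\gamma M}(\ensuremath{\operatorname{Id}}\,-\gamma N)$ (the data below is our own illustration, not from the paper), take $M=\partial\iota_{[0,+\infty[}$, so that $J_{\gamma M}$ is the projection onto $[0,+\infty[$, and $N\colon x\mapsto x-a$, which is $1$-cocoercive; iterating $T$ then reaches the unique zero of $M+N$:

```python
# Forward-backward operator T = J_{gamma M}(Id - gamma N) on a toy instance:
# M = subdifferential of the indicator of [0, +inf), so J_{gamma M} is the
# projection onto [0, +inf); N: x -> x - a is 1-cocoercive (xi = 1),
# and gamma must lie in ]0, 2*xi[.
a, gamma = -1.0, 0.5

def T(x):
    return max(0.0, x - gamma * (x - a))  # projection after a gradient step

x = 5.0
for _ in range(100):
    x = T(x)

# zer(M + N) = {0} when a = -1, and the iteration reaches this fixed point
assert abs(x) < 1e-9
```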
\item The method proposed in \cite{18.bot14} is an accelerated version of the method
proposed in
\cite{16.CombPes12} under the assumption that $A+C$ is
strongly monotone. Of course, this weaker assumption can also be used in our context,
but we prefer to keep the statement of Theorem~\ref{thm:main} simpler.
\item Theorem~\ref{thm:main}\eqref{thm:mainiii} generalizes the acceleration scheme
proposed in \cite{3.CP} to monotone inclusions
with a priori information. From these
results we derive an accelerated version of the methods in \cite{2.vu} in the
strongly monotone case when $T=\ensuremath{\operatorname{Id}}\,$ and $V=\ensuremath{{\mathcal G}}$. These accelerated versions, as far
as we know, have not been developed in the literature.
\item In the context of primal-dual problem \eqref{e:primal}-\eqref{e:dual},
\eqref{e:alg2}
reduces to
\begin{align}
\label{e:algoopt}
(\forall k\in\ensuremath{\mathbb N})\quad
&\left\lfloor
\begin{array}{ll}
\eta^{k+1}&=\ensuremath{\operatorname{prox}}_{\gamma_k g^*}(u^k+\gamma_k (L\bar{x}^k-\nabla\ell^*(u^k)))\\
u^{k+1}&=P_V\,\eta^{k+1}\\
p^{k+1}&=\ensuremath{\operatorname{prox}}_{\tau_k f}\left(x^k-\tau_k \left(L^*u^{k+1}+\nabla
h(x^k)\right)\right)\\
x^{k+1}&=T\,p^{k+1}\\
\bar{x}^{k+1}&=
x^{k+1}+\theta_k(p^{k+1}-x^{k}),
\end{array}
\right.
\end{align}
and our conditions on the parameters coincide with
\cite{13.Cpock16,13.LorPock15}.
Without strong convexity of $f$ and $g^*$, we deduce from
Theorem~\ref{thm:main}\eqref{thm:mainii} the weak convergence of the sequences
generated
by \eqref{e:algoopt}, generalizing results in \cite{1.condat,7.BAKS,13.LorPock15}. When
$f$
or $g^*$
is strongly convex, Theorem~\ref{thm:main}\eqref{thm:mainiii} yields an accelerated
and projected version of \cite{1.condat}. When $V=\ensuremath{{\mathcal G}}$ and $T=\ensuremath{\operatorname{Id}}\,$, this result
complements the
ergodic convergence rates obtained in \cite{13.Cpock16} and generalizes \cite{3.CP}.
When
$\ell^*=0$, $V=\ensuremath{{\mathcal G}}$, $T=\ensuremath{\operatorname{Id}}\,$, and $f$ and $g^*$ are strongly convex,
Theorem~\ref{thm:main}\eqref{thm:mainiv} yields
non-ergodic linear convergence of \cite{1.condat}, complementing
the ergodic linear convergence in \cite{13.Cpock16}.
The advantage of the algorithm \eqref{e:algoopt} with respect to
\cite{1.condat,3.CP} is that primal-dual iterates of the former are forced to be in
$X\times
V$ when $T=P_X$. This feature leads to a faster
algorithm in the context of constrained convex optimization, by choosing $X$ to be
some of the constraints. This can be observed in the particular instance developed
in \cite{7.BAKS} and in Section~\ref{sec6}, in which we provide some numerical
simulations.
\end{enumerate}
\end{remark}
\section{Application to constrained convex optimization} \label{sec6} In this section, we explore the advantages of the proposed method in constrained convex optimization. \subsection{Constrained convex optimization problem} \begin{problem}
\label{prob:numeric}
Let $f\in\Gamma_0(\ensuremath{\mathbb{R}}^N)$, let $R$ and $S$ be $m\times N$ and $n\times N$
real matrices, respectively, and let
$c\in\ensuremath{\mathbb{R}}^m$ and $d\in \ensuremath{\mathbb{R}}^n$. The problem is to
\begin{align}
\label{e:new}
\min_{x\in\ensuremath{\mathbb{R}}^N}&\,f(x)\quad
\text{s.t.}\quad R x=c\quad\text{and}\quad Sx=d,
\end{align}
under the assumption that solutions exist. \end{problem} Note that \eqref{e:new} can be written equivalently as $ \min_{x\in\ensuremath{\mathbb{R}}^N}f(x)+\iota_{\{b\}}( L x), $ where $L\colon x\mapsto (Rx,Sx)$ and $b=(c,d)\in\ensuremath{\mathbb{R}}^{m+n}$. Assume that $0\in{\rm
sri}(L(\ensuremath{\operatorname{dom}} f)-b)$. Note that, since $\ensuremath{\operatorname{prox}}_{\gamma\iota_{\{b\}}^*}=\ensuremath{\operatorname{Id}}\,-\gamma b$ \cite[Proposition~24.8(ix)]{14.Livre1}, the method proposed in \cite[Algorithm~1]{3.CP} in this case reads: given $x^0=\bar{x}^0\in\ensuremath{{\mathcal H}}$ and $u^0\in\ensuremath{{\mathcal G}}$, \begin{align}
(\forall k\in\ensuremath{\mathbb N})\quad
&\left\lfloor
\begin{array}{ll}
u^{k+1}&=u^k+\gamma(L\bar{x}^k-b)\\
x^{k+1}&=\ensuremath{\operatorname{prox}}_{\tau f}(x^k-\tau L^*u^{k+1})\\
\bar{x}^{k+1}&=
2x^{k+1}-x^{k},
\end{array}
\right.
\label{e:pruebaCP} \end{align}
where $\gamma\tau\|L\|^2<1$. The constraint is imposed via the Lagrange multiplier update in the first step of \eqref{e:pruebaCP}. This implies that the primal sequence $\{x^k\}_{k\in\ensuremath{\mathbb N}}$ does not necessarily satisfy any of the constraints. To ensure feasibility, we should project onto $L^{-1}b$ by considering the problem $\min_{x\in \ensuremath{\mathbb{R}}^N}f(x)+\iota_{L^{-1}b}(x)$. However, this is not always possible since, in several applications, the matrices involved are singular or very ill-conditioned (see the discussion in \cite{7.BAKS,13.CEMRACS}). If it is difficult to compute $P_{L^{-1}b}$ but we can project onto $R^{-1}c$, we can rewrite \eqref{e:new} as the problem of finding $\hat{x}\in R^{-1}c\cap\ensuremath{\operatorname{argmin}}_{x\in\ensuremath{\mathbb{R}}^N}f(x)+\iota_{\{b\}}( L x),$ which is \eqref{e:primal} when $X=R^{-1}c$, $h=\ell^*=0$, and $g=\iota_{\{b\}}$. The next corollary follows from Theorem~\ref{thm:main}, \eqref{e:algoopt} and $P_{X}\colon x\mapsto x-R^*(RR^*)^{-1}(Rx-c)$. \begin{corollary}
\label{cor:algo1}
Let $\gamma>0$ and $\tau>0$ be such that $\gamma\tau\| L\|^2<1$ and
let $(x^0,\bar{x}^0,u^0)\in\ensuremath{\mathbb{R}}^N\times\ensuremath{\mathbb{R}}^N\times\ensuremath{\mathbb{R}}^{m+n}$ be
such that $x^0=\bar{x}^0$. Consider the routine
\begin{align}
\label{e:prueba}
(\forall k\in\ensuremath{\mathbb N})\quad
&\left\lfloor
\begin{array}{ll}
u^{k+1}&=u^k+\gamma( L\bar{x}^k-b)\\
p^{k+1}&=\ensuremath{\operatorname{prox}}_{\tau f}(x^k-\tau L^*u^{k+1})\\
x^{k+1}&=p^{k+1}- R^*( R R^*)^{-1} (Rp^{k+1}-c)\\
\bar{x}^{k+1}&=x^{k+1}+ p^{k+1}-x^{k}.
\end{array}
\right.
\end{align}
Then, there exist a solution $\hat{x}$ to Problem~\ref{prob:numeric} and
an
associated multiplier $\hat{u}$ such that $x^k\to\hat{x}$
and
$u^k\to\hat{u}$. \end{corollary}
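The routine \eqref{e:prueba} can be sketched in a few lines of dependency-free Python. The toy data $R$, $S$, $c$, $d$ and all helper names below are our own illustration, not the experiments of the next subsection: $\ensuremath{\operatorname{prox}}_{\tau\|\cdot\|_1}$ is coordinatewise soft-thresholding and the projection step uses the closed formula $x\mapsto x-R^*(RR^*)^{-1}(Rx-c)$.

```python
import math

def soft(x, t):
    # prox of t*||.||_1: coordinatewise soft-thresholding
    return [math.copysign(max(abs(v) - t, 0.0), v) for v in x]

def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def matTvec(A, y):
    return [sum(A[i][j] * y[i] for i in range(len(A))) for j in range(len(A[0]))]

# toy data (hypothetical): min ||x||_1  s.t.  x1 + x2 = 1 and x1 = 1/4;
# the unique feasible (hence optimal) point is (0.25, 0.75)
R, c = [[1.0, 1.0]], [1.0]
S, d = [[1.0, 0.0]], [0.25]
L, b = R + S, c + d  # stacked constraints L x = b

def proj_R(x):
    # P_X: x -> x - R^T (R R^T)^{-1} (R x - c); here R has a single row
    r = matvec(R, x)[0] - c[0]
    inv = 1.0 / sum(v * v for v in R[0])
    return [x[j] - R[0][j] * inv * r for j in range(len(x))]

gam = 0.61
tau = 0.99 / (gam * 2.62)  # 2.62 is a rough upper bound for ||L||^2 here

x, u = [0.0, 0.0], [0.0, 0.0]
xbar = x[:]
for _ in range(20000):
    u = [ui + gam * (vi - bi) for ui, vi, bi in zip(u, matvec(L, xbar), b)]
    p = soft([xj - tau * wj for xj, wj in zip(x, matTvec(L, u))], tau)
    xnew = proj_R(p)
    xbar = [xn + pj - xj for xn, pj, xj in zip(xnew, p, x)]
    x = xnew

assert abs(x[0] + x[1] - 1.0) < 1e-9  # Rx = c holds by the projection step
assert abs(x[0] - 0.25) < 1e-2 and abs(x[1] - 0.75) < 1e-2
```

Note that the $R$-constraint holds at every iterate by construction, which is precisely the feature that distinguishes this scheme from \eqref{e:pruebaCP}.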
\subsection{Numerical experiences} \label{sec:numeric}
In this section we consider some particular instances of Problem~\ref{prob:numeric}. We consider the case when $f =\|\cdot\|_1 \in \Gamma_{0}({\mathbb{R}}^{N})$,
$N=1000$, $\tau=\frac{0.99}{\gamma {\left\|L\right\|}^{2}}$, and the relative error in
\eqref{e:prueba} is $r_{k}=\sqrt{\frac{\left\|u^{k+1}-u^{k}\right\|^{2} +
\left\|x^{k+1}-x^{k}\right\|^{2}}{\left\|u^{k}\right\|^{2} + \left\|x^{k}\right\|^{2}}}$, for every $k\in \mathbb{N}$. We set $\gamma=10^{-2}$ and $(x^0,\bar{x}^0,u^0)=(0,0,0)\in\ensuremath{\mathbb{R}}^N\times\ensuremath{\mathbb{R}}^N\times\ensuremath{\mathbb{R}}^{m+n}$ and, in each test, we show the average execution time and the average number of iterations of both methods, obtained by considering $20$ random realizations of the matrices $R$ and $S$ and the vectors $c \in {\mathbb{R}}^{m}$ and $d\in {\mathbb{R}}^{n}$. Here PCP and CP denote the algorithms \eqref{e:prueba} and \eqref{e:pruebaCP}, respectively.
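The stopping rule $r_k<e$ can be written as a small helper (the function name \texttt{rel\_err} is ours):

```python
import math

def rel_err(u_new, u, x_new, x):
    # relative error r_k = sqrt((||u^{k+1}-u^k||^2 + ||x^{k+1}-x^k||^2)
    #                           / (||u^k||^2 + ||x^k||^2))
    num = (sum((a - b) ** 2 for a, b in zip(u_new, u))
           + sum((a - b) ** 2 for a, b in zip(x_new, x)))
    den = sum(v * v for v in u) + sum(v * v for v in x)
    return math.sqrt(num / den)

# sanity check: increment norm 0.4 relative to ||(u, x)|| = 1 gives 0.4
assert abs(rel_err([1.0], [0.6], [0.8], [0.8]) - 0.4) < 1e-12
```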
\textbf{Test 1.} In Problem~\ref{prob:numeric}, Table~\ref{table:1} shows the efficiency of CP and PCP for the case $m=1$ and $n=100$. \begin{table}[h]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\multicolumn{1}{|c|}{\multirow{2}{*}{$m=1$, $n=100$}} &
\multicolumn{2}{c|}{\small$e=10^{-4}$} &
\multicolumn{2}{c|}{\small$e=5\cdot 10^{-5}$} &
\multicolumn{2}{c|}{\small$e=10^{-5}$} \\ \cline{2-7}
\multicolumn{1}{|c|}{} & iter & time (s) & iter & time (s) &
iter & time (s) \\ \hline
\multicolumn{1}{|c|}{PCP} & \multicolumn{1}{c|}{$9265$} &
\multicolumn{1}{c|}{$22.28$}&
\multicolumn{1}{|c|}{$14570$} &
\multicolumn{1}{c|}{$37.02$}& \multicolumn{1}{c|}{$46191$} &
\multicolumn{1}{c|}{$116.26$}\\ \hline
\multicolumn{1}{|c|}{CP} & \multicolumn{1}{c|}{$9732$} &
\multicolumn{1}{c|}{$23.04$}&
\multicolumn{1}{|c|}{$15718$} &
\multicolumn{1}{c|}{$39.21$}& \multicolumn{1}{c|}{$50544$}&
\multicolumn{1}{c|}{$125.49$} \\ \hline
\multicolumn{1}{|c|}{\%improv.} & \multicolumn{1}{c|}{$4.8$}&
\multicolumn{1}{c|}{$3.3$} &
\multicolumn{1}{|c|}{$7.3$} &
\multicolumn{1}{c|}{$5.6$}& \multicolumn{1}{c|}{$8.6$}&
\multicolumn{1}{c|}{$7.4$} \\ \hline
\end{tabular}
\caption{Average time and number of iterations when $m=1$ for obtaining $r_{k}
< e$.}
\label{table:1} \end{table} We see that both algorithms are similar in terms of execution time and number of iterations, with a small advantage for the PCP algorithm. In addition, as the tolerance $e$ decreases, the percentage of improvement, computed as $100\cdot(x_{\rm CP}-x_{\rm PCP})/x_{\rm CP}$, increases slightly.
\textbf{Test 2.} In Problem~\ref{prob:numeric}, Table~\ref{table:2} shows the efficiency of CP and PCP for the case $m=10$ and $n=100$. \begin{table}[h]
\centering
\caption{Average time and number of iterations when $m=10$ for obtaining
$r_{k} < e$.}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\multicolumn{1}{|c|}{\multirow{2}{*}{$m=10$, $n=100$}} &
\multicolumn{2}{c|}{\small$e=10^{-4}$} &
\multicolumn{2}{c|}{\small$e=5\cdot 10^{-5}$} &
\multicolumn{2}{c|}{\small$e=10^{-5}$} \\ \cline{2-7}
\multicolumn{1}{|c|}{} & iter & time (s) & iter & time (s) &
iter & time (s) \\ \hline
\multicolumn{1}{|c|}{PCP} & \multicolumn{1}{c|}{$6865$} &
\multicolumn{1}{c|}{$18.65$}&
\multicolumn{1}{|c|}{$10229$} &
\multicolumn{1}{c|}{$27.86$}& \multicolumn{1}{c|}{$22855$} &
\multicolumn{1}{c|}{$65.05$}\\ \hline
\multicolumn{1}{|c|}{CP} & \multicolumn{1}{c|}{$9280$} &
\multicolumn{1}{c|}{$23.72$}&
\multicolumn{1}{|c|}{$16033$} &
\multicolumn{1}{c|}{$39.13$}& \multicolumn{1}{c|}{$49526$}&
\multicolumn{1}{c|}{$129.78$} \\ \hline
\multicolumn{1}{|c|}{\%improv.} & \multicolumn{1}{c|}{$26.0$}&
\multicolumn{1}{c|}{$21.4$} &
\multicolumn{1}{|c|}{$36.2$} &
\multicolumn{1}{c|}{$28.8$}& \multicolumn{1}{c|}{$53.9$}&
\multicolumn{1}{c|}{$49.9$} \\ \hline
\end{tabular}
\label{table:2} \end{table} In this case, there are clear differences between the two algorithms and, as before, PCP becomes more efficient as the tolerance decreases. In fact, when the tolerance is $10^{-5}$, PCP improves on CP by approximately $50\%$ in execution time, and its number of iterations is less than half.
\textbf{Test 3.} Finally, in Problem~\ref{prob:numeric}, Table~\ref{table:3} shows the efficiency of CP and PCP for the case $m=30$ and $n=100$. \begin{table}[h]
\centering
\caption{Average time and number of iterations when $m=30$ for obtaining
$r_{k} < e$.}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\multicolumn{1}{|c|}{\multirow{2}{*}{$m=30$, $n=100$}} &
\multicolumn{2}{c|}{\small$e=10^{-4}$} &
\multicolumn{2}{c|}{\small$e=5\cdot 10^{-5}$} &
\multicolumn{2}{c|}{\small$e=10^{-5}$} \\ \cline{2-7}
\multicolumn{1}{|c|}{} & iter & time (s) & iter & time (s) &
iter & time (s) \\ \hline
\multicolumn{1}{|c|}{PCP} & \multicolumn{1}{c|}{$5146$} &
\multicolumn{1}{c|}{$7.68$}&
\multicolumn{1}{|c|}{$7143$} &
\multicolumn{1}{c|}{$10.67$}& \multicolumn{1}{c|}{$13421$} &
\multicolumn{1}{c|}{$19.70$}\\ \hline
\multicolumn{1}{|c|}{CP} & \multicolumn{1}{c|}{$9941$} &
\multicolumn{1}{c|}{$12.93$}&
\multicolumn{1}{|c|}{$16438$} &
\multicolumn{1}{c|}{$21.37$}& \multicolumn{1}{c|}{$50841$}&
\multicolumn{1}{c|}{$64.23$} \\ \hline
\multicolumn{1}{|c|}{\%improv.} & \multicolumn{1}{c|}{$48.2$}&
\multicolumn{1}{c|}{$40.6$} &
\multicolumn{1}{|c|}{$56.5$} &
\multicolumn{1}{c|}{$50.1$}& \multicolumn{1}{c|}{$73.6$}&
\multicolumn{1}{c|}{$69.3$} \\ \hline
\end{tabular}
\label{table:3} \end{table} We note that the improvement in execution time is considerably higher than in the previous cases. For example, in the case $e = 10^{-4}$ the improvement increases by approximately $20\%$ with respect to the case $m = 10$ and by approximately $40\%$ with respect to the case $m = 1$. As in the previous cases, if we decrease the tolerance to $10^{-5}$, PCP becomes more efficient, reaching an improvement of almost $70\%$ with respect to CP. Table~\ref{table:7} summarizes the percentage of improvement for each test. \begin{table}[h]
\centering
\caption{Comparison of improvement of average iterations and average times.}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\multicolumn{1}{|c|}{\multirow{2}{*}{\% improv.}} & \multicolumn{2}{c|}{$m=1$} &
\multicolumn{2}{c|}{$m=10$} & \multicolumn{2}{c|}{$m=30$} \\ \cline{2-7}
\multicolumn{1}{|c|}{} & iter & time (s) & iter & time (s) &
iter & time (s) \\ \hline
{\small $e=10^{-4}$} & $4.8$ & $3.3$ & $26.0$ &
$21.4$ & $48.2$ & $40.6$ \\ \hline
{\small $e=5 \cdot 10^{-5}$ } & $7.3$ & $5.6$ &
$36.2$ & $28.8$ & $56.5$ & $50.1$ \\ \hline
{\small $e=10^{-5}$} & $8.6$ & $7.4$ & $53.9$
& $49.9$ & $73.6$ & $69.3$ \\ \hline
\end{tabular}
\label{table:7} \end{table} We observe that for larger values of $m$ we obtain a better relative performance of PCP with respect to CP: the larger $m$ is, the larger the proportion of constraints onto which we project.
\section{Conclusions} \label{sec5} In this paper we propose a projected primal-dual method for solving composite monotone inclusions with a priori information on solutions. We provide acceleration schemes in the presence of strong monotonicity and we derive linear convergence in the fully strongly monotone case. The importance of the a priori information set is illustrated via a numerical example in convex optimization with equality constraints, in which the proposed method outperforms \cite{3.CP}.
\textbf{Acknowledgements}
The authors thank the ``Programa de financiamiento basal'' from CMM--Universidad de
Chile and the project DGIP-UTFSM PI-M-18.14 of Universidad T\'ecnica Federico Santa
Mar\'ia.
\end{document}
\begin{document}
\title{A remark on quiver varieties and Weyl groups} \author{Andrea Maffei} \begin{abstract} In this paper we define an action of the Weyl group on the quiver varieties $M_{m,\grl}(d,v)$ with generic $(m,\grl)$. To do so we describe a set of generators of the projective ring of a quiver variety. We also prove connectedness for the smooth quiver variety $M(d,v)$ and normality for $M_0(d,v)$ in the case of a quiver of finite type and $d-v$ a regular weight. \end{abstract}
\maketitle
In \cite{Na1,Na2} Nakajima defined quiver varieties and showed how to use them to give a geometric construction of integrable representations of Kac-Moody algebras. Luckily these varieties can also be used to give a geometric construction of representations of Weyl groups. In \cite{Lu:Q4}, Lusztig constructed a representation of the Weyl group on the homology of quiver varieties. His construction is similar to the construction of Springer representations. In \cite{Na2}, Nakajima gave a construction of isomorphisms $\Phi_{\grs,\zeta}(d,v):\goM_{\zeta}(d,v) \lra \goM_{\grs \zeta}(d,\grs(v-d)+d)$ in the case of a quiver of finite type. His construction was analytic and relied on a description of quiver varieties as moduli spaces of instantons on ALE spaces. The main result of this paper is a direct and algebraic construction of these isomorphisms, which works for a general quiver without simple loops. To do so we also describe a set of generators of the algebra of covariant functions.
The paper is organized as follows. In the first section we fix the notation and give the definition of a quiver variety $M_{m,\grl}(d,v)$, where $m$ and $\grl$ are two parameters, $d$ is a weight of the algebra associated to the quiver, and $v$ is an element of the root lattice. We are interested in quiver varieties as algebraic varieties, but to explain one of the applications we also need to give the hyperK\"ahler construction of a quiver variety. We use a result of Migliorini \cite{Migliorini} to explain the connection between the two constructions.
Algebraic quiver varieties are defined as the $\mathbf{Proj}$ scheme of a ring of covariants. In the second section we describe a set of generators of this ring. In a special case, which is not directly related to Nakajima's quiver varieties, we are also able to give a more precise result and to describe a basis of the vector space of $\chi$-covariant functions.
In the third section we use this description to generalize a construction of Lusztig \cite{Lu:Q4}. Namely, for any element $\grs$ of the Weyl group we construct an isomorphism $\Phi_{\grs}$ between $M_{m,\grl}(d,v)$ and $M_{\grs m,\grs\grl}(d,\grs(v-d)+d)$ for generic $m,\grl$.
In the fourth section, following Nakajima \cite{Na1}, we show how to use the action constructed in section 3 (and the connection between the hyperK\"ahler construction and the algebraic construction) to describe an action of the Weyl group on the homology of a class of quiver varieties. This action is different from the one constructed by Lusztig in \cite{Lu:Q4}.
In the fifth section we give a result which reduces the study of geometric and algebraic properties of the quiver varieties $M_{0,0}(d,v)$ to the case where $d-v$ is dominant.
In the sixth section we prove the normality of the quiver variety $M_0(d,v)$ and the connectedness of $M(d,v)$ in the case of a quiver of finite type and $d-v$ a regular weight.
I wish to thank Ilaria Damiani for many useful discussions.
\section{Notations and definitions} In this section we give the definition of quiver varieties. Except for some minor changes, all definitions are due to Nakajima \cite{Na1,Na2}.
\subsection{The graph} Let $(I,H)$ be a finite oriented graph: $I$ is the set of vertices, which we suppose to have cardinality $n$, $H$ is the set of arrows, and the orientation is given by the two maps $$ h \longmapsto h_0 \mand h \longmapsto h_1 $$ from $H$ to $I$. We suppose also that: \begin{enumerate}
\item $ \forall \, h\in H \quad h_0 \neq h_1$,
\item an involution $ h\mapsto \bar h $ of $H$ without fixed points and
satisfying ${\bar h}_0 = h_1$ is fixed,
\item a map $\gre : H \lra \{-1,1\}$ is given such that $\gre(\bar h) = - \gre(h)$.
We define $\grO = \{ h \in H \st \gre(h)= 1\}$ and ${\wbar {\grO}} =
\{ h \in H \st \gre(h)= -1\}$. \end{enumerate} Observe that, given a symmetric graph without loops, it is always possible to define $\gre$ and an involution $\bar {\;}$ as above.
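For example, for the doubled graph of type $A_2$ we have $I=\{1,2\}$ and $H=\{h,\bar h\}$ with $h_0={\bar h}_1=1$ and $h_1={\bar h}_0=2$; choosing $\gre(h)=1$ gives $\grO=\{h\}$ and ${\wbar {\grO}}=\{\bar h\}$.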
\subsection{The Cartan matrix and the Weyl group} Let $A$ be the matrix whose entries are the numbers $$a_{ij}= card\{ h \in H \, : \, h_0=i \mand h_1=j \}.$$ We define a generalized symmetric Cartan matrix by $C=2I-A$. Following \cite{Lusztig:QG} an $X,Y$-regular root datum $(I,X,X \cech ,\pairing)$ with Cartan matrix equal to $C$ is defined in the following way: \begin{enumerate}
\item $X \cech$ and $X$ are finitely generated free abelian groups,
\item $\pairing: X \times X \cech \lra \mZ$ is a perfect
bilinear pairing,
\item two linearly independent sets $\Pi= \{\gra_i \tc i \in I \}
\subset X$ and
$\Pi \cech =\{ \gra_i\cech \tc i \in I \}
\subset X \cech$ are fixed and we set $Q = \langle \Pi \rangle$
and $Q \cech = \langle \Pi \cech \rangle$,
\item $\bra \gra_i \, , \gra_j \cech \ket = c_{ij}$,
\item (nonstandard) $\rank X = \rank X \cech = 2n - \rank C$,
\item (nonstandard) a linearly independent set
$\{ \gro_i \tc i\in I \}$ of $X$ such that
$\bra \gro_i , \gra_j \cech \ket = \grd_{ij}$ is fixed. \end{enumerate} Once $C$ is given it is easy to construct a datum as above. We call $\goh$ the complexification of $X \cech$ and we observe that through the bilinear pairing $\pairing$ we can identify $\goh^*$ with the complexification of $X$. We observe also that the triple $(\goh,\Pi,\Pi \cech)$ is a realization of the Cartan matrix $C$ (\cite{Kac}, p.~1).
The Weyl group $W$ attached to $C$ is defined as the subgroup of $Aut(X) \subset GL(\goh^*)$ generated by the reflections \begin{equation}\label{eq:generatoriW} s_i: x \longmapsto x - \! \bra x, \gra_i \cech \ket \gra_i. \end{equation} Observe that the dual action is given by $s_i( y)= y - \! \bra \gra_i, y \ket \gra_i\cech$ and that the lattices $Q$ and $Q\cech$ are stable under these actions. So the annihilator $\overset{\circ}{ Q\cech} =\{x \in X \tc \bra x,y\ket \, = 0 \: \forall y \in Q \cech\}$ is also stable under $W$, and we can consider the action of $W$ on the lattice $P=X \, / \, \overset{\circ}{ Q\cech} \isocan \Hom_{\mZ}(Q,\mZ)$; we call $x \mapsto \wbar x$ the projection from $X$ to $P$. We observe also that this projection is an isomorphism from the lattice $\wt P$ spanned by $\{ \gro_i \tc i \in I\}$ (which is not $W$-stable) onto $P$. Finally we observe that $$\wbar {\gra}_i = \sum _{j \in I} c_{ij} \wbar {\gro}_j.$$
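For example, for the graph with two vertices joined by the pair of arrows $h, \bar h$ we have $a_{12}=a_{21}=1$, so $$ C = \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix} $$ is the Cartan matrix of type $A_2$, and the generators \eqref{eq:generatoriW} act by $s_1(\gra_1)=-\gra_1$ and $s_1(\gra_2)=\gra_2+\gra_1$.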
\subsection{$d, v$ and the space of all matrices} For the exposition it will be useful to identify the set $I$ with the set of integers $\{ 1, \dots, n \}$.
Let $d=(d_1,\dots,d_n)$ and $v=(v_1,\dots,v_n)$ be two $n$-tuples of integers. We also think of $d, v$ as elements of $X$ in the following way: \begin{equation}\label{dvX}
d = \sum_{i\in I} d_i {\gro}_i \; \mand \; v= \sum _{i \in I} v_i {\gra}_i\:; \end{equation} and through this identification we define an action of $W$ on $v$. We define also $v\cech = \sum_{i \in I} v_i \gra_i \cech \in Q \cech$. Once $d,v$ are fixed we fix complex vector spaces $D_i$ and $V_i$ of dimensions $d_i$ and $v_i$ and we define the following spaces of maps: \begin{subequations} \label{eq:decompS} \begin{align}
S_{\grO}(d,v) &= \bigoplus _{i \in I} \Hom(D_i,V_i) \oplus \bigoplus
_{h \in \grO} \Hom(V_{h_0},V_{h_1}),\\
S_{\wbar \grO}(d,v) &=\bigoplus _{i \in I} \Hom(V_i,D_i) \oplus \bigoplus
_{h \in \wbar \grO} \Hom(V_{h_0},V_{h_1}), \\
S(d,v)&= S_{\grO}(d,v) \oplus S_{\wbar \grO}(d,v) . \end{align} \end{subequations} When no ambiguity can arise we will simply write $S_{\grO}, S_{\wbar \grO}$ and $S$ instead of $S_{\grO}(d,v), S_{\wbar \grO}(d,v)$ and $S(d,v)$.
For each $h \in H$ (resp. $i\in I$) we define the projection $B_h$ (resp. $\grg_i$ and $\grd_i$) from $S$ to $\Hom(V_{h_0},V_{h_1})$ (resp. $\Hom(D_i,V_i)$ and $\Hom(V_i,D_i)$) with respect to the decomposition described in \eqref{eq:decompS}.
When an element $s$ of $S$ is fixed we will often write $B_h$ (resp. $\grg_i$, $\grd_i$) instead of $B_h(s)$ (resp. $\grg_i(s)$ and $\grd_i(s)$). We will also use $\grg$ for $(\grg_1 , \dots , \grg_n)$, $\grd$ for $(\grd_1 , \dots , \grd_n)$ and $B$ for $(B_h)_{h \in H}$ and often we will write an element of $S$ as a triple $(B,\grg,\grd)$.
Once $D_i,V_i$ and an element $s$ of $S$ are fixed we define also: \begin{subequations}\label{defTab} \begin{align}
T_i &= D_i \oplus \bigoplus _{h \tc h_1 =i} V_{h_0} \, , \label{defT}\\
a_i &= a_i(s) = (\grd_i(s), (B_{\bar h}(s))_{h \tc h_1 =i}):
V_i \lra T_i \, ,\label{defa}\\
b_i &= b_i(s) = (\grg_i(s), (\gre(h)B_{ h}(s))_{h \tc h_1 =i}):
T_i \lra V_i \, .\label{defb} \end{align} \end{subequations}
We will identify the dual of the space of $\mC$-linear maps $\Hom (E,F)$ between two finite dimensional vector spaces with $\Hom(F,E)$ through the pairing $\bra \grf , \psi \ket = \Tr ( \grf \comp \psi )$. So we can describe $S$ also as $S_{\grO} \oplus S_{\grO}^* = T^* S_{\grO}$ and we observe that a natural symplectic structure $\gro$ is defined on $S$ by $$ \gro((s_{\grO},s_{\wbar \grO}),(t_{\grO},t_{\wbar \grO})) = \bra s_{\grO},t_{\wbar \grO}\ket -\bra t_{\grO},s_{\wbar \grO}\ket. $$
\subsection{Hermitian structure on $S$} \label{hyperKahlerstructure}
We suppose now that the spaces $D_i$, $V_i$ are endowed with hermitian metrics. So we can speak of the adjoint $\grf^*$ of a linear map between these spaces, and we have a positive definite hermitian structure $h$ on $S$ with explicit formula: \begin{align}
h( (B,\grg,\grd),(\wt B,\wt \grg,\wt \grd)) &= \sum _{h\in H}
\Tr (B_h {\wt B}_{ h}^*) + \sum _{i \in I }
\Tr (\grg_i {\wt \grg}_i^* + {\wt \grd}_i^* \grd_i)
\label{defh} \\
&= \sum_{i \in I} \Tr(a_i {\wt a}_i^* + {\wt b}_i^* b_i) \notag \end{align} and an associated real and closed symplectic form $\gro_I(s,t)=\Re h (\im s, t) - \Im h(s,t)$.
\subsection{Group actions and moment maps}\label{groupactions} We can define an action of the groups $G=GL(V)=\prod GL(V_i)$ and $GL(D)=\prod GL(D_i)$ on the set $S$ in the following way: \begin{align}
g (B_h,\grg_i,\grd_i) &=
(g_{h_1}B_h g_{h_0}^{-1},g_i \grg _i,
\grd_i g_i^{-1}) &&\mfor g=(g_i)\in GL(V), \\
g (B_h,\grg_i,\grd_i) &= (B_h , \grg _i g_i^{-1},
g_i \grd_i ) &&\mfor g=(g_i)\in GL(D). \end{align} Observe that these actions commute and that $\gro$ is $GL(V)$-invariant. Moreover, if $U=U(V)=\prod U(V_i)$ is the group of unitary transformations in $GL(V)$, the real symplectic form $\gro_I$ is $U(V)$-invariant.
Define $\grm \colon S \lra \gog=\oplus gl(V_i)$ and $\grm_I \colon S \lra \gou = \oplus u(V_i)$ by the following explicit formulas: \begin{align*}
\grm_i(B,\grg,\grd)&= \sum_{h \in H \tc h_1=i} \gre(h) B_h B_{\bar h}
+ \grg_i \grd_i = b_i a_i\, , \\
\grm_{I,i}(B,\grg,\grd)&= \frac{\im}{2}
\left(\sum _{h\in H\tc h_1=i} B_h B_h^*
- B_{\bar h}^* B_{\bar h} + \grg_i \grg_i^* -\grd_i^* \grd_i
\right) =\frac{\im}{2} ( b_i b_i^* - a_i^* a_i). \end{align*}
If we identify $\gog^*=\Hom_{\mC}(\gog,\mC)$ (resp. $\gou^* = \Hom_{\mR} (\gou , \mR)$)
with $\gog=\oplus gl(V_i)$ (resp. $\gou$) through the pairing $\bra (x_i)\, , (y_i) \ket = \sum _i \Tr (x_i y_i)$, we observe that $\grm$ is a moment map for the action of $G$ on the symplectic manifold $(S, \gro)$ and that $\grm_I$ is a moment map for the action of $U$ on the symplectic manifold $(S, \gro_I)$. It is common to group all these moment maps together and to define a hyperK\"ahler moment map $$ \wt \grm = (\grm_I, \grm): S \lra \gou \oplus \gog = (\mR \oplus \mC) \otimes_{\mR} \gou. $$
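For example, for the quiver with a single vertex and no arrows we have $S=\Hom(D_1,V_1)\oplus\Hom(V_1,D_1)$ and the formulas above reduce to $$ \grm_1(\grg,\grd)=\grg_1\grd_1 \, , \qquad \grm_{I,1}(\grg,\grd)=\frac{\im}{2}\left(\grg_1\grg_1^* - \grd_1^*\grd_1\right), $$ the usual moment maps for the actions of $GL(V_1)$ and $U(V_1)$ on $T^*\Hom(D_1,V_1)$.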
\subsection{Quiver varieties as hyperK\"ahler quotients} \label{definitionquiver} Let $\grz_i = (\xi_i,\grl_i) \in \mR \oplus \mC$ and $\grz = (\grz_1,\dots,\grz_n)$. We define: $$ \goL_{\grz}(d,v) = \{s \in S \tc \grm_i (s) - \grl_i \Id_{V_i} = 0 \,\mand\,
\grm_{I,i} (s) - \im \xi_i \Id_{V_i} = 0 \text{ for all } i \}. $$ We observe that $\goL_{\grz}(d,v)$ is stable under the action of $U(V)$, so, at least as a topological Hausdorff space, we can define the \emph{quiver variety of type} $\grz$ as $$ \goM_{\grz} (d,v) = \goL_{\grz}(d,v) / U(V). $$ It will be convenient to define also $\goM_{\grz} (d,v)=\vuoto$ if $d,v \in \mZ^n $ and $v_i <0$ or $d_i <0$ for some $i$. We set $\goZ = \mR^n \oplus \mC^n$ and we observe that we can identify it with $(\mR \oplus \mC) \otimes _{\mZ}P$ through: \begin{equation}\label{zetaRCP} (\xi_1,\dots,\xi_n,\grl_1,\dots,\grl_n) \longleftrightarrow \sum _{i \in I} (\xi_i , \grl_i) \wbar{\gro}_i. \end{equation} In particular we consider an action of the Weyl group $W$ on $\goZ$ through this identification.
\begin{oss}\label{oss:centrogoZ} There is a surjective map from $\goZ$ to $Z_U(\gou) \oplus Z_G(\gog)$: $$ (\xi_1,\dots,\xi_n,\grl_1,\dots,\grl_n) \lra \sum _{i \in I} ( \im \xi_i,\grl_i) \Id_{V_i} $$ Observe that $\goL_{\zeta}$ is the fiber of $\wt \mu$ over the image of $\zeta$ in $Z_U(\gou) \oplus Z_G(\gog)$. \end{oss}
\begin{oss}\label{oss:riduzioneanonzero} If $v , d \geq 0$ define: $I^* = \{ i \in I \st v_i \neq 0\}$,
$H^* = \{ h \in H \st h_0,h_1 \in I^*\}$, $\gre^* = \gre\bigr|_{H^*}$, $v^* = (v_i)_{i \in I^*}$, $d^* =(d_i )_{i \in I^*}$, and $\zeta^*= (\grz_i)_{i \in I^*}$ then it is clear that $$ \goL_ {\grz^*}(d^*,v^*) \isocan \goL_{\grz}(d,v) \;\mand\; \goM_ {\grz^*}(d^*,v^*) \isocan \goM_{\grz}(d,v). $$ \end{oss}
Except for the last equivalence, which is trivial in our case, the following is a well-known general fact (\cite{GIT}, ch.~8).
\begin{lem} \label{regolaritamu} Let $s \in \goL_{\zeta}$. Then \begin{gather*} d{\wt {\grm}}_s \text{ is surjective} \iff d\grm_s \text{ is surjective} \iff
d{\grm_I}_s \text{ is surjective } \iff \\ \iff \dim \mathrm{Stab}_G\{s\}= 0 \iff \mathrm{Stab}_G\{s\}=\{1_G\} \end{gather*} \end{lem}
\begin{dfn} If $u \in \mZ^n$ and $A\subset \mZ^n$ we define $$ \calH_u=\{\grz =(\xi,\grl)\in \goZ \tc \bra \xi \, , u \cech \ket= \bra \grl \, , u \cech \ket=0 \} \; \mand \; \calH_A= \bigcup_{u\in A} \calH_u. $$ Let now $U_v = \{ u \in \mN^n-\{0\} \msuchthat 0 \leq u_i \leq v_i \}$ and $\calH = \calH_{U_v}$. $\calH$ is a union of a finite number of real subspaces of $\goZ$ of codimension $3$. \end{dfn}
\begin{lem}[Nakajima, \cite{Na1}] \label{lem:regmu} If $\grz \in \goZ - \calH $ and $\wt {\grm}(s) = \grz$ then $\mathrm{Stab}_G\{s\}=\{1_G\}$. \end{lem}
As a consequence of the above lemma and general results on hyperK\"ahler manifolds (for example \cite{Hit} or \cite{HKLR}) we obtain the following corollary. \begin{cor}\label{dimensioniquiver} If $\grz \in \goZ - \calH$ and $\goM_{\grz} (d,v)$ is not empty, then it is a smooth hyperK\"ahler manifold of real dimension $2 \bra 2d-v , v \cech \ket$. \end{cor}
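For example, for the quiver with a single vertex and no arrows and $v_1 \leq d_1$, one computes $\bra 2d-v , v \cech \ket = 2d_1v_1 - 2v_1^2$, so for $\grz \in \goZ - \calH$ the variety $\goM_{\grz}(d,v)$ has real dimension $4v_1(d_1-v_1)$; indeed, for such generic $\grz$ it is known to be diffeomorphic to the cotangent bundle $T^*Gr(v_1,d_1)$ of the Grassmannian of $v_1$-dimensional subspaces of $\mC^{d_1}$, whose real dimension is $4v_1(d_1-v_1)$.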
\subsection{Geometric invariant theory and moment map} \label{ssec:GIT} In this section we explain the relation between the moment map and the GIT quotient proved by Kempf and Ness \cite{KeNe}, Kirwan \cite{GIT} and others. To be more precise, we need a generalization of their results to the case of an action on an affine variety, proved by Migliorini \cite{Migliorini}.
Let $X$ be an affine variety over $\mC$ and $G$ a reductive group acting on $X$. We can assume that $X$ is a closed subvariety of a vector space $V$ on which $G$ acts linearly. Let $h$ be a hermitian form on $V$ invariant under the action of a maximal compact subgroup $U$ of $G$ and define a real $U$-invariant symplectic form on $V$ by $$\eta (x,y) = \Re h(\im x,y).$$ Then we can define a moment map $\nu : V \lra \gou^*=\Hom_{\mR}(\gou,\mR) $: $$\bra \nu(x), u \ket = \tfrac{1}{2} \eta ( u \cdot x , x).$$ We observe that the real symplectic form $\eta$ restricted to a complex submanifold is always non degenerate and that $\nu$ restricted to the non singular locus of $X$ is a moment map for the action of $U$ on $X$.
Now let $\chi$ be a multiplicative character of $G$. We observe that for all $g \in U$ we have $|\chi(g)|=1$, so $\im \,d\chi : \gou \lra \mR$. In particular we can think of $\im\,d\chi$ as an element of $\gou ^*$. Moreover we observe that it is invariant under the coadjoint action, hence it makes sense to consider the quotient: $$\goM = \nu^{-1}(\im\, d\chi) / U.$$ As we saw, our varieties are a particular case of this construction.
On the other hand we can consider the GIT quotient. Let us recall the definition. If $\grf$ is a character of $G$ we consider the line bundle $L_{\grf} = V \times \mC$ on $V$ with the following $G$-linearization: $$ g(x,z)= (g\cdot x,\grf(g)z). $$ An invariant section of $L_{\grf}$ is determined by an algebraic function $f:V\lra \mC$ such that $f(gx)=\grf(g)f(x)$ for all $g \in G$ and $x \in V$. We use the same symbol $L_{\grf}$ also for the restriction of $L_{\grf}$ to $X$.
Given a rational action of $G$ on a $\mC$-vector space $A$ we define \begin{gather*} A_{\grf,n} = \{ a \in A \tc g\cdot a = \grf^{-n}(g) a \text{ for all } g \in G\},\\ A_{\grf} = \bigoplus _{n=0}^{\infty} A_{\grf,n} \quad \text{ as a graded vector space.} \end{gather*} Hence we have that $H^0(X,L_{\grf})^G = \mC[X]_{\grf,1}$. We observe that if $I$ is the ideal of algebraic functions on $V$ vanishing on $X$ then $$H^0(X,L_{\grf})^G = \frac{H^0(V,L_{\grf})^G}{I_{\grf,1}}.$$ This last fact can be proved easily, for example by averaging a $\grf$-equivariant function $f$ on $X$ in the following way: $$\tilde f (v) = \int_U \grf^{-1}(u) f(u\cdot v) \, du .$$ \begin{dfn} A point $x$ of $X$ is said to be $\chi$-\emph{semistable} if there exist $n>0$ and $f \in H^0(X,L_{\chi}^{\otimes n})^G$ such that $f(x) \neq 0$. We observe that by the remark above a point of $X$ is $\chi$-semistable if and only if it is $\chi$-semistable as a point of $V$. We call $X^{ss}_{\chi}$ (resp. $V^{ss}_{\chi}$) the open subset of $\chi$-semistable points of $X$ (resp. $V$). \end{dfn}
\begin{prp}[ \cite{GIT,Newstead}] There exists a good quotient of $X^{ss}_{\chi}$ by the action of $G$ and we have that $$ X^{ss}_{\chi} //G = \mathrm{Proj}\, \mC[X]_{\chi}.$$ Moreover $\mC[X]_{\chi}$ is a finitely generated $\mC$-algebra and a natural projective map $$\pi : X^{ss}_{\chi} //G \lra X//G = \mathrm{Spec} \, \mC[X]^G$$ is defined. \end{prp}
In the case of $\chi \coinc 1$ the following fact is well known: $$\mathrm{Proj}\, \mC[X]_{\chi}= \mathrm{Spec}\, \mC[X]^G = \nu^{-1}(0)/U.$$ The following result is less well known, and its proof requires some adjustments of the classical proof for the case $\chi \coinc 1$ (see for example the appendix of \cite{Migliorini} or \cite{TatoPhD}).
\begin{prp}[Migliorini, \cite{Migliorini}]\label{prp:KNM1} Let $x \in X$. Then $$ \exists g \in G \st \nu(gx) = \im d\chi \iff Gx \text{ is a closed orbit in } X^{ss}_{\chi}. $$ \end{prp} \begin{prp}[Migliorini, \cite{Migliorini}]\label{prp:KNM2} The inclusion $\nu^{-1}(\im d\chi) \subset X^{ss}_{\chi}$ induces a homeomorphism $$ \nu^{-1}(\im d\chi) / U \isocan X^{ss}_{\chi}//G . $$ \end{prp}
\subsection{Quiver varieties as algebraic varieties} If $m= (m _1,\dots, m _n) \in \mZ^n$ we define a character $\chi_m$ of $G_v$ by $\chi_m = \prod_{i \in I} \det^{m_i}_{GL(V_i)}$.
If $\grl = (\grl_1,\dots,\grl_n ) \in \mC^n$ and $m=(m_1,\dots , m_n) \in \mZ^n$ then we define the varieties: \begin{align*} \grL_{\grl}(d,v) &= \{ s \in S \st \grm_i(s) - \grl_i \Id_{V_i} =0 \text{ for all }i\}, \\ \grL_{m,\grl}(d,v) &= \{ s \in \grL_{\grl}(d,v) \tc s \text{ is } \chi_m-\text{semistable}\}, \end{align*} and the associated \emph{quiver varieties} \begin{align*} M_{m,\grl}(d,v) &= \grL_{m,\grl}(d,v) /\!/ G_v \, \mand \, \\ M^0_{\grl}(d,v) &= M_{0,\grl}(d,v) = \grL_{\grl}(d,v) /\!/G_v. \end{align*} We call $p_{m, \grl}^{d,v} \colon \grL_{m,\grl}(d,v) \lra M_{m,\grl}(d,v)$ the quotient map. Observe that the inclusion $\grL_{m,\grl}(d,v) \subset \grL_{\grl}(d,v)$ induces a projective morphism $$
\pi_{m,\grl}^{d,v} : M_{m,\grl}(d,v) \lra M^0_{\grl}(d,v). $$
Finally it will be convenient to define $M_{m,\grl}(d,v)=\vuoto$ if $d,v \in \mZ^n$ and $v_i <0$ or $d_i<0$ for some $i$.
We identify $\mZ^n$ with $P$ and $Z=\mC^n$ with $\mC \otimes_{\mZ} P$ through $$ (m_1,\dots,m_n) \longmapsto \sum m_i \wbar \omega_i \;\mand\; (\grl_1,\dots,\grl_n) \longmapsto \sum \grl_i \wbar \omega_i. $$
\begin{oss} As in \ref{oss:centrogoZ} we have a surjective map from $Z$ to $Z(\gog)$ and $\grL_{\grl}(d,v)$ is the fiber of $\mu$ over the image of $\grl$ in $Z(\gog)$. \end{oss} \begin{oss} Remark \ref{oss:riduzioneanonzero} holds without changes also in this case. \end{oss} \begin{oss}Observe that $P \oplus Z \subset \goZ$. Observe also that the map $m \lra \chi_m$ defines a surjective morphism from $P$ to $\Hom(G_v,\mC^*)$ and that the following diagram commutes: $$ \xymatrix{
P \ar@{^{(}->}[d] \ar[rr]& & \Hom(G,\mC^*) \ar@{}[r]|>>>>>{\ni} \ar[d]& \chi \ar[d]\\
\mR^n \ar[rr] && {Z(\gou) \isocan (\gou^*)^U} \ar@{}[r]|>>>>{\ni} & {\im d\chi} } $$ In particular we can apply \ref{prp:KNM2} to the action of $G_v$ on $\grL_{\grl}(d,v)$ and we obtain: $$
\goM_{(m,\grl)}(d,v) \isocan M_{m,\grl}(d,v). $$ \end{oss}
\begin{prp}\label{prp:1.14} Let $(m,\grl) \notin \calH$ and $s \in \grL_{m,\grl}(d,v)$. Then $\mathrm{Stab}_{G_v}(s) = \{1\}$. \end{prp} \begin{proof} As we have already observed, it is enough to prove that $\dim \mathrm{Stab}_{G_v}(s)=0$. We know that there is a good quotient of $\grL_{m,\grl}(d,v)$, so it is enough to prove that any closed orbit has maximal dimension. By Proposition \ref{prp:KNM1}, if $G_vs$ is closed in $\grL_{m,\grl}(d,v)$ then there exists $g \in G_v$ such that $\mu_I(gs) = \im d \chi$. The thesis now follows from $(m,\grl) \notin \calH$ and Lemma \ref{lem:regmu}. \end{proof}
\begin{cor} If $(m,\grl) \notin \calH$ and $M_{m,\grl}(d,v) \neq \vuoto$ then it is a smooth algebraic variety of dimension $\bra v\cech , 2d-v \ket$. \end{cor}
\subsection{Path algebra and $b$-path algebra} To describe functions on quiver varieties we need some notation about the path algebra.
\begin{dfn} \label{defpath}
A \emph{path} $\gra$ in our graph is a sequence $h^{(m)}\dots h^{(1)}$ such that $h^{(i)} \in H$ and $h^{(i)}_1 = h^{(i+1)}_0$ for $i =1,\dots,m-1$. We define also $\gra_0=h^{(1)}_0$, $\gra_1= h^{(m)}_1$ and we say that the length of $\gra$ is $m$. If $\gra_0=\gra_1$ we say that $\gra$ is a closed path. We consider also the empty paths $\vuoto_i$ for $i\in I$ and we define $(\vuoto_i)_0=(\vuoto_i)_1=i$. The product of paths is defined in the obvious way.
A $b$-\emph{path} $[\grb]$ in our graph is a sequence $[ i_{m+1}^{r_{m+1}} \gra^{(m)}i_{m}^{r_m}\dots\gra^{(1)}i_1^{r_1} ]$, written between square brackets, such that $i_j \in I$, the $\gra^{(j)}$ are paths, $r_j \in \mN$, and $\gra^{(j)}_0 = i_j $ and $\gra^{(j)}_1= i_{j+1} $ for $j=1,\dots,m$. We consider also the ``empty'' $b$-paths indexed by elements of $I$: $[\vuoto_i]$. We define $[\grb]_0 = i_1$, $[\grb]_1 = i_{m+1}$ and $[\vuoto_i]_0=[\vuoto_i]_1=i$. The length of $[\grb]$ is $\sum_{j=1}^{m+1} r_j +\sum_{j=1}^{m} \mathrm{length}(\gra^{(j)})$ and the product of $b$-paths is defined in the obvious way: $$ [\grb]\cdot[\grb^{\prime}]= \begin{cases} 0 &\text{if } [\grb^{\prime}]_1 \neq [\grb]_0 \\ {[\grb \grb^{\prime}]} &\text{if } [\grb^{\prime}]_1 = [\grb]_0 \end{cases} $$
Given a path $\gra= h^{(m)}\dots h^{(1)}$ and a $b$-path $\grb =[ i_{m+1}^{r_{m+1}} \gra^{(m)}\dots$ $\dots i_1^{r_1} ]$ we define an evaluation of $\gra$ and $\grb$ on $S$ in the following way: if $s=(B,\grg,\grd) \in S $ then \begin{align*}
\vuoto_i(s)& = \Id_{V_i} \in \Hom(V_i,V_i)\;\mand\; [\vuoto_i](s) = \Id_{V_i} \in \Hom(V_i,V_i),\\ \gra(s) &= B_{h^{(m)}} \comp \dots \comp B_{h^{(1)}} \in
\Hom (V_{\gra_0},V_{\gra_1}),\\ \grb(s) &= (\grg_{i_{m+1}}\comp\grd_{i_{m+1}})^{r_{m+1}}
\comp \gra^{(m)}(s) \comp (\grg_{i_{m}}\comp \grd_{i_{m}})^{r_{m}} \comp \dots \comp \\
&\quad \circ \dots \circ \gra^{(1)}(s) \comp
(\grg_{i_{1}}\comp \grd_{i_{1}})^{r_{1}} \in
\Hom (V_{\grb_0},V_{\grb_1}). \end{align*}
The path algebra $\calR$ is the vector space spanned by paths, with the product induced by the product of paths. If $i,j \in I$ we say that an element of $\calR$ is of type $(i,j)$ if it is in the linear span of the paths with source $i$ and target $j$.
The $b$-path algebra $\calQ$ is the vector space spanned by $b$-paths, with the product induced by the product of $b$-paths described above. If $i,j \in I$ we say that an element of $\calQ$ is of type $(i,j)$ if it is in the linear span of the $b$-paths with source $i$ and target $j$. \end{dfn} \begin{oss} We observe that the evaluation on $S$ is a morphism of algebras from $\calR$ to the algebra defined by the morphisms of the category of vector spaces. Moreover if $f$ is of type $(i,j)$ we observe that $f(s) \in \Hom (V_i,V_j)$. \end{oss}
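For example, in the doubled graph of type $A_2$ with arrows $h : 1 \to 2$ and $\bar h : 2 \to 1$, the $b$-path $[\, 2^1\, h\, 1^1\,]$ has type $(1,2)$ and length $3$, and its evaluation on $s=(B,\grg,\grd)$ is $$ [\, 2^1\, h\, 1^1\,](s) = (\grg_2\comp\grd_2)\comp B_h \comp(\grg_1\comp\grd_1) \in \Hom(V_1,V_2). $$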
\section{Generators of the projective ring of a quiver variety} In this section we want to describe a set of generators of the graded ring $\mC[S] _{\chi}$ and, as a consequence, of the projective ring of a quiver variety $\mC[\grL_{\grl}]_{\chi}$. More precisely we will give a set of generators of its $l$-homogeneous part $\mC[S] _{\chi,l}$ as a $\mC[S]^G$-module. This result is a generalization of the one obtained by Lusztig in the case of invariants: $\chi \coinc 1$. First of all we recall his result.
\begin{teo}[Lusztig, \cite{Lu:Q3} theorem 1.3]\label{generatoriinvarianti} The ring $\mC[S]^G$ is generated by the polynomials: $$
s \longmapsto \Tr\left( \gra(s) \right) \;\mand\;
s \longmapsto \grf\left(\grd_{\grb_1}(s)\grb(s)\grg_{\grb_0}(s)\right) $$ for $\gra$ a closed path, $\grb$ a path and $\grf \in \left(\Hom(D_{\grb_0},D_{\grb_1}) \right)^*$. \end{teo}
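For example, for the doubled graph of type $A_2$ with arrows $h : 1 \to 2$ and $\bar h : 2 \to 1$, the closed paths are the powers $(\bar h h)^k$ and $(h \bar h)^k$, so the theorem says that $\mC[S]^G$ is generated by the traces $\Tr\left((B_{\bar h}B_h)^k\right)$, $k \geq 1$, together with the matrix entries of the maps $\grd_1(B_{\bar h}B_h)^k\grg_1$, $\grd_2(B_hB_{\bar h})^k\grg_2$, $\grd_2 B_h(B_{\bar h}B_h)^k\grg_1$ and $\grd_1 B_{\bar h}(B_hB_{\bar h})^k\grg_2$ for $k \geq 0$ (with the convention that the zeroth power is the identity).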
\subsubsection{Determinants} \label{sssec:det} To describe our result we first make some general remarks. Forget for a moment our quiver, and suppose we have a finite set of finite dimensional vector spaces $X_1,\dots,X_k$ of dimensions $u_1,\dots,u_k$ and a pair of nonnegative integers $(m_i^+, m_i^-)$ for each of them. Finally let $m^+, m^-$ be two nonnegative integers such that $$ N=\sum_{i=1}^k m_i^+ u_i + m^+ = \sum_{i=1}^k m_i^- u_i + m^-,$$ and let $M^+$ and $M^-$ be two vector spaces of dimensions $m^+,m^-$. Construct the vector spaces: $$ Y = \bigoplus _{i=1}^k \mC^{m_i^-} \otimes X_i \oplus M^-, \qquad Z = \bigoplus _{i=1}^k \mC^{m_i^+} \otimes X_i \oplus M^+ $$ and observe that $\dim Y = \dim Z = N$. Define an action of the general linear group $GL(X_i)$ of $X_i$ on $Y$ by $$ g_i\cdot(\sum_{j=1}^k v_j \otimes x_j + m) = \sum_{j\neq i} v_j \otimes x_j
+ m+ v_i \otimes g_i x_i, $$ and also a similar action on $Z$. Hence the vector space $\Hom(Y,Z)$ acquires a natural structure of $G_X = \prod_{i=1}^k GL(X_i)$-module. If we choose an isomorphism $\grs$ between $\Hom(\bigwedge^NY,\bigwedge^NZ)$ and $\mC$ we can define a function $det$ on $\Hom(Y,Z)$ by $$ det ( A) = \grs \left( \bigwedge^NA \right). $$ For simplicity we do not emphasize the role of $\grs$ in this definition, so, strictly speaking, $\det$ is a function defined only up to a nonzero constant factor. We observe also that
$ \bigwedge^NY \isocan \left({\bigwedge^{u_1}X_1}\right)^{\otimes m_1^-} \otimes \dots \otimes \left({\bigwedge^{u_k}X_k}\right)^{\otimes m_k^-} \otimes \bigwedge^{m^-}M^- $
\noindent (and similarly for $Z$), so an isomorphism $\grs$ is determined if we choose orientations, or bases, of $X_j, M^+,M^-$. Finally observe that for any $g = (g_j) \in G_X$ we have $$\det( g \cdot A) = \prod_{i=1}^k (det_{GL(X_i)}(g_i))^{m_i^+ - m_i^-} \; \det(A).$$
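For example, if $k=1$, $m_1^+=1$, $m_1^-=0$, $m^+=0$ and $m^-=u_1$, then $N=u_1$, $Y=M^-\isocan\mC^{u_1}$ and $Z=X_1$, so $\det$ is the usual determinant of a linear map $\mC^{u_1}\lra X_1$ computed in a chosen basis of $X_1$, and the formula above reads $\det(g_1\comp A)= det_{GL(X_1)}(g_1)\,\det(A)$.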
\subsubsection{Description of generators}\label{sssec:generators} We go back now to our quiver and describe a set of covariant polynomials on $S$. Any character $\chi$ of the group $G_v=GL(V)$ is of the form $\chi=\chi_m = \prod_{i \in I} det^{m_i}_{GL(V_i)}$. We fix such a character and we define \begin{align*} I^+ &= \{ i \in I \tc m _i >0 \} \mand \wt m_i = m _i \mif i\in I^+,\\ I^0 &= \{ i \in I \tc m _i =0 \} \mand \wt m_i = 0 \mif i\in I^0,\\ I^- &= \{ i \in I \tc m _i <0 \} \mand \wt m_i = - m _i \mif i\in I^-. \end{align*} We use now the construction explained in \ref{sssec:det} in the case $X_i= V_i$ and $m_i^+ - m_i^- = m_i$. We choose ordered sets $A=(a_1, \dots,a_{m^-}) \in (\bigcup D_i)^{m^-}$ and $B=(b_1, \dots, b_{m^+}) \in (\bigcup D_i^*)^{m^+}$ and we define a function $I:A\cup B \lra I$ by $a \in D_{I(a)}$, $b \in D^*_{I(b)}$. In the framework described above it is then possible to set $M^- = \bigoplus_{i=1}^{m^-} \mC_{a_i}$ and $M^+ = \bigoplus_{i=1}^{m^+} \mC_{b_i}$. In particular we have $$
Y = \bigoplus_{i\in I} \bigoplus_{h=1}^{m_i^-} V_i^{(h)} \oplus \bigoplus_{i=1}^{m^-} \mC_{a_i}, \qquad
Z = \bigoplus_{i\in I} \bigoplus_{k=1}^{m_i^+} V_i^{[k]} \oplus \bigoplus_{i=1}^{m^+} \mC_{b_i} $$ where $V_i^{(l)}, V_i^{[l]}$ are isomorphic copies of $V_i$. We choose now elements of the $b$-path algebra as follows: \begin{enumerate} \item for any $i,j \in I$ and for any $1 \leq h \leq m^-_i$, $1 \leq k \leq m^+_j$ we choose an element $\gra^{i,h}_{j,k}$ of the $b$-path algebra of type $(i,j)$, \item for any $i \in I$, $1 \leq h \leq m^-_i$ and for any $1\leq l \leq m^+$ we choose an element $\gra^{i,h}_l$ of the $b$-path algebra of type $(i,I(b_l))$, \item for any $1 \leq l \leq m^-$ and for any $j \in I$, $1 \leq k \leq m^+_j$ we choose an element $\gra^l_{j,k}$ of the $b$-path algebra of type $(I(a_l),j)$, \item for any $1 \leq l \leq m^-$ and for any $1 \leq l'\leq m^-$ we choose an element $\gra^l_{l'}$ of the $b$-path algebra of type $(I(a_l),I(b_{l'}))$. \end{enumerate} We call such a data $\Delta = (\{ (m_i^+,m_i^-) \}_{i\in I} ,(m^+,m^-),A,B, \gra^{i,h}_{j,k},\gra^{i,h}_l,\gra^l_{j,k},\gra^l_{l'})$ a $\chi$-data and we attach to it a $\chi$-covariant function on $S$: $$ f_{\Delta} (s) = \det \left( \Psi_{\Delta} (s)\right) $$ where $\Psi_{\Delta} $ is a linear map from $Y$ to $Z$ defined by \begin{align*} [\Psi_{\Delta}] ^{V_i^{(h)}}_{V_j^{[k]}} (s) &= \gra^{i,h}_{j,k}(s), \\ [\Psi_{\Delta}] ^{V_i^{(h)}}_{\mC_{b_l}} (s) &=
b_l \comp \grd_{I(b_l)}\comp \gra^{i,h}_l (s) , \\ [\Psi_{\Delta}] ^{\mC_{a_l}}_{V_j^{[k]}} (s) &=
\gra^l_{j,k} (s) \comp \grg_{I(a_l)} \bigr|_{\mC a_l}, \\ [\Psi_{\Delta}] ^{\mC_{a_l}}_{\mC_{b_{l'}}}(s) &=
b_{l'} \comp \grd_{I(b_{l'})} \comp
\gra^l_{l'}(s)\comp\grg_{I(a_l)}\bigr|_{\mC a_l}. \end{align*}
The functions $f_{\Delta}$ form a set of generators of $\mC[S]_{\chi,1}$ as a $\mC[S]^G$-module, but we will need to define a smaller set of generators. To define this set we introduce a notion of a good $\Delta$.
\begin{dfn} A datum $\Delta$ as above is said to be $\chi$-\emph{good} if it satisfies the following conditions: \begin{enumerate} \item $m_i^+ + m_i^- = \wt m_i$ for all $i \in I$, \item $\gra^{l}_{l'} = 0$ for all $l,l'$, \item $\gra^*_*$ is an element of the path algebra (and not just an element of the $b$-path algebra, which is obviously bigger), \item $card\{(j,k) \tc \gra^{i,h}_{j,k} \neq 0 \} + card\{l\tc \gra^{i,h}_l \neq 0\} \leq v_i$ for all $i,h$, \item $card\{(i,h) \tc \gra^{i,h}_{j,k} \neq 0 \} + card\{l\tc \gra^l_{j,k} \neq 0\} \leq v_j$ for all $j,k$, \item for all $l$ there exists at most one pair $(i,h)$ such that $\gra^{i,h}_l \neq 0$, \item for all $l$ there exists at most one pair $(j,k)$ such that $\gra^l_{j,k} \neq 0$. \end{enumerate} \end{dfn}
For the applications the only important point will be the first one.
\begin{prp} \label{prp:covarianti} The set of polynomials $f_{\Delta}$ with $\Delta$ $\chi$-good generates $\mC[S]_{\chi,1}$ as a $\mC[S]^{G_v}$-module. \end{prp}
\begin{oss} Prof.\ Weyman informed me that in the case $D=0$ a similar proposition has been proved by him, in arbitrary characteristic. \end{oss}
\subsection{Some remarks on the invariant theory of $GL(n)$} \label{ssec:invGL} If $V$ is a finite dimensional representation of a linearly reductive Lie group $G$ and $S$ is a simple representation of $G$, we write $V[S]$ for the isotypic component of type $S$ of $V$.
We now fix $n$ and make some remarks on the representations of $GL(n)$. To any partition of height less than or equal to $n$ we associate an irreducible representation of $GL(n)$ in the usual way. If we multiply these representations by a power of the inverse of the determinant representation we obtain a complete list of irreducible representations of $GL(n)$. If $\grl$ is a partition we denote by $\grl'$ its transpose partition, as usual, and we define $\grl^{op}=( \grl_1 - \grl_n , \grl_1 - \grl_{n-1}, \dots , \grl_1 - \grl_1)$. We call $\grd$ the determinant representation of $GL(n)$ and $\gre= 1^n$ the associated partition. Finally we call $V$ the natural representation.
\begin{lem}\label{lem:invGL1} \begin{enumerate} \item ${L_{\grl}}^* = \grd^{-\grl_1} \otimes L_{\grl^{op}}$, \item $\Hom_{GL(n)}(\grd^m, L_{\grl} \otimes L_{\mu}) = \begin{cases} \mC &\mif \grl = \mu^{op} +(m-\mu_1)\gre, \\ 0 &\text{ otherwise,}\\ \end{cases} $ \item $\Hom_{GL(n)}(\grd^m, L_{\grl} \otimes L^*_{\mu}) = \begin{cases} \mC &\mif \grl = \mu +m\gre, \\ 0 &\text{ otherwise.} \end{cases}$ \end{enumerate} \end{lem}
\begin{proof} We prove only 2). \begin{align*}
Hom_{GL(n)}(\grd^m, L_{\grl} \otimes L_{\mu}) &=
Hom_{GL(n)}(\grd^m\otimes {L_{\mu}}^*, L_{\grl} ) \\
&= Hom_{GL(n)}(\grd^{m-\mu_1} \otimes {L_{\mu^{op}}}, L_{\grl} ) \end{align*} If $m \geq \mu_1$ the last group is isomorphic to $Hom_{GL(n)}( L_{\mu^{op}+(m-\mu_1) \gre}, L_{\grl} )$ and if $m< \mu_1$ it is isomorphic to $Hom_{GL(n)}( {L_{\mu^{op}}}, L_{\grl+ (\mu_1 -m)\gre})$. In either case the thesis follows. \end{proof}
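For example, for $n=2$ and $\grl=(2,1)$ we have $\grl^{op}=(\grl_1-\grl_2,\grl_1-\grl_1)=(1,0)$, and point 1) gives ${L_{(2,1)}}^* = \grd^{-2}\otimes L_{(1,0)}$; indeed both sides are the irreducible representation of $GL(2)$ with highest weight $(-1,-2)$.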
We want now to describe $\Hom_{GL(n)} \left(\grd^m,V^{\otimes i} \otimes \left( V^* \right)^{\otimes j} \right)$. To do this we will use Schur duality. Recall that the irreducible representations of the group $S_m$ are parametrized by partitions of $m$; call $S_{\grl}$ the irreducible representation associated with $\grl$. Consider now the action of $S_m$ on $V^{\otimes m}$ given by permuting the factors. This action commutes with the $GL(n)$ action. Schur duality asserts that the representation of the group $S_m \times GL(n)$ on $V^{\otimes m}$ decomposes in the following way: $$V^{\otimes m} = \bigoplus_{\substack{\grl \vdash m \\ ht(\grl)\leq n}} S_{\grl} \otimes L_{\grl}. $$ We now describe a set of elements of $\Hom_{GL(n)} \left(\grd^m,V^{\otimes i} \otimes \left( V^* \right)^{\otimes j} \right)$. Let $m$ be a nonnegative integer and choose a permutation $\grs$ of $\{1,\dots,i+mn\}$. To $\grs$ we associate maps: \begin{align*}
\Phi_{\grs} &: \left(V^{\otimes i} \otimes (V^*)^{\otimes i}\right)^{GL(n)}
\lra
V^{\otimes i+mn} \otimes (V^*)^{\otimes i} [\grd^m] \\
\Psi_{\grs} &: \left(V^{\otimes i} \otimes (V^*)^{\otimes i}\right)^{GL(n)}
\lra
V^{\otimes i} \otimes (V^*)^{\otimes i+mn} [\grd^{-m}] \end{align*} by \begin{align*} \Phi_{\grs} (t \otimes s) &= \grs(o\otimes \dots \otimes o\otimes t) \otimes s \\ \Psi_{\grs} (t \otimes s) &= t \otimes \grs(o^* \otimes \dots \otimes o^*\otimes s) \end{align*} where $o $ is a nonzero vector in $\bigwedge^n V$, $o^*$ is a nonzero vector in $\bigwedge^n V^*$ and $t \in V^{\otimes i}$, $s \in (V^*)^{\otimes i} $. \begin{lem}\label{lem:invGL2} 1) If $i \neq j+mn$ then $$\Hom_{GL(n)} \left(\grd^m,V^{\otimes i} \otimes \left( V^* \right)^{\otimes j} \right) = 0.$$
2) If $m>0$ then $$ V^{\otimes i+mn} \otimes ( V^* )^{\otimes i} [\grd^m] = \sum _{\grs} \Im \Phi_{\grs}.$$
3) If $m>0$ then
$$ V^{\otimes i} \otimes ( V^* )^{\otimes i+mn} [\grd^{-m}] = \sum _{\grs} \Im \Psi_{\grs}.$$ \end{lem}
\begin{proof} 1) follows directly from lemma \ref{lem:invGL1}.
2) Let $M=V^{\otimes i+mn} \otimes ( V^* )^{\otimes i}[\grd^m]$ and
$N = (V^{\otimes i} \otimes ( V^* )^{\otimes i})^G$. $N$ is a
$S_i \times S_i$ module, $M$ is a $S_{i+mn} \times S_i$-module and the maps $\Phi_{\grs}$ are equivariant with respect to the $S_i$ action on $(V^* )^{\otimes i}$. In particular it is enough to prove that if $\grl$ is a partition of $i$, $M_{\grl}$ is the $S_{\grl}$-isotypic component of $M$ w.r.t. the $S_i$ action and $N_{\grl}$ is the $S_{\grl}$-isotypic component of $N$ w.r.t. the $S_i$ action on $(V^* )^{\otimes i}$ then $$ M_{\grl} = \sum _{\grs} \Phi_{\grs}(N_{\grl}). $$ By point 3 of lemma \ref{lem:invGL1} we have that \begin{align*}
M &= \bigoplus_{\substack{\grl \vdash i \\ ht(\grl)\leq n} }
\left( S_{\grl+m\gre} \otimes L_{\grl+m\gre} \right) \otimes
\left( S_{\grl} \otimes L_{\grl} \right)^* [\grd^m] \\
&= \bigoplus_{\substack{\grl \vdash i \\ ht(\grl)\leq n} } S_{\grl+m\gre}
\otimes S_{\grl} \otimes \left( \grd^m \otimes \left( L_{\grl} \otimes
{L_{\grl}}^*\right)^G\right) \end{align*} In particular $M_{\grl} = S_{\grl+m\gre} \otimes S_{\grl}$ is an irreducible representation of $S_{i+mn}\times S_i$. Observe that $\sum _{\grs} \Phi_{\grs}(N_{\grl})$ is a $S_{i+mn}\times S_i$-submodule of $M_{\grl}$ and that it is clearly nonzero. So $M_{\grl}=\sum _{\grs} \Phi_{\grs}(N_{\grl})$ as claimed.
The proof of 3) is identical to that of 2). \end{proof}
We want now to give a slightly different formulation of the lemma above. Let $M = V^{\otimes i} \otimes (V^*)^{\otimes j}$; we want to describe $ M^*_{\grd^m} = \{ \grf \in M^* \st g \cdot \grf = \grd^{-m}(g) \grf \}$. Of course this problem is completely equivalent to the previous one. What we want to do is to reformulate the description of a set of generators of $M^*_{\grd^m}$ in a way more convenient for our purposes. Let $m \geq 0$ and choose $\calI = \{ I_1,\dots,I_m \}$, a collection of $m$ disjoint subsets of $\{1, \dots ,i+mn\}$ of cardinality $n$. Let $I_j = \{ i_{j1}< \dots < i_{jn} \}$ and $\{1, \dots ,i+mn\} - \bigcup \calI = \{ j_1 <\dots <j_i \}$. To $\calI$ and to a permutation $\grs \in S_i$ we associate elements $$
\phi_{\calI,\grs} \in \left(V^{\otimes i+mn} \otimes (V^*)^{\otimes i}\right)^*_{\grd^m} \; \mand \; \psi_{\calI,\grs} \in \left(V^{\otimes i} \otimes (V^*)^{\otimes i+mn}\right)^*_{\grd^{-m}} $$ defined by \begin{align*} \phi_{\calI,\grs} (v_1\otimes \dots v_{i+mn} \otimes \grf_1 \dots \grf_i)&=
\prod_{j=1}^{m} \bra o^*, v_{i_{j1}}\wedge \dots \wedge v_{i_{jn}}\ket \cdot
\prod_{h=1}^i \bra v_{j_h}\, , \grf_{\grs_h} \ket \\ \psi_{\calI,\grs}(v_1\otimes \dots v_{i} \otimes \grf_1 \dots \grf_{i+mn})&= \prod_{j=1}^{m} \bra o, \grf_{i_{j1}}\wedge \dots \wedge \grf_{i_{jn}}\ket \cdot \prod_{h=1}^i \bra v_{\grs_h}\, , \grf_{j_h} \ket \end{align*} where $o$ is a nonzero vector in $\bigwedge^n V$ and $o^*$ is a nonzero vector in $\bigwedge^n V^*$.
\begin{lem}\label{lem:invGL3} 1) If $i \neq j+mn$ then $ \left(V^{\otimes i} \otimes ( V^* )^{\otimes j} \right)^*_{\grd^{m}} = 0.$
2) If $m\geq 0$ then $ \left(V^{\otimes i+mn} \otimes ( V^* )^{\otimes i} \right)^*_{\grd^{m}} \,$
is generated by the functions $ \phi_{\calI,\grs}.$
3) If $m\geq 0$ then $\left(V^{\otimes i} \otimes ( V^* )^{\otimes i+mn} \right)^*_{\grd^{-m}}\,$ is generated by the functions $\psi_{\calI,\grs}.$ \end{lem}
\begin{proof} This is clear from the previous lemma. \end{proof}
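As an illustration (a sanity check, not part of the argument) consider the smallest nontrivial case of the lemma: $i=0$ and $m=1$. Then the only admissible collection is $\calI = \{\{1,\dots,n\}\}$, the permutation $\grs$ is trivial, and the lemma states that $\left(V^{\otimes n}\right)^*_{\grd}$ is spanned by the single function $$ \phi(v_1\otimes \dots \otimes v_n) = \bra o^*, v_1\wedge \dots \wedge v_n\ket, $$ that is, up to a scalar, by $v_1\otimes \dots \otimes v_n \longmapsto \det(v_1|\dots|v_n)$, the determinant of the matrix with columns $v_1,\dots,v_n$.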
\subsection{A special case}\label{ssec:specialcase} In this section we prove a special case of Proposition
\ref{prp:covarianti} in which we are able to give a more precise result. To simplify the exposition of the proof of Proposition \ref{prp:covarianti} we will also prove another lemma.
Here and in the following we will use polarization. If $V$ is a finite dimensional vector space then we can define a map $$\pu : (V^{\otimes n})^* \lra S^n(V^*)\subset \mC[V] \; \text{ through }\; \pu(\grf)(v) = \grf(v\otimes \dots \otimes v).$$
\begin{lem} $\pu$ is surjective; moreover if $V$ is a finite dimensional representation of a reductive group $\grG$ and $\chi$ is a character of $\grG$ then $$\pu((V^{\otimes n})^*_{\chi}) = S^n(V^*)_{\chi}$$ where $E_{\chi}$ denotes the isotypic component of type $\chi^{-1}$ of a $\grG$ module $E$. \end{lem}
\begin{lem} For $i =1,\dots,n$ let $\grG_i$ be a reductive group, $\chi_i$ a character of $\grG_i$ and $E_i$ a finite dimensional representation of $\grG_i$. Let $\grG = \prod \grG_i$, so that $E= \otimes _i E_i $ is a representation of $\grG$ and $\chi = \prod \chi_i$ is a character of $\grG$. Then $$ E^*_{\chi} = (E_1)^*_{\chi_1} \otimes \dots \otimes (E_n)^*_{\chi_n}. $$ \end{lem}
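As a toy example of how the two lemmas above interact (it will not be used in the sequel), take $V=\mC^2$, the standard representation of $GL(2)$, $n=2$ and $\chi = \det$. The space $(V^{\otimes 2})^*_{\det}$ is spanned by the function $v\otimes w \longmapsto v_1w_2 - v_2w_1$, whose polarization is $\pu(\grf)(v) = v_1v_2 - v_2v_1 = 0$; accordingly $S^2(V^*)_{\det} = 0$, in agreement with the surjectivity of $\pu$ on covariants.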
Let $J^+$, $J^-$ be two sets of indices and define $ \wt J^+= \{0\} \coprod J^+$, $\wt J^- = \{0\} \coprod J^- $ and $J = J^+ \times J^-$, $\wt J = \wt J^+\times \wt J^- - \{(0,0)\}$. For each $i \in \wt J^+$ (resp. $j \in \wt J^-$) choose a vector space $Y_i$ (resp. $X_j$) and define $X = \bigoplus _{j\in J^-} X_j$ and $Y = \bigoplus _{i\in J^+} Y_i$. Consider the group $$ G_{XY} = \prod _{i \in J^+} GL(Y_i) \times \prod _{j \in J^-} GL(X_j) $$ and its character $c = \prod _{i \in J^+} \det_{GL(Y_i)} \times \left(\prod _{j \in J^-} \det_{GL(X_j)} \right)^{-1}$.
We fix a matrix $r=(r_{ij}) _{i \in \wt J^+ , j \in \wt J^-}$ of integers such that $r_{i0}=1=r_{0j}$ for all $i,j$ and $r_{00} =-1$ and we consider the vector spaces: $$
H^{XY} = H = \bigoplus_{(i,j) \in \wt J} \Hom(X_j, Y_i)^{\oplus r_{ij}} \;\mand\;
H^{XY}_0 = H_0 = \bigoplus_{(i,j) \in J}
\!\!\Hom(X_j,Y_i) $$ where we adopt the convention $E^{\oplus n}=\mC^n = 0$ if $n<0$. When the spaces $X,Y$ are clear from the context we will write $H$ and $H_0$ instead of $H^{XY}$ and $H^{XY}_0$. We fix a basis $e^{ij}_m$ of $\mC^{r_{ij}}$, so we have a canonical identification \begin{equation} \label{eq:Hbasitensor} H=\bigoplus_{(i,j) \in \wt J} \Hom(X_j,Y_i)\otimes \mC^{ r_{ij}} . \end{equation} We want to study $c$-equivariant polynomials on $H$. If we choose two finite dimensional vector spaces $\tilde A, \tilde B$, linear maps $\gra : \tilde A \lra X_0$, $\grb: Y_0 \lra \tilde B$, and elements $\grf_{ij} \in (\mC^{r_{ij}})^*$ for all $i,j$, then we can define a map $\Phi_{\grf,\gra,\grb} : H \lra H_0 \oplus \Hom(\tilde A,Y) \oplus \Hom(X,\tilde B) \subset \Hom(X\oplus \tilde A, Y \oplus \tilde B)$ by \begin{equation}\label{eq:sssec:casoXY:Phi} \Phi_{\grf,\gra,\grb} (\sum_{ (i,j) \in \wt J} A_{ij} \otimes v_{ij}) = \sum _{ (i,j) \in J} \grf_{ij}(v_{ij}) A_{ij} + \sum _{ i\in J^+} A_{i0} \comp \gra + \sum _{ j\in J^-} \grb \comp A_{0j} \end{equation} where $ A_{ij} \otimes v_{ij} \in \Hom (X_j,Y_i) \otimes \mC^{ r_{ij}}$.
The following is a special version of Proposition \ref{prp:covarianti}.
\begin{lem}\label{lem:lemmaprpcovarianti} $\mC[H]_{c}$ is generated as a vector space by the following functions: $$ s \longmapsto \det \bigl(\Phi_{\grf,\gra,\grb} (s)\bigr) $$ where $\Phi_{\grf,\gra,\grb}$ is as above. \end{lem}
\subsubsection{The special case} We will study an even more special case in which we are able to prove a better result that I find nice. In the above setting suppose that $X_0=Y_0=0$ and that $r_{ij}=1$ for all $i,j$.
Define the following set of matrices: \begin{align*} \calS_n &= \{ S = (s_{ij}) \in \mN^{J^+ \times J^-}\st \sum_{i,j} s_{ij}=n \} \\ \calS^{XY} &= \calS = \{ S = (s_{ij}) \in \mN^{J^+ \times J^-}\st \sum_{j} s_{ij}= \dim Y_i
\;\forall i\in J^+\;\\
&\qquad \qquad\; \mand \; \sum_i s_{ij} = \dim X_j \;\forall j\in J^- \;\} \end{align*} As for $H$, we will write $\calS$ when the spaces $X_j,Y_i$ are clear from the context. Observe that $\calS = \vuoto$ if $\sum_j \dim X_j \neq \sum_i \dim Y_i$ and that if $N=\sum \dim X_j=\sum \dim Y_i$ then $\calS \subset\calS_N$. For each $card(J^+) \times card(J^-)$ matrix $S= (s_{ij})$ we consider $\grf_{ij} \in \mC^*$ given by $\grf_ {ij}(\grl) = s_{ij}\grl$ and we define $$ \Phi_S = \Phi_{\grf,0,0} \;\mand \; f_S =f_S^{XY} =\det (\Phi_S ). $$ \begin{prp}\label{prp:27} $\{ f_S \}_{S \in \calS^{XY}} $ is a basis of $\mC[H]_{c}$. \end{prp} \begin{proof} We have to compute $S^n(H^*)_{c}= (S^n(H))^*_{c}$ for all $n$. For all $S \in \calS_n$ define $$ E_S = \bigotimes_{(i,j)\in J^+\times J^-} S^{s_{ij}}\left(\Hom (X_j,Y_i) \right). $$ Observe that $S^n(H) = \bigoplus _{S\in \calS_n} E_S$ as a $G$-module. So $S^n(H)^*_{c} = \bigoplus _{S\in \calS_n} (E_S)^*_{c}$. Observe now that $E_S $ is a quotient of \begin{equation}\label{eq:EStilde} \tilde E_S = \bigotimes _{(i,j) \in J^+ \times J^-} (X_j^*)^{\otimes s_{ij}} \otimes
Y_i^{\otimes s_{ij}}. \end{equation} By the lemmas in the previous section we have that $$ (\tilde E_S)^*_{c} = \begin{cases} 0 &\mif S \notin \calS^{XY}, \\ \mC & \mif S \in \calS^{XY}. \end{cases} $$ So in particular $(E_S)^*_{c} = 0$ if $S \notin \calS^{XY}$. Hence $\dim S^n(H)^*_{c} \leq card(\calS^{XY})$.
The functions $f_S$ are clearly $c$-equivariant, so the only thing that we have to prove is that they are linearly independent. We will in fact prove a generalization of this statement.
If $i \in J^+$ and $j \in J^-$ let $E_{ij}$ be the $card(J^+) \times card(J^-)$ matrix with a $1$ in the $(i,j)$ position and $0$ elsewhere.
For each $i\in J^+, j\in J^-, m \in \mN$ and $N\in \mN$ we consider the following sentence $P_{i,j,m,N}$: \begin{quote} If $\sum_j \dim X_j = N = \sum_i \dim Y_i$ then $\{ f_{S + mE_{ij}} \}_{S \in \calS^{XY}}$ is linearly independent. \end{quote} In the case $m=0$ we call this sentence $P_{0,N}$, since it does not depend on $i,j$, and observe that $\forall N \,P_{0,N}$ is equivalent to our thesis.
For each $N \in \mN$ we consider also the following sentence $Q_N$: \begin{quote} If $\sum_j \dim X_j = N = \sum_i \dim Y_i$ then $P_{i,j,m,N}$ is true for all $i\in J^+$, $j\in J^-$ and $m\in \mN$. \end{quote} \noindent \emph{First remark:} $Q_1$ is true.
\noindent \emph{Second remark:} let $\calS_0 ^{XY} = \{ S \in \calS^{XY} \st s_{ij}=0 \text{ for all } i\in J^+ \mand j\in J^- \text{ such that } \dim Y_i ,\dim X_j \geq 2\}$. Observe that $\{f_S\}_{S \in \calS_0 ^{XY}}$ is linearly independent.
Now we prove $Q_N$ by induction on $N$.
\noindent \emph{First step:} $ Q_{N-1} \then P_{0,N}$. Suppose that there exist $c_S \in \mC$ such that $$ \sum_{S \in \calS^{XY}} c_S f_S = 0. $$ If $\dim X_{j_0},\dim Y_{i_0} \geq 2 $ choose a nonzero element $x_{j_0} \in X_{j_0}$ (resp. $y_{i_0} \in Y_{i_0}$) and a hyperplane $ X' _{j_0} \subset X_{j_0}$ (resp. $ Y'_{i_0} \subset Y_{i_0}$) such that $X_{j_0} = \mC x_{j_0} \oplus X'_{j_0}$ (resp. $Y_{i_0} = \mC y_{i_0} \oplus Y'_{i_0}$) and define: \begin{equation}\label{eq:prp27:defXYprimo} \tilde X_j = \begin{cases} X_j &\mif j \neq j_0 \\ X'_{j_0} &\mif j=j_0 \end{cases} \;\mand \; \tilde Y_i = \begin{cases} Y_i &\mif i \neq i_0 \\ Y'_{i_0} &\mif i=i_0 \end{cases} \end{equation} and define $\Psi : H^{\tilde X \tilde Y} \lra H^{XY}$ by \begin{equation}\label{eq:prp27:defPsi}
\Psi(T)\biggr|_{\tilde X_j} = T \quad\mand\quad \Psi(T)(x_{j_0}) = y_{i_0}. \end{equation} Then we see that \begin{gather*}
0=\sum_{S \in \calS^{XY}} c_S f_S^{XY} (\Psi(T)) =
\sum_{S \in \calS^{XY}\st s_{i_0j_0}\neq 0} s_{i_0j_0} c_S f_S^{\tilde X \tilde Y} (T) \\
= \sum_{S \in \calS^{\tilde X \tilde Y}} (s_{i_0j_0}+1) c_{S+E_{i_0j_0}}
f_{S+E_{i_0j_0}}^{\tilde X \tilde Y} (T) \end{gather*} By induction $P_{i,j,1,N-1}$ is true for all $i,j$, so we see that $c_S=0$ for all $S \in \calS$ such that there exist $i_0,j_0$ with $s_{i_0j_0}\geq 1$ and $\dim X_{j_0} ,\dim Y_{i_0} \geq 2$. Now we conclude by the second remark.
\noindent \emph{Second step:} $Q_{N-1} \then P_{i_0,j_0,m,N}$ if
$\dim X_{j_0},\dim Y_{i_0} \geq 2$ and $m\geq 1$. Suppose that $ \sum_{S \in \calS^{XY}} c_S f_{S+mE_{i_0j_0}}^{XY} =0$. We can construct $\tilde X_j, \tilde Y_i, \Psi$ as in the first step and we see that \begin{gather*}
0=\sum_{S \in \calS^{XY}} c_S f_{S+mE_{i_0j_0}}^{XY} (\Psi(T)) =
\sum_{S \in \calS^{XY}} (s_{i_0j_0}+m) c_S f_{S+mE_{i_0j_0}}^{\tilde X \tilde Y} (T) \\
= \sum_{S \in \calS^{\tilde X \tilde Y}} (s_{i_0j_0}+m+1) c_{S+E_{i_0j_0}}
f_{S+(m+1)E_{i_0j_0}}^{\tilde X \tilde Y} (T) \end{gather*} and by $P_{i_0,j_0,m+1,N-1}$ we deduce $c_S = 0$ for all $S$.
\noindent \emph{Third step:} $Q_{N-1} \then Q_N$. By the previous two steps we only have to prove $P_{i_0,j_0,m,N}$ for $m\geq 1$ and $\dim X_{j_0} = 1 $ or $\dim Y_{i_0} =1$. We will suppose $\dim X_{j_0} = 1$; the other case is completely similar. Suppose that $\sum_{S\in \calS^{XY}} c_S f_{S+mE_{i_0j_0}} =0$. Set $$ \tilde {\calS}_i = \{ S \in \calS^{XY} \st s_{ij_0}=1 \} $$ and observe that since $\dim X_{j_0}=1$ we have $\calS^{XY} = \coprod \tilde {\calS}_i$. Now choose a nonzero vector $x_{j_0} \in X_{j_0}$ and for all $i\in J^+$ choose a nonzero vector $y_{i} \in Y_i$ and a hyperplane $Y_i' $ of $Y_i$ such that $Y_i = \mC y_i \oplus Y_i'$.
Now fix $i_1 \neq i_0$ such that $\dim Y_{i_1} \geq 2 $ and consider $\tilde J^+ = J^+$ and $\tilde J^- = J^- - \{j_0\}$. For all $i \in \tilde J^+$ and for all $j \in \tilde J^-$ define: $$ \tilde X_j = X_j, \;\mand\; \tilde Y_i = \begin{cases} Y_i & \mif i \neq i_1, \\ Y_{i_1}' &\mif i =i_1. \end{cases} $$ For any $S\in \tilde {\calS}_{i_1}$ we define also $t(S) \in \calS^{\tilde X \tilde Y}$ by $t(S)_{ij} = s_{ij}$ for all $i \in \tilde J^+$, $j \in \tilde J^-$. Then $S\longmapsto t(S)$ is a bijection between $\tilde {\calS}_{i_1}$ and $\calS^{\tilde X \tilde Y}$: we call $t^{-1}$ the inverse map. Finally we define $\Psi : H^{\tilde X \tilde Y} \lra H^{XY}$ as in the previous step and we observe that $f_{S+mE_{i_0j_0}} \comp \Psi = 0$ if $S\notin \tilde {\calS}_{i_1}$. Hence \begin{gather*} 0= \sum_{S\in \calS^{XY}}c_S f_{S+mE_{i_0j_0}}(\Psi(T))=
\sum_{S\in \tilde {\calS}_{i_1}} c_S f_{t(S)}(T) \\
= \sum_{S\in \calS^{\tilde X \tilde Y}} c_{t^{-1}(S)} f_{S}(T) \end{gather*} and applying $P_{0,N-1}$ we obtain $c_S=0$ for all $S \in \tilde {\calS}_{i_1}$, provided $\dim Y_{i_1} \geq 2$ and $i_1 \neq i_0$.
In a similar way we prove $c_S=0$ if $S\in \tilde {\calS}_{i_1}$ and $\dim Y_{i_1}=1$ and $i_1 \neq i_0$.
Finally we observe that if $S \in \tilde {\calS} _{i_0}$ then $f_{S+mE_{i_0j_0}} = (m+1) f_S$; hence $c_S=0$ follows now from $P_{0,N}$, which we already know to be true. \end{proof}
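As a minimal illustration of Proposition \ref{prp:27} (an example added for concreteness, not part of the proof), take $J^+=J^-=\{1,2\}$ with all the spaces $X_j, Y_i$ of dimension one, so that $H\isocan \mC^4$ with coordinates $t_{ij}\in \Hom(X_j,Y_i)$. The conditions defining $\calS^{XY}$ say that $S$ has row and column sums equal to $1$, so $\calS^{XY}$ consists of the two permutation matrices, and $$ f_{\left(\begin{smallmatrix} 1&0\\0&1 \end{smallmatrix}\right)} = t_{11}t_{22}, \qquad f_{\left(\begin{smallmatrix} 0&1\\1&0 \end{smallmatrix}\right)} = -t_{12}t_{21}. $$ On the other hand $G_{XY}\isocan (\mC^*)^4$ acts on $t_{ij}$ with weight $y_i x_j^{-1}$, so $\mC[H]_c$ is exactly the span of the monomials $t_{11}t_{22}$ and $t_{12}t_{21}$, and the $f_S$ are indeed a basis.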
\subsubsection{Proof of Lemma \ref{lem:lemmaprpcovarianti}} We study first $(H^{\otimes n})_{c}^*$ and then we apply polarization. As in the previous section we can decompose $H^{\otimes n}$ in summands of the following form: \begin{equation}\label{eq:lemdefE} E = \bigotimes_{(i,j)\in \wt J} (\Hom(X_j,Y_i) \otimes \mC^{r_{ij}})^{\otimes s_{ij}}= \bigotimes _{ (i,j) \in \wt J} (X_{j}^*)^{\otimes s_{ij}} \otimes Y_{i}^{\otimes s_{ij}} \otimes (\mC^{r_{ij}})^{\otimes s_{ij}} \end{equation} where $s_{ij}$ are nonnegative integers such that $\sum _{i,j} s_{ij} = n$. Observe that the order of the factors is not important for us since we will apply polarization.
We can easily describe $E^*_{c}$ using the lemmas in the previous section. In particular a necessary and sufficient condition for the existence of $c$-covariants is $\sum _{i\in \wt J^+} s_{ij} = \dim X_j$ for all $j \in J^-$ and
$\sum _{j\in \wt J^-} s_{ij} = \dim Y_i$ for all $i \in J^+$. Moreover $$ E^*_{c} \isocan \bigotimes _{ (i,j) \in J} \left((\mC^{r_{ij}})^*\right)^{\otimes s_{ij}} \otimes \bigotimes _{j \in J^-} (Y_0^*) ^{\otimes s_{0j}} \otimes \bigotimes _{i \in J^+} X_0^{\otimes s_{i0}} $$
To write explicit formulas we choose an order on the factors of $E$, for example the lexicographic order in $i \in \wt J^+$, $j \in \wt J^-$ and $1\leq q \leq s_{ij}$: $$ E= \underbrace{ X_{1}^* \otimes Y_{1} \otimes \mC^{r_{11}}}_{q=1} \otimes \dots \otimes \underbrace{ X_{1}^* \otimes Y_{1} \otimes \mC^{r_{11}}}_{q=s_{11}} \otimes \underbrace{ X_{1}^* \otimes Y_{2} \otimes \mC^{r_{12}}}_{q=1} \otimes \cdots $$ Once we have chosen such an order we can write an element of $E$ as a linear combination of elements of the form $\otimes _{(i,j,q) \in \wt K} x^{i,j,q} \otimes y^{i,j,q} \otimes v^{i,j,q}$ with $x^{i,j,q} \in X_j^* $, $ y^{i,j,q} \in Y_i$, $v^{i,j,q} \in \mC^{r_{ij}}$, where we set $\wt K= \{ (i,j,q) \in \wt J \times \mN \st 1\leq q \leq s_{ij} \}$. We define also $K= \{(i,j,q) \in \wt K \st (i,j) \in J\}$. Using this convention, if \begin{equation}\label{eq:defphi18} \phi = \bigotimes _{(i,j,q) \in \wt K} \phi ^{i,j,q} \in \bigotimes _{(i,j,q) \in K} (\mC^{r_{ij}})^* \otimes \bigotimes _{(0,j,q) \in \wt K} Y_0^* \otimes \bigotimes _{(i,0,q) \in \wt K} X_0 \end{equation} the corresponding $c$-equivariant linear function on $E$ is defined on an element $s=\otimes _{(i,j,q) \in \wt K} x^{i,j,q} \otimes y^{i,j,q} \otimes v^{i,j,q}$ by \begin{align*} \phi(s) &= \prod _{i \in J^+} \bra \!\! \bigwedge_{\substack{\lra \\ (i,j,q)\in \wt K} }\!\!\! y^{i,j,q} , o_i^*\ket
\prod _{j \in J^-} \bra \!\! \bigwedge_{\substack{\lra \\ (i,j,q)\in \wt K} }\!\!\! x^{i,j,q} , o_j \ket \\ & \quad \prod_{(i,j,q) \in K} \phi^{i,j,q}(v^{i,j,q}) \prod_{(i,0,q) \in \wt K} \phi^{i,0,q}(x^{i,0,q}) \prod_{(0,j,q) \in \wt K} \phi^{0,j,q}(y^{0,j,q}) \end{align*} Now consider the group $$\goS = \goS_1\times \goS_2 \times \goS_3 = \prod _{(i,j)\in J} S_{s_{ij}} \times \prod _{ j \in J^-} S_{s_{0j}} \times \prod _{ i \in J^+} S_{s_{i0}}.$$ This group acts naturally on $ \bigotimes _{ (i,j) \in J} \left((\mC^{r_{ij}})^*\right)^{\otimes s_{ij}} \otimes \bigotimes _{j \in J^-} (Y_0^*) ^{\otimes s_{0j}} \otimes \bigotimes _{i \in J^+} X_0^{\otimes s_{i0}} = E^*_{c}$ by permuting the factors and we observe that $$ \pu \bigl( (\grs_1, \grs_2, \grs_3) \phi\bigr) = \gre(\grs_2) \gre(\grs_3) \pu (\phi) $$ for all $\phi \in E^*_{c}$ and for all $(\grs_1,\grs_2,\grs_3) \in \goS$. So we have that $$ \pu (E^*_{c}) = \pu \biggl( \bigotimes _{ (i,j) \in J} S^{s_{ij}} \bigl((\mC^{r_{ij}})^* \bigr) \otimes \bigotimes_{j\in J^-} \bigwedge ^{s_{0j}} Y_0^* \otimes \bigotimes_{i\in J^+} \bigwedge ^{s_{i0}} X_0 \biggr). $$ In particular since $S^m(V)$ is spanned by vectors of the form $v \otimes \dots \otimes v$, $\pu(E^*_{c})$ is spanned by the functions $\pu(\phi)$ with $\phi $ of the following special form: \begin{equation}\label{eq:phispeciale} \phi = \bigotimes _{(i,j) \in J} (\phi ^{i,j})^{\otimes s_{ij}} \otimes \bigotimes_{j \in J^-} \phi^{0,j,1}\wedge \dots \wedge \phi ^{0,j,s_{0j}} \otimes \bigotimes_{i \in J^+} \phi^{i,0,1}\wedge \dots \wedge \phi ^{i,0,s_{i0}}. \end{equation}
The lemma now follows from the following claim:
\noindent {\bf{Claim:}} For each $\phi$ as in \eqref{eq:phispeciale}, $\pu(\phi)$ is a linear combination of the functions $\det(\Phi_{\grf,\gra,\grb})$.
We prove the claim as follows: we construct vector spaces $A_i$, $B_j$ and $A= \bigoplus_{i\in J^+}A_i$, $B=\bigoplus_{j\in J^-} B_j$ and $$ \tilde H_0 = H_0 \oplus \bigoplus_{i\in J^+} \Hom(A_i,Y_i) \oplus \bigoplus _{j\in J^-} \Hom(X_j,B_j) \subset \Hom(X\oplus A, Y\oplus B) = \tilde H. $$ Observe that on $\tilde H, \tilde H_0$ there is an action of $\tilde G = G_{XY}\times G_{AB} = G_{XY} \times \prod_{i\in J^+} GL(A_i) \times \prod_{j\in J^-} GL(B_j)$ and we call $\tilde c$ the character of $\tilde G$ given by $ (\prod_j \det_{GL(X_j)} \times \prod_i \det_{GL(A_i)} )^{-1} \times
\prod_i \det_{GL(Y_i)} \times \prod_j \det_{GL(B_j)}$. We have an embedding of $G_{XY}$ in $\tilde G$ such that $\grs^* \tilde c = c$. Observe also that by Proposition \ref{prp:27} we know that the $\tilde c$-covariant functions on $\tilde H$ are generated by the functions $\det(\Phi_{\tilde S})$ with $\tilde S \in \tilde {\calS}$: we put a tilde to emphasize that we have to consider also the components $\{A_i\}$ and $\{B_j\}$. Then we construct a $G_{XY}$-equivariant map $\rho : H \lra \tilde H_0$ such that \begin{enumerate}
\item there exists a $\tilde c$-covariant function $f$ on $\tilde H$ such that $\pu(\phi)= f \comp \rho$.
\item for all $\tilde S \in \tilde {\calS}$ there exist $\grf, \gra, \grb$ as in equation \eqref{eq:sssec:casoXY:Phi} such that $\det (\Phi_{\tilde S}) \comp \rho = \det (\Phi_{\grf,\gra,\grb})$. \end{enumerate} The claim now follows by Proposition \ref{prp:27}.
For $i\in J^+$ and $j\in J^-$ define $$
A_i = \mC^{s_{i0}}, \qquad A = \bigoplus_{i\in J^+} A_i, \qquad
B_j = \mC^{s_{0j}}, \qquad B = \bigoplus_{j\in J^-} B_j. $$ Define also $\gra_i : A_i \lra X_0$ and $(\grb_j)^t : B_j^* \lra Y_0^*$ by \begin{align*}
\gra_i(e_l) &= \phi^{i,0,l}, &\mand\quad\gra &= \coprod_{i\in J^+} \gra_i : A \lra X_0 \\
(\grb_j)^t(e^l) &= \phi^{0,j,l} &\mand \quad \grb^t &= \coprod_{j\in J^-} (\grb_j)^t :B^* \lra Y_0, \end{align*} where $e_l$ (resp. $e^l$) is the canonical basis of $\mC^m$ (resp. $(\mC^m)^*$). We define $\grb_j$ (resp. $\grb$) as the transpose of $(\grb_j)^t$ (resp. $\grb^t$). Now define $\rho^{ij} : \Hom(X_j,Y_i) \otimes \mC^{r_{ij}} \lra \Hom (X_j, Y_i)$, $\rho^{i0} : \Hom(X_0,Y_i) \lra \Hom (A_i, Y_i)$, $\rho^{0j} : \Hom(X_j,Y_0) \lra \Hom (X_j, B_j)$ by $$ \rho^{ij}(T \otimes v) = \phi^{i,j} (v) T, \qquad \rho^{i0}(T) = T \comp \gra_i, \qquad \rho^{0j}(T) = \grb_j \comp T, $$ and finally define $\rho = \bigoplus_{(i,j)\in \wt J} \rho^{ij}: H \lra \tilde H_0$. Observe that $\rho$ is $G_{XY}$-equivariant.
Observe now that $\tilde H _0^{\otimes n}= \bigoplus \tilde E_{\tilde S}$ where $\tilde S \in \tilde{\calS}$ and $\tilde E_{\tilde S}$ is defined as in \eqref{eq:EStilde}. In particular we choose the following summand of $\tilde H _0^{\otimes n}$: $$ \tilde E = \bigotimes _{(i,j) \in J} \Hom(X_j,Y_i) ^{\otimes s_{ij}} \otimes \bigotimes_{j \in J^-} \Hom(X_j,B_j)^{\otimes s_{0j}} \otimes \bigotimes_{i \in J^+} \Hom(A_i,Y_i)^{\otimes s_{i0}} $$ and we observe that $(\tilde E)^*_{\tilde c} = \mC$. Choose a nonzero element $\tilde {\phi} \in (\tilde E)^*_{\tilde c}$ and observe that up to a scalar we have \begin{equation}\label{eq:phiphitilderho} \pu_{\tilde H}(\tilde{\phi}) \comp \rho = \pu(\phi). \end{equation} To see this choose $\phi $ as in \eqref{eq:phispeciale}, and bases $y^{i}_h$ of $Y_i$, $x^{j}_k$ of $X_j^*$ (and the dual bases $z^{j}_k$ of $X_j$). Choose also a basis $\gre^{ij}_m$ of $\mC^{r_{ij}}$ such that $\phi^{i,j}(\gre^{ij}_m) = \grd_{m,1}$ and set $A^{ij} = \rho^{ij}(s)= \sum_{h,k} a^{ij}_{hk} y^i_h\otimes x^j_k$ for $s \in H$. Then {\allowdisplaybreaks \begin{align*} \pu(\phi)(s) &=
\sum _{h \in \calK_Y , k \in \calK_X} \;
\prod _{i \in J^+} \bra \!\!
\bigwedge_{\substack{\lra\\ (i,j,q) \in \wt K}}
a^{ij}_{h(i,j,q) k(i,j,q)} y^i_{h(i,j,q)} , o^*_i \ket
\prod _{j \in J^-} \bra \!\!
\bigwedge_{\substack{\lra\\ (i,j,q) \in \wt K}}
x^j_{k(i,j,q)} , o_j \ket \\*
& \qquad
\prod _{i \in J^+} \bra \!\!
\bigotimes _{\substack{\lra\\ (i,0,q) \in \wt K}}
x^0_{k(i,0,q)} , \phi^{i,0,1} \wedge \dots \wedge
\phi^{i,0,s_{i0}} \ket \\*
& \qquad
\prod _{j \in J^-} \bra \!\!
\bigotimes _{\substack{\lra\\ (0,j,q) \in \wt K}}
a^{0j}_{h(0,j,q) k(0,j,q)} y^0_{h(0,j,q)} ,
\phi^{0,j,1} \wedge \dots \wedge
\phi^{0,j,s_{0j}} \ket \\
& = \sum _{k \in \calK_X}
\prod _{i \in J^+} \bra \!\!
\bigwedge_{\substack{\lra\\ (i,j,q) \in \wt K}}
A^{ij} z^j_{k(i,j,q)} , o^*_i \ket
\prod _{j \in J^-} \bra \!\!
\bigwedge_{\substack{\lra\\ (i,j,q) \in \wt K}}
x^j_{k(i,j,q)} , o_j \ket \\*
& \qquad
\prod _{i \in J^+} \bra \!\!
\bigwedge _{\substack{\lra\\ (i,0,q) \in \wt K}}
x^0_{k(i,0,q)} , \phi^{i,0,1} \wedge \dots \wedge
\phi^{i,0,s_{i0}} \ket \\*
& \qquad
\prod _{j \in J^-} \bra \!\!
\bigwedge _{\substack{\lra\\ (0,j,q) \in \wt K}}
A^{0j} z^j_{k(0,j,q)},
\phi^{0,j,1} \wedge \dots \wedge
\phi^{0,j,s_{0j}} \ket \end{align*} } where the indices are as follows: \begin{align*}
\calK_X &= \{ k: \wt K \lra \mN \st 1\leq k(i,j,q) \leq \dim X_j\} \\
\calK_Y &= \{ h: \wt K \lra \mN \st 1\leq h(i,j,q) \leq \dim Y_i\}. \end{align*} The left-hand side of \eqref{eq:phiphitilderho} clearly furnishes the same expression.
Finally if we fix $\tilde S = (s_{MN})_{N\in \{X_j\}\cup\{A_i\} \mand M\in \{Y_i\}\cup\{B_j\}} \in \tilde {\calS}$ and we choose $\grf_{ij}=s_{Y_iX_j} \phi^{i,j}$ and $\gra = \coprod_{i \in J^+} s_{Y_iA_i} \gra_i : A \lra X_0$ and $\grb = \coprod_{j \in J^-} s_{B_jX_j} \grb_j : Y_0 \lra B$ we have $$ \det (\Phi_{\tilde S}) \comp \rho = \det (\Phi_{\grf,\gra,\grb}). $$
\begin{oss} The basis of $\mC[H]_c$ we have described is different from the polarization of the natural basis of $E^*_c$. The relation between the two bases is given by formulas of the following types: \begin{enumerate} \item If $A = \begin{pmatrix}a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}$ and $B = \begin{pmatrix}b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix}$ then $$ \det \begin{pmatrix}a_{11} & b_{12} \\ a_{21} & b_{22} \end{pmatrix} + \det \begin{pmatrix}b_{11} & a_{12} \\ b_{21} & a_{22} \end{pmatrix}
= \det (A+B) - \det A - \det B. $$ \item If $A = \begin{pmatrix}a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}$, $B = \begin{pmatrix}b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix}$, $C =\begin{pmatrix}c_{11} & c_{12} \\ c_{21} & c_{22} \end{pmatrix}$ and $D =\begin{pmatrix}d_{11} & d_{12} \\ d_{21} & d_{22} \end{pmatrix}$ then \begin{gather*} \det \begin{pmatrix}a_{11} & b_{11} \\ a_{21} & b_{21} \end{pmatrix} \det \begin{pmatrix}c_{12} & d_{12} \\ c_{22} & d_{22} \end{pmatrix} -\det \begin{pmatrix}a_{11} & b_{12} \\ a_{21} & b_{22} \end{pmatrix} \det \begin{pmatrix}c_{12} & d_{11} \\ c_{22} & d_{21} \end{pmatrix} + \\ -\det \begin{pmatrix}a_{12} & b_{11} \\ a_{22} & b_{21} \end{pmatrix} \det \begin{pmatrix}c_{11} & d_{12} \\ c_{21} & d_{22} \end{pmatrix} +\det \begin{pmatrix}a_{12} & b_{12} \\ a_{22} & b_{22} \end{pmatrix} \det \begin{pmatrix}c_{11} & d_{11} \\ c_{21} & d_{21} \end{pmatrix} = \\ = -\det \begin{pmatrix}A & B \\ C & D \end{pmatrix} +\det \begin{pmatrix} A & 0 \\ 0 & D \end{pmatrix} +\det \begin{pmatrix} 0 & B \\ C & 0 \end{pmatrix} \end{gather*} \end{enumerate} The first type of formula corresponds to the reduction of Lemma \ref{lem:lemmaprpcovarianti} to the case $r_{ij}=1$ and $X_0=Y_0=0$. The second type of formula corresponds to the case of Proposition \ref{prp:27}. \end{oss}
\subsection{Proof of Proposition \ref{prp:covarianti}} Choose bases $\calB_i$ (resp. $\calB_i^*$) of the vector spaces $D_i$ and $D_i^*$ and write our vector space $S(d,v)$ in the following way: $$ S = \bigoplus _{h\in H} V_{h_0}^* \otimes V_{h_1} \oplus
\bigoplus _{\substack{i \in I \\ b \in \calB_i}} V_{i,b} \oplus
\bigoplus _{i \in I, b^* \in \calB_i^*} V_{i,b^*}^* $$ where $V_{i,b}$ (resp. $V^*_{i,b^*}$) is an isomorphic copy of $V_i$ (resp. $V_i^*$). We also fix a character $\chi_m$ and $m_i$, $\wt m _i,m_i^+,m_i^-,I^+,I^-,I^0$ as in \ref{sssec:generators}, and we describe first the $\chi_m$-covariants of $S^{\otimes n}$. To do this we observe that we can decompose $S^{\otimes n}$ in the following way: $$ S^{\otimes n} = \bigoplus _{\ell} E^{(\ell)}_1 \otimes \dots \otimes E^{(\ell)}_n $$ where each $E^{(\ell)}_i$ is a representation of $G$ of one of the following types: $ V_{h_0}^* \otimes V_{h_1}$, $V_{i,b}$ or $V_{i,b^*}^*$. So it is enough to compute the $\chi$-covariants of each piece $E^{(\ell)}_1 \otimes \dots \otimes E^{(\ell)}_n$. We fix one of them, $E = E_1 \otimes \dots \otimes E_n$, and we compute $E^*_{\chi}$. Let $I^*$ be a copy of $I$ and fix an isomorphism $i\longleftrightarrow i^*$ between the two sets. For each $j=1,\dots,n$ we define a subset $\calS_j$ of $I\coprod I^*$ according to the following rule: $$ \calS_j = \begin{cases}
\{h^*_0, h_1\} &\mif E_j = V_{h_0}^* \otimes V_{h_1}, \\
\{i\} &\mif E_j = V_{i,b} \text{ for some } b \in \calB_i, \\
\{i^*\} &\mif E_j = V_{i,b^*}^* \text{ for some } b^* \in \calB_i^*. \end{cases} $$ Now let $\calS =\coprod_{j=1}^{n} \calS_j$. An element of $\calS$ can be thought of as a pair $(i,j)$ (or $(i^*,j)$) where $i$ (or $i^*$) is in $\calS_j$. We consider now a special class of partitions of $\calS$: a collection $\goF=\{\calC, \calM^{(l)}_i \mfor i\in I \mand 1\leq l\leq m_i\}$ of disjoint subsets of $2^{\calS}$ is called $m$-\emph{special} if: \begin{enumerate} \item $\bigcup \goF$ is a partition of $\calS$, \item $\forall \, C \in \calC$ $\, card\, C =2 $ and $\exists i \in I$,
$\calS_{j_1}, \calS_{j_2}$ such that $i\in \calS_{j_1}$, $i^*\in
\calS_{j_2}$ and $C=\{(i,j_1),(i^*,j_2)\}$, \item $\forall M \in \calM^{(l)}_i$ we have $M=\{(i,j)\}$ if $i \in I^+$
and $M=\{(i^*,j)\}$ if $i \in I^-$, \item $card \calM ^{(l)}_i = v_i =\dim V_i$. \end{enumerate}
We can represent a special collection by an enriched graph whose vertices are the sets $\calS_j$, completed according to the following rules: \begin{enumerate} \item we put an arrow from $\calS_{j_1}$ to $\calS_{j_2}$ if there exists
$C = \{(i,j_2),(i^*,j_1)\}\in \calC$, \item we put an indexed circle box $\comp^l_i$ on $\calS_j$ if there exists
$M = \{(i,j)\} \in \calM^{(l)}_i$ \item we put an indexed square box $\vuotosq^l_i$ on $\calS_j$ if there exists
$M = \{(i^*,j)\} \in \calM^{(l)}_i$ \item if $E_j$ is of type $V_{i,b}$ or $V_{i,b^*}^*$ then we add the element
$b$ or $b^*$ at the left of the corresponding vertex. \item if $E_j$ is of type $V_{h_0}^* \otimes V_{h_1}$ then we write $h$ at the
left of the corresponding vertex. \end{enumerate} Observe that a vertex can be marked with a circle and a square but that it cannot be marked with two circles or two squares.
There is a bijection between $m$-special collections $\goF$ and graphs as above such that: \begin{enumerate} \item the number of vertices marked with $\comp^l_i$ is $v_i$ for each $i \in I^+$ and
$1\leq l \leq m_i$, \item the number of vertices marked with $\vuotosq^l_i$ is $v_i$ for each $i \in I^-$ and
$1\leq l \leq -m_i$. \end{enumerate} We will use the same letter $\goF$ to indicate the collection or the graph.
To a special collection $\goF$ as above we attach a function $\phi_{\goF}$ on $E$. We define it by the formula $$ \phi_{\goF}(e_1 \otimes \dots \otimes e_n) = \prod_{C \in \calC} \phi_C \cdot \prod_{i \in I^+} \prod _{l=1}^{m_i} \bra o_i^* , \bigwedge \calM^{(l)}_i\ket \cdot \prod_{i \in I^-} \prod _{l=1}^{-m_i} \bra o_i , \bigwedge \calM^{(l)}_i\ket $$ where $o_i$ is a nonzero element in $\bigwedge^{v_i}V_i$, $o_i^*$ is a nonzero element in $\bigwedge^{v_i}V_i^*$ and \begin{align*} e_j &= \begin{cases}
x^*_j \in V_i^* & \mif E_j = V_{i,b^*}^*,\\
y_j \in V_i & \mif E_j = V_{i,b}, \\
x^*_j \otimes y_j \in V^*_{h_0} \otimes V_{h_1} & \mif
E_j=V^*_{h_0} \otimes V_{h_1},
\end{cases} \\ \phi_C &= \bra x^*_{j_1} \,, y_{j_2}\ket \qquad \mif
C=\{(i^*,j_1),(i,j_2)\}\\ \bigwedge \calM_i^{(l)} &= y_{j_1}\wedge \dots \wedge y_{j_{v_i}} \quad \mif
\calM_i^{(l)} =\{ \{(i,j_1)\} , \dots,\{(i,j_{v_i})\} \} \mand i \in
I^+ \\ \bigwedge \calM_i^{(l)} &= x^*_{j_1}\wedge \dots \wedge x^*_{j_{v_i}} \quad
\mif \calM_i^{(l)} =\{ \{(i,j_1)\} , \dots,\{(i,j_{v_i})\} \} \mand i
\in I^- \end{align*} Finally we extend $\phi_{\goF}$ to all of $E$ by linearity. By the lemma above and the discussion in \ref{ssec:invGL} we easily deduce the following lemma: \begin{lem}\label{lem:28} $E^*_{\chi}$ is generated by the functions $\phi_{\goF}$. \end{lem}
Proposition \ref{prp:covarianti} now follows from lemma \ref{lem:28} and the following claim: \noindent {\bf{Claim:}} for any special collection $\goF$ the function $\pu(\phi_{\goF})$ is a $\mC[S]^{G_v}$-linear combination of the functions $f_{\Delta}$ described in \ref{sssec:generators}.
We consider the connected components of the graph. There are only five possible types of paths: \begin{enumerate} \item closed paths, \item straight paths leaving from a non boxed vertex and arriving in a non boxed vertex, \item straight paths leaving from a non boxed vertex and arriving in a circle boxed vertex, \item straight paths leaving from a square boxed vertex and arriving in a non boxed vertex, \item straight paths leaving from a square boxed vertex and arriving in a circle boxed vertex. \end{enumerate}
Let now $\goF_0$ be the union of the connected components of the first two types and $\goF_1$ be the union of the remaining components. Observe that $$ \pu(\phi_{\goF}) = \pu(\phi_{\goF_0}) \pu(\phi_{\goF_1}). $$ Observe also that $\phi_{\goF_0}$ is an invariant function (indeed this part of the graph corresponds to the situation studied by Lusztig in \cite{Lu:Q4}). Since we are interested in generators of $\mC[S]_{\chi,1}$ as a $\mC[S]^G$-module, we can suppose for simplicity $\goF = \goF_1$.
Observe now that each connected component $\grG$ of the graph of the third type, with a circle $\comp^l_{i_1}$ at the end, has an initial vertex which is an $\calS_j = \{i_0^*\}$ marked with $b \in \calB_{i_0}^*$ on the left. All the other vertices of the connected component are of type $\calS_j = \{ h_0^*,h_1\}$ and they define a path $\gra^{\grG} $ such that $\gra^{\grG}_0 = i_0$ and $\gra^{\grG}_1 = i_1$. We call $b = b({\grG})$ and $l=L_1(\grG)$.
In the same way we see that: \begin{enumerate} \item each connected component $\grG$ of the fourth type determines a path $\gra^{\grG}$, $b^*=b^*(\grG) \in \calB^*_{\gra^{\grG}_1}$ and $l=L_0(\grG)$ such that $1 \leq l \leq -m_{\gra^{\grG}_0}$, \item each connected component $\grG$ of the fifth type determines a path $\gra^{\grG}$, $l_0=L_0(\grG)$ and $l_1=L_1(\grG)$ such that $1 \leq l_0 \leq -m_{\gra^{\grG}_0}$ and $1 \leq l_1 \leq m_{\gra^{\grG}_1}$. \end{enumerate}
Now we prove the claim in the following way: we construct $X_j$ and $Y_i$ as in \ref{ssec:specialcase}, a group homomorphism $\grs : G_v \lra G_{XY}$ such that $\grs^* c =\chi_m$, a $G_v$ equivariant map $\rho: S \lra H$, and a $G_{XY}$ $c$-covariant function $f$ on $H$ such that: \begin{enumerate}
\item for all $\grf,\gra,\grb$ there exists $\chi_m$-good data $\Delta$ such that
$\det(\Phi_{\grf,\gra,\grb}) \comp \rho = f_{\Delta}$,
\item $\pu( \phi_{\goF}) = f \comp \rho$ \end{enumerate} The claim will clearly follow.
Set \begin{align*} J^- &= \{ (i,l) \st i \in I^- \mand 1\leq l \leq -m_i \}, \\ J^+ &= \{ (i,l) \st i \in I^+ \mand 1\leq l \leq m_i \}. \end{align*} For all $(i,l) \in J^-$ choose $X_{(i,l)} = V_i$ and for each $(i,l) \in J^+$ choose $Y_{(i,l)} = V_i$. For each $(i_0,l_0) \in J^-$ and for each $(i_1,l_1) \in J^+$ define: \begin{align*} r_{(i_0,l_0)(i_1,l_1)} &= card\{\, \text{connected component $\grG$ of the fifth type such } \\
&\qquad \text{that }\gra^{\grG}_0 = i_0,\; \gra^{\grG}_1 = i_1,\; L_0(\grG) = l_0 \mand
L_1(\grG) = l_1\} \end{align*} We regard the connected components $\grG$ of the set on the left-hand side as a basis $e_{\grG}$ of the vector space $\mC^{r_{(i_0,l_0)(i_1,l_1)}}$. This basis plays the role of the basis $e^{ij}_m$ we used to give the identification in \eqref{eq:Hbasitensor}.
For each $\grG$ of the third type choose a one dimensional vector space $\mC_{b(\grG)}$ and fix a generator $b_{\grG}$. For each $\grG$ of the fourth type choose a one dimensional vector space $\mC_{b^*(\grG)}$ and fix a generator $b^*_{\grG}$. \begin{align*} X_0 &= \bigoplus_{\grG \text{ of the third type}} \mC_{b(\grG)} = \bigoplus_{\grG \text{ of the third type}} \mC b_{\grG} \\ Y_0 &= \bigoplus_{\grG \text{ of the fourth type}} \mC_{b^*(\grG)} = \bigoplus_{\grG \text{ of the fourth type}} \mC b^*_{\grG}. \end{align*}
Now for each connected component $\grG$ of the third type define $\rho^{\grG} : S \lra \Hom(\mC_{b(\grG)} , Y_{(\gra^{\grG}_1,L_1(\grG))}) $ by $$ s \longmapsto \{ \grl \mapsto \gra^{\grG}(s) \grg_{\gra_0^{\grG}} (b(\grG))\grl\} $$ In a similar way define $\rho^{\grG}$ if $\grG$ is of the fourth or of the fifth type. Finally define $$ \rho : S \lra H \; \text{ by }\; \rho = \bigoplus_{\grG} \rho^{\grG}. $$ Define also a group homomorphism $\grs : G_v \lra G_{XY}$ by $(\grs(g_i)) _{X_{(i_0,l_0)}} = g_{i_0}$ and $(\grs(g_i)) _{Y_{(i_1,l_1)}} = g_{i_1}$, and observe that $\rho$ is $G_v$ equivariant.
Now we describe $\phi \in (H^{\otimes \tilde n})^*_c$ (in general $\tilde n$ is less than or equal to $n$) such that \begin{equation}\label{eq:puphigoF} \pu (\phi_{\goF}) (s) = \pu (\phi)(\rho(s)). \end{equation} We describe $\phi$ by giving a summand $\tilde E$ of $H^{\otimes \tilde n}$ as in \eqref{eq:lemdefE} and $\phi \in \tilde E^*_c$ as in \eqref{eq:defphi18}. To define $\tilde E$ we have to define $s_{(i_1,l_1)(i_0,l_0)}$, $s_{(i_1,l_1)0}$ and $s_{0(i_0,l_0)}$ for all $(i_1,l_1) \in J^+$ and for all $(i_0,l_0)\in J^-$. We set \begin{align*} s_{(i_1,l_1)(i_0,l_0)} & = r_{(i_1,l_1)(i_0,l_0)} \\ s_{(i_1,l_1)0} & = card \{\, \text{ connected component $\grG$ of the third type} \\
&\qquad \text{ such that }
\gra^{\grG}_1 = i_1 \mand L_1(\grG)=l_1 \} \\ s_{0(i_0,l_0)} & = card \{\, \text{ connected component $\grG$ of the fourth type} \\
&\qquad \text{ such that }
\gra^{\grG}_0 = i_0 \mand L_0(\grG)=l_0 \} \end{align*} Observe that we can choose a bijection $ q \longleftrightarrow \grG_q$ between $\{1,\dots,s_{(i_1,l_1)(i_0,l_0)}\}$ and the set of connected components $\grG$ of the fifth type such that $\gra^{\grG}_1 = i_1$, $L_1(\grG)=l_1$, $\gra^{\grG}_0 = i_0$ and $L_0(\grG)=l_0$. So we can define $\phi^{(i_1,l_1),(i_0,l_0),q}$ by $$ \phi^{(i_1,l_1),(i_0,l_0),q} (e_{\grG}) = \grd_{\grG,\grG_q}. $$ Observe also that we can choose a bijection $ q \longleftrightarrow \grG_q$ between $\{1,\dots,s_{(i_1,l_1)0}\}$ and the set of connected components $\grG$ of the third type such that $\gra^{\grG}_1 = i_1$ and $L_1(\grG)=l_1$. So we can define $\phi^{(i_1,l_1),0,q}$ by $$ \phi^{(i_1,l_1),0,q} (b_{\grG}) = \grd_{\grG,\grG_q}. $$ In a similar way define $\phi^{0,(i_0,l_0),q}$.
Up to a sign, which depends on our choices and orderings, equation \eqref{eq:puphigoF} is tautologically satisfied.
Observe now that by lemma \ref{lem:lemmaprpcovarianti} and linearity $\mC[H]_c$ is generated by functions $s\longmapsto \det(\Phi_{\grf,\gra,\grb}(s))$ where $\tilde A,\tilde B,\grf,\gra,\grb$ are as in \eqref{eq:sssec:casoXY:Phi} and moreover there exists a basis $e_1,\dots,e_{r_A}$ of $\tilde A$ and a basis $\tilde e_1,\dots,\tilde e_{r_B}$ of $\tilde B^*$ such that for all $i$ there exists a connected component of the third type $\grG^A_i$ such that $\gra(e_i)= b_{\grG^A_i}$ and for all $i$ there exists a connected component of the fourth type $\grG^B_i$ such that $\tilde e _i (\grb (b^*_{\grG})) = \grd_{\grG\grG^B_i}$. So it is enough to prove that if $\tilde A,\tilde B,\grf,\gra,\grb$ are as above then there exists a $\chi_m$-good $\grD$ such that $$ \det(\Phi_{\grf,\gra,\grb}) \comp \rho = f_{\Delta}. $$ We define {\allowdisplaybreaks \begin{align*} A & = ( b_{\grG^A_i} )_{i=1,\dots,r_A} \in \left(\bigcup D_i\right)^{\dim \tilde A} \\ B & = ( b^*_{\grG^B_i} )_{i=1,\dots,r_B} \in \left(\bigcup D^*_i\right)^{\dim \tilde B} \\ \gra^{ik}_{jh} & = \sum_{ 1\leq q \leq s_{(j,h)(i,k)}} \phi^{(j,h),(i,k),q}(e_{\grG}) \gra^{\grG}\\ \gra^{ik}_l & = \grd_{i,(\gra^{\grG^B_l})_0}\grd_{k,L_0(\grG^B_l)} \gra^{\grG^B_l} \\ \gra_{jh}^l & = \grd_{j,(\gra^{\grG^A_l})_1}\grd_{h,L_1(\grG^A_l)} \gra^{\grG^A_l} \end{align*} } Equation \eqref{eq:puphigoF} now follows by the very definitions.
\section{The action of the Weyl group} For any $m\in P$ and for any $\grl \in Z$ we defined a variety $M_{m,\grl}(d,v)$. Observe that on both $m$ and $\grl$ there is a natural action of the Weyl group $W$. We define an action of the Weyl group also on $(d,v)$. We have already described $d$ as an element of $X$ and $v$ as an element of $Q$. We can now define $$ \grs (d,v) = (d,\grs(v-d)+d). $$ Observe that $\grs(v-d)+d \in Q$, so the definition is well posed. It therefore makes sense to consider the variety $M_{\grs m,\grs \grl}(\grs(d,v))$ or the variety $\goM_{\grs \zeta} ( \grs(d,v))$ for $\zeta \in \goZ$.
In \cite{Na1} Nakajima used analytic methods to prove, in the case of a finite Dynkin diagram, that if $\zeta$ is generic then there exists a diffeomorphism of differentiable manifolds $$ \Phi_{\grs,\zeta} \colon \goM_{\zeta} (d,v) \lra \goM_{\grs \zeta} ( \grs(d,v)) $$ and moreover that $\Phi_{\grs',\grs\zeta}\comp\Phi_{\grs,\zeta}=\Phi_{\grs'\grs,\zeta}$. In the same paper he also asserted that a similar construction could be obtained in the general case using reflection functors as indeed we are going to do.
In \cite{Lu:Q4} Lusztig gave a purely algebraic construction of an isomorphism $$ M_{0,\grl}(d,v) \isocan M_{0,s_i \grl} (s_i(d,v)) $$ whenever $\grl_i \neq 0$. In this paper we will give a generalization of Lusztig's construction.
\begin{dfn}\label{dfn:22} If $u \in \mZ^n=Q\cech$ and $A \subset Q\cech$ we define $$ H_u = \{ (m,\grl)\in P \oplus Z \st \bra u\cech ,\grl \ket = \bra u \cech , m \ket =0 \} \; \mand \; H_A = \bigcup_{a\in A} H_a $$ Let $K= \max \{1,a_{ij}^2 \st i,j \in I \}$. If $v \in \mZ^n$ we define $$ \tilde U _{v} = \{ u \in \mN^I \tc 0 \leq u_i \leq K v_i \} \; \mand \; \tilde H^{v} = H_{\tilde U_v} . $$ We define also $$
U_{\infty} = \bigcup _{i\in I} W \gra_i\cech \; \mand \;
H^{\infty} = H_{U_{\infty}}. $$ Finally we set $ \calG_{v} = \{ (m,\grl) \in P \times Z_G \st \grs(m,\grl) \notin H^{\grs \cdot v} \text{ for all } \grs \in W \}.$ Either of the following definitions of the set $\calG$ will work for us: \begin{align*} \calG &= \{(v,m,\grl)\in Q \times P \times Z_G \st (m,\grl) \in \calG_{v}\} \; \text{ or } \\ \calG &= \{(v,m,\grl)\in Q \times P \times Z_G \st (m,\grl) \notin H^{\infty} \}. \end{align*} \end{dfn} We observe that in any case $\calG$ is $W$-stable.
\begin{prp}\label{prp:azioneweyl}
For all $d,v$, for all $\grs \in W$ and for all $(m,\grl) \in \calG_v$ there exists an algebraic isomorphism: $$ \Phi^{d,v}_{\grs,m,\grl}\colon M_{m,\grl}\bigl(d,v\bigr) \lra M_{\grs m ,\grs \grl }\bigl(\grs(d,v)\bigr). $$ Moreover these isomorphisms satisfy \begin{equation}\label{eq:azioneweyl} \Phi^{\grs(d,v)}_{\tau,\grs m,\grs \grl} \comp \Phi^{d,v}_{\grs,m,\grl} = \Phi^{d,v}_{\tau \grs,m,\grl}. \end{equation} \end{prp}
\subsection{Generators} In this section we define the action of the generators $s_i$ of $W$ following \cite{Lu:Q4}. We fix $ i \in I$ and $(d,v)$, $\grl \in Z$ and $m \in P$. We call $(d,v')= s_i(d,v)$, $\grl ' = s_i \grl$ and $m '=s_i m$. Throughout this section we assume $v,v'\geq 0$.
For the convenience of the reader we write explicit formulas in this case: \begin{align*}
\grl_j ' &= \grl_j - c_{ij} \grl_i & m _j'&= m_j - c_{ij}
m_i \; \text{ for all } j \\
v_i' &= d_i -v_i +\sum _{j\neq i} a_{ij} v_j && v_j'=v_j
\; \text{ for all } j \neq i \end{align*} Observe that we can choose $$ D_j' = D_j \;\text{ for all } j \;\mand \;
V_j' =V_j \;\text{ for all } j \neq i . $$ In particular we have $$
T_i = D_i \oplus \bigoplus _{h_1=i} V_{h_0} = T_i' $$ since we suppose that our quiver has no simple loops.
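As a sanity check on these formulas, consider a purely illustrative example (the numerical values are ours, and we assume the standard Cartan convention $c_{ij} = 2\grd_{ij} - a_{ij}$, which matches the formulas above): the quiver of type $A_2$, with $I=\{1,2\}$ and a single edge, so $a_{12}=a_{21}=1$, $c_{11}=c_{22}=2$ and $c_{12}=c_{21}=-1$. Take $i=1$, $d=(2,1)$ and $v=(1,1)$. Then

```latex
% Illustrative computation of s_1 acting on (d,v), \grl and m for the A_2
% quiver with d=(2,1), v=(1,1); the values are chosen only as an example.
\begin{align*}
  v_1' &= d_1 - v_1 + a_{12} v_2 = 2 - 1 + 1 = 2,
    & v_2' &= v_2 = 1, \\
  \grl_1' &= \grl_1 - c_{11}\grl_1 = -\grl_1,
    & \grl_2' &= \grl_2 - c_{12}\grl_1 = \grl_2 + \grl_1, \\
  m_1' &= -m_1,
    & m_2' &= m_2 + m_1.
\end{align*}
```

In particular $\dim T_1 = d_1 + a_{12}v_2 = 3 = v_1 + v_1'$, as required by the exact sequence $0 \to V_1' \to T_1 \to V_1 \to 0$ appearing in the definition below, and $v' = (2,1) \geq 0$, so the standing assumption $v,v' \geq 0$ holds here.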
\begin{dfn}[Lusztig \cite{Lu:Q4}] \label{dfn:generatorsweyl} Fix $\grl \in Z_G$ and define $Z^{\grl}_i (d,v)$ to be the subvariety of $S_i(d,v) \times S_i(d,v')$ of pairs $(s,s')= \bigl( (B,\grg,\grd), (B',\grg',\grd') \bigr)$ such that the following conditions hold: \begin{enumerate}
\item $B_h (s)= B_h'(s')$ for all $h$ such that $h_0,h_1 \neq i$,
\item $\grg_j (s)= \grg_j'(s') $ for all $j \neq i$ ,
\item $\grd_j (s)= \grd_j'(s') $ for all $j \neq i$,
\item the following sequence is exact: \begin{equation} \label{eq:seqes:defsi} \begin{CD} 0 @>>> V_i' @>{a_i'}>> T_i @>{b_i}>> V_i @>>> 0, \end{CD} \end{equation}
\item $a_i'(s')b_i'(s') = a_i(s)b_i(s) - \grl_i \Id_{T_i}$,
\item $s \in \grL_{\grl}(d,v)$ and $s' \in \grL_{\grl'}(d,v')$. \end{enumerate} \end{dfn}
\begin{lem}
Let $(s,s') \in S_i(d,v) \times S_i(d,v')$ and suppose that it satisfies conditions 1), 2), 3), 4) and 5) above. Then:
\begin{enumerate}
\item $s \in \grL_{\grl}(d,v) \iff s' \in \grL_{\grl'}(d,v')$,
\item if $\mu_j(s) - \grl_j \Id_{V_j}=0$ for all $j\neq i$ then $s \in \grL_{\grl}(d,v)$ ,
\item if $\mu_j(s') = \grl_j \Id_{V_j'}$ for all $j\neq i$ then $s' \in \grL_{\grl'}(d,v')$.
\end{enumerate} \end{lem}
\begin{proof} 2) We have to prove $b_ia_i - \grl_i\Id _{V_i}=0$ and, since $b_i$ is an epimorphism by condition 4), it is enough to prove
$b_ia_ib_i = \grl_i b_i$. Now $b_ia_ib_i = b_i(a_i'b_i' + \grl_i) = \grl_i b_i$ by conditions 4) and 5), since exactness gives $b_ia_i'=0$.
The proof of 3) is identical to that of 2). We prove the implication $\then$ in 1). By 2) and 3) it is enough to prove that $b_j'a_j' = \grl_j'$ for $j\neq i$. {\allowdisplaybreaks \begin{align*}
b_j'a_j' &= \sum_{h_1=j} \gre(h) B_h' B'_{\bar h} + \grg_j' \grd_j' = \\
& = \sum_{h_1=j, h_0 \neq i} \gre(h) B_h B_{\bar h} + \grg_j \grd_j
+ \sum_{h_1=j, h_0 = i} \gre(h) B_h' B_{\bar h}' = \\
& = b_j a_j + \sum_{h_1=j, h_0 = i} \gre(h)
\left( B_h' B_{\bar h}' - B_h B_{\bar h} \right) \\
& = b_j a_j + \sum_{h_0=j, h_1 = i}
\left( B_{\bar h} \gre(h) B_{h}- B_{\bar h}' \gre(h) B'_h
\right) \\
& = b_j a_j + \sum_{h_0=j, h_1 = i} \left( [a_ib_i]^{V_{h_0}}_{V_{h_0}}
- [a'_ib'_i]^{V_{h_0}}_{V_{h_0}} \right) \\
& = \grl_j + \sum_{h_0=j, h_1 = i} \grl_i = \grl_j' \end{align*} } The proof of the converse is completely analogous. \end{proof}
\begin{lem}
Let $\grl \in Z_G$, $(s,s') \in Z^{\grl}_i(d,v)$ and $\gra $ be
an element of the path algebra of type $(\gra_0,\gra_1)$ then \begin{enumerate}
\item if $\gra_0,\gra_1\neq i$ there exists an element $\gra'$
of the $b$-path algebra of type $(\gra_0,\gra_1)$ such that
$\gra'(s')=\gra(s)$,
\item if $\gra_1\neq i$ there exists an element $\gra'$
of the $b$-path algebra of type $(\gra_0,\gra_1)$ such that
$\gra'(s') \grg '_{\gra_0} =\gra(s)\grg_{\gra_0}$,
\item if $\gra_0\neq i$ there exists an element $\gra'$
of the $b$-path algebra of type $(\gra_0,\gra_1)$ such that
$\grd '_{\gra_1}\gra'(s') =\grd _{\gra_1}\gra(s)$,
\item there exists an element $\gra'$
of the $b$-path algebra of type $(\gra_0,\gra_1)$ such that
$\grd '_{\gra_1}\gra'(s') \grg '_{\gra_0} = \grd
_{\gra_1}\gra(s)\grg_{\gra_0}$, \end{enumerate} \end{lem} \begin{proof} By induction on the length of $\gra$ we can reduce the proof of this lemma to the following identities that are a consequence of condition 5) in definition \ref{dfn:generatorsweyl}: \begin{align*} B'_hB'_k &= \begin{cases}
B_hB_k &\mif h \neq \bar k \\
B_hB_k - \grl_i &\mif h = \bar k \end{cases} \\ \grd '_iB'_k &= \grd _iB_k \\ B'_h \grg '_i &= B_h \grg_i \\ \grd '_i \grg _i ' &= \grd _i \grg_i -\grl_i \end{align*} for $h,k$ such that $h_0 = i = k_1$. \end{proof}
\begin{lem} Let $(s,s') \in Z^{\grl}_i(d,v)$ and suppose $m_i \geq 0$ or $\grl_i\neq 0$ then $$ s \text{ is } \chi_m \text{ semistable } \iff s' \text{ is } \chi_{m'} \text{ semistable } $$ \end{lem}
\begin{proof}We prove only $\then$. We first treat the case $m_i \geq 0$. If $s$ is $\chi_m$ semistable, then there exists $\grD= \{ A,B,\gra^{*}_{*} \}$ $m$-good such that $f_{\grD}(s) \neq 0$. Using the notation in \ref{sssec:generators} we have $f_{\grD} = \det \Psi_{\grD}$ where $ \Psi_{\grD} : Y \lra Z$ is a linear map. In our case we can write $Z$ as $\mC^{m_i} \otimes V_i \oplus \wt Z $ and we observe that no $V_i$ summands appear in $Y$ or $\wt Z$.
Now we construct new data $\grD'= \{ A',B',{\gra'}^*_* \}$ such that $f_{\grD'}(s') \neq 0 $ and $f_{\grD'}$ is a $\chi'$-covariant polynomial. Our strategy will be the following: we substitute each $V_i$ with the space $T_i$ in the space $Z$ and we add $m_i$ copies of $V_i'$ to $Y$. Let us be more precise: first of all the new data will not be $m'$-good, so we have to define ${m'}_j^+ $ and ${m'}_j^-$: \begin{enumerate}
\item ${m'}_i^+ = 0$ and ${m'}_i^- = m_i = m_i^+$,
\item ${m'_j}^- = m_j^-$ and ${m'_j}^+ = m_j^+ + a_{ij} m_i^+$ for all
$j\neq i$,
\item ${m'}^- = m^- $ and ${m'}^+ = m^+ + d_i m_i^+$. \end{enumerate} Observe that ${m'_j}^+ - {m'_j}^- = m _j'$ for all $j$, so our data will furnish a $\chi'$-covariant function. Moreover if we define $$ Z' = \mC^{m_i} \otimes T_i \oplus \wt Z \;\mand\; Y' = \mC^{m_i} \otimes V_i' \oplus Y $$ we observe that they have the numbers of $V'_j$, $\mC_a$, $\mC_b$ factors specified by ${m'}$. Now we construct the new data $\grD'$ in such a way that with respect to the decompositions above we have: \begin{align*}
[\Psi_{\grD}(s)]_{\mC^{m_i}\otimes V_i \oplus \wt Z}^{Y} & = \begin{pmatrix}
( \Id\otimes b_i ) \comp \pi\\
\Phi \end{pmatrix},\\
[\Psi_{\grD'}(s')]_{\mC^{m_i}\otimes T_i \oplus \wt Z}^{\mC^{m_i}\otimes
V_i' \oplus Y} & =
\begin{pmatrix}
\Id \otimes a'_i & \pi \\
0 & \Phi
\end{pmatrix}. \end{align*}
If we construct a data with this property we observe that $\Psi_{\grD}(s)$ is an isomorphism if and only if $\Psi_{\grD'}(s')$ is an isomorphism. Hence $f_{\grD}(s) \neq 0$ implies $f_{\grD'}(s') \neq 0$ and the lemma is proved.
To construct the new data we choose a basis $e_1,\dots,e_{d_i}$ of $D_i$ and we define the other elements of the data according to the following rules \begin{enumerate}
\item $A' =A$,
\item if $B=(b_1,\dots,b_{m^+})$ we set $B'=(b_1,\dots,b_{m^+},
\underbrace{e_1,\dots,e_1}_{m_i\;times},\dots ,
\underbrace{e_{d_i}, \dots, e_{d_i}} _{m_i\;times})$,
\item ${\gra'}^{j_1,h_1}_{j_2,h_2}$ for $j_1 \neq i$ and
$h_2 \leq m^+_{j_2}$ is an
element constructed according to case 1) in the
previous lemma,
\item ${\gra'}^{a}_{j_2,h_2}$ for $h_2 \leq m^+_{j_2}$ is an
element constructed according to case 2) in the
previous lemma,
\item ${\gra'}^{j_1,h_1}_{b_l}$ for $j_1 \neq i$ and
$l \leq m^+$ is an
element constructed according to case 3) in the
previous lemma.
\item ${\gra'}^{i,h}_{b_l} = {\gra'}^{i,h}_{j,k}=0$
if $l \leq m^+$ and $k \leq m_j^+$. \end{enumerate} In this way we guarantee that the projection of $\Psi_{\grD'} (s') $ onto $\wt Z$ is equal to $\bigl( 0\; \Phi \bigr)$. To define the remaining part of the new data we do not give details on the indices, but we explain how to construct it. It is clear that we can choose ${\gra'}^{i,h}_*$ for the remaining indices $*$ in such a way that the projection of
$\Psi_{\grD'} (s')\bigr|_{ \mC^{m_i}\otimes V_i'}$ on $\mC^{m_i} \otimes T_i$ is equal to $\Id \otimes a'_i$. Finally we observe that a path $\grb$ from $V_j$ to $V_i$ with $j\neq i$ has to go through a summand of $T_i$, so there exists a path $\gra$ such that $\grb(s)= b_i \comp \gra(s)$. Now we use the previous lemma to change $\gra$ into a $\gra'$ such that $\grb(s) = b_i \comp \gra'(s')$. More generally if $\grb$ is an element of the path algebra of type $(j,i)$ with $j\neq i$ then there exists an element of the $b$-path algebra $\gra'$ such that $\grb(s) = b_i\comp \gra'(s')$. In this way we define the elements of the $b$-path algebra connecting summands of $Y$ and summands of $\mC^{m_i}\otimes T_i$.
In the case $m_i<0$ we proceed in a similar way: we choose $\grD$ $m$-good and we have $$ Y = \mC^{-m_i} \otimes V_i \oplus \wt Y, \quad Y'= \mC^{-m_i} \otimes T_i \oplus \wt Y, \quad Z'= \mC^{-m_i} \otimes V'_i \oplus Z. $$ As in the previous case we can find a new data $\grD'$ such that: \begin{align*} [\Psi_{\grD}(s)] _{ Z} ^{\mC^{-m_i}\otimes V_i \oplus \wt Y} &=
\begin{pmatrix}
\pi \comp ( \Id \otimes a_i ) &
\Phi
\end{pmatrix},\\
[\Psi_{\grD'}(s')] _{\mC^{-m_i}\otimes V_i' \oplus Z} ^{\mC^{-m_i}\otimes T_i
\oplus \wt Y} & =
\begin{pmatrix}
\Id \otimes b'_i & 0 \\
\pi & \Phi
\end{pmatrix}. \end{align*} Now to conclude that $\Psi_{\grD'}(s')$ is an isomorphism if $\Psi_{\grD}(s)$ is, we need to know that $b'_i$ is an epimorphism, and this is not guaranteed by $(s,s') \in Z^{\grl}_i(d,v)$. But if $\grl_i \neq 0$ then, since $b_i'a_i'= -\grl_i$, we have that $b_i'$ is surjective. \end{proof}
\begin{dfn}
Let $p$ (resp. $p'$) be the projection of $Z_i^{\grl}(d,v)$ on $\grL_{\grl}(d,v) \subset S(d,v)$ (resp. $\grL_{\grl'}(d,v') \subset S(d,v')$). Suppose that $m_i > 0$ or $\grl_i \neq 0$; then we define $$ Z_i^{m,\grl}= p^{-1}\bigl( \grL_{m,\grl}(d,v) \bigr) = {p'}^{-1}\bigl( \grL_{m',\grl'}(d,v') \bigr). $$ We define also $$ G_{i,v} = \prod _{j \neq i} GL(V_j) \times GL(V_i) \times GL(V_i'). $$ Observe that there are natural projections from $G_{i,v}$ to $G_v$ and $G_{v'}$, therefore there are natural actions of $G_{i,v}$ on $S_i(d,v)$, $S_i(d,v')$. Observe that there is a natural action of $G_{i,v}$ on $Z_i^{\grl}$ and $Z_i^{m,\grl}$ such that the projections $p$, $p'$ are equivariant. \end{dfn}
\begin{lem} \label{lem:abmonoepi}
Let $s \in \grL_{m,\grl}(d,v)$. Then \begin{enumerate}
\item if $\grl_i \neq 0$ then $b_i$ is epi and $ a_i$ is mono,
\item if $m_i>0$ then $b_i$ is epi,
\item if $m_i<0$ then $a_i$ is mono. \end{enumerate} \end{lem} \begin{proof}
If $\grl_i \neq 0$ then the result is clear by $b_ia_i = \grl_i$. Suppose now that $\grl_i = 0 $ and $m_i >0$. Let $U_i = \Im b_i$ and let $V_i = U_i \oplus W_i$. Define now a one parameter subgroup $g(t)$ of $G_v$ in the following way: $$ [g_i(t)]^{U_i \oplus W_i}_{U_i\oplus W_i} = \begin{pmatrix} 1 & 0 \\ 0 & t^{-1} \end{pmatrix} \mand g_j \coinc 1 \mfor j \neq i $$ Since $\Im b_i \subset U_i$ the limit $\lim_{t\to 0 } g(t) \cdot s = s_{0}$ exists. Let now $n>0$ and let $f$ be a $\chi^n$-covariant function on $S$ such that $f(s)\neq 0$. Then $$ f(s_0) = \lim_{t \to 0} f(g(t) \cdot s) = \lim_{t \to 0} \det(g_i(t))^{nm_i} f(s) = \lim_{t \to 0} t^{-nm_i\dim W_i} f(s) $$ So we must have $\dim W_i =0$. The proof of the third case is completely similar to this one. \end{proof}
\begin{lem}[see also Lusztig \cite{Lu:Q4}]
If $m_i>0$ or $\grl_i \neq 0 $ then \begin{enumerate}
\item $p: Z_i^{m,\grl}(d,v) \lra \grL_{m,\grl}(d,v)$ is a principal
$GL(V_i')$ bundle,
\item $p': Z_i^{m,\grl}(d,v) \lra \grL_{m',\grl'}(d,v')$ is a principal
$GL(V_i)$ bundle. \end{enumerate} \end{lem} \begin{proof} Lusztig's proof extends to this case without changes. Let us prove for example 1). We have to prove: $i.$ that the action on the fiber is free, $ii.$ that it is transitive. First of all we observe that by the previous lemma if $s \in \grL_{m,\grl}$ then $b_i(s)$ is epi. In particular there exists $a_i' : V_i' \lra T_i$ such that sequence \eqref{eq:seqes:defsi} is exact, and clearly $a_i'$ is uniquely determined up to the action of $GL(V_i')$; moreover this action is free. So $i.$ and $ii.$ reduce to the following fact: if $s \in \grL_{m,\grl}$ and $a_i'$ is such that sequence \eqref{eq:seqes:defsi} is exact, then there exists a unique $b_i'$ such that $a_i'b_i'=a_ib_i -\grl_i$. Since $a_i'$ is mono the uniqueness is clear. To prove the existence we observe that it is equivalent to $\Im a_i' \supset \Im (a_ib_i - \grl_i)$. But the last statement is clear since we have $\Im a_i' = \ker b_i$ and $ b_i(a_ib_i - \grl_i) = 0$. \end{proof}
\begin{prp} \label{prp:isoMsi}
If $m_i>0$ or $\grl_i \neq 0$ then the projections $p$, $p'$ induce algebraic isomorphisms $\bar p$ , $\bar p'$: $$ \begin{CD} \grL_{m,\grl}(d,v) /\!/ G_v @<{\circa}<{\bar p}< Z_i^{m,\grl}(d,v) /\!/ G_{i,v} @>{\circa}>{\bar p '}> \grL_{m',\grl'}(d,v') /\!/ G_{v'} \end{CD} $$ \end{prp} \begin{proof} This proposition is a straightforward consequence of the previous lemma and the following general fact (see for example \cite{GIT} Proposition 0.2): let $G$ be an algebraic group over $\mC$ and let $X$, $Y$ be two irreducible algebraic varieties over $\mC$; if $G$ acts on $X$ and $\grf:X \lra Y$ is such that for all $y \in Y$ the fiber $X_y$ contains exactly one $G$-orbit then $\grf $ is a categorical quotient. If we apply this fact to the projection $p$ (resp. $p'$) and to the group $GL(V_i')$ (resp. $GL(V_i)$) we obtain the required result. \end{proof}
We can use this proposition to define the action of the generators of the Weyl group. \begin{dfn} Let $i,\grl,m,d,v,\grl',m',v'$ be as above, and suppose $d_j \geq 0$, $ v_j,v_j' \geq 0 $ for all $j$. Then we define an isomorphism of algebraic varieties $$ \Phi^{d,v}_{s_i,\grl,m}\colon M_{m,\grl}(d,v) \lra M_{m',\grl'}(d,v') $$ in the following way: \begin{enumerate}
\item if $m_i>0$ or $\grl_i \neq 0$
then we set $\Phi^{d,v}_{s_i,\grl,m} = \bar p' {\bar p}^{-1}$,
\item if $m_i<0$ then we exchange the role of $v,v'$ in the previous construction: more
precisely we observe that $m_i' >0$ so we can define
$\Phi^{d,v'}_{s_i,\grl',m'}\colon M_{m',\grl'}(d,v') \lra M_{m,\grl}(d,v)$
and we define $\Phi^{d,v}_{s_i,\grl,m} = \bigl(\Phi^{d,v'}_{s_i,\grl',m'}\bigr)^{-1}$. \end{enumerate} \end{dfn}
\begin{oss}\label{oss:defisosi} To see that $\Phi^{d,v}_{s_i,\grl,m}$ is uniquely defined we have to verify that if $\grl_i \neq 0$ and $m_i<0$ the two definitions above coincide. This fact reduces easily to the following remark: if $\grl_i \neq 0$ then $$ (s,s') \in Z_i^{\grl}(d,v) \iff (s',s) \in Z_i^{\grl'}(d,v'). $$ Let us prove, for example, the $\then$ part. Since $a_i b_i = a_i'b_i' + \grl_i = a_i'b_i' - \grl_i ' $ the only thing we have to verify is that the sequence $$ \begin{CD} 0 @>>> V_i @>{a_i}>> T_i @>{b_i'}>> V_i' @>>> 0 \end{CD} $$ is exact. The surjectivity of $b_i'$ and the injectivity of $a_i$ are a consequence of $\grl_i \neq 0$. Since $\dim T_i = \dim V_i + \dim V_i'$ we only need to prove that $b_i'a_i =0$. Observe that $b_i'a_i =0$ if and only if $ a_i'b_i'a_i =0$, since $a_i'$ is also injective. Finally $ a_i'b_i'a_i = (a_ib_i-\grl_i) a_i = 0$. \end{oss}
\subsection{Preliminaries} We saw how to define $$ \Phi^{d,v}_{s_i,m,\grl}\colon M_{m,\grl} \bigl(d,v\bigr)\lra M_{s_i(m),s_i(\grl)}\bigl( s_i(d,v) \bigr) $$ in the case that $(\grl_i,m_i ) \neq 0$ and $d,v,s_iv \geq 0$. To define an action of the Weyl group we now have to guarantee that the Coxeter relations hold. We will prove these relations in the next paragraph. Before doing so we observe that we must impose some conditions on $m,\grl$ so that we will be able to define $\Phi_{s_i,\grs m,\grs\grl}^{\grs(d,v)}$ for any element $\grs \in W$: this condition will be $(m,\grl) \in \calG_v$ (\ref{dfn:22}). We have also to say something about the case $d_i <0$ or $v_i<0$ for some $i \in I$.
If $d_i <0$ for some $i$ then $M_{m,\grl}\bigl( \grs(d,v) \bigr)= \vuoto $ for all $\grs,m,\grl$ by the very definition, so there is nothing to define.
The second trivial case is $d=v=0$. Indeed in this case we have $M_{\grs m,\grs \grl}\bigl(\grs(d,v)\bigr) = \{ 0 \}$ so the definition is trivial.
The other two cases are treated in the two lemmas below.
In the following we fix $d$ such that $d_i \geq 0 $ for all $i$. It will be convenient to define an affine action of $W$ on $Q$ by $\grs \cdot v = \grs(v-d) +d$.
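In coordinates, using the explicit formulas of the previous section ($(s_i \cdot v)_i = d_i - v_i + \sum_{j\neq i} a_{ij} v_j$ and $(s_i \cdot v)_j = v_j$ for $j \neq i$), one can check directly that this affine action is involutive on the generators:

```latex
% The affine action v -> s_i . v = s_i(v-d)+d is an involution for each i:
\begin{align*}
  \bigl(s_i \cdot (s_i \cdot v)\bigr)_i
    &= d_i - (s_i \cdot v)_i + \sum_{j \neq i} a_{ij} (s_i \cdot v)_j \\
    &= d_i - \Bigl( d_i - v_i + \sum_{j \neq i} a_{ij} v_j \Bigr)
       + \sum_{j \neq i} a_{ij} v_j = v_i ,
\end{align*}
```

while the coordinates with $j \neq i$ are unchanged; hence $s_i \cdot (s_i \cdot v) = v$.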
\begin{lem} Let $d\geq 0 $ and $(m,\grl) \in \calG_v$. If there exists $\grs$ such that $\grs \cdot v \not \geq 0$ then $M_{m,\grl}(d,v) = \vuoto$. \end{lem} \begin{proof} Suppose that $\grs$ is an element of minimal length such that $\grs \cdot v \not \geq 0$ and let $l = \ell(\grs)$. We prove the lemma by induction on $l$. The case $l=0$ is trivial.
\noindent \emph{Initial step: $l=1$.} If $s_i \cdot v \not \geq 0$ then we have $0 \leq d_i + \sum a_{ij} v_j < v_i$. Hence $\dim T_i < \dim V_i $, $u = ( 0,\dots,\overset{i}{1},0,\dots) \in \tilde U_v$ and $(\grl_i,m_i) \neq 0$. So $M_{m,\grl}(d,v) = \vuoto$ by lemma \ref{lem:abmonoepi}.
\noindent \emph{Inductive step: if $l\geq2$ then $l-1 \then l$.} Let $\grs = \tau s_i$ with $\ell(\tau) = l-1$ and $v' = s_i \cdot v$, $\grl'= s_i \grl$, $m ' = s_i m$. By induction $M_{m',\grl'}(d,v') = \vuoto$ and, since $l \geq 2 $, $v' \geq 0$. If $(m_i,\grl_i) \neq 0$ then we can apply Proposition \ref{prp:isoMsi} and we obtain $M_{m,\grl}(d,v)\isocan M_{m',\grl'}(d,v') = \vuoto$. If $(m_i,\grl_i)=0$ then $u = ( 0,\dots,\overset{i}{1},0,\dots) \notin \tilde U_v$, hence $v_i =0$. Moreover $\grl ' = \grl$ and $m' = m$ so $(m_i',\grl_i')=0$ and $u = ( 0,\dots,\overset{i}{1},0,\dots) \notin \tilde U_{v'}$. Hence $v_i'=0$, so $v'=v$ and $\tau \cdot v \not \geq 0$ against the minimality of $\grs$. \end{proof}
\begin{lem}\label{lem:sisipuodefinire}
Let $(I,H)$ be connected, $(m,\grl ) \in \calG_v$ and suppose $d\geq 0$ and $\grs \cdot v \geq 0$ for all $\grs \in W$. If there exists $i \in I, \grs \in W$ such that $\grs(m,\grl) = ( m',\grl')$ and $(m_i',\grl_i')=0$ then $d=v=0$. \end{lem} \begin{proof} Without loss of generality we can assume $\grs =1$.
\noindent \emph{First step}: $v_i =0$. This is clear since otherwise $u = ( 0,\dots,\overset{i}{1},0,\dots) \in \tilde U_v$.
\noindent \emph{Second step}: $d_i =0$ and $v_j=0$ for all $j$ such that $a_{ij} \neq 0$. Let $v'= s_i\cdot v$ and observe that $s_i\grl =\grl$ and $s_im=m$. Then as in the first step we have $0 =v_i' = d_i + \sum_{j}a_{ij}v_j$, from which the claim follows.
Let now $ W' = \bra \{s_j \st a_{ij} \neq 0 \mand j\neq i\} \ket $. If $(d,v) \neq 0$ then there exists $j \in I$ and $\grs \in W'$ such that $a_{ij} \neq 0$ and $$ n = d_j + \sum_{h\in I} a_{jh}\Tilde v _h >0. $$ where $\Tilde v=\grs\cdot v$. Since $(\grs \grl)_i = \grl_i = 0 = m_i = (\grs m)_i$ we can assume $\grs = 1$. Let now $v' = s_is_j \cdot v$, $ \grl' =s_is_j \grl$ and $m ' = s_is_j m$, we have: \begin{align*}
v'_i &= a_{ij}n & \grl'_i&= -a_{ij}\grl_j & m'_i&= -a_{ij}m_j \\
v'_j &= n & \grl'_j&= (a_{ij}^2-1) \grl_j & m'_j&= (a_{ij}^2-1)m_j . \end{align*} Hence $u= (0,\dots,\overset{j}{a_{ij}},0,\dots, \overset{i}{a_{ij}^2-1},0,\dots)\in \tilde U_{v'}$ and $\bra u \cech ,\grl' \ket = \bra u\cech, m' \ket =0$ against $(m,\grl) \in \calG_v$. \end{proof}
\begin{oss} The analogous lemmas in the case $\calG= \{(m,\grl,v) \st (m,\grl) \notin H^{\infty} \}$ are simpler. \end{oss}
\subsection{Relations} In this section we define an isomorphism of algebraic varieties $$ \Phi_{\grs,m,\grl}^{d,v} \colon M_{m,\grl}\bigl(d,v\bigr) \lra M_{\grs m,\grs\grl}\bigl(\grs(d,v)\bigr) $$ in the case $(m,\grl) \in \calG _v$ or $(m,\grl) \notin H^{\infty}$. In the case $d \not \geq 0$, or in the case in which there exists $\grs \in W$ such that $\grs \cdot v \not \geq 0$, or in the case $d=v=0$, we have seen in the previous section that there is nothing to define or that the definition is trivial. In the remaining cases we observe that for all $\tau, i$ we have $(\tau(m)_i , \tau(\grl)_i) \neq 0$ by lemma \ref{lem:sisipuodefinire}. Hence we can define $\Phi_{\grs,m,\grl}^{d,v}$ by induction on $\ell(\grs)$ by the formula \begin{equation} \label{eq:defphisigma} \Phi_{\grs,m,\grl}^{d,v} = \Phi_{s_i,\tau m,\tau \grl}^{\tau(d,v)} \comp \Phi_{\tau , m,\grl}^{d,v}. \end{equation} Of course we have to prove that this definition is well posed by checking the Coxeter relations: $$
s_i^2 = \Id, \quad
s_is_j=s_js_i \;\mif a_{ij}=0 \; \mand \;
s_is_js_i = s_js_is_j\;\mif a_{ij}=1 $$ which in our situation take the following form: \begin{subequations} \begin{gather} \Phi_{s_i,s_i\grl,s_im}^{s_i(d,v)} \comp \Phi_{s_i,\grl,m}^{d,v} = \Id \label{eq:cox1}\\ \Phi_{s_i,s_j\grl,s_jm}^{s_j(d,v)} \comp \Phi_{s_j,\grl,m}^{d,v} = \Phi_{s_j,s_i\grl,s_im}^{s_i(d,v)} \comp \Phi_{s_i,\grl,m}^{d,v} \label{eq:cox2}\\ \Phi_{s_i,s_js_i\grl,s_js_im}^{s_js_i(d,v)} \comp \Phi_{s_j,s_i\grl,s_im}^{s_i(d,v)} \comp \Phi_{s_i,\grl,m}^{d,v} = \Phi_{s_j,s_is_j\grl,s_is_jm}^{s_is_j(d,v)} \comp \Phi_{s_i,s_j\grl,s_jm}^{s_j(d,v)} \comp \Phi_{s_j,\grl,m}^{d,v}. \label{eq:cox3} \end{gather} \end{subequations} The first of the three equations is clear by the very definition and remark \ref{oss:defisosi}. The second equation is trivial. We need to prove the third equation. We will need the following two simple lemmas of linear algebra, whose proofs are trivial.
\begin{lem}\label{lem:25} Let $V,W,X,Y,Z$ be finite dimensional vector spaces and $\gra,\grb,\grg,\grd,\gre,\grf$ linear maps between them as in the diagrams below. The diagram $$ \begin{CD} 0 @>>> V @>{\begin{pmatrix} \gra \\ \grb \\ \grg \end{pmatrix}}>> W \oplus X \oplus Y @>{\begin{pmatrix} \grd & 0 & -1 \\ 0 & \gre & \grf \end{pmatrix}}>> Y \oplus Z @>>> 0 \end{CD} $$ is exact if and only if the diagram $$ \begin{CD} 0 @>>> V @>{\begin{pmatrix} \gra \\ \grb \end{pmatrix}}>> W \oplus X @>{\begin{pmatrix} \grf \grd & \gre \end{pmatrix}}>> Z @>>> 0 \end{CD} $$ is exact and $\grg = \grd\gra$. \end{lem}
\begin{lem}\label{lem:26} Let $U,V,W,X,Y,Z$ be finite dimensional vector spaces and $\gra,\grb,\grg,\grd,\gre,\grf,\psi,\rho,\grs$ linear maps between them as in the diagrams below such that $\psi \oplus \rho : W \oplus X \lra Z$ is an epimorphism. Then the diagram $$ \begin{CD} 0 \lra U @>{\begin{pmatrix} \gra \\ \grb \\ \grg \end{pmatrix}}>> V \oplus W \oplus X @>{\begin{pmatrix} \grd & 0 & 1 \\ \gre & \grf & 0 \\ 0 & \psi & \rho \end{pmatrix}}>> X \oplus Y \oplus Z @>{\begin{pmatrix}\rho & \grs & -1 \end{pmatrix}}>> Z \lra 0 \end{CD} $$ is exact if and only if $\grg = - \grd \gra$ , $\psi=\grs\phi$ , $\rho\grd + \grs\gre =0$ and the diagram $$ \begin{CD} 0 @>>> U @>{\begin{pmatrix} \gra \\ \grb \end{pmatrix}}>> V \oplus W @>{\begin{pmatrix} \gre & \grf \end{pmatrix}}>> Y @>>> 0 \end{CD} $$ is exact. \end{lem}
We now fix $i,j$ such that $a_{ij}=1$ and verify \eqref{eq:cox3}. Let \begin{align*}
\grl ' & = s_i \grl & m' &=s_i m & v' & =s_i v \\
\grlsec & = s_j \grl ' & \msec &=s_j m' & \vsec & =s_j v' \\
\grlter & = s_i \grlsec & \mter &=s_i \msec & \vter &=s_i \vsec \\
\Tilde {\grl} & = s_j \grl & \Tilde m &=s_j m & \Tilde v & = s_j v \\
{\Tilde {\Tilde {\grl}}} & = s_i {\Tilde {\grl}}
& {\Tilde {\Tilde m}} &=s_i {\Tilde m} & {\Tilde {\Tilde v}} & = s_i {\Tilde v } \end{align*}
First of all we observe that since relation \eqref{eq:cox1} holds we can assume that: \begin{enumerate}
\item $\grl_i \neq 0$ or $m_i > 0$ and $\grl_j \neq 0 $ or $m_j >0$,
\item $\grl'_j \neq 0$ or $m'_j > 0$ and $\Tilde{\grl} _i \neq 0 $ or $\Tilde m _i >0$,
\item $\grlsec_i \neq 0$ or $\msec_i > 0$ and
$\Tilde{ \Tilde{\grl}}_j \neq 0 $ or $\Tilde{\Tilde m}_j >0$. \end{enumerate} Define \begin{align*} Z_{iji} & = \{ (\ster,s) \in \grL_{\mter,\grlter} (d,\vter) \times \grL_{m,\grl}(d,v) \st \exists \ssec \in S(d,\vsec), \\ & \;\qquad \mand s' \in S(d,v') \msuchthat (\ster,\ssec) \in Z_i^{\msec,\grlsec}(d,\vsec), \\ & \;\qquad (\ssec,s')\in Z_j^{m',\grl'}(d,v') \mand (s',s) \in Z_i^{m,\grl}(d,v) \} \\ Z_{jij} & = \{ (\ster,s) \in \grL_{\mter,\grlter} (d,\vter) \times \grL_{m,\grl}(d,v) \st \exists \Tilde{\Tilde{s}} \in S(d,\Tilde{\Tilde{v}}), \\ & \;\qquad \mand \Tilde s \in S(d,\Tilde v ) \msuchthat (\ster,\Tilde{\Tilde s}) \in Z_j^{\Tilde{\Tilde m},\Tilde{\Tilde{\grl} } }(d,\Tilde{\Tilde v}), \\ & \;\qquad (\Tilde{\Tilde s},\Tilde s )\in Z_i^{\Tilde m,\Tilde{\grl}}(d,\Tilde v ) \mand (\Tilde s,s) \in Z_j^{m,\grl}(d,v) \} \end{align*} Observe that $(\ster,s) \in Z_{iji} \iff p^{d,\vter}_{\mter,\grlter}(\ster) = \Phi_{s_i}\Phi_{s_j}\Phi_{s_i}(p^{d,v}_{m,\grl}(s))$ and that $(\ster,s) \in Z_{jij} \iff p^{d,\vter}_{\mter,\grlter}(\ster) = \Phi_{s_j}\Phi_{s_i}\Phi_{s_j}(p^{d,v}_{m,\grl}(s))$. So relation \eqref{eq:cox3} is equivalent to $Z_{iji}=Z_{jij}$.
Let now $R_i = D_i \oplus \bigoplus_{h\st h_1=i, h_0 \neq j} V_{h_0} $, $R_j = D_j \oplus \bigoplus_{h\st h_1=j, h_0 \neq i} V_{h_0} $ and observe that $T_i = R_i \oplus V_j$ and $T_j = R_j \oplus V_i$. Let $k$ be the only element of $H$ such that $k_0=j$ and $k_1 = i$. Let $\gre = \gre(k)$. Define also $A=A(s)= B_k(s)$, $B=B(s)=B_{\bar k}(s)$ and for $l =i ,j $ and $\{l',l\}=\{i,j\}$ set $c_l=c_l(s)= \pi_{R_l}^{R_l\oplus V_{l'}} a_l(s)$ and
$d_l=d_l(s)= b_l(s)\bigr|_{R_l} $.
Let now $(s,s\ter) \in \grL_{\grl}(d,v) \times \grL_{\grl\ter}(d,v\ter)$ and set $A^* =A(s^*)$, $ B^* =B(s^*)$, $ c_l^* = c_l(s^*)$ and $ d_l^* = d_l(s^*)$ for $l \in \{i,j\}$ and $* \in \{\;, \ter \}$.
If we apply Lemmas \ref{lem:25} and \ref{lem:26} to our situation we obtain the following result: $(s,\ster) \in Z_{iji} $ if and only if there exist vector spaces $V_i',V_j',V_i\sec ,V_j\sec $ and linear maps $A',B',c_i',d_i',c_j',d_j', A\sec , B\sec , c_i\sec , d_i\sec , c_j\sec , d_j\sec$ such that: \begin{enumerate}
\item $\dim V_l^* =v_l^*$ for $l \in \{i,j\} $ and $* \in \{ ',\sec\}$,
\item for each $*\in \{\, ' , \,\sec\}$ and
$l \in \{ i, j\}$
$A^* \in \Hom (V^*_i,V^*_j)$, $B^* \in \Hom(V^*_j , V^* _i)$,
$c^*_l \in \Hom (V^*_l,R_l)$ and
$d^*_l \in \Hom (R_l,V^*_l)$,
\item $V_j \ter = V_j \sec$, $c_j \ter = c_j \sec$, $d_j \ter = d_j \sec$
and
\begin{align*}
c_i\ter d_i \ter &= c_i d_i -\grl_i -\grl_j & c_i\ter B\ter& =c_i'B\sec \\
A\ter d_i\ter &=A\sec d_i'& \gre A\ter B\ter&= \gre A\sec B\sec - \grl_j
\end{align*}
\item $V_i \sec = V_i '$, $c_i \sec = c_i '$, $d_i \sec = d_i'$
and
\begin{align*}
c_j\sec d_j \sec &= c_j d_j -\grl_i -\grl_j & c_j\sec A\sec & =c_j A' \\
B\sec d_j\sec &= B' d_j & \gre A \sec B \sec&= \gre A' B' + \grl_i + \grl_j
\end{align*}
\item $V_j' = V_j $, $c_j' = c_j$, $d_j ' = d_j$
and
\begin{align*}
c_i' d_i ' &= c_i d_i -\grl_i & c_i' B'& =c_i B \\
A' d_i' &=A d_i & \gre A' B'&= \gre A B - \grl_i
\end{align*}
\item $ \gre c_i' B\sec A\ter + c_i' d_i' c_i \ter =0$ and $\gre A' B\sec = d_j c_i\ter$,
\item the following diagrams are exact $$ \begin{CD} 0@>>> V_i\ter @>{\begin{pmatrix} c_i \ter \\ c_j\sec A\ter \end{pmatrix}}>> R_i \oplus R_j @>{\begin{pmatrix} Ad_i& d_j \end{pmatrix}}>> V_j @>>> 0 \\ 0@>>> V_j\sec @>{\begin{pmatrix} c_j \sec \\ c_i'B\sec \end{pmatrix}}>> R_j \oplus R_i @>{\begin{pmatrix} Bd_j& d_i \end{pmatrix}}>> V_i @>>> 0 \\ 0@>>> V_i' @>{\begin{pmatrix} c_i' \\ A' \end{pmatrix}}>> R_i \oplus V_j @>{\begin{pmatrix} d_i & \gre B \end{pmatrix}}>> V_i @>>> 0 \end{CD} $$ \end{enumerate} \begin{oss} The first condition in point 6) is equivalent to $\gre B\sec A\ter = d_i' c_i \ter$. Indeed this condition is certainly sufficient. To prove the necessity observe that by the injectivity of $a_i' = (c_i '\;A')^t$ it is enough to prove $ \gre c_i' B\sec A\ter + c_i' d_i' c_i \ter =0$ and $ \gre A' B\sec A\ter + A' d_i' c_i \ter =0$. The first equation is the first condition in point 6) and the second one is a consequence of $ \gre A'B\sec =d_j c_i \ter $, $A'd_i'=Ad_i$ and the exactness of the first sequence. \end{oss} \begin{oss} The condition $(s,s\ter) \in Z_{jij}$ can be expressed in a similar way. In the previous conditions we only have to exchange $i$ with $j$ and $\gre$ with $-\gre$. \end{oss}
We will now prove $Z_{iji} \subset Z_{jij}$. To do it we suppose that $A', \dots, d_j \sec$ are given as above and we construct $\Tilde A ,\Tilde B, \Tilde c_i , \Tilde d_i ,\Tilde c_j, \Tilde d_j, \Tilde{\Tilde {A}} , \Tilde{\Tilde { B}} ,\Tilde{\Tilde { c}}_i ,\Tilde{\Tilde { d}}_i , \Tilde{\Tilde { c}}_j ,\Tilde{\Tilde { d}} _j$ such that they satisfy the conditions for $(s,s\ter) \in Z_{jij}$.
\noindent {First step: construction of $\Tilde A, \Tilde B, \Tilde c_i, \Tilde c_j , \Tilde d_i, \Tilde d_j$}. Choose $\Tilde s$ such that $(\Tilde s, s ) \in Z_j^{m,\grl}(d,v)$ and define $\Tilde A = A(\Tilde s)$, $\Tilde B = B( \Tilde s)$, $\Tilde c_l = c_l(\Tilde s)$ and $\Tilde d_l = d_l(\Tilde s)$ for $l \in \{i,j\}$ .
Now I claim that there exist unique $\Tilde {\Tilde A} : V_i\ter \lra \Tilde V_j$ and $\Tilde {\Tilde B} : \Tilde V_j \lra V_i\ter$ such that: $$ \begin{cases}
\Tilde c_j \Tilde{\Tilde A} = c_j\sec A\ter \\
\Tilde B \Tilde{\Tilde A} = - \gre d_i c_i \ter \end{cases} \;\mand\; \begin{cases}
\Tilde{\Tilde A} \Tilde{\Tilde B} = \Tilde A \Tilde B - \gre \grl_i - \gre \grl_j \\
c_i \ter \Tilde{\Tilde B} = c_i \Tilde B \end{cases} $$
\noindent \emph{Uniqueness of $\Tilde{\Tilde A}$}: since the map $\Tilde a_j = (\Tilde c_j \;\; - \gre \Tilde B )^t$ is injective, uniqueness is clear.
\noindent \emph{Existence of $\Tilde{\Tilde A}$}: to prove the existence of $\Tilde{\Tilde A}$ it is enough to prove: $$ \Im \begin{pmatrix} c_j \sec A \ter \\ - \gre d_i c_i \ter \end{pmatrix} \subset \Im \begin{pmatrix} \Tilde c_j \\ \Tilde B \end{pmatrix} = \ker \begin{pmatrix} d_j & -\gre A \end{pmatrix}. $$ So the claim follows from $d_jc_j\sec A\ter + A d_i c_i \ter =0$.
Let now $\Tilde {\Tilde a} _i = (c_i \ter \;\; \Tilde {\Tilde A })^t$. I claim that $\Tilde {\Tilde a} _i $ is injective and that $\Im \Tilde {\Tilde a} _i = \ker ( d_i \;\; \gre \Tilde B)= \ker \Tilde b _i$. First of all observe that since $\Tilde m_i >0$ or $ \Tilde{\grl}_i \neq 0 $, $\Tilde b_i$ is surjective. Observe also that $$ \begin{pmatrix} \Tilde c_j & 0 \\ 0 & \Id_{V_i\ter} \end{pmatrix} \comp \begin{pmatrix} \Tilde {\Tilde A} \\ c_i \ter \end{pmatrix} = \begin{pmatrix} c_j \sec A\ter \\ c_i \ter \end{pmatrix}. $$ So $\Tilde {\Tilde a}_i$ is injective as claimed. Now since $\dim R_i + \dim \Tilde V_j = \dim V_i \ter + \dim V_i$ to prove the last part of the claim it is enough to check that $\Tilde b_i \Tilde{\Tilde a}_i=0$. Indeed $$ \Tilde b_i \Tilde{\Tilde a}_i = d_i c_i \ter +\gre \Tilde B \Tilde{\Tilde A} =0. $$ \noindent \emph{Uniqueness of $\Tilde{\Tilde B}$}: this is a consequence of $\Tilde {\Tilde a}_i$ being injective.
\noindent \emph{Existence of $\Tilde{\Tilde B}$}: As for the existence of $\Tilde{\Tilde A}$ this is equivalent to $$ \Im \begin{pmatrix} c_i \Tilde B \\ \Tilde A \Tilde B -\gre \grl_i - \gre \grl_j \end{pmatrix} \subset \Im \begin{pmatrix} c_i \ter \\ \Tilde{\Tilde A} \end{pmatrix} = \ker \begin{pmatrix} d_i & \gre \Tilde B \end{pmatrix}. $$ So the claim follows from $ \gre \Tilde B \Tilde A \Tilde B - \grl_i \Tilde B - \grl_j \Tilde B + d_i c_i \Tilde B =0$.
Finally we set \begin{align*}
\Tilde{\Tilde V}_i & = V_i \ter & \Tilde{\Tilde c}_i &=c_i \ter & \Tilde{\Tilde d}_i&=d_i\ter \\
\Tilde{\Tilde V}_j & =\Tilde V_j & \Tilde{\Tilde c}_j &=\Tilde c_j & \Tilde{\Tilde d}_j&=\Tilde d_j. \end{align*} The verification of all the conditions is now straightforward.
The inclusion $Z_{jij}\subset Z_{iji}$ can be proved similarly and equation \eqref{eq:azioneweyl} is clear by definition. So Proposition \ref{prp:azioneweyl} is proved.
\section{A representation of the Weyl group} In this section, following Nakajima \cite{Na1}, we show how to use the above action to construct an action of the Weyl group on the homology of quiver varieties. This action may be related to the one constructed by Slodowy in the case of flag varieties (\cite{Slodowy}, Ch.~4).
First we recall some general facts about the action of the Weyl group. Let $Z\cech = Q\cech \otimes _{\mZ} \mC$ and $Z= Q \otimes _{\mZ}P$. On $Z$, $Z\cech$ there is a natural action of $W$. \begin{lem} For all $ u\in Z\cech$ the set $Wu$ is discrete. \end{lem} \begin{lem} Consider the action of $W$ on $\mP(Z\cech)$. If $p \in \mP(Z\cech)$ then $$ \overline{Wp} \text{ is countable}. $$ \end{lem} If $p \in \mP(Z\cech)$ we define $H_p= \{x \in P\otimes _{\mZ}\mC \st \bra x , p \ket =0\}$ \begin{lem} If $p \in \mP(Z\cech)$ then $$ \overline{W H_p} = \bigcup_{q \in \overline{Wp}} H_q. $$ \end{lem}
We define $\calH = \overline{W\calH_v \cup \calH_U}$ and $\calR = \goZ - \calH$. By the previous lemmas $\calH$ is the union of a countable number of real codimension $3$ subspaces in $\goZ$ and in particular $\calR$ is simply connected. We also need the following definitions \begin{align*} K &= \{ u \in \mZ^I \st -\sum u_i\gra_i \text{ is dominant and supp}\, u \;
\text{ is connected } \} \\ P_0 &= \{ p \in P \st p \text{ is dominant and } \bra u \cech , p \ket \geq 2
\text{ for all } u \in K \}. \end{align*}
Now we choose $d,v$ such that $\bar d =\bar v$.
\begin{lem}\label{lem:lemma38} If $\bar d \in P_0$ then $\tilde {\grm}$ is surjective and is a locally trivial bundle over $\calR$. \end{lem} \begin{proof} By Proposition 10.5 and Corollary 10.6 in \cite{Na2} there exists a closed orbit $Gs$ in $\grL_0(d,v)$ with trivial stabilizer. Then by Proposition \ref{prp:KNM1} there exists $t \in Gs$ such that $\tilde {\grm}(t) = 0$ and by Lemmas \ref{lem:regmu} and \ref{regolaritamu} $d\tilde {\grm}_t$ is surjective. Now the surjectivity follows by homogeneity.
The local triviality over $\calR$ also follows from Lemmas \ref{lem:regmu} and \ref{regolaritamu}. \end{proof}
Now consider \begin{align*} R & = \{\grl \in Z \st (0,\grl) \notin \calR \}, \\ \grL(d,v) & = \{ (\grl,s) \in Z \times S \st s \in \grL_{\grl} \} , \\ M(d,v) &= \grL(d,v) /\!/ G_v \;\mand\; p : M(d,v) \lra Z \text{ the projection} \\ \goL(d,v) & = \{ (\zeta,s) \in \goZ \times S \st s \in \goL_{\zeta} \} , \\ \goM(d,v) &= \goL(d,v) / U(V) \;\mand\; \tilde p : \goM(d,v) \lra \goZ \text{ the projection} \end{align*} We have the following commutative diagram $$ \xymatrix{
(\grl,s) \ar@{}[r]|{\in} \ar[d] & M(d,v) \ar[r]^p \ar[d]^{\mi_M} & Z \ar[d]
& \grl \ar@{}[l]|{\ni} \ar[d]\\
(0,\grl,s) \ar@{}[r]|{\in} & \goM(d,v) \ar[r]^{\tilde p} & \goZ
& (0,\grl) \ar@{}[l]|{\ni} } $$ By Proposition \ref{prp:KNM2} the diagram is a pull back and by lemma \ref{lem:lemma38} $p$ and $\tilde p$ are locally trivial over $R$, $\calR$. We call $M_R = p^{-1}(R)$ and $\goM_{\calR}= \tilde p ^{-1} (\calR)$.
Now consider the complex $\calF = R p_* (\mZ_{M_{R}}) = \mi_M^{-1} R\tilde p _* (\mZ_{\goM_{\calR}}) $ which is a cohomologically locally constant complex. We now observe that $\pi_1(\calR)$ is trivial, so $ R\tilde p _* (\mZ_{\goM_{\calR}})$ is isomorphic to a cohomologically constant complex on $\calR$, and so is $\calF$ on $R$. In particular for any $x,y\in R$ we have a canonical isomorphism $$ \psi^i_{x,y} : H^i(\calF_x) \lra H^i(\calF_y). $$ Now observe that by Proposition \ref{prp:azioneweyl} there is an action of $W$ on $R$ and $M_R$, and that $p$ is equivariant with respect to this action. So we can define a $W$-action on $H^i(M_{0,\grl}(d,v),\mZ)$ by $$
\grs (c) = \psi^i_{\grs\grl,\grl}\comp H^i(\Phi^{d,v}_{\grs,0,\grl}) (c) $$ for any $\grs \in W$. To verify that this is an action we only have to verify that $$ \psi^i_{\grs\grl^2,\grl^1}\comp H^i(\Phi^{d,v}_{\grs,0,\grl^2}) \psi^i_{\grl^1,\grl^2}(c) =\psi^i_{\grs\grl^1,\grl^1}\comp H^i(\Phi^{d,v}_{\grs,0,\grl^1}) (c). $$ Since $R$ is connected and $H^i(M,\mZ)$ is discrete this is clear. So we have proved the following corollary. \begin{cor} If $d=v$ and $(0,\grl) \in \calR$ then there is an action of $W$ on $H^i(M_{0,\grl}(d,v),\mZ)$. \end{cor}
\begin{oss} If $m_+=(1,\dots,1)$ and $\grl =0$ it is easy to see that $d\grm_s$ is surjective for all $s \in \grL_{m_+,0}(d,v)$. Then by Lemma \ref{lem:lemma38} there is a canonical isomorphism $H_*(M_{m_+,0}(d,v)) \isocan H_*(M_{0,\grl}(d,v))$ if $(0,\grl) \in \calR$. So by Nakajima's Theorem (Theorem 10.2 \cite{Na2}) it is natural to make the following conjecture: \end{oss} \begin{con} Let $top = \frac{1}{2} \dim H_*(M_{0,\grl}(d,v))$ then $$ H^{top}\bigl(M_{0,\grl}(d,v), \mC \bigr) \isocan \bigl(L_d \bigr)_0 $$ where $\bigl(L_d \bigr)_0$ is the $0$-weight space of the irreducible representation of highest weight $\sum_i d_i \bar {\omega}_i$ of the Kac--Moody algebra associated to the quiver. \end{con}
\section{Reduction to the dominant case} \label{riduzioneM0}
As a consequence of Proposition \ref{prp:azioneweyl} we see that if $(m,\grl) \in \calG_v$ then there exist $\grs \in W$ and $v' = \grs \cdot v$ such that $d- v'$ is dominant and $M_{\grs m , \grs \grl}(d,v') \isocan M_{m,\grl}(d,v)$. We now generalize this result to arbitrary $\grl$.
On $Q$ we consider the following order: $v' \leq v$ if and only if $v_i' \leq v_i$ for all $i$.
We now consider the following construction: let $v' \leq v$ and fix an embedding $V'_i \incluso V_i$ and a complement $U_i$ of $V'_i$ in $V_i$, then we can define a map $\wt{\mj} \colon S(d,v') \lra S(d,v)$ through: \begin{equation}\label{defembeddingj} \wt{\mj}(B^{\prime},\grg^{\prime},\grd^{\prime})= \left( \begin{pmatrix} B^{\prime} & 0 \\ 0 & 0 \end{pmatrix} , \begin{pmatrix} \grg^{\prime} \\ 0 \end{pmatrix} , \begin{pmatrix} \grd^{\prime} & 0 \end{pmatrix}\right) \end{equation} where the matrices of the new triple represent the maps described through the decomposition $V_i=V'_i \oplus U_i$.
Suppose now that $(m_i,\grl_i)=0$ for all $i$ such that $v_i'\neq v_i$. Then it is easy to see that this map restricts to a map $\mj_r : \grL_{m,\grl}(d,v') \lra \grL_{m,\grl}(d,v)$ and so induces a map $\mj^{v'}_v=\mj \colon M_{0,\grl}(d,v') \lra M_{0,\grl}(d,v)$.
\begin{lem} \label{lemmaj} $\mj$ is a closed immersion. \end{lem} \begin{proof} We prove that the map $\mj^{\sharp} : \mC[\grL_{\grl}(d,v)]^{G(v)} \lra \mC[\grL_{\grl}(d,v')]^{G(v' )}$ is surjective. By Proposition \ref{generatoriinvarianti} this follows from the following two identities: $$ \Tr\left(\gra\left(\mj (s) \right) \right) = \Tr\left(\gra (s) \right) \; \mand \; \grb\left(\mj (s) \right) = \grb (s) $$ for each $B$-path $\gra$ and for each admissible path $\grb$. \end{proof}
\begin{lem} \label{lemmariduzioneM0} If $2 v_i > d_i + \sum_{j \in I} a_{ij}v_j$ and $v' = v -\gra_i$ then $\mj$ is an isomorphism of algebraic varieties. \end{lem} \begin{proof}
It is enough to prove that $\mj$ is surjective. Let $s=(B,\grg,\grd) \in \grL_0(d,v)$ and consider the sequence (see \eqref{defTab} for the notation): $$ \begin{CD} T_i @>{b_i}>> V_i @>{a_i}>> T_i. \end{CD} $$ Since $b_ia_i =0 $ and $2 \dim V_i > \dim T_i$ we have that $b_i$ is not surjective or that $a_i$ is not injective.
Suppose that $b_i$ is not surjective; then up to the action of $G_v$ we can assume that $\Im b_i \subset V'_i$. Then, for $t \in \mC^*$ consider $g_t=(g_{j,t}) \in G_v$ with $$ g_{i,t} = \begin{pmatrix} \Id_{V'_i} & 0 \\ 0 & t^{-1} \end{pmatrix} \; \mand \; g_{j,t} = \Id_{V_j} \mfor j \neq i. $$ Then \begin{enumerate}
\item $g_{i,t} B_h = B_h $ if $h_1 =i$ and $g_{i,t} \grg_i =\grg_i$,
since $\Im B_h , \Im \grg_i \subset \Im b_i \subset V'_i$,
\item $\exists\, \lim_{t \to 0} B_h g_{i,t}^{-1} = B_h $ if
$h_0 =i$ and $ \grd_i g_{i,t}^{-1} =\grd_i$ \end{enumerate} So $\exists\, \lim_{t \to 0} g_t s = s^{\prime}$ and it is clear that $s^{\prime} \in \wt{\mj}(\grL_0(d,v'))$ and that $p_0(s)= p_0(s^{\prime})\in \Im \mj$.
If $b_i$ is surjective and $a_i$ is not injective the argument is similar. \end{proof}
\begin{prp}\label{prp:riduzioneM0}For all $\grl$ and for all $d\geq 0, v\geq 0$ there exists $v'$ and $\grs \in W$ such that $\bar d - \bar v ' $ is dominant and $$ M_{0 ,\grs \grl}(d,v') \isocan M_{0,\grl}(d,v). $$ \end{prp}
\begin{proof} We prove this proposition by induction on the order $\leq$ on $Q$.
\noindent{First step: $v=0$}. If $v=0$ we can take $v'=v$ and $\grs =1$.
\noindent{Inductive step.} If $d-v$ is not dominant then there exists $i$ such that $2v_i > d_i + \sum a_{ij}v_j$.
If $ \grl_i \neq 0 $ we observe that $s_i v= v' < v$ (that is $v' \leq v$ and $v'\neq v$) and that $M_{s_i m,s_i \grl}(d,v')\isocan M_{m,\grl}(d,v)$ and so we can apply the inductive hypothesis.
If $\grl_i = 0$ we apply the previous lemma and the inductive hypothesis. \end{proof}
\section{On normality and connectedness of quiver varieties in the finite type case}
In this section we restrict our attention to the case of quiver varieties of finite type and to the case $m=(1,\dots,1)$ and $\grl =0$ and we fix $d,v$. By Remark \ref{oss:riduzioneanonzero} we can assume without loss of generality that $v_i >0$ for all $i$. We would like to prove the following conjecture: \begin{con} $M_{m,0}(d,v)$ is connected and $M_{0,0}(d,v)$ is normal. \end{con}
\begin{oss} If $\tilde m = (m_1,\dots,m_n) \in\mN_+^I$ it is easy to see that $\grL_{\tilde m,0}= \grL_{m,0}$, so in particular $M_{m,0}(d,v)$ is smooth. Instead $M_{0,0}$ is a cone, so it is clearly connected. \end{oss}
By Proposition \ref{prp:riduzioneM0} it is enough to prove the theorem in the case $d-v$ dominant. Unfortunately I am able to prove the conjecture only in the case $d-v$ regular: $\bra d-v,\gra_i\ket >0$ for all $i$. To prove the conjecture in this case we will use the following stratification introduced by Lusztig in \cite{Lu:Q4}.
\begin{dfn} For any $s\in S$ and $i\in I$ let $$ V_i^+ = V_i^+(s) = \sum_{\gra \text{ a }B-\text{path}\st \gra_1=i} \Im (\gra(s)\grg_{\gra_0}) $$ If $v'=(v_1',\dots,v_n') \in \mN^n$ we define $$ \grL^{v'} = \{ s\in \grL_0(d,v) \st \dim V_i^+ (s) = v_i'\}. $$ \end{dfn}
Observe that $\grL^v = \grL_{m_+,0}(d,v)$. To prove our result we will use the following lemma of Lusztig.
\begin{lem}[Lusztig: \cite{Lu:Q4} Proposition 4.5 and Proposition 5.3] \label{lem:Lusztigdim}If $0\leq v'_i \leq v_i$ for each $i$ then $$ \dim \grL^{v'}(d,v) = \dim S - \sum_{i\in I} \dim gl(V_i) - \bra (v-v')\cech , d-v \ket - \tfrac{1}{2}
\bra (v-v')\cech , v-v' \ket $$ \end{lem}
Our result follows trivially from the following lemma.
\begin{lem} 1) If $d-v$ is dominant then $\grL_{0,0}(d,v)$ is a complete intersection.
\noindent 2) If $d-v$ is regular then $\grL_{0,0}$ is normal and
irreducible and $\grL_{m_+,0}(d,v)$ is connected. \end{lem} \begin{proof} Observe that $\grL_{0,0}(d,v) =\mu ^{-1}(0)$ so each irreducible component of $\grL_{0,0}(d,v)$ must have dimension at least $\dim S - \sum_i \dim gl(V_i) = \delta_V$.
Suppose now that $d-v$ is dominant. By Nakajima's theorem (\cite{Na2} Theorem 10.2) $M_{m_+,0}$ is not empty. Observe also that by Proposition \ref{prp:1.14} $\grL_{m_+,0}(d,v)$ is a smooth subset of $\grL_{0,0}(d,v)$ of dimension $\delta_V$.
It is well known that $\grL_{m_+,0} (d,v) = \grL^v$. Hence $$ \grL_{0,0}(d,v) - \grL_{m_+,0} (d,v) = \bigcup_{v'\leq v \mand v' \neq v} \grL^{v'}. $$ By the lemma above we have that if $v' \leq v$ and $v' \neq v$ then $\dim \grL^{v'} < \delta_V$. So $\grL_{m_+,0}(d,v)$ must be dense in $\grL_{0,0}(d,v)$ and $\grL_{0,0}(d,v)$ is a complete intersection. Moreover if $d-v$ is regular we have that $\dim \grL^{v'} < \delta_V - 1$, so the singular locus has codimension at least two and normality and irreducibility follow. Finally by our discussion it is clear that if $\grL_{m_+,0}(d,v)$ is disconnected then $\grL_{0,0}(d,v)$ is not irreducible. \end{proof}
\begin{oss} In the lemma we can substitute $\grL_{m_+,0}(d,v)$ with any other subset $Reg$ of regular points in $\grL(d,v)$. In this way it is indeed possible to improve the theorem a little, but Crawley-Boevey explained to me that this strategy cannot work in general because there are cases where $d-v$ is dominant and $\grL_{0,0}(d,v)$ is not normal. It should also be pointed out that Crawley-Boevey proved the connectedness in complete generality (\cite{CrBo}). He told me that he is also able to prove normality for a much bigger class of quiver varieties. \end{oss}
\end{document}
\begin{document}
\title{\textbf{Asymptotic behaviour methods for the Heat \\Equation. Convergence to the Gaussian}}
\author{{\Large Juan Luis V\'azquez}\\ [4pt]
Universidad Aut\'{o}noma de Madrid\\ }
\date{2018}
\maketitle
\begin{abstract} \noindent In this expository work we discuss the asymptotic behaviour of the solutions of the classical heat equation posed in the whole Euclidean space.
After an introductory review of the main facts on the existence and properties of solutions, we proceed with the proofs of convergence to the Gaussian fundamental solution, a result that holds for all integrable solutions, and represents in the PDE setting the Central Limit Theorem of probability. We present several methods of proof: first, the scaling method. Then several versions of the representation method. This is followed by the functional analysis approach that leads to the famous related equations, Fokker-Planck and Ornstein-Uhlenbeck. The analysis of this connection is also given in rather complete form here. Finally, we present the Boltzmann entropy method, coming from kinetic equations.\\
\qquad The different methods are interesting because of their possible extension to the study of asymptotic behaviour or stabilization for more general equations, linear or nonlinear. It all depends a lot on the particular features, and only one or some of the methods work in each case. Other settings of the Heat Equation are briefly discussed in Section 9 and a longer mention of results for different equations is done in Section 10. \end{abstract}
\
\
\noindent \it 2010 Mathematics Subject Classification\rm: 35K05, 35K08.
\noindent\it Keywords and phrases\rm: Heat equation, asymptotic behaviour, convergence to the Gaussian solution.
\tableofcontents \normalsize
\section{The Cauchy Problem in $\mathbb{R}^N$ for the Heat Equation}\label{sec.intro}
The classical heat equation (HE) \ $\partial_t u = \Delta u$, is one of the most important objects in the theory of partial differential equations, and it has great relevance in the applied sciences. It has been developed since the seminal work of J. Fourier, 1822, \cite{Fou1822}, to describe phenomena of heat transport and diffusion in many contexts. The great progress of the mathematical theory in these two centuries has had a strong influence not only on PDEs, but also on Probability, Functional Analysis, as well as Numerics. It now has well-established connections with other subjects like viscous fluids, differential geometry, image processing or finance. See more on motivation in standard textbooks, like \cite{EvansPDE, SalsaBk} or in the survey paper \cite{VazCIME}.
The heat equation enjoys a well developed theory that has many distinctive features. It can be solved in many settings, suitable for different interests, that lead to quite different results and use different tools. In this paper we will study the most typical setting, the Cauchy Problem posed in $\mathbb{R}^N$, $N\ge 1$: \begin{equation}\label{eq:CPHE}\tag{CPHE} \begin{cases} \begin{aligned} \partial_t u = \Delta u \qquad\quad\;\, &\text{in } \mathbb{R}^N\times(0,\infty) \\ u(x,0) = u_0(x) \quad &\text{in } \mathbb{R}^N, \end{aligned} \end{cases} \end{equation} with initial data $u_0 \in L^1(\mathbb{R}^N)$. The equation appears in the theory of Probability and Stochastic Processes as the PDE description of Brownian motion, and in that application the function $u(\cdot,t)$ denotes the probability density of the process at time $t$, hence it must be nonnegative and the total integral $\int u(x,t)\,dx$, often called the mass, must be one. Such restrictions on sign and integral are not needed in the analytical study, and the extra generality is convenient both for the theory and for a number of other applications.
The first question to be addressed by mathematicians is to find appropriate functional spaces to solve this problem and to prove that it is a well-posed problem in the sense of Hadamard's definition (existence, uniqueness and stability). In tackling such a problem we will be lucky enough to find a very explicit representation of the solution by means of the so-called ``fundamental solution'' procedure.
\noindent {\bf Definition.} We call {\sl fundamental solution of the HE in $\mathbb{R}^N$} the solution of the equation with initial data a ``Dirac delta'': \begin{equation}\label{eq:CPHE1}\tag{FS} \begin{cases} \begin{aligned} \partial_t U = \Delta U \qquad\quad\;\, &\text{in } \mathbb{R}^N\times(0,\infty) \\ U(x,0) = \delta_0(x) \quad\;\, &\text{in } \mathbb{R}^N. \end{aligned} \end{cases} \end{equation} In probabilistic terms, we start from a unit point mass distribution concentrated at the origin $x_0=0$, and we look for the way it expands with time according to the heat equation.
\noindent {\bf Exercise 1.} (i) Prove by the method of Fourier Transform that the fundamental solution is given by the formula \begin{equation}\label{Gaussiana}\tag{GF}
U(x,t) = (4\pi t)^{-\frac{N}{2}} e^{-\frac{|x|^2}{4t}}. \end{equation}
\noindent {\sl Sketch.} The Fourier transform of $U$ satisfies $\partial_t \widetilde U = -|\xi|^2\widetilde U$ with $\widetilde U(\xi,0)=1$, hence $\widetilde U(\xi,t)=e^{-|\xi|^2 t}$. Then, invert the Fourier transform.
(ii) Prove that $ U(x, t) \to \delta_0 (x) $ when $ t \to 0 $ in the sense of distributions, i.\,e., \[ \int_{\mathbb{R}^N}U(x,t)\phi(x)\,dx \; \to \; \langle\delta_0,\phi\rangle := \phi(0), \quad \text{for all } \phi \in C_c^{\infty}(\mathbb{R}^N), \quad \text{ as \ } t \to 0. \]
(iii) Prove that for every sequence $ t_n \to 0 $ the functional sequence $ U (x, t_n) $ is a $ C^\infty $ approximation of the identity.
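As a complement to Exercise 1, one can check numerically that the Gaussian kernel solves the heat equation and has unit mass. The following script is our own illustrative sketch (not part of the original exercise); it works in dimension $N=1$ with finite differences on an ad hoc grid, and its tolerances are chosen for that grid.

```python
import numpy as np

def G(x, t):
    # 1D Gaussian kernel: (4*pi*t)^(-1/2) * exp(-x^2 / (4t))
    return (4 * np.pi * t) ** -0.5 * np.exp(-x**2 / (4 * t))

x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
t, dt = 1.0, 1e-4

# centered differences for u_t and u_xx at time t
u_t = (G(x, t + dt) - G(x, t - dt)) / (2 * dt)
u_xx = np.gradient(np.gradient(G(x, t), dx), dx)

# heat-equation residual is small away from the grid boundary
resid = np.max(np.abs(u_t[5:-5] - u_xx[5:-5]))
assert resid < 1e-3

# unit mass (the tails beyond |x| = 20 are negligible at t = 1)
mass = G(x, t).sum() * dx
assert abs(mass - 1.0) < 1e-8
```

The same check in dimension $N$ only requires replacing the kernel and the Laplacian stencil; the one-dimensional case already exhibits the two key facts used below: $\partial_t U = \partial_x^2 U$ and $\int U\,dx = 1$.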
\begin{center} \includegraphics[width=3.5cm]{Gauss.jpg}\qquad \qquad \includegraphics[width=5.0cm]{ToGauss.jpg} \end{center}
\centerline{C. F. Gauss and the Gaussian solution}
\noindent {\bf Notations and remarks.} This function is known as the Gaussian Function or Gaussian Kernel and we use the notation $ G(x, t) $ or $ G_t (x) $ instead of $U$. We will also use the Gaussian measure $ d\mu = G_tdx $; it is a Radon measure in $ \mathbb{R}^N $ with parameter $ t $. The Gaussian kernel is named after the famous German mathematician Carl Friedrich Gauss, the Prince of Mathematics.
In statistics and probability theory the Gaussian function appears as the density function of the normal distribution. Then the standard deviation of the distribution is taken as the parameter in the formula, and we have \ $ 2t = \sigma^2 $. This relation: {\sl time proportional to squared space deviation}, is called the Brownian scale and appears in many calculations of evolution processes related to heat propagation or Brownian motion. Finally, the same function appears in Statistical Mechanics with the name of Maxwellian distribution (after James Clerk Maxwell, 1860). See other historical comments at the end of the paper.
A main result in those disciplines is the fact that $G$ appears as the limiting probability distribution of many discrete processes, according to the famous Central Limit Theorem. The limiting behaviour of the solutions of the Heat Equation is our main concern in this paper, and the Gaussian kernel will play a main role.
\noindent {\bf Exercise 2.} Calculate the Gaussian function by the method of self-similar solutions, $U(x,t)=t^{-\alpha}F(x\,t^{-\beta})$, plus the condition of mass conservation.
{\sl Hint.} Substitute the self-similar form into the PDE and check that time disappears explicitly from the resulting equation for $F=F(\xi)$ (with $\xi=x\,t^{-\beta}$) if we put $\beta=1/2$. Check that the choice $\alpha=N/2$ means conservation of mass. This selects both similarity exponents. Then write the resulting elliptic equation for $F(\xi)$ assuming that it is a function of $r=|\xi|$ (it is a radial function) to get the ODE $$ (r^{N-1}F'(r))'+ \frac{1}{2}(r^N F)'=0. $$
Calculate that \ $\log F(r)= a- r^2/4$, with $a\in\mathbb{R}$, hence $F=Ce^{-|x|^2/4}.$
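The last step of the hint can also be verified symbolically. The following check (our own illustration, using SymPy) confirms that the profile $F(r)=e^{-r^2/4}$ solves the ODE above for arbitrary dimension $N$:

```python
import sympy as sp

r, N = sp.symbols('r N', positive=True)
F = sp.exp(-r**2 / 4)  # the self-similar profile found above (with C = 1)

# ODE from the exercise: (r^{N-1} F')' + (1/2) (r^N F)' = 0
ode = sp.diff(r**(N - 1) * sp.diff(F, r), r) + sp.Rational(1, 2) * sp.diff(r**N * F, r)
assert sp.simplify(ode) == 0
```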
\noindent {\bf Exercise 3.} Write the formula of the Gaussian with mass $ M $ as $$ U(x,t;M)=\frac{M}{(4\pi t)^{N/2}}e^{-|x|^2/4t} $$ and check that it is dimensionally correct. Dimensional analysis is a powerful tester for correct formulas (if applied well).
\noindent {\bf Exercise 4. Gaussian Representation Formula.} (i) Show that if $ u_0 \in C(\mathbb{R}^N) $ is bounded, then the convolution
\begin{equation}\label{eq:conv.form}\tag{GRF}
u(x,t) = (U \ast u_0)(x,t) := (4\pi t)^{-\frac{N}{2}} \int_{\mathbb{R}^N}u_0(y) e^{-\frac{|x-y|^2}{4t}}dy \end{equation} is a solution of the heat equation with initial data $u_0$. Check that it is a $C^\infty$ function of $x$ and $t$ for every $x\in\mathbb{R}^N$ and $t>0$. Prove that $ u(x,t) \to u_0 (x) $ pointwise when $ t \to 0 $.
(ii) Prove that the initial data is taken uniformly if $ u_0 \in BUC (\mathbb{R}^N) $, the space of bounded and uniformly continuous functions in $\mathbb{R}^N$.
(iii*) Consider whether the boundedness condition of point (i) can be relaxed or eliminated.
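For Gaussian initial data the representation formula can be tested against an exact answer: convolving the kernel $G_t$ with $u_0=G_s$ must give $G_{s+t}$ (the semigroup property of the kernel). The following one-dimensional numerical sketch is our own illustration; the grid and tolerance are ad hoc.

```python
import numpy as np

def G(x, t):
    # 1D Gaussian kernel
    return (4 * np.pi * t) ** -0.5 * np.exp(-x**2 / (4 * t))

x = np.linspace(-20.0, 20.0, 2001)
dx = x[1] - x[0]
s, t = 0.5, 1.0

u0 = G(x, s)  # Gaussian initial data "of age" s

# quadrature version of (GRF): u(x_i, t) = sum_j G(x_i - x_j, t) u0(x_j) dx
u = np.array([np.sum(G(xi - x, t) * u0) * dx for xi in x])

# semigroup property: the solution is the Gaussian of age s + t
assert np.max(np.abs(u - G(x, s + t))) < 1e-6
```

The truncation of the convolution integral to $[-20,20]$ is harmless here because both factors are exponentially small outside the window.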
\noindent {\bf Exercise 5.} Prove that formula \eqref{eq:conv.form} is still valid when the initial data belong to $L^p(\mathbb{R}^N)$, $1\le p\le\infty $.
(ii) What is the regularity of the solution?
(iii) In which sense are the initial data taken?
\noindent {\bf Exercise 6.} (i) Prove that (GRF) generates a continuous contraction semigroup in the following Banach spaces $$ X=BUC(\mathbb{R}^N), \quad L^2(\mathbb{R}^N), \quad L^p(\mathbb{R}^N), \quad 1<p<\infty\,, $$ by means of the definition $S_t:X\to X$, $S_t u_0=u(\cdot,t)$, where $u$ is the solution of the heat equation with initial data $u_0$.
{\sl Sketch.} What you have to check is that, as a function of time $t$, \ $f(t):= S_tu_0$ belongs to $C([0,\infty): X)$, that $S_t\circ S_s=S_{t+s}$, and also $S_tu_0\to u_0$ as $t\to 0$ in $X$. Moreover, contraction means that
$\|S_tu_0\|_X\le \|u_0\|_X$ for every $t>0$ and every $u_0\in X$.
(ii) Study in which sense the semigroups agree (explain if they are the same or not). You may define the {\sl core} of the semigroup by restricting it to a nice and dense set of functions, and then explain how the concept of density is used to recover the complete semigroup in each of the above spaces.
\noindent {\sc Notation.} We will often write $u(t)$ instead of $u(\cdot,t)$ for brevity in the hope that no confusion will arise. Hence, $u(t)$ is a function of $x$ for every fixed $t$.
\noindent {\bf Exercise 7.} (i) Prove that for $p=1$ we have the stronger property of {\bf mass conservation} \begin{equation}\tag{MC} \int_{\mathbb{R}^N} u(x,t)\,dx=\int_{\mathbb{R}^N}u_0(x)\,dx\,, \end{equation} for all data in $L^1(\mathbb{R}^N)$.
(ii) Note that this law also holds for signed solutions. Show that the version with absolute value is not valid for signed solutions by means of an example (superposition of two Gaussians).
(iii) Prove on the fundamental solution that the $L^p$-integrals with $p>1$ are not conserved and check the rate of decay they exhibit. {\sl Terminology:} The $p$-energy is $\int |u|^p \,dx$. These $p$-energies are not conserved because they ``undergo dissipation''.
(iv*) The curious reader may want to learn more about that dissipation and how to measure it accurately. Any ideas?
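A numerical sanity check of (MC) and of the dissipation of the $2$-energy in 1D (again a sketch of ours with ad hoc grid sizes, using a midpoint-rule convolution): the mass is preserved up to quadrature error, while $\int u^2\,dx$ strictly decreases.

```python
import math

def G(x, t):
    # 1D heat kernel
    return math.exp(-x * x / (4.0 * t)) / math.sqrt(4.0 * math.pi * t)

def integral(f, lo=-15.0, hi=15.0, n=600):
    # midpoint rule on [lo, hi]
    h = (hi - lo) / n
    return h * sum(f(lo + (k + 0.5) * h) for k in range(n))

def solve_heat(u0, x, t):
    return integral(lambda y: u0(y) * G(x - y, t))

u0 = lambda y: math.exp(-y * y)            # positive integrable data, mass sqrt(pi)

mass0 = integral(u0)
energy0 = integral(lambda y: u0(y) ** 2)   # 2-energy of the data
mass1 = integral(lambda x: solve_heat(u0, x, 1.0))
energy1 = integral(lambda x: solve_heat(u0, x, 1.0) ** 2)
```

For this Gaussian datum the solution is explicit, $u(x,t)=(1+4t)^{-1/2}e^{-x^2/(1+4t)}$, so the $2$-energy at $t=1$ should be $\sqrt{\pi/2}/\sqrt5$.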
\noindent {\bf Exercise 8. Conservation of moments.} (i) We define the first (signed) moment as the vector quantity \begin{equation} {\mathcal N}_1(u(t))=\int_{\mathbb{R}^N} x\,u(x,t)\,dx\,, \end{equation} which in principle evolves with time. Prove that this quantity is actually conserved in time for any solution of the HE with initial data such that
$\int (1+|x|)\,|u_0(x)|\,dx$ is finite.
(ii) For data such that $\int |x|^2\,|u_0(x)|\,dx$ is finite, we define the second moment as the scalar time-dependent quantity \begin{equation}
{\mathcal N}_2(u(t))=\int_{\mathbb{R}^N} |x|^2\,u(x,t)\,dx\,. \end{equation} Prove that this quantity is not conserved in time for solutions of the HE, and in fact \begin{equation} {\mathcal N}_2(u(t))={\mathcal N}_2(u_0) +2Nt\,. \end{equation} (Recall here that $N$ is the space dimension). These results will be important to understand the finer versions of the asymptotic convergence to the Gaussian. Deeper analysis proves that the only integral quantities conserved by the solutions of the Heat Equation are the mass and the first moment.
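The growth law ${\mathcal N}_2(u(t))={\mathcal N}_2(u_0)+2Nt$ can be tested numerically in 1D ($N=1$) on the time-shifted kernel $u(x,t)=G_{t+1}(x)$, whose second moment should equal $2(t+1)$. This is an illustrative check of ours, not a proof; the quadrature grid is arbitrary.

```python
import math

def G(x, t):
    return math.exp(-x * x / (4.0 * t)) / math.sqrt(4.0 * math.pi * t)

def second_moment(t, lo=-60.0, hi=60.0, n=6000):
    # N_2 at time t for the solution u(x, t) = G_{t+1}(x), with N = 1
    h = (hi - lo) / n
    return h * sum(((lo + (k + 0.5) * h) ** 2) * G(lo + (k + 0.5) * h, t + 1.0)
                   for k in range(n))

# expected: N_2(t) = N_2(0) + 2*N*t = 2 + 2t
moments = {t: second_moment(t) for t in (0.0, 1.0, 3.0)}
```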
\noindent {\bf Exercise 9.} (i) Prove the so-called {\bf ultra-contractive estimates}, also called {\sl smoothing effects}. They allow to pass from data in $L^p$ to solutions $u(t)$ in $ L^\infty$ for every $t>0$. More precisely, prove that there exists a constant $C=C(p,N)$ such that for every $u_0 \in L^p(\mathbb{R}^N)$ $$
\|u(t)\|_\infty \le C\frac{\|u_0\|_p}{t^{N/2p}}. $$ We may say that the constant is universal since it does not depend on the particular solution we take.
(ii) Calculate the exponent $N/2p$ using only dimensional calculus.
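For $p=1$, $N=1$ the estimate reads $\|u(t)\|_\infty\le C\,\|u_0\|_1\,t^{-1/2}$, and Young's inequality $\|u(t)\|_\infty\le\|G_t\|_\infty\|u_0\|_1$ gives $C=(4\pi)^{-1/2}$. A quick check of ours on the indicator data of $(-1,1)$, whose solution is explicit through the error function:

```python
import math

def u(x, t):
    # explicit solution for u0 = indicator of (-1, 1):
    # u(x,t) = (1/2) [erf((x+1)/(2 sqrt t)) - erf((x-1)/(2 sqrt t))]
    s = 2.0 * math.sqrt(t)
    return 0.5 * (math.erf((x + 1.0) / s) - math.erf((x - 1.0) / s))

mass = 2.0                                  # ||u0||_1
C = 1.0 / math.sqrt(4.0 * math.pi)          # constant from Young's inequality
# the ratio should stay below C and approach it as t grows
ratios = [u(0.0, t) * math.sqrt(t) / mass for t in (1.0, 10.0, 100.0)]
```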
\noindent {\bf Exercise 10. Regularity}. (i) Prove that there exists a universal constant $C=C(p,N)>0$ such that for every $p\ge 1$ and every $t>0$ $$
\|\frac{du(t)}{dt}\|_p\le C\,\frac{\|u_0\|_p}{t}\,. $$
(ii) Prove that for nonnegative solutions we have a stronger pointwise inequality $$ \frac{\partial u}{\partial t} \ge -C_1 \frac{u}{t}\quad \mbox{with} \quad C_1=N/2\,. $$ Prove that this constant is optimal. {\sl Hint.} Check the properties first for the fundamental solution, then use convolution.
(iii) Prove that for every $ t > 0 $ the $x$-derivatives of $u$ satisfy \begin{equation*} \label{ESTIMACIONDERIVADAS}
\left|\frac{\partial}{\partial x_i} u(x,t)\right| \leq C \frac{|| u_0||_{L^{\infty}}}{t^{\frac{1}{2}}}. \end{equation*} Obtain a similar estimate for data in $ L^p\left(\mathbb{R}^N \right) $.
(iv) Check that the dimensions are all correct in these formulas.
\noindent {\bf Exercise 11. Gaussian Representation Formula with measure data.} (i) Prove that the representation formula can be used with any bounded Radon measure as initial data, in the form \begin{equation}\label{eq:conv.form.m}\tag{GRFm}
u(x,t) = (G_t \ast \mu)(x) := (4\pi t)^{-\frac{N}{2}} \int_{\mathbb{R}^N} e^{-\frac{|x-y|^2}{4t}}d\mu(y)\,. \end{equation}
(ii) Prove that it produces a classical solution of the HE for all $t>0$ and that the a priori estimates hold with the $L^1$ norm of $u_0$ replaced by the total mass of the measure,
$|\mu(\mathbb{R}^N)|=\int |d\mu|$.
(iii) Prove that the initial data are taken in the weak sense of measures.
(iv) Observe that the fundamental solution $U=G_t$ falls into this class.
\noindent {\bf Exercise 12.} (i) Find explicit solutions of polynomial type of the heat equation. {\sl Suggestion:} try the solution types $$ u = A_0 (x), \quad u = A_0 (x) + A_1 (x) t, \quad u = A_0 (x) + A_1 (x) t + A_2 (x) t ^ 2, \dots $$
Find all the solutions of the first two types. Maybe the best known solutions in this class are $u=1$ and $u=|x|^2+2Nt$.
(ii) Prove that they also satisfy the representation formula, even though they are not integrable or bounded.
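As a numerical companion to item (ii) (our own sketch), the representation formula applied in 1D to the unbounded, non-integrable datum $u_0(y)=y^2$ should return $x^2+2t$; a truncated midpoint Riemann sum reproduces this, the truncation being harmless because of the Gaussian factor.

```python
import math

def G(x, t):
    return math.exp(-x * x / (4.0 * t)) / math.sqrt(4.0 * math.pi * t)

def poly_solution(x, t, lo=-50.0, hi=50.0, n=5000):
    # (GRF) with the (unbounded) data u0(y) = y^2
    h = (hi - lo) / n
    return h * sum(((lo + (k + 0.5) * h) ** 2) * G(x - (lo + (k + 0.5) * h), t)
                   for k in range(n))

val = poly_solution(1.0, 2.0)   # expected x^2 + 2*N*t = 1 + 4 = 5 (N = 1)
```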
The general theory says that the (GRF) holds for all initial data which are locally bounded measures $\mu$ with a weighted integrability condition $$
\int_{\mathbb{R}^N} e^{-c |x|^2}|d\mu(x)|<\infty \quad \mbox{for some} \ c>0\,, $$
see \cite{Widder}, but we will not use that sharp result in these notes. We will only point out the following example of blow-up solution with very large growth as $|x|\to\infty$ \begin{equation}\label{bu.sol}
U_b(x,t)=C\,(T-t)^{-N/2}e^{|x|^2/4(T-t)}. \end{equation}
\noindent {\bf Exercise 13.} Check that $U_b$ is a classical solution of the heat equation for $-\infty<t<T$ and blows up everywhere in $x$ as $t\to T$.
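Both claims of Exercise 13 can be confirmed numerically in 1D (taking $C=1$, $T=1$; this is a hedged sketch of ours): central finite differences show that $U_b$ satisfies $\partial_t u=\partial_{xx}u$ up to discretization error, and $U_b(0,t)$ grows without bound as $t\to T$.

```python
import math

T = 1.0
def Ub(x, t):
    # U_b(x, t) = (T - t)^(-1/2) * exp(x^2 / (4 (T - t))), valid for t < T, N = 1
    return math.exp(x * x / (4.0 * (T - t))) / math.sqrt(T - t)

h = 1e-3
x0, t0 = 0.5, 0.3
# central finite differences for u_t and u_xx at an interior point
u_t  = (Ub(x0, t0 + h) - Ub(x0, t0 - h)) / (2.0 * h)
u_xx = (Ub(x0 + h, t0) - 2.0 * Ub(x0, t0) + Ub(x0 - h, t0)) / (h * h)
residual = abs(u_t - u_xx)

# growth of the central value as t approaches T
blowup = [Ub(0.0, t) for t in (0.5, 0.9, 0.999)]
```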
\noindent {\bf Exercise 14.} Construction of new solutions.
(i) Show that if $u (x, t) $ is a solution of the heat equation, so are \ $ u_1 = \partial_t u $, and $ u_2 = \partial_{x_i} u $.
(ii) Show that if $u (x, t) $ is a solution of the heat equation in 1D then so is also
$ u_3 = \int_{- \infty}^x u(s, t) \, ds $.
(iii) Prove that if $u (x, t) $ is a solution of the heat equation so is also $$ v(x, t) = A u(Bx, B^2t) $$ for any choice of the parameters $ A $ and $ B $. This is called the scaling property. $ A $ and $ B $ need not be positive.
(iv*) Show that if $u (x, t) $ is a solution of the heat equation, then so is also $$ v(x, t) = x u_x + 2t u_t\,. $$ This is a 1D notation. But the result is valid in any dimension if the notation is correctly interpreted.
\section{Asymptotic convergence to the Gaussian}\label{main.result}
The main result on the asymptotic behaviour of general integrable solutions of the heat equation consists in proving that they look increasingly like the fundamental solution. Since this solution goes to zero uniformly with time, the estimate of the convergence has to take into account that fact and compensate for it. This happens by considering a renormalized error that divides the standard error in some norm by the size of the Gaussian solution $U(t)=G_t$ in the same norm. For instance, in the case of the sup norm we know that $$
\|G_t\|_{L^\infty(\mathbb{R}^N)}=Ct^{-N/2},\qquad \|G_t\|_{L^1(\mathbb{R}^N)}=1. $$ This is the basic result we want to prove
\begin{theorem} \label{main.convthm} Let $u_0\in L^1(\mathbb{R}^N)$ and let $\int u_0(x)dx=M$ be its mass. Then the solution $u(t)=u(\cdot,t)$ of the HE in the whole space ends up by looking like $M$ times the fundamental solution $U(t)=G_t$ in the sense that \begin{equation}\label{cr.l1}
\lim_{t\to\infty} \|u(t)-M G_t\|_1 = 0 \end{equation} and also that \begin{equation}\label{cr.linf}
\lim_{t\to\infty} t^{N/2}\|u(t)-M G_t\|_\infty = 0\,. \end{equation} By interpolation we get the convergence result for all $L^p$ norms \begin{equation}\label{cr.lp}
\lim_{t\to\infty} t^{N(p-1)/2p}\|u(t)-M G_t\|_{L^p(\mathbb{R}^N)} = 0 \end{equation} for all $1\le p\le \infty$. \end{theorem}
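Here is a crude numerical illustration of \eqref{cr.linf} in 1D (our own sketch) for the indicator data of $(-1,1)$, so $M=2$, using the explicit erf form of the solution; the sup is only sampled on a finite grid, whose width follows the self-similar scale $\sqrt t$.

```python
import math

def u(x, t):
    # explicit solution for u0 = indicator of (-1, 1)
    s = 2.0 * math.sqrt(t)
    return 0.5 * (math.erf((x + 1.0) / s) - math.erf((x - 1.0) / s))

def G(x, t):
    return math.exp(-x * x / (4.0 * t)) / math.sqrt(4.0 * math.pi * t)

M = 2.0
def renorm_error(t, n=2000):
    # sample t^(1/2) * |u - M G_t| on a grid covering the self-similar core
    width = 8.0 * math.sqrt(t)
    grid = (-width + 2.0 * width * k / n for k in range(n + 1))
    return max(math.sqrt(t) * abs(u(x, t) - M * G(x, t)) for x in grid)

errors = [renorm_error(t) for t in (1.0, 10.0, 100.0)]
```

The decay here is in fact close to $O(1/t)$, which is consistent with the improved rates of Section \ref{sec.conv2}: this datum is symmetric with finite second moment.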
We add important information to this result in a series of remarks.
\noindent $\bullet$ First, one comment about the spatial domain. The fact that we are working in the whole space is crucial for the result of Theorem \ref{main.convthm}. The behaviour of the solutions of the heat equation posed in a bounded domain with different kinds of boundary conditions is also known, and the asymptotic behaviour {\sl does not follow the Gaussian pattern.}
\noindent $\bullet$ As we will see below, convergence to the Gaussian happens on the condition that the data belong to the class of integrable functions, that can be extended without problem to bounded Radon measures. This is actually no news since the theory says that the solution corresponding to an initial measure $\mu\in {\mathcal M}(\mathbb{R}^N)$ is integrable and bounded for any positive time, so we may change the origin of time and make the assumption of integrable and bounded data. But we point out that some of the proofs work directly for measures without any problem.
\noindent $\bullet$ We recall that it is usually assumed that $u\ge 0$ on physical grounds but such assumption is not at all needed for the analytical study of this paper. Thus, the basic result holds also for signed solutions even if the total integral is negative, $M\le 0$. There is no change in the proofs. We may also put $M=1$ by linearity as long as $M\ne 0$.
\noindent $\bullet$ $M=0$ is a special case that deserves attention: even if Theorem \ref{main.convthm} is true, the statement does not imply that the solution looks asymptotically like a Gaussian; to be more precise, it only says that the previous first-order approximation disappears. If we want more precise details about what the solution looks like, we have to search further to identify the terms that may give us the size and shape of such a solution. This question will be addressed below. For the moment let us point out that differentiation in $x_i$ of the Heat Kernel produces a new solution with zero integral $$
u_i(x,t)=\partial_{x_i} G_t(x)= C\,t^{-(N+2)/2}x_i\,e^{-|x|^2/4t}\,, $$ to which we can apply the above comments. In particular, $$ t^{N/2}u_i(x,t)= O(t^{-1/2})\,, $$ where $O(\cdot)$ is the Landau $O$-notation for orders of magnitude.
\noindent $\bullet$ It must be stressed that convergence to the Gaussian {\sl does not hold for other data}. Maybe the simplest example of solution that does not approach the Gaussian is given by any non zero constant solution, but it could be objected that $L^\infty(\mathbb{R}^N)$ is very far from $L^1(\mathbb{R}^N)$. Actually, the same happens for all $L^p(\mathbb{R}^N)$ spaces, $p>1$. Indeed, a simple argument based on approximation and comparison shows that for any $u_0\ge 0$ with $\int u_0(x)\,dx=+\infty$ we have $$ \lim_{t\to\infty} t^{N/2}u(x,t)=+\infty $$ everywhere in $x\in \mathbb{R}^N$ (and the divergence is locally uniform).
\noindent $\bullet$ The way different classes of non-integrable solutions actually behave for large time is an interesting question that we will not address here. Thus, the reader may prove using the convolution formula that for locally integrable data that converge to a constant $C$ as $|x|\to\infty$ the solution $u(x,t)$ stabilizes to that constant as $t\to\infty$. Taking growing data may produce solutions that tend to infinity with time, like the 1D family of travelling waves $$ U_{TW}(x,t)=C\,e^{c^2t+cx} $$ defined for real constants $C,c\ne 0$. A more extreme case is the blow-up solution $U_b(x,t)$ of formula \eqref{bu.sol} that not only does not stay bounded with time, it even blows up in finite time.
\noindent $\bullet$ About the three convergence results of the Theorem, it is clear that \eqref{cr.lp} follows from \eqref{cr.l1} and \eqref{cr.linf}. Now, what is interesting is that \eqref{cr.linf} follows from \eqref{cr.l1} and the smoothing effect (Exercise 9). We ask the reader to prove this fact. Hint: use \eqref{cr.l1} between $0$ and $t/2$ and then the smoothing effect between $t/2$ and $t$.
\noindent $\bullet$ It is interesting to note that the convergence in $L^1$ norm of formula \eqref{cr.l1} can be formulated without mention to the existence of a fundamental solution in the following form:
\noindent {\bf Alternative Theorem.} {\sl Let $u(t)$ and $v(t)$ be any two solutions of the Cauchy problem for the HE in the whole space, and let us assume that their initial data satisfy $\int u_0\,dx=\int v_0\,dx$. Then we have} \begin{equation}\label{cr.l1.mod}
\lim_{t\to\infty} \|u(t)-v(t)\|_1 = 0\,. \end{equation} We could use a similar approach for the $L^\infty$ estimate but some Gaussian information appears in the weight $t^{N/2}$. The alternative approach was first remarked upon by specialists in stochastic processes of Brownian type, and it is known as a {\sl mixing property}. It has been generalized to many variants of the heat equation.
\section{Proof of convergence by scaling}\label{sec.scaling}
There are many approaches to the proof of the main result, and we will show some of the best known below. They are interesting for their possible extension to similar asymptotic results for other equations, both linear and nonlinear, see Sections \ref{sec.appl1} and \ref{sec.appl2}. The first proof we give is based on scaling arguments. In the proposed method the proof is divided into 5 steps.
\noindent {\sc Step 1. \sl Scaling transformation.} It is easy to check that the HE is invariant under the following one-parameter family of transformations ${\mathcal T}_k $ defined on space-time functions by the formula $$ {\mathcal T}_k u(x,t)= k^{N} u (kx,k^2t), \qquad k>0, $$ see also Exercise 14 (iii). This transformation maps solutions $u$ into new solutions $u_k={\mathcal T}_k u$, called rescalings of the original solution. It also conserves the mass of the solutions $$ \int_{\mathbb{R}^N} {\mathcal T}_k u(x,t)\,dx=\int_{\mathbb{R}^N} u(y,k^2t)\,dy=\mbox{\rm constant}. $$
\noindent {\sc Step 2. \sl Uniform estimates.} The estimates proved on the general solutions in Section \ref{sec.intro} imply that the whole family $u_k$ is uniformly bounded in space and time if time is not so small: $x\in\mathbb{R}^N$ and $t\ge t_1>0$ (cf. Exercise 9). They also have uniform estimates on all derivatives under the same restriction on time (cf. Exercise 10).
\noindent {\sc Step 3. \sl Limit problem.} We can now use functional compactness and pass to the limit $k\to \infty$ along suitable subsequences to obtain a function $\hat U(x,t)$ that satisfies the same estimates mentioned above and is a weak solution of the HE in $Q=\mathbb{R}^N\times (0,\infty)$. By the estimates on derivatives the solution is classical. The convergence takes place locally in $Q$ in the sup norm for the functions and their space and time derivatives.
\noindent {\sc Step 4. \sl Identifying the limit.} We now have to check that the limit solution $\hat U(x,t)$ has the same mass $M$ as the sequence $u_k$ and that it takes the Dirac delta as initial data. By uniqueness for solutions with measure data (which we accept as part of the theory) we will then conclude that $\hat U(x,t)$ is just a fundamental solution, $\hat U(x,t)=M\,G_t$.
(i) The proof of this step is best done when $u_0$ is nonnegative, compactly supported and bounded, since in that case the solution is bounded above by constant times the fundamental solution, $|u(x,t)|\le C\,G(x,t+1)$. Applying the transformation we get $$
|u_k(x,t)|\le C\,{\mathcal T}_k (G(x,t+1))= C\, G(x,t+ k^{-2}). $$ This gives a uniform control from above of the tail mass of all the solutions $u_k$, by which we mean the mass lying in exterior regions of space. Such control allows to avoid the loss of mass that could occur in the limit by Fatou's Theorem. Hence, the mass of $\hat U$ is the same, i.e., $M$.
The convergence of $\hat U$ to $M\,\delta(x)$ as $t\to 0$ happens because $u_k(x,0)\to M\,\delta(x)$, and the previous tail analysis shows that $\hat U$ takes zero initial values for $x\ne 0$.
(ii) To recover the same result for a general $u_0 \ge 0$ we use approximation by data as above and then the $L^1$ contraction of the heat semigroup. For signed solutions, separate the positive and negative parts of the data and solve separately.
\noindent {\sc Step 5. \sl Recovering the result.} (i) We now use the convergence of the rescaled solutions at a fixed time, say $t=1$, $$
\lim_{k\to\infty}|u_k(x,1)-MG_1(x)| = 0. $$ This convergence takes place locally in $\mathbb{R}^N$ in all $L^p$ norms. By the tail analysis, it also happens in $L^1(\mathbb{R}^N)$. But since the derivatives are also uniformly bounded we also have an $L^\infty$ estimate.
(ii) We now use the meaning of the transformation ${\mathcal T}_k$ and write $k=t_1^{1/2}$ to get $$
\lim_{t_1\to\infty} |t_1^{N/2}u(xt_1^{1/2},t_1)-MG_1(x)| = 0. $$ But this is just the result we wanted to prove after writing $x=y\,t_1^{-1/2}$ and observing that $G_{t_1}(y)=t_1^{-N/2}G_1(x)$. Similarly for the $L^1$ norm.
This 5-step proof is taken from paper \cite{KVplap}, where it was applied to the $p$-Laplacian equation. See full details for the porous medium equation in the book \cite{VazBook}. It has had further applicability.
\noindent {\bf Exercise 15.} Fill in the details of the above proof.
\section{Asymptotic convergence to the Gaussian via representation}\label{sec.conv1}
The second proof we give of the main result, Theorem \ref{main.convthm}, is based on the examination of the error in terms of the representation formula. This is a very direct approach, but it needs a previous step, whereby the proof is done under the further restriction that the data have a finite first moment. Then the convergence result is more precise and quantitative. This particular case has an interest in itself since it shows the importance of controlling the first moment of a mass distribution. We already know that the first moment is associated to a conserved quantity (see Exercise 8).
\begin{theorem} \label{thm.conv.2} Under the assumptions that $u_0\in L^1(\mathbb{R}^N)$ and that the first absolute moment is finite \begin{equation}
{\mathcal N}_1=\int |u_0(y)y|\,dy < \infty\,, \end{equation} we get the convergence \begin{equation}\label{conv.fm}
\displaystyle t^{N/2}|u(x,t)-MG_t(x)|\le C{\mathcal N}_1t^{-1/2}\,, \end{equation} as well as \begin{equation}
\displaystyle \|u(x,t)-MG_t(x)\|_{L^1(\mathbb{R}^N)}\le C{\mathcal N}_1t^{-1/2}\,. \end{equation} The rate $O(t^{-1/2})$ is optimal under such assumptions. \end{theorem}
\noindent {\sl Proof.} (i) We may perform the proof under the further restriction that $u_0\ge 0$. For a signed solution we must only separate the positive and negative parts of the data and apply the results to both partial solutions.
(ii) Let us do first the sup convergence. We have $$ \begin{array}{c} \displaystyle u(x,t)-MG_t(x)= \int u_0(y)G_t(x-y)\,dy- G_t(x)\int u_0(y)\,dy\\[4pt] \displaystyle =\int u_0(y)(G_t(x-y)-G_t(x))\,dy\\ \displaystyle =\int u_0(y)\left(\int_0^1\partial_s (G_t(x-sy))\,ds\right)\,dy\\[4pt] \displaystyle = C t^{-N/2}
\int dy \int_0^1 ds \,u_0(y) \langle y, \frac{x-sy}{2t}\rangle e^{-|x-sy|^2/4t}\\[4pt] \end{array} $$ with $C=(4\pi)^{-N/2}$. Consider the piece of the integrand of the form $$
f=\frac{x-sy}{t^{1/2}}e^{-|x-sy|^2/4t}= \xi e^{-|\xi|^2/4}\,, $$ where we have put $\xi=(x-sy)/t^{1/2}$. We observe that the vector function $f$ is bounded by a numerical constant, hence $$
\displaystyle |u(x,t)-MG_t(x)|\le C_1t^{-(N+1)/2}\int |u_0(y)y|\,dy\,. $$
Taking into account that $ G_t$ is of order $t^{-N/2}$ in sup norm, we write the result as \eqref{conv.fm}, and $C=C(N)$ is a universal constant.
(iii) For the $L^1$ convergence we start in the same way and arrive at $$ \displaystyle u(x,t)-MG_t(x)= C \,t^{-N/2} \int dy \int_0^1 ds \,u_0(y) \langle y, \frac{x-sy}{2t}\rangle e^{-|x-sy|^2/4t}\,. $$ Now we integrate to get $$ \begin{array}{c}
\displaystyle \|u(x,t)-MG_t(x)\|_1\le C \, t^{-N/2}
\int_{\mathbb{R}^N} dx\int_{\mathbb{R}^N} dy \int_0^1 ds \,|u_0(y) y| \frac{|x-sy|}{2t}e^{-|x-sy|^2/4t}\\[6pt]
\displaystyle = C \,\int_{\mathbb{R}^N} dy \int_0^1 ds \,|u_0(y) y|t^{-1/2} (\int_{\mathbb{R}^N } t^{-N/2}\frac{|x-sy|}{t^{1/2}}e^{-|x-sy|^2/4t} \,dx) \end{array} $$ With the change of variables $x-sy=t^{1/2}\xi$ we already know that the last integral is a constant independent of $u$, hence the formula for the $L^1$ error. Note that now we are speaking of masses and we do not need any renormalization time factor.
(iv) We ask the reader to prove the optimality as an exercise. \qed
\noindent {\bf Exercise 16.} Take as solution the Gaussian after a space displacement, $u(x,t)= G(x+h,t)$, and find the convergence rate to be exactly $O(t^{-1/2})$. This is just a calculus exercise but attention to details is needed. {\sl Hint}: Write $$ G(x+h,t)=G_t(x)+ h\partial_x G_t(x)+ \frac{h^2}2 D^2_xG_t(\xi) $$ (where $\xi=x+sh$, $0<s<1$), and check that $$ \partial_x G_t(x)=Ct^{-(N+1)/2}\,{\xi}e^{-\xi^2/4}=O(t^{-(N+1)/2})\,, $$ uniformly in $x$, and $D^2_xG_t(\xi)= O(t^{-(N+2)/2})$ uniformly in $x$. This exercise shows that the term $h\partial_x G_t(x)$ is the {\sl precise corrector} with relative error $O(t^{-1/2})$. We could continue the analysis by expanding in Taylor series with further terms, see below. Exact correctors and longer expansions for general solutions will be done later in Section \ref{sec.spec} by the methods of Functional Analysis.
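A numerical companion to Exercise 16 in 1D (our own sketch): for $u(x,t)=G_t(x+h)$ the weighted sup error $t^{1/2}\cdot t^{1/2}\,\|G_t(\cdot+h)-G_t\|_\infty$ should stabilize at a positive constant, confirming the exact rate $O(t^{-1/2})$. The limiting constant $h/\sqrt{8\pi e}$ below is our own computation, obtained by maximizing $|h\,\partial_x G_t|$; the sup is only sampled on a grid.

```python
import math

def G(x, t):
    return math.exp(-x * x / (4.0 * t)) / math.sqrt(4.0 * math.pi * t)

h = 0.3   # spatial displacement of the Gaussian

def weighted_error(t, n=4000):
    # t * sup_x |G_t(x + h) - G_t(x)|, the sup sampled on a grid of width ~ sqrt(t)
    width = 10.0 * math.sqrt(t)
    pts = (-width + 2.0 * width * k / n for k in range(n + 1))
    return t * max(abs(G(x + h, t) - G(x, t)) for x in pts)

limit = h / math.sqrt(8.0 * math.pi * math.e)   # predicted limit constant
vals = [weighted_error(t) for t in (10.0, 100.0, 1000.0)]
```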
\noindent {\bf Remark.} The factor $t^{N/2}$ is the appropriate weight to consider relative error. We point out that the more precise relative error formula \begin{equation}
\epsilon_{rel}(t)=\frac{|u(x,t)-G_t(x)|}{G_t(x)} \end{equation} does not admit a sup bound, as can be observed by choosing $u(x,t)=G_t(x-h)$ for some constant $h$ since then $$
\epsilon_{rel}(t)=|e^{xh/2t}e^{-h^2/4t}-1|\,, $$
which is not even bounded. It is then quite good that our weaker form does admit a good estimate. The same happens for the $L^1$ norm. This comment wants to show that error calculations with Gaussians are delicate because of the tail (i.e., the behaviour for large $|x|$).
\noindent $\bullet$ {\sl Proof of Theorem \ref{main.convthm} in this approach}. Given an initial function $u_0\in L^1(\mathbb{R}^N)$ without any assumption on the first moment, we argue by approximation plus the triangular inequality. In the end we get a convergence result, but less precise. To wit, let $u_0$ be integrable with integral $M$ and let us fix an error $\delta>0$. First, we find an approximation $u_{01}$ with compact support and such that $$
\|u_0-u_{01}\|_1<\delta. $$ Due to the already mentioned effect $L^1\to L^\infty$, and applied to the solution $u-u_1$, we know that for all $t>0$ $$
\|u(t)-u_{1}(t)\|_\infty<C\,\delta\, t^{-N/2}\,. $$ On the other hand, we have just proved that for data with finite moment: $$
\|u_1(t)-M_1G_t\|_\infty\le C {\mathcal N}_{1\delta} \,t^{-(N+1)/2}\,. $$ In this way, for sufficiently large $t$ (depending on $\delta$) we have $$
t^{N/2}\|u_1(t)-M_1G_t\|_\infty\le C \delta $$
with $ C$ a universal constant. Next we recall that $|M-M_1|\le \delta$ as well as $G_t\le Ct^{-N/2}$. Using the triangular inequality we arrive at \begin{equation}
\lim_{t\to\infty}t^{N/2}\|u(t)-MG_t\|_\infty=0\,, \end{equation} which ends the proof. \qed
\noindent {\bf Remark}. In the general conditions of Theorem \ref{main.convthm} we still obtain convergence, but we no longer obtain a rate. In fact, we show next that no convergence speed can be found without further information on the data other than integrability of $ u_0 $.
\hiddensubsection{No explicit rates for general data. Counterexample}
Let us explain how the lack of a rate for the whole class of $L^1$ functions is shown. Given any decreasing and positive rate function $\phi$ such that $\phi(t)\to 0$ as $t\to\infty$ we construct a modification of the Gaussian kernel that produces a solution with the same mass $M=1$ and such that it satisfies a lower bound for the error of the form $$
t_n^{N/2}\|u(\cdot,t_n)- G_{t_n}\|_\infty\ge n \phi(t_n) $$ at a sequence of times $t_n\to\infty$ to be chosen.
\noindent {\bf Construction}. The idea is to find a choice of small masses $m_1,m_2,\dots$ with $\sum_n m_n=\delta< 1$, and locations $x_n$ with $|x_n|=r_n\to \infty$ and consider the solution $$ u(x,t)=(1-\delta)G_t(x)+ \sum_{n=1}^\infty m_nG_t(x-x_n)\,. $$ Let us be precise. The error $u(x,t)-G_t(x)$ is calculated at $x=0$ as $$
(4\pi t)^{N/2}|u(0,t)-G_t(0)|=|\delta-\sum_{n=1}^\infty m_n e^{-|x_n|^2/4t}|= \sum_{n=1}^\infty m_n (1-e^{-|x_n|^2/4t})\,. $$
Put $m_n=2^{-n}$ (any other summable series will do). Choose iteratively $t_n$ and $x_n$ as follows. Given choices for the steps $1,2,\dots, n-1$, pick $t_n$ to be much larger than $t_{n-1}$ and such that $\phi(t_n)\le m_n/2n$. This is where we use the fact that $\phi(t)$ tends to zero, even if it may decrease in a very slow way. Choose now $|x_n|=r_n$ so large that \ $e^{-r_n^2/4t_n}< 1/2$. Essentially, the mass has to be displaced at distance equal or larger than $O(t_n^{1/2})$. Then, $$ m_n(1-e^{-|x_n|^2/4t_n})\ge n\phi(t_n)\,. $$
Let us make some further practical calculations (with no precise scope in mind): Let for example $\phi(t)=t^{-\epsilon}$. Choose $m_n=2^{-n}$. Then $t_n ^{\epsilon}\ge 2^{n}$, $$ t_n\sim 2^{n /\epsilon}, \qquad r_n\sim 2^{n/2\epsilon} $$
and the mass in the outer region $\{|x|\ge r_n\} $ is approximately $2^{-n}$; hence such a mass is $M(r)\sim r^{-2\epsilon}\,,$ which is not so small if $\epsilon\to 0$.
\hiddensubsection{Infinite propagation in space }
The representation formula immediately shows that a solution $u$ corresponding to nonnegative initial data will be
strictly positive at all points $x\in\mathbb{R}^N$ for any time $t>0$. The infinite speed of propagation of the heat equation, with the instantaneous formation of a thin tail at infinity, is considered an un-physical property by many authors, one of the few drawbacks of this wonderful equation. However, it is essential to the equation and creates some curious effects.
\noindent $\bullet$ {\large \bf Spatial tails for positive solutions.} Let us examine the precise form of the tail in the simplest case where $u_0$ has compact support. For simplicity and w.l.o.g. we assume that $u_0$ is supported in the ball $B_R$ of radius $R>0$ centered at $x=0$. Take $x_1\in \mathbb{R}^N$ such that $|x_1|=r_1>1$, say $x_1=r_1 {\bf e}_1$. Then the representation formula implies that a bound from above is obtained by displacing all the mass to the nearest point to $x_1$ inside $\overline{B_R}$, which is $x_0'=R {\bf e}_1$, and we get
the upper bound
$$
u(x_1,t)\le M\,G_t(x_1-x'_0)=\frac{M}{(4\pi t)^{N/2}}e^{-(|x_1|-R)^2/ 4t} \,.
$$ In the opposite direction, moving all the mass to $x_0''=-R {\bf e}_1$ we get the lower bound $$
u(x_1,t)\ge M\,G_t(x_1-x_0'')=\frac{M}{(4\pi t)^{N/2}}e^{-(|x_1|+R)^2/ 4t} \,.
$$
Both estimates are clearly optimal in this context. Since the equation is invariant under rotations the result holds for all $x$ such that $|x|>R$ instead of our restricted choice of $x_1$. Moreover, we see that the ratio of both estimates tends to infinity as $|x|\to\infty$. It is therefore convenient to take logarithms, and then we easily get a general formula where $B_R$ is centered at $x_c$ with $ |x_c|=R_c$.
\begin{proposition} \label{prop.tails} Let $u_0$ be supported in the ball $B_R(x_c)$ and let $M=\int u_0(x)\,dx$. Then for every $t>0$ and every $x\not\in B_R(x_c)$ we have \begin{equation}
\frac{N}2 \log(4\pi t)+\frac1{4t}(|x-x_c|-R)^2 \le -\log (u(x,t)/M)
\le \frac{N}2 \log(4\pi t)+\frac1{4t}(|x-x_c|+R)^2 \,. \end{equation}
It follows that
\begin{equation}
\lim_{|x|\to\infty}\frac{\log (u(x,t)/M)}{|x|^2}= -\frac1{4t}.
\end{equation}
\end{proposition}
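The limit in Proposition \ref{prop.tails} can be observed numerically (a sketch of ours) for the indicator data of $(-1,1)$, so $M=2$, $x_c=0$, $R=1$, with $t=1$. We use the erfc form of the solution, which keeps the far tail representable in floating point where the erf form would cancel to zero.

```python
import math

def u_tail(x, t):
    # erfc form of the solution for u0 = indicator of (-1, 1); accurate for large x
    s = 2.0 * math.sqrt(t)
    return 0.5 * (math.erfc((x - 1.0) / s) - math.erfc((x + 1.0) / s))

M, t = 2.0, 1.0
# log(u/M) / x^2 should slowly approach -1/(4t) = -0.25 as x grows
ratios = [math.log(u_tail(x, t) / M) / (x * x) for x in (10.0, 20.0, 50.0)]
```

The convergence is slow because of the first-order corrections $\pm R$ in the exponent and the logarithmic prefactor.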
We conclude that in first approximation the tail at infinity of all solutions with compactly supported initial data is universal in shape and depends only on the mass of the data and time. Of course, the second term in the expansion depends also on the radii $R_c$ and $R$.
Another observation is that the asymptotic space behaviour allows to calculate the time elapsed since the solution had compact support (if we already know the mass $M$).
Let us also remark that solutions with more general nonnegative data can have other type of tails and we invite the reader to calculate some of them, both for integrable and non-integrable data. Here is an example that decays like a simple exponential.
\noindent {\bf Exercise 17.} Consider the heat equation in 1D. Show that when $u_0$ is integrable, positive and bounded and $u_0(x)=e^{-x}$ for $x\ge 0$, then for all times $$ \lim_{x\to\infty} e^x u(x,t)= e^t. $$ {\sl Sketch:} Putting $u_0(x)=0$ for $x<0$ for simplicity, use the representation formula to write $$ e^{x-t}u(x,t)=\frac1{(4\pi t)^{1/2}}\int_{-\infty}^x e^{-(y-2t)^2/4t}dy $$ Let then $x\to \infty$. You may also use the explicit solution $U(x,t)=e^{t-x+c}$.
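The limit claimed in Exercise 17 can be checked numerically with the midpoint-rule convolution used before (our own sketch; grid parameters are ad hoc): $e^x u(x,1)$ should be essentially $e$ already at moderately large $x$.

```python
import math

def G(x, t):
    return math.exp(-x * x / (4.0 * t)) / math.sqrt(4.0 * math.pi * t)

def solve_heat(u0, x, t, lo=0.0, hi=60.0, n=6000):
    # midpoint Riemann sum of the representation formula; u0 vanishes for y < 0
    h = (hi - lo) / n
    return h * sum(u0(lo + (k + 0.5) * h) * G(x - (lo + (k + 0.5) * h), t)
                   for k in range(n))

u0 = lambda y: math.exp(-y)          # data e^{-y} on y >= 0, zero on y < 0
t = 1.0
vals = [math.exp(x) * solve_heat(u0, x, t) for x in (10.0, 15.0, 20.0)]
# expected limit: e^t = e
```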
In any case the behaviour described in Proposition \ref{prop.tails} is the \sl minimal \rm one for nonnegative solutions.
\noindent $\bullet$ {\large \bf Signed solutions do not become everywhere positive.}\\ The square exponential tail behaviour of the fundamental solution has more curious consequences. Thus, if a signed solution $u(x,t)$ has initial data such that the mass of the positive part $M_+=\int u_0^+(x)dx$ is larger than the mass of the negative part $M_-=\int u_0^-(x)dx$, then we know that it converges as $t\to\infty$ to the positive Gaussian $M G_t$ with $M= M_+-M_- > 0 $ so that \begin{equation}
\lim_{t\to\infty} t^{N/2} u(x,t)=(4\pi)^{-N/2}\,M>0. \end{equation} for every $x\in\mathbb{R}^N$. It would be natural to expect that when $M_+$ is much larger than $M_-$ then the solution $u$ is indeed positive everywhere for large enough, finite times. Now, this is true in most of the space because of the previous convergence, but it is not true in all the space.
\noindent {\bf Exercise 18.} (i) Take $N=1$. Show that for the choice $u_0(x)=\delta(x)-{\varepsilon} \,\delta(x-1)$ with $0<{\varepsilon}<1$ (a combination of delta functions) the solution $u$ is positive on the left of a line $x=r(t)$ and negative for $x>r(t)$. Show that for large times $r(t)\sim 2\log(1/{\varepsilon}) t$.
{\sl Remark.} We see that the problem arises at the far away tail. Of course, $u(x,t)$ becomes positive for large times at all points located at or less than the typical distance, $|x|\le C t^{1/2}$.
(ii) Show that a similar result is true for integrable data if $u_0$ is positive for $x<0$ and negative for $x>1$, and zero in the middle. Show in particular that $u(x,t)<0$ if $x> 2t\log(1/{\varepsilon}) +1/2$.
(iii) State similar results in several dimensions.
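Part (i) of Exercise 18 is easy to check directly (our own sketch): for $u(x,t)=G_t(x)-\varepsilon\,G_t(x-1)$ the sign of $u$ reduces to comparing two exponents, and the sign change occurs exactly at $x=2t\log(1/\varepsilon)+1/2$.

```python
import math

def G(x, t):
    return math.exp(-x * x / (4.0 * t)) / math.sqrt(4.0 * math.pi * t)

eps, t = 0.1, 5.0
def u(x):
    # solution with data delta(x) - eps * delta(x - 1)
    return G(x, t) - eps * G(x - 1.0, t)

r = 2.0 * t * math.log(1.0 / eps) + 0.5   # predicted sign-change location
left, right = u(r - 0.1), u(r + 0.1)
```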
\section{Improved convergence for distributions with second moment} \label{sec.conv2}
Better convergence rates can be obtained by asking for a better decay at infinity of $u_0$. The technical condition we use is having a finite second moment, a condition that is very popular in the literature. In probability this is known as having a finite variance. The motivating example is described next.
\noindent {\bf Exercise 19.} Consider a time displacement of the fundamental solution and show that $u(x,t)= G(x,t+t_0)$ has the precise convergence rate $O(1/t)$ towards $G(x,t)$.
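Exercise 19 can be explored numerically in 1D (our own sketch): for $u(x,t)=G_{t+1}(x)$ the quantity $t\cdot t^{1/2}\,\|G_{t+1}-G_t\|_\infty=t^{3/2}\,\|G_{t+1}-G_t\|_\infty$ should stabilize at a positive constant, exhibiting the $O(1/t)$ rate. The limiting constant below is our own computation, based on the observation that the sup of $|\partial_tG_t|$ is attained at $x=0$; the sup is only sampled on a grid.

```python
import math

def G(x, t):
    return math.exp(-x * x / (4.0 * t)) / math.sqrt(4.0 * math.pi * t)

def weighted_error(t, n=2000):
    # t^(3/2) * sup_x |G_{t+1}(x) - G_t(x)|, the sup sampled on a grid
    width = 10.0 * math.sqrt(t)
    pts = (-width + 2.0 * width * k / n for k in range(n + 1))
    return t ** 1.5 * max(abs(G(x, t + 1.0) - G(x, t)) for x in pts)

limit = 0.5 / math.sqrt(4.0 * math.pi)   # sup attained at x = 0 (own computation)
vals = [weighted_error(t) for t in (10.0, 100.0, 1000.0)]
```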
The result we prove is as follows.
\begin{theorem}\label{thm.conv.3} Under the assumptions that $u_0\in L^1(\mathbb{R}^N)$ and that the signed first moment \begin{equation} {\mathcal N}_{1,i}=\int_{\mathbb{R}^N} y_i\,u_0(y)\,dy \end{equation} is finite (for all coordinates), as well as the second moment: \begin{equation}
{\mathcal N}_2=\int_{\mathbb{R}^N} |y|^2\,|u_0(y)|\,dy<\infty, \end{equation} we get the convergence \begin{equation}\label{conv.fm2}
\displaystyle t^{N/2}|u(x,t)-M\,G_t(x)+\sum_i {\mathcal N}_{1,i}\,\partial_{x_i}G_t(x)|\le C{\mathcal N}_2\,t^{-1}\,, \end{equation} and \begin{equation}\label{imprvdcon1}
\displaystyle \|u(x,t)-M\,G_t(x)+\sum_i {\mathcal N}_{1,i}\,\partial_{x_i}G_t(x)\|_{L^1(\mathbb{R}^N)}\le C\,{\mathcal N}_2 \,t^{-1}\,. \end{equation} The rate $O(t^{-1})$ is optimal under such assumptions. \end{theorem}
\noindent {\bf Remark.} The signed first moment is called in Mechanics the center of mass; for $M\ne 1$, $M\ne 0$, we use the formula \begin{equation}\label{com} x_c=\frac1{M}\int x\, u_0(x)dx\,. \end{equation} In probability it is the average location of the sample and $M=1$. When $x_c$ is finite and $M\ne 0$, the center of mass can be reduced to zero by just a displacement of the spatial axis. This very much simplifies formulas \eqref{conv.fm2}, \eqref{imprvdcon1}, see below.
\noindent {\sl Proof.} (i) Starting as in Theorem \ref{thm.conv.2} we arrive at the formula $$
D:=\displaystyle u(x,t)-MG_t(x) = (4\pi t)^{-N/2} \int_{\mathbb{R}^N} \left(e^{-|x-y|^2/4t}-e^{-|x|^2/4t}\right)\,u_0(y)\,dy. $$
Let us consider the 1D function $f(s)=e^{-|x-sy|^2/4t}$, $s\in\mathbb{R}$, and let us use the Taylor formula $$ f(1)=f(0)+ f'(0)+\int_0^1 f''(s)(1-s)\,ds\,. $$ We have $$
f'(s)=\frac1{2t}\langle y, x-sy\rangle e^{-|x-sy|^2/4t}, \quad f''(s)=(-\frac1{2t}|y|^2 +\frac1{4t^2}|\langle y, x-sy\rangle|^2)e^{-|x-sy|^2/4t}\,. $$ Using these results, we get $D=D_1+D_2$, where $$
D_1= (4\pi t)^{-N/2} \int u_0(y)\,\frac{\langle y, x\rangle}{2t}\, e^{-|x|^2/4t}\,dy = -\sum_i \left(\int y_i u_0(y)\,dy \right) \partial_i G_t(x)\,. $$
On the other hand, putting $\xi=(x-sy)/2\sqrt{t}$ and $\widehat y= y/|y|$, we also have $$
D_2= (4\pi t)^{-N/2}\,t^{-1} \int_{\mathbb{R}^N} \int_0^1 |y|^2 u_0(y)\left(-\frac12 + \langle \widehat y, \xi \rangle^2 \right) e^{-|\xi|^2}\,(1-s)\,dyds. $$ Since the factor depending on $\xi$ is uniformly bounded for all $\xi$, and $0\le 1-s\le 1$, we have $$
|D_2|\le C(N)\left(\int_{\mathbb{R}^N} |u_0(y)|\, |y|^2\,dy\right) t^{-(N+2)/2}. $$ This proves the result.
(ii) We leave it to the reader to prove the corresponding statement in the $L^1$ norm.
(iii) Optimality of the rate follows from Exercise 19. \qed
\noindent {\bf Exercise 20.} Give examples of well-known probability distributions for which the first moment is finite or infinite. Same for the second moment. Try with examples of the form $$
f(x)=C(1+|x|^2)^{-a}\,. $$ For $a=1$ we get the well-known Cauchy distribution that is integrable only in 1D.\\ {\sl Answers.} Integrable $2a>N$, 1st moment $2a>N+1$, 2nd moment $2a>N+2$.
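The thresholds in the answer can be checked symbolically; a small SymPy sketch (my addition, not in the text), in dimension $N=1$:

```python
import sympy as sp

x = sp.symbols('x', real=True)

# Cauchy density: a=1, N=1, so 2a = 2 > N = 1 and it is integrable...
cauchy = 1 / (sp.pi * (1 + x**2))
mass = sp.integrate(cauchy, (x, -sp.oo, sp.oo))            # total mass 1

# ...but 2a = 2 < N+2 = 3: the second moment diverges
second_cauchy = sp.integrate(x**2 * cauchy, (x, -sp.oo, sp.oo))

# For a=2 we have 2a = 4 > N+2 = 3, so the second moment is finite
second_a2 = sp.integrate(x**2 * (1 + x**2)**-2, (x, -sp.oo, sp.oo))
print(mass, second_cauchy, second_a2)
```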
\noindent {\bf Reformulation of the result.} It is well known that the first moment can be eliminated by moving the origin of coordinates to the center of mass $x_c$ defined in formula \eqref{com} when $M\ne 0$. If we do that and then apply Theorem \ref{thm.conv.3} we get the following result.
\begin{corollary} Under the assumptions of Theorem \ref{thm.conv.3} we have \begin{equation} \left\{ \begin{array}{l}
\displaystyle |u(x,t) - M G_t(x-x_c)|\le C\,{\mathcal N}_2^*(u_0)\,t^{-(N+2)/2}, \\[6pt]
\displaystyle \|u(x,t) - M G_t(x-x_c)\|_1\le C\,{\mathcal N}_2^*(u_0)\,t^{-1}\,, \end{array} \right. \end{equation}
where ${\mathcal N}_2^*(u_0)= \int_{\mathbb{R}^N} |x-x_c|^2\, |u_0(x)|\,dx$ is the centered second moment. \end{corollary}
\noindent {\bf Higher development result.} A continuation of this method towards sharper convergence rates using higher moments can be pursued by taking further terms in the Taylor expansion. We will not do it here but only quote the statement that can be found in \cite{DuoZZ}.
\noindent {\bf Theorem 4 \cite{DuoZZ}} {\sl Let $G(x, t)$ be the heat kernel. For any $1 \le p \le N/(N - 1)$ and any integer $k \ge 0$ the solution of the initial value problem for the heat equation satisfies: $$
\|u(x,t)-\sum_{|\alpha |\le k}\frac{(-1)^{|\alpha|}}{\alpha!}\left(\int f(x)x^\alpha\, dx\right)\,D^\alpha G(x,t)\|_p\le C_k t^{-(k+1)/2}\||x|^{k+1}f(x)\|_p, $$
for any initial data $f \in L^1(\mathbb{R}^N, (1 + |x|^k)dx)$ such that $ |x|^{k+1}f\in L^p(\mathbb{R}^N).$}
Let us note that their proof is different from the previous ones and interesting in its own right. To conclude, we also have a result about convergence in the first moment norm.
\begin{theorem}\label{thm.conv.4} Under the assumptions that $u_0\in L^1(\mathbb{R}^N)$ and that the first, second and third moments are finite, we get the convergence \begin{equation}\label{imprvdcon1b}
\displaystyle \|u(x,t)-M\,G_t(x)+\sum_i {\mathcal N}_{1,i}\,\partial_{x_i}G_t(x)\|_{L^1(|x|dx)}\le C \,t^{-1/2}\,. \end{equation} The constant $C>0$ depends on $u_0$ through the second and third moments. \end{theorem}
\noindent {\sl Proof.} (i) We start from the formulas of Theorem \ref{thm.conv.3}, where it is proved that \begin{equation*}
|u(x,t)-M\,G_t(x)+\sum_i {\mathcal N}_{1,i}\,\partial_{x_i}G_t(x)|= |D_2| \end{equation*} with $$
D_2= (4\pi t)^{-N/2}\,t^{-1} \int_{\mathbb{R}^N} \int_0^1 |y|^2 u_0(y)\left(-\frac12 + \langle \widehat y, \xi \rangle^2 \right) e^{-|\xi|^2}\,(1-s)\,dyds, $$
and $\xi=(x-sy)/2\sqrt{t}$. Multiplying by $|x|$ and integrating we have $$
\int_{\mathbb{R}^N} |D_2| \,|x|dx\le C\,t^{-(N+2)/2} \iiint |y|^2\, |u_0(y)|\,\left|-\frac12 + \langle \widehat y, \xi \rangle^2 \right| e^{-|\xi|^2}\,|x|\,dx\,dyds. $$
Writing now $|x|\le 2|\xi| \sqrt{t}+ s|y|$, we split the upper estimate of this integral into $I_1 + I_2$, where $$
I_1=C\, t^{-(N+1)/2} \iiint |y|^2\, |u_0(y)|\,\left|-\frac12 + \langle \widehat y, \xi \rangle^2 \right| e^{-|\xi|^2}\,2|\xi|\,dx\,dyds\,, $$ and $$
I_2= C\, t^{-(N+2)/2} \iiint |y|^2\, |u_0(y)|\,\left|-\frac12 + \langle \widehat y, \xi \rangle^2 \right| e^{-|\xi|^2}\,s|y|\,dx\,dyds\,. $$
As for the first integral, the separate integrals are bounded and only the $\xi$ integral gets a time factor $t^{N/2}$ from $dx=2^N t^{N/2}d\xi$, so that $$ I_1\le C\,t^{-1/2}. $$ The second integral easily gives $I_2\le C\,t^{-1}$ by the assumption on the third moment of $u_0$. The proof is complete. \qed
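A numerical sketch of the $O(t^{-1/2})$ rate in the moment norm (my addition, assuming NumPy is available): the same indicator datum as in the earlier check, now measuring the $L^1(|x|dx)$ error, which should drop by a factor of about $2$ when $t$ is quadrupled.

```python
import numpy as np

def G(x, t):
    # 1D heat kernel
    return (4*np.pi*t)**-0.5 * np.exp(-x**2 / (4*t))

def dG(x, t):
    return -x / (2*t) * G(x, t)

# Indicator of [0,1]: M = 1, N1 = 1/2; midpoint quadrature in y
n = 1000
dy = 1.0 / n
y = (np.arange(n) + 0.5) * dy
u0 = np.ones(n)

def weighted_l1_error(t):
    """|| u(.,t) - M*G_t + N1*(d/dx)G_t ||_{L^1(|x|dx)} by quadrature."""
    x = np.linspace(-25*np.sqrt(t), 25*np.sqrt(t), 4001)
    dx = x[1] - x[0]
    u = (G(x[:, None] - y[None, :], t) * u0).sum(axis=1) * dy
    err = np.abs(u - (G(x, t) - 0.5*dG(x, t)))
    return (err * np.abs(x)).sum() * dx

# The theorem predicts O(t^{-1/2}): the error at 4t is about half that at t
ratio = weighted_l1_error(100.0) / weighted_l1_error(400.0)
print(ratio)
```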
\noindent {\bf General conclusion}. All integrable solutions of the HE in the whole space converge to the Gaussian (in the renormalized forms we have written) if the initial mass is finite, $ u_0 \in L^1 (\mathbb{R}^N) $. But the speed with which they do so depends on how much initial mass is located far away; in colloquial words for probabilists, on ``how populated the tails are''. The quantitative versions we have established use mainly the moments of order 1 and 2. The centered moment of order 2 is called in Probability the variance, the square of the standard deviation. We remind the reader that not all probability distributions have a finite standard deviation (see Exercise 20).
\section{Functional Analysis approach for Heat Equations}\label{sec.conv.fa}
We are going to use energy functions of different types to study the evolution of dissipation equations. The basic equation is the classical heat equation, but the scope is quite general. Our aim is not to establish the convergence of general solutions to the fundamental solution (which is well done by other methods, as we have shown), but a bit more, namely, to find the speed of convergence. After a change of variables (renormalization) this reads as a rate of convergence to equilibrium and relies on functional inequalities. These functional inequalities also play an important role in other areas.
The methods we will introduce next will apply to more general linear parabolic equations that generate semigroups. The method also works for equations evolving on manifolds as a base space. Since around the year 2000 we have been studying these questions for nonlinear diffusion equations. The main nonlinear models are: the porous medium equation, the fast diffusion equation, the $p$-Laplacian evolution equation, the chemotaxis system, some thin film equations, ... Recently, the fractional heat equation and various fractional porous medium equations have been much studied.
\hiddensubsection{Heat Equation Transformations}
Take the classical Heat Equation posed in the whole space $\mathbb{R}^N$ for $\tau>0$: \begin{equation*} u_\tau =\frac12 \Delta_y u \end{equation*} with notation $u=u(y,\tau)$, which is useful since we want to save the standard notation $(x,t)$ for later use. We know the (self-similar) fundamental solution, also called the evolving Gaussian profile $$ U(y,\tau)=C\,\tau^{-N/2}e^{-|y|^2/2\tau}. $$ It was proved in previous sections that this Gaussian is an attractor for all solutions in its {\sl basin of attraction}, consisting of all solutions with initial data that belong to $L^1(\mathbb{R}^N)$ with integral $M=1$. See Sections \ref{sec.scaling}, \ref{sec.conv1}, and \ref{sec.conv2}.
\noindent {\bf Remark.} We have inserted a harmless factor $1/2$ in front of the Laplacian on the right-hand side, following the probabilistic convention, in order to get a Gaussian with clean exponent $-|y|^2/2\tau$, which has standard deviation $\tau^{1/2}$ with no extra factors. Eliminating the prefactor leads to the exponential expression with exponent $-|y|^2/4\tau$, usual in PDE books. Our convention leads to some other simpler constants.
\noindent $\bullet$ {\bf Fokker-Planck equation.} It is the first step in this approach to the asymptotic study. We rescale the variables $u$ and $y$ to factor out the expected sizes, which must mimic the Gaussian sizes, and then take a logarithmic scale for the new time: $$
u(y,\tau)=v(x,t)\,(1+\tau)^{-N/2}, \qquad x=y\,(1+\tau)^{-1/2}, \qquad 2t= \log(1+\tau). $$ After some simple computations this leads to the well-known \sl Fokker-Planck equation \rm for $v(x,t)$: \begin{equation} v_t= \Delta_x v + \nabla_x\cdot(x\,v)=\nabla_x\cdot\left(\nabla_x v + xv\right)\,. \end{equation} We can write it as \ $v_t=L_1(v),$ where the Fokker-Planck operator $L_1=L_{FP}$ can be written in more explicit form as $$ L_1(v)=\Delta v + x\cdot\nabla v + N\,v\,. $$ We check now that when we look for stationary solutions by putting $v_t=0$ we get, as the easiest case, the equation $\nabla v+ x\,v=0$ (after cancelling a divergence). Integrating it under the radial symmetry assumption is the simplest way to get the Gaussian distribution $G=c\,e^{-|x|^2/2}$, and indirectly, the fundamental solution of the original heat equation. We choose the constant $c=(2\pi)^{-N/2}$ to normalize $\int G\,dx=1$.
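A small SymPy check of this computation (my addition; recall that the space variable is also rescaled, $x=y(1+\tau)^{-1/2}$): the stationary Gaussian of the Fokker-Planck equation pulls back to a solution of $u_\tau=\frac12\Delta u$, and $G$ indeed satisfies $L_1(G)=0$ in 1D.

```python
import sympy as sp

y, tau = sp.symbols('y tau', positive=True)
x = sp.symbols('x', real=True)

# (i) Pull the stationary profile back through the change of variables:
# u(y,tau) = (1+tau)^{-1/2} G(y (1+tau)^{-1/2}) solves u_tau = u_yy / 2
u = (2*sp.pi*(1 + tau))**sp.Rational(-1, 2) * sp.exp(-y**2 / (2*(1 + tau)))
heat_residual = sp.simplify(sp.diff(u, tau) - sp.diff(u, y, 2)/2)

# (ii) G(x) = (2*pi)^{-1/2} exp(-x^2/2) is stationary for Fokker-Planck:
# L1(G) = G'' + (x G)' = 0
Gs = (2*sp.pi)**sp.Rational(-1, 2) * sp.exp(-x**2/2)
fp_residual = sp.simplify(sp.diff(Gs, x, 2) + sp.diff(x*Gs, x))
print(heat_residual, fp_residual)
```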
The asymptotic result we are aiming at consists precisely of proving that when $v_0(x)$ is integrable with mass 1 then $v(x,t)$ will tend to $G$ as $t\to\infty$. For a general presentation of the FP equation see \cite{Risken}. We will keep the notation $v(x,t)$ for the solutions of the Fokker-Planck equation throughout this section.
\noindent $\bullet$ {\bf The Ornstein-Uhlenbeck semigroup.} (i) In order to study relative error convergence it seems reasonable to pass to the quotient $w = v/G$, where $G$ is the stationary state, to get the \sl Ornstein-Uhlenbeck \rm version
\begin{equation} w_t= L_2(w)= G^{-1}\,\nabla\cdot\big(\,G\,\nabla w\,\big)=\Delta w - x\cdot\nabla w\,, \end{equation} a symmetrically weighted heat equation with Gaussian weight. Note that the corresponding stationary solution is now $W=1$, much easier.
(ii) The two-term form of the r.h.s. looks easier, with a diffusion and a convection term. Indeed, the weighted form of the {\sl Ornstein-Uhlenbeck operator} $L_2=L_{OU}$ is very convenient for our calculations. To begin with, it allows us to prove the symmetry of the operator in the weighted space $X=L^2(Gdx)$: for every two convenient functions $w_1$ and $w_2$ we have \begin{equation} \int_{\mathbb{R}^N} (L_2w_1)\,w_2\,G dx=\int_{\mathbb{R}^N} w_1\,(L_2w_2)\,Gdx= -\int_{\mathbb{R}^N} \langle \nabla w_1,\nabla w_2\rangle Gdx. \end{equation} It seems natural to introduce the Gaussian measure $d\mu=Gdx$ as a reference measure in the calculations. The important consequence of this computation is that $A=-L_2$ is a nonnegative and self-adjoint operator in the Hilbert space $X=L^2(d\mu)$. This is a rather large space that includes all functions with polynomial growth. We will keep the notation $w(x,t)$ for the solutions of the Ornstein-Uhlenbeck equation throughout this section.
(iii) We may also observe that $L_1(v)=G\,L_2(v/G)$ and that $L_2$ is the adjoint to $L_1$ in the sense that for conveniently smooth and decaying functions $$ \int_{\mathbb{R}^N} (L_1 v)w\, dx=\int_{\mathbb{R}^N} v\,(L_2w)dx. $$ More formally, we can consider the duality between the spaces $X=L^2(Gdx)$ and $X'=L^2(G^{-1}dx)$ given precisely by the integral of the product and the operators are adjoint.
(iv) Finally, to complete the comparison we can write the Fokker-Planck equation as $$ \partial_t v= \nabla \cdot (G\nabla (v/G))\,. $$ The analogy says that this operator is negative and self-adjoint in the less familiar space $X_1=L^2(G^{-1}dx)$, which is much smaller than $L^2(\mathbb{R}^N)$. For a detailed mathematical presentation of the Ornstein-Uhlenbeck semigroup we refer to \cite{AnSj}.
\noindent $\bullet$ {\bf The Hamiltonian connection.} Start from the Fokker-Planck equation and use now the change of variables \ $v=zG^{1/2}$. Then, $$ G^{1/2}z_t= \nabla\cdot(G\nabla (z G^{-1/2})). $$ We have $$ \nabla\cdot(G\nabla (z G^{-1/2}))=G^{1/2}\Delta z+ \nabla G^{1/2} \cdot\nabla z+G\nabla G^{-1/2}\cdot \nabla z+ z \nabla (G \nabla G^{-1/2}), $$
so that the equation for $z$ becomes: $$ z_t= \Delta z- V(x)z, \qquad V(x)=G^{-1/2}\Delta G^{1/2}\,, $$ that we may write as a real Schr\"odinger Equation $z_t= L_3(z)=-H(z)$ with Hamiltonian operator $$
H(z)=-\Delta z+V(x)z, \qquad V=\frac14 |x|^2-\frac{N}2\,. $$
In calculating the Schr\"odinger potential we have used $G^{1/2}=e^{-|x|^2/4}$. Operator $H$ is directly symmetric in $L^2$ with no weight. The fact that it is positive is not clear from the formulas but it will follow from the equivalence with the Ornstein-Uhlenbeck operator.
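The potential can be checked symbolically; a one-line SymPy sketch in 1D (my addition; the normalizing constant of $G^{1/2}$ cancels in the quotient):

```python
import sympy as sp

x = sp.symbols('x', real=True)
G_half = sp.exp(-x**2 / 4)          # G^{1/2} in 1D, up to a constant

# V = G^{-1/2} * (G^{1/2})'' should equal x^2/4 - N/2 with N = 1
V = sp.simplify(sp.diff(G_half, x, 2) / G_half)
print(V)
```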
\noindent $\bullet$ The equivalence of this equation with the former ones comes from the transformation formulas $$ L_1(v)=G^{1/2}L_3(v/G^{1/2})=G\,L_2(v/G), $$ and also that if $v_i=G^{1/2}z_i=G w_i$, $i=1,2$, we get $$ \int_{\mathbb{R}^N} v_1\,v_2 \, G^{-1}dx=\int_{\mathbb{R}^N} z_1\,z_2\,dx=\int_{\mathbb{R}^N} w_1\,w_2\,Gdx, $$ which allows us to show that all three operators $L_i$ are self-adjoint and dissipative, since we have already proved it for $L_2$. An interesting remark about the Schr\"odinger representation is that it does not need any weighted space: $X_3=L^2(\mathbb{R}^N)$.
The rich equivalence among the three equations and also with the heat equation is a favorite topic in Linear Diffusion and Semigroup Theory.
\noindent $\bullet$ {\bf General Fokker-Planck Equation.} It is based on generalising the coefficient $x$ of the convection term into a more general term that is the gradient of a potential that we call $S(x)$.\footnote{The standard notation is $U(x)$ but we will change the notation here to $S(x)$ to avoid confusion with other uses of $U$.} The General Fokker-Planck equation (GFP) reads \begin{equation} v_t=\Delta v+ \nabla \cdot (\nabla S\,v)\,. \end{equation} Standard assumption is that $S$ must be a positive and convex function in $\mathbb{R}^N$, called the potential. The stationary state is now $\widetilde G(x)=C\,e^{-S(x)}$, and the equation reads then $v_t=L_1(v)$ with $$ L_1(v)=\nabla \cdot (\widetilde G\,\nabla (v/\widetilde G)). $$ The other equations, General Ornstein-Uhlenbeck and General Hamiltonian Equation, follow in the same way as before using only the expressions in terms of $G$ that is replaced by $\widetilde G$. The weighted scalar products have no difference and the relation of norms still holds. In the Hamiltonian representation we get a potential (put $G=e^{-S}$) $$
V(x)=G^{-1/2}\Delta (G^{1/2})= \frac14|\nabla S|^2-\frac12\Delta S\,. $$
On the other hand, the analysis of the complete spectrum is not possible unless we have very particular cases of potentials $S$. Moreover, the connection with a renormalization of the heat equation is completely lost.
\hiddensubsection{Asymptotic Energy Method via the Ornstein-Uhlenbeck Equation}
The Ornstein-Uhlenbeck formulation allows for a very clear and simple treatment of the problem of convergence with rate to the Gaussian profile. We may assume without loss of generality that $$ \int w\,d\mu=\int v\,dx=\int u\,dy=1\,,
$$
with the above notations for $u, v,$ and $w$. We now make a simple but crucial calculation on the time decay of the energy for the OUE: \begin{equation}
{\mathcal F}(w(t))=\int_{\mathbb{R}^N} |w-1|^2\,G\,dx, \quad \frac{d{\cal F}(w(t))}{dt} =-2\int_{\mathbb{R}^N}|\nabla w|^2\,G\,dx =-{\mathcal D}(w(t)). \end{equation} We can now use a result from abstract functional analysis: the {\bf Gaussian Poincar\'e inequality } with measure $d\mu=G(x)\,dx$: $$
\int_{\mathbb{R}^N} w^2d\mu- \left(\int_{\mathbb{R}^N}w\,d\mu\right)^2 \le C_{GP} \int_{\mathbb{R}^N} |\nabla w|^2\,d\mu\,. $$ The sharp constant in this inequality is precisely $C_{GP}=1,$ with no dependence on the dimension. Moreover, the functions that realize the optimal constant are $w(x)=x_i$ for any $i=1,2,\dots,N$. \footnote{This is an old inequality in the folklore of Hermite polynomials, and probably was known in one dimension to both mathematicians and physicists in the 1930's in relation to eigenvalue problems, as mentioned in \cite{Beck}.} We will give below a proof based on the analysis of the spectrum of the Ornstein-Uhlenbeck operator.
Then, the left-hand side is just ${\cal F}$ and the inequality implies $$ -\frac{d}{dt}\,{\mathcal F}(w(t))\ge 2{\mathcal F}(w(t)), $$ which after integration gives ${\mathcal F}(w(t))\le {\mathcal F}(w_0)e^{-2t}$, i.e.: \begin{equation*}
\int_{\mathbb{R}^N}|w-1|^2\,{\rm d}\mu \le {\rm e}^{-2t}\,
\int_{\mathbb{R}^N}|w_0-1|^2\,{\rm d}\mu\quad\forall\; t\ge 0 \end{equation*}
We have proved the convergence to equilibrium in the following form.
\begin{theorem} Under the assumptions on the initial data $$ \int_{\mathbb{R}^N}w_0\,{\rm d}\mu =1, \qquad \int_{\mathbb{R}^N}w_0^2\,{\rm d}\mu <\infty\,, $$ the solutions of the OUE satisfy the following stabilization estimate $$
\|w(t)-1\|_{L^2(d\mu)}\le \|w_0-1\|_{L^2(d\mu)}\,{\rm e}^{-t}. $$ \end{theorem}
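The sharp constant $C_{GP}=1$ in the Gaussian Poincar\'e inequality used above can be tested directly; a SymPy sketch in 1D (my addition): the linear function $w=x$ saturates the inequality, while $w=x^2$ gives a strict gap.

```python
import sympy as sp

x = sp.symbols('x', real=True)
G = sp.exp(-x**2/2) / sp.sqrt(2*sp.pi)    # Gaussian measure dmu = G dx, N=1

def variance(w):
    # left-hand side of the Gaussian Poincare inequality
    return (sp.integrate(w**2 * G, (x, -sp.oo, sp.oo))
            - sp.integrate(w * G, (x, -sp.oo, sp.oo))**2)

def dirichlet(w):
    # right-hand side: \int |w'|^2 dmu
    return sp.integrate(sp.diff(w, x)**2 * G, (x, -sp.oo, sp.oo))

eq_case = (variance(x), dirichlet(x))            # equality case: (1, 1)
strict_case = (variance(x**2), dirichlet(x**2))  # strict case: (2, 4)
print(eq_case, strict_case)
```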
This is the first of the well known Functional Estimates for the solutions to the HE. The renormalization $\int w_0\, d\mu=1$ is no restriction.
In terms of $ v $ the hypotheses are $ \int v_0 \,dx = 1 $ and $ \| v_0-G \|_{L^2 (G^{-1}dx)} <\infty $. The weight is now $K=G^{-1}$ and the measure $ d \nu = K \,dx $, which grows at infinity instead of decaying. The result is $$
\int_{\mathbb{R}^N}|v-G|^2\,d\nu \le {\rm e}^{-2t}\,
\int_{\mathbb{R}^N}|v_0-G|^2d\nu \quad\forall\; t\ge 0. $$
Recall that we have a new logarithmic time, $2t=\log(1+\tau)$. The rate of convergence in real time in both variables is then $O((1+\tau)^{-1/2})=O(\tau^{-1/2})$ as $\tau\to \infty$.
\noindent $\bullet$ Then by regularity theory (the regularizing effect from $L^1$ to $L^\infty$) for the heat equation and the other equations, we get estimates in the sup norm with similar relative rates, at least locally in space. But note that since $w=x_1e^{-t}$ is a solution of the OUE, we cannot get a uniform estimate in $L^\infty$ of the Ornstein-Uhlenbeck variable.
\section{Calculation of spectrum. Refined asymptotics}\label{sec.spec}
We will proceed with a further step in the analysis to get more precise asymptotics. Indeed, the knowledge of the spectrum of the equivalent operators allows to obtain a complete description of the long-time behaviour in weighted spaces. This is a well-known fact in the study of the HE posed in bounded domains, that has a parallel here.
\hiddensubsection{Spectrum} We will make all computations on the Ornstein-Uhlenbeck operator. The Gaussian Poincar\'e inequality is a simple consequence of the following analysis of the spectrum of the Ornstein-Uhlenbeck Operator in $L^2(d\mu)$:
(a) Since the FP and the OU operators have a compact inverse, we conclude that they have a discrete spectrum, \cite{Kavian}. The ground state of the OUE is formed by the constant function $w=1$ (which comes from the Gaussian function $v=G$ for the FPE) with eigenvalue $\lambda_0=0$. The next eigenfunctions are the coordinate functions $\phi_i=x_i$ corresponding to the eigenvalue $\lambda_1=1$ with multiplicity $N$.
(b) In 1D we find the rest of the eigenvalues and eigenfunctions as the family of Hermite polynomials, given by the compact formula \begin{equation} H_k(x)= (-1)^k G(x)^{-1} (d/dx)^k G(x)\,, \end{equation} which tells much about how we will view them. Indeed, the formula can be derived from the fact that the derivatives in $y$ of the Gaussian evolution solution $U(y,\tau)$ are still solutions of the heat equation with a different decay rate. Passing to the FPE we conclude that the derivatives $D^k G$ are eigenfunctions of $L_1$ (here $D=d/dx$). It is easy to see that these solutions have the form $H_k(x)G(x)$ where $H_k$ is a polynomial of degree $k$ (proof by induction). Passing to the OUE we get the formula above. More precisely, we have the recursion formula: $$ G H_{k+1}=-d/dx(H_k G)=-(H_k'-xH_k) G, \quad H_{k+1}=(x-\frac{d}{dx})H_k\,. $$ The first members of the family $H_k$ are $1, x, x^2-1, \dots$
The eigenvalue corresponding to $H_k$ for $L_2=L_{OU}$ is $\lambda_k=k$. This can be seen from the heat equation formula, since differentiating in $x$ adds a factor $\tau^{-1/2}$ to the decay, which turns into a factor $e^{-t}$ for every derivative we take. Induction proof at the FP level: recall that in 1D $L_1(v)=(G(v/G)')'=v''+xv'+v$, so that $(L_1 v)'=L_1(v')+v'$. If we assume that $$ L_1(D^kG)=(D^kG)''+x(D^kG)'+D^kG=-\lambda_k D^kG\,, $$ then, differentiating this identity, $$ L_1(D^{k+1}G)=\big(L_1(D^kG)\big)'-D^{k+1}G=-\lambda_k D^{k+1}G-D^{k+1}G=-(\lambda_k+1)\,D^{k+1}G. $$ Therefore, $\lambda_{k+1}=\lambda_k+1$. Since $\lambda_0=0$ the proof is done.
It can then be proved that the $H_k(x)$ form a basis of $L^2(d\mu)$ in 1D.
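The recursion and the eigenvalues can be verified symbolically; a SymPy sketch (my addition) building $H_0,\dots,H_6$ and checking $L_2 H_k=-k\,H_k$:

```python
import sympy as sp

x = sp.symbols('x', real=True)

# Hermite polynomials via the recursion H_{k+1} = (x - d/dx) H_k, H_0 = 1
H = [sp.Integer(1)]
for _ in range(6):
    H.append(sp.expand(x*H[-1] - sp.diff(H[-1], x)))

# Each H_k is an eigenfunction of L2 w = w'' - x w' with eigenvalue -k,
# so every entry below should be identically zero
checks = [sp.expand(sp.diff(h, x, 2) - x*sp.diff(h, x) + k*h)
          for k, h in enumerate(H)]
print(H[:4], checks)
```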
(c) For several dimensions we have the functorial property: if $x=(x_1,x_2)$ and $w(x)=w_1(x_1)w_2(x_2)$, we get $$ L_2(w)=w_1\,L_2(w_2)+ L_2(w_1)\,w_2\,. $$ This produces new eigenfunctions in higher dimensions, and it gives for the product function the sum of the eigenvalues: $\lambda(w)=\lambda(w_1)+\lambda(w_2)$. The set of such products generates a basis of eigenfunctions; this is essentially due to Fubini's theorem, cf. \cite{Bell}. Our account is very short but we consider this part an extension, and it is well documented in the corresponding literature.
\hiddensubsection{Refined asymptotics}
From the spectrum we can get a very precise description of the convergence in the weighted spaces by using the equivalent of the Fourier analysis on bounded domains. The meaning of the coefficients for the original equation has to be understood.
We get $$ w(x,t)=\sum_\alpha c_\alpha\, H_\alpha(x)\,e^{-|\alpha| t}, \quad v(x,t)=\sum_\alpha c_\alpha \,\partial_\alpha G(x)\,e^{-|\alpha| t}\,, $$
where $\alpha$ is an $N$-multi-index, $k=|\alpha|$ is its order, and $H_\alpha$ is the corresponding multidimensional Hermite polynomial, after renormalization in $L^2(d\mu)$. We have $$ c_\alpha=\frac{\langle w_0, H_\alpha\rangle_{L^2_\mu}}{\langle H_\alpha, H_\alpha\rangle_{L^2_\mu}}=\frac{\int v_0(x)H_\alpha(x)\,dx}{\int H^2_\alpha(x)\,d\mu(x)}, $$ so that $c_0$ is the mass, while for $k=1$ the coefficients $c_i$ are the first coordinate moments $\int v_0(x)\,x_i\,dx$ after normalization, and so on. The convergence of the $w$ series holds in $L^2(d\mu)$, with errors of the order of the first term that is left out. The convergence of the $v$ series holds in $L^2(G^{-1}dx)$, with errors of the order of the first term that is left out.
\section{Convergence via the Boltzmann entropy approach}\label{sec.entropy}
There is another approach to the convergence to the Gaussian that starts the analysis from Boltzmann's ideas on entropy dissipation. We start now from the Fokker-Planck equation $v_t=\Delta v+ \nabla\cdot(xv)$ and consider the functional called {\bf entropy} $$ {\cal E}(v):=\int_{\mathbb{R}^N} v\,\log(v/G)\,dx=\int_{\mathbb{R}^N} v\,\log(v)dx+\frac12\int_{\mathbb{R}^N} |x|^2 v\,dx +C\,, $$ and we assume that the data are such that the initial entropy is finite. We recall that no decay rate is possible without some restriction on the data.
Differentiating along the flow (i.e., for a solution) leads to $$ \frac{d{\cal E}(v)}{dt}=-{\cal I}(v), \quad
{\cal I}(v)=\int_{\mathbb{R}^N} v\,\left|\frac{\nabla v}v+ x \right|^2\,dx=\int_{\mathbb{R}^N} v\,|\nabla \log(v/G)|^2\,dx\,. $$ The dissipation ${\cal I}(v)$ is known as the (relative) Fisher information. Let us continue the proof. Putting now $v=Gf^2$ we find that $$
{\cal E}(v)=2\int_{\mathbb{R}^N} f^2\,\log(f)\,d\mu, \quad {\cal I}(v)=4\int_{\mathbb{R}^N} |\nabla f|^2\,d\mu. $$ The famous {\sl logarithmic Sobolev inequality} proved by Gross in 1975, \cite{Gross75}, says that (for all suitable functions, not only solutions) $$ {\cal E}\le \frac 12 {\cal I}\,, $$ and we obtain the decay ${\cal E}(t)\le {\cal E}(0)\,e^{-2t}$. This means a precise decay for the entropy functional. The calculations are justified for smooth solutions, and then we can pass to the limit for general solutions with finite entropy.
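The exponential entropy decay can be observed on explicit Gaussian solutions of the Fokker-Planck equation; a SymPy sketch (my addition), using the known fact that centered Gaussian data with variance $s_0$ stay Gaussian with variance $s(t)=1+(s_0-1)e^{-2t}$:

```python
import sympy as sp

x, t, s0 = sp.symbols('x t s0', positive=True)

s = 1 + (s0 - 1)*sp.exp(-2*t)                   # variance along the flow
v = (2*sp.pi*s)**sp.Rational(-1, 2) * sp.exp(-x**2/(2*s))

# These Gaussians solve the Fokker-Planck equation v_t = v_xx + (x v)_x
fp_residual = sp.simplify(sp.diff(v, t) - sp.diff(v, x, 2) - sp.diff(x*v, x))

# Relative entropy of N(0,s) with respect to N(0,1): E = (s - 1 - log s)/2
E = (s - 1 - sp.log(s)) / 2

# Spot-check the decay E(t) <= E(0) e^{-2t} at a few sample points
decay_ok = all(
    float(E.subs({s0: a, t: b})) <= float(E.subs({s0: a, t: 0}) * sp.exp(-2*b))
    for a, b in [(4, 1), (4, 3), (sp.Rational(1, 4), 2)]
)
print(fp_residual, decay_ok)
```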
In order to obtain decay in standard norms, there are formulas connecting the entropy with other norms, like the Csisz\'ar-Kullback inequality, which implies that $$
\|f-G\|^2_{L^1(\mathbb{R}^N)}\le 2{\cal E}(f, G), $$ for any $f, G\in L^1(\mathbb{R}^N)$ positive with equal mass, see \cite{Barron}. This paper is a very good early reference on the subject of entropies and the central limit theorem.
There are many works dealing with the use of functionals and functional inequalities to arrive at asymptotic behaviour results plus a rate of convergence for this kind of equations. Let us mention here \cite{A4, BE84, Chafai, MV, Tosc} and the references to be mentioned in Section \ref{sec.appl2}.
\hiddensubsection{About entropy in Physics}
Entropy was introduced as a state function in thermodynamics by R. Clausius in 1865, in the framework of the second law of thermodynamics, in order to interpret the results of S. Carnot.
A statistical physics approach: Boltzmann's formula (1877) defines the entropy of a physical system in terms of a counting of its micro-states. Boltzmann's equation: $$\partial_t f + v \cdot\nabla_x f = Q(f, f)\,. $$
It describes the evolution of a gas of particles having binary collisions at the kinetic level; $f(t,x,v)$ is a time-dependent distribution function (probability density) defined on the phase space $(x,v)\in \mathbb{R}^N\times\mathbb{R}^N$. The Boltzmann entropy \ $H[f] :=\iint f \log (f)\, dx\, dv$ \ measures irreversibility. The famous H-Theorem (1872) says that $$ \frac{d}{dt}H[f] =\iint Q(f, f) \log (f)\, dx\, dv\le 0\,. $$ Other approaches to thermodynamic entropy are due to Carath\'eodory (1908), Lieb-Yngvason (1997), \dots; see \cite{LY}.
An important version of entropy appears in Information Theory. In 1948, while working at Bell Telephone Laboratories, Claude Shannon, an electrical engineer, set out to mathematically quantify the statistical nature of ``lost information'' in phone-line signals (cf.\ the Wikipedia article). He arrived at an analogue of thermodynamic entropy, now known as information entropy.
There is also a concept of entropy in probability theory (with reference to an arbitrary measure).
\section{Brief review of other heat equation problems}\label{sec.appl1}
The methods presented above have been applied to prove convergence to a distinguished solution (which plays the role of the Gaussian fundamental solution) in different contexts. We have not mentioned some other methods, like the transport method of Jordan-Kinderlehrer-Otto \cite{JKO98}, 1998, where the Fokker-Planck equation is interpreted as the steepest descent for a free energy related to the Boltzmann-Gibbs entropy, taken with respect to the Wasserstein metric. This novel technique, based on mass transportation \cite{VillaniOT}, has played an increasing role since then.
\hiddensubsection{Equation with forcing} A modification that still keeps the flavor of this presentation consists of considering a forcing term \begin{equation} \partial_t u=\Delta u +f\,, \end{equation} where $f$ is an integrable function of $(x,t)\in Q_T=\mathbb{R}^N\times(0,T)$, $T>0$. It can be easily proved that a representation formula holds, \cite{EvansPDE}. From it we can derive asymptotic results that we leave as exercises.
\noindent {\bf Exercise 21.} (i) Prove that the $L^1$ estimate of Theorem \ref{main.convthm}, formula \eqref{cr.l1}, holds if $f\in L^1(Q_\infty)$, and we take as $M$ the accumulated mass defined as \begin{equation} M=\int_{\mathbb{R}^N} u_0(x)\,dx+\iint_{Q_\infty} f(x,t)\,dxdt. \end{equation}
\noindent (ii) The $L^\infty$ statement \eqref{cr.linf} needs some further decay condition on $f$ as $t\to\infty$. Prove that it holds e.g. if $\|f(\cdot,t)\|_\infty \le Ct^{-\gamma}$ with $\gamma\ge 1+N/2$.
The results of Section \ref{sec.conv1} can be repeated under conditions that we invite the reader to provide.
\noindent $\bullet$ Let us continue with a variation of the heat equation with forcing. In fact, there are many studies where the forcing term takes the form $f=f(u)$, and they fall into what is called reaction-diffusion, \cite{Smo82}. The simplest case corresponds to linear forcing, $f(u)=\kappa u$ with $\kappa\ne 0$. The modifications in the analysis are minimal since the change of variables $v(x,t)=u(x,t)\,e^{-\kappa t}$ transforms the equation \begin{equation} \partial_t u=\Delta u +\kappa u\,, \end{equation} into the classical form $v_t=\Delta v$, to which our results apply. We thus conclude that for very large $t$ we have the asymptotic behaviour \begin{equation} u(x,t)\sim M\,G(x,t)\,e^{\kappa t}\,, \end{equation} which preserves the Gaussian profile as asymptotic shape but not the decay rates in time. The reader is asked to write the precise theorems using the results of Sections \ref{sec.scaling}, \ref{sec.conv1}, and \ref{sec.conv2}.
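A SymPy check of the change of variables (my addition): the explicit solution $u=G(x,t+1)\,e^{\kappa t}$ satisfies $u_t=u_{xx}+\kappa u$, in agreement with the asymptotics just described.

```python
import sympy as sp

x, kappa = sp.symbols('x kappa', real=True)
t = sp.symbols('t', positive=True)

# u = G(x, t+1) * exp(kappa*t): Gaussian profile times exponential factor
G = (4*sp.pi*(t + 1))**sp.Rational(-1, 2) * sp.exp(-x**2 / (4*(t + 1)))
u = G * sp.exp(kappa*t)

# Residual of u_t = u_xx + kappa*u should vanish identically
residual = sp.simplify(sp.diff(u, t) - sp.diff(u, x, 2) - kappa*u)
print(residual)
```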
\hiddensubsection{Dipoles and related issues} We have already seen that for the heat equation with signed $L^1$ initial data and zero mass, i.\,e., $\int u_0(x)\,dx=0$, the term representing the Gaussian approximation vanishes, so that the decay as $t\to\infty$ is faster and the first approximation is given by a term that combines partial derivatives of the Gaussian: \begin{equation} u(x,t)\sim D_{\bf v} G_t(x)\,, \qquad {\bf v}=\sum_i {\cal N}_{1,i}(u_0)\,e_i\,, \end{equation} of course under the condition that this vector $\bf v$ does not vanish. The $e_i$ are the canonical basis vectors and $D_{\bf v}$ denotes the directional derivative. Therefore, $u(x,t)= O(t^{-(N+1)/2})$.
\noindent $\bullet$ There is a very interesting application of this result in $N=1$. Indeed, we may solve the problem of the asymptotic behaviour of the solutions of the heat equation posed in the half line $I=(0,\infty)\subset \mathbb{R}$ for $t>0$ with lateral Dirichlet data $u(0,t)=0$; let us call it (DP-HE-HL). We assume that $u_0\in L^1(I)$ and $u_0\ge 0 $ (the last assumption is made for simplicity). The idea is to extend the initial data to the whole line by putting $$ {\widehat u}_0(x)=u_0(x) \quad \mbox{if } \ x>0, \qquad {\widehat u}_0(x)=-u_0(-x) \quad \mbox{if } \ x<0 $$ (the anti-symmetric reflection). Note that ${\widehat u}_0\in L^1(\mathbb{R})$, its total mass is zero, and its first moment $2{\cal N}_1$ is not zero. Solving the heat equation with the usual representation formula we obtain a solution ${\widehat u}(x,t)$ defined for $x\in\mathbb{R}$ and $t>0$. This solution must be antisymmetric, ${\widehat u}(x,t)=-{\widehat u}(-x,t)$, by the form of the data and an elementary symmetry property of the heat equation. Restricting ${\widehat u}$ to $x>0$ we find the unique solution of Problem (DP-HE-HL). We can now copy our asymptotic results for ${\widehat u}$ and translate them to $u$. Let us write \begin{equation}\label{dipole}
D(x,t)=-\partial_{x}G_t(x)=\frac{x}{4\sqrt{\pi}\,t^{3/2}}\,e^{-|x|^2/4t}\,. \end{equation} Theorems \ref{thm.conv.3} and \ref{thm.conv.4} imply that $D(x,t)$ is the asymptotic attractor of the evolution in the half line.
\begin{theorem}\label{thm.dipoleconv} Let us assume that $u_0\in L^1(I)$ and that the first moment in $I$ \begin{equation} {\mathcal N}_{1}=\int_0^\infty y\,u_0(y)\,dy \end{equation} is finite, as well as the second moment:
${\mathcal N}_2=\int_0^\infty y^2\,|u_0(y)|\,dy<\infty.$ Then, we get the following convergence formulas with rate for the solutions of (DP-HE-HL) \begin{eqnarray*}\label{conv.dip}
\displaystyle \|u(x,t) - 2{\mathcal N}_{1}\, D(x,t)\|_{L^1(I)}\le C\,{\mathcal N}_2 \,t^{-1}\,,\\
\displaystyle t^{1/2}|u(x,t)- 2{\mathcal N}_{1}\, D(x,t)|\le C{\mathcal N}_2\,t^{-1}\,. \end{eqnarray*} The rate $O(t^{-1})$ is optimal under such assumptions. If the third moment is finite we also have \begin{equation}
\displaystyle \|u(x,t) - 2{\mathcal N}_{1}\, D(x,t)\|_{L^1(I; |x|dx)}\le C\,({\mathcal N}_2 + {\mathcal N}_3)\,t^{-1/2}. \end{equation} \end{theorem}
Function $D(x,t)$ given by \eqref{dipole} is called the {\bf dipole solution} because it takes the derivative of the unit Dirac delta as initial data. It has a constant-in-time first moment that characterizes its strength. Note that the mass in $I$, $M_+=\int_0^\infty u(x,t)\,dx$, is not conserved in time, but decays like $O(t^{-1/2})$. \footnote{Dipole solutions appear often in Physics, especially in Electromagnetism.}
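The stated properties of the dipole are easy SymPy exercises (my addition): $D=-\partial_x G_t$ solves the heat equation, its half-line first moment is constant in time ($=1/2$ with the kernel's normalization), and the half-line mass decays exactly like $t^{-1/2}$.

```python
import sympy as sp

x, t = sp.symbols('x t', positive=True)

# 1D heat kernel and the dipole D = -dG/dx
G = (4*sp.pi*t)**sp.Rational(-1, 2) * sp.exp(-x**2 / (4*t))
D = -sp.diff(G, x)

# D solves the heat equation
heat_residual = sp.simplify(sp.diff(D, t) - sp.diff(D, x, 2))

# First moment on (0, oo) is conserved in time
first_moment = sp.simplify(sp.integrate(x*D, (x, 0, sp.oo)))

# Half-line mass decays like t^{-1/2}: mass * sqrt(t) is constant
mass_scaled = sp.simplify(sp.integrate(D, (x, 0, sp.oo)) * sp.sqrt(t))
print(heat_residual, first_moment, mass_scaled)
```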
We can derive a more general convergence result:
\begin{theorem}\label{thm.dipoleconv2} Let us assume that $u_0\in L^1(I)$ and the first moment in $I$, ${\mathcal N}_{1}$, is finite. Then, \begin{equation}
\displaystyle \|u(x,t) - 2{\mathcal N}_{1}\, D(x,t)\|_{L^1(I; |x|dx)}\to 0 \end{equation} as $t\to \infty$. \end{theorem}
\noindent {\sl Proof.} (i) Assume that $u_0\ge 0$. Approximate $u_0$ by a compactly supported function $\widetilde u_0\le u_0$, so that the error in the first moment is at most ${\varepsilon}$. Then, $$
\|u(x,t) - \widetilde u(x,t)\|_{L^1(I; |x|dx)}=\int_0^\infty (u(x,t)-\widetilde u(x,t)) xdx \le {\varepsilon} $$ because this expression is conserved with time for solutions of Problem (DP-HE-HL) in $I$ (check this!). The previous theorem for the solution $\widetilde u$ gives $$
\|\widetilde u(x,t) - 2{\widetilde {\mathcal N}}_{1}\, D(x,t)\|_{L^1(I; |x|dx)}\le K\,t^{-1/2}, $$
which can be made less than ${\varepsilon}$ if $t$ is large enough. Finally, $| {\mathcal N}_{1}-{\widetilde {\mathcal N}}_{1}| \le {\varepsilon}$ and $D$ has a finite first moment. Combining all this, the result follows.
(ii) For signed data, split into positive and negative part, and then combine the results. \qed
\noindent $\bullet$ Similar formulas hold for the heat equation posed in a half space $\Omega$ in $N$ dimensions. After rotation and translation we may take \ $\Omega=\{x\in \mathbb{R}^N: x_1>0\}$. We consider zero lateral Dirichlet data: $u(x,t)=0$ for $x=(0,x_2,\dots,x_N)$. In this case a multi-dimensional dipole solution appears, $D=-\partial_{x_1}G_t$. The equivalent of Theorem \ref{thm.dipoleconv} holds. We leave the easy details to the reader.
Finally, solutions of the type $D_{12}=\partial_{x_1}\partial_{x_2}G_t$ are attractors for problems posed in quadrant domains $\{x\in \mathbb{R}^N: x_1>0, x_2>0\}$. And so on.
\noindent $\bullet$ By using the symmetric extension instead of the anti-symmetric one we can solve the heat equation posed in a half line $I=(0,\infty)\subset \mathbb{R}$ for $t>0$ with lateral Neumann data $u_x(0,t)=0$, let us call it (NP-HE-HL). Let the reader fill in the details if needed.
\noindent $\bullet$ A related problem in several dimensions occurs when the domain is an exterior domain with one or several holes and appropriate boundary conditions. Thus, with zero Dirichlet boundary conditions and integrable data, convergence to the Gaussian holds, while in 1D we fall back into the dipole problem.
\hiddensubsection{Other problems in subsets of $\mathbb{R}^N$} There are a number of problems involving the heat equation that have been studied in great detail, like the heat equation posed in a bounded domain with boundary conditions of different types (Dirichlet, Neumann, mixed, or other), but these settings lead to quite different results that depart too much from the picture presented here, so we will not comment on them.
There are also equations with coefficients or weights; they form a large topic that leads also very far from the present presentation.
\hiddensubsection{Heat equation on manifolds}
The construction of the heat equation has been carefully studied when the equation is posed on a Riemannian manifold $(M^N,g)$. The equation then takes the form
\begin{equation}
\partial_t u(x,t)= \Delta_{g}\, u(x,t)=|g(x)|^{-1/2}\sum_{i,j=1}^{N}\partial_i \left(g^{ij}(x)\,|g(x)|^{1/2}\,\partial_j u(x,t)\right)\,, \end{equation}
where $g_{ij}$ is the metric tensor, $g^{ij}$ its inverse, $|g|$ its determinant, so that $\Delta_{g}$ is the Laplace-Beltrami operator, \cite{Gr2009}. Two particular manifolds are especially relevant because their internal symmetries and homogeneity make the theory especially strong and mathematically appealing: the $N$-dimensional sphere $\mathbb{S}^N$ and the hyperbolic space $\mathbb{H}^N$. The heat flow on the former is easily shown to stabilize to a constant (much like a Neumann problem in $\mathbb{R}^N$). The flow on $\mathbb{H}^N$ is more interesting, and typical solutions with finite mass converge to a modified Gaussian function, the hyperbolic fundamental solution, which is described in detail in \cite{Grig98}, see also \cite{Grig87}.
\section{Application to other diffusion equations}\label{sec.appl2}
We now examine some nonlinear diffusion equations for which similar methods and results have been developed over the last half century, with the Gaussian profile replaced by some other attractive object.
\noindent $\bullet$ A prominent example that has been much studied is the Porous Medium Equation, $\partial_t u= \Delta u^m$, or better $\partial_t u= \Delta (|u|^{m-1}u)$, $m>1$. The asymptotic study depends crucially on the existence and properties of a distinguished family of solutions, the Barenblatt solutions, \cite{Bar52}, which are compactly supported and self-similar, one for every mass $M$. Explicit formulas exist for them (1950, 1952): \begin{equation}\label{Barensol} {\bf B}(x,t;M)= t^{-\alpha} {\bf F}(x/t^{\beta}), \quad {\bf F}(\xi)=\left(C - k \xi^{2}\right)_+^{1/(m-1)} \end{equation} where $\alpha=N\beta$, $\beta=1/(N(m-1)+2)$, $C$ is a free constant (to be determined by the mass $M>0$), and $k>0$ depends on $m,N$ ($k=(m-1)/(2m(N(m-1)+2))$). They replace the Gaussian fundamental solutions in the statement of the asymptotic theorems. Convergence of finite mass solutions to the Barenblatt solution with the same mass is proved by the scaling method in \cite{Vaz03}. A much earlier proof used a method of optimal upper bounds, \cite{FrKa80}, 1980.
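The Barenblatt formula \eqref{Barensol} can be tested pointwise. The sketch below (our own check, away from the free boundary) takes $m=2$, $N=1$, so the standard self-similar exponents are $\alpha=\beta=1/3$ and $k=(m-1)/(2m(N(m-1)+2))=1/12$, and verifies by finite differences that $\partial_t u = (u^m)_{xx}$ holds; the evaluation point and step size are arbitrary.

```python
# Finite-difference residual of the PME u_t = (u^m)_xx for the
# Barenblatt solution B(x,t) = t^(-1/3) (C - x^2/(12 t^(2/3)))_+, m = 2, N = 1.
m, C, k = 2, 1.0, 1.0 / 12.0

def B(x, t):
    return t**(-1/3) * max(C - k * x**2 * t**(-2/3), 0.0)**(1 / (m - 1))

x0, t0, h = 0.5, 1.0, 1e-4          # point inside the support of B(.,1)
u_t  = (B(x0, t0 + h) - B(x0, t0 - h)) / (2 * h)
u_xx = (B(x0 + h, t0)**m - 2 * B(x0, t0)**m + B(x0 - h, t0)**m) / h**2
print(u_t, u_xx)                    # residual u_t - (u^m)_xx should be ~0
```

At this point the exact value is $u_t=-C/3+x_0^2/12=-0.3125$, and the finite-difference residual is of order $h^2$.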
The faster convergence of solutions of the porous medium equation with changing sign is studied in \cite{KV91}. Here dipole solutions appear when the total mass is zero. Dipole solutions for nonlinear parabolic problems were also studied in \cite{BaZe57, GPV95, HuVa93}. Convergence to the Barenblatt kernel for the PME posed in $\mathbb{R}^N$ minus one or several holes was studied in \cite{BQV2007, GG2007}.
These methods did not produce rates of convergence, but such rates were established using the entropy method by Carrillo-Toscani in \cite{CaTo00}, 2000, via Bakry-Emery inequalities \cite{BE84, BGL}, and then in Del Pino-Dolbeault \cite{DPD-2002}, 2002, using Gagliardo-Nirenberg inequalities. Convergence in the sense of Wasserstein distances was introduced by F. Otto in 2001, \cite{Otto}. Entropy methods suitable for weighted porous media equations are used in \cite{DGGW}.
\noindent $\bullet$ The Fast diffusion equation (i.\,e., the PME for $m<1$) behaves much like the porous medium equation for $m$ close to 1, even if the shape of the selfsimilar profile is different, \begin{equation}\label{Baren.fast} {\bf F}(\xi)=\left(C + k(m,N) \xi^{2}\right)^{-1/(1-m)} \end{equation}
with fat tails as $|x|\to\infty$. This is not the case for $m<(N-2)/N$ due to the phenomenon of extinction in finite time. A detailed analysis of convergence to so-called pseudo-Barenblatt profiles is done in \cite{BBDGV2, BDGVProc} by an entropy method which relies on some Hardy-Poincar\'e inequalities. This is anyway a quite different scenario.
\noindent $\bullet$ The asymptotic convergence for the $p$-Laplacian equation $\partial_t u= \Delta_p u$ has been treated in \cite{KVplap} by the scaling method, after settling the uniqueness of the fundamental solution of Barenblatt type. This was done for $p>2$, and it extends to some $p<2$, to be precise to $2N/(N+1)<p<2$. The entropy method is used in \cite{DPD-2002cras, DPD-2003} using the analogy with the porous medium equation. The doubly nonlinear equation $\partial_t u= \Delta_p (u^m)$ was studied in \cite{Agueh} and subsequent works.
\noindent $\bullet$ There are many variants of the heat equation with lower-order terms, either first order or zero order. If these terms are strong enough they will destroy the convergence towards some attractive solution with a Gaussian shape. This is a huge research field and we will give only some ideas. Maybe the best known models correspond to the case that can be written as $\partial_t u=\Delta u + f(u)$ with $f(u)=-u^p$, $u\ge 0$; we then have so-called diffusion-absorption equations. Let us take $p>1$, since the case $p=1$ was explained before. Even if conservation of mass does not hold, convergence of finite-mass solutions to a Gaussian profile with some positive mass $M_\infty$ is proved when $p>p_c=(2+N)/N$. The limit case $p_c$ is very interesting and was studied in \cite{GmVeron}. For $p<p_c$ we enter into completely new asymptotic profiles. See also \cite{KPV89, VazIMA}.
There are many extensions of these ideas. The reaction cases, $f(u)=+u^p$, lead to the existence of blow-up in finite time, a huge topic that falls completely outside the scope of these notes, see \cite{GaVaz20014, QuiSou2007, SGKM}. Another reaction case that has attracted the attention of researchers is $f(u)=u(1-u)$, called the Fisher-KPP model, of interest in biology and chemistry; here the long term behaviour takes the form of expanding travelling waves and no trace of a Gaussian is seen, \cite{K-P-P:art}.
\noindent $\bullet$ Let us now examine some of the recent work on diffusion with fractional Laplace operators. The linear fractional heat equation $\partial_t u+ (-\Delta)^s u=0$ has a rather complete theory in the paper \cite{BSiV2016}. It is well known that the self-similar fundamental solution exists for every $N\ge 1$, $0<s<1$ and has the form $P(x,t;s)=t^{-N/2s}F_s(|x|t^{-1/2s})$ where $F_s$ is a smooth and positive profile function with a fat tail as $|x|\to\infty$ \begin{equation}
F_s(\xi) \sim c(N,s)|\xi|^{-(N+2s)}\,, \end{equation} see \cite{BG1960}. Convergence to the self-similar fundamental solution can be proved for finite-mass solutions by the scaling method (no rates), or by the representation analysis (with rates). See separate notes by the author. For the entropy method see \cite{BK2003, GentImb}.
\noindent $\bullet$ We continue with nonlinear fractional heat equations of porous medium type. The model studied by Caffarelli and V\'azquez \cite{CV1} admits self-similar solutions that we may call fractional Barenblatt solutions \cite{BKM10, CV2, BIK2015}. The entropy method is used in \cite{CV2} to establish asymptotic convergence without rates. Rates in 1D were obtained in \cite{CH2013}. Convergence with rates in several dimensions is not known.
The alternative model of fractional porous medium equation, $\partial_t u+ (-\Delta)^s(|u|^{m-1}u)=0$, was studied in \cite{pqrv1, pqrv2}. Unique fundamental solutions of Barenblatt type were described in \cite{VazBar2012}, where convergence (without rates) was proved by the scaling method.
\noindent $\bullet$ There are a number of other equations that have been studied, like thin film equations \cite{CaTo01}, the Barenblatt equation of elastoplastic filtration \cite{KPV91}, inhomogeneous heat or porous medium equations with weights, \cite{KRV10}, or chemotaxis models, like \cite{BDEF}. A very important topic is the study of the heat equation and the nonlinear diffusion models on manifolds, like the hyperbolic space, see \cite{Grig98, Vazhyp15}.
\noindent $\bullet$ See \cite{VazCIME} for a general presentation of linear and nonlinear diffusion equations including a detailed survey of recent research work.
\section{Historical comments}
We add some historical notes on the origins and development of the Gaussian function, borrowed from Wikipedia and other widely available sources, with no claim to be a rigorous historical presentation. It seems that the so-called Gaussian function has its origin in Statistics. The 18th century statistician Abraham de Moivre, a Frenchman exiled in England, seems to have been the first person who noticed the existence of a bell-shaped curve as the limit of the probability distributions of repeated random trials done independently. He was led by the practical problems of calculating odds in gambling, not a very elevated motivation indeed. But he was a very fine mathematician, appreciated by Newton. The curve he discovered is now called the ``normal curve''.
One of the first applications of the normal distribution was to the analysis of errors of measurement made in astronomical observations. A century later than De Moivre, the mathematicians Adrain in 1808 and Gauss in 1809 developed independently the formula for the normal distribution and showed that errors were well fitted by this distribution. The brilliant Gauss received much of the credit, while Adrain's work remained unknown for many years.
This same distribution had been discovered by Laplace in 1778 when he derived the extremely important central limit theorem, a main topic of these notes. Laplace showed that even if a distribution is not normal, the means of repeated samples from the distribution would be very nearly normally distributed, and that the larger the sample size, the closer the distribution of means would be to a normal distribution.
The distribution appeared later in another disguise in Statistical Mechanics as the Maxwell-Boltzmann distribution, shortly called the Maxwellian. The original derivation in 1860 by James Clerk Maxwell was an argument based on molecular collisions of the kinetic theory of gases as well as certain symmetries in the speed distribution function; Maxwell also gave an early argument that these molecular collisions entail a tendency towards equilibrium. After Maxwell, Ludwig Boltzmann in 1872 also derived the distribution on mechanical grounds and argued that gases should over time tend toward this distribution, due to collisions (see H-theorem).
The normal distribution has wide implications for social issues. Thus, Qu\'etelet seems to have been the first to apply the normal distribution to human characteristics. He noted that characteristics such as height, weight, and strength were normally distributed.
Evidence for the Gaussian function as the fundamental solution of the heat equation came after the work of probabilists in the 20th century to establish the link between the heat equation and Brownian diffusion, which in turn is the limit of discrete processes based on iterated random trials. The close connection between stochastic differential equations and parabolic partial differential equations is very much influenced by the role of the Gaussian function in both theories.
\
\
\noindent {\sc Acknowledgment.} Work partially supported by Spanish Project MTM2014-52240-P. These notes developed from Ph. D. courses and lectures given by the author at different events in recent years, the last one was the Annual Meeting of the Red de An\'alisis Funcional y Aplicaciones, held in C\'aceres, Spain, in March 2017.
\
\
\
\noindent {\sc Address:}
\noindent Juan Luis V\'azquez. Departamento de Matem\'{a}ticas, Universidad Aut\'{o}noma de Madrid,\\ Campus de Cantoblanco, 28049 Madrid, Spain. e-mail address:~\texttt{juanluis.vazquez@uam.es}
\
\
\end{document}
\begin{document}
\title[Cullen numbers in sums of terms of recurrence sequence]{Cullen numbers in sums of terms of recurrence sequence}
\author[N.K. Meher]{N.K. Meher} \address{Nabin Kumar Meher, National Institute of Science Education and Research, Bhubaneswar, HBNI, P.O. Jatni, Khurda, Odisha -752050, India.} \email{mehernabin@gmail.com}
\author[S. S. Rout]{S. S. Rout} \address{Sudhansu Sekhar Rout, Institute of Mathematics and Applications\\ Andharua, Bhubaneswar, Odisha - 751029\\ India.} \email{lbs.sudhansu@gmail.com; sudhansu@iomaorissa.ac.in}
\thanks{2010 Mathematics Subject Classification: Primary 11B39, Secondary 11J86. \\ Keywords: Cullen numbers, Linear recurrence sequence, linear forms in logarithms, Diophantine equation} \maketitle \pagenumbering{arabic} \pagestyle{headings}
\begin{abstract} Let $(U_n)_{n\geq 0}$ be a fixed linear recurrence sequence of integers with order at least two, and for any positive integer $\ell$, let $\ell \cdot 2^{\ell} + 1$ be a Cullen number. Recently in \cite{bmt}, generalized Cullen numbers in terms of a linear recurrence sequence $(U_n)_{n\geq 0}$ were studied under certain weak assumptions. However, there is an error in their proof. In this paper, we generalize their work and at the same time fix their error. In particular, for a given polynomial $Q(x) \in \mathbb{Z}[x]$ we consider the Diophantine equation $U_{n_1} + \cdots + U_{n_k} = \ell \cdot x^{\ell} + Q(x)$, and prove an effective finiteness result. Furthermore, we demonstrate our method by an example. \end{abstract}
\section{Introduction}\label{sec2}
Let $r$ be a positive integer. The linear recurrence sequence $(U_{n})_{n \geq 0}$ of order $r$ is defined as \begin{equation}\label{eq4} U_{n} = a_1U_{n-1} + \dots +a_rU_{n-r} \end{equation} where $a_1,\dots, a_r \in \mbox{$\mathbb Z$}$ with $a_r\neq 0$ and $U_0,\dots,U_{r-1}$ are integers not all zero.
The characteristic polynomial of $U_n$ is defined by \begin{equation}\label{eq5} f(x):= x^r - a_1x^{r-1}-\dots-a_r = \prod_{i =1}^{t}(x - \alpha_i)^{m_i}\in \mbox{$\mathbb Z$}[x] \end{equation} where $\alpha_1,\dots,\alpha_t$ are distinct algebraic numbers and $m_1,\dots, m_t$ are positive integers. Then $U_{n}$ (see e.g. Theorem C1 in part C of \cite{st}) has a nice representation of the form \begin{equation}\label{eq6} U_n=\sum_{i=1}^t f_i(n)\alpha_{i}^n \ \ \ \text{for all}\ n\geq 0, \end{equation} where $f_i(x)$ is a polynomial of degree $m_i -1$ $(i=1,\dots,t)$ and this representation is uniquely determined. We call the sequence $(U_n)_{n \geq 0}$ {\it simple} if $t=r$. In this paper, we assume that $t\geq 2$ and that the characteristic polynomial $f$ is irreducible over $\mbox{$\mathbb Q$}$. Thus, all the roots $\alpha_i$ $(1\leq i \leq r)$ of \eqref{eq5} are simple, hence each $f_i(n)$ in \eqref{eq6} is a constant, say $f_i$ (because the degree of $f_i(n)$ is at most $m_i - 1 = 1 - 1 = 0$), and \eqref{eq6} becomes \begin{equation}\label{eq6a} U_n=\sum_{i=1}^r f_i\alpha_{i}^n \ \ \ \text{for all}\ n\geq 0. \end{equation}
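The representation \eqref{eq6a} is easy to verify numerically for a concrete simple recurrence. The sketch below (our own example) uses the Tribonacci recurrence $U_n=U_{n-1}+U_{n-2}+U_{n-3}$ with $U_0=0$, $U_1=1$, $U_2=1$: the coefficients $f_i$ are obtained by solving the Vandermonde system given by the initial values.

```python
import numpy as np

# Recover U_n = sum_i f_i * alpha_i^n for the Tribonacci recurrence.
alphas = np.roots([1, -1, -1, -1])           # roots of x^3 - x^2 - x - 1
V = np.vander(alphas, 3, increasing=True).T  # V[n, i] = alpha_i^n, n = 0,1,2
f = np.linalg.solve(V, np.array([0, 1, 1]))  # match the initial values

# Compare the closed form against direct iteration.
U = [0, 1, 1]
for n in range(3, 20):
    U.append(U[-1] + U[-2] + U[-3])
U_closed = [sum(f[i] * alphas[i]**n for i in range(3)).real for n in range(20)]
print(max(abs(a - b) for a, b in zip(U, U_closed)))   # ~0
```

The imaginary parts of the closed form cancel because the complex roots come in a conjugate pair, which is why taking the real part is harmless.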
If $|\alpha_1|>|\alpha_j|$ for all $j$ with $2\leq j\leq r$, then we say that $\alpha_1$ is a dominant root of the sequence $(U_n)_{n\geq 0}$.
The {\em Cullen numbers} are elements of the sequence $(C_{\ell})_{\ell\geq 0}$, where the $\ell$-th term of the sequence is given by $C_{\ell}:= \ell \cdot 2^{\ell} + 1, \hbox{with} \ \ell \in \mathbb{Z}_{\geq 0}$. This sequence was first introduced by Father J. Cullen \cite{cullen} and it is also mentioned in Guy's book \cite[Section B20]{guy}. In 1976, C. Hooley \cite{Hooley1976} proved that almost all Cullen numbers are composite. However, there is a conjecture that there are infinitely many {\it Cullen primes}. One Cullen prime with more than $2$ million digits is $C_{6679881}.$
Further, we define the {\em generalized Cullen numbers} to be the numbers of the form
\[C_{\ell, s} = \ell \cdot s^{\ell} + 1\]
where $\ell \geq 1$ and $s\geq 2$. Clearly, $C_{\ell, 2} = C_{\ell}$ for all $\ell \geq 1$. For simplicity we call $C_{\ell, s}$ as $s$-Cullen numbers.
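The definitions above translate directly into code; the following one-liner (our illustration) computes the first few Cullen and $s$-Cullen numbers.

```python
# Generalized Cullen numbers C_{l,s} = l * s**l + 1; s = 2 gives C_l.
def cullen(l, s=2):
    return l * s**l + 1

print([cullen(l) for l in range(1, 6)])   # [3, 9, 25, 65, 161]
print(cullen(3, 10))                      # 3 * 10**3 + 1 = 3001
```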
The occurrence of Cullen numbers in recurrence sequence has been analyzed by various authors. For instance, Luca and Shparlinski \cite{ls1} studied on the pseudoprime Cullen numbers and Berrizbeitia et. al., \cite{bfg} investigated on Cullen numbers which are both Riesel and Sierpinski numbers. Further, Luca and St$\breve{a}$nic$\breve{a}$ \cite{ls} proved that there are only finitely many Cullen numbers in a binary recurrence sequence under some additional assumptions. Besides this, they showed that the largest Fibonacci number in the Cullen sequence is $F_4$. Recently, Bilu et. al., \cite{bmt} studied the occurrence of generalized Cullen numbers in a fixed linear recurrence sequences. In particular, they proved that there are finitely many solutions in integers $(n, m, x)$ of the Diophantine equation
\begin{equation}\label{cullen1}
G_n = m \cdot x^{m} + T(x)
\end{equation}
for a given polynomial $T(x) \in \mbox{$\mathbb Z$}[x]$ with some assumptions on the roots of characteristic polynomial of the given linear recurrence sequence $(G_n)$ of order at least two.
However, there is an error in their proof \cite[Theorem 1]{bmt}. For instance, consider the linear recurrence sequence of order three defined by the recurrence relation \[G_{n+3} = 3G_{n+2} -3G_{n+1} + 2G_{n},\quad n\geq 0\] with initial values $G_0 = 0, G_1 = 1$ and $G_2 = 1$. The characteristic polynomial of $G_n$ is $x^3 -3x^2 +3x-2$ and its roots are $2, (1\pm \sqrt{3}i)/2$. Here these roots satisfy all the technical conditions in \cite[Theorem 1]{bmt}. Taking $T(x) =-1$ in \eqref{cullen1}, we get \begin{equation}\label{cullen2} G_n = m\cdot x^m -1. \end{equation} The integer solutions of \eqref{cullen2} are precisely \[(n, m, x) \in \{(6k+1, 1, 2), (6k+2, 1, 2)\mid k\in \mbox{$\mathbb Z$}_{\geq 0}\} \]
which shows that the conclusion in \cite[Theorem 1]{bmt} is false.
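The counterexample rests on the fact that this sequence $(G_n)$ is periodic with period $6$, so $G_n = 1 = 1\cdot 2^1 - 1$ infinitely often. A quick check (ours) by direct iteration:

```python
# Iterate G_{n+3} = 3 G_{n+2} - 3 G_{n+1} + 2 G_n with G_0=0, G_1=1, G_2=1
# and confirm the period-6 pattern 0, 1, 1, 0, -1, -1, ...
G = [0, 1, 1]
for n in range(3, 60):
    G.append(3 * G[-1] - 3 * G[-2] + 2 * G[-3])

print(G[:12])   # one full period repeated twice
print(all(G[6*k + 1] == 1 and G[6*k + 2] == 1 for k in range(9)))   # True
```

Since $G_{6k+1}=G_{6k+2}=1$ for all $k\ge 0$, every triple $(6k+1,1,2)$ and $(6k+2,1,2)$ solves $G_n=m\cdot x^m-1$.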
\subsection{Main results}
In this paper, for a given polynomial $Q(x)\in \mbox{$\mathbb Z$}[x]$ we study the following Diophantine equation
\begin{equation}\label{eq1}
U_{n_1} + \cdots + U_{n_k} = \ell \cdot x^{\ell} + Q(x)
\end{equation} in non-negative integers $ n_1, \ldots, n_k, \ell$ with $ n_1 > n_2 > \cdots > n_k \geq 0$. Our main result is the following. \begin{theorem}\label{thm1} Let $(U_n)_{n\geq 0}$ be a linear recurrence sequence of order at least two such that its characteristic polynomial is irreducible over $\mbox{$\mathbb Q$}$ and has a real dominant root $\alpha_1>1$, and let $Q(x) \in \mbox{$\mathbb Z$}[x]$ be a polynomial. Then there exists an effectively computable constant $C$ depending only on $(U_n)_{n\geq 0}$ and $Q(x)$ such that the solutions $(n_1, n_2, \dots, n_k, \ell )$ of equation \eqref{eq1} satisfy
\[\max \{ n_1, n_2, \dots, n_k, \ell \} <C (\log |x|)^k . \] \end{theorem}
\begin{remark} We note that if $k =1$ in \eqref{eq1}, then we do not need the assumptions $\alpha_1>1$ and $\alpha_1$ is real in Theorem \ref{thm1}. Also, we obtain the conclusion in \cite[Theorem 1]{bmt}. \end{remark}
The Fibonacci sequence $(F_n)_{n\geq 0}$ is a well-known recurrence sequence of order two which satisfies the recurrence relation \[F_{n} = F_{n-1}+ F_{n-2}, \quad n\geq 2\] with initial values $F_0 =0$ and $F_1=1$. Thus, from \eqref{eq6a} we have the Binet form \begin{equation}\label{fibeq02} F_n = \frac{\alpha^n- \beta^n}{\sqrt{5}} \end{equation} where $\alpha = (1+\sqrt{5})/2$ and $\beta = (1-\sqrt{5})/2$, so that $\alpha \beta= -1$. From the Binet form, we deduce the bounds \begin{equation}\label{fibeq03} \alpha^{n-2}\leq F_n \leq \alpha^{n-1}, \quad n\geq 1. \end{equation}
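The two-sided bound \eqref{fibeq03} is easy to confirm for small indices; the check below (our illustration, with the usual convention $F_0=0$, $F_1=1$) verifies it for $1\le n\le 30$.

```python
# Verify alpha^(n-2) <= F_n <= alpha^(n-1) for 1 <= n <= 30.
alpha = (1 + 5**0.5) / 2
F = [0, 1]
for n in range(2, 31):
    F.append(F[-1] + F[-2])

ok = all(alpha**(n - 2) <= F[n] <= alpha**(n - 1) for n in range(1, 31))
print(ok)   # True
```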
Our next theorem illustrates Theorem \ref{thm1} for the Fibonacci sequence.
\begin{theorem} \label{thm2} If $(n_1, n_2 , \ell )$ is a solution of the Diophantine equation \begin{equation}\label{fibeq01} F_{n_1}+ F_{n_2}= \ell\cdot 2^{\ell} + 1 \end{equation} in non-negative integers $ n_1, n_2, \ell$ with $ n_1 \geq n_2\geq 0,$ then \[ \max\{ n_1, n_2, \ell \} < 7 \times 10^{18}. \] \end{theorem}
\begin{remark} However, we expect that if \eqref{fibeq01} holds with $n_1\geq n_2 \geq 0$, then \[(n_1, n_2, \ell) \in \{(1, 0, 0), (2, 0, 0), (4, 0, 1), (3, 1, 1), (3, 2, 1), (6, 1, 2), (6, 2, 2), (14, 6, 6)\}.\] \end{remark}
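A brute-force search over a small range supports the expected list. The script below (our illustration, with $F_0=0$, $F_1=1$; the search bounds $n_1\le 30$, $\ell\le 20$ are arbitrary) finds exactly the eight triples of the remark.

```python
# Exhaustive search for F_{n1} + F_{n2} = l * 2**l + 1 in a small range.
F = [0, 1]
for n in range(2, 31):
    F.append(F[-1] + F[-2])

sols = {(n1, n2, l)
        for n1 in range(31) for n2 in range(n1 + 1) for l in range(21)
        if F[n1] + F[n2] == l * 2**l + 1}
print(sorted(sols))
```

For instance, $(14, 6, 6)$ corresponds to $F_{14}+F_6 = 377+8 = 385 = 6\cdot 2^6+1$.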
\section{Auxiliary results}
Let $\eta$ be an algebraic number of degree $d$ with minimal polynomial \[c_{0}x^d + c_1x^{d-1} + \cdots + c_d = c_0 \prod_{i=1}^{d}\left(x - \eta^{(i)}\right),\] where $c_0$ is the leading coefficient of the minimal polynomial of $\eta$ over $\mathbb{Z}$ and the $\eta^{(i)}$'s are the conjugates of $\eta$ in $\mathbb{C}$. We define the absolute {\it logarithmic height} of an algebraic number $\eta$ as \[
h(\eta) = \frac{1}{d} \left( \log |c_0| + \sum_{i=1}^d \log \max ( 1, |\eta^{(i)}| ) \right). \]
In particular, if $\eta = p/q$ is a rational number with $\gcd(p, q) = 1$ and $q >0$, then $h(\eta) = \log \max\{|p|, q\}$.
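The definition can be evaluated directly from the minimal polynomial. The sketch below (our illustration; numerical root-finding stands in for exact conjugates) checks the two special cases just mentioned: for the golden ratio $\alpha$, $h(\alpha)=\tfrac12\log\alpha$, and for $3/5$, $h(3/5)=\log 5$.

```python
import numpy as np

# Absolute logarithmic height from integer minimal-polynomial coefficients
# (leading coefficient first).
def height(coeffs):
    roots = np.roots(coeffs)
    d = len(coeffs) - 1
    return (np.log(abs(coeffs[0]))
            + sum(np.log(max(1.0, abs(r))) for r in roots)) / d

alpha = (1 + 5**0.5) / 2
print(height([1, -1, -1]), np.log(alpha) / 2)   # golden ratio: x^2 - x - 1
print(height([5, -3]), np.log(5.0))             # rational 3/5: 5x - 3
```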
To prove our theorem, we use lower bounds for linear forms in logarithms to get a bound for $\max\{n_1, \ldots, n_k, \ell\}$ appearing in \eqref{eq1}. Specifically, we need the following general lower bound for linear forms in logarithms due to Matveev \cite[Theorem 2.2]{Matveev2000}. \begin{lemma}[\cite{Matveev2000}]\label{lem12} Let $\gamma_1,\ldots,\gamma_s$ be real algebraic numbers and let $b_{1},\ldots, b_{s}$ be non-zero rational integers. Let $D$ be the degree of the number field $\mathbb{Q}(\gamma_1,\ldots,\gamma_s)$ over $\mathbb{Q}$ and let $A_{j}$ be real numbers satisfying \begin{equation}\label{eq8a}
A_j \geq \max \left\{ Dh(\gamma_j) , |\log \gamma_j|, 0.16 \right\}, \quad j= 1, \ldots,s. \end{equation}
Assume that $B\geq \max\{|b_1|, \ldots, |b_{s}|\}$ and $\Lambda:=\gamma_{1}^{b_1}\cdots\gamma_{s}^{b_s} - 1$. If $\Lambda \neq 0$, then
\[|\Lambda| \geq \exp \left( -1.4\times 30^{s+3}\times s^{4.5}\times D^{2}(1 + \log D)(1 + \log B)A_{1}\cdots A_{s}\right).\] \end{lemma}
\begin{lemma}\label{lem13a} Suppose that $(U_n)_{n\geq 0}$ has a real simple dominant root $\alpha_1$ with $1<\alpha_1\not \in \mathbb{Z}$ and $f_1$ is the constant coefficient of $\alpha_1^n$ defined in the formula \eqref{eq6}. Set \[\Lambda_i = 1 - \ell f_{1}^{-1}x^{\ell}\alpha_1^{-n_1}\left(1 + \alpha_1^{n_{2} - n_1} + \cdots + \alpha_1^{n_{i} - n_1} \right)^{-1}\] for all $2\leq i \leq k$. If $\Lambda_i = 0$, then there exists an index $m$ with $2\leq m \leq t$ such that $n_1 < \varkappa_i$, where \begin{equation}\label{eq26yz} \varkappa_i:= \begin{cases}
\frac{\log (i\cdot (|f_1^{(m)}|/|f_1|))}{\log \alpha_1} &\quad \text{if} \;\;|\alpha_m| \leq 1\\
\frac{\log (i\cdot (|f_1^{(m)}|/|f_1|))}{\log (\alpha_1/|\alpha_m|)} &\quad \text{if} \;\; |\alpha_m| > 1. \end{cases} \end{equation} \end{lemma}
\begin{proof}
Suppose that $\Lambda_i = 0$. This implies \begin{equation}\label{eq25} \ell x^{\ell} = f_{1}\cdot \left(\alpha_1^{n_1} + \cdots + \alpha_1^{n_{i}}\right). \end{equation} Since $\alpha_1\notin \mathbb{Z}$, there exists a conjugate $\alpha_m$ of $\alpha_1$ in the field $\mathbb{Q}(\alpha_1, \ldots, \alpha_t)$ such that $\alpha_1\neq \alpha_m$. Therefore, by taking the $m$-th conjugate of both sides of \eqref{eq25}, we get \begin{equation}\label{eq26} \ell x^{\ell} = f_1^{(m)}\cdot \left(\alpha_m^{n_1}+ \cdots + \alpha_m^{n_{i}}\right), \end{equation}
(here $f_1^{(m)}$ is $m$-th conjugate of $f_1$). As $\alpha_1>1$ and it is real, we deduce from \eqref{eq25} and \eqref{eq26}, \begin{align*}
|f_1|\alpha_1^{n_1} & < |f_1||\alpha_1^{n_1} + \cdots + \alpha_1^{n_{i}}| = | f_1^{(m)}||\alpha_m^{n_1}+ \cdots +\alpha_m^{n_{i}}|. \end{align*} This implies \begin{equation}\label{eq26z}
\alpha_1^{n_1} < i \cdot (|f_1^{(m)}|/|f_1|) |\alpha_m|^{n_1} \end{equation}
and hence $n_1 < \varkappa_i$ where $\varkappa_i$ is given in \eqref{eq26yz}.
\end{proof}
\begin{remark}\label{lem13b}
Suppose $\Lambda_1 := 1 - \ell f_{1}^{-1}x^{\ell}\alpha_1^{-n_1}=0$. Then $\ell x^{\ell} = f_1 \alpha_1^{n_1}$. So by taking conjugation of this relation in $\mbox{$\mathbb Q$}(\alpha_1, \ldots, \alpha_t)$, we get $\ell x^{\ell} = f_1^{(m)} \alpha_m^{n_1}$, where $f_1^{(m)}$ is the conjugate of $f_1$ over $\mbox{$\mathbb Q$}(\alpha_1, \ldots, \alpha_t)$. Thus by taking absolute values, we obtain $|\alpha_1/\alpha_m|^{n_1} = |f_1^{(m)}/f_1|$ and this implies $n_1 = \log (|f_1^{(m)}/f_1|)/\log(|\alpha_1/\alpha_m|)=: \varkappa_1$.
\end{remark}
\begin{proposition}\label{prop1} Let $(U_n)_{n\geq 0}$ be a linear recurrence sequence of order at least two such that its characteristic polynomial is irreducible and has a real dominant root $\alpha_1>1$. If equation \eqref{eq1} holds with $n_{1} > n_{2} > \cdots > n_{k}$,
then for $ 2 \leq i \leq k$ we have, \begin{equation}\label{boundeq} (n_{1}-n_{i}) \leq C_i (\log x)^{i-1}(\log n_1)^{2i- 2}. \end{equation} \end{proposition} \begin{proof}
Without loss of generality, we may assume that $x\geq 2$ in \eqref{eq1} and $|\alpha_2|\geq \cdots \geq |\alpha_r|$. Let $\varkappa:= \max\{\varkappa_1, \dots, \varkappa_{k}\}$, where the $\varkappa_i \;(1\leq i\leq k)$ are defined in Lemma \ref{lem13a} and Remark \ref{lem13b}. Now, we may assume $n_1 > \varkappa$. In the proof, $c_1, \ldots, c_{23}$ denote positive effective constants depending on $(U_n)_{n\geq 0}$ and $Q(x)$.
We use induction to find an upper bound for $n_1- n_i$. We first bound $n_1 - n_2.$ Let us consider equation \eqref{eq1} and rewrite it as \begin{equation}\label{eq7} \ell \cdot x^{\ell} - f_1 \alpha_1^{n_1} = \sum_{i=2}^{k} U_{n_i} + \sum_{i =2}^{r}f_i\alpha_i^{n_1} - Q(x). \end{equation} From \eqref{eq6a}, we obtain (see \cite[Lemma 3.2]{bhpr})
\begin{equation}\label{eq77}
|U_n|\leq c_1|\alpha_1|^n\ \ \ (n\geq 1).
\end{equation} Hence from \eqref{eq7} and \eqref{eq77}, \begin{equation}\label{eq8}
|\ell x^{\ell} - f_1 \alpha_1^{n_1}| \leq c_2 |\alpha_1|^{n_2} + c_3|\alpha_2|^{n_1} + |Q(x)| \leq c_4 (|\alpha_1|^{n_2} + |\alpha_2|^{n_1}). \end{equation}
Dividing both sides of the equation \eqref{eq8} by $|f_1 \alpha_1^{n_1}|$, we get \begin{equation}\label{eq9}
|1 - \ell f_1^{-1} \alpha_1^{-n_1}x^{\ell}| \leq c_5 |\alpha_1|^{n_2 - n_1}. \end{equation} We denote $\Lambda_1:= 1 - (\ell f_1^{-1}) \alpha_1^{-n_1}x^{\ell}$ and by Remark \ref{lem13b}, $\Lambda_1 \neq 0$ as $n_1 > \varkappa$. In order to apply Lemma \ref{lem12}, we consider \[D : = [\mbox{$\mathbb Q$}(\gamma_1, \ldots, \gamma_s):\mbox{$\mathbb Q$}], \;\; s:= 3, \;\; \gamma_1 := \ell f_1^{-1}, \;\; \gamma_2:= \alpha_1, \;\; \gamma_3:= x\] and \[b_1:= 1, \;\; b_2:= -n_1, \;\; b_3:= \ell.\] Note that
\[|\ell x^{\ell} + Q(x)| \geq |\ell x^{\ell}| - |Q(x)|.\]
Thus, we have \begin{align}\label{eq9a1} \begin{split}
|\ell x^{\ell}| &\leq |\ell x^{\ell} + Q(x)| + |Q(x)| = |U_{n_1}+ \cdots + U_{n_k}| + |Q(x)| \\
& \leq k|U_{n_1}| + |Q(x)| \leq c_6 \alpha_1^{n_1}. \end{split}
\end{align} and this implies $ \ell \log x \leq c_7 n_1 \log \alpha_1$. Hence, \begin{equation}\label{eq9a} \ell \leq c_7 n_1 \log \alpha_1/\log x< c_{8}n_1, \end{equation}
with $c_8 :=\max\{c_7\log \alpha_1, c_7\log \alpha_1/\log 2\}$. Now choose $B = \max\{|b_1|, |b_2|, |b_3|\}= c_{9}n_1 $, \[h(\gamma_1) \leq h(\ell)+ h(f_1) \leq c_{10} \log \ell, \; h(\gamma_2) \leq \log \alpha_1, \hbox{and} \;\; h(\gamma_3) \leq \log x.\] By applying Lemma \ref{lem12} and the inequality \eqref{eq9a}, we obtain \begin{align}\label{eq9a2}
\log | \Lambda_1 |\geq -c_{11} (1+\log {n_1}) \log x \log \ell\geq -c_{12} \log x (\log n_1)^2. \end{align}
On comparing the lower and upper bound of $\Lambda_1$, we get
\[(n_1-n_2) \leq c_{13}\log x (\log n_1)^2. \] Therefore, the inequality \eqref{boundeq} is true for $i=2$. We will assume that the inequality \eqref{boundeq} holds for $i=q$ with $ 2 \leq q \leq k-1$. Now, we prove that it holds for $i =k$.
Rewrite the equation \eqref{eq1} as
\begin{equation}\label{eq010} \ell x^{\ell} - f_1 \alpha_1^{n_1} \left( 1 + \alpha_1^{n_{2} - n_1} + \cdots + \alpha_1^{n_{k-1} -n_1} \right) = U_{n_k} + \sum_{j=1}^{k-1} \sum_{i =2}^{r}f_i\alpha_i^{n_j} - Q(x). \end{equation} Simplifying \eqref{eq010} with \eqref{eq77}, we obtain \begin{align}\label{eq10a}
\left| \ell x^{\ell} - f_1 \alpha_1^{n_1} \left( 1 + \alpha_1^{n_{2} - n_1} + \cdots + \alpha_1^{n_{k-1} -n_1} \right) \right| & \leq c_{14} \alpha_1^{n_k} + c_{15} |\alpha_2|^{n_1} + |Q(x)| \\ \notag
& \leq c_{16} (\alpha_1^{n_k} + |\alpha_2|^{n_1}). \end{align}
Dividing both sides of the equation \eqref{eq10a} by $ \left| f_1 \alpha_1^{n_1} \left( 1 + \alpha_1^{n_{2} - n_1} + \cdots + \alpha_1^{n_{k-1} -n_1} \right) \right| ,$
we get
\begin{align}\label{eq10b}
\left| 1 - \ell f_1^{-1} x^{\ell} \alpha_1^{- n_1} \left( 1 + \alpha_1^{n_{2} - n_1} + \cdots + \alpha_1^{n_{k-1} -n_1} \right)^{-1} \right| & \leq c_{17}\alpha_1^{n_k - n_1}. \end{align}
Here we denote $\Lambda_{k-1} = 1 - \ell f_1^{-1} x^{\ell} \alpha_1^{- n_1} \left( 1 + \alpha_1^{n_{2} - n_1} + \cdots + \alpha_1^{n_{k-1} -n_1} \right)^{-1}.$ We will use Lemma \ref{lem12} to obtain a lower bound for $\Lambda_{k-1}$. As $n_1 > \varkappa$, we infer that $\Lambda_{k-1} \neq 0$. To use Lemma \ref{lem12}, we choose
$$ D : = \mbox{$\mathbb Q$}(\gamma_1, \ldots, \gamma_s)/\mbox{$\mathbb Q$}, \;\; s:= 4,$$ $$ \;\; \gamma_1 := \ell f_1^{-1}, \;\; \gamma_2:= \alpha_1, \;\; \gamma_3:= x, \;\; \gamma_4 := \left( 1 + \alpha_1^{n_{2} - n_1} + \cdots + \alpha_1^{n_{k-1} -n_1} \right) $$ and \[b_1:= 1, \;\; b_2:= -n_1, \;\; b_3:= \ell,\;\; b_4 := -1.\]
Now choose $B = \max\{|b_1|, |b_2|, |b_3|, |b_4|\}= c_{18}n_1 .$ We have already computed $h(\gamma_1),$ $h(\gamma_2)$ and $h(\gamma_3).$ We now estimate $h(\gamma_4).$
By induction hypothesis, we obtain
$$ h(\gamma_4) \leq c_{19} (n_1- n_{k-1}) \log \alpha_1 \leq c_{20} (\log x)^{k-2} (\log n_1)^{2k-4} \log \alpha_1.$$
By applying Lemma \ref{lem12}, we have
\begin{align}
\begin{split} \log |\Lambda_{k-1}| & \geq - c_{21} (1+ \log n_1) \log \ell \log x \log \alpha_1 (\log x)^{k-2} (\log n_1)^{2k-4}\\
& \geq - c_{22} (\log x)^{k-1}(\log n_1)^{2k-2}.
\end{split}
\end{align}
By comparing the upper and lower bounds of $\Lambda_{k-1}$, we get
$$ (n_1- n_k) \leq c_{23} (\log x)^{k-1} ( \log n_1)^{2k-2},$$ which completes the proof of the proposition.
\end{proof}
\section{Proof of Theorem \ref{thm1}}
First we assume that $n_1 > \varkappa$, where $\varkappa:= \max\{\varkappa_1, \ldots, \varkappa_{k-1}\}$ and the $\varkappa_i$ are defined in Lemma \ref{lem13a}. Without loss of generality, we assume $|\alpha_2|\geq \cdots \geq |\alpha_r|$. Throughout the proof, $c_{24}, \ldots, c_{31}$ denote effectively computable positive constants depending on $(U_n)_{n\geq 0}$ and $Q(x)$.
Now rewrite \eqref{eq1} as \begin{equation}\label{eq12} \ell x^{\ell} - f_1(\alpha_1^{n_1} +\alpha_1^{n_2}+ \cdots + \alpha_1^{n_k})=\sum_{j =1}^{k} \sum_{i =2}^{r}f_i\alpha_i^{n_j} - Q(x). \end{equation} Taking absolute values on both sides of \eqref{eq12}, we get \begin{equation*}
|\ell x^{\ell} - f_1(\alpha_1^{n_1} +\alpha_1^{n_2}+ \cdots + \alpha_1^{n_k})| \leq c_{24}|\alpha_2|^{n_1} + |Q(x)| \leq c_{25} |\alpha_2|^{n_1}. \end{equation*} Since $\alpha_1$ is a dominant root, there exists $\delta \in (0, 1)$ such that \begin{equation}\label{eq13}
|\ell x^{\ell} - f_1(\alpha_1^{n_1} +\alpha_1^{n_2}+ \cdots + \alpha_1^{n_k})| \leq c_{26} |\alpha_1|^{(1-\delta)n_1}. \end{equation}
Dividing both sides of the equation \eqref{eq13} by $|f_1(\alpha_1^{n_1} +\alpha_1^{n_2}+ \cdots + \alpha_1^{n_k})|$, we get \begin{align}\label{eq14} \begin{split}
|1 - \ell f_1^{-1} x^{\ell} \alpha_1^{-n_1}&( 1 + \alpha_1^{n_2 - n_1}+ \cdots + \alpha_1^{n_k - n_1})^{-1}| \\
&\leq \frac{ c_{27}\alpha_1^{(1-\delta)n_1}}{|\alpha_1|^{n_1}| ( 1 + \alpha_1^{n_2 - n_1}+ \cdots + \alpha_1^{n_k - n_1}) |}\leq c_{28}\alpha_1^{-\delta n_1}. \end{split} \end{align} Thus, our required linear form is \[\Lambda_k:= 1 - \ell f_1^{-1} x^{\ell} \alpha_1^{-n_1}( 1 + \alpha_1^{n_2 - n_1}+ \cdots + \alpha_1^{n_k - n_1})^{-1}\] and $\Lambda_k \neq 0$ since $n_1>\varkappa$ (see Lemma \ref{lem13a}).
By applying Lemma \ref{lem12} using Proposition \ref{prop1}, we obtain \begin{equation}\label{eq15}
\log | \Lambda_k |\geq -c_{29} (1+\log n_1) \log \ell \log x \log \alpha_1 (\log x)^{k-1} (\log n_1)^{2k-2} \geq -c_{30} (\log x)^{k}(\log n_1)^{2k}. \end{equation} Now from \eqref{eq14} and \eqref{eq15}, we have \[n_1 \leq c_{31} (\log x)^{k}(\log n_1)^{2k}.\] Therefore, $n_1< C (\log x)^{k},$ where $C$ depends on $(U_n)_{n \geq 0}$ and $Q(x).$ This completes the proof of Theorem \ref{thm1}.
\section{Proof of Theorem \ref{thm2}}
By symmetry in \eqref{fibeq01}, we may assume that $n_1 \geq n_2$. First, suppose $n _1 = n_2$. Then \eqref{fibeq01} becomes \begin{equation}\label{331} 2F_{n_1} = \ell \cdot 2^{\ell}+1. \end{equation} Since the left-hand side of \eqref{331} is even and the right-hand side is odd, \eqref{331} has no solution. Now, assume that $n _1 > n_2$. Further, if $n_2 = 0$, then \eqref{fibeq01} becomes \[F_{n_1} = \ell \cdot 2^{\ell}+1,\] and the solutions are $(n_1, \ell) \in \{ (1, 0), (2, 0), (4, 1)\}$ (see \cite{ls}). From now on, assume that $n _1 > n_2 > 0$. From \eqref{eq9a1}, we get
\[|\ell \cdot 2^{\ell}| \leq 2 |F_{n_1}| + 1.\]
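The small cases above can be verified directly; a quick numeric sketch (assuming the indexing convention $F_1 = F_2 = 1$) checks the listed solutions for $n_2 = 0$ and the parity argument used for $n_1 = n_2$:

```python
# Check the listed solutions of F_{n1} = l * 2^l + 1 (the n2 = 0 case),
# with the indexing convention F_1 = F_2 = 1.
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for n1, l in [(1, 0), (2, 0), (4, 1)]:
    assert fib(n1) == l * 2**l + 1, (n1, l)

# Parity argument for n1 = n2: 2*F_{n1} is even while l*2^l + 1 is odd.
assert all((l * 2**l + 1) % 2 == 1 for l in range(100))
```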
\subsection{Bounding $(n_1 - n_2)$ in terms of $n_1$} From Eq.\eqref{fibeq01} and \eqref{fibeq03}, we get \begin{equation}\label{eq33}
2^{\ell}< \ell \cdot 2^{\ell} +1 = |F_{n_1} + F_{n_2}|\leq 2 \alpha^{n_1 - 1}. \end{equation} Taking logarithms on both sides of the inequality \eqref{eq33}, we obtain \begin{equation}\label{eq34} \ell \leq 1+ (n_1 - 1)\frac{\log \alpha}{\log 2} \leq 0.75 n_1. \end{equation}
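The constant $0.75$ in \eqref{eq34} can be checked numerically; a small sketch (with $\alpha = (1+\sqrt{5})/2$, so $\log\alpha/\log 2 \approx 0.694$, the inequality holds for all $n_1 \geq 6$):

```python
import math

alpha = (1 + math.sqrt(5)) / 2          # golden ratio
ratio = math.log(alpha) / math.log(2)   # log(alpha)/log(2) ~ 0.6942

# 1 + (n1 - 1)*ratio <= 0.75*n1 rearranges to n1 >= (1 - ratio)/(0.75 - ratio).
threshold = (1 - ratio) / (0.75 - ratio)
assert threshold < 6                    # ~5.48, so the bound holds from n1 = 6 on

for n1 in range(6, 100000):
    assert 1 + (n1 - 1) * ratio <= 0.75 * n1
```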
Using $ |\beta| < 1 $ and $ \ell \leq 0.75 n_1$, similar to the inequality \eqref{eq9}, we obtain \begin{align}\label{eq35} \begin{split}
\left|1 - \ell 2^{\ell}\alpha^{-n_1} \sqrt{5} \right| \leq (2\sqrt{5}+1)\alpha^{n_{2} -n_1}. \end{split} \end{align}
Suppose $1 - \ell 2^{\ell}\alpha^{-n_1} \sqrt{5} = 0$; then \begin{equation}\label{eq42a}
\ell 2^{\ell} \sqrt{5} = \alpha^{n_1}. \end{equation} Squaring both sides of \eqref{eq42a}, we arrive at a contradiction, as the left-hand side of the resulting equation is rational whereas the right-hand side is irrational. In order to apply Lemma \ref{lem12}, we take $B = n_1$, $h(\gamma_1) = \log 2 = 0.6931\cdots < 0.7$, $h(\gamma_2)=(\log \alpha)/2 = 0.2406\cdots < 0.25$, $h(\gamma_3) = \log(\sqrt{5}) < 0.81$, and $ h(\gamma_4) = \log \ell < \log n_1$. We can choose $A_1 = 1.5, A_2 = 0.5, A_3 = 1.7, A_4 = 2 \log n_1$. Using these parameters, we obtain \[\exp( - 1.3 \times 10^{14}\times (1 + \log n_1) \log n_1) < (2\sqrt{5}+1)\alpha^{n_{2} -n_1}.\] Further, since $(1+\log n_1) < 2\log n_1$ and $\log \ell < \log n_1$, we get \begin{equation}\label{eq36} (n_1-n_2)\log \alpha < 3.5\times 10^{14} (\log n_1)^2, \end{equation} which leads to \begin{equation}\label{eq37} (n_1-n_2) < \frac{3.5\times 10^{14}}{\log \alpha} (\log n_1)^2 < 7.27 \times 10^{14} (\log n_1)^2. \end{equation} Similarly, the inequality corresponding to the second linear form is \begin{equation}\label{eq38}
| 1- \ell 2^{\ell} \sqrt{5} \alpha^{-n_1}(1 + \alpha^{n_2-n_1})^{-1}|\leq (2\sqrt{5}+1)\alpha^{-n_1}. \end{equation}
Here, we choose $ h(\gamma_1) = \log 2 = 0.6931\cdots < 0.7$, $h(\gamma_2)=(\log \alpha)/2 = 0.2406\cdots < 0.25$, $\gamma_3 := \sqrt{5} (1 + \alpha^{n_2-n_1})^{-1}$, and $ h(\gamma_4) = \log \ell < \log n_1.$ In this case, $A_1$ and $A_2$ are the same as in the previous case, and since \[2\left(\log \sqrt{5} + (n_1 - n_2)\frac{\log \alpha}{2} + \log 2\right) < 2.1 \times 10^{15} (\log n_1)^2, \] we take $A_3 := 2.1 \times 10^{15} (\log n_1)^2$. Again, employing Lemma \ref{lem12}, we obtain \begin{equation}\label{eq39} \exp(- 7.3 \times 10^{11}\times (1 + \log n_1)( 2.1 \times 10^{15} (\log n_1)^2) \log n_1 )\leq (2\sqrt{5}+1)\alpha^{-n_1} \end{equation} and this implies \begin{equation}\label{eq40} n_1 < \frac{3.3\times 10^{27}}{\log \alpha} (\log n_1)^4 < 6.9 \times 10^{27}(\log n_1)^4. \end{equation} Further, using a reduction procedure based on the LLL-algorithm \cite{s99}, we obtain $n_1- n_2 < 230$ and hence $n_1< 7\times 10^{18}$. This completes the proof.
\begin{remark} To explicitly find all the solutions of \eqref{fibeq01}, one needs to further reduce the size of $n_1$, and the usual tool for this is the Baker-Davenport reduction method (or results related to the Dujella-Peth\"o theorem). However, for this problem, we have a form like \[\ell \left(\frac{\log 2}{\log \alpha}\right) - n_1 + \left(\frac{\log \ell \sqrt{5}}{\log \alpha}\right).\] In this case, to use the reduction method, we would need a positive lower bound for $\epsilon$ depending on $\ell$, whose size $\approx 10^{15}$ makes the calculation infeasible. \end{remark}
\end{document}
\begin{document}
\newcommand{\csname @footnotemark\endcsname}{\csname @footnotemark\endcsname} \newcommand{{\bf MakeHeap\/}}{{\bf MakeHeap\/}} \newcommand{{\bf FindMin\/}}{{\bf FindMin\/}} \newcommand{{\bf DeleteMin\/}}{{\bf DeleteMin\/}} \newcommand{{\bf Delete\/}}{{\bf Delete\/}} \newcommand{{\bf Meld\/}}{{\bf Meld\/}} \newcommand{{\bf Cut\/}}{{\bf Cut\/}} \newcommand{{\bf Insert\/}}{{\bf Insert\/}} \newcommand{{\bf DecreaseKey\/}}{{\bf DecreaseKey\/}} \newcommand{\hbox{\sl right\/}}{\hbox{\sl right\/}} \newcommand{\hbox{\sl left\/}}{\hbox{\sl left\/}} \newcommand{\hbox{\sl null\/}}{\hbox{\sl null\/}} \newcommand{{\sl parent\/}}{{\sl parent\/}} \newcommand{\footnote}{\footnote}
\title{Fast Fibonacci heaps with worst case extensions}
\begin{abstract} We concentrate on reducing the overhead of comparison-based heaps with optimal worst-case behaviour. The paper is inspired by Strict Fibonacci Heaps \cite{StrictHeaps}, where G. S. Brodal, G. Lagogiannis, and R. E. Tarjan implemented a heap with the {\bf DecreaseKey\/}\ and {\bf Meld\/}\ interface in asymptotically optimal worst-case time (based on key comparisons). In the paper \cite{sewc}, the ideas were elaborated and it was shown that the same asymptotic times could be achieved with a strategy losing much less information from previous comparisons. There is a big overhead in the maintenance of violation lists in these heaps. We propose a simple alternative reducing this overhead. It allows us to implement fast amortized Fibonacci heaps, where the user could call some methods in variants guaranteeing worst-case time. If the user does so, the heaps are not guaranteed to be Fibonacci until an amortized version of a method is called. Of course we could call worst-case versions all the time, but as there is an overhead with the guarantee, calling amortized versions is the preferred choice when we are not concerned with the complexity of a separate operation.
We have shown that the full {\bf DecreaseKey\/}-{\bf Meld\/}\ interface could be implemented, but the {\bf Meld\/}\ interface is not natural for these heaps, so if {\bf Meld\/}\ is not needed, a much simpler implementation suffices. As we are not aware of an application requiring {\bf Meld\/}, we concentrate on the no{\bf Meld\/}\ variant, but we will show that the changes could be applied to the {\bf Meld\/}-including variant as well. The papers \cite{StrictHeaps}, \cite{sewc} showed that the heaps could be implemented on the pointer machine model. For fast practical implementations we would rather use arrays. Our goal is to reduce the number of pointer manipulations. Maintenance of ranks by pointers to rank lists would be unnecessary overhead. \end{abstract}
\section{Introduction} We will call the heaps from the paper \cite{StrictHeaps} BLT heaps, as their connection to Fibonacci is only negligible. We will call the heaps of the paper \cite{sewc} BLMT heaps, with {\bf Meld\/}\ and no{\bf Meld\/}\ variants. The BLT (resp.\ BLMT) heaps strategy is not to be too pedantic about the heap shape and to allow some types of violations. The violation sizes of each type are maintained bounded by the maximal rank $R$ plus 1. This gives a quadratic inequality bound for $R$, leading to $R\le 6+1.2\log_2 n$. To maintain degrees bounded by $O(\log n)$ in the {\bf Meld\/}\ variant, a list of heap nodes is required, and the first two nodes are moved to the end of the list after each heap size decrement by 1. A degree reduction is performed on the moved nodes. This ensures that $2n-p$ (where $p$ is the position in the list) remains constant for all nodes except the moved ones, so degree bounds by a function $b(2n-p)$ could be maintained. For the no{\bf Meld\/}\ variant, no degree reductions are required and no list of heap nodes is needed.
The violations are maintained in violation structures in BLT/BLMT heaps. In the concluding remarks of \cite{sewc} the big overhead of their maintenance is mentioned and caching is recommended. Our goal is to remove the violation structures altogether, and to use same rank identifying places with caching instead. Amortized versions would simply empty the caches at the end of each operation. Worst-case versions would need to plan the reductions and count the reductions made, so as not to exceed the declared time. We have to propose a violation/cache reduction strategy that ensures the violation sizes remain bounded by $R+1$ at the same time.
With an empty cache for loss violations we have a guarantee that no node has loss greater than 1, which is the invariant maintained by Fibonacci heaps; therefore at these times the Fibonacci sequence bounds the sizes of subtrees of nodes of a given rank. Actually, for big $n$ the total loss bound by $R(n)$ is more restrictive than the local loss bound at each node. So only for small ranks (and an empty cache) does the Fibonacci bound apply. The trees with bigger ranks are bounded more by the total loss. This is borderline for calling them Fibonacci; calling them (local/)global loss bounded would be more appropriate.
\section{Overhead of violation structures} Violations in BLT/BLMT heaps are maintained in doubly linked lists pointed into from ranks. Adding a first violation of a rank means making a pointer from the rank to the node and inserting the list node at the corresponding list end. Thus the end list pointer changes and 2 pointers in neighbours are updated (and one remains \hbox{\sl null\/}), so at least 4 pointer updates are required. Adding a 2nd violation of the same rank removes the node pointed to by the rank from the list, and inserts a pair of nodes at the other end of the list. This changes 2 neighbour pointers around the removed node, sets 2 pointers among the pair of nodes of the same rank, and changes the end list pointer and 2 pointers connecting the inserted pair with the list. This makes 7 pointer changes. When a node with loss 1 gets a 2nd loss, it was pointed to by a rank pointer and there is exactly one other node of the same rank in the violation list; the update is even bigger. The rank pointer should be changed, and the other node of the rank should be removed from the list and reinserted at the end of the list. The loss 2 node representant should be removed from the list as well and reinserted at the other end of the list. This affects 2 pointers to reconnect the list near the removal, both list end pointers change, and both moved nodes change both neighbours (\hbox{\sl null\/}\ at the end of the list should be changed to the inserted node, and the inserted node should change one pointer to a neighbour and the other to \hbox{\sl null\/}). This sums to 11 pointer changes. Violation reduction steps remove up to two nodes from the corresponding list end and either update the rank pointer or, if there is just one node remaining of the affected rank, remove the node and reinsert it at the other list end. This affects at most 6 pointers (not counting the overhead of garbage collection of reduced nodes).
We propose using same rank identifying places and a cache of not yet inserted violations. On the pointer machine model the list of ranks could be used as same rank identifying places (with one pointer reserved in the list node for each type of violation). We recommend an array for each type of violation, indexed by numeric ranks, when the pointer machine model is not required.
A cache could be any set structure supporting insert of a pointer value and pop (removing an arbitrary pointer value from the set and returning it). A singly linked list could be used on a pointer machine, requiring 2 pointer changes per update. An array (with guaranteed sufficient size), requiring 1 pointer change and one index value change per insert and no pointer change and one index value change per pop, is recommended when the pointer machine model is not needed.
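A minimal sketch of the array-backed cache described above (the class and method names are ours; the capacity is assumed to be known in advance), matching the cost accounting of one pointer write plus one index change per insert and one index change per pop:

```python
class Cache:
    """Array-backed set of pointers: insert appends, pop removes an
    arbitrary element (here the last one), as in the cost analysis above."""
    def __init__(self, capacity):
        self.slots = [None] * capacity  # guaranteed-sufficient size
        self.top = 0                    # index maintained per update

    def insert(self, node):
        self.slots[self.top] = node     # one pointer write
        self.top += 1                   # one index change

    def pop(self):
        self.top -= 1                   # one index change, no pointer write
        return self.slots[self.top]

    def empty(self):
        return self.top == 0
```

Which element pop returns is irrelevant for the reductions, so the cheapest choice (the last inserted) is used.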
Each violation would be inserted into the cache, and the cache would be processed during cache reductions. As each node could be part of at most one violation type, we maintain the type identification in the node (it could be empty; loss $\ge 2$ could be identified here as well). When a node changes rank, this allows us to localise the violation set in order to update the node rank there (to remove, change, and reinsert). The removal from the corresponding type of violation would check the corresponding pointer in the same rank identifying place. If it points to the node, we just change it to \hbox{\sl null\/}. Otherwise we just remember that it is in the cache. This leaves the violation update in the cache. If the node should be reinserted with the same violation type, we insert it into the cache (unless we know it is there already). If the node should be reinserted as another violation type, we insert it into the corresponding cache, but update the type of violation to which it should belong. If the node is no longer a violation, we should empty the type identification maintained in it. This means a node could be present several times in several caches, but it would represent at most one violation of its current violation type.
When a node from a cache is processed during a cache reduction, we first check that it belongs to the type of violation. If not, we just discard the cache info. When the node is a loss $\ge 2$ violation (allowed in the loss violation cache), we do the corresponding single node loss reduction immediately. Otherwise we check the same rank identifying place for the node rank. If it contains a pointer to the processed node, we just discard the cache info. If it contains \hbox{\sl null\/}, the pointer to the node is inserted there. Otherwise we have pointers to two nodes of the same rank and the corresponding violation reduction is performed. This puts \hbox{\sl null\/}\ into the same rank identifying place and could insert new violations into the corresponding caches.
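The processing just described can be sketched generically. This is only an illustration: the `vtype`, `rank`, and `loss` fields, the `places` array (the same rank identifying places of this violation type), and the `reduce_pair`/`loss_reduce` callbacks are hypothetical stand-ins for the corresponding structures:

```python
def cache_reduction_step(cache, vtype, places, reduce_pair, loss_reduce=None):
    """Process one cached entry: discard stale info, do a single-node
    loss reduction (loss cache only), register the node in its rank's
    place, or pair it with an equal-rank violation."""
    node = cache.pop()
    if node.vtype != vtype:
        return                    # node no longer of this type: discard
    if loss_reduce is not None and node.loss >= 2:
        loss_reduce(node)         # single node loss reduction
        return
    other = places[node.rank]
    if other is node:
        return                    # already registered: discard cache info
    if other is None:
        places[node.rank] = node  # register as the rank representative
    else:
        places[node.rank] = None  # two equal-rank violations: reduce them
        reduce_pair(node, other)
```

Any structure with `pop` works as the cache here; a plain list suffices for experiments.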
We actually use same rank identifying places the way they are used in {\bf FindMin\/}\ of amortized versions of Binomial/Fibonacci/Padovan heaps, except that we leave the pointers in the places even after {\bf FindMin\/}\ is finished. This reduces the overhead of same rank identifying places of Binomial/Fibonacci/Padovan heaps.
Arrays would be the preferable choice with regard to hardware caches. Using arrays in the no{\bf Meld\/}\ variant is natural, especially when we can predict the maximal heap size; otherwise we could use the worst-case variant of array doubling. The array doubling overhead would be negligible, as the arrays are of logarithmic sizes. The worst-case variant of array doubling would work even in the {\bf Meld\/}\ variant; it would need to copy two slots per array and operation in the scenario where a long sequence of melds of equally sized heaps occurs. Most other times the array would be less than half full, so no copying is needed.
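A sketch of the worst-case variant of array doubling mentioned above (the structure and field names are ours): once the array is at least half full, an array of twice the size is started and two slots are migrated per append, so no single operation pays for a full copy; lookups during a migration consult both arrays:

```python
class GrowArray:
    """Worst-case array doubling: migrate two slots per append."""
    def __init__(self):
        self.cur = [None] * 2   # current array
        self.new = None         # migration target, if a migration is running
        self.n = 0              # number of stored elements
        self.moved = 0          # slots already migrated into `new`
        self.old_n = 0          # elements living in `cur` when migration began

    def append(self, v):
        if self.new is None and self.n >= len(self.cur) // 2:
            self.new = [None] * (2 * len(self.cur))
            self.moved, self.old_n = 0, self.n
        if self.new is not None:
            for _ in range(2):               # copy two slots per operation
                if self.moved < self.old_n:
                    self.new[self.moved] = self.cur[self.moved]
                    self.moved += 1
            self.new[self.n] = v
            if self.moved >= self.old_n:     # migration finished: swap
                self.cur, self.new = self.new, None
        else:
            self.cur[self.n] = v
        self.n += 1

    def get(self, i):
        if self.new is None:
            return self.cur[i]
        # during migration, moved and freshly appended slots live in `new`
        return self.new[i] if (i < self.moved or i >= self.old_n) else self.cur[i]
```

The migration finishes before the new array itself becomes half full, so at most one migration is ever in progress.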
\section{Structure balancing overview} Except for the introduction of caches and the change of their maintenance strategy, the heaps work as in \cite{sewc}. In the {\bf Meld\/}\ variant there would be deferred and solid nodes, where deferring could be implicit. Solid children would be either rank or nonrank. The number of rank children corresponds to the rank of the node, which is bounded by some function $R(n)$. A solid nonrank child is a rank tree root. We will maintain all rank tree roots as violations to ensure that their number does not exceed $R(n)+1$. Unfortunately, if we maintained all rank tree roots together, violation reductions would allow series where an increase of degree would force a degree reduction resulting in a new rank tree root violation, so there would be no visible progress in reducing the rank tree roots count. Therefore we create two violation types for rank tree roots: one where the roots with guaranteed degree reserves are added, and the other for the rest (this prevents eager conversion of most deferred children to solid). Each degree reduction ensures the guarantee, and linking two guaranteed rank roots does not require a degree reduction. Therefore the number of rank roots is bounded by $2R(n)+2$; one of them is the main root, so at most $2R(n)+1$ rank roots could be children.
We would maintain all nodes with loss as violations of the loss type. The number of solid children is therefore bounded by $3R(n)+1$, where $R(n)$ is the maximal possible rank for a heap with $n$ nodes and total loss at most $R(n)+1$. Estimating the bigger root of the corresponding quadratic equation gives us $R(n)\le 6+1.2\log_2 n$.
In the case there could be deferred children ({\bf Meld\/}\ variant), we should maintain the number of children bounded by $O(\log n)$ explicitly. It is sufficient to regularly do node degree reductions. Such a reduction either reduces the degree of a node by 2 or ensures the node has no more than 2 deferred children. Each {\bf Meld\/}\ ensures the bounds hold for newly implicitly deferred nodes and the bounds do not decrease for other nodes. Whenever the heap size is decremented (due to {\bf DeleteMin\/}), two heap nodes are degree reduced and their bounds are changed. A list of heap nodes, with the first two nodes removed from it and reinserted as the last two (after two node degree reductions), would do what is needed, when bounds are defined by the function $3R(2n-p)+c$, where $p$ is the position of the node in the list, and $c$ is a small natural number depending on whether the node is solid without loss or not. Actually $c$ allows one more child (4+1) for solid nodes with zero loss than for others (4). The degree reductions would maintain the bounds implicitly, without knowing their actual values.
We present the {\bf Meld\/}\ variant first and the simple no{\bf Meld\/}\ variant at the end.
\section{Violation reductions} \vbox to 86mm{ \hbox{\kern7mm\pdfximage width 14cm {sewc_red.pdf} \rlap{\smash{\pdfsave\pdfsetmatrix{0 -0.56 0.56 0} \pdfrefximage\pdflastximage}}\pdfrestore \hss} \vss \hbox{Figure 1: Reductions to maintain the heap shape with complications for {\bf Meld\/}\ support} \kern4mm }
\begin{table}
\begin{center}
\caption{Effect of different transformations $3|GR|+4|GC|+5|AR|+6|AC|+10|LR|+11|LC|=\Phi$}
\label{tab:eff}
\begin{tabular}{lrrrrrrrr}
Reduction & $|GR|$ & $|GC|$ & $|AR|$ & $|AC|$ & $|LR|$ & $|LC|$ & $\Phi$ & $p$\\
\hline
\hline
node degree & $0$ & $0$ & $0$ & $+1$ & $0$ & $0$ & $+6$ & $1$\\
\hline
$|AC|$ discard & $0$ & $0$ & $0$ & $-1$ & $0$ & $0$ & $-6$ & $0$ \\
\hline
$|AC|$ type $A$ no match & $0$ & $0$ & $+1$ & $-1$ & $0$ & $0$ & $-1$ & $1$ \\
\hline
$|AC|$ type $A$ matched & $0$ & $+1$ & $-1$ & $\le 0$ & $0$ & $0$ & $\le -1$ & $\le 3$\\
\ - no 3 deferred children & $0$ & $+1$ & $-1$ & $-1$ & $0$ & $0$ & $-7$ & $2$ \\
\ - 3 deferred children & $0$ & $+1$ & $-1$ & $0$ & $0$ & $0$ & $-1$ & $3$ \\
\hline
$|GC|$ discard & $0$ & $-1$ & $0$ & $0$ & $0$ & $0$ & $-4$& $0$ \\
\hline
$|GC|$ type $G$ no match & $+1$ & $-1$ & $0$ & $0$ & $0$ & $0$ & $-1$& $1$ \\
\hline
$|GC|$ type $G$ matched & $-1$ & $-1$ & $0$ & $+1$ & $0$ & $0$ & $-1$ & $2$\\
\hline
$|LC|$ discard & $0$ & $0$ & $0$ & $0$ & $0$ & $-1$& $-11$ & $0$ \\
\hline
$|LC|$ subtype $L'$ & $\le 0$ & $\le +2$ & $\le 0$ & $\le +1$ & $\le 0$ & $\le 0$ & $\le -1$ & $\le 3$\\
\ - parent G in $GR$& $-1$ & $+2$ & $0$ & $0$ & $0$ & $\le -2$ & $\le -17$ & $3$ \\
\ - parent G in $GC$& $0$ & $+1$ & $0$ & $0$ & $0$ & $\le -2$ & $\le -18$ & $1$ \\
\ - parent A in $AR$& $0$ & $+1$ & $-1$ & $+1$ & $0$ & $\le -2$ & $\le -17$ & $3$ \\
\ - parent A in $AC$& $0$ & $+1$ & $0$ & $0$ & $0$ & $\le -2$ & $\le -18$ & $1$ \\
\ - parent L in $LR$& $0$ & $+1$ & $0$ & $0$ & $-1$ & $\le 0$ & $\le -6$ & $3$ \\
\ - parent L* in $LC$& $0$ & $+1$ & $0$ & $0$ & $0$ & $\le -1$ & $\le -7$ & $1$ \\
\ - parent N, no 3 deferred children& $0$ & $+1$ & $0$ & $0$ & $0$ & $\le -1$ & $\le -7$ & $2$ \\
\ - parent N, 3 deferred children& $0$ & $+1$ & $0$ & $+1$ & $0$ & $\le -1$ & $\le -1$ & $3$ \\
\hline
$|LC|$ subtype $L$ no match & $0$ & $0$ & $0$ & $0$ & $+1$ & $-1$& $-1$ & $1$ \\
\hline
$|LC|$ subtype $L$ matched & $\le 0$ & $\le +1$ & $\le 0$ & $0$ & $\le 0$ & $\le +1$ & $\le -9$ & $\le 3$ \\
\ - parent of $h$ G in $GR$& $-1$ & $+1$ & $0$ & $0$ & $-1$ & $-1$ & $-20$ & $3$ \\
\ - parent of $h$ G in $GC$& $0$ & $0$ & $0$ & $0$ & $-1$ & $-1$& $-21$ & $1$ \\
\ - parent of $h$ A in $AR$& $0$ & $+1$ & $-1$ & $0$ & $-1$ & $-1$ & $-22$ & $3$ \\
\ - parent of $h$ A in $AC$& $0$ & $+1$ & $0$ & $0$ & $-1$ & $-1$ & $-17$ & $2$ \\
\ - parent of $h$ L in $LR$& $0$ & $0$ & $0$ & $0$ &$-2$ & $+1$ & $-9$ & $3$ \\
\ - parent of $h$ L* in $LC$& $0$ & $0$ & $0$ & $0$ & $-1$ & $0$ & $-10$ & $1$ \\
\ - parent of $h$ N& $0$ & $0$ & $0$ & $0$ & $-1$ & $0$ & $-10$ & $2$ \\
\hline
\end{tabular}
\end{center}
\vskip 4pt
Here $p$ denotes the number of pointer changes not reflected in the heap trees during a reduction when arrays are used for caches.
We can see each cache reduction decrements $\Phi$ by at least 1. \end{table}
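As a sanity check, the exact-delta cache-reduction rows of Table \ref{tab:eff} can be verified mechanically against the weights of $\Phi$ (a small sketch; the row labels are informal, and the inequality rows are omitted):

```python
# Weights of the potential Phi = 3|GR| + 4|GC| + 5|AR| + 6|AC| + 10|LR| + 11|LC|.
W = {"GR": 3, "GC": 4, "AR": 5, "AC": 6, "LR": 10, "LC": 11}

# Exact-delta cache-reduction rows of Table 1.
rows = {
    "AC discard":                ({"AC": -1}, -6),
    "AC type A no match":        ({"AR": 1, "AC": -1}, -1),
    "AC matched, no 3 deferred": ({"GC": 1, "AR": -1, "AC": -1}, -7),
    "AC matched, 3 deferred":    ({"GC": 1, "AR": -1}, -1),
    "GC discard":                ({"GC": -1}, -4),
    "GC type G no match":        ({"GR": 1, "GC": -1}, -1),
    "GC type G matched":         ({"GR": -1, "GC": -1, "AC": 1}, -1),
    "LC discard":                ({"LC": -1}, -11),
    "LC subtype L no match":     ({"LR": 1, "LC": -1}, -1),
}

for name, (delta, phi) in rows.items():
    s = sum(W[k] * v for k, v in delta.items())
    assert s == phi, name   # weighted delta matches the Phi column
    assert s <= -1, name    # every cache reduction decrements Phi
```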
Deferred nodes would be made by the {\bf Meld\/}\ method. Similarly as in BLT heaps, the nodes of the smaller heap would become deferred implicitly. Implicitly deferred nodes cannot have solid children. Deferred nodes would be accessed during degree reductions of their parents, during {\bf DeleteMin\/} s when the heap root was the deferred node's parent, or by a degree reduction of the deferred node reflecting the decrease of heap size when it is moved from the start of the heap node list to its end. When an implicitly deferred node is first accessed, the pointer responsible for implicit deferring is removed and the node is converted to explicitly deferred. Its heap pointer is redirected to the current heap. All its children are deferred (either implicitly or explicitly) at this time, so they are rightmost. No solid rank child is allowed for explicitly deferred nodes, but nonrank solid children are allowed. The deferred nodes' violation type is $N$ (no violation) by default.
A degree reduction step on a node $x$ would be made similarly to a root degree reduction in BLT heaps. If node $x$ is implicitly deferred, it is converted to explicitly deferred. If the rightmost 3 children are deferred, we convert them to explicitly deferred if not converted yet and we remove them from the children list of $x$. We make 3 comparisons to find the order of their keys; let node $s$ have the smallest, $m$ the middle, and $h$ the highest key. We continue by making $s$ and $m$ solid. We create a rank edge making $s$ a root of rank 1 having the solid rank child $m$ of rank 0. We make $h$ a deferred child of $m$, whose rank stays 0. Finally $s$ is linked as a nonrank (leftmost) child of $x$. Degree constraints are OK for $s$ and $m$, as they become solid with loss 0. A new rank root without guaranteed degree reserve was created. The degree of $x$ was reduced by 2.
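The degree reduction step just described can be sketched on a simplified node model (the `key`, `children`, `deferred`, `rank`, and `solid_child` fields are our illustrative names, not a prescribed layout):

```python
def degree_reduction_step(x):
    """One degree reduction on node x, as described above: if the
    rightmost 3 children are deferred, make the two smallest-keyed ones
    solid (s of rank 1 with rank child m), hang the largest h deferred
    under m, and relink s as the leftmost nonrank child of x."""
    if len(x.children) < 3 or not all(c.deferred for c in x.children[-3:]):
        return None
    c1, c2, c3 = x.children[-3:]
    del x.children[-3:]
    s, m, h = sorted((c1, c2, c3), key=lambda c: c.key)  # 3 key comparisons
    s.deferred = m.deferred = False   # s and m become solid with loss 0
    s.rank, m.rank = 1, 0
    s.solid_child = m                 # rank edge: m is the rank child of s
    m.children.append(h)              # h stays deferred, now under m
    x.children.insert(0, s)           # s becomes a nonrank (leftmost) child
    return s                          # new rank root without guaranteed reserve
```

Three children are removed and one is relinked, so the degree of `x` drops by exactly 2, matching the text.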
Rank roots without guaranteed reserve would be maintained as violations of type $A$. Violations of type $A$ would be inserted into the cache $AC$ and from the cache into the same rank identifying places $AR$. Similarly, rank roots with guaranteed reserve would be maintained as violations of type $G$, using the cache $GC$ and the same rank identifying places $GR$.
During an $AC$ reduction, if the processed cache node $x$ is no longer of type $A$, the cache item is discarded. The cache item is discarded as well if $AR$ for $x$'s rank points to $x$. Another case is that $AR$ for $x$'s rank contains \hbox{\sl null\/}; then the pointer to $x$ is stored there, and the $AC$ reduction step ends. The last, and most important, case is when $AR$ for $x$'s rank points to another violation $y$ of the same rank and actual violation type $A$; the reduction step could be applied after putting \hbox{\sl null\/}\ into $AR$ of $x$'s rank. The violation reduction step of type $A$ links the nodes $x$, $y$ of the same rank. (Their keys are compared; let node $s$ be the one with the smaller key and $h$ the other. We cut $h$ from its parent (if a nonrank edge exists) and put it as a rank child of $s$. This increases the rank of $s$ as well as its degree. A degree reduction is performed on $s$, which makes $s$ an active root with guaranteed reserve. So $h$'s violation type is changed to $N$, and the active root possibly created by the degree reduction would be added to $AC$. Node $s$'s violation type is changed to $G$, and $s$ is added to $GC$.)
During a $GC$ reduction, the same trivial cases appear (just use $G$, $GC$ resp.\ $GR$ instead of $A$, $AC$ resp.\ $AR$). The last, and most important, case is the violation reduction step of type $G$, linking nodes $x$, $y$ of the same rank after putting \hbox{\sl null\/}\ into $GR$ of $x$'s rank. (Their keys are compared; let node $s$ be the one with the smaller key and $h$ the other. We cut $h$ from its parent (if a nonrank edge exists) and put it as a rank child of $s$. This increases the rank of $s$ as well as its degree. A degree reduction is not performed on $s$, as there was a degree reserve. So $h$'s violation type is changed to $N$, $s$'s violation type to $A$, and $s$ is added to $AC$.)
Whenever a rank child's rank is decremented, its loss is incremented. All nodes with nonzero loss would be maintained as violations of type $L*$;
this type has subtype $L$ for nodes with loss exactly 1 and subtype $L'$ for nodes with loss at least 2. Violations of type $L*$ would be inserted into the cache $LC$, from which violations of subtype $L$ will be inserted into the same rank identifying places $LR$. The symbol $|LC|$ has a weighted meaning: the weight of a node of subtype $L'$ corresponds to the loss of the node, while all other weights are 1 (including nodes of other type than $L*$).
Similarly as for an $AC$ reduction, when during an $LC$ reduction the node $x$ is no longer of type $L*$, or $LR$ of $x$'s rank points to $x$, the cache item is discarded. Different is the second case, when $x$'s subtype is $L'$. It invokes a one node loss reduction, which takes the node $x$ with loss at least 2 and makes it a nonrank child of its parent $p$. This creates a new rank root $x$ (with loss 0 and guaranteed degree reserve), so $x$ is put into $GC$ and the violation type of $x$ is changed to $G$. The rank of $p$ is decremented. Unless the violation type of $p$ is $N$, $p$ should be removed from the rank identifying place identified by its type (if there was \hbox{\sl null\/}\ in the place, we know $p$ resides in the cache). If $p$ is a rank child, it should be inserted into $LC$ and its type changed to $L*$ (if it does not reside there already); its loss is increased and its subtype changed accordingly. The total loss was reduced by at least 1. The degree of $p$ could have been at its limit, and the limit was decremented if the loss changed from 0 to 1; therefore a degree reduction should be called on $p$ if it changed loss from 0 to 1 (which could insert a new violation of type $A$ into $AC$). If $p$ was a rank root, its violation type does not change, as both the limit and the degree did not change, so $p$ should just be inserted into the cache of its type unless it already resides there. The third case of the $LC$ reduction is for the $L$ subtype, when the rank identifying place $LR$ of $x$'s rank contains \hbox{\sl null\/}. As for the $AC$ reduction, the pointer to $x$ is stored in $LR$ and the $LC$ reduction step ends. The last case is when $LR$ for $x$'s rank (for a node of subtype $L$) points to another violation $y$ of the same rank and actual violation type $L$; the reduction step could be applied after putting \hbox{\sl null\/}\ into $LR$ of $x$'s rank. The violation reduction step of type $L$ for nodes $x$, $y$ of equal rank and loss 1 links the two nodes. (Their keys are compared; let $h$ and $s$ be the nodes with the higher and smaller keys, respectively.
Remove $h$ from its parent and link it under $s$ by a rank edge.
This reduces the loss of $s$ to 0 and sets the loss of $h$ to 0, so the violation types of both $s$ and $h$ are changed to $N$.
The rank of the original parent $p$ of $h$ is decremented by 1.
Unless the violation type of $p$ is $N$,
$p$ should be removed from the rank identifying place identified by its type (if the place contained \hbox{\sl null\/}, we know $p$ resides in the cache).
If $p$ is a rank child, its type should be changed to $L*$ and $p$ inserted into $LC$ (unless it already resides there),
with its loss increased and its subtype changed accordingly. The total loss was reduced by at least 1.
The degree constraints for $s$ as well as for $p$ remain satisfied.
If $p$ was a rank root, it gained a degree reserve, so its type should be changed to $G$ and $p$ inserted into $GC$ (unless it already resides there).)
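The type-$L$ reduction just described can be sketched in code. This is a minimal Python sketch under simplifying assumptions: a hypothetical `Node` class with plain lists for children (the real structure uses doubly linked lists), and all cache and rank-identifying-place bookkeeping left to the caller.

```python
class Node:
    def __init__(self, key, rank=0, loss=0):
        self.key = key
        self.rank = rank
        self.loss = loss
        self.parent = None
        self.children = []  # leftmost child first

def reduce_loss_pair(x, y):
    """Type-L reduction: x, y have equal rank and loss 1.

    The node with the higher key (h) is removed from its parent p and
    linked under the other node (s) by a rank edge; both losses drop
    to 0 and p's rank is decremented.  Returns (s, p) so the caller
    can update p's violation type and the caches.
    """
    s, h = (x, y) if x.key < y.key else (y, x)
    p = h.parent
    p.children.remove(h)
    p.rank -= 1                 # caller brings p's violations up to date
    h.parent = s
    s.children.insert(0, h)     # solid (rank) children are leftmost
    s.loss = 0                  # s regains a rank child of fitting rank
    h.loss = 0
    return s, p
```

The returned pair $(s, p)$ mirrors the prose: $s$ needs its violation type set to $N$, and $p$ needs the rank-decrement follow-up described above.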
Figure 1 shows the reductions, and Table \ref{tab:eff} shows their effect.
For the amortized versions, we perform cache reductions until all caches are empty.
The process must terminate, as $\Phi=3|GR|+4|GC|+5|AR|+6|AC|+10|LR|+11|LC|$ is decremented by each cache reduction; $\Phi$ can be used as a potential to pay for the cache reductions. For the worst-case variants, the strategy to keep violations within bounds calculates the changes of $\Phi_G=3|GR|+4|GC|$, $\Phi_A=5|AR|+6|AC|$, and $\Phi_L=10|LR|+11|LC|$ from the start of the method. In each coordinate, a positive change at the end of the method is allowed only if the corresponding cache is empty. So while a coordinate has a positive change and a nonempty cache, the corresponding cache reduction step is invoked. Again, as $\Phi=\Phi_G+\Phi_A+\Phi_L$ is decremented by each cache reduction, we can easily bound the number of required reductions.
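As a small illustration, the potential and its coordinates can be computed directly from the cache and rank-place sizes. A minimal sketch, with the sizes passed as plain integers:

```python
def phi_coords(nGR, nGC, nAR, nAC, nLR, nLC):
    """Coordinates of the potential: Phi = Phi_G + Phi_A + Phi_L."""
    phi_G = 3 * nGR + 4 * nGC
    phi_A = 5 * nAR + 6 * nAC
    phi_L = 10 * nLR + 11 * nLC
    return phi_G, phi_A, phi_L

def phi(nGR, nGC, nAR, nAC, nLR, nLC):
    """Phi = 3|GR| + 4|GC| + 5|AR| + 6|AC| + 10|LR| + 11|LC|."""
    return sum(phi_coords(nGR, nGC, nAR, nAC, nLR, nLC))
```

Each cache reduction step decreases `phi` by at least 1, which is what makes the termination argument work.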
Let $\Delta^0 \Phi_G$, $\Delta^0 \Phi_A$, $\Delta^0 \Phi_L$ be the values at the start of the reducing process, and let $\Delta^1 \Phi_G=\max(-6,\Delta^0 \Phi_G)$, $\Delta^1 \Phi_A=\max(-4,\Delta^0 \Phi_A)$, and $\Delta^1 \Phi_L=\max(-10,\Delta^0 \Phi_L)$. Violation reductions cannot finish earlier provided we start with at least the same coordinates of $\Delta\Phi$, so if we bound the number of violation steps starting from $\Delta^1 \Phi_G$, $\Delta^1 \Phi_A$, $\Delta^1 \Phi_L$, this bounds the real value.
Let $\Delta^E \Phi_G$, $\Delta^E \Phi_A$, $\Delta^E \Phi_L$ be the values at the end of the reducing process started from $\Delta^1 \Phi_G$, $\Delta^1 \Phi_A$, $\Delta^1 \Phi_L$. The 3-deferred-children case reduces the $\Phi$ coordinates at least as much as the case without 3 deferred children, so we can exclude it from the analysis, as well as the cases where a cache item is just discarded.
Let us also exclude the $|LC|$-reducing case which decrements $|AR|$, because it reduces the $\Phi$ coordinates at least as much as the next case in the table. Now all remaining cases reduce only one coordinate; the remaining coordinates can only increase.
Then $\Delta^E \Phi_G\ge -6$, as each reduction of $|GC|$ decreases $\Phi_G$ by at most 7. Similarly $\Delta^E \Phi_A\ge -4$, as each nonexcluded reduction of $|AC|$ decreases $\Phi_A$ by at most 5. If we consider the last $|LC|$-decreasing step as decreasing $|LR|+|LC|$ by 1, the change of $\Phi$ would still be at most $-1$ and $\Phi_L$ would change by at most $-11$. As $\Phi$ decreases by at least 1 in each considered reduction, there can be at most $\Delta^1\Phi_G-\Delta^E \Phi_G+\Delta^1 \Phi_A-\Delta^E \Phi_A+\Delta^1 \Phi_L+10\le \max(-6,\Delta^0\Phi_G)+\max(-4,\Delta^0 \Phi_A)+\max(-10,\Delta^0 \Phi_L)+20$ reduction steps in total.
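The resulting worst-case bound on the number of reduction steps is a one-line formula; a sketch taking the $\Delta^0$ coordinates as arguments:

```python
def max_reduction_steps(d0_phi_G, d0_phi_A, d0_phi_L):
    """Bound max(-6, d) + max(-4, d) + max(-10, d) + 20 on the number
    of cache reduction steps, per the analysis above."""
    return (max(-6, d0_phi_G)
            + max(-4, d0_phi_A)
            + max(-10, d0_phi_L)
            + 20)
```

Note the bound is constant whenever the $\Delta^0$ coordinates are bounded by constants, which is what makes the non-{\bf DeleteMin\/} methods constant-time.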
We know $|GR|\le R(n)+1$, $|AR|\le R(n)+1$, and $|LR|\le R(n)+1$. This defines equilibrium values of $\Phi_G\le 3R(n)+3$ for the case $|GC|=0$, $\Phi_A\le 5R(n)+5$ for the case $|AC|=0$, and $\Phi_L\le 10R(n)+10$ for the case $|LC|=0$. We will always use the amortized version of {\bf DeleteMin\/}, which has worst-case time $O(\log n)$. This guarantees that after each heap size decrement the $\Phi$ coordinates are at most at their equilibrium values.
When a worst-case method is called, we get either a smaller $\Phi$ coordinate or an empty cache, so a $\Phi$ coordinate never increases above its equilibrium value. Moreover, $\Phi_G$ bounds $|GR|+|GC|$ by $\Phi_G/3$, $\Phi_A$ bounds $|AR|+|AC|$ by $\Phi_A/5$, and $\Phi_L$ bounds $|LR|+|LC|$ by $\Phi_L/10$, so the number of nodes of violation type $G$ is bounded by $|GR|+|GC|\le \Phi_G/3\le R(n)+1$, the number of nodes of violation type $A$ is bounded by $|AR|+|AC|\le \Phi_A/5\le R(n)+1$, and the total loss is bounded by $|LR|+|LC|\le \Phi_L/10\le R(n)+1$.
Degree reduction gives us equilibrium bounds for node degrees. We already know the number of solid children does not exceed $3R(n)+1$. If a node has at least $3R(n)+4$ children, at least 3 of them must be deferred and a degree reduction can be performed. For our analysis it suffices to define degree bounds $b(2n-p)=3R(2n-p)+5$ for solid nodes with loss 0 and $b(2n-p)=3R(2n-p)+4$ for other nodes (with $p$ being the position in the global list of nodes). With the estimate $R(2n-p)\le 6+1.2\log_2(2n-p)$ we get $b(2n-p)\le 23+4\log_2(2n-p)$ resp.\ $b(2n-p)\le 22+4\log_2(2n-p)$, so we have to plan a degree reduction budget of 4 when moving from the start of the heap node list to its end to compensate for the decrement of $n$, which corresponds to planning two degree reduction steps per checked node.
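The arithmetic behind these degree bounds can be checked numerically. A sketch assuming the stated estimate $R(m)\le 6+1.2\log_2 m$ (the function names here are illustrative, not from the paper):

```python
import math

def rank_bound(m):
    """Assumed estimate R(m) <= 6 + 1.2 * log2(m)."""
    return 6 + 1.2 * math.log2(m)

def degree_bound(m, solid_loss0):
    """b(m) = 3R(m) + 5 for solid loss-0 nodes, 3R(m) + 4 otherwise."""
    return 3 * rank_bound(m) + (5 if solid_loss0 else 4)
```

Since $3\cdot 1.2 = 3.6 \le 4$, these values stay below $23+4\log_2 m$ resp.\ $22+4\log_2 m$ for every $m\ge 1$, matching the text.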
As in BLT heaps, linking rank roots which are nonrank children introduces a situation that cannot happen when comparing only tree roots. If keys could be equal, an arbitrary choice of the comparison result would allow choosing $h$ to be a predecessor of $s$, resulting in a broken tree and a cycle. To prevent this we require all keys to be different. If this is not guaranteed from outside, the solution is to generate (distinct) ids for key nodes and break ties by id comparisons.
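Tie-breaking by generated ids can be sketched as follows; `uid` and `precedes` are hypothetical names, and the global counter stands in for the id generation discussed in the concluding remarks:

```python
import itertools

_uid = itertools.count()  # global id generator; its range far exceeds memory

class KeyNode:
    def __init__(self, key):
        self.key = key
        self.uid = next(_uid)   # assigned at creation, never repeated

    def precedes(self, other):
        # Lexicographic comparison gives a total order even for equal
        # keys: ties are broken by the distinct uids.
        return (self.key, self.uid) < (other.key, other.uid)
```

With this order, for any two distinct nodes exactly one of `a.precedes(b)` and `b.precedes(a)` holds, so the broken-tree situation above cannot arise.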
\section{Heap structure}
The heap information contains size info initialized to 0, a reference count initialized to 0, and pointers to the list of heap tree roots and to the list of all heap nodes. It contains the same rank identifying places $GR$, $AR$, $LR$ and the caches $GC$, $AC$, and $LC$. All the lists are maintained doubly linked, with the left pointers maintained cyclically (the left pointer of the leftmost node points to the rightmost). This allows access to both ends in constant time, as well as adding or removing a given node. The list of heap tree roots uses sibling pointers maintained in the heap nodes; the list of all heap nodes can likewise use pointers stored internally in the heap nodes.
In the pointer machine case we would maintain a doubly linked list of ranks, and a rank would be represented by a pointer into it. In the array version this is not required, and we use arrays with worst-case doubling instead. We implement the caches as stacks (last in, first out).
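The heap information just described could be declared as follows. This is a minimal sketch: Python lists and dicts stand in for the doubly linked lists and the worst-case-doubled arrays, and the field names are illustrative.

```python
class HeapInfo:
    """Per-heap bookkeeping described above (simplified sketch)."""
    def __init__(self):
        self.size = 0          # -1 marks a heap melded into another
        self.refcount = 0
        self.roots = []        # list of heap tree roots
        self.nodes = []        # global list of all heap nodes
        # rank identifying places: rank -> node (missing key ~ null)
        self.GR = {}
        self.AR = {}
        self.LR = {}
        # caches implemented as LIFO stacks
        self.GC = []
        self.AC = []
        self.LC = []
```

A real implementation would replace the lists by the cyclic doubly linked lists so that both ends are reachable and a given node is removable in constant time.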
When the size info is $-1$ and the reference count is 0, the heap information is discarded.
Whether a node is a rank child, a nonrank child, or explicitly deferred is maintained in the node state, but this is overridden by the node being a heap tree root or being implicitly deferred.
Each node which points to heap information with size info $<0$ is implicitly deferred. It can be made explicitly deferred by pointing it to the current heap and setting the corresponding state; the reference counter of the original heap is then decremented and the reference counter of the current heap incremented.
\section{Implementation of methods}
We will describe the methods using private blocks. Their use could be slightly optimized (for example, setting a pointer twice during a method without reading it between the changes could be avoided), but the decomposition into blocks makes the description easier.
\begin{table}
\begin{center}
\caption{Effect of private blocks to violations}
\label{tab:privblock}
\begin{tabular}{lrrrrr}
Method & $\Phi_G$ & $\Phi_A$ & $\Phi_L$ & $\Phi$ & $p$\\
\hline
\hline
heap size decrement & $0$ & $\le 24$ & $0$ & $\le 24$ & $\le 7$ \\
set violation type $G$& $\le +4$ & $\le 0$ & $\le 0$ & $\le +4$ & $\le 2$ \\
\ | from $G$ & $\le +1$ & $0$ & $0$ & $\le +1$ & $\le 2$ \\
\ | from $A$ & $+4$ & $\le 0$ & $0$ & $\le +4$ & $\le 2$ \\
\ | from $L*$ & $+4$ & $0$ & $\le 0$ & $\le +4$ & $\le 2$ \\
\ | from $N$ & $+4$ & $0$ & $0$ & $+4$ & $1$ \\
set violation type $A$& $\le 0$ & $\le +6$ & $\le 0$ & $\le +6$ & $\le 2$ \\
\ | from $G$ & $\le 0$ & $+6$ & $0$ & $\le +6$ & $\le 2$ \\
\ | from $A$ & $0$ & $\le +1$ & $0$ & $\le +1$ & $\le 2$ \\
\ | from $L*$ & $0$ & $+6$ & $\le 0$ & $\le +6$ & $\le 2$ \\
\ | from $N$ & $0$ & $+6$ & $0$ & $+6$ & $1$ \\
set violation type $L*$& $\le 0$ & $\le 0$ & $\le +12$ & $\le +12$ & $\le 2$ \\
\ | from $G$ & $\le 0$ & $0$ & $+11$ & $\le +11$ & $\le 2$ \\
\ | from $A$ & $0$ & $\le 0$ & $+11$ & $\le +11$ & $\le 2$ \\
\ | from $L*$ & $0$ & $0$ & $\le +12$ & $\le +12$ & $\le 2$ \\
\ | from $N$ & $0$ & $0$ & $+11$ & $+11$ & $1$ \\
set violation type $N$& $\le 0$ & $\le 0$ & $\le 0$ & $\le 0$ & $\le 1$ \\
\ | from $G$ & $\le 0$ & $0$ & $0$ & $\le 0$ & $\le 1$ \\
\ | from $A$ & $0$ & $\le 0$ & $0$ & $\le 0$ & $\le 1$ \\
\ | from $L*$ & $0$ & $0$ & $\le 0$ & $\le 0$ & $\le 1$ \\
\ | from $N$ & $0$ & $0$ & $0$ & $0$ & $0$ \\
rank decrement& $\le +4$ & $\le +1$ & $\le +12$ & $\le +12$ & $\le 2$ \\
\ | $\alpha$ rank root & $\le +4$ & $\le 0$ & $0$ & $\le +4$ & $\le 2$ \\
\ | $\beta$ rank root & $\le +1$ & $\le +1$ & $0$ & $\le +1$ & $\le 2$ \\
\ | $N$ & $0$ & $0$ & $+11$ & $+11$ & $\le 1$ \\
\ | $L$ & $0$ & $0$ & $+12$ & $+12$ & $\le 2$ \\
\ | $L'$ & $0$ & $0$ & $0$ & $0$ & $0$ \\
add a solid child& $\le +4$ & $\le +6$ & $0$ & $\le +10$ & $\le 3$ \\
\ | $G\to A$ & $\le 0$ & $\le +6$ & $0$ & $\le +6$ & $\le 2$ \\
\ | $A\to G$ & $\le +4$ & $\le +6$ & $0$ & $\le +10$ & $\le 3$ \\
\ | $N$ or $L*$ & $0$ & $0$ & $0$ & $0$ & $0$ \\
child removal& $\le +4$ & $\le +1$ & $\le +12$ & $\le +12$ & $\le 2$ \\
link & $\le +8$ & $\le +7$ & $\le +12$ & $\le +22$ & $\le 6$ \\
\ + $h$ removal from $p$ & $\le +4$ & $\le +1$ & $\le +12$ & $\le +12$ & $\le 2$ \\
\ + add $h$ as child to $s$ & $\le +4$ & $\le +6$ & $0$ & $\le +10$ & $\le 3$ \\
\ + $h$ type to $N$ & $\le 0$ & $\le 0$ & $\le 0$ & $\le 0$ & $\le 1$ \\
link of rank roots& $\le +4$ & $\le +6$ & $\le 0$ & $\le +10$ & $\le 4$ \\
\hline
\end{tabular}
\end{center}
\vskip 4pt
Here $p$ again denotes the number of pointer changes not reflected in the heap trees when arrays are used for caches
(including the heap node list pointer changes). \end{table}
Before a public worst-case method is called, the $\Phi$ coordinates do not exceed the equilibrium values. During the method, the coordinate changes $\Delta\Phi$ are maintained. The worst-case version of {\bf FindMin\/}\ performs cache reductions as described in the previous two sections. Every other public method calls {\bf FindMin\/}\ and does not introduce new violations after the return.
Whenever we decrement the size of the heap, we decrement the reference count as well, and twice we remove the first node $f$ of the list of heap nodes (if it exists), make two degree reductions on $f$, and put $f$ at the end of the list. This makes the degree constraints hold for all nodes of the heap (assuming they held prior to the decrement).
Whenever we set the violation type of a node $x$, we remove $x$ from the same rank identifying place of the original type (unless that type is $N$), and we insert it into the cache corresponding to the new type (unless the new type is $N$ or we know the node is already there).
Whenever we decrement the rank of a node $p$, the violations should be brought up to date. We should know whether the decrement is caused by $\alpha$) rank child removal or $\beta$) rank child conversion to a nonrank child. If $p$ is a rank root, its violation type should be set to $G$ in case $\alpha$ and to its original value in case $\beta$\footnote{by the already described method}. If $p$ is not a rank root, its loss is increased. If the violation type of $p$ is $N$, the loss becomes 1, and we set the violation type to $L*$ and the subtype to $L$\csname @footnotemark\endcsname. If the loss was 1 (violation type $L*$ and subtype $L$), we set the violation type to $L*$ and the subtype to $L'$\csname @footnotemark\endcsname. Only in the case the loss of $p$ was at least 2 (violation type $L*$ and subtype $L'$) do we know $p$ already has the proper type and subtype and is in $LC$, so no update is needed.
Whenever we add a solid child $c$ to a rank root $p$, $p$ must have violation type $A$ or $G$, and its violation type should be set to the other one\csname @footnotemark\endcsname. When the type changes from $A$ to $G$, we should call degree reduction on $p$ (which could create a new violation of type $A$) to finish the rank increment. There is no side effect when we add a child to a rank child.
Removal of a child $c$ of a parent $p$ means the following: in all cases the parent pointer of $c$ is set to \hbox{\sl null\/}, and $c$ is removed from the children list of $p$ and added to the list of heap tree roots. If $c$ was a rank child, the rank of $p$ is decremented\csname @footnotemark\endcsname.
To link two solid nodes means comparing their keys; let node $s$ be the one with the smaller key and $h$ the other. If $h$ has no parent, it is simply removed from its sibling list; otherwise removal of the child $h$ from its parent is invoked\csname @footnotemark\endcsname. Node $h$ is added as a solid (therefore leftmost) child of $s$\csname @footnotemark\endcsname, marking $h$ a rank child if the nodes had equal rank and a nonrank child otherwise. If a rank child was added, the rank of $s$ should be incremented and the violation type of $h$ set to $N$\csname @footnotemark\endcsname.
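The linking of two solid nodes can be sketched as follows, again assuming a simplified hypothetical `Node` with plain lists; the rank-decrement and violation-type side effects described above are reduced to comments:

```python
class Node:
    def __init__(self, key, rank=0):
        self.key = key
        self.rank = rank
        self.parent = None
        self.children = []      # leftmost child first
        self.rank_child = False

def link(a, b):
    """Link two solid nodes; the one with the smaller key becomes parent."""
    s, h = (a, b) if a.key < b.key else (b, a)
    if h.parent is not None:
        h.parent.children.remove(h)   # simplified removal of h from parent
    equal_rank = (s.rank == h.rank)
    h.parent = s
    s.children.insert(0, h)           # solid child goes leftmost
    h.rank_child = equal_rank         # rank child iff the ranks were equal
    if equal_rank:
        s.rank += 1                   # rank link increments s's rank
        # real code would also set h's violation type to N here
    return s
```

The equal-key case is deliberately omitted; as discussed earlier, keys are assumed distinct (with id tie-breaking if needed).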
\begin{table}
\begin{center}
\caption{Effect of public methods to violations}
\label{tab:pubmeth}
\begin{tabular}{lrrrrr}
Method & $\Phi_G$ & $\Phi_A$ & $\Phi_L$ & $\Phi$ & $p$\\
\hline
\hline
{\bf Insert\/} & $\le +4$ & $\le +12$ & $0$ & $\le +16$ & $\le 5$\\
\ + {\bf FindMin\/}\ phase 0 & $0$ & $+6$ & $0$ & $+6$ & $1$\\
\ + {\bf FindMin\/}\ phase 2 & $\le +4$ & $\le +6$ & $0$ & $\le +10$ & $\le 4$\\
\hline
{\bf FindMin\/} & $0$ & $0$ & $0$ & $0$ & $0$ \\
\hline
{\bf DeleteMin\/} \\
\ + heap size decrement & $0$ & $\le 24$ & $0$ & $\le 24$ & $\le 7$ \\
\ + $\rho$ removal from heap nodes & $0$ & $0$ & $0$ & $0$ & $\le 3$ \\
\ + $\rho$ type to $N$ & $\le 0$ & $\le 0$ & $\le 0$ & $\le 0$ & $\le 1$ \\
\ + {\bf FindMin\/}\ phase 0 & $\le 12R(2n)$ & $\le 6R(n)$ & $0$ & $\le 12R(2n)$& $< 6R(2n)$ \\
\ + & $+20$ & & & $+2R(n)+20$ & $+10$\\
\ + {\bf FindMin\/}\ phase 2 & $\le 4R(n)$ & $\le 6R(n)$ & $0$ & $\le 10R(n)$& $\le 4R(n)$ \\
\hline
{\bf DecreaseKey\/} & $\le +8$ & $\le +13$ & $\le +12$ & $\le +28$ & $\le 8$ \\
\ + child removal & $\le +4$ & $\le +1$ & $\le +12$ & $\le +12$ & $\le 2$ \\
\ + {\bf FindMin\/}\ phase 0 & $0$ & $\le +6$ & $\le 0$ & $\le +6$ & $\le 2$\\
\ + {\bf FindMin\/}\ phase 2 & $\le +4$ & $\le +6$ & $0$ & $\le +10$ & $\le 4$\\
\hline
{\bf Meld\/}\ ($h_H$, $\Phi_{h_S}\leftarrow 0$) & $\le +8$ & $\le +6$ & $0$ & $\le +14$ & $\le 5$\\
\ + {\bf FindMin\/}\ phase 0& $+4$ & $0$ & $0$ & $+4$ & $1$\\
\ + {\bf FindMin\/}\ phase 2& $\le +4$ & $\le +6$ & $0$ & $\le +10$ & $\le 4$\\
\hline
\end{tabular}
\end{center}
\vskip 4pt
Here $p$ again denotes the number of pointer changes not reflected in the heap trees when arrays are used for caches
(including the heap node list pointer changes).
{\bf DeleteMin\/}\ requires at most $12R(2n)+12R(n)+44\le 14.4\log_2(2n)+14.4\log_2(n)+188\le 29\log_2n+203$ cache size reductions in the amortized sense. It generates at most $42R(2n)+40R(n)+153\le 50.4\log_2(2n)+48\log_2 n+645<99\log_2 n+696$ pointer change overhead. Let $\Phi^0$ be the potential before and $\Phi^E$ after {\bf DeleteMin\/}; we should take the difference into account as well. But $\Phi^0\le 18R(n)+18$ and $\Phi^E\ge 0$. So we have a worst-case bound of $12R(2n)+30R(n)+62$ cache reductions. This generates at most $42R(2n)+94R(n)+207\le 50.4\log_2(2n)+112.8\log_2 n+699<153\log_2 n+750$ pointer change overhead in the worst case. \end{table}
{\bf MakeHeap\/}\ initializes the heap structure.
{\bf Insert\/}($k$) creates a new solid node $x$ with violation type $N$, key $k$, rank 0, no parent, and no child, pointing to the heap. It increments the heap size and the reference count in the heap information, without side effects. It adds $x$ as a new root to the list of heap tree roots and invokes {\bf FindMin\/}. {\bf Insert\/}\ returns $x$ for further references.
{\bf FindMin\/}\ traverses the nodes of the heap tree roots list and explicitly sets their parent pointers to \hbox{\sl null\/}. It converts implicitly deferred roots to explicitly deferred ones and (even newly) explicitly deferred roots to solid ones; the violation type of the newly solid roots is set to $G$. The violation type of a root which was already solid is checked to be either $A$ or $G$; if not, it is set to $A$. This finishes phase 0.
In the worst-case variant, the changes to the $\Phi$ coordinates are calculated and cache size reductions are performed whenever the corresponding $\Delta\Phi$ is positive and the cache is nonempty, until all coordinates with positive $\Delta\Phi$ have empty caches. In the amortized variant, $\Delta\Phi$ is not calculated at all and cache size reductions are performed until the caches are empty. This finishes phase 1.
Then {\bf FindMin\/}\ traverses the heap tree roots leftwise, linking two neighbouring roots, interlaced with steps to the left in the circular list (to link the roots as evenly as possible). We finish when only one tree remains. Its root points to the minimum, and it will be returned. $\Delta\Phi$ is updated during phase 2 (of the worst-case variant) as well, and the reduction of cache sizes is repeated in phase 3, which is the last phase of {\bf FindMin\/}.
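Phase 2 can be approximated by repeatedly linking neighbouring roots in passes. This sketch balances the links evenly in levels rather than walking the actual circular list, so it only illustrates the "as evenly as possible" intent; `link` is passed in as a parameter:

```python
def consolidate(roots, link):
    """Link neighbouring roots pairwise until a single root remains."""
    assert roots
    while len(roots) > 1:
        nxt = [link(roots[i], roots[i + 1])
               for i in range(0, len(roots) - 1, 2)]
        if len(roots) % 2:          # odd root carried to the next pass
            nxt.append(roots[-1])
        roots = nxt
    return roots[0]
```

With any `link` that returns the winner of a comparison, the surviving root is the minimum of the original roots.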
{\bf DeleteMin\/}\ implements only the amortized variant, which has a guaranteed worst-case time of $O(\log n)$, so no maintenance of the $\Delta\Phi$ coordinates is needed. It decrements the size\csname @footnotemark\endcsname\ in the heap information. Let $\rho$ be the only tree root. It updates the pointer to the list of roots to point to the leftmost child of $\rho$. It removes $\rho$ from the list of heap nodes and sets the violation type of $\rho$ to $N$\csname @footnotemark\endcsname. At the end it calls {\bf FindMin\/}\ and discards $\rho$.
{\bf DecreaseKey\/}($x$, $k$) removes $x$ from its parent $p$\csname @footnotemark\endcsname\ if such a parent exists. Then in all cases it updates the key at node $x$ to $k$. It invokes {\bf FindMin\/}\ at the end.
{\bf Meld\/}($h_1$, $h_2$) identifies the smaller heap $h_S$ and the larger one $h_H$ by comparing the size infos in the heap informations (a call with a heap of size info $<0$ is invalid). It appends the list of $h_S$ nodes to the start of the list of $h_H$ nodes (and sets the corresponding pointer at $h_S$ to \hbox{\sl null\/}). As the positions of $h_S$'s nodes in the new list remain the same, but the heap size at least doubles, $c_2\log_2(2n-p)$ increases by at least $c_2>1$, so we gain a reserve of 1 in the degree bounds; hence we can make a solid node of $h_S$ with loss 0 a deferred node of $h_H$ without violating the degree constraint bounds (for other nodes of $h_S$ this is even more obvious). It stores the sum of the sizes in the $h_H$ heap information and changes the size in $h_S$ to $-1$, which makes all $h_S$ nodes implicitly deferred. It appends the list of tree roots of $h_S$ to the front of the list of tree roots of $h_H$ (and sets the pointers to \hbox{\sl null\/}\ in $h_S$). The same rank identifying places and the caches of $h_S$ are discarded, so $h_S$ retains only the negative size info and the reference count, allowing it to be discarded once no node remains implicitly deferred through it. Finally it invokes {\bf FindMin\/}\ and returns $h_H$ as the current heap.
We can see the effect of the public methods on the $\Phi$ coordinates and the pointer overhead in Table \ref{tab:pubmeth}. Together with the reduction of caches, we get at most $28$ cache size reductions and $8$ additional pointer changes for the amortized version of methods other than {\bf DeleteMin\/}. With the cache reductions this makes the pointer overhead at most $92$ per such method. With the worst-case version the upper bound is $48$ cache size reductions and $8$ additional pointer changes, so an overhead of at most $152$ pointer changes (and constant time) is guaranteed.
There is an alternative: not to calculate the $\Delta\Phi$ coordinates during worst-case methods, but to use their upper bounds instead and plan the cache reductions according to those bounds. If it is sufficient to guarantee $O(\log n)$ worst-case bounds, a simpler strategy is to keep the caches $|GC|$, $|AC|$ empty and to do two $|LC|$ cache reductions after each {\bf DecreaseKey\/}.
\section{Simplification when {\bf Meld\/}\ is not needed} There is no need for pointers to the heap, and no need to maintain the heap size and the heap reference count. As deferred nodes are created only by the {\bf Meld\/}\ method, there are no deferred nodes in the heap at all. Therefore all nonrank nodes are rank roots. Their number is limited, through their maintenance in the violation lists, to $2R(n)+2$. This makes the degrees bounded by $O(\log n)$; degree reduction is impossible, and it is not needed at all. All nodes implicitly have a degree reserve, so there is no need to maintain rank roots with two different violation types, and one violation type, say $A$, suffices. The global node list used to organize degree reductions is not needed either. So the only support needed are the two violation types $A$ and $L$ with their same rank identifying places and caches. As there are no deferred nodes, we would prefer to insert nonrank nodes at the right end of the children lists for aesthetic reasons.
\begin{table}
\begin{center}
\caption{Effect of different transformations $|AR|+2|AC|+3|LR|+4|LC|=\Phi$}
\label{tab:effNoDeffered}
\begin{tabular}{lrrrrrr}
Reduction & $|AR|$ & $|AC|$ & $|LR|$ & $|LC|$ & $\Phi$ & $p$\\
\hline
\hline
$|AC|$ type $\not=A$& $0$ & $-1$ & $0$ & $0$ & $-2$ & $0$ \\
\hline
$|AC|$ type $A$ no match& $+1$ & $-1$ & $0$ & $0$ & $-1$ & $1$ \\
\hline
$|AC|$ type $A$ matched& $-1$ & $0$ & $0$ & $0$ & $\le -1$ & $2$\\
\hline
$|LC|$ type $\not=L*$ & $0$ & $0$ & $0$ & $-1$& $-4$ & $0$ \\
\hline
$|LC|$ subtype $L'$ & $\le 0$ & $\le +2$ & $\le 0$ & $\le 0$ & $\le -1$ & $\le 3$\\
\ - parent A in $AR$& $-1$ & $+2$ & $0$ & $\le -2$ & $\le -5$ & $3$ \\
\ - parent A in $AC$& $0$ & $+1$ & $0$ & $\le -2$ & $\le -6$ & $1$ \\
\ - parent L in $LR$& $0$ & $+1$ & $-1$ & $\le 0$ & $\le -1$ & $3$ \\
\ - parent L* in $LC$& $0$ & $+1$ & $0$ & $\le -1$ & $\le -2$ & $1$ \\
\ - parent N& $0$ & $+1$ & $0$ & $\le -1$ & $\le -2$ & $2$ \\
\hline
$|LC|$ subtype $L$ no match & $0$ & $0$ & $+1$ & $-1$& $-1$ & $1$ \\
\hline
$|LC|$ subtype $L$ matched& $\le 0$ & $\le +1$ & $\le 0$ & $\le +1$ & $\le -2$ & $\le 3$ \\
\ - parent of $h$ A in $AR$& $-1$ & $+1$ & $-1$ & $-1$ & $-6$ & $3$ \\
\ - parent of $h$ A in $AC$& $0$ & $0$ & $-1$ & $-1$ & $-7$ & $2$ \\
\ - parent of $h$ L in $LR$& $0$ & $0$ &$-2$ & $+1$ & $-2$ & $3$ \\
\ - parent of $h$ L* in $LC$& $0$ & $0$ & $-1$ & $0$ & $-3$ & $1$ \\
\ - parent of $h$ N& $0$ & $0$ & $-1$ & $0$ & $-3$ & $2$ \\
\hline
\end{tabular}
\end{center}
\vskip 4pt
Here $p$ denotes the number of pointer changes not reflected in the heap trees during a reduction when arrays are used for caches.
We can see that each cache reduction decrements $\Phi$ by at least 1. \end{table}
The violation reduction steps simplify as shown in Table \ref{tab:effNoDeffered}. $\Phi$ simplifies to $|AR|+2|AC|+3|LR|+4|LC|$ with coordinates $\Phi_A=|AR|+2|AC|$ and $\Phi_L=3|LR|+4|LC|$. In the amortized case $\Delta\Phi$ can pay for the violation reductions; the analysis for the worst case shows that there can be at most $\Delta^1 \Phi_A-\Delta^E \Phi_A+\Delta^1 \Phi_L+3\le \max(-1,\Delta^0 \Phi_A)+\max(-3,\Delta^0 \Phi_L)+4$ reduction steps in total. The maximal degree is $2R(n)+1$.
\begin{table}
\begin{center}
\caption{Effect of private blocks to violations}
\label{tab:privblockNoMeld}
\begin{tabular}{lrrrr}
Method & $\Phi_A$ & $\Phi_L$ & $\Phi$ & $p$\\
\hline
\hline
set violation type $A$& $\le +2$ & $\le 0$ & $\le +2$ & $\le 2$ \\
\ | from $A$ & $\le +1$ & $0$ & $\le +1$ & $\le 2$ \\
\ | from $L*$ & $+2$ & $\le 0$ & $\le +2$ & $\le 2$ \\
\ | from $N$ & $+2$ & $0$ & $+2$ & $1$ \\
set violation type $L*$& $\le 0$ & $\le +5$ & $\le +5$ & $\le 2$ \\
\ | from $A$ & $\le 0$ & $+4$ & $\le +4$ & $\le 2$ \\
\ | from $L*$ & $0$ & $\le +5$ & $\le +5$ & $\le 2$ \\
\ | from $N$ & $0$ & $+4$ & $+4$ & $1$ \\
set violation type $N$ & $\le 0$ & $\le 0$ & $\le 0$ & $\le 1$ \\
\ | from $A$ & $\le 0$ & $0$ & $\le 0$ & $\le 1$ \\
\ | from $L*$ & $0$ & $\le 0$ & $\le 0$ & $\le 1$ \\
\ | from $N$ & $0$ & $0$ & $0$ & $0$ \\
rank decrement& $\le +1$ & $\le +5$ & $\le +5$ & $\le 2$ \\
\ | $A$ & $\le +1$ & $0$ & $\le +1$ & $\le 2$ \\
\ | $N$ & $0$ & $+4$ & $+4$ & $\le 1$ \\
\ | $L$ & $0$ & $\le +5$ & $\le +5$ & $\le 2$ \\
\ | $L'$ & $0$ & $0$ & $0$ & $0$ \\
add a solid child& $\le +1$ & $0$ & $\le +1$ & $\le 2$ \\
\ | $A$ & $\le +1$ & $0$ & $\le +1$ & $\le 2$ \\
\ | $N$ or $L*$ & $0$ & $0$ & $0$ & $0$ \\
child removal& $\le +1$ & $\le +5$ & $\le +5$ & $\le 2$ \\
link & $\le +2$ & $\le +5$ & $\le +6$ & $\le 5$ \\
\ + $h$ removal from $p$ & $\le +1$ & $\le +5$ & $\le +5$ & $\le 2$ \\
\ + add $h$ as child to $s$ & $\le +1$ & $0$ & $\le +1$ & $\le 2$ \\
\ + $h$ type to $N$ & $\le 0$ & $\le 0$ & $\le 0$ & $\le 1$ \\
link of rank roots& $\le +1$ & $\le 0$ & $\le +1$ & $\le 3$ \\
\hline
\end{tabular}
\end{center}
\vskip 4pt
Here $p$ again denotes the number of pointer changes not reflected in the heap trees when arrays are used for caches
(including the heap node list pointer changes). \end{table}
\begin{table}
\begin{center}
\caption{Effect of public methods to violations}
\label{tab:pubmethNoMeld}
\begin{tabular}{lrrrr}
Method & $\Phi_A$ & $\Phi_L$ & $\Phi$ & $p$\\
\hline
\hline
{\bf Insert\/} & $\le +3$ & $0$ & $\le +3$ & $\le 3$\\
\ + {\bf FindMin\/}\ phase 0 & $+2$ & $0$ & $+2$ & $1$\\
\ + {\bf FindMin\/}\ phase 2 & $\le +1$ & $0$ & $\le +1$ & $\le 2$\\
\hline
{\bf FindMin\/} & $0$ & $0$ & $0$ & $0$ \\
\hline
{\bf DeleteMin\/} \\
\ + $\rho$ type to $N$ & $\le 0$ & $\le 0$ & $\le 0$ & $\le 1$ \\
\ + {\bf FindMin\/}\ phase 0 & $\le 2R(n)$ & $0$ & $\le 2R(n)$& $\le R(n)$ \\
\ + {\bf FindMin\/}\ phase 2 & $0$ & $0$ & $0$& $0$ \\
\hline
{\bf DecreaseKey\/} & $\le +3$ & $\le +5$ & $\le +8$ & $\le 5$ \\
\ + child removal & $\le +1$ & $\le +5$ & $\le +5$ & $\le 2$ \\
\ + {\bf FindMin\/}\ phase 0 & $\le +2$ & $0$ & $\le +2$ & $\le 1$\\
\ + {\bf FindMin\/}\ phase 2 & $\le +1$ & $0$ & $\le +1$ & $\le 2$\\
\hline
\end{tabular}
\end{center}
\vskip 4pt
Here $p$ again denotes the number of pointer changes not reflected in the heap trees when arrays are used for caches
(including the heap node list pointer changes).
{\bf DeleteMin\/}\ requires at most $2R(n)\le 2.4\log_2(n)+12$ cache size reductions in the amortized sense.
It generates at most $7R(n)+1\le 8.4\log_2 n+43$ pointer overhead.
Let $\Phi^0$ be the potential before and $\Phi^E$ after {\bf DeleteMin\/}; we should take the difference into account as well. But $\Phi^0\le 4R(n)+4$ and $\Phi^E\ge 0$. So we have a worst-case bound of $6R(n)+4$ cache reductions. This generates at most $19R(n)+13\le 22.8\log_2 n+127$ pointer change overhead in the worst case.
\end{table}
We can see the effect of the public methods on the $\Phi$ coordinates and the pointer overhead in Table \ref{tab:pubmethNoMeld}. Together with the reduction of caches, we get at most $8$ cache size reductions and $5$ additional pointer changes for the amortized version of methods other than {\bf DeleteMin\/}. With the cache reductions this makes the pointer overhead at most $29$ per such method. With the worst-case version the upper bound is $12$ cache size reductions and $5$ additional pointer changes, so an overhead of at most $41$ pointer changes (and constant time) is guaranteed.
\section{Concluding remarks} We have not discussed the problem of generating ids to make heap keys unique. Usually incrementing a global counter (with a much bigger range than the available memory) is sufficient. Garbage collection can help to avoid incrementing the counter too often when the heap size stays almost constant. In usage where the heap size oscillates between small and big sizes, the garbage should be discarded to keep the structure size proportional to the represented set. In such a scenario, when the pool of counter values is about to be exhausted, the nodes of all heaps can be traversed and an ordered temporary set of the used ids constructed. The ids can then be replaced by their order in the set. This overhead can be distributed over a long enough sequence of operations.
\section{Summary} We have shown that a variant of worst-case heaps which does not lose information by repeated linking of heap nodes under the heap roots can be implemented, and that the overhead of the heaps can be kept within reasonable bounds. Especially for heaps not requiring the {\bf Meld\/}\ operation, the overhead is small. The overhead is probably smaller than in Fibonacci heaps, as we do not discard the information from the same rank identifying places at the end of {\bf FindMin\/}.
For the worst-case interface of {\bf DecreaseKey\/}\ heaps (without {\bf Meld\/}), these are, to our current knowledge, the fastest and simplest heaps published so far.
\end{document}
"id": "1911.11637.tex",
"language_detection_score": 0.8266817927360535,
"char_num_after_normalized": null,
"contain_at_least_two_stop_words": null,
"ellipsis_line_ratio": null,
"idx": null,
"lines_start_with_bullet_point_ratio": null,
"mean_length_of_alpha_words": null,
"non_alphabetical_char_ratio": null,
"path": null,
"symbols_to_words_ratio": null,
"uppercase_word_ratio": null,
"type": null,
"book_name": null,
"mimetype": null,
"page_index": null,
"page_path": null,
"page_title": null
} | arXiv/math_arXiv_v0.2.jsonl | null | null |
\begin{document}
\title{Langlands program for $p$-adic coefficients and the petites camarades conjecture} \author{Tomoyuki Abe} \date{} \maketitle
\begin{abstract}
In this paper, we prove that, if Deligne's ``petites camarades
conjecture'' holds, then a Langlands type correspondence holds also for
$p$-adic coefficients on a smooth curve over a finite field. We also
prove that any overconvergent $F$-isocrystal of rank less than or equal
to $2$ on a smooth variety is $\iota$-mixed. \end{abstract}
\section{Introduction} Let $p$ be a prime number, and $k$ be a finite field with $q=p^s$ elements. Let $X$ be a smooth proper geometrically connected curve over $\mr{Spec}(k)$ with function field $\mc{K}$. First, we put: \begin{center}
\begin{tabular}{cp{14cm}}
$\mc{A}_r$:& The set of isomorphism classes of irreducible cuspidal
automorphic representations $\pi$ of $\mr{GL}_r(\mb{A}_{\mc{K}})$,
where $\mb{A}_{\mc{K}}$ denotes the ring of adeles of $\mc{K}$, such
that the order of the central character of $\pi$ is finite.\\
\end{tabular} \end{center} Let $U$ be an open dense subscheme of $X$. Let $K$ be a finite extension of $K_0:=W(k)\otimes\mb{Q}$. We fix an algebraic closure $\overline{\mb{Q}}_p$ of $K_0$. For an overconvergent $F$-isocrystal $E$ of rank $r$ on a dense open subset $U$ of $X/K$ (see \S \ref{overconv} for the definition), we say that it is {\em of finite determinant} if there exists a positive integer $n$ such that the $n$-th tensor power of $\det(E):=\bigwedge^rE$ is the trivial overconvergent $F$-isocrystal. We say that $E$ is {\em absolutely irreducible} if for any finite extension $L$ of $K$ in $\overline{\mb{Q}}_p$, the isocrystal $E\otimes_KL$ on $U/L$ is irreducible. We put: \begin{center}
\begin{tabular}{cp{13cm}}
$\mc{I}_r(U/K)$:& The set of isomorphism classes of absolutely
irreducible overconvergent $F$-isocrystals of rank $r$ on $U/K$ which
  are of finite determinant.\\
\end{tabular} \end{center} For any finite extension $L$ of $K$, the scalar extension induces a map $\mc{I}_r(U/K)\rightarrow\mc{I}_r(U/L)$, and we put $\mc{I}_r(U):=\mathop{\underrightarrow{\mathrm{lim}}}_{K/K_0}\mc{I}_r(U/K)$ where $K$ runs over the finite extensions of $K_0$ in $\overline{\mb{Q}}_p$. When we have two open subschemes $V\subset U$ of $X$, the restriction induces a map $\mc{I}_r(U)\rightarrow\mc{I}_r(V)$ by Lemma \ref{irreducible} below. Using this map, we define a set by \begin{equation*}
\mc{I}_r:=\mathop{\underrightarrow{\mathrm{lim}}}_{U\subset X}\mc{I}_r(U). \end{equation*}
In this paper, we fix two things: an isomorphism $\iota\colon\overline{\mb{Q}}_p\cong\mb{C}$, and a root $\pi$ of the equation $x^{p-1}+p=0$ or equivalently a non-trivial additive character $\mb{F}_p\rightarrow\overline{\mb{Q}}_p^\times$ (cf.\ \cite[2.4.1]{AM}). We conjecture the following:
\begin{conj*}[Langlands program for $p$-adic coefficients]
There are two maps $\pi_{\bullet}$ and $E_{\bullet}$ as follows:
\begin{enumerate}
\item There exists a {\em unique} map
$\pi_{\bullet}\colon\mc{I}_r\rightarrow\mc{A}_r$ such that for
$E\in\mc{I}_r$, the sets of unramified places of $E$ and
$\pi_E$ coincide, which we denote by $U$, and for any $x\in
|U|$, the eigenvalues of the Frobenius of $E$ at $x$ and the
Hecke eigenvalues of $\pi_E$ at $x$ coincide.
\item There exists a {\em unique} map
$E_{\bullet}\colon\mc{A}_r\rightarrow\mc{I}_r$ such that for
$\pi\in\mc{A}_r$, the sets of unramified places of $\pi$ and
$E_\pi$ coincide, which we denote by $U$, and for any
$x\in |U|$, the eigenvalues of the Frobenius of $E_\pi$ at $x$
and the Hecke eigenvalues of $\pi$ at $x$ coincide.
\end{enumerate}
Moreover, these maps induce a one-to-one correspondence, namely
$E_{\bullet}\circ\pi_{\bullet}=\mr{id}_{\mc{I}}$, and
$\pi_{\bullet}\circ E_{\bullet}=\mr{id}_{\mc{A}}$. \end{conj*}
For short, we call this Conjecture (L). In the celebrated paper \cite[1.2.10 (vi)]{WeilII}, Deligne conjectured the existence of ``{\it petites camarades}'' of smooth $\ell$-adic sheaves. The conjecture was stated in a vague way, and was later formulated in a clear form by Crew \cite[4.13]{CrMono} as follows:
\begin{conj*}[petites camarades conjecture for curves]
Let $U$ be a smooth geometrically connected curve over $k$, and
$\ms{F}$ be an irreducible smooth $\overline{\mb{Q}}_\ell$-sheaf on $U$
whose determinant is defined by a
finite order character of $\pi_1(U)$. Then there exists a number field
$E$ such that, for any $x\in|U|$, the characteristic polynomial of the
action of the geometric Frobenius on $\ms{F}_{\overline{x}}$ has
coefficients in $E$, and for any place $\ms{P}$ of $E$ dividing $p$,
there is an overconvergent $F$-isocrystal $\ms{E}$ on $U/E_{\ms{P}}$ such that
the characteristic polynomials of the Frobenius actions at $x$ on
$\ms{F}$ and $\ms{E}$ coincide for any $x\in|U|$. \end{conj*} For short, we call this Conjecture (D). Using the Langlands correspondence proven in the $\ell$-adic case \cite[VI.9]{La}, if Conjecture (L) holds, then Conjecture (D) holds, and moreover we can take the overconvergent $F$-isocrystal in Conjecture (D) to be irreducible. The main theorem of this paper is the following:
\begin{thm*}
Conjecture {\normalfont (L)} and {\normalfont (D)} are equivalent. \end{thm*}
The proof of this theorem has been made possible thanks to the product formula for $p$-adic epsilon factors proven in \cite{AM}, and we use the standard argument to deduce the theorem from the product formula (cf.\ \cite[9.7]{Delconst} for the $\mr{GL}_2$ case, and \cite[3.2.2.3]{Lau} for the $\mr{GL}_n$ case).
In the last part of this paper, we prove that any overconvergent $F$-isocrystal of rank less than or equal to $2$ on a smooth variety is $\iota$-mixed, which can be seen as one of the results derivable from Conjecture (L).
\section{Overconvergent $F$-isocrystals} \label{overconv} For the Langlands correspondence, we need to consider overconvergent $F$-isocrystals on $U/K$ where $U$ is an open dense subscheme of $X$ and $K$ is a finite extension of $K_0$. When $K$ is not totally ramified over $K_0$, this concept is not defined {\it a priori}, so we briefly review the concept in this section although it might be trivial for experts.
Let $Y$ be a scheme of finite type over $k$. We consider the $s$-th Frobenius automorphism $F$ on $k$, which is the identity since the number of elements of $k$ is $p^s$. We denote by $F\mbox{-}\mr{Isoc}^\dag(Y/K_0)$ the category of overconvergent $F$-isocrystals on $Y/K_0$ (cf.\ \cite{Be}). We define the category of overconvergent $F$-isocrystals on $Y/K$ denoted by $F\mbox{-}\mr{Isoc}^\dag(Y/K)$ to be $(F\mbox{-}\mr{Isoc}^\dag(Y/K_0))_{K}$ using the notation of \cite[7.3]{AM} ({\it i.e.}\ the category of couples $(E_0,\lambda)$ such that $E_0\in\mr{Ob}\bigl(F\mbox{-}\mr{Isoc}^\dag(Y/K_0)\bigr)$ and $\lambda\colon K\rightarrow\mr{End}(E_0)$, which is called the {\em $K$-structure of $E$}, is a homomorphism of $K_0$-algebras). If $K$ is a totally ramified extension of $K_0$, the category $F\mbox{-}\mr{Isoc}^\dag(Y/K)$ is nothing but that defined in \cite{Be}. We see easily that the category $F\mbox{-}\mr{Isoc}^\dag(\mr{Spec}(k)/K)$ is equivalent to the category of finite dimensional $K$-vector spaces $V$ endowed with an automorphism of $K$-vector spaces $V\xrightarrow{\sim}V$. Let $x$ be a closed point of $Y$, and $i_x\colon \{x\}\hookrightarrow Y$ be the closed immersion. Then using the equivalence, $i_x^*E$ is a finite dimensional $K$-vector space endowed with an automorphism, and we are able to consider the characteristic polynomial of the Frobenius action of $E$ at $x$.
For an object $E=(E_0,\lambda)$ of $F\mbox{-}\mr{Isoc}^\dag(Y/K)$, we note that $\mr{rk}(E_0)$ is divisible by $[K:K_0]$, and we define the rank of $E$ to be $\mr{rk}(E):=[K:K_0]^{-1}\cdot\mr{rk}(E_0)$. As written in \cite[7.3.5]{AM}, we automatically have rigid cohomology $H^r_{\mr{rig}}(Y,E)$ which is a {\em $K$-vector space with Frobenius structure} and so on.
Now, let $x\in X\setminus U$. Let $K_x$ be the unramified extension of $K_0$ corresponding to the extension $k(x)/k$. An overconvergent
$F$-isocrystal $E_0$ on $U/K_0$ induces a differential module on the Robba ring $\mc{R}_{K_x}$ over $K_x$ with Frobenius structure. We denote this by $E_0|_{\eta_x}$. Given an overconvergent $F$-isocrystal
$E=(E_0,\lambda)$ on $U/K$, we put $E|_{\eta_x}$ to be the differential module with Frobenius structure $E_0|_{\eta_x}$ endowed with
$K$-structure. We see that $\mr{irr}(E_0|_{\eta_x})$ is divisible by $[K:K_0]$, and we define the irregularity of $E$ at $x$ to be
$\mr{irr}(E|_{\eta_x}):=[K:K_0]^{-1}\cdot\mr{irr}(E_0|_{\eta_x})$.
Now, we prove the following irreducibility result\footnote{ The proof written here was suggested by N. Tsuzuki. One can refer to \cite{AC} for more general results on the irreducibility. }:
\begin{lem}
\label{irreducible}
Let $W$ be a smooth curve over $k$, and $V\subset U\subset W$ be dense
open subschemes of $W$, and let $j\colon V\hookrightarrow U$ be the
open immersion. Then the functor $j^\dag\colon
F\mbox{-}\mr{Isoc}^\dag(U,W/K)\rightarrow
F\mbox{-}\mr{Isoc}^\dag(V,W/K)$ preserves irreducibility. \end{lem} \begin{proof}
In the following argument, all the modules are implicitly equipped with
$K$-structure. Let $E$ be an irreducible object in
$F\mbox{-}\mr{Isoc}^\dag(U,W/K)$. Assume there exists a non-zero
overconvergent $F$-isocrystal $E'_V$ and a surjection $j^\dag
E\twoheadrightarrow E'_V$. Then we can find a finite covering $f\colon
W'\rightarrow W$ such that the following diagram is commutative
\begin{equation*}
\xymatrix{
V'\ar[r]^{j'}\ar[d]_{f_V}\ar@{}[rd]|\square&U'\ar[d]^f\ar[r]
\ar@{}[dr]|\square&W'\ar[d]^{f_W}
\\V\ar[r]_j&U\ar[r]&W
}
\end{equation*}
where $f_V$ is finite \'{e}tale and $f_V^*E'_V$ is log-extendable to
$U'$ by \cite{Kedcss}. Since $f_V^*E\twoheadrightarrow f_V^*E'_V$,
$f_V^*E'_V$ extends to an overconvergent $F$-isocrystal $E'_{U'}$ on
$U'$ which is a quotient of $f^*E$. Let $\ms{W}$ and $\ms{W}'$ be
smooth formal liftings of $W$ and $W'$ respectively over
$\mr{Spf}(W(k))$. Let $Z:=W\setminus U$ and
$Z':=W'\setminus U'$. We have a coherent $\DdagQ{\ms{W}}(^\dag
Z)$-module $\mr{sp}_*(E)$ and a coherent $\DdagQ{\ms{W'}}(^\dag
Z')$-module $\mr{sp}_*(E'_{U'})$. Since $f_W$ is proper, we have a
functor $f_+\colon D_{\mr{coh}}^b(\DdagQ{\ms{W'}}(^\dag
Z'))\rightarrow D_{\mr{coh}}^b(\DdagQ{\ms{W}}(^\dag Z))$ (see
\cite[3.4.3]{Be2} for the construction of the functor in the case where
$f_W$ is not liftable). Then we get a homomorphism
\begin{equation*}
\phi\colon\ms{H}^0(f_+f^!\mr{sp}_*(E))\rightarrow
\ms{H}^0(f_+\mr{sp}_*(E'_{U'})).
\end{equation*}
By using the trace homomorphism (see \cite[4.15]{Abe}), $\mr{sp}_*(E)$
can be seen as a submodule of $\ms{H}^0(f_+f^!\mr{sp}_*(E))$. We define
$E':=\mr{sp}^*(\phi(\mr{sp}_*(E)))$. Since $j^\dag E'\cong E'_V$, $E'$
is not equal to $E$ if $E'_V$ is not equal to $j^\dag E$. Since $E$ is
assumed to be irreducible, this shows that $E'_V=j^\dag E$, which
concludes the proof. \end{proof}
\begin{rem1}
Lemma \ref{irreducible} does not hold if we replace $\mr{Isoc}^\dag$
by $\mr{Isoc}$. For example, let $f\colon
Y\rightarrow A:=\mb{P}^1\setminus\{0,1,\infty\}$ be the Legendre family
defined by $y^2=x(x-1)(x-\lambda)$. Consider the $F$-isocrystal
$E:=R^1f_{\mr{rig}*}(Y/A)$ of rank $2$. Let $S\subset A$ be the
finite set of closed points $s$ such that $f^{-1}(s)$ is a
supersingular elliptic curve, and we put $U:=A\setminus S$. Then the
convergent $F$-isocrystal $E|_U$ is not irreducible as written in
\cite[4.15]{Crrep}. However, $E$ is irreducible since all the
subquotients of $E|_U$ are not overconvergent along $S$ as written in
{\it ibid.} The same example shows that the functor
$F\mbox{-}\mr{Isoc}^\dag(U,A/K)\rightarrow F\mbox{-}\mr{Isoc}(U/K)$
does not preserve irreducibility. \end{rem1}
\section{\v{C}ebotarev density} \label{Cebotarev} A technical point in the proof of the theorem is to show the uniqueness of the maps. In the $\ell$-adic case, this was a consequence of the \v{C}ebotarev density theorem. Since in general we are not able to describe overconvergent $F$-isocrystals as representations of $\mr{Gal}(\overline{\mc{K}}/\mc{K})$, we need a different method to show this. We will show the following proposition\footnote{After a large part of this paper had been written, A.\ P\'{a}l pointed out to the author that the following proposition had been generalized by him and U.\ Hartl. However, since we do not know the precise statement of their theorem, we decided to include the proposition.}: \begin{prop*}
\label{p-adicCebo}
Assume $E$ and $E'$ are {\em $\iota$-mixed} (cf.\
{\normalfont\cite[10.4]{Cr}} for
the definition) overconvergent $F$-isocrystals on an open dense
subscheme $U/K$ of $X$ whose characteristic polynomials of the action
of the Frobenius at any closed point of $U$ coincide. Then the
semi-simplifications of $E$ and $E'$ coincide. \end{prop*}
Before proving the proposition, we recall some notation in \cite[10.4, 10.10]{Cr}. Let $M:=E^{\mr{ss}}\oplus E'^{\mr{ss}}$ where ${}^{\mr{ss}}$ denotes the semi-simplification, and take a closed point $x_0$ in $U$. Associated to this overconvergent $F$-isocrystal, we have a short exact sequence: \begin{equation*}
0\rightarrow\mr{DGal}(M,x_0)\rightarrow W^{M}_{x_0}\rightarrow
W(\overline{k}/k)\rightarrow0. \end{equation*} Here, $\mr{DGal}(M,x_0)$ denotes the differential Galois group of $M$ (see \cite[2.1]{CrMono}), $W^M_{x_0}$ denotes the Weil group of $M$ (see [{\it ibid.}, 5.1]), and $W(\overline{k}/k)\cong\mb{Z}$. We denote by $G^0_{\mb{C}}$ and $G_{\mb{C}}$ the extensions of scalars of $\mr{DGal}(M,x_0)$ and $W^M_{x_0}$, respectively, along $K\rightarrow\mb{C}$ using the isomorphism $\iota$. Since $M$ is assumed to be $\iota$-mixed, there is a subgroup $G_{\mb{R}}\subset G_{\mb{C}}$ which projects onto $W(\overline{k}/k)$ and such that $G^0_{\mb{C}}\cap G_{\mb{R}}$ is a maximal compact subgroup of $G^0_{\mb{C}}$. For a group $G$, we denote by $G^{\natural}$ the set of conjugacy classes. We take an element $z$ of positive degree in the center of $G_{\mb{R}}$, which exists since $M$ is semi-simple. For more details, see \cite[10.10]{Cr}.
\begin{proof}
Let $\rho^{(\prime)}\colon W^M_{x_0}\rightarrow\mr{GL}
(V_{\rho^{(\prime)}})$ be a representation of $W^M_{x_0}$
corresponding to $(E^{(\prime)})^{\mr{ss}}$. These define
continuous complex representations of $G_{\mb{R}}$ denoted by
$\rho_{\mb{R}}$ and $\rho'_{\mb{R}}$. Since the category of linear
$\overline{\mb{Q}}_p$-representations of $W^M_{x_0}$
and that of continuous complex representations of $G_{\mb{R}}$ are
equivalent by exactly the same argument as \cite[2.2.8]{WeilII}, it
suffices to show that $\rho_{\mb{R}}\cong\rho'_{\mb{R}}$.
We endow $G_{\mb{R}}^{\natural}$ with the quotient topology induced by the
surjection $\alpha\colon G_{\mb{R}}\twoheadrightarrow
G_{\mb{R}}^{\natural}$. We denote by $G^{\natural}_i$ the subset of
$G^{\natural}_{\mb{R}}$ consisting of the elements of degree $i$. Note
that $G^{\natural}_i$ is both open and closed. Let $S$ be the subset of
elements $g$ of $G^{\natural}_{\mb{R}}$ such
that the characteristic polynomials $\det(1-g\cdot t;V_{\rho})$ and
$\det(1-g\cdot t;V_{\rho'})$ coincide. By the Brauer-Nesbitt theorem,
it suffices to show that
$S=G^{\natural}_{\mb{R}}$. Since, if the characteristic polynomials for
$g$ coincide, those of $g^{-1}$ coincide as
well, we have $S=S^{-1}$. Since the map
$\widetilde{\rho}^{(\prime)}\colon G\rightarrow\mb{C}[t]$ sending $g$ to
$\det(1-g\cdot t;V_{\rho^{(\prime)}})$ is continuous,
$S=(\widetilde{\rho}-\widetilde{\rho}')^{-1}(0)$ is closed in
$G^{\natural}_{\mb{R}}$. Let $A$ be the complement of $S$ in
$G^{\natural}_{\mb{R}}$, which is open, and {\em assume that it is not
empty}. Let $A_i:=A\cap G^{\natural}_i$, which is an open subset of
$G^{\natural}_{\mb{R}}$. Since $z$ is in the center of $G_{\mb{R}}$, the
characteristic polynomials of $g$ for an element $g\in G_{\mb{R}}$
coincide if and only if they coincide for $z^{n}\cdot g$ for some integer
$n$. This shows that the isomorphism $z\colon
G^{\natural}_{n}\xrightarrow{\sim} G^{\natural}_{n+d}$ where $d>0$
denotes the degree of $z$ induces the bijection
$A_{n}\xrightarrow{\sim}A_{n+d}$. Since we are assuming $A$ to be
non-empty, there exists a {\em positive} integer $n_0$ such that
$A_{n_0}$ is also non-empty.
Now, let $dg$ be the Haar measure on $G_{\mb{R}}$, and $\mu_0$ be the
product of $dg$ by the characteristic function of the elements of positive
degree. We denote by $\mu^{\natural}_0$ the direct image of $\mu_0$ on
$G^{\natural}_{\mb{R}}$. Since $A_{n_0}$ is open, we get
\begin{equation*}
\mu^{\natural}_0(A_{n_0})=dg(\alpha^{-1}(A_{n_0}))>0.
\end{equation*}
Thus the equidistribution theorem \cite[10.11]{Cr}\footnote{
Although missing in {\it ibid.}, we think that in this theorem we
need to assume, moreover, that $M$ is semi-simple.}
implies that there
exists a positive integer $n$ such that $\mu^{\natural}(z^n\cdot
A_{n_0})>0$. This shows that $z^{-n}\cdot \mr{Frob}_x^{n'}\in
A_{n_0}$ for some closed point $x$ and positive integer $n'$ with $-nd+\deg(x)\,n'=n_0$. As a consequence, we have
$\mr{Frob}_x^{n'}\in A$, which contradicts the assumption. \end{proof}
\section{$L$-factors and $\varepsilon$-factors} Before proving the main theorem, let us review the theory of local $L$-factors and $\varepsilon$-factors. Let $\psi$ be a non-trivial additive character of $\mb{A}_{\mc{K}}/\mc{K}$, which is equivalent to choosing a meromorphic differential form $\omega$ on $X$. For an overconvergent $F$-isocrystal $E$ on a dense open subset $U/K$ of $X$, we have the global $\varepsilon$-factor\footnote{ The definition is slightly different from \cite[7.2]{AM}.} defined by \begin{equation*}
\varepsilon(E,t):=\prod_{r\in\mb{Z}}\det(-F\cdot t;
H^{r}_{\mr{rig},c}(U,E))^{(-1)^{r+1}} \end{equation*} where $F$ denotes the Frobenius automorphism of the rigid cohomology with compact support. For each closed point $x$ of $X$, we have a local $\varepsilon$-factor defined as follows. We define the (Artin) conductor of $E$ at $x$ by \begin{equation*}
a_x(E_x):=
\begin{cases}
\mr{rk}(E)+\mr{irr}(E|_{\eta_x})&
\mbox{if $x$ is ramified,}\\
0&\mbox{if $x$ is unramified.}
\end{cases} \end{equation*} Then we define the local $\varepsilon$-factor at $x$ to be \begin{align*}
\varepsilon_x(E_x,t,\psi_x)&:=
\begin{cases}
\varepsilon^{\mr{rig}}_0(E|_{\eta_x},\omega_x)\cdot c_x(t)
&\mbox{if $x$ is ramified,}\\
\bigl(q^{\deg(x)\,v_x(\omega)\,\mr{rk}(E)}\cdot
\det_E(x)^{v_x(\omega)}\bigr)\cdot c_x(t)
&\mbox{if $x$ is unramified,}
\end{cases}\\
c_x(t)&:=q^{-\deg(x)\,v_x(\omega)\,\mr{rk}(E)/2}\cdot
t^{\deg(x)(\mr{rk}(E)\,v_x(\omega)+a_x(E))}, \end{align*} where $\varepsilon^{\mr{rig}}_0(\cdot)$ denotes the $\varepsilon$-factor defined by Marmora (we follow the notation of \cite[7.1.3]{AM}), $\det_E(x)$ denotes the determinant of the Frobenius action on $E$ at $x$ (see [{\it ibid.}, 7.2.6]), and $v_x(\omega)$ denotes the order of $\omega$ at $x$. The main theorem of \cite{AM} (Theorem 7.2.6 of {\it ibid.}) states that the following equality holds:
\label{prodform}\tag{PF}
\varepsilon(E,t)=\prod_{x\in|X|}\varepsilon_x(E_x,t,\psi_x). \end{equation}
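Let us note, as an immediate check from the displayed formulas (this verification is added here for the reader's convenience), that at a point $x$ which is unramified for $E$ and at which $\omega$ has neither zero nor pole, all the local ingredients trivialize:

```latex
% At an unramified point x with v_x(omega)=0, the definitions above give:
\begin{equation*}
a_x(E_x)=0,\qquad
c_x(t)=q^{0}\cdot t^{0}=1,\qquad
\varepsilon_x(E_x,t,\psi_x)
  =\bigl(q^{0}\cdot\det\nolimits_E(x)^{0}\bigr)\cdot c_x(t)=1.
\end{equation*}
```

In particular, all but finitely many factors on the right-hand side of (\ref{prodform}) are equal to $1$, so the product is well defined.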
\begin{rem*}
In \cite{AM}, only the case where the residue field of $K$ is equal to
$k$ is dealt with. However, we are able to prove the product formula in
the same way without assuming this,
or, more precisely, [{\it ibid.}, (7.4.3.1)]
holds. The details are left to the reader. We note that the
$\varepsilon$-factor in {\it ibid.}\ is a function. However, applying
the product formula to the canonical extension, we see that the
function is constant, and we define the $\varepsilon$-factor
$\varepsilon^{\mr{rig}}_0(E|_{\eta_x},\omega_x)$ above to be
this constant. \end{rem*} Let $\mc{K}_{\mr{loc}}$ be a local field, and $\pi$ and $\pi'$ be smooth admissible representations of $\mr{GL}_r(\mc{K}_{\mr{loc}})$ which are irreducible or of Whittaker type. Fix an additive character $\psi_{\mr{loc}}$ of $\mc{K}_{\mr{loc}}$. Then in \cite[Thm 2.7 and subsection 9.4]{JPS}, the $L$-factor and $\varepsilon$-factor for the pair $(\pi,\pi')$ are defined, and denoted by $L(\pi\times\pi',t)$ and
$\varepsilon(\pi\times\pi',t,\psi_{\mr{loc}})$ respectively. For automorphic representations $\pi=\bigotimes_{x\in|X|}\pi_x$ and
$\pi'=\bigotimes_{x\in|X|}\pi'_x$ such that $\pi_x$ and $\pi'_x$ are irreducible or of Whittaker type for any $x$, we denote by $L_x(\pi_x\times\pi'_x,t)$ (resp.\ $\varepsilon_x(\pi_x\times\pi'_x,t,\psi_x)$) the local $L$-factor (resp.\ $\varepsilon$-factor) of the couple $(\pi_x,\pi'_x)$ (resp.\ and the additive character $\psi_x$ induced by $\psi$) of representations of $\mr{GL}_r(\mc{K}_x)$ where $\mc{K}_x$ denotes the local field at the place $x$ of $\mc{K}$.
\section{Proof of the theorem} We prove the main theorem in this section. First, we note the following.
\begin{rem}
\label{remRPconj}
If the map $\pi_\bullet$ is constructed for some $r$, then the isocrystals
belonging to $\mc{I}_r$ are $\iota$-pure of weight $0$. This can be seen
from the generalized Ramanujan-Petersson conjecture proven by Lafforgue
\cite[VI.10 (i)]{La}. \end{rem}
Let us start proving the main theorem. First, let us see the uniqueness of the maps. The map $\pi_{\bullet}$, if it exists, is uniquely determined by the strong multiplicity one theorem \cite{Pia}. For the uniqueness of $E_{\bullet}$, assume we had two maps $E_{\bullet}$ and $E'_{\bullet}$. For $\pi\in\mc{A}_r$, $E_{\pi}$ and $E'_\pi$ are $\iota$-pure of weight $0$ by Remark \ref{remRPconj}, so we are in a position to apply the proposition of \S\ref{Cebotarev}, which implies that $E_\pi=E'_\pi$. Thus the uniqueness of $E_{\bullet}$ follows.
The rest of the argument is nothing but the ``{\it principe de r\'{e}currence}'' using the product formula (\ref{prodform}). Since all we need to do is copy the proof of \cite[VI.11]{La}, we only sketch the argument. We use induction on $r$. We claim the following.
\begin{cl}
\label{indDeligh}
Assume that the correspondence is established for any $r$ which is
strictly less than $r_0$, and the local $L$-factor and
$\varepsilon$-factor coincide {\em for any place of $X$} via this
correspondence. Then we have the map
$\pi_{\bullet}\colon\mc{I}_r\rightarrow\mc{A}_r$ in the sense of
Langlands for $r=r_0$. \end{cl}
\begin{proof}[Sketch of the proof]
Take an element $E$ of $\mc{I}_{r_0}$, and denote by $S$ the set of
points of $X$ at which $E$ is
ramified. For a point $x\not\in S$, we put $\pi_x$ to be the unramified
smooth admissible irreducible representation of
$\mr{GL}_{r_0}(\mc{K}_x)$ whose set of Hecke eigenvalues is that of
Frobenius eigenvalues of $E$ at $x$. For $x\in S$, we put $\pi_x$ to be
an irreducible representation of $\mr{GL}_{r_0}(\mc{K}_x)$ of Whittaker
type whose central character corresponds to $\det(E_x)$ via the reciprocity
map. We put $\pi:=\bigotimes_{x\in|X|}\pi_x$,
which is a smooth admissible irreducible representation of
$\mr{GL}_{r_0}(\mb{A}_{\mc{K}})$. First, we need to show that there is
an irreducible automorphic representation $\pi_E$ such that the local
factor of $\pi_E$ at any closed point $x\not\in S$ is equal to
$\pi_x$. For this, we
use the converse theorem \cite[B.13]{La} of Piatetski-Shapiro. Since
the argument is exactly the same as \cite{La} using the product formula
(\ref{prodform}), we omit the details. We only note here that, on the way
to checking the hypothesis of the converse theorem, we need to show that
for any closed point $x$ of $X$, the local factors coincide for certain
pairs. When $\pi$ is {\em unramified} at $x$, we use the hypothesis on
the coincidence of local factors in the statement of the claim to see
this coincidence at $x$. As a result, we have an automorphic
representation $\pi_E$ with the desired property, which is moreover
cuspidal if $S=\emptyset$.
Finally, to show the claim, it remains to show that the
automorphic representation is in fact cuspidal when
$S\neq\emptyset$. Assume that $\pi_{E}$ were not cuspidal. Then a
result of Langlands says that there exist a non-trivial partition
$r_0=r_1+\dots+r_k$ and cuspidal automorphic representations
$\pi^1,\dots,\pi^{k}$ of $\mr{GL}_{r_1},\dots,\mr{GL}_{r_k}$ which are
unramified outside of $S$, whose central characters are of finite order,
and such that the Hecke eigenvalues of $\pi$ at $x\not\in S$ are the disjoint
union of those of the $\pi^{i}$. By induction
hypothesis, we have the overconvergent $F$-isocrystal $E_{\pi^i}$ of rank
$r_i$. By construction, the Frobenius eigenvalues of $E$ and of the
semi-simple overconvergent $F$-isocrystal
$E':=\bigoplus_{i=1}^kE_{\pi^i}$ are the same at any point outside of
$S$. We know that $E'$ is $\iota$-pure of weight $0$. This shows that
$E$ is $\iota$-pure of weight $0$ as well, and by applying Proposition
\ref{p-adicCebo}, we have $E\cong E'$, which is a contradiction, and we
conclude the proof of the claim. \end{proof}
\begin{cl}
\label{coinLE}
Assume $\pi\in\mc{A}_r$ (resp.\ $\pi'\in\mc{A}_{r'}$) corresponds in the
sense of Langlands to $E\in\mc{I}_r$ (resp.\ $E'\in\mc{I}_{r'}$). Then
we get
\begin{equation*}
L_x(\pi_x\times\pi'_x,t)=L_x(E_x\otimes E'_x,t),\qquad
\varepsilon_x(\pi_x\times\pi'_x,t,\psi_x)=\varepsilon_x
(E_x\otimes E'_x,t,\psi_x),
\end{equation*}
for any $x\in|X|$. \end{cl} \begin{proof}
The proof is exactly the same as that of \cite[Prop VI.11
(ii)]{La}. The assumption is slightly milder than in {\it ibid.}\ since we
already know that the generalized Ramanujan-Petersson conjecture
[{\it ibid.}, VI.10 (i)] is true. In particular, $\pi_x$ is tempered for
any $x\in|X|$, and $E$ and $E'$ are $\iota$-pure of weight $0$. In the
proof, we need to replace [{\em ibid}., VI.5] by \cite{Ke} or
\cite{AC}. \end{proof}
Let us prove the theorem. Assume that the correspondence is established for $r$ strictly less than $r_0$. Then Claim \ref{coinLE} shows that the local $L$-factors and $\varepsilon$-factors coincide via the correspondence at any closed point of $X$. This enables us to apply Claim \ref{indDeligh}, which gives us the map $\pi_\bullet$ for $r=r_0$.
It remains to construct $E_{\bullet}$. Since we are assuming Conjecture (D), for $\pi\in\mc{A}_{r_0}$ with set of unramified places $U$, there exists an overconvergent $F$-isocrystal $E$ of rank $r_0$ whose set of Frobenius eigenvalues at $x\in U$ is equal to that of Hecke eigenvalues of $\pi$ at $x$. We need to prove that $E$ is irreducible. Assume $E$ were not irreducible, and write $E^{\mr{ss}}\cong\bigoplus E_i$ where the $E_i$ are irreducible. By Lemma \ref{twistdetfin}\footnote{ This does not cause circular reasoning.}, there exists a twist $\chi_i$ such that $E_i(\chi_i)$ is of finite determinant for each $i$. Take a prime $\ell\neq p$, and let $\ms{F}_i$ (resp.\ $\ms{F}$) be the irreducible $\ell$-adic sheaf corresponding to $\pi_{E_i(\chi_i)}$ (resp.\ $\pi$) in the sense of Langlands, using the induction hypothesis and the proven Langlands correspondence for $\ell$-adic sheaves. Then the sets of Frobenius eigenvalues of $\ms{F}$ and of $\bigoplus\ms{F}_i(\chi_i^{-1})$ (using the notation of Remark \ref{remtwist} (ii)) coincide at any $x\in U$. By the \v{C}ebotarev density theorem, this is not possible, which implies that $E$ is irreducible. We see from the Frobenius eigenvalues at each closed point in $U$ that $E$ is of finite determinant, and thus the theorem follows.
\section{Some consequences} Finally, let us see some consequences of the theorem.
A {\em twist} is an $F$-isocrystal of rank $1$ on $\mr{Spec}(k)$. Let $\chi$ be a twist. For an overconvergent $F$-isocrystal $E$, we denote by $E(\chi)$ the tensor product $E\otimes f^*(\chi)$ where $f\colon X\rightarrow\mr{Spec}(k)$ is the structural morphism.
\begin{lem}
\label{twistdetfin}
Let $X$ be a scheme of finite type over $k$.
(i) Let $E$ be an overconvergent $F$-isocrystal on $X/K$ of rank
$1$. Then there exists a twist $\chi$ and a positive integer $n$ such
that $E(\chi)^{\otimes n}$ is trivial.
(ii) For any overconvergent $F$-isocrystal $E$ on $X/K$, by taking an
extension of $K$ if necessary, there exists a twist $\chi$ such that
$E(\chi)$ is of finite determinant. \end{lem} \begin{proof}
Let us see (i). We may replace $k$ by its finite extension. Thus
we may assume that the residue field of $K$ is $k$ and there is a
uniformizer of $K$ fixed by the Frobenius automorphism of $K$. By the
definition of overconvergent $F$-isocrystals, we may
assume that $X$ is reduced. By \cite[2.1.11]{Be}, we may shrink $X$,
and in particular, we may assume that $X$ is smooth.
Since $E$ is of rank $1$, there exists a twist $\chi'$ such that
$E(\chi')$ is unit-root. Assume there exists a smooth compactification
$X\hookrightarrow\overline{X}$ such that the complement is a simple
normal crossing divisor. Then by the same argument as
\cite[4.13]{Crrep}, using \cite[Thm 4.3]{Sh} and \cite[Theorem 2]{KL}
instead of \cite[4.12]{Crrep} and class field theory, we get that
(i) holds in this case.
There exists a generically \'{e}tale alteration $f\colon Y\rightarrow
X$ such that $Y$ possesses a smooth compactification whose complement
is a simple normal crossing divisor. This shows that there exists an
integer $n'$ and a twist $\chi$ such that $f^*E(\chi)^{\otimes n'}$ is
trivial. There exist open dense subschemes $V$ of $Y$ and $U$ of $X$
such that $f_V\colon V\rightarrow U$ is finite \'{e}tale of degree
$d$. Let $G$ be an $F$-isocrystal on $U$ such that $f_V^*G$ is
trivial. Then $G^{\otimes d}$ is trivial. This implies that
$E(\chi)^{\otimes dn'}|_U$ is trivial, and by using \cite[2.1.11]{Be}
again, we conclude that $E(\chi)^{\otimes dn'}$ is trivial on $X$.
For (ii), we only need to note that
$\det(E(\chi))\cong\det(E)(\chi^{\otimes\mr{rk}(E)})$. \end{proof}
\begin{rem}
\label{remtwist}
(i) The lemma also holds when $k$ is the perfection of an absolutely
finitely generated field, since the theorem of \cite{KL} is still
applicable in this case.
(ii) Fixing a twist is equivalent to fixing an element of
$K^\times$. Fix an isomorphism $\iota'\colon\overline{\mb{Q}}_p\cong
\overline{\mb{Q}}_{\ell}$, and take a twist $\chi$ corresponding to
$b\in\overline{\mb{Q}}_p$. Given a $\overline{\mb{Q}}_\ell$-sheaf
$\ms{F}$, we have $\ms{F}^{\iota'(b)}$ using the notation of
\cite[1.2.7]{WeilII}. We sometimes denote this sheaf by
$\ms{F}(\chi)$. \end{rem}
We say that a scheme $X$ over $k$ is a {\em d-variety} if there exists a proper smooth formal scheme $\ms{P}$, a divisor $Z$ of the special fiber of $\ms{P}$, and an embedding $X\hookrightarrow\ms{P}$ such that $X=\overline{X}\setminus Z$ where $\overline{X}$ denotes the closure of $X$ in $\ms{P}$. We are able to consider overholonomic $F$-$\DdagQ{X}$-complexes as in \cite{AC}. We say that an overholonomic $F$-$\DdagQ{X}$-complex $\ms{C}$ is {\em $\iota$-mixed} if there exists a finite stratification $\{X_i\}$ by smooth d-varieties such that $\ms{H}^j\mb{R}\underline{\Gamma}^\dag_{X_i}(\ms{C})$ is the specialization of an $\iota$-mixed overconvergent $F$-isocrystal for any $i$ and $j$. See \cite{AC} for more details.
\begin{cor}
\label{ocisocimixed}
Assume Conjecture {\normalfont (D)} holds for any function field. Then
for any $d$-variety $X$ of finite type over $k$, any
overholonomic $F$-$\DdagQ{X}$-complex is $\iota$-mixed. \end{cor} \begin{proof}
By definition, it suffices to show that, for any {\em smooth} d-variety
$X$, any irreducible overconvergent $F$-isocrystal $E$ on $X$ is
$\iota$-pure. By Lemma \ref{twistdetfin} (ii), we may assume that $E$
is of finite determinant. For any closed point $x$ of $X$, there exists
a smooth curve $i\colon C\hookrightarrow X$ passing through $x$. It
suffices to see that $i^*E$ is $\iota$-pure of weight $0$. Using Remark
\ref{remRPconj} and Conjecture (L), any overconvergent $F$-isocrystal
with finite determinant on a smooth curve is $\iota$-pure of weight
$0$, which shows that $i^*E$ is $\iota$-pure of weight $0$ and
concludes the proof. \end{proof}
\begin{lem}
The Langlands correspondence is established for $r=1$. \end{lem} \begin{proof}
Let us construct $E_{\bullet}$. Using the reciprocity map, we have a
representation of $\pi_1(U)$ with finite monodromy at the boundary,
which induces a unit-root overconvergent $F$-isocrystal by
\cite[7.1.1]{Ts}. To construct $\pi_{\bullet}$, we note that objects in
$\mc{I}_1$ are unit-root since they are of finite order. Thus, using the result
of Tsuzuki and the reciprocity map, we conclude. \end{proof}
\begin{thm}
Any overconvergent $F$-isocrystal of rank less than or equal to $2$ on
a smooth variety is $\iota$-mixed. \end{thm} \begin{proof}
Arguing in the same way as in Corollary \ref{ocisocimixed}, we are reduced
to showing the existence of $\pi_\bullet$ for $r=1,2$.
The case $r=1$ has been established by the previous lemma. In this
case, the coincidence of local $\varepsilon$-factors for any
place of $X$ follows by definition. Thus applying Claim
\ref{indDeligh}, we have the map $\pi_\bullet$ for $r=2$, and the
theorem follows. \end{proof}
Tomoyuki Abe:\\ Institute for the Physics and Mathematics of the Universe (IPMU)\\ The University of Tokyo\\ 5-1-5 Kashiwanoha, Kashiwa, Chiba, 277-8583, Japan \\ e-mail: {\tt tomoyuki.abe@ipmu.jp}
\end{document}
"id": "1111.2479.tex",
"language_detection_score": 0.7602728009223938,
"char_num_after_normalized": null,
"contain_at_least_two_stop_words": null,
"ellipsis_line_ratio": null,
"idx": null,
"lines_start_with_bullet_point_ratio": null,
"mean_length_of_alpha_words": null,
"non_alphabetical_char_ratio": null,
"path": null,
"symbols_to_words_ratio": null,
"uppercase_word_ratio": null,
"type": null,
"book_name": null,
"mimetype": null,
"page_index": null,
"page_path": null,
"page_title": null
} | arXiv/math_arXiv_v0.2.jsonl | null | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.